
Re-think Data Management Software Design Upon the Arrival of Storage Hardware with Built-in Transparent Compression



  1. Re-think Data Management Software Design Upon the Arrival of Storage Hardware with Built-in Transparent Compression (07/2020)

  2. The Rise of Computational Storage: from homogeneous to heterogeneous computing. With the end of Moore's Law, each tier of the stack gains domain-specific compute: FPGA/GPU/TPU for compute, SmartNICs for networking (10 → 100-400Gb/s links), and computational storage for fast and big data growth.

  3. The Rise of Computational Storage: from processor-driven to data-driven. A Computational Storage Drive (CSD) pairs an FPGA with the flash controller and NAND inside each drive.
     - Conventional servers (CPU + DRAM + SSDs): CPU & memory I/O bottlenecks; limited FPGAs that require specific sockets; massive data movement; no compute parallelism.
     - CSD-based servers (CPU + DRAM + CSDs): balanced compute & storage I/O; multiple FPGAs, easily plugged in via storage; minimized data movement; maximum compute parallelism.

  4. Computational Storage: A Simple Idea. The end of Moore's Law drives heterogeneous computing (FPGA/GPU/TPU, SmartNICs); the low-hanging fruit on the hardware side of the SW/HW boundary is a CSD with data-path transparent compression: an FPGA sitting between the NAND flash control and the host driver performs in-line per-4KB zlib compression and decompression, invisibly to software.
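  The data path can be sketched in a few lines. This is a minimal Python model only, using the zlib library in place of the drive's hardware engine, which in the actual CSD runs in the FPGA beneath the standard block interface:

  ```python
  # Minimal sketch of a CSD-style data path with per-4KB transparent
  # compression: compress each logical block on the write path,
  # decompress on the read path, so the host always sees 4KB blocks.
  import zlib

  BLOCK_SIZE = 4096  # the compression unit: one logical block

  def write_block(data: bytes) -> bytes:
      """Compress a 4KB logical block before it reaches the flash."""
      assert len(data) == BLOCK_SIZE
      return zlib.compress(data)

  def read_block(stored: bytes) -> bytes:
      """Decompress on the read path; the host sees the original 4KB."""
      data = zlib.decompress(stored)
      assert len(data) == BLOCK_SIZE
      return data

  # A block holding compressible data occupies far less flash space.
  block = (b"some repetitive record data " * 200)[:BLOCK_SIZE]
  stored = write_block(block)
  print(f"logical 4096 B -> physical {len(stored)} B")
  assert read_block(stored) == block
  ```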

  5. ScaleFlux Computational Storage Drive: a single FPGA combines compute and SSD functions that otherwise require multiple discrete components (CPU + FPGA + SSD).
     - Complete, validated solution: hardware, software, and firmware
     - Pre-programmed FPGA: no FPGA knowledge or coding required
     - Field upgradeable
     - Standard U.2 & AIC form factors

  6. CSD 2000: Data Path Compression/Decompression. FIO random read/write IOPS relative to a Vendor-A NVMe baseline (100%), with 2.5:1 compressible data, 8 jobs, queue depth 32, at steady state after preconditioning:
     - 4K random: ~170% on the 70/30 R/W mix, ~220% on 100% writes
     - 8K random: ~200% on the 70/30 R/W mix, ~220% on 100% writes
     - 16K random: ~230% on the 70/30 R/W mix, ~220% on 100% writes
     The mixed R/W performance advantage increases with larger block sizes.

  7. Open a Door for System-level Innovations. Data-path compression decouples logical storage space utilization efficiency from physical storage space utilization efficiency, so OS/applications can purposely waste logical storage space to gain performance benefits. With an expanded LBA space (e.g., 32TB) exposed over smaller NAND flash (e.g., 4TB), transparent compression squeezes out the zeros: it becomes unnecessary to completely fill each 4KB sector with user data, and unnecessary to use all the LBAs.
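  A small sketch of why wasted logical space is physically cheap: a 4KB sector only partially filled with data and padded with zeros compresses down to roughly the payload size. Exact numbers vary by data and compressor; zlib again stands in for the drive's hardware engine:

  ```python
  # Under per-4KB transparent compression, zero padding inside a
  # partially filled sector is nearly free in physical flash space.
  import os
  import zlib

  BLOCK_SIZE = 4096
  payload = os.urandom(1024)  # 1KB of incompressible user data

  full = os.urandom(BLOCK_SIZE)                             # sector fully used
  padded = payload + b"\x00" * (BLOCK_SIZE - len(payload))  # 1KB data + 3KB zeros

  print(len(zlib.compress(full)))    # ~4KB: dense random data does not shrink
  print(len(zlib.compress(padded)))  # ~1KB: the 3KB of zeros collapse away
  ```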

  8. PostgreSQL: the fillfactor (FF) setting reserves part of each 8KB page for future updates; lowering FF (e.g., FF=50) trades logical storage space for performance. The reserved space is zero-filled, so data-path compression reclaims it physically. Measured normalized 8KB/page performance: the SFX NVMe drive at FF=50 reaches roughly 150% of a commodity NVMe drive at FF=100, plotted against physical storage usage (300GB / 600GB / 1,200GB).
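  The trade amounts to one line of arithmetic: lowering fillfactor multiplies the logical footprint by 100/FF, while the zero-filled reserve adds almost nothing to the compressed physical footprint. A back-of-envelope model; the 2.5:1 ratio below is an assumed compressibility, not a measurement from this slide:

  ```python
  # Back-of-envelope model of fillfactor under transparent compression.
  # Assumes the reserved page space is zero-filled and compresses to ~0.
  def footprint(data_gb, fillfactor, comp_ratio):
      logical = data_gb * 100 / fillfactor  # each 8KB page is only FF% data
      physical = data_gb / comp_ratio       # the zeroed reserve is ~free
      return logical, physical

  # 300GB of table data at an assumed 2.5:1 compressibility, FF=50:
  print(footprint(300, 50, 2.5))  # (600.0, 120.0) GB: logical doubles,
                                  # physical stays at the compressed size
  ```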

  9. PostgreSQL (Sysbench-TPCC), normalized TPS versus Vendor-A, on a 740GB and a 1.4TB dataset (logical sizes at FF=100). Lowering fillfactor to 75 inflates the logical size, but compression keeps the CSD 2000's physical footprint small:

     Dataset  Fillfactor  Drive     Logical (GB)  Physical (GB)  Comp. Ratio
     740GB    100         Vendor-A  740           740            1.00
     740GB    100         CSD 2000  740           178            4.12
     740GB    75          Vendor-A  905           905            1.00
     740GB    75          CSD 2000  905           189            4.75
     1.4TB    100         Vendor-A  1,433         1,433          1.00
     1.4TB    100         CSD 2000  1,433         342            4.19
     1.4TB    75          Vendor-A  1,762         1,762          1.00
     1.4TB    75          CSD 2000  1,762         365            4.82

     [Charts: normalized TPS, FF=75 vs FF=100, Vendor-A vs CSD 2000, for both datasets]

  10. Table-less Hash-based Key-Value Store. Conventionally, a hash function f: K → T maps the key space into an in-memory hash table whose entries point at KV pairs tightly packed in the LBA space L. Instead, hash keys directly into the LBA space (f: K → L): KV pairs sit loosely packed, roughly one per 4KB sector, with the unoccupied space left zeroed. The KV store purposely under-utilizes logical storage space to eliminate the hash table without sacrificing physical storage utilization; a minimal sketch follows.
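  A minimal sketch of this idea, assuming a sparse file stands in for the LBA space and a few rehash probes resolve collisions. The slot layout, probe policy, and file name here are illustrative choices, not the presented system's actual design:

  ```python
  # Table-less hash KV store sketch: keys hash directly to a 4KB slot in
  # a huge, mostly empty logical space; the drive's transparent
  # compression (modeled here by filesystem sparseness) reclaims the
  # deliberately unused space.
  import hashlib
  import struct

  SLOT = 4096
  N_SLOTS = 1 << 20  # 4GB of logical space, purposely under-utilized

  class TablelessKV:
      def __init__(self, path):
          self.f = open(path, "w+b")
          self.f.truncate(SLOT * N_SLOTS)  # sparse file: zeros cost nothing

      def _offset(self, key: bytes, probe: int) -> int:
          h = hashlib.blake2b(key + probe.to_bytes(4, "little")).digest()
          return (int.from_bytes(h[:8], "little") % N_SLOTS) * SLOT

      def _read_record(self):
          klen, vlen = struct.unpack("<HH", self.f.read(4))
          if klen == 0:
              return None  # an all-zero slot is empty
          data = self.f.read(klen + vlen)
          return data[:klen], data[klen:]

      def put(self, key: bytes, value: bytes) -> None:
          record = struct.pack("<HH", len(key), len(value)) + key + value
          assert len(record) <= SLOT
          for probe in range(8):  # a few rehash probes handle collisions
              off = self._offset(key, probe)
              self.f.seek(off)
              found = self._read_record()
              if found is None or found[0] == key:
                  self.f.seek(off)
                  self.f.write(record.ljust(SLOT, b"\x00"))  # zero padding
                  return
          raise RuntimeError("all probe slots occupied")

      def get(self, key: bytes):
          for probe in range(8):
              self.f.seek(self._offset(key, probe))
              found = self._read_record()
              if found is not None and found[0] == key:
                  return found[1]
          return None

  kv = TablelessKV("kv.img")  # hypothetical backing file
  kv.put(b"user:42", b"alice")
  assert kv.get(b"user:42") == b"alice"
  ```

  No in-memory index is needed: the probe sequence alone locates a key, and on a drive with transparent compression the zero-filled remainder of every slot consumes almost no physical flash.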

  11. Table-less Hash-based Key-Value Store: why it pays off.
     - Simple code base & high operational concurrency
     - Very small memory footprint
     - No frequent background operations (e.g., GC and compaction), hence low and consistent CPU usage
     Compared with RocksDB: >2x ops/s improvement and >2x lower average CPU usage. We will open-source it and are looking for collaborators to grow the community together!

  12. Summary: transparent compression decouples logical storage space utilization efficiency from physical storage space utilization efficiency, opening unique opportunities to re-think data management software design. www.scaleflux.com / tong.zhang@scaleflux.com
