Workload Characterization of a Leadership Class Storage Cluster




  1. Workload Characterization of a Leadership Class Storage Cluster. Youngjae Kim, Raghul Gunasekaran, Galen M. Shipman, David A. Dillow, Zhe Zhang, Bradley W. Settlemyer. Technology Integration Group, National Center for Computational Sciences. Presented by Youngjae Kim.

  2. A Demanding Computational Environment
  • Jaguar XT5: 18,688 nodes, 224,256 cores, 300+ TB memory, 2.3 PFlops
  • Jaguar XT4: 7,832 nodes, 31,328 cores, 63 TB memory, 263 TFlops
  • Frost (SGI Ice): 128-node institutional cluster
  • Smoky: 80-node software development cluster
  • Lens: 30-node visualization and analysis cluster

  3. Spider: A Large-scale Storage System
  • Over 10.7 PB of RAID 6 formatted capacity
  • 13,400 x 1 TB HDDs
  • 192 Lustre I/O servers
  • Over 3 TB of memory (on the Lustre I/O servers)
  • Available to many compute systems through a high-speed InfiniBand network
    – Over 2,000 IB ports
    – Over 3 miles (5 kilometers) of cable
    – Over 26,000 client mounts for I/O
    – Peak I/O performance of 240 GB/s
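  A quick sanity check on the capacity figures, assuming the drives are organized in 8+2 RAID 6 tiers (an assumption; the tier layout is not stated on this slide): 13,400 x 1 TB ≈ 13.4 PB raw, and 13.4 PB x 8/10 ≈ 10.7 PB of formatted capacity, matching the first bullet.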

  4. Spider Architecture
  • Jaguar XT5 (SeaStar2+ 3D torus, 9.6 GB/s links), Jaguar XT4, and other systems (visualization, clusters) all reach the storage through the SION network.
  • Lustre router nodes run the parallel file system client software and forward I/O operations from HPC clients: 192 (XT5) and 48 (XT4) dual-core Opteron nodes with 8 GB of RAM each.
  • The SION network provides connectivity between OLCF resources and primarily carries storage traffic: a 3000+ port 16 Gbit/sec InfiniBand switch complex.
  • Storage nodes run the parallel file system software and manage incoming file system traffic: 192 dual quad-core Xeon servers with 16 GB of RAM each.
  • Enterprise storage controllers and large racks of disks are connected via InfiniBand: 48 DataDirect S2A9900 controller pairs (96 controllers) with 1 TB Serial ATA drives (3 Gbit/sec) and 4 InfiniBand connections per pair (2 active connections per controller).
  [Diagram bandwidth labels: 384 GB/s and 366 GB/s aggregate paths on the Jaguar XT5 side, 96 GB/s for Jaguar XT4]

  5. Outline
  • Background
  • Motivation
  • Workload Characterization
    – Data collection tool
    – Understanding workloads
      • Bandwidth requirements
      • Request size distribution
      • Correlating request size and bandwidth, etc.
    – Modeling I/O workloads
  • Summary and Future Work
    – Incorporating flash-based storage technology
    – Further investigating application-to-file-system behavior

  6. Monthly Peak Bandwidth
  • Measured monthly peak read and write bandwidth on 48 controllers (half our capacity)
  • Peak read bandwidth ~96 GB/s; peak write bandwidth ~68 GB/s
  [Figure: monthly peak read and write bandwidth (GB/s) over the observation period, Jan-2010 through Jun-2010]

  7. Snapshot of I/O Bandwidth Usage
  • Observed read and write bandwidth for a week in April
  • Data sampled every 2 seconds from 48 controllers (half our capacity)
  [Figure: read and write bandwidth (GB/s), Apr-20 through Apr-26]

  8. Motivation: Why Characterize I/O Workloads on Storage Clusters?
  • Research challenges and limitations
    – Understanding the I/O behavior of such a large-scale storage system is important.
    – A lack of understanding of I/O workloads leads to under- or over-provisioned systems, increasing installation and operational cost ($).
  • Storage system design cycle
    1. Requirements: understand I/O demands
    2. Design: architect and build the storage system
    3. Validation: operation and maintenance (performance efficiency, capacity utilization)
  • Goals
    – Understand the I/O demands of a large-scale production system
    – Synthesize the I/O workload to provide a useful tool for storage controller, network, and disk-subsystem designers

  9. Data Collection Tool
  • Monitoring tool
    – Monitors a variety of parameters from the back-end storage hardware (DDN1, DDN2, ..., DDN96)
    – Metrics: bandwidth (MB/s), IOPS
  • Design and implementation
    – Uses the DDN S2A9900 API for reading controller metrics
    – A custom utility tool* (DDNTool) runs on the management server
      • Periodically collects stats from all the controllers
      • Supports multiple sampling rates (2, 60, and 600 seconds)
    – Data is archived in a MySQL database.
  * Developed by Ross Miller et al. in the TechInt group, NCCS, ORNL
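  To make the collection path concrete, below is a minimal sketch of how such a poller could be structured. The slide shows no code, so everything here is illustrative: query_controller() stands in for the proprietary DDN S2A9900 API, and SQLite replaces the MySQL archive so the sketch stays self-contained.

```python
import random
import sqlite3
import time

CONTROLLERS = [f"DDN{i}" for i in range(1, 97)]  # DDN1 .. DDN96
SAMPLE_INTERVAL = 2  # seconds; DDNTool also supports 60 s and 600 s rates

def query_controller(name):
    """Hypothetical stand-in for the DDN S2A9900 API call.
    Returns (read_MBps, write_MBps, iops); the values are fabricated here
    so the sketch runs without the vendor library."""
    return random.uniform(0, 2700), random.uniform(0, 1600), random.uniform(0, 5000)

def collect_once(db):
    """Poll every controller once and archive the samples."""
    now = time.time()
    rows = [(now, name, *query_controller(name)) for name in CONTROLLERS]
    db.executemany(
        "INSERT INTO samples(ts, controller, read_mbps, write_mbps, iops) "
        "VALUES (?, ?, ?, ?, ?)", rows)
    db.commit()

def main():
    db = sqlite3.connect("spider_stats.db")
    db.execute("CREATE TABLE IF NOT EXISTS samples("
               "ts REAL, controller TEXT, read_mbps REAL, write_mbps REAL, iops REAL)")
    while True:
        collect_once(db)
        time.sleep(SAMPLE_INTERVAL)

if __name__ == "__main__":
    main()
```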

  10. Characterizing Workloads
  • Data collected from the RAID controllers
    – Bandwidth/IOPS (every 2 seconds)
    – Request size stats (every 1 minute)
    – Used data collected from January to June (around 6 months)
  • Workload characterization and modeling
    – Metrics
      • I/O bandwidth distribution
      • Read-to-write ratio
      • Request size distribution
      • Inter-arrival time
      • Idle time distribution
    – Used a curve-fitting technique to develop synthesized workloads

  11. Bandwidth Distribution
  • Peak bandwidth per controller: peak read bandwidth (up to 2.7 GB/s) >> peak write bandwidth (up to 1.6 GB/s)
  [Figure: maximum read and write bandwidth (MB/s) for each of the 48 controllers]
  • 95th and 99th percentile bandwidth: write bandwidth >> read bandwidth at both percentiles
  [Figure: 95th and 99th percentile read and write bandwidth (MB/s) for each of the 48 controllers]
  • Observations:
    1. Long-tail distribution of read and write bandwidth across all controllers
    2. Peak read bandwidth is much higher than peak write bandwidth, but the bulk of the bandwidth is higher for writes than for reads (e.g., at the 95th-99th percentiles)
    3. Variation in peak bandwidth across controllers
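  For reference, the per-controller peak and percentile statistics can be computed directly from the 2-second samples; the sketch below uses synthetic Pareto-like traces rather than the Spider data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-second bandwidth samples (MB/s) for 48 controllers; illustrative only.
samples = {f"controller-{i:02d}": rng.pareto(2.6, size=30_000) * 50 for i in range(1, 49)}

for name, bw in samples.items():
    p95, p99 = np.percentile(bw, [95, 99])
    print(f"{name}: peak={bw.max():8.1f} MB/s  95th={p95:7.1f}  99th={p99:7.1f}")
```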

  12. Aggregate Bandwidth
  • Peak aggregate bandwidth vs. the sum of the peak bandwidths of every controller
  [Figure: aggregate vs. individual-sum total bandwidth (GB/s) for the 95th, 99th, and 100th percentile read and write bandwidth; the aggregate falls roughly 20% short for writes and roughly 27-49% short for reads at the higher percentiles]
  • Observations:
    1. The peak bandwidths of every controller are unlikely to occur at the same time.
    2. Read peaks are even less likely to coincide than write peaks at the 99th and 100th percentiles of bandwidth.
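  A tiny illustration of the effect behind observation 1, using synthetic per-controller traces rather than the Spider measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 2-second bandwidth traces (MB/s) for 48 controllers; illustrative only.
traces = rng.pareto(2.0, size=(48, 10_000)) * 100

sum_of_peaks = traces.max(axis=1).sum()       # add up each controller's own peak
peak_of_aggregate = traces.sum(axis=0).max()  # peak of the summed (aggregate) trace

print(f"sum of per-controller peaks: {sum_of_peaks / 1024:.1f} GB/s")
print(f"peak aggregate bandwidth:    {peak_of_aggregate / 1024:.1f} GB/s")
# The aggregate peak is lower because the controllers rarely hit their individual
# peaks at the same instant, which is what the gaps on the slide show.
```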

  13. Modeling I/O Bandwidth Distribution
  • We observed that read and write bandwidth follow a long-tail distribution.
  • The Pareto model is one of the simplest long-tailed distribution models:
    F_X(x) = 1 - (x_m / x)^α for x ≥ x_m, and F_X(x) = 0 for x < x_m, where x_m is the minimum positive value of x.
  • Pareto model validation on a single controller:
  [Figure: observed vs. Pareto-model bandwidth CDFs (MB/s, log scale). Write: goodness-of-fit R² = 0.98, α = 1.24. Read: goodness-of-fit R² = 0.99, α = 2.6]
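  A rough illustration of this kind of validation (not necessarily the authors' exact procedure): estimate α by maximum likelihood and compare the empirical CDF against the Pareto CDF given above. The data here are synthetic, not the Spider measurements.

```python
import numpy as np

def pareto_cdf(x, x_m, alpha):
    """Pareto CDF as on the slide: F(x) = 1 - (x_m / x)**alpha for x >= x_m."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= x_m, 1.0 - (x_m / x) ** alpha, 0.0)

# Synthetic bandwidth samples (MB/s); in the study these would be the
# 2-second controller measurements, not generated data.
rng = np.random.default_rng(42)
bw = (rng.pareto(2.6, size=50_000) + 1.0) * 10.0    # true x_m = 10 MB/s, alpha = 2.6

x_m = bw.min()
alpha_hat = len(bw) / np.sum(np.log(bw / x_m))      # maximum-likelihood estimate of alpha

# Goodness of fit (R^2) between the empirical CDF and the model CDF.
xs = np.sort(bw)
ecdf = np.arange(1, len(xs) + 1) / len(xs)
model = pareto_cdf(xs, x_m, alpha_hat)
ss_res = np.sum((ecdf - model) ** 2)
ss_tot = np.sum((ecdf - ecdf.mean()) ** 2)
print(f"alpha = {alpha_hat:.2f}, R^2 = {1 - ss_res / ss_tot:.3f}")
```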

  14. Read to Write Ratio
  • Percentage of write requests per controller: 57.8% on average
  [Figure: write request percentage (%) for each of the 48 controllers]
  • 42.2% read requests → still significantly high!
    1. Spider is the center-wide shared file system.
    2. Spider supports an array of computational resources such as Jaguar XT5/XT4, visualization systems, and application development.

  15. Request Size Distribution
  [Figure: cumulative and probability distributions of read and write request sizes (<16K, 512K, 1M, 1.5M buckets); annotations: more than 50% of writes are small, about 20% of reads are small, and reads are about 2 times more than writes in the 25-30% region]
  • The majority of requests (>95%) are either <16 KB or between 512 KB and 1 MB.
    1. The Linux block layer clusters requests near the 512 KB boundary.
    2. Lustre tries to send 1 MB requests.

  16. Correlating Request Size and Bandwidth
  • Challenge: different sampling rates
    – Bandwidth sampled at 2-second intervals
    – Request size distribution sampled at 60-second intervals
  • Assumption: larger requests are more likely to lead to higher bandwidth.
  • Observed from 48 controllers: (read bandwidth, read request size) and (write bandwidth, write request size)
  [Figure: bandwidth (MB/s) vs. request size (KB) for reads and writes; peak bandwidth occurs at 1 MB large requests]
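  One simple way to reconcile the two sampling rates (a sketch under the stated assumption, not necessarily the authors' method) is to collapse the 2-second bandwidth samples onto the 60-second request-size grid and then compare the two series. The traces below are illustrative, not the Spider data.

```python
import numpy as np
import pandas as pd

# Illustrative traces; in the study these come from the controller database.
idx_2s = pd.date_range("2010-04-20", periods=30 * 60 // 2, freq="2s")
bw = pd.Series(np.random.default_rng(0).pareto(2.0, len(idx_2s)) * 100,
               index=idx_2s, name="bandwidth_mbps")

idx_60s = pd.date_range("2010-04-20", periods=30, freq="60s")
req_size = pd.Series(np.random.default_rng(1).choice([16, 512, 1024], len(idx_60s)),
                     index=idx_60s, name="avg_request_kb")

# Collapse the 2 s bandwidth samples onto the 60 s request-size grid,
# keeping the peak seen within each window, then join the two series.
bw_60s = bw.resample("60s").max()
joined = pd.concat([bw_60s, req_size], axis=1).dropna()
print(joined.corr())   # crude check of the "larger requests -> higher bandwidth" assumption
```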

  17. What about Flash in Storage?
  • Major observations from the workload characterization
    – Reads and writes are bursty.
    – Peak bandwidth occurs at 1 MB large requests.
    – More than 50% of write requests and about 20% of read requests are small.
  • What about flash?
    – Pros
      • Lower access latency (~0.5 ms)
      • Lower power consumption (~1 W)
      • High resilience to vibration and temperature
    – Cons
      • Lifetime constraint (10K~1M erase cycles)
      • Expensive
      • Performance variability
