

  1. DieCast: Testing Distributed Systems with an Accurate Scale Model. Diwaker Gupta, Kashi V. Vishwanath, Amin Vahdat. University of California, San Diego.

  2. Motivation: Alice builds a high-performance filesystem, but faces limited testing infrastructure and diverse deployment environments. Goal: use a much smaller test infrastructure to test a much larger system.

  3. Goals
  • Fidelity: how closely can we replicate the target system?
  • Reproducibility: can we do controlled experiments?
  • Efficiency: use fewer resources
  DieCast can scale up a test infrastructure by an order of magnitude.

  4. DieCast Overview
  ✓ Replicate the target system using fewer machines
  ✓ Resource equivalence: perceived CPU capacity, disk and network characteristics
  ✓ Preserve application performance
  ✗ Not scaled: physical memory (mitigating solutions) and secondary storage (cheap)

  5. Original System (diagram: web servers, application servers, and database servers behind a load balancer and switches). Goals tracked: fidelity, reproducibility, efficiency.

  6. Approach: server consolidation (VMs) plus network emulation. Goals tracked: fidelity, reproducibility, efficiency.

  7. Multiplexing Leads to Resource Partitioning
  A machine with a 3 GHz CPU, 1 Gbps network, 15 Mbps disk I/O, and 2 GB RAM, split equally among 5 VMs, leaves only ~600 MHz CPU, 200 Mbps network, 3 Mbps disk I/O, and 400 MB RAM for each VM.
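
  A minimal sketch of this partitioning arithmetic, using the slide's figures (the Resources class is purely illustrative, not part of DieCast):

```python
# Sketch: equal partitioning of one physical machine among N VMs.
from dataclasses import dataclass

@dataclass
class Resources:
    cpu_mhz: float
    net_mbps: float
    disk_mbps: float
    ram_mb: float

    def split(self, n_vms: int) -> "Resources":
        """Each VM perceives a 1/n share of every resource."""
        return Resources(self.cpu_mhz / n_vms, self.net_mbps / n_vms,
                         self.disk_mbps / n_vms, self.ram_mb / n_vms)

physical = Resources(cpu_mhz=3000, net_mbps=1000, disk_mbps=15, ram_mb=2048)
print(physical.split(5))
# Resources(cpu_mhz=600.0, net_mbps=200.0, disk_mbps=3.0, ram_mb=409.6)
```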

  8. Time Dilation [NSDI 2006]
  Key idea: time is also a resource!
  • Slow down the passage of time within the OS
  • CPU, network, disk all appear faster
  • Experiments take longer
  Time Dilation Factor (TDF) = real time / virtual time.
  Example: 10 Mb of data arrives over 1 sec of real time, so without dilation the perceived bandwidth is 10 Mb/s. At TDF = 1 sec / 100 ms = 10, the same 10 Mb appears to arrive within 100 ms of dilated time, so the perceived bandwidth is 100 Mb/s.
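
  The same arithmetic as a small sketch (numbers from the slide; the function name is illustrative):

```python
# Sketch: time dilation inflates perceived bandwidth. With
# TDF = real time / dilated time, an OS whose clock runs TDF times
# slower perceives the same bits arriving TDF times faster.

def perceived_bandwidth_mbps(megabits: float, real_seconds: float, tdf: float) -> float:
    dilated_seconds = real_seconds / tdf  # elapsed time as seen inside the dilated OS
    return megabits / dilated_seconds

print(perceived_bandwidth_mbps(10, 1.0, tdf=1))   # 10.0 Mb/s, no dilation
print(perceived_bandwidth_mbps(10, 1.0, tdf=10))  # 100.0 Mb/s at TDF 10
```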

  9. Multiplexing Under Time Dilation
  Physical machine: 3 GHz CPU, 1 Gbps network, 15 Mbps disk I/O, 2 GB RAM.
  Split among 5 VMs: ~600 MHz CPU, 200 Mbps network, 3 Mbps disk I/O, 400 MB RAM each.
  At TDF 5, each VM instead perceives ~3 GHz CPU, 1 Gbps network, 15 Mbps disk I/O(?), but still only 400 MB RAM.
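
  Combining the two sketches above (figures from the slide; the disk entry carries the slide's own question mark):

```python
# Sketch: dilation at TDF 5 restores each VM's perceived CPU and network
# capacity to that of the original machine; RAM does not scale.
TDF = 5
per_vm = {"cpu_mhz": 600, "net_mbps": 200, "disk_mbps": 3, "ram_mb": 400}

perceived = {k: (v if k == "ram_mb" else v * TDF) for k, v in per_vm.items()}
print(perceived)
# {'cpu_mhz': 3000, 'net_mbps': 1000, 'disk_mbps': 15, 'ram_mb': 400}
# Disk is the questionable case ("15 Mbps disk I/O?"): seek times and
# firmware behavior do not simply multiply by TDF, which is why the
# following slides add explicit disk I/O scaling.
```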

  10. Time Dilation: External Interactions (diagram: VMs in the dilated time frame communicate over the network with external systems running in the real time frame).

  11. Disk I/O Scaling
  • Invariant: perceived disk characteristics are preserved
    – Seek time
    – Read/write throughput
  • Issues
    – Low-level functionality in firmware
    – Different I/O models
    – Per-request scaling is difficult

  12. Implementation Details
  • Supported platforms
    – Xen 2.0.7, 3.0.4, 3.1
    – Can be ported to non-virtualized systems
  • Support for unmodified guest OSes
  • Disk I/O scaling for different I/O models
    – Fully virtualized: integration with DiskSim
    – Paravirtualized: scaling in the device driver

  13. Disk I/O Scaling: Fully Virtualized VMs (diagram: the guest OS in a VM, unaware that no real disk exists, issues requests through its filesystem and disk device driver; Xen routes them to ioemu's I/O emulation in Domain-0, which is backed by the VM disk image and by DiskSim's simulated disk, and the simulated disk determines request completion time in the VM).

  14. Disk I/O Scaling: Fully Virtualized VMs
  Let T_sim be the service time of the request in the simulated disk, T_disksim the real time DiskSim takes to run, and T_ioemu the actual time ioemu takes to service the request. The required perceived time is T_sim, so the total real time must be T_real = TDF * T_sim. Since T_real = T_ioemu + Delay + T_disksim, the delay to inject is
  Delay = (TDF * T_sim) - T_disksim - T_ioemu
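
  A minimal sketch of that delay computation (variable names mirror the slide; the function and example numbers are illustrative, not DieCast's actual code):

```python
# Sketch: delay injected so that a disk request completes, in the VM's
# dilated clock, exactly when DiskSim says it should.

def injected_delay(t_sim: float, t_disksim: float, t_ioemu: float, tdf: float) -> float:
    """All arguments in seconds of real time; t_sim is DiskSim's predicted service time."""
    t_real = tdf * t_sim  # total real time needed so the VM perceives t_sim
    return t_real - t_disksim - t_ioemu

# e.g. TDF 10, 5 ms predicted service time, 1 ms DiskSim run time, 0.5 ms in ioemu:
print(injected_delay(t_sim=0.005, t_disksim=0.001, t_ioemu=0.0005, tdf=10))  # 0.0485
```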

  15. Network I/O Scaling
  Invariant: perceived network characteristics (bandwidths and latencies) must be preserved. Target link: 10 Mb/s, 20 ms RTT.

      Configuration              Real configuration    Perceived configuration
      Original system (TDF 1)    10 Mb/s, 20 ms        10 Mb/s, 20 ms
      Time dilation (TDF 5)      10 Mb/s, 20 ms        50 Mb/s, 4 ms
      DieCast (TDF 5)            2 Mb/s, 100 ms        10 Mb/s, 20 ms

  Network emulation: ModelNet, Dummynet.
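
  The DieCast row falls out of the invariant directly; a sketch of the computation (the function name is illustrative; DieCast itself sets these values through ModelNet or Dummynet):

```python
# Sketch: to preserve a target link under dilation, the emulated (real)
# link gets 1/TDF the bandwidth and TDF times the latency, because the
# dilated OS sees time pass TDF times slower.

def emulated_link(target_bw_mbps: float, target_rtt_ms: float, tdf: float):
    return target_bw_mbps / tdf, target_rtt_ms * tdf

bw, rtt = emulated_link(10, 20, tdf=5)
print(f"{bw} Mb/s, {rtt} ms RTT")  # 2.0 Mb/s, 100 ms RTT -- the DieCast row
```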

  16. Recap
  • Multiplex VMs for efficiency
  • Time dilation to scale resources
  • Disk I/O scaling
  • Network I/O scaling
  Goals tracked: fidelity, reproducibility, efficiency.
  At this point, the scaled system almost looks like the original system!

  17. Validation
  • How well does DieCast-scaled performance match the original system?
    – Application-specific metrics
  • Can a smaller system be configured to match the resources of a larger system?
    – Resource utilization profiles
  • Applications: RUBiS, BitTorrent, Isaac
  • RUBiS
    – eBay-like e-commerce service
    – Ships with a workload generator

  18. RUBiS: Topology (diagram: 4 database servers, 8 web servers, and 16 workload generators connected across wide-area links).

  19. Experimental Setup
  Baseline configuration: 40 physical machines. DieCast-scaled configuration: 4 physical machines, 10 VMs each.
  • Xen 3.1, fully virtualized VMs
  • Debian Etch, Linux 2.6.17, 256 MB RAM
  • DiskSim emulating a Seagate ST3217
  • Network emulation using ModelNet

  20. RUBiS: Throughput (figure).

  21. RUBiS: Response Time (figure).

  22. RUBiS: Resource Usage (figures: CPU, memory, network).

  23. Validation Recap
  • Evaluated: RUBiS, BitTorrent, Isaac (many more details in the paper)
  • Demonstrated
    – Match application-specific metrics
    – Preserve resource utilization profiles

  24. Case study: Panasas
  • Panasas builds scalable storage systems for high-performance computing
    – http://www.panasas.com
  • Caters to a variety of clients
  • Difficult or even impossible to replicate the deployment environments of all clients
  • Limited resources for testing

  25. DieCast in Panasas
  • Storage cluster: custom OS, integrated hardware/software offering, not runnable on Xen, so DieCast was ported to this non-virtualized environment
  • Clients run Linux and can be virtualized
  • Dummynet for network scaling

  26. Panasas: Evaluation Summary
  Baseline vs. DieCast-scaled (1 physical machine, 10 VMs)
  • Validation
    – Two benchmarks from the standard test suite: IOZone, MPI-IO; varying block sizes
    – Match performance metrics
  • Scaling: used 100 machines to scale to 1000 clients

  27. Limitations
  • Memory scaling
  • Long-running workloads
  • Specialized hardware appliances
  • Fine-grained timing

  28. Summary
  • DieCast: scalable testing
    – Fidelity, reproducibility, efficiency
  • Contributions
    – Support for unmodified operating systems
    – Disk I/O scaling (DiskSim integration)
    – CPU scheduler enhancements for time dilation
    – Comprehensive evaluation, including a commercial storage system

  29. Thanks! Questions? dgupta@cs.ucsd.edu
