


  1. Large-scale Virtualization in the Emulab Network Testbed. Mike Hibler, Robert Ricci, Leigh Stoller, Jonathon Duerig, Shashi Guruprasad, Tim Stack, Kirk Webb, Jay Lepreau.
     The Basic Idea: What’s Wrong? Physical testbeds are too small and inefficient. Solution? Virtualize and multiplex: use virtualization to perform network experiments using fewer physical resources ... and do this in a way that is transparent to applications and preserves experiment fidelity.
     Not Just Picking a VM Technology. Challenges: • Fidelity • Preserve network topology. Opportunities: • Closed world • Can re-run experiments. The result: a Complete Virtual Network Experimentation System.

  2. Full System: • Virtualization technology (host and network) • Resource mapping • Feedback-directed emulation • IP address assignment • Scalable control system • Routing table calculation.
     Start: FreeBSD jail. • Namespace isolation • Virtual disks. We added network virtualization: • the ability to bind to multiple interfaces • a new virtual network device (veth) • separate routing tables.
     What does it mean to make a good mapping?
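The network-virtualized jails described above can be approximated on a modern FreeBSD system with stock vnet jails and epair devices; this is a sketch in that spirit, not the paper’s custom veth driver, and the jail name and addresses are made up for illustration.

```shell
# Sketch: give a jail its own network stack and routing table using
# stock FreeBSD vnet jails and an epair interface pair (analogous in
# spirit to the paper's veth device; names/addresses are illustrative).

jail -c name=n1 vnet persist path=/           # jail with a private network stack
ifconfig epair0 create                        # creates the epair0a <-> epair0b pair
ifconfig epair0b vnet n1                      # move one end into the jail
ifconfig epair0a inet 10.0.0.1/24 up          # host side of the link
jexec n1 ifconfig epair0b inet 10.0.0.2/24 up # jail side of the link
jexec n1 route add default 10.0.0.1           # jail keeps its own routing table
```

Because each vnet jail has its own interfaces and routing table, many such virtual nodes can share one physical host while keeping separate network identities.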

  3. Good Mapping (assign): • Pack well: use resources efficiently, avoid scarce resources • Do it quickly: it is on the critical path for creating an experiment • Solves an NP-hard problem: pack both nodes and links, specifying packing criteria • Paper: [Ricci+:CCR03] • Based on simulated annealing • We extended it for virtual nodes.
     Resource-Based Packing (Assigning Quickly): • Use quantities we can directly measure • Resource-based system: “This virtual node uses 100 MHz of CPU”; “This physical node has 3 GHz of CPU” • Works well for heterogeneous virtual and physical nodes.
     (Plots: Small Topologies; Virtual Topologies.)
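The resource-based idea above can be sketched with a toy packer. This is an illustrative greedy first-fit sketch, not Emulab’s assign (which uses simulated annealing and also packs links); all node names and numbers are hypothetical.

```python
# Toy resource-based packing: virtual nodes declare CPU demand (MHz),
# physical nodes declare CPU capacity (MHz). Greedy first-fit-decreasing,
# NOT the simulated-annealing assign tool from the talk.

def pack(virtual_nodes, physical_nodes):
    """Map each virtual node onto a physical node with enough free CPU."""
    remaining = dict(physical_nodes)      # capacity left on each physical node
    mapping = {}
    # Place the most demanding virtual nodes first.
    for vnode, demand in sorted(virtual_nodes.items(), key=lambda kv: -kv[1]):
        for pnode, free in remaining.items():
            if free >= demand:
                mapping[vnode] = pnode
                remaining[pnode] = free - demand
                break
        else:
            raise RuntimeError(f"no room for {vnode}")
    return mapping

# "This virtual node uses 100 MHz" / "this physical node has 3 GHz":
vnodes = {f"v{i}": 100 for i in range(30)}   # thirty 100 MHz virtual nodes
pnodes = {"pc1": 3000}                       # one 3 GHz physical node
m = pack(vnodes, pnodes)
assert set(m.values()) == {"pc1"}            # all 30 fit on a single machine
```

Measuring demand and capacity in the same directly observable unit is what lets the packer handle heterogeneous virtual and physical nodes uniformly.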

  4. Prepass. (Plots: Scaling With Prepass; Mapping Quality.)
     How do I know how tightly I can pack my virtual nodes? I don’t! But Emulab is a closed, repeatable world.
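The slides only name the prepass and show that it improves scaling, so the exact rule below is an assumption for illustration: one common way a prepass shrinks a mapping problem is to coarsen the virtual graph, folding leaf nodes into their neighbors so the expensive solver sees fewer units.

```python
# Illustrative coarsening prepass (assumed rule: fold each degree-1
# virtual node into its sole neighbor). Not Emulab's actual prepass.

def coarsen(adj):
    """adj: {node: set(neighbors)}. Returns (coarse_adj, groups), where
    groups maps each surviving node to the set of originals it absorbed."""
    groups = {n: {n} for n in adj}
    coarse = {n: set(nbrs) for n, nbrs in adj.items()}
    for n in list(coarse):
        if len(coarse[n]) == 1:              # leaf node: one neighbor
            (parent,) = coarse[n]
            groups[parent] |= groups.pop(n)  # fold the leaf into its parent
            coarse[parent].discard(n)
            del coarse[n]
    return coarse, groups

# A star of one hub plus four leaves collapses to a single mapping unit:
star = {"hub": {"a", "b", "c", "d"},
        "a": {"hub"}, "b": {"hub"}, "c": {"hub"}, "d": {"hub"}}
coarse, groups = coarsen(star)
assert list(coarse) == ["hub"]
assert groups["hub"] == {"hub", "a", "b", "c", "d"}
```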

  5. The Plan: • Pick a packing • Run the experiment, possibly with a subset of the topology • Monitor for artifacts • If artifacts are found: re-pack • Repeat.
     Picking the Initial Packing: • Start one-to-one, or • Start tightly packed, optimistically assuming low usage.
     Monitoring for Artifacts: • CPU near 100% • Significant paging activity • Disk utilization.
     Re-Packing: • Measure resource use • Feed it into resource-based packing.
     Feedback in a Nutshell: • Rely on packing, not isolation • Discover packing factors empirically • Re-use results between experiments.
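The pick/run/monitor/re-pack loop above can be sketched as follows. The thresholds and the `run_experiment` callback are hypothetical stand-ins; a real system would feed measured resource use back into the packer, whereas this sketch just loosens the packing factor when artifacts appear.

```python
# Hedged sketch of the feedback loop: start tightly packed, monitor for
# the overload symptoms named on the slides, and re-pack until clean.

def has_artifacts(stats):
    """Overload symptoms from the slides: CPU near 100%,
    significant paging activity, saturated disk."""
    return (stats["cpu"] >= 0.95
            or stats["pages_per_sec"] > 100
            or stats["disk_util"] >= 0.95)

def feedback_loop(topology, run_experiment, max_rounds=5):
    packing_factor = 10                  # optimistic: pack tightly at first
    for _ in range(max_rounds):
        stats = run_experiment(topology, packing_factor)
        if not has_artifacts(stats):
            return packing_factor        # packing preserved fidelity
        packing_factor = max(1, packing_factor // 2)   # re-pack more loosely
    return packing_factor

# Simulated run where artifacts appear above a packing factor of 5:
def fake_run(topology, factor):
    overloaded = factor > 5
    return {"cpu": 0.99 if overloaded else 0.5,
            "pages_per_sec": 0, "disk_util": 0.2}

assert feedback_loop(None, fake_run) == 5
```

Because Emulab is a closed, repeatable world, the factor discovered by one run can be re-used for later instances of the same experiment.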

  6. Packing Factors: Feedback Case Study.
                                  Transactions/s   Response time (s)
       Bootstrap (74 physical)    2.29             0.43
       Round 1 (7 physical)       1.85             0.53
       Round 2 (7 physical)       2.29             0.43
     Deployed Use: • Creation time: 7 minutes for 100 nodes • 5,125 experiments • 296,621 virtual nodes • 32% of Emulab nodes virtual • 5.75 average packing factor • Average: 58 nodes, max: 1,000 nodes.
     Conclusion: Virtualization increases Emulab’s capacity, transparently, and preserves fidelity. It requires solving several challenging problems, and it has proven useful in production. www.emulab.net
     Related Work: • Virtual network emulation: ModelNet, DieCast • Virtual machines: Xen, VMWare, vservers, OpenVZ • Network virtualization: NetNS, OpenVZ, Trellis, IMUNES • Feedback-based mapping: Hippodrome.
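The deployment figures quoted on the slides can be sanity-checked with quick arithmetic (all numbers come from the slides themselves):

```python
# Arithmetic check on the slides' deployment numbers.

virtual_nodes = 296_621
experiments = 5_125
avg_per_experiment = round(virtual_nodes / experiments)
print(avg_per_experiment)        # 58, matching the "average: 58 nodes" claim

# The case study packed a topology that bootstrapped on 74 physical
# machines onto just 7, recovering the original 2.29 TPS by round 2:
consolidation = round(74 / 7, 1)
print(consolidation)             # 10.6x fewer physical machines
```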

  7. ModelNet: • Applications can only run on edge nodes: single-homed only • More basic virtualization • No artifact detection • No feedback system • Emulab has a richer control framework • Scales to much larger interior networks.
     Minimal Effective Virtualization: • Application transparency • Application fidelity • System capacity.
     Application Transparency: • Real applications • Virtual machines: keep most semantics of unshared machines • Simple processes • Preserve the experimenter’s topology • Full network virtualization.
     Application Fidelity: • Physical results → virtual results • Simulation results • Virtual node interference: perfect resource isolation, or detect artifacts and re-run.
     System Capacity: • Low overhead: in-kernel (vservers, jails) or hypervisor (VMWare, Xen) • Don’t prolong experiments: DieCast.
