Modeling and Emulation of Internet Paths


  1. Modeling and Emulation of Internet Paths. Pramod Sanaga, Jonathon Duerig, Robert Ricci, Jay Lepreau (University of Utah)

  2. Distributed Systems
     • How will you evaluate your distributed system?
       – DHT
       – P2P
       – Content distribution

  3. Network Emulation

  4. Why Emulation?
     • Test distributed systems
       + Repeatable
       + Real: PCs, applications, protocols
       + Controlled: dedicated nodes, network parameters
     • Link emulation

  5. Goal: Path Emulation
     (figure label: Emulated Internet)

  6. Link by Link Emulation
     (figure: each hop of the path emulated separately, with link capacities of 50 Mbps, 10 Mbps, 100 Mbps, and 50 Mbps)

  7. End to End Emulation
     (figure: the whole path emulated as a single hop characterized by its RTT and available bandwidth, ABW)

  8. Contributions
     • Principles for path emulation
       – Pick appropriate queue sizes
       – Separate capacity from ABW
       – Model reactivity as a function of flows
       – Model shared bottlenecks

  9. Obvious Solution
     (figure: a link emulator between the applications, configured with the measured ABW and RTT; each direction passes through a queue, a traffic shaper, and a delay stage)

  10. Obvious Solution (Good News)
      • Measured an actual path: 2.3 Mbps forward, 2.2 Mbps reverse (iperf)
      • Bandwidth accuracy of the emulation
        – 8.0% error on the forward path
        – 7.2% error on the reverse path

  11. Obvious Solution (Bad News)
      (figure: RTT in ms over time in s, for the real path vs. the obvious emulation)
      • Latency is an order of magnitude higher

  12. Obvious Solution (More Problems)
      • Measured an asymmetric path: 6.4 Mbps forward, 2.6 Mbps reverse (iperf)
      • Bandwidth accuracy
        – 50.6% error on the forward path! (achieved bandwidth is much smaller than on the real path)
        – 8.5% error on the reverse path

  13. What’s Happening?
      • TCP increases its congestion window until it sees a loss
      • There are no losses until the queue fills up
      • So the queue fills up, and delay grows until it is full
      • The large delays were queueing delays

  14. What’s Happening
      (figure: the emulator configuration; each direction has a 50-packet queue and 12 ms of delay, with shaping rates of 6.4 Mbps forward and 2.6 Mbps reverse)
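
A quick back-of-the-envelope check of where the delays on the next slide come from; the 1500-byte packet size is my assumption, since the slide only gives the queue length in packets.

    # How long does a full 50-packet queue take to drain at each shaping rate?
    # Assumes 1500-byte packets (not stated on the slide).
    QUEUE_PACKETS = 50
    PACKET_BYTES = 1500

    def full_queue_delay_ms(rate_mbps):
        queued_bits = QUEUE_PACKETS * PACKET_BYTES * 8
        return queued_bits / (rate_mbps * 1e6) * 1000

    print(full_queue_delay_ms(6.4))   # ~94 ms  (slide 15 reports 97 ms)
    print(full_queue_delay_ms(2.6))   # ~231 ms (slide 15 reports 239 ms)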

  15. Delays
      (figure: with that configuration the emulator adds about 97 ms of queueing delay in one direction and 239 ms in the other)
      • How do we make this more rigorous?

  16. Maximum Tolerable Queueing Delay
      • Queue size must be limited from above by delay
      • max tolerable queueing delay = (max window size / target bandwidth) - other RTT
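
A sketch of the reasoning behind this bound, in my notation: TCP throughput is limited to roughly one window per round trip, so to sustain the target bandwidth, the queueing delay added on top of the rest of the RTT can be at most:

    \[
    \frac{W_{\max}}{\mathit{RTT}_{\text{other}} + d_{\text{queue}}} \;\ge\; B_{\text{target}}
    \quad\Longrightarrow\quad
    d_{\text{queue}} \;\le\; \frac{W_{\max}}{B_{\text{target}}} - \mathit{RTT}_{\text{other}}
    \]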

  17. Upper Bound (Queue Size)
      • total queue size ≤ capacity × max tolerable queueing delay
      • The upper limit is proportional to capacity
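
Combining this with the delay bound from the previous slide (again in my notation), the queue can hold at most the amount of data the link drains during the tolerable delay:

    \[
    Q_{\max} \;=\; C \cdot d_{\text{queue,max}}
            \;=\; C \left( \frac{W_{\max}}{B_{\text{target}}} - \mathit{RTT}_{\text{other}} \right)
    \]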

  18. Upper Bound
      • Forward direction: 13 kB (about 9 packets)
      • Reverse direction: 5 kB (about 3 packets)
      • These queue sizes are too small
        – They drop even small bursts!

  19. Lower Bound (Queue Size)
      • total queue size ≥ total window size
      • Small queues drop packets on bursts
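
One way to state the lower bound (notation mine): the queue has to absorb a burst of up to a full window from each of the n concurrent connections without dropping, so

    \[
    Q_{\min} \;\ge\; \sum_{i=1}^{n} W_i
    \]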

  20. Can’t Fulfill Both Bounds
      • Upper limit is 13 kB
      • Lower limit is 65 kB (1 TCP connection, no window scaling)
      • No viable queue size
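
A minimal numeric restatement of the conflict, using the numbers on this slide; the variable names are mine.

    # Upper bound: capacity x max tolerable delay (about 13 kB in the forward
    # direction). Lower bound: one full TCP window with no window scaling (64 KB).
    upper_bound_bytes = 13 * 1000
    lower_bound_bytes = 64 * 1024

    viable_queue_exists = upper_bound_bytes >= lower_bound_bytes
    print(viable_queue_exists)   # False: no queue size satisfies both bounds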

  21. Capacity vs. ABW
      • Capacity is the rate at which everyone’s packets drain from the queue
      • ABW is the rate at which my packets drain from the queue
      • A link emulator replicates capacity
      • Setting capacity == ABW interacts badly with queue size
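
The usual relationship between the two quantities (notation mine): available bandwidth is the capacity left over by competing cross-traffic.

    \[
    \mathit{ABW} \;=\; C - \lambda_{\text{cross}}
    \]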

  22. Capacity and Queue Size
      (figure: queue size in KB vs. capacity in Mbps, showing the upper bound, the lower bound, and the region of viable queue sizes between them)
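
Since the upper bound scales with capacity while the lower bound does not, the viable region on this plot opens up once the capacity is large enough; in the notation above:

    \[
    C \cdot d_{\text{queue,max}} \;\ge\; \sum_i W_i
    \quad\Longleftrightarrow\quad
    C \;\ge\; \frac{\sum_i W_i}{d_{\text{queue,max}}}
    \]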

  23. Our Solution
      • Set the queue size based on the constraints
      • Set the shaper to a high bandwidth (the capacity)
      • Introduce CBR cross-traffic (see the sketch below)
      (figure: the path emulator with a CBR source and sink attached in each direction, alongside the usual queue, shaper, and delay stages)
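
A minimal sketch of how the three settings fit together, written as a pure computation. The field and parameter names, and the 100 Mbps capacity in the example, are mine; the talk does not specify the actual interface or values.

    def emulator_config(capacity_mbps, target_abw_mbps, queue_bytes, one_way_delay_ms):
        return {
            "shaper_mbps": capacity_mbps,                 # shape at capacity, not at ABW
            "queue_bytes": queue_bytes,                   # chosen to satisfy both bounds
            "delay_ms": one_way_delay_ms,
            "cbr_mbps": capacity_mbps - target_abw_mbps,  # CBR cross-traffic fills the gap
        }

    # Forward direction of the example path (6.4 Mbps ABW), emulated over a
    # higher-capacity link so that a viable queue size exists:
    print(emulator_config(capacity_mbps=100.0, target_abw_mbps=6.4,
                          queue_bytes=64 * 1024, one_way_delay_ms=12))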

  24. CBR Traffic
      • TCP cross-traffic backs off; CBR does not
      • Background traffic cannot be allowed to back off
        – If it does, the user will see a larger ABW than they set

  25. Reactivity
      • Reactive CBR traffic?!?!?!
      • Approximate aggregate ABW as a function of the number of foreground flows
      • Change the CBR rate based on the flow count (see the sketch below)
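
A sketch of the reactivity idea: the CBR rate is recomputed from a measured model of aggregate ABW versus foreground flow count. The model values below are placeholders; the talk does not give the actual function.

    # Hypothetical ABW-vs-flow-count measurements for the emulated path (Mbps).
    ABW_BY_FLOWS_MBPS = {1: 6.4, 2: 8.1, 4: 9.3}

    def cbr_rate_mbps(capacity_mbps, foreground_flows):
        # More foreground flows push real cross-traffic back, so aggregate ABW
        # rises and the emulator's CBR rate must drop to match.
        abw = ABW_BY_FLOWS_MBPS.get(foreground_flows, max(ABW_BY_FLOWS_MBPS.values()))
        return capacity_mbps - abw

    print(cbr_rate_mbps(100.0, 1))   # 93.6
    print(cbr_rate_mbps(100.0, 4))   # 90.7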

  26. Does it work?

  27. Testing Bandwidth (asymmetric path: 6.4 Mbps forward, 2.6 Mbps reverse, measured with iperf)

                  Obvious Error   Our Error
       Forward         50.6 %        4.1 %
       Reverse          8.5 %        5.0 %

  28. More Bandwidth Tests

       Forward      Reverse      Link Error   Path Error
        2.3 Mbps     2.2 Mbps       8.0 %        2.1 %
        4.1 Mbps     2.8 Mbps      31.7 %        5.8 %
        6.4 Mbps     2.6 Mbps      50.6 %        4.1 %
       25.9 Mbps    17.2 Mbps      20.4 %       10.2 %
        8.0 Mbps     8.0 Mbps      22.0 %        6.3 %
       12.0 Mbps    12.0 Mbps      21.5 %        6.5 %
       10.0 Mbps     3.0 Mbps      66.5 %        8.5 %

  29. Testing Delay
      (figure: RTT in ms over time in s, for the real path vs. the path emulator)
      • The obvious solution’s RTT was an order of magnitude higher

  30. BitTorrent Setup
      • Measured conditions among 13 PlanetLab hosts
      • 12 BitTorrent clients, 1 seed
      • Isolate the effects of capacity and queue size changes

  31. BitTorrent
      (figure: per-node download duration in seconds under the obvious solution vs. our solution)

  32. Related Work
      • Link emulation
        – Emulab
        – Dummynet
        – ModelNet
        – NIST Net

  33. Related Work
      • Queue sizes: Appenzeller et al. (Sigcomm 2004)
        – Large number of flows: buffer requirements are small
        – Small number of flows: queue size should be the bandwidth-delay product
      • We build on this work to determine our lower bound
      • We focus on emulating a given bandwidth rather than maximizing performance
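
The two sizing rules referenced on this slide, as they are usually stated for a link of capacity C and round-trip time RTT carrying n long-lived flows:

    \[
    Q_{\text{few flows}} \;=\; C \cdot \mathit{RTT},
    \qquad
    Q_{\text{many flows}} \;\approx\; \frac{C \cdot \mathit{RTT}}{\sqrt{n}}
    \]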

  34. Related Work
      • Characterize traffic through a particular link
        – Harpoon (IMC 2004)
        – Swing (Sigcomm 2006)
        – Tmix (CCR 2006)
      • We use only end-to-end measurements and characterize reactivity as a function of flows

  35. Conclusion
      • New path emulator
        – End-to-end conditions
      • Four principles combine for accuracy
        – Pick appropriate queue sizes
        – Separate capacity from ABW
        – Model reactivity as a function of flows
        – Model shared bottlenecks

  36. Questions?
      • Available now at www.emulab.net
      • Email: duerig@cs.utah.edu

  37. Backup Slides

  38. Does capacity matter?

  39. Scale

  40. Shared Bottlenecks are Hard

  41. Stationarity
