Slide 1

Modeling and Emulation of Internet Paths

Pramod Sanaga, Jonathon Duerig, Robert Ricci, Jay Lepreau
University of Utah

Slide 2

Distributed Systems

  • How will you evaluate your distributed system?
    – DHT
    – P2P
    – Content Distribution

Slide 3

Network Emulation

Slide 4

Why Emulation?

  • Test distributed systems
    + Repeatable
    + Real
  • PCs, Applications, Protocols
    + Controlled
  • Dedicated Nodes, Network Parameters
  • Link Emulation

Slide 5

Goal: Path Emulation

[Diagram: hosts communicating across an emulated Internet]

Slide 6

Link by Link Emulation

[Diagram: a path emulated link by link, with hop capacities of 10 Mbps, 50 Mbps, 100 Mbps, and 50 Mbps]

Slide 7

End to End Emulation

[Diagram: the same path emulated end to end, characterized only by its RTT and available bandwidth (ABW)]

Slide 8

Contributions

  • Principles for path emulation

  – Pick appropriate queue sizes
  – Separate capacity from ABW
  – Model reactivity as a function of flows
  – Model shared bottlenecks

Slide 9

Obvious Solution

[Diagram: two applications connected through a link emulator; each direction has shaper, delay, and queue stages configured from the path's RTT and ABW]

Slide 10

Obvious Solution (Good News)

  • Actual path: 2.3 Mbps forward, 2.2 Mbps reverse (iperf)
  • Bandwidth accuracy
    – 8.0% error on the forward path
    – 7.2% error on the reverse path

Slide 11

Obvious Solution (Bad News)

[Plots: RTT (ms) vs. time (s) for the real path and for the obvious emulation]

Latency is an order of magnitude higher

Slide 12

Obvious Solution (More Problems)

  • Measure an asymmetric path: 6.4 Mbps forward, 2.6 Mbps reverse (iperf)
  • Bandwidth accuracy
    – 50.6% error on the forward path!
      • Emulated bandwidth is much smaller than on the real path
    – 8.5% error on the reverse path

Slide 13

What’s Happening?

  • TCP increases its congestion window until it sees a loss
  • There are no losses until the queue fills up
  • The queue fills up
  • Delay grows until the queue is full
  • The large delays were queueing delays

Slide 14

What’s Happening

[Diagram: link emulator with 6.4 Mbps forward and 2.6 Mbps reverse bandwidth, 12 ms of delay, and a 50-packet queue in each direction]

Slide 15

Delays

[Diagram: with full queues, one-way delays through the link emulator reach 97 ms and 239 ms]

How do we make this more rigorous?
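Before formalizing this, a quick sanity check (a sketch in Python, assuming 1500-byte packets; the packet size is not stated on the slides) shows that the 50-packet queues alone account for delays of this magnitude:

    # Queueing delay added by a full 50-packet drop-tail queue,
    # assuming 1500-byte packets (an assumption, not from the slides).
    PACKET_BYTES = 1500
    QUEUE_PACKETS = 50
    queue_bits = QUEUE_PACKETS * PACKET_BYTES * 8  # 600,000 bits

    for name, rate_bps in [("6.4 Mbps direction", 6.4e6),
                           ("2.6 Mbps direction", 2.6e6)]:
        delay_ms = queue_bits / rate_bps * 1000
        print(f"{name}: ~{delay_ms:.0f} ms of queueing delay")
    # ~94 ms and ~231 ms: the same order as the 97 ms and 239 ms above
    # once the 12 ms link delay is added.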

Slide 16

Maximum Tolerable Queueing Delay

Max tolerable queueing delay = (Max window size / Target bandwidth) − Other RTT

Queue size must be limited from above by this delay.

Slide 17

Upper Bound (Queue Size)

Total queue size = Max tolerable queueing delay × Capacity

The upper limit is proportional to capacity (both relations are sketched below).
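A minimal sketch of the two relations in Python; the function and variable names are illustrative, and the 66 ms of non-queueing RTT below is an assumed value chosen so the result lands near the 13 KB reported on the next slide:

    def max_tolerable_queueing_delay_s(max_window_bytes, target_bw_bps, other_rtt_s):
        # TCP throughput is at most window / RTT, so hitting the target
        # bandwidth caps the RTT, and hence the queueing delay.
        return max_window_bytes * 8 / target_bw_bps - other_rtt_s

    def queue_upper_bound_bytes(max_delay_s, capacity_bps):
        # A full queue of this size drains within max_delay_s at `capacity`.
        return max_delay_s * capacity_bps / 8

    # 64 KB window (no scaling), 6.4 Mbps target, 66 ms other RTT (assumed):
    d = max_tolerable_queueing_delay_s(65_535, 6.4e6, 0.066)
    print(round(queue_upper_bound_bytes(d, 6.4e6)))  # ~12,700 bytes, about 13 KB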

Slide 18

Upper Bound

  • Forward direction: 13 KB
    – 9 packets
  • Reverse direction: 5 KB
    – 3 packets
  • Queue size is too small
    – Drops even small bursts!

Slide 19

Lower Bound (Queue Size)

Lower bound: total queue size ≥ Total window size of all flows

Small queues drop packets on bursts.

Slide 20

Can’t Fulfill Both Bounds

  • Upper limit is 13 KB
  • Lower limit is 65 KB
    – 1 TCP connection, no window scaling
  • No viable queue size (checked in the sketch below)
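Plugging in the slide's numbers confirms the conflict; a minimal check (65,535 bytes is the largest TCP window without window scaling):

    # The two queue-size bounds from the preceding slides cannot both hold.
    upper_bound = 13_000   # bytes, from max tolerable delay * capacity
    lower_bound = 65_535   # bytes, one TCP connection's maximum window

    # A viable queue size q must satisfy lower_bound <= q <= upper_bound.
    print("viable queue size exists:", lower_bound <= upper_bound)  # False
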
Slide 21

Capacity Vs. ABW

  • Capacity is the rate at which everyone's packets drain from the queue
  • ABW is the rate at which MY packets drain from the queue
  • Link emulator replicates capacity
  • Setting capacity == ABW interacts with queue size

Slide 22

Capacity and Queue Size

[Plot: queue size (KB) vs. capacity (Mbps); the viable queue sizes lie between the lower and upper bounds, a region that widens as capacity grows]

Slide 23

Our Solution

  • Set queue based on constraints
  • Set shaper to high bandwidth
  • Introduce CBR cross-traffic

[Diagram: path emulator with shaper, delay, and queue stages in each direction; CBR sources inject constant-bit-rate cross-traffic that CBR sinks drain]
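A sketch of the configuration logic in Python (the function and its names are hypothetical; the real emulator is configured through Emulab, not this API):

    def configure_path(capacity_bps, target_abw_bps, queue_bytes):
        # Shape at the (high) capacity rather than at the target ABW;
        # CBR cross-traffic occupies the difference, so foreground flows
        # are left with exactly the target ABW.
        return {
            "shaper_bps": capacity_bps,
            "queue_bytes": queue_bytes,
            "cbr_bps": capacity_bps - target_abw_bps,
        }

    # E.g., emulate 6.4 Mbps of ABW over a 100 Mbps capacity
    # (illustrative values):
    print(configure_path(100e6, 6.4e6, 65_535))

Raising the capacity widens the band of viable queue sizes shown on Slide 22, while the CBR source keeps foreground flows at the target ABW.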

Slide 24

CBR Traffic

  • TCP cross-traffic backs off
  • CBR does not
  • Background traffic cannot back off
    – If it does, the user will see a larger ABW than they set

Slide 25

Reactivity

  • Reactive CBR traffic?!?!?!
  • Approximate aggregate ABW as a function of the number of foreground flows
  • Change CBR traffic based on flow count (sketched below)
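A sketch of that adjustment in Python; the aggregate-ABW model below is a hypothetical stand-in, where the paper fits the model from end-to-end measurements:

    def aggregate_abw_bps(num_flows):
        # Stand-in model: aggregate ABW grows with the number of
        # foreground flows (hypothetical numbers, not measurements).
        table = {1: 6.4e6, 2: 8.1e6, 4: 9.0e6}
        return table.get(num_flows, 9.0e6)

    def cbr_rate_bps(capacity_bps, num_flows):
        # Leave the modeled aggregate ABW free for foreground traffic;
        # the CBR source takes the rest.
        return capacity_bps - aggregate_abw_bps(num_flows)

    print(cbr_rate_bps(100e6, 2) / 1e6, "Mbps of CBR cross-traffic")
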
Slide 26

Does it work?

Slide 27

Testing Bandwidth

Asymmetric path: 6.4 Mbps forward, 2.6 Mbps reverse (iperf)

Direction   Obvious Error   Our Error
Forward     50.6%           4.1%
Reverse     8.5%            5.0%

Slide 28

More Bandwidth Tests

Forward      Reverse      Link Error (obvious)   Path Error (ours)
2.3 Mbps     2.2 Mbps      8.0%                   2.1%
4.1 Mbps     2.8 Mbps     31.7%                   5.8%
6.4 Mbps     2.6 Mbps     50.6%                   4.1%
25.9 Mbps    17.2 Mbps    20.4%                  10.2%
8.0 Mbps     8.0 Mbps     22.0%                   6.3%
12.0 Mbps    12.0 Mbps    21.5%                   6.5%
10.0 Mbps    3.0 Mbps     66.5%                   8.5%

Slide 29

Testing Delay

[Plots: RTT (ms) vs. time (s) for the real path and for the path emulator]

The obvious solution was an order of magnitude higher.

Slide 30

BitTorrent Setup

  • Measured conditions among 13 PlanetLab hosts
  • 12 BitTorrent Clients, 1 Seed
  • Isolate capacity and queue size changes

Slide 31

BitTorrent

[Plot: download duration (s) across nodes, comparing the obvious solution with our solution]

Slide 32

Related Work

  • Link Emulation

  – Emulab
  – Dummynet
  – ModelNet
  – NIST Net

Slide 33

Related Work

  • Queue Sizes
    – Appenzeller et al. (SIGCOMM 2004)
      • Large number of flows: buffer requirements are small
      • Small number of flows: queue size should be the bandwidth-delay product
    – We build on this work to determine our lower bound
    – We focus on emulating a given bandwidth rather than maximizing performance

Slide 34

Related Work

  • Characterize traffic through a particular link
    – Harpoon (IMC 2004)
    – Swing (SIGCOMM 2006)
    – Tmix (CCR 2006)
  • We use only end-to-end measurements and characterize reactivity as a function of flows

Slide 35

Conclusion

  • New path emulator
    – Emulates end-to-end conditions
  • Four principles combine for accuracy
    – Pick appropriate queue sizes
    – Separate capacity from ABW
    – Model reactivity as a function of flows
    – Model shared bottlenecks

Slide 36

Questions?

  • Available now at www.emulab.net
  • Email: duerig@cs.utah.edu
Slide 37

Backup Slides

Slide 38

Does capacity matter?

Slide 39

Scale

Slide 40

Shared Bottlenecks are Hard

Slide 41

Stationarity