SLIDE 1

Inferring Queue State by Measuring Delay in a WiFi Network

David Malone, Douglas J Leith, Ian Dangerfield 11 May 2009

SLIDE 2

WiFi, Delay, Queue Length and Congestion Control

  • RTT a proxy for queue length.
  • Not too crazy in wired networks.
  • For WiFi?

[Diagram: MAC service of a packet; counting down backoff in 20 µs slots, pausing while someone else transmits, then data and ACK (~500 µs), or a collision followed by a timeout.]

  • With fixed traffic, what is the impact of random service? (see the sketch after this list)
  • What is the impact of variable traffic (which need not even share the same buffer)?
  • What will Vegas do in practice?
  • Want to understand these for future design work.
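
A rough feel for the fixed-traffic case (my own sketch, not the authors' model): treat each packet's MAC service time as a random backoff count-down plus a fixed data-and-ACK exchange, with an occasional collision and timeout, and look at how the drain time of a queue spreads out. The slot time, data-and-ACK time, collision probability and timeout below are illustrative assumptions, and the sketch ignores pausing the count-down while other stations transmit.

import random

SLOT_US = 20          # backoff slot time (assumed)
TX_ACK_US = 500       # data transmission plus ACK (assumed, ~500 us as on the slide)
CW_MIN = 32           # initial contention window (assumed 802.11b/g-style default)
P_COLLIDE = 0.1       # assumed per-attempt collision probability
TIMEOUT_US = 300      # assumed penalty for a collision followed by a timeout

def service_time_us():
    # Random MAC service time for one packet: count down random backoff
    # slots, occasionally collide and retry, then send the data and ACK.
    total, cw = 0, CW_MIN
    while True:
        total += SLOT_US * random.randrange(cw)
        if random.random() < P_COLLIDE:
            total += TIMEOUT_US
            cw = min(2 * cw, 1024)
        else:
            return total + TX_ACK_US

def drain_time_us(queue_len):
    # Time to drain queue_len packets already sitting in the interface queue.
    return sum(service_time_us() for _ in range(queue_len))

if __name__ == "__main__":
    for n in (5, 10, 15, 20):
        samples = [drain_time_us(n) for _ in range(1000)]
        mean = sum(samples) / len(samples)
        std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
        print(f"queue={n:2d}  mean drain ~{mean:8.0f} us  std ~{std:6.0f} us")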
SLIDE 3

Sample of previous work

  • V. Jacobson. pathchar — a tool to infer characteristics of internet paths. MSRI, April 1997.
  • N. Sundaram, W. S. Conner, and A. Rangarajan. Estimation of bandwidth in bridged home networks. WiNMee, 2007.
  • M. Franceschinis, M. Mellia, M. Meo, and M. Munafo. Measuring TCP over WiFi: a real case. WiNMee, April 2005.
  • G. McCullagh. Exploring delay-based TCP congestion control. Masters Thesis, 2008.

SLIDE 4

Fixed Traffic: How bad is the problem?

[Plot: observed drain time (µs) against queue length (packets).]

SLIDE 5

What do the stats look like?

[Plot: drain time distribution by number of packets in the queue; mean drain time and number of observations against drain time (µs).]

Note: the variance is getting bigger.

SLIDE 6

How does it grow?

[Plot: drain time standard deviation (µs) and number of observations against queue length (packets); measured values against a sqrt(n) estimate.]

For fixed traffic, the service time looks uncorrelated.
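
The √n scaling is what one would expect if per-packet service times are uncorrelated (a one-line argument, not spelled out on the slide):

  D_n = S_1 + S_2 + ... + S_n   (drain time of n queued packets)
  Var(D_n) = n · Var(S)   ⇒   sd(D_n) = σ · √n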

SLIDE 7

Queue Length Prediction

  • Suppose traffic is fixed.
  • We have collected all the statistics.
  • Given an RTT, can we guess how full the queue is?
  • Easier: is it more or less than half full? (see the sketch below)
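
A minimal sketch of the thresholding idea (illustrative assumptions, not the authors' analysis): label each delay sample with the true queue length, classify "more than half full" by comparing the delay against a threshold, and report how often the threshold is wrong. The buffer size, the synthetic exponential service times and the threshold sweep are all assumptions.

import random

BUF_PKTS = 20           # assumed interface buffer size in packets
MEAN_SVC_US = 1000      # assumed mean per-packet service time

def drain_sample_us(qlen):
    # Synthetic drain-time sample for a queue holding qlen packets,
    # using exponential per-packet service times as a stand-in.
    return sum(random.expovariate(1.0 / MEAN_SVC_US) for _ in range(qlen))

def error_rate(samples, threshold_us):
    # Fraction of samples where thresholding the delay misjudges
    # whether the queue was more than half full.
    wrong = sum((d > threshold_us) != (q > BUF_PKTS // 2) for d, q in samples)
    return wrong / len(samples)

if __name__ == "__main__":
    data = [(drain_sample_us(q), q) for q in range(1, BUF_PKTS + 1) for _ in range(500)]
    for thr in range(2000, 20001, 2000):
        print(f"threshold {thr:6d} us: wrong {100 * error_rate(data, thr):5.1f}%")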
SLIDE 8

Results of thresholding

[Plot: fraction of packets against delay threshold (µs); curves for the threshold being wrong and being wrong by 50% or more.]

SLIDE 9

Using History

  • Wrong about 10% of the time, which is not good for congestion control.
  • That used only one sample; what happens if we use history?
  • Obvious thing to do: filter.
SLIDE 10

Filters

7/8 Filter:        srtt ← (7/8)·srtt + (1/8)·rtt

Exponential Time:  srtt ← e^(−∆T/Tc)·srtt + (1 − e^(−∆T/Tc))·rtt

Windowed Mean:     srtt ← mean of the rtt samples over the last RTT

Windowed Min:      srtt ← min of the rtt samples over the last RTT
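
A minimal sketch of the four filters above (my own Python implementation, not the authors' code); the windowed variants keep the samples seen within a horizon of roughly one RTT:

import math
from collections import deque

def filter_78(srtt, rtt):
    # 7/8 filter: srtt <- (7/8)*srtt + (1/8)*rtt
    return rtt if srtt is None else 0.875 * srtt + 0.125 * rtt

def filter_exp_time(srtt, rtt, dt, tc):
    # Exponential-in-time filter: the weight decays with the gap dt between
    # samples rather than with the sample count; tc is the time constant.
    if srtt is None:
        return rtt
    a = math.exp(-dt / tc)
    return a * srtt + (1 - a) * rtt

class WindowedFilter:
    # Mean or min of the rtt samples seen during roughly the last RTT.
    def __init__(self, reduce_fn):
        self.reduce_fn = reduce_fn      # e.g. min, or a mean function
        self.window = deque()           # (timestamp, rtt) pairs
    def update(self, now, rtt, horizon):
        self.window.append((now, rtt))
        while self.window and now - self.window[0][0] > horizon:
            self.window.popleft()
        return self.reduce_fn([r for _, r in self.window])

# Usage:
#   windowed_min  = WindowedFilter(min)
#   windowed_mean = WindowedFilter(lambda v: sum(v) / len(v))
#   srtt = windowed_min.update(now=1.00, rtt=0.012, horizon=0.012)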

SLIDE 11

How much better do we do?

[Plot: fraction of packets against delay threshold (µs) for the raw samples and for the exponential-time, windowed-mean, 7/8 and windowed-min filters; curves for the threshold being wrong and being wrong by 50% or more.]

SLIDE 12

Variable Network Conditions

  • Other traffic can change the service rate.
  • It need not even share the same buffer.
  • Nonlinear because of collisions.
  • What happens when we add/remove competing stations?
SLIDE 13

Removing stations (4 → 1)

[Plots: drain time (µs) and queue length (packets) against time (s), around 230-244 s when stations are removed.]

SLIDE 14

Adding stations (4 → 8, ACK Prio)

[Plots: drain time (µs) and queue length (packets) against time (s), around 100-200 s when stations are added.]

Even base RTT changes.

SLIDE 15

Vegas in Practice

TargetCwnd = cwnd × baseRTT / minRTT

Make the decision based on TargetCwnd − cwnd (a sketch of this rule follows the list below).

  • Will Vegas make right decisions based on current RTT?
  • Will Vegas get correct base RTT?
  • Vary delay with dummynet.
  • Vary BW by adding competing stations.
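
A minimal sketch of this decision rule (illustrative only; the alpha and beta thresholds, units and one-packet adjustments are assumptions, not from the talk):

ALPHA = 2   # assumed Vegas alpha (packets queued below which cwnd grows)
BETA = 4    # assumed Vegas beta (packets queued above which cwnd shrinks)

def vegas_update(cwnd, base_rtt, min_rtt):
    # One Vegas decision per RTT.  TargetCwnd = cwnd * baseRTT / minRTT is the
    # window that would keep the queue empty; cwnd - TargetCwnd estimates how
    # many of our packets sit in the queue (the slide's TargetCwnd - cwnd,
    # up to sign).
    target = cwnd * base_rtt / min_rtt
    diff = cwnd - target
    if diff < ALPHA:
        return cwnd + 1     # queue looks empty: probe for more bandwidth
    if diff > BETA:
        return cwnd - 1     # queue building up: back off
    return cwnd             # within the target band: hold

# Example: baseRTT = 5 ms, minimum RTT seen over the last RTT = 8 ms
print(vegas_update(cwnd=40, base_rtt=5.0, min_rtt=8.0))   # diff = 15, so cwnd -> 39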
SLIDE 16

Vegas with 5ms RTT

[Plots: throughput (pps) and cwnd (packets) against time (s).]

Lower bound like 1 − α/cwnd.

SLIDE 17

Vegas with 200ms RTT

[Plots: throughput (pps) and cwnd (packets) against time (s).]

Sees loss and goes into Reno mode.

SLIDE 18

Conclusion

  • With fixed traffic, delay is quite variable.
  • Variability grows with buffer occupancy like √n.
  • Obvious filters make things worse.
  • Need to deal with change in traffic conditions.
  • Linux Vegas does OK.
  • Switch to Reno helps.
  • Vegas insensitive at smaller buffer sizes.
  • Variability at larger buffer sizes still a problem.