CoDel

Kathie Nichols' CoDel, presented by Van Jacobson to the IETF-84 Transport Area Open Meeting, 30 July 2012, Vancouver, Canada.


  1. Kathie Nichols’ CoDel, presented by Van Jacobson to the IETF-84 Transport Area Open Meeting, 30 July 2012, Vancouver, Canada

  2. (figure-only slide, no text)

  3. (figure-only slide, no text)

  4. (figure: Sender and Receiver)

  5. (figure: Sender and Receiver)

  6. (figure: Sender and Receiver) • Queue forms at a bottleneck • There’s probably just one bottleneck (each flow sees exactly one) ➡ Choices: can move the queue (by making a new bottleneck) or control it.

  7. Good Queue / Bad Queue (figure: queue length vs. time)

  8. Good Queue / Bad Queue (two figures: queue length vs. time)

  9. Good Queue / Bad Queue (figures: queue length vs. time) • Good queue goes away in an RTT, bad queue hangs around. ➡ queue-length min() over a sliding window measures bad queue ... ➡ ... as long as the window is at least an RTT wide.

  10. Good Queue / Bad Queue (figures: queue length vs. time) • Good queue goes away in an RTT, bad queue hangs around. ➡ tracking min() in a sliding window gives bad queue ... ➡ ... as long as the window is at least an RTT wide.

  11. (same as slide 10)
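
The running-min idea on slides 9-11 can be illustrated with a short sketch (mine, not code from the presentation): keep recent samples of queue length or sojourn time, discard anything older than the window, and report the minimum. If the window is at least one RTT wide, that minimum is the standing "bad" queue, since good queue drains within an RTT. The window width and names below are illustrative assumptions; the real CoDel code does not keep explicit samples like this.

```python
import collections
import time

class RunningMin:
    """Track the minimum of a signal over a sliding time window.

    Applied to queue length (or sojourn time), the windowed minimum is the
    standing "bad" queue, provided the window is at least one RTT wide.
    """

    def __init__(self, window_sec=0.1):        # assume worst-case RTT ~100 ms
        self.window = window_sec
        self.samples = collections.deque()     # (timestamp, value) pairs

    def update(self, value, now=None):
        now = time.monotonic() if now is None else now
        self.samples.append((now, value))
        # Discard samples that have fallen out of the window.
        while self.samples and self.samples[0][0] < now - self.window:
            self.samples.popleft()
        return min(v for _, v in self.samples)  # the "bad queue" estimate

# Example: a burst that drains within an RTT leaves the windowed min at zero.
rm = RunningMin(window_sec=0.1)
for t, qlen in [(0.00, 0), (0.01, 20), (0.05, 5), (0.09, 0)]:
    bad_queue = rm.update(qlen, now=t)
print(bad_queue)   # 0 -> the burst was "good queue"
```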

  12. How big is the queue? • Can measure size in bytes – interesting if worried about overflow – requires output bandwidth to compute delay • Can measure packet’s sojourn time – direct measure of delay – easy (no enqueue/dequeue coupling, so works with any packet pipeline).

  13. Sojourn Time • Works with time-varying output bandwidth (e.g., wireless and shared links) • Better behaved than queue length – no high-frequency phase noise • Includes everything that affects the packet, so works for multi-queue links
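
Measuring sojourn time amounts to stamping each packet on enqueue and subtracting at dequeue, which is why it needs no enqueue/dequeue coupling and no knowledge of the output rate. A minimal sketch (class and method names are mine, not the Linux implementation's):

```python
import collections
import time

class TimestampedQueue:
    """FIFO that records each packet's enqueue time so the dequeuer can
    compute sojourn time directly, independent of output bandwidth."""

    def __init__(self):
        self.q = collections.deque()

    def enqueue(self, pkt):
        self.q.append((time.monotonic(), pkt))

    def dequeue(self):
        if not self.q:
            return None, 0.0
        t_enq, pkt = self.q.popleft()
        sojourn = time.monotonic() - t_enq   # time the packet spent queued
        return pkt, sojourn

# Usage: q = TimestampedQueue(); q.enqueue(b"pkt"); pkt, delay = q.dequeue()
```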

  14. Two views of a Queue (figures: Q delay (ms.) vs. time (sec.) and Q size vs. time (sec.)) – top graph is sojourn time, bottom is queue size (one ftp + web traffic; 10 Mbps bottleneck; 80 ms RTT; TCP Reno)

  15. (same as slide 14)

  16. Two views of a Queue (zoomed-in figures: Q delay (ms.) vs. time (sec.) and Q size vs. time (sec.)) – top graph is sojourn time, bottom is queue size (one ftp + web traffic; 10 Mbps bottleneck; 80 ms RTT; TCP Reno)

  17. Multi-Queue behavior (figure)

  18. Controlling Queue a) Measure what you’ve got b) Decide what you want c) If (a) isn’t (b), move it toward (b)

  19. Controlling Queue a) Measure what you’ve got (Estimator) b) Decide what you want (Setpoint) c) If (a) isn’t (b), move it toward (b) (Control loop)
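
Reduced to a single decision, the three parts fit together roughly as below. The 5 ms setpoint is the value slide 29 arrives at; the names and the bare threshold test are illustrative assumptions, not CoDel's actual control loop (slide 30 points to the real one).

```python
TARGET = 0.005   # setpoint: 5 ms standing delay, ~5% of a nominal 100 ms RTT (slide 29)

def control_action(min_sojourn_sec: float) -> bool:
    """(a) Estimator: min_sojourn_sec is the windowed-min packet sojourn time.
    (b) Setpoint: TARGET is the standing delay we are willing to tolerate.
    (c) Control loop: return True when the controller should drop or ECN-mark
        a packet to push the measured delay back toward the setpoint."""
    return min_sojourn_sec > TARGET

print(control_action(0.012))   # True: 12 ms of standing delay exceeds the 5 ms setpoint
print(control_action(0.002))   # False: within the setpoint, leave the queue alone
```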

  20. How much ‘bad’ queue do we want? • Can’t let the link go idle (need one or two MTU of backlog) • More than this will give higher utilization at a low degree of multiplexing (1-3 bulk xfers) at the cost of higher delay • Can the trade-off be quantified?

  21. Utilization vs. Target for a single Reno TCP (figure: Bottleneck Link Utilization (% of max) vs. Target (% of RTT))

  22. Power vs. Target for a Reno TCP (figure: Average Power (Xput/Delay) vs. Target (as % of RTT))
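
For reference, the "power" on these plots is the usual network power metric, throughput divided by delay (the slides' own axis label is "Xput/Delay"); the y-axis running up to 1.00 suggests it is normalized to its peak, which is my reading rather than something stated on the slide:

```latex
\[
  \text{power} \;=\; \frac{\text{throughput}}{\text{delay}},
  \qquad
  \text{plotted value} \;=\;
    \frac{\text{power}(\text{target})}{\max_{\text{target}} \text{power}(\text{target})} .
\]
```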

  23. (same as slide 22)

  24. Power vs. Target for a Reno TCP (figure, zoomed to targets of 0-30% of RTT: Average Power (Xput/Delay) vs. Target (as % of RTT))

  25. (same as slide 24)

  26. (build slide, no text)

  27. (first build of slide 29)

  28. (second build of slide 29)

  29. • Setpoint target of 5% of nominal RTT (5 ms for a 100 ms RTT) yields substantial utilization improvement for small added delay. • Result holds independent of bandwidth and congestion control algorithm (tested with Reno, Cubic & Westwood). ➡ CoDel has no free parameters: the running-min window width is determined by the worst-case expected RTT, and the target is a fixed fraction of the same RTT.

  30. Algorithm & Control Law (see the I-D, the CACM paper, and Linux kernels >= 3.5)
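
The slide defers to the I-D, the CACM paper, and the kernel source for the actual algorithm. As orientation only, here is a condensed sketch of the published control law: once the windowed-min sojourn time has stayed above target for a full interval, enter a dropping state and drop packets at times spaced interval/sqrt(count) apart, so the drop rate ramps up until the delay falls back under target. This is a simplification (the real state machine handles re-entry into dropping, ECN marking, and more); consult the I-D or Linux code for the authoritative version.

```python
from math import sqrt

TARGET = 0.005      # 5 ms standing-delay setpoint
INTERVAL = 0.100    # running-min window width, ~ worst-case expected RTT

class CoDelSketch:
    """Condensed sketch of CoDel's control law; the real implementations
    (Linux, ns2/ns3) track more state than shown here."""

    def __init__(self):
        self.first_above_time = None   # when sojourn time first exceeded TARGET
        self.dropping = False          # are we in the dropping state?
        self.count = 0                 # drops since entering the dropping state
        self.drop_next = 0.0           # scheduled time of the next drop

    def should_drop(self, sojourn, now):
        """Called at dequeue with the packet's sojourn time; True means drop it."""
        if sojourn < TARGET:
            # Delay is acceptable again: leave the dropping state.
            self.first_above_time = None
            self.dropping = False
            return False
        if self.first_above_time is None:
            self.first_above_time = now
            return False
        if not self.dropping and now - self.first_above_time >= INTERVAL:
            # Sojourn time has stayed above target for a full interval: start dropping.
            self.dropping = True
            self.count = 1
            self.drop_next = now + INTERVAL / sqrt(self.count)
            return True
        if self.dropping and now >= self.drop_next:
            # Control law: successive drops come closer together, interval/sqrt(count).
            self.count += 1
            self.drop_next = now + INTERVAL / sqrt(self.count)
            return True
        return False
```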

  31. Eric Dumazet has combined CoDel with a simple SFQ (256-1024 buckets with an RR service discipline). Cost in state & cycles is small and the improvement is big. • provides isolation – protects low-rate CBR and web traffic for a better user experience; makes IW10 concerns a non-issue • gets rid of bottleneck bi-directional traffic problems (‘ack-compression’ burstiness) • improves flow mixing for better network performance (reduces HoL blocking) ➡ Since we’re adding code, add fq_codel, not codel.
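
A rough sketch of the combination described here: hash each flow to one of a few hundred buckets, give every bucket its own CoDel, and serve non-empty buckets round-robin. It reuses the CoDelSketch class from the sketch after slide 30. The real Linux fq_codel uses deficit round robin, a different hash, and per-bucket byte accounting, so treat this purely as an illustration.

```python
import collections

NUM_BUCKETS = 1024   # the slide quotes 256-1024 buckets

class FQCoDelSketch:
    """Toy flow-queueing layer: per-flow buckets, each with its own CoDel,
    serviced round-robin (illustration only; see Linux fq_codel for the
    real DRR-based scheduler)."""

    def __init__(self):
        self.buckets = [collections.deque() for _ in range(NUM_BUCKETS)]
        self.codels = [CoDelSketch() for _ in range(NUM_BUCKETS)]  # from the slide-30 sketch
        self.rr = 0   # round-robin pointer

    def enqueue(self, pkt, flow_key, now):
        # Hashing flows to buckets is what provides the isolation the slide mentions.
        b = hash(flow_key) % NUM_BUCKETS
        self.buckets[b].append((now, pkt))

    def dequeue(self, now):
        for _ in range(NUM_BUCKETS):          # visit buckets round-robin
            b = self.rr
            self.rr = (self.rr + 1) % NUM_BUCKETS
            while self.buckets[b]:
                t_enq, pkt = self.buckets[b].popleft()
                sojourn = now - t_enq
                if not self.codels[b].should_drop(sojourn, now):
                    return pkt                # deliver; CoDel-dropped packets are skipped
        return None                           # all buckets empty
```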

  32. Where are we? • thanks to Jim Gettys and the ACM, have dead-tree publication to protect ideas • un-encumbered code (BSD/GPL dual-license) available for ns2, ns3 & Linux • in both simulation and real deployment, CoDel does no harm – it either does nothing or reduces delay without affecting xput.

  33. What needs to be done • Still looking at parts of the algorithm but changes likely to be 2nd order. • Would like to see CoDel on both ends of every home/small-office access link but: – We need to know more about how traffic behaves on particular bottlenecks (Wi-Fi, 3G cellular, cable modem) – There are system issues with deployment

  34. Deployment Issues (figure: path from the cable headend through the cable modem and home gateway to the RTR/AP)

  35. Deployment Issues (figure: the slide-34 path, annotated with where the queues sit – protocol stack, device driver, Linux kernel, device)

  36. Deployment Issues (figure: adds the cellular case – phone CPU, 3G modem, RAN, SGSN – with a ‘?’ over where the queues sit)

  37. Our thanks to: • Jim Gettys • CoDel early experimenters, particularly Dave Taht • Eric Dumazet • ACM Queue • Eben Moglen
