Revisiting Old Friends: Is CoDel Really Achieving What RED Cannot?

SLIDE 1

Revisiting Old Friends: Is CoDel Really Achieving What RED Cannot?

Nicolas Kuhn (IMT Telecom Bretagne, France)
Emmanuel Lochin (Université de Toulouse, France)
Olivier Mehani (National ICT Australia, Australia)

Revisiting Old Friends: CoDel vs. RED, 2014 (1/21)

SLIDE 2

Table of contents

1. Context and objectives
2. RED and CoDel
3. Simulating the bufferbloat in ns-2
4. Impact of AQM with CUBIC and VEGAS
5. Application Delays and Goodputs
6. Discussion

SLIDE 3

Context and objectives

Context - History of AQM

Deployment of loss-based TCP:
- TCP flows competing on a bottleneck back off at the same moment (tail drops)
- ⇒ under-utilization of the available capacity and many loss events

Active Queue Management (AQM):
- a solution to avoid loss synchronization: queue management schemes that drop packets before tail drops occur
- due to operational and deployment issues, no AQM scheme has been turned on

Buffer sizing in the routers:
- buffers absorb physical-layer impairments (fluctuating bandwidth) and avoid loss events
- ⇒ large buffers are deployed in the Internet


SLIDE 6

Context - Bufferbloat

Origins of the bufferbloat:
- deployment of aggressive congestion controls (such as TCP CUBIC)
- large buffers in the routers
- ⇒ permanent queuing in the routers ⇒ high queuing delay ⇒ network latency

AQM, proposed in the past to avoid loss synchronization, is one solution to the bufferbloat: adapt AQM schemes to control the queuing delay in the routers.
- in the 90's: RED, based on the number of packets in the buffer
- recent proposals: PIE and CoDel, based on the queuing delay


SLIDE 8

Objectives

Considering that:
- a performance comparison of RED, CoDel and PIE is missing
- so is a study of their impact on various congestion controls

Our objectives are to:
- compare the performance of RED and CoDel with various TCP variants (delay-based / loss-based)
- discuss deployment and auto-tuning issues

What we do not consider:
- PIE: no code was available when running the simulations
- FQ-CoDel (hybrid scheduling/CoDel): did not exist at the time of the study



SLIDE 12

RED and CoDel

Random Early Detection (RED), from the 90's:
- the dropping probability p_drop is a function of the number of packets in the queue
- depending on p_drop, incoming packets might be dropped

Controlled Delay (CoDel), to tackle the bufferbloat:
- measures the queuing delay of each packet p, qdel_p
- N_drop is the cumulative number of drop events per interval (default is 100 ms)
- while dequeuing p:
  - if qdel_p > target delay (5 ms): p is dropped, N_drop is incremented, interval = interval / √N_drop
  - if qdel_p < target delay: p is dequeued, N_drop = 0, interval = 100 ms
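The two control laws on this slide can be sketched as follows. This is a deliberately simplified illustration: RED's EWMA queue averaging and CoDel's full dropping-state machine are omitted, and the parameter values are the defaults named on the slide, not the ns-2 configurations used in the study.

```python
import math

def red_drop_prob(qlen, min_th, max_th, max_p):
    """RED sketch: the drop probability grows linearly with the
    (average) number of packets in the queue between two thresholds.
    Real RED applies this to an EWMA of the queue length."""
    if qlen < min_th:
        return 0.0
    if qlen >= max_th:
        return 1.0
    return max_p * (qlen - min_th) / (max_th - min_th)

class CoDelSketch:
    """CoDel sketch of the per-dequeue rule above: drop while the
    packet's queuing delay exceeds target, shrinking the next drop
    interval as interval / sqrt(N_drop); reset otherwise."""
    TARGET = 0.005    # target delay: 5 ms
    INTERVAL = 0.100  # default interval: 100 ms

    def __init__(self):
        self.ndrop = 0
        self.next_interval = self.INTERVAL

    def on_dequeue(self, qdel):
        if qdel > self.TARGET:
            self.ndrop += 1
            self.next_interval = self.INTERVAL / math.sqrt(self.ndrop)
            return "drop"
        self.ndrop = 0
        self.next_interval = self.INTERVAL
        return "dequeue"
```

The 1/√N_drop schedule is what lets CoDel drive a persistently full queue back toward the 5 ms target: each consecutive over-target dequeue shortens the time to the next drop.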



SLIDE 15

Simulating the bufferbloat in ns-2

Topology and traffic

Topology:
- P_appl Pareto applications and one FTP transmission of B bytes share the path
- access links: delay D_w, capacity C_w; central (bottleneck) link: delay D_c, capacity C_c

Traffic:
- the P_appl applications each transmit a file whose size follows a Pareto law, consistent with the flow-size distribution measured in the Internet; this traffic dynamically loads the network
- the FTP transmission of B bytes is used to understand the protocols' impacts
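The background load described above can be sketched as follows; the shape and minimum flow size are illustrative assumptions, not the parameters used in the paper.

```python
import random

# Hypothetical parameters, chosen for illustration only:
ALPHA = 1.2       # heavy-tailed shape (small alpha => heavier tail)
MIN_BYTES = 5000  # scale: smallest possible flow size in bytes

def pareto_flow_size(rng):
    """Draw one flow size (bytes) from a Pareto law, mimicking the
    heavy-tailed flow sizes measured in the Internet."""
    return MIN_BYTES * rng.paretovariate(ALPHA)

rng = random.Random(1)
sizes = [pareto_flow_size(rng) for _ in range(100)]  # one per application
```

With a shape below 2 the distribution has infinite variance, so a few elephant flows dominate a crowd of mice, which is exactly the load pattern that keeps the bottleneck queue dynamically stressed.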


SLIDE 17

Network and application characteristics

Finding central link capacities C_c causing bufferbloat (P_appl = 100, C_w = 10 Mbps)

Figure: queue size [pkt] over time [s] for C_c ∈ {1, 1.25, 1.5, 2, 5} Mbps

Selecting capacity, P_appl and buffer size:
- C_c = 1 Mbps ⇒ constant buffering
- P_appl = 100
- buffer sizes: 1) ≪ BDP (q = 10), 2) ≃ BDP (q = 45), 3) ≫ BDP (q = 127), 4) q = ∞



SLIDE 20

Impact of AQM with CUBIC and VEGAS

Drop ratio vs. queuing delay

Figure: TCP CUBIC: drop ratio versus queuing delay [s] under (a) DropTail, (b) RED, (c) CoDel, for queue sizes 10, 45, 125 and unlimited (TCP Vegas shows the same behaviour)

Interpretation:
- introducing RED or CoDel ⇒ drop events whatever the queue size
- with DropTail, the queuing delay is bounded by the size of the queue
- the queuing delay is between 0.01 s and 0.1 s with CoDel
- the queuing delay is between 0.1 s and 0.5 s with RED


SLIDE 22

VEGAS and CUBIC with DropTail

Figure: DropTail: achieved throughput [Mbps] versus queuing delay [s] for (a) VEGAS and (b) CUBIC, for queue sizes 10, 45, 125 and unlimited

Interpretation:
- DropTail and VEGAS: throughput decreases when the queue size increases; with a large queue, VEGAS reacts to the queuing-delay increases
- DropTail and CUBIC: throughput increases with larger queues; the larger the queue, the bigger the queuing delay


SLIDE 24

VEGAS with RED or CoDel

Figure: VEGAS with AQM: achieved throughput [Mbps] versus queuing delay [s] under (a) DropTail, (b) RED, (c) CoDel

Interpretation:
- the queuing delay is between 0.01 s and 0.1 s with CoDel
- the queuing delay is between 0.1 s and 0.5 s with RED
- the throughput is the same whatever the choice of the AQM


SLIDE 26

CUBIC with RED or CoDel

Figure: CUBIC with AQM: achieved throughput [Mbps] versus queuing delay [s] under (a) DropTail, (b) RED, (c) CoDel

Interpretation:
- the queuing delay is between 0.01 s and 0.1 s with CoDel
- the queuing delay is between 0.1 s and 0.5 s with RED
- the throughput is larger with RED (up to 0.75 Mbps) than with CoDel (up to 0.45 Mbps)


SLIDE 28

Early conclusions

- CoDel is a good candidate to reduce latency
- RED may reduce the latency as well
- RED transmits more traffic and better exploits the capacity of the bottleneck
- ⇒ a better trade-off between latency reduction and efficient capacity use might exist than the one CoDel offers


SLIDE 30

Application Delays and Goodputs

Application Delay

Figure: packet transmission times [s] for Reno, Vegas, Compound and CUBIC under (a) DropTail, (b) RED, (c) CoDel

Interpretation:
- RED and CoDel reduce the latency compared to DropTail
- with CUBIC, the packet transmission time is reduced by 87% with CoDel and by 75% with RED
- the median packet transmission time with CUBIC is 115 ms with CoDel, compared to 226 ms with RED
- latency is reduced by 44% when the congestion control is VEGAS rather than CUBIC
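A quick consistency check on the percentages above: the two reductions, applied to the two reported medians, should point back to roughly the same DropTail baseline.

```python
# Medians reported on this slide for CUBIC:
codel_median = 0.115  # s, an 87% reduction vs. DropTail
red_median = 0.226    # s, a 75% reduction vs. DropTail

# Implied DropTail baselines: both land near 0.9 s, i.e. the
# bufferbloat-level queuing delays seen with large DropTail queues.
baseline_from_codel = codel_median / (1 - 0.87)
baseline_from_red = red_median / (1 - 0.75)
```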


SLIDE 32

Application Goodput

Figure: time [s] needed to transmit 10 MB for Reno, Vegas, Compound and CUBIC under (a) DropTail, (b) RED, (c) CoDel

Interpretation:
- the dropping events generated by RED do not impact this transmission time much
- with CUBIC, introducing RED increases the median transmission time of 10 MB by 5% compared to DropTail
- with CUBIC, introducing CoDel increases this transmission time by 42%
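The goodput cost of these longer transfers follows directly from goodput = bytes / time; the 1000 s baseline below is an assumed value for illustration, not a number read off the figure.

```python
def goodput_mbps(nbytes, seconds):
    """Application goodput implied by transmitting nbytes in `seconds`."""
    return nbytes * 8 / seconds / 1e6

B = 10_000_000       # the 10 MB FTP transfer
t_droptail = 1000.0  # assumed baseline transmission time (illustrative)

g_droptail = goodput_mbps(B, t_droptail)      # 0.08 Mbps
g_red = goodput_mbps(B, t_droptail * 1.05)    # ~5% lower
g_codel = goodput_mbps(B, t_droptail * 1.42)  # ~30% lower
```

A 5% longer transfer barely moves the goodput, while a 42% longer one costs almost a third of it, which is the latency/goodput trade-off the next slide discusses.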



SLIDE 35

Discussion

Deployment of CoDel and RED

- AQM is a solution to tackle the bufferbloat that SHOULD be deployed.
- RED and CoDel both reduce the latency: in our simulations, CoDel reduced the latency by 87% and RED by 75%.
- A trade-off must be found between reducing the latency and degrading the end-to-end performance: CoDel increased the time needed to transmit 10 MB by 42%, while RED only introduced a 5% increase.
- Deployment issues of RED: RED was not turned on because it is hard to configure for a given network. Adaptive RED (proposed after Gentle RED) has fewer deployment issues but was not deployed either.
- Deployment issues of CoDel: in a document published by CableLabs, the authors explain that they had to adjust CoDel's target value to account for MAC/PHY delays even for packets reaching an empty queue. A large parameter-sensitivity study is needed.
- Consider the intended traffic to be carried: for example, the joint deployment of LEDBAT and AQM is a problem, as LEDBAT would no longer be "lower-than-best-effort".


SLIDE 39

Appendix

SLIDE 40

On CoDel's target value:¹ "The default target value is 5 ms, but this value SHOULD be tuned to be at least the transmission time of a single MTU-sized packet at the prevalent egress link speed (which for e.g. 3 Mbps and MTU 1500 is ∼15 ms)."

On LEDBAT not being LBE over AQMs:² "[...] RED invalidates LEDBAT low priority [with] similar throughput of TCP and LEDBAT, both at flow and aggregate levels."

¹ T. Hoeiland-Joergensen et al. FlowQueue-CoDel. Internet-Draft draft-hoeiland-joergensen-aqm-fq-codel-00.txt, Mar. 2014, sec. 5.1.2. url: http://www.rfc-editor.org/internet-drafts/draft-hoeiland-joergensen-aqm-fq-codel-00.txt

² Y. Gong et al. "Interaction or Interference: Can AQM and Low Priority Congestion Control Successfully Collaborate?" In: CoNEXT 2012, Nice, France, 2012, pp. 25-26, sec. 2. doi: 10.1145/2413247.2413263. url: http://conferences.sigcomm.org/co-next/2012/eproceedings/student/p25.pdf