Chair of Network Architectures and Services Department of Informatics Technical University of Munich
Evaluation of TCP BBR v2 Congestion Control Using Network Emulation
Intermediate talk for the Master's Thesis by Juliane Aulbach, advised by
Congestion Control
Figure: Kleinrock's optimal operating point [3]. Delivery rate and RTT are plotted against the amount of data inflight: the loss-based operating point sits at an inflight of BDP + BtlneckBufSize, while Kleinrock's optimal operating point sits at an inflight of one BDP, where the delivery rate reaches BtlBw and the RTT equals RTprop.
- J. Aulbach — TCP BBR v2
2
TCP BBR
Bottleneck-Bandwidth & RTT Congestion-based Congestion Control
- BBR: Bottleneck Bandwidth and RTT
- 2016: Google presents BBR [1]
- 2017: BBR becomes part of the Linux kernel (version 4.9 or higher)
- 2018: Google announces BBR v2
How does BBR work?
- Maximum bandwidth is determined by a single bottleneck
- Estimates bandwidth-delay product (BDP) to create a model of the network
- BDP = Bottleneck Bandwidth (BtlBw) · Propagation delay (RTprop)
- max BtlBw = windowed_max(delivered data / elapsed time)
- min RTT = windowed_min(time acknowledged - time sent)
- Continuously updates the model as the measurements change
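The two estimators above can be sketched as windowed max/min filters. The following is an illustrative sketch; the class name, window lengths, and units are assumptions, not the kernel implementation:

```python
from collections import deque

class WindowedFilter:
    """Track the best (max or min) sample seen within the last `window` time units."""
    def __init__(self, window, better):
        self.window = window      # assumed: ~10 round trips for BtlBw, ~10 s for RTprop
        self.better = better      # max for BtlBw, min for RTprop
        self.samples = deque()    # (timestamp, value) pairs

    def update(self, now, value):
        # Age out samples that fell outside the window, then record the new one.
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()
        self.samples.append((now, value))
        return self.better(v for _, v in self.samples)

def bdp(btlbw_mbps, rtprop_s):
    """BDP = BtlBw * RTprop; here Mbit/s * s gives Mbit."""
    return btlbw_mbps * rtprop_s
```

For example, `WindowedFilter(10, max)` would track bandwidth samples and `WindowedFilter(10, min)` would track RTT samples.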
TCP BBR
BBR Phases
Figure: Sending rate (Mbit/s) over time (s), annotated with the Startup, Drain, ProbeBW, and ProbeRTT phases.
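The phase behavior can be summarized by BBR v1's pacing gains. The constants below follow the published Linux implementation; the helper function is only an illustration:

```python
from math import log

# Pacing gains per phase (BBR v1, as in the Linux kernel sources):
STARTUP_GAIN = 2 / log(2)                        # ~2.89: roughly double the rate each RTT
DRAIN_GAIN = log(2) / 2                          # inverse of STARTUP_GAIN, drains the queue
PROBE_BW_GAINS = [1.25, 0.75, 1, 1, 1, 1, 1, 1]  # 8-phase cycle: probe up, drain, cruise
# ProbeRTT instead caps the congestion window (4 packets) to expose the true RTprop.

def pacing_rate(btlbw, gain):
    """Sending rate = gain * estimated bottleneck bandwidth."""
    return gain * btlbw
```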
TCP BBR
Bottleneck-Bandwidth & RTT: Issues with the initial version of BBR, as presented by Neal Cardwell at IETF Meeting 104 [2]
- Low throughput for Reno/CUBIC flows sharing a bottleneck with a bulk BBR flow
- > Adapt bandwidth probing
- Loss-agnostic
- > Use packet loss as an explicit signal, with an explicit target loss rate ceiling
- ECN-agnostic
- > Use DCTCP-style ECN, if available, to help keep queues short
- Problems with ACK aggregation
- > Estimate recent degree of aggregation to match CUBIC throughput
- Throughput variations during ProbeRTT
- > Less restrictive constraints for sending rate if entering ProbeRTT
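The "explicit loss signal" item can be illustrated with a small sketch. The 2% ceiling and the back-off reaction here are simplified assumptions, not BBR v2's exact kernel logic:

```python
# Illustrative sketch: treat packet loss as an explicit signal with a
# loss-rate ceiling. Threshold and reaction are simplified assumptions.
LOSS_THRESH = 0.02   # assumed ceiling: 2% loss per round trip

def update_inflight_bound(inflight_hi, delivered, lost):
    """Cap the inflight upper bound when the per-round loss rate exceeds the ceiling."""
    total = delivered + lost
    if total == 0:
        return inflight_hi
    loss_rate = lost / total
    if loss_rate > LOSS_THRESH:
        # Back off: stop probing and bound inflight near what was actually delivered.
        return min(inflight_hi, delivered)
    return inflight_hi
```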
Parameters & Measurements
Mininet Setup
Figure: Mininet setup with n sending hosts connected through three switches and a bottleneck link to n receiving hosts [3].
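In such a setup the bottleneck buffer is commonly sized in multiples of the BDP, as in the buffer-size results later. A small helper for that conversion; the link parameters in the example are assumptions taken from typical test configurations:

```python
def buffer_size_pkts(btlbw_mbps, rtprop_ms, bdp_multiple, mtu_bytes=1500):
    """Bottleneck queue length in packets for a buffer of `bdp_multiple` x BDP."""
    bdp_bits = btlbw_mbps * 1e6 * (rtprop_ms / 1e3)   # BDP in bits
    pkts = bdp_bits * bdp_multiple / (mtu_bytes * 8)  # bits -> full-size packets
    return max(1, round(pkts))

# e.g. a 10 Mbit/s bottleneck with 40 ms RTT and a 1xBDP buffer:
# 10e6 * 0.04 = 400,000 bits, i.e. about 33 full-size packets
```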
Results
RTT Unfairness
Figure: Sending rate (Mbit/s) over time (s) for two flows with 10 ms and 50 ms RTT sharing a bottleneck, compared to the fair share, for BBR (left) and BBR v2 (right).
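A common way to quantify such (un)fairness numerically is Jain's fairness index over the flows' average rates. This is a standard metric, not something taken from the slides:

```python
def jain_index(rates):
    """Jain's fairness index: 1.0 = perfectly fair, 1/n = maximally unfair."""
    n = len(rates)
    s = sum(rates)
    sq = sum(r * r for r in rates)
    return s * s / (n * sq) if sq else 1.0
```

Two flows splitting the link 5:5 score 1.0; a 10:0 split scores 0.5.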
Results
Buffer Size
Figure: Average bandwidth share (%) over buffer size in multiples of BDP (log scale): CUBIC vs. BBR, CUBIC vs. BBR v2, and BBR vs. BBR v2.
Conclusion
Challenges
- Mininet does not work properly on current Linux kernels
- There is no official draft or paper for BBR v2

Status
- Find out how BBR and BBR v2 work
- Deploy Linux with BBR v2
- Reproduce the results from [3] as ground truth
- Repeat all tests with BBR v2
- Compare the test results of BBR and BBR v2
- Evaluate improvements and deteriorations
- Extend the measurements
Backup Slides
BBR Phases
Figure: Sending rate (Mbit/s) over time (s), annotated with the Startup, Drain, ProbeBW, and ProbeRTT phases, for BBR (top) and BBR v2 (bottom).
Backup Slides
Fairness: Round-trip time
Figure: Average bandwidth share (%) and a per-panel fairness index F over RTT (ms, log scale): CUBIC vs. BBR, CUBIC vs. BBR v2, and BBR vs. BBR v2.
Backup Slides
Performance
Figure: Throughput (Mbit/s) over loss rate (%, log scale) for CUBIC and BBR; BBR fully uses the bandwidth despite high loss (bottleneck bandwidth 100 Mbit/s, RTT 100 ms).
Figure: Latency (ms) over buffer size (kB) for CUBIC and BBR (bottleneck bandwidth 10 Mbit/s, RTT 40 ms).
Bibliography
[1] N. Cardwell and Y. Cheng. Making Linux TCP Fast (04_Making_Linux_TCP_Fast_netdev_1.2_final). Netdev 1.2, 2016.
[2] N. Cardwell, Y. Cheng, S. H. Yeganeh, P. Jha, M. Mathis, and V. Jacobson. BBR v2: A Model-based Congestion Control (slides-104-iccrg-an-update-on-bbr-00). IETF 104, 2019.
[3] B. Jaeger, D. Scholz, D. Raumer, F. Geyer, and G. Carle. Reproducible measurements of TCP BBR congestion control. Computer Communications, 144:31-43, 2019.