SLIDE 1

TCP Congestion Control Beyond Bandwidth-Delay Product for Mobile Cellular Networks

Wai Kay Leong, Zixiao Wang, Ben Leong

The 13th International Conference on emerging Networking EXperiments and Technologies

SLIDE 2

CoNEXT ’17, Incheon/Seoul, South Korea

Mobile Internet usage exceeds Desktop

SLIDE 3

What's different about cellular?

Negligible random packet losses

  • Hybrid ARQ scheme
  • As compared to 802.11 Wi-Fi

Large buffers

  • In the Megabytes

Asymmetric uplink/downlink

  • ACK delay

Fair scheduling at “base station”

  • No contention with other users

SLIDE 4

Why Traditional TCP does not work

IN MOBILE/CELLULAR NETWORKS

SLIDE 5

  • 1. Large/deep buffers

Deep buffers cause high latency (hundreds of ms)

[Figure: congestion window over time for CUBIC and Reno; buffer overflow, packets in flight (in the buffer), and actual RTT marked]

Lack of congestion signal

SLIDE 6

  • 2. Uplink Congestion

More predominant in slower 3G/HSPA networks. ACKs get delayed on the return uplink

  • Stuck in a deep buffer/high volume of users
  • Server is prevented from sending new packets even though the downlink is clear

[Figure: data flows on the downlink while ACKs queue on the congested uplink, leaving the downlink idle]

SLIDE 7

Rethink congestion control for mobile networks

Traditional TCP congestion control

  • Lack of congestion signal (ECN not popular)
  • Long delay/high latency (CUBIC)
  • ACK clocked

Rise in new mobile TCP algorithms

  • Sprout
  • Verus
  • PCC
  • BBR

SLIDE 8

Key Idea

Purpose of congestion control is to

  • avoid congestion
  • find the correct rate to send packets
  • ideally keep 1×BDP packets in transit

Why not just send at the correct rate?

  • Varying conditions of mobile networks
  • Try to forecast the condition (Sprout, PROTEUS, Verus, etc.)
  • Try to build a model (PCC, Remy)

Our Insight: Timely estimation of the bandwidth plus quick reaction to new network conditions is sufficient

SLIDE 9

Our Approach

Abandon ACK clocking: pure rate-based sending of packets

  • 1. Estimate current bandwidth/receive rate
  • 2. Send packets at the estimated rate
  • 3. Observe buffer delay
  • 4. Update send rate
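As a sanity check, the four steps can be turned into a toy control loop against a fixed-bandwidth fluid link (a sketch only: the unit time step and the gains `k_f = 1.25`, `k_d = 0.75` are illustrative assumptions, not PropRate's tuned values):

```python
def rate_loop(bandwidth, threshold, steps=100, k_f=1.25, k_d=0.75):
    """Toy fixed-bandwidth link driven by the four steps:
    estimate rate -> send at that rate -> observe buffer delay -> update.
    One tick = one time unit; queuing delay = queue / bandwidth."""
    queue, rate, delays = 0.0, k_f * bandwidth, []
    for _ in range(steps):
        queue = max(0.0, queue + rate - bandwidth)   # send; link drains at capacity
        t_buff = queue / bandwidth                   # observed buffer delay
        # update: drain if delay exceeds the threshold, else fill
        rate = (k_d if t_buff > threshold else k_f) * bandwidth
        delays.append(t_buff)
    return delays

delays = rate_loop(bandwidth=100.0, threshold=0.5)
# the queuing delay settles into a small oscillation around the threshold
```

Even this toy version shows the key behaviour: with no losses and no cwnd, the observed buffer delay alone steers the send rate around the link capacity.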

Takes advantage of the large buffer. Congestion with other users is mitigated by fair scheduling at the base station

SLIDE 10

We need a means to

  • 1. Estimate the bandwidth/receive rate
  • 2. Detect congestion by measuring one-way delay

Make use of TCP timestamp option

  • Enabled by default on most servers and phones

SLIDE 11

Estimating Receive Rate

The receiver sends an ACK when a packet is received. Each ACK carries a timestamp. Compute the rate by

  • comparing timestamps: tr1 – tr0 = Δt
  • and bytes ACKed:

ΔACK/Δt = ρ

[Figure: sender–receiver timeline; successive ACKs carry TSval = tr0 and TSval = tr1, Δt apart]
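The computation on this slide can be sketched directly. The `(TSval, cumulative bytes ACKed)` pairing below is an assumed representation; a real sender would smooth ρ over a window of ACKs rather than use just two:

```python
def receive_rate(first_ack, last_ack):
    """rho = delta_ACK / delta_t, using the receiver's own TSval clock,
    so the sender's clock and the one-way delay drop out.
    Each ACK is a (ts_val_ms, cumulative_bytes_acked) pair."""
    t0, b0 = first_ack
    t1, b1 = last_ack
    dt = t1 - t0                  # receiver time elapsed (ms)
    if dt <= 0:
        return None               # need two distinct receiver timestamps
    return (b1 - b0) / dt         # bytes per millisecond

rho = receive_rate((1000, 0), (1010, 15000))   # 15000 B ACKed over 10 ms
# rho == 1500.0 bytes/ms
```

Because both timestamps come from the receiver's clock, the estimate reflects the rate at which packets actually arrived, not the rate at which ACKs reached the sender.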

SLIDE 12

Estimating Buffer/Queuing Delay

Only relative increase/decrease of tbuff matters

Relative delay: RD = tack – tsnd
Queuing delay: tbuff = RD – RDmin

[Figure: sender–receiver timeline with send time tsnd and ACK timestamp tack; the minimum relative delay observed gives RDmin]
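A minimal sketch of this estimator follows (the millisecond clock values in the usage lines are made up). Only the change in relative delay matters, since the unknown clock offset between the two hosts cancels in RD – RDmin:

```python
class QueuingDelayEstimator:
    """Sketch of the slide's relative-delay trick: the sender and receiver
    clocks are not synchronized, so RD = t_ack - t_snd is offset by an
    unknown constant, but tbuff = RD - RD_min cancels that offset."""

    def __init__(self):
        self.rd_min = None

    def update(self, t_snd, t_ack):
        rd = t_ack - t_snd                       # relative delay (unknown offset)
        if self.rd_min is None or rd < self.rd_min:
            self.rd_min = rd                     # smallest RD seen so far
        return rd - self.rd_min                  # queuing delay tbuff

est = QueuingDelayEstimator()
est.update(0, 1050)    # first sample defines RD_min, so tbuff = 0
est.update(10, 1075)   # RD grew by 15, so tbuff = 15
```

This is why the slide stresses that only the relative increase or decrease of tbuff matters: the absolute one-way delay is never known.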

SLIDE 13

Putting it together

  • Estimate receive rate/bandwidth
  • Detect congestion from queuing delay
  • Pure rate-based mechanism

SLIDE 14

Self-oscillating feedback loop

1. Slow Start: send a burst of 10 packets; estimate the bandwidth (ρ)
2. Buffer Fill State: send faster than the bandwidth (σf > ρ); an increase in queuing delay past the threshold (tbuff > T) means the link is congested
3. Buffer Drain State: send slower than the bandwidth (σd < ρ); a decrease in queuing delay below the threshold (tbuff < T) means no congestion

ρ and tbuff are constantly updated
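One step of this self-oscillating loop can be sketched as a small state machine (the gains `k_f` and `k_d` and the slow-start handling are illustrative assumptions; the paper derives the actual transition rules and values):

```python
SLOW_START, FILL, DRAIN = range(3)

def control_step(state, rho, t_buff, threshold, k_f=1.25, k_d=0.75):
    """One step of the self-oscillating feedback loop (sketch).
    rho and t_buff are assumed freshly re-estimated before every step.
    Returns (next_state, send_rate)."""
    if state == SLOW_START:
        # after the initial 10-packet burst yields a rho estimate, start filling
        return FILL, k_f * rho
    if t_buff > threshold:            # queuing delay above T: link congested
        return DRAIN, k_d * rho       # sigma_d < rho, so the buffer drains
    return FILL, k_f * rho            # below T: no congestion, refill

state, rate = control_step(FILL, rho=100.0, t_buff=20.0, threshold=10.0)
# t_buff above threshold: switch to DRAIN and send at k_d * rho
```

Because the two states deliberately over- and under-shoot the estimated bandwidth, the queuing delay oscillates around T instead of converging, which is what keeps the feedback signal alive.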

SLIDE 15

Packet Trace, aka Sawtooth

[Figure: packets in buffer/queuing delay vs. time, alternating between the Buffer Fill and Buffer Drain states around threshold T]

Takes at least 1×RTT to get feedback on the queuing delay. Latency oscillates between the peaks and troughs.

SLIDE 16

PropRate

Sending rate is a proportion of bandwidth/receive rate

  • σf = kfρ
  • σd = kdρ

Three parameters control the sawtooth

  • kf – proportion to fill buffer
  • kd – proportion to drain buffer
  • T – threshold for switching state
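Under a simple fluid model (our assumption, not from the slides), the queue grows at (kf − 1)ρ in the fill state and shrinks at (1 − kd)ρ in the drain state. Averaging the send rate over one fill/drain cycle then shows it equals ρ whenever the buffer never empties, regardless of the particular kf and kd:

```python
def avg_send_rate(rho, k_f, k_d, amplitude=1.0):
    """Average send rate over one fill/drain cycle of the sawtooth,
    assuming a linear fluid model and a buffer that never empties.
    The fill phase lasts A / ((k_f - 1) * rho) and the drain phase
    A / ((1 - k_d) * rho), where A is the sawtooth amplitude."""
    t_fill = amplitude / ((k_f - 1.0) * rho)
    t_drain = amplitude / ((1.0 - k_d) * rho)
    sent = k_f * rho * t_fill + k_d * rho * t_drain
    return sent / (t_fill + t_drain)

avg_send_rate(100.0, k_f=1.25, k_d=0.75)   # ~100.0: the link stays fully utilized
```

This is the intuition behind the later slides: kf, kd, and T reshape the sawtooth (and hence the average latency) without sacrificing throughput, as long as the buffer is never allowed to empty.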

SLIDE 17

Parameters

By adjusting the parameters kf, kd, and T, we can change the shape of the sawtooth.

[Figure: packets in buffer/queuing delay vs. time; threshold T and average latency marked]


SLIDE 21

Parameters

By adjusting the parameters kf, kd, and T, we can change the shape of the sawtooth. Throughput is maximized because the buffer is always filled.

[Figure: packets in buffer/queuing delay vs. time; threshold T and average latency marked]

SLIDE 22

Parameters

Throughput is maximized because the buffer is always filled. The average latency can be adjusted.

[Figure: packets in buffer/queuing delay vs. time; threshold T and average latency marked]


SLIDE 24

Two optimization modes

Optimizing for Throughput

  • Buffer to be kept filled
  • Implies maximum throughput
  • Latency suffers due to queuing delay

Optimizing for Latency

  • Buffer needs to be emptied
  • Reduced utilization → reduced throughput
  • More responsive latencies

[Figure: two sawtooths of queuing delay vs. time; threshold T and average latency marked for each mode]

SLIDE 25

Please read our paper

Parameter tuning

  • Specify a target latency to set the parameters

Updating Threshold

  • Due to network volatility

Some math

Please read it! (읽으십시오)

SLIDE 26

Evaluation


SLIDE 27

Performance Evaluation

  • 1. Compare with other TCP protocols
      • Traditional TCP: CUBIC, Vegas, Westwood, LEDBAT
      • State-of-the-art mobile: Sprout, PCC, Verus, BBR
  • 2. Delayed ACK/saturated uplink
  • 3. Throughput vs. delay tradeoff
  • 4. Fairness/contention
  • 5. Computation overhead

Two Scenarios

  • 1. Emulated networks
  • 2. Real cellular networks

Three flavours of PropRate

  • Low, Medium, High
  • Plus Frontier: enumerate the parameter space

SLIDE 28

Trace-based Emulation

Keep the network constant for a fair comparison: the Cellsim emulator (from MIT) replays actual network traces

  • Three local cellular ISPs
  • Two scenarios: stationary (in our lab) and mobile (on a bus)
  • MIT traces [Winstein et al.]

[Figure: mobile ↔ Cellsim ↔ server over wired links; Cellsim replays recorded uplink/downlink traces]

SLIDE 29

Results – Local ISP, Stationary

[Scatter plot: throughput vs. latency; regions marked "good throughput/bad latency" and "good latency/bad throughput"]

SLIDE 30

Results – Local ISP, Mobile

[Scatter plot: throughput vs. latency; regions marked "good throughput/bad latency" and "good latency/bad throughput"]

SLIDE 31

Results – Real LTE Network


SLIDE 32

Results

PropRate achieves a better throughput–latency tradeoff than other TCP variants

  • Achieves higher throughput
  • or, lower latency


SLIDE 33

Congested/ Saturated Uplink

[Figure: server ↔ smartphone over LTE with a congested uplink; the mobile client is connected via USB tether]

SLIDE 34

Congested Uplink – Real LTE Network


SLIDE 35

Results

PropRate achieves a better throughput–latency tradeoff than other TCP variants

  • Achieves higher throughput
  • or, lower latency

Decoupling ACK clocking improves resilience

  • To asymmetric links


SLIDE 36

Performance Frontiers


SLIDE 37

Results

PropRate achieves a better throughput–latency tradeoff than other TCP variants

  • Achieves higher throughput
  • or, lower latency

Decoupling ACK clocking improves resilience

  • To asymmetric links

The frontier hull shows that PropRate always lies on the performance frontier


SLIDE 38

Fairness – Self Contention


SLIDE 39

Fairness – Contention from others


SLIDE 40

Results

PropRate achieves a better throughput–latency tradeoff than other TCP variants

  • Achieves higher throughput
  • or, lower latency

Decoupling ACK clocking improves resilience

  • Towards asymmetric links

The frontier hull shows that PropRate always lies on the performance frontier. PropRate can compete with CUBIC flows.


SLIDE 41

Whither the future?

Resurgence in interest in TCP

  • Different emergent networks: Datacenter, Wi-Fi, Cellular, etc.

Traditional TCP: CUBIC/Compound

  • Floods the buffer → increased latency

Delay-based algorithms: Vegas, Westwood, etc.

  • Good latency
  • Starved by CUBIC


SLIDE 42

Is TCP ready for rate-based algorithms?

Pure rate-based algorithms: PropRate & BBR

  • Handle bufferbloat
  • Compete well against CUBIC

Can co-exist with CUBIC

  • Facilitate transition to better rate-based TCP algorithms

PropRate builds on a framework

  • More optimal algorithms in the future?
  • Better integration with TCP stack in the future?


SLIDE 43

Thank You

QUESTIONS?

Thank you! (감사합니다)