Congestion Control In The Internet


SLIDE 1

Congestion Control In The Internet

JY Le Boudec Fall 2009

SLIDE 2

Plan of This Module

  • 1. Congestion control: theory
  • 2. Application to the Internet

SLIDE 3

Theory of Congestion Control

What you have to learn in this first part:

  • 1. What is the problem; congestion collapse
  • 2. Efficiency versus Fairness
  • 3. Forms of fairness
  • 4. Forms of congestion control
  • 5. Additive Increase Multiplicative Decrease (AIMD)

SLIDE 4

Inefficiency

The network may lose some packets. Assume you let users send as much as they want. Example 1: how much will S1 send to D1? S2 to D2?

(Figure: two flows, S1 to D1 and S2 to D2, over links with capacities C1 = 100 Kb/s, C2 = 1000 Kb/s, C3 = 110 Kb/s, C4 = 100 Kb/s, C5 = 10 Kb/s)

SLIDE 5

Solution

Both send 10 kb/s: inefficient! A better allocation is S1: 100 kb/s, S2: 10 kb/s. The problem was that S2 sent too much.

(Figure: the same network, with the inefficient allocation x41 = 10, x52 = 10)

SLIDE 6

Congestion Collapse

How much can node i send to its destination ?

(Figure: chain of links; source i enters at node i and uses link i, then link (i+1) to node i+1)

SLIDE 7

Solution

We can solve the symmetric case in closed form (all links and sources the same). If λ < c/2 there is no loss; else, losses occur (the closed-form expression is given on the slide).

(Figure: the same chain of links, annotated with the rates λi, λi′, λi′′)

SLIDE 8

SLIDE 9

SLIDE 10

This is congestion collapse! Take-home message:

Sources should limit their rates to adapt to the network condition; otherwise inefficiency or congestion collapse may occur. Congestion collapse means: as the offered load increases, the total throughput decreases.

SLIDE 11

Things Get Complicated ("Tout se complique")

A network should be organized so as to avoid inefficiency. However, being maximally efficient may be a problem. Example: what is the maximum throughput?

SLIDE 12

Solution

SLIDE 13

Take Home Message

Efficiency may be at the expense of fairness. What is fairness?

SLIDE 14

Definitions of Fairness

In simple cases, fairness means the same to all. This may lead to stupid decisions. Example:

SLIDE 15

Definitions of Fairness

A better allocation, as fair but more efficient, is shown on the slide. This is the max-min fair allocation for this example.

SLIDE 16

Max-Min Fairness

We say that an allocation is max-min fair if it satisfies the following criterion:

If we start from this allocation and increase the rate of a source s, then we must decrease the rate of some other source s′ that is not richer (xs′ ≤ xs)

SLIDE 17

Example

Are these allocations max-min fair ?

(Two candidate allocations, shown on the slide)

SLIDE 18

Answer

  • 1. No; I can increase x1 without modifying anyone
  • 2. Yes;

if I try to increase x0 I must decrease x2, and x2 ≤ x0
if I try to increase x1 I must decrease x0, and x0 ≤ x1
if I try to increase x2 I must decrease x0, and x0 ≤ x2

SLIDE 19

The Maths of Max-Min Fairness

Given a set of constraints for the rates

If it exists, the max-min fair allocation is unique. There exists one max-min fair allocation if the set of feasible rates is convex (this is the case for networks: we have linear constraints).

For a set of feasible rates as in our case (the sum of the rates on every link is upper bounded), the (unique) max-min fair allocation is obtained by water-filling.

SLIDE 20

Water-Filling Example

Step 1:

Give rate t to all sources. Increase t until t = c/10. Freeze the values for sources that use a link that is fully used: x0 = c/10, x2 = c/10.

Step 2:

Give rate t to all non-frozen sources (x1 = t). Increase t until t = 9c/10. Freeze the values for sources that use a link that is fully used: x1 = 9c/10.
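The water-filling procedure can be sketched in code. This is a generic sketch, not from the slides: `links` maps each link to its capacity, `sources` maps each source to the set of links it crosses, and the example topology (capacities c/5 and c, with c = 10) is a hypothetical one chosen so that it reproduces the two steps above.

```python
def max_min_fair(links, sources):
    """Water-filling: raise a common rate t for all non-frozen sources;
    when a link saturates, freeze every source that crosses it."""
    x = {s: 0.0 for s in sources}
    frozen = set()
    remaining = dict(links)  # residual capacity per link
    while len(frozen) < len(sources) and remaining:
        # number of non-frozen sources on each remaining link
        active = {l: sum(1 for s in sources if s not in frozen and l in sources[s])
                  for l in remaining}
        incs = [cap / active[l] for l, cap in remaining.items() if active[l] > 0]
        if not incs:
            break
        dt = min(incs)  # smallest increment that saturates some link
        for s in sources:
            if s not in frozen:
                x[s] += dt
        for l in list(remaining):
            remaining[l] -= dt * active[l]
            if remaining[l] <= 1e-9:  # link fully used: freeze its sources
                for s in sources:
                    if l in sources[s]:
                        frozen.add(s)
                del remaining[l]
    return x

# hypothetical topology reproducing the steps above with c = 10:
# x0 and x2 share a link of capacity c/5; x0 and x1 share a link of capacity c
alloc = max_min_fair({"A": 2.0, "B": 10.0},
                     {"x0": {"A", "B"}, "x1": {"B"}, "x2": {"A"}})
# alloc: x0 = x2 = c/10 = 1, x1 = 9c/10 = 9
```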

SLIDE 21

Proportional Fairness

Max-min fairness is the most egalitarian, yet efficient, allocation. Sometimes it is too egalitarian. Example: I sources, ni = 1; the max-min fair allocation is xi = c/2 for all. For I large, one might think that x0 should be penalized, as it uses more of the network. This is what proportional fairness does.

SLIDE 22

Definition of Proportional Fairness

Two ideas

Relative shares matter, not absolute ones
The effect is global

SLIDE 23

Example

Are these allocations proportionally fair?

(Two candidate allocations, shown on the slide)

SLIDE 24

Solution

  • 1. I can increase x2 alone, and the average rate of change is positive. The answer is: No.
  • 2. Let us try to decrease x0 by δ. This allows us to increase x1 by δ and x2 by δ/9. For δ small enough (δ ≤ 0.1), the allocation is feasible. The average rate of change is given on the slide.

SLIDE 25

The Maths of Proportional Fairness

Given a set of constraints for the rates that is convex:

The proportionally fair allocation exists and is unique. It is obtained by maximizing Σi ln(xi) over all feasible allocations.

SLIDE 26

Example

SLIDE 27

SLIDE 28

Utility Fairness

One can interpret proportional fairness as the allocation that maximizes a global utility. The utility of an allocation, to source i, is here the log of the rate. If we take some other utility function, we have what we call a utility fairness. Max-min fairness is the limit of utility fairness when the utility function converges to a step function.

U(x) = ln(x): proportional fairness
U(x) = 1 − (1/x)^m, m large ⇒ approximately max-min fairness
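To see both utilities at work, here is a small numeric sketch (not from the slides): a hypothetical parking-lot network where x0 crosses two links of capacity c, sharing one with x1 and the other with x2, so that x1 = x2 = c − x0. The total utility is maximized by simple grid search.

```python
import math

def argmax_rate(utility, c, steps=200000):
    """Maximize utility(x0) + 2 * utility(c - x0) over x0 in (0, c) by grid search."""
    best_x, best_u = None, -math.inf
    for i in range(1, steps):
        x0 = c * i / steps
        u = utility(x0) + 2 * utility(c - x0)
        if u > best_u:
            best_x, best_u = x0, u
    return best_x

c = 3.0
x_pf = argmax_rate(math.log, c)                      # proportional fairness: x0 ≈ c/3
x_mm = argmax_rate(lambda x: 1 - (1 / x) ** 50, c)   # m = 50: x0 ≈ c/2 (≈ max-min)
```

The log utility penalizes x0 for crossing two links (x0 ≈ c/3), while the steep utility with m = 50 drives the allocation toward equal shares (x0 ≈ c/2), illustrating the limit stated above.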

SLIDE 29

Take Home Message

Sources should adapt their rate to the state of the network in order to avoid inefficiencies and congestion collapse

This is called “congestion control”

A rate adaptation mechanism should target some form of fairness

E.g. max-min fairness or proportional fairness

SLIDE 30

How can congestion control be implemented?

SLIDE 31

Additive Increase Multiplicative Decrease

It is a congestion control mechanism that can be implemented end to end. It is the basis for what we have in the Internet. We explain it on a simple example.

SLIDE 32

A Simple Network Model

The network sends a one-bit feedback. Sources reduce their rate if y(t) = 1 and increase it otherwise. Question: what form of increase/decrease laws should one pick?

(Figure: sources send at rates xi(t); the network returns a one-bit feedback y(t))

SLIDE 33

Linear Laws

We consider linear laws:

if y(t) == 1 then xi(t+1) = u1 xi(t) + v1
if y(t) == 0 then xi(t+1) = u0 xi(t) + v0

We want to decrease when y(t) == 1, so u1 xi + v1 < xi. We want to increase when y(t) == 0, so u0 xi + v0 > xi.

SLIDE 34

Example

u1 = 0.5, v1 =0, u0 = 0, v0 = 1 (unit: Mb/s)
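Such laws can be plugged into a tiny simulation. The sketch below uses the AIMD case that the theory calls for (u0 = 1 with additive term v0, u1 = 1/2 with v1 = 0); the capacity, starting rates, and names are illustrative, not from the slides.

```python
def aimd_step(x, c, u1=0.5, v0=1.0):
    """One round of the linear laws with one-bit feedback y = 1 when congested."""
    if sum(x) > c:                      # y(t) == 1
        return [u1 * xi for xi in x]    # multiplicative decrease
    return [xi + v0 for xi in x]        # additive increase (u0 = 1)

x = [1.0, 8.0]                          # two sources, far from a fair allocation
for _ in range(200):
    x = aimd_step(x, c=10.0)
# multiplicative decrease shrinks the gap between sources while additive
# increase preserves it, so the rates converge toward equal shares
```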

SLIDE 35

Impact of Fairness

Does such a scheme converge to a fair allocation? Here max-min fair and proportionally fair are the same (i.e. same rate to all). The scheme may not converge, as sources may not be stationary, but we would like the scheme to increase fairness.

SLIDE 36

SLIDE 37

SLIDE 38

SLIDE 39

SLIDE 40

Example

SLIDE 41

Slow Start

AIMD’s fairness can be improved if we know that one source gets much less than some other

For example, if the initial condition is a small value, we can increase the rate of a source that we know is below its fair share more rapidly.

Slow Start is one algorithm for this

Set the initial value to the additive increase v0.
Increase the rate multiplicatively until a target rate is reached or negative feedback is received.
Apply multiplicative decrease to the target rate if negative feedback is received.
Exit slow start when the target rate is reached.

SLIDE 42

(Figure: 3 sources, u1 = 0.5, v1 = 0, u0 = 0, v0 = 0.01 (unit: Mb/s); the 3rd source starts with rate v0; rate vs time for all 3 sources, with and without Slow Start)

SLIDE 43

SLIDE 44

Plan of This Module

  • 1. Congestion control: theory
  • 2. Application to the Internet

SLIDE 45

Congestion Control in the Internet is in TCP

TCP is used to avoid congestion in the Internet

in addition to what was shown: a TCP source adjusts its window to the congestion status of the Internet (slow start, congestion avoidance); this avoids congestion collapse and ensures some fairness

TCP sources interpret losses as negative feedback

used to reduce the sending rate

UDP sources are a problem for the Internet

their use for long-lived sessions (e.g. RealAudio) is a threat of congestion collapse; UDP sources should imitate TCP: be "TCP friendly"

SLIDE 46

TCP Congestion Control is based on AIMD

TCP adjusts the window size (in addition to the offered window, i.e. the credit mechanism)

W = min (cwnd, OfferedWindow)

Principles of TCP Congestion Control

negative feedback = loss, positive feedback = ACK received
Additive Increase (1 MSS), Multiplicative Decrease (factor 0.5)
Slow start with increase factor = 2

Reaction to loss depends on nature of loss detection

Loss detected by timeout => slow start
Loss detected by fast retransmit or selective ACK => no slow start

SLIDE 47

A Trace

created from data from: IEEE Transactions on Networking, Oct. 95, “TCP Vegas”, L. Brakmo and L. Petersen

(Figure: twnd and cwnd in bytes vs time in seconds, with points A, B, C marked)

SLIDE 48

A Trace

(Figure: the same trace, annotated with slow start and congestion avoidance phases)

SLIDE 49

How AIMD is Approximated

Multiplicative decrease (on loss detection by timeout)

twnd = 0.5 × current_window
twnd = max (twnd, 2 × MSS)

Additive increase

for each ACK received

twnd = twnd + MSS × MSS / twnd (equivalent, in packets, to twnd = twnd + 1/twnd, i.e. one packet per RTT)
twnd = min (twnd, max-size) (64 KB)
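In code, the two updates can be sketched as follows (the MSS value and the function names are illustrative):

```python
MSS = 1460  # bytes (illustrative value)

def multiplicative_decrease(current_window):
    """On loss detected by timeout: halve the target window, floor at 2 MSS."""
    twnd = 0.5 * current_window
    return max(twnd, 2 * MSS)

def additive_increase(twnd, max_size=64 * 1024):
    """Per ACK: add MSS * MSS / twnd bytes, i.e. about one MSS per RTT."""
    twnd = twnd + MSS * MSS / twnd
    return min(twnd, max_size)

# one RTT's worth of ACKs (about twnd / MSS of them) adds about one MSS total:
w = 10 * MSS
for _ in range(10):
    w = additive_increase(w)
```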

SLIDE 50

How Multiplicative Increase (Slow Start) is Approximated

On each non-duplicate ACK received during slow start:

cwnd = cwnd + MSS (in bytes) (1)
if cwnd = twnd then transition to congestion avoidance

(1) is equivalent, in packets, to cwnd = cwnd + 1
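A sketch of this per-ACK update (names and values illustrative): since every acknowledged segment adds one MSS, cwnd doubles each round-trip.

```python
MSS = 1460  # bytes (illustrative value)

def slow_start_on_ack(cwnd, twnd):
    """One non-duplicate ACK during slow start."""
    cwnd = cwnd + MSS               # (1): add one MSS per ACK
    in_slow_start = cwnd < twnd     # reaching twnd -> congestion avoidance
    return cwnd, in_slow_start

# starting from one segment, cwnd doubles every RTT:
cwnd, rtts = float(MSS), 0
while cwnd < 32 * MSS:
    acked = int(cwnd // MSS)        # one ACK per segment of the current window
    for _ in range(acked):
        cwnd, _ = slow_start_on_ack(cwnd, twnd=64 * 1024)
    rtts += 1
# after 5 round-trips cwnd has grown from 1 MSS to 32 MSS
```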

SLIDE 51

AIMD and Slow Start are Implemented as a Finite State Machine

“Congestion Avoidance” = phase of additive increase
“Slow start” = phase of slow start, as in the theory part
Multiplicative decrease is implemented as a state transition

(Figure: state machine with slow start and congestion avoidance states; additive increase inside congestion avoidance, multiplicative decrease on the loss transitions)

SLIDE 52

Slow Start and Congestion Avoidance

Slow Start: exponential increase for cwnd until cwnd = twnd.
Congestion Avoidance: additive increase for twnd, with cwnd = twnd.

Retransmission timeout (from either state): multiplicative decrease for twnd, cwnd = 1 segment, go back to Slow Start.

Connection opening: twnd = 65535 B, cwnd = 1 segment, start in Slow Start.

Notes: this shows only 2 states out of 3; twnd = target window.

SLIDE 53

Fast Recovery

Slow start is used when we assume that the network condition is new

initial phase; a major change, guessed from timeout expiration

In all other packet-loss detection events, slow start is not used; "fast recovery" is used instead

the target window is halved, but a temporary window-size increase is allowed to help recover the losses

when a loss occurs, the sending rate is no longer approximated by W/RTT

SLIDE 54

Fast Retransmit

Fast Retransmit triggers Fast Recovery at time t1

assume cwnd = 4 when P4 is sent; fast recovery increases the window and allows sending P7; at time t2, cwnd is set to 2

(Figure: sender emits P1–P7; one segment is lost (×) and P3 is retransmitted after duplicate ACKs A2; t1 marks the fast retransmit, t2 the exit of recovery)

SLIDE 55

Fast Recovery Details

Multiplicative decrease:

twnd = 0.5 × current_size
twnd = max (twnd, 2 × MSS)

Fast Recovery:

cwnd = twnd + 3 × MSS (exp. increase)
cwnd = min (cwnd, 64 KB)
retransmission of the missing segment (n)

For each duplicate ACK:

cwnd = cwnd + MSS (exp. increase)
cwnd = min (cwnd, 64 KB)
send following segments
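These steps can be sketched as follows (the 64 KB cap is from the slide; MSS value and function names are illustrative):

```python
MSS = 1460            # bytes (illustrative value)
MAX_WIN = 64 * 1024   # 64 KB cap from the slide

def enter_fast_recovery(current_size):
    """Triggered by fast retransmit (3 duplicate ACKs)."""
    # multiplicative decrease of the target window
    twnd = max(0.5 * current_size, 2 * MSS)
    # window inflation: 3 duplicate ACKs mean 3 segments have left the network
    cwnd = min(twnd + 3 * MSS, MAX_WIN)
    return twnd, cwnd   # and retransmit the missing segment

def on_duplicate_ack(cwnd):
    """Each further duplicate ACK signals one more segment delivered."""
    return min(cwnd + MSS, MAX_WIN)

twnd, cwnd = enter_fast_recovery(10 * MSS)   # twnd = 5 MSS, cwnd = 8 MSS
cwnd = on_duplicate_ack(cwnd)                # cwnd = 9 MSS
```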

SLIDE 56

Fast Recovery Example

(Figure: twnd and cwnd in bytes vs time in seconds; phases A–B and E–F are fast recovery, C–D is slow start)

SLIDE 57

TCP Congestion Control Has Three States

States:

Slow Start: exponential increase
Congestion Avoidance: additive increase
Fast Recovery: exponential increase beyond twnd

Transitions:

new connection → Slow Start
cwnd = twnd → Congestion Avoidance
retransmission timeout (from any state) → Slow Start
fast retransmit → Fast Recovery
expected ACK received → Congestion Avoidance
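The three states and their transitions can be combined into one sketch; the class and method names are hypothetical, and the update rules follow the earlier slides.

```python
class TcpCongestionFsm:
    """Sketch of the 3-state machine (names hypothetical)."""

    def __init__(self, mss=1460, max_win=64 * 1024):
        self.mss, self.max_win = mss, max_win
        self.twnd = max_win              # target window at connection opening
        self.cwnd = mss                  # 1 segment
        self.state = "slow_start"

    def on_ack(self):
        if self.state == "slow_start":
            self.cwnd = min(self.cwnd + self.mss, self.max_win)   # exponential
            if self.cwnd >= self.twnd:
                self.state = "congestion_avoidance"
        elif self.state == "congestion_avoidance":
            self.cwnd = min(self.cwnd + self.mss ** 2 / self.cwnd,
                            self.max_win)                         # additive
        else:  # fast_recovery: an expected (non-duplicate) ACK ends recovery
            self.cwnd = self.twnd
            self.state = "congestion_avoidance"

    def on_timeout(self):
        self.twnd = max(0.5 * self.cwnd, 2 * self.mss)   # multiplicative decrease
        self.cwnd = self.mss
        self.state = "slow_start"

    def on_fast_retransmit(self):
        self.twnd = max(0.5 * self.cwnd, 2 * self.mss)
        self.cwnd = self.twnd + 3 * self.mss             # window inflation
        self.state = "fast_recovery"
```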

SLIDE 58

Fairness of TCP

TCP implements some approximation of AIMD. AIMD is designed to be approximately fair in a single-link scenario. Q: what happens in a network? Does TCP distribute rates according to max-min fairness or proportional fairness?

SLIDE 59

Fairness of TCP

A: TCP tends to distribute rates so as to maximize a utility of each source (given on the slide), with xi = rate and τi = RTT for source i

(proof: see lecture notes)

Assume all sources have the same RTT. For rates that are not too small and not too large, this is close to proportional fairness (but a little closer to max-min fairness).

SLIDE 60

TCP Bias Against Large RTTs

TCP tends to distribute rates so as to maximize the utility above. For sources with different RTTs, there is a bias.

SLIDE 61

Fairness of TCP

Example network with two TCP sources; link capacities and delays as shown; limited queues on the link (8 segments); NS simulation.

(Figure: S1 and S2 reach the router over 10 Mb/s links with 20 ms and 60 ms delay; the router feeds the destination over a 1 Mb/s, 10 ms link; queues of 8 segments)

SLIDE 62

Throughput in time

(Figure: ACK numbers vs time for S1 and S2)

SLIDE 63

Bias of TCP Against Large RTTs

A source that uses many hops obtains less because

it uses more resources (≈ proportional fairness) – desired bias
it has a longer RTT – undesired bias

The cause is that the additive increase is one packet per RTT.

SLIDE 64

AQM

By default, routers drop packets when buffers are full

this is called “Tail Drop”

Issues

buffers fill, then drop -> buffers are full most of the time

this produces large average delays; large queuing delay => large RTT => smaller TCP throughput

it also produces very irregular behaviour, but buffers are needed to absorb bursts

Solution

active queue management (AQM)

drop packets before buffer is full, based on estimation of traffic load

SLIDE 65

RED (Random Early Detection)

RED is the first AQM scheme proposed and, today, the only one in use. Principles:

the queue estimates its average queue length: avg ← a × measured + (1 − a) × avg

an incoming packet is dropped with a probability that depends on avg; roughly speaking, the drop probability is read from the curve below, plus a uniformization procedure
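The two RED ingredients can be sketched as follows (the threshold and probability values are illustrative, not from the slides):

```python
def red_avg(avg, measured, a=0.002):
    """Exponentially weighted moving average of the queue length."""
    return a * measured + (1 - a) * avg

def red_drop_prob(avg, th_min=5.0, th_max=15.0, max_p=0.02):
    """Piecewise-linear curve: 0 below th_min, rising to max_p at th_max, then 1."""
    if avg < th_min:
        return 0.0
    if avg >= th_max:
        return 1.0
    return max_p * (avg - th_min) / (th_max - th_min)
```

The small weight a makes avg track the long-term load rather than instantaneous bursts, which is why RED can absorb bursts while still signalling persistent congestion early.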

(Figure: drop probability q vs avg: 0 below th-min, linear up to max-p at th-max, then 1)

SLIDE 66

Example network for RED

Example network with three TCP sources; different link delays; limited queues on the link (20 packets).

(Figure: S1, S2 and S3 reach the router over 2 Mb/s links with 10 ms, 60 ms and 100 ms delay; the router feeds the destination over a 2 Mb/s, 100 ms link; queues of 20 segments)

SLIDE 67

Throughput vs Time, with Tail Drop

(Figure: ACK numbers vs time for S1, S2, S3)

SLIDE 68

Throughput vs Time, with RED

(Figure: ACK numbers vs time for S1, S2, S3)

RED makes the buffer closer to a fluid queuing system.

SLIDE 69

TCP Loss - Throughput Formula

Consider a large TCP connection (many bytes to transmit). Assume we observe that, on average, a fraction q of packets is lost (or marked with ECN). Can we say something about the expected throughput of this connection?

SLIDE 70

TCP Loss - Throughput Formula

TCP connection with:

round-trip time T
segment size L
average packet loss ratio q
constant C = 1.22

Then the throughput is approximately θ = C × L / (T × √q), assuming the transmission time is negligible compared to the RTT, losses are rare, and the time spent in Slow Start and Fast Recovery is negligible.
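This is the classical loss-throughput formula with the slide's constant C = 1.22; a quick numeric check (the example values are illustrative):

```python
import math

def tcp_throughput(L, T, q, C=1.22):
    """Loss-throughput formula: theta ≈ C * L / (T * sqrt(q)), in bytes/s."""
    return C * L / (T * math.sqrt(q))

# e.g. L = 1460 B, T = 100 ms, q = 1% -> about 178 kB/s
rate = tcp_throughput(1460, 0.1, 0.01)
```

Note the 1/√q dependence: to double the throughput at a fixed RTT, the loss ratio must be divided by four.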

SLIDE 71

Example

Q. Guess the ratio between the throughputs θ1 and θ2 of S1 and S2.

(Figure: the same two-source network as on slide 61; solution on slide 76)

SLIDE 72

TCP Friendly Applications

All TCP/IP applications that generate long lived flows should mimic the behavior of a TCP source

RTP/UDP flow sending video/audio data

Adaptive algorithm

the application determines the sending rate
feedback: amount of lost packets, loss ratio
sending rate = rate of a TCP flow experiencing the same loss ratio

SLIDE 73

Facts to remember

73

TCP performs congestion control in end-systems

sender increases its sending window until loss occurs, then decreases

additive increase (no loss), multiplicative decrease (loss)

TCP states

slow start, congestion avoidance, fast recovery

Negative bias towards long round-trip times. UDP applications should behave like TCP with the same loss rate.

SLIDE 74

Congestion Control: Summary

Congestion control aims at avoiding congestion collapse in the network. With TCP/IP, it is performed in end-systems, mainly with TCP.

TCP congestion control summary:

Principle: the sender increases its sending window until losses occur, then decreases the target window: additive increase (no loss), multiplicative decrease (loss).

3 phases:

slow start: starts with 1 segment, exponential increase up to twnd
congestion avoidance: additive increase until loss
fast recovery: fast retransmission of one segment

Slow start is entered at setup or after a retransmission timeout; fast recovery is entered at fast retransmit.

SLIDE 75

Solutions

SLIDE 76

Example

Q. Guess the ratio between the throughputs θ1 and θ2 of S1 and S2.

A. If processing time is negligible and the router drops packets in the same proportion for all flows, then throughput is proportional to 1/RTT, thus θ1/θ2 = RTT2/RTT1.
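With the delays from the figure and queueing ignored, the ratio can be estimated numerically (a rough sketch; one-way propagation delays taken from the figure, transmission times neglected):

```python
# one-way propagation delays in ms: access links 20 ms (S1) and 60 ms (S2),
# shared bottleneck link 10 ms; queueing and transmission ignored
rtt1 = 2 * (20 + 10)    # S1: 60 ms
rtt2 = 2 * (60 + 10)    # S2: 140 ms

# throughput proportional to 1/RTT:
ratio = rtt2 / rtt1     # theta1 / theta2 ≈ 2.33
```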
