TCP Pacing in Data Center Networks
Monia Ghobadi, Yashar Ganjali (PowerPoint PPT Presentation)


SLIDE 1

TCP Pacing in Data Center Networks

Monia Ghobadi, Yashar Ganjali
Department of Computer Science, University of Toronto
{monia, yganjali}@cs.toronto.edu

SLIDE 6

TCP, Oh TCP!

๏ TCP congestion control
๏ Focus on evolution of cwnd over RTT.
๏ Damages
๏ TCP pacing
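The contrast the deck draws between ack-clocked bursts and paced transmission can be sketched in a few lines. This is an illustrative model only (the function names are mine, not from the talk): a paced sender spaces its congestion window evenly across one RTT, while a non-paced sender releases the whole window back-to-back.

```python
def pacing_schedule(cwnd_pkts, rtt_s, now_s=0.0):
    """Paced sender: spread cwnd evenly over one RTT.
    Inter-packet gap = RTT / cwnd."""
    gap = rtt_s / cwnd_pkts
    return [now_s + i * gap for i in range(cwnd_pkts)]

def burst_schedule(cwnd_pkts, now_s=0.0):
    """Non-paced sender: the whole window leaves (roughly) at once."""
    return [now_s] * cwnd_pkts
```

For cwnd = 4 and RTT = 1 s, the paced schedule is [0.0, 0.25, 0.5, 0.75] while the bursty one is four packets at time 0; the "damages" the deck alludes to come from those back-to-back bursts hitting small switch buffers.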

SLIDE 9

The Tortoise and the Hare: Why Bother Pacing?

๏ Renewed interest in pacing for the data center environment
๏ Small buffer switches
๏ Small round-trip times
๏ Disparity between the total capacity of the network and the capacity of individual queues
๏ Focus on tail latency caused by short-term unfairness in TCP

SLIDE 13

TCP Pacing’s Potential

๏ Better link utilization on small switch buffers
๏ Better short-term fairness among flows of similar RTTs:
๏ Improves worst-flow latency
๏ Allows slow-start to be circumvented
๏ Saving many round-trip times
๏ May allow a much larger initial congestion window to be used safely

SLIDE 19

Contributions

๏ Effectiveness of TCP pacing in data centers.
๏ Benefits of using paced TCP diminish as we increase the number of concurrent connections beyond a certain threshold (Point of Inflection).
๏ Inconclusive results in previous works.
๏ Inter-flow bursts.
๏ Test-bed experiments.

SLIDE 30

Inter-flow Bursts

๏ C: bottleneck link capacity
๏ Bmax: buffer size
๏ N: long-lived flows
๏ W: packets sent every RTT, in a paced or non-paced manner
๏ X: inter-flow burst

[Figure: packet arrivals on a timeline marked 0, RTT, 2RTT, 3RTT]
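One way to make X concrete: treat an inter-flow burst as a run of packets, possibly from many different flows, that arrive at the bottleneck faster than the link can drain them. This reading and the function below are my own illustration, not necessarily the paper's exact definition.

```python
def inter_flow_burst(arrivals, service_time):
    """Size of the largest inter-flow burst X: the longest run of
    packets (counting ALL flows together) whose inter-arrival gaps
    are smaller than the bottleneck's per-packet service time.
    `arrivals` is a list of (time_s, flow_id), sorted by time."""
    if not arrivals:
        return 0
    best = cur = 1
    for (t0, _), (t1, _) in zip(arrivals, arrivals[1:]):
        cur = cur + 1 if (t1 - t0) < service_time else 1
        best = max(best, cur)
    return best
```

Even when every individual flow is paced, the send times of many flows can align, so such aggregate runs still form; this is why the benefit of pacing depends on N.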

SLIDE 37

Modeling

[Figure: packet spacing over one RTT, best case of non-paced vs. worst case of paced]
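The best-case non-paced vs. worst-case paced comparison can be explored with a toy FIFO bottleneck model. This is my own sketch, not the paper's model: packets depart one service time apart, and we track the peak backlog a buffer would have to absorb.

```python
def peak_queue(arrivals, service_time):
    """Toy FIFO bottleneck: packet i departs at
    d[i] = max(arrival[i], d[i-1]) + service_time.
    Returns the peak number of packets in the system at any arrival."""
    arrivals = sorted(arrivals)
    departures, peak, last = [], 0, 0.0
    for a in arrivals:
        last = max(a, last) + service_time
        departures.append(last)
        backlog = sum(1 for d in departures if d > a)  # still queued/serving
        peak = max(peak, backlog)
    return peak
```

With one flow of 8 packets and a 10 ms service time, a synchronized burst (`[0.0] * 8`) drives the peak backlog to 8 packets, while paced arrivals spaced 12.5 ms apart keep it at 1; as more flows are overlaid, the paced schedules collide and the advantage shrinks, which is the intuition behind the point of inflection.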

SLIDE 42

Experimental Studies

๏ Flows of sizes 1, 2, and 3 MB between servers and clients
๏ Bottleneck bandwidth: 1, 2, and 3 Gbps
๏ RTT: 1 to 100 ms
๏ Metrics: bottleneck utilization, drop rate, average and tail FCT

SLIDE 55

Base-Case Experiment:
One flow vs. two flows, 64KB of buffering, Utilization/Drop/FCT

[Figure: bottleneck link utilization (Mbps) over time (sec), paced vs. non-paced; annotations: "No congestion", "Congestion", "38%"]

[Figure: CDFs of flow completion time (sec), paced vs. non-paced; annotations: "1RTT", "2RTTs"]

SLIDE 61

Multiple flows: Link Utilization/Drop/Latency
Buffer size 1.7% of BDP, varying number of flows

[Figures: bottleneck link utilization (Mbps), drop rate (%), average FCT (sec), and 99th percentile FCT (sec) vs. number of flows, paced vs. non-paced; point of inflection marked N*]

• Once the number of concurrent connections increases beyond a certain point, the benefits of pacing diminish.

SLIDE 64

Multiple flows: Link Utilization/Drop/Latency
Buffer size 3.4% of BDP, varying number of flows

[Figures: utilization, drop rate, average FCT, and 99th percentile FCT vs. number of flows, paced vs. non-paced; point of inflection marked N*]

• Aggarwal et al.: Don’t pace!
  • 50 flows, BDP 1250 packets, buffer size 312 packets
  • N* = 8 flows
• Kulik et al.: Pace!
  • 1 flow, BDP 91 packets, buffer size 10 packets
  • N* = 9 flows
SLIDE 70

N* vs. Buffer

[Figures: bottleneck link utilization (Mbps), drop rate (%), average CT (sec), and 99th percentile CT (sec) vs. buffer size (KB), paced vs. non-paced]

SLIDE 73

Clustering Effect:
The probability of packets from a flow being followed by packets from other flows

Non-paced: Packets of each flow are clustered together.
Paced: Packets of different flows are multiplexed.
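The clustering vs. multiplexing contrast can be quantified directly on a bottleneck packet trace. A simple sketch (the naming is mine): the fraction of consecutive packet pairs that belong to different flows.

```python
def interleaving_prob(flow_sequence):
    """Probability that a packet is followed by a packet from a
    DIFFERENT flow, given the sequence of flow ids observed at the
    bottleneck. High values indicate multiplexing (paced traffic);
    low values indicate per-flow clustering (non-paced traffic)."""
    if len(flow_sequence) < 2:
        return 0.0
    switches = sum(1 for a, b in zip(flow_sequence, flow_sequence[1:])
                   if a != b)
    return switches / (len(flow_sequence) - 1)
```

A clustered trace like `aabb` scores 1/3, while a fully multiplexed `abab` scores 1.0.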

SLIDE 77

Drop Synchronization:
Number of Flows Affected by Drop Event

NetFPGA router used to count the number of flows affected by drop events.

[Figure: CDFs of the number of flows affected by a drop event, paced vs. non-paced, for N = 48, 96, and 384]
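The measurement boils down to grouping drops into events and counting distinct flows per event. A sketch of that bookkeeping (grouping drops by a time window is my assumption, not necessarily how the NetFPGA counter works):

```python
def flows_affected_per_event(drops, window):
    """Group packet drops (time_s, flow_id) into drop events: drops
    separated by at most `window` seconds belong to the same event.
    Returns, per event, the number of distinct flows that lost at
    least one packet."""
    drops = sorted(drops)
    events, cur_flows, last_t = [], set(), None
    for t, f in drops:
        if last_t is not None and t - last_t > window:
            events.append(len(cur_flows))  # close the previous event
            cur_flows = set()
        cur_flows.add(f)
        last_t = t
    if cur_flows:
        events.append(len(cur_flows))
    return events
```

Under paced traffic, where packets of many flows are multiplexed, a single congestion episode tends to hit many flows at once, which is what the CDFs above visualize.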

SLIDE 83

Future Trends for Pacing:
Per-egress pacing

[Figures: bottleneck link utilization (Mbps), drop rate (%), average RCT (sec), and 99th percentile RCT (sec) vs. number of flows, comparing per-flow paced, non-paced, and per-host + per-flow paced; point of inflection marked N*]
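Per-egress (per-host) pacing adds a second, shared release clock on top of per-flow pacing, so that the host's aggregate output also stays smooth. A minimal sketch under my own naming, with a fixed packets-per-second budget:

```python
class HostPacer:
    """Hypothetical per-host pacer: one release clock shared by ALL
    flows on a host, limiting the host's aggregate to `rate_pps`
    packets per second (applied in addition to per-flow pacing)."""

    def __init__(self, rate_pps):
        self.gap = 1.0 / rate_pps   # minimum spacing between packets
        self.next_free = 0.0

    def release_time(self, requested):
        """Earliest time a packet requested at `requested` may leave."""
        t = max(requested, self.next_free)
        self.next_free = t + self.gap
        return t
```

If two flows both want to send at t = 0 and the host budget is 100 packets/s, the pacer releases the first packet at 0.0 and delays the second to 0.01, so the pair never leaves as a back-to-back burst even though each flow, on its own, was within its pacing schedule.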

SLIDE 84

Conclusions and Future Work

๏ Re-examine TCP pacing’s effectiveness:
๏ Demonstrate when TCP pacing brings benefits in such environments.
๏ Inter-flow burstiness
๏ Burst-pacing vs. packet-pacing
๏ Per-egress pacing

SLIDE 85

Renewed Interest

SLIDE 86

Traffic Burstiness Survey

๏ ‘Bursty’ is a word with no agreed meaning. How do you define bursty traffic?
๏ If you are involved with a data center, is your data center traffic bursty?
๏ If yes, do you think it would be useful to suppress the burstiness in your traffic?
๏ If no, are you already suppressing the burstiness? How? Would you anticipate the traffic becoming burstier in the future?

monia@cs.toronto.edu

SLIDE 87

SLIDE 88

Base-Case Experiment:
One RPC vs. two RPCs, 64KB of buffering, Latency

SLIDE 89

Multiple flows: Link Utilization/Drop/Latency
Buffer size: 6% of BDP, varying number of flows

SLIDE 93

Base-Case Experiment:
One RPC vs. two RPCs, 64KB of buffering, Latency / Queue Occupancy

SLIDE 97

Functional test

SLIDE 98

RPC vs. Streaming

[Figure: testbed topology with 1GE and 10GE links, RTT = 10ms; annotations: "Paced by ack clocking", "Bursty"]

SLIDE 99

Zooming in more on the paced flow

SLIDE 104

Multiple flows: Link Utilization/Drop/Latency
Buffer size 6.8% of BDP, varying number of flows

[Figures: bottleneck link utilization (Mbps), drop rate (%), average FCT (sec), and 99th percentile FCT (sec) vs. number of flows, paced vs. non-paced; point of inflection marked N*]