Lecture 18: Congestion Control in Data Center Networks


SLIDE 1

Lecture 18: Congestion Control in Data Center Networks

SLIDE 2

Overview

  • Why is the problem different from that in the Internet?
  • What are possible solutions?

SLIDE 3

DC Traffic Patterns

  • In-cast applications
– Clients send queries to servers
– Responses are synchronized

  • Few overlapping long flows
– According to DCTCP’s measurements

SLIDE 4

Data Center TCP (DCTCP)

Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitendra Padhye, Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, Murari Sridharan

Microsoft Research and Stanford University

SLIDE 5

Data Center Packet Transport

  • Large purpose-built DCs

– Huge investment: R&D, business

  • Transport inside the DC

– TCP rules (99.9% of traffic)

  • How’s TCP doing?

SLIDE 6

TCP in the Data Center

  • We’ll see TCP does not meet demands of apps.
– Suffers from bursty packet drops, Incast [SIGCOMM ‘09], ...
– Builds up large queues:
Ø Adds significant latency.
Ø Wastes precious buffers, esp. bad with shallow-buffered switches.

  • Operators work around TCP problems.
– Ad-hoc, inefficient, often expensive solutions
– No solid understanding of consequences, tradeoffs

SLIDE 7

Roadmap

  • What’s really going on?
– Interviews with developers and operators
– Analysis of applications
– Switches: shallow-buffered vs deep-buffered
– Measurements

  • A systematic study of transport in Microsoft’s DCs
– Identify impairments
– Identify requirements

  • Our solution: Data Center TCP

SLIDE 8

Case Study: Microsoft Bing

  • Measurements from a 6000-server production cluster
  • Instrumentation passively collects logs
– Application-level
– Socket-level
– Selected packet-level

  • More than 150TB of compressed data over a month

SLIDE 9

Partition/Aggregate Application Structure

[Diagram: a request fans out from a Top-Level Aggregator (TLA) to Mid-Level Aggregators (MLAs) to Worker Nodes, and responses are aggregated back up the tree. Running example: the query “Picasso” is partitioned across workers, each returning Picasso quotes (“Art is a lie that makes us realize the truth,” “The chief enemy of creativity is good sense,” …) that are merged into the final answer. Deadlines shrink down the tree: Deadline = 250ms at the TLA, 50ms at the MLAs, 10ms at the workers.]

  • Time is money
Ø Strict deadlines (SLAs)

  • Missed deadline
Ø Lower quality result
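To make the pattern concrete, here is a minimal Python sketch of a deadline-bound aggregation stage. Everything here (function names, the worker count, the use of threads) is illustrative rather than from the slides; only the 10ms worker deadline and the drop-late-responses behavior follow the deck.

```python
# Minimal sketch of a deadline-bound partition/aggregate stage.
# All names (query_worker, aggregate, the worker count) are illustrative.
from concurrent.futures import ThreadPoolExecutor, wait

DEADLINE_S = 0.010  # 10 ms worker-level deadline, as on the slide

def query_worker(worker_id: int, query: str) -> str:
    # Stand-in for a real shard lookup.
    return f"worker{worker_id}: results for {query!r}"

def aggregate(query: str, n_workers: int = 4) -> list:
    pool = ThreadPoolExecutor(max_workers=n_workers)
    futures = [pool.submit(query_worker, i, query) for i in range(n_workers)]
    done, not_done = wait(futures, timeout=DEADLINE_S)
    pool.shutdown(wait=False)  # don't block on stragglers
    # Late responses are dropped: quality degrades, the page still renders.
    return [f.result() for f in done]

print(aggregate("Picasso"))
```

This is why a missed deadline means a lower-quality result rather than a stalled page: the aggregator answers with whatever arrived in time.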

SLIDE 10

Generality of Partition/Aggregate

  • The foundation for many large-scale web applications.
– Web search, social network composition, ad selection, etc.

  • Example: Facebook
– Partition/Aggregate ~ Multiget
– Aggregators: Web Servers
– Workers: Memcached Servers

[Diagram: Internet → Web Servers → Memcached Servers, connected via the Memcached protocol.]

SLIDE 11

Workloads

  • Partition/Aggregate (Query): delay-sensitive
  • Short messages [50KB-1MB] (Coordination, Control state): delay-sensitive
  • Large flows [1MB-50MB] (Data update): throughput-sensitive

SLIDE 12

Impairments

  • Incast
  • Queue Buildup
  • Buffer Pressure

SLIDE 13

Incast

[Diagram: Workers 1-4 send synchronized responses through one switch port to the Aggregator; the burst overflows the buffer and a TCP timeout follows, with RTOmin = 300 ms.]

  • Synchronized mice collide.
Ø Caused by Partition/Aggregate.
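To see why a single timeout is so damaging at these scales, here is a back-of-the-envelope comparison; the link speed and response size below are my assumptions, while RTOmin = 300 ms comes from the slide.

```python
# Back-of-the-envelope: one incast timeout vs. the transfer itself.
# Link speed and response size are assumptions; RTOmin is from the slide.
RTO_MIN_S = 0.300           # minimum retransmission timeout (slide)
LINK_BPS = 1e9              # 1 Gbps rack link (assumed)
RESPONSE_BYTES = 2 * 1024   # 2 KB response per worker (assumed)

transfer_s = RESPONSE_BYTES * 8 / LINK_BPS   # ~0.016 ms on the wire
after_one_rto_s = transfer_s + RTO_MIN_S     # burst dropped once

print(f"no loss : {transfer_s * 1e3:7.3f} ms")
print(f"one RTO : {after_one_rto_s * 1e3:7.3f} ms")  # blows any 10-250 ms deadline
```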

SLIDE 14

Incast Really Happens

  • Requests are jittered over a 10ms window.
  • Jittering was switched off around 8:30 am.

[Plot: MLA query completion time (ms) over the day, tracking the 99.9th percentile; it spikes once jittering is disabled.]

Jittering trades off the median against the high percentiles.
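The jittering workaround amounts to delaying each request by a random offset so the responses stop arriving as one synchronized burst. A minimal sketch, where only the 10 ms window size comes from the slide:

```python
# Sketch of request jittering: delay each request by a random offset
# within a 10 ms window so worker responses are desynchronized.
import random
import time

JITTER_WINDOW_S = 0.010  # window size from the slide

def send_jittered(send_fn, query):
    time.sleep(random.uniform(0.0, JITTER_WINDOW_S))
    return send_fn(query)

# Cost: every query's median latency rises by ~5 ms on average;
# benefit: far fewer collisions, so the 99.9th percentile improves.
```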

SLIDE 15

Incast: Goodput collapses as senders increase

SLIDE 16

Incast: Synchronized timeouts

SLIDE 17

Queue Buildup

[Diagram: Senders 1 and 2 share a switch queue toward the Receiver; the long flow's backlog delays the short flow's packets.]

  • Big flows build up queues.
Ø Increased latency for short flows.

  • Measurements in the Bing cluster
Ø For 90% of packets: RTT < 1ms
Ø For 10% of packets: 1ms < RTT < 15ms
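A rough calculation shows how queue backlog maps to the RTTs above; the 1 Gbps rate and the backlog sizes are assumptions chosen to bracket the slide's numbers.

```python
# Queuing delay is backlog divided by line rate. The 1 Gbps rate and
# backlog sizes are assumptions chosen to bracket the slide's RTT bands.
LINK_BPS = 1e9

for backlog_kb in (0, 125, 1875):
    delay_ms = backlog_kb * 1024 * 8 / LINK_BPS * 1e3
    print(f"backlog {backlog_kb:5d} KB -> queuing delay {delay_ms:6.2f} ms")

# ~125 KB of backlog already costs ~1 ms; ~1.9 MB costs ~15 ms,
# matching the 1-15 ms RTTs the slowest 10% of packets see.
```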

SLIDE 18

Data Center Transport Requirements

  • 1. High Burst Tolerance

– Incast due to Partition/Aggregate is common.

  • 2. Low Latency

– Short flows, queries

  • 3. High Throughput

– Continuous data updates, large file transfers

The challenge is to achieve these three together.

SLIDE 19

Tension Between Requirements

Goals: High Burst Tolerance, High Throughput, Low Latency

  • Deep buffers:
Ø Queuing delays increase latency.
  • Shallow buffers:
Ø Bad for bursts & throughput.
  • Reduced RTOmin (SIGCOMM ‘09):
Ø Doesn’t help latency.
  • AQM – RED:
Ø Average queue not fast enough for Incast.

DCTCP objective: Low queue occupancy & high throughput.

SLIDE 20

The DCTCP Algorithm

SLIDE 21

Review: The TCP/ECN Control Loop

[Diagram: Senders 1 and 2 share a switch queue toward the Receiver; the congested switch sets a 1-bit ECN mark on packets, and the receiver echoes the mark back to the sender.]

ECN = Explicit Congestion Notification
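As a refresher, the loop can be sketched in a few lines: the switch sets the mark instead of dropping, the receiver echoes it, and the sender reacts once per window exactly as it would to a loss. A schematic sketch with made-up packet/ACK structures and threshold, not a real stack:

```python
# Schematic TCP/ECN loop; K and the packet/ACK dicts are illustrative.

K_PACKETS = 20  # switch marking threshold (assumed)

def switch_forward(queue_len: int, packet: dict) -> dict:
    if queue_len > K_PACKETS:
        packet["ce"] = True                  # set Congestion Experienced
    return packet

def receiver_ack(packet: dict) -> dict:
    return {"ece": packet.get("ce", False)}  # echo the mark to the sender

def sender_on_ack(cwnd: float, ack: dict) -> float:
    if ack["ece"]:
        return max(cwnd / 2.0, 1.0)          # classic TCP: halve on any mark
    return cwnd + 1.0 / cwnd                 # additive increase otherwise
```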

SLIDE 22

Small Queues & TCP Throughput: The Buffer Sizing Story

  • Bandwidth-delay product rule of thumb:
– A single flow needs B = C × RTT of buffering for 100% throughput.

[Plot: cwnd sawtooth over time, and throughput vs. buffer size reaching 100% at B = C × RTT.]

SLIDE 23

Small Queues & TCP Throughput: The Buffer Sizing Story

  • Bandwidth-delay product rule of thumb:
– A single flow needs B = C × RTT of buffering for 100% throughput.

  • Appenzeller rule of thumb (SIGCOMM ‘04):
– With a large # of flows, B = (C × RTT) / √N is enough.

SLIDE 24

Small Queues & TCP Throughput: The Buffer Sizing Story

  • Bandwidth-delay product rule of thumb:
– A single flow needs B = C × RTT of buffering for 100% throughput.

  • Appenzeller rule of thumb (SIGCOMM ‘04):
– With a large # of flows, B = (C × RTT) / √N is enough.

  • Can’t rely on the stat-mux benefit in the DC.
– Measurements show typically 1-2 big flows at each server, at most 4.

SLIDE 25

Small Queues & TCP Throughput: The Buffer Sizing Story

  • Bandwidth-delay product rule of thumb:
– A single flow needs B = C × RTT of buffering for 100% throughput.

  • Appenzeller rule of thumb (SIGCOMM ‘04):
– With a large # of flows, B = (C × RTT) / √N is enough.

  • Can’t rely on the stat-mux benefit in the DC.
– Measurements show typically 1-2 big flows at each server, at most 4.

Real rule of thumb: Low variance in sending rate → small buffers suffice.
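Plugging illustrative numbers into the two rules of thumb shows why the √N discount matters in the wide area but buys almost nothing in the DC; the link speed and RTT below are my assumptions, the formulas are the slides'.

```python
import math

# Buffer sizing rules of thumb with illustrative numbers.
C_BPS = 10e9     # 10 Gbps link (assumed)
RTT_S = 100e-6   # 100 us in-DC RTT (assumed)

bdp_bytes = C_BPS * RTT_S / 8                # B = C x RTT
for n in (1, 4, 10_000):
    b = bdp_bytes / math.sqrt(n)             # B = (C x RTT) / sqrt(N)
    print(f"N = {n:6d}: buffer ~ {b / 1024:7.1f} KB")

# With only 1-4 big flows per server, the sqrt(N) discount is ~1-2x,
# so the stat-mux savings the wide area enjoys never materialize here.
```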

SLIDE 26

Two Key Ideas

  • 1. React in proportion to the extent of congestion, not its presence.
ü Reduces variance in sending rates, lowering queuing requirements.

  • 2. Mark based on instantaneous queue length.
ü Fast feedback to better deal with bursts.

ECN Marks              TCP                  DCTCP
1 0 1 1 1 1 0 1 1 1    Cut window by 50%    Cut window by 40%
0 0 0 0 0 0 0 0 0 1    Cut window by 50%    Cut window by 5%
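The table's numbers follow directly from treating marks quantitatively: if a fraction α of packets were marked, DCTCP cuts the window by α/2. A quick check against the two mark streams, assuming the running average has converged to the current fraction:

```python
# Reproduce the table: DCTCP cuts by alpha/2, where alpha tracks the
# fraction of marked packets (assumed converged to the current fraction).
def dctcp_cut(marks):
    alpha = sum(marks) / len(marks)   # fraction of packets marked
    return alpha / 2                  # multiplicative decrease factor

print(dctcp_cut([1, 0, 1, 1, 1, 1, 0, 1, 1, 1]))  # 0.40 -> cut by 40%
print(dctcp_cut([0, 0, 0, 0, 0, 0, 0, 0, 0, 1]))  # 0.05 -> cut by 5%
# TCP cuts by 50% in both cases: it sees only that marks exist, not how many.
```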

SLIDE 27

Data Center TCP Algorithm

Switch side:
– Mark packets when Queue Length > K.

[Diagram: switch buffer of size B with marking threshold K; packets arriving to a queue longer than K are marked, the rest are not.]

Sender side:
– Maintain a running average of the fraction of packets marked (α). In each RTT:

    F = (# of marked ACKs) / (total # of ACKs)
    α ← (1 − g) × α + g × F

Ø Adaptive window decrease:

    cwnd ← cwnd × (1 − α/2)

– Note: the window is divided by a factor between 1 (no marks) and 2 (all packets marked).
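Putting both sides together, here is a minimal sketch of the sender-side update. The formulas are the slide's; the class name, the initial window, and g = 1/16 (the gain suggested in the DCTCP paper) are illustrative choices.

```python
# Sketch of the DCTCP sender update; illustrative framing, slide formulas.
class DctcpSender:
    def __init__(self, cwnd: float = 10.0, g: float = 1 / 16):
        self.cwnd = cwnd
        self.alpha = 0.0   # running estimate of the marked fraction
        self.g = g         # EWMA gain (1/16 suggested in the DCTCP paper)

    def on_rtt_end(self, acks_marked: int, acks_total: int) -> None:
        """Apply the per-RTT update from this window's ECN-echo counts."""
        f = acks_marked / acks_total
        self.alpha = (1 - self.g) * self.alpha + self.g * f
        if acks_marked > 0:
            # Divide the window by a factor between 1 and 2:
            # cwnd * (1 - alpha/2) ranges from cwnd down to cwnd/2.
            self.cwnd = max(self.cwnd * (1 - self.alpha / 2), 1.0)
        else:
            self.cwnd += 1.0   # standard additive increase per RTT
```

For example, a sender that saw 8 of 10 ACKs marked would call `on_rtt_end(8, 10)` and, once α converges, cut its window by about 40%.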

SLIDE 28

DCTCP in Action

Setup: Windows 7 hosts, Broadcom 1Gbps switch. Scenario: 2 long-lived flows, K = 30KB.

[Plot: instantaneous queue length (KBytes) over time for TCP vs. DCTCP.]

SLIDE 29

Why it Works

  • 1. High Burst Tolerance
ü Large buffer headroom → bursts fit.
ü Aggressive marking → sources react before packets are dropped.

  • 2. Low Latency
ü Small buffer occupancies → low queuing delay.

  • 3. High Throughput
ü ECN averaging → smooth rate adjustments, low variance.

SLIDE 30

Evaluation

  • Implemented in the Windows stack.
  • Real hardware, 1Gbps and 10Gbps experiments
– 90-server testbed
– Broadcom Triumph: 48 1G ports, 4MB shared memory
– Cisco Cat4948: 48 1G ports, 16MB shared memory
– Broadcom Scorpion: 24 10G ports, 4MB shared memory

  • Numerous micro-benchmarks
– Throughput and Queue Length
– Multi-hop
– Queue Buildup
– Buffer Pressure
– Fairness and Convergence
– Incast
– Static vs Dynamic Buffer Mgmt

  • Cluster traffic benchmark

SLIDE 31

Cluster Traffic Benchmark

  • Emulate traffic within 1 rack of a Bing cluster
– 45 1G servers, one 10G server for external traffic

  • Generate query and background traffic
– Flow sizes and arrival times follow distributions seen in Bing

  • Metric:
– Flow completion time for queries and background flows.

We use RTOmin = 10ms for both TCP & DCTCP.

SLIDE 32

Baseline

[Plots: flow completion times for Background Flows and Query Flows, TCP vs. DCTCP.]

SLIDE 33

Baseline

[Plots: Background Flows and Query Flows.]

ü Low latency for short flows.

SLIDE 34

Baseline

[Plots: Background Flows and Query Flows.]

ü Low latency for short flows.
ü High throughput for long flows.

SLIDE 35

Baseline

[Plots: Background Flows and Query Flows.]

ü Low latency for short flows.
ü High throughput for long flows.
ü High burst tolerance for query flows.

SLIDE 36

Scaled Background & Query

10x Background, 10x Query

SLIDE 37

Scalability

SLIDE 38

Conclusions

  • DCTCP satisfies all our requirements for data center packet transport.
ü Handles bursts well
ü Keeps queuing delays low
ü Achieves high throughput

  • Features:
ü Very simple change to TCP and a single switch parameter (the marking threshold K).
ü Based on mechanisms already available in silicon.

SLIDE 39

Discussion

  • What if traffic patterns change?
– E.g., many overlapping flows

  • What do you like/dislike?