SLIDE 1

Modeling Persistent Congestion for Tail Drop Queue

Alex Marder & Jonathan M. Smith University of Pennsylvania

SLIDE 2

Problem

  • Can we determine the severity of persistent congestion?
  • 100mbit >> 1mbit
  • Why?
  • How bad is interdomain congestion?
  • Is service degraded due to DDoS attack?
  • What about TCP?
SLIDE 3

Can We Use TCP?

  • Requires a host on both sides of the link
  • Measures end-to-end throughput
  • Can be difficult to determine the bottleneck
  • Smaller RTT gets more throughput
SLIDE 4

Goals

  • Use edge probing to determine the average per-flow throughput of TCP flows on persistently congested links

SLIDE 5

Controlled Experiments: Setup

SLIDE 6

Controlled Experiments

  • Use TCP flows to adjust per-flow throughput
  • 100 flows ≈ 10mbit, 1000 flows ≈ 1mbit
  • Flows last [1, 5] seconds
  • Immediately replaced by new flow
  • 1000 probes per measurement
  • 100ms intervals (probing loop sketched below)
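A minimal sketch of this probing schedule, assuming a plain TCP-connect round trip as the probe; the target address, port, and function names are illustrative stand-ins, not the measurement tool used for these experiments.

```python
import socket
import time

TARGET = ("192.0.2.1", 80)   # hypothetical probe target
PROBES = 1000                # probes per measurement
INTERVAL = 0.100             # 100ms between probes

def probe_rtt(target, timeout=1.0):
    """Return one round-trip estimate in milliseconds, or None on loss/timeout."""
    start = time.monotonic()
    try:
        with socket.create_connection(target, timeout=timeout):
            pass
    except OSError:
        return None
    return (time.monotonic() - start) * 1000.0

def measure(target=TARGET, probes=PROBES, interval=INTERVAL):
    rtts = []
    for _ in range(probes):
        next_send = time.monotonic() + interval
        rtts.append(probe_rtt(target))
        # keep the 100ms spacing regardless of how long the probe took
        time.sleep(max(0.0, next_send - time.monotonic()))
    return rtts

if __name__ == "__main__":
    samples = measure()
    answered = [r for r in samples if r is not None]
    print(f"{len(answered)}/{PROBES} probes answered")
```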
SLIDE 7

FIFO Tail Drop Queue

  • Queue depth: maximum number of packets in the queue
  • If Arrival Rate > Link Bandwidth, queue size increases
  • If Arrival Rate < Link Bandwidth, queue size decreases
  • Packets are dropped when the queue is full (behavior sketched below)
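A minimal sketch of the tail-drop behavior described above, assuming a queue depth counted in packets; the class and variable names are illustrative.

```python
from collections import deque

class TailDropQueue:
    def __init__(self, depth):
        self.depth = depth          # maximum number of packets in the queue
        self.queue = deque()
        self.drops = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.depth:
            self.drops += 1         # tail drop: the arriving packet is discarded
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None

# If packets arrive faster than the link drains them, the queue grows
# until it is full and new arrivals are dropped.
q = TailDropQueue(depth=5)
for i in range(8):                  # 8 arrivals, no departures: 3 drops
    q.enqueue(f"pkt{i}")
print(len(q.queue), q.drops)        # -> 5 3
```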

SLIDE 8

TCP Variants

NewReno

  • Additive Increase, Multiplicative Decrease
  • Slow Start
  • Fast Retransmit
  • Fast Recovery with partial ACKs

CUBIC

  • Slow Start, Fast Retransmit, Fast Recovery
  • Congestion Window increases follow a cubic function – quickly initially, but slows as it nears the old window size (see the sketch below)
  • Partially decouples window increases from RTT
  • Default in current versions of Linux, MacOS, and Windows
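The cubic window growth mentioned above can be illustrated with the standard CUBIC window function W(t) = C(t − K)^3 + W_max from RFC 8312, using the RFC's default constants; this sketches the shape of the curve, not the configuration used in these experiments.

```python
C = 0.4          # CUBIC scaling constant (RFC 8312 default)
BETA = 0.7       # multiplicative decrease factor (RFC 8312 default)

def cubic_window(t, w_max):
    """Congestion window t seconds after a loss event, in packets."""
    # K is the time at which the window returns to w_max
    k = ((w_max * (1 - BETA)) / C) ** (1.0 / 3.0)
    return C * (t - k) ** 3 + w_max

w_max = 100.0    # window size (packets) just before the last loss
for t in [0, 1, 2, 3, 4, 5]:
    print(t, round(cubic_window(t, w_max), 1))
# Grows quickly at first, flattens as it nears w_max (t ≈ K), then probes beyond it.
```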

SLIDE 9

Initial Setup

SLIDE 10

TCP CUBIC: Mean Probe RTT Increases and Spread Decreases as Per Flow Throughput Decreases

SLIDE 11

TCP CUBIC: 100mbit (10 Flows) – 1mbit (1000 Flows)

SLIDE 12

TCP CUBIC: 10mbit (100 Flows) – 1mbit (1000 Flows)

SLIDE 13

CUBIC vs NewReno: Mean and Spread are Different

SLIDE 14

CUBIC vs NewReno: Model for CUBIC is Unusable for NewReno

SLIDE 15

CUBIC vs NewReno: 1000 Probe RTTs Every 100ms

SLIDE 16

CUBIC vs NewReno: 1000 Probe RTTs Every 100ms

SLIDE 17

CUBIC vs NewReno: Probe RTTs Increase Slower Than Decrease

SLIDE 18

Percent Increasing Metric

  • Percentage of Probe RTTs where RTT_i > RTT_(i-1)
  • Attempts to capture the rate of queue increases vs decreases
  • Example (computed in the sketch below):
  • 10 RTTs = [44, 46, 48, 43, 45, 44, 47, 42, 45, 48]
  • 6 RTTs are greater than the previous RTT
  • Percent Increasing = 60%
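A minimal sketch of the Percent Increasing computation, reproducing the example above; following the slide's example, the denominator is the total number of probe RTTs.

```python
def percent_increasing(rtts):
    """Percentage of probe RTTs that are larger than the previous probe RTT."""
    increases = sum(1 for prev, cur in zip(rtts, rtts[1:]) if cur > prev)
    return 100.0 * increases / len(rtts)

rtts = [44, 46, 48, 43, 45, 44, 47, 42, 45, 48]
print(percent_increasing(rtts))  # -> 60.0
```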
SLIDE 19

CUBIC vs NewReno: Percent Increasing Metric Reduces Potential Estimation Error (≈ 2Mbit)

SLIDE 20

CUBIC & NewReno Mixes: All Fall Between CUBIC and NewReno Curves

SLIDE 21

Bandwidth: Reduce Bandwidth to 500Mbit

SLIDE 22

Bandwidth: Measuring Raw Average Throughput

SLIDE 23

Measurements Are Independent of the Number of TCP Flows

SLIDE 24

Queue Depth: Increase By 4ms (From 48ms to 52ms)

SLIDE 25

Queue Depth: Stdev and % Increasing Are Resilient to Small Differences, Mean is Not

SLIDE 26

TCP RTT: Impact of Different RTTs

SLIDE 27

TCP RTT: Percent Increasing Estimation Error Based on RTT Assumption
SLIDE 28

TCP RTT: Probe RTTs Measure Throughput of Smallest TCP RTT Flows

SLIDE 29

Probing Through Congestion

SLIDE 30

1st Link: Reverse Path Congestion

SLIDE 31

2nd Link: Forward Path Congestion

SLIDE 32

Probing Through Congestion

SLIDE 33

Probing Through Congestion: Looks Possible

SLIDE 34

Conclusions & Future Work

  • Where it works:
  • CUBIC, NewReno, mixed
  • Bandwidth
  • Queue depth
  • Assumed TCP RTT distribution
  • Hopefully soon:
  • Reduce error due to TCP RTT
  • Probing through congestion
  • New experiments:
  • BBR
  • Higher bandwidths (10+ Gbit)
  • Throughput fluctuations