Streaming Video and TCP-Friendly Congestion Control, Sugih Jamin (PowerPoint PPT presentation)



SLIDE 1

Streaming Video and TCP-Friendly Congestion Control

Sugih Jamin Department of EECS University of Michigan jamin@eecs.umich.edu Joint work with: Zhiheng Wang (UofM), Sujata Banerjee (HP Labs)

Sugih Jamin (jamin@eecs.umich.edu)

SLIDE 2

Video Application on the Internet

Adaptive playback streaming:

  • Sender sends data i at time ti
  • Receiver receives it at time ti + ∆, where ∆ = propagation delay + queueing delay
  • To smooth out the variable queueing delay, the receiver buffers some amount of data (i to i + k, k > 0) before playing back data i
  • By the time the receiver is ready to play back data i + k, it has hopefully arrived
  • Otherwise, the receiver increases its buffering (hence “adaptive”)
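The adaptive buffering scheme above can be sketched in a few lines; the function name and the grow-by-one policy are illustrative assumptions, not from the talk:

```python
# Sketch of adaptive playback buffering: play item i only once items
# i..i+k have arrived; on a miss, pause (rebuffer) and grow k.
def simulate_playback(arrival_times, k=2, interval=1.0):
    rebufferings = 0
    # Initial buffering: wait until the first k+1 items are in hand.
    clock = arrival_times[min(k, len(arrival_times) - 1)]
    for i in range(len(arrival_times)):
        lookahead = min(i + k, len(arrival_times) - 1)
        if arrival_times[lookahead] > clock:   # item i+k has not arrived yet
            rebufferings += 1
            k += 1                             # the "adaptive" step: buffer more
            clock = arrival_times[lookahead]   # pause until it arrives
        clock += interval                      # play back item i
    return rebufferings, k
```

With smooth arrivals the buffer never drains; a burst of delayed packets triggers a rebuffering pause and a permanently deeper buffer.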

SLIDE 3

Video Streaming Two ways to send data:

  • bulk transfer: transfer before playback
  • streaming: transfer while playback

Why Streaming?

  • shorter playback start time
  • smaller receiver buffer requirement
  • smaller interaction delay requirement

SLIDE 4

Expectations vs. Reality

Streaming media service requirements:

  • resource intensive
  • smooth (low variance) throughput

Internet service characteristics:

  • shared resource
  • variable bandwidth
  • unpredictable network latency
  • lossy channel

SLIDE 5

Streaming Video over the Internet Effect of transient changes in available bandwidth:

  • empty buffer on playback
  • playback pause on rebuffering
  • larger buffer size increases start time

(consider live interactive sessions)

Also applicable to other streaming data: scientific visualization, dynamic objects in massively multiplayer games, web page downloads

SLIDE 6

Case Study: Windows Media Player

Application characteristics:

  • WM Server sends traffic at a constant bit rate
  • WMP client pauses video playback until sufficient packets have been buffered (rebuffering)
  • WMP client asks for retransmission to recover lost packets; if a lost packet cannot be recovered, the whole frame is considered lost
  • WM Server reduces its sending rate when lower available bandwidth is detected

SLIDE 7

Streaming Video Quality

[Figure: packet delivery timelines for three cases. Case 1: no queueing delay, sufficient bandwidth. Case 2: some queueing delay, bandwidth changes. Case 3: large queueing delay, insufficient bandwidth, leading to packet loss. Each case is annotated with the resulting video quality.]

SLIDE 8

Measuring Streaming Video Quality Metrics:

  • server transmission rate (service rate)
  • client rebuffering probability
  • client rebuffering duration
  • client frame loss

SLIDE 9

Improving User-Perceived Quality

  • Users are less annoyed by lower but consistent quality than by continual rebuffering
  • Changes in available bandwidth cause changes in rebuffering probability and duration
  • Streaming video needs a low loss rate and smooth available bandwidth to reduce user annoyance
  • Need: a smooth congestion control mechanism

SLIDE 10

TCP-Friendliness

  • TCP is the standard transport protocol
  • TCP does congestion control by linearly probing for available bandwidth and multiplicatively decreasing its rate on congestion detection (packet loss)
  • “A congestion control protocol is TCP-friendly if, in steady state, its bandwidth utilization is no more than required by TCP under similar circumstances” [Floyd et al., 2000]
  • TCP-friendliness in a proposed protocol ensures compatibility with TCP
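TCP's probing behavior can be sketched as a toy AIMD (additive increase, multiplicative decrease) step; the window units are abstract and the parameters a = 1, b = 0.5 are TCP's classic values:

```python
# Toy AIMD step: linear probe for bandwidth each round trip,
# halve the window when a loss (congestion) is detected.
def aimd_step(cwnd, loss, a=1.0, b=0.5):
    return cwnd * b if loss else cwnd + a

# Probe for four round trips, then one loss halves the window.
w = 10.0
for loss in [False, False, False, False, True]:
    w = aimd_step(w, loss)
# w is now (10 + 4) * 0.5 = 7.0
```

The sawtooth this produces is exactly the bandwidth variability that streaming applications find hard to tolerate.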

SLIDE 11

TCP-Friendly Rate Control (TFRC)

Goals:

  • to provide streaming media with steady throughput
  • to be TCP-friendly

Instead of reacting to individual losses, TFRC tries to satisfy the TCP throughput function over time:

  T = s / ( R·√(2p/3) + tRTO·3·√(3p/8)·p·(1 + 32p²) )

T: TCP throughput; s: packet size; p: loss rate; R: path RTT; tRTO: retransmit timeout
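The throughput function is straightforward to transcribe into code; the default tRTO = 4R below is a common TFRC approximation and an assumption here, not something the slide states:

```python
import math

def tcp_throughput(s, R, p, t_rto=None):
    """T = s / (R*sqrt(2p/3) + tRTO*3*sqrt(3p/8)*p*(1 + 32p^2)).
    s: packet size, R: path RTT (sec), p: loss rate, t_rto: retransmit timeout."""
    if t_rto is None:
        t_rto = 4 * R  # common approximation (assumption)
    denom = R * math.sqrt(2 * p / 3) \
          + t_rto * 3 * math.sqrt(3 * p / 8) * p * (1 + 32 * p ** 2)
    return s / denom
```

Throughput falls as loss rate or RTT grows; TFRC paces its sending rate to track this value instead of reacting to each individual loss.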

SLIDE 12

Terminologies

[Figure: where each rate is defined along the application/OS/network stack at sender and receiver: data rate, sending rate, self-clocked rate (calculated allowed rate), fair share, bandwidth capacity, throughput]

SLIDE 13

Terminologies (contd)

  • Data rate: the rate at which an application generates data
  • Sending rate: the rate at which a connection sends data
  • Self-clocked rate: upper bound on the sending rate, calculated by TFRC
  • Fair share: TCP’s throughput during bulk data transfer
  • Fair share load: ratio between the sending rate and the fair share
  • Throughput: the incoming traffic rate measured at the receiver

SLIDE 14

Does TFRC Provide Smoother Throughput?

Experiment setup:

[Figure: dumbbell topology in which sources S(0), S(1), …, S(M-1) connect through routers R1 and R2, joined by a 1.5 Mbps, 50 ms bottleneck link, to destinations D(0), D(1), …, D(M-1)]

  • Data source: CBR-traffic
  • Background traffic
  • long/short-lived TCP flows with an infinite amount of data
  • flash crowd: a large number of short TCP bursts
  • long-range dependent traffic: a number of Pareto-distributed ON/OFF flows
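The long-range-dependent background load can be sketched as ON/OFF sources with Pareto-distributed period durations; the shape and scale values below are illustrative assumptions, as the talk does not give its parameters:

```python
import random

# Sketch of one long-range-dependent background source: alternating
# ON/OFF periods whose lengths are drawn from a Pareto distribution.
def pareto_onoff_schedule(n_periods, shape=1.2, scale=1.0, seed=0):
    rng = random.Random(seed)
    schedule = []  # (state, duration) pairs, alternating ON/OFF
    for i in range(n_periods):
        duration = scale * rng.paretovariate(shape)  # heavy-tailed length
        schedule.append(("ON" if i % 2 == 0 else "OFF", duration))
    return schedule
```

Heavy-tailed ON/OFF periods, aggregated over many such sources, are a standard way to produce long-range-dependent traffic.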

SLIDE 15

Not that Smooth

[Figure: TCP’s self-clocked rate, TFRC’s self-clocked rate, and TFRC’s sending rate (KBps) over time (sec)]

  • Data rate: 50 KBps
  • Background traffic: 1 long-lived TCP flow

SLIDE 16

Worse with Bursty Background Traffic

[Figure: congestion rate and sending rate (KBps) over time (sec)]

  • Data rate: 20 KBps
  • Background traffic: 1 long-lived TCP flow + 5 ON/OFF flows

SLIDE 17

Internet Experiments

A sample path between MI and CA:

[Figure: self-clocked rate and sending rate (KBps) by round ID]

  • Data rate: 40 KBps
  • RTT: 67 msec
  • Loss event rate: 0.24%

SLIDE 18

MARC’s Design Motivation

TFRC congestion control is memoryless, whereas:

  • streaming media is “well-behaved”: when there is no congestion, streaming applications cannot always fully utilize their fair share
  • but during congestion, TFRC applies the same rate-reduction principle to streaming media traffic as to bulk data transfer traffic

Media-Aware Rate Control (MARC) proposition: “well-behaved” streaming applications should be allowed to reduce their sending rate more slowly during congestion

SLIDE 19

Media-Aware Rate Control (MARC)

  • Define a token value C to keep track of a connection’s fair share utilization:

  C = βC′ + (T − Wsend)·I

  C: token; β: decay factor; C′: previous token value; T: previous calculated self-clocked rate; Wsend: previous sending rate; I: feedback interval

  • We use β = 0.9
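The token update is a direct transcription of the equation above, with the variable names spelled out for readability:

```python
# C = beta*C' + (T - Wsend)*I: a connection sending below its calculated
# self-clocked rate accumulates credit, and beta decays old credit.
def update_token(prev_token, self_clocked_rate, sending_rate, interval, beta=0.9):
    return beta * prev_token + (self_clocked_rate - sending_rate) * interval
```

A well-behaved flow that sends at 60 KBps against a 100 KBps self-clocked rate builds up token every feedback interval; a flow sending above its allowed rate drains it.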

SLIDE 20

Media-Aware Rate Control (MARC)

Our experiments use δ = 0.1

SLIDE 21

MARC is Effective

[Figures: self-clocked rate and sending rate (KBps) over time (sec) for TFRC (top) and MARC (bottom)]

  • Data rate: 50 KBps
  • Background traffic: 1 long-lived TCP flow

SLIDE 22

MARC is TCP-Friendly

[Figure: sending rate vs. data rate (KBps) for MARC-RED, TFRC-RED, MARC-DropTail, and TFRC-DropTail, plotted against the fair share]

  • Data rate: 10 CBR sources
  • Background traffic: 10 long-lived TCP flows

SLIDE 23

Reaction Time to Persistent Congestion

  • Congestion at 50th sec, RTT: 80 msec
  • Fair share before congestion: 140 KBps

[Figures: self-clocked rate (KBps) vs. time (sec), MARC vs. TFRC, for data rates of (a) 20 KBps (x = 52.44), (b) 40 KBps (x = 51.81), (c) 60 KBps (x = 51.33), and (d) 80 KBps (x = 50.90)]

Without the token, MARC behaves exactly like TFRC.

SLIDE 24

Token Dynamics

[Figure: sending rate and self-clocked rate (KBps) and token (KByte) over time (sec); the flash-crowd arrival is marked]

  • 1 long-lived TCP flow and 1 MARC flow
  • Data rate: 100 KBps
  • Flash crowd (800 short-lived TCP flows) starts at the 50th second and lasts for 5 seconds

SLIDE 25

MARC Improves User-Perceived Quality

[Figure: probability density function of the number of rebuffering events (1 to 5) for TCP, TFRC, and MARC]

  • Data rate: 44 KBps
  • Background traffic: 1 long-lived TCP flow + 1 ON/OFF flow

SLIDE 26

Future Work

  • Layered video adaptation with MARC
  • Analyzing MARC
  • Streaming media over end-host multicast
  • multiple receivers
  • congestion control on end-host multicast
  • multiple sources
  • Integrated Flow Control
