Trickle: Rate Limiting Video Streaming (Monia Ghobadi) - PowerPoint PPT Presentation



SLIDE 1

Trickle: Rate Limiting Video Streaming

Monia Ghobadi <monia@cs.toronto.edu>
Yuchung Cheng, Ankur Jain, Matt Mathis <ycheng, jankur, mattmathis@google.com>

SLIDE 2

Video Streaming

  • Ustreamer
  • TCP
  • Just-in-time video delivery
  • Application pacing

SLIDE 3

Video Streaming

[Figure: sequence offset (KB) vs. time (sec), showing the startup phase followed by the throttling phase]

  • Token bucket, 64kB blocks
  • Target streaming rate = 125% of the video encoding rate
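The token-bucket application pacing on this slide can be sketched as a write-and-sleep loop. The 64kB block size and the 125% target are from the slide; the function name, signature, and `send_block` callback are illustrative, not Ustreamer's actual code.

```python
import time

BLOCK_SIZE = 64 * 1024  # 64 kB write blocks, as on the slide

def pace_stream(send_block, video_bytes, encoding_rate_bps):
    """Application pacing sketch: after the startup phase, write one
    64 kB block at a time, sleeping between writes so the average
    sending rate stays near the target."""
    target_bps = 1.25 * encoding_rate_bps   # target = 125% of encoding rate
    interval = BLOCK_SIZE * 8 / target_bps  # seconds between block writes
    sent = 0
    while sent < video_bytes:
        n = min(BLOCK_SIZE, video_bytes - sent)
        send_block(n)          # e.g. a socket send of the next n bytes
        sent += n
        time.sleep(interval)   # wait for the next "token"
    return sent
```

The weakness Trickle targets: each 64kB write still leaves the application as a burst, so the smoothing happens only at the granularity of whole blocks.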

SLIDE 4

The Problem: Burstiness is Bad for TCP

Not specific to YouTube videos. Netflix sends bursts as large as 2MB.

Main contribution: A simple and generic technique to implement just-in-time video delivery by smoothly rate-limiting TCP transfers.


SLIDE 5

To Rate Limit TCP: Trickle

  • Dynamic upper bound on TCP’s congestion window.
  • Periodically computed based on RTT and target rate (R).
  • Only server-side changes, for easy deployment.
  • Not a special mechanism tailored only for YouTube.

Example: R = 50 pkts/sec (600 Kbps), RTT = 200 ms
max_cwnd = 50 pkts/sec x 0.2 sec = 10 pkts
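The bound above is one RTT's worth of packets at the target rate; a minimal sketch (the function name, signature, and 1500-byte MSS default are illustrative, not Trickle's actual kernel code):

```python
def max_cwnd_packets(target_rate_bps, rtt_sec, mss_bytes=1500):
    """Cap on the congestion window so the flow averages at most the
    target rate: max_cwnd = rate (pkts/sec) * RTT (sec)."""
    rate_pps = target_rate_bps / (mss_bytes * 8)  # packets per second
    return max(1, round(rate_pps * rtt_sec))      # at least one packet

# Slide example: 600 Kbps target at 200 ms RTT ->
# 600000 / 12000 = 50 pkts/sec, and 50 * 0.2 = 10 packets
```

Because the RTT changes over a connection's lifetime, the bound is recomputed periodically rather than set once.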
SLIDE 6

Demo*

[Demo video: Ustreamer (bursty) vs. Trickle (smooth) delivery]

* http://www.cs.utoronto.ca/~monia/tcptrickle.html

SLIDE 7

Experiments

Two data centers: India and Europe. 15 days in Fall 2011, 23 million videos in total. 4-way experiment:
  (1) Baseline1: application pacing with 64kB blocks
  (2) Baseline2: application pacing with 64kB blocks (identical to Baseline1, to gauge measurement variance)
  (3) Trickle
  (4) shrunk-block: application pacing with 16kB blocks

SLIDE 8

Experiments: Methodology

  • Users connect to the Western Europe and India data centers.
  • Connections are split into the four groups: (1) Baseline1, (2) Baseline2, (3) Trickle, (4) shrunk-block.
  • The groups have the same number of flows, flow sizes, and flow completion times.
  • Logged per flow: video ID, IP/port, bytes sent, retransmission rate, RTT, transmission time, goodput, target rate.

SLIDE 9

Experiments: Packet Losses

Trickle reduces the average retransmission rate by 43%.

[Figure: CDF of retransmission rate (%) for baseline1, baseline2, Trickle, and shrunk-block]

SLIDE 10

Experiments: Queueing Delay

Trickle reduces the average RTT by 28%.

[Figure: CDF of smoothed RTT (ms) for baseline1, baseline2, Trickle, and shrunk-block]

SLIDE 11

Conclusions

  • Trickle rate limits TCP by dynamically setting the maximum congestion window size.
  • Minimal sender-side changes, fast deployment.
  • Generally applicable to rate-limiting other kinds of streaming.