
SLIDE 1

QoS Services with Dynamic Packet State

Ion Stoica Carnegie Mellon University (joint work with Hui Zhang and Scott Shenker)

SLIDE 2

istoica@cs.cmu.edu

Today’s Internet

  • Service: best-effort datagram delivery
  • Architecture: “stateless” routers

– excepting routing state, routers do not maintain any fine-grained state about traffic

  • Properties

– scalable
– robust

SLIDE 3

Trends

  • Deploy more sophisticated services, e.g., traffic management and Quality of Service (QoS)

  • Two types of solutions:

– Stateless: preserve original Internet advantages

  • RED – support for congestion control
  • Differentiated services (Diffserv) – provide QoS

– Stateful: routers perform per-flow management

  • Fair Queueing – support for congestion control
  • Integrated services (Intserv) – provide QoS
SLIDE 4

Stateful Solutions: Router Complexity

  • Data path

– Per-flow classification
– Per-flow buffer management
– Per-flow scheduling

  • Control path

– install and maintain per-flow state for data and control planes

[Diagram: output interface with a per-flow classifier, buffer management, and scheduler serving flows 1..n, all backed by per-flow state]

SLIDE 5

Stateless vs. Stateful

  • Stateless solutions are more

– scalable
– robust

  • Stateful solutions provide more powerful and flexible services

– Fair Queueing vs. RED
– Intserv vs. Diffserv

SLIDE 6

Question

  • Can we achieve the best of both worlds, i.e., provide the services implemented by stateful networks while maintaining the advantages of stateless architectures?

SLIDE 7

Answer

  • Yes, at least in some interesting cases:

– Per-flow guaranteed services [SIGCOMM’99]
– Fair Queueing approximation [SIGCOMM’98]
– Large spatial service granularity [NOSSDAV’98]

SLIDE 8

Scalable Core (SCORE)

  • A contiguous and trusted region of the network in which

– edge nodes perform per-flow management
– core nodes do not perform any per-flow management

SLIDE 9

The Approach

  • 1. Define a reference stateful network that implements the desired service

[Diagram: reference stateful network and the SCORE network that emulates it]

  • 2. Emulate the functionality of the reference network in a SCORE network

SLIDE 10

The Idea

  • Instead of having core routers maintain per-flow state, have packets carry the per-flow state

[Diagram: flow state moved from core routers into packet headers]

SLIDE 11

The Technique: Dynamic Packet State (DPS)

  • Ingress node: compute and insert flow state in the packet’s header

SLIDE 13

The Technique: Dynamic Packet State (DPS)

  • Core node:

– process the packet based on the state it carries and the node’s state
– update both the packet’s and the node’s state

SLIDE 14

The Technique: Dynamic Packet State (DPS)

  • Egress node: remove the state from the packet’s header
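The three DPS steps above can be put together as a small end-to-end sketch. This is purely illustrative, not the prototype's code: the dict-based packet, the `state` field, and the node functions are assumptions for exposition (real DPS encodes state in IP header bits).

```python
# Minimal sketch of the Dynamic Packet State lifecycle.

def ingress(packet, flow_state):
    # Ingress node: compute per-flow state and insert it in the packet header.
    packet["state"] = flow_state
    return packet

def core(packet, node_state):
    # Core node: process the packet using the state it carries plus the
    # node's own aggregate state, then update both.
    carried = packet["state"]
    node_state["seen"] = node_state.get("seen", 0) + 1
    packet["state"] = min(carried, node_state.get("limit", carried))
    return packet

def egress(packet):
    # Egress node: strip the state so it never leaves the trusted region.
    packet.pop("state", None)
    return packet

pkt = ingress({"payload": b"data"}, flow_state=8.0)
pkt = core(pkt, node_state={"limit": 4.0})
pkt = egress(pkt)
assert "state" not in pkt
```

Note that only edge nodes touch per-flow information; the core node works entirely from the packet label and its own aggregate state.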

SLIDE 15

Examples

  • Support for congestion control
  • Per flow guaranteed services
SLIDE 16

Core-Stateless Fair Queueing (CSFQ)

  • Approximate the functionality of a network in which every node performs Fair Queueing (FQ)

[Diagram: reference network with FQ at every node; SCORE network with CSFQ at every node]

SLIDE 17

Algorithm Outline

  • Ingress nodes: estimate the rate r of each flow and insert it in the packets’ headers
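The ingress estimator can be sketched with the exponential averaging used in the CSFQ paper: on each packet arrival, r_new = (1 − e^(−T/K))·(l/T) + e^(−T/K)·r_old, where T is the inter-arrival time, l the packet length, and K an averaging constant. The value of K here (100 ms) and the class shape are assumptions for illustration:

```python
import math

K = 0.1  # averaging interval in seconds (assumed value)

class RateEstimator:
    """Per-flow arrival-rate estimate maintained at the ingress node."""

    def __init__(self):
        self.rate = 0.0          # current estimate, bits/sec
        self.last_arrival = None

    def packet(self, now, length_bits):
        # Update the estimate on each packet arrival.
        if self.last_arrival is None:
            self.last_arrival = now
            return self.rate
        T = now - self.last_arrival
        self.last_arrival = now
        w = math.exp(-T / K)
        self.rate = (1 - w) * (length_bits / T) + w * self.rate
        return self.rate
```

For a steady stream of l-bit packets every T seconds, the estimate converges to l/T; the exponential weight makes it robust to bursty arrivals.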

SLIDE 19

Algorithm Outline

  • Core node:

– compute the fair rate f on the output link
– enqueue the packet with probability P = min(1, f / r)
– update the packet label to r = min(r, f)
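The per-packet core-node step can be sketched directly from the two formulas above; the function name and the injectable random source are illustrative choices, not the prototype's API:

```python
import random

def csfq_forward(r, f, rng=random.random):
    """CSFQ core-node step for one packet.

    r -- flow rate carried in the packet label
    f -- fair rate of the output link
    Returns (forwarded?, new label).
    """
    p = min(1.0, f / r)          # forwarding probability P = min(1, f/r)
    forwarded = rng() < p        # probabilistic drop
    new_label = min(r, f)        # relabel so downstream sees post-drop rate
    return forwarded, new_label
```

A flow sending at r = 8 through a link with fair rate f = 4 is forwarded with probability 0.5, so its expected forwarded rate is 4; a flow below the fair rate (r ≤ f) is never dropped and keeps its label.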

SLIDE 20

Algorithm Outline

  • Egress node: remove the state from the packet’s header

SLIDE 21

Example: CSFQ Core Node

  • Assume estimated fair rate f = 4

– flow 1, r = 8 => P = min(1, 4/8) = 0.5

  • expected rate of forwarded traffic 8*P = 4

– flow 2, r = 6 => P = min(1, 4/6) = 0.67

  • expected rate of forwarded traffic 6*P = 4

– flow 3, r = 2 => P = min(1, 4/2) = 1

  • expected rate of forwarded traffic 2*P = 2

[Diagram: core node (10 Mbps, FIFO) receiving flows at 8, 6, and 2 Mbps and forwarding them at 4, 4, and 2 Mbps]
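The numbers in the example above follow directly from the core-node rule; a quick check of the arithmetic, using nothing beyond the slide's values:

```python
# Fair rate f = 4; flows arrive at r = 8, 6, and 2 (Mbps).
f = 4.0
for r in (8.0, 6.0, 2.0):
    p = min(1.0, f / r)       # forwarding probability P = min(1, f/r)
    fwd = round(r * p, 6)     # expected rate of forwarded traffic
    print(f"r={r:g}: P={p:.2f}, forwarded={fwd:g}")
```

Both over-limit flows are throttled to the fair rate 4, while the flow already below it (r = 2) passes untouched.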

SLIDE 22

Simulation Results

  • 1 UDP (10 Mbps) and 31 TCPs sharing a 10 Mbps link

– fair rate 0.31 Mbps

[Diagram: UDP flow #1 at 10 Mbps and TCP flows #2..#32 sharing a 10 Mbps bottleneck link]

SLIDE 23

[Charts: Throughput of TCP and UDP Flows with RED, FRED, FQ, CSFQ — one panel per scheme (CSFQ, FRED, FQ, RED); x-axis: Flow Number (1–31), y-axis: Throughput (Mbps)]

SLIDE 24

Results

  • Complexity

– n – number of (active) flows

  • Accuracy

– the extra service that a flow can receive in CSFQ as compared to FQ is bounded

        FIFO/RED   FRED   FQ         CSFQ
State   O(1)       O(n)   O(n)       O(n) edge / O(1) core
Time    O(1)       O(1)   O(log n)   O(1)

SLIDE 25

Examples

  • Support for congestion control
  • Per flow guaranteed services
SLIDE 26

Guaranteed Services

  • Intserv:

– provides per-flow bandwidth and delay guarantees, and achieves high resource utilization
– supports fine-grained and short-lived reservations
– not scalable

  • Diffserv (Premium Service):

– scalable (on the data path)
– cannot provide low delay guarantees and high resource utilization simultaneously

  • even at low utilization (e.g., 10%) in a medium-size network (e.g., 15 hops) the worst-case queueing delay is > 200 ms

– centralized admission control (e.g., Bandwidth Broker) – not appropriate for short-lived reservations

SLIDE 27

Goal

  • Unicast Intserv guaranteed service semantics
  • Diffserv-like scalability
SLIDE 28

Solution

  • Data path: approximate Jitter Virtual Clock (Jitter-VC) with Core-Jitter Virtual Clock (CJVC)
  • Control path: approximate distributed admission control

[Diagram: reference network running Jitter-VC at every node; SCORE network running CJVC]

SLIDE 29

Theoretical Results

  • CJVC provides the same end-to-end delay guarantees as Jitter-VC (and Weighted Fair Queueing)

  • Admission control: provides the semantics of a hard-state protocol, but…

– typically achieves only 80% link utilization

SLIDE 30

Implementation

  • Problem: where to insert the state?
  • Possible solutions:

– between the link-layer and network-layer headers (e.g., MPLS)
– as an IP option
– find room in the IP header

  • Current implementation (FreeBSD 2.2.6): use 17 bits in the IP header

– 4 bits in the DS field (former TOS)
– 13 bits by reusing the fragment offset
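The 17-bit layout described above can be sketched as simple bit packing. The split (high 4 bits to the DS field, low 13 to the fragment offset) and the function names are illustrative assumptions, not the prototype's actual encoding:

```python
# Sketch: pack a 17-bit DPS state value into the two reused IP header
# fields -- 4 bits of the DS field (former TOS) and the 13-bit fragment
# offset field. Bit assignment here is an assumption for illustration.

DS_BITS, FRAG_BITS = 4, 13

def pack_state(value):
    """Split a 17-bit value across the two reused header fields."""
    assert 0 <= value < (1 << (DS_BITS + FRAG_BITS)), "state must fit in 17 bits"
    ds_part = value >> FRAG_BITS                  # high 4 bits -> DS field
    frag_part = value & ((1 << FRAG_BITS) - 1)    # low 13 bits -> frag offset
    return ds_part, frag_part

def unpack_state(ds_part, frag_part):
    """Reassemble the 17-bit value from the two header fields."""
    return (ds_part << FRAG_BITS) | frag_part

assert unpack_state(*pack_state(0x1ABCD)) == 0x1ABCD
```

Reusing the fragment offset is safe only because the trusted region controls fragmentation; the egress node restores the fields before packets leave the SCORE domain.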

SLIDE 31

Status

  • Working prototype in FreeBSD 2.2.6 that implements:

– Core-Stateless Fair Queueing
– Guaranteed services

  • data path – Core Jitter Virtual Clock
  • control path – distributed admission control
SLIDE 32

Conclusions

  • Diffserv has serious limitations:

– no flow protection
– cannot provide guaranteed services and high resource utilization simultaneously
– no scalable admission control architecture (e.g., Bandwidth Broker)

  • DPS is compatible with Diffserv: it can greatly enhance Diffserv’s functionality while requiring minimal changes

  • Let’s do it in QBone!
SLIDE 33

More Information

http://www.cs.cmu.edu/~istoica/DPS