

SLIDE 1

Rich West (2001)

Adaptive Real-Time Resource Management

Richard West

Boston University Computer Science Department

SLIDE 2

Outline of Talk

■ Problem Statement.
  ■ How to guarantee QoS to applications?
  ■ Variable resource demands / availability.
■ Approach.
■ System mechanisms.
  ■ Dionisys.
■ System policies.
  ■ Dynamic Window-Constrained Scheduling.
■ Conclusions.

SLIDE 3

Problem Statement

■ Distributed, real-time (RT) applications:
  ■ e.g., VE, RT multimedia, tele-medicine, ATR.
  ■ Require QoS guarantees on end-to-end transfer of information.
■ How do we guarantee QoS?
  ■ Need system support to maintain / maximize QoS:
    ■ Policies & mechanisms.
    ■ Adaptive / coordinated resource management.

SLIDE 4

Application Characteristics

■ Dynamic exchanges between processes.
  ■ The information (content & type) to be exchanged changes with time.
■ Variable rates (bursts) of exchanges.
■ Variable resource demands.
  ■ Bandwidth, CPU cycles, memory.
■ Variable QoS requirements on information exchanged.

SLIDE 5

QoS Requirements

■ Delay: e.g., max end-to-end delay, delay variation.
■ Loss-tolerance, fidelity, resolution:
  ■ Minimum degree of detail.
■ Throughput, rate:
  ■ e.g., 30 fps video.
  ■ e.g., min/max updates per second to shared data.
■ Consistency constraints:
  ■ When, with whom semantics.

SLIDE 6

Example Scenario

[Diagram: a video server streaming to two video clients; a distributed video game.]

SLIDE 7

Example: Distributed Video Game

SLIDE 8

Distributed Video Game

■ Requires consistency of shared (tank) objects.
■ Here QoS (and, hence, resource) requirements vary with time based on current state of application.
■ Application-level spatial & temporal semantics.
  ■ Exchange state info only when two objects less than distance d apart.
  ■ Exchange position, orientation and (varying amounts of) graphical info about shared objects based on their distance apart.

SLIDE 9

Example: Video Server

■ QoS requirements: Loss-tolerance and frame rate.
■ Suppose a client requires at least 15fps playback rate but prefers 30fps.
■ If network bandwidth is limited:
  ■ Adapt CPU service.
    ■ e.g., allocate more CPU cycles to compress video info.
  ■ Adapt network service.
    ■ e.g., allow 1 frame in 2 to be dropped.

SLIDE 10

Video Server (continued)

■ If CPU cycles are limited:
  ■ Adapt CPU service.
    ■ If possible, reduce frame generation rate.
  ■ Adapt network service.
    ■ e.g., ensure no frames are now dropped.
■ If CPU and network resources are limited:
  ■ Adapt to new QoS region / requirements if possible! Re-negotiation?

SLIDE 11

Summary of Problem

■ Need to maintain / maximize QoS on end-to-end transfer of information.
■ Varying resource requirements & availability.
■ Static resource allocation too expensive.
  ■ Poor resource utilization & scalability.
  ■ Suppose enough resources are reserved to meet the minimum needs of all applications.
■ How can we do better?

SLIDE 12

Approach

■ Dionisys QoS mechanisms.
  ■ Allow real-time applications to specify:
    ■ How actual service should be adapted to meet required / improved QoS.
    ■ When and where adaptations should occur.
■ Coordinated CPU and network management.
■ Dynamic Window-Constrained Scheduling.

SLIDE 13

Dionisys

■ Key components:
  ■ Service managers (SMs).
  ■ Monitors - influence when to adapt.
  ■ Handlers - influence how to adapt.
  ■ Events.
    ■ Delivered to SMs, where adaptation is needed.
  ■ Event channels.

SLIDE 14

[Architecture diagram: source and destination hosts, each with a CPU SM (scheduler, monitors, handlers), a Network SM (packet scheduling, policing, monitors, handlers) and app-specific SMs (monitors, handlers, application-specific policy) at system and application level; events for application processes flow along a network control path.]

SLIDE 15

[Same architecture diagram, now also showing the QoS attribute path and the data path between source and destination hosts.]

SLIDE 16

[Same architecture diagram, extended with an app-specific SM for buffer management (monitors, handlers, application-specific policy) on the destination host.]

SLIDE 17

Dionisys Key

■ Process: application process.
■ Event channel.
■ QoS attribute channel (shared memory on a single host).
■ Data channel.
■ Service Manager (SM), e.g., CPU SM.
■ SM functions: app-specific monitors, handlers and service policy.
■ Host machine.

SLIDE 18

Service Managers

■ Responsible for:
  ■ Monitoring application-specific service.
  ■ Handling events for service adaptation.
  ■ Providing service to applications.
    ■ Resource allocation.
■ Kernel level threads.

SLIDE 19

Monitors

■ Functions that monitor a specific service.
■ Influence when to adapt service provided to an application.
  ■ e.g., QoS below desired level, or unacceptable.
■ Compiled into objects.
■ Dynamically-linked into target SM address-space.

SLIDE 20

Handlers

■ Functions executed in SMs to decide how to adapt service provided to an application.
  ■ e.g., increase / decrease CPU cycles, or network bandwidth.
■ Compiled into objects.
■ Dynamically-linked into target SM address-space.
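The monitor/handler split above can be sketched as a service manager with pluggable callbacks: monitors decide *when* to adapt (by raising an event), handlers decide *how*. This is a minimal illustrative sketch, not the actual Dionisys API; all names here are invented.

```python
# Sketch of a Dionisys-style service manager with pluggable monitors
# ("when to adapt") and handlers ("how to adapt"). Illustrative only.

class ServiceManager:
    def __init__(self):
        self.monitors = []   # functions: state -> event name or None
        self.handlers = {}   # event name -> function: state -> None

    def attach_monitor(self, fn):
        self.monitors.append(fn)

    def attach_handler(self, event, fn):
        self.handlers[event] = fn

    def cycle(self, state):
        # One monitoring/handling phase: each monitor may raise an
        # event; the matching handler then adapts the service.
        for monitor in self.monitors:
            event = monitor(state)
            if event is not None and event in self.handlers:
                self.handlers[event](state)

# Example: a monitor flags a rate shortfall; a handler boosts the
# (hypothetical) CPU share of the affected application.
sm = ServiceManager()
sm.attach_monitor(lambda s: "below_min" if s["rate"] < s["min_rate"] else None)
sm.attach_handler("below_min", lambda s: s.update(cpu_share=s["cpu_share"] + 0.05))

state = {"rate": 12.0, "min_rate": 15.0, "cpu_share": 0.25}
sm.cycle(state)
```

In the real system the monitors and handlers are compiled objects dynamically linked into the SM's address space; here they are plain Python callables for brevity.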

SLIDE 21

Events

■ Generated when service adaptation is necessary.
■ Delivered to handlers where service needs adapting.
■ Have attributes that influence extent to which service is adapted.
■ “Quality Events”.

SLIDE 22

Event Channels

[Diagram: event channels connecting the monitors M(1)..M(m1) and handlers H(1)..H(h1) of SM 1 with the monitors M(1)..M(m2) and handlers H(1)..H(h2) of SM 2.]

SLIDE 23

Example: Video Server

[Diagram: server host with a CPU SM (scheduler, monitors, handlers), a Network SM (packet scheduling, rate control, monitors, handlers), a memory manager, ring buffers and QoS attributes; client processes run on a remote host.]

SLIDE 24

Example Adaptation Strategies

[Diagram: CPU Service Manager (scheduler, monitors, handlers) and Network Service Manager (packet scheduling, policing, monitors, handlers), with arrows showing downstream, upstream, and intra-SM adaptation paths.]

SLIDE 25

Adaptation Strategies (continued)

■ Upstream adaptation:
  ■ Applied in direction opposing flow of data.
  ■ e.g., feedback congestion control.
■ Downstream adaptation:
  ■ Applied in direction corresponding to flow of data.
  ■ e.g., forward error correction.
■ Intra-SM adaptation:
  ■ Applied to current service manager.
  ■ Lacks coordination between SMs.

SLIDE 26

Adaptation Example: Video Server

■ QoS requirements: Loss-tolerance and frame rate.
■ If network bandwidth is limited:
  ■ Apply upstream adaptation to increase CPU cycles to, e.g., compress video information.
  ■ Apply intra-SM adaptation in the network SM to increase loss-tolerance.

SLIDE 27

Adaptation Example (continued)

■ If CPU cycles are limited:
  ■ Apply intra-SM adaptation in the CPU-SM to reduce, for example, frame (generation) rate.
  ■ Apply downstream adaptation to reduce loss-tolerance.

SLIDE 28

Experimental Scenario - Part 1

■ Server-side processes (one per stream):
  ■ Generate data for streaming to remote clients.
    ■ Stream of MPEG-1 I-frames (160x120 pixels) per generator process.
    ■ Data placed in circular queues in shared memory.
■ QoS attributes associated with each data stream:
  ■ Min / Max / Target frame rate.
■ “Quality” event channels between Network and CPU service managers.

SLIDE 29

Experimental Scenario - Part 2

■ Client-side processes (one per stream):
  ■ Decode and playback incoming frames.
■ SparcStation Ultra-2 170MHz dual-processor server, running Solaris 2.6, connected via switched 100Mbps Ethernet to one client (w/ UDP connection).
■ 3 Streams:
  ■ Stream 1: Target 30fps +/- 10% (3000 frames)
  ■ Stream 2: Target 20fps +/- 10% (2000 frames)
  ■ Stream 3: Target 10fps +/- 20% (1000 frames)
  ■ 3-second exponential idle time every 1000 frames.

SLIDE 30

Adaptation in Video Server

■ (Downstream) CPU SM monitors frame generation rate.
■ (Upstream) Net SM monitors frame transmission rate.
■ Apply adaptation if (monitored rate != target rate).
■ All monitors / SMs run at 10ms intervals.

SLIDE 31

Adaptation Handlers

■ CPU-Level:
  ■ Adjust priorities & time-slices of generator processes by a function of target and monitored service rates.
■ Network-Level:
  ■ Invoke rate control if monitored rate exceeds maximum rate.
  ■ Raise priority of packet stream Si if its service falls below minimum service rate.
    ■ i.e., alter bandwidth allocation (yi-xi) / yi.
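The CPU-level handler above adjusts time-slices "by a function of target and monitored service rates". One plausible such function (a sketch — the gain, bounds and proportional form are illustrative assumptions, not taken from the talk) scales the slice by the relative rate error:

```python
# Illustrative time-slice adjustment for a generator process: scale
# the slice in proportion to the relative rate error. The gain and
# clamping bounds are assumptions for this sketch.

def adjust_timeslice(timeslice_ms, target_rate, monitored_rate,
                     gain=0.5, lo=1.0, hi=100.0):
    """Return a new time-slice as a function of target vs. monitored rate."""
    if monitored_rate <= 0:
        return hi  # starved process: give it the maximum slice
    error = (target_rate - monitored_rate) / target_rate
    new_slice = timeslice_ms * (1.0 + gain * error)
    return max(lo, min(hi, new_slice))

# A process generating 20 fps against a 30 fps target gets a longer slice.
print(adjust_timeslice(10.0, 30.0, 20.0))  # ~11.67 ms
```

A process running above target would get a shorter slice by the same formula, freeing cycles for under-serviced streams.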

SLIDE 32

Adaptive Rate Control Block Diagram

[Block diagram: a PID controller compares the target service rate with the actual transmission rate reported by the monitor; sensors on the buffer feed the CPU SM and Net SM handlers, driving upstream and downstream adaptation.]
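The block diagram describes a PID-style feedback loop: the error between target and actual rate drives the handlers. A minimal sketch of such a controller (the gains and time-step are illustrative assumptions, not values from the talk):

```python
# Sketch of the PID controller in the rate-control loop: the error
# between target and actual transmission rate produces a correction
# that the upstream (CPU) and downstream (network) handlers apply.
# Gains are illustrative assumptions.

class PIDController:
    def __init__(self, kp=0.6, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_rate, actual_rate, dt=0.01):
        error = target_rate - actual_rate        # fps shortfall (+) or surplus (-)
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Positive output -> request more CPU cycles (upstream) or
        # relax the network rate limit (downstream); negative -> throttle.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PIDController()
correction = pid.update(target_rate=30.0, actual_rate=20.0)
```

Running one controller per stream at the 10ms monitoring interval matches the structure of the diagram, though the actual controller parameters used in the experiments are not given here.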

SLIDE 33

Quality Functions - Example

■ Can embed quality functions into handlers.
■ Service adaptation is a function of actual and required service of all applications.

[Graph: quality Q_S as a function of rate (fps).]

SLIDE 34

Non-Adaptive Rate Allocating Service

[Graph: actual rate (network level) vs. time (seconds) for streams with targets 10, 20 and 30 fps.]

SLIDE 35

Non-Adaptive Rate Controlled Service

[Graph: actual rate (network level) vs. time (seconds) for streams with targets 10, 20 and 30 fps.]

SLIDE 36

Network Rate - Upstream Adaptation

[Graph: actual rate (network level) vs. time (seconds) under upstream adaptation, for streams with targets 10, 20 and 30 fps.]

SLIDE 37

Network Rate - Downstream Adaptation

[Graph: actual rate (network level) vs. time (seconds) under downstream adaptation, for streams with targets 10, 20 and 30 fps.]

SLIDE 38

Comparison of Rate Control Methods

[Bar chart: % of time on target, in range, and above max rate, comparing non-adaptive rate control, downstream adaptation, and upstream adaptation.]

SLIDE 39

Rate Control

■ Upstream adaptation leads to poorer rate control.
  ■ Longer time to reach steady state.
  ■ More prominent “sawtooth” effect as target rate is tracked.
  ■ Larger fluctuations of actual rate from target.
■ Better tracking of target rate for more quality-critical streams.

SLIDE 40

Upstream Adaptation - 10fps

[Graph: buffered frames and cumulative missed deadlines vs. time (seconds).]

SLIDE 41

Downstream Adaptation - 10fps

[Graph: buffered frames and cumulative missed deadlines vs. time (seconds).]

SLIDE 42

Buffering

■ Upstream adaptation leads to greater variance in buffer usage, compared to downstream / intra-SM adaptation.
  ■ Network monitor triggers “request” for generation of frames “too late”, that is, after the buffer has emptied.
  ■ Effect of an event being raised is not seen until the next “phase” of monitoring and handling.

SLIDE 43

Missed Deadlines

■ Higher buffering variance and, consequently, higher queueing delays imply potentially higher consecutive numbers (“bursts”) of missed deadlines.
■ Downstream adaptation can reduce the number of consecutive deadlines missed at any time by:
  ■ Providing more accurate (responsive) service.
  ■ Effecting changes “more quickly” (in the current event/monitoring cycle) at the network-level to compensate for inadequacies in service at the CPU-level.

SLIDE 44

Summary

■ Dionisys QoS mechanisms allow real-time applications to specify:
  ■ How actual service should be adapted to meet required / improved QoS.
  ■ When and where adaptations should occur.
■ Flexible approach to run-time service adaptation.

SLIDE 45

What About Service Policies?

■ Certain applications can tolerate lost / late information.
■ Restrictions on:
  ■ when losses of info can occur.
  ■ when info must be generated.
■ Need real-time scheduling of:
  ■ threads / processes (info generators).
  ■ packets (info carriers).

SLIDE 46

DWCS

■ Dynamic Window-Constrained Scheduling of:
  ■ Threads
    ■ “Guarantee” minimum quantum of service every fixed window of service time.
  ■ Packets
    ■ “Guarantee” at most x late / lost packets every window of y packets.

SLIDE 47

DWCS Packet Scheduling

■ Two attributes per packet stream, Si:
  ■ Request period, Ti.
    ■ Defines interval between deadlines of consecutive pairs of packets in Si.
  ■ Window-constraint, Wi = xi/yi.
    ■ Essentially, a “loss-tolerance”.
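The two per-stream attributes can be captured as a small record (field names here are illustrative, not from an actual implementation):

```python
# The two per-stream DWCS attributes, sketched as a record.

from dataclasses import dataclass

@dataclass
class Stream:
    T: int   # request period: spacing between consecutive packet deadlines
    x: int   # numerator: max late/lost packets tolerated per window
    y: int   # denominator: window size in packets

    @property
    def W(self):
        """Window-constraint (loss-tolerance), W = x / y."""
        return self.x / self.y

s1 = Stream(T=2, x=1, y=2)
print(s1.W)  # 0.5: at most 1 packet may be late/lost in every window of 2
```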

SLIDE 48

“x out of y” Guarantees

■ e.g., Stream S1 with C1=1, T1=2 and W1=1/2.
■ Feasible schedule if “x out of y” guarantees are met.

[Timeline: S1 is serviced once in each sliding window of two slots, over time t = 1..16.]

SLIDE 49

DWCS - Original Conceptual View

[Diagram: EDF-ordered queues feeding a network pipe; higher priority = lower loss-tolerance.]

SLIDE 50

(x,y)-firm DWCS: Pairwise Packet Ordering Table

Precedence amongst pairs of packets:

  • Lowest window-constraint first.
  • Same non-zero window-constraints: order EDF.
  • Same non-zero window-constraints & deadlines: order lowest window-numerator first.
  • Zero window-constraints and denominators: order EDF.
  • Zero window-constraints: order highest window-denominator first.
  • All other cases: first-come-first-serve.
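The ordering table above can be expressed as a pairwise precedence test. This is a sketch: packets are represented as illustrative `(x, y, deadline, arrival)` tuples, and the current window-constraint is taken as 0 when the denominator is 0.

```python
# Sketch of the (x,y)-firm pairwise ordering rules. Returns True if
# packet a should be serviced before packet b. Tuple layout
# (x, y, deadline, arrival) is an assumption for this sketch.

def precedes(a, b):
    xa, ya, da, ta = a
    xb, yb, db, tb = b
    Wa = xa / ya if ya else 0.0
    Wb = xb / yb if yb else 0.0
    if Wa != Wb:                       # lowest window-constraint first
        return Wa < Wb
    if Wa > 0:                         # equal non-zero window-constraints
        if da != db:
            return da < db             # order EDF
        if xa != xb:
            return xa < xb             # lowest window-numerator first
    else:                              # both window-constraints zero
        if ya == 0 and yb == 0:
            if da != db:
                return da < db         # zero denominators: order EDF
        elif ya != yb:
            return ya > yb             # highest window-denominator first
    return ta < tb                     # all other cases: first-come-first-serve
```

Sorting the head packets of all streams with this comparison yields the service order of the original conceptual view.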
SLIDE 51

Example: “Fair” Scheduling

[Schedule trace: window-constraints over time for S1 (1/2, 1/1, 1/2, ...), S2 (3/4, 2/3, 2/2, ...) and S3 (6/8, 5/7, 4/6, ...), with the resulting service order over slots 1-16.]

SLIDE 52

Example: Variable Length Packets

[Schedule trace for variable-length packets: service order and evolving window-constraints of S1 and S2 over time slots 1-32.]

SLIDE 53

Window-Constraint Adjustment (A)

■ For stream Si whose head packet is serviced before its deadline:
  ■ if (yi’ > xi’) then yi’ = yi’ - 1;
  ■ else if (yi’ = xi’) and (xi’ > 0) then
    ■ xi’ = xi’ - 1; yi’ = yi’ - 1;
  ■ if (xi’ = yi’ = 0) or (Si is tagged) then
    ■ xi’ = xi; yi’ = yi;
  ■ if (Si is tagged) then reset tag;

SLIDE 54

Window-Constraint Adjustment (B)

■ For stream Sj whose head packet misses its deadline:
  ■ if (xj’ > 0) then
    ■ xj’ = xj’ - 1; yj’ = yj’ - 1;
    ■ if (xj’ = yj’ = 0) then xj’ = xj; yj’ = yj;
  ■ else if (xj’ = 0) and (yj > 0) then
    ■ violation! One solution…
      ■ yj’ = yj’ + ε;
      ■ Tag Sj with a violation;

SLIDE 55

DWCS Algorithm Outline

■ Find stream Si with highest priority (see Table).
■ Service head packet of stream Si.
■ Adjust Wi’ according to (A).
■ Deadlinei = Deadlinei + Ti.
■ For each stream Sj missing its deadline:
  ■ While deadline is missed:
    ■ Adjust Wj’ according to (B).
    ■ Drop head packet of stream Sj if droppable.
    ■ Deadlinej = Deadlinej + Tj.
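The outline above, together with adjustments (A) and (B), can be sketched as a compact scheduling step. This is a simplified illustration: packets are assumed droppable, ε in the violation case is taken as 1, streams are plain dicts with invented field names, and priority here is reduced to "lowest current window-constraint, ties broken EDF" rather than the full ordering table.

```python
# Simplified DWCS step built from the algorithm outline, with
# window-constraint adjustments (A) for the serviced stream and
# (B) for streams missing their deadlines. Illustrative only.

def adjust_serviced(s):                 # adjustment (A)
    if s["y"] > s["x"]:
        s["y"] -= 1
    elif s["y"] == s["x"] and s["x"] > 0:
        s["x"] -= 1; s["y"] -= 1
    if (s["x"] == 0 and s["y"] == 0) or s["tagged"]:
        s["x"], s["y"] = s["X"], s["Y"]  # restore original constraint
        s["tagged"] = False              # reset violation tag

def adjust_missed(s):                   # adjustment (B)
    if s["x"] > 0:
        s["x"] -= 1; s["y"] -= 1
        if s["x"] == 0 and s["y"] == 0:
            s["x"], s["y"] = s["X"], s["Y"]
    else:                               # violation: relax constraint, tag
        s["y"] += 1                     # epsilon = 1 in this sketch
        s["tagged"] = True

def dwcs_step(streams, now):
    # Pick highest priority: lowest current window-constraint, then EDF.
    s = min(streams, key=lambda t: (t["x"] / t["y"] if t["y"] else 0.0,
                                    t["deadline"]))
    adjust_serviced(s)                  # head packet of s is serviced
    s["deadline"] += s["T"]
    for other in streams:               # handle deadline misses
        if other is s:
            continue
        while other["deadline"] <= now:
            adjust_missed(other)
            other["deadline"] += other["T"]  # head packet dropped
    return s

streams = [
    {"x": 1, "y": 2, "X": 1, "Y": 2, "T": 2, "deadline": 2, "tagged": False},
    {"x": 3, "y": 4, "X": 3, "Y": 4, "T": 2, "deadline": 2, "tagged": False},
]
served = dwcs_step(streams, now=0)      # W=1/2 beats W=3/4
```

Repeating `dwcs_step` per service slot reproduces the spirit of the traces on the surrounding slides, though a real implementation would use the full pairwise ordering table and the heap structures described later.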

SLIDE 56

DWCS Implementation

[Diagram: per-stream packet queues; the next packet is selected from the head packets of each stream via a loss-tolerance heap and a deadline heap, and serviced packets return to the back of their queue.]

SLIDE 57

Scheduling Overhead

Scheduling overhead (µs) vs. number of streams:

Streams | Without heaps | With heaps
120     | 230           | 57
240     | 510           | 63
360     | 760           | 76
480     | 999           | 87
600     | 1307          | 122
720     | 1555          | 157
840     | 1914          | 208

SLIDE 58

Fair Scheduling: bandwidth ratios 1,1,2,4; W’s = 7/8, 14/16, 6/8, 4/8

[Graph: bandwidth (Kbps) vs. time (seconds); the DWCS and SFQ curves coincide for (s1,s2), for s3 and for s4.]

SLIDE 59

Mixed Traffic: W1=1/3, W2=2/3, W3=0/100, T1=1, T2=1, T3=∞

[Graph: bandwidth (Kbps) vs. time (seconds) for s1, s2 and s3.]

SLIDE 60

Mixed Traffic: W1=1/3, W2=2/3, W3=0/1500, T1=1, T2=1, T3=∞

[Graph: bandwidth (Kbps) vs. time (seconds) for s1, s2 and s3.]

SLIDE 61

Loss-Tolerance Violations (T=500, C=1)

[Graph: number of loss-tolerance violations vs. number of streams, for FIFO and for DWCS with window-constraints 1/80 through 1/150, plus the DWCS total.]

SLIDE 62

DWCS Spreads Losses

■ Here, a loss-tolerance of 1/3 is violated more times with DWCS than FIFO, but losses are spread evenly.

[Timelines comparing DWCS and FIFO packet drops (X) over slots 1-16.]

SLIDE 63

Approximation Overheads (T=500)

[Graph: scheduler overhead (µs) vs. number of streams (120-840), for 1, 2, 4, 8 and 12 cycles between checking deadlines.]

SLIDE 64

Approximation Overheads (T=200)

[Graph: scheduler overhead (µs) vs. number of streams (120-840), for 1, 2, 4, 8 and 12 cycles between checking deadlines.]

SLIDE 65

Deadlines Missed (T=500)

[Graph: deadlines missed vs. number of streams (120-840), for 1, 2, 4, 8 and 12 cycles between checking deadlines.]

SLIDE 66

Deadlines Missed (T=200)

[Graph: deadlines missed vs. number of streams (120-840), for 1, 2, 4, 8 and 12 cycles between checking deadlines.]

SLIDE 67

Loss-Tolerance Violations (T=500)

[Graph: loss-tolerance violations vs. number of streams (120-840), for 1, 2, 4, 8 and 12 cycles between checking deadlines.]

SLIDE 68

Loss-Tolerance Violations (T=200)

[Graph: loss-tolerance violations vs. number of streams (120-840), for 1, 2, 4, 8 and 12 cycles between checking deadlines.]

SLIDE 69

DWCS - Recent Developments

■ Support for (x,y)-hard deadlines as opposed to (x,y)-firm deadlines.
■ Bounded service delay.
  ■ Guaranteed service in a finite window of time.
■ Optimal (100%) utilization bound for fixed-length packets or (variable-length preemptive) threads.
■ Replacement CPU scheduler in Linux kernel.
  ■ www.cc.gatech.edu/~west/dwcs.html

SLIDE 70

(x,y)-Hard DWCS: Pairwise Packet Ordering Table

Precedence amongst pairs of packets:

  • Earliest deadline first (EDF).
  • Same deadlines: order lowest window-constraint first.
  • Equal deadlines and zero window-constraints: order highest window-denominator first.
  • Equal deadlines and equal non-zero window-constraints: order lowest window-numerator first.
  • All other cases: first-come-first-serve.
SLIDE 71

EDF versus DWCS

[Schedule trace comparing EDF and DWCS over slots 1-16 for streams s1 (1/2(1), 1/1(2), ...), s2 (3/4(1), 2/3(2), ...) and s3 (6/8(1), 5/7(2), ...).]

SLIDE 72

DWCS Delay Characteristics

■ If feasible schedule, max delay of service to Si is:
  ■ (xi + 1)Ti - Ci.
  ■ Note: every time Si is not serviced for Ti time units, xi’ is decremented by 1 until it reaches 0.
■ If no feasible schedule, max delay of service to Si is still bounded.
  ■ Function of time to have: earliest deadline, lowest window-constraint, highest window-denominator.

SLIDE 73

Bandwidth Utilization

■ Minimum utilization factor of stream Si is:

  Ui = (yi - xi)·Ci / (yi·Ti)

  ■ i.e., min required fraction of bandwidth.
■ Least upper bound on utilization is min of utilization factors for all streams that fully utilize bandwidth.
  ■ i.e., guarantees a feasible schedule.
■ L.U.B. is 100% in a slotted-time system.

SLIDE 74

Scheduling Test

■ If:

  Σ (i=1..n) (1 - xi/yi)·(Ci/Ti) ≤ 1.0

  and Ci=K, Ti=qK for all i, where q is 1,2,…etc, then a feasible schedule exists.
■ For variable length packets:
  ■ let Ci <= K for all i, or fragment/combine packets & translate service constraints.
    ■ e.g., ATM SAR layer.
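The schedulability test above translates directly into code. This sketch assumes, as the slide does, that Ci = K and Ti = qK for all streams; function and parameter names are illustrative.

```python
# Sufficient schedulability test for DWCS:
# sum over streams of (1 - x_i/y_i) * C_i / T_i <= 1.0,
# under the slide's assumption C_i = K, T_i = q*K for all i.

def feasible(streams, K, q):
    """streams: list of (x, y) window-constraint pairs.
    Returns True if the sufficient condition for a feasible
    schedule holds."""
    C, T = K, q * K
    U = sum((1 - x / y) * C / T for (x, y) in streams)
    return U <= 1.0

# Two streams with W = 1/2, unit packets, T = 2: U = 0.5 -> feasible.
print(feasible([(1, 2), (1, 2)], K=1, q=2))  # True
```

Note each term (1 - xi/yi)·Ci/Ti is exactly the minimum utilization factor Ui = (yi - xi)Ci/(yi·Ti) from the previous slide, so the test simply checks that the required bandwidth fractions sum to at most 100%.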

SLIDE 75

Simulation Scenario

■ 8 classes of packet streams:
  ■ Varied number of streams n, uniformly distributed amongst traffic classes.
■ Total of a million packets serviced.

Wi | 1/10 | 1/20 | 1/30 | 1/40 | 1/50 | 1/60 | 1/70 | 1/80
Ti | 400  | 400  | 480  | 480  | 560  | 560  | 640  | 640

SLIDE 76

Bandwidth Utilization Results

n   | D      | V      | U      | (n/8)·Σ(i=1..8) Ci/Ti
480 | -      | -      | 0.9156 | 0.9518
496 | -      | -      | 0.9461 | 0.9835
504 | -      | -      | 0.9613 | 0.9994
512 | 15152  | -      | 0.9766 | 1.0152
520 | 30990  | -      | 0.9919 | 1.0311
528 | 46828  | 7038   | 1.0071 | 1.047
544 | 78528  | 31873  | 1.0376 | 1.0787
560 | 110240 | 53455  | 1.0681 | 1.1104
640 | 268800 | 148143 | 1.2207 | 1.269

SLIDE 77

(x,y)-hard Linux CPU DWCS: Average Violations per Process

[Graph: average violations per process vs. utilization, under quiescent (fft) and quiescent (io) loads.]

SLIDE 78

(x,y)-hard Linux CPU DWCS: Average Violations per Process

[Graph: average violations per process vs. utilization, under quiescent (io) and quiescent (fft) loads.]

SLIDE 79

[Graph: average violations per process vs. utilization, for I/O-bound and CPU-bound workloads.]

SLIDE 80

(x,y)-hard Linux CPU DWCS: Scheduling Latency

[Bar chart: average scheduling latency (µs) per (tasks, x, y, period) configuration, comparing the standard Linux scheduler and DWCS under fft and io loads.]

SLIDE 81

(x,y)-hard Linux CPU DWCS: % Execution Time in Violation

[Graph: % of execution time in violation vs. utilization, under io and fft loads.]

SLIDE 82

[Graph: % of time in violation vs. utilization, for CPU-bound and I/O-bound workloads.]

SLIDE 83

Conclusions

■ Flexible approach to run-time service adaptation.
  ■ When, where and how to adapt.
  ■ Coordinated resource management.
  ■ Dionisys “quality events”, monitors, handlers etc.
■ DWCS guarantees explicit loss and delay constraints for real-time / multimedia applications.

SLIDE 84

Current & Future Work

■ Linux kernel-level implementation of Dionisys mechanisms.
■ Cluster-wide coordination of resources.
■ Language support for “QoS safety”.
■ Stability analysis.
■ Real-time “batched” events in Linux – “Ecalls”.
■ Switch / co-processor implementation of DWCS.
  ■ Scheduling variable-length packets.

SLIDE 85

Related Work

■ QoS Architectures: QoS-A (Campbell), Washington Univ. (Gopalakrishna & Parulkar), QoS Broker (Nahrstedt et al), U. Michigan (Abdelzaher, Shin), QuO (BBN) + more…
■ QoS Specification/Translation: Tenet (Ferrari), EPIQ (Illinois).
■ QoS Evaluation: Rewards (Abdelzaher), Value fns (Jensen), Payoffs (Kravets).
■ System Service Extensions: SPIN (U. Washington), Exokernel (MIT).

SLIDE 86

Scheduling Related Work

■ Fair Scheduling: WFQ/WF2Q (Shenker, Keshav, Bennett, Zhang etc), SFQ (Goyal et al), EEVDF/Proportional Share (Stoica, Jeffay et al).
■ (m,k) Deadline Scheduling: Distance-Based Priority (Hamdaoui & Ramanathan), Dual-Priority Scheduling (Bernat & Burns), Skip-Over (Koren & Shasha).
■ Pinwheel Scheduling: Holte, Baruah etc.
■ Other multimedia scheduling: SMART (Nieh and Lam).

SLIDE 87

Related Research Papers

■ Quality Events: A Flexible Mechanism for Quality of Service Management, RTAS 2001.
■ Analysis of a Window-Constrained Scheduler for Real-Time and Best-Effort Traffic Streams, RTSS 2000.
■ Dynamic Window-Constrained Scheduling for Multimedia Applications, ICMCS’99.
■ Scalable Scheduling Support for Loss and Delay-Constrained Media Streams, RTAS’99.
■ Exploiting Temporal and Spatial Constraints on Distributed Shared Objects, ICDCS’97.