

SLIDE 1

10/12/2019

Definitions

A schedule σ is said to be feasible if it satisfies a set of constraints. A task set Γ is said to be feasible if there exists an algorithm that generates a feasible schedule for Γ. A task set Γ is said to be schedulable with an algorithm A if A generates a feasible schedule for Γ.

Examples of constraints:

  • Timing constraints: activation, period, deadline, jitter.
  • Precedence: order of execution between tasks.
  • Resources: synchronization for mutual exclusion.

Feasibility vs. schedulability

[Venn diagram: within the space of all task sets, the feasible task sets form a subset; the task sets schedulable with algorithm A and those schedulable with algorithm B are subsets of the feasible ones; infeasible task sets lie outside.]

The scheduling problem

Given a set Γ of n tasks, a set P of p processors, and a set R of r resources, find an assignment of P and R to Γ that produces a feasible schedule under a set of constraints.

[Diagram: Γ, P, and R are fed to the scheduling algorithm, which produces a feasible schedule subject to the constraints.]

Complexity

Why do we care about complexity? In 1975, Garey and Johnson showed that the general scheduling problem is NP-hard. In practice, this means that the time needed to find a feasible schedule grows exponentially with the number of tasks. Fortunately, polynomial-time algorithms can be found under particular conditions.

Example: consider an application with n = 30 tasks on a processor in which the elementary step takes 1 μs, and three algorithms with the following complexity:

  • A1: O(n)  →  30 μs
  • A2: O(n⁸)  →  182 hours
  • A3: O(8ⁿ)  →  40,000 billion years
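The growth figures above can be reproduced directly; a minimal sketch (the 1 μs elementary step is the slide's assumption):

```python
# Sketch: how the three complexity classes on the slide translate into
# running times for n = 30 tasks, assuming one elementary step takes 1 us.
n = 30
step_us = 1.0  # microseconds per elementary step (assumption from the slide)

t1 = n * step_us         # A1: O(n)
t2 = (n ** 8) * step_us  # A2: O(n^8)
t3 = (8 ** n) * step_us  # A3: O(8^n)

print(f"A1: {t1:.0f} us")                          # 30 us
print(f"A2: {t2 / 3.6e9:.0f} hours")               # ~182 hours
print(f"A3: {t3 / (3.6e9 * 24 * 365):.2e} years")  # ~4e13 years
```

The exponential case dwarfs the others: O(8ⁿ) at 1 μs per step already exceeds the age of the universe for n = 30.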

SLIDE 2

Simplifying assumptions

  • Single processor
  • Homogeneous task sets
  • Fully preemptive tasks
  • Simultaneous activations
  • No precedence constraints
  • No resource constraints

Algorithm taxonomy

  • Preemptive vs. Non-preemptive
  • Static vs. Dynamic
  • On-line vs. Off-line
  • Optimal vs. Heuristic

Static vs. Dynamic

Static: scheduling decisions are taken based on fixed parameters, statically assigned to tasks before activation.

Dynamic: scheduling decisions are taken based on parameters that can change with time.

Off-line vs. On-line

Off-line: all scheduling decisions are taken before task activation; the schedule is stored in a table (table-driven scheduling).

On-line: scheduling decisions are taken at run time on the set of active tasks.

Optimal vs. Heuristic

Optimal: they generate a schedule that minimizes a cost function, defined based on an optimality criterion.

Heuristic: they generate a schedule according to a heuristic function that tries to satisfy an optimality criterion, but there is no guarantee of success.

Optimality criteria

  • Feasibility: find a feasible schedule if one exists.
  • Minimize the maximum lateness.
  • Minimize the number of deadline misses.
  • Assign a value to each task, then maximize the cumulative value of the feasible tasks.

SLIDE 3

Task set assumptions

We consider algorithms for different types of tasks:

  • Single-job tasks (one-shot): tasks with a single activation (not recurrent).
  • Periodic tasks: recurrent tasks regularly activated by a timer (each task potentially generates infinitely many jobs).
  • Aperiodic/Sporadic tasks: recurrent tasks irregularly activated by events (each task potentially generates infinitely many jobs).
  • Mixed task sets.

Graham’s Notation

( α | β | γ )

  • α denotes the number of processors
  • β denotes the constraints on tasks
  • γ denotes the optimality criterion

Examples:

( 1 | preem | Ravg ): uniprocessor algorithm for preemptive tasks that minimizes the average response time.

( 2 | sync | Lmax ): dual-core algorithm for synchronous tasks that minimizes the maximum lateness.

( 4 | preem | Lmax ): quad-core algorithm for preemptive tasks that minimizes the maximum lateness.

Classical scheduling policies

  • First Come First Served
  • Shortest Job First
  • Priority Scheduling
  • Round Robin

These policies are not suited for real-time systems.

First Come First Served

It assigns the CPU to tasks based on their arrival times (intrinsically non-preemptive).

[Diagram: tasks enter a FIFO ready queue in arrival order (a1…a4) and are dispatched to the CPU one at a time (start times s2…s4, finishing time f4).]

FCFS is very unpredictable: response times strongly depend on task arrivals. In the example below, the computation times (roughly C1 = 20, C2 = 6, C3 = 2) are inferred from the figure:

  • If the tasks arrive in the order τ1, τ2, τ3, then R1 = 20, R2 = 26, R3 = 26.
  • If they arrive in the reverse order τ3, τ2, τ1, then R3 = 2, R2 = 8, R1 = 26.
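The order dependence above is easy to demonstrate; a minimal FCFS sketch (arrival offsets are made up, not from the slide):

```python
# A minimal FCFS simulator: tasks run to completion in arrival order;
# the response time is R_i = finishing time - arrival time.
def fcfs_response_times(tasks):
    """tasks: list of (arrival, computation) tuples."""
    t = 0.0
    resp = []
    for a, c in sorted(tasks, key=lambda x: x[0]):
        start = max(t, a)  # CPU may be idle until the task arrives
        t = start + c      # non-preemptive: run to completion
        resp.append(t - a) # response time
    return resp

# Same three computation times, two arrival orders:
print(fcfs_response_times([(0, 20), (1, 6), (2, 2)]))  # long task first
print(fcfs_response_times([(0, 2), (1, 6), (2, 20)]))  # short task first
```

With the long task first, every later task inherits its delay; reversing the arrival order shrinks all but one response time.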

Shortest Job First (SJF)

It selects the ready task with the shortest computation time.

  • Static (Ci is a constant parameter)
  • It can be used on-line or off-line
  • Can be preemptive or non-preemptive
  • It minimizes the average response time
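The last property can be checked numerically; a minimal sketch for simultaneously activated, non-preemptive tasks (computation times are made up):

```python
# Sketch: average response time of a given execution order when all tasks
# arrive at t = 0 and run to completion.
def avg_response(order):
    """order: list of computation times, executed in the given order."""
    t, total = 0, 0
    for c in order:
        t += c      # the task finishes at time t
        total += t  # with arrival at 0, response time equals finishing time
    return total / len(order)

jobs = [20, 6, 2]
print(avg_response(jobs))          # FCFS in the given order
print(avg_response(sorted(jobs)))  # SJF: shortest first gives the minimum
```

Sorting by computation time (SJF) never yields a larger average than any other order, which is the exchange-argument result proved on the next slide.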

SLIDE 4

SJF - Optimality

Exchange argument: take any schedule σ in which a long task L is executed immediately before a shorter task S, and build σ' by swapping the two. The pair finishes at the same time (fL' = fS) while fS' < fL, so fS' + fL' ≤ fS + fL, and the average response time cannot increase:

    R̄(σ') = (1/n) Σi (fi' − ri)  ≤  (1/n) Σi (fi − ri) = R̄(σ)

SJF - Optimality

Repeating the exchange transforms any schedule σ into the SJF schedule σ* = σSJF without increasing the average response time:

    R̄(σ) ≥ R̄(σ') ≥ R̄(σ'') ≥ … ≥ R̄(σ*)

Hence R̄(σSJF) is the minimum average response time achievable by any algorithm.

Is SJF suited for Real-Time?

SJF is not optimal in the sense of feasibility:

[Timeline example with deadlines d1, d2, d3: an algorithm A ≠ SJF executes τ1, τ2, τ3 and meets all deadlines, whereas SJF runs the shorter tasks first and causes a deadline miss.]

Priority Scheduling

  • Each task has a priority Pi, typically Pi ∈ [0, 255].
  • The task with the highest priority is selected for execution.
  • Tasks with the same priority are served FCFS.

NOTE: with pi ∝ 1/Ci, priority scheduling behaves like SJF; with pi ∝ 1/ai, it behaves like FCFS.

  • Problem: starvation. Low-priority tasks may experience long delays due to the preemption of high-priority tasks.
  • A possible solution: aging. Priority increases with waiting time.

Round Robin

The ready queue is served FCFS, but each task τi cannot execute for more than Q time units (Q = time quantum). When Q expires, τi is put back in the queue.

[Diagram: a circular ready queue feeds the CPU; a task whose quantum Q expires re-enters the queue.]

SLIDE 5

Round Robin

Let n be the number of tasks in the system. Round Robin behaves like time sharing: each task runs as if it were executing alone on a virtual processor n times slower than the real one. Ignoring context-switch overhead, the response time of task τi is

    Ri = (Ci / Q) · nQ = n Ci

Round Robin

  • If Q > max(Ci), then RR reduces to FCFS.
  • If Q is in the order of the context-switch time δ, the overhead dominates: each quantum actually costs Q + δ, so

    Ri = (Ci / Q) · n(Q + δ) = n Ci (1 + δ/Q)
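The two formulas above can be checked numerically; a minimal sketch (task parameters are made up):

```python
# Sketch: Round Robin response time with context-switch overhead delta,
# using R_i = (C_i / Q) * n * (Q + delta) from the slide.
def rr_response(c_i, n, q, delta=0.0):
    """Response time of a task with computation time c_i among n tasks."""
    return (c_i / q) * n * (q + delta)

# With no overhead the quantum cancels out: R_i = n * C_i.
print(rr_response(10, n=4, q=2))           # 40.0
# With delta = Q the effective cost per quantum doubles.
print(rr_response(10, n=4, q=2, delta=2))  # 80.0
```

Note how the quantum Q disappears from the overhead-free formula but directly scales the penalty once δ is comparable to Q.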

Multi-Level Scheduling

[Diagram: three ready queues feed the CPU in priority order: high priority for system tasks (priority scheduling), medium priority for interactive tasks (Round Robin), low priority for batch tasks (FCFS).]

Real-Time Algorithms

They can schedule tasks either by relative deadlines Di (static) or by absolute deadlines di (dynamic), where di = ri + Di.

SLIDE 6

Earliest Due Date (EDD)

Algorithm [Jackson '55]

( 1 | sync | Lmax )

  • Order the ready queue by increasing deadline.

Assumptions:

  • All the tasks are simultaneously activated.
  • Preemption is not needed.
  • Static (Di is fixed).

Property:

  • It minimizes the maximum lateness (Lmax).

Lateness

    Li = fi − di

[Figure: a task finishing after its deadline (fi > di) has Li > 0; a task finishing before it has Li < 0.]

Maximum Lateness

    Lmax = maxi (Li)

If Lmax < 0, then no task exceeds its deadline.

EDD - Optimality

Exchange argument: let σ be a schedule in which a task B is executed immediately before a task A with an earlier deadline (da < db), and let the maximum lateness of the pair be Lmax = La = fa − da. Swapping the two tasks gives σ' with fb' = fa, so:

    La' = fa' − da < fa − da
    Lb' = fb' − db = fa − db < fa − da

Hence L'max < Lmax: each swap toward EDD order cannot increase the maximum lateness.

EDD - Optimality

Repeating the exchange transforms any schedule σ into the EDD schedule σ* = σEDD without increasing the maximum lateness:

    Lmax(σ) ≥ Lmax(σ') ≥ Lmax(σ'') ≥ … ≥ Lmax(σ*)

Hence Lmax(σEDD) is the minimum value achievable by any algorithm.

EDD guarantee test (off-line)

With simultaneous activation and tasks ordered by increasing deadline, task τi finishes at

    fi = Σ(k=1..i) Ck

A task set Γ is feasible iff ∀i: fi ≤ di, that is:

    ∀i = 1, …, n:   Σ(k=1..i) Ck ≤ Di

SLIDE 7

Earliest Deadline First (EDF)

Algorithm [Horn '74]

( 1 | preem | Lmax )

  • Order the ready queue by increasing absolute deadline.

Assumptions:

  • Tasks can be activated dynamically.
  • Dynamic algorithm (di depends on ai).
  • Tasks can be preempted at any time.

Property:

  • It minimizes the maximum lateness (Lmax).

EDF Example

[Timeline example: EDF preempts the running task whenever a newly activated job has an earlier absolute deadline.]

EDF Guarantee test (on-line)

Let ci(t) denote the remaining computation time of task τi at time t, with the active tasks ordered by increasing absolute deadline. The task set is schedulable from time t iff

    ∀i = 1, …, n:   Σ(k=1..i) ck(t) ≤ di − t
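The on-line test above can be sketched as an acceptance check over the active tasks (values are made up):

```python
# Sketch of the EDF on-line acceptance test: at time t, sort active tasks
# by absolute deadline and verify that each prefix of remaining work fits
# before the corresponding deadline.
def edf_schedulable(active, t):
    """active: list of (c_remaining, d_absolute); True iff all fit after t."""
    work = 0
    for c, d in sorted(active, key=lambda x: x[1]):  # EDF order
        work += c         # cumulative remaining computation
        if work > d - t:
            return False  # this task cannot finish by its deadline
    return True

print(edf_schedulable([(2, 5), (3, 10)], t=0))  # True: 2 <= 5 and 5 <= 10
print(edf_schedulable([(2, 5), (3, 10)], t=6))  # False: deadline 5 has passed
```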

Complexity

EDD
  Scheduler (queue ordering): O(n log n)
  Feasibility test (guarantee test): O(n)

EDF
  Scheduler (insertion in the queue): O(n)
  Feasibility test (guarantee of a single task): O(n)

EDF optimality

In the sense of feasibility [Dertouzos '74]: an algorithm A is optimal in the sense of feasibility if it generates a feasible schedule whenever one exists.

Demonstration method: it is sufficient to prove that, given an arbitrary feasible schedule, the schedule generated by EDF is also feasible.

Dertouzos Transformation

    σ(t) = task executing at time t
    E(t) = task with the earliest absolute deadline at time t
    tE   = time at which E(t) is executed

    for (t = 0; t <= Dmax - 1; t++)
        if (σ(t) != E(t)) {
            σ(tE) = σ(t);
            σ(t)  = E(t);
        }

[Timeline at t = 4, tasks τ1…τ4: the slice of the running task σ(t) = τ4 is exchanged with the slice of E(t) = τ2, executed at tE = 6.]

SLIDE 8


[Next step, at t = 5: the slice of σ(t) = τ3 is exchanged with the slice of E(t) = τ2, executed at tE = 7.]

Dertouzos Transformation

The transformation preserves schedulability. In fact:

  • for the advanced slice, this is obvious;
  • for the postponed slice, the slack cannot decrease.
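The slice-exchange idea can be sketched on a discretized schedule; a minimal illustration (schedule representation and task parameters are made up):

```python
# Sketch of the Dertouzos slice exchange: given any feasible schedule as a
# list of task ids, one per unit time slot, repeatedly swap the current
# slot with the slot of the earliest-deadline pending task. The result is
# an EDF schedule; no slice is pushed past its deadline.
def dertouzos_transform(slots, deadline):
    """slots: list of task ids per time slot; deadline: dict task -> d_i."""
    slots = list(slots)
    for t in range(len(slots)):
        # E(t): among slices scheduled at or after t, earliest deadline first
        future = range(t, len(slots))
        t_e = min(future, key=lambda s: deadline[slots[s]])
        if slots[t_e] != slots[t]:
            slots[t], slots[t_e] = slots[t_e], slots[t]  # exchange the slices
    return slots

# Feasible non-EDF schedule: task 1 (d=8) runs before task 2 (d=4).
print(dertouzos_transform([1, 1, 2, 2], {1: 8, 2: 4}))  # [2, 2, 1, 1]
```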

A property of optimal algorithms

If a task set Γ is not schedulable by an optimal algorithm, then Γ cannot be scheduled by any other algorithm.

If an algorithm A minimizes Lmax, then A is also optimal in the sense of feasibility. The opposite is not true.

Non-Preemptive Scheduling

Under non-preemptive execution, EDF is not optimal:

[Timeline, tasks τ1 and τ2 over t = 1…9: a feasible schedule exists, but non-preemptive EDF produces a deadline miss.]

SLIDE 9

Non-Preemptive Scheduling

To achieve optimality, an algorithm should be clairvoyant and decide to leave the CPU idle in the presence of ready tasks:

[Timeline: the feasible schedule keeps the CPU idle before τ2's activation even though τ1 is ready.]

If we forbid leaving the CPU idle in the presence of ready tasks, then EDF is optimal within that class. We say that NP-EDF is optimal among non-idle scheduling algorithms.

Non-Preemptive Scheduling

The problem of finding a feasible schedule is NP-hard and is treated off-line with tree-search algorithms:

[Search tree: starting from the empty schedule, each branch appends one task to a partial schedule until a complete schedule is reached; nodes are marked feasible (F) or not (N). Depth = n, number of leaves = n!, complexity O(n · n!).]

Bratley's Algorithm [Bratley '71]

( 1 | no-preem | Lmax )

It reduces the average complexity with a pruning rule: do not expand a node unless the partial schedule is strongly feasible. A partial schedule is said to be strongly feasible if it remains feasible after adding any one of the remaining tasks.
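The pruning condition can be stated as a small predicate; a minimal sketch (tasks given as (ai, Ci, di) triples, values made up):

```python
# Sketch of Bratley's pruning rule: a partial schedule finishing at time t
# is strongly feasible if appending any single remaining task keeps the
# schedule feasible, i.e. that task, started as early as possible, still
# meets its deadline.
def strongly_feasible(t, remaining):
    """t: finishing time of the partial schedule;
    remaining: list of (a_i, C_i, d_i) for the unscheduled tasks."""
    for a, c, d in remaining:
        if max(t, a) + c > d:  # earliest possible completion misses d_i
            return False
    return True

print(strongly_feasible(3, [(0, 2, 6), (4, 1, 7)]))  # True: 5<=6 and 5<=7
print(strongly_feasible(5, [(0, 2, 6), (4, 1, 7)]))  # False: 7 > 6
```

If any single extension already fails, every schedule below that node fails too, so the whole subtree can be pruned.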

Bratley's Algorithm [Bratley '71]

( 1 | no-preem | Lmax )

[Example: a search tree over four tasks, each described by (ai, Ci, di); nodes are labeled with the finishing time of the last scheduled task, and branches that are not strongly feasible are pruned. The search finds the feasible schedules σ1 = {τ4, τ2, τ3, τ1} and σ2 = {τ4, τ3, τ2, τ1}.]

Spring algorithm [Stankovic & Ramamritham '87]

Heuristic search:

  • 1. The schedule for a set of N tasks is constructed in N steps.
  • 2. The search is driven by a heuristic function H.
  • 3. At each step the algorithm selects the task that minimizes the heuristic function.

Backtracking is possible.

[Search tree: at each level, the node with minimum H is expanded.]

Spring algorithm [Stankovic & Ramamritham '87]

Examples of heuristic functions:

  • H = ri  →  FCFS
  • H = Ci  →  SJF
  • H = Di  →  DM
  • H = di  →  EDF

Composite heuristic functions:

  • H = w1 ri + w2 Di
  • H = w1 Ci + w2 di
  • H = w1 Vi + w2 di
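One Spring selection step reduces to a minimization over the pending tasks; a minimal sketch (weights and task values are made up for illustration):

```python
# Sketch of one Spring selection step: among the pending tasks, pick the
# one minimizing a composite heuristic H = w1*C_i + w2*d_i.
def spring_select(pending, w1=1.0, w2=1.0):
    """pending: list of (name, C_i, d_i); returns the name minimizing H."""
    return min(pending, key=lambda t: w1 * t[1] + w2 * t[2])[0]

tasks = [("t1", 4, 10), ("t2", 2, 9), ("t3", 6, 7)]
print(spring_select(tasks))              # "t2": H = 11, vs. 14 and 13
print(spring_select(tasks, w1=0, w2=1))  # "t3": w1 = 0 degenerates to EDF
```

Setting one weight to zero recovers the simple heuristics in the list above (pure Ci gives SJF, pure di gives EDF).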

SLIDE 10

Spring algorithm [Stankovic & Ramamritham '87]

Precedence constraints can be handled through an eligibility factor Ei:

    H = Ei (w1 ri + w2 Di)
    H = Ei (w1 Ci + w2 di)

where Ei = 1 if τi is eligible (all its predecessors have completed) and Ei = ∞ otherwise.

Spring algorithm [Stankovic & Ramamritham '87]

Complexity:

  • Exhaustive search: O(N·N!)
  • Heuristic search: O(N²)
  • Heuristic with k backtracks: O(kN²)

Spring algorithm [Stankovic & Ramamritham '87]

Implementation: if a feasible schedule is not found, this does not mean that one does not exist. If a feasible solution is found, the schedule is stored in a dispatch list whose entries contain: task ID, start time, length, next.

Handling precedence constraints

Latest Deadline First (LDF) [Lawler '73]

( 1 | prec, sync | Lmax )

Given a precedence graph, LDF constructs the schedule from the tail: among the tasks with no unscheduled successors, it selects the one with the latest deadline and places it last.

[Example: a precedence graph over tasks A…F with deadlines ranging from 2 to 6. The LDF schedule meets every deadline, whereas EDF, ordering purely by deadline, makes task D miss its deadline.]
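The tail-first construction can be sketched directly on a DAG; a minimal illustration (graph and deadlines are made up, not the slide's example):

```python
# Sketch of LDF on a DAG: repeatedly pick, among the tasks whose successors
# are all already placed, the one with the latest deadline, and put it at
# the current end of the schedule (building from the tail).
def ldf(deadline, succ):
    """deadline: task -> d_i; succ: task -> list of successor tasks."""
    placed, schedule = set(), []
    while len(schedule) < len(deadline):
        # candidates: unplaced tasks whose successors are all placed
        ready = [t for t in deadline
                 if t not in placed and all(s in placed for s in succ.get(t, []))]
        last = max(ready, key=lambda t: deadline[t])  # latest deadline goes last
        placed.add(last)
        schedule.insert(0, last)  # prepend: the schedule grows from the tail
    return schedule

dl = {"A": 2, "B": 5, "C": 4, "D": 3}
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
print(ldf(dl, succ))  # ['A', 'C', 'B', 'D']
```

Note that D is forced last by the graph even though its deadline (3) is earlier than B's and C's, which is exactly where plain EDF would go wrong.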

Handling precedence constraints

EDF* [Chetto & Chetto '89]

( 1 | prec, preem | Lmax )

  • Assumes that arrival times are known a priori.
  • Transforms precedence constraints into timing constraints by modifying arrival times and deadlines based on the precedence graph.
  • Applies EDF to the modified task set.


SLIDE 11

Handling precedence constraints

EDF* [Chetto & Chetto '89]

( 1 | prec, preem | Lmax )

For a precedence constraint A → B, the idea is to:

  • postpone the arrival time of the successor: r*B = rA + CA
  • advance the deadline of the predecessor: d*A = dB − CB

Handling precedence constraints

EDF* [Chetto & Chetto '89]: arrival time modification

  • 1. For all root nodes, set r*i = ri.
  • 2. Select a task τi such that all its immediate predecessors have been modified; if none remains, exit.
  • 3. Set r*i = max { ri , max over immediate predecessors τk → τi of (r*k + Ck) }.
  • 4. Repeat from step 2.
Handling precedence constraints

EDF* [Chetto & Chetto '89]: deadline modification

  • 1. For all leaves, set d*i = di.
  • 2. Select a task τi such that all its immediate successors have been modified; if none remains, exit.
  • 3. Set d*i = min { di , min over immediate successors τi → τk of (d*k − Ck) }.
  • 4. Repeat from step 2.
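The two modification passes can be sketched together; a minimal fixpoint version (task parameters are made up, and a DAG is assumed):

```python
# Sketch of the EDF* transformation: propagate release times forward and
# deadlines backward along the precedence graph until nothing changes.
# Iterating len(r) passes suffices on an acyclic graph.
def edf_star(r, c, d, succ):
    """r, c, d: dicts task -> release / computation / deadline;
    succ: dict task -> list of immediate successors (acyclic)."""
    r_star, d_star = dict(r), dict(d)
    for _ in range(len(r)):
        for a, bs in succ.items():
            for b in bs:
                # postpone the successor's release; advance the predecessor's deadline
                r_star[b] = max(r_star[b], r_star[a] + c[a])
                d_star[a] = min(d_star[a], d_star[b] - c[b])
    return r_star, d_star

r = {"A": 0, "B": 0}
c = {"A": 2, "B": 3}
d = {"A": 10, "B": 9}
rs, ds = edf_star(r, c, d, {"A": ["B"]})
print(rs)  # {'A': 0, 'B': 2}   r*_B = r_A + C_A
print(ds)  # {'A': 6, 'B': 9}   d*_A = d_B - C_B
```

Plain EDF on (r*, d*) then respects the precedence A → B automatically, since B can never be released before A has had time to finish.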

Summary

activ  prec  preem  algorithm    authors               complexity
-----  ----  -----  -----------  --------------------  -------------
sync   N     N      EDD          Jackson '55           O(n log n)
asyn   N     Y      EDF          Horn '74              O(n) per task
asyn   N     N      NP-EDF       Jeffay '91            O(n log n)
asyn   N     N      Tree search  Bratley '71           O(n · n!)
asyn   *     N      Spring       Stankovic '87         O(n²)
sync   Y     N      LDF          Lawler '73            O(n log n)
asyn   Y     Y      EDF*         Chetto & Chetto '89   O(n log n)

(* Spring can handle precedence constraints through the eligibility factor.)