scheduling for real-time systems
Lukas Pirl, Operating Systems and Middleware Group, Hasso Plattner Institute for Software Systems Engineering, University of Potsdam
roadmap
scheduling
:= coordinate time sharing of tasks on processors
task assignment
:= placement of tasks on processors
uni- vs. multiprocessor scheduling
critical sections
priority inversion
processor := resource to share
theory can partly be applied to other shared resources
task := unit of execution
think: thread, process, job, …
arrival time := moment task is created (also: release time)
execution time := duration the task needs to run
finishing time := moment task finishes
absolute deadline := moment task needs to be finished
relative deadline := absolute deadline - arrival time
response time := finishing time - arrival time
period := interval in which task needs to run once
[figure: task timeline illustrating arrival, execution, finishing time, deadline, response time, and period]
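To make these attributes concrete, here is a minimal Python sketch of a task record (illustrative only; the class and field names are my own, not from the slides):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    """Minimal task model; attributes follow the definitions above."""
    arrival_time: float             # moment the task is created (also: release time)
    execution_time: float           # duration the task needs to run
    absolute_deadline: float        # moment the task needs to be finished
    period: Optional[float] = None  # interval in which the task must run once, if periodic

    @property
    def relative_deadline(self) -> float:
        # relative deadline := absolute deadline - arrival time
        return self.absolute_deadline - self.arrival_time

    def response_time(self, finishing_time: float) -> float:
        # response time := finishing time - arrival time
        return finishing_time - self.arrival_time
```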
periodic
released periodically
must run once within its period, not exactly every X units of time

sporadic
released at irregular intervals
upper bound for the release rate

aperiodic
sporadic, but without an upper bound for the release rate
example
read sensor every 10 ms; if value > threshold, send signal
sensor reading is periodic
signal handler is sporadic
upper bound for release rate? yes: the handler is released at most once per 10 ms sensor period
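A rough sketch of that example (illustrative Python; read_sensor, send_signal, and the threshold value are placeholders, not from the slides):

```python
import random
import time

SAMPLE_PERIOD = 0.010  # periodic task: read the sensor every 10 ms
THRESHOLD = 0.8        # hypothetical threshold

def read_sensor() -> float:
    return random.random()  # stand-in for the real sensor read

def send_signal() -> None:
    print("signal sent")    # stand-in for releasing the sporadic handler

for _ in range(100):
    start = time.monotonic()
    if read_sensor() > THRESHOLD:
        send_signal()  # released at most once per sensor period
    # sleep for the rest of the period (a real RTOS would use a periodic timer)
    time.sleep(max(0.0, SAMPLE_PERIOD - (time.monotonic() - start)))
```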
main objective: meet deadlines of tasks
for a feasible set of tasks, i.e., no overload
further objectives depending on the use case: fairness, liveness, latency, throughput, jitter, …
schedule S(i, t) :=
task scheduled to be running on processor i at time t
feasibility :=
tasks start after their release times
tasks complete before their deadlines
loosely: “no overload, considering all worst cases and overhead”
offline scheduling
precompute schedule
task set must be known a priori

online scheduling
scheduling algorithm must have a worst-case execution time
more flexibility, more complexity
static-priority algorithms
e.g., Rate Monotonic Scheduling (RMS)
tasks’ priorities do not change within a mode

dynamic-priority algorithms
e.g., Earliest Deadline First (EDF)
tasks’ priorities might change between releases
non-preemptive
real-time systems are usually cooperative anyway

preemptive
tasks might be interrupted by other tasks, i.e., by higher-priority tasks
more flexibility, more complexity
bookkeeping; preemption not always possible, e.g., during I/O
[figure: example schedules of tasks t1 and t2 on one processor — one non-optimal, one with a missed t2 deadline, and a preemptive one]
given
set of tasks
precedence constraints
arrival times
execution times
deadlines

wanted
assignment to processors
schedule

AKA “job shop scheduling”
[figure: task dependency graph of t1…t8]
[figure: assignment of t1…t8 to processors p1…p4]
Rate Monotonic Scheduling (RMS)
periodic, preemptable, independent, static-priority tasks
task priority inversely proportional to period, i.e., shorter period means higher priority
task deadlines == task periods
feasible schedule possible for utilization U <= n(2^(1/n) - 1)
sufficient condition
makes some strong assumptions:
no resource sharing
immediate preemption
context switch does not affect execution time
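A minimal sketch of this sufficient utilization test (my own helper function, not from the slides):

```python
from typing import Sequence, Tuple

def rms_utilization_test(tasks: Sequence[Tuple[float, float]]) -> bool:
    """Liu & Layland sufficient test for RMS.

    tasks: (execution_time, period) pairs; deadlines are assumed to equal periods.
    True means the task set is guaranteed schedulable under RMS; False is
    inconclusive, since the bound is sufficient but not necessary.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

# utilization 0.75 <= 3 * (2^(1/3) - 1) ~ 0.78, so this set passes the test
print(rms_utilization_test([(1, 4), (1, 5), (3, 10)]))
```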
several extensions to overcome limitations
“task servers” which run as a task in traditional RMS
provide time slots for tasks not meeting RMS’s requirements, e.g., sporadic tasks
tend to introduce other assumptions, e.g., aperiodic tasks have no deadlines
different feasibility criteria
Earliest Deadline First (EDF)
preemptable, independent tasks
task priority inversely proportional to time until deadline
optimal on a single processor, i.e., if EDF cannot schedule a task set on a single processor, no algorithm can
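For illustration, a toy preemptive EDF simulation over discrete time steps (structure and names are my own assumptions, not from the slides):

```python
def edf_simulate(tasks, horizon):
    """Toy preemptive EDF on one processor over integer time steps.

    tasks: list of dicts with 'period' and 'execution' (deadlines equal periods).
    Returns True if no job misses its deadline within the horizon.
    """
    jobs = []  # active jobs as [remaining_execution, absolute_deadline]
    for now in range(horizon):
        # release a new job of every periodic task at the start of its period
        for task in tasks:
            if now % task["period"] == 0:
                jobs.append([task["execution"], now + task["period"]])
        # a job whose deadline has passed with work remaining is a deadline miss
        if any(rem > 0 and deadline <= now for rem, deadline in jobs):
            return False
        # run the pending job with the earliest absolute deadline for one step
        pending = [job for job in jobs if job[0] > 0]
        if pending:
            min(pending, key=lambda job: job[1])[0] -= 1
    return True

# utilization 1/2 + 1/3 <= 1, so EDF meets all deadlines here
print(edf_simulate([{"period": 2, "execution": 1}, {"period": 3, "execution": 1}], 30))
```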
processor is the only considered shared resource
no memory, I/O, …
either hard to predict or a lot of a-priori knowledge required
How to handle overload?
How to handle degradation?
degradation might lead to overload
system has a primary and an alternative version of a task
they vary in execution time and quality of output

primary
most expensive, best quality result

alternative
reduced resource usage, acceptable but lower-quality result
increased reward with increased service quality
quality does not decrease with execution time
i.e., quality is a monotonically increasing function of execution time
e.g., iterative computation of Pi
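As an illustration of such an anytime computation (my own example code; the slides only name the idea), each additional term of the Leibniz series improves the approximation of Pi:

```python
def pi_leibniz(iterations: int) -> float:
    """Anytime approximation of Pi via the Leibniz series: giving the task more
    execution time (more iterations) tightens the error bound on the result."""
    total = 0.0
    for k in range(iterations):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

# the longer the task may run, the better the result quality
for budget in (10, 1_000, 100_000):
    print(budget, pi_leibniz(budget))
```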
assign tasks to processors
such that utilization per processor <= threshold
uniprocessor scheduling per processor
the uniprocessor algorithm determines the utilization threshold
task assignment is generally NP-hard
similar to the multiple knapsack problem / bin packing
(re)use heuristics
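One such heuristic is first-fit bin packing; the sketch below (names and numbers are my own assumptions) assigns tasks, given by their utilization, to the first processor that stays under a utilization threshold:

```python
def first_fit_assign(utilizations, num_processors, threshold):
    """First-fit heuristic: put each task on the first processor whose
    utilization stays <= threshold; returns task indices per processor,
    or None if the heuristic fails."""
    load = [0.0] * num_processors
    assignment = [[] for _ in range(num_processors)]
    for task_id, u in enumerate(utilizations):
        for p in range(num_processors):
            if load[p] + u <= threshold:
                load[p] += u
                assignment[p].append(task_id)
                break
        else:
            return None  # no processor has enough spare utilization
    return assignment

# threshold ~0.69 ~= ln 2, the asymptotic RMS utilization bound
print(first_fit_assign([0.3, 0.4, 0.2, 0.5, 0.1], num_processors=3, threshold=0.69))
```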
assign tasks to processors
schedule each processor
all schedules feasible?
yes: stop
no: check stopping criterion; either declare failure or change/improve the assignment and continue
some initial assignment
round-robin, random, …
balance utilization in intervals
requires preemptive tasks, e.g., to migrate a task to another processor
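A toy version of this assign / schedule / check / improve loop (my own sketch; per-processor feasibility is approximated by a utilization test instead of running a real uniprocessor scheduler):

```python
def iterative_assignment(utilizations, num_processors, threshold=0.69, max_rounds=100):
    """Start from a round-robin assignment and move tasks off overloaded
    processors until every processor's utilization is below the threshold,
    or give up after max_rounds."""
    assignment = [i % num_processors for i in range(len(utilizations))]  # initial assignment

    def load(p):
        return sum(u for i, u in enumerate(utilizations) if assignment[i] == p)

    for _ in range(max_rounds):
        overloaded = [p for p in range(num_processors) if load(p) > threshold]
        if not overloaded:
            return assignment  # all per-processor "schedules" feasible: stop
        # change/improve the assignment: move the smallest task off the first
        # overloaded processor to the least-loaded other processor, then re-check
        p = overloaded[0]
        victim = min((i for i in range(len(utilizations)) if assignment[i] == p),
                     key=lambda i: utilizations[i])
        assignment[victim] = min((q for q in range(num_processors) if q != p), key=load)
    return None  # stopping criterion reached: declare failure

print(iterative_assignment([0.3, 0.4, 0.2, 0.5, 0.1], num_processors=3))
```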
works with RMS as the uniprocessor algorithm
processors statically assigned to tasks
requirements of RMS apply
non-preemptive tasks
tries to build a schedule through a search algorithm
schedules a subset of tasks, iteratively adding tasks
can search the whole search space
based on walking “a tree of schedules” down and up
nodes are partial schedules, leaves are complete schedules
whether to go up or down depends on the feasibility of a node’s schedule
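A small backtracking sketch of such a search over partial schedules (my own illustration of the idea, not the exact algorithm from the slides):

```python
def search_schedule(tasks, order=None, finish=0):
    """Backtracking search for a feasible non-preemptive schedule on one processor.

    tasks: (name, arrival, execution, deadline) tuples. Nodes of the search tree
    are partial schedules; infeasible nodes are not expanded further (go back up).
    Returns a feasible ordering of task names, or None.
    """
    if order is None:
        order = []
    if len(order) == len(tasks):
        return list(order)  # leaf: complete, feasible schedule
    for name, arrival, execution, deadline in tasks:
        if name in order:
            continue
        start = max(finish, arrival)
        if start + execution > deadline:
            continue  # this child node is infeasible: do not go further down
        order.append(name)
        result = search_schedule(tasks, order, start + execution)
        if result is not None:
            return result
        order.pop()  # go back up the tree and try another branch
    return None

print(search_schedule([("t1", 0, 3, 10), ("t2", 0, 2, 4), ("t3", 1, 2, 9)]))
```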
tasks are released at individual processors
processors may voluntarily take over tasks
buddy strategy
processor load categories: underloaded, fully loaded, overloaded
trial-and-error
assign communicating processes to the same processor so that no other processor must wait
[figure: two assignments of communicating tasks t1…t4 to processors p1 and p2]
fault-tolerant scheduling
How, when, and where to back up a schedule and its current state?
mode changes
mission can have multiple phases
different sets of tasks, priorities, arrival rates, …
critical sections through shared resources
e.g., binary semaphores
can cause priority inversion, i.e., a lower-priority task blocks a higher-priority task
[figure: priority inversion — a low-priority task locks resource A (L(A)), a high-priority task then blocks on A, and a mid-priority task preempts the low-priority task until A is unlocked (U(A))]
priority inheritance: the low-priority task inherits the priority of the high-priority task when the high-priority task waits for a resource the low-priority task holds
[figure: the same scenario with priority inheritance — while the high-priority task waits for A, the low-priority task holding A inherits its priority, so the mid-priority task can no longer preempt it]
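A toy model of priority inheritance (illustrative Python; the class and method names are my own, not an RTOS API):

```python
class Task:
    def __init__(self, name: str, base_priority: int):
        self.name = name
        self.base_priority = base_priority
        self.priority = base_priority  # effective priority

class InheritanceLock:
    """Binary-semaphore model with priority inheritance (no real threading)."""
    def __init__(self):
        self.holder = None

    def acquire(self, task: Task) -> bool:
        if self.holder is None:
            self.holder = task
            return True
        # the waiter is blocked; boost the holder so that mid-priority tasks
        # can no longer preempt it while it is in the critical section
        if task.priority > self.holder.priority:
            self.holder.priority = task.priority
        return False

    def release(self, task: Task) -> None:
        assert self.holder is task
        task.priority = task.base_priority  # drop back to the base priority
        self.holder = None

lock = InheritanceLock()
low, high = Task("low", 1), Task("high", 3)
lock.acquire(low)       # low-priority task enters the critical section
lock.acquire(high)      # high-priority task blocks on the resource ...
print(low.priority)     # ... and low now runs at the inherited priority 3
lock.release(low)
print(low.priority)     # back to 1
```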
priority inheritance may lead to certain deadlocks
e.g., tasks as programmed (arrival times, execution times, locks):
[figure: tasks as programmed nest locks on resources A and B in opposite order; the resulting schedule with priority inheritance ends in a deadlock]
shared resources annotated with a maximum (“ceiling”) priority
priority of a lower-priority task acquiring a shared resource is “ceiled” to that maximum

one variant: raise the lower-priority task’s priority the moment the higher-priority task tries to acquire the resource

immediate ceiling algorithm: raise the lower-priority task’s priority the moment it locks the resource
lower complexity
might raise the priority for longer than desired
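A toy model of the immediate ceiling variant (illustrative Python, not an RTOS API; the ceiling values are assumptions per resource):

```python
class Task:
    def __init__(self, name: str, priority: int):
        self.name = name
        self.priority = priority

class CeilingLock:
    """Lock model for the immediate priority ceiling protocol."""
    def __init__(self, ceiling: int):
        self.ceiling = ceiling  # max priority of any task that may use the resource
        self.holder = None

    def acquire(self, task: Task) -> None:
        assert self.holder is None  # in this model the resource is free when locked
        self.holder = task
        self._saved_priority = task.priority
        task.priority = max(task.priority, self.ceiling)  # raised immediately on locking

    def release(self, task: Task) -> None:
        assert self.holder is task
        task.priority = self._saved_priority
        self.holder = None

resource_a = CeilingLock(ceiling=3)   # some priority-3 task may also use resource A
low = Task("low", priority=1)
resource_a.acquire(low)
print(low.priority)   # 3: ceiled while holding the resource
resource_a.release(low)
print(low.priority)   # back to 1
```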
priority ceiling prevents certain deadlocks
e.g., tasks as programmed (arrival times, execution times, locks):
[figure: the same task set scheduled with ceiling priorities — locking A immediately raises the low-priority task to the ceiling priority, so the nested locks cannot interleave and the deadlock does not occur]
[closing word cloud recapping the covered terms: priority inversion, priority ceiling, priority inheritance, task assignment, offline, static, dynamic, task dependency graph, preemptive, non-preemptive, periodic, sporadic, aperiodic]