scheduling for real-time systems


  1. scheduling for real-time systems
     Lukas Pirl
     Operating Systems and Middleware Group
     Hasso Plattner Institute for Software Systems Engineering
     University of Potsdam

  2. roadmap
     scheduling := coordinate the time sharing of tasks on processors
     task assignment := placement of tasks on processors
     uni- vs. multiprocessor scheduling
     critical sections
     priority inversion

  3. terminology
     processor := resource to share (the theory can partly be applied to other shared resources)
     task := unit of execution; think: thread, process, job, …
     arrival time := moment the task is created (also: release time)
     execution time := duration the task needs to run
     finishing time := moment the task finishes
     absolute deadline := moment the task needs to be finished
     relative deadline := absolute deadline - arrival time
     response time := finishing time - arrival time
     period := interval in which the task needs to run once
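A minimal sketch (in Python, not part of the slides; class and field names are illustrative) of how these quantities relate:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    """Parameters of a real-time task (all times in the same unit, e.g. ms)."""
    arrival: float                   # arrival/release time: moment the task is created
    execution: float                 # execution time: duration the task needs to run
    deadline: float                  # absolute deadline
    period: Optional[float] = None   # None for non-periodic tasks

    @property
    def relative_deadline(self) -> float:
        # relative deadline := absolute deadline - arrival time
        return self.deadline - self.arrival

def response_time(task: Task, finishing_time: float) -> float:
    # response time := finishing time - arrival time
    return finishing_time - task.arrival
```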

  4. terminology: periodic, sporadic, aperiodic
     periodic := released periodically; must run once within its period, not exactly every X units of time
     sporadic := released irregularly, but with an upper bound on the release rate
     aperiodic := like sporadic, but without an upper bound on the release rate
     example: read a sensor every 10 ms and, if the value > threshold, send a signal; the sensor reading is periodic, the signal handler is sporadic

  5. motivation
     main objective: meet the deadlines of the tasks, for a feasible set of tasks (i.e., no overload)
     other performance measures may apply depending on the use case: fairness, liveness, latency, throughput, jitter, …

  6. schedule
     schedule S(i, t) := the task scheduled to be running on processor i at time t
     feasibility := tasks start after their release times and complete before their deadlines
     loosely: “no overload, considering all worst cases and overhead”
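As a rough illustration (assuming the Task sketch above and a schedule stored as a table of equal-length time slots; all names are illustrative), a feasibility check could look like this:

```python
# `schedule` maps (processor, slot_start_time) -> Task or None; every slot has
# the same length. Checks the feasibility conditions from the slide.
def is_feasible(schedule, tasks, slot_length=1.0):
    for task in tasks:
        slots = [t for (_, t), running in schedule.items() if running is task]
        if not slots or len(slots) * slot_length < task.execution:
            return False                               # not enough time allotted
        if min(slots) < task.arrival:                  # starts before release
            return False
        if max(slots) + slot_length > task.deadline:   # finishes after deadline
            return False
    return True
```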

  7. scheduling: offline vs. online
     offline scheduling: precompute the schedule; the task set must be known a priori
     online scheduling: the scheduling algorithm itself must have a bounded worst-case execution time; more flexibility, more complexity

  8. scheduling: static vs. dynamic priorities
     static-priority algorithms, e.g., Rate Monotonic Scheduling (RMS): tasks’ priorities do not change within a mode
     dynamic-priority algorithms, e.g., Earliest Deadline First (EDF): tasks’ priorities might change between releases

  9. scheduling: preemptive vs. non-preemptive
     [figure: timelines of tasks t1 and t2 with their releases and deadlines; the non-preemptive schedule misses t2’s deadline, the preemptive one does not]
     non-preemptive: once started, a task runs until completion or blocking; non-optimal, but real-time systems are usually cooperative anyway
     preemptive: tasks might be interrupted by other, i.e. higher-priority, tasks; more flexibility, more complexity, especially for guaranteeing real-time properties (bookkeeping); preemption is not always possible, e.g., during I/O

  10. optimization problems
     [figure: task dependency graph of tasks t1…t8 and an assignment to processors p1…p4]
     given: a set of tasks, precedence constraints, arrival times, execution times, deadlines
     wanted: an assignment of tasks to processors and a schedule (AKA “job shop scheduling”)
     deadlines?

  11. optimization problems
     [figure: the same task dependency graph t1…t8 with a different assignment to processors p1…p4]
     given: a set of tasks, precedence constraints, arrival times, execution times, deadlines
     wanted: an assignment of tasks to processors and a schedule (AKA “job shop scheduling”)
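To make the setting concrete, here is a simple greedy list-scheduling sketch (an illustrative heuristic, not an algorithm from the slides) that respects precedence constraints while ignoring deadlines:

```python
def list_schedule(exec_times, deps, n_procs):
    """exec_times: {task: duration}; deps: {task: set of predecessor tasks}."""
    finish = {}                          # task -> finishing time
    proc_free = [0.0] * n_procs          # next free time per processor
    schedule = []                        # (task, processor, start, end)
    remaining = set(exec_times)
    while remaining:
        # pick a task whose predecessors have all finished
        ready = [t for t in remaining if deps.get(t, set()) <= set(finish)]
        task = min(ready)                # deterministic tie-break
        earliest = max((finish[p] for p in deps.get(task, set())), default=0.0)
        proc = min(range(n_procs), key=lambda i: max(proc_free[i], earliest))
        start = max(proc_free[proc], earliest)
        proc_free[proc] = finish[task] = start + exec_times[task]
        schedule.append((task, proc, start, finish[task]))
        remaining.remove(task)
    return schedule

# hypothetical example: t2 and t3 both depend on t1
print(list_schedule({"t1": 2, "t2": 1, "t3": 3}, {"t2": {"t1"}, "t3": {"t1"}}, n_procs=2))
```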

  12. uniprocessor scheduling

  13. uniprocessor scheduling: rate-monotonic scheduling (RMS)
     periodic, preemptable, independent, static-priority tasks
     task priority inversely proportional to period (shorter period means higher priority)
     task deadlines == task periods
     a feasible schedule is possible for utilization <= n(2^(1/n) - 1) (a sufficient condition)
     makes some strong assumptions: no resource sharing, immediate preemption, context switches do not affect execution time
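The utilization bound can be checked directly; a sketch (the helper name is illustrative):

```python
# Liu & Layland sufficient test for RMS with implicit deadlines (deadline == period).
def rms_sufficient_test(tasks):
    """tasks: list of (execution_time, period) pairs."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound          # True => schedulable; False => test inconclusive

# utilization 0.65 <= 3 * (2^(1/3) - 1) ~= 0.7798, so this set passes the test
print(rms_sufficient_test([(1, 4), (1, 5), (2, 10)]))   # True
```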

  14. uniprocessor scheduling: rate-monotonic scheduling (RMS)
     several extensions exist to overcome these limitations
     “task servers”, which run as tasks in traditional RMS, provide time slots for tasks not meeting RMS’s requirements, e.g., sporadic tasks
     the extensions tend to introduce other assumptions (e.g., aperiodic tasks have no deadlines) and different feasibility criteria

  15. uniprocessor scheduling: earliest deadline first (EDF)
     preemptable, independent tasks
     task priority inversely proportional to the (absolute) deadline
     optimal on uniprocessors, i.e., if EDF cannot schedule a task set on a single processor, no algorithm can
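A toy sketch of preemptive EDF on a single processor (illustrative only, assuming discrete time steps and jobs given as tuples; not from the slides):

```python
import heapq

def edf_simulate(jobs, horizon):
    """jobs: list of (arrival, execution_time, absolute_deadline) tuples."""
    pending = sorted(jobs)                   # by arrival time
    ready = []                               # heap ordered by absolute deadline
    timeline, missed = [], []
    for t in range(horizon):
        while pending and pending[0][0] <= t:
            _, c, d = pending.pop(0)
            heapq.heappush(ready, [d, c])    # [deadline, remaining execution time]
        if not ready:
            timeline.append(None)            # processor idle
            continue
        job = ready[0]                       # the job with the earliest deadline runs
        job[1] -= 1
        timeline.append(job[0])              # record the deadline of the running job
        if job[1] == 0:
            heapq.heappop(ready)
            if t + 1 > job[0]:
                missed.append(job[0])        # finished after its deadline
    return timeline, missed

# two jobs: (arrival, execution, deadline); the later job preempts the earlier one
print(edf_simulate([(0, 2, 4), (1, 1, 2)], horizon=5))
```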

  16. uniprocessor scheduling: shortcomings
     the processor is the only shared resource considered (no memory, I/O, …)
     either hard to predict, or a lot of a-priori knowledge is required
     how to handle overload?
     how to handle degradation? (it might lead to overload)

  17. uniprocessor scheduling: multiple task versions
     the system has a primary and an alternative version of a task, which vary in execution time and quality of output
     primary: most expensive, best-quality result
     alternative: reduced resource usage, acceptable but lower-quality result

  18. uniprocessor scheduling: IRIS tasks
     increased reward with increased service
     quality does not decrease with execution time, i.e., quality is a monotonically increasing function of execution time
     e.g., iterative computation of Pi
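An illustrative IRIS-style task (the slides only name the example; this particular series is an assumption): an anytime approximation of Pi via the Leibniz series, where more execution time (more iterations, i.e., more “service”) yields a better result (more “reward”):

```python
def pi_leibniz(iterations):
    # the task can be stopped after any iteration and still return a usable estimate
    estimate = 0.0
    for k in range(iterations):
        estimate += (-1) ** k / (2 * k + 1)
    return 4 * estimate

for budget in (10, 1_000, 100_000):      # larger execution budget -> better approximation
    print(budget, pi_leibniz(budget))
```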

  19. multiprocessor scheduling

  20. multiprocessor scheduling: assign tasks to processors
     assign tasks to processors so that the utilization per processor <= a threshold; the uniprocessor scheduling algorithm determines the utilization threshold
     schedule each processor with a uniprocessor algorithm
     [flowchart: task assignment -> uniprocessor scheduling per processor -> all schedules feasible? yes: stop and output the schedule; no: check a stopping criterion, then either change/improve the processor assignment and continue, or declare failure]
     task assignment is generally NP-hard, similar to the multiple knapsack and bin packing problems; (re)use their heuristics
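One such bin-packing heuristic is first fit; a sketch (illustrative, not necessarily the variant meant on the slide) that keeps every processor below a utilization threshold, e.g. the RMS bound or 1.0 for EDF:

```python
def first_fit_assign(utilizations, threshold):
    """utilizations: list of per-task utilizations (execution time / period)."""
    processors = []                      # each entry: current total utilization
    assignment = []                      # task index -> processor index
    for u in utilizations:
        for i, load in enumerate(processors):
            if load + u <= threshold:    # fits on an already opened processor
                processors[i] += u
                assignment.append(i)
                break
        else:
            processors.append(u)         # open a new processor
            assignment.append(len(processors) - 1)
    return assignment, processors

print(first_fit_assign([0.4, 0.3, 0.5, 0.2], threshold=0.69))
```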

  21. multiprocessor scheduling: utilization balancing
     start from some initial assignment (round-robin, random, …)
     balance the utilization in intervals
     requires preemptive tasks, e.g., to migrate a task to another processor

  22. multiprocessor scheduling: next fit
     works with RMS as the uniprocessor algorithm
     processors are statically assigned to tasks
     the requirements of RMS apply

  23. multiprocessor scheduling: myopic
     offline, non-preemptive tasks
     tries to build a schedule through a search algorithm: it schedules a subset of tasks and iteratively adds tasks
     can search the whole search space
     based on walking “a tree of schedules” down and up: nodes are partial schedules, leaves are complete schedules
     whether to go up or down is based on the feasibility of a node’s schedule

  24. multiprocessor scheduling: focused addressing and bidding
     tasks are released at individual processors
     overloaded processors offload work to other processors
     processors may voluntarily take over tasks
     buddy strategy: processors fall into the categories under-, fully, and overloaded; overloaded processors ask underloaded ones to take over a task

  25. multiprocessor scheduling: scheduling with precedence
     [figure: two schedules of tasks t1…t4 on processors p1 and p2, with and without communicating tasks placed on the same processor]
     trial-and-error: assign communicating processes to the same processor so that no other processor must wait

  26. scheduling challenges

  27. challenges
     fault-tolerant scheduling: how, when, and where to back up the schedule and its current state?
     mode changes: a mission can have multiple phases with different sets of tasks (priorities, arrival rates, …)
     offline schedules: have multiple?
     online schedules: can modes overlap?

  28. challenges: priority inversion
     priority inversion through shared resources: e.g., a binary semaphore can cause priority inversion, i.e., a lower-priority task blocks a higher-priority task
     [figure: timeline of high-, mid-, and low-priority tasks locking and unlocking resource A (L(A)/U(A)); the high-priority task is blocked while the low-priority task holds A]

  29. challenges: priority inheritance protocol
     the low-priority task inherits the priority of the high-priority task while the high-priority task waits for a resource the low-priority task holds
     [figure: timeline with L(A)/U(A); the low-priority task inherits the high priority while holding A and drops back to its own priority after U(A)]
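A conceptual sketch of the mechanism (illustrative only, not a real RTOS API; the lock here only records the holder and donates priority, without a real wait queue):

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority
        self.priority = priority          # effective (possibly inherited) priority

class PIMutex:
    """Lock with priority inheritance."""
    def __init__(self):
        self.holder = None

    def lock(self, task):
        if self.holder is not None and self.holder.priority < task.priority:
            # the waiting high-priority task donates its priority to the holder
            self.holder.priority = task.priority
            return False                  # the caller would block here in a real system
        self.holder = task
        return True

    def unlock(self):
        self.holder.priority = self.holder.base_priority   # drop the inherited priority
        self.holder = None

low, high = Task("low", priority=1), Task("high", priority=10)
m = PIMutex()
m.lock(low)
m.lock(high)                 # high blocks on A; low now runs with priority 10
print(low.priority)          # -> 10
```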

  30. challenges: priority inheritance protocol
     may lead to certain deadlocks
     e.g., tasks as programmed (arrival times, execution times, locks):
       high: L(A) U(A)
       mid:  L(B) L(A) U(A) U(B)
       low:  L(A) L(B) U(B) U(A)
     scheduling with priority inheritance: deadlock (low holds A and waits for B, while mid holds B and waits for A)

  31. challenges: priority ceiling protocol
     shared resources are annotated with the maximum priority of all tasks that may acquire them
     the priority of a lower-priority task acquiring a shared resource is “ceiled” to that maximum
     original algorithm: raise the lower-priority task’s priority the moment the higher-priority task tries to acquire the resource
     immediate ceiling algorithm: raise the lower-priority task’s priority the moment it locks the resource; lower complexity, but might raise the priority for longer than desired
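A conceptual sketch of the immediate ceiling variant (illustrative only and independent of the previous sketch; not a real RTOS API):

```python
class Task:
    def __init__(self, name, priority):
        self.name, self.base_priority, self.priority = name, priority, priority

class CeilingMutex:
    """Lock annotated with the maximum priority of all tasks that may acquire it."""
    def __init__(self, ceiling):
        self.ceiling = ceiling
        self.holder = None

    def lock(self, task):
        self.holder = task
        task.priority = max(task.priority, self.ceiling)    # raised immediately on lock

    def unlock(self):
        self.holder.priority = self.holder.base_priority
        self.holder = None

low = Task("low", priority=1)
resource_a = CeilingMutex(ceiling=10)    # some priority-10 task may also lock A
resource_a.lock(low)
print(low.priority)                      # -> 10, ceiled while holding A
```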

  32. challenges: priority ceiling protocol
     prevents certain deadlocks
     e.g., tasks as programmed (arrival times, execution times, locks):
       high: L(A) U(A)           (the ceiling priority of resource A is “high”)
       mid:  L(B) L(A) U(A) U(B) (the ceiling priority of resource B is “mid”)
       low:  L(A) L(B) U(B) U(A)
     [figure: the same task set scheduled with the priority ceiling protocol; all tasks complete, no deadlock]

  33. [recap figure with the covered terms: priority inversion, priority ceiling, priority inheritance, offline vs. online scheduling, task assignment, task dependency graph, periodic, sporadic, aperiodic, preemptive vs. non-preemptive, static vs. dynamic priorities]
