Scheduling | CS 450: Operating Systems | Michael Saelee



slide-1
SLIDE 1

CS 450: Operating Systems Michael Saelee <lee@iit.edu>

Scheduling

slide-2
SLIDE 2

// Scheduler never returns. It loops, doing:
//  - choose a process to run
//  - swtch to start running that process
//  - eventually that process transfers control
//    via swtch back to the scheduler.
void
scheduler(void)
{
  ...
  for(;;){
    // Enable interrupts on this processor.
    sti();

    // Loop over process table looking for process to run.
    acquire(&ptable.lock);
    for(p = ptable.proc; p < &ptable.proc[NPROC]; p++){
      if(p->state != RUNNABLE)
        continue;
      c->proc = p;
      switchuvm(p);
      p->state = RUNNING;
      swtch(&(c->scheduler), p->context);
      switchkvm();
      c->proc = 0;
    }
    release(&ptable.lock);
  }
}

// Enter scheduler.
void
sched(void)
{
  int intena;
  struct proc *p = myproc();
  ...
  intena = mycpu()->intena;
  swtch(&p->context, mycpu()->scheduler);
  mycpu()->intena = intena;
}

// Give up the CPU for one scheduling round.
void
yield(void)
{
  acquire(&ptable.lock);
  myproc()->state = RUNNABLE;
  sched();
  release(&ptable.lock);
}

slide-3
SLIDE 3

§ Overview

slide-4
SLIDE 4

scheduling: policies & mechanisms used to 
 allocate resources to some set of entities

slide-5
SLIDE 5

resource & entities: CPU & processes

  • other possibilities:
  • resources: memory, I/O bus/devices
  • entities: threads, users, groups
slide-6
SLIDE 6

policy: high-level “what”

  • aka scheduling disciplines

mechanism: low-level “how”

  • e.g., interrupts, context switch
slide-7
SLIDE 7

(we’ll start with policy)

slide-8
SLIDE 8

essential idea:

  • CPU(s) are a limited resource
  • efficiently time-share CPU(s) among multiple processes
  • enable concurrency on a single CPU
slide-9
SLIDE 9

at a high level (policy), only concern ourselves with macro process state

  • one of running, ready, or blocked
slide-10
SLIDE 10

running = consuming CPU
ready = “runnable”, but not running
blocked = not runnable (e.g., waiting for I/O)

slide-11
SLIDE 11

[state diagram]
  creation → Ready
  Ready → Running: scheduled
  Running → Ready: preempted
  Running → Blocked: I/O request (e.g., interrupt, syscall)
  Blocked → Ready: I/O completion
  Running → exit: completion
  Ready ↔ swapped out, Blocked ↔ swapped out: swap in/out

slide-12
SLIDE 12

[diagram: creation → Ready]

domain of the “long-term” scheduler

  • choose which jobs are admitted to the system
  • may control mix of jobs (e.g., I/O vs. CPU bound)
  • not common in general-purpose, time-shared OSes
slide-13
SLIDE 13

domain of the “medium-term” scheduler

  • swaps processes out to disk to make room for others
  • active when there is insufficient memory
  • runs much less frequently (slower!) than CPU scheduler

[diagram: Ready ↔ swapped out, Blocked ↔ swapped out (swap in/out)]

slide-14
SLIDE 14

[diagram: Ready → Running (scheduled); Running → Ready (preempted)]

CPU scheduler, aka “short-term” scheduler

  • chooses between in-memory, ready processes to run on CPU
  • invoked to carry out scheduling policies after interrupts/traps
slide-15
SLIDE 15

preemptive scheduling ☑ running → ready transition

  • relies on clock interrupt (to regain control of CPU)
slide-16
SLIDE 16

non-preemptive scheduling ☒ running → ready transition

  • NB: non-preemptive ≠ batch!
slide-17
SLIDE 17

[diagram: Ready → Running (scheduled)]

convenient to envision a ready queue/set; the scheduling policy decides which process to run next from the queue of active, ready processes

slide-18
SLIDE 18

policies vary by:

  • 1. preemptive vs. non-preemptive
  • 2. factors used in selecting a process
  • 3. goals; i.e., why are we selecting a given process?
slide-19
SLIDE 19

scheduling goals are usually predicated on optimizing certain scheduling metrics — can be provable or based on heuristics

slide-20
SLIDE 20

§ Scheduling Metrics

slide-21
SLIDE 21

metrics we’ll be concerned with:

  • turnaround time
  • wait time
  • response time
  • throughput
  • utilization
slide-22
SLIDE 22

turnaround time:

T_turnaround = T_completion - T_creation

i.e., total time to complete process

slide-23
SLIDE 23

turnaround time depends on much more than the scheduling discipline!

  • process / program complexity
  • I/O processing time
  • how many CPUs available
  • how many other processes need to run
slide-24
SLIDE 24

wait time: time spent in the ready queue; i.e., how long the scheduler forces a runnable process to wait for a CPU

  • better gauge of scheduler’s effectiveness
slide-25
SLIDE 25

turnaround & wait time are measured over the course of an entire process, sometimes referred to as the “job”

  • not a very useful metric for interactive processes
  • which typically alternate between CPU & I/O bursts, indefinitely

slide-26
SLIDE 26

“bursty” execution

slide-27
SLIDE 27

burst length histogram

slide-28
SLIDE 28

turnaround & wait time can also be computed on a per-burst basis — i.e., how long does a burst (of CPU activity) need to complete/wait before getting to the next I/O burst?

slide-29
SLIDE 29

For interactive jobs, improving responsiveness is arguably more important than optimizing total turnaround/wait times

  • how to quantify responsiveness (in a way that is useful for scheduling)?

slide-30
SLIDE 30

response time: T_response = T_firstrun - T_arrival

  • i.e., how soon is a job given a chance to run after arriving?
  • NB: still no guarantee that job itself will be responsive (what does this depend on?)

slide-31
SLIDE 31

throughput: number of completed jobs or bursts per time unit (e.g., N/sec)

slide-32
SLIDE 32

utilization: % of time CPU is busy running jobs

  • context switch time counts against utilization!
  • CPU can be idle if there are no active jobs or if all jobs are blocked (e.g., on I/O)

slide-33
SLIDE 33

another (subjective) metric: fairness

  • what does this mean?
  • how to measure it?
  • is it useful?
  • how to enforce it?
slide-34
SLIDE 34

Review (scheduling metrics):

  • turnaround time (total & per-burst)
  • wait time (total & per-burst)
  • response time
  • throughput
  • utilization
slide-35
SLIDE 35

§ Scheduling Policies

slide-36
SLIDE 36
  • 1. First Come First Served (FCFS)
slide-37
SLIDE 37

Process | Arrival Time | Burst Time
P1 | 0 | 24
P2 | 0 | 3
P3 | 0 | 3

Wait times: P1 = 0, P2 = 24, P3 = 27; Average: (0+24+27)/3 = 17

“Gantt chart”: P1 (0–24), P2 (24–27), P3 (27–30)

slide-38
SLIDE 38

Convoy Effect

slide-39
SLIDE 39

Process | Arrival Time | Burst Time
P3 | 0 | 3
P2 | 0 | 3
P1 | 0 | 24

Gantt chart: P3 (0–3), P2 (3–6), P1 (6–30)

Wait times: P1 = 6, P2 = 3, P3 = 0; Average: (6+3+0)/3 = 3

slide-40
SLIDE 40
  • 2. Shortest Job First (SJF)
slide-41
SLIDE 41

Process | Arrival Time | Burst Time
P1 | 0 | 7
P2 | 2 | 4
P3 | 4 | 1
P4 | 5 | 4

Gantt chart: P1 (0–7), P3 (7–8), P2 (8–12), P4 (12–16)

Non-preemptive SJF

slide-42
SLIDE 42

Wait times: P1 = 0, P2 = 6, P3 = 3, P4 = 7; Average: (0+6+3+7)/4 = 4

Gantt chart: P1 (0–7), P3 (7–8), P2 (8–12), P4 (12–16)

Process | Arrival Time | Burst Time
P1 | 0 | 7
P2 | 2 | 4
P3 | 4 | 1
P4 | 5 | 4

slide-43
SLIDE 43

can we do better?

slide-44
SLIDE 44

Yes! (theoretically): Preemptive SJF a.k.a. Shortest-Remaining-Time-First

slide-45
SLIDE 45

Gantt chart (SRTF): P1 (0–2), P2 (2–4), P3 (4–5), P2 (5–7), P4 (7–11), P1 (11–16)

Process | Arrival Time | Burst Time
P1 | 0 | 7
P2 | 2 | 4
P3 | 4 | 1
P4 | 5 | 4

slide-46
SLIDE 46

Gantt chart (SRTF): P1 (0–2), P2 (2–4), P3 (4–5), P2 (5–7), P4 (7–11), P1 (11–16)

Wait times: P1 = 9, P2 = 1, P3 = 0, P4 = 2; Average: (9+1+0+2)/4 = 3

Process | Arrival Time | Burst Time
P1 | 0 | 7
P2 | 2 | 4
P3 | 4 | 1
P4 | 5 | 4

slide-47
SLIDE 47

SJF/SRTF are greedy algorithms; i.e., they always select the local optimum. Greedy algorithms don’t always produce globally optimal results (e.g., hill-climbing)

  • is preemptive-SJF optimal?
slide-48
SLIDE 48

consider 4 jobs arriving at t=0, with burst lengths t0, t1, t2, t3

  • avg. wait time if scheduled in order?

wait times are 0, t0, t0+t1, and t0+t1+t2, so:

avg. wait = (3t0 + 2t1 + t2) / 4

slide-49
SLIDE 49

avg. wait = (3t0 + 2t1 + t2) / 4

a weighted average, clearly minimized by running the shortest jobs first; i.e., SJF/PSJF are provably optimal w.r.t. wait time!

slide-50
SLIDE 50

at what cost? … potential starvation! (possible for both non-preemptive & preemptive variants)

slide-51
SLIDE 51

also, we’ve been making two simplifying assumptions:

  • 1. burst lengths are known in advance
  • 2. context switch time = 0
slide-52
SLIDE 52

typically predict future burst lengths based on past behavior

  • simple moving average (using a sliding window of past values)
  • exponentially weighted moving average (EMA)
slide-53
SLIDE 53

Observed: ρn-1
Estimated: σn-1
Weight (α): 0 ≤ α ≤ 1
EMA: σn = α⋅ρn-1 + (1–α)⋅σn-1

i.e., bigger α = more credence given to measured burst lengths

slide-54
SLIDE 54

EMA α = 0.2; simple moving average over the last 3 bursts

Actual | Avg (3) | Error | EMA | Error
4  | 5.00  | 1.00 | 5.00 | 1.00
5  | 4.00  | 1.00 | 4.80 | 0.20
5  | 4.50  | 0.50 | 4.84 | 0.16
6  | 4.67  | 1.33 | 4.87 | 1.13
13 | 5.33  | 7.67 | 5.10 | 7.90
12 | 8.00  | 4.00 | 6.68 | 5.32
11 | 10.33 | 0.67 | 7.74 | 3.26
6  | 12.00 | 6.00 | 8.39 | 2.39
7  | 9.67  | 2.67 | 7.92 | 0.92
5  | 8.00  | 3.00 | 7.73 | 2.73
Avg err: | | 2.78 | | 2.50

[chart: Actual vs. Avg (3) vs. EMA]

slide-55
SLIDE 55

how to deal with starvation?

  • one way: enforce fairness
slide-56
SLIDE 56
  • 3. Round Robin (RR): the “fairest” of them all
  • FIFO queue
  • each job runs for max time quantum q
  • if unfinished, re-enter queue at back
slide-57
SLIDE 57

Given time quantum q and n jobs:

  • max wait time = q ∙ (n – 1)
  • each job receives 1/n timeshare
slide-58
SLIDE 58

Round Robin, q=3

Gantt chart: P1 (0–3), P2 (3–6), P1 (6–9), P3 (9–10), P4 (10–13), P2 (13–14), P1 (14–15), P4 (15–16)

Process | Arrival Time | Burst Time
P1 | 0 | 7
P2 | 2 | 4
P3 | 4 | 1
P4 | 5 | 4

slide-59
SLIDE 59

Gantt chart: P1 (0–3), P2 (3–6), P1 (6–9), P3 (9–10), P4 (10–13), P2 (13–14), P1 (14–15), P4 (15–16)

Process | Arrival Time | Burst Time
P1 | 0 | 7
P2 | 2 | 4
P3 | 4 | 1
P4 | 5 | 4

Wait times: P1 = 8, P2 = 8, P3 = 5, P4 = 7; Average: (8+8+5+7)/4 = 7

slide-60
SLIDE 60

Process | Arrival Time | Burst Time
P1 | 0 | 7
P2 | 2 | 4
P3 | 4 | 1
P4 | 5 | 4

       | Avg. Turnaround | Avg. Wait Time
RR q=1 | 9.75            | 5.75
RR q=3 | 11              | 7
RR q=4 | 9               | 5
RR q=7 | 8.75            | 4.75

slide-61
SLIDE 61

Process | Arrival Time | Burst Time
P1 | 0 | 7
P2 | 2 | 4
P3 | 4 | 1
P4 | 5 | 4

       | Throughput | Utilization
RR q=1 | 0.25       | 1.0
RR q=3 | 0.25       | 1.0
RR q=4 | 0.25       | 1.0
RR q=7 | 0.25       | 1.0

slide-62
SLIDE 62

What happens when we add in context switch times?

slide-63
SLIDE 63

Process | Arrival Time | Burst Time
P1 | 0 | 7
P2 | 2 | 4
P3 | 4 | 1
P4 | 5 | 4

(CST=1)
       | Avg. Turnaround | Avg. Wait Time
RR q=1 | 20.25           | 13.25
RR q=3 | 16.25           | 11.25
RR q=4 | 11.5            | 7.25
RR q=7 | 10.25           | 6.25

slide-64
SLIDE 64

Process | Arrival Time | Burst Time
P1 | 0 | 7
P2 | 2 | 4
P3 | 4 | 1
P4 | 5 | 4

(CST=1)
       | Throughput | Utilization
RR q=1 | 0.125      | 0.5
RR q=3 | 0.167      | 0.667
RR q=4 | 0.19       | 0.762
RR q=7 | 0.2        | 0.8

slide-65
SLIDE 65

CST = fixed, systemic overhead

  • increasing quantum relative to CST amortizes the cost
  • shorter q = worse utilization; longer q ⟶ FIFO
  • NB: cost of context switches is not just the constant CST!
  • Also: cache flush/reload, VM swapping, etc.
slide-66
SLIDE 66

generally, try to choose q to help tune responsiveness (i.e., of interactive jobs); may use:

  • predetermined burst-length threshold (for interactive jobs)
  • median of EMAs
  • process profiling
slide-67
SLIDE 67

RR prevents starvation, and allows both CPU-hungry and interactive jobs to share resources fairly

  • but at the cost of utilization and turnaround times!
slide-68
SLIDE 68

Can exercise more fine-grained control by introducing a system of arbitrary priorities
  • computed and assigned to jobs dynamically by scheduler
  • highest (current) priority goes next
slide-69
SLIDE 69

SJF is an example of a priority scheduler!

  • jobs are weighted using a burst-length prediction algorithm (e.g., EMA)

  • priorities may vary over job lifetimes
slide-70
SLIDE 70

Recall: SJF is prone to starvation Common issue for priority schedulers

  • combat with aging
slide-71
SLIDE 71
  • 4. Highest Penalty Ratio Next (HPRN)
  • a priority scheduler that uses aging to avoid starvation
slide-72
SLIDE 72

Two statistics maintained for each job:

  • 1. total CPU execution time, t
  • 2. “wall clock” age, T
slide-73
SLIDE 73

Priority, “penalty ratio” = T / t

  • ∞ when job is first ready
  • decreases as job receives CPU time
slide-74
SLIDE 74

HPRN in practice would incur too many context switches (due to very short bursts) — fix by instituting a minimum burst quantum (how is this different from RR?)

slide-75
SLIDE 75
  • 5. Selfish RR
  • a more sophisticated priority-based scheduling policy
slide-76
SLIDE 76

[diagram: arriving → holding queue → active queue (RR) → CPU; priority of holding jobs increases at rate α, priority of active jobs increases at rate β]

  • β = 0: RR
  • β ≥ α ≠ 0: FCFS
  • β > α = 0: RR in batches
  • α > β > 0: “Selfish” (ageist) RR

slide-77
SLIDE 77

Another problem (on top of starvation) possibly created by priority-based scheduling policies: priority inversion

slide-78
SLIDE 78

Process | Priority | State
P1 | High | Ready
P2 | Mid | Ready
P3 | Mid | Ready
P4 | Low | Ready

slide-79
SLIDE 79

[diagram: P1 requests the Resource; the Resource is allocated to P4]

Process | Priority | State
P1 | High | Running
P2 | Mid | Ready
P3 | Mid | Ready
P4 | Low | Ready

slide-80
SLIDE 80

Process | Priority | State
P1 | High | Blocked
P2 | Mid | Ready
P3 | Mid | Ready
P4 | Low | Ready

[diagram: P1 requests the Resource; the Resource is allocated to P4]

(mutually exclusive allocation)

slide-81
SLIDE 81

Process | Priority | State
P1 | High | Blocked
P2 | Mid | Running
P3 | Mid | Ready
P4 | Low | Ready

[diagram: P1 requests the Resource; the Resource is allocated to P4]

(mutually exclusive allocation)

slide-82
SLIDE 82

[diagram: P1 requests the Resource; the Resource is allocated to P4]

Process | Priority | State
P1 | High | Blocked
P2 | Mid | Done
P3 | Mid | Running
P4 | Low | Ready

(mutually exclusive allocation)

slide-83
SLIDE 83

[diagram: P1 requests the Resource; the Resource is allocated to P4]

Process | Priority | State
P1 | High | Blocked
P2 | Mid | Done
P3 | Mid | Done
P4 | Low | Running

(mutually exclusive allocation)

slide-84
SLIDE 84

[diagram: P1 requests the Resource]

Process | Priority | State
P1 | High | Blocked
P2 | Mid | Done
P3 | Mid | Done
P4 | Low | Done

(mutually exclusive allocation)

slide-85
SLIDE 85

[diagram: the Resource is allocated to P1]

Process | Priority | State
P1 | High | Ready
P2 | Mid | Done
P3 | Mid | Done
P4 | Low | Done

(mutually exclusive allocation)

slide-86
SLIDE 86

[diagram: the Resource is allocated to P1]

Process | Priority | State
P1 | High | Running
P2 | Mid | Done
P3 | Mid | Done
P4 | Low | Done

(mutually exclusive allocation)

slide-87
SLIDE 87

priority inversion: a high-priority job effectively takes on the priority of a lower-priority one that holds a required resource

slide-88
SLIDE 88

high profile case study: NASA Pathfinder

  • spacecraft developed a recurring system failure/reset
  • occurred after deploying robot to surface of Mars
slide-89
SLIDE 89

culprits:

  • flood of meteorological data
  • low priority of related job: ASI/MET
  • a shared, mutually exclusive resource (semaphore guarding an IPC pipe)

slide-90
SLIDE 90

high priority job (for data aggregation & distribution) — bc_dist — required pipe

  • but always held by ASI/MET
  • in turn kept from running by various mid-priority jobs
slide-91
SLIDE 91

scheduling job determined that bc_dist couldn’t complete per its hard deadline

  • declared an error, resulting in a system reset!
  • reproduced in the lab after 18 hours of on-Earth simulation
slide-92
SLIDE 92

fix: priority inheritance

  • job blocking a higher priority job will inherit the latter’s priority
  • e.g., run ASI/MET at bc_dist’s priority until resource released
slide-93
SLIDE 93

how?

  • enabling priority inheritance via semaphores (in vxWorks)
  • (why wasn’t it on by default?)
  • prescient remote tracing/patching facilities built into the system
slide-94
SLIDE 94

why did NASA not foresee this?

“Our before launch testing was limited to the “best case” high data rates and science activities… We did not expect nor test the “better than we could have ever imagined” case.”

  • Glenn Reeves, software team lead

slide-95
SLIDE 95

takeaways:

  • scheduling bugs are hard to predict, track down, and fix
  • priority inheritance provides a “solution” for priority inversion
  • scheduling is rocket science!
slide-96
SLIDE 96

questions:

  • w.r.t. priority inheritance:
  • pros/cons?
  • how to implement?
  • w.r.t. priority inversion:
  • detection? how else to “fix”?
  • effect on non-real-time OS?
slide-97
SLIDE 97

Even with the fine-grained control offered by a priority scheduler, it is hard to impose different sets of goals on groups of jobs; e.g., top priority for system jobs, RR for interactive jobs, FCFS for batch jobs

slide-98
SLIDE 98
  • 6. Multi-Level Queue (MLQ)
  • disjoint ready queues
  • separate schedulers/policies for each
slide-99
SLIDE 99

[diagram: per-queue policies]
  system: fixed priority
  interactive: RR (small q)
  batch: FCFS
  normal: RR (larger q)

slide-100
SLIDE 100

requires queue arbitration strategy in place

slide-101
SLIDE 101

[diagram: the same queues — system (fixed priority), interactive (RR, small q), batch (FCFS), normal (RR, larger q) — in decreasing priority]

approach 1: prioritize top, non-empty queue

slide-102
SLIDE 102

[diagram: the same queues — system (fixed priority), interactive (RR, small q), batch (FCFS), normal (RR, larger q)]

approach 2: allocate aggregate time slices across the queues (50%, 30%, 15%, 5%)

slide-103
SLIDE 103

what processes go in which queues?

  • self-assigned
  • e.g., UNIX “nice” value
  • “profiling” based on initial burst(s)
  • CPU, I/O burst length
  • e.g., short, intermittent CPU bursts ⇒ classify as interactive job

slide-104
SLIDE 104

classification issue: what if job requirements change dynamically?

  • e.g., photo editor: tool selection (interactive) ➞ apply filter (CPU hungry) ➞ simple edits (interactive)

slide-105
SLIDE 105
  • 7. Multi-Level Feedback Queue (MLFQ)
  • supports movement between queues after initial assignment
  • based on ongoing job accounting
slide-106
SLIDE 106

e.g., 3 RR queues with different q: RR (q=2), RR (q=4), RR (q=8); assignment based on q/burst-length fit

slide-107
SLIDE 107

RR (q=2) RR (q=4) RR (q=8)

P1

P1

Process | Arrival Time | Burst Time
P1 | 0 | 7
P2 | 2 | 4
P3 | 4 | 1
P4 | 5 | 4

slide-108
SLIDE 108

RR (q=2)

P1

RR (q=4) RR (q=8)

P1

P2

P2


slide-109
SLIDE 109

RR (q=2)

P2 P1

RR (q=4) RR (q=8)

P1 P2

P3

P3


slide-110
SLIDE 110

RR (q=2)

P2 P1

RR (q=4) RR (q=8)

P1 P3 P2

P4

P4


slide-111
SLIDE 111

RR (q=2)

P4 P2 P1

RR (q=4) RR (q=8)

P1 P3 P2 P4 P1


slide-112
SLIDE 112

RR (q=2)

P4 P2

RR (q=4)

P1

RR (q=8)

P1 P3 P2 P4 P1 P2


slide-113
SLIDE 113

RR (q=2)

P4

RR (q=4)

P1

RR (q=8)

P1 P3 P2 P4 P1 P2 P4


slide-114
SLIDE 114

RR (q=2) RR (q=4)

P1

RR (q=8)

P1 P3 P2 P4 P1 P2 P4 P1


slide-115
SLIDE 115

MLFQ schedule: P1 (0–2), P2 (2–4), P3 (4–5), P4 (5–7), P1 (7–11), P2 (11–13), P4 (13–15), P1 (15–16)

Wait times: P1 = 9, P2 = 7, P3 = 0, P4 = 6; Average: (9+7+0+6)/4 = 5.5 (vs 7 for RR, q=3)

Process | Arrival Time | Burst Time
P1 | 0 | 7
P2 | 2 | 4
P3 | 4 | 1
P4 | 5 | 4

slide-116
SLIDE 116
  • following I/O, processes return to previously assigned queue
  • when to move up?
  • for RR, when burst ≤ q
slide-117
SLIDE 117

RR (q=2) RR (q=4) RR (q=8)

e.g., Pflaky arrives at t=0 CPU burst lengths = 7, 4, 1, 5 (I/O between)

slide-118
SLIDE 118

RR (q=2) RR (q=4) RR (q=8)

e.g., Pflaky arrives at t=0 CPU burst lengths = 7, 4, 1, 5 (I/O between)

Pf Pf

slide-119
SLIDE 119

RR (q=2) RR (q=4) RR (q=8)

Pf Pf Pf

e.g., Pflaky arrives at t=0 CPU burst lengths = 7, 4, 1, 5 (I/O between)

slide-120
SLIDE 120

RR (q=2) RR (q=4) RR (q=8)

Pf Pf Pf Pf

e.g., Pflaky arrives at t=0 CPU burst lengths = 7, 4, 1, 5 (I/O between)

slide-121
SLIDE 121

RR (q=2) RR (q=4) RR (q=8)

Pf Pf Pf Pf

(I/O)

e.g., Pflaky arrives at t=0 CPU burst lengths = 7, 4, 1, 5 (I/O between)

slide-122
SLIDE 122

RR (q=2) RR (q=4) RR (q=8)

Pf Pf Pf

(I/O)

e.g., Pflaky arrives at t=0 CPU burst lengths = 7, 4, 1, 5 (I/O between)

slide-123
SLIDE 123

RR (q=2) RR (q=4) RR (q=8)

Pf Pf Pf

(I/O)

Pf Pf

e.g., Pflaky arrives at t=0 CPU burst lengths = 7, 4, 1, 5 (I/O between)

slide-124
SLIDE 124

RR (q=2) RR (q=4) RR (q=8)

Pf Pf Pf

(I/O)

Pf Pf

e.g., Pflaky arrives at t=0 CPU burst lengths = 7, 4, 1, 5 (I/O between)

slide-125
SLIDE 125

RR (q=2) RR (q=4) RR (q=8)

Pf Pf Pf

(I/O)

Pf

e.g., Pflaky arrives at t=0 CPU burst lengths = 7, 4, 1, 5 (I/O between)

slide-126
SLIDE 126

RR (q=2) RR (q=4) RR (q=8)

Pf Pf Pf

(I/O)

Pf Pf Pf

e.g., Pflaky arrives at t=0 CPU burst lengths = 7, 4, 1, 5 (I/O between)

slide-127
SLIDE 127

RR (q=2) RR (q=4) RR (q=8)

Pf Pf Pf

(I/O)

Pf Pf Pf

e.g., Pflaky arrives at t=0 CPU burst lengths = 7, 4, 1, 5 (I/O between)

slide-128
SLIDE 128

RR (q=2) RR (q=4) RR (q=8)

Pf Pf Pf

(I/O)

Pf Pf

e.g., Pflaky arrives at t=0 CPU burst lengths = 7, 4, 1, 5 (I/O between)

slide-129
SLIDE 129

RR (q=2) RR (q=4) RR (q=8)

Pf Pf Pf

(I/O)

Pf Pf Pf Pf

e.g., Pflaky arrives at t=0 CPU burst lengths = 7, 4, 1, 5 (I/O between)

slide-130
SLIDE 130

RR (q=2) RR (q=4) RR (q=8)

Pf Pf Pf

(I/O)

Pf Pf Pf Pf Pf

e.g., Pflaky arrives at t=0 CPU burst lengths = 7, 4, 1, 5 (I/O between)

slide-131
SLIDE 131

RR (q=2) RR (q=4) RR (q=8)

Pf Pf Pf

(I/O)

Pf Pf Pf Pf

e.g., Pflaky arrives at t=0 CPU burst lengths = 7, 4, 1, 5 (I/O between)

slide-132
SLIDE 132
  • other possible heuristics:
  • multi-queue hops due to huge bursts
  • exponential backoff to avoid queue hopping
  • dynamic queue creation for outliers
slide-133
SLIDE 133

§ Scheduler Evaluation

slide-134
SLIDE 134

i.e., how well does a given scheduling policy perform under different loads? typically evaluated w.r.t. scheduling metrics: wait time, turnaround, utilization, etc.

slide-135
SLIDE 135

n.b., numerical metrics (e.g., wait time) are important, but may not tell the full story; e.g., how, subjectively, does a given scheduler “feel” under regular load?

slide-136
SLIDE 136
  • 1. paper & pencil computations
  • 2. simulations with synthetic or real-world job traces
  • 3. mathematical models; e.g., queueing theory
  • 4. real world testing (e.g., production OSes)
slide-137
SLIDE 137

(never fear, you’ll try your hand at all!)

slide-138
SLIDE 138

e.g., UTSA process scheduling simulator

  • specify scheduling discipline and job details in configuration file
  • bursts can be defined discretely, or using probability distributions

slide-139
SLIDE 139
  • output: Gantt charts & metrics
slide-140
SLIDE 140

SJF vs. PSJF vs. RR, q=10 vs. RR, q=20

processes: uniform bursts ≤ 20, CST = 1.0

Which is which?