CS 423 Operating System Design: Midterm Review
Professor Adam Bates, Spring 2018


SLIDE 1

CS 423: Operating Systems Design

Professor Adam Bates Spring 2018

CS 423 Operating System Design: Midterm Review

SLIDE 2

  • Learning Objective:
  • Review material, and also my strategies for writing midterm questions

  • Announcements, etc:
  • Midterm exam on Wednesday at 11


Goals for Today

Reminder: Please put away devices at the start of class

SLIDE 3

Midterm Details


  • In-class on March 6th (i.e., 50 minutes)
  • Scantron multiple choice
  • bring pencils!
  • 20-30 questions
  • Open book: textbooks, paper notes, and printed sheets allowed. No electronic devices permitted (or necessary)!
  • Content: all lecture and text material covered prior to March 6th (i.e., up to and including memory)

SLIDE 4

Sample Midterm Q

■ Which of the following is not a good reason for increasing the size of a system’s page frames?
  ■ Improves memory utilization/efficiency
  ■ Decreases memory footprint of virtual memory management
  ■ Improves disk utilization/efficiency

SLIDE 5

Sample Midterm Q


Page Size Considerations

■ Small pages
  ■ Reason: locality of reference tends to be small (256); less fragmentation
  ■ Problem: require large page tables
■ Large pages
  ■ Reason: small page table; I/O transfers have high seek time, so better to transfer more data per seek
  ■ Problem: internal fragmentation, needless caching

■ Which of the following is not a good reason for increasing the size of a system’s page frames?
  ■ Improves memory utilization/efficiency
  ■ Decreases memory footprint of virtual memory management
  ■ Improves disk utilization/efficiency

SLIDE 6

Sample Midterm Q

■ Which of the following is not a good reason for increasing the size of a system’s page frames?
  ■ Less fragmentation
  ■ Smaller page table
  ■ Better to transfer more data per disk seek

SLIDE 7

Page Size Considerations


■ Small pages
  ■ Reason: locality of reference tends to be small (256); less fragmentation
  ■ Problem: require large page tables
■ Large pages
  ■ Reason: small page table; I/O transfers have high seek time, so better to transfer more data per seek
  ■ Problem: internal fragmentation, needless caching

SLIDE 8

Sample Midterm Q

■ Which of the following is not a good reason for increasing the size of a system’s page frames?
  ■ Less fragmentation
  ■ Smaller page table
  ■ Better to transfer more data per disk seek

SLIDE 9

Sample Midterm Q

■ Which of the following is not a good reason for increasing the size of a system’s page frames?
  ■ Improves memory utilization/efficiency
  ■ Decreases memory footprint of virtual memory management
  ■ Improves disk utilization/efficiency

SLIDE 10

Sample Midterm Q

■ With CFS active, tasks X, Y, and Z accumulate virtual execution time at a rate of 1, 2, and 3, respectively. What is the expected share of the CPU that each gets?
  ■ X=17%, Y=33%, Z=50%
  ■ X=55%, Y=27%, Z=18%
  ■ X=50%, Y=33%, Z=17%
  ■ X=18%, Y=27%, Z=55%

SLIDE 11

Completely Fair Scheduler

■ Merged into the 2.6.23 release of the Linux kernel and is the default scheduler.
■ Scheduler maintains a red-black tree where nodes are ordered according to received virtual execution time
■ Node with smallest received virtual execution time is picked next
■ Priorities determine accumulation rate of virtual execution time
■ Higher priority → slower accumulation rate

Property of CFS: If all tasks' virtual clocks run at exactly the same speed, they will all get the same amount of time on the CPU. How does CFS account for I/O-intensive tasks?


Sample Midterm Q

■ With CFS active, tasks X, Y, and Z accumulate virtual execution time at a rate of 1, 2, and 3, respectively. What is the expected share of the CPU that each gets?
  ■ X=17%, Y=33%, Z=50%
  ■ X=55%, Y=27%, Z=18%
  ■ X=50%, Y=33%, Z=17%
  ■ X=18%, Y=27%, Z=55%

SLIDE 12



Sample Midterm Q

“X should have twice as much CPU as Y, three times as much CPU as Z”

■ With CFS active, tasks X, Y, and Z accumulate virtual execution time at a rate of 1, 2, and 3, respectively. What is the expected share of the CPU that each gets?
  ■ X=17%, Y=33%, Z=50%
  ■ X=55%, Y=27%, Z=18%
  ■ X=50%, Y=33%, Z=17%
  ■ X=18%, Y=27%, Z=55%

SLIDE 13



Sample Midterm Q

■ With CFS active, tasks X, Y, and Z accumulate virtual execution time at a rate of 1, 2, and 3, respectively. What is the expected share of the CPU that each gets?
  ■ X=17%, Y=33%, Z=50%
  ■ X=55%, Y=27%, Z=18%
  ■ X=50%, Y=33%, Z=17%
  ■ X=18%, Y=27%, Z=55%

“X should have twice as much CPU as Y, three times as much CPU as Z”

SLIDE 14

Sample Midterm Q

■ Below are chronologically-ordered series of tasks with their completion times shown. Which sequence offers a pessimal (i.e., worst-case) average response time for FIFO scheduling?
  ■ 1, 2, 3, 4
  ■ 2, 2, 2, 2
  ■ 3, 1, 3, 1
  ■ 4, 3, 2, 1

SLIDE 15

Sample Midterm Q

■ Below are chronologically-ordered series of tasks with their completion times shown. Which sequence offers a pessimal (i.e., worst-case) average response time for FIFO scheduling?
  ■ 1, 2, 3, 4
  ■ 2, 2, 2, 2
  ■ 3, 1, 3, 1
  ■ 4, 3, 2, 1

FIFO vs. SJF

[Figure: Gantt-style timeline of the same five tasks scheduled under FIFO and under SJF.]
SLIDE 16

Sample Midterm Q

■ Below are chronologically-ordered series of tasks with their completion times shown. Which sequence offers a pessimal (i.e., worst-case) average response time for FIFO scheduling?
  ■ 1, 2, 3, 4
  ■ 2, 2, 2, 2
  ■ 3, 1, 3, 1
  ■ 4, 3, 2, 1

FIFO vs. SJF

[Figure: Gantt-style timeline of the same five tasks scheduled under FIFO and under SJF.]

“Which sequence maximizes wait time?”

SLIDE 17

Sample Midterm Q

■ Below are chronologically-ordered series of tasks with their completion times shown. Which sequence offers a pessimal (i.e., worst-case) average response time for FIFO scheduling?
  ■ 1, 2, 3, 4
  ■ 2, 2, 2, 2
  ■ 3, 1, 3, 1
  ■ 4, 3, 2, 1

FIFO vs. SJF

[Figure: Gantt-style timeline of the same five tasks scheduled under FIFO and under SJF.]

“Which sequence maximizes wait time?”

SLIDE 18

More Q&A


SLIDE 19

Remainder of these slides


  • This is not a study guide
  • I prepared these by walking the lecture slides from start to finish and sampling important concepts
  • Slides intended to prompt discussion and questions
  • Test is written at this point, but this deck leaks minimal information; don't try to read into which slides I did/didn't copy over to here.
  • There are no memory slides since we just covered it, but obviously there will be questions about memory on the exam.
SLIDE 20

Overview: OS Stack

The OS runs on multiple platforms while presenting the same interface:

[Figure: layered OS stack. Application software (web server, browser, Slack, POP mail) sits on the standard operating system interface (read/write, standard output, device control, file system, communication); the machine-independent operating system sits on a hardware abstraction layer and a machine-specific part that drives the network hardware.]

SLIDE 21

Overview: OS Roles

Role #1: Referee

  • Manage resource allocation between users and applications
  • Isolate different users and applications from one another
  • Facilitate and mediate communication between different users and applications

Role #2: Illusionist

  • Allow each application to believe it has the entire machine to itself
  • Create the appearance of an infinite number of processors, (near) infinite memory
  • Abstract away complexity of reliability, storage, network communication…

Role #3: Glue

  • Manage hardware so applications can be machine-agnostic
  • Provide a set of common services that facilitate sharing among applica3ons
  • Examples of “Glue” OS Services?


SLIDE 22

Review: System Calls


[Figure: a function call (fnCall()) stays within a single process; a system call (sysCall()) crosses from the process into the OS.]

Function Calls

Caller and callee are in the same Process

  • Same user
  • Same “domain of trust”

System Calls

  • OS is trusted; user is not.
  • OS has super-privileges; user does not
  • Must take measures to prevent abuse
SLIDE 23

Review: Process Abstraction


Possible process states:

  • Running (occupies the CPU)
  • Blocked
  • Ready (does not occupy the CPU)
  • Other states: suspended, terminated

Question: on a single-processor machine, how many processes can be in the running state?

SLIDE 24

Review: Threads


■ (a) Three processes each with one thread
■ (b) One process with three threads

[Figure: each configuration pairs an environment (resources) with one or more units of execution.]

SLIDE 25

Kernel Abstraction: HW Support

[Figure: hardware support for the kernel: the CPU fetches and executes the instruction at the program counter; based on the opcode, select logic chooses the next PC (new PC, handler PC, or branch address) and selects the mode (current or new mode).]

SLIDE 26

Kernel Abstraction: CTX Switch

[Figure: context switch: the CPU saves the running task's state (program counter, stack pointer, and registers, which reference its code, data, and stack segments) as its context, then loads the next task's saved context.]

SLIDE 27

Kernel Abstraction: PCBs

The state of a process that is not running on the CPU is maintained in the Process Control Block (PCB) data structure, which is updated during each context switch.

[Figure: an alternate PCB diagram.]

SLIDE 28

Interrupts: Model


[Figure: external devices deliver interrupts to interrupt handlers; context switching plus scheduling multiplexes the hardware CPU into several "virtual" CPUs.]

Interrupts drive scheduling decisions! Interrupt handlers are also tasks that share the CPU.

SLIDE 29

Interrupts: Handling


How does interrupt handling change the instruction cycle?

[Figure: instruction cycle with an interrupt stage: START, then the fetch stage (fetch next instruction) and the execute stage (execute instruction); unless interrupts are disabled, an interrupt stage then checks for a pending interrupt and initializes its handler, before looping back to fetch (or HALTing).]

SLIDE 30

Interrupts: Handling

Table set up by OS kernel; pointers to code to run on different events

[Figure: a processor register points to the interrupt vector, whose entries are the addresses of handler routines:]

handleTimerInterrupt() { ... }
handleDivideByZero() { ... }
handleSystemCall() { ... }

SLIDE 31

System Calls: Under the Hood


read(fd, buffer, nbytes)

SLIDE 32

Concurrency: Thread Lifecycle


[Figure: thread lifecycle: Init → Ready → Running → Finished, with a Waiting state off Running. Transitions: thread creation (sthread_create()); scheduler resumes thread (Ready → Running); thread exit (sthread_exit()); thread yield / scheduler suspends thread (sthread_yield()); thread waits for event (sthread_join()); event occurs (other thread calls sthread_exit()).]

SLIDE 33

Concurrency: Thread State


[Figure: thread state: in the kernel, per-process PCBs and per-thread TCBs with kernel stacks; each user-level process has its own code, globals, heap, and a stack per thread.]

SLIDE 34

Synchronization: Principals


[Figure: synchronization layering: concurrent applications are built on shared objects (bounded buffer, barrier), which use synchronization variables (semaphores, locks, condition variables), which rely on atomic instructions (test-and-set, interrupt disable) provided by the hardware (multiple processors, hardware interrupts).]

SLIDE 35

Synchronization: Locks


  • Lock::acquire
    – wait until lock is free, then take it
  • Lock::release
    – release lock, waking up anyone waiting for it

  • 1. At most one lock holder at a time (safety)
  • 2. If no one holding, acquire gets lock (progress)
  • 3. If all lock holders finish and no higher priority waiters, waiter eventually gets lock (progress)

SLIDE 36

Synchronization: Condition Variables

  • Waiting inside a critical section
  • Called only when holding a lock
  • CV::Wait — atomically release lock and relinquish processor
  • Reacquire the lock when wakened
  • CV::Signal — wake up a waiter, if any
  • CV::Broadcast — wake up all waiters, if any

SLIDE 37

Synchronization: Spinlocks

  • A spinlock is a lock where the processor waits in a loop for the lock to become free
  • Assumes lock will be held for a short time
  • Used to protect the CPU scheduler and to implement locks

Spinlock::acquire() {
    while (testAndSet(&lockValue) == BUSY)
        ;
}

Spinlock::release() {
    lockValue = FREE;
    memorybarrier();
}

SLIDE 38

Semaphores

  • Semaphore has a non-negative integer value
  • P() atomically waits for value to become > 0, then decrements
  • V() atomically increments value (waking up waiter if needed)
  • Semaphores are like integers except:
    • Only operations are P and V
    • Operations are atomic
    • If value is 1, two P’s will result in value 0 and one waiter

SLIDE 39

Scheduling: Principals

Basic scheduling algorithms:

  • FIFO (FCFS)
  • Shortest job first
  • Round Robin

What is an optimal algorithm in the sense of maximizing the number of jobs finished (i.e., minimizing average response time)?

SLIDE 40

Scheduling: Mixed Workloads??

[Figure: timeline of an I/O-bound task alternating with CPU-bound tasks: the I/O-bound task runs briefly, issues an I/O request, and waits; when the I/O completes it needs the CPU again while the CPU-bound tasks are still running.]

SLIDE 41

Scheduling: MFQ

[Figure: multi-level feedback queue: round-robin queues at priorities 1 through 4 with time slices of 10, 20, 40, and 80 ms; new or I/O-bound tasks enter at the highest priority, and a task whose time slice expires drops to the next lower-priority queue.]

SLIDE 42

Scheduling: Early Linux

■ Linux 1.2: circular queue w/ round-robin policy.
  ■ Simple and minimal.
  ■ Did not meet many of the aforementioned goals
■ Linux 2.2: introduced scheduling classes (real-time, non-real-time).

/* Scheduling Policies */
#define SCHED_OTHER 0 // Normal user tasks (default)
#define SCHED_FIFO  1 // RT: Will almost never be preempted
#define SCHED_RR    2 // RT: Prioritized RR queues

SLIDE 43

Scheduling: SCHED_NORMAL

■ Used for non-real-time processes
■ Complex heuristic to balance the needs of I/O- and CPU-centric applications
■ Processes start at 120 by default
■ Static priority
  ■ A "nice" value: 19 to -20.
  ■ Inherited from the parent process
  ■ Altered by user (negative values require special permission)
■ Dynamic priority
  ■ Based on static priority and application characteristics (interactive or CPU-bound)
  ■ Favors interactive applications over CPU-bound ones
■ Timeslice is mapped from priority

SLIDE 44

Scheduling: SCHED_NORMAL Heuristic

bonus = min(10, (avg. sleep time in ms) / 100)

  • avg. sleep time is 0 => bonus is 0
  • avg. sleep time is 100 ms => bonus is 1
  • avg. sleep time is 1000 ms => bonus is 10
  • avg. sleep time is 1500 ms => bonus is 10
  • Your bonus increases as you sleep more.

dynamic priority = max(100, min(static priority - bonus + 5, 139))

Min priority # is still 100; max priority # is still 139.

How does a dynamic priority adjust CPU access? (Bonus is subtracted to increase priority.)

SLIDE 45

Scheduling: CFS

■ Merged into the 2.6.23 release of the Linux kernel and is the default scheduler.
■ Scheduler maintains a red-black tree where nodes are ordered according to received virtual execution time
■ Node with smallest received virtual execution time is picked next
■ Priorities determine accumulation rate of virtual execution time
■ Higher priority → slower accumulation rate

SLIDE 46

Scheduling: Red-Black Trees

■ CFS dispenses with a run queue and instead maintains a time-ordered red-black tree. Why?

An RB tree is a BST with the constraints:

  • 1. Each node is red or black
  • 2. Root node is black
  • 3. All leaves (NIL) are black
  • 4. If a node is red, both children are black
  • 5. Every path from a given node to its descendant NIL leaves contains the same number of black nodes

Takeaway: In an RB tree, the path from the root to the farthest leaf is no more than twice as long as the path from the root to the nearest leaf.

SLIDE 47

Scheduling: Multi-Processor

  • CPU affinity would seem to necessitate a multi-queue approach to scheduling… but how?
  • Asymmetric Multiprocessing (AMP): One processor (e.g., CPU 0) handles all scheduling decisions and I/O processing; the other processors execute only user code.
  • Symmetric Multiprocessing (SMP): Each processor is self-scheduling. Could work with a single queue, but also works with private queues.
  • Potential problems?
SLIDE 48

RTS: Scheduling

Re: Real-Time Scheduling of Periodic Tasks…

■ Result #1: Earliest Deadline First (EDF) is the optimal dynamic priority scheduling policy for independent periodic tasks (meets the most deadlines of all dynamic priority scheduling policies)

■ Result #2: Rate Monotonic Scheduling (RM) is the optimal static priority scheduling policy for independent periodic tasks (meets the most deadlines of all static priority scheduling policies)
SLIDE 49

RTS: Priority Inversion

How should we account for priority inversion?

[Figure: attempt to lock S results in blocking: the low-priority task locks S; the high-priority task preempts it but blocks when it attempts to lock S, and cannot run until the low-priority task unlocks S. This is a priority inversion.]

SLIDE 50

RTS: Unbounded Priority Inversion

Consider the case below: a series of intermediate-priority tasks is delaying a higher-priority one.

[Figure: the low-priority task locks S; the high-priority task blocks attempting to lock S, while intermediate-priority tasks repeatedly preempt the low-priority holder, so the blocking can last indefinitely: unbounded priority inversion.]

SLIDE 51

RTS: Priority Inheritance Protocol

Solution: Let a task inherit the priority of any higher-priority task it is blocking.

[Figure: the low-priority task locks S and is preempted; when the high-priority task blocks on S, the holder inherits its priority, so intermediate-priority tasks can no longer preempt it; the holder unlocks S, and the high-priority task locks S, runs, and unlocks S.]

SLIDE 52

RTS: Priority Ceiling Protocol

■ Definition: The priority ceiling of a semaphore is the highest priority of any task that can lock it

■ A task that requests a lock Rk is denied if its priority is not higher than the highest priority ceiling of all semaphores currently locked by other tasks (say it belongs to semaphore Rh)

■ The task is said to be blocked by the task holding lock Rh

■ A task inherits the priority of the top higher-priority task it is blocking