Operating Systems CMPSC 473 Midterm 2 Review - PowerPoint PPT Presentation


SLIDE 1

Operating Systems CMPSC 473

Midterm 2 Review
April 15, 2008 - Lecture 21
Instructor: Trent Jaeger

SLIDE 2

Scope

  • Chapter 6 -- Synchronization
  • Chapter 7 -- Deadlocks
  • Chapter 8 -- Main Memory (Physical)
  • Chapter 9 -- Virtual Memory
  • Chapter 10 -- File System Interface
  • Chapter 11 -- File System Implementation


SLIDE 3

Synchronization

  • Little’s Law -- Final
  • Problems
  • Synchronization Requirements
  • Disabling Interrupts
  • Busy-wait/Spinlock solutions
    – Related to properties
  • Hardware Enabled Solutions
  • OS-supported


SLIDE 4

Requirements for Solution

1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.

2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely.

3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

  • Assume that each process executes at a nonzero speed
  • No assumption concerning relative speed of the N processes
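
One classic software solution that meets all three requirements for two processes is Peterson's algorithm (not shown on these slides; the names below are illustrative). A minimal Python sketch, assuming sequentially consistent memory, which CPython's GIL effectively provides, so this is a demo rather than a real spinlock:

```python
import threading
import time

# Peterson's algorithm for two processes (i = 0 or 1).
flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # index of the process that must yield when both want in
counter = 0             # shared variable the lock protects

def enter(i):
    global turn
    other = 1 - i
    flag[i] = True                          # announce intent to enter
    turn = other                            # let the other process go first
    while flag[other] and turn == other:    # busy-wait (spinlock)
        time.sleep(0)                       # yield the GIL so the demo runs quickly

def leave(i):
    flag[i] = False                         # exit protocol: just drop the claim

def worker(i, n):
    global counter
    for _ in range(n):
        enter(i)
        tmp = counter                       # critical section: a deliberately
        counter = tmp + 1                   # non-atomic read-modify-write
        leave(i)

N = 5000
threads = [threading.Thread(target=worker, args=(i, N)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
assert counter == 2 * N                     # mutual exclusion held
```

Mutual exclusion holds because both processes can be past the while loop only if turn had two values at once; progress and bounded waiting hold because turn is set by the last process to arrive, so neither can be overtaken twice.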
SLIDE 5

Synchronization

  • Hardware Enabled Solutions
  • OS-supported Solutions
    – Mutex
    – Semaphores
    – Condition Variables
  • Apply these to code
  • Classic Synchronization Problems


SLIDE 6

Example Blocking Implementation

    Enter_CS(L) {
        Disable Interrupts
        Check if anyone is using L
        If not {
            Set L to being used
        } else {
            Move this PCB to Blocked queue for L
            Select another process to run from Ready queue
            Context switch to that process
        }
        Enable Interrupts
    }

    Exit_CS(L) {
        Disable Interrupts
        Check if Blocked queue for L is empty
        If so {
            Set L to free
        } else {
            Move PCB from head of Blocked queue of L to Ready queue
        }
        Enable Interrupts
    }

NOTE: These are OS system calls!
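
In user space we cannot disable interrupts, but the same structure can be sketched with a short internal lock standing in for the interrupt-off region and a per-waiter Event standing in for the Blocked-queue entry (a rough analogy, not the kernel mechanism):

```python
import threading
from collections import deque

class BlockingLock:
    def __init__(self):
        self._guard = threading.Lock()  # stands in for "Disable Interrupts"
        self._busy = False              # is anyone using L?
        self._blocked = deque()         # FIFO Blocked queue for L

    def enter_cs(self):
        with self._guard:
            if not self._busy:
                self._busy = True       # Set L to being used
                return
            ev = threading.Event()
            self._blocked.append(ev)    # move this "PCB" to the Blocked queue
        ev.wait()                       # "context switch": sleep until woken

    def exit_cs(self):
        with self._guard:
            if not self._blocked:
                self._busy = False      # Set L to free
            else:
                # Move head of Blocked queue to "Ready": ownership passes
                # directly to the woken thread, so _busy stays True.
                self._blocked.popleft().set()

lock = BlockingLock()
counter = 0

def worker(n):
    global counter
    for _ in range(n):
        lock.enter_cs()
        counter += 1
        lock.exit_cs()

threads = [threading.Thread(target=worker, args=(2000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert counter == 8000
```

Handing the lock directly to the head of the queue (rather than freeing it and letting everyone race) is what gives the FIFO bounded-waiting behavior.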

SLIDE 7

Semaphores

  • You are given a data type Semaphore_t.
  • On a variable of this type, you are allowed:
    – P(Semaphore_t) -- wait
    – V(Semaphore_t) -- signal
  • Intuitive Functionality:
    – Logically one could visualize the semaphore as having a counter, initially set to 0.
    – When you do a P(), you decrement the count, and need to block if the count becomes negative.
    – When you do a V(), you increment the count, and you wake up 1 process from the blocked queue if it is not empty.
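
That intuition can be written down directly. A sketch using a Python Condition in place of the OS's internal protection (CPython's Condition only wakes on notify, so the single `if` around wait() suffices for this sketch):

```python
import threading

class Semaphore_t:
    """Slide semantics: P() decrements and blocks if the count becomes
    negative; V() increments and wakes one blocked process."""
    def __init__(self, count=0):
        self._count = count
        self._cond = threading.Condition()

    def P(self):                          # wait
        with self._cond:
            self._count -= 1
            if self._count < 0:
                self._cond.wait()         # block until some V() wakes us

    def V(self):                          # signal
        with self._cond:
            self._count += 1
            if self._count <= 0:
                self._cond.notify()       # wake 1 process from the blocked queue

# Initialized to 1 (rather than the slide's 0), the semaphore acts as a mutex:
sem = Semaphore_t(1)
counter = 0

def worker():
    global counter
    for _ in range(10000):
        sem.P()
        counter += 1
        sem.V()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
assert counter == 30000
```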

SLIDE 8

Deadlocks

  • Necessary Conditions
  • Safe States
  • Resource Allocation Graph
  • Deadlock Prevention
    – Safe States
  • Deadlock Detection
    – Detection Algorithm
  • Recovery


SLIDE 9

Necessary Conditions for a Deadlock

  • Mutual exclusion: The requesting process is delayed until the resource held by another is released.
  • Hold and wait: A process must be holding at least 1 resource and must be waiting for 1 or more resources held by others.
  • No preemption: Resources cannot be preempted from one process and given to another.
  • Circular wait: A set (P0, P1, …, Pn) of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, and Pn is waiting for a resource held by P0.
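
Circular wait is the condition algorithms actually look for: build a wait-for graph with an edge Pi → Pj whenever Pi waits for a resource Pj holds, and search it for a cycle. A small sketch (the example graphs are illustrative, not from the slides):

```python
def has_cycle(wait_for):
    """DFS cycle detection on a dict {process: [processes it waits on]}."""
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on current path / finished
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:
                return True             # back edge: circular wait exists
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

assert has_cycle({"P0": ["P1"], "P1": ["P2"], "P2": ["P0"]})   # P0->P1->P2->P0
assert not has_cycle({"P0": ["P1"], "P1": ["P2"], "P2": []})   # a chain, no cycle
```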

SLIDE 10

Deadlock Prevention Example

5 processes, 3 resource types A (10 instances), B (5 instances), C (7 instances)

          MaxNeeds    Allocated   StillNeeds     Free
          A  B  C     A  B  C     A  B  C     A  B  C
    P0    7  5  3     0  1  0     7  4  3     3  3  2
    P1    3  2  2     2  0  0     1  2  2
    P2    9  0  2     3  0  2     6  0  0
    P3    2  2  2     2  1  1     0  1  1
    P4    4  3  3     0  0  2     4  3  1

This state is safe, because there is a reduction sequence <P1, P3, P4, P2, P0> that can satisfy all the requests. Exercise: Formally go through each of the steps that update these matrices for the reduction sequence.
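
The safety check behind this state can be sketched directly: repeatedly pick any process whose StillNeeds fits in Free, let it finish, and release its allocation. This follows the standard algorithm; the slide's <P1, P3, P4, P2, P0> is one valid order, and the code may find a different but equally safe one:

```python
# The slide's state: per-process rows of (A, B, C).
still_needs = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
allocated   = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
free        = [3, 3, 2]

def safe_sequence(need, alloc, avail):
    """Return a reduction sequence (list of process indices) or None if unsafe."""
    avail = avail[:]
    done = [False] * len(need)
    order = []
    progress = True
    while progress:
        progress = False
        for i, n in enumerate(need):
            if not done[i] and all(n[j] <= avail[j] for j in range(len(avail))):
                # Process i can run to completion: release its allocation.
                avail = [avail[j] + alloc[i][j] for j in range(len(avail))]
                done[i] = True
                order.append(i)
                progress = True
    return order if all(done) else None

seq = safe_sequence(still_needs, allocated, free)
assert seq is not None                  # the state is safe
assert sorted(seq) == [0, 1, 2, 3, 4]   # every process completes
```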

SLIDE 11

Deadlock Detection Example

5 processes, 3 resource types A (7 instances), B (2 instances), C (6 instances)

          Allocated    Request      Free
          A  B  C     A  B  C     A  B  C
    P0    0  1  0     0  0  0     0  0  0
    P1    2  0  0     2  0  2
    P2    3  0  3     0  0  0
    P3    2  1  1     1  0  0
    P4    0  0  2     0  0  2

This state is NOT deadlocked. By applying the detection algorithm, the sequence <P0, P2, P3, P1, P4> will result in Done[i] being TRUE for all processes.
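
Detection is the same reduction with Request in place of StillNeeds: any process whose outstanding Request fits in the available Work is optimistically assumed to finish and release what it holds. A sketch on the slide's state:

```python
# The slide's state: per-process rows of (A, B, C).
allocated = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
request   = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
free      = [0, 0, 0]

def detect(alloc, req, avail):
    """Return the Done[] vector; deadlocked iff some entry stays False."""
    work = avail[:]
    done = [False] * len(alloc)
    changed = True
    while changed:
        changed = False
        for i in range(len(alloc)):
            if not done[i] and all(req[i][j] <= work[j] for j in range(3)):
                # Process i can finish: add its allocation back to Work.
                work = [work[j] + alloc[i][j] for j in range(3)]
                done[i] = True
                changed = True
    return done

assert all(detect(allocated, request, free))   # not deadlocked
```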

SLIDE 12

Main Memory

  • Swapping
  • Allocation
    – Contiguous, Non-contiguous (paging)
    – Algorithms
  • Fragmentation
    – Internal, External
  • Page-tables, TLBs
    – Virtual-physical translation
    – Page table structure, entries


SLIDE 13

Memory Allocation

[Figure: physical memory with the OS region at one end, allocated regions (P1, P2, P3), free regions (holes) between them, and a queue of waiting requests/jobs.]

Question: How do we perform this allocation?
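
One common answer is to keep a list of holes and scan it: first-fit takes the first hole that is large enough (best-fit would take the smallest such hole, worst-fit the largest). A sketch with illustrative hole addresses:

```python
def first_fit(holes, size):
    """holes: mutable list of (start, length) free regions, in address order.
    Returns the start address of the allocation, or None if no hole fits."""
    for k, (start, length) in enumerate(holes):
        if length >= size:
            if length == size:
                holes.pop(k)                              # hole consumed exactly
            else:
                holes[k] = (start + size, length - size)  # shrink from the front
            return start
    return None   # no single hole is big enough: external fragmentation

holes = [(0, 100), (200, 50), (300, 300)]
assert first_fit(holes, 60) == 0        # carved out of the first hole
assert holes[0] == (60, 40)             # remainder becomes a smaller hole
assert first_fit(holes, 45) == 200      # first hole is now too small
assert first_fit(holes, 400) is None    # 345 units free in total, but scattered
```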

SLIDE 14

  • Programs are provided with a virtual address space (say 1 MB).
  • Role of the OS to fetch data from either physical memory or disk.
    – Done by a mechanism called (demand) paging.
  • Divide the virtual address space into units called “virtual pages”, each of which is of a fixed size (usually 4K or 8K).
    – For example, a 1M virtual address space has 256 4K pages.
  • Divide the physical address space into “physical pages” or “frames”.
    – For example, we could have only 32 4K-sized pages.

SLIDE 15

Page Tables

[Figure: a virtual address is split into (Virtual Page #, Offset in Page). Each page table entry (vp1 → pp1, …, vpn → ppn) holds a VP #, a PP #, and a Present bit. The physical address is (Physical Page #, Offset in Page), reusing the same offset.]
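
The translation the figure shows is pure arithmetic. A sketch assuming a 4K page size (12 offset bits) and an illustrative page table:

```python
PAGE_SIZE = 4096                  # 4K pages => 12 offset bits
page_table = {0: 7, 1: 3, 2: 9}   # VP# -> PP#, an illustrative table

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)   # split into (VP#, offset)
    if vpn not in page_table:
        raise LookupError("page fault")      # Present bit clear
    return page_table[vpn] * PAGE_SIZE + offset   # glue PP# onto same offset

assert translate(0x1ABC) == 3 * 4096 + 0xABC   # VP 1 -> PP 3, offset kept
```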

SLIDE 16

Example: A 2-level Page Table

[Figure: a 32-bit virtual address split 10/10/12 into a page directory index, a page table index, and a page offset. The page directory points to second-level page tables covering the code, data, and stack regions; the selected entry yields a 32-bit physical address.]
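
The 10/10/12 split is again just bit arithmetic. A sketch with an assumed example address and illustrative tables:

```python
def split(va):
    """Split a 32-bit virtual address into (dir index, table index, offset)."""
    return (va >> 22) & 0x3FF, (va >> 12) & 0x3FF, va & 0xFFF

# 0x00403ABC: page directory entry 1, page table entry 3, offset 0xABC.
assert split(0x00403ABC) == (1, 3, 0xABC)

# A two-level walk with illustrative tables: the directory maps a dir index
# to a second-level page table, which maps a table index to a frame number.
directory = {1: {3: 9}}           # VP (dir 1, table 3) -> frame 9

def translate2(va):
    d, t, off = split(va)
    frame = directory[d][t]       # a miss at either level would be a page fault
    return (frame << 12) | off

assert translate2(0x00403ABC) == 0x9ABC
```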

SLIDE 17

Virtual Memory

  • Page Fault Handling
  • Performance Estimations
  • Memory Initialization
  • Page Replacement
    – Algorithms
    – Belady’s Anomaly
  • Uses of Virtual Memory
    – COW, Shared Pages, Memory-mapped Files
  • Thrashing


SLIDE 18

Page Fault

  • If there is a reference to a page, the first reference to that page will trap to the operating system:
    – page fault
  • Operating system looks at another table to decide:
    – Invalid reference -- abort
    – Just not in memory
  • Get empty frame
  • Swap page into frame
  • Reset tables
  • Set validation bit = v
  • Restart the instruction that caused the page fault
SLIDE 19

Putting it all together!

[Figure: the virtual address space (0x0…0 through 0xf…f) before execution, with the PC, heap pointer, and stack pointer marked. All page table entries point to null (i.e. fault the first time); 4K page size; no page table entries for the unused region of the address space.]

    int A[2K];
    int B[2k];
    main() {
        int i, j, p;
        p = malloc(16K);
    }

SLIDE 20

Belady’s Anomaly

  • Normally you expect the number of page faults to decrease as you increase physical memory size.
  • However, this may not be the case with certain replacement algorithms.
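
FIFO replacement shows this on the classic reference string: with the string below it takes 9 faults with 3 frames but 10 faults with 4 frames. A sketch counting faults:

```python
def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement over a reference string."""
    frames, queue, faults = set(), [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.pop(0))   # evict the oldest resident page
            frames.add(p)
            queue.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
assert fifo_faults(refs, 3) == 9
assert fifo_faults(refs, 4) == 10   # more frames, MORE faults: Belady's anomaly
```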

SLIDE 21

File Systems

  • File System Concepts
  • Files, Directories, File Systems
  • Operations and Usage
  • Remote File Systems
  • File System Implementation
  • What’s on the disk? How’s it formatted?
  • What’s in memory? How’s it represented?
  • File System Usage
  • Get a file
  • Caching
  • Free Space
  • Recovery


SLIDE 22

File System Mounting

SLIDE 23

In-memory structures for the open() syscall

    P1:  fd=open(“a”,…); … read(fd,…); … close(fd);
    P2:  fd=open(“a”,…); … read(fd,…); … close(fd);
    P3:  fd=open(“b”,…); … write(fd,…); … close(fd);

[Figure: each process has a per-process open file descriptor table; its entries point into the system-wide open file descriptor table, whose entries point to the in-memory i-nodes of “a” and “b” (all in memory).]

SLIDE 24

i-node

[Figure: an i-node holding the file’s attributes (Filename, Time, Perm., …) and an array of disk block pointers; most point directly to data blocks, and one points to an indirect disk block containing further pointers to data blocks.]
SLIDE 25

  • On a write, should we do write-back or write-through?
    – With write-back, you may lose data that is written if the machine goes down before the write-back.
    – With write-through, you may be losing performance:
      • Loss in opportunity to perform several writes at a time
      • Perhaps the write may not even be needed!
  • DOS uses write-through.
  • In UNIX,
    – writes are buffered, and they are propagated in the background after a delay, i.e. every 30 secs there is a sync() call which propagates dirty blocks to disk.
    – This is usually done in the background.
    – Metadata (directories/i-nodes) writes are propagated immediately.

SLIDE 26

Summary

  • Need a clear understanding of concepts in
    – Synchronization
    – Memory Management
    – File Systems
  • Be able to apply these concepts
  • Not so much algorithm memorization
    – I will provide algorithms in most cases
    – Some equation memorization


SLIDE 27
  • Next time: Midterm 2

    – Th, April 17, 6:30-7:45
    – Location: 22 Deike

  • Good Luck!