Operating Systems: Wrap-up, Fall 2008 - PowerPoint PPT Presentation

Operating Systems: Wrap-up. Fall 2008. Tiina Niklander. EXAM: Wed 15.10. 16.00, A 111. No additional material allowed.


SLIDE 1

Operating Systems: Wrap-up

Fall 2008, Tiina Niklander

SLIDE 2

EXAM: Wed 15.10. 16.00, A 111

  • No additional material allowed.
  • You may bring a calculator, but it is not necessary (if calculations are asked, giving the formula is fine).
  • Remember to bring your student card (or ID) and pencils.
  • Overlapping CS exams: a special start time at 14.00 has been organised for those students. Contact Tiina if you need this.

SLIDE 3

Course feedback

  • Remember to give feedback on all the courses you participate in!
  • It helps improve the quality of the content, teaching, etc.
  • Feedback forms in Finnish: https://ilmo.cs.helsinki.fi/kurssit/servlet/Valinta
  • Forms in English: https://ilmo.cs.helsinki.fi/kurssit/servlet/Valinta?kieli=en

SLIDE 4

Course structure

  • 1: Overview
  • 2: Processes and Threads
  • 3: Virtual Memory and Paging
  • 4: Page Replacement
  • 5: Segmentation
  • 6: File Systems, part 1
  • 7: File Systems, part 2
  • 8: I/O Management
  • 9: Multiple Processor Systems
  • 10: Examples: Linux, Windows, Symbian
  • 11: Design Issues
  • 12: Recapitulation, hints for the exam

Tanenbaum: Sections 1, 2, 3, 4, 5, 7.5, 8, 10, 11

SLIDE 5

Learning goals

Themes:

  • Operating system's general structure and main functionalities
  • Processes and Threads
  • Memory management and Virtual memory
  • File system and I/O

SLIDE 6

Goal: Operating system's general structure and main functionalities

  • Approaches (just passable, when all of these are mastered):
  • Can describe the main services offered by the OS (operating system) and their functionalities.
  • Can outline the common OS structure and interfaces.
  • Can position the services of an OS in a modern computing environment and explain their benefits.
  • Reaches (masters the content):
  • Can explain in detail the functionalities and services of an OS on one computer and as part of a distributed system.
  • Can outline and explain the structure and interfaces of one specific OS (such as Linux or Windows).

SLIDE 7

Goal: Processes and Threads

  • Approaches:
  • Can describe the data structures and management functions used by the OS in controlling processes and threads.
  • Can describe the common scheduling mechanisms.
  • Can distinguish user mode and kernel mode and explain their main features.
  • Can describe the process protection mechanisms.
  • Can explain different ways of executing a thread.
  • Reaches:
  • Can explain on an algorithmic level the features used by a given OS.
  • Can compare different scheduling mechanisms.
  • Can select the most feasible thread execution mechanism for a given purpose and justify the selection.

SLIDE 8

Goal: Memory management and Virtual memory

  • Approaches:
  • Can explain the key concepts of virtual memory (paging, page table, address translation, page fault) and describe its basic features.
  • Can describe a multi-level page table and how it is used.
  • Can simulate, on an algorithmic level, address translation in a system that uses virtual memory.
  • Reaches:
  • Can estimate the effect of the page size on page table size and process functionality. Can justify the selection of a certain page size.
  • Can explain in detail the advantages and disadvantages of combining (multi-level) paging and segmentation.
  • Can simulate, on an algorithmic level, all key mechanisms of virtual memory, especially page replacement and allocation.

SLIDE 9

Goal: File system and I/O

  • Approaches:
  • Can outline the basic structure of a file system and explain how it works.
  • Can describe how data is moved between the devices, the OS and the application.
  • Reaches:
  • Can explain the principle of a distributed file system (such as NFS).
  • Can explain and compare the file systems of different OSs, at least those of Linux and Windows.

SLIDE 10

Example exams: Fall 2006

  • The course lasted two periods, with two course exams.
  • Some extra material was included.
  • Only the questions relevant to this shorter course are shown here:
  • Processes and threads
  • Memory management
  • Virtual memory
  • File systems
  • Scheduling

SLIDE 11

Example exam: processes and threads

  • Observe Solaris processes and threads (see Fig 4.15 below). Consider two applications, M and K. Each of them has 8 threads within the process. Application M is of type 3 (with 3 L-threads) and application K of type 4 (with 8 L-threads). Assume that the system has 4 processors. Assume also that only one of the applications (either M or K) is executed in the system at any given time; that is, the applications are not executed concurrently.
  • [3p] Explain briefly (with the help of the figure) what the following terms mean: ULT (User-Level Thread), LWP (Lightweight Process), KLT (Kernel-Level Thread).
  • [3p] How many threads can in each case (application M and K) be executing concurrently at the machine-language level? Justify. Give answers separately for applications M and K.
  • [3p] If an executing thread blocks because of I/O, will the application (M or K) block or not? Justify. Give answers separately for applications M and K.

SLIDE 12

Example exam: Memory management

  • The memory configuration (of 512 megabytes of memory) at a given point in time is shown. The shaded areas are allocated; the white areas are free. Additionally, the free areas are marked with letters after their sizes: 32M, 48M (A), 48M, 128M (B), 16M, 16M (C), 64M, 160M (D). The next five memory requests are for 50M, 24M, 10M, 60M, 30M. Indicate the location of each of the requests, when the memory allocation is based on
  • a dynamic partitioning scheme and the first-fit placement algorithm,
  • a dynamic partitioning scheme and the best-fit placement algorithm.
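A quick way to check the two placements is to simulate the free list. This is a sketch, not the official answer: it keeps only the free holes, in address order (the allocated areas in between do not affect placement), and assumes the remainder of a split hole stays at its position in the list.

```python
# First-fit vs. best-fit over the free holes of this question, a sketch.
# Only the free areas matter for placement, kept here in address order;
# the remainder of a split hole is assumed to stay at its position.

def place(requests, holes, policy):
    """holes: list of [label, size]; returns the chosen hole per request."""
    chosen = []
    for req in requests:
        fits = [h for h in holes if h[1] >= req]
        if policy == "first":
            hole = fits[0]                        # first large-enough hole
        else:
            hole = min(fits, key=lambda h: h[1])  # smallest large-enough hole
        chosen.append(hole[0])
        hole[1] -= req                            # shrink the hole in place
    return chosen

requests = [50, 24, 10, 60, 30]
print(place(requests, [["A", 48], ["B", 128], ["C", 16], ["D", 160]], "first"))
print(place(requests, [["A", 48], ["B", 128], ["C", 16], ["D", 160]], "best"))
```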

SLIDE 13

Example exam: Virtual memory

  • Explain in detail the components of the Memory Management Unit (MMU) and how it does address translation when the system is based on paged virtual memory.
  • Explain how the clock algorithm operates. What will be the target page frame for a new page, if the clock algorithm is used in the situation given in the table below (the process has only 4 page frames)? The times are clock ticks from the beginning of the process.
  • The execution of the process continues and generates the following page reference string: 4, 0, 0, 2, 1, 5, 4, 5, 0, 3, 2. How many page faults would occur if the working-set policy were used with a window size of 4 instead of the fixed allocation? Show clearly which pages form the current working set and when each page fault occurs.

Columns: Sivu# (page number), Sivutila# (page frame), Latausaika (time loaded), Viittausaika (time referenced), R, M.
Row values: 2 60 121 1 1 1 130 120 2 26 122 1 3 3 20 123 1 1
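The working-set part of the question can be checked with a small simulator. Two assumptions are made here: the resident set starts empty (the exam's actual starting frames come from the table above), and the window is the k most recent references including the current one.

```python
# Working-set page-fault count for the reference string in this question.
# Assumptions: the resident set starts empty (the exam's starting frames
# come from the table), and the window is the k most recent references,
# including the current one.

def working_set_faults(refs, k):
    faults, resident = [], set()
    for i, page in enumerate(refs):
        if page not in resident:
            faults.append(i)               # page fault at reference i
        # resident set = pages among the k most recent references
        resident = set(refs[max(0, i - k + 1): i + 1])
    return faults

refs = [4, 0, 0, 2, 1, 5, 4, 5, 0, 3, 2]
faults = working_set_faults(refs, 4)
print(len(faults), faults)
```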

SLIDE 14

Example exam: Scheduling

  • Five jobs (processes) arrive in the system according to the following table. Determine the turnaround time for each job and the average turnaround time for all jobs, under each of the following disciplines.
  • Priority. Pure priority-based (a low number means high priority); one job at a time runs until it finishes.
  • SPN (Shortest Process Next). Each job is completed before the next can start.
  • FCFS (First Come, First Served). Each job is completed before the next can start.
  • Round Robin. The time quantum is 2.
  • Remember to justify your answer! (Justification is more important than the correct numerical result.)

Process | Arrival time | Priority | Processing time
A | 1 | | 9
B | 1 | 3 | 15
C | 2 | 5 | 6
D | 3 | 4 | 3
E | 4 | 2 | 12

SLIDE 15

Example exam: File systems

  • Free disk blocks. Explain two approaches to keeping track of the free disk blocks available for allocation on a disk. Give the pros and cons of each alternative.
  • ext2fs. What is an inode and what information does it contain?
  • ext2fs. When the block size is 1 KB, how does the OS store a 30 MB file test.txt? How does it locate the file's allocated disk blocks?
  • NTFS. What is the Master File Table (MFT) in NTFS, where is it located, and how is it used?

SLIDE 16

Operating System Overview

A selection of slides from previous lectures

SLIDE 17

OS structure

(Figure: layered OS structure: user, applications, system programs, shell; system calls; process management, I/O management, memory management, file system, interrupt handling, protection and resource management; device drivers and I/O modules (device controllers).)

SLIDE 18

Interrupt in detail

(Sta Fig 1.11: start interrupt processing, return from interrupt.)

SLIDE 19

Memory hierarchy

Pentium 4 cache: 8 KB data, 12 KB code/text, external 256 KB

nano = 10^-9, micro = 10^-6, milli = 10^-3

SLIDE 20

Locality of reference

Spatial and temporal locality:

  • For example, in a loop a small set of instructions is executed several times.
  • A certain part of the code uses only a small set of variables (data).
  • When a program makes a memory reference (data or instructions), it is likely to refer again to the same location or to a location nearby.
  • This is the principle behind the use of caches.
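A toy model makes the connection to caches concrete: with 16-byte cache lines and 4-byte words, a sequential scan misses only on the first word of each line. The sizes are illustrative, and the model ignores capacity (every touched line stays cached).

```python
# Toy illustration of spatial locality: with 16-byte cache lines and
# 4-byte words, a sequential scan misses only on the first word of each
# line. Sizes are illustrative, and capacity is ignored (every touched
# line stays cached).

LINE = 16                                # bytes per cache line
WORD = 4                                 # bytes per access

def miss_rate(addresses):
    seen, misses = set(), 0
    for addr in addresses:
        line = addr // LINE
        if line not in seen:             # first touch of this line: a miss
            misses += 1
            seen.add(line)
    return misses / len(addresses)

sequential = [i * WORD for i in range(1024)]   # a loop scanning an array
print(miss_rate(sequential))
```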

SLIDE 21

System calls

  • Interface between user programs and the OS, dealing with abstractions
  • A special kind of procedure call: a switch between user and kernel mode
  • Traps into the kernel and invokes the OS

(Figure: system call, interrupt.)

SLIDE 22

read(fd, buffer, nbytes)

(Tan08 Fig 1-17)
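The same call can be tried from Python, whose os module wraps the corresponding POSIX-style system calls (os.read returns the bytes rather than filling a caller-supplied buffer):

```python
import os, tempfile

# The slide's read(fd, buffer, nbytes) call, exercised through Python's
# os module, which wraps the same POSIX-style system calls.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello, kernel")
os.close(fd)

fd = os.open(path, os.O_RDONLY)   # system call: open
data = os.read(fd, 5)             # system call: read at most 5 bytes
os.close(fd)                      # system call: close
os.unlink(path)
print(data)
```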

SLIDE 23

Operating system structures

  • Monolithic systems
  • Microkernel
  • Virtual machines

SLIDE 24

Monolithic systems

  • The whole OS is one program running in kernel mode
  • A collection of procedures, linked together
  • Basic structure of a monolithic system:
  • The main program invokes the requested service procedure
  • Service procedures carry out the system calls
  • Utility procedures help the service procedures
  • There may be loadable extensions

SLIDE 25

Microkernel

  • Split the OS into small, well-defined modules
  • Only the microkernel module runs in kernel mode
  • The other modules run in user mode
  • Goal: increase availability
  • A buggy driver cannot crash the whole computer
  • MINIX has a reincarnation server to automatically replace a failed module
  • Disadvantage: a lot of mode switches within the OS itself

SLIDE 26

Virtualization today

  • The virtual machine monitor is called a hypervisor
  • (a) Type 1 hypervisor: runs on top of the hardware; multiplexes all OSs in parallel
  • (b) Type 2 hypervisor: runs on top of a host OS; multiplexes only the guest OSs
SLIDE 27

C and Metric Units

Must be able to:

  • Read C code examples
  • Do unit conversions

SLIDE 28

Processes and Threads

SLIDE 29

Process

  • A process is an activity that has a program, input, output and a state.
  • Some terms:
  • Text/code = executable instructions
  • Data = variables
  • Stack = work area; parameter passing to subroutines/system calls
  • Process Control Block (PCB), an entry in the Process Table = management information

SLIDE 30

Process States (1)

  • Possible process states: running, blocked, ready
  • The transitions between the states are shown

SLIDE 31

Process Control Block

Fields of a process table entry

SLIDE 32

Thread Model: Process vs. thread

  • Some items are shared by all threads in a process
  • Other items are private to each thread
  • Each thread has its own stack!

SLIDE 33

User-level vs. kernel-level threads (Tan08 Fig 2-16)

User-level threads:
  • The kernel (or OS) is not aware of the threads; it schedules processes
  • The user process must dispatch the threads itself

Kernel-level threads:
  • All thread control and dispatching is done by the kernel
  • No control on the user level

SLIDE 34

Solaris

SLIDE 35

Scheduling

SLIDE 36

Introduction to Scheduling (2)

Scheduling Algorithm Goals

SLIDE 37

CPU Scheduling: Algorithm examples

  • First-Come-First-Served (FCFS)
  • Round Robin (RR)
  • Shortest Process Next (SPN)
  • Shortest Remaining Time (SRT)
  • Multilevel Feedback (feedback)
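Round Robin with quantum 2 can also be simulated for the exam table on the scheduling slide. One convention is assumed here: a job arriving exactly at a quantum boundary enters the ready queue before the preempted job is put back (the numbers change if the tie is broken the other way).

```python
from collections import deque

# Round Robin (quantum 2) over the exam table from the scheduling slide.
# Assumed convention: a job arriving exactly at a quantum boundary enters
# the ready queue before the preempted job is put back.
def rr_turnaround(jobs, quantum):
    """jobs: list of (name, arrival, service); returns turnaround per job."""
    arrivals = {n: a for n, a, s in jobs}
    remaining = {n: s for n, a, s in jobs}
    pending = sorted(jobs, key=lambda j: j[1])
    queue, done = deque(), {}
    t = pending[0][1]

    def admit(now):                      # move arrived jobs to the ready queue
        while pending and pending[0][1] <= now:
            queue.append(pending.pop(0)[0])

    admit(t)
    while queue or pending:
        if not queue:                    # idle until the next arrival
            t = pending[0][1]
            admit(t)
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        admit(t)                         # arrivals first, then the preempted job
        if remaining[name]:
            queue.append(name)
        else:
            done[name] = t - arrivals[name]
    return done

jobs = [("A", 1, 9), ("B", 1, 15), ("C", 2, 6), ("D", 3, 3), ("E", 4, 12)]
print(rr_turnaround(jobs, 2))
```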

SLIDE 38

Yhteenveto (Summary)

(Tbl 9.3 [Stal05])

SLIDE 39

Memory management

SLIDE 40

Relocation and Protection

  • Cannot be sure where a program will be loaded in memory
  • The address locations of variables and code routines cannot be absolute
  • A program must be kept out of other processes' partitions
  • Use base and limit values
  • Address locations are added to the base value to map to a physical address
  • An address location larger than the limit value is an error
  • Address translation is done by the MMU
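The base-and-limit scheme amounts to one bounds check and one addition per access; a minimal sketch with made-up register values:

```python
# Base-and-limit translation: one bounds check, one addition per access.
# The register values here are made up for illustration.

def translate(logical, base, limit):
    if logical >= limit:                      # protection violation
        raise MemoryError("address beyond limit")
    return base + logical                     # relocation

print(hex(translate(100, base=0x4000, limit=0x1000)))
```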

SLIDE 41

Memory Management: bookkeeping allocations and free areas

  • Part of memory with 5 processes and 3 holes; tick marks show the allocation units; shaded regions are free
  • (b) The corresponding bitmap
  • (c) The same information as a list

SLIDE 42

Allocation

Where to place the new process?

  • Goal: avoid external fragmentation and compaction
  • Some alternatives: best-fit, first-fit, next-fit, worst-fit, quick-fit

(Sta Fig 7.5)

SLIDE 43

Virtual memory (using paging)

SLIDE 44

Operating System Involvement with Paging

  • Process creation: determine the program size; create the page table
  • Process execution: the MMU is reset for the new process; the TLB is flushed
  • Page fault time: determine the virtual address causing the fault; swap the target page out and the needed page in
  • Process termination time: release the page table and the pages

SLIDE 45

Paging

  • Each process has its own page table
  • It contains the locations (frame numbers) of the allocated frames
  • The page table location is stored in the PCB and copied to the PTR for execution
  • The OS maintains a table (or list) of page frames, to know which are unallocated

SLIDE 46

Page table

  • Each process has its own page table
  • Each entry has a present bit, since not all pages need to be in memory all the time -> page faults
  • Remember the locality principle
  • The logical address space can be much larger than the physical one

(Typical Page Table Entry, Fig. 3.11)

SLIDE 47

Page Tables

Internal operation of the MMU with 16 4-KB pages

SLIDE 48

TLB (Translation Lookaside Buffer)

The goal is to speed up paging. The TLB is a cache in the MMU for page table entries.
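A one-level translation with a TLB in front of the page table can be sketched as follows; the page-table and TLB contents are made-up numbers, and 4 KB pages give a 12-bit offset:

```python
# One-level paged translation with a small TLB in front of the page
# table. 4 KB pages give a 12-bit offset; the page-table and TLB
# contents are made-up numbers.

PAGE = 4096
page_table = {0: 7, 1: 3, 2: 9}      # page number -> frame number
tlb = {}                              # cached page-table entries

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE)
    if page in tlb:                   # TLB hit: no page-table walk needed
        frame = tlb[page]
    else:                             # TLB miss: walk the page table
        frame = page_table[page]      # a KeyError here would be a page fault
        tlb[page] = frame
    return frame * PAGE + offset

print(hex(translate(0x1234)))         # page 1, so frame 3
```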

SLIDE 49

Operation of Paging and the TLB

(Sta Fig 8.8)

SLIDE 50

Multilevel and Inverted Page Tables

SLIDE 51

Two-level hierarchical page table

  • The topmost level fits in one page and is always in memory
  • 1K entries (= 1024 = 2^10); 1K * 1K = 1M entries
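The slide's 10+10+12-bit split can be sketched directly; the table contents below are illustrative, with only one second-level table resident:

```python
# Two-level translation with the slide's split: 10-bit top index,
# 10-bit second-level index, 12-bit offset. Table contents are
# illustrative; only one second-level table is resident here.

def split(vaddr):
    top = vaddr >> 22                 # bits 31..22: top-level index
    second = (vaddr >> 12) & 0x3FF    # bits 21..12: second-level index
    offset = vaddr & 0xFFF            # bits 11..0: offset within the page
    return top, second, offset

second_level = {5: 42}                # page 5 of this region -> frame 42
top_level = {1: second_level}         # top entry 1 -> that second-level table

def translate(vaddr):
    top, second, offset = split(vaddr)
    frame = top_level[top][second]    # a KeyError would be a page fault
    return (frame << 12) | offset

print(hex(translate((1 << 22) | (5 << 12) | 0x0AB)))
```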

SLIDE 52

Address translation with two levels

(Fig 4-12 [Tane01]: virtual address; PCB.)
slide-53
SLIDE 53

Inverted page table Inverted page table

53

j

  • Frame number

Frame number

  • Index of the

Index of the table table

  • Not stored in

Not stored in the entry the entry

Sta Fig 8.6

SLIDE 54

Page Replacement

SLIDE 55

Page Fault Handling

1. Hardware traps to the kernel
2. General registers are saved
3. The OS determines which virtual page is needed
4. The OS checks the validity of the address and seeks a page frame
5. If the selected frame is dirty, it is written to disk
6. The OS schedules a disk operation to bring the new page in from disk
7. The page tables are updated
8. The faulting instruction is backed up to the state it had when it began
9. The faulting process is scheduled
10. Registers are restored
11. The program continues

SLIDE 56

The Clock Page Replacement Algorithm

  • Go through all pages in a circular fashion
  • Try to locate an unused page using the NRU classification; a used page gets a second chance
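The second-chance scan described above can be sketched in a few lines; the frame contents and R bits are illustrative:

```python
# The second-chance ("clock") scan: frames are inspected in a circle,
# a frame with R=1 has the bit cleared and is passed over, and the
# first frame with R=0 is the victim. Contents are illustrative.

def clock_victim(frames, r_bits, hand):
    """Return the index of the frame to evict; clears R bits on the way."""
    while True:
        if r_bits[hand]:                  # recently used: second chance
            r_bits[hand] = 0
            hand = (hand + 1) % len(frames)
        else:                             # not referenced since last sweep
            return hand

frames = [2, 1, 0, 3]                     # page held by each frame
r_bits = [1, 0, 1, 1]
print(clock_victim(frames, r_bits, hand=0))
```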

SLIDE 57

Working Set

  • W(k, t): the pages used by the k most recent memory references (the window size)
  • Common approximation: the pages used during the most recent t msec of current virtual time, the time that the process has actually been executing
  • Pages not in the working set can be evicted
  • For page replacement: the hardware sets the R and M bits; each page entry has a 'time of last use'

SLIDE 58

The WSClock Page Replacement Algorithm

  • Like the clock algorithm, but scans only the resident set
  • Evicts an unmodified page that is not in the working set, and
  • Schedules a disk write for a modified page that is not in the WS
  • If no candidate page is found during one round, wait for the writes or evict any clean (= unmodified) WS page

SLIDE 59

Review of Page Replacement Algorithms

SLIDE 60

Sharing of pages: example, an editor

SLIDE 61

Sharing a memory-mapped file

  • A mapped file is an alternative to I/O: the file is accessed as a big character array in memory
  • Can be used as shared memory between processes

(Tan08 Fig 10-3)
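Python's mmap module exposes exactly this "file as a byte array" view; a small demonstration (the temporary file and its contents are just for illustration):

```python
import mmap, os, tempfile

# The "file as a big character array" view via Python's mmap module;
# the temporary file and its contents are just for illustration.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 16)                   # the file needs a size to be mapped

with mmap.mmap(fd, 16) as m:              # shared mapping of the whole file
    m[0:5] = b"hello"                     # write through the byte array
    first = m[0:5]

os.close(fd)
with open(path, "rb") as f:               # the write is visible in the file
    data = f.read(5)
os.unlink(path)
print(first, data)
```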

SLIDE 62

Backing Store / Swap area

(a) Paging to a static swap area; (b) backing up pages dynamically

SLIDE 63

Segmentation with Paging: MULTICS

  • Each segment table element points to a page table
  • The segment table still contains the segment length

(Figure: PCB, segment table, segment table entry)

SLIDE 64

File systems

SLIDE 65

File Attributes (or metadata)

SLIDE 66

Directory

= a file that contains information about other files

  • Only the OS is allowed to access these files directly
  • All changes go through system calls only
  • Root directory, home directories
  • Processes can create subdirectories
  • The root directory has a fixed location on disk

SLIDE 67

File System Layout

  • The Master Boot Record (MBR) is in a fixed position
  • Disk partition information is in the partition table (fixed location)
  • Each partition usually has a boot block
  • Everything else on a partition depends on the file system

slide-68
SLIDE 68

Allocating blocks for files: contiguous

  • File location info: start block, number of blocks
  • Good read performance: only one seek for the whole file
  • Difficult (or impossible) to grow the file
  • Fragmentation: holes from deleted files (see (b))
  • Used on CD-ROMs
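With contiguous allocation, translating a byte offset into a disk block is one arithmetic step. A minimal sketch (block size, block numbers, and the helper name are illustrative):

```python
BLOCK_SIZE = 4096

def block_for_offset(start_block, length_blocks, offset):
    """Map a byte offset within a contiguously allocated file
    to its disk block number: start block plus offset / block size."""
    block = start_block + offset // BLOCK_SIZE
    if block >= start_block + length_blocks:
        raise ValueError("offset past end of file")
    return block

# A file stored in blocks 100..103 (4 blocks): byte 5000 is in block 101.
print(block_for_offset(100, 4, 5000))   # 101
```

The single-seek read performance falls out of this directly: every block of the file is adjacent to the previous one.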

slide-69
SLIDE 69

Allocating blocks for files: linked list of disk blocks

  • File location info: start block
  • No fragmentation: each block has a pointer to the next block
  • Read performance: sequential fine, random access difficult

slide-70
SLIDE 70

Implementing Files: File Allocation Table (FAT)

  • Link information in the FAT
  • Read performance: blocks of a file in multiple locations, more seeks
  • Random access possible
  • FAT on disk when powered down, in memory when running
  • Does not scale to large disks
  • To improve read performance: consolidation, defragmentation
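The linked structure can be sketched with the FAT as an array: entry i holds the next block of the file that occupies block i. The block numbers below are made up, and -1 marks end-of-file:

```python
EOF = -1

def file_blocks(fat, start_block):
    """Follow the FAT chain from a file's start block and
    return the blocks the file occupies, in order."""
    blocks = []
    b = start_block
    while b != EOF:
        blocks.append(b)
        b = fat[b]          # next block of this file, or EOF
    return blocks

# Blocks 0..7; a file starts at block 2 and occupies 2 -> 5 -> 3.
fat = [EOF, EOF, 5, EOF, EOF, 3, EOF, EOF]
print(file_blocks(fat, 2))   # [2, 5, 3]
```

Random access is possible because chain-following happens in the in-memory FAT, not on disk, but the table still grows with the disk, which is the scaling problem noted above.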

slide-71
SLIDE 71

Directory structure

  • Directory entry contains:
  • File name
  • File location information: block number, i-node number
  • File attributes (unless stored in the i-node)
  • Specific structure depends on the file system

slide-72
SLIDE 72

Implementing Files: i-node

  • Special data structure for each file separately
  • In memory only when this file is accessed
  • Independent of disk size
  • I-node has a fixed number of block locations
  • Reserve the last for the address of a block which contains more block addresses
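That last point can be sketched as a toy i-node with two direct slots plus one indirect slot; all sizes, block numbers, and names here are invented for illustration:

```python
DIRECT_SLOTS = 2            # toy i-node: 2 direct slots + 1 indirect slot

def lookup_block(inode, indirect_blocks, file_block_no):
    """Translate a file-relative block number to a disk block,
    using the direct slots first and the indirect block after."""
    if file_block_no < DIRECT_SLOTS:
        return inode["direct"][file_block_no]
    # The last slot names a block that just holds more block addresses.
    indirect = indirect_blocks[inode["indirect"]]
    return indirect[file_block_no - DIRECT_SLOTS]

inode = {"direct": [11, 12], "indirect": 99}
indirect_blocks = {99: [13, 14, 15]}   # contents of disk block 99
print(lookup_block(inode, indirect_blocks, 3))   # 14
```

The structure stays fixed-size per file regardless of disk size, which is why it scales where the FAT does not.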

slide-73
SLIDE 73

UNIX i-node

[SGG07] Fig 11.9

slide-74
SLIDE 74

UNIX

Tan08 4-35

slide-75
SLIDE 75

Sharing files

  • Right to access:
  • Usage rights collected into file attributes (like UNIX: u, g, o)
  • Hard link
  • Soft link (symbolic link)

Directory structure - Directed Acyclic Graph

slide-76
SLIDE 76

Logs and journaling

slide-77
SLIDE 77

Log-Structured File Systems

  • With CPUs faster and memory larger, disk caches can also be larger
  • An increasing number of read requests can be served from the cache
  • Thus, most disk accesses will be writes
  • LFS strategy: structure the entire disk as a log
  • Have all writes initially buffered in memory
  • Periodically write these to the end of the disk log
  • When a file is opened, locate its i-node, then find the blocks
  • To free no-longer-used blocks, a cleaner thread compacts the content
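A toy sketch of the append-only discipline those bullets describe (the class and all names are invented; a real LFS buffers whole segments and keeps i-node maps):

```python
class TinyLog:
    """Toy log-structured store: all writes go to the end of one log."""
    def __init__(self):
        self.log = []        # the whole "disk" is this append-only log
        self.buffer = []     # writes gathered in memory first
        self.index = {}      # (file, block) -> position in the log

    def write(self, key, data):
        self.buffer.append((key, data))

    def flush(self):
        # Periodically append all buffered writes to the end of the log.
        for key, data in self.buffer:
            self.index[key] = len(self.log)   # newest version wins
            self.log.append(data)
        self.buffer = []

    def read(self, key):
        return self.log[self.index[key]]

log = TinyLog()
log.write(("file1", 0), b"old")
log.write(("file1", 0), b"new")   # an overwrite appends, never rewrites
log.flush()
print(log.read(("file1", 0)))     # b'new'
```

Stale versions (like `b"old"` above) stay in the log until a cleaner compacts them, which is exactly the job of the cleaner thread.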

slide-78
SLIDE 78

Journaling File Systems: NTFS, ext3fs, ReiserFS

  • Created to improve robustness and speed up crash recovery
  • Copies the log idea from the log-structured FS
  • First write intentions to the log, then do the operations
  • Log elements:
  • Idempotent – can be repeated several times
  • Contain all structural changes – during recovery, processing the log is enough

CS Dept uses ext3fs in all file servers
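The recovery idea can be sketched as replaying idempotent records: each record states the final value of a block rather than a delta, so replaying it twice after a crash is harmless. The record format below is made up for illustration:

```python
def apply(disk, record):
    """Apply one journal record. Records are idempotent: they state
    the final value, so re-applying after a crash is safe."""
    block, value = record
    disk[block] = value

def recover(disk, journal):
    # During recovery, processing the log is enough: replay everything.
    for record in journal:
        apply(disk, record)

disk = {}
journal = [(7, "inode: size=100"), (9, "bitmap: block 42 used")]
recover(disk, journal)
recover(disk, journal)        # a second replay changes nothing
print(disk[7])                # inode: size=100
```

Writing the intention first means a crash can only lose the very last operations, never leave the on-disk structures half-updated.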

slide-79
SLIDE 79

LINUX VFS

  • Identical interface towards the applications
  • Supports several different actual file systems
  • All requests go via the VFS

slide-80
SLIDE 80

Free blocks (Tan08 4-22)

Linked list of blocks / bit map
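A sketch of the bitmap variant: one bit per disk block, scan for a clear bit to allocate (the bitmap contents are invented):

```python
def alloc_block(bitmap):
    """Find a free block (bit 0), mark it used (bit 1), return its number."""
    for i, used in enumerate(bitmap):
        if not used:
            bitmap[i] = 1
            return i
    raise RuntimeError("disk full")

def free_block(bitmap, i):
    bitmap[i] = 0            # freeing is just clearing the bit

bitmap = [1, 1, 0, 1, 0]     # blocks 0, 1, 3 in use; 2 and 4 free
print(alloc_block(bitmap))   # 2
free_block(bitmap, 1)
print(alloc_block(bitmap))   # 1  (lowest free block again)
```

The bitmap costs one bit per block regardless of how full the disk is, whereas the linked-list scheme only consumes space in blocks that are already free.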

slide-81
SLIDE 81

Logical Backup Algorithm

Dump changed files and the full path to them

(a) First scan: mark changed files and all dirs
(b) Second scan: unmark dirs that have no changes in their subtree
(c) Dirs to dump
(d) Files to dump

slide-82
SLIDE 82

Block cache / Buffer cache

  • Access to disk is much slower than access to memory
  • Keep recently used blocks in memory for future need
  • Fast access using a hash table with collision chains
  • Use write-through caching to maintain the content on disk as well and to keep the disk consistent (having i-nodes on disk is essential)
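A sketch of the write-through behaviour (a Python dict stands in for the hash table with collision chains; names are illustrative):

```python
class BlockCache:
    """Toy write-through block cache in front of a slow 'disk'."""
    def __init__(self, disk):
        self.disk = disk     # dict: block number -> data
        self.cache = {}      # hashed lookup stands in for hash + chains

    def read(self, block_no):
        if block_no not in self.cache:       # miss: fetch from disk
            self.cache[block_no] = self.disk[block_no]
        return self.cache[block_no]

    def write(self, block_no, data):
        self.cache[block_no] = data
        self.disk[block_no] = data           # write-through: disk always current

disk = {1: b"aaaa", 2: b"bbbb"}
bc = BlockCache(disk)
bc.write(1, b"cccc")
print(bc.read(1), disk[1])   # cache and disk agree
```

The price of write-through is a disk access on every write; the payoff is that a crash never loses a block the cache claimed to have written.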

slide-83
SLIDE 83

NFS – Network File System

Chapter 10.6.4

slide-84
SLIDE 84

NFS layer structure

Fig. 10-36 [Tan08]

slide-85
SLIDE 85

I/O Software

slide-86
SLIDE 86

I/O Communication Techniques

Sta Fig 1.19

slide-87
SLIDE 87

Layers of the I/O system

slide-88
SLIDE 88

Disks

slide-89
SLIDE 89

RAID

  • 0: strips, no redundancy
  • 1: mirroring (with strips)
  • 2: bits, uses Hamming code
  • 3: bits, uses a parity bit
  • 4: strips, with strip-for-strip parity on a dedicated disk
  • 5: strips, distributing the parity strips uniformly
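The parity levels (RAID 3–5) rest on XOR: the parity strip is the byte-wise XOR of the data strips, so any single lost strip can be rebuilt from the survivors. A sketch with short byte strings:

```python
def xor_strips(strips):
    """Byte-wise XOR of equally sized strips."""
    out = bytes(len(strips[0]))
    for s in strips:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
parity = xor_strips(data)                 # stored on the parity disk

# Disk 1 fails: rebuild its strip from the survivors plus parity.
rebuilt = xor_strips([data[0], data[2], parity])
print(rebuilt == data[1])                 # True
```

RAID 4 puts every parity strip on one dedicated disk (which becomes a write bottleneck); RAID 5 spreads the parity strips across all disks to avoid that.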

slide-90
SLIDE 90

Disk Arm Scheduling

  • Time required to read or write a disk block is determined by 3 factors:

1. Seek time
2. Rotational delay
3. Actual transfer time

  • Seek time dominates
  • Error checking is done by controllers

slide-91
SLIDE 91

Disk Arm Scheduling Algorithms

  • Goal: reduce the mean seek time
  • Random? FIFO? PRI? LIFO? These do not consider the current arm position
  • Improvement: order the requests
  • SSF - shortest seek first
  • SCAN, LOOK – elevator algorithm
  • Assumption: the real disk geometry matches the virtual, assumed geometry – this may not be true with disks where the controller can do error correction with replacement sectors
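The two orderings can be sketched on cylinder numbers alone (the request queue below is invented for illustration):

```python
def ssf_order(pos, requests):
    """Shortest Seek First: always service the closest pending cylinder."""
    pending, order = list(requests), []
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

def look_order(pos, requests, direction=1):
    """LOOK (elevator): sweep one way, reverse at the last request."""
    up   = sorted(c for c in requests if c >= pos)
    down = sorted((c for c in requests if c < pos), reverse=True)
    return up + down if direction > 0 else down + up

reqs = [98, 183, 37, 122, 14, 124, 65, 67]   # head starts at cylinder 53
print(ssf_order(53, reqs))    # [65, 67, 37, 14, 98, 122, 124, 183]
print(look_order(53, reqs))   # [65, 67, 98, 122, 124, 183, 37, 14]
```

SSF gives a shorter mean seek but can starve requests at the disk edges; the elevator bounds the wait for every request, which is why it is the common compromise.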

slide-92
SLIDE 92

Real-time scheduling

Section 7.5. Multimedia Process Scheduling

slide-93
SLIDE 93

Real-Time Scheduling

  • Real-time scheduling algorithms:
  • RMS (rate-monotonic scheduling)
  • EDF (earliest deadline first)
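As a minimal sketch of the EDF rule — always run the ready task whose absolute deadline is nearest (the task set is invented):

```python
def edf_pick(ready_tasks):
    """Earliest Deadline First: choose the ready task whose
    absolute deadline is soonest."""
    return min(ready_tasks, key=lambda t: t["deadline"])

tasks = [
    {"name": "audio", "deadline": 30},
    {"name": "video", "deadline": 25},
    {"name": "log",   "deadline": 90},
]
print(edf_pick(tasks)["name"])   # video
```

RMS differs in using fixed priorities derived from task periods (shorter period, higher priority), while EDF re-decides dynamically at each scheduling point as above.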

slide-94
SLIDE 94

This wrap-up did not cover:

  • Multiprocessors
  • Linux
  • Windows

slide-95
SLIDE 95

Important themes 1/2

  • Structure of OS
  • Process and thread
  • PCB, TCB, execution, mode and context switch
  • Memory management
  • MMU’s structure
  • Different methods
  • Memory allocation, address translation
  • Virtual memory
  • Paging, address translation
  • Page table, page fault, PTR, TLB
  • Policies, methods and algorithms

slide-96
SLIDE 96

Important themes 2/2

  • Locality
  • Interrupts and interrupt handling, execution cycle
  • Multiprogramming
  • User mode, kernel mode
  • File systems, I/O, multiprocessor scheduling
  • Ext3fs, ntfs, ...

slide-97
SLIDE 97

-- END --

Operating Systems