CSE 513 Introduction to Operating Systems, Class 8: Input/Output and File Systems



SLIDE 1

CSE 513 Introduction to Operating Systems
Class 8: Input/Output and File Systems

Jonathan Walpole
Dept. of Comp. Sci. and Eng.
Oregon Health and Science University

SLIDE 2

I/O devices

Device (mechanical hardware)
Device controller (electrical hardware)
Device driver (software)

SLIDE 3

Devices and their controllers

Components of a simple personal computer

SLIDE 4

How to communicate with a device?

Hardware supports I/O ports or memory-mapped I/O for accessing device controller registers and buffers

SLIDE 5

Wide performance range for I/O

SLIDE 6

Performance challenges: I/O hardware

How to prevent slow devices from slowing down memory
How to identify I/O addresses without interfering with memory performance

SLIDE 7

Single vs. Dual Memory Bus Architecture

(a) A single-bus architecture (b) A dual-bus memory architecture

SLIDE 8

Hardware view of Pentium

Structure of a large Pentium system

SLIDE 9

Performance challenges: I/O software

How to prevent CPU throughput from being limited by I/O device speed (for slow devices)
How to prevent I/O throughput from being limited by CPU speed (for fast devices)
How to achieve good utilization of CPU and I/O devices
How to meet the real-time requirements of devices

SLIDE 10

Programmed I/O

Steps in printing a string

SLIDE 11

Programmed I/O

Polling/busy-waiting approach:

    copy_from_user(buffer, p, count);
    for (i = 0; i < count; i++) {
        while (*p_stat_reg != READY) ;   /* spin until the device is ready */
        *p_data_reg = p[i];              /* output one character */
    }
    return();

SLIDE 12

Interrupt-driven I/O

Asynchronous approach: give the device data, do something else; resume when the device interrupts.

Code run when the system call is made:

    copy_from_user(buffer, p, count);
    enable_interrupts();
    while (*p_stat_reg != READY) ;   /* wait for device to accept the first char */
    *p_data_reg = p[0];              /* output the first character */
    scheduler();                     /* run some other process */

Interrupt service procedure:

    if (count == 0) {
        unblock_user();              /* whole string has been printed */
    } else {
        *p_data_reg = p[i];          /* output the next character */
        count--;
        i++;
    }
    ack_interrupt();
    return_from_interrupt();

SLIDE 13

Interrupt-driven I/O

SLIDE 14

Hardware Support for Interrupts

How interrupts happen. Connections between devices and the interrupt controller actually use interrupt lines on the bus rather than dedicated wires

SLIDE 15

DMA

Offload all work to a DMA controller:
  avoids using the CPU to do the transfer
  reduces the number of interrupts
  the DMA controller is like a co-processor doing programmed I/O

Code run when the system call is made:

    copy_from_user(buffer, p, count);
    set_up_DMA_controller();
    scheduler();                     /* run some other process */

Interrupt service procedure:

    ack_interrupt();
    unblock_user();
    return_from_interrupt();

SLIDE 16

DMA

SLIDE 17

Software engineering-related challenges

How to remove the complexities of I/O handling from application programs
  standard I/O APIs (libraries and system calls)
  generic categories across different device types

How to support a wide range of device types on a wide range of operating systems
  standard interfaces for device drivers
  standard/published interfaces for access to kernel facilities

SLIDE 18

I/O Software Layers

Layers of the I/O software system

SLIDE 19

Interrupt Handlers

  • Interrupt handlers are best hidden
  • have the driver starting an I/O operation block until an interrupt notifies it of completion
  • The interrupt procedure does its task
  • then unblocks the driver that started it
  • Steps must be performed in software after the interrupt completes:
  • Save registers not already saved by the interrupt hardware
  • Set up context for the interrupt service procedure

SLIDE 20

Interrupt Handlers

  • Set up stack for interrupt service procedure
  • Ack interrupt controller, re-enable interrupts
  • Copy registers from where saved
  • Run service procedure
  • Set up MMU context for process to run next
  • Load new process' registers
  • Start running the new process

SLIDE 21

Device Drivers

  • Communication between drivers and device controllers goes over the bus

SLIDE 22

I/O Software: Device Drivers

Device drivers "connect" devices with the operating system.

Typically a nasty assembly-level job:
  • Must deal with hardware changes
  • Must deal with O.S. changes

Hide as many device-specific details as possible.

Device drivers are typically given kernel privileges for efficiency:
  Can bring down the O.S.! How to provide efficiency and safety?

SLIDE 23

Device-Independent I/O Software Functions

Functions of the device-independent I/O software:
  Providing a device-independent block size
  Allocating and releasing dedicated devices
  Error reporting
  Buffering
  Uniform interfacing for device drivers

SLIDE 24

Device-Independent I/O Software Interface

(a) Without a standard driver interface (b) With a standard driver interface

SLIDE 25

Device-Independent I/O Software Buffering

(a) Unbuffered input (b) Buffering in user space (c) Buffering in the kernel followed by copying to user space (d) Double buffering in the kernel

SLIDE 26

Copying Overhead in Network I/O

Networking may involve many copies

SLIDE 27

User-Space I/O Software

Layers of the I/O system and the main functions of each layer

SLIDE 28

Devices as files

  • Before mounting, files on the floppy are inaccessible
  • After mounting the floppy on b, files on the floppy are part of the file hierarchy

SLIDE 29

Disk Geometry

Disk head, platters, surfaces, cylinder, track, sector

SLIDE 30

Physical vs. logical disk geometry

Constant Angular Velocity vs. Constant Linear Velocity

SLIDE 31

Disks

Disk parameters for the original IBM PC floppy disk and a Western Digital WD 18300 hard disk

SLIDE 32

CD-ROMs

SLIDE 33

CD-ROM data layout

Logical data layout on a CD-ROM

SLIDE 34

CD-R Structure

  • Cross-section of a CD-R disk and laser (not to scale)
  • A silver CD-ROM has a similar structure, without the dye layer, and with a pitted aluminum layer instead of gold

SLIDE 35

Double-sided, dual-layer DVD

SLIDE 36

Plastic technology

CDs
  Approximately 650 Mbytes of data
  Approximately 74 minutes of audio

DVDs
  Many types of formats
  • DVD-R, DVD-ROM, DVD-Video
  Single-layer vs. multi-layer
  Single-sided vs. double-sided
  Authoring vs. non-authoring

SLIDE 37

Disk Formatting

A disk sector

SLIDE 38

Disk formatting with cylinder skew

SLIDE 39

Disk formatting with interleaving

No interleaving; single interleaving; double interleaving

SLIDE 40

Disk scheduling algorithms

Time required to read or write a disk block is determined by 3 factors:
  Seek time
  Rotational delay
  Actual transfer time

Seek time dominates. Error checking is done by controllers.

SLIDE 41

Disk scheduling algorithms

First-come first-served
Shortest seek time first
Scan: back and forth to the ends of the disk
C-Scan: only one direction
Look: back and forth to the last request
C-Look: only one direction
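As a sketch of how shortest seek time first chooses among pending requests (a minimal illustration, not any particular kernel's scheduler; the queue and cylinder numbers below are invented):

```c
#include <assert.h>
#include <stdlib.h>

/* Shortest Seek Time First: from the pending queue, pick the request
   whose cylinder is closest to the current head position. */
int sstf_next(const int *pending, int n, int head)
{
    int best = -1, best_dist = 0;
    for (int i = 0; i < n; i++) {
        int dist = abs(pending[i] - head);
        if (best < 0 || dist < best_dist) {
            best = i;
            best_dist = dist;
        }
    }
    return best;   /* index of the request to service next, or -1 if none */
}
```

With the head at cylinder 53 and requests for cylinders {98, 183, 37, 122, 14, 65}, SSTF services 65 first (distance 12), illustrating why it beats FCFS on seek time but can starve far-away requests.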

SLIDE 42

Disk scheduling algorithms

SLIDE 43

Error handling complicates scheduling

(a) A disk track with a bad sector (b) Substituting a spare for the bad sector (c) Shifting all the sectors to bypass the bad one

SLIDE 44

RAID levels 0 to 2

SLIDE 45

RAID levels 3 to 5

SLIDE 46

Part B: File Systems

SLIDE 47

Long-term Information Storage

  • Must store large amounts of data
  • Information stored must survive the termination of the process using it
  • Multiple processes must be able to access the information concurrently

SLIDE 48

File naming and file extensions

Typical file extensions

SLIDE 49

File Structure

Three kinds of file structure:
  byte sequence
  record sequence
  tree

SLIDE 50

File Types

(a) An executable file (b) An archive

SLIDE 51

File access

Sequential access
  read all bytes/records from the beginning
  cannot jump around; could rewind or back up
  convenient when the medium was mag tape

Random access
  bytes/records read in any order
  essential for database systems
  a read can be ...
  • move file marker (seek), then read, or ...
  • read and then move file marker

SLIDE 52

File attributes

SLIDE 53

File operations

Create, Delete, Open, Close, Read, Write, Append, Seek, Get attributes, Set attributes, Rename
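Most of these operations map directly onto POSIX system calls. A minimal sketch exercising a subset of them (the file name is arbitrary):

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Create a file, write to it, seek back, read the data, then delete it.
   Returns the number of bytes read back, or -1 on error. */
int file_ops_demo(const char *path)
{
    char buf[64];
    ssize_t n;

    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0644); /* Create/Open */
    if (fd < 0)
        return -1;
    write(fd, "hello", 5);              /* Write */
    lseek(fd, 0, SEEK_SET);             /* Seek back to the beginning */
    n = read(fd, buf, sizeof buf);      /* Read */
    close(fd);                          /* Close */
    unlink(path);                       /* Delete */
    return (int)n;
}
```

Note that Get/Set attributes correspond to `stat`/`chmod`-family calls, and Rename to `rename`; they are omitted here for brevity.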

SLIDE 54

Example Program Using File System Calls (1/2)

SLIDE 55

Example Program Using File System Calls (2/2)

SLIDE 56

Memory-mapped files

(a) Segmented process before mapping files into its address space (b) Process after mapping
  existing file abc into one segment
  creating a new segment for xyz

SLIDE 57

Directories - single level

A single-level directory system
  contains 4 files
  owned by 3 different people, A, B, and C

SLIDE 58

Directories - two-level

Letters indicate owners of the directories and files

SLIDE 59

Hierarchical directory systems

SLIDE 60

Path names in Unix

SLIDE 61

Directory operations

Create, Delete, Opendir, Closedir, Readdir, Rename, Link, Unlink

SLIDE 62

File system implementation on disk

A possible file system disk layout

SLIDE 63

Implementing files - contiguous allocation

(a) Contiguous allocation of disk space for 7 files (b) State of the disk after files D and E have been removed

SLIDE 64

Implementing files - linked allocation

Storing a file as a linked list of disk blocks

SLIDE 65

Implementing Files - FAT

Linked-list allocation using a file allocation table (FAT) in RAM
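Keeping the FAT in RAM makes random access to a linked file cheap: following the chain is array indexing rather than a disk read per link. A sketch (table contents invented for illustration):

```c
#define FAT_EOF (-1)   /* end-of-chain marker in this toy table */

/* Return the disk block holding logical block `n` of a file whose
   first block is `start`, by walking the in-memory FAT chain. */
int fat_block(const int *fat, int start, int n)
{
    int b = start;
    while (n-- > 0 && b != FAT_EOF)
        b = fat[b];    /* fat[b] names the next block of the file */
    return b;          /* FAT_EOF if the file has fewer than n+1 blocks */
}
```

For a file occupying blocks 4 -> 7 -> 2, the FAT has fat[4]=7, fat[7]=2, fat[2]=FAT_EOF, and logical block 2 resolves to disk block 2 without touching the disk.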

SLIDE 66

Implementing files - index nodes

An example i-node

SLIDE 67

Implementing directories

(a) A simple directory
  fixed-size entries
  disk addresses and attributes in the directory entry
(b) Directory in which each entry just refers to an i-node

SLIDE 68

Implementing directories

  • Two ways of handling long file names in a directory
(a) In-line (b) In a heap

SLIDE 69

Shared files - hard links

File system containing a shared file

SLIDE 70

Problems with shared files

(a) Situation prior to linking (b) After the link is created (c) After the original owner removes the file

SLIDE 71

Disk space management and performance

  • Dark line (left-hand scale) gives data rate of a disk
  • Dotted line (right-hand scale) gives disk space efficiency
  • All files 2 KB

Block size

SLIDE 72

Disk space management

(a) Storing the free list on a linked list (b) A bitmap
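A bitmap costs one bit per disk block; allocation is a scan for a clear bit. A minimal sketch (sizes and names are invented, and a real implementation would scan a word at a time):

```c
#include <limits.h>

/* Find a free block (0 bit), mark it allocated (1), return its number.
   `nbits` is the number of blocks; returns -1 when the disk is full. */
int bitmap_alloc(unsigned char *map, int nbits)
{
    for (int i = 0; i < nbits; i++) {
        if (!(map[i / CHAR_BIT] & (1u << (i % CHAR_BIT)))) {
            map[i / CHAR_BIT] |= (1u << (i % CHAR_BIT));
            return i;
        }
    }
    return -1;
}

/* Return a block to the free pool by clearing its bit. */
void bitmap_free(unsigned char *map, int block)
{
    map[block / CHAR_BIT] &= (unsigned char)~(1u << (block % CHAR_BIT));
}
```

Unlike the linked free list, the bitmap's size is fixed regardless of how full the disk is, and contiguous free runs are easy to spot.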

SLIDE 73

Disk Space Management (3)

(a) An almost-full block of pointers to free disk blocks in RAM, and three blocks of pointers on disk
(b) Result of freeing a 3-block file
(c) An alternative strategy for handling the 3 free blocks
  shaded entries are pointers to free disk blocks

SLIDE 74

Disk space management - quotas

Quotas for keeping track of each user's disk use

SLIDE 75

Maintaining File System Consistency

Crashes can cause the file system to have corrupted data.

First: bitmap vs. linked storage maps (fsck / scandisk)

Block level: ensure that all blocks on disk are accounted for
  • traverse i-nodes counting allocated blocks
  • compare result to the free list

File level: ensure that all files are accounted for
  • traverse the directory system counting files

SLIDE 76

File system consistency

File system states: (a) consistent (b) missing block (c) duplicate block in free list (d) duplicate data block

SLIDE 77

Maintaining File System Consistency

File-level consistency:
  Have directories with i-node #'s of files
  Have link status in the i-nodes themselves
  Compare!

SLIDE 78

Performance issues

Buffer cache: a buffer in between the application and the FS
  • Holds buffers of accessed data
  • Allows data to be reused by other apps
  • Can implement prefetch strategies to minimize latency
  • Serves as a buffer for writing out application data
    - Delayed writes allow the writes to happen when convenient
    - Trade-off between consistency and performance

A hash table is indexed to minimize the searching.
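The hash-table lookup can be sketched as follows. This is a toy version under stated assumptions: the struct fields, hash function, and bucket count are invented, and a real kernel also keeps an LRU list and per-buffer locks:

```c
#include <stddef.h>

#define NBUCKETS 64

struct buf {
    int dev, blockno;        /* which disk block this buffer caches */
    struct buf *next;        /* hash-bucket chain */
    char data[512];
};

static struct buf *buckets[NBUCKETS];

/* Look up the cached buffer for (dev, blockno); NULL on a miss. */
struct buf *cache_lookup(int dev, int blockno)
{
    unsigned h = (unsigned)(dev * 31 + blockno) % NBUCKETS;
    for (struct buf *b = buckets[h]; b != NULL; b = b->next)
        if (b->dev == dev && b->blockno == blockno)
            return b;        /* hit: no disk I/O needed */
    return NULL;             /* miss: caller must read the block from disk */
}

/* Add a freshly read buffer to its hash bucket. */
void cache_insert(struct buf *b)
{
    unsigned h = (unsigned)(b->dev * 31 + b->blockno) % NBUCKETS;
    b->next = buckets[h];
    buckets[h] = b;
}
```

Hashing on (device, block number) is what keeps the common case, a cache hit, to a short chain walk instead of a scan of every buffer.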

SLIDE 79

File system performance - buffer cache

The buffer cache data structures

SLIDE 80

File system performance - data placement

  • I-nodes placed at the start of the disk
  • Disk divided into cylinder groups, each with its own blocks and i-nodes

SLIDE 81

Log-Structured File Systems

With CPUs faster and memory larger:
  disk caches can also be larger
  an increasing number of read requests can come from the cache
  thus, most disk accesses will be writes

LFS strategy: structure the entire disk as a log
  have all writes initially buffered in memory
  periodically write these to the end of the disk log
  • long contiguous writes to disk
  • requires large contiguous free space!
  • data not updated in place
  when a file is opened, locate its i-node, then find the blocks

SLIDE 82

Example: The Unix file system

  • File system structure
  • The file system in UNIX is an integral part of the operating system. Files are used for:
    Terminal handling: /dev/tty***
    Pipes: "ls | more"
    Directories
    Sockets (for networking)
  • The design of the UNIX file system allows the user to write code that has a uniform interface.

Disk layout: boot block, superblock, i-nodes, data blocks

SLIDE 83

Unix file system structures

Superblock - tells the UNIX O.S. what the format of the disk is, the # of blocks, the number of i-nodes, etc.

I-nodes - the metadata data structures for files and directories
  For files: hold information such as owner, permissions, date of modification, and where the data is
  Directories are just a special case of files that have the pairs <file-name, i-node #> stored in the file

SLIDE 84

The Unix name space

Files
  Represented with an i-node and disk blocks that hold the data
  File names are not stored with the i-node (they're in the directories that refer to them)
  • This is why there is a link count in the i-node

Directories
  Directories are also files
  Entries are disk blocks with <filename, i-node #> pairs
  Special directory marking in the i-node

SLIDE 85

Unix directory entries

A UNIX V7 directory entry

SLIDE 86

The Unix name space

/ (inode #22):
  .     22
  ..    22
  usr   24
  var   35
  home  26
  txt   23

/usr (inode #24):
  .     24
  ..    22
  src   38
  conf  53

/usr/conf (inode #53) is an ordinary file whose data blocks hold its contents ("... Frame rate 33, Frame size ...")

SLIDE 87

Pathname translation in Unix

The steps in looking up /usr/ast/mbox
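The lookup is a loop: start at the root directory's i-node and, for each path component, search the current directory's <name, i-node #> pairs. A sketch over an in-memory toy name space (the entry table and i-node numbers below are illustrative stand-ins for what would really be directory blocks read from disk):

```c
#include <string.h>

struct dent { int dir_ino; const char *name; int ino; };

/* Toy name space: <directory i-node, entry name, entry i-node>. */
static const struct dent entries[] = {
    {  1, "usr",  6  },      /* /usr          */
    {  6, "ast",  26 },      /* /usr/ast      */
    { 26, "mbox", 60 },      /* /usr/ast/mbox */
};

/* Resolve an absolute path to an i-node number; -1 if not found. */
int namei(const char *path)
{
    char comp[64];
    int ino = 1;             /* i-node of the root directory */
    const char *p = path;

    while (*p == '/') p++;
    while (*p) {
        size_t len = strcspn(p, "/");
        if (len >= sizeof comp) return -1;      /* component too long */
        memcpy(comp, p, len);
        comp[len] = '\0';
        int next = -1;
        for (size_t i = 0; i < sizeof entries / sizeof *entries; i++)
            if (entries[i].dir_ino == ino && strcmp(entries[i].name, comp) == 0)
                next = entries[i].ino;
        if (next < 0) return -1;                /* no such entry */
        ino = next;
        p += len;
        while (*p == '/') p++;
    }
    return ino;
}
```

In a real kernel each iteration may require reading the directory's i-node and data blocks from disk (or the buffer cache), which is why deep paths cost more than shallow ones.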

SLIDE 88

Unix i-node structure

UNIX files consist of an i-node and data blocks. UNIX uses an indexed allocation scheme with:
  10 direct pointers to blocks
  1 single indirect pointer to blocks
  1 double indirect pointer to blocks
  1 triple indirect pointer to blocks
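The mapping from a logical block number to a level of indirection is simple arithmetic. A sketch (the 10 direct slots come from the slide; the pointers-per-block value used in the test is an assumed example, since it depends on block and pointer sizes):

```c
/* How many levels of indirection are needed to reach logical block n,
   given 10 direct slots and `ppb` pointers per indirect block?
   0 = direct, 1 = single indirect, 2 = double, 3 = triple, -1 = too big. */
int indirection_level(long n, long ppb)
{
    if (n < 10) return 0;
    n -= 10;
    if (n < ppb) return 1;
    n -= ppb;
    if (n < ppb * ppb) return 2;
    n -= ppb * ppb;
    if (n < ppb * ppb * ppb) return 3;
    return -1;
}
```

Each extra level costs one more pointer-block read on a cache miss, so small files (all-direct) are the cheap common case.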

SLIDE 89

Unix i-node structure

A UNIX i-node

SLIDE 90

Maximum file size in Unix

  • Several parameters determine how many (and of what size) files can be represented:
    No. of bits in a disk address
    No. of bits in a virtual memory reference
    Disk block size

  • Example:
    10 direct, 1 indirect, 1 double indirect, 1 triple indirect
    No. of bits in disk address: 16 bits
    No. of bits in virtual memory reference: 32 bits
    Disk block size: 1 Kbyte (1024 bytes)

What is the maximum file size in this system?
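One hedged reading of this exercise (assuming a 16-bit disk address means at most 2^16 addressable blocks, 1-Kbyte blocks, and therefore 2-byte block pointers, so an indirect block holds 512 pointers): the i-node could in principle map 10 + 512 + 512^2 + 512^3 blocks, and a 32-bit virtual address caps a mapped file at 4 GB, but a 2^16-block disk can hold at most 64 MB, which is the binding constraint. In code:

```c
/* Three candidate limits on file size; the smallest one wins.
   All parameter values are the (assumed) numbers from the exercise. */
long long max_file_size(void)
{
    long long block = 1024;                       /* 1-Kbyte blocks        */
    long long ptrs  = block / 2;                  /* 2-byte block pointers */
    long long inode_blocks =
        10 + ptrs + ptrs * ptrs + ptrs * ptrs * ptrs;
    long long inode_limit = inode_blocks * block; /* what the i-node maps  */
    long long disk_limit  = (1LL << 16) * block;  /* 16-bit disk address   */
    long long vm_limit    = 1LL << 32;            /* 32-bit VM reference   */

    long long m = inode_limit;
    if (disk_limit < m) m = disk_limit;
    if (vm_limit < m) m = vm_limit;
    return m;
}
```

Under these assumptions the answer is 64 MB: the i-node's pointer structure and the address space are both far more generous than the disk itself.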

SLIDE 91

The "mount" command in Unix

Commands
  "mount" allows file systems to be added to the name space
  "umount" takes a file system out of the name space

Mount points
  Mount points can be any directory in the name space
  Once a file system is mounted, the subtree previously at the mount point is no longer accessible (until the file system is unmounted)

SLIDE 92

Some UNIX file system features

Mounting allows other file systems (disks), whether local or remote, to be "mounted" into a uniform file system name space.

(example hierarchy: /, mnt, usr, var, home, X11, bin)

SLIDE 93

Associating files with processes

  • To provide uniform access to data, UNIX has a level of indirection that is used for opening files: each process's descriptors point into the open file table, whose entries point to the files themselves.

Each open file entry holds the permissions (R/W) and the file offset.

SLIDE 94

File system vs. virtual memory

Process, paging, FS, disk, swap, open file table

SLIDE 95

A file-centric view of IPC

All input and output in UNIX are handled through the open file table structure.
  This is how UNIX can provide a single uniform interface for local or remote data access.

Pipes in UNIX are nothing more than:
  A writing process that has its "stdout" linked to a "pipe" file (instead of a /dev/tty*** file)
  A reading process that has its "stdin" linked to a "pipe" file (instead of a /dev/tty*** file)