Operating Systems: Memory management - PowerPoint PPT Presentation


SLIDE 1

Operating Systems: Memory management

Fall 2008, Tiina Niklander

SLIDE 2

Memory Management

  • Programmer wants memory to be
    • indefinitely large
    • indefinitely fast
    • non-volatile
  • Memory hierarchy
    • a small amount of fast, expensive memory – cache
    • some medium-speed, medium-price main memory
    • gigabytes of slow, cheap disk storage
  • Memory manager handles the memory hierarchy
  • Requirements for memory management
    • logical and physical organization
    • protection and sharing

(Hierarchy figure: Registers – Cache – Main memory – Magnetic disk – Magnetic tape)

SLIDE 3

Memory management

  • Programs use logical addresses of their own address space (0..MAX)
  • OS kernel is usually in a fixed location, using physical memory addresses directly
  • Rest of the physical memory is for user processes and other OS parts
  • OS task: memory allocation and process relocation
  • Hardware task: address translation (to protect memory)
    • MMU – memory management unit

SLIDE 4

Basic Memory Management: One program

Monoprogramming without Swapping or Paging

No memory abstraction, no address space; just an operating system with one user process.
SLIDE 5

Multiprogramming with Fixed Partitions

  • OS places one process in one partition
  • Internal fragmentation (a whole partition is allocated to a smaller process)
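The internal fragmentation above can be sketched in a few lines of Python. The partition and process sizes below are invented for illustration, but the arithmetic is exactly what the slide describes: the whole partition is allocated, so the unused remainder inside it is wasted.

```python
# Hypothetical fixed partition sizes in KB (not from the slides).
PARTITIONS = [100, 500, 200]

def internal_fragmentation(process_kb, partition_kb):
    """Space wasted when a whole fixed partition holds a smaller process."""
    assert process_kb <= partition_kb, "process does not fit in this partition"
    return partition_kb - process_kb

# A 90 KB process in the 100 KB partition wastes 10 KB inside the partition.
waste = internal_fragmentation(90, PARTITIONS[0])  # → 10
```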

SLIDE 6

Fixed partitions

  • Processes queue for a partition on disk
    • in one shared queue for all partitions, or
    • in multiple queues for the different partition sizes
  • If no partition is free, the OS can swap
    • move one process to disk
    • the PCB always stays in memory
  • If a program is too large for any partition, the programmer has to design a solution
    • overlaying: keep just part of the program in memory
    • write the program to control the swapping of its parts between memory and disk

SLIDE 7

Relocation and Protection

  • Cannot be sure where a program will be loaded in memory
    • address locations of variables and code routines cannot be absolute
    • must keep a program out of other processes’ partitions
  • Use base and limit values
    • address locations are added to the base value to map to a physical address
    • an address location larger than the limit value is an error
  • Address translation by the MMU
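The base/limit scheme above can be sketched in a few lines (the register values are invented for illustration): the MMU checks every logical address against the limit and, if it passes, adds the base to relocate it.

```python
def base_limit_translate(logical_addr, base, limit):
    """MMU-style relocation: reject addresses at or beyond the limit,
    otherwise add the base register to form the physical address."""
    if not 0 <= logical_addr < limit:
        raise MemoryError("protection fault: address outside the partition")
    return base + logical_addr

# Partition loaded at physical address 4096 with a 1024-byte limit
# (hypothetical values).
base_limit_translate(100, base=4096, limit=1024)  # → 4196
```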

SLIDE 8

Sharing

  • Access to shared code / data must not violate the protections!
  • Shared code
    • must be reentrant, i.e. it does not change during execution
    • just one copy of the shared code (e.g. a library)
  • Shared data
    • co-operating processes share data structures
    • e.g. the shared buffer of a producer and a consumer
  • Solution: system calls between processes, direct sharing between threads within a process (virtual memory offers an alternative solution)

SLIDE 9

Swapping (1)

Memory allocation changes as
  • processes come into memory
  • processes leave memory

Shaded regions are unused memory.

SLIDE 10

Swapping (2)

  • (a) Allocating space for a growing data segment
  • (b) Allocating space for a growing stack & data segment

SLIDE 11

Dynamic partitions

  • No fixed, predetermined partition sizes
  • External fragments: 6M + 6M + 4M = 14M
  • OS must occasionally reorganize the memory (compaction)

SLIDE 12

Memory Management: bookkeeping allocations and free areas

  • (a) Part of memory with 5 processes and 3 holes
    • tick marks show allocation units
    • shaded regions are free
  • (b) Corresponding bit map
  • (c) Same information as a list
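A bit map like the one in (b) can be scanned for a run of free allocation units. The sketch below (with a made-up bitmap) also shows why bit-map allocation is slow: finding a k-unit hole means searching for k consecutive 0 bits.

```python
def find_free_run(bitmap, n):
    """Index of the first run of n free (0) allocation units, or -1."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == n:
            return i - n + 1          # start index of the free run
    return -1

bitmap = [1, 1, 0, 0, 0, 1, 0, 0, 1]  # 1 = allocated unit, 0 = free
find_free_run(bitmap, 3)              # → 2
```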

SLIDE 13

Combining freed areas

Four neighbor combinations for the terminating process X.
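With a free list, the four neighbor combinations collapse into one rule: insert the freed block in address order, then merge any hole that ends exactly where the next one starts. A sketch, where the `(start, length)` hole-list format is an assumption of this example:

```python
def free_and_coalesce(holes, start, length):
    """Insert the freed block into an address-ordered hole list and merge
    adjacent holes; this covers all four neighbor combinations of X."""
    holes = sorted(holes + [(start, length)])
    merged = [holes[0]]
    for s, l in holes[1:]:
        prev_s, prev_l = merged[-1]
        if prev_s + prev_l == s:       # touches the previous hole: combine
            merged[-1] = (prev_s, prev_l + l)
        else:
            merged.append((s, l))
    return merged

# X at (100, 200) has free neighbors on both sides: three holes become one.
free_and_coalesce([(0, 100), (300, 50)], 100, 200)  # → [(0, 350)]
```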

SLIDE 14

Allocation

Where to place the new process?

  • Goal: avoid external fragmentation and compaction
  • Some alternatives:
    • best-fit
    • first-fit
    • next-fit
    • worst-fit
    • quick-fit

Sta Fig 7.5
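Two of the alternatives above, first-fit and best-fit, can be sketched over a list of hole sizes (the sizes are invented): first-fit takes the first hole that is big enough, best-fit the tightest one.

```python
def first_fit(holes, size):
    """Index of the first hole that is large enough, or -1."""
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return -1

def best_fit(holes, size):
    """Index of the smallest hole that still fits, or -1."""
    fitting = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
    return min(fitting)[1] if fitting else -1

holes = [6, 2, 5, 4]   # hole sizes in MB (hypothetical)
first_fit(holes, 3)    # → 0 (takes the 6 MB hole)
best_fit(holes, 3)     # → 3 (the 4 MB hole leaves the least waste)
```

Note that best-fit tends to leave many tiny, useless fragments behind, which is one reason the simpler first-fit often performs well in practice.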

SLIDE 15

Virtual memory (using paging)

SLIDE 16

Paging

  • OS: the program is split into pages
  • Page locations are stored in the page table
  • Process location in memory may change, but the logical address is always the same
  • MMU translates the logical address to a physical address using the page table
  • Each page is relocated separately

Sta Fig 7.11

SLIDE 17

Paging

  • Each process has its own page table
    • it contains the locations (frame numbers) of the allocated frames
  • Page table location is stored in the PCB and copied to the PTR for execution
  • OS maintains a table (or list) of page frames, to know which are unallocated

SLIDE 18

Paging: Address Translation

The MMU has one special register, the Page Table Register (PTR), for address translation.
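The translation the MMU performs can be sketched as follows (the 4 KB page size and the example page table are assumptions of this sketch): the page number indexes the page table located via the PTR, and the offset is carried over unchanged.

```python
PAGE_SIZE = 4096                      # 4 KB pages → 12-bit offset

def paging_translate(logical_addr, page_table):
    """Split the logical address into page number and offset, then replace
    the page number with the frame number from the page table."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]          # table located via the Page Table Register
    if frame is None:
        raise LookupError("page fault: page not present in memory")
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2, 2: None}    # page → frame, None = not present
paging_translate(4100, page_table)    # page 1, offset 4 → 2*4096 + 4 = 8196
```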

SLIDE 19

Paging: Address Translation

Sta Fig 8.3

SLIDE 20

Page table

SLIDE 21

Page table

  • Each process has its own page table
  • Each entry has a present bit, since not all pages need to be in memory all the time -> page faults
  • Remember the locality principle
  • Logical address space can be much larger than the physical one

Typical Page Table Entry, Fig. 3.11

SLIDE 22

Page Tables

Internal operation of the MMU with 16 4-KB pages.

SLIDE 23

Translation Lookaside Buffer – TLB

SLIDE 24

TLB – Translation Lookaside Buffer

The goal is to speed up paging. The TLB is a cache in the MMU for page table entries.

SLIDE 25

TLB – translation lookaside buffer

  • Part of the memory management unit (MMU)
  • A cache for used page table entries, to avoid an extra memory access during address translation
  • Associative search
    • compare with all elements at the same time (fast)
  • Each TLB element contains: page number, page table entry, validity bit
  • Each process uses the same page numbers 0, 1, 2, …, stored in different page frames
    • TLB must be cleared during a process switch
    • at least clear the validity bits (this is fast)
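The points above — associative lookup, per-entry validity bits, and clearing validity on a process switch — can be sketched as a tiny TLB model. The capacity and the FIFO eviction are arbitrary choices of this sketch, not taken from the slides.

```python
class TLB:
    """Minimal TLB model: caches page-number → frame mappings."""

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.entries = {}                 # page → (frame, valid)

    def lookup(self, page):
        entry = self.entries.get(page)
        if entry and entry[1]:
            return entry[0]               # hit: no page-table memory access
        return None                       # miss: walk the page table instead

    def insert(self, page, frame):
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))  # evict oldest entry
        self.entries[page] = (frame, True)

    def flush(self):
        """Process switch: clear only the validity bits (fast)."""
        for page, (frame, _valid) in self.entries.items():
            self.entries[page] = (frame, False)

tlb = TLB()
tlb.insert(3, 7)
tlb.lookup(3)   # → 7 (hit)
tlb.flush()     # process switch
tlb.lookup(3)   # → None (entry still cached, but no longer valid)
```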

SLIDE 26

Operation of Paging and the TLB

Sta Fig 8.8

SLIDE 27

DEC Alpha AXP 21064 memory hierarchy (Fig. 5.47 from Hennessy-Patterson, Computer Architecture):

  • fully associative, 32-entry data TLB
  • 8 KB, direct-mapped data cache with 256 lines (each 32 B)
  • fully associative, 12-entry instruction TLB
  • 8 KB, direct-mapped instruction cache with 256 lines (each 32 B)
  • 2 MB, direct-mapped, unified, write-back L2 cache with 64K lines (each 32 B)
  • main memory
  • paging disk (DMA)

More? -> course Computer Organisation II

SLIDE 28

TLB and cache

Sta Fig. 8.10

SLIDE 29

Multilevel and Inverted Page Tables

SLIDE 30

Multilevel page table

  • Large virtual address space
    • logical address could be 32 or 64 bits
  • Each process has a large page table
    • with a 32-bit address and a 4 KB frame size (12-bit offset), there are 2^20 = 1M page table entries for a single process
    • each entry requires several bytes (say 4 bytes), so the final size of the page table could be, for example, 4 MB
  • Thus the page table itself is divided into pages, and part of it can be on disk
  • Only the part of the page table that covers the pages currently used in the execution of the process is in memory

SLIDE 31

Two-level hierarchical page table

  • Topmost level fits in one page and is always in memory
  • 1K entries (= 1024 = 2^10); 1K * 1K = 1M entries
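The 10 + 10 + 12 split of a 32-bit virtual address implied above can be sketched directly (the example address is invented):

```python
def split_two_level(vaddr):
    """Split a 32-bit virtual address into a 10-bit top-level index,
    a 10-bit second-level index, and a 12-bit page offset (4 KB pages)."""
    offset = vaddr & 0xFFF            # low 12 bits: offset within the page
    second = (vaddr >> 12) & 0x3FF    # next 10 bits: entry in a 1K-entry table
    top = (vaddr >> 22) & 0x3FF       # high 10 bits: entry in the top-level page
    return top, second, offset

split_two_level(0x00403004)           # → (1, 3, 4)
```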

SLIDE 32

Address translation with two levels

(Fig 4-12 [Tane01]: virtual address split, page table located via the PCB)

SLIDE 33

Address translation with two levels

Sta Fig 8.5

SLIDE 34

Inverted Page Tables

Comparison of a traditional page table with an inverted page table.

SLIDE 35

Inverted page table

  • Physical memory is often smaller than the virtual address spaces of the processes
  • Invert the bookkeeping: store for each page frame which page (of which process) is stored there
  • Only one global table (the inverted page table), with one entry for each page frame
  • Search for the page based on the content of the table
    • inefficient if done sequentially
    • use a hash to calculate the location and start the search from there
    • if the page is not found: page fault
  • Useful only if the TLB is large
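The hashed search can be sketched like this. The table layout — a hash-anchor array of chain heads plus a `next` link in each frame entry — is one common design, assumed here rather than taken from the figure.

```python
def ipt_lookup(ipt, hash_anchor, pid, page):
    """Hash (pid, page) to a chain head, then follow the chain through the
    inverted page table (one entry per page frame) until the page is found."""
    frame = hash_anchor[hash((pid, page)) % len(hash_anchor)]
    while frame is not None:
        entry = ipt[frame]
        if entry["pid"] == pid and entry["page"] == page:
            return frame                  # the frame number is the table index
        frame = entry["next"]             # next frame on the same hash chain
    raise LookupError("page fault: page not resident")

# A single bucket forces a chain walk: frame 0 does not match, frame 1 does.
ipt = [{"pid": 1, "page": 9, "next": 1},
       {"pid": 1, "page": 2, "next": None}]
ipt_lookup(ipt, [0], 1, 2)               # → 1
```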

SLIDE 36

Inverted page table

  • Frame number is the index of the table, so it is not stored in the entry

Sta Fig 8.6