

SLIDE 1

Memory Management

Disclaimer: some slides are adopted from book authors’ slides with permission

SLIDE 2

Recap: Virtual Memory (VM)

[Figure: Processes A, B, and C each have their own virtual address space; the MMU maps all of them onto the shared Physical Memory.]

SLIDE 3

Recap: MMU

  • Hardware that translates virtual addresses to physical addresses
  • Translation schemes: 1) base register; 2) base + limit registers (segmentation); 3) paging
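As a toy illustration of the second scheme, base + limit translation is just a bounds check followed by a relocation (a sketch, not real MMU hardware; the function name is invented):

```python
def translate_base_limit(vaddr, base, limit):
    """Toy base + limit MMU: check the virtual address against the
    segment limit, then relocate it by the base register."""
    if vaddr >= limit:
        # A real MMU would raise a trap to the OS here
        raise MemoryError("address beyond segment limit")
    return base + vaddr
```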

SLIDE 4

Recap: Page Table based Address Translation

Worked example, assuming 4KB pages (20-bit page #, 12-bit offset):

  Virtual address 0x12345678 → Page #: 0x12345, Offset: 0x678
  Page table lookup: page 0x12345 → frame #: 0xabcde
  Physical address = frame # followed by offset = 0xabcde678
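The translation above can be reproduced with a toy single-level page table (the one mapping shown on the slide):

```python
PAGE_SHIFT = 12                      # 4KB pages: low 12 bits are the offset
page_table = {0x12345: 0xabcde}      # page # -> frame # (from the slide)

def translate(vaddr):
    page   = vaddr >> PAGE_SHIFT             # top 20 bits: page #
    offset = vaddr & ((1 << PAGE_SHIFT) - 1) # low 12 bits: offset
    frame  = page_table[page]                # page-table lookup
    return (frame << PAGE_SHIFT) | offset    # frame # followed by offset
```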

SLIDE 5

Recap: Translation Lookaside Buffer

  • Caches frequent address translations
    – So the CPU doesn't need to access the page table on every memory reference
    – Much faster than a full page-table walk
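The TLB idea can be sketched as a small cache sitting in front of the page table (a toy model; real TLBs are fixed-size hardware structures with replacement policies):

```python
PAGE_SHIFT = 12
page_table = {0x12345: 0xabcde}   # the slow, authoritative mapping
tlb = {}                          # small, fast cache of recent translations

def translate(vaddr):
    page   = vaddr >> PAGE_SHIFT
    offset = vaddr & ((1 << PAGE_SHIFT) - 1)
    if page in tlb:               # TLB hit: no page-table access needed
        frame = tlb[page]
    else:                         # TLB miss: walk the table, then cache it
        frame = page_table[page]
        tlb[page] = frame
    return (frame << PAGE_SHIFT) | offset
```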

SLIDE 6

Multi-level Paging

  • Two-level paging


SLIDE 7

Two Level Address Translation

  Virtual address: [1st-level index | 2nd-level index | Offset]
  Translation: base ptr → 1st-level page table; its entry points to a 2nd-level page table; that entry gives the Frame #
  Physical address: [Frame # | Offset]

SLIDE 8

Example

Virtual address format (24 bits): 1st-level index (8 bits) | 2nd-level index (8 bits) | Offset (8 bits)

  Vaddr: 0x0703FE → 1st-level idx: 07, 2nd-level idx: 03, Offset: FE
  Vaddr: 0x072370 → 1st-level idx: __, 2nd-level idx: __, Offset: __
  Vaddr: 0x082370 → 1st-level idx: __, 2nd-level idx: __, Offset: __
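The 8/8/8 split used in this example is plain shifting and masking; the worked address from the slide checks out (function name invented):

```python
def split_vaddr_888(vaddr):
    """Split a 24-bit virtual address into an 8-bit 1st-level index,
    an 8-bit 2nd-level index, and an 8-bit offset."""
    l1  = (vaddr >> 16) & 0xFF   # top 8 bits: 1st-level index
    l2  = (vaddr >> 8) & 0xFF    # middle 8 bits: 2nd-level index
    off = vaddr & 0xFF           # low 8 bits: offset
    return l1, l2, off
```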

SLIDE 9

Multi-level Paging

  • Can save table space
  • How, why?


SLIDE 10

Quiz

  • What is the minimum page table size for a process that uses only 4MB of memory space?
    – Assume a PTE size of 4B

Two-level layout: 1st-level index (10 bits) | 2nd-level index (10 bits) | Offset (12 bits)
  A 4MB process needs the 1st-level table plus a single 2nd-level table:
  4B × 2^10 + 4B × 2^10 = 8KB

Single-level layout: page # (20 bits) | Offset (12 bits)
  One PTE for every possible page: 4B × 2^20 = 4MB
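The quiz arithmetic, spelled out (a 2^10-entry 2nd-level table covers 2^10 pages × 4KB = 4MB, so one such table suffices):

```python
PTE_SIZE = 4                                    # bytes per PTE (from the quiz)

# Two-level (10 | 10 | 12): 1st-level table plus one 2nd-level table
two_level = PTE_SIZE * 2**10 + PTE_SIZE * 2**10

# Single-level (20 | 12): a PTE for every possible page, used or not
single_level = PTE_SIZE * 2**20
```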

SLIDE 11

Paging Summary

  • Advantages
    – Efficient use of memory space
      • No external fragmentation
  • Two main issues
    – Translation speed can be slow
      • Solution: TLB
    – Table size is big
      • Solution: multi-level page table

SLIDE 12

Concepts to Learn

  • Demand paging


SLIDE 13

Virtual Memory (VM)

  • Abstraction
    – 4GB linear address space for each process
  • Reality
    – 1GB of actual physical memory, shared with 20 other processes
  • Does each process use (1) the entire virtual memory, (2) all the time?

SLIDE 14

Demand Paging

  • Idea: instead of keeping all of a process's pages in memory all the time, keep only part of them in memory, on an on-demand basis

SLIDE 15

Page Table Entry (PTE)

  • PTE format (architecture specific)
    – Valid bit (V): whether the page is in memory
    – Modify bit (M): whether the page has been modified
    – Reference bit (R): whether the page has been accessed
    – Protection bits (P): readable, writable, executable

  Layout: Page Frame No (20 bits) | V, M, R (1 bit each) | P (2 bits)
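Decoding a PTE is bit extraction. The bit positions below are a made-up layout for illustration only (frame in the top 20 bits, flags in the low bits); as the slide notes, real layouts are architecture specific:

```python
def decode_pte(pte):
    """Decode a 32-bit PTE. Bit positions are an invented layout
    for illustration; real PTE layouts are architecture specific."""
    return {
        "frame":      pte >> 12,          # page frame number (top 20 bits)
        "valid":      (pte >> 0) & 1,     # V: page is in memory
        "modified":   (pte >> 1) & 1,     # M: page has been written
        "referenced": (pte >> 2) & 1,     # R: page has been accessed
        "prot":       (pte >> 3) & 0b11,  # P: protection bits
    }
```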

SLIDE 16

Partial Memory Mapping

  • Not all pages are in memory (i.e., valid=1)


SLIDE 17

Page Fault

  • When a virtual address cannot be translated to a physical address, the MMU generates a trap to the OS
  • Page fault handling procedure
    – Step 1: allocate a free page frame
    – Step 2: bring in the stored page from disk (if necessary)
    – Step 3: update the PTE (mapping and valid bit)
    – Step 4: restart the faulting instruction
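The four steps above can be sketched as a toy demand pager (class and method names are invented; a real handler runs in the kernel and also deals with eviction, permissions, and I/O):

```python
class DemandPager:
    """Toy model of the 4-step page-fault handling procedure."""

    def __init__(self, nframes):
        self.page_table  = {}                  # page # -> frame # (valid iff present)
        self.free_frames = list(range(nframes))
        self.disk        = {}                  # backing store: page # -> data

    def handle_fault(self, page):
        frame = self.free_frames.pop(0)        # Step 1: allocate a free frame
        _data = self.disk.get(page)            # Step 2: read from disk if backed
        self.page_table[page] = frame          # Step 3: update PTE (map + valid)
                                               # Step 4: caller retries the access

    def access(self, page):
        if page not in self.page_table:        # invalid PTE -> trap to the OS
            self.handle_fault(page)
        return self.page_table[page]           # restarted access now succeeds
```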

SLIDE 18

Page Fault Handling


SLIDE 19

Demand Paging


SLIDE 20

Starting Up a Process

[Figure: address space regions (Code, Data, Heap, Stack); all pages initially unmapped.]

SLIDE 21

Starting Up a Process

[Figure: the CPU accesses the next instruction in the Code region.]

SLIDE 22

Starting Up a Process

[Figure: the access hits an unmapped Code page, triggering a page fault.]

SLIDE 23

Starting Up a Process

The OS 1) allocates a free page frame, 2) loads the missed page from the disk (the exec file), and 3) updates the page table entry.

SLIDE 24

Starting Up a Process

Over time, more pages are mapped in as needed.

SLIDE 25

Anonymous Page

  • An executable file contains the code (binary)
    – So code pages can be read in from the executable file
  • What about the heap?
    – No backing storage (unless it is swapped out later)
    – Simply map a new free page (an anonymous page) into the address space
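As a rough illustration of the idea, Python's mmap module can create an anonymous mapping directly: passing -1 as the file descriptor means "no backing file", and the kernel supplies zero-filled pages on demand, just like a freshly mapped heap page.

```python
import mmap

# Anonymous mapping: 4KB of memory with no backing file (fileno -1)
anon = mmap.mmap(-1, 4096)

assert anon[:4] == b"\x00\x00\x00\x00"   # demand-zero contents
anon[:5] = b"heap!"                      # writable like ordinary memory
```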

SLIDE 26

Program Binary Sharing

  • Multiple instances of the same program
    – E.g., 10 bash shells

[Figure: Bash #1 and Bash #2 map the same bash text (code) pages in physical memory.]