Memory Management (disclaimer: some slides are adopted from book authors' slides with permission)




Slide 1

Memory Management

Disclaimer: some slides are adopted from book authors’ slides with permission

Slide 2

Recap: Virtual Memory (VM)

[Diagram: Processes A, B, and C each translate through the MMU to shared Physical Memory]

Slide 3

Recap: MMU

  • Hardware that translates virtual addresses to physical addresses
    1) base register; 2) base + limit registers (segmentation); 3) paging

Slide 4

Recap: Page Table based Address Translation

Virtual address: 0x12345678
  Page #: 0x12345   Offset: 0x678
Page table: page 0x12345 → frame 0xabcde
Physical address: 0xabcde678 (frame # 0xabcde combined with offset 0x678)
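As a sketch, the translation on this slide can be reproduced in a few lines; the 12-bit offset (4KB pages) and the dict standing in for the page table are illustrative assumptions, not part of real MMU hardware:

```python
# Sketch of single-level page-table translation, assuming 4KB pages
# (12-bit offset) and a page table modeled as a plain dict.

PAGE_SHIFT = 12                      # 4KB page -> low 12 bits are the offset
OFFSET_MASK = (1 << PAGE_SHIFT) - 1

page_table = {0x12345: 0xabcde}      # page # -> frame # (the slide's example)

def translate(vaddr):
    page = vaddr >> PAGE_SHIFT       # high bits select the page
    offset = vaddr & OFFSET_MASK     # low bits pass through unchanged
    frame = page_table[page]         # look up the frame number
    return (frame << PAGE_SHIFT) | offset

print(hex(translate(0x12345678)))    # 0xabcde678
```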

Slide 5

Advantages of Paging

  • No external fragmentation

    – Efficient use of memory
    – Internal fragmentation (waste within a page) still exists

Slide 6

Issues of Paging

  • Translation speed

    – Each load/store instruction requires a translation
    – The table is stored in memory
    – Memory is slow to access
      • ~100 CPU cycles to access DRAM

Slide 7

Translation Lookaside Buffer (TLB)

  • Cache frequent address translations

    – So that the CPU doesn't need to access the page table all the time
    – Much faster
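A minimal sketch of the idea, with the TLB modeled as a small dict consulted before the page table (the structures and the hit/miss counters are illustrative; a real TLB is a small hardware cache with limited capacity and an eviction policy):

```python
# Sketch of a TLB in front of the page table: check the small cache
# first; on a miss, walk the table and cache the result.

PAGE_SHIFT = 12
page_table = {0x12345: 0xabcde}
tlb = {}                               # page # -> frame #, tiny and fast
stats = {"hit": 0, "miss": 0}

def translate(vaddr):
    page, offset = vaddr >> PAGE_SHIFT, vaddr & ((1 << PAGE_SHIFT) - 1)
    if page in tlb:                    # fast path: no page-table access
        stats["hit"] += 1
        frame = tlb[page]
    else:                              # slow path: walk the in-memory table
        stats["miss"] += 1
        frame = page_table[page]
        tlb[page] = frame              # cache the translation for next time
    return (frame << PAGE_SHIFT) | offset

translate(0x12345000)                  # miss: first access to this page
translate(0x12345678)                  # hit: same page, cached translation
print(stats)                           # {'hit': 1, 'miss': 1}
```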

Slide 8

Issues of Paging

  • Page size

    – Small: minimizes wasted space, but requires a large table
    – Big: can waste lots of space, but the table size is small
    – Typical size: 4KB
    – How many pages are needed for 4GB (32-bit)?
      • 4GB / 4KB = 1M pages
    – What is the required page table size?
      • Assume 1 page table entry (PTE) is 4 bytes
      • 1M * 4 bytes = 4MB
    – Note: this is per process. What if you have 100 processes? Or a 64-bit address space?
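The slide's arithmetic, spelled out:

```python
# Flat page table size for a 32-bit address space, per the slide:
# 4KB pages and 4-byte PTEs.

PAGE_SIZE = 4 * 1024                      # 4KB
VAS = 2 ** 32                             # 4GB virtual address space
PTE_SIZE = 4                              # bytes per page table entry

num_pages = VAS // PAGE_SIZE              # number of pages to cover the VAS
table_size = num_pages * PTE_SIZE         # bytes of page table per process

print(num_pages)                          # 1048576 (= 1M pages)
print(table_size // (1024 * 1024))        # 4 (MB per process)
print(100 * table_size // (1024 * 1024))  # 400 (MB for 100 processes)
```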

Slide 9

Paging

  • Advantages
    – No external fragmentation
  • Two main issues
    – Translation speed can be slow
      • Solution: TLB
    – Table size is big

Slide 10

Multi-level Paging

  • Two-level paging

Slide 11

Two Level Address Translation

[Diagram: the virtual address is split into a 1st-level index, a 2nd-level index, and an offset.
A base pointer locates the 1st-level page table; its entry points to a 2nd-level page table;
the 2nd-level entry supplies the frame #, which is combined with the offset to form the
physical address.]

Slide 12

Example

Virtual address format (24 bits): 1st level idx (8 bits) | 2nd level idx (8 bits) | offset (8 bits)

Vaddr: 0x0703FE   1st level idx: 07   2nd level idx: 03   Offset: FE
Vaddr: 0x072370   1st level idx: __   2nd level idx: __   Offset: __
Vaddr: 0x082370   1st level idx: __   2nd level idx: __   Offset: __
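The index extraction for this 24-bit format is plain bit shifting; as a sketch (checked only against the first address, whose answer the slide gives):

```python
# Split the slide's 24-bit virtual address into its three fields:
# 8-bit 1st-level index | 8-bit 2nd-level index | 8-bit offset.

def split(vaddr):
    first = (vaddr >> 16) & 0xFF    # top 8 bits: 1st-level index
    second = (vaddr >> 8) & 0xFF    # middle 8 bits: 2nd-level index
    offset = vaddr & 0xFF           # low 8 bits: offset within the page
    return first, second, offset

print([hex(x) for x in split(0x0703FE)])   # ['0x7', '0x3', '0xfe']
```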

Slide 13

Multi-level Paging

  • Can save table space
  • How, why? Unused regions of the virtual address space need no 2nd-level
    tables; their 1st-level entries are simply marked invalid.
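The saving can be made concrete with a small sketch. The 10/10/12 split of a 32-bit address and the 8MB working set below are illustrative assumptions, not taken from the slides:

```python
# Why two-level paging saves space: assume a 32-bit address split
# 10/10/12 (4KB pages, 4-byte PTEs) and a process that only uses 8MB
# of its 4GB address space.

PTE_SIZE = 4
ENTRIES = 1024                        # 2^10 entries per table at each level

flat_table = (2 ** 20) * PTE_SIZE     # flat table: 1M PTEs always present

used_pages = 8 * 1024 * 1024 // 4096  # 2048 pages actually mapped
second_level_tables = -(-used_pages // ENTRIES)     # ceiling division
two_level = (1 + second_level_tables) * ENTRIES * PTE_SIZE

print(flat_table // 1024)             # 4096 (KB, regardless of usage)
print(two_level // 1024)              # 12 (KB: 1st level + two 2nd-level tables)
```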

Slide 14

Summary

  • MMU
    – Virtual address → physical address
    – Various designs are possible, but...
  • Paged MMU
    – Memory is divided into fixed-sized pages
    – A page table stores the translations
    – No external fragmentation, i.e., efficient space utilization

Slide 15

Virtual Memory (VM)

  • Abstraction
    – 4GB linear address space for each process
  • Reality
    – 1GB of actual physical memory, shared with 20 other processes
  • Does each process use (1) the entire virtual memory (2) all the time?

Slide 16

Concepts to Learn

  • Demand paging

Slide 17

Demand Paging

  • Idea: instead of keeping a process's entire memory in physical memory
    all the time, keep only part of it there, on an on-demand basis

Slide 18

Page Table Entry (PTE)

  • PTE format (architecture specific)
    – Valid bit (V): whether the page is in memory
    – Modify bit (M): whether the page has been modified
    – Reference bit (R): whether the page has been accessed
    – Protection bits (P): readable, writable, executable

  | Page Frame No | V | M | R | P |
  | 20 bits | 1 | 1 | 1 | 2 bits |
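Decoding such an entry is bit masking. The exact bit positions below are a hypothetical layout chosen for illustration; real architectures place these fields differently:

```python
# Decode a 32-bit PTE, assuming a hypothetical layout: frame # in the
# top 20 bits, V/M/R flags and 2 protection bits in the low bits.

V_BIT, M_BIT, R_BIT = 4, 3, 2         # single-bit flags (assumed positions)
PROT_MASK = 0b11                      # 2 protection bits in bits 0-1 (assumed)

def decode(pte):
    return {
        "frame": pte >> 12,           # page frame number (top 20 bits)
        "valid": bool(pte & (1 << V_BIT)),
        "modified": bool(pte & (1 << M_BIT)),
        "referenced": bool(pte & (1 << R_BIT)),
        "prot": pte & PROT_MASK,
    }

pte = (0xabcde << 12) | (1 << V_BIT) | 0b01   # valid, frame 0xabcde, prot=01
print(decode(pte))
```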

Slide 19

Partial Memory Mapping

  • Not all pages are in memory (i.e., have valid=1)

Slide 20

Page Fault

  • When a virtual address cannot be translated to a physical address,
    the MMU generates a trap to the OS
  • Page fault handling procedure
    – Step 1: allocate a free page frame
    – Step 2: bring in the stored page from disk (if necessary)
    – Step 3: update the PTE (mapping and valid bit)
    – Step 4: restart the faulting instruction
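The four steps above can be sketched with the free-frame list, the page table, and the disk all modeled as plain Python structures (every name and value here is illustrative):

```python
# Sketch of the four page-fault handling steps.

free_frames = [7, 8, 9]                      # frame numbers not yet in use
disk = {0x42: b"page contents on disk"}      # page # -> stored data
memory = {}                                  # frame # -> data
page_table = {0x42: {"frame": None, "valid": False}}

def handle_page_fault(page):
    frame = free_frames.pop(0)               # step 1: allocate a free frame
    if page in disk:                         # step 2: read the page from disk
        memory[frame] = disk[page]           #         (if it has backing store)
    pte = page_table[page]
    pte["frame"], pte["valid"] = frame, True # step 3: update the PTE
    return frame                             # step 4: caller restarts the access

handle_page_fault(0x42)
print(page_table[0x42])                      # {'frame': 7, 'valid': True}
```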

Slide 21

Page Fault Handling

Slide 22

Demand Paging

Slide 23

Starting Up a Process

[Diagram: address space with Code, Data, Heap, and Stack regions; all pages initially unmapped]

Slide 24

Starting Up a Process

[Diagram: the CPU accesses the next instruction in the (unmapped) Code region]

Slide 25

Starting Up a Process

[Diagram: the access triggers a page fault]

Slide 26

Starting Up a Process

The OS: 1) allocates a free page frame; 2) loads the missed page from the disk
(executable file); 3) updates the page table entry

Slide 27

Starting Up a Process

Over time, more pages are mapped in as needed

Slide 28

Anonymous Page

  • An executable file contains the code (binary)
    – So code pages can be read in from the executable file
  • What about the heap?
    – No backing storage (unless it is swapped out later)
    – Simply map a new free page (an anonymous page) into the address space
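Python's standard mmap module can create exactly such a mapping; passing -1 as the file descriptor requests memory that is not backed by any file:

```python
# A heap page has no file behind it: an anonymous mapping.

import mmap

page = mmap.mmap(-1, 4096)        # one anonymous 4KB page, zero-filled
print(page[:4])                   # b'\x00\x00\x00\x00' (fresh pages are zeroed)

page[:5] = b"hello"               # write to it like a mutable byte buffer
print(page[:5])                   # b'hello'
page.close()
```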

Slide 29

Program Binary Sharing

  • Multiple instances of the same program
    – E.g., 10 bash shells

[Diagram: Bash #1 and Bash #2 both map the same bash text (code) pages in physical memory]