Main Memory: Address Translation (Chapter 12-17) CS 4410 - - PowerPoint PPT Presentation



SLIDE 1

Main Memory: Address Translation

(Chapter 12-17)

CS 4410 Operating Systems

SLIDE 2

Can't We All Just Get Along?

Physical Reality: different processes/threads share the same hardware → need to multiplex:

  • CPU (temporal)
  • Memory (spatial)
  • Disk and devices (later)

Why worry about memory sharing?

  • The complete working state of a process and/or the kernel is defined by its data (memory, registers, disk)
  • We don't want different processes to have access to each other's memory (protection)

SLIDE 3

Aspects of Memory Multiplexing

Isolation

Don't want distinct process states to collide in physical memory (unintended overlap → chaos)

Sharing

Want the option to overlap when desired (for efficiency and communication)

Virtualization

Want to create the illusion of more resources than exist in the underlying physical system

Utilization

Want to make the best use of this limited resource

SLIDE 4

A Day in the Life of a Program

[Figure: a source file (sum.c) goes through the Compiler (+ Assembler + Linker) to produce the executable sum, whose .text and .data sections contain main and max. The Loader then brings it to life ("It's alive!") as a process (pid xxx): text at 0x00400000, data at 0x10000000, heap above data, stack growing down from 0xffffffff, with the PC pointing into text and the SP into the stack.]

sum.c (as on the slide):

  #include <stdio.h>
  int max = 10;
  int main() {
      int i;
      int sum = 0;
      add(max, &sum);
      printf("%d", sum);
      ...
  }

SLIDE 5

Virtual view of process memory

[Figure: a process's virtual address space from 0x00000000 to 0xffffffff, laid out as text, data, heap (growing up), and stack (growing down).]

SLIDE 6

Paged Translation

TERMINOLOGY ALERT: Page: virtual. Frame: physical.

[Figure: the process view (stack, text, data, heap) is carved into Virtual Page 0 … Virtual Page N; physical memory is carved into Frame 0 … Frame M. Pages such as STACK 0, TEXT 0, DATA 0, HEAP 0, HEAP 1, TEXT 1, and STACK 1 land in scattered, non-contiguous frames.]

No more external fragmentation!

SLIDE 7

Paging Overview

Divide:

  • Physical memory into fixed-sized blocks called frames
  • Virtual memory into blocks of the same size, called pages

Management:

  • Keep track of which pages are mapped to which frames
  • Keep track of all free frames

Notice:

  • Not all pages may be mapped to frames

SLIDE 8

Address Translation, Conceptually

[Figure: the processor issues a virtual address to a Translation box. If the translation is valid, a physical address goes to physical memory and data flows back to the processor; if invalid, an exception is raised.]

Who does this?

SLIDE 9

Memory Management Unit (MMU)

  • Hardware device
  • Maps virtual to physical addresses (used to access data)

User Process:

  • deals with virtual addresses
  • never sees the physical address

Physical Memory:

  • deals with physical addresses
  • never sees the virtual address

SLIDE 10

High-Level Address Translation

The red cube is the 255th byte in page 2. Where is the red cube in physical memory?

[Figure: the process view (stack, text, data, heap), divided into Page 0 … Page N, maps into physical memory divided into Frame 0 … Frame M, which holds STACK 0, TEXT 0, DATA 0, HEAP 0, HEAP 1, TEXT 1, and STACK 1 in scattered frames.]

SLIDE 11

Virtual Address Components

Page number – upper bits

  • Must be translated into a physical frame number

Page offset – lower bits

  • Does not change in translation

For a given logical address space of size 2^m and page size 2^n:

  page number | page offset
     m - n    |      n

SLIDE 12

High-Level Address Translation

[Figure: virtual memory (stack, text, data, heap) spanning 0x0000–0x5000 maps page by page into physical memory spanning 0x0000–0x6000; a small page table beside the figure records the page number → frame number mapping (the slide shows pages 1–5 mapping to frames 3, 6, 4, 8, and 5). Virtual address 0x20FF is shown translating to a physical address 0x????.]

Who keeps track of the mapping? → the Page Table

SLIDE 13

Simple Page Table

Lives in memory.

Page-table base register (PTBR):

  • Points to the page table
  • Saved/restored on context switch

[Figure: the PTBR pointing at the page table in memory.]

SLIDE 14

Leveraging Paging

  • Protection
  • Demand Loading
  • Copy-On-Write

SLIDE 15

Full Page Table

Metadata about each frame: Protection (R/W/X), Modified, Valid, etc.

The MMU enforces R/W/X protection (an illegal access raises a page fault).

[Figure: the PTBR pointing at the page table.]

SLIDE 16

Leveraging Paging

  • Protection
  • Demand Loading
  • Copy-On-Write

SLIDE 17

Demand Loading

  • Page not mapped until it is used
  • Requires free frame allocation
  • What if there is no free frame???
  • May involve reading page contents from disk or over the network

SLIDE 18

Leveraging Paging

  • Protection
  • Demand Loading
  • Copy-On-Write (or fork() for free)

SLIDE 19

Copy on Write (COW)

  • P1 calls fork()
  • P2 is created with
    • its own page table
    • the same translations
  • All pages marked COW (in the page table)

[Figure: P1's and P2's virtual address spaces (stack, text, data, heap) both map to the same frames in the physical address space, with every mapping marked COW.]

SLIDE 20

Option 1: fork, then keep executing

Now one process tries to write to the stack (for example):

  • Page fault
  • Allocate a new frame
  • Copy the page
  • Both pages no longer COW

[Figure: as before, except the faulting stack page now has its own copy in a freshly allocated frame; the remaining mappings are still COW.]

SLIDE 21

Option 2: fork, then call exec

Before P2 calls exec():

[Figure: P1's and P2's virtual address spaces still share every frame of the physical address space, all marked COW.]

SLIDE 22

Option 2: fork, then call exec

After P2 calls exec():

  • Allocate new frames
  • Load in new pages
  • Pages no longer COW

[Figure: P2's stack, text, and data now map to freshly allocated frames loaded with the new program; P1's mappings are unchanged.]

SLIDE 23

Problems with Paging

Memory consumption:

  • Internal fragmentation
  • Make pages smaller? But then…
  • Page table space: consider a 48-bit address space, 2KB page size, each PTE 8 bytes
    • How big is this page table?
    • Note: you need one for each process
    • How many pages in memory does it need?

Performance: every data/instruction access requires two memory accesses:

  • One for the page table
  • One for the data/instruction

SLIDE 24

Address Translation

  • Paged Translation
  • Efficient Address Translation
    • Multi-Level Page Tables
    • Inverted Page Tables
    • TLBs

SLIDE 25

Multi-Level Page Tables to reduce page table space

[Figure: the virtual address is split into index 1 | index 2 | offset (with more levels: Index 1 | Index 2 | Index 3 | Offset). The processor walks the Level 1, Level 2 (and Level 3) tables in physical memory; each entry holds a frame number and access bits, and the final frame is combined with the offset to form the physical address.]

+ Allocate only PTEs in use
+ Simple memory allocation
− More lookups per memory reference

SLIDE 26

Two-Level Paging Example

A 32-bit machine with 1KB pages:

  • The logical address is divided into:
    – a page offset of 10 bits (1024 = 2^10)
    – a page number of 22 bits (32 − 10)
  • Since the page table is paged, the page number is further divided into:
    – a 12-bit first index
    – a 10-bit second index
  • Thus, a logical address looks as follows:

  index 1 | index 2 | offset
     12   |   10    |   10
SLIDE 27

This one goes to three levels!

[Figure: the same walk, now through Level 1, Level 2, and Level 3 tables, driven by Index 1, Index 2, Index 3, and the offset.]

+ First level requires less contiguous memory
− Even more lookups per memory reference

SLIDE 28

Complete Page Table Entry (PTE)

  Valid | Protection R/W/X | Ref | Dirty | Index

Index is an index into:

  • frames: physical process memory, or the next-level page table
  • backing store: if the page was swapped out

Synonyms:

  • Valid bit == Present bit
  • Dirty bit == Modified bit
  • Referenced bit == Accessed bit

SLIDE 29

Address Translation

  • Paged Translation
  • Efficient Address Translation
    • Multi-Level Page Tables
    • Inverted Page Tables
    • TLBs

SLIDE 30

Inverted Page Table: Motivation

So many virtual pages… … comparatively few physical frames.

Traditional page tables:

  • map pages to frames
  • are numerous and sparse

[Figure: the virtual address spaces of P1–P5 all funnel into one much smaller physical address space.]

SLIDE 31

Inverted Page Table: Implementation

Implementation:

  • 1 page table for the entire system
  • 1 entry per frame in memory
  • Why don't we store the frame #?

[Figure: a virtual address (page # | offset) arrives along with the pid. The table is searched for the matching (page, pid) pair; the index of the matching entry is the frame number, which is combined with the offset to form the physical address. Not to scale! Page table << Memory.]

SLIDE 32

Inverted Page Table: Discussion

Tradeoffs:

  ↓ memory to store page tables
  ↑ time to search page tables

Solution: hashing

  • hash(page, pid) → PT entry (or chain of entries)
  • What about:
    • collisions…
    • sharing…

SLIDE 33

Address Translation

  • Paged Translation
  • Efficient Address Translation
    • Multi-Level Page Tables
    • Inverted Page Tables
    • TLBs

SLIDE 34

Translation Lookaside Buffer (TLB)

An associative cache of virtual-to-physical page translations.

[Figure: the virtual address (Page# | Offset) is compared against the TLB's entries (virtual page → page frame, plus access bits). A matching entry supplies the frame directly; otherwise a page table lookup is required. Frame + offset form the physical address.]

SLIDE 35

Address Translation with TLB

Access the TLB before you access memory.

[Figure: the processor sends the virtual address to the TLB. On a hit, the frame is combined with the offset to form the physical address, and data comes back from physical memory. On a miss, the page table is consulted: a valid entry supplies the frame; an invalid one raises an exception.]

Trick: access the TLB while you access the cache.

SLIDE 36

Address Translation Uses!

Process isolation

  • Keep a process from touching anyone else's memory, or the kernel's

Efficient inter-process communication

  • Shared regions of memory between processes

Shared code segments

  • Common libraries used by many different programs

Program initialization

  • Start running a program before it is entirely in memory

Dynamic memory allocation

  • Allocate and initialize stack/heap pages on demand

SLIDE 37

MORE Address Translation Uses!

Program debugging

  • Data breakpoints when an address is accessed

Memory-mapped files

  • Access file data using load/store instructions

Demand-paged virtual memory

  • Illusion of near-infinite memory, backed by disk or memory on other machines

Checkpointing/restart

  • Transparently save a copy of a process, without stopping the program while the save happens

Distributed shared memory

  • Illusion of memory that is shared between machines