Virtual Memory 1 Learning to Play Well With Others (Physical) - - PowerPoint PPT Presentation

SLIDE 1

Virtual Memory

SLIDE 2

Learning to Play Well With Others

[Figure: a single program's stack and heap in a 64KB physical memory (0x00000-0x10000)]

SLIDE 3

Learning to Play Well With Others

[Figure: the same 64KB physical memory as the program calls malloc(0x20000), a 128KB request against 64KB of physical memory]

SLIDE 5

Learning to Play Well With Others

[Figure: the stack and heap laid out in the 64KB physical memory (0x00000-0x10000)]

SLIDE 7

Learning to Play Well With Others

[Figure: two programs, each with its own stack and heap, packed into the same 64KB physical memory (0x00000-0x10000)]

SLIDE 9

Learning to Play Well With Others

[Figure: two programs, each with its own 64KB virtual address space (0x00000-0x10000) containing its stack and heap]

SLIDE 10

Learning to Play Well With Others

[Figure: two 64KB virtual address spaces, each with a stack and heap, mapped onto a single 64KB physical memory (0x00000-0x10000)]

SLIDE 12

Learning to Play Well With Others

[Figure: one program with a 4MB virtual address space (0x00000-0x400000) and another with a 240MB virtual address space (0x00000-0xF000000), both mapped onto a 64KB physical memory backed by a disk holding gigabytes]

SLIDE 13

Mapping

  • Virtual-to-physical mapping
  • Virtual --> “virtual address space”
  • Physical --> “physical address space”
  • We will break both address spaces up into “pages”
  • Typically 4KB in size, although sometimes larger
  • Use a “page table” to map between virtual pages and physical pages.
  • The processor generates “virtual” addresses
  • They are translated via “address translation” into physical addresses.

SLIDE 14

Implementing Virtual Memory

[Figure: a virtual address space running from 0 to 2^32 - 1, containing a stack and heap, mapped onto a physical address space running from 0 to 2^30 - 1 (or whatever); we need to keep track of this mapping...]

SLIDE 15

The Mapping Process

[Figure: the 32-bit virtual address splits into a virtual page number and a page offset of log2(page size) bits; the virtual-to-physical map translates the virtual page number into a physical page number, and the untranslated page offset is appended to form the 32-bit physical address]
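The split above can be sketched in a few lines. This assumes 4KB pages, and `page_table` here is a hypothetical flat dict from virtual page numbers to physical page numbers, standing in for the hardware map:

```python
PAGE_SIZE = 4096                    # 4KB pages
OFFSET_BITS = 12                    # log2(page size)

def translate(va, page_table):
    """Translate a 32-bit virtual address to a physical address."""
    vpn = va >> OFFSET_BITS         # virtual page number
    offset = va & (PAGE_SIZE - 1)   # page offset passes through untranslated
    ppn = page_table[vpn]           # the virtual-to-physical map
    return (ppn << OFFSET_BITS) | offset

# Map virtual page 1 to physical page 7:
pa = translate(0x1ABC, {1: 7})      # vpn = 1, offset = 0xABC
```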

SLIDE 16

Two Problems With VM

  • How do we store the map compactly?
  • How do we do translation quickly?

SLIDE 17

How Big is the map?

  • 32-bit address space:
  • 4GB of virtual addresses
  • 1M pages (at 4KB each)
  • Each entry is 4 bytes (a 32-bit physical address)
  • 4MB of map
  • 64-bit address space:
  • 16 exabytes of virtual addresses
  • 4 peta-pages
  • Each entry is 8 bytes
  • 32PB of map
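A quick back-of-the-envelope check of these numbers (binary units, 4KB pages):

```python
PAGE = 2**12                        # 4KB pages

# 32-bit address space
pages_32 = 2**32 // PAGE            # 1M pages
map_32 = pages_32 * 4               # 4-byte entries -> 4MB of map

# 64-bit address space
pages_64 = 2**64 // PAGE            # 4 peta-pages (2^52)
map_64 = pages_64 * 8               # 8-byte entries -> 32PB of map

print(map_32 // 2**20, "MB;", map_64 // 2**50, "PB")
```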

SLIDE 18

Shrinking the map

  • Only store the entries that matter (i.e., enough for your physical address space)
  • 64GB of physical memory on a 64-bit machine: 16M pages, 128MB of map
  • This is still pretty big.
  • Representing the map is now hard because we need a “sparse” representation.
  • The OS allocates stuff all over the place.
  • For security, convenience, or caching optimizations
  • For instance: the stack is at the “top” of memory; the heap is at the “bottom”.
  • How do you represent this “sparse” map?

SLIDE 19

Hierarchical Page Tables

  • Break the virtual page number into several pieces
  • If each piece has N bits, build a 2^N-ary tree
  • Only store the parts of the tree that contain valid pages
  • To do translation, walk down the tree, using the pieces to select which child to visit.
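A sketch of the walk, using the two-level layout with 10-bit pieces shown on the next slide. The nested dicts are a stand-in for the tree; keys that are absent are exactly the parts of the map that are not stored:

```python
def walk(va, l1_table):
    """Walk a two-level page table: a 32-bit virtual address splits into
    a 10-bit L1 index, a 10-bit L2 index, and a 12-bit page offset."""
    p1 = (va >> 22) & 0x3FF         # top 10 bits select a level-2 table
    p2 = (va >> 12) & 0x3FF         # next 10 bits select the PTE
    offset = va & 0xFFF
    l2_table = l1_table[p1]         # a missing key means no valid pages here
    ppn = l2_table[p2]
    return (ppn << 12) | offset

# Only one L2 table exists; the rest of the tree is simply not stored.
pt = {0: {3: 42}}                   # maps virtual page 3 -> physical page 42
pa = walk(0x3ABC, pt)               # p1 = 0, p2 = 3, offset = 0xABC
```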

SLIDE 20

Hierarchical Page Table

[Figure: a two-level hierarchical page table. The virtual address (in a processor register) splits into a 10-bit L1 index (p1), a 10-bit L2 index (p2), and a page offset. A root register points to the current level-1 page table; p1 selects a level-2 page table, and p2 selects a data page. Only the parts of the map that exist are stored.]

Adapted from Arvind and Krste’s MIT Course 6.823 Fall 05

SLIDE 21

Making Translation Fast

  • Address translation has to happen for every memory access
  • This potentially puts it squarely on the critical path for memory operations (which are already slow)

SLIDE 22

“Solution 1”: Use the Page Table

  • We could walk the page table on every memory access
  • Result: every load or store requires an additional 3-4 loads to walk the page table.
  • Unacceptable performance hit.

SLIDE 23

Solution 2: TLBs

  • We have a large pile of data (i.e., the page table) and we want to access it very quickly (i.e., in one clock cycle)
  • So, build a cache for the page mapping, but call it a “translation lookaside buffer” or “TLB”

SLIDE 24

TLBs

  • TLBs are small (maybe 128 entries), highly-associative (often fully-associative) caches for page table entries.
  • This raises the possibility of a TLB miss, which can be expensive
  • To make misses cheaper, there are “hardware page table walkers” -- specialized state machines that can load page table entries into the TLB without OS intervention
  • This means that the page table format is now part of the big-A Architecture.
  • Typically, the OS can disable the walker and implement its own format.
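A toy model of the hit/miss behavior described above. A real TLB is fully-associative hardware with its own replacement policy; the LRU policy and the `TLB` class here are just illustrative choices:

```python
from collections import OrderedDict

class TLB:
    """A tiny fully-associative TLB model with LRU replacement."""
    def __init__(self, entries=128):
        self.entries = entries
        self.map = OrderedDict()           # vpn -> ppn

    def lookup(self, vpn, page_table):
        if vpn in self.map:                # TLB hit: fast path
            self.map.move_to_end(vpn)
            return self.map[vpn]
        ppn = page_table[vpn]              # TLB miss: walk the page table
        self.map[vpn] = ppn                # (the "hardware page table walker")
        if len(self.map) > self.entries:
            self.map.popitem(last=False)   # evict the LRU entry
        return ppn

tlb = TLB(entries=2)
pt = {1: 7, 2: 8, 3: 9}
tlb.lookup(1, pt); tlb.lookup(2, pt)       # two misses fill the TLB
tlb.lookup(3, pt)                          # this miss evicts vpn 1
```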

SLIDE 25

Solution 3: Defer translating Accesses

  • If we translate before we go to the cache, we have a “physical cache”, since the cache works on physical addresses.
  • Critical path = TLB access time + cache access time
  • Alternately, we could translate after the cache
  • Translation is only required on a miss.
  • This is a “virtual cache”

[Figure: with a physical cache, the CPU's virtual address passes through the TLB to produce a physical address before the cache is accessed; with a virtual cache, the cache is accessed directly with the virtual address, and the TLB sits between the cache and primary memory]

SLIDE 27

The Danger Of Virtual Caches (1)

  • Process A is running. It issues a memory request to address 0x10000
  • It is a miss, and 0x10000 is brought into the virtual cache
  • A context switch occurs
  • Process B starts running. It issues a request to 0x10000
  • Will B get the right data?

No! We must flush virtual caches on a context switch.
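The failure, and the flush that fixes it, can be simulated directly. The per-process page tables and memory contents below are invented for illustration:

```python
# Two processes use the same virtual address for different physical data.
page_table = {"A": {0x10: 0x80}, "B": {0x10: 0x90}}   # per-process vpn -> ppn
memory = {0x80: "A's data", 0x90: "B's data"}         # ppn -> contents

vcache = {}                          # a virtual cache: indexed by vpn only

def load(process, vpn):
    if vpn not in vcache:                       # miss: translate, then fill
        vcache[vpn] = memory[page_table[process][vpn]]
    return vcache[vpn]                          # hit: no translation at all

load("A", 0x10)                      # A misses; the cache now holds A's data
stale = load("B", 0x10)              # B "hits" -- and reads A's data!
vcache.clear()                       # the fix: flush on every context switch
fresh = load("B", 0x10)              # now B misses and gets its own data
```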

SLIDE 28

The Danger Of Virtual Caches (2)

  • There is no rule that says that each virtual address maps to a different physical address.
  • When this occurs, it is called “aliasing”
  • Example: An alias exists in the cache
  • Store B to 0x1000
  • Now, a load from 0x2000 will return the wrong value

[Figure: the page table maps both virtual addresses 0x1000 and 0x2000 to physical address 0xfff0000, and both cache lines initially hold A; after a store of B to 0x1000, the cache line for 0x2000 still holds the stale value A]
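The same kind of toy model shows the alias problem; both virtual pages below map to one physical page (the numbers are invented for illustration):

```python
# The page table maps two different virtual pages to one physical page.
page_table = {0x1: 0xFFF0, 0x2: 0xFFF0}   # vpn -> ppn: an alias
memory = {0xFFF0: "A"}                     # the physical page holds "A"
vcache = {}                                # virtual cache, indexed by vpn

def load(vpn):
    if vpn not in vcache:
        vcache[vpn] = memory[page_table[vpn]]
    return vcache[vpn]

def store(vpn, value):
    vcache[vpn] = value                    # update this alias's cache line...
    memory[page_table[vpn]] = value        # ...and memory (write-through)

load(0x2)                                  # cache line for 0x2 holds "A"
store(0x1, "B")                            # store B through the alias 0x1
wrong = load(0x2)                          # hits the stale line: still "A"
```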

SLIDE 30

The Danger Of Virtual Caches (2)

  • Why are aliases useful?
  • Example: Copy on write
  • memcpy(A, B, 100000)
  • Adjusting the page table is much faster for large copies
  • The initial copy is free, and the OS will catch attempts to write to the copy, and do the actual copy lazily.
  • There are also system calls that let you do this arbitrarily.

[Figure: before, char *A points to “My Big Data” and char *B to an empty buffer in the physical address space; after a copy-on-write memcpy(A, B, 100000), both virtual buffers map to the same physical “My Big Data”, marked as an un-writeable copy. Two virtual addresses point to the same physical address.]
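A sketch of the copy-on-write idea. The `COWBuffer` class and its methods are invented for illustration; a real OS does this with page permissions and write faults:

```python
class COWBuffer:
    """Copy-on-write sketch: a 'copy' shares its pages until first write."""
    def __init__(self, pages):
        self.pages = pages                 # list of page contents

    def copy(self):
        # Share the page list outright: the initial copy is free.
        return COWBuffer(self.pages)

    def write(self, i, data):
        # The OS would catch this via a write fault on the shared page;
        # here we just make our own copy lazily, before the first write.
        self.pages = list(self.pages)      # private copy of the page list
        self.pages[i] = data

A = COWBuffer(["my", "big", "data"])
B = A.copy()                               # no data copied yet
B.write(0, "your")                         # the actual copy happens here
```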

SLIDE 31

Solution 4: Virtually Indexed, Physically Tagged

  • The cache index L is available without consulting the TLB ⇒ cache and TLB accesses can begin simultaneously
  • Critical path = max(cache time, TLB time)!!!
  • Tag comparison is made after both accesses are completed
  • Works if the size of one cache way ≤ page size, because then none of the cache index bits need to be translated (i.e., the index bits in physical and virtual addresses are the same)

[Figure: the virtual address splits into a VPN and a page offset; the L = C - b high bits of the offset form the “virtual index” into a direct-mapped cache of size 2^C = 2^(L+b). In parallel, the TLB translates the VPN into a PPN, which is compared with the cache's physical tag to determine a hit]

Key idea: page offset bits are not translated and thus can be presented to the cache immediately.
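The “way ≤ page size” condition can be checked numerically. The addresses below are arbitrary, chosen so that the VPN and PPN differ in their low bit:

```python
PAGE_BITS = 12                       # 4KB pages

def index_bits(addr, c_bits, b_bits):
    """Cache index: bits [b, c) of the address (direct-mapped, 2^c bytes)."""
    return (addr >> b_bits) & ((1 << (c_bits - b_bits)) - 1)

va, ppn = 0xDEAD5ABC, 0x320          # vpn and ppn differ in their low bit
pa = (ppn << PAGE_BITS) | (va & 0xFFF)

# Way = page size (2^12): the index lies entirely in the untranslated offset.
same_small = index_bits(va, 12, 6) == index_bits(pa, 12, 6)
# Way = two pages (2^13): bit 12 is translated, so the indexes can disagree.
same_large = index_bits(va, 13, 6) == index_bits(pa, 13, 6)
```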

SLIDES 32-41

[Figure sequence (animation): 1GB stack-and-heap virtual address spaces for successive processes are added, one per slide, to an 8GB physical memory; once enough processes are running, their combined demand exceeds the physical capacity]

SLIDE 42

Virtualizing Memory

§ We need to make it appear that there is more memory than there is in a system

– Allow many programs to be “running” or at least “ready to run” at once (mostly)

– Absorb memory leaks (sometimes... if you are programming in C or C++)

SLIDE 43

Page table with pages on disk

[Figure: the same two-level page table, where a level-2 entry may point to a page in primary memory, point to a page on disk, or be the PTE of a nonexistent page; the virtual address (in a processor register) still splits into a 10-bit L1 index (p1), a 10-bit L2 index (p2), and a page offset]

Adapted from Arvind and Krste’s MIT Course 6.823 Fall 05

SLIDE 44

The TLB With Disk

  • TLB entries always point to memory, never to disk
  • A page that currently lives on disk has no TLB entry; touching it causes a page fault, and the OS must bring the page into memory first.

SLIDE 45

The Value of Paging

  • Disks are really, really slow.
  • Paging is not very useful for expanding the active memory capacity of a system
  • It’s good for “coarse-grain context switching” between apps
  • And for dealing with memory leaks ;-)
  • As a result, fast systems don’t page.

SLIDE 46

The End


SLIDE 48

Other uses for VM

  • VM provides us a mechanism for adding “metadata” to different regions of memory.
  • The primary piece of metadata is the location of the data in physical RAM.
  • But we can support other bits of information as well
  • Backing memory to disk (the paging slides above)
  • Protection
  • Pages can be readable, writable, or executable
  • Pages can be cachable or un-cachable
  • Pages can be write-through or write-back.
  • Other tricks
  • Array bounds checking
  • Copy on write, etc.
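The read/write protection bit is visible from user space. As one example, Python's `mmap` module exposes it: a mapping created with `ACCESS_READ` lacks the writable bit, and an attempted store is rejected (with a `TypeError`, per the `mmap` documentation):

```python
import mmap

# An anonymous read-only mapping: the page's "writable" metadata bit is off.
ro = mmap.mmap(-1, mmap.PAGESIZE, access=mmap.ACCESS_READ)
try:
    ro[0] = 1                 # attempt to store to a read-only page
    denied = False
except TypeError:             # mmap refuses writes to ACCESS_READ maps
    denied = True
ro.close()
```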