Lecture 19: Virtual Memory
Virtual Memory concept, virtual-physical translation, page table, TLB, Alpha 21264 memory hierarchy
Adapted from UC Berkeley CS252 S01

Virtual Memory
Virtual memory (VM) allows programs to have the illusion of a very large memory space
Make main memory (DRAM) act like a cache for secondary
storage (magnetic disk)
Otherwise, application programmers would have to move data in/out of
main memory themselves
That's how virtual memory was first proposed
Allow multiple processes to share the physical memory in a
multiprogramming environment
Provide protection between processes (compare Intel 8086:
without VM, applications can overwrite the OS kernel)
Facilitate program relocation in the physical memory space
Virtual memory vs. caches:
Cache block => page; cache miss => page fault
TLB does fast address translation; the OS handles the less frequent events:
page fault, TLB miss (when a software-managed TLB is used)
Miss penalty for virtual memory is very high => fully associative page placement
Have software determine the page's location on a miss
Address is divided into page number and page offset; page table and translation buffer (TLB) are used for address
translation
Q: why does full associativity not affect hit time?
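The page-number/page-offset split above can be sketched in a few lines; the page-table contents and bit widths here are hypothetical, with 4 KB pages for concreteness:

```python
# Minimal sketch of virtual-to-physical translation: split the virtual
# address into page number and offset, translate only the page number.

PAGE_OFFSET_BITS = 12                      # 4 KB pages => 12-bit offset
OFFSET_MASK = (1 << PAGE_OFFSET_BITS) - 1

# Hypothetical page table: virtual page number -> physical page number.
page_table = {0x00ABC: 0x00042}

def translate(vaddr):
    """Split the virtual address, look up the VPN, reattach the offset."""
    vpn = vaddr >> PAGE_OFFSET_BITS        # virtual page number
    offset = vaddr & OFFSET_MASK           # unchanged by translation
    if vpn not in page_table:
        raise KeyError("page fault")       # in reality the OS handles this
    return (page_table[vpn] << PAGE_OFFSET_BITS) | offset

paddr = translate((0x00ABC << PAGE_OFFSET_BITS) | 0x345)
print(hex(paddr))                          # -> 0x42345
```

Note the offset bits pass through untouched; only the page number goes through the table, which is why the lookup structure's associativity does not sit on the offset path.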
Page replacement: want to reduce the miss rate, and misses can be handled in
software
Least Recently Used (LRU) is typically used; a typical approximation of LRU:
Hardware sets reference bits; the OS records the reference bits and clears them periodically; the OS selects a page among the least-recently referenced for
replacement
Writing to disk is very expensive, so use a write-back strategy
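The reference-bit approximation of LRU described above is often implemented as a second-chance (clock) scan; this is a minimal sketch, with the "hardware sets the bit, OS clears it while scanning" division of labor folded into one class:

```python
from collections import deque

class SecondChance:
    """Approximate LRU with per-page reference bits (second-chance/clock).
    Hardware would set the bit on each access; the OS clears bits as it
    scans for a victim, evicting the first page whose bit is already 0."""

    def __init__(self, nframes):
        self.nframes = nframes
        self.frames = deque()              # resident pages, in scan order
        self.refbit = {}

    def access(self, page):
        if page in self.refbit:            # hit: "hardware" sets the bit
            self.refbit[page] = 1
            return False                   # no page fault
        if len(self.frames) == self.nframes:
            while True:                    # scan, clearing bits as we go
                victim = self.frames.popleft()
                if self.refbit[victim]:
                    self.refbit[victim] = 0
                    self.frames.append(victim)   # second chance
                else:
                    del self.refbit[victim]      # evict this page
                    break
        self.frames.append(page)
        self.refbit[page] = 1
        return True                        # page fault

mgr = SecondChance(2)
faults = sum(mgr.access(p) for p in [1, 2, 1, 3, 2])
print(faults)                              # -> 3
```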
[Address translation diagram: 48-bit virtual address = 36-bit virtual page number + 12-bit page offset; translation maps it to a 33-bit physical page number + the unchanged 12-bit page offset]
The page table resides in main memory; each entry holds a physical page number plus status/protection bits (valid,
read/write, etc.)
Page table size example: 48-bit virtual address, 41-bit physical address; 8 KB pages => 13-bit page offset; each page table entry is 8 bytes
Virtual page number = 48 - 13 = 35 bits; number of entries = number of pages = 2^35 = 32G
Total size = number of entries x bytes/entry = 2^35 x 8 B = 256 GB
Each process needs its own page table
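The sizing arithmetic above can be checked directly; the numbers are exactly those on the slide:

```python
# Flat page-table sizing for the slide's example machine.
va_bits = 48
offset_bits = 13                  # 8 KB pages
pte_bytes = 8

vpn_bits = va_bits - offset_bits  # 48 - 13 = 35
entries = 1 << vpn_bits           # 2**35 pages, one PTE each
total_bytes = entries * pte_bytes # 2**38 bytes

print(vpn_bits, total_bytes // 2**30)   # -> 35 256  (i.e. 256 GB per process)
```

A 256 GB flat table per process is clearly infeasible, which is what motivates the hierarchical (multi-level) page tables used by the Alpha, shown later.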
TLB organization: the tag holds a portion of the virtual address; the data portion holds the physical page number,
protection bits, and a valid bit
Usually fully associative or highly set associative; usually 64 or 128 entries
Typical TLB parameters:
TLB size: 32 to 4,096 entries
Block size: 1 or 2 page table entries (4 or 8 bytes each)
Hit time: 0.5 to 1 clock cycle
Miss penalty: 10 to 30 clock cycles (go to the page table)
Miss rate: 0.01% to 0.1%
Associativity: fully associative or set associative
Write policy: write back
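Plugging the typical parameters above into the usual average-access-time formula shows why such small miss rates still matter; the particular values chosen from the ranges are illustrative:

```python
# Effective translation time from the slide's typical TLB parameters,
# using average time = hit time + miss rate x miss penalty.
hit_time = 1.0        # clock cycles (slide: 0.5 to 1)
miss_penalty = 20.0   # cycles to walk the page table (slide: 10 to 30)
miss_rate = 0.001     # 0.1%, the pessimistic end of the slide's range

avg_cycles = hit_time + miss_rate * miss_penalty
print(avg_cycles)     # about 1.02 cycles per translation
```

Even at the worst-case miss rate the TLB keeps translation near one cycle, which is what lets it sit on the critical path of every memory access.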
Hardware may support a range of page sizes OS selects the best one(s) for its purpose
Cache addressing alternatives: virtually indexed, physically tagged; or physically indexed, physically tagged
Segmentation: variable-size memory regions;
segmentation assigns meanings to address-space ranges, and
paging is used within the address space of each segment
Alpha address-space segments:
kseg: reserved for the OS kernel, not under VM management; seg0: virtual addresses accessible to user processes; seg1: virtual addresses accessible to the OS kernel
Application view: sees a large, flat memory space; assumes fast access to every location; hardware and the OS hide the complexity
OS view: manages multiple process address spaces; reserves direct access to some portions of
memory; may access physical memory as well as its own virtual space
Hardware facilitates fast VM accesses, and the OS handles the infrequent events
[Diagram: Alpha hierarchical page table — each 10-bit index level selects among 1024 8-byte PTEs; 13-bit page offset; 28-bit physical page number]
User programs can only access memory through virtual
addresses
Each PTE contains protection bits to allow access control:
valid, user read enable, kernel read enable, user
write enable, kernel write enable
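A permission check against such PTE bits might look like the sketch below; the flag encoding is hypothetical (the real Alpha PTE lays these bits out differently), but the decision logic follows the slide:

```python
# Hypothetical encoding of Alpha-style PTE protection bits as flags.
VALID, URE, KRE, UWE, KWE = 1, 2, 4, 8, 16   # bit positions are illustrative

def check_access(pte, kernel_mode, is_write):
    """Return True if the access is permitted by the PTE's protection bits."""
    if not pte & VALID:
        return False                          # invalid entry -> page fault
    if kernel_mode:
        return bool(pte & (KWE if is_write else KRE))
    return bool(pte & (UWE if is_write else URE))

pte = VALID | URE | KRE | KWE                 # user read-only, kernel read/write
print(check_access(pte, kernel_mode=False, is_write=True))   # -> False
print(check_access(pte, kernel_mode=True, is_write=True))    # -> True
```

Because the bits live in the PTE, the check happens for free during translation: the TLB caches them alongside the physical page number.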
Alpha 21264 memory hierarchy:
L1 instruction cache: 2-way, 64KB, 64-byte blocks, virtually indexed and tagged;
uses way prediction and line prediction to speed instruction
fetching
Instruction prefetcher: stores four prefetched lines, accessed before the L2 cache
L1 data cache: 2-way, 64KB, 64-byte blocks, virtually indexed, physically tagged, write-through
Victim buffer: 8 entries, checked before L2 access
L2 unified cache: direct-mapped (1-way), 1MB to 16MB, off-chip, write-back;
allows critical-word-first transfer to the L1 cache; transfers 16B per
2.25ns
TLB: 128-entry, fully associative, separate for instructions and data
ES40 system: L1 miss penalty 22ns, L2 miss penalty 130ns; up to 32GB of memory; 256-bit memory buses (64-bit into the processor)
Read Section 5.13 for more details