  1. Main Memory: Address Translation (Chapter 12-17) CS 4410 Operating Systems

  2. Can’t We All Just Get Along?
  Physical Reality: different processes/threads share the same hardware → need to multiplex
  • CPU (temporal)
  • Memory (spatial and temporal)
  • Disk and devices (later)
  Why worry about memory sharing?
  • Complete working state of process and/or kernel is defined by its data (memory, registers, disk)
  • Don’t want different processes to have access to each other’s memory (protection)

  3. Aspects of Memory Multiplexing
  • Isolation: don’t want distinct process states colliding in physical memory (unintended overlap → chaos)
  • Sharing: want the option to overlap when desired (for efficiency and communication)
  • Virtualization: want to create the illusion of more resources than exist in the underlying physical system
  • Utilization: want the best use of this limited resource

  4. A Day in the Life of a Program
  [Figure: sum.c source files → Compiler (+ Assembler + Linker) → executable (with text and data sections) → Loader → “It’s alive!”: a running process with text at 0x00400000, data at 0x10000000, heap above data, and the stack growing down from 0xffffffff, plus PC and SP registers.]

  5. Virtual View of Process Memory
  [Figure: virtual address space from 0x00000000 to 0xffffffff, laid out bottom-to-top as text, data, heap, and stack, divided into numbered pages 0–7.]

  6. Where do we store virtual memory?
  Need to find a place where the physical memory of the process lives → keep track of a “free list” of available memory blocks (so-called “holes”)

  7. Dynamic Storage-Allocation Problem
  • First-fit: allocate the first hole that is big enough
  • Next-fit: like first-fit, but resume the search from where the previous search left off
  • Best-fit: allocate the smallest hole that is big enough; must search the entire free list, unless it is ordered by size
  – Produces the smallest leftover hole
  • Worst-fit: allocate the largest hole; must also search the entire free list
  – Produces the largest leftover hole
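The first-fit and best-fit policies above can be sketched over a free list of (start, size) holes. This is a minimal illustration, not the deck’s code; the hole list and request sizes are made-up values.

```python
# First-fit vs. best-fit over a free list of (start, size) holes.

def first_fit(holes, size):
    """Return the start of the first hole large enough, or None."""
    for i, (start, hole_size) in enumerate(holes):
        if hole_size >= size:
            # Carve the allocation out of the front of the hole.
            if hole_size == size:
                holes.pop(i)
            else:
                holes[i] = (start + size, hole_size - size)
            return start
    return None

def best_fit(holes, size):
    """Return the start of the smallest adequate hole, or None."""
    candidates = [(hole_size, i) for i, (_, hole_size) in enumerate(holes)
                  if hole_size >= size]
    if not candidates:
        return None
    _, i = min(candidates)            # smallest adequate hole -> smallest leftover
    start, hole_size = holes[i]
    if hole_size == size:
        holes.pop(i)
    else:
        holes[i] = (start + size, hole_size - size)
    return start

holes = [(0, 100), (200, 30), (300, 60)]
print(first_fit(list(holes), 50))     # first big-enough hole starts at 0
print(best_fit(list(holes), 50))      # smallest adequate hole starts at 300
```

Note how the two policies pick different holes for the same request: first-fit stops at the 100-byte hole, best-fit scans the whole list and picks the 60-byte one.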

  8. Fragmentation
  • Internal Fragmentation – allocated memory may be larger than requested memory; the size difference is memory internal to a partition that is not being used
  • External Fragmentation – total memory space exists to satisfy a request, but it is not contiguous

  9. How do we map virtual → physical?
  Having found the physical memory, how do we map virtual addresses to physical addresses?

  10. Early Days: Base and Limit Registers
  Base and Limit registers for each process:
  physical address = virtual address + base
  (segmentation fault if virtual address ≥ limit)
  [Figure: two processes, each occupying a contiguous region of physical memory delimited by its own base and limit.]

  11. Early Days: Base and Limit Registers
  Base and Limit registers for each process:
  physical address = virtual address + base
  (segmentation fault if virtual address ≥ limit)
  The virtual hole between heap and stack leads to significant internal fragmentation.
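The base-and-limit check above fits in a few lines. A minimal sketch; the base and limit values are made-up illustrative numbers, not from the slides.

```python
# Base-and-limit translation: physical = virtual + base,
# fault if the virtual address is at or beyond the limit.

def translate(virtual, base, limit):
    if virtual >= limit:              # out of bounds -> segmentation fault
        raise MemoryError("segmentation fault")
    return virtual + base

print(hex(translate(0x0200, base=0x5000, limit=0x1000)))  # 0x5200
```

The whole address space is one contiguous block, which is exactly why the empty region between heap and stack still costs physical memory.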

  12. Next: Segmentation
  • Base and Limit register for each segment: code, data/heap, stack
  physical address = virtual address − virtual start of segment + base
  [Figure: each segment (code, stack, data/heap) has its own base and limit into physical memory.]

  13. Next: Segmentation
  • Base and Limit register for each segment: code, data/heap, stack
  physical address = virtual address − virtual start of segment + base
  Physical holes between segments lead to significant external fragmentation.
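Segmentation applies the slide’s formula per segment. A sketch under invented values: the segment table entries (virtual starts, bases, limits) are made-up for illustration.

```python
# Per-segment base and limit:
# physical = virtual - segment's virtual start + segment's base.

# segment -> (virtual start, base, limit in bytes); invented values
segments = {
    "code":      (0x0000_0000, 0x10000, 0x4000),
    "data/heap": (0x1000_0000, 0x30000, 0x8000),
    "stack":     (0xF000_0000, 0x50000, 0x2000),
}

def translate(virtual, seg):
    vstart, base, limit = segments[seg]
    offset = virtual - vstart
    if not 0 <= offset < limit:       # outside the segment
        raise MemoryError("segmentation fault")
    return offset + base

print(hex(translate(0x1000_0040, "data/heap")))  # 0x30040
```

Since each segment is placed independently, the virtual hole between heap and stack no longer wastes physical memory; the cost moves to external fragmentation between segments.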

  14. TERMINOLOGY ALERT: Paged Translation
  • Page: virtual
  • Frame: physical
  Solves both internal and external fragmentation! (to a large extent)
  [Figure: process view (text, data, heap, stack as virtual pages 0–N) mapped in arbitrary order onto physical memory frames 0–M: STACK 0, HEAP 0, TEXT 1, HEAP 1, DATA 0, TEXT 0, STACK 1.]

  15. Paging Overview
  Divide:
  • Physical memory into fixed-sized blocks called frames
  • Virtual memory into blocks of the same size called pages
  Management:
  • Keep track of which pages are mapped to which frames
  • Keep track of all free frames
  Notice:
  • Not all pages of a process may be mapped to frames

  16. Address Translation, Conceptually
  [Figure: the processor issues a virtual address to the translation unit (“Who does this?”), which checks validity. If valid, the resulting physical address goes to physical memory and data comes back; if invalid, raise an exception.]

  17. Memory Management Unit (MMU)
  • Hardware device
  • Maps virtual to physical address (used to access data)
  User Process:
  • Deals with virtual addresses
  • Never sees the physical address
  Physical Memory:
  • Deals with physical addresses
  • Never sees the virtual address

  18. High-Level Address Translation
  The red cube is the 255th byte in page 2. Where is the red cube in physical memory?
  [Figure: process view (text, data, heap, stack as pages 0–N) mapped onto physical memory frames 0–M: STACK 0, HEAP 0, TEXT 1, HEAP 1, DATA 0, TEXT 0, STACK 1.]

  19. Virtual Address Components
  • Page number – upper bits (most significant bits)
  – Must be translated into a physical frame number
  • Page offset – lower bits (least significant bits)
  – Does not change in translation
  For a given logical address space of size 2^m and page size 2^n, the page number is the upper m − n bits and the offset is the lower n bits.
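The split above is a shift and a mask. A minimal sketch, assuming a 4 KB page size (n = 12 offset bits) for illustration:

```python
# Split a virtual address into (page number, offset).

OFFSET_BITS = 12                      # page size 2**12 = 4096 bytes
PAGE_SIZE = 1 << OFFSET_BITS

def split(virtual):
    page = virtual >> OFFSET_BITS         # upper bits: page number
    offset = virtual & (PAGE_SIZE - 1)    # lower bits: unchanged by translation
    return page, offset

print(split(0x20FF))   # page 2, offset 0xFF (byte 255 of page 2)
```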

  20. High-Level Address Translation
  Who keeps track of the mapping? → The Page Table
  [Figure: virtual memory (text, data, heap, stack) from 0x0000 to 0x6000 mapped through a page table to physical memory, with 0x1000-byte pages: page 0 → (unmapped), page 1 → frame 3, page 2 → frame 6, page 3 → frame 4, page 4 → frame 8. Example: where does virtual address 0x20FF land in physical memory?]
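The lookup the slide’s figure performs can be sketched as a one-level page-table walk. The table contents below are as reconstructed from the figure (page 2 → frame 6, etc.), with None marking an unmapped page.

```python
# One-level page-table lookup with 0x1000-byte (4 KB) pages.

PAGE_SIZE = 0x1000
page_table = [None, 3, 6, 4, 8]       # page -> frame (page 0 unmapped)

def translate(virtual):
    page, offset = divmod(virtual, PAGE_SIZE)
    frame = page_table[page]
    if frame is None:
        raise MemoryError("page fault: page %d not mapped" % page)
    return frame * PAGE_SIZE + offset

print(hex(translate(0x20FF)))   # page 2 -> frame 6, so 0x60FF
```

The offset (0xFF) is carried over unchanged; only the page number is replaced by a frame number.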

  21. Simple Page Table
  • The page table lives in memory
  • The page-table base register (PTBR) points to the page table
  • Saved/restored on context switch

  22. Leveraging Paging
  • Protection
  • Demand Loading
  • Copy-On-Write

  23. Full Page Table
  • Metadata about each frame: protection (R/W/X), Modified, Valid, etc.
  • The MMU enforces R/W/X protection (an illegal access throws a page fault)
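The protection check the MMU does in hardware can be sketched with bit flags. The PTE bit layout below is invented for illustration, not a real architecture’s format.

```python
# Simulated PTE protection bits and the MMU's check (invented layout).

PTE_VALID, PTE_R, PTE_W, PTE_X, PTE_MODIFIED = 0x1, 0x2, 0x4, 0x8, 0x10

def check_access(pte, want):
    """Raise a (simulated) page fault on an illegal access."""
    if not pte & PTE_VALID:
        raise MemoryError("page fault: invalid PTE")
    if not pte & want:
        raise MemoryError("page fault: protection violation")

code_pte = PTE_VALID | PTE_R | PTE_X   # readable and executable, not writable
check_access(code_pte, PTE_R)          # fine
try:
    check_access(code_pte, PTE_W)      # writing to a code page -> fault
except MemoryError as e:
    print(e)
```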

  24. Leveraging Paging
  • Protection
  • Demand Loading
  • Copy-On-Write

  25. Demand Loading
  • Page not mapped until it is used
  • Requires free frame allocation
  • What if there is no free frame???
  • May involve reading page contents from disk or over the network

  26. Leveraging Paging
  • Protection
  • Demand Loading
  • Copy-On-Write (or fork() for free)

  27. Copy on Write (COW)
  • P1 forks()
  • P2 created with
  – own page table
  – same translations
  • All pages marked COW (in Page Table)
  [Figure: P1’s and P2’s virtual address spaces (text, data, heap, stack) map to the same frames in physical memory, each mapping marked COW.]

  28. Option 1: fork, then keep executing
  Now one process tries to write to the stack (for example):
  • Page fault
  • Allocate new frame
  • Copy page
  • Both pages no longer COW
  [Figure: the faulting stack page is copied into a new frame; the other pages remain shared and marked COW.]
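The fault-handling steps above can be sketched end to end. Frames and page tables are plain Python structures invented for illustration; a real kernel does this on a hardware write fault.

```python
# Copy-on-write fault handling: share a frame until someone writes,
# then copy the frame and clear the COW mark.

frames = {7: bytearray(b"stack data")}   # frame number -> contents
free_frames = [8]

# page table entry: [frame, cow?]; after fork, both map page "stack" -> frame 7
p1_pt = {"stack": [7, True]}
p2_pt = {"stack": [7, True]}

def write(pt, page, offset, byte):
    frame, cow = pt[page]
    if cow:                                   # write fault on a COW page
        new = free_frames.pop(0)              # allocate a new frame
        frames[new] = bytearray(frames[frame])  # copy the page contents
        pt[page] = [new, False]               # remap; this copy is private now
        frame = new
    frames[frame][offset] = byte

write(p1_pt, "stack", 0, ord("S"))
print(frames[7], frames[8])   # P2's original frame unchanged; P1 got a copy
```

Only the writer pays for a copy; P2 keeps using frame 7 untouched (a real kernel would also clear P2’s COW mark once the frame is no longer shared).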

  29. Option 2: fork, then call exec
  Before P2 calls exec(): both address spaces still share all frames, marked COW.
  [Figure: P1’s and P2’s virtual address spaces (text, data, heap, stack) map to the same frames in physical memory.]

  30. Option 2: fork, then call exec
  After P2 calls exec():
  • Allocate new frames
  • Load in new pages
  • Pages no longer COW
  [Figure: P2’s virtual address space now maps to its own, newly loaded frames; P1 keeps the originals.]

  31. Problems with Paging
  Memory Consumption:
  • Internal Fragmentation
  • Make pages smaller? But then…
  • Page Table Space: consider a 48-bit address space, 2 KB page size, each PTE 8 bytes
  • How big is this page table?
  – Note: you need one for each process
  Performance: every data/instruction access requires two memory accesses:
  • One for the page table
  • One for the data/instruction
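The slide’s sizing question can be worked out directly: a flat table needs one PTE per virtual page, whether or not the page is used.

```python
# Flat page-table size for a 48-bit address space, 2 KB pages, 8-byte PTEs.

address_bits = 48
page_size = 2 * 1024                          # 2 KB = 2**11
pte_size = 8                                  # bytes per PTE

num_pages = 2**address_bits // page_size      # 2**37 pages
table_bytes = num_pages * pte_size            # 2**40 bytes = 1 TB

print(num_pages, table_bytes)                 # per process!
```

One terabyte of page table per process is why flat tables are untenable at this scale, motivating the multi-level tables that follow.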

  32. Optimal Page Size: P
  • Overhead due to internal fragmentation: P / 2
  • Overhead due to page table: #pages × PTE size, where #pages = average process memory size / P
  • Total overhead: (process size / P) × PTE size + (P / 2)
  • Optimize for P: set d(overhead) / dP = 0
  • Result: P = sqrt(2 × process size × PTE size)
  • Example: 1 MB process, 8-byte PTE: sqrt(2 × 2^20 × 2^3) = sqrt(2^24) = 2^12 = 4 KB
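The optimization above can be checked numerically: the total overhead f(P) = (size / P) × PTE + P / 2 really is smallest at P = sqrt(2 × size × PTE).

```python
# Verify the optimal-page-size result for the slide's example.
import math

size, pte = 2**20, 8                   # 1 MB process, 8-byte PTEs
p_opt = math.sqrt(2 * size * pte)
print(p_opt)                           # 4096.0, i.e. 4 KB

def overhead(p):
    return (size / p) * pte + p / 2

# Neighboring page sizes cost more, so this is a minimum.
print(overhead(2048), overhead(4096), overhead(8192))
```

At the optimum the two overhead terms are equal (2 KB each here), a typical feature of this kind of trade-off.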

  33. Address Translation
  • Paged Translation
  • Efficient Address Translation
  – Multi-Level Page Tables
  – Inverted Page Tables
  – TLBs

  34. Multi-Level Page Tables
  To reduce page table space, the virtual address is split into several indices and an offset (e.g., index 1 | index 2 | offset); each index selects an entry at one level of the tree, and the final level yields the frame number, which is combined with the offset to form the physical address.
  + Allocate only PTEs in use
  + Simple memory allocation
  − More lookups per memory reference

  35. Two-Level Paging Example
  32-bit machine, 1 KB page size
  • Logical address is divided into:
  – a page offset of 10 bits (1024 = 2^10)
  – a page number of 22 bits (32 − 10)
  • Since the page table is paged, the page number is further divided into (say):
  – a 12-bit first index
  – a 10-bit second index
  • Thus, a logical address is: | index 1 (12 bits) | index 2 (10 bits) | offset (10 bits) |
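The slide’s 12/10/10 split is again just shifts and masks:

```python
# Split a 32-bit address into (index 1, index 2, offset) for
# 1 KB pages: 12-bit first index, 10-bit second index, 10-bit offset.

def split(addr):
    offset = addr & 0x3FF              # low 10 bits
    index2 = (addr >> 10) & 0x3FF      # next 10 bits
    index1 = addr >> 20                # top 12 bits
    return index1, index2, offset

print(split(0xFFFFFFFF))   # (0xFFF, 0x3FF, 0x3FF)
```

Index 1 selects an entry in the top-level table, index 2 an entry in the selected second-level table, and the offset stays as-is.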
