  1. Memory Management. Disclaimer: some slides are adapted from book authors’ slides with permission

  2. Roadmap • CPU management – Process, thread, synchronization, scheduling • Memory management – Virtual memory • Disk management • Other topics

  3. Administrative • Project 2 is out – Due: 11/06

  4. Memory Management • Goals – Easy-to-use abstraction • Same virtual memory space for all processes – Isolation among processes • Don’t corrupt each other – Efficient use of capacity-limited physical memory • Don’t waste memory

  5. Concepts to Learn • Virtual address translation • Paging and TLB • Page table management • Swap

  6. Virtual Memory (VM) • Abstraction – 4GB linear address space for each process • Reality – 1GB of actual physical memory shared with 20 other processes • How?

  7. Virtual Memory • Hardware support – MMU (memory management unit) – TLB (translation lookaside buffer) • OS support – Manage MMU (sometimes TLB) – Determine address mapping • Alternatives – No VM: many real-time OSes (RTOS) don’t have VM

  8. Virtual Address [Diagram: processes A, B, and C each issue virtual addresses that the MMU maps onto physical memory]

  9. MMU • Hardware unit that translates virtual addresses to physical addresses [Diagram: the CPU issues a virtual address to the MMU, which sends the physical address to memory]

  10. A Simple MMU • BaseAddr: base register • Paddr = Vaddr + BaseAddr • Advantages – Fast • Disadvantages – No protection – Wasteful [Diagram: processes P1, P2, P3 placed in physical memory at base addresses such as 14000 and 28000]
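
A minimal C sketch of the translation on this slide (not from the slides; the type and function names are made up): every address the process issues is simply relocated by its base register.

    /* Base-register-only MMU: relocate every virtual address by the
     * process's base register. No limit check, hence no protection. */
    typedef unsigned long addr_t;

    addr_t translate_base_only(addr_t vaddr, addr_t base_addr)
    {
        return vaddr + base_addr;    /* Paddr = Vaddr + BaseAddr */
    }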

  11. A Better MMU • Base + Limit approach – If Vaddr > limit, then trap to report error – Else Paddr = Vaddr + BaseAddr

  12. A Better MMU • Base + Limit approach – If Vaddr > limit, then trap to report error – Else Paddr = Vaddr + BaseAddr • Advantages – Support protection – Support variable-size partitions • Disadvantages – Fragmentation [Diagram: variable-size partitions for P1, P2, P3 in physical memory]
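
A C sketch of the base + limit check described above (the names are illustrative, and it assumes valid virtual addresses run from 0 up to, but not including, the limit):

    #include <stdbool.h>

    typedef unsigned long addr_t;

    /* Base + limit MMU: out-of-range accesses trap instead of being
     * translated, which is what provides per-process protection. */
    bool translate_base_limit(addr_t vaddr, addr_t base, addr_t limit,
                              addr_t *paddr)
    {
        if (vaddr >= limit)
            return false;            /* trap: report protection error */
        *paddr = vaddr + base;       /* Paddr = Vaddr + BaseAddr */
        return true;
    }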

  13. Fragmentation • External fragmentation – total available memory space exists to satisfy a request, but it is not contiguous [Diagram: successive memory snapshots as processes P1–P5 are allocated and freed, leaving free holes that are not contiguous]

  14. Modern MMU • Paging approach – Divide physical memory into fixed-sized blocks called frames (e.g., 4KB each) – Divide logical memory into blocks of the same size called pages (page size = frame size) – Pages are mapped onto frames via a table → the page table

  15. Modern MMU • Paging hardware

  16. Modern MMU • Memory view

  17. Virtual Address Translation • Virtual address 0x12345678 splits into page # 0x12345 and offset 0x678 • The page table maps page # 0x12345 to frame # 0xabcde • Physical address = frame # followed by the offset = 0xabcde678
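
The worked example on this slide can be written out as bit manipulation. A sketch assuming 4KB pages (12-bit offset, so the page # is the upper 20 bits of a 32-bit address); the frame number is hard-coded here in place of a real page-table lookup:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT  12                        /* 4KB pages */
    #define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)

    int main(void)
    {
        uint32_t vaddr  = 0x12345678;
        uint32_t vpn    = vaddr >> PAGE_SHIFT;    /* page #: 0x12345 */
        uint32_t offset = vaddr & OFFSET_MASK;    /* offset: 0x678   */

        /* Pretend the page table maps page 0x12345 to frame 0xabcde. */
        uint32_t frame = 0xabcde;
        uint32_t paddr = (frame << PAGE_SHIFT) | offset;

        printf("page # 0x%x, offset 0x%x\n", vpn, offset);
        printf("paddr = 0x%x\n", paddr);          /* prints 0xabcde678 */
        return 0;
    }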

  18. Advantages of Paging • No external fragmentation – Efficient use of memory – Internal fragmentation (waste within a page) still exists

  19. Issues of Paging • Translation speed – Each load/store instruction requires a translation – Table is stored in memory – Memory is slow to access • ~100 CPU cycles to access DRAM

  20. Translation Lookaside Buffer (TLB) • Cache frequent address translations – So that the CPU doesn’t need to access the page table all the time – Much faster
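
To make the caching idea concrete, here is a toy C sketch of a small fully associative TLB lookup (the size, fields, and linear scan are illustrative; real TLBs compare all entries in parallel in hardware):

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_ENTRIES 16

    struct tlb_entry {
        uint32_t vpn;      /* virtual page number   */
        uint32_t frame;    /* physical frame number */
        bool     valid;
    };

    static struct tlb_entry tlb[TLB_ENTRIES];

    /* On a hit, return the cached frame number without touching the
     * page table; on a miss, the MMU would walk the page table in
     * memory and then refill one of these entries. */
    bool tlb_lookup(uint32_t vpn, uint32_t *frame)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *frame = tlb[i].frame;
                return true;
            }
        }
        return false;
    }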

  21. Issues of Paging • Page size – Small: minimizes wasted space, but requires a large table – Big: can waste lots of space, but the table size is small – Typical size: 4KB – How many pages are needed for 4GB (32-bit)? • 4GB/4KB = 1M pages – What is the required page table size? • Assume 1 page table entry (PTE) is 4 bytes • 1M * 4 bytes = 4MB – Btw, this is for each process. What if you have 100 processes? Or what if you have a 64-bit address space?
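
The arithmetic on this slide, written out as a small C program (the 4KB page size and 4-byte PTE size are the slide's assumptions):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t addr_space = 1ULL << 32;              /* 4GB address space */
        uint64_t page_size  = 4096;                    /* 4KB pages         */
        uint64_t pte_size   = 4;                       /* 4-byte PTEs       */

        uint64_t num_pages  = addr_space / page_size;  /* 1M pages (2^20)   */
        uint64_t table_size = num_pages * pte_size;    /* 4MB per process   */

        printf("%llu pages, %llu-byte page table per process\n",
               (unsigned long long)num_pages,
               (unsigned long long)table_size);
        return 0;
    }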

  22. Paging • Advantages – No external fragmentation • Two main issues – Translation speed can be slow • Addressed by the TLB – Table size is big

  23. Multi-level Paging • Two-level paging

  24. Two-Level Address Translation [Diagram: the virtual address is split into a 1st-level index, a 2nd-level index, and an offset; a base pointer locates the 1st-level page table, whose entry points to a 2nd-level page table, whose entry supplies the frame #; physical address = frame # followed by the offset]
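
A C sketch of the walk in this diagram. The 10/10/12-bit split of the 32-bit virtual address is an assumption (it matches classic 32-bit x86), and the in-memory tables are simplified to plain arrays of frame numbers:

    #include <stdint.h>

    #define L1_SHIFT    22        /* top 10 bits: 1st-level index   */
    #define L2_SHIFT    12        /* next 10 bits: 2nd-level index  */
    #define INDEX_MASK  0x3FFu    /* 10 bits                        */
    #define OFFSET_MASK 0xFFFu    /* low 12 bits: offset (4KB page) */

    /* l1_table[i] points to the 2nd-level table for region i;
     * each 2nd-level entry holds a frame number. */
    uint32_t walk_two_level(uint32_t **l1_table, uint32_t vaddr)
    {
        uint32_t l1_idx = (vaddr >> L1_SHIFT) & INDEX_MASK;
        uint32_t l2_idx = (vaddr >> L2_SHIFT) & INDEX_MASK;
        uint32_t offset = vaddr & OFFSET_MASK;

        uint32_t *l2_table = l1_table[l1_idx];   /* base ptr -> 1st level -> 2nd-level table */
        uint32_t  frame    = l2_table[l2_idx];   /* 2nd-level entry gives the frame #        */

        return (frame << L2_SHIFT) | offset;     /* physical address = frame # | offset      */
    }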

  25. Multi-level Paging • Can save table space • How, why?
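
One way to see the saving (a worked example under the same 32-bit, 4KB-page, 4-byte-PTE assumptions as slide 21, with each level indexed by 10 bits; not taken from the slides): a flat page table always costs 4MB per process, but with two levels only the 4KB first-level table plus the second-level tables for regions the process actually uses must exist. A process touching, say, 8MB of its address space needs the first-level table plus two 4KB second-level tables, roughly 12KB instead of 4MB, because second-level tables for unused regions are never allocated.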

  26. Summary • MMU – Virtual address → physical address – Various designs are possible, but paging is the modern approach • Paged MMU – Memory is divided into fixed-sized pages – Use a page table to store the translations – No external fragmentation: i.e., efficient space utilization
