  1. Memory Management
     Disclaimer: some slides are adopted from book authors’ slides with permission

  2. Recap: Page Replacement
     • On a page fault
       – Step 1: allocate a free page frame
         • If there’s a free frame, use it
         • If there’s no free frame, choose a victim frame and evict it to disk (if necessary) → swap-out
       – Step 2: bring in the stored page from disk (if necessary)
       – Step 3: update the PTE (mapping and valid bit)
       – Step 4: restart the faulting instruction
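A minimal sketch of these four steps for a tiny simulated machine, assuming a small frame pool and a flat page table; the names (owner, swap_in, swap_out) and the FIFO victim choice are illustrative, not a real kernel interface.

```c
#include <stdbool.h>

#define NFRAMES 4
#define NPAGES  64

struct pte { bool valid, modified; int frame; };

static struct pte page_table[NPAGES];
static int owner[NFRAMES] = { -1, -1, -1, -1 };  /* page held by each frame, -1 if free */
static int fifo_next;                            /* victim pointer for this sketch      */

void swap_out(int page);             /* write a dirty page to disk (not shown) */
void swap_in(int page, int frame);   /* read a page back from disk (not shown) */

void handle_page_fault(int page)
{
    int f = -1;
    for (int i = 0; i < NFRAMES; i++)            /* Step 1: look for a free frame */
        if (owner[i] < 0) { f = i; break; }

    if (f < 0) {                                 /* no free frame: evict a victim */
        f = fifo_next;
        fifo_next = (fifo_next + 1) % NFRAMES;
        struct pte *victim = &page_table[owner[f]];
        if (victim->modified)
            swap_out(owner[f]);                  /* swap-out only if the page is dirty */
        victim->valid = false;
    }

    swap_in(page, f);                            /* Step 2: bring the stored page in */
    page_table[page] = (struct pte){ .valid = true, .frame = f };  /* Step 3: update PTE */
    owner[f] = page;
    /* Step 4: return; the hardware re-executes the faulting instruction */
}
```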

  3. Recap: Page Replacement Policies
     • FIFO – Evict the oldest page first. Pros: fair; Cons: can throw out frequently used pages
     • Optimal – Evict the page that will not be used for the longest period. Pros: optimal; Cons: you need to know the future
     • Random – Randomly choose a page. Pros: simple; the TLB commonly uses this method. Cons: unpredictable
     • LRU – Look at the past history and choose the page that has not been used for the longest period. Pros: good performance; Cons: complex, requires h/w support
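As a concrete illustration of LRU, here is a minimal sketch that picks a victim using per-frame access timestamps; the frames[] array and the software "now" counter stand in for the hardware support mentioned above and are assumptions for illustration only.

```c
#define NFRAMES 4

struct frame { int page; unsigned long last_used; };

static struct frame frames[NFRAMES];
static unsigned long now;              /* bumped on every memory access */

static void touch(int f)               /* record an access to frame f */
{
    frames[f].last_used = ++now;
}

static int lru_pick_victim(void)       /* evict the least recently used frame */
{
    int victim = 0;
    for (int f = 1; f < NFRAMES; f++)
        if (frames[f].last_used < frames[victim].last_used)
            victim = f;
    return victim;
}
```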

  4. Recap: Page Table Entry (PTE)
     • PTE format (architecture specific):
         V (1 bit) | M (1 bit) | R (1 bit) | P (2 bits) | Page Frame No (20 bits)
       – Valid bit (V): whether the page is in memory
       – Modify bit (M): whether the page has been modified
       – Reference bit (R): whether the page has been accessed
       – Protection bits (P): readable, writable, executable
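The same layout can be written as a C bit-field for illustration. Packing the fields into a 32-bit word and the 7 padding bits are assumptions, since the slide's fields only add up to 25 bits; real layouts are architecture specific.

```c
#include <stdint.h>

struct pte {
    uint32_t valid      : 1;   /* V: page is resident in memory      */
    uint32_t modified   : 1;   /* M: page has been written (dirty)   */
    uint32_t referenced : 1;   /* R: page has been accessed          */
    uint32_t prot       : 2;   /* P: protection (read/write/execute) */
    uint32_t frame      : 20;  /* physical page frame number         */
    uint32_t unused     : 7;   /* padding up to 32 bits (assumption) */
};
```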

  5. Recap: Clock Algorithm
     • Key idea
       – Replace an old page, not the oldest page
     • Algorithm
       – Step 1: advance the pointer by one
       – Step 2: check the reference bit of the page:
         • 1 → used recently. Clear the bit and go to Step 1
         • 0 → not used recently. Select it as the victim. End.
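A minimal sketch of this loop over a small simulated frame table; frames[], the reference bits, and the hand pointer are simulated here rather than taken from real hardware or kernel structures.

```c
#define NFRAMES 4

struct frame { int page; int ref; };

static struct frame frames[NFRAMES];
static int hand;                       /* the clock pointer */

static int clock_pick_victim(void)
{
    for (;;) {
        hand = (hand + 1) % NFRAMES;   /* Step 1: advance the pointer      */
        if (frames[hand].ref) {        /* Step 2: check the reference bit  */
            frames[hand].ref = 0;      /* 1 → clear it, give second chance */
        } else {
            return hand;               /* 0 → this frame is the victim     */
        }
    }
}
```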

  6. Quiz
     • Complete the following with the FIFO replacement policy (3 page frames); mark X for a fault

         Reference   E D H B D E D A E B E
         Page #1     E E E B B B B A A A A
         Page #2       D D
         Page #3         H
         Fault? (X)  X X X

  7. Quiz (answer)
     • FIFO replacement policy, 3 page frames; X marks a fault, * marks a hit on the page already resident in that frame

         Reference   E D H B D E D A E B E
         Page #1     E E E B B B B A A A A
         Page #2       D D D * E E E * B B
         Page #3         H H H H D D D D E
         Fault? (X)  X X X X   X X X   X X
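For checking, here is a small self-contained FIFO simulation of the same reference string with 3 frames; it reports the 9 faults shown above.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char refs[] = "EDHBDEDAEBE";
    char frames[3] = { 0, 0, 0 };
    int next = 0, faults = 0;          /* next = index of the oldest frame */

    for (size_t i = 0; i < strlen(refs); i++) {
        int hit = 0;
        for (int f = 0; f < 3; f++)
            if (frames[f] == refs[i]) hit = 1;
        if (!hit) {                    /* fault: replace the oldest page */
            frames[next] = refs[i];
            next = (next + 1) % 3;
            faults++;
        }
        printf("%c: %s  [%c %c %c]\n", refs[i], hit ? "hit  " : "FAULT",
               frames[0] ? frames[0] : '-', frames[1] ? frames[1] : '-',
               frames[2] ? frames[2] : '-');
    }
    printf("total faults = %d\n", faults);   /* prints 9 */
    return 0;
}
```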

  8. Concepts to Learn
     • Thrashing
     • Memory-mapped I/O
     • Copy-on-Write (COW)
     • Memory allocator

  9. Recap: Performance of Demand Paging
     • Memory access time = 200 nanoseconds
     • Average page-fault service time = 8 milliseconds
     • EAT = (1 – p) x 200 + p x (8 milliseconds)
           = (1 – p) x 200 + p x 8,000,000 = 200 + p x 7,999,800
     • If one access out of 1,000 causes a page fault, then EAT = 8.2 microseconds → a slowdown by a factor of 40!!
     • If we want performance degradation < 10 percent:
       – 220 > 200 + 7,999,800 x p, i.e., 20 > 7,999,800 x p
       – p < 0.0000025
       – i.e., less than one page fault in every 400,000 memory accesses
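The arithmetic above can be reproduced with a few lines of C; this is just a sketch, with all constants taken straight from the slide (times in nanoseconds).

```c
#include <stdio.h>

int main(void)
{
    const double mem_ns   = 200.0;     /* memory access time            */
    const double fault_ns = 8e6;       /* page-fault service time, 8 ms */

    double p = 1.0 / 1000.0;           /* one fault per 1,000 accesses  */
    double eat = (1 - p) * mem_ns + p * fault_ns;
    printf("EAT = %.1f ns (slowdown ~%.0fx)\n", eat, eat / mem_ns);

    /* For < 10% degradation we need EAT < 220 ns, i.e. p < 20 / 7,999,800 */
    double p_max = (1.10 * mem_ns - mem_ns) / (fault_ns - mem_ns);
    printf("p must be < %.7f (about 1 fault per %.0f accesses)\n",
           p_max, 1.0 / p_max);
    return 0;
}
```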

  10. Thrashing
      • A process is busy swapping pages in and out
        – It doesn’t make much progress
        – Happens when a process does not have “enough” pages in memory
        – Very high page fault rate
        – Low CPU utilization (why?)
        – CPU-utilization-based admission control may then admit more programs to raise utilization → even more page faults

  11. Thrashing

  12. Recap: Program Binary Sharing
      • Multiple instances of the same program
        – E.g., 10 bash shells
      (figure: Bash #1 and Bash #2 both mapping the shared Bash text in physical memory)

  13. Memory Mapped I/O
      • Idea: map a file on disk onto the memory space

  14. Memory Mapped I/O
      • Benefit: you don’t need read()/write() system calls; just access data in the file directly via memory instructions
      • How does it work?
        – Just like demand paging of an executable file
        – What about writes?
          • Mark the modified (M) bit in the PTE
          • Write the modified pages back to the original file
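A minimal POSIX example of the idea using mmap(); the file name data.bin is a placeholder, and error handling is kept to the bare minimum.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    /* Map the whole file; loads/stores to p[] now go through the page cache. */
    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = 'X';                      /* a plain store marks the page dirty (M bit) */
    msync(p, st.st_size, MS_SYNC);   /* force write-back to the original file      */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```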

  15. Copy-on-Write (COW)
      • fork() creates a copy of a parent process
        – Copy all of its pages onto new page frames?
          • If the parent uses 1GB of memory, a fork() call would take a while
          • And if the child then immediately calls exec(), was it of any use to copy the 1GB of the parent process’s memory?

  16. Copy-on-Write
      • Better way: copy only the page table of the parent
        – The page table is much smaller (so the copy is faster)
        – Both parent and child point to exactly the same physical page frames
      (figure: parent and child page tables mapping the same frames)

  17. Copy-on-Write
      • What happens when the parent/child reads?
      • What happens when the parent/child writes?
        – Trouble!!!

  18. Page Table Entry (PTE)
      • PTE format (architecture specific):
          V (1 bit) | M (1 bit) | R (1 bit) | P (2 bits) | Page Frame No (20 bits)
        – Valid bit (V): whether the page is in memory
        – Modify bit (M): whether the page has been modified
        – Reference bit (R): whether the page has been accessed
        – Protection bits (P): readable, writable, executable

  19. Copy-on-Write
      • All shared pages are marked as read-only
      (figure: parent and child page tables with every entry marked RO, pointing to the same frames)

  20. Copy-on-Write
      • Upon a write, a page fault occurs; the OS copies the page to a new frame and maps it with an R/W protection setting
      (figure: after the fault, the writer's entry becomes RW and points to the new copy; the remaining entries stay RO)
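A small runnable illustration of the behavior described in slides 15-20: after fork(), parent and child share frames until one of them writes, at which point the writer gets its own private copy.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int *value = malloc(sizeof *value);
    *value = 42;

    pid_t pid = fork();          /* page table copied; frames shared, marked read-only */
    if (pid == 0) {
        *value = 99;             /* write → COW fault → child gets a private copy */
        printf("child : %d\n", *value);   /* 99 */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent: %d\n", *value);       /* still 42: unaffected by the child's write */
    return 0;
}
```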

  21. User-level Memory Allocation
      • When does a process actually get memory from the kernel?
        – On a page fault
        – A page (e.g., 4KB) is allocated
      • What does malloc() do?
        – Manages a process’s heap
        – Handles variable-size objects in the heap

  22. Kernel-level Memory Allocation
      • Page-level allocator
        – Page frame allocation/free (fixed size)
        – Users: page fault handler, kernel-memory allocator
      • Kernel-memory allocator (KMA)
        – Typical kernel object size << page size
          • File descriptor, inode, task_struct, …
        – KMA ≈ kernel-level malloc
        – In Linux: buddy allocator, SLAB

  23. Buddy Allocator
      • Allocates physically contiguous pages
        – Satisfies requests in units sized as a power of 2
        – Each request is rounded up to the next highest power of 2
        – When a smaller allocation is needed than is available, the current chunk is split into two buddies of the next-lower power of 2
        – Chunks can quickly expand/shrink across the free lists (e.g., 32KB, 16KB, 8KB, 4KB); see the sketch after the examples below

  24. Buddy Allocator
      • Example: assume a 256KB chunk is available and the kernel requests 21KB (rounded up to 32KB)
          256KB:                        Free
          128KB + 128KB:                Free, Free
          64KB + 64KB + 128KB:          Free, Free, Free
          32KB + 32KB + 64KB + 128KB:   Free, Free, Free, Free
          32KB + 32KB + 64KB + 128KB:   A, Free, Free, Free

  25. Buddy Allocator
      • Example: free A, then coalesce buddies step by step
          32KB + 32KB + 64KB + 128KB:   A, Free, Free, Free
          32KB + 32KB + 64KB + 128KB:   Free, Free, Free, Free
          64KB + 64KB + 128KB:          Free, Free, Free
          128KB + 128KB:                Free, Free
          256KB:                        Free
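A small self-contained sketch of the two calculations behind these examples: rounding a request up to a power of 2, and locating a block's buddy by flipping one address bit. This is illustrative arithmetic only, not the Linux buddy allocator itself.

```c
#include <stdio.h>

/* Smallest power of 2 >= size (e.g., 21KB -> 32KB). */
static unsigned long round_up_pow2(unsigned long size)
{
    unsigned long p = 1;
    while (p < size) p <<= 1;
    return p;
}

/* The buddy of the block at 'offset' with the given block size: the block
 * whose offset differs only in the bit corresponding to that size. */
static unsigned long buddy_of(unsigned long offset, unsigned long block_size)
{
    return offset ^ block_size;
}

int main(void)
{
    unsigned long req = 21 * 1024;
    unsigned long blk = round_up_pow2(req);
    printf("21KB request -> %luKB block\n", blk / 1024);          /* 32KB */

    /* The 32KB block at offset 0 and its buddy at offset 32KB can be
     * coalesced back into a 64KB block once both are free. */
    printf("buddy of offset 0 (32KB block) is at %luKB\n",
           buddy_of(0, blk) / 1024);
    return 0;
}
```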

  26. Virtual Memory Summary
      • MMU and address translation
      • Paging
      • Demand paging
      • Copy-on-write
      • Page replacement

  27. Quiz: Address Translation
      • Virtual address format (24 bits): 1st-level index (8 bits) | 2nd-level index (8 bits) | offset (8 bits)
      • Page table entry (8 bits): V (1 bit) | Frame # (3 bits) | Unused (4 bits)
      • Page-table base address = 0x100
      • Translate:
        – Vaddr 0x072370 → Paddr ???
        – Vaddr 0x082370 → Paddr ???
        – Vaddr 0x0703FE → Paddr 0x3FE
      • Memory dump (hex; only some bytes shown):
          Addr    +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +A +B +C +D +E +F
          0x000   31 ..
          0x010   ..
          0x020   41 ..
          0x100   00 01 01 00 01 ..
          0x200   ..

  28. Quiz: Address Translation (answer)
      • Vaddr 0x072370 → Paddr 0x470
      • Vaddr 0x082370 → invalid (its 1st-level PTE, at index 0x08, is not valid)
      • Vaddr 0x0703FE → Paddr 0x3FE
      • Virtual address format, PTE format, page-table base (0x100), and memory dump as on the previous slide
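A sketch of the two-level walk implied by the 8 + 8 + 8-bit address format above; pages and frames are 256 bytes because the offset is 8 bits. The PTE decoding helpers (which bit is V, where the frame number sits in the byte) are assumptions for illustration, and the mem[] array is a simulated physical memory, not a real machine interface.

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 256u

static uint8_t mem[0x10000];                          /* simulated physical memory */

static bool    pte_valid(uint8_t pte) { return pte & 0x01; }          /* assumed bit layout */
static uint8_t pte_frame(uint8_t pte) { return (pte >> 4) & 0x07; }   /* assumed bit layout */

/* Walk the two-level table rooted at l1_base; return false if any PTE is invalid. */
bool translate(uint32_t vaddr, uint32_t l1_base, uint32_t *paddr)
{
    uint32_t l1_idx = (vaddr >> 16) & 0xFF;   /* bits 23..16: 1st-level index */
    uint32_t l2_idx = (vaddr >> 8)  & 0xFF;   /* bits 15..8 : 2nd-level index */
    uint32_t offset =  vaddr        & 0xFF;   /* bits  7..0 : page offset     */

    uint8_t l1 = mem[l1_base + l1_idx];
    if (!pte_valid(l1)) return false;         /* e.g., the 0x082370 case      */

    uint8_t l2 = mem[pte_frame(l1) * PAGE_SIZE + l2_idx];
    if (!pte_valid(l2)) return false;

    *paddr = pte_frame(l2) * PAGE_SIZE + offset;
    return true;
}
```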
