Memory Management: What to do when coalescing fails (5H. Memory, 4/15/2018)



Outline
• 5H. Memory Compaction
• 6A. Swapping to secondary storage
• 5E. Dynamic Relocation
• 6B. Paging Memory Management Units
• 6C. Demand Paging
• 6D. Replacement Algorithms
• 6F. Optimizations
• 6H. Paging and Segmentation
• 6I. Virtual Memory and I/O

What to do when coalescing fails
• garbage collection is just another way to free
  – doesn't greatly help or hurt fragmentation
• ongoing activity can starve coalescing
  – chunks reallocated before neighbors become free
• we could stop accepting new allocations
  – convoy on memory manager would trash throughput
• we need a way to rearrange active memory
  – re-pack all processes in one end of memory
  – create one big chunk of free space at other end

Memory Compaction
• memory compaction moves a process
  – from where we originally loaded it to a new place
  – so we can compact the allocated memory
  – coalesce free space to cure external fragmentation
• but a program is full of addresses
  – conditional branches, subroutine calls
  – data addresses in the code, and in pointers
• we can't find/update all of these pointers
  – as we just saw with Garbage Collection

The Need for Relocation
[figure: processes P_C through P_F moved back and forth between memory and the swap device]

Why we swap
• make best use of a limited amount of memory
  – process can only execute if it is in memory
  – can't keep all processes in memory all the time
  – if it isn't READY, it doesn't need to be in memory
  – swap it out and make room for other processes
• improve CPU utilization
  – when there are no READY processes, CPU is idle
  – CPU idle time means reduced system throughput
  – more READY processes means better utilization

Pure Swapping
• each segment is contiguous
  – in memory, and on secondary storage
  – all in memory, or all on swap device
• swapping takes a great deal of time
  – transferring entire data (and text) segments
• swapping wastes a great deal of memory
  – processes seldom need the entire segment
• variable length memory/disk allocation
  – complex, expensive, external fragmentation
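The re-pack step above can be sketched as a small simulation. The memory size and chunk layout here are hypothetical, and the sketch deliberately shows the catch the next slides discuss: every moved chunk gets a new base address, so all pointers into it become wrong.

```python
MEM_SIZE = 1000  # hypothetical total memory size, in arbitrary units

def compact(chunks):
    """Re-pack allocated chunks at the low end of memory.

    chunks: list of (name, base, size) tuples for allocated regions.
    Returns (relocated_chunks, free_space): every chunk gets a new base,
    and all free space ends up as one big chunk at the other end.
    """
    new_base, relocated = 0, []
    for name, base, size in sorted(chunks, key=lambda c: c[1]):
        relocated.append((name, new_base, size))  # chunk moves: its base changes,
        new_base += size                          # so addresses into it are stale
    free = MEM_SIZE - new_base  # one contiguous free chunk remains
    return relocated, free
```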

Virtual Address Translation
[figure: the virtual address space as seen by the process (0x00000000–0xFFFFFFFF: shared code, private data, DLL 1, private stack) passes through an address translation unit (magical) to produce the physical address space as seen on the CPU/memory bus]

The Need for Dynamic Relocation
• there are a few reasons to move a process
  – needs a larger chunk of memory
  – swapped out, swapped back in to a new location
  – to compact fragmented free space
• all addresses in the program will be wrong
  – references in the code, pointers in the data
• it is not feasible to re-linkage-edit the program
  – new pointers have been created during run-time

Segment Relocation
• a natural unit of allocation and relocation
  – process address space made up of segments
  – each segment is contiguous w/no holes
• CPU has segment base registers
  – point to (physical memory) base of each segment
  – CPU automatically relocates all references
  – physical = virtual + base_seg
• OS uses these for virtual address translation
  – set base to region where segment is loaded
  – efficient: CPU can relocate every reference
  – transparent: any segment can move anywhere
[figure: code, data, aux, and stack base registers map each segment (code, data, stack, DLL) into physical memory]

Moving a Segment
• suppose we need to move the stack in physical memory
  – the virtual address of the stack doesn't change
  – we just change the value in the stack base register

Privacy and Protection
• confine process to its own address space
  – each segment also has a length/limit register
  – CPU verifies all offsets are within range
  – generates addressing exception if not
• protecting read-only segments
  – associate read/write access with each segment
  – CPU ensures integrity of read-only segments
• segmentation register update is privileged
  – only kernel-mode code can do this
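The base-and-limit scheme above can be sketched in a few lines. The segment names and register values here are made up for illustration; a real MMU does this in hardware on every reference.

```python
# hypothetical segment table: name -> (base register, limit register)
segments = {
    "code":  (0x40000, 0x10000),
    "data":  (0x80000, 0x20000),
    "stack": (0xF0000, 0x08000),
}

class AddressingException(Exception):
    """Raised when an offset is outside the segment's limit."""

def seg_translate(seg, offset):
    base, limit = segments[seg]
    if offset >= limit:          # CPU verifies all offsets are within range
        raise AddressingException((seg, hex(offset)))
    return base + offset         # physical = virtual + base_seg

def move_segment(seg, new_base):
    # moving a segment only updates the (privileged) base register;
    # virtual addresses used by the process do not change
    base, limit = segments[seg]
    segments[seg] = (new_base, limit)
```

After `move_segment("stack", ...)`, the same virtual offset translates to a new physical address, which is exactly why any segment can move anywhere transparently.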

Are Segments the Answer?
• a very natural unit of address space
  – variable length, contiguous data blobs
  – all-or-none with uniform r/w or r/o access
  – convenient/powerful virtual address abstraction
• but they are variable length
  – they require contiguous physical memory
  – ultimately leading to external fragmentation
  – requiring expensive swapping for compaction
… and in that moment he was enlightened …

Paged Address Translation
[figure: the process virtual address space (CODE, DATA, STACK) is carved into fixed-size pages, each mapped independently to a frame of physical memory]

Paging Memory Management Unit
• a virtual address is split into a page # and an offset
  – the offset within the page remains the same
  – the virtual page # is used as an index into the page table
  – the selected entry contains the physical page number
  – a valid bit is checked to ensure that this virtual page # is legal

Paging and Fragmentation
• a segment is implemented as a set of virtual pages
• internal fragmentation
  – averages only ½ page (half of the last one)
• external fragmentation
  – completely non-existent (we never carve up pages)

Paging Relocation Examples
[figure: a page table maps virtual pages 0–7 to frames 0C20, 0105, 00A1, invalid, 041F, invalid, 0D10, 0AC3; e.g. virtual address 0x00041C08 translates to physical 0x041F1C08 (offset 1C08 unchanged), while a reference to invalid page 5 causes a page fault]

Swapping is Wasteful
• process does not use all its pages all the time
  – code and data both exhibit reference locality
  – some code/data may seldom be used
• keeping all pages in memory wastes space
  – more space/process = fewer processes in memory
• swapping them all in and out wastes time
  – longer transfers, longer waits for disk
• it arbitrarily limits the size of a process
  – process must be smaller than available memory
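The MMU's lookup can be sketched directly from the relocation example's page table. The 16-bit page offset is an assumption chosen to match the example's 8-entry table and 32-bit addresses.

```python
PAGE_SHIFT = 16               # assumed: 16-bit offset within each page
PAGE_SIZE = 1 << PAGE_SHIFT

# page table from the relocation example: index -> physical frame number
# (None models a cleared valid bit / not-present entry)
page_table = [0x0C20, 0x0105, 0x00A1, None, 0x041F, None, 0x0D10, 0x0AC3]

class PageFault(Exception):
    """Trap raised when the valid bit of the selected entry is clear."""

def page_translate(vaddr):
    vpn = vaddr >> PAGE_SHIFT         # virtual page # indexes the page table
    offset = vaddr & (PAGE_SIZE - 1)  # offset within page is unchanged
    frame = page_table[vpn]
    if frame is None:                 # invalid page: trap to the OS
        raise PageFault(hex(vaddr))
    return (frame << PAGE_SHIFT) | offset
```

For example, 0x00041C08 is page 4 (frame 0x041F) plus offset 0x1C08, giving physical 0x041F1C08, while 0x00050000 hits invalid page 5 and faults.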

Loading Pages "On Demand"
• paging MMU supports not-present pages
  – CPU access of present pages proceeds normally
• accessing a not-present page generates a trap
  – operating system can process this "page fault"
  – recognize that it is a request for another page
  – read that page in and resume process execution
• entire process needn't be in memory to run
  – start each process with a subset of its pages
  – load additional pages as the program demands them

Page Fault Handling
• initialize page table entries to not present
• CPU faults when a not-present page is referenced
  1. trap forwarded to page fault handler
  2. determine which page, where it resides
  3. find and allocate a free page frame
  4. block process, schedule I/O to read page in
  5. update page table to point at newly read-in page
  6. back up user-mode PC to retry failed instruction
  7. unblock process, return to user-mode
• meanwhile, other processes can run

Demand Paging – advantages
• improved system performance
  – fewer in-memory pages per process
  – more processes in primary memory
  – more parallelism, better throughput
  – better response time for processes already in memory
  – less time required to page processes in and out
  – less disk I/O means reduced queuing delays
• fewer limitations on process size
  – process can be larger than physical memory
  – process can have huge (sparse) virtual space

Are Page Faults a Problem?
• page faults should not affect correctness
  – after fault is handled, desired page is in RAM
  – process runs again, and can now use that page
  – (assuming the OS properly saves/restores state)
• but programs might run very slowly
  – additional context switches waste available CPU
  – additional disk I/O wastes available throughput
  – processes are delayed waiting for needed pages
• we must minimize the number of page faults

Minimizing the Number of Page Faults
• there are two ways:
  – keep the "right" pages in memory
  – give a process more pages of memory
• how do we keep the "right" pages in memory?
  – we have no control over what pages we bring in
  – but we can decide which pages to evict
  – this is called the "replacement strategy"
• how many pages does a process need?
  – that depends on which process and when
  – this is called the process' "working set"

Belady's Optimal Algorithm
• Q: which page should we replace?
  – A: the one we won't need for the longest time
• why is this the right page?
  – it delays the next page fault as long as possible
  – minimum number of page faults per unit time
• how can we predict future references?
  – Belady cannot be implemented in a real system
  – but we can implement it for test data streams
  – we can compare other algorithms against it
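Because Belady's algorithm can be run over a recorded test reference stream, it makes a handy yardstick. A minimal sketch, assuming the whole reference string is known in advance:

```python
def belady_faults(refs, nframes):
    """Count page faults for Belady's optimal replacement.

    refs: full reference string (list of page numbers), known in advance.
    nframes: number of physical page frames available.
    On a fault with all frames full, evict the resident page whose next
    use lies farthest in the future.
    """
    refs = list(refs)
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                      # hit: nothing to do
        faults += 1
        if len(frames) < nframes:
            frames.append(page)           # free frame available
            continue

        def next_use(p):
            try:
                return refs.index(p, i + 1)   # position of p's next reference
            except ValueError:
                return float("inf")           # never used again: ideal victim
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults
```

Running it on the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 with 3 frames yields 7 faults, the minimum any replacement policy can achieve there; real algorithms like LRU can be scored against this floor.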
