
Memory Management



  1. Memory Management
     Overview
     • Basic memory management
     • Address Spaces
     • Virtual memory
     • Page replacement algorithms
     • Design issues for paging systems
     • Implementation issues
     • Segmentation

  2. Memory Management
     • Ideally programmers want memory that is
       – large
       – fast
       – non-volatile
     • Memory hierarchy
       – small amount of fast, expensive memory – cache
       – some medium-speed, medium-priced main memory
       – gigabytes of slow, cheap disk storage
     • Memory manager
       – handles the memory hierarchy
       – protects processes from each other

     Approaches
     • Single Process, Contiguous Memory
     • Multiple Processes, Contiguous Memory
     • Multiple Processes, “Discontiguous” Memory
     • Multiple Processes, Only Partially in Memory

  3. Basic Memory Management
     Single Process without Swapping or Paging
     • Three simple ways of organizing memory: an operating system with one user process

     Binding
     • If a program has the line int x; when is the address of x determined?
     • What are the choices?
       – At compile time
       – At load time
       – At run time
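A minimal illustration of the binding question, assuming a hosted C++ environment (this example is not from the slides): the compiler only emits a relocatable reference to x, and the numeric address is fixed at load time or run time, for instance by the loader or by address space layout randomization.

    #include <cstdio>

    int x;   // the compiler reserves storage but emits only a relocatable symbol

    int main() {
        // The value printed here is typically fixed only when the program is
        // loaded (or at run time under address space layout randomization);
        // two runs of the same binary may print different addresses.
        std::printf("&x = %p\n", static_cast<void*>(&x));
    }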

  4. Base / Limit Registers
     • Binding done at run time
     • Addresses are added to the base value to map to a physical address
     • Addresses larger than the limit value are an error
     • (A minimal translation sketch follows this slide.)

     Swapping
     • Memory allocation changes as
       – processes come into memory
       – processes leave memory
     • Shaded regions are unused memory
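A sketch of the base/limit check named above. The structure and field names are illustrative, not from the slides; a real MMU performs this in hardware on every memory reference.

    #include <cstdint>
    #include <stdexcept>

    struct BaseLimitMmu {
        uint32_t base;    // physical address where the process was placed
        uint32_t limit;   // size of the process's address space

        uint32_t translate(uint32_t vaddr) const {
            if (vaddr >= limit)              // beyond the limit register: error
                throw std::runtime_error("protection fault");
            return base + vaddr;             // relocate by adding the base register
        }
    };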

  5. Managing Free Memory
     • Assume a process being loaded can ask for any size “chunk” of memory needed.
     • We need to be able to find a chunk the right size.
     • How can we keep track of free chunks efficiently?

     Memory Management with Bit Maps
     • Part of memory with 5 processes, 3 holes
       – tick marks show allocation units
       – shaded regions are free
     • Corresponding bit map
     • Same information as a list
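A small sketch of bit-map bookkeeping, assuming one bit per allocation unit (0 = free, 1 = allocated); the names and sizes are illustrative.

    #include <vector>

    std::vector<bool> bitmap(1024, false);   // 1024 allocation units, all free

    // Return the index of the first run of 'units' consecutive free units,
    // or -1 if no such run exists.
    int find_free_run(int units) {
        int run = 0;
        for (int i = 0; i < static_cast<int>(bitmap.size()); ++i) {
            run = bitmap[i] ? 0 : run + 1;
            if (run == units)
                return i - units + 1;
        }
        return -1;
    }

Allocation would then set those bits to 1 and freeing would clear them again; searching for a long run of zeros is slow for large memories, which is one reason free lists are used instead.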

  6. Memory Management with Linked Lists
     • Four neighbor combinations for the terminating process X

     Order of Search for Free Memory
     • We can search for a large enough block of free memory starting from the beginning. That’s called first fit.
     • If we already skipped over the first N holes because they were too small, maybe it’s a waste of time to look there again. Try next fit.
     • Should we be more selective in our choice? After all, we’re just grabbing the first thing that works…
     • How does first (or next) fit affect fragmentation? Could we do better?
     • Can we find a hole that fits faster? What are the downsides?
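A minimal first-fit sketch over a singly linked list of holes; the node layout is an assumption for illustration, and a real allocator would also split the chosen hole and merge neighboring holes when memory is freed.

    #include <cstddef>

    struct Hole {
        std::size_t start;   // first address of the hole
        std::size_t size;    // length of the hole in bytes
        Hole*       next;    // next hole, in address order
    };

    // First fit: take the first hole that is large enough.
    Hole* first_fit(Hole* free_list, std::size_t request) {
        for (Hole* h = free_list; h != nullptr; h = h->next)
            if (h->size >= request)
                return h;
        return nullptr;      // no hole is large enough
    }

    // Next fit is the same loop, but it resumes from wherever the previous
    // search stopped instead of always starting at the head of the list.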

  7. The Contiguous Constraint
     • So far a process’s memory has been contiguous.
     • What if it didn’t have to be?
     • What problems would that help solve?
     • How would the hardware need to change?
     • What additional work would the OS have to do?

     It’s All Gotta Be in Memory (or does it?)
     • We have assumed that the entire process has to be in memory whenever it is running.
     • What if it didn’t have to be?
     • What problems would that help solve?
     • How would the hardware need to change?
     • What additional work would the OS have to do?

  8. Virtual Memory
     • The position and function of the MMU

     Paging
     • The relation between virtual addresses and physical memory addresses given by the page table
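A sketch of the virtual-to-physical mapping the MMU performs for 4 KB pages, assuming a flat, fully present page table for simplicity (names are illustrative).

    #include <cstdint>

    constexpr uint32_t OFFSET_BITS = 12;                 // 4 KB pages
    constexpr uint32_t PAGE_SIZE   = 1u << OFFSET_BITS;

    uint32_t translate(const uint32_t page_table[], uint32_t vaddr) {
        uint32_t vpn    = vaddr >> OFFSET_BITS;          // virtual page number
        uint32_t offset = vaddr & (PAGE_SIZE - 1);       // byte within the page
        uint32_t frame  = page_table[vpn];               // assumes the page is present
        return (frame << OFFSET_BITS) | offset;          // physical address
    }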

  9. Page Tables
     • Internal operation of the MMU with 16 4 KB pages

     Page Tables, 2-Level
     • 32-bit address with 2 page table fields
     • Two-level page tables: a top-level page table pointing to second-level page tables
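A sketch of the two-level lookup for a 32-bit address split into two 10-bit page table fields and a 12-bit offset, as in the slides' example; handling of absent second-level tables is omitted.

    #include <cstdint>

    uint32_t translate2(uint32_t* const* top_level, uint32_t vaddr) {
        uint32_t pt1    = (vaddr >> 22) & 0x3FF;   // index into the top-level table
        uint32_t pt2    = (vaddr >> 12) & 0x3FF;   // index into the second-level table
        uint32_t offset =  vaddr        & 0xFFF;   // byte within the 4 KB page

        const uint32_t* second_level = top_level[pt1];  // assumed present here
        uint32_t frame = second_level[pt2];
        return (frame << 12) | offset;
    }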

  10. Page Table Entry
     • Present/absent is also called Valid
     • Modified is also called Dirty
     • Referenced is also called Accessed
     • Why would caching be disabled?
     • (An illustrative flag layout is sketched after this slide.)

     Pentium PTE
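An illustrative, made-up set of page table entry flags matching the bits named above; real layouts such as the Pentium PTE assign their own bit positions.

    #include <cstdint>

    constexpr uint32_t PTE_PRESENT    = 1u << 0;   // also called Valid
    constexpr uint32_t PTE_MODIFIED   = 1u << 1;   // also called Dirty
    constexpr uint32_t PTE_REFERENCED = 1u << 2;   // also called Accessed
    constexpr uint32_t PTE_NOCACHE    = 1u << 3;   // e.g. for memory-mapped I/O

    // A dirty page must be written back to disk before its frame is reused;
    // a clean one can simply be overwritten.
    bool needs_writeback(uint32_t pte) {
        return (pte & PTE_PRESENT) && (pte & PTE_MODIFIED);
    }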

  11. TLBs – Translation Lookaside Buffers
     • A TLB to speed up paging

     Inverted Page Tables
     • Comparison of a traditional page table with an inverted page table

  12. Page Replacement Algorithms
     • Page fault forces a choice
       – If there are no free page frames, we have to make room for the incoming page
       – Which page should be removed?
     • A modified page frame must first be saved before being evicted
       – An unmodified page frame can just be overwritten
     • Better not to choose an often used page
       – Likely to be brought back in again soon

     Optimal Page Replacement Algorithm
     • Replace the page needed at the farthest point in the future
       – Optimal but unrealizable
     • Estimate by logging page use on previous runs of the process
       – although this is impractical
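The optimal algorithm can only be applied offline, but given a logged reference string it is easy to simulate: evict the resident page whose next use lies farthest in the future (or that is never used again). A minimal sketch with illustrative names:

    #include <cstddef>
    #include <vector>

    // 'resident' holds the pages currently in frames, 'refs' is the logged
    // reference string, and 'pos' is the position just after the current
    // reference. Returns the index in 'resident' of the page to evict.
    int opt_victim(const std::vector<int>& resident,
                   const std::vector<int>& refs, std::size_t pos) {
        int victim = 0;
        std::size_t farthest = 0;
        for (std::size_t i = 0; i < resident.size(); ++i) {
            std::size_t next = pos;
            while (next < refs.size() && refs[next] != resident[i])
                ++next;                       // next use, or refs.size() if never
            if (next > farthest) { farthest = next; victim = static_cast<int>(i); }
        }
        return victim;
    }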

  13. Not Recently Used (NRU) Page Replacement Algorithm
     • Each page has a Referenced bit and a Modified bit
       – bits are set when the page is referenced or modified
     • Pages are classified:
       1. not referenced, not modified
       2. not referenced, modified
       3. referenced, not modified
       4. referenced, modified
     • NRU removes a page at random from the lowest-numbered non-empty class (a sketch follows this slide)

     FIFO Page Replacement Algorithm
     • Maintain a linked list of all pages
       – in the order they came into memory
     • Page at the beginning of the list is replaced
     • Disadvantage
       – the page in memory the longest may be often used
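A sketch of NRU victim selection, using class = 2·R + M so that classes 0–3 correspond to the four categories above; the Page structure is illustrative.

    #include <cstdlib>
    #include <vector>

    struct Page { bool referenced; bool modified; };

    int nru_victim(const std::vector<Page>& pages) {
        for (int cls = 0; cls < 4; ++cls) {              // lowest class first
            std::vector<int> candidates;
            for (int i = 0; i < static_cast<int>(pages.size()); ++i)
                if (2 * pages[i].referenced + pages[i].modified == cls)
                    candidates.push_back(i);
            if (!candidates.empty())                      // pick one at random
                return candidates[std::rand() % candidates.size()];
        }
        return -1;                                        // no pages at all
    }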

  14. Second Chance Page Replacement Algorithm
     • Operation of second chance
       – pages sorted in FIFO order
       – page list when a fault occurs at time 20 and A has its R bit set (numbers above pages are loading times)

     The Clock Page Replacement Algorithm
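A sketch of the clock algorithm, which is second chance on a circular list: the hand sweeps forward clearing R bits, and the first page found with R already clear is evicted. Structure and variable names are illustrative.

    #include <vector>

    struct Frame { int page; bool referenced; };

    // 'hand' is the clock hand; it persists between calls.
    int clock_victim(std::vector<Frame>& frames, int& hand) {
        for (;;) {
            Frame& f = frames[hand];
            if (!f.referenced) {            // not recently used: evict this one
                int victim = hand;
                hand = (hand + 1) % static_cast<int>(frames.size());
                return victim;
            }
            f.referenced = false;           // give it a second chance
            hand = (hand + 1) % static_cast<int>(frames.size());
        }
    }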

  15. Least Recently Used (LRU)
     • Principle: assume a page used recently will be used again soon
       – throw out the page that has been unused for the longest time
     • Implementation
       – Keep a linked list of pages
         • most recently used at front, least at rear
         • update this list every memory reference!!
       – Or keep a time stamp with each PTE
         • choose the page with the oldest time stamp
         • again, this must be updated with every memory reference

     LRU (Another Hardware Solution)
     • LRU using a matrix – pages referenced in order 0,1,2,3,2,1,0,3,2,3
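A sketch of the n×n matrix method for n = 4 pages: referencing page k sets row k to all 1s and then clears column k, after which the row with the smallest binary value belongs to the least recently used page.

    #include <array>
    #include <bitset>

    constexpr int N = 4;
    std::array<std::bitset<N>, N> matrix;   // starts out all zeros

    void reference(int k) {
        matrix[k].set();                    // row k := all ones
        for (int row = 0; row < N; ++row)
            matrix[row].reset(k);           // column k := all zeros
    }

    int lru_page() {
        int victim = 0;
        for (int row = 1; row < N; ++row)
            if (matrix[row].to_ulong() < matrix[victim].to_ulong())
                victim = row;               // smallest row value = least recently used
        return victim;
    }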

  16. LRU Approximation?
     • How could we approximate LRU?
     • We can’t track every time a page is referenced, but we can sample the data.
     • How often? Once per clock tick, perhaps.
     • Update a counter for each page that has been referenced in the last clock tick.
     • Take the page with the lowest count, i.e. “Not Frequently Used” (NFU).
     • How well does this work?

     Simulating LRU in Software: Aging
     • The aging algorithm simulates LRU in software
     • Note 6 pages for 5 clock ticks, (a) – (e)
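A sketch of the aging step performed at every clock tick, assuming 8-bit counters (names are illustrative): shift each counter right, insert the R bit at the left end, then clear R; at the next fault the page with the smallest counter is evicted.

    #include <cstdint>
    #include <vector>

    struct Page { uint8_t counter; bool referenced; };

    void tick(std::vector<Page>& pages) {
        for (Page& p : pages) {
            p.counter = (p.counter >> 1) | (p.referenced ? 0x80 : 0x00);
            p.referenced = false;          // hardware sets it again on the next use
        }
    }

    int victim(const std::vector<Page>& pages) {
        int best = 0;
        for (int i = 1; i < static_cast<int>(pages.size()); ++i)
            if (pages[i].counter < pages[best].counter)
                best = i;
        return best;
    }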

  17. Thrashing
     • A program causing page faults every few instructions is said to be thrashing.
     • What causes thrashing?
     • If a process keeps accessing random new pages, it is hard to anticipate what it will use next.
     • Most programs exhibit temporal and spatial locality.
       – Temporal locality: if the process accessed a particular address, it is likely to do so again soon.
       – Spatial locality: if the process accessed a particular address, it is likely to access nearby addresses soon.
     • Processes that follow this principle tend not to thrash unless they have to fight for memory.

     Causing Thrashing within a Single Process

         const int ROWS = 10000;
         const int COLS = 1024;
         int arr[ROWS][COLS];

         int main() {
             for (int row = 0; row < ROWS; ++row)
                 for (int col = 0; col < COLS; ++col)
                     arr[row][col] = row * col;

             for (int col = 0; col < COLS; ++col)
                 for (int row = 0; row < ROWS; ++row)
                     arr[row][col] = row * col;
         }

     • The first nested loop demonstrates spatial locality: consecutive accesses fall on the same page.
     • The second touches a different page on every access and thrashes when memory is scarce.

  18. The Working Set Page Replacement Algorithm (1)
     • The working set is the set of pages used by the k most recent memory references
     • w(k,t) is the size of the working set at time t

     The Working Set Page Replacement Algorithm (2)
     • The working set algorithm
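A sketch of the scan the working set algorithm performs over the page frames at a fault, assuming a per-frame R bit, a time-of-last-use field, and a working-set window tau measured in virtual time; the names are illustrative.

    #include <vector>

    struct Frame { bool referenced; long time_of_last_use; };

    int ws_victim(std::vector<Frame>& frames, long current_virtual_time, long tau) {
        int oldest = -1;
        for (int i = 0; i < static_cast<int>(frames.size()); ++i) {
            Frame& f = frames[i];
            if (f.referenced) {
                // Used during this tick: it is in the working set; record the time.
                f.referenced = false;
                f.time_of_last_use = current_virtual_time;
            } else if (current_virtual_time - f.time_of_last_use > tau) {
                return i;                  // not referenced and outside the window: evict
            } else if (oldest < 0 ||
                       f.time_of_last_use < frames[oldest].time_of_last_use) {
                oldest = i;                // fallback: oldest unreferenced page
            }
        }
        // If every frame was referenced this tick, fall back to frame 0;
        // a real system would pick a page (preferably clean) at random.
        return oldest >= 0 ? oldest : 0;
    }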

  19. The WSClock Page Replacement Algorithm
     • Operation of the WSClock algorithm

     Review of Page Replacement Algorithms

  20. Modeling Page Replacement Algorithms
     Belady’s Anomaly
     • FIFO with 3 page frames
     • FIFO with 4 page frames
     • P’s mark the page references that cause page faults

     “Stack” Algorithms
     • State of the memory array, M, after each item in the reference string is processed
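A small simulation that reproduces Belady's anomaly with FIFO on the classic reference string 0 1 2 3 0 1 4 0 1 2 3 4: 9 faults with 3 page frames, but 10 with 4.

    #include <algorithm>
    #include <cstddef>
    #include <deque>
    #include <iostream>
    #include <vector>

    int fifo_faults(const std::vector<int>& refs, std::size_t frames) {
        std::deque<int> memory;     // front = oldest page in memory
        int faults = 0;
        for (int page : refs) {
            if (std::find(memory.begin(), memory.end(), page) == memory.end()) {
                ++faults;
                if (memory.size() == frames) memory.pop_front();   // FIFO eviction
                memory.push_back(page);
            }
        }
        return faults;
    }

    int main() {
        std::vector<int> refs = {0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4};
        std::cout << fifo_faults(refs, 3) << " faults with 3 frames\n";   // prints 9
        std::cout << fifo_faults(refs, 4) << " faults with 4 frames\n";   // prints 10
    }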

  21. The Distance String
     • Probability density functions for two hypothetical distance strings

     The Distance String
     • Computation of the page fault rate from the distance string
       – the C vector: C_k counts how often distance k occurs in the distance string
       – the F vector: F_m, the number of faults with m frames, is the sum of C_k over all k > m plus C_infinity

  22. Design Issues for Paging Systems
     Local versus Global Allocation Policies
     • Original configuration
     • Local page replacement
     • Global page replacement

     Page Fault Rate
     • Page fault rate as a function of the number of page frames assigned
     • Used to determine whether a process should be granted additional pages.

  23. Load Control
     • Despite good designs, the system may still thrash
     • When the PFF (page fault frequency) algorithm indicates
       – some processes need more memory
       – but no processes need less
     • Solution: reduce the number of processes competing for memory
       – swap one or more to disk, divide up the frames they held
       – reconsider the degree of multiprogramming

     Page Size (1)
     Small page size
     • Advantages
       – less internal fragmentation
       – better fit for various data structures, code sections
       – less unused program in memory
     • Disadvantages
       – programs need many pages, larger page tables

  24. Page Size (2)
     • Overhead due to the page table and internal fragmentation:

           overhead = s·e/p + p/2

       (the first term is page table space, the second is internal fragmentation loss)
     • Where
       – s = average process size in bytes
       – p = page size in bytes
       – e = size of a page table entry in bytes
     • Overhead is minimized when p = sqrt(2·s·e)

     Separate Instruction and Data Spaces
     • One address space
     • Separate I and D spaces
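As a worked example (with assumed numbers, not from the slides): for an average process size s = 1 MB and a page table entry size e = 8 bytes, the optimum is p = sqrt(2 · 2^20 · 8) = sqrt(2^24) = 4096 bytes, i.e. a 4 KB page.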

