Swapping (Operating Systems I)

  1. Swapping
     - Active processes use more physical memory than the system has.
     - Address binding can be fixed, or relocatable at runtime.
     - Swap a process (P1, P2) out of main memory to the backing store (swap space), and later back in.
     [Figure: OS and processes in main memory, with swap-out/swap-in arrows to the backing store (swap space).]

     Motivation
     - Consider a 100 KB process, a 1 MB/s disk, 8 ms seek: about 108 ms each way, x 2 = 216 ms per swap.
       - If swapping is used for context switching, want a large quantum!
     - Small processes swap faster.
     - Pending I/O (DMA): don't swap the process, or DMA into OS buffers instead.
     - Unix uses a swapping variant.
       - Each process has a "too large" address space.
       - Demand paging.

     Virtual Memory
     - Logical address space larger than physical memory: "virtual memory", kept on a special area of disk.
     - An abstraction for the programmer.
     - Performance ok? Much of an address space is rarely touched (error-handling code, maximum-size arrays).

     Demand Paging
     - Less I/O needed
     - Less memory needed
     - Faster response
     - More users
     - No pages in memory initially: pure demand paging.

     Paging Implementation
     [Figure: logical memory pages mapped through a page table, with a validation bit per entry, to frames in physical memory; invalid entries are paged out to the backing store.]
     (A sketch of the valid-bit check appears after this page.)
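
To make the validation bit concrete, here is a minimal sketch in C of a page-table lookup that faults on an invalid entry. The names (pte_t, translate, page_fault) and the sizes are illustrative assumptions, not from the slides; a real MMU performs this check in hardware and the fault handler would pick a frame and read the page from the backing store.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096   /* assumed page size for the sketch          */
    #define NUM_PAGES  256   /* assumed size of the logical address space */

    /* One page-table entry: frame number plus a valid ("v"/"i") bit. */
    typedef struct {
        uint32_t frame;   /* physical frame number, meaningful only if valid */
        bool     valid;   /* true = page in memory, false = page fault       */
    } pte_t;

    static pte_t page_table[NUM_PAGES];

    /* Stub fault handler: a real OS would choose a frame, swap the page in,
     * and only then mark the entry valid. */
    static void page_fault(uint32_t page)
    {
        page_table[page].frame = 0;      /* placeholder frame choice */
        page_table[page].valid = true;
    }

    /* Translate a logical address to a physical address, faulting if needed. */
    static uint32_t translate(uint32_t logical)
    {
        uint32_t page   = logical / PAGE_SIZE;
        uint32_t offset = logical % PAGE_SIZE;

        if (!page_table[page].valid)
            page_fault(page);            /* valid bit becomes 1 here */

        return page_table[page].frame * PAGE_SIZE + offset;
    }

    int main(void)
    {
        /* First access to page 5 faults; the second finds valid = 1. */
        printf("phys = %u\n", (unsigned)translate(5 * PAGE_SIZE + 42));
        printf("phys = %u\n", (unsigned)translate(5 * PAGE_SIZE + 42));
        return 0;
    }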

  2. Page Fault
     - Page not in memory: the reference traps to the OS => page fault.
     - OS looks in the table:
       - invalid reference? => abort
       - valid but not in memory? => bring it in
     - Get an empty frame (from the free list).
     - Swap the page into the frame.
     - Reset the tables (valid bit = 1).
     - Restart the instruction.

     Performance of Demand Paging
     - Page fault rate p, 0 <= p <= 1.0 (p = 0: no page faults; p = 1: every access faults).
     - Effective Access Time (EAT) = (1 - p) x (memory access) + p x (page fault overhead)
     - Page fault overhead = swap page out + swap page in + restart

     Performance Example
     - memory access time = 100 nanoseconds
     - page fault overhead = 25 msec
     - page fault rate = 1/1000
     - EAT = (1 - p) x 100 + p x 25,000,000 ns
           = 100 + 24,999,900 x p
           = 100 + 24,999,900 x 1/1000, about 25 microseconds!
     - Want less than 10% degradation:
       110 > 100 + 24,999,900 x p
       10 > 24,999,900 x p
       p < 0.0000004, i.e. less than 1 fault in 2,500,000 accesses!
     (A worked version of this calculation follows this page.)

     Page Replacement
     - Page fault, but no free frames. What now?
       - terminate the user process (ugh!)
       - swap out an entire process (reduces the degree of multiprogramming)
       - replace some other page with the needed page
     - Page replacement:
       - if there is a free frame, use it
       - otherwise use an algorithm to select a victim frame
       - write the victim page to disk, updating the tables
       - read in the new page
       - restart the process
     - "Dirty" bit: skip the page-out when the victim has not been modified.
     [Figure: a victim frame is chosen in physical memory; its page-table entry becomes invalid and the new page's entry becomes valid.]

     Page Replacement Algorithms
     - Every system has its own.
     - Want the lowest page-fault rate.
     - Evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults.
     - Example: 1,2,3,4,1,2,5,1,2,3,4,5
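
A minimal sketch of the effective-access-time arithmetic from the Performance Example above, using the slide's numbers (100 ns memory access, 25 ms fault overhead, p = 1/1000). The function name and the choice to work in nanoseconds are mine, for illustration only.

    #include <stdio.h>

    /* EAT = (1 - p) * memory_access + p * fault_overhead, all in ns. */
    static double eat_ns(double p, double mem_ns, double fault_ns)
    {
        return (1.0 - p) * mem_ns + p * fault_ns;
    }

    int main(void)
    {
        double mem_ns   = 100.0;   /* 100 ns memory access        */
        double fault_ns = 25e6;    /* 25 ms fault overhead, in ns */

        /* Slide example: p = 1/1000 gives roughly 25 microseconds. */
        printf("EAT at p = 1/1000: %.1f ns\n",
               eat_ns(1.0 / 1000.0, mem_ns, fault_ns));

        /* For under 10% degradation we need EAT < 110 ns, i.e.
         * p < (110 - 100) / (25,000,000 - 100): about 1 fault per
         * 2,500,000 accesses. */
        printf("max p for 10%% degradation: %.9f\n",
               (110.0 - mem_ns) / (fault_ns - mem_ns));

        return 0;
    }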

  3. First-In-First-Out (FIFO)
     - Reference string: 1,2,3,4,1,2,5,1,2,3,4,5
     - 3 frames per process: 9 page faults
     - 4 frames per process: 10 page faults!
     - Belady's Anomaly: more frames can mean more faults.

     Optimal
     - Replace the page that will not be used for the longest period of time.
     - Reference string: 1,2,3,4,1,2,5,1,2,3,4,5
     - 4 frames per process: 6 page faults
     - How do we know the future? We can't; use optimal as a benchmark.

     Least Recently Used (LRU)
     - Replace the page that has not been used for the longest period of time.
     - Reference string: 1,2,3,4,1,2,5,1,2,3,4,5
     - 4 frames per process: 8 page faults
     - No Belady's Anomaly: LRU is a "stack" algorithm, so the pages kept with N frames are a subset of those kept with N+1.
     (A small simulation of FIFO vs. LRU on this reference string follows this page.)

     LRU Implementation
     - Counter implementation
       - every page has a counter; every time the page is referenced, copy the clock into the counter
       - when a page must be replaced, compare the counters to find the oldest
     - Stack implementation
       - keep a stack of page numbers
       - on a reference, move the page to the top
       - no search needed for replacement

     LRU Approximations
     - LRU is good, but full hardware support is expensive.
     - Some hardware support via a reference bit:
       - with each page, initially 0
       - when the page is referenced, set it to 1
       - replace a page whose bit is 0 (no ordering among them)
       - enhance by keeping 8 bits and shifting (approximates LRU)

     Second-Chance
     - FIFO replacement, but ...
       - take the first page in FIFO order
       - look at its reference bit:
         - bit == 0: replace it
         - bit == 1: set the bit to 0 and move to the next page in FIFO order
     - A page referenced often enough is never replaced.
     - Implement with a circular queue.
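
A small C simulation, assuming nothing beyond the slides' reference string, that counts faults for FIFO and LRU. It reproduces the numbers quoted above (FIFO: 9 faults with 3 frames, 10 with 4 frames; LRU: 8 faults with 4 frames); the helper names are my own.

    #include <stdio.h>

    /* Reference string from the slides. */
    static const int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    static const int nrefs  = sizeof(refs) / sizeof(refs[0]);

    /* Page faults under FIFO replacement with nframes frames. */
    static int fifo_faults(int nframes)
    {
        int frames[8], oldest = 0, used = 0, faults = 0;

        for (int i = 0; i < nrefs; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (hit) continue;

            faults++;
            if (used < nframes) {
                frames[used++] = refs[i];        /* free frame available   */
            } else {
                frames[oldest] = refs[i];        /* evict the oldest page  */
                oldest = (oldest + 1) % nframes;
            }
        }
        return faults;
    }

    /* Page faults under LRU replacement with nframes frames. */
    static int lru_faults(int nframes)
    {
        int frames[8], last_use[8], used = 0, faults = 0;

        for (int i = 0; i < nrefs; i++) {
            int hit = -1;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) { hit = j; break; }
            if (hit >= 0) { last_use[hit] = i; continue; }

            faults++;
            if (used < nframes) {
                frames[used] = refs[i];
                last_use[used++] = i;
            } else {
                int victim = 0;                  /* least recently used slot */
                for (int j = 1; j < nframes; j++)
                    if (last_use[j] < last_use[victim]) victim = j;
                frames[victim] = refs[i];
                last_use[victim] = i;
            }
        }
        return faults;
    }

    int main(void)
    {
        /* Slide results: FIFO 3 frames = 9, FIFO 4 frames = 10 (Belady's
         * Anomaly), LRU 4 frames = 8. */
        printf("FIFO, 3 frames: %d faults\n", fifo_faults(3));
        printf("FIFO, 4 frames: %d faults\n", fifo_faults(4));
        printf("LRU,  4 frames: %d faults\n", lru_faults(4));
        return 0;
    }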

  4. Second-Chance
     [Figure: circular queue of pages with reference bits, before (a) and after (b) a sweep; pages with bit 1 have the bit cleared and are skipped, and the next page with bit 0 becomes the victim.]
     - If all reference bits are 1, second-chance degenerates to FIFO.
     (A sketch of this victim selection follows this page.)

     Enhanced Second-Chance
     - 2 bits: a reference bit and a modify bit.
     - (0,0) neither recently used nor modified: best page to replace.
     - (0,1) not recently used but modified: needs write-out.
     - (1,0) recently used but clean: probably used again soon.
     - (1,1) recently used and modified: used again soon, and needs write-out.
     - Circular queue in each class (Macintosh).

     Counting Algorithms
     - Keep a counter of the number of references to each page.
       - LFU: replace the page with the smallest count.
         - a page that does all its work at the beginning is never replaced
         - decay the counts by shifting
       - MFU: replace the page with the largest count; the one with the smallest count was probably just brought in and is about to be used.
     - Not too common (expensive) and not too good.

     Page Buffering
     - Keep a pool of frames:
       - start the new page immediately, before writing the old one out; write it out when the system is idle
       - keep a list of modified pages; write them out when the system is idle
       - keep a pool of free frames but remember their contents; on a page fault, check the pool first.

     Allocation of Frames
     - How many (fixed) frames per process?
     - Two allocation schemes:
       - fixed allocation
       - priority allocation

     Fixed Allocation
     - Equal allocation
       - ex: 93 frames, 5 processes => 18 per process (3 kept in a pool)
     - Proportional allocation: number of frames proportional to process size
       - ex: 64 frames, s1 = 10, s2 = 127
         - f1 = 10/137 x 64 = 5
         - f2 = 127/137 x 64 = 59
     - Both treat processes as equals (unlike priority allocation).
     (A small calculation of the proportional split follows this page.)
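
A minimal sketch of second-chance (clock) victim selection over a circular queue of frames, assuming the reference bits are visible to software; the array names, frame count, and stand-alone main are illustrative assumptions, not from the slides.

    #include <stdbool.h>
    #include <stdio.h>

    #define NFRAMES 64                    /* assumed number of physical frames */

    static bool reference_bit[NFRAMES];   /* set by hardware on each access    */
    static int  hand = 0;                 /* current position in the queue     */

    /* Sweep the circular queue, giving every referenced frame a second chance
     * by clearing its bit; the first frame found with bit 0 is the victim.
     * If every bit is 1, the sweep clears them all and the choice degenerates
     * to plain FIFO. */
    static int select_victim(void)
    {
        for (;;) {
            if (!reference_bit[hand]) {
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            reference_bit[hand] = false;   /* second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void)
    {
        /* Mark a few frames as recently referenced and pick two victims. */
        reference_bit[0] = reference_bit[1] = reference_bit[2] = true;
        printf("victim: %d\n", select_victim());   /* 3: first bit-0 frame */
        printf("victim: %d\n", select_victim());   /* 4 */
        return 0;
    }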
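
And a small sketch of the proportional-allocation arithmetic from the Fixed Allocation slide, f_i = s_i / S x m, using the slide's example (m = 64 frames, s1 = 10, s2 = 127). Rounding to the nearest frame is my own choice; a real allocator would also have to keep the rounded total at m.

    #include <stdio.h>

    int main(void)
    {
        int    m  = 64;                  /* total frames                  */
        double s1 = 10.0, s2 = 127.0;    /* process sizes from the slide  */
        double S  = s1 + s2;

        /* f_i = (s_i / S) * m, rounded to the nearest whole frame. */
        int f1 = (int)(s1 / S * m + 0.5);
        int f2 = (int)(s2 / S * m + 0.5);

        printf("f1 = %d frames, f2 = %d frames\n", f1, f2);   /* 5 and 59 */
        return 0;
    }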

  5. Priority Allocation
     - Use a proportional scheme based on priority rather than size.
     - If a process generates a page fault, select a replacement frame from a process with lower priority.
     - "Global" versus "local" replacement:
       - local is consistent (not influenced by other processes)
       - global is more efficient (and used more often)

     Thrashing
     - If a process does not have "enough" pages, its page-fault rate is very high:
       - low CPU utilization
       - the OS thinks it needs to increase the degree of multiprogramming
       - so it adds another process to the system, making things worse
     - Thrashing: a process is busy swapping pages in and out instead of running.
     [Figure: CPU utilization versus degree of multiprogramming; utilization rises, then collapses once thrashing sets in.]

     Cause of Thrashing
     - Why does paging work? The locality model:
       - a process migrates from one locality to another
       - localities may overlap
     - Why does thrashing occur? The sum of the localities exceeds the total memory size.
     - How do we fix thrashing?
       - Working Set Model
       - Page Fault Frequency

     Working-Set Model
     - Working-set window: a fixed number T of page references.
     - W = the set of pages referenced in the most recent T references.
     - D = the sum of the working-set sizes over all processes.
       - if T is too small, W will not encompass a whole locality
       - if T is too large, W will encompass several localities
       - as T => infinity, W encompasses the entire program
     - If D > m (the available frames) => thrashing, so suspend a process.
     - Modify the LRU approximation to also track the working set.

     Working Set Example
     - T = 5
     - Reference string: 1 2 3 2 3 1 2 4 3 4 7 4 3 3 4 1 1 2 2 2 1
       - early on, W = {1,2,3}; in the middle, W = {3,4,7}; at the end, W = {1,2}
     (A sketch of computing the working set over this string follows this page.)
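
A minimal sketch of computing the working set as the distinct pages among the last T references, using the reference string and T = 5 from the example above; the function name and the three sample positions are my own choices.

    #include <stdio.h>

    #define T 5   /* working-set window from the slide */

    static const int refs[] = {1, 2, 3, 2, 3, 1, 2, 4, 3, 4,
                               7, 4, 3, 3, 4, 1, 1, 2, 2, 2, 1};
    static const int nrefs  = sizeof(refs) / sizeof(refs[0]);

    /* Print the working set at time t: the distinct pages among the last
     * T references ending at position t (inclusive). */
    static void print_working_set(int t)
    {
        int start = (t + 1 >= T) ? t + 1 - T : 0;

        printf("W at t=%d: {", t);
        for (int i = start; i <= t; i++) {
            int seen = 0;                      /* skip pages already printed */
            for (int j = start; j < i; j++)
                if (refs[j] == refs[i]) { seen = 1; break; }
            if (!seen)
                printf(" %d", refs[i]);
        }
        printf(" }\n");
    }

    int main(void)
    {
        /* Matches the slide: {1,2,3} early, {3,4,7} mid-string, {1,2} at the end. */
        print_working_set(4);
        print_working_set(12);
        print_working_set(nrefs - 1);
        return 0;
    }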

  6. Page Fault Frequency
     [Figure: page-fault rate versus number of frames, with an upper and a lower bound; above the upper bound the process gains frames, below the lower bound it loses frames.]
     - Establish an "acceptable" page-fault rate:
       - if the rate is too low, the process loses a frame
       - if the rate is too high, the process gains a frame

     Prepaging
     - Pure demand paging has many page faults initially.
       - use the working set to prepage
     - Does the cost of prepaging pages that go unused outweigh the cost of the page faults avoided?

     Page Size
     - Old: page size was fixed. New: choose the page size.
     - How do we pick the right page size? Tradeoffs:
       - fragmentation
       - table size
       - minimize I/O: transfer time is small (~0.1 ms), latency + seek time are large (~10 ms)
       - locality: small pages give finer resolution, but more faults
         - ex: 200 KB process with half of it used: 1 fault with a 200 KB page vs. 100K faults with 1-byte pages
     - Historical trend is toward larger page sizes: CPU and memory have sped up proportionally more than disks.

     Program Structure
     - Consider (column-by-column traversal):
         int A[1024][1024];
         for (j = 0; j < 1024; j++)
             for (i = 0; i < 1024; i++)
                 A[i][j] = 0;
     - Suppose the process has 1 frame and each row occupies one page:
       => 1024 x 1024 page faults!

     Program Structure (continued)
     - Traverse row by row instead:
         int A[1024][1024];
         for (i = 0; i < 1024; i++)
             for (j = 0; j < 1024; j++)
                 A[i][j] = 0;
     - 1024 page faults.
     - Data structure choice matters too: stack vs. hash table.
     - The compiler and linker can help:
       - separate code from data
       - keep routines that call each other together
     - LISP (pointers) vs. Pascal (no pointers).
     (A runnable version of the two traversals follows this page.)

     Priority Processes
     - Consider:
       - a low-priority process faults; the OS brings the page in
       - the low-priority process then sits in the ready queue for a while, waiting while a high-priority process runs
       - the high-priority process faults
         - the low-priority page is clean and has not been used in a while => a perfect victim!
     - Lock bit (as for I/O): pin the page until it has been used at least once.
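
For completeness, a self-contained version of the two traversals from the Program Structure slides. The comments restate the slides' fault counts under their assumptions (1 frame, one row per page); the main function and array name layout are mine.

    #include <stdio.h>

    #define N 1024

    /* C stores A in row-major order: A[i][0..N-1] are contiguous, so with
     * one page per row the traversal order decides how often pages change. */
    static int A[N][N];

    /* Column-by-column: touches a different row (a different page) on every
     * store; under the slides' assumptions, N x N page faults. */
    static void zero_by_columns(void)
    {
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                A[i][j] = 0;
    }

    /* Row-by-row: finishes a whole row (one page) before moving on; under
     * the slides' assumptions, N page faults. */
    static void zero_by_rows(void)
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                A[i][j] = 0;
    }

    int main(void)
    {
        zero_by_columns();
        zero_by_rows();
        printf("A[%d][%d] = %d\n", N - 1, N - 1, A[N - 1][N - 1]);
        return 0;
    }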
