

  1. COMP 3713 — Operating Systems, Slides Part 4
     Jim Diamond, CAR 409
     Jodrey School of Computer Science, Acadia University

  2. Acknowledgements
     These slides borrow heavily from those prepared for “Operating System Concepts” (eighth edition) by Silberschatz, Galvin and Gagne.

  3. Chapter 8: Main Memory

  4. Background
     • A program must be brought (from disk or other mass storage) into memory and placed within a process for it to be run
       – main memory and registers are the only storage the CPU can access directly
     • Register access can be done within one CPU clock cycle
     • Accessing main memory can take many CPU cycles
     • The cache sits between the main memory and the CPU registers
     • Protection of memory is required to ensure correct operation

  5. Memory Protection: Base and Limit Registers
     • A pair of registers (base and limit) define the address space a process is permitted to use
     • EVERY memory access by a user-space program is checked against these limits
       – an attempted access outside this area generates an interrupt, which triggers the kernel to kill the offending process
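     The following is a rough software model of that check, not actual hardware: a user-mode reference is legal only if it falls in [base, base + limit). The register values and addresses are arbitrary examples, not taken from the slides.

        #include <stdbool.h>
        #include <stdio.h>

        /* Software model of the base/limit check the hardware performs on
         * every user-mode memory access; register values are arbitrary. */
        static unsigned base_reg  = 300040;   /* smallest legal physical address */
        static unsigned limit_reg = 120900;   /* size of the legal region        */

        static bool access_ok(unsigned addr)
        {
            /* legal iff base <= addr < base + limit */
            return addr >= base_reg && addr < base_reg + limit_reg;
        }

        int main(void)
        {
            unsigned addrs[] = { 300500, 500000 };
            for (int i = 0; i < 2; i++) {
                if (access_ok(addrs[i]))
                    printf("access to %u allowed\n", addrs[i]);
                else
                    printf("trap on access to %u (kernel would kill the process)\n", addrs[i]);
            }
            return 0;
        }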

  6. Binding of Instructions and Data to Memory Addresses
     • Address binding: the decision about where each instruction and data item will be stored in memory
     • Address binding of instructions and data to memory addresses can happen at three different stages
       – compile time: absolute code can be generated if the process's starting location is known in advance; must recompile the code if the starting location of the process changes!
       – load time: must generate relocatable code if the memory location is not known at compile time
       – execution time: binding delayed until run time if the process can be moved during its execution from one memory segment to another; need hardware support for address maps (e.g., base and limit registers)
     • The first two methods are archaic for general-purpose multi-programmed computers

  7. Turning a Program Into A Process
     (figure)

  8. Logical vs. Physical Address Space
     • The concept of a logical address space that is bound to a separate physical address space is central to proper memory management
       – logical address: generated by the CPU; also referred to as a virtual address
       – physical address: the address seen by the memory unit
     • Logical and physical addresses are the same in compile-time and load-time address-binding schemes
     • Logical (virtual) and physical addresses differ in execution-time address-binding schemes

  9. Memory-Management Unit (MMU)
     • The MMU is the hardware device that maps virtual addresses to physical addresses
     • In this simple MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory
     • The user program deals with logical addresses; it never sees the real physical addresses
       – GEQ: what happens if you run two separate copies of such a program at the same time?
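     A minimal software model of this relocation-register scheme (the register value and logical address are arbitrary examples, not from the slides): the physical address is just the logical address plus the relocation register, applied on every reference.

        #include <stdio.h>

        /* Software model of a simple relocation-register MMU;
         * the values below are arbitrary examples. */
        static unsigned relocation_reg = 14000;

        static unsigned mmu_map(unsigned logical)
        {
            /* applied by hardware to every address the process generates */
            return logical + relocation_reg;
        }

        int main(void)
        {
            unsigned logical = 346;
            printf("logical address %u -> physical address %u\n",
                   logical, mmu_map(logical));
            return 0;
        }

     With a per-process relocation value, two copies of the same program can use identical logical addresses yet occupy different physical memory, which is one way to think about the GEQ above.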

  10. Dynamic Relocation Using a Relocation Register
      (figure)

  11. Dynamic Loading of Program Code
      • Idea: rather than loading ALL of the program code from the disk to memory when a process starts, only load a function if/when it is needed at run time
      • Advantage: better memory-space utilization: unused routines are never loaded
      • Advantage: the program can start more quickly since less code must be loaded into memory (initially, at least)
      • No special support (maybe a little support?) from the operating system is required
        – that is, the program keeps track of what it has loaded into memory, and loads functions into memory before it calls them
        – it can unload functions when they are no longer needed
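      One common way for a program to do this kind of on-demand loading on Linux/UNIX is the dlopen()/dlsym() interface; the sketch below loads the math library and looks up cos() only when it is first needed. The library name and symbol are illustrative choices, not something from the slides.

        #include <dlfcn.h>
        #include <stdio.h>

        /* Sketch of dynamic loading with dlopen()/dlsym(): the program itself
         * decides when to bring a routine into memory and when to let it go.
         * (Link with -ldl on older glibc systems.) */
        int main(void)
        {
            void *handle = dlopen("libm.so.6", RTLD_LAZY);   /* load only when needed */
            if (!handle) {
                fprintf(stderr, "dlopen failed: %s\n", dlerror());
                return 1;
            }

            double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
            if (cosine)
                printf("cos(0.0) = %f\n", cosine(0.0));

            dlclose(handle);   /* "unload" the code when no longer needed */
            return 0;
        }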

  12. Dynamic Linking
      • Idea: rather than copying code from libraries into the program executable when the program is created, only link to a function if/when it is needed at run time
      • A small piece of code, the stub, is used to locate the appropriate memory-resident library routine
        – the operating system is needed to check whether the desired routine is already in the process's memory address space
      • Advantages:
        – large libraries don't have to be linked into every program, saving lots of disk space
        – all processes using a given shared library share the in-core copy of the code, saving main memory space
      • See the ldd command in Linux
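      One rough way to picture the stub: the first call goes to a small resolver that locates the real routine, patches a pointer, and forwards the call, so later calls go straight to the routine. The sketch below is only a toy model of that lazy binding (real dynamic linkers do this through PLT/GOT entries), and all names in it are invented.

        #include <stdio.h>

        /* Toy model of a lazy-binding stub: the first call through
         * do_work_ptr goes to the stub, which "resolves" the real routine,
         * overwrites the pointer, and forwards the call; later calls bypass
         * the stub entirely. */
        static void real_do_work(void)
        {
            printf("real library routine running\n");
        }

        static void do_work_stub(void);                  /* forward declaration */
        static void (*do_work_ptr)(void) = do_work_stub;

        static void do_work_stub(void)
        {
            printf("stub: resolving routine on first call\n");
            do_work_ptr = real_do_work;                  /* bind once */
            do_work_ptr();                               /* forward this first call */
        }

        int main(void)
        {
            do_work_ptr();   /* first call: goes through the stub */
            do_work_ptr();   /* second call: goes directly to the real routine */
            return 0;
        }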

  13. Swapping
      • A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution
      • Backing store: usually a fast disk
        – the book says “large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images” . . . not on (most?) modern OSes
      • Roll out, roll in: a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed
      • The major component of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped
      • Modified versions of swapping are found on many systems (e.g., UNIX, Linux and MS Windows)
      • The system maintains a queue of ready-to-run processes whose memory images are on disk
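      As a rough worked example of the "proportional to the amount swapped" point (the process size and disk rate are assumed values, not from the slides): swapping a 100 MB image out and another 100 MB image in over a disk sustaining 50 MB/s costs about 2 s + 2 s = 4 s of transfer time, plus seek/rotational latency.

        #include <stdio.h>

        /* Back-of-the-envelope swap-time estimate; the image size and disk
         * throughput below are assumed example values. */
        int main(void)
        {
            double image_mb      = 100.0;   /* size of one memory image (MB)  */
            double disk_mb_per_s = 50.0;    /* sustained transfer rate (MB/s) */

            double one_way = image_mb / disk_mb_per_s;   /* swap out OR swap in */
            double total   = 2.0 * one_way;              /* out + in            */

            printf("swap out: %.1f s, swap in: %.1f s, total transfer: %.1f s\n",
                   one_way, one_way, total);
            return 0;
        }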

  14. Schematic View of Swapping
      (figure)

  15. Contiguous Memory Allocation: 1
      One Way to Accommodate the OS & Processes
      • Main memory is usually divided into two partitions:
        – the resident operating system, usually held in low memory with the interrupt vector
        – user processes, held in high memory
      • Relocation registers are used to protect user processes from each other, and to keep them from changing operating system code and data
        – the base register contains the value of the smallest physical address for the currently running process
        – the limit register contains the range of logical addresses: each logical address must be less than the limit register

  16. Hardware Support for Relocation and Limit Registers
      (figure)

  17. Contiguous Memory Allocation: 2
      • Multiple-partition allocation (partitions can be fixed- or variably-sized)
      • Hole: a block of available memory; holes of various sizes are scattered throughout memory
      • When a process arrives, it is allocated memory from a hole large enough to accommodate it
      • The operating system maintains information about (a) allocated partitions and (b) free partitions (holes)
      (figure: successive memory snapshots showing holes opening and closing as processes such as process 8 leave and process 9 and process 10 arrive alongside process 2 and process 5)

  18. Dynamic Storage-Allocation Problem
      • How should we satisfy a request of size n from a list of free holes?
      • First-fit: use the first hole that is big enough
      • Best-fit: use the smallest hole that is big enough; must search the entire list, unless it is ordered by size
      • Worst-fit: allocate the largest hole; may also need to search the entire list, if it is not sorted and you don't have quick access to the largest block
      • Textbook claims first-fit and best-fit are better than worst-fit in terms of speed
        – I believe this
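      A minimal sketch of first-fit and best-fit over a free list (the data structure, field names and hole sizes are invented for illustration): first-fit stops at the first hole that is large enough, while best-fit must scan the whole list keeping the smallest hole that still fits.

        #include <stddef.h>
        #include <stdio.h>

        /* Toy free-list node: an available hole of "size" bytes at "start". */
        struct hole {
            size_t       start;
            size_t       size;
            struct hole *next;
        };

        /* First-fit: take the first hole that is big enough (or NULL). */
        struct hole *first_fit(struct hole *free_list, size_t request)
        {
            for (struct hole *h = free_list; h != NULL; h = h->next)
                if (h->size >= request)
                    return h;
            return NULL;   /* no single hole fits, even if total free space would */
        }

        /* Best-fit: scan the entire list, remember the smallest adequate hole. */
        struct hole *best_fit(struct hole *free_list, size_t request)
        {
            struct hole *best = NULL;
            for (struct hole *h = free_list; h != NULL; h = h->next)
                if (h->size >= request && (best == NULL || h->size < best->size))
                    best = h;
            return best;
        }

        int main(void)
        {
            /* three holes with made-up sizes: 1000, 400 and 2200 bytes */
            struct hole h3 = { 9000, 2200, NULL };
            struct hole h2 = { 5000,  400, &h3 };
            struct hole h1 = { 1000, 1000, &h2 };

            struct hole *ff = first_fit(&h1, 300);
            struct hole *bf = best_fit(&h1, 300);
            printf("first-fit picks the hole at %zu, best-fit the hole at %zu\n",
                   ff ? ff->start : 0, bf ? bf->start : 0);
            return 0;
        }

      In real allocators the chosen hole is then split, with the remainder returned to the free list; that bookkeeping is omitted here.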

  19. Fragmentation
      • External fragmentation: total memory space exists to satisfy a request, but it is not contiguous
      • Internal fragmentation: allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition, but not being used
      • We can reduce external fragmentation by compaction
        – shuffle memory contents to place all free memory blocks together in one large block
        – compaction is possible only if relocation is dynamic, and is done at execution time
        – I/O problem: what if DMA is being done to some memory address in a block we want to move?
          Soln 1: lock the job in memory while it is involved in I/O
          Soln 2: do I/O only into OS buffers
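      A small worked example of internal fragmentation (the allocation granularity and request size are assumed values): if the allocator hands out memory in fixed 4 KB blocks and a process asks for 9 KB, it receives three blocks (12 KB), so 3 KB inside the last block is allocated but unused.

        #include <stdio.h>

        /* Internal-fragmentation arithmetic; block size and request size
         * are assumed example values. */
        int main(void)
        {
            unsigned block   = 4096;        /* allocation granularity (bytes) */
            unsigned request = 9 * 1024;    /* bytes actually asked for       */

            unsigned blocks    = (request + block - 1) / block;   /* round up */
            unsigned allocated = blocks * block;
            unsigned wasted    = allocated - request;   /* internal fragmentation */

            printf("allocated %u bytes for a %u-byte request; %u bytes wasted\n",
                   allocated, request, wasted);
            return 0;
        }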

  20. Dealing with Fragmentation
      • There are two methods for dealing with fragmentation:
        – segmentation
        – paging
      • These two methods can be combined
      • The next few slides discuss these methods
