  1. COMP 3713 — Operating Systems, Slides Part 2
     Jim Diamond, CAR 409
     Jodrey School of Computer Science, Acadia University

  2. Acknowledgements
     • These slides borrow from those prepared for “Operating System Concepts” (eighth edition) by Silberschatz, Galvin and Gagne.
     • These slides borrow lightly from those prepared for COMP 3713 by Dr. Darcy Benoit.

  3. Chapter 4: Threads

  4. Chapter 4, slide 86: What is a thread?
     • A thread is a “unit” of CPU utilization
       – each thread has its own program counter, registers and stack
       – it shares code, non-stack data, and other resources (such as open files) with the other threads of its process
     • Multiple threads can be associated with a single process
     • A thread is also referred to as a lightweight process
     • The resources saved by the threads being “lightweight” can be used for other processing
     • Unlike traditional (“heavyweight”) processes, threads make it possible (on average) to do more work with fewer resources

  5. Chapter 4, slide 87: Single and Multithreaded Processes

  6. Chapter 4, slide 88: Benefits of Using Multiple Threads
     • A single-threaded process has only one sequence of instructions being executed
       – if a single-threaded process has to deal with (say) multiple input sources, it must have some facility for watching all of them concurrently (*cough*, select(), poll())
       – a complex single-threaded process may be hard to write, debug and maintain (or so say the people who don’t understand select())
     • Using a multi-threaded process can have these benefits:
       – improved responsiveness (e.g., a process’ UI can still respond “instantly” even if another thread is crunching or blocked) *cough*
       – resource sharing: only one copy of a process’ code/data is required in memory; no need to use (explicit) shared-memory functions
       – economy: cheaper to create and use threads than processes (30X for creation, 5X for context switch in Solaris)

  7. Chapter 4, slide 89: Multicore Programming
     • Multicore systems put pressure on programmers to use them efficiently
     • Challenges include:
       – balance: it is undesirable to have one thread or process that does 99% of the work
       – data splitting: the data used by separate processes/threads should be separable to different cores (GEQ: Why?)
       – data dependency: when data is shared by two or more threads, the processes/threads must be properly synchronized
       – testing and debugging: testing and debugging concurrent processes/threads is (much?) more difficult than for single-threaded processes (GEQ: Why?)

  8. Chapter 4, slide 90: Multithreaded Server Architecture
     • One approach (shown in the figure): the server creates a new thread to service each incoming request
     • Better yet: have a pool of worker threads waiting for something to do

  9. Chapter 4, slide 91: Execution — Single Core vs. Multicore
     • Single core: threads are interleaved on the one core over time
     • Multicore: threads can execute in parallel, one per core

 10. Chapter 4, slide 92: Threads: User and Kernel
     • User threads
       – threads are managed without kernel support
       – thread management is done by a user-level threads library
     • Kernel threads
       – threads are supported and managed directly by the kernel
       – implemented and used by most (all?) current OSes
     • The three primary thread libraries are POSIX Pthreads, Win32 threads and Java threads
     • All threads in one process must get CPU time; how?
       1: the thread library does the scheduling and dispatching of a given process’ threads; or
       2: the kernel knows about, schedules and dispatches the threads

 11. Chapter 4, slide 93: Many-to-One Threading Model
     • Many user threads are mapped to one kernel thread

 12. Chapter 4, slide 94: One-to-One Threading Model
     • Benefits of 1–1:
       – when one thread blocks, the process’ other threads can still run
       – can make use of multiple processors/cores
     • Drawback of 1–1: need to create a kernel thread for every user-space thread, so more overhead is involved
     • Many-to-one: Solaris “green threads”
     • 1–1: Linux, MS Windows, Solaris 9 and later, . . .
     • Q: Is there something (possibly) better overall?

 13. Chapter 4, slide 95: Many-to-Many Threading Model
     • Allows many user-level threads to be mapped to many kernel threads
     • Allows the operating system to create a “sufficient number” of kernel threads

 14. Chapter 4, slide 96: Two-level Model
     • Similar to many-to-many, except that it allows one or more user threads to be bound to their own kernel thread
     • Examples: IRIX, HP-UX, Tru64 UNIX (formerly Digital UNIX), Solaris 8 and earlier

 15. Chapter 4, slide 97: Thread Libraries
     • Thread libraries provide programmers with APIs for creating and managing threads
     • There are two primary ways of implementing them:
       – a user-level library entirely in user space, with no kernel support
       – a kernel-level library supported by the OS
     • “Pthreads” is one such library
       – may be provided either as user-level or kernel-level
       – a POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
       – common in UNIX operating systems (Solaris, Linux, MacOS)
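     An illustrative Pthreads sketch (not from the slides) of the API just mentioned: it creates one thread and waits for it to finish; the worker() function and the variable names are invented for this example. Compile with something like cc -pthread example.c.

         #include <pthread.h>
         #include <stdio.h>

         /* Hypothetical worker function: the new thread runs this. */
         static void *worker(void *arg)
         {
             int id = *(int *) arg;
             printf("worker thread %d running\n", id);
             return NULL;
         }

         int main(void)
         {
             pthread_t tid;
             int id = 1;

             /* Create one thread running worker(), then wait for it to exit. */
             if (pthread_create(&tid, NULL, worker, &id) != 0) {
                 perror("pthread_create");
                 return 1;
             }
             pthread_join(tid, NULL);
             printf("main: worker finished\n");
             return 0;
         }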

 16. Chapter 4, slide 98: Java Threads
     • Java threads are managed by the JVM
     • Typically implemented using the threads model provided by the underlying OS
     • Two ways to create a thread in Java:
       – implementing the Runnable interface — see textbook
       – extending the Thread class and using its start() method:

           class Worker extends Thread {
               public void run() {
                   System.out.println("I am a worker thread, woe is me");
                   /* Do something useful here? */
               }
           }

           public class First {
               public static void main(String args[]) {
                   Worker w = new Worker();
                   w.start();
                   System.out.println("I am main()");
               }
           }

 17. Chapter 4, slide 99: Issues with Threads
     • Does the use of threads affect the semantics of system calls?
       – e.g., does fork() create a new process with all the threads, or only one thread?
       – e.g., does exec() replace all the threads or just one thread?
     • Thread cancellation of a target thread: asynchronous or deferred?
     • Signal handling
     • Thread pools
     • Thread-specific data
       – support is needed for threads to have private data (the stack is already private)
     • Scheduler activations
       – are threads scheduled individually or as a group?
       – e.g., do two threads get two time slices?
     • Should all threads in a given process have the same priority?

 18. Chapter 4, slide 100: Thread Cancellation
     • Thread cancellation: terminating a thread before it has finished
     • Two general approaches:
       – asynchronous cancellation terminates the target thread immediately
       – deferred cancellation allows the target thread to periodically check whether it should be cancelled
     • Issue: what if a thread is asynchronously cancelled while updating data other threads are using?
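     To make deferred cancellation concrete, here is a small illustrative Pthreads sketch (not from the slides; the worker() name and the busy loop are invented for the example). The worker periodically calls pthread_testcancel(), a cancellation point at which a pending deferred cancellation request takes effect.

         #include <pthread.h>
         #include <stdio.h>
         #include <unistd.h>

         /* Hypothetical worker: loops "forever", but offers cancellation points. */
         static void *worker(void *arg)
         {
             (void) arg;
             for (;;) {
                 /* ... do a chunk of work ... */
                 pthread_testcancel();   /* a pending (deferred) cancellation is acted on here */
             }
             return NULL;
         }

         int main(void)
         {
             pthread_t tid;

             pthread_create(&tid, NULL, worker, NULL);
             sleep(1);                   /* let the worker run for a while */
             pthread_cancel(tid);        /* request cancellation (deferred by default) */
             pthread_join(tid, NULL);    /* wait until the worker has actually terminated */
             printf("worker cancelled\n");
             return 0;
         }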

 19. Chapter 4, slide 101: Signal Handling (Unix)
     • A signal is a low-level way to notify a process that some event has occurred
       – programs are able to ignore (most) signals
     • Signals are handled by a signal handler
       – it can be the default handler or a user-defined handler
     • What about multithreaded processes?
       – deliver the signal to the appropriate thread?  Which one is appropriate?
       – deliver the signal to all threads?
       – assign one thread the job of handling signals?
     • See man 2 signal, man 7 signal, man 2 kill, man 2 sigaction, and their many, many friends
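     One way to implement the last option above (a single thread dedicated to handling signals) is sketched below; this is illustrative only, not from the slides, and signal_thread() is an invented name. The idea is to block the signal in every thread and have the dedicated thread wait for it with sigwait().

         #include <pthread.h>
         #include <signal.h>
         #include <stdio.h>

         /* Hypothetical dedicated signal-handling thread: waits for SIGINT. */
         static void *signal_thread(void *arg)
         {
             sigset_t *set = arg;
             int sig;

             /* sigwait() blocks until one of the signals in *set is pending. */
             if (sigwait(set, &sig) == 0)
                 printf("signal thread received signal %d\n", sig);
             return NULL;
         }

         int main(void)
         {
             sigset_t set;
             pthread_t tid;

             /* Block SIGINT in the main thread; threads created afterwards
                inherit this signal mask, so only sigwait() will see SIGINT. */
             sigemptyset(&set);
             sigaddset(&set, SIGINT);
             pthread_sigmask(SIG_BLOCK, &set, NULL);

             pthread_create(&tid, NULL, signal_thread, &set);
             pthread_join(tid, NULL);    /* returns after the first SIGINT (e.g., Ctrl-C) */
             return 0;
         }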

 20. Chapter 5: Process Synchronization

 21. Chapter 5, slide 102: Background
     • Concurrent access to shared data may result in data inconsistency
       – this could be data shared by multiple processes using shared memory
     • Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes
     • Example: suppose that we wanted to provide a solution to the producer–consumer problem that fills all the buffers
       – we can do so by keeping an integer count that tracks the number of full buffers
       – initially, count is set to 0

 22. Chapter 5, slide 103: Producer Process
     • Producer and consumer are sharing memory (buffer[BUFFER_SIZE] and count)

           in = 0;    // index the next item will be placed in
           while (true) {
               /* Produce an item and put it in nextProduced */
               while (count == BUFFER_SIZE)
                   ;  /* do nothing but wait for an empty slot */
               buffer[in] = nextProduced;
               in = (in + 1) % BUFFER_SIZE;
               count++;
           }

     • Note: a busy wait like this should be avoided whenever possible
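     For comparison, the matching consumer loop (not shown on this slide; a sketch in the same style, using the same shared buffer and count, with out and nextConsumed as assumed names) would look roughly like:

           out = 0;    // index of the next full slot
           while (true) {
               while (count == 0)
                   ;  /* do nothing but wait for a full slot */
               nextConsumed = buffer[out];
               out = (out + 1) % BUFFER_SIZE;
               count--;
               /* Consume the item in nextConsumed */
           }

     Note that count++ in the producer and count-- in the consumer are not atomic; that race is exactly the kind of problem the rest of this chapter addresses.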
