
IPC, Threads, Races, Critical Sections
Operating Systems Principles
Mark Kampe (markk@cs.ucla.edu)
4/18/2016

7A. Threads
7B. Inter-Process Communication
7C. Critical Sections
7D. Asynchronous Event Completions


A brief history of threads
• processes are very expensive
  – to create: they own resources
  – to dispatch: they have address spaces
• different processes are very distinct
  – they cannot share the same address space
  – they cannot (usually) share resources
• not all programs require strong separation
  – cooperating parallel threads of execution
  – all are trusted, part of a single program

What is a thread?
• strictly a unit of execution/scheduling
  – each thread has its own stack, PC, registers
• multiple threads can run in a process
  – they all share the same code and data space
  – they all have access to the same resources
  – this makes them cheaper to create and run
• sharing the CPU between multiple threads
  – user-level threads (w/voluntary yielding)
  – scheduled system threads (w/preemption)

When to use processes
• running multiple distinct programs
• creation/destruction are rare events
• running agents with distinct privileges
• limited interactions and shared resources
• prevent interference between processes
• firewall one from failures of the other

When to use threads
• parallel activities in a single program
• frequent creation and destruction
• all can run with same privileges
• they need to share resources
• they exchange many messages/signals
• no need to protect from each other
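The "What is a thread?" slide is easy to make concrete. Below is a minimal POSIX-threads sketch (not part of the original slides): several threads share one global counter in a common data segment, while each gets its own stack and registers. Compile with -pthread.

    #include <pthread.h>
    #include <stdio.h>

    /* Shared by all threads: they live in one address space. */
    static int shared_counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        int id = *(int *)arg;      /* private: lives on this thread's stack */
        pthread_mutex_lock(&lock);
        shared_counter++;          /* shared: same data segment for everyone */
        printf("thread %d sees counter = %d\n", id, shared_counter);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t tid[4];
        int ids[4];
        for (int i = 0; i < 4; i++) {
            ids[i] = i;
            pthread_create(&tid[i], NULL, worker, &ids[i]);
        }
        for (int i = 0; i < 4; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }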

Processes vs. Threads – trade-offs
• if you use multiple processes
  – your application may run much more slowly
  – it may be difficult to share some resources
• if you use multiple threads
  – you will have to create and manage them
  – you will have to serialize resource use
  – your program will be more complex to write
• TANSTAAFL
  – there ain't no such thing as a free lunch

Thread state and thread stacks
• each thread has its own registers, PS, PC
• each thread must have its own stack area
  – maximum size specified when the thread is created
  – a process can contain many threads
  – they cannot all grow towards a single hole
  – thread creator must know max required stack size
  – stack space must be reclaimed when the thread exits
• procedure linkage conventions are unchanged

UNIX stack space management
[figure: one process address space from 0x00000000 to 0xFFFFFFFF, holding a code segment, a data segment, and a stack segment]
• data segment starts at the page boundary after the code segment (e.g. 0x0120000)
• stack segment starts at the high end of the address space
• UNIX extends the stack automatically as the program needs more
• data segment grows up; stack segment grows down
• both grow towards the hole in the middle; they are not allowed to meet

Thread Stack Allocation
[figure: code and data segments followed by fixed-size stacks for threads 1, 2, and 3 within the same address space]

Inter-Process Communication
• the exchange of data between processes
• goals
  – simplicity
  – convenience
  – generality
  – efficiency
  – robustness and reliability
• some of these are contradictory

IPC: operations
• channel creation and destruction
• write/send/put
  – insert data into the channel
• read/receive/get
  – extract data from the channel
• channel content query
  – how much data is currently in the channel
• connection establishment and query
  – control connection of one channel end to another
  – who are the end-points, what is the status of connections
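The stack-allocation slides above map directly onto the pthreads attribute API: the creator fixes the maximum stack size before the thread starts. A minimal sketch, where the 1 MiB figure is an arbitrary illustrative choice:

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg) {
        (void)arg;
        printf("running on a caller-sized stack\n");
        return NULL;
    }

    int main(void) {
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setstacksize(&attr, 1024 * 1024);  /* 1 MiB, fixed at creation */

        pthread_t tid;
        pthread_create(&tid, &attr, worker, NULL);
        pthread_join(tid, NULL);        /* lets the stack space be reclaimed */
        pthread_attr_destroy(&attr);
        return 0;
    }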
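The four generic IPC operations also map one-for-one onto familiar UNIX calls. A self-contained sketch using an anonymous pipe as the channel:

    #include <unistd.h>
    #include <stdio.h>

    int main(void) {
        int fd[2];
        char buf[32];

        pipe(fd);                                   /* channel creation    */
        write(fd[1], "hello", 5);                   /* write/send/put      */
        ssize_t n = read(fd[0], buf, sizeof buf);   /* read/receive/get    */
        printf("got %zd bytes: %.*s\n", n, (int)n, buf);

        close(fd[0]);                               /* channel destruction */
        close(fd[1]);
        return 0;
    }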

IPC: messages vs streams
• streams
  – a continuous stream of bytes
  – read or write few or many bytes at a time
  – write and read buffer sizes are unrelated
  – stream may contain app-specific record delimiters
• messages (aka datagrams)
  – a sequence of distinct messages
  – each message has its own length (subject to limits)
  – message is typically read/written as a unit
  – delivery of a message is typically all-or-nothing

IPC: flow-control
• queued messages consume system resources
  – buffered in the OS until the receiver asks for them
• many things can increase required buffer space
  – fast sender, non-responsive receiver
• there must be a way to limit required buffer space
  – sender side: block sender or refuse message
  – receiving side: stifle sender, flush old messages
  – this is usually handled by network protocols
• mechanisms to report stifle/flush to sender

IPC: reliability and robustness
• reliable delivery (e.g. TCP vs UDP)
  – networks can lose requests and responses
• a sent message may not be processed
  – receiver invalid, dead, or not responding
• when do we tell the sender "OK"?
  – queued locally? added to receiver's input queue?
  – receiver has read? receiver has acknowledged?
• how persistent is the system in attempting to deliver?
  – retransmission, alternate routes, back-up servers, ...
• do channel/contents survive receiver restarts?
  – can a new server instance pick up where the old left off?

Simplicity: pipelines
• data flows through a series of programs
  – ls | grep | sort | mail
  – macro processor | compiler | assembler
• data is a simple byte stream
  – buffered in the operating system
  – no need for intermediate temporary files
• there are no security/privacy/trust issues
  – all under control of a single user
• error conditions
  – input: End of File; output: SIGPIPE

Generality: sockets
• connections between addresses/ports
  – connect/listen/accept
  – lookup: registry, DNS, service discovery protocols
• many data options
  – reliable or best-effort datagrams
  – streams, messages, remote procedure calls, …
• complex flow control and error handling
  – retransmissions, timeouts, node failures
  – possibility of reconnection or fail-over
• trust/security/privacy/integrity
  – we have a whole lecture on this subject

Half way: mail boxes, named pipes
• client/server rendezvous point
  – a name corresponds to a service
  – a server awaits client connections
• once open, it may be as simple as a pipe
  – OS may authenticate message sender
• limited fail-over capability
  – if a server dies, another can take its place
  – but what about in-progress requests?
• client/server must be on the same system
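To make the pipeline slide concrete, here is a hedged sketch (not from the slides) of how a shell might wire up a two-stage pipeline such as ls | sort, using pipe/fork/dup2; error handling is omitted for brevity:

    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];
        pipe(fd);

        if (fork() == 0) {              /* child 1: ls writes into the pipe  */
            dup2(fd[1], 1);             /* stdout -> pipe write end          */
            close(fd[0]); close(fd[1]);
            execlp("ls", "ls", (char *)0);
            _exit(1);                   /* reached only if exec fails        */
        }
        if (fork() == 0) {              /* child 2: sort reads from the pipe */
            dup2(fd[0], 0);             /* stdin <- pipe read end            */
            close(fd[0]); close(fd[1]);
            execlp("sort", "sort", (char *)0);
            _exit(1);
        }
        close(fd[0]); close(fd[1]);     /* parent drops both ends, so sort
                                           sees EOF once ls exits            */
        wait(NULL);
        wait(NULL);
        return 0;
    }

When ls exits, the pipe's write end closes and sort reads End of File; had sort died first, a further write by ls would raise SIGPIPE, exactly the error conditions the slide lists.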
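The connect/listen/accept vocabulary on the sockets slide is the Berkeley sockets API. A minimal echo-once TCP server sketch; port 5000 is an arbitrary example and error checking is again omitted:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        int s = socket(AF_INET, SOCK_STREAM, 0);    /* reliable byte stream     */

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);                /* arbitrary example port   */

        bind(s, (struct sockaddr *)&addr, sizeof addr);
        listen(s, 5);                               /* await client connections */

        int c = accept(s, NULL, NULL);              /* block for one client     */
        char buf[128];
        ssize_t n = read(c, buf, sizeof buf);       /* stream: any byte count   */
        if (n > 0) write(c, buf, n);                /* echo it back             */
        close(c);
        close(s);
        return 0;
    }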
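The "half way" rendezvous can likewise be sketched with a named pipe, where a filesystem name is the service's well-known address; /tmp/demo_fifo is an arbitrary example name:

    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void) {
        /* The name in the filesystem is the rendezvous point:
         * a server opens it for reading, and clients on the
         * same system open the same name for writing. */
        mkfifo("/tmp/demo_fifo", 0600);

        int fd = open("/tmp/demo_fifo", O_RDONLY | O_NONBLOCK);
        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf);   /* 0 bytes: no client yet */
        printf("read %zd bytes\n", n);

        close(fd);
        unlink("/tmp/demo_fifo");                /* destroy the rendezvous */
        return 0;
    }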

Ludicrous Speed – Shared Memory
• shared read/write memory segments
  – mapped into multiple address spaces
  – perhaps locked in physical memory
  – applications maintain circular buffers
  – OS is not involved in data transfer
• simplicity, ease of use … you're kidding, right?
• reliability, security … caveat emptor!
• generality … locals only!

Synchronization – evolution of the problem
• batch processing – serially reusable resources
  – process A has the tape drive, process B must wait
  – process A updates the file first, then process B
• cooperating processes
  – exchanging messages with one another
  – continuous updates against shared files
• shared data and multi-threaded computation
  – interrupt handlers, symmetric multi-processors
  – parallel algorithms, preemptive scheduling
• network-scale distributed computing

The benefits of parallelism
• improved throughput
  – blocking of one activity does not stop others
• improved modularity
  – separating complex activities into simpler pieces
• improved robustness
  – the failure of one thread does not stop others
• a better fit to emerging paradigms
  – client-server computing, web-based services
  – our universe is cooperating parallel processes

What's the big deal?
• sequential program execution is easy
  – first instruction one, then instruction two, ...
  – execution order is obvious and deterministic
• independent parallel programs are easy
  – if the parallel streams do not interact in any way
• cooperating parallel programs are hard
  – if the two execution streams are not synchronized
  – results depend on the order of instruction execution
  – parallelism makes execution order non-deterministic
  – results become combinatorially intractable

Race Conditions
• shared resources and parallel operations
  – where outcome depends on execution order
  – these happen all the time; most don't matter
• some race conditions affect correctness
  – conflicting updates (mutual exclusion)
  – check/act races (sleep/wakeup problem)
  – multi-object updates (all-or-none transactions)
  – distributed decisions based on inconsistent views
• each of these classes can be managed
  – if we recognize the race condition and danger

Non-Deterministic Execution
• processes block for I/O or resources
• time-slice end preemption
• interrupt service routines
• unsynchronized execution on another core
• queuing delays
• time required to perform I/O operations
• message transmission/delivery time
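A hedged sketch of the shared-memory fast path using POSIX shm_open/mmap; "/demo_shm" is an arbitrary example name, and on some systems you must link with -lrt. Once two processes map the same segment, ordinary loads and stores move the data, and the OS is not involved in the transfer:

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>

    int main(void) {
        /* Create a named segment and give it a size. */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);

        /* Map it read/write; a cooperating process maps the same name. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        strcpy(p, "hello through shared memory");  /* plain stores, no syscall */
        printf("%s\n", p);

        munmap(p, 4096);
        close(fd);
        shm_unlink("/demo_shm");
        return 0;
    }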
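The conflicting-update race from the Race Conditions slide can be demonstrated in a few lines: two threads perform unsynchronized read-modify-write on a shared counter, and interleaved updates are usually lost. A sketch (compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;          /* shared and deliberately unprotected */

    static void *spin(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                /* load, add, store: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, spin, NULL);
        pthread_create(&b, NULL, spin, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Expected 2000000; the outcome depends on execution order. */
        printf("counter = %ld\n", counter);
        return 0;
    }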
