System Structuring with Threads Example: A Transcoding Web Proxy


  1. System Structuring with Threads Example: A Transcoding Web Proxy Appliance

  • "Proxy": interposed between Web (HTTP) clients and servers; masquerades as (represents) the server to the client, and as the client to the server.
  • Cache: store fetched objects (Web pages) on local disk; reduce network overhead if objects are fetched again.
  • Transcoding: "distill" images to the size/resolution that's right for the client; encrypt/decrypt as needed for security on the Internet.
  • Appliance: serves one purpose only; no general-purpose OS.

  2. Using Threads to Structure the Proxy Server

  [Figure: thread structure of the proxy server. Long-term periodic threads gather statistics and "scrub" the cache for expired (old) objects. Server threads (request handlers) serve HTTP requests. Worker threads for specific objects: a distiller compresses/shrinks images, and encrypt/decrypt threads handle security. Device controller threads: one disk driver thread for each disk, one network driver thread for the network interface, and a logging thread.]

  Thread Family Tree for the Proxy Server

  • Main thread: waits for child termination.
  • Periodic threads (scrubber, stats manager): wait for the timer to fire.
  • Server threads (network driver, HTTP request handlers, file/cache manager, logging, disk driver): wait on queues of data, messages, or pending requests (e.g., device interrupts).
  • Worker threads (distill, encrypt): wait for data to be produced/consumed.
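  A rough sketch of how the main thread might create this family and then wait for child termination, written with std::thread rather than the course's thread library; the thread-body names mirror the roles above, and the stub bodies, single disk, and pool size are assumptions:

      #include <thread>
      #include <vector>

      // Long-lived thread bodies; each would loop until shutdown (bodies elided).
      void Scrubber()           { /* periodically scrub the cache */ }
      void StatsManager()       { /* periodically gather statistics */ }
      void NetworkDriver()      { /* service the network interface */ }
      void DiskDriver()         { /* service one disk */ }
      void LoggingThread()      { /* serialize log writes */ }
      void FileCacheManager()   { /* serve object lookups */ }
      void HttpRequestHandler() { /* serve client requests */ }

      int main() {
          std::vector<std::thread> children;

          children.emplace_back(Scrubber);          // periodic daemons
          children.emplace_back(StatsManager);

          children.emplace_back(NetworkDriver);     // one thread for the network interface
          children.emplace_back(DiskDriver);        // one thread per disk (one disk assumed)
          children.emplace_back(LoggingThread);
          children.emplace_back(FileCacheManager);

          for (int i = 0; i < 4; ++i)               // a small pool of request handlers
              children.emplace_back(HttpRequestHandler);

          for (std::thread& t : children)           // main thread waits for child termination
              t.join();
      }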

  3. Periodic Threads and Timers

  The scrubber and stats-gathering threads must wake up periodically to do their work. These "background" threads are often called daemons or sleepers.

      while (systemActive) {
          do my work;              /* scrub the cache, or gather stats */
          alarm->Pause(10000);
      }

  AlarmClock::Pause(int howlong);   /* called by waiting threads */
  Puts the calling thread to sleep; maintains a collection of threads waiting for time to pass.

  AlarmClock::Tick();   /* called by the clock interrupt handler */
  Wakes up any waiting threads whose wait times have elapsed.

  Interfacing with the Network

  [Figure: on both the sending and receiving paths, the TCP/IP protocol stack and the NIC device driver move packets between a host memory buffer pool and the Network Interface Card across the I/O bus; NetTx transmits onto the network link, NetRcv receives from it.]
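  A minimal sketch of how Pause and Tick might fit together, assuming an ordinary user-level mutex/condition-variable implementation; only the AlarmClock, Pause, and Tick names come from the slide, the tick counter and member names are assumptions:

      #include <condition_variable>
      #include <mutex>

      class AlarmClock {
      public:
          // Called by a waiting thread: sleep until 'howlong' ticks have passed.
          void Pause(int howlong) {
              std::unique_lock<std::mutex> lock(lock_);
              long wakeup = now_ + howlong;
              timeElapsed_.wait(lock, [&] { return now_ >= wakeup; });
          }

          // Called on each clock tick: advance time and wake any waiting
          // threads whose wait times have elapsed.
          void Tick() {
              { std::lock_guard<std::mutex> g(lock_); ++now_; }
              timeElapsed_.notify_all();
          }

      private:
          std::mutex lock_;
          std::condition_variable timeElapsed_;
          long now_ = 0;   // ticks since startup
      };

  A real clock interrupt handler could not block on a mutex; a production version would keep a sorted list of waiters and do the wakeups from a deferred (non-interrupt) context.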

  4. Network Reception

  TCP/IP reception thread (feeding the HTTP request handler):

      while (systemActive) {
          packetArrival->P();
          disable interrupts;
          pkt = GetRcvPacket();
          enable interrupts;
          HandleRcvPacket(pkt);
      }

  Receive interrupt handler:

      packetArrival->V();

  This example illustrates use of a semaphore by an interrupt handler to pass incoming data to waiting threads.

  Inter-Thread Messaging with Send/Receive

  File/cache manager:

      get request for object from thread;
      while (more data in object) {
          read data from object;
          thread->send(data);
      }

  HTTP request handler:

      while (systemActive) {
          object = GetNextClientRequest();   /* network receive */
          find object in cache or on the Web server;
          while (more data in object) {
              currentThread->receive(data);
              transmit data to client;       /* network send */
          }
      }

  This example illustrates use of blocking send/receive primitives to pass a stream of messages or commands to a specific thread, connection, or "port".
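  A minimal sketch of the same hand-off at user level, with C++20 std::counting_semaphore standing in for the slide's semaphore and a mutex standing in for disabling interrupts; the Packet type, queue, and function names are assumptions:

      #include <deque>
      #include <mutex>
      #include <semaphore>

      struct Packet { /* payload elided */ };

      std::counting_semaphore<> packetArrival{0};   // counts packets waiting in rcvQueue
      std::mutex queueLock;                         // stands in for disabling interrupts
      std::deque<Packet*> rcvQueue;

      void HandleRcvPacket(Packet*) { /* protocol processing elided */ }

      // Producer side: the receive "interrupt handler" queues the packet and V()s.
      void OnReceiveInterrupt(Packet* pkt) {
          { std::lock_guard<std::mutex> g(queueLock); rcvQueue.push_back(pkt); }
          packetArrival.release();                  // the slide's packetArrival->V()
      }

      // Consumer side: the reception thread P()s, dequeues, and processes.
      void ReceptionLoop(const bool& systemActive) {
          while (systemActive) {
              packetArrival.acquire();              // the slide's packetArrival->P()
              Packet* pkt;
              {
                  std::lock_guard<std::mutex> g(queueLock);
                  pkt = rcvQueue.front();
                  rcvQueue.pop_front();
              }
              HandleRcvPacket(pkt);
          }
      }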

  5. Request/Response with Send/Receive

  HTTP request handler:

      Thread* cache;
      ...
      cache->send(request);
      response = currentThread->receive();
      ...

  File/cache manager:

      while (systemActive) {
          currentThread->receive(request);
          ...
          requester->send(response);
      }

  The Need for Multiple Service Threads

  Each new request will involve a stream of messages passing through dedicated server thread(s) in each service module (network driver, HTTP request handler, file/cache manager). But what about new requests flowing into the system? A system with single-threaded service modules could handle only one request at a time, even if most of that time is spent waiting for slow devices. Solution: multi-threaded service modules.
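  The slides assume per-thread blocking send/receive primitives. A minimal sketch of such a mailbox, under the assumption that each thread owns one and that messages are strings; the Mailbox name and its members are not from the slides:

      #include <condition_variable>
      #include <mutex>
      #include <queue>
      #include <string>

      class Mailbox {
      public:
          // send: enqueue a message for the owning thread.
          void send(std::string msg) {
              { std::lock_guard<std::mutex> g(lock_); box_.push(std::move(msg)); }
              nonEmpty_.notify_one();
          }

          // receive: block the owning thread until a message arrives.
          std::string receive() {
              std::unique_lock<std::mutex> lock(lock_);
              nonEmpty_.wait(lock, [&] { return !box_.empty(); });
              std::string msg = std::move(box_.front());
              box_.pop();
              return msg;
          }

      private:
          std::mutex lock_;
          std::condition_variable nonEmpty_;
          std::queue<std::string> box_;
      };

  With one Mailbox per thread, the handler's cache->send(request) followed by currentThread->receive() maps onto a send to the cache thread's mailbox followed by a receive on the handler's own.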

  6. Using Ports for Multithreaded Servers

  HTTP request handler:

      ...
      cachePort->send(request);
      response = currentThread->receive();
      ...

  File/cache manager (any thread of the service):

      Port* cachePort;
      while (systemActive) {
          cachePort->receive(request);
          ...
          requester->send(response);
      }

  Producer/Consumer Pipes

      char inbuffer[1024];
      char outbuffer[1024];
      while (inbytes != 0) {
          inbytes = input->read(inbuffer, 1024);
          outbytes = process data from inbuffer to outbuffer;
          output->write(outbuffer, outbytes);
      }

  [Figure: a pipe carrying a byte stream between the network module and the file/cache manager.]

  This example illustrates one important use of the producer/consumer bounded buffer in Lab #3.
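  A minimal sketch of such a bounded buffer, assuming a fixed 1024-byte capacity and blocking read/write; the class and member names are assumptions, and end-of-stream signaling is omitted:

      #include <condition_variable>
      #include <cstddef>
      #include <mutex>

      class BoundedBuffer {
      public:
          // Producer side: block while the buffer is full, then append bytes.
          void write(const char* data, size_t n) {
              for (size_t i = 0; i < n; ++i) {
                  std::unique_lock<std::mutex> lock(lock_);
                  notFull_.wait(lock, [&] { return count_ < Capacity; });
                  buf_[(head_ + count_) % Capacity] = data[i];
                  ++count_;
                  notEmpty_.notify_one();
              }
          }

          // Consumer side: block until data is available, then drain up to n bytes.
          size_t read(char* data, size_t n) {
              std::unique_lock<std::mutex> lock(lock_);
              notEmpty_.wait(lock, [&] { return count_ > 0; });
              size_t got = 0;
              while (count_ > 0 && got < n) {
                  data[got++] = buf_[head_];
                  head_ = (head_ + 1) % Capacity;
                  --count_;
              }
              notFull_.notify_all();
              return got;
          }

      private:
          static const size_t Capacity = 1024;
          char buf_[Capacity];
          size_t head_ = 0, count_ = 0;
          std::mutex lock_;
          std::condition_variable notFull_, notEmpty_;
      };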

  7. Forking and Joining Workers

      distiller = new Thread();
      distiller->Fork(Distill());
      decrypter = new Thread();
      decrypter->Fork(Decrypt());
      pipe = new Pipe();

      /* give workers their input */
      distiller->Send(input);
      decrypter->Send(pipe);

      /* give workers their output */
      distiller->Send(pipe);
      decrypter->Send(output);

      /* wait for workers to finish */
      distiller->Join();
      decrypter->Join();

  [Figure: the HTTP handler connects the stages input → distiller → pipe → decrypter → output.]

  A Serializer for Logging

  Multiple threads enqueue log records on a single queue without blocking for log-write completion; a single logging thread, in front of the disk driver, writes the records into a stream, so log records are not interleaved.
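  A minimal sketch of such a serializer, using std::thread and an output file stream rather than the proxy's disk driver; the class name, members, and shutdown handling are assumptions:

      #include <condition_variable>
      #include <fstream>
      #include <mutex>
      #include <queue>
      #include <string>
      #include <thread>

      class LogSerializer {
      public:
          explicit LogSerializer(const std::string& path)
              : out_(path), writer_(&LogSerializer::WriterLoop, this) {}

          ~LogSerializer() {
              { std::lock_guard<std::mutex> g(lock_); done_ = true; }
              pending_.notify_one();
              writer_.join();
          }

          // Called by any thread: enqueue a record without waiting for the write.
          void Append(std::string record) {
              { std::lock_guard<std::mutex> g(lock_); queue_.push(std::move(record)); }
              pending_.notify_one();
          }

      private:
          // The single logging thread: drains the queue so records never interleave.
          void WriterLoop() {
              std::unique_lock<std::mutex> lock(lock_);
              for (;;) {
                  pending_.wait(lock, [&] { return done_ || !queue_.empty(); });
                  while (!queue_.empty()) {
                      std::string rec = std::move(queue_.front());
                      queue_.pop();
                      lock.unlock();          // write without holding the lock
                      out_ << rec << '\n';
                      lock.lock();
                  }
                  if (done_) return;
              }
          }

          std::ofstream out_;
          std::mutex lock_;
          std::condition_variable pending_;
          std::queue<std::string> queue_;
          bool done_ = false;
          std::thread writer_;   // declared last so it starts after the other members
      };

  Any thread may call Append concurrently; only the writer thread touches the stream, so records come out whole and in queue order.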

  8. Summary of "Paradigms" for Using Threads

  • main thread or initiator
  • sleepers or daemons (background threads)
  • I/O service threads listening on the network or user interface
  • server threads or Work Crews waiting for requests on a message queue, work queue, or port
  • filters or transformers: one stage of a pipeline processing a stream of bytes
  • serializers

  Threads vs. Events

  9. Review: Thread-Structured Proxy Server

  The thread family tree of the proxy server, as in slide 2:

  • Main thread: waits for child termination.
  • Periodic threads (scrubber, stats manager): wait for the timer to fire.
  • Server threads (network driver, HTTP request handlers, file/cache manager, logging, disk driver): wait on queues of data, messages, or pending requests (e.g., device interrupts).
  • Worker threads (distill, encrypt): wait for data to be produced/consumed.

  Summary of "Paradigms" for Using Threads

  • main thread or initiator
  • sleepers or daemons (background threads)
  • I/O service threads listening on the network or user interface
  • server threads or Work Crews waiting for requests on a message queue, work queue, or port
  • filters or transformers: one stage of a pipeline processing a stream of bytes
  • serializers

  10. Thread Priority

  Many systems allow assignment of priority values to threads. Each job in the ready pool has an associated priority value; the scheduler favors jobs with higher priority values.

  • Assigned priorities reflect external preferences for particular users or tasks. "All jobs are equal, but some jobs are more equal than others."
  • Example: running user-interface (interactive) threads at higher priority improves the responsiveness of the system.
  • Example: the Unix nice system call lowers the priority of a task.
  • Example: urgent tasks in a real-time process-control system.

  Keeping Your Priorities Straight

  Priorities must be handled carefully when there are dependencies among tasks with different priorities.

  • A task with priority P should never impede the progress of a task with priority Q > P. When it does, the situation is called priority inversion, and it is to be avoided.
  • The basic solution is some form of priority inheritance: when a task with priority Q waits on some resource, the holder (with priority P) temporarily inherits priority Q if Q > P. Inheritance may also be needed when tasks coordinate with IPC (see the sketch below).
  • Inheritance is useful to meet deadlines and preserve low-jitter execution, as well as to honor priorities.
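  A minimal sketch of requesting priority inheritance on a lock, using the POSIX PTHREAD_PRIO_INHERIT mutex protocol; this particular API is not from the slides, which describe only the policy:

      #include <pthread.h>

      pthread_mutex_t resourceLock;

      void InitResourceLock() {
          pthread_mutexattr_t attr;
          pthread_mutexattr_init(&attr);
          // While a low-priority task holds resourceLock, it temporarily inherits
          // the priority of the highest-priority task blocked waiting for it.
          pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
          pthread_mutex_init(&resourceLock, &attr);
          pthread_mutexattr_destroy(&attr);
      }

  Threads then take the lock with pthread_mutex_lock(&resourceLock) as usual; the temporary priority boost is applied by the kernel.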

  11. Multithreading: Pros and Cons

  Multithreaded structure has many advantages:
  • Express different activities cleanly as independent thread bodies, with appropriate priorities.
  • Activities succeed or fail independently.
  • It is easy to wait/sleep without affecting other activities: e.g., I/O operations may be blocking.
  • Extends easily to multiprocessors.

  ...but it also has some disadvantages:
  • Requires support for threads or processes.
  • Requires more careful synchronization.
  • Imposes context-switching overhead.
  • May consume lots of space for the stacks of blocked threads.

  Alternative: Event-Driven Systems

  Structure the code as a single thread that responds to a series of events, each of which carries enough state to determine what is needed and "pick up where we left off". The thread continuously polls for new events whenever it completes a previous event. If handling some event requires waiting for I/O to complete, the thread arranges for another event to notify it of completion and keeps right on going, e.g., asynchronous non-blocking I/O.

      while (TRUE) {
          event = GetNextEvent();
          switch (event) {
          case IncomingPacket:
              HandlePacket();
              break;
          case DiskCompletion:
              HandleDiskCompletion();
              break;
          case TimerExpired:
              RunPeriodicTasks();
              break;
          /* etc. etc. etc. */
          }
      }

  Question: in what order should events be delivered?
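  A minimal sketch of this structure with an explicit event queue and dispatcher; the Event type, EventQueue class, and handler stubs are assumptions rather than the slide's own definitions:

      #include <condition_variable>
      #include <mutex>
      #include <queue>

      enum class EventType { IncomingPacket, DiskCompletion, TimerExpired };
      struct Event { EventType type; /* plus enough state to pick up where we left off */ };

      class EventQueue {
      public:
          void Post(Event e) {                      // called by I/O callbacks, timers, etc.
              { std::lock_guard<std::mutex> g(lock_); events_.push(e); }
              ready_.notify_one();
          }
          Event GetNextEvent() {                    // blocks until an event is available
              std::unique_lock<std::mutex> lock(lock_);
              ready_.wait(lock, [&] { return !events_.empty(); });
              Event e = events_.front();
              events_.pop();
              return e;
          }
      private:
          std::mutex lock_;
          std::condition_variable ready_;
          std::queue<Event> events_;
      };

      void HandlePacket(const Event&)         { /* ... */ }
      void HandleDiskCompletion(const Event&) { /* ... */ }
      void RunPeriodicTasks()                 { /* ... */ }

      // The single event-driven thread: dispatch each event, never block in a handler.
      void RunEventLoop(EventQueue& q) {
          for (;;) {
              Event event = q.GetNextEvent();
              switch (event.type) {
              case EventType::IncomingPacket:  HandlePacket(event);         break;
              case EventType::DiskCompletion:  HandleDiskCompletion(event); break;
              case EventType::TimerExpired:    RunPeriodicTasks();          break;
              }
          }
      }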
