System Structuring with Threads

Example: A Transcoding Web Proxy Appliance

“Proxy”

Interposed between Web (HTTP) clients and servers. Masquerade as (represent) the server to the client. Masquerade as (represent) the client to the server.

Cache

Store fetched objects (Web pages) on local disk. Reduce network overhead if objects are fetched again.

Transcoding

“Distill” images to size/resolution that’s right for client. Encrypt/decrypt as needed for security on the Internet.

Appliance

Serves one purpose only; no general-purpose OS.

[Diagram: the proxy appliance interposed between Web clients and servers]


Using Threads to Structure the Proxy Server

[Diagram: thread structure of the proxy — network driver, HTTP request handler, disk driver, scrubber, stats, object cache manager, distill, encrypt, logging]

long-term periodic threads

gather statistics
“scrub” cache for expired (old) objects

worker threads for specific objects

distiller compresses/shrinks images
encrypt/decrypt

device controller threads

one thread for each disk
one thread for the network interface

logging thread

server threads

request handlers

Thread Family Tree for the Proxy Server

[Diagram: thread family tree — the main thread spawns the network driver, HTTP request handler, disk driver, scrubber, stats, file/cache manager, distill, encrypt, and logging threads]

main thread: waiting for child termination
periodic threads: waiting for timer to fire
server threads: waiting on queues of data messages or pending requests (e.g., device interrupts)
worker threads: waiting for data to be produced/consumed


Periodic Threads and Timers

The scrubber and stats-gathering threads must wake up periodically to do their work.

These “background” threads are often called daemons or sleepers.

AlarmClock::Pause(int howlong);  /* called by waiting threads */

Puts the calling thread to sleep; maintains a collection of threads waiting for time to pass.

AlarmClock::Tick();  /* called by clock interrupt handler */

Wakes up any waiting threads whose wait times have elapsed.

A periodic thread such as the scrubber or the stats gatherer loops:

    while (systemActive) {
        do my work;
        alarm->Pause(10000);
    }
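The same sleeper pattern can be sketched with a condition variable standing in for the AlarmClock: `wait_for` plays the role of `Pause`, and notifying the condition variable on shutdown plays the role of a final `Tick`. The class and names below are illustrative, not the AlarmClock interface itself.

```cpp
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <thread>

// A periodic "daemon" thread: does its work, then sleeps until either the
// interval elapses or shutdown is requested (so it can exit promptly).
class PeriodicThread {
public:
    PeriodicThread(std::chrono::milliseconds interval, std::function<void()> work)
        : interval_(interval), work_(std::move(work)),
          runner_([this] { Run(); }) {}

    ~PeriodicThread() {
        {
            std::lock_guard<std::mutex> lk(m_);
            stop_ = true;
        }
        cv_.notify_all();            // plays the role of a final Tick()
        runner_.join();
    }

private:
    void Run() {
        std::unique_lock<std::mutex> lk(m_);
        while (!stop_) {
            lk.unlock();
            work_();                 // "do my work"
            lk.lock();
            // "alarm->Pause(interval)": sleep until timeout or shutdown.
            cv_.wait_for(lk, interval_, [this] { return stop_; });
        }
    }

    std::chrono::milliseconds interval_;
    std::function<void()> work_;
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_ = false;
    std::thread runner_;             // declared last: starts after the rest
};
```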

Interfacing with the Network

NetRcv and NetTx threads sit above the TCP/IP protocol stack and the NIC device driver.

[Diagram: the Network Interface Card moves packets between the network link and a host-memory buffer pool across the I/O bus, with separate sending and receiving paths]


Network Reception

receive interrupt handler:

    packetArrival->V();

network receive thread (feeding TCP/IP reception and the HTTP request handler):

    while (systemActive) {
        packetArrival->P();
        disable interrupts;
        pkt = GetRcvPacket();
        enable interrupts;
        HandleRcvPacket(pkt);
    }

This example illustrates use of a semaphore by an interrupt handler to pass incoming data to waiting threads.

Inter-Thread Messaging with Send/Receive

HTTP request handler (network receive side):

    while (systemActive) {
        object = GetNextClientRequest();
        find object in cache or at Web server;
        while (more data in object) {
            currentThread->receive(data);
            transmit data to client;
        }
    }

file/cache manager (network send side):

    get request for object from thread;
    while (more data in object) {
        read data from object;
        thread->send(data);
    }

This example illustrates use of blocking send/receive primitives to pass a stream of messages or commands to a specific thread, connection, or “port”.
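A blocking send/receive primitive of this kind can be sketched as a per-thread mailbox. This is an illustrative class, not any particular thread library's API.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

// One mailbox per receiving thread: Send never loses a message, and
// Receive blocks the caller until a message is available.
template <typename T>
class Mailbox {
public:
    void Send(T msg) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push(std::move(msg));
        }
        cv_.notify_one();
    }

    T Receive() {                    // blocks until a message arrives
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T msg = std::move(q_.front());
        q_.pop();
        return msg;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};
```

Because the queue is FIFO, a stream of data chunks sent to one thread arrives in order, which is what the object-streaming loop above relies on.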


Request/Response with Send/Receive

HTTP request handler:

    Thread* cache;
    ...
    cache->send(request);
    response = currentThread->receive();

file/cache manager:

    while (systemActive) {
        currentThread->receive(request);
        ...
        requester->send(response);
    }
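One way to sketch request/response is to have each request carry its own reply slot, so the server knows where to send the answer; here `std::promise`/`std::future` stand in for `requester->send(response)` and `currentThread->receive()`. All names are illustrative, including the shutdown sentinel.

```cpp
#include <condition_variable>
#include <future>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Each request carries a promise; the manager fulfills it, and the
// requester blocks on the matching future until the response arrives.
struct Request {
    int objectId;
    std::promise<std::string> reply;
};

std::queue<Request> requests;
std::mutex m;
std::condition_variable cv;

void CacheManager() {                   // "file/cache manager" loop
    for (;;) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return !requests.empty(); });
        Request r = std::move(requests.front());
        requests.pop();
        lk.unlock();
        if (r.objectId < 0) return;     // shutdown sentinel (illustrative)
        r.reply.set_value("object-" + std::to_string(r.objectId));
    }
}

std::string Fetch(int objectId) {       // called by an HTTP request handler
    Request r{objectId, {}};
    std::future<std::string> f = r.reply.get_future();
    {
        std::lock_guard<std::mutex> lk(m);
        requests.push(std::move(r));
    }
    cv.notify_one();
    return f.get();                     // block until the manager responds
}
```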

The Need for Multiple Service Threads

[Diagram: requests flow from the network to the HTTP request handler and on to the file/cache manager]

Each new request will involve a stream of messages passing through dedicated server thread(s) in each service module. But what about new requests flowing into the system? A system with single-threaded service modules could handle only one request at a time, even if most of its time is spent waiting on slow devices.

Solution: multi-threaded service modules.


Using Ports for Multithreaded Servers

HTTP request handler:

    Port* cachePort;
    ...
    cachePort->send(request);
    response = currentThread->receive();

file/cache manager (any of several identical threads):

    while (systemActive) {
        cachePort->receive(request);
        ...
        requester->send(response);
    }
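A port can be sketched as a single queue that a whole crew of identical service threads receive from, so several requests can be in progress at once. Names and the shutdown sentinel are illustrative.

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A "port": one queue shared by many service threads.
class Port {
public:
    void Send(int req) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push(req);
        }
        cv_.notify_one();                // wake one of the blocked servers
    }
    int Receive() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        int req = q_.front();
        q_.pop();
        return req;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<int> q_;
};

// Spin up n identical service threads, all receiving from the same port.
std::atomic<int> handled{0};

void RunServers(Port& port, int n, int nreqs) {
    std::vector<std::thread> crew;
    for (int i = 0; i < n; i++)
        crew.emplace_back([&] {
            for (;;) {
                int req = port.Receive();
                if (req < 0) return;     // shutdown sentinel (illustrative)
                handled++;               // "handle" the request
            }
        });
    for (int r = 0; r < nreqs; r++) port.Send(r);
    for (int i = 0; i < n; i++) port.Send(-1);   // one sentinel per server
    for (auto& t : crew) t.join();
}
```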

Producer/Consumer Pipes

file/cache manager (consuming a stream produced by the network side):

    char inbuffer[1024];
    char outbuffer[1024];
    int inbytes = input->read(inbuffer, 1024);
    while (inbytes != 0) {
        outbytes = process data from inbuffer to outbuffer;
        output->write(outbuffer, outbytes);
        inbytes = input->read(inbuffer, 1024);
    }

This example illustrates one important use of the producer/consumer bounded buffer in Lab #3.
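A bounded-buffer pipe in this style might look like the following sketch (byte-at-a-time for brevity; an illustrative class, not the Lab #3 interface itself). Writers block when the buffer is full, readers block when it is empty.

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <string>
#include <thread>

// Classic producer/consumer bounded buffer with two condition variables:
// notFull_ throttles the writer, notEmpty_ wakes the reader.
class Pipe {
public:
    explicit Pipe(std::size_t capacity) : cap_(capacity) {}

    void Write(char c) {
        std::unique_lock<std::mutex> lk(m_);
        notFull_.wait(lk, [this] { return buf_.size() < cap_; });
        buf_.push_back(c);
        notEmpty_.notify_one();
    }

    char Read() {
        std::unique_lock<std::mutex> lk(m_);
        notEmpty_.wait(lk, [this] { return !buf_.empty(); });
        char c = buf_.front();
        buf_.pop_front();
        notFull_.notify_one();
        return c;
    }

private:
    std::size_t cap_;
    std::deque<char> buf_;
    std::mutex m_;
    std::condition_variable notFull_, notEmpty_;
};
```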


Forking and Joining Workers

HTTP handler:

    distiller = new Thread();
    distiller->Fork(Distill());
    decrypter = new Thread();
    decrypter->Fork(Decrypt());
    pipe = new Pipe();

    /* give workers their input */
    distiller->Send(input);
    decrypter->Send(pipe);

    /* give workers their output */
    distiller->Send(pipe);
    decrypter->Send(output);

    /* wait for workers to finish */
    distiller->Join();
    decrypter->Join();

[Diagram: input → distiller → pipe → decrypter → output, both workers forked by the HTTP handler]
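With `std::thread`, fork is thread construction and join is `join()`. Below is a simplified sketch of the pattern: the "distill" and "decrypt" steps are stand-ins of our own invention, and the Pipe is replaced by a plain buffer plus join ordering, so the two stages run one after the other rather than concurrently.

```cpp
#include <cctype>
#include <string>
#include <thread>

// Fork two workers wired together by a shared intermediate buffer, joining
// each one before its output is consumed.
std::string DistillThenDecrypt(const std::string& input) {
    std::string pipe, output;
    std::thread distiller([&] {          // "distill": shrink the data
        for (std::size_t i = 0; i < input.size(); i += 2)
            pipe += input[i];            // keep every other byte
    });
    distiller.join();                    // wait for the distiller
    std::thread decrypter([&] {          // "decrypt": here, just upcase
        for (char c : pipe)
            output += static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
    });
    decrypter.join();                    // wait for the decrypter
    return output;
}
```

A real pipeline would run both workers at once with a bounded pipe between them, as in the slide; the join ordering here is only to keep the sketch free of extra synchronization.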

A Serializer for Logging

[Diagram: many threads enqueue records on the log queue; the logging thread drains it to the disk driver]

Multiple threads enqueue log records on a single queue without blocking for log write completion; a single logging thread writes the records into a stream, so log records are not interleaved.
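A sketch of the serializer, with illustrative names; "writing" a record here just appends it to an in-memory stream, where a real system would hand it to the disk driver.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Log() never waits for the write to complete; the single LoggerThread is
// the only writer, so records are never interleaved.
std::queue<std::string> pending;
std::mutex logLock;
std::condition_variable logCv;
std::vector<std::string> logStream;   // stands in for the on-disk log

void Log(const std::string& rec) {    // called by any thread; non-blocking
    {
        std::lock_guard<std::mutex> lk(logLock);
        pending.push(rec);
    }
    logCv.notify_one();
}

void LoggerThread() {                 // the single serializer thread
    for (;;) {
        std::unique_lock<std::mutex> lk(logLock);
        logCv.wait(lk, [] { return !pending.empty(); });
        std::string rec = pending.front();
        pending.pop();
        if (rec == "#quit") return;   // shutdown sentinel (illustrative)
        logStream.push_back(rec);     // one writer => no interleaving
    }
}
```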


Summary of “Paradigms” for Using Threads

  • main thread or initiator
  • sleepers or daemons (background threads)
  • I/O service threads

listening on network or user interface

  • server threads or Work Crews

waiting for requests on a message queue, work queue, or port

  • filters or transformers
one stage of a pipeline processing a stream of bytes
  • serializers

Threads vs. Events


Review: Thread-Structured Proxy Server

[Diagram: thread family tree — the main thread spawns the network driver, HTTP request handler, disk driver, scrubber, stats, file/cache manager, distill, encrypt, and logging threads]

main thread: waiting for child termination
periodic threads: waiting for timer to fire
server threads: waiting on queues of data messages or pending requests (e.g., device interrupts)
worker threads: waiting for data to be produced/consumed

Summary of “Paradigms” for Using Threads

  • main thread or initiator
  • sleepers or daemons (background threads)
  • I/O service threads

listening on network or user interface

  • server threads or Work Crews

waiting for requests on a message queue, work queue, or port

  • filters or transformers
one stage of a pipeline processing a stream of bytes
  • serializers

Thread Priority

Many systems allow assignment of priority values to threads.

Each job in the ready pool has an associated priority value; the scheduler favors jobs with higher priority values.

  • Assigned priorities reflect external preferences for particular users or tasks.

“All jobs are equal, but some jobs are more equal than others.”

  • Example: running user interface threads (interactive) at higher priority improves the responsiveness of the system.
  • Example: the Unix nice system call lowers the priority of a task.
  • Example: urgent tasks in a real-time process control system.

Keeping Your Priorities Straight

Priorities must be handled carefully when there are dependencies among tasks with different priorities.

  • A task with priority P should never impede the progress of a task with priority Q > P. When it does, this is called priority inversion, and it is to be avoided.
  • The basic solution is some form of priority inheritance: when a task with priority Q waits on some resource, the holder (with priority P) temporarily inherits priority Q if Q > P. Inheritance may also be needed when tasks coordinate with IPC.
  • Inheritance is useful to meet deadlines and preserve low-jitter execution, as well as to honor priorities.
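POSIX exposes priority inheritance directly on mutexes. A minimal sketch, assuming a system that supports `_POSIX_THREAD_PRIO_INHERIT` (the helper name is ours):

```cpp
#include <pthread.h>

// Initialize a mutex with the priority-inheritance protocol: a low-priority
// thread holding this mutex is temporarily boosted to the priority of the
// highest-priority thread blocked on it.
void InitPriorityInheritingMutex(pthread_mutex_t* m) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
}
```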


Multithreading: Pros and Cons

Multithreaded structure has many advantages...

  • Expresses different activities cleanly as independent thread bodies, with appropriate priorities.
  • Activities succeed or fail independently.
  • It is easy to wait/sleep without affecting other activities: e.g., I/O operations may be blocking.
  • Extends easily to multiprocessors.

...but it also has some disadvantages.

  • Requires support for threads or processes.
  • Requires more careful synchronization.
  • Imposes context-switching overhead.
  • May consume lots of space for the stacks of blocked threads.

Alternative: Event-Driven Systems

    while (TRUE) {
        event = GetNextEvent();
        switch (event) {
        case IncomingPacket:
            HandlePacket();
            break;
        case DiskCompletion:
            HandleDiskCompletion();
            break;
        case TimerExpired:
            RunPeriodicTasks();
            break;
        /* etc. etc. etc. */
        }
    }

Structure the code as a single thread that responds to a series of events, each of which carries enough state to determine what is needed and to “pick up where we left off”. The thread continuously polls for new events, whenever it completes a previous event. If handling some event requires waiting for I/O to complete, the thread arranges for another event to notify it of completion and keeps right on going, e.g., using asynchronous non-blocking I/O.

Question: in what order should events be delivered?
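The loop above can be sketched as runnable C++; the event set and the trace are illustrative stand-ins for real device interfaces.

```cpp
#include <deque>
#include <string>
#include <vector>

// Single-threaded event loop: each handler runs to completion, then the
// loop polls for the next event. A handler that blocked would stall every
// other activity in the system.
enum Event { IncomingPacket, DiskCompletion, TimerExpired, Quit };

std::vector<std::string> trace;          // records what was handled, in order

void RunEventLoop(std::deque<Event> events) {
    while (!events.empty()) {
        Event e = events.front();        // "GetNextEvent()"
        events.pop_front();
        switch (e) {
        case IncomingPacket:  trace.push_back("packet"); break;
        case DiskCompletion:  trace.push_back("disk");   break;
        case TimerExpired:    trace.push_back("timer");  break;
        case Quit:            return;
        }
    }
}
```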


Example: Unix Select Syscall

A thread/process with multiple network connections or open files can initiate nonblocking I/O on all of them. The Unix select system call supports such a polling model:

  • files are identified by file descriptors (open file numbers)
  • pass a bitmask for which descriptors to query for readiness
  • returns a bitmask of descriptors ready for reading/writing
  • reads and/or writes on these descriptors will not block

select has fundamental scaling limitations in storing, passing, and traversing these bitmasks.
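A minimal, runnable use of select on a POSIX system: poll a single descriptor with a zero timeout and ask whether it is ready for reading. The helper name is ours; the self-pipe is just a convenient way to make a descriptor readable.

```cpp
#include <sys/select.h>
#include <unistd.h>

// Returns true if fd is ready for reading right now (non-blocking poll).
bool IsReadable(int fd) {
    fd_set readfds;
    FD_ZERO(&readfds);                 // clear the bitmask
    FD_SET(fd, &readfds);              // query this one descriptor
    struct timeval timeout = {0, 0};   // zero timeout: poll, don't block
    int n = select(fd + 1, &readfds, nullptr, nullptr, &timeout);
    return n > 0 && FD_ISSET(fd, &readfds);
}
```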

Event Notification with Upcalls

Problem: what if an event requires a more “immediate” notification?

  • What if a high-priority event occurs while we are executing the handler for a low-priority event?
  • What about exceptions relating to the handling of an event?

We need some way to preemptively “break in” to the execution of a thread and notify it of events: upcalls.

  • Example: NT Asynchronous Procedure Calls (APCs)
  • Example: Unix signals

Preemptive event handling raises synchronization issues similar to interrupt handling.


Example: Unix Signals

Signals notify processes of internal or external events.

  • the Unix software equivalent of interrupts/exceptions
  • only way to do something to a process “from the outside”
  • Unix systems define a small set of signal types

Examples of signal generation:

  • keyboard ctrl-c and ctrl-z signal the foreground process
  • synchronous fault notifications, syscall errors
  • asynchronous notifications from other processes via kill
  • IPC events (SIGPIPE, SIGCHLD)
  • alarm notifications

signal == “upcall”

Handling Unix Signals

  • 1. Each signal type has a system-defined default action.

abort and dump core (SIGSEGV, SIGBUS, etc.)
ignore, stop, exit, continue

  • 2. A process may choose to block (inhibit) or ignore some signal types.

This is useful for synchronizing with signal handlers: inhibit signals before executing code shared with the signal handler.

  • 3. A process may choose to catch some signal types by specifying a (user-mode) handler procedure.

The system passes the interrupted context to the handler; the handler may munge and/or return to the interrupted context.
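Steps 1 and 3 can be sketched with sigaction on a POSIX system; the handler and flag names are illustrative.

```cpp
#include <csignal>
#include <signal.h>

// Catch SIGUSR1 with a user-mode handler. volatile sig_atomic_t is the
// one type a handler can safely write without further synchronization.
volatile sig_atomic_t gotSignal = 0;

void OnSigusr1(int) { gotSignal = 1; }

void InstallHandler() {
    struct sigaction sa = {};
    sa.sa_handler = OnSigusr1;     // override the default action
    sigemptyset(&sa.sa_mask);      // block no extra signals in the handler
    sigaction(SIGUSR1, &sa, nullptr);
}
```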


Summary

  • 1. Threads are a useful tool for structuring complex systems.

Separate the code that handles concurrent activities that are logically separate, with easy handling of priority. Interaction primitives integrate synchronization, data transfer, and possibly priority inheritance.

  • 2. Many systems include an event handling mechanism.

Useful in conjunction with threads, or may be viewed as an alternative to threads for structuring concurrent systems. Examples: Unix signals, NT APCs, GetNextEvent().

  • 3. Event-structured systems may require less direct handling of concurrency.

But they must synchronize with handlers if the handlers are preemptive.