SEDA: An Architecture for Well-Conditioned Scalable Internet - - PowerPoint PPT Presentation



SLIDE 1

SEDA: An Architecture for Well-Conditioned Scalable Internet Services

Matt Welsh, David Culler, and Eric Brewer

presented by Ahn, Ki Yung

SLIDE 2

Staged Event-Driven Architecture

Designed for highly concurrent Internet services
Applications are networks of stages
Stages are driven by events
Stages are connected by explicit event queues
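The staged model on this slide can be sketched in a few lines of Python. This is an illustrative sketch only; the class and method names (`Stage`, `enqueue`) are my own, not the SEDA/Sandstorm API. Each stage owns an explicit event queue, an event handler, and a thread that drains the queue and forwards results to the next stage's queue.

```python
# Illustrative sketch of the staged model: each stage drains its own
# explicit event queue and emits results into the next stage's queue.
import queue
import threading

class Stage:
    def __init__(self, handler, out=None):
        self.events = queue.Queue()   # explicit event queue
        self.out = out                # next stage in the network (or None)
        self.handler = handler        # application-supplied event handler
        threading.Thread(target=self._run, daemon=True).start()

    def enqueue(self, event):
        self.events.put(event)

    def _run(self):
        while True:
            event = self.events.get()
            result = self.handler(event)
            if self.out is not None:
                self.out.enqueue(result)

# Wire a two-stage network: upcase -> sink.
results = queue.Queue()
sink = Stage(lambda e: results.put(e))
upcase = Stage(str.upper, out=sink)
upcase.enqueue("request")
print(results.get())   # -> REQUEST
```

Because stages communicate only through queues, each one can be scheduled and resource-managed independently, which is the property the rest of the talk builds on.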

SLIDE 3

Internet is a problem

millions of users demanding access
more complex and dynamic content
traditional OS design does not fit

  • multiprogramming for resource virtualization

replication and clustering do not always suffice

  • peak load may seldom occur
SLIDE 4

Well-Conditioned service

Behaves like a simple pipeline
Throughput increases in proportion to the load until it saturates (pipeline full)
Graceful degradation: when overloaded, throughput does not degrade; a linear response-time penalty is applied to all clients equally

SLIDE 5

Figure 2: Threaded server throughput degradation: throughput (tasks/sec) and latency (msec) plotted against the number of threads (1 to 1024), with the linear (ideal) latency shown for comparison.

SLIDE 6

Figure 4: Event-driven server throughput: this benchmark measures an event-driven version of the server from Figure 2; throughput (tasks/sec) and latency (msec) are plotted against the number of tasks in the pipeline, with the linear (ideal) latency shown for comparison.

SLIDE 7

Concurrency Models

Thread-per-Request

  • throughput degrades for a large number of concurrent requests

Bounded Thread Pools

  • can avoid throughput degradation
  • response time may be extremely unfair

Event-Driven

  • robust to load
  • event handler should not block
  • application should schedule & order events
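The middle model above, a bounded thread pool, can be sketched with Python's standard library. `MAX_THREADS` and the handler are illustrative stand-ins, not values from the paper: at most `MAX_THREADS` requests run concurrently, excess requests wait in the executor's internal queue, which caps the throughput collapse of thread-per-request but can make per-request response time unfair under heavy load.

```python
# Sketch of a bounded thread pool: concurrency is capped at MAX_THREADS,
# so throughput does not collapse, but queued requests wait their turn.
from concurrent.futures import ThreadPoolExecutor

MAX_THREADS = 4  # illustrative bound

def handle_request(n):
    # stand-in for real request work (parsing, I/O, ...)
    return n * n

with ThreadPoolExecutor(max_workers=MAX_THREADS) as pool:
    results = list(pool.map(handle_request, range(8)))

print(results)  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

The event-driven model instead keeps one (or a few) threads and requires handlers that never block, which is why the slide notes that the application itself must schedule and order events.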
SLIDE 8

Figure 3: Event-driven server design: this figure shows the flow of events through a single scheduler that drives per-request finite state machines (request FSM 1 … request FSM N) with network and disk events.

Figure 1: Threaded server design: each incoming request is dispatched by a dispatcher to its own thread, which sends the result back over the network.

SLIDE 9

SEDA goals

massive concurrency

  • event-driven execution, asynchronous IO

simplify construction

  • provide scheduling, resource management

enable introspection on event queues

  • the application can have control over events

self-tuning resource management

  • thread pool controller, batching controller
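The thread pool controller idea can be sketched as a small feedback rule; the thresholds and bounds below are made up for illustration, not taken from the paper: observe the stage's queue length, add a thread when the queue backs up past a threshold, and reclaim a thread when the stage goes idle.

```python
# Sketch of a self-tuning thread pool controller: adjust the pool size
# from the observed queue length. All constants are illustrative.
def adjust_pool(queue_length, pool_size,
                threshold=100, max_threads=20, min_threads=1):
    if queue_length > threshold and pool_size < max_threads:
        return pool_size + 1      # queue backing up: add a thread
    if queue_length == 0 and pool_size > min_threads:
        return pool_size - 1      # stage idle: reclaim a thread
    return pool_size              # within the operating regime: no change

print(adjust_pool(150, 4))  # -> 5  (backlog: grow)
print(adjust_pool(0, 4))    # -> 3  (idle: shrink)
print(adjust_pool(50, 4))   # -> 4  (steady: unchanged)
```

The point of making this a per-stage controller is that the application writer never picks thread counts by hand; the stage tunes itself to its load.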
SLIDE 10

Conclusion & Discussion

SEDA is a combination of the threaded model and the event-driven model
event-driven stages: modularity
explicit event queues: control over events
dynamic controllers

  • scheduling & resource management

easier to build well-conditioned services
SEDA can be a new OS design model

  • more control over scheduling & resource
  • shared virtualized resource not necessary
SLIDE 11

[Diagram: stages (Socket listen, Socket read, HttpParse, PageCache, CacheMiss, file I/O, HttpSend, Socket write) connected by queues carrying connections, packets, HTTP requests, cache hits and misses, I/O requests, and file data.]

Figure 5: Staged event-driven (SEDA) HTTP server: This is a structural representation of the SEDA-based Web server, described in detail in Section 5.1. The application is composed as a set of stages separated by queues. Edges represent the flow of events between stages. Each stage can be independently managed, and stages can be run in sequence or in parallel, or a combination of the two. The use of event queues allows each stage to be individually load-conditioned, for example, by thresholding its event queue. For simplicity, some event paths and stages have been elided from this figure.


Figure 6: A SEDA Stage: A stage consists of an incoming event queue, a thread pool, and an application-supplied event handler. The stage’s operation is managed by the controller, which adjusts resource allocations and scheduling dynamically.

[Diagram: (a) the thread pool controller observes the event queue length against a threshold and adjusts the thread pool size; (b) the batching controller observes the stage's output rate as a running average and adjusts the batching factor.]

Figure 7: SEDA resource controllers: Each stage has an associated controller that adjusts its resource allocation and behavior to keep the application within its operating regime. The thread pool controller adjusts the number of threads executing within the stage, and the batching controller adjusts the number of events processed by each iteration of the event handler.
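The batching controller from Figure 7(b) can also be sketched as a feedback rule. This is an assumed simplification, with made-up constants: keep trying a smaller batching factor while the observed output rate holds, and back off multiplicatively when the rate drops, meaning the batch became too small.

```python
# Sketch of the batching-controller idea: shrink the batching factor
# (events per handler invocation) while throughput holds; grow it back
# when throughput drops. All constants are illustrative.
def adjust_batch(batch, rate, prev_rate,
                 step=1, backoff=2, min_batch=1, max_batch=1024):
    if rate < prev_rate:
        # throughput fell: the batch got too small, back off upward
        return min(batch * backoff, max_batch)
    # throughput steady or improving: probe a smaller batch
    return max(batch - step, min_batch)

print(adjust_batch(8, rate=90, prev_rate=100))   # -> 16 (rate fell: grow)
print(adjust_batch(8, rate=100, prev_rate=90))   # -> 7  (rate held: shrink)
```

Smaller batches lower per-event latency, while larger batches amortize per-invocation overhead; the controller searches for the smallest batch that sustains throughput.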

SLIDE 12

SEDA seems similar to what I’ve seen before

Distributed Systems

  • Computers : stages
  • OS (TCP stack) : event queue & scheduler

SEDA is like a model implementation of a distributed system in a single machine

  • That is ... MULTIPROGRAMMING with IPC?
SLIDE 13

Questions

How did they implement the SEDA queue?
How are events and asynchronous I/O arranged?
Is SEDA essentially different from multiprocess/multithreaded programming with message passing, or from distributed systems?