  1. HTTP Servers: Concurrency and Performance
     Jeff Chase, Duke University

     HTTP Server
     • Creates a socket (socket)
     • Binds to an address (bind)
     • Listens to set up the accept backlog (listen)
     • Can call accept to block waiting for connections
     • (Can call select to check for data on multiple sockets)
     • Handles the request:
         GET /index.html HTTP/1.0\n
         <optional body, multiple lines>\n
         \n

     Inside your server
     • Server application (Apache, Tomcat/Java, etc.)
     • Measures: offered load, response time, throughput, utilization
     • Request queues inside the server: packet queues, listen queue, accept queue

     Example: Video On Demand
       Server() {
         while (1) {
           cfd = accept();
           read(cfd, name);
           fd = open(name);
           while (!eof(fd)) {
             read(fd, block);
             write(cfd, block);
           }
           close(cfd); close(fd);
         }
       }

       Client() {
         fd = connect("server");
         write(fd, "video.mpg");
         while (!eof(fd)) {
           read(fd, buf);
           display(buf);
         }
       }
     How many clients can the server support? Suppose, say, 200 Kbit/s video
     on a 100 Mbit/s network link? [MIT/Morris]

     Performance "analysis"
     • Server capacity:
       – Network (100 Mbit/s)
       – Disk (20 Mbyte/s)
     • Obtained performance: one client stream
     • Server is limited by software structure
     • If a video is 200 Kbit/s, the server should be able to support more
       than one client. 500? (see the back-of-the-envelope arithmetic below)
     [MIT/Morris]

     WebServer Flow (a C sketch of these steps follows below)
     • Create ServerSocket
     • connSocket = accept()
     • Read request from connSocket
     • Read local file
     • Write file to connSocket
     • Close connSocket
     Discussion: what does each step do and how long does it take?
     (Figure: the server's TCP socket space on hosts 128.36.232.5 and
     128.36.230.2: a listening socket at {*.6789, *.*} with its completed
     connection queue and send/receive buffers, an established socket at
     {128.36.232.5:6789, 198.69.10.10:1500}, and a second listening socket
     at {*.25, *.*}.)
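
A minimal C sketch of the socket lifecycle outlined on the "HTTP Server" and "WebServer Flow" slides: socket, bind, listen, then an accept loop that reads a request and streams a file back, in the spirit of the Server() pseudocode. This is an illustration, not the deck's code: the port number 6789 is borrowed from the socket-space figure, the "request parsing" simply treats whatever the client sends as a file name, and error handling is reduced to exit-on-failure.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>
  #include <fcntl.h>
  #include <netinet/in.h>
  #include <sys/socket.h>

  int main(void) {
      int lfd = socket(AF_INET, SOCK_STREAM, 0);               /* create a socket */
      if (lfd < 0) { perror("socket"); exit(1); }

      struct sockaddr_in addr = {0};
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_ANY);
      addr.sin_port = htons(6789);
      if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {  /* bind to an address */
          perror("bind"); exit(1);
      }
      if (listen(lfd, 128) < 0) { perror("listen"); exit(1); }  /* set up the accept backlog */

      while (1) {
          int cfd = accept(lfd, NULL, NULL);                    /* block waiting for a connection */
          if (cfd < 0) continue;

          char name[256];
          ssize_t n = read(cfd, name, sizeof(name) - 1);        /* read request from connSocket */
          if (n > 0) {
              name[n] = '\0';
              name[strcspn(name, "\r\n")] = '\0';
              int fd = open(name, O_RDONLY);                    /* read local file */
              if (fd >= 0) {
                  char block[4096];
                  ssize_t k;
                  while ((k = read(fd, block, sizeof(block))) > 0)
                      write(cfd, block, (size_t)k);             /* write file to connSocket */
                  close(fd);
              }
          }
          close(cfd);                                           /* close connSocket */
      }
  }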

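To make the "500?" on the Performance "analysis" slide concrete, here is the back-of-the-envelope arithmetic it implies, using only the numbers given on that slide (100 Mbit/s network link, 20 Mbyte/s disk, 200 Kbit/s per video stream):

  \frac{100\ \text{Mbit/s}}{0.2\ \text{Mbit/s per stream}} = 500\ \text{streams (network-limited)}
  \qquad
  \frac{20\ \text{Mbyte/s} \times 8}{0.2\ \text{Mbit/s per stream}} = 800\ \text{streams (disk-limited)}

So the hardware could carry roughly 500 concurrent streams, with the network link as the tighter limit, yet the measured server delivers one client stream: the bottleneck is the software structure, which is what the rest of the deck addresses.
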
  2. Web Server Processing Steps
     • Accept Client Connection
     • Read HTTP Request Header
     • Find File
     • Send HTTP Response Header
     • Read File / Send Data
     Several steps may block: on the network (accepting connections, reading
     requests, sending data) or on disk I/O (finding and reading files).
     We want to be able to process requests concurrently.

     Process States and Transitions
     (Figure: process state diagram with running (user), running (kernel),
     ready, and blocked states, connected by interrupt, trap/return,
     exception, Yield, Sleep, Run, and Wakeup transitions.)

     Server Blocking (see the non-blocking I/O sketch below)
     • accept(), when no connect requests are waiting on the listen queue
       – What if the server has multiple ports to listen on?
         E.g., 80 for HTTP, 443 for HTTPS
     • open/read/write on server files
     • read() on a socket, if the client is sending too slowly
     • write() on a socket, if the client is receiving too slowly
       – Yup, TCP has flow control like pipes
     What if the server blocks while serving one client, and another client
     has work to do?

     Under the Hood
     (Figure: a queueing view of the server: requests start at arrival rate λ,
     circulate between the CPU and the I/O devices via I/O requests and I/O
     completions, and exit; throughput equals λ until some service center
     saturates.)

     Concurrency and Pipelining
     • Goal: run at the server's hardware speed
       – Disk or network should be the bottleneck
     • Method:
       – Pipeline the blocks of each request
       – Multiplex requests from multiple clients
     • Two implementation approaches:
       – Multithreaded server
       – Asynchronous I/O
     [MIT/Morris]

     Better single-server performance
     (Figure: "Before" and "After" timelines of CPU, DISK, and NET activity;
     with pipelining and multiplexing, the CPU, disk, and network are kept
     busy concurrently instead of idling while one request waits.)
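
The slides above list the calls on which a single-process server can block. One standard way to keep a slow client from stalling the whole server, sketched here as an illustration rather than anything shown in the deck, is to put the socket into non-blocking mode with fcntl, so that read() and write() return -1 with errno set to EWOULDBLOCK/EAGAIN instead of sleeping. The helper names make_nonblocking and try_read are hypothetical.

  #include <errno.h>
  #include <fcntl.h>
  #include <unistd.h>

  /* Put an already-open descriptor into non-blocking mode. */
  int make_nonblocking(int fd) {
      int flags = fcntl(fd, F_GETFL, 0);
      if (flags < 0) return -1;
      return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
  }

  /* Try to read without risking a stall on a slow sender. */
  ssize_t try_read(int fd, char *buf, size_t len) {
      ssize_t n = read(fd, buf, len);
      if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN))
          return -2;   /* nothing to read yet: go service another client */
      return n;        /* n > 0: data, n == 0: EOF, n < 0: real error */
  }

Non-blocking descriptors by themselves only turn blocking into polling; the event-driven designs later in the deck pair them with select() so the server sleeps until some descriptor is actually ready.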

  3. Concurrent threads or processes
     • Use multiple threads/processes so that only the flow processing a
       particular request is blocked
       – Java: extends Thread or implements the Runnable interface
     • Example: a multi-threaded web server, which creates a thread for each
       request

     Multiple Process Architecture
     (Figure: Process 1 ... Process N, each in a separate address space, each
     running the full pipeline: Accept Conn, Read Request, Find File, Send
     Header, Read File / Send Data.)
     • Advantages
       – Simple programming while addressing the blocking issue
     • Disadvantages
       – Many processes; large context switch overheads
       – Consumes much memory
       – Optimizations that share information among processes (e.g., caching)
         are harder

     Threads
     • A thread is a schedulable stream of control
       – defined by CPU register values (PC, SP)
       – suspend: save register values in memory
       – resume: restore registers from memory
     • Multiple threads can execute independently:
       – They can run in parallel on multiple CPUs (physical concurrency)
       – ...or arbitrarily interleaved on a single CPU (logical concurrency)
     • Each thread must have its own stack.

     Using Threads
     (Figure: Thread 1 ... Thread N in one address space, each running the
     full pipeline: Accept Conn, Read Request, Find File, Send Header,
     Read File / Send Data.)
     • Advantages
       – Lower context switch overheads
       – Shared address space simplifies optimizations (e.g., caches)
     • Disadvantages
       – Need kernel-level threads (why?)
       – Some extra memory is needed to support multiple stacks
       – Need thread-safe programs, synchronization

     Multithreaded server (a pthreads sketch follows below)
       server() {
         while (1) {
           cfd = accept();
           read(cfd, name);
           fd = open(name);
           while (!eof(fd)) {
             read(fd, block);
             write(cfd, block);
           }
           close(cfd); close(fd);
         }
       }

       for (i = 0; i < 10; i++)
         threadfork(server);
     • When a thread waits for I/O, the thread scheduler runs another thread
     • What about references to shared data? Synchronization.
     [MIT/Morris]

     Event-Driven Programming
     • One execution stream: no CPU concurrency
     • Register interest in events (callbacks)
     • The event loop waits for events and invokes handlers
     • No preemption of event handlers
     • Handlers are generally short-lived
     (Figure: an Event Loop dispatching to Event Handlers.)
     [Ousterhout 1995]
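
A pthreads rendering of the "Multithreaded server" pseudocode above, as a sketch: ten threads each run the same accept-and-serve loop, with pthread_create standing in for threadfork, and a mutex protects one piece of shared data to illustrate the synchronization point the slide raises. listen_fd is assumed to have been set up elsewhere with socket/bind/listen, and serve_client() is a placeholder for the read-name/open/stream loop.

  #include <pthread.h>
  #include <stdio.h>
  #include <sys/socket.h>
  #include <unistd.h>

  static int listen_fd;                               /* created elsewhere with socket/bind/listen */
  static long requests_served = 0;                    /* shared data */
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  static void serve_client(int cfd) {
      /* read the requested name, open it, and stream blocks back,
         as in the pseudocode above; omitted to keep the sketch short */
      close(cfd);
  }

  static void *server(void *arg) {
      (void)arg;
      while (1) {
          int cfd = accept(listen_fd, NULL, NULL);    /* only this thread blocks here */
          if (cfd < 0) continue;
          serve_client(cfd);

          pthread_mutex_lock(&lock);                  /* shared data needs synchronization */
          requests_served++;
          pthread_mutex_unlock(&lock);
      }
      return NULL;
  }

  int main(void) {
      /* ... set up listen_fd here ... */
      pthread_t tid[10];
      for (int i = 0; i < 10; i++)                    /* "threadfork(server)", ten times */
          pthread_create(&tid[i], NULL, server, NULL);
      for (int i = 0; i < 10; i++)
          pthread_join(tid[i], NULL);
      return 0;
  }

Having every thread call accept() on the same listening socket is a common pattern: the kernel hands each incoming connection to one of the waiting threads, and a thread that blocks on disk or network I/O simply lets the scheduler run another one.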

  4. Single Process Event Driven (SPED)
     (Figure: one process runs the pipeline Accept Conn, Read Request, Find
     File, Send Header, Read File / Send Data, driven by an Event Dispatcher.)
     • Single threaded
     • Asynchronous (non-blocking) I/O
     • Advantages
       – Single address space
       – No synchronization
     • Disadvantages
       – In practice, disk reads still block

     Asynchronous Multi-Process Event Driven (AMPED)
     (Figure: the same pipeline and Event Dispatcher, plus helper processes
     for disk I/O.)
     • Like SPED, but use helper processes/threads for disk I/O
     • Use IPC to communicate with the helper processes
     • Advantages
       – Shared address space for most web server functions
       – Concurrency for disk I/O
     • Disadvantages
       – IPC between the main thread and the helper threads
     This hybrid model is used by the "Flash" web server.

     Select
     • If a server has many open sockets, how does it know when one of them
       is ready for I/O?

         int select(int n, fd_set *readfds, fd_set *writefds,
                    fd_set *exceptfds, struct timeval *timeout);

     • Issues with scalability: alternative event interfaces have been offered.

     Event-Based Concurrent Servers Using I/O Multiplexing
     (see the select() sketch below)
     • Maintain a pool of connected descriptors.
     • Repeat the following forever:
       – Use the Unix select function to block until:
         • (a) a new connection request arrives on the listening descriptor, or
         • (b) new data arrives on an existing connected descriptor.
       – If (a), add the new connection to the pool of connections.
       – If (b), read any available data from the connection; close the
         connection on EOF and remove it from the pool.
     [CMU 15-213]

     Asynchronous I/O
     • Code is structured as a collection of handlers
     • Handlers are nonblocking
     • Create new handlers for blocking operations
     • When the operation completes, call the handler

       struct callback {
         bool (*is_ready)();
         void (*cb)(void *arg);
         void *arg;
       };
     [MIT/Morris]

     Asynchronous server
       init() {
         on_accept(accept_cb);
       }

       accept_cb(cfd) {
         on_readable(cfd, name_cb);
       }

       on_readable(fd, fn) {
         c = new callback(test_readable, fn, fd);
         add c to callback list;
       }

       name_cb(cfd) {
         read(cfd, name);
         fd = open(name);
         on_readable(fd, read_cb);
       }

       read_cb(cfd, fd) {
         read(fd, block);
         on_writeable(cfd, write_cb);
       }

       write_cb(cfd, fd) {
         write(cfd, block);
         on_readable(fd, read_cb);
       }

       main() {
         while (1) {
           for (c = each callback) {
             if (c->is_ready())
               c->cb(c->arg);
           }
         }
       }
     [MIT/Morris]
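
A C sketch of the I/O-multiplexing recipe above: keep the pool of connected descriptors in an fd_set, block in select(), accept new connections from the listening descriptor (case a), read available data from the others (case b), and drop a descriptor from the pool on EOF. listen_fd is assumed to be a listening socket set up elsewhere, and handle_data() is a placeholder for the request processing shown in the earlier pseudocode.

  #include <sys/select.h>
  #include <sys/socket.h>
  #include <unistd.h>

  static void handle_data(int fd, const char *buf, ssize_t n) {
      (void)fd; (void)buf; (void)n;        /* parse the request / send the reply here */
  }

  void event_loop(int listen_fd) {
      fd_set pool;                         /* the pool of descriptors */
      FD_ZERO(&pool);
      FD_SET(listen_fd, &pool);
      int maxfd = listen_fd;

      while (1) {
          fd_set ready = pool;             /* select() overwrites its argument */
          if (select(maxfd + 1, &ready, NULL, NULL, NULL) < 0)
              continue;

          for (int fd = 0; fd <= maxfd; fd++) {
              if (!FD_ISSET(fd, &ready))
                  continue;
              if (fd == listen_fd) {                    /* case (a): new connection request */
                  int cfd = accept(listen_fd, NULL, NULL);
                  if (cfd >= 0) {
                      FD_SET(cfd, &pool);
                      if (cfd > maxfd) maxfd = cfd;
                  }
              } else {                                  /* case (b): data or EOF */
                  char buf[4096];
                  ssize_t n = read(fd, buf, sizeof(buf));
                  if (n <= 0) {                         /* EOF or error: remove from the pool */
                      close(fd);
                      FD_CLR(fd, &pool);
                  } else {
                      handle_data(fd, buf, n);
                  }
              }
          }
      }
  }

This is the SPED structure in miniature: one thread, one loop, no synchronization. Scanning every descriptor on each pass is also why, as the Select slide notes, alternative event interfaces have been offered for servers with very many sockets.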
