Running with the Devil: Mechanical Sympathetic Networking


  1. Running with the Devil: Mechanical Sympathetic Networking. Todd L. Montgomery (@toddlmontgomery), Informatica Ultra Messaging Architecture.

  2. Tail of a Networking Stack. Beastie's direct descendants: FreeBSD, NetBSD, OpenBSD, ..., Darwin (Mac OS X). BSD networking code also lives on in Windows, Solaris, even Linux, Android, ...

  3. Domain: TCP & UDP

  4. It’s a Trap! Symptom: an overly long round-trip time for a big request
     (256 KB) plus its response, with a TCP MSS of only 1448 bytes. Does it
     only happen for specific request sizes? Specific OSs? Solution 1: set
     SO_RCVBUF larger... and the problem pops up again! Solution 2: set
     TCP_NODELAY. Does that fix the long RTT? YES! But CPU usage skyrockets!
     This is a well-understood bad interaction (next slide).
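
A minimal sketch of the two fixes from this slide, using the standard
setsockopt(2) calls (the helper name and buffer size are illustrative):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Sketch only: apply the slide's two "solutions" to a connected socket. */
static int tune_request_socket(int fd)
{
    int one = 1;
    int rcvbuf = 256 * 1024;   /* Solution 1: room for the whole request */

    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
        return -1;

    /* Solution 2: disable Nagle so small trailing segments go out
     * immediately instead of waiting on an ACK. This fixes the RTT but
     * means every small write hits the kernel and the wire at once,
     * which is where the CPU cost comes from. */
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
        return -1;

    return 0;
}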

  5. Challenges with TCP: Nagle + Delayed ACKs = temporary deadlock. Nagle:
     “don’t send ‘small’ segments if unacknowledged segments exist.” Delayed
     ACKs: don’t acknowledge data immediately; wait a small period of time
     (200 ms) for a response to be generated and piggyback the
     acknowledgement on it. Together: the sender is waiting on an
     acknowledgement before it will send any waiting small segment, but the
     acknowledgement is itself delayed, waiting on more data or a timeout.
     Solutions?

  6. Little Experiment. A big request (32 MB) and a small response (1 KB)
     over loopback, TCP MSS 16344 bytes, varying the chunk size used for the
     sends:

         Chunk size (bytes)   RTT (msec)
         1500                 32
         4096                 16
         8192                 12

     Smaller chunks also cost dramatically higher CPU. Takeaways: are
     “small” messages evil? Are chunks smaller than the MSS evil? No, or not
     quite. The OS page size (4096 bytes) matters; why? Kernel boundary
     crossings matter! What about sendfile(2) and FileChannel.transferTo()?
     Question: does the size of a send matter that much?
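
A sketch of the experiment's send side under stated assumptions (the
function name and any timing harness around it are illustrative): push a
large buffer through the socket in fixed-size chunks. Smaller chunks mean
more write(2) calls, i.e. more kernel boundary crossings, which is what the
RTT and CPU numbers are really measuring; sendfile(2) and
FileChannel.transferTo() sidestep the user-space copies entirely.

#include <stddef.h>
#include <sys/types.h>
#include <unistd.h>

/* Send len bytes in fixed-size chunks; returns bytes sent or -1. */
static ssize_t send_in_chunks(int fd, const char *buf, size_t len, size_t chunk)
{
    size_t sent = 0;

    while (sent < len)
    {
        size_t n = (len - sent < chunk) ? (len - sent) : chunk;
        ssize_t rc = write(fd, buf + sent, n);   /* one kernel crossing */

        if (rc < 0)
            return -1;

        sent += (size_t)rc;
    }

    return (ssize_t)sent;
}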

  7. Challenges with UDP. Not a stream: message boundaries matter! No flow
     control: potential to overrun a receiver. Not reliable: loss recovery
     is the application’s responsibility. No Nagle: small messages are not
     batched (more kernel boundary crossings). No congestion control: an
     unconstrained flow can impact all competing traffic!! Causes of loss:
     receiver buffer overrun and network congestion (neither is strictly
     the application’s fault).

  8. Network Utilization & Datagrams. Utilization, “the percentage of
     traffic that is data,” is Data / (Data + Control). With a 20-byte IP
     header, an 8-byte UDP header, and no response:

         200-byte app messages per datagram   Utilization (%)
         1                                    87.7
         5                                    97.3
         20                                   99.3

     Batching? Plus fewer interrupts!
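
Checking the slide’s arithmetic: one 200-byte message rides with 28 bytes
of headers, so utilization is 200 / 228 ≈ 87.7%. Batching amortizes the
same 28 bytes across the whole datagram: 5 messages give 1000 / 1028 ≈
97.3%, and 20 give 4000 / 4028 ≈ 99.3%, matching the table.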

  9. Application-Level Batching? Application-specific knowledge + batching
     by the application = performance. Applications sometimes know when to
     send small and when to batch, so they can optimize and make the
     tradeoffs at the time they are needed. This addresses the limitations
     and tradeoffs above (Nagle, delayed ACKs, chunk sizes, UDP network
     utilization, etc.), request/response idiosyncrasies, and send-side
     optimizations. Example: HTTP (headers + body), etc.

  10. Batching setsockopt()s. TCP_CORK: Linux only; data is sent only when
      an MSS-sized segment fills, when unCORKed, or after 200 msec;
      unCORKing requires a kernel boundary crossing; intended to work with
      TCP_NODELAY. TCP_NOPUSH: some BSDs only; sends only when SO_SNDBUF is
      full; mostly broken on Darwin. Which leaves the questions: when to
      batch? When to flush?
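
A minimal sketch of the TCP_CORK pattern, assuming Linux (the function name
is illustrative, and return values are left unchecked for brevity):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Cork, write the pieces (e.g. headers then body), uncork to flush.
 * Note the uncork is itself a setsockopt(2) call: one more kernel
 * boundary crossing, as the slide points out. */
static void corked_send(int fd, const struct iovec *iov, int iovcnt)
{
    int on = 1;
    int off = 0;

    setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
    writev(fd, iov, iovcnt);    /* held until an MSS fills or we uncork */
    setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));  /* flush */
}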

  11. Flush? Batch? Batch when: (1) application logic says so, (2) more data
      is likely to follow, or (3) the data is unlikely to get out before the
      next send anyway. Flush when: (1) application logic says so, (2) more
      data is unlikely to follow, (3) a timeout fires (200 msec?), or (4)
      the data is likely to get out before the next send. An automatic
      transmission for batching: (1) always default to flushing, (2) batch
      when the mean time between sends < the mean time to send (EWMA?), and
      (3) flush on a timeout as a safety measure. Question: can you batch
      too much? Large UDP datagrams (fragmentation) + a non-trivial loss
      probability = YES!
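
A sketch of the “automatic transmission” rule. The struct, field names, and
the 1/8 smoothing factor are illustrative choices, not from the talk:

/* Exponentially weighted moving averages of the two rates the rule
 * compares: MTBS (mean time between application sends) and MTTS (mean
 * time to complete a send on the socket). */
struct auto_batch
{
    double mtbs_ns;   /* EWMA of time between application sends */
    double mtts_ns;   /* EWMA of time a socket send takes */
};

static void ewma_update(double *avg, double sample)
{
    *avg += (sample - *avg) / 8.0;   /* alpha = 1/8, TCP-RTT style */
}

/* Default to flushing; batch only while sends arrive faster than the
 * socket can drain them. A timeout-driven flush (not shown) remains
 * the safety net. */
static int should_batch(const struct auto_batch *ab)
{
    return ab->mtbs_ns < ab->mtts_ns;
}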

  12. A Batching Architecture. Blocking sends feed the socket(s) through a
      bounded ring buffer drained by a single sender thread. MTBS: mean time
      between sends. MTTS: mean time to send (on the socket). “Smart”
      batching pulls off all waiting data when possible, so it automatically
      batches when MTBS < MTTS. Advantages: non-contended send threads;
      decoupled API and socket sends; single writer principle for sockets;
      built-in back pressure (bounded ring buffer); easy to add (async) send
      notification; easy to add rate limiting. The same structure can be
      re-used for other batching tasks (file I/O, DB writes, pipelined
      requests)!
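
A sketch of the drain loop at the heart of this architecture. The ring
buffer type and ring_buffer_try_poll() are hypothetical stand-ins for any
bounded MPSC queue, not a real library API:

#include <stddef.h>
#include <sys/uio.h>

#define MAX_BATCH 64

struct msg { void *base; size_t len; };
struct ring_buffer;  /* hypothetical bounded MPSC ring buffer */
extern struct msg *ring_buffer_try_poll(struct ring_buffer *rb);

/* One pass of the single sender thread: pull off ALL currently waiting
 * messages (up to the iovec limit) and push them in one send. When
 * producers outpace the socket the batch grows automatically; when they
 * do not, each message is flushed on its own. */
static void sender_pass(struct ring_buffer *rb, int fd)
{
    struct iovec iov[MAX_BATCH];
    int n = 0;
    struct msg *m;

    while (n < MAX_BATCH && (m = ring_buffer_try_poll(rb)) != NULL)
    {
        iov[n].iov_base = m->base;
        iov[n].iov_len = m->len;
        n++;
    }

    if (n > 0)
        writev(fd, iov, n);   /* one kernel crossing for the whole batch */
}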

  13. Multi-Message Send/Receive. sendmmsg(2) and recvmmsg(2): Linux 3.x
      only; send or receive multiple datagrams with a single call; fit
      nicely with the batching architecture; so, so, sooo SMOKIN’ HOT!
      sendmmsg complements gather send (sendmsg, writev), which you can do
      in the same call! Scatter receive (recvmsg, readv) is usually not
      worth the trouble. Advantage: reduced kernel boundary crossings.
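
A minimal sendmmsg(2) sketch, assuming Linux 3.x or later (the function
name and the batch limit of 64 are illustrative):

#define _GNU_SOURCE
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send up to 64 datagrams, one iovec each, in a single call; returns
 * the number of datagrams actually sent, or -1. Each msg_hdr is a full
 * struct msghdr, so gather send per datagram comes for free. */
static int send_datagram_batch(int fd, struct iovec *iovs, unsigned int n)
{
    struct mmsghdr msgs[64];
    unsigned int i;

    if (n > 64)
        n = 64;

    memset(msgs, 0, sizeof(msgs));
    for (i = 0; i < n; i++)
    {
        msgs[i].msg_hdr.msg_iov = &iovs[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    return sendmmsg(fd, msgs, n, 0);   /* one kernel boundary crossing */
}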

  14. Domain: Protocol Design

  15. Application-Level Framing. A file to transfer (bytes 0 to X) is split
      into Application Data Units (ADUs) of constant size (except maybe the
      last). Sender: ADU 1 through ADU 9. Receiver: ADUs 1, 2, 4, 5, 6, 7,
      and 8 arrive; ADU 3 and ADU 9 are recovered afterwards. Advantages:
      recovery can be deferred until the end (or to checkpoints); works well
      with multicast and unicast; works best over UDP (message boundaries).
      Clark and Tennenhouse, ACM SIGCOMM CCR, Volume 20, Issue 4,
      Sept. 1990.
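
A sketch of what an ADU header might look like; the field layout is
illustrative, not from the talk or the paper:

#include <stdint.h>

/* Each datagram carries one self-describing unit, so any subset of
 * ADUs can be applied (or re-requested) independently of arrival
 * order, which is what lets recovery wait until the end. */
struct adu_header
{
    uint64_t file_id;      /* which transfer this ADU belongs to */
    uint32_t adu_index;    /* 0-based: file offset = adu_index * adu_size */
    uint32_t adu_size;     /* the constant ADU size (last may be short) */
    uint32_t payload_len;  /* actual payload bytes in this datagram */
};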

  16. PGM Router Assist. When loss occurs on a link high in the forwarding
      tree, it affects both downstream links, while subtrees beyond the loss
      see no loss at all. NAKs traverse hop-by-hop back up the forwarding
      tree, saving state in each router for the retransmits to follow, so
      the retransmission needs to traverse the shared link only once for
      both downstream links. Advantages: NAKs follow the reverse path;
      retransmissions are localized; optimization of bandwidth! Pragmatic
      General Multicast (PGM), IETF RFC 3208.

  17. Questions?
