



1. Administrivia
P561: Network Systems, Week 6: Transport #2
Tom Anderson, Ratul Mahajan; TA: Colin Dixon
− Fishnet Assignment #3: due Friday, 11/14, 5pm
− Homework #3 (out soon): due week 9 (11/24), start of class

Avoiding Small Packets
Nagle's algorithm (sender side):
− Only allow one outstanding segment smaller than the MSS
− A "self-clocking" algorithm
− But gets in the way for SSH etc. (TCP_NODELAY)
Delayed acknowledgements (receiver side):
− Wait to send the ACK, hoping to piggyback it on the reverse stream
− But send one ACK per two data packets, and use a timeout on the delay
− Cuts down on overheads and allows coalescing
− Otherwise a nuisance, e.g., for RTT estimation
Irony: how do Nagle and delayed ACKs interact?
− Consider a Web request

Bandwidth Allocation
How fast should a host, e.g., a web server, send packets?
Two considerations:
− Congestion: sending too fast will cause packets to be lost in the network
− Fairness: different users should get their fair share of the bandwidth
Often treated together (e.g., in TCP), but they needn't be.

Congestion
[Figure: Source 1 on 100-Mbps Ethernet and Source 2 on 1-Gbps fiber feed a router whose outgoing link is a 1.5-Mbps DSL line; packets are dropped at the router.]
− The buffer absorbs bursts when the input rate > output rate
− If the sending rate is persistently > the drain rate, the queue builds
− Dropped packets represent wasted work

Evaluating Congestion Control
Power = throughput / delay
[Figure: throughput/delay vs. load, with an optimal load point.]
− At low load, throughput goes up and delay remains small
− At moderate load, delay increases (queues) but throughput doesn't grow much
− At high load, there is much loss, and delay increases greatly due to retransmissions
− Even worse, the system can oscillate!
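Nagle's rule above can be sketched as sender-side decision logic. This is a minimal illustration of the idea, not real kernel code; the function name and parameters are hypothetical.

```python
MSS = 1460  # maximum segment size in bytes (typical for Ethernet)

def nagle_should_send(buffered_bytes, unacked_small_segment, nodelay=False):
    """Decide whether the sender may transmit now under Nagle's rule.

    A full-sized segment may always go out; a smaller ("runt") segment
    may only go out if no other small segment is still unacknowledged.
    TCP_NODELAY disables the rule entirely (e.g., for SSH).
    """
    if nodelay:
        return True                      # application opted out
    if buffered_bytes >= MSS:
        return True                      # full segment: always send
    return not unacked_small_segment     # runt: only one in flight

# A full segment goes out immediately; a second runt must wait for an ACK.
assert nagle_should_send(MSS, unacked_small_segment=True)
assert not nagle_should_send(100, unacked_small_segment=True)
assert nagle_should_send(100, unacked_small_segment=False)
```

The "self-clocking" behavior falls out of the last line: each returning ACK clears the outstanding runt and permits the next small send.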

2. Fairness
[Figure (Chapter 6, Figure 2): multiple sources and routers sharing links, with flows f1–f4 crossing a bottleneck.]
Each flow from a source to a destination should (?) get an equal share of the bottleneck link … it depends on paths and other traffic.

Evaluating Fairness
First, we need to define what a fair allocation is.
− Consider n flows, where each wants a fraction fi of the bandwidth
Max-min fairness:
− First satisfy all flows evenly up to the lowest fi; repeat with the remaining bandwidth
Or proportional fairness:
− Depends on path length

Why is bandwidth allocation hard?
Given the network and traffic, just work out each fair share and tell the sources … But:
− Demands come from many sources
− The needed information isn't in the right place
− Demands change rapidly over time
− Information is out of date by the time it's conveyed
− Network paths change over time

Designs affect network services
TCP/Internet provides "best-effort" service:
− Implicit network feedback; the host controls via its window
− No strong notion of fairness
A network with QoS (quality of service) guarantees:
− Rate-based reservations are a natural choice for some apps
− But reservations need a good characterization of the traffic
− Network involvement is typically needed to provide a guarantee
The former tends to be simpler to build; the latter offers greater service to applications but is more complex.

Case Study: TCP
The dominant means of bandwidth allocation today.
Internet meltdowns in the late 80s ("congestion collapse") led to much of its mechanism:
− Jacobson's slow start, congestion avoidance, fast retransmit, and fast recovery
The main constraint was zero network support and de facto backwards-compatible upgrades to the sender:
− Infer packet loss and use it as a proxy for congestion
We will look at other models later …

TCP Before Congestion Control
Just use a fixed-size sliding window!
− Will under-utilize the network or cause unnecessary loss
Congestion control dynamically varies the size of the window to match the sending rate to the available bandwidth:
− The sliding window uses the minimum of cwnd, the congestion window, and the advertised flow-control window
The big question: how do we decide what size the window should be?
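The max-min ("satisfy the lowest demand first, repeat") procedure described above can be sketched as a water-filling allocator for a single link. This is an illustrative sketch; the function name and example demands are made up.

```python
def max_min_share(capacity, demands):
    """Max-min fair allocation of one link's capacity.

    Repeatedly give every active flow an even share; any flow whose
    demand is below that share is fully satisfied and removed, and
    its leftover capacity is redistributed among the rest.
    """
    alloc = {}
    remaining = capacity
    active = sorted(demands.items(), key=lambda kv: kv[1])  # lowest demand first
    while active:
        fair = remaining / len(active)
        flow, want = active[0]
        if want <= fair:
            alloc[flow] = want            # fully satisfied, free the rest
            remaining -= want
            active.pop(0)
        else:
            # no remaining flow can be fully satisfied: split evenly
            for flow, _ in active:
                alloc[flow] = fair
            active = []
    return alloc

# 10 Mb/s link, demands of 2, 4, and 10 Mb/s => allocations 2, 4, 4
print(max_min_share(10, {"A": 2, "B": 4, "C": 10}))  # {'A': 2, 'B': 4, 'C': 4.0}
```

Note how flow A's unused 1.33 Mb/s of even share flows back to B and C, which is exactly the "repeat with the remaining bandwidth" step.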

3. TCP Congestion Control
Goal: efficiently and fairly allocate network bandwidth
− Robust RTT estimation
− Additive increase / multiplicative decrease
  • oscillate around bottleneck capacity
− Slow start
  • quickly identify bottleneck capacity
− Fast retransmit
− Fast recovery

Tracking the Bottleneck Bandwidth
Sending rate = window size / RTT
Multiplicative decrease:
− Timeout => dropped packet => sending too fast => cut window size in half
  • and therefore cut the sending rate in half
Additive increase:
− Ack arrives => no drop => sending too slow => increase window size by one packet per window
  • and therefore increase the sending rate a little

TCP "Sawtooth"
Oscillates around the bottleneck bandwidth:
− adjusts to changes in competing traffic

Why AIMD?
Consider two users competing for bandwidth, and the sequence of moves under AIMD, AIAD, MIMD, and MIAD.

What if TCP and UDP share a link?
UDP will get priority! TCP will take what's left.

What if two different TCP implementations share a link?
− If one cuts back more slowly after drops => it will grab a bigger share
− If one adds more quickly after acks => it will grab a bigger share
Incentive to cause congestion collapse!
− Many TCP "accelerators"
− Easy to improve performance at the expense of the network
One solution: enforce good behavior at the router
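The "Why AIMD?" question above can be explored with a toy simulation of two flows sharing one link: additive increase keeps the gap between their rates constant, while multiplicative decrease halves it, so the rates drift toward equality. This is a simplified sketch (synchronized losses, one unit of increase per RTT), not a model of real TCP dynamics.

```python
def aimd(x1, x2, capacity=100.0, rounds=200):
    """Two AIMD flows sharing a link of the given capacity.

    Each RTT: if the link is overloaded, both flows halve their rate
    (multiplicative decrease); otherwise both add 1 (additive increase).
    """
    for _ in range(rounds):
        if x1 + x2 > capacity:
            x1, x2 = x1 / 2, x2 / 2   # multiplicative decrease
        else:
            x1, x2 = x1 + 1, x2 + 1   # additive increase
    return x1, x2

# Start far apart: 80 vs 10. Additive steps preserve the difference,
# each halving shrinks it, so after many rounds the gap is tiny.
a, b = aimd(80.0, 10.0)
print(round(abs(a - b), 2))
```

Running the same loop with AIAD (subtract on overload) or MIMD would preserve or even grow the initial imbalance, which is the intuition behind choosing AIMD.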

4. Slow Start
How do we quickly find the bottleneck bandwidth?
− Start by sending a single packet
  • start slow to avoid overwhelming the network
− Multiplicative increase until we get packet loss
  • quickly find the bottleneck
− Remember the previous max window size
  • shift into linear increase / multiplicative decrease when close to the previous max ~ bottleneck rate
  • called "congestion avoidance"

TCP Mechanics Illustrated
[Figure: Source — 100 Mbps, 0.9 ms latency — Router — 10 Mbps, 0 latency — Dest; slow start with window = 1.]

Slow Start vs. Delayed Acks
Recall that acks are delayed by 200 ms to wait for the application to provide data.
But (!) TCP congestion control is triggered by acks:
− if the sender receives half as many acks => the window grows half as fast
− the ack will be delayed, even though the sender is waiting for it to expand the window

Avoiding burstiness: ack pacing
[Figure: packets queue at the bottleneck link; the spacing of the returning acks paces new transmissions.]
Window size = round-trip delay * bit rate

Ack Pacing After Timeout
Packet loss causes a timeout, which disrupts ack pacing
− slow start / additive increase are designed to cause packet loss
After a loss, use slow start to regain ack pacing
− switch to linear increase at the last successful rate
− "congestion avoidance"
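The two growth phases described above produce the familiar doubling-then-linear window curve. A minimal sketch, assuming an idealized loss-free trace and a hypothetical `ssthresh` threshold marking the remembered previous maximum:

```python
def cwnd_trace(ssthresh=16, rounds=10):
    """Congestion window (in segments) per RTT: double while below
    ssthresh (slow start), then add one segment per RTT
    (congestion avoidance)."""
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return trace

print(cwnd_trace())  # [1, 2, 4, 8, 16, 17, 18, 19, 20, 21]
```

Despite the name, "slow start" is exponential: it reaches the threshold in log2(ssthresh) round trips, then hands off to the much gentler linear probe.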

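The ack-pacing slide's formula, window size = round-trip delay × bit rate, is the bandwidth-delay product. A quick arithmetic sketch (function name is illustrative):

```python
def window_bytes(bit_rate_bps, rtt_s):
    """Bandwidth-delay product: the number of bytes that must be in
    flight to keep the pipe full (window = round-trip delay * bit rate)."""
    return bit_rate_bps * rtt_s / 8  # divide by 8: bits -> bytes

# A 100 Mb/s path with a 100 ms RTT needs ~1.25 MB in flight --
# far more than a classic 64 KB TCP advertised window can cover.
print(window_bytes(100e6, 0.100))  # 1250000.0
```

This gap between the needed window and the 64 KB limit is what caps the later "64KB window, 100ms RTT ~ 6 Mb/s" line in the transfer example.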
5. Putting It All Together: TCP Tahoe
[Figure: window vs. round-trip times under slow start + congestion avoidance.]
Timeouts dominate performance!

Fast Retransmit
Can we detect packet loss without a timeout?
− The receiver replies to each packet with an ack for the last byte received in order
Duplicate acks imply either:
− packet reordering (route change), or
− packet loss
The sender resends if it gets three duplicate acks, without waiting for a timeout.

Fast Retransmit Caveats
Assumes in-order packet delivery
− Recent proposal: measure the rate of out-of-order delivery; dynamically adjust the number of dup acks needed for retransmit
Doesn't work with small windows (e.g., modems)
− what if the window size is <= 3?
Doesn't work if many packets are lost
− example: at the peak of slow start, we might lose many packets

Fast Retransmit (performance)
[Figure: window (in segments) vs. round-trip times under slow start + congestion avoidance + fast retransmit.]
Regaining ack pacing limits performance.

Fast Recovery
Use duplicate acks to maintain ack pacing:
− a duplicate ack => a packet left the network
− after a loss, send a new packet after every other acknowledgement
Doesn't work if we lose many packets in a row
− fall back on timeout and slow start to reestablish ack pacing

Fast Recovery (performance)
[Figure: window (in segments) vs. round-trip times under slow start + congestion avoidance + fast retransmit + fast recovery.]
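The triple-duplicate-ack trigger can be sketched as a small detector over the sender's incoming ack stream. This is a toy sketch of the detection rule only (no retransmission timers, no window changes); the function name is made up.

```python
def fast_retransmit(acks, threshold=3):
    """Return the sequence numbers the sender would retransmit on
    seeing the 3rd duplicate ack, without waiting for a timeout."""
    last_ack, dups, retransmitted = None, 0, []
    for ack in acks:
        if ack == last_ack:
            dups += 1
            if dups == threshold:
                retransmitted.append(ack)  # resend the segment starting here
        else:
            last_ack, dups = ack, 0        # new ack resets the dup counter
    return retransmitted

# Segment 3 is lost: the receiver keeps acking 3 as segments 4, 5, 6
# arrive out of order, so the sender sees three duplicates.
print(fast_retransmit([1, 2, 3, 3, 3, 3]))  # [3]
```

The caveats on the slide fall straight out of this rule: with a window of three segments or fewer, a single loss can never generate three duplicates, so only the timeout fires.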

6. TCP Performance (Steady State)
Bandwidth as a function of:
− RTT?
− Loss rate?
− Packet size?
− Receive window?

TCP over 10-Gbps Pipes
What's the problem? How might we fix it?

TCP over Wireless
What's the problem? How might we fix it?

What if the TCP connection is short?
Slow start dominates performance
− What if the network is unloaded?
− Burstiness causes extra drops
Packet losses are an unreliable indicator for short flows
− can lose the connection setup packet
− can get a loss when the connection is nearly done
− the packet loss signal is unrelated to the sending rate
In the limit, the network has to signal congestion (with a loss) on every connection
− 50% loss rate as the number of connections increases

Example: 100 KB transfer
100 Mb/s Ethernet, 100 ms RTT, 1.5 KB MSS
− Ethernet ~ 100 Mb/s
− 64 KB window, 100 ms RTT ~ 6 Mb/s
− slow start (delayed acks), no losses ~ 500 Kb/s
− slow start, with 5% drop ~ 200 Kb/s
− steady state, 5% drop rate ~ 750 Kb/s

Improving Short Flow Performance
Start with a larger initial window
− RFC 3390: start with 3–4 packets
Persistent connections
− HTTP: reuse the TCP connection for multiple objects on the same page
− Share congestion state between connections on the same host or across hosts
Skip slow start? Ignore congestion signals?
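The steady-state line of the transfer example can be sanity-checked with the standard square-root approximation for TCP throughput, rate ≈ (MSS/RTT) · 1.22/√p. The slide does not derive this formula, so treating it as the source of the "~750 Kb/s" figure is an assumption; the sketch below just shows the computation lands in the same ballpark.

```python
from math import sqrt

def tcp_steady_state_bps(mss_bytes, rtt_s, loss_rate):
    """Classic TCP steady-state throughput approximation:
    rate ~= (MSS / RTT) * 1.22 / sqrt(p), in bits per second."""
    return (mss_bytes * 8 / rtt_s) * 1.22 / sqrt(loss_rate)

# 1.5 KB MSS, 100 ms RTT, 5% loss: roughly 650 Kb/s, in the same
# ballpark as the slide's ~750 Kb/s steady-state figure.
print(round(tcp_steady_state_bps(1500, 0.100, 0.05) / 1e3))  # 655
```

The formula also explains the "TCP over 10-Gbps pipes" question: with rate fixed by MSS/(RTT·√p), filling a 10 Gb/s path at 100 ms RTT demands an implausibly tiny loss rate.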

