Equation-Based Congestion Control for Unicast Applications


  1. Title Slide
     Equation-Based Congestion Control for Unicast Applications
     Sally Floyd, Mark Handley, AT&T Center for Internet Research (ACIRI)
     Jitendra Padhye, UMass Amherst
     Jörg Widmer, International Computer Science Institute (ICSI)
     Proceedings of ACM SIGCOMM, 2000

     Outline
     • Intro
     • Foundations
     • TFRC
     • Experimental Evaluation
     • Related Work
     • Conclusions

     Introduction
     • TCP
       – Dominant on the Internet
       – Needed for stability
       – AIMD
       – Window-based
     • "Bulk-data" applications are fine with TCP
       – But real-time applications find the window fluctuations annoying
     • Equation-based congestion control to the rescue!
       – Smooth the rate
       – (Note: class-based isolation is beyond this paper)

     But don't we need TCP?
     • Practical
       – The primary threats are from unresponsive flows
         + Choose UDP over TCP
         + Give others a protocol so they have something!
     • Theoretical
       – The Internet does not require reduction by 1/2
         + Other rates have been used, e.g. 7/8 (DECbit)
       – Even 'fairness' to TCP doesn't require this
       – Some control is needed to avoid a high sending rate during congestion

     Guiding Basics for an Equation-Based Protocol
     • Want reliable and as quick as possible? Use TCP.
     • A slowly changing rate? Use TFRC (rate changes on a scale of seconds, vs. TCP's milliseconds).
     • If competing with TCP (as on the Internet), it should use the TCP response equation during steady state
     • There has been related work (see later sections), but it is still far from a deployable protocol
     • This work presents one such protocol: TFRC

     TFRC Goals
     • Determine the maximum acceptable sending rate
       – A function of the loss event rate
       – And of the round-trip time
     • Tackle the tough issues in equation-based control
       – Responsiveness to persistent congestion
       – Avoiding unnecessary oscillations
       – Avoiding unnecessary noise
       – Robustness over a wide range of time scales
       – The loss-event rate is a key component!
     • Multicast
       – If all receivers change rates a lot, it can never scale

  2. Outline
     • Intro
     • Foundations
     • TFRC
     • Experimental Evaluation
     • Related Work
     • Conclusions

     Foundations of Equation-Based Congestion Control
     • TCP-Friendly Flow
       – In steady state, uses no more bandwidth than a conformant TCP running under the same conditions
     • One formulation (the slide's equation, reconstructed from the paper; see the rate sketch after this slide group):
       T = s / ( R*sqrt(2p/3) + t_RTO * (3*sqrt(3p/8)) * p * (1 + 32p^2) )
       – s: packet size
       – R: round-trip time
       – p: loss event rate
       – t_RTO: TCP retransmit timeout
     • (Results from an analytic model of TCP)

     TFRC Basics
     • Maintain a steady sending rate, but still respond to congestion
     • Refrain from aggressively seeking out bandwidth
       – Increase the rate slowly
     • Do not respond as rapidly as TCP
       – Slow response to a single loss event
       – Halve the rate on multiple loss events
     • Next:
       – Sender functionality
       – Receiver functionality

     Protocol Overview
     • Compute p (at the receiver)
     • Compute R (at the sender)
     • RTO and s are easy (RTO follows TCP, s is fixed)
     • The computations could be split up in many ways
       – Multicast would favor 'fat' receivers
     • TFRC has the receiver compute only p and send it to the sender
     • The receiver reports to the sender once per RTT
       – If it has received a packet
     • If no report arrives for a while, the sender reduces its rate

     Sender Functionality (see the RTT/RTO sketch after this slide group)
     • Computing the RTT
       – The sender time-stamps data packets
       – Echoed back by the receiver
       – Smoothed with an exponentially weighted average
     • Computing the RTO
       – From TCP: RTO = RTT + 4 * RTTvar
       – But the RTO only matters when the loss rate is very high
       – So, use: RTO = 4 * R
     • When p is received, calculate the new rate T
       – Adjust the application rate, as appropriate

     Receiver Functionality
     • Compute the loss event rate, p
       – Measuring over longer periods is subject to less 'noise'
       – Shorter periods respond to congestion faster
     • After "much testing":
       – Loss event rate instead of packet loss rate
         + Multiple lost packets may count as one event
       – Should track smoothly when the loss rate is steady
       – Should respond strongly to multiple loss events
     • Different methods:
       – Dynamic History Window, EWMA Loss Interval, Average Loss Interval
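To make the formulation above concrete, here is a minimal sketch in Python (mine, not from the slides) that evaluates the TCP-friendly rate; the names s, R, p, and t_RTO follow the slide's definitions, and the example values are arbitrary.

```python
from math import sqrt

def tcp_friendly_rate(s, R, p, t_rto):
    """TCP-friendly sending rate (bytes/sec) from the formulation above.

    s     -- packet size in bytes
    R     -- round-trip time in seconds
    p     -- loss event rate (0 < p <= 1)
    t_rto -- TCP retransmit timeout in seconds (TFRC uses 4 * R)
    """
    denom = R * sqrt(2.0 * p / 3.0) \
          + t_rto * (3.0 * sqrt(3.0 * p / 8.0)) * p * (1.0 + 32.0 * p ** 2)
    return s / denom

# Arbitrary example: 1000-byte packets, 100 ms RTT, 1% loss event rate.
print(tcp_friendly_rate(s=1000, R=0.1, p=0.01, t_rto=0.4))  # ~112 KB/s
```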

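The sender-side bookkeeping on the "Sender Functionality" slide reduces to a few lines. This sketch assumes an EWMA weight of 0.9; the slides only say "exponentially weighted average", so that constant is illustrative.

```python
class TfrcSenderRtt:
    """RTT/RTO bookkeeping per the Sender Functionality slide (a sketch)."""

    def __init__(self, q=0.9):          # q is an assumed EWMA weight
        self.q = q
        self.rtt = None                 # smoothed round-trip time R
        self.rto = None

    def on_feedback(self, rtt_sample):
        # The first sample seeds the average; later samples are smoothed.
        if self.rtt is None:
            self.rtt = rtt_sample
        else:
            self.rtt = self.q * self.rtt + (1.0 - self.q) * rtt_sample
        # TFRC's simplification: the RTO only matters at very high loss
        # rates, so RTO = 4 * R replaces TCP's RTT + 4 * RTTvar.
        self.rto = 4.0 * self.rtt
        return self.rtt, self.rto
```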
  3. Computing the Loss Event Rate
     • Dynamic History Window
       – A window of packets
       – Even at 'steady state', packets arriving and leaving the window add 'noise' that could change the rate
     • Exponentially Weighted Moving Average
       – Count packets between loss events
       – Hard to adjust the weights correctly
     • Average Loss Interval
       – A weighted average of the packets between loss events over the last n intervals
       – The winner! (The comparison is not in the paper itself)

     Average Weighted Loss Intervals
     [Figure: weighted loss intervals]

     Illustration of Average Loss Interval
     [Figure: loss intervals s_0, s_1, ... and their weights]

     Loss Interval Computation (see the sketch after this slide group)
     • w_i = 1 for 1 <= i <= n/2
     • w_i = 1 - (i - n/2) / (n/2 + 1) for n/2 < i <= n
     • For n = 8 the weights are: 1, 1, 1, 1, 0.8, 0.6, 0.4, 0.2
     • The rate depends upon n
       – n = 8 works well during increases in congestion (a later section validates this)
       – Relative weights have not been investigated
     • History discounting handles sudden decreases in congestion
       – When the current interval s_0 is a lot larger than the history
       – Can speed up the rate increase
     • The loss event rate, p, is the inverse of the average loss interval

     Instability from RTT Variance
     • The inter-packet time varies with the RTT
       – Fluctuations when the RTT changes

     Improving Stability (see the sketch after this slide group)
     • Take the square root of the current RTT (M is the square root of the average RTT)
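A sketch of the Average Loss Interval method as the slide defines it; it omits the paper's refinements (history discounting, and including the still-open interval s_0 when that raises the average).

```python
def loss_interval_weights(n=8):
    """Weights w_i from the slide: w_i = 1 for i <= n/2, then the linear
    taper 1 - (i - n/2) / (n/2 + 1).  For n = 8 this gives (up to float
    rounding) 1, 1, 1, 1, 0.8, 0.6, 0.4, 0.2."""
    return [1.0 if i <= n / 2 else 1.0 - (i - n / 2) / (n / 2 + 1.0)
            for i in range(1, n + 1)]

def loss_event_rate(intervals, n=8):
    """p = inverse of the weighted average of the last n loss intervals
    (each interval counted in packets, most recent first)."""
    w = loss_interval_weights(n)
    pairs = list(zip(w, intervals[:n]))
    avg = sum(wi * si for wi, si in pairs) / sum(wi for wi, _ in pairs)
    return 1.0 / avg

# Intervals hovering near 100 packets give p of about 1%.
print(loss_event_rate([120, 90, 110, 100, 80, 100, 95, 105]))  # ~0.01
```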

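The "Improving Stability" slide is terse; the sketch below shows my reading of it: the sender spaces packets by s/T scaled by sqrt(R_sample)/M, so the instantaneous rate dips when the latest RTT rises (queues building) and recovers when it falls. The exact scaling form is an assumption beyond the slide text.

```python
from math import sqrt

def inter_packet_interval(s, T, rtt_sample, M):
    """Damped packet spacing (my reading of the Improving Stability slide).

    s          -- packet size
    T          -- rate from the TCP-friendly equation
    rtt_sample -- most recent RTT measurement
    M          -- moving average of sqrt(RTT)
    """
    return (s / T) * (sqrt(rtt_sample) / M)
```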
  4. Outline
     • Intro
     • Foundations
     • TFRC
       – Mechanics (done)
       – Discussion of features
     • Experimental Evaluation
     • Related Work
     • Conclusions

     Slowstart (see the sketch after this slide group)
     • TCP slowstart can no more than double the congestion at the bottleneck
       – 2 packets for each ack
     • A rate-based protocol could more than double it
       – The actual RTTs grow as congestion builds, but the measured RTTs lag behind
     • Have the receiver report its arrival rate, T_recv
       – T_{i+1} = min(2*T_i, 2*T_recv)
       – This limits the increase to double the congested bandwidth
     • When a loss occurs, terminate "slowstart"
       – Loss intervals? Initialize them all to correspond to 1/2 the current rate
       – Fill them in normally as the connection progresses

     Loss Fraction vs. Loss Event Fraction
     • The obvious measure is packets lost / packets received
       – But different TCPs respond to multiple losses in one window differently
         + Tahoe, Reno, and SACK all halve the window once
         + NewReno reduces it twice
     • Use the loss event fraction to ignore multiple drops within one RTT
     • Previous work shows the two rates are within 10% for steady-state queues
       – But DropTail queues are bursty

     Increasing the Transmission Rate
     • What if T_new is a lot bigger than T_old?
       – We may want to dampen the amount of the increase
     • Typically the rate increases by only 0.14 packets per RTT
       – History discounting allows 0.22 packets per RTT
     • There is a theoretical limit on the increase
       – A is the number of packets in an interval, w is the weight (the bound itself is not legible in this transcript)
     • So ... no need to dampen more

     Response to Persistent Congestion
     • To stay smooth, TFRC does not respond as fast as TCP does to congestion
       – TFRC requires 4-8 RTTs to reduce its rate by 1/2
     • Balanced by a milder increase in the sending rate
       – 0.14 packets per RTT rather than 1
     • It does respond, so it will avoid congestion collapse
     • (Me: but what about the response to bursty traffic?)

     Response to Quiescent Senders (see the sketch after this slide group)
     • Assume the sender is sending at the maximum rate
       – Like TCP
     • But if the sender stops, and later has data to send
       – the previously estimated rate, T, may be too high
     • Solution:
       – If the sender stops, the receiver stops sending feedback
       – The sender then halves its rate every 2 RTTs
     • (Me: what about just a reduced rate that is significantly less than T? This may happen for coarse-level multimedia apps)
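The slowstart rule on the slide is a one-liner: the receiver-reported arrival rate T_recv caps the doubling, so a rate-based sender cannot overshoot the bottleneck by more than TCP's ack-clocked doubling would.

```python
def slowstart_next_rate(T_curr, T_recv):
    """T_{i+1} = min(2*T_i, 2*T_recv), per the Slowstart slide."""
    return min(2.0 * T_curr, 2.0 * T_recv)
```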

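A sketch of the quiescent-sender rule from the last slide above: once receiver reports stop, a sender-side timer halves the allowed rate every 2 RTTs. The one-packet-per-RTT floor is my addition so the rate cannot decay to zero; the slides do not specify a floor.

```python
class NoFeedbackHalving:
    """Rate decay while no receiver feedback arrives (a sketch)."""

    def __init__(self, rate, rtt, packet_size):
        self.rate = rate                  # current allowed sending rate
        self.rtt = rtt
        self.packet_size = packet_size

    def on_timer_expiry(self):
        # Invoked every 2 * rtt while no receiver report has arrived.
        floor = self.packet_size / self.rtt   # assumed floor: 1 pkt per RTT
        self.rate = max(self.rate / 2.0, floor)
        return self.rate
```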
  5. Outline
     • Intro
     • Foundations
     • TFRC
     • Experimental Evaluation
       – Simulation
       – Implementation
         + Internet
         + Dummynet
     • Related Work
     • Conclusions

     Simulation Results (NS)
     • TFRC co-exists with many kinds of TCP traffic
       – SACK, Reno, NewReno...
       – Lots of flows
       – Or few flows
     • TFRC works well in isolation
     • Many network conditions

     TFRC vs. TCP, DropTail
     • Mean TCP throughput (want 1.0)
     • Fair (?)
     • (Me: ... bursty traffic with many flows?)

     TFRC vs. TCP, RED
     • Even more fair
     • Not fair for small windows

     Fair Overall, but what about Variance?
     • Average of 10 runs
     • TFRC is less fair at high loss rates (above typical)
     • Same with Tahoe and Reno; SACK does better
       – Timer granularity is better with SACK

     CoV of Flows (Std Dev / Mean) (see the sketch after this slide group)
     • A fairness measure
     • Variance increases with the loss rate and the number of flows
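The CoV metric in these plots is the standard coefficient of variation over per-flow throughputs: equal shares give 0, and larger values mean less fairness or smoothness.

```python
from statistics import mean, pstdev

def coefficient_of_variation(throughputs):
    """CoV = standard deviation / mean, the fairness measure in the slides."""
    return pstdev(throughputs) / mean(throughputs)

print(coefficient_of_variation([1.0, 1.1, 0.9, 1.0]))  # ~0.07
```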

  6. Individual Throughputs over Time
     • 0.15 second intervals (about the timescale of multimedia sensitivity)
     • Smoother rate from TFRC

     Equivalence at Different Timescales (see the sketch after this slide group)
     • Compare two flows
     • A number between 0 and 1 (equation (4) in the paper)
     • Cases
       – Long-duration flows in the background
       – On-Off flows in the background

     Outline
     • Intro
     • Foundations
     • TFRC
     • Experimental Evaluation
       – Simulation
         + Fairness and Smoothness (CoV) (done)
         + Long Duration (done)
         + On-Off flows
       – Implementation
     • Related Work
     • Conclusions

     Equivalence for Long Duration
     • Single bottleneck
     • 32 flows
     • 15 Mbps link
     • Monitor 1 flow
     • 95% confidence intervals
     • Results hold over a broad range of timescales

     Performance with On-Off Flows
     • 50-150 On/Off UDP flows
       – On 1 second, off 2 seconds (mean)
       – Sending at a 500 kbps rate
     • Monitor one TCP flow and one TFRC flow

     Equivalence with TCP with Background Traffic
     • At high loss rates, less equivalent (40% more or less)
     • (Me: room for improvement)
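The slides cite the paper's equation (4) for equivalence without reproducing it; taking the min/max ratio of the two flows' throughputs over an interval is my reading of that metric (1.0 means identical bandwidth, values near 0 mean one flow starved the other).

```python
def equivalence(throughput_a, throughput_b):
    """Equivalence ratio between two flows over one measurement interval:
    a number in (0, 1], per my reading of equation (4) in the paper."""
    return min(throughput_a, throughput_b) / max(throughput_a, throughput_b)
```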

  7. Effect on Queue Dynamics
     • 40 flows, staggered start times
     • 99% utilization in all cases
     • Basically, the queue dynamics look the same
     • Extensive tests, with RED and with background traffic, look the same

     CoV with Background Traffic
     • TCP (top) has 4.9% loss and TFRC (bottom) has 3.5% loss (Bursty?)
     • The TFRC rate has less variance, especially at high loss rates

     Outline
     • Intro
     • Foundations
     • TFRC
     • Experimental Evaluation
       – Simulation (done)
       – Implementation
         + Internet
     • Related Work
     • Conclusions

     Implementation Results
     • TFRC on the Internet
       – Microwave
       – T1
       – OC3
       – Cable modem
       – Dialup modem
     • Generally fair
     • (See the tech report for details)

     London to Berkeley
     • 3 TCP flows, 1 TFRC flow
     • TFRC gets slightly lower bandwidth, but smoother

     TCP Equivalence over Internet
     • Typical loss rates are 0.1% to 5%
