15-744: Computer Networking
L-5 TCP & Routers

Fair Queuing
•Fair queuing
•Core-stateless fair queuing
•Assigned reading
  •[DKS90] Analysis and Simulation of a Fair Queueing Algorithm, Internetworking: Research and Experience
  •[SSZ98] Core-Stateless Fair Queueing: Achieving Approximately Fair Allocations in High Speed Networks

Overview
•TCP modeling
•Fairness
•Fair queuing
•Core-stateless FQ

TCP Modeling
•Given the congestion behavior of TCP, can we predict what type of performance we should get?
•What are the important factors?
  •Loss rate: affects how often the window is reduced
  •RTT: affects the increase rate and relates bandwidth to window size
  •RTO: affects performance during loss recovery
  •MSS: affects the increase rate

Overall TCP Behavior
•Let's concentrate on steady-state behavior with no timeouts and perfect loss recovery
(figure: congestion window vs. time, sawtooth pattern)

Simple TCP Model
•Some additional assumptions
  •Fixed RTT
  •No delayed ACKs
•In steady state, TCP loses a packet each time the window reaches W packets
  •The window drops to W/2 packets
  •Each RTT the window increases by 1 packet ⇒ W/2 × RTT elapses before the next loss
•BW = MSS × avg window / RTT
     = MSS × (W + W/2) / (2 × RTT)
     = 0.75 × MSS × W / RTT

Simple Loss Model
•What was the loss rate?
•Packets transferred between losses
  = avg BW × time
  = (0.75 W / RTT) × (W/2 × RTT) = 3W²/8
•1 packet lost ⇒ loss rate p = 8/(3W²)
•W = sqrt(8 / (3p))
•BW = 0.75 × MSS × W / RTT
•BW = MSS / (RTT × sqrt(2p/3))

TCP Friendliness
•What does it mean to be TCP friendly?
  •TCP is not going away
  •Any new congestion control must compete with TCP flows
    •Should not clobber TCP flows and grab the bulk of the link
    •Should also be able to hold its own, i.e. grab its fair share, or it will never become popular
•How is this quantified/shown?
  •Has evolved into evaluating loss/throughput behavior
  •If it shows 1/sqrt(p) behavior, it is OK
  •But is this really true?

TCP Performance
•Can TCP saturate a link?
•Congestion control
  •Increases utilization until... the link becomes congested
  •Reacts by decreasing the window by 50%
  •The window is proportional to rate × RTT
•Doesn't this mean that the network oscillates between 50% and 100% utilization?
  •Average utilization = 75%??
  •No... this is *not* right!

TCP Congestion Control
•Only W packets may be outstanding
•Rule for adjusting W
  •If an ACK is received: W ← W + 1/W
  •If a packet is lost: W ← W/2
(figure: source-destination pipe; window size oscillates between Wmax/2 and Wmax over time)

Single TCP Flow (router without buffers)

Summary: Unbuffered Link
•The router can't fully utilize the link
  •If the window is too small, the link is not full
  •If the link is full, the next window increase causes a drop
  •With no buffer it still achieves 75% utilization
(figure: window vs. time; dashed line marks the minimum window for full utilization)

TCP Performance
•In the real world, router queues play an important role
  •The window is proportional to rate × RTT
  •But the RTT changes with the window as well
•Window needed to fill the link = propagation RTT × bottleneck bandwidth
  •If the window is larger, packets sit in the queue on the bottleneck link

TCP Performance
•With a large enough router queue we can get 100% utilization
  •But router queues can cause large delays
•How big does the queue need to be?
  •The window varies from W down to W/2
  •Must make sure that the link is always full
  •W/2 > RTT × BW
  •W = RTT × BW + Qsize
  •Therefore Qsize > RTT × BW
  •Ensures 100% utilization
  •Delay? Varies between RTT and 2 × RTT

Single TCP Flow (router with large enough buffers for full link utilization)

Summary: Buffered Link
•With sufficient buffering we achieve full link utilization
  •The window is always above the critical threshold
  •The buffer absorbs the changes in window size
•Buffer size = height of the TCP sawtooth
•The minimum buffer size needed is 2T×C (round-trip propagation delay × link capacity)
  •This is the origin of the rule-of-thumb
(figure: window vs. time; the buffer absorbs the sawtooth above the minimum window for full utilization)

Example
•10 Gb/s linecard
  •Requires 300 MBytes of buffering
  •Must read and write a 40-byte packet every 32 ns
•Memory technologies
  •DRAM: requires 4 devices, but too slow
  •SRAM: requires 80 devices, 1 kW, $2000
•The problem gets harder at 40 Gb/s
  •Hence RLDRAM, FCRAM, etc.

Rule-of-thumb
•The rule-of-thumb makes sense for one flow
•A typical backbone link has > 20,000 flows
•Does the rule-of-thumb still hold?

If flows are synchronized
•The aggregate window has the same dynamics
•Therefore the buffer occupancy has the same dynamics
•The rule-of-thumb still holds
(figure: aggregate window oscillating between Wmax/2 and Wmax over time)

If flows are not synchronized
(figure: probability distribution of buffer occupancy; buffer size B, distribution width W)

Central Limit Theorem
•The CLT tells us that the more variables (congestion windows of flows) we have, the narrower the Gaussian (fluctuation of the sum of the windows)
•The width of the Gaussian decreases as 1/sqrt(n)
•The buffer size should also decrease as 1/sqrt(n):
  B_n = B_1 / sqrt(n) = 2T×C / sqrt(n)

Required buffer size
•B = 2T×C / sqrt(n)
(figure: simulation results for the required buffer size)

Overview
•TCP modeling
•Fairness
•Fair queuing
•Core-stateless FQ

Fairness Goals
•Allocate resources fairly
•Isolate ill-behaved users
  •The router does not send explicit feedback to the source
  •Still needs e2e congestion control
•Still achieve statistical multiplexing
  •One flow can fill the entire pipe if there are no contenders
  •Work conserving: the scheduler never idles the link if it has a packet

What is Fairness?
•At what granularity?
  •Flows, connections, domains?
•What if users have different RTTs/links/etc.?
  •Should it share the link fairly or be TCP fair?
•Maximize the fairness index?
  •Fairness = (Σ xᵢ)² / (n × Σ xᵢ²), 0 < fairness ≤ 1
•Basically a tough question to answer – typically we design mechanisms instead of policy
  •User = arbitrary granularity

Max-min Fairness
•Allocate users with "small" demands what they want; evenly divide the unused resources among the "big" users
•Formally:
  •Resources are allocated in order of increasing demand
  •No source gets a resource share larger than its demand
  •Sources with unsatisfied demands get an equal share of the resource
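As an illustration, the max-min allocation just defined can be computed by progressive filling: repeatedly split the remaining capacity evenly among the flows whose demand is not yet met. The sketch below is mine; the function name and structure are illustrative, not from the lecture.

```python
def max_min_fair(capacity, demands):
    """Progressive filling: give every unsatisfied flow an equal
    share of the remaining capacity; a flow whose demand is met
    returns its unused portion to the pool for the next round."""
    alloc = [0.0] * len(demands)
    active = list(range(len(demands)))   # flows with unsatisfied demand
    remaining = capacity
    while active and remaining > 1e-12:
        share = remaining / len(active)  # equal share of what is left
        still_active = []
        for i in active:
            grant = min(share, demands[i] - alloc[i])
            alloc[i] += grant
            remaining -= grant
            if alloc[i] < demands[i] - 1e-12:
                still_active.append(i)   # still wants more
        active = still_active
    return alloc
```

For a link of capacity 10 with demands of 8, 6, and 2, this yields allocations of 4, 4, and 2: the small flow gets its full demand, and the two large flows split the rest evenly.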
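Finally, the steady-state TCP model from the start of the lecture is easy to sanity-check numerically. The sketch below (helper names are my own) confirms that the window form 0.75 × MSS × W / RTT and the loss-rate form MSS / (RTT × sqrt(2p/3)) agree when p = 8/(3W²):

```python
from math import sqrt

def bw_from_window(mss, rtt, w):
    # Average of the sawtooth between W/2 and W is 0.75 * W
    return 0.75 * mss * w / rtt

def loss_rate(w):
    # One loss per 3W^2/8 packets delivered
    return 8.0 / (3.0 * w * w)

def bw_from_loss(mss, rtt, p):
    # The "1/sqrt(p)" law used to define TCP friendliness
    return mss / (rtt * sqrt(2.0 * p / 3.0))

# Example: 1460-byte segments, 100 ms RTT, peak window of 20 packets
mss, rtt, w = 1460, 0.100, 20
a = bw_from_window(mss, rtt, w)
b = bw_from_loss(mss, rtt, loss_rate(w))
```

Both expressions give about 219,000 bytes/s for this example, which is the algebraic identity behind using loss/throughput behavior as the test for TCP friendliness.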