Congestion Control
Reading: Sections 6.1-6.4

Contents
• Goals of Today’s Lecture
• Resource Allocation vs. Congestion Control
• Flow Control vs. Congestion Control
• Three Key Features of Internet
• Congestion is Unavoidable
• Congestion Collapse
• What Do We Want, Really?
• Load, Delay, and Power
• Fairness
• Simple Resource Allocation
• Simple Congestion Detection
• Idea of TCP Congestion Control
• Additive Increase, Multiplicative Decrease
• Leads to the TCP “Sawtooth”
• Practical Details
• Getting Started
• “Slow Start” Phase
• Slow Start in Action
• Slow Start and the TCP Sawtooth
• Two Kinds of Loss in TCP
• Repeating Slow Start After Timeout
• Repeating Slow Start After Idle Period
• Other TCP Mechanisms
• Motivation for Nagle’s Algorithm
• Nagle’s Algorithm
• Motivation for Delayed ACK
• TCP Header Allows Piggybacking
• Example of Piggybacking
• Increasing Likelihood of Piggybacking
• Delayed ACK
• Queuing Mechanisms
• Bursty Loss From Drop-Tail Queuing
• Slow Feedback from Drop Tail
• Random Early Detection (RED)
• Properties of RED
• Problems With RED
• Explicit Congestion Notification
• Conclusions

1. Congestion Control
Reading: Sections 6.1-6.4
COS 461: Computer Networks
Spring 2006 (MW 1:30-2:50 in Friend 109)
Jennifer Rexford
Teaching Assistant: Mike Wawrzoniak
http://www.cs.princeton.edu/courses/archive/spring06/cos461/

2. Goals of Today’s Lecture
• Principles of congestion control
  – Learning that congestion is occurring
  – Adapting to alleviate the congestion
• TCP congestion control
  – Additive-increase, multiplicative-decrease
  – Slow start and slow-start restart
• Related TCP mechanisms
  – Nagle’s algorithm and delayed acknowledgments
• Active Queue Management (AQM)
  – Random Early Detection (RED)
  – Explicit Congestion Notification (ECN)

3. Resource Allocation vs. Congestion Control
• Resource allocation
  – How nodes meet competing demands for resources
  – E.g., link bandwidth and buffer space
  – When to say no, and to whom
• Congestion control
  – How nodes prevent or respond to overload conditions
  – E.g., persuade hosts to stop sending, or slow down
  – Typically has notions of fairness (i.e., sharing the pain)

4. Flow Control vs.
Congestion Control
• Flow control
  – Keeping one fast sender from overwhelming a slow receiver
• Congestion control
  – Keeping a set of senders from overloading the network
• Different concepts, but similar mechanisms
  – TCP flow control: receiver window
  – TCP congestion control: congestion window
  – TCP window: min{congestion window, receiver window}

5. Three Key Features of Internet
• Packet switching
  – A given source may have enough capacity to send data
  – … and yet the packets may encounter an overloaded link
• Connectionless flows
  – No notion of connections inside the network
  – … and no advance reservation of network resources
  – Still, you can view related packets as a group (a “flow”)
  – … e.g., the packets in the same TCP transfer
• Best-effort service
  – No guarantees for packet delivery or delay
  – No preferential treatment for certain packets

6. Congestion is Unavoidable
• Two packets arrive at the same time
  – The node can transmit only one
  – … and must either buffer or drop the other
• If many packets arrive in a short period of time
  – The node cannot keep up with the arriving traffic
  – … and the buffer may eventually overflow

7. Congestion Collapse
• Definition: an increase in network load results in a decrease in useful work done
• Many possible causes
  – Spurious retransmissions of packets still in flight
    • Classical congestion collapse
    • Solution: better timers and TCP congestion control
  – Undelivered packets
    • Packets consume resources and are dropped elsewhere in the network
    • Solution: congestion control for ALL traffic

8. What Do We Want, Really?
• High throughput
  – Throughput: measured performance of a system
  – E.g., number of bits/second of data that get through
• Low delay
  – Delay: time required to deliver a packet or message
  – E.g., number of msec to deliver a packet
• These two metrics are sometimes at odds
  – E.g., suppose you drive a link as hard as possible
  – … then throughput will be high, but delay will be, too

9. Load, Delay, and Power
[Figures: average packet delay vs. load (the typical behavior of queuing systems with random arrivals), and power vs. load]
A simple metric of how well the
network is performing:
  Power = Load / Delay
Power peaks at the “optimal load.”
• Goal: maximize power

10. Fairness
• Effective utilization is not the only goal
  – We also want to be fair to the various flows
  – … but what the heck does that mean?
• Simple definition: equal shares of the bandwidth
  – N flows that each get 1/N of the bandwidth?
  – But what if the flows traverse different paths?

11. Simple Resource Allocation
• Simplest approach: FIFO queue and drop-tail
• Link bandwidth: first-in first-out queue
  – Packets transmitted in the order they arrive
• Buffer space: drop-tail queuing
  – If the queue is full, drop the incoming packet

12. Simple Congestion Detection
• Packet loss
  – Packet gets dropped along the way
• Packet delay
  – Packet experiences high delay
• How does the TCP sender learn this?
  – Loss: timeout or triple-duplicate acknowledgment
  – Delay: round-trip time estimate

13. Idea of TCP Congestion Control
• Each source determines the available capacity
  – … so it knows how many packets to have in transit
• Congestion window
  – Maximum # of unacknowledged bytes to have in transit
  – The congestion-control equivalent of the receiver window
  – MaxWindow = min{congestion window, receiver window}
  – Send at the rate of the slowest component
• Adapting the congestion window
  – Decrease upon losing a packet: backing off
  – Increase upon success: optimistically exploring

14. Additive Increase, Multiplicative Decrease
• How much to increase and decrease?
  – Increase linearly, decrease multiplicatively
  – A necessary condition for stability of TCP
  – Consequences of an over-sized window are much worse than those of an under-sized window
    • Over-sized window: packets dropped and retransmitted
    • Under-sized window: somewhat lower throughput
• Multiplicative decrease
  – On loss of a packet, divide the congestion window in half
• Additive increase
  – On success for the last window of data, increase linearly

15. Leads to the TCP “Sawtooth”
[Figure: congestion window vs. time t; the window grows linearly and is halved at each loss]

16. Practical Details
• Congestion window
  – Represented in bytes, not in packets (Why?)
  – Packets have MSS (Maximum Segment Size) bytes
• Increasing the congestion window
  – Increase by MSS on
success for the last window of data
  – In practice, increase by a fraction of MSS per received ACK
    • # packets per window: CWND / MSS
    • Increment per ACK: MSS * (MSS / CWND)
• Decreasing the congestion window
  – Never drop the congestion window below 1 MSS
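The window arithmetic on the last few slides (additive increase of MSS * (MSS / CWND) per ACK, halving on loss with a floor of 1 MSS, and MaxWindow = min{congestion window, receiver window}) can be sketched as a toy model. This is a simplified illustration, not real stack code: the `AimdWindow` class, its method names, and the fixed MSS value are assumptions for the example, and real TCP adds slow start, timeouts, and other details.

```python
# Toy sketch of TCP's AIMD congestion-window update, in bytes
# (congestion avoidance only; slow start and timeouts are omitted).
# Class name, variable names, and MSS value are illustrative assumptions.

MSS = 1460  # Maximum Segment Size in bytes (a typical Ethernet-path value)

class AimdWindow:
    def __init__(self, receiver_window):
        self.cwnd = MSS              # congestion window, in bytes
        self.rwnd = receiver_window  # receiver (flow-control) window, in bytes

    def effective_window(self):
        # Send at the rate of the slowest component:
        # MaxWindow = min{congestion window, receiver window}
        return min(self.cwnd, self.rwnd)

    def on_ack(self):
        # Additive increase: roughly +MSS per window of data, spread
        # across the CWND / MSS ACKs in that window, i.e. each ACK
        # adds MSS * (MSS / CWND) bytes.
        self.cwnd += MSS * MSS / self.cwnd

    def on_loss(self):
        # Multiplicative decrease: halve the window on packet loss,
        # but never drop below 1 MSS.
        self.cwnd = max(self.cwnd / 2, MSS)
```

Calling `on_ack` repeatedly traces out the linear ramp of the sawtooth, and each `on_loss` produces the halving step.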
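The power metric from the Load, Delay, and Power slide (Power = Load / Delay) can also be illustrated numerically. The delay curve 1/(1 - load) used here is an assumed M/M/1-style queueing model chosen for the example, not something specified in the lecture; with it, power works out to load * (1 - load) and peaks at a load of 0.5.

```python
# Power = Load / Delay, with an assumed M/M/1-style delay curve
# delay(load) = 1 / (1 - load) for load in [0, 1). The delay model is
# an illustrative assumption, not taken from the lecture.

def delay(load):
    return 1.0 / (1.0 - load)

def power(load):
    return load / delay(load)   # equals load * (1 - load) for this model

# Sweep the load and find where power peaks: the "optimal load".
loads = [i / 100 for i in range(100)]
optimal = max(loads, key=power)
print(optimal)  # -> 0.5
```

Driving the link harder than the optimal load keeps raising delay faster than throughput, which is exactly the throughput/delay tension described on the What Do We Want, Really? slide.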