Berkeley ELENG 122 - TCP Congestion Control

Slide 1: TCP Congestion Control
EE 122: Intro to Communication Networks
Fall 2006 (MW 4-5:30 in Donner 155)
Vern Paxson
TAs: Dilip Antony Joseph and Sukun Kim
http://inst.eecs.berkeley.edu/~ee122/
Materials with thanks to Jennifer Rexford, Ion Stoica, and colleagues at Princeton and UC Berkeley

Slide 2: Announcements
• Project #3 should be out tonight
  – Can be done individually or in a team of 2 people
  – First phase due November 16; no slip days
  – Exercise good (better) time management

Slide 3: Goals of Today's Lecture
• State diagrams
  – Tool for understanding complex protocols
• Principles of congestion control
  – Learning that congestion is occurring
  – Adapting to alleviate the congestion
• TCP congestion control
  – Additive-increase, multiplicative-decrease
  – NACK-based ("fast retransmission") and timeout-based detection
  – Slow start and slow-start restart

Slide 4: State Diagrams
• For complicated protocols, operation depends critically on the current mode of operation
• Important tool for capturing this: the state diagram
• At any given time, a protocol endpoint is in a particular state
  – Dictates its current behavior
• Endpoint transitions to other states on events
  – Interaction with the lower layer
      · Reception of certain types of packets
  – Interaction with the upper layer
      · New data arrives to send, or received data is consumed
  – Timers

Slide 5: TCP State Diagram
[figure: TCP connection state diagram]

Slides 6-7: [figures]

Slide 8: How Fast Should TCP Send?
Flow Control

Slide 9: Sliding Window
• Allow a larger amount of data "in flight"
  – Allow sender to get ahead of the receiver
  – … though not too far ahead
[figure: sending and receiving processes with their TCP buffers; the sender window spans from last byte ACKed to last byte sent/last byte written, the receiver window from last byte read to next byte expected/last byte received]

Slide 10: TCP Header for Receiver Buffering
[figure: TCP header layout with source port, destination port, sequence number, acknowledgment, header length, flags, advertised window, checksum, urgent pointer, options (variable), and data]
• The Advertised window field informs the sender of the receiver's buffer space

Slide 11: Advertised Window Limits Rate
• If the window is W, then the sender can send no faster than W/RTT bytes/sec
  – Receiver implicitly limits the sender to a rate the receiver can sustain
  – If the sender is going too fast, window advertisements get smaller and smaller
  – Termed Flow Control
• In the original TCP design, that was it: the sole protocol mechanism controlling the sender's rate
• What's missing?

Slide 12: How Fast Should TCP Send?
Congestion Control

Slide 13: It's Not Just The Sender & Receiver
• Flow control keeps one fast sender from overwhelming a slow receiver
• Congestion control keeps a set of senders from overloading the network
• Three congestion control problems:
  – Adjusting to bottleneck bandwidth
      · Without any a priori knowledge
      · Could be a Gbps link; could be a modem
  – Adjusting to variations in bandwidth
  – Sharing bandwidth between flows

Slide 14: Congestion is Unavoidable
• Two packets arrive at the same time
  – The node can only transmit one
  – … and either buffers or drops the other
• If many packets arrive in a short period of time
  – The node cannot keep up with the arriving traffic
  – … and the buffer may eventually overflow

Slide 15: Load, Delay, and Power
• Typical behavior of queueing systems with bursty arrivals:
[figure: average packet delay vs. load]
• A simple metric of how well the network is performing:
      Power = Load / Delay
[figure: power vs. load, peaking at the "optimal load"]
• Goal: maximize power (see the numeric sketch below)

Slide 16: Congestion Collapse
• Definition: an increase in network load results in a decrease of useful work done
• Due to:
  – Undelivered packets
      · Packets consume resources and are dropped later in the network
  – Spurious retransmissions of packets still in flight
      · Unnecessary retransmissions lead to more load!
      · Pouring gasoline on a fire
• Mid-1980s: the Internet grinds to a halt
  – Until Jacobson/Karels devise TCP congestion control
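The power metric on slide 15 is easy to see numerically. The sketch below is only an illustration under an assumed delay model (a simple M/M/1-style queue where average delay is 1/(capacity - load)); the lecture does not give a delay formula, so the function names (avg_delay, power), the CAPACITY constant, and the exact numbers are made up for this sketch. The shape is the point: power peaks at an intermediate "optimal load", not at full load.

```python
# Illustration of the power metric from slide 15: Power = Load / Delay.
# The delay model below is an assumption made for this sketch
# (M/M/1-style: average delay = 1 / (capacity - load)); real traffic is
# burstier, but the qualitative shape is the same.

CAPACITY = 1.0  # hypothetical bottleneck capacity, normalized to 1


def avg_delay(load: float) -> float:
    """Average packet delay; grows without bound as load nears capacity."""
    return 1.0 / (CAPACITY - load)


def power(load: float) -> float:
    """Power = Load / Delay, which equals load * (CAPACITY - load) here."""
    return load / avg_delay(load)


if __name__ == "__main__":
    for load in (0.1, 0.3, 0.5, 0.7, 0.9, 0.99):
        print(f"load={load:4.2f}  delay={avg_delay(load):7.2f}  power={power(load):.4f}")
    # Power peaks near load = 0.5 for this model: pushing the load toward
    # capacity buys little extra throughput but a large increase in delay.
```

The same tradeoff reappears as the knee and cliff on slide 17: past the knee, extra load mostly buys delay; past the cliff, it destroys throughput.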
Slide 17: View from a Single Flow
• Knee – point after which
  – Throughput increases very slowly
  – Delay increases quickly
• Cliff – point after which
  – Throughput starts to decrease very fast to zero (congestion collapse)
  – Delay approaches infinity
[figures: throughput vs. load and delay vs. load, marking the knee, the cliff, packet loss, and congestion collapse]

Slide 18: General Approaches
• Send without care
  – Many packet drops
  – Disaster: leads to congestion collapse
• (1) Reservations
  – Pre-arrange bandwidth allocations
  – Requires negotiation before sending packets
  – Potentially low utilization (difficult to stat-mux)
• (2) Pricing
  – Don't drop packets for the highest bidders
  – Requires a payment model

Slide 19: General Approaches (cont'd)
• (3) Dynamic Adjustment
  – Probe the network to test the level of congestion
  – Speed up when there is no congestion
  – Slow down when there is congestion
  – Drawbacks:
      · Suboptimal
      · Messy dynamics
      · Seems complicated to implement
      · But clever algorithms are actually pretty simple (Jacobson/Karels '88)
• All three techniques have their place
  – But for generic Internet usage, dynamic adjustment is the most appropriate …
  – … due to pricing structure, traffic characteristics, and good citizenship

Slide 20: Idea of TCP Congestion Control
• Each source determines the available capacity
  – … so it knows how many packets to have in flight
• Congestion window (CWND)
  – Maximum # of unacknowledged bytes to have in flight
  – Congestion-control equivalent of the receiver window
  – MaxWindow = min{congestion window, receiver window}
  – Send at the rate of the slowest component
• Adapting the congestion window
  – Decrease upon detecting congestion
  – Increase upon lack of congestion: optimistic exploration

Slide 21: Detecting Congestion
• How can a TCP sender determine that the network is under stress?
• The network could tell it (ICMP Source Quench)
  – Risky, because during times of overload the signal itself could be dropped!
• Packet delays go up (knee of the load-delay curve)
  – Tricky, because it is a noisy signal (delay often varies considerably)
• Packet loss
  – Fail-safe signal that TCP already has to detect
  – Complication: non-congestive loss (checksum errors)

Slide 22: 5 Minute Break
Questions Before We Proceed?

Slide 23: Additive Increase, Multiplicative Decrease
• How much to increase and decrease?
  – Increase linearly, decrease multiplicatively (AIMD)
  – Necessary condition for stability of TCP
  – Consequences of an over-sized window are much worse than having an under-sized window
      · Over-sized window: packets dropped and retransmitted
      · Under-sized window: somewhat lower throughput
• Additive increase
  – On success for the last window of data, increase linearly
      · One packet (MSS) per RTT
• Multiplicative decrease
  – On loss of a packet, divide the congestion window in half

Slide 24: Leads to the TCP "Sawtooth"
[figure: congestion window vs. time; the window climbs, is halved at each loss, then climbs again]
(A toy AIMD loop reproducing this sawtooth appears below.)

Slide 25: Managing the Congestion Window
• …
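To make the AIMD rules on slide 23 and the sawtooth on slide 24 concrete, here is a toy simulation. It is only a sketch: the loss model (a loss whenever the window exceeds a fixed bottleneck capacity), the CAPACITY_SEGMENTS constant, and the aimd_sawtooth function are inventions for illustration; real TCP detects loss through duplicate ACKs or timeouts rather than by knowing the bottleneck capacity.

```python
# Toy AIMD loop: additive increase of one MSS per RTT on success,
# multiplicative decrease (halve the window) on loss.
# The "loss" condition below is a stand-in invented for this sketch:
# we pretend a loss occurs whenever cwnd exceeds a fixed capacity.

CAPACITY_SEGMENTS = 40  # hypothetical bottleneck: ~40 segments in flight


def aimd_sawtooth(rounds: int = 80) -> list[float]:
    """Return the congestion window (in segments) for each RTT."""
    cwnd = 1.0
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd > CAPACITY_SEGMENTS:     # "loss" detected this round
            cwnd = max(cwnd / 2.0, 1.0)  # multiplicative decrease
        else:
            cwnd += 1.0                  # additive increase: +1 MSS per RTT
    return history


if __name__ == "__main__":
    for rtt, w in enumerate(aimd_sawtooth()):
        # Crude text plot of the sawtooth from slide 24.
        print(f"RTT {rtt:3d}  cwnd {w:5.1f}  " + "#" * int(w))
```

In steady state the window oscillates between roughly half the capacity and the capacity, which is the sawtooth the slide shows; per slide 20, MaxWindow = min{congestion window, receiver window} would further cap cwnd if the receiver window were smaller.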

