A Survey on TCP-Friendly Congestion Control

Joerg Widmer, Robert Denda, and Martin Mauve
Praktische Informatik IV, University of Mannheim, Germany
(Robert Denda is also with ENDITEL Endesa Ingeniería de Telecomunicaciones.)

IEEE Network, May/June 2001. 0890-8044/01/$10.00 © 2001 IEEE

Abstract

New trends in communication, in particular the deployment of multicast and real-time audio/video streaming applications, are likely to increase the percentage of non-TCP traffic in the Internet. These applications rarely perform congestion control in a TCP-friendly manner; they do not share the available bandwidth fairly with applications built on TCP, such as Web browsers, FTP, or e-mail clients. The Internet community strongly fears that the current evolution could lead to congestion collapse and starvation of TCP traffic. For this reason, TCP-friendly protocols are being developed that behave fairly with respect to coexistent TCP flows. In this article we present a survey of current approaches to TCP friendliness and discuss their characteristics. Both unicast and multicast congestion control protocols are examined, and an evaluation of the different approaches is presented.

In the Internet, packet loss can occur as a result of transmission errors but also, and most commonly, as a result of congestion. TCP's end-to-end congestion control mechanism reacts to packet loss by reducing the number of outstanding unacknowledged data segments allowed in the network. TCP flows with similar round-trip times (RTTs) that share a common bottleneck reduce their rates so that the available bandwidth will be, in the ideal case, distributed equally among them.

Not all Internet applications use TCP, and therefore they do not follow the same concept of fairly sharing the available bandwidth. Thus far, the undesired effect of the unfairness of these non-TCP applications has not had a heavy impact, since most of the traffic in the Internet uses TCP-based protocols such as the Hypertext Transfer Protocol (HTTP), Simple Mail Transfer Protocol (SMTP), or File Transfer Protocol (FTP). However, the number of audio/video streaming applications such as Internet audio players, IP telephony, videoconferencing, and similar types of real-time applications is constantly growing, and it is feared that one consequence will be an increase in the percentage of non-TCP traffic. Since these applications commonly do not integrate TCP-compatible congestion control mechanisms, they treat competing TCP flows in an unfair manner: upon encountering congestion, all contending TCP flows reduce their data rates in an attempt to dissolve the congestion, while the non-TCP flows continue to send at their original rate. This highly unfair situation can lead to starvation of TCP traffic, or even to congestion collapse [1], which describes the undesirable situation where the available bandwidth in a network is almost exclusively occupied by packets that are discarded because of congestion before they reach their destination.

For this reason, it is desirable to define appropriate rate adaptation rules and mechanisms for non-TCP traffic that are compatible with the rate adaptation mechanism of TCP. These rate adaptation rules should make non-TCP applications TCP-friendly and lead to a fair distribution of bandwidth.

In this article we present a survey of TCP-friendly congestion control schemes to summarize the state of the art in this field of research and to motivate further research on TCP friendliness. We define TCP friendliness and outline the design space for TCP-friendly congestion control. Existing single-rate protocols are discussed, and a detailed survey of multirate protocols is given. The article contains an evaluation of the strengths and weaknesses of the mechanisms presented. We point to open problems and issues for future research and give some concluding remarks.

TCP and TCP Friendliness

TCP is a connection-oriented unicast protocol that offers reliable data transfer as well as flow and congestion control. TCP maintains a congestion window that controls the number of outstanding unacknowledged data packets in the network. Sending data consumes slots in the window of the sender, and the sender can send packets only as long as free slots are available. When an acknowledgment (ACK) for outstanding packets is received, the window is shifted so that the acknowledged packets leave the window, and the same number of free slots becomes available.

On startup, TCP performs slowstart, during which the rate roughly doubles each RTT to quickly gain its fair share of bandwidth. In steady state, TCP uses an additive increase, multiplicative decrease (AIMD) mechanism to detect additional bandwidth and to react to congestion. When there is no indication of loss, TCP increases the congestion window by one slot per RTT. In case of packet loss indicated by a timeout, the congestion window is reduced to one slot and TCP reenters the slowstart phase. Packet loss indicated by three duplicate ACKs results in a window reduction to half of its previous size.
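These window dynamics can be summarized in a few lines of code. The following is a minimal sketch, not taken from the article: it counts the window in whole slots (segments), and the slow-start threshold (ssthresh) handling and all names are illustrative assumptions beyond what the text states.

class SimpleTcpWindow:
    """Illustrative model of TCP's window adjustments (counted in slots)."""

    def __init__(self):
        self.cwnd = 1        # congestion window, in slots
        self.ssthresh = 64   # slow-start threshold, in slots (assumed initial value)

    def on_rtt_without_loss(self):
        # Slowstart: the window (and hence the rate) roughly doubles each RTT;
        # once past ssthresh, additive increase adds one slot per RTT.
        if self.cwnd < self.ssthresh:
            self.cwnd *= 2
        else:
            self.cwnd += 1

    def on_triple_duplicate_ack(self):
        # Multiplicative decrease: the window is halved (but stays at least one slot).
        self.cwnd = max(1, self.cwnd // 2)
        self.ssthresh = self.cwnd

    def on_timeout(self):
        # Timeout: reduce the window to one slot and reenter slowstart.
        self.ssthresh = max(2, self.cwnd // 2)
        self.cwnd = 1

Real TCP implementations track the window in bytes and add further rules such as fast recovery, but the slot-based view above matches the description used in this article.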
Modeling TCP Throughput

The throughput of TCP depends mainly on the round-trip time t_RTT, the retransmission timeout value t_RTO, the segment size s, and the packet loss rate p. Using these parameters, an estimate of TCP's throughput can be derived. A basic model that approximates TCP's steady-state throughput T is given by Eq. 1 [1]. This model is a simplification in that it does not take into account TCP timeouts. Equation 2, presented in [2], gives an example of a more complex model of TCP throughput; b is the number of packets acknowledged by each ACK and W_m is the maximum size of the congestion window. Unlike the model presented by Eq. 1, the complex model takes into account rate reductions due to TCP timeouts. Thus, it models TCP more accurately in an environment with high loss rates.

T = \frac{1.22 \cdot s}{t_{RTT} \cdot \sqrt{p}}   (1)

T = \min\left( \frac{W_m}{t_{RTT}},\; \frac{s}{t_{RTT}\sqrt{2bp/3} + t_{RTO} \cdot \min\!\left(1,\, 3\sqrt{3bp/8}\right) \cdot p \cdot (1 + 32p^2)} \right)   (2)

Note that both models assume the RTT and loss rate are independent of the estimated rate (i.e., they do not take into account that changing the rate can affect the RTT and loss rate). They work well in environments with a high level of statistical multiplexing, such as the Internet, but care has to be taken when they are used as part of a protocol's control loop and only a few flows share a bottleneck link. In that case, changes to the sending rate alter the conditions at the bottleneck link, which in turn determine the sending rate through the equation. Such a feedback loop can render the results of both equations invalid.

TCP Friendliness

In [1], non-TCP …
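To make the two models concrete, here is a small sketch, not from the article, that evaluates both estimates; the units, the default values for b and W_m, and the example parameters are illustrative assumptions.

import math

def simple_model(s, t_rtt, p):
    # Eq. 1: steady-state throughput estimate, ignoring timeouts.
    # s in bytes, t_rtt in seconds, p as a fraction; result in bytes per second.
    return (1.22 * s) / (t_rtt * math.sqrt(p))

def complex_model(s, t_rtt, t_rto, p, b=2, w_m=65535):
    # Eq. 2: throughput estimate including retransmission timeouts.
    # b = packets acknowledged per ACK, w_m = maximum window in bytes
    # (these defaults are assumptions, not values from the article).
    denom = (t_rtt * math.sqrt(2 * b * p / 3)
             + t_rto * min(1.0, 3 * math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return min(w_m / t_rtt, s / denom)

# Example with arbitrary values: 1460-byte segments, 100 ms RTT, 400 ms RTO, 1% loss.
print(simple_model(1460, 0.1, 0.01))        # ~178,000 bytes/s
print(complex_model(1460, 0.1, 0.4, 0.01))  # ~116,000 bytes/s (timeouts lower the estimate)

Such an estimate can serve as an upper bound on the sending rate of a non-TCP flow, subject to the control-loop caveat noted above.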

