CMU CS 15744 - lecture

Copyright Hari Balakrishnan, 1998-2005, all rights reserved. Please do not redistribute without permission.

LECTURE 6
End-to-end Congestion Control

Congestion management is a fundamental problem in networking because the way to achieve cost-effective and scalable network designs is by sharing the network infrastructure. Managing shared resources carefully is therefore a critical problem. This lecture covers the principles and practice of (unicast) congestion control, including linear congestion control algorithms and the practically important details of TCP congestion control. Background readings for this lecture include Chiu and Jain's paper introducing linear controls [?] and Jacobson's TCP congestion control paper [10].

6.1 The problem

The first question one should ask at this stage is: "What resources are being shared that need to be managed?" To answer this question, let's think about some of the properties of best-effort networks. Recall that the model at any router is for each packet to be processed and forwarded along one of several possible output links, at a rate determined by the bandwidth of that link. In the meantime, more packets, which in general could belong to any other flow,[1] could arrive at the router (recall that this is one of the key consequences of asynchronous multiplexing).

What happens to these packets? The router tries to accommodate them in its queues and process them, but if there isn't enough space in its queue, some of them may be dropped. Thus, packet queues build up when the link is busy and demand for the link outstrips the available link bandwidth, and packets are generally dropped when the queues get full. Figure 6-1 shows a simple network with connections between senders S_i and receivers R_i via a 100 Kbps bottleneck link.[2]

The above discussion makes it clear that there are two resources that flows contend for in a network:

1. Link bandwidth. The network has to decide how to apportion bandwidth between different flows.
Network routers may also decide to prioritize certain types of packets (e.g., latency-sensitive audio or interactive telnet packets) over others (e.g., electronic mail).[3]

2. Queue space. When the router decides that a packet has to be dropped because it is running out of queue space (also known as buffer space), which packet should it drop? The arriving one? The earliest one? A random one? And when should it decide to drop packets: only when the queue is full, or sooner than that? Waiting too long before dropping packets only serves to increase packet delays, and it may be advantageous to drop occasional packets even when the queue isn't full.

[1] Or connection. A flow is just a generalization of a connection; the distinction between the two is irrelevant for the purposes of this discussion.
[2] We use Mbps for Megabits per second, Kbps for Kilobits per second, and bps for bits per second.

Figure 6-1: A simple network topology showing how sharing causes resource contention. Many connections between senders and receivers contend for the bottleneck 100 Kbps link.

What happens if we don't manage network resources well? For one thing, the available bandwidth might end up being greatly under-utilized even when there is demand for it, causing economic heartburn. Most often, however, network designers end up provisioning for a certain amount of "expected" offered load, and then have to deal with overload, or congestion.

6.1.1 Understanding congestion

A network link is said to be congested if contention for it causes queues to build up and packets to start getting dropped. At such a time, demand for link bandwidth (and eventually queue space) outstrips what is available.
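The drop-policy questions raised under "Queue space" above can be made concrete with a toy sketch. The function and parameters below are illustrative inventions, not any real router's algorithm; the probabilistic early-drop branch is only a cartoon of the idea behind active queue management schemes such as RED, which drop occasional packets before the queue fills.

```python
import random

# Toy sketch of the drop-policy choices a router faces at a full buffer.
# All names and parameters are invented for illustration.

def arrive(queue, pkt, capacity, policy="drop-tail", early_drop=False):
    """Process one packet arrival; return (new_queue, dropped_packet_or_None)."""
    # Early dropping: optionally drop *before* the queue is full, with a
    # probability that grows as the queue fills (a cartoon of RED-style AQM).
    if early_drop and len(queue) > capacity // 2:
        p = (len(queue) - capacity // 2) / (capacity // 2)  # 0 -> 1 as queue fills
        if random.random() < p:
            return queue, pkt
    if len(queue) < capacity:
        return queue + [pkt], None         # room available: accept the packet
    # Queue is full: which packet should be dropped?
    if policy == "drop-tail":              # drop the arriving packet
        return queue, pkt
    if policy == "drop-front":             # drop the earliest (oldest) packet
        return queue[1:] + [pkt], queue[0]
    if policy == "random":                 # drop a random queued packet
        i = random.randrange(len(queue))
        return queue[:i] + queue[i + 1:] + [pkt], queue[i]
    raise ValueError(policy)

random.seed(0)
q, victim = [], None
for pkt in range(12):                      # 12 arrivals into a capacity-8 queue
    q, victim = arrive(q, pkt, capacity=8, policy="drop-front")
print(q, victim)                           # drop-front evicts the oldest packets
```

Note the trade-off the policies embody: drop-tail is simplest, while drop-front discards the stalest data, which can help latency-sensitive traffic.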
For this reason, many network resource management problems are also called congestion control or congestion management problems, since the efficient management of congestion implies an efficient management of the above network resources.

[3] Note that there's a "type-of-service (TOS)" field, now also called the "differentiated services" field, in the IP header; routers sometimes look at this field to decide how to prioritize packets.

Figure 6-2: Schematic view of throughput (bits/s) vs. offered load (bits/s), showing the optimal operating point and congestion collapse region.

Congestion occurs due to overload, and is exacerbated by network heterogeneity. Heterogeneity is a wonderful property of the Internet, which allows 14.4 Kbps wireless links to co-exist with and connect to 100 Mbps Ethernets. However, the transition between links of very different speeds implies that sources connected to high-bandwidth links can't simply blast their packets through, since the bottleneck available bandwidth to a destination can be quite a bit smaller. Sources should learn what rates are sustainable and adapt to available bandwidth; this should be done dynamically because there are few paths on the Internet whose conditions are unchanging.

There are many examples of how overload causes congestion. For example, one possible way to achieve high throughput in a network might be to have all the sources blast packets as fast as they can, so that bottleneck network links run at close to 100% utilization. While this seems reasonable, a little thought shows that this approach is self-defeating. All it really accomplishes are long packet queues and resultant end-to-end delays, as well as increased packet loss rates, which for a reliable end-to-end transport layer would cause a large number of retransmissions.
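The blasting example can be sketched numerically. Below is a minimal toy model, with all rates and sizes invented for illustration, of senders overdriving a 100 Kbps bottleneck: forwarding is capped at the link's drain rate, so the extra offered load only keeps the queue full and generates drops.

```python
# Minimal sketch of queue buildup at an overdriven bottleneck link.
# Illustrative numbers only; 1-second time steps, fixed-size packets.

LINK_RATE_BPS = 100_000      # 100 Kbps bottleneck link
PACKET_BITS = 10_000         # 10,000-bit packets -> link drains 10 packets/s
BUFFER_PKTS = 20             # queue (buffer) capacity in packets
ARRIVAL_PPS = 15             # combined offered load: 15 packets/s

drain_pps = LINK_RATE_BPS // PACKET_BITS   # packets forwarded per second
queue = dropped = forwarded = 0

for second in range(10):                   # simulate 10 seconds
    for _ in range(ARRIVAL_PPS):           # arrivals during this second
        if queue < BUFFER_PKTS:
            queue += 1                     # room in the buffer: enqueue
        else:
            dropped += 1                   # buffer full: drop-tail
    sent = min(queue, drain_pps)           # link forwards at most drain_pps
    queue -= sent
    forwarded += sent

print(f"queue={queue}, forwarded={forwarded}, dropped={dropped}")
```

Even though the senders offered 150 packets over the run, only 100 get through (the drain rate times the duration); the rest are dropped or sit queued, adding delay without adding throughput.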
This is therefore quite the wrong thing to do to obtain good network throughput. This example also demonstrates the fundamental tension between achieving high link utilization on the one hand (by transmitting data rapidly), and low delays and loss rates on the other (which increase if data is sent too rapidly).

This notion of how increasing offered load to achieve high utilization is at odds with packet losses and delays is illustrated in Figure 6-2. This figure shows a schematic view of throughput as a function of the offered load on the network. Initially, at low levels of offered load, throughput is roughly proportional to offered load, because the network is under-utilized. Then, throughput plateaus at a value equal to the bottleneck link bandwidth.
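The shape of the Figure 6-2 curve can be reproduced with a crude numeric sketch. The "waste" model below, in which each unit of loss causes a reliable transport to spend bottleneck capacity on retransmitted duplicates, is an invented toy, chosen only to make the curve rise, hit the knee at the bottleneck capacity, and then fall off.

```python
# Crude sketch of throughput vs. offered load for a bottleneck of capacity C.
# The collapse term (retransmission waste growing with loss) is a toy model.

C = 100_000  # bottleneck capacity in bits/s (100 Kbps)

def throughput(offered, waste_per_loss=0.5):
    if offered <= C:
        return offered                   # under-utilized: throughput tracks load
    loss_frac = (offered - C) / offered  # fraction of offered load dropped
    # Past the knee, lost data is retransmitted; in this toy model each unit
    # of loss wastes some bottleneck capacity carrying duplicates.
    return C * (1 - waste_per_loss * loss_frac)

for load in (25_000, 50_000, 100_000, 200_000, 400_000):
    print(f"offered={load:>7} -> throughput={throughput(load):>9.0f}")
```

In the real curve the plateau persists for a while before collapse sets in; the immediate decline past the knee here is an artifact of this deliberately simple waste model, not a property of actual networks.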

