Network Performance & Queuing
EE 122: Intro to Communication Networks, Fall 2007 (WF 4-5:30 in Cory 277)
Vern Paxson
TAs: Lisa Fowler, Daniel Killebrew & Jorge Ortiz
http://inst.eecs.berkeley.edu/ee122/
Materials with thanks to Jennifer Rexford, Ion Stoica, and colleagues at Princeton and UC Berkeley

Announcements
- Next Wednesday's lecture (wireless) will be given by Jorge
  - I will be away next Wednesday and won't have office hours that day
  - But I will have my usual Friday office hour
- Reminder: Phase 1 of Project 2 is due Tuesday, 11PM

Goals of Today's Lecture
- Finish discussion of TCP performance
  - Window Scaling
  - TCP throughput equation: computes approximate long-running TCP performance for a given packet loss probability p
- Relationship between performance and queuing
  - Router architecture
  - FIFO queuing
  - Active Queue Management (RED)
  - Explicit Congestion Notification (ECN)
- Modeling of queuing systems: Little's Law

Going Fast
- Q: On a path with RTT = 100 msec, what's the absolute fastest rate that TCP can achieve?
- Q: What's the absolute largest sliding window that TCP can use?
- A: The advertised window is 16 bits, so 65,535 bytes
  - Thus max speed = 65,535 bytes / 100 msec ≈ 655 KB/s
- Q: How can we fix this problem?
- A: We need a larger window
- Q: How do we make the window larger?
- A: Using a TCP option

Window Scaling Option (RFC 1323)
[Figure: TCP header carrying the option. Fields: source port, destination port, sequence number, acknowledgment, HdrLen = 6 (specifying 4 bytes of options), flags, advertised window, checksum, urgent pointer; then the options: Kind = 3 (indicates Window Scaling), Length = 3, shift.cnt, and Kind = 0 (indicates end of options), followed by data.]

Window Scaling, con't
- Sent in the initial SYN
- If the server's SYN+ACK also includes a Window Scaling option, then scaling is in effect
  - The server including the option confirms its use
- shift.cnt specifies the scaling factor for the units used in the window advertisement
  - E.g., shift.cnt = 5 means the advertised window is in 2^5 = 32-byte units

Window Scaling, con't
- Q: Now how large can the window be?
- A: Clearly it must not exceed 2^32; if it does, then we can't disambiguate data in flight
  - So scaling ≤ 16
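The arithmetic behind these window limits can be sketched in a few lines (a quick illustration; the function and variable names are mine, not from the lecture):

```python
# Sketch of the window arithmetic above (illustrative; names are mine,
# not from the lecture).

def max_rate(window_bytes, rtt_sec):
    """TCP can send at most one window of data per round-trip time."""
    return window_bytes / rtt_sec

# Without scaling: 16-bit advertised window, RTT = 100 msec.
base_window = 65_535                      # 2**16 - 1 bytes
print(max_rate(base_window, 0.100))       # 655350.0 bytes/sec, i.e. ~655 KB/s

# With Window Scaling, the effective window is (advertised << shift.cnt).
# E.g. shift.cnt = 5: the window is counted in 2**5 = 32-byte units.
print(base_window << 5)                   # 2097120 bytes

# shift.cnt is capped at 14, keeping the window below 2**30.
assert (base_window << 14) < 2**30
```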
- In fact, somewhat subtle requirements limit the window to 2^30, to allow the receiver to determine whether data fits in the offered window
  - So scaling ≤ 14

Window Scaling, con't
- Now we can go fast! Suppose a high-speed LAN with RTT = 1 msec. How fast can we transmit? 1 GB/msec = 1 TB/sec!
- What problem arises if packets are occasionally delayed in the network for 10 msec?
  - Sequence-number wrap: we can't tell earlier, delayed segments from later instances
- Fix: another TCP option to associate high-resolution timestamps with TCP segments
  - Essentially adds more bits to the sequence space
  - Side effect: no more need for the Karn/Partridge restriction not to compute RTTs from ACKs of retransmitted packets

Relationship of Performance & Loss
- For packets of B bytes and a packet loss rate p, throughput is approximately
    T = sqrt(1.5) * B / (RTT * sqrt(p))
- Implications:
  - Long-term throughput falls as 1/RTT
  - Long-term throughput falls as 1/sqrt(p)
  - A non-TCP transport can use the equation to provide TCP-friendly congestion control

Where Does Loss (p) Come From, Anyway?
- Routers
- Queuing

Generic Router Architecture
- Input and output interfaces are connected through an interconnect
- The interconnect can be implemented by:
  - Shared memory: low-capacity routers (e.g., PC-based routers)
  - Shared bus: medium-capacity routers
  - Point-to-point (switched) bus: high-capacity routers
    - Packets fragmented into cells
    - Essentially a network inside the router

Shared Memory (1st Generation)
[Figure: shared backplane connecting the CPU (route table, buffer memory) to line interfaces (MACs).]
- Typically 0.5 Gbps aggregate capacity
- Limited by the rate of the shared memory
(Slide by Nick McKeown)

Shared Bus (2nd Generation)
[Figure: CPU (route table, buffer memory) on a shared bus with line cards, each with its own buffer memory, forwarding cache, and MAC.]
- Typically 5 Gb/s aggregate capacity
- Limited by the shared bus
(Slide by Nick McKeown)

Point-to-Point Switch (3rd Generation): switched backplane
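The throughput relation above lends itself to a quick numerical check. This sketch (function name and example parameter values are my own, assuming the sqrt(1.5) constant form of the equation) verifies the 1/RTT and 1/sqrt(p) scaling:

```python
import math

def tcp_throughput(B, rtt, p):
    """Approximate long-running TCP throughput (bytes/sec) for packets of
    B bytes, round-trip time rtt (seconds), and packet loss probability p."""
    return math.sqrt(1.5) * B / (rtt * math.sqrt(p))

# Example: 1500-byte packets, RTT = 100 msec, 1% loss.
rate = tcp_throughput(1500, 0.100, 0.01)
print(f"{rate / 1e3:.0f} KB/s")           # ~184 KB/s

# Halving the RTT doubles throughput (T falls as 1/RTT) ...
assert math.isclose(tcp_throughput(1500, 0.050, 0.01), 2 * rate)
# ... and quadrupling the loss rate halves it (T falls as 1/sqrt(p)).
assert math.isclose(tcp_throughput(1500, 0.100, 0.04), rate / 2)
```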
[Figure: switched backplane connecting line cards and a CPU card; each line card has local buffer memory and a forwarding table; the CPU card holds the routing table.]
- Typically 100 Gbps aggregate capacity
(Slide by Nick McKeown)

What a Router Looks Like
- Cisco GSR 12416: capacity 160 Gb/s; power 4.2 kW; 8M lines of code (circa year 2000); 19 in wide, 6 ft tall, 2.5 ft deep
- Juniper M160: capacity 80 Gb/s; power 2.6 kW; 19 in wide, 3 ft tall, 2 ft deep
(Slide courtesy Nick McKeown)

Input Interface
- Packet forwarding: decide to which output interface to forward each packet, based on the information in the packet header
  - Examine the packet header
  - Look up in the forwarding table
  - Update the packet header
- Question: do we send the packet to the output interface immediately?
[Figure: input interfaces connected to output interfaces through the interconnect.]

Output Functions
- Buffer management: decide when and which packet to drop
- Scheduler: decide when and which packet to transmit
[Figure: packets entering a buffer drained by a scheduler.]

Output Queued Routers
- Only output interfaces store packets
- Advantages
  - Easy to design algorithms: only one congestion point
- Disadvantages
  - Requires an output speedup Ro = N*C, where N is the number of interfaces (not feasible)
[Figure: backplane delivering packets at rate Ro to output interfaces of line rate C.]

Input Queued Routers
- Input interfaces store packets
- Easier to build, since we only need R = C
  - Though we need to implement back-pressure to know when to send
- But harder to build efficiently, due to contention and head-of-line blocking
[Figure: packets queued at input interfaces of rate R = C, crossing the backplane to output interfaces.]

Head-of-Line Blocking
- A cell at the head of an input queue cannot be transferred, thus blocking the following cells
[Figure: three inputs, three outputs. A cell at Input 1 cannot be transferred because it is blocked by the orange cell; a cell at Input 3 cannot be transferred because of output buffer overflow at Output 3.]
- Modern high-speed routers use a combination of input/output queuing, with flow control and multiple virtual queues

5 Minute Break
Questions Before We Proceed?

Simple Queuing: FIFO and Drop-Tail
- Most of today's routers
- Transmission via FIFO scheduling
  - First-in first-out queue: packets transmitted in the order they arrive
- Buffer
  management (drop-tail): if the queue is full, drop the incoming packet

Refinements to FIFO
- Random Early Detection (RED)
- Explicit Congestion Notification (ECN)

Bursty Loss From Drop-Tail Queuing
- TCP depends on packet loss
  - Packet loss is the indication of congestion
  - In fact, TCP drives the network into packet loss, by continuing to increase the sending rate
- Drop-tail queuing leads to bursty loss
  - When a link becomes congested, many arriving packets encounter a full queue
  - And, as a result, many flows perceive loss at the same time
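The drop-tail FIFO discipline described above can be sketched in a few lines (the DropTailQueue class is illustrative, not from the lecture):

```python
from collections import deque

class DropTailQueue:
    """Sketch of FIFO scheduling with drop-tail buffer management
    (illustrative class, not from the lecture)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.q = deque()
        self.drops = 0

    def enqueue(self, pkt):
        """Drop-tail: if the queue is full, drop the arriving packet."""
        if len(self.q) >= self.capacity:
            self.drops += 1
            return False
        self.q.append(pkt)
        return True

    def dequeue(self):
        """FIFO: transmit packets in the order they arrived."""
        return self.q.popleft() if self.q else None

q = DropTailQueue(capacity=2)
for pkt in ["a", "b", "c"]:      # a burst of 3 arrivals into a 2-packet buffer
    q.enqueue(pkt)
print(q.drops)                   # prints 1: the last packet of the burst was dropped
print(q.dequeue(), q.dequeue())  # prints "a b": FIFO order preserved
```

Note how the loss falls entirely on packets arriving while the queue is full, which is exactly why bursts of arrivals see bursts of drops.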