EE202A (Fall 2001): Lecture #6
Real-time Communications
10/29/2001

Mani Srivastava
UCLA - EE Department
Room: 7702-B Boelter Hall
Email: [email protected]
Tel: 310-267-2098
WWW: http://www.ee.ucla.edu/~mbs
Copyright 2001 Mani Srivastava

Slide 2: Reading List for This Lecture
- Required
  - Lahiri, K.; Raghunathan, A.; Lakshminarayana, G. "LOTTERYBUS: A new high-performance communication architecture for system-on-chip designs." Proceedings of the 38th ACM Design Automation Conference, 2001.
- Recommended: none
- Others: none

Slide 3: Real-time Communications
- Analogous to real-time computation
  - The value of a communication depends on the time at which the message is delivered to the recipient
- Metrics
  - Throughput
  - Delay
  - Delay jitter
  - Loss rate
  - Fairness
- Comes in hard and soft varieties
- Deterministic vs. statistical guarantees

Slide 4: Key Problem
- Allocation and scheduling of communication resources
  - Point-to-point link, e.g. wire (scheduling at the transmitter)
  - Distributed link, e.g. wireless (MAC) or bus (arbiter)
  - Entire network (routing)
  - Anywhere there is a shared resource!
- Analogous to task scheduling on processors, but with certain crucial differences
  - Often no preemption, or only coarse preemption
  - Channels of time-varying quality/capacity

Slide 5: Types of Traffic Sources
- Constant bit rate: periodic traffic
  - Fixed-size packets at periodic intervals
  - Analogous to periodic tasks with constant computation time in the RM model
- Variable bit rate: bursty traffic
  - Fixed-size packets at irregular intervals
  - Variable-size packets at regular intervals
- Traffic characteristics may change as the traffic passes through the communication system

Slide 6: Key Issues
- Scheduling
- Admission control (schedulability)
- Policing (for "isolation")
- Goals
  - Meet performance and fairness metrics
  - High resource utilization (as measured by the resource operator)
  - Easy to implement: small work per data item, scaling slowly with the number of flows or tasks
  - Easy admission control decisions
- Schedulable region: the set of all possible combinations of performance bounds that a scheduler can simultaneously meet

Slide 7: Fairness
- Intuitively
  - Each connection gets no more than what it wants
  - The excess, if any, is equally shared
- Fairness is intuitively a good idea
- Fairness also provides protection
  - Traffic hogs cannot overrun others
  - Automatically builds firewalls around heavy users
  - The reverse is not true: protection may not lead to fairness
[Figure: flows A, B, C; half of the excess is transferred toward the unsatisfied demand]

Slide 8: Max-min Fairness
- Maximize the minimum share of any task or flow whose demand is not fully satisfied
  - Resources are allocated in order of increasing demand, normalized by weight
  - No task or flow gets a share larger than its demand
  - Tasks or flows with unsatisfied demands share the remaining resource in proportion to their weights

Slide 9: Example
- Given: four flows with demands 4, 2, 10, 4 and weights 2.5, 4, 0.5, and 1, and a resource with capacity C = 16
- Steps
  - Normalize the weights so that the smallest is 1: 5, 8, 1, 2
  - In each round, give each flow a share proportional to its weight
  - Round 1: allocation is 5, 8, 1, 2
    - Results in 1 and 6 units of excess for flows 1 and 2, i.e. 7 in total
    - Allocate this 7 to the flows still in deficit, according to re-normalized weights
  - Round 2: allocation is 7*1/3 and 7*2/3 to flows 3 and 4
    - Results in 2.666 excess for flow 4, while flow 3 is still short
    - Allocate this 2.666 to the flows still in deficit, according to re-normalized weights
  - Round 3: allocation is 2.666 for flow 3
    - Results in flow 3 with a total of 6, i.e. a deficit of 4

Slide 10: Policing
- Three criteria
  - (Long-term) average (sustained) rate
    - 100 packets per second, or 6000 packets per minute? The crucial aspect is the interval length
  - Peak rate
    - e.g., 6000 packets per minute average and 1500 packets per second peak
  - (Maximum) burst size
    - Maximum number of packets sent consecutively, i.e. over a short period of time

Slide 11: Leaky Bucket Mechanism
- Provides a means for limiting the input to a specified burst size and average rate
- Figure from: Kurose & Ross

Slide 12: Leaky Bucket Mechanism (contd.)
- The bucket can hold b tokens; tokens are generated at a rate of r tokens/sec unless the bucket is full of tokens
- Over an interval of length t, the number of packets that are admitted is less than or equal to (r*t + b)
- How can one enforce a constraint on the peak rate?

Slide 13: Real-time Communications over a Link
- Scenario: applications sharing a link of fixed capacity
- Which packet to send when?
  - FIFO
  - Priority queuing: preemptive, non-preemptive
  - Round robin
  - Weighted fair queuing
  - EDF
- Which packet to discard if the buffers at the sender are full?
- What if the senders are not at the same place?
  - Need a multiple-access mechanism
  - Need a distributed implementation

Slide 14: Fundamental Choices
- Number of priority levels
  - A priority level is served only if higher levels don't need service (multilevel priority with exhaustive service)
- Work-conserving vs.
  non-work-conserving
  - A work-conserving discipline is never idle when packets await service
  - Why bother with non-work-conserving disciplines?
- Degree of aggregation
  - Trades off cost, amount of state, and degree of individualization
  - Aggregate flows into a class
    - Members of a class have the same performance requirement
    - No protection within a class
- Service order within a level
  - FCFS (bandwidth hogs win; no guarantee on delays)
  - In order of a service tag (both protection and delay bounds can be ensured)

Slide 15: Non-work-conserving Disciplines
- Idea: delay a packet until it is eligible
  - Reduces delay jitter => fewer buffers in the network
  - E.g., traffic remains smooth as it proceeds through the network
- How to choose the eligibility time?
  - Rate-jitter regulator: bounds the maximum outgoing rate
  - Delay-jitter regulator: compensates for variable delay at the previous hop
- Do we really need it?
  - One can remove delay jitter at an endpoint instead
  - But regulating inside the network also reduces expensive switch memory
  - And makes it easy to compute end-to-end performance: summing per-hop delay and delay jitter leads to tight end-to-end delay and delay-jitter bounds
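The (r, b) token bucket from the leaky-bucket slides can be sketched in a few lines of Python; it also captures the essence of the rate-jitter regulator mentioned above, which bounds the maximum outgoing rate. A minimal illustration, not from the lecture — the class and method names are my own:

```python
class TokenBucket:
    """(r, b) token bucket: over any interval of length t, at most
    r*t + b fixed-size packets are admitted (cf. the leaky-bucket slides)."""

    def __init__(self, rate, bucket_size):
        self.r = rate              # tokens generated per second
        self.b = bucket_size       # bucket capacity in tokens
        self.tokens = bucket_size  # bucket starts full
        self.last = 0.0            # time of the previous admit() call

    def admit(self, now):
        """Return True if a packet arriving at time `now` conforms."""
        # Refill tokens accrued since the last call, capped at b.
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if self.tokens >= 1.0:     # one token per packet
            self.tokens -= 1.0
            return True
        return False               # non-conforming: drop or delay the packet
```

As for the slide's question about enforcing a peak rate: one standard approach is to cascade two buckets in series — an (r, b) bucket for the sustained rate and a (peak_rate, 1) bucket for the peak — and admit a packet only if it conforms to both.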
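The round-by-round computation walked through in the max-min example slides can be reproduced with a short Python sketch. The function name and this iterative "distribute, cap, redistribute" formulation are mine, but it implements the weighted max-min policy stated on the Max-min Fairness slide:

```python
def max_min_fair(capacity, demands, weights):
    """Weighted max-min fair allocation by repeated proportional rounds:
    give each unsatisfied flow a share of the leftover capacity in
    proportion to its weight, cap satisfied flows at their demand,
    and redistribute the excess until none remains."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))   # flows still in deficit
    remaining = capacity
    while remaining > 1e-9 and active:
        total_w = sum(weights[i] for i in active)
        shares = {i: remaining * weights[i] / total_w for i in active}
        remaining = 0.0
        for i in list(active):
            alloc[i] += shares[i]
            if alloc[i] >= demands[i]:  # satisfied: return the excess
                remaining += alloc[i] - demands[i]
                alloc[i] = demands[i]
                active.discard(i)
    return alloc
```

On the example's inputs (C = 16, demands 4, 2, 10, 4, weights 2.5, 4, 0.5, 1) this returns allocations 4, 2, 6, 4: flows 1, 2, and 4 are fully satisfied, and flow 3 ends with 6 of its demanded 10 — the deficit of 4 reached in Round 3 of the slides.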