Queueing Theory Lecture Notes
ENEE 426, Spring 2008
March 4, 2008

1 Little's Theorem

N(t) = number of customers in the queue at time t
\alpha(t) = number of customers who arrived in [0, t]
T_i = time spent in the system by the ith customer

Let

N_t = \frac{1}{t} \int_0^t N(\tau) \, d\tau    (1)

be the time average of N(t). Let

N = \lim_{t \to \infty} N_t    (2)

be the steady-state time average. Let

\lambda_t = \frac{\alpha(t)}{t}    (3)

be the time-average arrival rate over [0, t]. Let

\lambda = \lim_{t \to \infty} \lambda_t    (4)

be the steady-state arrival rate. Let

T_t = \frac{1}{\alpha(t)} \sum_{i=1}^{\alpha(t)} T_i    (5)

be the time-average customer delay over [0, t]. Let

T = \lim_{t \to \infty} T_t    (6)

be the steady-state delay. Little's theorem:

N = \lambda T    (7)

2 Probabilistic Version

Define

p_n(t) = \Pr[n \text{ customers in queue at time } t]    (8)

Then let

\bar{N}(t) = E[N(t)] = \sum_{n=0}^{\infty} n \, p_n(t)    (9)

In steady state, p_n = \lim_{t \to \infty} p_n(t) for all n, so

\bar{N} = \sum_{n=0}^{\infty} n \, p_n    (10)

Note that this is also

\bar{N} = \lim_{t \to \infty} \bar{N}(t)    (11)

For delay, assume knowledge of \bar{T}_k, the average delay of the kth customer. Then

\bar{T} = \lim_{k \to \infty} \bar{T}_k    (12)

In ergodic systems the time average equals the steady-state average, thus N = \bar{N} and T = \bar{T}. Lastly, define

\lambda = \lim_{t \to \infty} \frac{E[\alpha(t)]}{t}    (13)

Then Little's theorem still holds:

\bar{N} = \lambda \bar{T}    (14)

3 Examples

Example 1: Packets arrive at n nodes in a network with rates \lambda_1, \ldots, \lambda_n. Let N be the average number of packets inside the network. Then the average delay per packet is

T = \frac{N}{\sum_{i=1}^n \lambda_i}    (15)

Also, N_i = \lambda_i T_i at each node i.

Example 2: One packet arrives every k seconds, with the first arrival at t = 0, so the interarrival times are constant. Each packet requires \alpha k seconds to transmit, and processing plus propagation time equals P. Then

T = \alpha k + P    (16)

\lambda = \frac{1}{k}    (17)

N = \lambda T = \alpha + \frac{P}{k}    (18)

Since the system is deterministic, this holds only with time averages.

4 Poisson Processes

A Poisson process counts the number of arrivals between time 0 and time t.
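As a quick illustration, a Poisson arrival stream can be generated by drawing independent exponential interarrival times. The sketch below uses only the Python standard library; the rate lam = 2.0, the seed, and the horizon are illustrative choices, not values from the notes:

```python
import random

random.seed(1)
lam, t_end = 2.0, 1000.0       # illustrative rate and time horizon

# Generate arrivals in [0, t_end] from i.i.d. exponential(lam)
# interarrival times; count plays the role of A(t_end).
count = 0
t = random.expovariate(lam)
while t <= t_end:
    count += 1
    t += random.expovariate(lam)

# Since E[A(t)] = lam * t, the empirical rate should be near lam.
rate_est = count / t_end
print(rate_est)
```

Over a long horizon the empirical rate concentrates around \lambda, consistent with E[A(t)] = \lambda t.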
If A(t) is a Poisson process with arrival rate \lambda, then:

• A(t + \tau) \ge A(t) for \tau \ge 0 (sample paths are nondecreasing).

• A(t) takes values in the nonnegative integers.

• Increments are Poisson distributed:

P[A(t + \tau) - A(t) = n] = e^{-\lambda \tau} \frac{(\lambda \tau)^n}{n!}    (19)

with mean

E[A(t + \tau) - A(t)] = \lambda \tau    (20)

• If t_n is the time of the nth arrival and \tau_n = t_{n+1} - t_n is the interarrival time, then \tau_n has probability density

f(\tau) = \lambda e^{-\lambda \tau}, \quad \tau \ge 0    (21)

an exponential density, with mean E[\tau_n] = 1/\lambda and variance Var[\tau_n] = 1/\lambda^2.

• If the A_i(t) are independent Poisson processes with rates \lambda_i, then the merged process

\hat{A}(t) = \sum_{i=1}^n A_i(t)    (22)

is itself a Poisson process with rate

\hat{\lambda} = \sum_{i=1}^n \lambda_i    (23)

5 Discrete-Time Markov Chains

M[n] is a Markov process at time n. The Markov property means that M[n + 1] depends only on the value of M[n] and is conditionally independent of M[0], \ldots, M[n - 1]. Let

\Pr[M[n] = j \mid M[n-1] = i] = q_{ij}    (24)

and let Q be the matrix of the values q_{ij} for all i, j; Q is the transition probability matrix. Let

v_i[n] = \Pr[M[n] = i]    (25)

and let v[n] be the row vector of the values v_i[n]. Then

v[n+1] = v[n] Q    (26)

which can be expanded inductively to

v[n] = v[0] Q^n    (27)

These are the Chapman-Kolmogorov equations. To find the steady-state probabilities, compute

v = \lim_{n \to \infty} v[n] = \lim_{n \to \infty} v[0] Q^n    (28)

6 Continuous-Time Markov Chains (CTMC)

M(t) is a Markov process at time t. The time spent in state i is distributed exp(\lambda_i), where \lambda_i is the exit rate from state i. The probability of transitioning from state i to state j is, as before, q_{ij}. Let \lambda be the vector of the \lambda_i; then the rates of transition between states are given by \lambda Q.

Let p_j = P[\text{in state } j \text{ at steady state}] and let T_j(t) be the time spent in state j during [0, t]. Then

p_j = \lim_{t \to \infty} \frac{T_j(t)}{t} = \lim_{t \to \infty} \Pr[M(t) = j]    (29)

7 CTMC Queues

M(t) is the number of customers in the system. The rate at which M(t) increases by one is the arrival rate \lambda, and the rate at which it decreases by one is the departure rate \mu. Our goal is to compute

p_j = \lim_{t \to \infty} \Pr[M(t) = j]    (30)

Consider the transitions between states n and n + 1. Over any interval, the number of transitions from n to n + 1 and the number from n + 1 to n can differ by at most one, so in steady state their rates must be equal.
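Before deriving the p_j in closed form, they can be estimated by simulating this birth-death chain directly, using the fact from Section 6 that p_j = lim T_j(t)/t. A minimal sketch in plain Python; the rates lam = 2, mu = 5, the seed, and the horizon are illustrative:

```python
import random

random.seed(2)
lam, mu = 2.0, 5.0        # illustrative arrival and departure rates (lam < mu)
t_end = 50000.0

t, state = 0.0, 0
time_in_state = {}        # accumulates T_j(t), total time spent in state j
while t < t_end:
    # Total exit rate from the current state: only arrivals occur in state 0.
    rate = lam if state == 0 else lam + mu
    dwell = random.expovariate(rate)
    time_in_state[state] = time_in_state.get(state, 0.0) + dwell
    t += dwell
    # Choose the transition: up with probability lam/rate, otherwise down.
    if state == 0 or random.random() < lam / rate:
        state += 1        # arrival
    else:
        state -= 1        # departure

p0_est = time_in_state[0] / t
print(p0_est)
```

For these rates the estimate settles near 1 - \lambda/\mu = 0.6, matching the closed form for p_0 derived below.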
Equating the steady-state rates of transitions from n to n + 1 and from n + 1 to n gives

p_n \lambda = p_{n+1} \mu    (31)

If we let \rho = \lambda / \mu, then

p_{n+1} = \rho \, p_n    (32)

Inductively, we can compute that

p_n = \rho^n p_0    (33)

For \rho < 1 we have

1 = \sum_{n=0}^{\infty} p_n = \sum_{n=0}^{\infty} \rho^n p_0 = \frac{p_0}{1 - \rho}    (34)

Solving, we get p_0 = 1 - \rho and p_n = \rho^n (1 - \rho).

Looking at the steady-state queue length,

N = \lim_{t \to \infty} E[N(t)]
  = \sum_{n=0}^{\infty} n \, p_n
  = \sum_{n=0}^{\infty} n \rho^n (1 - \rho)
  = \rho (1 - \rho) \sum_{n=0}^{\infty} n \rho^{n-1}
  = \rho (1 - \rho) \frac{d}{d\rho} \left( \sum_{n=0}^{\infty} \rho^n \right)
  = \rho (1 - \rho) \frac{d}{d\rho} \frac{1}{1 - \rho}
  = \rho (1 - \rho) \frac{1}{(1 - \rho)^2}
  = \frac{\rho}{1 - \rho} = \frac{\lambda}{\mu - \lambda}    (35)

Then the system wait time is

T = \frac{N}{\lambda} = \frac{\rho}{\lambda (1 - \rho)} = \frac{1}{\mu - \lambda}    (36)

The average queue wait time is

W = \frac{1}{\mu - \lambda} - \frac{1}{\mu} = \frac{\rho}{\mu - \lambda}    (37)

And the number in the queue is

N_Q = \lambda W = \frac{\rho^2}{1 - \rho}
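The closed-form quantities above can be cross-checked against one another numerically. A minimal sketch, where lam = 2 and mu = 5 are illustrative values (any pair with lam < mu works):

```python
# Steady-state M/M/1 quantities from the derivation above
# (lam and mu are illustrative values with lam < mu).
lam, mu = 2.0, 5.0
rho = lam / mu               # utilization

p0 = 1 - rho                 # probability the system is empty
N  = rho / (1 - rho)         # mean number in system, lam/(mu - lam)
T  = 1 / (mu - lam)          # mean time in system
W  = T - 1 / mu              # mean waiting time in queue, rho/(mu - lam)
NQ = lam * W                 # mean number waiting in queue

# Little's theorem ties the pairs together: N = lam*T and NQ = lam*W,
# and NQ should also equal rho**2 / (1 - rho).
print(N, T, W, NQ)
```

For these values, N = \lambda T and N_Q = \rho^2/(1 - \rho) agree to floating-point precision, confirming the algebra in (35)-(37).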