Chapter 6

COUNTABLE-STATE MARKOV CHAINS

6.1 Introduction and classification of states

Markov chains with a countably-infinite state space (more briefly, countable-state Markov chains) exhibit some types of behavior not possible for chains with a finite state space. With the exception of the first example to follow and the section on branching processes, we label the states by the nonnegative integers. This is appropriate when modeling things such as the number of customers in a queue, and causes no loss of generality in other cases. The following two examples give some insight into the new issues posed by countable state spaces.

Example 6.1.1. Consider the familiar Bernoulli process {S_n = X_1 + ··· + X_n; n ≥ 1} where {X_n; n ≥ 1} is an IID binary sequence with p_X(1) = p and p_X(−1) = 1 − p = q. The sequence {S_n; n ≥ 1} is a sequence of integer random variables (rv's) where S_n = S_{n−1} + 1 with probability p and S_n = S_{n−1} − 1 with probability q. This sequence can be modeled by the Markov chain in Figure 6.1.

[Figure 6.1: A Markov chain with a countable state space modeling a Bernoulli process. The states are the integers ..., −2, −1, 0, 1, 2, ...; each state moves up by 1 with probability p and down by 1 with probability q = 1 − p.]

If p > 1/2, then as time n increases, the state X_n becomes large with high probability, i.e., lim_{n→∞} Pr{X_n ≥ j} = 1 for each integer j. Similarly, for p < 1/2, the state becomes highly negative.

Using the notation of Markov chains, P^n_{0j} is the probability of being in state j at the end of the nth transition, conditional on starting in state 0. The final state j is the number of positive transitions k less the number of negative transitions n − k, i.e., j = 2k − n. Thus, using the binomial formula,

    P^n_{0j} = \binom{n}{k} p^k q^{n-k},   where k = (j + n)/2;  j + n even.        (6.1)

All states in this Markov chain communicate with all other states, and are thus in the same class. The formula makes it clear that this class, i.e., the entire set of states in the Markov chain, is periodic with period 2. For n even, the state is even and for n odd, the state is odd.

What is more important than the periodicity, however, is what happens to the state probabilities for large n. As we saw in the Gaussian approximation to the binomial PMF in (1.89),

    P^n_{0j} \sim \frac{1}{\sqrt{2\pi npq}} \exp\left(\frac{-(k - np)^2}{2pqn}\right),   where k = (j + n)/2;  j + n even.        (6.2)

In other words, P^n_{0j}, as a function of j, looks like a quantized form of the Gaussian density for large n. The significant terms of that distribution are close to k = np, i.e., to j = n(2p − 1). For p > 1/2, the state increases with increasing n. Its distribution is centered at n(2p − 1), but the distribution is also spreading out as √n. For p < 1/2, the state similarly decreases and spreads out. The most interesting case is p = 1/2, where the distribution remains centered at 0, but due to the spreading, the PMF approaches 0 as 1/√n for all j.

For this example, then, the probability of each state approaches zero as n → ∞, and this holds for all choices of p, 0 < p < 1. If we attempt to define a steady-state probability as 0 for each state, then these probabilities do not sum to 1, so they cannot be viewed as a steady-state distribution. Thus, for countable-state Markov chains, the notions of recurrence and steady-state probabilities will have to be modified from those for finite-state Markov chains. The same type of situation occurs whenever {S_n; n ≥ 1} is a sequence of sums of arbitrary IID integer-valued rv's.
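As a numerical sanity check on (6.1) and (6.2), the short Python sketch below is an illustrative aside rather than part of the text; the choices n = 1000 and p = 0.6 and the function names are arbitrary. It evaluates the exact binomial expression and its Gaussian approximation for a few states j near the center j = n(2p − 1).

from math import comb, exp, pi, sqrt

def exact_P0j(n, j, p):
    # Exact transition probability P^n_{0j} from (6.1): probability of
    # k = (j + n)/2 positive steps out of n; zero unless j + n is even and |j| <= n.
    if (j + n) % 2 != 0 or abs(j) > n:
        return 0.0
    k = (j + n) // 2
    return comb(n, k) * p**k * (1 - p)**(n - k)

def gaussian_P0j(n, j, p):
    # Gaussian approximation (6.2), meaningful for large n and j + n even.
    q = 1 - p
    k = (j + n) / 2
    return exp(-(k - n * p)**2 / (2 * p * q * n)) / sqrt(2 * pi * n * p * q)

n, p = 1000, 0.6
center = round(n * (2 * p - 1))             # distribution is centered near j = n(2p - 1)
for j in (center - 2, center, center + 2):  # keep j + n even
    print(j, exact_P0j(n, j, p), gaussian_P0j(n, j, p))

For large n the two values should agree closely near the center, and repeating the comparison at p = 1/2 illustrates the 1/√n decay of the peak probability discussed above.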
Most countable-state Markov chains that are useful in applications are quite different from Example 6.1.1, and instead are quite similar to finite-state Markov chains. The following example bears a close resemblance to Example 6.1.1, but at the same time is a countable-state Markov chain that will keep reappearing in a large number of contexts. It is a special case of a birth-death process, which we study in Section 6.2.

Example 6.1.2. Figure 6.2 is similar to Figure 6.1 except that the negative states have been eliminated. A sequence of IID binary rv's {X_n; n ≥ 1}, with p_X(1) = p and p_X(−1) = q = 1 − p, controls the state transitions. Now, however, S_n = max(0, S_{n−1} + X_n), so that S_n is a nonnegative rv. All states again communicate, and because of the self transition at state 0, the chain is aperiodic.

[Figure 6.2: A Markov chain with a countable state space (states 0, 1, 2, 3, 4, ...). If p > 1/2, then as time n increases, the state X_n becomes large with high probability, i.e., lim_{n→∞} Pr{X_n ≥ j} = 1 for each integer j.]

For p > 1/2, transitions to the right occur with higher frequency than transitions to the left. Thus, reasoning heuristically, we expect the state S_n at time n to drift to the right with increasing n. Given S_0 = 0, the probability P^n_{0j} of being in state j at time n should then tend to zero for any fixed j with increasing n. As in Example 6.1.1, we see that a steady state does not exist. In more poetic terms, the state wanders off into the wild blue yonder.

One way to understand this chain better is to look at what happens if the chain is truncated. The truncation of Figure 6.2 to k states is analyzed in Exercise 4.9. The solution there defines ρ = p/q and shows that if ρ ≠ 1, then π_i = (1 − ρ)ρ^i / (1 − ρ^k) for each i, 0 ≤ i < k. For ρ = 1, π_i = 1/k for each i. For ρ < 1, the limiting behavior as k → ∞ is π_i = (1 − ρ)ρ^i. Thus for ρ < 1 (p < 1/2), the steady-state probabilities for the truncated Markov chain approach a limit which we later interpret as the steady-state probabilities for the untruncated chain. For ρ > 1 (p > 1/2), on the other hand, the steady-state probabilities for the truncated case are geometrically decreasing from the right, and the states with significant probability keep moving to the right as k increases. Although the probability of each fixed state j approaches 0 as k increases, the truncated chain never resembles the untruncated chain.

Perhaps the most interesting case is that where p = 1/2. The nth-order transition probabilities P^n_{0j} can be calculated exactly for this case (see Exercise 6.3) and are very similar to those of Example 6.1.1. In particular,

    P^n_{0j} = \begin{cases} \binom{n}{(j+n)/2}\, 2^{-n} & \text{for } j \ge 0,\ (j+n) \text{ even} \\ \binom{n}{(j+n+1)/2}\, 2^{-n} & \text{for } j \ge 0,\ (j+n) \text{ odd} \end{cases}        (6.3)

    \sim \sqrt{\frac{2}{\pi n}}\, \exp\left(\frac{-j^2}{2n}\right) \quad \text{for } j \ge 0.        (6.4)

We see that P^n_{0j} for large n is approximated by the positive side of a quantized Gaussian distribution. It looks like the positive side of the PMF of (6.1) except that it is no longer periodic.
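Returning to the truncation of Figure 6.2 discussed above, the following Python sketch is again just an illustrative aside; the truncation sizes k and the two values of p are arbitrary choices. It evaluates the Exercise 4.9 probabilities π_i = (1 − ρ)ρ^i/(1 − ρ^k) at the two ends of the truncated chain as k grows.

def truncated_pi(p, k):
    # Steady-state probabilities of the k-state truncation of Figure 6.2
    # (Exercise 4.9): pi_i = (1 - rho) rho^i / (1 - rho^k) for rho = p/q != 1,
    # and pi_i = 1/k when rho = 1.
    q = 1 - p
    rho = p / q
    if abs(rho - 1) < 1e-12:
        return [1.0 / k] * k
    return [(1 - rho) * rho**i / (1 - rho**k) for i in range(k)]

for p in (0.4, 0.6):                      # rho < 1 and rho > 1
    for k in (10, 20, 40):
        pi_vec = truncated_pi(p, k)
        print(f"p={p}, k={k}: pi_0={pi_vec[0]:.4f}, pi_{k-1}={pi_vec[-1]:.4f}")

For ρ < 1 (p = 0.4 here) the left-end probability settles at 1 − ρ and the right-end probability vanishes, consistent with the geometric limit (1 − ρ)ρ^i; for ρ > 1 the mass concentrates at the right edge of the truncation and keeps shifting right as k grows, so no limiting distribution over fixed states emerges.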

