Brandeis MATH 56A - FINITE MARKOV CHAINS

1.4.1. larger transient classes. Last time I explained (Theorem 1.12) that, if
\[
P(\text{success in one trial}) = p > 0
\]
then
\[
P(\text{success with infinitely many trials}) = 1.
\]
But you can say more:

Corollary 1.13. Furthermore, you will almost surely succeed an infinite number of times.

Proof. Suppose that you succeed only finitely many times, say 5 times:
\[
n_1, n_2, n_3, n_4, n_5.
\]
If $n_5$ is the last time that you succeed, it means that, after that point in time, you try over and over infinitely many times and fail each time. This has probability zero by the theorem. So,
\[
P(\text{only finitely many successes}) = 0.
\]
But the number of successes is either finite or infinite. So,
\[
P(\text{infinitely many successes}) = 1. \qquad \square
\]

Apply this to Markov chains:
\[
X_0, X_1, X_2, \cdots
\]
These locations are random states in the finite set $S$ of all states. This means that there is at least one state that is visited infinitely many times. Let
\[
I := \{\, i \in S \mid X_n = i \text{ for infinitely many } n \,\}.
\]
This is the set of those states that the random path goes to infinitely many times.

Theorem 1.14. (A.s.) $I$ is one recurrent class.

At this point we had a discussion about the meaning of this. The set $I$ is a random set. Since a general finite Markov chain can have several recurrent classes, which one you end up in is a matter of chance. The probability distribution of $X_n$ for large $n$ will be a linear combination or "superposition" of several possible futures. So, several recurrent classes have positive probability at the beginning. However, when you actually go into the future, you pick one path and you get stuck in one recurrent class from which you cannot escape. This theorem says that you will wander around and visit every site in that recurrent class infinitely many times.

Proof. In order to prove this theorem I first proved:

Lemma (a). If $i \in I$ and $i \to j$ then $j \in I$.

This means: if it is possible to go from $i$ to $j$ then $j \in I$.

Proof of Lemma (a): It is given that $i \in I$, i.e., we go to $i$ infinitely many times. Each time we go to $i$ we have a probability $p > 0$ of going to $j$. Theorem 1.12 says that, with probability one, we will eventually go to $j$. But then we have to eventually go back to $i$, because we are going to $i$ infinitely many times. So, by Corollary 1.13, with probability one, you cross that bridge infinitely many times. So, $j \in I$. (The picture is a little deceptive. The path from $i$ to $j$ can have more than one step.)

This proof also shows: (b) $j \to i$, since you need to return to $i$ infinitely many times. Therefore, $I$ is one communication class. We just need to show that this class is recurrent. But (a) implies that $I$ is recurrent. Otherwise, there would be a $j$ not in $I$ so that $i \to j$ for some $i \in I$, and this would contradict (a). $\square$

Corollary 1.15. The probability is zero that you remain in a transient class indefinitely.
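Theorem 1.14 and Corollary 1.15 are easy to see numerically. Here is a minimal simulation sketch in Python; the five-state chain is invented for illustration (it is not from the lecture), with recurrent classes $R_1 = \{0, 1\}$ and $R_2 = \{2\}$ and transient states 3 and 4.

```python
import numpy as np

rng = np.random.default_rng()

# Toy chain, invented for illustration, already in canonical form:
#   states 0, 1 : recurrent class R1
#   state  2    : recurrent class R2 (absorbing)
#   states 3, 4 : transient class T
P = np.array([
    [0.5, 0.5, 0.0, 0.0, 0.0],
    [0.3, 0.7, 0.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0, 0.0],
    [0.1, 0.1, 0.2, 0.3, 0.3],
    [0.0, 0.2, 0.2, 0.3, 0.3],
])

def run(P, start, steps):
    """Simulate the chain X_0, X_1, ..., X_steps starting at `start`."""
    x, path = start, [start]
    for _ in range(steps):
        x = rng.choice(len(P), p=P[x])
        path.append(x)
    return path

# The states appearing in the tail of a long run approximate the random
# set I = {i : X_n = i infinitely often}.
path = run(P, start=3, steps=10_000)
print(sorted(set(path[-1000:])))
```

Each run should print either [0, 1] or [2], i.e., exactly one recurrent class; which one is a matter of chance, but the transient states 3 and 4 should not appear in the tail.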
1.5. Canonical form of $P$. Next, I talked about the canonical form of $P$, which is given on page 20 of our book.

1.5.1. definition. Suppose that $R_1, R_2, \cdots, R_r$ are the recurrent classes of a Markov chain and $T_1, T_2, \cdots, T_s$ are the transient classes. I drew a picture similar to the following to illustrate this. [picture omitted]

Then the canonical form of the transition matrix $P$ is given by the following "block" form of the matrix (in the book, all transient classes are combined, so I will do the same here):
\[
P \;=\;
\begin{array}{c|ccc}
    & R_1 & R_2 & T \\
\hline
R_1 & P_1 & 0   & 0 \\
R_2 & 0   & P_2 & 0 \\
T   & S_1 & S_2 & Q
\end{array}
\]
If you start in the recurrent class $R_1$ then you can't go anywhere else. So, there is only $P_1$ in the first row. In the example, it is a $2 \times 2$ matrix. Similarly, the second row has only $P_2$ since, if you start in $R_2$, you can't get out.

The matrices $P_1$ and $P_2$ are stochastic matrices. Their rows add up to one since, in the entire matrix $P$, there are no other nonzero entries in those rows. This also reflects the fact that the recurrent classes $R_1$ and $R_2$ are, in themselves, (irreducible) Markov chains.

The transient class $T$ is not a Markov chain. Why not? There are several reasons. If you look at the picture, you see that you can leave the transient class out of the bottom. So, it is not a "closed system." Another reason is that the matrix $Q$ is not stochastic. Its rows do not add up to one. So, $Q$ does not define a Markov chain.

The bottom row in the canonical form describes what happens if you start in any transient class. You either go to another transient state or you go to a recurrent state. The matrix $Q$ is the transient-to-transient matrix. The matrix
\[
S = (S_1, S_2)
\]
is the transient-to-recurrent matrix. It has one block $S_i$ for every recurrent class $R_i$.

Since each recurrent class $R_i$ is an irreducible Markov chain, it has a unique invariant distribution $\pi_i$.

Theorem 1.16. If $\pi_i$ is the invariant distribution for $P_i$ then the invariant distributions for $P$ are the positive linear combinations of the $\pi_i$ (with coefficients adding to 1). In other words,
\[
\pi = \sum t_i \pi_i \quad \text{where } t_i \ge 0 \text{ and } \sum t_i = 1.
\]
In the case of two recurrent classes, this is
\[
\pi = t\pi_1 + (1 - t)\pi_2 \quad \text{where } 0 \le t \le 1.
\]

Proof. Suppose that $\pi_1, \pi_2$ are invariant distributions for $P_1, P_2$. Then they are row vectors of the same size as $P_1, P_2$, respectively, and
\[
\pi_1 P_1 = \pi_1, \qquad \pi_2 P_2 = \pi_2.
\]
When $t = 1/3$ we get the invariant distribution
\[
\pi = \left( \tfrac{1}{3}\pi_1,\ \tfrac{2}{3}\pi_2,\ 0 \right).
\]
You need to multiply by $1/3$ and $2/3$ (or some other numbers $\ge 0$ which add up to 1) so that the entries of $\pi$ add up to 1. Block matrix multiplication shows that this is an invariant distribution:
\[
\pi P = \left( \tfrac{1}{3}\pi_1,\ \tfrac{2}{3}\pi_2,\ 0 \right)
\begin{pmatrix} P_1 & 0 & 0 \\ 0 & P_2 & 0 \\ S_1 & S_2 & Q \end{pmatrix}
= \left( \tfrac{1}{3}\pi_1 P_1,\ \tfrac{2}{3}\pi_2 P_2,\ 0 \right)
= \left( \tfrac{1}{3}\pi_1,\ \tfrac{2}{3}\pi_2,\ 0 \right) = \pi.
\]
This shows that the positive linear combinations of the invariant distributions $\pi_i$ are invariant distributions for $P$.

The converse, which I did not prove in class, is easy. Suppose that $\pi = (\alpha, \beta, \gamma)$ is an invariant distribution. Then we must have $\gamma = 0$: otherwise
\[
(\alpha, \beta, \gamma) P^n = (\alpha, \beta, \gamma)
\]
for all $n$, indicating that we have a positive probability of remaining in a transient state indefinitely, contradicting Corollary 1.15. So, $\pi = (\alpha, \beta, 0)$ and
\[
(\alpha, \beta, 0)
\begin{pmatrix} P_1 & 0 & 0 \\ 0 & P_2 & 0 \\ S_1 & S_2 & Q \end{pmatrix}
= (\alpha P_1,\ \beta P_2,\ 0) = (\alpha, \beta, 0),
\]
which means that $\alpha P_1 = \alpha$ and $\beta P_2 = \beta$. So, $\alpha$ and $\beta$ are scalar multiples of invariant distributions for $P_1, P_2$. $\square$

The next two pages are what I handed out in class, although the page numbers have shifted.

1.5.2. example. The problem is to find all invariant distributions of the following transition matrix:
\[
P = \begin{pmatrix}
1/2 & 0   & 0   & 1/2 \\
1/4 & 1/4 & 1/4 & 1/4 \\
0   & 0   & 1   & 0   \\
1/4 & 0   & 0   & 3/4
\end{pmatrix}
\]
An invariant distribution $\pi$ is a row vector with nonnegative entries adding up to 1 such that $\pi P = \pi$.
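As a sketch of how the example works out numerically, here is a check in Python; the identification of the classes and the helper function `invariant` are my own illustration, and only the matrix comes from the handout.

```python
import numpy as np

# The transition matrix from the example; states 1..4 in the notes
# become indices 0..3 here.
P = np.array([
    [1/2, 0,   0,   1/2],
    [1/4, 1/4, 1/4, 1/4],
    [0,   0,   1,   0  ],
    [1/4, 0,   0,   3/4],
])

# From the zero pattern of P: {1, 4} is a recurrent class (those two
# states only move between each other), {3} is recurrent (absorbing),
# and {2} is transient.
P1 = P[np.ix_([0, 3], [0, 3])]   # block for R1 = {1, 4}

def invariant(Q):
    """Invariant distribution of an irreducible stochastic matrix Q:
    solve pi (Q - I) = 0 together with sum(pi) = 1, by least squares."""
    n = len(Q)
    A = np.vstack([(Q - np.eye(n)).T, np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    return np.linalg.lstsq(A, b, rcond=None)[0]

pi1 = invariant(P1)
print("pi_1 on {1, 4}:", pi1)    # (1/3, 2/3)

# By Theorem 1.16, every invariant distribution of P is a convex
# combination  t*(1/3, 0, 0, 2/3) + (1 - t)*(0, 0, 1, 0),  0 <= t <= 1.
for t in (0.0, 1/3, 1.0):
    pi = t * np.array([pi1[0], 0, 0, pi1[1]]) + (1 - t) * np.array([0.0, 0, 1, 0])
    assert np.allclose(pi @ P, pi)
```

The computation identifies the invariant distributions as the convex combinations $t\,(1/3,\, 0,\, 0,\, 2/3) + (1 - t)\,(0,\, 0,\, 1,\, 0)$ for $0 \le t \le 1$, in line with Theorem 1.16.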


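To tie the example back to the canonical form of section 1.5.1, one can also permute the states into canonical order and watch Corollary 1.15 happen numerically; again this is my own sketch, not part of the handout.

```python
import numpy as np

P = np.array([
    [1/2, 0,   0,   1/2],
    [1/4, 1/4, 1/4, 1/4],
    [0,   0,   1,   0  ],
    [1/4, 0,   0,   3/4],
])

# Reordering the states as (1, 4, 3, 2) puts P into canonical form:
# recurrent classes first, the transient class last.
order = [0, 3, 2, 1]
C = P[np.ix_(order, order)]
print(C)
# blocks: P1 = [[1/2, 1/2], [1/4, 3/4]] for R1 = {1, 4},
#         P2 = (1) for R2 = {3},
#         S1 = (1/4, 1/4), S2 = (1/4), Q = (1/4)

# Corollary 1.15 numerically: since C is block lower triangular, the
# transient block of C**n is Q**n -> 0, so the chance of still being
# in the transient class at time n vanishes.
Cn = np.linalg.matrix_power(C, 50)
print(Cn[3, 3])   # (1/4)**50, numerically zero
```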
