Brandeis MATH 56A: STOCHASTIC PROCESSES, CHAPTER 1

MATH 56A: STOCHASTIC PROCESSES
CHAPTER 1

Date: December 15, 2006.

1. Finite Markov chains

For the sake of completeness of these notes I decided to write a summary of the basic concepts of finite Markov chains. The topics in this chapter are:

(1) definition of a Markov chain
(2) communication classes
(3) classification of states
(4) periodic/aperiodic
(5) invariant probability distribution
(6) return time
(7) substochastic matrix

1.1. Definition. A stochastic process is a random process which evolves with time. In other words, we have random variables $X_t$, $Y_t$, etc. which depend on time $t$. For example, $X_t$ could be your location at time $t$ (where $t$ is in the future).

A finite Markov chain is a special kind of stochastic process with the following properties:

• There are only a finite number of states. If we are talking about your location, this means we are only interested in which building you are in and not your exact position in the building.
• Time is discrete: for example, things change only at 1pm, 2pm, etc., and never at 1:12pm or any time in between. $X_n$ is the state at time $n$, where $n = 0, 1, 2$, etc.
• No memory: your (random) movement at time $n$ depends only on $X_n$ and is independent of $X_t$ for $t < n$. (You forget the past, and your decision making process is based only on the present state.)
• Time homogeneous: your rules of movement are the same at all times.

To summarize: you have a finite number of buildings that you can move around in. You can only move on the hour. Your decision making process is random and depends only on your present location, not on past locations, and does not take into account what time it is. (You move at midnight in the same way that you do at noon.)

Definition 1.1.
A finite Markov chain is a sequence of random variables $X_0, X_1, \dots$ which take values in a finite set $S$, called the state space, such that for any $n$ and any values of $x_0, x_1, \dots, x_n$ we have
$$P(X_{n+1} = x \mid X_0 = x_0, X_1 = x_1, \dots, X_n = x_n) = P(X_1 = x \mid X_0 = x_n).$$
The $S \times S$ matrix $P$ with entries
$$p(x, y) := P(X_1 = y \mid X_0 = x)$$
is called the transition matrix.

For example, suppose that you have 4 points: 0, 1, 2, 3, and at each step you either increase by one or decrease by one with equal probability, except at the end points. Suppose that when you reach 0 you cannot leave. (0 is called an absorbing state.) Suppose that when you reach 3 you always go to 2. (3 is called a reflecting wall.) Then the transition matrix is
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1/2 & 0 & 1/2 & 0 \\ 0 & 1/2 & 0 & 1/2 \\ 0 & 0 & 1 & 0 \end{pmatrix}.$$
Notice that the entries are all nonnegative and the entries in each row add up to 1. This characterizes all transition matrices.

In the discussion of Markov chains there are qualitative, non-numerical concepts and quantitative, computational concepts. The qualitative concepts are: communication classes, their classification, and periodicity.

1.2. Communication classes. Two states communicate with each other if they are equal or if it is possible (with positive probability) to get from either one to the other in a finite amount of time. We write $x \leftrightarrow y$. This is an equivalence relation, and the equivalence classes are called communication classes.

In the example above, $\{0\}$ and $\{1, 2, 3\}$ are the communication classes. A Markov chain is irreducible if it has only one communication class, i.e., if it is possible to get from any state to any other state.

1.3. Classification of states: transient and recurrent. There are two types of communication classes: recurrent and transient. At this point I allowed the state space $S$ to be infinite so that you don't get the wrong idea.

A communication class $C$ is recurrent if, for any state $x \in C$, you will keep returning to $x$ an infinite number of times with probability one.
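The recurrence/transience dichotomy can be watched numerically in the four-state example above. Below is a minimal simulation sketch in Python (assuming NumPy is available; the helper name `run_chain` is my own): starting anywhere in $\{1, 2, 3\}$, the chain is eventually absorbed at the recurrent state 0, so it visits each of 1, 2, 3 only a finite number of times.

```python
import numpy as np

# Transition matrix of the example: 0 is absorbing, 3 is a reflecting wall.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 1.0, 0.0],
])

# Each row is a probability distribution: nonnegative, summing to 1.
assert np.allclose(P.sum(axis=1), 1.0)

def run_chain(P, start, steps, rng):
    """Simulate one trajectory of the Markov chain with transition matrix P."""
    x = start
    path = [x]
    for _ in range(steps):
        x = int(rng.choice(len(P), p=P[x]))  # next state drawn from row x
        path.append(x)
    return path

rng = np.random.default_rng(0)
path = run_chain(P, start=2, steps=200, rng=rng)

# The chain wanders in the transient class {1, 2, 3} for a while,
# then gets stuck at the absorbing state 0.
print(path[-1])
print(sum(1 for x in path if x == 2))  # visits to state 2: finite, and small
```

Each visit to state 1 gives an independent chance 1/2 of jumping to 0, so absorption eventually happens with probability one, and the simulated visit counts for 1, 2, 3 stay bounded.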
A communication class $C$ is transient if, starting at any $x \in C$, you will return to $x$ only a finite number of times with probability one. The theorem is that these are the only two possibilities. I proved this in class:

Lemma 1.2. If $p = p(i, j) > 0$ and $i$ is recurrent, then so is $j$. In fact, if you start in state $i$ you will go to state $j$ an infinite number of times with probability one.

Proof. This is the same as saying that the probability of going to state $j$ only a finite number of times is zero. To prove this, suppose that the Markov chain goes to state $j$ only a finite number of times. Then there is a last time, say $X_m = j$, and you can never return to $j$ afterwards. But $i$ is recurrent. So, with probability one, you will go to $i$ an infinite number of times after time $m$, say at times $n_1 < n_2 < n_3 < \cdots$ (all $> m$). But
$$P(X_{n_1+1} \neq j \mid X_{n_1} = i) = 1 - p,$$
$$P(X_{n_2+1} \neq j \mid X_{n_2} = i) = 1 - p,$$
and so on. The products are $(1-p)^2$, $(1-p)^3$, etc., which converge to 0. So the probability is zero that in all of these infinitely many visits to $i$ you never go to $j$. This is what I was claiming. □

Theorem 1.3. Once you leave a communication class you can never return.

Theorem 1.4. Recurrent communication classes are absorbing.

The lemma shows that if you could leave a recurrent communication class, then with probability one you would. This would contradict the definition of recurrent. So you cannot leave a recurrent class.

The lemma that I proved can be rephrased as follows:

Lemma 1.5. If you make an infinite number of attempts and you have a fixed positive probability $p$ of success, then, with probability one, you will succeed an infinite number of times.

The strong law of large numbers says that, with probability one, the proportion of trials which are successful will converge to $p$ as the number of trials goes to infinity. I.e., the experimental value of the probability $p$ will converge to the theoretical value with probability one.

1.4. Periodic chains.
We are interested in the time it takes to return to a state $i$. The return time $T_i$ to state $i$ is the smallest positive integer $n$ so that $X_n = i$ given that $X_0 = i$. In other words, you start at state $i$ and count how many turns it takes to return to the same state $i$. This number is $T_i$. It is random. For example, in the random walk example given above, $P(T_3 = 2) = 1/2$ and $P(T_2 = 2) = 3/4$.

The period of a state $i$ is the greatest common divisor of all possible return times to state $i$. For the random walk on an infinite straight line (or on a finite line with reflecting walls), the period of every state is 2, because it always takes an even number of steps (the same number right as left) to get back to the same point.

A state $i$ is aperiodic if the period is 1.

Theorem 1.6. States in the same communication class have the same period.
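The period can be computed mechanically: the period of state $i$ is the gcd of all $n \geq 1$ with $p^n(i, i) > 0$, where $p^n(i, i)$ is the $(i, i)$ entry of the matrix power $P^n$. Here is a short Python sketch (NumPy assumed; `period` is my own helper, and cutting the gcd off at a finite horizon is an approximation, though it is exact for this small example):

```python
from math import gcd

import numpy as np

# Transition matrix of the example: 0 is absorbing, 3 is a reflecting wall.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 1.0, 0.0],
])

def period(P, i, horizon=50):
    """gcd of all return times n <= horizon with P^n(i, i) > 0."""
    g = 0  # gcd(0, n) == n, so g picks up the first feasible return time
    Pn = np.eye(len(P))
    for n in range(1, horizon + 1):
        Pn = Pn @ P  # Pn is now the n-th power of P
        if Pn[i, i] > 0:
            g = gcd(g, n)
    return g

print([period(P, i) for i in range(4)])
```

State 0 can return in one step ($p(0,0) = 1$), so it is aperiodic, while states 1, 2, 3 can only return in an even number of steps, matching the bipartite split of $\{1, 3\}$ versus $\{2\}$, so each has period 2.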

