MATH 56A SPRING 2008 STOCHASTIC PROCESSES
1.3. Invariant probability distribution.

Definition 1.4. A probability distribution is a function $\pi : S \to [0,1]$ from the set of states $S$ to the closed unit interval $[0,1]$ so that
$$\sum_{i \in S} \pi(i) = 1.$$
When the set of states is $S = \{1, 2, \dots, s\}$, the condition is
$$\sum_{i=1}^{s} \pi(i) = 1.$$

Definition 1.5. A probability distribution $\pi$ is called invariant if $\pi P = \pi$, i.e., $\pi$ is a left eigenvector for $P$ with eigenvalue 1.

1.3.1. Probability distribution of $X_n$. Each $X_n$ has a probability distribution. I used the following example to illustrate this.

[Figure: a two-state chain in which the transition probability from state 1 to state 2 is 1/3 and from state 2 to state 1 is 1/4.]

The numbers 1/3 and 1/4 are transition probabilities. They say nothing about $X_0$. But we need to start in a random state $X_0$. This is because we need to understand how the transition from one random state to another works, so that we can go from $X_n$ to $X_{n+1}$. $X_0$ will be equal to either 1 or 2 with probabilities
$$\alpha(1) = P(X_0 = 1), \qquad \alpha(2) = P(X_0 = 2).$$
These two numbers are between 0 and 1 inclusive and add up to 1: $\alpha(1) + \alpha(2) = 1$. So $\alpha = (\alpha(1), \alpha(2))$ is a probability distribution. It is the probability distribution of $X_0$ and is called the initial probability distribution. Once the distribution of $X_0$ is given, the probability distribution of every $X_n$ is determined by the transition matrix.

Theorem 1.6. The probability distribution of $X_n$ is the vector $\alpha P^n$.

So, in the example,
$$P(X_2 = 2) = \sum_{i=1}^{2} \sum_{j=1}^{2} P(X_0 = i)\, P(X_1 = j \mid X_0 = i)\, P(X_2 = 2 \mid X_1 = j) = \sum_{i,j} \alpha(i)\, p(i,j)\, p(j,2) = (\alpha P^2)(2).$$
This is the sum of the probabilities of all possible ways that you can end up at state 2 at time 2. To prove this in general I used the following probability formula.

Lemma 1.7. Suppose that our sample space is a disjoint union $\bigcup_i B_i$ of events $B_i$. Then
$$P(A) = \sum_i P(B_i)\, P(A \mid B_i). \qquad (1.1)$$
I drew a picture to illustrate this basic concept, which you should already know.

Proof of Theorem 1.6. By induction on $n$. If $n = 0$ then $P^n = P^0 = I$ is the identity matrix, so $\alpha P^n = \alpha P^0 = \alpha I = \alpha$. This is the distribution of $X_n = X_0$ by definition, so the theorem holds for $n = 0$.

Suppose the theorem holds for $n$. Then, by Equation (1.1) with $B_i = \{X_n = i\}$,
$$P(X_{n+1} = 1) = P(X_n = 1)\, P(X_{n+1} = 1 \mid X_n = 1) + P(X_n = 2)\, P(X_{n+1} = 1 \mid X_n = 2)$$
$$= P(B_1)\, p(1,1) + P(B_2)\, p(2,1) = (\alpha P^n)(1)\, p(1,1) + (\alpha P^n)(2)\, p(2,1) = (\alpha P^n P)(1) = (\alpha P^{n+1})(1),$$
and similarly $P(X_{n+1} = 2) = (\alpha P^{n+1})(2)$. So the theorem holds for $n + 1$, and therefore for all $n \ge 0$. $\square$

Corollary 1.8. If the initial distribution $\alpha = \pi$ is invariant, then $X_n$ has probability distribution $\pi$ for all $n$.

Proof. The distribution of $X_n$ is $\alpha P^n = \pi P^n = \pi$, since every time you multiply by $P$ you still have $\pi$ ($\pi P = \pi$). $\square$

1.3.2. Perron-Frobenius Theorem. I stated this very important theorem without proof. However, the proof is outlined in Exercise 1.20 in the book.

Theorem 1.9 (Perron-Frobenius). Suppose that $A$ is a square matrix all of whose entries are positive real numbers. Then $A$ has a left eigenvector $\pi$ all of whose coordinates are positive real numbers, i.e., $\pi A = \lambda_1 \pi$. Furthermore,
(a) $\pi$ is unique up to a scalar multiple: if $\pi'$ is another left eigenvector of $A$ with positive real entries, then $\pi' = C\pi$ for some scalar $C$.
(b) $\lambda_1$ is a positive real number.
(c) The eigenvalue $\lambda_1$ is larger in absolute value than any other eigenvalue of $A$: $|\lambda_2|, |\lambda_3|, \dots < \lambda_1$. So $\lambda_1$ is called the maximal eigenvalue of $A$.
(d) The rows of $\frac{1}{\lambda_1^n} A^n$ converge to $\pi$ as $n \to \infty$, assuming that $\pi$ is normalized to be a probability distribution, i.e., $\sum_i \pi(i) = 1$.

I didn't prove this. However, I tried to explain the last statement. When we raise $P$ to the power $n$, it tends to look like multiplication by $\lambda_1^n$, so we should divide by $\lambda_1^n$. If we know that the rows of the matrix $\frac{1}{\lambda_1^n} P^n$ all become equal to the same vector $\pi_0$ as $n \to \infty$, what is that vector? Multiplying on the right by $P$ gives $\pi_0 P = \lambda_1 \pi_0$, so $\pi_0$ is a left eigenvector of $P$ with the maximal eigenvalue; normalized so that $\sum_i \pi_0(i) = 1$, it is the invariant distribution $\pi$.
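As a numerical check of Theorem 1.6 and of statement (d), here is a minimal sketch (my own illustration, not from the notes) using the two-state chain above with $p(1,2) = 1/3$ and $p(2,1) = 1/4$; the initial distribution $\alpha = (1/2, 1/2)$ is an arbitrary choice made for the example. Since the maximal eigenvalue of a transition matrix is 1 (stated as Theorem 1.10 below), no division by $\lambda_1^n$ is needed.

```python
import numpy as np

# Transition matrix of the two-state example: p(1,2) = 1/3, p(2,1) = 1/4.
P = np.array([[2/3, 1/3],
              [1/4, 3/4]])

# An arbitrary initial distribution alpha for X_0 (any nonnegative row summing to 1).
alpha = np.array([1/2, 1/2])

# Theorem 1.6: the distribution of X_n is the row vector alpha P^n.
dist_X2 = alpha @ np.linalg.matrix_power(P, 2)
print("distribution of X_2:", dist_X2)   # second entry is P(X_2 = 2)

# Statement (d): the rows of P^n all converge to the invariant distribution.
P50 = np.linalg.matrix_power(P, 50)
print("rows of P^50:")
print(P50)                               # both rows converge to the same vector
```

Changing $\alpha$ changes the distribution of $X_2$, but not the common limiting row of $P^n$.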
This theorem applies to Markov chains, but with some conditions. First, I stated without proof the following fact.

Theorem 1.10. The maximal eigenvalue of $P$ is $\lambda_1 = 1$. More precisely, all eigenvalues $\lambda$ of $P$ satisfy $|\lambda| \le 1$.

Proof. Suppose that $P$ has an eigenvalue $\lambda$ with absolute value greater than 1. Then there is an eigenvector $x$ so that $xP = \lambda x$, and $xP^n = \lambda^n x$ diverges as $n$ goes to infinity. But this is not possible, since the entries of the matrix $P^n$ are all between 0 and 1: $P^n$ is the probability transition matrix from $X_0$ to $X_n$ by Theorem 1.6. $\square$

I'll explain this proof later. What I did explain in class is that $\lambda = 1$ is always an eigenvalue of $P$. This follows from the fact that the rows of $P$ add up to 1:
$$\sum_j p(i,j) = 1.$$
This implies that the column vector with all entries equal to 1 is a right eigenvector of $P$ with eigenvalue 1:
$$P \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix}.$$
For example, if
$$P = \begin{pmatrix} 2/3 & 1/3 \\ 1/4 & 3/4 \end{pmatrix},$$
then
$$P \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 2/3 + 1/3 \\ 1/4 + 3/4 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$

The invariant distribution $\pi$ is a left eigenvector with eigenvalue 1. Here the unique invariant distribution is
$$\pi = \left( \tfrac{3}{7}, \tfrac{4}{7} \right).$$
This means
$$\left( \tfrac{3}{7}, \tfrac{4}{7} \right) \begin{pmatrix} 2/3 & 1/3 \\ 1/4 & 3/4 \end{pmatrix} = \left( \tfrac{3}{7}, \tfrac{4}{7} \right).$$
You can find $\pi$ using linear algebra or a computer. You can also use intuition. In the Markov chain

[Figure: the two-state chain above, with transition probability 1/3 from state 1 to state 2 and 1/4 from state 2 to state 1.]

we can use the Law of Large Numbers, which says that if a large number of people move randomly, then the proportion who move will be approximately equal to the probability. So, if there are a large number of people in states 1 and 2, then one third of those at 1 will move to 2 and one fourth of those at 2 will move to 1. If you want the distribution to be stable, the numbers should be in a 3 : 4 ratio. If there are 3 guys at point 1 and 1/3 of them move, then one guy moves to 2. If there are 4 guys at 2 and 1/4 of them move, then one guy moves from 2 to 1, and the distribution is the same. To make it a probability distribution, the vector (3, 4) needs …
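To make the "linear algebra or a computer" remark concrete, here is a minimal sketch (my own, not part of the notes) that finds the invariant distribution as a left eigenvector of $P$ with eigenvalue 1, i.e. a right eigenvector of the transpose $P^T$, normalized so its entries sum to 1.

```python
import numpy as np

P = np.array([[2/3, 1/3],
              [1/4, 3/4]])

# A left eigenvector of P is a right eigenvector of the transpose P^T.
eigvals, eigvecs = np.linalg.eig(P.T)

# Take the eigenvector whose eigenvalue is (closest to) the maximal eigenvalue 1.
k = np.argmin(np.abs(eigvals - 1))
pi = np.real(eigvecs[:, k])

# Normalize so the entries add up to 1, making pi a probability distribution.
pi = pi / pi.sum()

print(pi)        # approximately [3/7, 4/7], the 3 : 4 ratio from the intuition above
print(pi @ P)    # pi P = pi: the distribution is invariant
```

This reproduces the unique invariant distribution $\pi = (3/7, 4/7)$ stated above.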

