Math 19b: Linear Algebra with Probability                Oliver Knill, Spring 2011

Lecture 33: Markov matrices

An n × n matrix is called a Markov matrix if all entries are nonnegative and the sum of each column vector is equal to 1.

1) The matrix

       A = [ 1/2  1/3 ]
           [ 1/2  2/3 ]

   is a Markov matrix.

Markov matrices are also called stochastic matrices. Many authors write the transpose of the matrix and apply the matrix to the right of a row vector. In linear algebra we write Ap. This is of course equivalent.

Let's call a vector with nonnegative entries p_k for which all the p_k add up to 1 a stochastic vector. For a stochastic matrix, every column is a stochastic vector.

If p is a stochastic vector and A is a stochastic matrix, then Ap is a stochastic vector.

Proof. Let v_1, ..., v_n be the column vectors of A. Then

       Ap = p_1 v_1 + ... + p_n v_n.

Each v_i is stochastic, so summing the entries of Ap gives p_1 + p_2 + ... + p_n = 1.

A Markov matrix A always has an eigenvalue 1. All other eigenvalues are in absolute value smaller than or equal to 1.

Proof. For the transpose matrix A^T, the sum of each row vector is equal to 1. The matrix A^T therefore has the eigenvector [1, 1, ..., 1]^T with eigenvalue 1. Because A and A^T have the same determinant, also A - λI_n and A^T - λI_n have the same determinant, so the eigenvalues of A and A^T are the same. Since A^T has an eigenvalue 1, so does A.

Assume now that v is an eigenvector with an eigenvalue |λ| > 1. Then A^n v = λ^n v has exponentially growing length for n → ∞. This implies that for large n some coefficient [A^n]_ij is larger than 1. But A^n is a stochastic matrix (see homework) and has all entries ≤ 1. So the assumption of an eigenvalue of absolute value larger than 1 cannot be valid.

2) The example

       A = [ 0  0 ]
           [ 1  1 ]

   shows that a Markov matrix can have zero eigenvalues and determinant.

3) The example

       A = [ 0  1 ]
           [ 1  0 ]

   shows that a Markov matrix can have negative eigenvalues and determinant.

4) The example

       A = [ 1  0 ]
           [ 0  1 ]

   shows that a Markov matrix can have several eigenvalues 1.

5) If all entries are positive and A is a 2 × 2 Markov matrix, then there is only one eigenvalue 1 and one eigenvalue smaller than 1:

       A = [ a    b   ]
           [ 1-a  1-b ]

   Proof: we have seen that there is one eigenvalue 1 because A^T has [1, 1]^T as an eigenvector. The trace of A is 1 + a - b, which is smaller than 2. Because the trace is the sum of the eigenvalues, the second eigenvalue is smaller than 1.

6) The example

       A = [ 0  1  0 ]
           [ 0  0  1 ]
           [ 1  0  0 ]

   shows that a Markov matrix can have complex eigenvalues and that Markov matrices can be orthogonal.

The following example shows that stochastic matrices need not be diagonalizable, not even over the complex numbers:

7) The matrix

       A = [ 5/12  1/4  1/3 ]
           [ 5/12  1/4  1/3 ]
           [ 1/6   1/2  1/3 ]

   is a stochastic matrix, even doubly stochastic: its transpose is stochastic too. Its row reduced echelon form is

       rref(A) = [ 1  0  1/2 ]
                 [ 0  1  1/2 ]
                 [ 0  0  0   ]

   so that it has a one dimensional kernel. Its characteristic polynomial is f_A(x) = x^2 - x^3, which shows that the eigenvalues are 1, 0, 0. The algebraic multiplicity of 0 is 2; the geometric multiplicity of 0 is 1. The matrix is not diagonalizable. (This example appeared in http://mathoverflow.net/questions/51887/non-diagonalizable-doubly-stochastic-matrices.)

The eigenvector v to the eigenvalue 1 is called the stable equilibrium distribution of A. It is also called the Perron-Frobenius eigenvector.

Typically, the discrete dynamical system converges to the stable equilibrium. But the rotation matrix in example 6) shows that we do not have to have convergence at all.

8) Assume

       A = [ 0    0.1  0.2  0.3 ]
           [ 0.2  0.3  0.2  0.1 ]
           [ 0.3  0.2  0.5  0.4 ]
           [ 0.5  0.4  0.1  0.2 ].

   Let's visualize this. We start with the vector [1, 0, 0, 0]^T. [Figure: the successive distributions p, Ap, A^2 p, ... over the four states.]

Many games are Markov games. Let's look at a simple example of a mini monopoly, where no property is bought:

9) Let's have a simple "monopoly" game with 6 fields. We start at field 1 and throw a coin. If the coin shows head, we move 2 fields forward. If the coin shows tail, we move back to the field number 2. If you reach the end, you win a dollar. If you overshoot, you pay a fee of a dollar and move to the first field. Question: in the long term, do you win or lose, if p_6 - p_5 measures this win? Here p = (p_1, p_2, p_3, p_4, p_5, p_6) is the stable equilibrium solution with eigenvalue 1 of the game.

10) Take the same example, but now also throw a die and move to each of the 6 fields with probability 1/6. The matrix is now

       A = [ 1/6  1/6  1/6  1/6  1/6  1/6 ]
           [ 1/6  1/6  1/6  1/6  1/6  1/6 ]
           [ 1/6  1/6  1/6  1/6  1/6  1/6 ]
           [ 1/6  1/6  1/6  1/6  1/6  1/6 ]
           [ 1/6  1/6  1/6  1/6  1/6  1/6 ]
           [ 1/6  1/6  1/6  1/6  1/6  1/6 ].

    In the homework, you will see that there is only one stable equilibrium now.

Homework due April 27, 2011

1) Find the stable equilibrium distribution of the matrix

       A = [ 1/2  1/3 ]
           [ 1/2  2/3 ].

2) a) Verify that the product of two Markov matrices is a Markov matrix.
   b) Is the inverse of a Markov matrix always a Markov matrix?
   Hint for a): Let A, B be Markov matrices. You have to verify that BAe_k is a stochastic vector.

3) Find all the eigenvalues and eigenvectors of the doubly stochastic matrix in the modified game above, the 6 × 6 matrix with all entries equal to 1/6.
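The 2 × 2 claim in example 5) can be checked symbolically: for A = [a, b; 1-a, 1-b] the eigenvalues are 1 and a - b (their sum is the trace 1 + a - b), and the equilibrium satisfies p_1/p_2 = b/(1-a). A minimal sketch in exact arithmetic, using hypothetical values a = 7/10 and b = 2/5 that are not from the lecture:

```python
from fractions import Fraction as F

# hypothetical entries (not from the lecture), with 0 < a, b < 1
a, b = F(7, 10), F(2, 5)
A = [[a, b], [1 - a, 1 - b]]  # columns sum to 1

# eigenvalue 1: the stable equilibrium, normalized so its entries sum to 1
p = [b / (b + 1 - a), (1 - a) / (b + 1 - a)]
Ap = [A[0][0] * p[0] + A[0][1] * p[1],
      A[1][0] * p[0] + A[1][1] * p[1]]
assert Ap == p and sum(p) == 1  # Ap = p exactly, p is stochastic

# second eigenvalue a - b, with eigenvector (1, -1)
v = [1, -1]
Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]
assert Av == [(a - b) * x for x in v]
```
Since |a - b| < 1 whenever all entries are positive, the second eigenvalue is indeed smaller than 1 in absolute value.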
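The non-diagonalizability claimed in example 7) can also be verified without computing eigenvectors. By Cayley-Hamilton, f_A(x) = x^2 - x^3 gives A^3 = A^2 exactly; but if A were diagonalizable with eigenvalues 1, 0, 0, we would have A = P diag(1,0,0) P^{-1} and hence A^2 = A, which fails. A sketch in exact rational arithmetic:

```python
from fractions import Fraction as F

# the doubly stochastic matrix of example 7, with exact entries
A = [[F(5, 12), F(1, 4), F(1, 3)],
     [F(5, 12), F(1, 4), F(1, 3)],
     [F(1, 6),  F(1, 2), F(1, 3)]]
n = 3

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# doubly stochastic: every row sum and every column sum equals 1
assert all(sum(row) == 1 for row in A)
assert all(sum(A[i][j] for i in range(n)) == 1 for j in range(n))

A2 = matmul(A, A)
A3 = matmul(A2, A)
assert A3 == A2   # Cayley-Hamilton with f_A(x) = x^2 - x^3
assert A2 != A    # would be equal if A were diagonalizable with eigenvalues 1, 0, 0
```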
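For example 8), the convergence to the stable equilibrium can be observed by simply iterating p → Ap from the starting vector (1, 0, 0, 0). A quick floating point sketch (not part of the original notes):

```python
# the 4x4 Markov matrix of example 8; each column sums to 1
A = [[0.0, 0.1, 0.2, 0.3],
     [0.2, 0.3, 0.2, 0.1],
     [0.3, 0.2, 0.5, 0.4],
     [0.5, 0.4, 0.1, 0.2]]

def apply(A, p):
    """Return the matrix-vector product Ap."""
    n = len(p)
    return [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]

p = [1.0, 0.0, 0.0, 0.0]   # start in state 1
for _ in range(100):
    p = apply(A, p)

# after many steps, p is (numerically) fixed by A and still stochastic
assert abs(sum(p) - 1.0) < 1e-9
assert max(abs(x - y) for x, y in zip(apply(A, p), p)) < 1e-9
print([round(x, 4) for x in p])
```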