ASU MAT 211 - Markov Processes


Markov Processes

Regular Markov Chains

There are some Markov chains with very interesting properties. The first we will look at are called regular. Regularity comes from the properties of matrices. Recall that when you multiply matrices, two non-zero matrices can create a zero product, or perhaps reproduce one of the original matrices even though we never used the identity, and other freaky occurrences.

Regular Markov chains have transition matrices P such that, for some n, P^n has only positive entries: no zeros allowed anywhere. Thinking of what this means, it is always possible to move from one state to another, even if the first step wouldn't allow it.

Just to be clear about how we know where we are during a stage of transition, recall that to find the probability of passing from state i to state j we need only look at the ij-th entry. In the matrix below, the probability of moving from s2 to s1 is zero, since the (2,1) entry p21 is zero. If you started in state two, you would be stuck there forever. We'll come back to that.

          s1     s2
  s1 [ 0.95   0.05 ]
  s2 [ 0      1    ]

Look again at this transition matrix. Clearly it could not be regular: taking the second and then successive powers of the matrix, that zero value is stuck forever in the (2,1) entry p21.

Now look at the 3-by-3 matrix below. Enter it into your calculator and take it to successively higher powers. You'll see that the p31 entry is non-zero after just the second step; it is regular at n = 2.

          A      B      C
  A [ 0.25   0.35   0.40 ]
  B [ 0.10   0.65   0.25 ]
  C [ 0      0.50   0.50 ]

Even so, should you begin in the initial state of C, you have no chance of moving to A immediately. If this matrix described product satisfaction, this could mean that if you bought C first, you would never settle for less.

However, if we seek the fixed probability vector t, we find that the matrix stabilizes at t = (20/261, 50/87, 91/261), approximately (0.077, 0.575, 0.349). This is the long-term behavior of this regular Markov chain. The long-term behavior is exactly that: it does not describe what will happen at the first step from a given state. As
suggested, a regular Markov chain could demonstrate a level of satisfaction over some number of steps or years. The impact of regularity is that, given a list of products, say A, B, C, ALL of the products will eventually obtain some market share when the model is regular.

Example: Suppose the transition matrix above related to politicians A, B, C. Politician A is a scoundrel; he has been caught with his hand in the cash box too many times. Let's analyze what the Markov model tells us about him. From t we would know that eventually 20 out of every 261 voters will vote for that scoundrel A every time. This is true even if we started with politician C in office; we establish this by setting the initial probability distribution at (0, 0, 1). We could have started with A in office by setting the initial probability distribution at (1, 0, 0). However, the gratifying part is that scoundrel A will probably never win the office two times. Surprisingly, while C is the preferred candidate when A is initially in office, the eventual long-term candidate of choice is B. We would characterize his eventual success as a landslide.

Arizona State University, Department of Mathematics and Statistics

Example: Suppose we had the situation with the floor plan shown. We've let our pet snake get out of its cage again. We discovered from previous mistakes of this sort that the snake moves quickly around the house: it changes rooms every ten minutes. The snake has no particular pattern to the room changes; it just picks a door at random and slithers through. It never stays in the room. As nearly as anyone can recall, the snake was in its cage an hour ago, and the cage was found in room 3.

1. Create the transition matrix. The transition matrix assumes the snake is equally likely to exit through any door. So if the snake starts in room 1, it has two ways into room 2, one way into room 3, and no way to go directly to room 4. So it looks like the matrix shown below.

2. Decide where the snake is most likely to be right now, one hour after
escape.

With the rooms ordered 1, 2, 3, 4, the transition matrix is

          1      2      3      4
  1 [ 0      2/3    1/3    0   ]
  2 [ 2/3    0      0      1/3 ]
  3 [ 1/2    0      0      1/2 ]
  4 [ 0      1/2    1/2    0   ]

The next step is creating the initial probability distribution. Since the snake is known to have been in room 3, the vector is v0 = (0, 0, 1, 0). Finally, one hour after escape is 6 ten-minute stages, so the final result is obtained using v6 = v0 P^6. Using my calculator, I found this result:

  v6 = (0, 0.5999871399, 0.4000128601, 0)

From it I decide that I should search room 2 first, then room 3. I can discard the notion that the snake is in rooms 1 or 4.

3. As a casual sidelight, is this a regular transition matrix? Definitely not. Applying successively higher powers, the even powers stabilize by about step 16 at the matrix below (the odd powers settle into the same values with the zero pattern swapped). Lots of zero entries! Every larger even power shows this same result. There is also some mathematical nonsense happening: the result suggests that the snake ends up moving from room 1 to room 4, yet there is no connection between those rooms. Hmm.

          1      2      3      4
  1 [ 0.6    0      0      0.4 ]
  2 [ 0      0.6    0.4    0   ]
  3 [ 0      0.6    0.4    0   ]
  4 [ 0.6    0      0      0.4 ]

4. Just as I was about to begin the search, one of my friends said they were 70% sure that the snake was in room 2 half an hour ago. With that information, where should I begin my search? We need to create a new initial probability distribution. Considering the snake's penchant for equally likely doors, I would use v0 = (0.20, 0.70, 0, 0.10) and find v3 = v0 P^3, since half an hour is three stages. Using my calculator again, I found this rounded result:

  v3 ≈ (0.4213, 0.1801, 0.1199, 0.2787)

From it I decide that I should search room 1 first, then room 4. However, I cannot discard the possibility that the snake is in rooms 2 or 3.

5. Based on the location of the cage, where should I look for the snake after many, many, many hours? I found that for very large (even) n values, t = v0 P^n = (0, 0.6, 0.4, 0). So look in room 2 first, then room 3.
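For readers who want to check the regularity claim and the fixed probability vector t numerically, here is a small Python sketch. This is my own code, not part of the original notes; the helper names mat_mul and mat_pow are illustrative.

```python
# Check regularity of the A/B/C matrix and estimate its fixed vector t.
# Pure-Python matrix helpers (names are illustrative, not from the notes).

def mat_mul(A, B):
    """Multiply two square matrices given as lists of row lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, n):
    """Compute P**n by repeated multiplication."""
    R = P
    for _ in range(n - 1):
        R = mat_mul(R, P)
    return R

P = [[0.25, 0.35, 0.40],   # A
     [0.10, 0.65, 0.25],   # B
     [0.00, 0.50, 0.50]]   # C

# P itself has a zero (the C-to-A entry), but P^2 is strictly positive,
# so the chain is regular at n = 2.
P2 = mat_pow(P, 2)
print(all(x > 0 for row in P2 for x in row))   # -> True

# For a regular chain, every row of P^n approaches the fixed vector t.
# The notes give t = (20/261, 50/87, 91/261).
t = mat_pow(P, 60)[0]
print([round(x, 4) for x in t])                # -> [0.0766, 0.5747, 0.3487]
```

Note that 50/87 = 150/261, so the three components do sum to 261/261 = 1, and B's 57.5% share is the "landslide" described above.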
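The snake calculation in steps 1 and 2 can be reproduced with exact fractions. Below is a sketch assuming a door layout consistent with the notes' numbers (two doors between rooms 1 and 2, one each between rooms 1-3, 2-4, and 3-4); the step helper is my own.

```python
from fractions import Fraction as F

# Transition matrix for the snake's random walk: each move, the snake
# picks one of the doors out of its current room with equal probability.
P = [[F(0), F(2, 3), F(1, 3), F(0)],    # room 1
     [F(2, 3), F(0), F(0), F(1, 3)],    # room 2
     [F(1, 2), F(0), F(0), F(1, 2)],    # room 3
     [F(0), F(1, 2), F(1, 2), F(0)]]    # room 4

def step(v, P):
    """One stage: multiply the row vector v by the transition matrix P."""
    n = len(v)
    return [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]

v = [F(0), F(0), F(1), F(0)]   # snake started in room 3 (the cage)
for _ in range(6):             # one hour = six ten-minute stages
    v = step(v, P)

print([float(x) for x in v])
# -> [0.0, 0.59998713..., 0.40001286..., 0.0]: search room 2, then room 3
```

This reproduces the calculator result quoted in step 2 exactly: v6 = (0, 9331/15552, 6221/15552, 0).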
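Step 4 can be checked the same way. Recomputing v0 P^3 with the same reconstructed matrix gives slightly different rounded digits than the scanned preview prints (whose four values do not sum to 1), but the same conclusion: room 1 first, then room 4. A sketch, again my own code:

```python
from fractions import Fraction as F

# Same reconstructed transition matrix as in the snake example.
P = [[F(0), F(2, 3), F(1, 3), F(0)],
     [F(2, 3), F(0), F(0), F(1, 3)],
     [F(1, 2), F(0), F(0), F(1, 2)],
     [F(0), F(1, 2), F(1, 2), F(0)]]

def step(v, P):
    """One stage: multiply the row vector v by the transition matrix P."""
    n = len(v)
    return [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]

# Friend is 70% sure the snake was in room 2 half an hour ago; the rest
# of the probability is spread over rooms 1 and 4 as in the notes.
v = [F(20, 100), F(70, 100), F(0), F(10, 100)]
for _ in range(3):             # half an hour = three stages
    v = step(v, P)

print([round(float(x), 4) for x in v])
# -> [0.4213, 0.1801, 0.1199, 0.2787]: search room 1 first, then room 4
```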
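The "mathematical nonsense" in step 3 is periodicity: the walk alternates between the room classes {1, 4} and {2, 3}, so no power of P is strictly positive. The even powers settle into one pattern of zeros and the odd powers into the swapped pattern, which is why the chain cannot be regular. A quick check (my own sketch):

```python
from fractions import Fraction as F

# Same reconstructed snake transition matrix.
P = [[F(0), F(2, 3), F(1, 3), F(0)],
     [F(2, 3), F(0), F(0), F(1, 3)],
     [F(1, 2), F(0), F(0), F(1, 2)],
     [F(0), F(1, 2), F(1, 2), F(0)]]

def mat_mul(A, B):
    """Multiply two square matrices given as lists of row lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Pn = P
for _ in range(15):            # raise P to the 16th power
    Pn = mat_mul(Pn, P)

# P^16 (an even power): rows starting in {1, 4} put all probability on
# {1, 4}, rows starting in {2, 3} on {2, 3} -- the zeros never fill in.
print([[round(float(x), 3) for x in row] for row in Pn])
```

The exact zeros survive in every power, confirming the claim in step 3 that the matrix is definitely not regular.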

