ASU MAT 211 - Markov Processes


Absorbing Markov Chains

Recall that you saw matrices where it seemed like you would eventually get stuck in a state. This occurs when the probability of staying in the state is one; clearly, all the other states on that row then have a probability of zero. Under certain conditions, a matrix with this feature is called absorbing. Absorbing Markov chains have transition matrices P such that:

1. P has an absorbing state. A state is absorbing if, once you go into the state, you can't leave. If the transition matrix columns and rows are in order by state, then pii = 1 and all other entries on that row are zero. An absorbing state is a probabilistic black hole: it sucks in everything that gets too close, eventually. Just as with black holes, you cannot get sucked in if you cannot get close (a good thing for us, since there is one humongous black hole at the center of our galaxy). So if you cannot enter the state from another state, then even if pii = 1, it is not absorbing.

2. An absorbing Markov chain has at least one absorbing state, and it is possible to get to an absorbing state from a nonabsorbing state. This does not need to be a direct path from every other nonabsorbing state.

Just to review, recall that to find the probability of passing from state i to state j, we need only look at the ij-th entry, pij. In the 2 by 2 matrix below, the probability of moving from s2 to s1 is zero, since the (2,1) entry p21 is zero. If you arrived in, or started in, state two, you would be stuck there forever. But we also need to be able to get to s2. From the first row, we have a .05 likelihood of moving into s2 from s1. So the matrix has an absorbing state, s2, and we can reach it.

          s1    s2
  s1  [  .95   .05 ]
  s2  [   0     1  ]

Now look at the 3 by 3 matrix below. It does appear to have an absorbing state at C. However, how do you get there from A or B? The answer is: you don't. You might look at this as a mutually exclusive situation. Depending on the initial probability distribution, going to C leaves you there with no recourse to A or B, while going to either A or B puts you into a shuttle back and forth between them. (The original notes include a drawing here: the state map and the tree process used to represent this absorption.)

          A     B     C
  A  [  .75   .25    0  ]
  B  [  .7    .3     0  ]
  C  [   0     0     1  ]

In these two examples we have almost everything we need to identify these matrices. Notice that in the 2 by 2 we do have a row with the necessary one-zero distribution. Moreover, at least one entry in the s2 column is not zero: there is a path to s2. This is an absorbing Markov chain. In contrast, in the 3 by 3, both the row and the column with the probability of one have only zeros in their other entries. It fails the absorption property of having a path to C. This is not an absorbing Markov chain.

Let's look at some more examples. Decide whether each transition matrix below is an absorbing Markov chain. If the example is absorbing, name the absorbing state or states. If not, explain why not. If there is only one absorbing state, describe a path to it from every other state.

          s1    s2    s3    s4
  s1  [  .25   .25   .25   .25 ]
  s2  [  .2     0     0    .8  ]
  s3  [  .5     0    .5     0  ]
  s4  [   0     0     0     1  ]

Absorbing? Yes. Absorbing state: s4.
Path from s1: s1 → s4
Path from s2: s2 → s4
Path from s3: s3 → s1 → s4

          s1    s2    s3    s4
  s1  [  .25   .5    .25    0  ]
  s2  [  .2    .8     0     0  ]
  s3  [  .7    .1    .2     0  ]
  s4  [   0     0     0     1  ]

Absorbing? No. Why not? There is no transition possible into s4 from any other state. This is the stock answer: once we have the matrix arranged as this one is, it becomes obvious. That even gives us the strategy we will use shortly for working with these matrices.

          s1    s2    s3    s4    s5
  s1  [  .25   .5    .25    0     0  ]
  s2  [   0     1     0     0     0  ]
  s3  [  .7    .1    .1    .1     0  ]
  s4  [   0     0     0     1     0  ]
  s5  [   0     0     0     0     1  ]

Absorbing? Yes. Absorbing states: s2 and s4. State s5 is not absorbing; there is no way into it.
An absorbing path from s1: s1 → s2
An absorbing path from s3: s3 → s2
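These decisions can also be checked mechanically once the matrices get bigger than the examples above. The following is only a minimal sketch, assuming Python with numpy; the helper names reachable_from and classify are made up for illustration. It follows the convention used in these notes: a state with pii = 1 only counts as absorbing if some other state can reach it, and the chain is absorbing when it has at least one such state.

```python
import numpy as np

def reachable_from(P, i):
    """States reachable from state i in one or more positive-probability steps."""
    n = P.shape[0]
    seen, frontier = set(), {i}
    while frontier:
        frontier = {k for j in frontier for k in range(n) if P[j, k] > 0} - seen
        seen |= frontier
    return seen

def classify(P, labels):
    """Apply the two conditions for an absorbing Markov chain to a transition matrix P."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    # Condition 1 candidates: p_ii = 1 (so every other entry on that row is zero).
    stuck = [i for i in range(n) if np.isclose(P[i, i], 1.0)]
    # Such a state only counts as absorbing if at least one OTHER state can reach it.
    absorbing = [i for i in stuck
                 if any(i in reachable_from(P, j) for j in range(n) if j != i)]
    # Condition 2: the chain is absorbing when at least one reachable absorbing state exists.
    return {"absorbing states": [labels[i] for i in absorbing],
            "absorbing chain": bool(absorbing)}

labels = ["s1", "s2", "s3", "s4"]
P1 = [[.25, .25, .25, .25],   # the first 4 by 4 example: absorbing, state s4
      [.20, 0.0, 0.0, .80],
      [.50, 0.0, .50, 0.0],
      [0.0, 0.0, 0.0, 1.0]]
P2 = [[.25, .50, .25, 0.0],   # the second 4 by 4 example: s4 cannot be entered
      [.20, .80, 0.0, 0.0],
      [.70, .10, .20, 0.0],
      [0.0, 0.0, 0.0, 1.0]]
print(classify(P1, labels))   # {'absorbing states': ['s4'], 'absorbing chain': True}
print(classify(P2, labels))   # {'absorbing states': [], 'absorbing chain': False}
```

The reachability search simply follows positive-probability transitions, which is all condition 2 asks for. The same check applied to the 5 by 5 example reports s2 and s4, and not s5, as absorbing states, matching the answer above.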
Now it is time to get technical. The nature of absorbing Markov chains is that we can glean a ton of information from them just by rearranging them, legally, as matrices. In each of the examples above we could tell a lot about the matrix by looking for rows with a 1 and zeros only and examining the column where the one appeared. They were all small enough to make that practical.

However, by rearranging the states in the transition matrix P (absorbing states first, then nonabsorbing) and carefully rearranging the columns so we don't lose track of the states themselves, we can subdivide P to see an identity matrix, a zero matrix, and two other matrices we will call S and Q, as shown below.

  P = [ Ik   0 ]
      [  S   Q ]

Let's do that with our previous examples. First, the 4 by 4 matrix with absorbing state s4.

Original matrix:
          s1    s2    s3    s4
  s1  [  .25   .25   .25   .25 ]
  s2  [  .2     0     0    .8  ]
  s3  [  .5     0    .5     0  ]
  s4  [   0     0     0     1  ]

Columns rearranged (s4 first):
          s4    s1    s2    s3
  s1  [  .25   .25   .25   .25 ]
  s2  [  .8    .2     0     0  ]
  s3  [   0    .5     0    .5  ]
  s4  [   1     0     0     0  ]

Rows rearranged as well:
          s4    s1    s2    s3
  s4  [   1     0     0     0  ]
  s1  [  .25   .25   .25   .25 ]
  s2  [  .8    .2     0     0  ]
  s3  [   0    .5     0    .5  ]

I rearranged the columns first, then I rearranged the rows. Now I can see the 1 by 1 identity matrix [1], the 1 by 3 zero matrix [0 0 0], and

  S = [ .25 ]        Q = [ .25   .25   .25 ]
      [ .8  ]            [ .2     0     0  ]
      [  0  ]            [ .5     0    .5  ]

Now the 4 by 4 matrix that was not absorbing.

Original matrix:
          s1    s2    s3    s4
  s1  [  .25   .5    .25    0  ]
  s2  [  .2    .8     0     0  ]
  s3  [  .7    .1    .2     0  ]
  s4  [   0     0     0     1  ]

Columns rearranged (s4 first):
          s4    s1    s2    s3
  s1  [   0    .25   .5    .25 ]
  s2  [   0    .2    .8     0  ]
  s3  [   0    .7    .1    .2  ]
  s4  [   1     0     0     0  ]

Rows rearranged as well:
          s4    s1    s2    s3
  s4  [   1     0     0     0  ]
  s1  [   0    .25   .5    .25 ]
  s2  [   0    .2    .8     0  ]
  s3  [   0    .7    .1    .2  ]

I rearranged the columns first, then I rearranged the rows. Now I can see the 1 by 1 identity matrix [1], the 1 by 3 zero matrix [0 0 0], and

  S = [ 0 ]        Q = [ .25   .5    .25 ]
      [ 0 ]            [ .2    .8     0  ]
      [ 0 ]            [ .7    .1    .2  ]

S has only zero entries. This …
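The rearrangement described above can be sketched in code as well. This is only an illustration, again assuming numpy; canonical_form is an invented name. It lists the states with pii = 1 first, reorders the rows and columns together so we don't lose track of the states, and slices out S and Q. Run on the second 4 by 4 example, it shows the all-zero S column noted above.

```python
import numpy as np

def canonical_form(P, labels):
    """Reorder P so states with p_ii = 1 come first, giving the block form
       P = [ I  0 ]
           [ S  Q ]
    and return the new state order together with the blocks S and Q."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    first = [i for i in range(n) if np.isclose(P[i, i], 1.0)]      # absorbing-style states
    rest  = [i for i in range(n) if not np.isclose(P[i, i], 1.0)]  # nonabsorbing states
    order = first + rest
    R = P[np.ix_(order, order)]        # rearrange rows and columns in the same order
    k = len(first)
    S = R[k:, :k]                      # from nonabsorbing states into the absorbing ones
    Q = R[k:, k:]                      # movement among the nonabsorbing states
    return [labels[i] for i in order], S, Q

labels = ["s1", "s2", "s3", "s4"]
P2 = [[.25, .50, .25, 0.0],
      [.20, .80, 0.0, 0.0],
      [.70, .10, .20, 0.0],
      [0.0, 0.0, 0.0, 1.0]]
order, S, Q = canonical_form(P2, labels)
print(order)   # ['s4', 's1', 's2', 's3']
print(S)       # a 3 by 1 column of zeros: nothing ever moves into s4
print(Q)       # the 3 by 3 block Q shown above
```

Applying the same call to the first 4 by 4 example returns the column S = [.25, .8, 0] and the 3 by 3 block Q worked out above for that matrix.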

