TAMU MATH 304 - Lecture39web

Math 304-504 Linear Algebra
Lecture 39: Markov chains.

Stochastic process

A stochastic (or random) process is a sequence of experiments for which the outcome at any stage depends on chance.

Simple model:
• a finite number of possible outcomes (called states);
• discrete time.

Let S denote the set of states. Then the stochastic process is a sequence s_0, s_1, s_2, ..., where each s_n ∈ S depends on chance. How do they depend on chance?

Bernoulli scheme

A Bernoulli scheme is a sequence of independent random events. That is, in the sequence s_0, s_1, s_2, ... each outcome s_n is independent of the others.

For any integer n ≥ 0 we have a probability distribution p^(n) on S. This means that each state s ∈ S is assigned a value p_s^(n) ≥ 0 so that Σ_{s∈S} p_s^(n) = 1. Then the probability of the event s_n = s is p_s^(n).

The Bernoulli scheme is called stationary if the probability distributions p^(n) do not depend on n.

Examples of Bernoulli schemes:
• Coin tossing: 2 states (heads and tails), equal probabilities 1/2.
• Die throwing: 6 states, uniform probability distribution (1/6 each).
• Lotto Texas: each state is a 6-element subset of the set {1, 2, ..., 54}; the total number of states is C(54,6) = 25,827,165; uniform probability distribution.

Markov chain

A Markov chain is a stochastic process with discrete time such that the probability of the next outcome depends only on the previous outcome.

Let S = {1, 2, ..., k}. The Markov chain is determined by transition probabilities p_ij^(t), 1 ≤ i, j ≤ k, t ≥ 0, and by the initial probability distribution q_i, 1 ≤ i ≤ k. Here q_i is the probability of the event s_0 = i, and p_ij^(t) is the conditional probability of the event s_{t+1} = j given that s_t = i. By construction, p_ij^(t), q_i ≥ 0, Σ_i q_i = 1, and Σ_j p_ij^(t) = 1.

We shall assume that the Markov chain is time-independent, i.e., the transition probabilities do not depend on time: p_ij^(t) = p_ij. Then a Markov chain on S = {1, 2, ..., k} is determined by a probability vector x_0 = (q_1, q_2, ..., q_k) ∈ R^k and a k×k transition matrix P = (p_ij).
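A time-independent Markov chain as defined above can be simulated directly from the pair (x_0, P). The following is a minimal sketch, not part of the lecture; the 3-state matrix is chosen for illustration and the helper name sample_chain is an assumption:

```python
import random

def sample_chain(q, P, n, seed=0):
    """Sample s_0, s_1, ..., s_n from a time-independent Markov chain.

    q: initial distribution over states 0..k-1 (the vector x_0);
    P: transition matrix, P[i][j] = P(s_{t+1} = j | s_t = i)."""
    rng = random.Random(seed)
    states = range(len(q))
    s = rng.choices(states, weights=q)[0]   # draw s_0 from q
    path = [s]
    for _ in range(n):
        s = rng.choices(states, weights=P[s])[0]  # next state from row s of P
        path.append(s)
    return path

# Illustrative 3-state chain; by construction each row of P sums to 1.
P = [[0.0, 0.5, 0.5],
     [0.0, 0.5, 0.5],
     [1.0, 0.0, 0.0]]
q = [1.0, 0.0, 0.0]
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)

path = sample_chain(q, P, 10)
```

Each step only consults the current state's row of P, which is exactly the Markov property: the next outcome depends only on the previous one.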
The entries in each row of P add up to 1.

Let s_0, s_1, s_2, ... be the Markov chain. The vector x_0 determines the probability distribution of the initial state s_0.

Problem. Find the (unconditional) probability distribution of any s_n.

Random walk

States 1, 2, 3. Transition matrix:

        ( 0   1/2  1/2 )
    P = ( 0   1/2  1/2 )
        ( 1    0    0  )

Problem. Find the (unconditional) probability distribution of any s_n, n ≥ 1.

The probability distribution of s_{n-1} is given by a probability vector x_{n-1} = (a_1, ..., a_k); the probability distribution of s_n is given by a vector x_n = (b_1, ..., b_k). We have

    b_j = a_1 p_1j + a_2 p_2j + ... + a_k p_kj,   1 ≤ j ≤ k.

That is, (b_1, ..., b_k) = (a_1, ..., a_k) P, where P = (p_ij) is the transition matrix. So

    x_n = x_{n-1} P  ⇒  x_n^T = (x_{n-1} P)^T = P^T x_{n-1}^T.

Thus x_n = Q x_{n-1}, where Q = P^T and the vectors are now regarded as columns. Then x_n = Q x_{n-1} = Q(Q x_{n-2}) = Q^2 x_{n-2}. Similarly, x_n = Q^3 x_{n-3}, and so on. Finally,

    x_n = Q^n x_0.

Example: a very primitive weather model

Two states: "sunny" (1) and "rainy" (2). Transition matrix:

    P = ( 0.9  0.1 )
        ( 0.5  0.5 )

Suppose that x_0 = (1, 0) (sunny weather initially).

Problem. Make a long-term weather prediction.

The probability distribution of weather for day n is given by the vector x_n = Q^n x_0, where Q = P^T. To compute Q^n, we diagonalize the matrix

    Q = ( 0.9  0.5 )
        ( 0.1  0.5 )

det(Q − λI) = (0.9 − λ)(0.5 − λ) − 0.05 = λ^2 − 1.4λ + 0.4 = (λ − 1)(λ − 0.4).

Two eigenvalues: λ_1 = 1, λ_2 = 0.4.

(Q − I)v = 0 ⇔ ( −0.1  0.5 ; 0.1  −0.5 )(x ; y) = (0 ; 0) ⇔ (x, y) = t(5, 1), t ∈ R.

(Q − 0.4I)v = 0 ⇔ ( 0.5  0.5 ; 0.1  0.1 )(x ; y) = (0 ; 0) ⇔ (x, y) = t(−1, 1), t ∈ R.

So v_1 = (5, 1) and v_2 = (−1, 1) are eigenvectors of Q belonging to the eigenvalues 1 and 0.4, respectively. Write x_0 = α v_1 + β v_2:

    5α − β = 1,  α + β = 0  ⇒  α = 1/6,  β = −1/6.

Now x_n = Q^n x_0 = Q^n(α v_1 + β v_2) = α(Q^n v_1) + β(Q^n v_2) = α v_1 + (0.4)^n β v_2, which converges to the vector α v_1 = (5/6, 1/6) as n → ∞.

The vector x_∞ = (5/6, 1/6) gives the limit distribution. It is also a steady-state vector.

Remark. The limit distribution does not depend on the initial distribution x_0.


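As a cross-check on the diagonalization above: for any 2-state chain with transition matrix P = ((1−a, a), (b, 1−b)) and a + b > 0, solving x P = x together with x_1 + x_2 = 1 gives the steady state in closed form, x = (b/(a+b), a/(a+b)). A short sketch (the function name is hypothetical, not from the lecture):

```python
def stationary_two_state(a, b):
    """Steady-state vector of P = [[1-a, a], [b, 1-b]], assuming a + b > 0.

    Solves x P = x with x_1 + x_2 = 1: the first equation reads
    x_1 (1-a) + x_2 b = x_1, i.e. a x_1 = b x_2."""
    return (b / (a + b), a / (a + b))

# Weather model: a = 0.1 (sunny -> rainy), b = 0.5 (rainy -> sunny).
pi = stationary_two_state(0.1, 0.5)
# pi = (5/6, 1/6), matching the limit distribution x_infinity found above
```

This agrees with the eigenvector computation: (5/6, 1/6) is a multiple of v_1 = (5, 1), normalized so its entries sum to 1.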