MATH 56A: STOCHASTIC PROCESSES
ANSWERS TO HOMEWORK

Homework 3: Continuous Markov Chains

Three problems due 6pm Monday, March 10. Answers will be posted Tuesday evening. Quiz 2 on Friday, March 14.

3.4  A is the infinitesimal generator for an irreducible continuous-time Markov chain with finite state space.

(a) Let a be a positive number greater (in absolute value) than all the entries of A. Let

    P = (1/a)A + I.

Show that P is the transition matrix for a discrete-time, irreducible, aperiodic Markov chain.

Since the rows of A add up to 0, the rows of P add up to 1:

    Σ_j p(i,j) = Σ_j [(1/a)α(i,j) + δ(i,j)] = (1/a) Σ_j α(i,j) + Σ_j δ(i,j) = (1/a)(0) + 1 = 1.

The entries of P are nonnegative:

    p(i,j) = (1/a)α(i,j) ≥ 0 if i ≠ j,
    p(i,i) = (1/a)α(i,i) + 1 = 1 − (1/a)|α(i,i)| > 1 − 1 = 0, since a > |α(i,i)|.

Since the diagonal entries of P are positive, the chain is aperiodic. Since p(i,j) > 0 whenever α(i,j) > 0, the communication classes for P are the same as for A. So the discrete Markov chain is irreducible given that the continuous one is irreducible.

(b) Show that A has a unique left eigenvector with eigenvalue 0 that is a probability vector, and that all the other eigenvalues of A have real part strictly less than 0.

This follows from the Perron–Frobenius Theorem (p. 17).

First of all, the eigenvalues of P and A are related as follows. If λ is an eigenvalue of P with left eigenvector x (and right eigenvector y), then xP = λx. So

    xA = x[aP − aI] = a·xP − a·x = aλx − ax = a(λ − 1)x,

which makes a(λ − 1) an eigenvalue of A. [You can also use the right eigenvector y to get the same conclusion.]

When λ = 1, the corresponding eigenvalue of A is a(λ − 1) = 0. "1 is a simple eigenvalue of P" means there is a unique left eigenvector with eigenvalue 1:

    πP = π.

This is the invariant probability distribution. The calculation above implies that π is also the unique left eigenvector of A with eigenvalue 0. The eigenvector π is a "probability vector" since its entries are nonnegative and add up to 1.

The other eigenvalues of A are a(λ − 1) where |λ| < 1. But |λ| < 1 implies that the real part of λ is strictly less than 1. Since a is a positive real number, this implies

    Re(a(λ − 1)) = a(Re λ − 1) < 0.

3.9  The infinitesimal generator is

    A = [ −2   1   1   0 ]
        [  0  −1   1   0 ]
        [  1   1  −3   1 ]
        [  0   0   1  −1 ]

(a) Find the invariant distribution π.

This is the solution of πA = 0, normalized so that the sum of the coordinates is 1:

    π = (1/8)(1, 3, 2, 2).

(b) If X_0 = 1, what is the expected amount of time until the first jump?

The chain leaves state 1 at rate |α(1,1)| = 2, i.e., it jumps 2 times per unit time on average. So the expected wait is 1/2 of a unit of time.

(c) If X_0 = 1, what is the expected time until you reach state 4?

You take Ã = A with the 4th row and 4th column deleted:

    Ã = [ −2   1   1 ]
        [  0  −1   1 ]
        [  1   1  −3 ]

Then b(x) = (expected time to get from x to 4) is given by the formula (Example 3 on page 74):

    b = [−Ã]⁻¹ (1, 1, 1)ᵀ,   where   [−Ã]⁻¹ = [  1    2    1 ]
                                               [ 1/2  5/2   1 ]
                                               [ 1/2  3/2   1 ]

so b = (4, 4, 3)ᵀ. So the answer is b(1) = 4.

(Using the corrected version of the explosion probability calculation we did in class.) Alternatively, let b(x) be the expected time that it takes to get from state x to state 4. Then

    b(x) = 1/|α(x,x)| + Σ_{y ≠ x, 4} p(x,y) b(y).

The first term is the expected time it takes to escape from state x. If you jump to a state y which is not 4, then you need more time, and the amount of extra time you need is b(y), with probability p(x,y). For the matrix A we get:

    b(1) = 1/2 + (1/2)(b(2) + b(3)),
    b(2) = 1 + b(3),
    b(3) = 1/3 + (1/3)(b(1) + b(2)).

The solution of this system of equations is b = (4, 4, 3), as before.
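(As a quick numerical check on 3.9, not part of the original answer key, here is a minimal Python/NumPy sketch that recomputes the invariant distribution and the expected hitting times. The variable names A, A_tilde, pi, and b are just labels for the objects above.)

    import numpy as np

    # Infinitesimal generator from problem 3.9 (states 1..4).
    A = np.array([[-2.,  1.,  1.,  0.],
                  [ 0., -1.,  1.,  0.],
                  [ 1.,  1., -3.,  1.],
                  [ 0.,  0.,  1., -1.]])

    # Invariant distribution: solve pi A = 0 with sum(pi) = 1,
    # i.e. take the left null vector of A and normalize it.
    eigvals, left_vecs = np.linalg.eig(A.T)
    pi = np.real(left_vecs[:, np.argmin(np.abs(eigvals))])
    pi = pi / pi.sum()
    print(pi)          # -> [0.125 0.375 0.25 0.25] = (1, 3, 2, 2)/8

    # Expected time to reach state 4: b = [-A~]^{-1} (1,1,1)^T,
    # where A~ is A with the 4th row and column deleted.
    A_tilde = A[:3, :3]
    b = np.linalg.solve(-A_tilde, np.ones(3))
    print(b)           # -> [4. 4. 3.], so b(1) = 4

(Extracting the left null vector is just one convenient way to impose πA = 0; appending the normalization equation to the linear system would work equally well.)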
(M/M/1 queue) Suppose that there is a queue with one server. People get into the line at a rate of λ and they get served at the rate of µ.

(1) This is a continuous Markov chain X_t with states 0, 1, 2, 3, .... What is the infinitesimal generator A = (α(x,y))?

This is given by

    α(n, n+1) = λ for n ≥ 0,
    α(n, n−1) = µ for n > 0,

and don't forget the diagonal (each row must add up to 0):

    α(n, n) = −λ − µ for n > 0,   α(0, 0) = −λ.

The other entries of A are all zero.

(2) Convert this to a countable Markov chain Z_n. What is the (infinite) probability transition matrix P = (p(x,y))?

    p(n, n+1) = λ/(λ + µ) for n > 0,
    p(n, n−1) = µ/(λ + µ) for n > 0,
    p(0, 1) = 1,

and the other entries of P are zero. These are the probabilities for the jumps between states:

    p(x, y) = P(the state jumps to y from x).

(3) Using your answers to Homework Problem #2.1, determine:

(a) Under what conditions is this queue transient, positive recurrent, null recurrent?

This is a little tricky. The conversion is

    p = λ/(1 + λ) = 1 − 1/(1 + λ),
    q = µ/(1 + µ) = 1 − 1/(1 + µ).

Since the function f(x) = 1 − 1/(1 + x) has positive derivative,

    f′(x) = 1/(1 + x)² > 0,

f(x) is monotonically increasing. I.e., p > q ⟺ λ > µ and p < q ⟺ λ < µ. So this countable Markov chain is

    (i)   transient iff p > q iff λ > µ,
    (ii)  null recurrent iff p = q iff λ = µ,
    (iii) positive recurrent iff p < q iff λ < µ.

To prove this you also have to convert 2.1 into jump probabilities:

    p(x, x+1) = P(the state jumps to x+1 from x) = p(1 − q) / [p(1 − q) + q(1 − p)] = λ/(λ + µ),
    p(x, x−1) = P(the state jumps to x−1 from x) = q(1 − p) / [p(1 − q) + q(1 − p)] = µ/(λ + µ).

(b) When it is positive recurrent, what is the expected return time to 0? (For X_t, not Z_n.)

The invariant distribution (for Z_n, not for X_t) is given by normalizing the sequence

    λ/(λ + µ), λ/µ, λ²/µ², λ³/µ³, ....

The sum of these numbers is

    λ/(λ + µ) + (λ/µ)/(1 − λ/µ) = 2λµ/(µ² − λ²).

So,

    π(0) = [(µ² − λ²)/(2λµ)] · [λ/(λ + µ)] = (µ − λ)/(2µ).

So the expected return time to 0 for Z_n is

    1/π(0) = 2µ/(µ − λ) = 1 + (λ + µ)/(µ − λ).

(Note that this number is ≥ 2 since it takes at least 2 steps to go from 0 back to 0.) For the continuous chain X_t, the first step takes 1/λ amount of time on average, and every subsequent step takes 1/(λ + µ) on average. So the correct answer is

    E(T) = 1/λ + [(λ + µ)/(µ − λ)] · [1/(λ + µ)] = 1/λ + 1/(µ − λ).

(I wonder if anyone got this.)

(4) Determine when the chain X_t is explosive.

This chain is never explosive, for the simple reason that each jump takes an average of at least 1/(λ + µ) amount of time. So you can't have an infinite number of jumps in a finite amount of time; in particular, you can't go to infinity. Another method is to show that the expected amount of time it takes to get from state n to state n + 1 is bounded below by 1/(λ + µ). So the infinite sum of expected jump times diverges and there is no explosion.
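(Again as a sanity check rather than part of the original answers, here is a small Python simulation sketch of the M/M/1 chain X_t that estimates the expected return time to 0 and compares it with the formula 1/λ + 1/(µ − λ) derived in (3b). The rates lam = 1, mu = 3 are an arbitrary illustrative choice in the positive recurrent regime.)

    import random

    def mean_return_time_to_0(lam, mu, trials=100_000, seed=0):
        """Average time for X_t to return to 0, starting from 0."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(trials):
            # First jump: 0 -> 1 after an exponential time with rate lam.
            t = rng.expovariate(lam)
            x = 1
            while x > 0:
                # Holding time in a state x > 0 has rate lam + mu.
                t += rng.expovariate(lam + mu)
                # Jump up with probability lam/(lam+mu), down otherwise.
                x += 1 if rng.random() < lam / (lam + mu) else -1
            total += t
        return total / trials

    lam, mu = 1.0, 3.0
    print(mean_return_time_to_0(lam, mu))   # simulation estimate, roughly 1.5
    print(1 / lam + 1 / (mu - lam))          # exact value from (3b): 1.5

(The simulation only makes sense when λ < µ, i.e. in the positive recurrent case; otherwise the return time has infinite expectation or the chain never returns.)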

