EE226: Random Processes in Systems, Fall '06
Problem Set 7 — Due November 16
Lecturer: Jean C. Walrand        GSI: Assane Gueye

This problem set essentially reviews Markov chains and Poisson processes. Not all exercises are to be turned in. Only those marked with the sign F are due on Thursday, November 16, at the beginning of the class. Although the remaining exercises are not graded, you are encouraged to go through them. We will discuss some of the exercises during discussion sections. Please feel free to point out errors and notions that need to be clarified.

Exercise 7.1. F
Consider the Markov chain $X_n$ with the transition diagram shown in Figure 7.1, where $n > 1$ and $p \in (0, 0.5)$.

[Figure 7.1. Markov chain diagram]

Explain how to calculate $P[T_n < T_0 \mid X_0 = 1]$, where $T_i = \min\{n \ge 0 \mid X_n = i\}$. Specifically:
(a) What are the first step equations?
(b) What are the boundary conditions?
(c) Can you solve them?

Solution
(a) Let $\alpha(i) := P[T_n < T_0 \mid X_0 = i]$. We find
$$\alpha(i) = p\,\alpha(i+1) + (1-p)\,\alpha(i-1), \qquad 0 < i < n.$$
(b) $\alpha(0) = 0$ and $\alpha(n) = 1$.
(c) Let us try $\alpha(i) = \lambda^i$. Then
$$\lambda^i = p\lambda \cdot \lambda^i + (1-p)\lambda^{-1}\lambda^i.$$
This is solved if $1 = p\lambda + (1-p)\lambda^{-1}$, i.e., if $\lambda = 1$ or $\lambda = (1-p)/p$. Thus, the general solution of the equations is
$$\alpha(i) = a + b\rho^i, \qquad \text{where } \rho := \frac{1-p}{p}.$$
With the boundary conditions, we find $a + b = 0$ and $a + b\rho^n = 1$. Hence,
$$\alpha(i) = \frac{1 - \rho^i}{1 - \rho^n}, \qquad 0 \le i \le n.$$
Finally,
$$P[T_n < T_0 \mid X_0 = 1] = \alpha(1) = \frac{1 - \rho}{1 - \rho^n}.$$
(A numerical check of this formula is sketched after Exercise 7.3 below.)

Exercise 7.2. F
A transition probability matrix $P$ is doubly stochastic if the sum over each column equals one; that is,
$$\sum_i P_{ij} = 1, \qquad \text{for all } j.$$
If such a chain is irreducible and aperiodic and consists of $M+1$ states $0, 1, \ldots, M$, show that the limiting probability is given by
$$\pi_j = \frac{1}{M+1}, \qquad j = 0, 1, \ldots, M.$$

Solution
We know that an irreducible and aperiodic Markov chain has a limiting probability which is the unique probability distribution solving $\pi = \pi P$. Thus we only need to check that the distribution given in this exercise is a solution. Letting $\pi_j = \frac{1}{M+1}$, $j = 0, 1, \ldots, M$, we have
$$(\pi P)_j = \sum_i \pi_i P_{ij} = \frac{1}{M+1} \sum_i P_{ij} = \frac{1}{M+1} = \pi_j,$$
which gives the result.

Exercise 7.3.
Let $\pi_j$ denote the long-run proportion of time a given Markov chain is in state $j$.
(a) Explain why $\pi_j$ is also the proportion of transitions that are into state $j$ as well as being the proportion of transitions that are from state $j$.
(b) $\pi_i P_{ij}$ represents the proportion of transitions that satisfy what property?
(c) $\sum_i \pi_i P_{ij}$ represents the proportion of transitions that satisfy what property?
(d) Using the preceding, explain why
$$\pi_j = \sum_i \pi_i P_{ij}.$$

Solution: Hint
(a) The number of transitions into state $i$, the number of transitions from state $i$, and the number of time periods the process spends in state $i$ differ from one another by at most 1, so their long-run proportions are equal.
(b) $\pi_i P_{ij}$ is the long-run proportion of transitions from $i$ to $j$.
(c) $\sum_i \pi_i P_{ij}$ is the long-run proportion of transitions into $j$.
(d) Since $\pi_j$ is equal to the long-run proportion of transitions into $j$, we must have
$$\pi_j = \sum_i \pi_i P_{ij}.$$
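A numerical sanity check of the closed form in Exercise 7.1, not part of the original problem set: the first step equations can be solved directly as a linear system and compared with $(1-\rho)/(1-\rho^n)$. The sketch below assumes Python with NumPy; the test values of $p$ and $n$ are arbitrary.

    import numpy as np

    def hitting_prob_first_step(p, n):
        """Solve the first step equations of Exercise 7.1,
            alpha(i) = p*alpha(i+1) + (1-p)*alpha(i-1),   0 < i < n,
        with boundary conditions alpha(0) = 0 and alpha(n) = 1; return alpha(1)."""
        A = np.zeros((n + 1, n + 1))
        b = np.zeros(n + 1)
        A[0, 0] = 1.0                      # boundary condition alpha(0) = 0
        A[n, n] = 1.0                      # boundary condition alpha(n) = 1
        b[n] = 1.0
        for i in range(1, n):              # interior first step equations
            A[i, i - 1] = -(1.0 - p)
            A[i, i] = 1.0
            A[i, i + 1] = -p
        return np.linalg.solve(A, b)[1]

    def hitting_prob_closed_form(p, n):
        """Closed form alpha(1) = (1 - rho) / (1 - rho**n), with rho = (1 - p) / p."""
        rho = (1.0 - p) / p
        return (1.0 - rho) / (1.0 - rho ** n)

    if __name__ == "__main__":
        p, n = 0.3, 8                              # arbitrary values with p in (0, 0.5), n > 1
        print(hitting_prob_first_step(p, n))       # the two printed values
        print(hitting_prob_closed_form(p, n))      # should agree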
Exercise 7.4. F♣ This was a prelim question!
A fly wanders around on a graph $G$ with vertices $V = \{1, \ldots, 5\}$ (see Figure 7.2).

[Figure 7.2. A fly wanders randomly on a graph; a spider sits at node 3 and the window is node 5.]

(a) Suppose that the fly wanders as follows: if it is at node $i$ at time $t$, then it chooses one of the neighbors $j$ of $i$ uniformly at random, and then wanders to node $j$ at time $t+1$. For times $t = 1, 2, \ldots$, let $X_t$ denote the fly's position at time $t$. Is $X_t$ a Markov chain? Give your arguments.
(b) Does the probability vector $P(X_t = i)$ converge to a limit as the time $t$ increases, independent of the starting position?
(c) Now suppose that a spider is sitting at node 3 and will catch the fly whenever it reaches that node. On the other hand, if the fly reaches the window (node 5), it will escape. What is the probability that the fly escapes, supposing it starts at node 1?

Solution
(a) The position of the fly in the next time step depends only on the current position, so the process is a Markov chain. It is irreducible, finite (thus positive recurrent), and periodic with period 2.
(b) Starting at position 1, we know that the fly will return to that position only after an even number of time steps. Thus $P(X_t = 1 \mid X_0 = 1) = 0$ for all odd $t$. On the other hand, if the initial position is 2, the fly will enter state 1 only after an odd number of time steps. Since the process is positive recurrent, we have $P(X_t = 1 \mid X_0 = 2) > 0$ for all odd $t$. Thus, depending on the starting point, we get two different limiting behaviors, so the probability vector does not converge to a limit that is independent of the starting position.
(c) We want to compute $P(T_5 < T_3 \mid X_0 = 1)$, where $T_i = \min\{t \ge 0 : X_t = i\}$. Conditioning on the first step and using the fact that the process is a Markov chain, we have
$$P(T_5 < T_3 \mid X_0 = 1) = \frac{1}{2} P(T_5 < T_3 \mid X_0 = 2) + \frac{1}{2} P(T_5 < T_3 \mid X_0 = 4)$$
$$= \frac{1}{2}\left[\frac{1}{3} \cdot 1 + \frac{1}{3} P(T_5 < T_3 \mid X_0 = 1)\right] + \frac{1}{2}\left[\frac{1}{2} P(T_5 < T_3 \mid X_0 = 1) + \frac{1}{2} \cdot 0\right]$$
$$= \frac{1}{6} + \frac{5}{12} P(T_5 < T_3 \mid X_0 = 1).$$
Solving the last equation gives $P(T_5 < T_3 \mid X_0 = 1) = \frac{2}{7}$. (A numerical check of this value is sketched below, after Exercise 7.6.)

Exercise 7.5. Gambler's ruin problem.
A gambler wins \$1 at each round of play with probability $p$ and loses \$1 with probability $1-p$. Different rounds are assumed to be independent. The gambler keeps playing until he either accumulates \$m or loses all his money. What is the probability of eventually accumulating the target amount? Assume that the gambler starts with \$m/2.

Solution: Hint
This exercise is similar to Exercise 7.1. However, we provide here hints for another way of computing the desired probability.
1. Write the first step equations and the boundary conditions.
2. Define $\rho = \frac{1-p}{p}$ and let $P_i$ be the probability of eventually accumulating the target amount given that the gambler starts with \$i. Then write the first step equations in the form
$$P_2 - P_1 = \rho P_1$$
$$P_3 - P_2 = \rho(P_2 - P_1) = \rho^2 P_1$$
$$P_4 - P_3 = \rho(P_3 - P_2) = \rho^3 P_1$$
$$\vdots$$
$$P_m - P_{m-1} = \rho^{m-1} P_1$$
Taking the sum of these equations, we get
$$P_m - P_1 = (\rho + \rho^2 + \cdots + \rho^{m-1}) P_1.$$
Now, observing that $P_m = 1$ and solving for $P_1$, we get
$$P_1 = \frac{1 - \rho}{1 - \rho^m} \qquad \text{and} \qquad P_i = \frac{1 - \rho^i}{1 - \rho^m}.$$

Exercise 7.6. F
An individual possesses $r$ umbrellas, which he employs in going from home to office and vice versa. If he is at home (resp. office) at the beginning (resp. end) of a day and it is raining, then he will take an umbrella with him to the office (resp. home), if there is one to be taken. If it is not raining, then he will not take an umbrella. Assume that, independent of the past, it rains at the beginning (end) of a day with probability $p$.
(a) Define a Markov chain that will help you determine
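A numerical check of the value $2/7$ obtained in Exercise 7.4(c), not part of the original problem set. Figure 7.2 is not reproduced here, so the adjacency used below is an assumption reconstructed from the first step equations in the solution: node 1 adjacent to 2 and 4; node 2 adjacent to 1, 3, and 5; node 4 adjacent to 1 and 3; the spider's node 3 and the window node 5 are treated as absorbing. The sketch assumes Python with NumPy.

    import numpy as np

    # ASSUMED adjacency (Figure 7.2 is not reproduced here); it is
    # reconstructed from the first step equations in the solution of 7.4(c).
    neighbors = {1: [2, 4], 2: [1, 3, 5], 3: [2, 4], 4: [1, 3], 5: [2]}
    SPIDER, WINDOW = 3, 5                  # caught at node 3, escape at node 5

    # Unknowns: q(i) = P(reach WINDOW before SPIDER | X_0 = i) for the
    # transient states, with boundary values q(SPIDER) = 0 and q(WINDOW) = 1.
    transient = [1, 2, 4]
    idx = {s: k for k, s in enumerate(transient)}

    A = np.eye(len(transient))
    b = np.zeros(len(transient))
    for s in transient:
        w = 1.0 / len(neighbors[s])        # uniform move to each neighbor
        for j in neighbors[s]:
            if j == WINDOW:
                b[idx[s]] += w             # known boundary value 1
            elif j == SPIDER:
                pass                       # known boundary value 0
            else:
                A[idx[s], idx[j]] -= w     # unknown neighbor value

    q = np.linalg.solve(A, b)
    print(q[idx[1]], 2.0 / 7.0)            # both approximately 0.2857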

