Economics 520, Fall 2011
Lecture Note 9: Introduction to Stochastic Processes

These notes are based on S. Ross, Introduction to Probability Models, Academic Press, and J. Hamilton, Time Series Analysis, Princeton University Press.

Definition 1 A stochastic process {X(t), t ∈ T} is a collection of random variables: for each t ∈ T, X(t) is a random variable.

The set T is called an index set, and t ∈ T is often interpreted as time. So X(t) would be the (random) state of a process at time t. Sometimes for simplicity we write X_t = X(t).

Note that each X_t is a random variable, so really we should be writing X_t(ω). Each X_t is a function from the sample space Ω to a subset of R. In applications we are often interested in modelling the evolution of some variable over time, so it is reasonable that the range of X_t is the same across time. In that case we call the range of X_t the state space.

If the index set T is a countable set, we call the stochastic process a discrete-time process. If the index set is an interval of the real line, we call the process a continuous-time process.

Although t is often used to indicate time, it can be used in other ways as well. For example, when modelling spatial phenomena (e.g. geographical concentrations of pollution), we might use a two-dimensional index t corresponding to longitude and latitude.

Example 1: Consider flipping a coin forever (or for a very long time...). The sample space Ω would contain every possible infinite sequence of Hs and Ts. We could define the index set as T = {1, 2, 3, ...} and X_1 = 1 if the first toss is heads and 0 otherwise, X_2 = 1 if the second toss is heads and zero otherwise, and so on. This defines a stochastic process {X_t, t ∈ T}, where X_s is independent of X_r for s ≠ r.

Next, we could define a new stochastic process {Y_t, t ∈ T}, where Y_t is the total number of heads up to that point in time:

    Y_t = \sum_{i=1}^{t} X_i.

Now there is a very distinct dependence between, say, Y_s and Y_{s+1}. We could also consider another stochastic process {Z_t, t ∈ T} where

    Z_t = \frac{Y_t}{t} = \frac{1}{t} \sum_{i=1}^{t} X_i.

Since Z_t is the average of the X_i up to that point in time, we might think that Z_t would converge to 1/2 as t increases. We will come back to this idea in later lecture notes.
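To make Example 1 concrete, here is a short Python sketch (not part of the original notes; it assumes numpy is available, and the seed and horizons are chosen arbitrarily) that simulates X_t, Y_t, and Z_t for a fair coin and prints the running average Z_t at a few horizons, where Z_t should drift toward 1/2.

```python
import numpy as np

# A minimal simulation of Example 1: fair-coin flips X_t, the running
# head count Y_t, and the running average Z_t = Y_t / t.
rng = np.random.default_rng(0)      # seed chosen arbitrarily for reproducibility

T = 10_000                          # number of flips (a "very long time")
X = rng.integers(0, 2, size=T)      # X_t = 1 if toss t is heads, 0 otherwise
Y = np.cumsum(X)                    # Y_t = X_1 + ... + X_t
t = np.arange(1, T + 1)
Z = Y / t                           # Z_t = Y_t / t, the fraction of heads so far

for horizon in (10, 100, 1_000, 10_000):
    print(f"Z_{horizon} = {Z[horizon - 1]:.4f}")
# Typical output: Z_10 is noisy, while Z_10000 is close to 0.5,
# anticipating the law-of-large-numbers idea referred to above.
```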
Markov Chains

Suppose that {X_t, t = 1, 2, 3, ...} is a discrete-time stochastic process with a finite or countable state space. That is, X_t takes on a finite or countable number of possible values, and for simplicity let us say that the range is actually the nonnegative integers 0, 1, 2, 3, ...

We can fully specify the joint probability distribution of these random variables by starting with the marginal probabilities

    Pr(X_1 = i) = f_{X_1}(i),  i = 0, 1, 2, ...

and then defining various conditional probabilities recursively:

    Pr(X_2 = j | X_1 = i)
    Pr(X_3 = j | X_2 = i, X_1 = k)

and so on. Suppose that the conditional probabilities have a simple form, depending only on the most recent past random variable:

    Pr(X_{t+1} = j | X_t = i, X_{t-1} = k_{t-1}, ..., X_1 = k_1) = Pr(X_{t+1} = j | X_t = i) = P_{ij},  for all t = 1, 2, 3, ...

We call such a stochastic process a Markov chain.

The numbers P_{ij} are the transition probabilities: P_{ij} is the probability of going from state i to state j. They must be nonnegative, P_{ij} ≥ 0 for all i and j, and for each i we must have \sum_{j=0}^{\infty} P_{ij} = 1. It is handy to collect them into a matrix:

    P = \begin{bmatrix} P_{00} & P_{01} & P_{02} & \cdots \\ P_{10} & P_{11} & P_{12} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}.

Example 1 continued: For independent coin flips, the possible values are 0 and 1, and the transition matrix would be

    P = \begin{bmatrix} .5 & .5 \\ .5 & .5 \end{bmatrix}.

Next, consider Y_t, the cumulative sum of heads. For any time t, Y_{t+1} can either be equal to Y_t (with probability 1/2) or Y_t + 1 (with probability 1/2). Thus P_{ij} = .5 for j = i, i + 1 and 0 otherwise.

Example 2: The paper "Intrafirm Mobility and Sex Differences in Pay," by Michael Ransom and Ronald Oaxaca, studies employment records for a major grocery store. They construct transition probabilities between different job categories, separately for males and females. For example, male produce clerks have a .17 probability of being terminated the following year, a .65 probability of remaining in their position, a .04 probability of being promoted to produce manager, and various other probabilities of shifting position within the firm. A female produce clerk has a .25 probability of being terminated, a .375 probability of remaining as a produce clerk, and so on.

Next, we can calculate the transition probabilities more than one step ahead: let the m-step transition probabilities be

    P^m_{ij} = Pr(X_{t+m} = j | X_t = i),  t ≥ 0, i, j ≥ 0.

Result 1 For a Markov chain, and for any n, m ≥ 0, the (n + m)-step transition probabilities are related to the n-step and the m-step transition probabilities by

    P^{n+m}_{ij} = \sum_{k=0}^{\infty} P^n_{ik} P^m_{kj}.    (1)

Proof: The intuition is that the right hand side represents the probability of starting at state i, passing through state k at the nth transition, and then going to j after another m transitions. Formally:

    P^{m+n}_{ij} = Pr(X_{m+n} = j | X_0 = i)
                 = \sum_{k=0}^{\infty} Pr(X_{m+n} = j, X_n = k | X_0 = i)
                 = \sum_{k=0}^{\infty} Pr(X_{m+n} = j | X_n = k, X_0 = i) Pr(X_n = k | X_0 = i)
                 = \sum_{k=0}^{\infty} Pr(X_{m+n} = j | X_n = k) Pr(X_n = k | X_0 = i)   (by the Markov property)
                 = \sum_{k=0}^{\infty} P^m_{kj} P^n_{ik}.

The equations (1) are often referred to as the Chapman-Kolmogorov equations. They are particularly easy to write in matrix form. If P^{(n)} denotes the matrix of n-step transition probabilities, then (1) can be written as

    P^{(n+m)} = P^{(n)} · P^{(m)},

where the multiplication is the usual matrix multiplication. Thus,

    P^{(2)} = P · P,   P^{(3)} = P · P · P,

and in general

    P^{(n)} = P^n.

In some cases, as n → ∞, the P^n converge to a constant matrix. The following theorem is one of the key results in Markov chain theory:

Theorem 1 Suppose that a Markov chain with transition matrix P satisfies:

1. Aperiodicity: for all i, P_{ii} > 0.
2. Positive recurrence: starting in any state i, the expected time to return to state i is finite.
3. Irreducibility: for all states i and j, there is some n such that P^n_{ij} > 0. Thus, state j is "accessible" from state i.

Then lim_{n→∞} P^n_{ij} exists and is independent of i. Furthermore, letting

    \pi_j = \lim_{n \to \infty} P^n_{ij},

the limiting probabilities are the unique nonnegative solution to

    \pi_j = \sum_{i=0}^{\infty} \pi_i P_{ij},  j ≥ 0,    \sum_{j=0}^{\infty} \pi_j = 1.

Notice that if we start with probabilities π_i of being in state i, then at the next step we have the same probabilities of being in any given state. Thus, the vector π could be thought of as an "invariant" or "steady-state" distribution over the states.

Also: π_j is interpreted as the limiting probability that the process is in state j.
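As a numerical illustration of the Chapman-Kolmogorov equations and Theorem 1, here is a minimal Python sketch (again not from the notes; the two-state transition matrix is a made-up example, and numpy is assumed) that raises P to a high power and computes the stationary vector π solving π = πP, so that every row of P^n approaches π.

```python
import numpy as np

# Hypothetical 2-state transition matrix (rows sum to 1); any aperiodic,
# irreducible chain would behave the same way qualitatively.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# Chapman-Kolmogorov in matrix form: the n-step transition matrix is the
# matrix power of P, here with n = 50.
P50 = np.linalg.matrix_power(P, 50)
print("P^50 =\n", P50)        # each row is (approximately) pi

# Stationary distribution: pi solves pi = pi P with the entries summing to 1,
# i.e. pi is the eigenvector of P transposed associated with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi = pi / pi.sum()
print("pi   =", pi)           # for this matrix, pi = [0.75, 0.25]
print("pi P =", pi @ P)       # invariance check: pi P equals pi
```

The same check works for the coin-flip matrix in Example 1 continued, where every row is already (.5, .5), so π = (.5, .5) immediately.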

