Brandeis MATH 56A, Spring 2008: Stochastic Processes

5.3. definition of conditional expectation. The definition of a martingale is: if $n \ge m$ then
$$E(M_n \mid \mathcal{F}_m) = M_m$$
But what is the definition of this conditional expectation?

5.3.1. conditional expectation wrt information.

Definition 5.8. Given: $Y$ a measurable function with respect to $\mathcal{F}_\infty = \bigcup \mathcal{F}_n$. (This means we will "eventually" know the value of $Y$.) Then
$$Y' = E(Y \mid \mathcal{F}_m)$$
is defined to be "the $\mathcal{F}_m$-measurable function which best approximates $Y$."

To explain what this says I used two examples.

Example 5.9. $\mathcal{F}_n$ is given by $X_0, X_1, X_2, \dots$. Then $\mathcal{F}_2$ is given by the table that divides $\Omega$ into six boxes according to the values of $X_1 \in \{1, 2, 3\}$ and $X_2 \in \{1, 2\}$:

             X_1 = 1 | X_1 = 2 | X_1 = 3
    X_2 = 1          |         |
    X_2 = 2          |         |

To say that $Y' = E(Y \mid \mathcal{F}_2)$ is $\mathcal{F}_2$-measurable means that $Y'$ takes only 6 values, one in each of the little rectangles in the diagram. (Each little box is a subset of $\Omega$.)
$$Y' = \begin{pmatrix} y_{11} & y_{21} & y_{31} \\ y_{12} & y_{22} & y_{32} \end{pmatrix}$$
The numbers are the best guess as to the value of $Y$ given the information at time $n = 2$:
$$y_{ij} = E(Y \mid X_1 = i, X_2 = j)$$
The law of iterated expectation says:
$$E(Y \mid \mathcal{F}_0) = E(E(Y \mid \mathcal{F}_2) \mid \mathcal{F}_0) = E(Y' \mid \mathcal{F}_0) = \sum_{i=1}^{3} \sum_{j=1}^{2} y_{ij}\, P(X_1 = i, X_2 = j)$$

The conditional expectation with respect to information is the function which takes 6 values in the previous example. Each of these numbers is a conditional expectation with respect to an event. The next example explains this and gives the correct formula for the conditional jump time (something that we needed in Chapter 3).

Example 5.10. Consider the continuous Markov chain on the set of integers $\mathbb{Z}$ in which the rate of movement to the right is $a(n, n+1) = \lambda$ and to the left is $a(n, n-1) = \mu$. (Figure: the integer line, with an arrow labeled $\lambda$ pointing right and an arrow labeled $\mu$ pointing left out of each state, and $X_0$ marked.)
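This chain can be previewed with a quick simulation (my own sketch, not from the notes). The rightward and leftward jumps race as independent exponential clocks with rates $\lambda$ and $\mu$; the first jump happens at the minimum of the two. The rates $\lambda = 3$, $\mu = 2$ are sample values, and the output anticipates the computation that follows in the notes.

```python
import numpy as np

# Simulation sketch (not from the notes) of the first jump of this chain.
# Rightward and leftward jumps race as independent exponential clocks
# with assumed sample rates lam = 3 and mu = 2; the first jump happens
# at T = min of the two clocks, and R is the event "the right clock won".
rng = np.random.default_rng(0)
lam, mu = 3.0, 2.0
N = 500_000

t_right = rng.exponential(1 / lam, size=N)  # Exp(lam) clock
t_left = rng.exponential(1 / mu, size=N)    # Exp(mu) clock
T = np.minimum(t_right, t_left)             # time of the first jump
R = t_right < t_left                        # event: jump to the right

print(T.mean())      # ~ 1/(lam + mu) = 0.2
print(R.mean())      # ~ lam/(lam + mu) = 0.6
print(T[R].mean())   # ~ 0.2 as well: knowing the direction tells us
print(T[~R].mean())  # ~ 0.2          nothing about the jump time
```

The last two lines anticipate the computation of $E(T \mid R)$ below: both conditional means come out near $1/(\lambda + \mu)$, not $1/\lambda$ and $1/\mu$.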
We want to calculate the conditional expected value of the jump time:
$$T = \text{time of first jump} = \inf\{t \mid X_t \ne X_0\}$$
This is a stopping time because $X_t$ is right continuous: at the moment that you jump you will know $X_T$, the state that you jump to. $X_T$ will be either to the right or to the left of $X_0$:
$$R = \text{the event } X_T = X_0 + 1 \text{ (jump right)}$$
$$L = \text{the event } X_T = X_0 - 1 \text{ (jump left)}$$
These two events generate a $\sigma$-algebra $\mathcal{F}_1$:
$$\mathcal{F}_1 = \{\emptyset, \Omega, R, L\}$$
The conditional expectation of the jump time $T$ with respect to this information is
$$T' = E(T \mid \mathcal{F}_1) = \text{the } \mathcal{F}_1\text{-measurable function approximating } T$$
$T'$ being $\mathcal{F}_1$-measurable means that it takes two values: $T' = t_L$ on $L$ and $T' = t_R$ on $R$, where
$$t_L = E(T \mid L), \qquad t_R = E(T \mid R).$$
We know from before (and I will compute it again below) that
$$E(T \mid \mathcal{F}_0) = E(T) = \frac{1}{\lambda + \mu}$$
I gave an intuitive proof of this by drawing a picture: a unit interval of time with five jumps marked, in order, $R, L, L, R, R$. Here $\lambda = 3$, $\mu = 2$. Then on an average unit interval of time we will see 3 jumps to the right and 2 jumps to the left for a total of $\lambda + \mu = 5$ jumps per unit time. The average time between jumps will be
$$\frac{1}{\lambda + \mu} = \frac{1}{5}$$
The law of iterated expectation implies that this is
$$= E(E(T \mid \mathcal{F}_1) \mid \mathcal{F}_0) = E(T' \mid \mathcal{F}_0) = E(T \mid L)P(L) + E(T \mid R)P(R) = t_L \cdot \frac{\mu}{\lambda + \mu} + t_R \cdot \frac{\lambda}{\lambda + \mu}$$
This means that the intuitive idea that $t_L = 1/\mu$, $t_R = 1/\lambda$ is WRONG, because it would give
$$\frac{1}{\lambda + \mu} = \frac{2}{\lambda + \mu}\,!!$$
So, the question is: what is $E(T \mid R)$?

5.3.2. conditional expectation wrt an event.

Definition 5.11. If $A$ is any event then the conditional expectation of $T$ given $A$ is defined to be
$$E(T \mid A) := \frac{E(T \cdot I_A)}{P(A)}$$
where $I_A$ is the indicator function of $A$: the function which is 1 on $A$ and 0 outside of $A$. (So $E(I_A) = P(A)$.
This is an equation we saw before.)

The expectations of $T$ and of $T \cdot I_A$ are given by integration:
$$E(T) = \int_0^\infty t\, f_T(t)\, dt$$
$$E(T \cdot I_A) = \int_0^\infty t \cdot I_A\, f_T(t)\, dt$$
where $f_T(t)$ is the probability density function (pdf) of $T$. (Figure: the time axis cut into intervals of length $\Delta t$, with the first jump occurring in the interval $(t, t + \Delta t]$.)
$$f_T(t)\,\Delta t \approx P\big(\text{jump occurs in } (t, t + \Delta t] \text{ and it does not happen in } [0, t]\big)$$
$$= P(\text{jump occurs in } (t, t + \Delta t]) \times P(\text{no jump in each of } t/\Delta t \text{ intervals of length } \Delta t)$$
$$= (\lambda + \mu)\Delta t \cdot (1 - \lambda\Delta t - \mu\Delta t)^{t/\Delta t}$$
Canceling the $\Delta t$'s we get:
$$f_T(t) \approx (\lambda + \mu)(1 - \lambda\Delta t - \mu\Delta t)^{t/\Delta t}$$
To make this approximation exact, we need to take the limit as $\Delta t \to 0$. This uses the well-known limit
$$\lim_{\Delta t \to 0} (1 - c\,\Delta t)^{1/\Delta t} = e^{-c}$$
which you can prove using L'Hospital's rule (after taking the log of both sides). If we raise both sides to the power $t$ we get:
$$\lim_{\Delta t \to 0} (1 - c\,\Delta t)^{t/\Delta t} = e^{-ct}$$
Setting $c = \lambda + \mu$ we get:
$$f_T(t) = (\lambda + \mu) \lim_{\Delta t \to 0} (1 - \lambda\Delta t - \mu\Delta t)^{t/\Delta t} = (\lambda + \mu)e^{-(\lambda + \mu)t}$$
This means that
$$E(T) = \int_0^\infty t\, f_T(t)\, dt = \int_0^\infty t(\lambda + \mu)e^{-(\lambda + \mu)t}\, dt = \frac{1}{\lambda + \mu}$$
(Do the substitution $s = (\lambda + \mu)t$, $ds = (\lambda + \mu)\,dt$; then
$$E(T) = \frac{1}{\lambda + \mu} \int_0^\infty s e^{-s}\, ds = \frac{1}{\lambda + \mu}$$
since $\int s e^{-s}\, ds = -s e^{-s} - e^{-s} + C$.)

To compute the expected value of $T \cdot I_R$ we need to take the probability of jumping to the right, which is:
$$I_R \cdot f_T(t)\,\Delta t \approx P\big(\text{a jump to the right occurs in } (t, t + \Delta t] \text{ and no jump (right or left) occurs in } [0, t]\big) = \lambda\Delta t \cdot (1 - \lambda\Delta t - \mu\Delta t)^{t/\Delta t}$$
The second factor is the same as before, but the first factor is $\lambda\Delta t$ instead of $(\lambda + \mu)\Delta t$. This means
$$I_R \cdot f_T(t) = \lambda e^{-(\lambda + \mu)t}$$
$$E(T \cdot I_R) = \int_0^\infty t \cdot I_R\, f_T(t)\, dt = \int_0^\infty t \lambda e^{-(\lambda + \mu)t}\, dt = \frac{\lambda}{\lambda + \mu}\, E(T) = \frac{\lambda}{(\lambda + \mu)^2}$$
Since the probability of jumping to the right is $P(R) = \frac{\lambda}{\lambda + \mu}$, we get:
$$E(T \mid R) = \frac{E(T \cdot I_R)}{P(R)} = \frac{\lambda/(\lambda + \mu)^2}{\lambda/(\lambda + \mu)} = \frac{1}{\lambda + \mu}$$
Similarly,
$$E(T \mid L) = \frac{1}{\lambda + \mu}$$
This means that the information of which way you jump tells us nothing about $T$! (The time of the jump and the direction of the jump are independent.)

We then moved on to the Optional Sampling Theorem.
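The derivation above can be checked numerically (my own sketch; $\lambda = 3$ and $\mu = 2$ are assumed sample values): the discrete survival product approaches the exponential density, and straightforward rectangle-rule integration reproduces the two integrals just computed.

```python
import numpy as np

# Numerical check (my own sketch) of the density derivation, with
# assumed sample rates lam = 3, mu = 2.
lam, mu = 3.0, 2.0

# 1. The discrete product approaches (lam+mu) * e^{-(lam+mu)t} as dt -> 0.
t, dt = 0.5, 1e-6
approx = (lam + mu) * (1 - lam * dt - mu * dt) ** (t / dt)
exact = (lam + mu) * np.exp(-(lam + mu) * t)
print(approx, exact)  # agree to several digits once dt is small

# 2. Rectangle-rule integration of the two integrals computed above.
s = np.linspace(0, 20, 200_001)  # fine grid; the tail past 20 is negligible
h = s[1] - s[0]
f_T = (lam + mu) * np.exp(-(lam + mu) * s)
E_T = np.sum(s * f_T) * h                              # E(T)
E_TIR = np.sum(s * lam * np.exp(-(lam + mu) * s)) * h  # E(T * I_R)

print(E_T, 1 / (lam + mu))           # both ~ 0.2
print(E_TIR, lam / (lam + mu) ** 2)  # both ~ 0.12
print(E_TIR / (lam / (lam + mu)))    # E(T | R) ~ 0.2 = 1/(lam + mu)
```

The last line divides $E(T \cdot I_R)$ by $P(R) = \lambda/(\lambda + \mu)$, exactly as in Definition 5.11, and recovers $1/(\lambda + \mu)$.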
5.4. Optional Sampling Theorem (OST). First I stated it a little vaguely:

Theorem 5.12. Suppose that
(1) $T$ is a stopping time,
(2) $M_n$ is a martingale wrt the filtration $\mathcal{F}_n$, and
(3) certain other conditions are satisfied.
Then:
$$E(M_T \mid \mathcal{F}_0) = M_0$$

The first thing I explained is that this statement is NOT TRUE for Monte Carlo. This is the gambling strategy in which you double your bet every time you lose. Suppose that you want to win \$100. Then you go to a casino and you bet \$100. If you lose, you bet \$200. If you lose again, you bet \$400, and so on. At the end you get \$100. The probability is zero that you lose every single time. In practice this does not work, since you need an unlimited supply of money. But in mathematics we don't have that problem.

To make this a martingale you do the following. Let $X_1, X_2, X_3, \dots$ be i.i.d. Bernoulli random variables which are equal to $\pm 1$ with equal probability:
$$X_i = \begin{cases} +1 & \text{with probability } 1/2 \\ -1 & \text{with probability } 1/2 \end{cases}$$
In other words, we are assuming each game is fair. Then $E(X_i) = 0$. Let
$$M_n = X_1 + 2X_2 + 4X_3 + \cdots + 2^{n-1}X_n$$
This is the amount of money you will have at the end of $n$ rounds of play if you bet …
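The doubling strategy just described can be sketched in code (my own illustration, in units of the initial bet): $M_n$ is a martingale with $M_0 = 0$, yet stopping at the first win always returns a profit of exactly 1, so $E(M_T) = 1 \ne M_0$, and Theorem 5.12 cannot hold without its "certain other conditions".

```python
import random

# Sketch (my own illustration) of the doubling strategy, in units of the
# initial bet: bet 1, 2, 4, ... on fair +/-1 games until the first win.
# After losses of 1 + 2 + ... + 2^(n-1) = 2^n - 1, a win of 2^n leaves a
# net profit of exactly +1 -- so E(M_T) = 1 != M_0 = 0.
random.seed(1)

def profit_at_first_win(max_rounds=64):
    profit = 0
    for n in range(max_rounds):
        bet = 2 ** n                 # double the bet after every loss
        if random.random() < 0.5:    # win this round: X = +1
            return profit + bet      # net profit is always exactly +1
        profit -= bet                # lose this round: X = -1
    return profit                    # ruin: probability 2^-64 per trial

results = [profit_at_first_win() for _ in range(10_000)]
print(set(results))  # {1}: the gambler always walks away ahead
```

Compare with $E(M_n) = 0$ for every fixed $n$: the doubling only "beats" the fair game because the stopping time is unbounded, and so is the money needed to reach it.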

