MATH 56A SPRING 2008 STOCHASTIC PROCESSES
MARTINGALES

5.4. Optimal Sampling Theorem (OST). First I stated it a little vaguely:

Theorem 5.12. Suppose that
(1) $T$ is a stopping time,
(2) $M_n$ is a martingale with respect to the filtration $\mathcal{F}_n$, and
(3) certain other conditions are satisfied.
Then
$$E(M_T \mid \mathcal{F}_0) = M_0.$$

The first thing I explained is that this statement is NOT TRUE for Monte Carlo. This is the gambling strategy in which you double your bet every time you lose. Suppose that you want to win $100. Then you go to a casino and bet $100. If you lose, you bet $200. If you lose again, you bet $400, and so on. At the end you get $100. The probability is zero that you lose every single time. In practice this does not work since you need an unlimited supply of money, but in mathematics we don't have that problem.

To make this a martingale you do the following. Let $X_1, X_2, X_3, \dots$ be i.i.d. Bernoulli random variables which are equal to $\pm 1$ with equal probability:
$$X_i = \begin{cases} 1 & \text{with probability } 1/2 \\ -1 & \text{with probability } 1/2 \end{cases}$$
In other words, we are assuming each game is fair. Then $E(X_i) = 0$. Let
$$M_n = X_1 + 2X_2 + 4X_3 + \cdots + 2^{n-1}X_n.$$
This is the amount of money you will have at the end of $n$ rounds of play if you bet 1 on the first game, 2 on the second, 4 on the third, etc., and keep playing regardless of whether you win or lose. To see that this is a martingale we calculate:
$$M_{n+1} = X_1 + 2X_2 + \cdots + 2^{n-1}X_n + 2^n X_{n+1} = M_n + 2^n X_{n+1}.$$
At time $n$ we know the first $n$ numbers but we don't know the last one. So,
$$E(M_{n+1} \mid \mathcal{F}_n) = M_n + E(2^n X_{n+1}) = M_n + 2^n E(X_{n+1}) = M_n + 0 = M_n.$$
I.e., the expected future value is the same as the known value on each day. So, this is a martingale.

Let $T$ be the first time you win. Then $P(T < \infty) = 1$. (The argument about random walk being null recurrent actually does not apply here; I will explain on Monday what that was about.) In the Monte Carlo case it is obvious that $T < \infty$ since
$$P(T > n) = \frac{1}{2^n} \to 0.$$
In any case, $M_T = 1$ since, at the moment you win, your net gain will be exactly 1. So,
$$E(M_T \mid \mathcal{F}_0) = 1 \ne M_0 = 0.$$
In other words, the Optimal Sampling Theorem does not hold. We need to add a condition that excludes Monte Carlo, since we cannot prove a theorem which is false. The simplest such condition is boundedness:

Theorem 5.13 (OST1). The OST holds if $T$ is bounded, i.e., if $T \le B$ for some constant $B$.
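To see the failure concretely, here is a minimal simulation sketch in Python (the function names and the safety cap max_rounds are my own; they are not in the notes). Every run of the doubling strategy ends with $M_T = 1$, so the empirical mean is 1 rather than $M_0 = 0$; forcing the game to stop by a fixed round $B$ makes the stopping time bounded, and the mean returns to 0, as OST1 predicts.

```python
import random

def doubling_game(max_rounds=10_000):
    """Bet 2**(n-1) on round n until the first win (the 'Monte Carlo' strategy).

    Returns (T, M_T).  max_rounds is only a safety cap: the chance of
    reaching it is 2**(-max_rounds), effectively zero.
    """
    net = 0
    for n in range(1, max_rounds + 1):
        bet = 2 ** (n - 1)
        if random.random() < 0.5:   # X_n = +1: win this round
            return n, net + bet     # net gain is always exactly 1
        net -= bet                  # X_n = -1: lose the bet, double next time
    return max_rounds, net          # never reached in practice

def bounded_game(B):
    """M_{min(T, B)}: the same strategy, but forced to stop by round B."""
    net = 0
    for n in range(1, B + 1):
        bet = 2 ** (n - 1)
        if random.random() < 0.5:
            return net + bet
        net -= bet
    return net

random.seed(0)
runs = [doubling_game() for _ in range(100_000)]
print(sum(m for _, m in runs) / len(runs))   # 1.0: E(M_T) = 1, not M_0 = 0
print(max(t for t, _ in runs))               # T was finite in every run
print(sum(bounded_game(10) for _ in range(100_000)) / 100_000)
# close to 0 (up to sampling noise), as OST1 (Theorem 5.13) predicts
```

The bounded version is exactly the OST1 setting: $\min(T, B) \le B$ is a bounded stopping time, so its expected net gain is $M_0 = 0$.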
5.5. integrability conditions. The OST says that
$$E(M_T \mid \mathcal{F}_0) = M_0$$
under "certain conditions." These are integrability conditions which I want to explain (but just the definition).

Definition 5.14. Suppose that $Y$ is a random variable. Then
(1) $Y$ is integrable ($L^1$) if $E(|Y|) < \infty$, i.e., if the integral $\int_{-\infty}^{\infty} |y| f_Y(y)\,dy$ converges.
(2) $Y$ is square integrable ($L^2$, p. 217 in the book) if $E(Y^2) < \infty$.

If $Y$ is not integrable then one of the tails must be fat: i.e., either the right tail
$$\int_K^{\infty} y\, f_Y(y)\,dy = \infty$$
or the left tail
$$\int_{-\infty}^{-K} |y|\, f_Y(y)\,dy = \infty.$$
If you cut off any finite piece, you still have infinity left. So, the same will be true for any value of $K$.

5.5.1. a beautiful theorem. Here is a wonderful theorem which takes longer to state than to prove and which is related to what we just learned.

Theorem 5.15. Suppose that
(1) $Y_n$ is $\mathcal{F}_n$-measurable,
(2) $T$ is a stopping time, and
(3) $P(T < \infty) = 1$.
Then
$$M_n := E(Y_T \mid \mathcal{F}_n)$$
is a martingale with respect to $\mathcal{F}_n$.

Proof. By the tower property of conditional expectation,
$$E(M_{n+1} \mid \mathcal{F}_n) = E(E(Y_T \mid \mathcal{F}_{n+1}) \mid \mathcal{F}_n) = E(Y_T \mid \mathcal{F}_n) = M_n.$$
So, $M_n$ is a martingale. □

Example 5.16. Let $Y_n = f(X_n)$ be the payoff function, where $X_n$ is the state at time $n$ and $T$ is the optimal stopping time. Then $Y_T = f(X_T)$ is the optimal payoff. Let $v(x)$ be the value function. Then
$$v(X_n) = E(\underbrace{f(X_T)}_{Y_T} \mid \mathcal{F}_n).$$
As an example of the theorem we just proved, we have:

Corollary 5.17. $M_n = v(X_n)$ is a martingale!

Question: Does $v(X_n)$ satisfy the OST? In other words, is
$$E(v(X_T) \mid \mathcal{F}_0) = v(X_0)?$$
Answer: Yes, because $v(X_T) = f(X_T)$. (When you reach the state $X_T$ you are supposed to stop and take the payoff.)

5.5.2. uniform integrability.

Theorem 5.18 (2nd Optimal Sampling Theorem). Suppose that $M_0, M_1, M_2, \dots$ is a martingale with respect to the filtration $\mathcal{F}_n$. Suppose
(1) $T$ is a stopping time,
(2) $P(T < \infty) = 1$,
(3) $M_T$ is integrable: $E(|M_T|) < \infty$, and
(4) $M_0, M_1, \dots$ are uniformly integrable (defined below).
Then the OST holds, i.e., $E(M_T \mid \mathcal{F}_0) = M_0$.

Note: The contrapositive is also true. I.e., if the OST fails then one of the conditions must fail. For example, in Monte Carlo, $X_i = \pm 1$ with probability 1/2,
$$M_n = X_1 + 2X_2 + 2^2 X_3 + \cdots + 2^{n-1} X_n$$
is a martingale, and $T$ is the smallest $n$ so that $X_n = 1$. This is a stopping time with $P(T < \infty) = 1$, and $M_T = 1$ is integrable. But the OST fails. So, it must be that this martingale is not uniformly integrable.

Definition 5.19. $Y_n$ is integrable if for every $\epsilon > 0$ there is a $K_n > 0$ so that the $K_n$-tails have total area less than $\epsilon$:
$$\int_{K_n}^{\infty} y\, f_{Y_n}(y)\,dy + \int_{-\infty}^{-K_n} |y|\, f_{Y_n}(y)\,dy < \epsilon.$$
The $Y_n$ are uniformly integrable if the cutoff points can be chosen the same for all $Y_n$: $K_n = K$.

If a sequence $Y_n$ is not uniformly integrable then, as time goes on, you are very likely to end up in the tail. (No matter where you cut it, the tail carries at least $\epsilon > 0$. But you have an infinite sequence of random variables; if they are independent, you are almost certain to end up in the tail.)

Finally, I asked: Why is Monte Carlo not uniformly integrable? It is not given by an integral. So, what does this mean?

5.5.3. nonintegral meaning of uniform integrability. We need a new definition of tail which applies to any random variable $Y_n$, not just the continuous ones. For any $\delta > 0$, define a $\delta$-tail to be a set of values of $Y_n$ with probability $\le \delta$. Then uniform integrability implies that for all $\epsilon > 0$ there is a $\delta > 0$ so that
$$\int_{\delta\text{-tail}} |Y_n| < \epsilon$$
for all $n$. (In the discrete case the integral means you add up the probability times $|Y_n|$ for all points in the tail.) In the case of Monte Carlo, regardless of $\delta$, we can take $n$ so that $1/2^n < \delta$. Then the event that $X_1, X_2, \dots, X_n$ are all 1 is in the $\delta$-tail. It has probability $1/2^n$. But $M_n = 2^n - 1$ on this tail. So,
$$\int_{\delta\text{-tail}} |M_n| \ge \frac{2^n - 1}{2^n} \approx 1,$$
which will not be $< \epsilon$. So, this sequence is not uniformly integrable.

This $\delta$-tail condition is not exactly the same as uniform integrability; this will be explained at the end.

5.5.4. Martingale convergence theorem. I just stated this theorem without much explanation. It has two important integrability conditions.

Theorem 5.20 (Martingale convergence theorem). Suppose that $M_n$ is a martingale with respect to the filtration $\mathcal{F}_n$. Then
(1) $M_n$ converges to a random variable $M_\infty$ if $E(|M_n|) \le C$ for some constant $C$.
(2) $E(M_n) \to E(M_\infty)$ if the $M_n$ are uniformly integrable.

This ends what I
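To connect Theorem 5.20 back to the Monte Carlo example, here is a short exact computation (no sampling). The stopped process $M_{\min(n,T)}$ used below is my own device for illustration; it is not defined in the notes. It is a martingale with $E|M_{\min(n,T)}| \le 2$, so by part (1) it converges almost surely (to $M_T = 1$); but $E(M_{\min(n,T)}) = 0$ for every $n$, so part (2) must fail, and the stopped sequence is not uniformly integrable.

```python
from fractions import Fraction

# Exact law of the stopped martingale M(n) := M_{min(n, T)} for the
# doubling game: the first win has happened by round n with probability
# 1 - 2**-n, and then M(n) = M_T = 1; otherwise all n rounds were lost
# (probability 2**-n) and M(n) = -(2**n - 1).
for n in (1, 2, 5, 10, 20, 40):
    p_all_lost = Fraction(1, 2 ** n)
    mean = (1 - p_all_lost) * 1 + p_all_lost * (-(2 ** n - 1))
    mean_abs = (1 - p_all_lost) * 1 + p_all_lost * (2 ** n - 1)
    print(n, mean, float(mean_abs))

# Output: mean is exactly 0 for every n, while mean_abs increases to 2.
# So E|M(n)| <= 2 and Theorem 5.20(1) gives M(n) -> M_T = 1 a.s.,
# but E(M(n)) = 0 does not converge to E(M_T) = 1: Theorem 5.20(2)
# does not apply because the M(n) are not uniformly integrable.
```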

