Princeton ECO 504 - RANDOM LAGRANGE MULTIPLIERS AND TRANSVERSALITY

ECO 504 Spring 2002                                                Chris Sims

RANDOM LAGRANGE MULTIPLIERS AND TRANSVERSALITY

1. INTRODUCTION

Lagrange multiplier methods are standard fare in elementary calculus courses, and they play a central role in economic applications of calculus because they often turn out to have interpretations as prices or shadow prices. You have seen them generalized to cover dynamic, non-stochastic models as Hamiltonian methods, or as byproducts of using Pontryagin's maximum principle. In static models Lagrangian methods reduce a constrained maximization problem to an equation-solving problem. In dynamic models they result in an ordinary differential equation problem.

In the stochastic models we are about to consider they result in, for discrete time, an integral equation problem or, in continuous time, a partial differential equation problem. Integral equations and partial differential equations are harder to solve than ordinary equations or differential equations: they are both less likely to have an analytical solution and more difficult to handle numerically. The application of Lagrangian methods to stochastic dynamic models therefore appears to be of less help in solving the optimization problem than is their application to non-stochastic problems. Consequently many references on dynamic stochastic optimization give little attention to Lagrange multipliers, instead emphasizing more direct methods for obtaining solutions.

The economic literature has to some extent been guided by this pattern of emphasis. This is unfortunate, because Lagrangian methods are as helpful in the economic interpretation of stochastic models as of non-stochastic models. Also, in general equilibrium models, use of Lagrangian methods sometimes turns out to simplify the computational problem, in comparison to approaches that try to solve by more direct methods all the separate optimizations embedded in the general equilibrium.

2. STATEMENT OF THE PROBLEM AND THE EULER EQUATION FIRST ORDER CONDITIONS

Since in this course we are more interested in using these results than in proving them, we present them backwards. That is, we begin by writing down the result we are aiming at, then prove that it is part of a set of sufficient conditions for an optimum. The first-order conditions we display are in fact also necessary conditions for an optimum under regularity conditions that often apply in economic models, but we do not prove that in this set of notes. A more complete presentation, which however gives less attention to infinite-horizon problems, is in Kushner (1965b) and Kushner (1965a).

© 2002 by Christopher A. Sims. This document may be reproduced for educational and research purposes, so long as the copies contain this notice and are retained for personal use or distributed free.

Note that in this course you will be responsible for knowing how to use the conditions displayed in these notes to analyze and solve economic models, not for reproducing proofs of necessity or sufficiency.

We consider a problem of the form

\[
\max_{C_0^\infty} E\left[\sum_{t=0}^{\infty} \beta^t U_t\left(C_{-\infty}^t, Z_{-\infty}^t\right)\right] \tag{1}
\]

subject to

\[
g_t\left(C_{-\infty}^t, Z_{-\infty}^t\right) \le 0, \quad t = 0, \dots, \infty, \tag{2}
\]

where we are using the notation C_m^n = {C_s, s = m, ..., n}.

We assume that the vector Z is an exogenous stochastic process, that is, that it cannot be influenced by the vector of variables that we can choose, C. For a dynamic, stochastic setting, the information structure is an essential aspect of any problem statement. Information is revealed over time, and decisions made at a time t can depend only on the information that has been revealed by time t. Here, we assume that what is known at t is Z_{-\infty}^t, i.e. current and past values of the exogenous variables in the model.[1] Stochastic processes C with this property are said to be adapted to the information structure.

We can generate first order conditions for this problem by first writing down a Lagrangian expression,

\[
E\left[\sum_{t=0}^{\infty} \beta^t U_t\left(C_{-\infty}^t, Z_{-\infty}^t\right) - \sum_{t=0}^{\infty} \beta^t \lambda_t\, g_t\left(C_{-\infty}^t, Z_{-\infty}^t\right)\right], \tag{3}
\]

and then differentiating it to form the FOC's:

\[
\frac{\partial H}{\partial C(t)} = \beta^t E_t\left[\sum_{s=0}^{\infty} \beta^s \frac{\partial U_{t+s}}{\partial C(t)} - \sum_{s=0}^{\infty} \beta^s \frac{\partial g_{t+s}}{\partial C(t)}\, \lambda_{t+s}\right] = 0, \quad t = 0, \dots, \infty. \tag{4}
\]

Notice that:

- In contrast to the deterministic case, the Lagrangian in (3) and the FOC's in (4) involve expectation operators.
- The expectation operator in the FOC is E_t, the conditional expectation given the information set available at t, the date of the choice variable vector C with respect to which the FOC is taken.
- Because U and g each depend only on C's dated t and earlier, the infinite sums in (4) involve only U's and g's dated t and later.
- The β^t term at the left in (4) is superfluous, since the condition equates the bracketed expression to zero, and is usually just omitted.

[1] It may seem that it would be natural to include also past C's in the information set. But it is our assumption that this would be redundant. Of course a decision maker could make C_t depend on some "extraneous" random element like a coin flip. Our assumption is simply that if this can occur, the coin flip is part of Z_{-\infty}^t.

3. REVIEW OF FINITE-DIMENSIONAL, NON-STOCHASTIC KUHN-TUCKER CONDITIONS

In finite-dimensional problems, first order conditions are necessary and sufficient conditions for an optimum in a problem with concave objective functions and convex constraint sets.
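As a concrete numerical illustration of this claim, consider a small concave problem checked in Python. The example (maximize log x1 + log x2 subject to x1 + x2 ≤ 1) is constructed for these notes' discussion, not taken from them; it verifies that at the optimum the gradient of the objective equals a non-negative multiplier times the gradient of the binding constraint, and that feasible perturbations lower the objective.

```python
# Illustrative Kuhn-Tucker check for a concave finite-dimensional problem
# (hypothetical example, not from the notes):
#   max V(x) = log(x1) + log(x2)   s.t.   g(x) = x1 + x2 - 1 <= 0.
# The objective is concave and the constraint set is convex, so the
# first order conditions dV/dxi = lambda * dg/dxi with lambda >= 0,
# plus feasibility, characterize the optimum.
import math

# Candidate optimum: symmetry suggests x1 = x2 = 1/2 on the boundary.
x_bar = (0.5, 0.5)

# Gradients at x_bar.
grad_V = tuple(1.0 / xi for xi in x_bar)   # dV/dxi = 1/xi -> (2.0, 2.0)
grad_g = (1.0, 1.0)                        # dg/dxi = 1

# Multiplier implied by the first coordinate of the FOC.
lam = grad_V[0] / grad_g[0]                # lambda = 2.0

# Verify the full FOC, non-negativity, and that the constraint binds.
assert all(abs(gv - lam * gg) < 1e-12 for gv, gg in zip(grad_V, grad_g))
assert lam >= 0
assert abs(sum(x_bar) - 1.0) < 1e-12       # g(x_bar) = 0

# Any feasible perturbation along the constraint lowers V,
# consistent with x_bar being the maximum.
def V(x1, x2):
    return math.log(x1) + math.log(x2)

for eps in (0.1, 0.2, 0.3):
    assert V(0.5 + eps, 0.5 - eps) < V(*x_bar)
```

The multiplier lambda = 2 here is exactly the shadow price interpretation from the introduction: it is the marginal value, in utility units, of relaxing the resource constraint by one unit.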
The conditions in (4) are not as powerful, because this is an infinite-horizon problem. First order conditions here, as in simpler problems, are applications of the:

Separating Hyperplane Theorem: If x̄ maximizes the continuous, concave function V(·) over a convex constraint set Γ in some linear space, and if there is an (infeasible) x* with V(x*) > V(x̄), then there is a continuous linear function f(·) such that f(x) > f(x̄) implies that x lies outside Γ and f(x) < f(x̄) implies V(x) < V(x̄).

In a finite-dimensional problem with x being n × 1, we can always write any such f as

\[
f(x) = \sum_{i=1}^{n} f_i \cdot x_i \tag{5}
\]

where the f_i are all real numbers.

If the problem has differentiable V and differentiable constraints of the form g_i(x) ≤ 0, then it will also be true that we can always pick

\[
f_i = \frac{\partial V}{\partial x_i}(\bar{x}) \tag{6}
\]

and nearly always write

\[
f(x) = \sum_j \lambda_j \frac{\partial g_j(\bar{x})}{\partial x} \cdot x \tag{7}
\]

with λ_i ≥ 0, all i. The "nearly" is necessary because of what is known as the "constraint qualification". It is possible that the first-order properties of the constraints near the optimum do not give a good local characterization of the constraint set Γ. However, if we can find an x vector and a set of non-negative λ_i's that satisfy the constraints and (6) and (7), we
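Stepping back to the stochastic first-order conditions (4) of Section 2, a minimal numerical sketch may help fix ideas. The example below (log utility, a per-period resource constraint C_t ≤ Z_t) is a hypothetical illustration constructed here, not a model from the notes: because U_t and g_t depend only on date-t variables, the infinite sums in (4) collapse to the s = 0 term, and the multiplier is the marginal utility of the endowment, its shadow price.

```python
# Minimal sketch of the stochastic FOC (4) in the simplest case
# (illustrative endowment example, assumed for this sketch):
#   U_t(C) = log(C_t),  g_t(C, Z) = C_t - Z_t <= 0.
# U and g involve no lagged C's, so (4) reduces to the s = 0 term:
#   E_t[ 1/C_t - lambda_t ] = 0.
# With the constraint binding (C_t = Z_t), lambda_t = 1/Z_t: the
# multiplier is the marginal utility of one more unit of endowment.
import random

random.seed(0)
T = 1000
Z = [random.uniform(0.5, 2.0) for _ in range(T)]  # exogenous endowment path

C = Z[:]                       # optimal adapted plan: consume the endowment
lam = [1.0 / c for c in C]     # lambda_t = U'(C_t) = 1/C_t

# Check the date-t FOC residual. Everything dated t is known at t,
# so the conditional expectation E_t is redundant in this simple case.
for t in range(T):
    residual = 1.0 / C[t] - lam[t]
    assert abs(residual) < 1e-12
    assert lam[t] > 0               # multiplier of a binding constraint
    assert C[t] - Z[t] <= 1e-12     # feasibility: g_t <= 0
```

Note that the multiplier process lambda_t inherits the randomness of Z: it is itself an adapted stochastic process, which is precisely the sense in which the Lagrange multipliers of these notes are "random".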

