
Department of Economics, University of California, Berkeley
Economics 101A: Microeconomic Theory, Spring 2007
January 23, 2007

Section Notes for Week 1

1 Multi-Variable Calculus

1.1 Partial Differentiation

The partial derivative of a multi-variable function $f(x_1, x_2)$ is the incremental change in the function caused by an incremental change in one of the variables while all other variables are held constant. The definition of the partial derivative of $f$ with respect to $x_1$ is

\[
\frac{\partial f(x_1, x_2)}{\partial x_1} = \lim_{h \to 0} \frac{f(x_1 + h, x_2) - f(x_1, x_2)}{h}
\]

For convenience we sometimes write partial derivatives using the following alternative notations:

\[
\frac{\partial f(x_1, x_2)}{\partial x_1} = f_1(x_1, x_2) = f_{x_1}(x_1, x_2)
\]

1.2 Total Differentiation

For small (infinitesimal) changes in the variables, a first order Taylor approximation of $f(x_1, x_2)$ is exact, so the total change in $f$ is just the sum of the partial derivative with respect to each variable (the slope in each direction) times the incremental change in that variable:

\[
df(x_1, x_2) = \frac{\partial f(x_1, x_2)}{\partial x_1}\, dx_1 + \frac{\partial f(x_1, x_2)}{\partial x_2}\, dx_2 = f_1(x_1, x_2)\, dx_1 + f_2(x_1, x_2)\, dx_2
\]

Now let's assume that $x_2$ is a known function of $x_1$, $x_2 = h(x_1)$. We can find the total derivative of $f(x_1, x_2)$ with respect to $x_1$ using the formula above:

\[
\frac{df(x_1, x_2)}{dx_1} = \frac{df(x_1, h(x_1))}{dx_1} = f_1(x_1, h(x_1))\frac{dx_1}{dx_1} + f_2(x_1, h(x_1))\frac{dh(x_1)}{dx_1} = f_1(x_1, h(x_1)) + f_2(x_1, h(x_1))\frac{dh(x_1)}{dx_1}
\]

1.3 Implicit Differentiation

Sometimes we may be in a situation where $x_2$ depends upon $x_1$ but there is no "explicit" formula that can be solved to express $x_2$ in terms of $x_1$, yet it is known that $x_1$ and $x_2$ satisfy some equation such as

\[
f(x_1, x_2) = k
\]

where $k$ is a constant. Even though we can't explicitly solve for a function $h$ such that $x_2 = h(x_1)$, it may still be possible to find the derivative $\frac{dh(x_1)}{dx_1}$. To see this, start by taking the total derivative of the equation above with respect to $x_1$. The derivative of the left hand side (LHS) is

\[
\frac{df(x_1, x_2)}{dx_1} = f_1(x_1, h(x_1)) + f_2(x_1, h(x_1))\frac{dh(x_1)}{dx_1}
\]

The derivative of the right hand side (RHS) is

\[
\frac{dk}{dx_1} = 0
\]

Thus we can rearrange and solve for $\frac{dh(x_1)}{dx_1}$:

\[
f_1(x_1, h(x_1)) + f_2(x_1, h(x_1))\frac{dh(x_1)}{dx_1} = 0
\]
\[
\frac{dh(x_1)}{dx_1} = -\frac{f_1(x_1, h(x_1))}{f_2(x_1, h(x_1))}
\]

which is defined as long as $f_2(x_1, h(x_1)) \neq 0$.

2 Optimization

2.1 Unconstrained Optimization

Assuming that $f$ has a maximum and is differentiable everywhere, a set of necessary conditions for a point $(x_1^*, x_2^*)$ in the interior of the function's domain to be a maximum is that the slopes of the function in each direction (the partial derivatives) are zero at that point. Formally, the first order necessary conditions (FOCs) for $(x_1^*, x_2^*)$ to solve

\[
\max_{x_1, x_2} f(x_1, x_2)
\]

are

\[
\frac{\partial f(x_1^*, x_2^*)}{\partial x_1} = f_1(x_1^*, x_2^*) = 0
\]
\[
\frac{\partial f(x_1^*, x_2^*)}{\partial x_2} = f_2(x_1^*, x_2^*) = 0
\]

Sufficient conditions to guarantee that $(x_1^*, x_2^*)$ is a local maximum (as opposed to a local minimum or a saddle point) involve the second partial derivatives of the function $f$ at the point $(x_1^*, x_2^*)$. Specifically, if the FOCs hold and if

\[
f_{11}(x_1^*, x_2^*) < 0, \qquad f_{22}(x_1^*, x_2^*) < 0, \qquad f_{11}(x_1^*, x_2^*) f_{22}(x_1^*, x_2^*) - f_{12}^2(x_1^*, x_2^*) > 0
\]

then $(x_1^*, x_2^*)$ is a local maximum.

Sufficient conditions to guarantee that $(x_1^*, x_2^*)$ is a global maximum of $f$ (as opposed to merely a local maximum) involve the global shape of the function, that is, the second partial derivatives of $f$ at all points in $f$'s domain. Specifically, if

\[
f_{11}(x_1, x_2) < 0, \qquad f_{22}(x_1, x_2) < 0, \qquad f_{11}(x_1, x_2) f_{22}(x_1, x_2) - f_{12}^2(x_1, x_2) > 0
\]

for all possible points $(x_1, x_2)$ in $f$'s domain, then $(x_1^*, x_2^*)$ will be a global maximum.
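To see these conditions in use, here is a minimal sketch, not part of the original notes, that checks the FOCs and the second order conditions symbolically with Python's sympy package. The concave quadratic objective below is a made-up example chosen so that the conditions hold everywhere.

    # Hypothetical example (not from the notes): verify the FOCs and
    # second order conditions for an unconstrained maximum using sympy.
    import sympy as sp

    x1, x2 = sp.symbols('x1 x2')
    f = 10 + 4*x1 + 2*x2 - x1**2 - x2**2 + x1*x2   # made-up concave quadratic

    # First order conditions: f1 = f2 = 0
    f1, f2 = sp.diff(f, x1), sp.diff(f, x2)
    crit = sp.solve([f1, f2], [x1, x2], dict=True)[0]

    # Second order conditions: f11 < 0, f22 < 0, f11*f22 - f12^2 > 0
    f11 = sp.diff(f, x1, 2).subs(crit)
    f22 = sp.diff(f, x2, 2).subs(crit)
    f12 = sp.diff(f, x1, x2).subs(crit)

    print(crit)                                     # {x1: 10/3, x2: 8/3}
    print(f11 < 0, f22 < 0, f11*f22 - f12**2 > 0)   # True True True

Because the second partials of this quadratic are constants, the same inequalities hold at every point in the domain, so the critical point is a global maximum as well as a local one.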
2.2 Constrained Optimization

Suppose that we would like to find the maximum of $f(x_1, x_2)$ subject to the constraint $g(x_1, x_2) = k$.

2.2.1 Substitution Solution

The first step in the substitution solution is to use the constraint equation to solve for $x_2$ as a function of $x_1$, or $x_2 = h(x_1)$. We can then rewrite the maximization problem as maximizing over only one choice variable, $x_1$:

\[
\max_{x_1} f(x_1, h(x_1))
\]

The first order necessary condition (FOC) comes from setting the total derivative equal to zero,

\[
f_1(x_1^*, h(x_1^*)) + f_2(x_1^*, h(x_1^*))\frac{dh(x_1^*)}{dx_1} = 0
\]

We could then use this equation to solve for $x_1^*$ and then $x_2^* = h(x_1^*)$.

Note, even if we could not solve for $h$ explicitly, the FOC implies that

\[
\frac{dh(x_1^*)}{dx_1} = -\frac{f_1(x_1^*, h(x_1^*))}{f_2(x_1^*, h(x_1^*))}
\]

and the constraint ($g(x_1, h(x_1)) = k$) implies that

\[
\frac{dh(x_1^*)}{dx_1} = -\frac{g_1(x_1^*, h(x_1^*))}{g_2(x_1^*, h(x_1^*))}
\]

Putting the two together gives the familiar tangency condition

\[
\frac{f_1(x_1^*, h(x_1^*))}{f_2(x_1^*, h(x_1^*))} = \frac{g_1(x_1^*, h(x_1^*))}{g_2(x_1^*, h(x_1^*))}
\]

meaning that the constraint curve, $g(x_1, x_2) = k$, and the level curves of the objective function, $f$, should be tangent at the optimum.

2.2.2 Lagrangian Solution

The Lagrangian method reformulates the constrained maximization as an unconstrained maximization problem with additional variables. The first step is to form the Lagrangian

\[
\mathcal{L}(x_1, x_2, \lambda) = f(x_1, x_2) - \lambda[g(x_1, x_2) - k]
\]

The first order necessary conditions (FOCs) are the same as they would be for the unconstrained maximization of $\mathcal{L}(x_1, x_2, \lambda)$, or specifically

\[
\mathcal{L}_1(x_1^*, x_2^*, \lambda^*) = f_1(x_1^*, x_2^*) - \lambda^* g_1(x_1^*, x_2^*) = 0
\]
\[
\mathcal{L}_2(x_1^*, x_2^*, \lambda^*) = f_2(x_1^*, x_2^*) - \lambda^* g_2(x_1^*, x_2^*) = 0
\]
\[
\mathcal{L}_3(x_1^*, x_2^*, \lambda^*) = -[g(x_1^*, x_2^*) - k] = 0
\]

Note, the ratio of the first two equations implies the tangency condition

\[
\frac{f_1(x_1^*, x_2^*)}{f_2(x_1^*, x_2^*)} = \frac{g_1(x_1^*, x_2^*)}{g_2(x_1^*, x_2^*)}
\]

Also note that the first two equations imply that

\[
\lambda^* = \frac{f_1(x_1^*, x_2^*)}{g_1(x_1^*, x_2^*)} = \frac{f_2(x_1^*, x_2^*)}{g_2(x_1^*, x_2^*)}
\]

$\lambda^*$ tells us the "shadow price" of the constraint $g(x_1, x_2) = k$ at the optimum, meaning the price we would pay to relax the constraint by raising $k$.

We can see this by thinking of the constraint level $k$ as a parameter upon which the optimal levels of $x_1$, $x_2$, and $\lambda$ depend. Thus we can write the maximized objective function as a function of these optimal values,

\[
f(x_1^*(k), x_2^*(k))
\]

Now we would like to see how this maximum changes as we relax the constraint by increasing $k$:

\[
\frac{df(x_1^*(k), x_2^*(k))}{dk} = f_1(x_1^*(k), x_2^*(k))\frac{dx_1^*(k)}{dk} + f_2(x_1^*(k), x_2^*(k))\frac{dx_2^*(k)}{dk}
\]
\[
= \lambda^*(k) g_1(x_1^*(k), x_2^*(k))\frac{dx_1^*(k)}{dk} + \lambda^*(k) g_2(x_1^*(k), x_2^*(k))\frac{dx_2^*(k)}{dk}
\]
\[
= \lambda^*(k)\left[ g_1(x_1^*(k), x_2^*(k))\frac{dx_1^*(k)}{dk} + g_2(x_1^*(k), x_2^*(k))\frac{dx_2^*(k)}{dk} \right]
\]

where the second equality is obtained by plugging in the first two FOCs. Now take the total derivative of both sides of the constraint equation with respect to $k$ to obtain

\[
\frac{dg(x_1^*(k), x_2^*(k))}{dk} = \frac{d(k)}{dk}
\]
\[
g_1(x_1^*(k), x_2^*(k))\frac{dx_1^*(k)}{dk} + g_2(x_1^*(k), x_2^*(k))\frac{dx_2^*(k)}{dk} = 1
\]

Plugging this in, we get

\[
\frac{df(x_1^*(k), x_2^*(k))}{dk} = \lambda^*(k)
\]

so the multiplier $\lambda^*(k)$ is exactly the rate at which the maximized value of $f$ rises when the constraint is relaxed by raising $k$, which is its shadow price.
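As a check on the Lagrangian conditions and the shadow price result, here is a minimal sketch, again not part of the original notes, that solves a made-up constrained problem with sympy: maximize $x_1 x_2$ subject to $x_1 + 2x_2 = k$, and confirm that the multiplier equals $df(x_1^*(k), x_2^*(k))/dk$.

    # Hypothetical example (not from the notes): Lagrangian FOCs and the
    # shadow price interpretation of the multiplier, using sympy.
    import sympy as sp

    x1, x2, lam, k = sp.symbols('x1 x2 lambda k', positive=True)
    f = x1 * x2                  # made-up objective
    g = x1 + 2*x2                # constraint function, g(x1, x2) = k

    L = f - lam * (g - k)        # the Lagrangian

    # First order conditions: L1 = L2 = L3 = 0
    foc = [sp.diff(L, v) for v in (x1, x2, lam)]
    sol = sp.solve(foc, [x1, x2, lam], dict=True)[0]

    # Maximized objective as a function of the constraint level k
    f_star = f.subs(sol)

    print(sol)                                          # {x1: k/2, x2: k/4, lambda: k/4}
    print(sp.simplify(sp.diff(f_star, k) - sol[lam]))   # 0, so df*/dk = lambda*

In this example $x_1^* = k/2$, $x_2^* = k/4$, and $\lambda^* = k/4$; the maximized objective is $k^2/8$, whose derivative with respect to $k$ is indeed $k/4$, matching the multiplier.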

