CORNELL ECON 620 - LECTURE 9: ASYMPTOTICS II MAXIMUM LIKELIHOOD ESTIMATION

N.M. Kiefer, Cornell University, Econ 620, Lecture 9

JENSEN'S INEQUALITY

Suppose X is a random variable with E(X) = µ, and f is a convex function. Then

  E(f(X)) ≥ f(E(X)),

with strict inequality unless f is linear or X is degenerate. This inequality will be used to establish the consistency of the ML estimator.

SETUP AND ASSUMPTIONS

Let p(x|θ) be the probability density function of X given the parameter θ. Consider a random sample of n observations and let

  l(θ | x_1, x_2, ..., x_n) = Σ_{i=1}^n ln p(x_i|θ)

be the log likelihood function. Assume θ0 is the true value and that d ln p/dθ exists in an interval including θ0. Furthermore, make the following three assumptions.

Assumption 1: The derivatives d ln p/dθ, d² ln p/dθ², and d³ ln p/dθ³ exist in an interval including θ0.

Assumption 2:

  E(p′/p) = 0;  E(p′′/p) = 0;  E(p′/p)² > 0,

where p′ = dp/dθ and p′′ = d²p/dθ². These usually hold in the problems we will see.

Assumption 3:

  |d³ ln p/dθ³| < M(x), where E[M(x)] < K.

This is a purely technical assumption. It will basically control the expected error in Taylor expansions.

CONSISTENCY

We can get the consistency of the ML estimator immediately. We will then use Assumptions 1-3 to get the asymptotic normality of a consistent estimator in general and the ML estimator in particular.

Suppose θ̂ is an estimator for θ. We would like the probability that θ̂ is close to the true value θ0 to increase as the sample size increases.

Definition: An estimator θ̂ is said to be consistent for θ0 if plim θ̂ = θ0.

Proposition: If ln p is differentiable, then the ML equation (first order condition)

  dl/dθ = 0

has a root, with probability approaching 1, which is consistent for θ; i.e., the ML estimator for θ is consistent.

Proof: Using Jensen's inequality for concave functions,

  E ln[p(θ0 + δ)/p(θ0)] < 0  and  E ln[p(θ0 − δ)/p(θ0)] < 0,

where δ > 0 is a small number.
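The consistency argument can be illustrated numerically. A minimal sketch, using an assumed example not from the lecture (the exponential density p(x|θ) = θ·exp(−θx) with true rate θ0 = 2): by the SLLN, the average log-likelihood gap (1/n)[l(θ0 ± δ) − l(θ0)] settles near its expectation, which Jensen's inequality makes negative, so θ0 is a local maximizer of l in the limit.

```python
# Simulation sketch of the consistency argument (assumed example:
# exponential density p(x|t) = t * exp(-t x), true rate theta0 = 2.0).
import numpy as np

rng = np.random.default_rng(0)
theta0, delta, n = 2.0, 0.5, 100_000
x = rng.exponential(scale=1.0 / theta0, size=n)

def avg_loglik(t):
    """Average log likelihood (1/n) * sum_i ln p(x_i | t)."""
    return np.mean(np.log(t) - t * x)

# By the SLLN these gaps approach E ln[p(theta0 +/- delta)/p(theta0)],
# which Jensen's inequality makes strictly negative.
gap_plus = avg_loglik(theta0 + delta) - avg_loglik(theta0)
gap_minus = avg_loglik(theta0 - delta) - avg_loglik(theta0)
print(gap_plus, gap_minus)  # both negative for large n
```

Running the sketch with other perturbations δ tells the same story: the average log likelihood is largest at the true parameter, which is exactly what the first order condition exploits.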
The inequality is strict unless p does not depend on θ. To see this, note that

  E ln[p(θ0 + δ)/p(θ0)] < ln E[p(θ0 + δ)/p(θ0)] = ln ∫ p(x|θ0 + δ) dx = ln 1 = 0.

Then, noting the definition of l(θ) and using the SLLN, l has a local maximum at θ0 in the limit:

  lim (1/n)[l(θ0 ± δ) − l(θ0)] < 0,

implying that the first order condition is satisfied at θ0 in the limit. Note that we have not shown that the MLE is a global maximum -- this requires more conditions.

ASYMPTOTIC NORMALITY OF CONSISTENT ESTIMATORS

Proposition: Let

  i(θ0) = −E[d² ln p/dθ²]|θ0 = E[(d ln p/dθ)²]|θ0.

Let θ̂ be the consistent ML estimator for θ. Then

  √n [ i(θ0)(θ̂ − θ0) − (1/n) dl(θ0)/dθ ] → 0  in probability.

Proof: From the first order condition, we get the following expansion:

  0 = dl(θ0)/dθ + (θ̂ − θ0) d²l(θ0)/dθ² + (1/2)(θ̂ − θ0)² d³l(θ*)/dθ³,

for some θ* between θ̂ and θ0, so that

  √n (θ̂ − θ0) = [(1/√n) dl(θ0)/dθ] / [−(1/n) d²l(θ0)/dθ² − (1/2n)(θ̂ − θ0) d³l(θ*)/dθ³].

Taking the probability limit, the first expression in the denominator converges to i(θ0) and the second converges to 0 (why?), which gives the required result.

Proposition (Asymptotic Normality):

  √n (θ̂ − θ0) ~ N(0, i(θ0)^{-1}).

Proof: Note that √n (θ̂ − θ0) has the same asymptotic distribution as

  i(θ0)^{-1} (1/√n) dl(θ0)/dθ = i(θ0)^{-1} (1/√n) Σ d ln p/dθ.

We know that E[d ln p/dθ]|θ0 = 0, since

  ∫ p(x|θ0) dx = 1  ⇒  ∫ p′ dx = 0 = E[d ln p/dθ]|θ0.

Note that differentiating ∫ p (d ln p/dθ) dx = 0 again implies that

  ∫ p (d ln p/dθ)² dx + ∫ p (d² ln p/dθ²) dx = 0.

The first term is just the variance of d ln p/dθ at θ0, and the second is −i(θ0). Thus

  V[d ln p/dθ]|θ0 = i(θ0).

Now we use the Central Limit Theorem for (1/√n) Σ d ln p/dθ, with E[d ln p/dθ]|θ0 = 0 and V[d ln p/dθ]|θ0 = i(θ0).
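The two identities that drive the proof -- the score has mean zero at θ0, and its variance equals the information i(θ0) = −E[d² ln p/dθ²] -- can be checked numerically. A sketch, again with the assumed exponential example p(x|θ) = θ·exp(−θx), θ0 = 2 (not from the lecture):

```python
# Numeric check of E[score] = 0 and Var[score] = i(theta0)
# (assumed example: exponential density, theta0 = 2.0).
import numpy as np

rng = np.random.default_rng(1)
theta0, n = 2.0, 1_000_000
x = rng.exponential(scale=1.0 / theta0, size=n)

score = 1.0 / theta0 - x        # d ln p/d theta for ln p = ln t - t x
info = 1.0 / theta0**2          # i(theta0) = -E[d^2 ln p/d theta^2] = 1/t^2

print(score.mean())             # approximately 0
print(score.var(), info)        # both approximately 0.25
```

For this density the second derivative of ln p is the constant −1/θ², so the information identity can be read off directly; the simulation confirms that the sample variance of the score matches it.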
Applying the CLT, we obtain

  (1/√n) Σ d ln p/dθ ~ N(0, i(θ0)).

Thus (using z ~ N(0, Σ) ⇒ Az ~ N(0, AΣA′)),

  √n (θ̂ − θ0) ~ N(0, i(θ0)^{-1}).

Basic result: Approximate the distribution of (θ̂ − θ0) by N(0, (1/n) i(θ0)^{-1}). Of course, i(θ0)^{-1} is consistently estimated by i(θ̂)^{-1} under our assumptions. (Why?)

APPLICATIONS

1. This will give the exact distribution in estimating a normal mean. Check this.

2. Consider a regression model with Ey = Xβ, Vy = σ²I, and y normal. Check that the asymptotic distribution of β̂ is equal to its exact distribution.

MISCELLANEOUS USEFUL RESULTS

1. Consistency of continuous functions of ML estimators: Suppose θ̂ is the ML estimator. Recall that

  plim θ̂ = θ0  ⇒  plim g(θ̂) = g(θ0)

for continuous g. (The choice of parametrization is irrelevant in this regard.)

2. Asymptotic variances of differentiable (+ a little more) functions of asymptotically normal estimators -- the δ-method: Suppose θ̂ ~ N(θ0, σ²) asymptotically. A Taylor expansion of g(θ̂) around θ0 gives

  g(θ̂) = g(θ0) + g′(θ0)(θ̂ − θ0) + more.

Remember that consistency implies plim g(θ̂) = g(θ0). Thus

  V(g(θ̂)) = E[g(θ̂) − g(θ0)]² = (g′(θ0))² V(θ̂),

using the Taylor expansion above and ignoring "more". Hence g(θ̂) ~ N(g(θ0), V(g(θ̂))) asymptotically. This method of estimating standard errors is very useful; it is called the δ-method. In the K-dimensional case with θ̂ ~ N(θ0, Σ), we have

  g(θ̂) ~ N( g(θ0), (∂g/∂θ)′ Σ (∂g/∂θ) ).

Note: Do not use the ambiguous term "asymptotically unbiased" estimators.

Why do we use ML estimators? Under our assumptions, which provide lots of smoothness, ML estimators are asymptotically efficient, attaining (asymptotically) the Cramer-Rao lower bound on variance.
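The δ-method can be checked by simulation. A sketch under assumed choices not from the lecture: g(θ) = ln θ applied to the exponential-rate MLE θ̂ = 1/x̄, with θ0 = 2. The δ-method predicts V(ln θ̂) ≈ (g′(θ0))² V(θ̂) = (1/θ0²)(θ0²/n) = 1/n.

```python
# Simulation check of the delta method (assumed example: g(theta) = ln theta,
# exponential-rate MLE theta_hat = 1/x_bar, true theta0 = 2.0).
import numpy as np

rng = np.random.default_rng(2)
theta0, n, reps = 2.0, 400, 20_000

x = rng.exponential(scale=1.0 / theta0, size=(reps, n))
theta_hat = 1.0 / x.mean(axis=1)   # MLE of the exponential rate, per replication

# Delta-method prediction: V(theta_hat) ~= theta0**2 / n (inverse information
# over n), and g'(theta0) = 1/theta0, so V(ln theta_hat) ~= 1/n.
predicted_var = (1.0 / theta0) ** 2 * (theta0**2 / n)
simulated_var = np.log(theta_hat).var()

print(predicted_var, simulated_var)  # both close to 1/n = 0.0025
```

Note the convenient cancellation in this example: the asymptotic variance of ln θ̂ is free of θ0, which is one reason log parametrizations are popular for rate and variance parameters.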
(What is the relation to the Gauss-Markov property?)

Proposition (Cramer-Rao bound for unbiased estimators): Let p be the likelihood function. (Here p is the joint density of the data, regarded as a function of the parameter.) Suppose θ* is an

