Estimation of the Multiple Regression Model


Lecture number: 28
Pages: 1
Type: Lecture Note
School: Cornell University
Course: Econ 3120 - Applied Econometrics
Edition: 1

Lecture 29

Outline of Last Lecture
I. Instrumental Variables

Outline of Current Lecture
II. Estimation of the Multiple Regression Model

Current Lecture

IV Estimation of the Multiple Regression Model

Consider a general model with k regressors:

    y = β0 + β1x1 + ... + βkxk + u    (1)

Suppose you are comfortable assuming that x2, ..., xk are uncorrelated with u, but you think that E(u|x1) ≠ 0. We call x1 an endogenous variable and x2, ..., xk exogenous variables. Suppose you have located an instrument z for x1 (not typically an easy task!). We can then set up a series of moment conditions, just as we did for the multiple regression model in Lecture 5:

    E(u|x2, ..., xk) = 0  ⇒  Cov(xj, u) = 0 for j ≠ 1
    E(u) = 0

We have one additional moment condition from our instrument:

    Cov(z, u) = 0

Putting these all together, we can impose the sample analogs on our estimates, using equation (1):

    (1/n) Σ (yi − β̂0 − β̂1x1i − ... − β̂kxki) = 0
    (1/n) Σ x2i(yi − β̂0 − β̂1x1i − ... − β̂kxki) = 0
    ...
    (1/n) Σ xki(yi − β̂0 − β̂1x1i − ... − β̂kxki) = 0
    (1/n) Σ zi(yi − β̂0 − β̂1x1i − ... − β̂kxki) = 0

As before, we have k+1 equations and k+1 unknowns. Solving for the β̂'s gives us the IV estimates for equation (1).

(Note: I am departing a bit from the notation in Section 15.2 of the book, which denotes endogenous regressors as y and exogenous regressors and instruments as z.)

Two-Stage Least Squares

It turns out that there is a (relatively) simple and intuitive way to implement the IV estimation in the previous section. Two-stage least squares (2SLS) proceeds as follows:

1. Regress x1 on z and all the other exogenous variables x2, ..., xk using OLS. This is often called the "first stage":

    x1 = α0 + α1z + α2x2 + ... + αkxk + ν    (2)

Collect the predicted values x̂1 from this regression.

2. Run regression (1), substituting x̂1 for x1:

    y = β0 + β1x̂1 + ... + βkxk + u    (3)

In this second stage, the OLS estimate of β1 will be equal to the IV estimate as derived above. This process yields some nice intuition for why IV works.
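As a numerical check of the equivalence between 2SLS and the moment-condition (IV) solution, here is a minimal sketch in Python with numpy. The data-generating process, seed, and variable names are invented for illustration (one endogenous regressor x1, one exogenous regressor x2, one instrument z); this is not the course's Stata workflow.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data (illustrative only): x1 is endogenous because it shares
# the confounder c with the error u; z shifts x1 but is unrelated to u.
c = rng.normal(size=n)
z = rng.normal(size=n)
x2 = rng.normal(size=n)
x1 = 0.8 * z + 0.5 * x2 + c + rng.normal(size=n)
u = c + rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 3.0 * x2 + u   # true beta1 = 2, beta2 = 3

def ols(X, y):
    """OLS coefficients via the normal equations."""
    return np.linalg.solve(X.T @ X, X.T @ y)

const = np.ones(n)

# Stage 1: regress x1 on the instrument and the exogenous regressors,
# and collect the fitted values x1_hat.
Z = np.column_stack([const, z, x2])
x1_hat = Z @ ols(Z, x1)

# Stage 2: regress y on x1_hat and the exogenous regressors.
X2sls = np.column_stack([const, x1_hat, x2])
beta_2sls = ols(X2sls, y)

# Direct IV: the sample moment conditions amount to replacing x1 with z
# in the normal equations, i.e. solving (W'X) b = W'y.
X = np.column_stack([const, x1, x2])
W = np.column_stack([const, z, x2])
beta_iv = np.linalg.solve(W.T @ X, W.T @ y)

print(beta_2sls)  # 2SLS estimates
print(beta_iv)    # identical to the moment-condition solution
```

In this just-identified case the two coefficient vectors agree exactly (up to floating-point error), while a plain OLS regression of y on x1 and x2 would be biased upward by the confounder c.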
By using the predicted values from the first stage, you isolate the variation in x1 that is attributable to z and the other exogenous variables. Because equation (2) makes x̂1 a linear combination of variables that are uncorrelated with u, x̂1 itself is uncorrelated with u, and therefore OLS estimation of (3) will be consistent.

Inference

We won't go through how to estimate standard errors or other regression statistics for IV. Just note that the unadjusted standard errors you get from the manual two-step 2SLS procedure will generally be wrong. You instead need to use Stata's built-in ivreg command. Once you have the correct standard errors, you can conduct inference just as you would with OLS estimates.

Example: Card (1995)

Let's now return to the proximity-to-college IV example. Card (1995) ...
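The inference caveat above can be illustrated numerically. The sketch below (simulated single-instrument data in numpy; the data-generating process is invented, and this is not Stata's ivreg) contrasts the "naive" standard errors a manual second-stage OLS regression would report, whose residuals use the fitted x̂1, with standard errors computed from residuals that use the actual x1, which is what a proper IV routine does.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Simulated data (illustrative only): c is an unobserved confounder.
c = rng.normal(size=n)
z = rng.normal(size=n)
x1 = z + c + rng.normal(size=n)
u = c + rng.normal(size=n)
y = 1.0 + 2.0 * x1 + u   # true beta1 = 2

const = np.ones(n)
W = np.column_stack([const, z])    # instruments (including constant)
X = np.column_stack([const, x1])   # regressors

# First-stage fitted values: project X onto the column space of W.
Xhat = W @ np.linalg.solve(W.T @ W, W.T @ X)

# 2SLS coefficients from the second-stage normal equations.
beta = np.linalg.solve(Xhat.T @ Xhat, Xhat.T @ y)
XtXinv = np.linalg.inv(Xhat.T @ Xhat)

# Naive SEs: residuals computed with x1_hat, as a manual second-stage
# OLS regression would report them. Generally wrong.
resid_naive = y - Xhat @ beta
se_naive = np.sqrt(resid_naive @ resid_naive / (n - 2) * np.diag(XtXinv))

# Correct 2SLS SEs: residuals use the actual x1.
resid = y - X @ beta
se = np.sqrt(resid @ resid / (n - 2) * np.diag(XtXinv))

print(se_naive[1], se[1])  # the naive SE differs from the correct one
```

In this particular design the naive residuals also absorb the endogenous part of x1, so the naive standard error is inflated; the general point is simply that the two do not coincide, which is why the manual two-step procedure should not be used for inference.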
