Omitted Variable Bias with Many Regressors



Lecture 14

Lecture Note
Cornell University
Econ 3120 - Applied Econometrics


Outline of Current Lecture
I. Unbiasedness of Multivariate OLS Estimators

Current Lecture
I. Omitted Variable Bias with Many Regressors

6.3 Omitted Variable Bias with Many Regressors

The discussion above applies only to a model with two independent variables. When there are more than two independent variables and we leave one out, things get much more complicated: all of the estimated β's can be biased, and the bias of β̃j in the short regression generally depends both on the relationship between the omitted variable and all of the x's and on the relationships among the x's themselves. Generally speaking, though, if we assume that xj is uncorrelated with the other included x's, then we can say something about the bias of the estimated coefficient.

Suppose the true model is

    log(wage) = β0 + β1·educ + β2·exper + β3·abil + u

(where exper is years of experience) and we leave out abil. If we assume that education and experience are uncorrelated, then the expectation of β̃1 in the short regression

    log(wage) = β0 + β1·educ + β2·exper + u

will be

    E(β̃1) = β1 + β3 · Ĉov(educ, abil) / V̂ar(educ),

where Ĉov and V̂ar denote the sample covariance and variance. This is essentially the same as formula (7) above. Therefore, we might expect β̃1 to be biased upward if β3 is positive and education and ability are positively correlated.

Economists often try to sign the bias using this formula regardless of whether xj (the variable of interest in the short regression) is uncorrelated with the other included x's. This is a reasonable shortcut, but it is important to know that it isn't exactly right, especially if the included x's are highly correlated.²

² The general form for the estimate of βj in the short regression when xk is omitted is β̃j = β̂j + β̂k·δ̃j, where δ̃j is the coefficient on xj in the regression of xk on all of the included regressors. See Wooldridge Section 3A for more detail.
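The bias formula can be checked with a small Monte Carlo simulation. The sketch below (NumPy only) generates data from a hypothetical wage model in which ability is positively correlated with education but experience is independent of both; all coefficient values and distributions are made up for illustration, not taken from the lecture.

```python
import numpy as np

# Monte Carlo check of the omitted-variable-bias formula.
# All parameter values and distributions are hypothetical.
rng = np.random.default_rng(0)
n = 200_000
b0, b1, b2, b3 = 0.5, 0.08, 0.02, 0.10    # "true" coefficients (made up)

educ = rng.normal(12, 2, n)
exper = rng.normal(10, 3, n)               # independent of educ, as the text assumes
abil = 0.5 * educ + rng.normal(0, 1, n)    # ability positively correlated with education
u = rng.normal(0, 0.3, n)

logwage = b0 + b1 * educ + b2 * exper + b3 * abil + u

# Short regression: abil omitted
X = np.column_stack([np.ones(n), educ, exper])
beta_tilde, *_ = np.linalg.lstsq(X, logwage, rcond=None)

# Predicted expectation: b1 + b3 * Cov(educ, abil) / Var(educ)
pred = b1 + b3 * np.cov(educ, abil)[0, 1] / np.var(educ, ddof=1)

print(beta_tilde[1], pred)   # both close; biased upward relative to b1
```

With β3 > 0 and a positive education-ability correlation, the short-regression coefficient on educ comes out above the true β1, exactly as the sign argument in the text predicts.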
7 Variance of OLS Estimators

To obtain the variance of the OLS estimators, we need to make an assumption analogous to SLR.5:

• MLR.5 (Homoskedasticity): The error term in the model described by MLR.1 has constant variance: Var(u | x1, ..., xk) = σ²

Under assumptions MLR.1-MLR.5, the variance of an OLS estimator β̂j is given by

    Var(β̂j) = σ² / [SSTj (1 − Rj²)],

where SSTj = Σi (xij − x̄j)² and Rj² is the R-squared from the regression of xij on all of the other independent variables.

8 Estimating σ²

We obtain an unbiased estimator of σ² in a manner similar to the bivariate case:

    σ̂² = (1 / (n − k − 1)) Σi ûi²

The denominator n − k − 1 equals the number of observations minus the total number of parameters estimated in the model. Using this estimate, we can estimate the variances of the OLS estimators as

    V̂ar(β̂j) = σ̂² / [SSTj (1 − Rj²)]

and the standard errors as

    se(β̂j) = √V̂ar(β̂j)

The Gauss-Markov Theorem

Our OLS estimators are actually ...
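The SSTj(1 − Rj²) expression can be verified numerically: it agrees with the familiar matrix form σ̂²(X′X)⁻¹ of the OLS variance. A sketch with simulated data (the regressors, coefficients, and sample size are purely illustrative):

```python
import numpy as np

# Check that sigma2_hat / (SST_j * (1 - R2_j)) equals the j-th diagonal
# entry of sigma2_hat * (X'X)^{-1}. Data below are simulated for illustration.
rng = np.random.default_rng(1)
n, k = 500, 2
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)         # correlated regressors, so R2_1 > 0
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

# Unbiased estimate of sigma^2 with n - k - 1 degrees of freedom
sigma2_hat = resid @ resid / (n - k - 1)

# R2_1: R-squared from regressing x1 on the other regressors (constant and x2)
Z = np.column_stack([np.ones(n), x2])
g, *_ = np.linalg.lstsq(Z, x1, rcond=None)
r1 = x1 - Z @ g
SST1 = np.sum((x1 - x1.mean()) ** 2)
R2_1 = 1 - (r1 @ r1) / SST1

var_b1_formula = sigma2_hat / (SST1 * (1 - R2_1))                # lecture formula
var_b1_matrix = sigma2_hat * np.linalg.inv(X.T @ X)[1, 1]        # matrix formula
se_b1 = np.sqrt(var_b1_formula)                                  # standard error

print(var_b1_formula, var_b1_matrix, se_b1)
```

The two variance expressions match to numerical precision, which also makes the formula's message concrete: the more of x1's variation is explained by the other regressors (larger R2_1), the larger the variance of β̂1.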
