Econ 3120 1st Edition Lecture 18

Outline of Current Lecture
I. Heteroskedasticity

Current Lecture
II. Generalized Least Squares and Feasible Generalized Least Squares

4.1 Generalized Least Squares

As described above, we can deal with heteroskedasticity using OLS with robust standard errors, but this is not the most efficient way to estimate the β's. This section outlines how to perform more efficient estimation.

Suppose (somewhat unrealistically) that we know the form of the heteroskedasticity. We will consider heteroskedasticity of the form

Var(u|x) = σ² h(x)

so that the variance can be expressed as some function of x. As an example, suppose our model is

save_i = β0 + β1 inc_i + u_i   (1)

where inc_i is the income of household i in a given year and save_i is that household's savings in the same year. (What economic parameter does β1 represent?) With this model, one can imagine that Var(u_i | inc_i) is increasing in income: the higher someone's income, the higher the variance in the unexplained component of the model. In particular, suppose

Var(u_i | inc_i) = σ² inc_i

Armed with this information, we can transform the model into one with homoskedastic errors. Divide every term in (1) by √inc_i:

save_i/√inc_i = β0/√inc_i + β1 inc_i/√inc_i + u_i/√inc_i

In this equation, the final term (which is still an error term) has variance

Var(u_i/√inc_i | inc_i) = (1/inc_i) Var(u_i | inc_i) = σ²

The errors are now homoskedastic! The transformed model satisfies assumptions MLR.1-MLR.5, so its OLS estimates are best linear unbiased. This procedure is typically called generalized least squares (GLS).

The generic form of this starts with the model

y = β0 + β1 x1 + β2 x2 + ... + βk xk + u

which satisfies MLR.1-MLR.4 and has Var(u|x) = σ² h(x). Then the transformed model

y/√h = β0/√h + β1 x1/√h + β2 x2/√h + ... + βk xk/√h + u/√h

satisfies MLR.1-MLR.5, and its OLS estimates are therefore best linear unbiased.
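As an illustration, the savings equation above can be estimated by exactly this transformation (equivalently, weighted least squares) on simulated data. This is a minimal sketch using plain NumPy; the sample size, the "true" parameter values, and the income distribution are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
beta0, beta1 = 2.0, 0.3            # illustrative "true" parameters
inc = rng.uniform(10, 100, n)      # simulated household income
u = rng.normal(0, np.sqrt(inc))    # Var(u|inc) = sigma^2 * inc, with sigma^2 = 1
save = beta0 + beta1 * inc + u

# Transformed (weighted) regression: divide every term by sqrt(inc).
w = 1 / np.sqrt(inc)
X_t = np.column_stack([w, inc * w])  # regressors become 1/sqrt(inc) and sqrt(inc)
y_t = save * w
b_gls, *_ = np.linalg.lstsq(X_t, y_t, rcond=None)
print(b_gls)  # estimates should be close to the true (2.0, 0.3)
```

Note that the transformed model has no ordinary intercept: the coefficient on 1/√inc_i plays the role of β0, and the coefficient on √inc_i is β1.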
4.2 Feasible Generalized Least Squares

The vast majority of the time, we don't know the form of the heteroskedasticity. In that case we need to estimate it, but once we have, we can apply a procedure similar to the one above. Suppose we think there is heteroskedasticity of the form Var(u|x) = σ² h(x), which we model as

Var(u|x) = σ² exp(δ0 + δ1 x1 + δ2 x2 + ... + δk xk)   (2)

This is not the only possible model; we could use other functions of the x's as well. Exponentiating is convenient because it imposes that the variance is always positive. Since we need to estimate (2), we write the estimating equation as

u² = σ² exp(δ0 + δ1 x1 + δ2 x2 + ... + δk xk) ν

where ν has mean 1 and is independent of x. Taking logs, we get

log(u²) = α0 + δ1 x1 + δ2 x2 + ... + δk xk + e

Now we're ready to apply feasible generalized least squares (FGLS):

1. Estimate the full model using OLS and collect the residuals û_i.
2. Create log(û_i²).
3. Regress log(û_i²) on all of the x's using OLS and generate the predicted values; call them ĝ_i.
4. Take ĥ_i = exp(ĝ_i) and apply generalized least squares as above, multiplying each observation by 1/√ĥ_i.

5 Heteroskedasticity in the Linear Probability Model

The linear probability model (LPM) has heteroskedasticity "built in". Take the simple model

y = β0 + β1 x + u

where y is a dummy variable. Because Var(u|x) = Var(y|x), we can write

Var(y|x) = P(y = 1|x)[1 − P(y = 1|x)] = (β0 + β1 x)(1 − β0 − β1 x)

Thus we know the form of the heteroskedasticity, but since we don't know the β's we need to estimate it. We can account for heteroskedasticity in the LPM using feasible generalized least squares by taking ĥ = ŷ(1 − ŷ). There is a slight problem: the fitted values ŷ may not always lie between 0 and 1, in which case ĥ may not always be positive.
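The four FGLS steps in Section 4.2 can be sketched on simulated data as follows. This is only an illustration: the data-generating process, sample size, and coefficient values are invented, and plain NumPy least squares stands in for a statistics package.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.uniform(1, 5, n)
beta0, beta1 = 1.0, 2.0                     # illustrative "true" parameters
# Variance of u depends on x: Var(u|x) = exp(0.2 + 0.4*x), so sd = exp(0.5*(...))
u = rng.normal(0, np.exp(0.5 * (0.2 + 0.4 * x)))
y = beta0 + beta1 * x + u

X = np.column_stack([np.ones(n), x])

def ols(A, b):
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef

# Step 1: estimate the full model by OLS, collect residuals u_hat
b_ols = ols(X, y)
u_hat = y - X @ b_ols
# Steps 2-3: regress log(u_hat^2) on the x's, get fitted values g_hat
g_hat = X @ ols(X, np.log(u_hat**2))
# Step 4: h_hat = exp(g_hat); weight each observation by 1/sqrt(h_hat)
wt = 1 / np.sqrt(np.exp(g_hat))
b_fgls = ols(X * wt[:, None], y * wt)
print(b_fgls)  # estimates should be close to the true (1.0, 2.0)
```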
There is no good way to deal with this, which is why it is more typical to just use OLS with robust standard errors.
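To make the robust-standard-error recommendation concrete, here is a sketch of an LPM estimated by OLS with heteroskedasticity-robust (HC0 "sandwich") standard errors computed by hand in NumPy. The simulated design and the true response probabilities are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
x = rng.uniform(0, 1, n)
p = 0.2 + 0.5 * x                       # true P(y=1|x), kept inside (0, 1)
y = (rng.uniform(size=n) < p).astype(float)

X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS coefficients of the LPM
resid = y - X @ b

# HC0 robust variance: (X'X)^-1 [X' diag(resid^2) X] (X'X)^-1
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * (resid**2)[:, None])
V_robust = XtX_inv @ meat @ XtX_inv
se_robust = np.sqrt(np.diag(V_robust))
print(b, se_robust)  # coefficients near (0.2, 0.5)
```

Because resid² varies with x here, the robust variance differs from the usual homoskedastic formula σ̂²(X'X)⁻¹, which is exactly the point of using it for the LPM.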
