# CORNELL ECON 3120 - Generalized Least Squares and Feasible Generalized Least Squares

## Generalized Least Squares and Feasible Generalized Least Squares


- Lecture number: 18
- Pages: 2
- Type: Lecture Note
- School: Cornell University
- Course: Econ 3120 - Applied Econometrics
- Edition: 1


Econ 3120, 1st Edition, Lecture 18

**Outline of Current Lecture**

I. Heteroskedasticity

**Current Lecture**

II. Generalized Least Squares and Feasible Generalized Least Squares

As described above, we can deal with heteroskedasticity by using OLS with robust standard errors, but this is not the most efficient way to estimate the $\beta$'s. This section outlines how to perform more efficient estimation.

### 4.1 Generalized Least Squares

Suppose, somewhat unrealistically, that we know the form of the heteroskedasticity. We will consider heteroskedasticity of the form

$$\text{Var}(u|x) = \sigma^2 h(x),$$

so that the variance can be expressed as some function of $x$. As an example, suppose our model is

$$save_i = \beta_0 + \beta_1 inc_i + u_i \quad (1)$$

where $inc_i$ is the income of household $i$ in a given year and $save_i$ is savings in that year. (What economic parameter does $\beta_1$ represent?)

With this model, one can imagine that $\text{Var}(u_i|inc_i)$ is increasing in income: the higher someone's income, the higher the variance in the unexplained component of the model. In particular, suppose

$$\text{Var}(u_i|inc_i) = \sigma^2 inc_i.$$

Armed with this information, we can transform the model into one with homoskedastic errors. Suppose we divide every term in (1) by $\sqrt{inc_i}$:

$$\frac{save_i}{\sqrt{inc_i}} = \beta_0 \frac{1}{\sqrt{inc_i}} + \beta_1 \frac{inc_i}{\sqrt{inc_i}} + \frac{u_i}{\sqrt{inc_i}}$$

In this equation the final term, which is still an error term, has variance

$$\text{Var}\left(\frac{u_i}{\sqrt{inc_i}} \,\Big|\, inc_i\right) = \frac{1}{inc_i}\,\text{Var}(u_i|inc_i) = \sigma^2.$$

The errors are now homoskedastic. Our transformed model satisfies assumptions MLR.1-MLR.5, so we now have a model whose estimates will be best linear unbiased. This procedure is typically called generalized least squares (GLS).

The generic form of this starts with the model

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k + u$$

that satisfies MLR.1-MLR.4 and $\text{Var}(u|x) = \sigma^2 h(x)$. Then the transformed model

$$\frac{y}{\sqrt{h}} = \beta_0 \frac{1}{\sqrt{h}} + \beta_1 \frac{x_1}{\sqrt{h}} + \beta_2 \frac{x_2}{\sqrt{h}} + \dots + \beta_k \frac{x_k}{\sqrt{h}} + \frac{u}{\sqrt{h}}$$

will satisfy MLR.1-MLR.5, and the OLS estimates will therefore be best linear unbiased.
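The savings-on-income example above can be worked through numerically. The following is a minimal sketch, not from the notes: the data are simulated, the coefficient values and sample size are made up for illustration, and the heteroskedasticity is generated to match the assumed form $\text{Var}(u_i|inc_i) = \sigma^2 inc_i$.

```python
# Sketch of GLS when the form of the heteroskedasticity is known:
# Var(u|inc) = sigma^2 * inc. All data simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
inc = rng.uniform(10, 100, n)            # hypothetical household income
u = rng.normal(0, np.sqrt(inc))          # error variance proportional to inc
save = 5 + 0.2 * inc + u                 # made-up true beta0 = 5, beta1 = 0.2

# Divide every term of the model (including the intercept) by sqrt(inc)
w = 1 / np.sqrt(inc)
y_t = save * w
X_t = np.column_stack([w, inc * w])      # columns: 1/sqrt(inc), inc/sqrt(inc)

# OLS on the transformed (homoskedastic) model is the GLS estimator
beta_gls, *_ = np.linalg.lstsq(X_t, y_t, rcond=None)
print(beta_gls)                          # estimates of (beta0, beta1)
```

The transformed regression has no separate constant column: the intercept becomes the coefficient on $1/\sqrt{inc_i}$, exactly as in the divided equation above.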
### 4.2 Feasible Generalized Least Squares

The vast majority of the time we don't know the form of the heteroskedasticity. In that case we need to estimate it, but once we have, we can apply a procedure similar to the one above. Suppose we think there is heteroskedasticity of the form

$$\text{Var}(u|x) = \sigma^2 h(x),$$

which we model as

$$\text{Var}(u|x) = \sigma^2 \exp(\delta_0 + \delta_1 x_1 + \delta_2 x_2 + \dots + \delta_k x_k). \quad (2)$$

This is not the only possible model; we could use other functions of the $x$'s as well. Exponentiating is convenient because it imposes that the variance is always positive.

Since we need to estimate (2), we write the estimating equation as

$$u^2 = \sigma^2 \exp(\delta_0 + \delta_1 x_1 + \delta_2 x_2 + \dots + \delta_k x_k)\,\nu,$$

where $E(\nu) = 1$. Taking logs, we get

$$\log(u^2) = \delta_0 + \delta_1 x_1 + \delta_2 x_2 + \dots + \delta_k x_k + e,$$

where $E(e|x) = 0$. Now we're ready to apply feasible generalized least squares:

1. Estimate the full model using OLS; collect the residuals $\hat{u}_i$.
2. Create $\log(\hat{u}_i^2)$.
3. Regress $\log(\hat{u}_i^2)$ on all of the $x$'s using OLS and generate the predictions; call them $\hat{g}_i$.
4. Take $\hat{h}_i = \exp(\hat{g}_i)$ and apply generalized least squares as above by multiplying each observation by $1/\sqrt{\hat{h}_i}$.

### 5 Heteroskedasticity in the Linear Probability Model

The linear probability model has heteroskedasticity built in. Take the simple model

$$y = \beta_0 + \beta_1 x + u,$$

where $y$ is a dummy variable. Because $\text{Var}(u|x) = \text{Var}(y|x)$, we can write

$$\text{Var}(y|x) = p(y=1)\left(1 - p(y=1)\right) = (\beta_0 + \beta_1 x)(1 - \beta_0 - \beta_1 x).$$

Thus we know the form of the heteroskedasticity, but since we don't know the $\beta$'s, we need to model it. We can account for heteroskedasticity in the LPM by taking $\hat{h}_i = \hat{y}_i(1 - \hat{y}_i)$ and applying feasible generalized least squares. There is a slight problem: the fitted values $\hat{y}_i$ may not always lie between 0 and 1, in which case $\hat{h}_i$ may not be positive. There is no good way to deal with this, which is why it is more typical to just use OLS with robust standard errors.
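The four FGLS steps in Section 4.2 can be sketched in code. This is an illustrative implementation, not from the notes: the data are simulated with a single regressor, and the exponential heteroskedasticity parameters are made up for the example.

```python
# Sketch of the four FGLS steps, assuming the exponential variance model
# Var(u|x) = sigma^2 * exp(d0 + d1*x). All data simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(0, 2, n)
u = rng.normal(0, np.exp(0.5 * (0.2 + 0.8 * x)))  # sd = exp((d0 + d1*x)/2)
y = 1.0 + 2.0 * x + u                             # made-up beta0 = 1, beta1 = 2

X = np.column_stack([np.ones(n), x])

# Step 1: estimate the full model by OLS; collect residuals u_hat
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
u_hat = y - X @ b_ols

# Step 2: create log(u_hat^2)
log_u2 = np.log(u_hat ** 2)

# Step 3: regress log(u_hat^2) on the x's; generate predictions g_hat
d_hat, *_ = np.linalg.lstsq(X, log_u2, rcond=None)
g_hat = X @ d_hat

# Step 4: h_hat = exp(g_hat); weight each observation by 1/sqrt(h_hat)
w = 1 / np.sqrt(np.exp(g_hat))
b_fgls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
print(b_fgls)                                     # FGLS estimates of (beta0, beta1)
```

Note that Step 3 reuses the same regressor matrix for the variance regression as for the mean equation, matching the notes' prescription to regress on all of the $x$'s.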

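The LPM weighting described in Section 5 can likewise be sketched with simulated data. Clipping the fitted values into (0, 1) below is one common ad hoc fix for the positivity problem the notes mention, not something the notes prescribe; the parameter values are made up for illustration.

```python
# Sketch of FGLS for the linear probability model: h_hat = y_hat * (1 - y_hat).
# Fitted values are clipped away from 0 and 1 so the weights stay positive
# (an ad hoc fix; the notes note there is no good solution to this problem).
import numpy as np

rng = np.random.default_rng(2)
n = 1500
x = rng.uniform(0, 1, n)
p = 0.2 + 0.6 * x                                 # made-up true P(y=1|x)
y = (rng.uniform(size=n) < p).astype(float)       # dummy outcome

X = np.column_stack([np.ones(n), x])

# Step 1: OLS fit; fitted values y_hat estimate P(y=1|x)
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ b_ols

# Step 2: h_hat = y_hat*(1 - y_hat), with clipping to keep h_hat > 0
y_hat_c = np.clip(y_hat, 0.01, 0.99)
h_hat = y_hat_c * (1 - y_hat_c)

# Step 3: weighted regression, multiplying each observation by 1/sqrt(h_hat)
w = 1 / np.sqrt(h_hat)
b_wls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
print(b_wls)                                      # FGLS estimates of (beta0, beta1)
```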