
Econ 3120 1st Edition Lecture 20

Outline of Last Lecture
I. Heteroskedasticity

Outline of Current Lecture
II. The White Test

Current Lecture

Let's start with our basic multivariate regression model under the standard MLR assumptions:

• MLR.1: The model is given by y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + ... + \beta_k x_k + u.
• MLR.2: Random sampling: our sample \{(x_{i1}, x_{i2}, ..., x_{ik}, y_i) : i = 1, ..., n\} is a random sample following the population model in MLR.1.
• MLR.3: No perfect collinearity: in the sample (and in the population), 1) each independent variable has sample variation \neq 0, and 2) none of the independent variables can be constructed as a linear combination of the other independent variables.
• MLR.4: Zero conditional mean: E(u | x_1, x_2, ..., x_k) = 0.
• MLR.5: Homoskedasticity: the error term in the OLS equation described by MLR.1 has constant variance: Var(u | x_1, ..., x_k) = \sigma^2.

We've seen previously that under MLR.1 through MLR.4, the OLS estimators for \beta will be unbiased. In this lecture we're going to focus on violations of MLR.5. Violations of homoskedasticity are called, unsurprisingly, heteroskedasticity. To understand what this means, it's easiest to illustrate graphically using just one x (i.e., the bivariate case).

Violations of MLR.5 affect variance estimation. Again, using the bivariate regression y = \beta_0 + \beta_1 x + u, recall that the estimator \hat{\beta}_1 is given by

\hat{\beta}_1 = \frac{\sum (x_i - \bar{x}) y_i}{\sum (x_i - \bar{x})^2},

and this can be written as

\hat{\beta}_1 = \beta_1 + \frac{\sum (x_i - \bar{x}) u_i}{\sum (x_i - \bar{x})^2}.

Taking the variance of \hat{\beta}_1 yields

Var(\hat{\beta}_1) = \frac{\sum (x_i - \bar{x})^2 Var(u_i)}{\left( \sum (x_i - \bar{x})^2 \right)^2}.

Now, under MLR.5, we can use the fact that Var(u_i) = \sigma^2 for all i, and this collapses to

Var(\hat{\beta}_1) = \frac{\sigma^2}{SST_x}.

But suppose that the variances are heteroskedastic, such that Var(u_i) = \sigma_i^2. Now we get

Var(\hat{\beta}_1) = \frac{\sum (x_i - \bar{x})^2 \sigma_i^2}{\left( \sum (x_i - \bar{x})^2 \right)^2},

and we cannot simplify any further. There are two things to note at this point. First, the estimators for the variance that we had derived previously are wrong under heteroskedasticity.
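The derivation above can be checked numerically. The sketch below is not from the lecture — the setup and parameter values are my own illustration. It simulates a bivariate model with Var(u_i) = \sigma_i^2 increasing in x and compares the homoskedastic formula \sigma^2 / SST_x (with \sigma^2 replaced by the average error variance) against the correct heteroskedastic expression:

```python
import numpy as np

# Illustrative simulation (assumed setup, not from the lecture):
# y = beta0 + beta1*x + u with Var(u_i) = sigma_i^2 increasing in x.
rng = np.random.default_rng(0)
n = 10_000
x = rng.uniform(0, 10, n)
sigma_i = 0.5 + 0.3 * x              # error s.d. depends on x -> heteroskedastic
u = rng.normal(0.0, sigma_i)
y = 1.0 + 2.0 * x + u                # true beta0 = 1, beta1 = 2

dx = x - x.mean()
sst_x = np.sum(dx ** 2)
beta1_hat = np.sum(dx * y) / sst_x   # OLS slope estimator from the lecture

# Homoskedastic formula sigma^2 / SST_x, plugging in the average variance:
var_naive = np.mean(sigma_i ** 2) / sst_x
# Correct variance under heteroskedasticity (sigma_i^2 is known here):
var_true = np.sum(dx ** 2 * sigma_i ** 2) / sst_x ** 2

print(beta1_hat)            # close to the true slope of 2
print(var_naive, var_true)  # the two formulas disagree
```

Because \sigma_i^2 is largest where (x_i - \bar{x})^2 is also large in this simulation, the correct variance exceeds the naive one, which is exactly why the usual standard errors are misleading under heteroskedasticity.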
Second, OLS is no longer Best Linear Unbiased, because we have violated one of the Gauss-Markov assumptions.

2 Robust Inference Under Heteroskedasticity

We'll deal with the issue of efficiency later, but there is a pretty straightforward way to deal with heteroskedasticity using OLS estimation. Because OLS estimates are unbiased, we can still use the estimates to arrive at a "heteroskedasticity-robust" variance estimator. Here's what that looks like in the bivariate case:

\widehat{Var}(\hat{\beta}_1) = \frac{\sum (x_i - \bar{x})^2 \hat{u}_i^2}{\left( \sum (x_i - \bar{x})^2 \right)^2}

Implementation is straightforward using the ", robust" option of the regress command in Stata. With multiple regression, the robust estimator follows the same reasoning, but the formula is a bit more complicated, so we won't cover it here. Note that the heteroskedasticity-robust estimates of variance are valid even in the absence of heteroskedasticity. In practice, a lot of applied work reports robust standard errors rather than regular standard errors, just in case. At the same time, robust standard errors are usually larger than standard OLS standard errors, so using robust estimates does come with a loss of precision.

3 Testing for Heteroskedasticity

To test for heteroskedasticity, we can turn MLR.5 into a null hypothesis to be tested:

H_0: Var(u | x_1, ..., x_k) = \sigma^2

The basic idea of implementation is to test whether \hat{u}_i^2 depends on a function of the x's. Since the OLS-derived \hat{u}_i^2 is an unbiased estimator for \sigma_i^2, we can use it for testing. The following steps can be used to implement the

These notes represent a detailed interpretation of the professor's lecture. GradeBuddy is best used as a supplement to your own notes, not as a substitute.
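The robust variance estimator above can be sketched in a few lines of code. This is my own hypothetical simulation, not the lecture's; it computes the classical and robust variance estimates side by side (Stata's ", robust" option computes essentially the robust formula, with an additional finite-sample adjustment):

```python
import numpy as np

# Hypothetical example (setup and numbers are my own illustration):
# compute the heteroskedasticity-robust variance of beta1_hat from residuals.
rng = np.random.default_rng(1)
n = 50_000
x = rng.uniform(0, 10, n)
u = rng.normal(0.0, 0.5 + 0.3 * x)    # heteroskedastic errors
y = 1.0 + 2.0 * x + u                 # true beta1 = 2

dx = x - x.mean()
sst_x = np.sum(dx ** 2)
beta1_hat = np.sum(dx * y) / sst_x
beta0_hat = y.mean() - beta1_hat * x.mean()
uhat = y - beta0_hat - beta1_hat * x  # OLS residuals

# Usual variance estimator, which relies on MLR.5 (homoskedasticity):
var_classical = np.sum(uhat ** 2) / (n - 2) / sst_x
# Heteroskedasticity-robust estimator from the formula above:
var_robust = np.sum(dx ** 2 * uhat ** 2) / sst_x ** 2

print(np.sqrt(var_classical), np.sqrt(var_robust))
```

With errors like these, the robust standard error comes out larger than the classical one, matching the note above that robust inference can cost some precision.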
