Econ 3120 1st Edition Lecture 12

Outline of Current Lecture
I. Goodness of Fit
II. Unbiasedness

Current Lecture
III. Motivation

1 Motivation

Multiple regression allows us to account for more than one factor in explaining our dependent variable y. Consider the familiar example of the relationship between schooling and wages. Suppose we also have data on each individual's SAT score from high school. We might be interested in estimating a relationship of the form:

log(wage) = β0 + β1educ + β2SAT + u

Or, to take a simple model from macroeconomics, suppose we want to estimate the determinants of a country's growth rate. We may model the growth rate of a country (from 1980 to 2000) as a function of per capita income in 1980 and income inequality (as measured by the Gini coefficient):

growthrate = β0 + β1inc80 + β2Gini + u

A multivariate model with two independent variables, x1 and x2, takes the form:

y = β0 + β1x1 + β2x2 + u

In this case, β1 represents the change in y for a one-unit change in x1, holding all other factors (x2 and u) fixed. This is the partial derivative of y with respect to x1, holding x2 and u fixed.

Our x's don't have to be separate variables; they can actually be functions of the same variable. For example, suppose we are studying the relationship between household consumption and income, and we model the relationship as follows:

cons = β0 + β1inc + β2inc² + u

In this case, the effect of income on consumption depends on both β1 and β2:

∂cons/∂inc (holding u constant) = β1 + 2β2inc

The general form of the multivariate model with k independent variables is

yi = β0 + β1x1i + β2x2i + ... + βkxki + ui    (1)

(Note that I use the notation xji for observation i and variable xj, while Wooldridge uses xij.)

Analogous to the bivariate model, the key assumption is the independence of the error term and the regressors (independent variables):

E(u | x1, x2, ..., xk) = 0

This implies that u must be independent of (and uncorrelated with) all of the explanatory variables xj.
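To make the two-regressor setup concrete, here is a minimal simulation in Python with numpy (the lecture itself uses Stata, so this is only an illustrative sketch; all coefficient values and distributions are made up) showing that OLS recovers the parameters of log(wage) = β0 + β1educ + β2SAT + u when the error is independent of both regressors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical data: years of schooling and SAT score, mildly correlated.
educ = rng.normal(13, 2, n)
sat = 800 + 40 * educ + rng.normal(0, 100, n)
u = rng.normal(0, 0.3, n)  # error term, independent of both regressors

# Made-up "true" parameters for the illustration.
b0, b1, b2 = 1.0, 0.08, 0.0005
log_wage = b0 + b1 * educ + b2 * sat + u

# OLS: regress log(wage) on a constant, educ, and SAT.
X = np.column_stack([np.ones(n), educ, sat])
beta_hat, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
print(beta_hat)  # estimates of (b0, b1, b2); close to the true values above
```

Because SAT and educ are correlated here, omitting either one would bias the coefficient on the other; including both "holds the other factor fixed" in the sense described above.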
If u is correlated with any of these variables, the assumption does not hold and our estimates will be biased (more on this later on).

2 Estimating Multivariate Regression Parameters

Estimation of the β's in a multivariate model follows a similar procedure to bivariate estimation. We first start with the independence assumption,

E(u | x1, x2, ..., xk) = 0 ⇒ Cov(xj, u) = 0,

and impose the sample analog on our estimates, using equation (1). This implies

(1/n) ∑ x1i(yi − β̂0 − β̂1x1i − ... − β̂kxki) = 0
(1/n) ∑ x2i(yi − β̂0 − β̂1x1i − ... − β̂kxki) = 0
...
(1/n) ∑ xki(yi − β̂0 − β̂1x1i − ... − β̂kxki) = 0

Note that these equations are the same as the first-order conditions from the minimization of the sum of squared residuals:

min over β̂0, ..., β̂k of ∑ ûi² = min over β̂0, ..., β̂k of ∑ (yi − β̂0 − β̂1x1i − ... − β̂kxki)²

The actual computation of these estimates is very involved and is best left to Stata.

3 Fitted Values, Residuals and Goodness of Fit

3.1 Fitted Values and Residuals

OLS fitted values and residuals are constructed just as they were in the bivariate case:

ŷi = β̂0 + β̂1x1i + ... + β̂kxki
ûi = yi − ŷi

The algebraic properties of residuals and fitted values that we defined in the bivariate case also hold in the multivariate case:

1. The sum of the OLS residuals is zero: ∑ ûi = 0
2. The sample covariance between each xj and the estimated residuals is zero: (1/(n−1)) ∑ (xji − x̄j)(ûi − ū̂) = 0, which implies ∑ xjiûi = 0
3. The sample covariance between the fitted values and the estimated residuals is zero: (1/(n−1)) ∑ (ŷi − ȳ)(ûi − ū̂) = 0, which implies ∑ ŷiûi = 0

3.2 Goodness-of-fit

We construct R², our goodness-of-fit measure, just as before.
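Both the algebraic residual properties above and the R² construction can be checked numerically. Since the lecture leaves computation to Stata, the following is only a sketch in Python with numpy, using simulated data with invented coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000

# Simulated two-regressor model (all numbers are made up).
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
y = 2.0 + 1.0 * x1 - 0.5 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

y_hat = X @ beta_hat  # fitted values
u_hat = y - y_hat     # residuals

# Algebraic properties (hold up to floating-point error):
print(u_hat.sum())             # ~0: residuals sum to zero
print(x1 @ u_hat, x2 @ u_hat)  # ~0: each regressor is orthogonal to the residuals
print(y_hat @ u_hat)           # ~0: fitted values are orthogonal to the residuals

# Goodness of fit: the two definitions of R^2 agree.
sst = ((y - y.mean()) ** 2).sum()
sse = ((y_hat - y.mean()) ** 2).sum()
ssr = (u_hat ** 2).sum()
r2 = 1 - ssr / sst
print(r2, sse / sst)
```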
total sum of squares: SST = ∑ (yi − ȳ)²
explained sum of squares: SSE = ∑ (ŷi − ȳ)²
residual sum of squares: SSR = ∑ ûi²

R² ≡ SSE/SST = 1 − SSR/SST

The R² represents the proportion of the variation in y that is explained by the variation in the x's. One important feature of R² in the multivariate case is that it never decreases (and in practice almost always increases) when additional variables are added to the regression. This follows because SST is always the same, but including additional variables may decrease the SSR (even if only by a little). Thus, we cannot simply use an increase in R² as evidence that an additional regressor belongs in the model.¹

¹ Note that there are adjustments made to R² to account for this, such as the "adjusted R²" reported by Stata. Normally, however, people just report the standard R², so it is important to keep the above caveat in mind.

4 Regression Anatomy

Here's another way to think about multiple regression coefficients. As an aside, note that we can write bivariate regression parameters as functions of variances and covariances. Consider the model:

y = β0 + β1x + u

Taking the covariance of both sides with x gives

Cov(y, x) = Cov(β0 + β1x + u, x) = β1 Var(x),

and dividing by Var(x) yields

β1 = Cov(y, x) / Var(x)

The estimate β̂1 is just the sample analog of the right-hand side, β̂1 = Côv(y, x) / V̂ar(x).

Now, back to multiple regression. Consider a model with two regressors:

y = β0 + β1x1 + β2x2 + u

Now, take the following "auxiliary" regressions of the x's on each other:

x1 = δ10 + δ12x2 + x̃1    (2)
x2 = δ20 + δ21x1 + x̃2

In this case, x̃1 and x̃2 are the error terms. It follows that β1 and β2 are actually the result of bivariate regressions of y on x̃1 and x̃2:

β1 = Cov(y, x̃1) / Var(x̃1)
β2 = Cov(y, x̃2) / Var(x̃2)

The proof is "straightforward" (omitted here). Let's take a step back and think about what this means. x̃1 represents the variation in x1 that is left over after accounting for x2. (Some people call this "partialling out" x2 from x1.)
So β1 is the result of the regression of y on x1, after x1 has been purged of variation relating to x2. This is another way to think about what it means to say that we are "controlling for" x2 in the regression. We can extend this to models with many regressors, by partialling out all of the other x's from a given xj and then running the bivariate regression of y on x̃j. Suppose we have the general model:

y = β0 + β1x1 + ... + βkxk + u

To find βj we do the following:

1. Regress xj on all of the other x's. Gather the residuals x̃j.
2. Run the resulting bivariate regression of y on x̃j; the slope coefficient is β̂j.
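The two-step procedure above can be verified directly: the slope from the bivariate regression of y on the residual x̃1 matches the coefficient on x1 from the full multivariate regression (this is exact in sample, not just approximate). A sketch in Python with numpy, with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2_000

# Simulated data (coefficients invented for the illustration).
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

# Full multivariate OLS of y on (1, x1, x2).
X = np.column_stack([np.ones(n), x1, x2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 1: regress x1 on the other regressors (here just x2); keep residuals.
Z = np.column_stack([np.ones(n), x2])
delta_hat, *_ = np.linalg.lstsq(Z, x1, rcond=None)
x1_tilde = x1 - Z @ delta_hat

# Step 2: bivariate regression of y on x1_tilde, via the Cov/Var formula.
beta1_anatomy = np.cov(y, x1_tilde)[0, 1] / np.var(x1_tilde, ddof=1)

print(beta_hat[1], beta1_anatomy)  # identical up to floating-point error
```

Note that np.var defaults to ddof=0 while np.cov defaults to ddof=1, so ddof=1 is set explicitly to keep the sample covariance and sample variance on the same footing.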
