# UCLA ECON 103 - Econ-103-Lecture-05 (74 pages)


- Pages: 74
- School: University of California, Los Angeles
- Course: Econ 103 - Introduction to Econometrics


## Lecture Note 5: Prediction, Goodness of Fit, and Modelling Issues

Moshe Buchinsky, UCLA, Fall 2014

**Topics to be Covered**

1. Least Squares Prediction
2. Measuring Goodness of Fit
3. Modeling Issues
4. Polynomial Models
5. Log-linear Models
6. Log-log Models

### Least Squares Predictions

The ability to predict is important to:

- Business economists and financial analysts who attempt to forecast the sales and revenues of specific firms.
- Government policy makers who attempt to predict the rates of growth in national income, inflation, investment, saving, social insurance program expenditures, etc.

Accurate predictions provide a basis for better decision making in every type of planning context.

In order to use regression analysis as a basis for prediction, we must assume that $y_0$ and $x_0$ are related to one another by the same regression model that describes our sample of data. In particular, assumption SR1 holds for these observations, namely

$$y_0 = \beta_1 + \beta_2 x_0 + e_0 \quad (4.1)$$

where $e_0$ is a random error.

The task of predicting $y_0$ is related to the problem of estimating $E(y_0) = \beta_1 + \beta_2 x_0$. Although $E(y_0) = \beta_1 + \beta_2 x_0$ is not random, the outcome $y_0$ is random. The least squares predictor of $y_0$ comes from the fitted regression line:

$$\hat{y}_0 = b_1 + b_2 x_0 \quad (4.2)$$

*Figure 4.1: A point prediction.*

To evaluate how well this predictor performs, we define the forecast error, which is analogous to the least squares residual:

$$f = y_0 - \hat{y}_0 = (\beta_1 + \beta_2 x_0 + e_0) - (b_1 + b_2 x_0) \quad (4.3)$$

We would like the forecast error to be small. Taking the expected value of $f$, we find that

$$E(f) = \beta_1 + \beta_2 x_0 + E(e_0) - \left[ E(b_1) + E(b_2) x_0 \right] = \beta_1 + \beta_2 x_0 + 0 - (\beta_1 + \beta_2 x_0) = 0$$

This means that, on average, the forecast error is zero; that is, $\hat{y}_0$ is an unbiased predictor
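The unbiasedness result $E(f) = 0$ can be checked by simulation. The sketch below uses hypothetical parameter values (not from the notes): it repeatedly draws a sample, fits the least squares line by hand, and averages the forecast error $f = y_0 - \hat{y}_0$ at an out-of-sample point $x_0$.

```python
import random

random.seed(0)

beta1, beta2, sigma = 1.0, 0.5, 1.0   # hypothetical "true" parameters
x = [float(v) for v in range(1, 21)]  # fixed in-sample regressor values
x0 = 10.0                             # point at which we predict
xbar = sum(x) / len(x)
sxx = sum((xi - xbar) ** 2 for xi in x)

errors = []
for _ in range(20000):
    # draw a fresh sample and an out-of-sample outcome y0 from the same model (SR1)
    y = [beta1 + beta2 * xi + random.gauss(0, sigma) for xi in x]
    y0 = beta1 + beta2 * x0 + random.gauss(0, sigma)
    ybar = sum(y) / len(y)
    b2 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b1 = ybar - b2 * xbar
    errors.append(y0 - (b1 + b2 * x0))  # forecast error f = y0 - yhat0, Eq. (4.3)

mean_f = sum(errors) / len(errors)
print(round(mean_f, 3))  # close to 0, consistent with E(f) = 0
```

The average forecast error shrinks toward zero as the number of replications grows, which is exactly what unbiasedness of $\hat{y}_0$ claims.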
of $y_0$.

Moreover, $\hat{y}_0$ is the best linear unbiased predictor (BLUP) of $y_0$ if assumptions SR1–SR5 hold.

The variance of the forecast is

$$\text{Var}(f) = \sigma^2 \left[ 1 + \frac{1}{N} + \frac{(x_0 - \bar{x})^2}{\sum (x_i - \bar{x})^2} \right] \quad (4.4)$$

The variance of the forecast is smaller when:

- the overall uncertainty in the model is smaller, as measured by the variance of the random errors $\sigma^2$;
- the sample size $N$ is larger;
- the variation in the explanatory variable is larger;
- the value of $(x_0 - \bar{x})^2$ is small.

In practice, because we do not know $\sigma^2$, we use

$$\widehat{\text{Var}}(f) = \hat{\sigma}^2 \left[ 1 + \frac{1}{N} + \frac{(x_0 - \bar{x})^2}{\sum (x_i - \bar{x})^2} \right] \quad (4.5)$$

as the estimator for the variance. The standard error of the forecast is

$$\text{se}(f) = \sqrt{\widehat{\text{Var}}(f)}$$

The $100(1-\alpha)\%$ prediction interval is

$$\hat{y}_0 \pm t_c \, \text{se}(f) \quad (4.6)$$

*Figure 4.2: Point and interval prediction.*

For our food expenditure problem, we have

$$\hat{y}_0 = b_1 + b_2 x_0 = 83.416 + 10.21(20) = 287.61$$

The estimated variance for the forecast error is

$$\widehat{\text{Var}}(f) = \hat{\sigma}^2 \left[ 1 + \frac{1}{N} + \frac{(x_0 - \bar{x})^2}{\sum (x_i - \bar{x})^2} \right] = \hat{\sigma}^2 + \frac{\hat{\sigma}^2}{N} + (x_0 - \bar{x})^2 \frac{\hat{\sigma}^2}{\sum (x_i - \bar{x})^2} = \hat{\sigma}^2 + \frac{\hat{\sigma}^2}{N} + (x_0 - \bar{x})^2 \, \widehat{\text{Var}}(b_2)$$

The 95% prediction interval for $y_0$ is

$$\hat{y}_0 \pm t_c \, \text{se}(f) = 287.61 \pm 2.024 (90.633) = [104.132, \ 471.085]$$

### Measuring the Goodness of Fit

There are two major reasons for analyzing the model

$$y_i = \beta_1 + \beta_2 x_i + e_i \quad (4.7)$$

First, we want to explain how the dependent variable $y_i$ changes as the independent variable $x_i$ changes. Second, we want to predict $y_0$ given an $x_0$.

Closely related is the desire to use $x_i$ to explain as much as possible of the variation in the dependent variable $y_i$. In the regression model of Eq. (4.7), we call $x_i$ the explanatory variable because we hope
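The interval arithmetic for the food expenditure example can be reproduced directly from the values quoted in the notes ($b_1$, $b_2$, $x_0$, $\text{se}(f)$, $t_c$); this is only a sketch of the final step, and tiny last-digit differences from the quoted interval come from the rounding of $t_c$ and the coefficients on the slide.

```python
# Values as reported in the lecture notes for the food expenditure example
b1, b2 = 83.416, 10.21   # least squares estimates
x0 = 20                  # income level at which we predict
se_f = 90.633            # standard error of the forecast, se(f)
tc = 2.024               # t critical value (rounded on the slide)

y0_hat = b1 + b2 * x0        # point prediction, Eq. (4.2): about 287.61
lower = y0_hat - tc * se_f   # lower endpoint of Eq. (4.6)
upper = y0_hat + tc * se_f   # upper endpoint of Eq. (4.6)
print(round(y0_hat, 2), round(lower, 2), round(upper, 2))
```

The wide interval (roughly $[104, 471]$) reflects how large $\hat{\sigma}^2$ is relative to the systematic part of the model in this example.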
that its variation will explain the variation in $y_i$.

Note that we can separate $y_i$ into its explainable and unexplainable components:

$$y_i = E(y_i) + e_i \quad (4.8)$$

where $E(y_i)$ is the explainable, or systematic, part and $e_i$ is the random, unsystematic, and unexplainable component.

Analogous to Eq. (4.8), we can write

$$y_i = \hat{y}_i + \hat{e}_i \quad (4.9)$$

Subtracting the sample mean from both sides:

$$y_i - \bar{y} = (\hat{y}_i - \bar{y}) + \hat{e}_i \quad (4.10)$$

*Figure 4.3: Explained and unexplained components of $y_i$.*

Recall that the sample variance of $y_i$ is

$$s_y^2 = \frac{1}{N-1} \sum_{i=1}^{N} (y_i - \bar{y})^2$$

Squaring and summing both sides of Eq. (4.10), and using the fact that the cross-product term $\sum_i (\hat{y}_i - \bar{y}) \hat{e}_i = 0$, we get

$$\sum_{i=1}^{N} (y_i - \bar{y})^2 = \sum_{i=1}^{N} (\hat{y}_i - \bar{y})^2 + \sum_{i=1}^{N} \hat{e}_i^2 \quad (4.11)$$

Eq. (4.11) is a decomposition of the total sample variation in $y$ into explained and unexplained components. These are called sums of squares. Specifically:

$$\sum_{i=1}^{N} (y_i - \bar{y})^2 = \text{SST} \quad \text{(total sum of squares)}$$

$$\sum_{i=1}^{N} (\hat{y}_i - \bar{y})^2 = \text{SSR} \quad \text{(sum of squares due to the regression)}$$

$$\sum_{i=1}^{N} \hat{e}_i^2 = \text{SSE} \quad \text{(sum of squares due to error)}$$

We can now rewrite Eq. (4.11) as

$$\text{SST} = \text{SSR} + \text{SSE}$$

Let's define the coefficient of determination, or $R^2$, as the proportion of the variation in $y$ explained by $x$ within the regression model:

$$R^2 = \frac{\text{SSR}}{\text{SST}} = 1 - \frac{\text{SSE}}{\text{SST}} \quad (4.12)$$

The closer $R^2$ is to 1, the closer the sample values $y_i$ are to the fitted regression equation. If $R^2 = 1$, then all the sample data fall exactly on the fitted least squares line, so $\text{SSE} = 0$ and the model fits the data perfectly. If the sample data for $y$ and $x$ are uncorrelated and show no linear association, then the least squares fitted line is horizontal and identical to $\bar{y}$, so $\text{SSR} = 0$ and $R^2 = 0$.
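The decomposition in Eq. (4.11) and the $R^2$ of Eq. (4.12) are easy to verify numerically. The sketch below fits a least squares line by hand on a small hypothetical dataset (not from the notes) and checks that SST = SSR + SSE.

```python
# Hypothetical small dataset chosen only to illustrate the decomposition
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n

# Least squares estimates b2 and b1
sxx = sum((xi - xbar) ** 2 for xi in x)
b2 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
b1 = ybar - b2 * xbar

yhat = [b1 + b2 * xi for xi in x]               # fitted values
ehat = [yi - yh for yi, yh in zip(y, yhat)]     # least squares residuals

sst = sum((yi - ybar) ** 2 for yi in y)         # total sum of squares
ssr = sum((yh - ybar) ** 2 for yh in yhat)      # explained by the regression
sse = sum(e ** 2 for e in ehat)                 # unexplained (error)

r2 = ssr / sst                                  # Eq. (4.12); equals 1 - sse/sst
print(round(sst - (ssr + sse), 10))             # decomposition check: ~0
print(round(r2, 4))
```

Because the data here lie close to a line, $R^2$ comes out near 1; replacing `y` with values unrelated to `x` drives SSR, and hence $R^2$, toward 0, matching the horizontal-fitted-line case described above.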
