UCLA ECON 103 - Econ-103-Lecture-05

Lecture Note 5: Prediction, Goodness-of-Fit, and Modelling Issues
Moshe Buchinsky, UCLA, Fall 2014

Contents: Least Squares Predictions; Measuring the Goodness-of-Fit; Reporting the Results; Modelling Issues; Polynomial Model; Log-Linear Model; Log-Log Model; Key Words

Topics to be Covered
1. Least Squares Prediction
2. Measuring Goodness-of-Fit
3. Modelling Issues
4. Polynomial Models
5. Log-Linear Models
6. Log-Log Models

Least Squares Predictions

The ability to predict is important to:
- business economists and financial analysts who attempt to forecast the sales and revenues of specific firms
- government policy makers who attempt to predict the rates of growth in national income, inflation, investment, saving, social insurance program expenditures, etc.

Accurate predictions provide a basis for better decision making in every type of planning context.

In order to use regression analysis as a basis for prediction, we must assume that y_0 and x_0 are related to one another by the same regression model that describes our sample of data. In particular, assumption SR1 holds for these observations, namely

    y_0 = \beta_1 + \beta_2 x_0 + e_0,    (4.1)

where e_0 is a random error.

The task of predicting y_0 is related to the problem of estimating E(y_0) = \beta_1 + \beta_2 x_0. Although E(y_0) is not random, the outcome y_0 is random.

The least squares predictor of y_0 comes from the fitted regression line:

    \hat{y}_0 = b_1 + b_2 x_0    (4.2)

[Figure 4.1: A point prediction]

To evaluate how well this predictor performs, we define the forecast error, which is analogous to the least squares residual:

    f = y_0 - \hat{y}_0 = (\beta_1 + \beta_2 x_0 + e_0) - (b_1 + b_2 x_0)    (4.3)

We would like the forecast error to be small. Taking the expected value of f, we find that

    E(f) = \beta_1 + \beta_2 x_0 + E(e_0) - [E(b_1) + E(b_2) x_0]
         = \beta_1 + \beta_2 x_0 + 0 - (\beta_1 + \beta_2 x_0)
         = 0

This means that, on average, the forecast error is zero: \hat{y}_0 is an unbiased predictor of y_0. Moreover, \hat{y}_0 is the best linear unbiased predictor (BLUP) of y_0 if assumptions SR1-SR5 hold.

The variance of the forecast is

    Var(f) = \sigma^2 \left[ 1 + \frac{1}{N} + \frac{(x_0 - \bar{x})^2}{\sum (x_i - \bar{x})^2} \right]    (4.4)

The variance of the forecast is smaller when:
- the overall uncertainty in the model, as measured by the variance of the random errors \sigma^2, is smaller
- the sample size N is larger
- the variation in the explanatory variable is larger
- the value of (x_0 - \bar{x})^2 is smaller

In practice, because we do not know \sigma^2, we use

    \widehat{Var}(f) = \hat{\sigma}^2 \left[ 1 + \frac{1}{N} + \frac{(x_0 - \bar{x})^2}{\sum (x_i - \bar{x})^2} \right]

as the estimator of the variance. The standard error of the forecast is

    se(f) = \sqrt{\widehat{Var}(f)}    (4.5)

The 100(1 - \alpha)% prediction interval is

    \hat{y}_0 \pm t_c \, se(f)    (4.6)

[Figure 4.2: Point and interval prediction]

For our food expenditure problem, we have

    \hat{y}_0 = b_1 + b_2 x_0 = 83.416 + 10.21 \times 20 = 287.61

The estimated variance of the forecast error is

    \widehat{Var}(f) = \hat{\sigma}^2 \left[ 1 + \frac{1}{N} + \frac{(x_0 - \bar{x})^2}{\sum (x_i - \bar{x})^2} \right]
                     = \hat{\sigma}^2 + \frac{\hat{\sigma}^2}{N} + (x_0 - \bar{x})^2 \frac{\hat{\sigma}^2}{\sum (x_i - \bar{x})^2}
                     = \hat{\sigma}^2 + \frac{\hat{\sigma}^2}{N} + (x_0 - \bar{x})^2 \widehat{Var}(b_2)
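To make the interval calculation concrete, here is a minimal Python sketch of Eqs. (4.2), (4.5), and (4.6) for the food expenditure example. The values b_1 = 83.416, b_2 = 10.21, and se(f) = 90.633 are taken directly from the slides; the sample size N = 40 is an assumption inferred from the reported critical value t_c = 2.024 (which corresponds to N - 2 = 38 degrees of freedom), and the variable names are ours.

```python
# Sketch of the food expenditure prediction interval, Eqs. (4.2), (4.5), (4.6).
# b1, b2, and se_f are reported in the slides; N = 40 is an assumption,
# consistent with the slides' critical value t_c = 2.024 at 38 df.
from scipy import stats

b1, b2 = 83.416, 10.21    # least squares estimates
x0 = 20                   # value of the explanatory variable for the prediction
se_f = 90.633             # standard error of the forecast, se(f), Eq. (4.5)
N = 40                    # assumed sample size, so df = N - 2 = 38

y0_hat = b1 + b2 * x0                     # point prediction, Eq. (4.2)
tc = stats.t.ppf(1 - 0.05 / 2, df=N - 2)  # two-sided 95% critical value, ~2.024

lower = y0_hat - tc * se_f                # prediction interval, Eq. (4.6)
upper = y0_hat + tc * se_f
print(f"y0_hat = {y0_hat:.2f}, 95% PI = [{lower:.2f}, {upper:.2f}]")
# Agrees with the slides' interval [104.132, 471.085] up to rounding.
```

Note how the interval width is driven by se(f): by Eq. (4.4), se(f) grows with (x_0 - \bar{x})^2, so predictions made far from the sample mean of x are less precise.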
The 95% prediction interval for y_0 is

    \hat{y}_0 \pm t_c \, se(f) = 287.61 \pm 2.024 \times 90.633 = [104.132, 471.085]

Measuring the Goodness-of-Fit

There are two major reasons for analyzing the model

    y_i = \beta_1 + \beta_2 x_i + e_i    (4.7)

First, we want to explain how the dependent variable (y_i) changes as the independent variable (x_i) changes. Second, we want to predict y_0 given an x_0.

Closely related is the desire to use x_i to explain as much as possible of the variation in the dependent variable y_i. In the regression model of Eq. (4.7) we call x_i the "explanatory" variable because we hope that its variation will "explain" the variation in y_i.

Note that we can separate y_i into its explainable and unexplainable components:

    y_i = E(y_i) + e_i,    (4.8)

where E(y_i) is the explainable, or systematic, part, and e_i is the random, unsystematic, unexplainable component.

Analogous to Eq. (4.8), we can write

    y_i = \hat{y}_i + \hat{e}_i    (4.9)

Subtracting the sample mean from both sides gives

    y_i - \bar{y} = (\hat{y}_i - \bar{y}) + \hat{e}_i    (4.10)

[Figure 4.3: Explained and unexplained components of y_i]

Recall that the sample variance of y_i is

    s_y^2 = \frac{1}{N - 1} \sum_{i=1}^{N} (y_i - \bar{y})^2

Squaring and summing both sides of Eq. (4.10), and using the fact that \sum_i (\hat{y}_i - \bar{y}) \hat{e}_i = 0, we get

    \sum_{i=1}^{N} (y_i - \bar{y})^2 = \sum_{i=1}^{N} (\hat{y}_i - \bar{y})^2 + \sum_{i=1}^{N} \hat{e}_i^2    (4.11)

Eq. (4.11) decomposes the "total sample variation" in y into explained and unexplained components, called sums of squares. Specifically:

    \sum_{i=1}^{N} (y_i - \bar{y})^2 = SST, the total sum of squares
    \sum_{i=1}^{N} (\hat{y}_i - \bar{y})^2 = SSR, the sum of squares due to the regression
    \sum_{i=1}^{N} \hat{e}_i^2 = SSE, the sum of squares due to error

We can now rewrite Eq. (4.11) as

    SST = SSR + SSE

Let us define the coefficient of determination, R^2, as the proportion of the variation in y explained by x within the regression model:

    R^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST}    (4.12)

The closer R^2 is to 1, the closer the sample values y_i are to the fitted regression equation. If R^2 = 1, then all the sample data fall exactly on the fitted least squares line, so SSE = 0 and the model fits the data "perfectly." If the sample data for y and x are uncorrelated and show no linear association, then the least squares fitted line is "horizontal" and identical to \bar{y}, so that SSR = 0 and R^2 = 0.

