MIT 14.02 - Lecture Notes

Contents: 1. Economic vs. Econometric Theory · 2. Ordinary Least Squares (OLS) Estimators · 3. Confidence · 4. Fitness · 5. Summary · 6. A numerical example · 7. Caveats · t-distribution (from Green)

Econometrics Lecture 7

1. Economic vs. Econometric Theory

• Economic theory assumes a linear model between the dependent variable and the independent (explanatory) variable, with some random (unexplained) deviations/errors/residuals:

      Ct = α + β Yt + εt   =>   εt = Ct - α - β Yt

  [Figure: Ct plotted against Yt with the fitted line Ĉt; the vertical gap between each observation and the line is the residual ε̂t, β is the slope and α the intercept.]

• We dislike errors.
• Our dissatisfaction with the errors increases very rapidly: if an error (to either side) doubles, our dissatisfaction more than doubles.

2. Ordinary Least Squares (OLS) Estimators

• Therefore, we impose some restrictions on our econometric model regarding the errors:
      (I)  ∑t εt = 0    - there is no systematic (aggregate) error.
      (II) Min ∑t εt²   - penalize larger errors to either side.

• It is easy to prove that, given our theoretical linear model, imposing the above restrictions provides us with the Best Linear Unbiased Estimators. In other words, the OLS estimators are BLUE (Gauss-Markov theorem).

• What can we infer from these restrictions? How can we use them to derive the estimators, i.e. the equations that estimate our coefficients α and β?

• Before we proceed, notice:
      (1) Our sample includes n observations, and our regression runs m independent variables. When m > 1 the regression is called a multi-variable regression. We won't provide you with the estimators for multi-variable regression, but the intuition is the same.
      (2) The hat (e.g., Ĉ) denotes the estimator or the estimate. The former is the formula that we use to estimate the coefficient from the sample; the latter is the value of the estimator after using the sample.
      (3) The bar (e.g., C̄, Ȳ) denotes the mean of the variable.
      (4) Lower-case letters refer to deviations from the mean (ct ≡ Ct - C̄).
      (5) We want to minimize the vertical deviations from the regression line. That is different from minimizing the horizontal deviations from the regression line.
      (6) The regression provides us with estimates of the correlation: its sign, magnitude and significance. However, further economic and econometric theory is needed to identify causality.

(I) ∑t εt = 0

  (I.a)   ∑t (Ct - α - β Yt) = 0
          ∑t Ct - ∑t α - β ∑t Yt = 0
          Dividing by n:   C̄ - α - β Ȳ = 0
          α = C̄ - β Ȳ     - the constant coefficient
  (I.b)   ∑t εt / n = 0   =>   ε̄ = 0

(II) Min ∑t εt²

  (II.a)  εt = Ct - α - β Yt         - by the model specification (Section 1)
          0  = C̄ - α - β Ȳ           - our conclusion from restriction (I)
          =>  εt = (Ct - C̄) - β (Yt - Ȳ) ≡ ct - β yt

  (II.b)  Min ∑t εt² = Min ∑t (ct - β yt)² = Min ∑t (ct² - 2 β ct yt + β² yt²)

• β minimizes the Sum of Squared Errors (SSE) where the first derivative equals zero (FOC):
          -2 ∑t ct yt + 2 β ∑t yt² = 0
          β = ∑t ct yt / ∑t yt²             =>   β = SScy / SSyy       (SS denotes the sums of squares / cross-products)
          β = [∑t ct yt / n] / [∑t yt² / n] =>   β = σcy / σy²         (the covariance of Ct and Yt over the variance of Yt, where Variance(Yt) = STD(Yt)²)
          β = [σcy / (σc σy)] · [σc / σy]   =>   β = ρcy · (σc / σy)   (ρcy is the correlation of Ct and Yt, with |ρcy| ≤ 1)

• When ρcy < 0 then β < 0 (note: both must have the same sign), so we have a negative correlation between c and y, and vice versa.
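To make the derivation concrete, here is a minimal Python sketch (my own illustration, not part of the original notes) that computes the estimates exactly as derived above: β̂ = ∑t ct yt / ∑t yt² and α̂ = C̄ - β̂ Ȳ. The function name ols_simple and the argument names C and Y are assumptions made for this example.

```python
# A minimal sketch (not from the notes) of the simple-OLS estimators derived above:
#   beta_hat  = sum_t c_t*y_t / sum_t y_t^2     (from restriction II, the FOC)
#   alpha_hat = C_bar - beta_hat * Y_bar        (from restriction I, residuals sum to zero)
# All names are illustrative assumptions.

def ols_simple(C, Y):
    """Return (alpha_hat, beta_hat) for the model C_t = alpha + beta*Y_t + eps_t."""
    n = len(C)
    C_bar = sum(C) / n
    Y_bar = sum(Y) / n
    c = [Ct - C_bar for Ct in C]   # c_t: deviation of C_t from its mean
    y = [Yt - Y_bar for Yt in Y]   # y_t: deviation of Y_t from its mean
    beta_hat = sum(ct * yt for ct, yt in zip(c, y)) / sum(yt ** 2 for yt in y)
    alpha_hat = C_bar - beta_hat * Y_bar
    return alpha_hat, beta_hat
```

Dividing the numerator and denominator of β̂ by n turns the sums into the sample covariance σcy and variance σy², which is why the same number can also be written as ρcy · (σc / σy), as noted above.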
3. Confidence

• Since we fit a linear model to a sample from the population, our estimates (based on the sample) might differ from the true values (based on the population).
• By how much can our estimate differ from the actual value?
• We can be (1-λ)% confident that the true value differs from our estimate by no more than the confidence interval:

      β = β̂ ± t(n-m-1, 1-λ/2) · Ŝβ̂ ,    where   Ŝβ̂ = sqrt( ∑t ε̂t² / ∑t yt² )

• Therefore, if the t-statistic ≡ |β̂ / Ŝβ̂| < t(n-m-1, 1-λ/2), then we do not have a statistically significant linear relation between Ct and Yt.  [n - m - 1 ≡ df ≡ degrees of freedom]
• For the (common) 95% confidence level, the t-statistic should be greater than about 2.

4. Fitness

• Though we can find the line that best fits the data, should the relation really be linear?
• The fitness of the linear regression (how accurately/powerfully the fitted line explains the data) is evaluated by the following criterion:

      R² = ∑t ĉt² / ∑t ct² = 1 - ∑t ε̂t² / ∑t ct²    =>    0 ≤ R² ≤ 1

• Is there a difference between "correlation" and "causality"?

5. Summary

• The theoretical economic model assumes a linear relation in levels:
      Ct = α + β Yt + εt   =>   εt = Ct - α - β Yt
• The OLS econometric model imposes the following restrictions:
      (I)  ∑t εt = 0
      (II) Min ∑t εt²
• Gauss-Markov theorem: the OLS estimators are BLUE.
• Estimators:
      β̂ = ∑t ct yt / ∑t yt²              =>   β̂ = SScy / SSyy
      β̂ = [∑t ct yt / n] / [∑t yt² / n]  =>   β̂ = σ̂cy / σ̂y²          (the covariance of Ct and Yt over the variance of Yt, where Variance(Yt) = STD(Yt)²)
      β̂ = [σ̂cy / (σ̂c σ̂y)] · [σ̂c / σ̂y]   =>   β̂ = ρ̂cy · (σ̂c / σ̂y)    (ρ̂cy is the correlation of Ct and Yt, with |ρ̂cy| ≤ 1)
      α̂ = C̄ - β̂ Ȳ
• (1-λ)% confidence interval:  β̂ ± t(n-m-1, 1-λ/2) · Ŝβ̂ ,  where  Ŝβ̂ = sqrt( ∑t ε̂t² / ∑t yt² ).
  If the t-statistic ≡ |β̂ / Ŝβ̂| < t(n-m-1, 1-λ/2), then there is NO statistically significant linear relation between Ct and Yt.
• Linear fitness:  R² = ∑t ĉt² / ∑t ct² = 1 - ∑t ε̂t² / ∑t ct²    =>    0 ≤ R² ≤ 1
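As a rough sketch of how the summarized formulas fit together (again my own illustration, not part of the notes), the function below computes Ŝβ̂, the t-statistic, the (1-λ) confidence interval and R² using the lecture's definitions, with the critical value t(n-m-1, 1-λ/2) taken from scipy.stats.t. The scipy dependency, the function name ols_inference and all variable names are assumptions made for this example.

```python
# Illustrative sketch (not from the notes) of the confidence and fitness formulas above,
# using the lecture's definitions:
#   S_beta = sqrt( sum_t e_t^2 / sum_t y_t^2 ),   R^2 = 1 - sum_t e_t^2 / sum_t c_t^2.
from math import sqrt
from scipy.stats import t as t_dist

def ols_inference(C, Y, alpha_hat, beta_hat, lam=0.05, m=1):
    """Standard error, t-statistic, (1-lam) confidence interval and R^2 for beta_hat."""
    n = len(C)
    C_bar, Y_bar = sum(C) / n, sum(Y) / n
    c = [Ct - C_bar for Ct in C]                                   # deviations from the mean
    y = [Yt - Y_bar for Yt in Y]
    e = [Ct - alpha_hat - beta_hat * Yt for Ct, Yt in zip(C, Y)]   # residuals e_t
    S_beta = sqrt(sum(et ** 2 for et in e) / sum(yt ** 2 for yt in y))
    t_stat = abs(beta_hat / S_beta)
    t_crit = t_dist.ppf(1 - lam / 2, n - m - 1)                    # t(n-m-1, 1-lam/2)
    return {
        "S_beta": S_beta,
        "t_stat": t_stat,
        "t_crit": t_crit,
        "ci": (beta_hat - t_crit * S_beta, beta_hat + t_crit * S_beta),
        "r2": 1 - sum(et ** 2 for et in e) / sum(ct ** 2 for ct in c),
        "significant": t_stat > t_crit,
    }
```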
6. A numerical example

  Observation #   Ct    Yt    ct    yt    ct²    yt²    ct·yt   Ĉt    ε̂t    ε̂t²
  1              100   137     3    20      9    400      60    112   -12    149
  2               90   115    -7    -2     49      4      14     95    -5     30
  3               75    92   -22   -25    484    625     550     78    -3      9
  4              110   120    13     3    169      9      39     99    11    115
  5              104   116     7    -1     49      1      -7     96     8     60
  6              120   142    23    25    529    625     575    116     4     16
  7               80    97   -17   -20    289    400     340     82    -2      3
  Sum            679   819     0     0   1578   2064    1571    679     0    382
  Average         97   117     0     0    225    295     224     97     0     55

  Therefore, the estimates are (a hat, ^, should be added to each symbol below):

  [Figure: scatter of Ct against Yt (Yt axis from 90 to 150) with the fitted regression line.]

      σc² = 225      Scc = 1578
      σy² = 295      Syy = 2064
      σyc = 224      Syc = 1571
      ρcy = 0.93     Sβ̂ = 0.43
      β   = 0.76     t-statistic = 1.77
      α   = 7.95     t(n-m-1, 1-λ/2) = 2.571
      R²  = 0.76

  95% confidence interval? Is it significant?
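Plugging the seven observations from the table into the two sketches above reproduces the reported estimates; this check is mine, not part of the original notes, and it reuses the hypothetical ols_simple and ols_inference helpers.

```python
# Data taken from the numerical example above (Ct, Yt for the 7 observations).
C = [100, 90, 75, 110, 104, 120, 80]
Y = [137, 115, 92, 120, 116, 142, 97]

alpha_hat, beta_hat = ols_simple(C, Y)                          # ~7.95, ~0.76
stats = ols_inference(C, Y, alpha_hat, beta_hat, lam=0.05, m=1)
# stats["S_beta"] ~0.43,  stats["t_stat"] ~1.77,  stats["t_crit"] ~2.571,
# stats["r2"]     ~0.76,  stats["ci"]     ~(-0.35, 1.87)
```

By the notes' own criterion, the t-statistic (about 1.77) is below t(5, 0.975) ≈ 2.571, and the 95% confidence interval 0.76 ± 2.571 × 0.43 ≈ [-0.35, 1.87] contains zero, so the estimated linear relation is not statistically significant at the 95% level.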
7. Caveats

• (I) Spurious ...