Econ 3120, 1st Edition - Lecture 9

Outline of Last Lecture
I. Confidence Intervals and t-tests

Outline of Current Lecture
II. Regression

Over the next several lectures we'll start by examining two variables. Examples: the relationship between education and wages, firm output and production inputs, state cigarette taxes and cigarette consumption.

In these relationships we define the following:
- dependent variable: the variable to be explained (e.g., wages)
- independent variable: the variable that gives us information on the dependent variable (e.g., education); also called an explanatory variable

We typically denote a dependent variable as $y$ and an independent variable as $x$. To describe the relationship between the variables we define some function $y = f(x)$. For our purposes we'll assume that this relationship is linear:
$$y = \beta_0 + \beta_1 x + u,$$
where $\beta_0$ and $\beta_1$ are parameters which describe this relationship and $u$ is an error term. Clearly $x$ is not the only factor that affects $y$. The variable $u$ represents all of the other factors determining $y$. For example, in the schooling-earnings relationship, all of the other factors that influence wages (e.g., experience, ability, location, etc.) go into $u$.

2 Assumptions

In order to analyze the relationship between $x$ and $y$, and to derive estimates for $\beta_0$ and $\beta_1$, we need to make a few assumptions.

We first assume that the unconditional expectation of $u$ is zero: $E(u) = 0$. We can always make this assumption because we can simply re-scale the intercept $\beta_0$ so that it is true.

A key assumption in regression modeling is that $x$ and $u$ are independent, that is, $E(u \mid x) = E(u) = 0$. We call this the zero conditional mean assumption.

Under this second assumption we can write the conditional expectation function (CEF) of $y$ as
$$E(y \mid x) = E(\beta_0 + \beta_1 x + u \mid x) = \beta_0 + \beta_1 x + E(u \mid x) = \beta_0 + \beta_1 x.$$
From this equation we have the primary interpretation of the parameter $\beta_1$: it is the slope of the CEF with respect to $x$. In other words, it represents the change in the expected value of $y$, conditional on $x$, with respect to $x$.

3 Estimating $\beta_0$ and $\beta_1$ using ordinary least squares (OLS) regression

3.1 Preliminaries

Before we derive the OLS estimators, there are two quick things to note. First, the sample covariance of $x$ and $y$ is defined as
$$\widehat{\text{Cov}}(x,y) = \frac{1}{n-1}\sum_i (x_i - \bar{x})(y_i - \bar{y}).$$
Similar to the sample variance, this can be shown to be an unbiased and efficient estimator of the covariance of $x$ and $y$.

Second, we need to know that
$$\sum_i (x_i - \bar{x})(y_i - \bar{y}) = \sum_i x_i (y_i - \bar{y}) = \sum_i y_i (x_i - \bar{x})
\quad\text{and}\quad
\sum_i (x_i - \bar{x})(x_i - \bar{x}) = \sum_i x_i (x_i - \bar{x}).$$
The proof is straightforward.

3.2 Derivation of OLS Estimates

We start with our basic relationship
$$y = \beta_0 + \beta_1 x + u. \qquad (1)$$
Suppose we have a random sample of observations on $x$ and $y$, denoted $(x_i, y_i)$, $i = 1, \dots, n$. Since all of the data have the same functional relationship, we can write (1) observation-wise as
$$y_i = \beta_0 + \beta_1 x_i + u_i.$$
Recall the assumption above:
$$E(u) = 0. \qquad (2)$$
We can use the zero conditional mean assumption above to write
$$\text{Cov}(x, u) = E(xu) = 0. \qquad (3)$$
We can then substitute $u = y - \beta_0 - \beta_1 x$ into equations (2) and (3) to yield
$$E(y - \beta_0 - \beta_1 x) = 0 \qquad (4)$$
$$E[x(y - \beta_0 - \beta_1 x)] = 0. \qquad (5)$$

Define our estimates of $\beta_0$ and $\beta_1$ as $\hat{\beta}_0$ and $\hat{\beta}_1$. Deriving these estimates involves using the sample analogs of equations (4) and (5):
$$\frac{1}{n}\sum_i \left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right) = 0 \qquad (6)$$
$$\frac{1}{n}\sum_i x_i \left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right) = 0 \qquad (7)$$

Equation (6) can be written as
$$\bar{y} - \hat{\beta}_0 - \hat{\beta}_1 \bar{x} = 0 \;\Longrightarrow\; \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}. \qquad (8)$$
Equation (7) can be written as
$$\sum_i x_i y_i - \sum_i x_i \left(\hat{\beta}_0 + \hat{\beta}_1 x_i\right) = 0. \qquad (9)$$
Substituting $\hat{\beta}_0$ from (8) into (9), we have
$$\sum_i x_i y_i - \sum_i x_i \left(\bar{y} - \hat{\beta}_1 \bar{x}\right) - \hat{\beta}_1 \sum_i x_i^2 = 0$$
$$\sum_i x_i (y_i - \bar{y}) = \hat{\beta}_1 \sum_i x_i (x_i - \bar{x})$$
$$\hat{\beta}_1 = \frac{\sum_i x_i (y_i - \bar{y})}{\sum_i x_i (x_i - \bar{x})}
= \frac{\frac{1}{n-1}\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\frac{1}{n-1}\sum_i (x_i - \bar{x})^2}
= \frac{\widehat{\text{Cov}}(x,y)}{\widehat{\text{Var}}(x)},$$
and
$$\hat{\beta}_0 = \bar{y} - \frac{\widehat{\text{Cov}}(x,y)}{\widehat{\text{Var}}(x)}\,\bar{x}.$$
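The derivation above can be checked numerically. The following is a minimal Python sketch (not part of the lecture): it simulates data from an assumed linear model, computes $\hat{\beta}_1 = \widehat{\text{Cov}}(x,y)/\widehat{\text{Var}}(x)$ and $\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$ directly from the formulas, and cross-checks them against numpy's built-in least-squares line fit. The data-generating values and variable names are illustrative assumptions.

import numpy as np

# Minimal sketch on simulated data; the model below is an illustrative
# assumption (e.g., y could be log wages and x years of schooling).
rng = np.random.default_rng(0)
n = 500
beta0_true, beta1_true = 2.0, 0.5
x = rng.normal(12, 2, size=n)            # independent variable
u = rng.normal(0, 1, size=n)             # error term, independent of x
y = beta0_true + beta1_true * x + u      # dependent variable

# Sample-analog estimates: beta1_hat = Cov(x, y) / Var(x),
# beta0_hat = ybar - beta1_hat * xbar
beta1_hat = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
beta0_hat = y.mean() - beta1_hat * x.mean()

# Cross-check against numpy's least-squares line fit
slope, intercept = np.polyfit(x, y, deg=1)
print(beta0_hat, beta1_hat)      # close to the true values 2.0 and 0.5
print(intercept, slope)          # matches beta0_hat, beta1_hat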
3.3 Predicted (fitted) values

Based on the OLS estimates, the predicted values of $y$ are given by $\hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1 x_i$. In other words, we can predict $y$ for each value of $x$ in our sample.

3.4 Residuals

The OLS residual is defined as $\hat{u}_i = y_i - \hat{y}_i = y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i$.

3.5 Why it's called Ordinary Least Squares

It turns out that the estimators we derived above for $\beta_0$ and $\beta_1$ are the same as those found by minimizing the sum of squared residuals, $\sum_i \hat{u}_i^2 = \sum_i \left(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\right)^2$.
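To connect the fitted values and residuals with the "least squares" name, here is a short follow-up sketch (again on simulated data, an illustrative assumption rather than lecture code). It verifies that the residuals satisfy the sample first-order conditions (6) and (7), and that the sum of squared residuals at the OLS estimates is no larger than at nearby alternative parameter values.

import numpy as np

# Same simulated data and estimates as in the previous sketch (assumed model).
rng = np.random.default_rng(0)
n = 500
x = rng.normal(12, 2, size=n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=n)
beta1_hat = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
beta0_hat = y.mean() - beta1_hat * x.mean()

# Fitted values and residuals
y_hat = beta0_hat + beta1_hat * x
u_hat = y - y_hat

# The sample first-order conditions (6) and (7) hold (up to rounding):
print(np.isclose(u_hat.mean(), 0.0))           # (1/n) * sum of u_hat_i = 0
print(np.isclose(np.mean(x * u_hat), 0.0))     # (1/n) * sum of x_i * u_hat_i = 0

# "Least squares": the OLS estimates minimize the sum of squared residuals,
# so any other (b0, b1) gives an SSR at least as large.
def ssr(b0, b1):
    return np.sum((y - b0 - b1 * x) ** 2)

ssr_ols = ssr(beta0_hat, beta1_hat)
for db0, db1 in [(0.1, 0.0), (0.0, 0.05), (-0.1, 0.05)]:
    assert ssr(beta0_hat + db0, beta1_hat + db1) >= ssr_ols
print(ssr_ols)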

