# One Sided Tests, Confidence Intervals



II. One Sided Tests
III. Confidence Intervals

- Lecture number: 9
- Pages: 2
- Type: Lecture Note
- School: Cornell University
- Course: Econ 3120 - Applied Econometrics
- Edition: 1


Lecture 9

Outline of Last Lecture
I. Confidence Intervals and T-tests

Outline of Current Lecture
II. Regression

Over the next several lectures we'll start by examining two variables. Examples:

- the relationship between education and wages
- firm output and production inputs
- state cigarette taxes and cigarette consumption

In these relationships, we define the following:

- dependent variable: the variable to be explained (e.g., wages)
- independent variable: the variable that gives us information on the dependent variable (e.g., education), also called an explanatory variable

We typically denote a dependent variable as y and an independent variable as x. To describe the relationship between the variables, we define some function:

y = f(x)

For our purposes, we'll assume that this relationship is linear:

y = β0 + β1x + u

where β0 and β1 are parameters which describe this relationship, and u is an error term. Clearly, x is not the only factor that affects y. The variable u represents all of the other factors that determine y. For example, in the schooling-earnings relationship, all of the other factors that influence wages (e.g., experience, ability, location, etc.) go into u.

## 2 Assumptions

In order to analyze the relationship between x and y and to derive estimates for β0 and β1, we need to make a few assumptions. We first assume that the unconditional expectation of u is 0:

E(u) = 0

We can always make this assumption, because we can simply re-scale the intercept β0 so that it is true. A key assumption in regression modeling is that x and u are independent, that is,

E(u|x) = E(u) = 0

We call this the zero conditional mean assumption. Under this second assumption, we can write the conditional expectation function (CEF) of y as:

E(y|x) = E(β0 + β1x + u | x) = β0 + β1x + E(u|x) = β0 + β1x

From this equation, we have the primary interpretation of the parameter β1: it represents the slope of the CEF with respect to x.
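The CEF interpretation of β1 can be checked with a small simulation. The sketch below (not part of the lecture; the parameter values β0 = 2, β1 = 0.5 and the schooling-style range for x are hypothetical) generates data from a linear model with a zero-conditional-mean error, estimates E(y|x) at each value of x, and shows that the slope between adjacent conditional means recovers β1.

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1 = 2.0, 0.5  # hypothetical parameter values for illustration

# x takes a few discrete values so E(y|x) can be estimated directly by averaging
x = rng.integers(8, 17, size=200_000).astype(float)  # e.g., years of schooling
u = rng.normal(0.0, 1.0, size=x.size)                # error drawn independently of x
y = beta0 + beta1 * x + u

# estimate the CEF E(y|x) at each value of x
levels = np.unique(x)
cef = np.array([y[x == v].mean() for v in levels])

# slope of the estimated CEF between adjacent x values should be close to beta1
slopes = np.diff(cef) / np.diff(levels)
print(slopes.round(2))
```

Because E(u|x) = 0, averaging y within each x cell estimates β0 + β1x, so each adjacent-cell slope is close to 0.5.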
Equivalently, β1 is the change in the expected value of y (conditional on x) with respect to x.

## 3 Estimating β0 and β1 using ordinary least squares (OLS) regression

### 3.1 Preliminaries

Before we derive the OLS estimators, there are two quick things to note. First, the sample covariance of x and y is defined as

Cov̂(x, y) = (1/(n−1)) ∑(xi − x̄)(yi − ȳ)

Similar to the sample variance, this can be shown to be an unbiased (and efficient) estimator for the covariance of x and y. Second, we need to know that

∑(xi − x̄)(yi − ȳ) = ∑ xi(yi − ȳ) = ∑ yi(xi − x̄)

and

∑(xi − x̄)(xi − x̄) = ∑ xi(xi − x̄)

The proof is straightforward:

### 3.2 Derivation of OLS Estimates

We start with our basic relationship:

y = β0 + β1x + u   (1)

Suppose we have a random sample of observations on x and y denoted {(xi, yi) : i = 1, ..., n}. Since all of the data have the same functional relationship, we can write (1) observation-wise as

yi = β0 + β1xi + ui

Recall the assumption above: E(u) ...
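The preliminaries above lead directly to the familiar OLS formulas β̂1 = Cov̂(x, y)/V̂ar(x) and β̂0 = ȳ − β̂1x̄. The sketch below (an illustration with made-up parameter values, not the lecture's own derivation) verifies the summation identity numerically and cross-checks the covariance-based estimates against NumPy's least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x = rng.normal(12.0, 2.0, size=n)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, size=n)  # hypothetical true beta0=2, beta1=0.5

xbar, ybar = x.mean(), y.mean()

# the summation identity from the preliminaries:
# sum (xi - xbar)(yi - ybar) = sum xi (yi - ybar)
lhs = np.sum((x - xbar) * (y - ybar))
rhs = np.sum(x * (y - ybar))
assert np.isclose(lhs, rhs)

# OLS estimates via the covariance formula
beta1_hat = lhs / np.sum((x - xbar) ** 2)
beta0_hat = ybar - beta1_hat * xbar

# cross-check against numpy's least-squares polynomial fit (degree 1)
b1, b0 = np.polyfit(x, y, deg=1)
print(beta0_hat, beta1_hat)
```

Both routes give the same numbers, since a degree-1 least-squares fit solves exactly the OLS minimization problem.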
