Econ 3120 1st Edition Lecture 3

Outline of Last Lecture
I. Marginal Distributions
II. Expectations and Variance
III. Covariance

Outline of Current Lecture
IV. Covariance
V. Conditional Expectation
VI. Normal Distribution

Current Lecture

2.3 Covariance

When we are dealing with more than one random variable, it is useful to summarize how the two random variables move together. Suppose we have two random variables X and Y, and define E(X) = µ_X and E(Y) = µ_Y. The covariance between these two random variables is defined as

Cov(X,Y) = σ_XY = E[(X − µ_X)(Y − µ_Y)]

We can also write the covariance as

Cov(X,Y) = E[(X − µ_X)Y] = E[X(Y − µ_Y)] = E(XY) − µ_X µ_Y

2.3.1 Properties of the covariance

1. If X and Y are independent, then Cov(X,Y) = 0. This follows since E(XY) = E(X)E(Y) when X and Y are independent.

2. For any constants a_1, b_1, a_2, b_2,
Cov(a_1 X + b_1, a_2 Y + b_2) = a_1 a_2 Cov(X,Y)

Now that we know about the covariance, we can state a third property of the variance:

3. Var(aX + bY) = a² Var(X) + b² Var(Y) + 2ab Cov(X,Y)

One issue with covariance is that its units are difficult to interpret. It turns out that we can scale the covariance by the standard deviations of both variables to get the unit-free correlation:

Corr(X,Y) = ρ_XY = Cov(X,Y) / [sd(X) sd(Y)] = σ_XY / (σ_X σ_Y)

2.4 Conditional Expectation

In econometrics we often want to know how much one variable X tells us about another variable Y. One way to do this is by using covariance and correlation, but another concept we will be using a lot is the conditional expectation. The conditional expectation, written E(Y|X = x) (often shortened to either E(Y|X) or E(Y|x)), tells us the mean of Y conditional on some value of X. It is defined in a similar way to the unconditional expectation, but using the conditional probability distribution functions:

discrete r.v.: E(Y|x) = ∑_y y · f(y|x)

continuous r.v.: E(Y|x) = ∫ y · f(y|x) dy

Example: Suppose we are studying the relationship between schooling and earnings, and that hourly wages and schooling are our random variables. How does the mean wage vary with the schooling level? Our CEF (conditional expectation function) might look something like this:

E(WAGE|EDUC) = 4 + 0.6 · EDUC

Thus, for each level of schooling, we know the mean wage.

Properties of the conditional expectation:

1. If X and Y are independent, then E(Y|X) = E(Y).

2. E(E(Y|X)) = E(Y). This is called the "law of iterated expectations."

Example: Suppose we want to know E(WAGE) in the example above, and we know that E(EDUC) = 11.5. Then, by the law of iterated expectations,

E(WAGE) = E(E(WAGE|EDUC)) = E(4 + 0.6 · EDUC) = 4 + 0.6 · E(EDUC) = 4 + 0.6 · 11.5 = 10.9
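To make the covariance, correlation, and law-of-iterated-expectations statements above concrete, here is a minimal Python sketch; it is my own illustration, not part of the lecture. Only the CEF E(WAGE|EDUC) = 4 + 0.6 · EDUC and E(EDUC) = 11.5 come from the example above; the distribution of EDUC, the error term added around the CEF, and the sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Illustrative assumption: schooling drawn so that E(EDUC) = 11.5 (the spread is arbitrary).
educ = rng.normal(loc=11.5, scale=2.0, size=n)

# CEF from the example, E(WAGE|EDUC) = 4 + 0.6*EDUC, plus an independent mean-zero
# error term added only for this simulation.
wage = 4 + 0.6 * educ + rng.normal(loc=0.0, scale=1.0, size=n)

# Sample covariance and the unit-free correlation.
cov_xy = np.cov(educ, wage)[0, 1]
corr_xy = np.corrcoef(educ, wage)[0, 1]
print("Cov:", cov_xy, "Corr:", corr_xy)

# Covariance property 2: Cov(a1*X + b1, a2*Y + b2) = a1*a2*Cov(X, Y).
print(np.cov(3 * educ + 1, -2 * wage + 5)[0, 1], "vs", 3 * (-2) * cov_xy)

# Law of iterated expectations: E(WAGE) = E(E(WAGE|EDUC)) = 4 + 0.6*E(EDUC) = 10.9.
print("E(WAGE):", wage.mean(), "vs", 4 + 0.6 * educ.mean())
```

With a sample this large, the printed sample moments should match the population statements to within small sampling error.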
3 Normal Distribution

Because of its properties, the normal distribution is probably the most commonly used distribution in statistics and econometrics. We won't go through all of the properties of the normal here, but they will become apparent as we go along. The normal is described by the pdf

f(x; µ, σ) = [1 / (σ√(2π))] exp[−(x − µ)² / (2σ²)],  −∞ < x < ∞

The parameters are µ = E(X) and σ² = Var(X). We also write a normally distributed random variable X as X ∼ Normal(µ, σ²).

One special case of the normal is the standard normal, which has mean 0 and variance 1:

φ(z) = f(z) = [1 / √(2π)] exp[−z² / 2],  −∞ < z < ∞

We often convert normal random variables to standard normal because values of the standard normal are more easily computed. The standard normal cumulative distribution function is denoted by Φ(z). If we can convert a variable to standard normal and express probabilities in terms of Φ(z), we can look up the values in a table.

Some useful properties of the normal distribution:

1. The standard normal is symmetric about the origin. That is, for a positive constant c, φ(c) = φ(−c), or Φ(c) = 1 − Φ(−c).

2. If X ∼ Normal(µ, σ²), then (X − µ)/σ ∼ Normal(0, 1).

3. If X ∼ Normal(µ, σ²), then aX + b ∼ Normal(aµ + b, a²σ²).

4. Any linear combination of independent, identically distributed (i.i.d.) normal random variables has a normal distribution.

Example 1: If X_1, X_2, X_3 are i.i.d. random variables distributed as Normal(µ, σ²), what is the distribution of X_1 + 2X_2
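As a quick illustration of how standardization is used, the sketch below (again my own, not from the lecture) computes P(X ≤ x) for X ∼ Normal(µ, σ²) two ways: by converting to a standard normal and evaluating Φ, and directly. The particular values of µ, σ, x, and c are arbitrary placeholders, and using scipy.stats.norm for Φ is an assumption about tooling rather than anything specified in the notes.

```python
from scipy.stats import norm  # norm.cdf with the default loc=0, scale=1 is the standard normal CDF Phi

# Illustrative parameters (not from the lecture): X ~ Normal(mu, sigma^2).
mu, sigma = 2.0, 3.0
x = 5.0

# Property 2: Z = (X - mu)/sigma ~ Normal(0, 1), so P(X <= x) = Phi((x - mu)/sigma).
z = (x - mu) / sigma
print(norm.cdf(z))                       # via standardization, i.e. a Phi "table lookup"
print(norm.cdf(x, loc=mu, scale=sigma))  # the same probability computed directly

# Property 1 (symmetry): Phi(c) = 1 - Phi(-c) for a positive constant c.
c = 1.96
print(norm.cdf(c), 1 - norm.cdf(-c))
```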

