
# Marginal Distributions


## Marginal Distributions


- **Lecture number:** 2
- **Pages:** 2
- **Type:** Lecture Note
- **School:** Cornell University
- **Course:** Econ 3120 - Applied Econometrics
- **Edition:** 1

**Lecture 2**

**Outline of Last Lecture**

I. Random Variables
II. Continuous Distributions
III. Multivariate Distributions

**Outline of Current Lecture**

IV. Marginal Distributions
V. Expectations and Variance
VI. Covariance

### 1.4 Marginal Distributions

Suppose we have a joint distribution of $X$ and $Y$ given by $f(x, y)$. How do we find the distribution of $X$ or $Y$ alone? For a discrete random variable $X$, the marginal distribution is

$$g(x) = \sum_y f(x, y)$$

Similarly, the marginal distribution of $Y$ is

$$h(y) = \sum_x f(x, y)$$

In the continuous case, the marginal distributions of $X$ and $Y$ are

$$g(x) = \int_{-\infty}^{\infty} f(x, y)\,dy \qquad \text{and} \qquad h(y) = \int_{-\infty}^{\infty} f(x, y)\,dx$$

### 1.5 Conditional Distributions

The conditional distribution of $X$ given $Y = y$ is defined as

$$f(x \mid y) = \frac{f(x, y)}{h(y)}$$

where $h(y)$ is the value of the marginal distribution of $Y$ at $y$. The conditional distribution of $Y$ is defined similarly: given the marginal distribution $g(x)$ of $X$,

$$w(y \mid x) = \frac{f(x, y)}{g(x)}$$

### 1.6 Independence

Random variables $X$ and $Y$ are independent if and only if

$$f(x, y) = g(x) \cdot h(y)$$

or, equivalently,

$$f(x \mid y) = g(x)$$

## 2 Expectation and Variance

### 2.1 Expectation

**Definition:** If $X$ is a discrete random variable and $f(x)$ is its probability distribution function, the expected value (or mean) of $X$ is

$$E(X) = \mu_X = \sum_x x \cdot f(x)$$

For a continuous random variable, the expected value is

$$E(X) = \mu_X = \int_{-\infty}^{\infty} x \cdot f(x)\,dx$$

We can think of the expected value as a weighted average of $X$: each possible realization of $X$ is weighted by the probability that it occurs.

We can also take expectations of functions of a random variable:

$$E(w(X)) = \int_{-\infty}^{\infty} w(x) \cdot f(x)\,dx$$

or of functions of multiple random variables:

$$E(w(X, Y)) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} w(x, y) \cdot f(x, y)\,dx\,dy$$

#### 2.1.1 Properties of Expectations

1. For any constant $c$, $E(c) = c$.
2. For any constants $a$ and $b$, $E(aX + b) = aE(X) + b$.
3. If $a_1, a_2, \ldots, a_n$ are constants and $X_1, X_2, \ldots, X_n$ are random variables, then $E\left(\sum_{i=1}^{n} a_i X_i\right) = \sum_{i=1}^{n} a_i E(X_i)$.

### 2.2 Variance

While the expectation tells us about the average of $X$, the variance gives us information on the dispersion of $X$ around the mean. The variance is defined as

$$\operatorname{Var}(X) = \sigma_X^2 = E\left[(X - E(X))^2\right]$$

It turns out that this is equivalent to

$$\operatorname{Var}(X) = E(X^2) - (E(X))^2$$

The variance is the average squared deviation from the mean. We often think about dispersion in terms of the standard deviation, which is defined as

$$\operatorname{sd}(X) = \sigma_X = \sqrt{\operatorname{Var}(X)}$$
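The discrete definitions above (marginals as row/column sums, conditionals as ratios, independence as factorization) can be sketched directly in Python. The joint pmf values below are made up purely for illustration, and the function names are our own, not from any library:

```python
# Hypothetical joint pmf f(x, y) of two discrete random variables,
# stored as a nested dict: joint[x][y] = P(X = x, Y = y).
joint = {
    0: {0: 0.10, 1: 0.20},
    1: {0: 0.30, 1: 0.40},
}

def marginal_x(joint):
    """g(x) = sum over y of f(x, y): sum each row of the joint table."""
    return {x: sum(row.values()) for x, row in joint.items()}

def marginal_y(joint):
    """h(y) = sum over x of f(x, y): sum each column of the joint table."""
    h = {}
    for row in joint.values():
        for y, p in row.items():
            h[y] = h.get(y, 0.0) + p
    return h

def conditional_x_given_y(joint, y):
    """f(x | y) = f(x, y) / h(y): rescale the column for y to sum to 1."""
    h_y = marginal_y(joint)[y]
    return {x: row[y] / h_y for x, row in joint.items()}

g = marginal_x(joint)                    # {0: 0.3, 1: 0.7}
h = marginal_y(joint)                    # {0: 0.4, 1: 0.6}
cond = conditional_x_given_y(joint, 1)   # {0: 1/3, 1: 2/3}

# Independence check: does f(x, y) = g(x) * h(y) hold for every cell?
independent = all(
    abs(joint[x][y] - g[x] * h[y]) < 1e-12
    for x in joint for y in joint[x]
)
# Here joint[0][0] = 0.10 but g(0) * h(0) = 0.12, so X and Y
# are not independent.
```

Note that each conditional distribution sums to 1 on its own, since dividing the column by $h(y)$ renormalizes it into a proper pmf.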
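A small numeric check of the expectation and variance formulas may help. The pmf below is invented for illustration; it verifies the shortcut $\operatorname{Var}(X) = E(X^2) - (E(X))^2$ against the definition, and the linearity property $E(aX + b) = aE(X) + b$:

```python
# A made-up pmf for a discrete random variable X (probabilities sum to 1).
pmf = {1: 0.2, 2: 0.5, 3: 0.3}

def expectation(pmf, w=lambda x: x):
    """E[w(X)] = sum over x of w(x) * f(x); w defaults to the identity."""
    return sum(w(x) * p for x, p in pmf.items())

mu = expectation(pmf)                                   # E(X) = 2.1

# Variance two ways: by definition, and by the shortcut formula.
var_def = expectation(pmf, w=lambda x: (x - mu) ** 2)   # E[(X - mu)^2]
var_short = expectation(pmf, w=lambda x: x ** 2) - mu ** 2
sd = var_short ** 0.5                                   # sigma_X

# Linearity of expectation: E(aX + b) = a * E(X) + b.
a, b = 3.0, 1.0
lhs = expectation(pmf, w=lambda x: a * x + b)
rhs = a * mu + b
```

With these numbers, $E(X) = 2.1$, $E(X^2) = 4.9$, so both variance computations give $0.49$ and the standard deviation is $0.7$.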
