CORNELL ECON 3120 - Marginal Distributions

Econ 3120, 1st Edition, Lecture 2

Outline of Last Lecture
I. Random Variables
II. Continuous Distributions
III. Multivariate Distributions

Outline of Current Lecture
IV. Marginal Distributions
V. Expectations and Variance
VI. Covariance

Current Lecture

1.4 Marginal Distributions

Suppose we have a joint distribution of X and Y given by f(x, y). How do we find the distribution of X or Y alone? For a discrete random variable X, the marginal distribution is given by

g(x) = Σ_y f(x, y)

Similarly, the marginal distribution of Y is given by

h(y) = Σ_x f(x, y)

In the continuous case, the marginal distributions of X and Y are given by

g(x) = ∫_{−∞}^{∞} f(x, y) dy   and   h(y) = ∫_{−∞}^{∞} f(x, y) dx

1.5 Conditional Distributions

The conditional distribution of X given Y is defined as

f(x|y) = f(x, y) / h(y),

where h(y) is the value of the marginal distribution of Y at y. The conditional distribution of Y is defined similarly: given the marginal distribution g(x) of X, the conditional distribution is

w(y|x) = f(x, y) / g(x).

1.6 Independence

Random variables X and Y are independent if and only if

f(x, y) = g(x) · h(y),

which is equivalent to f(x|y) = g(x).

2 Expectation and Variance

2.1 Expectation

Definition: If X is a discrete random variable and f(x) is its probability distribution function, the expected value (or mean) of X is given by

E(X) = µ_X = Σ_x x · f(x)

For a continuous random variable, the expected value is given by

E(X) = µ_X = ∫_{−∞}^{∞} x · f(x) dx

We can think of the expected value as a weighted average of X: each possible realization of X is weighted by the probability that it occurs. We can also take expectations of functions of a random variable,

E(w(X)) = ∫_{−∞}^{∞} w(x) · f(x) dx,

or of functions of multiple random variables,

E(w(X, Y)) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} w(x, y) · f(x, y) dx dy.

2.1.1 Properties of Expectations

1. For any constant c, E(c) = c.
2. For any constants a and b, E(aX + b) = aE(X) + b.
3. If a_1, a_2, ..., a_n are constants and X_1, X_2, ..., X_n are random variables, then E(Σ_{i=1}^{n} a_i X_i) = Σ a_i E(X_i).
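To see the definitions above in action, here is a minimal numerical sketch (not part of the original lecture; the joint probabilities and supports are made up for illustration). It computes the marginal and conditional distributions, checks independence, and takes the expectation of X for a small discrete joint distribution.

```python
# Minimal sketch (illustrative values, not from the lecture): a small discrete
# joint distribution f(x, y) stored as a 2-D array, rows indexed by x and
# columns indexed by y.
import numpy as np

x_vals = np.array([0, 1])            # support of X (hypothetical)
y_vals = np.array([0, 1, 2])         # support of Y (hypothetical)
f_xy = np.array([[0.10, 0.20, 0.10],
                 [0.15, 0.25, 0.20]])  # joint probabilities; entries sum to 1

# Marginal distributions: g(x) = sum over y of f(x, y); h(y) = sum over x of f(x, y)
g_x = f_xy.sum(axis=1)
h_y = f_xy.sum(axis=0)

# Conditional distribution of X given Y = y: f(x|y) = f(x, y) / h(y)
y_index = 1
f_x_given_y = f_xy[:, y_index] / h_y[y_index]

# Independence check: does f(x, y) = g(x) * h(y) hold for every (x, y)?
independent = np.allclose(f_xy, np.outer(g_x, h_y))

# Expectation as a probability-weighted average: E(X) = sum over x of x * g(x)
E_X = np.sum(x_vals * g_x)

print(g_x, h_y, f_x_given_y, independent, E_X)
```

With these made-up probabilities the joint distribution is not the product of its marginals, so the independence check returns False, and E(X) is simply 0 · g(0) + 1 · g(1).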
2.2 Variance

While the expectation tells us about the average of X, the variance gives us information about the dispersion of X around its mean. The variance is defined as

Var(X) = σ_X² = E[(X − E(X))²]

It turns out that this is equivalent to

Var(X) = E(X²) − (E(X))²

The variance is the average squared deviation from the mean. We often think about dispersion in terms of the standard deviation, defined as

sd(X) = σ_X = √Var(X)

2.2.1 Properties of the Variance

1. Var(X) ≥ 0.
2. Var(X) = 0 if and only if X can take on only a single value.
3. For any constants a and b, Var(aX + b) = a²Var(X).

2.3 Covariance

When we are dealing with more than one random variable, it is useful to summarize how the two random variables move together. Suppose we have two random variables X and Y, and define E(X) = µ_X and E(Y) = µ_Y. The covariance between them is defined as

Cov(X, Y) = σ_XY = E[(X − µ_X)(Y − µ_Y)]

We can also write the covariance as

Cov(X, Y) = E[(X − µ_X)Y] = E[X(Y − µ_Y)] = E(XY) − µ_X µ_Y

2.3.1 Properties of the Covariance

1. If X and Y are independent, then Cov(X, Y) = 0. This follows because E(XY) = E(X)E(Y) when X and Y are independent.
2. For any constants a_1, b_1, a_2, b_2, Cov(a_1 X + b_1, a_2 Y + b_2) = a_1 a_2 Cov(X, Y).

Now that we know about the covariance, we can state a third property of the variance:

3. Var(aX + bY) = a²Var(X) + b²Var(Y) + 2abCov(X, Y).

One issue with covariance is that its units are difficult to interpret. It turns out that we can scale the covariance by the standard deviations of both variables to get the unit-free correlation:

Corr(X, Y) = ρ_XY = Cov(X, Y) / (sd(X) · sd(Y))
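As a quick numerical check (not part of the original lecture; the sample size, random seed, and coefficients a and b are arbitrary choices), the sketch below verifies on simulated data that the two variance formulas agree, that the two covariance formulas agree, that Var(aX + bY) matches a²Var(X) + b²Var(Y) + 2abCov(X, Y), and that the correlation is a unit-free number.

```python
# Minimal sketch: checking the variance, covariance, and correlation formulas
# on simulated data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
X = rng.normal(size=n)
Y = 0.5 * X + rng.normal(size=n)   # Y is constructed to co-move with X

# Var(X) = E[(X - E(X))^2] = E(X^2) - (E(X))^2
var_X = np.mean((X - X.mean()) ** 2)
var_X_alt = np.mean(X ** 2) - X.mean() ** 2        # same number

# Cov(X, Y) = E[(X - mu_X)(Y - mu_Y)] = E(XY) - mu_X * mu_Y
cov_XY = np.mean((X - X.mean()) * (Y - Y.mean()))
cov_alt = np.mean(X * Y) - X.mean() * Y.mean()     # same number

# Var(aX + bY) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y)
a, b = 2.0, -3.0
Z = a * X + b * Y
lhs = np.mean((Z - Z.mean()) ** 2)
rhs = a**2 * var_X + b**2 * np.var(Y) + 2 * a * b * cov_XY

# Corr(X, Y) = Cov(X, Y) / (sd(X) * sd(Y)), a unit-free number in [-1, 1]
corr_XY = cov_XY / (np.std(X) * np.std(Y))

print(var_X, var_X_alt, cov_XY, cov_alt, lhs, rhs, corr_XY)
```

Because all quantities are computed as population-style sample moments, the left- and right-hand sides of the Var(aX + bY) identity agree up to floating-point error, and the printed correlation is close to 0.5/√1.25 ≈ 0.45 for this construction of Y.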

