Math 408, Actuarial Statistics I
A.J. Hildebrand

Joint Distributions, Discrete Case

In the following, X and Y are discrete random variables.

1. Joint distribution (joint p.m.f.):

• Definition: $f(x, y) = P(X = x, Y = y)$

• Properties: (1) $f(x, y) \ge 0$, (2) $\sum_{x,y} f(x, y) = 1$

• Representation: The most natural representation of a joint discrete distribution is as a distribution matrix, with rows and columns indexed by $x$ and $y$, and the $xy$-entry being $f(x, y)$. This is analogous to the representation of ordinary discrete distributions as a single-row table. As in the one-dimensional case, the entries in a distribution matrix must be nonnegative and add up to 1.

2. Marginal distributions: The distributions of X and Y, when considered separately.

• Definitions:
  – $f_X(x) = P(X = x) = \sum_y f(x, y)$
  – $f_Y(y) = P(Y = y) = \sum_x f(x, y)$

• Connection with distribution matrix: The marginal distributions $f_X(x)$ and $f_Y(y)$ can be obtained from the distribution matrix as the row sums and column sums of the entries. These sums can be entered in the "margins" of the matrix as an additional column and row.

• Expectation and variance: $\mu_X$, $\mu_Y$, $\sigma_X^2$, $\sigma_Y^2$ denote the (ordinary) expectations and variances of X and Y, computed as usual: $\mu_X = \sum_x x f_X(x)$, etc.

3. Computations with joint distributions:

• Probabilities: Probabilities involving X and Y (e.g., $P(X + Y = 3)$ or $P(X \ge Y)$) can be computed by adding up the corresponding entries in the distribution matrix. More formally, for any set R of points in the xy-plane, $P((X, Y) \in R) = \sum_{(x,y) \in R} f(x, y)$.

• Expectation of a function of X and Y (e.g., $u(x, y) = xy$): $E(u(X, Y)) = \sum_{x,y} u(x, y) f(x, y)$. This formula can also be used to compute the expectation and variance of the marginal distributions directly from the joint distribution, without first computing the marginal distribution. For example, $E(X) = \sum_{x,y} x f(x, y)$.

4. Covariance and correlation:

• Definitions: $\mathrm{Cov}(X, Y) = E(XY) - E(X)E(Y) = E((X - \mu_X)(Y - \mu_Y))$ (covariance of X and Y), $\rho = \rho(X, Y) = \dfrac{\mathrm{Cov}(X, Y)}{\sigma_X \sigma_Y}$ (correlation of X and Y)

• Properties: $|\mathrm{Cov}(X, Y)| \le \sigma_X \sigma_Y$, $-1 \le \rho(X, Y) \le 1$

• Relation to variance: $\mathrm{Var}(X) = \mathrm{Cov}(X, X)$

• Variance of a sum: $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\,\mathrm{Cov}(X, Y)$ (Note the analogy of the latter formula to the identity $(a + b)^2 = a^2 + b^2 + 2ab$; the covariance acts like a "mixed term" in the expansion of $\mathrm{Var}(X + Y)$.)

5. Independence of random variables:

• Definition: X and Y are called independent if the joint p.m.f. is the product of the individual p.m.f.'s, i.e., if $f(x, y) = f_X(x) f_Y(y)$ for all values of x and y.

• Properties of independent random variables: If X and Y are independent, then:
  – The expectation of the product of X and Y is the product of the individual expectations: $E(XY) = E(X)E(Y)$. More generally, this product formula holds for any expectation of a function of X times a function of Y. For example, $E(X^2 Y^3) = E(X^2)E(Y^3)$.
  – The product formula holds for probabilities of the form P(some condition on X, some condition on Y) (where the comma denotes "and"): For example, $P(X \le 2, Y \le 3) = P(X \le 2)\,P(Y \le 3)$.
  – The covariance and correlation of X and Y are 0: $\mathrm{Cov}(X, Y) = 0$, $\rho(X, Y) = 0$.
  – The variance of the sum of X and Y is the sum of the individual variances: $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y)$.
  – The moment-generating function of the sum of X and Y is the product of the individual moment-generating functions: $M_{X+Y}(t) = M_X(t) M_Y(t)$. (Note that it is the sum X + Y, not the product XY, which has this property.)
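All of the recipes in sections 1 through 5 reduce to sums over the entries of the distribution matrix, so they are easy to check by machine. The following is a minimal Python sketch, not part of the original handout: the 2 x 3 matrix `f` and the helper names (`E`, `fX`, `fY`) are made up purely for illustration.

```python
# Illustrative only: a hypothetical 2 x 3 distribution matrix, with rows
# indexed by x in {1, 2} and columns by y in {1, 2, 3}. The values are
# made up; any nonnegative entries summing to 1 would do.
f = {
    (1, 1): 0.25, (1, 2): 0.15, (1, 3): 0.10,
    (2, 1): 0.05, (2, 2): 0.15, (2, 3): 0.30,
}
xs = sorted({x for x, _ in f})
ys = sorted({y for _, y in f})
assert abs(sum(f.values()) - 1.0) < 1e-12  # entries must add up to 1

# Marginals: f_X = row sums, f_Y = column sums.
fX = {x: sum(f[x, y] for y in ys) for x in xs}
fY = {y: sum(f[x, y] for x in xs) for y in ys}

# E(u(X, Y)) = sum over all (x, y) of u(x, y) * f(x, y).
def E(u):
    return sum(u(x, y) * f[x, y] for (x, y) in f)

mu_X, mu_Y = E(lambda x, y: x), E(lambda x, y: y)
var_X = E(lambda x, y: x**2) - mu_X**2
var_Y = E(lambda x, y: y**2) - mu_Y**2

# Covariance and correlation.
cov = E(lambda x, y: x * y) - mu_X * mu_Y
rho = cov / (var_X**0.5 * var_Y**0.5)

# A probability as a sum of matrix entries over a region R: here P(X >= Y).
p_X_ge_Y = sum(f[x, y] for (x, y) in f if x >= y)

# Independence test: f(x, y) = f_X(x) f_Y(y) for all x, y.
independent = all(abs(f[x, y] - fX[x] * fY[y]) < 1e-12 for (x, y) in f)

print(f"mu_X={mu_X}, mu_Y={mu_Y}, Cov={cov:.4f}, rho={rho:.4f}")
print(f"P(X >= Y) = {p_X_ge_Y}, independent: {independent}")
```

For this particular matrix the marginals are $f_X = (0.5, 0.5)$ and $f_Y = (0.3, 0.3, 0.4)$, the covariance works out to 0.20, and the independence test fails, since e.g. $f(1, 1) = 0.25 \ne f_X(1) f_Y(1) = 0.15$.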
6. Conditional distributions:

• Definitions:
  – conditional distribution (p.m.f.) of X given that Y = y: $g(x|y) = P(X = x \mid Y = y) = \dfrac{f(x, y)}{f_Y(y)}$
  – conditional distribution (p.m.f.) of Y given that X = x: $h(y|x) = P(Y = y \mid X = x) = \dfrac{f(x, y)}{f_X(x)}$

• Connection with distribution matrix: Conditional distributions are the distributions obtained by fixing a row or column in the matrix and rescaling the entries in that row or column so that they again add up to 1. For example, $h(y|2)$, the conditional distribution of Y given that X = 2, is the distribution given by the entries in row 2 of the matrix, rescaled by dividing by the row sum (namely, $f_X(2)$): $h(y|2) = f(2, y)/f_X(2)$.

• Conditional expectations and variance: Conditional expectations, variances, etc., are defined and computed as usual, but with conditional distributions in place of ordinary distributions:
  – $E(X|y) = E(X \mid Y = y) = \sum_x x\, g(x|y)$
  – $E(X^2|y) = E(X^2 \mid Y = y) = \sum_x x^2 g(x|y)$
  – $\mathrm{Var}(X|y) = \mathrm{Var}(X \mid Y = y) = E(X^2|y) - E(X|y)^2$

  More generally, for any condition (such as $Y > 0$), the expectation of X given this condition is defined as
  – $E(X \mid \text{condition}) = \sum_x x\, P(X = x \mid \text{condition})$
  and can be computed by starting out with the usual formula for the expectation, but restricting to those terms that satisfy the condition.
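The conditioning recipes mechanize just as easily. Below is another illustrative Python sketch, again not from the handout, reusing the same made-up matrix; `g`, `h`, and the other names are ours. Since $Y > 0$ always holds for this matrix, the restricted-expectation example uses the condition $Y > 1$ instead.

```python
# Same illustrative matrix as in the previous sketch (values made up).
f = {
    (1, 1): 0.25, (1, 2): 0.15, (1, 3): 0.10,
    (2, 1): 0.05, (2, 2): 0.15, (2, 3): 0.30,
}
xs = sorted({x for x, _ in f})
ys = sorted({y for _, y in f})

def g(y):
    """g(x|y) = f(x, y) / f_Y(y): fix column y, rescale by the column sum."""
    col_sum = sum(f[x, y] for x in xs)  # this is f_Y(y)
    return {x: f[x, y] / col_sum for x in xs}

def h(x):
    """h(y|x) = f(x, y) / f_X(x): fix row x, rescale by the row sum."""
    row_sum = sum(f[x, y] for y in ys)  # this is f_X(x)
    return {y: f[x, y] / row_sum for y in ys}

# Conditional expectation and variance of X given Y = y, computed as usual
# but with g(x|y) in place of an ordinary p.m.f.
def E_X_given(y):
    return sum(x * p for x, p in g(y).items())

def Var_X_given(y):
    EX2 = sum(x**2 * p for x, p in g(y).items())
    return EX2 - E_X_given(y)**2

# E(X | condition) for the condition "Y > 1": the usual expectation formula,
# restricted to the terms satisfying the condition and renormalized by the
# probability of the condition.
p_cond = sum(f[x, y] for (x, y) in f if y > 1)
E_X_given_Y_gt_1 = sum(x * f[x, y] for (x, y) in f if y > 1) / p_cond

print("h(y|x=2) =", h(2))
print("E(X|Y=2) =", E_X_given(2), " Var(X|Y=2) =", Var_X_given(2))
print("E(X|Y>1) =", round(E_X_given_Y_gt_1, 4))
```

Fixing the row x = 2 and rescaling by the row sum $f_X(2) = 0.5$ gives $h(y|2) = (0.1, 0.3, 0.6)$, and the restricted sum gives $E(X \mid Y > 1) = 1.15/0.70 \approx 1.64$.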