MATH136/STAT219 Lecture 11, October 15, 2008

Last Time
• Definition of stochastic process
• Sample paths
• Finite-dimensional distributions
• Versions and modifications
Today's lecture: Section 3.2

Linear Algebra Notation
• The dot product of two column vectors $x = (x_1, \ldots, x_n)'$ and $y = (y_1, \ldots, y_n)'$ is
  $(x, y) = x_1 y_1 + \cdots + x_n y_n$
• A symmetric $n \times n$ matrix $A$ is nonnegative definite if
  $(x, Ax) = x'Ax = \sum_{j=1}^{n} \sum_{k=1}^{n} x_j A_{jk} x_k \ge 0$ for all vectors $x \in \mathbb{R}^n$
• A symmetric $n \times n$ matrix $A$ is positive definite if $(x, Ax) > 0$ for all non-zero column vectors $x \in \mathbb{R}^n$

Random Vectors
• Given a measurable space $(\Omega, \mathcal{F})$, an $\mathcal{F}$-measurable map $X : \Omega \to \mathbb{R}^n$ is a random vector
• $X = (X_1, \ldots, X_n)$ is a random vector if and only if $X_i$ is a random variable for each $i = 1, \ldots, n$
• Any random vector has a distribution (law) on $(\mathbb{R}^n, \mathcal{B}_n)$
• A random vector $X$ has a probability density function $f_X$ if
  $P(a_i \le X_i \le b_i) = \int_{a_1}^{b_1} \cdots \int_{a_n}^{b_n} f_X(x_1, \ldots, x_n)\, dx_n \cdots dx_1$
  for all $a_i < b_i \in \mathbb{R}$, $i = 1, \ldots, n$

Characteristic Function and its Properties
• The characteristic function of a random vector $X = (X_1, \ldots, X_n)$ is the function $\Phi_X : \mathbb{R}^n \to \mathbb{C}$ defined by
  $\Phi_X(\theta) = E(e^{i(\theta, X)}) = E(\cos((\theta, X))) + i\, E(\sin((\theta, X)))$,  (1)
  where $i = \sqrt{-1}$ and $\theta = (\theta_1, \ldots, \theta_n) \in \mathbb{R}^n$
• $|\Phi_X(\theta)| \le 1$ for all $\theta \in \mathbb{R}^n$
• $\Phi_X(\theta)$ is continuous as a function of $\theta$

Characteristic Function and its Properties (cont.)
• The random variables $X_1, \ldots, X_n$ are independent if and only if
  $\Phi_X(\theta) = \prod_{k=1}^{n} \Phi_{X_k}(\theta_k)$ for all $\theta = (\theta_1, \ldots, \theta_n)$
• Two random vectors have the same law if and only if they have the same characteristic function
• $X_n$ converges to $X$ in distribution as $n \to \infty$ if and only if $\Phi_{X_n}(\theta) \to \Phi_X(\theta)$ for all $\theta \in \mathbb{R}^n$

Gaussian Random Vector
• A random vector $X$ has a Gaussian (or multivariate normal) distribution if its characteristic function has the form
  $\Phi_X(\theta) = e^{i(\theta, \mu) - \frac{1}{2}(\theta, \Sigma\theta)}$, $\theta \in \mathbb{R}^n$,
  for some nonnegative definite $n \times n$ matrix $\Sigma$ and some vector $\mu \in \mathbb{R}^n$
• A random variable $X$ is Gaussian (or normal) if its characteristic function is
  $\Phi_X(\theta) = e^{i\theta\mu - \frac{1}{2}\theta^2\sigma^2}$, $\theta \in \mathbb{R}$,
  for some $\mu \in \mathbb{R}$ and $\sigma^2 \ge 0$

Mean and Covariance of Gaussian Random Vector
• If $X$ has a Gaussian distribution with characteristic function $\Phi_X(\theta) = e^{i(\theta, \mu) - \frac{1}{2}(\theta, \Sigma\theta)}$, $\theta \in \mathbb{R}^n$,
• then $\mu$ is the mean vector of $X$, i.e. $E(X_k) = \mu_k$, $k = 1, \ldots, n$,
• and $\Sigma$ is the covariance matrix of $X$, i.e.
  $\mathrm{Cov}(X_j, X_k) \doteq E[(X_j - \mu_j)(X_k - \mu_k)] = \Sigma_{jk}$, $j, k = 1, \ldots, n$
• Thus, a Gaussian random vector is completely characterized by its mean vector and covariance matrix

Non-degenerate Gaussian Random Vector
• A random vector $X$ has a non-degenerate Gaussian distribution if $\Sigma$ is positive definite
• A random vector $X$ with a non-degenerate Gaussian distribution has density
  $f_X(x) = \frac{1}{(2\pi)^{n/2} (\det \Sigma)^{1/2}}\, e^{-\frac{1}{2}(x - \mu,\, \Sigma^{-1}(x - \mu))}$, $x \in \mathbb{R}^n$
• A Gaussian random variable with $\sigma > 0$ has density
  $f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}$, $x \in \mathbb{R}$

Convergence of Gaussian Random Vectors
• Let $X^{(m)}$, $m = 1, 2, \ldots$ be a sequence of Gaussian random vectors with mean $\mu^{(m)}$ and covariance matrix $\Sigma^{(m)}$, and let $X$ be another random vector.
• If $X^{(m)} \to X$ in $L^2$ as $m \to \infty$,
• then $X$ is a Gaussian random vector with mean $\mu$ and covariance matrix $\Sigma$ given by
  $\mu = \lim_{m \to \infty} \mu^{(m)}$, $\Sigma = \lim_{m \to \infty} \Sigma^{(m)}$

Alternate Definitions of Gaussian Random Vector
• A random vector $X = (X_1, \ldots, X_n)$ is Gaussian if and only if for all real numbers $a_1, \ldots, a_n$, the random variable $a_1 X_1 + \cdots + a_n X_n$ has a Gaussian distribution
• A random vector $X = (X_1, \ldots, X_n)$ is Gaussian if and only if for all real numbers $b_{11}, b_{12}, \ldots, b_{mn}$, the $m$-dimensional random vector
  $(b_{11} X_1 + \cdots + b_{1n} X_n, \; \ldots, \; b_{m1} X_1 + \cdots + b_{mn} X_n)$
  is Gaussian
• Example: if $(X, Y)$ is a Gaussian random vector, then
  ◦ $X$ is a Gaussian random variable
  ◦ $X + Y$ is a Gaussian random variable
  ◦ $(X + Y, X - Y)$ is a Gaussian random vector

Gaussian Random Vectors and Independence
• If $X = (X_1, \ldots, X_n)$ is a Gaussian random vector with uncorrelated coordinates (that is, $E[X_i X_j] = E[X_i]\, E[X_j]$ for all $i \ne j$),
• then $X$ has independent coordinates, that is, $X_1, \ldots, X_n$ are independent random variables
• It is essential that $X$ is a Gaussian random vector for the above relationship to hold
• In particular, if $X$ and $Y$ are uncorrelated Gaussian random variables, then $X$ and $Y$ need not be independent (see Exercise 3.2.12)
• If $X$ and $Y$ are Gaussian random variables, then $(X, Y)$ is not necessarily a Gaussian random vector

Gaussian Stochastic Process
• A stochastic process $\{X_t : t \in I\}$ is a Gaussian SP if its FDDs are Gaussian
• That is, for all integers $n < \infty$ and times $t_1, \ldots, t_n \in I$, the random vector $(X_{t_1}, \ldots, X_{t_n})$ has a Gaussian distribution
• All distributional properties of a Gaussian process are determined by its mean and autocovariance functions:
  $\mu(t) = E(X_t)$, $\rho(t, s) = E[(X_t - \mu(t))(X_s - \mu(s))]$
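The last slide can be illustrated numerically: each FDD of a Gaussian process is just a multivariate normal built from $\mu(t)$ and $\rho(t, s)$. Below is a minimal sketch using NumPy; the particular choices $\mu(t) = 0$ and $\rho(t, s) = \min(t, s)$ (standard Brownian motion) are illustrative assumptions, not from the lecture, and `sample_fdd` is a hypothetical helper name.

```python
import numpy as np

def sample_fdd(mu, rho, times, n_samples, seed=None):
    """Draw samples of (X_{t1}, ..., X_{tn}) for a Gaussian process
    specified by its mean function mu and autocovariance function rho."""
    rng = np.random.default_rng(seed)
    m = np.array([mu(t) for t in times])
    # Covariance matrix of the FDD: Sigma_jk = rho(t_j, t_k)
    Sigma = np.array([[rho(t, s) for s in times] for t in times])
    # multivariate_normal accepts any nonnegative definite Sigma
    return rng.multivariate_normal(m, Sigma, size=n_samples)

# Illustrative choice: zero mean, rho(t, s) = min(t, s)  (Brownian motion)
times = [0.5, 1.0, 2.0]
X = sample_fdd(lambda t: 0.0, lambda t, s: min(t, s), times,
               n_samples=200_000, seed=0)

emp_mean = X.mean(axis=0)          # should be close to [0, 0, 0]
emp_cov = np.cov(X, rowvar=False)  # should be close to [[min(t_j, t_k)]]
```

Because a Gaussian vector is determined by its mean and covariance, the empirical moments of the sample should reproduce $\mu$ and $\Sigma$ up to sampling error.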
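The caveat about independence (cf. Exercise 3.2.12) can also be checked numerically with the standard counterexample, sketched here in NumPy as an assumption-laden illustration: take $X \sim N(0,1)$ and $Y = SX$ with $S = \pm 1$ a fair random sign independent of $X$. Then $X$ and $Y$ are each standard Gaussian and uncorrelated, yet dependent, and $(X, Y)$ is not a Gaussian random vector, since $X + Y$ has an atom at $0$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
X = rng.standard_normal(n)
S = rng.choice([-1.0, 1.0], size=n)  # fair random sign, independent of X
Y = S * X                            # Y is also standard Gaussian

cov_xy = np.mean(X * Y)                # ~ 0: X and Y are uncorrelated
frac_zero = np.mean(X + Y == 0.0)      # ~ 0.5: X + Y has mass at 0,
                                       #   so X + Y is not Gaussian and
                                       #   (X, Y) is not a Gaussian vector
dep = np.mean(np.abs(X) == np.abs(Y))  # = 1: |Y| is determined by |X|,
                                       #   so X and Y are dependent
```

Here $X + Y = (1 + S)X$ vanishes exactly whenever $S = -1$, which happens with probability $1/2$; a genuinely Gaussian $X + Y$ would put zero mass on any single point.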