Chapter 3

GAUSSIAN RANDOM VECTORS AND PROCESSES

3.1 Introduction

Poisson processes and Gaussian processes are similar in terms of their simplicity and beauty. When we first look at a new problem involving stochastic processes, we often start with insights from Poisson and/or Gaussian processes. Problems where queueing is a major factor tend to rely heavily on an understanding of Poisson processes, and those where noise is a major factor tend to rely heavily on Gaussian processes.

Poisson and Gaussian processes share the characteristic that the results arising from them are so simple, well known, and powerful that people often forget how much the results depend on assumptions that are rarely satisfied perfectly in practice. At the same time, these assumptions are often approximately satisfied, so the results, if used with insight and care, are often useful.

This chapter is aimed primarily at Gaussian processes, but starts with a study of Gaussian (normal¹) random variables and vectors. These initial topics are both important in their own right and also essential to an understanding of Gaussian processes. The material here is essentially independent of that on Poisson processes in Chapter 2.

3.2 Gaussian Random Variables

A random variable (rv) W is defined to be a normalized Gaussian rv if it has the density

    f_W(w) = \frac{1}{\sqrt{2\pi}} \exp\Big(\frac{-w^2}{2}\Big).    (3.1)

¹Gaussian rv's are often called normal rv's. I prefer Gaussian, first because the corresponding processes are usually called Gaussian, second because Gaussian rv's (which have arbitrary means and variances) are often normalized to zero mean and unit variance, and third, because calling them normal gives the false impression that other rv's are abnormal.

Exercise 3.1 shows that f_W(w) integrates to 1 (i.e., it is a probability density), and that W has mean 0 and variance 1. If we scale W by a positive constant \sigma to get Z = \sigma W, then the density of the rv Z at z = \sigma w satisfies f_Z(z)\,dz = f_W(w)\,dw.
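The claims of Exercise 3.1 can be spot-checked numerically. The following is a minimal sketch (not part of the text; the grid range, step size, and rounding are arbitrary choices) that approximates the integral, mean, and variance of f_W by a Riemann sum:

```python
import math

def f_W(w):
    """Density of a normalized Gaussian rv, as in (3.1)."""
    return math.exp(-w * w / 2) / math.sqrt(2 * math.pi)

# Riemann-sum approximations over [-10, 10]; the tail mass beyond is negligible.
dw = 1e-4
ws = [-10 + k * dw for k in range(int(20 / dw) + 1)]

total = sum(f_W(w) * dw for w in ws)        # should be close to 1
mean = sum(w * f_W(w) * dw for w in ws)     # should be close to 0
var = sum(w * w * f_W(w) * dw for w in ws)  # should be close to 1

print(round(total, 6), round(mean, 6), round(var, 6))
```

The symmetric grid makes the odd-moment sum cancel almost exactly, which is the numerical counterpart of the evenness argument used later for odd moments.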
Since dz/dw = , the densityof Z isfZ(z) =1fW⇣z⌘=1p2⇡ exp✓z222◆. (3.2)Thus the density function for Z is scaled horizontally by the factor , and then scaledvertically by 1/ (see Figure 3.1). This scaling leaves the integral of the density unchangedwith value 1 and scales the variance by 2. If we let  approach 0, this density approachesan impulse, i.e., Z becomes the atomic rv for which Pr{Z=0} = 1. For convenience inwhat follows, we use (3.2) as the density for Z for all   0, with the above understandingab out the  = 0 case. A rv with the density in (3.2), for any   0, is defined to bea zero-mean Gaussian rv. The values P r(|Z| >  ) = .318, Pr(|Z| > 3) = .0027, andP r(|Z| > 5) = 2.2·1012give us a sense of how small the tails of the Gaussian distributionare.0 2 4 60.3989Figure 3.1: Graph of the density of a normalized Gaussian rv (the taller curve) and ofa zero-mean Gaussian rv with standard deviation 2 (the flatter curve).If we shift Z to U = Z + m, then the density shifts so as to be centered at E [U] = m, andthe density satisfies fU(u) = fZ(u  m). ThusfU(u) =1p2⇡ exp✓(u  m)222◆. (3.3)A random variable U with this density, for arbitrary m and   0, is defined to be aGaussian random variable and is denoted U ⇠ N(m, 2).The added generality of a mean often obscures formulas; we will usually work with rv’sand random vectors (rv’s) of zero mean and insert a mean later if necessary. That is, anyrandom variable U with a mean m can be regarded as the sum of m plus a zero mean rvU m called the fluctuation of U.The moment generating function, gZ(r), of a Gaussian rv Z ⇠ N(0, 2), can be calculated110 CHAPTER 3. GAUSSIAN RANDOM VECTORS AND PROCESSESas follows:gZ(r) = E[exp(rZ)]=1p2⇡ Z11exp(rz) expz222dz=1p2⇡ Z11expz2+ 22rz  r2422+r222dz (3.4)= expr222⇢1p2⇡ Z11exp(z r)222dz(3.5)= expr222. (3.6)We completed the square in the exponent in (3.4). 
We then recognized that the term in braces in (3.5) is the integral of a probability density and thus equal to 1.

Note that g_Z(r) exists for all real r, although it increases rapidly with |r|. If a rv Z has a moment generating function g_Z(r) in an open interval of r around 0, then all the moments of Z can be found from g_Z(r). As shown in Exercise 3.2, the moments for Z ~ N(0, \sigma^2) are given by

    E[Z^{2k}] = \frac{(2k)!}{2^k\,k!}\,\sigma^{2k} = (2k-1)(2k-3)(2k-5)\cdots(3)(1)\,\sigma^{2k}.    (3.7)

Thus, E[Z^4] = 3\sigma^4, E[Z^6] = 15\sigma^6, etc. The odd moments of Z are all zero since z^{2k+1} is an odd function of z and the Gaussian density is even.

For an arbitrary Gaussian rv U ~ N(m, \sigma^2), let Z = U - m. Then Z ~ N(0, \sigma^2) and g_U(r) is given by

    g_U(r) = E[\exp(r(m + Z))] = e^{rm}\,E[e^{rZ}] = \exp(rm + r^2\sigma^2/2).    (3.8)

The characteristic function, g_Z(i\theta) = E[e^{i\theta Z}], for Z ~ N(0, \sigma^2) and i\theta imaginary can be shown to be (e.g., see Chap. 2.12 in [21])

    g_Z(i\theta) = \exp\Big(\frac{-\theta^2\sigma^2}{2}\Big).    (3.9)

The argument in (3.4) to (3.6) does not show this, and one cannot always go from the MGF to the characteristic function simply by replacing real r in a formula by imaginary i\theta. As explained in Section 1.3.11, the characteristic function is useful first because it exists for all rv's and second because an inversion formula (essentially the Fourier transform) exists to uniquely find the distribution of a rv from its characteristic function.

3.3 Gaussian Random Vectors

An n by \ell matrix [A] is an array of n\ell elements arranged in n rows and \ell columns; A_{jk} denotes the kth element in the jth row. Unless specified to the contrary, the elements are real numbers. The transpose [A]^T of an n by \ell matrix [A] is an \ell by n matrix [B] with B_{kj} = A_{jk} for all j, k. A matrix is square if n = \ell, and a square matrix [A] is symmetric if [A] = [A]^T. If [A] and [B] are each n by \ell matrices, [A] + [B] is an n by \ell matrix [C] with C_{jk} = A_{jk} + B_{jk} for all j, k. If [A] is n by \ell and [B] is \ell by r, the matrix [A][B] is an n by r matrix [C] with elements C_{jk} = \sum_i A_{ji} B_{ik}.
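The two forms of (3.7) can be checked against each other and against a direct numerical moment computation. The sketch below is illustrative only (the value of SIGMA, the grid, and the helper names are our own choices, not from the text):

```python
import math

SIGMA = 1.5  # an arbitrary standard deviation for the check

def density(z, sigma=SIGMA):
    """N(0, sigma^2) density, as in (3.2)."""
    return math.exp(-z * z / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def odd_double_factorial(n):
    """Product (n)(n-2)...(3)(1) for odd n >= 1, the right-hand form in (3.7)."""
    p = 1
    while n > 1:
        p, n = p * n, n - 2
    return p

dz = 1e-3
zs = [-12 * SIGMA + i * dz for i in range(int(24 * SIGMA / dz) + 1)]

results = {}
for k in range(1, 5):
    # Left-hand form of (3.7): (2k)! / (2^k k!) is an exact integer.
    coeff = math.factorial(2 * k) // (2 ** k * math.factorial(k))
    assert coeff == odd_double_factorial(2 * k - 1)  # the two forms agree
    closed = coeff * SIGMA ** (2 * k)
    # E[Z^{2k}] approximated by a Riemann sum against the density.
    numeric = sum(z ** (2 * k) * density(z) * dz for z in zs)
    results[k] = (closed, numeric)
    print(k, coeff, closed, round(numeric, 3))
```

For k = 1, 2, 3 the coefficients come out as 1, 3, 15, matching E[Z^2] = \sigma^2, E[Z^4] = 3\sigma^4, and E[Z^6] = 15\sigma^6 above.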
A vector (or column vector) of dimension n is an n by 1 matrix and a row vector of dimension n is a 1 by n matrix. Since the transpose of a row vector is a vector, we denote a vector a as (a_1, ..., a_n)^T. Note that if a is a (column) vector of dimension n, then aa^T is an n by n matrix whereas a^T a is a number. The reader is expected to be familiar with these vector and matrix operations.
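The distinction between aa^T and a^T a can be made concrete with a small pure-Python sketch that follows the definitions above verbatim (the helper names transpose and matmul are our own):

```python
def transpose(A):
    """[B] with B[k][j] = A[j][k], per the definition of [A]^T."""
    return [[A[j][k] for j in range(len(A))] for k in range(len(A[0]))]

def matmul(A, B):
    """C[j][k] = sum_i A[j][i] * B[i][k]; A is n by l, B is l by r."""
    return [[sum(A[j][i] * B[i][k] for i in range(len(B)))
             for k in range(len(B[0]))]
            for j in range(len(A))]

# A column vector a of dimension 3, stored as a 3-by-1 matrix.
a = [[1], [2], [3]]
aT = transpose(a)       # 1 by 3 row vector
outer = matmul(a, aT)   # a a^T: a 3 by 3 matrix
inner = matmul(aT, a)   # a^T a: a 1 by 1 matrix, i.e., the number 14

print(outer)
print(inner[0][0])
```

Here outer is the 3 by 3 matrix [[1,2,3],[2,4,6],[3,6,9]], while inner reduces to the single number 1 + 4 + 9 = 14, illustrating why aa^T and a^T a must never be confused despite the similar notation.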


MIT 6.262 - GAUSSIAN RANDOM VECTORS AND PROCESSES
