
Chapter 7: Random Processes

7.1 Correlation in Random Variables

A random variable X takes on numerical values as the result of an experiment. Suppose that the experiment also produces another random variable, Y. What can we say about the relationship between X and Y?

One of the best ways to visualize the possible relationship is to plot the (X, Y) pairs that are produced by several trials of the experiment. An example of correlated samples is shown in Figure 7.1. The points fall within a somewhat elliptical contour, slanting downward, and centered at approximately (4, 0). The points were created with a random number generator using a correlation coefficient of ρ = −0.5, E[X] = 4, E[Y] = 0. The mean values are the coordinates of the cluster center. The negative correlation coefficient indicates that an increase in X above its mean value generally corresponds to a decrease in Y below its mean value. This tendency makes it possible to make predictions about the value that one variable will take given the value of the other, something which can be useful in many settings.

[Figure 7.1: Scatter plot of random variables X and Y. These random variables have a correlation of ρ = −0.5.]

The joint behavior of X and Y is fully captured in the joint probability distribution. If the random variables are continuous then it is appropriate to use a probability density function, f_XY(x, y). We will presume that the pdf is known or can be estimated. Computation of the usual expected values is then straightforward.

    E[X^m Y^n] = ∫∫_{−∞}^{∞} x^m y^n f_XY(x, y) dx dy    (7.1)

7.1.1 Covariance Function

The covariance function is a number that measures the common variation of X and Y. It is defined as

    cov(X, Y) = E[(X − E[X])(Y − E[Y])]    (7.2)
              = E[XY] − E[X]E[Y]           (7.3)

The covariance is determined by the difference between E[XY] and E[X]E[Y]. If X and Y were statistically independent then E[XY] would equal E[X]E[Y] and the covariance would be zero.
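The equivalence of the two covariance forms (7.2) and (7.3) is easy to check numerically. A minimal sketch using NumPy, drawing samples roughly matching Figure 7.1; the unit variances are an assumption, since the text specifies only ρ = −0.5, E[X] = 4, and E[Y] = 0:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated samples roughly matching Figure 7.1: rho = -0.5,
# E[X] = 4, E[Y] = 0.  Unit variances are an assumption here.
rho = -0.5
samples = rng.multivariate_normal([4.0, 0.0],
                                  [[1.0, rho], [rho, 1.0]],
                                  size=200_000)
x, y = samples.T

# Sample versions of the two equivalent covariance forms.
cov_a = np.mean((x - x.mean()) * (y - y.mean()))   # eq. (7.2)
cov_b = np.mean(x * y) - x.mean() * y.mean()       # eq. (7.3)

print(cov_a, cov_b)  # both close to rho*sigma_x*sigma_y = -0.5
```

The two estimates agree to within floating-point error, and both approach ρσxσy = −0.5 as the sample size grows.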
Hence, the covariance, as its name implies, measures the common variation. The covariance can be normalized to produce what is known as the correlation coefficient, ρ.

    ρ = cov(X, Y) / √(var(X) var(Y))    (7.4)

The correlation coefficient is bounded by −1 ≤ ρ ≤ 1. It will have value ρ = 0 when the covariance is zero and value ρ = ±1 when X and Y are perfectly correlated or anti-correlated.

7.1.2 Autocorrelation Function

The autocorrelation function¹ is very similar to the covariance function. It is defined as

    R(X, Y) = E[XY] = cov(X, Y) + E[X]E[Y]    (7.5)

It retains the mean values in the calculation of the value. The random variables are orthogonal if R(X, Y) = 0.

¹ Be careful not to confuse the term "autocorrelation function" with "correlation coefficient".

7.1.3 Joint Normal Distribution

If X and Y have a joint normal distribution then the probability density function is

    f_XY(x, y) = 1 / (2π σx σy √(1 − ρ²)) · exp{ −[ ((x − µx)/σx)² − 2ρ((x − µx)/σx)((y − µy)/σy) + ((y − µy)/σy)² ] / (2(1 − ρ²)) }    (7.6)

The contours of equal probability are ellipses, as shown in Figure 7.2. The probability changes much more rapidly along the minor axis of the ellipses than along the major axis. The orientation of the elliptical contours is along the line y = x if ρ > 0 and along the line y = −x if ρ < 0. The contours are circles, and the variables are uncorrelated, if ρ = 0. The center of the ellipse is (µx, µy).

7.2 Linear Estimation

It is often the case that one would like to estimate or predict the value of one random variable based on an observation of the other. If the random variables are correlated then this should yield a better result, on average, than just guessing. We will see that this is indeed the case.

[Figure 7.2: The normal probability distribution shown as a surface plot on the left and a contour plot in the center. A number of sample points are shown overlaid on the contour plot in the right frame. The linear predictor line is drawn in the right frame.]
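As a numerical illustration of (7.3), (7.4), and (7.5), the following sketch draws bivariate normal samples with the Figure 7.2 parameters (ρ = −0.7, σx = σy = 1, µx = µy = 0) and estimates the correlation coefficient from them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Samples drawn with the Figure 7.2 parameters:
# rho = -0.7, sigma_x = sigma_y = 1, mu_x = mu_y = 0.
rho = -0.7
x, y = rng.multivariate_normal([0.0, 0.0],
                               [[1.0, rho], [rho, 1.0]],
                               size=200_000).T

cov_xy = np.mean(x * y) - x.mean() * y.mean()    # eq. (7.3)
rho_hat = cov_xy / np.sqrt(x.var() * y.var())    # eq. (7.4)
r_xy = cov_xy + x.mean() * y.mean()              # eq. (7.5): R(X,Y) = E[XY]

print(rho_hat)  # close to -0.7, and always within [-1, 1]
```

Because the means here are zero, the autocorrelation R(X, Y) essentially coincides with the covariance, exactly as (7.5) predicts.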
[Figure 7.2 parameters: ρ = −0.7, σx = σy = 1, µx = µy = 0.]

The task is to construct a rule for the prediction of Y based on an observation of X. We will call the prediction Ŷ, and compute its value with the simple linear equation

    Ŷ = aX + b    (7.7)

where a and b are parameters to be chosen to provide the best results. We are encouraged to select this linear rule when we note that the sample points tend to fall about a sloping line. We would expect a to correspond to the slope and b to the intercept.

To find a means of calculating the coefficients from a set of sample points, construct the predictor error

    ε = E[(Y − Ŷ)²]    (7.8)

We want to choose a and b to minimize ε. Therefore, compute the appropriate derivatives and set them to zero.

    ∂ε/∂a = −2E[(Y − Ŷ) ∂Ŷ/∂a] = 0    (7.9)
    ∂ε/∂b = −2E[(Y − Ŷ) ∂Ŷ/∂b] = 0    (7.10)

Upon substitution of Ŷ = aX + b and rearrangement we get the pair of equations

    E[XY] = aE[X²] + bE[X]    (7.11)
    E[Y] = aE[X] + b          (7.12)

These can be solved for a and b in terms of the expected values. The expected values can themselves be estimated from the sample set.

    a = cov(X, Y) / var(X)             (7.13)
    b = E[Y] − (cov(X, Y) / var(X)) E[X]    (7.14)

The prediction error with these parameter values is

    ε = (1 − ρ²) var(Y)    (7.15)

When the correlation coefficient ρ = ±1 the error is zero, meaning that perfect prediction can be made. When ρ = 0 the variance in the prediction is as large as the variation in Y, and the predictor is of no help at all. For intermediate values of ρ, whether positive or negative, the predictor reduces the error.

7.3 Random Processes

We have seen that a random variable X is a rule which assigns a number to every outcome e of an experiment. The random variable is a function X(e) that maps the set of experiment outcomes to the set of numbers. A random process is a rule that maps every outcome e of an experiment to a function X(t, e).
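Returning briefly to the linear estimator of Section 7.2: the coefficient formulas (7.13) and (7.14) and the error formula (7.15) can be verified numerically. A minimal sketch, again drawing bivariate normal samples with the Figure 7.2 parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Bivariate normal samples with the Figure 7.2 parameters.
rho = -0.7
x, y = rng.multivariate_normal([0.0, 0.0],
                               [[1.0, rho], [rho, 1.0]],
                               size=200_000).T

# Optimal linear predictor Yhat = a*X + b, eqs. (7.13) and (7.14).
cov_xy = np.mean(x * y) - x.mean() * y.mean()
a = cov_xy / x.var()
b = y.mean() - a * x.mean()

# Empirical prediction error (7.8) versus the theoretical value (7.15).
y_hat = a * x + b
eps = np.mean((y - y_hat) ** 2)
eps_theory = (1.0 - rho ** 2) * y.var()

print(a, b, eps, eps_theory)
```

With ρ = −0.7 both error values come out near (1 − 0.7²) = 0.51, about half the variance of Y: an intermediate correlation reduces, but does not eliminate, the prediction error.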
A random process is usually conceived of as a function of time, but there is no reason not to consider random processes that are functions of other independent variables, such as spatial coordinates. The function X(u, v, e) would be a function whose value depended on the location (u, v) and the outcome e, and could be used in representing random variations in an image.

In the following we will deal with one-dimensional random processes to develop a number of basic concepts. Having them in hand, we can then go on to multiple dimensions.

The domain of e



RIT SIMG 713 - Random Processes
