# Stanford EE 363 - Lecture Notes (35 pages)


## Lecture Notes


Pages: 35
School: Stanford University
Course: EE 363 - Linear Dynamical Systems


### Lecture 7: Estimation (EE363, Winter 2008-09)

- Gaussian random vectors
- minimum mean-square estimation (MMSE)
- MMSE with linear measurements
- relation to least-squares, pseudo-inverse

### Gaussian random vectors

A random vector $x \in \mathbf{R}^n$ is Gaussian if it has density

$$p_x(v) = (2\pi)^{-n/2} (\det \Sigma)^{-1/2} \exp\left(-\tfrac{1}{2}(v-\bar{x})^T \Sigma^{-1} (v-\bar{x})\right)$$

for some $\Sigma = \Sigma^T > 0$, $\bar{x} \in \mathbf{R}^n$; denoted $x \sim \mathcal{N}(\bar{x}, \Sigma)$.

- $\bar{x} \in \mathbf{R}^n$ is the mean or expected value of $x$, i.e., $\bar{x} = \mathbf{E}\, x = \int v\, p_x(v)\, dv$
- $\Sigma = \Sigma^T > 0$ is the covariance matrix of $x$, i.e.,

$$\Sigma = \mathbf{E}(x-\bar{x})(x-\bar{x})^T = \mathbf{E}\, xx^T - \bar{x}\bar{x}^T = \int (v-\bar{x})(v-\bar{x})^T p_x(v)\, dv$$

(Figure: density of $x \sim \mathcal{N}(0,1)$, $p_x(v) = (2\pi)^{-1/2} e^{-v^2/2}$, plotted over $-4 \le v \le 4$.)

- mean and variance of the scalar random variable $x_i$ are $\mathbf{E}\, x_i = \bar{x}_i$ and $\mathbf{E}(x_i - \bar{x}_i)^2 = \Sigma_{ii}$; hence the standard deviation of $x_i$ is $\sqrt{\Sigma_{ii}}$
- covariance between $x_i$ and $x_j$ is $\mathbf{E}(x_i - \bar{x}_i)(x_j - \bar{x}_j) = \Sigma_{ij}$
- correlation coefficient between $x_i$ and $x_j$ is $\rho_{ij} = \Sigma_{ij}/\sqrt{\Sigma_{ii}\Sigma_{jj}}$
- mean-square deviation of $x$ from $\bar{x}$ is

$$\mathbf{E}\, \|x - \bar{x}\|^2 = \mathbf{E}\, \mathbf{Tr}\,(x-\bar{x})(x-\bar{x})^T = \mathbf{Tr}\, \Sigma = \sum_{i=1}^n \Sigma_{ii}$$

(using $\mathbf{Tr}\, AB = \mathbf{Tr}\, BA$)

Example: $x \sim \mathcal{N}(0, I)$ means the $x_i$ are independent identically distributed (IID) $\mathcal{N}(0,1)$ random variables.

### Confidence ellipsoids

- $p_x(v)$ is constant for $(v - \bar{x})^T \Sigma^{-1}(v - \bar{x}) = \alpha$, i.e., on the surface of the ellipsoid $\mathcal{E}_\alpha = \{v \mid (v - \bar{x})^T \Sigma^{-1}(v - \bar{x}) \le \alpha\}$; thus $\bar{x}$ and $\Sigma$ determine the shape of the density
- the $\eta$-confidence set for a random variable $z$ is the smallest-volume set $S$ with $\mathbf{Prob}(z \in S) \ge \eta$
- in the general case the confidence set has the form $\{v \mid p_z(v) \ge \beta\}$
- the sets $\mathcal{E}_\alpha$ are the confidence sets for a Gaussian, called confidence ellipsoids; $\alpha$ determines the confidence level $\eta$

### Confidence levels

- the nonnegative random variable $(x - \bar{x})^T \Sigma^{-1}(x - \bar{x})$ has a $\chi^2_n$ distribution, so $\mathbf{Prob}(x \in \mathcal{E}_\alpha) = F_{\chi^2_n}(\alpha)$, where $F_{\chi^2_n}$ is the $\chi^2_n$ CDF
- some good approximations: $\mathcal{E}_n$ gives about 50% probability; $\mathcal{E}_{n + 2\sqrt{n}}$ gives about 90% probability

Geometrically:

- the mean $\bar{x}$ gives the center of the ellipsoid
- the semiaxes are $\sqrt{\alpha \lambda_i}\, u_i$, where the $u_i$ are orthonormal eigenvectors of $\Sigma$ with eigenvalues $\lambda_i$

Example: $x \sim \mathcal{N}(\bar{x}, \Sigma)$ with $\bar{x} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$, $\Sigma = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$

- $x_1$ has mean 2, standard deviation $\sqrt{2}$; $x_2$ has mean 1, standard deviation 1
- correlation coefficient between $x_1$ and $x_2$ is $\rho = 1/\sqrt{2}$
- $\mathbf{E}\, \|x - \bar{x}\|^2 = 3$
- the 90% confidence ellipsoid corresponds to $\alpha = 4.6$

(Figure: 100 sample points of $x$ in the $(x_1, x_2)$ plane with the 90% confidence ellipse; here, 91 of the 100 points fall in $\mathcal{E}_{4.6}$.)
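The 90% confidence level in the example is easy to check by simulation. A minimal NumPy sketch, using the $\bar{x}$ and $\Sigma$ from the example (sampling here goes through the Cholesky factor rather than the symmetric square root $\Sigma^{1/2}$ the notes use; any square root of $\Sigma$ gives the right distribution):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# example from the notes: x ~ N(xbar, Sigma) in R^2
xbar = np.array([2.0, 1.0])
Sigma = np.array([[2.0, 1.0], [1.0, 1.0]])

# sample x = xbar + L w with L L^T = Sigma, w ~ N(0, I)
L = np.linalg.cholesky(Sigma)
x = xbar + rng.standard_normal((100_000, 2)) @ L.T

# Mahalanobis distance (x - xbar)^T Sigma^{-1} (x - xbar) per sample
d = np.einsum('ni,ij,nj->n', x - xbar, np.linalg.inv(Sigma), x - xbar)

# for n = 2 the chi^2_2 CDF is F(alpha) = 1 - exp(-alpha/2),
# so alpha = 4.6 should capture about 90% of the samples
frac = np.mean(d <= 4.6)
print(round(1 - math.exp(-4.6 / 2), 3), round(frac, 3))
```

The analytic value $F_{\chi^2_2}(4.6) = 1 - e^{-2.3} \approx 0.90$ and the empirical fraction agree to a few tenths of a percent.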
### Affine transformation

Suppose $x \sim \mathcal{N}(\bar{x}, \Sigma_x)$; consider the affine transformation of $x$, $z = Ax + b$, where $A \in \mathbf{R}^{m \times n}$, $b \in \mathbf{R}^m$. Then $z$ is Gaussian, with mean

$$\mathbf{E}\, z = \mathbf{E}(Ax + b) = A\, \mathbf{E}\, x + b = A\bar{x} + b$$

and covariance

$$\Sigma_z = \mathbf{E}(z - \bar{z})(z - \bar{z})^T = \mathbf{E}\, A(x - \bar{x})(x - \bar{x})^T A^T = A \Sigma_x A^T$$

Examples:

- if $w \sim \mathcal{N}(0, I)$, then $x = \Sigma^{1/2} w + \bar{x}$ is $\mathcal{N}(\bar{x}, \Sigma)$; useful for simulating vectors with a given mean and covariance
- conversely, if $x \sim \mathcal{N}(\bar{x}, \Sigma)$, then $z = \Sigma^{-1/2}(x - \bar{x})$ is $\mathcal{N}(0, I)$; this normalizes and decorrelates $x$, and is called whitening or normalizing

Suppose $x \sim \mathcal{N}(\bar{x}, \Sigma)$ and $c \in \mathbf{R}^n$; the scalar $c^T x$ has mean $c^T \bar{x}$ and variance $c^T \Sigma c$. Thus the (unit-length) direction of minimum variability for $x$ is $u$, where $\Sigma u = \lambda_{\min} u$, $\|u\| = 1$; the standard deviation of $u^T x$ is $\sqrt{\lambda_{\min}}$ (and similarly for maximum variability).

### Degenerate Gaussian vectors

It is convenient to allow $\Sigma$ to be singular (but still $\Sigma = \Sigma^T \ge 0$):

- in this case the density formula obviously does not hold
- meaning: in some directions $x$ is not random at all
- such a random vector $x$ is called a degenerate Gaussian

Write $\Sigma$ as

$$\Sigma = \begin{bmatrix} Q_+ & Q_0 \end{bmatrix} \begin{bmatrix} \Sigma_+ & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} Q_+ & Q_0 \end{bmatrix}^T$$

where $Q = [\, Q_+ \;\; Q_0 \,]$ is orthogonal and $\Sigma_+ > 0$:

- the columns of $Q_0$ are an orthonormal basis for $\mathcal{N}(\Sigma)$
- the columns of $Q_+$ are an orthonormal basis for $\mathbf{range}(\Sigma)$

Then $Q^T x = \begin{bmatrix} z \\ w \end{bmatrix}$, i.e., $x = Q_+ z + Q_0 w$, where

- $z \sim \mathcal{N}(Q_+^T \bar{x}, \Sigma_+)$ is a nondegenerate Gaussian (hence the density formula holds for it)
- $w = Q_0^T \bar{x}$ is not random; it is called the deterministic component of $x$

### Linear measurements

Linear measurements with noise:

$$y = Ax + v$$

- $x \in \mathbf{R}^n$ is what we want to measure or estimate
- $y \in \mathbf{R}^m$ is the measurement
- $A \in \mathbf{R}^{m \times n}$ characterizes the sensors or measurements
- $v$ is sensor noise

Common assumptions:

- $x \sim \mathcal{N}(\bar{x}, \Sigma_x)$
- $v \sim \mathcal{N}(\bar{v}, \Sigma_v)$
- $x$ and $v$ are independent

Here $\mathcal{N}(\bar{x}, \Sigma_x)$ is the prior distribution of $x$ (it describes the initial uncertainty about $x$); $\bar{v}$ is the noise bias or offset (usually 0); $\Sigma_v$ is the noise covariance.

Thus, using

$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} I & 0 \\ A & I \end{bmatrix} \begin{bmatrix} x \\ v \end{bmatrix}, \qquad \begin{bmatrix} x \\ v \end{bmatrix} \sim \mathcal{N}\left( \begin{bmatrix} \bar{x} \\ \bar{v} \end{bmatrix}, \begin{bmatrix} \Sigma_x & 0 \\ 0 & \Sigma_v \end{bmatrix} \right),$$

we can write

$$\mathbf{E} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \bar{x} \\ A\bar{x} + \bar{v} \end{bmatrix}$$

and

$$\mathbf{E} \begin{bmatrix} x - \bar{x} \\ y - \bar{y} \end{bmatrix} \begin{bmatrix} x - \bar{x} \\ y - \bar{y} \end{bmatrix}^T = \begin{bmatrix} I & 0 \\ A & I \end{bmatrix} \begin{bmatrix} \Sigma_x & 0 \\ 0 & \Sigma_v \end{bmatrix} \begin{bmatrix} I & 0 \\ A & I \end{bmatrix}^T = \begin{bmatrix} \Sigma_x & \Sigma_x A^T \\ A\Sigma_x & A\Sigma_x A^T + \Sigma_v \end{bmatrix}$$

- the covariance of the measurement $y$ is $A \Sigma_x A^T + \Sigma_v$
- $A \Sigma_x A^T$ is the "signal covariance"; $\Sigma_v$ is the noise covariance
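The joint mean and covariance of $(x, y)$ derived above can be confirmed by simulation. A minimal sketch, where the particular $A$, $\Sigma_x$, and $\Sigma_v$ values are made up for illustration (not from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, N = 2, 2, 200_000

# illustrative prior and noise model
xbar = np.array([2.0, 1.0])
Sx = np.array([[2.0, 1.0], [1.0, 1.0]])
A = np.array([[1.0, 0.0], [1.0, 1.0]])
vbar = np.zeros(m)
Sv = 0.5 * np.eye(m)

# simulate x ~ N(xbar, Sx), v ~ N(vbar, Sv), and the measurement y = A x + v
x = xbar + rng.standard_normal((N, n)) @ np.linalg.cholesky(Sx).T
v = vbar + rng.standard_normal((N, m)) @ np.linalg.cholesky(Sv).T
y = x @ A.T + v

# predicted joint covariance of (x, y) from the block formula
Sigma = np.block([[Sx,     Sx @ A.T],
                  [A @ Sx, A @ Sx @ A.T + Sv]])

# sample joint covariance should match the prediction
emp = np.cov(np.hstack([x, y]).T)
print(np.round(emp - Sigma, 1))  # entries near zero
```

The sample mean of $y$ likewise matches $A\bar{x} + \bar{v}$ up to Monte Carlo error.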
### Minimum mean-square estimation

Suppose $x \in \mathbf{R}^n$ and $y \in \mathbf{R}^m$ are random vectors (not necessarily Gaussian). We seek to estimate $x$ given $y$; thus we seek a function $\phi : \mathbf{R}^m \to \mathbf{R}^n$ such that $\hat{x} = \phi(y)$ is near $x$.

One common measure of nearness is the mean-square error, $\mathbf{E}\, \|\phi(y) - x\|^2$. The minimum mean-square estimator (MMSE) $\phi_{\mathrm{mmse}}$ minimizes this quantity. The general solution is

$$\phi_{\mathrm{mmse}}(y) = \mathbf{E}(x \mid y),$$

i.e., the conditional expectation of $x$ given $y$.

### MMSE for Gaussian vectors

Now suppose $x \in \mathbf{R}^n$ and $y \in \mathbf{R}^m$ are jointly Gaussian:

$$\begin{bmatrix} x \\ y \end{bmatrix} \sim \mathcal{N}\left( \begin{bmatrix} \bar{x} \\ \bar{y} \end{bmatrix}, \begin{bmatrix} \Sigma_x & \Sigma_{xy} \\ \Sigma_{xy}^T & \Sigma_y \end{bmatrix} \right)$$

After a lot of algebra, the conditional density is

$$p_{x|y}(v \mid y) = (2\pi)^{-n/2} (\det \Lambda)^{-1/2} \exp\left(-\tfrac{1}{2}(v - w)^T \Lambda^{-1} (v - w)\right),$$

where

$$\Lambda = \Sigma_x - \Sigma_{xy} \Sigma_y^{-1} \Sigma_{xy}^T, \qquad w = \bar{x} + \Sigma_{xy} \Sigma_y^{-1} (y - \bar{y}).$$

Hence the MMSE estimator (i.e., the conditional expectation) is

$$\hat{x} = \phi_{\mathrm{mmse}}(y) = \mathbf{E}(x \mid y) = \bar{x} + \Sigma_{xy} \Sigma_y^{-1} (y - \bar{y})$$

- $\phi_{\mathrm{mmse}}$ is an affine function
- the MMSE estimation error $\hat{x} - x$ is a Gaussian random vector: $\hat{x} - x \sim \mathcal{N}(0,\, \Sigma_x - \Sigma_{xy} \Sigma_y^{-1} \Sigma_{xy}^T)$
- note that $\Sigma_x - \Sigma_{xy} \Sigma_y^{-1} \Sigma_{xy}^T \le \Sigma_x$, i.e., the covariance of the estimation error is always less than the prior covariance of $x$

### Best linear unbiased estimator

The estimator

$$\hat{x} = \phi_{\mathrm{blu}}(y) = \bar{x} + \Sigma_{xy} \Sigma_y^{-1} (y - \bar{y})$$

makes sense even when $x$ and $y$ aren't jointly Gaussian. This estimator:

- is unbiased, i.e., $\mathbf{E}\, \hat{x} = \mathbf{E}\, x$
- often works well, and is widely used
- has minimum mean-square error among all affine estimators

It is sometimes called the best linear unbiased estimator.
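Combining the MMSE formula with the linear-measurement model above (where $\Sigma_{xy} = \Sigma_x A^T$ and $\Sigma_y = A\Sigma_x A^T + \Sigma_v$) gives a concrete estimator. A sketch with illustrative dimensions and covariances (these particular values are assumptions, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, N = 3, 2, 100_000

# illustrative prior x ~ N(xbar, Sx), noise v ~ N(0, Sv), measurement y = A x + v
xbar = np.array([1.0, 0.0, -1.0])
Sx = np.diag([2.0, 1.0, 0.5])
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
Sv = 0.1 * np.eye(m)

x = xbar + rng.standard_normal((N, n)) @ np.linalg.cholesky(Sx).T
v = rng.standard_normal((N, m)) @ np.linalg.cholesky(Sv).T
y = x @ A.T + v

# MMSE/BLU estimator: xhat = xbar + Sx A^T (A Sx A^T + Sv)^{-1} (y - A xbar)
Sy = A @ Sx @ A.T + Sv
K = Sx @ A.T @ np.linalg.inv(Sy)
xhat = xbar + (y - xbar @ A.T) @ K.T

# error covariance Sx - Sx A^T Sy^{-1} A Sx; its trace is the mean-square error
Serr = Sx - K @ A @ Sx
print(round(np.trace(Serr), 2), round(np.trace(Sx), 2))  # posterior trace < prior trace
```

The empirical mean-square error $\frac{1}{N}\sum_k \|\hat{x}_k - x_k\|^2$ matches $\mathbf{Tr}\, \Sigma_{\mathrm{err}}$, which is strictly smaller than the prior $\mathbf{Tr}\, \Sigma_x$: measuring $y$ reduces the uncertainty about $x$.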
