EE226: Random Processes in Systems, Fall '06
Problem Set 5 — Due Oct. 24
Lecturer: Jean C. Walrand        GSI: Assane Gueye

This problem set essentially reviews estimation theory and the Kalman filter. Not all exercises are to be turned in: only those marked with the sign ★ are due on Tuesday, October 24th, at the beginning of class. Although the remaining exercises are not graded, you are encouraged to go through them. We will discuss some of the exercises during the discussion sections. Please feel free to point out errors and notions that need to be clarified.

Exercise 5.1. Let $Y = HX + Z$, where $X$ is a Gaussian random vector in $\mathbb{R}^n$ with zero mean and covariance matrix $K_X$, $H$ is a known non-singular $n \times n$ matrix, and $Z$ is Gaussian noise with zero mean and non-singular covariance matrix $K_Z$, uncorrelated with $X$.
(a) Find the MMSE estimator of $X$ given $Y$.
(b) Explain what happens in the case where $|K_Z| = 0$.
(c) Repeat part (b) when both $H$ and $K_Z$ are singular.

Solution (hint):
(a) Since the matrix $H$ is invertible, we can work with the simpler observation $\tilde{Y} = H^{-1}Y = X + H^{-1}Z = X + \tilde{Z}$, where $\tilde{Z}$ is again Gaussian, $\tilde{Z} \sim N(0, H^{-1}K_Z(H^{-1})^T)$ (note that the covariance matrix of $\tilde{Z}$ is not singular). Since all the random vectors involved are jointly Gaussian, the MMSE estimate of $X$ given $\tilde{Y}$ is given by the standard formula
\[ \hat{X} = K_X \left( K_X + H^{-1}K_Z(H^{-1})^T \right)^{-1} H^{-1} Y. \]
(b) If $|K_Z| = 0$, then the covariance of $\tilde{Z}$ is singular, so without loss of generality we can discuss the transformed observation $\tilde{Y}$. The singularity of the covariance matrix indicates that the noise lies in a lower-dimensional subspace than the signal $X$ (which lies in $\mathbb{R}^n$). The components of $X$ that lie in the noise subspace are perturbed, but the components orthogonal to the noise can be recovered without error. A more rigorous analysis, using an eigenvalue decomposition, would project onto the noise subspace and onto its orthogonal complement.
(c) If $H$ is singular we can no longer transform the observation, but the same analysis applies. The effect of multiplying by $H$ is to map the signal into a lower-dimensional subspace. If this subspace lies entirely inside the noise subspace, the whole observation is noisy; if some components still lie in directions orthogonal to the noise, those components can be recovered without error.
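As a numerical sanity check of part (a), here is a short MATLAB sketch (not part of the original solution) comparing the closed-form gain $K_X(K_X + H^{-1}K_Z(H^{-1})^T)^{-1}H^{-1}$ with the empirical linear-MMSE gain $\hat{E}[XY^T]\,\hat{E}[YY^T]^{-1}$ estimated from samples. The dimensions, matrices, and variable names (G_formula, G_sample) are arbitrary illustrative choices.

% Sanity check for Exercise 5.1(a): the closed-form MMSE gain should match
% the empirical linear-MMSE gain E[XY']E[YY']^{-1} computed from samples.
n = 3; N = 1e5;
rng(0);                                   % fix the seed for reproducibility
A = randn(n); KX = A*A' + eye(n);         % an arbitrary positive-definite K_X
B = randn(n); KZ = B*B' + eye(n);         % an arbitrary positive-definite K_Z
H = randn(n) + 2*eye(n);                  % an (almost surely) non-singular H

X = chol(KX,'lower')*randn(n,N);          % X ~ N(0, K_X)
Z = chol(KZ,'lower')*randn(n,N);          % Z ~ N(0, K_Z), independent of X
Y = H*X + Z;

Hi = inv(H);
G_formula = KX/(KX + Hi*KZ*Hi')*Hi;       % gain from the solution: xhat = G*Y
G_sample  = (X*Y')/(Y*Y');                % empirical E[XY'] E[YY']^{-1}
disp(norm(G_formula - G_sample,'fro'))    % small, and shrinks as N grows

The mismatch is of order $1/\sqrt{N}$, as expected for a sample estimate.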
Exercise 5.2. ★
The values of a random sample, 2.9, 0.5, −0.1, 1.2, 3.5, and 0, are obtained from a random variable $X$ uniformly distributed over the interval $[a, b]$. Find the maximum-likelihood estimates of $a$ and $b$ (assume that the samples are independent).

Solution:
The MLE is the value of $(a, b)$ that maximizes the likelihood of observing the given sequence. In other terms, we are looking for $(\hat{a}, \hat{b})$ such that
\[ (\hat{a}, \hat{b}) = \arg\max_{(a,b)} f(x_1, \ldots, x_n; (a, b)), \]
where the $x_i$ are the observations. Since the samples are independent, the joint pdf is
\[ f(x_1, \ldots, x_n; (a, b)) = \begin{cases} \frac{1}{(b-a)^n} & a \le x_i \le b, \ \forall i \\ 0 & \text{otherwise.} \end{cases} \]
(Note the exponent $n$: each of the $n$ independent samples contributes a factor $1/(b-a)$.) This expression is maximized by making $b - a$ as small as possible while keeping every observation inside $[a, b]$, i.e., $a = x_{\min} = \min\{x_1, \ldots, x_n\}$ and $b = x_{\max} = \max\{x_1, \ldots, x_n\}$. Thus the MLE of $(a, b)$ is $(\hat{a}, \hat{b}) = (-0.1, 3.5)$.

Exercise 5.3. ★
For the estimation problem modeled by the equations
\[ x_k = x_{k-1} + w_{k-1}, \qquad w_k \sim N(0, 30), \text{ white noise,} \]
\[ z_k = x_k + v_k, \qquad v_k \sim N(0, 20), \text{ white noise,} \]
\[ \sigma_0^2 = 150, \]
find $\sigma_k^2$, $s_k$, and $r_k$ for $k = 1, 2, 3, 4$, and $\sigma_\infty^2$ (the steady-state value). ($\sigma_k^2$ and $s_k$ are the estimation and prediction squared-error updates, and $r_k$ is the Kalman gain.)

Solution:
Recalling from class, the Kalman filter update equations are
\[ \hat{x}_k = \hat{x}_{k-1} + r_k (z_k - \hat{x}_{k-1}), \]
\[ r_k = \frac{s_k}{s_k + \sigma_v^2}, \qquad s_k = \sigma_{k-1}^2 + \sigma_w^2, \qquad \sigma_k^2 = (1 - r_k)\, s_k, \]
with $\sigma_0^2 = 150$. The following MATLAB script computes the results.

n = 10;                               % steps k = 0,...,9
sigma = [150 zeros(1,n-1)];           % sigma(i) holds sigma_{i-1}^2; sigma_0^2 = 150
sigma_v = 20; sigma_w = 30;           % measurement and process noise variances
s = zeros(1,n); r = zeros(1,n);       % prediction errors s_k and Kalman gains r_k
for i = 2:n
    s(i) = sigma(i-1) + sigma_w;      % s_k = sigma_{k-1}^2 + sigma_w^2
    r(i) = s(i)/(s(i) + sigma_v);     % r_k = s_k/(s_k + sigma_v^2)
    sigma(i) = (1 - r(i))*s(i);       % sigma_k^2 = (1 - r_k) s_k
end

k                 0         1        2        3        4        5        6        7        8        9
s_k               0  180.0000  48.0000  44.1176  43.7615  43.7266  43.7232  43.7229  43.7228  43.7228
r_k               0    0.9000   0.7059   0.6881   0.6863   0.6862   0.6861   0.6861   0.6861   0.6861
sigma_k^2  150.0000   18.0000  14.1176  13.7615  13.7266  13.7232  13.7229  13.7228  13.7228  13.7228

By $k = 4$ the recursion has essentially converged. The steady-state value is the fixed point of $u = \frac{(u + \sigma_w^2)\,\sigma_v^2}{u + \sigma_w^2 + \sigma_v^2}$ with $u = \sigma_\infty^2$, i.e., $u^2 + 30u - 600 = 0$, whose positive root is $\sigma_\infty^2 = (-30 + \sqrt{3300})/2 \approx 13.7228$.

Exercise 5.4. ★ Parameter Estimation (recursive)
Let $x$ be a zero-mean Gaussian random variable with variance $P_0$, and let $z_k = x + v_k$ be an observation of $x$ with white noise $v_k \sim N(0, R)$.
(a) Find a recursive (MMSE) estimator of $x$ given the observations $z_k$ and compute the estimation error. Hint: Example 4.3 of Gallager's notes.
(b) What is the value of $\hat{x}_1$ if $R = 0$?
(c) What is the value of $\hat{x}_1$ if $R = \infty$?
(d) Explain the results of (b) and (c) in terms of measurement uncertainty.

Solution:
(a) The solution of this problem is given in Gallager's notes (eq. 4.33):
\[ \hat{x}_k = \frac{P_0}{k P_0 + R} \sum_{i=1}^{k} z_i, \qquad \sigma_k^2 = \frac{P_0 R}{k P_0 + R}. \]
(b) $\hat{x}_1 = \dfrac{P_0 z_1}{P_0 + R} \to z_1$ as $R \to 0$.
(c) $\hat{x}_1 = \dfrac{P_0 z_1}{P_0 + R} \to 0$ as $R \to \infty$.
(d) In part (b), the noise has zero mean and zero variance; since it is a Gaussian random variable, we can conclude that it is identically equal to zero. So the estimate is exact and has zero error. In part (c), the noise has infinite variance, so the observation becomes independent of the signal; the best estimate given the observation is then the mean of $x$, which is zero.
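As a quick check of part (a), here is a MATLAB sketch (not part of the original solution) showing that the sequential MMSE updates, with gain $g_k = P_{k-1}/(P_{k-1} + R)$ and variance update $P_k = (1 - g_k)P_{k-1}$, reproduce the batch formula above. The values of P0, R, and K are arbitrary illustrative choices.

% Check for Exercise 5.4(a): the recursive MMSE updates reproduce the
% batch formula xhat_k = P0/(k*P0 + R) * sum(z_1,...,z_k).
P0 = 4; R = 2; K = 8;
rng(1);                                  % fix the seed for reproducibility
x = sqrt(P0)*randn;                      % x ~ N(0, P0)
z = x + sqrt(R)*randn(1,K);              % z_k = x + v_k, v_k ~ N(0, R)

xhat = 0; P = P0;                        % prior mean and variance of x
for k = 1:K
    g    = P/(P + R);                    % gain for the k-th measurement
    xhat = xhat + g*(z(k) - xhat);       % recursive MMSE update
    P    = (1 - g)*P;                    % posterior variance
    batch = P0/(k*P0 + R)*sum(z(1:k));   % batch formula from the solution
    fprintf('k=%d  recursive=%.6f  batch=%.6f  var=%.6f  P0*R/(k*P0+R)=%.6f\n', ...
            k, xhat, batch, P, P0*R/(k*P0 + R));
end

The two estimates agree at every step, and the recursively updated variance matches $\sigma_k^2 = P_0 R/(kP_0 + R)$.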
Exercise 5.5. ★ Parameter Estimation (using KF)
Let us consider the estimation of the value of an (unknown) constant $x$ given measurements $y_n = x + v_n$ that are corrupted by a zero-mean white noise $v_n$, uncorrelated with $x$, with variance $\sigma_v^2$.
(a) Write the estimation problem as a Kalman filter problem and compute the Kalman gain $r_n$ and the variance of the estimation error $e_n^2$. (You are asked to find closed forms of $e_n^2$ and $r_n$ as functions of $n$, $e_0^2$, and $\sigma_v^2$.)
(b) What is the Kalman filter as $n \to \infty$?
(c) What is the Kalman filter as $\sigma_v^2 \to \infty$?
(d) Now suppose that we do not have any a priori information about $x$ (i.e., $\hat{x}_0 = 0$ and $e_0^2 \to \infty$). Show that the Kalman filter simply becomes the sample mean
\[ \hat{x} = \frac{1}{n} \sum_{i=1}^{n} y_i. \]

Solution:
(a) We can write the estimation problem in the following state-space form:
\[ x_{n+1} = x_n + 0, \qquad y_n = x_n + v_n, \]
where $v_n$ is white noise (the state is a constant, so there is no process noise). Using the Kalman filter update equations, we have
\[ \hat{x}_n = \hat{x}_{n-1} + r_n (y_n - \hat{x}_{n-1}), \]
\[ r_n = \frac{s_n}{s_n + \sigma_v^2}, \qquad s_n = e_{n-1}^2, \qquad e_n^2 = (1 - r_n)\, s_n. \]
Substituting $s_n$ and $r_n$ into the last equation, we obtain
\[ e_n^2 = \left(1 - \frac{e_{n-1}^2}{e_{n-1}^2 + \sigma_v^2}\right) e_{n-1}^2 = \frac{1}{\frac{1}{\sigma_v^2} + \frac{1}{e_{n-1}^2}}. \]
Thus we have
\[ \frac{1}{e_n^2} = \frac{1}{\sigma_v^2} + \frac{1}{e_{n-1}^2} = \frac{2}{\sigma_v^2} + \frac{1}{e_{n-2}^2} = \cdots = \frac{n}{\sigma_v^2} + \frac{1}{e_0^2}. \]
This gives
\[ e_n^2 = \frac{1}{\frac{n}{\sigma_v^2} + \frac{1}{e_0^2}}, \qquad s_n = \frac{1}{\frac{n-1}{\sigma_v^2} + \frac{1}{e_0^2}}, \qquad r_n = \frac{1}{n + \frac{\sigma_v^2}{e_0^2}}, \]
\[ \hat{x}_n = \hat{x}_{n-1} + \frac{1}{n + \sigma_v^2/e_0^2}\, (y_n - \hat{x}_{n-1}). \]
(b) When $n \to \infty$, the closed forms give $r_n \to 0$ and $e_n^2 \to 0$: each new measurement receives a vanishing weight, and the estimate converges to the constant with zero error.
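To illustrate part (d) numerically, here is a MATLAB sketch (not part of the original solution): with a diffuse prior, i.e. $e_0^2$ very large (approximated below by $10^{12}$), the gain becomes $r_n \approx 1/n$ and the filter tracks the running sample mean. The values of x, sigma_v2, and N are arbitrary illustrative choices.

% Sketch for Exercise 5.5(d): with a diffuse prior (huge e_0^2) the Kalman
% gain is r_n ~ 1/n and the filter reduces to the running sample mean.
x = 1.7; sigma_v2 = 0.5; N = 6;
rng(2);                                  % fix the seed for reproducibility
y = x + sqrt(sigma_v2)*randn(1,N);       % y_n = x + v_n

e2 = 1e12;                               % e_0^2 "infinite": no prior information
xhat = 0;                                % xhat_0 = 0
for n = 1:N
    s = e2;                              % s_n = e_{n-1}^2 (constant state, no process noise)
    r = s/(s + sigma_v2);                % Kalman gain; here r_n ~ 1/n
    xhat = xhat + r*(y(n) - xhat);       % measurement update
    e2 = (1 - r)*s;                      % error-variance update
    fprintf('n=%d  kalman=%.6f  sample mean=%.6f\n', n, xhat, mean(y(1:n)));
end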