BU EECE 522 - Notes

This preview shows pages 1-2 and 20-21 of the 21-page document.

Save
View full document
View full document
Premium Document
Do you want full access? Go Premium and unlock all 21 pages.
Access to all documents
Download any document
Ad free experience
View full document
Premium Document
Do you want full access? Go Premium and unlock all 21 pages.
Access to all documents
Download any document
Ad free experience
View full document
Premium Document
Do you want full access? Go Premium and unlock all 21 pages.
Access to all documents
Download any document
Ad free experience
View full document
Premium Document
Do you want full access? Go Premium and unlock all 21 pages.
Access to all documents
Download any document
Ad free experience
Premium Document
Do you want full access? Go Premium and unlock all 21 pages.
Access to all documents
Download any document
Ad free experience

Unformatted text preview:

Ch. 12 Linear Bayesian Estimators

Contents: Introduction; 12.3 Linear MMSE Estimator Solution; Derivation of Optimal LMMSE Coefficients; Ex. 12.1 DC Level in WGN with Uniform Prior; 12.4 Geometrical Interpretations; Abstract Vector Space Rules; Examples of Abstract Vector Spaces; Inner Product Spaces; Inner Product Space of Random Variables; Use IP Space Ideas for Section 12.3; 12.5 Vector LMMSE Estimator; Solutions to Vector LMMSE; Two Properties of LMMSE Estimator; Bayesian Gauss-Markov Theorem

Introduction
In Chapter 11 we saw that the MMSE estimator takes a simple form when x and θ are jointly Gaussian: it is linear and uses only the 1st- and 2nd-order moments (means and covariances). Without the Gaussian assumption, the general MMSE estimator requires integrations to implement, which is undesirable. So what do we do if we can't "assume Gaussian" but still want MMSE? Keep the MMSE criterion, but restrict the form of the estimator to be LINEAR. This gives the "LMMSE estimator" (something similar to the BLUE). The LMMSE estimator is also called the "Wiener filter."

Bayesian Approaches
- MAP: "hit-or-miss" cost function.
- MMSE: squared-error cost function (nonlinear estimate in general):
      Estimate:   \hat{\theta} = E\{\theta \mid \mathbf{x}\}
      Err. cov.:  M_{\hat{\theta}} = E_{\mathbf{x}}\{C_{\theta|\mathbf{x}}\}
- Other cost functions.
- Jointly Gaussian x and θ (yields a linear estimate):
      Estimate:   \hat{\theta} = E\{\theta\} + C_{\theta x} C_{xx}^{-1} (\mathbf{x} - E\{\mathbf{x}\})
      Err. cov.:  M_{\hat{\theta}} = C_{\theta\theta} - C_{\theta x} C_{xx}^{-1} C_{x\theta}
- LMMSE: force a linear estimate; known: E{θ}, E{x}, covariances:
      Estimate:   \hat{\theta} = E\{\theta\} + C_{\theta x} C_{xx}^{-1} (\mathbf{x} - E\{\mathbf{x}\})
      Err. cov.:  M_{\hat{\theta}} = C_{\theta\theta} - C_{\theta x} C_{xx}^{-1} C_{x\theta}
  Same as the jointly Gaussian case!
- Bayesian linear model (yields a linear estimate):
      Estimate:   \hat{\boldsymbol\theta} = \boldsymbol\mu_\theta + C_{\theta\theta} H^T (H C_{\theta\theta} H^T + C_w)^{-1} (\mathbf{x} - H\boldsymbol\mu_\theta)
      Err. cov.:  M_{\hat{\theta}} = C_{\theta\theta} - C_{\theta\theta} H^T (H C_{\theta\theta} H^T + C_w)^{-1} H C_{\theta\theta}

12.3 Linear MMSE Estimator Solution
Scalar parameter case.
Estimate: θ, a random-variable realization.
Given: data vector x = [x[0] x[1] ... x[N-1]]^T.
Assume:
- The joint PDF p(x, θ) is unknown...
- ...but its first two moments are known.
- There is some statistical dependence between x and θ.
  - E.g., one could estimate θ = salary using x = the past 10 years' taxes owed.
  - E.g., one can't estimate θ = salary using x = the past 10 years' number of Christmas cards sent.
Goal: make the best possible estimate while using an affine form for the estimator:
      \hat{\theta} = \sum_{n=0}^{N-1} a_n x[n] + a_N
where the constant a_N handles the non-zero-mean case. Choose {a_n} to minimize
      \mathrm{Bmse}(\hat{\theta}) = E\{(\theta - \hat{\theta})^2\}.

Derivation of Optimal LMMSE Coefficients
Using the desired affine form of the estimator, the Bmse is
      \mathrm{Bmse}(\hat{\theta}) = E\Big\{\Big(\theta - \sum_{n=0}^{N-1} a_n x[n] - a_N\Big)^2\Big\}.

Step #1: Focus on a_N. Setting \partial \mathrm{Bmse}(\hat{\theta}) / \partial a_N = 0 and passing ∂/∂a_N through E{·} gives
      -2\,E\Big\{\theta - \sum_{n=0}^{N-1} a_n x[n] - a_N\Big\} = 0
      \Rightarrow\; a_N = E\{\theta\} - \sum_{n=0}^{N-1} a_n E\{x[n]\}.
Note: a_N = 0 if E{θ} = E{x[n]} = 0.

Step #2: Plug the Step #1 result for a_N back in:
      \mathrm{Bmse}(\hat{\theta}) = E\big\{[\mathbf{a}^T(\mathbf{x} - E\{\mathbf{x}\}) - (\theta - E\{\theta\})]^2\big\}
where a = [a_0 a_1 ... a_{N-1}]^T (only up to N-1). Note: a^T(x - E{x}) = (x - E{x})^T a since it is a scalar.

Thus, expanding out [a^T(x - E{x}) - (θ - E{θ})]^2 gives
      \mathrm{Bmse}(\hat{\theta}) = \mathbf{a}^T C_{xx} \mathbf{a} - \mathbf{a}^T \mathbf{c}_{x\theta} - \mathbf{c}_{\theta x}\mathbf{a} + c_{\theta\theta}
where
      C_{xx} = E\{(\mathbf{x} - E\{\mathbf{x}\})(\mathbf{x} - E\{\mathbf{x}\})^T\}  (N×N covariance matrix)
      \mathbf{c}_{x\theta} = E\{(\mathbf{x} - E\{\mathbf{x}\})(\theta - E\{\theta\})\}  (N×1 cross-covariance vector), \mathbf{c}_{\theta x} = \mathbf{c}_{x\theta}^T  (1×N)
      c_{\theta\theta} = E\{(\theta - E\{\theta\})^2\}  (1×1 prior variance).
Since a^T c_{xθ} = c_{θx} a (both scalar),
      \mathrm{Bmse}(\hat{\theta}) = \mathbf{a}^T C_{xx} \mathbf{a} - 2\,\mathbf{a}^T \mathbf{c}_{x\theta} + c_{\theta\theta}.

Step #3: Minimize w.r.t. a_0, a_1, ..., a_{N-1} (only up to N-1). Setting
      \partial \mathrm{Bmse}(\hat{\theta}) / \partial \mathbf{a} = 2 C_{xx}\mathbf{a} - 2\mathbf{c}_{x\theta} = \mathbf{0}
gives
      \mathbf{a} = C_{xx}^{-1} \mathbf{c}_{x\theta}, \quad \text{i.e.,}\; \mathbf{a}^T = \mathbf{c}_{\theta x} C_{xx}^{-1}.
This is where the statistical dependence between the data and the parameter is used: via the cross-covariance vector.

Step #4: Combine results:
      \hat{\theta} = \sum_{n=0}^{N-1} a_n x[n] + a_N = \mathbf{a}^T\mathbf{x} + E\{\theta\} - \mathbf{a}^T E\{\mathbf{x}\} = E\{\theta\} + \mathbf{a}^T(\mathbf{x} - E\{\mathbf{x}\}).
So the optimal LMMSE estimate is
      \hat{\theta} = E\{\theta\} + \mathbf{c}_{\theta x} C_{xx}^{-1} (\mathbf{x} - E\{\mathbf{x}\}),
and if the means are zero,
      \hat{\theta} = \mathbf{c}_{\theta x} C_{xx}^{-1} \mathbf{x}.
Note: the LMMSE estimate only needs the 1st and 2nd moments... not the PDFs!

Step #5: Find the minimum Bmse. Substitute a = C_{xx}^{-1} c_{xθ} into the Bmse result and simplify:
      \mathrm{Bmse}(\hat{\theta}) = \mathbf{a}^T C_{xx}\mathbf{a} - 2\,\mathbf{a}^T\mathbf{c}_{x\theta} + c_{\theta\theta}
                                  = \mathbf{c}_{\theta x} C_{xx}^{-1} C_{xx} C_{xx}^{-1}\mathbf{c}_{x\theta} - 2\,\mathbf{c}_{\theta x} C_{xx}^{-1}\mathbf{c}_{x\theta} + c_{\theta\theta}
      \Rightarrow\; \mathrm{Bmse}(\hat{\theta}) = c_{\theta\theta} - \mathbf{c}_{\theta x} C_{xx}^{-1}\mathbf{c}_{x\theta}.
Note: if θ and x are statistically independent then C_{θx} = 0, so \hat{\theta} = E\{\theta\} (totally based on prior info... the data is useless) and \mathrm{Bmse}(\hat{\theta}) = c_{\theta\theta}.

Ex. 12.1 DC Level in WGN with Uniform Prior
Recall: the uniform prior gave a non-closed form requiring integration... but changing to a Gaussian prior fixed this. Here we keep the uniform prior and still get a simple form by using the linear MMSE:
      \hat{A} = \mathbf{c}_{Ax} C_{xx}^{-1} \mathbf{x}.
For this problem x = A·1 + w, and since A and w are uncorrelated, the needed moments are
      C_{xx} = E\{(A\mathbf{1} + \mathbf{w})(A\mathbf{1} + \mathbf{w})^T\} = \sigma_A^2 \mathbf{1}\mathbf{1}^T + \sigma^2 I
      \mathbf{c}_{xA} = E\{(A\mathbf{1} + \mathbf{w})A\} = \sigma_A^2 \mathbf{1},
so the LMMSE estimate works out to
      \hat{A} = \frac{\sigma_A^2}{\sigma_A^2 + \sigma^2/N}\,\bar{x}.

12.4 Geometrical Interpretations
Abstract Vector Space
Mathematicians first tackled "physical" vector spaces like R^N and C^N, etc., but then abstracted the "bare essence" of these structures into the general idea of a vector space. We've seen that we can interpret linear LS in terms of "physical" vector spaces. We'll now see that we can interpret linear MMSE in terms of "abstract" vector-space ideas.

Abstract Vector Space Rules
An abstract vector space consists of a set of "mathematical objects" called vectors and another set called scalars that obey:
1. There is a well-defined operation of "addition" of vectors that gives a vector in the set, and...
   - "Adding" is commutative and associative.
   - There is a vector in the set, call it 0, for which "adding" it to any vector in the set gives back that same vector.
   - For every vector there is another vector s.t. when the two are added you get the 0 vector.
2. There is a well-defined operation of "multiplying" a vector by a "scalar" that gives a vector in the set, and...
   - "Multiplying" is associative.
   - Multiplying a vector by the scalar 1 gives back the same vector.
3. The distributive property holds:
   - Multiplication distributes over vector addition.
   - Multiplication distributes over scalar addition.

Examples of Abstract Vector Spaces
1. Scalars = real numbers; vectors = Nth-degree polynomials with real coefficients.
2. Scalars = real numbers; vectors = M×N matrices of real numbers.
3. Scalars = real numbers; vectors = functions from [0,1] to R.
4. Scalars = real numbers; vectors = real-valued random variables with zero mean.
Colliding terminology... a scalar RV is a "vector"!

Inner Product Spaces
There is a well-defined concept of inner product s.t. all the rules of the "ordinary" inner product still hold:
- <x, y> = <y, x>
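The five derivation steps above boil down to a few lines of linear algebra. The sketch below is not from the notes; the function name `lmmse_scalar` and the toy moment values are my own illustrative choices, and the formulas are exactly those of Steps #3-#5.

```python
import numpy as np

def lmmse_scalar(mean_theta, mean_x, C_xx, c_xtheta, c_thetatheta, x):
    """LMMSE estimate of a scalar theta from data vector x, using only
    first- and second-order moments (no PDFs needed)."""
    a = np.linalg.solve(C_xx, c_xtheta)        # Step #3: a = C_xx^{-1} c_xtheta
    theta_hat = mean_theta + a @ (x - mean_x)  # Step #4: affine estimate
    bmse = c_thetatheta - c_xtheta @ a         # Step #5: minimum Bayesian MSE
    return theta_hat, bmse

# Toy moments (hypothetical numbers, chosen so the answer is easy to check by hand):
C_xx = np.array([[2.0, 0.0], [0.0, 2.0]])  # data covariance
c_xtheta = np.array([1.0, 1.0])            # cross-covariance E{(x - E x)(theta - E theta)}
theta_hat, bmse = lmmse_scalar(0.0, np.zeros(2), C_xx, c_xtheta,
                               2.0, np.array([2.0, 4.0]))
print(theta_hat, bmse)  # a = [0.5, 0.5], so theta_hat = 3.0 and Bmse = 2 - 1 = 1.0
```

Note how the data enters only through the cross-covariance: with c_xtheta = 0 the estimate collapses to the prior mean, as in the Step #5 remark.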
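Ex. 12.1's closed form can be sanity-checked numerically against the general zero-mean LMMSE formula c_Ax C_xx^{-1} x, using C_xx = sigma_A^2 11^T + sigma^2 I and c_xA = sigma_A^2 1 from the example. A minimal sketch (the specific values of N, sigma_A^2, sigma^2 are my own; any would do):

```python
import numpy as np

N, sig2_A, sig2 = 8, 1.0 / 3.0, 0.5  # sigma_A^2 = A0^2/3 for Uniform[-A0, A0], A0 = 1 assumed
rng = np.random.default_rng(0)
x = rng.normal(size=N)               # any data vector works for this algebraic check

ones = np.ones(N)
C_xx = sig2_A * np.outer(ones, ones) + sig2 * np.eye(N)  # uses A, w uncorrelated
c_xA = sig2_A * ones

A_general = c_xA @ np.linalg.solve(C_xx, x)         # general form: c_Ax C_xx^{-1} x
A_closed = sig2_A / (sig2_A + sig2 / N) * x.mean()  # closed form from Ex. 12.1
print(np.isclose(A_general, A_closed))              # True: the two forms agree
```

The agreement holds for every x because (sigma^2 I + sigma_A^2 11^T)^{-1} has a rank-one structure (Sherman-Morrison) that reduces c_xA^T C_xx^{-1} x to a scaled sample mean.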
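For the Bayesian linear model x = H theta + w listed in the comparison of approaches, the estimator can also be written in a "posterior" form via the matrix inversion lemma, (C_tt^{-1} + H^T C_w^{-1} H)^{-1} H^T C_w^{-1}. The check below is my own sketch with made-up matrices; it verifies the two algebraic forms coincide, not anything beyond that.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 5, 2
H = rng.normal(size=(N, p))      # known observation matrix (made-up)
C_tt = np.diag([2.0, 0.5])       # prior covariance of theta (made-up)
C_w = 0.3 * np.eye(N)            # noise covariance (made-up)
mu_t = np.array([1.0, -1.0])     # prior mean (made-up)
x = rng.normal(size=N)           # arbitrary data vector

# Form from the notes: mu + C_tt H^T (H C_tt H^T + C_w)^{-1} (x - H mu)
S = H @ C_tt @ H.T + C_w
theta_blm = mu_t + C_tt @ H.T @ np.linalg.solve(S, x - H @ mu_t)

# Equivalent form via the matrix inversion lemma:
# mu + (C_tt^{-1} + H^T C_w^{-1} H)^{-1} H^T C_w^{-1} (x - H mu)
Cw_inv = np.linalg.inv(C_w)
theta_alt = mu_t + np.linalg.solve(np.linalg.inv(C_tt) + H.T @ Cw_inv @ H,
                                   H.T @ Cw_inv @ (x - H @ mu_t))

print(np.allclose(theta_blm, theta_alt))  # True
```

The second form inverts a p×p matrix instead of an N×N one, which is the cheaper choice when there are far fewer parameters than data samples.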
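The vector-space view treats zero-mean random variables as vectors with inner product <x, y> = E{xy}; in that geometry the optimal LMMSE error is orthogonal to every data sample, E{(theta - theta_hat) x[n]} = 0. Working purely with moments, this can be checked exactly. The moment values below are hypothetical, my own choice for illustration:

```python
import numpy as np

# Known second-order moments (zero-mean case), hypothetical numbers:
C_xx = np.array([[1.0, 0.3], [0.3, 2.0]])  # E{x x^T}
c_xt = np.array([0.4, 0.7])                # E{theta x}

a_opt = np.linalg.solve(C_xx, c_xt)        # optimal LMMSE coefficients

# <theta - theta_hat, x[n]> = E{theta x[n]} - E{theta_hat x[n]} = (c_xt - C_xx a)[n]
err_ip_opt = c_xt - C_xx @ a_opt           # orthogonality: exactly zero
err_ip_bad = c_xt - C_xx @ (a_opt + 0.1)   # any other coefficients break it

print(np.allclose(err_ip_opt, 0.0))  # True
print(np.allclose(err_ip_bad, 0.0))  # False
```

This is the abstract-vector-space restatement of Step #3: the optimal theta_hat is the projection of theta onto the subspace spanned by the data samples.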

