BU EECE 522 - Chapter 10

Chapter 10: Bayesian Philosophy

Contents
• 10.1 Introduction
• Why Choose Bayesian?
• 10.3 Prior Knowledge and Estimation
• Ex. of Bayesian Viewpoint: Emitter Location
• Bayesian Criteria Depend on Joint PDF
• Ex.: Bayesian for DC Level
• General Insights From Example
• 10.4 Choosing a Prior PDF
• Ex. 10.1: DC in WGN with Gaussian Prior PDF

10.1 Introduction

Up to now... the Classical Approach: assumes θ is deterministic. This has a few ramifications:
• Variance of the estimate could depend on θ
• In Monte Carlo simulations:
  – M runs are done at the same θ
  – you must do M runs at each θ of interest
  – averaging is done over the data; there is no averaging over θ values
• E{·} is taken w.r.t. p(x; θ)

The Bayesian Approach: assumes θ is random with pdf p(θ). This has a few ramifications:
• Variance of the estimate CAN'T depend on θ
• In Monte Carlo simulations:
  – each run is done at a randomly chosen θ
  – averaging is done over the data AND over θ values
• E{·} is taken w.r.t. the joint pdf p(x, θ)

Why Choose Bayesian?

1. Sometimes we have prior knowledge on θ ⇒ some values are more likely than others.

2. Useful when the classical MVU estimator does not exist because no single estimator achieves the minimum variance at every θ.
[Figure: variance curves \sigma^2_{\hat\theta_i}(\theta) versus θ for competing estimators; different estimators have the lowest variance at different values such as θ1 and θ2.]

3. To combat the "signal estimation problem"... estimate the signal s in
      x = s + w
If s is deterministic and is the parameter to estimate, then H = I, and the classical solution is
      \hat{\mathbf{s}} = (\mathbf{I}^T\mathbf{I})^{-1}\mathbf{I}^T\mathbf{x} = \mathbf{x}
The signal estimate is the data itself! The Wiener filter is a Bayesian method to combat this.

10.3 Prior Knowledge and Estimation

Bayesian Data Model:
• The parameter is "chosen" randomly with a known "prior PDF" — this is what you know ahead of time about the parameter.
• Then the data set is collected.
• An estimated value is chosen for the parameter.
Every time you collect data, the parameter has a different value, but some values may be more likely to occur than others. This is how you think about it mathematically and how you run simulations to test it.

Ex. of Bayesian Viewpoint: Emitter Location

Emitters are where they are and don't randomly jump around each time you collect data. So why the Bayesian model? (At least) three reasons:
1. You may know from maps, intelligence data, other sensors, etc. that certain locations are more likely to have emitters.
   • Emitters are likely at airfields, unlikely in the middle of a lake.
2. Recall the classical method: the parameter-estimation variance often depends on the parameter.
   • It is often desirable (e.g., for marketing) to have a single number that measures accuracy.
3. Classical methods try to give an estimator with low variance at each θ value. However, this could give large variance where emitters are likely and low variance where they are unlikely.

Bayesian Criteria Depend on Joint PDF

There are several different optimization criteria within the Bayesian framework. The most widely used is: minimize the Bayesian MSE

      \mathrm{Bmse}(\hat\theta) = E\{(\theta - \hat\theta)^2\} = \iint [\theta - \hat\theta(\mathbf{x})]^2\, p(\mathbf{x}, \theta)\, d\mathbf{x}\, d\theta

Here E{·} is taken w.r.t. the joint pdf of x and θ, so the Bmse cannot depend on θ. To see the difference, compare to the classical MSE:

      \mathrm{mse}(\hat\theta) = E\{(\theta - \hat\theta)^2\} = \int [\theta - \hat\theta(\mathbf{x})]^2\, p(\mathbf{x}; \theta)\, d\mathbf{x}

where p(x; θ) is the pdf of x parameterized by θ, so the mse can depend on θ.
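To make the two averaging schemes concrete, here is a minimal Monte Carlo sketch in Python/NumPy (not from the notes) for the DC-level-in-WGN setup used in the example below. The estimator choice (sample mean clipped to the prior support), the names classical_mse, bayesian_mse, estimator, and the values of N, sigma, A0, and M are all illustrative assumptions, not the text's.

import numpy as np

rng = np.random.default_rng(0)
N, sigma, A0, M = 10, 1.0, 3.0, 200_000   # illustrative values, not from the notes

def estimator(x):
    # Illustrative estimator: sample mean, clipped to the prior support [-A0, A0].
    return np.clip(x.mean(axis=1), -A0, A0)

def classical_mse(A_fixed):
    # Classical Monte Carlo: all M runs use the SAME A; averaging is over the data only,
    # so the result is (in general) a function of A -- it approximates mse under p(x; A).
    x = A_fixed + sigma * rng.standard_normal((M, N))
    return np.mean((estimator(x) - A_fixed) ** 2)

def bayesian_mse():
    # Bayesian Monte Carlo: each run draws a fresh A from the prior, then data given that A;
    # averaging is over the data AND over A, giving one number that approximates the Bmse.
    A = rng.uniform(-A0, A0, size=M)
    x = A[:, None] + sigma * rng.standard_normal((M, N))
    return np.mean((estimator(x) - A) ** 2)

for A_true in (0.0, 2.0, 2.9):
    print(f"classical MSE at A = {A_true:3.1f}: {classical_mse(A_true):.4f}")
print(f"Bayesian MSE (single number): {bayesian_mse():.4f}")

The classical loop reports one MSE per θ value, while the Bayesian run reports a single number because θ has been averaged out with respect to its prior — exactly the distinction between the two E{·} definitions above.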
Ex.: Bayesian for DC Level

Same as before... x[n] = A + w[n], where w[n] is zero-mean white Gaussian noise. But here we use the following model:
• A is random with a uniform prior pdf: p(A) = 1/(2A_0) for −A_0 ≤ A ≤ A_0, and 0 otherwise.
• The RVs A and w[n] are independent of each other.
[Figure: uniform prior p(A) of height 1/(2A_0) on [−A_0, A_0].]

Now we want to find the estimator function that maps the data x into the estimate of A that minimizes the Bayesian MSE:

      \mathrm{Bmse}(\hat A) = \iint [A - \hat A]^2\, p(\mathbf{x}, A)\, dA\, d\mathbf{x} = \int \left[ \int [A - \hat A]^2\, p(A|\mathbf{x})\, dA \right] p(\mathbf{x})\, d\mathbf{x}

where we used p(x, A) = p(A|x) p(x). Because p(x) ≥ 0, it suffices to minimize the inner integral for each x value. So... fix x, take the partial derivative w.r.t. \hat A, and set it to 0.

Finding the partial derivative gives:

      \frac{\partial}{\partial \hat A} \int [A - \hat A]^2\, p(A|\mathbf{x})\, dA = \int \frac{\partial [A - \hat A]^2}{\partial \hat A}\, p(A|\mathbf{x})\, dA = -2\int [A - \hat A]\, p(A|\mathbf{x})\, dA = -2\int A\, p(A|\mathbf{x})\, dA + 2\hat A \underbrace{\int p(A|\mathbf{x})\, dA}_{=1}

Setting this equal to zero and solving gives:

      \hat A = \int A\, p(A|\mathbf{x})\, dA = E\{A|\mathbf{x}\}

the conditional mean of A given the data x. The Bayesian minimum MSE (MMSE) estimate is the mean of the "posterior pdf." So... we need to explore how to compute this from our data, given knowledge of the Bayesian model for a problem.

Compare this Bayesian result to the classical result for a given observed data vector x:
• Classical: MVUE = \bar{x}, based on p(x; A).
• Bayesian: MMSE = E{A|x}, based on the posterior p(A|x).
[Figure: posterior p(A|x) on [−A_0, A_0] plotted together with the parameterized pdf p(x; A); the MVUE sits at \bar{x} while the MMSE estimate is the posterior mean.]

Before taking any data, what is the best "estimate" of A?
• Classical: no best guess exists!
• Bayesian: the mean of the prior PDF...
  – observed data "updates" this "a priori" estimate into an "a posteriori" estimate that balances the "prior" vs. the data.

So for this example we've seen that we need E{A|x}. How do we compute that? Well...

      \hat A = E\{A|\mathbf{x}\} = \int A\, p(A|\mathbf{x})\, dA

So we need the posterior pdf of A given the data, which can be found using Bayes' rule:

      p(A|\mathbf{x}) = \frac{p(\mathbf{x}|A)\, p(A)}{p(\mathbf{x})} = \frac{p(\mathbf{x}|A)\, p(A)}{\int p(\mathbf{x}|A)\, p(A)\, dA}

Bayes' rule allows us to write one conditional PDF in terms of the one going the other way. The prior p(A) is assumed known, and p(x|A) is more easily found than p(A|x) — it has very much the same structure as the parameterized PDF used in classical methods.

So now we need p(x|A). For x[n] = A + w[n], when A is known, x[n] is the known A plus the random w[n]; because w[n] and A are assumed independent,

      p(x[n]\,|\,A) = p_w(x[n] - A) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{-\frac{1}{2\sigma^2}\big(x[n] - A\big)^2\right\}

Because w[n] is white Gaussian, the noise samples are independent... thus the data conditioned on A are independent:

      p(\mathbf{x}\,|\,A) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left\{-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}\big(x[n] - A\big)^2\right\}

This has the same structure as the parameterized PDF used in classical methods... but here A is an RV upon which we have conditioned the PDF!

Now we can use all this to find the MMSE estimator for this problem — a function that maps the observed data into the estimate. The idea is easy, but the estimator is hard to "build": there is no closed form for this case. Using Bayes' rule, the prior PDF, and the parameter-conditioned PDF,

      \hat A = E\{A|\mathbf{x}\} = \int A\, p(A|\mathbf{x})\, dA = \frac{\int_{-A_0}^{A_0} A\, \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left\{-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A)^2\right\} \frac{1}{2A_0}\, dA}{\int_{-A_0}^{A_0} \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\left\{-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A)^2\right\} \frac{1}{2A_0}\, dA} = \frac{\int_{-A_0}^{A_0} A\, \exp\left\{-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A)^2\right\} dA}{\int_{-A_0}^{A_0} \exp\left\{-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-A)^2\right\} dA}

(A numerical sketch of this ratio-of-integrals computation appears after the General Insights list below.)

How the Bayesian approach balances a priori and a posteriori info:
[Figure: three panels over [−A_0, A_0]. No data: the prior p(A) and its mean E{A}. Short data record: the posterior p(A|x) with mean E{A|x} lying between the prior mean and \bar{x}. Long data record: a narrow posterior p(A|x) with E{A|x} ≈ \bar{x}.]

General Insights From Example
1. After collecting data, our knowledge is captured by the posterior PDF p(θ|x).
2. The estimator that minimizes the Bmse is E{θ|x}... the mean of the posterior PDF.
3. Choice of ...
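As a sketch of the ratio-of-integrals computation referenced above (not from the notes), the snippet below evaluates E{A|x} for the uniform-prior DC-level example on a grid and compares it with the sample mean for short and long records. The helper name mmse_dc_uniform, the grid size, and the values of A0, sigma, and A_true are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
A0, sigma, A_true = 3.0, 1.0, 1.0   # illustrative values, not from the notes

def mmse_dc_uniform(x, A0, sigma, grid_points=2001):
    # Posterior mean E{A | x} for x[n] = A + w[n], A ~ Uniform(-A0, A0), w[n] white Gaussian.
    # Evaluates the ratio of integrals on a uniform grid over [-A0, A0]; the constants
    # 1/(2*A0) and (2*pi*sigma^2)^(-N/2), and the grid spacing, cancel between numerator
    # and denominator.
    A = np.linspace(-A0, A0, grid_points)
    expo = -np.sum((x[:, None] - A[None, :]) ** 2, axis=0) / (2.0 * sigma ** 2)
    w = np.exp(expo - expo.max())   # subtract the largest exponent for numerical stability
    return np.sum(A * w) / np.sum(w)

for N in (1, 5, 100):
    x = A_true + sigma * rng.standard_normal(N)
    print(f"N = {N:3d}: sample mean = {x.mean():7.3f}, E{{A|x}} = {mmse_dc_uniform(x, A0, sigma):7.3f}")

For small N the posterior mean is pulled toward the prior mean of zero and always stays inside [−A_0, A_0]; as N grows it approaches the sample mean, consistent with the short- and long-data-record panels described above.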

