
Contents:
- Basics of Bayesian Inference
- Illustration of Bayes' Theorem
- Notes on priors
- A more general example
- Bayesian estimation
- Ex: Y ∼ Bin(10, θ), θ ∼ U(0, 1), y_obs = 7
- Bayesian hypothesis testing
- Bayesian hypothesis testing via DIC
- The Bayesian computation problem
- Bayesian computation
- Gibbs sampling
- Metropolis algorithm
- Convergence diagnosis
- Variance estimation

Basics of Bayesian Inference
- A frequentist thinks of unknown parameters as fixed.
- A Bayesian thinks of parameters as random, and thus having distributions (just like the data).
- A Bayesian writes down a prior guess for θ, and combines it with the likelihood for the observed data Y to obtain the posterior distribution of θ. All statistical inferences then follow from summarizing the posterior.
- This approach expands the class of candidate models, and facilitates hierarchical modeling, where it is important to properly account for various sources of uncertainty (e.g. spatial vs. nonspatial heterogeneity).
- The classical (frequentist) approach to estimation is not "wrong", but it is "limited in scope"!

(Chapter 4: Basics of Bayesian Inference – p. 1/25)

Basics of Bayesian Inference
- As usual, we start with a model f(y|θ) for the observed data y = (y1, . . . , yn) given the unknown parameters θ = (θ1, . . . , θK).
- Add a prior distribution π(θ|λ), where λ is a vector of hyperparameters.
- The posterior distribution for θ is given by

      p(θ|y, λ) = p(y, θ|λ) / p(y|λ)
                = p(y, θ|λ) / ∫ p(y, θ|λ) dθ
                = f(y|θ) π(θ|λ) / ∫ f(y|θ) π(θ|λ) dθ
                = f(y|θ) π(θ|λ) / m(y|λ) .

  We refer to this formula as Bayes' Theorem.

(– p. 2/25)
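The outline's later example, Y ∼ Bin(10, θ) with θ ∼ U(0, 1) and y_obs = 7, makes Bayes' Theorem concrete: a U(0, 1) prior is Beta(1, 1), so the posterior has a closed Beta form. A minimal sketch (the helper names below are ours, not the slides'):

```python
from math import gamma

# Y ~ Bin(n, theta), theta ~ U(0, 1) = Beta(1, 1), observed y.
# By conjugacy, p(theta|y) is Beta(y + 1, n - y + 1): here Beta(8, 4).
n, y = 10, 7
a_post, b_post = y + 1, n - y + 1

post_mean = a_post / (a_post + b_post)            # (y + 1) / (n + 2) = 8/12
post_mode = (a_post - 1) / (a_post + b_post - 2)  # y / n = 0.7 (flat prior => mode = MLE)

def post_density(theta):
    """Beta(a_post, b_post) posterior density f(y|theta) pi(theta) / m(y)."""
    B = gamma(a_post) * gamma(b_post) / gamma(a_post + b_post)
    return theta ** (a_post - 1) * (1 - theta) ** (b_post - 1) / B

print(post_mean)  # 0.666...
print(post_mode)  # 0.7
```

Note how the posterior mean 8/12 lies between the prior mean 1/2 and the MLE 7/10, so summarizing the posterior blends prior and data.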
Basics of Bayesian Inference
- Since λ will usually not be known, a second-stage (hyperprior) distribution h(λ) will be required, so that

      p(θ|y) = p(y, θ) / p(y)
             = ∫ f(y|θ) π(θ|λ) h(λ) dλ / ∫∫ f(y|θ) π(θ|λ) h(λ) dθ dλ .

- Alternatively, we might replace λ in p(θ|y, λ) by an estimate λ̂; this is called empirical Bayes analysis.
- Note that

      posterior information ≥ prior information .

  This is referred to as Bayesian learning (the change in the posterior distribution compared with the prior).

(– p. 3/25)
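When λ carries a hyperprior h(λ), the integrals in p(θ|y) rarely have closed forms, but for a discrete h(λ) they reduce to sums that can be checked numerically on a grid. The sketch below does exactly that for an assumed toy setup (binomial likelihood, Beta(λ, λ) conditional priors, uniform h over {1, 2, 4} — illustrative choices of ours, not from the slides):

```python
# Grid evaluation of p(theta|y) = sum_lam f(y|theta) pi(theta|lam) h(lam) / normalizer,
# i.e. the hyperprior formula with the lambda-integral replaced by a discrete sum.
n, y = 10, 7
lams = [1, 2, 4]
h = 1 / 3                                   # uniform hyperprior weight h(lam)

N = 2000
thetas = [(i + 0.5) / N for i in range(N)]  # midpoint grid on (0, 1)
dt = 1 / N

def lik(t):                                  # f(y|theta), binomial coefficient dropped
    return t ** y * (1 - t) ** (n - y)

def prior(t, lam):                           # Beta(lam, lam) kernel
    return (t * (1 - t)) ** (lam - 1)

# Normalize each conditional prior pi(theta|lam) on the grid, then accumulate
# the numerator sum_lam f(y|theta) pi(theta|lam) h(lam).
num = [0.0] * N
for lam in lams:
    Z = sum(prior(t, lam) for t in thetas) * dt
    for i, t in enumerate(thetas):
        num[i] += lik(t) * (prior(t, lam) / Z) * h

# Divide by the double integral (grid sum over theta of the lambda-sum).
m = sum(num) * dt
post = [v / m for v in num]

print(sum(p * dt for p in post))  # integrates to 1 (up to rounding)
print(sum(t * p * dt for t, p in zip(thetas, post)))  # posterior mean
```

The constant dropped from the binomial likelihood cancels in the ratio, exactly as it does in Bayes' Theorem itself; only relative values of f(y|θ) π(θ|λ) h(λ) matter.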



U of M PUBH 8472 - Basics of Bayesian Inference
