Chapter 3: Bayesian Computation (U of M PUBH 7440)

Contents: Bayesian computation; Asymptotic methods; Example 3.1: Hamburger patties again; Higher order approximations; Noniterative Monte Carlo methods; Example 3.2: Direct bivariate sampling; Indirect methods; Rejection sampling; Rejection sampling: informal "proof"; Markov chain Monte Carlo methods; Substitution sampling; Gibbs sampling; Example 3.6 (2.7 revisited); Example 7.2: Rat data; Metropolis algorithm; Convergence assessment; Convergence diagnostics; Convergence diagnosis strategy; Variance estimation; Recent developments; Blocking and structured MCMC; Auxiliary variables and slice sampling; Reversible jump MCMC; RJMCMC Examples 1 and 2.

Bayesian computation

- prehistory (1763–1960): conjugate priors
- 1960s: numerical quadrature – Newton-Cotes methods, Gaussian quadrature, etc.
- 1970s: the Expectation-Maximization ("EM") algorithm – an iterative mode-finder
- 1980s: asymptotic methods – Laplace's method, saddlepoint approximations
- 1980s: noniterative Monte Carlo methods – direct posterior sampling and indirect methods (importance sampling, rejection sampling, etc.)
- 1990s: Markov chain Monte Carlo (MCMC) – the Gibbs sampler and the Metropolis-Hastings algorithm

⇒ MCMC methods are broadly applicable, but require care in parametrization and convergence diagnosis!

Asymptotic methods

When $n$ is large, the likelihood $f(\mathbf{x} \mid \theta)$ will be quite peaked relative to the prior $p(\theta)$, and so the posterior $p(\theta \mid \mathbf{x})$ will be approximately normal.

"Bayesian Central Limit Theorem": Suppose $X_1, \ldots, X_n \stackrel{iid}{\sim} f(x_i \mid \theta)$, and that the prior $p(\theta)$ and the likelihood $f(\mathbf{x} \mid \theta)$ are positive and twice differentiable near $\hat{\theta}^p$, the posterior mode of $\theta$. Then for large $n$,
$$ p(\theta \mid \mathbf{x}) \,\dot{\sim}\, N\!\left(\hat{\theta}^p,\; [I^p(\mathbf{x})]^{-1}\right), $$
where $I^p(\mathbf{x})$ is the "generalized" observed Fisher information matrix for $\theta$, i.e., minus the Hessian of the log posterior evaluated at the mode $\hat{\theta}^p$.
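
A minimal numerical sketch of this approximation (not from the original slides) may help. It assumes a binomial likelihood with a flat Beta(1, 1) prior, so the exact posterior Beta(x + 1, n - x + 1) is available for comparison; the data values n = 50 and x = 13, the finite-difference step h, and all variable names are hypothetical choices made for illustration.

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import beta, norm

    n, x = 50, 13  # hypothetical binomial data: x successes in n trials

    def neg_log_post(theta):
        # minus log( f(x|theta) * p(theta) ) up to an additive constant;
        # the flat Beta(1, 1) prior contributes nothing here
        return -(x * np.log(theta) + (n - x) * np.log(1.0 - theta))

    # posterior mode theta-hat^p: minimize the negative log posterior
    mode = minimize_scalar(neg_log_post, bounds=(1e-6, 1 - 1e-6),
                           method="bounded").x

    # generalized observed Fisher information I^p(x): minus the second
    # derivative of the log posterior at the mode, by central differences
    h = 1e-5
    info = (neg_log_post(mode + h) - 2 * neg_log_post(mode)
            + neg_log_post(mode - h)) / h**2

    approx = norm(loc=mode, scale=np.sqrt(1.0 / info))  # N(mode, [I^p(x)]^{-1})
    exact = beta(x + 1, n - x + 1)                      # exact posterior

    for q in (0.025, 0.5, 0.975):
        print(f"q = {q}: exact {exact.ppf(q):.4f}, "
              f"normal approx {approx.ppf(q):.4f}")

In higher dimensions the same recipe applies with the full Hessian of the log posterior (computed analytically or by automatic differentiation) in place of the scalar second derivative.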

