CMU CS 10701 - Bayesian Point Estimation, Gaussians, Linear Regression, Bias-Variance Tradeoff


Machine Learning 10701/15781, Carnegie Mellon University
Carlos Guestrin, September 14th, 2009
Readings listed on the class website.

What about a prior?
- Billionaire says: Wait, I know that the thumbtack is "close" to 50-50. What can you do for me now?
- You say: I can learn it the Bayesian way... Rather than estimating a single θ, we obtain a distribution over possible values of θ.

Bayesian Learning
- Use Bayes rule: P(θ | D) = P(D | θ) P(θ) / P(D)
- Or equivalently: P(θ | D) ∝ P(D | θ) P(θ)

Bayesian Learning for Thumbtack
- The likelihood function is simply Binomial: P(D | θ) = θ^αH (1 − θ)^αT, where αH and αT count the observed heads and tails.
- What about the prior? It represents expert knowledge, and we want a simple posterior form.
- Conjugate priors give a closed-form representation of the posterior: for the Binomial, the conjugate prior is the Beta distribution.

Beta prior distribution – P(θ)
- Prior: P(θ) = Beta(βH, βT) ∝ θ^(βH − 1) (1 − θ)^(βT − 1)
- Likelihood function: P(D | θ) = θ^αH (1 − θ)^αT
- Posterior: P(θ | D) ∝ θ^(βH + αH − 1) (1 − θ)^(βT + αT − 1)
- Mean of Beta(βH, βT): βH / (βH + βT)
- Mode of Beta(βH, βT): (βH − 1) / (βH + βT − 2)

Posterior distribution
- Prior: Beta(βH, βT). Data: αH heads and αT tails.
- Posterior distribution: P(θ | D) = Beta(βH + αH, βT + αT)

Using the Bayesian posterior
- Posterior distribution: P(θ | D) = Beta(βH + αH, βT + αT)
- Bayesian inference no longer uses a single parameter; e.g., P(heads on the next flip | D) = ∫ θ P(θ | D) dθ
- The integral is often hard to compute.

MAP: Maximum a posteriori approximation
- As more data is observed, the Beta posterior becomes more certain (more sharply peaked).
- MAP: use the most likely parameter: θ_MAP = argmax_θ P(θ | D)

MAP for the Beta distribution
- MAP: use the most likely parameter, i.e., the posterior mode:
  θ_MAP = (βH + αH − 1) / (βH + βT + αH + αT − 2)
- The Beta prior is equivalent to extra thumbtack flips.
- As N → ∞, the prior is "forgotten"; but for small sample sizes, the prior is important!

What you need to know
- Go to the recitation on intro to probabilities (and the other recitations too).
- Point estimation: MLE, Bayesian learning, MAP (all three sketched in code just below).
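To make the thumbtack estimators concrete, here is a minimal Python sketch; it is not from the slides, and the Beta(50, 50) prior (the billionaire's "close to 50-50" belief as roughly 100 imaginary flips) and the 80%-heads data are made-up illustrative numbers.

```python
# MLE, Beta posterior update, and MAP for the thumbtack example.
# Variable names follow the slide notation: alpha = real flips,
# beta = prior pseudo-flips.

def mle(alpha_h, alpha_t):
    """MLE for P(heads): the empirical fraction of heads."""
    return alpha_h / (alpha_h + alpha_t)

def posterior(alpha_h, alpha_t, beta_h, beta_t):
    """Conjugacy: Beta prior + Binomial likelihood -> Beta posterior."""
    return beta_h + alpha_h, beta_t + alpha_t

def map_estimate(alpha_h, alpha_t, beta_h, beta_t):
    """Mode of the Beta posterior: (a - 1) / (a + b - 2)."""
    a, b = posterior(alpha_h, alpha_t, beta_h, beta_t)
    return (a - 1) / (a + b - 2)

beta_h, beta_t = 50, 50  # prior "close to 50-50": ~100 imaginary fair flips

for n in (10, 100, 100000):
    alpha_h = int(0.8 * n)  # pretend 80% of the real flips came up heads
    alpha_t = n - alpha_h
    print(n, mle(alpha_h, alpha_t),
          round(map_estimate(alpha_h, alpha_t, beta_h, beta_t), 3))
# At N=10 the MAP stays near 0.5 (the prior dominates). As N grows,
# the prior is "forgotten" and the MAP approaches the MLE of 0.8.
```

Because the Beta prior acts as pseudo-counts, the entire Bayesian update is two additions; no special library is needed.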
What about continuous variables?
- Billionaire says: If I am measuring a continuous variable, what can you do for me?
- You say: Let me tell you about Gaussians...

Some properties of Gaussians
- Affine transformation (multiplying by a scalar and adding a constant): if X ~ N(µ, σ²) and Y = aX + b, then Y ~ N(aµ + b, a²σ²).
- Sum of independent Gaussians: if X ~ N(µX, σX²) and Y ~ N(µY, σY²), then Z = X + Y ~ N(µX + µY, σX² + σY²).

Learning a Gaussian
- Collect a bunch of data: hopefully i.i.d. samples, e.g., exam scores.
- Learn the parameters: mean and variance.

MLE for Gaussian
- Probability of i.i.d. samples D = {x1, ..., xN}:
  P(D | µ, σ) = ∏i [1 / (σ√(2π))] exp(−(xi − µ)² / 2σ²)
- Log-likelihood of the data:
  ln P(D | µ, σ) = −N ln(σ√(2π)) − ∑i (xi − µ)² / 2σ²

Your second learning algorithm: MLE for the mean of a Gaussian
- What's the MLE for the mean? Set the derivative of the log-likelihood to zero:
  µ_MLE = (1/N) ∑i xi

MLE for variance
- Again, set the derivative to zero:
  σ²_MLE = (1/N) ∑i (xi − µ_MLE)²

Learning Gaussian parameters
- MLE: the sample mean and the divide-by-N sample variance above.
- BTW: the MLE for the variance of a Gaussian is biased. The expected result of the estimation is not the true parameter!
- Unbiased variance estimator: σ² = (1/(N − 1)) ∑i (xi − µ_MLE)²

Bayesian learning of Gaussian parameters
- Conjugate priors: for the mean, a Gaussian prior; for the variance, the Wishart distribution.
- Prior for the mean: a Gaussian, µ ~ N(η, λ²)

MAP for the mean of a Gaussian
- With a Gaussian prior, the posterior over µ is again Gaussian, so the MAP estimate is the posterior mean: a precision-weighted average of the prior mean η and the sample mean. (These Gaussian estimators are sketched in code at the end of these notes.)

Prediction of continuous variables
- Billionaire says: Wait, that's not what I meant!
- You say: Chill out, dude.
- He says: I want to predict a continuous variable from continuous inputs: I want to predict salaries from GPA.
- You say: I can regress that...

The regression problem
- Instances: <xj, tj>
- Learn: a mapping from x to t(x)
- Hypothesis space: given basis functions h1, ..., hk, find coefficients w = {w1, ..., wk} so that t(x) ≈ ∑i wi hi(x)
- Why is this called linear regression? Because the model is linear in the parameters w.
- Precisely: minimize the residual squared error,
  w* = argmin_w ∑j (tj − ∑i wi hi(xj))²

The regression problem in matrix notation
- Stack the N measurements (e.g., N sensor readings) into a vector t (N × 1) and the weights into a vector w (K × 1), and evaluate the K basis functions at the N inputs to form the design matrix H (N × K), with Hji = hi(xj).
- Then: w* = argmin_w (Hw − t)ᵀ (Hw − t)

Regression solution = simple matrix operations
- w* = (HᵀH)⁻¹ Hᵀ t, where HᵀH is a k × k matrix for k basis functions and Hᵀt is a k × 1 vector. (This solution is sketched in code at the end of these notes.)

Announcements 1
- Readings associated with each class: see the course website for specific sections, extra links, and further details. Visit the website frequently.
- Recitations: Thursdays, 5:00-6:20pm in Gates Hillman 6115.
- Special recitation on Matlab: today! 5:00-6:20pm, GHC 6115.

Announcements 2
- First homework out later today! Download from the course website. Start early!!! :) Due Sept. 30th.
- Also, HW0! Due this Thursday! Just to make sure you can access the submission directory.
- To expedite grading: there are 4 questions; please hand in 4 stapled separate parts, one for each question.
- Privacy policy for returning homeworks and exams: we write grades on the second page of the homework or exam. We want to hand out graded homeworks in class, but to do that CMU requires you to sign a waiver acknowledging that someone may turn the page and find your grade. If you are not comfortable with this possibility, let us know and your homework will be available for pick up from Michelle Martin at GHC 8001.

- Billionaire (again) says: Why sum squared error???
- You say: Gaussians, Dr. Gateson, Gaussians...
- Model: the prediction is a linear function plus Gaussian noise, t = ∑i wi hi(x) + ε, with ε ~ N(0, σ²).
- Learn w using MLE. But why does that give least squares?

Maximizing log-likelihood
- Maximize: ln P(D | w, σ) = −(1/2σ²) ∑j (tj − ∑i wi hi(xj))² + const
- Maximizing this over w is exactly minimizing the residual squared error: least-squares linear regression is MLE for Gaussians!!!

Applications Corner 1
- Predict stock value over time from past values and other relevant variables, e.g., weather, demand, etc.

Applications Corner 2
- Measure temperatures at some locations; predict temperatures throughout the environment.
- [Figure: floor plan of a lab/office environment with numbered temperature-sensor locations; Guestrin et al. '04]

Applications Corner 3
- Predict when a sensor will fail, based on several variables: age, chemical exposure, number of hours used, ...

Bias-Variance tradeoff – Intuition
- Model too "simple" → does not fit the data well: a biased solution.
- Model too complex → small changes to the data change the solution a lot: a high-variance solution. (Simulated in the last code sketch below.)
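The code sketches promised above are collected here. First, the Gaussian estimators: a minimal sketch, assuming numpy, of the MLE for the mean, the biased and unbiased variance estimators, and the MAP estimate for the mean given a known noise variance and a Gaussian prior µ ~ N(η, λ²). The simulated exam scores and the values of eta, lambda2, and sigma2 are illustrative choices, not course code.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=75.0, scale=10.0, size=50)  # simulated "exam scores"
N = len(x)

# MLE for the mean, and the biased (divide-by-N) variance MLE.
mu_mle = x.sum() / N
var_mle = ((x - mu_mle) ** 2).sum() / N
# Unbiased variance estimator: divide by N - 1 instead.
var_unbiased = ((x - mu_mle) ** 2).sum() / (N - 1)

# MAP for the mean, assuming known noise variance sigma2 and a
# Gaussian prior mu ~ N(eta, lambda2). The Gaussian posterior's mean
# (= its mode) is a precision-weighted average of eta and the data.
eta, lambda2, sigma2 = 50.0, 25.0, 100.0
post_precision = 1.0 / lambda2 + N / sigma2
mu_map = (eta / lambda2 + x.sum() / sigma2) / post_precision

print(mu_mle, var_mle, var_unbiased, mu_map)
# mu_map is pulled from mu_mle toward the prior mean eta; the pull
# shrinks as N grows, just as the Beta prior is "forgotten" above.
```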


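Next, the regression solution w* = (HᵀH)⁻¹ Hᵀ t as code: a minimal sketch, again assuming numpy, on a made-up polynomial toy problem. The helper names are mine, and in practice np.linalg.lstsq is preferred over forming HᵀH, for numerical stability.

```python
import numpy as np

def fit_linear_regression(x, t, basis_funcs):
    """Least squares: solve (H^T H) w = H^T t, where H[j, i] = h_i(x_j)."""
    H = np.column_stack([h(x) for h in basis_funcs])  # N x K design matrix
    return np.linalg.solve(H.T @ H, H.T @ t)          # solve, don't invert

def predict(x, w, basis_funcs):
    H = np.column_stack([h(x) for h in basis_funcs])
    return H @ w

# Toy problem with polynomial basis {1, x, x^2} and Gaussian noise,
# matching the model t = sum_i w_i h_i(x) + eps from the slides.
basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x ** 2]
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 30)
t = 1.0 + 2.0 * x - 3.0 * x ** 2 + rng.normal(scale=0.05, size=x.shape)

w = fit_linear_regression(x, t, basis)
print(w)                                   # close to [1, 2, -3]
print(predict(np.array([0.5]), w, basis))  # close to 1.25
```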

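Finally, a quick simulation of the bias-variance intuition from the last slide: fit polynomials of a low and a high degree to many small, independently re-sampled training sets, and watch the prediction at one fixed test point. The sine ground truth, the noise level, and the degrees are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def true_f(x):
    return np.sin(2 * np.pi * x)

x_test = 0.25  # true value there: sin(pi/2) = 1

for degree in (1, 9):
    preds = []
    for _ in range(200):  # 200 independent small training sets
        x = rng.uniform(0.0, 1.0, size=12)
        t = true_f(x) + rng.normal(scale=0.3, size=x.shape)
        coeffs = np.polyfit(x, t, degree)   # least-squares polynomial fit
        preds.append(np.polyval(coeffs, x_test))
    preds = np.asarray(preds)
    print(degree, round(preds.mean() - true_f(x_test), 3), round(preds.std(), 3))

# Degree 1 is too simple: it misses the curve (large bias) but barely
# moves across datasets (small variance). Degree 9 with 12 points is
# too complex: nearly unbiased, but the fit swings wildly (high variance).
```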
