CMU CS 10701 - MLE_MAP_1-18-11-ann


Machine Learning 10-701
Tom M. Mitchell
Machine Learning Department, Carnegie Mellon University
January 18, 2011

Today:
• Bayes rule
• Estimating parameters
  • maximum likelihood
  • maximum a posteriori

Readings (probability review):
• Bishop Ch. 1 through 1.2.3
• Bishop Ch. 2 through 2.2
• Andrew Moore's online tutorial

Many of these slides are derived from William Cohen, Andrew Moore, Aarti Singh, Eric Xing, and Carlos Guestrin. Thanks!

Visualizing Probabilities

[Venn diagram: the sample space of all possible worlds, with total area 1, contains events A and B, overlapping in the region A ^ B.]

Definition of Conditional Probability

  P(A|B) = P(A ^ B) / P(B)

Corollary: the Chain Rule

  P(A ^ B) = P(A|B) P(B)
  P(C ^ A ^ B) = P(C|A ^ B) P(A|B) P(B)

Independent Events

• Definition: two events A and B are independent if P(A ^ B) = P(A) * P(B)
• Intuition: knowing A tells us nothing about the value of B (and vice versa)

Bayes Rule

Let's write two expressions for P(A ^ B):

  P(A ^ B) = P(A|B) P(B)  and  P(A ^ B) = P(B|A) P(A)

Setting them equal and dividing by P(B) gives Bayes rule:

  P(A|B) = P(B|A) P(A) / P(B)

Bayes, Thomas (1763). An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London, 53:370-418.

"…by no means merely a curious speculation in the doctrine of chances, but necessary to be solved in order to a sure foundation for all our reasonings concerning past facts, and what is likely to be hereafter…. necessary to be considered by any that would give a clear account of the strength of analogical or inductive reasoning…"

Bayes' rule: we call P(A) the "prior" and P(A|B) the "posterior".

Other Forms of Bayes Rule

Expanding P(B) by the law of total probability:

  P(A|B) = P(B|A) P(A) / [ P(B|A) P(A) + P(B|~A) P(~A) ]

Applying Bayes Rule

A = you have the flu, B = you just coughed. Assume:

  P(A) = 0.05,  P(B|A) = 0.80,  P(B|~A) = 0.20

What is P(flu | cough) = P(A|B)? Working it out:

  P(B) = P(B|A) P(A) + P(B|~A) P(~A) = 0.80 * 0.05 + 0.20 * 0.95 = 0.23
  P(A|B) = 0.04 / 0.23 ≈ 0.17

What does all this have to do with function approximation?

The Joint Distribution

Recipe for making a joint distribution of M variables:
1. Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows).
2. For each combination of values, say how probable it is.
3. If you subscribe to the axioms of probability, those numbers must sum to 1.

Example: Boolean variables A, B, C

  A B C | Prob
  ------+-----
  0 0 0 | 0.30
  0 0 1 | 0.05
  0 1 0 | 0.10
  0 1 1 | 0.05
  1 0 0 | 0.05
  1 0 1 | 0.10
  1 1 0 | 0.25
  1 1 1 | 0.10

[A. Moore]
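The following is a minimal Python sketch, not from the original slides, of the recipe above: the joint distribution is just a table mapping each truth-table row to its probability, and (as the next slides show) any query is answered by summing rows. The names joint, prob, and cond_prob are illustrative.

    # Joint distribution over Boolean variables A, B, C from the table above,
    # keyed by worlds (a, b, c).
    joint = {
        (0, 0, 0): 0.30, (0, 0, 1): 0.05,
        (0, 1, 0): 0.10, (0, 1, 1): 0.05,
        (1, 0, 0): 0.05, (1, 0, 1): 0.10,
        (1, 1, 0): 0.25, (1, 1, 1): 0.10,
    }

    def prob(event):
        """P(event): sum the probabilities of all worlds where the event holds."""
        return sum(p for world, p in joint.items() if event(world))

    def cond_prob(event, given):
        """P(event | given) = P(event ^ given) / P(given)."""
        return prob(lambda w: event(w) and given(w)) / prob(given)

    print(prob(lambda w: w[0] == 1))                            # P(A)     = 0.50
    print(prob(lambda w: w[0] == 1 and w[1] == 1))              # P(A ^ B) = 0.35
    print(cond_prob(lambda w: w[0] == 1, lambda w: w[1] == 1))  # P(A | B) = 0.70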
Using the Joint

Once you have the joint distribution, you can ask for the probability of any logical expression involving your attributes. [A. Moore]

  P(Poor ^ Male) = 0.4654
  P(Poor) = 0.7604

Inference with the Joint

  P(Male | Poor) = P(Poor ^ Male) / P(Poor) = 0.4654 / 0.7604 = 0.612

[A. Moore]

Learning and the Joint Distribution

Suppose we want to learn the function f: <G, H> → W, or equivalently, P(W | G, H).

Solution: learn the joint distribution from data, then calculate conditionals such as

  P(W = rich | G = female, H = 40.5- )

[A. Moore]

This sounds like the solution to learning F: X → Y, or P(Y | X). Are we done?

[Image-only slides follow: "Maximum Likelihood Estimate for θ" and two "Beta prior distribution – P(θ)" slides (C. Guestrin), and two "Conjugate priors" slides (A. Singh); their equations and plots were not captured by the text extraction.]

Estimating Parameters

• Maximum Likelihood Estimate (MLE): choose the θ that maximizes the probability of the observed data:

    θ_MLE = argmax_θ P(data | θ)

• Maximum a Posteriori (MAP) estimate: choose the θ that is most probable given the prior probability and the data:

    θ_MAP = argmax_θ P(θ | data)

Dirichlet distribution

• The number of heads in N flips of a two-sided coin follows a binomial distribution, and the Beta distribution is a good prior (it is the conjugate prior for the binomial).
• What if it's not two-sided, but k-sided? The outcome counts follow a multinomial distribution, and the Dirichlet distribution is the conjugate prior.

You should know

• Probability basics
  – random variables, events, sample space, conditional probabilities, …
  – independence of random variables
  – Bayes rule
  – joint probability distributions
  – calculating probabilities from the joint distribution
• Estimating parameters from data
  – maximum likelihood estimates
  – maximum a posteriori estimates
  – distributions: binomial, Beta, Dirichlet, …
  – conjugate priors

Extra slides

Expected values

Given a discrete random variable X, the expected value of X, written E[X], is

  E[X] = Σ_x x P(X = x)

We can also talk about the expected value of functions of X:

  E[f(X)] = Σ_x f(x) P(X = x)

Covariance

Given two discrete random variables X and Y, we define the covariance of X and Y as

  Cov(X, Y) = E[ (X - E[X]) (Y - E[Y]) ]

e.g., X = gender, Y = playsFootball; or X = gender, Y = leftHanded
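As a quick check on the expected-value and covariance definitions above, here is a small Python sketch; the joint distribution over (X, Y) below is made up purely for illustration.

    # Hypothetical joint P(X = x, Y = y) over two Boolean r.v.'s; values sum to 1.
    joint_xy = {
        (0, 0): 0.3, (0, 1): 0.2,
        (1, 0): 0.1, (1, 1): 0.4,
    }

    def expectation(f):
        """E[f(X, Y)] = sum over (x, y) of f(x, y) * P(x, y)."""
        return sum(f(x, y) * p for (x, y), p in joint_xy.items())

    ex = expectation(lambda x, y: x)                     # E[X] = 0.5
    ey = expectation(lambda x, y: y)                     # E[Y] = 0.6
    cov = expectation(lambda x, y: (x - ex) * (y - ey))  # Cov(X, Y) = 0.1
    print(ex, ey, cov)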
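Finally, a sketch tying together the Estimating Parameters section: it assumes the coin-flip setting of the (uncaptured) Guestrin slides, with a Beta prior on θ and made-up counts. The function names and the counts are illustrative, not from the lecture.

    # MLE vs. MAP for a coin's heads probability theta, with a Beta prior
    # (the conjugate prior for the binomial).

    def theta_mle(heads, tails):
        """MLE: the theta maximizing P(data | theta), i.e., the fraction of heads."""
        return heads / (heads + tails)

    def theta_map(heads, tails, alpha_h, alpha_t):
        """MAP under a Beta(alpha_h, alpha_t) prior: the mode of the
        Beta(heads + alpha_h, tails + alpha_t) posterior."""
        return (heads + alpha_h - 1) / (heads + tails + alpha_h + alpha_t - 2)

    heads, tails = 3, 2                   # hypothetical observed flips
    print(theta_mle(heads, tails))        # 0.60
    print(theta_map(heads, tails, 5, 5))  # 7/13 ≈ 0.538: the prior pulls toward 0.5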

