CMU CS 10701 - Bayes optimal classifier / Naïve Bayes / What's learning, revisited

Machine Learning – 10701/15781, Carlos Guestrin, Carnegie Mellon University
September 21st, 2009. © Carlos Guestrin 2005-2009

Classification
- Learn h: X → Y
  - X – features
  - Y – target classes
- Suppose you know P(Y|X) exactly; how should you classify?
- Bayes classifier: h_Bayes(x) = argmax_y P(Y = y | X = x). Why?

Optimal classification
- Theorem: the Bayes classifier h_Bayes is optimal! That is, no other classifier h has a smaller probability of error.
- Proof idea: at every x, choosing the class with the largest posterior P(Y = y | X = x) minimizes the chance of error at that x, and hence minimizes the overall error probability.

Bayes rule
- P(Y|X) = P(X|Y) P(Y) / P(X), which is shorthand for:
  P(Y = y | X = x) = P(X = x | Y = y) P(Y = y) / P(X = x), for every value y of Y and x of X.

How hard is it to learn the optimal classifier?
- Data = a set of (features, class) examples. How do we represent these? How many parameters?
  - Prior P(Y): suppose Y is composed of k classes → k − 1 parameters.
  - Likelihood P(X|Y): suppose X is composed of n binary features → k (2^n − 1) parameters.
- Complex model! High variance with limited data!!!

Conditional independence
- X is conditionally independent of Y given Z if the probability distribution governing X is independent of the value of Y, given the value of Z:
  P(X = x | Y = y, Z = z) = P(X = x | Z = z) for all x, y, z
- e.g., P(Thunder | Rain, Lightning) = P(Thunder | Lightning)
- Equivalent to: P(X, Y | Z) = P(X | Z) P(Y | Z)

What if features are independent?
- Predict Thunder from two conditionally independent features: Lightning and Rain.

The Naïve Bayes assumption
- Naïve Bayes assumption: features are independent given the class:
  P(X1, X2 | Y) = P(X1 | Y) P(X2 | Y)
- More generally: P(X1, …, Xn | Y) = ∏_i P(Xi | Y)
- How many parameters now? Suppose X is composed of n binary features: only n·k parameters (plus k − 1 for the prior), instead of k (2^n − 1).

The Naïve Bayes classifier
- Given:
  - the prior P(Y),
  - n conditionally independent features X given the class Y,
  - for each Xi, the likelihood P(Xi|Y).
- Decision rule: y* = h_NB(x) = argmax_y P(y) ∏_i P(xi | y)
- If the assumption holds, NB is the optimal classifier!

MLE for the parameters of NB
- Given a dataset, let Count(A = a, B = b) ← number of examples where A = a and B = b.
- MLE for NB, simply:
  - Prior: P(Y = y) = Count(Y = y) / N
  - Likelihood: P(Xi = xi | Y = y) = Count(Xi = xi, Y = y) / Count(Y = y)

Subtleties of NB classifier 1 – violating the NB assumption
- Usually, features are not conditionally independent.
- The probabilities P(Y|X) estimated by NB are then often biased towards 0 or 1.
- Nonetheless, NB is the single most used classifier out there; it often performs well even when the assumption is violated.
- [Domingos & Pazzani '96] discuss some conditions for good performance.

Subtleties of NB classifier 2 – insufficient training data
- What if you never see a training instance where X1 = a when Y = b?
  - e.g., Y = {SpamEmail}, X1 = {'Enlargement'}, so the MLE gives P(X1 = a | Y = b) = 0.
- Then, no matter what values X2, …, Xn take: P(Y = b | X1 = a, X2, …, Xn) = 0. What now???

MAP for Beta distribution
- MAP: use the most likely parameter, θ̂ = argmax_θ P(θ | D).
- A Beta prior is equivalent to extra thumbtack flips.
- As N → ∞, the prior is "forgotten"; but for small sample sizes, the prior is important!

Bayesian learning for NB parameters – a.k.a. smoothing
- Dataset of N examples; prior "distribution" Q(Xi, Y), Q(Y); m "virtual" examples.
- MAP estimate: P(Xi = xi | Y = y) = (Count(Xi = xi, Y = y) + m Q(Xi = xi, Y = y)) / (Count(Y = y) + m Q(Y = y))
- Now, even if you never observe a feature/class combination, the posterior probability is never zero.
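To make the count-based MLE and the smoothed (virtual-example) estimate concrete, here is a minimal sketch for two classes and binary features. The toy dataset, the variable names, and the choice q = 0.5 for the virtual-example prior are illustrative assumptions, not details taken from the lecture.

```python
from collections import defaultdict
import math

# Toy training set: (binary feature vector, class label). Illustrative only.
data = [([1, 0, 1], "spam"),
        ([1, 1, 0], "spam"),
        ([0, 0, 1], "ham"),
        ([0, 1, 1], "ham")]
m, q = 1.0, 0.5   # m "virtual" examples with assumed prior Q(Xi = 1 | Y = y) = q

# Count(Y = y) and Count(Xi = 1, Y = y), as in the MLE slide.
class_count = defaultdict(float)
feat_count = defaultdict(lambda: defaultdict(float))
for x, y in data:
    class_count[y] += 1
    for i, xi in enumerate(x):
        feat_count[y][i] += xi

N = len(data)
prior = {y: c / N for y, c in class_count.items()}   # MLE prior P(Y = y)

def likelihood(i, xi, y):
    # Smoothed likelihood P(Xi = xi | Y = y) with m virtual examples
    p_one = (feat_count[y][i] + m * q) / (class_count[y] + m)
    return p_one if xi == 1 else 1.0 - p_one

def predict(x):
    # Naive Bayes decision rule: argmax_y  log P(y) + sum_i log P(xi | y)
    scores = {y: math.log(prior[y]) + sum(math.log(likelihood(i, xi, y))
                                          for i, xi in enumerate(x))
              for y in prior}
    return max(scores, key=scores.get)

print(predict([1, 0, 0]))   # predicts "spam" on this toy data
```

With m = 0 the estimates reduce to the plain MLE counts; with m > 0 a feature value never seen together with a class still gets a non-zero probability, which is exactly the point of the virtual examples.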
smoothing Dataset of N examples Prior  “distribution” Q(Xi,Y), Q(Y) m “virtual” examples MAP estimate P(Xi|Y) Now, even if you never observe a feature/class, posterior probability never zero8Text classification Classify e-mails Y = {Spam,NotSpam} Classify news articles Y = {what is the topic of the article?} Classify webpages Y = {Student, professor, project, …} What about the features X? The text!Features X are entire document –Xifor ithword in article9NB for Text classification P(X|Y) is huge!!! Article at least 1000 words, X={X1,…,X1000} Xirepresents ith word in document, i.e., the domain of Xiis entire vocabulary, e.g., Webster Dictionary (or more), 10,000 words, etc. NB assumption helps a lot!!! P(Xi=xi|Y=y) is just the probability of observing word xiin a document on topic yBag of words model Typical additional assumption – Position in document doesn’t matter: P(Xi=xi|Y=y) = P(Xk=xi|Y=y)  “Bag of words” model – order of words on the page ignored Sounds really silly, but often works very well!When the lecture is over, remember to wake up the person sitting next to you in the lecture room.10Bag of words model Typical additional assumption – Position in document doesn’t matter: P(Xi=xi|Y=y) = P(Xk=xi|Y=y)  “Bag of words” model – order of words on the page ignored Sounds really silly, but often works very well!in is lecture lecture next over person remember room sitting the the the to to up wake when youBag of Words Approachaardvark 0about 2all 2Africa 1apple 0anxious 0...gas 1...oil 1…Zaire 011NB with Bag of Words for text classification Learning phase: Prior P(Y) Count how many documents you have from each topic (+ prior) P(Xi|Y)  For each topic, count how many times you saw word in documents of this topic (+ prior) Test phase: For each document Use naïve Bayes decision ruleTwenty News Groups results12Learning curve for Twenty News

