Bayes optimal classifier
Naïve Bayes
What's learning, revisited

Machine Learning – 10-701/15-781
Carlos Guestrin
Carnegie Mellon University
September 21st, 2009
©Carlos Guestrin 2005-2009

Classification

- Learn: h : X → Y
  - X – features
  - Y – target classes
- Suppose you know P(Y|X) exactly. How should you classify?
- Bayes classifier:
  h_Bayes(x) = argmax_y P(Y=y | X=x)
- Why?

Optimal classification

- Theorem: the Bayes classifier h_Bayes is optimal! That is, for any classifier h:
  P(h_Bayes(x) ≠ y) ≤ P(h(x) ≠ y)
- Proof sketch: for each x, h_Bayes picks the class with the largest P(Y=y | X=x), so its conditional probability of error, 1 − max_y P(Y=y | X=x), is the smallest any classifier can achieve at that x; averaging over x preserves the inequality.

Bayes rule:
  P(Y | X) = P(X | Y) P(Y) / P(X)
which is shorthand for:
  (∀ x, y) P(Y=y | X=x) = P(X=x | Y=y) P(Y=y) / P(X=x)

How hard is it to learn the optimal classifier?

- Data: N labeled examples {(x^(j), y^(j))}, j = 1, ..., N
- How do we represent these? How many parameters?
  - Prior P(Y): suppose Y is composed of k classes → k − 1 parameters
  - Likelihood P(X|Y): suppose X is composed of n binary features → 2^n − 1 parameters per class, k(2^n − 1) in total
- Complex model! High variance with limited data!!!

Conditional Independence

- X is conditionally independent of Y given Z if the probability distribution governing X is independent of the value of Y, given the value of Z:
  (∀ x, y, z) P(X=x | Y=y, Z=z) = P(X=x | Z=z)
- e.g., P(Thunder | Rain, Lightning) = P(Thunder | Lightning)
- Equivalent to:
  P(X, Y | Z) = P(X | Z) P(Y | Z)

What if features are independent?

- Predict Thunder from two conditionally independent features, Lightning and Rain:
  P(Lightning, Rain | Thunder) = P(Lightning | Thunder) P(Rain | Thunder)

The Naïve Bayes assumption

- Naïve Bayes assumption: features are independent given the class:
  P(X1, X2 | Y) = P(X1 | Y) P(X2 | Y)
- More generally:
  P(X1, ..., Xn | Y) = ∏_i P(Xi | Y)
- How many parameters now? Suppose X is composed of n binary features: only n parameters per class (kn in total), instead of k(2^n − 1)

The Naïve Bayes Classifier

- Given:
  - Prior P(Y)
  - n conditionally independent features X given the class Y
  - For each Xi, the likelihood P(Xi | Y)
- Decision rule:
  y* = h_NB(x) = argmax_y P(y) ∏_i P(xi | y)
- If the assumption holds, NB is the optimal classifier!
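A minimal sketch of this decision rule in Python, assuming the parameters are already given as dictionaries (the table layout and function name are illustrative, not from the lecture). Log-probabilities are summed instead of multiplied, to avoid floating-point underflow when n is large:

```python
import math

def nb_predict(x, prior, likelihood):
    """Naive Bayes decision rule: argmax_y P(y) * prod_i P(x_i | y).

    x:          tuple of feature values (x_1, ..., x_n)
    prior:      dict mapping class y -> P(Y=y)
    likelihood: dict mapping (i, x_i, y) -> P(X_i = x_i | Y = y)

    Assumes all probabilities are strictly positive; see the
    smoothing discussion below for why that matters.
    """
    best_y, best_score = None, -math.inf
    for y, p_y in prior.items():
        score = math.log(p_y)
        for i, xi in enumerate(x):
            score += math.log(likelihood[(i, xi, y)])
        if score > best_score:
            best_y, best_score = y, score
    return best_y
```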
MLE for the parameters of NB

- Given a dataset of N examples, let
  Count(A=a, B=b) ← number of examples where A=a and B=b
- MLE for NB is simply:
  - Prior: P(Y=y) = Count(Y=y) / N
  - Likelihood: P(Xi=xi | Y=y) = Count(Xi=xi, Y=y) / Count(Y=y)

Subtleties of NB classifier 1 – Violating the NB assumption

- Usually, features are not conditionally independent:
  P(X1, ..., Xn | Y) ≠ ∏_i P(Xi | Y)
- Actual probabilities P(Y|X) are often biased towards 0 or 1
- Nonetheless, NB is the single most used classifier out there
- NB often performs well, even when the assumption is violated
- [Domingos & Pazzani '96] discuss some conditions for good performance

Subtleties of NB classifier 2 – Insufficient training data

- What if you never see a training instance where X1=a when Y=b?
  - e.g., Y = SpamEmail, X1 = 'Enlargement':
    P(X1=a | Y=b) = 0
- Thus, no matter what values X2, ..., Xn take:
  P(Y=b | X1=a, X2, ..., Xn) = 0
- What now???

MAP for Beta distribution

- MAP: use the most likely parameter:
  θ̂ = argmax_θ P(θ | D)
  With a Beta(β_H, β_T) prior and data containing α_H heads and α_T tails:
  θ̂ = (α_H + β_H − 1) / (α_H + β_H + α_T + β_T − 2)
- The Beta prior is equivalent to extra thumbtack flips
- As N → ∞, the prior is "forgotten"
- But for small sample sizes, the prior is important!

Bayesian learning for NB parameters – a.k.a. smoothing

- Dataset of N examples
- Prior "distribution" Q(Xi, Y), Q(Y); m "virtual" examples
- MAP estimate:
  P(Xi=xi | Y=y) = (Count(Xi=xi, Y=y) + m Q(Xi=xi, Y=y)) / (Count(Y=y) + m Q(Y=y))
- Now, even if you never observe a feature value together with a class, the posterior probability is never zero
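A sketch of these estimators, in the same illustrative table layout as the decision-rule snippet above and with a uniform prior Q over binary feature values (an assumption; m = 0 recovers the plain MLE):

```python
from collections import Counter

def nb_fit(examples, m=1.0, q=0.5):
    """Estimate NB parameters from (x, y) pairs.

    examples: list of (x, y), where x is a tuple of binary features
    m:        number of "virtual" examples (m=0 gives the plain MLE)
    q:        prior mass per feature value (0.5 = uniform over {0, 1})

    Returns (prior, likelihood) in the format nb_predict expects.
    """
    N = len(examples)
    n = len(examples[0][0])
    class_count = Counter(y for _, y in examples)
    feat_count = Counter()
    for x, y in examples:
        for i, xi in enumerate(x):
            feat_count[(i, xi, y)] += 1

    prior = {y: c / N for y, c in class_count.items()}
    likelihood = {}
    for y, cy in class_count.items():
        for i in range(n):
            for xi in (0, 1):
                # Count plus m*q virtual examples: never zero when m > 0
                likelihood[(i, xi, y)] = (feat_count[(i, xi, y)] + m * q) / (cy + m)
    return prior, likelihood
```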
Text classification

- Classify e-mails: Y = {Spam, NotSpam}
- Classify news articles: Y = {what is the topic of the article?}
- Classify webpages: Y = {Student, Professor, Project, ...}
- What about the features X? The text!
- Features X are the entire document – Xi is the ith word in the article

NB for Text classification

- P(X|Y) is huge!!!
  - An article has at least 1000 words: X = {X1, ..., X1000}
  - Xi represents the ith word in the document, i.e., the domain of Xi is the entire vocabulary, e.g., Webster's Dictionary (or more): 10,000 words, etc.
- The NB assumption helps a lot!!!
  - P(Xi=xi | Y=y) is just the probability of observing word xi in the ith position of a document on topic y

Bag of words model

- Typical additional assumption – position in the document doesn't matter:
  P(Xi=xi | Y=y) = P(Xk=xi | Y=y)
- "Bag of words" model – the order of the words on the page is ignored
- Sounds really silly, but often works very well!
- Example: "When the lecture is over, remember to wake up the person sitting next to you in the lecture room." becomes the bag
  in is lecture lecture next over person remember room sitting the the the to to up wake when you

Bag of Words Approach

A document is reduced to a vector of word counts:

  aardvark  0
  about     2
  all       2
  Africa    1
  apple     0
  anxious   0
  ...
  gas       1
  ...
  oil       1
  ...
  Zaire     0

NB with Bag of Words for text classification

- Learning phase:
  - Prior P(Y): count how many documents you have from each topic (+ prior)
  - P(Xi|Y): for each topic, count how many times you saw each word in documents of that topic (+ prior)
- Test phase:
  - For each document, use the naïve Bayes decision rule
- A sketch of this pipeline follows the results below

Twenty News Groups results

[Figure: Twenty Newsgroups classification results]

Learning curve for Twenty News Groups

[Figure: learning curve on Twenty Newsgroups]
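To tie the pieces together, a minimal end-to-end sketch of the learning and test phases for bag-of-words NB (the toy data and all names are illustrative; smoothing adds m virtual occurrences of each vocabulary word per topic):

```python
import math
from collections import Counter, defaultdict

def bow_nb_fit(docs, m=1.0):
    """Learning phase: docs is a list of (list_of_words, topic) pairs."""
    word_counts = defaultdict(Counter)   # topic -> word -> count
    topic_counts = Counter(topic for _, topic in docs)
    vocab = set()
    for words, topic in docs:
        word_counts[topic].update(words)
        vocab.update(words)

    prior = {t: c / len(docs) for t, c in topic_counts.items()}

    def word_prob(w, t):
        # (count + m virtual occurrences) / (total words + m per vocab word)
        total = sum(word_counts[t].values())
        return (word_counts[t][w] + m) / (total + m * len(vocab))

    return prior, word_prob, vocab

def bow_nb_predict(words, prior, word_prob, vocab):
    """Test phase: the naive Bayes decision rule in log space."""
    scores = {t: math.log(p) + sum(math.log(word_prob(w, t))
                                   for w in words if w in vocab)
              for t, p in prior.items()}
    return max(scores, key=scores.get)

# Toy usage:
docs = [("cheap enlargement pills".split(), "spam"),
        ("lecture notes on machine learning".split(), "notspam")]
prior, word_prob, vocab = bow_nb_fit(docs)
print(bow_nb_predict("cheap pills".split(), prior, word_prob, vocab))  # -> spam
```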