Naïve Bayes & Logistic Regression
Machine Learning 10-701/15-781
Carlos Guestrin, Carnegie Mellon University
January 30th, 2006

Readings (see class website): Mitchell's chapter (required); Ng & Jordan '02 (optional); gradient ascent and extensions: Koller & Friedman, Chapter 1.4.
Today: Naïve Bayes continued; Naïve Bayes with continuous variables; Logistic Regression.

Announcements
- Recitations stay on Thursdays, 5-6:30pm in Wean 5409. This week: Naïve Bayes & Logistic Regression.
- Extension for the first homework: due Wed., Feb. 8th, at the beginning of class. Mitchell's chapter is the most useful reading.
- Go to the AI seminar: Tuesdays, 3:30pm, Wean 5409, http://www.cs.cmu.edu/aiseminar. This week's seminar is very relevant to what we are covering in class.

Classification
- Learn h: X -> Y, where X are the features and Y is the set of target classes.
- Suppose you know P(Y|X) exactly; how should you classify?
- Bayes classifier: h_Bayes(x) = argmax_y P(Y=y | X=x). Why?

Optimal classification
- Theorem: the Bayes classifier h_Bayes is optimal; that is, for any classifier h, error(h_Bayes) <= error(h).
- Proof idea: for every x, predicting the most probable class minimizes the probability of error at that x; averaging over x gives the result.

How hard is it to learn the optimal classifier?
- Data: how do we represent these? How many parameters?
- Prior P(Y): suppose Y is composed of k classes: k - 1 parameters.
- Likelihood P(X|Y): suppose X is composed of n binary features: k(2^n - 1) parameters.
- Complex model! High variance with limited data!!!

Conditional Independence
- X is conditionally independent of Y given Z if the probability distribution governing X is independent of the value of Y, given the value of Z:
  (for all i, j, k) P(X=i | Y=j, Z=k) = P(X=i | Z=k)
- e.g., P(Thunder | Rain, Lightning) = P(Thunder | Lightning)
- Equivalent to: P(X, Y | Z) = P(X | Z) P(Y | Z)

The Naïve Bayes assumption
- Naïve Bayes assumption: features are independent given the class:
  P(X1, X2 | Y) = P(X1 | Y) P(X2 | Y)
- More generally: P(X1, ..., Xn | Y) = prod_i P(Xi | Y)
- How many parameters now? Suppose X is composed of n binary features: roughly kn (one probability per feature per class), linear in n instead of exponential.

The Naïve Bayes Classifier
- Given: prior P(Y); n conditionally independent features X given the class Y; for each Xi, a likelihood P(Xi | Y).
- Decision rule: y* = h_NB(x) = argmax_y P(y) prod_i P(xi | y)
- If the assumption holds, NB is the optimal classifier!

MLE for the parameters of NB
- Given a dataset, let Count(A=a, B=b) denote the number of examples where A=a and B=b.
- MLE for NB, simply:
  Prior: P(Y=y) = Count(Y=y) / N
  Likelihood: P(Xi=xi | Y=y) = Count(Xi=xi, Y=y) / Count(Y=y)

Subtleties of NB classifier 1 - Violating the NB assumption
- Usually, features are not conditionally independent: P(X1, ..., Xn | Y) != prod_i P(Xi | Y).
- Thus, in NB, the actual probabilities P(Y|X) are often biased towards 0 or 1 (see Homework 1).
- Nonetheless, NB is the single most used classifier out there. NB often performs well even when the assumption is violated. [Domingos & Pazzani '96] discuss some conditions for good performance.

Subtleties of NB classifier 2 - Insufficient training data
- What if you never see a training instance where X1=a when Y=b? E.g., Y = SpamEmail, X1 = 'Enlargement': then P(X1=a | Y=b) = 0.
- Thus, no matter what values X2, ..., Xn take, P(Y=b | X1=a, X2, ..., Xn) = 0.
- What now???

MAP for Beta distribution
- MAP: use the most likely parameter, theta_MAP = argmax_theta P(theta | D).
- A Beta prior is equivalent to extra thumbtack flips.
- As N -> infinity, the prior is "forgotten"; but for small sample sizes, the prior is important!

Bayesian learning for NB parameters (a.k.a. smoothing)
- Dataset of N examples.
- Prior: "distribution" Q(Xi, Y), Q(Y); m "virtual" examples.
- MAP estimate of P(Xi | Y): add the m virtual examples, distributed according to the prior, to the observed counts before normalizing.
- Now, even if you never observe a feature/class combination, the posterior probability is never zero (see the code sketch below).

Text classification
- Classify e-mails: Y = {Spam, NotSpam}.
- Classify news articles: Y = {what is the topic of the article?}.
- Classify webpages: Y = {Student, Professor, Project, ...}.
- What about the features X? The text!

Features X are the entire document: Xi is the ith word in the article.
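The following is a minimal sketch, not from the lecture, of the estimates and decision rule above for binary features: the MLE counts are smoothed with m "virtual" examples spread uniformly over the two values of each feature (one simple choice of the prior Q), and classification uses the argmax rule in log space. Function names and the tiny dataset are made up for illustration.

```python
import numpy as np

def train_nb(X, y, m=1.0):
    """Naive Bayes with binary features: counts plus m 'virtual' examples
    spread uniformly over the two values of each feature (a simple prior Q)."""
    classes = np.unique(y)
    log_prior, log_lik = {}, {}
    for c in classes:
        Xc = X[y == c]
        log_prior[c] = np.log(len(Xc) / len(X))
        # smoothed estimate: (Count(Xi=1, Y=c) + m/2) / (Count(Y=c) + m)
        p1 = (Xc.sum(axis=0) + m / 2) / (len(Xc) + m)
        log_lik[c] = (np.log(p1), np.log(1 - p1))   # log P(Xi=1|c), log P(Xi=0|c)
    return log_prior, log_lik

def predict_nb(x, log_prior, log_lik):
    """Decision rule: argmax_y  log P(y) + sum_i log P(xi | y)."""
    best_c, best_score = None, -np.inf
    for c, lp in log_prior.items():
        log_p1, log_p0 = log_lik[c]
        score = lp + np.sum(np.where(x == 1, log_p1, log_p0))
        if score > best_score:
            best_c, best_score = c, score
    return best_c

# tiny usage example with made-up data (4 examples, 3 binary features)
X = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 0, 0]])
y = np.array([1, 1, 0, 0])
log_prior, log_lik = train_nb(X, y, m=1.0)
print(predict_nb(np.array([1, 0, 1]), log_prior, log_lik))   # -> 1
```

With m = 0 this reduces to the plain MLE, and a single zero count would drive the whole product (and hence the posterior) to zero, which is exactly the problem the virtual examples avoid.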
NB for text classification
- P(X | Y) is huge!!! An article has at least 1000 words: X = (X1, ..., X1000), where Xi represents the ith word in the document; i.e., the domain of Xi is the entire vocabulary, e.g., Webster's Dictionary (or more): 10,000 words, etc.
- The NB assumption helps a lot!!! P(Xi=xi | Y=y) is just the probability of observing word xi in a document on topic y.

Bag of words model
- Typical additional assumption: the position in the document doesn't matter: P(Xi=xi | Y=y) = P(Xk=xi | Y=y).
- "Bag of words" model: the order of the words on the page is ignored.
- Sounds really silly, but often works very well!
- Example sentence: "When the lecture is over, remember to wake up the person sitting next to you in the lecture room."
- The same sentence as a bag of words: "in is lecture lecture next over person remember room sitting the the the to to up wake when you"

Bag of Words approach
- Represent each document by the count of each vocabulary word it contains, e.g.: aardvark 0, about 2, all 2, Africa 1, apple 0, anxious 0, ..., gas 1, ..., oil 1, ..., Zaire 0.

NB with Bag of Words for text classification
- Learning phase:
  - Prior P(Y): count how many documents you have from each topic (+ prior).
  - P(Xi | Y): for each topic, count how many times you saw each word in documents of that topic (+ prior).
- Test phase:
  - For each document, use the naïve Bayes decision rule.

Twenty Newsgroups results [figure]

Learning curve for Twenty Newsgroups [figure]

What if we have continuous Xi?
- E.g., character recognition: Xi is the intensity at the ith pixel.
- Gaussian Naïve Bayes (GNB): Xi | Y=y_k ~ N(mu_ik, sigma_ik^2).
- Sometimes we assume the variance is independent of Y (i.e., sigma_i), or independent of Xi (i.e., sigma_k), or both (i.e., sigma).

Estimating parameters: Y discrete, Xi continuous
- Maximum likelihood estimates, where x^j is the jth training example and delta(x) = 1 if x is true, else 0 (see the code sketch at the end of this section):
  mu_ik = sum_j X_i^j delta(Y^j = y_k) / sum_j delta(Y^j = y_k)
  sigma_ik^2 = sum_j (X_i^j - mu_ik)^2 delta(Y^j = y_k) / sum_j delta(Y^j = y_k)

Example: GNB for classifying mental states [Mitchell et al.]
- fMRI data: ~1 mm resolution, ~2 images per second, 15,000 voxels per image; non-invasive, safe.
- Measures the Blood Oxygen Level Dependent (BOLD) response; typical impulse response lasts about 10 seconds.

Brain scans can track activation with precision and sensitivity [Mitchell et al.] [figure]

Gaussian Naïve Bayes: learned per-voxel, per-word models P(BrainActivity | WordCategory), WordCategory in {People, Animal} [Mitchell et al.] [figure]

Learned Bayes models: means for P(BrainActivity | WordCategory) [Mitchell et al.] [figure]
- Pairwise classification accuracy: 85% (People words vs. Animal words).

What you need to know about Naïve Bayes
- Types of learning problems: learning is (just) function approximation!
- Optimal decision using the Bayes classifier.
- Naïve Bayes classifier: What's the assumption? Why do we use it? How do we learn it? Why is Bayesian estimation important?
- Text classification: bag of words model.
- Gaussian NB: features are still conditionally independent; each feature has a Gaussian distribution given the class.

Generative v. Discriminative classifiers - Intuition
- Want to learn: h: X -> Y, where X are the features and Y is the set of target classes.
- Bayes optimal ...
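Returning to the Gaussian NB slides above, here is a minimal sketch, not from the lecture, of the maximum-likelihood estimates mu_ik and sigma_ik^2 and of classification by log prior plus Gaussian log-likelihood. Function names and the tiny dataset are made up, and a small variance floor is added purely to keep the example numerically safe.

```python
import numpy as np

def train_gnb(X, y):
    """ML estimates: for each class k and feature i, mu_ik is the class mean of
    feature i and sigma_ik^2 is the class variance of feature i."""
    classes = np.unique(y)
    params = {}
    for c in classes:
        Xc = X[y == c]
        params[c] = {
            "log_prior": np.log(len(Xc) / len(X)),
            "mu": Xc.mean(axis=0),
            "var": Xc.var(axis=0) + 1e-9,   # tiny floor to avoid division by zero
        }
    return params

def predict_gnb(x, params):
    """Decision rule: argmax_y  log P(y) + sum_i log N(x_i; mu_iy, sigma_iy^2)."""
    best_c, best_score = None, -np.inf
    for c, p in params.items():
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * p["var"])
                                + (x - p["mu"]) ** 2 / p["var"])
        score = p["log_prior"] + log_lik
        if score > best_score:
            best_c, best_score = c, score
    return best_c

# tiny usage example with made-up 2-feature data
X = np.array([[1.0, 2.1], [0.9, 1.9], [3.0, 0.2], [3.2, 0.0]])
y = np.array([0, 0, 1, 1])
params = train_gnb(X, y)
print(predict_gnb(np.array([1.1, 2.0]), params))   # -> 0
```

Tying the variance across classes (sigma_i), across features (sigma_k), or both, as mentioned on the GNB slide, simply means pooling the corresponding residuals before computing the variance.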