CS 188: Artificial Intelligence
Fall 2010
Lecture 22: Naïve Bayes
11/16/2010
Dan Klein – UC Berkeley

Announcements
- Assignments:
  - P3 in glookup
  - W3 (shortened) is up, due 11/23
  - P5 will be out later this week
- Contest status: Rank page! Achievements page! Minor tweaks?

Survey Responses
- Most favorite aspects: projects, demos, lectures
- Least favorite aspects: writtens, sections, exams
- Specific things:
  - Writtens: fewer, smaller writtens?
  - Writtens: writtens more like the exams?
  - Sections: positive comments about how, mixed comments about what
  - Sections: handouts merging with writtens?
  - Midterm: "hard", "fair", "long"; compare to previous semesters?
  - Webcast frame rate: "can't see demos at 1 fps"
  - Readings: mixed, "there are readings?"
  - Office hours: "don't usually go, but helpful when I do"
  - Grading scales, etc.?

New Proposals
- Change lecture format to mini-vids?
  - Mixed reaction, more negative than positive; worry about whether people would actually watch the prep videos
  - "doesn't sound like it'd work out very well"
  - "HORRIBLE! Noooo!"
  - "That sounds pretty awesome."
- Multiple section types? The more the better (28), only one (29), other answers (15)

Example: Spam Filter
- Input: email
- Output: spam/ham
- Setup:
  - Get a large collection of example emails, each labeled "spam" or "ham"
  - Note: someone has to hand-label all this data!
  - Want to learn to predict labels of new, future emails
- Features: the attributes used to make the ham/spam decision
  - Words: FREE!
  - Text patterns: $dd, CAPS
  - Non-text: SenderInContacts
  - …
- Example emails:
  - "Dear Sir. First, I must solicit your confidence in this transaction, this is by virture of its nature as being utterly confidencial and top secret. …"
  - "TO BE REMOVED FROM FUTURE MAILINGS, SIMPLY REPLY TO THIS MESSAGE AND PUT "REMOVE" IN THE SUBJECT. 99 MILLION EMAIL ADDRESSES FOR ONLY $99"
  - "Ok, I know this is blatantly OT but I'm beginning to go insane. Had an old Dell Dimension XPS sitting in the corner and decided to put it to use, I know it was working pre being stuck in the corner, but when I plugged it in, hit the power nothing happened."

Example: Digit Recognition
- Input: images / pixel grids
- Output: a digit 0-9
- Setup:
  - Get a large collection of example images, each labeled with a digit
  - Note: someone has to hand-label all this data!
  - Want to learn to predict labels of new, future digit images
- Features: the attributes used to make the digit decision
  - Pixels: (6,8) = ON
  - Shape patterns: NumComponents, AspectRatio, NumLoops
  - …
- (Slide shows example digit images labeled 0, 1, 2, 1, ??)

A Digit Recognizer
- Input: pixel grids
- Output: a digit 0-9

Naïve Bayes for Digits
- Simple version:
  - One feature F_{i,j} for each grid position <i,j>
  - Possible feature values are on/off, based on whether intensity is more or less than 0.5 in the underlying image
  - Each input image maps to a feature vector of on/off values
  - Here: lots of features, each binary-valued
- Naïve Bayes model: the joint over the class and all pixel features factors as P(Y) ∏_{i,j} P(F_{i,j} | Y)
- What do we need to learn?

General Naïve Bayes
- A general naive Bayes model over a class Y and features F_1, …, F_n:
  P(Y, F_1, …, F_n) = P(Y) ∏_i P(F_i | Y)
- (Bayes net: Y is the single parent of each feature F_1, F_2, …, F_n)
- We only specify how each feature depends on the class
- Total number of parameters is linear in n:
  - |Y| parameters for P(Y)
  - n × |F| × |Y| parameters for the P(F_i | Y) tables
  - versus |Y| × |F|^n parameters for the full joint distribution

Inference for Naïve Bayes
- Goal: compute the posterior over causes (the class) given the evidence (the features)
- Step 1: get the joint probability of each cause with the evidence, P(y, f_1, …, f_n) = P(y) ∏_i P(f_i | y)
- Step 2: get the probability of the evidence by summing over causes
- Step 3: renormalize to obtain P(Y | f_1, …, f_n)
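To make the three steps concrete, here is a minimal Python sketch of the computation. The table layout, function name, and numbers are illustrative assumptions, not from the lecture or the course projects.

# Minimal naive Bayes inference sketch (hypothetical tables, not course code).
def naive_bayes_posterior(prior, conditionals, evidence):
    """prior: {label: P(Y=label)}
    conditionals: one table per feature, each {(value, label): P(F_i=value | Y=label)}
    evidence: observed feature values, one per feature."""
    # Step 1: joint P(y, f_1, ..., f_n) = P(y) * prod_i P(f_i | y) for each label y
    joint = {}
    for y, p_y in prior.items():
        p = p_y
        for table, f in zip(conditionals, evidence):
            p *= table[(f, y)]
        joint[y] = p
    # Step 2: probability of the evidence (sum over labels)
    p_evidence = sum(joint.values())
    # Step 3: renormalize to get the posterior
    return {y: p / p_evidence for y, p in joint.items()}

# Toy two-class example with two binary features (made-up numbers):
prior = {"spam": 0.5, "ham": 0.5}
cond = [{("on", "spam"): 0.8, ("off", "spam"): 0.2, ("on", "ham"): 0.1, ("off", "ham"): 0.9},
        {("on", "spam"): 0.3, ("off", "spam"): 0.7, ("on", "ham"): 0.4, ("off", "ham"): 0.6}]
print(naive_bayes_posterior(prior, cond, ["on", "off"]))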
General Naïve Bayes
- What do we need in order to use naïve Bayes?
- Inference (you know this part):
  - Start with a bunch of conditionals: P(Y) and the P(F_i | Y) tables
  - Use standard inference to compute P(Y | F_1 … F_n)
  - Nothing new here
- Estimates of local conditional probability tables:
  - P(Y), the prior over labels
  - P(F_i | Y) for each feature (evidence variable)
  - These probabilities are collectively called the parameters of the model and denoted by θ
  - Up until now, we assumed these appeared by magic, but…
  - …they typically come from training data: we'll look at this now

Examples: CPTs
- P(Y), the prior over digits:
    1: 0.1   2: 0.1   3: 0.1   4: 0.1   5: 0.1
    6: 0.1   7: 0.1   8: 0.1   9: 0.1   0: 0.1
- P(F = on | Y) for one pixel feature:
    1: 0.01  2: 0.05  3: 0.05  4: 0.30  5: 0.80
    6: 0.90  7: 0.05  8: 0.60  9: 0.50  0: 0.80
- P(F = on | Y) for another pixel feature:
    1: 0.05  2: 0.01  3: 0.90  4: 0.80  5: 0.90
    6: 0.90  7: 0.25  8: 0.85  9: 0.60  0: 0.80

Important Concepts
- Data: labeled instances, e.g. emails marked spam/ham
  - Training set
  - Held-out set
  - Test set
- Features: attribute-value pairs which characterize each x
- Experimentation cycle:
  - Learn parameters (e.g. model probabilities) on the training set
  - (Tune hyperparameters on the held-out set)
  - Compute accuracy on the test set
  - Very important: never "peek" at the test set!
- Evaluation:
  - Accuracy: fraction of instances predicted correctly
- Overfitting and generalization:
  - Want a classifier which does well on test data
  - Overfitting: fitting the training data very closely, but not generalizing well
  - We'll investigate overfitting and generalization formally in a few lectures

A Spam Filter
- Naïve Bayes spam filter
- Data:
  - Collection of emails, labeled spam or ham
  - Note: someone has to hand-label all this data!
  - Split into training, held-out, and test sets
- Classifiers:
  - Learn on the training set
  - (Tune it on a held-out set)
  - Test it on new emails

Naïve Bayes for Text
- Bag-of-words Naïve Bayes:
  - Predict an unknown class label (spam vs. ham)
  - Assume evidence features (e.g. the words) are independent
  - Warning: subtly different assumptions than before!
- Generative model: P(C, W_1, …, W_n) = P(C) ∏_i P(W_i | C)
  - Note: W_i is the word at position i, not the i-th word in the dictionary!
- Tied distributions and bag-of-words:
  - Usually, each variable gets its own conditional probability distribution P(F | Y)
  - In a bag-of-words model:
    - Each position is identically distributed
    - All positions share the same conditional probabilities P(W | C)
  - Why make this assumption?

Example: Spam Filtering
- Model: P(C) ∏_i P(W_i | C)
- What are the parameters? Where do these tables come from?
- Excerpt of a learned word table: the: 0.0156, to: 0.0153, and: …
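As a rough illustration of where such tables come from, here is a minimal Python sketch that estimates P(C) and the shared bag-of-words table P(W | C) by counting over a tiny hand-made training set, then classifies with log probabilities. All function names and data are hypothetical; a real filter would need proper smoothing for unseen words, and the tiny floor probability below only stands in for that.

# Minimal bag-of-words naive Bayes sketch (illustrative, not the course's project code).
import math
from collections import Counter

def train(labeled_emails):
    """labeled_emails: list of (list_of_words, label) pairs.
    Returns the prior P(C) and the shared table P(W | C) as plain dicts."""
    label_counts = Counter(label for _, label in labeled_emails)
    word_counts = {label: Counter() for label in label_counts}
    for words, label in labeled_emails:
        word_counts[label].update(words)
    total = sum(label_counts.values())
    prior = {c: n / total for c, n in label_counts.items()}
    # Relative frequency of each word among all word positions of class c
    cond = {c: {w: n / sum(counts.values()) for w, n in counts.items()}
            for c, counts in word_counts.items()}
    return prior, cond

def classify(words, prior, cond, unseen=1e-6):
    """Pick the label maximizing log P(C) + sum_i log P(w_i | C)."""
    scores = {c: math.log(prior[c]) + sum(math.log(cond[c].get(w, unseen)) for w in words)
              for c in prior}
    return max(scores, key=scores.get)

# Toy training set (made up):
data = [(["free", "money", "now"], "spam"),
        (["meeting", "tomorrow", "at", "noon"], "ham")]
prior, cond = train(data)
print(classify(["free", "meeting"], prior, cond))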