CS 188: Artificial Intelligence
Fall 2006
Lecture 22: Naïve Bayes
11/14/2006
Dan Klein – UC Berkeley

Announcements
- Optional midterm
  - On Tuesday 11/21 in class
  - Review session 11/19, 7-9pm, in 306 Soda
- Projects
  - 3.3 due 11/15
  - 3.4 due 11/27
  - Contest details on web!

Machine Learning
- Up till now: how to reason or make decisions using a model
- Machine learning: how to select a model on the basis of data / experience
  - Learning parameters (e.g. probabilities)
  - Learning structure (e.g. BN graphs)
  - Learning hidden concepts (e.g. clustering)

Classification
- In classification, we learn to predict labels (classes) for inputs
- Examples:
  - Spam detection (input: document, classes: spam / ham)
  - OCR (input: images, classes: characters)
  - Medical diagnosis (input: symptoms, classes: diseases)
  - Automatic essay grading (input: document, classes: grades)
  - Fraud detection (input: account activity, classes: fraud / no fraud)
  - Customer service email routing
  - ... many more
- Classification is an important commercial technology!

Classification
- Data:
  - Inputs x, class labels y
  - We imagine that x is something with a lot of structure, like an image or a document
  - In the basic case, y is a simple N-way choice
- Basic setup:
  - Training data: D, a set of <x, y> pairs
  - Feature extractors: functions f_i which provide attributes of an example x
  - Test data: more x's, for which we must predict y's
  - During development we actually know the y's, so we can check how well we're doing; when we deploy the system, we don't

Bayes Nets for Classification
- One method of classification:
  - Features are values for observed variables
  - Y is a query variable
  - Use probabilistic inference to compute the most likely Y
- You already know how to do this inference

Simple Classification
- Simple example: two binary features S and F with a class variable M
  [figure: Bayes net with M as the parent of S and F]
- Direct estimate: estimate P(M | S, F) from counts
- Bayes estimate (no assumptions): P(M | S, F) = P(S, F | M) P(M) / P(S, F)
- Conditional independence: P(S, F | M) = P(S | M) P(F | M)
- This is a naïve Bayes model

General Naïve Bayes
- A general naïve Bayes model:
  P(C, E_1, ..., E_n) = P(C) ∏_i P(E_i | C)
  [figure: class node C with children E_1, E_2, ..., E_n]
- We only specify how each feature depends on the class:
  - P(C): |C| parameters
  - P(E_i | C) for each of the n features: n × |E| × |C| parameters
  - (compare |C| × |E|^n parameters for the full joint)
- Total number of parameters is linear in n

Inference for Naïve Bayes
- Goal: compute the posterior over causes
- Step 1: get the joint probability of causes and evidence:
  P(C, e) = P(C) ∏_i P(e_i | C)
- Step 2: get the probability of the evidence:
  P(e) = Σ_c P(c, e)
- Step 3: renormalize:
  P(C | e) = P(C, e) / P(e)
- A short code sketch of these three steps follows
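These three steps are mechanical, so here is a minimal sketch in code. It is not the course's implementation: the function name `posterior`, the dictionary encoding of the CPTs, the assumption of binary (on/off) features, and the numbers in the example are all illustrative.

```python
# Minimal naive Bayes inference sketch (illustrative; binary features assumed).

def posterior(prior, cpts, evidence):
    """Compute P(C | e) for a naive Bayes model.
    prior:    {class: P(C = class)}
    cpts:     {feature: {class: P(feature = on | class)}}
    evidence: {feature: True/False}, the observed feature values
    """
    # Step 1: joint probability of each cause with the evidence,
    #         P(C, e) = P(C) * prod_i P(e_i | C)
    joint = {}
    for c, p_c in prior.items():
        p = p_c
        for f, on in evidence.items():
            p_on = cpts[f][c]
            p *= p_on if on else (1.0 - p_on)
        joint[c] = p
    # Step 2: probability of the evidence, P(e) = sum_c P(c, e)
    p_e = sum(joint.values())
    # Step 3: renormalize, P(C | e) = P(C, e) / P(e)
    return {c: p / p_e for c, p in joint.items()}

# Two binary features, as in the simple classification example (made-up numbers):
prior = {"+m": 0.1, "-m": 0.9}
cpts = {"S": {"+m": 0.8, "-m": 0.1},
        "F": {"+m": 0.7, "-m": 0.2}}
print(posterior(prior, cpts, {"S": True, "F": True}))
```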
General Naïve Bayes
- What do we need in order to use naïve Bayes?
  - Some code to do the inference (you know this part; it is sketched above)
    - For fixed evidence, build P(C, e)
    - Sum out C to get P(e)
    - Divide to get P(C | e)
  - Estimates of local conditional probability tables
    - P(C), the prior over causes
    - P(E | C) for each evidence variable
- These probabilities are collectively called the parameters of the model and denoted by θ
- They typically come from observed data: we'll look at this now

A Digit Recognizer
- Input: pixel grids
- Output: a digit 0-9

Naïve Bayes for Digits
- Simple version:
  - One feature F_ij for each grid position <i, j>
  - Feature values are on / off, based on whether the intensity is more or less than 0.5
  - Input looks like: [image: an example digit as a grid of on/off pixels]
- Naïve Bayes model:
  P(Y, all F_ij) = P(Y) ∏_{i,j} P(F_ij | Y)
- What do we need to learn? P(Y) and each P(F_ij | Y)

Examples: CPTs
- P(Y), plus P(F = on | Y) for two example pixel positions, a and b:

  Y   P(Y)   P(F_a = on | Y)   P(F_b = on | Y)
  1   0.1    0.01              0.05
  2   0.1    0.05              0.01
  3   0.1    0.05              0.90
  4   0.1    0.30              0.80
  5   0.1    0.80              0.90
  6   0.1    0.90              0.90
  7   0.1    0.05              0.25
  8   0.1    0.60              0.85
  9   0.1    0.50              0.60
  0   0.1    0.80              0.80

Parameter Estimation
- Estimating the distribution of a random variable X or X | Y
- Empirically: use training data
  - For each value x, look at the empirical rate of that value:
    P_ML(x) = count(x) / total number of samples
  - e.g. from the sample sequence r, g, g: P(r) = 1/3, P(g) = 2/3
  - This estimate maximizes the likelihood of the data
- Elicitation: ask a human!
  - Usually need domain experts, and sophisticated ways of eliciting probabilities (e.g. betting games)
  - Trouble calibrating

A Spam Filter
- Naïve Bayes spam filter
- Data:
  - Collection of emails, labeled spam or ham
  - Note: someone has to hand-label all this data!
  - Split into training, held-out, and test sets
- Classifiers:
  - Learn on the training set
  - (Tune it on a held-out set)
  - Test it on new emails
- Example spam:
    Dear Sir. First, I must solicit your confidence in this transaction, this is by virture of its nature as being utterly confidencial and top secret. ...
    TO BE REMOVED FROM FUTURE MAILINGS, SIMPLY REPLY TO THIS MESSAGE AND PUT "REMOVE" IN THE SUBJECT. 99 MILLION EMAIL ADDRESSES FOR ONLY $99
- Example ham:
    Ok, I know this is blatantly OT but I'm beginning to go insane. Had an old Dell Dimension XPS sitting in the corner and decided to put it to use, I know it was working pre being stuck in the corner, but when I plugged it in, hit the power nothing happened.

Naïve Bayes for Text
- Naïve Bayes:
  - Predict unknown cause (spam vs. ham)
  - Independent evidence from observed variables (e.g. the words)
  - Generative model*
- Tied distributions and bag-of-words:
  - Usually, each variable gets its own conditional probability distribution
  - In a bag-of-words model:
    - Each position is identically distributed
    - All positions share the same conditional distributions
    - W_i is the word at position i, not the ith word in the dictionary
  - Why make this assumption?
- *Minor detail: technically we're conditioning on the length of the document here

Example: Spam Filtering
- Model: P(C, W_1, ..., W_n) = P(C) ∏_i P(W_i | C)
- What are the parameters?

  P(C)           P(W | spam)       P(W | ham)
  ham : 0.66     the : 0.0156      the : 0.0210
  spam: 0.33     to  : 0.0153      to  : 0.0133
                 and : 0.0115      of  : 0.0119
                 of  : 0.0095      2002: 0.0110
                 you : 0.0093      with: 0.0108
                 a   : 0.0086      from: 0.0107
                 with: 0.0080      and : 0.0105
                 from: 0.0075      a   : 0.0100
                 ...               ...

- Where do these tables come from?

Spam Example
- The "Tot" columns are running totals of natural-log probabilities, log P(C) + Σ_i log P(w_i | C):

  Word      P(w|spam)   P(w|ham)   Tot Spam   Tot Ham
  (prior)   0.33333     0.66666    -1.1       -0.4
  Gary      0.00002     0.00021    -11.8      -8.9
  would     0.00069     0.00084    -19.1      -16.0
  you       0.00881     0.00304    -23.8      -21.8
  like      0.00086     0.00083    -30.9      -28.9
  to        0.01517     0.01339    -35.1      -33.2
  lose      0.00008     0.00002    -44.5      -44.0
  weight    0.00016     0.00002    -53.3      -55.0
  while     0.00027     0.00027    -61.5      -63.2
  you       0.00881     0.00304    -66.2      -69.0
  sleep     0.00006     0.00001    -76.0      -80.5

- P(spam | w) = e^(-76.0) / (e^(-76.0) + e^(-80.5)) ≈ 0.989
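Both pieces above — counting to get the tables, and summing log probabilities to score a message — fit in a short sketch. This is not the lecture's code: the function names, the Counter-based estimator, and the max-subtraction stability trick are illustrative assumptions, and unseen words are deliberately left unhandled (real filters smooth the estimates).

```python
import math
from collections import Counter

def ml_estimate(docs):
    """Maximum-likelihood word distribution: P_ML(w) = count(w) / total count.
    docs is a list of tokenized documents (lists of words) from one class."""
    counts = Counter(w for doc in docs for w in doc)
    total = sum(counts.values())
    return {w: n / total for w, n in counts.items()}

def log_score(words, prior, cond):
    """log P(class) + sum_i log P(w_i | class), i.e. the running
    "Tot Spam" / "Tot Ham" totals in the table above."""
    total = math.log(prior)
    for w in words:
        # KeyError / log(0) for words unseen in training: needs smoothing
        total += math.log(cond[w])
    return total

def p_spam(words, priors, conds):
    """Turn the two log scores back into P(spam | w)."""
    s = log_score(words, priors["spam"], conds["spam"])
    h = log_score(words, priors["ham"], conds["ham"])
    m = max(s, h)  # subtract the max before exponentiating, for numerical stability
    e_s, e_h = math.exp(s - m), math.exp(h - m)
    return e_s / (e_s + e_h)

# With the table's final totals, -76.0 (spam) vs. -80.5 (ham), this
# renormalization gives 1 / (1 + e**(-4.5)), about 0.989.
```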