CS 188: Artificial Intelligence
Fall 2009
Lecture 23: Perceptrons
11/17/2009
Dan Klein – UC Berkeley

Contents: Announcements; Recap: General Naïve Bayes; Example Naïve Bayes Models; Naïve Bayes Training; Recap: Laplace Smoothing; Better: Linear Interpolation; Real NB: Smoothing; Tuning on Held-Out Data; Confidences from a Classifier; Naïve Bayes Summary; What to Do About Errors; Feature Extractors; Generative vs. Discriminative; Some (Simplified) Biology; Linear Classifiers; Example: Spam; Binary Decision Rule; Binary Perceptron Update; Multiclass Decision Rule; Example; The Perceptron Update Rule; Examples: Perceptron; Mistake-Driven Classification; Properties of Perceptrons; Issues with Perceptrons

Announcements
- Project 4: due Thursday!
- Final Contest: qualifications are on!
  - P5 will be due late enough to give you plenty of contest time

Recap: General Naïve Bayes
- A general naïve Bayes model:
  - Y: label to be predicted
  - F1, ..., Fn: features of each instance
- [Figure: Bayes net with label Y as the parent of features F1, F2, ..., Fn]

Example Naïve Bayes Models
- Bag-of-words for text
  - One feature for every word position in the document
  - All features share the same conditional distributions
  - Maximum likelihood estimates: word frequencies, by label
  - [Figure: Y as parent of word features W1, W2, ..., Wn]
- Pixels for images
  - One feature for every pixel, indicating whether it is on (black)
  - Each pixel has a different conditional distribution
  - Maximum likelihood estimates: how often a pixel is on, by label
  - [Figure: Y as parent of pixel features F0,0, F0,1, ..., Fn,n]

Naïve Bayes Training
- Data: labeled instances, e.g. emails marked as spam/ham by a person
  - Divide into training, held-out, and test sets
  - Features are known for every training, held-out, and test instance
- Estimation: count feature values in the training set and normalize to get maximum likelihood estimates of probabilities
- Smoothing (aka regularization): adjust estimates to account for unseen data

Recap: Laplace Smoothing
- Laplace's estimate (extended): pretend you saw every outcome k extra times
  - What's Laplace with k = 0?
  - k is the strength of the prior
- Laplace for conditionals: smooth each condition
  - Can be derived by dividing
- [Example: coin flips H H T]

Better: Linear Interpolation
- Linear interpolation for conditional likelihoods
- Idea: the conditional probability of a feature x given a label y should be close to the marginal probability of x
- Example: a rare word like "interpolation" should be similarly rare in both ham and spam (a priori)
- Procedure: collect relative frequency estimates of both conditional and marginal, then average
- Effect: features have odds ratios closer to 1

Real NB: Smoothing
- Odds ratios without smoothing:
  south-west : inf
  nation : inf
  morally : inf
  nicely : inf
  extent : inf
  ...
  screens : inf
  minute : inf
  guaranteed : inf
  $205.00 : inf
  delivery : inf
  ...

Real NB: Smoothing
- Odds ratios after smoothing:
  helvetica : 11.4
  seems : 10.8
  group : 10.2
  ago : 8.4
  areas : 8.3
  ...
  verdana : 28.8
  Credit : 28.4
  ORDER : 27.2
  <FONT> : 26.9
  money : 26.5
  ...
- Do these make more sense?

Tuning on Held-Out Data
- Now we've got two kinds of unknowns
  - Parameters: P(Fi|Y) and P(Y)
  - Hyperparameters, like the amount of smoothing to do: k, and the proportion of P_ML(x) in P(x|y)
- Where to learn which unknowns
  - Learn parameters from the training set
  - Can't tune hyperparameters on training data (why?)
  - For each possible value of the hyperparameters, train and test on the held-out data
  - Choose the best value and do a final test on the test data
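A minimal sketch of the two smoothing schemes above (the function names and dictionary representation are illustrative, not from the course projects): Laplace smoothing with strength k pretends every outcome was seen k extra times, so P_LAP,k(x) = (count(x) + k) / (N + k|X|), and linear interpolation averages the conditional relative frequency with the marginal one. Both k and the interpolation weight are hyperparameters, which is why they get tuned on held-out data rather than on the training set.

```python
from collections import Counter

def laplace_estimate(counts, vocab, k=1.0):
    # P_LAP,k(x) = (count(x) + k) / (N + k*|X|): add k pseudo-counts to every outcome.
    total = sum(counts[x] for x in vocab)
    return {x: (counts[x] + k) / (total + k * len(vocab)) for x in vocab}

def interpolated_estimate(cond_counts, marg_counts, vocab, alpha=0.5):
    # Mix the conditional relative frequency with the marginal one:
    # P(x|y) = alpha * P_ML(x|y) + (1 - alpha) * P_ML(x), pulling odds ratios toward 1.
    cond_total = sum(cond_counts[x] for x in vocab)
    marg_total = sum(marg_counts[x] for x in vocab)
    return {x: alpha * cond_counts[x] / cond_total
               + (1 - alpha) * marg_counts[x] / marg_total
            for x in vocab}

# Coin example from the Laplace slide: observed flips H H T.
flips = Counter({'H': 2, 'T': 1})
print(laplace_estimate(flips, ['H', 'T'], k=0))  # k = 0 is the ML estimate: H -> 2/3, T -> 1/3
print(laplace_estimate(flips, ['H', 'T'], k=1))  # k = 1 smooths toward uniform: H -> 3/5, T -> 2/5
```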
Confidences from a Classifier
- The confidence of a classifier:
  - Posterior of the most likely label
  - Represents how sure the classifier is of the classification
  - Any probabilistic model will have confidences
  - No guarantee confidence is correct
- Calibration
  - Strong calibration: confidence predicts accuracy rate
  - Weak calibration: higher confidences mean higher accuracy
  - What's the value of calibration?

Naïve Bayes Summary
- Bayes rule lets us do diagnostic queries with causal probabilities
- The naïve Bayes assumption takes all features to be independent given the class label
- We can build classifiers out of a naïve Bayes model using training data
- Smoothing estimates is important in real systems
- Confidences are useful when the classifier is calibrated

What to Do About Errors
- Problem: there's still spam in your inbox
- Need more features – words aren't enough!
  - Have you emailed the sender before?
  - Have 1K other people just gotten the same email?
  - Is the sending information consistent?
  - Is the email in ALL CAPS?
  - Do inline URLs point where they say they point?
  - Does the email address you by (your) name?
- Naïve Bayes models can incorporate a variety of features, but tend to do best in homogeneous cases (e.g. all features are word occurrences)

Feature Extractors
- Features: anything you can compute about the input
- A feature extractor maps inputs to feature vectors
- Many classifiers take feature vectors as inputs
- Feature vectors are usually very sparse, so use sparse encodings (i.e. only represent non-zero keys)
- Example input: "Dear Sir. First, I must solicit your confidence in this transaction, this is by virture of its nature as being utterly confidencial and top secret. …"
- Extracted feature vector:
  W=dear : 1
  W=sir : 1
  W=this : 2
  ...
  W=wish : 0
  ...
  MISSPELLED : 2
  YOUR_NAME : 1
  ALL_CAPS : 0
  NUM_URLS : 0
  ...

Generative vs. Discriminative
- Generative classifiers:
  - E.g. naïve Bayes
  - A causal model with evidence variables
  - Query model for causes given evidence
- Discriminative classifiers:
  - No causal model, no Bayes rule, often no probabilities at all!
  - Try to predict the label Y directly from X
  - Robust, accurate with varied features
  - Loosely: mistake driven rather than model driven

Some (Simplified) Biology
- Very loose inspiration: human neurons

Linear Classifiers
- Inputs are feature values
- Each feature has a weight
- Sum is the activation
- If the activation is:
  - Positive, output +1
  - Negative, output -1
- [Diagram: inputs f1, f2, f3 weighted by w1, w2, w3, summed and compared to 0]

Example: Spam
- Imagine 4 features (spam is the "positive" class):
  - free (number of occurrences of "free")
  - money (occurrences of "money")
  - BIAS (intercept, always has value 1)
- Weight vector:
  BIAS : -3
  free : 4
  money : 2
  ...
- Feature vector for "free money":
  BIAS : 1
  free : 1
  money : 1
  ...

Binary Decision Rule
- In the space of feature vectors
  - Examples are points
  - Any weight vector is a hyperplane
  - One side corresponds to Y = +1
  - Other corresponds to Y = -1
- [Plot: the weight vector (BIAS : -3, free : 4, money : 2) as a decision boundary in the (free, money) plane; the +1 side is SPAM, the -1 side is HAM]

Binary Perceptron Update
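A minimal sketch of the binary decision rule and the mistake-driven update on the spam example above (the sparse-dictionary representation and function names are illustrative; the update w <- w + y*f(x) on mistakes is the standard binary perceptron rule):

```python
def activation(weights, features):
    # Activation = sum over features of weight * value (a sparse dot product).
    return sum(weights.get(f, 0.0) * v for f, v in features.items())

def classify(weights, features):
    # Binary decision rule: +1 (SPAM) if the activation is positive, else -1 (HAM).
    return +1 if activation(weights, features) > 0 else -1

def perceptron_update(weights, features, label):
    # Mistake-driven update: if the prediction is wrong, move w by label * f(x).
    if classify(weights, features) != label:
        for f, v in features.items():
            weights[f] = weights.get(f, 0.0) + label * v

# Weights and the "free money" feature vector from the Example: Spam slide.
w = {'BIAS': -3.0, 'free': 4.0, 'money': 2.0}
f = {'BIAS': 1.0, 'free': 1.0, 'money': 1.0}
print(activation(w, f))  # -3 + 4 + 2 = 3
print(classify(w, f))    # 3 > 0, so +1: classified as SPAM
```

If the true label for this email were -1 (ham), perceptron_update would subtract its feature vector from the weights, nudging the hyperplane away from the mistake.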