CS 188: Artificial Intelligence, Fall 2007
Lecture 26: Kernels
11/29/2007
Dan Klein – UC Berkeley

Slide titles: Feature Extractors · The Perceptron Update Rule · Nearest-Neighbor Classification · Basic Similarity · Invariant Metrics · Rotation Invariant Metrics · Template Deformation · A Tale of Two Approaches… · The Perceptron, Again · Perceptron Weights · Dual Perceptron · Kernelized Perceptron · Kernelized Perceptron Structure · Kernels: Who Cares? · Properties of Perceptrons · Non-Linear Separators · Some Kernels · Recap: Classification · Clustering · K-Means · K-Means Example · K-Means as Optimization · Phase I: Update Assignments · Phase II: Update Means · Initialization · K-Means Getting Stuck · K-Means Questions · Clustering for Segmentation · Representing Pixels · K-Means Segmentation · Other Uses of K-Means · Agglomerative Clustering · Back to Similarity · Collaborative Filtering

Feature Extractors
- A feature extractor maps inputs to feature vectors
- Many classifiers take feature vectors as inputs
- Feature vectors are usually very sparse, so use sparse encodings (i.e. only represent the non-zero keys; a small extractor is sketched in code below)
- Example input (spam email): "Dear Sir. First, I must solicit your confidence in this transaction, this is by virture of its nature as being utterly confidencial and top secret. …"
- Example feature vector: W=dear : 1, W=sir : 1, W=this : 2, ..., W=wish : 0, ..., MISSPELLED : 2, NAMELESS : 1, ALL_CAPS : 0, NUM_URLS : 0, ...

The Perceptron Update Rule
- Start with zero weights
- Pick up training instances one by one
- Try to classify each one
- If correct, no change!
- If wrong: lower the score of the wrong answer, raise the score of the right answer

Nearest-Neighbor Classification
- Nearest neighbor for digits:
  - Take a new image
  - Compare it to all training images
  - Assign the label of the closest example
- Encoding: an image is a vector of pixel intensities
- What's the similarity function?
  - Dot product of two image vectors?
  - Usually normalize the vectors so ||x|| = 1
  - min = 0 (when?), max = 1 (when?)

Basic Similarity
- Many similarities are based on feature dot products: K(x, x') = f(x) · f(x')
- If the features are just the pixels, this is simply the image dot product x · x' (sketched in code below)
- Note: not all similarities are of this form

Invariant Metrics
(This and the next few slides adapted from Xiao Hu, UIUC)
- Better distances use knowledge about vision
- Invariant metrics: similarities that are invariant under certain transformations: rotation, scaling, translation, stroke thickness, …
- E.g.: 16 x 16 = 256 pixels, so each image is a point in 256-dimensional space
- Two versions of the same digit that differ by a small rotation can have small similarity in R^256 (why?)
- How do we incorporate invariance into similarities?

Rotation Invariant Metrics
- Each example is now a curve in R^256 (the image traced through its rotations)
- Rotation-invariant similarity: s' = max s(r(x), r(x')), e.g. the highest similarity between the two images' rotation lines

Template Deformation
- Deformable templates:
  - An "ideal" version of each category
  - Best-fit to the image using min variance
  - Cost for high distortion of the template
  - Cost for image points being far from the distorted template
- Used in many commercial digit recognizers
- Examples from [Hastie 94]

A Tale of Two Approaches…
- Nearest-neighbor-like approaches:
  - Can use fancy kernels (similarity functions)
  - Don't actually get to do explicit learning
- Perceptron-like approaches:
  - Explicit training to reduce empirical error
  - Can't use fancy kernels (why not?)
  - Or can you? Let's find out!

The Perceptron, Again
- Start with zero weights
- Pick up training instances one by one
- Try to classify each one
- If correct, no change!
- If wrong: lower the score of the wrong answer, raise the score of the right answer
- (This update loop is sketched in code after the kernel slides below)

Perceptron Weights
- What is the final value of a weight vector w_c?
- Can it be any real vector? No! It's built by adding up training inputs: w_c = Σ_n α_{n,c} f(x_n), where the count α_{n,c} records the net updates instance n made to class c
- So we can reconstruct the weight vectors (the primal representation) from the update counts (the dual representation)

Dual Perceptron
- How do we classify a new example x? Expand the weights: score_c(x) = w_c · f(x) = Σ_n α_{n,c} f(x_n) · f(x) = Σ_n α_{n,c} K(x_n, x)
- If someone tells us the value of K for each pair of examples, we never need to build the weight vectors!

Dual Perceptron (update rule)
- Start with zero counts (alpha)
- Pick up training instances one by one
- Try to classify x_n
- If correct, no change!
- If wrong: lower the count of the wrong class (for this instance), raise the count of the right class (for this instance)

Kernelized Perceptron
- If we had a black box (a kernel) that told us the dot product of two examples x and y:
  - We could work entirely with the dual representation
  - We would never need to actually take dot products in feature space (the "kernel trick")
- Like nearest neighbor: work with black-box similarities
- Downside: slow if many examples get a non-zero alpha (a sketch of the dual training loop appears in code below)

Kernelized Perceptron Structure
(diagram)

Kernels: Who Cares?
- So far: a very strange way of doing a very simple calculation
- "Kernel trick": we can substitute any* similarity function in place of the dot product
- Lets us learn new kinds of hypotheses
- * Fine print: if your kernel doesn't satisfy certain technical requirements, lots of proofs break, e.g. convergence and mistake bounds. In practice, illegal kernels sometimes work (but not always).

Properties of Perceptrons
- Separability: some setting of the parameters gets the training set perfectly correct
- Convergence: if the training data are separable, the perceptron will eventually converge (binary case)
- Mistake bound: the maximum number of mistakes (binary case) is related to the margin, or degree of separability
(Figures: a separable dataset and a non-separable dataset)

Non-Linear Separators
- Data that is linearly separable (with some noise) works out great
- But what are we going to do if the dataset is just too hard?
- How about… mapping the data to a higher-dimensional space?
(Figure: 1-D data that is not separable along x becomes separable after mapping each point x to (x, x^2))
(This and the next few slides adapted from Ray Mooney, UT)

Non-Linear Separators (continued)
- General idea: the original feature space can always be mapped to some higher-dimensional feature space where the training set is separable:
- Φ: x → φ(x)

Some Kernels
- Kernels implicitly map the original vectors to a higher-dimensional space, take the dot product there, and hand the result back
- Linear kernel
- Quadratic kernel
- RBF kernel: an infinite-dimensional representation
- Discrete kernels: e.g. string kernels
- (Standard forms of the first three are sketched in code below)
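As a companion to the Feature Extractors slide, here is a minimal sketch of a bag-of-words extractor that produces the sparse W=word counts shown in the spam example. The function name extract_features, the crude whitespace tokenization, and the commented-out NUM_URLS line are illustrative assumptions, not part of the lecture.

```python
from collections import Counter

def extract_features(text):
    """Map an input message to a sparse feature vector: a dict that only
    stores the non-zero keys, as suggested on the Feature Extractors slide."""
    features = Counter()
    for token in text.lower().split():
        token = token.strip(".,!?")         # crude tokenization (assumption)
        if token:
            features["W=" + token] += 1
    # Non-lexical features could be added here too, e.g.
    # features["NUM_URLS"] = text.count("http")
    return dict(features)

# The spam snippet from the slide yields W=dear : 1, W=sir : 1, W=this : 2, ...
print(extract_features("Dear Sir. First, I must solicit your confidence in "
                       "this transaction, this is by virture of its nature "
                       "as being utterly confidencial and top secret."))
```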
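The Nearest-Neighbor Classification and Basic Similarity slides describe a dot-product similarity over normalized image vectors. Here is a sketch under that reading; the names cosine_similarity and nearest_neighbor_label are mine. With non-negative pixel intensities the value runs from 0 (no overlapping non-zero pixels) to 1 (identical up to scale), which is one way to answer the slide's min/max question.

```python
import math

def cosine_similarity(x, y):
    """Dot product of two image vectors after normalizing each to unit length.
    For non-negative intensities this is 0 when no non-zero pixels overlap
    and 1 when the images are identical up to scale."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)

def nearest_neighbor_label(train, x):
    """train: list of (image_vector, label) pairs. Compare the new image x
    to every training image and return the label of the most similar one."""
    best_image, best_label = max(train, key=lambda pair: cosine_similarity(pair[0], x))
    return best_label
```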
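The Perceptron Update Rule and The Perceptron, Again slides give the primal algorithm in words. Below is a minimal sketch of that loop over sparse feature dicts and per-class weight vectors; the names perceptron_train and score, and the fixed number of passes, are illustrative choices rather than anything specified in the lecture.

```python
from collections import defaultdict

def score(weights, features):
    """Dot product of a class's weight vector with a sparse feature dict."""
    return sum(weights[f] * v for f, v in features.items())

def perceptron_train(data, classes, passes=5):
    """data: list of (features_dict, true_label) pairs.
    Start with zero weights; pick up instances one by one; on a mistake,
    lower the wrong class's score and raise the right class's score."""
    weights = {c: defaultdict(float) for c in classes}
    for _ in range(passes):
        for features, y in data:
            guess = max(classes, key=lambda c: score(weights[c], features))
            if guess == y:
                continue                    # correct: no change
            for f, v in features.items():
                weights[guess][f] -= v      # lower score of wrong answer
                weights[y][f] += v          # raise score of right answer
    return weights
```

Feeding this the output of an extractor like extract_features above reproduces the pipeline the Feature Extractors slide describes.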
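The Dual Perceptron and Kernelized Perceptron slides say the same learner can be run with per-(instance, class) counts alpha and a black-box kernel K in place of explicit weight vectors. Here is a sketch under those assumptions; kernel_perceptron_train, dual_score, and kernel_perceptron_classify are hypothetical names, and the kernel argument can be any function of two examples.

```python
def dual_score(alphas, train_x, kernel, c, x):
    """Score of class c on example x using only kernel evaluations:
    sum over training instances n of alpha[n][c] * K(x_n, x)."""
    return sum(a[c] * kernel(xn, x) for xn, a in zip(train_x, alphas))

def kernel_perceptron_train(train_x, train_y, classes, kernel, passes=5):
    """Start with zero counts; on a mistake for instance n, lower the count
    of the wrong class and raise the count of the right class for n."""
    alphas = [{c: 0.0 for c in classes} for _ in train_x]
    for _ in range(passes):
        for n, (x, y) in enumerate(zip(train_x, train_y)):
            guess = max(classes,
                        key=lambda c: dual_score(alphas, train_x, kernel, c, x))
            if guess != y:
                alphas[n][guess] -= 1.0
                alphas[n][y] += 1.0
    return alphas

def kernel_perceptron_classify(alphas, train_x, classes, kernel, x):
    """Classify by picking the class with the highest kernelized score."""
    return max(classes, key=lambda c: dual_score(alphas, train_x, kernel, c, x))
```

Note the downside from the slide: every score touches all training examples with a non-zero alpha, so classification slows down as those counts fill in.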
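The Some Kernels slide names a linear, a quadratic, and an RBF kernel, but the formulas themselves did not survive extraction. The sketches below use standard textbook forms, which may differ in constants from the exact versions on the slide; any of them can be passed as the kernel argument of the dual perceptron above.

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def linear_kernel(x, y):
    """K(x, y) = x . y, the plain dot product."""
    return dot(x, y)

def quadratic_kernel(x, y):
    """K(x, y) = (x . y + 1)^2, which implicitly uses all products of up to
    two original features without ever constructing them."""
    return (dot(x, y) + 1.0) ** 2

def rbf_kernel(x, y, sigma=1.0):
    """K(x, y) = exp(-||x - y||^2 / (2 sigma^2)), corresponding to an
    infinite-dimensional feature representation."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))
```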
Recap: Classification
- Classification systems:
  - Supervised learning
  - Make a rational prediction given evidence
  - We've seen several methods for this
  - Useful when you have labeled data (or can get it)

Clustering
- Clustering systems:
  - Unsupervised learning
  - Detect patterns in unlabeled data
    - E.g. group emails or search results
    - E.g. find categories of customers
    - E.g. detect anomalous program executions
  - Useful when you don't know what you're looking for
  - Requires data, but no labels
  - Often get gibberish

Clustering
- Basic idea: group together similar instances
- Example: 2D point patterns
- What could …