Berkeley COMPSCI 188 - Lecture 25: Kernels

CS 188: Artificial Intelligence, Fall 2007
Lecture 25: Kernels
11/27/2007
Dan Klein – UC Berkeley

Outline:
- Feature Extractors
- The Binary Perceptron
- Example: Spam
- Binary Decision Rule
- The Multiclass Perceptron
- Example
- The Perceptron Update Rule
- Examples: Perceptron
- Mistake-Driven Classification
- Properties of Perceptrons
- Issues with Perceptrons
- Linear Separators
- Support Vector Machines
- Summary
- Similarity Functions
- Case-Based Reasoning
- Parametric / Non-parametric
- Nearest-Neighbor Classification
- Basic Similarity
- Invariant Metrics
- Rotation Invariant Metrics
- Tangent Families
- Template Deformation
- A Tale of Two Approaches…
- The Perceptron, Again
- Perceptron Weights
- Dual Perceptron
- Kernelized Perceptron
- Kernelized Perceptron Structure
- Kernels: Who Cares?
- Non-Linear Separators
- Some Kernels

Feature Extractors

- A feature extractor maps inputs to feature vectors.
- Many classifiers take feature vectors as inputs.
- Feature vectors are usually very sparse, so use sparse encodings (i.e., only represent the non-zero keys).

Example input (a spam email; the misspellings are part of the message and drive the MISSPELLED feature):

  "Dear Sir. First, I must solicit your confidence in this transaction, this is by virture of its nature as being utterly confidencial and top secret. …"

Extracted feature vector:

  W=dear     : 1
  W=sir      : 1
  W=this     : 2
  ...
  W=wish     : 0
  ...
  MISSPELLED : 2
  NAMELESS   : 1
  ALL_CAPS   : 0
  NUM_URLS   : 0
  ...

The Binary Perceptron

- Inputs are features; each feature has a weight.
- The sum of the weighted features is the activation:
  activation_w(x) = sum_i w_i * f_i(x) = w · f(x)
- If the activation is positive, output 1; if negative, output 0.

[Diagram: features f1, f2, f3 are weighted by w1, w2, w3 and summed; the unit outputs 1 if the weighted sum is > 0.]

Example: Spam

Imagine 4 features: free (number of occurrences of "free"), money (occurrences of "money"), the (occurrences of "the"), and BIAS (always has value 1).

  weights w        f("free money")
  BIAS  : -3       BIAS  : 1
  free  :  4       free  : 1
  money :  2       money : 1
  the   :  0       the   : 0

Activation: (-3)(1) + (4)(1) + (2)(1) + (0)(0) = 3 > 0, so "free money" is classified as spam.

Binary Decision Rule

- In the space of feature vectors, any weight vector defines a hyperplane.
- One side of it is class 1; the other is class 0.

[Figure: the decision boundary for the weights above, drawn in the (free, money) plane; the region with positive activation is labeled 1 = SPAM, the rest 0 = HAM.]

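To make the binary perceptron concrete, here is a minimal sketch in Python. The sparse dictionary representation and the helper names (activation, classify) are illustrative choices, not from the lecture; the weights and features mirror the spam example above.

```python
# A minimal binary perceptron over sparse features (dicts mapping
# feature name -> value), following the spam example in these notes.

def activation(weights, features):
    """Dot product of a sparse weight vector and a sparse feature vector."""
    return sum(weights.get(k, 0.0) * v for k, v in features.items())

def classify(weights, features):
    """Positive activation -> class 1 (SPAM); otherwise class 0 (HAM)."""
    return 1 if activation(weights, features) > 0 else 0

weights = {"BIAS": -3, "free": 4, "money": 2, "the": 0}
f = {"BIAS": 1, "free": 1, "money": 1, "the": 0}   # f("free money")

print(activation(weights, f))  # (-3)(1) + (4)(1) + (2)(1) + (0)(0) = 3
print(classify(weights, f))    # 1, i.e. SPAM
```
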
The Multiclass Perceptron

- If we have more than two classes, keep a weight vector w_y for each class y.
- Calculate an activation for each class: activation_y(x) = w_y · f(x).
- Highest activation wins: predict argmax_y w_y · f(x).

Example

Three classes with the weight vectors below, and the input "win the vote":

  w_1             w_2             w_3             f("win the vote")
  BIAS : -2       BIAS : 1        BIAS : 2        BIAS : 1
  win  :  4       win  : 2        win  : 0        win  : 1
  game :  4       game : 0        game : 2        game : 0
  vote :  0       vote : 4        vote : 0        vote : 1
  the  :  0       the  : 0        the  : 0        the  : 1

Activations: w_1 · f = 2, w_2 · f = 7, w_3 · f = 2, so class 2 wins.

The Perceptron Update Rule

- Start with zero weights.
- Pick up training instances one by one and try to classify each one.
- If correct: no change!
- If wrong: lower the score of the wrong answer and raise the score of the right answer:
  w_wrong ← w_wrong - f(x),  w_right ← w_right + f(x)

(A runnable sketch of this procedure appears after the Issues with Perceptrons section below.)

Example

[Worked in lecture: starting from all-zero weight vectors, the updates triggered by the instances "win the vote", "win the election", and "win the game".]

Examples: Perceptron

[Figure: the separable case; the perceptron finds a line separating the two classes of training points.]

Mistake-Driven Classification

- In naïve Bayes, the parameters:
  - come from data statistics,
  - have a causal interpretation,
  - and are set in one pass through the data.
- The perceptron's parameters:
  - come from reactions to mistakes,
  - have a discriminative interpretation,
  - and are tuned by going through the data until held-out accuracy maxes out.

[Figure: the data split into Training Data, Held-Out Data, and Test Data.]

Properties of Perceptrons

- Separability: some parameter setting gets the training set perfectly correct.
- Convergence: if the training set is separable, the perceptron will eventually converge (binary case).
- Mistake bound: the maximum number of mistakes (binary case) is related to the margin, i.e. the degree of separability; the classic bound is (R/γ)² mistakes, where R bounds the feature-vector norms and γ is the margin of the best separator.

[Figure: a separable point set next to a non-separable one.]

Examples: Perceptron

[Figure: the non-separable case; no line classifies every training point correctly.]

Issues with Perceptrons

- Overtraining: test / held-out accuracy usually rises, then falls. Overtraining isn't quite as bad as overfitting, but it is similar.
- Regularization: if the data isn't separable, the weights might thrash around; averaging weight vectors over time can help (the averaged perceptron).
- Mediocre generalization: the perceptron finds a solution that only "barely" separates the data.
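The multiclass perceptron, its update rule, and the weight averaging mentioned under regularization fit in a few lines. This is a sketch under my own assumptions (sparse dict features, one weight dict per class, averaging via running totals); the structure and names are illustrative, not the lecture's.

```python
from collections import defaultdict

def score(w, f):
    """Activation of one class: dot product of its weights with the features."""
    return sum(w[k] * v for k, v in f.items())

def predict(weights, f):
    """Highest activation wins."""
    return max(weights, key=lambda y: score(weights[y], f))

def train(data, classes, passes=5):
    """Mistake-driven training on (features, label) pairs.

    If a guess is correct: no change. If wrong: lower the score of the
    wrong answer and raise the score of the right answer by the feature
    vector. Running totals implement the averaged perceptron."""
    weights = {y: defaultdict(float) for y in classes}
    totals = {y: defaultdict(float) for y in classes}
    for _ in range(passes):
        for f, y in data:
            guess = predict(weights, f)
            if guess != y:
                for k, v in f.items():
                    weights[guess][k] -= v   # lower score of wrong answer
                    weights[y][k] += v       # raise score of right answer
            for z in classes:                # accumulate for averaging
                for k, v in weights[z].items():
                    totals[z][k] += v
    n = passes * len(data)
    return {y: {k: v / n for k, v in totals[y].items()} for y in classes}
```

On the "win the vote" example above, predict would compute the three activations (2, 7, 2) and return the second class.
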
perceptron)Mediocre generalization: finds a “barely” separating solutionLinear SeparatorsWhich of these linear separators is optimal?Support Vector MachinesMaximizing the margin: good according to intuition and PAC theory.Only support vectors matter; other training examples are ignorable. Support vector machines (SVMs) find the separator with max marginMathematically, gives a quadratic program to solveBasically, SVMs are perceptrons with smarter update counts!SummaryNaïve BayesBuild classifiers using model of training dataSmoothing estimates is important in real systemsClassifier confidences are useful, when you can get themPerceptrons:Make less assumptions about dataMistake-driven learningMultiple passes through dataSimilarity FunctionsSimilarity functions are very important in machine learningTopic for next class: kernelsSimilarity functions with special propertiesThe basis for a lot of advance machine learning (e.g. SVMs)Case-Based ReasoningSimilarity for classificationCase-based reasoningPredict an instance’s label using similar instancesNearest-neighbor classification1-NN: copy the label of the most similar data pointK-NN: let the k nearest neighbors vote (have to devise a weighting scheme)Key issue: how to define similarityTrade-off:Small k gives relevant neighborsLarge k gives smoother functionsSound familiar?[DEMO]http://www.cs.cmu.edu/~zhuxj/courseproject/knndemo/KNN.htmlParametric / Non-parametricParametric models:Fixed set of parametersMore data means better settingsNon-parametric models:Complexity of the classifier increases with dataBetter in the limit, often worse in the non-limit(K)NN is non-parametricTruth2 Examples10 Examples 100 Examples 10000 ExamplesNearest-Neighbor ClassificationNearest neighbor for digits:Take new imageCompare to all training imagesAssign based on closest exampleEncoding: image is vector of intensities:What’s the similarity function?Dot product of two images vectors?Usually normalize vectors so ||x|| = 1min = 0 (when?), max = 1 (when?)Basic SimilarityMany similarities based on feature dot products:If features are just the pixels:Note: not all similarities are of this formInvariant MetricsThis and next few slides adapted from Xiao Hu, UIUCBetter distances use knowledge about visionInvariant metrics:Similarities are invariant under certain transformationsRotation, scaling, translation, stroke-thickness…E.g: 16 x 16 = 256 pixels; a point in 256-dim spaceSmall similarity in R256 (why?)How to incorporate invariance into

