Berkeley COMPSCI 188 - Perceptron (2PP)

CS 188: Artificial Intelligence, Fall 2010
Lecture 23: Perceptrons and More!
11/18/2010
Dan Klein – UC Berkeley

Errors, and What to Do

- Examples of errors:

  "Dear GlobalSCAPE Customer, GlobalSCAPE has partnered with ScanSoft to offer you the latest version of OmniPage Pro, for just $99.99* - the regular list price is $499! The most common question we've received about this offer is - Is this genuine? We would like to assure you that this offer is authorized by ScanSoft, is genuine and valid. You can get the . . ."

  ". . . To receive your $30 Amazon.com promotional certificate, click through to http://www.amazon.com/apparel and see the prominent link for the $30 offer. All details are there. We hope you enjoyed receiving this message. However, if you'd rather not receive future e-mails announcing new store launches, please click . . ."

What to Do About Errors

- Problem: there's still spam in your inbox
- Need more features – words aren't enough!
  - Have you emailed the sender before?
  - Have 1K other people just gotten the same email?
  - Is the sending information consistent?
  - Is the email in ALL CAPS?
  - Do inline URLs point where they say they point?
  - Does the email address you by (your) name?
- Naïve Bayes models can incorporate a variety of features, but tend to do best in homogeneous cases (e.g., all features are word occurrences)

Later On…

- Web Search
- Decision Problems

Classification: Feature Vectors

- An input is reduced to a feature vector, and the classifier maps feature vectors to labels:
  - Spam filtering: "Hello, Do you want free printr cartriges? Why pay more when you can get them ABSOLUTELY FREE! Just . . ." becomes (# free: 2, YOUR_NAME: 0, MISSPELLED: 2, FROM_FRIEND: 0, ...), labeled SPAM (+)
  - Digit recognition: pixel features (PIXEL-7,12: 1, PIXEL-7,13: 0, ...) plus shape features (NUM_LOOPS: 1, ...), labeled "2"
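To make the feature-vector idea concrete, here is a minimal sketch of turning the email above into counts. The tokenizer and the tiny misspelling list are illustrative assumptions, not something defined in the lecture; a real filter would use a dictionary or a language model.

```python
import re

# Toy misspelling list, purely for illustration.
MISSPELLINGS = {"printr", "cartriges"}

def extract_features(email_text, recipient_name="Alice"):
    """Map raw email text to a sparse feature vector (feature name -> count)."""
    words = re.findall(r"[a-z']+", email_text.lower())
    return {
        "# free":     words.count("free"),
        "YOUR_NAME":  int(recipient_name.lower() in words),
        "MISSPELLED": sum(1 for w in words if w in MISSPELLINGS),
    }

text = ("Hello, Do you want free printr cartriges? "
        "Why pay more when you can get them ABSOLUTELY FREE! Just")
print(extract_features(text))
# -> {'# free': 2, 'YOUR_NAME': 0, 'MISSPELLED': 2}
```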
Some (Simplified) Biology

- Very loose inspiration: human neurons

Linear Classifiers

- Inputs are feature values
- Each feature has a weight
- Sum is the activation: activation_w(x) = Σ_i w_i · f_i(x) = w · f(x)
- If the activation is:
  - Positive, output +1
  - Negative, output -1

[Figure: features f1, f2, f3 weighted by w1, w2, w3 feed a summation node Σ, whose output is tested against > 0]

Classification: Weights

- Binary case: compare features to a weight vector
- Learning: figure out the weight vector from examples
- Dot product positive means the positive class

[Figure: two example feature vectors, (# free: 2, YOUR_NAME: 0, MISSPELLED: 2, FROM_FRIEND: 0, ...) and (# free: 0, YOUR_NAME: 1, MISSPELLED: 1, FROM_FRIEND: 1, ...), scored against the weight vector (# free: 4, YOUR_NAME: -1, MISSPELLED: 1, FROM_FRIEND: -3, ...)]

Binary Decision Rule

- In the space of feature vectors:
  - Examples are points
  - Any weight vector is a hyperplane
  - One side corresponds to Y = +1
  - The other corresponds to Y = -1

[Figure: weights BIAS: -3, free: 4, money: 2; a line in the (free, money) plane separating the +1 = SPAM region from the -1 = HAM region]
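A minimal sketch of the decision rule as sparse dot products, using the BIAS/free/money weights from the figure above; the example email's feature counts are made up for illustration.

```python
def dot(w, f):
    """Sparse dot product: sum of w_i * f_i over the features present in f."""
    return sum(w.get(k, 0.0) * v for k, v in f.items())

def classify(w, f):
    """Binary decision rule: +1 (SPAM) if the activation is positive, else -1 (HAM)."""
    return +1 if dot(w, f) > 0 else -1

w = {"BIAS": -3.0, "free": 4.0, "money": 2.0}   # weights from the slide
f = {"BIAS": 1, "free": 1, "money": 1}          # hypothetical counts for one email
print(dot(w, f), classify(w, f))                # 3.0 1  -> SPAM
```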


Learning: Binary Perceptron

- Start with weights = 0
- For each training instance:
  - Classify with current weights
  - If correct (i.e., y = y*), no change!
  - If wrong: adjust the weight vector by adding or subtracting the feature vector (w = w + y* · f(x)); subtract if y* is -1

[Demo]
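A sketch of that loop, reusing classify from the previous snippet; the num_passes parameter is an assumption, since the algorithm as stated simply sweeps over the training instances.

```python
def train_binary_perceptron(data, num_passes=10):
    """data: list of (feature dict, label) pairs with labels in {+1, -1}."""
    w = {}
    for _ in range(num_passes):
        for f, y_star in data:
            y = classify(w, f)      # classify with current weights
            if y != y_star:         # mistake: add f if y* = +1, subtract if y* = -1
                for k, v in f.items():
                    w[k] = w.get(k, 0.0) + y_star * v
    return w
```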

Multiclass Decision Rule

- If we have multiple classes:
  - A weight vector for each class: w_y
  - Score (activation) of a class y: w_y · f(x)
  - Prediction: the highest score wins, y = argmax_y w_y · f(x)
- Binary = multiclass where the negative class has weight zero

Learning: Multiclass Perceptron

- Start with all weights = 0
- Pick up training examples one by one
- Predict with current weights
- If correct, no change!
- If wrong: lower the score of the wrong answer (w_y = w_y - f(x)), raise the score of the right answer (w_y* = w_y* + f(x))

Example: Multiclass Perceptron

- Training sentences: "win the vote", "win the election", "win the game"
- Three class weight vectors over the features BIAS, win, game, vote, the; initially one class has BIAS: 1 and every other weight, in every class, is 0
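A minimal sketch of the multiclass perceptron just described, with one sparse weight vector per class.

```python
from collections import defaultdict

def train_multiclass_perceptron(data, classes, num_passes=10):
    """data: list of (feature dict, true class); one weight vector per class."""
    w = {c: defaultdict(float) for c in classes}
    for _ in range(num_passes):
        for f, y_star in data:
            # Predict: the class with the highest activation w_y . f(x) wins.
            y = max(classes, key=lambda c: sum(w[c][k] * v for k, v in f.items()))
            if y != y_star:            # mistake:
                for k, v in f.items():
                    w[y_star][k] += v  #   raise the score of the right answer
                    w[y][k] -= v       #   lower the score of the wrong answer
    return w
```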

Examples: Perceptron (Separable Case)

Properties of Perceptrons

- Separability: some parameters get the training set perfectly correct
- Convergence: if the training data are separable, the perceptron will eventually converge (binary case)
- Mistake bound: the maximum number of mistakes (binary case) is related to the margin, or degree of separability

[Figure: a separable point set vs. a non-separable one]

Examples: Perceptron (Non-Separable Case)

Problems with the Perceptron

- Noise: if the data isn't separable, the weights might thrash
  - Averaging weight vectors over time can help (averaged perceptron); see the sketch after this list
- Mediocre generalization: finds a "barely" separating solution
- Overtraining: test / held-out accuracy usually rises, then falls
  - Overtraining is a kind of overfitting
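The slide only names the averaging idea; one common realization (a sketch, assuming the binary setting and the classify helper from above) keeps a running sum of the weight vector after every example and returns its average.

```python
def train_averaged_perceptron(data, num_passes=10):
    """Binary averaged perceptron: return the average of w over all steps."""
    w, w_sum, steps = {}, {}, 0
    for _ in range(num_passes):
        for f, y_star in data:
            if classify(w, f) != y_star:
                for k, v in f.items():
                    w[k] = w.get(k, 0.0) + y_star * v
            for k, v in w.items():   # accumulate the weights after every example
                w_sum[k] = w_sum.get(k, 0.0) + v
            steps += 1
    return {k: v / steps for k, v in w_sum.items()}
```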
Fixing the Perceptron

- Idea: adjust the weight update to mitigate these effects
- MIRA*: choose an update size τ that fixes the current mistake…
- … but minimizes the change to w:
  min_w (1/2) Σ_y ||w_y - w'_y||²  subject to  w_y* · f(x) ≥ w_y · f(x) + 1
  (w' is the weight vector before the update, y* the correct class, y the mistaken guess)
- The +1 helps to generalize

(* Margin Infused Relaxed Algorithm)

Minimum Correcting Update

- Only w_y* and w_y change, and only in the direction of f(x): w_y* = w'_y* + τ f(x), w_y = w'_y - τ f(x)
- The minimizing τ is not 0 (otherwise no error would have been made), so the minimum is where the constraint holds with equality:
  τ = ((w'_y - w'_y*) · f(x) + 1) / (2 f(x) · f(x))

Maximum Step Size

- In practice, it's also bad to make updates that are too large:
  - The example may be labeled incorrectly
  - You may not have enough features
- Solution: cap the maximum possible value of τ with some constant C, i.e., use min(τ, C)
- Corresponds to an optimization that assumes non-separable data
- Usually converges faster than the perceptron
- Usually better, especially on noisy data
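A sketch of the capped MIRA update derived above, reusing the dot helper and dict-of-dicts weights from the earlier snippets; the default C is an arbitrary placeholder, not a value from the lecture.

```python
def mira_update(w, f, y_star, y, C=0.001):
    """One MIRA step after guessing y when the answer was y_star (y != y_star).

    tau is the smallest change that fixes the mistake with margin 1,
    capped at C (C = 0.001 is a placeholder; tune it in practice).
    """
    norm_sq = sum(v * v for v in f.values())                     # f(x) . f(x)
    tau = (dot(w[y], f) - dot(w[y_star], f) + 1.0) / (2.0 * norm_sq)
    tau = min(tau, C)                                            # maximum step size
    for k, v in f.items():
        w[y_star][k] = w[y_star].get(k, 0.0) + tau * v
        w[y][k] = w[y].get(k, 0.0) - tau * v
```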

Linear Separators

- Which of these linear separators is optimal?

Support Vector Machines

- Maximizing the margin: good according to intuition, theory, and practice
- Only the support vectors matter; other training examples are ignorable
- Support vector machines (SVMs) find the separator with the max margin
- Basically, SVMs are MIRA where you optimize over all examples at once

[Figure: the MIRA constraint (on the current example) vs. the SVM constraints (on all examples)]

Classification: Comparison

- Naïve Bayes:
  - Builds a model of the training data
  - Gives prediction probabilities
  - Strong assumptions about feature independence
  - One pass through the data (counting)
- Perceptrons / MIRA:
  - Make fewer assumptions about the data
  - Mistake-driven learning
  - Multiple passes through the data (prediction)
  - Often more accurate

Extension: Web Search

- Information retrieval: given information needs, produce information
  - Includes, e.g., web search, question answering, and classic IR
- Web search: not exactly classification, but rather ranking
- x = "Apple Computers"

Feature-Based Ranking

- x = "Apple Computers"
- Each query–candidate pair gets its own feature vector

Perceptron for Ranking

- Inputs x, candidates y
- Many feature vectors: f(x, y), one per candidate
- One weight vector: w
- Prediction: the candidate with the highest score, argmax_y w · f(x, y)
- Update (if
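The preview cuts off before the ranking update rule, but the stated setup (one weight vector, one feature vector per candidate, highest score wins) suggests a prediction step like this sketch; the argmax over candidates is a reconstruction from that setup, not quoted from the slide.

```python
def rank(w, candidate_feature_vectors):
    """Order candidates by score w . f(x, y), best first; reuses dot from above."""
    return sorted(candidate_feature_vectors,
                  key=lambda f: dot(w, f), reverse=True)
```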
