Berkeley COMPSCI 188 - Lecture 24: Perceptrons and More
CS 188: Artificial Intelligence, Fall 2009
Lecture 24: Perceptrons and More!
11/19/2009, Dan Klein, UC Berkeley

Announcements
- Project 4 due in today
- Project 5 out today; due date TBA, after the final contest date
- Qualifiers for the contest can drop their lowest assignment

Classification: Feature Vectors
A spam email such as "Hello, Do you want free printr cartriges? Why pay more when you can get them ABSOLUTELY FREE! Just ..." maps to a feature vector, e.g. # free: 2, YOUR_NAME: 0, MISSPELLED: 2, FROM_FRIEND: 0, ..., which is classified as SPAM. A digit image maps to features such as PIXEL-7,12: 1, PIXEL-7,13: 0, ..., NUM_LOOPS: 1, ..., which is classified as "2".

Later Today
- Web Search
- Decision Problems

Classification: Weights
- Binary case: compare the features to a weight vector
- Learning: figure out the weight vector from examples
- A positive dot product w . f(x) means the positive class

Learning: Binary Perceptron
- Start with weights w = 0
- For each training instance: classify with the current weights
- If correct (i.e., y = y*), no change
- If wrong: adjust the weight vector by adding or subtracting the feature vector; subtract if y* is -1 (i.e., w = w + y* f(x))
[Demo]
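The binary perceptron update described above can be sketched as follows. This is a minimal illustration, not the lecture's own code; the helper names (`dot`, `train_binary_perceptron`) and the dict-based sparse feature representation are assumptions:

```python
# Binary perceptron sketch. Feature vectors are dicts mapping feature
# name -> count; labels y are +1 or -1. All names here are hypothetical.

def dot(w, f):
    """Sparse dot product between a weight dict and a feature dict."""
    return sum(w.get(k, 0.0) * v for k, v in f.items())

def train_binary_perceptron(data, passes=10):
    """data: list of (features, label) pairs with label in {+1, -1}."""
    w = {}  # start with weights = 0
    for _ in range(passes):
        for f, y_star in data:
            y = 1 if dot(w, f) >= 0 else -1  # classify with current weights
            if y != y_star:                  # mistake: w = w + y* f(x)
                for k, v in f.items():       # adds f if y* = +1, subtracts if -1
                    w[k] = w.get(k, 0.0) + y_star * v
    return w
```

On a toy separable set such as `[({"free": 2}, +1), ({"friend": 1}, -1)]`, a few passes suffice for the weights to classify both examples correctly.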
Multiclass Decision Rule
- If we have multiple classes: a weight vector w_y for each class
- Score (activation) of a class y: w_y . f(x)
- Prediction: the highest score wins, y = argmax_y w_y . f(x)
- Binary = multiclass where the negative class has weight zero

Learning: Multiclass Perceptron
- Start with all weights = 0
- Pick up training examples one by one
- Predict with the current weights
- If correct, no change
- If wrong: lower the score of the wrong answer (w_y = w_y - f(x)) and raise the score of the right answer (w_y* = w_y* + f(x))

Example: Multiclass Perceptron
[Figure: weight tables over the features BIAS, win, game, vote, the, ... for three classes, updated on the examples "win the vote", "win the election", and "win the game"]

Examples: Perceptron, Separable Case
[Figure]

Properties of Perceptrons
- Separability: some parameters get the training set perfectly correct
- Convergence: if the training data are separable, the perceptron will eventually converge (binary case)
- Mistake bound: the maximum number of mistakes (binary case) is related to the margin, or degree of separability

Examples: Perceptron, Non-Separable Case
[Figure]

Problems with the Perceptron
- Noise: if the data aren't separable, the weights might thrash; averaging the weight vectors over time can help (averaged perceptron)
- Mediocre generalization: it finds a "barely" separating solution
- Overtraining: test / held-out accuracy usually rises, then falls; overtraining is a kind of overfitting

Fixing the Perceptron
- Idea: adjust the weight update to mitigate these effects
- MIRA (Margin Infused Relaxed Algorithm): choose an update size tau that fixes the current mistake but minimizes the change to w; the +1 below helps to generalize

Minimum Correcting Update
- The smallest correcting step is tau = ((w_y - w_y*) . f + 1) / (2 ||f||^2)
- tau is not 0, or we would not have made an error, so the minimum is where equality holds

Maximum Step Size
- In practice, it's also bad to make updates that are too large: the example may be labeled incorrectly, or you may not have enough features
- Solution: cap the maximum possible value of tau with some constant C, i.e. tau = min(C, ((w_y - w_y*) . f + 1) / (2 ||f||^2))
- This corresponds to an optimization that assumes non-separable data
- Usually converges faster than the perceptron, and is usually better, especially on noisy data
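The multiclass perceptron and its MIRA-style variant can be sketched together; the only difference is the step size tau. This is an illustrative sketch under assumed names (`train_multiclass`, `mira_cap`), not the course's project code:

```python
# Multiclass perceptron with an optional MIRA-style capped step size.
# Feature vectors are dicts mapping feature name -> count.
from collections import defaultdict

def dot(w, f):
    """Sparse dot product; w is a defaultdict(float)."""
    return sum(w[k] * v for k, v in f.items())

def train_multiclass(data, classes, passes=10, mira_cap=None):
    """data: list of (features, correct_class) pairs.
    If mira_cap (the constant C) is given, use the MIRA step
    tau = min(C, ((w_y - w_y*) . f + 1) / (2 ||f||^2));
    otherwise use the plain perceptron step tau = 1."""
    w = {c: defaultdict(float) for c in classes}
    for _ in range(passes):
        for f, y_star in data:
            y = max(classes, key=lambda c: dot(w[c], f))  # highest score wins
            if y != y_star:
                tau = 1.0
                if mira_cap is not None:
                    norm_sq = sum(v * v for v in f.values())
                    tau = min(mira_cap,
                              (dot(w[y], f) - dot(w[y_star], f) + 1.0)
                              / (2.0 * norm_sq))
                for k, v in f.items():
                    w[y_star][k] += tau * v  # raise score of right answer
                    w[y][k] -= tau * v       # lower score of wrong answer
    return w
```

With a small `mira_cap` such as 0.5, updates are scaled just enough to fix each mistake rather than always stepping by the full feature vector.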
that assumes non-separable data Usually converges faster than perceptron Usually better, especially on noisy data9Linear Separators Which of these linear separators is optimal? 17Support Vector Machines Maximizing the margin: good according to intuition, theory, practice Only support vectors matter; other training examples are ignorable  Support vector machines (SVMs) find the separator with max margin Basically, SVMs are MIRA where you optimize over all examples at onceMIRASVM10Classification: Comparison Naïve Bayes Builds a model training data Gives prediction probabilities Strong assumptions about feature independence One pass through data (counting) Perceptrons / MIRA: Makes less assumptions about data Mistake-driven learning Multiple passes through data (prediction) Often more accurate19Extension: Web Search Information retrieval: Given information needs, produce information Includes, e.g. web search, question answering, and classic IR Web search: not exactly classification, but rather rankingx = “Apple Computers”11Feature-Based Rankingx = “Apple Computers”x,x,Perceptron for Ranking Inputs  Candidates Many feature vectors:  One weight vector: Prediction: Update (if wrong):12Pacman Apprenticeship! Examples are states s Candidates are pairs (s,a) “Correct” actions: those taken by expert Features defined over (s,a) pairs: f(s,a) Score of a q-state (s,a) given by: How is this VERY different from reinforcement learning?“correct” action a*Coming Up Natural Language Processing Vision

