CS 188: Artificial Intelligence, Fall 2009
Lecture 24: Perceptrons and More!
11/19/2009
Dan Klein – UC Berkeley

Announcements
- Project 4 in today
- Project 5 out today
  - Due date TBA, after the final contest date
- Qualifiers for the contest can drop their lowest assignment

Classification: Feature Vectors
- An input is mapped to a vector of feature values, and the classifier maps that vector to a label.
- Spam example: the email "Hello, do you want free printr cartriges? Why pay more when you can get them ABSOLUTELY FREE! Just ..." becomes
    # free : 2, YOUR_NAME : 0, MISSPELLED : 2, FROM_FRIEND : 0, ...  ->  SPAM (the positive class, "+")
- Digit example: pixel and shape features such as
    PIXEL-7,12 : 1, PIXEL-7,13 : 0, ..., NUM_LOOPS : 1, ...  ->  "2"

Later Today
- Web Search
- Decision Problems

Classification: Weights
- Binary case: compare the features to a weight vector
- Learning: figure out the weight vector from examples
- Example: with weights w = (# free : 4, YOUR_NAME : -1, MISSPELLED : 1, FROM_FRIEND : -3, ...),
  the spam email above, f(x) = (# free : 2, YOUR_NAME : 0, MISSPELLED : 2, FROM_FRIEND : 0, ...),
  scores w · f(x) = 4·2 + 1·2 = 10, while a legitimate email with
  f(x) = (# free : 0, YOUR_NAME : 1, MISSPELLED : 1, FROM_FRIEND : 1, ...) scores -1 + 1 - 3 = -3.
- A positive dot product w · f(x) means the positive class

Learning: Binary Perceptron
- Start with weights w = 0
- For each training instance (f(x), y*):
  - Classify with current weights: y = +1 if w · f(x) >= 0, else y = -1
  - If correct (i.e., y = y*), no change!
  - If wrong: adjust the weight vector by adding or subtracting the feature vector (subtract if y* is -1), i.e. w = w + y* f(x)
- [Demo]
- (A runnable sketch of this loop appears after the "Linear Separators" slide below.)

Multiclass Decision Rule
- If we have multiple classes:
  - Keep a weight vector w_y for each class y
  - Score (activation) of a class y: w_y · f(x)
  - Prediction: the highest score wins, y = argmax_y w_y · f(x)
- Binary classification is the multiclass rule where the negative class has weight zero

Learning: Multiclass Perceptron
- Start with all weights = 0
- Pick up training examples one by one
- Predict with current weights
- If correct, no change!
- If wrong: lower the score of the wrong answer and raise the score of the right answer:
    w_y = w_y - f(x),   w_{y*} = w_{y*} + f(x)
- (See the multiclass sketch after the "Linear Separators" slide below.)

Example: Multiclass Perceptron
[Figure: three per-class weight vectors over the features BIAS, win, game, vote, the (one initialized with BIAS : 1, the rest all zero), stepped through the training inputs "win the vote", "win the election", "win the game"]

Examples: Perceptron (Separable Case)
[Figure: the perceptron run on a linearly separable data set]

Properties of Perceptrons
- Separability: some setting of the parameters gets the training set perfectly correct
- Convergence: if the training set is separable, the perceptron will eventually converge (binary case)
- Mistake bound: the maximum number of mistakes (binary case) is related to the margin, or degree of separability, of the data (roughly, it grows as 1/δ² for margin δ)
[Figure: a separable data set next to a non-separable one]

Examples: Perceptron (Non-Separable Case)
[Figure: the perceptron run on a non-separable data set]

Problems with the Perceptron
- Noise: if the data isn't separable, the weights might thrash
  - Averaging the weight vectors over time can help (averaged perceptron)
- Mediocre generalization: it finds a "barely" separating solution
- Overtraining: test / held-out accuracy usually rises, then falls
  - Overtraining is a kind of overfitting

Fixing the Perceptron
- Idea: adjust the weight update to mitigate these effects
- MIRA*: choose an update size that fixes the current mistake...
- ... but minimizes the change to w:
    minimize ||w - w'||²  subject to  w_{y*} · f(x) >= w_y · f(x) + 1
- The +1 helps to generalize: the update must not just correct the mistake, but win by a margin
* Margin Infused Relaxed Algorithm

Minimum Correcting Update
- The update keeps the perceptron's direction but scales it by a step size τ:
    w_y = w'_y - τ f(x),   w_{y*} = w'_{y*} + τ f(x)
- The minimum is not at τ = 0, or we would not have made an error, so the minimum is where the constraint holds with equality:
    τ = ((w'_y - w'_{y*}) · f(x) + 1) / (2 f(x) · f(x))

Maximum Step Size
- In practice, it's also bad to make updates that are too large
  - The example may be labeled incorrectly
  - You may not have enough features
- Solution: cap the maximum possible value of τ with some constant C, i.e. use min(τ, C)
  - Corresponds to an optimization that assumes non-separable data
  - Usually converges faster than the perceptron
  - Usually better, especially on noisy data
- (A MIRA update sketch follows the perceptron sketches below.)

Linear Separators
- Which of these linear separators is optimal?
[Figure: several separating lines drawn through the same separable data set]
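To make the binary perceptron loop concrete, here is a minimal Python sketch. The sparse dictionary representation of feature vectors and the dot_product helper are illustrative assumptions, not course-provided code.

    def dot_product(w, f):
        """Dot product of two sparse feature dictionaries."""
        return sum(w.get(feat, 0.0) * value for feat, value in f.items())

    def train_binary_perceptron(data, passes=5):
        """data: list of (feature dict, label) pairs with labels in {+1, -1}.
        Returns the learned weight dictionary."""
        w = {}  # start with weights = 0
        for _ in range(passes):
            for f, y_star in data:
                # Classify with current weights
                y = 1 if dot_product(w, f) >= 0 else -1
                # If wrong, add or subtract the feature vector
                if y != y_star:
                    for feat, value in f.items():
                        w[feat] = w.get(feat, 0.0) + y_star * value
        return w

Run on the spam example above, the learned weights end up positive on spam-indicating features like "# free" and MISSPELLED, and negative on ham-indicating features like FROM_FRIEND.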
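The multiclass perceptron is the same loop with one weight vector per class. This sketch reuses the dot_product helper defined above and the same assumed sparse representation.

    def train_multiclass_perceptron(data, classes, passes=5):
        """data: list of (feature dict, correct class) pairs.
        Returns a dict mapping each class to its weight dictionary."""
        weights = {c: {} for c in classes}  # start with all weights = 0
        for _ in range(passes):
            for f, y_star in data:
                # Predict: the highest activation w_y . f(x) wins
                y = max(classes, key=lambda c: dot_product(weights[c], f))
                if y != y_star:
                    # Lower the wrong answer's score, raise the right answer's
                    for feat, value in f.items():
                        weights[y][feat] = weights[y].get(feat, 0.0) - value
                        weights[y_star][feat] = weights[y_star].get(feat, 0.0) + value
        return weights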
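Finally, a sketch of the capped MIRA step from the "Minimum Correcting Update" and "Maximum Step Size" slides: the same update direction as the multiclass perceptron, scaled by τ and capped at C. The default value of C here is an arbitrary illustrative choice.

    def mira_update(weights, f, y, y_star, C=0.01):
        """One capped MIRA step after wrongly predicting y instead of y_star:
        tau = ((w_y - w_y*) . f(x) + 1) / (2 f(x) . f(x)), capped at C."""
        numerator = (dot_product(weights[y], f)
                     - dot_product(weights[y_star], f) + 1.0)
        tau = min(C, numerator / (2.0 * dot_product(f, f)))
        for feat, value in f.items():
            weights[y][feat] = weights[y].get(feat, 0.0) - tau * value
            weights[y_star][feat] = weights[y_star].get(feat, 0.0) + tau * value

Note that after a mistake the numerator is at least 1, so τ is always positive: the step always moves weight toward the correct class.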
Support Vector Machines
- Maximizing the margin: good according to intuition, theory, and practice
- Only the support vectors matter; other training examples are ignorable
- Support vector machines (SVMs) find the separator with the maximum margin
- Basically, SVMs are MIRA where you optimize over all examples at once:
    minimize ||w||²  subject to  w_{y*_i} · f(x_i) >= w_y · f(x_i) + 1  for every example i and every class y
[Figure: the MIRA update on a single example vs. the SVM max-margin separator]

Classification: Comparison
- Naïve Bayes:
  - Builds a model of the training data
  - Gives prediction probabilities
  - Strong assumptions about feature independence
  - One pass through the data (counting)
- Perceptrons / MIRA:
  - Make fewer assumptions about the data
  - Mistake-driven learning
  - Multiple passes through the data (prediction)
  - Often more accurate

Extension: Web Search
- Information retrieval: given information needs, produce information
  - Includes, e.g., web search, question answering, and classic IR
- Web search: not exactly classification, but rather ranking
- Example query: x = "Apple Computers"

Feature-Based Ranking
[Figure: for the query x = "Apple Computers", each candidate result y gets its own feature vector f(x, y)]

Perceptron for Ranking
- Inputs: x
- Candidates: y
- Many feature vectors: f(x, y), one per candidate
- One weight vector: w
- Prediction: y = argmax_y w · f(x, y)
- Update (if wrong): w = w + f(x, y*) - f(x, y)
- (A sketch of this loop appears at the end of these notes.)

Pacman Apprenticeship!
- Examples are states s
- Candidates are pairs (s, a)
- "Correct" actions: those taken by the expert (the "correct" action a*)
- Features are defined over (s, a) pairs: f(s, a)
- Score of a q-state (s, a) is given by: w · f(s, a)
- How is this VERY different from reinforcement learning?

Coming Up
- Natural Language Processing
- Vision
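As referenced from the "Perceptron for Ranking" slide, here is a minimal sketch of the ranking prediction and update. The sparse dictionary representation and the helper names are illustrative assumptions.

    def dot_product(w, f):
        """Dot product of two sparse feature dictionaries."""
        return sum(w.get(feat, 0.0) * value for feat, value in f.items())

    def rank(w, candidates, features):
        """Pick the candidate y maximizing w . f(x, y).
        features(y) returns the feature dictionary f(x, y) for candidate y."""
        return max(candidates, key=lambda y: dot_product(w, features(y)))

    def ranking_update(w, features, y, y_star):
        """If the prediction y was wrong: w = w + f(x, y*) - f(x, y)."""
        for feat, value in features(y_star).items():
            w[feat] = w.get(feat, 0.0) + value
        for feat, value in features(y).items():
            w[feat] = w.get(feat, 0.0) - value

The Pacman apprenticeship setup is exactly this loop with candidates (s, a), features f(s, a), and the expert's action a* as the correct candidate.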