SVMs, Duality and the Kernel Trick
Machine Learning – 10701/15781
Carlos Guestrin
Carnegie Mellon University
October 21st, 2009
© Carlos Guestrin 2005-2009

SVMs reminder

Today's lecture
- Learn one of the most interesting and exciting recent advancements in machine learning
  - The "kernel trick"
  - High-dimensional feature spaces at no extra cost!
- But first, a detour
  - Constrained optimization!

Constrained optimization

Lagrange multipliers – Dual variables
- Moving the constraint to the objective function
- Lagrangian:
- Solve:

Lagrange multipliers – Dual variables
- Solving:

Dual SVM derivation (1) – the linearly separable case

Dual SVM derivation (2) – the linearly separable case

Dual SVM interpretation: Sparsity

Dual SVM formulation – the linearly separable case

Dual SVM derivation – the non-separable case

Dual SVM formulation – the non-separable case

Why did we learn about the dual SVM?
- There are some quadratic programming algorithms that can solve the dual faster than the primal
- But, more importantly, the "kernel trick"!!!
- Another little detour…

Announcements: Midterm
- When: Thursday, 10/29, 5pm - 6:30pm
- Where: Doherty 2210
- What: You, your pencil, your textbook, your notes, course slides, your calculator, your good mood :)
- What NOT: No computers, iPhones, or anything else that has an internet connection
- Material: Everything from the beginning of the semester, up to and including SVMs and the kernel trick

Reminder from last time: What if the data is not linearly separable?
- Use features of features of features of features…
- Feature space can get really large really quickly!

Higher order polynomials
[plot: number of monomial terms vs. number of input dimensions, one curve per degree d = 2, 3, 4]
- m = number of input features, d = degree of polynomial
- Grows fast! For d = 6, m = 100: about 1.6 billion terms

Dual formulation only depends on dot-products, not on w!

Dot-product of polynomials

Finally: the "kernel trick"!
- Never represent features explicitly
  - Compute dot products in closed form
- Constant-time high-dimensional dot-products for many classes of features
- Very interesting theory – Reproducing Kernel Hilbert Spaces
  - Not covered in detail in 10701/15781, more in 10702

Polynomial kernels
- All monomials of degree d in O(d) operations:
- How about all monomials of degree up to d?
  - Solution 0:
  - Better solution:

Common kernels
- Polynomials of degree d
- Polynomials of degree up to d
- Gaussian kernels
- Sigmoid
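For reference, a sketch of the standard forms of the kernels named above; K(u, v) denotes the kernel, and σ, η, ν are generic parameter names, not necessarily the ones used in lecture:
- Polynomial of degree d: K(u, v) = (u·v)^d
- Polynomial of degree up to d: K(u, v) = (1 + u·v)^d
- Gaussian (RBF): K(u, v) = exp(−||u − v||² / (2σ²))
- Sigmoid: K(u, v) = tanh(η u·v + ν)
The polynomial case is exactly the "dot-product of polynomials" observation: with Φ(u) containing all degree-2 monomials u_i u_j, we get Φ(u)·Φ(v) = Σ_{i,j} u_i u_j v_i v_j = (u·v)², so the high-dimensional dot product costs only one ordinary dot product.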
Overfitting?
- Huge feature space with kernels, what about overfitting???
- Maximizing margin leads to a sparse set of support vectors
- Some interesting theory says that SVMs search for simple hypotheses with large margin
- Often robust to overfitting

What about at classification time?
- For a new input x, if we need to represent Φ(x), we are in trouble!
- Recall classifier: sign(w·Φ(x) + b)
- Using kernels we are cool!

SVMs with kernels
- Choose a set of features and a kernel function
- Solve the dual problem to obtain the support vectors αi
- At classification time, compute:
- Classify as:

What's the difference between SVMs and Logistic Regression?

                                           SVMs         Logistic Regression
  Loss function                            Hinge loss   Log-loss
  High-dimensional features with kernels   Yes!         No

Kernels in logistic regression
- Define the weights in terms of the support vectors:
- Derive a simple gradient descent rule on αi

What's the difference between SVMs and Logistic Regression? (Revisited)

                                           SVMs         Logistic Regression
  Loss function                            Hinge loss   Log-loss
  High-dimensional features with kernels   Yes!         Yes!
  Solution sparse                          Often yes!   Almost always no!
  Semantics of output                      "Margin"     Real probabilities

What you need to know
- Dual SVM formulation
  - How it's derived
- The kernel trick
  - Derive the polynomial kernel
  - Common kernels
- Kernelized logistic regression
- Differences between SVMs and logistic regression
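For reference, a sketch of the standard soft-margin dual referred to by the "Dual SVM formulation" slides; the notation follows the common textbook form and may differ from the one used in lecture:
  maximize over α:   Σ_i α_i − ½ Σ_i Σ_j α_i α_j y_i y_j (x_i · x_j)
  subject to:        0 ≤ α_i ≤ C for all i,   Σ_i α_i y_i = 0
with w = Σ_i α_i y_i x_i; the linearly separable case is the same with only 0 ≤ α_i (i.e. C → ∞). The kernel trick replaces every dot product x_i · x_j with K(x_i, x_j), and the classifier becomes sign(Σ_i α_i y_i K(x_i, x) + b).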
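Below is a minimal code sketch (not from the lecture) of the classification-time computation on the "SVMs with kernels" slide: given the dual solution αi, labels yi, support vectors xi and a kernel K, a new point x is classified as sign(Σ_i αi yi K(xi, x) + b). All function and variable names are illustrative.

import numpy as np

def gaussian_kernel(u, v, sigma=1.0):
    # One of the "common kernels": K(u, v) = exp(-||u - v||^2 / (2 sigma^2))
    return np.exp(-np.sum((u - v) ** 2) / (2.0 * sigma ** 2))

def svm_predict(x, support_vectors, labels, alphas, b, kernel=gaussian_kernel):
    # Classification time: Phi(x) is never formed explicitly;
    # only kernel evaluations against the support vectors are needed.
    decision = sum(a * y * kernel(sv, x)
                   for a, y, sv in zip(alphas, labels, support_vectors))
    return np.sign(decision + b)

# Toy usage: two support vectors in 2-D, made-up multipliers and bias.
svs = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]
ys = [+1, -1]
alphas = [0.5, 0.5]
print(svm_predict(np.array([0.2, 0.8]), svs, ys, alphas, b=0.0))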