Outline
- Support Vector Machines
- Announcements
- Linear classifiers – Which line is better?
- Pick the one with the largest margin!
- Maximize the margin
- But there are many planes…
- Review: Normal to a plane
- Normalized margin – Canonical hyperplanes
- Margin maximization using canonical hyperplanes
- Support vector machines (SVMs)
- What if the data is not linearly separable?
- What if the data is still not linearly separable?
- Slack variables – Hinge loss
- Side note: What's the difference between SVMs and logistic regression?
- What about multiple classes?
- One against All
- Learn 1 classifier: Multiclass SVM
- What you need to know
- SVMs, Duality and the Kernel Trick
- SVMs reminder
- You will now…
- Constrained optimization
- Lagrange multipliers – Dual variables
- Dual SVM derivation (1) – the linearly separable case
- Dual SVM derivation (2) – the linearly separable case
- Dual SVM interpretation
- Dual SVM formulation – the linearly separable case
- Dual SVM derivation – the non-separable case
- Dual SVM formulation – the non-separable case
- Why did we learn about the dual SVM?
- Reminder from last time: What if the data is not linearly separable?
- Higher order polynomials
- Dual formulation only depends on dot-products, not on w!
- Dot-product of polynomials
- Finally: the "kernel trick"!
- Polynomial kernels
- Common kernels
- Overfitting?
- What about at classification time
- SVMs with kernels
- What's the difference between SVMs and Logistic Regression?
- Kernels in logistic regression
- What's the difference between SVMs and Logistic Regression? (Revisited)
- What you need to know
- Acknowledgment

Support Vector Machines
Machine Learning – 10701/15781
Carlos Guestrin, Carnegie Mellon University
February 22nd, 2005
©2006 Carlos Guestrin

Two SVM tutorials are linked on the class website (please read both):
- High-level presentation with applications (Hearst 1998)
- Detailed tutorial (Burges 1998)

Announcements
- Third homework is out; due March 1st
- Final assigned by registrar: May 12, 1-4 p.m.; location TBD

Linear classifiers – Which line is better?
- Data: example i is (x_i, y_i); the classifier scores w·x = Σ_j w^(j) x^(j)

Pick the one with the largest margin!
- The decision boundary is the hyperplane w·x + b = 0

Maximize the margin

But there are many planes…
- Any rescaling (w, b) → (c·w, c·b) with c > 0 describes the same hyperplane w·x + b = 0, so we need a way to pick one representative

Review: Normal to a plane
- w is the normal vector of the hyperplane w·x + b = 0 (perpendicular to it)

Normalized margin – Canonical hyperplanes
- Rescale (w, b) so the closest positive point x+ lies on w·x + b = +1 and the closest negative point x- lies on w·x + b = -1; the boundary is w·x + b = 0 and the margin between the two canonical hyperplanes is 2γ

Margin maximization using canonical hyperplanes
- With canonical hyperplanes the margin is 2γ = 2/||w||, so maximizing the margin means minimizing w·w subject to y_i (w·x_i + b) ≥ 1 for all i

Support vector machines (SVMs)
- Solve efficiently by quadratic programming (QP); well-studied solution algorithms
- The hyperplane is defined by the support vectors

What if the data is not linearly separable?
- Use features of features of features of features…

What if the data is still not linearly separable?
- Minimize w·w and the number of training mistakes
- Tradeoff between the two criteria?
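The canonical-hyperplane scaling above can be checked numerically. A minimal sketch in plain NumPy, using a hand-picked toy dataset (the points, weights, and bias here are assumptions for illustration, not from the slides):

```python
import numpy as np

# Toy separable dataset (hypothetical): two points straddling
# the boundary x[0] = 0.
X = np.array([[1.0, 0.0],
              [-1.0, 0.0]])
y = np.array([1.0, -1.0])

# A canonical (w, b): the closest points satisfy y_i (w.x_i + b) = 1 exactly.
w = np.array([1.0, 0.0])
b = 0.0

functional_margins = y * (X @ w + b)     # both equal 1 -> canonical scaling
full_margin = 2.0 / np.linalg.norm(w)    # 2*gamma = 2 / ||w||

# A rescaled (2w, 2b) describes the same hyperplane but is NOT canonical:
# the functional margins double, which is why the canonical scaling pins
# down one representative among the "many planes".
rescaled_margins = y * (X @ (2 * w) + 2 * b)
```

The rescaling check is the reason canonical hyperplanes are introduced at all: without fixing the scale of (w, b), "maximize the margin" is not a well-posed optimization over w.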
- Tradeoff #(mistakes) against w·w: minimize w·w + C·#(mistakes), where the 0/1 loss counts mistakes and C is the slack penalty
- Not a QP anymore; also doesn't distinguish near misses from really bad mistakes

Slack variables – Hinge loss
- If the margin is ≥ 1, don't care; if the margin is < 1, pay a linear penalty

Side note: What's the difference between SVMs and logistic regression?
- SVM: hinge loss; logistic regression: log loss

What about multiple classes?

One against All
- Learn 3 classifiers, one per class

Learn 1 classifier: Multiclass SVM
- Simultaneously learn 3 sets of weights

What you need to know
- Maximizing the margin
- Derivation of the SVM formulation
- Slack variables and hinge loss
- Relationship between SVMs and logistic regression: 0/1 loss, hinge loss, log loss
- Tackling multiple classes: One against All, and multiclass SVMs

SVMs, Duality and the Kernel Trick
Machine Learning – 10701/15781
Carlos Guestrin, Carnegie Mellon University
February 22nd, 2005

SVMs reminder

You will now…
- Learn one of the most interesting and exciting recent advancements in machine learning: the "kernel trick"
- High-dimensional feature spaces at no extra cost!
- But first, a detour: constrained optimization!

Constrained optimization

Lagrange multipliers – Dual variables

Dual SVM derivation (1) – the linearly separable case

Dual SVM derivation (2) – the linearly separable case

Dual SVM interpretation
- The weight vector is a combination of the training points, w = Σ_i α_i y_i x_i; only the support vectors have α_i > 0

Dual SVM formulation – the linearly separable case

Dual SVM derivation – the non-separable case

Dual SVM formulation – the non-separable case

Why did we learn about the dual SVM?
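The dual interpretation above (w recovered as a weighted combination of support vectors, with the dual constraint Σ_i α_i y_i = 0) can be checked on a two-point toy problem. The points and the dual solution α_1 = α_2 = 0.5 below are worked out by hand for this specific pair, not produced by a QP solver:

```python
import numpy as np

# Two-point toy problem (hypothetical): both points end up as support
# vectors, and the hard-margin dual solution is alpha_1 = alpha_2 = 0.5.
x1, y1 = np.array([2.0, 0.0]), 1.0
x2, y2 = np.array([0.0, 0.0]), -1.0
alpha = np.array([0.5, 0.5])

# Recover the primal weights from the dual: w = sum_i alpha_i y_i x_i
w = alpha[0] * y1 * x1 + alpha[1] * y2 * x2

# b from a support vector's active constraint y_i (w.x_i + b) = 1,
# which for y_i = +/-1 rearranges to b = y_i - w.x_i
b = y1 - w @ x1

# Checks: dual feasibility sum_i alpha_i y_i = 0, and both support
# vectors sit exactly on the canonical hyperplanes (functional margin 1).
balance = alpha[0] * y1 + alpha[1] * y2
m1 = y1 * (w @ x1 + b)
m2 = y2 * (w @ x2 + b)
```

Here the recovered boundary is the perpendicular bisector of the two points, which is exactly the maximum-margin separator for a two-point dataset.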
- There are some quadratic programming algorithms that can solve the dual faster than the primal
- But, more importantly, the "kernel trick"!!!
- Another little detour…

Reminder from last time: What if the data is not linearly separable?
- Use features of features of features of features…
- The feature space can get really large really quickly!

Higher order polynomials
- m = number of input features, d = degree of the polynomial
- The number of monomial terms grows fast in the number of input dimensions (compare d = 2, 3, 4)
- e.g. for d = 6 and m = 100, there are about 1.6 billion terms

Dual formulation only depends on dot-products, not on w!

Dot-product of polynomials

Finally: the "kernel trick"!
- Never represent features explicitly; compute dot products in closed form
- Constant-time high-dimensional dot products for many classes of features
- Very interesting theory – Reproducing Kernel Hilbert Spaces (not covered in detail in 10701/15781; more in 10702)

Polynomial kernels
- All monomials of degree d in O(d) operations
- How about all monomials of degree up to d? Solution 0: … Better solution: …

Common kernels
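The dot-product-of-polynomials identity behind the kernel trick can be verified directly. For 2-D inputs, the degree-2 monomial feature map φ(x) = (x_1², √2·x_1·x_2, x_2²) (the standard construction, used here as an assumed example) satisfies φ(u)·φ(v) = (u·v)², so the kernel computes the high-dimensional dot product without ever building the features:

```python
import numpy as np

def phi(x):
    # Explicit degree-2 monomial feature map for 2-D input (the standard
    # construction, with a sqrt(2) weight on the cross term).
    return np.array([x[0] ** 2, np.sqrt(2.0) * x[0] * x[1], x[1] ** 2])

def poly_kernel(u, v, d=2):
    # The kernel trick: (u.v)^d equals the dot product of the degree-d
    # monomial features, computed in O(m) time instead of O(m^d).
    return (u @ v) ** d

u = np.array([1.0, 2.0])
v = np.array([3.0, 0.5])

explicit = phi(u) @ phi(v)     # dot product in the expanded feature space
implicit = poly_kernel(u, v)   # same value, features never materialized
```

A commonly used variant, (1 + u·v)^d, captures all monomials of degree up to d (each with a fixed weight), which addresses the "monomials of degree up to d" question on the polynomial-kernels slide.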
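Alongside polynomial kernels, one kernel that standard SVM references (e.g. the Burges 1998 tutorial linked above) commonly list is the Gaussian (RBF) kernel. A minimal sketch; the bandwidth value is an assumption:

```python
import numpy as np

def rbf_kernel(u, v, sigma=1.0):
    # Gaussian (RBF) kernel: corresponds to an implicit
    # infinite-dimensional feature space; sigma is the bandwidth.
    diff = u - v
    return np.exp(-(diff @ diff) / (2.0 * sigma ** 2))

u = np.array([1.0, 0.0])
v = np.array([0.0, 0.0])

k_uv = rbf_kernel(u, v)   # exp(-0.5) for these points with sigma = 1
k_uu = rbf_kernel(u, u)   # every point has K(x, x) = 1
```

Because K(u, v) depends only on the distance ||u - v||, the classifier it induces is translation-invariant, unlike the polynomial kernel.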