SVM: Multiclass and Structured Prediction
Bin Zhao

Part I: Multi-Class SVM

2-Class SVM
• Primal form (standard soft-margin SVM):
  \min_{w, b, \xi} \frac{1}{2}\|w\|^2 + C \sum_i \xi_i
  \text{s.t. } y_i (w^\top x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0
• Dual form:
  \max_{\alpha} \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j x_i^\top x_j
  \text{s.t. } 0 \le \alpha_i \le C, \quad \sum_i \alpha_i y_i = 0

Real-World Classification Problems
• Object recognition (~100 classes), automated protein classification (~50 classes), digit recognition (10 classes), phoneme recognition [Waibel, Hanzawa, Hinton, Shikano, Lang 1989]; some tasks involve 300-600 classes (example images: http://www.glue.umd.edu/~zhelin/recog.html)
• The number of classes can be large
• The multi-class algorithm can be computationally heavy

How Can We Solve the Multi-Class Problem?
• One-against-one
• One-against-rest
• Crammer & Singer's formulation
• Error-correcting output coding
• Empirical comparisons

One-Against-One
• Train one binary SVM for each pair of classes, k(k-1)/2 machines in total; classify a new point by majority vote over the pairwise decisions.

One-Against-Rest
• Train one binary SVM per class, separating that class from all the others, k machines in total; classify to the class whose machine gives the largest output.

Problems
• One-against-one: k(k-1)/2 classifiers must be trained and evaluated, and the vote can tie.
• One-against-rest: each binary problem is highly imbalanced, and the k outputs are on scales that are not directly comparable.

Crammer & Singer's Formulation
• A naive approach: learn one weight vector w_r per class, with a separate slack variable for every (example, class) pair:
  \min \frac{1}{2} \sum_r \|w_r\|^2 + C \sum_i \sum_{r \ne y_i} \xi_{ir}
  \text{s.t. } w_{y_i}^\top x_i - w_r^\top x_i \ge 1 - \xi_{ir}, \quad \xi_{ir} \ge 0
• C & S's formulation: keep a single slack per example, so only the most violating class is penalized:
  \min \frac{1}{2} \sum_r \|w_r\|^2 + C \sum_i \xi_i
  \text{s.t. } w_{y_i}^\top x_i - w_r^\top x_i \ge 1 - \xi_i \quad \forall r \ne y_i
• Classification: f(x) = \arg\max_r w_r^\top x

Error-Correcting Output Code (ECOC)
Source: Dietterich and Bakiri (1995)
• Assign each class a binary codeword (e.g., 0 1 0 0 0 0 0 0 0 0) and train one binary classifier per code bit.
• A meta-classifier assembles the predicted bits into a codeword and outputs the class whose codeword is nearest in Hamming distance.

Discussions: Special Cases of ECOC
• One-against-rest corresponds to a code matrix equal to the identity: one bit per class.
• One-against-one corresponds to a ternary code in which each bit compares exactly two classes and ignores the rest.

Empirical Study
• "In Defense of One-Vs-All Classification" (JMLR 2004): the most important step in good multiclass classification is to use the best binary classifier available. Once this is done, it seems to make little difference what multiclass scheme is applied, and therefore a simple scheme such as OVA (or AVA) is preferable to a more complex error-correcting coding scheme or single-machine scheme.
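To make the schemes surveyed above concrete, here is a minimal sketch comparing one-against-rest, one-against-one, and ECOC on a synthetic problem, all built on the same binary SVM. It assumes scikit-learn is available; the dataset, hyperparameters, and code_size are illustrative choices, not values from the slides.

```python
# Minimal sketch: the three multiclass reduction schemes from Part I,
# each wrapping the same binary machine (LinearSVC). Dataset and
# settings are illustrative assumptions, not taken from the slides.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import (OneVsOneClassifier, OneVsRestClassifier,
                                OutputCodeClassifier)
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

schemes = {
    "one-against-rest": OneVsRestClassifier(LinearSVC()),  # k machines
    "one-against-one": OneVsOneClassifier(LinearSVC()),    # k(k-1)/2 machines
    "ECOC": OutputCodeClassifier(LinearSVC(), code_size=2.0,
                                 random_state=0),          # random code matrix
}
for name, clf in schemes.items():
    clf.fit(X_tr, y_tr)
    print(f"{name}: test accuracy {clf.score(X_te, y_te):.3f}")
```

This mirrors the empirical finding quoted above: with the same well-tuned binary machine underneath, the choice of reduction scheme usually matters little.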
Part II: Structured SVM
Slides courtesy: Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, Yasemin Altun

Local Classification
• Classify each output using local information only
• Ignores correlations between outputs: the letters are read independently as "b r e a r"
[thanks to Ben Taskar for the slide]

Structured Classification
• Use local information
• Exploit correlations between outputs: the same letters are read jointly as "b r e a c"
[thanks to Ben Taskar for the slide]

Case Study: Max Margin Learning on Domain-Independent Web Information Extraction

Motivation
• "Understand" a web page
  – Assign semantics to each component of the page
  – Understand the functionality of each section
  – Distinguish the main content from side content
• Page layout reveals important cues

Motivation (Cont.)
• A human can "understand" a page written in a foreign language, relying on
  – Position
  – Size
  – Font
  – Color
  – Boldness
  – Relative position to other sections
• Can a computer fulfill a similar task? This is the goal of domain-independent web information extraction.

The Proposed Approach
• Page segmentation: a vision tree
  – Built from the DOM tree
  – Augmented with visual information
  – Corresponds to how each node is displayed
• Structured segmentation
  – Assign a label to each node in the vision tree
  – Information extraction is then based on node classification

Structured Classification
• Label space for leaf nodes: attribute name, attribute value, image, nav bar, main title, page tail, image caption, and non-attribute (anything else)
• Label space for non-leaf nodes: structure block, data record, nav bar block, non-attribute block, value block, page tail block, name block, image block, image caption block, main title block

Max Margin Learning
• Structured output space (a tree): if treated as conventional classification, there would be an exponential number of classes, making training infeasible
• Augmented loss: misclassifying a single node of a tree should incur much less penalty than misclassifying the entire tree
• Input-output feature mapping: \Psi(x, y)
• Linear discriminant function: F(x, y; w) = \langle w, \Psi(x, y) \rangle
• Response to input x: f(x) = \arg\max_{y \in \mathcal{Y}} F(x, y; w)

Max Margin Learning (Cont.)
• Augmented loss \Delta(y, y'): the number of nodes on which y and y' disagree
• Learning: use an SVM to find the optimal w, with the loss entering through margin rescaling:
  \min \frac{1}{2}\|w\|^2 + C \sum_i \xi_i
  \text{s.t. } \langle w, \Psi(x_i, y_i) \rangle - \langle w, \Psi(x_i, y) \rangle \ge \Delta(y_i, y) - \xi_i \quad \forall y \ne y_i
• Inference: dynamic programming

Input-Output Feature Mapping
• Two types of cliques in the hierarchical model
  – Cliques covering observation-state node pairs
  – Cliques covering state-state node pairs
• Correspondingly, two types of features
  – Type I: features describing observation-state cliques
  – Type II: features describing state-state cliques

Input-Output Feature Mapping: Type I
• Spatial features: position of the block center, block height, block width, block area
• Features for all nodes: link number, counts of various HTML tags, number of child nodes
• Features for leaf nodes only: text length, font, bold, italic, word count, number of images, image size, link text length

Input-Output Feature Mapping: Type II
• Parent-child relationship: label co-occurrence pattern
• Connections between spatially adjacent blocks
  – Link each node with its k nearest neighbors
  – Define a label co-occurrence pattern for each connected node pair (i, j)

Learning and Inference
• Learning: quadratic programming with an exponential number of constraints, solved with a cutting-plane algorithm (a.k.a. constraint generation, bundle method)
• Inference
  – Without edges between spatially adjacent blocks: dynamic programming (a.k.a. Viterbi decoding; see the sketch after this slide)
  – With edges between spatially adjacent blocks: loopy belief propagation
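To illustrate the dynamic-programming inference named above, here is a self-contained sketch of max-sum (Viterbi-style) message passing on a label tree. Everything here is an illustrative assumption: the function name, the dict-of-children tree encoding, and the toy scores; in the case study the node and edge scores would come from the learned w applied to the Type I and Type II features.

```python
# Sketch: exact MAP labeling of a tree by bottom-up dynamic programming.
# node_score[i, l] scores giving node i label l; pair_score[lp, lc] scores
# a parent labeled lp having a child labeled lc. Both are stand-ins for
# the learned scores w . psi(x, y).
import numpy as np

def best_labeling(children, node_score, pair_score, root=0):
    msg = {}    # msg[i][l]  = best score of i's subtree if i takes label l
    back = {}   # back[i][c] = best label of child c for each label of i

    def up(i):  # bottom-up pass: max-sum over the children of node i
        score = node_score[i].copy()
        back[i] = {}
        for c in children.get(i, []):
            up(c)
            # joint[lp, lc] = msg[c][lc] + pair_score[lp, lc]
            joint = msg[c][None, :] + pair_score
            score += joint.max(axis=1)          # best child label per lp
            back[i][c] = joint.argmax(axis=1)   # remember which one
        msg[i] = score

    def down(i, label, out):  # top-down pass: read off the argmax labels
        out[i] = label
        for c in children.get(i, []):
            down(c, int(back[i][c][label]), out)

    up(root)
    labels = {}
    down(root, int(msg[root].argmax()), labels)
    return labels, float(msg[root].max())

# Toy usage: a 4-node tree (node 0 is the root) with 3 labels per node.
children = {0: [1, 2], 1: [3]}
rng = np.random.default_rng(0)
node_score = rng.normal(size=(4, 3))
pair_score = rng.normal(size=(3, 3))
labels, score = best_labeling(children, node_score, pair_score)
print(labels, score)
```

Once the extra edges between spatially adjacent blocks are added, the graph is no longer a tree and this exact pass is replaced by loopy belief propagation, as the slide notes.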
Empirical Study
• Block-level prediction results
• Attribute name-value pair extraction
  – 1000 web pages
  – Precision: 56.91%
  – Recall: 59.37%

Thank You