Combining Multiple Kernels for Efficient Image Classification

Behjat Siddiquie, University of Maryland, College Park, MD
Shiv N. Vitaladevuni, Janelia Farm Research Campus, Howard Hughes Medical Institute
Larry S. Davis, University of Maryland, College Park, MD

Abstract

We investigate the problem of combining multiple feature channels for the purpose of efficient image classification. Discriminative kernel-based methods, such as SVMs, have been shown to be quite effective for image classification. To use these methods with several feature channels, one needs to combine the base kernels computed from them. Multiple kernel learning is an effective method for combining the base kernels. However, the cost of computing the kernel similarities of a test image with each of the support vectors for all feature channels is extremely high. We propose an alternative method, in which training data instances are selected, using AdaBoost, for each of the base kernels. A composite decision function, which can be evaluated by computing kernel similarities with respect to only these chosen instances, is learnt. This method significantly reduces the number of kernel computations required during testing. Experimental results on the benchmark UCI datasets, as well as on a challenging painting dataset, demonstrate the effectiveness of our method.

1. Introduction

We address the problem of combining multiple heterogeneous features for image classification. Categorizing images based on stylistic variations such as scene content and painting genre requires a rich feature repertoire. Classification is accomplished by comparing distributions of features, e.g., color, texture, and gradient histograms [11, 23, 18]. For instance, Grauman and Darrell proposed the Pyramid Match Kernel (PMK) to compute Mercer kernels between feature distributions for Support Vector Machine (SVM) based classification, which has been shown to be effective for object categorization [11] and scene analysis [18]. Approaches such as PMK compute a kernel matrix for each feature distribution. We explore techniques for combining the kernels from multiple features for efficient and robust recognition.

A number of techniques have been proposed to learn the optimal combination of a set of kernels for SVM-based classification. Lanckriet et al. proposed an approach for Multiple Kernel Learning (MKL) through semi-definite programming [17]. Sonnenburg et al. generalized MKL to regression and one-class SVMs, and improved its ability to handle large-scale problems. Rakotomamonjy et al. increased the efficiency of MKL and demonstrated its utility on several standard datasets, including the UCI repository [22]; they compute multiple kernels by varying the parameters of polynomial and Gaussian kernels, and apply MKL to compute an optimal combination. Bosch et al. learn the optimal mixture between two kernels, shape and appearance, using a validation set [6]. Varma and Ray propose to minimize the number of kernels involved in the final classification by including the L1 norms of the kernel weights in the SVM optimization function [25]. Bi et al. proposed a boosting-based classifier that combines multiple kernel matrices for regression and classification [5].

The efficiency of MKL-based SVM classifiers during the testing phase depends upon the number of support vectors and the number of features. In general, multi-class problems requiring subtle distinctions entail a large number of support vectors. The computational cost is substantial when the kernels are complex, e.g., when they match the similarity of feature distributions. Is it possible to reduce the number of complex kernel computations while maintaining performance?

We propose an approach for combining multiple kernels through a feature selection process followed by SVM learning. Let K_m(., .) be the kernel values for the m-th feature channel, computed using approaches such as the Pyramid Match Kernel. The columns of K_m are considered to be features embedding the images in a high-dimensional space based on their similarity to the training examples. During the training phase, a subset of the columns is chosen using GentleBoost [1] based on their discriminative power, and a new kernel K is constructed. This is provided as input to an SVM for final classification. Kernels of test images need to be computed for only the chosen set of columns, which is much smaller than the full set of kernel values, resulting in substantial reductions in computational complexity during the testing phase. The resulting approach is simple and relies on the well-understood techniques of boosting and SVMs. Boosting methods have previously been used for feature selection [27], to learn kernels directly from data [8, 14], and to select a subset of kernels for concept detection [15].
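A minimal sketch of this training-and-testing pipeline is given below. It is illustrative only and is not the implementation evaluated in this paper: plain RBF kernels over random toy features stand in for Pyramid Match Kernels, scikit-learn's AdaBoost over decision stumps stands in for the GentleBoost selection, and an RBF SVM over the selected kernel columns stands in for the final classifier; all variable names and parameter settings are assumptions made for the example.

```python
# Illustrative sketch only (not the authors' code): boosting-based selection of
# kernel columns followed by an SVM. RBF kernels stand in for the Pyramid Match
# Kernel and AdaBoost with stumps stands in for GentleBoost.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# Toy data for M = 3 feature channels (e.g., color, texture, gradient histograms).
n_train, n_test, n_channels = 200, 50, 3
X_train = [rng.normal(size=(n_train, 16)) for _ in range(n_channels)]
X_test = [rng.normal(size=(n_test, 16)) for _ in range(n_channels)]
y_train = rng.integers(0, 2, size=n_train)

# Base kernel matrices K_m (n_train x n_train), one per feature channel.
K_train = [rbf_kernel(X_train[m], X_train[m]) for m in range(n_channels)]

# Training: every column of every K_m is a candidate feature (similarity to one
# training instance in one channel); a boosted ensemble of depth-1 stumps picks
# the discriminative columns.
F_train = np.hstack(K_train)                      # n_train x (n_channels * n_train)
booster = AdaBoostClassifier(n_estimators=50, random_state=0)
booster.fit(F_train, y_train)
selected = np.flatnonzero(booster.feature_importances_ > 0)
print(f"kept {selected.size} of {F_train.shape[1]} kernel columns")

# Final SVM trained on the reduced representation built from the chosen columns.
svm = SVC(kernel="rbf", C=1.0).fit(F_train[:, selected], y_train)

# Testing: kernel values are computed only against the chosen training instances,
# rather than against every support vector in every channel.
pairs = [(j // n_train, j % n_train) for j in selected]   # (channel, training index)
F_test = np.column_stack([
    rbf_kernel(X_test[m], X_train[m][i:i + 1]).ravel() for (m, i) in pairs
])
y_pred = svm.predict(F_test)
```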
We compare our Boosted Kernel SVM (BK-SVM) approach with the Efficient Multiple Kernel Learning (EMKL) approach proposed in [22]. EMKL has been shown to increase the efficiency of kernel learning while enabling the use of a large number of kernels within an SVM. It uses all the kernel values for classification, a superset of the features obtained by the greedy boosting-based selection. BK-SVM and EMKL are tested in two scenarios: standard datasets from the UCI repository [2] and a novel Painting dataset. Results indicate that BK-SVM's classification accuracy is comparable to that of EMKL, with the additional advantage of a much smaller number of complex kernel computations.

Paintings are currently being extensively digitized in order to preserve them and make them more widely accessible. Digital collections of paintings play an important role in preserving our cultural heritage. Automatic indexing and annotation of such collections according to style, artist, or period would considerably reduce the manual effort required for these tasks, and supporting query and retrieval over the internet would make many rare paintings more widely accessible. In this paper, we apply our BK-SVM method to the task of annotating paintings according to their genre, which could be applied to indexing as well as query and retrieval from painting collections.

The Painting dataset consists of nearly 500 images downloaded from the Internet, the task being to classify images into 6 genres. This provides a good testbed, as the classification is subtle and requires a large variety of features. Recently, there have been studies on the classification

