
Latent Wishart Processes for Relational Kernel Learning






Wu-Jun Li
Dept. of Comp. Sci. and Eng.
Hong Kong Univ. of Sci. and Tech.
Hong Kong, China
[email protected]

Zhihua Zhang
College of Comp. Sci. and Tech.
Zhejiang University
Zhejiang 310027, China
[email protected]

Dit-Yan Yeung
Dept. of Comp. Sci. and Eng.
Hong Kong Univ. of Sci. and Tech.
Hong Kong, China
[email protected]

Abstract

One main concern with kernel classifiers is their sensitivity to the choice of the kernel function or kernel matrix, which characterizes the similarity between instances. Many real-world data, such as web pages and protein-protein interaction data, are relational in nature in the sense that different instances are correlated (linked) with each other. The relational information available in such data often provides strong hints on the correlation (or similarity) between instances. In this paper, we propose a novel relational kernel learning model based on latent Wishart processes (LWP) to learn the kernel function for relational data. This is done by seamlessly integrating the relational information and the input attributes into the kernel learning process. Through extensive experiments on real-world applications, we demonstrate that our LWP model gives very promising performance in practice.

Appearing in Proceedings of the 12th International Conference on Artificial Intelligence and Statistics (AISTATS) 2009, Clearwater Beach, Florida, USA. Volume 5 of JMLR: W&CP 5. Copyright 2009 by the authors.

1 Introduction

Kernel methods, such as support vector machines (SVM) and Gaussian processes (GP) (Rasmussen and Williams, 2006), have been widely used in many applications with very promising performance. In kernel methods, the similarity between instances is represented by a kernel function defined over the input attributes. In general, choosing an appropriate kernel function and its parameters is difficult in practice, and a poorly chosen kernel function can impair performance significantly. Hence, kernel learning (Lanckriet et al., 2004; Zhang et al., 2006), which tries to find a good kernel matrix for the training data, is very important for kernel-based classifier design.

In many real-world applications, relationships or "links" between (some) instances may also be available in the data in addition to the input attributes. Data of this sort, referred to as relational data (Getoor and Taskar, 2007), can be found in such diverse application areas as web mining, social network analysis, bioinformatics, marketing, and so on. In relational data, the attributes of connected (linked) instances are often correlated, and the class label of one instance may influence that of a linked instance. This means that the relationships (or links) between instances are very informative for instance classification (Getoor and Taskar, 2007), sometimes even much more informative than the input attributes. For example, two hyperlinked web pages are very likely to be related to the same topic, even when their bag-of-words representations look quite different. In biology, interacting proteins are more likely to have the same biological ...
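The excerpt above cuts off before the model itself is defined, but the construction the title refers to can be sketched generically: a kernel matrix K is Wishart-distributed when it is built as K = Z Z^T from latent Gaussian factors Z, and such a K is always positive semi-definite, hence a valid kernel matrix. The sketch below is a minimal NumPy illustration of this generic Wishart construction, not the authors' LWP model; the function name sample_wishart_kernel, the jitter term, and the toy RBF base kernel over random inputs are illustrative assumptions. The full LWP model additionally brings the relational links into the kernel learning process, which this sketch does not attempt to model.

    import numpy as np

    def sample_wishart_kernel(sigma, q, rng=None):
        """Draw a random PSD matrix K = Z Z^T with the columns of Z i.i.d. N(0, sigma).

        Such a K follows a Wishart distribution with q degrees of freedom and
        scale matrix sigma, so it is always a valid (PSD) kernel matrix.
        Illustrative sketch only; not the authors' code.
        """
        rng = np.random.default_rng() if rng is None else rng
        n = sigma.shape[0]
        # Cholesky factor of the scale matrix (small jitter for numerical stability).
        L = np.linalg.cholesky(sigma + 1e-8 * np.eye(n))
        Z = L @ rng.standard_normal((n, q))   # each column ~ N(0, sigma)
        return Z @ Z.T

    # Toy usage: start from an RBF base kernel over random 2-D input attributes.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((5, 2))
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    base_K = np.exp(-0.5 * sq_dists)
    K = sample_wishart_kernel(base_K, q=10, rng=rng)
    print(np.linalg.eigvalsh(K))  # eigenvalues are non-negative: K is PSD

Using a base kernel computed from the input attributes as the Wishart scale matrix is one natural way to let the learned kernel stay anchored to the attributes while remaining a random object that can be adapted to other evidence.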




