{"title": "Data Integration for Classification Problems Employing Gaussian Process Priors", "book": "Advances in Neural Information Processing Systems", "page_first": 465, "page_last": 472, "abstract": null, "full_text": "Data Integration for Classification Problems Employing Gaussian Process Priors\n\nMark Girolami, Department of Computing Science, University of Glasgow, Scotland, UK, girolami@dcs.gla.ac.uk\n\nMingjun Zhong, IRISA, Campus de Beaulieu, F-35042 Rennes Cedex, France, zmingjun@irisa.fr\n\nAbstract\n\nBy adopting Gaussian process priors, a fully Bayesian solution to the problem of integrating possibly heterogeneous data sets within a classification setting is presented. Approximate inference schemes employing Variational and Expectation Propagation based methods are developed and rigorously assessed. We demonstrate our approach to integrating multiple data sets on a large scale protein fold prediction problem, where we infer the optimal combinations of covariance functions and achieve state-of-the-art performance without resorting to any ad hoc parameter tuning and classifier combination.\n\n1 Introduction\n\nVarious emerging quantitative measurement technologies in the life sciences are producing genome, transcriptome and proteome-wide data collections, which have motivated the development of data integration methods within an inferential framework. It has been demonstrated that, for certain prediction tasks within computational biology, synergistic improvements in performance can be obtained via the integration of a number of (possibly heterogeneous) data sources. In [2] six different data representations of proteins were employed for protein fold recognition using Support Vector Machines (SVMs). It was observed that certain data combinations provided increased accuracy over the use of any single dataset. 
Likewise in [9] a comprehensive experimental study observed improvements in SVM based gene function prediction when data from both microarray expression and phylogenetic profiles were manually combined. More recently protein network inference was shown to be improved when various genomic data sources were integrated [16], and in [1] it was shown that superior prediction accuracy of protein-protein interactions was obtainable when a number of diverse data types were combined in an SVM. Whilst all of these papers exploited the kernel method in providing a means of data fusion within SVM based classifiers, it was only in [5] that a means of estimating an optimal linear combination of the kernel functions was first presented, using semi-definite programming. However, the methods developed in [5] are based on binary SVMs, whilst arguably the majority of important classification problems within computational biology are inherently multiclass, and it is unclear how this approach could be extended in a straightforward or practical manner to discrimination over multiple classes. In addition the SVM is non-probabilistic, and whilst post hoc methods for obtaining predictive probabilities are available [10], these are not without problems such as overfitting. On the other hand, Gaussian Process (GP) methods [11], [8] for classification provide a very natural way to both integrate and infer optimal combinations of multiple heterogeneous datasets via composite covariance functions within the Bayesian framework, an idea first proposed in [8]. In this paper it is shown that GPs can indeed be successfully employed on general classification problems with multiple data sources, without recourse to ad hoc binary classification combination schemes, and that the data sources can be optimally combined employing full Bayesian inference. 
A large scale example of protein fold prediction [2] is provided where state-of-the-art predictive performance is achieved in a straightforward manner without resorting to any extensive ad hoc engineering of the solution (see [2], [13]). As an additional important by-product of this work, inference employing Variational Bayesian (VB) and Expectation Propagation (EP) based approximations for GP classification over multiple classes is studied and assessed in detail. It has been unclear whether EP based approximations would provide similar improvements in performance over the Laplace approximation in the multi-class setting, and this work provides experimental evidence that both Variational and EP based approximations perform as well as a Gibbs sampler, consistently outperforming the Laplace approximation. In addition we see that there is no statistically significant practical advantage of EP based approximations over VB approximations in this particular setting.\n\n2 Integrating Data with Gaussian Process Priors\n\nLet us denote each of $J$ independent (possibly heterogeneous) feature representations, $\\mathcal{F}_j(X)$, of an object $X$ by $x_j$, $j = 1, \\ldots, J$. For each object there is a corresponding polychotomous response target variable, $t$, so to model this response we assume an additive generalized regression model. Each distinct data representation of $X$, $\\mathcal{F}_j(X) = x_j$, is nonlinearly transformed such that $f_j(x_j): \\mathcal{F}_j \\rightarrow \\mathbb{R}$ and a linear model is employed in this new space such that the overall nonlinear transformation is $f(X) = \\sum_j \\beta_j f_j(x_j)$.\n\n2.1 Composite Covariance Functions\n\nRather than specifying an explicit functional form for each of the functions $f_j(x_j)$, we assume that each nonlinear function corresponds to a Gaussian process (GP) [11] such that $f_j(x_j) \\sim GP(\\theta_j)$, where $GP(\\theta_j)$ corresponds to a GP with trend and covariance functions $m_j(x_j)$ and $C_j(x_j, x_j'; \\theta_j)$, and $\\theta_j$ denotes a set of hyper-parameters associated with the covariance function. 
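As a concrete sketch of the composite covariance construction described above, each data source contributes its own Gram matrix and these are combined as a weighted sum. This is a minimal, hypothetical illustration: the RBF covariance form, function names and weights below are our assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(X, Z, length_scale):
    # Squared-exponential covariance between the rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def composite_covariance(views, weights, length_scales):
    # Weighted sum over data sources: C = sum_j alpha_j * C_j(theta_j),
    # where each view holds the same N objects in a different feature space.
    return sum(a * rbf_kernel(X, X, ell)
               for X, a, ell in zip(views, weights, length_scales))
```

Because each per-source Gram matrix is positive semi-definite and the weights are positive, the combined matrix remains a valid covariance.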
Due to the assumed independence of the feature representations, the overall nonlinear function will also be a realization of a Gaussian process, $f(X) \\sim GP(\\theta_1, \\ldots, \\theta_J; \\beta_1, \\ldots, \\beta_J)$, where now the overall trend and covariance functions follow as $\\sum_j \\beta_j m_j(x_j)$ and $\\sum_j \\beta_j^2 C_j(x_j, x_j'; \\theta_j)$. For $N$ object samples, $X_1, \\ldots, X_N$, each defined by the $J$ feature representations $x_1^j, \\ldots, x_N^j$, denoted by $\\mathbf{X}_j$, with associated class specific response $\\mathbf{f}_k = [f_k(X_1), \\ldots, f_k(X_N)]^T$, the overall GP prior is a multivariate Normal such that\n\n$\\mathbf{f}_k \\mid \\{\\mathbf{X}_j\\}_{j=1\\ldots J}, \\alpha_{1k}, \\ldots, \\alpha_{Jk}, \\theta_{1k}, \\ldots, \\theta_{Jk} \\sim \\mathcal{N}_{\\mathbf{f}_k}\\left(\\mathbf{0}, \\sum_j \\alpha_{jk} \\mathbf{C}_{jk}(\\theta_{jk})\\right) \\quad (1)$\n\nThe positive random variables $\\beta_{jk}^2$ are denoted by $\\alpha_{jk}$, zero-trend GP functions have been assumed, and each $\\mathbf{C}_{jk}(\\theta_{jk})$ is an $N \\times N$ matrix with elements $C_j(x_m^j, x_n^j; \\theta_{jk})$. A GP functional prior, over all possible responses (classes), is now available where possibly heterogeneous data sources are integrated via the composite covariance function. It is then, in principle, a straightforward matter to perform Bayesian inference with this model, and no further recourse to ad hoc binary classifier combination methods or ancillary optimizations to obtain the data combination weights is required.\n\n2.2 Bayesian Inference\n\nAs we are concerned with classification problems over possibly multiple classes, we employ a multinomial probit likelihood rather than a multinomial logit, as it provides a means of developing a Gibbs sampler, and subsequent computationally efficient approximations, for the GP random variables. The Gibbs sampler is to be preferred over the Metropolis scheme as no tuning of a proposal distribution is required. As in [3] the auxiliary variables $y_{nk} = f_k(X_n) + \\epsilon_{nk}$, $\\epsilon_{nk} \\sim \\mathcal{N}(0, 1)$, are introduced, and the $N \\times 1$ dimensional vector of target class values associated with each $X_n$ is given as $\\mathbf{t}$, where each element $t_n \\in \\{1, \\ldots, K\\}$. The $N \\times K$ matrix of GP random variables $f_k(X_n)$ is denoted by $\\mathbf{F}$. 
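The auxiliary-variable construction above can be checked numerically: drawing $y_{nk} = f_k(X_n) + \epsilon_{nk}$ and recording which component is largest recovers multinomial probit class probabilities. A hedged sketch (the function name and defaults are ours, not the paper's):

```python
import numpy as np

def probit_class_probs(f, n_samples=200_000, seed=0):
    # f: length-K vector of latent GP values for one input.
    # Draw y_k = f_k + eps_k with eps_k ~ N(0, 1); the sampled class is
    # argmax_k y_k, so relative frequencies estimate P(t = k | f).
    rng = np.random.default_rng(seed)
    K = f.shape[0]
    y = f[None, :] + rng.standard_normal((n_samples, K))
    counts = np.bincount(y.argmax(axis=1), minlength=K)
    return counts / n_samples
```

With all latent values equal the estimate is uniform over classes; raising one $f_k$ concentrates mass on that class.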
We represent the $N \\times 1$ dimensional columns of $\\mathbf{F}$ by $\\mathbf{F}_{\\cdot,k}$ and the corresponding $K \\times 1$ dimensional vectors, $\\mathbf{F}_{n,\\cdot}$, which are formed from the indexed rows of $\\mathbf{F}$. The $N \\times K$ matrix of auxiliary variables $y_{nk}$ is represented as $\\mathbf{Y}$, where the $N \\times 1$ dimensional columns are denoted by $\\mathbf{Y}_{\\cdot,k}$ and the corresponding $K \\times 1$ dimensional vectors are obtained from the rows of $\\mathbf{Y}$ as $\\mathbf{Y}_{n,\\cdot}$. The multinomial probit likelihood [3] is adopted, which follows as\n\n$t_n = j$ if $y_{nj} = \\mathrm{argmax}_{1 \\le k \\le K} \\{y_{nk}\\} \\quad (2)$\n\nand this has the effect of dividing $\\mathbb{R}^K$ into $K$ non-overlapping $K$-dimensional cones $\\mathcal{C}_k = \\{\\mathbf{y} : y_k > y_i, k \\ne i\\}$, where $\\mathbb{R}^K = \\cup_k \\mathcal{C}_k$, and so each $P(t_n = i \\mid \\mathbf{Y}_{n,\\cdot})$ can be represented as $\\delta(y_{ni} > y_{nk} \\ \\forall k \\ne i)$. Class specific independent Gamma priors, with parameters $\\varphi_k$, are placed on each $\\alpha_{jk}$ and on the individual components of $\\theta_{jk}$ (denote $\\Psi_k = \\{\\alpha_{jk}, \\theta_{jk}\\}_{j=1\\ldots J}$); a further Gamma prior is placed on each element of $\\varphi_k$ with overall parameters $a$ and $b$, so this defines the full model likelihood and associated priors.\n\n2.3 MCMC Procedure\n\nSamples from the full posterior $P(\\mathbf{Y}, \\mathbf{F}, \\Psi_{1\\ldots K}, \\varphi_{1\\ldots K} \\mid \\mathbf{X}_{1\\ldots N}, \\mathbf{t}, a, b)$ can be obtained from the following Metropolis-within-Blocked-Gibbs Sampling scheme, indexing over all $n = 1, \\ldots, N$ and $k = 1, \\ldots, K$:\n\n$\\mathbf{Y}_{n,\\cdot}^{(i+1)} \\mid \\mathbf{F}_{n,\\cdot}^{(i)}, t_n \\sim \\mathcal{TN}(\\mathbf{F}_{n,\\cdot}^{(i)}, \\mathbf{I}, t_n) \\quad (3)$\n$\\mathbf{F}_{\\cdot,k}^{(i+1)} \\mid \\mathbf{Y}_{\\cdot,k}^{(i+1)}, \\Psi_k^{(i)}, \\mathbf{X}_{1,\\ldots,N} \\sim \\mathcal{N}(\\mathbf{\\Sigma}_k^{(i)} \\mathbf{Y}_{\\cdot,k}^{(i+1)}, \\mathbf{\\Sigma}_k^{(i)}) \\quad (4)$\n$\\Psi_k^{(i+1)} \\mid \\mathbf{F}_{\\cdot,k}^{(i+1)}, \\mathbf{Y}^{(i+1)}, \\Psi_k^{(i)}, \\varphi_k^{(i)}, \\mathbf{X}_{1,\\ldots,N} \\sim P(\\Psi_k^{(i+1)}) \\quad (5)$\n$\\varphi_k^{(i+1)} \\mid \\Psi_k^{(i+1)}, a_k, b_k \\sim P(\\varphi_k^{(i+1)}) \\quad (6)$\n\nwhere $\\mathcal{TN}(\\mathbf{F}_{n,\\cdot}, \\mathbf{I}, t_n)$ denotes a conic truncation of a multivariate Gaussian with location parameters $\\mathbf{F}_{n,\\cdot}$ and dispersion parameters $\\mathbf{I}$, in which the dimension indicated by the class value of $t_n$ will be the largest. An accept-reject strategy can be employed in sampling from the conic truncated Gaussian; however this will very quickly become inefficient for problems with moderately large numbers of classes, and as such a further Gibbs sampling scheme may be required. 
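The accept-reject strategy for the conic truncated Gaussian can be sketched as below. This is a naive illustration of the idea (names and defaults are ours, not the paper's sampler); for $K$ well-separated classes the acceptance rate can drop toward $1/K$ or far lower, which is exactly the inefficiency noted above.

```python
import numpy as np

def sample_conic_truncated(f, t, n_samples=50, seed=0, max_tries=1_000_000):
    # Draw y ~ N(f, I) conditioned on component t being the largest,
    # by naive rejection: keep only draws whose argmax equals t.
    rng = np.random.default_rng(seed)
    K, kept = f.shape[0], []
    for _ in range(max_tries):
        y = f + rng.standard_normal(K)
        if y.argmax() == t:
            kept.append(y)
            if len(kept) == n_samples:
                break
    return np.array(kept)
```

Every returned draw lies in the cone $\mathcal{C}_t$; a Gibbs sweep over the components, as suggested in the text, avoids the rejections entirely.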
Each $\\mathbf{\\Sigma}_k^{(i)} = \\mathbf{C}_k^{(i)}(\\mathbf{I} + \\mathbf{C}_k^{(i)})^{-1}$ and $\\mathbf{C}_k^{(i)} = \\sum_{j=1}^J \\alpha_{jk}^{(i)} \\mathbf{C}_{jk}(\\theta_{jk}^{(i)})$, with the elements of $\\mathbf{C}_{jk}(\\theta_{jk}^{(i)})$ defined as $C_j(x_m^j, x_n^j; \\theta_{jk}^{(i)})$. A Metropolis sub-sampler is required to obtain samples from the conditional distribution over the composite covariance function parameters $P(\\Psi_k)$, and finally $P(\\varphi_k)$ is a simple product of Gamma distributions. The predictive likelihood of a test sample $X^*$ is $P(t^* = k \\mid X^*, \\mathbf{X}_{1\\ldots N}, \\mathbf{t}, a, b)$, which can be obtained by integrating over the posterior and predictive prior such that\n\n$\\int P(t^* = k \\mid \\mathbf{f}^*) p(\\mathbf{f}^* \\mid \\Psi, X^*, \\mathbf{X}_{1\\ldots N}) p(\\Psi \\mid \\mathbf{X}_{1\\ldots N}, \\mathbf{t}, a, b) \\, d\\mathbf{f}^* d\\Psi \\quad (7)$\n\nA Monte-Carlo estimate is obtained by using samples drawn from the full posterior, $\\frac{1}{S}\\sum_{s=1}^S \\int P(t^* = k \\mid \\mathbf{f}^*) p(\\mathbf{f}^* \\mid \\Psi^{(s)}, X^*, \\mathbf{X}_{1\\ldots N}) \\, d\\mathbf{f}^*$, and the integral over the predictive prior requires further conditional samples, $\\mathbf{f}^{*(l|s)}$, to be drawn from each $p(\\mathbf{f}^* \\mid \\Psi^{(s)}, X^*, \\mathbf{X}_{1\\ldots N})$, finally yielding a Monte Carlo approximation of $P(t^* = k \\mid X^*, \\mathbf{X}_{1\\ldots N}, \\mathbf{t}, a, b)$:\n\n$P(t^* = k \\mid X^*, \\mathbf{X}_{1\\ldots N}, \\mathbf{t}, a, b) \\approx \\frac{1}{LS} \\sum_{s=1}^S \\sum_{l=1}^L \\mathbb{E}_{p(u)}\\left\\{ \\prod_{j \\ne k} \\Phi\\left(u + f_k^{*(l|s)} - f_j^{*(l|s)}\\right) \\right\\} \\quad (8)$\n\nMCMC procedures for GP classification have been previously presented in [8], and whilst this provides a practical means to perform Bayesian inference employing GPs, the computational cost incurred and the difficulties associated with monitoring convergence and running multiple chains on reasonably sized problems are well documented and have motivated the development of computationally less costly approximations [15]. A recent study has shown that EP is superior to the Laplace approximation for binary classification [4] and that for multi-class classification VB methods are superior to the Laplace approximation [3]. However the comparison between Variational and EP based approximations for the multi-class setting has not been considered in the literature, and so we seek to address this issue in the following sections. 
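The two-level Monte Carlo predictive estimate, averaging over posterior draws of the latent test values together with a one-dimensional expectation over a standard Normal variable $u$, can be sketched as follows. This is a hedged illustration under our own naming: `f_samples` stands in for draws from the predictive prior under posterior samples of the covariance parameters, which a real run would obtain from the sampler.

```python
import numpy as np
from math import erf, sqrt

def _phi(z):
    # Standard Normal cumulative distribution function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def predictive_probs(f_samples, n_u=2000, seed=0):
    # For each draw f of the K latent test values, estimate
    # E_{p(u)} prod_{j != k} Phi(u + f_k - f_j), then average over draws.
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n_u)            # Monte Carlo nodes for the 1-D integral
    phi = np.vectorize(_phi)
    S, K = f_samples.shape
    probs = np.zeros(K)
    for f in f_samples:
        for k in range(K):
            z = u[:, None] + f[k] - np.delete(f, k)[None, :]
            probs[k] += phi(z).prod(axis=1).mean()
    return probs / S
```

The inner expectation is the standard identity for the probability that the $k$-th component of a unit-variance Gaussian is the largest, so the class estimates sum to one up to Monte Carlo error.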
2.4 Variational Approximation\n\nFrom the conditional probabilities which appear in the Gibbs sampler it can be seen that a mean field approximation gives a simple iterative scheme which provides a computationally efficient alternative to the full sampler (including the Metropolis sub-sampler for the covariance function parameters), details of which are given in [3]. However, given the excellent performance of EP on a number of approximate Bayesian inference problems, it is incumbent on us to consider an EP solution here. We should point out that only the top level inference on the GP variables is considered here, and the composite covariance function parameters will be obtained using another appropriate type-II maximum likelihood optimization scheme if possible.\n\n2.5 Expectation Propagation with Full Posterior Covariance\n\nThe required posterior can also be approximated by EP [7]. In this case the multinomial probit likelihood is approximated by a multivariate Gaussian such that $p(\\mathbf{F} \\mid \\mathbf{t}, \\mathbf{X}_{1\\ldots N}) \\approx Q(\\mathbf{F}) = \\prod_k p(\\mathbf{F}_{\\cdot,k} \\mid \\mathbf{X}_{1\\ldots N}) \\prod_n \\tilde{g}_n(\\mathbf{F}_{n,\\cdot})$, where $\\tilde{g}_n(\\mathbf{F}_{n,\\cdot}) = \\mathcal{N}_{\\mathbf{F}_{n,\\cdot}}(\\boldsymbol{\\mu}_n, \\mathbf{\\Sigma}_n)$, $\\boldsymbol{\\mu}_n$ is a $K \\times 1$ vector and $\\mathbf{\\Sigma}_n$ is a full $K \\times K$ dimensional covariance matrix. Denoting the cavity density as $Q^{\\setminus n}(\\mathbf{F}) = \\prod_k p(\\mathbf{F}_{\\cdot,k} \\mid \\mathbf{X}_{1\\ldots N}) \\prod_{i \\ne n} \\tilde{g}_i(\\mathbf{F}_{i,\\cdot})$, EP proceeds by iteratively re-estimating the moments $\\boldsymbol{\\mu}_n, \\mathbf{\\Sigma}_n$ by moment matching [7], giving the following:\n\n$\\boldsymbol{\\mu}_n^{new} = \\mathbb{E}_{\\hat{p}_n}\\{\\mathbf{F}_{n,\\cdot}\\}$ and $\\mathbf{\\Sigma}_n^{new} = \\mathbb{E}_{\\hat{p}_n}\\{\\mathbf{F}_{n,\\cdot}\\mathbf{F}_{n,\\cdot}^T\\} - \\mathbb{E}_{\\hat{p}_n}\\{\\mathbf{F}_{n,\\cdot}\\}\\mathbb{E}_{\\hat{p}_n}\\{\\mathbf{F}_{n,\\cdot}\\}^T \\quad (9)$\n\nwhere $\\hat{p}_n = Z_n^{-1} Q^{\\setminus n}(\\mathbf{F}_{n,\\cdot}) p(t_n \\mid \\mathbf{F}_{n,\\cdot})$, and $Z_n$ is the normalizing (partition) function which is required to obtain the above mean and covariance estimates. To proceed, an analytic form for the partition function $Z_n$ is required. Indeed, for binary classification employing a binomial probit likelihood an elegant EP solution follows due to the analytic form of the partition function [4]. 
However, for the case of multiple classes with a multinomial probit likelihood, the partition function no longer has a closed analytic form and further approximations are required to make any progress. There are two strategies which we consider: the first retains the full posterior coupling in the covariance matrices $\\mathbf{\\Sigma}_n$ by employing Laplace Propagation (LP) [14], and the second assumes no posterior coupling in $\\mathbf{\\Sigma}_n$ by setting this as a diagonal covariance matrix. The second form of approximation has been adopted in [12] when developing a multi-class version of the Informative Vector Machine (IVM) [6]. In the first case, where we employ LP, an additional significant $O(K^3 N^3)$ computational scaling will be incurred; however it can be argued that the retention of the posterior coupling is important. For the second case we clearly lose this explicit posterior coupling but, of course, do not incur the expensive computational overhead required of LP. We observed in unreported experiments that little of statistical significance is lost, in terms of predictive performance, when assuming a factorable form for each $\\hat{p}_n$. LP proceeds by propagating the approximate moments such that\n\n$\\boldsymbol{\\mu}_n^{new} \\approx \\mathrm{argmax}_{\\mathbf{F}_{n,\\cdot}} \\log \\hat{p}_n$ and $\\mathbf{\\Sigma}_n^{new} \\approx -\\left(\\frac{\\partial^2 \\log \\hat{p}_n}{\\partial \\mathbf{F}_{n,\\cdot} \\partial \\mathbf{F}_{n,\\cdot}^T}\\right)^{-1} \\quad (10)$\n\nThe required derivatives follow straightforwardly and details are included in the accompanying material. The approximate predictive distribution for a new data point $X^*$ requires a Monte Carlo estimate employing samples drawn from a $K$-dimensional multivariate Gaussian, for which details are given in the supplementary material.\n\n2.6 Expectation Propagation with Diagonal Posterior Covariance\n\nBy assuming a factorable approximate posterior, as in the variational approximation [3], a distinct simplification of the problem setting follows, where now we assume that $\\tilde{g}_n(\\mathbf{F}_{n,\\cdot}) = \\prod_k \\mathcal{N}_{F_{n,k}}(\\mu_{n,k}, \\sigma_{n,k})$, i.e. a factorable distribution. This assumption has already been made in [12] in developing an EP based multi-class IVM. 
Now a significant computational simplification follows, where the required moment matching amounts to $\\mu_{nk}^{new} = \\mathbb{E}_{\\hat{p}_{nk}}\\{F_{n,k}\\}$ and $\\sigma_{nk}^{new} = \\mathbb{E}_{\\hat{p}_{nk}}\\{F_{n,k}^2\\} - \\mathbb{E}_{\\hat{p}_{nk}}\\{F_{n,k}\\}^2$, where the density $\\hat{p}_{nk}$ has a partition function which now has the analytic form\n\n$Z_n = \\mathbb{E}_{p(u)p(v)}\\left\\{ \\prod_{j=1, j \\ne i}^K \\Phi\\left( \\frac{u + v\\sqrt{\\sigma_{ni}^{\\setminus n}} + \\mu_{ni}^{\\setminus n} - \\mu_{nj}^{\\setminus n}}{\\sqrt{1 + \\sigma_{nj}^{\\setminus n}}} \\right) \\right\\} \\quad (11)$\n\nwhere $u$ and $v$ are both standard Normal random variables ($v\\sqrt{\\sigma_{ni}^{\\setminus n}} = F_{n,i} - \\mu_{ni}^{\\setminus n}$), with $\\mu_{ni}^{\\setminus n}$ and $\\sigma_{ni}^{\\setminus n}$ having the usual cavity meanings (details in the accompanying material). Derivatives of this partition function follow in a straightforward way, now allowing the required EP updates to proceed (details in the supplementary material). The approximate predictive distribution for a new data point $X^*$ in this case takes a similar form to that for the Variational approximation [3],\n\n$P(t^* = k \\mid X^*, \\mathbf{X}_{1\\ldots N}, \\mathbf{t}) = \\mathbb{E}_{p(u)p(v)}\\left\\{ \\prod_{j=1, j \\ne k}^K \\Phi\\left( \\frac{u + v\\sqrt{\\tilde{\\sigma}_k^*} + \\tilde{\\mu}_k^* - \\tilde{\\mu}_j^*}{\\sqrt{1 + \\tilde{\\sigma}_j^*}} \\right) \\right\\} \\quad (12)$\n\nwhere the predictive mean and variance follow in standard form,\n\n$\\tilde{\\mu}_j^* = (\\mathbf{C}_j^*)^T (\\mathbf{C}_j + \\mathbf{\\Sigma}_j)^{-1} \\tilde{\\boldsymbol{\\mu}}_j$ and $\\tilde{\\sigma}_j^* = c_j^* - (\\mathbf{C}_j^*)^T (\\mathbf{C}_j + \\mathbf{\\Sigma}_j)^{-1} \\mathbf{C}_j^* \\quad (13)$\n\nIt should be noted here that the expectation over $p(u)$ and $p(v)$ could be computed by using either Gaussian quadrature or a simple Monte Carlo approximation, which is straightforward as sampling from a univariate standardized Normal only is required. The VB approximation [3] however only requires a 1-D Monte Carlo integral rather than the 2-D one required here. (Conditioning on the covariance function parameters and associated hyper-parameters is implicit throughout. Supplementary material: http://www.dcs.gla.ac.uk/people/personal/girolami/pubs_2006/NIPS2006/index.htm)\n\n3 Experiments\n\nBefore considering the main example of data integration within a large scale protein fold prediction problem, we attempt to assess a number of approximate inference schemes for GP multi-class classification. 
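The standard-form predictive moments of Eq. (13) are the usual GP conditional with the site variances added to the prior covariance. A minimal numpy sketch, in our own notation and assuming a single scalar test point:

```python
import numpy as np

def gp_predictive(C, c_star_vec, c_star, mu_tilde, Sigma_tilde):
    # mu*    = C*^T (C + Sigma)^{-1} mu_tilde
    # sigma* = c*  - C*^T (C + Sigma)^{-1} C*
    # C: N x N training covariance, c_star_vec: length-N covariance with the
    # test point, c_star: prior test variance, Sigma_tilde: site covariance.
    A = np.linalg.solve(C + Sigma_tilde,
                        np.column_stack([mu_tilde, c_star_vec]))
    mu_star = c_star_vec @ A[:, 0]
    var_star = c_star - c_star_vec @ A[:, 1]
    return mu_star, var_star
```

Solving one linear system for both terms avoids forming the explicit inverse, which is the standard numerically stable choice.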
We provide a short comparative study of the Laplace, VB, and both possible EP approximations, employing the Gibbs sampler as the comparative gold standard. For these experiments six multi-class data sets are employed, i.e. Iris (N = 150, K = 3), Wine (N = 178, K = 3), Soybean (N = 47, K = 4), Teaching (N = 151, K = 3), Waveform (N = 300, K = 3) and ABE (N = 300, K = 3, a subset of the Isolet dataset using the letters 'A', 'B' and 'E'). A single radial basis covariance function with one length scale parameter is used in this comparative study. Ten-fold cross validation (CV) was used to estimate the predictive log-likelihood and the percentage predictive error. Within each of the ten folds a further 10-fold CV routine was employed to select the length-scale of the covariance function. For the Gibbs sampler, after a burn-in of 2000 samples, the following 3000 samples were used for inference, with the predictive error and likelihood computed from these post-burn-in samples. For each data set and each method the percentage predictive error and the predictive log-likelihood were estimated in this manner. The summary results, given as the mean and standard deviation over the ten folds, are shown in Table 1. Results which cannot be distinguished from each other, under a Wilcoxon rank sum test at a 5% significance level, are highlighted in bold. From these results we can see that, across most data sets used, the predictive log-likelihood obtained from the Laplace approximation is lower than those of the three other methods. In our observations, the predictive performance of VB and the IEP approximation is consistently indistinguishable from the performance achieved by the Gibbs sampler. From the experiments conducted there is no evidence to suggest any difference in predictive performance between IEP and VB methods in the case of multi-way classification. 
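The Wilcoxon rank sum comparison used to decide which results are indistinguishable can be reproduced with SciPy. A small sketch, assuming SciPy is available; the helper name and the synthetic per-fold numbers are ours:

```python
from scipy.stats import ranksums

def indistinguishable(errors_a, errors_b, alpha=0.05):
    # Two-sided Wilcoxon rank sum test on per-fold results; True means the
    # two methods cannot be separated at the given significance level.
    return ranksums(errors_a, errors_b).pvalue >= alpha
```

Applied to per-fold error lists of two methods, this returns the bold/non-bold decision used in the table.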
As there is no benefit in choosing an EP based approximation over the Variational one, we now select the Variational approximation, in that inference over the covariance parameters follows simply by obtaining posterior mean estimates using an importance sampler. The six multi-class data sets are available from the UCI repository, http://www.ics.uci.edu/~mlearn/MPRepository.html. As a brief illustration of how the Variational approximation compares to the full Metropolis-within-Blocked-Gibbs Sampler, consider a toy dataset consisting of three classes formed by a Gaussian surrounded by two annular rings, having ten features only two of which are predictive of the class labels [3]. We can compare the compute time taken to obtain reasonable predictions from the full MCMC and the approximate Variational scheme [3]. Figure 1 (a) shows the samples of the covariance function parameters drawn from the Metropolis sub-sampler (multiple Metropolis sub-chains had to be run in order to obtain reasonable sampling of the $\\mathbb{R}_+^{10}$ parameter space) and, overlaid in black, the corresponding approximate posterior mean estimates obtained from the variational scheme [3].\n\nTable 1: Percentage predictive error (PE) and predictive log-likelihood (PL) for six data sets from UCI computed using the Laplace approximation, Variational Bayes (VB), independent EP (IEP), and MCMC using the Gibbs sampler. Best results which are statistically indistinguishable from each other are highlighted in bold.\n\nMethod | ABE PE | ABE PL | Iris PE | Iris PL\nLaplace | 4.000+/-3.063 | -0.290+/-0.123 | 3.333+/-3.513 | -0.132+/-0.052\nVB | 2.000+/-2.330 | -0.164+/-0.026 | 3.333+/-3.513 | -0.087+/-0.056\nGibbs | 3.333+/-3.143 | -0.158+/-0.037 | 3.333+/-3.513 | -0.079+/-0.056\nIEP | 5.333+/-5.019 | -0.139+/-0.050 | 3.333+/-3.513 | -0.063+/-0.059\n\nMethod | Wine PE | Wine PL | Soybean PE | Soybean PL\nLaplace | 3.889+/-5.885 | -0.258+/-0.045 | 0.000+/-0.000 | -0.359+/-0.040\nVB | 2.222+/-3.884 | -0.182+/-0.057 | 0.000+/-0.000 | -0.158+/-0.034\nGibbs | 4.514+/-5.757 | -0.177+/-0.054 | 0.000+/-0.000 | -0.158+/-0.039\nIEP | 3.889+/-5.885 | -0.133+/-0.047 | 0.000+/-0.000 | -0.172+/-0.037\n\nMethod | Teach PE | Teach PL | Wave PE | Wave PL\nLaplace | 39.24+/-15.74 | -0.836+/-0.072 | 17.50+/-9.17 | -0.430+/-0.085\nVB | 41.12+/-9.92 | -0.711+/-0.125 | 18.33+/-9.46 | -0.410+/-0.100\nGibbs | 42.41+/-6.22 | -0.730+/-0.113 | 15.83+/-8.29 | -0.380+/-0.116\nIEP | 42.54+/-11.32 | -0.800+/-0.072 | 17.50+/-10.72 | -0.383+/-0.107\n\nFigure 1: (a) Progression of MCMC and Variational methods in estimating covariance function parameters; the vertical axis denotes each $\\theta_d$, the horizontal axis is time (all log scale). (b) Percentage error under the MCMC (gray) and Variational (black) schemes. (c) Predictive likelihood under both schemes.\n\nIt is clear that after 100 calls to the sub-sampler the samples obtained reflect the relevance of the features; however the deterministic steps taken in the variational routine achieve this in just over ten computational steps of equal cost to the Metropolis sub-sampler. Figure 1 (b) shows the predictive error incurred by the classifier; under the MCMC scheme 30,000 CPU seconds are required to achieve the level of predictive accuracy obtained under the variational approximation in 200 seconds (a factor of 150 times faster). 
This is due, in part, to the additional level of sampling from the predictive prior which is required when using MCMC to obtain predictive posteriors. Because of these results we now adopt the variational approximation for the following large scale experiment.\n\n4 Protein Fold Prediction with GP Based Data Fusion\n\nTo illustrate the proposed GP based method of data integration, a substantial protein fold classification problem originally studied in [2] and more recently in [13] is considered. The task is to devise a predictor of 27 distinct SCOP classes from a set (N = 314) of low homology protein sequences. Six different data representations (each comprised of around 20 features) are available, characterizing (1) Amino Acid composition (AA); (2) Hydrophobicity profile (HP); (3) Polarity (PT); (4) Polarizability (PY); (5) Secondary Structure (SS); (6) Van der Waals volume profile of the protein (VP). In [2] a number of classifier and data combination strategies were employed in devising a multi-way classifier from a series of binary SVMs. In the original work of [2] the best predictive accuracy obtained on an independent set (N = 385) of low sequence similarity proteins was 53%.\n\nFigure 2: (a) The prediction accuracy for each individual data set and the corresponding combinations, (MA) employing inferred weights and (MF) employing a fixed weighting scheme. (b) The predictive likelihood achieved for each individual data set and with the integrated data. (c) The posterior mean values of the covariance function weights $\\alpha_1, \\ldots, \\alpha_6$.\n\n
It was noted, after extensive careful manual experimentation by the authors, that a combination of Gaussian kernels each composed of the (AA), (SS) and (HP) datasets significantly improved predictive accuracy. More recently in [13] a heavily tuned ad hoc ensemble combination of classifiers raised this performance to 62%, the best reported on this problem. We employ the proposed GP based method (Variational approximation) in devising a classifier for this task, where now we employ a composite covariance function (shared across all 27 classes), a linear combination of RBF functions for each data set. Figure 2 shows the predictive performance of the GP classifier in terms of percentage prediction accuracy (a) and predictive likelihood on the independent test set (b). We note a significant synergistic increase in performance when all data sets are combined and weighted (MA), where the overall performance accuracy achieved is 62%. Although the 0-1 loss test error is the same for an equal weighting of the data sets (MF) and that obtained using the proposed inference procedure (MA), for (MA) there is an increase in predictive likelihood, i.e. more confident correct predictions are being made. It is interesting to note that the weighting obtained (posterior mean for $\\alpha$), Figure 2 (c), weights (AA) and (SS) with equal importance, whilst other data sets play less of a role in performance improvement.\n\n5 Conclusions\n\nIn this paper we have considered the problem of integrating data sets within a classification setting, a common scenario within many bioinformatics problems. We have argued that the GP prior provides an elegant solution to this problem within the Bayesian inference framework. To obtain a computationally practical solution, three approximate approaches to multi-class classification with GP priors, i.e. Laplace, Variational and EP based approximations, have been considered. 
It is found that EP and Variational approximations approach the performance of a Gibbs sampler, and indeed their predictive performances are indistinguishable at the 5% level of significance. The full EP (FEP) approximation employing LP has an excessive computational cost and there is little to recommend it in terms of predictive performance over the independence assumption (IEP). Likewise there is little to distinguish between the IEP and VB approximations in terms of predictive performance in the multi-class classification setting, though further experiments on a larger number of data sets are desirable. We employ VB to infer the optimal parameterized combinations of covariance functions for the protein fold prediction problem over 27 possible folds and achieve state-of-the-art performance without recourse to any ad hoc tinkering and tuning, and the inferred combination weights are intuitive in terms of the information content of the highest weighted data sets. This is a highly practical solution to the problem of heterogeneous data fusion in the classification setting which employs Bayesian inferential semantics throughout in a consistent manner. We note that on the fold prediction problem the best performance achieved is equaled without resorting to complex and ad hoc data and classifier weighting and combination schemes.\n\n5.1 Acknowledgements\n\nMG is supported by the Engineering and Physical Sciences Research Council (UK) grant number EP/C010620/1; MZ is supported by the National Natural Science Foundation of China grant number 60501021.\n\nReferences\n\n[1] A. Ben-Hur and W.S. Noble. Kernel methods for predicting protein-protein interactions. Bioinformatics, 21(Suppl. 1):38-46, 2005.\n[2] Chris Ding and Inna Dubchak. Multi-class protein fold recognition using support vector machines and neural networks. Bioinformatics, 17:349-358, 2001.\n[3] Mark Girolami and Simon Rogers. Variational Bayesian multinomial probit regression with Gaussian process priors. 
Neural Computation, 18(8):1790-1817, 2006.\n[4] M. Kuss and C.E. Rasmussen. Assessing approximate inference for binary Gaussian process classification. Journal of Machine Learning Research, 6:1679-1704, 2005.\n[5] G. R. G. Lanckriet, T. De Bie, N. Cristianini, M. I. Jordan, and W. S. Noble. A statistical framework for genomic data fusion. Bioinformatics, 20:2626-2635, 2004.\n[6] Neil Lawrence, Matthias Seeger, and Ralf Herbrich. Fast sparse Gaussian process methods: The informative vector machine. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15. MIT Press.\n[7] Thomas Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT, 2001.\n[8] R. Neal. Regression and classification using Gaussian process priors. In A.P. Dawid, M. Bernardo, J.O. Berger, and A.F.M. Smith, editors, Bayesian Statistics 6, pages 475-501. Oxford University Press, 1998.\n[9] Paul Pavlidis, Jason Weston, Jinsong Cai, and William Stafford Noble. Learning gene functional classifications from multiple data types. Journal of Computational Biology, 9(2):401-411, 2002.\n[10] J.C. Platt. Probabilities for support vector machines. In A. Smola, P. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 61-74. MIT Press, 1999.\n[11] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.\n[12] M.W. Seeger, N.D. Lawrence, and R. Herbrich. Efficient nonparametric Bayesian modelling with sparse Gaussian process approximations. Technical report, http://www.kyb.tuebingen.mpg.de/bs/people/seeger/, 2006.\n[13] Hong-Bin Shen and Kuo-Chen Chou. Ensemble classifier for protein fold pattern recognition. Bioinformatics, Advance Access (doi:10.1093), 2006.\n[14] Alexander Smola, Vishy Vishwanathan, and Eleazar Eskin. Laplace propagation. In Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, editors, Advances in Neural Information Processing Systems 16. 
MIT Press, Cambridge, MA, 2004.\n[15] C.K.I. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342-1352, 1998.\n[16] Y. Yamanishi, J. P. Vert, and M. Kanehisa. Protein network inference from multiple genomic data: a supervised approach. Bioinformatics, 20(Suppl. 1):363-370, 2004.\n", "award": [], "sourceid": 3065, "authors": [{"given_name": "Mark", "family_name": "Girolami", "institution": null}, {"given_name": "Mingjun", "family_name": "Zhong", "institution": null}]}