{"title": "Bayesian Co-Training", "book": "Advances in Neural Information Processing Systems", "page_first": 1665, "page_last": 1672, "abstract": null, "full_text": "Bayesian Co-Training\n\nShipeng Yu, Balaji Krishnapuram, Romer Rosales, Harald Steck, R. Bharat Rao CAD & Knowledge Solutions, Siemens Medical Solutions USA, Inc. firstname.lastname@siemens.com\n\nAbstract\nWe propose a Bayesian undirected graphical model for co-training, or more generally for semi-supervised multi-view learning. This makes explicit the previously unstated assumptions of a large class of co-training type algorithms, and also clarifies the circumstances under which these assumptions fail. Building upon new insights from this model, we propose an improved method for co-training, which is a novel co-training kernel for Gaussian process classifiers. The resulting approach is convex and avoids local-maxima problems, unlike some previous multi-view learning methods. Furthermore, it can automatically estimate how much each view should be trusted, and thus accommodate noisy or unreliable views. Experiments on toy data and real world data sets illustrate the benefits of this approach.\n\n1\n\nIntroduction\n\nData samples may sometimes be characterized in multiple ways, e.g., web-pages can be described both in terms of the textual content in each page and the hyperlink structure between them. [1] have shown that the error rate on unseen test samples can be upper bounded by the disagreement between the classification-decisions obtained from independent characterizations (i.e., views) of the data. Thus, in the web-page example, misclassification rate can be indirectly minimized by reducing the rate of disagreement between hyperlink-based and content-based classifiers, provided these characterizations are independent conditional on the class. In many application domains class labels can be expensive to obtain and hence scarce, whereas unlabeled data are often cheap and abundantly available. 
Moreover, the disagreement between the class labels suggested by different views can be computed even on unlabeled data. Therefore, a natural strategy for using unlabeled data to minimize the misclassification rate is to enforce consistency between the classification decisions based on several independent characterizations of the unlabeled samples. For brevity, unless otherwise specified, we use the term co-training to describe the entire genre of methods that rely on this intuition, although strictly it should only refer to the original algorithm of [2]. Some co-training algorithms jointly optimize an objective function which includes misclassification penalties (loss terms) for classifiers from each view and a regularization term that penalizes lack of agreement between the classification decisions of the different views. In recent times, this co-regularization approach has become the dominant strategy for exploiting the intuition behind multi-view consensus learning, largely displacing the earlier alternating-optimization strategies. We survey in Section 2 the major approaches to co-training, the theoretical guarantees that have spurred interest in the topic, and previously published concerns about the applicability to certain domains. We analyze the precise assumptions that have been made and the optimization criteria, to better understand why these approaches succeed (or fail) in certain situations. Then in Section 3 we propose a principled undirected graphical model for co-training, which we call Bayesian co-training, and show that co-regularization algorithms provide one way of performing maximum-likelihood (ML) learning under this probabilistic model. By explicitly highlighting previously unstated assumptions, Bayesian co-training provides a deeper understanding of the co-regularization framework, and we are also able to discuss certain fundamental limitations of multi-view consensus learning. 
In Section 4, we show that even simple, visually illustrated 2-D problems are sometimes not amenable to a co-training/co-regularization solution (no matter which specific model/algorithm is used, including ours). Empirical studies on two real world data sets are also presented. Summarizing our algorithmic contributions, co-regularization is exactly equivalent to the use of a novel co-training kernel for support vector machines (SVMs) and Gaussian processes (GPs), thus allowing one to leverage the large body of available literature for these algorithms. The kernel is intrinsically non-stationary, i.e., the level of similarity between any pair of samples depends on all the available samples, whether labeled or unlabeled, thus promoting semi-supervised learning. This approach is also significantly simpler and more efficient than the alternating optimization used in previous co-regularization implementations. Furthermore, we can automatically estimate how much each view should be trusted, and thus accommodate noisy or unreliable views.\n\n2 Related Work\n\nCo-Training and Theoretical Guarantees: The iterative, alternating co-training method originally introduced in [2] works in a bootstrap mode: it repeatedly adds pseudo-labeled unlabeled samples into the pool of labeled samples, retrains the classifiers for each view, and pseudo-labels additional unlabeled samples on which at least one view is confident about its decision. The paper provided PAC-style guarantees that if (a) there exist weakly useful classifiers on each view of the data, and (b) these characterizations of the sample are conditionally independent given the class label, then the co-training algorithm can utilize the unlabeled data to learn arbitrarily strong classifiers. 
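The bootstrap loop just described can be sketched as follows. This is a minimal illustration only, not the exact algorithm of [2]: the nearest-centroid base learner, the distance-ratio confidence score, and all numeric settings are placeholder assumptions chosen for brevity.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # Toy per-view base learner: one centroid per class
    # (a stand-in for naive Bayes or any other weak classifier).
    return {c: X[y == c].mean(0) for c in np.unique(y)}

def nearest_centroid_predict(model, X):
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    conf = d.min(0) / (d.sum(0) + 1e-12)   # smaller ratio = more confident
    return np.array(classes)[d.argmin(0)], conf

def co_train(X1, X2, y, labeled, unlabeled, rounds=10, per_round=2):
    # Bootstrap co-training sketch: each round, each view pseudo-labels the
    # unlabeled samples it is most confident about; both views then retrain
    # on the enlarged labeled pool.
    labeled, unlabeled = list(labeled), list(unlabeled)
    y = y.copy()
    for _ in range(rounds):
        if not unlabeled:
            break
        for X in (X1, X2):
            m = nearest_centroid_fit(X[labeled], y[labeled])
            pred, conf = nearest_centroid_predict(m, X[unlabeled])
            take = np.argsort(conf)[:per_round]        # most confident first
            for t in sorted(take, reverse=True):       # pop high indices first
                i = unlabeled.pop(t)
                y[i] = pred[t]                         # pseudo-label
                labeled.append(i)
            if not unlabeled:
                break
    return y, labeled
```

On well-separated data with one labeled point per class in each view, the loop recovers the remaining labels; with a noisy or irrelevant view it can just as easily propagate wrong pseudo-labels, which is exactly the failure mode discussed later in the paper.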
[1] proved PAC-style guarantees that if (a) sample sizes are large, (b) the different views are conditionally independent given the class label, and (c) the classification decisions based on multiple views largely agree with each other, then with high probability the misclassification rate is upper bounded by the rate of disagreement between the classifiers based on each view. [3] sought to relax these strong theoretical requirements. They showed that co-training is useful if (a) there exist low error rate classifiers on each view, (b) these classifiers never make mistakes when they are confident about their decisions, and (c) the two views are not too highly correlated, in the sense that there are at least some cases where one view makes a confident classification decision while the classifier on the other view does not. While each of these theoretical guarantees is intriguing, the underlying assumptions are rather unrealistic in many application domains: classifiers that never err when confident, and class-conditional independence of the views, are rarely found in practice. Nevertheless, empirical success has been reported.\n\nCo-EM and Related Algorithms: The co-EM algorithm of [4] extended the original bootstrap approach of the co-training algorithm to operate simultaneously on all unlabeled samples in an iterative batch mode. [5] used this idea with SVMs as base classifiers, and it was subsequently applied to unsupervised learning by [6]. However, co-EM also suffers from local-maxima problems, and while each iteration's optimization step is clear, co-EM is not really an expectation-maximization algorithm (i.e., it lacks a clearly defined overall log-likelihood that monotonically improves across iterations). 
Co-Regularization: [7] proposed an approach for two-view consensus learning based on simultaneously learning multiple classifiers by maximizing an objective function which penalizes misclassifications by any individual classifier, and includes a regularization term that penalizes a high level of disagreement between the different views. This co-regularization framework improves upon the co-training and co-EM algorithms by maximizing a convex objective function; however, the algorithm still depends on an alternating optimization that optimizes one view at a time. This approach was later adapted to two-view spectral clustering [8].\n\nRelationship to Current Work: The present work provides a probabilistic graphical model for multi-view consensus learning; alternating-optimization-based co-regularization is shown to be just one algorithm that accomplishes ML learning in this model. A more efficient, alternative strategy is proposed here for fully Bayesian classification under the same model. In practice, this strategy offers several advantages: it is easily extended to multiple views, it accommodates noisy views which are less predictive of class labels, and it reduces run-time and memory requirements.\n\nFigure 1: Factor graph for (a) the one-view and (b) the two-view model. [Diagram: in (a), each latent value f(x_i) connects to its output y_i; in (b), the per-view latent values f_1(x_i^(1)) and f_2(x_i^(2)) connect through the consensus value f_c(x_i), which connects to y_i.]\n\n3 Bayesian Co-Training\n\n3.1 Single-View Learning with Gaussian Processes\n\nA Gaussian process (GP) defines a nonparametric prior over functions in Bayesian statistics [9]. A random real-valued function f: R^d → R follows a GP, denoted GP(h, κ), if for every finite set of data points x_1, ..., x_n ∈ R^d, f = {f(x_i)}_{i=1}^n follows a multivariate Gaussian N(h, K), with mean h = {h(x_i)}_{i=1}^n and covariance K = {κ(x_i, x_j)}_{i,j=1}^n. 
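This finite-dimensional characterization is directly computable; a minimal numpy sketch that draws sample paths from a zero-mean GP prior (the RBF kernel and its width are illustrative assumptions, not prescribed by the text):

```python
import numpy as np

def gp_prior_sample(X, kernel, n_samples=3, jitter=1e-8):
    # A GP evaluated at finitely many points x_1..x_n is just a multivariate
    # Gaussian N(h, K); here h = 0 and K_ij = kernel(x_i, x_j).
    n = len(X)
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    K += jitter * np.eye(n)                   # numerical stabilization
    L = np.linalg.cholesky(K)
    return L @ np.random.randn(n, n_samples)  # columns are draws of f

# Illustrative Gaussian (RBF) kernel with free parameter eta > 0.
rbf = lambda a, b, eta=1.0: np.exp(-eta * np.sum((a - b) ** 2))
```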
Normally we fix the mean function h ≡ 0 and take a parametric (and usually stationary) form for the kernel function κ (e.g., the Gaussian kernel κ(x_k, x_l) = exp(−η ‖x_k − x_l‖²) with η > 0 a free parameter). In a single-view, supervised learning scenario, an output or target y_i is given for each observation x_i (e.g., y_i ∈ R for regression and y_i ∈ {−1, +1} for classification). In the GP model we assume there is a latent function f underlying the output: p(y_i | x_i) = ∫ p(y_i | f, x_i) p(f) df, with GP prior p(f) = GP(h, κ). Given the latent function f, p(y_i | f, x_i) = p(y_i | f(x_i)) takes a Gaussian noise model N(f(x_i), σ²) for regression, and a sigmoid function λ(y_i f(x_i)) for classification. The dependency structure of the single-view GP model can be shown as an undirected graph as in Fig. 1(a). The maximal cliques of the graphical model are the fully connected nodes (f(x_1), ..., f(x_n)) and the pairs (y_i, f(x_i)), i = 1, ..., n. Therefore, the joint probability of the random variables f = {f(x_i)} and y = {y_i} is defined as p(f, y) = (1/Z) ψ(f) Π_i ψ(y_i, f(x_i)), with potential functions¹\n\nψ(f) = exp(−(1/2) fᵀ K⁻¹ f),   ψ(y_i, f(x_i)) = exp(−‖y_i − f(x_i)‖² / (2σ²)) for regression, or λ(y_i f(x_i)) for classification,   (1)\n\nand normalization factor Z (hereafter Z is defined such that the joint probability sums to 1).\n\n3.2 Undirected Graphical Model for Multi-View Learning\n\nIn multi-view learning, suppose we have m different views of the same set of n data samples. Let x_i^(j) ∈ R^{d_j} be the features of the i-th sample obtained from the j-th view, where d_j is the dimensionality of the input space for view j. Then the vector x_i ≜ (x_i^(1), ..., x_i^(m)) is the complete representation of the i-th data sample, and x^(j) ≜ (x_1^(j), ..., x_n^(j)) represents all sample observations for the j-th view. As in single-view learning, let y = (y_1, ..., y_n), where y_i is the single output assigned to the i-th data point. 
One can of course concatenate the multiple views into a single view and apply a single-view GP model, but the basic idea of multi-view learning is to introduce one function per view which only uses the features from that view, and then to jointly optimize these functions such that they come to a consensus. Looking at this problem from a GP perspective, let f_j denote the latent function for the j-th view (i.e., using features only from view j), and let f_j ~ GP(0, κ_j) be its GP prior in view j. Since each data sample i has only one single label y_i even though it has multiple features from the multiple views (i.e., latent function value f_j(x_i^(j)) for view j), the label y_i should depend on all of these latent function values for data sample i. The challenge here is to make this dependency explicit in a graphical model.\n\nWe tackle this problem by introducing a new latent function, the consensus function f_c, to ensure conditional independence between the output y and the m latent functions {f_j} for the m views (see Fig. 1(b) for the undirected graphical model). At the functional level, the output y depends only on f_c, and the latent functions {f_j} depend on each other only via the consensus function f_c. That is, we have the joint probability\n\np(y, f_c, f_1, ..., f_m) = (1/Z) ψ(y, f_c) Π_{j=1}^m ψ(f_j, f_c),\n\nwith some potential functions ψ. In the ground network with n data samples, let f_c = {f_c(x_i)}_{i=1}^n and f_j = {f_j(x_i^(j))}_{i=1}^n. The graphical model leads to the following factorization:\n\np(y, f_c, f_1, ..., f_m) = (1/Z) Π_i ψ(y_i, f_c(x_i)) Π_{j=1}^m ψ(f_j) ψ(f_j, f_c).   (2)\n\nHere the within-view potential ψ(f_j) specifies the dependency structure within each view j, and the consensus potential ψ(f_j, f_c) describes how the latent function in each view is related to the consensus function f_c.\n\n¹ The definition of ψ in this paper is overloaded to simplify notation, but its meaning should be clear from the function arguments. 
With a GP prior for each of the views, we can define the following potentials:\n\nψ(f_j) = exp(−(1/2) f_jᵀ K_j⁻¹ f_j),   ψ(f_j, f_c) = exp(−‖f_j − f_c‖² / (2σ_j²)),   (3)\n\nwhere K_j is the covariance matrix of view j, i.e., K_j(x_k, x_l) = κ_j(x_k^(j), x_l^(j)), and σ_j > 0 is a scalar which quantifies how far the latent function f_j may be from f_c. The output potential ψ(y_i, f_c(x_i)) is defined as in (1), for regression or classification.\n\nSome more insight may be gained by taking a careful look at these definitions: 1) The within-view potentials rely only on the intrinsic structure of each view, i.e., on the covariance K_j in a GP setting; 2) Each consensus potential actually defines a Gaussian over the difference of f_j and f_c, i.e., f_j − f_c ~ N(0, σ_j² I), and it can also be interpreted as assuming a conditional Gaussian for f_j with the consensus f_c as the mean. Alternatively, if we focus on f_c, the joint consensus potentials effectively define a conditional Gaussian prior for f_c, i.e., f_c | f_1, ..., f_m ~ N(μ_c, σ_c² I), where\n\nμ_c = σ_c² Σ_j (f_j / σ_j²),   σ_c² = (Σ_j 1/σ_j²)⁻¹.   (4)\n\nThis can be easily verified as a product of Gaussians. It indicates that the prior mean of the consensus function f_c is a weighted combination of the latent functions from all the views, where the weight is given by the inverse variance of each consensus potential: the higher the variance, the smaller the contribution to the consensus function. More insight into this undirected graphical model can be gained from its marginals, which we discuss in detail in the following subsections. One advantage of this representation is that it allows us to see that many existing multi-view learning models are actually special cases of the proposed framework. In addition, this Bayesian interpretation also helps us understand both the benefits and the limitations of co-training. 
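The product-of-Gaussians identity behind (4) is easy to verify numerically; a minimal numpy sketch (array shapes and numeric values are illustrative assumptions):

```python
import numpy as np

def consensus_prior(F, sigma2):
    # Eq. (4): f_c | f_1, ..., f_m ~ N(mu_c, sigma_c^2 I).
    # F: (m, n) array of per-view latent values f_j; sigma2: (m,) variances sigma_j^2.
    prec = 1.0 / np.asarray(sigma2, dtype=float)       # precisions 1/sigma_j^2
    sigma_c2 = 1.0 / prec.sum()                        # sigma_c^2 = (sum_j 1/sigma_j^2)^-1
    mu_c = sigma_c2 * (prec[:, None] * F).sum(axis=0)  # precision-weighted mean
    return mu_c, sigma_c2
```

With equal variances the mean reduces to a plain average of the views; a view with a large σ_j² (low trust-worthiness) contributes almost nothing, which is exactly the weighting behavior described above.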
3.3 Marginal 1: Co-Regularized Multi-View Learning\n\nBy integrating (2) over f_c (and ignoring the output potential for the moment), we obtain the joint marginal distribution of the m latent functions:\n\np(f_1, ..., f_m) = (1/Z) exp( −Σ_{j=1}^m (1/2) f_jᵀ K_j⁻¹ f_j − Σ_{j<k} ‖f_j − f_k‖² / (2(σ_j² + σ_k²)) ).   (5)\n\nIt is clearly seen that the negative logarithm of this marginal exactly recovers the regularization terms in co-regularized multi-view learning: the first part regularizes the functional space of each view, and the second part constrains all the functions to agree on their outputs (inversely weighted by the sum of the corresponding variances). From the GP perspective, (5) actually defines a joint multi-view prior for the m latent functions, (f_1, ..., f_m) ~ N(0, Φ⁻¹), where Φ is an mn × mn matrix with block-wise definition\n\nΦ(j, j) = K_j⁻¹ + Σ_{k≠j} I / (σ_j² + σ_k²),   Φ(j, j′) = −I / (σ_j² + σ_j′²) for j′ ≠ j,   j = 1, ..., m.   (6)\n\nJointly with the target variable y, the marginal is (for instance, for regression):\n\np(y, f_1, ..., f_m) = (1/Z) exp( −Σ_{j=1}^m ‖f_j − y‖² / (2(σ_j² + σ²)) − Σ_{j=1}^m (1/2) f_jᵀ K_j⁻¹ f_j − Σ_{j<k} ‖f_j − f_k‖² / (2(σ_j² + σ_k²)) ).   (7)\n\nThis recovers co-regularization with least-squares loss in its log-marginal form.\n\n3.4 Marginal 2: The Co-Training Kernel\n\nThe joint multi-view prior defined in (6) is interesting, but it has a large dimension and is difficult to work with. A more useful kernel can be obtained if we instead integrate out all the m latent functions in (2). This leads to a Gaussian prior p(f_c) = N(0, K_c) for the consensus function f_c, where\n\nK_c = [ Σ_j (K_j + σ_j² I)⁻¹ ]⁻¹.   (8)\n\nIn the following we call K_c the co-training kernel for multi-view learning. This marginalization is very important, because it reveals the previously unclear insight of how the kernels from different views are combined in a multi-view learning framework. 
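The co-training kernel (8) is directly computable from the per-view kernel matrices; a minimal numpy sketch (plain matrix inversion is used for clarity rather than numerical robustness):

```python
import numpy as np

def co_training_kernel(Ks, sigma2):
    # Eq. (8): K_c = [ sum_j (K_j + sigma_j^2 I)^(-1) ]^(-1).
    # Ks: list of (n, n) per-view kernel matrices; sigma2: per-view variances sigma_j^2.
    n = Ks[0].shape[0]
    S = np.zeros((n, n))
    for Kj, s2 in zip(Ks, sigma2):
        S += np.linalg.inv(Kj + s2 * np.eye(n))  # accumulate per-view precisions
    return np.linalg.inv(S)                      # consensus covariance
```

K_c can then be handed to any standard GP or kernel-machine solver as an ordinary kernel matrix. As a sanity check, with a single view K_c reduces to K_1 + σ_1² I, and two identical views with equal variances halve the covariance.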
This allows us to transform a multi-view learning problem into a single-view problem, and to simply use the co-training kernel K_c to solve GP classification or regression. Since this marginalization is equivalent to (5), we end up with solutions largely similar to those of any other co-regularization algorithm; a key difference, however, is the fully Bayesian treatment, in contrast to previous ML-optimization methods. Additional benefits of the co-training kernel include the following:\n\n1. The co-training kernel avoids repeated alternating optimizations over the different views f_j, and directly works with the single consensus view f_c. This reduces both the time complexity and the space complexity (only K_c is maintained in memory) of multi-view learning.\n\n2. While alternating-optimization algorithms may converge to local optima (because they optimize rather than integrate), the single consensus view guarantees a globally optimal solution for multi-view learning.\n\n3. Even if all the individual kernels are stationary, K_c is in general non-stationary, because the inverse covariances are added and then inverted again. In a transductive setting where the data are partially labeled, the co-training kernel between labeled data also depends on the unlabeled data. Hence the proposed co-training kernel can be used for semi-supervised GP learning [10].\n\n3.5 Benefits of Bayesian Co-Training\n\nThe proposed undirected graphical model provides a better understanding of multi-view learning algorithms. The co-training kernel in (8) shows that Bayesian co-training is equivalent to single-view learning with a special (non-stationary) kernel. This is also the preferable way of working with multi-view learning, since it avoids alternating optimizations. Some additional benefits are as follows.\n\nTrust-worthiness of each view: The graphical model allows each view j to have its own level of uncertainty (or trust-worthiness) σ_j². 
In particular, a larger value of σ_j² implies less confidence in the evidence provided by the j-th view. Thus when some views of the data are better at predicting the output than others, they are weighted more in forming the consensus opinion.\n\nFigure 2: Toy examples for co-training. Big red/blue markers denote +1/−1 labeled points; the remaining points are unlabeled. Top left: co-training result on two-Gaussian data with means (2, −2) and (−2, 2); top center and right: canonical and Bayesian co-training on two-Gaussian data with means (2, 0) and (−2, 0); bottom left: XOR data with four Gaussians; bottom center and right: Bayesian co-training and pure GP supervised learning result (with RBF kernel). Co-training is much worse than GP supervised learning in this case. All Gaussians have unit variance. The RBF kernel uses width 1 for supervised learning and 1/√2 for each feature in two-view learning.\n\nThese uncertainties can be easily optimized in the GP framework by maximizing the marginal likelihood of the output y (omitted in this paper due to the space limit).\n\nUnsupervised and semi-supervised multi-view learning: The proposed graphical model also motivates new methods for unsupervised multi-view learning such as spectral clustering. 
While the similarity matrix of each individual view j is encoded in K_j, the co-training kernel K_c encodes the similarity of two data samples under multiple views, and can thus be used directly in spectral clustering. The extension to semi-supervised learning is also straightforward, since K_c by definition depends on the unlabeled data as well.\n\nAlternative interaction potential functions: The discussion of multi-view learning so far relies on the potential definitions in (3) (which we call consensus-based potentials), but other definitions are also possible and lead to different co-training models. In fact, the definition in (3) has fundamental limitations and leads only to consensus-based learning, as seen in the next subsection.\n\n3.6 Limitations of Consensus-based Potentials\n\nAs mentioned before, the consensus-based potentials in (3) can be interpreted as defining a Gaussian prior (4) for f_c, whose mean is a weighted average of the m individual views. This averaging implies that the value of f_c is never higher (or lower) than that of every single view. While consensus-based potentials are intuitive and useful for many applications, they are limited for some real world problems where the evidence from different views should be additive (or enhancing) rather than averaged. For instance, when a radiologist is making a diagnostic decision about a lung cancer patient, he might look at both the CT image and the MRI image. If either of the two images gives strong evidence of cancer, he can make a decision based on that single view; if both images give an evidence of 0.6 (on a [0, 1] scale), the final evidence of cancer should be higher (say, 0.8) than either of them. Clearly, multi-view learning in this scenario is not consensus-based. 
While all previously proposed co-training and co-regularization algorithms have been based on enforcing consensus between the views, in principle our graphical model allows other forms of relationships between the views. In particular, potentials other than those in (3) should be of great interest for future research.\n\nTable 1: Results for Citeseer with different numbers of training data (pos/neg). Bold face indicates best performance. Bayesian co-training is significantly better than the others (p-value 0.01 in Wilcoxon rank sum test) except in AUC with \"Train +2/-10\".\n\n                       # TRAIN +2/-10                        # TRAIN +4/-20\nMODEL                  AUC              F1                   AUC              F1\nTEXT                   0.5725 ± 0.0180  0.1359 ± 0.0565      0.5770 ± 0.0209  0.1443 ± 0.0705\nINBOUND LINK           0.5451 ± 0.0025  0.3510 ± 0.0011      0.5479 ± 0.0035  0.3521 ± 0.0017\nOUTBOUND LINK          0.5550 ± 0.0119  0.3552 ± 0.0053      0.5662 ± 0.0124  0.3600 ± 0.0059\nTEXT+LINK              0.5730 ± 0.0177  0.1386 ± 0.0561      0.5782 ± 0.0218  0.1474 ± 0.0721\nCO-TRAINED GPLR        0.6459 ± 0.1034  0.4001 ± 0.2186      0.6519 ± 0.1091  0.4042 ± 0.2321\nBAYESIAN CO-TRAINING   0.6536 ± 0.0419  0.4210 ± 0.0401      0.6880 ± 0.0300  0.4530 ± 0.0293\n\n4 Experimental Study\n\nToy Examples: We show some 2-D toy classification problems to visualize the co-training results (Fig. 2). Our first example is a two-Gaussian case where either feature x^(1) or x^(2) can fully solve the problem (top left). This is an ideal case for co-training, since 1) each single view is sufficient to train a classifier, and 2) the views are conditionally independent given the class labels. The second toy data set is a bit harder, since the two Gaussians are aligned with the x^(1)-axis. In this case the feature x^(2) is totally irrelevant to the classification problem. Canonical co-training fails here (top center), since when labels are added based on the x^(2) feature, noisy labels are introduced and propagated through subsequent training iterations. 
The proposed model can handle this situation, since we can adapt the weight of each view and penalize the feature x^(2) (top right). Our third toy data set follows an XOR shape, where four Gaussians form a binary classification problem that is not linearly separable (bottom left). In this case both assumptions mentioned above are violated, and co-training fails completely (bottom center). A supervised learning model, however, can easily recover the non-linear underlying structure (bottom right). This indicates that the co-training kernel K_c is not suitable for this problem.\n\nWeb Data: We use two sets of linked documents for our experiments. The Citeseer data set contains 3,312 entries that belong to six classes. There are three natural views: the text view consists of the title and abstract of a paper; the two link views are the inbound and outbound references. We pick the largest class, which contains 701 documents, and test the one-vs-rest classification performance. The WebKB data set is a collection of 4,502 academic web pages manually grouped into six classes (student, faculty, staff, department, course, project). There are two views, containing the text on the page and the anchor text of all inbound links, respectively. We consider the binary classification problem \"student\" versus \"faculty\", for which there are 1,641 and 1,119 documents, respectively.\n\nWe compare the single-view learning methods (TEXT, INBOUND LINK, etc.), the concatenated-view method (TEXT+LINK), and the co-training methods CO-TRAINED GPLR (co-trained Gaussian process logistic regression) and BAYESIAN CO-TRAINING. Linear kernels are used for all competing methods. For the canonical co-training method we run 50 iterations, in each iteration adding the most confidently predicted positive sample and r negative samples to the training set, where r depends on the negative/positive ratio of each data set. Performance is evaluated using the AUC score and the F1 measure. 
We vary the number of training documents (with a ratio proportional to the true positive/negative ratio), and all the co-training algorithms use all the unlabeled data in the training process. The experiments are repeated 20 times, and the means and standard deviations of the predictions are shown in Tables 1 and 2. It can be seen that for Citeseer the co-training methods are better than the supervised methods. In this case Bayesian co-training is better than canonical co-training and achieves the best performance. For WebKB, however, canonical co-trained GPLR is not as good as the supervised algorithms, and thus Bayesian co-training is also worse than the supervised methods, though a little better than co-trained GPLR. This may be because the TEXT and LINK features are not independent given the class labels (especially since the two classes \"faculty\" and \"staff\" might share features). Canonical co-training has higher deviations than the other methods due to the possibility of adding noisy labels. We have also tried other numbers of iterations, but 50 seems to give the overall best performance.\n\nTable 2: Results for WebKB with different numbers of training data (pos/neg). Bold face indicates best performance. No results are significantly better than all the others (p-value 0.01 in Wilcoxon rank sum test).\n\n                       # TRAIN +2/-2                         # TRAIN +4/-4\nMODEL                  AUC              F1                   AUC              F1\nTEXT                   0.5767 ± 0.0430  0.4449 ± 0.1614      0.6150 ± 0.0594  0.5338 ± 0.1267\nINBOUND LINK           0.5211 ± 0.0017  0.5761 ± 0.0013      0.5210 ± 0.0019  0.5758 ± 0.0015\nTEXT+LINK              0.5766 ± 0.0429  0.4443 ± 0.1610      0.6150 ± 0.0594  0.5336 ± 0.1267\nCO-TRAINED GPLR        0.5624 ± 0.1058  0.5437 ± 0.1225      0.5959 ± 0.0927  0.5737 ± 0.1203\nBAYESIAN CO-TRAINING   0.5794 ± 0.0491  0.5562 ± 0.1598      0.6140 ± 0.0675  0.5742 ± 0.1298\n\nNote that single-view learning with TEXT achieves almost the same performance as the concatenated-view method. 
This is because the number of text features is much larger than the number of link features (e.g., for WebKB there are 24,480 text features and only 901 link features). These multiple views are therefore very unbalanced and should be given different weights in co-training; Bayesian co-training provides a natural way of doing this.\n\n5 Conclusions\n\nThis paper has two principal contributions. We have proposed a graphical model for combining multi-view data, and shown that previously derived co-regularization based training algorithms maximize the likelihood of this model. In the process, we showed that these algorithms make an intrinsic assumption of the form p(f_c, f_1, f_2, ..., f_m) ∝ ψ(f_c, f_1) ψ(f_c, f_2) ... ψ(f_c, f_m), even though this was not explicitly realized earlier. We also studied circumstances under which this assumption proves unreasonable. Thus, our first contribution is to clarify the implicit assumptions and limitations of multi-view consensus learning in general, and of co-regularization in particular. Motivated by the insights from the graphical model, our second contribution is the development of alternative algorithms for co-regularization; in particular, the development of a non-stationary co-training kernel, and of methods for using side information in classification. Unlike previously published co-regularization algorithms, our approach: (a) naturally handles more than two views; (b) automatically learns which views of the data should be trusted more when predicting class labels; (c) shows how to leverage previously developed methods for efficiently training GPs/SVMs; (d) clearly states our assumptions and what is being optimized overall; (e) does not suffer from local-maxima problems; and (f) is less computationally demanding in terms of both speed and memory requirements.\n\nReferences\n\n[1] S. Dasgupta, M. Littman, and D. McAllester. PAC generalization bounds for co-training. In NIPS, 2001.\n\n[2] A. Blum and T. Mitchell. 
Combining labeled and unlabeled data with co-training. In COLT, 1998. [3] N. Balcan, A. Blum, and K. Yang. Co-training and expansion: Towards bridging theory and practice. In NIPS, 2004. [4] K. Nigam and R. Ghani. Analyzing the effectiveness and applicability of co-training. In Workshop on information and knowledge management, 2000. [5] U. Brefeld and T. Scheffer. Co-em support vector learning. In ICML, 2004. [6] Steffen Bickel and Tobias Scheffer. Estimation of mixture models using co-em. In ECML, 2005. [7] B. Krishnapuram, D. Williams, Y. Xue, A. Hartemink, L. Carin, and M. Figueiredo. On semi-supervised classification. In NIPS, 2004. [8] Virginia de Sa. Spectral clustering with two views. In ICML Workshop on Learning With Multiple Views, 2005. [9] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006. [10] Xiaojin Zhu, John Lafferty, and Zoubin Ghahramani. Semi-supervised learning: From Gaussian fields to gaussian processes. Technical report, CMU-CS-03-175, 2003.\n\n8\n\n\f\n", "award": [], "sourceid": 3260, "authors": [{"given_name": "Shipeng", "family_name": "Yu", "institution": null}, {"given_name": "Balaji", "family_name": "Krishnapuram", "institution": null}, {"given_name": "Harald", "family_name": "Steck", "institution": null}, {"given_name": "R.", "family_name": "Rao", "institution": null}, {"given_name": "R\u00f3mer", "family_name": "Rosales", "institution": null}]}