Part of Advances in Neural Information Processing Systems 19 (NIPS 2006)
Andreas Argyriou, Theodoros Evgeniou, Massimiliano Pontil
We present a method for learning a low-dimensional representation which is shared across a set of multiple related tasks. The method builds upon the well-known 1-norm regularization problem, using a new regularizer which controls the number of learned features common to all the tasks. We show that this problem is equivalent to a convex optimization problem and develop an iterative algorithm for solving it. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the latter step we learn common-across-tasks representations and in the former step we learn task-specific functions using these representations. We report experiments on a simulated and a real data set which demonstrate that the proposed method dramatically improves performance relative to learning each task independently. Our algorithm can also be used, as a special case, to simply select – not learn – a few common features across the tasks.
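The alternating scheme described above can be sketched in code. This is an illustrative toy implementation, not the authors' reference code: it assumes square loss and the common variational form of such shared-feature objectives, min over W and D of sum_t ||X_t w_t - y_t||^2 + gamma * w_t' D^{-1} w_t with D positive semidefinite and trace(D) <= 1. All function names, defaults, and the `eps` smoothing term are our own choices.

```python
import numpy as np

def multitask_feature_learning(Xs, ys, gamma=1.0, n_iter=50, eps=1e-6):
    """Alternating-minimization sketch for shared-feature multi-task learning.

    Xs : list of (n_t, d) design matrices, one per task.
    ys : list of (n_t,) target vectors, one per task.
    Returns W (d x T task weight matrix) and D (d x d shared-feature matrix).
    """
    d = Xs[0].shape[1]
    T = len(Xs)
    D = np.eye(d) / d          # start from the isotropic, unit-trace matrix
    W = np.zeros((d, T))
    for _ in range(n_iter):
        # Supervised step: with D fixed, each task decouples into an
        # independent generalized ridge regression with penalty w' D^{-1} w.
        Dinv = np.linalg.pinv(D)
        for t in range(T):
            A = Xs[t].T @ Xs[t] + gamma * Dinv
            W[:, t] = np.linalg.solve(A, Xs[t].T @ ys[t])
        # Unsupervised step: with W fixed, the minimizing D is the matrix
        # square root of W W', rescaled to unit trace (eps regularizes the
        # inverse on the next pass).
        U, s, _ = np.linalg.svd(W, full_matrices=False)
        sqrt_WWt = (U * s) @ U.T + eps * np.eye(d)
        D = sqrt_WWt / np.trace(sqrt_WWt)
    return W, D

# Toy usage on synthetic tasks that share two underlying features.
rng = np.random.default_rng(0)
d, T, n = 20, 5, 30
U_true = rng.standard_normal((d, 2))
Xs = [rng.standard_normal((n, d)) for _ in range(T)]
ys = [X @ (U_true @ rng.standard_normal(2)) for X in Xs]
W, D = multitask_feature_learning(Xs, ys, gamma=0.1)
```

After fitting, the eigenstructure of D indicates which directions in input space the tasks share; concentrating its trace on few eigenvalues is what drives the feature-selection special case mentioned in the abstract.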