Fast Sparse Gaussian Process Methods: The Informative Vector Machine

Part of Advances in Neural Information Processing Systems 15 (NIPS 2002)


Authors

Neil Lawrence, Matthias Seeger, Ralf Herbrich

Abstract

We present a framework for sparse Gaussian process (GP) methods which uses forward selection with criteria based on information-theoretic principles, previously suggested for active learning. Our goal is not only to learn d-sparse predictors (which can be evaluated in O(d) rather than O(n), d ≪ n, n the number of training points), but also to perform training under strong restrictions on time and memory requirements. The scaling of our method is at most O(n · d²), and in large real-world classification experiments we show that it can match prediction performance of the popular support vector machine (SVM), yet can be significantly faster in training. In contrast to the SVM, our approximation produces estimates of predictive probabilities (‘error bars’), allows for Bayesian model selection and is less complex in implementation.
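To make the O(n · d²) scaling concrete, here is a minimal sketch of greedy, entropy-based forward selection for the simpler Gaussian-likelihood (regression) case. It is not the paper's method verbatim: the paper's classification setting additionally maintains likelihood (site) approximations, which are omitted here, and the function name, score, and kernel below are illustrative assumptions. Each inclusion is a rank-one update costing O(n · d), giving O(n · d²) total.

```python
import numpy as np

def ivm_forward_select(K, sigma2, d):
    """Greedy forward selection of d points (IVM-style, Gaussian noise).

    At each step, include the point whose inclusion most reduces the
    differential entropy of the posterior,
        Delta_j = 0.5 * log(1 + zeta_j / sigma2),
    which is monotone in the current posterior variance zeta_j, so the
    argmax over remaining variances suffices.
    """
    n = K.shape[0]
    zeta = np.diag(K).astype(float).copy()  # posterior variances, start at prior
    M = np.zeros((d, n))                    # rows accumulate the rank-d reduction
    active = []
    for i in range(d):
        scores = zeta.copy()
        scores[active] = -np.inf            # never re-select an included point
        j = int(np.argmax(scores))
        active.append(j)
        # rank-one posterior update for a Gaussian likelihood: O(n * d)
        s = (K[:, j] - M[:i].T @ M[:i, j]) / np.sqrt(zeta[j] + sigma2)
        M[i] = s
        zeta = zeta - s**2
    return active, M, zeta

# Example usage: RBF kernel on random 1-D inputs, picking d = 20 of n = 500.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
K = np.exp(-0.5 * (X - X.T) ** 2)
active, M, zeta = ivm_forward_select(K, sigma2=0.1, d=20)
```

The state kept per step (the factor M and the variances zeta) is O(n · d), which is what allows training under the strong memory restrictions the abstract refers to; the full n × n kernel matrix is used here only for brevity and could be replaced by on-demand kernel column evaluations.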