Part of Advances in Neural Information Processing Systems 26 (NIPS 2013)
Yanshuai Cao, Marcus A. Brubaker, David J. Fleet, Aaron Hertzmann
We propose an efficient discrete optimization algorithm for selecting a subset of training data to induce sparsity for Gaussian process regression. The algorithm estimates this inducing set and the hyperparameters using a single objective, either the marginal likelihood or a variational free energy. The space and time complexity are linear in the training set size, and the algorithm can be applied to large regression problems on discrete or continuous domains. Empirical evaluation shows state-of-the-art performance in the discrete case and competitive results in the continuous case.
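To make the high-level description concrete, below is a minimal Python sketch of one way to select an inducing set from the training data: greedy forward selection scored by a Titsias-style variational free energy with fixed hyperparameters. The names (rbf_kernel, variational_free_energy, greedy_select_inducing, noise) and the greedy strategy are illustrative assumptions, not the paper's algorithm, which uses a more efficient discrete optimization and jointly estimates the hyperparameters.

import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def variational_free_energy(X, y, Z, noise=0.1, jitter=1e-8):
    # Variational lower bound on the log marginal likelihood for inducing
    # inputs Z:  log N(y | 0, Qnn + noise*I) - trace(Knn - Qnn) / (2*noise),
    # with Qnn = Knm Kmm^{-1} Kmn.
    n, m = X.shape[0], Z.shape[0]
    Kmm = rbf_kernel(Z, Z) + jitter * np.eye(m)
    Kmn = rbf_kernel(Z, X)
    L = np.linalg.cholesky(Kmm)
    A = np.linalg.solve(L, Kmn)                  # Qnn = A.T @ A
    B = np.eye(m) + A @ A.T / noise
    LB = np.linalg.cholesky(B)
    c = np.linalg.solve(LB, A @ y) / noise
    log_det = n * np.log(noise) + 2 * np.sum(np.log(np.diag(LB)))
    quad = y @ y / noise - c @ c
    trace = (n * 1.0 - np.sum(A * A)) / noise    # kernel variance fixed at 1
    return -0.5 * (n * np.log(2 * np.pi) + log_det + quad + trace)

def greedy_select_inducing(X, y, m, noise=0.1):
    # Grow the inducing set one training point at a time, always adding the
    # candidate that most increases the free energy.
    selected, remaining = [], list(range(X.shape[0]))
    for _ in range(m):
        scores = [variational_free_energy(X, y, X[selected + [j]], noise)
                  for j in remaining]
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
    print("selected inducing indices:", greedy_select_inducing(X, y, m=10))

In this naive sketch each candidate evaluation costs O(n m^2) and the full selection is far more expensive; the abstract's claim is that the proposed discrete optimization keeps space and time linear in the training set size while optimizing the inducing set and hyperparameters under a single objective.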