Bregman Divergence for Stochastic Variance Reduction: Saddle-Point and Adversarial Prediction

Part of Advances in Neural Information Processing Systems 30 (NIPS 2017)



Zhan Shi, Xinhua Zhang, Yaoliang Yu


Adversarial machines, in which a learner competes against an adversary, have regained much recent interest in machine learning. They naturally take the form of saddle-point optimization, often with separable structure but sometimes also with unmanageably large dimension. In this work we show that adversarial prediction under multivariate losses can be solved much faster than before. We first reduce the problem size exponentially by using appropriate sufficient statistics, and then adapt the recent stochastic variance-reduced algorithm of Balamurugan & Bach (2016) to accommodate any Bregman divergence. We prove that the same linear rate of convergence is retained, and we show that for adversarial prediction using the KL-divergence we achieve a further speedup proportional to the number of examples over the Euclidean alternative. We verify the theoretical findings through extensive experiments on two example applications: adversarial prediction and LPBoost.
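To give a feel for the kind of update the abstract describes, the sketch below combines an SVRG-style variance-reduced gradient estimate with a KL-divergence (entropic) Bregman proximal step, on a toy bilinear saddle-point problem over probability simplices. This is only an illustration of the two ingredients, not the authors' Breg-SVRG algorithm or their adversarial-prediction reformulation; the function names, the averaging of iterates, and all hyperparameters are choices made for this example.

```python
import numpy as np

def kl_prox(w, grad, eta):
    """Bregman proximal step with the KL divergence on the simplex:
    a multiplicative (entropic mirror) update followed by renormalization."""
    v = w * np.exp(-eta * grad)
    return v / v.sum()

def breg_svrg_bilinear(A, eta=0.05, epochs=100, inner=None, seed=0):
    """SVRG-style sketch for min_x max_y x^T A y over probability simplices,
    using KL proximal steps and averaged iterates (illustrative settings)."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    x = np.full(n, 1.0 / n)
    y = np.full(m, 1.0 / m)
    inner = inner or 2 * (n + m)
    x_sum, y_sum, count = np.zeros(n), np.zeros(m), 0
    for _ in range(epochs):
        xs, ys = x.copy(), y.copy()      # snapshot iterates
        gx_full = A @ ys                 # full gradient w.r.t. x at snapshot
        gy_full = A.T @ xs               # full gradient w.r.t. y at snapshot
        for _ in range(inner):
            j = rng.integers(m)          # sample a coordinate of y
            i = rng.integers(n)          # sample a coordinate of x
            # variance-reduced gradient estimates: unbiased, with variance
            # shrinking as the iterates approach the snapshot
            gx = m * A[:, j] * (y[j] - ys[j]) + gx_full
            gy = n * A[i, :] * (x[i] - xs[i]) + gy_full
            x = kl_prox(x, gx, eta)      # descent step for the minimizer
            y = kl_prox(y, -gy, eta)     # ascent step for the maximizer
            x_sum += x; y_sum += y; count += 1
    return x_sum / count, y_sum / count  # averaged iterates

def saddle_gap(A, x, y):
    """Duality gap max_y' x^T A y' - min_x' x'^T A y; zero at equilibrium."""
    return (A.T @ x).max() - (A @ y).min()
```

The KL proximal step is where the Bregman-divergence generality pays off: on the simplex it reduces to a cheap multiplicative update, whereas the Euclidean proximal step would require a projection.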