Part of Advances in Neural Information Processing Systems 32 (NeurIPS 2019)
Aaron Defazio, Leon Bottou
The application of stochastic variance reduction to optimization has shown remarkable recent theoretical and practical success. The applicability of these techniques to the hard non-convex optimization problems encountered during the training of modern deep neural networks is an open problem. We show that naive application of the SVRG technique and related approaches fails, and we explore why.
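For context, the standard SVRG update the abstract refers to can be sketched as follows. This is a minimal illustration on a hypothetical least-squares problem (the toy data, step size, and loop lengths are my assumptions, not from the paper, which studies deep networks): a full gradient is computed at a periodic snapshot point, and each stochastic step corrects the minibatch gradient with the snapshot's gradients.

```python
import numpy as np

# Hypothetical toy problem: least-squares regression, chosen only to
# illustrate the SVRG update rule; the paper itself concerns deep networks.
rng = np.random.default_rng(0)
n, d = 100, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

def grad_i(w, i):
    # Gradient of the i-th squared-error term (1/2)(x_i^T w - y_i)^2.
    return (X[i] @ w - y[i]) * X[i]

def full_grad(w):
    # Average gradient over all n terms.
    return X.T @ (X @ w - y) / n

def svrg(w, step=0.05, epochs=10, m=None):
    m = m or n  # inner-loop length per snapshot
    for _ in range(epochs):
        w_snap = w.copy()        # snapshot point
        mu = full_grad(w_snap)   # full gradient at the snapshot
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced estimator: unbiased in expectation over i,
            # with variance shrinking as w approaches w_snap.
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w = w - step * g
    return w

w0 = np.zeros(d)
w_out = svrg(w0.copy())
```

The abstract's point is that this estimator, effective for convex problems, does not transfer naively to deep network training.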