Gradient Descent Is Optimal Under Lower Restricted Secant Inequality And Upper Error Bound

Part of Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Main Conference Track


Authors

Charles Guille-Escuret, Adam Ibrahim, Baptiste Goujaud, Ioannis Mitliagkas

Abstract

The study of first-order optimization is sensitive to the assumptions made on the objective functions. These assumptions induce complexity classes which play a key role in worst-case analysis, including the fundamental concept of algorithm optimality. Recent work argues that strong convexity and smoothness, two popular assumptions in the literature, lead to a pathological definition of the condition number. Motivated by this result, we focus on the class of functions satisfying a lower restricted secant inequality and an upper error bound. On top of being robust to the aforementioned pathological behavior and including some non-convex functions, this pair of conditions displays interesting geometrical properties. In particular, the necessary and sufficient conditions to interpolate a set of points and their gradients within the class can be separated into simple conditions on each sampled gradient. This allows the performance estimation problem (PEP) to be solved analytically, leading to a lower bound on the convergence rate that proves gradient descent to be exactly optimal on this class of functions among all first-order algorithms.
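For context, the two conditions named in the title are commonly stated as follows; this is a sketch of the standard formulations with parameters $\mu$ and $L$, and the exact notation (in particular the projection $\Pi_{X^*}$ onto the solution set) is an assumption here rather than a quotation from the paper.

Lower restricted secant inequality, RSI$_\mu$, for all $x$:
\[
\langle \nabla f(x),\, x - \Pi_{X^*}(x) \rangle \;\ge\; \mu\, \lVert x - \Pi_{X^*}(x) \rVert^2 ,
\]
Upper error bound, EB$_L$, for all $x$:
\[
\lVert \nabla f(x) \rVert \;\le\; L\, \lVert x - \Pi_{X^*}(x) \rVert ,
\]
where $X^*$ denotes the set of minimizers of $f$ and $\Pi_{X^*}(x)$ the projection of $x$ onto $X^*$. Together these bound the gradient from below and above in terms of the distance to the solution set, without requiring convexity.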