Part of Advances in Neural Information Processing Systems 34 (NeurIPS 2021)
Raef Bassily, Cristóbal Guzmán, Michael Menart
We study differentially private stochastic optimization in convex and non-convex settings. For the convex case, we focus on the family of non-smooth generalized linear losses (GLLs). Our algorithm for the $\ell_2$ setting achieves optimal excess population risk in near-linear time, while the best known differentially private algorithms for general convex losses run in super-linear time. Our algorithm for the $\ell_1$ setting has nearly-optimal excess population risk $\tilde{O}\big(\sqrt{\tfrac{\log d}{n}}\big)$, and circumvents the dimension-dependent lower bound of \cite{Asi:2021} for general non-smooth convex losses. In the differentially private non-convex setting, we provide several new algorithms for approximating stationary points of the population risk. For the $\ell_1$-case with smooth losses and polyhedral constraint, we provide the first nearly dimension-independent rate, $\tilde{O}\big(\tfrac{\log^{2/3} d}{n^{1/3}}\big)$, in linear time. For the constrained $\ell_2$-case with smooth losses, we obtain a linear-time algorithm with rate $\tilde{O}\big(\tfrac{1}{n^{3/10} d^{1/10}} + \big(\tfrac{d}{n^2}\big)^{1/5}\big)$. Finally, for the $\ell_2$-case we provide the first method for {\em non-smooth weakly convex} stochastic optimization with rate $\tilde{O}\big(\tfrac{1}{n^{1/4}} + \big(\tfrac{d}{n^2}\big)^{1/6}\big)$, which matches the best existing non-private algorithm when $d = O(\sqrt{n})$. We also extend all our results above for the non-convex $\ell_2$ setting to the $\ell_p$ setting, where $1 < p \leq 2$, with only polylogarithmic (in the dimension) overhead in the rates.
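For context, the guarantees above are stated in terms of the standard notions of differential privacy and excess population risk; a brief reminder of both follows, using generic notation ($\mathcal{A}$, $S$, $f$, $F$, $\mathcal{W}$, $\mathcal{D}$) rather than the paper's own symbols.

$$\text{A randomized algorithm } \mathcal{A} \text{ is } (\varepsilon,\delta)\text{-differentially private if, for all datasets } S, S' \text{ differing in one record and all events } E,\quad \Pr[\mathcal{A}(S) \in E] \le e^{\varepsilon}\, \Pr[\mathcal{A}(S') \in E] + \delta.$$

$$\text{Excess population risk of an output } \hat{w} \in \mathcal{W}:\quad \mathbb{E}\big[F(\hat{w})\big] - \min_{w \in \mathcal{W}} F(w), \qquad \text{where } F(w) = \mathbb{E}_{z \sim \mathcal{D}}\big[f(w; z)\big].$$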