Part of Advances in Neural Information Processing Systems 36 (NeurIPS 2023) Main Conference Track
Dachao Lin, Yuze Han, Haishan Ye, Zhihua Zhang
We study finite-sum distributed optimization problems involving a master node and n−1 local nodes under the popular δ-similarity and μ-strong-convexity conditions. We propose two new algorithms, SVRS and AccSVRS, motivated by previous works. The non-accelerated SVRS method combines the techniques of gradient sliding and variance reduction, and achieves a better communication complexity of Õ(n + √n·δ/μ) compared to existing non-accelerated algorithms. Applying the framework proposed in Katyusha X, we also develop a directly accelerated version named AccSVRS with the communication complexity Õ(n + n^(3/4)·√(δ/μ)). In contrast to existing results, our complexity bounds are entirely smoothness-free and exhibit superiority in ill-conditioned cases. Furthermore, we establish a nearly matching lower bound to verify the tightness of our AccSVRS method.
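For quick reference, the sketch below is a LaTeX rendering of the two communication-complexity bounds stated in the abstract. Here n is the number of nodes, δ the similarity parameter, and μ the strong-convexity parameter, all as introduced above; the tilde suppresses logarithmic factors.

```latex
% Communication complexities quoted in the abstract (logarithmic factors suppressed by \tilde{O}):
\[
\underbrace{\tilde{O}\!\left(n + \sqrt{n}\,\frac{\delta}{\mu}\right)}_{\text{SVRS (non-accelerated)}}
\qquad\text{and}\qquad
\underbrace{\tilde{O}\!\left(n + n^{3/4}\sqrt{\frac{\delta}{\mu}}\right)}_{\text{AccSVRS (accelerated)}}
\]
```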