NeurIPS 2020

A Decentralized Parallel Algorithm for Training Generative Adversarial Nets


Meta Review

The paper presents a decentralized version of the Parallel Optimistic Stochastic Gradient algorithm and provides a non-asymptotic convergence theorem. The algorithm applies to generic smooth min-max optimization problems, including GAN training. Reviewer concerns remained about whether the theory recovers the single-machine case, and a more precise discussion is needed of the requirements on the communication graphs (spectral gap vs. maximum degree) and of the restricted-communication setting. In their response, the authors confirmed that the communication complexity is not logarithmic for general graphs. The convergence of the local iterates could also be discussed more thoroughly (as acknowledged in the author response). We urge the authors to incorporate the feedback from all reviewers, in particular the detailed comments by Reviewer 2, in the camera-ready version.