NeurIPS 2020

Tight last-iterate convergence rates for no-regret learning in multi-player games

Meta Review

This paper examines the "last-iterate" convergence rate of optimistic gradient descent with perfect gradient feedback in smooth, unconstrained monotone games. Without strong monotonicity, the closest results in the literature are the recent papers [LZMJ20] and [GPDO20], which prove an $O(1/\sqrt{T})$ convergence rate for cocoercive and merely monotone games, respectively (with perfect gradient input in both cases). The current paper extends the results of [GPDO20] to the optimistic gradient algorithm which, in contrast to extra-gradient, does not require an intermediate "gradient sample" at each iteration.

The reviewers all agreed that this contribution is interesting and that the paper is well written, so there was quick consensus to accept. After my own reading of the paper, I concur with the reviewers' assessment; however, I would also like the authors to clarify how their work compares to [GPDO20]. The paper relies heavily on the techniques of [GPDO20], yet in the introduction [GPDO20] is mentioned only as an afterthought, and the motivation for focusing on "optimistic gradient" instead of "extra-gradient" is left unqualified: for example, in what sense does extra-gradient fail to be a no-regret method? In the main body of the paper, the authors explain quite clearly which techniques are adapted from [GPDO20] and how; a version of these explanations should also appear in the introduction (the extra page available should be more than sufficient for this).

I would also recommend drawing a clear distinction between methods that require perfect gradient feedback and those that do not, as well as between gradient and extra-gradient methods. The paper currently mixes these contributions together; tabulating them would make the paper's contribution clearer and provide valuable context.

Finally, a minor point: several references in the bibliography appear to be incomplete; a quick pass through DBLP should be enough to fix this.
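For context, the algorithmic distinction at issue can be sketched via the standard textbook update rules (with step size $\eta$ and game-gradient operator $F$; these displays are illustrative and not drawn from the paper under review):

```latex
% Extra-gradient (Korpelevich): two gradient evaluations per iteration,
% one of them at an intermediate "leading" point.
x_{t+1/2} = x_t - \eta F(x_t), \qquad
x_{t+1} = x_t - \eta F(x_{t+1/2}).

% Optimistic gradient (Popov): a single new gradient evaluation per
% iteration, reusing the previous gradient as a correction term.
x_{t+1} = x_t - \eta F(x_t) - \eta \bigl( F(x_t) - F(x_{t-1}) \bigr).
```

The extra-gradient step $x_{t+1/2}$ is the intermediate "gradient sample" referred to above, which the optimistic method avoids by storing $F(x_{t-1})$ instead.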