NeurIPS 2020

Exponential ergodicity of mirror-Langevin diffusions

Meta Review

This paper presents a theoretical analysis of the class of mirror-Langevin diffusions in continuous time, i.e., diffusion processes used to sample from complex target distributions. Although these algorithms were introduced by other authors, the results available so far did not show improved convergence rates compared to standard Langevin. Here the authors obtain improved convergence rates under reasonable assumptions (e.g., contraction under a Poincaré condition). All the reviewers participated in the discussion after the rebuttal was made available.

Strengths: the paper is very well written, and the proofs are clear and easy to follow. Additionally, the results are neat and demonstrate the potential usefulness of mirror-Langevin diffusions.

Weaknesses: the authors have only analyzed the continuous-time algorithm; it is unclear how to discretize such processes efficiently and which theoretical results would hold for the discretized version. Moreover, the authors have only considered the Newton-Langevin diffusion. In most scenarios, the convex conjugate required by the Newton-Langevin diffusion is not available analytically, and it is unclear whether such an algorithm, which must solve a convex optimization problem at each iteration, can be practically competitive with underdamped Langevin. The numerics could be improved and better connected to the theoretical part. The authors should also spell out the limitations of this approach more clearly.

Overall, despite its weaknesses, this paper presents an interesting theoretical analysis of a new class of diffusions inspired by optimization algorithms. This should motivate further developments in this important area.