
Sampling from Log-Concave Distributions with Infinity-Distance Guarantees

Part of Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Main Conference Track


Authors

Oren Mangoubi, Nisheeth Vishnoi

Abstract

For a $d$-dimensional log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to a convex body $K$, the problem of outputting samples from a distribution $\nu$ which is $\varepsilon$-close to $\pi$ in infinity-distance $\sup_{\theta \in K} \left|\log \frac{\nu(\theta)}{\pi(\theta)}\right|$ arises in differentially private optimization. While sampling within total-variation distance $\varepsilon$ of $\pi$ can be done by algorithms whose runtime depends polylogarithmically on $\frac{1}{\varepsilon}$, prior algorithms for sampling within $\varepsilon$ infinity-distance have runtime bounds that depend polynomially on $\frac{1}{\varepsilon}$. We bridge this gap by presenting an algorithm that outputs a point $\varepsilon$-close to $\pi$ in infinity-distance and requires at most $\mathrm{poly}(\log \frac{1}{\varepsilon}, d)$ calls to a membership oracle for $K$ and an evaluation oracle for $f$, when $f$ is Lipschitz. Our approach departs from prior works, which construct Markov chains on a $\frac{1}{\varepsilon^2}$-discretization of $K$ to achieve a sample with $\varepsilon$ infinity-distance error: instead, we present a method to directly convert continuous samples from $K$ with total-variation bounds into samples with infinity-distance bounds. This approach also allows us to obtain an improvement in the dimension dependence $d$ of the running time for the problem of sampling from a log-concave distribution on a polytope $K$ with infinity-distance error $\varepsilon$, by plugging in TV-distance running-time bounds for the Dikin walk Markov chain.
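For reference, a brief sketch of the two notions of closeness involved: the infinity-distance is the sup-log-ratio quantity from the abstract (the symbol $d_\infty$ below is our notation, not fixed by the abstract), and the one-way comparison with total variation is a standard calculation, not a claim taken from the paper.

% Infinity-distance and total-variation distance between densities nu and pi on K.
\[
  d_\infty(\nu, \pi) \;=\; \sup_{\theta \in K} \left| \log \frac{\nu(\theta)}{\pi(\theta)} \right|,
  \qquad
  d_{\mathrm{TV}}(\nu, \pi) \;=\; \frac{1}{2} \int_K \lvert \nu(\theta) - \pi(\theta) \rvert \, d\theta .
\]
% If d_infty(nu, pi) <= eps, then pi(theta) - nu(theta) <= (1 - e^{-eps}) pi(theta)
% pointwise, so integrating the positive part gives d_TV(nu, pi) <= 1 - e^{-eps} <= eps.
% The converse fails in general, which is why a TV guarantee alone does not yield an
% infinity-distance guarantee and motivates the conversion described in the abstract.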