Part of Advances in Neural Information Processing Systems 32 (NeurIPS 2019)
Santosh Vempala, Andre Wibisono
We study the Unadjusted Langevin Algorithm (ULA) for sampling from a probability distribution $\nu = e^{-f}$ on $\mathbb{R}^n$. We prove a convergence guarantee in Kullback-Leibler (KL) divergence assuming $\nu$ satisfies a log-Sobolev inequality and the Hessian of $f$ is bounded. Notably, we do not assume convexity or bounds on higher derivatives. We also prove convergence guarantees in Rényi divergence of order $q > 1$ assuming the limit of ULA satisfies either the log-Sobolev or the Poincaré inequality.
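For readers unfamiliar with the algorithm, ULA is the standard Euler-Maruyama discretization of the Langevin dynamics, iterating $x_{k+1} = x_k - h\,\nabla f(x_k) + \sqrt{2h}\,z_k$ with $z_k \sim \mathcal{N}(0, I_n)$. The sketch below is a minimal illustration of this iteration, not code from the paper; the function and parameter names (grad_f, step_size, num_iters) and the Gaussian example target are assumptions made for the example, and the admissible step size in the paper's guarantees depends on the smoothness of $f$.

import numpy as np

def ula_sample(grad_f, x0, step_size, num_iters, rng=None):
    """Run the Unadjusted Langevin Algorithm (ULA) starting from x0.

    grad_f    : callable returning the gradient of f, where the target is nu = e^{-f}
    step_size : positive step size h (illustrative; guarantees constrain it via smoothness of f)
    num_iters : number of ULA iterations to run
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        noise = rng.standard_normal(x.shape)
        # ULA update: x <- x - h * grad f(x) + sqrt(2h) * standard Gaussian noise
        x = x - step_size * grad_f(x) + np.sqrt(2.0 * step_size) * noise
    return x

# Illustrative usage: approximately sample from a standard Gaussian,
# where f(x) = |x|^2 / 2, so grad f(x) = x.
if __name__ == "__main__":
    sample = ula_sample(grad_f=lambda x: x, x0=np.zeros(2),
                        step_size=0.01, num_iters=5000)
    print(sample)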