Continuous Submodular Maximization: Beyond DR-Submodularity

Part of Advances in Neural Information Processing Systems 33 (NeurIPS 2020)


Authors

Moran Feldman, Amin Karbasi

Abstract

In this paper, we propose the first continuous optimization algorithms that achieve a constant-factor approximation guarantee for the problem of monotone continuous submodular maximization subject to a linear constraint. We first prove that a simple variant of vanilla coordinate ascent, called Coordinate-Ascent+, achieves a $(\frac{e-1}{2e-1}-\epsilon)$-approximation guarantee while performing $O(n/\epsilon)$ iterations, where the computational complexity of each iteration is roughly $O(n/\sqrt{\epsilon}+n\log n)$ (here, $n$ denotes the dimension of the optimization problem). We then propose Coordinate-Ascent++, which achieves the tight $(1-1/e-\epsilon)$-approximation guarantee while performing the same number of iterations, but at a higher computational complexity of roughly $O(n^3/\epsilon^{2.5} + n^3 \log n / \epsilon^2)$ per iteration. However, the computation of each round of Coordinate-Ascent++ can be easily parallelized so that the computational cost per machine scales as $O(n/\sqrt{\epsilon}+n\log n)$.
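To make the setting concrete, the sketch below shows a generic greedy coordinate-ascent heuristic for maximizing a monotone continuous objective $F$ over the box $[0,1]^n$ subject to a linear budget constraint $\langle c, x\rangle \le B$. It is only an illustration of the vanilla coordinate-ascent template the abstract refers to, not the paper's Coordinate-Ascent+ or Coordinate-Ascent++ algorithms; the objective, costs, budget, and step size are hypothetical placeholders.

```python
# Illustrative sketch only: vanilla coordinate ascent for maximizing a
# monotone continuous function F over [0, 1]^n under a linear budget
# constraint <c, x> <= B.  This is NOT the paper's Coordinate-Ascent+ /
# Coordinate-Ascent++; F, c, B, and delta below are placeholder choices.
import numpy as np

def coordinate_ascent(F, c, B, n, delta=0.05):
    """Greedy coordinate ascent with step size `delta`.

    At every step, increase the coordinate whose marginal gain per unit of
    cost is largest, as long as the budget and box constraints allow it.
    """
    x = np.zeros(n)
    budget = B
    while budget > 1e-9:
        base = F(x)
        best_i, best_ratio, best_step = None, 0.0, 0.0
        for i in range(n):
            # Largest feasible increase of coordinate i.
            step = min(delta, 1.0 - x[i], budget / c[i])
            if step <= 1e-12:
                continue
            x[i] += step
            gain = F(x) - base
            x[i] -= step
            ratio = gain / (c[i] * step)
            if ratio > best_ratio:
                best_i, best_ratio, best_step = i, ratio, step
        if best_i is None:  # no coordinate yields a positive marginal gain
            break
        x[best_i] += best_step
        budget -= c[best_i] * best_step
    return x

# Toy usage with a separable concave (hence monotone DR-submodular) objective.
if __name__ == "__main__":
    n = 5
    w = np.array([3.0, 1.0, 2.0, 5.0, 4.0])
    F = lambda x: float(np.sum(w * np.sqrt(x)))  # monotone, DR-submodular
    c = np.ones(n)                               # unit costs
    x = coordinate_ascent(F, c, B=2.0, n=n)
    print("solution:", np.round(x, 2), "value:", round(F(x), 3))
```

The paper's contribution lies in how the coordinates and step sizes are chosen so that this greedy template provably attains the stated $(\frac{e-1}{2e-1}-\epsilon)$ and $(1-1/e-\epsilon)$ guarantees; the simple per-unit-cost rule above carries no such guarantee beyond the DR-submodular case and serves only to fix notation.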