List-decodable Linear Regression

Part of Advances in Neural Information Processing Systems 32 (NeurIPS 2019)


Authors

Sushrut Karmalkar, Adam Klivans, Pravesh Kothari

Abstract

We give the first polynomial-time algorithm for robust regression in the list-decodable setting, where an adversary can corrupt more than a 1/2 fraction of the examples.

For any \alpha < 1, our algorithm takes as input a sample \{(x_i, y_i)\}_{i \leq n} of n linear equations, where \alpha n of the equations satisfy y_i = \langle x_i, \ell^* \rangle + \zeta for some small noise \zeta and the remaining (1-\alpha) n equations are arbitrarily chosen. It outputs a list L of size O(1/\alpha) - a fixed constant - that contains an \ell that is close to \ell^*.
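To make the input model concrete, the following is a minimal simulation sketch of the setting above; the parameter values and names (n, d, alpha, sigma) are illustrative assumptions, not from the paper, and the outlier rule is just one possible adversary.

```python
import numpy as np

# Sketch of the list-decodable regression input model: alpha*n inlier
# equations y_i = <x_i, ell_star> + zeta with small noise zeta, and
# (1 - alpha)*n arbitrarily chosen equations. All parameter values are
# illustrative, not from the paper.
rng = np.random.default_rng(0)
n, d, alpha, sigma = 1000, 20, 0.3, 0.01

ell_star = rng.standard_normal(d)
ell_star /= np.linalg.norm(ell_star)      # unknown target vector

n_in = int(alpha * n)
X = rng.standard_normal((n, d))           # inlier x_i ~ N(0, I_d)
y = np.empty(n)
y[:n_in] = X[:n_in] @ ell_star + sigma * rng.standard_normal(n_in)  # inliers
y[n_in:] = rng.uniform(-10.0, 10.0, n - n_in)  # stand-in adversary; a real
                                               # adversary may also choose x_i

# An algorithm for this problem sees only (X, y), with the inlier/outlier
# split hidden, and must output a list of O(1/alpha) candidates, at least
# one of which is close to ell_star.
```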

Our algorithm succeeds whenever the inliers are chosen from a certifiably anti-concentrated distribution D. In particular, this gives a (d/\alpha)^{O(1/\alpha^8)}-time algorithm to find an O(1/\alpha)-size list when the inlier distribution is the standard Gaussian. For discrete product distributions that are anti-concentrated only in regular directions, we give an algorithm that achieves a similar guarantee under the promise that \ell^* has all coordinates of the same magnitude. To complement our result, we prove that the anti-concentration assumption on the inliers is information-theoretically necessary.
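For intuition, anti-concentration of the inlier distribution D says that no direction carries too much probability mass near zero. A standard (uncertified) form of the condition, given here as a hedged paraphrase rather than the paper's exact certifiable definition, is:

```latex
% Illustrative anti-concentration condition (paraphrase; the paper's
% ``certifiable'' variant further requires a low-degree sum-of-squares
% proof of such an inequality):
\Pr_{x \sim D}\bigl[\,|\langle x, v \rangle| \leq \delta\,\bigr] \leq C\,\delta
\quad \text{for every unit vector } v \text{ and every } \delta > 0.
% For D = N(0, I_d), this holds with C = \sqrt{2/\pi}, since the density
% of \langle x, v \rangle is at most 1/\sqrt{2\pi}.
```

Roughly speaking, when this condition fails - say \langle x, v \rangle = 0 with constant probability for some direction v - the vectors \ell^* and \ell^* + t v fit many of the same equations for every t, so no small list can be guaranteed; this is the kind of obstruction behind the information-theoretic necessity claim.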

To solve the problem, we introduce a new framework for list-decodable learning that strengthens the "identifiability to algorithms" paradigm based on the sum-of-squares method.