On the Power and Limitations of Random Features for Understanding Neural Networks

Part of Advances in Neural Information Processing Systems 32 (NeurIPS 2019)


Authors

Gilad Yehudai, Ohad Shamir

Abstract

Recently, a spate of papers has provided positive theoretical results for training over-parameterized neural networks (where the network size is larger than what is needed to achieve low error). The key insight is that with sufficient over-parameterization, gradient-based methods will implicitly leave some components of the network relatively unchanged, so the optimization dynamics will behave as if those components are essentially fixed at their initial random values. In fact, fixing these \emph{explicitly} leads to the well-known approach of learning with random features (e.g., \citep{rahimi2008random,rahimi2009weighted}). In other words, these techniques imply that we can successfully learn with neural networks whenever we can successfully learn with random features. In this paper, we formalize the link between existing results and random features, and argue that despite the impressive positive results, random feature approaches are also inherently limited in what they can explain. In particular, we prove that random features cannot be used to learn \emph{even a single ReLU neuron} (over standard Gaussian inputs in $\mathbb{R}^d$ and with $\text{poly}(d)$ weights), unless the network size (or the magnitude of its weights) is exponentially large in $d$. Since a single neuron \emph{is} known to be learnable with gradient-based methods, we conclude that we are still far from a satisfying general explanation for the empirical success of neural networks. For completeness, we also provide a simple self-contained proof, using a random features technique, that one-hidden-layer neural networks can learn low-degree polynomials.
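
As a rough illustration of the random-features setup discussed above (not the paper's construction or proof), the following minimal sketch fixes a randomly initialized ReLU feature layer and trains only the linear output layer to fit a single ReLU neuron over standard Gaussian inputs. The dimension, sample size, widths, ridge parameter, and least-squares solver are illustrative assumptions chosen for this sketch, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

d = 20                      # input dimension (illustrative)
n_train = 2000              # number of standard Gaussian samples
widths = [50, 500, 2000]    # random-feature counts to compare

# Target: a single ReLU neuron with a fixed unit-norm weight vector.
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)

X = rng.standard_normal((n_train, d))   # standard Gaussian inputs
y = np.maximum(X @ w_star, 0.0)         # labels from the single ReLU neuron

for width in widths:
    # Random features: first-layer weights are drawn once and kept fixed.
    W = rng.standard_normal((d, width)) / np.sqrt(d)
    b = rng.standard_normal(width)
    features = np.maximum(X @ W + b, 0.0)   # fixed random ReLU features

    # Only the linear output layer is trained (ridge-regularized least squares).
    lam = 1e-3
    A = features.T @ features + lam * np.eye(width)
    coef = np.linalg.solve(A, features.T @ y)

    train_mse = np.mean((features @ coef - y) ** 2)
    print(f"width={width:5d}  train MSE={train_mse:.4f}")

In this sketch only the output-layer coefficients are optimized, mirroring the "components fixed at their initial random values" regime; increasing the width shrinks the fitting error on the sampled data, but the paper's lower bound concerns how large the width (or weight magnitudes) must be to approximate a single ReLU neuron over the Gaussian distribution itself.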