NeurIPS 2020

Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization


Meta Review

All the reviewers agreed that the main results presented in this paper, namely the rigorous fixed-point equations for binary classification with a generic loss and an l2 regularizer, together with the more in-depth analysis of three specific losses (ridge, hinge, and logistic), are sound and interesting. Although the problem setting may be seen as simple and limited, the findings are rigorous and non-trivial, which is the paper's main strength. In this regard, it is important to clarify which statements are rigorous and which are not. Some reviewers pointed out that it would be nicer if a general criterion were provided for determining whether a particular loss achieves the rate \propto \alpha^{-1}; I think this point would be worth mentioning in the paper. All the reviewers rated this paper favorably, and their ratings were maintained after the author response. I therefore recommend acceptance of this paper.

Minor points:
- Line 125: Corollary 2.3 should be referred to as a corollary, not as a theorem.
- acos (arccosine) should not be italicized.