NeurIPS 2019
Paper ID: 6431
Title: Margin-Based Generalization Lower Bounds for Boosted Classifiers


The paper presents margin-based generalization lower bounds for boosted classifiers that match the known upper bounds, an exciting result given the popularity and success of boosting methods in machine learning. A typical margin bound for boosting controls the generalization error by the sum of the empirical margin loss (which increases with the margin parameter) and a complexity term (which decreases with the margin parameter); a representative bound of this form is sketched below. Consequently, a predictor that achieves small empirical margin loss at a large margin is guaranteed to generalize well. The lower bounds in the paper match these known upper bounds, showing that this trade-off is essentially tight.

There was some confusion about the statement of Theorem 1, and the authors did well to clarify those doubts in their response. They are strongly encouraged to incorporate these clarifications into the revised version so that other readers are not left with the same doubts. In particular, they should emphasize that the existence of a predictor with empirical margin risk bounded by \tau implies that a learning algorithm minimizing the empirical margin risk will find a predictor whose empirical margin risk is no larger than \tau; if possible, a formal statement to that effect would be welcome. In fact, on further thought, I wonder whether the risk of a predictor that minimizes the empirical margin loss should not be much smaller than \tau either.

Another concern, echoed by a reviewer, was that many interesting contributions were relegated to the supplementary material, leaving the main paper as little more than an advertisement of its results. While conference papers are indeed limited in space, I encourage the authors to move more of the proof sketches and novel ideas/insights from the appendix into the main paper.
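For concreteness, here is a representative upper bound of the kind alluded to above, in the spirit of the classical margin bound of Schapire et al. (1998); the exact statement, constants, and logarithmic factors in the submission may differ. For a voting classifier f in the convex hull of a finite base class H, with probability at least 1 - \delta over a sample S of size m, for every margin \theta > 0,

  \Pr_{(x,y) \sim D}\bigl[\, y f(x) \le 0 \,\bigr]
    \;\le\; \Pr_{(x,y) \sim S}\bigl[\, y f(x) \le \theta \,\bigr]
    \;+\; O\!\left(\sqrt{\frac{\log m \,\log |H|}{m\,\theta^2} + \frac{\log(1/\delta)}{m}}\right).

The first term (the empirical margin loss) grows with \theta while the second (the complexity term) shrinks with \theta, which is precisely the trade-off whose tightness the paper's lower bounds address.

As for the Theorem 1 clarification, the intended observation presumably amounts to the elementary fact that, writing \hat{L}_S^\theta for the empirical margin risk at margin \theta (generic notation, not necessarily the paper's), if some predictor h^* satisfies \hat{L}_S^\theta(h^*) \le \tau, then any minimizer \hat{h} \in \arg\min_h \hat{L}_S^\theta(h) satisfies \hat{L}_S^\theta(\hat{h}) \le \hat{L}_S^\theta(h^*) \le \tau.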