NeurIPS 2020

Self-Adaptive Training: beyond Empirical Risk Minimization


Meta Review

The paper focuses on the problem of learning from corrupted data (e.g., label noise) and introduces an improved training objective. This objective can be interpreted as a form of self-training, whereby the model's predictions are progressively averaged with the true (and possibly noisy) labels, coupled with a sample weighting scheme that improves training stability. The authors show that this approach applies to a variety of vision tasks, including classification under label noise, adversarial training, and selective classification.

The reviewers appreciated the conceptual simplicity of the method, the clarity of the presentation, and the promising empirical results. The discussion phase focused on the following two drawbacks:

- Theoretical justification: While a theoretical analysis is hard in the general case, it might be tractable in the corrupted linear regression setting, which could offer some valuable insights. There may be cases where such a scheme performs worse than ERM, and these should be discussed in the manuscript. The reviewers' opinions remain split on this issue.
- Empirical validation: Given that the experiments were performed only in the vision domain and on a small number of models, it is hard to judge whether this approach will robustly generalize to other modalities.

Notwithstanding the criticism above, the paper provides a relatively novel and conceptually simple idea, supported by solid experiments, on a topic relevant to the broader NeurIPS community, and I recommend acceptance. I strongly advise the authors to include all the relevant information from the rebuttal and to prominently discuss the potential failure modes of the proposed method.
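
For readers unfamiliar with the method, the objective summarized in the first paragraph (soft targets progressively averaged with model predictions, plus confidence-based sample weights) can be sketched roughly as follows. This is a minimal illustration under assumed conventions, not the authors' reference implementation; the hyperparameter names `alpha` and `warmup_epochs` are placeholders.

```python
# Minimal sketch of the described training objective (assumptions, not the
# paper's exact code). Requires PyTorch.
import torch
import torch.nn.functional as F

def self_adaptive_loss(logits, targets, epoch, warmup_epochs=60, alpha=0.9):
    """`targets` holds per-sample soft labels, initialized from the (possibly
    noisy) one-hot labels and updated in place across epochs."""
    probs = F.softmax(logits.detach(), dim=1)
    if epoch >= warmup_epochs:
        # Progressively average the running targets with model predictions.
        targets.mul_(alpha).add_(probs, alpha=1 - alpha)
    # Sample weights from target confidence: down-weights samples whose
    # labels the model has come to distrust, stabilizing training.
    weights, _ = targets.max(dim=1)
    log_probs = F.log_softmax(logits, dim=1)
    per_sample = -(targets * log_probs).sum(dim=1)
    return (weights * per_sample).sum() / weights.sum()
```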