NeurIPS 2020

Adversarial Weight Perturbation Helps Robust Generalization


Meta Review

This paper focuses on adversarial training. The proposal is to modify adversarial training by applying adversarial perturbations to both the inputs and the weights. The philosophy behind this sounds quite interesting to me: the authors identify a strong connection between the flatness of the weight loss landscape and the robust generalization gap. This philosophy leads to a novel algorithm design I have never seen before, namely the Adversarial Weight Perturbation (AWP) method. The clarity and novelty are clearly above the bar for NeurIPS. While the reviewers had some concerns about the significance, the authors did a particularly good job in their rebuttal. Thus, we have all agreed to accept this paper for publication! Please carefully incorporate all reviewers' comments in the final version, especially those of R2 and R3.
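To make the double-perturbation idea concrete, here is a minimal illustrative sketch on a linear logistic model, not the authors' implementation: one FGSM step perturbs the input, one norm-scaled ascent step perturbs the weights, and the descent gradient is taken at the perturbed weights. The step sizes `eps`, `gamma`, and `lr` are assumed hyperparameters for illustration.

```python
# Illustrative sketch of the AWP idea (NOT the authors' code): adversarial
# training of a linear logistic model with perturbations on both the input
# and the weights.
import numpy as np

def loss(w, x, y):
    # logistic loss, label y in {-1, +1}
    return np.log1p(np.exp(-y * (w @ x)))

def grad_w(w, x, y):
    # gradient of the logistic loss w.r.t. the weights
    s = 1.0 / (1.0 + np.exp(y * (w @ x)))  # sigmoid(-y * w.x)
    return -y * x * s

def grad_x(w, x, y):
    # gradient of the logistic loss w.r.t. the input
    s = 1.0 / (1.0 + np.exp(y * (w @ x)))
    return -y * w * s

def awp_step(w, x, y, eps=0.1, gamma=0.01, lr=0.1):
    # 1) inner maximization over the input (one FGSM step)
    x_adv = x + eps * np.sign(grad_x(w, x, y))
    # 2) adversarial weight perturbation: one ascent step,
    #    scaled by ||w|| with assumed hyperparameter gamma
    g = grad_w(w, x_adv, y)
    v = gamma * np.linalg.norm(w) * g / (np.linalg.norm(g) + 1e-12)
    # 3) descend using the gradient evaluated at the perturbed weights w + v
    return w - lr * grad_w(w + v, x_adv, y)

rng = np.random.default_rng(0)
w = rng.normal(size=5)
x, y = rng.normal(size=5), 1.0
w_new = awp_step(w, x, y)
```

The key point the sketch illustrates is step 3: the update direction is the gradient at the adversarially perturbed weights, which biases training toward flat regions of the weight loss landscape.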