NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 8088
Title: Robust Attribution Regularization


Overall, the reviewers found that the paper makes an intriguing and useful connection between attribution methods, traditionally used for model explainability, and robust predictions for improved generalization. They also found the paper to be clearly written. I found R4's suggestion to search for "negative results" around attribution brittleness to be an important one, and agree that the paper would be strengthened if such examples could be found to help the reader better understand the behavior of robust saliency maps. I also expect the authors to take advantage of the extra space in the camera-ready version to include additional datasets and empirical results, as they indicated they would in the author response.