NeurIPS 2020

The Generalization-Stability Tradeoff In Neural Network Pruning


Meta Review

The paper studies the effect of pruning on the generalization ability of neural networks. It introduces a notion of pruning instability (a measure of how far the pruned network departs from the original function, i.e., the drop in accuracy immediately after pruning) and shows that instability correlates positively with the generalization of neural networks. The paper is purely empirical; while the reviewers initially had some concerns regarding the choice of architectures, hyperparameters, and datasets, some of these concerns were properly addressed in the rebuttal. Overall, the paper introduces an interesting view on pruning that is backed up to a large extent by its experimental results. The reviewers agree that some aspects could be improved and have made many suggestions. I recommend acceptance, but I also strongly encourage the authors to revise the paper according to the reviews to maximize its potential impact.
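
For concreteness, a minimal sketch of one plausible reading of the instability measure, assuming it is the relative drop in test accuracy at a pruning event (the paper's exact normalization may differ):

```python
def pruning_instability(acc_before: float, acc_after: float) -> float:
    """Hypothetical sketch: instability as the relative drop in test
    accuracy immediately after a pruning event. The paper's precise
    definition and normalization may differ."""
    return (acc_before - acc_after) / acc_before

# Example: accuracy drops from 92% to 88% right after pruning.
print(pruning_instability(0.92, 0.88))  # ~0.043
```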