This paper presents a method for pruning filters in convolutional neural networks by introducing a scientific control group of knockoff features, which reduces the disturbance of irrelevant factors. The authors also analyze the knockoff condition theoretically and derive the knockoff features given the knockoff data. Experiments are conducted on CIFAR-10 and ImageNet.

The reviewers and AC have read the author feedback carefully in addition to all the reviews. It is generally agreed that the proposed method is novel and interesting in that it requires no arbitrary thresholds or hyperparameters for pruning. The theoretical analysis is sound and establishes the requirements for knockoff features. The experimental results show that the method is effective, performing comparably to or better than state-of-the-art pruning methods. Nevertheless, the clarity of the paper has room for improvement to make it more accessible to readers.

One reviewer upgraded their overall score during the post-rebuttal discussion. At the end of the discussion period, three of the four reviewers were supportive of accepting the paper, and the remaining reviewer did not object to its acceptance. Consequently, the AC recommends accepting the paper for poster presentation. All reviewers have made specific suggestions that can help improve the clarity and presentation of the paper, and the authors are encouraged to address them carefully in the revision.