This paper presents a novel adversarial attack method based on motion blur. The method can generate visually natural motion-blurred images that fool DNNs for visual recognition. The paper is well written, and the proposed methods are convincing. One reviewer is convinced of the merits of the paper and recommends clear acceptance. A second and a third reviewer consider the paper above the acceptance threshold, finding the problem very interesting and the approach clear; this is the first attempt to investigate kernel-based adversarial attacks. A fourth reviewer appreciated the paper and the rebuttal but maintained that the paper is below the acceptance threshold, mainly because, in his/her view, a blur detector could be used to restore the image and thus defeat the attack. After a long discussion, consensus was not reached, but the majority of reviewers agree on acceptance. The AC also agrees.