The paper was discussed extensively among all the reviewers, who in the end agreed that its contributions are sufficiently significant (in terms of the scientific method and the insights offered to the computational neuroscience community). The paper proposes two neuroscience-inspired mechanisms that improve the robustness of neural networks to adversarial attacks. “Retinal fixations” refers to non-uniform sampling of the visual field around the fixation point (densest close to the center); “cortical fixations” refers to multi-scale processing by smaller branches of the network (a minimal sketch of the sampling idea appears below). Both mechanisms appear to work to some extent for small perturbations. The reviewers also made a number of suggestions that would further improve the quality of the results (especially for the ML audience); these are all detailed in the "updated" reviews (e.g. a more thorough evaluation).
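
For concreteness, here is a minimal sketch of the non-uniform ("retinal") sampling idea, assuming a simple ring-based schedule. The function name, the geometric radius spacing, and all parameters are illustrative choices, not taken from the paper:

```python
import numpy as np

def retinal_sample(image, fixation, n_rings=8, n_angles=32):
    """Sample an image non-uniformly around a fixation point.

    Sample density decreases with eccentricity (densest at the center),
    loosely mimicking the retinal photoreceptor layout. `image` is an
    (H, W) or (H, W, C) array; `fixation` is (row, col). Ring radii
    grow geometrically, so the periphery is sampled sparsely.
    Illustrative sketch only, not the authors' implementation.
    """
    h, w = image.shape[:2]
    cy, cx = fixation
    max_r = min(h, w) / 2
    # Geometrically spaced radii: fine near the fovea, coarse in the periphery.
    radii = max_r * (np.geomspace(1, 2 ** n_rings, n_rings) - 1) / (2 ** n_rings - 1)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    samples = []
    for r in radii:
        ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, h - 1)
        xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, w - 1)
        samples.append(image[ys, xs])
    return np.stack(samples)  # shape: (n_rings, n_angles) or (n_rings, n_angles, C)
```

Called as, e.g., `retinal_sample(img, (img.shape[0] // 2, img.shape[1] // 2))`, this returns a ring-by-angle grid in which the inner rings cover a small central region densely while the outer rings cover the periphery coarsely, which is the property the robustness argument relies on.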