In this submission, the authors present the hypothesis that aligning models more closely with computations in the primary visual cortex yields more robust models. The authors first show that model robustness is correlated with a CNN's ability to explain the variance in primary visual cortex recordings. Based on this observation, they develop a hybrid model for image classification in which fixed features from a simple model of primary visual cortex are prepended to a CNN. They demonstrate that the resulting hybrid model is more robust to white-box adversarial attacks and marginally more robust to image distortions. Through a series of ablations, the authors attribute the gains in robustness to the front-end's nonlinearity and stochasticity, with stochasticity providing the greatest benefit. The reviewers raise some concerns about the strength of the overall methodology for assessing robustness. That said, all reviewers found the presentation of the results extremely clear and the line of research quite exciting. For all of these reasons, this paper will be accepted and published at NeurIPS.