Part of Advances in Neural Information Processing Systems 36 (NeurIPS 2023) Main Conference Track
Odelia Melamed, Gilad Yehudai, Gal Vardi
Despite a great deal of research, it is still not well understood why trained neural networks are highly vulnerable to adversarial examples. In this work we focus on two-layer neural networks trained on data that lies on a low-dimensional linear subspace. We show that standard gradient methods lead to non-robust neural networks, namely, networks which have large gradients in directions orthogonal to the data subspace and are susceptible to small adversarial L2-perturbations in these directions. Moreover, we show that decreasing the initialization scale of the training algorithm, or adding L2 regularization, can make the trained network more robust to adversarial perturbations orthogonal to the data.
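A minimal sketch of the kind of setup the abstract describes: train a two-layer ReLU network on synthetic data confined to a low-dimensional linear subspace, then compare the input gradient's component inside the subspace with its component orthogonal to it. This is not the paper's exact construction; all dimensions, labels, and hyperparameters below are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
d, k, n, width = 100, 5, 200, 512   # ambient dim, subspace dim, samples, hidden width (assumed values)

# Random orthonormal basis for a k-dimensional subspace of R^d; data lies exactly on it.
basis, _ = torch.linalg.qr(torch.randn(d, k))        # d x k, orthonormal columns
X = torch.randn(n, k) @ basis.T
y = torch.sign(X @ basis[:, 0])                      # labels from a linear rule on the subspace (illustrative)

# Two-layer ReLU network with standard initialization.
model = torch.nn.Sequential(
    torch.nn.Linear(d, width),
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)

opt = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(500):
    opt.zero_grad()
    loss = torch.nn.functional.soft_margin_loss(model(X).squeeze(), y)
    loss.backward()
    opt.step()

# Input gradient at a training point, split into on-subspace and orthogonal parts.
x = X[0].clone().requires_grad_(True)
model(x).sum().backward()
g = x.grad
g_on = basis @ (basis.T @ g)                         # projection onto the data subspace
g_orth = g - g_on
print(f"|grad| on subspace: {g_on.norm():.4f}, orthogonal: {g_orth.norm():.4f}")
```

Under the paper's claim, the orthogonal component of the gradient is expected to be large after standard training; rerunning such a sketch with a smaller initialization scale or with L2 (weight-decay) regularization is the kind of intervention the abstract suggests would shrink it.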