This paper proposes a defense against adversarial examples based on generative models and adversarial training. But that's only the framing; in reality, the paper is a study of the manifold hypothesis as a defense. The authors construct OM-ImageNet, an ImageNet variant whose images lie entirely on the manifold of a GAN, which makes it possible to evaluate the robustness of defenses that project inputs onto that manifold. Such manifold-projection defenses are normally hard to evaluate because natural images are never completely on-manifold; OM-ImageNet removes that confound. The authors also construct a defense within this setup. I don't believe the claim that this defense works, but OM-ImageNet is an idea I haven't seen before, and future study on this dataset should be interesting.
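The core construction can be sketched as follows. This is a hypothetical toy illustration, not the paper's actual pipeline: the "generator" here is a random linear map rather than a trained GAN, and `project_on_manifold` stands in for the latent-space optimization (gradient descent on a reconstruction loss) that a real setup would run against the GAN's generator. Every image is replaced by its projection onto the generator's range, so the resulting dataset is on-manifold by construction.

```python
import numpy as np

# Toy stand-in for a GAN generator: G(z) = W @ z maps an 8-dim latent
# space to 64-dim "images". A real setup would use a trained generator
# and backpropagate through it to optimize z.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 8))

def project_on_manifold(x, steps=300, lr=0.003):
    """Find z minimizing ||G(z) - x||^2 by gradient descent; return G(z).

    The returned point lies in the range of the generator, i.e. it is
    "on-manifold" by construction.
    """
    z = np.zeros(W.shape[1])
    for _ in range(steps):
        grad = 2 * W.T @ (W @ z - x)  # gradient of the squared error
        z -= lr * grad
    return W @ z

# Build the on-manifold version of an (off-manifold) image.
x = rng.standard_normal(64)
x_on = project_on_manifold(x)

# Because x_on already lies on the manifold, re-projecting it is
# (numerically) a no-op -- the property that makes robustness
# evaluation of projection defenses clean on such a dataset.
assert np.linalg.norm(project_on_manifold(x_on) - x_on) < 1e-4
```

Applying this projection to every image in the dataset yields the analogue of OM-ImageNet: a projection defense evaluated on it can no longer hide behind off-manifold reconstruction error.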