The Point Where Reality Meets Fantasy: Mixed Adversarial Generators for Image Splice Detection

Part of Advances in Neural Information Processing Systems 32 (NeurIPS 2019)


Authors

Vladimir V. Kniaz, Vladimir Knyaz, Fabio Remondino

Abstract

Modern photo editing tools make it easy to create realistic manipulated images. While fake images can be generated quickly, learning models to detect them is challenging due to the high variety of tampering artifacts and the lack of large labeled datasets of manipulated images. In this paper, we propose a new framework for training a discriminative segmentation model via an adversarial process. We simultaneously train four models: a generative retouching model G_R that translates manipulated images to the real image domain, a generative annotation model G_A that estimates the pixel-wise probability of an image patch being either real or fake, and two discriminators D_R and D_A that qualify the outputs of G_R and G_A. The aim of model G_R is to maximize the probability of model G_A making a mistake. Our method extends the generative adversarial networks framework with two main contributions: (1) training a generative model G_R against a deep semantic segmentation network G_A that learns rich scene semantics for manipulated region detection, and (2) a per-class semantic loss that facilitates semantically consistent image retouching by G_R. We collected a large-scale manipulated image dataset to train our model. The dataset includes 16k real and fake images with pixel-level annotations of manipulated areas, and also provides ground-truth pixel-level object annotations. We validate our approach on several modern manipulated image datasets, where quantitative results and ablations demonstrate that our method matches and surpasses the state of the art in manipulated image detection. We have made our code and dataset publicly available.
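The four-model adversarial setup described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy elementwise "networks", the binary cross-entropy losses, and all names here are placeholder assumptions standing in for G_R, G_A, D_R, and D_A, and the per-class semantic loss is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(pred, target):
    # Binary cross-entropy averaged over all elements (assumed loss choice).
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

# Dummy 4x4 "image patches" and a ground-truth manipulation mask (1 = fake pixel).
fake = rng.random((4, 4))
real = rng.random((4, 4))
gt_mask = (rng.random((4, 4)) > 0.5).astype(float)

# Toy stand-ins for the four models (real networks in the paper):
G_R = lambda x: np.clip(x + 0.1, 0.0, 1.0)   # retouching generator: fake -> real domain
G_A = lambda x: sigmoid(4.0 * (x - 0.5))     # annotation generator: per-pixel fake probability
D_R = lambda x: sigmoid(np.mean(x) - 0.5)    # discriminator scoring realism of images
D_A = lambda m: sigmoid(np.mean(m) - 0.5)    # discriminator scoring plausibility of masks

retouched = G_R(fake)
pred_mask = G_A(retouched)

# G_A is trained to recover the ground-truth manipulation mask ...
loss_GA = bce(pred_mask, gt_mask)
# ... while G_R is trained adversarially so that G_A labels retouched pixels as real.
loss_GR_adv = bce(pred_mask, np.zeros_like(gt_mask))
# D_R and D_A qualify the outputs of G_R and G_A against real images / true masks.
loss_DR = bce(np.array([D_R(real)]), np.array([1.0])) \
        + bce(np.array([D_R(retouched)]), np.array([0.0]))
loss_DA = bce(np.array([D_A(gt_mask)]), np.array([1.0])) \
        + bce(np.array([D_A(pred_mask)]), np.array([0.0]))

print(loss_GA, loss_GR_adv, loss_DR, loss_DA)
```

In an actual training loop these losses would be minimized with alternating gradient updates, with G_R's objective opposing G_A's, in the usual GAN fashion.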