Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Summary: This paper provides a novel perspective on studying the generalization of GANs. The authors theoretically and experimentally analyze the connection between information leakage and the generalization of GANs.

Strengths:
1. The insight of studying the generalization of GANs from the viewpoint of privacy protection is very interesting. This work may motivate follow-up research on the properties of GANs from the privacy perspective.
2. The theoretical analysis, which employs differential privacy and stability-based theory, is insightful.
3. The experiments demonstrate that different Lipschitz regularization techniques can not only reduce information leakage but also improve the generalization ability of GANs. This conclusion is useful for guiding the training of GANs in practice.

Weaknesses:
1. The attack used in this paper is relatively straightforward. The authors should evaluate different attack methods and report the corresponding experimental results.
2. The theoretical analysis mentions the connection between this work and the Bayesian GAN. It would be better to also evaluate the information leakage of the Bayesian GAN in the experimental section.

Comment after rebuttal: The authors addressed the raised concerns. As pointed out by other reviewers, more advanced theoretical findings are desired.
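To make the third strength concrete: spectral normalization is one well-known Lipschitz regularization technique for GAN discriminators (constraining each linear layer to be 1-Lipschitz by dividing its weight matrix by its largest singular value). The following is a minimal illustrative sketch, not code from the paper under review; the function name and the use of power iteration are my own choices for the example.

```python
import numpy as np

def spectral_normalize(W, n_iters=50):
    # Power iteration to estimate the largest singular value of W,
    # then rescale W so its spectral norm is ~1 (a 1-Lipschitz linear map).
    u = np.random.RandomState(0).randn(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimated largest singular value
    return W / sigma

W = np.random.RandomState(1).randn(4, 3)
W_sn = spectral_normalize(W)
print(np.linalg.norm(W_sn, 2))  # spectral norm of the normalized matrix, close to 1
```

Applying this to every discriminator layer bounds the network's Lipschitz constant, which is the mechanism the review credits with both reducing leakage and shrinking the generalization gap.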
This paper studies the generalization property of GAN models from an interesting perspective, built on the intuition that reducing the generalization gap usually coincides with protecting privacy. To my knowledge, this angle of analysis is new to the body of GAN research. The empirical results and discussion on membership attack methods and various regularizations also enrich our understanding of GAN performance. That said, the purely theoretical advancement is good but not strong: the key results, Theorems 1 and 2, are direct variations of differential privacy and may appear incremental rather than fundamental. Section 4 can be improved; it would be great if the authors could also provide guidelines for GAN design along the way.
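For context on the claim that the key theorems are variations of differential privacy (this is the standard definition, not a result from the paper under review): a randomized training algorithm $M$ is $(\varepsilon, \delta)$-differentially private if, for all datasets $S, S'$ differing in a single example and all measurable output sets $A$,

$$\Pr[M(S) \in A] \leq e^{\varepsilon}\,\Pr[M(S') \in A] + \delta.$$

Intuitively, the trained model barely depends on any one training point, which is exactly the kind of stability that bounds the generalization gap.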
Overall I think this paper raises an interesting perspective on understanding adversarial generative models. I think this paper has some value in raising the question and offering some interesting experimental results. The theory is quite standard: the authors first cite a relationship between differential privacy and RO stability, then cite that RO stability bounds the generalization gap. The shortcoming is that the theory only analyzes the discriminator, which does not seem much different from previous work analyzing classifiers. It would be much more interesting and novel to see an analysis of the joint learning process of the generator and discriminator.

Experiments: The experiments show a correlation between regularization, less information leakage, and a reduced generalization gap. In particular, to show information leakage, the authors propose a simple scheme of using the value of the discriminator output to decide whether an image is from the training set. I am actually surprised that such a method could work. I think some of the claims regarding the experiments are a little strong, as the relationships are not necessarily causal. But of course, it is extremely hard, in almost every deep learning setup, to establish a strong causal relationship between a change of modeling choices and a change of performance, so I will not over-criticize this point.

The writing is generally easy to read. There might be a typo on line 249, where the \leq should be a \geq.

--------------- After rebuttal

Thank you for your response. I would like to maintain my score after the rebuttal, for the following reasons: I think the proposed theoretical improvements are hard to materialize. For example, it seems that composition of differential privacy will lead to a very loose bound, as information leakage will add up. I do not know whether practical results can be obtained. Overall, I think the paper is of okay quality, the perspective is interesting, and the writing is good.
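The threshold attack this review describes can be sketched in a few lines. This is my own toy illustration with synthetic discriminator scores, not the paper's implementation; the distributions and the threshold value are assumptions chosen only to show why an overfit discriminator leaks membership.

```python
import numpy as np

def membership_attack(d_scores, threshold):
    """Predict 'member' (1) when the discriminator score exceeds the threshold."""
    return (d_scores > threshold).astype(int)

# Synthetic scores: an overfit discriminator tends to assign higher
# outputs to training images than to held-out images.
rng = np.random.default_rng(0)
train_scores = rng.normal(0.8, 0.1, 1000)    # scores on training-set images
holdout_scores = rng.normal(0.5, 0.1, 1000)  # scores on held-out images
scores = np.concatenate([train_scores, holdout_scores])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])

acc = (membership_attack(scores, 0.65) == labels).mean()
print(f"attack accuracy: {acc:.2f}")  # well above the 0.50 random-guess baseline
```

When regularization closes the gap between the two score distributions, the best achievable threshold accuracy falls toward 0.5, which is the correlation between leakage and the generalization gap that the experiments report.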
It's not the most novel or surprising paper, but I am happy to see this paper accepted at NeurIPS.