Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Originality: The task is new to me. Adding noise for GAN training is not new, but the guess discriminator seems new to me.

Quality: The paper's claims are supported by its experimental results. I think this is a complete piece of work.

Clarity: The paper is acceptable in terms of clarity, but I would like to see a detailed model structure, especially for the guess discriminator.

Significance: The paper proposes several metrics for evaluating model quality, which could be very useful for comparing different methods. Unfortunately, no other methods are compared in this paper, so it is hard to say whether the proposed method is much better than existing ones.
The submission clearly builds upon the observations made in [reference], and extends/complements them in meaningful ways. In particular, it contributes mitigation techniques as well as improved/complementary evaluation metrics. Overall, the submission is written clearly and remains very readable throughout. Although not strictly part of this evaluation, the provided supplementary material is exemplary and can help reproduce these results.

I see the submission as a high-quality contribution that 1) gains deeper insight into the workings of [unpaired] image-to-image translation systems, and 2) improves their quality. Both of these goals have been reached by means of the contributions a)-c). The defense techniques presented in Section 4 rest more on empirical observation (i.e., results get better) than on provable guarantees, but this does not diminish their usefulness or level of significance. While adversarial training with noise (Section 4.1) is a rather obvious approach (and even referred to by the authors as a "simplistic defense approach"), the guess discriminator loss in Section 4.2 is a more interesting modification. The loss terms are generic enough to be applied to any kind of cyclic/reconstruction-based image-to-image translation architecture.

The experimental results are convincing, both in terms of the data sets on which they were evaluated and in terms of the results themselves. The experiments are thorough enough overall to be significant. It would have been even better to see what a combination of the two loss terms can achieve, i.e., another row "CycleGAN + noise* + guess*" in Tables 2 and 3 (after optimizing the loss-weighting hyperparameters). The novel "metrics" are quite ad hoc but make sense, and appear to provide further insight into the behavior of these GAN-based translation networks. Coming up with good metrics here is not easy, so this contribution is appreciated.
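For concreteness, the noise defense of Section 4.1 can be sketched roughly as follows. This is a minimal NumPy sketch under my own reading of the paper; the function names, the L1 reconstruction loss, and the Gaussian noise model are my assumptions, not the authors' code:

```python
import numpy as np

def noisy_cycle_loss(G, F, x, sigma=0.1, rng=None):
    """Hypothetical cycle-consistency loss with a noise defense.

    G maps domain X to Y, F maps Y back to X. Gaussian noise is added to
    the intermediate translation before reconstruction, so the generator
    cannot hide the input in fragile low-amplitude patterns.
    """
    rng = np.random.default_rng() if rng is None else rng
    y_fake = G(x)                                              # forward translation
    y_noisy = y_fake + sigma * rng.normal(size=y_fake.shape)   # defense step
    x_rec = F(y_noisy)                                         # backward translation
    return float(np.mean(np.abs(x_rec - x)))                   # L1 cycle loss
```

With `sigma=0` this reduces to the standard cycle-consistency loss, which is why I call the approach rather obvious: it is a one-line change to the training objective.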
The sensitivity-to-noise metric should be directly improved by the noise defense, and, unsurprisingly, this approach yields the best results under that metric.

Minor comments:
- References [?] and [?] are the same paper.
- I think a word is missing in the sentence starting in l. 102; I get the meaning, though.
- Extraneous word ('is') in l. 112.
- Typo in l. 156: 'Coditional' -> 'Conditional'.
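As an illustration of the kind of metric at issue here, a sensitivity-to-noise score might be sketched like this. This is my own toy formulation, not the paper's definition; the RMS distance and the Gaussian perturbation model are assumptions:

```python
import numpy as np

def noise_sensitivity(G, x, sigma=0.05, trials=8, rng=None):
    """Mean RMS change in G's output under small Gaussian input perturbations."""
    rng = np.random.default_rng() if rng is None else rng
    y = G(x)
    diffs = [
        np.sqrt(np.mean((G(x + sigma * rng.normal(size=x.shape)) - y) ** 2))
        for _ in range(trials)
    ]
    return float(np.mean(diffs))
```

A translator that hides a self-adversarial signal in its output should score high under such a metric, while the noise defense explicitly trains that sensitivity away.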
I find the self-adversarial attack observation quite interesting, but I am not convinced that the proposed defense techniques are novel enough for this submission. Note that the self-adversarial attack is not a new observation (as the paper's extensive citations acknowledge), and both defense techniques (adding noise and adding a pairwise discriminator) already exist in the literature.

Pros: The paper is quite well written and properly summarizes the related work. It shows significant effort in conducting experiments.

Cons: Novelty is insufficient, as most of the proposed solutions and observations have already been published. More insight into the proposed solutions is needed, beyond their resemblance to prior work.
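To make the "pairwise discriminator" point concrete, my reading of the guess-discriminator idea can be sketched as follows. This is a hypothetical NumPy sketch: `D_guess`, the channel-wise pairing, and the cross-entropy form are my assumptions, not the authors' implementation:

```python
import numpy as np

def guess_loss(D_guess, x, x_rec, rng=None):
    """Cross-entropy for guessing which slot of a pair holds the reconstruction.

    The pair (original, cycle reconstruction) is stacked along the channel
    axis in random order; D_guess outputs the probability that slot 0 is
    the reconstruction. The generator is trained to make this guess hard.
    """
    rng = np.random.default_rng() if rng is None else rng
    if rng.integers(0, 2):  # reconstruction first
        pair, label = np.concatenate([x_rec, x], axis=1), 1.0
    else:                   # original first
        pair, label = np.concatenate([x, x_rec], axis=1), 0.0
    p = D_guess(pair)
    eps = 1e-7
    return float(-(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps)))
```

A maximally confused discriminator (always outputting 0.5) incurs a loss of log 2 regardless of the ordering, which is the fixed point the generator pushes toward.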