NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at Vancouver Convention Center
Paper ID: 5716
Title: Semi-Implicit Graph Variational Auto-Encoders


The paper "carefully designed the SIG-VAE model" and showed SOTA results. We judged that the contributions are significant, but reviewers raised the following concerns. - No technical novelty in SIVI+GVAE (= naive SIG-VAE in Section 3.2). - Contributions are (non-naive!) SIG-VAE modeling (3-4) for graph analysis, giving strong empirical result (SOTA in link prediction, comparable in node classification, etc.) - Contribution in interpretability is not convincing. We strongly recommend the authors to revise the paper significantly. The biggest problem is in organization. The paper is written as if the main contribution is SIVI+GVAE, which is straightforward as reviewers pointed out. The main contribution (what the authors say in the rebuttal "careful design of SIG-VAE") must be appropriately highlighted. - Section 3.1 should be shrunken and moved to Section 2, since nothing there is novel. - Contributions would become clearer if the authors set the naive methods in Section 3.2 as baselines. - Highlight not the methodology (SIVI+GVAE) but the modeling (3-4). The second sentence in the rebuttal "SIG-VAE integrates a carefully designed generative model" really explains the main contribution. In the submitted version, the "carefully designed" modeling is not carefully explained. The authors say propagating uncertainty is essential. But is (3-4) only the way to propagate uncertainty? Or details don't matter if uncertainty is propagated? This is the most important point. If there are many possibilities and the authors made a particular choice, they should justify it. Also in Line 257-261 the authors explain their two-stage learning for the case without node attributes, without any justification.