NeurIPS 2019
This paper studies the approximation power of graph neural networks (GNNs) on combinatorial optimization problems. Its main result is to establish an equivalence between these networks and distributed local algorithms, and to propose a stronger family of GNNs that yields approximation algorithms for several NP-hard problems with provably better approximation ratios. Reviewers had mixed impressions of this paper. While they recognized the theoretical contribution of this work, they were also somewhat concerned by the lack of numerical examples illustrating how the ideas developed in the theorems apply beyond the three NP-hard optimization problems studied. The AC shares these concerns, but ultimately thinks the positive aspects of the paper outweigh the negatives.