Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
The paper is interesting and well-written. It is a little dense, which is somewhat unavoidable given the page limit. It may help if the part on "Graph Theoretic Formulation" were expanded in some way, since the high-level description is a bit difficult to follow. To make space, it might be possible to move some of the lemma statements to the supplementary material. I did not go through the proofs in detail.
1. Originality: High. I think this is quite different from most previous work, which involves propagating bounds through the network (for incomplete verification) or MILP/SMT-based exhaustive search approaches.
2. Quality: The quality of the work is quite high. My biggest concern is with the experiments (see the section on suggested improvements).
3. Clarity: Good. The paper is largely very well written and quite easy to follow. I appreciate the effort the authors put into the manuscript.
4. Significance: Low. While the approach is new, the evidence presented is not strong enough to believe that this is indeed a tractable approach for verifying the exact robustness of deep networks.
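For context on the "propagating bounds" family of incomplete verifiers contrasted above, interval bound propagation pushes an axis-aligned input box through each layer to obtain (loose) output bounds. A minimal sketch, my own illustration rather than anything from the paper under review; the weight matrix `W`, bias `b`, and input box are made-up toy values:

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    # Interval arithmetic for x -> Wx + b: positive weights take the
    # matching bound, negative weights take the opposite bound.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps bounds to bounds directly.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy input box [-1, 1]^2 through one affine + ReLU layer.
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
W = np.array([[1.0, -1.0], [0.5, 0.5]])
b = np.array([0.0, 0.0])
lo, hi = interval_relu(*interval_affine(lo, hi, W, b))
```

Such bounds are cheap but loose, which is exactly why these methods are incomplete: a property may hold even when the propagated bounds fail to certify it.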
The main contribution of this work is a new approach to neural network robustness verification based on polyhedral complexes.

Quality: The theoretical results are clear and the intuition behind them is explained. (I have not verified the proofs in the appendices.) The proposed algorithms are clever and neatly explained. The main missing component is a more extensive empirical evaluation:
-- 128 random samples seems very small to draw reliable conclusions.
-- Only one type of network is considered (binary classification of 1 vs. 7 on MNIST via standard training). The performance of a verifier typically varies a lot depending on how the network was trained (whether it was trained to be robust and by which method). For example, training to enforce ReLU stability would reduce the iteration time, as mentioned in the paper.
-- The main gain of the proposed approach is that, under limited time constraints, it can verify much more than complete methods like MILP. How does this compare to other LP/SDP-based incomplete verifiers?

Originality: The approach towards certifying robustness of neural networks is novel and very different from existing approaches. However, I am not familiar enough with the geometric results involved to comment on whether the theorems require novel techniques over existing work in the area.

Clarity: The main ideas of the paper are well presented and the intuition is clearly explained. The technical exposition could be improved in parts, especially in defining and using terms that are not standard in the ML literature. Some minor specific points:
a) Corollary 3.2: T refers to the boundary of the polytope but is not mentioned in the statement of the corollary, which is quite confusing on a first read.
b) The definition of a convex polytope in Section 3 seems to be missing the convexity requirement.
c) The Chebyshev ball is not formally defined.
d) A figure representing the bipartite graph could aid understanding.

Significance: I think this work is significant, as it provides a new approach to quantifying the robustness of a network.
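On the Chebyshev ball raised in point (c): for a polytope {x : Ax <= b}, it is the largest inscribed ball, and its center and radius solve a small linear program, max r subject to a_i^T x + r ||a_i|| <= b_i. A quick sketch of this standard construction (my own illustration, not the paper's code), using SciPy's `linprog` on a unit square:

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_ball(A, b):
    # Variables z = (x, r); minimize -r to maximize the inscribed radius.
    norms = np.linalg.norm(A, axis=1)
    c = np.zeros(A.shape[1] + 1)
    c[-1] = -1.0
    # Each facet constraint a_i^T x + r * ||a_i|| <= b_i.
    A_ub = np.hstack([A, norms[:, None]])
    res = linprog(c, A_ub=A_ub, b_ub=b,
                  bounds=[(None, None)] * A.shape[1] + [(0, None)])
    return res.x[:-1], res.x[-1]

# Unit square [-1, 1]^2: the Chebyshev ball is centered at the origin
# with radius 1.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 1.0, 1.0])
center, radius = chebyshev_ball(A, b)
```

A one-line definition along these lines in Section 3 would resolve the ambiguity.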
While the preliminary results presented in the paper do not yet address scalability issues, and provide only a small improvement over the baselines considered, this new approach could propel innovations in certifying neural network properties based on similar geometric ideas.

-- Author feedback response --
I thank the authors for their response and clarification.