Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Pros:
1. The paper is well written and easily readable.
2. It presents a novel approach for state space reduction of neural networks that could be extended to similar problems.
3. It addresses an important issue of computational complexity in output range analysis for neural networks.
Cons:
1. The paper should explore different partitioning schemes and provide a timing comparison among them.
2. It should provide a metric to compare the degree of over-approximation relative to the approach in
First of all, my knowledge of formal verification of neural networks is very limited, and I apologize for the limitations this poses on my review. That said, I found this paper very interesting and well written, and from my limited understanding of the literature, it seems like a novel and highly useful addition to the toolbox for verifying neural network models. I am strongly in favor of acceptance. My main questions are the following:
* It is not clear to me what increase in false positives the method introduces by relaxing the estimate of the network's output to a superset.
* I would like to see a more formal definition of the algorithm, with its "moving pieces" (e.g., partitioning strategies) stated more explicitly, followed by a discussion of the considerations that go into choosing these "moving pieces".
* What are the practical limitations of the method on real-world network sizes and architectures?
Overall, I like the approach in the paper, and the theoretical background looks solid (though I did not check the proofs). My main problem with the paper is the experimental part:
* only one experiment is considered, on a rather toy neural network;
* no comparison with other methods is made.
Thus, it is impossible to evaluate the usefulness of the proposed method.