NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
This paper relates sum-product tensor operations (a.k.a. tensor networks) to compressed/factorized convolutional layers in neural networks. In doing so, the authors formally define a new kind of layer, the einconv layer, which generalizes previously proposed approaches for compressing CNNs (a toy sketch of one such factorized layer is given after the minor comments below). An extensive search over the space of possible layers is performed to compare new factorized layers with existing ones. The reviewers agree that the idea is original and well executed and that the paper has the potential to be significant. One concern is that the proposed enumeration algorithm used in the experiments is not practical, which is true. Nonetheless, this paper opens the way for future research in this direction (how to efficiently search the space of einconv/factorized layers), demonstrates that other factorization models can be considered, and provides a clear picture of the connections between tensor networks and the existing literature on compressing convolutional layers, all of which are significant contributions.

Minor comments:
- A final proofreading of the paper is needed (e.g. the first sentence of Section 3.2; line 207, "this would be happen"; line 211, "We" -> "we"; ...).
- I may have missed it, but I believe the notation \mathbb{T}_V(R) (used to denote the set of rank-R tensors factorized according to V) has not been introduced.
- In Proposition 2, I believe \mathbb{T}_{V \setminus v_m}(R) should be \mathbb{T}_{V \setminus v_m}(\tilde{R}) with \tilde{R} = R \setminus \{R_i \mid i \in v_m\}.
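For concreteness, here is a minimal NumPy sketch (not code from the paper; the sizes, variable names, and the choice of a CP factorization are all illustrative assumptions) of the kind of factorized convolution the einconv layer generalizes: a CP-decomposed kernel applied as a sequence of small einsum contractions, checked against the dense convolution built from the same factors.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): channels, kernel width, CP rank.
C_in, C_out, k, R = 4, 6, 3, 2
H = W = 8  # input feature-map size

# CP factors of a conv kernel: K[o,i,p,q] = sum_r A[o,r] B[i,r] P[p,r] Q[q,r]
A = rng.standard_normal((C_out, R))
B = rng.standard_normal((C_in, R))
P = rng.standard_normal((k, R))
Q = rng.standard_normal((k, R))
x = rng.standard_normal((C_in, H, W))

# Dense path: materialize the full kernel, then apply a 'valid' convolution.
K = np.einsum('or,ir,pr,qr->oipq', A, B, P, Q)
patches = sliding_window_view(x, (k, k), axis=(1, 2))  # (C_in, H', W', k, k)
y_dense = np.einsum('oipq,ihwpq->ohw', K, patches)

# Factorized path: the dense kernel is never formed.
z = np.einsum('ir,ihw->rhw', B, x)              # 1x1 conv over input channels
zh = sliding_window_view(z, k, axis=1)          # (R, H', W, k)
u = np.einsum('rhwp,pr->rhw', zh, P)            # depthwise conv along height
uw = sliding_window_view(u, k, axis=2)          # (R, H', W', k)
v = np.einsum('rhwq,qr->rhw', uw, Q)            # depthwise conv along width
y_fact = np.einsum('or,rhw->ohw', A, v)         # 1x1 conv over the rank index

assert np.allclose(y_dense, y_fact)  # both paths compute the same layer
```

The factorized path replaces one contraction of size C_out x C_in x k x k per output pixel with four rank-R contractions, which is where the compression comes from when R is small; different tensor-network wirings of the same kind yield the other members of the einconv family.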