Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
This paper presents an auto-vectorization technique based on imitation learning. The authors claim that the proposed method is much faster than the ILP-based method, with a small loss in vectorization performance. The technique seems reasonable, and the experiments do show its advantages over other techniques. However, I feel the paper is unsuitable for NeurIPS for the following reasons: 1) Auto-vectorization is an important problem in the compiler area, and the paper would be more suitable for compiler/parallelization conferences such as PACT or CGO. 2) The idea of using imitation learning to make approximate decisions is not new. 3) The experiments are superficial: there is no comparison of actual compile time and execution time against existing methods.
**I have read the author response and my opinion remains the same.** This paper uses imitation learning to solve the compiler auto-vectorization problem. It trains an agent to mimic the optimal solution generated by an integer linear programming solver, and the learned policy outperforms the production-level LLVM compiler in the experiments.

Originality: The novelty is incremental. The paper directly combines well-known techniques and does not make any new contribution from the machine learning perspective.

Quality: The experimental results look promising, but they lack detailed explanation; only two figures are provided. Some case studies of the learned policy and more detailed results (more tables and plots) are expected. The claim that "the learned policy runs faster than goSLP's ILP solver" is not backed by any experimental results; the authors need to provide a wall-clock time comparison of the different methods.

Clarity: The paper is clearly written and well organized, and all required background is provided. The only drawback is that there is no mathematical description of the algorithms used, such as GGNN and DAgger. The authors should provide these descriptions to make the paper accurate and clear.

Significance: This paper shows a successful application of imitation learning to compiler optimization. It is a good trend for the compiler research community to adopt more data-driven machine learning algorithms.

Overall, this is an okay paper with limited novelty. The evaluation definitely needs more experiments and result plots.
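For reference, the standard gated graph neural network propagation step (the generic formulation from the GGNN literature, not an equation taken from this paper) can be written as:

```latex
a_v^{(t)} = \sum_{u \in \mathcal{N}(v)} W_{e(u,v)}\, h_u^{(t-1)},
\qquad
h_v^{(t)} = \mathrm{GRU}\!\left(h_v^{(t-1)},\, a_v^{(t)}\right)
```

where $h_v^{(t)}$ is the hidden state of node $v$ after $t$ propagation steps, $W_{e(u,v)}$ is a weight matrix specific to the edge type between $u$ and $v$, and a single GRU cell is shared across all nodes.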
The authors propose to use graph neural networks to learn to imitate an optimal ILP-based auto-vectorizer. This work is not the first to apply machine learning to auto-vectorization; however, the proposed approach is an end-to-end model for vectorization, whereas prior work focused on predicting performance. While the proposed approach does not match the optimal solution, it is able to outperform LLVM's heuristics in polynomial runtime. The authors run experiments comparing different imitation learning algorithms against ILP and LLVM: different weights are placed on teacher vs. student rollouts, different node traversal strategies are proposed (forward vs. backward), and results are benchmarked on 3 datasets.
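The teacher/student rollout mixing described above follows the standard DAgger pattern: roll out a mixture of the expert and the current learner, label every visited state with the expert's action, and retrain on the aggregated dataset. A minimal sketch is below; the toy environment, the `beta` schedule, and the memorizing "learner" are hypothetical placeholders, not the paper's GGNN setup.

```python
import random

def dagger(env_reset, env_step, teacher, train,
           n_iters=5, horizon=10, beta0=1.0, decay=0.5):
    """Sketch of DAgger-style imitation learning.

    Each iteration rolls out a policy that follows the teacher with
    probability beta (decayed per iteration) and the current student
    otherwise. Every visited state is labeled with the teacher's action,
    and the student is retrained on the aggregated dataset.
    """
    dataset = []               # aggregated (state, teacher_action) pairs
    student = train(dataset)   # initial student trained on no data
    for i in range(n_iters):
        beta = beta0 * decay ** i          # probability of following the teacher
        state = env_reset()
        for _ in range(horizon):
            dataset.append((state, teacher(state)))  # always label with the teacher
            act = teacher(state) if random.random() < beta else student(state)
            state = env_step(state, act)
        student = train(dataset)           # retrain on all data collected so far
    return student

# Toy demo (hypothetical): states are small ints, the teacher's action is
# state % 3, and the "trained" student just memorizes the teacher's labels.
def make_student(data):
    table = dict(data)
    return lambda s: table.get(s, 0)

policy = dagger(env_reset=lambda: 0,
                env_step=lambda s, a: (s + a + 1) % 7,
                teacher=lambda s: s % 3,
                train=make_student)
```

In the first iteration beta is 1.0, so the rollout is pure teacher behavior; as beta decays, the student increasingly drives the rollouts, so the dataset covers the states the student actually reaches, which is the point of DAgger over plain behavior cloning.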