NeurIPS 2020

Measuring Systematic Generalization in Neural Proof Generation with Transformers


Meta Review

This paper evaluates a Transformer language model, trained from scratch, on an artificial theorem-proving task designed to highlight and clarify limitations of this commonly used architecture. Reviewers found some points in the motivation and in the discussion of results potentially misleading, especially concerning the connection between this work and natural language. Ultimately, however, they reached a consensus that the paper's primary claims are sound and significant, and that the remaining presentational issues do not undermine them.