NeurIPS 2020

Weak Form Generalized Hamiltonian Learning

Meta Review

There was significant discussion of this paper post-rebuttal. I am following R2's argument that the core idea of this paper is not just very strong but also very timely. The main issues raised by the other reviewers are: a) the paper is technically dense. It is debatable whether much can be done about this; the paper arguably does a good job of introducing the heavy algebraic-geometry machinery, although one can wonder whether there is a simpler way of presenting the same ideas. b) The experiments are underwhelming. They are technically well executed, but the setups are not ideal. The second example in particular, the Lorenz attractor, is perhaps not a good choice: its chaotic nature is irrelevant to the points of the paper. A higher-dimensional, more motivating example (e.g., many-body dynamics, or the kinematics of a humanoid robot) would be a better fit for the NeurIPS community. Nevertheless, the contributions of this paper are so valuable, and development in this domain currently so rapid, that I believe there should be a place for this paper in *this year's* NeurIPS. I encourage the authors to take on the points raised by the reviewers and improve the paper in time for the camera-ready version.