Sun Dec 8 through Sat Dec 14, 2019, at the Vancouver Convention Center
The authors consider the problem of ensemble learning. They augment a traditional ensemble with one Gaussian process (GP) to address prediction bias and another (constrained, monotonic) GP to address miscalibration, and use this augmented ensemble to separate aleatoric from epistemic uncertainty. They demonstrate the method on simulated and real data and provide supporting theory. The reviewers agree that the approach is clear, straightforward, and practically useful.

In preparing their revision, the authors should be sure to address the concerns that arose during review, rebuttal, and discussion. In particular, the authors promised in their rebuttal to improve the empirical comparisons with existing methods and to discuss more carefully some existing work that was missed in the first draft. In addition to the minor edits needed to bring the plots in line with NeurIPS formatting guidelines, I strongly encourage the authors to improve their explanations and discussions of the plots: make sure that every plotted item (e.g., line, shaded area) is fully explained to the reader.
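To make the summarized recipe concrete: a minimal Python/scikit-learn sketch of the general idea, not the paper's implementation. Here a shallow random forest stands in for the base ensemble, a GP fit on the ensemble's residuals supplies the bias correction (with a WhiteKernel giving a crude aleatoric-noise estimate), and isotonic regression stands in for the constrained monotonic recalibration GP. All names and modeling choices below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Base ensemble (deliberately shallow trees, so it carries visible bias).
forest = RandomForestRegressor(n_estimators=50, max_depth=2, random_state=0)
forest.fit(X, y)
per_tree = np.stack([tree.predict(X) for tree in forest.estimators_])
mean_pred = per_tree.mean(axis=0)
epistemic_var = per_tree.var(axis=0)  # disagreement across members: epistemic

# GP fit on the ensemble's residuals corrects systematic bias; the learned
# WhiteKernel noise level acts as a (homoscedastic) aleatoric-noise estimate.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y - mean_pred)
corrected = mean_pred + gp.predict(X)
aleatoric_var = gp.kernel_.k2.noise_level  # fitted WhiteKernel component

# Monotone recalibration map (isotonic regression standing in for the
# constrained GP): nominal PIT values -> empirical coverage.
pit = norm.cdf((y - corrected) / np.sqrt(epistemic_var + aleatoric_var))
order = np.argsort(pit)
iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
iso.fit(pit[order], (np.arange(len(y)) + 0.5) / len(y))
```

The split mirrors the summary: spread across ensemble members is read as epistemic uncertainty, the residual GP's noise term as aleatoric, and the monotone map repairs any remaining miscalibration of the predictive quantiles.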