NeurIPS 2020

Towards Maximizing the Representation Gap between In-Domain & Out-of-Distribution Examples


Meta Review

This paper proposes a modified loss for Dirichlet Prior Networks that is designed to help distinguish out-of-domain examples from in-domain examples with high class uncertainty. It is evaluated on a variety of small-image classification datasets.

Three reviewers were mildly positive and one mildly negative, so this is a borderline case. Neither the rebuttal nor a lengthy inter-reviewer discussion changed these views. There is a consensus that the approach is novel, interesting, and potentially useful, but the negative reviewer requested further justification, and several of the reviewers pointed out relevant related work that needs to be discussed. Overall, the AC agrees that the idea is elegant and novel enough to appear at NeurIPS.

For the final version of the paper, the authors should address the two main concerns of R4, discuss the missing references, and strengthen the justification of the various design choices. Also, the tables and images in the text are uncomfortably cramped, so some of the less important material from the current paper may need to be moved to the supplementary material.
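As background for readers unfamiliar with the setting: the sketch below (not the authors' code, and only a hedged illustration) shows why the representation gap matters. A Dirichlet Prior Network outputs concentration parameters; an in-domain but ambiguous input and an out-of-distribution input can both yield a flat predictive mean (maximum predictive entropy), so entropy alone cannot separate them. The total concentration can, assuming, as in the paper's motivation, that the loss drives OOD concentrations below 1 while in-domain inputs receive large concentrations. The specific numbers here are illustrative, not taken from the paper.

```python
import math

def categorical_entropy(probs):
    # Shannon entropy of a categorical distribution (natural log).
    return -sum(p * math.log(p) for p in probs if p > 0)

def dirichlet_summary(alpha):
    # Total concentration and entropy of the Dirichlet's predictive mean.
    a0 = sum(alpha)
    mean = [a / a0 for a in alpha]
    return a0, categorical_entropy(mean)

# In-domain but ambiguous input: flat mean, large concentration (illustrative values).
in_domain = [20.0, 20.0, 20.0]
# OOD input under a loss that pushes concentrations below 1: flat mean, tiny concentration.
ood = [0.2, 0.2, 0.2]

a0_in, h_in = dirichlet_summary(in_domain)
a0_ood, h_ood = dirichlet_summary(ood)
```

Here `h_in` and `h_ood` are both `log(3)`, so predictive entropy is identical, while `a0_in` (60) and `a0_ood` (0.6) differ by two orders of magnitude; this gap in concentration is the signal the modified loss is designed to maximize.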