NeurIPS 2020

Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder


Meta Review

The key idea of this paper is to check how well a given VAE can be further optimized on a single test input. The hope is that fine-tuning the encoder for additional iterations increases the likelihood of OOD samples more than that of inliers, which facilitates detection. The authors quantify this improvement with a score they coin likelihood regret. The authors do not provide any analysis of why this method might work, nor do they characterize the conditions under which it might fail. This is not a requirement, but in its absence the paper should provide enough empirical evidence that the approach is noteworthy. Overall, the paper has been perceived positively, and the authors provided additional experimental results during the rebuttal. One reviewer found the empirical evaluation insufficient, or too hastily executed, to be convincing, and noted that the paper would benefit from an analysis across all major types of deep generative models.
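To make the scoring rule concrete, the following is a minimal toy sketch of the likelihood-regret idea, not the authors' implementation: the VAE and its encoder are replaced by an isotropic Gaussian whose mean plays the role of the per-sample parameters being fine-tuned, so the regret is simply how much the log-likelihood of a test point improves after sample-specific optimization. All function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def log_likelihood(x, mu, sigma=1.0):
    # Log density of x under an isotropic Gaussian with mean mu
    # (a toy stand-in for the VAE's likelihood of a test input).
    d = x - mu
    return -0.5 * np.sum(d * d) / sigma**2 - 0.5 * len(x) * np.log(2 * np.pi * sigma**2)

def likelihood_regret(x, mu_trained, steps=100, lr=0.1):
    # Likelihood under the fixed parameters obtained from training on the full dataset.
    ll_before = log_likelihood(x, mu_trained)
    # Fine-tune the per-sample parameters on x alone via gradient ascent
    # (analogous to further training the encoder on a single test input).
    mu = mu_trained.copy()
    for _ in range(steps):
        mu += lr * (x - mu)  # analytic gradient of the Gaussian log-likelihood w.r.t. mu
    ll_after = log_likelihood(x, mu)
    # The regret: how much the test point's likelihood improves when the
    # model is allowed to adapt to that point alone. OOD inputs, being
    # poorly fit by the trained parameters, tend to improve more.
    return ll_after - ll_before

mu_trained = np.zeros(2)
inlier = np.array([0.1, -0.1])   # close to the training distribution
ood = np.array([5.0, 5.0])       # far from the training distribution
print(likelihood_regret(inlier, mu_trained))  # small regret
print(likelihood_regret(ood, mu_trained))     # large regret, flagged as OOD
```

In this toy setting the regret reduces to roughly half the squared distance from the trained mean, so inliers score near zero while OOD points score high; the paper's method applies the same comparison with a VAE's evidence lower bound before and after per-sample encoder optimization.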