NeurIPS 2020

Implicit Rank-Minimizing Autoencoder


Meta Review

This work received mixed reviews: R1 praised the potential impact of such a simple idea being shown to work remarkably well, but the other reviewers had significant concerns about the empirical evaluation, which is especially important when the main contribution of the paper is to show that an idea is effective in practice. The reviewers were ultimately unable to reach a consensus about this paper, but all reviewers agreed that the core idea is promising, and R2, R3, and R4 raised their scores in light of the discussion and the author feedback. While the resulting scores still make this a difficult decision overall, I have chosen to recommend acceptance.

The main point of discussion was whether the required changes to the manuscript warrant another review cycle. Indeed, the requested changes were quite broad:

- demonstrate the effect of the initial variance of the linear layers
- compare the model against modern autoencoder variants
- compare against vanilla autoencoders with varying latent dimension
- demonstrate the effect of the number of linear layers
- avoid overclaiming, e.g., about the proposed model working well "with all types of optimizers"
- etc.

However, I think the authors have done a great job addressing the majority of these concerns in their rebuttal, which includes many new results. Given the potential impact of the idea and the prospect of follow-up work in various directions, I think accepting this work as it stands is worth the risk. In making this decision, I am of course counting on the goodwill of the authors to prominently include the additional results from the rebuttal in the camera-ready version of their manuscript, and to address any remaining concerns. Please make sure to also incorporate the reviewers' detailed feedback regarding typos and clarity.