NeurIPS 2020

Transfer Learning via $\ell_1$ Regularization

Meta Review

Though the original reviews were on the low side, after the discussion it was agreed that the paper should be viewed primarily as a *theory* paper, giving provable guarantees for a particular kind of transfer/concept drift in linear regression settings, namely one that allows *sparse* changes in features. On the other hand, it was also agreed that the paper *oversells* its impact in the introductory portions, and the rhetoric should be somewhat toned down. In particular, we ask the authors to point out that the main contribution is an algorithm with *theoretical guarantees*, and that it is not being proposed (at least as of the writing of the paper) as a method competitive with existing heuristics/algorithms; hence the lack of comparison in the paper with state-of-the-art methods.
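For readers unfamiliar with the setting, the following is a minimal numpy sketch of the kind of problem described above, not the paper's actual algorithm or guarantees: a target regression whose coefficients differ from a source regression by a *sparse* shift, with the shift recovered by a generic ℓ1-penalized least-squares solver (ISTA). All variable names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 30

# Source task: y = X @ beta + noise.
beta = rng.normal(size=d)
X_s = rng.normal(size=(n, d))
y_s = X_s @ beta + 0.1 * rng.normal(size=n)

# Target task: same coefficients except for a sparse shift delta
# (3 nonzero entries), modeling sparse concept drift.
delta = np.zeros(d)
delta[:3] = [2.0, -1.5, 1.0]
X_t = rng.normal(size=(n, d))
y_t = X_t @ (beta + delta) + 0.1 * rng.normal(size=n)

# Estimate the source coefficients by ordinary least squares.
beta_hat = np.linalg.lstsq(X_s, y_s, rcond=None)[0]

def ista(X, r, lam, n_iter=500):
    """Minimize 0.5*||X w - r||^2 + lam*||w||_1 by proximal gradient."""
    w = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, L = Lipschitz const of grad
    for _ in range(n_iter):
        w = w - step * (X.T @ (X @ w - r))                        # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-threshold
    return w

# Fit the sparse shift on the target residuals of the source model.
resid = y_t - X_t @ beta_hat
delta_hat = ista(X_t, resid, lam=5.0)
print("largest |delta_hat| entries:", np.argsort(-np.abs(delta_hat))[:3])
```

With enough target data and a sparse enough shift, the ℓ1 penalty zeroes out the unchanged coordinates and the support of `delta_hat` matches that of the true shift; the paper's contribution is to make statements of this kind precise.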