NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019 at Vancouver Convention Center
Paper ID: 4896
Title: Reconciling meta-learning and continual learning with online mixtures of tasks

Reviewer 1

Originality: I really liked the idea of modeling MAML-like approaches using nonparametric Bayesian priors, and I am not familiar with other work that does that. Thus, I consider the proposed method novel. More specifically, I consider it a novel combination of existing methods that exploits an interesting connection between them.

Quality: The paper is of good quality, both in terms of its key contributions and in terms of how those contributions are presented.

Clarity: The paper is well-written and organized, and very easy to follow, given a bit of background in Bayesian methods.

Significance: As mentioned in my earlier comments, I consider this work significant. One other comment is that I really like the extensive discussion of related work, both in the paper and in the supplementary material. I always find such discussion very useful, and it is often neglected, so I was positively surprised in your case.

Reviewer 2

Overall, this is a very strong submission. It is well-written, timely, and clear, and it includes several significant and novel contributions. The technical contributions are well-developed, and the paper does a very nice job addressing a challenging problem in meta-learning. Although the novelty is high, the work is related to several papers on online MAML and MAML with task clustering that appeared in ICML 2019. The timing between these is very tight, so it isn't reasonable to expect an empirical comparison, but those works should at least be cited and compared qualitatively in the related work. I would like to see more in the paper addressing the tension between the ability to fit the meta-parameters to each task cluster and the ability to generalize. This tension has a nice interpretation in the Bayesian setting that could be called out further. The clarity of Section 3 would be improved by adding a plate diagram or an illustrative figure showing the relationships among the variables.

POST-RESPONSE: Authors -- thanks for your clear and engaging response. Please do add the plate diagram back in, at least as supplementary material, but preferably in the main paper.

Reviewer 3

The problem is interesting. The combination of continual learning and meta-learning is novel. The method is technically sound. The experiments on toy tasks are well-designed. The paper is clearly written.

======= UPDATE =======

I have read the authors' response and understand the difficulty of finding a naturalistic dataset.