Part of Advances in Neural Information Processing Systems 32 (NeurIPS 2019)
Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Chengqi Zhang
Meta-learning extracts common knowledge from learning different tasks and applies it to unseen tasks. It can significantly improve performance on tasks that suffer from insufficient training data, e.g., few-shot learning. In most meta-learning methods, tasks are related only implicitly, by sharing parameters or an optimizer. In this paper, we show that a meta-learner that explicitly relates tasks on a graph describing the relations of their output dimensions (e.g., classes) can significantly improve few-shot learning. The graph's structure is usually free or cheap to obtain but has rarely been explored in previous works. We develop a novel meta-learner of this type for prototype-based classification, in which a prototype is generated for each class, such that nearest-neighbor search among the prototypes produces an accurate classification. The meta-learner, called "Gated Propagation Network (GPN)", learns to propagate messages between prototypes of different classes on the graph, so that learning the prototype of each class benefits from the data of other related classes. In GPN, an attention mechanism aggregates messages from the neighboring classes of each class, and a gate chooses between the aggregated message and the message from the class itself. We train GPN on a sequence of tasks, from many-shot to few-shot, generated by subgraph sampling. During training, it can reuse and update previously computed prototypes from memory in a life-long learning cycle. In experiments, under different training-test discrepancies and test-task generation settings, GPN outperforms recent meta-learning methods on two benchmark datasets. Code for GPN is publicly available at: https://github.com/liulu112601/Gated-Propagation-Net.
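The core mechanism described above, attention-based aggregation of neighbor messages followed by a gate that mixes the aggregated message with the class's own message, can be sketched as a single propagation step. This is a minimal illustrative sketch in numpy, not the authors' implementation: the dot-product attention, the sigmoid gate, and the function/parameter names (`gpn_step`, `w_gate`) are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gpn_step(prototypes, adj, w_gate=1.0):
    """One hypothetical gated-propagation step over class prototypes.

    prototypes: (n_classes, d) array, one prototype vector per class.
    adj:        (n_classes, n_classes) 0/1 adjacency of the class graph
                (assumed: every class has at least one neighbor).
    w_gate:     assumed scalar gate temperature.
    """
    # Attention: dot-product similarity between prototypes, masked to graph edges.
    scores = prototypes @ prototypes.T
    scores = np.where(adj > 0, scores, -1e9)  # non-neighbors get ~zero weight
    attn = softmax(scores, axis=1)

    # Aggregated message from neighboring (related) classes.
    neighbor_msg = attn @ prototypes

    # Gate: per-class scalar in (0, 1) choosing between the class's own
    # message and the aggregated neighbor message.
    agreement = np.einsum('ij,ij->i', prototypes, neighbor_msg)
    gate = 1.0 / (1.0 + np.exp(-w_gate * agreement))
    gate = gate[:, None]
    return gate * prototypes + (1.0 - gate) * neighbor_msg
```

In the paper's setting the attention and gate are learned modules trained end-to-end over sampled subgraph tasks; the fixed dot-product similarity and scalar gate here only show the data flow of one propagation step.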