A Closer Look at the Training Strategy for Modern Meta-Learning

Part of Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020)

Authors

Jiaxin Chen, Xiao-Ming Wu, Yanke Li, Qimai Li, Li-Ming Zhan, Fu-lai Chung

Abstract

The support/query (S/Q) episodic training strategy has been widely used in modern meta-learning algorithms and is believed to improve their generalization to test environments. This paper provides a theoretical investigation of the effect of this training strategy on generalization. From a stability perspective, we analyze the generalization error bound of generic meta-learning algorithms trained with such a strategy. We show that the S/Q episodic training strategy naturally leads to a counterintuitive generalization bound of $O(1/\sqrt{n})$, which depends only on the task number $n$ and is independent of the inner-task sample size $m$. Under the common assumption $m \ll n$ for few-shot learning, the $O(1/\sqrt{n})$ bound implies strong generalization guarantees for modern meta-learning algorithms in the few-shot regime. To further explore the influence of training strategies on generalization, we propose a leave-one-out (LOO) training strategy for meta-learning and compare it with S/Q training. Experiments on standard few-shot regression and classification tasks with popular meta-learning algorithms validate our analysis.
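To make the S/Q episodic training strategy concrete, below is a minimal PyTorch sketch of one episode, instantiated in a MAML-style inner/outer loop (one of several possible instantiations; the analysis above applies to generic meta-learning algorithms). The names `sample_task` and `model`, and the hyperparameters `inner_lr` and `inner_steps`, are hypothetical stand-ins, not the authors' exact setup.

```python
# A minimal sketch of S/Q episodic training with a MAML-style inner/outer
# loop. Assumptions: `sample_task` (hypothetical) returns the support and
# query sets of one task; `model` is any nn.Module; requires PyTorch >= 2.0
# for torch.func.functional_call.
import torch
import torch.nn.functional as F


def sq_episode(model, optimizer, sample_task, inner_lr=0.01, inner_steps=1):
    """One S/Q episode: adapt on the support set, meta-update on the query set."""
    x_s, y_s, x_q, y_q = sample_task()  # support/query split of a single task

    # Inner loop: adapt a functional copy of the parameters on the support set.
    params = {name: p for name, p in model.named_parameters()}
    for _ in range(inner_steps):
        loss_s = F.cross_entropy(
            torch.func.functional_call(model, params, (x_s,)), y_s)
        grads = torch.autograd.grad(loss_s, list(params.values()),
                                    create_graph=True)
        params = {name: p - inner_lr * g
                  for (name, p), g in zip(params.items(), grads)}

    # Outer loop: the meta-objective is the adapted model's loss on the
    # held-out query set, so each episode directly rewards generalization
    # within a task rather than fit to the support samples.
    loss_q = F.cross_entropy(
        torch.func.functional_call(model, params, (x_q,)), y_q)
    optimizer.zero_grad()
    loss_q.backward()
    optimizer.step()
    return loss_q.item()
```

Training then iterates this episode over $n$ sampled tasks, which is the quantity the $O(1/\sqrt{n})$ bound depends on; the per-task sample size $m$ only enters through the support/query sets inside each episode.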
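For comparison, here is one plausible reading of the proposed leave-one-out (LOO) strategy, sketched under the assumption that each of a task's $m$ samples serves once as the query while the remaining $m-1$ form the support set, with the $m$ query losses averaged into a single meta-objective. This is an illustrative interpretation, not the authors' exact protocol; it reuses the imports and the hypothetical `sample_task` convention from the sketch above (here returning all $m$ samples of one task with no fixed S/Q split).

```python
def loo_episode(model, optimizer, sample_task, inner_lr=0.01, inner_steps=1):
    """One LOO episode: every sample is held out once as the query."""
    x, y = sample_task()  # all m samples of one task, no fixed S/Q split
    m = x.shape[0]
    total_loss = 0.0
    for i in range(m):
        keep = [j for j in range(m) if j != i]
        x_s, y_s = x[keep], y[keep]        # support: everything but sample i
        x_q, y_q = x[i:i + 1], y[i:i + 1]  # query: the held-out sample
        params = {name: p for name, p in model.named_parameters()}
        for _ in range(inner_steps):
            loss_s = F.cross_entropy(
                torch.func.functional_call(model, params, (x_s,)), y_s)
            grads = torch.autograd.grad(loss_s, list(params.values()),
                                        create_graph=True)
            params = {name: p - inner_lr * g
                      for (name, p), g in zip(params.items(), grads)}
        total_loss = total_loss + F.cross_entropy(
            torch.func.functional_call(model, params, (x_q,)), y_q)

    # Average the m held-out losses into one meta-objective per task.
    loss = total_loss / m
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The structural contrast with `sq_episode` is that no sample is permanently reserved for the query role: each one contributes both to adaptation and to the meta-objective, which is the design difference whose effect on generalization the paper's experiments probe.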