Regularizing Towards Permutation Invariance In Recurrent Models

Part of Advances in Neural Information Processing Systems 33 (NeurIPS 2020)



Edo Cohen-Karlik, Avichai Ben David, Amir Globerson


In many machine learning problems the output should not depend on the order of the inputs. Such "permutation invariant" functions have been studied extensively in recent years. Here we argue that temporal architectures such as RNNs are highly relevant for such problems, despite the inherent dependence of RNNs on order. We show that RNNs can be regularized towards permutation invariance, and that this can result in compact models compared to non-recursive architectures. Existing solutions (e.g., DeepSets) mostly restrict the learning problem to hypothesis classes which are permutation invariant by design. Our approach of enforcing permutation invariance via regularization gives rise to learning functions which are "semi permutation invariant", i.e., invariant to some permutations but not to others. Our approach relies on a novel form of stochastic regularization. We demonstrate that our method is beneficial compared to existing permutation invariant methods on synthetic and real-world datasets.
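To make the idea of "regularizing towards permutation invariance" concrete, below is a minimal sketch (not the authors' implementation) of one plausible stochastic regularizer: alongside the task loss, penalize the distance between the RNN's final hidden state on a sequence and on a randomly permuted copy of that sequence. All class and function names (PermRegRNN, loss_with_perm_regularizer, lam) are illustrative assumptions, not from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PermRegRNN(nn.Module):
    """A plain GRU encoder with a linear readout head."""

    def __init__(self, input_dim, hidden_dim, output_dim):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, output_dim)

    def final_state(self, x):
        # x: (batch, seq_len, input_dim) -> last hidden state (batch, hidden_dim)
        _, h = self.rnn(x)
        return h[-1]

    def forward(self, x):
        return self.head(self.final_state(x))


def loss_with_perm_regularizer(model, x, y, lam=1.0):
    """Task loss plus a stochastic permutation-invariance penalty (illustrative)."""
    h = model.final_state(x)
    task_loss = F.mse_loss(model.head(h), y)
    # Sample one random permutation of the time axis per step (stochastic regularization).
    perm = torch.randperm(x.size(1))
    h_perm = model.final_state(x[:, perm, :])
    reg = F.mse_loss(h_perm, h)  # encourage order-independent hidden states
    return task_loss + lam * reg


# Usage example on random data with a permutation-invariant target (the sum of all inputs).
model = PermRegRNN(input_dim=4, hidden_dim=32, output_dim=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 10, 4)                 # batch of 16 sequences of length 10
y = x.sum(dim=(1, 2)).unsqueeze(-1)        # order-independent regression target
opt.zero_grad()
loss = loss_with_perm_regularizer(model, x, y)
loss.backward()
opt.step()
```

Setting the penalty weight lam controls how strongly the model is pushed toward full invariance; intermediate values would allow the "semi permutation invariant" behavior described above, where sensitivity to some orderings is retained.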