Part of Advances in Neural Information Processing Systems 32 (NeurIPS 2019)

*Alexej Klushyn, Nutan Chen, Richard Kurle, Botond Cseke, Patrick van der Smagt*

We propose to learn a hierarchical prior in the context of variational autoencoders to avoid the over-regularisation that results from a standard normal prior distribution. To incentivise an informative latent representation of the data, we formulate the learning problem as a constrained optimisation problem by extending the Taming VAEs framework to two-level hierarchical models. We introduce a graph-based interpolation method, which shows that the topology of the learned latent representation corresponds to the topology of the data manifold, and present several examples where desired properties of the latent representation, such as smoothness and simple explanatory factors, are learned by the prior.
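The constrained-optimisation formulation mentioned above builds on the Taming VAEs (GECO) idea of minimising the KL term subject to a bound on the reconstruction error, enforced via a Lagrange multiplier that is adapted during training. A minimal sketch of such a multiplier update, with toy values and an illustrative function name (`geco_lambda_trajectory`) that is not from the paper, might look like:

```python
import numpy as np

def geco_lambda_trajectory(recon_errors, kappa=0.1, nu=0.01, alpha=0.99):
    """Sketch of a GECO-style Lagrange-multiplier schedule.

    The constraint C = reconstruction_error - kappa should be <= 0.
    A moving average of C drives a multiplicative update of lambda:
    lambda grows while the constraint is violated (C > 0), pushing the
    optimiser to improve reconstructions, and shrinks once it is met.
    All hyperparameter values here are illustrative, not the paper's.
    """
    lam = 1.0          # initial multiplier
    c_ma = None        # moving average of the constraint
    trajectory = []
    for err in recon_errors:
        c = err - kappa                                  # constraint value
        c_ma = c if c_ma is None else alpha * c_ma + (1 - alpha) * c
        lam = lam * np.exp(nu * c_ma)                    # multiplicative update
        trajectory.append(lam)
    return trajectory

# Toy usage: errors above kappa inflate lambda; errors below deflate it.
rising = geco_lambda_trajectory([0.5] * 10)    # constraint violated
falling = geco_lambda_trajectory([0.0] * 10)   # constraint satisfied
```

In the rising case the multiplier ends above its initial value of 1, and in the falling case below it, mirroring how the constrained objective trades off reconstruction against regularisation during training.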
