Part of Advances in Neural Information Processing Systems 17 (NIPS 2004)
Roland Memisevic, Geoffrey E. Hinton
We describe a way of using multiple different types of similarity relationship to learn a low-dimensional embedding of a dataset. Our method chooses different, possibly overlapping representations of similarity by individually reweighting the dimensions of a common underlying latent space. When applied to a single similarity relation that is based on Euclidean distances between the input data points, the method reduces to simple dimensionality reduction. If additional information is available about the dataset or about subsets of it, we can use this information to clean up or otherwise improve the embedding. We demonstrate the potential usefulness of this form of semi-supervised dimensionality reduction on some simple examples.
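The core idea of reweighting a shared latent space per relation can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and variable names are invented here, and the sketch only shows how a nonnegative weight vector per relation turns one set of latent coordinates into relation-specific squared distances.

```python
import numpy as np

def relation_distances(Z, r):
    """Relation-specific squared distances from a shared latent space.

    Z : (n, d) latent coordinates shared by all relations
    r : (c, d) nonnegative per-relation weights on the latent dimensions
    Returns a (c, n, n) array where entry [c, i, j] is
        sum_k r[c, k] * (Z[i, k] - Z[j, k])**2.
    """
    diff = Z[:, None, :] - Z[None, :, :]   # (n, n, d) pairwise differences
    return np.einsum('ck,ijk->cij', r, diff ** 2)

# Toy example: 4 points in a 2-D latent space, two relations.
Z = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
# Relation 0 attends only to latent dimension 0; relation 1 weights both.
r = np.array([[1.0, 0.0],
              [0.5, 0.5]])
D = relation_distances(Z, r)
```

Under relation 0, points that differ only in the second latent dimension (e.g. points 0 and 2) have zero distance, while relation 1 sees the ordinary (scaled) Euclidean geometry; setting a single relation's weights to all ones recovers plain Euclidean distances in the latent space, matching the reduction to ordinary dimensionality reduction described above.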