Part of Advances in Neural Information Processing Systems 20 (NIPS 2007)
Ulrike von Luxburg, Stefanie Jegelka, Michael Kaufmann, Sébastien Bubeck
Clustering is often formulated as a discrete optimization problem. The objective is to find, among all partitions of the data set, the best one according to some quality measure. However, in the statistical setting where we assume that the finite data set has been sampled from some underlying space, the goal is not to find the best partition of the given sample, but to approximate the true partition of the underlying space. We argue that the discrete optimization approach usually does not achieve this goal. As an alternative, we suggest the paradigm of "nearest neighbor clustering". Instead of selecting the best out of all partitions of the sample, it only considers partitions in some restricted function class. Using tools from statistical learning theory we prove that nearest neighbor clustering is statistically consistent. Moreover, its worst case complexity is polynomial by construction, and it can be implemented with small average case complexity using branch and bound.
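The restricted function class can be illustrated with a minimal sketch: pick m seed points, assign every data point the label of its nearest seed, and search over the K^m labelings of the seeds for the one minimizing the quality measure. The sketch below is an assumption-laden toy version — it uses brute-force enumeration where the paper proposes branch and bound, and within-cluster sum of squares stands in for a generic quality function; all names (`nearest_neighbor_clustering`, `wss`) are hypothetical, not from the paper.

```python
import itertools
import numpy as np

def nearest_neighbor_clustering(X, K, m, quality, rng=None):
    """Toy sketch of nearest neighbor clustering.

    X       : (n, d) data array
    K       : number of clusters
    m       : number of seed points (m << n restricts the function class)
    quality : callable mapping a length-n label vector to a score
              (lower is better)
    """
    rng = np.random.default_rng(rng)
    n = len(X)
    seeds = rng.choice(n, size=m, replace=False)
    # Every point takes the label of its nearest seed, so a labeling of
    # the m seeds induces a partition of all n points.
    d2 = ((X[:, None, :] - X[seeds][None, :, :]) ** 2).sum(axis=2)
    nearest_seed = d2.argmin(axis=1)  # (n,) index into seeds

    best_labels, best_score = None, np.inf
    # Brute-force enumeration of the K^m seed labelings; the paper
    # instead prunes this search with branch and bound.
    for seed_labels in itertools.product(range(K), repeat=m):
        labels = np.asarray(seed_labels)[nearest_seed]
        score = quality(labels)
        if score < best_score:
            best_labels, best_score = labels, score
    return best_labels, best_score

def wss(X):
    """Within-cluster sum of squares, one example of a quality measure."""
    def q(labels):
        total = 0.0
        for k in np.unique(labels):
            pts = X[labels == k]
            total += ((pts - pts.mean(axis=0)) ** 2).sum()
        return total
    return q
```

For fixed m the search visits K^m candidate partitions and evaluates each in O(n) time, which is why the worst-case complexity is polynomial in n by construction.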