Part of Advances in Neural Information Processing Systems 15 (NIPS 2002)
Tilman Lange, Mikio Braun, Volker Roth, Joachim Buhmann
Model selection is linked to model assessment, which is the problem of comparing different models, or model parameters, for a specific learning task. For supervised learning, the standard practical technique is cross-validation, which is not applicable for semi-supervised and unsupervised settings. In this paper, a new model assessment scheme is introduced which is based on a notion of stability. The stability measure yields an upper bound to cross-validation in the supervised case, but extends to semi-supervised and unsupervised problems. In the experimental part, the performance of the stability measure is studied for model order selection in comparison to standard techniques in this area.
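The stability idea for unsupervised model order selection can be sketched as follows: repeatedly split the data in two halves, cluster each half, transfer the first half's solution to the second half via a classifier, and measure how often the transferred labels disagree with the second half's own clustering (up to a permutation of the labels); a good model order produces consistently low disagreement. The sketch below is a minimal illustration of this generic scheme, not the paper's exact procedure; the choice of k-means, the nearest-neighbor transfer, and the normalization by a random baseline are assumptions made for the example.

```python
# Minimal sketch of stability-based model order selection for clustering.
# Assumptions (not from the paper): k-means as the clustering algorithm,
# 1-nearest-neighbor as the transfer classifier, and normalization of the
# disagreement by that of a random labeling, roughly (k - 1) / k.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier

def disagreement(a, b, k):
    # Minimal fraction of disagreeing labels between two labelings,
    # minimized over label permutations via the Hungarian algorithm.
    conf = np.zeros((k, k))
    for i, j in zip(a, b):
        conf[i, j] += 1
    rows, cols = linear_sum_assignment(-conf)  # maximize matched mass
    return 1.0 - conf[rows, cols].sum() / len(a)

def stability(X, k, n_splits=10, seed=0):
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_splits):
        idx = rng.permutation(len(X))
        half = len(X) // 2
        A, B = X[idx[:half]], X[idx[half:]]
        km_A = KMeans(n_clusters=k, n_init=10, random_state=0).fit(A)
        labels_B = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(B)
        # Transfer the first half's clustering to the second half.
        transferred = KNeighborsClassifier(1).fit(A, km_A.labels_).predict(B)
        # Normalize by the expected disagreement of a random labeling,
        # so that different k remain comparable.
        scores.append(disagreement(transferred, labels_B, k) / (1.0 - 1.0 / k))
    return float(np.mean(scores))

# Three well-separated clusters: the stability score should be low at k = 3
# and higher for over-segmented solutions such as k = 5.
X, _ = make_blobs(n_samples=450,
                  centers=[[0.0, 0.0], [5.0, 0.0], [2.5, 4.3]],
                  cluster_std=0.5, random_state=1)
scores = {k: stability(X, k) for k in range(2, 7)}
best_k = min(scores, key=scores.get)
```

This is only a model-comparison heuristic under the stated assumptions; the paper's actual measure, its relation to cross-validation, and its semi-supervised extension are developed in the main text.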