Convergence of Large Margin Separable Linear Classification

Part of Advances in Neural Information Processing Systems 13 (NIPS 2000)


Authors

Tong Zhang

Abstract

Large margin linear classification methods have been successfully applied to many applications. For a linearly separable problem, it is known that, under appropriate assumptions, the expected misclassification error of the computed "optimal hyperplane" approaches zero at a rate proportional to the inverse of the training sample size. This rate is usually characterized by the margin and the maximum norm of the input data. In this paper, we argue that another quantity, namely the robustness of the input data distribution, also plays an important role in characterizing the convergence behavior of the expected misclassification error. Based on this concept of robustness, we show that for a large margin separable linear classification problem, the expected misclassification error may converge exponentially fast in the training sample size.
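To make the two rates concrete, here is a sketch in the standard margin-bound form; the symbols R, \gamma, and c are illustrative assumptions, not quantities quoted from the paper. If every input satisfies \|x\| \le R and the data are linearly separable with margin \gamma, the classical bound for the optimal hyperplane has the form

    \mathbb{E}[\mathrm{err}_n] \le O\!\left(\frac{R^2}{\gamma^2 \, n}\right),

a rate proportional to the inverse training sample size and characterized by the margin \gamma and the maximum norm R, whereas the exponential regime claimed in the abstract has the schematic form

    \mathbb{E}[\mathrm{err}_n] \le O\!\left(e^{-c n}\right)

for some constant c > 0 depending on the robustness of the input data distribution.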