Part of Advances in Neural Information Processing Systems 20 (NIPS 2007)

*Krishnan Kumar, Chiru Bhattacharya, Ramesh Hariharan*

We propose a randomized algorithm for large-scale SVM learning that solves the problem by iterating over random subsets of the data. The size of the chosen subsets is crucial to the algorithm's scalability. In the context of text classification, we show that, using ideas from random projections, a sample size of O(log n) suffices to obtain a solution close to the optimal with high probability. Experiments on synthetic and real-life data sets demonstrate that the algorithm scales up SVM learners without loss of accuracy.
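The general idea of training an SVM by iterating over small random subsets can be sketched as follows. This is an illustrative Python sketch using scikit-learn's `LinearSVC`, not the authors' algorithm: the constant in the O(log n) sample size, the number of iterations, and the margin-violator refresh rule are all assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
n = len(X)
# O(log n)-sized working set; the constant 10 is illustrative, not from the paper
sample_size = int(10 * np.log(n))

# Start from a random subset, then alternate between training on the working
# set and refreshing it with margin violators plus a fresh random sample.
working = rng.choice(n, size=sample_size, replace=False)
for _ in range(20):
    clf = LinearSVC(max_iter=5000).fit(X[working], y[working])
    # Signed margins on the full data; points with small margin are the
    # candidate support vectors under the hinge loss.
    margins = (2 * y - 1) * clf.decision_function(X)
    worst = np.argsort(margins)[:sample_size]          # most-violating points
    fresh = rng.choice(n, size=sample_size, replace=False)
    working = np.union1d(worst, fresh)                 # bounded working set

print(f"final accuracy on all {n} points: {clf.score(X, y):.3f}")
```

The working set never exceeds twice the O(log n) sample size, so each inner SVM solve is cheap regardless of n; only the margin evaluation touches the full data set.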