Optimal Brain Damage

Part of Advances in Neural Information Processing Systems 2 (NIPS 1989)


Authors

Yann LeCun, John Denker, Sara Solla

Abstract

We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and/or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.
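
The second-derivative tradeoff the abstract refers to is the paper's per-weight "saliency," s_k = h_kk * w_k^2 / 2, where h_kk is the k-th diagonal element of the Hessian of the training error: weights with the smallest saliencies are the ones removed. As a rough illustration only (a minimal NumPy sketch; the function names, the pruning fraction, and the assumption that a diagonal Hessian estimate is already available are all illustrative, not from the paper):

```python
import numpy as np

def obd_saliencies(weights, hessian_diag):
    # OBD saliency for each weight: s_k = h_kk * w_k^2 / 2
    return 0.5 * hessian_diag * weights**2

def prune_by_saliency(weights, hessian_diag, fraction=0.3):
    # Zero out the given fraction of weights with the lowest saliency.
    s = obd_saliencies(weights, hessian_diag)
    n_prune = int(fraction * weights.size)
    lowest = np.argsort(s)[:n_prune]   # indices of least-salient weights
    pruned = weights.copy()
    pruned[lowest] = 0.0
    return pruned

# Toy usage: the small-magnitude, low-curvature weights are removed.
w = np.array([0.8, -0.05, 1.2, 0.01])
h = np.array([2.0, 1.5, 0.1, 3.0])   # hypothetical diagonal Hessian estimate
print(prune_by_saliency(w, h, fraction=0.5))   # -> [0.8, 0.0, 1.2, 0.0]
```

In the paper the diagonal Hessian terms are obtained by a backpropagation-like pass rather than supplied directly; the sketch above only shows how saliencies rank weights once those terms are in hand.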