Temporal Dynamics of Generalization in Neural Networks

Part of Advances in Neural Information Processing Systems 7 (NIPS 1994)



Changfeng Wang, Santosh Venkatesh


This paper presents a rigorous characterization of how a general nonlinear learning machine generalizes during training when it is trained on a random sample by a gradient descent algorithm that reduces the training error. In particular, it is shown that the best generalization performance generally occurs before the global minimum of the training error is reached. The distinct roles played during learning by the complexity of the machine class and the complexity of the specific machine within the class are also precisely demarcated.
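The central phenomenon the abstract describes, generalization error reaching its minimum before the training error does, can be observed numerically. The sketch below is not from the paper; it is a toy illustration under assumed choices (a degree-12 polynomial model fit to noisy samples of a sine target by plain gradient descent), tracking training and held-out error over the course of training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: noisy samples of a smooth target function.
def target(x):
    return np.sin(np.pi * x)

x_train = rng.uniform(-1.0, 1.0, 20)
x_val = rng.uniform(-1.0, 1.0, 200)
y_train = target(x_train) + 0.3 * rng.standard_normal(20)
y_val = target(x_val)  # noiseless held-out set to measure generalization

# Overparameterized model: degree-12 polynomial (13 weights, 20 points).
deg = 12
X_train = np.vander(x_train, deg + 1)
X_val = np.vander(x_val, deg + 1)

# Gradient descent on the mean-squared training error.
w = np.zeros(deg + 1)
lr = 0.01
train_hist, val_hist = [], []
for step in range(20000):
    train_hist.append(float(np.mean((X_train @ w - y_train) ** 2)))
    val_hist.append(float(np.mean((X_val @ w - y_val) ** 2)))
    grad = (2.0 / len(y_train)) * X_train.T @ (X_train @ w - y_train)
    w -= lr * grad

# The step with the best held-out error typically precedes the end of
# training: training error keeps falling while validation error turns up.
best_step = int(np.argmin(val_hist))
print(f"final training error:  {train_hist[-1]:.4f}")
print(f"best validation error: {val_hist[best_step]:.4f} at step {best_step}")
print(f"final validation error: {val_hist[-1]:.4f}")
```

Plotting `train_hist` and `val_hist` against the step index makes the effect visible directly: the training curve decreases monotonically toward its global minimum, while the validation curve reaches its minimum earlier, which is the basis for early stopping as a regularizer.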