Part of Advances in Neural Information Processing Systems 7 (NIPS 1994)
We present a new method for obtaining the response function 𝒢 and its average G, from which most of the properties of learning and generalization in linear perceptrons can be derived. We first rederive the known results for the 'thermodynamic limit' of infinite perceptron size N and show explicitly that 𝒢 is self-averaging in this limit. We then discuss extensions of our method to more general learning scenarios with anisotropic teacher space priors, input distributions, and weight decay terms. Finally, we use our method to calculate the finite N corrections of order 1/N to G and discuss the corresponding finite size effects on generalization and learning dynamics. An important spin-off is the observation that results obtained in the thermodynamic limit are often directly relevant to systems of fairly modest, 'real-world' sizes.
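The self-averaging property can be illustrated numerically. The sketch below is not the paper's method; it simply assumes, for illustration, that the response function is taken as the normalized trace of the resolvent of the empirical input correlation matrix of a linear perceptron with weight decay λ, and checks that its sample-to-sample fluctuations shrink as N grows:

```python
import numpy as np

def response_function(N, alpha, lam, rng):
    """Normalized trace of the resolvent (lam*I + X^T X / N)^{-1}
    for a random p x N input matrix X with p = alpha*N examples.
    This particular form is an assumption made for illustration,
    not the definition used in the paper."""
    p = int(alpha * N)
    X = rng.standard_normal((p, N))          # random training inputs
    C = X.T @ X / N                          # empirical correlation matrix
    resolvent = np.linalg.inv(lam * np.eye(N) + C)
    return np.trace(resolvent) / N

rng = np.random.default_rng(0)
for N in (20, 80, 320):
    samples = [response_function(N, alpha=2.0, lam=0.1, rng=rng)
               for _ in range(50)]
    # Self-averaging: the mean stabilizes and the spread across
    # random data sets shrinks as N increases.
    print(f"N={N:4d}  mean={np.mean(samples):.4f}  std={np.std(samples):.4f}")
```

As N grows the standard deviation across random data sets decreases while the mean converges, which is the behavior the self-averaging claim in the thermodynamic limit describes.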