Connectionist Implementation of a Theory of Generalization

Part of Advances in Neural Information Processing Systems 3 (NIPS 1990)



Roger Shepard, Sheila Kannappan


Empirically, generalization between a training and a test stimulus falls off in close approximation to an exponential decay function of the distance between the two stimuli in the "stimulus space" obtained by multidimensional scaling. Mathematically, this result is derivable from the assumption that an individual takes the training stimulus to belong to a "consequential" region that includes that stimulus but is otherwise of unknown location, size, and shape in the stimulus space (Shepard, 1987). As the individual gains additional information about the consequential region, by finding other stimuli to be consequential or not, the theory predicts that the shape of the generalization function changes toward the function relating the actual probability of the consequence to location in the stimulus space. This paper describes a natural connectionist implementation of the theory, and illustrates how implications of the theory for generalization, discrimination, and classification learning can be explored by connectionist simulation.
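The derivation sketched above can be illustrated numerically. The following is a minimal Monte Carlo sketch, not the paper's simulation: it assumes a one-dimensional stimulus space, an exponential prior on region size (one of the broad priors under which Shepard's result holds approximately), and a uniform prior on region location given that the region contains the training stimulus. The function name and parameters are illustrative choices.

```python
import random

def generalization(d, n=100_000, rng=random.Random(0)):
    """Monte Carlo estimate of the probability that a random 1-D
    consequential region known to contain the training stimulus
    (placed at 0) also contains a test stimulus at distance d.

    Assumptions (for illustration only): region size s is drawn from
    an exponential prior; the region's location is uniform over all
    placements that include the training stimulus.
    """
    hits = 0
    for _ in range(n):
        s = rng.expovariate(1.0)       # region size, exponential prior (assumed)
        left = rng.uniform(-s, 0.0)    # region [left, left + s] must contain 0
        if left <= d <= left + s:      # does it also contain the test stimulus?
            hits += 1
    return hits / n
```

Evaluating this estimate at increasing distances shows the characteristic monotone, roughly exponential fall-off: the curve equals 1 at zero distance and decays smoothly as the test stimulus moves away from the training stimulus.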


Because we never confront exactly the same situation twice, anything we have learned in any previous situation can guide us in deciding which action to take in the present situation only to the extent that the similarity between the two situations is sufficient to justify generalization of our previous learning to the present situation. Accordingly, principles of generalization must be foundational for any theory of behavior.

In Shepard (1987) nonarbitrary principles of generalization were sought that would be optimum in any world in which an object, however distinct from other objects, is generally a member of some class or natural kind sharing some dispositional property of potential consequence for the individual. A newly encountered plant or animal might be edible or