Part of Advances in Neural Information Processing Systems 3 (NIPS 1990)
Stephen Hanson, Mark Gluck
Spherical units can be used to construct dynamic reconfigurable consequential regions, the geometric bases for Shepard's (1987) theory of stimulus generalization in animals and humans. We derive from Shepard's (1987) generalization theory a particular multi-layer network with dynamic (centers and radii) spherical regions which possesses a specific mass function (Cauchy). This learning model generalizes the configural-cue network model (Gluck & Bower, 1988): (1) configural cues can be learned and do not require pre-wiring the power set of cues, (2) consequential regions are continuous rather than discrete, and (3) competition amongst receptive fields is shown to be increased by the global extent of a particular mass function (Cauchy). We compare other common mass functions (Gaussian, as used in the models of Moody & Darken, 1989, and Kruschke, 1990) and standard backpropagation networks with hyperplane/logistic hidden units, showing that neither fares as well as a model of human generalization and learning.
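The contrast drawn in the abstract between Cauchy and Gaussian mass functions can be sketched numerically. The snippet below is an illustrative assumption, not the paper's implementation: the exact parameterizations of the two receptive-field responses (`cauchy_unit`, `gaussian_unit`, and the `radius` scaling) are hypothetical, chosen only to show why a Cauchy unit's heavy tail gives it global extent, and hence more competition between receptive fields, than a Gaussian unit.

```python
import numpy as np

def cauchy_unit(x, center, radius):
    """Hypothetical 'spherical unit' with a Cauchy mass function:
    a consequential region with a center and radius, whose response
    decays polynomially (heavy tail) with distance from the center."""
    d2 = np.sum((x - center) ** 2) / radius ** 2
    return 1.0 / (1.0 + d2)

def gaussian_unit(x, center, radius):
    """Gaussian receptive field of the kind used in RBF-style models
    (e.g., Moody & Darken, 1989); response decays exponentially."""
    d2 = np.sum((x - center) ** 2) / radius ** 2
    return np.exp(-d2)

# Five radii from the center, the Cauchy unit still responds
# appreciably (global extent), while the Gaussian response is
# effectively zero -- so distant Cauchy units continue to compete.
center = np.zeros(2)
x_far = np.array([5.0, 0.0])
print(cauchy_unit(x_far, center, 1.0))    # 1/26, about 0.038
print(gaussian_unit(x_far, center, 1.0))  # exp(-25), about 1.4e-11
```

Under this sketch, every Cauchy unit contributes some activation to every input, whereas Gaussian units are locally tuned; this is one way to read the claim that the Cauchy mass function increases competition amongst receptive fields.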
1 The Generalization Problem
Given a favorable or unfavorable consequence, what should an organism assume about the contingent stimuli? If a moving shadow overhead appears prior to a hawk attack, what should an organism assume about other moving shadows, their shapes and positions? If a dense food patch is occasioned by a particular density of certain kinds of shrubbery, what should the organism assume about other shrubbery, vegetation, or its spatial density? In a pattern recognition context, given that a character of a certain shape, orientation, noise level, etc., has been recognized correctly, what should the system assume about other shapes, orientations, and noise levels it has yet to encounter?
* Also a member of the Cognitive Science Laboratory, Princeton University, Princeton, NJ 08544