Part of Advances in Neural Information Processing Systems 17 (NIPS 2004)
Aaron C. Courville, Nathaniel Daw, David Touretzky
We propose a probabilistic, generative account of configural learning phenomena in classical conditioning. Configural learning experiments probe how animals discriminate and generalize between patterns of simultaneously presented stimuli (such as tones and lights) that are differentially predictive of reinforcement. Previous models of these issues have been successful more on a phenomenological than an explanatory level: they reproduce experimental findings but, lacking formal foundations, provide scant basis for understanding why animals behave as they do. We present a theory that clarifies seemingly arbitrary aspects of previous models while also capturing a broader set of data. Key patterns of data, e.g. concerning animals' readiness to distinguish patterns with varying degrees of overlap, are shown to follow from statistical inference.