Maximilian Riesenhuber, Tomaso Poggio
In macaque inferotemporal cortex (IT), neurons have been found to respond selectively to complex shapes while showing broad tuning ("invariance") with respect to stimulus transformations such as translation and scale changes, and a limited tuning to rotation in depth. Training monkeys with novel, paperclip-like objects, Logothetis et al.9 could investigate whether these invariance properties are due to experience with exhaustively many transformed instances of an object, or whether there are mechanisms that allow the cells to show response invariance also to previously unseen instances of that object. They found object-selective cells in anterior IT which exhibited limited invariance to various transformations after training with single object views. While previous models accounted for the tuning of the cells for rotations in depth and for their selectivity to a specific object relative to a population of distractor objects,14,1 the model described here attempts to explain in a biologically plausible way the additional properties of translation and size invariance. Using the same stimuli as in the experiment, we find that model IT neurons exhibit invariance properties which closely parallel those of real neurons. Simulations show that the model is capable of unsupervised learning of view-tuned neurons.
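As an illustration of the general principle at work (not the paper's exact architecture), translation invariance combined with shape selectivity can be obtained by pooling, with a maximum operation, the responses of feature-tuned units replicated across positions. The sketch below is a minimal toy version of that idea; the template, image sizes, and function names are assumptions chosen for the example.

```python
import numpy as np

def feature_responses(image, template):
    """Response of a template-tuned unit at every valid position
    (a crude stand-in for a layer of position-specific feature cells)."""
    H, W = image.shape
    h, w = template.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * template)
    return out

def pooled_response(image, template):
    """MAX-pool over all positions: the pooled unit keeps the feature
    selectivity of the template but becomes translation-invariant."""
    return feature_responses(image, template).max()

# A horizontal-bar template, and the same bar placed at two positions.
template = np.array([[1.0, 1.0, 1.0]])
img_a = np.zeros((5, 5)); img_a[1, 0:3] = 1.0  # bar near top-left
img_b = np.zeros((5, 5)); img_b[3, 2:5] = 1.0  # same bar, shifted

print(pooled_response(img_a, template))  # 3.0
print(pooled_response(img_b, template))  # 3.0 -- identical despite the shift
```

Pooling with a maximum (rather than a sum) preserves selectivity: a different shape that only partially matches the template everywhere yields a smaller pooled response, while the preferred shape gives the same strong response wherever it appears.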
We thank Peter Dayan, Marcus Dill, Shimon Edelman, Nikos Logothetis, Jonathan Mumick and
Randy O'Reilly for useful discussions and comments.