Better than least squares: comparison of objective functions for estimating linear-nonlinear models

Part of Advances in Neural Information Processing Systems 20 (NIPS 2007)


Authors

Tatyana Sharpee

Abstract

This paper compares a family of methods for characterizing neural feature selectivity with natural stimuli in the framework of the linear-nonlinear model. In this model, the neural firing rate is a nonlinear function of a small number of relevant stimulus components. The relevant stimulus dimensions can be found by maximizing one of a family of objective functions, Rényi divergences of different orders [1, 2]. We show that maximizing one of them, the Rényi divergence of order 2, is equivalent to least-squares fitting of the linear-nonlinear model to neural data. Next, we derive reconstruction errors in relevant dimensions found by maximizing Rényi divergences of arbitrary order in the asymptotic limit of large spike numbers. We find that the smallest errors are obtained with the Rényi divergence of order 1, also known as the Kullback-Leibler divergence. This corresponds to finding relevant dimensions by maximizing mutual information [2]. We numerically test how these optimization schemes perform in the regime of low signal-to-noise ratio (small numbers of spikes and increasing neural noise) for model visual neurons. We find that optimization schemes based on either least-squares fitting or information maximization perform well even when the number of spikes is small. Information maximization provides slightly, but significantly, better reconstructions than least-squares fitting. This makes the problem of finding relevant dimensions, together with the problem of lossy compression [3], one of the examples where information-theoretic measures are no more data limited than those derived from least squares.
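To make the setup concrete, the sketch below simulates a linear-nonlinear model neuron and recovers its single relevant dimension by maximizing the Kullback-Leibler divergence (the Rényi divergence of order 1) between the distribution of stimulus projections conditioned on a spike and the prior distribution of projections. This is a minimal illustration of the information-maximization scheme discussed in the abstract, not the authors' implementation: the stimulus model, nonlinearity, histogram binning, function names (e.g. kl_objective), and optimizer settings are all assumptions chosen for simplicity.

```python
# Minimal sketch (assumed setup, not the paper's code): recover the relevant
# dimension of a simulated linear-nonlinear (LN) neuron by maximizing the
# KL divergence between P(projection | spike) and P(projection).

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# --- Simulate an LN model neuron --------------------------------------------
D = 10                          # stimulus dimensionality (illustrative)
n_samples = 20_000              # number of stimulus presentations
true_dim = rng.normal(size=D)
true_dim /= np.linalg.norm(true_dim)

stimuli = rng.normal(size=(n_samples, D))               # white Gaussian stimuli
projection = stimuli @ true_dim
spike_prob = 1.0 / (1.0 + np.exp(-3.0 * (projection - 0.5)))  # sigmoidal nonlinearity
spikes = rng.random(n_samples) < spike_prob

# --- KL-divergence (Renyi order 1) objective ---------------------------------
def kl_objective(v, stimuli, spikes, n_bins=25):
    """Negative KL divergence between P(x | spike) and P(x) along direction v."""
    v = v / np.linalg.norm(v)
    x = stimuli @ v
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    p_all, _ = np.histogram(x, bins=edges)
    p_spk, _ = np.histogram(x[spikes], bins=edges)
    p_all = p_all / p_all.sum()
    p_spk = p_spk / p_spk.sum()
    mask = (p_spk > 0) & (p_all > 0)
    kl = np.sum(p_spk[mask] * np.log(p_spk[mask] / p_all[mask]))
    return -kl      # minimize the negative to maximize information

# --- Maximize the objective over candidate dimensions ------------------------
v0 = rng.normal(size=D)          # random initial guess
result = minimize(kl_objective, v0, args=(stimuli, spikes),
                  method="Nelder-Mead",
                  options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-8})
v_hat = result.x / np.linalg.norm(result.x)

# Up to an overall sign, v_hat should align with the true relevant dimension.
print("overlap with true dimension:", abs(v_hat @ true_dim))
```

Swapping the KL term for a Rényi divergence of another order changes only the objective function inside kl_objective; the abstract's claim is that order 2 reproduces least-squares fitting of the LN model, while order 1 yields the smallest asymptotic reconstruction errors.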