Part of Advances in Neural Information Processing Systems 17 (NIPS 2004)
Sophie Deneve
Institute of Cognitive Science, 69645 Bron, France
We propose a new interpretation of spiking neurons as Bayesian integrators accumulating evidence over time about events in the external world or the body, and communicating to other neurons their certainties about these events. In this model, spikes signal the occurrence of new information, i.e. what cannot be predicted from the past activity. As a result, firing statistics are close to Poisson, albeit providing a deterministic representation of probabilities. We proceed to develop a theory of Bayesian inference in spiking neural networks, recurrent interactions implementing a variant of belief propagation.
Many perceptual and motor tasks performed by the central nervous system are probabilistic, and can be described in a Bayesian framework [4, 3]. A few important but hidden properties, such as direction of motion, or appropriate motor commands, are inferred from many noisy, local and ambiguous sensory cues. This evidence is combined with priors about the sensory world and body. Importantly, because most of these inferences should lead to quick and irreversible decisions in a perpetually changing world, noisy cues have to be integrated on-line, but in a way that takes into account unpredictable events, such as a sudden change in motion direction or the appearance of a new stimulus.
This raises the question of how this temporal integration can be performed at the neural level. It has been proposed that single neurons in sensory cortices represent and compute the log probability that a sensory variable takes on a certain value (e.g. is visual motion in the neuron's preferred direction?) [9, 7]. Alternatively, to avoid normalization issues and provide an appropriate signal for decision making, neurons could represent the log probability ratio of a particular hypothesis (e.g. is motion more likely to be towards the right than towards the left?) [7, 6]. Log probabilities are convenient here, since under some assumptions, independent noisy cues simply combine linearly. Moreover, there is physiological evidence for the neural representation of log probabilities and log probability ratios [9, 6, 7].
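For instance, for cues $c_1, \ldots, c_N$ that are conditionally independent given the binary hypothesis $x$, Bayes' rule turns the combination of cues into a sum of individual log likelihood ratios:
$$\log \frac{P(x=1 \mid c_1,\ldots,c_N)}{P(x=0 \mid c_1,\ldots,c_N)} = \log \frac{P(x=1)}{P(x=0)} + \sum_{i=1}^{N} \log \frac{P(c_i \mid x=1)}{P(c_i \mid x=0)}.$$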
However, these models assume that neurons represent probabilities in their firing rates. We argue that it is important to study how probabilistic information is encoded in spikes. Indeed, it seems spurious to marry the idea of an exquisite on-line integration of noisy cues with an underlying rate code that requires averaging over large populations of noisy neurons and long periods of time. In particular, most natural tasks require this integration to take place on the time scale of inter-spike intervals. Spikes signal events more efficiently than they do analog quantities. In addition, a neural theory of inference with spikes will bring us closer to the physiological level and generate more easily testable predictions.
Thus, we propose a new theory of neural processing in which spike trains provide a deterministic, online representation of a log-probability ratio. Spikes signal events, e.g. that the log-probability ratio has exceeded what could be predicted from previous spikes. This form of coding was loosely inspired by the idea of "energy landscape" coding proposed by Hinton and Brown [2]. However, contrary to [2] and other theories using rate-based representations of probabilities, this model is self-consistent and does not require different models for encoding and decoding: as output spikes provide new, unpredictable, temporally independent evidence, they can be used directly as an input to other Bayesian neurons.
Finally, we show that these neurons can be used as building blocks in a theory of approximate Bayesian inference in recurrent spiking networks. Connections between neurons implement an underlying Bayesian network, consisting of coupled hidden Markov models. Propagation of spikes is a form of belief propagation in this underlying graphical model.
Our theory provides computational explanations of some general physiological properties of cortical neurons, such as spike frequency adaptation, Poisson statistics of spike trains, the existence of strong local inhibition in cortical columns, and the maintenance of a tight balance between excitation and inhibition. Finally, we discuss the implications of this model for the debate about temporal versus rate-based neural coding.
1 Spikes and log posterior odds
1.1 Synaptic integration seen as inference in a hidden Markov chain
We propose that each neuron codes for an underlying "hidden" binary variable, $x_t$, whose state evolves over time. We assume that $x_t$ depends only on the state at the previous time step, $x_{t-dt}$, and is conditionally independent of other past states. The state $x_t$ can switch from 0 to 1 with a constant rate $r_{on} = \lim_{dt \to 0} \frac{1}{dt} P(x_t = 1 \mid x_{t-dt} = 0)$, and from 1 to 0 with a constant rate $r_{off}$. For example, these transition rates could represent how often motion in a preferred direction appears in the receptive field and how long it is likely to stay there.
The neuron infers the state of its hidden variable from $N$ noisy synaptic inputs, considered to be observations of the hidden state. In this initial version of the model, we assume that these inputs are conditionally independent homogeneous Poisson processes, synapse $i$ emitting a spike between time $t$ and $t + dt$ ($s_t^i = 1$) with constant probability $q_{on}^i dt$ if $x_t = 1$, and another constant probability $q_{off}^i dt$ if $x_t = 0$. The synaptic spikes are assumed to be otherwise independent of previous synaptic spikes, previous states and spikes at other synapses. The resulting generative model is a hidden Markov chain (figure 1-A).
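As an illustration, the following short Python sketch samples from this generative model. The rates, duration and number of synapses are arbitrary illustrative values, not taken from the paper:

```python
import numpy as np

def sample_hidden_markov_poisson(T=3.0, dt=1e-3, r_on=1.0, r_off=1.0,
                                 q_on=(20.0, 40.0), q_off=(5.0, 10.0),
                                 rng=np.random.default_rng(0)):
    """Sample the binary hidden state x_t and N Poisson synaptic inputs.

    x_t switches 0 -> 1 with rate r_on and 1 -> 0 with rate r_off;
    synapse i fires with probability q_on[i]*dt when x_t = 1 and
    q_off[i]*dt when x_t = 0 (conditionally independent observations).
    """
    n_steps = int(T / dt)
    n_syn = len(q_on)
    x = np.zeros(n_steps, dtype=int)
    s = np.zeros((n_steps, n_syn), dtype=int)
    for t in range(1, n_steps):
        # Markov transition of the hidden state
        p_switch = r_on * dt if x[t - 1] == 0 else r_off * dt
        x[t] = 1 - x[t - 1] if rng.random() < p_switch else x[t - 1]
        # Poisson observations conditioned on the current state
        rates = np.where(x[t] == 1, q_on, q_off)
        s[t] = rng.random(n_syn) < rates * dt
    return x, s

x, s = sample_hidden_markov_poisson()
```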
However, rather than estimating the state of its hidden variable and communicating this estimate to other neurons (for example by emitting a spike when the sensory evidence for $x_t = 1$ goes above a threshold), the neuron reports and communicates its certainty that the current state is 1. This certainty takes the form of the log of the ratio of the probability that the hidden state is 1 and the probability that the state is 0, given all the synaptic inputs received so far: $L_t = \log \frac{P(x_t = 1 \mid s_{0 \to t})}{P(x_t = 0 \mid s_{0 \to t})}$. We use $s_{0 \to t}$ as a shorthand notation for the $N$ synaptic inputs received at present and in the past. We will refer to it as the log odds ratio.
Thanks to the conditional independencies assumed in the generative model, we can compute this log odds ratio iteratively. Taking the limit as dt goes to zero, we get the following differential equation:
$$\dot{L}_t = r_{on}\,(1 + e^{-L_t}) - r_{off}\,(1 + e^{L_t}) + \sum_i w_i\,\delta(s_t^i - 1) - \theta$$
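A minimal Euler-discretized sketch of this update, assuming the synaptic weights $w_i$ and bias $\theta$ defined in the paragraphs below are passed in as parameters (function name and discretization are our own illustrative choices):

```python
import numpy as np

def integrate_log_odds(s, dt, r_on, r_off, w, theta, L0=0.0):
    """Euler integration of the log odds ratio L_t.

    s     : (n_steps, N) array of synaptic spikes (0 or 1)
    w     : (N,) synaptic weights, w_i = log(q_on_i / q_off_i)
    theta : bias, sum_i (q_on_i - q_off_i)
    """
    L = np.empty(len(s))
    L[0] = L0
    for t in range(1, len(s)):
        drift = (r_on * (1 + np.exp(-L[t - 1]))
                 - r_off * (1 + np.exp(L[t - 1])) - theta)
        L[t] = L[t - 1] + drift * dt + np.dot(w, s[t])  # spikes act as impulses
    return L
```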
Figure 1: A. Generative model for the synaptic input. B. Schematic representation of log odds ratio encoding and decoding. The dashed circle represents both eventual downstream elements and the self-prediction taking place inside the model neuron. A spike is fired only when $L_t$ exceeds $G_t$. C. One example trial, where the state switches from 0 to 1 (shaded area) and back to 0. Solid line: $L_t$; dotted line: $G_t$. Black stripes at the top: corresponding spike train. D. Mean log odds ratio (dark line) and mean output firing rate (light line). E. Output spike raster plot (one line per trial) and ISI distribution for the neuron shown in C. and D. Light line: ISI distribution for a Poisson neuron with the same rate.
$w_i$, the synaptic weight, describes how informative synapse $i$ is about the state of the hidden variable, e.g. $w_i = \log \frac{q_{on}^i}{q_{off}^i}$. Each synaptic spike ($s_t^i = 1$) gives an impulse to the log odds ratio, which is positive if this synapse is more active when the hidden state is 1 (i.e. it increases the neuron's confidence that the state is 1), and negative if this synapse is more active when $x_t = 0$ (i.e. it decreases the neuron's confidence that the state is 1).
The bias, $\theta$, is determined by how informative it is not to receive any spike, e.g. $\theta = \sum_i (q_{on}^i - q_{off}^i)$. By convention, we will consider that the "bias" is positive or zero (if not, we need simply to invert the status of the state $x$).
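In code, using the illustrative per-synapse rates from the sketch above, these two constants would be computed as:

```python
import numpy as np

q_on = np.array([20.0, 40.0])   # firing rate (Hz) of each synapse when x_t = 1
q_off = np.array([5.0, 10.0])   # firing rate (Hz) of each synapse when x_t = 0

w = np.log(q_on / q_off)        # synaptic weights, w_i = log(q_on_i / q_off_i)
theta = np.sum(q_on - q_off)    # bias: how informative it is to receive no spike
```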
1.2 Generation of output spikes
The spike train should convey a sparse representation of $L_t$, so that each spike reports new information about the state $x_t$ that is not redundant with that reported by other, preceding, spikes. This proposition is based on three arguments: First, spikes, being metabolically expensive, should be kept to a minimum. Second, spikes conveying redundant information would require a decoding of the entire spike train, whereas independent spikes can be taken into account individually. And finally, we seek a self-consistent model, with the spiking output having a similar semantics to its spiking input.
To maximize the independence of the spikes (conditioned on $x_t$), we propose that the neuron fires only when the difference between its log odds ratio $L_t$ and a prediction $G_t$ of this log odds ratio, based on the output spikes emitted so far, reaches a certain threshold. Indeed, supposing that downstream elements predict $L_t$ as well as they can, the neuron only needs to fire when it expects that prediction to be too inaccurate (figure 1-B). In practice, this
will happen when the neuron receives new evidence for $x_t = 1$. $G_t$ should thereby follow the same dynamics as $L_t$ when spikes are not received. The equations for $G_t$ and the output $O_t$ ($O_t = 1$ when an output spike is fired) are given by:
$$\dot{G}_t = r_{on}\,(1 + e^{-L_t}) - r_{off}\,(1 + e^{L_t}) + g_o\,\delta(O_t - 1) \qquad (1)$$
$$O_t = 1 \text{ when } L_t > G_t + \frac{g_o}{2}, \quad 0 \text{ otherwise} \qquad (2)$$
Here $g_o$, a positive constant, is the only free parameter, the other parameters being constrained by the statistics of the synaptic input.
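A minimal Euler-discretized sketch of this encoding rule, building on the integrator above (the value of $g_o$ and the function name are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def encode_spikes(s, dt, r_on, r_off, w, theta, g_o=2.0):
    """Generate output spikes O_t by comparing L_t to its self-prediction G_t.

    The neuron fires whenever L_t exceeds G_t + g_o/2; each output spike
    gives an impulse g_o to the prediction G_t (equations 1 and 2).
    """
    n_steps = len(s)
    L = np.zeros(n_steps)
    G = np.zeros(n_steps)
    O = np.zeros(n_steps, dtype=int)
    for t in range(1, n_steps):
        drift = r_on * (1 + np.exp(-L[t - 1])) - r_off * (1 + np.exp(L[t - 1]))
        L[t] = L[t - 1] + (drift - theta) * dt + np.dot(w, s[t])
        G[t] = G[t - 1] + drift * dt
        if L[t] > G[t] + g_o / 2:    # prediction too inaccurate: fire
            O[t] = 1
            G[t] += g_o              # output spike updates the prediction
    return L, G, O
```

Feeding in the synaptic spikes sampled from the generative-model sketch above, together with the weights and bias computed from $q_{on}^i$ and $q_{off}^i$, produces output spike trains analogous to those shown in figure 1.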