{"title": "Know Thy Neighbour: A Normative Theory of Synaptic Depression", "book": "Advances in Neural Information Processing Systems", "page_first": 1464, "page_last": 1472, "abstract": "Synapses exhibit an extraordinary degree of short-term malleability, with release probabilities and effective synaptic strengths changing markedly over multiple timescales. From the perspective of a fixed computational operation in a network, this seems like a most unacceptable degree of added noise. We suggest an alternative theory according to which short term synaptic plasticity plays a normatively-justifiable role. This theory starts from the commonplace observation that the spiking of a neuron is an incomplete, digital, report of the analog quantity that contains all the critical information, namely its membrane potential. We suggest that one key task for a synapse is to solve the inverse problem of estimating the pre-synaptic membrane potential from the spikes it receives and prior expectations, as in a recursive filter. We show that short-term synaptic depression has canonical dynamics which closely resemble those required for optimal estimation, and that it indeed supports high quality estimation. Under this account, the local postsynaptic potential and the level of synaptic resources track the (scaled) mean and variance of the estimated presynaptic membrane potential. 
We make experimentally testable predictions for how the statistics of subthreshold membrane potential fluctuations and the form of spiking non-linearity should be related to the properties of short-term plasticity in any particular cell type.", "full_text": "Know Thy Neighbour:\n\nA Normative Theory of Synaptic Depression\n\nJean-Pascal Pfister\n\nComputational & Biological Learning Lab\n\nDepartment of Engineering, University of Cambridge\n\nTrumpington Street, Cambridge CB2 1PZ, United Kingdom\n\njean-pascal.pfister@eng.cam.ac.uk\n\nPeter Dayan\n\nGatsby Computational Neuroscience Unit, UCL\n\n17 Queen Square, London WC1N 3AR, United Kingdom\n\ndayan@gatsby.ucl.ac.uk\n\nMáté Lengyel\n\nComputational & Biological Learning Lab\n\nDepartment of Engineering, University of Cambridge\n\nTrumpington Street, Cambridge CB2 1PZ, United Kingdom\n\nm.lengyel@eng.cam.ac.uk\n\nAbstract\n\nSynapses exhibit an extraordinary degree of short-term malleability, with release probabilities and effective synaptic strengths changing markedly over multiple timescales. From the perspective of a fixed computational operation in a network, this seems like a most unacceptable degree of added variability. We suggest an alternative theory according to which short-term synaptic plasticity plays a normatively-justifiable role. This theory starts from the commonplace observation that the spiking of a neuron is an incomplete, digital, report of the analog quantity that contains all the critical information, namely its membrane potential. We suggest that a synapse solves the inverse problem of estimating the presynaptic membrane potential from the spikes it receives, acting as a recursive filter. We show that the dynamics of short-term synaptic depression closely resemble those required for optimal filtering, and that they indeed support high-quality estimation. 
Under this account, the local postsynaptic potential and the level of synaptic resources track the (scaled) mean and variance of the estimated presynaptic membrane potential. We make experimentally testable predictions for how the statistics of subthreshold membrane potential fluctuations and the form of spiking non-linearity should be related to the properties of short-term plasticity in any particular cell type.\n\n1 Introduction\n\nFar from being static relays, synapses are complex dynamical elements. The effect of a spike from a presynaptic neuron on its postsynaptic partner depends on the history of the activity of both pre- and postsynaptic neurons, and thus the efficacy of a synapse undergoes perpetual modification. These changes in efficacy can last from hundreds of milliseconds or minutes (short-term plasticity) to hours or months (long-term plasticity). Short-term plasticity typically depends only on the firing pattern of the presynaptic cell [1]; short-term depression gradually diminishes the postsynaptic effects of presynaptic spikes that arrive in quick succession (Fig. 1A). Given the prominence and ubiquity of synaptic depression in cortical (and subcortical) synapses [2], it is pressing to identify its computational role(s).\nThere have thus been various important suggestions for the functional significance of synaptic depression, including – just to name a few – low-pass filtering of inputs [3], rendering postsynaptic responses insensitive to the absolute intensity of presynaptic activity [4, 5], and decorrelating input spike sequences [6]. 
However, important though they must be for select neural systems, these suggestions have a piecemeal flavor – for instance, chaining together stages of low-pass filtering would lead to trivial responding.\nHere, we propose a theory according to which synaptic depression solves a computational problem that is faced by any neural population in which neurons represent and compute with analog quantities, but communicate with discrete spikes. For convenience, we assume this analog quantity to be the membrane potential, but, via a non-linear transformation [7], it could equally well be an analog firing rate. That is, we assume that network computations require the evolution of the membrane potential of a neuron to be a function of the membrane potentials of its presynaptic partners. However, such a neuron does not have access to these membrane potentials (at least not directly; see [8] for an example of indirect interaction), but rather only to the spikes to which they lead, and so it faces a key estimation problem.\nThus, much as in the vein of standard textbook presentations, the operation of a neuron can be logically broken down into three concurrent processes, each running in its dedicated functional compartment: 1) the neuron's afferent synapses (e.g. spines) estimate the membrane potentials of its presynaptic partners, scaled according to the rules of the network computation; 2) the neuron's soma-dendritic compartment follows the membrane potential-dependent dynamics and post-synaptic integration also determined by the computation; and 3) its axon generates action potentials that are broadcast to its efferent synapses (and possibly back to the other compartments, e.g. for long-term plasticity). 
It is in this indispensable first estimation step that we suggest synaptic depression is involved.\nIn Section 2 we formalise the problem of estimating presynaptic membrane potentials as an instance of Bayesian inference, and derive an online recursive estimator for it. Given suitable assumptions about presynaptic membrane potential dynamics and spike generation, this optimal estimator can be written in closed form exactly [9, 10]. In Section 3, we introduce a canonical model of postsynaptic membrane potential and synaptic depression dynamics, and show how it relates to the optimal estimator derived earlier. In Section 4, we present results from numerical simulations showing the quality with which synaptic depression can approximate the performance of the optimal estimator, and how much is gained relative to a static synapse without synaptic depression. Finally, in Section 5, we sum up, suggest experimentally testable predictions, and discuss possible extensions of this work, e.g. to incorporate other forms of short-term synaptic plasticity.\n\n2 Bayesian estimation of presynaptic membrane potentials\n\nThe Bayesian estimation problem that needs to be solved by a synapse involves inferring the posterior distribution p(u_t | s_1..t) over the presynaptic membrane potential u_t at time step t (for discretized time), given the spikes seen from the presynaptic cell up to that time step, s_1..t. We first define a statistical (generative) model of presynaptic membrane potential fluctuations and spiking, and then derive the estimator that is appropriate for it.\nThe generative model involves two simplifying assumptions (Fig. 1B). 
First we assume that presynaptic membrane potential dynamics are Markovian:\n\np(u_t | u_1..t−1) = p(u_t | u_t−1)   (1)\n\nIn particular, we assume that the presynaptic membrane potential evolves as an Ornstein-Uhlenbeck (OU) process, given (again, in discretized time) by\n\nu_t = u_t−1 − θ(u_t−1 − u_r)∆t + W_t √∆t,   W_t iid∼ N(W_t; 0, σ²_W)   (2)\n\nFigure 1: A. Synaptic depression: postsynaptic responses to a train of presynaptic action potentials (not shown) at 40 Hz. (Reproduced from [11], adapted from [12].) B. Graphical model of the process generating presynaptic subthreshold membrane potential fluctuations, u, and spikes, s. The membrane potential evolves according to a first-order Markov process, the Ornstein-Uhlenbeck (OU) process (Eqs. 1-2). The probability of generating a spike at time t (s_t = 1) depends only on the current membrane potential, u_t, and is determined by a non-linear Poisson (NP) model (Eqs. 3-5). C. Sample membrane potential trace (red line) and spike timings (vertical black dotted lines) generated by the OU-NP process; with u_r = 0 mV, θ⁻¹ = 100 ms, σ²_W = 0.02 mV²/ms → σ²_OU = 1 mV², β⁻¹ = 1 mV, and g0 = 10 Hz.\n\nwhere 1/θ is the time constant with which the membrane potential decays back to its resting value, u_r, and ∆t is the size of the discretized time bins. Because both θ and σ_W are assumed to be constant, the variance of the presynaptic membrane potential, σ²_OU = σ²_W/2θ, is stationary.\nThe second assumption is that spiking activity at any time only depends on the membrane potential at that time:\n\np(s_t | u_1..t) = p(s_t | u_t)   (3)\n\nIn particular, we assume that the spike generating mechanism is an inhomogeneous Poisson process (Fig. 1C). 
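To make the generative model concrete, the following minimal sketch simulates Eqs. 2-5 in discretized time. The parameter values follow Fig. 1C; the 1 ms step size and the random seed are assumptions made here, and the exponential gain g(u) = g0 exp(βu) of Eq. 5 is used for the spiking probability.

```python
import numpy as np

# OU membrane potential driving inhomogeneous-Poisson spiking (Eqs. 2-5).
dt = 1.0                 # ms, assumed discretization step
T = 1000                 # number of time steps (1 s total)
u_r = 0.0                # mV, resting potential
theta = 1.0 / 100.0      # 1/ms, so the OU time constant is 100 ms
sigma2_W = 0.02          # mV^2/ms, OU noise variance
beta = 1.0               # 1/mV, spiking stochasticity (Eq. 5)
g0 = 10.0 / 1000.0       # 1/ms, i.e. a 10 Hz baseline rate

rng = np.random.default_rng(0)
u = np.empty(T)
s = np.zeros(T, dtype=int)
u[0] = u_r
for t in range(1, T):
    # Eq. 2: discretized Ornstein-Uhlenbeck step
    W = rng.normal(0.0, np.sqrt(sigma2_W))
    u[t] = u[t - 1] - theta * (u[t - 1] - u_r) * dt + W * np.sqrt(dt)
    # Eqs. 3-5: spike with probability g(u_t)*dt, g(u) = g0 exp(beta*u)
    p_spike = min(g0 * np.exp(beta * u[t]) * dt, 1.0)
    s[t] = int(rng.random() < p_spike)
```

With these parameters the stationary variance is σ²_OU = σ²_W/2θ = 1 mV², matching the trace shown in Fig. 1C.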
Thus, at time step t, the neuron emits a spike (s_t = 1) with probability g(u_t)∆t, and therefore the spiking probability p(s_t | u_t) given the membrane potential can be written as:\n\np(s_t | u_t) = [g(u_t)∆t]^(s_t) [1 − g(u_t)∆t]^(1−s_t)   (4)\n\nWe further assume that the transfer function, g(u), is exponential¹:\n\ng(u) = g0 exp(βu)   (5)\n\nwhere β determines the stochasticity of spiking. In the limit β → ∞ the spiking process is deterministic, i.e. if the membrane potential, u, is bigger than zero, the neuron emits a spike, and if u < 0, the neuron does not fire.\nEstimating on-line the membrane potential of the presynaptic cell from its spiking history amounts to computing the posterior probability distribution, p(u_t | s_1..t). Since equations 1 and 3 define a hidden Markov model, the posterior can be written in a recursive form:\n\np(u_t | s_1..t) ∝ p(s_t | u_t) ∫ p(u_t | u_t−1) p(u_t−1 | s_1..t−1) du_t−1   (6)\n\nThat is, the posterior at time step t, p(u_t | s_1..t), can be computed by combining information from the current time step with the posterior obtained at the previous time step, p(u_t−1 | s_1..t−1). Note that even though inference can be performed recursively, and the hidden dynamics is linear-Gaussian (Eq. 2), the (extended) Kalman filter cannot be used here for inference because the measurement does not involve additive Gaussian noise, but rather comes from the stochasticity of the spiking process (Eqs. 4-5).\n\n¹Note that the exponential gain function is a convenient choice since the product of a Gaussian and an exponential gives again an (unnormalised) Gaussian (see Supplementary Information). 
Furthermore, the exponential gain function also has some experimental support [13].\n\nPerforming recursive inference (filtering), as described by equation 6, under the generative model described by equations 1-5 results in a posterior distribution that is Gaussian, u_t | s_1..t ∼ N(u_t; μ, σ²) (see Supplementary Information). The mean and variance of this Gaussian evolve (in continuous time, by taking the limit ∆t → 0) as:\n\nμ̇ = −θ(μ − u_r) + βσ²(S(t) − γ)   (7)\n\nσ̇² = −2θ(σ² − σ²_OU) − γβ²σ⁴   (8)\n\nwith the normalisation factor given by\n\nγ = ⟨g0 exp(βu)⟩_(u_t|s_1..t) = g0 exp(βμ + β²σ²/2)   (9)\n\nwhere S(t) is the spike train of the presynaptic cell (represented as a sum of Dirac delta functions). (A similar, but not identical, derivation can be found in [9].)\nEquation 7 indicates that each time a spike is observed, the estimated membrane potential should increase proportionally to the uncertainty (variance) about the current estimate. This estimation uncertainty then decreases each time a spike is observed (Eqs. 8-9). As Fig. 2A shows, the higher the presynaptic membrane potential is, the more spikes are emitted (because the instantaneous firing rate is a monotonic function of membrane potential, see Eq. 5), and therefore the smaller the posterior variance becomes. Therefore the estimation error is smaller for higher membrane potentials (see Fig. 2B). Conversely, in the absence of spikes, the estimated membrane potential decreases while the variance increases back to its asymptotic value. Fig. 
2C shows that the representation of uncertainty about the membrane potential by σ² is self-consistent because it is predictive of the error of the mean estimator, μ.\n\nFigure 2: The performance of the optimal on-line estimator. A. Red line: presynaptic membrane potential, u, as a function of time; vertical dotted lines: spikes emitted. Dot-dashed black line: on-line estimator μ given by Eq. (7); gray shading: μ ± σ, with σ given by Eq. (8). B. Estimation error (μ − u)² as a function of the membrane potential u of the OU process. Black dots: estimation error and true membrane potential in individual time steps; red line: third order polynomial fit. C. Black bars: histogram of normalized estimation error z = (μ − u)/σ. Red line: normal distribution N(z; 0, 1). Parameters were as in Fig. 1, except for β⁻¹ = 0.5 mV.\n\nThe first term on the r.h.s. of equation 7 comes from the prior knowledge about the membrane potential dynamics. The second term comes from the likelihood of the spiking observations. Those two contributions can be isolated independently by taking two different limits that we consider in the next two subsections.\n\n2.1 Small noise limit\n\nIn the limit of small variance of the noise driving the OU process, i.e., σ²_W = εσ²_W0 with ε → 0, the asymptotic uncertainty σ²_∞ scales with ε: σ²_∞ = εσ²_W0/2θ (c.f. Eq. 8 with σ̇² = 0). Then the dynamics of μ becomes driven only by the prior mean membrane potential u_r:\n\nμ̇ ≃ −θ(μ − u_r)   (10)\n\nand so the asymptotic estimated membrane potential will tend to the prior mean membrane potential. This is reasonable since in the small noise limit, the true membrane potential u_t will effectively be very close to u_r. Furthermore, the convergence time constant of the estimated membrane potential should be matched to the time constant θ⁻¹ of the OU process, and this is indeed the case in Eq. 10.\n\n2.2 Slow dynamics limit\n\nA second interesting limit is where the time constant of the OU process becomes large, i.e., θ = εθ0 with ε → 0. In this case, the variance of the noise in the OU process must also scale with ε, i.e. σ²_W = εσ²_W0, to prevent the process from being unbounded. The variance σ²_OU = σ²_W0/2θ0 of the OU process is therefore independent of ε. In this case, the asymptotic value of the posterior variance becomes σ²_∞ = √ε σ_W0/(β√γ) (c.f. Eq. 8 with σ̇² = 0). In the limit of small ε, the first term of Eq. 7 scales with ε whereas the second term scales with √ε. We can therefore write:\n\nμ̇ ≃ (σ_W0 √ε / √γ) (S(t) − γ)   (11)\n\nBecause the time constant θ⁻¹ of the OU process is slow, the driving force that pulls the membrane potential back to its mean value u_r is weak. Therefore the membrane potential estimation dynamics should rely on the observed spikes rather than on the prior information u_r. This is apparent in Eq. 11.\nFurthermore, the time constant τ = √(γ/ε)/σ_W0 is not fixed but is a function of the mean estimated membrane potential μ. Thus, if the initial estimate μ0 = μ(0) is below the target value u_r, γ will be small and hence the time constant τ will be small as well. As a consequence, each spike will greatly increase the estimate and therefore speed up the approach of this estimate to the true value. As μ gets closer to the true membrane potential, the time constant increases, leading to an appropriately accurate estimate of the membrane potential. 
This dynamical time constant therefore helps the estimation avoid the traditional speed vs. accuracy trade-off (short time constants are fast but give a noisy estimate; longer time constants are slow but yield a more accurate estimate), by combining the best of both worlds.\n\n3 Depressing synapses as estimators of presynaptic membrane potential\n\nIn section 2 we have shown that presynaptic spikes have a varying, context-dependent effect on the optimal on-line estimator of the presynaptic membrane potential. In this section we will show that the variability that synaptic depression introduces in postsynaptic responses closely resembles the variability of the optimal estimator.\nA simple way to study the similarity between the optimal estimator and short-term plasticity is to consider their steady-state filtering properties. As we saw above, according to the optimal estimator, the higher the input firing rate is, the smaller the posterior variance becomes, and therefore the increment due to subsequent spikes should decrease. This is consistent with depressing synapses, for which the amount of excitatory postsynaptic current (EPSC) decreases when the stimulation frequency is increased (see Fig. 3).\n\nFigure 3: A. Steady-state spiking increment βσ² of the optimal estimator as a function of r = ⟨S⟩ (Eq. 8). B. Synaptic depression in the climbing fibre to Purkinje cell synapse: average (±s.e.m.) normalised “steady-state” magnitude of EPSCs as a function of stimulation frequency. Reproduced from [3].\n\nImportantly, the similarity between the optimal membrane potential estimator and short-term plasticity is not limited to stationary properties. 
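The coupled mean-variance dynamics of Eqs. 7-9 are straightforward to integrate numerically. The following sketch uses a simple Euler scheme with parameter defaults taken from Fig. 1; the step size, initialisation at the prior, and the treatment of each spike as a unit impulse within its time bin are assumptions made here, not the authors' implementation.

```python
import numpy as np

def optimal_filter(s, dt=1.0, u_r=0.0, theta=0.01, sigma2_W=0.02,
                   beta=1.0, g0=0.01):
    """Euler integration of the filtering dynamics of Eqs. 7-9.

    `s` is a binary spike train sampled at resolution `dt` (ms); the
    defaults mirror the parameters of Fig. 1. Returns the posterior
    mean and variance traces (mu, sigma^2).
    """
    sigma2_OU = sigma2_W / (2.0 * theta)       # stationary prior variance
    mu, sig2 = u_r, sigma2_OU                  # initialise at the prior
    mus = np.empty(len(s))
    sig2s = np.empty(len(s))
    for t, spike in enumerate(s):
        gamma = g0 * np.exp(beta * mu + 0.5 * beta**2 * sig2)   # Eq. 9
        # Eq. 7: decay towards the prior mean, plus spike-driven jumps of
        # size beta*sigma^2 (the delta in S(t) integrates to `spike` here)
        mu += -theta * (mu - u_r) * dt + beta * sig2 * (spike - gamma * dt)
        # Eq. 8: relaxation to sigma^2_OU minus the expected-observation term
        sig2 += (-2.0 * theta * (sig2 - sigma2_OU)
                 - gamma * beta**2 * sig2**2) * dt
        mus[t], sig2s[t] = mu, sig2
    return mus, sig2s
```

Running this on a spike train shows the behaviour described in the text: each spike increases μ in proportion to the current σ², while in silent stretches μ drifts down and σ² relaxes back towards its asymptotic value.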
Indeed, the actual dynamics of the optimal estimator (Eqs. 7-9) can be well approximated by the dynamics of synaptic depression. In a canonical model of short-term depression [14], the postsynaptic membrane potential, v, changes as\n\nv̇ = −(v − v0)/τ + J Y x S(t),   with   ẋ = (1 − x)/τ_D − Y x S(t)   (12)\n\nwhere J and Y are constants (synaptic weight and utilisation fraction), and x is a time-varying ‘resource’ variable (e.g. the fraction of presynaptic vesicles ready to fuse to the membrane). Thus, v is increased by each presynaptic spike, and in the absence of spikes it decays to its resting value, v0, with membrane time constant τ. However, the effect of each spike on v is scaled by x, which itself is decreased after each spike and increases between spikes back towards one with time constant τ_D.\nThus, the postsynaptic potential, v, behaves much like the posterior mean of the optimal estimator, μ, while the dynamics of the synaptic resource variable, x, closely resemble those of the posterior variance of the optimal estimator, σ². This qualitative similarity can be made more formal under appropriate assumptions; for details see section 3 of the supplementary information. Indeed, the capacity of a depressing synapse (with appropriate parameters) to estimate the presynaptic membrane potential can be nearly as good as that of the optimal estimator (Fig. 4, top). Interestingly, although the scaled variance σ²/σ²_∞ does not follow the resource variable dynamics x perfectly just after a spike, these two quantities are virtually identical at the time of the next spike, i.e. when they are used by the membrane potential estimators (Fig. 
4, bottom).\n\n4 Performance analysis\n\nIn order to quantify how well synaptic dynamics with depression perform in estimating presynaptic membrane potentials, we measure performance by the mean-squared error (MSE) between the true membrane potential u and the estimated membrane potential, and compare the MSE of three alternative estimators.\nThe simplest model we consider is a static (non-depressing) synapse, in which v is given by Eq. 12 with constant x = 1. This estimator has only 3 tuneable parameters: τ, v0 and J (Y = 1 is fixed without loss of generality). The second estimator we consider includes synaptic depression, i.e. x is also allowed to vary (Eq. 12). This estimator contains 5 tuneable parameters (v0, τ, Y, J, τ_D). Finally, we consider the optimal estimator (Eqs. 7-9). This estimator has no tunable parameters. Once the parameters of presynaptic membrane potential dynamics (σ_W, θ, u_r) and spiking (β, g0) are fixed, the optimal estimator is entirely determined. The comparison of the performance of these three estimators is displayed in Fig. 5. The optimal estimator (black circles) is obviously a lower bound on any type of estimator. For a wide range of parameter values, the depressing synapse performs almost as well as the optimal estimator, and both perform better than the static synapse.\n\nFigure 4: Depressing synapses implement near-optimal estimation of presynaptic membrane potentials. Top. Red line, and vertical dotted lines: membrane potential, u, and spikes, S, generated by a simulated presynaptic cell (with parameters as in Fig. 1). Blue line: postsynaptic potential, v, in a depressing synapse (Eq. 12) with all 5 parameters (J = 4.82, τ = 60.6 ms, v0 = −0.59 mV, τ_D = 64 ms, Y = 0.17) tuned to minimize the mean squared estimation error, (u − v)². 
Black line: Posterior mean of the optimal on-line estimator, μ (Eq. 7). Bottom. Black: resource variable, x, in the depressing synapse (Eq. 12). Blue: posterior variance of the optimal estimator, σ² (Eq. 8).\n\nIn the slow dynamics limit (ε → 0, see section 2.2), the estimation error of the optimal estimator can even be approximated analytically (see Supplementary Information). In this limit, the error scales with √σ_W and therefore with ⁴√ε. As can be seen in Fig. 5B, for small ε, the analytical expression is consistent with the simulations.\n\n5 Discussion\n\nSynapses are a cornerstone of computation in networks, and are highly complex dynamical systems involving more than a thousand different types of protein. One prominent feature of their dynamics is significant short-term changes in efficacy; these belie the sort of single fixed, or slowly changing, weights popular in most neural models. We interpreted short-term synaptic depression, a key feature of synaptic dynamics, as solving the fundamental computational task of estimating the analog membrane potential of the presynaptic cell from observed spikes. Steady-state and dynamical properties of a Bayes-optimal estimator are well matched by a canonical model of depression; using a fixed synaptic efficacy instead leads to a highly suboptimal estimator.\nOur theory is readily testable, since it suggests a precise relationship between quantities that have been subject to extensive, separate, empirical study — namely the statistics of a neuron's membrane potential dynamics (captured by the parameters of Eq. (2)), the form of its spiking non-linearity (described by Eq. (5)), and the synaptic depression it expresses in its efferent synapses. 
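As an illustration of how such a comparison can be set up in simulation, the canonical depression model of Eq. 12 can be run directly as an estimator. The following sketch uses the parameter values quoted for Fig. 4; applying the spike update before the within-bin decay, and the 1 ms step size, are discretisation choices assumed here.

```python
import numpy as np

def depressing_synapse(s, dt=1.0, J=4.82, tau=60.6, v0=-0.59,
                       tau_D=64.0, Y=0.17):
    """Short-term depression model of Eq. 12 run as an estimator.

    `s` is a binary spike train at resolution `dt` (ms). Parameter
    values are those quoted for Fig. 4.
    """
    v, x = v0, 1.0
    vs = np.empty(len(s))
    xs = np.empty(len(s))
    for t, spike in enumerate(s):
        if spike:
            v += J * Y * x       # postsynaptic jump, scaled by resources x
            x -= Y * x           # each spike uses up a fraction Y of x
        v += -(v - v0) / tau * dt       # decay back to the resting value
        x += (1.0 - x) / tau_D * dt     # resources recover towards 1
        vs[t], xs[t] = v, x
    return vs, xs
```

Feeding this model spike trains generated from the OU-NP process, and optimising the five parameters for the mean-squared error (u − v)², would reproduce the kind of comparison against the optimal and static estimators summarised in Fig. 5.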
Accounting for the observation that different efferent synapses of the same cell can express different forms of short-term synaptic plasticity [15] remains a challenge; one obvious possibility is that different synapses are estimating different aspects or functions of the membrane potential.\nOur approach is almost dual to that explored in [16]. In that model, the spike generation mechanism of the presynaptic neuron was modified such that even a simple read-out mechanism with fixed efficacies could correctly decode the analog quantity encoded presynaptically. By contrast, we considered a standard model of spiking [17], and thereby derived an explanation for the evident fact that synapses are not in fact fixed.\n\nFigure 5: A. Comparing the estimation error for different membrane potential estimators as a function of ε (θ = εθ0, σ²_W = εσ²_W0). Black: asymptotic error of the optimal estimator. Blue: depressing synapse with its 5 tuneable parameters (see text) optimised for each value of ε. Red: static synapse with its 3 tuneable parameters (see text) optimised. Total simulated time was 5 min. Horizontal dot-dashed line: upper bound on the estimation error given by σ_OU = σ_W/√(2θ) = 1. B. Analysing the estimation error of the optimal estimator in the slow dynamics limit (ε → 0). Solid line: analytical approximation (Eq. 31 in the Supplementary Information); circles: simulation; horizontal dot-dashed line: as in A.\n\nThere are several avenues to extend the present analysis. 
For example, it would be important to understand in more quantitative detail the mapping between the parameters of the process generating the presynaptic membrane potential and spikes, and the parameters of synaptic depression that will best realize the corresponding optimal estimator. We present some preliminary derivations in the supplementary material that seem to yield at least the right ball-park values for optimal synaptic dynamics. This should also enable us to explore the particular parameter regimes in which depressing synapses have the most (or least) advantage over static synapses in terms of estimation performance, as in Fig. 5. We should also consider a meta-plasticity rule that suitably adapts the parameters of the short-term dynamics in the light of the statistics of spiking.\nOur assumption about the prior distribution of presynaptic membrane potential dynamics is highly restrictive. A broader scheme that has previously been explored is that it follows a Gaussian process model [18, 19] with a more general covariance function. Recursive estimation is often a reasonable approximation in such cases, even for those covariance functions, for instance ones enforcing smoothness, for which it cannot be exact. One interesting property of smooth trajectories is that a couple of spikes arriving in quick succession may be diagnostic of an upward-going trend in membrane potential, which is best decoded with increasing, i.e., facilitating, rather than decreasing, postsynaptic responses. Thus it may be possible to encompass other forms of short-term plasticity within our scheme.\nThe spike generation process can also be extended to incorporate refractoriness, bursting, and other forms of non-Poisson behaviour, e.g. as in [20]. 
Similarly, synaptic failures could also be considered. We hope through our theory to be able to provide a teleological account of the rich complexities of real synaptic inconstancy.\n\nAcknowledgements\n\nFunding was from the Gatsby Charitable Foundation (PD) and the Wellcome Trust (JPP, ML and PD).\n\nReferences\n[1] Abbott, L.F. & Regehr, W.G. Synaptic computation. Nature 431, 796–803 (2004).\n[2] Zucker, R. & Regehr, W. Short-term synaptic plasticity. Annual Review of Physiology 64, 355–405 (2002).\n[3] Dittman, J., Kreitzer, A. & Regehr, W. Interplay between facilitation, depression, and residual calcium at three presynaptic terminals. Journal of Neuroscience 20, 1374 (2000).\n[4] Abbott, L.F., Varela, J.A., Sen, K. & Nelson, S.B. Synaptic depression and cortical gain control. Science 275, 220–224 (1997).\n[5] Cook, D., Schwindt, P., Grande, L. & Spain, W. Synaptic depression in the localization of sound. Nature 421, 66–70 (2003).\n[6] Goldman, M., Maldonado, P. & Abbott, L. Redundancy reduction and sustained firing with stochastic depressing synapses. Journal of Neuroscience 22, 584 (2002).\n[7] Ermentrout, B. Neural networks as spatio-temporal pattern-forming systems. Reports on Progress in Physics 61, 353 (1998).\n[8] Shu, Y., Hasenstaub, A., Duque, A., Yu, Y. & McCormick, D. Modulation of intracortical synaptic potentials by presynaptic somatic membrane potential. Nature 441, 761–765 (2006).\n[9] Eden, U., Frank, L., Barbieri, R., Solo, V. & Brown, E. Dynamic analysis of neural encoding by point process adaptive filtering. Neural Computation 16, 971–998 (2004).\n[10] Bobrowski, O., Meir, R. & Eldar, Y. 
Bayesian filtering in spiking neural networks: Noise, adaptation, and multisensory integration. Neural Computation 21, 1277–1320 (2009).\n[11] Dayan, P. & Abbott, L.F. Theoretical Neuroscience (MIT Press, Cambridge, 2001).\n[12] Markram, H. & Tsodyks, M. Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature 382, 807–810 (1996).\n[13] Jolivet, R., Rauch, A., Lüscher, H.R. & Gerstner, W. Predicting spike timing of neocortical pyramidal neurons by simple threshold models. J. Computational Neuroscience 21, 35–49 (2006).\n[14] Mongillo, G., Barak, O. & Tsodyks, M. Synaptic theory of working memory. Science 319, 1543 (2008).\n[15] Markram, H., Wu, Y. & Tsodyks, M. Differential signaling via the same axon of neocortical pyramidal neurons. Proc. Natl. Acad. Sci. USA 95, 5323–5328 (1998).\n[16] Deneve, S. Bayesian spiking neurons I: inference. Neural Computation 20, 91–117 (2008).\n[17] Gerstner, W. & Kistler, W.K. Spiking Neuron Models (Cambridge University Press, Cambridge UK, 2002).\n[18] Cunningham, J., Yu, B., Shenoy, K. & Sahani, M. Inferring neural firing rates from spike trains using Gaussian processes. Advances in Neural Information Processing Systems 20, 329–336 (2008).\n[19] Huys, Q., Zemel, R., Natarajan, R. & Dayan, P. Fast population coding. Neural Computation 19, 404–441 (2007).\n[20] Pillow, J. et al. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature 454, 995–999 (2008).", "award": [], "sourceid": 695, "authors": [{"given_name": "Jean-pascal", "family_name": "Pfister", "institution": null}, {"given_name": "Peter", "family_name": "Dayan", "institution": null}, {"given_name": "M\u00e1t\u00e9", "family_name": "Lengyel", "institution": null}]}