{"title": "Dynamic Stochastic Synapses as Computational Units", "book": "Advances in Neural Information Processing Systems", "page_first": 194, "page_last": 200, "abstract": "", "full_text": "Dynamic Stochastic Synapses as Computational Units \n\nWolfgang Maass \nInstitute for Theoretical Computer Science \nTechnische Universitat Graz, \nA-8010 Graz, Austria. \nemail: maass@igi.tu-graz.ac.at \n\nAnthony M. Zador \nThe Salk Institute \nLa Jolla, CA 92037, USA \nemail: zador@salk.edu \n\nAbstract \n\nIn most neural network models, synapses are treated as static weights that change only on the slow time scales of learning. In fact, however, synapses are highly dynamic, and show use-dependent plasticity over a wide range of time scales. Moreover, synaptic transmission is an inherently stochastic process: a spike arriving at a presynaptic terminal triggers release of a vesicle of neurotransmitter from a release site with a probability that can be much less than one. Changes in release probability represent one of the main mechanisms by which synaptic efficacy is modulated in neural circuits. We propose and investigate a simple model for dynamic stochastic synapses that can easily be integrated into common models for neural computation. We show through computer simulations and rigorous theoretical analysis that this model for a dynamic stochastic synapse increases computational power in a nontrivial way. Our results may have implications for the processing of time-varying signals by both biological and artificial neural networks. \n\nA synapse S carries out computations on spike trains, more precisely on trains of spikes from the presynaptic neuron. Each spike from the presynaptic neuron may or may not trigger the release of a neurotransmitter-filled vesicle at the synapse. The probability of a vesicle release ranges from about 0.01 to almost 1. 
Furthermore this release probability is known to be strongly \"history dependent\" [Dobrunz and Stevens, 1997]. A spike causes an excitatory or inhibitory potential (EPSP or IPSP, respectively) in the postsynaptic neuron only when a vesicle is released. \n\nA spike train is represented as a sequence t of firing times, i.e. as an increasing sequence of numbers t_1 < t_2 < ... from R^+ := {z in R : z >= 0}. For each spike train t the output of synapse S consists of the sequence S(t) of those t_i in t on which vesicles are \"released\" by S, i.e. of those t_i in t which cause an excitatory or inhibitory postsynaptic potential (EPSP or IPSP, respectively). The map t -> S(t) may be viewed as a stochastic function that is computed by synapse S. Alternatively one can characterize the output S(t) of a synapse S through its release pattern q = q_1 q_2 ... in {R, F}^*, where R stands for release and F for failure of release. For each t_i in t one sets q_i = R if t_i in S(t), and q_i = F if t_i not in S(t). \n\n1 Basic model \n\nThe central equation in our dynamic synapse model gives the probability p_S(t_i) that the i-th spike in a presynaptic spike train t = (t_1, ..., t_k) triggers the release of a vesicle at time t_i at synapse S, \n\np_S(t_i) = 1 - exp(-C(t_i) * V(t_i)) .   (1) \n\nThe release probability is assumed to be nonzero only for t in t, so that releases occur only when a spike invades the presynaptic terminal (i.e. the spontaneous release probability is assumed to be zero). The functions C(t) >= 0 and V(t) >= 0 describe, respectively, the states of facilitation and depletion at the synapse at time t. \n\nThe dynamics of facilitation are given by \n\nC(t) = C0 + sum_{t_i < t} c(t - t_i) ,   (2) \n\nwhere c(s) models the response of C(t) to a preceding spike at time t - s < t; one may choose for c(s) a function with exponential decay c(s) = alpha * exp(-s / tau_C), where tau_C > 0 is the decay constant and alpha > 0 scales the magnitude of facilitation. \n\nThe dynamics of depletion are given by \n\nV(t) = max(0, V0 - sum_{t_i in S(t), t_i < t} v(t - t_i)) ,   (3) \n\nwhere V0 > 0. V(t) depends on the subset of those t_i in t with t_i < t on which vesicles were actually released by the synapse, i.e. t_i in S(t). The function v(s) models the response of V(t) to a preceding release of the same synapse at time t - s < t. 
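The dynamics above translate directly into a sampling procedure. The following Python sketch is our own illustration (not code from the paper): it assumes the exponential release law p = 1 - exp(-C*V) of Eq. (1) together with exponentially decaying kernels c(s) = alpha*exp(-s/tau_C) and v(s) = exp(-s/tau_V); the name simulate_synapse and the injectable rng are ours.

```python
import math
import random

def simulate_synapse(spikes, C0, V0, tau_C, tau_V, alpha, rng=random.random):
    """One stochastic run of the dynamic synapse model on a spike train.

    Facilitation: C(t) = C0 + sum over previous spikes of alpha*exp(-(t-ti)/tau_C)
    Depletion:    V(t) = max(0, V0 - sum over previous *releases* of exp(-(t-ti)/tau_V))
    Release prob: p(ti) = 1 - exp(-C(ti)*V(ti))
    Returns the release pattern as a string over {'R', 'F'}.
    """
    releases = []   # times of spikes that actually triggered a release
    pattern = []
    for i, t in enumerate(spikes):
        C = C0 + sum(alpha * math.exp(-(t - s) / tau_C) for s in spikes[:i])
        V = max(0.0, V0 - sum(math.exp(-(t - s) / tau_V) for s in releases))
        p = 1.0 - math.exp(-C * V)
        if rng() < p:
            releases.append(t)
            pattern.append('R')
        else:
            pattern.append('F')
    return ''.join(pattern)
```

Because release is stochastic, repeated runs on the same spike train generally produce different release patterns; passing a deterministic rng makes the sketch reproducible.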
Analogously as for c(s) one may choose for v(s) a function with exponential decay v(s) = exp(-s / tau_V), where tau_V > 0 is the decay constant. The function V models in an abstract way internal synaptic processes that support presynaptic depression, such as depletion of the pool of readily releasable vesicles. In a more specific synapse model one could interpret V0 as the maximal number of vesicles that can be stored in the readily releasable pool, and V(t) as the expected number of vesicles in the readily releasable pool at time t. \n\nIn summary, the model of synaptic dynamics presented here is described by five parameters: C0, V0, tau_C, tau_V and alpha. The dynamics of a synaptic computation and its internal variables C(t) and V(t) are indicated in Fig. 1. \n\nFor low release probabilities, Eq. 1 can be expanded to first order around r(t) := C(t) * V(t) = 0 to give \n\np_S(t_i) = C(t_i) * V(t_i) + O(r(t_i)^2) .   (4) \n\nSimilar expressions have been widely used to describe synaptic dynamics for multiple synapses [Magleby, 1987, Markram and Tsodyks, 1996, Varela et al., 1997]. \n\nIn our synapse model, we have assumed a standard exponential form for the decay of facilitation and depression (see e.g. [Magleby, 1987, Markram and Tsodyks, 1996, Varela et al., 1997, Dobrunz and Stevens, 1997]). We have further assumed a multiplicative interaction between facilitation and depletion. While this form has not been validated \n\n[Figure 1 here: a presynaptic spike train, the internal functions C(t) (facilitation) and V(t) (depression), the release probabilities p(t_i), and the resulting release pattern, e.g. F R F F R F F R R.] \n\nFigure 1: Synaptic computation on a spike train t, together with the temporal dynamics of the internal variables C and V of our model. 
Note that V(t) changes its value only when a presynaptic spike causes release. \n\nat single synapses, in the limit of low release probability (see Eq. 4), it agrees with the multiplicative term employed in [Varela et al., 1997] to describe the dynamics of multiple synapses. \n\nThe assumption that release at individual release sites of a synapse is binary, i.e. that each release site releases 0 or 1 (but not more than 1) vesicle when invaded by a spike, leads to the exponential form of Eq. 1 [Dobrunz and Stevens, 1997]. We emphasize the formal distinction between release site and synapse. A synapse might consist of several release sites in parallel, each of which has a dynamics similar to that of the stochastic \"synapse model\" we consider. \n\n2 Results \n\n2.1 Different \"Weights\" for the First and Second Spike in a Train \n\nWe start by investigating the range of different release probabilities p_S(t_1), p_S(t_2) that a synapse S can assume for the first two spikes in a given spike train. These release probabilities depend on t_2 - t_1 as well as on the values of the internal parameters C0, V0, tau_C, tau_V, alpha of the synapse S. Here we analyze the potential freedom of a synapse to choose values for p_S(t_1) and p_S(t_2). We show in Theorem 2.1 that the range of values for the release probabilities for the first two spikes is quite large, and that the entire attainable range can be reached through suitable choices of C0 and V0. \n\nTheorem 2.1 Let (t_1, t_2) be some arbitrary spike train consisting of two spikes, and let p_1, p_2 in (0, 1) be some arbitrary given numbers with p_2 > p_1 * (1 - p_1). Furthermore assume that arbitrary positive values are given for the parameters alpha, tau_C, tau_V of a synapse S. Then one can always find values for the two parameters C0 and V0 of the synapse S so that p_S(t_1) = p_1 and p_S(t_2) = p_2. \n\nFurthermore the condition p_2 > p_1 * (1 - p_1) is necessary in a strong sense. 
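For the first two spikes the release probabilities can be written in closed form, since p_S(t_2) is an average over the two possible outcomes of the first spike. The sketch below is our own numerical illustration of Theorem 2.1, assuming the release law p = 1 - exp(-C*V) and the kernels c(s) = alpha*exp(-s/tau_C), v(s) = exp(-s/tau_V); the helper name two_spike_probs is ours.

```python
import math
import random

def two_spike_probs(I, C0, V0, tau_C, tau_V, alpha):
    """Exact release probabilities (p1, p2) for two spikes separated by
    interval I, averaging p2 over the two outcomes of the first spike."""
    p1 = 1.0 - math.exp(-C0 * V0)                 # C(t1) = C0, V(t1) = V0
    C2 = C0 + alpha * math.exp(-I / tau_C)        # facilitation at t2
    V_rel = max(0.0, V0 - math.exp(-I / tau_V))   # depletion if spike 1 released
    p2 = p1 * (1.0 - math.exp(-C2 * V_rel)) \
         + (1.0 - p1) * (1.0 - math.exp(-C2 * V0))
    return p1, p2

# Random parameter sweep: every attainable pair satisfies the necessary
# condition p2 > p1 * (1 - p1) of Theorem 2.1.
random.seed(0)
for _ in range(1000):
    p1, p2 = two_spike_probs(I=random.uniform(0.1, 8.0),
                             C0=random.uniform(0.05, 3.0),
                             V0=random.uniform(0.05, 3.0),
                             tau_C=random.uniform(2.0, 10.0),
                             tau_V=random.uniform(2.0, 10.0),
                             alpha=random.uniform(0.1, 2.0))
    assert p2 > p1 * (1.0 - p1)
```

The lower bound is visible in the formula: since C2 > C0, the failure branch alone contributes (1 - p1) times a probability larger than p1.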
If p_2 <= p_1 * (1 - p_1) then no synapse S can achieve p_S(t_1) = p_1 and p_S(t_2) = p_2 for any spike train (t_1, t_2) and for any values of its parameters C0, V0, tau_C, tau_V, alpha. \n\nIf one associates the current sum of release probabilities of multiple synapses or release sites between two neurons u and v with the current value of the \"connection strength\" w_{u,v} between two neurons in a formal neural network model, then the preceding result points \n\nFigure 2: The dotted area indicates the range of pairs (p_1, p_2) of release probabilities for the first and second spike through which a synapse can move (for any given interspike interval) by varying its parameters C0 and V0. \n\nto a significant difference between the dynamics of computations in biological circuits and formal neural network models. Whereas in formal neural network models it is commonly assumed that the value of a synaptic weight stays fixed during a computation, the release probabilities of synapses in biological neural circuits may change on a fast time scale within a single computation. \n\n2.2 Release Patterns for the First Three Spikes \n\nIn this section we examine the variety of release patterns that a synapse can produce for spike trains t_1, t_2, t_3, ... with at least three spikes. We show not only that a synapse can make use of different parameter settings to produce different release patterns, but also that a synapse with a fixed parameter setting can respond quite differently to spike trains with different interspike intervals. Hence a synapse can serve as a pattern detector for temporal patterns in spike trains. \n\nIt turns out that the structure of the triples of release probabilities (p_S(t_1), p_S(t_2), p_S(t_3)) that a synapse can assume is substantially more complicated than for the first two spikes considered in the previous section. 
Therefore we focus here on the dependence of the most likely release pattern q in {R, F}^3 on the internal synaptic parameters and on the interspike intervals I_1 := t_2 - t_1 and I_2 := t_3 - t_2. This dependence is in fact quite complex, as indicated in Fig. 3. \n\n[Figure 3 here: two panels showing regions of most likely release patterns (RRR, RFR, FRF, FFF, RRF) as a function of the interspike intervals I_1 and I_2.] \n\nFigure 3: (A, left) Most likely release pattern of a synapse in dependence of the interspike intervals I_1 and I_2. The synaptic parameters are C0 = 1.5, V0 = 0.5, tau_C = 5, tau_V = 9, alpha = 0.7. (B, right) Release patterns for a synapse with other values of its parameters (C0 = 0.1, V0 = 1.8, tau_C = 15, tau_V = 30, alpha = 1). \n\nFig. 3A shows the most likely release pattern for each given pair of interspike intervals (I_1, I_2), given a particular fixed set of synaptic parameters. One can see that a synapse with fixed parameter values is likely to respond quite differently to spike trains with different interspike intervals. For example, even if one just considers spike trains with I_1 = I_2, one moves in Fig. 3A through 3 different release patterns that take their turn in becoming the most likely release pattern as I_1 varies. Similarly, if one only considers spike trains with a fixed time interval t_3 - t_1 = I_1 + I_2 = Delta, but with different positions of the second spike within this time interval of length Delta, one sees that the most likely release pattern is quite sensitive to the position of the second spike within this time interval Delta. Fig. 3B shows that a different set of synaptic parameters gives rise to a completely different assignment of release patterns. 
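For three spikes the most likely release pattern can be computed exactly, since there are only 2^3 = 8 possible outcome sequences to enumerate (each release outcome feeds back into V(t)). The sketch below is our own reconstruction under the assumed release law p = 1 - exp(-C*V) with kernels c(s) = alpha*exp(-s/tau_C) and v(s) = exp(-s/tau_V); the names pattern_probs and most_likely are ours.

```python
import math
from itertools import product

def pattern_probs(I1, I2, C0, V0, tau_C, tau_V, alpha):
    """Exact probability of each release pattern q in {R,F}^3 for spikes at
    t1 = 0, t2 = I1, t3 = I1 + I2, by enumerating all 8 outcome sequences."""
    spikes = [0.0, I1, I1 + I2]
    probs = {}
    for q in product('RF', repeat=3):
        prob, releases = 1.0, []
        for i, t in enumerate(spikes):
            C = C0 + sum(alpha * math.exp(-(t - s) / tau_C) for s in spikes[:i])
            V = max(0.0, V0 - sum(math.exp(-(t - s) / tau_V) for s in releases))
            p = 1.0 - math.exp(-C * V)
            if q[i] == 'R':
                prob *= p
                releases.append(t)      # a release depletes V for later spikes
            else:
                prob *= 1.0 - p
        probs[''.join(q)] = prob
    return probs

def most_likely(I1, I2, **params):
    """Release pattern with the highest probability for intervals (I1, I2)."""
    probs = pattern_probs(I1, I2, **params)
    return max(probs, key=probs.get)
```

With the Fig. 3A parameters (C0 = 1.5, V0 = 0.5, tau_C = 5, tau_V = 9, alpha = 0.7), sweeping along the diagonal I_1 = I_2 moves the argmax through several distinct patterns, in line with the figure.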
\n\nWe show in the next Theorem that the boundaries between the zones in these figures \nare \"plastic\": by changing the values of Co, Vo, Ct the synapse can move the zone for most \nof the release patterns q to any given point (11,12 )' This result provides another example \nfor a new type of synaptic plasticity that can no longer be described in terms of a decrease \nor increase of the synaptic \"weight\". \n\nTheorem 2.2 Assume that an arbitrary number p E (0,1) and an arbitrary pattern (11 ,12 ) \nof interspike intervals is given. Furthermore assume that arbitrary fixed pOlJitive val;.;.p.s are \ngiven for the parameters rc and TV of a synapse S. Then for any pattern q E {R, FP except \nRRF, FFR one can assign values to the other parameters Ct, Co, Vo of this-synapse S so that \nthe probability of release pattern q for a spike train with interspike intervals 11 ,12 becomes \nlarger than p. \n\n-\n\nIt is shown in the full version oftbis paper [Maass and Zador, 19971 that it is not possible \nto make the release patterns RRF and FFR arbitrarily likely for any given spike train with \ninterspike intervals (11 ,12 ) \u2022 \n\n2.3 Computing with Firing Rates \n\nSo far we have considered the effect of short trains of two or three presynaptic spikes on \nsynaptic release probability. Our next result (cf. Fig.5) shows that also two longer Poisson \nspike trains that represent the same firing rate can produce quite different numers of synaptic \nreleases, depending on the synaptic parameters. To emphasize that this is due to the pattern \nof interspike intervals, and not simply to the number of spikes, we compared the outputs in \nresponse to two Poisson spike trains A and B with the same number (lO)\u00b7of spikes. These \nexamples indicate that even in the context of rate coding, synaptic efficacy may not be well \ndescribed in terms of a single scalar parameter w. \n\n2.4 Burst Detection \n\nHere we show that the computational power of a spiking (e.g. 
integrate-and-fire) neuron with stochastic dynamic synapses is strictly larger than that of a spiking neuron with traditional \"static\" synapses (cf. [Lisman, 1997]). Let T be some given time window, and consider the computational task of detecting whether at least one of n presynaptic neurons a_1, ..., a_n fires at least twice during T (\"burst detection\"). To make this task computationally feasible we assume that none of the neurons a_1, ..., a_n fires outside of this time window. \n\nTheorem 2.3 A spiking neuron v with dynamic stochastic synapses can solve this burst detection task (with arbitrarily high reliability). On the other hand no spiking neuron with static synapses can solve this task (for any assignment of \"weights\" to its synapses).^1 \n\n^1 We assume here that neuronal transmission delays differ by less than (n - 1) * T, where by transmission delay we refer to the temporal delay between the firing of the presynaptic neuron and its effect on the postsynaptic target. \n\n[Figure 4 here: four panels of per-spike release probabilities for two Poisson spike trains and two synapses.] \n\nFigure 4: Release probabilities of two synapses for two Poisson spike trains A and B with 10 spikes each. The release probabilities for the first synapse are shown on the left hand side, and for the second synapse on the right hand side. For both synapses the release probabilities for spike train A are shown at the top, and for spike train B at the bottom. 
The first synapse has for spike train A a 22% higher average release probability, whereas the second synapse has for spike train B a 16% higher average release probability. Note that the fourth spike in spike train B has for the first synapse a release probability of nearly zero and so is not visible. \n\n2.5 Translating Interval Coding into Population Coding \n\nAssume that information is encoded in the length I of the interspike interval between the times t_1 and t_2 when a certain neuron v fires, and that different motor responses need to be initiated depending on whether I < a or I > a, where a is some given parameter (cf. [Hopfield, 1995]). For that purpose it would be useful to translate the information encoded in the interspike interval I into the firing activity of populations of neurons (\"population coding\"). Fig. 5 illustrates a simple mechanism for that task based on dynamic synapses. The synaptic parameters are chosen so that facilitation dominates (i.e., C0 should be small and alpha large) at synapses between neuron v and the postsynaptic population of neurons. The release probability for the first spike is then close to 0, whereas the release probability for the second spike is fairly large if I < a and significantly smaller if I is substantially larger than a. If the resulting firing activity of the postsynaptic neurons is positively correlated with the total number of releases of these synapses, then their population response is also positively correlated with the length of the interspike interval I. \n\n[Figure 5 here: presynaptic spikes at interval I; synaptic response FR if I < a, FF if I > a; resulting activation of postsynaptic neurons 1 if I < a, 0 if I > a.] \n\nFigure 5: A mechanism for translating temporal coding into population coding. \n\n3 Discussion \n\nWe have explored computational implications of a dynamic stochastic synapse model. 
Our model incorporates several features of biological synapses usually omitted in the connections or weights conventionally used in artificial neural network models. Our main result is that a neural circuit in which connections are dynamic has fundamentally greater power than one in which connections are static. We refer to [Maass and Zador, 1997] for details. Our results may have implications for computation in both biological and artificial neural networks, and particularly for the processing of signals with interesting temporal structure. \n\nSeveral groups have recently proposed a computational role for one form of use-dependent short term synaptic plasticity [Abbott et al., 1997, Tsodyks and Markram, 1997]. They showed that, under the experimental conditions tested, synaptic depression (of a form analogous to V(t) in our Eq. (3)) can implement a form of gain control in which the steady-state synaptic output is independent of the input firing rate over a wide range of firing rates. We have adopted a more general approach in which, rather than focusing on a particular role for short term plasticity, we allow the dynamic synapse parameters to vary. This approach is analogous to that adopted in the study of artificial neural networks, in which few if any constraints are placed on the connections between units. In our more general framework, standard neural network tasks such as supervised and unsupervised learning can be formulated (see also [Liaw and Berger, 1996]). Indeed, a backpropagation-like gradient descent algorithm can be used to adjust the parameters of a network connected by dynamic synapses (Zador and Maass, in preparation). The advantages of dynamic synapses may become most apparent in the processing of time-varying signals. \n\nReferences \n\n[Abbott et al., 1997] Abbott, L., Varela, J., Sen, K., and Nelson, S. B. (1997). Synaptic depression and cortical gain control. 
Science, 275:220-4. \n\n[Dobrunz and Stevens, 1997] Dobrunz, L. and Stevens, C. (1997). Heterogeneity of release probability, facilitation and depletion at central synapses. Neuron, 18:995-1008. \n\n[Hopfield, 1995] Hopfield, J. (1995). Pattern recognition computation using action potential timing for stimulus representation. Nature, 376:33-36. \n\n[Liaw and Berger, 1996] Liaw, J.-S. and Berger, T. (1996). Dynamic synapse: A new concept of neural representation and computation. Hippocampus, 6:591-600. \n\n[Lisman, 1997] Lisman, J. (1997). Bursts as a unit of neural information: making unreliable synapses reliable. TINS, 20:38-43. \n\n[Maass and Zador, 1997] Maass, W. and Zador, A. (1997). Dynamic stochastic synapses as computational units. http://www.sloan.salk.edu/-zador/publications.html. \n\n[Magleby, 1987] Magleby, K. (1987). Short term synaptic plasticity. In Edelman, G. M., Gall, W. E., and Cowan, W. M., editors, Synaptic function. Wiley, New York. \n\n[Markram and Tsodyks, 1996] Markram, H. and Tsodyks, M. (1996). Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature, 382:807-10. \n\n[Stevens and Wang, 1995] Stevens, C. and Wang, Y. (1995). Facilitation and depression at single central synapses. Neuron, 14:795-802. \n\n[Tsodyks and Markram, 1997] Tsodyks, M. and Markram, H. (1997). The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc. Natl. Acad. Sci., 94:719-23. \n\n[Varela et al., 1997] Varela, J. A., Sen, K., Gibson, J., Fost, J., Abbott, L. F., and Nelson, S. B. (1997). A quantitative description of short-term plasticity at excitatory synapses in layer 2/3 of rat primary visual cortex. J. Neurosci, 17:7926-7940. \n", "award": [], "sourceid": 1338, "authors": [{"given_name": "Wolfgang", "family_name": "Maass", "institution": null}, {"given_name": "Anthony", "family_name": "Zador", "institution": null}]}