{"title": "Learning in Spiking Neural Assemblies", "book": "Advances in Neural Information Processing Systems", "page_first": 165, "page_last": 172, "abstract": null, "full_text": "Learning in Spiking Neural Assemblies\n\nDavid Barber\n\nInstitute for Adaptive and Neural Computation\n\nEdinburgh University\n\n5 Forrest Hill, Edinburgh, EH1 2QL, U.K.\n\ndbarber@anc.ed.ac.uk\n\nAbstract\n\nWe consider a statistical framework for learning in a class of net-\nworks of spiking neurons. Our aim is to show how optimal local\nlearning rules can be readily derived once the neural dynamics and\ndesired functionality of the neural assembly have been speci\ufb02ed,\nin contrast to other models which assume (sub-optimal) learning\nrules. Within this framework we derive local rules for learning tem-\nporal sequences in a model of spiking neurons and demonstrate its\nsuperior performance to correlation (Hebbian) based approaches.\nWe further show how to include mechanisms such as synaptic de-\npression and outline how the framework is readily extensible to\nlearning in networks of highly complex spiking neurons. A stochas-\ntic quantal vesicle release mechanism is considered and implications\non the complexity of learning discussed.\n\n1\n\nIntroduction\n\nModels of individual neurons range from simple rate based approaches to spik-\ning models and further detailed descriptions of protein dynamics within the\ncell[9, 10, 13, 6, 12]. As the experimental search for the neural correlates of mem-\nory increasingly consider multi-cell observations, theoretical models of distributed\nmemory become more relevant[12]. Despite increasing complexity of neural de-\nscription, many theoretical models of learning are based on correlation Hebbian\nassumptions { that is, changes in synaptic e\u2013cacy are related to correlations of pre-\nand post-synaptic \ufb02ring[9, 10, 14]. 
Whilst such learning rules have some theoretical justification in toy neural models, they are not necessarily optimal in more complex cases in which the dynamics of the cell contains historical information, such as that modelled by synaptic facilitation and depression, for example [1]. It is our belief that appropriate synaptic learning rules should appear as a natural consequence of the neurodynamical system and some desired functionality, such as the storage of temporal sequences.\n\nIt seems clear that, as the brain operates dynamically through time, relevant cognitive processes are plausibly represented in vivo as temporal sequences of spikes in restricted neural assemblies. This paradigm has heralded a new research front in dynamic systems of spiking neurons [10]. However, to date, many learning algorithms assume Hebbian learning and assess its performance in a given model [8, 6, 14].\n\nFigure 1: (a) A first order Dynamic Bayesian Network over hidden states h(1), h(2), ..., h(t) (deterministic, represented by diamonds) and visible states v(1), v(2), ..., v(t). (b) The basic simplification for neural firing: highly complex (deterministic) internal dynamics within each of neurons i and j, with stochastic firing transmitted along the axon.\n\nRecent work [13] has taken into account some of the complexities in the synaptic dynamics, including facilitation and depression, and derived appropriate learning rules. However, these are rate-based models, and do not capture the detailed stochastic firing effects of individual neurons. Other recent work [4] has used experimental observations to modify Hebbian learning rules, making heuristic rules consistent with empirical observations [11]. However, as more and more details of cellular processes are experimentally discovered, it would be satisfying to see learning mechanisms as naturally derivable consequences of the underlying cellular constraints. 
This paper is a modest step in this direction, in which we outline a framework for learning in spiking systems which can handle highly complex cellular processes. The major simplifying assumption is that internal cellular processes are deterministic, whilst communication between cells can be stochastic. The central aim of this paper is to show that optimal learning algorithms are derivable consequences of statistical learning criteria. Quantitative agreement with empirical data would require further realistic constraints on the model parameters, but there is no principled hindrance to this within our framework.\n\n2 A Framework for Learning\n\nA neural assembly of V neurons is represented by a vector v(t) whose components vi(t), i = 1, ..., V represent the state of neuron i at time t. Throughout we assume that vi(t) ∈ {0, 1}, for which vi(t) = 1 means that neuron i spikes at time t, and vi(t) = 0 denotes no spike. The shape of an action potential is therefore assumed not to carry any information. This constraint of a binary state firing representation could be readily relaxed, without great inconvenience, to multiple or even continuous states.\n\nOur stated goal is to derive optimal learning rules for an assumed desired functionality and a given neural dynamics. To make this more concrete, we assume that the task is sequence learning (although generalisations to other forms of learning, including input-output type dynamics, are readily achievable [2]). We make the important assumption that the neural assembly has a sequence of states V = {v(1), v(2), ..., v(t = T)} that it wishes to store (although how such internal representations are known is in itself a fundamental issue that ultimately needs to be addressed). In addition to the neural firing states, V, we assume that there are hidden/latent variables which influence the dynamics of the assembly, but which cannot be directly observed. 
These might include protein levels within a cell, for example. These variables may also represent environmental conditions external to the cell and common to groups of cells. We represent a sequence of hidden variables by H = {h(1), h(2), ..., h(T)}.\n\nThe general form of our model is depicted in fig(1)[a] and comprises two components:\n\n1. Neural Conditional Independence:\n\np(v(t+1)|v(t), h(t)) = Π_{i=1}^{V} p(vi(t+1)|v(t), h(t); θv)   (1)\n\nThis distribution specifies that all the information determining the probability that neuron i fires at time t+1 is contained in the immediate past firing of the neural assembly v(t) and the hidden states h(t). The distribution is parameterised by θv, which can be learned from a training sequence (see below). Here time simply discretises the dynamics. In principle, a unit of time in our model may represent a fraction of a millisecond.\n\n2. Deterministic Hidden Variable Updating:\n\nh(t+1) = f(v(t+1), v(t), h(t); θh)   (2)\n\nThis equation specifies that the next hidden state of the assembly h(t+1) depends on a vector function f of the states v(t+1), v(t), h(t). The function f is parameterised by θh, which is to be learned.\n\nThis model is a special case of Dynamic Bayesian Networks, in which the hidden variables are deterministic functions of their parental states, and is treated in more generality in [2]. The model assumptions are depicted in fig(1)[b], in which potentially complex deterministic interactions within a neuron can be considered, with lossy transmission of this information between neurons in the form of stochastic firing. Whilst the restriction to deterministic hidden dynamics appears severe, it has the critical advantage that learning in such models can be achieved by deterministic forward propagation through time. 
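The two components (1) and (2) can be sketched in a few lines of simulation. The following is a minimal illustration, not the paper's code: the particular membrane potential, the depression-like hidden update and all parameter values are hypothetical stand-ins for θv and θh.

```python
import numpy as np

# Sketch of the generative model: stochastic firing (equation (1)) driven by
# a deterministic hidden state update (equation (2)). The choices of
# membrane potential and of f below are illustrative assumptions only.

rng = np.random.default_rng(0)
V = 10                          # number of neurons
W = rng.normal(0.0, 1.0, (V, V))  # synaptic efficacies (part of theta_v)
tau = 5.0                       # hidden recovery time (part of theta_h), assumed

def step(v, h):
    # Equation (1): each neuron fires independently given v(t) and h(t).
    a = W @ (h * v)                        # membrane potential with hidden factors
    p = 1.0 / (1.0 + np.exp(-a))           # sigmoid firing probability
    v_next = (rng.random(V) < p).astype(int)
    # Equation (2): deterministic hidden update, here a depression-like decay.
    h_next = h + (1.0 - h) / tau - 0.5 * h * v
    return v_next, h_next

v, h = rng.integers(0, 2, V), np.ones(V)
for t in range(20):
    v, h = step(v, h)
```

Because h is a deterministic function of the spike history, the quantities needed for learning can be propagated forward in the same loop, with no backward pass.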
This is not the case in more general Dynamic Bayesian Networks, where an integral part of the learning procedure involves, in principle, both forward and backward temporal passes (non-causal learning), and where computational difficulties impose severe restrictions on the complexity of the hidden unit dynamics [7, 2]. A central ingredient of our approach is that it deals with individual spike events, and not just spiking-rates as used in other studies [13].\n\nThe key mechanism for learning in statistical models is maximising the log-likelihood L(θv, θh|V) of a sequence V,\n\nL(θv, θh|V) = log p(v(1)|θv) + Σ_{t=1}^{T−1} log p(v(t+1)|v(t), h(t), θv)   (3)\n\nwhere the hidden unit values are calculated recursively using (2). Training multiple sequences V^μ, μ = 1, ..., P is straightforward using the log-likelihood Σ_μ L(θv, θh|V^μ). To maximise the log-likelihood, it is useful to evaluate the derivatives with respect to the model parameters. These can be calculated as follows:\n\ndL/dθv = ∂ log p(v(1)|θv)/∂θv + Σ_{t=1}^{T−1} ∂ log p(v(t+1)|v(t), h(t), θv)/∂θv   (4)\n\ndL/dθh = Σ_{t=1}^{T−1} [∂ log p(v(t+1)|v(t), h(t), θv)/∂h(t)] dh(t)/dθh   (5)\n\ndh(t)/dθh = ∂f(t)/∂θh + [∂f(t)/∂h(t−1)] dh(t−1)/dθh   (6)\n\nwhere f(t) ≡ f(v(t), v(t−1), h(t−1); θh). Hence:\n\n1. Learning can be carried out by forward propagation through time. In a biological system it is natural to use gradient ascent training θ ← θ + η dL/dθ, where the learning rate η is chosen small enough to ensure convergence to a local optimum of the likelihood. 
This batch training procedure is readily convertible to an online form if needed.\n\n2. Highly complex functions f and tables p(v(t+1)|v(t), h(t)) may be used.\n\nIn the remaining sections, we apply this framework to some simple models and show how optimal learning rules can be derived for old and new theoretical models.\n\n2.1 Stochastically Spiking Neurons\n\nWe assume that neuron i fires depending on its membrane potential ai(t) through p(vi(t+1) = 1|v(t), h(t)) = p(vi(t+1) = 1|ai(t)). (More complex dependencies on environmental variables are also clearly possible.) To be specific, we take throughout p(vi(t+1) = 1|ai(t)) = σ(ai(t)), where σ(x) = 1/(1 + e^{−x}). The probability of the quiescent state is one minus this probability, and we can conveniently write\n\np(vi(t+1)|ai(t)) = σ((2vi(t+1) − 1) ai(t))   (7)\n\nwhich follows from 1 − σ(x) = σ(−x). The choice of the sigmoid function σ(x) is not fundamental and is simply analytically convenient. The log-likelihood of a sequence of visible states V is\n\nL = Σ_{t=1}^{T−1} Σ_{i=1}^{V} log σ((2vi(t+1) − 1) ai(t))   (8)\n\nand the (online) gradient of the log-likelihood is then\n\ndL(t+1)/dwij = (vi(t+1) − σ(ai(t))) dai(t)/dwij   (9)\n\nwhere we used the fact that vi ∈ {0, 1}. The batch gradient is simply given by summing the above online gradient over time. Here wij are parameters of the membrane potential (see below). We take (9) as common to the remainder, in which we model the membrane potential ai(t) with increasing complexity.\n\n2.2 A simple model of the membrane potential\n\nPerhaps the simplest membrane potential model is the Hopfield potential\n\nai(t) ≡ Σ_{j=1}^{V} wij vj(t) − bi   (10)\n\nwhere wij characterises the synaptic efficacy from neuron j (pre-synaptic) to neuron i (post-synaptic), and bi is a threshold. 
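The sigmoid parameterisation (7) and the online gradient (9) are easy to check numerically. The sketch below (illustrative values, not the paper's code) verifies the identity behind (7) and evaluates (9) for the Hopfield potential (10), for which dai(t)/dwij = vj(t).

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

# Equation (7) packs p(v=1) = sigma(a) and p(v=0) = 1 - sigma(a) into one
# expression via the identity 1 - sigma(x) = sigma(-x):
a = 1.3
assert np.isclose(sigma((2 * 1 - 1) * a), sigma(a))        # v_i(t+1) = 1
assert np.isclose(sigma((2 * 0 - 1) * a), 1.0 - sigma(a))  # v_i(t+1) = 0

rng = np.random.default_rng(1)
V = 4
W = rng.normal(0.0, 0.1, (V, V))   # synaptic efficacies w_ij (hypothetical)
b = np.zeros(V)                    # thresholds b_i, fixed
v_t = np.array([1, 0, 1, 0])       # v(t)
v_t1 = np.array([0, 1, 1, 0])      # v(t+1), the desired next state
a_t = W @ v_t - b                  # Hopfield potential, equation (10)
dL_dW = np.outer(v_t1 - sigma(a_t), v_t)   # online gradient, equation (9)
```

Note that the gradient is local: the change to wij involves only the pre-synaptic firing vj(t) and the post-synaptic prediction error vi(t+1) − σ(ai(t)).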
The model is depicted in fig(2)[a].\n\nFigure 2: (a) The graph for a simple Hopfield membrane potential ai(t), shown only for a single membrane potential. The potential is a deterministic function of the network state v(t), and the (collection of) membrane potentials influences the next state of the network. (b) Dynamic synapses correspond to hidden variables xi(t) which influence the membrane potential and update themselves, depending on the firing of the network. Only one membrane potential and one synaptic factor is shown.\n\nApplying our framework to this model to learn a temporal sequence V by adjustment of the parameters wij (the bi are fixed for simplicity), we obtain the (batch) learning rule\n\nwij^{new} = wij + η dL/dwij,   dL/dwij = Σ_{t=1}^{T−1} (vi(t+1) − σ(ai(t))) vj(t)   (11)\n\nwhere the learning rate η is chosen empirically to be sufficiently small to ensure convergence. Note that in the above rule vi(t+1) refers to the desired known training pattern, and σ(ai(t)) can be interpreted as the average instantaneous firing rate of neuron i at time t+1 when its inputs are clamped to the known desired values of the network at time t. This is a form of Delta Rule (or Rescorla-Wagner) learning [12]. The above learning rule can be seen as a modification of the standard temporal Hebb learning rule wij = Σ_{t=1}^{T−1} vi(t+1) vj(t). However, the rule (11) can store a sequence of V linearly independent patterns, much greater than the 0.26V capacity of the Hebb rule [5]. 
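As a concrete check of the batch rule (11), the following sketch (not the paper's simulation; the network size, learning rate and one-hot training sequence are choices made here so that the patterns are linearly independent) learns a short sequence and then recalls it by iterating the learned model from its first state, taking the most probable state (ai(t) > 0) at each step.

```python
import numpy as np

# Batch delta-rule learning (11) for the Hopfield potential (10), with b_i = 0.

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

V, T = 6, 5
seq = np.zeros((T, V))
seq[np.arange(T), np.arange(T)] = 1.0   # v(1) = e_1, ..., v(5) = e_5

W = np.zeros((V, V))
eta = 0.5                               # learning rate, chosen empirically
for it in range(2000):
    dW = np.zeros((V, V))
    for t in range(T - 1):
        a = W @ seq[t]                  # equation (10), thresholds fixed to zero
        dW += np.outer(seq[t + 1] - sigma(a), seq[t])   # equation (11)
    W += eta * dW

# Cued retrieval: clamp the first state, then iterate the learned model,
# taking the most likely next state (sigma(a) > 0.5, i.e. a > 0) each step.
v = seq[0]
recalled = [v]
for t in range(T - 1):
    v = (W @ v > 0).astype(float)
    recalled.append(v)
```

Because the one-hot patterns are linearly independent, the delta rule can drive the prediction errors to zero, and the full sequence is recalled from its first state.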
Biologically, the rule (11) could be implemented by measuring the difference between the desired training state vi(t+1) of neuron i and the instantaneous firing rate of neuron i when all other neurons j ≠ i are clamped in the training states vj(t). Simulations with this model and comparisons with other training approaches are given in [3].\n\n3 Dynamic Synapses\n\nIn more realistic synaptic models, neurotransmitter generation depends on a finite rate of cell subcomponent production, and the quantity of vesicles released is affected by the history of firing [1]. The depression mechanism affects the impact of spiking on the membrane potential response by moderating terms in the membrane potential ai(t) of the form Σj wij vj(t) to Σj wij xj(t) vj(t), for depression factors xj(t) ∈ [0, 1]. A simple dynamics for these depression factors is [15, 14]\n\nxj(t+1) = xj(t) + δt ((1 − xj(t))/τ − U xj(t) vj(t))   (12)\n\nFigure 3: Learning with depression: U = 0.5, τ = 5, δt = 1, η = 0.25. Panels (neuron number against t): Original, Reconstruction, x values, Hebb Reconstruction.\n\nwhere δt, τ and U represent the time scale, the recovery time and the spiking effect parameter respectively. Note that these depression factor dynamics are exactly of the form of hidden variables that are not observed, consistent with our framework in section (2), see fig(2)[b]. Whilst some previous models have considered learning rules for dynamic synapses using spiking-rate models [13, 15], we consider learning in a stochastic spiking model. 
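Equation (12) is easy to explore numerically. Under sustained firing the factor settles where recovery balances depletion, (1 − x)/τ = U x, i.e. x = 1/(1 + Uτ); in silence it recovers towards 1. A minimal sketch with the parameter values of figure 3:

```python
import numpy as np

# Depression factor dynamics, equation (12), with U = 0.5, tau = 5, dt = 1.
U, tau, dt = 0.5, 5.0, 1.0

def x_step(x, v):
    return x + dt * ((1.0 - x) / tau - U * x * v)

x = 1.0
for _ in range(200):          # sustained pre-synaptic firing, v_j(t) = 1
    x = x_step(x, 1)
x_depressed = x               # settles at 1/(1 + U*tau)

for _ in range(200):          # silence, v_j(t) = 0: the factor recovers
    x = x_step(x, 0)
x_recovered = x               # returns towards 1
```

The fixed point 1/(1 + Uτ) ≈ 0.29 for these values shows how strongly a tonically active synapse is attenuated relative to a quiet one.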
Also, in contrast to a previous study which assumes that the synaptic dynamics modulates baseline Hebbian weights [14], we show below that it is straightforward to include dynamic synapses in a principled way using our learning framework. Since the depression dynamics in this model do not explicitly depend on wij, the gradients are simple to calculate. Note that synaptic facilitation is also straightforward to include in principle [15].\n\nFor the Hopfield potential, the learning dynamics is simply given by equations (9, 12), with dai(t)/dwij = xj(t) vj(t). In fig(3) we demonstrate learning a random temporal sequence of 20 time steps for an assembly of 50 neurons. After learning wij with our rule, we initialised the trained network in the first state of the training sequence. The remaining states of the sequence were then correctly recalled by iteration of the learned model. The corresponding generated factors xi(t) are also plotted. For comparison, we plot the results of using the dynamics having set the wij using a temporal Hebb rule. The poor performance of the correlation-based Hebb rule demonstrates the necessity, in general, of coupling a dynamical system with an appropriate learning mechanism which, in this case at least, is readily available.\n\n4 Leaky Integrate and Fire models\n\nLeaky integrate and fire models move a step towards biological realism: the membrane potential increments if it receives an excitatory stimulus (wij > 0), and decrements if it receives an inhibitory stimulus (wij < 0). A model that incorporates such effects is\n\nai(t) = (α ai(t−1) + Σj wij vj(t) + θrest (1 − α)) (1 − vi(t−1)) + vi(t−1) θfired   (13)\n\nSince vi ∈ {0, 1}, if neuron i fires at time t−1 the potential is reset to θfired at time t. 
Similarly, with no synaptic input, the potential equilibrates to θrest with time constant −1/log α. Here α ∈ [0, 1] represents the membrane leakage characteristic of this class of models.\n\nFigure 4: Stochastic vesicle release: membrane potentials a(t), release variables r(t) and firing states v(t) through time (synaptic dynamic factors not indicated).\n\nDespite the apparent increase in complexity of the membrane potential over the simple Hopfield case, deriving appropriate learning dynamics for this new system is straightforward since, as before, the hidden variables (here the membrane potentials) update in a deterministic fashion. The membrane derivatives are\n\ndai(t)/dwij = (1 − vi(t−1)) (α dai(t−1)/dwij + vj(t))   (14)\n\nBy initialising the derivative dai(t=1)/dwij = 0, equations (9, 13, 14) define a first order recursion for the gradient, which can be used to adapt wij in the usual manner wij ← wij + η dL/dwij. We could also apply synaptic dynamics to this case by replacing the term vj(t) in (14) by xj(t) vj(t).\n\nA direct consequence of the above learning rule (explored in detail elsewhere) is a spike time dependent learning window in qualitative agreement with experimental results [11], a pleasing corollary of our approach, and consistent with our belief that such observed plasticity has at its core a simple learning rule.\n\n5 A Stochastic Vesicle Release Model\n\nNeurotransmitter release can be highly stochastic, and it would be desirable to include this mechanism in our models. 
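Before adding stochastic release, it is worth verifying the deterministic machinery of section 4 numerically. The sketch below (parameter values are illustrative assumptions, not taken from the paper) implements the membrane recursion (13) and the derivative recursion (14), and checks the resulting gradient (9) against a finite difference of the log-likelihood.

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
V, T = 5, 12
alpha, th_rest, th_fired = 0.8, -1.0, -4.0       # assumed parameter values
v = rng.integers(0, 2, (T, V)).astype(float)     # a fixed training spike sequence

def forward(W):
    """Membrane recursion (13); returns potentials a(1..T-1) and log-likelihood (8)."""
    a = np.full(V, th_rest)
    L, A = 0.0, []
    for t in range(1, T):
        a = ((alpha * a + W @ v[t] + th_rest * (1 - alpha)) * (1 - v[t - 1])
             + v[t - 1] * th_fired)
        A.append(a)
        if t < T - 1:
            L += np.sum(np.log(sigma((2 * v[t + 1] - 1) * a)))
    return A, L

def grad(W):
    """dL/dW via the online gradient (9) and derivative recursion (14), forward in time."""
    A, _ = forward(W)
    G = np.zeros((V, V))       # G[i, j] = da_i(t)/dw_ij, initialised to zero
    dL = np.zeros((V, V))
    for t in range(1, T - 1):
        G = (1 - v[t - 1])[:, None] * (alpha * G + v[t][None, :])   # equation (14)
        dL += (v[t + 1] - sigma(A[t - 1]))[:, None] * G             # equation (9)
    return dL
```

The finite-difference check below confirms that a single forward pass suffices to obtain exact gradients, with no backward pass through time.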
A simple model of quantal release of transmitter from pre-synaptic neuron j to post-synaptic neuron i is to release a vesicle with probability\n\np(rij(t) = 1|xij(t), vj(t)) = xij(t) vj(t) Rij   (15)\n\nwhere, in analogy with (12),\n\nxij(t+1) = xij(t) + δt ((1 − xij(t))/τ − U xij(t) rij(t))   (16)\n\nand Rij ∈ [0, 1] is a plastic release parameter. The membrane potential is then governed in integrate and fire models by\n\nai(t) = (α ai(t−1) + Σj wij rij(t) + θrest (1 − α)) (1 − vi(t−1)) + vi(t−1) θfired   (17)\n\nThis model is schematically depicted in fig(4). Since the unobserved stochastic release variables rij(t) are hidden, this model does not have fully deterministic hidden dynamics. In general, learning in such models is more complex and would require both forward and backward temporal propagations, undoubtedly including graphical model approximation techniques [7].\n\n6 Discussion\n\nLeaving aside the issue of stochastic vesicle release, a further step in the evolution of membrane complexity is to use Hodgkin-Huxley type dynamics [9]. Whilst this might appear complex, in principle it is straightforward, since the membrane dynamics can be represented by deterministic hidden dynamics. Explicitly summing out the hidden variables would then give a representation of Hodgkin-Huxley dynamics analogous to that of the Spike Response Model (see Gerstner in [10]).\n\nOptimal learning in assemblies of stochastic spiking neurons can be achieved using maximum likelihood. This is straightforward in cases for which the latent dynamics is deterministic. It is worth emphasising, therefore, that almost arbitrarily complex spatio-temporal patterns may potentially be learned (and generated under cued retrieval) for very complex neural dynamics. 
Whilst this framework cannot deal with arbitrarily complex stochastic interactions, it can deal with learning in a class of interesting neural models, and concepts from graphical models can be useful in this area. A more general stochastic framework would need to examine approximate causal learning rules which, despite not being fully optimal, may perform well. Finally, our assumption that the brain operates optimally (albeit within severe constraints) enables us to drop other assumptions about unobserved processes, and leads to models with potentially more predictive power.\n\nReferences\n\n[1] L.F. Abbott, J.A. Varela, K. Sen, and S.B. Nelson, Synaptic depression and cortical gain control, Science 275 (1997), 220-223.\n\n[2] D. Barber, Dynamic Bayesian Networks with Deterministic Latent Tables, Neural Information Processing Systems (2003).\n\n[3] D. Barber and F. Agakov, Correlated sequence learning in a network of spiking neurons using maximum likelihood, Tech. Report EDI-INF-RR-0149, School of Informatics, 5 Forrest Hill, Edinburgh, UK, 2002.\n\n[4] C. Chrisodoulou, G. Bugmann, and T.G. Clarkson, A Spiking Neuron Model: Applications and Learning, Neural Networks 15 (2002), 891-908.\n\n[5] A. Düring, A.C.C. Coolen, and D. Sherrington, Phase diagram and storage capacity of sequence processing neural networks, Journal of Physics A 31 (1998), 8607-8621.\n\n[6] W. Gerstner, R. Ritz, and J.L. van Hemmen, Why Spikes? Hebbian Learning and retrieval of time-resolved excitation patterns, Biological Cybernetics 69 (1993), 503-515.\n\n[7] M.I. Jordan, Learning in Graphical Models, MIT Press, 1998.\n\n[8] R. Kempter, W. Gerstner, and J.L. van Hemmen, Hebbian learning and spiking neurons, Physical Review E 59 (1999), 4498-4514.\n\n[9] C. Koch, Biophysics of Computation, Oxford University Press, 1998.\n\n[10] W. Maass and C. Bishop, Pulsed Neural Networks, MIT Press, 2001.\n\n[11] H. Markram, J. Lubke, M. Frotscher, and B. 
Sakmann, Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs, Science 275 (1997), 213-215.\n\n[12] S.J. Martin, P.D. Grimwood, and R.G.M. Morris, Synaptic Plasticity and Memory: An Evaluation of the Hypothesis, Annual Review of Neuroscience 23 (2000), 649-711.\n\n[13] T. Natschläger, W. Maass, and A. Zador, Efficient Temporal Processing with Biologically Realistic Dynamic Synapses, Tech Report (2002).\n\n[14] L. Pantic, J.T. Joaquin, H.J. Kappen, and S.C.A.M. Gielen, Associative Memory with Dynamic Synapses, Neural Computation 14 (2002), 2903-2923.\n\n[15] M. Tsodyks, K. Pawelzik, and H. Markram, Neural Networks with Dynamic Synapses, Neural Computation 10 (1998), 821-835.\n", "award": [], "sourceid": 2330, "authors": [{"given_name": "David", "family_name": "Barber", "institution": null}]}