{"title": "Attractor Dynamics with Synaptic Depression", "book": "Advances in Neural Information Processing Systems", "page_first": 640, "page_last": 648, "abstract": "Neuronal connection weights exhibit short-term depression (STD). The present study investigates the impact of STD on the dynamics of a continuous attractor neural network (CANN) and its potential roles in neural information processing. We find that the network with STD can generate both static and traveling bumps, and that STD enhances the performance of the network in tracking external inputs. In particular, we find that STD endows the network with slow-decaying plateau behaviors: a network initially stimulated to an active state decays to silence very slowly, on the time scale of STD rather than that of neural signaling. We argue that this provides a mechanism for neural systems to hold short-term memory easily and to shut off persistent activities naturally.", "full_text": "Attractor Dynamics with Synaptic Depression\n\nC. C. Alan Fung, K. Y. Michael Wong\n\nHong Kong University of Science and Technology, Hong Kong, China\n\nalanfung@ust.hk, phkywong@ust.hk\n\nHe Wang\n\nTsinghua University, Beijing, China\n\nwanghe07@mails.tsinghua.edu.cn\n\nSi Wu\n\nInstitute of Neuroscience,\n\nChinese Academy of Sciences, Shanghai, China\n\nsiwu@ion.ac.cn\n\nAbstract\n\nNeuronal connection weights exhibit short-term depression (STD). 
The present study investigates the impact of STD on the dynamics of a continuous attractor neural network (CANN) and its potential roles in neural information processing. We find that the network with STD can generate both static and traveling bumps, and that STD enhances the performance of the network in tracking external inputs. In particular, we find that STD endows the network with slow-decaying plateau behaviors: a network initially stimulated to an active state decays to silence very slowly, on the time scale of STD rather than that of neural signaling. We argue that this provides a mechanism for neural systems to hold short-term memory easily and to shut off persistent activities naturally.\n\n1 Introduction\n\nNetworks of various types, formed by a large number of neurons through synapses, are the substrate of brain functions. The network structure is the key that determines the responsive behaviors of a network to external inputs, and hence the computations implemented by the neural system. Understanding the relationship between the structure of a neural network and the function it can achieve is at the core of using mathematical models for elucidating brain functions.\n\nIn the conventional modeling of neuronal networks, it is often assumed that the connection weights between neurons, which model the efficacy of the activities of pre-synaptic neurons in modulating the states of post-synaptic neurons, are constants, or vary only on long time scales when learning occurs. However, experimental data have consistently revealed that neuronal connection weights change on short time scales, varying from hundreds to thousands of milliseconds (see, e.g., [1]). This is called short-term plasticity (STP). A predominant type of STP is short-term depression (STD), which decreases the connection efficacy when a pre-synaptic neuron fires. 
The physiological process underlying STD is the depletion of available resources when signals are transmitted from a pre-synaptic neuron to the post-synaptic one.\n\nIs STD simply a by-product of the biophysical process of neural signaling? Experimental and theoretical studies have suggested that this is unlikely to be the case. Instead, STD can play very active roles in neural computation. For instance, it was found that STD can achieve gain control in regulating neural responses to external inputs, realizing Weber's law [2, 3]. Another example is that STD enables a network to generate transient synchronized population firing, appealing for detecting subtle changes in the environment [4, 5]. The STD of a neuron is also thought to play a role in estimating the information of the pre-synaptic membrane potential from the spikes it receives [6]. From the computational point of view, the time scale of STD resides between fast neural signaling (in the order of milliseconds) and slow learning (in the order of minutes or above), which is the time order of many important temporal operations occurring in our daily life, such as working memory. Thus, STD may serve as a substrate for neural systems to manipulate temporal information on the relevant time scales.\n\nIn this study, we will further explore the potential role of STD in neural information processing, an issue of fundamental importance that has not been adequately investigated so far. We will use continuous attractor neural networks (CANNs) as working models. CANNs are a type of recurrent network which holds a continuous family of localized active states [7]. Neutral stability is a key advantage of CANNs, which enables neural systems to update memory states or to track time-varying stimuli smoothly. 
CANNs have been successfully applied to describe the retaining of short-term memory, and the encoding of continuous features, such as the orientation, the head direction and the spatial location of objects, in neural systems [8, 9, 10]. CANNs have also been shown to provide a framework for implementing population decoding efficiently [11].\n\nWe analyze the dynamics of a CANN with STD included, and find that apart from the static bump states, the network can also hold moving bump solutions. This finding agrees with the results reported in the literature [12, 13]. In particular, we find that with STD, the network can have slow-decaying plateau states; that is, a network stimulated to an active state by a transient input will decay to silence very slowly, in the time order of STD rather than that of neural signaling. This is a very interesting property. It implies that STD can provide a mechanism for neural systems to generate short-term memory and shut off activities naturally. We also find that STD retains the neutral stability of the CANN, and enhances the tracking performance of the network to external inputs.\n\n2 The Model\n\nLet us consider a one-dimensional continuous stimulus x encoded by an ensemble of neurons. For example, the stimulus may represent the moving direction, the orientation or a general continuous feature of objects extracted by the neural system.\n\nLet u(x, t) be the synaptic input at time t to the neurons whose preferred stimulus is x. The range of the possible values of the stimulus is −L/2 < x ≤ L/2 and u(x, t) is periodic, i.e., u(x + L) = u(x). The dynamics is particularly convenient to analyze in the limit that the interaction range a is much less than the stimulus range L, so that we can effectively take x ∈ (−∞, ∞). The dynamics of u(x, t) is determined by the external input Iext(x, t), the network input from other neurons, and its own relaxation. 
It is given by\n\nτs ∂u(x, t)/∂t = Iext(x, t) + ρ ∫ dx′ J(x, x′) p(x′, t) r(x′, t) − u(x, t),   (1)\n\nwhere the integral runs over (−∞, ∞), and τs is the synaptic transmission delay, which is typically of the order of 2 to 5 ms. J(x, x′) is the base neural interaction from x′ to x. r(x, t) is the firing rate of the neurons. It increases with the synaptic input, but saturates in the presence of a global activity-dependent inhibition. A solvable model that captures these features is given by r(x, t) = u(x, t)²/[1 + kρ ∫ dx′ u(x′, t)²], where ρ is the neural density, and k is a positive constant controlling the strength of the global inhibition. The global inhibition can be generated by shunting inhibition [14].\n\nThe key character of CANNs is the translational invariance of their neural interactions. In our solvable model, we choose Gaussian interactions with a range a, namely, J(x, x′) = J0 exp[−(x − x′)²/(2a²)]/(a√(2π)), where J0 is a constant.\n\nThe STD coefficient p(x, t) in Eq. (1) takes into account the pre-synaptic STD. It has the maximum value of 1, and decreases with the firing rate of the neuron [15, 16]. Its dynamics is given by\n\nτd ∂p(x, t)/∂t = 1 − p(x, t) − p(x, t) τd β r(x, t),   (2)\n\nwhere τd is the time constant of synaptic depression, and the parameter β controls the depression effect due to neural firing.\n\nThe network dynamics is governed by two time scales. The time constant of STD is typically in the range of hundreds to thousands of milliseconds, much larger than that of neural signaling, i.e., τd ≫ τs. 
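As a concrete illustration (ours, not from the paper), Eqs. (1) and (2) can be integrated by Euler stepping on a discretized ring; all parameter values below are illustrative choices, with ρ = J0 = τs = 1:

```python
import numpy as np

# Euler integration of Eqs. (1)-(2) on a ring of N neurons.
# Illustrative parameters; rho = J0 = tau_s = 1.
N, L = 128, 2*np.pi
x = np.linspace(-L/2, L/2, N, endpoint=False)
a, rho, J0 = 0.5, 1.0, 1.0
kc = rho*J0**2/(8*a*np.sqrt(2*np.pi))   # critical inhibition (Sec. 2.1)
k = 0.5*kc                              # below kc: a bump state exists
beta = 0.0                              # set > 0 to switch on depression
tau_s, tau_d, dt = 1.0, 50.0, 0.05
dx = L/N
d = (x[:, None] - x[None, :] + L/2) % L - L/2          # ring distance
J = J0*np.exp(-d**2/(2*a**2))/(a*np.sqrt(2*np.pi))     # Gaussian coupling

u = 5.0*np.exp(-x**2/(4*a**2))          # initial activation
p = np.ones(N)                          # no depression initially
for _ in range(int(100*tau_s/dt)):
    r = u**2/(1 + k*rho*np.sum(u**2)*dx)               # divisive inhibition
    du = (rho*dx*(J @ (p*r)) - u)/tau_s                # Eq. (1), Iext = 0
    dp = (1 - p - tau_d*beta*r*p)/tau_d                # Eq. (2)
    u, p = u + dt*du, p + dt*dp

u0 = (1 + np.sqrt(1 - k/kc))*J0/(4*a*k*np.sqrt(np.pi))
print(u.max(), u0)   # at beta = 0 the bump settles near the analytic u0
```

With β = 0 the simulated bump amplitude converges to the analytic value u0 of Sec. 2.1; increasing β reproduces the depressed, moving, and silent regimes discussed below.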
The interplay between the fast and slow dynamics causes the network to exhibit interesting dynamical behaviors.\n\nFigure 1: The neural response profile tracks the change of position of the external stimulus from z0 = 0 to 1.5 at t = 0. Parameters: a = 0.5, k = 0.95, β = 0, α = 0.5.\n\nFigure 2: The profile of u(x, t) at t/τs = 0, 1, 2, . . . , 10 during the tracking process in Fig. 1.\n\n2.1 Dynamics of CANN without Dynamical Synapses\n\nIt is instructive to first consider the network dynamics when no dynamical synapses are included. This is done by setting β = 0 in Eq. (2), so that p(x, t) = 1 for all t. In this case, the network can support a continuous family of stationary states when the global inhibition is not too strong.\n\nSpecifically, the steady state solution to Eq. (1) is\n\nũ(x|z) = u0 exp[−(x − z)²/(4a²)],   r̃(x|z) = r0 exp[−(x − z)²/(2a²)],   (3)\n\nwhere u0 = [1 + (1 − k/kc)^(1/2)]J0/(4ak√π), r0 = [1 + (1 − k/kc)^(1/2)]/(2akρ√(2π)) and kc = ρJ0²/(8a√(2π)). These stationary states are translationally invariant among themselves and have the Gaussian shape with a free parameter z representing the position of the Gaussian bumps. They exist for 0 < k < kc; kc is thus the critical inhibition strength.\n\nFung et al. [17] considered the perturbations of the Gaussian states. They found various distortion modes, each characterized by an eigenvalue representing its rate of evolution in time. A key property they found is that the translational mode has a zero eigenvalue, and all other distortion modes have negative eigenvalues for k < kc. 
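The steady-state profile (3) can be checked numerically: substituting the Gaussian bump into the right-hand side of Eq. (1) (with β = 0 and Iext = 0) should return the same profile. A small sketch of this self-consistency check, with the illustrative choice ρ = J0 = 1:

```python
import numpy as np

# Verify that the Gaussian profile (3) is a fixed point of Eq. (1) at beta = 0.
# Illustrative values: rho = J0 = 1, z = 0.
a, rho, J0 = 0.5, 1.0, 1.0
kc = rho*J0**2/(8*a*np.sqrt(2*np.pi))
k = 0.5*kc
u0 = (1 + np.sqrt(1 - k/kc))*J0/(4*a*k*np.sqrt(np.pi))

dx = 0.01
x = np.arange(-6.0, 6.0, dx)            # wide enough that Gaussian tails vanish
u = u0*np.exp(-x**2/(4*a**2))           # candidate steady state
r = u**2/(1 + k*rho*np.sum(u**2)*dx)    # firing rate with divisive inhibition
J = J0*np.exp(-(x[:, None] - x[None, :])**2/(2*a**2))/(a*np.sqrt(2*np.pi))
residual = np.max(np.abs(rho*dx*(J @ r) - u))
print(residual/u0)                      # ~0: the profile solves Eq. (1)
```

The Gaussian convolution in Eq. (1) returns a Gaussian of width 2a, which is why the bump in ũ is wider (4a² in the exponent) than the firing-rate profile r̃ (2a²).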
This implies that the Gaussian bumps are able to track changes in the position of the external stimuli by continuously shifting the position of the bumps, with other distortion modes affecting the tracking process only in the transients.\n\nAn example of the tracking process is shown in Figs. 1 and 2, where an external stimulus with a Gaussian profile is initially centered at z = 0, pinning the center of a Gaussian neuronal response at the same position. At time t = 0, the stimulus shifts its center from z = 0 to z = 1.5 abruptly. The bump moves towards the new stimulus position, and catches up with the stimulus change after a time duration, which is referred to as the reaction time.\n\n3 Dynamics of CANN with Synaptic Depression\n\nFor clarity, we will first summarize the main results obtained on the network dynamics due to STD, and then present the theoretical analysis in Sec. 4.\n\n3.1 The Phase Diagram\n\nIn the presence of STD, CANNs exhibit new interesting dynamical behaviors. Apart from the static bump state, the network also supports moving bump states. To construct a phase diagram mapping these behaviors, we first consider how the global inhibition k and the synaptic depression β scale with other parameters. In the steady state solution of Eq. (1), u0 and ρJ0u0² should have the same dimension; so do 1 − p(x, t) and τdβu0 in Eq. (2). Hence we introduce the dimensionless parameters k̄ ≡ k/kc and β̄ ≡ τdβ/(ρ²J0²). The phase diagram obtained by numerical solutions of the network dynamics is shown in Fig. 3.\n\nFigure 3: Phase diagram of the network states in the (k̄, β̄) plane, showing the Silent, Static, Moving, and "Metastatic or Moving" regions. Symbols: numerical solutions. Dashed line: Eq. (10). Dotted line: Eq. (13). Solid line: Gaussian approximation using 11th order perturbation of the STD coefficient. 
Point P: the working point for Figs. 4 and 7. Parameters: τd/τs = 50, a = 0.5/6, range of the network = [−π, π).\n\nWe first note that the synaptic depression and the global inhibition play the same role in reducing the amplitude of the bump states. This can be seen from the steady state solution of u(x, t), which reads\n\nu(x) = ∫ dx′ ρJ(x − x′)u(x′)² / [1 + kρ ∫ dx′′ u(x′′)² + τdβu(x′)²].   (4)\n\nThe third term in the denominator of the integrand arises from STD, and plays the role of a local inhibition that is strongest where the neurons are most active. Hence we see that the silent state with u(x, t) = 0 is the only stable state when either k or β is large.\n\nWhen STD is weak, the network behaves similarly to a CANN without STD; that is, the static bump state is present up to k near 1. However, when β increases, a state with the bump spontaneously moving at a constant velocity comes into existence. Such moving states have been predicted in CANNs [12, 13], and can be associated with traveling wave behaviors widely observed in the neocortex [18]. In an intermediate range of β, both the static and moving states coexist, and the final state of the network depends on the initial condition. When β increases further, only the moving state is present.\n\n3.2 The Plateau Behavior\n\nThe network dynamics displays a very interesting behavior in the parameter regime where the static bump solution just loses its stability. In this regime, an initially activated network state decays very slowly to silence, in the time order of τd. Hence, although the bump state eventually decays to the silent state, it goes through a plateau region of slowly decaying amplitude, as shown in Fig. 
4.\n\nFigure 4: Magnitudes of the rescaled neuronal input ρJ0u(x, t) and the synaptic depression 1 − p(x, t) at (k, β) = (0.95, 0.0085) (point P in Fig. 3), for initial conditions of types A and B in Fig. 8. Symbols: numerical solutions. Lines: Gaussian approximation using Eqs. (8) and (9). Other parameters: τd/τs = 50, a = 0.5 and x ∈ [−π, π).\n\n3.3 Enhanced Tracking Performance\n\nThe responses of CANNs with STD to an abrupt change of stimulus are illustrated in Fig. 5. Compared with networks without STD, we find that the bump shifts to the new position faster. The extent of the improvement in the presence of STD is quantified in Fig. 6. However, when β is too strong, the bump tends to overshoot the target before eventually approaching it.\n\nFigure 5: The response of CANNs with STD to a stimulus abruptly changed from z0 = 0 to z0 = 1.5 at t = 0, for k = 0.5 with β = 0, 0.05 and 0.2. Symbols: numerical solutions. Lines: Gaussian approximation using 11th order perturbation of the STD coefficient. 
Parameters: τd/τs = 50, α = 0.5, a = 0.5 and x ∈ [−π, π).\n\nFigure 6: Tracking speed v of the bump at z = 0.5z0 as a function of β, for k = 0.3, 0.5 and 0.7; z0 is fixed to be 1.5.\n\n4 Analysis\n\nDespite the apparently complex behaviors of CANNs with STD, we will show in this section that a Gaussian approximation can reproduce the behaviors and facilitate the interpretation of the results. Details are explained in the Supplementary Information. We observe that the profile of the bump remains effectively Gaussian in the presence of synaptic depression. On the other hand, there is a considerable distortion of the profile of the synaptic depression when STD is strong. Yet, to the lowest order of approximation, let us take the profile of the synaptic depression to be Gaussian as well, which is valid when STD is weak, as shown in Fig. 7(a). Hence, for a ≪ L, we propose the following ansatz:\n\nu(x, t) = u0(t) exp[−(x − z)²/(4a²)],   (5)\n\np(x, t) = 1 − p0(t) exp[−(x − z)²/(2a²)].   (6)\n\nWhen these expressions are substituted into the dynamical equations (1) and (2), other functions f(x) of x appear. To maintain consistency with the Gaussian approximation, these functions will be approximated by their projections onto the Gaussian functions. In Eq. (1), we approximate\n\nf(x) ≈ [∫ (dx′/√(2πa²)) f(x′) exp[−(x′ − z)²/(4a²)]] exp[−(x − z)²/(4a²)].   (7)\n\nSimilarly, in Eq. (2), we approximate f(x) by its projection onto exp[−(x − z)²/(2a²)].\n\n4.1 The Solution of the Static Bumps\n\nWithout loss of generality, we let z = 0. Substituting Eqs. (5) and (6) into Eqs. 
(1) and (2), and letting u(t) ≡ ρJ0u0(t), we get\n\nτs du(t)/dt = u(t)² [1 − √(4/7) p0(t)] / [√2(1 + ku(t)²/8)] − u(t),   (8)\n\nτd dp0(t)/dt = βu(t)² [1 − √(2/3) p0(t)] / [1 + ku(t)²/8] − p0(t),   (9)\n\nwhere k and β here denote the rescaled parameters of Sec. 3.1. By considering the steady state solution of u and p0 and their stability against fluctuations of u and p0, we find that stable solutions exist when\n\nβ ≤ 4(1 − √(2/3) p0) / [p0(1 − √(4/7) p0)²] · [1 + τs/(τd(1 − √(2/3) p0))],   (10)\n\nwhere p0 is the steady state solution of Eqs. (8) and (9). The boundary of this region is shown as a dashed line in Fig. 3. Unfortunately, this line is not easily observed in numerical solutions, since the static bump is unstable against fluctuations that are asymmetric with respect to its central position. Although the bump is stable against symmetric fluctuations, asymmetric fluctuations can displace its position and eventually convert it to a moving bump.\n\n4.2 The Solution of the Moving Bumps\n\nAs shown in Fig. 7(b), the profile of a moving bump is characterized by a lag of the synaptic depression behind the moving bump. This is because neurons tend to be less active in locations of low values of p(x, t), causing the bump to move away from locations of strong synaptic depression. In turn, the region of synaptic depression tends to follow the bump. However, if the time scale of synaptic depression is large, the recovery of the synaptically depressed region is slowed down, and cannot catch up with the bump motion. 
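As a consistency check of Eq. (8) (the check and the closed-form fixed point below are ours, not from the paper): at β = 0 the depression variable stays at p0 = 0, and the amplitude equation then has the stable fixed point u* = 2√2(1 + √(1 − k))/k for rescaled k < 1, which reproduces the rescaled bump height ρJ0u0 of Sec. 2.1:

```python
import math

# beta = 0 limit of the reduced amplitude equation (8): Euler-integrate
# tau_s du/dt = u^2/(sqrt(2)(1 + k u^2/8)) - u  (k is the rescaled k/kc)
# and compare with the closed-form fixed point (our algebra).
k = 0.5
u = 5.0                      # any start above the unstable lower branch
tau_s, dt = 1.0, 0.05
for _ in range(4000):
    du = u**2/(math.sqrt(2)*(1 + k*u**2/8)) - u
    u += dt*du/tau_s
u_star = 2*math.sqrt(2)*(1 + math.sqrt(1 - k))/k
print(u, u_star)             # both ~ 9.657 for k = 0.5
```

Starting below the unstable lower branch u = 2√2(1 − √(1 − k))/k instead would send the amplitude to zero, reflecting the coexistence of the bump and the silent state.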
Thus, the bump starts moving spontaneously.\n\nTo incorporate asymmetry into the moving state, we propose the following ansatz:\n\nu(x, t) = u0(t) exp[−(x − vt)²/(4a²)],   (11)\n\np(x, t) = 1 − p0(t) exp[−(x − vt)²/(2a²)] + p1(t) exp[−(x − vt)²/(2a²)] ((x − vt)/a).   (12)\n\nProjecting the terms in Eq. (1) onto the basis functions exp[−(x − vt)²/(4a²)] and exp[−(x − vt)²/(4a²)](x − vt)/a, and those in Eq. (2) onto exp[−(x − vt)²/(2a²)] and exp[−(x − vt)²/(2a²)](x − vt)/a, we obtain four equations for u, p0, p1 and vτs/a. Real solutions exist only if\n\nβu²/(1 + ku²/8) ≥ A [τd/τs − B + √((τd/τs − B)² − C)]^(−1),   (13)\n\nwhere A = 7√7/4, B = (7/4)[(5/2)√(7/6) − 1], and C = (343/36)(1 − √(6/7)). As shown in Fig. 3, the boundary of this region effectively coincides with the numerical solution for the line separating the static and moving phases.\n\nNote that when τd/τs increases, the static phase shrinks. This is because the recovery of the synaptically depressed region is slowed down, making it harder to catch up with changes in the bump motion.\n\nFigure 7: Neuronal input u(x, t) and the STD coefficient p(x, t) in (a) the static state at (k, β) = (0.9, 0.005), and (b) the moving state at (k, β) = (0.5, 0.015). 
Parameter: τd/τs = 50.\n\nAn alternative approach that arrives at Eq. (13) is to consider the instability of the static bump, which is obtained by setting v and p1 to zero in Eqs. (11) and (12). Considering the instability of the static bump against asymmetric fluctuations in p1 and vt, we again arrive at Eq. (13). This shows that as soon as the moving bump comes into existence, the static bump becomes unstable. It also implies that in the entire region where the static and moving bumps coexist, the static bump is unstable to asymmetric fluctuations. It is stable (or, more precisely, metastable) when it is static, but once it is pushed to one side, it will continue to move along that direction. We may call this behavior metastatic. As we shall see, this metastatic behavior is also the cause of the enhanced tracking performance.\n\n4.3 The Plateau Behavior\n\nTo illustrate the plateau behavior, we select a point in the marginally unstable regime of the silent phase, that is, in the vicinity of the static phase. As shown in Fig. 8, the nullclines of u and p0 (du/dt = 0 and dp0/dt = 0 respectively) do not have any intersections, as they do in the static phase where the bump state exists. Yet they are still close enough to create a region of very slow dynamics near the apex of the u-nullcline at (u, p0) = [(8/k)^(1/2), √(7/4)(1 − √k)]. Then, in Fig. 8, we plot the trajectories of the dynamics starting from different initial conditions. For verification, we also solve the full equations (1) and (2), and plot a flow diagram with the axes being max_x u(x, t) and 1 − min_x p(x, t). The resultant flow diagram has a satisfactory agreement with Fig. 8.\n\nThe most interesting family of trajectories is represented by B and C in Fig. 8. 
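The plateau can also be seen by integrating the reduced equations (8) and (9) directly at point P; the following sketch (our illustration, Euler stepping with τs = 1) shows the amplitude lingering near the u-nullcline for a time of order τd before collapsing:

```python
import math

# Integrate Eqs. (8)-(9) at point P: (k, beta) = (0.95, 0.0085), tau_d/tau_s = 50.
# The amplitude u hovers on a plateau while p0 creeps up on the tau_d time
# scale; once p0 passes the apex of the u-nullcline, u collapses to silence.
k, beta = 0.95, 0.0085
tau_s, tau_d, dt = 1.0, 50.0, 0.01
u, p0 = 3.0, 0.0                        # transient activation, no depression yet
history, t = {}, 0.0
for _ in range(int(1000/dt)):
    g = u**2/(1 + k*u**2/8)
    du = g*(1 - math.sqrt(4.0/7.0)*p0)/math.sqrt(2) - u      # Eq. (8)
    dp0 = beta*g*(1 - math.sqrt(2.0/3.0)*p0) - p0            # Eq. (9)
    u += dt*du/tau_s
    p0 += dt*dp0/tau_d
    t += dt
    if abs(t - 20.0) < dt/2:
        history['t=20'] = u             # still on the plateau at t = 20 tau_s
history['final'] = u                    # silent by t = 1000 tau_s
print(history)
```

The same run with u(0) well below the slow band decays within a few τs, reproducing the contrast between trajectories B/C and D in Fig. 8.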
Due to the much faster dynamics of u, trajectories starting from a wide range of initial conditions converge rapidly, in a time of the order of τs, to a common trajectory in the close neighborhood of the u-nullcline. Along this common trajectory, u is effectively the steady state solution of Eq. (8) at the instantaneous value of p0(t), which evolves on the much longer time scale of τd. This gives rise to the plateau region of u, which can survive for a duration of the order of τd. The plateau ends after the trajectory has passed the slow region near the apex of the u-nullcline. This dynamics is in clear contrast with trajectory D, in which the bump height decays to zero in a time of the order of τs.\n\nTrajectory A represents another family of trajectories having rather similar behaviors, although the lifetimes of their plateaus are not so long. These trajectories start from more depleted initial conditions, and hence do not have the chance to get close to the u-nullcline. Nevertheless, they converge rapidly, in a time of order τs, to the band u ≈ (8/k)^(1/2), where the dynamics of u is slow. The trajectories then rely mainly on the dynamics of p0 to carry them out of this slow region, and hence plateaus with lifetimes of the order of τd are created.\n\nFigure 8: Trajectories of the network dynamics in the (u, p0) plane, starting from various initial conditions at (k, β) = (0.95, 0.0085) (point P in Fig. 3). Solid line: u-nullcline. Dashed line: p0-nullcline. Symbols are data points spaced at time intervals of 2τs.\n\nFigure 9: Contours of plateau lifetimes in the space of k and β. The lines are the two topmost phase boundaries in Fig. 3; bumps can be sustained below them. 
In the initial condition, α = 0.5.\n\nFollowing similar arguments, the plateau behavior also exists in the stable region of the static states. This happens when the initial condition of the network lies outside the basin of attraction of the static states, but is still in the vicinity of the basin boundary.\n\nWhen one goes deeper into the silent phase, the region of slow dynamics between the u- and p0-nullclines broadens. Hence plateau lifetimes are longest near the phase boundary between the bump and silent states, and become shorter as one goes deeper into the silent phase. This is confirmed by the contours of plateau lifetimes in the phase diagram shown in Fig. 9, obtained by numerical solution. The initial condition is uniformly set by introducing an external stimulus Iext(x|z0) = αu0 exp[−x²/(4a²)] to the right hand side of Eq. (1), where α is the stimulus strength. After the network has reached a steady state, the stimulus is removed at t = 0, leaving the network to relax.\n\n4.4 The Tracking Behavior\n\nTo study the tracking behavior, we add the external stimulus Iext(x|z0) = αu0 exp[−(x − z0)²/(4a²)] to the right hand side of Eq. (1), where z0 is the position of the stimulus, abruptly changed at t = 0. With this additional term, we solve the modified version of Eqs. (11) and (12), and the solution reproduces the qualitative features due to the presence of synaptic depression, namely, the faster response at weak β, and the overshooting at stronger β. As remarked previously, this is due to the metastatic behavior of the bumps, which enhances their tendency to move from the static state when a small push is exerted.\n\nHowever, when describing the overshooting of the tracking process, the quantitative agreement between the numerical solution and the ansatz in Eqs. (11) and (12) is not satisfactory. 
We have made an improvement by developing a higher order perturbation analysis using basis functions of the quantum harmonic oscillator [17]. As shown in Fig. 5, the quantitative agreement is then much more satisfactory.\n\n5 Conclusions and Discussions\n\nIn this work, we have investigated the impact of STD on the dynamics of a CANN, and found that the network can support both static and moving bumps. Static bumps exist only when the synaptic depression is sufficiently weak. A consequence of synaptic depression is that it places static bumps in the metastatic state, so that their response to changing stimuli is sped up, enhancing the tracking performance. We conjecture that moving bump states may be associated with the traveling wave behaviors widely observed in the neocortex.\n\nA finding in our work with possibly very important biological implications is that STD endows the network with slow-decaying behaviors. When the network is initially stimulated to an active state by an external input, it will decay to silence very slowly after the input is removed. The duration of the plateau is of the time scale of STD rather than that of neural signaling, and it provides a way for the network to hold the stimulus information for up to hundreds of milliseconds, if the network operates in the parameter regime where the bumps are marginally unstable. This property is, on the other hand, extremely difficult to implement in attractor networks without STD. In a CANN without STD, an active state of the network decays to silence exponentially fast or persists forever, depending on the initial activity level of the network. Indeed, how to shut off the activity of a CANN has been a challenging issue that has received wide attention in theoretical neuroscience, with solutions suggesting that a strong external input, either in the form of inhibition or excitation, must be applied (see, e.g., [19]). 
Here, we show that STD provides a mechanism for closing down network activities naturally and in the desirable duration.\n\nWe have also analyzed the dynamics of CANNs with STD using a Gaussian approximation of the bump. It describes the phase diagram of the static and moving phases and the plateau behavior, and provides insights into the metastatic nature of the bumps and its relation with the enhanced tracking performance. In most cases, approximating 1 − p(x, t) by a Gaussian profile is already sufficient to produce qualitatively satisfactory results. However, higher order perturbation analysis is required to yield more accurate descriptions of results such as the overshooting in the tracking process (Fig. 5).\n\nBesides STD, there are other forms of STP that may be relevant to realizing short-term memory. Mongillo et al. [20] have recently proposed a very interesting idea for achieving working memory in the prefrontal cortex by utilizing the effect of short-term facilitation (STF). Compared with STD, STF has the opposite effect in modifying the neuronal connection weights. The underlying biophysics of STF is the increased level of residual calcium due to neural firing, which increases the release probability of neural transmitters. Mongillo et al. [20] showed that STF provides a way for the network to encode the information of external inputs in the facilitated connection weights; it has the advantage of not having to recruit persistent neural firing and hence is economically efficient. This STF-based memory mechanism is, however, not necessarily contradictory to the STD-based one we propose here. They may be present in different cortical areas for different computational purposes. STD and STF have been observed to have different effects in different cortical areas. One location is the sensory cortex, where CANN models are often applicable. Here, the effects of STD tend to be stronger than those of STF. 
In contrast to the STF-based mechanism, our work suggests that the STD-based one exhibits prolonged neural firing, which has been observed in some cortical areas. In terms of information transmission, prolonged neural firing is preferable in the early information pathways, so that the stimulus information can be conveyed to higher cortical areas through neuronal interactions. Hence, it seems that the brain may use a strategy of weighting the effects of STD and STF differentially for carrying out different computational tasks. It is our goal in future work to explore the joint impact of STD and STF on the dynamics of neuronal networks.\n\nThis work is partially supported by the Research Grants Council of Hong Kong (grant nos. HKUST 603607 and 604008).\n\nReferences\n\n[1] H. Markram, Y. Wang and M. Tsodyks, Proc. Natl. Acad. Sci. U.S.A., 95, 5323 (1998).\n[2] M. Tsodyks and H. Markram, Proc. Natl. Acad. Sci. U.S.A., 94, 719-723 (1997).\n[3] L. F. Abbott, J. A. Varela, K. Sen and S. B. Nelson, Science, 275, 220-224 (1997).\n[4] M. Tsodyks, A. Uziel and H. Markram, J. Neurosci., 20, 1-5 (2000).\n[5] A. Loebel and M. Tsodyks, J. Comput. Neurosci., 13, 111-124 (2002).\n[6] J.-P. Pfister, P. Dayan and M. Lengyel, Advances in Neural Information Processing Systems 22, Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams and A. Culotta (eds.), 1464 (2009).\n[7] S. Amari, Biological Cybernetics, 27, 77-87 (1977).\n[8] R. Ben-Yishai, R. Lev Bar-Or and H. Sompolinsky, Proc. Natl. Acad. Sci. U.S.A., 92, 3844-3848 (1995).\n[9] K.-C. Zhang, J. Neurosci., 16, 2112-2126 (1996).\n[10] A. Samsonovich and B. L. McNaughton, J. Neurosci., 7, 5900-5920 (1997).\n[11] S. Deneve, P. E. Latham and A. Pouget, Nature Neuroscience, 2, 740-745 (1999).\n[12] L. C. York and M. C. W. van Rossum, J. Comput. Neurosci., 27, 607-620 (2009).\n[13] Z. P. Kilpatrick and P. C. Bressloff, Physica D, 239, 547-560 (2010).\n[14] J. Hao, X. Wang, Y. Dan, M. 
Poo and X. Zhang, Proc. Natl. Acad. Sci. U.S.A., 106, 21906-21911 (2009).\n[15] M. V. Tsodyks, K. Pawelzik and H. Markram, Neural Comput., 10, 821-835 (1998).\n[16] R. S. Zucker and W. G. Regehr, Annu. Rev. Physiol., 64, 355-405 (2002).\n[17] C. C. A. Fung, K. Y. M. Wong and S. Wu, Neural Comput., 22, 752-792 (2010).\n[18] J. Wu, X. Huang and C. Zhang, The Neuroscientist, 14, 487-502 (2008).\n[19] B. S. Gutkin, C. R. Laing, C. L. Colby, C. C. Chow and B. G. Ermentrout, J. Comput. Neurosci., 11, 121-134 (2001).\n[20] G. Mongillo, O. Barak and M. Tsodyks, Science, 319, 1543-1546 (2008).\n", "award": [], "sourceid": 486, "authors": [{"given_name": "K.", "family_name": "Wong", "institution": null}, {"given_name": "He", "family_name": "Wang", "institution": null}, {"given_name": "Si", "family_name": "Wu", "institution": null}, {"given_name": "Chi", "family_name": "Fung", "institution": null}]}