{"title": "Activity Driven Adaptive Stochastic Resonance", "book": "Advances in Neural Information Processing Systems", "page_first": 301, "page_last": 308, "abstract": null, "full_text": "Activity Driven Adaptive Stochastic Resonance \n\nGregor Wenning and Klaus Obermayer \n\nDepartment of Electrical Engineering and Computer Science \nTechnical University of Berlin \nFranklinstr. 28/29, 10587 Berlin \n{grewe, oby}@cs.tu-berlin.de \n\nAbstract \n\nCortical neurons may be considered as threshold elements integrating many excitatory and inhibitory inputs in parallel. Given the apparent variability of cortical spike trains, this yields a strongly fluctuating membrane potential, so that threshold crossings are highly irregular. Here we study how a neuron could maximize its sensitivity with respect to a relatively small subset of its excitatory input. Weak signals embedded in fluctuations are the natural realm of stochastic resonance. The neuron's response is described by a hazard-function approximation applied to an Ornstein-Uhlenbeck process. We analytically derive an optimality criterion and give a learning rule for the adjustment of the membrane fluctuations, such that the sensitivity is maximized by exploiting stochastic resonance. We show that adaptation depends only on quantities that the neuron could easily estimate locally (in space and time). The main results are compared with simulations of a biophysically more realistic neuron model. \n\n1 Introduction \n\nEnergetic considerations [1] and measurements [2] suggest that sub-threshold inputs, i.e. inputs which on their own are not capable of driving a neuron, play an important role in information processing. This implies that measures must be taken such that the relevant information contained in the inputs is amplified in order to be transmitted. 
One way to increase the sensitivity of a threshold device is the addition of noise. This phenomenon is called stochastic resonance (see [3] for a review) and has already been investigated and experimentally demonstrated in the context of neural systems (e.g. [3, 4]). The optimal noise level, however, depends on the distribution of the input signals; hence neurons must adapt their internal noise levels when the statistics of the input changes. Here we derive and explore an activity dependent learning rule which is intuitive and which depends only on quantities (input and output rates) which a neuron could, in principle, estimate. \n\nThe paper is structured as follows. In section 2 we describe the neuron model and introduce the membrane potential dynamics in its hazard-function approximation. In section 3 we characterize stochastic resonance in this model system and calculate the optimal noise level as a function of the input and output rates. In section 4 we introduce an activity dependent learning rule for optimally adjusting the internal noise level, demonstrate its usefulness by applying it to the Ornstein-Uhlenbeck neuron, and relate the phenomenon of stochastic resonance to its experimentally accessible signature: the adaptation of the neuron's transfer function. Section 5 contains a comparison to the results from a biophysically more realistic neuron model. Section 6, finally, concludes with a brief discussion. \n\n2 The abstract Neuron Model \n\nFigure 1a shows the basic model setup. \n\nFigure 1: a) The basic model setup; for explanation see text. b) A family of Arrhenius-type hazard functions for different noise levels. 1 corresponds to the threshold θ; values below 1 are subthreshold. \n\nA leaky integrate-and-fire neuron receives a \"signal\" input, which we assume to be a Poisson distributed spike train with a rate λs. The rate λs is low enough that the membrane potential V of the neuron remains sub-threshold and no output spikes are generated. In the following we assume that the information conveyed by the input and output of the neuron is coded by its input and output rates λs and λo only. Sensitivity is then increased by adding 2N balanced excitatory and inhibitory \"noise\" inputs (N inputs each) with Poisson distributed spikes of rate λn. Balanced inputs [5, 6] were chosen because they do not affect the average membrane potential, which allows us to separate the effect of decreasing the distance of the neuron's operating point to the threshold potential from the effect of increasing the variance of the noise. Signal and noise inputs are coupled to the neuron via synaptic weights ws and wn, respectively. The threshold of the neuron is denoted by θ. Without loss of generality the membrane time constant, the neuron's resting potential, and the neuron's threshold are set to one, zero, and one, respectively. \n\nIf the total rate 2Nλn of incoming spikes on the \"noise\" channel is large and the individual coupling constants wn are small, the dynamics of the membrane potential can be approximated by an Ornstein-Uhlenbeck process, \n\ndV = -V dt + μ dt + σ dW,   (1) \n\nwhere the drift μ and variance σ² are given by μ = ws λs and σ² = ws² λs + 2N wn² λn, and where dW describes a Gaussian noise process with mean zero and variance one [8]. Spike initiation is included by inserting an absorbing boundary with reset. 
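The sub-threshold behavior described by eq. (1) can be illustrated with a minimal simulation. The sketch below (the time step, simulation horizon, and noise amplitudes are our own illustrative choices, not values from the paper) integrates the Ornstein-Uhlenbeck membrane potential with an absorbing threshold and reset, and shows that for sub-threshold drift μ < θ the output rate grows with the noise amplitude σ: \n\n```python\nimport math\nimport random\n\ndef ou_output_rate(mu, sigma, theta=1.0, dt=1e-3, t_max=200.0, seed=1):\n    \"\"\"Integrate dV = -V dt + mu dt + sigma dW (eq. (1)) with an\n    absorbing threshold at theta and reset to the resting potential 0;\n    return the mean output spike rate (spikes per membrane time constant).\"\"\"\n    rng = random.Random(seed)\n    v, spikes = 0.0, 0\n    sqrt_dt = math.sqrt(dt)\n    for _ in range(int(t_max / dt)):\n        v += (mu - v) * dt + sigma * sqrt_dt * rng.gauss(0.0, 1.0)\n        if v >= theta:   # threshold crossing ...\n            spikes += 1\n            v = 0.0      # ... followed by reset\n    return spikes / t_max\n\n# Sub-threshold drift (mu = 0.5 < theta = 1): without noise the neuron is\n# silent, and increasing noise produces increasingly many threshold crossings.\nrates = [ou_output_rate(0.5, s) for s in (0.0, 0.3, 0.6)]\n```\n\nThis Euler-Maruyama sketch only demonstrates the qualitative mechanism; the paper works with the hazard-function approximation introduced next rather than with direct simulation. 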
\nEquation (1) can be solved analytically for special cases [8], but here we opt for a more versatile approximation (cf. [7]). In this approximation the probability of crossing the threshold, which is proportional to the instantaneous output rate of the neuron, is described by an effective transfer function. In [7] several transfer functions were compared with respect to their performance; from these we choose an Arrhenius-type function, \n\nλo(t) = c exp{ -(θ - x(t))² / σ² },   (2) \n\nwhere θ - x(t) is the distance in voltage between the noise-free trajectory x(t) of the membrane potential and the threshold θ; x(t) is calculated from eq. (1) without its diffusion term. Note that x(t) is a function of λs, and c is a constant. Figure 1b shows a family of Arrhenius-type transfer functions for different noise levels σ. \n\n3 Stochastic Resonance in an Ornstein-Uhlenbeck Neuron \n\nSeveral measures can be used to quantify the impact of noise on the quality of signal transmission through threshold devices. A natural choice is the mutual information [9] between the distributions p(λs) and p(λo) of input and output rates, which we will discuss in section 4 (see also figure 3f). In order to keep the analysis and the derivation of the learning rule simple, however, we first consider a scenario in which a neuron should distinguish between two sub-threshold input rates λs and λs + Δs. Optimal distinguishability is achieved if the difference Δo of the corresponding output rates is maximal, i.e. if \n\nΔo = f(λs + Δs) - f(λs) = max,   (3) \n\nwhere f is the transfer function given by eq. (2). Obviously there is a close connection between these two measures, because increasing either of them leads to an increase in the entropy of p(λo). \n\nFig. 2 shows plots of the difference Δo of output rates vs. 
the level of noise, σ², for different rates λs and different values of Δs. \n\nFigure 2: Δo vs. σ² for two different base rates λs = 2 (left) and λs = 7 (right) and 10 different values of Δs = 0.01, 0.02, ..., 0.1. σ² is given in per cent of the maximum σ² = 2N wn² λn. The arrows above the x-axis indicate the position of the maximum according to eq. (3); the arrowheads below the x-axis indicate the optimal value computed using eq. (5) (67% and 25%). Parameters were: N = 10, λn = 7, ws = 0.1, and wn ∈ [0, 0.1]. \n\nAll curves show a clear maximum at a particular noise level. The optimal noise level increases with decreasing input rate λs, but is roughly independent of the difference Δs as long as Δs is small. Therefore, one optimal noise level holds even if a neuron has to distinguish several sub-threshold input rates, as long as these rates are clustered around a given base rate λs. \n\nThe optimal noise level for constant λs (stationary states) is given by the condition \n\nd/dσ² ( f(λs + Δs) - f(λs) ) = 0,   (4) \n\nwhere f is given by eq. (2). Equation (4) can be evaluated in the limit of small values of Δs using a Taylor expansion up to the second order. We obtain \n\nσ²_opt = 2 (1 - ws λs)²,   (5) \n\nif the main part of the variance of the membrane potential is a result of the balanced input, i.e. if σ² ≈ 2N wn² λn (cf. eq. (1)). Since σ² = (1 - ws λs)² / (-log(λo/c)) (cf. eq. (2)), eq. (5) is equivalent to 1 + 2 log(λo(λs; σ²)/c) = 0. This shows that the optimal noise level depends either only on λs or only on λo(λs; σ²); both are quantities which are locally available at the cell. 
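The optimality criterion can be evaluated directly from eqs. (2), (3) and (5). The sketch below (with c = 1 and the parameter values of figure 2; the probe noise levels are our own choices) evaluates Δo at a small, the optimal, and a large noise level, and checks the equivalent rate-based condition 1 + 2 log(λo/c) = 0: \n\n```python\nimport math\n\nW_S, C, THETA = 0.1, 1.0, 1.0  # signal weight, hazard prefactor, threshold\n\ndef transfer(rate_s, sigma2):\n    \"\"\"Arrhenius-type transfer function, eq. (2); the stationary\n    noise-free potential is x = W_S * rate_s.\"\"\"\n    return C * math.exp(-(THETA - W_S * rate_s) ** 2 / sigma2)\n\ndef delta_out(rate_s, delta_s, sigma2):\n    \"\"\"Difference of output rates, eq. (3).\"\"\"\n    return transfer(rate_s + delta_s, sigma2) - transfer(rate_s, sigma2)\n\ndef sigma2_opt(rate_s):\n    \"\"\"Optimal noise level in the quadratic approximation, eq. (5).\"\"\"\n    return 2.0 * (THETA - W_S * rate_s) ** 2\n\ns_opt = sigma2_opt(2.0)  # base rate lambda_s = 2, as in figure 2 (left)\ngains = [delta_out(2.0, 0.1, s) for s in (0.05 * s_opt, s_opt, 20.0 * s_opt)]\n\n# Rate-based form of the optimality criterion: vanishes at sigma^2 = s_opt.\ncriterion = 1.0 + 2.0 * math.log(transfer(2.0, s_opt) / C)\n```\n\nThe last line restates the optimum through the output rate alone, which is the form a neuron could evaluate locally. 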
\n\n4 Adaptive Stochastic Resonance \n\nWe now consider the case that a neuron needs to adapt its internal noise level because the base input rate λs changes. A simple learning rule which converges to the optimal noise level is given by \n\nΔσ² = -ε log( σ² / σ²_opt ),   (6) \n\nwhere the learning parameter ε determines the time scale of adaptation. Inserting the corresponding expressions for the actual and the optimal variance, we obtain a learning rule for the weights wn, \n\nΔwn = -ε log( 2N λn wn² / (2 (1 - ws λs)²) ).   (7) \n\nNote that equivalent learning rules (in the sense of eq. (6)) can be formulated for the number N of noise inputs and for their rates λn as well. The right-hand sides of eqs. (6) and (7) depend only on quantities which are locally available at the neuron. \n\nFig. 3ab shows the stochastic adaptation of the noise level, using eq. (7), to randomly distributed λs which are clustered around a base rate. \n\nFig. 3c-f shows an application of the learning rule, eq. (7), to an Ornstein-Uhlenbeck neuron whose noise level needs to adapt to three different base input rates. The figure shows the base input rate λs (fig. 3a). In fig. 3b the adaptation of wn according to eq. (7) is shown (solid line); for comparison the wn which maximizes eq. (3) is also displayed (dashed dotted line). Mutual information was calculated between a distribution of randomly chosen input rates which are clustered around the base rate λs. The wn that maximizes the mutual information between input and output rates is displayed in fig. 3d (dashed line). Fig. 3e shows the ratio Δo/Δs computed by using eq. (3) and the wn calculated with eq. (7) (dashed dotted line), and the same ratio for the quadratic approximation. Fig. 3f shows the mutual information between the input and output rates as a function of the changing wn. 
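Convergence of the learning rule can be sketched in a few lines. In the illustration below (learning rate, starting weight, and iteration count are our own choices; the other parameters are those of figure 2) the weight driven by eq. (7) converges to the fixed point wn* at which σ² = 2N λn wn² equals σ²_opt of eq. (5): \n\n```python\nimport math\n\nN, RATE_N, W_S, RATE_S = 10, 7.0, 0.1, 2.0  # parameters as in figure 2\n\ndef update(w_n, eps=0.005):\n    \"\"\"One application of the learning rule, eq. (7).\"\"\"\n    sigma2 = 2.0 * N * RATE_N * w_n ** 2          # actual noise variance, eq. (1)\n    sigma2_opt = 2.0 * (1.0 - W_S * RATE_S) ** 2  # optimal variance, eq. (5)\n    return w_n - eps * math.log(sigma2 / sigma2_opt)\n\nw_n = 0.02  # start far below the optimum\nfor _ in range(2000):\n    w_n = update(w_n)\n\n# Fixed point of eq. (7): 2*N*RATE_N*w^2 = 2*(1 - W_S*RATE_S)^2.\nw_star = (1.0 - W_S * RATE_S) / math.sqrt(N * RATE_N)\n```\n\nAt the fixed point the argument of the logarithm is one, so the update vanishes; for small ε the rule is a contraction around wn*. 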
\n\nFigure 3: a) Input rates λs are evenly distributed around a base rate with width 0.5; in each time step one λs is presented. b) Application of the learning rule, eq. (7), to the rates shown in a). Adaptation of the noise level to three different input base rates λs. c) The three base rates λs. d) wn as a function of time according to eq. (7) (solid line), the optimal wn that maximizes eq. (3) (dashed dotted line), and the optimal wn that maximizes the mutual information between the input and output rates (dashed). The optimal values of wn as given by the quadratic approximation, eq. (5), are indicated by the black arrows. e) The ratio Δo/Δs computed from eq. (3) (dashed dotted line) and the quadratic approximation (solid line). f) Mutual information between input and output rates as a function of base rate and changing synaptic coupling constant wn. For calculating the mutual information, the input rates were chosen randomly from the interval [λs - 0.25, λs + 0.25] in each time step. Parameters as in fig. 2. \n\nThe figure shows that the learning rule, eq. (7), in the quadratic approximation leads to values for σ² which are near-optimal, and that optimizing the difference of output rates leads to results similar to the optimization of the mutual information. 
\n\n5 Conductance based Model Neuron \n\nTo check if and how the results from the abstract model carry over to a biophysically more realistic one, we explore a modified Hodgkin-Huxley point neuron with an additional A-current (a slow potassium current) as in [11]. The dynamics of the membrane potential V is described by the equation \n\nC dV/dt = -gL (V(t) - EL) - ḡNa m∞³ h(t) (V - ENa) - ḡK n(t)⁴ (V - EK) - ḡA a∞³ b(t) (V - EK) + Isyn + Iapp,   (8) \n\nwhere the parameters can be found in the appendix. All parameters are kept fixed throughout all shown data. \n\nFigure 4: a) Transfer functions for the conductance based model neuron with additional balanced input, a = 1, 2, 3, 4. b) Demonstration of SR for the conductance based model neuron. The plot shows the resonance for two different base currents Iapp = 0.7 and Iapp = 0.2 and a ∈ [0, 10]. \n\nFigure 5: a) Optimal noise level as a function of the base current in the conductance based model. b) Optimal noise level as a function of the noise-free membrane potential in the abstract model. 
As balanced input we choose an excitatory Poisson spike train with rate λne = 1750 Hz and an inhibitory spike train with rate λni = 750 Hz. These spike trains are coupled to the neuron via synapses, resulting in a synaptic current as in [12], \n\nIsyn = ge (V(t) - Ee) + gi (V(t) - Ei).   (9) \n\nEvery time a spike arrives at a synapse, the conductance is increased by its peak conductance ge,i and afterwards decays exponentially like exp{-t/τe,i}. The corresponding parameters are ge = a · 0.02 · gL and gi = a · 0.0615 · gL, where gL is the leak conductance given in the appendix. The common factor a is varied in the simulations and adjusts the height of the peak conductances. Excitatory and inhibitory inputs are called balanced if the impact of a spike train at threshold θ is the same for excitation and inhibition, \n\nτe ge λne (Ee - θ) = -τi gi λni (Ei - θ),   (10) \n\nwith τe,i = (1/ge,i) ∫₀^∞ ge,i(t) dt. Note that the factor a cancels in eq. (10). \n\nFig. 4a displays transfer functions in the conductance based setting with balanced input: a family of functions with varying peak conductances for the balanced input is drawn. \n\nFigure 6: Adaptive SR in the conductance based model. a) Currents drawn from uniform distributions of width 0.2 nA centered around base currents of 3, 8, and 1 nA, respectively. b) Optimal noise level in terms of a; optimality refers to a semi-linear fit to the data of fig. 5a. c) Adaptation of the peak conductances via a, using a learning rule like eq. (7). d) Difference in spike count for base currents I ± 0.1 nA, using a as specified in c). 
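The balance condition of eq. (10) can be checked numerically, and the cancellation of the common factor a made explicit. In the sketch below the threshold value θ ≈ -58 mV is our own assumption (the paper does not state it explicitly); with that choice the required conductance ratio reproduces the quoted gi/ge = 0.0615/0.02: \n\n```python\n# Balance condition, eq. (10):\n#   tau_e * g_e * rate_e * (E_e - theta) = -tau_i * g_i * rate_i * (E_i - theta)\nTAU_E, TAU_I = 5.0, 10.0       # effective synaptic time constants [ms]\nRATE_E, RATE_I = 1.750, 0.750  # input rates [spikes/ms] (1750 Hz, 750 Hz)\nE_E, E_I = 0.0, -80.0          # reversal potentials [mV]\nTHETA = -58.0                  # assumed threshold [mV] (not given in the paper)\n\ndef imbalance(g_e, g_i, theta=THETA):\n    \"\"\"Excess of excitatory over inhibitory drive at threshold; zero if balanced.\"\"\"\n    return (TAU_E * g_e * RATE_E * (E_E - theta)\n            + TAU_I * g_i * RATE_I * (E_I - theta))\n\ndef required_ratio(theta=THETA):\n    \"\"\"The ratio g_i/g_e that makes imbalance() vanish.\"\"\"\n    return -(TAU_E * RATE_E * (E_E - theta)) / (TAU_I * RATE_I * (E_I - theta))\n```\n\nBecause both terms of the imbalance scale linearly with the common factor a, balance is preserved for every a, which is what makes a a pure noise-level knob. 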
\n\nFor studying SR in the conductance based framework, we apply the same paradigm as in the abstract model. Given a certain average membrane potential, which is adjusted via an injected current I (in nA), we calculate the difference in the output rate given a certain difference in the average membrane potential (mediated via the injected current), I ± ΔI. A demonstration of stochastic resonance in the conductance based neuron can be seen in fig. 4b. In fig. 5a the optimal noise level, in terms of multiples a of the peak conductances, is plotted versus all currents that yield a sub-threshold membrane voltage. For comparison we give the corresponding relationship for the abstract model in fig. 5b. \n\nFig. 6 shows the performance of the conductance based model using a learning rule like eq. (7). Since we do not have an analytically derived expression for σopt in the conductance based case, the relation σopt(I), necessary for using eq. (7), corresponds to a semi-linear fit to the (a_opt, I) relation in fig. 5a. \n\n6 Conclusion and future directions \n\nIn our contribution we have shown that a simple, activity driven learning rule can be given for the adaptation of the optimal noise level in a stochastic resonance setting. The results from the abstract framework are compared with results from a conductance based model neuron. A biologically plausible mechanism for implementing adaptive stochastic resonance in conductance based neurons is currently under investigation. 
\n\nAcknowledgments \n\nSupported by: Wellcome Trust (061113/Z/00). \n\nAppendix: Parameters for the conductance based model neuron \n\nSomatic conductances/ion-channel properties: Cm = 1.0 μF/cm², gL = 0.05 mS/cm², ḡNa = 100 mS/cm², ḡK = 40 mS/cm², ḡA = 20 mS/cm², EL = -65 mV, ENa = 55 mV, EK = -80 mV, τA = 20 ms. \n\nSynaptic coupling: Ee = 0 mV, Ei = -80 mV, τe = 5 ms, τi = 10 ms. \n\nSpike initiation: dh/dt = (h∞ - h)/τh, dn/dt = (n∞ - n)/τn, db/dt = (b∞ - b)/τA, \nm∞ = αm/(αm + βm), αm = -0.1(V + 30)/(exp(-0.1(V + 30)) - 1), βm = 4 exp(-(V + 55)/18), \nh∞ = αh/(αh + βh), αh = 0.07 exp(-(V + 44)/20), βh = 1/(exp(-0.1(V + 14)) + 1), \nn∞ = αn/(αn + βn), αn = -0.01(V + 34)/(exp(-0.1(V + 34)) - 1), βn = 0.125 exp(-(V + 44)/80), \na∞ = 1/(exp(-(V + 50)/20) + 1), b∞ = 1/(exp((V + 80)/6) + 1), \nτh = φ/(αh + βh), τn = φ/(αn + βn), φ = 0.1. \n\nReferences \n\n[1] S. B. Laughlin, R. R. de Ruyter van Steveninck and J. C. Anderson, The metabolic cost of neural information, Nature Neuroscience, 1(1), 1998, p. 36-41. \n\n[2] J. S. Anderson, I. Lampl, D. C. Gillespie and D. Ferster, The Contribution of Noise to Contrast Invariance of Orientation Tuning in Cat Visual Cortex, Science, 290, 2000, p. 1968-1972. \n\n[3] L. Gammaitoni, P. Hanggi, P. Jung and F. Marchesoni, Stochastic Resonance, Reviews of Modern Physics, 70(1), 1998, p. 223-287. \n\n[4] D. F. Russel, L. A. Wilkens and F. Moss, Use of behavioral stochastic resonance by paddle fish for feeding, Nature, 402, 1999, p. 291-294. \n\n[5] M. N. Shadlen and W. T. Newsome, The Variable Discharge of Cortical Neurons: Implications for Connectivity, Computation, and Information Coding, The Journal of Neuroscience, 18(10), 1998, p. 3870-3896. \n\n[6] M. V. Tsodyks and T. Sejnowski, Rapid state switching in balanced cortical network models, Network: Computation in Neural Systems, 6, 1995, p. 111-124. \n\n[7] H. E. Plesser and W. Gerstner, Noise in Integrate-and-Fire Neurons: From Stochastic Input to Escape Rates, Neural Computation, 12, 2000, p. 367-384. \n\n[8] H. C. Tuckwell, Introduction to theoretical neurobiology: volume 2, nonlinear and stochastic theories, Cambridge University Press, 1998. \n\n[9] T. M. Cover and J. A. Thomas, Elements of Information Theory, Wiley Series in Telecommunications, 1991, 2nd edition. \n\n[10] A. R. Bulsara and A. Zador, Threshold detection of wide band signals: a noise-induced maximum in the mutual information, PRE, 54(3), 1996, R2185-2188. \n\n[11] O. Shriki, D. Hansel and H. Sompolinsky, Modeling neuronal networks in cortex by rate models using the current-frequency response properties of cortical cells, Soc. Neurosci. Abstr., 24, p. 143, 1998. \n\n[12] E. Salinas and T. J. Sejnowski, Impact of Correlated Synaptic Input on Output Firing Rate and Variability in Simple Neuronal Models, J. Neurosci., 20, 2000, p. 6193-6209. \n", "award": [], "sourceid": 2112, "authors": [{"given_name": "Gregor", "family_name": "Wenning", "institution": null}, {"given_name": "Klaus", "family_name": "Obermayer", "institution": null}]}