{"title": "Citcuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning", "book": "Advances in Neural Information Processing Systems", "page_first": 1091, "page_last": 1098, "abstract": null, "full_text": "Circuits  for  VLSI  Implementation of \n\nTemporally-Asymmetric Hebbian \n\nLearning \n\nAdria Bofill \n\nAlan  F.  Murray \n\nDanlOn  P.  Thompson \n\nDept.  of Electrical  Engineering \nThe  University of Edinburgh \n\nEdinburgh ,  EH93JL , UK \nadria. bofill@ee.ed.ac. uk \nalan. murray@ee.ed.ac.uk \n\ndamon. thompson @ee.ed.ac. uk \n\nAbstract \n\nExperimental data has  shown  that synaptic  strength  modification \nin some types of biological neurons depends upon precise spike tim(cid:173)\ning  differences  between  presynaptic  and  postsynaptic  spikes.  Sev(cid:173)\neral  temporally-asymmetric  Hebbian  learning  rules  motivated  by \nthis  data have  been  proposed.  We  argue  that  such  learning  rules \nare  suitable  to  analog  VLSI  implementation.  We  describe  an  eas(cid:173)\nily  tunable circuit  to modify the weight of a  silicon spiking neuron \naccording to those  learning rules.  Test results from the fabrication \nof the circuit using  a  O.6J.lm  CMOS  process  are  given. \n\n1 \n\nIntroduction \n\nHebbian learning rules modify weights of synapses according to correlations between \nactivity  at  the  input  and  the  output  of neurons.  Most  artificial  neural  networks \nusing  Hebbian  learning  are  based  on  pulse-rate  correlations  between  continuous(cid:173)\nvalued  signals;  they  reduce  the  neural  spike  trains  to  mean  firing  rates  and  thus \nprecise  timing does  not  carry  information.  With this  approach  the  spiking  nature \nof biological  neurons  is  just  an  efficient  solution  that  evolution  has  produced  to \ntransmit analog information over  an  unreliable medium. \n\nIn  recent  years,  recorded  data have  indicated  that synaptic  strength  modifications \nare also induced  by timing differences between  pairs of presynaptic and postsynaptic \nspikes  [1][2].  A  class  of learning rules  derived  from these  experimental dat a  is  illus(cid:173)\ntrated  in  Figure  1  [2]-[4].  The  \"causal/non-causal\"  basis  of these  Hebbian learning \nalgorithms is  present  in  all  variants of this spike-timing dependent  weight  modific(cid:173)\nation  rule.  When  the  presynaptic  spike  arrives  at  the  synapse  a  few  milliseconds \n\n\fpresynaptic spike \n\npresynaptic spike \n\ntpre' \n\npostsynaptic spike \n\npostsynaptic spike \n\n!post \n\n!'.w \n\n,tpre \n\ntpost ' \n\ntpre - tpost \n\ntpre - tpost \n\n(a) \n\n(b) \n\nFigure 1:  Two temporally-asymmetric Hebbian learning rules drawing on \nexperimental  data.  The  curves  show  the  shape  of the  weight  change  (~W) for \ndifferences  between  the firing  times of the  presynaptic  (tpr e) and  the  postsynaptic \n(tpost)  neurons.  When  the  presynaptic  spike  arrives  at  the  synapse  a  few  ms  be(cid:173)\nfore  the  postsynaptic  neuron  fires ,  the  weight  of the  synapse  is  increased.  If the \npostsynaptic  neuron fires  first,  the  weight is  decreased. \n\nbefore  an  output  spike  is  generated,  the  synaptic  efficiency  increases.  In  contrast, \nwhen  the  postsynaptic  neuron  fires  first ,  the  efficiency  of the synapse  is  weakened. \nHence,  only  those  synapses  that  receive  spikes  that  appear  to  contribute  to  the \ngeneration  of the  postsynaptic  spike  are  reinforced.  
In [5], a similar learning rule based on spike-timing differences has been used to learn input-sequence prediction in a recurrent network. Studies reported in [4] indicate that the positive (potentiation) part of the learning curve must be smaller than the negative (depression) part to obtain stable, competitive weight modification.

Pulse signal representation has been used extensively in hardware implementations of artificial neural networks [6][7]. Such systems use pulses as a mere technological solution, benefiting from the robustness of binary signal transmission while using analog circuitry for the elementary computation units. However, they do not exploit the relative timing differences between individual pulses to compute. Moreover, analog hardware is not well suited to the complexity of most artificial neural network algorithms. The learning rules presented in Figure 1 are suitable for analog VLSI because: (a) the signals involved in the weight modification are local to the neuron, (b) no temporal averaging of the presynaptic or postsynaptic activity is needed, and (c) they are remarkably simple compared to complex neural algorithms that impose mathematical constraints in terms of accuracy and precision. An analog VLSI implementation of a similar, but more complex, spike-timing dependent learning rule can be found in [8].

We describe a circuit that implements the spike-timing dependent weight change described above, along with test results from a fabricated chip. We have focused on the implementation of the weight modification circuits, as VLSI spiking neurons with tunable membrane time constant and refractory period have already been proposed in [9] and [10].

2 Learning circuit description

Figure 2 shows the weight change circuit and Figure 3 the form of the signals required to drive learning. These driving signals are generated by the circuits described in Figure 4. The voltage across the weight capacitor, Cw in Figure 2, is modified according to the spike-timing dependent weight change rule discussed above. The weight change, ΔW, is defined as -ΔVw, so that the leakage of the capacitor leads Vw in the direction of weight decay. The circuits presented allow the control of: (a) the abruptness of the transition between potentiation and depression at the origin, (b) the difference between the areas under the curve in the potentiation and depression regions, (c) the absolute value of the area under each side of the curve, and (d) the time constant of the curve decay; a behavioral sketch of these four controls is given after Figure 3.

[Figure 2: transistor-level schematic of the weight change circuit]
Figure 2: Weight change circuit.

[Figure 3: timing diagrams of the driving signals (activation, up, down) relative to the presynaptic and postsynaptic spikes, panels (a) and (b)]
Figure 3: Stimulus for the weight change circuit.
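Before the transistor-level description, a behavioral sketch may help fix how these four controls reshape the learning window. Everything here is an illustrative assumption, including the linear transition between the peaks; the measured windows appear in Section 3.

    import math

    def window_mv(dt_ms, t_sp=1.0, tau=5.0, peak_pot=60.0, peak_dep=65.0):
        # Behavioral learning window (in mV of weight-voltage change).
        # t_sp               : spike duration -> abruptness at the origin (a)
        # peak_pot, peak_dep : independent peak heights -> area imbalance (b)
        #                      and the absolute area of each side (c)
        # tau                : decay time constant of both tails (d)
        if dt_ms <= -t_sp:                      # potentiation tail
            return peak_pot * math.exp((dt_ms + t_sp) / tau)
        if dt_ms >= t_sp:                       # depression tail
            return -peak_dep * math.exp(-(dt_ms - t_sp) / tau)
        # Transition zone of width 2*t_sp between the two peaks;
        # the linear shape here is an assumption.
        scale = peak_pot if dt_ms < 0 else peak_dep
        return -scale * dt_ms / t_sp

The peak values and time constant above are only loosely inspired by the measured plots of Section 3.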
The weight change circuit of Figure 2 works as follows. When a falling edge of either a postsynaptic or a presynaptic spike occurs, a short activation pulse is generated which causes Cdec to be charged to Vpeak through transistor N1. The charge accumulated on Cdec then leaks to ground at a rate set by Vdecay. The resulting voltage at the gate of N3 produces a current flowing through P2-P3-N4. If a presynaptic spike is active after the falling edge of a postsynaptic spike, an active-low up pulse is applied to the gate of transistor P5. Thus, the current flowing through N3 is mirrored to transistor P4, causing an increase in the voltage across Cw that corresponds to a decrease in the weight. In contrast, when a presynaptic spike precedes a postsynaptic spike, an active-high down pulse is generated and the current in N3 is mirrored to N5-N6, resulting in a discharge of Cw.

As the current in N2 is constant, the current integrated by Cw displays an exponential decay, provided Vpeak is such that N3 operates in sub-threshold mode. Hence, the rate of decay of the learning curve is fixed by the ratio of the constant current in N2 to Cdec. The abruptness of the transition zone between potentiation and depression is set by the duration of the presynaptic and postsynaptic spikes. Finally, an imbalance between the areas under the positive and negative sides of the curve can be introduced via Vdep and Vpot. The effect of all these circuit parameters is exemplified by the test results shown in the following section.

[Figure 4 schematics: (a) act pulse generator driven by post_spike; (b) up/down controller]
Figure 4: Learning drivers. (a) Delayed act pulse generator. (b) Asynchronous controller for the up and down signals.

The circuit of Figure 4(a), present in both the presynaptic and postsynaptic neurons, generates a short act pulse on the falling edge of the output spike. The act pulses are ORed at each synapse to produce the activation pulse applied to the weight change circuit of Figure 2.

The other two driving signals, up and down, are produced by the small asynchronous controller of Figure 4(b), built from standard and asymmetric C-elements [11]. The internal signal q indicates whether the last falling edge to occur belonged to a presynaptic (q = 1) or a postsynaptic (q = 0) spike. This ensures that an up signal, which decreases the weight, is only generated when a presynaptic spike is active after the falling edge of a postsynaptic spike. Similarly, down is activated only when the postsynaptic spike is active following a presynaptic spike falling edge.

Using the current flowing through N3 (Figure 2) both to increase and to decrease the weight allows us to match the curve in the potentiation and depression regions, at the expense of having to introduce the driving circuits of Figure 4.
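The intended behaviour of this controller can be captured in a few lines of event-driven logic. This sketch follows only the textual description above; the real C-element implementation, its hazard behaviour and the initial state of q are not modelled.

    class UpDownController:
        # Behavioral model of Figure 4(b). q remembers which spike
        # produced the most recent falling edge; its initial value is
        # arbitrary in this sketch.
        def __init__(self):
            self.q = 0            # 1: presynaptic, 0: postsynaptic
            self.prev_pre = 0
            self.prev_post = 0

        def step(self, pre, post):
            # pre/post are the sampled spike levels (0 or 1).
            if self.prev_pre and not pre:     # falling edge of pre spike
                self.q = 1
            if self.prev_post and not post:   # falling edge of post spike
                self.q = 0
            self.prev_pre, self.prev_post = pre, post
            up = pre and self.q == 0     # pre active after a post falling
                                         # edge: depression (weight down)
            down = post and self.q == 1  # post active after a pre falling
                                         # edge: potentiation (weight up)
            return up, down

Feeding this model a presynaptic spike whose falling edge precedes a postsynaptic spike produces a down pulse while the postsynaptic spike is high, i.e. potentiation, as required.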
3 Results from the temporally-asymmetric Hebbian chip

The circuit in Figure 2 has been fabricated in a 0.6 µm standard CMOS process. The driving signals (down, up and activation) are currently generated off-chip. The circuit can be operated on the µs timescale; however, here we only present test results with time constants similar to those suggested by experimental data and studied using software models in [3]-[5].

[Figure 5 plots: Vw (V) vs. time (s); conditions: Vdecay = 515 mV, Vpeak = 694 mV, Tsp = 1 ms, Tact = 50 µs, Vpot = 0 V, Vdd - Vdep = 0 V; curves for tpre - tpost delays of 2 ms, 5 ms and 7.5 ms]
Figure 5: Test results. Linearity. (a) The voltage across Cw is initially set to 0 V and increased by a sequence of consecutive pairs of presynaptic and postsynaptic spikes. The delays between presynaptic and postsynaptic firing times were set to 2 ms, 5 ms and 7.5 ms. (b) The order of presynaptic and postsynaptic spikes is reversed to decrease Vw. In both plots the duration of the spikes, Tsp, and of the activation pulse, Tact, is set to 1 ms and 50 µs respectively.

The learning window plots shown in Figures 6-8 were constructed with test data from a sequence of consecutive presynaptic and postsynaptic spikes with different delays. Before every new pair of presynaptic and postsynaptic spikes, the voltage on Cw was reset to Vw = 2 V. The weight change curves are similar for other initial \"reset\" weight voltages owing to the linearity of the learning circuit for different Vw values, as shown in Figure 5. A power supply voltage of Vdd = 5 V is used in all test results shown.

[Figure 6 plots: ΔVw (mV) vs. tpre - tpost (ms); (a) Vpeak = 716, 711 and 701 mV with Vdecay = 516 mV; (b) Vdecay = 482, 499 and 517 mV with Vpeak = 699, 701 and 702 mV; in both panels Tsp = 1 ms, Tact = 50 µs, Vpot = 0 V, Vdd - Vdep = 0 V]
Figure 6: Test results. (a) Maximum weight change. (b) Learning window decay. The decay of both tails of the learning window is set by Vdecay. A wide range of time constants can be set. Note, however, that Vpeak needs to be increased slightly for faster decay rates to maintain exactly the same peak value.

The maximum weight change is easily tuned with Vpeak, as shown in Figure 6(a). Changing the value of Vpeak modifies by the same amount the absolute value of the peaks on both sides of the curve. The decay of the learning window is controlled by Vdecay. An increase in Vdecay causes both tails of the learning window to decay faster, as seen in Figure 6(b). As mentioned above, matching between the two sides of the learning window is possible because the same source of current is used both to increase and to decrease the weight.
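Why the tail is exponential, and why Vdecay and Vpeak act as they do, follows from the circuit description in Section 2: N2 removes charge from Cdec at a constant rate, so the gate voltage of N3 ramps down linearly, and a subthreshold MOSFET converts a linear gate ramp into an exponentially decaying current. A back-of-envelope sketch, assuming the textbook subthreshold model I = I0*exp(Vg/(n*VT)) with invented device values:

    import math

    I0, N_SLOPE, VT = 1e-15, 1.5, 0.026   # assumed device parameters
    C_DEC, I_N2 = 1e-12, 1e-9             # assumed capacitor and N2 current

    def n3_current(t, v_peak=0.7):
        # Gate of N3 ramps down linearly from Vpeak after the
        # activation pulse, clamped at ground.
        v_gate = max(v_peak - (I_N2 / C_DEC) * t, 0.0)
        return I0 * math.exp(v_gate / (N_SLOPE * VT))

    # Exponential tail with tau = n*VT*C_dec/I_N2: raising Vdecay
    # (hence I_N2) shortens tau, while raising Vpeak scales the peak
    # current, consistent with the Figure 6 measurements.
    tau = N_SLOPE * VT * C_DEC / I_N2
    print(tau)   # about 3.9e-5 s for these assumed values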
[Figure 7 plots: ΔVw (mV) vs. tpre - tpost (ms); Vdecay = 500 mV, Vpeak = 705 mV, Tsp = 1 ms, Tact = 50 µs; one panel varies Vpot (0 mV, 47 mV, ...) with Vdd - Vdep = 0 V, the other varies Vdd - Vdep (0, 3.8 and 9.3 mV) with Vpot = 0 V]
Figure 7: Test results. Imbalance between potentiation and depression.

The imbalance between the areas under the potentiation and depression regions of the learning window is a critical parameter of this class of learning rules [3][4]. The circuit proposed can adjust the peak of the curve for potentiation and depression independently (Figure 7). Vpot can be used to reduce the area under the potentiation region while keeping the depression part of the curve unchanged, thus setting the overall area under the curve to a negative value (Figure 7(b)). Similarly, with Vdd - Vdep the area of the depression region can also be reduced (Figure 7(a)).

[Figure 8 plots: ΔVw (mV) vs. tpre - tpost (ms); Tsp = 100 µs with Vpeak = 790 mV and Vdecay = 499 mV, and Tsp = 1 ms with Vpeak = 699 mV and Vdecay = 482 mV; Tact = 50 µs, Vpot = 0 V, Vdd - Vdep = 0 V]
Figure 8: Test results. Abruptness at the origin.

The abruptness of the learning window at the origin (short delays between presynaptic and postsynaptic spikes) is set by the duration of the spikes. Data in Figure 8 show that the two peaks of the learning window are separated by twice the duration of the spikes (2 Tsp).
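Since Vpot and Vdd - Vdep trim one tail each, they directly steer the sign of the net area under the window. For the exponential tails of the weight_change() sketch in Section 1 (ignoring the finite transition zone of width 2 Tsp), the net area has a closed form; the numbers remain illustrative:

    def net_area(a_pot=0.005, a_dep=0.00525, tau_pot=20.0, tau_dep=20.0):
        # Integral of the potentiation tail minus that of the
        # depression tail: a_pot*tau_pot - a_dep*tau_dep.
        return a_pot * tau_pot - a_dep * tau_dep

    print(net_area())   # -0.005: depression dominates, the stable
                        # competitive regime identified in [3][4]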
4 Discussion and future work

Drawing on experimental data, several temporally-asymmetric Hebbian learning rules have been proposed recently. These learning rules only strengthen the weights when there is a causal relation between presynaptic and postsynaptic activity. Purely random time coincidences between spikes will tend to decrease the weights. Synaptic weight normalization is thus achieved via competition to drive postsynaptic spikes [4]. Predictive sequence learning has been achieved using a similar time-difference learning rule based on the same data [5]. Other pulse-based learning rules have also been used to study how delay tuning could be achieved in the sound source localization system of the barn owl [12].

A simple circuit to implement a general weight change block based on such learning rules has been designed and partially fabricated. The main characteristics of the learning rule, namely the abruptness at the origin, the rate of decay of the learning window, the imbalance between the potentiation and depression regions and the rate of learning, can be tuned easily. The design also ensures that the circuit can operate at different timescales. As shown, the fabricated circuits have good linearity over a wide range of weight voltage values.

We are currently developing a second chip with a small network of temporally-asymmetric Hebbian spiking neurons using the circuit described in this paper. The structure of the network will be reconfigurable. The small network will be used to carry out movement planning experiments by learning temporal sequences. We envisage the application of networks of temporally-asymmetric Hebbian learning silicon neurons as higher-level processing stages for the integration of sensor and motor activities in neuromorphic systems. We will concentrate on auditory applications and adaptive, spike-based motion estimation. In both types of application, naturally-occurring correlations in data can be exploited to drive the pulse timing-based learning process.

Acknowledgements

We thank Robin Woodburn, Patrice Fleury and Martin Reekie for fruitful discussions during the design and tape-out of the chip. We also acknowledge that the circuits presented incorporate some of the insights into neuromorphic engineering that one of the authors gained at the Telluride Workshop on Neuromorphic Engineering 2000 (http://www.ini.unizh.ch/telluride2000/).

References

[1] Markram, H., Lubke, J., Frotscher, M. & Sakmann, B. (1997) Regulation of Synaptic Efficacy by Coincidence of Postsynaptic APs and EPSPs. Science 275, 213-215.

[2] Zhang, L.I., Tao, H.W., Holt, C.E., Harris, W.A. & Poo, M-m. (1998) A critical window for cooperation and competition among developing retinotectal synapses. Nature 395, 37-44.

[3] Abbott, L.F. & Song, S. (1999) Temporally Asymmetric Hebbian Learning, Spike Timing and Neuronal Response Variability. In Kearns, M.S., Solla, S.A. & Cohn, D.A. (eds.), Advances in Neural Information Processing Systems 11, 69-75. Cambridge, MA: MIT Press.

[4] Song, S., Miller, K.D. & Abbott, L.F. (2000) Competitive Hebbian Learning Through Spike-Timing Dependent Synaptic Plasticity. Nature Neuroscience 3, 919-926.

[5] Rao, R.P.N. & Sejnowski, T.J. (2000) Predictive Sequence Learning in Recurrent Neocortical Circuits. In Solla, S.A., Leen, T.K. & Muller, K-R. (eds.), Advances in Neural Information Processing Systems 12, 164-170. Cambridge, MA: MIT Press.

[6] Murray, A.F. & Smith, A.V.W. (1987) Asynchronous Arithmetic for VLSI Neural Systems. Electronics Letters 23, 642-643.

[7] Murray, A.F. & Tarassenko, L. (1994) Neural Computing: An Analogue VLSI Approach. Chapman & Hall.

[8] Hafliger, P., Mahowald, M. & Watts, L. (1996) A Spike Based Learning Neuron in Analog VLSI. In Mozer, M.C., Jordan, M.I. & Petsche, T. (eds.), Advances in Neural Information Processing Systems 9, 692-698. Cambridge, MA: MIT Press.
[9] Indiveri, G. (2000) Modeling Selective Attention Using a Neuromorphic Analog VLSI Device. Neural Computation 12, 2857-2880.

[10] van Schaik, A., Fragniere, E. & Vittoz, E. (1996) An Analogue Electronic Model of Ventral Cochlear Nucleus Neurons. In Proceedings of the 5th International Conference on Microelectronics for Neural, Fuzzy and Bio-inspired Systems; Microneuro '96, 52-59. Los Alamitos, CA: IEEE Computer Society Press.

[11] Shams, M., Ebergen, J.C. & Elmasry, M.I. (1998) Modeling and Comparing CMOS Implementations of the C-element. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 6, No. 4, 563-567.

[12] Gerstner, W., Kempter, R., van Hemmen, J.L. & Wagner, H. (1999) Hebbian Learning of Pulse Timing in the Barn Owl Auditory System. In Maass, W. & Bishop, C.M. (eds.), Pulsed Neural Networks. Cambridge, MA: MIT Press.
", "award": [], "sourceid": 2124, "authors": [{"given_name": "A.", "family_name": "Bofill", "institution": null}, {"given_name": "D.", "family_name": "Thompson", "institution": null}, {"given_name": "Alan", "family_name": "Murray", "institution": null}]}