{"title": "An online Hebbian learning rule that performs Independent Component Analysis", "book": "Advances in Neural Information Processing Systems", "page_first": 321, "page_last": 328, "abstract": "Independent component analysis (ICA) is a powerful method to decouple signals. Most of the algorithms performing ICA do not consider the temporal correlations of the signal, but only higher moments of its amplitude distribution. Moreover, they require some preprocessing of the data (whitening) so as to remove second order correlations. In this paper, we are interested in understanding the neural mechanism responsible for solving ICA. We present an online learning rule that exploits delayed correlations in the input. This rule performs ICA by detecting joint variations in the firing rates of pre- and postsynaptic neurons, similar to a local rate-based Hebbian learning rule.", "full_text": "An online Hebbian learning rule that performs\n\nIndependent Component Analysis\n\nClaudia Clopath\n\nSchool of Computer Science and Brain Mind Institute\n\nEcole polytechnique federale de Lausanne\n\n1015 Lausanne EPFL\n\nclaudia.clopath@epfl.ch\n\nAndre Longtin\n\nCenter for Neural Dynamics\n\nUniversity of Ottawa\n\n150 Louis Pasteur, Ottawa\nalongtin@uottawa.ca\n\nWulfram Gerstner\n\nSchool of Computer Science and Brain Mind Institute\n\nEcole polytechnique federale de Lausanne\n\n1015 Lausanne EPFL\n\nwulfram.gerstner@epfl.ch\n\nAbstract\n\nIndependent component analysis (ICA) is a powerful method to decouple signals.\nMost of the algorithms performing ICA do not consider the temporal correlations\nof the signal, but only higher moments of its amplitude distribution. Moreover,\nthey require some preprocessing of the data (whitening) so as to remove second\norder correlations. In this paper, we are interested in understanding the neural\nmechanism responsible for solving ICA. We present an online learning rule that\nexploits delayed correlations in the input. 
This rule performs ICA by detecting joint variations in the firing rates of pre- and postsynaptic neurons, similar to a local rate-based Hebbian learning rule.\n\n1 Introduction\n\nThe so-called cocktail party problem refers to a situation where several sound sources are simultaneously active, e.g. persons talking at the same time. The goal is to recover the initial sound sources from the measurement of the mixed signals. A standard method of solving the cocktail party problem is independent component analysis (ICA), which can be performed by a class of powerful algorithms. However, classical algorithms based on higher moments of the signal distribution [1] do not consider temporal correlations, i.e. data points corresponding to different time slices could be shuffled without a change in the results. But time order is important, since most natural signal sources have intrinsic temporal correlations that could potentially be exploited. Therefore, some algorithms have been developed to take these temporal correlations into account, e.g. algorithms based on delayed correlations [2, 3, 4, 5], potentially combined with higher-order statistics [6], based on innovation processes [7], or complexity pursuit [8]. However, those methods are rather algorithmic and most of them are difficult to interpret biologically, e.g. they are not online or not local, or they require preprocessing of the data.\nBiological learning algorithms are usually implemented as an online Hebbian learning rule that triggers changes of synaptic efficacy based on the correlations between pre- and postsynaptic neurons. A Hebbian learning rule, like Oja\u2019s learning rule [9], combined with a linear neuron model, has been shown to perform principal component analysis (PCA). 
Simply using a nonlinear neuron combined with Oja\u2019s learning rule allows one to compute higher moments of the distributions, which yields ICA if the signals have been preprocessed (whitened) at an earlier stage [1]. In this paper, we are\n\nFigure 1: The sources s are mixed with a matrix C, x = Cs; x are the presynaptic signals. Using a linear neuron y = W x, we want to find the matrix W which allows the postsynaptic signals y to recover the sources, y = P s, where P is a permutation matrix with different multiplicative constants.\n\ninterested in exploiting the correlations of the signals at different time delays, i.e. a generalization of the theory of Molgedey and Schuster [4]. We will show that a linear neuron model combined with a Hebbian learning rule based on the joint firing rates of the pre- and postsynaptic neurons at different time delays performs ICA by exploiting the temporal correlations of the presynaptic inputs.\n\n2 Mathematical derivation of the learning rule\n\n2.1 The problem\n\nWe assume statistically independent autocorrelated source signals si with mean < si > = 0 (< > denotes averaging over time) and correlations < si(t)sj(t') > = Ki(|t \u2212 t'|)\u03b4ij. The sources s are mixed by a matrix C,\n\nx = Cs, (1)\n\nwhere x are the mixed signals recorded by a finite number of receptors (bold notation refers to a vector). We think of the receptors as presynaptic neurons that are connected via a weight matrix W to postsynaptic neurons. We consider linear neurons [9], so that the postsynaptic signals y can be written\n\ny = W x. (2)\n\nThe aim is to find a learning rule that adjusts the weight matrix W to the appropriate value W \u2217 (\u2217 denotes the value at the solution) so that the postsynaptic signals y recover the independent sources s (Fig 1), i.e. y = P s, where P is a permutation matrix with different multiplicative constants (the sources are recovered in a different order and each up to a multiplicative constant), which means that, neglecting P ,\n\nW \u2217 = C\u22121. (3)\n\nTo solve this problem we extend the theory of Molgedey and Schuster [4] in order to derive an online biological Hebbian rule.\n\n2.2 Theory of Molgedey and Schuster and generalization\n\nThe approach of Molgedey and Schuster [4] considers not only the instantaneous correlations but also the time-delayed correlations Mij = < xi(t)xj(t + \u03c4) > of the incoming signals. Since the correlation matrix Mij is symmetric, it has at most n(n + 1)/2 independent elements. However, the unknown mixing matrix C has potentially n2 elements (for n sources and n detectors). Therefore, we need to evaluate two delayed correlation matrices M and \u00afM with two different time delays, defined as\n\nMij = < xi(t)xj(t + \u03c42) >, \u00afMij = < xi(t)xj(t + \u03c41) >, (4)\n\nto get enough information about the mixing process [10]. From equation (1), we obtain the relation Mij = \u03a3l CilCjl\u039bll and similarly \u00afMij = \u03a3l CilCjl\u00af\u039bll, where \u039bij = \u03b4ijKi(\u03c42) and \u00af\u039bij = \u03b4ijKi(\u03c41) are diagonal matrices. Since M = C\u039bC^T and \u00afM = C\u00af\u039bC^T, we have\n\n(M \u00afM\u22121)C = C(\u039b\u00af\u039b\u22121). (5)\n\nIt follows that C can be found from an eigenvalue problem. Since C is the mixing matrix, a simple matrix inversion then allows Molgedey and Schuster to recover the original sources [4].\n\n2.3 Our learning rule\n\nIn order to understand the putative neural mechanism performing ICA derived from the formalism developed above, we need to find an online learning rule describing changes of the synapses as a function of pre- and postsynaptic activity. 
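As a sanity check of the construction above, equation (5) can be verified numerically before moving to an online rule. The following minimal NumPy sketch is ours, not from the paper; the AR(1) sources, delays, and mixing matrix are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
T, tau1, tau2 = 100_000, 5, 0

def ar1(a, n):
    # AR(1) source, a discrete-time analogue of an Ornstein-Uhlenbeck process
    s = np.zeros(n)
    for t in range(1, n):
        s[t] = a * s[t - 1] + rng.standard_normal()
    return (s - s.mean()) / s.std()

# two independent sources with different time constants, mixed by C
s = np.vstack([ar1(0.9, T), ar1(0.5, T)])
C = np.array([[1.0, 0.6], [0.4, 1.0]])
x = C @ s

def delayed_corr(x, tau):
    # M_ij = <x_i(t) x_j(t + tau)>, symmetrized for numerical robustness
    n = x.shape[1] - tau
    A = x[:, :n] @ x[:, tau:tau + n].T / n
    return 0.5 * (A + A.T)

M, Mbar = delayed_corr(x, tau2), delayed_corr(x, tau1)

# equation (5): (M Mbar^-1) C = C (Lambda Lambdabar^-1),
# so the columns of C are eigenvectors of M Mbar^-1
_, C_est = np.linalg.eig(M @ np.linalg.inv(Mbar))
C_est = C_est.real
y = np.linalg.inv(C_est) @ x   # sources, up to permutation and scaling

# each true source is strongly correlated with exactly one output
corr = np.abs(np.corrcoef(np.vstack([s, y]))[:2, 2:])
print(np.round(corr, 2))
```

The eigenvectors of M M̄−1 estimated from the data give C up to permutation and scaling; the rows of its inverse play the role of the weight vectors w∗ in what follows.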
Taking the inverse of (5), we have C\u22121 \u00afM M\u22121 = \u00af\u039b\u039b\u22121C\u22121. Therefore, for weights that solve the ICA problem we expect, because of (3), that\n\nW \u2217 \u00afM = \u00af\u039b\u039b\u22121W \u2217M, (6)\n\nwhich defines the weight matrix W \u2217 at the solution.\nFor the sake of simplicity, consider only one linear postsynaptic neuron. The generalization to many postsynaptic neurons is straightforward (see section 4). The output signal y of the neuron can be written as y = w\u2217Tx, where w\u2217T is a row of the matrix W \u2217. Then equation (6) can be written as\n\nw\u2217T \u00afM = \u03bbw\u2217TM, (7)\n\nwhere \u03bb is one element of the diagonal matrix \u00af\u039b\u039b\u22121.\nIn order to solve this equation, we can use the following iterative update rule with learning parameter \u03b3:\n\n\u02d9w = \u03b3[wT \u00afM \u2212 \u03bbwTM]. (8)\n\nThe fixed point of this update rule is given by (7), i.e. w = w\u2217. Furthermore, multiplication of (7) with w yields \u03bb = (wT \u00afM w)/(wTM w).\nIf we insert the definition (4) of M and \u00afM and use y = wTx from (2), we obtain the following rule\n\n\u02d9w = \u03b3[< y(t)x(t + \u03c41) > \u2212 \u03bb < y(t)x(t + \u03c42) >], (9)\n\nwith the parameter \u03bb given by\n\n\u03bb = < y(t)y(t + \u03c41) > / < y(t)y(t + \u03c42) >.\n\nSince wT\u02d9w = \u03b3[wT \u00afM w \u2212 \u03bbwTM w] = 0 by the definition of \u03bb, \u02d9w is orthogonal to w. This implies that to first order (in |\u02d9w|/|w|), w will keep the same norm during iterations of (9).\nThe rule (9) we derived is a batch rule, i.e. it averages over all sample signals. 
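Iterating this batch rule on delayed correlation matrices estimated from the mixed signals can be sketched as follows. This is our own minimal illustration, not the paper's code; sources, delays, and rates are arbitrary choices, and w is explicitly renormalized after each step, which is consistent with ẇ being orthogonal to w:

```python
import numpy as np

rng = np.random.default_rng(1)
T, tau1, tau2, gamma = 100_000, 5, 0, 0.1

def ar1(a, n):
    # AR(1) source with unit variance
    s = np.zeros(n)
    for t in range(1, n):
        s[t] = a * s[t - 1] + rng.standard_normal()
    return (s - s.mean()) / s.std()

s = np.vstack([ar1(0.9, T), ar1(0.5, T)])   # source 1 has the larger
                                            # autocorrelation at lag tau1
x = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s  # mixed presynaptic signals

def delayed_corr(x, tau):
    # symmetrized estimate of <x_i(t) x_j(t + tau)>
    n = x.shape[1] - tau
    A = x[:, :n] @ x[:, tau:tau + n].T / n
    return 0.5 * (A + A.T)

M, Mbar = delayed_corr(x, tau2), delayed_corr(x, tau1)

w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(500):
    lam = (w @ Mbar @ w) / (w @ M @ w)          # lambda of the batch rule
    w = w + gamma * (w @ Mbar - lam * (w @ M))  # update, eq. (8)
    w /= np.linalg.norm(w)                      # keep the norm fixed

c = np.abs(np.corrcoef(w @ x, s[0])[0, 1])
print(round(c, 3))
```

With γ > 0 the weight vector converges to the row of C−1 that extracts the source with the largest autocorrelation at lag τ1, in line with the stability analysis of section 4.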
We convert this rule into an online learning rule by taking a small learning rate \u03b3 and using an online estimate of \u03bb:\n\n\u02d9w = \u03b3[y(t)x(t + \u03c41) \u2212 (\u03bb1/\u03bb2) y(t)x(t + \u03c42)]\n\u03c4\u03bb \u02d9\u03bb1 = \u2212\u03bb1 + y(t)y(t + \u03c41)\n\u03c4\u03bb \u02d9\u03bb2 = \u2212\u03bb2 + y(t)y(t + \u03c42). (10)\n\nNote that the rule defined in (10) uses information on the correlated activity xy of pre- and postsynaptic neurons as well as an estimate of the autocorrelation < yy > of the postsynaptic neuron. \u03c4\u03bb is taken sufficiently long so as to average over a representative sample of the signals, and |\u03b3| \u226a 1 is a small learning rate. Stability properties of updates under rule (10) are discussed in section 4.\n\n3 Performance of the learning rule\n\nA simple example of a cocktail party problem is shown in Fig 2, where two signals, a sine and a ramp (saw-tooth signal), have been mixed. The learning rule converges to a correct set of synaptic\n\nFigure 2: A. Two periodic source signals, a sine (thick solid line) and a ramp (thin solid line), are mixed into the presynaptic signals (dotted lines). B. The autocorrelation functions of the two source signals are shown (the sine in thick solid line and the ramp in thin solid line). The sources are normalized so that \u039b(0) = 1 for both. C. The learning rule with \u03c41 = 3 and \u03c42 = 0 extracts the sinusoidal output signal (dashed), shown together with the two input signals. In agreement with the stability calculation for \u03b3 > 0, the output recovers the sine source because \u039bsin(3) > \u039bramp(3). D. The learning rule with \u03c41 = 10, \u03c42 = 0, converges to the other signal (dashed line), i.e. the ramp, because \u039bramp(10) > \u039bsin(10). 
Note that the signals have been rescaled, since the learning rule recovers the signals only up to a multiplicative factor.\n\nweights so that the postsynaptic signal correctly recovers one of the sources. Postsynaptic neurons with different combinations of \u03c41 and \u03c42 are able to recover different signals (see section 4 on stability). In the simulations, we find that the convergence is fast and the performance is very accurate and stable. Here we show only a two-source problem for the sake of visual clarity. However, the rule can easily recover several mixed sources that have different temporal characteristics.\nFig 3 shows an ICA problem with sources s(t) generated by an Ornstein-Uhlenbeck process of the form \u03c4si \u02d9si = \u2212si + \u03be, where \u03be is Gaussian white noise. The different sources are characterized by different time constants. The learning rule is able to decouple these colored noise signals with Gaussian amplitude distribution since they have different temporal correlations.\nFinally, Fig 4 shows an application with nine different sounds. We used 60 postsynaptic neurons with time delays \u03c41 chosen uniformly in the interval [1 ms, 30 ms] and \u03c42 = 0. Overall, 52 of the 60 neurons recovered exactly one source (A, B) and the remaining 8 recovered mixtures of 2 sources (E). Each postsynaptic neuron recovers one of the sources depending on the source\u2019s autocorrelation at times \u03c41 and \u03c42 (i.e. the source with the largest autocorrelation at time \u03c41, since \u03c42 = 0 for all neurons; see section 4 on stability). A histogram (C) shows how many postsynaptic neurons recover each source. 
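A single postsynaptic neuron of the kind used in these simulations can be sketched in discrete time with the online rule (10). This is our own minimal illustration, not the paper's code: AR(1) sources stand in for sounds, and γ, τλ, the delays, and the mixing matrix are arbitrary choices; the weight is renormalized each step, consistent with ẇ being orthogonal to w:

```python
import numpy as np

rng = np.random.default_rng(2)
T, tau1, tau2 = 150_000, 5, 0
gamma, tau_lam = 2e-3, 1000.0

def ar1(a, n):
    # AR(1) source with unit variance
    s = np.zeros(n)
    for t in range(1, n):
        s[t] = a * s[t - 1] + rng.standard_normal()
    return (s - s.mean()) / s.std()

s = np.vstack([ar1(0.95, T), ar1(0.3, T)])  # source 1: largest
                                            # autocorrelation at lag tau1
x = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s  # mixed presynaptic signals

w = rng.standard_normal(2)
w /= np.linalg.norm(w)
lam1 = lam2 = 1.0
for t in range(T - tau1):
    y = w @ x[:, t]
    # online estimates of the delayed output autocorrelations, eq. (10)
    lam1 += (-lam1 + y * (w @ x[:, t + tau1])) / tau_lam
    lam2 += (-lam2 + y * (w @ x[:, t + tau2])) / tau_lam
    # Hebbian update with delayed pre- and postsynaptic activity, eq. (10)
    w += gamma * (y * x[:, t + tau1] - (lam1 / lam2) * y * x[:, t + tau2])
    w /= np.linalg.norm(w)

c = np.abs(np.corrcoef(w @ x, s[0])[0, 1])
print(round(c, 3))
```

With γ > 0 and τ2 = 0, the neuron settles on the source with the largest autocorrelation at lag τ1, as predicted by the stability analysis.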
However, as will become clear from the stability analysis below, a few specific postsynaptic neurons tuned to time delays where the autocorrelation functions intersect (D, at time \u03c41 = 3 ms and \u03c42 = 0) cannot recover one of the sources precisely (E).\n\nFigure 3: A. The 3 source signals (solid lines, generated with the equation \u03c4si \u02d9si = \u2212si + \u03be with different time constants, where \u03be is Gaussian white noise) are plotted together with the output signal (dashed). The learning rule converges to one of the sources. B. Same as before, but only the one signal (solid) that was recovered is shown together with the neuronal output (dashed).\n\nFigure 4: Nine different sound sources from [11] were mixed with a random matrix. 60 postsynaptic neurons tuned to different \u03c41 and \u03c42 were used in order to recover the sources, i.e. \u03c41 varies from 1 ms to 30 ms in steps of 0.5 ms and \u03c42 = 0 for all neurons. A. One source signal (below) is recovered by one of the postsynaptic neurons (above; for clarity, the output is shifted upward). B. Zoom on one source (solid line) and one output (dashed line). C. Histogram of the number of postsynaptic neurons recovering each source. D. Autocorrelation of the different sources. There are several sources with the largest autocorrelation at time 3 ms. E. 
The postsynaptic neuron tuned to \u03c41 = 3 ms and \u03c42 = 0 (above) is not able to recover one of the sources properly, even though it still performs well except for the low-amplitude parts of the signal (below).\n\n4 Stability of the learning rule\n\nIn principle our online learning rule (10) could lead to several solutions corresponding to different fixed points of the dynamics. Fixed points will be denoted by w\u2217 = ek; they are by construction the row vectors of the decoupling matrix W \u2217 (see (5) and (7)). The rule (10) has two parameters, i.e. the delays \u03c41 and \u03c42 (\u03c4\u03bb is considered fixed). We assume that in our architecture these delays characterize different properties of the postsynaptic neuron. Neurons with different choices of \u03c41 and \u03c42 will potentially recover different signals from the same mixture. The stability analysis will show which fixed point is stable depending on the autocorrelation functions of the signals and the delays \u03c41 and \u03c42.\nWe analyze the stability assuming a small perturbation of the weights, i.e. w = ei + \u03b5ej, where the fixed points {ek}, the row vectors of C\u22121, form a basis. We obtain the expression (see Appendix for calculation details)\n\n\u02d9\u03b5 = \u03b3\u03b5 [\u039bjj(\u03c41)\u039bii(\u03c42) \u2212 \u039bii(\u03c41)\u039bjj(\u03c42)] / \u039bii(\u03c42), (11)\n\nwhere \u039b(\u03c4)ij = < si(t)sj(t + \u03c4) > is the diagonal correlation matrix.\nTo illustrate the stability equation (11), let us take \u03c42 = 0 and assume that \u039bii(0) = \u039bjj(0), i.e. all signals have the same zero-time-lag autocorrelation. In this case (11) reduces to \u02d9\u03b5 = \u03b3\u03b5[\u039bjj(\u03c41) \u2212 \u039bii(\u03c41)]. 
That is, for \u03b3 > 0 the solution ei is stable if \u039bjj(\u03c41) < \u039bii(\u03c41) for all other directions ej, i.e. the stable solution is the source with the largest autocorrelation at time \u03c41. If \u03b3 < 0, the solution ei is stable if \u039bjj(\u03c41) > \u039bii(\u03c41).\nThis stability relation is verified in the simulations. Fig 2 shows two signals with different autocorrelation functions. In this example, we chose \u03c42 = 0 and \u039b(0) = I, i.e. the signals are normalized. The learning rule recovers the signal with the largest autocorrelation at time \u03c41, \u039bkk(\u03c41), for a positive learning rate.\n\n5 Comparison between Spatial ICA and Temporal ICA\n\nOne of the algorithms most used to solve ICA is FastICA [1]. It is based on an approximation of negentropy and is purely spatial, i.e. it takes into account only the amplitude distribution of the signal, but not its temporal structure. Therefore we show an example (Fig. 5), where the signals, generated by Ornstein-Uhlenbeck processes, have the same spatial distribution but different time constants of the autocorrelation. With a spatial algorithm, data points corresponding to different time slices can be shuffled without any change in the results. Therefore, it cannot solve this example. We tested our example with FastICA downloaded from [11] and it failed to recover the original sources (Fig. 5). However, to our surprise, FastICA could in a very few trials solve this problem, even though the convergence was not stable. Indeed, since the FastICA algorithm is iterative, it takes the signals in the temporal order in which they arrive. Therefore temporal correlations can in some cases be taken into account, even though this is not part of the theory of FastICA.\n\n6 Discussion and conclusions\n\nWe presented a powerful online learning rule that performs ICA by computing joint variations in the firing rates of pre- and postsynaptic neurons at different time delays. 
This is very similar to a standard Hebbian rule, with the exception of an additional factor \u03bb which is an online estimate of the output correlations at different time delays. The different delay times \u03c41, \u03c42 are necessary to recover different sources. Therefore, properties varying between one postsynaptic neuron and the next could lead to different time delays used in the learning rule. We could assume that the time delays are intrinsic properties of each postsynaptic neuron due to, for example, the distance on the dendrites where the synapse is formed [12], i.e. due to different signal propagation times. The calculation of stability shows that a postsynaptic neuron will recover the signal with the largest autocorrelation at the considered delay time, or the smallest, depending on the sign of the learning rate. We assume that for biological signals the autocorrelation functions cross, so that it is possible with different postsynaptic neurons to recover all the signals.\n\nFigure 5: Two signals generated by an Ornstein-Uhlenbeck process are mixed. A. The signals have the same spatial distributions. B. The time constants of the autocorrelations are different. C. Our learning rule converges to an output (dashed line) recovering one of the source signals (solid line). D. FastICA (dashed line) does not succeed in recovering the sources (solid line).\n\nThe algorithm assumes centered signals. However, for a complete mapping of those signals to neural rates, we have to consider positive signals. Nevertheless, we can easily compute an online estimate of the mean firing rate and remove this mean from the original rates. In this way the algorithm still holds with neural rates as input.\n\nHyvaerinen proposed an ICA algorithm [8] based on complexity pursuit. It uses the non-Gaussianity of the residuals once the part of the signals that is predictable from the temporal correlations has been removed. 
The update step of this algorithm has some similarities with our learning rule, even though the approach is completely different, since we want to exploit temporal correlations directly rather than formally removing them by a \u201dpredictor\u201d. We also do not assume pre-whitened data and do not consider non-Gaussianity.\n\nOur learning rule considers smooth signals that are assumed to be rates. However, it is commonly accepted that synaptic plasticity takes into account the spike trains of pre- and postsynaptic neurons, down to the precise timing of the spikes, i.e. Spike Timing Dependent Plasticity (STDP) [13, 14, 15]. Therefore a spike-based description of our algorithm is currently under study.\n\nAppendix: Stability calculation\nBy construction, the row vectors {ek, k = 1,..,n} of W \u2217 = C\u22121, the inverse of the mixing matrix, are solutions of the batch learning rule (9) (n is the number of sources). Assume one of these row vectors eT i (i.e. a fixed point of the dynamics), and consider w = ei + \u03b5ej, a small perturbation in direction eT j. Note that {ek} is a basis because det(C) \u2260 0 (the matrix must be invertible). The rule (9) becomes:\n\n\u02d9\u03b5 ej = \u03b3[ < x(t + \u03c41)(ei + \u03b5ej)T x(t) > \u2212 ( < (ei + \u03b5ej)T x(t) (ei + \u03b5ej)T x(t + \u03c41) > / < (ei + \u03b5ej)T x(t) (ei + \u03b5ej)T x(t + \u03c42) > ) < x(t + \u03c42)(ei + \u03b5ej)T x(t) > ]. (12)\n\nWe can expand the terms on the right-hand side to first order in \u03b5. 
Multiplying the stability expression by eT j (here we can assume that eT j ej = 1, since the sources are recovered only up to a multiplicative constant), we find:\n\n\u02d9\u03b5 = \u03b3\u03b5 ([eT j C\u039b(\u03c41)C^T ej][eT i C\u039b(\u03c42)C^T ei] \u2212 [eT i C\u039b(\u03c41)C^T ei][eT j C\u039b(\u03c42)C^T ej]) / (eT i C\u039b(\u03c42)C^T ei) \u2212 \u03b3\u03b5 ([eT i C\u039b(\u03c41)C^T ej][eT j C\u039b(\u03c42)C^T ei]) / (eT i C\u039b(\u03c42)C^T ei), (13)\n\nwhere \u039b(\u03c4)ij = < si(t)sj(t + \u03c4) > is the diagonal matrix.\nThis expression can be simplified because eT i is a row of W \u2217 = C\u22121, so that eT i C is the unit vector of the form (0,0,...,1,0,...), where the position of the \u201d1\u201d indicates the solution number. Therefore, we have eT i C\u039b(\u03c4)C^T ek = \u039b(\u03c4)ik; in particular, the off-diagonal products in the second term of (13) vanish because \u039b(\u03c4) is diagonal.\nThe expression of stability becomes\n\n\u02d9\u03b5 = \u03b3\u03b5 [\u039bjj(\u03c41)\u039bii(\u03c42) \u2212 \u039bii(\u03c41)\u039bjj(\u03c42)] / \u039bii(\u03c42). (14)\n\nReferences\n[1] A. Hyvaerinen, J. Karhunen, and E. Oja. Independent Component Analysis. Wiley-Interscience, 2001.\n[2] L. Tong, R. Liu, V.C. Soon, and Y.F. Huang. Indeterminacy and identifiability of blind identification. IEEE Trans. on Circuits and Systems, 1991.\n[3] A. Belouchrani, K.A. Meraim, J.F. Cardoso, and E. Moulines. A blind source separation technique based on second order statistics. IEEE Trans. on Sig. Proc., 1997.\n[4] L. Molgedey and H.G. Schuster. Separation of a mixture of independent signals using time delayed correlations. Phys. Rev. Lett., 72:3634\u201337, 1994.\n[5] A. Ziehe and K.-R. Mueller. TDSEP \u2013 an efficient algorithm for blind separation using time structure. Proc. ICANN, 1998.\n[6] K.-R. Mueller, P. Philips, and A. Ziehe. JadeTD: Combining higher-order statistics and temporal information for blind source separation (with noise). Proc. Int. Workshop on ICA, 1999.\n[7] A. Hyvaerinen. 
Independent component analysis for time-dependent stochastic processes. Proc. Int. Conf. on Art. Neur. Net., 1998.\n[8] A. Hyvaerinen. Complexity pursuit: Separating interesting components from time-series. Neural Computation, 13:883\u2013898, 2001.\n[9] E. Oja. A simplified neuron model as principal component analyzer. J. Math. Biol., 15:267\u2013273, 1982.\n[10] J.J. Hopfield. Olfactory computation and object perception. PNAS, 88:6462\u20136466, 1991.\n[11] H. Gavert, J. Sarela, J. Hurri, and A. Hyvarinen. FastICA and cocktail party demo. http://www.cis.hut.fi/projects/ica/.\n[12] R.C. Froemke, M. Poo, and Y. Dan. Spike-timing dependent synaptic plasticity depends on dendritic location. Nature, 434:221\u2013225, 2005.\n[13] G. Bi and M. Poo. Synaptic modification by correlated activity: Hebb\u2019s postulate revisited. Annual Review of Neuroscience, 2001.\n[14] H. Markram, J. L\u00fcbke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275:213\u2013215, 1997.\n[15] W. Gerstner, R. Kempter, J.L. van Hemmen, and H. Wagner. A neuronal learning rule for sub-millisecond temporal coding. Nature, 383:76\u201378, 1996.\n", "award": [], "sourceid": 170, "authors": [{"given_name": "Claudia", "family_name": "Clopath", "institution": null}, {"given_name": "Andr\u00e9", "family_name": "Longtin", "institution": null}, {"given_name": "Wulfram", "family_name": "Gerstner", "institution": null}]}