{"title": "Computational Differences between Asymmetrical and Symmetrical Networks", "book": "Advances in Neural Information Processing Systems", "page_first": 274, "page_last": 280, "abstract": null, "full_text": "Computational Differences between Asymmetrical and Symmetrical Networks \n\nZhaoping Li \n\nPeter Dayan \n\nGatsby Computational Neuroscience Unit \n17 Queen Square, London, England, WC1N 3AR. \nzhaoping@gatsby.ucl.ac.uk \n\ndayan@gatsby.ucl.ac.uk \n\nAbstract \n\nSymmetrically connected recurrent networks have recently been used as models of a host of neural computations. However, because of the separation between excitation and inhibition, biological neural networks are asymmetrical. We study characteristic differences between asymmetrical networks and their symmetrical counterparts, showing that they have dramatically different dynamical behavior, and also how the differences can be exploited for computational ends. We illustrate our results in the case of a network that is a selective amplifier. \n\n1 Introduction \n\nA large class of non-linear recurrent networks, including those studied by Grossberg,9 the Hopfield net,10,11 and many more recent proposals for the head direction system,27 orientation tuning in primary visual cortex,25,1,3,18 eye position,20 and spatial location in the hippocampus19 make a key simplifying assumption that the connections between the neurons are symmetric. Analysis is relatively straightforward in this case, since there is a Lyapunov (or energy) function4,11 that often guarantees the convergence of the motion trajectory to an equilibrium point. However, the assumption of symmetry is broadly false. Networks in the brain are almost never symmetrical, if for no other reason than the separation between excitation and inhibition. 
In fact, the question of whether ignoring the polarity of the cells is a simplification or an over-simplification has yet to be fully answered. \nNetworks with excitatory and inhibitory cells (EI systems, for short) have long been studied,6 for instance from the perspective of pattern generation in invertebrates,23 and oscillations in the thalamus7,24 and the olfactory system.17,13 Further, since the discovery of 40 Hz oscillations or synchronization amongst cells in primary visual cortex of the anesthetised cat,8,5 oscillatory models of V1 involving separate excitatory and inhibitory cells have also been popular, mainly from the perspective of how the oscillations can be created and sustained and how they can be used for feature linking or binding.26,22,12 However, the scope for computing with dynamically stable behaviors such as limit cycles is not yet clear. \nIn this paper, we study the computational differences between a family of EI systems and their symmetric counterparts (which we call S systems). One inspiration for this work is Li's nonlinear EI system modeling how the primary visual cortex performs contour enhancement and pre-attentive region segmentation.14,15 Studies by Braun2 had suggested that an S system model of the cortex cannot perform contour enhancement unless additional (and biologically questionable) mechanisms are used. This posed a question about the true differences between EI and S systems that we answer. We show that EI systems can take advantage of dynamically stable modes that are not available to S systems. The computational significance of this result is discussed and demonstrated in the context of models of orientation selectivity. 
More details of this work, especially its significance for models of the primary visual cortical system, can be found in Li & Dayan (1999).16 \n\n2 Theory and Experiment \n\nConsider a simple, but biologically significant, EI system in which excitatory and inhibitory cells come in pairs and there are no 'long-range' connections from the inhibitory cells14,15 (to which the Lyapunov theory4,21 does not yet apply): \n\nẋ_i = -x_i + Σ_j J_ij g(x_j) - h(y_i) + I_i,   τ_y ẏ_i = -y_i + Σ_j W_ij g(x_j)   (1) \n\nwhere x_i are the principal excitatory cells, which receive external or sensory input I_i and generate the network outputs g(x_i); y_i are the inhibitory interneurons (which are taken here as having no external input); the function g(x) = [x - T]⁺ is the threshold non-linear activation function for the excitatory cells; h(y) is the activation function for the inhibitory cells (for analytical convenience, we use the linear form h(y) = y - T_y, although the results are similar with the non-linear h(y) = [y - T_y]⁺); τ_y is a time-constant for the inhibitory cells; and J_ij and W_ij are the output connections of the excitatory cells. Excitatory and inhibitory cells can also be perturbed by Gaussian noise. \nIn the limit that the inhibitory cells are made infinitely fast (τ_y = 0), we have y_i = Σ_j W_ij g(x_j), leaving the excitatory cells to interact directly with each other: \n\nẋ_i = -x_i + Σ_j J_ij g(x_j) - h(Σ_j W_ij g(x_j)) + I_i   (2) \n\n    = -x_i + Σ_j (J_ij - W_ij) g(x_j) + I_i + κ_i   (3) \n\nwhere κ_i are constants. In this network, the neural connections J_ij - W_ij between any two cells x can be either excitatory or inhibitory, as in many abstract neural network models. When J_ij = J_ji and W_ij = W_ji, the network has symmetric connections. This paper compares EI systems with such connections and the corresponding S systems. 
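The dynamics of equations 1-3 can be integrated directly. The following is a minimal sketch: simple Euler integration, with weights, inputs and zero thresholds that are illustrative rather than taken from the paper. It checks the claim that the τ_y → 0 reduction (the S system) shares its fixed points with the paired EI system.

```python
import numpy as np

def g(x, T=0.0):
    """Threshold-linear excitatory activation g(x) = [x - T]+."""
    return np.maximum(x - T, 0.0)

def simulate_EI(J, W, I, tau_y=1.0, T=0.0, Ty=0.0, dt=0.01, steps=20000):
    """Euler-integrate the paired EI dynamics of equation (1), with the
       linear inhibitory activation h(y) = y - Ty."""
    x = np.zeros(len(I))
    y = np.zeros(len(I))
    for _ in range(steps):
        gx = g(x, T)
        x, y = (x + dt * (-x + J @ gx - (y - Ty) + I),
                y + dt * (-y + W @ gx) / tau_y)
    return x, y

def simulate_S(J, W, I, T=0.0, kappa=0.0, dt=0.01, steps=20000):
    """The tau_y -> 0 reduction of equation (3):
       x' = -x + (J - W) g(x) + I + kappa."""
    x = np.zeros(len(I))
    for _ in range(steps):
        x = x + dt * (-x + (J - W) @ g(x, T) + I + kappa)
    return x

# Weak, stable, illustrative weights: both systems settle to the same point.
J = np.array([[0.5, 0.1], [0.1, 0.5]])
W = np.array([[0.2, 0.05], [0.05, 0.2]])
I = np.array([1.0, 1.0])
x_EI, _ = simulate_EI(J, W, I)
x_S = simulate_S(J, W, I)
```

With these parameters both trajectories converge to x̄ = 1/(1 - 0.35) ≈ 1.54 per unit, so the EI pair and its symmetric reduction agree on fixed points; what differs is the dynamics around those points, which the stability analysis addresses.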
Since there are many ways of setting J_ij and W_ij in the EI system whilst keeping constant J_ij - W_ij, which is the effective weight in the S system, one may intuitively expect the EI system to have a broader computational range. \nThe response of either system to given inputs is governed by the location and linear stability of their fixed points. The S network is so defined as to have fixed points x̄ (where ẋ = 0 in equation 3) that are the same as those (x̄, ȳ) of the EI network. In particular, x̄ depends on the inputs I (the input-output sensitivity) via dx̄ = (1 - JD_g + WD_g)⁻¹ dI, where 1 is the identity matrix, J and W are the connection matrices, and D_g is a diagonal matrix with elements [D_g]_ii = g'(x̄_i). \nHowever, although the locations of the fixed points are the same for the EI and S systems, the dynamical behavior of the systems about those fixed points is quite different, and this is what leads to their differing computational power. \nTo analyse the stability of the fixed points, consider, for simplicity, the case that τ_y = 1 in the EI system, and that the matrices JD_g and WD_g commute, with eigenvalues λ_J^k and λ_W^k respectively for k = 1, ..., N, where N is the dimension of x. The local deviations near the fixed points along each of the N modes will grow in time if the real parts of the following values are positive: \n\nγ_EI^k = -1 + λ_J^k/2 ± ((λ_J^k)²/4 - λ_W^k)^½ for the EI system,   γ_S^k = -1 + λ_J^k - λ_W^k for the S system. \n\nIn the case that λ_J and λ_W are real, if the S system is unstable, then the EI system is also unstable: for if -1 + λ_J - λ_W > 0 then (λ_J)² - 4λ_W > (λ_J - 2)², and so 2γ_EI = -2 + λ_J + ((λ_J)² - 4λ_W)^½ > 0. However, if the EI system is oscillatory, 4λ_W > (λ_J)², then the S system is stable, since -1 + λ_J - λ_W < -1 + λ_J - (λ_J)²/4 = -(1 - λ_J/2)² ≤ 0. 
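These exponents are easy to evaluate numerically. A small sketch, with eigenvalues chosen purely for illustration, exhibits a mode that is oscillatory and unstable for the EI system while its S counterpart is stable:

```python
import numpy as np

def gamma_EI(lam_J, lam_W):
    """Largest real part of the EI exponents -1 + lam_J/2 ± sqrt(lam_J^2/4 - lam_W)."""
    root = np.sqrt(complex(lam_J**2 / 4 - lam_W))   # complex root -> oscillation
    return max((-1 + lam_J / 2 + s * root).real for s in (1, -1))

def gamma_S(lam_J, lam_W):
    """The S-system exponent -1 + lam_J - lam_W."""
    return -1 + lam_J - lam_W

# Strong but balanced excitation and inhibition (illustrative values);
# 4*lam_W > lam_J**2, so the EI mode is oscillatory.
lam_J, lam_W = 2.4, 1.6
print(gamma_S(lam_J, lam_W), gamma_EI(lam_J, lam_W))
```

Here γ_S ≈ -0.2 < 0 while Re γ_EI ≈ +0.2 > 0: the S system is stable but the EI system is unstable and oscillatory, matching the dichotomy just derived.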
Hence the EI system can be unstable and oscillatory while the S system is stable. \nWe are interested in the capacity of both systems to be selective amplifiers. This means that there is a class of inputs I that should be comparatively boosted by the system, whereas others should be comparatively suppressed. For instance, if the cells represent the orientation of a bar at a point, then the mode containing a unimodal, well-tuned 'bump' in orientation space should be enhanced compared with poorly tuned inputs.2,1,18 However, if the cells represent oriented small bars at multiple points in visual space, then isolated smooth and straight contours should be enhanced compared with extended homogeneous textures.14,15 \nThe quality of the systems will be judged according to how much selective amplification they can stably deliver. The critical trade-off is that the more the selected mode is amplified, the more likely it is that, when the input is non-specific, the system will be unstable to fluctuations in the direction of the selected mode, and therefore will hallucinate spurious answers. \n\n3 The Two Point System \n\nA particularly simple case to consider has just two neurons (for the S system; two pairs of neurons for the EI system) and weights \n\nJ = ( j_0  j ; j  j_0 )   W = ( w_0  w ; w  w_0 ) \n\nThe idea is that each node coarsely models a group of neurons, and the interactions between neurons within a group (j_0 and w_0) are qualitatively different from interactions between neurons in different groups (j and w). The form of selective amplification here is that symmetric or ambiguous inputs I^a = I(1,1) should be suppressed compared with asymmetric inputs I^b = I(1,0) (and, equivalently, I(0,1)). 
\nIn particular, given I^a, the system should not spontaneously generate a response with x_1 significantly different from x_2. Define the fixed points to be x̄_1^a = x̄_2^a > T under I^a and x̄_1^b > T > x̄_2^b under I^b, where T is the threshold of the excitatory neurons. These relationships hold across a wide range of input levels I. The ratio \n\nR = (dx̄_1^b/dI) / (dx̄_1^a/dI) = (1 + (w_0 + w) - (j_0 + j)) / (1 + (w_0 - j_0)) = 1 + (w - j) / (1 + (w_0 - j_0))   (4) \n\nof the average relative responses as the input level I changes is a measure of how the system selectively amplifies the preferred or consistent inputs against ambiguous ones. This measure is appropriate only when the fluctuations of the system from the fixed points x̄^a and x̄^b are well behaved. We will show that this requirement permits larger values of R in the EI system than in the S system, suggesting that the EI system can be a more powerful selective amplifier. \n\nFigure 1: Phase portraits for the S system in the two point case. A,B) Evolution in response to I^a ∝ (1,1) and I^b ∝ (1,0) for parameters for which the response to I^a is stably symmetric (the symmetry preserving network). C,D) Evolution in response to I^a and I^b for parameters for which the symmetric response to I^a is unstable, inducing two extra equilibrium points (the symmetry breaking network). Axes are x_1 and x_2; the dotted lines show the thresholds T for g(x). 
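Equation 4 can be cross-checked against the fixed points themselves. A small sketch, with illustrative parameters, threshold T = 0, and the active sets assumed as in the fixed-point definitions above, compares the formula with finite-difference sensitivities of the S-system fixed points:

```python
import numpy as np

def R_formula(j0, j, w0, w):
    """Equation (4): R = 1 + (w - j) / (1 + (w0 - j0))."""
    return 1.0 + (w - j) / (1.0 + (w0 - j0))

def xbar(M, I, active):
    """Fixed point of the S system x' = -x + M g(x) + I, assuming the units
       listed in `active` sit above threshold (g linear there, T = 0)."""
    D = np.diag([1.0 if a else 0.0 for a in active])
    return np.linalg.solve(np.eye(len(I)) - M @ D, I)

j0, j, w0, w = 1.2, 0.2, 0.7, 0.6            # illustrative, stable parameters
M = np.array([[j0 - w0, j - w], [j - w, j0 - w0]])   # effective weights J - W

# Sensitivities dxbar_1/dI under I^b (unit 1 active) and I^a (both active),
# estimated by finite differences of the input level:
dI = 1e-6
b1 = (xbar(M, np.array([1 + dI, 0.0]), [True, False])[0]
      - xbar(M, np.array([1.0, 0.0]), [True, False])[0]) / dI
a1 = (xbar(M, np.array([1 + dI, 1 + dI]), [True, True])[0]
      - xbar(M, np.array([1.0, 1.0]), [True, True])[0]) / dI
print(b1 / a1, R_formula(j0, j, w0, w))   # both give 1.8
```

For these parameters all three S exponents are negative and the assumed active sets are self-consistent, so the ratio of sensitivities and the closed form of equation 4 agree.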
\nIn the S system, the stabilities are governed by,S = -(1 + Wo - jo) for the single \nmode of deviation Xl - x~ around fixed point band,f = - (1 + (wo \u00b1 w) - (jo \u00b1 j)) \nfor the two modes of deviation X\u00b1 == (Xl - xl) \u00b1 (X2 - x~) around fixed point \na. Since we only consider cases when the input-output relationship dX/ dI of the \nfixed points is well defined, this means,s < a and,~ < O. However, for some \ninteraction parameters, there are two extra (uneven) fixed points x~ =1= x~ for (the \neven) input fa. Dynamic systems theory dictates these two uneven fixed points \nwill be stable and that they will appear when the '-' mode of the perturbation \naround the even fixed point x~ = x~ is unstable. The system breaks symmetry \nin inputs, ie the motion trajectory diverges from the (unstable) even fixed point to \none of the (stable) uneven ones. To avoid such cases, it is necessary that,~ < O. \nCombining this condition with equation 4 and,s < a leads to a upper bound on \nthe amplification ratio R S < 2. Figure 1 shows phase portraits and the equilibrium \npOints of the S system under input fa and fb for the two different system parameter \nregions. \nAs we have described, the EI system has exactly the same fixed points as the S sys(cid:173)\ntem, but they are more unstable. The stability around the symmetric fixed point \nunder Ia is governed by ,f:I = -1+(jo\u00b1j)/2\u00b1J(jo \u00b1 j)2/4 -\n(wo \u00b1 w), while that \nof the asymmetric fixed pointunderIb orIa by ,EI = -1+jo/2\u00b1JHJ4 - woo Con(cid:173)\nsequently, when there are three fixed points under la, all of them can be unstable \nin the EI system, and the motion trajectory cannot converge to any of them. In this \ncase, when both the' +' and '-' modes around the symmetric fixed point x~ = x~ \nare unstable, the global dynamics constrains the motion trajectory to a limit cycle \naround the fixed points. 
If x_1 ≈ x_2 on this limit cycle, then the EI system will not break symmetry, even though the selective amplification ratio R > 2. Figure 2 demonstrates the performance of the EI system in this regime. Figures 2A,B show various aspects of the response to input I^a, which should be comparatively suppressed. The system oscillates in such a way that x_1 and x_2 tend to be extremely similar (including being synchronised). Figures 2C,D show the same aspects of the response to I^b, which should be amplified. Again the network oscillates, and, although g(x_2) is not driven completely to 0 (it peaks at 15), it is very strongly dominated by g(x_1), and further, the overall response is much stronger than in figures 2A,B. \n\nFigure 2: Projections of the response of the EI system. A,B) Evolution of the response to I^a = I(1,1). A) x_1 vs y_1 and B) g(x_1) - g(x_2) (solid) and g(x_1) + g(x_2) (dotted) across time show that the x_1 = x_2 mode dominates and the growth of x_1 - x_2 is strongly suppressed. C,D) Evolution of the response to I^b. Here, the response of x_1 always dominates that of x_2 over the oscillations. The difference between g(x_1) + g(x_2) and g(x_1) - g(x_2) is too small to be evident on the figure. Note the difference in scales between A,B and C,D. Here j_0 = 2.1, j = 0.4, w_0 = 1.11, w = 0.9. \n\nThe pertinent difference between the EI and S systems is that while the S system (when h(y) is linear) can only roll down the energy landscape to a stable fixed point and break the input symmetry, the EI system can resort to global limit cycles with x_1(t) ≈ x_2(t) between unstable fixed points and maintain input symmetry. 
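The local picture behind figure 2 can be checked with the exponents from section 2. The weights below come from the figure 2 caption; note that this only verifies the local instabilities, not the global limit cycle itself (reproducing the reported R^EI = 97 would need a full simulation with settings the caption does not give):

```python
import numpy as np

# Weights from the caption of figure 2:
j0, j, w0, w = 2.1, 0.4, 1.11, 0.9

def mode_exponents(lam_J, lam_W):
    """Leading EI exponent -1 + lam_J/2 + sqrt(lam_J^2/4 - lam_W) and the
       S exponent -1 + lam_J - lam_W, for a single mode."""
    root = np.sqrt(complex(lam_J**2 / 4 - lam_W))
    return -1 + lam_J / 2 + root, -1 + lam_J - lam_W

ei_plus, s_plus = mode_exponents(j0 + j, w0 + w)    # '+' mode, x1 = x2
ei_minus, s_minus = mode_exponents(j0 - j, w0 - w)  # '-' mode, x1 = -x2

print(ei_plus, ei_minus)   # both EI modes unstable; the '+' mode is oscillatory
print(s_minus)             # > 0: the S system with these weights breaks symmetry
print(1 + (w - j) / (1 + w0 - j0))   # fixed-point ratio of equation (4), about 51
```

So around the symmetric fixed point both EI modes are unstable (the regime of the limit cycle above), the corresponding S system would instead break symmetry through its '-' mode, and the fixed-point ratio of equation 4 far exceeds the stable-S bound of 2.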
This is often (robustly over a large range of parameters) the case even when the '-' mode is locally more unstable (at the symmetric fixed point) than the '+' mode, because the '-' mode is much more strongly suppressed when the motion trajectory enters the subthreshold region x_1 < T and x_2 < T. As we can see in figures 2A,B, this acts to suppress any overall growth in the '-' mode. Since the asymmetric fixed point under I^b is just as unstable as that under I^a, the EI system responds to the asymmetric input I^b also by a stable limit cycle, around the asymmetric fixed point. \nSince the response of the system to either pattern is oscillatory, there are various reasonable ways of evaluating the relative response ratio. Using the mean responses of the system during a cycle to define x̄, the selective amplification ratio in figure 2 is R^EI = 97, which is significantly higher than the R^S = 2 available from the S system. This is a simple existence proof of the superiority of the EI system for amplification, albeit at the expense of oscillations. In fact, in this two point case, it can be shown that any meaningful behavior of the S system (including symmetry breaking) can be qualitatively replicated in the EI system, but not vice-versa. \n\n4 The Orientation System \n\nSymmetric recurrent networks have recently been investigated in great depth for representing and calculating a wide variety of quantities, including orientation tuning. The idea behind the recurrent networks is that they should take noisy (and perhaps weakly tuned) input and selectively amplify the component that represents an orientation θ in the input, leaving a tuned pattern of excitation across the population that faithfully represents the underlying input. Based on the analysis above, we can expect that if an S network amplifies a tuned input enough, then it will break input symmetry given an untuned input and thus hallucinate a tuned response. 
However, an EI system, in the same oscillatory regime as for the two point system, can maintain an untuned and suppressed response to untuned inputs. \nWe designed a particular EI system with a high selective amplification factor for tuned inputs I(θ). In this case, units x_i, y_i have preferred orientations θ_i = (i - N/2)π/N for i = 1, ..., N; the connection matrix J is Toeplitz with Gaussian tuning, and, for simplicity, [W]_ij does not depend on i, j. Figure 3B (and inset) shows the output of two units in the network in response to a tuned input, showing the nature of the oscillations and the way that selectivity builds up over the course of each period. Figure 3C shows the activities of all the units at three particular phases of the oscillation. Figure 3A shows how the mean activity of the most \n\nFigure 3: The Gaussian orientation network (panels: A, cell outputs vs a or b; B, cell outputs vs time; C, cell outputs vs θ_i). A) Mean response of the θ_i = 0° unit in the network as a function of a (untuned) or b (tuned), on a log scale. B) Activity of the θ_i = 0° (solid) and θ_i = 30° (dashed) units in the network over the course of the positive part of an oscillation. Inset: activity of these units over all time. C) Activity of all the units at the three times shown as (i), (ii) and (iii) in (B): (i) (dashed) is in the rising phase of the oscillation; (ii) (solid) is at the peak; and (iii) (dotted) is during the falling phase. 
Here, the input is I_i = a + b·exp(-θ_i²/2σ²), with σ = 13°, and the Toeplitz weights are J_ij = (3 + 21·exp(-(θ_i - θ_j)²/2σ'²))/N, with σ' = 20°, and W_ij = 23.5/N. \n\nactivated unit scales with the levels of tuned and untuned input. The network amplifies the tuned inputs dramatically more - note the logarithmic scale. The S system breaks symmetry to the untuned input (b = 0) for these weights. If the weights are scaled uniformly by a factor of 0.22, then the S system is appropriately stable. However, the magnification ratio is then 4.2, rather than something greater than 1000 in the EI system. \nThe orientation system can be understood to a large qualitative degree by looking at its two-point cousins. Many of the essential constraints on the system are determined by the behavior of the system when the mode with x_i = x_j dominates, in which case the complex non-linearities induced by orientation tuning, threshold cut-off, and their equivalents are irrelevant. Let J(f) and W(f) for (angular) frequency f be the Fourier transforms of J(i - j) ≡ [J]_ij and W(i - j) ≡ [W]_ij, and define λ(f) = Re{-1 + J(f)/2 + i√(W(f) - J²(f)/4)}. Then, let f* > 0 be the frequency such that λ(f*) ≥ λ(f) for all f > 0. This is the non-translation-invariant mode that is most likely to cause instabilities for translation invariant behavior. A two point system that closely corresponds to the full system can be found by solving the simultaneous equations: \n\nj_0 + j = J(0),   w_0 + w = W(0),   j_0 - j = J(f*),   w_0 - w = W(f*). \n\nThis design equates the x_1 = x_2 mode in the two point system with the f = 0 mode in the orientation system, and the x_1 = -x_2 mode with the f = f* mode. For smooth J and W, f* is often the smallest or one of the smallest non-zero spatial frequencies. 
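The reduction can be sketched in code. This is a hedged example: the ring size N, the circular wrapping of the orientation offsets, and the exact weight amplitudes are assumptions layered on the figure 3 caption, and modes are ranked by the larger of the two EI exponents from section 2 rather than by λ(f) directly:

```python
import numpy as np

def two_point_reduction(J_row, W_row):
    """Map a translation-invariant (here: circulant) system onto its two point
       cousin via j0 + j = J(0), w0 + w = W(0), j0 - j = J(f*), w0 - w = W(f*)."""
    Jf = np.fft.fft(J_row).real    # circulant eigenvalues; real for symmetric rows
    Wf = np.fft.fft(W_row).real
    # rank non-uniform modes by the larger of the two EI exponents of section 2
    lam = np.array([(-1 + Jf[f] / 2 + np.sqrt(complex(Jf[f]**2 / 4 - Wf[f]))).real
                    for f in range(len(J_row))])
    f_star = 1 + int(np.argmax(lam[1:]))   # most unstable non-uniform frequency
    j0, j = (Jf[0] + Jf[f_star]) / 2, (Jf[0] - Jf[f_star]) / 2
    w0, w = (Wf[0] + Wf[f_star]) / 2, (Wf[0] - Wf[f_star]) / 2
    return j0, j, w0, w, f_star

# Rows in the spirit of figure 3 (N and the wrapping are assumptions):
N = 64
d = np.minimum(np.arange(N), N - np.arange(N)) * np.pi / N   # wrapped offsets
sigma_p = 20 * np.pi / 180                                   # sigma' = 20 degrees
J_row = (3 + 21 * np.exp(-d**2 / (2 * sigma_p**2))) / N
W_row = np.full(N, 23.5 / N)
j0, j, w0, w, f_star = two_point_reduction(J_row, W_row)
print(f_star)   # 1 here: the lowest non-zero frequency is the most unstable
```

Because W is uniform, W(f) vanishes for all f > 0, so w_0 ≈ w, and the smooth Gaussian J row makes f* the lowest non-zero frequency, as the text anticipates.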
It is easy to see that the two systems are exactly equivalent in the translation invariant mode x_i = x_j under translation invariant input I_i = I_j, in both the linear and nonlinear regimes. The close correspondence between the two systems in other dynamic regimes is supported by simulation results.16 Quantitatively, however, the amplification ratio differs between the two systems. \n\n5 Conclusions \n\nWe have studied the dynamical behavior of networks with symmetrical and asymmetrical connections, and have shown that the extra degrees of dynamical freedom of the latter can be put to good computational use, e.g. global dynamic stability via local instability. Many applications of recurrent networks involve selective amplification - and the selective amplification factors for asymmetrical networks can greatly exceed those of symmetrical networks. We showed this in the case of orientation selectivity. However, this work was originally inspired by a similar result in contour enhancement and texture segregation, for which the activity of isolated oriented line elements should be enhanced if they form part of a smooth contour in the input and suppressed if they form part of an extended homogeneous texture. Further, the output should be homogeneous if the input is homogeneous (in the same way that the orientation network should not hallucinate orientations from untuned input). In this case, similar analysis16 shows that stable contour enhancement is limited to just a factor of 3.0 for the S system (but not for the EI system), suggesting an explanation for the poor performance of a slew of S systems in the literature designed for this purpose. We used a very simple system with just two pairs of neurons to develop analytical intuitions which are powerful enough to guide our design of the more complex systems. 
We expect that the details of our model, with the exact pairing of excitatory and inhibitory cells and the threshold non-linearity, are not crucial for the results. \nInhibition in the cortex is, of course, substantially more complicated than we have suggested. In particular, inhibitory cells do have somewhat faster (though finite) time constants than excitatory cells, and are also not so subject to short term plasticity effects such as spike rate adaptation. Nevertheless, oscillations of various sorts can certainly occur, suggesting the relevance of the computational regime that we have studied. \n\nReferences \n\n[1] Ben-Yishai, R, Bar-Or, RL & Sompolinsky, H (1995) PNAS 92:3844-3848. \n[2] Braun, J, Niebur, E, Schuster, HG & Koch, C (1994) Society for Neuroscience Abstracts 20:1665. \n[3] Carandini, M & Ringach, DL (1997) Vision Research 37:3061-3071. \n[4] Cohen, MA & Grossberg, S (1983) IEEE Transactions on Systems, Man and Cybernetics 13:815-826. \n[5] Eckhorn, R, et al (1988) Biological Cybernetics 60:121-130. \n[6] Ermentrout, GB & Cowan, JD (1979) Journal of Mathematical Biology 7:265-280. \n[7] Golomb, D, Wang, XJ & Rinzel, J (1996) Journal of Neurophysiology 75:750-769. \n[8] Gray, CM, Konig, P, Engel, AK & Singer, W (1989) Nature 338:334-337. \n[9] Grossberg, S (1988) Neural Networks 1:17-61. \n[10] Hopfield, JJ (1982) PNAS 79:2554-2558. \n[11] Hopfield, JJ (1984) PNAS 81:3088-3092. \n[12] Konig, P, Janosch, B & Schillen, TB (1992) Neural Computation 4:666-681. \n[13] Li, Z (1995) In JL van Hemmen et al, eds, Models of Neural Networks, Vol. 2. NY: Springer. \n[14] Li, Z (1997) In KYM Wong, I King & DY Yeung, eds, Theoretical Aspects of Neural Computation. Hong Kong: Springer-Verlag. \n[15] Li, Z (1998) Neural Computation 10:903-940. \n[16] Li, Z & Dayan, P (1999) to be published in Network: Computation in Neural Systems. \n[17] Li, Z & Hopfield, JJ (1989) Biological Cybernetics 61:379-392. 
\n[18] Pouget, A, Zhang, KC, Deneve, S & Latham, PE (1998) Neural Computation 10:373-401. \n[19] Samsonovich, A & McNaughton, BL (1997) Journal of Neuroscience 17:5900-5920. \n[20] Seung, HS (1996) PNAS 93:13339-13344. \n[21] Seung, HS, et al (1998) NIPS 10. \n[22] Sompolinsky, H, Golomb, D & Kleinfeld, D (1990) PNAS 87:7200-7204. \n[23] Stein, PSG, et al (1997) Neurons, Networks, and Motor Behavior. Cambridge, MA: MIT Press. \n[24] Steriade, M, McCormick, DA & Sejnowski, TJ (1993) Science 262:679-685. \n[25] Suarez, H, Koch, C & Douglas, R (1995) Journal of Neuroscience 15:6700-6719. \n[26] von der Malsburg, C (1988) Neural Networks 1:141-148. \n[27] Zhang, K (1996) Journal of Neuroscience 16:2112-2126. \n", "award": [], "sourceid": 1559, "authors": [{"given_name": "Zhaoping", "family_name": "Li", "institution": null}, {"given_name": "Peter", "family_name": "Dayan", "institution": null}]}