{"title": "Inferring Stimulus Selectivity from the Spatial Structure of Neural Network Dynamics", "book": "Advances in Neural Information Processing Systems", "page_first": 1975, "page_last": 1983, "abstract": "How are the spatial patterns of spontaneous and evoked population responses related? We study the impact of connectivity on the spatial pattern of fluctuations in the input-generated response of a neural network, by comparing the distribution of evoked and intrinsically generated activity across the different units. We develop a complementary approach to principal component analysis in which separate high-variance directions are typically derived for each input condition. We analyze subspace angles to compute the difference between the shapes of trajectories corresponding to different network states, and the orientation of the low-dimensional subspaces that driven trajectories occupy within the full space of neuronal activity. In addition to revealing how the spatiotemporal structure of spontaneous activity affects input-evoked responses, these methods can be used to infer input selectivity induced by network dynamics from experimentally accessible measures of spontaneous activity (e.g. from voltage- or calcium-sensitive optical imaging experiments). We conclude that the absence of a detailed spatial map of afferent inputs and cortical connectivity does not limit our ability to design spatially extended stimuli that evoke strong responses.", "full_text": "Inferring Stimulus Selectivity from the Spatial\n\nStructure of Neural Network Dynamics\n\nKanaka Rajan\n\nLewis-Sigler Institute for Integrative Genomics\n\nCarl Icahn Laboratories # 262, Princeton University\n\nPrinceton NJ 08544 USA\n\nkrajan@princeton.edu\n\nL. F. 
Abbott\n\nDepartment of Neuroscience\n\nDepartment of Physiology and Cellular Biophysics\n\nColumbia University College of Physicians and Surgeons\n\nNew York, NY 10032-2695 USA\n\nlfa2103@columbia.edu\n\nHaim Sompolinsky\n\nRacah Institute of Physics\nand Interdisciplinary Center for Neural Computation\n\nHebrew University\nJerusalem, Israel\n\nand\n\nCenter for Brain Science\n\nHarvard University\n\nCambridge, MA 02138 USA\nhaim@fiz.huji.ac.il\n\nAbstract\n\nHow are the spatial patterns of spontaneous and evoked population responses related? We study the impact of connectivity on the spatial pattern of fluctuations in the input-generated response, by comparing the distribution of evoked and intrinsically generated activity across the different units of a neural network. We develop a complementary approach to principal component analysis in which separate high-variance directions are derived for each input condition. We analyze subspace angles to compute the difference between the shapes of trajectories corresponding to different network states, and the orientation of the low-dimensional subspaces that driven trajectories occupy within the full space of neuronal activity. In addition to revealing how the spatiotemporal structure of spontaneous activity affects input-evoked responses, these methods can be used to infer input selectivity induced by network dynamics from experimentally accessible measures of spontaneous activity (e.g. from voltage- or calcium-sensitive optical imaging experiments). 
We conclude that the absence of a detailed spatial map of afferent inputs and cortical connectivity does not limit our ability to design spatially extended stimuli that evoke strong responses.\n\n\f1 Motivation\n\nStimulus selectivity in neural networks was historically measured directly from input-driven responses [1], and only later were similar selectivity patterns observed in spontaneous activity across the cortical surface [2, 3]. We argue that it is possible to work in the reverse order, and show that analyzing the distribution of spontaneous activity across the different units in the network can inform us about the selectivity of evoked responses to stimulus features, even when no apparent sensory map exists.\nSensory-evoked responses are typically divided into a signal component generated by the stimulus and a noise component corresponding to ongoing activity that is not directly related to the stimulus. Subsequent effort then focuses on understanding how the signal depends on properties of the stimulus, while the remaining, irregular part of the response is treated as additive noise. The distinction between external stochastic processes and noise generated deterministically by intrinsic recurrence has been studied previously in chaotic neural networks [4]. It has also been suggested that internally generated noise is not additive and can be more sensitive to the frequency and amplitude of the input than the signal component of the response [5 - 8].\nIn this paper, we demonstrate that the interaction between deterministic intrinsic noise and the spatial properties of the external stimulus is also complex and nonlinear. 
We study the impact of network connectivity on the spatial pattern of input-driven responses by comparing the structure of evoked and spontaneous activity, and show how the unique signature of these dynamics determines the selectivity of networks to spatial features of the stimuli driving them.\n\n2 Model description\n\nIn this section, we describe the network model and the methods we use to analyze its dynamics. Subsequent sections explore how the spatial patterns of spontaneous and evoked responses are related in terms of the distribution of the activity across the network. Finally, we show how the stimulus selectivity of the network can be inferred from its spontaneous activity patterns.\n\n2.1 Network elements\n\nWe build a firing-rate model of N interconnected units characterized by a statistical description of the underlying circuitry (as N → ∞, the system "self-averages", making the description independent of a specific network architecture; see also [11, 12]). Each unit is characterized by an activation variable xi ∀ i = 1, 2, . . . N, and a nonlinear response function ri that relates to xi through ri = R0 + φ(xi), where\n\nφ(x) = R0 tanh(x/R0) for x ≤ 0, and φ(x) = (Rmax − R0) tanh(x/(Rmax − R0)) otherwise. (1)\n\nEq. 1 allows us to independently set the maximum firing rate Rmax and the background rate R0 to biologically reasonable values, while retaining a maximum gradient at x = 0 to guarantee the smoothness of the transition to chaos [4].\nWe introduce a recurrent weight matrix with element Jij equal to the strength of the synapse from unit j → unit i. The individual weights are chosen independently and randomly from a Gaussian distribution with mean and variance given by [Jij]J = 0 and [Jij^2]J = g^2/N, where square brackets denote ensemble averages [9 - 11,13]. 
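As a concrete illustration, the two-branch nonlinearity of Eq. 1 and the random Gaussian weights can be sketched as follows. This is a minimal Python sketch, not the authors' code: the values of R0 and Rmax, and the convention of sampling J with variance 1/N and applying the gain g separately in the dynamics, are our own illustrative choices.

```python
import numpy as np

def phi(x, R0=0.1, Rmax=1.0):
    """Two-branch nonlinearity of Eq. 1; the firing rate is r = R0 + phi(x).
    R0 and Rmax are illustrative values, not taken from the paper.
    Both branches have slope 1 at x = 0, so phi is smooth there."""
    x = np.asarray(x, dtype=float)
    below = R0 * np.tanh(x / R0)                     # branch for x <= 0
    above = (Rmax - R0) * np.tanh(x / (Rmax - R0))   # branch for x > 0
    return np.where(x <= 0.0, below, above)

def random_weights(N, seed=0):
    """Gaussian recurrent weights J with zero mean and variance 1/N.
    Here the gain g is applied separately in the dynamics (one common
    convention); the text folds it into the weight variance as g^2/N."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
```

With these definitions, the background rate R0 is recovered at x = 0 and the rate saturates at Rmax for large positive x, as the text describes.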
The control parameter g, which sets the scale of the variance of the synaptic weights, is particularly important in determining whether or not the network produces spontaneous activity with non-trivial dynamics (specifically, g = 0 corresponds to a completely uncoupled network, while a network with g > 1 generates non-trivial spontaneous activity [4, 9, 10]).\nThe activation variable xi of each unit is therefore determined by the relation,\n\nτr dxi/dt = −xi + g Σj=1..N Jij rj + Ii , (2)\n\nwith the time scale of the network set by the single-neuron time constant τr of 10 ms.\n\n\fThe amplitude I of an oscillatory external input of frequency f is always the same for each unit, but in some examples shown in this paper, we introduce a neuron-specific phase factor θi, chosen randomly from a uniform distribution between 0 and 2π, such that\n\nIi = I cos(2πf t + θi) ∀ i = 1, 2, . . . N. (3)\n\nIn visually responsive neurons, this mimics a population of simple cells driven by a drifting grating of temporal frequency f, with the different phases arising from offsets in spatial receptive field locations. The randomly assigned phases in our model ensure that the spatial pattern of input is not correlated with the pattern of recurrent connectivity. In our selectivity analysis, however (Fig. 4), we replace the random phases with spatial input patterns that are aligned with network connectivity.\n\n2.2 PCA redux\n\nPrincipal component analysis (PCA) has been applied profitably to neuronal recordings (see for example [14]), but these analyses often plot activity trajectories corresponding to different network states using the fixed principal component coordinates derived from combined activities under all stimulus conditions. 
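Before describing our per-condition analysis, the model and the PCA quantities used throughout (Eqs. 2-5) can be made concrete with a simple Euler integration. This is a minimal Python sketch, not the authors' code: tanh stands in for the two-branch nonlinearity of Eq. 1, J is assumed sampled with variance 1/N and scaled by g in the update, and the network size and durations are kept small for illustration.

```python
import numpy as np

def simulate(J, g=1.5, I_amp=0.0, f=5.0, tau=0.01, dt=0.001, T=2.0, seed=1):
    """Euler integration of Eq. 2 with the random-phase input of Eq. 3.
    tau is the 10 ms single-neuron time constant; tanh stands in for
    Eq. 1. Returns the firing-rate trajectory, shape (steps, N)."""
    N = J.shape[0]
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)   # neuron-specific phases (Eq. 3)
    x = rng.normal(0.0, 0.5, N)                # random initial condition
    steps = int(T / dt)
    rates = np.empty((steps, N))
    for t in range(steps):
        r = np.tanh(x)
        I = I_amp * np.cos(2.0 * np.pi * f * t * dt + theta)
        x = x + (dt / tau) * (-x + g * (J @ r) + I)
        rates[t] = r
    return rates

def pca_fractions(rates):
    """Eigenvalues of the equal-time cross-correlation matrix D (Eq. 4),
    returned as fractions of their sum (the lambda-tilde of the text)."""
    lam = np.linalg.eigvalsh(np.cov(rates.T))[::-1]   # descending order
    lam = np.clip(lam, 0.0, None)                     # clip tiny negatives
    return lam / lam.sum()

def n_eff(lam_tilde):
    """Effective dimension of Eq. 5: Neff = (sum_a lambda_tilde_a^2)^(-1)."""
    return 1.0 / float(np.sum(lam_tilde ** 2))
```

Under these assumptions, running the chaotic network (I_amp = 0, g > 1) and applying pca_fractions and n_eff to the resulting rates reproduces the kind of variance spectra and effective dimensions discussed below.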
Our analysis offers a complementary approach whereby separate principal components are derived for each stimulus condition, and the resulting principal angles reveal not only the difference between the shapes of trajectories corresponding to different network states, but also the orientation of the low-dimensional subspaces these trajectories occupy within the full N-dimensional space of neuronal activity.\nThe instantaneous network state can be described by a point in an N-dimensional space with coordinates equal to the firing rates of the N units. Over time, the network activity traverses a trajectory in this N-dimensional space, and PCA can be used to delineate the subspace in which this trajectory lies. The analysis is done by diagonalizing the equal-time cross-correlation matrix of network firing rates given by,\n\nDij = ⟨(ri(t) − ⟨ri⟩)(rj(t) − ⟨rj⟩)⟩ , (4)\n\nwhere ⟨·⟩ denotes a time average. The eigenvalues of this matrix, expressed as fractions of their sum (denoted by ˜λa in this paper), indicate the distribution of variances across the different orthogonal directions in the activity trajectory.\nSpontaneous activity is a useful indicator of recurrent effects, because it is completely determined by network feedback. We can therefore study the impact of network connectivity on the spatial pattern of input-driven responses by comparing the spatial structure of evoked and spontaneous activity. In the spontaneous state, there are a number of significant contributors to the total variance. For instance, for g = 1.5, the leading 10% of the components account for 90% of the total variance, with an exponential taper for the variance associated with higher components. In addition, projections of network activity onto components with smaller variances fluctuate at progressively higher frequencies, as illustrated in Fig. 
1b & d.\nOther models of chaotic networks have shown a regime in which an input generates a non-chaotic network response, even though the network returns to chaotic fluctuations when the external drive is turned off [5, 16]. Although chaotic intrinsic activity can be completely suppressed by the input in this network state, its imprint can still be detected in the spatial pattern of the non-chaotic activity. We determine that the perfectly entrained driven state is approximately two-dimensional, corresponding to a circular oscillatory orbit whose projections are oscillations π/2 apart in phase. (The residual variance in the higher dimensions reflects harmonics arising naturally from the nonlinearity in the network model.)\n\n2.3 Dimensionality of spontaneous and evoked activity\n\nTo quantify the dimension of the subspace containing the chaotic trajectory in more detail, we introduce the quantity\n\nNeff = ( Σa=1..N ˜λa^2 )^−1 , (5)\n\nwhich provides a measure of the effective number of principal components describing a trajectory. For example, if n principal components share the total variance equally, and the remaining N − n principal components have zero variance, Neff = n.\n\n\fFigure 1: PCA of the chaotic spontaneous state and the non-chaotic driven state reached when an input of sufficiently high amplitude has suppressed the chaotic fluctuations. a) % variance accounted for by different PC's for chaotic spontaneous activity. b) Projections of the chaotic spontaneous activity onto PC vectors 1, 10 and 50 (in decreasing order of variance). c) Same as panel a, but for non-chaotic driven activity. d) Projections of periodic driven activity onto PC's 1, 3, and 5. Projections onto components 2, 4, and 6 are identical but phase shifted by π/2. 
For this figure, N = 1000, g = 1.5, f = 5 Hz and I/I1/2 = 0.7 for b and d.\n\nFigure 2: The effective dimension Neff of the trajectory of chaotic spontaneous activity as a function of g for networks with 1000 (blue circles) or 2000 (red circles) neurons.\n\nFor the chaotic spontaneous state in the networks we build, Neff increases with g (Fig. 2), due to the higher amplitude and frequency content of chaotic dynamics for large g. Note that Neff scales approximately with N, which means that large networks have proportionally higher-dimensional chaotic activity (compare the two traces within Fig. 2). The fact that the number of activated modes is only 2% of the system's total dimensionality even for g as high as 2.5 is another manifestation of the deterministic nature of the autonomous fluctuations. For comparison, we calculated Neff for a similar network driven by white noise, with g set below the chaotic transition at g = 1. In this case, Neff only assumes such low values when g is within a few percent of the critical value of 1.\n\n\f2.4 Subspace angles\n\nThe orbit describing the activity in the non-chaotic driven state consists of a circle in a two-dimensional subspace of the full N dimensions of neuronal activities. How does this circle align relative to the subspaces defined by different numbers of principal components that characterize the spontaneous activity? To overcome the difficulty in visualizing this relationship due to the high dimensionality of both the chaotic subspace as well as the full space of network activities, we utilize principal angles between subspaces [15].\nThe first principal angle is the angle between two unit vectors (called principal vectors), one in each subspace, that have the maximum overlap (dot product). 
Higher principal angles are defined recursively as the angles between pairs of unit vectors with the highest overlap that are orthogonal to the previously defined principal vectors. For two subspaces of dimension d1 and d2 defined by the orthonormal unit vectors V1^a, for a = 1, 2, . . . d1, and V2^b, for b = 1, 2, . . . d2, the cosines of the principal angles are equal to the singular values of the d1 × d2 matrix V1^a · V2^b. The angle between the two subspaces is given by,\n\nθ = arccos( min( singular value of V1^a · V2^b ) ) . (6)\n\nThe resulting principal angles vary between 0, for a direction shared by the two subspaces, and π/2, for a direction in one subspace orthogonal to all of the other. The angle between two subspaces is, by convention, the largest of their principal angles.\n\n2.5 Signal and noise from network responses\n\nTo characterize the activity of the entire network, we compute the autocorrelation function of the neuronal firing rates averaged across all the network units, defined as,\n\nC(τ) = (1/N) Σi=1..N ⟨(ri(t) − ⟨ri⟩)(ri(t + τ) − ⟨ri⟩)⟩ . (7)\n\nThe total variance in the fluctuations of the firing rates of the network neurons is denoted by C(0), whereas C(τ) for non-zero τ provides information about the temporal structure of the network activity. To quantify signal and noise from this measure of network activity, we split the total variance of the network activity (i.e., C(0)) into oscillatory and chaotic components,\n\nC(0) = σchaos^2 + σosc^2 . (8)\n\nAs depicted in the function plotted in Fig. 4a, σosc^2 is defined as the amplitude of the oscillatory part of the correlation function C(τ). 
The chaotic variance σchaos^2 is then equal to the difference between the full variance C(0) and the variance σosc^2 induced by entrainment to the periodic drive. We call σosc the signal amplitude and σchaos the noise amplitude, although it should be kept in mind that this "noise" is generated by the network in a deterministic, not stochastic, manner [5 - 8].\n\n3 Network effects on the spatial pattern of evoked activity\n\nA mean-field-based study of chaotic neural networks has recently shown a phase transition in which the chaotic background can be actively suppressed by inputs in a temporal frequency-dependent manner [5 - 8]. Similar effects have also been shown in discrete-time models and models with white noise inputs [16, 17], but these models lack the rich dynamics of continuous-time models. In contrast, we show that external inputs do not exert nearly as strong a control over the spatial structure of the network response. The phases of the firing-rate oscillations of network neurons are only partially correlated with the phases of the inputs that drive them, instead appearing more strongly influenced by the recurrent feedback.\nWe schematize the irregular trajectory of the chaotic spontaneous activity, described by its leading principal components, in red in Fig. 3a. The circular orbit of the periodic activity (schematically in blue in 3a) has been rotated by the smaller of its two principal angles. The angle between these two subspaces (the angle between ˆnchaos and ˆnosc) is then the remaining angle through which the periodic orbit would have to be rotated to align it with the horizontal plane containing the two-dimensional projection of the chaotic trajectory. In other words, Fig. 
3a depicts the angle between the subspaces defined by the first two principal components of the orbit of periodic driven activity and the first two principal components of the chaotic spontaneous activity. We ask how this circle is aligned relative to the subspaces defined by different numbers of principal components that characterize the spontaneous activity.\n\nFigure 3: Spatial pattern of network responses. a) Cartoon of the angle between the subspace defined by the first two components of the chaotic activity (red) and a two-dimensional description of the periodic orbit (blue curve). b) Relationship between the orientation of periodic and chaotic trajectories. Angles between the subspace defined by the two PC's of the non-chaotic driven state and subspaces formed by PC's 1 through m of the chaotic spontaneous activity, where m appears on the horizontal axis (red dots). Black dots show the analogous angles but with the two-dimensional subspace defined by the random input phases replacing the subspace of the non-chaotic driven activity. c) Cartoon of the angle between the subspaces defined by two periodic driven trajectories. d) Effect of input frequency on the orientation of the periodic orbit. Angle between the subspaces defined by the two leading principal components of non-chaotic driven activity at different frequencies and these two vectors for a 5 Hz input frequency. The results in this figure come from a network simulation with N = 1000; I/I1/2 = 0.7 and f = 5 Hz for b, and I/I1/2 = 1.0 for d.\n\nNext, we compare the two-dimensional subspace of the periodic driven orbit to the subspaces defined by the first m principal components of the chaotic spontaneous activity. This allows us to see how the orbit lies in the full N-dimensional space of neuronal activities relative to the trajectory of the chaotic spontaneous activity. The results (Fig. 
3b, red dots) show that this angle is close to π/2 for small m, equivalent to the angle between two randomly chosen subspaces. However, the value drops quickly for subspaces defined by progressively more of the leading principal components of the chaotic activity. Ultimately, this angle approaches zero when all N of the chaotic principal component vectors are considered, as it must, because these span the entire space of network activities.\nIn the periodic driven regime, the temporal phases of the different neurons determine the orientation of the orbit in the space of neuronal activities. The rapidly falling angle between this orbit and the subspaces defined by spatial patterns dominating the chaotic state (Fig. 3b, red dots) indicates that these phases are strongly influenced by the recurrent connectivity, which in turn determines the spatial pattern of the spontaneous activity. As an indication of the magnitude of this effect, we note that the angles between the random-phase sinusoidal trajectory of the input and the same chaotic subspaces are much larger than those associated with the periodic driven activity (Fig. 3b, black dots).\n\n\f4 Temporal frequency modulation of spatial patterns\n\nAlthough recurrent feedback in the network plays an important role in the structure of driven network responses, the spatial pattern of the activity is not fixed but rather is shaped by a complex interaction between the driving input and intrinsic network dynamics. It is therefore sensitive to both the amplitude and the frequency of this drive. To see this, we examine how the orientation of the approximately two-dimensional periodic orbit of driven network activity in the non-chaotic regime depends on input frequency. 
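The subspace comparisons used throughout this analysis reduce to a small computation: orthonormalize a basis for each subspace, take the singular values of the matrix of pairwise dot products, and apply Eq. 6. A minimal Python sketch follows; the function name and the QR orthonormalization step are our own illustrative choices, assuming each subspace is supplied as a matrix of column vectors.

```python
import numpy as np

def subspace_angle(A, B):
    """Largest principal angle (Eq. 6) between the column spaces of A and B.
    Columns need not be orthonormal; QR orthonormalizes them first.
    Returns a value in [0, pi/2]: 0 when the subspaces coincide, pi/2 when
    some direction in one subspace is orthogonal to all of the other."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)   # cosines of the angles
    s = np.clip(s, -1.0, 1.0)                        # guard numerical noise
    return float(np.arccos(s.min()))   # smallest cosine -> largest angle
```

For the comparison of Fig. 3b, A would hold the two leading principal components of the driven orbit and B the first m principal components of the spontaneous activity.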
We use the technique of principal angles described above to examine how the orientation of the oscillatory orbit changes when the input frequency is varied (the angle between ˆnosc1 and ˆnosc2 in Fig. 3c). For comparison purposes, we choose the dominant two-dimensional subspace of the network oscillatory responses to a driving input at 5 Hz as a reference. We then calculate the principal angles between this subspace and the corresponding subspaces evoked by inputs with different frequencies. The result shown in Fig. 3d indicates that the orientation of the orbit for these driven states rotates as the input frequency changes.\nThe frequency dependence of the orientation of the evoked response is likely related to the effect seen in Fig. 1b & d, in which higher-frequency activity is projected onto higher principal components of the spontaneous activity. This causes the orbit of the driven activity to rotate in the direction of higher-order principal components of the spontaneous activity as the input frequency increases. In addition, we find that the larger the stimulus amplitude, the closer the response phases of the neurons are to the random phases of their external inputs (results not shown).\n\n5 Network selectivity\n\nWe have shown that the response of a network to random-phase input is strongly affected by the spatial structure of spontaneous activity (Fig. 3b). We now ask if the spatial patterns that dominate the spontaneous activity in a network correspond to the spatial input patterns to which the network responds most robustly. In other words, can the spatial structure of an input be designed to maximize its ability to suppress chaos?\nRather than using random-phase inputs, we align the inputs to our network along the directions defined by the different principal components of its spontaneous activity. 
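In practice, this alignment amounts to replacing the random phases of Eq. 3 with a spatial profile proportional to a chosen principal component, and then comparing the oscillatory and chaotic parts of the response variance via Eqs. 7 and 8. A minimal Python sketch; the helper names and the tail-based estimate of the oscillatory variance are our own illustrative choices, not the authors' exact procedure.

```python
import numpy as np

def aligned_input(V, a, I_amp, f, t):
    """Spatially structured drive: neuron i receives I * V[i, a] * cos(2 pi f t),
    with V[:, a] the a-th principal component of the spontaneous activity."""
    return I_amp * V[:, a] * np.cos(2.0 * np.pi * f * t)

def signal_noise_split(C, period_steps):
    """Split the total variance C(0) into sigma_osc^2 + sigma_chaos^2 (Eq. 8).
    C is the population-averaged autocorrelation of Eq. 7 at uniform lags.
    sigma_osc^2 is estimated as the oscillation amplitude of C at large lags,
    where the chaotic part has decayed (an illustrative estimator)."""
    tail = C[-3 * period_steps:]                   # last few periods of C(tau)
    sigma_osc2 = 0.5 * (tail.max() - tail.min())   # oscillation amplitude
    sigma_chaos2 = C[0] - sigma_osc2               # remainder is "noise"
    return sigma_osc2, sigma_chaos2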
Specifically, the input to neuron i is set to,\n\nIi = I Vi^a cos(2πf t) , (9)\n\nwhere I is the amplitude factor and Vi^a is the ith component of principal component vector a of the spontaneous activity. The index a is ordered so that a = 1 corresponds to the principal component with the largest variance and a = N, the least.\nThe signal amplitude when the input is aligned with different leading eigenvectors shows no strong dependence on a, but the noise amplitude exhibits a sharp transition from no chaotic component for small a to partial chaos for larger a (Fig. 4b). The critical value of a depends on I, f and g but, in general, inputs aligned with the directions along which the spontaneous network activity has large projections are most effective at inducing transitions to the driven periodic state. The point a = 5 corresponds to a phase transition analogous to that seen in other network models [5, 16]. The noise is therefore more sensitive to the spatial structure of the input than the signal is.\nSuppression of spontaneously generated noise in neural networks does not require stimuli so strong that they simply overwhelm fluctuations through saturation. Near the onset of chaos, complete noise suppression can be achieved with relatively low-amplitude inputs (compared to the strength of the internal feedback), especially if the input is aligned with the dominant principal components of the spontaneous activity.\n\n6 Discussion\n\nMany models of selectivity in cortical circuits rely on knowledge of the spatial organization of afferent inputs as well as cortical connectivity. However, in many cortical areas, such information is not available. This is analogous to the random character of connectivity in our network, which precludes\n\n\fFigure 4: a) An example autocorrelation function. Horizontal lines indicate how we define the signal and noise amplitudes. 
Parameters used for this figure are I/I1/2 = 0.4, g = 1.8 and f = 20 Hz. b) Network selectivity to different spatial patterns of input. Signal and noise amplitudes in the input-evoked response aligned to the leading principal components of the spontaneous activity of the network. The inset shows a larger range on a coarser scale. The results in this figure come from a network simulation with N = 1000, I/I1/2 = 0.2 and f = 2 Hz for b.\n\na simple description of the spatial distribution of activity patterns in terms of topographically organized maps. Our analysis shows that even in cortical areas where the underlying connectivity does not exhibit systematic topography, dissecting the spatial patterns of fluctuations in neuronal activity can yield important insight about both intrinsic network dynamics and stimulus selectivity.\nAnalysis of the spatial pattern of network activity reveals that even though the network connectivity matrix is full rank, the effective dimensionality of the chaotic fluctuations is much smaller than the network size. This suppression of spatial modes is much stronger than expected, for instance, from a linear network that low-pass filters a spatiotemporal white noise input. Further, this study extends a similar effect demonstrated in the temporal domain elsewhere [5 - 8] by showing that the active spatial patterns reflect a strong nonlinear interaction between external driving inputs and intrinsic dynamics. Surprisingly though, even when the input is strong enough to fully entrain the temporal pattern of network activity, the spatial organization of the activity remains strongly influenced by recurrent dynamics.\nOur results show that experimentally accessible spatial patterns of spontaneous activity (e.g. from voltage- or calcium-sensitive optical imaging experiments) can be used to infer the stimulus selectivity induced by the network dynamics and to design spatially extended stimuli that evoke strong responses. 
This is particularly true when selectivity is measured in terms of the ability of a stimulus to entrain the neural dynamics. In general, our results indicate that the analysis of spontaneous activity can provide valuable information about the computational implications of neuronal circuitry.\n\nAcknowledgments\n\nResearch of KR and LFA was supported by National Science Foundation grant IBN-0235463 and an NIH Director's Pioneer Award, part of the NIH Roadmap for Medical Research, through grant number 5-DP1-OD114-02. HS was partially supported by grants from the Israel Science Foundation and the McDonnell Foundation. This research was also supported by the Swartz Foundation through the Swartz Centers at Columbia, Princeton and Harvard Universities.\n\n\fReferences\n\n[1] Hubel, D.H. & Wiesel, T.N. (1962) Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol. 160, 106-154.\n[2] Arieli, A., Shoham, D., Hildesheim, R. & Grinvald, A. (1995) Coherent spatiotemporal patterns of ongoing activity revealed by real-time optical imaging coupled with single-unit recording in the cat visual cortex. J. Neurophysiol. 73, 2072-2093.\n[3] Arieli, A., Sterkin, A., Grinvald, A. & Aertsen, A. (1996) Dynamics of ongoing activity: explanation of the large variability in evoked cortical responses. Science 273, 1868-1871.\n[4] Sompolinsky, H., Crisanti, A. & Sommers, H.J. (1988) Chaos in random neural networks. Phys. Rev. Lett. 61, 259-262.\n[5] Rajan, K., Abbott, L.F. & Sompolinsky, H. (2010) Stimulus-dependent suppression of chaos in recurrent neural networks. Phys. Rev. E 82, 011903.\n[6] Rajan, K. (2009) Nonchaotic Responses from Randomly Connected Networks of Model Neurons. Ph.D. Dissertation, Columbia University in the City of New York.\n[7] Rajan, K., Abbott, L. 
F., & Sompolinsky, H. (2010) Stimulus-dependent suppression of intrinsic variability in recurrent neural networks. BMC Neuroscience, 11, O17: 11.\n[8] Rajan, K. (2010) What do Random Matrices Tell us about the Brain? Grace Hopper Celebration of Women in Computing, published by the Anita Borg Institute for Women & Technology and the Association for Computing Machinery.\n[9] van Vreeswijk, C. & Sompolinsky, H. (1996) Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274, 1724-1726.\n[10] van Vreeswijk, C. & Sompolinsky, H. (1998) Chaotic balanced state in a model of cortical circuits. Neural Comput. 10, 1321-1371.\n[11] Shriki, O., Hansel, D. & Sompolinsky, H. (2003) Rate models for conductance-based cortical neuronal networks. Neural Comput. 15, 1809-1841.\n[12] Wong, K.-F. & Wang, X.-J. (2006) A recurrent network mechanism of time integration in perceptual decisions. J. Neurosci. 26, 1314-1328.\n[13] Rajan, K. & Abbott, L.F. (2006) Eigenvalue spectra of random matrices for neural networks. Phys. Rev. Lett. 97, 188104.\n[14] Broome, B.M., Jayaraman, V. & Laurent, G. (2006) Encoding and decoding of overlapping odor sequences. Neuron 51, 467-482.\n[15] Ipsen, I.C.F. & Meyer, C.D. (1995) The angle between complementary subspaces. Amer. Math. Monthly 102, 904-911.\n[16] Bertschinger, N. & Natschläger, T. (2004) Real-time computation at the edge of chaos in recurrent neural networks. Neural Comput. 16, 1413-1436.\n[17] Molgedey, L., Schuchhardt, J. & Schuster, H.G. (1992) Suppressing chaos in neural networks by noise. Phys. Rev. Lett. 69, 3717-3719.\n\n\f", "award": [], "sourceid": 863, "authors": [{"given_name": "Kanaka", "family_name": "Rajan", "institution": null}, {"given_name": "L", "family_name": "Abbott", "institution": null}, {"given_name": "Haim", "family_name": "Sompolinsky", "institution": null}]}