{"title": "Spatial Representations in the Parietal Cortex May Use Basis Functions", "book": "Advances in Neural Information Processing Systems", "page_first": 157, "page_last": 164, "abstract": null, "full_text": "Spatial Representations in the Parietal \n\nCortex May Use Basis Functions \n\nAlexandre Pouget \n\nalex@salk.edu \n\nTerrence J. Sejnowski \n\nterry@salk.edu \n\nHoward Hughes Medical Institute \n\nDepartment of Biology \n\nUniversity of California, San Diego \n\nThe Salk Institute \nLa Jolla, CA 92037 \n\nand \n\nAbstract \n\nThe parietal cortex is thought to represent the egocentric posi(cid:173)\ntions of objects in particular coordinate systems. We propose an \nalternative approach to spatial perception of objects in the pari(cid:173)\netal cortex from the perspective of sensorimotor transformations. \nThe responses of single parietal neurons can be modeled as a gaus(cid:173)\nsian function of retinal position multiplied by a sigmoid function \nof eye position, which form a set of basis functions. We show here \nhow these basis functions can be used to generate receptive fields \nin either retinotopic or head-centered coordinates by simple linear \ntransformations. This raises the possibility that the parietal cortex \ndoes not attempt to compute the positions of objects in a partic(cid:173)\nular frame of reference but instead computes a general purpose \nrepresentation of the retinal location and eye position from which \nany transformation can be synthesized by direct projection. This \nrepresentation predicts that hemineglect, a neurological syndrome \nproduced by parietal lesions, should not be confined to egocentric \ncoordinates, but should be observed in multiple frames of reference \nin single patients, a prediction supported by several experiments. \n\n\f158 \n\nAlexandre Pouget, Terrence J. 
Sejnowski \n\n1 \n\nIntroduction \n\nThe temporo-parietal junction in the human cortex and its equivalent in monkeys, \nthe inferior parietal lobule, are thought to playa critical role in spatial perception. \nLesions in these regions typically result in a neurological syndrome, called hemine(cid:173)\nglect, characterized by a lack of motor exploration toward the hemispace contralat(cid:173)\neral to the site of the lesion. As demonstrated by Zipser and Andersen [11), the \nresponses of single cells in the monkey parietal cortex are also consistent with this \npresumed role in spatial perception. \n\nIn the general case, recovering the egocentric position of an object from its multiple \nsensory inputs is difficult because of the multiple reference frames that must be \nintegrated . In this paper, we consider a simpler situation in which there is only \nvisual input and all body parts are fixed but the eyes, a condition which has been \nextensively used for neurophysiological studies in monkeys. In this situation, the \nhead-centered position of an object, X, can be readily recovered from the retinal \nlocation,R, and current eye position, E, by vector addition: \n\n(1) \n\nIf the parietal cortex contains a representation of the egocentric position of objects, \nthen one would expect to find a representation of the vectors, X, associated with \nthese objects. There is an extensive literature on how to encode a vector with \na population of neurons, and we first present two schemes that have been or are \nused as working hypothesis to study the parietal cortex. The first scheme involves \nwhat is typically called a computational map, whereas the second uses a vectorial \nrepresentation [9]. \n\nThis paper shows that none of these encoding schemes accurately accounts for all \nthe response properties of single cells in the parietal cortex. 
Instead, we propose an alternative hypothesis which does not aim at representing X per se; instead, the inputs R and E are represented in a particular basis function representation. We show that this scheme is consistent with the way parietal neurons respond to the retinal position of objects and eye position, and we give computational arguments for why this might be an efficient strategy for the cortex. \n\n2 Maps and Vectorial Representations \n\nOne way to encode a two-dimensional vector is to use a lookup table for this vector which, in the case of a two-dimensional vector, would take the form of a two-dimensional neuronal map. The parietal cortex may represent the egocentric location of objects, X, in a similar fashion. This predicts that the visual receptive fields of parietal neurons have a fixed position with respect to the head (figure 1B). The work of Andersen et al. (1985) has clearly shown that this is not the case. As illustrated in figure 2A, parietal neurons have retinotopic receptive fields. \n\nIn a vectorial representation, a vector is encoded by N units, each of them coding for the projection of the vector along its preferred direction. This entails that the activity, h, of a neuron is given by: \n\nFigure 1: Two neural representations of a vector. A) A vector V in cartesian and polar coordinates. B) In a map representation, units have a narrow gaussian tuning to the horizontal and vertical components of V. Moreover, the position of the peak response is directly related to the position of the units on the map. C) In a vectorial representation, each unit encodes the projection of V along its preferred direction (central arrows). 
This results in a cosine tuning to the vector angle, θ. \n\nh = X · W = |X| |W| cos(θ)   (2) \n\nW is usually called the preferred direction of the cell because the activity is maximum whenever θ = 0; that is, when X points in the same direction as W. Such neurons have a cosine tuning to the direction of the egocentric location of objects, as shown also in figure 1C. \n\nCosine tuning curves have been reported in the motor cortex by Georgopoulos et al. (1982), suggesting that the motor cortex uses a vectorial code for the direction of hand movement in extrapersonal space. The same scheme has also been used by Goodman and Andersen (1990), and Touretzky et al. (1993), to model the encoding of the egocentric position of objects in the parietal cortex. Touretzky et al. (1993) called their representation a sinusoidal array instead of a vectorial representation. \n\nUsing Eq. 1, we can rewrite Eq. 2: \n\nh = (R + E) · W = R · W + E · W   (3) \n\nThis second equation is linear in R and E and uses the same vector, W, in both dot products. This leads to three important predictions: \n\n1) The visual receptive fields of parietal neurons should be planar. \n\n2) The eye position receptive fields of parietal neurons should also be planar; that is, for a given retinal position, the response of a parietal neuron should be a linear function of eye position. \n\nFigure 2: Typical response of a neuron in the parietal cortex of a monkey. A) The visual receptive field has a fixed position on the retina, but the gain of the response is modulated by eye position (e_x). (Adapted from Andersen et al., 1985) B) Example of an eye position receptive field, also called a gain field, for a parietal cell. The nine circles indicate the amplitude of the response to an identical retinal stimulation for nine different eye positions. 
Outer circles show the total activity, whereas black circles correspond to the total response minus the spontaneous activity prior to visual stimulation. (Adapted from Zipser et al., 1988) \n\n3) The preferred directions for retinal location and eye position should be identical. For example, if the receptive field is on the right side of the visual field, the gain field should also increase with eye position to the right side. \n\nThe visual receptive fields and the eye position gain fields of single parietal neurons have been extensively studied by Andersen et al. [2]. In most cases, the visual receptive fields were bell-shaped with one or several peaks and an average radius of 22 degrees of visual angle [1], a result that is clearly not consistent with the first prediction above. We show in figure 2A an idealized visual receptive field of a parietal neuron. The effect of eye position on the visual receptive field is also illustrated. The eye position clearly modulates the gain of the visual response. \n\nThe prediction regarding the receptive field for eye position has been borne out by statistical analysis. The gain fields of 80% of the cells had a planar component [1, 11]. One such gain field is shown in figure 2B. \n\nThere is not enough data available to determine whether or not the third prediction is valid. However, indirect evidence suggests that if such a correlation exists between the preferred direction for retinal location and for eye position, it is probably not strong. Cells with opposite preferred directions [2, 3] have been observed. Furthermore, although each hemisphere represents all possible preferred eye position directions, there is a clear tendency to overrepresent the contralateral retinal hemifield [1]. \n\nIn conclusion, the experimental data are not fully consistent with the predictions of the vectorial code. The visual receptive fields, in particular, are strongly nonlinear. 
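The linearity at the core of the vectorial code follows directly from Eq. 2 and Eq. 3, and can be checked numerically. This is a minimal sketch, not the authors' code; the function name, the preferred direction W, and the example vectors are our illustrative choices:

```python
import numpy as np

def vectorial_response(R, E, W):
    """Activity of one unit in a vectorial code (Eq. 3): h = (R + E) . W."""
    return np.dot(R + E, W)

W = np.array([1.0, 0.0])  # the unit's preferred direction

# Cosine tuning (Eq. 2): for unit-norm X = R + E, the activity is cos(theta).
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # unit-norm vectors X
h = X @ W  # equals cos(theta) for each row of X

# Predictions 1 and 2: the response is planar (linear) in R and in E.
R = np.array([5.0, 2.0])
E0 = np.zeros(2)
E1 = np.array([1.0, 0.0])
E2 = np.array([2.0, 0.0])
# Equal increments of eye position produce equal increments of activity:
lin = (vectorial_response(R, E1 + E2, W) - vectorial_response(R, E1, W)
       == vectorial_response(R, E2, W) - vectorial_response(R, E0, W))
```

The exact linearity verified by `lin` is what the bell-shaped receptive fields reported in [1, 2] violate.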
If these nonlinearities are computationally neutral, that is, if they are averaged out in subsequent stages of processing in the cortex, then the vectorial code could capture the essence of what the parietal cortex computes and, as such, would provide a valid approximation of the neurophysiological data. We argue in the next section that the nonlinearities cannot be disregarded, and we present a representational scheme in which they have a central computational function. \n\n3 Basis Function Representation \n\n3.1 Sensorimotor Coordination and Nonlinear Function Approximation \n\nThe function which specifies the pattern of muscle activities required to move a limb, or the body, to a specific spatial location is a highly nonlinear function of the sensory inputs. The cortex is not believed to specify patterns of muscle activation, but the intermediate transformations which are handled by the cortex are often themselves nonlinear. Even if the transformations are actually linear, the nature of cortical representations often makes the problem a nonlinear mapping. For example, there exist in the putamen and premotor cortex cells with gaussian head-centered visual receptive fields [7], which means that these cells compute gaussians of X or, equivalently, gaussians of R + E, which is nonlinear in R and E. There are many other examples of sensory remappings involving similar computations. If the parietal cortex is to have a role in these remappings, the cells should respond to the sensory inputs in a way that can be used to approximate the nonlinear responses observed elsewhere. \n\nOne possibility would be for parietal neurons to represent input signals such as eye position and retinal location with basis functions. 
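That a gaussian of R + E is genuinely nonlinear in R and E can be verified numerically: for any purely additive model f(r, e) = g(r) + k(e), the mixed difference f(r1, e1) - f(r1, e2) - f(r2, e1) + f(r2, e2) vanishes, whereas a gaussian of r + e gives a nonzero value. A minimal sketch with scalar positions; the function names and the width are our choices:

```python
import math

def head_centered_gaussian(r, e, sigma=10.0):
    """Gaussian head-centered receptive field: a function of r + e (degrees)."""
    return math.exp(-((r + e) ** 2) / (2 * sigma ** 2))

def mixed_difference(f, r1, r2, e1, e2):
    """Zero for any additive f(r, e) = g(r) + k(e); nonzero otherwise."""
    return f(r1, e1) - f(r1, e2) - f(r2, e1) + f(r2, e2)

d = mixed_difference(head_centered_gaussian, 0.0, 10.0, 0.0, 10.0)
# d is nonzero, so a gaussian of r + e cannot be written as g(r) + k(e):
# no purely additive combination of retinal and eye signals reproduces it.
```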
A basis function decomposition is a well-known method for approximating nonlinear functions which is, in addition, biologically plausible [8]. In such a representation, neurons do not encode the head-centered locations of objects, X; instead, they compute functions of the input variables, such as R and E, which can be used subsequently to approximate any function of these variables. \n\n3.2 Predictions of the Basis Function Representation \n\nNot all functions are basis functions. Linear functions do not qualify, nor do sums of functions which, individually, would be basis functions, such as a gaussian function of retinal location plus a sigmoidal function of eye position. If the parietal cortex uses a basis function representation, two conditions have to be met: \n\n1) The visual and the eye position receptive fields should be smooth nonlinear functions of R and E. \n\n2) The selectivities to R and E should interact nonlinearly. \n\nThe visual receptive fields of parietal neurons are typically smooth and nonlinear. Gaussians or sums of gaussians appear to provide good models of their response profiles [2]. The eye position receptive field, on the other hand, which is represented by the gain field, appears to be approximately linear. We believe, however, that the published data only demonstrate that the eye position receptive field is monotonic, but not necessarily linear. \n\nFigure 3: Approximation of a gaussian head-centered (top-left) and a retinotopic (top-right) receptive field by a linear combination of basis function neurons. The bottom 3-D plots show the response to all possible horizontal retinal positions, r_x, and horizontal eye positions, e_x, of four typical basis function units. These units are meant to model actual parietal neurons. 
In published experiments, eye position receptive fields (gain fields) were sampled at only nine points, which makes it difficult to distinguish between a plane and other functions such as a sigmoidal function or a piecewise linear function. The hallmark of a nonlinearity would be evidence for saturation of activity within the working range of eye position. Several published gain fields show such saturations [3, 11], but a rigorous statistical analysis would be desirable. \n\nAndersen et al. (1985) have shown that the responses of parietal neurons are best modeled by a multiplication between the retinal and eye position selectivities, which is consistent with the requirements for basis functions. \n\nTherefore, the experimental data are consistent with our hypothesis that the parietal cortex uses a basis function representation. The response of most gain-modulated neurons in the parietal cortex could be modeled by multiplying a gaussian tuning to retinal position by a sigmoid of eye position, a function which qualifies as a basis function. \n\n3.3 Simulations \n\nWe simulated the responses of 121 parietal gain-modulated neurons modeled by multiplying a gaussian of retinal position, r_x, with a sigmoid of eye position, e_x: \n\nh_i = exp(-(r_x - r_xi)^2 / (2σ^2)) · 1 / (1 + exp(-(e_x - e_xi) / t))   (4) \n\nwhere the centers of the gaussians for retinal location, r_xi, and the positions of the sigmoids for eye position, e_xi, were uniformly distributed. The width of the gaussian, σ, and of the sigmoid, t, were fixed. Four of these functions are shown at the bottom of figure 3. \n\nWe used these basis functions as a hidden layer to approximate two kinds of output functions: a gaussian head-centered receptive field and a gaussian retinotopic receptive field. 
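The simulation just described can be sketched compactly. The grid spacing, the widths σ and t, and the sampling grid below are our illustrative choices, not the paper's exact settings, and we solve for the readout weights directly by least squares rather than with the delta rule the authors used; for a linear readout the delta rule converges toward the same least-squares solution:

```python
import numpy as np

# 121 basis function units (Eq. 4): a gaussian of retinal position r_x
# multiplied by a sigmoid of eye position e_x, on an 11 x 11 grid of centers.
centers = np.linspace(-40.0, 40.0, 11)
rc, ec = [a.ravel() for a in np.meshgrid(centers, centers)]  # (r_xi, e_xi)
sigma, t = 10.0, 8.0  # fixed gaussian and sigmoid widths (our choice)

def basis(r, e):
    """Responses of all 121 units to retinal position r and eye position e."""
    return np.exp(-(r - rc) ** 2 / (2 * sigma ** 2)) / (1.0 + np.exp(-(e - ec) / t))

# Sample (r, e) pairs on a grid and build the design matrix.
grid = np.linspace(-30.0, 30.0, 13)
pairs = [(r, e) for r in grid for e in grid]
A = np.array([basis(r, e) for r, e in pairs])  # 169 samples x 121 units

# Two target receptive fields read out from the SAME pool of units:
head_centered = np.array([np.exp(-(r + e) ** 2 / 200.0) for r, e in pairs])
retinotopic = np.array([np.exp(-r ** 2 / 200.0) for r, e in pairs])

# Linear readout weights for each target.
w_head, *_ = np.linalg.lstsq(A, head_centered, rcond=None)
w_ret, *_ = np.linalg.lstsq(A, retinotopic, rcond=None)

err_head = np.mean(np.abs(A @ w_head - head_centered))
err_ret = np.mean(np.abs(A @ w_ret - retinotopic))
# Both mean errors are a small fraction of the maximum output value (1.0).
```

From the same 121-unit pool, both a head-centered and a retinotopic receptive field are obtained by changing only the readout weights, which is the point of figure 3.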
Neurons with these response properties are found downstream of the parietal cortex in the premotor cortex [7] and the superior colliculus, two structures believed to be involved in the control of, respectively, arm and eye movements. \n\nThe weights for a particular output were obtained by using the delta rule. Weights were adjusted until the mean error was below 5% of the maximum output value. Figure 3 shows our best approximations for both the head-centered and retinotopic receptive fields. This demonstrates that the same pool of neurons can be used to approximate several different nonlinear functions. \n\n4 Discussion \n\nNeurophysiological data support our hypothesis that the parietal cortex represents its inputs, such as the retinal location of objects and eye position, in a format suitable for nonlinear function approximation, an operation central to sensorimotor coordination. Neurons have gaussian visual receptive fields modulated by monotonic functions of eye position, leading to response functions that can be modeled by the product of a gaussian and a sigmoid. Since such products form basis functions, this representation is well suited for approximating nonlinear functions of the input variables. \n\nPrevious attempts to characterize spatial representations have emphasized linear encoding schemes in which the location of objects is represented in egocentric coordinates. These codes cannot be used for nonlinear function approximation and, as such, may not be adequate for sensorimotor coordination [6, 10]. On the other hand, such representations are computationally interesting for certain operations, like addition or rotation. Parts of the brain more specialized for navigation, such as the hippocampus, might use such a scheme [10]. \n\nIn figure 3, a head-centered or a retinotopic receptive field can be computed from the same pool of neurons. 
It would be arbitrary to say that these neurons encode the positions of objects in egocentric coordinates. Instead, these units encode a position in several frames of reference simultaneously. If the parietal cortex uses this basis function representation, we predict that hemineglect, the neurological syndrome which results from lesions in the parietal cortex, should not be confined to any particular frame of reference. This is precisely the conclusion that has emerged from recent studies of parietal patients [4]. Whether the behavior of parietal patients can be fully explained by lesions of a basis function representation remains to be investigated. \n\nAcknowledgments \n\nWe thank Richard Andersen for helpful conversations and for access to unpublished data. \n\nReferences \n\n[1] R.A. Andersen, C. Asanuma, G. Essick, and R.M. Siegel. Corticocortical connections of anatomically and physiologically defined subdivisions within the inferior parietal lobule. Journal of Comparative Neurology, 296(1):65-113, 1990. \n\n[2] R.A. Andersen, G.K. Essick, and R.M. Siegel. Encoding of spatial location by posterior parietal neurons. Science, 230:456-458, 1985. \n\n[3] R.A. Andersen and D. Zipser. The role of the posterior parietal cortex in coordinate transformations for visual-motor integration. Canadian Journal of Physiology and Pharmacology, 66:488-501, 1988. \n\n[4] M. Behrmann and M. Moscovitch. Object-centered neglect in patients with unilateral neglect: effects of left-right coordinates of objects. Journal of Cognitive Neuroscience, 6:1-16, 1994. \n\n[5] A.P. Georgopoulos, J.F. Kalaska, R. Caminiti, and J.T. Massey. On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. Journal of Neuroscience, 2(11):1527-1537, 1982. \n\n[6] S.J. Goodman and R.A. Andersen. 
Algorithm programmed by a neural model for coordinate transformation. In International Joint Conference on Neural Networks, San Diego, 1990. \n\n[7] M.S. Graziano, G.S. Yap, and C.G. Gross. Coding of visual space by premotor neurons. Science, 266:1054-1057, 1994. \n\n[8] T. Poggio. A theory of how the brain might work. Cold Spring Harbor Symposia on Quantitative Biology, 55:899-910, 1990. \n\n[9] J.F. Soechting and M. Flanders. Moving in three-dimensional space: frames of reference, vectors and coordinate systems. Annual Review of Neuroscience, 15:167-191, 1992. \n\n[10] D.S. Touretzky, A.D. Redish, and H.S. Wan. Neural representation of space using sinusoidal arrays. Neural Computation, 5:869-884, 1993. \n\n[11] D. Zipser and R.A. Andersen. A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature, 331:679-684, 1988. \n", "award": [], "sourceid": 884, "authors": [{"given_name": "Alexandre", "family_name": "Pouget", "institution": null}, {"given_name": "Terrence", "family_name": "Sejnowski", "institution": null}]}