{"title": "Neural Decoding of Cursor Motion Using a Kalman Filter", "book": "Advances in Neural Information Processing Systems", "page_first": 133, "page_last": 140, "abstract": null, "full_text": "Neural Decoding of Cursor Motion Using a Kalman Filter\n\nW. Wu M. J. Black\u0001\nM. Serruya\u0002\u0005\u0004\n\nY. Gao\nA. Shaikhouni\u0002\u0006\u0004\n\nE. Bienenstock\u0003\u0002\nJ. P. Donoghue\u0002\n\n Division of Applied Mathematics, \u0001 Dept. of Computer Science,\n\n\u0002 Dept. of Neuroscience, \u0004 Division of Biology and Medicine,\n\nBrown University, Providence, RI 02912\n\nweiwu@cfm.brown.edu, black@cs.brown.edu, gao@cfm.brown.edu,\n\nelie@dam.brown.edu, Mijail Serruya@brown.edu,\n\nAmmar Shaikhouni@brown.edu, john donoghue@brown.edu\n\nAbstract\n\nThe direct neural control of external devices such as computer displays\nor prosthetic limbs requires the accurate decoding of neural activity rep-\nresenting continuous movement. We develop a real-time control system\nusing the spiking activity of approximately 40 neurons recorded with\nan electrode array implanted in the arm area of primary motor cortex.\nIn contrast to previous work, we develop a control-theoretic approach\nthat explicitly models the motion of the hand and the probabilistic re-\nlationship between this motion and the mean \ufb01ring rates of the cells in\n70\nbins. We focus on a realistic cursor control task in which the sub-\nject must move a cursor to \u201chit\u201d randomly placed targets on a computer\nmonitor. Encoding and decoding of the neural data is achieved with a\nKalman \ufb01lter which has a number of advantages over previous linear\n\ufb01ltering techniques. 
In particular, the Kalman filter reconstructions of hand trajectories in off-line experiments are more accurate than previously reported results and the model provides insights into the nature of the neural coding of movement.\n\n1 Introduction\n\nRecent results have demonstrated the feasibility of direct neural control of devices such as computer cursors using implanted electrodes [5, 9, 11, 14]. These results are enabled by a variety of mathematical \u201cdecoding\u201d methods that produce an estimate of the system \u201cstate\u201d (e.g. hand position) from a sequence of measurements (e.g. the firing rates of a collection of cells). Here we argue that such a decoding method should (1) have a sound probabilistic foundation; (2) explicitly model noise in the data; (3) indicate the uncertainty in estimates of hand position; (4) make minimal assumptions about the data; (5) require a minimal amount of \u201ctraining\u201d data; (6) provide on-line estimates of hand position with short delay (less than 200 ms); and (7) provide insight into the neural coding of movement.\n\nFigure 1: Reconstructing 2D hand motion. (a) Training: neural spiking activity is recorded while the subject moves a jointed manipulandum on a 2D plane to control a cursor so that it hits randomly placed targets. (b) Decoding: true target trajectory (dashed (red): dark to light) and reconstruction using the Kalman filter (solid (blue): dark to light).\n\nTo that end, we propose a Kalman filtering method that provides a rigorous and well understood framework that addresses these issues. 
This approach provides a control-theoretic model for the encoding of hand movement in motor cortex and for inferring, or decoding, this movement from the firing rates of a population of cells.\n\nSimultaneous recordings are acquired from an array consisting of 100 microelectrodes [6] implanted in the arm area of primary motor cortex (MI) of a Macaque monkey; recordings from this area have been used previously to control devices [5, 9, 10, 11, 14]. The monkey views a computer monitor while gripping a two-link manipulandum that controls the 2D motion of a cursor on the monitor (Figure 1a). We use the experimental paradigm of [9], in which a target dot appears in a random location on the monitor and the task requires moving a feedback dot with the manipulandum so that it hits the target. When the target is hit, it jumps to a new random location. The trajectory of the hand and the neural activity of 42 cells are recorded simultaneously. We compute the position, velocity, and acceleration of the hand along with the mean firing rate for each of the cells within non-overlapping 70 ms time bins. In contrast to related work [8, 15], the motions of the monkey in this task are quite rapid and more \u201cnatural\u201d in that the actual trajectory of the motion is unconstrained.\n\nThe reconstruction of hand trajectory from the mean firing rates can be viewed probabilistically as a problem of inferring behavior from noisy measurements. In [15] we proposed a Kalman filter framework [3] for modeling the relationship between firing rates in motor cortex and the position and velocity of the subject\u2019s hand. This work focused on off-line reconstruction using constrained motions of the hand [8]. Here we consider new data from the on-line environmental setup [9] which is more natural, varied, and contains rapid motions. 
With this data we show that, in contrast to our previous results, a model of hand acceleration (in addition to position and velocity) is important for accurate reconstruction.\n\nIn the Kalman framework, the hand movement (position, velocity and acceleration) is modeled as the system state and the neural firing rate is modeled as the observation (measurement). The approach specifies an explicit generative model that assumes the observation (firing rate in 70 ms bins) is a linear function of the state (hand kinematics) plus Gaussian noise. (This is a crude assumption, but the firing rates can be square-root transformed [7], making them more Gaussian, and the mean firing rate can be subtracted to achieve zero-mean data.) Similarly, the hand state at each time instant is assumed to be a linear function of the hand state at the previous time instant plus Gaussian noise. The Kalman filter approach provides a recursive, on-line estimate of hand kinematics from the firing rate in non-overlapping time bins. The results of reconstructing hand trajectories from pre-recorded neural firing rates are compared with those obtained using more traditional fixed linear filtering techniques [9, 12] using overlapping windows. The results indicate that the Kalman filter decoding is more accurate than that of the fixed linear filter.\n\n1.1 Related Work\n\nGeorgopoulos and colleagues [4] showed that hand movement direction may be encoded by the neural ensemble in the arm area of motor cortex (MI). This early work has resulted in a number of successful algorithms for decoding neural activity in MI to perform off-line reconstruction or on-line control of cursors or robotic arms. 
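The square-root transform and mean subtraction mentioned above can be sketched as follows (a minimal numpy sketch; the function name and array layout are our own, not from the paper):

```python
import numpy as np

def preprocess_rates(counts):
    """Square-root transform binned spike counts to make them more Gaussian,
    then subtract each cell's mean rate to obtain zero-mean observations.

    counts: (M, N) array of spike counts, M time bins by N cells (our layout).
    """
    z = np.sqrt(counts.astype(float))
    return z - z.mean(axis=0)
```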
Roughly, the primary methods for decoding MI activity include the population vector algorithm [4, 5, 7, 11], linear filtering [9, 12], artificial neural networks [14], and probabilistic methods [2, 10, 15].\n\nThe population vector approach is the oldest method and it has been used for the real-time neural control of 3D cursor movement [11]. This work has focused primarily on \u201ccenter out\u201d motions to a discrete set of radial targets (in 2D or 3D) rather than the natural, continuous motion that we address here.\n\nLinear filtering [8, 12] is a simple statistical method that is effective for real-time neural control of a 2D cursor [9]. This approach requires the use of data over a long time window (typically on the order of a second). The fixed linear filter, like population vectors and neural networks [14], lacks both a clear probabilistic model and a model of the temporal hand kinematics. Additionally, these methods provide no estimate of uncertainty and hence may be difficult to extend to the analysis of more complex temporal movement patterns.\n\nWe argue that what is needed is a probabilistically grounded method that uses data in small time windows (e.g. 70 ms or less) and integrates that information over time in a recursive fashion. The CONDENSATION algorithm has recently been introduced as a Bayesian decoding scheme [2], which provides a probabilistic framework for causal estimation and is shown to be superior in performance to linear filtering when sufficient data is available (e.g. using firing rates for several hundred cells). Note that the CONDENSATION method is more general than the Kalman filter proposed here in that it does not assume linear models and Gaussian noise. 
While this may be important for neural decoding as suggested in [2], current technology makes the method impractical for real-time control.\n\nFor real-time neural control we exploit the Kalman filter [3, 13], which has been widely used for estimation problems ranging from target tracking to vehicle control. Here we apply this well understood theory to the problem of decoding hand kinematics from neural activity in motor cortex. This builds on work that uses recursive Bayesian filters to estimate the position of a rat from the firing activity of hippocampal place cells [1, 16]. In contrast to the linear filter or population vector methods, this approach provides a measure of confidence in the resulting estimates. This can be extremely important when the output of the decoding method is to be used for later stages of analysis.\n\n2 Methods\n\nDecoding involves estimating the state of the hand at the current instant in time; i.e. x_k = [x, y, v_x, v_y, a_x, a_y]^T, representing x-position, y-position, x-velocity, y-velocity, x-acceleration, and y-acceleration at time k*dt, where dt = 70 ms in our experiments. The Kalman filter [3, 13] model assumes the state is linearly related to the observations z_k, an N x 1 vector containing the firing rates at time k*dt for the N observed neurons within dt. In our experiments, N = 42 cells. We briefly review the Kalman filter algorithm below; for details the reader is referred to [3, 13].\n\nEncoding: We define a generative model of neural firing as\n\nz_k = H x_k + q_k, (1)\n\nwhere k = 1, 2, ..., M, M is the number of time steps in the trial, and H is an N x 6 matrix that linearly relates the hand state to the neural firing. We assume the noise in the observations is zero mean and normally distributed, i.e. q_k ~ N(0, Q).\n\nThe states are assumed to propagate in time according to the system model\n\nx_{k+1} = A x_k + w_k, (2)\n\nwhere A is a 6 x 6 coefficient matrix and the noise term w_k ~ N(0, W). This states that the hand kinematics (position, velocity, and acceleration) at time k+1 is linearly related to the state at time k. Once again the noise is assumed to be normally distributed.\n\nIn practice, A, H, Q and W might change with time step k; however, here we make the common simplifying assumption that they are constant. Thus we can estimate the Kalman filter model from training data using least squares estimation:\n\nA = argmin_A sum_k || x_{k+1} - A x_k ||^2, H = argmin_H sum_k || z_k - H x_k ||^2,\n\nwhere ||.|| is the L2 norm. Given A and H it is then simple to estimate the noise covariance matrices W and Q; details are given in [15].\n\nDecoding: At each time step t_k the algorithm has two steps: 1) prediction of the a priori state estimate x_k^-; and 2) updating this estimate with new measurement data to produce an a posteriori state estimate x_k. In particular, these steps are:\n\nI. Discrete Kalman filter time update equations: at each time t_k we obtain the a priori estimate from the previous time t_{k-1}, then compute its error covariance matrix P_k^-:\n\nx_k^- = A x_{k-1}, (3)\nP_k^- = A P_{k-1} A^T + W. (4)\n\nII. Measurement update equations: using the a priori estimate x_k^- and the firing rate z_k, we update the estimate with the measurement and compute the posterior error covariance matrix:\n\nx_k = x_k^- + K_k (z_k - H x_k^-), (5)\nP_k = (I - K_k H) P_k^-, (6)\n\nwhere P_k represents the state error covariance after taking into account the neural data and K_k is the Kalman gain matrix given by\n\nK_k = P_k^- H^T (H P_k^- H^T + Q)^{-1}. (7)\n\nThis K_k produces a state estimate that minimizes the mean squared error of the reconstruction (see [3] for details). Note that Q is the measurement error covariance matrix and, depending on the reliability of the data, the gain term K_k automatically adjusts the contribution of the new measurement to the state estimate.\n\nMethod | Correlation Coefficient (x, y) | MSE (cm^2)\nKalman (0 ms lag) | (0.768, 0.912) | 7.09\nKalman (70 ms lag) | (0.785, 0.932) | 7.07\nKalman (140 ms lag) | (0.815, 0.929) | 6.28\nKalman (210 ms lag) | (0.808, 0.891) | 6.87\nKalman (no acceleration) | (0.817, 0.914) | 6.60\nLinear filter | (0.756, 0.915) | 8.30\n\nTable 1: Reconstruction results for the fixed linear and recursive Kalman filter. The table also shows how the Kalman filter results vary with lag times (see text).\n\n3 Experimental Results\n\nTo be practical, we must be able to train the model (i.e. estimate A, H, W and Q) using a small amount of data. Experimentally we found that approximately 3.5 minutes of training data suffices for accurate reconstruction (this is similar to the result for fixed linear filters reported in [9]). As described in the introduction, the task involves moving a manipulandum freely on a tablet to hit randomly placed targets on the screen. We gather the mean firing rates and actual hand trajectories for the training data and then learn the models via least squares (the computation time is negligible). We then test the accuracy of the method by reconstructing test trajectories off-line using recorded neural data not present in the training set. The results reported here use approximately 1 minute of test data.\n\nOptimal Lag: The physical relationship between neural firing and arm movement means there exists a time lag between them [7, 8]. The introduction of a time lag results in the measurements z_k being taken from some previous (or future) instant in time t_{k-j} for some integer lag j. 
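The least-squares model fitting and the recursive decoding of equations (1)-(7) in Section 2 can be sketched in numpy as follows (a minimal sketch; function and variable names are ours, states are assumed arranged as an (M, d) array and firing rates as (M, N), and the noise covariances are estimated from the training residuals):

```python
import numpy as np

def fit_kalman(X, Z):
    """Least-squares fit of x_{k+1} = A x_k + w and z_k = H x_k + q.

    X: (M, d) hand states per bin (d = 6 in the paper: position,
       velocity, acceleration in x and y).
    Z: (M, N) mean firing rates for N cells in the same bins.
    """
    X0, X1 = X[:-1], X[1:]
    # lstsq solves X0 @ C = X1 for C = A^T, so transpose the result.
    A = np.linalg.lstsq(X0, X1, rcond=None)[0].T
    H = np.linalg.lstsq(X, Z, rcond=None)[0].T
    W = np.cov((X1 - X0 @ A.T).T)   # system noise covariance from residuals
    Q = np.cov((Z - X @ H.T).T)     # measurement noise covariance from residuals
    return A, H, W, Q

def kalman_decode(Z, A, H, W, Q, x0):
    """Recursive decoding: time update (3)-(4), measurement update (5)-(7)."""
    d = len(x0)
    x, P = np.asarray(x0, dtype=float), np.zeros((d, d))
    out = []
    for z in Z:
        x_pred = A @ x                    # (3) a priori state estimate
        P_pred = A @ P @ A.T + W          # (4) a priori error covariance
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + Q)  # (7) gain
        x = x_pred + K @ (z - H @ x_pred)  # (5) a posteriori state estimate
        P = (np.eye(d) - K @ H) @ P_pred   # (6) a posteriori error covariance
        out.append(x)
    return np.array(out)
```

On simulated linear-Gaussian data this recovers the system matrix and tracks the state closely; with real firing rates the fit is only approximate, which is the point of the explicit noise covariances.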
In the interest of simplicity, we consider a single optimal time lag for all the cells, though evidence suggests that individual time lags may provide better results [15].\n\nUsing time lags of 0, 70, 140, and 210 ms we train the Kalman filter and perform reconstruction (see Table 1). We report the accuracy of the reconstructions with a variety of error measures used in the literature, including the correlation coefficient (CC) and the mean squared error (MSE) between the reconstructed and true trajectories. From Table 1 we see that the optimal lag is around two time steps (or 140 ms); this lag will be used in the remainder of the experiments and is similar to our previous findings [15] which suggested that the optimal lag was between 50-100 ms.\n\nDecoding: At the beginning of the test trial we let the predicted initial condition equal the real initial condition. Then the update equations in Section 2 are applied. Some examples of the reconstructed trajectory are shown in Figure 2 while Figure 3 shows the reconstruction of each component of the state variable (position, velocity and acceleration in x and y).\n\nFrom Figure 3 and Table 1 we note that the reconstruction in y is more accurate than in the x direction (the same is true for the fixed linear filter described below); this requires further investigation. Note also that the ground truth velocity and acceleration curves are computed from the position data with simple differencing. 
As a result these plots are quite noisy, making an evaluation of the reconstruction difficult.\n\nFigure 2: Reconstructed trajectories (portions of 1 min test data \u2013 each plot shows 50 time instants (3.5 s)): true target trajectory (dashed (red)) and reconstruction using the Kalman filter (solid (blue)).\n\n3.1 Comparison with linear filtering\n\nFixed linear filters reconstruct hand position as a linear combination of the firing rates over some fixed time period [4, 9, 12]; that is,\n\nx_k = a + sum_{j=1}^{N} sum_{i=0}^{m} f_{j,i} z_{j,k-i},\n\nwhere x_k is the x-position (or, equivalently, the y-position) at time t_k, k = 1, ..., M, where M is the number of time steps in a trial, z_{j,k-i} is the firing rate of neuron j at time t_{k-i}, a is a constant offset, and the f_{j,i} are the filter coefficients. The coefficients can be learned from training data using a simple least squares technique. In our experiments here we choose the window length m so that the hand position is determined from firing data over approximately the preceding 1 s. This is exactly the method described in [9] which provides a fair comparison for the Kalman filter; for details see [12, 15]. 
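The fixed linear filter above can likewise be fit by least squares over lagged firing rates. A minimal numpy sketch, under our own naming (the window length m is a parameter; with 70 ms bins, m + 1 bins span roughly 1 s when m is around 14):

```python
import numpy as np

def build_lagged(Z, m):
    """Feature row k holds a constant term followed by the rates
    z_k, z_{k-1}, ..., z_{k-m} for all neurons, for k = m, ..., M-1."""
    M, N = Z.shape
    rows = [np.concatenate([[1.0], Z[k - m:k + 1][::-1].ravel()])
            for k in range(m, M)]
    return np.array(rows)

def fit_linear_filter(Z, pos, m=14):
    """Least-squares fit of x_k = a + sum_{j,i} f_{j,i} z_{j,k-i};
    one coefficient column per output coordinate (here 2D position)."""
    F = build_lagged(Z, m)                      # (M-m, 1 + N*(m+1))
    return np.linalg.lstsq(F, pos[m:], rcond=None)[0]

def apply_linear_filter(Z, coef, m=14):
    """Reconstruct positions for bins m..M-1 from the fitted coefficients."""
    return build_lagged(Z, m) @ coef
```

Unlike the Kalman recursion, each output here depends on the whole window of past rates and comes with no covariance estimate, which is the contrast drawn in the text.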
Note that since the linear filter uses data over a long time window, it does not benefit from the use of time-lagged data. Note also that it does not explicitly reconstruct velocity or acceleration.\n\nThe linear filter reconstruction of position is shown in Figure 4. Compared with Figure 3, we see that the results are visually similar. Table 1, however, shows that the Kalman filter gives a more accurate reconstruction than the linear filter (higher correlation coefficient and lower mean-squared error). While fixed linear filtering is extremely simple, it lacks many of the desirable properties of the Kalman filter.\n\nAnalysis: In our previous work [15], the experimental paradigm involved carefully designed hand motions that were slow and smooth. In that case we showed that acceleration was redundant and could be removed from the state equation. The data used here is more \u201cnatural\u201d, varied, and rapid and we find that modeling acceleration improves the prediction of the system state and the accuracy of the reconstruction; Table 1 shows the decrease in accuracy with only position and velocity in the system state (with 140 ms lag).\n\n4 Conclusions\n\nWe have described a discrete linear Kalman filter that is appropriate for the neural control of 2D cursor motion. 
The model can be easily learned using a few minutes of training data and provides real-time estimates of hand position every 70 ms given the firing rates of 42 cells in primary motor cortex. The estimated trajectories are more accurate than the fixed linear filtering results being used currently.\n\nFigure 3: Reconstruction of each component of the system state variable (x- and y-position, velocity, and acceleration): true target motion (dashed (red)) and reconstruction using the Kalman filter (solid (blue)). 20 s from a 1 min test sequence are shown.\n\nFigure 4: Reconstruction of position using the linear filter: true target trajectory (dashed (red)) and reconstruction using the linear filter (solid (blue)).\n\nThe Kalman filter proposed here provides a rigorous probabilistic approach with a well understood theory. By making its assumptions explicit and by providing an estimate of uncertainty, the Kalman filter offers significant advantages over previous methods. The method also estimates hand velocity and acceleration in addition to 2D position. 
In contrast to previous experiments, we show, for the natural 2D motions in this task, that incorporating acceleration into the system and measurement models improves the accuracy of the decoding. We also show that, consistent with previous studies, a time lag of 140 ms improves the accuracy.\n\nOur future work will evaluate the performance of the Kalman filter for on-line neural control of cursor motion in the task described here. Additionally, we are exploring alternative measurement noise models, non-linear system models, and non-linear particle filter decoding methods. Finally, to get a complete picture of current methods, we are pursuing further comparisons with population vector methods [7] and particle filtering techniques [2].\n\nAcknowledgments. This work was supported in part by: the DARPA Brain Machine Interface Program, NINDS Neural Prosthetics Program and Grant #NS25074, and the National Science Foundation (ITR Program award #0113679). We thank J. Dushanova, C. Vargas, L. Lennox, and M. Fellows for their assistance.\n\nReferences\n\n[1] Brown, E., Frank, L., Tang, D., Quirk, M., and Wilson, M. (1998). A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. J. of Neuroscience, 18(18):7411\u20137425.\n\n[2] Gao, Y., Black, M. J., Bienenstock, E., Shoham, S., and Donoghue, J. P. (2002). Probabilistic inference of hand motion from neural activity in motor cortex. Advances in Neural Information Processing Systems 14, The MIT Press.\n\n[3] Gelb, A. (Ed.) (1974). Applied Optimal Estimation. MIT Press.\n\n[4] Georgopoulos, A., Schwartz, A., and Kettner, R. (1986). Neural population coding of movement direction. Science, 233:1416\u20131419.\n\n[5] Helms Tillery, S., Taylor, D., Isaacs, R., Schwartz, A. 
(2000). Online control of a prosthetic arm from motor cortical signals. Soc. for Neuroscience Abst., Vol. 26.\n\n[6] Maynard, E., Nordhausen, C., Normann, R. (1997). The Utah intracortical electrode array: A recording structure for potential brain-computer interfaces. Electroencephalography and Clinical Neurophysiology, 102:228\u2013239.\n\n[7] Moran, D. and Schwartz, A. B. (1999). Motor cortical representation of speed and direction during reaching. J. of Neurophysiology, 82(5):2676\u20132692.\n\n[8] Paninski, L., Fellows, M., Hatsopoulos, N., and Donoghue, J. P. (2001). Temporal tuning properties for hand position and velocity in motor cortical neurons. Submitted, J. of Neurophysiology.\n\n[9] Serruya, M. D., Hatsopoulos, N. G., Paninski, L., Fellows, M. R., and Donoghue, J. P. (2002). Brain-machine interface: Instant neural control of a movement signal. Nature, 416:141\u2013142.\n\n[10] Serruya, M., Hatsopoulos, N., Donoghue, J. (2000). Assignment of primate M1 cortical activity to robot arm position with Bayesian reconstruction algorithm. Soc. for Neuro. Abst., Vol. 26.\n\n[11] Taylor, D., Tillery, S., Schwartz, A. (2002). Direct cortical control of 3D neuroprosthetic devices. Science, Jun. 7;296(5574):1829\u20131832.\n\n[12] Warland, D., Reinagel, P., and Meister, M. (1997). Decoding visual information from a population of retinal ganglion cells. J. of Neurophysiology, 78(5):2336\u20132350.\n\n[13] Welch, G. and Bishop, G. (2001). An introduction to the Kalman filter. Technical Report TR 95-041, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3175.\n\n[14] Wessberg, J., Stambaugh, C., Kralik, J., Beck, P., Laubach, M., Chapin, J., Kim, J., Biggs, S., Srinivasan, M., and Nicolelis, M. (2000). Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature, 408:361\u2013365.\n\n[15] Wu, W., Black, M. J., Gao, Y., Bienenstock, E., Serruya, M., and Donoghue, J. 
P. (2002). Inferring hand motion from multi-cell recordings in motor cortex using a Kalman filter. SAB\u201902 Workshop on Motor Control in Humans and Robots: On the Interplay of Real Brains and Artificial Devices, Aug. 10, 2002, Edinburgh, Scotland, pp. 66\u201373.\n\n[16] Zhang, K., Ginzburg, I., McNaughton, B., Sejnowski, T. (1998). Interpreting neuronal population activity by reconstruction: Unified framework with application to hippocampal place cells. J. Neurophysiol., 79:1017\u20131044.", "award": [], "sourceid": 2178, "authors": [{"given_name": "W", "family_name": "Wu", "institution": null}, {"given_name": "M.", "family_name": "Black", "institution": null}, {"given_name": "Y.", "family_name": "Gao", "institution": null}, {"given_name": "M.", "family_name": "Serruya", "institution": null}, {"given_name": "A.", "family_name": "Shaikhouni", "institution": null}, {"given_name": "J.", "family_name": "Donoghue", "institution": null}, {"given_name": "Elie", "family_name": "Bienenstock", "institution": null}]}