{"title": "Mixture of time-warped trajectory models for movement decoding", "book": "Advances in Neural Information Processing Systems", "page_first": 433, "page_last": 441, "abstract": "Applications of Brain-Machine-Interfaces typically estimate user intent based on biological signals that are under voluntary control. For example, we might want to estimate how a patient with a paralyzed arm wants to move based on residual muscle activity. To solve such problems it is necessary to integrate obtained information over time. To do so, state of the art approaches typically use a probabilistic model of how the state, e.g. position and velocity of the arm, evolves over time \u2013 a so-called trajectory model. We wanted to further develop this approach using two intuitive insights: (1) At any given point of time there may be a small set of likely movement targets, potentially identified by the location of objects in the workspace or by gaze information from the user. (2) The user may want to produce movements at varying speeds. We thus use a generative model with a trajectory model incorporating these insights. Approximate inference on that generative model is implemented using a mixture of extended Kalman filters. We find that the resulting algorithm allows us to decode arm movements dramatically better than when we use a trajectory model with linear dynamics.", "full_text": "Mixture of time-warped trajectory models for movement decoding\n\nElaine A. Corbett, Eric J. Perreault and Konrad P. K\u00f6rding\n\nNorthwestern University\n\nChicago, IL 60611\n\necorbett@u.northwestern.edu\n\nAbstract\n\nApplications of Brain-Machine-Interfaces typically estimate user intent based on biological signals that are under voluntary control. For example, we might want to estimate how a patient with a paralyzed arm wants to move based on residual muscle activity. 
To solve such problems it is necessary to integrate obtained information over time. To do so, state of the art approaches typically use a probabilistic model of how the state, e.g. position and velocity of the arm, evolves over time \u2013 a so-called trajectory model. We wanted to further develop this approach using two intuitive insights: (1) At any given point of time there may be a small set of likely movement targets, potentially identified by the location of objects in the workspace or by gaze information from the user. (2) The user may want to produce movements at varying speeds. We thus use a generative model with a trajectory model incorporating these insights. Approximate inference on that generative model is implemented using a mixture of extended Kalman filters. We find that the resulting algorithm allows us to decode arm movements dramatically better than when we use a trajectory model with linear dynamics.\n\n1 Introduction\n\nWhen patients have lost a limb or the ability to communicate with the outside world, brain machine interfaces (BMIs) are often used to enable robotic prostheses or restore communication. To achieve this, the user's intended state of the device must be decoded from biological signals. In the context of Bayesian statistics, two aspects are important for the design of an estimator of a temporally evolving state: the observation model, which describes how measured variables relate to the system\u2019s state, and the trajectory model, which describes how the state changes over time in a probabilistic manner. Following this logic many recent BMI applications have relied on Bayesian estimation for a wide range of problems including the decoding of intended human [1] and animal [2] movements. 
In the context of BMIs, Bayesian approaches offer a principled way of formalizing the uncertainty about signals and thus often result in improvements over other signal processing techniques [1]-[3].\n\nMost work on state estimation in dynamical systems has assumed linear dynamics and Gaussian noise. Under these circumstances, efficient algorithms result from belief propagation. The most frequent application uses the Kalman filter (KF), which recursively combines noisy state observations with the probabilistic evolution of state defined by the trajectory model to estimate the marginal distribution over states [4]. Such approaches have been used widely for applications including upper [1] and lower [5] extremity prosthetic devices, functional electric stimulation [6] and human computer interactions [7]. As these algorithms are so commonly used, it seems promising to develop extensions to nonlinear trajectory models that may better describe the probabilistic distribution of movements in everyday life.\n\nOne salient departure from the standard assumptions is that people tend to produce both slow and fast movements, depending on the situation. Models with linear dynamics only allow such deviation through the noise term, which makes these models poor at describing the natural variation of movement speeds during real world tasks. Explicitly incorporating movement speed into the trajectory model should lead to better movement estimates.\n\nKnowledge of the target position should also strongly affect trajectory models. After all, we tend to accelerate our arm early during movement and slow down later on. Target information can be linearly incorporated into the trajectory model, and this has greatly improved predictions [8]-[12]. Alternatively, if there are a small number of potential targets then a mixture of trajectory models approach [13] can be used. 
Here we are interested in the case where available data provide a prior over potential targets but where movement targets may be anywhere. We want to incorporate target uncertainty and allow generalization to novel targets.\n\nPrior information about potential targets could come from a number of sources but would generally be noisy. For example, activity in the dorsal premotor cortex provides information about intended target location prior to movement and may be used where such recordings are available [14]. Target information may also be found noninvasively by tracking eye movements. However, such data will generally provide non-zero priors for a number of possible target locations as the subject saccades over the scene. While subjects almost always look at a target before reaching for it [15], there may be a delay of up to a second between looking at the target and the reach \u2013 a time interval over which up to 3 saccades are typically made. Each of these fixations could be the target. Hence, a probabilistic distribution of targets is appropriate when using either neural recordings or eye tracking to estimate potential reach targets.\n\nHere we present an algorithm that uses a mixture of extended Kalman Filters (EKFs) to combine our insights related to the variation of movement speed and the availability of probabilistic target knowledge. Each of the mixture components allows the speed of the movement to vary continuously over time. We tested how well we could use EMGs and eye movements to decode hand position of humans performing a three-dimensional large workspace reaching task. We find that using a trajectory model that allows for probabilistic target information and variation of speed leads to dramatic improvements in decoding quality. 
\n\n2 General Decoding Setting\n\nWe wanted to test how well different decoding algorithms can decode human movement over a wide range of dynamics. While many recent studies have looked at more restrictive, two-dimensional movements, a system to restore arm function should produce a wide range of 3D trajectories. We recorded arm kinematics and EMGs of healthy subjects during unconstrained 3D reaches to targets over a large workspace. Two healthy subjects were asked to reach at slow, normal and fast speeds, as they would in everyday life. Subjects were seated as they reached towards 16 LEDs in blocks of 150 s, which were located on two planes positioned such that all targets were just reachable (Fig. 1A). The target LED was lit for one second prior to an auditory go cue, at which time the subject would reach to the target at the appropriate speed. Slow, normal and fast reaches were allotted 3 s, 1.5 s and 1 s respectively; however, subjects determined the speed. An approximate total of 450 reaches were performed per subject. The subjects provided informed consent, and the protocol was approved by the Northwestern University Institutional Review Board. EMG signals were measured from the pectoralis major and the three deltoid muscles of the shoulder. This represents a small subset of the muscles involved in reaching, and approximates those muscles retaining some voluntary control following mid-level cervical spinal cord injuries. The EMG signals were band-pass filtered between 10 and 1,000 Hz, and subsequently anti-alias filtered. Hand, wrist, shoulder and head positions were tracked using an Optotrak motion analysis system. 
We simultaneously recorded eye movements with an ASL EYETRAC-6 head-mounted eye tracker.\n\nApproximately 25% of the reaches were assigned to the test set, and the rest were used for training. Reaches for which either the motion capture data was incomplete, or there was visible motion artifact on the EMG, were removed. As the state we used hand positions and joint angles (3 shoulder, 2 elbow; position, velocity and acceleration; 24 dimensions). Joint angles were calculated from the shoulder and wrist marker data using digitized bony landmarks which defined a coordinate system for the upper limb as detailed by Wu et al. [16]. As the motion data were sampled at 60 Hz, the mean absolute value of the EMG in the corresponding 16.7 ms windows was used as an observation of the state at each time-step. Algorithm accuracy was quantified by normalizing the root-mean-squared error by the straight line distance between the first and final position of the endpoint for each reach. We compared the algorithms statistically using repeated measures ANOVAs with Tukey post-hoc tests, treating reach and subject as random effects.\n\nIn the rest of the paper we will ask how well these reaching movements can be decoded from EMG and eye-tracking data.\n\nFigure 1: A Experimental setup and B sample kinematics and processed EMGs for one reach\n\n3 Kalman Filters with Target Information\n\nAll models that we consider in this paper assume linear observations with Gaussian noise:\n\ny_t = C x_t + v_t     (1)\n\nwhere x is the state, y is the observation and v is the measurement noise with p(v) ~ N(0,R), and R is the observation covariance matrix. The model fitted the measured EMGs with an average r^2 of 0.55. 
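The linear observation model of Eq. (1) and the per-channel r^2 computation can be sketched with synthetic data. The dimensions below follow the paper (24-D state, four EMG channels), but the states, "EMG" observations, and noise level are simulated stand-ins, not the recorded data:

```python
import numpy as np

# Sketch of the linear observation model y_t = C x_t + v_t with synthetic data.
# Dimensions follow the paper (24-D state, four EMG channels); everything else
# here is simulated, not the fitted model.
rng = np.random.default_rng(0)
T, state_dim, emg_dim = 500, 24, 4
X = rng.normal(size=(T, state_dim))                      # state trajectories (synthetic)
C_true = rng.normal(size=(emg_dim, state_dim))
Y = X @ C_true.T + 0.5 * rng.normal(size=(T, emg_dim))   # noisy "EMG" observations

# Least-squares estimate of C, then the residual covariance R
sol, *_ = np.linalg.lstsq(X, Y, rcond=None)
C_hat = sol.T
resid = Y - X @ C_hat.T
R_hat = np.cov(resid.T)

# Coefficient of determination per channel, averaged (the paper reports 0.55)
ss_res = (resid ** 2).sum(axis=0)
ss_tot = ((Y - Y.mean(axis=0)) ** 2).sum(axis=0)
r2 = float(np.mean(1.0 - ss_res / ss_tot))
```

An r^2 well below 1, as found for the real EMG data, is what motivates integrating the observations over time with a trajectory model.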
This highlights the need to integrate information over time.\n\nThe standard approach also assumes linear dynamics and Gaussian process noise:\n\nx_{t+1} = A x_t + w_t     (2)\n\nwhere x_t represents the hand and joint angle positions, w is the process noise with p(w) ~ N(0,Q), and Q is the state covariance matrix. The Kalman filter does optimal inference for this generative model.\n\nThis model can effectively capture the dynamics of stereotypical reaches to a single target by appropriately tuning its parameters. However, when used to describe reaches to multiple targets, the model cannot describe target dependent aspects of reaching but boils down to a random drift model. Fast velocities are underestimated as they are unlikely under the trajectory model and there is excessive drift close to the target (Fig. 2A).\n\nIn many decoding applications we may know the subject\u2019s target. A range of recent studies have addressed the issue of incorporating this information into the trajectory model [8, 13], and we might assume the effect of the target on the dynamics to be linear. This naturally suggests adding the target to the state space, which works well in practice [9, 12]. By appending the target to the state vector (KFT), the simple linear format of the KF may be retained:\n\n[x_{t+1}; x_{T,t+1}] = A [x_t; x_{T,t}] + w_t     (3)\n\nwhere x_{T,t} is the vector of target positions, with dimensionality less than or equal to that of x_t. This trajectory model thus allows describing both the rapid acceleration that characterizes the beginning of a reach and the stabilization towards its end.\n\nWe compared the accuracy of the KF and the KFT to the Single Target Model (STM), a KF trained only on reaches to the target being tested (Fig. 2). 
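A minimal sketch of the target-augmented filter idea: the target is appended to the state and given (nearly) static dynamics, so the linear trajectory model can pull the hand toward it. The 1-D state [position, velocity, target] and all matrix values below are invented for illustration; they are not the paper's fitted 24-D model:

```python
import numpy as np

# Sketch of the KFT idea: append a (nearly) static target to the state so the
# linear dynamics draw the position toward it. All values are illustrative.
dt = 1 / 60.0                        # motion data were sampled at 60 Hz

A = np.array([[1.0, dt, 0.0],
              [-0.5, 0.9, 0.5],      # velocity drawn toward (target - position)
              [0.0, 0.0, 1.0]])      # appended target row is the identity (static)
C = np.array([[1.0, 0.0, 0.0]])      # observe position only (stand-in for EMG)
Q = np.diag([1e-4, 1e-2, 1e-8])      # almost no process noise on the target
R = np.array([[1e-2]])

def kf_step(x, P, y):
    """One predict/update recursion of the Kalman filter."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    Sig = C @ P_pred @ C.T + R                   # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(Sig)        # Kalman gain
    x_new = x_pred + (K @ (y - C @ x_pred)).ravel()
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Decode noisy position observations of a reach toward target = 1.0,
# with the target state correctly initialized.
rng = np.random.default_rng(1)
x_true = np.array([0.0, 0.0, 1.0])
x_est = np.array([0.0, 0.0, 1.0])
P = np.eye(3) * 1e-3
for _ in range(240):                             # 4 s of samples
    x_true = A @ x_true
    y = C @ x_true + rng.normal(0.0, 0.1, size=1)
    x_est, P = kf_step(x_est, P, y)
```

Because the augmented dynamics remain linear, the ordinary KF recursion applies unchanged; only the state and transition matrix grow.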
The STM represents the best possible prediction that could be obtained with a Kalman filter. Assuming the target is perfectly known, we implemented the KFT by correctly initializing the target state x_T at the beginning of the reach. We will relax this assumption below. The initial hand and joint angle positions were also assumed to be known.\n\nFigure 2: A Sample reach and predictions and B average accuracies with standard errors for KFT, KF and MTM.\n\nConsistent with the recent literature, both methods that incorporated target information produced higher prediction accuracy than the standard KF (both p<0.0001). Interestingly, there was no significant difference between the KFT and the STM (p=0.9). It seems that when we have knowledge of the target, we do not lose much by training a single model over the whole workspace rather than modeling the targets individually. This is encouraging, as we desire a BMI system that can generalize to any target within the workspace, not just specifically to those that are available in the training data.\n\nClearly, adding the target to the state space allows the dynamics of typical movements to be modeled effectively, resulting in dramatic increases in decoding performance.\n\n4 Time Warping\n\n4.1 Implementing a time-warped trajectory model\n\nWhile the KFT above can capture the general reach trajectory profile, it does not allow for natural variability in the speed of movements. Depending on our task objectives, which would not directly be observed by a BMI, we might lazily reach toward a target or move at maximal speed. We aim to change the trajectory model to explicitly incorporate a warping factor by which the average movement speed is scaled, allowing for such variability. 
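The effect of such a warping factor can be checked numerically: replaying a trajectory at S times its average rate gives pos_w(t) = pos(S t), so velocities scale by S and accelerations by S^2. A small sketch using an assumed minimum-jerk reach profile (an illustrative trajectory, not the paper's data):

```python
import numpy as np

# Numerical check of the time-warping relations: if a reach is replayed at S
# times its average rate, pos_w(t) = pos(S*t), so velocities scale by S and
# accelerations by S^2. The minimum-jerk profile is only an illustration.
S = 2.0

def pos(t):
    """Original-speed (S = 1) position: a smooth 1-D reach of length 1 over 1 s."""
    t = np.clip(t, 0.0, 1.0)
    return 10 * t**3 - 15 * t**4 + 6 * t**5

t1 = np.linspace(0.0, 1.0, 2001)     # original time axis
tw = np.linspace(0.0, 0.5, 2001)     # warped reach finishes in 1/S = 0.5 s

v1 = np.gradient(pos(t1), t1)        # numerical velocity, original
a1 = np.gradient(v1, t1)             # numerical acceleration, original
vw = np.gradient(pos(S * tw), tw)    # velocity of the time-warped reach
aw = np.gradient(vw, tw)

ratio_v = vw.max() / v1.max()        # expected: S
ratio_a = aw.max() / a1.max()        # expected: S**2
```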
As the movement speed will be positive in all practical cases, we model the logarithm of this factor and append it to the state vector:\n\nx_s = log(S)     (4)\n\nWe create a time-warped trajectory model by noting that if the average rate of a trajectory is to be scaled by a factor S, the position at time t will equal that of the original trajectory at time St. Differentiating, the velocity will be multiplied by S, and the acceleration by S^2. For simplicity, the trajectory noise is assumed to be additive and Gaussian, and the model is assumed to be stationary:\n\n[p_{t+1}; v_{t+1}; a_{t+1}] = [[I_p, \u0394 I_p, 0_p]; [0_p, I_p, \u0394 I_p]; [S^2 \u03b1_p, S \u03b1_v, \u03b1_a]] [p_t; v_t; a_t] + w_t,  S = e^{x_s}     (5)\n\nwhere \u0394 is the sampling interval, I_p is the p-dimensional identity matrix and 0_p is a p x p matrix of zeros. Only the terms used to predict the acceleration states (\u03b1_p, \u03b1_v, \u03b1_a) need to be estimated to build the state transition matrix, and they are scaled as a nonlinear function of x_s.\n\nAfter adding the variable movement speed to the state space the system is no longer linear. Therefore we need a different solution strategy. Instead of the typical KFT we use the Extended Kalman Filter (EKFT) to implement a nonlinear trajectory model by linearizing the dynamics around the best estimate at each time-step [17]. With this approach we add only small computational overhead to the KFT recursions.\n\n4.
2 Training the time warping model\n\nThe filter parameters were trained using a variant of the Expectation Maximization (EM) algorithm [18]. For extended Kalman filter learning the initialization of the variables may matter. S was initialized with the ground truth average reach speed for each movement relative to the average speed across all movements. The state transition parameters were estimated using nonlinear least squares regression, while C, Q and R were estimated linearly for the new system, using the maximum likelihood solution [18] (M-step). For the E-step we used a standard extended Kalman smoother. We thus found the expected values for the states given the current filter parameters. For this computation, and later when testing the algorithm, x_s was initialized to its average value across all reaches while the remaining states were initialized to their true values. The smoothed estimate for x_s was then used, along with the true values for the other states, to re-estimate the filter parameters in the M-step as before. We alternated between the E and M steps until the log likelihood converged (which it did in all cases). Following the training procedure, the diagonal entry of the state covariance matrix Q corresponding to x_s was set to the variance of the smoothed x_s over all reaches, according to how much this state should be allowed to change during prediction. This allowed the estimate of x_s to develop over the course of the reach due to the evidence provided by the observations, better capturing the dynamics of reaches at different speeds.\n\n4.
3 Performance of the time-warped EKFT\n\nIncorporating time warping explicitly into the trajectory model produced a noticeable increase in decoding performance over the KFT. As the speed state x_s is estimated throughout the course of the reach, based on the evidence provided by the observations, the trajectory model has the flexibility to follow the dynamics of the reach more accurately (Fig. 3). While at the normal self-selected speed the difference between the algorithms is small, for the slow and fast speeds, where the dynamics deviate from average, there is a clear advantage to the time warping model.\n\nFigure 3: Hand positions and predictions of the KFT and EKFT for sample reaches at A slow, B normal and C fast speeds. Note the different time scales between reaches.\n\nThe models were first trained using data from all speeds (Fig. 4A). The EKFT was 1.8% more accurate on average (p<0.01), and the effect was significant at the slow (1.9%, p<0.05) and the fast (2.8%, p<0.01), but not at the normal (p=0.3) speed. We also trained the models from data using only reaches at the self-selected normal speed, as we wanted to see if there was enough variation to effectively train the EKFT (Fig. 4B). Interestingly, the performance of the EKFT was reduced by only 0.6%, and the KFT by 1.1%. The difference in performance between the EKFT and KFT was even more pronounced on average (2.3%, p<0.001), and for the slow and fast speeds (3.6 and 4.1%, both p<0.0001). At the normal speed, the algorithms again were not statistically different (p=0.6). 
This result demonstrates that the EKFT is a practical option for a real BMI system, as it is not necessary to greatly vary the speeds while collecting training data for the model to be effective over a wide range of intended speeds.\n\nExplicitly incorporating speed information into the trajectory model helps decoding, by modeling the natural variation in volitional speed.\n\nFigure 4: Mean and standard error of EKFT and KFT accuracy at the different subject-selected speeds. Models were trained on reaches at A all speeds and B just normal speed reaches. Asterisks indicate statistically significant differences between the algorithms.\n\n5 Mixtures of Targets\n\nSo far, we have assumed that the targets of our reaches are perfectly known. In a real-world system, there will be uncertainty about the intended target of the reach. However, in typical applications there are a small number of possible objectives. Here we address this situation. Drawing on the recent literature, we use a mixture model to consider each of the possible targets [11, 13]. We condition the posterior probability for the state on the N possible targets, T:\n\np(x_t | y_{1:t}) = \u2211_{i=1}^{N} p(x_t | y_{1:t}, T_i) p(T_i | y_{1:t})     (6)\n\nUsing Bayes' Rule, this equation becomes:\n\np(x_t | y_{1:t}) = \u2211_{i=1}^{N} p(x_t | y_{1:t}, T_i) p(y_{1:t} | T_i) p(T_i) / p(y_{1:t})     (7)\n\nAs we are dealing with a mixture model, we perform the Kalman filter recursion for each possible target, x_T, and our solution is a weighted sum of the outputs. The weights are proportional to the prior for that target, p(T_i), and the likelihood of the model given that target, p(y_{1:t} | T_i). The denominator p(y_{1:t}) is independent of the target and does not need to be calculated. 
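The recursion described above can be sketched as follows: one Kalman filter runs per candidate target, each accumulating its target's prior and observation likelihood, and the decoded state is the weighted sum of the filter outputs. The 1-D dynamics and noise values below are illustrative stand-ins, not the paper's fitted model:

```python
import numpy as np

# Sketch of the mixture-of-filters recursion: one Kalman filter per candidate
# target, with weights proportional to the target prior times the likelihood
# of the observations under that filter. All values are illustrative.
dt = 1 / 60.0
A = np.array([[1.0, dt, 0.0],
              [-0.5, 0.9, 0.5],    # velocity pulled toward (target - position)
              [0.0, 0.0, 1.0]])    # target is static
C = np.array([[1.0, 0.0, 0.0]])    # observe position only
Q = np.diag([1e-4, 1e-2, 1e-8])
R = np.array([[1e-2]])

def mixture_decode(ys, targets, priors):
    n = len(targets)
    xs = [np.array([0.0, 0.0, t]) for t in targets]  # one filter per target
    Ps = [np.eye(3) * 1e-3 for _ in range(n)]
    log_w = np.log(np.asarray(priors, dtype=float))
    for y in ys:
        for i in range(n):
            x_pred = A @ xs[i]
            P_pred = A @ Ps[i] @ A.T + Q
            Sig = C @ P_pred @ C.T + R           # innovation covariance
            innov = y - C @ x_pred
            # accumulate this target's log-likelihood (Gaussian innovation)
            log_w[i] += -0.5 * (innov @ np.linalg.solve(Sig, innov)
                                + np.log(np.linalg.det(2 * np.pi * Sig)))
            K = P_pred @ C.T @ np.linalg.inv(Sig)
            xs[i] = x_pred + (K @ innov).ravel()
            Ps[i] = (np.eye(3) - K @ C) @ P_pred
        log_w -= log_w.max()                     # keep weights numerically stable
    w = np.exp(log_w)
    w /= w.sum()
    # decoded state: weighted sum of the per-target filter outputs
    return w, sum(wi * xi for wi, xi in zip(w, xs))

# True reach goes to +1; candidate targets at +1 and -1 with equal priors.
rng = np.random.default_rng(2)
x_true = np.array([0.0, 0.0, 1.0])
ys = []
for _ in range(120):
    x_true = A @ x_true
    ys.append(C @ x_true + rng.normal(0.0, 0.1, size=1))
w, x_hat = mixture_decode(ys, targets=[1.0, -1.0], priors=[0.5, 0.5])
```

Because the evidence normalizer p(y_{1:t}) is common to all components, only the unnormalized prior-times-likelihood terms need to be tracked, as the text notes.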
\n\nWe tested mixtures of both algorithms, the mKFT and mEKFT, with real uncertain priors obtained from eye-tracking in the one-second period preceding movement. As the targets were situated on two planes, the three-dimensional location of the eye gaze was found by projecting its direction onto those planes. The first, middle and last eye samples were selected, and all other samples were assigned to a group according to which of the three was closest. The mean and variance of these three groups were used to initialize three Kalman filters in the mixture model. The priors of the three groups were assigned proportional to the number of samples in them. If the subject looks at multiple positions prior to reaching, this method ensures with a high probability that the correct target was accounted for in one of the filters in the mixture.\n\nWe also compared the MTM approach of Yu et al. [13], where a different KF model was generated for each target, and a mixture is performed over these models. This approach explicitly captures the dynamics of stereotypical reaches to specific targets. Given perfect target information, it would reduce to the STM described above. Priors for the MTM were found by assigning each valid eye sample to its closest two targets, and weighting the models proportional to the number of samples assigned to the corresponding target, divided by its distance from the mean of those samples. We tried other ways of assigning priors and the one presented gave the best results.\n\nWe calculated the reduction in decoding quality when instead of perfect priors we provide eye-movement based noisy priors (Fig. 5). 
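The gaze-based prior construction described above (first, middle and last samples as group anchors, priors proportional to group size) can be sketched as follows, with synthetic 2-D samples standing in for the projected gaze data:

```python
import numpy as np

# Sketch of the gaze-prior construction: the first, middle and last eye samples
# anchor three groups, every sample joins its nearest anchor, and each group's
# mean/variance and relative size initialize one mixture component. The 2-D
# samples are synthetic stand-ins for the projected gaze data.
def gaze_priors(samples):
    samples = np.asarray(samples, dtype=float)
    anchors = samples[[0, len(samples) // 2, -1]]
    # distance from every sample to each of the three anchor fixations
    d = np.linalg.norm(samples[:, None, :] - anchors[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    means, variances, priors = [], [], []
    for g in range(3):
        grp = samples[labels == g]
        means.append(grp.mean(axis=0))
        variances.append(grp.var(axis=0))
        priors.append(len(grp) / len(samples))  # prior mass ~ group size
    return np.array(means), np.array(variances), np.array(priors)

# Pre-movement second with two fixations: 42 samples near (1, 0), then 18 near (0, 1)
rng = np.random.default_rng(3)
a = rng.normal([1.0, 0.0], 0.05, size=(42, 2))
b = rng.normal([0.0, 1.0], 0.05, size=(18, 2))
means, variances, priors = gaze_priors(np.vstack([a, b]))
```

With this grouping, any fixated location that attracts a reasonable share of samples ends up seeding one of the three mixture components.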
The accuracies of the mEKFT, the mKFT and the MTM were only degraded by 0.8, 1.9 and 2.1% respectively, compared to the perfect prior situation. The mEKFT was still close to 10% better than the KF. The mixture model framework is effective in accounting for uncertain priors.\n\nFigure 5: Mean and standard errors of accuracy for algorithms with perfect priors, and uncertain priors with full and partial training set. The asterisk indicates a statistically significant effect between the two training types, where real priors are used.\n\nHere, only reaches at normal speed were used to train the models, as this is a more realistic training set for a BMI application. This accounts for the degraded performance of the MTM with perfect priors relative to the STM from above (Fig. 2). With even more stereotyped training data for each target, the MTM doesn't generalize as well to new speeds.\n\nWe also wanted to know if the algorithms could generalize to new targets. In a real application, the available training data will generally not span the entire useable workspace. We compared the algorithms where reaches to all targets except the one being tested had been used to train the models. The performance of the MTM was significantly degraded, unsurprisingly, as it was designed for reaches to a set of known targets. Performance of the mKFT and mEKFT degraded by about 1%, but not significantly (both p>0.7), demonstrating that the continuous approach to target information is preferable when the target could be anywhere in space, not just at locations for which training data is available.\n\n6 Discussion and conclusions\n\nThe goal of this work was to design a trajectory model that would improve decoding for BMIs with an application to reaching. 
We incorporated two features that prominently influence the dynamics of natural reaching: the movement speed and the target location. Our approach is appropriate where uncertain target information is available. The model generalizes well to new regions of the workspace for which there is no training data, and across a broad range of reaching dynamics to widely spaced targets in three dimensions.\n\nThe advantages over linear models in decoding precision we report here could in principle be obtained using mixtures over many targets and speeds. While mixture models [11, 13] could allow for slow versus fast movements and any number of potential targets, this strategy will generally require many mixture components. Such an approach would require much more training data, as we have shown that it does not generalize well. It would also be run-time intensive, which is problematic for prosthetic devices that rely on low-power controllers. In contrast, the algorithm introduced here adds only a small amount of run-time in comparison to the standard KF approach. The EKF is only marginally slower than the standard KF, and the algorithm will not generally need to consider more than 3 mixture components, assuming the subject fixates the target within the second preceding the reach.\n\nIn this paper we assumed that subjects would always fixate a reach target, along with other non-targets. While this is close to the way humans usually coordinate eyes and reaches [15], there might be cases where people do not fixate a reach target. Our approach could easily be extended to deal with such situations by adding a dummy mixture component that allows the description of movements to any target.\n\nAs an alternative to mixture approaches, a system can explicitly estimate the target position in the state vector [9]. 
This approach, however, would not straightforwardly allow for the rich target information available; we look at the target but also at other locations, strongly suggesting mixture distributions. A combination of the two approaches could further improve decoding quality. We could both estimate speed and target position for the EKFT in a continuous manner while retaining the mixture over target priors.\n\nWe believe that the issues that we have addressed here are almost universal. Virtually all types of movements are executed at varying speed. A probabilistic distribution for a small number of action candidates may also be expected in most BMI applications; after all, there are usually only a small number of actions that make sense in a given environment. While this work is presented in the context of decoding human reaching, it may be applied to a wide range of BMI applications including lower limb prosthetic devices and human computer interactions, as well as different signal sources such as electrode grid recordings and electroencephalograms. The increased user control in conveying their intended movements would significantly improve the functionality of a neuroprosthetic device.\n\nAcknowledgements\n\nThe authors thank T. Haswell, E. Krepkovich, and V. Ravichandran for assistance with experiments. This work was funded by the NSF Program in Cyber-Physical Systems.\n\nReferences\n\n[1] L.R. Hochberg, M.D. Serruya, G.M. Friehs, J.A. Mukand, M. Saleh, A.H. Caplan, A. Branner, D. Chen, R.D. Penn, and J.P. Donoghue, \u201cNeuronal ensemble control of prosthetic devices by a human with tetraplegia,\u201d Nature, vol. 442, 2006, pp. 164\u2013171.\n\n[2] W. Wu, Y. Gao, E. Bienenstock, J.P. Donoghue, and M.J. 
Black, \u201cBayesian population decoding of motor cortical activity using a Kalman filter,\u201d Neural Computation, vol. 18, 2006, pp. 80\u2013118.\n\n[3] W. Wu, M.J. Black, Y. Gao, E. Bienenstock, M. Serruya, A. Shaikhouni, and J.P. Donoghue, \u201cNeural decoding of cursor motion using a Kalman filter,\u201d Advances in Neural Information Processing Systems 15: Proceedings of the 2002 Conference, 2003, p. 133.\n\n[4] R.E. Kalman, \u201cA new approach to linear filtering and prediction problems,\u201d Journal of Basic Engineering, vol. 82, 1960, pp. 35\u201345.\n\n[5] G.G. Scandaroli, G.A. Borges, J.Y. Ishihara, M.H. Terra, A.F.D. Rocha, and F.A.D.O. Nascimento, \u201cEstimation of foot orientation with respect to ground for an above knee robotic prosthesis,\u201d Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA: IEEE Press, 2009, pp. 1112\u20131117.\n\n[6] I. Cikajlo, Z. Matja\u010di\u0107, and T. Bajd, \u201cEfficient FES triggering applying Kalman filter during sensory supported treadmill walking,\u201d Journal of Medical Engineering & Technology, vol. 32, 2008, pp. 133\u2013144.\n\n[7] S. Kim, J.D. Simeral, L.R. Hochberg, J.P. Donoghue, and M.J. Black, \u201cNeural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia,\u201d Journal of Neural Engineering, vol. 5, 2008, pp. 455\u2013476.\n\n[8] L. Srinivasan, U.T. Eden, A.S. Willsky, and E.N. Brown, \u201cA state-space analysis for reconstruction of goal-directed movements using neural signals,\u201d Neural Computation, vol. 18, 2006, pp. 2465\u20132494.\n\n[9] G.H. Mulliken, S. Musallam, and R.A. Andersen, \u201cDecoding trajectories from posterior parietal cortex ensembles,\u201d Journal of Neuroscience, vol. 28, 2008, p. 12913.\n\n[10] W. Wu, J.E. Kulkarni, N.G. Hatsopoulos, and L. 
Paninski, \u201cNeural decoding of hand motion using a linear state-space model with hidden states,\u201d IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 17, 2009, p. 1.\n\n[11] J.E. Kulkarni and L. Paninski, \u201cState-space decoding of goal-directed movements,\u201d IEEE Signal Processing Magazine, vol. 25, 2008, p. 78.\n\n[12] C. Kemere and T. Meng, \u201cOptimal estimation of feed-forward-controlled linear systems,\u201d IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'05), 2005.\n\n[13] B.M. Yu, C. Kemere, G. Santhanam, A. Afshar, S.I. Ryu, T.H. Meng, M. Sahani, and K.V. Shenoy, \u201cMixture of trajectory models for neural decoding of goal-directed movements,\u201d Journal of Neurophysiology, vol. 97, 2007, p. 3763.\n\n[14] N. Hatsopoulos, J. Joshi, and J.G. O'Leary, \u201cDecoding continuous and discrete motor behaviors using motor and premotor cortical ensembles,\u201d Journal of Neurophysiology, vol. 92, 2004, p. 1165.\n\n[15] R.S. Johansson, G. Westling, A. Backstrom, and J.R. Flanagan, \u201cEye-hand coordination in object manipulation,\u201d Journal of Neuroscience, vol. 21, 2001, p. 6917.\n\n[16] G. Wu, F.C. van der Helm, H.E.J. Veeger, M. Makhsous, P. Van Roy, C. Anglin, J. Nagels, A.R. Karduna, and K. McQuade, \u201cISB recommendation on definitions of joint coordinate systems of various joints for the reporting of human joint motion\u2013Part II: shoulder, elbow, wrist and hand,\u201d Journal of Biomechanics, vol. 38, 2005, pp. 981\u2013992.\n\n[17] D. Simon, Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches, John Wiley and Sons, 2006.\n\n[18] Z. Ghahramani and G.E. Hinton, \u201cParameter estimation for linear dynamical systems,\u201d University of Toronto Technical Report CRG-TR-96-2, vol. 6, 1996. 
", "award": [], "sourceid": 509, "authors": [{"given_name": "Elaine", "family_name": "Corbett", "institution": null}, {"given_name": "Eric", "family_name": "Perreault", "institution": null}, {"given_name": "Konrad", "family_name": "Koerding", "institution": null}]}