{"title": "Incremental Learning for Visual Tracking", "book": "Advances in Neural Information Processing Systems", "page_first": 793, "page_last": 800, "abstract": null, "full_text": " Incremental Learning for Visual Tracking\n\nJongwoo Lim (University of Illinois, jlim1@uiuc.edu), David Ross (University of Toronto, dross@cs.toronto.edu), Ruei-Sung Lin (University of Illinois, rlin1@uiuc.edu), Ming-Hsuan Yang (Honda Research Institute, myang@honda-ri.com)\n\nAbstract\n\nMost existing tracking algorithms construct a representation of a target object before the tracking task starts, and utilize invariant features to handle appearance variation of the target caused by changes in lighting, pose, and view angle. In this paper, we present an efficient and effective online algorithm that incrementally learns and adapts a low dimensional eigenspace representation to reflect appearance changes of the target, thereby facilitating the tracking task. Furthermore, our incremental method correctly updates the sample mean and the eigenbasis, whereas existing incremental subspace update methods ignore the fact that the sample mean varies over time. The tracking problem is formulated as a state inference problem within a Markov Chain Monte Carlo framework and a particle filter is incorporated for propagating sample distributions over time. Numerous experiments demonstrate the effectiveness of the proposed tracking algorithm in indoor and outdoor environments where the target objects undergo large pose and lighting changes.\n\n1 Introduction\n\nThe main challenges of visual tracking can be attributed to the difficulty in handling appearance variability of a target object. Intrinsic appearance variabilities include pose variation and shape deformation of a target object, whereas extrinsic factors such as illumination change, camera motion, camera viewpoint, and occlusions inevitably cause large appearance variation. 
Due to the nature of the tracking problem, it is imperative for a tracking algorithm to model such appearance variation.\n\nHere we develop a method that, during visual tracking, constantly and efficiently updates a low dimensional eigenspace representation of the appearance of the target object. This adaptive subspace representation has several advantages. The eigenspace representation provides a compact notion of the \"thing\" being tracked rather than treating the target as a set of independent pixels, i.e., \"stuff\" [1]. The use of an incremental method continually updates the eigenspace to reflect the appearance change caused by intrinsic and extrinsic factors, thereby facilitating the tracking process. To estimate the locations of the target objects in consecutive frames, we use a sampling algorithm with likelihood estimates, which is in direct contrast to other tracking methods that usually solve complex optimization problems using gradient-descent approaches.\n\nThe proposed method differs from our prior work [14] in several aspects. First, the proposed algorithm does not require any training images of the target object before the tracking task starts. That is, our tracker learns a low dimensional eigenspace representation on-line and incrementally updates it as time progresses. (We assume, like most tracking algorithms, that the target region has been initialized in the first frame.) Second, we extend our sampling method to incorporate a particle filter so that the sample distributions are propagated over time. Based on the eigenspace model with updates, an effective likelihood estimation function is developed. Third, we extend the R-SVD algorithm [6] so that both the sample mean and the eigenbasis are correctly updated as new data arrive. Though there are numerous subspace update algorithms in the literature, only the method by Hall et al. [8] is also able to update the sample mean. 
However, their method is based on the addition of a single column (a single observation) rather than blocks (a number of observations, in our case) and thus is less efficient than ours. While our formulation provides an exact solution, their algorithm gives only approximate updates and thus may suffer from numerical instability. Finally, the proposed tracker is extended to use a robust error norm for likelihood estimation in the presence of noisy data or partial occlusions, thereby rendering more accurate and robust tracking results.\n\n2 Previous Work and Motivation\n\nBlack et al. [4] proposed a tracking algorithm using a pre-trained view-based eigenbasis representation and a robust error norm. Instead of relying on the popular brightness constancy working principle, they advocated the use of a subspace constancy assumption for visual tracking. Although their algorithm demonstrated excellent empirical results, it requires building a set of view-based eigenbases before the tracking task starts. Furthermore, their method assumes that certain factors, such as illumination conditions, do not change significantly, as the eigenbasis, once constructed, is not updated.\n\nHager and Belhumeur [7] presented a tracking algorithm to handle the geometry and illumination variations of target objects. Their method extends a gradient-based optical flow algorithm to incorporate the research findings in [2] for object tracking under varying illumination conditions. Before the tracking task starts, a set of illumination bases needs to be constructed at a fixed pose in order to account for appearance variation of the target due to lighting changes. Consequently, it is not clear whether this method is effective if a target object undergoes illumination changes at arbitrary poses.\n\nIn [9] Isard and Blake developed the Condensation algorithm for contour tracking, in which multiple plausible interpretations are propagated over time. 
Though their probabilistic approach has demonstrated success in tracking contours in clutter, the representation scheme is rather primitive, i.e., curves or splines, and is not updated as the appearance of a target varies due to pose or illumination change.\n\nMixture models have been used to describe appearance change for motion estimation [3] [10]. In Black et al. [3], four possible causes are identified in a mixture model for estimating appearance change in consecutive frames, and thereby more reliable image motion can be obtained. A more elaborate mixture model with an online EM algorithm was recently proposed by Jepson et al. [10], in which they use three components and wavelet filters to account for appearance changes during tracking. Their method is able to handle variations in pose, illumination and expression. However, their WSL appearance model treats pixels within the target region independently, and therefore does not have a notion of the \"thing\" being tracked. This may result in modeling the background rather than the foreground, and in failure to track the target.\n\nIn contrast to the eigentracking algorithm [4], our algorithm does not require a training phase but learns the eigenbases on-line during the object tracking process, and constantly updates this representation as the appearance changes due to pose, view angle, and illumination variation. Further, our method uses a particle filter for motion parameter estimation rather than the Gauss-Newton method, which often gets stuck in local minima or is distracted by outliers [4]. Our appearance-based model provides a richer description than the simple curves or splines used in [9], and has a notion of the \"thing\" being tracked. In addition, the learned representation can be utilized for other tasks such as object recognition. In this work, an eigenspace representation is learned directly from pixel values within a target object in the image space. 
Experiments show that good tracking results can be obtained with this representation without resorting to the wavelets used in [10], and better performance can potentially be achieved using wavelet filters. Note also that the view-based eigenspace representation has demonstrated its ability to model the appearance of objects at different poses [13] and under different lighting conditions [2].\n\n3 Incremental Learning for Tracking\n\nWe present the details of the proposed incremental learning algorithm for object tracking in this section.\n\n3.1 Incremental Update of Eigenbasis and Mean\n\nThe appearance of a target object may change drastically due to intrinsic and extrinsic factors as discussed earlier. Therefore it is important to develop an efficient algorithm to update the eigenspace as the tracking task progresses. Numerous algorithms have been developed to update the eigenbasis from a time-varying covariance matrix as more data arrive [6] [8] [11] [5]. However, most methods assume zero mean in updating the eigenbasis, except the method by Hall et al. [8], which considers the change of the mean as each new datum arrives. Their update algorithm handles only one datum per update and gives approximate results, while our formulation handles multiple data at the same time and renders exact solutions.\n\nWe extend the classic R-SVD method [6] to update the eigenbasis while taking the shift of the sample mean into account. To the best of our knowledge, this formulation with mean update is new in the literature.\n\nGiven a d \times n data matrix A = {I_1, . . . , I_n} where each column I_i is an observation (a d-dimensional image vector in this paper), we can compute the singular value decomposition (SVD) of A, i.e., A = U \Sigma V^T. When a d \times m matrix E of new observations is available, the R-SVD algorithm efficiently computes the SVD of the matrix A' = (A|E) = U' \Sigma' V'^T based on the SVD of A as follows:\n\n1. 
Apply QR decomposition to the component of E orthogonal to U, i.e., (I - U U^T)E, to obtain an orthonormal basis \tilde{E}, and let U'' = (U | \tilde{E}).\n\n2. Let V'' = [[V, 0], [0, I_m]], where I_m is an m \times m identity matrix. It follows that\n\n\Sigma'' = U''^T A' V'' = [[U^T A V, U^T E], [\tilde{E}^T A V, \tilde{E}^T E]] = [[\Sigma, U^T E], [0, \tilde{E}^T E]],\n\nsince \tilde{E}^T A V = \tilde{E}^T U \Sigma = 0.\n\n3. Compute the SVD of \Sigma'' = \tilde{U} \tilde{\Sigma} \tilde{V}^T; the SVD of A' is then\n\nA' = U'' (\tilde{U} \tilde{\Sigma} \tilde{V}^T) V''^T = (U'' \tilde{U}) \tilde{\Sigma} (\tilde{V}^T V''^T).\n\nExploiting the properties of orthonormal bases and block structures, the R-SVD algorithm computes the new eigenbasis efficiently. The computational complexity analysis and more details are described in [6].\n\nOne problem with the R-SVD algorithm is that the eigenbasis U is computed from A A^T under a zero mean assumption. We modify the R-SVD algorithm to compute the eigenbasis with a mean update. The following derivation is based on the scatter matrix, which is the same as the covariance matrix up to a scalar factor.\n\nProposition 1. Let I_p = {I_1, I_2, . . . , I_n}, I_q = {I_{n+1}, I_{n+2}, . . . , I_{n+m}}, and I_r = (I_p | I_q). Denote the means and scatter matrices of I_p, I_q, I_r as \bar{I}_p, \bar{I}_q, \bar{I}_r and S_p, S_q, S_r respectively. Then S_r = S_p + S_q + nm/(n+m) (\bar{I}_q - \bar{I}_p)(\bar{I}_q - \bar{I}_p)^T.\n\nProof: By definition, \bar{I}_r = n/(n+m) \bar{I}_p + m/(n+m) \bar{I}_q, so \bar{I}_p - \bar{I}_r = m/(n+m) (\bar{I}_p - \bar{I}_q) and \bar{I}_q - \bar{I}_r = n/(n+m) (\bar{I}_q - \bar{I}_p). Then\n\nS_r = \sum_{i=1}^{n} (I_i - \bar{I}_r)(I_i - \bar{I}_r)^T + \sum_{i=n+1}^{n+m} (I_i - \bar{I}_r)(I_i - \bar{I}_r)^T\n= \sum_{i=1}^{n} (I_i - \bar{I}_p + \bar{I}_p - \bar{I}_r)(I_i - \bar{I}_p + \bar{I}_p - \bar{I}_r)^T + \sum_{i=n+1}^{n+m} (I_i - \bar{I}_q + \bar{I}_q - \bar{I}_r)(I_i - \bar{I}_q + \bar{I}_q - \bar{I}_r)^T\n= S_p + n (\bar{I}_p - \bar{I}_r)(\bar{I}_p - \bar{I}_r)^T + S_q + m (\bar{I}_q - \bar{I}_r)(\bar{I}_q - \bar{I}_r)^T\n= S_p + S_q + (nm^2/(n+m)^2 + n^2 m/(n+m)^2) (\bar{I}_p - \bar{I}_q)(\bar{I}_p - \bar{I}_q)^T\n= S_p + S_q + nm/(n+m) (\bar{I}_p - \bar{I}_q)(\bar{I}_p - \bar{I}_q)^T.\n\nNow let \hat{I}_p = {I_1 - \bar{I}_p, . . . , I_n - \bar{I}_p}, \hat{I}_q = {I_{n+1} - \bar{I}_q, . . . , I_{n+m} - \bar{I}_q}, and \hat{I}_r = {I_1 - \bar{I}_r, . . . , I_{n+m} - \bar{I}_r}, with SVD \hat{I}_r = U_r \Sigma_r V_r^T. Let \tilde{E} = (\hat{I}_q | \sqrt{nm/(n+m)} (\bar{I}_p - \bar{I}_q)). By Proposition 1, S_r = (\hat{I}_p | \tilde{E})(\hat{I}_p | \tilde{E})^T. Therefore, we compute the SVD of (\hat{I}_p | \tilde{E}) to get U_r. 
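As an illustration, the combined mean-and-basis update can be sketched in NumPy as follows. This is our own sketch, not the authors' code: all variable names are ours, and correctness can be checked by comparing the returned singular values against a batch SVD of the re-centred combined data.

```python
import numpy as np

def update_mean_and_basis(mean_p, U, S, n, new_data):
    """Update the sample mean and eigenbasis when a block of m new
    observations arrives (a sketch of the mean-corrected R-SVD)."""
    m = new_data.shape[1]
    mean_q = new_data.mean(axis=1, keepdims=True)
    mean_r = (n * mean_p + m * mean_q) / (n + m)
    # Augmented block: new data centred at its own mean, plus one
    # correction column so the combined scatter matrix is reproduced
    # exactly (Proposition 1).
    E = np.hstack([new_data - mean_q,
                   np.sqrt(n * m / (n + m)) * (mean_p - mean_q)])
    # R-SVD step: split E into its components inside and orthogonal to U.
    UtE = U.T @ E
    Q, R = np.linalg.qr(E - U @ UtE)
    K = np.block([[np.diag(S), UtE],
                  [np.zeros((R.shape[0], S.size)), R]])
    Uk, S_r, _ = np.linalg.svd(K, full_matrices=False)
    U_r = np.hstack([U, Q]) @ Uk
    return mean_r, U_r, S_r
```

On random data, the leading singular values returned here agree with those of a batch SVD of the re-centred combined data to machine precision; in a tracker one would additionally truncate U_r and S_r to the desired number of eigenvectors.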
This can be done efficiently by the R-SVD algorithm as described above.\n\nIn summary, given the mean \bar{I}_p and the SVD of the existing data I_p, i.e., U_p \Sigma_p V_p^T, and the new data I_q, we can compute the mean \bar{I}_r and the SVD of I_r, i.e., U_r \Sigma_r V_r^T, as follows:\n\n1. Compute \bar{I}_r = n/(n+m) \bar{I}_p + m/(n+m) \bar{I}_q, and \tilde{E} = (I_q - \bar{I}_q 1_{1 \times m} | \sqrt{nm/(n+m)} (\bar{I}_p - \bar{I}_q)).\n\n2. Apply the R-SVD algorithm to (U_p \Sigma_p V_p^T) and \tilde{E} to obtain (U_r \Sigma_r V_r^T).\n\nIn numerous vision problems, we can further exploit the low dimensional approximation of image data and put larger weights on the recent observations, or equivalently downweight the contributions of previous observations. For example, as the appearance of a target object gradually changes, we may want to put more weight on recent observations in updating the eigenbasis, since they are more likely to resemble the current appearance of the target. A forgetting factor f can be used under this premise, as suggested in [11], i.e., A' = (f A | E) = (U (f \Sigma) V^T | E), where A and A' are the original and weighted data matrices, respectively.\n\n3.2 Sequential Inference Model\n\nThe visual tracking problem is cast as an inference problem with a Markov model and a hidden state variable, where the state variable X_t describes the affine motion parameters (and thereby the location) of the target at time t. Given a set of observed images \mathcal{I}_t = {I_1, . . . , I_t}, we aim to estimate the value of the hidden state variable X_t. Using Bayes' theorem, we have\n\np(X_t | \mathcal{I}_t) \propto p(I_t | X_t) \int p(X_t | X_{t-1}) p(X_{t-1} | \mathcal{I}_{t-1}) dX_{t-1}.\n\nThe tracking process is governed by the observation model p(I_t | X_t), where we estimate the likelihood of observing I_t given X_t, and by the dynamical model between two states, p(X_t | X_{t-1}). The Condensation algorithm [9], based on factored sampling, approximates an arbitrary distribution of observations with a stochastically generated set of weighted samples. 
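For concreteness, one round of factored sampling (resample by weight, diffuse with a Gaussian dynamical model, reweight by the observation likelihood) can be sketched as follows. This is our own simplified sketch with generic state vectors, not the tracker's actual implementation; the dynamical and observation models are placeholders.

```python
import numpy as np

def condensation_step(particles, weights, likelihood, sigmas, rng):
    """One step of factored sampling (simplified sketch).
    particles: (N, D) array of states; weights: (N,) normalized weights."""
    N = len(particles)
    # 1. Resample particle indices in proportion to their weights.
    idx = rng.choice(N, size=N, p=weights)
    resampled = particles[idx]
    # 2. Diffuse each particle with the Gaussian dynamical model.
    predicted = resampled + rng.normal(scale=sigmas, size=resampled.shape)
    # 3. Reweight each particle by the observation likelihood.
    new_weights = np.array([likelihood(x) for x in predicted])
    new_weights /= new_weights.sum()
    return predicted, new_weights
```

Iterating this step with a likelihood peaked at the true state drives the weighted particle mean toward that state, which is the behaviour the tracker relies on.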
We use a variant of the Condensation algorithm to model the distribution over the object's location as it evolves over time.\n\n3.3 Dynamical and Observation Models\n\nThe motion of a target object between two consecutive frames can be approximated by an affine image warp. In this work, we use the six parameters of an affine transform to model the state transition from X_{t-1} to X_t of the target object being tracked. Let X_t = (x_t, y_t, \theta_t, s_t, \alpha_t, \phi_t), where x_t, y_t, \theta_t, s_t, \alpha_t, \phi_t denote the x, y translation, rotation angle, scale, aspect ratio, and skew direction at time t. Each parameter in X_t is modeled independently by a Gaussian distribution around its counterpart in X_{t-1}. That is,\n\np(X_t | X_{t-1}) = N(X_t; X_{t-1}, \Psi)\n\nwhere \Psi is a diagonal covariance matrix whose elements are the corresponding variances of the affine parameters, i.e., \sigma_x^2, \sigma_y^2, \sigma_\theta^2, \sigma_s^2, \sigma_\alpha^2, \sigma_\phi^2.\n\nSince our goal is to use a representation to model the \"thing\" that we are tracking, we model the image observations using a probabilistic interpretation of principal component analysis [16]. Given an image patch predicted by X_t, we assume the observed image I_t was generated from a subspace spanned by U and centered at \mu. The probability that a sample is generated from the subspace is inversely proportional to the distance d from the sample to the reference point (i.e., center) of the subspace, which can be decomposed into the distance-to-subspace term d_t and the distance-within-subspace term d_w, measured from the projected sample to the subspace center. This distance formulation, based on an orthonormal subspace and its complement space, is similar in spirit to [12].\n\nThe probability of a sample being generated from the subspace, p_{d_t}(I_t | X_t), is governed by a Gaussian distribution:\n\np_{d_t}(I_t | X_t) = N(I_t; \mu, U U^T + \epsilon I)\n\nwhere I is an identity matrix, \mu is the mean, and the \epsilon I term corresponds to the additive Gaussian noise in the observation process. 
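Concretely, the exponent of this distance-to-subspace term reduces to the squared norm of the reconstruction residual, which can be computed without forming the d x d covariance. The following is our own sketch, assuming U has orthonormal columns.

```python
import numpy as np

def dist_to_subspace_sq(patch, mean, U):
    """Squared distance from an observation to the subspace spanned by
    the orthonormal columns of U, centred at mean (our sketch)."""
    r = patch - mean                  # centre the observation
    c = U.T @ r                       # coordinates within the subspace
    # ||r - U U^T r||^2 = ||r||^2 - ||U^T r||^2 for orthonormal U
    return float(r @ r - c @ c)
```

An observation lying exactly in the subspace has distance zero; the observation likelihood decays exponentially in this quantity.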
It can be shown [15] that the negative exponential distance from I_t to the subspace spanned by U, i.e., exp(-||(I_t - \mu) - U U^T (I_t - \mu)||^2), is proportional to N(I_t; \mu, U U^T + \epsilon I) as \epsilon \to 0.\n\nWithin the subspace, the likelihood of the projected sample can be modeled by the Mahalanobis distance from the mean as follows:\n\np_{d_w}(I_t | X_t) = N(I_t; \mu, U \Sigma^{-2} U^T)\n\nwhere \mu is the center of the subspace and \Sigma is the matrix of singular values corresponding to the columns of U. Put together, the likelihood of a sample being generated from the subspace is governed by\n\np(I_t | X_t) = p_{d_t}(I_t | X_t) p_{d_w}(I_t | X_t) = N(I_t; \mu, U U^T + \epsilon I) N(I_t; \mu, U \Sigma^{-2} U^T). (1)\n\nGiven a drawn sample X_t and the corresponding image region I_t, we aim to compute p(I_t | X_t) using (1). To minimize the effects of noisy pixels, we utilize a robust error norm [4], \rho(x, \sigma) = x^2 / (\sigma^2 + x^2), instead of the Euclidean norm d(x) = ||x||^2, to ignore the \"outlier\" pixels (i.e., the pixels that are not likely to appear inside the target region given the current eigenspace). We use a method similar to that in [4] to compute d_t and d_w. This robust error norm is especially helpful when we use a rectangular region to enclose the target, since such a region inevitably contains some noisy background pixels.\n\n4 Experiments\n\nTo test the performance of the proposed tracker, we collected a number of videos recorded in indoor and outdoor environments where the targets change pose under different lighting conditions. Each video consists of 320 x 240 gray-scale images and is recorded at 15 frames per second unless specified otherwise. For the eigenspace representation, each target image region is resized to a 32 x 32 patch, and the number of eigenvectors used in all experiments is set to 16, though fewer eigenvectors may also work well. Implemented in MATLAB with MEX, our algorithm runs at 4 frames per second on a standard computer with 200 particles. 
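As an implementation note, the robust error norm \rho(x, \sigma) = x^2/(\sigma^2 + x^2) from Section 3.3 saturates at 1 for large residuals, so outlier pixels cannot dominate the accumulated distance. A pixel-wise sketch (the value of \sigma here is our choice, not the paper's):

```python
import numpy as np

def robust_residual_sq(residual, sigma=0.1):
    """Sum of pixel-wise robust errors rho(x, sigma) = x^2 / (sigma^2 + x^2)
    (a sketch). Each term is bounded by 1, unlike the squared Euclidean
    norm, so a few outlier pixels cannot dominate the total."""
    r2 = residual ** 2
    return float((r2 / (sigma ** 2 + r2)).sum())
```

Replacing the squared residual with this norm when accumulating d_t and d_w is what lets the tracker tolerate background pixels inside the rectangular target window.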
We present some tracking results in this section; more tracking results as well as videos can be found at http://vision.ucsd.edu/~jwlim/ilt/.\n\n4.1 Experimental Results\n\nFigure 1 shows the tracking results on a challenging sequence recorded with a moving digital camera in which a person moves from a dark room toward a bright area while changing his pose, moving underneath spotlights, changing facial expressions, and taking off his glasses. All the eigenbases are constructed automatically from scratch and constantly updated to model the appearance of the target object as it undergoes appearance changes. Even with the significant camera motion and low frame rate (which makes the motions between frames larger, equivalent to tracking fast-moving objects), our tracker stays stably on the target throughout the sequence.\n\nThe second sequence contains an animal doll moving in different poses, scales, and lighting conditions, as shown in Figure 2. Experimental results demonstrate that our tracker is able to follow the target as it undergoes large pose changes, cluttered backgrounds, and lighting variation. Notice that the non-convex target object is localized with an enclosing rectangular window, which inevitably contains some background pixels in its appearance representation. The robust error norm enables the tracker to ignore background pixels and estimate the target location correctly. The results also show that our algorithm faithfully\n\nFigure 1: A person moves from a dark toward a bright area with large lighting and pose changes. The images in the second row show the current sample mean, tracked region, reconstructed image, and the reconstruction error, respectively. 
The third and fourth rows show the 10 largest eigenbases.\n\nFigure 2: An animal doll moving with large pose and lighting variation against a cluttered background.\n\nmodels the appearance of the target, as shown in the eigenbases and reconstructed images, in the presence of noisy background pixels.\n\nWe recorded a sequence to demonstrate that our tracker performs well in outdoor environments where lighting conditions change drastically. The video was acquired while a person walked underneath a trellis covered by vines. As shown in Figure 3, the cast shadows change the appearance of the target face drastically. Furthermore, the combined pose and lighting variation together with the low frame rate makes the tracking task extremely difficult. Nevertheless, the results show that our tracker follows the target accurately and robustly. Due to the heavy shadows and drastic lighting change, other tracking methods based on gradient, contour, or color information are unlikely to perform well in this case.\n\nFigure 3: A person moves underneath a trellis with large illumination change and cast shadows while changing his pose. More results can be found on the project web page.\n\n4.2 Discussion\n\nThe success of our tracker can be attributed to several factors. It is well known that the appearance of an object undergoing pose change can be modeled well by a view-based representation [13]. Meanwhile, at a fixed pose, the appearance of an object under different illumination conditions can be approximated well by a low dimensional subspace [2]. Our empirical results show that these variations can be learned on-line without any prior training phase, and that the changes caused by cast and attached shadows can still be approximated by a linear subspace to some extent. We show a few failure cases on the web site mentioned earlier. 
Typically, the failure happens when there is a combination of fast pose change and drastic illumination change.\n\nIn this paper, we do not directly address the partial occlusion problem. Empirical results show that temporary and partial occlusions can be handled by our method through the robust error norm and the constant update of the eigenspace. Nevertheless, situations arise where we may have prior knowledge of the objects being tracked and can exploit such information for better occlusion handling.\n\nTo demonstrate the potency of our modified R-SVD algorithm in faithfully modeling the object appearance, we compare the reconstructed images using our method and a conventional SVD algorithm. In Figure 4, the first row contains a set of images tracked by our tracker; the second and fourth rows show the reconstructed images using the 16 eigenvectors obtained after 121 incremental updates over 605 frames (with the block size set to 5), and the top 16 eigenvectors obtained by a conventional SVD algorithm using all 605 tracked images. Note that we only maintained 16 eigenvectors during tracking and discarded the remaining eigenvectors at each update. The residue images are presented in the third and fifth rows, and the average L2 reconstruction error per pixel is 5.73 x 10^-2 and 5.65 x 10^-2 for our modified R-SVD method and the conventional SVD algorithm, respectively. The figure and the average reconstruction errors show that our modified R-SVD method is able to effectively model the object appearance without losing detailed information.\n\n5 Conclusions and Future Work\n\nWe have presented an appearance-based tracker that incrementally learns a low dimensional eigenspace representation for object tracking while the target undergoes pose, illumination and appearance changes. 
Whereas most tracking algorithms operate on the premise that the object appearance or ambient lighting conditions do not change as time progresses, our method adapts the model representation to reflect appearance variation of the target, thereby facilitating the tracking task. In contrast to existing incremental subspace methods, our R-SVD method updates the mean and the eigenbasis accurately and efficiently, and thereby learns a good eigenspace representation to faithfully model the appearance of the target being tracked. Our experiments demonstrate the effectiveness of the proposed tracker in indoor and outdoor environments where the target objects undergo large pose and lighting changes.\n\nThe current dynamical model in our sampling method is based on a Gaussian distribution, but the dynamics could be learned from exemplars for more efficient parameter estimation. Our algorithm can also be extended to construct a set of eigenbases for modeling nonlinear aspects of appearance variation more precisely and automatically. We aim to address these issues in future work.\n\nFigure 4: Reconstructed images and errors using our algorithm and the conventional SVD algorithm.\n\nReferences\n\n[1] E. H. Adelson and J. R. Bergen. The plenoptic function and the elements of early vision. In M. Landy and J. A. Movshon, editors, Computational Models of Visual Processing, pp. 1-20. MIT Press, 1991.\n[2] P. Belhumeur and D. Kriegman. What is the set of images of an object under all possible lighting conditions? In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 270-277, 1997.\n[3] M. J. Black, D. J. Fleet, and Y. Yacoob. A framework for modeling appearance change in image sequences. In Proceedings of the Sixth IEEE International Conference on Computer Vision, pp. 660-667, 1998.\n[4] M. J. Black and A. D. Jepson. Eigentracking: Robust matching and tracking of articulated objects using view-based representation. 
In Proceedings of the European Conference on Computer Vision, pp. 329-342, 1996.\n[5] M. Brand. Incremental singular value decomposition of uncertain data with missing values. In Proceedings of the Seventh European Conference on Computer Vision, volume 4, pp. 707-720, 2002.\n[6] G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, 1996.\n[7] G. Hager and P. Belhumeur. Real-time tracking of image regions with changes in geometry and illumination. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 403-410, 1996.\n[8] P. Hall, D. Marshall, and R. Martin. Incremental eigenanalysis for classification. In Proceedings of British Machine Vision Conference, pp. 286-295, 1998.\n[9] M. Isard and A. Blake. Contour tracking by stochastic propagation of conditional density. In Proceedings of the Fourth European Conference on Computer Vision, volume 2, pp. 343-356, 1996.\n[10] A. D. Jepson, D. J. Fleet, and T. F. El-Maraghi. Robust online appearance models for visual tracking. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pp. 415-422, 2001.\n[11] A. Levy and M. Lindenbaum. Sequential Karhunen-Loeve basis extraction and its application to images. IEEE Transactions on Image Processing, 9(8):1371-1374, 2000.\n[12] B. Moghaddam and A. Pentland. Probabilistic visual learning for object recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):696-710, 1997.\n[13] H. Murase and S. Nayar. Visual learning and recognition of 3D objects from appearance. International Journal of Computer Vision, 14(1):5-24, 1995.\n[14] D. Ross, J. Lim, and M.-H. Yang. Adaptive probabilistic visual tracking with incremental subspace update. In Proceedings of the Eighth European Conference on Computer Vision, volume 2, pp. 470-482, 2004.\n[15] S. Roweis. EM algorithms for PCA and SPCA. In Advances in Neural Information Processing Systems 10, pp. 
626-632, 1997.\n[16] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61(3):611-622, 1999.\n", "award": [], "sourceid": 2641, "authors": [{"given_name": "Jongwoo", "family_name": "Lim", "institution": null}, {"given_name": "David", "family_name": "Ross", "institution": null}, {"given_name": "Ruei-sung", "family_name": "Lin", "institution": null}, {"given_name": "Ming-Hsuan", "family_name": "Yang", "institution": null}]}