{"title": "SpaRCS: Recovering low-rank and sparse matrices from compressive measurements", "book": "Advances in Neural Information Processing Systems", "page_first": 1089, "page_last": 1097, "abstract": "We consider the problem of recovering a matrix $\\mathbf{M}$ that is the sum of a low-rank matrix $\\mathbf{L}$ and a sparse matrix $\\mathbf{S}$ from a small set of linear measurements of the form $\\mathbf{y} = \\mathcal{A}(\\mathbf{M}) = \\mathcal{A}({\\bf L}+{\\bf S})$. This model subsumes three important classes of signal recovery problems: compressive sensing, affine rank minimization, and robust principal component analysis. We propose a natural optimization problem for signal recovery under this model and develop a new greedy algorithm called SpaRCS to solve it. SpaRCS inherits a number of desirable properties from the state-of-the-art CoSaMP and ADMiRA algorithms, including exponential convergence and efficient implementation. Simulation results with video compressive sensing, hyperspectral imaging, and robust matrix completion data sets demonstrate both the accuracy and efficacy of the algorithm.", "full_text": "SpaRCS: Recovering Low-Rank and Sparse Matrices from Compressive Measurements

Andrew E. Waters, Aswin C. Sankaranarayanan, Richard G. Baraniuk
Rice University
{andrew.e.waters, saswin, richb}@rice.edu

Abstract

We consider the problem of recovering a matrix M that is the sum of a low-rank matrix L and a sparse matrix S from a small set of linear measurements of the form y = A(M) = A(L + S). This model subsumes three important classes of signal recovery problems: compressive sensing, affine rank minimization, and robust principal component analysis. We propose a natural optimization problem for signal recovery under this model and develop a new greedy algorithm called SpaRCS to solve it. 
Empirically, SpaRCS inherits a number of desirable properties from the state-of-the-art CoSaMP and ADMiRA algorithms, including exponential convergence and efficient implementation. Simulation results with video compressive sensing, hyperspectral imaging, and robust matrix completion data sets demonstrate both the accuracy and efficacy of the algorithm.

1 Introduction

The explosion of digital sensing technology has unleashed a veritable data deluge that has pushed current signal processing algorithms to their limits. Not only are traditional sensing and processing algorithms increasingly overwhelmed by the sheer volume of sensor data, but storage and transmission of the data itself is also increasingly prohibitive without first employing costly compression techniques. This reality has driven much of the recent research on compressive data acquisition, in which data is acquired directly in a compressed format [1]. Recovery of the data typically requires finding a solution to an underdetermined linear system, which becomes feasible when the underlying data possesses special structure. Within this general paradigm, three important problem classes have received significant recent attention: compressive sensing, affine rank minimization, and robust principal component analysis (PCA).

Compressive sensing (CS): CS is concerned with the recovery of a vector x that is sparse in some transform domain [1]. Data measurements take the form y = A(x), where A is an underdetermined linear operator. To recover x, one would ideally solve

$\min \|x\|_0$ subject to $y = \mathcal{A}(x)$,   (1)

where $\|x\|_0$ is the number of non-zero components in x. Because this problem formulation is non-convex, CS recovery is typically accomplished either via convex relaxation or greedy approaches.

Affine rank minimization: The CS concept extends naturally to low-rank matrices. In the affine rank minimization problem [14, 23], we observe the linear measurements y = A(L), where L is a low-rank matrix. One important sub-problem is that of matrix completion [3, 5, 22], where A takes the form of a sampling operator. To recover L, one would ideally solve

$\min \operatorname{rank}(L)$ subject to $y = \mathcal{A}(L)$.   (2)

As with CS, this problem is non-convex, and so several algorithms based on convex relaxation and greedy methods have been developed for finding solutions.

Robust PCA: In the robust PCA problem [2, 8], we wish to decompose a matrix M into a low-rank matrix L and a sparse matrix S such that M = L + S. This problem is known to have a stable solution provided L and S are sufficiently incoherent [2]. To date, this problem has been studied only in the non-compressive setting, i.e., when M is fully available. A variety of convex relaxation methods have been proposed for solving this case.

The work of this paper stands at the intersection of these three problems. Specifically, we aim to recover the entries of a matrix M in terms of a low-rank matrix L and sparse matrix S from a small set of compressive measurements y = A(L + S). This problem is relevant in several application settings. A first application is the recovery of a video sequence obtained from a static camera observing a dynamic scene under changing illumination. Here, each column of M corresponds to a vectorized image frame of the video. The changing illumination has low-rank properties, while the foreground innovations exhibit sparse structures [2]. In such a scenario, neither sparse nor low-rank models are individually sufficient for capturing the underlying information of the signal. Models that combine low-rank and sparse components, however, are well suited for capturing such phenomena. 
A second application is hyperspectral imaging, where each column of M is the vectorized image of a particular spectral band; a low-rank plus sparse model arises naturally due to material properties [7]. A third application is robust matrix completion [11], which can be cast as a compressive low-rank and sparse recovery problem.

The natural optimization problem that unites the above three problem classes is

(P1)   $\min \|y - \mathcal{A}(L + S)\|_2$ subject to $\operatorname{rank}(L) \le r$, $\|\operatorname{vec}(S)\|_0 \le K$.   (3)

The main contribution of this paper is a novel greedy algorithm for solving (P1), which we dub SpaRCS, for SPArse and low Rank decomposition via Compressive Sensing. To the best of our knowledge, we are the first to propose a computationally efficient algorithm for solving a problem like (P1). SpaRCS combines the best aspects of CoSaMP [20] for sparse vector recovery and ADMiRA [17] for low-rank matrix recovery.

2 Background

Here we introduce the relevant background information regarding signal recovery from CS measurements, where our definition of signal is broadened to include both vectors and matrices. We further provide background on incoherency between low-rank and sparse matrices.

Restricted isometry and rank-restricted isometry properties: Signal recovery for a K-sparse vector from CS measurements is possible when the measurement operator A obeys the so-called restricted isometry property (RIP) [4] with constant $\delta_K$:

$(1 - \delta_K)\|x\|_2^2 \le \|\mathcal{A}(x)\|_2^2 \le (1 + \delta_K)\|x\|_2^2, \quad \forall \, \|x\|_0 \le K.$   (4)

This property implies that the information in x is nearly preserved after being measured by A. Analogous to CS, it has been shown that a low-rank matrix can be recovered from a set of CS measurements when the measurement operator A obeys the rank-restricted isometry property (RRIP) [23] with constant $\Lambda_r$:

$(1 - \Lambda_r)\|L\|_F^2 \le \|\mathcal{A}(L)\|_F^2 \le (1 + \Lambda_r)\|L\|_F^2, \quad \forall \, \operatorname{rank}(L) \le r.$   (5)

Recovery algorithms: Recovery of sparse vectors and low-rank matrices can be accomplished when the measurement operator A satisfies the appropriate RIP or RRIP condition. Recovery algorithms typically fall into one of two broad classes: convex optimization and greedy iteration. Convex optimization techniques recast (1) or (2) in a form that can be solved efficiently using convex programming [2, 27]. In the case of CS, the $\ell_0$ norm is relaxed to the $\ell_1$ norm; for low-rank matrices, the rank operator is relaxed to the nuclear norm.

In contrast, greedy algorithms [17, 20] operate iteratively on the signal measurements, constructing a basis for the signal and attempting signal recovery restricted to that basis. Compared to convex approaches, these algorithms often have superior speed and scale better to large problems. We highlight the CoSaMP algorithm [20] for sparse vector recovery and the ADMiRA algorithm [17] for low-rank matrix recovery in this paper. Both algorithms have strong convergence guarantees when the measurement operator A satisfies the appropriate RIP or RRIP condition, most notably exponential convergence to the true signal.

Matrix incoherency: For matrix decomposition problems such as the robust PCA problem or the problem defined in (3) to have unique solutions, there must exist a degree of incoherence between the low-rank matrix L and the sparse matrix S. 
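Before turning to incoherence, note that the RIP condition (4) is easy to probe numerically. The following sketch (our own illustration, not from the paper; the dense Gaussian operator and the toy sizes are assumptions) estimates an empirical restricted-isometry constant by measuring random K-sparse vectors:

```python
import numpy as np

# Empirical check of RIP-style concentration (Eq. 4): for K-sparse x,
# ||A(x)||_2^2 should stay within (1 +/- delta_K) of ||x||_2^2.
rng = np.random.default_rng(0)
n, p, K = 1000, 200, 10
A = rng.standard_normal((p, n)) / np.sqrt(p)   # E[||Ax||^2] = ||x||^2

ratios = []
for _ in range(50):
    x = np.zeros(n)
    support = rng.choice(n, size=K, replace=False)
    x[support] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)

# Worst observed deviation of ||Ax||^2 / ||x||^2 from 1
delta_hat = max(abs(1 - min(ratios)), abs(max(ratios) - 1))
print(f"empirical RIP constant over 50 sparse vectors: {delta_hat:.3f}")
```

For these sizes the deviation is small, consistent with the near-isometry the recovery guarantees rely on; it grows as K increases or p decreases.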
It is known that the decomposition of a matrix into its low-rank and sparse components makes sense only when the low-rank matrix is not sparse and, similarly, when the sparse matrix is not low-rank. A simple deterministic condition can be found in the work by Chandrasekaran et al. [9].

For our purposes, we assume the following model for non-sparse low-rank matrices.

Definition 2.1 (Uniformly bounded matrix [5]) An N × N matrix L of rank r is uniformly bounded if its singular vectors $\{u_j, v_j, 1 \le j \le r\}$ obey

$\|u_j\|_\infty, \|v_j\|_\infty \le \sqrt{\mu_B / N},$

with $\mu_B = O(1)$, where $\|x\|_\infty$ denotes the largest entry in magnitude of x.

When $\mu_B$ is small (note that $\mu_B \ge 1$), this model for the low-rank matrix L ensures that its singular vectors are not sparse. This can be seen in the case of a singular vector u by noting that $1 = \|u\|_2^2 = \sum_{k=1}^{N} u_k^2 \le \|u\|_0 \|u\|_\infty^2$. Rearranging terms enables us to write $\|u\|_0 \ge 1/\|u\|_\infty^2 \ge N/\mu_B$. Thus, $\mu_B$ controls the sparsity of the matrix L by bounding the sparsity of its singular vectors.

A sufficient model for a sparse matrix that is not low-rank is to assume that the support set Ω is uniform. As shown in the work of Candès et al. [2], this model is equivalent to defining the sparse support set $\Omega = \{(i, j) : \delta_{i,j} = 1\}$ with each $\delta_{i,j}$ being an i.i.d. Bernoulli random variable with sufficiently small parameter $\rho_S$.

3 SpaRCS: CS recovery of low-rank and sparse matrices

We now present the SpaRCS algorithm to solve (P1) and discuss its empirical properties. Assume that we are interested in a matrix M ∈ R^{N1×N2} such that M = L + S, with rank(L) ≤ r, L uniformly bounded with constant $\mu_B$, and $\|S\|_0 \le K$ with support distributed uniformly. Further assume that a known linear operator A : R^{N1×N2} → R^p provides us with p compressive measurements y of M. Let A* denote the adjoint of the operator A and, given the index set T ⊂ {1, . . . 
, N1N2}, let A|_T denote the restriction of the operator to T. Given y = A(M) + e, where e denotes measurement noise, our goal is to estimate a low-rank matrix $\hat{L}$ and a sparse matrix $\hat{S}$ such that $y \approx \mathcal{A}(\hat{L} + \hat{S})$.

3.1 Algorithm

SpaRCS iteratively estimates L and S; the estimation of L is closely related to ADMiRA [17], while the estimation of S is closely related to CoSaMP [20]. At each iteration, SpaRCS computes a signal proxy and then proceeds through four steps to update its estimates of L and S. These steps are laid out in Algorithm 1. We use the notation supp(X; K) to denote the largest K-term support set of the matrix X. This forms a natural basis for sparse signal approximation. We further use the notation svd(X; r) to denote computation of the rank-r singular value decomposition (SVD) of X and the arrangement of its singular vectors into a set of up to r rank-1 matrices. This set of rank-1 matrices serves as a natural basis for approximating uniformly bounded low-rank matrices.

3.2 Performance characterization

Empirically, SpaRCS produces a series of estimates $\hat{L}_k$ and $\hat{S}_k$ that converge exponentially towards the true values L and S. This performance is inherited largely from the behavior of the CoSaMP and ADMiRA algorithms, with one noteworthy modification. The key difference is that, for SpaRCS, the sparse and low-rank estimation problems are coupled. While CoSaMP and ADMiRA operate solely in the presence of the measurement noise, SpaRCS must estimate L in the presence of the residual error of S, and vice versa. 
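To make the iteration of Section 3.1 concrete, here is a heavily simplified Python sketch in its spirit (an illustration under stated assumptions, not the paper's implementation: a dense Gaussian operator stands in for A, and the support-merger and least-squares steps are replaced by plain gradient steps followed by rank-r and K-term truncations):

```python
import numpy as np

def sparcs_sketch(y, Phi, n1, n2, r, K, step=0.25, iters=300):
    """Simplified SpaRCS-style iteration: alternate a rank-r truncated-SVD
    update for L and a K-term hard-thresholding update for S, both driven
    by the signal proxy P = A*(w) computed from the current residue w."""
    L = np.zeros((n1, n2))
    S = np.zeros((n1, n2))
    for _ in range(iters):
        w = y - Phi @ (L + S).ravel()           # residue w_k
        P = (Phi.T @ w).reshape(n1, n2)         # signal proxy A*(w_k)
        # low-rank update: gradient step, then keep the top-r SVD terms
        U, s, Vt = np.linalg.svd(L + step * P, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r]
        # sparse update: gradient step, then keep the K largest entries
        T = S + step * P
        cutoff = np.sort(np.abs(T), axis=None)[-K]
        S = np.where(np.abs(T) >= cutoff, T, 0.0)
    return L, S
```

On a small synthetic problem (rank-1 L, K-sparse S, Gaussian measurements) this sketch drives the residue $\|y - \mathcal{A}(\hat{L} + \hat{S})\|_2$ down; the full algorithm's least-squares estimation and support-merger steps are what give it the CoSaMP/ADMiRA-style behavior described next.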
Proving convergence for the algorithm in the presence of the additional residual terms is non-trivial; simply lumping these additional residual errors together with the measurement noise e is insufficient for analysis.

Algorithm 1: $(\hat{L}, \hat{S})$ = SpaRCS$(y, \mathcal{A}, \mathcal{A}^*, K, r, \epsilon)$
Initialization: $k \leftarrow 1$; $\hat{L}_0 \leftarrow 0$; $\hat{S}_0 \leftarrow 0$; $\Psi_L \leftarrow \emptyset$; $\Psi_S \leftarrow \emptyset$; $w_0 \leftarrow y$
while $\|w_{k-1}\|_2 > \epsilon$ do
  Compute signal proxy: $P \leftarrow \mathcal{A}^*(w_{k-1})$
  Support identification: $\hat{\Omega}_L \leftarrow \mathrm{svd}(P; 2r)$; $\hat{\Omega}_S \leftarrow \mathrm{supp}(P; 2K)$
  Support merger: $\tilde{\Omega}_L \leftarrow \hat{\Omega}_L \cup \Psi_L$; $\tilde{\Omega}_S \leftarrow \hat{\Omega}_S \cup \Psi_S$
  Least squares estimation: $B_L \leftarrow \tilde{\Omega}_L^\dagger (y - \mathcal{A}(\hat{S}_{k-1}))$; $B_S \leftarrow \tilde{\Omega}_S^\dagger (y - \mathcal{A}(\hat{L}_{k-1}))$
  Support pruning: $(\hat{L}_k, \Psi_L) \leftarrow \mathrm{svd}(B_L; r)$; $(\hat{S}_k, \Psi_S) \leftarrow \mathrm{supp}(B_S; K)$
  Update residue: $w_k \leftarrow y - \mathcal{A}(\hat{L}_k + \hat{S}_k)$
  $k \leftarrow k + 1$
end
$\hat{L} = \hat{L}_{k-1}$; $\hat{S} = \hat{S}_{k-1}$

As a concrete example, consider the support identification step $\hat{\Omega}_S \leftarrow \mathrm{supp}(P; 2K)$, with

$P = \mathcal{A}^*(w_{k-1}) = \mathcal{A}^*(\mathcal{A}(S - \hat{S}_{k-1}) + \mathcal{A}(L - \hat{L}_{k-1}) + e),$

that estimates the support set of S. CoSaMP relies on high correlation between $\mathrm{supp}(P; 2K)$ and $\mathrm{supp}(S - \hat{S}_{k-1}; 2K)$; to achieve the same in SpaRCS, $(L - \hat{L}_{k-1})$ must be well behaved.

We are currently preparing a full theoretical characterization of the SpaRCS algorithm along with the necessary conditions that guarantee this exponential convergence property. We reserve the presentation of the convergence proof for an extended version of this work.

Phase transition: The empirical performance of SpaRCS can be charted using phase transition plots, which depict sufficient and necessary conditions on its success/failure. Figure 1 shows phase transition results on a problem of size N1 = N2 = 512 for various values of p, r, and K. As expected, SpaRCS degrades gracefully as we decrease p or increase r and K.

Figure 1: Phase transitions for a recovery problem of size N1 = N2 = N = 512. Shown are aggregate results over 20 Monte-Carlo runs at each specification of r, K, and p. 
Black indicates recovery failure, while white indicates recovery success. (Panels correspond to r = 5, 10, 15, 20, 25.)

Computational cost: SpaRCS is highly computationally efficient and scales well as N1, N2 grow large. The largest computational cost is that of computing the two truncated SVDs per iteration. The SVDs can be performed efficiently via the Lanczos algorithm or a similar method. The least squares estimation can be solved efficiently using conjugate gradient or Richardson iterations. Support estimation for the sparse component merely entails sorting the signal proxy magnitudes and choosing the largest 2K elements.

Figure 2 compares the performance of SpaRCS with two alternate recovery algorithms. We implement CS versions of the IT [18] and APG [19] algorithms, which solve the problems

$\min \ \tau(\|L\|_* + \|\operatorname{vec}(S)\|_1) + \tfrac{1}{2}\|L\|_F^2 + \tfrac{1}{2}\|S\|_F^2 \ \text{ s.t. } \ y = \mathcal{A}(L + S)$

and

$\min \ \|L\|_* + \|\operatorname{vec}(S)\|_1 \ \text{ s.t. } \ y = \mathcal{A}(L + S),$

respectively. We endeavor to tune the parameters of these algorithms (which we refer to as CS IT and CS APG, respectively) to optimize their performance. Details of our implementation can be found in [26]. In all experiments, we consider matrices of size N × N with rank(L) = 2 and $\|S\|_0 = 0.02N^2$, and use permuted noiselets [12] for the measurement operator A. As a first experiment, we generate convergence plots for matrices with N = 128 and vary the measurement ratio $p/N^2$ from 0.05 to 0.5. We then recover $\hat{L}$ and $\hat{S}$ and measure the recovered signal-to-noise ratio (RSNR) for $\hat{M} = \hat{L} + \hat{S}$ via $20 \log_{10}\left(\|M\|_F / \|M - \hat{L} - \hat{S}\|_F\right)$. These results are displayed in Figure 2(a), where we see that SpaRCS provides the best recovery. As a second experiment, we vary the problem size N ∈ {128, 256, 512, 1024} while holding the number of measurements constant at $p = 0.2N^2$. We measure the recovery time required by each algorithm to reach a residual error $\|y - \mathcal{A}(\hat{L} + \hat{S})\|_2 / \|y\|_2 \le 5 \times 10^{-4}$. 
These results are displayed in Figure 2(b), which demonstrates that SpaRCS converges significantly faster than the two other recovery methods.

[Figure 2 plots: (a) RSNR (dB) vs. p/N^2; (b) convergence time (sec) vs. log2(N); curves for SpaRCS, CS APG, and CS IT.]

Figure 2: Performance and run-time comparisons between SpaRCS, CS IT, and CS APG. Shown are average results over 10 Monte-Carlo runs for problems of size N1 = N2 = N with rank(L) = 2 and $\|S\|_0 = 0.02N^2$. (a) Performance for a problem with N = 128 for various values of the measurement ratio $p/N^2$. SpaRCS exhibits superior recovery over the alternate approaches. (b) Timing plot for problems of various sizes N. SpaRCS converges in time several orders of magnitude faster than the alternate approaches.

4 Applications

We now present several experiments that validate SpaRCS and showcase its performance in several applications. In all experiments, we use permuted noiselets for the measurement operator A; these provide a fast transform and save memory, since we do not have to store A explicitly.

Video compressive sensing: The video CS problem is concerned with recovering multiple image frames of a video sequence from CS measurements [6, 21, 24]. We consider a 128 × 128 × 201 video sequence consisting of a static background with a number of people moving in the foreground. We aim to not only recover the original video but also separate the background and foreground. We resize the data cube into a 128^2 × 201 matrix M, where each column corresponds to a (vectorized) image frame. The measurement operator A operates on each column of M independently, simulating acquisition using a single pixel camera [13]. 
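This frame-by-frame measurement process can be mimicked in a few lines (a sketch under stated assumptions: toy sizes, and a dense Gaussian matrix standing in for the permuted noiselet transform used in the paper):

```python
import numpy as np

# Column-wise compressive measurement: the same p_col x n1 operator is
# applied to every column (video frame) of M, mimicking per-frame
# acquisition.  A Gaussian matrix stands in for permuted noiselets.
rng = np.random.default_rng(0)
n1, n2 = 64, 30            # pixels per frame, number of frames (toy sizes)
p_col = int(0.15 * n1)     # 15% measurement ratio per frame

Phi_col = rng.standard_normal((p_col, n1)) / np.sqrt(p_col)

def measure_frames(M):
    """Return one column of p_col measurements per video frame."""
    return Phi_col @ M

M = rng.standard_normal((n1, n2))
Y = measure_frames(M)
print(Y.shape)  # prints (9, 30)
```

Because the same operator acts on every column, only the small per-frame matrix needs to be stored (or, with noiselets, only a fast transform applied), which is what makes the approach memory-efficient.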
We acquire p = 0.15 × 128^2 measurements per image frame. We recover with SpaRCS using r = 1 and K = 20,000. The results are displayed in Figure 3, where it can be seen that SpaRCS accurately estimates and separates the low-rank background and the sparse foreground. Figure 4 shows recovery results on a more challenging sequence with changing illumination. In contrast to SpaRCS, existing video CS algorithms do not work well with dramatically changing illumination.

Figure 3: SpaRCS recovery results on a 128 × 128 × 201 video sequence. The video sequence is reshaped into an N1 × N2 matrix with N1 = 128^2 and N2 = 201. (a) Ground truth for several frames. (b) Estimated low-rank component L. (c) Estimated sparse component S. The recovery SNR is 31.2 dB at the measurement ratio p/(N1N2) = 0.15. The recovery is accurate in spite of the measurement operator A working independently on each frame.

Figure 4: SpaRCS recovery results on a 64 × 64 × 234 video sequence. The video sequence is reshaped into an N1 × N2 matrix with N1 = 64^2 and N2 = 234. (a) Ground truth for several frames. (b) Recovered frames. The recovery SNR is 23.9 dB at the measurement ratio of p/(N1N2) = 0.33. The recovery is accurate in spite of the changing illumination conditions.

Hyperspectral compressive sensing: Low-rank/sparse decomposition has an important physical relevance in hyperspectral imaging [7]. Here we consider a hyperspectral cube, which contains a vector of spectral information at each image pixel. A measurement device such as [25] can provide compressive measurements of such a hyperspectral cube. 
We employ SpaRCS on a hyperspectral cube of size 128 × 128 × 128 rearranged as a matrix of size 128^2 × 128 such that each column corresponds to a different spectral band. Figure 5 demonstrates recovery using p = 0.15 × 128^2 × 128 total measurements of the entire data cube with r = 8, K = 3000. SpaRCS performs well in terms of residual error (Figure 5(c)) despite the number of rows being much larger than the number of columns. Figure 5(d) emphasizes the utility of the sparse component. Using only a low-rank approximation (corresponding to traditional PCA) causes a significant increase in residual error over what is achieved by SpaRCS.

Parameter mismatch: In Figure 6, we analyze the influence of incorrect selection of the parameter r using the hyperspectral data as an example. We plot the recovered SNR that can be obtained at various levels of the measurement ratio p/(N1N2) for both the case of r = 8 and r = 4. There are interesting tradeoffs associated with the choice of parameters. Larger values of r and K enable better approximation to the unknown signals. However, by increasing r and K, we also increase the number of independent parameters in the problem, which is given by $(2\max(N_1, N_2)r - r^2 + 2K)$. An empirical rule-of-thumb for greedy recovery algorithms is that the number of measurements p should be 2–5 times the number of independent parameters. Consequently, there exists a tradeoff between the values of r, K, and p to ensure stable recovery.

Figure 5: SpaRCS recovery results on a 128 × 128 × 128 hyperspectral data cube. The hyperspectral data is reshaped into an N1 × N2 matrix with N1 = 128^2 and N2 = 128. Each image pane corresponds to a different spectral band. (a) Ground truth. (b) Recovered images. (c) Residual error using both the low-rank and sparse components. (d) Residual error using only the low-rank component. 
The measurement ratio is p/(N1N2) = 0.15.

Figure 6: Hyperspectral data recovery for various values of the rank r of the low-rank matrix L. The data used is the same as in Figure 5. (a) r = 1, SNR = 12.81 dB. (b) r = 2, SNR = 19.42 dB. (c) r = 4, SNR = 27.46 dB. (d) Comparison of compression ratio (N1N2)/p and recovery SNR using r = 4 and r = 8. All results were obtained with K = 3000.

Robust matrix completion: We apply SpaRCS to the robust matrix completion problem [11]

$\min \ \|L\|_* + \|s\|_1$ subject to $L_\Omega + s = y$,   (6)

where s models outlier noise and Ω denotes the set of observed entries. This problem can be cast as a compressive low-rank and sparse matrix recovery problem by using a sparse matrix S in place of the outlier noise s and realizing that the support of S is a subset of Ω. This enables recovery of both L and S from samples of their sum L + S.

Matrix completion under outlier noise [10, 11] has received some attention and, in many ways, is the work that is closest to this paper. There are, however, several important distinctions. Chen et al. [11] analyze the convex problem of (6) to provide performance guarantees. Yet, convex optimization methods often do not scale well with the size of the problem. SpaRCS, by contrast, is computationally efficient and does scale well as the problem size increases. 
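The reduction just described can be spelled out explicitly: observing entries of L + S on Ω is exactly applying an entry-sampling operator A to L + S, with the outlier matrix S supported inside Ω. A toy sketch (the sizes, Bernoulli mask, and outlier values are assumptions of this illustration):

```python
import numpy as np

# Robust matrix completion as compressive low-rank + sparse recovery:
# the measurement operator A is entry sampling on Omega, and the
# outliers form a sparse matrix S whose support lies inside Omega.
rng = np.random.default_rng(0)
N = 32
L = np.outer(rng.standard_normal(N), rng.standard_normal(N))  # rank 1

mask = rng.random((N, N)) < 0.5          # Omega: observed entries
S = np.zeros((N, N))
obs_idx = np.flatnonzero(mask)
outliers = rng.choice(obs_idx, size=10, replace=False)
S.ravel()[outliers] = 5.0                # outlier noise, support in Omega

y = (L + S)[mask]                        # y = A(L + S): sampled entries
print(y.shape, np.count_nonzero(S))
```

A solver for (P1) given y, the sampling pattern, a rank bound r = 1, and a sparsity bound K = 10 would then recover both the completed L and the outlier locations.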
Furthermore, [10] is tied to the case when A is a sampling operator; it is not immediately clear whether this analysis can extend to the more general case of (P1), where the sparse component cannot be modeled as outlier noise in the measurements.

[Figure 7 plots: (a) RSNR (dB) vs. K/p; (b) log(execution time) vs. K/p; curves for CVX, SpaRCS, CS IT, and OptSpace.]

Figure 7: Comparison of several algorithms for the robust matrix completion problem. (a) RSNR averaged over 10 Monte-Carlo runs for an N × N matrix completion problem with N = 128, r = 1, and p/N^2 = 0.2. Non-robust formulations, such as OptSpace, fail. SpaRCS achieves performance close to that of the convex solver (CVX). (b) Comparison of convergence times for the various algorithms. SpaRCS converges in only a fraction of the time required by the other algorithms.

In our robust matrix completion experiments, we compare SpaRCS with CS IT, OptSpace [16] (a non-robust matrix completion algorithm), and a convex solution using CVX [15]. Figure 7 shows the performance of these algorithms. OptSpace, being non-robust, fails as expected. The accuracy of SpaRCS is closest to that of CVX, although the convergence time of SpaRCS is several orders of magnitude faster.

5 Conclusion

We have considered the problem of recovering low-rank and sparse matrices given only a few linear measurements. Our proposed greedy algorithm, SpaRCS, is both fast and accurate even for large matrix sizes and enjoys strong empirical performance in its convergence to the true solution. 
We have demonstrated the applicability of SpaRCS to video compressive sensing, hyperspectral imaging, and robust matrix completion.

There are many avenues for future work. Model-based extensions of SpaRCS are important directions. Both low-rank and sparse matrices exhibit rich structure in practice, including low-rank Hankel matrices in system identification and group sparsity in background subtraction. The use of models could significantly enhance the performance of the algorithm. This would be especially useful in applications such as video CS, where the measurement operator is typically constrained to operate on each image frame individually.

Acknowledgements

This work was partially supported by the grants NSF CCF-0431150, CCF-0728867, CCF-0926127, CCF-1117939, ARO MURI W911NF-09-1-0383, W911NF-07-1-0185, DARPA N66001-11-1-4090, N66001-11-C-4092, N66001-08-1-2065, AFOSR FA9550-09-1-0432, and LLNL B593154. Additionally, the authors wish to thank Prof. John Wright for his helpful comments and corrections to a previous version of this manuscript.

References

[1] E. J. Candès. Compressive sampling. In Intl. Cong. of Math., Madrid, Spain, Aug. 2006.
[2] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? J. ACM, 58(1):1–37, 2009.
[3] E. J. Candès and Y. Plan. Matrix completion with noise. Proc. IEEE, 98(6):925–936, 2010.
[4] E. J. Candès and J. Romberg. Quantitative robust uncertainty principles and optimally sparse decompositions. Found. Comput. Math., 6(2):227–254, 2006.
[5] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. on Info. Theory, 56(5):2053–2080, 2010.
[6] V. Cevher, A. C. Sankaranarayanan, M. Duarte, D. Reddy, R. G. Baraniuk, and R. Chellappa. Compressive sensing for background subtraction. In European Conf. Comp. Vision, Marseilles, France, Oct. 2008.
[7] A. Chakrabarti and T. 
Zickler. Statistics of Real-World Hyperspectral Images. In IEEE Int. Conf. Comp. Vis., Colorado Springs, CO, June 2011.
[8] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Sparse and low-rank matrix decompositions. In Allerton Conf. on Comm., Contr., and Comp., Monticello, IL, Sep. 2009.
[9] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Rank-sparsity incoherence for matrix decomposition. Arxiv preprint arXiv:0906.2220, 2009.
[10] Y. Chen, A. Jalali, S. Sanghavi, and C. Caramanis. Low-rank matrix recovery from errors and erasures. Arxiv preprint arXiv:1104.0354, 2011.
[11] Y. Chen, H. Xu, C. Caramanis, and S. Sanghavi. Robust matrix completion with corrupted columns. Arxiv preprint arXiv:1102.2254, 2011.
[12] R. Coifman, F. Geshwind, and Y. Meyer. Noiselets. Appl. Comput. Harmon. Anal., 10:27–44, 2001.
[13] M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk. Single pixel imaging via compressive sampling. IEEE Signal Processing Mag., 25(2):83–91, 2008.
[14] M. Fazel, E. Candès, B. Recht, and P. Parrilo. Compressed sensing and robust recovery of low rank matrices. In Asilomar Conf. Signals, Systems, and Computers, Pacific Grove, CA, Nov. 2008.
[15] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 1.21. http://cvxr.com/cvx, Apr. 2011.
[16] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. J. Mach. Learn. Res., 11:2057–2078, 2010.
[17] K. Lee and Y. Bresler. ADMiRA: Atomic decomposition for minimum rank approximation. IEEE Trans. on Info. Theory, 56(9):4402–4416, 2010.
[18] Z. Lin, M. Chen, L. Wu, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Technical report, University of Illinois at Urbana-Champaign, Urbana-Champaign, IL.
[19] Z. Lin, A. Ganesh, J. Wright, L. Wu, M. 
Chen, and Y. Ma. Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix. In Intl. Workshop on Comp. Adv. in Multi-Sensor Adapt. Processing, Aruba, Dutch Antilles, Dec. 2009.
[20] D. Needell and J. A. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal., 26(3):301–321, 2009.
[21] J. Y. Park and M. B. Wakin. A multiscale framework for compressive sensing of video. In Picture Coding Symp., Chicago, IL, May 2009.
[22] B. Recht. A simpler approach to matrix completion. J. Mach. Learn. Res., posted Oct. 2009, to appear.
[23] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev., 52(3):471–501, 2010.
[24] A. C. Sankaranarayanan, P. Turaga, R. G. Baraniuk, and R. Chellappa. Compressive acquisition of dynamic scenes. In European Conf. Comp. Vision, Crete, Greece, Sep. 2010.
[25] T. Sun and K. Kelly. Compressive sensing hyperspectral imager. In Comput. Opt. Sensing and Imaging, San Jose, CA, Oct. 2009.
[26] A. E. Waters, A. C. Sankaranarayanan, and R. G. Baraniuk. SpaRCS: Recovering low-rank and sparse matrices from compressive measurements. Technical report, Rice University, Houston, TX, 2011.
[27] W. Yin, S. Osher, D. Goldfarb, and J. Darbon. Bregman iterative algorithms for $\ell_1$-minimization with applications to compressed sensing. SIAM J. Imag. Sci., 1(1):143–168, 2008.
", "award": [], "sourceid": 659, "authors": [{"given_name": "Andrew", "family_name": "Waters", "institution": null}, {"given_name": "Aswin", "family_name": "Sankaranarayanan", "institution": null}, {"given_name": "Richard", "family_name": "Baraniuk", "institution": null}]}