{"title": "Compressive Sensing MRI with Wavelet Tree Sparsity", "book": "Advances in Neural Information Processing Systems", "page_first": 1115, "page_last": 1123, "abstract": "In Compressive Sensing Magnetic Resonance Imaging (CS-MRI), one can reconstruct a MR image with good quality from only a small number of measurements. This can significantly reduce MR scanning time. According to structured sparsity theory, the measurements can be further reduced to $\\mathcal{O}(K+\\log n)$ for tree-sparse data instead of $\\mathcal{O}(K+K\\log n)$ for standard $K$-sparse data with length $n$. However, few of existing algorithms has utilized this for CS-MRI, while most of them use Total Variation and wavelet sparse regularization. On the other side, some algorithms have been proposed for tree sparsity regularization, but few of them has validated   the benefit of tree structure in CS-MRI. In this paper, we propose a fast convex optimization algorithm to improve CS-MRI.  Wavelet sparsity, gradient sparsity and tree sparsity are all considered in our model for real MR images. The original complex problem is decomposed to three simpler subproblems then each of the subproblems can be efficiently solved with an iterative scheme. Numerous experiments have been conducted and show that the proposed algorithm outperforms the state-of-the-art CS-MRI algorithms, and gain better reconstructions results on real MR images than general tree based solvers or algorithms.", "full_text": "Compressive Sensing MRI with Wavelet Tree Sparsity\n\nChen Chen and Junzhou Huang\n\nDepartment of Computer Science and Engineering\n\nUniversity of Texas at Arlington\n\ncchen@mavs.uta.edu\n\njzhuang@uta.edu\n\nAbstract\n\nIn Compressive Sensing Magnetic Resonance Imaging (CS-MRI), one can recon-\nstruct a MR image with good quality from only a small number of measurements.\nThis can signi\ufb01cantly reduce MR scanning time. 
According to structured sparsity theory, the measurements can be further reduced to O(K + log n) for tree-sparse data instead of O(K + K log n) for standard K-sparse data of length n. However, few existing algorithms have utilized this for CS-MRI; most model the problem with total variation and wavelet sparse regularization. On the other hand, some algorithms have been proposed for tree-sparse regularization, but few of them have validated the benefit of the wavelet tree structure in CS-MRI. In this paper, we propose a fast convex optimization algorithm to improve CS-MRI. Wavelet sparsity, gradient sparsity and tree sparsity are all considered in our model for real MR images. The original complex problem is decomposed into three simpler subproblems, and each subproblem can be solved efficiently with an iterative scheme. Extensive experiments show that the proposed algorithm outperforms the state-of-the-art CS-MRI algorithms and achieves better reconstruction results on real MR images than general tree-based solvers or algorithms.\n\n1 Introduction\n\nMagnetic Resonance Imaging (MRI) is widely used for observing tissue changes in patients in a non-invasive manner. One limitation of MRI is its imaging speed, including both scanning speed and reconstruction speed. Long waiting times and slow scanning may result in patients' annoyance and blur in images due to local motion such as breathing and heart beating. According to compressive sensing (CS) [1,2] theory, only a small number of measurements is enough to recover an image with good quality. This extends the Nyquist-Shannon sampling theorem to data that is sparse or can be sparsely represented. Compressive Sensing Magnetic Resonance Imaging (CS-MRI) has become one of the most successful applications of compressive sensing, since MR scanning time is directly related to the number of sampled measurements [3]. 
As most images can be transformed to some sparse domain (wavelet, etc.), only O(K + K log n) samples are enough to obtain robust MR image reconstruction.\n\nActually, this result can be improved. Recent works on structured sparsity show that the required number of sampling measurements can be further reduced to O(K + log n) by exploiting the tree structure [4-6]. A typical relationship in tree sparsity is that if a parent coefficient has a large/small value, its children also tend to be large/small. Some methods have been proposed to improve standard CS reconstruction by utilizing this prior. Specifically, two convex models have been proposed to handle the tree-based reconstruction problem [7]. They apply SpaRSA [11] to solve their models, with a relatively slow convergence rate. In Bayesian compressive sensing, Markov Chain Monte Carlo (MCMC) and variational Bayesian (VB) inference are used to solve tree-based hierarchical models [8][9]. Turbo AMP [10] also exploits tree sparsity for compressive sensing with an iterative approximate message passing approach. However, none of them has conducted extensive experiments on MR images to validate their superiority.\n\nIn existing CS-MRI models, the linear combination of total variation and wavelet sparse regularization is very popular [3,12-15]. The classical conjugate gradient descent method was first used to solve this problem [3]. TVCMRI [12] and RecPF [13] use an operator-splitting method and a variable-splitting method, respectively, to solve this problem. FCSA [14,15] decomposes the original problem into two easy subproblems and separately solves each of them with FISTA [16,17]. 
They are the state-of-the-art algorithms for CS-MRI, but none of them utilizes the tree sparsity prior to enhance performance.\n\nIn this paper, we propose a new model for CS-MRI, which combines wavelet sparsity, gradient sparsity and tree sparsity seamlessly. In tree-structure modeling, we assign each pair of parent-child wavelet coefficients to one group, which forces them to be zero or non-zero simultaneously. This is an overlapping group problem and hard to solve directly. A new variable is introduced to decompose this problem into three simpler subproblems. Each subproblem then has a closed-form solution or can be solved efficiently by existing techniques. We conduct extensive experiments to compare the proposed algorithm with the state-of-the-art CS-MRI algorithms and several tree sparsity algorithms. The proposed algorithm always achieves the best results in terms of SNR and computational time.\n\nOur contributions can be summarized as follows: (1) We introduce wavelet tree sparsity to CS-MRI, and provide a convex formulation that models the tree structure in combination with total variation and wavelet sparsity; (2) An efficient algorithm with fast convergence is proposed to solve this model, where each iteration only costs O(n log n) time; (3) Extensive experiments have been conducted to compare the proposed algorithm with the state-of-the-art CS-MRI algorithms and several general tree-based algorithms or solvers. The results show that the proposed algorithm outperforms all others on real MR images.\n\n2 Related work\n\n2.1 Tree based compressive sensing\n\nIf a signal is sparse or can be sparsely represented, the number of samples necessary to reconstruct it can be significantly smaller than that needed by the Nyquist-Shannon sampling theorem. Moreover, if we know some prior about the structure of the original signal, such as a group or graph structure, the measurements can be further reduced [4,5]. 
Some previous algorithms have utilized the tree structure of wavelet coefficients to improve CS reconstruction [7-10]. OGL [7] is a convex approach to model the tree structure:\n\n$$\\hat{\\theta} = \\arg\\min_{\\theta} \\Big\\{ F(\\theta) = \\frac{1}{2}\\|b - A\\Phi^T\\theta\\|_2^2 + \\lambda_g \\sum_{g \\in G} \\|\\tilde{\\theta}_g\\|_2 + \\frac{\\tau^2}{2} \\sum_{i=1}^{n} \\sum_{j \\in J_i} (\\theta_i - \\theta_j^i)^2 \\Big\\} \\quad (1)$$\n\nwhere $\\theta$ is the set of wavelet coefficients, $A$ represents a partial Fourier transform for the MR reconstruction problem, and $b$ is the measurement data. $\\Phi^T$ denotes the inverse wavelet transform. $G$ denotes the set of all parent-child groups and $g$ is one such group. $\\tilde{\\theta}$ is an extended vector of $\\theta$ with replicates, and the last term is a penalty forcing the replicates to be equal. Once the wavelet coefficients are recovered, they can be transformed to the recovered image by an inverse wavelet transform. This method well explores the tree structure assumption, but may be slow in general for the following reasons: a) the parent-child relationship in the model is hard to maintain; b) it applies SpaRSA [11] to solve (1). 
Overall, their method can only achieve a convergence rate of $F(\\theta^k) - F(\\theta^*) \\cong O(1/k)$ [16], where $k$ is the iteration number and $\\theta^*$ is the optimal solution.\n\nIn statistical learning, AMP [10], MCMC [8], and VB [9] all solve (2) with probabilistic inference. In (2), $x$ is the original image to be reconstructed and $w$ is Gaussian white noise. In these approaches, graphical models are used to represent the wavelet tree structure, and the distribution of each coefficient is determined by its parent's value.\n\n$$y = Ax + w = A\\Phi^T\\theta + w \\quad (2)$$\n\n2.2 Efficient MR image reconstruction algorithms\n\nIn existing CS-MRI algorithms, the linear combination of total variation and wavelet sparsity constraints has shown good properties for MR images. Recent fast algorithms all attempt to solve (3) in less computational time. $\\alpha$ and $\\beta$ are two positive parameters, and $\\Phi$ denotes the wavelet transform. $\\|x\\|_{TV} = \\sum_i \\sum_j \\sqrt{(\\nabla_1 x_{ij})^2 + (\\nabla_2 x_{ij})^2}$, where $\\nabla_1$ and $\\nabla_2$ denote the forward finite difference operators on the first and second coordinates. TVCMRI [12] and RecPF [13] use an operator-splitting method and a variable-splitting method, respectively, to solve this problem. FCSA [14,15] decomposes this problem into two simpler problems and solves each with FISTA. The convergence rate of FISTA is $O(1/k^2)$. These approaches are very effective for real MR image reconstruction, but none of them utilizes the wavelet tree structure for further enhancement.\n\n$$\\hat{x} = \\arg\\min_x \\Big\\{ \\frac{1}{2}\\|Ax - b\\|_2^2 + \\alpha\\|x\\|_{TV} + \\beta\\|\\Phi x\\|_1 \\Big\\} \\quad (3)$$\n\n2.3 Convex overlapped group sparsity solvers\n\nSLEP [18] (Sparse Learning with Efficient Projections) has a package for the tree-structured group lasso (4). 
Its main function iteratively solves the tree-structured denoising problem. For the reconstruction problem, it applies FISTA to reduce the problem to denoising.\n\n$$\\hat{x} = \\arg\\min_x \\Big\\{ \\frac{1}{2}\\|Ax - b\\|_2^2 + \\beta\\|\\Phi x\\|_{tree} \\Big\\} \\quad (4)$$\n\nYALL1 [19] (Your ALgorithms for L1) can solve the general overlapping group sparse problem efficiently; we include it in our comparisons too. It first relaxes the constrained overlapping group minimization to an unconstrained problem by the Lagrangian method. Then the minimization over the $x$ and $z$ subproblems can be written as:\n\n$$\\hat{x} = \\arg\\min_{x,z} \\Big\\{ \\frac{\\beta_2}{2}\\|Ax - b\\|_2^2 + \\lambda_1^T G\\Phi x + \\frac{\\beta_1}{2}\\|z - G\\Phi x\\|_2^2 - \\lambda_2^T Ax + \\sum_{i=1}^{s} w_i\\|z_i\\|_2 \\Big\\} \\quad (5)$$\n\nwhere $G$ indicates the grouping index with all its elements being 1 or 0, $s$ is the total number of groups, $\\lambda_1, \\lambda_2$ are multipliers and $\\beta_1, \\beta_2$ are positive parameters.\n\n3 Algorithm\n\nObservations tell us that the wavelet coefficients of real MR images tend to be quadtree structured [20], although not strictly. Moreover, they are generally sparse in the wavelet and gradient domains. We therefore utilize all these sparse priors in our model. A new algorithm called Wavelet Tree Sparsity MRI (WaTMRI) is proposed to solve this model efficiently. The tree-based MRI problem can be formulated as follows:\n\n$$\\min_x \\Big\\{ F(x) = \\frac{1}{2}\\|Ax - b\\|_2^2 + \\alpha\\|x\\|_{TV} + \\beta\\big(\\|\\Phi x\\|_1 + \\sum_{g \\in G} \\|(\\Phi x)_g\\|_2\\big) \\Big\\} \\quad (6)$$\n\nThe total variation and L1 terms in fact complement the tree structure assumption, which makes our model more robust on real MR images. This is a main difference from previous tree-structured algorithms or solvers. However, this problem cannot be solved efficiently in its original form. 
We introduce a variable $z$ to carry the overlapping structure of $x$ as a constraint. The problem then becomes a non-overlapping convex optimization. Letting $G\\Phi x = z$, (6) can be rewritten as:\n\n$$\\min_{x,z} \\Big\\{ F(x) = \\frac{1}{2}\\|Ax - b\\|_2^2 + \\alpha\\|x\\|_{TV} + \\beta\\big(\\|\\Phi x\\|_1 + \\sum_{i=1}^{s} \\|z_{g_i}\\|_2\\big) + \\frac{\\lambda}{2}\\|z - G\\Phi x\\|_2^2 \\Big\\} \\quad (7)$$\n\nThe $z$ subproblem has a closed-form solution by group-wise soft thresholding. For the $x$ subproblem, we can combine the first and last quadratic penalties on the right side; the rest then has a form similar to FCSA and can be solved efficiently with an iterative scheme.\n\n3.1 Solution\n\nAs mentioned above, the $z$-subproblem in (7) can be written as:\n\n$$z_{g_i} = \\arg\\min_{z_{g_i}} \\Big\\{ \\beta\\|z_{g_i}\\|_2 + \\frac{\\lambda}{2}\\|z_{g_i} - (G\\Phi x)_{g_i}\\|_2^2 \\Big\\}, \\quad i = 1, 2, ..., s \\quad (8)$$\n\nwhere $g_i$ is the $i$-th group and $s$ is the total number of groups. It has a closed-form solution by soft thresholding:\n\n$$z_{g_i} = \\max\\Big(\\|r_i\\|_2 - \\frac{\\beta}{\\lambda}, 0\\Big) \\frac{r_i}{\\|r_i\\|_2}, \\quad i = 1, 2, ..., s \\quad (9)$$\n\nwhere $r_i = (G\\Phi x)_{g_i}$.\n\nFor the $x$-subproblem,\n\n$$x = \\arg\\min_x \\Big\\{ \\frac{1}{2}\\|Ax - b\\|_2^2 + \\alpha\\|x\\|_{TV} + \\beta\\|\\Phi x\\|_1 + \\frac{\\lambda}{2}\\|z - G\\Phi x\\|_2^2 \\Big\\} \\quad (10)$$\n\nLet $f(x) = \\frac{1}{2}\\|Ax - b\\|_2^2 + \\frac{\\lambda}{2}\\|z - G\\Phi x\\|_2^2$, which is a convex and smooth function with Lipschitz constant $L_f$, and $g_1(x) = \\alpha\\|x\\|_{TV}$, $g_2(x) = \\beta\\|\\Phi x\\|_1$, which are convex but non-smooth functions. Then the $x$ problem can be solved efficiently by FCSA. For convenience, we denote (9) by $z = \\mathrm{shrinkgroup}(G\\Phi x, \\beta/\\lambda)$. 
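The group-wise soft-thresholding operator of (9) can be sketched as follows. This is a minimal sketch, not the authors' released code; it assumes each group is supplied as an index array into the coefficient vector.

```python
import numpy as np

def shrinkgroup(r, tau, groups):
    """Group-wise soft thresholding, eq. (9):
    z_g = max(||r_g||_2 - tau, 0) * r_g / ||r_g||_2 for each group g."""
    z = np.zeros_like(r)
    for g in groups:
        norm = np.linalg.norm(r[g])
        if norm > tau:                      # otherwise the whole group is zeroed
            z[g] = (1.0 - tau / norm) * r[g]
    return z
```

With each group holding one parent-child pair of wavelet coefficients, the loop runs in O(n') time overall, matching the complexity claimed in Section 3.2.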
Now we can summarize our algorithm as Algorithm 1:\n\nAlgorithm 1 WaTMRI\nInput: $\\rho = 1/L_f$, $r^1 = x^0$, $t^1 = 1$, $\\alpha$, $\\beta$, $\\lambda$\nfor $k = 1$ to $N$ do\n1) $z = \\mathrm{shrinkgroup}(G\\Phi x^{k-1}, \\beta/\\lambda)$\n2) $x_g = r^k - \\rho\\nabla f(r^k)$\n3) $x_1 = \\mathrm{prox}_{\\rho}(2\\alpha\\|x\\|_{TV})(x_g)$\n4) $x_2 = \\mathrm{prox}_{\\rho}(2\\beta\\|\\Phi x\\|_1)(x_g)$\n5) $x^k = (x_1 + x_2)/2$\n6) $t^{k+1} = [1 + \\sqrt{1 + 4(t^k)^2}]/2$\n7) $r^{k+1} = x^k + \\frac{t^k - 1}{t^{k+1}}(x^k - x^{k-1})$\nend for\n\nwhere the proximal map is defined for any scalar $\\rho > 0$:\n\n$$\\mathrm{prox}_{\\rho}(g)(x) := \\arg\\min_u \\Big\\{ g(u) + \\frac{1}{2\\rho}\\|u - x\\|^2 \\Big\\} \\quad (11)$$\n\nand $\\nabla f(r^k) = A^T(Ar^k - b) + \\lambda\\Phi^T G^T(G\\Phi r^k - z)$, where $A^T$ denotes the inverse partial Fourier transform.\n\n3.2 Algorithm analysis\n\nSuppose $x$ represents an image with $n$ pixels and $z$ contains $n'$ elements. Although $G$ is an $n' \\times n$ matrix, it is sparse with only $n'$ non-zero elements, so a multiplication by $G$ can be implemented efficiently in $O(n')$ time. Step 1, shrinkgroup, takes $O(n' + n\\log n)$ time. Step 2 takes $O(n\\log n)$ time in total. Step 4 takes $O(n\\log n)$ when the fast wavelet transform is applied. Steps 3 and 5 each cost $O(n)$. Note that $n' \\le 2n$, since we assign every parent-child coefficient pair to one group and leave every wavelet scaling coefficient in its own group. So the total computational complexity per iteration is $O(n\\log n)$, the same as that of TVCMRI, RecPF and FCSA. We thus introduce the wavelet tree structure constraint into our model without increasing the total computational complexity. The $x$-subproblem is accelerated by FISTA, and the whole algorithm shows a very fast convergence rate in the following experiments.\n\n4 Experiments\n\n4.1 Experimental setup\n\nNumerous experiments have been conducted to show the superiority of the proposed algorithm on CS-MRI. 
In the MR imaging problem, $A$ is a partial Fourier transform with $m$ rows and $n$ columns. We define the sampling ratio as $m/n$. The fewer measurements we sample, the less MR scanning time is needed, so low sampling ratios are of particular interest in MR imaging. We follow the sampling strategy of previous works [12,14-15], which randomly chooses more Fourier coefficients from low frequencies and fewer from high frequencies. All measurements are mixed with 0.01 Gaussian white noise. Signal-to-Noise Ratio (SNR) is used for result evaluation.\n\nAll experiments are run on a laptop with a 2.4GHz Intel Core i5 2430M CPU; the Matlab version is 7.8 (2009a). We conduct experiments on four MR images: \u201cCardiac\u201d, \u201cBrain\u201d, \u201cChest\u201d and \u201cShoulder\u201d (Figure 1). We first compare our algorithm with the classical and fastest MR image reconstruction algorithms: CG [3], TVCMRI [12], RecPF [13], FCSA [14,15], and then with general tree-based algorithms or solvers: AMP [10], VB [9], YALL1 [19], SLEP [18]. For fair comparisons, all codes are downloaded from the authors' websites. We do not include MCMC [8] in the experiments because it has slow execution speed and intractable convergence [9][10]. OGL [7] solves its model by SpaRSA [11] with only an O(1/k) convergence rate, which cannot be competitive with recent FISTA [16,17] algorithms with an O(1/k^2) convergence rate; moreover, its authors have not published their code yet, so we do not include OGL in the comparisons either. We use the same settings $\\alpha = 0.001$, $\\beta = 0.035$ as in previous works [12,14,15] for all convex models. $\\lambda = 0.2 \\times \\beta$ is used for our model.\n\nFigure 1: MR images: Cardiac; Brain; Chest; Shoulder and the sampling mask.\n\n4.2 Comparisons with MR image reconstruction algorithms\n\nWe first compare our method with the state-of-the-art MR image reconstruction algorithms. For convenience, all test images are resized to 256\u00d7256. 
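For reference, the SNR figures reported below can be computed as in the following sketch. The paper does not spell out its exact SNR formula, so this variance-referenced definition is an assumption, given only as one common choice:

```python
import numpy as np

def snr_db(x_true, x_rec):
    """SNR in dB: (mean-removed) signal energy over reconstruction error energy."""
    err = np.sum((x_true - x_rec) ** 2)
    sig = np.sum((x_true - x_true.mean()) ** 2)
    return 10.0 * np.log10(sig / err)
```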
Figure 2 shows the performance comparison on the \u201cBrain\u201d image. All algorithms terminate after 50 iterations. We decompose the wavelet coefficients into 4 levels here, since more levels would increase the computation cost and fewer levels would weaken the tree structure benefit. One can observe that the visual result recovered by the proposed algorithm is the closest to the original with only a 20% sampling ratio. Although the tree structure certainly costs a little more time to solve, the proposed algorithm always achieves the best performance in terms of SNR versus CPU time. We have conducted experiments on other images and obtained similar results: the proposed algorithm always has the best performance in terms of SNR and CPU time. This result is reasonable because we exploit the wavelet tree structure in our model, which can reduce the required number of measurements or increase the accuracy of the solution for the same measurements.\n\nFigure 2: Brain image reconstruction with 20% sampling. Visual results from left to right, top to bottom are the original image and images reconstructed by CG [3], TVCMRI [12], RecPF [13], FCSA [14,15], and the proposed algorithm. Their SNRs are 10.26, 13.50, 14.29, 15.69 and 16.88. The right side shows the average SNR versus iterations and SNR versus CPU time.\n\n4.3 Comparisons with general algorithms of tree structure\n\nWe also compare our algorithm with existing algorithms for tree sparsity based on statistical inference and convex optimization. For the statistical algorithms AMP [10] and VB [9], we use the default settings in their code. For SLEP [18], we set the same parameters $\\alpha$ and $\\beta$ as in the previous experiments. For YALL1 [19], we set both $\\beta_1$ and $\\beta_2$ equal to $\\beta$. VB needs every column of $A$, which slows down the whole algorithm. Due to the higher space requirement and time complexity of VB, we resize all images to 128\u00d7128. The wavelet decomposition level is set to 3. 
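The parent-child grouping used throughout can be enumerated as in the following sketch, assuming a standard Mallat quadtree layout of the 2D wavelet coefficients of an n×n image; the function name and layout conventions are assumptions, not taken from the released code.

```python
def parent_child_groups(n, levels):
    """Enumerate (parent, child) index pairs into the flattened n*n coefficient
    array of a `levels`-level 2D wavelet decomposition in Mallat layout.
    A detail coefficient at level `lev` has 4 children one level finer."""
    groups = []
    for lev in range(levels, 1, -1):           # coarse detail levels down to 2
        size = n >> lev                         # subband side length at this level
        for bi, bj in [(0, 1), (1, 0), (1, 1)]: # HL, LH, HH band offsets
            po, qo = bi * size, bj * size       # band top-left corner
            for i in range(size):
                for j in range(size):
                    parent = (po + i) * n + (qo + j)
                    for di in range(2):         # 2x2 block of children, same band
                        for dj in range(2):
                            child = (2 * po + 2 * i + di) * n + (2 * qo + 2 * j + dj)
                            groups.append((parent, child))
    return groups
```

Each pair becomes one row-group of the sparse matrix $G$; since every detail coefficient above the finest level has four children, the total number of pair elements stays within the n' ≤ 2n bound used in the complexity analysis.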
Figure 3 shows the reconstruction results on the \u201cBrain\u201d image with only 20% measurements. All algorithms terminate after 50 iterations. Due to the high computational complexity of VB, we do not show its performance in the right bottom panel. As AMP and VB can converge within only a small number of iterations but are much slower, we run them for 10 iterations in all later experiments. The proposed algorithm always achieves the highest SNR versus CPU time among all tree-based algorithms or solvers. These results are reasonable because none of the other algorithms uses the sparse priors of MR images in the wavelet and gradient domains simultaneously.\n\nTable 1: Comparisons of SNR (dB) on four MR images\n\nAlgorithms | Iterations | Cardiac | Brain | Chest | Shoulder\nAMP [10] | 10 | 11.36\u00b10.95 | 11.56\u00b10.60 | 11.00\u00b10.30 | 14.49\u00b11.04\nVB [9] | 10 | 9.62\u00b11.82 | 9.23\u00b11.39 | 8.93\u00b10.79 | 13.81\u00b10.44\nSLEP [18] | 50 | 12.24\u00b11.08 | 12.28\u00b10.78 | 12.34\u00b10.28 | 15.65\u00b11.78\nYALL1 [19] | 50 | 9.56\u00b10.13 | 7.73\u00b10.15 | 7.76\u00b10.56 | 13.14\u00b10.22\nProposed | 50 | 14.80\u00b10.51 | 14.11\u00b10.41 | 12.90\u00b10.13 | 18.93\u00b10.73\n\nTables 1 and 2 show the results on the four MR images. Although the statistical algorithms are slow in general, they have the convenience of requiring no parameter tuning, as all parameters are learned from the data. Fortunately, good parameters for MR image reconstruction are easy to tune in our model. Except for the proposed algorithm, all other algorithms make a strong assumption of tree structure. However, for real MR data, many images do not strictly follow this assumption. 
Table 2: Comparisons of execution time (sec) on four MR images\n\nAlgorithms | Iterations | Cardiac | Brain | Chest | Shoulder\nAMP [10] | 10 | 2.30\u00b10.06 | 2.36\u00b10.33 | 2.37\u00b10.41 | 2.29\u00b10.22\nVB [9] | 10 | 13.95\u00b10.11 | 14.25\u00b10.29 | 14.11\u00b10.40 | 14.15\u00b10.42\nSLEP [18] | 50 | 1.44\u00b10.08 | 1.52\u00b10.06 | 1.41\u00b10.05 | 1.45\u00b10.08\nYALL1 [19] | 50 | 1.02\u00b10.04 | 1.04\u00b10.01 | 0.98\u00b10.04 | 1.00\u00b10.02\nProposed | 50 | 1.54\u00b10.04 | 1.61\u00b10.03 | 1.56\u00b10.07 | 1.62\u00b10.14\n\nFigure 3: Brain image reconstruction with 20% sampling. Visual results from left to right, top to bottom are the original image and images reconstructed by AMP [10], VB [9], SLEP [18], YALL1 [19], and the proposed algorithm. Their SNRs are 11.56, 8.81, 12.28, 7.73 and 14.11. The right side shows the average SNR versus iterations and versus CPU time. Note that the right bottom panel only shows the first 10 iterations of AMP.\n\nDue to this reason, these tree-based algorithms cannot do their best on real MR images. To show the benefit of the proposed model, we design another experiment on a toy MR image, which follows the tree structure assumption more strictly. First we set the wavelet coefficients that have the smallest 0.1% of the energy to zero. Then, if a coefficient's parent or child is zero, we set it to zero as well, so that the coefficients in the same group are either both zero or both non-zero. Figure 4 shows the original toy brain image and the corresponding results of the different algorithms. We find that all algorithms improve a lot and that their performance becomes much closer. From Figures 4 and 3, we can see that the proposed algorithm has a great advantage on real MR images, because we combine TV and wavelet sparsity, which \u201csoften\u201d and complement the tree structure assumption for real MR data. 
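The toy-image construction described above can be sketched as follows; the `parent_of` index map and the single zero-propagation sweep are illustrative assumptions, not the paper's code.

```python
import numpy as np

def enforce_tree_support(coeffs, parent_of, energy_frac=0.001):
    """Zero the coefficients carrying the smallest `energy_frac` of total energy,
    then sweep once so a zero parent or child also zeroes its partner."""
    c = coeffs.astype(float).copy()
    order = np.argsort(c ** 2)                 # indices by ascending energy
    cum = np.cumsum(c[order] ** 2)
    c[order[cum <= energy_frac * cum[-1]]] = 0.0
    zero = (c == 0.0)
    for i, p in enumerate(parent_of):          # -1 marks a root (no parent)
        if p >= 0 and (zero[i] or zero[p]):
            c[i] = c[p] = 0.0
    return c
```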
Other tree-based algorithms depend on the \u201chard\u201d tree structure only, which makes it hard for them to perform well on CS-MRI.\n\nFinally, we show the results at different sampling ratios in Figure 5. For the same algorithm, the SNR of the solution tends to be higher when more measurements are used. On the same test image, the order of performance tends to be the same. This coincides with the conclusion of previous papers [14,15] that FCSA is better than TVCMRI and RecPF, and far better than the classical method CG. Across all these experiments, the proposed algorithm always achieves a higher SNR than all other algorithms on real MR images.\n\nFigure 4: Toy image reconstruction with 20% sampling. Visual results from left to right, top to bottom are the original image and images reconstructed by AMP [10], VB [9], SLEP [18], YALL1 [19], and the proposed algorithm. Their SNRs are 12.99, 10.12, 13.53, 13.19 and 15.29. The right side shows the average SNR versus iterations and versus CPU time.\n\nFigure 5: Average SNR at different sampling ratios on 4 MR images. All algorithms terminate after 50 iterations, except AMP [10] and VB [9], which terminate after 10 iterations. From left to right, results are on \u201cCardiac\u201d, \u201cBrain\u201d, \u201cChest\u201d and \u201cShoulder\u201d.\n\n5 Conclusions\n\nReal MR images not only tend to be tree-structured sparse, but are also sparse in the wavelet and gradient domains. In this paper, we consider all these priors in our model, and all terms in this model are convex. To solve this model, we decompose the original problem into three simpler ones and solve each of them very efficiently. Numerous experiments have been conducted to validate our method. All experiments demonstrate that the proposed algorithm outperforms the state-of-the-art ones in CS-MRI and general tree-based algorithms or solvers. 
Compared with the state-of-the-art algorithms in CS-MRI, the tree structure in our model helps reduce the required measurements and leads to better performance. Compared with general tree sparsity algorithms, our algorithm obtains more robust results on real MR data. Future work will combine the proposed algorithm with nonlocal total variation [22] for multi-contrast MRI [21].\n\nReferences\n\n[1] Donoho, D. (2006) Compressed sensing. IEEE Trans. on Information Theory 52(4):1289-1306.\n[2] Candes, E., Romberg, J. & Tao, T. (2006) Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. on Information Theory 52(2):489-509.\n[3] Lustig, M., Donoho, D. & Pauly, J. (2007) Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine 58(6):1182-1195.\n[4] Huang, J., Zhang, T. & Metaxas, D. (2011) Learning with Structured Sparsity. Journal of Machine Learning Research 12:3371-3412.\n[5] Baraniuk, R.G., Cevher, V., Duarte, M.F. & Hegde, C. (2010) Model-based compressive sensing. IEEE Trans. on Information Theory 56:1982-2001.\n[6] Bach, F., Jenatton, R., Mairal, J. & Obozinski, G. (2012) Structured sparsity through convex optimization. Technical report, HAL 00621245-v2, to appear in Statistical Science.\n[7] Rao, N., Nowak, R., Wright, S. & Kingsbury, N. (2011) Convex approaches to model wavelet sparsity patterns. In IEEE International Conference on Image Processing, ICIP'11.\n[8] He, L. & Carin, L. (2009) Exploiting Structure in Wavelet-Based Bayesian Compressive Sensing. IEEE Trans. on Signal Processing 57(9):3488-3497.\n[9] He, L., Chen, H. & Carin, L. (2010) Tree-Structured Compressive Sensing with Variational Bayesian Analysis. IEEE Signal Processing Letters 17(3):233-236.\n[10] Som, S., Potter, L.C. & Schniter, P. (2010) Compressive Imaging using Approximate Message Passing and a Markov-Tree Prior. 
In Proceedings of the Asilomar Conference on Signals, Systems, and Computers.\n[11] Wright, S.J., Nowak, R.D. & Figueiredo, M.A.T. (2009) Sparse reconstruction by separable approximation. IEEE Trans. on Signal Processing 57:2479-2493.\n[12] Ma, S., Yin, W., Zhang, Y. & Chakraborty, A. (2008) An efficient algorithm for compressed MR imaging using total variation and wavelets. In Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR'08.\n[13] Yang, J., Zhang, Y. & Yin, W. (2010) A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data. IEEE Journal of Selected Topics in Signal Processing, Special Issue on Compressive Sensing 4(2):288-297.\n[14] Huang, J., Zhang, S. & Metaxas, D. (2011) Efficient MR Image Reconstruction for Compressed MR Imaging. Medical Image Analysis 15(5):670-679.\n[15] Huang, J., Zhang, S. & Metaxas, D. (2010) Efficient MR Image Reconstruction for Compressed MR Imaging. In Proc. of the 13th Annual International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI'10.\n[16] Beck, A. & Teboulle, M. (2009) A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1):183-202.\n[17] Beck, A. & Teboulle, M. (2009) Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. on Image Processing 18(11):2419-2434.\n[18] Liu, J., Ji, S. & Ye, J. (2009) SLEP: Sparse Learning with Efficient Projections. Arizona State University. http://www.public.asu.edu/~jye02/Software/SLEP.\n[19] Deng, W., Yin, W. & Zhang, Y. (2011) Group Sparse Optimization by Alternating Direction Method. Rice CAAM Report TR11-06.\n[20] Manduca, A. & Said, A. (1996) Wavelet Compression of Medical Images with Set Partitioning in Hierarchical Trees. 
In Proceedings of the International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS.\n[21] Huang, J., Chen, C. & Axel, L. (2012) Fast Multi-contrast MRI Reconstruction. In Proc. of the 15th Annual International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI'12.\n[22] Huang, J. & Yang, F. (2012) Compressed Magnetic Resonance Imaging Based on Wavelet Sparsity and Nonlocal Total Variation. In IEEE International Symposium on Biomedical Imaging, ISBI'12.\n", "award": [], "sourceid": 536, "authors": [{"given_name": "Chen", "family_name": "Chen", "institution": null}, {"given_name": "Junzhou", "family_name": "Huang", "institution": null}]}