{"title": "Topology-Preserving Deep Image Segmentation", "book": "Advances in Neural Information Processing Systems", "page_first": 5657, "page_last": 5668, "abstract": "Segmentation algorithms are prone to make topological errors on fine-scale struc-\ntures, e.g., broken connections. We propose a novel method that learns to segment with correct topology. In particular, we design a continuous-valued loss function that enforces a segmentation to have the same topology as the ground truth, i.e.,having the same Betti number. The proposed topology-preserving loss function is differentiable and can be incorporated into end-to-end training of a deep neural network. Our method achieves much better performance on the Betti number error, which directly accounts for the topological correctness. It also performs superior on other topology-relevant metrics, e.g., the Adjusted Rand Index and the Variation of Information, without sacrificing per-pixel accuracy. We illustrate the effectiveness of the proposed method on a broad spectrum of natural and biomedical datasets.", "full_text": "Topology-Preserving Deep Image Segmentation\n\n\u2217Xiaoling Hu1, Li Fuxin2, Dimitris Samaras1 and Chao Chen1\n\n1Stony Brook University\n2Oregon State University\n\nAbstract\n\nSegmentation algorithms are prone to topological errors on \ufb01ne-scale structures,\ne.g., broken connections. We propose a novel method that learns to segment\nwith correct topology. In particular, we design a continuous-valued loss function\nthat enforces a segmentation to have the same topology as the ground truth, i.e.,\nhaving the same Betti number. The proposed topology-preserving loss function\nis differentiable and we incorporate it into end-to-end training of a deep neural\nnetwork. Our method achieves much better performance on the Betti number error,\nwhich directly accounts for the topological correctness. 
It also performs superiorly on other topology-relevant metrics, e.g., the Adjusted Rand Index and the Variation of Information. We illustrate the effectiveness of the proposed method on a broad spectrum of natural and biomedical datasets.

1 Introduction

Image segmentation, i.e., assigning labels to all pixels of an input image, is crucial in many computer vision tasks. State-of-the-art deep segmentation methods [27, 22, 10, 11, 12] learn high-quality feature representations through an end-to-end trained deep network and achieve satisfactory per-pixel accuracy. However, these segmentation algorithms are still prone to errors on fine-scale structures, such as small object instances, instances with multiple connected components, and thin connections. These fine-scale structures may be crucial in analyzing the functionality of the objects. For example, accurate extraction of thin parts such as ropes and handles is crucial in planning robot actions, e.g., dragging or grasping. In biomedical images, correct delineation of thin objects such as neuron membranes and vessels is crucial in providing accurate morphological and structural quantification of the underlying system. A broken connection or a missing component may induce only a marginal per-pixel error, but can cause catastrophic functional mistakes. See Fig. 1 for an example.

We propose a novel deep segmentation method that learns to segment with correct topology. In particular, we propose a topological loss that enforces the segmentation results to have the same topology as the ground truth, i.e., having the same Betti number (number of connected components and handles). A neural network trained with such a loss will achieve high topological fidelity without sacrificing per-pixel accuracy. The main challenge in designing such a loss is that topological information, namely the Betti numbers, is discrete-valued.
We need a continuous-valued measurement of the topological similarity between a prediction and the ground truth, and such a measurement needs to be differentiable in order to backpropagate through the network.

To this end, we propose to use theory from computational topology [15], which summarizes the topological information of a continuous-valued function (in our case, the likelihood function f predicted by a neural network). Instead of acquiring the segmentation by thresholding f at 0.5 and inspecting its topology, persistent homology [15, 16, 47] captures topological information carried by f over all possible thresholds. This provides a unified, differentiable approach of measuring the topological similarity between f and the ground truth, called the topological loss. We derive the gradient of the loss so that the network predicting f can be optimized accordingly.

*Correspondence to: Xiaoling Hu .

33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

Figure 1: Illustration of the importance of topological correctness in a neuron image segmentation task. The goal of this task is to segment membranes which partition the image into regions corresponding to neurons. (a) an input neuron image. (b) ground truth segmentation of the membranes (dark blue) and the resulting neuron regions. (c) result of a baseline method without topological guarantee [18]. Small pixel-wise errors lead to broken membranes, resulting in the merging of many neurons into one. (d) Our method produces the correct topology and the correct partitioning of neurons.
We focus on 0- and 1-dimensional topology (components and connections) of 2-dimensional images.

Our method is the first end-to-end deep segmentation network with guaranteed topological correctness. We show that when the topological loss is decreased to zero, the segmentation is guaranteed to be topologically correct, i.e., to have identical topology as the ground truth. Our method is empirically validated by comparing with state-of-the-art methods on natural and biomedical datasets with fine-scale structures. It achieves superior performance on metrics that encourage structural accuracy. In particular, our method significantly outperforms others on the Betti number error, which exactly measures the topological accuracy. Fig. 1 shows a qualitative result.

Our method shows how topological computation and deep learning can be mutually beneficial. While our method empowers deep nets with advanced topological constraints, it is also a powerful approach to topological analysis; the observed function is now learned with a highly nonlinear deep network. This enables topology to be estimated based on a semantically informed and denoised observation.

Related work. The closest method to ours is by Mosinska et al. [29], which also proposes a topology-aware loss. Instead of actually computing and comparing the topology, their approach uses the response of selected filters from a pretrained VGG19 network to construct the loss. These filters prefer elongated shapes and thus alleviate the broken-connection issue. But this method is hard to generalize to more complex settings with connections of arbitrary shapes. Furthermore, even if this method achieves zero loss, its segmentation is not guaranteed to be topologically correct.

Different ideas have been proposed to capture fine details of objects, mostly revolving around deconvolution and upsampling [27, 10, 11, 12, 32, 37].
However, these methods focus on the prediction accuracy of individual pixels and are intrinsically topology-agnostic. Topological constraints, e.g., connectivity and loop-freeness, have been incorporated into variational [21, 26, 41, 38, 45, 20] and MRF/CRF-based segmentation methods [43, 33, 46, 6, 2, 40, 34, 17]. However, these methods focus on enforcing topological constraints in the inference stage, while the trained model is agnostic of the topological prior. In neuron image segmentation, some methods [19, 42] directly find an optimal partition of the image into neurons, and thus avoid segmenting membranes. These methods cannot be generalized to other structures, e.g., vessels, cracks and roads.

Figure 2: An overview of our method.

For completeness, we also refer to other existing works on topological features and their applications [1, 36, 25, 5, 31, 9, 45]. In graphics, topological similarity was used to simplify and align shapes [35]. Chen et al. [8] proposed a topological regularizer to simplify the decision boundary of a classifier. As for deep neural networks, Hofer et al. [23] proposed a CNN-based topological classifier. This method directly extracts topological information from an input image/shape/graph as input for a CNN, and hence cannot generate segmentations that preserve topological priors learned from the training set. To the best of our knowledge, no existing work uses topological information as a loss for training a deep neural network in an end-to-end manner.

2 Method

Our method achieves both per-pixel accuracy and topological correctness by training a deep neural network with a new topological loss, Ltopo(f, g). Here f is the likelihood map predicted by the network and g is the ground truth.
The loss function on each training image is a weighted sum of the per-pixel cross-entropy loss, Lbce, and the topological loss:

L(f, g) = Lbce(f, g) + λ Ltopo(f, g),   (2.1)

in which λ controls the weight of the topological loss. We assume a binary segmentation task; thus, there is a single likelihood function f, whose value ranges between 0 and 1.

In Sec. 2.1, we introduce the mathematical foundation of topology and how to measure the topology of a likelihood map robustly using persistent homology. In Sec. 2.2, we formalize the topological loss as the difference between the persistent homology of f and g. We derive the gradient of the loss and prove its correctness. In Sec. 2.3 we explain how to incorporate the loss into the training of a neural network. Although we fix one architecture in experiments, our method is general and can use any neural network that provides pixel-wise predictions. Fig. 2 illustrates the overview of our method.

2.1 Topology and Persistent Homology

Given a continuous image domain, Ω ⊆ R2 (e.g., a 2D rectangle), we study a likelihood map f(x) : Ω → R, which is predicted by a deep neural network (Fig. 3(c)).2 Note that in practice, we only have samples of f at all pixels. In such a case, we extend f to the whole image domain Ω by linear interpolation. Therefore, f is piecewise-linear and is controlled by its values at all pixels. A segmentation, X ⊆ Ω (Fig. 3(a)), is calculated by thresholding f at a given value α (often set to 0.5).

Given X, a d-dimensional topological structure of X, called a homology class [15, 30], is an equivalence class of d-manifolds which can be deformed into each other within X.3 In particular, 0-dim and 1-dim structures are connected components and handles, respectively. For example, in Fig. 3(a), the segmentation X has two connected components and one handle. Meanwhile, the ground truth (Fig. 3(b)) has one connected component and two handles.
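To make these counts concrete: for a binary mask such as X, β0 (components) can be computed with a union-find over 4-adjacent foreground pixels, and β1 (handles) recovered from the Euler characteristic χ = V − E + F = β0 − β1 of the cubical complex (vertices = foreground pixels, edges between 4-adjacent foreground pixels, squares for all-foreground 2×2 blocks). The following is a minimal illustrative sketch, not the authors' implementation; the function name and the 4-connectivity convention are our own choices:

```python
def betti_numbers(mask):
    """Betti numbers (beta0, beta1) of a binary 2D mask, viewed as a
    cubical complex: vertices are foreground pixels, edges join
    4-adjacent foreground pixels, squares fill all-foreground 2x2
    blocks.  Uses beta1 = beta0 - chi with chi = V - E + F."""
    h, w = len(mask), len(mask[0])
    fg = {(i, j) for i in range(h) for j in range(w) if mask[i][j]}

    parent = {p: p for p in fg}          # union-find for beta0
    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p

    V, E, F = len(fg), 0, 0
    for (i, j) in fg:
        for di, dj in ((0, 1), (1, 0)):  # count each edge once
            if (i + di, j + dj) in fg:
                E += 1
                ra, rb = find((i, j)), find((i + di, j + dj))
                if ra != rb:
                    parent[ra] = rb
        # a square cell exists when the whole 2x2 block is foreground
        if {(i, j + 1), (i + 1, j), (i + 1, j + 1)} <= fg:
            F += 1
    beta0 = len({find(p) for p in fg})
    beta1 = beta0 - (V - E + F)          # chi = beta0 - beta1 in 2D
    return beta0, beta1
```

On the segmentation of Fig. 3(a) this would return (2, 1), and on the ground truth of Fig. 3(b), (1, 2), matching the counts stated above.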
Given X, we can compute the number of topological structures, called the Betti number, and compare it with the topology of the ground truth. However, simply comparing the Betti numbers of X and g results in a discrete-valued topological error function. To incorporate the topological prior into deep neural networks, we need a continuous-valued function that can reveal subtle differences between similar structures. Fig. 3(c) and 3(d) show two likelihood maps f and f′ with identical segmentations, both with incorrect topology compared with the ground truth g (Fig. 3(b)). However, f is preferable, as much less effort is needed to change it so that the thresholded segmentation X has the correct topology. In particular, look closely at Fig. 3(c) and 3(d) near the broken handles and view the landscape of the function. To restore the broken handle in Fig. 3(d), we need to fill a much deeper gap than in Fig. 3(c). The same situation happens near the missing bridge between the two connected components.

2 f depends on the network parameter ω, which will be optimized during training. For convenience, we only use x as the argument of f.

3 To be exact, a homology class is an equivalence class of cycles whose difference is the boundary of a (d + 1)-dimensional patch.

Figure 3: Illustration of topology and the topology of a likelihood map. For visualization purposes, the higher the function values are, the darker the area is. (a) an example segmentation X with two connected components and one handle. (b) The ground truth with one connected component and two handles. It can also be viewed as a binary-valued function g. (c) a likelihood map f whose segmentation (bounded by the red curve) is X. The landscape views near the broken bridge and handle are drawn. Critical points are highlighted in the segmentation. (d) another likelihood map f′ with the same segmentation as f.
But the landscape views reveal that f′ is worse than f due to its deeper gaps.

To capture such subtle structural differences between likelihood maps, we need a holistic view. In particular, we use the theory of persistent homology [16, 15]. Instead of choosing a fixed threshold, persistent homology captures all possible topological structures over all thresholds, and summarizes all this information in a concise format, called a persistence diagram.

Fig. 3 shows that considering only one threshold α = 0.5 is insufficient. We consider thresholding the likelihood function with all possible thresholds. The thresholded results, f^α := {x ∈ Ω | f(x) ≥ α} at different α's, constitute a filtration, i.e., a monotonically growing sequence induced by decreasing the threshold α: ∅ ⊆ f^{α1} ⊆ f^{α2} ⊆ ... ⊆ f^{αn} = Ω, where α1 ≥ α2 ≥ ... ≥ αn. As α decreases, the topology of f^α changes. Some new topological structures are born while existing ones are killed. When α < αn, only one connected component survives and never gets killed. See Fig. 4(a) and 4(d) for the filtrations induced by the ground truth g (as a binary-valued function) and the likelihood f.

For a continuous-valued function f, its persistence diagram, Dgm(f), contains a finite number of dots in the 2-dimensional plane, called persistent dots. Each persistent dot p ∈ Dgm(f) corresponds to a topological structure born and killed in the filtration. Denote by birth(p) and death(p) the birth and death time/threshold of the structure. For the connected component born at the global maximum, which never dies, we say it dies at α = 0. The coordinates of the dot p in the diagram are (1 − birth(p), 1 − death(p)).4 Fig. 4(b) and 4(e) show the diagrams of g and f, respectively. Instead of comparing discrete Betti numbers, we can use the information from the persistence diagrams to compare a likelihood f with the ground truth g in terms of topology.

To compute Dgm(f), we use the classic algorithm [15, 16] with an efficient implementation [7, 44]: we first discretize an image patch into vertices (pixels), edges and squares. Note we adopt a cubical complex discretization, which is more suitable for images. The adjacency relationships between these discretized elements and their likelihood function values are encoded in a boundary matrix, whose rows and columns correspond to vertices/edges/squares. The matrix is reduced using a modified Gaussian elimination algorithm. The pivot entries of the reduced matrix correspond to all the dots in Dgm(f). This algorithm is cubic in the matrix dimension, which is linear in the image size.

2.2 Topological Loss and its Gradient

We are now ready to formalize the topological loss, which measures the topological similarity between the likelihood f and the ground truth g. We abuse notation and also view g as a binary-valued function. We use the dots in the persistence diagram of f, as they capture all possible topological structures f potentially has. We slightly modify the Wasserstein distance for persistence diagrams [14]. For persistence diagrams Dgm(f) and Dgm(g), we find the best one-to-one correspondence between the two sets of dots, and measure the total squared distance between them.5 An unmatched dot is matched to the diagonal line. Fig. 4(c) shows the optimal matching of the diagrams of g and f. Fig. 4(f) shows the optimal matching of Dgm(g) and Dgm(f′).
The latter is clearly more expensive.

4 Unlike the traditional setting, we use 1 − birth and 1 − death as the x and y axes, because we are using an upperstar filtration, i.e., using the superlevel set and a decreasing α value.

5 To be exact, the matching needs to be done on separate dimensions. Dots of 0-dim structures (blue markers in Fig. 4(b) and 4(e)) should be matched to the diagram of 0-dim structures. Dots of 1-dim structures (red markers in Fig. 4(b) and 4(e)) should be matched to the diagram of 1-dim structures.

(a) Filtration induced by the ground truth function, g. (b) Dgm(g) (c) Dgm(g)+Dgm(f) (d) Filtration induced by the likelihood function, f. (e) Dgm(f) (f) Dgm(g)+Dgm(f′)

Figure 4: An illustration of persistent homology. Left: the filtrations on the ground truth function g and the likelihood function f. The bars of blue and burgundy colors are connected components and handles, respectively. (a) For g, all structures are born at α = 1.0 and die at α = 0. (d) For f, from left to right: birth of two components, birth of the longer handle, segmentation at α = 0.5, birth of the shorter handle, death of the extra component, death of both handles. (b) and (e) the persistence diagrams of g and f. (c) the overlay of the two diagrams. Orange arrows denote the matching between the persistent dots. The extra component (a blue cross) from the likelihood is matched to the diagonal line and will be removed if we move Dgm(f) to Dgm(g). (f) the overlay of the diagrams of g and the worse likelihood Dgm(f′). The matching is obviously more expensive.

The matching algorithm is as follows. A total of k (= Betti number) dots from the ground truth (Dgm(g)) are at the upper-left corner pul = (0, 1), with birth(pul) = 1 and death(pul) = 0 (Fig. 4(b)). In Dgm(f), we find the k dots closest to the corner pul and match them to the ground truth dots.
The remaining dots in Dgm(f) are matched to the diagonal line. The algorithm computes and sorts the squared distances from all dots in Dgm(f) to pul. The complexity is O(n log n), with n the number of dots in Dgm(f). In general, the state of the art matches two arbitrary diagrams in O(n^1.5) time [24].

Let Γ be the set of all possible bijections between Dgm(f) and Dgm(g). The loss Ltopo(f, g) is:

Ltopo(f, g) = min_{γ∈Γ} Σ_{p∈Dgm(f)} ||p − γ(p)||² = Σ_{p∈Dgm(f)} [birth(p) − birth(γ*(p))]² + [death(p) − death(γ*(p))]²,   (2.2)

where γ* is the optimal matching between the two point sets.

Intuitively, this loss measures the minimal amount of effort necessary to modify Dgm(f) into Dgm(g) by moving all dots toward their matches. Note there are more dots in Dgm(f) (Fig. 4(c)) than in Dgm(g) (Fig. 4(b)); there will usually be some noise in the predicted likelihood map. If a dot p cannot be matched, we match it to its projection on the diagonal line, {(1 − b, 1 − d) | b = d}. This means we consider it as noise that should be removed. The dots matched to the diagonal line correspond to small noisy components or noisy loops. These dots will be pushed to the diagonal, and their corresponding components/loops will be removed or merged with others.

In this example, the extra connected component (a blue cross) in Dgm(f) will be removed. For comparison, we also show in Fig. 4(f) the matching between the diagrams of the worse likelihood f′ and g. The cost of the matching is clearly higher, i.e., Ltopo(f′, g) > Ltopo(f, g).
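Under this corner-matching scheme, Eq. (2.2) reduces to a few lines. The sketch below is our own illustration, not the released code (the name `topo_loss` is hypothetical); it assumes all ground-truth dots sit at (birth, death) = (1, 0), as they do for a binary-valued g, and matches every unpaired dot to its projection on the diagonal:

```python
def topo_loss(dgm_f, k):
    """Topological loss of Eq. (2.2) under the corner-matching scheme:
    the k dots of Dgm(f) closest to the ground-truth corner
    (birth, death) = (1, 0) are matched to ground-truth dots; every
    remaining dot is matched to its projection on the diagonal b = d."""
    by_corner = sorted(dgm_f, key=lambda p: (p[0] - 1.0) ** 2 + p[1] ** 2)
    loss = 0.0
    for i, (b, d) in enumerate(by_corner):
        if i < k:                      # matched to a ground-truth dot
            loss += (b - 1.0) ** 2 + d ** 2
        else:                          # noise: pushed onto the diagonal
            loss += (b - d) ** 2 / 2.0
    return loss
```

A dot at the corner contributes nothing, while a spurious dot far from the diagonal is expensive, matching the intuition that prominent noise costs more effort to remove.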
As a theoretical reassurance, it has been proven that this metric for diagrams is stable, and the loss function Ltopo(f, g) is Lipschitz with regard to the likelihood function f [13].

The following theorem guarantees that the topological loss, when minimized to zero, enforces the constraint that the segmentation has the same topology as the ground truth.

Theorem 1 (Topological Correctness). When the loss function Ltopo(f, g) is zero, the segmentation obtained by thresholding f at 0.5 has the same Betti number as g.

Proof. Assume Ltopo(f, g) is zero. By Eq. (2.2), Dgm(f) and Dgm(g) are matched perfectly, i.e., p = γ*(p), ∀p ∈ Dgm(f). The two diagrams are identical and have the same number of dots.

Since g is a binary-valued function, as we decrease the threshold α continuously, all topological structures are created at α = 1. The number of topological structures (Betti number) of g^α for any 0 < α < 1 is the same as the number of dots in Dgm(g). Note that for any α ∈ (0, 1), g^α is the ground truth segmentation. Therefore, the Betti number of the ground truth is the number of dots in Dgm(g). Similarly, for any α ∈ (0, 1), the Betti number of f^α equals the number of dots in Dgm(f). Since the two diagrams Dgm(f) and Dgm(g) are identical, the Betti number of the segmentation f^0.5 is the same as that of the ground truth segmentation.6

Topological gradient. The loss function (Eq. (2.2)) depends on crucial thresholds at which topological changes happen, e.g., the birth and death times of different dots in the diagram. These crucial thresholds are uniquely determined by the locations at which the topological changes happen.
When the underlying function f is differentiable, these crucial locations are exactly critical points, i.e., points with zero gradients. In the training context, our likelihood function f is a piecewise-linear function controlled by the neural network predictions at pixels. For such f, a critical point is always a pixel, since topological changes always happen at pixels. Denote by ω the neural network parameters. For each dot p ∈ Dgm(f), we denote by cb(p) and cd(p) the birth and death critical points of the corresponding topological structure (see Fig. 3(c) for examples).

Formally, we can show that the gradient of the topological loss, ∇ω Ltopo(f, g), is:

Σ_{p∈Dgm(f)} 2[f(cb(p)) − birth(γ*(p))] ∂f(cb(p))/∂ω + 2[f(cd(p)) − death(γ*(p))] ∂f(cd(p))/∂ω.   (2.3)

To see this, note that within a sufficiently small neighborhood of f, any other piecewise-linear function will have the same superlevel set filtration as f. The critical points of each persistent dot in Dgm(f) remain constant within such a small neighborhood, and so does the optimal matching γ*. Therefore, the gradient can be straightforwardly computed based on the chain rule, as in Eq. (2.3). When the function values at different vertices are the same, or when the matching is ambiguous, the gradient does not exist. However, these cases constitute a measure-zero subspace in the space of likelihood functions. In summary, Ltopo(f, g) is a piecewise-differentiable loss function over the space of all possible likelihood functions f.

Intuition. During training, we take the negative gradient direction, i.e., −∇ω Ltopo(f, g). For each topological structure, the gradient descent step pushes the corresponding dot p ∈ Dgm(f) toward its match γ*(p) ∈ Dgm(g). These coordinates are the function values of the critical points cb(p) and cd(p).
They are both moved closer to the matched persistent dot in Dgm(g). We also show the negative gradient force in the landscape view of function f (blue arrow in Fig. 3(c)). Intuitively, the force from the topological gradient pushes the saddle points up so that the broken bridge gets connected.

2.3 Training a Neural Network

We present some crucial details of our training algorithm. Although our method is architecture-agnostic, we select one architecture inspired by DIVE [18], which was designed for neuron image segmentation tasks. Our network contains six trainable weight layers: four convolutional layers and two fully connected layers. The first, second and fourth convolutional layers are each followed by a single max pooling layer of size 2 × 2 and stride 2. In particular, because of the computational complexity, we use a patch size of 65 × 65 throughout training.

We use small patches (65 × 65) instead of big patches or the whole image. The reason is twofold. First, the computation of topological information is relatively expensive. Second, the matching process between the persistence diagrams of the predicted likelihood map and the ground truth can be quite difficult. For example, if the patch size is too big, there will be many persistent dots in Dgm(g) and even more dots in Dgm(f). The matching process is then too complex and prone to errors. By focusing on smaller patches, we localize topological structures and fix them one by one.

Topology of small patches and relative homology. The small patches (65 × 65) often contain only partial branching structures rather than closed loops. To have a meaningful topological measure on these small patches, we apply relative persistent homology as a more localized approach for the computation of topological structures.
In particular, for each patch, we consider the topological structures relative to the boundary. This is equivalent to padding a black frame around the boundary and computing the topology, to avoid trivial topological structures. As shown in the figure on the right, with the additional frame, a Y-shaped branching structure cropped within the patch will create two handles and be captured by persistent homology. Training using this localized topological loss can be very efficient via random patch sampling. Specifically, we do not partition the image into patches.

6 Note that a more careful proof should be done for the diagrams of 0- and 1-dimension separately.

Table 1: Quantitative results for different models on several medical datasets.

Dataset  Method    Accuracy          ARI               VOI             Betti Error
ISBI12   DIVE      0.9640 ± 0.0042   0.9434 ± 0.0087   1.235 ± 0.025   3.187 ± 0.307
         U-Net     0.9678 ± 0.0021   0.9338 ± 0.0072   1.367 ± 0.031   2.785 ± 0.269
         Mosin.    0.9532 ± 0.0063   0.9312 ± 0.0052   0.983 ± 0.035   1.238 ± 0.251
         TopoLoss  0.9626 ± 0.0038   0.9444 ± 0.0076   0.782 ± 0.019   0.429 ± 0.104
ISBI13   DIVE      0.9642 ± 0.0018   0.6923 ± 0.0134   2.790 ± 0.025   3.875 ± 0.326
         U-Net     0.9631 ± 0.0024   0.7031 ± 0.0256   2.583 ± 0.078   3.463 ± 0.435
         Mosin.    0.9578 ± 0.0029   0.7483 ± 0.0367   1.534 ± 0.063   2.952 ± 0.379
         TopoLoss  0.9569 ± 0.0031   0.8064 ± 0.0112   1.436 ± 0.008   1.253 ± 0.172
CREMI    DIVE      0.9498 ± 0.0029   0.6532 ± 0.0247   2.513 ± 0.047   4.378 ± 0.152
         U-Net     0.9468 ± 0.0048   0.6723 ± 0.0312   2.346 ± 0.105   3.016 ± 0.253
         Mosin.    0.9467 ± 0.0058   0.7853 ± 0.0281   1.623 ± 0.083   1.973 ± 0.310
         TopoLoss  0.9456 ± 0.0053   0.8083 ± 0.0104   1.462 ± 0.028   1.113 ± 0.224
Instead, we randomly and densely sample patches, which can overlap. As Theorem 1 guarantees, our loss enforces correct topology within each sampled patch. The overlaps between patches propagate correct topology everywhere. On the other hand, correct topology within a patch means the segmentation can be a deformation of the ground truth, but the deformation is constrained within the patch. The patch size thus controls the tolerable geometric deformation. During training, even for the same patch, the diagram Dgm(f), the critical pixels, and the gradients change. At each epoch, we resample patches and reevaluate their persistence diagrams and loss gradients. After computing the topological gradients of all sampled patches from a mini-batch, we aggregate them for backpropagation.

3 Experiments

We evaluate our method on six natural and biomedical datasets: CREMI7, ISBI12 [4], ISBI13 [3], CrackTree [48], Road [28] and DRIVE [39]. The first three are neuron image segmentation datasets. CREMI contains 125 images of size 1250 × 1250. ISBI12 [4] contains 30 images of size 512 × 512. ISBI13 [3] contains 100 images of size 1024 × 1024. These three datasets are neuron (electron microscopy) images; the task is to segment membranes and eventually partition the image into neuron regions. CrackTree [48] contains 206 images of cracks in roads (resolution 600 × 800). Road [28] has 1108 images from the Massachusetts Roads Dataset, with resolution 1500 × 1500. DRIVE [39] is a retinal vessel segmentation dataset with 20 images of resolution 584 × 565. For all datasets, we use three-fold cross-validation and report the mean performance over the validation set.

Evaluation metrics. We use four different evaluation metrics. Pixel-wise accuracy is the percentage of correctly classified pixels. The remaining three metrics are more topology-relevant.
The most important one is the Betti number error, which directly compares the topology (number of handles) between the segmentation and the ground truth.8 We randomly sample patches over the segmentation and report the average absolute difference between their Betti numbers and those of the corresponding ground truth patches. Two more metrics are used to indirectly evaluate topological correctness: the Adapted Rand Index (ARI) and the Variation of Information (VOI). They are used in neuron reconstruction to compare the partitioning of the image induced by the segmentation. ARI is the maximal F-score of the foreground-restricted Rand index, a measure of similarity between two clusterings. In this version of the Rand index we exclude the zero component of the original labels (background pixels of the ground truth). VOI is a measure of the distance between two clusterings. It is closely related to mutual information; indeed, it is a simple linear expression involving the mutual information.

Baselines. DIVE [18] is a state-of-the-art neural network that predicts the probability of each individual pixel in a given image being a membrane (border) pixel or not.
U-Net [37] is a popular image segmentation method trained with cross-entropy loss. Mosin. [29] uses the response of selected filters from a pretrained CNN to construct a topology-aware loss. For all methods, we generate segmentations by thresholding the predicted likelihood maps at 0.5.

7 https://cremi.org/
8 Note we focus on 1-dimensional topology in evaluation and training, as it is more crucial in practice.

Table 2: Quantitative results for different models on retinal, crack, and aerial datasets.

Dataset    Method    Accuracy          ARI               VOI             Betti Error
DRIVE      DIVE      0.9549 ± 0.0023   0.8407 ± 0.0257   1.936 ± 0.127   3.276 ± 0.642
           U-Net     0.9452 ± 0.0058   0.8343 ± 0.0413   1.975 ± 0.046   3.643 ± 0.536
           Mosin.    0.9543 ± 0.0047   0.8870 ± 0.0386   1.167 ± 0.026   2.784 ± 0.293
           TopoLoss  0.9521 ± 0.0042   0.9024 ± 0.0113   1.083 ± 0.006   1.076 ± 0.265
CrackTree  DIVE      0.9854 ± 0.0052   0.8634 ± 0.0376   1.570 ± 0.078   1.576 ± 0.287
           U-Net     0.9821 ± 0.0097   0.8749 ± 0.0421   1.625 ± 0.104   1.785 ± 0.303
           Mosin.    0.9833 ± 0.0067   0.8897 ± 0.0201   1.113 ± 0.057   1.045 ± 0.214
           TopoLoss  0.9826 ± 0.0084   0.9291 ± 0.0123   0.997 ± 0.011   0.672 ± 0.176
Road       DIVE      0.9734 ± 0.0077   0.8201 ± 0.0128   2.368 ± 0.203   3.598 ± 0.783
           U-Net     0.9786 ± 0.0052   0.8189 ± 0.0097   2.249 ± 0.175   3.439 ± 0.621
           Mosin.    0.9754 ± 0.0043   0.8456 ± 0.0174   1.457 ± 0.096   2.781 ± 0.237
           TopoLoss  0.9728 ± 0.0063   0.8671 ± 0.0068   1.234 ± 0.037   1.275 ± 0.192

Figure 5: Qualitative results of the proposed method compared to other models. From left to right: sample images, ground truth, and results for DIVE, U-Net, Mosin. and our proposed TopoLoss.

Quantitative and qualitative results.
Table 1 shows the quantitative results for the three neuron image datasets, ISBI12, ISBI13 and CREMI. Table 2 shows the quantitative results for DRIVE, CrackTree and Road. Our method significantly outperforms existing methods in topological accuracy (on all three topology-aware metrics), without sacrificing pixel accuracy. Fig. 5 shows qualitative results. Our method is more consistent in terms of structures and topology: it correctly segments fine structures such as membranes, roads and vessels, while all other methods fail to do so. Note that the topological error cannot be fixed by training with dilated ground truth masks. We ran additional experiments on the CREMI dataset, training a topology-agnostic model with dilated ground truth masks. For 1- and 2-pixel dilation, we obtain Betti errors of 4.126 and 4.431, respectively. Both are still significantly worse than TopoLoss (Betti error = 1.113).
Ablation study: loss weights. Our loss (Eq. (2.1)) is a weighted combination of cross-entropy loss and topological loss. For convenience, we drop the weight of the cross-entropy loss and weight the topological loss with λ. Fig. 6(b) and 6(c) show ablation studies of λ on CREMI w.r.t. accuracy, Betti error and convergence rate. As we increase λ, per-pixel accuracy is slightly compromised. The Betti error decreases first but increases later. One important observation is that a certain amount

Figure 6: (a) Cross-entropy loss, topological loss and total loss over training epochs. (b) Ablation study of λ on CREMI w.r.t. accuracy and Betti error. (c) Ablation study of λ on CREMI w.r.t. convergence rate.

Figure 7: For a sample patch from CREMI, we show the likelihood map and segmentation at different training epochs. The first row shows likelihood maps and the second row shows thresholded results. 
From left to right: the original patch/ground truth, and results after 10, 20, 30, 40 and 50 epochs.

of topological loss improves the convergence rate significantly. Empirically, we choose λ via cross-validation. Different datasets have different λ's. In general, λ is on the order of 1/10000. This is understandable: while the cross-entropy loss gradient is applied to all pixels, the topological gradient is applied only to a sparse set of critical pixels. The weight therefore needs to be much smaller to avoid overfitting to these critical pixels.
Fig. 6(a) shows the weighted topological loss (λLtopo), the cross-entropy loss (Lbce) and the total loss (L) at different training epochs. After 30 epochs, the total loss becomes stable. Meanwhile, Lbce increases slightly while Ltopo decreases. This is reasonable: incorporating the topological loss may force the network to overtrain on certain locations (near critical pixels), and thus may slightly hurt the overall pixel accuracy. This is confirmed by the pixel accuracy of TopoLoss in Tables 1 and 2.
Rationale. To further explain the rationale behind the topological loss, we first study an example training patch. In Fig. 7, we plot the likelihood map and the segmentation at different epochs. Within a short period, the likelihood map and the segmentation stabilize globally, mostly thanks to the cross-entropy loss. After epoch 20, topological errors are gradually fixed by the topological loss. Notice that the likelihood map changes only at specific topology-relevant locations.
Our topological loss complements the cross-entropy loss by combating sampling bias. In Fig. 7, for most membrane pixels, the network quickly learns to make correct predictions. However, for a small number of difficult locations (blurred regions), it is much harder to learn to predict correctly. The issue is that these locations account for only a small portion of the training pixel samples. 
Such disproportion cannot be changed even with more annotated training images. The topological loss essentially identifies these difficult locations during training (as critical pixels). It then forces the network to learn patterns near these locations, at the expense of overfitting and, consequently, slightly compromised per-pixel accuracy. On the other hand, we stress that the topological loss cannot succeed alone. Without the cross-entropy loss, inferring topology from a completely random likelihood map is meaningless. The cross-entropy loss finds a reasonable likelihood map so that the topological loss can improve its topology.
Acknowledgement. The research of Xiaoling Hu and Chao Chen is partially supported by NSF IIS-1909038. The research of Li Fuxin is partially supported by NSF IIS-1911232.

References

[1] Henry Adams, Tegan Emerson, Michael Kirby, Rachel Neville, Chris Peterson, Patrick Shipman, Sofya Chepushtanova, Eric Hanson, Francis Motta, and Lori Ziegelmeier. Persistence images: A stable vector representation of persistent homology. The Journal of Machine Learning Research, 18(1):218–252, 2017.

[2] Bjoern Andres, Jörg H Kappes, Thorsten Beier, Ullrich Köthe, and Fred A Hamprecht. Probabilistic image segmentation with closedness constraints. In 2011 International Conference on Computer Vision, pages 2611–2618. IEEE, 2011.

[3] I Arganda-Carreras, HS Seung, A Vishwanathan, and D Berger. 3D segmentation of neurites in EM images challenge - ISBI 2013, 2013.

[4] Ignacio Arganda-Carreras, Srinivas C Turaga, Daniel R Berger, Dan Cireşan, Alessandro Giusti, Luca M Gambardella, Jürgen Schmidhuber, Dmitry Laptev, Sarvesh Dwivedi, Joachim M Buhmann, et al. 
Crowdsourcing the creation of image segmentation algorithms for connectomics. Frontiers in Neuroanatomy, 9:142, 2015.

[5] Mathieu Carriere, Marco Cuturi, and Steve Oudot. Sliced Wasserstein kernel for persistence diagrams. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 664–673. JMLR.org, 2017.

[6] Chao Chen, Daniel Freedman, and Christoph H Lampert. Enforcing topological constraints in random field image segmentation. In CVPR 2011, pages 2089–2096. IEEE, 2011.

[7] Chao Chen and Michael Kerber. Persistent homology computation with a twist. In Proceedings of the 27th European Workshop on Computational Geometry, volume 11, pages 197–200, 2011.

[8] Chao Chen, Xiuyan Ni, Qinxun Bai, and Yusu Wang. A topological regularizer for classifiers via persistent homology. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 2573–2582, 2019.

[9] Chao Chen and Novi Quadrianto. Clustering high dimensional categorical data via topographical features. In Proceedings of the 33rd International Conference on Machine Learning, New York, 19-24 June 2016, volume 48, pages 2732–2740. JMLR, 2016.

[10] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. arXiv preprint arXiv:1412.7062, 2014.

[11] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834–848, 2018.

[12] Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017.

[13] David Cohen-Steiner, Herbert Edelsbrunner, and John Harer. 
Stability of persistence diagrams. Discrete & Computational Geometry, 37(1):103–120, 2007.

[14] David Cohen-Steiner, Herbert Edelsbrunner, John Harer, and Yuriy Mileyko. Lipschitz functions have Lp-stable persistence. Foundations of Computational Mathematics, 10(2):127–139, 2010.

[15] Herbert Edelsbrunner and John Harer. Computational Topology: An Introduction. American Mathematical Soc., 2010.

[16] Herbert Edelsbrunner, David Letscher, and Afra Zomorodian. Topological persistence and simplification. In Proceedings 41st Annual Symposium on Foundations of Computer Science, pages 454–463. IEEE, 2000.

[17] Rolando Estrada, Carlo Tomasi, Scott C Schmidler, and Sina Farsiu. Tree topology estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(8):1688–1701, 2014.

[18] Ahmed Fakhry, Hanchuan Peng, and Shuiwang Ji. Deep models for brain EM image segmentation: novel insights and improved performance. Bioinformatics, 32(15):2352–2358, 2016.

[19] Jan Funke, Fabian David Tschopp, William Grisaitis, Arlo Sheridan, Chandan Singh, Stephan Saalfeld, and Srinivas C Turaga. A deep structured learning approach towards automating connectome reconstruction from 3D electron micrographs. arXiv preprint arXiv:1709.02974, 2017.

[20] Mingchen Gao, Chao Chen, Shaoting Zhang, Zhen Qian, Dimitris Metaxas, and Leon Axel. Segmenting the papillary muscles and the trabeculae from high resolution cardiac CT through restoration of topological handles. In International Conference on Information Processing in Medical Imaging, pages 184–195. Springer, 2013.

[21] Xiao Han, Chenyang Xu, and Jerry L. Prince. A topology preserving level set method for geometric deformable models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(6):755–768, 2003.

[22] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. 
In Proceedings of the IEEE International Conference on Computer Vision, pages 2961–2969, 2017.

[23] Christoph Hofer, Roland Kwitt, Marc Niethammer, and Andreas Uhl. Deep learning with topological signatures. In Advances in Neural Information Processing Systems, pages 1634–1644, 2017.

[24] Michael Kerber, Dmitriy Morozov, and Arnur Nigmetov. Geometry helps to compare persistence diagrams. Journal of Experimental Algorithmics (JEA), 22:1–4, 2017.

[25] Genki Kusano, Yasuaki Hiraoka, and Kenji Fukumizu. Persistence weighted Gaussian kernel for topological data analysis. In International Conference on Machine Learning, pages 2004–2013, 2016.

[26] Carole Le Guyader and Luminita A Vese. Self-repelling snakes for topology-preserving segmentation models. IEEE Transactions on Image Processing, 17(5):767–779, 2008.

[27] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.

[28] Volodymyr Mnih. Machine learning for aerial image labeling. University of Toronto (Canada), 2013.

[29] Agata Mosinska, Pablo Marquez-Neila, Mateusz Koziński, and Pascal Fua. Beyond the pixel-wise loss for topology-aware delineation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3136–3145, 2018.

[30] James R Munkres. Elements of Algebraic Topology. CRC Press, 2018.

[31] Xiuyan Ni, Novi Quadrianto, Yusu Wang, and Chao Chen. Composing tree graphical models with persistent homology features for clustering mixed-type data. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 2622–2631. JMLR.org, 2017.

[32] Hyeonwoo Noh, Seunghoon Hong, and Bohyung Han. Learning deconvolution network for semantic segmentation. 
In Proceedings of the IEEE International Conference on Computer Vision, pages 1520–1528, 2015.

[33] Sebastian Nowozin and Christoph H Lampert. Global connectivity potentials for random field models. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 818–825. IEEE, 2009.

[34] Martin Ralf Oswald, Jan Stühmer, and Daniel Cremers. Generalized connectivity constraints for spatio-temporal 3D reconstruction. In European Conference on Computer Vision, pages 32–46. Springer, 2014.

[35] Adrien Poulenard, Primoz Skraba, and Maks Ovsjanikov. Topological function optimization for continuous shape matching. In Computer Graphics Forum, volume 37, pages 13–25. Wiley Online Library, 2018.

[36] Jan Reininghaus, Stefan Huber, Ulrich Bauer, and Roland Kwitt. A stable multi-scale kernel for topological machine learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4741–4748, 2015.

[37] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.

[38] Florent Ségonne. Active contours under topology control - genus preserving level sets. International Journal of Computer Vision, 79(2):107–117, 2008.

[39] Joes Staal, Michael D Abràmoff, Meindert Niemeijer, Max A Viergever, and Bram Van Ginneken. Ridge-based vessel segmentation in color images of the retina. IEEE Transactions on Medical Imaging, 23(4):501–509, 2004.

[40] Jan Stuhmer, Peter Schroder, and Daniel Cremers. Tree shape priors with connectivity constraints using convex relaxation on general graphs. In Proceedings of the IEEE International Conference on Computer Vision, pages 2336–2343, 2013.

[41] Ganesh Sundaramoorthi and Anthony Yezzi. 
Global regularizing flows with topology preservation for active contours and polygons. IEEE Transactions on Image Processing, 16(3):803–812, 2007.

[42] Srinivas C Turaga, Kevin L Briggman, Moritz Helmstaedter, Winfried Denk, and H Sebastian Seung. Maximin affinity learning of image segmentation. arXiv preprint arXiv:0911.5372, 2009.

[43] Sara Vicente, Vladimir Kolmogorov, and Carsten Rother. Graph cut based image segmentation with connectivity priors. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8. IEEE, 2008.

[44] Hubert Wagner, Chao Chen, and Erald Vuçini. Efficient computation of persistent homology for cubical data. In Topological Methods in Data Analysis and Visualization II, pages 91–106. Springer, 2012.

[45] Pengxiang Wu, Chao Chen, Yusu Wang, Shaoting Zhang, Changhe Yuan, Zhen Qian, Dimitris Metaxas, and Leon Axel. Optimal topological cycles and their application in cardiac trabeculae restoration. In International Conference on Information Processing in Medical Imaging, pages 80–92. Springer, 2017.

[46] Yun Zeng, Dimitris Samaras, Wei Chen, and Qunsheng Peng. Topology cuts: A novel min-cut/max-flow algorithm for topology preserving segmentation in N-D images. Computer Vision and Image Understanding, 112(1):81–90, 2008.

[47] Afra Zomorodian and Gunnar Carlsson. Computing persistent homology. Discrete & Computational Geometry, 33(2):249–274, 2005.

[48] Qin Zou, Yu Cao, Qingquan Li, Qingzhou Mao, and Song Wang. CrackTree: Automatic crack detection from pavement images. 
Pattern Recognition Letters, 33(3):227–238, 2012.