{"title": "Minimum Weight Perfect Matching via Blossom Belief Propagation", "book": "Advances in Neural Information Processing Systems", "page_first": 1288, "page_last": 1296, "abstract": "Max-product Belief Propagation (BP) is a popular message-passing algorithm for computing a Maximum-A-Posteriori (MAP) assignment over a distribution represented by a Graphical Model (GM). It has been shown that BP can solve a number of combinatorial optimization problems including minimum weight matching, shortest path, network flow and vertex cover under the following common assumption: the respective Linear Programming (LP) relaxation is tight, i.e., no integrality gap is present. However, when LP shows an integrality gap, no model has been known which can be solved systematically via sequential applications of BP. In this paper, we develop the first such algorithm, coined Blossom-BP, for solving the minimum weight matching problem over arbitrary graphs. Each step of the sequential algorithm requires applying BP over a modified graph constructed by contractions and expansions of blossoms, i.e., odd sets of vertices. Our scheme guarantees termination in O(n^2) of BP runs, where n is the number of vertices in the original graph. In essence, the Blossom-BP offers a distributed version of the celebrated Edmonds' Blossom algorithm by jumping at once over many sub-steps with a single BP. 
Moreover, our result provides an interpretation of the Edmonds' algorithm as a sequence of LPs.", "full_text": "Minimum Weight Perfect Matching\n\nvia Blossom Belief Propagation\n\nSungsoo Ahn\u2217\n\nSejun Park\u2217 Michael Chertkov\u2020\n\u2217School of Electrical Engineering,\n\nJinwoo Shin\u2217\n\nKorea Advanced Institute of Science and Technology, Daejeon, Korea\n\n\u2020Theoretical Division and Center for Nonlinear Studies,\nLos Alamos National Laboratory, Los Alamos, USA\n\n\u2217{sungsoo.ahn, sejun.park, jinwoos}@kaist.ac.kr\n\n\u2020chertkov@lanl.gov\n\nAbstract\n\nMax-product Belief Propagation (BP) is a popular message-passing algorithm for\ncomputing a Maximum-A-Posteriori (MAP) assignment over a distribution repre-\nsented by a Graphical Model (GM). It has been shown that BP can solve a num-\nber of combinatorial optimization problems including minimum weight matching,\nshortest path, network \ufb02ow and vertex cover under the following common assump-\ntion: the respective Linear Programming (LP) relaxation is tight, i.e., no integrality\ngap is present. However, when LP shows an integrality gap, no model has been\nknown which can be solved systematically via sequential applications of BP. In\nthis paper, we develop the \ufb01rst such algorithm, coined Blossom-BP, for solving\nthe minimum weight matching problem over arbitrary graphs. Each step of the\nsequential algorithm requires applying BP over a modi\ufb01ed graph constructed by\ncontractions and expansions of blossoms, i.e., odd sets of vertices. Our scheme\nguarantees termination in O(n2) of BP runs, where n is the number of vertices in\nthe original graph. In essence, the Blossom-BP offers a distributed version of the\ncelebrated Edmonds\u2019 Blossom algorithm by jumping at once over many sub-steps\nwith a single BP. 
Moreover, our result provides an interpretation of the Edmonds\u2019\nalgorithm as a sequence of LPs.\n\n1\n\nIntroduction\n\nGraphical Models (GMs) provide a useful representation for reasoning in a number of scienti\ufb01c dis-\nciplines [1, 2, 3, 4]. Such models use a graph structure to encode the joint probability distribution,\nwhere vertices correspond to random variables and edges specify conditional dependencies. An\nimportant inference task in many applications involving GMs is to \ufb01nd the most-likely assignment\nto the variables in a GM, i.e., Maximum-A-Posteriori (MAP). Belief Propagation (BP) is a popu-\nlar algorithm for approximately solving the MAP inference problem and it is an iterative, message\npassing one that is exact on tree structured GMs. BP often shows remarkably strong heuristic perfor-\nmance beyond trees, i.e., over loopy GMs. Furthermore, BP is of a particular relevance to large-scale\nproblems due to its potential for parallelization [5] and its ease of programming within the modern\nprogramming models for parallel computing, e.g., GraphLab [6], GraphChi [7] and OpenMP [8].\nThe convergence and correctness of BP was recently established for a certain class of loopy GM for-\nmulations of several classical combinatorial optimization problems, including matching [9, 10, 11],\nperfect matching [12], shortest path [13], independent set [14], network \ufb02ow [15] and vertex cover\n[16]. The important common feature of these models is that BP converges to a correct assignment\nwhen the Linear Programming (LP) relaxation of the combinatorial optimization is tight, i.e., when\nit shows no integrality gap. The LP tightness is an inevitable condition to guarantee the performance\nof BP and no combinatorial optimization instance has been known where BP would be used to solve\n\n1\n\n\fproblems without the LP tightness. 
On the other hand, in the LP literature, it has been extensively\nstudied how to enforce the LP tightness via solving multiple intermediate LPs that are systematically\ndesigned, e.g., via the cutting-plane method [21]. Motivated by these studies, we pose a similar ques-\ntion for BP, \u201chow to enforce correctness of BP, possibly by solving multiple intermediate BPs\u201d. In\nthis paper, we show how to resolve this question for the minimum weight (or cost) perfect matching\nproblem over arbitrary graphs.\nContribution. We develop an algorithm, coined Blossom-BP, for solving the minimum weight\nmatching problem over an arbitrary graph. Our algorithm solves multiple intermediate BPs until the\n\ufb01nal BP outputs the solution. The algorithm is sequential, where each step includes running BP over\na \u2018contracted\u2019 graph derived from the original graph by contractions and infrequent expansions of\nblossoms, i.e., odd sets of vertices. To build such a scheme, we \ufb01rst design an algorithm, coined\nBlossom-LP, solving multiple intermediate LPs. Second, we show that each LP is solvable by\nBP using the recent framework [16] that establishes a generic connection between BP and LP. For\nthe \ufb01rst part, cutting-plane methods solving multiple intermediate LPs for the minimum weight\nmatching problem have been discussed by several authors over the past decades [17, 18, 19, 20] and\na provably polynomial-time scheme was recently suggested [21]. However, LPs in [21] were quite\ncomplex to solve by BP. To address the issue, we design much simpler intermediate LPs that allow\nutilizing the framework of [16].\nWe prove that Blossom-BP and Blossom-LP guarantee to terminate in O(n2) of BP and LP runs,\nrespectively, where n is the number of vertices in the graph. 
To establish the polynomial complexity,\nwe show that intermediate outputs of Blossom-BP and Blossom-LP are equivalent to those of a vari-\nation of the Blossom-V algorithm [22] which is the latest implementation of the Blossom algorithm\ndue to Kolmogorov. The main difference is that Blossom-V updates parameters by maintaining dis-\njoint tree graphs, while Blossom-BP and Blossom-LP implicitly achieve this by maintaining disjoint\ncycles, claws and tree graphs. Notice, however, that these combinatorial structures are auxiliary, as\nrequired for proofs, and they do not appear explicitly in the algorithm descriptions. Therefore, they\nare much easier to implement than Blossom-V that maintains complex data structures, e.g., priority\nqueues. To the best of our knowledge, Blossom-BP and Blossom-LP are the simplest possible al-\ngorithms available for solving the problem in polynomial time. Our proof implies that in essence,\nBlossom-BP offers a distributed version of the Edmonds\u2019 Blossom algorithm [23] jumping at once\nover many sub-steps of Blossom-V with a single BP.\nThe subject of solving convex optimizations (other than LP) via BP was discussed in the literature\n[24, 25, 26]. However, we are not aware of any similar attempts to solve Integer Programming, via\nsequential application of BP. We believe that the approach developed in this paper is of a broader\ninterest, as it promises to advance the challenge of designing BP-based MAP solvers for a broader\nclass of GMs. Furthermore, Blossom-LP stands alone as providing an interpretation for the Ed-\nmonds\u2019 algorithm in terms of a sequence of tractable LPs. The Edmonds\u2019 original LP formulation\ncontains exponentially many constraints, thus naturally suggesting to seek for a sequence of LPs,\neach with a subset of constraints, gradually reducing the integrality gap to zero in a polynomial num-\nber of steps. 
However, it remained elusive for decades: even when the bipartite LP relaxation of the problem has an integral optimal solution, the standard Edmonds' algorithm keeps contracting and expanding a sequence of blossoms. As we mentioned earlier, we resolve the challenge by showing that Blossom-LP is (implicitly) equivalent to a variant of the Edmonds' algorithm with three major modifications: (a) parameter updates via maintaining cycles, claws and trees, (b) addition of small random corrections to the weights, and (c) initialization using the bipartite LP relaxation.

Organization. In Section 2, we provide background on the minimum weight perfect matching problem and the BP algorithm. Section 3 describes our main result - the Blossom-LP and Blossom-BP algorithms, where the proof is given in Section 4.

2 Preliminaries

2.1 Minimum weight perfect matching

Given an (undirected) graph G = (V, E), a matching of G is a set of vertex-disjoint edges, where a perfect matching additionally requires covering every vertex of G. Given integer edge weights (or costs) w = [w_e] ∈ Z^|E|, the minimum weight (or cost) perfect matching problem consists in computing a perfect matching which minimizes the sum of its associated edge weights. The problem is formulated as the following IP (Integer Programming):

minimize  w · x
subject to  Σ_{e∈δ(v)} x_e = 1,  ∀ v ∈ V,   x = [x_e] ∈ {0, 1}^|E|.        (1)

Without loss of generality, one can assume that the weights are strictly positive.¹ Furthermore, we assume that IP (1) is feasible, i.e., there exists at least one perfect matching in G. One can naturally relax the above integer constraint to x = [x_e] ∈ [0, 1]^|E| to obtain an LP (Linear Programming), which is called the bipartite relaxation.
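To make the integrality gap concrete, here is a minimal self-contained check on a toy instance of our own (two triangles joined by one expensive bridge edge; the graph, weights, and variable names are illustrative, not from the paper). Rather than calling an LP solver, it verifies that a half-integral point is feasible for the bipartite relaxation and beats every perfect matching:

```python
from itertools import combinations

# Toy instance: two unit-weight triangles joined by an expensive bridge.
edges = {(0, 1): 1, (1, 2): 1, (0, 2): 1,
         (3, 4): 1, (4, 5): 1, (3, 5): 1, (2, 3): 10}
vertices = range(6)

def degree(x, v):
    """Sum of x-values on edges incident to vertex v."""
    return sum(val for e, val in x.items() if v in e)

# Half-integral point: 1/2 on every triangle edge, 0 on the bridge.
x_frac = {e: (0.0 if e == (2, 3) else 0.5) for e in edges}
assert all(abs(degree(x_frac, v) - 1) < 1e-9 for v in vertices)  # feasible
frac_cost = sum(edges[e] * x_frac[e] for e in edges)             # 3.0

# Brute-force the best integral solution of IP (1), i.e. perfect matchings.
best = min(
    (sum(edges[e] for e in m), m)
    for r in range(1, len(edges) + 1)
    for m in combinations(edges, r)
    if all(degree({e: 1 for e in m}, v) == 1 for v in vertices)
)
print(frac_cost, best[0])   # 3.0 12 -> the bipartite relaxation is not tight
```

Since the fractional cost (3.0) is strictly below the best integral cost (12), the bipartite relaxation exhibits an integrality gap on this instance, which is exactly the situation the blossom inequalities are designed to repair.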
The integrality of the bipartite LP relaxation is not guaranteed; however, it can be enforced by adding the so-called blossom inequalities [22]:

minimize  w · x
subject to  Σ_{e∈δ(v)} x_e = 1,  ∀ v ∈ V,
            Σ_{e∈δ(S)} x_e ≥ 1,  ∀ S ∈ L,        (2)
            x = [x_e] ∈ [0, 1]^|E|,

where L ⊂ 2^V is a collection of odd cycles in G, called blossoms, and δ(S) is the set of edges between S and V \ S. It is known that if L is the collection of all the odd cycles in G, then LP (2) always has an integral solution. However, notice that the number of odd cycles is exponential in |V|, and thus solving LP (2) directly is computationally intractable. To overcome this complication we look for a tractable subset of L of polynomial size which guarantees the integrality. Our algorithm, searching for such a tractable subset of L, is iterative: at each iteration it adds or subtracts a blossom.

2.2 Belief propagation for linear programming

A joint distribution of n (binary) random variables Z = [Z_i] ∈ {0, 1}^n is called a Graphical Model (GM) if it factorizes as follows: for z = [z_i] ∈ Ω^n,

Pr[Z = z] ∝ Π_{i∈{1,...,n}} ψ_i(z_i) Π_{α∈F} ψ_α(z_α),

where {ψ_i, ψ_α} are (given) non-negative functions, the so-called factors; F is a collection of subsets

F = {α_1, α_2, ..., α_k} ⊂ 2^{1,2,...,n}

(each α_j is a subset of {1, 2, ..., n} with |α_j| ≥ 2); z_α is the projection of z onto the dimensions included in α.² In particular, ψ_i is called a variable factor. An assignment z* is called a maximum-a-posteriori (MAP) solution if z* = arg max_{z∈{0,1}^n} Pr[z]. Computing a MAP solution is typically computationally intractable (i.e., NP-hard) unless the induced bipartite graph of factors F and variables z, the so-called factor graph, has a bounded treewidth.
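As a sanity check of this GM-to-optimization correspondence, the following sketch (a toy 4-cycle with weights of our choosing, not from the paper) brute-forces the MAP assignment of a GM whose variable factors are e^{-w_e x_e} and whose vertex factors are perfect-matching indicator constraints; the MAP assignment coincides with the minimum weight perfect matching:

```python
import math
from itertools import product

# Variables are the edges of a 4-cycle; one indicator factor per vertex.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
w = {(0, 1): 2, (1, 2): 5, (2, 3): 2, (0, 3): 5}

def unnormalized_prob(x):
    """Product of variable factors e^{-w_e x_e} and vertex indicator factors."""
    for v in range(4):                     # psi_v = 1 iff v matched exactly once
        if sum(x[e] for e in edges if v in e) != 1:
            return 0.0
    return math.exp(-sum(w[e] * x[e] for e in edges))

assignments = [dict(zip(edges, bits)) for bits in product((0, 1), repeat=len(edges))]
x_map = max(assignments, key=unnormalized_prob)
print(sorted(e for e in edges if x_map[e] == 1))   # [(0, 1), (2, 3)], cost 4
```

The two feasible matchings cost 4 and 10, so the factor e^{-4} dominates e^{-10} and the MAP assignment picks the cheaper matching; max-product BP is a message-passing heuristic for finding this argmax without enumeration.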
The max-product Belief Propagation (BP) algorithm is a popular, simple heuristic for approximating the MAP solution in a GM, which iterates messages over the factor graph. BP computes a MAP solution exactly after a sufficient number of iterations if the factor graph is a tree and the MAP solution is unique. However, if the graph contains loops, BP is not guaranteed to converge to a MAP solution in general. Due to the space limitation, we provide detailed background on BP in the supplemental material.

Consider the following GM: for x = [x_i] ∈ {0, 1}^n and w = [w_i] ∈ R^n,

Pr[X = x] ∝ Π_i e^{-w_i x_i} Π_{α∈F} ψ_α(x_α),        (3)

where F is the set of non-variable factors and the factor function ψ_α for α ∈ F is defined as

ψ_α(x_α) = 1 if A_α x_α ≥ b_α and C_α x_α = d_α; 0 otherwise,

for some matrices A_α, C_α and vectors b_α, d_α. Now we consider the Linear Programming (LP) corresponding to this GM:

minimize  w · x
subject to  ψ_α(x_α) = 1,  ∀ α ∈ F,        (4)
            x = [x_i] ∈ [0, 1]^n.

One observes that the MAP solution for GM (3) corresponds to the (optimal) solution of LP (4) if the LP has an integral solution x* ∈ {0, 1}^n. Furthermore, the following sufficient conditions relating max-product BP to LP are known [16]:

Theorem 1 The max-product BP applied to GM (3) converges to the solution of LP (4) if the following conditions hold:

C1. LP (4) has a unique integral solution x* ∈ {0, 1}^n, i.e., it is tight.

C2. For every i ∈ {1, 2, ..., n}, the number of factors associated with x_i is at most two, i.e., |F_i| ≤ 2.

C3. For every factor ψ_α, every x_α ∈ {0, 1}^|α| with ψ_α(x_α) = 1, and every i ∈ α with x_i ≠ x*_i, there exists γ ⊂ α such that

|{j ∈ {i} ∪ γ : |F_j| = 2}| ≤ 2,
ψ_α(x′_α) = 1, where x′_k = x_k if k ∉ {i} ∪ γ, and x′_k = x*_k otherwise,
ψ_α(x″_α) = 1, where x″_k = x_k if k ∈ {i} ∪ γ, and x″_k = x*_k otherwise.

¹ If some edges have negative weights, one can add the same positive constant to all edge weights, and this does not alter the solution of IP (1).
² For example, if z = [0, 1, 0] and α = {1, 3}, then z_α = [0, 0].

3 Main result: Blossom belief propagation

In this section, we introduce our main result - an iterative algorithm, coined Blossom-BP, for solving the minimum weight perfect matching problem over an arbitrary graph, where the algorithm uses the max-product BP as a subroutine. We first describe the algorithm using LP instead of BP in Section 3.1, where we call it Blossom-LP. Its BP implementation is explained in Section 3.2.

3.1 Blossom-LP algorithm

Let us modify the edge weights: w_e ← w_e + n_e, where n_e is an i.i.d. random number chosen in the interval [0, 1/|V|]. Note that the solution of the minimum weight perfect matching problem (1) remains the same after this modification because the overall noise does not exceed 1. The Blossom-LP algorithm updates the following parameters iteratively.

◦ L ⊂ 2^V: a laminar collection of odd cycles in G.
◦ y_v, y_S: v ∈ V and S ∈ L.

In the above, L is called laminar if for every S, T ∈ L, either S ∩ T = ∅, S ⊂ T or T ⊂ S. We call S ∈ L an outer blossom if there exists no T ∈ L such that S ⊂ T. Initially, L = ∅ and y_v = 0 for all v ∈ V. The algorithm iterates between Step A and Step B and terminates at Step C.

Blossom-LP algorithm

A.
Solving LP on a contracted graph. First construct an auxiliary graph G† = (V†, E†), coined the contracted graph, by contracting every outer blossom in L to a single vertex; we let v(S) denote the blossom vertex in G† corresponding to S ∈ L. The weights w† = [w†_e : e ∈ E†] are defined as

w†_e = w_e − Σ_{v∈V: v∉V†, e∈δ(v)} y_v − Σ_{S∈L: v(S)∉V†, e∈δ(S)} y_S,  ∀ e ∈ E†.

Then solve the following LP:

minimize  w† · x
subject to  Σ_{e∈δ(v)} x_e = 1,  ∀ v ∈ V†, v a non-blossom vertex,
            Σ_{e∈δ(v)} x_e ≥ 1,  ∀ v ∈ V†, v a blossom vertex,        (5)
            x = [x_e] ∈ [0, 1]^|E†|.

B. Updating parameters. After we obtain a solution x = [x_e : e ∈ E†] of LP (5), the parameters are updated as follows:

(a) If x is integral, i.e., x ∈ {0, 1}^|E†|, and Σ_{e∈δ(v)} x_e = 1 for all v ∈ V†, then proceed to the termination step C.

(b) Else if there exists a blossom S such that Σ_{e∈δ(v(S))} x_e > 1, then choose one such blossom and update

L ← L \ {S}   and   y_v ← 0, ∀ v ∈ S.

Call this step 'blossom S expansion'.

(c) Else if there exists an odd cycle C in G† such that x_e = 1/2 for every edge e in it, choose one of them and update

L ← L ∪ {V(C)}   and   y_v ← (1/2) Σ_{e∈E(C)} (−1)^{d(e,v)} w†_e,  ∀ v ∈ V(C),

where V(C), E(C) are the sets of vertices and edges of C, respectively, and d(e, v) is the graph distance from vertex v to edge e in the odd cycle C. The algorithm also remembers the odd cycle C = C(S) corresponding to every blossom S ∈ L.

If (b) or (c) occurs, go to Step A.

C. Termination.
The algorithm iteratively expands blossoms in L to obtain the minimum weight perfect matching M* as follows:

(i) Let M* be the set of edges in the original G whose corresponding edge e in the contracted graph G† has x_e = 1, where x = [x_e] is the (last) solution of LP (5).
(ii) If L = ∅, output M*.
(iii) Otherwise, choose an outer blossom S ∈ L, then update G† by expanding S, i.e., L ← L \ {S}.
(iv) Let v be the vertex in S covered by M* and let M_S be a matching covering S \ {v} using the edges of the odd cycle C(S).
(v) Update M* ← M* ∪ M_S and go to Step (ii).

An example of the evolution of L is described in the supplementary material. We provide the following running time guarantee for this algorithm, which is proven in Section 4.

Theorem 2 Blossom-LP outputs the minimum weight perfect matching in O(|V|²) iterations.

3.2 Blossom-BP algorithm

In this section, we show that the algorithm can be implemented using BP. The result is derived in two steps, where the first one consists in the following theorem, proven in the supplementary material due to the space limitation.

Theorem 3 LP (5) always has a half-integral solution x* ∈ {0, 1/2, 1}^|E†| such that the collection of its half-integral edges forms disjoint odd cycles.

Next let us design BP for obtaining the half-integral solution of LP (5). First, we duplicate each edge e ∈ E† into e1, e2 and define a new graph G‡ = (V†, E‡) where E‡ = {e1, e2 : e ∈ E†}. Then, we build the following equivalent LP:

minimize  w‡ · x
subject to  Σ_{e∈δ(v)} x_e = 2,  ∀ v ∈ V†, v a non-blossom vertex,
            Σ_{e∈δ(v)} x_e ≥ 2,  ∀ v ∈ V†, v a blossom vertex,        (6)
            x = [x_e] ∈ [0, 1]^|E‡|,

where w‡_{e1} = w‡_{e2} = w†_e.
One can easily observe that solving LP (6) is equivalent to solving LP (5) due to our construction of G‡ and w‡, and LP (6) always has an integral solution due to Theorem 3. Now, construct the following GM for LP (6):

Pr[X = x] ∝ Π_{e∈E‡} e^{-w‡_e x_e} Π_{v∈V†} ψ_v(x_δ(v)),        (7)

where the factor function ψ_v is defined as

ψ_v(x_δ(v)) = 1 if v is a non-blossom vertex and Σ_{e∈δ(v)} x_e = 2; 1 if v is a blossom vertex and Σ_{e∈δ(v)} x_e ≥ 2; 0 otherwise.

For this GM, we derive the following corollary of Theorem 1, proven in the supplementary material due to the space limitation.

Corollary 4 If LP (6) has a unique solution, then the max-product BP applied to GM (7) converges to it.

The uniqueness condition stated in the corollary above is easy to guarantee by adding small random noises to the edge weights. Corollary 4 shows that BP can compute the half-integral solution of LP (5).

4 Proof of Theorem 2

First, it is relatively easy to prove the correctness of Blossom-LP, as stated in the following lemma.

Lemma 5 If Blossom-LP terminates, it outputs the minimum weight perfect matching.

Proof. We let x† = [x†_e], y‡ = [y‡_v, y‡_S : v ∉ V†, v(S) ∉ V†] denote the parameter values at the termination of Blossom-BP. Then, the strong duality theorem and the complementary slackness condition imply that

x†_e (w†_e − y†_u − y†_v) = 0,  ∀ e = (u, v) ∈ E†,        (8)

where y† is a dual solution for x†. Here, observe that y† and y‡ cover the y-variables inside and outside of V†, respectively. Hence, one can naturally define y* = [y†_v, y‡_u] to cover all y-variables, i.e., y_v, y_S for all v ∈ V, S ∈ L.
If we define x* for the output matching M* of Blossom-LP as x*_e = 1 if e ∈ M* and x*_e = 0 otherwise, then x* and y* satisfy the following complementary slackness conditions:

x*_e (w_e − y*_u − y*_v − Σ_{S∈L: e∈δ(S)} y*_S) = 0,  ∀ e = (u, v) ∈ E,
y*_S (Σ_{e∈δ(S)} x*_e − 1) = 0,  ∀ S ∈ L,

where L is the last set of blossoms at the termination of Blossom-BP. In the above, the first equality follows from (8) and the definition of w†, and the second equality holds because the construction of M* in Blossom-BP is designed to enforce Σ_{e∈δ(S)} x*_e = 1. This proves that x* is the optimal solution of LP (2) and M* is the minimum weight perfect matching, thus completing the proof of Lemma 5. □

To guarantee the termination of Blossom-LP in polynomial time, we use the following notions.

Definition 1 A claw is a subset of edges such that every edge in it shares a common vertex, called the center, with all other edges, i.e., the claw forms a star graph.

Definition 2 Given a graph G = (V, E), a set of odd cycles O ⊂ 2^E, a set of claws W ⊂ 2^E and a matching M ⊂ E, (O, W, M) is called a cycle-claw-matching decomposition of G if all sets in O ∪ W ∪ {M} are disjoint and each vertex v ∈ V is covered by exactly one set among them.

To analyze the running time of Blossom-BP, we construct an iterative auxiliary algorithm that outputs the minimum weight perfect matching in a bounded number of iterations. The auxiliary algorithm outputs a cycle-claw-matching decomposition at each iteration, and it terminates when the cycle-claw-matching decomposition corresponds to a perfect matching.
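Definition 2 can be checked mechanically; the sketch below (hand-built example data, not output of the algorithm) validates that odd cycles, claws and a matching are disjoint and together cover every vertex exactly once:

```python
def is_ccm_decomposition(vertices, cycles, claws, matching):
    """Check Definition 2: odd cycles, claws (stars), and a matching
    that together cover every vertex exactly once."""
    covered = []
    for cyc in cycles:                 # each cycle: list of its vertices
        if len(cyc) % 2 == 0:          # blossoms must be odd cycles
            return False
        covered += cyc
    for center, leaves in claws:       # each claw: a star around a center
        covered += [center] + leaves
    for u, v in matching:              # matching: vertex-disjoint edges
        covered += [u, v]
    # exactly-once coverage also rules out overlaps between the sets
    return sorted(covered) == sorted(vertices)

V = list(range(8))
ok = is_ccm_decomposition(V, cycles=[[0, 1, 2]],
                          claws=[(3, [4, 5])], matching=[(6, 7)])
print(ok)   # True
```

Here the claw has three vertices (one center, two leaves), matching the structure the auxiliary algorithm maintains later in this section.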
We will prove later that the auxiliary algorithm and Blossom-LP are equivalent and, therefore, conclude that the number of iterations of Blossom-LP is also bounded.

To design the auxiliary algorithm, we consider the following dual of LP (5):

maximize  Σ_{v∈V†} y_v
subject to  w†_e − y_v − y_u ≥ 0,  ∀ e = (u, v) ∈ E†,        (9)
            y_{v(S)} ≥ 0,  ∀ S ∈ L.

Next we introduce an auxiliary iterative algorithm which updates the blossom set L and the set of variables y_v, y_S for v ∈ V, S ∈ L. We call an edge e = (u, v) 'tight' if w_e − y_u − y_v − Σ_{S∈L: e∈δ(S)} y_S = 0. Now, we are ready to describe the auxiliary algorithm, which has the following parameters.

◦ G† = (V†, E†), L ⊂ 2^V, and y_v, y_S for v ∈ V, S ∈ L.
◦ (O, W, M): a cycle-claw-matching decomposition of G†.
◦ T ⊂ G†: a tree graph consisting of + and − vertices.

Initially, set G† = G and L, T = ∅. In addition, set y_v, y_S to an optimal solution of LP (9) with w† = w, and set (O, W, M) to the cycle-claw-matching decomposition of G† consisting of tight edges with respect to [y_v, y_S]. The parameters are updated iteratively as follows.

The auxiliary algorithm

Iterate the following steps until M becomes a perfect matching:

1. Choose a vertex r ∈ V† by the following rule.

Expansion. If W ≠ ∅, choose a claw W ∈ W with center blossom vertex c and choose a non-center vertex r in W. Remove the blossom S(c) corresponding to c from L and update G† by expanding it. Find a matching M′ covering all vertices in W and S(c) except for r and update M ← M ∪ M′.

Contraction.
Otherwise, choose a cycle C ∈ O, add it to L and remove it from O. In addition, update G† by contracting C, choose the contracted vertex r in G†, and set y_r = 0.

Set the tree graph T to have r as a + vertex and no edge.

2. Continuously increase y_v of every + vertex v in T and decrease y_v of every − vertex v in T by the same amount until one of the following events occurs:

Grow. If a tight edge (u, v) exists where u is a + vertex of T and v is covered by M, find a tight edge (v, w) ∈ M. Add edges (u, v), (v, w) to T and remove (v, w) from M, where v, w become −, + vertices of T, respectively.

Matching. If a tight edge (u, v) exists where u is a + vertex of T and v is covered by C ∈ O, find a matching M′ that covers T ∪ C. Update M ← M ∪ M′ and remove C from O.

Cycle. If a tight edge (u, v) exists where u, v are + vertices of T, find a cycle C and a matching M′ that covers T. Update M ← M ∪ M′ and add C to O.

Claw. If a blossom vertex v(S) with y_{v(S)} = 0 exists, find a claw W (with center v(S)) and a matching M′ covering T. Update M ← M ∪ M′ and add W to W.

If Grow occurs, resume Step 2. Otherwise, go to Step 1.

Note that the auxiliary algorithm updates parameters in such a way that the number of vertices in every claw in the cycle-claw-matching decomposition is 3, since every − vertex has degree 2. Hence, there exists a unique matching M′ in the expansion step. Furthermore, the existence of a cycle-claw-matching decomposition at the initialization can be guaranteed using the complementary slackness condition and the half-integrality of LP (5).
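The notion of a tight edge used throughout the auxiliary algorithm is a simple slack computation; below is a sketch with made-up weights and dual values (all names and numbers are hypothetical, chosen only so that one edge comes out tight):

```python
def slack(e, w, y_vertex, blossoms):
    """Slack w_e - y_u - y_v - sum of y_S over blossoms S with e in delta(S);
    an edge is 'tight' when the slack is zero."""
    u, v = e
    s = w[e] - y_vertex[u] - y_vertex[v]
    for S, y_S in blossoms:
        if (u in S) != (v in S):       # e crosses the cut delta(S)
            s -= y_S
    return s

w = {(0, 1): 5, (1, 2): 7}
y = {0: 1, 1: 1, 2: 4}
blossoms = [({1, 2}, 3)]               # one blossom S = {1, 2} with y_S = 3
tight = [e for e in w if abs(slack(e, w, y, blossoms)) < 1e-9]
print(tight)   # [(0, 1)]: 5 - 1 - 1 - 3 = 0, while (1, 2) has slack 2
```

Note that (1, 2) lies inside the blossom, so y_S does not contribute to its slack; only edges crossing δ(S) pay the blossom dual.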
We establish the following lemma for the running time of the auxiliary algorithm; its proof is given in the supplemental material due to the space limitation.

Lemma 6 The auxiliary algorithm terminates in O(|V|²) iterations.

Now we are ready to prove the equivalence between the auxiliary algorithm and Blossom-LP, i.e., to prove that the numbers of iterations of Blossom-LP and the auxiliary algorithm are equal. To this end, given a cycle-claw-matching decomposition (O, W, M), observe that one can choose the corresponding x = [x_e] ∈ {0, 1/2, 1}^|E†| that satisfies the constraints of LP (5):

x_e = 1 if e is an edge in W or M; 1/2 if e is an edge in O; 0 otherwise.

Similarly, given a half-integral x = [x_e] ∈ {0, 1/2, 1}^|E†| that satisfies the constraints of LP (5), one can find the corresponding cycle-claw-matching decomposition. Furthermore, one can also define the weight w† in G† for the auxiliary algorithm as Blossom-LP does:

w†_e = w_e − Σ_{v∈V: v∉V†, e∈δ(v)} y_v − Σ_{S∈L: v(S)∉V†, e∈δ(S)} y_S,  ∀ e ∈ E†.        (10)

In the auxiliary algorithm, e = (u, v) ∈ E† is tight if and only if w†_e − y†_u − y†_v = 0.
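The correspondence between a cycle-claw-matching decomposition and a half-integral point of LP (5) is easy to realize in code; a sketch on toy data of our own (a 5-cycle plus one matched edge, illustrative only):

```python
from fractions import Fraction

def decomposition_to_x(all_edges, cycle_edges, claw_edges, matching):
    """Map a cycle-claw-matching decomposition to a half-integral point:
    1 on claw/matching edges, 1/2 on cycle edges, 0 elsewhere."""
    x = {e: Fraction(0) for e in all_edges}
    for e in cycle_edges:
        x[e] = Fraction(1, 2)
    for e in claw_edges:
        x[e] = Fraction(1)
    for e in matching:
        x[e] = Fraction(1)
    return x

# One odd 5-cycle (in O) plus one matched edge (in M), no claws.
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
m = [(5, 6)]
x = decomposition_to_x(cycle + m, cycle, [], m)
deg = {v: sum(val for e, val in x.items() if v in e) for v in range(7)}
print(all(d == 1 for d in deg.values()))   # True: every vertex has degree 1
```

Exact rational arithmetic (`Fraction`) makes the degree check robust; each cycle vertex picks up 1/2 from its two incident cycle edges, matching the degree constraints of LP (5).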
Under these equivalences in parameters between Blossom-LP and the auxiliary algorithm, we use induction to show that the cycle-claw-matching decompositions maintained by both algorithms are equal at every iteration, as stated in the following lemma, whose proof is given in the supplemental material due to the space limitation.

Lemma 7 Define the following notation:

y† = [y_v : v ∈ V†]   and   y‡ = [y_v, y_S : v ∈ V, v ∉ V†, S ∈ L, v(S) ∉ V†],

i.e., y† and y‡ are the parts of y that do and do not involve V†, respectively. Then Blossom-LP and the auxiliary algorithm update the parameters L, y‡ equivalently and output the same cycle-claw-matching decomposition of G† at each iteration.

The above lemma implies that Blossom-LP also terminates in O(|V|²) iterations due to Lemma 6. This completes the proof of Theorem 2. The equivalence between the half-integral solution of LP (5) in Blossom-LP and the cycle-claw-matching decomposition in the auxiliary algorithm implies that LP (5) always has a half-integral solution, and hence one of Steps B.(a), B.(b) or B.(c) always occurs.

5 Conclusion

The BP algorithm has been popular for approximating inference solutions arising in graphical models, where its distributed implementation, associated ease of programming and strong parallelization potential are the main reasons for its growing popularity. This paper aims at designing a polynomial-time BP-based scheme solving the minimum weight perfect matching problem. We believe that our approach is of a broader interest to advance the challenge of designing BP-based MAP solvers in more general GMs as well as distributed (and parallel) solvers for large-scale IPs.

Acknowledgement.
This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. R0132-15-1005), Content visual browsing technology in the online and offline environments. The work at LANL was carried out under the auspices of the National Nuclear Security Administration of the U.S. Department of Energy under Contract No. DE-AC52-06NA25396.