NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 6080
Title: Smoothing Structured Decomposable Circuits

Reviewer 1

Post-rebuttal: The authors' response addresses my concerns. I don't have any further comments.

*********

ORIGINALITY
===========
The question of smoothing logical circuits (negation normal forms) in subquadratic time is regarded as an important open problem in the area of knowledge compilation, and this paper makes progress on it. The approach of casting the problem for structured decomposable circuits as a range-sum computation problem is the main novelty of this paper. It is a simple connection but a powerful one, as it lets the authors use previous work in data structures both to improve the running time and to prove a lower bound for smoothing algorithms. The improved algorithm for All-Marginals also relies crucially on this connection.

QUALITY
=======
The paper is mathematically sound, and the proofs are straightforward to verify. The experimental section needs to be improved. The experiment in the "Smoothing Circuits" subsection doesn't seem interesting to me: it simply confirms the runtime analysis and is very much expected. For the "Collapsed Sampling" subsection, the authors plug their All-Marginals algorithm into an existing algorithm for collapsed sampling and run the modified code on the "Segmentation-11" network. No details are provided about what the overall problem is or what the Segmentation-11 network looks like. Is it a structured decomposable circuit? Are inputs for the collapsed sampling problem usually structured decomposable?

CLARITY
=======
The paper is reasonably well written and nicely organized in terms of sections. It would be better, though, if the authors spent more time motivating structured decomposability and explaining why it is important.

SIGNIFICANCE
============
Structured decomposable circuits have been identified as a useful generalization of OBDDs. This work gives nearly optimal smoothing algorithms for them, addressing what has been a longstanding open question for general logical circuits. It also shows that these circuits enjoy a fast algorithm for the All-Marginals problem (provided the assumption on the weight function holds). As an outsider, I don't know how widely structured decomposable circuits are used in the community and hence how impactful the new smoothing algorithm will be. However, I like the fact that advances in data structures (though 30 years old!) are being brought to bear on knowledge compilation.
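
To make the range-sum connection concrete, here is a minimal illustrative sketch of the semigroup range-sum primitive (an added example, not the near-optimal data structure the paper actually uses): a segment tree answers range queries under any associative "sum" operation in O(log n) time per query after linear preprocessing, whereas the classical structures with precomputed partial sums achieve near-constant amortized query time.

```python
# Illustrative segment tree for semigroup range-sum: combine items[lo:hi] under an
# arbitrary associative operation `op` (no inverse needed). This is only a sketch of
# the primitive; the paper relies on asymptotically faster classical structures.

class SegmentTree:
    def __init__(self, items, op):
        self.op = op
        self.n = len(items)
        self.vals = {}          # (lo, hi) interval -> combined value
        self._build(list(items), 0, self.n)

    def _build(self, items, lo, hi):
        if hi - lo == 1:
            self.vals[(lo, hi)] = items[lo]
            return
        mid = (lo + hi) // 2
        self._build(items, lo, mid)
        self._build(items, mid, hi)
        self.vals[(lo, hi)] = self.op(self.vals[(lo, mid)], self.vals[(mid, hi)])

    def query(self, lo, hi, node=None):
        """Combine items[lo:hi] (hi exclusive); visits O(log n) stored intervals."""
        if node is None:
            node = (0, self.n)
        nlo, nhi = node
        if lo <= nlo and nhi <= hi:
            return self.vals[node]
        mid = (nlo + nhi) // 2
        if hi <= mid:
            return self.query(lo, hi, (nlo, mid))
        if lo >= mid:
            return self.query(lo, hi, (mid, nhi))
        return self.op(self.query(lo, hi, (nlo, mid)), self.query(lo, hi, (mid, nhi)))


if __name__ == "__main__":
    # Toy semigroup: string concatenation (associative, no inverse), standing in for
    # "combine the smoothing gadgets of the variables in this interval of the order".
    order = ["x1", "x2", "x3", "x4", "x5"]
    st = SegmentTree(order, lambda a, b: a + "," + b)
    print(st.query(1, 4))  # -> x2,x3,x4
```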

Reviewer 2

CLARITY
The text is nicely written, understandable and, up to a number of minor issues (see details below), clear.

QUALITY
I didn't find major flaws in the claims. Some details are listed below. Experiments are performed on hand-crafted circuits. These may not be representative of practical circuits (see also the comment on significance). It may be better to report a "speedup factor" rather than "Improve %" (which is almost always near 99% here).

ORIGINALITY
To the best of my non-expert knowledge the contribution seems to be original.

SIGNIFICANCE
The paper motivates the problem by arguing that the algorithm of Darwiche (2001) is too expensive. Still, Darwiche (2001) (which focuses on d-DNNFs rather than general circuits) states that making a circuit smooth is reasonable for circuits occurring in practice; in particular it says: "These operations preserve both the decomposability and determinism of a DNNF. They may increase the size of given DNNF but only by a factor of O(n), where n is the number of atoms in the DNNF. This increase is quite minimal in practice though." It may be useful to clarify this difference in opinion on whether smoothing is problematic. Another issue is that the proposed algorithm requires a structured circuit as input but doesn't preserve structuredness. Line 266 lists some tasks which don't need structuredness, but doesn't discuss how much other common tasks depend on structuredness. If only tasks not needing structuredness are to be performed, then getting the circuit into a structured form (to be able to apply the proposed algorithm) may come with an additional cost.

REPRODUCIBILITY
The checklist asserts that all information on the used data is provided, but the paper doesn't give details. The supplementary material is a zip file, but most directories seem to contain code.

Detailed comments:
* L 120: please specify whether the m in O(m) gives the number of variables or the number of literals or ... If you intend to use m as a global variable throughout the paper, then please make this very clear at the beginning of the section. My guess is that here m is the number of literals. (E.g., hiding the specification of m and n in the caption of Table 1 is not sufficient.)
* L 159: the definition suggests that a smoothing-gate algorithm could output a circuit which is not logically equivalent to the input circuit. Is that the intention?
* L 177: it seems that for the internal x-gate, c1 and c2 could encode non-trivial circuits, and _replacing_ them with SG(u_{vl}\setminus u_{\rho(c_1)}) and SG(u_{vr}\setminus u_{\rho(c_2)}) gives a non-equivalent circuit. Maybe you want to add children to the gate?
* L 181: in contrast to L 120, it seems here that m is the number of intervals (hence L 120 needs a specification of the meaning of m).
* L 186: what is the meaning of n here?
* L 204: Theorem 4 is ill-formulated. It says "there exists intervals such that .... takes ... time". However, once we know the intervals we can pre-compute the sums. I guess you need to say "for all algorithms, there exist m intervals such that ... takes ... time."
* L 211: please define accurately "the class of smoothing-gate algorithms".
* L 253: here, m and n are nicely defined.

-------------------

I appreciate the informative author feedback.

Reviewer 3

# Summary
The paper proposes an efficient, near-linear time algorithm for smoothing a circuit, focusing on structured decomposable circuits, thus improving on the naive quadratic algorithm.

# Quality
The authors analytically prove the complexity of the proposed algorithm in Theorem 3. They then prove a lower bound of essentially the same complexity. Using the result reported in Lemma 1, the authors show how to apply semigroup range-sum data structures, leading to a method for smoothing the AC with low time complexity (see Theorem 3). Another linear-time algorithm is proposed for the All-Marginals inference task, avoiding the need for smoothing. A final experimental evaluation confirms the validity of the proposed methods.

# Clarity
The paper is well written, the concepts are well defined, and the proofs correctly establish the reported statements. A suggestion could be to better explain the background on ACs to make the paper more self-contained. Definition 8 could be split into two definitions, one for smoothness and the other for the task of smoothing.

# Originality
The idea is quite simple; however, the reported results, in terms of theorems and experiments, demonstrate the validity of the proposed approach.

# Significance
The approach presented in the paper makes it possible to solve inference tasks in linear time on non-smoothed ACs. This is a very important result.
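
For context on the baseline the reviews compare against, here is a minimal sketch of the naive smoothing operation (an added illustration, not code from the paper under review): every OR gate's children are padded with trivial (x OR NOT x) gadgets for the variables they are missing, which preserves logical equivalence but can blow the circuit up quadratically in the worst case.

```python
# Sketch of naive circuit smoothing (illustrative only; the paper's algorithm is
# asymptotically faster). A node is ("lit", var, polarity), ("and", *children),
# or ("or", *children). For each OR gate, every child is conjoined with
# (x OR NOT x) for each variable x the gate mentions but the child does not.

def variables(node):
    """Set of variables mentioned in the subcircuit rooted at `node`."""
    if node[0] == "lit":
        return {node[1]}
    return set().union(*(variables(c) for c in node[1:]))

def smooth(node):
    """Return a logically equivalent smooth circuit (worst-case quadratic size)."""
    if node[0] == "lit":
        return node
    children = [smooth(c) for c in node[1:]]
    if node[0] == "and":
        return ("and", *children)
    all_vars = set().union(*(variables(c) for c in children))
    padded = []
    for c in children:
        missing = sorted(all_vars - variables(c))
        fillers = [("or", ("lit", v, True), ("lit", v, False)) for v in missing]
        padded.append(("and", c, *fillers) if fillers else c)
    return ("or", *padded)

if __name__ == "__main__":
    circuit = ("or", ("lit", "x1", True),
                     ("and", ("lit", "x2", True), ("lit", "x3", False)))
    print(smooth(circuit))
```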