__ Summary and Contributions__: The authors present a spectral method for identifying the pairs of reads most likely to have significant overlaps, especially in the setting of long reads such as PacBio. The method is also able to produce confidence intervals. Furthermore, the results on at least one biological dataset appear promising.

__ Strengths__: 1) The authors derive a theoretically clean framework of rank-one approximations (I saw it at RECOMB so was already familiar with it, but I do consider it a key strength).
2) The authors prove rigorous bounds on their confidence intervals (albeit dependent on unspecified constants).
3) The authors demonstrate promising results on an E. coli genome sequenced with PacBio.

__ Weaknesses__: 1) The paper feels unfinished and crammed; instead of including so much technical detail, with derivations, in the main manuscript, I wish the authors had focused on the application and summarized the technical details, relegating them to appendices.
2) The guarantees derived are unclear as they contain arbitrary constants, which are furthermore different between different theorems and lemmas. It is not clear what the results mean in practice, since asymptotics do not apply so readily to the situation of finite-length reads from a finite-length genome.
3) The conceptual jumps back-and-forth between workers, questions, reads, and k-mers are confusing; the paper needs to be greatly improved in terms of its clarity.

__ Correctness__: As far as I was able to follow them (which is not all the way), I believe that the claims are correct. The empirical methodology is promising, but comparisons are lacking.

__ Clarity__: Not clear enough; there is a lot of room for improvement.

__ Relation to Prior Work__: Instead of MHAP, I would have liked to see a more recent reference such as Mash.

__ Reproducibility__: Yes

__ Additional Feedback__:

__ Summary and Contributions__: The paper brings ideas from crowdsourcing and multi-armed bandit domains to the problem of pairwise alignment of biological sequences, viewing random hash functions as outputs of crowdsourcing workers, and solving it via spectral decomposition of a response matrix (k-mers taken from the biological sequences times number of hash functions). Where there are dominant alignments, a multi-armed bandit formulation helps refine this solution efficiently.
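To make the crowdsourcing analogy concrete, here is a minimal, illustrative sketch (not the authors' implementation; all names and the toy data are assumptions) of rank-one spectral scoring of a binary response matrix, where the leading singular vector is used to rank the items:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy response matrix: rows are items (k-mers), columns are "workers"
# (random hash functions); an entry of 1 means that hash voted for the item.
n_items, n_workers = 50, 20
true_quality = np.linspace(0.9, 0.1, n_items)  # hidden per-item signal
responses = (rng.random((n_items, n_workers)) < true_quality[:, None]).astype(float)

# Rank-one approximation: the leading left singular vector scores the items.
u, s, vt = np.linalg.svd(responses, full_matrices=False)
scores = np.abs(u[:, 0])  # singular vectors are sign-ambiguous; take magnitudes

top_k = np.argsort(scores)[::-1][:5]  # indices of the top-5 scored items
```

In the paper's setting the rows would be k-mers drawn from the reads and the columns hash functions, but the rank-one scoring step has this same shape.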

__ Strengths__: The strength of the paper is in the crowdsourcing solution, the derivation of the spectral algorithm for identifying the top k items and theorem on the failure probabilities.

__ Weaknesses__: But I am very disappointed with the paper from the point of view of what it promises in the title: algorithms for biological sequence analysis. After a lot of derivations and theorems, the work in the biological domain is a simple tag-on in the last six lines of the paper, with very little analysis of its empirical usefulness in practice. Since I am no expert in crowdsourcing algorithms and am reading this paper from the point of view of what it promises, it is disappointing, and I do not vote for its acceptance at this conference. I hope the theory-oriented reviewers might support it on the basis of the originality of the contribution there.

__ Correctness__: The algorithmic derivations appear correct.

__ Clarity__: The paper is well written and easy to follow. But much of its content is not in line with what it promises.

__ Relation to Prior Work__: Well described for the algorithmic part, but needs to address better the significance and relevance to biological sequence analysis.

__ Reproducibility__: Yes

__ Additional Feedback__: Discussion stage comment:
I am the most negative about this submission, and happy to be ruled out as an outlier. BUT my reservation stands. A highly theoretical work being masqueraded as an application or potential application continues to disappoint me. I do not see an argument as to WHY such a method is suitable for sequence alignment. What assumption about biology, the rules of evolution, or the characteristics of the measurement instrument gives us reason to believe that the method advanced is suitable? Or is it just an accident that this is the way forward? If it is the latter, the empirical work should be better done than a tiny paragraph hidden in volumes of theoretical work. BUT of course the theoretical part (which I have not gone through in detail) could alone make it a good paper -- in which case it should be pitched and evaluated as such.

__ Summary and Contributions__: In bioinformatics, it is important to identify similar regions between pairs of DNA sequencing reads. A current methodological trend consists of two steps: pairs that are likely to have a large alignment are detected first, and expensive alignment algorithms are performed only on the candidates.
To solve the first-step problem, the authors employ two key ideas: a rank-one crowdsourcing model and a multi-armed bandit algorithm.
The authors give a theoretical analysis of the performance of the proposed algorithm as well as experimental results using both synthetic and real datasets.
---after rebuttal comments---
This reviewer has read the authors' response and the comments from the other reviewers. Considering all of that, this reviewer would like to maintain the original decision.

__ Strengths__: The combination of the two key ideas is novel.

__ Weaknesses__: Computational experiments are rather limited: only one baseline method is compared against, and the dependence of the proposed method on its parameters is not examined much.

__ Correctness__: Not sure. I have not checked the proofs.

__ Clarity__: The paper is clearly written.

__ Relation to Prior Work__: Related works are surveyed and described sufficiently.

__ Reproducibility__: Yes

__ Additional Feedback__:

__ Summary and Contributions__: The authors consider MinHash in a rank-1 crowdsourcing framework, solvable via spectral decomposition. A multi-armed bandit algorithm is then used to improve estimation via iterative refinement. A key application for the authors is pairwise alignment of DNA sequences, with similarity estimated through the Jaccard similarity of k-mer sets.
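The MinHash/Jaccard connection referred to here can be sketched as follows (an illustrative toy, not the paper's code; the sequences and helper names are assumptions): the fraction of seeded hash functions whose minima agree between two k-mer sets is an unbiased estimate of their Jaccard similarity.

```python
import hashlib

def kmers(seq, k=5):
    """All overlapping k-mers of a sequence, as a set."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_signature(items, n_hashes=64):
    """One minimum per seeded hash function (the 'workers' of the analogy)."""
    def h(x, seed):
        d = hashlib.blake2b(x.encode(), digest_size=8,
                            salt=seed.to_bytes(8, "little")).digest()
        return int.from_bytes(d, "little")
    return [min(h(x, seed) for x in items) for seed in range(n_hashes)]

def jaccard_estimate(sig_a, sig_b):
    """Fraction of agreeing minima estimates |A ∩ B| / |A ∪ B|."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = kmers("ACGTACGTGGTTACGT")
b = kmers("ACGTACGTGGTTACGA")  # same read with one substitution at the end
true_j = len(a & b) / len(a | b)
est_j = jaccard_estimate(minhash_signature(a), minhash_signature(b))
```

With more hash functions the estimate concentrates around the true Jaccard similarity, which is what makes the bandit-style refinement over hash "pulls" natural.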

__ Strengths__: The presented bandit algorithm is well justified and demonstrated to have substantial computational advantages in the empirical section. The problem is broadly applicable, particularly in the biological domain. This is a good theoretical contribution with solid applications. Being broadly applicable and computationally efficient, it is very relevant to the community.

__ Weaknesses__: Due to space limitations, the authors present a thresholded bandit only in the supplementary, but no experiments involving it appear in either the main text or the supplementary. It would have been nice to see some results for it (even if only in the supplement).

__ Correctness__: The paper is solid. The proofs are easy to follow and the empirical methodology is good.

__ Clarity__: The paper is well written and the results quite well presented. Figure 2 would be a bit more readable if the ribbon showed 95% CIs rather than 1 SD.

__ Relation to Prior Work__: The paper is well placed in existing literature. The novel contributions made by this article are clear.

__ Reproducibility__: Yes

__ Additional Feedback__: Will software be made available? I'd love to see this used in genomics algorithms; there are many applicable areas.