This paper points out a fundamental limitation of the PAC-Bayes framework: namely, that there exist problems/distributions known to be learnable for which PAC-Bayes bounds fail to give meaningful generalization guarantees. This is an important (albeit negative) result that furthers our understanding of this powerful theoretical framework. The reviews agree that the subject is timely and compelling; the paper is well written and its results are presented with reasonable clarity. The reviews also raise some valid concerns:

1) The chosen PAC-Bayes bound is far from the tightest available, and tighter, more recent bounds should be considered. The authors responded that even Seeger's bound can fail, but this point should be discussed in detail in the paper.

2) The setting considered -- a 1-D classification problem -- is not very interesting from a practical perspective. That said, most reviewers also felt that this is a minor concern, and the authors point out that, technically, the results hold for any hypothesis class that contains 1-D classifiers as a subclass (such as N-D classifiers).

Most reviewers remained positive on the paper after the author response, but they were also disappointed that the response did not address some of their concerns. That said, with a 1-page limit, it is impossible to address everything. I strongly encourage the authors to incorporate *all* of the reviewers' feedback into the final version of the paper.
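For concreteness, the tighter bound raised in the discussion is the Seeger (kl-form) PAC-Bayes bound; the statement below is the standard form from the literature, not a result of the paper under review. With prior $\pi$, posterior $\rho$, sample size $n$, empirical risk $\hat{L}_S$, true risk $L_{\mathcal{D}}$, and confidence $\delta$, with probability at least $1-\delta$ over the draw of $S$, simultaneously for all $\rho$:

$$
\mathrm{kl}\!\left( \mathbb{E}_{h \sim \rho}\big[\hat{L}_S(h)\big] \,\Big\|\, \mathbb{E}_{h \sim \rho}\big[L_{\mathcal{D}}(h)\big] \right) \;\le\; \frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln \frac{2\sqrt{n}}{\delta}}{n},
$$

where $\mathrm{kl}(p \| q)$ is the binary relative entropy. The authors' claim is that even this tighter form can fail to be meaningful on their construction, which is why the camera-ready should spell out the argument rather than leaving it in the rebuttal.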