The authors propose a new PAC-Bayes bound for majority vote classifiers (such as, but not restricted to, classifiers produced by ensemble methods). Contrary to most known PAC-Bayes bounds, their proposed bound does not rely only on the so-called Gibbs risk (which can be viewed as the average quality of the voters on which the majority vote is built), but also on second-order information that quantifies how decorrelated the errors of the voters in the majority vote are. A similar attempt was made by Lacasse et al. (2007) and Germain et al. (2015). The present approach is nevertheless more general, simpler, and seems to lead to tighter majority vote bounds.

Most learning algorithms can be analysed through PAC-Bayesian theory. However, classical PAC-Bayesian generalization bounds for classification suffer from relying on the first moment only, the approach of Lacasse et al. (2007) and Germain et al. (2015) being until now the exception; in most cases, however, those bounds turned out to be too loose. The bound proposed in this paper seems to overcome this weakness and also gives a more precise understanding of learning algorithms that output majority-vote-style classifiers. In my opinion, this result is of high interest for the machine learning community.
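For context (standard background, not a result of the submission under review): the classical first-order route bounds the majority vote risk via the Gibbs risk alone, while the second-order C-bound of Lacasse et al. (2007) also exploits the expected disagreement between voters:

```latex
% First-order: the Q-weighted majority vote risk bounded by twice
% the Gibbs risk R(G_Q) (the expected risk of a randomly drawn voter)
R(\mathrm{MV}_Q) \;\le\; 2\, R(G_Q)

% Second-order (C-bound of Lacasse et al., 2007): d_Q denotes the
% expected disagreement between pairs of voters; valid when R(G_Q) < 1/2
R(\mathrm{MV}_Q) \;\le\; 1 - \frac{\bigl(1 - 2\,R(G_Q)\bigr)^2}{1 - 2\, d_Q}
```

The second bound shows why decorrelation helps: for a fixed Gibbs risk, a larger disagreement \(d_Q\) makes the bound tighter, which the first-order bound cannot capture.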