This submission investigates the use of the empirical likelihood approach for off-line policy evaluation in contextual models (a scope, incidentally, that is not made fully explicit in its title). All reviewers considered the paper a novel and conceptually interesting contribution and recommend its acceptance. I agree with this general opinion, with the added warning that both the reviews and the post-rebuttal reviewers' discussion made it clear that the current writing needs improvement in several respects. Please note that this is a unanimous request of the reviewers (including those who gave very good ratings), as these quotes from the discussion readily show: "After reading the other referee reports and the authors comments I still have the feeling that the authors could have done a better job in formulating the problems and giving enough context (providing a bunch of references is not equal to writing an easy to read, accessible and good paper). This paper was by far the hardest to read for me this year [...]", "I also found the paper really hard to read, up to the point where I found that this was severely undermining its potential impact.", "I agree with what has been said so far. I hope that the feedback will lead to some improvements to the presentation of the results." Along the same lines, I can add that I share these concerns, even though I was already familiar with the empirical likelihood method before reading the paper. In their response, the authors committed to making the requested changes when preparing the final version of the paper; in the present case, it is very important that they take this opportunity to improve its general presentation.