AC's comments before receiving the ethics review (see below):

The reviewers had a lively and detailed discussion and settled on the following points:

- The framing of the paper seems reasonable: it is plausible that certain variables exist in historical data but are unavailable, or not permitted, at prediction time. There is a similar line of work on this in algorithmic fairness, which is very plausible given GDPR, companies not wanting to be sued for violating laws, or HIPAA/security issues.
- The reviewers agree that the authors should run the baseline suggested by R3: imputing the missing values and seeing how well that does. They do buy the authors' argument that their approach will work better, since imputation is a strictly harder problem.
- The reviewers agree that the setting of runtime confounding is not an especially difficult technical problem, since the confounders are observed in the training data. However, the realism of the setting makes them think this is still a useful problem to address, even if it is much simpler than other causal settings.

I urge the authors to modify the paper according to the suggestions of the reviewers. I vote to accept.

AC's comments after receiving the ethics review:

After considering the ethics review and looking over the paper again, I have decided that the paper should still be accepted, for the following reason. The ethics review's main concern was that the current paper does not sufficiently engage with the literature on fairness in ML. I do agree that there are a number of methods in the fairness community for addressing the issue of not having sensitive data at test time, including algorithms created specifically for this purpose, as well as methods in which only third parties handle sensitive attributes or in which sensitive attributes are encrypted:

- Agarwal, A., Beygelzimer, A., Dudik, M., Langford, J., and Wallach, H. "A reductions approach to fair classification." ICML, 2018.
- Jagielski, M., Kearns, M., Mao, J., Oprea, A., Roth, A., Sharifi-Malvajerdi, S., and Ullman, J. "Differentially private fair learning." ICML, 2019.
- Veale, M. and Binns, R. "Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data." Big Data & Society, 4(2), 2017.
- Kilbertus, N., Gascon, A., Kusner, M., Veale, M., Gummadi, K., and Weller, A. "Blind Justice: Fairness with Encrypted Sensitive Attributes." ICML, 2018.

While these works target only supervised learning and this paper targets a causal quantity, the paper's method just requires a set of regressions, so in principle the approaches above could apply; modifying them to fit this setting would, however, be non-trivial. I think that if the authors added a discussion of the relationship to this line of work in fairness, it would be enough. Further, this approach applies beyond fairness, to any case of confounding that is observed during training but unobserved during testing, and I think the benefit of introducing the setting of "runtime confounding" to the ML community outweighs the missing references and discussion, which can easily be added.

----------- ADDITIONAL REVIEW FROM ETHICS EXPERT --------------

- What is your recommendation with regard to the broader impact statement? Are any revisions needed?

I am not comfortable accepting the paper. The paper positions itself in contexts in which historical decisions likely suffer from biases, i.e., health care, education, lending, criminal justice, and child welfare. Indeed, these are the main settings considered in the ML fairness literature.
One of the examples mentioned in the introduction to justify the goal of dealing with unavailable information at evaluation time is in fact about fairness. The authors say that 'runtime confounding arises when historical decisions and outcomes have been affected by sensitive or protected attributes which for legal or ethical reasons are deemed ineligible as inputs to algorithmic predictions. We may for instance be concerned that parole boards implicitly relied on race in their decisions, but it would not be permissible to include race as a model input.' Thus I was expecting this to be an ML fairness paper, yet the paper discusses the fairness aspect only superficially, in the broader impact statement. I would have liked the paper to contain, already in the introduction, a discussion of (ML) fairness, and later a detailed explanation of what training ignorability means in terms of unfairness.

Regarding the parole boards example, I have concerns about the implications of ignoring fairness aspects and the ML fairness literature. If the use of race by parole boards is deemed unfair, then the use of the proposed method would not resolve this ethical issue, but only the legal requirements. On the other hand, there are methods in the ML fairness literature (also pointed out by the meta-reviewer) that can do both. Beyond the issue that the unfairness problem is not addressed by the proposed method, the question is then why these methods from the ML fairness literature are not at least discussed in the related work section (if not compared against). I am not objecting that the proposed method has no merit with respect to dealing with missing information. But given the paper's strong framing in sensitive contexts, a detailed discussion of ML fairness is in my opinion necessary, and perhaps a comparison, given that the proposed method would not be appropriate in the fairness cases discussed in the paper.
I would like to see this discussion before publication, as some of it might be non-trivial, for example the translation of the ignorability assumptions into fairness assumptions.

- Would the publication of the research potentially bring with it undue risk of harm? Please discuss any suggested mitigations or required changes.

The framing of the paper is poor. The paper positions itself in settings that suffer from fairness issues and that are considered in ML fairness, without a reasonable discussion of this. The proposed method would mostly not be appropriate in such settings, whilst there are methods in the ML fairness literature that are. This should be clarified. It seems to me that the reviewers are not clear about what the ignorability assumption would mean in terms of fairness for the relevant settings (it would not be possible to give one exact formula, but the instruments for understanding how to make the translation should be given). Readers and practitioners will be even more confused. I am of the opinion that the paper needs to be rewritten substantially into a more informed paper with respect to ML fairness before publication.