NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at Vancouver Convention Center
Paper ID: 8969
Title: Differential Privacy Has Disparate Impact on Model Accuracy

The paper presents the important finding that certain DP learning methods may amplify unfairness, supported by an extensive empirical evaluation of the effect. All the reviews and the author feedback were discussed extensively among the reviewers. The paper was considered a strong and useful contribution from the DP perspective. Its treatment of the fairness angle was considered shallower, but nevertheless strong enough for acceptance.

I urge the authors to seriously consider the recommended improvements for the final version. In particular, you should be careful to restrict your claims to what your experiments support, and to note their limitations. You have made a strong case that certain types of DP learning are at odds with certain types of fairness in certain scenarios, but this does not prove that all DP learning must have disparate impact. You should update the title, the abstract, and the text to reflect this.

Minor point: your response on adversarial training was not considered convincing by the expert reviewers. The privacy guarantee of DP-SGD depends on bounding the norm of per-example gradients, but it does not restrict how those gradients are obtained.
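To make the last point concrete: DP-SGD obtains its guarantee by clipping each per-example gradient to a fixed L2 bound and adding calibrated Gaussian noise to the aggregate; the analysis is agnostic to how each gradient was produced (e.g., whether the example was clean or adversarially perturbed). A minimal NumPy sketch of one such aggregation step (illustrative only; the function and parameter names here are ours, not from the paper):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD aggregation step: clip each per-example gradient to
    L2 norm <= clip_norm, sum, add Gaussian noise, and average."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if it exceeds the clipping bound;
        # the privacy analysis relies solely on this bound, not on how g arose.
        clipped.append(g / max(1.0, norm / clip_norm))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

The sketch shows why the adversarial-training argument does not go through: substituting gradients computed on perturbed inputs changes nothing in the clipping-and-noising step that the privacy accounting depends on.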