NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 85
Title: Average-Case Averages: Private Algorithms for Smooth Sensitivity and Mean Estimation

Reviewer 1

Overall the results are somewhat interesting. Smooth sensitivity is a standard technique for achieving differential privacy; although most interesting smooth sensitivity algorithms tend to be computationally hard, I believe the results would still be interesting. The paper is also reasonably well written.

-- I read the author response. Something that might make the paper more useful to a practical reader is a more concrete statement of what the results mean for Rényi-DP, another related relaxation. Finally, a related paper is [1], which also gives a Rényi-DP-based analysis for smooth sensitivity algorithms. It would be nice to see how the results in this paper compare with [1].

[1] Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Úlfar Erlingsson. Scalable Private Learning with PATE. ICLR 2018.
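For context on the reviewer's Rényi-DP remark, the following standard definitions (due to Mironov, 2017, and Bun and Steinke, 2016; background facts, not results of the submission) make the connection between concentrated DP and Rényi-DP concrete:

    D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1} \log \mathbb{E}_{y \sim Q}\!\left[ \left( \frac{P(y)}{Q(y)} \right)^{\alpha} \right],

    M \text{ is } (\alpha, \varepsilon)\text{-RDP} \iff D_\alpha\big(M(x) \,\|\, M(x')\big) \le \varepsilon \text{ for all neighboring } x, x',

    M \text{ is } \rho\text{-zCDP} \iff M \text{ is } (\alpha, \rho\alpha)\text{-RDP for every } \alpha > 1.

In particular, any ρ-zCDP bound in the paper already implies the Rényi-DP curve ε(α) = ρα, which is one concrete way to state what the results mean for Rényi-DP.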

Reviewer 2

I thank the authors for shedding some light on how these results compare with Karwa and Vadhan [15] and Feldman and Steinke [8]. The results on mean estimation, however, do seem narrow in scope. The paper would be stronger if the authors could discuss other possible applications of the smooth sensitivity and CDP theory.

---------------

The paper extends the smooth sensitivity framework of Nissim et al. [18] by identifying three new distributions for which additive noise scaled to smooth sensitivity provides concentrated differential privacy. These techniques are applied to the problem of mean estimation (for data drawn i.i.d. from a distribution) using a trimmed mean estimator. The authors present two results for this mean estimation problem: the first gives a stronger accuracy guarantee under a symmetric subgaussianity assumption, whereas the second gives a weaker accuracy guarantee under minimal distributional assumptions. The authors also present an experimental evaluation of these techniques on Gaussian data.

The theoretical results appear correct. The identification of the distributions that provide CDP when scaled to smooth sensitivity could be useful. The main results in this paper are the CDP bounds for estimating the mean. In this setting, Theorem 7 (on general distributions) looks interesting; however, it is unclear how significant the improvement is over the existing results of Feldman and Steinke [8]. Overall, at this point the current results do not feel significant enough for NeurIPS.
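For reference, the mechanism shape in the Nissim et al. [18] framework that this review summarizes is the following (standard definitions, reproduced here rather than taken from the submission):

    \mathrm{LS}_f(x) = \max_{x' : d(x, x') = 1} \lvert f(x) - f(x') \rvert,
    \qquad
    S^{\beta}_f(x) = \max_{x'} e^{-\beta\, d(x, x')}\, \mathrm{LS}_f(x'),

    \mathcal{A}(x) = f(x) + \frac{S^{\beta}_f(x)}{s} \cdot Z,

where d is Hamming distance, LS_f is the local sensitivity of the statistic f, Z is drawn from an admissible noise distribution, and the constants s and β are calibrated to the target privacy guarantee. The paper's contribution, per this review, is identifying new choices of Z for which this mechanism satisfies concentrated DP.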

Reviewer 3

This paper investigates the problem of private mean estimation. Specifically, the authors exploit smooth sensitivity and concentrated differential privacy (CDP), showing that for three distributions with quasi-polynomial tails, additive noise scaled to smooth sensitivity provides CDP.
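To make the mechanism that Reviewers 2 and 3 describe concrete, here is a minimal Python sketch of a noisy trimmed mean. It is illustrative only: the trimming fraction, the Student's t noise (one plausible heavy-tailed stand-in), and the precomputed smooth-sensitivity bound are assumptions, not the submission's actual noise distributions or calibration.

    import numpy as np

    def trimmed_mean(x, trim_frac=0.1):
        # Average the middle (1 - 2*trim_frac) fraction of the sorted sample.
        x = np.sort(np.asarray(x, dtype=float))
        k = int(trim_frac * len(x))
        return x[k:len(x) - k].mean()

    def private_trimmed_mean(x, smooth_sens, scale, trim_frac=0.1, rng=None):
        # Generic smooth-sensitivity mechanism: release f(x) + (S(x)/s) * Z.
        # `smooth_sens` is a precomputed beta-smooth sensitivity bound for the
        # trimmed mean on this dataset (its computation is estimator-specific
        # and omitted here). Student's t noise is an illustrative stand-in
        # for the paper's admissible distributions.
        rng = np.random.default_rng() if rng is None else rng
        z = rng.standard_t(df=3)
        return trimmed_mean(x, trim_frac) + (smooth_sens / scale) * z

Calibrating s (and the smoothing parameter β) so that a given noise family yields a CDP guarantee is precisely the kind of analysis the paper supplies; the sketch only shows where the smooth sensitivity enters the release.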