Private Distribution Learning with Public Data: The View from Sample Compression

Part of Advances in Neural Information Processing Systems 36 (NeurIPS 2023) Main Conference Track

Authors

Shai Ben-David, Alex Bie, Clément L. Canonne, Gautam Kamath, Vikrant Singhal

Abstract

We study the problem of private distribution learning with access to public data. In this setup, which we refer to as *public-private learning*, the learner is given public and private samples drawn from an unknown distribution p belonging to a class Q, with the goal of outputting an estimate of p while adhering to privacy constraints (here, pure differential privacy) only with respect to the private samples. We show that the public-private learnability of a class Q is connected to the existence of a sample compression scheme for Q, as well as to an intermediate notion we refer to as *list learning*. Leveraging this connection, we: (1) approximately recover previous results on Gaussians over R^d; and (2) obtain new ones, including sample complexity upper bounds for arbitrary k-mixtures of Gaussians over R^d, results for agnostic and distribution-shift-resistant learners, as well as closure properties for public-private learnability under taking mixtures and products of distributions. Finally, via the connection to list learning, we show that for Gaussians in R^d, at least d public samples are necessary for private learnability, which is close to the known upper bound of d+1 public samples.