This paper examines user-level privacy in the context of learning discrete distributions. Near-matching upper and lower bounds (with a corresponding mechanism) on the number of users m required for a desired level of total variation distance are established, and it is shown that the natural baselines, the Laplace and Gaussian mechanisms, achieve performance inferior by a factor of sqrt(m). User-level privacy is a variation on pure/approximate differential privacy in which a mechanism's response distribution must be indistinguishable not only under the change of an individual item (record) but under the change of all items belonging to a single user. The paper considers the cases of users contributing equal and unequal numbers of items.

R3 highlights an important concern with the user-privacy definition required for the proposed mechanism: that the users' items are drawn i.i.d. from the same distribution. This meta-reviewer agrees that this assumption somewhat limits the immediate practical grounding of the work; however, one can imagine situations where the assumption is reasonable, such as hospitals (users) each sharing large numbers of items drawn from one district's population. Moreover, as the reviewers note, the presented results are an important milestone in the study of user complexity and the cost of privacy in this federated learning setting; the lower bound of Theorem 3 applies to the stronger model. As R3 requests, it is paramount that the authors discuss the i.i.d. assumption in the paper.

The mechanism itself is an interesting application of Bun et al. (2019)'s private hypothesis selection algorithm. Along the way to achieving its goals, the paper contributes results of independent interest, such as its lower-bounding technique (a comparison to Acharya et al. 2018 is needed) and Theorem 5 for learning binomial distributions (bounding parameter estimation from binomial density estimation).
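For concreteness, the Laplace-mechanism baseline referenced above can be sketched as follows. This is a hypothetical illustration (not the paper's mechanism): it pools all users' items into an empirical histogram over k symbols and adds Laplace noise calibrated to user-level L1 sensitivity, assuming the equal-contribution case where each of m users holds n items.

```python
import numpy as np

def user_level_laplace_histogram(user_samples, k, epsilon, rng=None):
    """Laplace baseline under user-level DP (illustrative sketch).

    user_samples: list of m integer arrays, each of length n, values in [0, k).
    Returns a noisy estimate of the underlying discrete distribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = len(user_samples)            # number of users
    n = len(user_samples[0])         # items per user (equal-size case assumed)
    counts = np.zeros(k)
    for items in user_samples:
        counts += np.bincount(items, minlength=k)
    p_hat = counts / (m * n)
    # User-level sensitivity: replacing one user's n items can shift at most
    # n counts between bins, changing p_hat by at most 2n/(mn) = 2/m in L1.
    sensitivity = 2.0 / m
    noisy = p_hat + rng.laplace(scale=sensitivity / epsilon, size=k)
    # Project back to a valid distribution (clip negatives, renormalize).
    noisy = np.clip(noisy, 0.0, None)
    total = noisy.sum()
    return noisy / total if total > 0 else np.full(k, 1.0 / k)
```

Because the noise scale depends only on m (not n), the error of this baseline does not vanish as users contribute more items, which is consistent with the sqrt(m) gap to the proposed mechanism noted above.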
It would be interesting to see high-probability accuracy bounds in addition to the TV-distance (expected \ell_1) bounds, along with the dependence on the confidence parameter.