{"title": "Efficiently Estimating Erdos-Renyi Graphs with Node Differential Privacy", "book": "Advances in Neural Information Processing Systems", "page_first": 3770, "page_last": 3780, "abstract": "We give a simple, computationally efficient, and node-differentially-private algorithm for estimating the parameter of an Erdos-Renyi graph---that is, estimating p in a G(n,p)---with near-optimal accuracy. Our algorithm nearly matches the information-theoretically optimal exponential-time algorithm for the same problem due to Borgs et al. (FOCS 2018). More generally, we give an optimal, computationally efficient, private algorithm for estimating the edge-density of any graph whose degree distribution is concentrated in a small interval.", "full_text": "Efficiently Estimating Erdős-Rényi Graphs with Node Differential Privacy

Adam Sealfon
MIT and UC Berkeley
asealfon@berkeley.edu

Jonathan Ullman
Northeastern University
jullman@ccs.neu.edu

Abstract

We give a simple, computationally efficient, and node-differentially-private algorithm for estimating the parameter of an Erdős-Rényi graph—that is, estimating p in a G(n, p)—with near-optimal accuracy. Our algorithm nearly matches the information-theoretically optimal exponential-time algorithm for the same problem due to Borgs et al. (FOCS 2018). More generally, we give an optimal, computationally efficient, private algorithm for estimating the edge-density of any graph whose degree distribution is concentrated in a small interval.

1 Introduction

Network data modeling individuals and relationships between individuals are increasingly central in data science. As some of the most interesting network datasets include sensitive information about individuals, there is a need for private methods for analysis of these datasets, ideally satisfying strong mathematical guarantees like differential privacy [9].
However, while there is a highly successful literature on differentially private statistical estimation for traditional i.i.d. data, the literature on estimating network statistics is far less developed.

Early work on private network data focused on edge differential privacy, in which the algorithm is required to "hide" the presence or absence of a single edge in the graph (e.g. [20, 14, 16, 13, 1, 22, 17] and many more). A more desirable notion of privacy, which is the focus of this work, is node differential privacy (node-DP), which requires the algorithm to hide the presence or absence of a single node and the (arbitrary) set of edges incident to that node.

However, node-DP is often difficult to achieve without compromising accuracy, because even very simple graph statistics can be highly sensitive to adding or removing a single node. For example, the count of edges in the graph, |E|, can change by ±n by adding or deleting a single node from an n-node graph, which means that no node-DP algorithm can count the number of edges with error o(n) on a worst-case graph. We emphasize that even these simple statistics like the edge count can disclose sensitive information if no steps are taken to ensure privacy, especially when we release many such statistics on related graphs. There has been an enormous body of work that has uncovered the privacy risks of releasing simple statistics like counts in the i.i.d. setting (e.g. [8, 10, 12, 15, 19, 5, 11]), and the additional graph structure only makes these risks more acute.

Although node-DP is difficult to achieve on worst-case graphs, the beautiful works of Blocki et al. [2] and Kasiviswanathan et al. [18] showed how to design node-DP estimators that are highly accurate on "nice" graphs that have additional properties observed in practice—for example, graphs with small maximum degree—using the technique of Lipschitz extensions.
However, many of the known constructions of Lipschitz extensions require exponential running time, and constructions of computationally efficient Lipschitz extensions [21, 7, 6] lag behind. As a result, even for estimating very simple graph models, there are large gaps in accuracy between the best known computationally efficient algorithms and the information-theoretically optimal algorithms.

33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

In this work we focus on arguably the simplest graph statistic, the edge count |E|, in undirected, unweighted graphs. We give improved estimators for this quantity on concentrated-degree graphs. Intuitively, a concentrated-degree graph is one in which the degree of every node lies in some small (but not publicly known) range [d̄ − k, d̄ + k], which generalizes the case of graphs with low maximum degree. We give a simple, polynomial-time node-DP algorithm with optimal accuracy for estimating the count of edges in concentrated-degree graphs. Our estimator is inspired by Lipschitz extensions, but avoids directly constructing an efficient Lipschitz extension, and thus our approach may be useful for computing other graph statistics in settings where efficient Lipschitz extensions are unknown or unachievable.

The main application of this estimator is to estimate the parameter for the simplest possible network model, the Erdős-Rényi graph. In this model, denoted G(n, p), we are given a number of nodes n and a parameter p ∈ [0, 1], and we sample an n-node graph G by independently including each edge (i, j) for 1 ≤ i < j ≤ n with probability p. The goal is to design a node-DP algorithm that takes as input a graph G ∼ G(n, p) and outputs an estimate p̂ ≈ p. Surprisingly, until the elegant recent work of Borgs et al.
[3], the optimal accuracy for estimating the parameter p in a G(n, p) via node-DP algorithms was unknown. Although that work essentially resolved the optimal accuracy of node-DP algorithms, their construction is again based on generic Lipschitz extensions, and thus results in an exponential-time algorithm, and, in our opinion, gives little insight into how to construct an efficient estimator with similar accuracy. Erdős-Rényi graphs automatically satisfy the concentrated-degree property with high probability, and thus we immediately obtain a computationally efficient, node-DP estimator for Erdős-Rényi graphs. The error of our estimator nearly matches that of Borgs et al., and indeed does match it for a wide range of parameters.

1.1 Background: Node-Private Algorithms for Erdős-Rényi Graphs

Without privacy, the optimal estimator is simply to output the edge-density pG = |E|/\binom{n}{2} of the realized graph G ∼ G(n, p), which guarantees that

E_G[(p − pG)²] = p(1 − p)/\binom{n}{2}.

The simplest way to achieve ε-node-DP is to add zero-mean noise to the edge-density with standard deviation calibrated to its global sensitivity, which is the amount that changing the neighborhood of a single node in a graph can change its edge-density. The global sensitivity of pG is Θ(1/n), and thus the resulting private algorithm Anaïve satisfies

E_G[(p − Anaïve(G))²] = Θ(1/ε²n²).

Note that this error is on the same order as or larger than the non-private error.

Borgs et al.
[3] gave an improved ε-node-DP algorithm such that, when both p and ε are ≳ (log n)/n,

E[(p − Abcsz(G))²] = p(1 − p)/\binom{n}{2} + Õ(p/(ε²n³)),

where the first term is the non-private error and the second is the overhead due to privacy. What is remarkable about their algorithm is that, unless ε is quite small (roughly ε ≲ n^{−1/2}), the first term dominates the error, in which case privacy comes essentially for free. That is, the error of the private algorithm is only larger than that of the optimal non-private algorithm by a 1 + o(1) factor. However, as we discussed above, this algorithm is not computationally efficient.

The only computationally efficient node-DP algorithms for computing the edge-density apply to graphs with small maximum degree [2, 18, 21], and thus do not give optimal estimators for Erdős-Rényi graphs unless p is very small.

1.2 Our Results

Our main result is a computationally efficient estimator for Erdős-Rényi graphs.

Theorem 1.1 (Erdős-Rényi Graphs, Informal). There is an O(n²)-time ε-node-DP algorithm A such that for every n and every p ≳ 1/n, if G ∼ G(n, p), then

E_{G,A}[(p − A(G))²] = p(1 − p)/\binom{n}{2} + Õ(p/(ε²n³) + 1/(ε⁴n⁴)),

where the first term is the non-private error and the remaining terms are the overhead due to privacy.

The error of Theorem 1.1 matches that of the exponential-time estimator of Borgs et al. [3] up to the additive Õ(1/(ε⁴n⁴)) term, which is often not the dominant term in the overall error.
In particular, the error of our estimator is still within a 1 + o(1) factor of the optimal non-private error unless ε or p is quite small (for example, when p is a constant and ε ≳ n^{−1/2}).

Our estimator actually approximates the edge density for a significantly more general class of graphs than merely Erdős-Rényi graphs. Specifically, Theorem 1.1 follows from a more general result for the family of concentrated-degree graphs. For k ∈ N, define Gn,k to be the set of n-node graphs such that the degree of every node is between d̄ − k and d̄ + k, where d̄ = 2|E|/n is the average degree of the graph.

Theorem 1.2 (Concentrated-Degree Graphs, Informal). For every k ∈ N, there is an O(n²)-time ε-node-DP algorithm A such that for every n and every G ∈ Gn,k,

E_A[(pG − A(G))²] = O(k²/(ε²n⁴) + 1/(ε⁴n⁴)),

where pG = |E|/\binom{n}{2} is the empirical edge density of G.

Theorem 1.1 follows from Theorem 1.2 by using the fact that for an Erdős-Rényi graph, with overwhelming probability the degree of every node lies in an interval of width Õ(√(pn)) around the average degree.

The main technical ingredient in Theorem 1.2 is to construct a low-sensitivity estimator f(G) for the number of edges. The first property we need is that when G satisfies the concentrated-degree property, f(G) equals the number of edges in G. The second property of the estimator we construct is that its smooth sensitivity [20] is low on these graphs G. At a high level, the smooth sensitivity of f at a graph G is the most that changing the neighborhood of a small number of nodes in G can change the value of f(G). Once we have this property, it is sufficient to add noise to f(G) calibrated to its smooth sensitivity.
We construct f by carefully reweighting edges that are incident on nodes that do not satisfy the concentrated-degree condition.

Finally, we are able to show that Theorem 1.2 is optimal for concentrated-degree graphs. In addition to being a natural class of graphs in its own right, this lower bound demonstrates that in order to improve Theorem 1.1, we will need techniques that are more specialized to Erdős-Rényi graphs.

Theorem 1.3 (Lower Bound, Informal). For every n and k, and every ε-node-DP algorithm A, there is some G ∈ Gn,k such that E_A[(pG − A(G))²] = Ω(k²/(ε²n⁴) + 1/(ε⁴n⁴)). The same bound applies to (ε, δ)-node-DP algorithms with sufficiently small δ ≲ ε.

2 Preliminaries

Let Gn be the set of n-node graphs. We say that two graphs G, G′ ∈ Gn are node-adjacent, denoted G ∼ G′, if G′ can be obtained from G by modifying the neighborhood of a single node i. That is, there exists a single node i such that for every edge e in the symmetric difference of G and G′, e is incident on i. As is standard in the literature on differential privacy, we treat n as a fixed quantity and define adjacency only for graphs with the same number of nodes. We could easily extend our definition of adjacency to include adding or deleting a single node itself.

Definition 2.1 (Differential Privacy [9]). A randomized algorithm A : Gn → R is (ε, δ)-node-differentially private if for every G ∼ G′ ∈ Gn and every R ⊆ R,

P[A(G) ∈ R] ≤ e^ε · P[A(G′) ∈ R] + δ.

If δ = 0 we will simply say that A is ε-node-differentially private.
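Concretely, the adjacency relation above is easy to test from edge sets: G ∼ G′ exactly when some single vertex is incident on every edge of the symmetric difference of the two edge sets. The following is an illustrative sketch of that test (the edge-set representation and function name are our own, not from the paper):

```python
def node_adjacent(edges_g, edges_h):
    """Test node adjacency (G ~ G') for two graphs on the same vertex set.

    Each graph is given as a set of frozenset({u, v}) edges. The graphs
    are node-adjacent iff every edge in the symmetric difference of the
    two edge sets is incident on one common vertex i.
    """
    diff = edges_g ^ edges_h  # symmetric difference of the edge sets
    if not diff:
        return True  # identical graphs: any node i works vacuously
    # A single node i covers all of diff iff the intersection of the
    # edges (viewed as 2-element sets) is nonempty.
    common = set.intersection(*(set(e) for e in diff))
    return len(common) > 0
```

For example, rewiring the entire neighborhood of one node changes only edges incident on that node, so the resulting graph is node-adjacent to the original.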
As we only consider node differential privacy in this work, we will frequently simply say that A satisfies differential privacy.

The next lemma is the basic composition property of differential privacy.

Lemma 2.2 (Composition [9]). If A1, A2 : Gn → R are each (ε, δ)-node-differentially private algorithms, then the mechanism A(G) = (A1(G), A2(G)) satisfies (2ε, 2δ)-node-differential privacy. The same holds if A2 may depend on the output of A1.

We will say that two graphs G, G′ are at node distance c if there exists a sequence of graphs G = G0 ∼ G1 ∼ ··· ∼ Gc = G′. The standard group privacy property of differential privacy yields the following guarantees for graphs at node distance c > 1.

Lemma 2.3 (Group Privacy [9]). If A : Gn → R is (ε, δ)-node-differentially private and G, G′ are at node distance c, then for every R ⊆ R,

P[A(G) ∈ R] ≤ e^{cε} · P[A(G′) ∈ R] + c·e^{cε}·δ.

Sensitivity and Basic DP Mechanisms. The main differentially private primitive we will use is smooth sensitivity [20]. Let f : Gn → R be a real-valued function. For a graph G ∈ Gn, we can define the local sensitivity of f at G and the global sensitivity of f to be

LSf(G) = max_{G′ : G′ ∼ G} |f(G) − f(G′)|  and  GSf = max_G LSf(G) = max_{G′ ∼ G} |f(G) − f(G′)|.

A basic result in differential privacy says that we can achieve privacy for any real-valued function f by adding noise calibrated to the global sensitivity of f.

Theorem 2.4 (DP via Global Sensitivity [9]). Let f : Gn → R be any function. Then the algorithm A(G) = f(G) + (GSf/ε) · Z, where Z is sampled from a standard Laplace distribution,¹ satisfies (ε, 0)-differential privacy.
Moreover, this mechanism satisfies E_A[(A(G) − f(G))²] = O(GSf²/ε²), and for every t > 0, P_A[|A(G) − f(G)| ≥ t · GSf/ε] ≤ exp(−t).

In many cases the global sensitivity of f is too high, and we want to use a more refined mechanism that adds instance-dependent noise that is more comparable to the local sensitivity. This can be achieved via the smooth sensitivity framework of Nissim et al. [20].

Definition 2.5 (Smooth Upper Bound [20]). Let f : Gn → R be a real-valued function and β > 0 be a parameter. A function S : Gn → R is a β-smooth upper bound on LSf if

1. for all G ∈ Gn, S(G) ≥ LSf(G), and
2. for all neighboring G ∼ G′ ∈ Gn, S(G) ≤ e^β · S(G′).

The key result in smooth sensitivity is that we can achieve differential privacy by adding noise to f(G) proportional to any smooth upper bound S(G).

Theorem 2.6 (DP via Smooth Sensitivity [20, 4]). Let f : Gn → R be any function and S be a β-smooth upper bound on the local sensitivity of f for any β ≤ ε. Then the algorithm A(G) = f(G) + (S(G)/ε) · Z, where Z is sampled from a Student's t-distribution with 3 degrees of freedom,² satisfies (O(ε), 0)-differential privacy. Moreover, for any G ∈ Gn, this algorithm satisfies E_A[(A(G) − f(G))²] = O(S(G)²/ε²).

3 An Estimator for Concentrated-Degree Graphs

3.1 The Estimator

In order to describe the estimator we introduce some key notation. The input to the estimator is a graph G = (V, E) and a parameter k∗.
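Before specifying f and its smooth bound for our setting, note that the release step of Theorem 2.6 is itself only a few lines of code. The sketch below is our own illustration (the helper names are ours); it draws the Student's t noise as a ratio of independent standard normals, Z = X/√((Y1² + Y2² + Y3²)/3):

```python
import math
import random

def student_t_3df():
    # Z = X / sqrt((Y1^2 + Y2^2 + Y3^2) / 3) for independent standard
    # normals X, Y1, Y2, Y3 is Student's t with 3 degrees of freedom
    # (E[Z] = 0, E[Z^2] = 3).
    x = random.gauss(0.0, 1.0)
    y = [random.gauss(0.0, 1.0) for _ in range(3)]
    return x / math.sqrt(sum(v * v for v in y) / 3.0)

def release_with_smooth_sensitivity(f_value, smooth_bound, epsilon):
    # Release f(G) + (S(G)/eps) * Z as in Theorem 2.6, where smooth_bound
    # is a beta-smooth upper bound S(G) on LS_f with beta <= eps.
    return f_value + (smooth_bound / epsilon) * student_t_3df()
```

With S(G) in hand, the mean squared error of such a release is O(S(G)²/ε²), matching the guarantee of the theorem.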
¹The standard Laplace distribution Z has E[Z] = 0, E[Z²] = 2, and density µ(z) ∝ e^{−|z|}.
²The Student's t-distribution with 3 degrees of freedom can be efficiently sampled by choosing X, Y1, Y2, Y3 ∼ N(0, 1) independently from a standard normal and returning Z = X/√((Y1² + Y2² + Y3²)/3). This distribution has E[Z] = 0 and E[Z²] = 3, and its density is µ(z) ∝ 1/(1 + z²)².

Algorithm 1: Estimating the edge density of a concentrated-degree graph.
Input: A graph G ∈ Gn and parameters ε > 0 and k∗ ≥ 0.
Output: A parameter 0 ≤ p̂ ≤ 1.
Let pG = (1/\binom{n}{2}) Σ_e x_e and d̄G = (n − 1)pG.
Let β = min(ε, 1/√k∗).
Let kG > 0 be the smallest positive integer such that at most kG vertices have degree outside [d̄G − k∗ − 3kG, d̄G + k∗ + 3kG].
For v ∈ V, let t_v = min{|t| : degG(v) ± t ∈ [d̄G − k∗ − 3kG, d̄G + k∗ + 3kG]} and let wtG(v) = max(0, 1 − βt_v).
For each u, v ∈ V, let wtG({u, v}) = min(wtG(u), wtG(v)) and let valG(e) = wtG(e) · x_e + (1 − wtG(e))pG.
Let f(G) = Σ_{u≠v} valG({u, v}), where the sum is over unordered pairs of vertices.
Let

s = max_{ℓ∈L} 210 · e^{−βℓ} · (kG + ℓ + k∗ + β(kG + ℓ)(kG + ℓ + k∗) + 1/β),

where L = {0, ⌊1/β − kG − k∗⌋, ⌈1/β − kG − k∗⌉}.
Return (1/\binom{n}{2}) · (f(G) + (s/ε) · Z), where Z is sampled from a Student's t-distribution with three degrees of freedom.

Intuitively, k∗ should be an upper bound on the concentration parameter of the graph, although we obtain more general results when k∗ is not an upper bound, in case the user does not have an a priori
upper bound on this quantity.

For a graph G = (V, E), let pG = |E|/\binom{n}{2} be the empirical edge density of G, and let d̄G = (n − 1)pG be the empirical average degree of G. Let kG be the smallest positive integer value such that at most kG vertices of G have degree differing from d̄G by more than k′G := k∗ + 3kG. Define IG = [d̄G − k′G, d̄G + k′G]. For each vertex v ∈ V, let t_v = min{|t| : degG(v) ± t ∈ IG} be the distance between degG(v) and the interval IG, and define the weight wtG(v) of v as follows. For a parameter β > 0 to be specified later, let

wtG(v) =
  1          if t_v = 0,
  1 − βt_v   if t_v ∈ (0, 1/β],
  0          otherwise.

That is, wtG(v) = max(0, 1 − βt_v). For each pair of vertices e = {u, v}, define the weight wtG(e) and value valG(e) as follows. Let

wtG(e) = min(wtG(u), wtG(v))  and  valG(e) = wtG(e) · x_e + (1 − wtG(e)) · pG,

where x_e denotes the indicator variable on whether e ∈ E. Define the function f(G) = Σ_{u,v∈V} valG({u, v}) to be the total value of all pairs of vertices in the graph, where the sum is over unordered pairs of distinct vertices.

Once we construct this function f, we add noise to f proportional to a β-smooth upper bound on the sensitivity of f, which we derive in this section. Pseudocode for our estimator is given in Algorithm 1.

3.2 Analysis Using Smooth Sensitivity

We begin by bounding the local sensitivity LSf(G) of the function f defined above.

Lemma 3.1. For β = Ω(1/n), we have that LSf(G) = O((kG + k∗)(1 + βkG) + 1/β). In particular, for β ∈ [1/n, 1], we have LSf(G) < 210((kG + k∗)(1 + βkG) + 1/β).

Proof.
Consider any pair of graphs G, G′ differing in only a single vertex v∗, and note that the empirical edge densities pG and pG′ can differ by at most 2/n < 2/(n − 1), so d̄G and d̄G′ can differ by at most 2. Moreover, for any vertex v ≠ v∗, the degree of v can differ by at most 1 between G and G′. Consequently, by the triangle inequality, for any v ≠ v∗, |d̄G − degG(v)| can differ from |d̄G′ − degG′(v)| by at most 3 and |kG − kG′| ≤ 1, so wtG(v) can differ from wtG′(v) by at most 6β.

Let FarG denote the set of at most kG vertices whose degree differs from d̄G by more than k′G = k∗ + 3kG. For any vertices u, v ∉ FarG ∪ FarG′ ∪ {v∗}, we have wtG({u, v}) = wtG′({u, v}) = 1, so valG({u, v}) = valG′({u, v}), since the edge {u, v} appears in G if and only if it appears in G′.

Now consider edges {u, v} such that u, v ≠ v∗ but u ∈ FarG ∪ FarG′ (and v may or may not be as well). If degG(u) ∉ [d̄G − k″G, d̄G + k″G] for k″G = k′G + 1/β + 3, then wtG(u) = wtG′(u) = 0 and so |valG({u, v}) − valG′({u, v})| = |pG − pG′| ≤ 2/n.
Otherwise, degG(u) ∈ [d̄G − k″G, d̄G + k″G]. We can break up the sum

fu(G) := Σ_{v≠u} valG({u, v}) = Σ_{v≠u} wtG({u, v}) · x_{u,v} + Σ_{v≠u} (1 − wtG({u, v}))pG.

Since at most kG other vertices can have weight less than that of u, we can bound the first term by

Σ_{v≠u} wtG(u)x_{u,v} ± kG·wtG(u) = degG(u)wtG(u) ± kG·wtG(u)

and the second term by

pG · ((n − 1) − Σ_{v≠u} wtG({u, v})) = d̄G − d̄G·wtG(u) ± pG·kG·wtG(u),

so the total sum is bounded by fu(G) = d̄G + (degG(u) − d̄G)wtG(u) ± 2kG·wtG(u). Since |wtG(u) − wtG′(u)| ≤ 6β, it follows that

|fu(G) − fu(G′)| ≤ 7 + 6β(k″G + 3) + 9β + 6βkG = 13 + 45β + 6β(k∗ + 4kG) = O(1 + β(kG + k∗)).

Since there are at most kG + kG′ ≤ 2kG + 1 vertices u ∈ FarG ∪ FarG′ \ {v∗}, the total difference in the terms of f(G) and f(G′) corresponding to such vertices is at most 2kG + 1 times this, which is O(kG + βkG(kG + k∗)). However, we are double-counting any edges between two vertices in FarG ∪ FarG′; the number of such edges is at most 2k²G + kG = O(k²G), and for any such edge e, |valG(e) − valG′(e)| ≤ 12β + 2/n = O(β + 1/n).
Consequently the error induced by this double-counting is at most (2k²G + kG)(12β + 2/n), which is O(βk²G + k²G/n), so the total difference between the terms of f(G) and f(G′) corresponding to such vertices is at most

13 + 26kG + 45β + 126βkG + 6βk∗ + 12βk∗kG + 72βk²G + 6k²G/n,

which is still O(kG + βkG(kG + k∗)) for β = Ω(1/n).

Finally, consider the edges {u, v∗} involving vertex v∗. If wtG(v∗) = 0 then

fv∗(G) = Σ_{v≠v∗} valG({v∗, v}) = (n − 1)pG = d̄G.

If wtG(v∗) = 1 then degG(v∗) ∈ [d̄G − k′G, d̄G + k′G], so

fv∗(G) = Σ_{v≠v∗} valG({v∗, v}) = degG(v∗) ± kG = d̄G ± k′G ± kG.

Otherwise, degG(v∗) ∈ [d̄G − k′G − 1/β, d̄G + k′G + 1/β]. Then we have that

fv∗(G) = Σ_{v≠v∗} valG({v∗, v}) = d̄G + (degG(v∗) − d̄G)wtG(v∗) ± kG·wtG(v∗) = d̄G ± (degG(v∗) − d̄G) ± kG,

so in either case we have that fv∗(G) ∈ [d̄G − (k′G + kG + 1/β), d̄G + (k′G + kG + 1/β)]. Consequently

|fv∗(G) − fv∗(G′)| ≤ 3 + 8kG + 2k∗ + 2/β = O(kG + k∗ + 1/β).

Putting everything together, we have that

LSf(G) ≤ 16 + 34kG + 2k∗ + 45β + 126βkG + 6βk∗ + 12βk∗kG + 72βk²G + 6k²G/n + 2/β,

which is O((kG + k∗)(1 + βkG) + 1/β) for β = Ω(1/n). In particular, for β ∈ [1/n, 1], we have that LSf(G) ≤ 210((kG + k∗)(1 + βkG) + 1/β).

We now compute a smooth upper bound on LSf(G).
Let

g(kG, k∗, β) = 210((kG + k∗)(1 + βkG) + 1/β)

be the upper bound on LSf(G) from Lemma 3.1, and let

S(G) = max_{ℓ≥0} e^{−ℓβ} g(kG + ℓ, k∗, β).

Lemma 3.2. S(G) is a β-smooth upper bound on the local sensitivity of f. Moreover, we have the bound S(G) = O((kG + k∗)(1 + βkG) + 1/β).

Proof. For neighboring graphs G, G′, we have that

S(G′) = max_{ℓ≥0} e^{−ℓβ} g(kG′ + ℓ, k∗, β)
      ≤ max_{ℓ≥0} e^{−ℓβ} g(kG + ℓ + 1, k∗, β)
      = e^β max_{ℓ≥1} e^{−ℓβ} g(kG + ℓ, k∗, β)
      ≤ e^β max_{ℓ≥0} e^{−ℓβ} g(kG + ℓ, k∗, β)
      = e^β S(G).

Moreover, for fixed kG, k∗, β, consider the function h(ℓ) = e^{−ℓβ} g(kG + ℓ, k∗, β), and consider the derivative h′(ℓ). We have that h′(ℓ) = 210 · βe^{−ℓβ}(kG + ℓ)(1 − β(kG + ℓ + k∗)). Consequently the only possible local maximum for ℓ > 0 would occur for ℓ = 1/β − kG − k∗; note that the function h decreases as ℓ → ∞. Consequently the maximum value of h occurs for some ℓ ≤ 1/β, and so we can show by calculation that S(G) < 630 · ((kG + k∗)(1 + βkG) + 1/β) as desired.

Remark. Note that S(G) can be computed efficiently, since ℓ can be restricted to the nonnegative integers and so the only candidate values for ℓ are 0, ⌊1/β − kG − k∗⌋, and ⌈1/β − kG − k∗⌉.

Theorem 3.3. Algorithm 1 is (O(ε), 0)-differentially private for ε ≥ 1/n.
Moreover, for any k-concentrated n-vertex graph G = (V, E) with k ≥ 1, we have that Algorithm 1 satisfies

E_A[(|E|/\binom{n}{2} − A_{ε,k}(G))²] = O(k²/(ε²n⁴) + 1/(ε⁴n⁴)).

Proof. Algorithm 1 computes the function f and releases it with noise proportional to a β-smooth upper bound on the local sensitivity for β ≤ ε. Consequently (O(ε), 0)-differential privacy follows immediately from Theorem 2.6.

We now analyze its accuracy on k-concentrated graphs G. If G is k-concentrated and k∗ ≥ k, then wtG(v) = 1 for all vertices v ∈ V and valG({u, v}) = x_{u,v} for all u, v ∈ V, and so f(G) = |E|. Consequently Algorithm 1 computes the edge density of a k-concentrated graph with noise distributed according to the Student's t-distribution scaled by a factor of S(G)/(ε\binom{n}{2}).

Since G is k-concentrated, we also have that kG = 1, and so S(G) = O(k + β(k + 1) + 1/β) ≤ O(k + 1/ε) by Lemma 3.2.
The variance of the Student's t-distribution with three degrees of freedom is O(1), so the expected squared error of the algorithm is

O((k + 1/ε)²/(ε²n⁴)) = O(k²/(ε²n⁴) + 1/(ε⁴n⁴)),

as desired.

4 Application to Erdős-Rényi Graphs

In this section we show how to apply Algorithm 1 to estimate the parameter of an Erdős-Rényi graph.

Algorithm 2: Estimating the parameter of an Erdős-Rényi graph.
Input: A graph G ∈ Gn and parameters ε, α > 0.
Output: A parameter 0 ≤ p̂ ≤ 1.
Let p̃′ ← (1/\binom{n}{2}) Σ_e x_e + (2/εn) · Z, where Z is a standard Laplace.
Let p̃ ← p̃′ + 4 log(1/α)/εn and k̃ ← √(p̃ n log(n/α)).
Return p̂ ← A_{k̃,ε}(G), where A_{k̃,ε} is Algorithm 1 with parameters k̃ and ε.

It is straightforward to prove that this mechanism satisfies differential privacy.

Theorem 4.1. Algorithm 2 satisfies (O(ε), 0)-node-differential privacy for ε ≥ 1/n.

Proof. The first line computes the empirical edge density of the graph G, which is a function with global sensitivity (n − 1)/\binom{n}{2} = 2/n. Therefore by Theorem 2.4 this step satisfies (ε, 0)-differential privacy. The third line runs an algorithm that satisfies (O(ε), 0)-differential privacy for every fixed parameter k̃. By Lemma 2.2, the composition satisfies (O(ε), 0)-differential privacy.

Next, we argue that this algorithm satisfies the desired accuracy guarantee.

Theorem 4.2.
For every n ∈ N and 1/2 ≥ p ≥ 0, and an appropriate parameter α > 0, Algorithm 2 satisfies

E_{G∼G(n,p),A}[(p − A(G))²] = p(1 − p)/\binom{n}{2} + Õ(max{p, 1/n}/(ε²n³) + 1/(ε⁴n⁴)).

Proof. We will prove the result in the case where p ≥ (log n)/n. The case where p is smaller will follow immediately by using (log n)/n as an upper bound on p. The first term in the bound is simply the variance of the empirical edge-density p̄. For the remainder of the proof we will focus on bounding E[(p̄ − p̂)²].

A basic fact about G(n, p) for p ≥ (log n)/n is that with probability at least 1 − 2α: (1) |p̄ − p| ≤ 2 log(1/α)/n, and (2) the degree of every node i lies in the interval [d̄ ± √(pn log(n/α))], where d̄ is the average degree of G. We will assume for the remainder that these events hold.

Using Theorem 2.4, we also have that with probability at least 1 − α, the estimate p̃′ satisfies |p̄ − p̃′| ≤ 4 log(1/α)/εn. We will also assume for the remainder that this latter event holds. Therefore, we have p ≤ p̃ and p ≥ p̃ − 8 log(1/α)/εn.

Assuming this condition holds, the graph will have k̃-concentrated degrees for k̃ as specified on line 2 of the algorithm.
Since this assumption holds, we have

E[(p̄ − A_{k̃,ε}(G))²] = Õ(k̃²/(ε²n⁴) + 1/(ε⁴n⁴)) = Õ((pn + 1/ε)/(ε²n⁴) + 1/(ε⁴n⁴)) = Õ(pn/(ε²n⁴) + 1/(ε⁴n⁴)).

To complete the proof, we can plug in a suitably small α = 1/poly(n) so that the O(α) probability of failure will not affect the overall mean-squared error in a significant way.

5 Lower Bounds for Concentrated-Degree Graphs

In this section we prove a lower bound for estimating the number of edges in concentrated-degree graphs. Theorem 5.1, which lower bounds the mean squared error, follows from Jensen's Inequality.

Theorem 5.1. For every n, k ∈ N, every ε ∈ [2/n, 1/4] and δ ≤ ε/32, and every (ε, δ)-node-DP algorithm A, there exists G ∈ Gn,k such that E_A[|pG − A(G)|] = Ω(k/(εn²) + 1/(ε²n²)).

The proof relies only on the following standard fact about differentially private algorithms.

Lemma 5.2. Suppose there are two graphs G0, G1 ∈ Gn,k at node distance at most 1/ε from one another. Then for every (ε, ε/32)-node-DP algorithm A, there exists b ∈ {0, 1} such that E_A[|p_{Gb} − A(Gb)|] = Ω(|p_{G0} − p_{G1}|).

We will construct two simple pairs of graphs to which we can apply Lemma 5.2.

Lemma 5.3 (Lower bound for large k). For every n, k ∈ N and ε ≥ 2/n, there is a pair of graphs G0, G1 ∈ Gn,k at node distance 1/ε such that |p_{G0} − p_{G1}| = Ω(k/(εn²)).

Proof. Let G0 be the empty graph on n nodes. Note that p_{G0} = 0, d̄_{G0} = 0, and G0 is in Gn,k. We construct G1 as follows. Start with the empty bipartite graph with 1/ε nodes on the left and n − 1/ε nodes on the right.
We connect the first node on the left to each of the first k nodes on the right, then the second node on the left to each of the next k nodes on the right, and so on, wrapping around to the first node on the right when we run out of nodes. By construction, p_{G₁} = (k/ε)/(n choose 2) and d̄_{G₁} = 2k/εn. Moreover, each of the first 1/ε nodes has degree exactly k, and each of the nodes on the right has degree (k/ε)/(n − 1/ε) ± 1 = k/(εn − 1) ± 1. Thus, for n larger than some absolute constant, every degree lies in the interval [d̄_{G₁} ± k], so we have G₁ ∈ G_{n,k}.

Lemma 5.4 (Lower bound for small k). For every n ≥ 4 and ε ∈ [2/n, 1/4], there is a pair of graphs G₀, G₁ ∈ G_{n,1} at node distance 1/ε such that |p_{G₀} − p_{G₁}| = Ω( 1/(ε²n²) ).

Proof. Let i = ⌈nε⌉, and let G₀ be the graph consisting of i disjoint cliques each of size ⌊n/i⌋ or ⌈n/i⌉. Let G₁ be the graph consisting of i + 1 disjoint cliques each of size ⌊n/(i + 1)⌋ or ⌈n/(i + 1)⌉. We can obtain G₀ from G₁ by taking one of the cliques and redistributing its vertices among the i remaining cliques, so G₀ and G₁ have node distance ℓ := ⌊n/(i + 1)⌋ ≤ 1/ε. For 1/4 ≥ ε ≥ 2/n we have that ℓ ≥ ⌊1/2ε⌋ > 1/4ε. Transforming G₁ into G₀ involves removing a clique of size ℓ, containing (ℓ choose 2) edges, and then inserting these ℓ vertices into cliques that already have size at least ℓ, adding at least ℓ² new edges. Consequently G₀ contains at least ℓ² − ℓ(ℓ − 1)/2 = ℓ(ℓ + 1)/2 more edges than G₁, so

    |p_{G₀} − p_{G₁}| ≥ (ℓ+1 choose 2)/(n choose 2) ≥ ℓ²/n² ≥ Ω( 1/(ε²n²) ),

as desired.

Theorem 5.1 now follows by combining Lemmas 5.2, 5.3, and 5.4.

Acknowledgments

Part of this work was done while the authors were visiting the Simons Institute for the Theory of Computing. AS is supported by NSF MACS CNS-1413920, DARPA/NJIT Palisade 491512803, Sloan/NJIT 996698, and MIT/IBM W1771646. JU is supported by NSF grants CCF-1718088, CCF-1750640, and CNS-1816028. The authors are grateful to Adam Smith for helpful discussions.

References

[1] J. Blocki, A. Blum, A. Datta, and O. Sheffet. The Johnson-Lindenstrauss transform itself preserves differential privacy. In 53rd IEEE Symposium on Foundations of Computer Science, FOCS '12, pages 410–419, New Brunswick, NJ, USA, 2012.

[2] J. Blocki, A. Blum, A. Datta, and O. Sheffet. Differentially private data analysis of social networks via restricted sensitivity. In 4th ACM Conference on Innovations in Theoretical Computer Science, ITCS '13, pages 87–96, Berkeley, CA, USA, 2013. ACM.

[3] C. Borgs, J. T. Chayes, A. D. Smith, and I. Zadik. Revealing network structure, confidentially: Improved rates for node-private graphon estimation. In 59th Annual IEEE Symposium on Foundations of Computer Science, FOCS '18, pages 533–543, Paris, France, 2018.

[4] M. Bun and T. Steinke. Smooth sensitivity, revisited. Manuscript, 2019.

[5] M. Bun, J. Ullman, and S. Vadhan. Fingerprinting codes and the price of approximate differential privacy.
In 46th Annual ACM Symposium on the Theory of Computing, STOC '14, pages 1–10, New York, NY, USA, 2014.

[6] C. L. Canonne, G. Kamath, A. McMillan, J. Ullman, and L. Zakynthinou. Private identity testing for high dimensional distributions. arXiv preprint arXiv:1905.11947, 2019.

[7] R. Cummings and D. Durfee. Individual sensitivity preprocessing for data privacy. arXiv preprint arXiv:1804.08645, 2018.

[8] I. Dinur and K. Nissim. Revealing information while preserving privacy. In Proceedings of the 22nd ACM Symposium on Principles of Database Systems, PODS '03, pages 202–210. ACM, 2003.

[9] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Proceedings of the 3rd Conference on Theory of Cryptography, TCC '06, pages 265–284, Berlin, Heidelberg, 2006. Springer.

[10] C. Dwork, F. McSherry, and K. Talwar. The price of privacy and the limits of LP decoding. In Proceedings of the 39th Annual ACM Symposium on Theory of Computing, pages 85–94. ACM, 2007.

[11] C. Dwork, A. Smith, T. Steinke, J. Ullman, and S. Vadhan. Robust traceability from trace amounts. In 56th Annual IEEE Symposium on Foundations of Computer Science, FOCS '15, pages 650–669, Berkeley, CA, 2015.

[12] C. Dwork and S. Yekhanin. New efficient attacks on statistical disclosure control mechanisms. In Annual International Cryptology Conference, pages 469–480. Springer, 2008.

[13] A. Gupta, A. Roth, and J. Ullman. Iterative constructions and private data release. In 9th IACR Theory of Cryptography Conference, TCC '12, pages 339–356, Taormina, Italy, 2012. Springer.

[14] M. Hay, C. Li, G. Miklau, and D. D. Jensen. Accurate estimation of the degree distribution of private networks. In Proceedings of the 9th IEEE International Conference on Data Mining, ICDM '09, pages 169–178, Miami, FL, USA, 2009.

[15] N. Homer, S.
Szelinger, M. Redman, D. Duggan, W. Tembe, J. Muehling, J. V. Pearson, D. A. Stephan, S. F. Nelson, and D. W. Craig. Resolving individuals contributing trace amounts of DNA to highly complex mixtures using high-density SNP genotyping microarrays. PLoS Genetics, 4(8):e1000167, 2008.

[16] V. Karwa, S. Raskhodnikova, A. D. Smith, and G. Yaroslavtsev. Private analysis of graph structure. ACM Transactions on Database Systems, 39(3):22:1–22:33, 2014.

[17] V. Karwa and A. Slavković. Inference using noisy degrees: Differentially private β-model and synthetic graphs. Annals of Statistics, 44(1):87–112, 2016.

[18] S. P. Kasiviswanathan, K. Nissim, S. Raskhodnikova, and A. D. Smith. Analyzing graphs with node differential privacy. In 10th IACR Theory of Cryptography Conference, TCC '13, pages 457–476, Tokyo, Japan, 2013. Springer.

[19] S. P. Kasiviswanathan, M. Rudelson, A. Smith, and J. Ullman. The price of privately releasing contingency tables and the spectra of random matrices with correlated rows. In Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC '10, pages 775–784. ACM, 2010.

[20] K. Nissim, S. Raskhodnikova, and A. Smith. Smooth sensitivity and sampling in private data analysis. In Proceedings of the 39th Annual ACM Symposium on Theory of Computing, STOC '07, pages 75–84, 2007.

[21] S. Raskhodnikova and A. D. Smith. Lipschitz extensions for node-private graph statistics and the generalized exponential mechanism. In 57th Annual IEEE Symposium on Foundations of Computer Science, FOCS '16, pages 495–504, New Brunswick, NJ, USA, 2016.

[22] Q. Xiao, R. Chen, and K.-L. Tan. Differentially private network data release via structural inference.
In 20th ACM International Conference on Knowledge Discovery and Data Mining, KDD '14, pages 911–920, 2014.