Overview of Empirical Findings

We examine the trade-offs between the accuracy and performance of various classes of differentially private graph analysis algorithms by benchmarking them on real-world datasets. Our evaluation demonstrates that the optimal choice of algorithm is dependent on the properties of the underlying dataset as well as the performance and privacy requirements of a given user.

Algorithms:

Q: Is there one algorithm that outperforms the rest, across various input settings?

A: No single algorithm dominates all others.

More details

We consider an algorithm unusable if its relative L1 error exceeds 100%, since naively returning an all-zero array already achieves an error of exactly 1. For degree distribution queries, edgeAddCum is the best algorithm under node-DP: it reports the lowest L1 error across datasets and, among the usable algorithms, has the best runtime performance. Under edge-DP, the errors of all algorithms are lower than under node-DP; here, degHis is best in terms of both utility and performance.
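The usability threshold above can be made concrete with a small sketch. This is a minimal illustration, not code from the benchmark: the function name `relative_l1_error` is our own, and the metric is inferred from the text (L1 distance to the true answer, normalized by the L1 norm of the true answer, so the all-zero baseline scores exactly 1).

```python
import numpy as np

def relative_l1_error(true_hist, noisy_hist):
    """Normalized L1 error: ||true - noisy||_1 / ||true||_1.

    A naive algorithm that returns an all-zero array scores exactly 1
    (i.e., 100% error), which motivates the usability cutoff.
    """
    true_hist = np.asarray(true_hist, dtype=float)
    noisy_hist = np.asarray(noisy_hist, dtype=float)
    return np.abs(true_hist - noisy_hist).sum() / np.abs(true_hist).sum()

# The zero-array baseline on a toy degree histogram scores exactly 1.0:
true_hist = np.array([5, 3, 2, 0, 1])
print(relative_l1_error(true_hist, np.zeros_like(true_hist)))  # 1.0
```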

For subgraph counting queries under edge-DP, the best algorithm depends on the query type. For kstar counting, Smooth Sensitivity reports the lowest error and is also much more computationally efficient than Ladder. For the other queries, Ladder reports the lowest error. At larger epsilon values, the simple Laplace mechanism may become useful for certain graphs: for example, it beats the utility baseline for all epsilon >= 0.5 on FACEBOOK, so a performance-conscious analyst may choose it if their dataset is similar to FACEBOOK.
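For reference, the Laplace mechanism mentioned above is the textbook construction: perturb the true count with Laplace noise scaled to the query's global sensitivity divided by epsilon. The sketch below is a generic illustration (the function name and parameters are ours), not the benchmark's implementation.

```python
import numpy as np

def laplace_subgraph_count(true_count, global_sensitivity, epsilon, rng=None):
    """Laplace mechanism: release true_count + Lap(global_sensitivity / epsilon).

    Larger epsilon means a smaller noise scale, which is why the mechanism
    only becomes competitive at larger privacy budgets.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = global_sensitivity / epsilon
    return true_count + rng.laplace(0.0, scale)

# Example: a true count of 100 with sensitivity 5 at epsilon = 0.5
rng = np.random.default_rng(42)
print(laplace_subgraph_count(100.0, 5.0, 0.5, rng))
```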


Dataset:

Q: Do graph-DP algorithms’ performances vary on graph datasets with different shapes?

A: A graph’s shape impacts an algorithm's performance.

More details

For degree distribution queries, the error decreases as the size of the graph increases; the exceptions are the simpler algorithms, such as degHis and degSeq. For subgraph counting queries, the Laplace mechanism's error increases with the number of nodes in the graph. This is because the Laplace mechanism achieves differential privacy by adding noise proportional to the query's global sensitivity, and for subgraph counting queries the global sensitivity grows with the number of nodes.
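A concrete instance of this scaling: under edge-DP, adding or removing a single edge can change the triangle count by up to n - 2 (one triangle per remaining node), so the Laplace noise scale grows linearly in n. The sketch below illustrates this; triangle counting is used as the example query, and the function name is ours.

```python
def triangle_gs_edge_dp(n):
    """Global sensitivity of triangle counting under edge-DP.

    Changing one edge (u, v) can create or destroy a triangle with each of
    the other n - 2 nodes, so the count can shift by up to n - 2.
    """
    return max(n - 2, 0)

# Noise scale (sensitivity / epsilon) at epsilon = 0.5 grows with graph size:
for n in (100, 1000, 10000):
    print(n, triangle_gs_edge_dp(n) / 0.5)
```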

For the Ladder and Smooth Sensitivity algorithms, the shape of the graph dataset does impact performance, but exactly which properties of the graph have the most impact remains an open question.


Query:

Q: Do graph-DP algorithms’ performances vary on different subgraph counting queries?

A: An algorithm may provide more accurate answers for some subgraph counting queries than others.

More details

Intuitively, 4-kclique and 2-ktriangle are the "harder" queries to answer. Within the pairs {triangle, 3-kstar} and {4-kclique, 2-ktriangle}, there is no clear ordering: for some combinations of algorithms one query yields more accurate results than the other, and vice versa. Across the pairs, here is the observed ranking of errors:

Err(Triangle) & Err(3-kstar) < Err(2-ktriangle) & Err(4-kclique)


Epsilon:

Q: Do graph-DP algorithms' performances vary on different epsilon values?

A: As expected, algorithms' performance improves as epsilon increases.

More details

As epsilon increases, the error of every algorithm should decrease, since less noise is added to perturb the true answer. We note that degSeq*'s and degHis*'s errors do not appear to decrease under node-DP; this is likely due to the large query sensitivity, and hence the large amount of noise, these algorithms require.
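The expected decrease follows directly from the Laplace distribution: the expected absolute noise E[|Lap(b)|] equals the scale b = sensitivity / epsilon, so doubling epsilon halves the expected error. A minimal sketch (function name is ours):

```python
def expected_abs_laplace_noise(sensitivity, epsilon):
    """Expected absolute value of Laplace noise.

    For Lap(b) with b = sensitivity / epsilon, E[|noise|] = b, so expected
    error is inversely proportional to epsilon.
    """
    return sensitivity / epsilon

# Expected error shrinks as the privacy budget grows (sensitivity fixed at 1):
for eps in (0.1, 0.5, 1.0, 2.0):
    print(eps, expected_abs_laplace_noise(1.0, eps))
```

When an algorithm's sensitivity is very large, as with degSeq* and degHis* under node-DP, even this 1/epsilon decay may be invisible at the epsilon ranges tested.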