Abstract:
|
Perhaps the single most important use case of differential privacy is privately answering numerical queries, which is usually achieved by adding noise to the answer vector. The central question is to understand what noise distribution optimizes the privacy-accuracy trade-off, especially when the dimension is high. Extensive research has been done, and the upper and lower bounds have been matched up to constant factors. In this paper we take a novel approach that partially answers the optimality question, constants included. We first demonstrate an intriguing central limit phenomenon: when the dimension is high, a mechanism is approximately \emph{Gaussian Differentially Private} if the added noise satisfies certain conditions. In particular, densities proportional to $e^{-\|x\|_p^\alpha}$ satisfy these conditions. Taking this perspective, we make use of the Cram\'er--Rao inequality and prove a result with an ``uncertainty principle'' flavor: the product of the privacy parameter and the $\ell_2$-loss of the mechanism is lower bounded by the dimension. Furthermore, Gaussian noise achieves the optimal privacy-accuracy trade-off, including the constant. Our findings are corroborated by numerical experiments.
|