Abstract:
|
Deep learning models are often trained on datasets that contain sensitive information such as individuals' shopping transactions, personal contacts, and medical records. An increasingly important line of work has therefore sought to train neural networks subject to privacy constraints specified by differential privacy or its divergence-based relaxations. These privacy definitions, however, have weaknesses in handling certain important primitives (composition and subsampling), thereby yielding loose or complicated privacy analyses of training neural networks. In this paper, we consider a recently proposed privacy definition termed f-differential privacy for a refined privacy analysis of training neural networks. Leveraging the appealing properties of f-differential privacy in handling composition and subsampling, we derive analytically tractable expressions for the privacy guarantees of both stochastic gradient descent and Adam as used in training deep neural networks. Our results demonstrate that the f-differential privacy framework improves on the prior analysis, allowing for better prediction accuracy without violating the privacy budget; these improvements are confirmed by our experiments on a range of tasks in image classification, text classification, and recommender systems.
|