Abstract:
|
Deep neural networks have been widely adopted in verticals such as healthcare, banking, and e-commerce to address business challenges. With the General Data Protection Regulation (GDPR) and other privacy regulations, there is a growing trend toward protecting user privacy. Consequently, designing deep neural network systems that protect user privacy has come into the spotlight in recent years. One major challenge in developing such privacy-preserving machine learning (PPML) systems is evaluating their privacy risks. Recent work such as differentially private SGD (DP-SGD) successfully addresses the estimation of a theoretical upper bound on privacy risk. Nevertheless, threat-model-specific understanding of PPML designs is still not well studied. In this work, we propose a set of methods to gauge the privacy risks of deep neural networks under various threat models. We compare our privacy assessment with the provable DP-SGD upper bound to demonstrate the gap between the theoretical bound and threat-model-specific risk. We hope our work provides practical guidance on choosing reasonable strategies for privacy enhancement and risk measurement.
|