Activity Number: 136 - Development of Indicators: Prediction vs. Inference
Type: Contributed
Date/Time: Monday, July 30, 2018, 8:30 AM to 10:20 AM
Sponsor: Social Statistics Section
Abstract #329716
Presentation Title: Using Neural Generative Models to Release Synthetic Twitter Corpora with Reduced Stylometric Identifiability of Users
Author(s): Joshua Snoke* and Alexander Ororbia and Fridolin Linder
Companies: and Pennsylvania State University and Pennsylvania State University
Keywords: Privacy; Twitter; Ethics; Neural; Text; Stylometry
Abstract:
We present a method for generating synthetic versions of Twitter data using neural generative models. The goal is to protect individuals in the source data from stylometric re-identification attacks while releasing data that retains research value. To generate tweet corpora that maintain user-level word distributions, our approach augments powerful neural language models with local parameters that weight user-specific inputs. We compare our work against two standard text data protection methods: redaction and iterative translation. We evaluate the methods on risk and utility, defining risk in terms of stylometric models of re-identification and utility in terms of two general language measures and two common text analysis tasks. We find that neural models significantly lower risk relative to the previous methods, at the cost of some utility. More importantly, we show that the risk-utility trade-off depends on how the neural model's logits (the unscaled pre-activation values of the output layer) are scaled. This work presents promising results for a new tool that addresses the problem of privacy for free text and supports sharing social media data in an ethically responsible way.
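The logit scaling referred to above amounts to temperature scaling at generation time. The following Python sketch is only an illustration of that general mechanism, not the authors' implementation; the function name, the example logits, and the stated intuition about risk versus utility are assumptions for the sake of the example.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a vocabulary index after dividing the logits by a temperature.

    Intuitively, a lower temperature sharpens the distribution, so the
    generated text tracks the source style more closely (plausibly higher
    stylometric risk); a higher temperature flattens it, trading some
    utility for less identifiable text. This directionality is an
    illustrative assumption, not a result quoted from the abstract.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()            # softmax over the scaled logits
    return rng.choice(len(probs), p=probs)

# Example: the same (hypothetical) logits sampled at two temperatures.
logits = [2.0, 1.0, 0.2, -1.0]
sharper = sample_next_token(logits, temperature=0.5)
flatter = sample_next_token(logits, temperature=2.0)
```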
Authors who are presenting talks have a * after their name.