Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning: https://arxiv.org/abs/1702.07464
Deep learning has recently become hugely popular in machine learning, delivering significant improvements in classification accuracy on large, highly structured datasets. Models are typically trained in a centralized manner, with all the data processed by the same training algorithm. If the data is a collection of users' private information, including habits, personal pictures, geographical positions, and interests, the centralized server gains access to sensitive information that could potentially be mishandled. To tackle this problem, collaborative deep learning models have been proposed in which parties train their deep learning structures locally and share only a subset of the parameters, in an attempt to keep their respective training sets private. Parameters can also be obfuscated via differential privacy to make information extraction even more challenging, as proposed by Shokri and Shmatikov at CCS'15.

Unfortunately, we show that any such privacy-preserving collaborative deep learning is susceptible to a powerful attack devised in this paper. In particular, we show that a distributed, federated, or decentralized deep learning approach is fundamentally broken and does not protect the training sets of honest participants. Our attack exploits the real-time nature of the learning process, which allows the adversary to train a Generative Adversarial Network (GAN) that generates prototypical samples of the targeted training set that was meant to be private. Interestingly, differential privacy applied to the shared parameters of the model, as suggested at CCS'15 and CCS'16, is utterly futile: under our generative-model attack, every technique adopted to scramble or obfuscate shared parameters in collaborative deep learning is rendered ineffective, with no possible remedy under the threat model considered.
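The pipeline the abstract describes can be sketched end to end: an honest participant uploads only clipped, noise-obfuscated gradients (a CCS'15-style defense), yet an adversary who downloads the resulting shared model can still synthesize inputs representative of the victim's private class. The sketch below is a minimal toy, not the authors' implementation: it substitutes logistic regression for the deep network and plain gradient ascent for the paper's GAN generator, and every name and hyperparameter (`SECRET_PROTOTYPE`, `dp_obfuscate`, `clip`, `noise_scale`, the learning rates) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "private" training set: class 1 clusters around a secret prototype
# that the honest participant wants to keep hidden. (Illustrative data.)
SECRET_PROTOTYPE = np.array([2.0, -1.0])
X = np.vstack([rng.normal(SECRET_PROTOTYPE, 0.3, size=(100, 2)),
               rng.normal(-SECRET_PROTOTYPE, 0.3, size=(100, 2))])
y = np.concatenate([np.ones(100), np.zeros(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Shared model: logistic regression as a stand-in for a deep network.
w = np.zeros(2)
b = 0.0

def dp_obfuscate(grad, clip=1.0, noise_scale=0.1):
    """Clip the gradient and add Laplace noise before sharing it
    (CCS'15-style obfuscation; clip/noise_scale values are assumptions)."""
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)
    return grad + rng.laplace(0.0, noise_scale, size=grad.shape)

# Honest participant: train locally, upload only obfuscated gradients.
lr = 0.5
for _ in range(200):
    p = sigmoid(X @ w + b)
    grad_w = X.T @ (p - y) / len(y)
    grad_b = float(np.mean(p - y))
    w -= lr * dp_obfuscate(grad_w)
    b -= lr * (grad_b + float(rng.laplace(0.0, 0.1)))

# Adversary: download the shared (noised) model and synthesize an input it
# confidently labels as the victim's class. This gradient-ascent loop is a
# simplified stand-in for the paper's GAN generator, which plays the same
# game using the shared model as its discriminator.
z = rng.normal(0.0, 1.0, size=2)
for _ in range(500):
    p = sigmoid(z @ w + b)
    z += 0.1 * (1.0 - p) * w  # ascend the class-1 probability
```

The point of the sketch is the abstract's point: the noise is applied to what is *shared*, but the attack consumes the shared model itself, so obfuscating individual parameter updates does not hide what the jointly trained model has learned about the private class.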