LOGAN: Membership Inference Attacks Against Generative Models

Abstract

Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator’s capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters, and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performance of the generative models in terms of training stability and/or sample quality.
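
The white-box setting described above admits a compact illustration: an adversary with access to the target GAN's trained discriminator can score every candidate record and predict the highest-scoring ones as training members, exploiting an overfit discriminator's tendency to assign larger "real" probabilities to records it saw during training. The sketch below is a minimal, hypothetical Python rendering of this idea; the `discriminator` interface, data shapes, and toy stand-in are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def whitebox_membership_attack(discriminator, candidates, n_members):
    """Rank candidate records by the discriminator's "real" score.

    discriminator -- callable mapping a batch of records to a "real" score in [0, 1]
                     (hypothetical interface assumed for this sketch).
    candidates    -- array of shape (num_candidates, ...) holding the records to test.
    n_members     -- number of records the adversary believes were used in training.
    Returns a boolean array marking the predicted members.
    """
    # Score each candidate individually; an overfit discriminator tends to give
    # higher scores to records it was trained on.
    scores = np.asarray([float(discriminator(x[None, ...])) for x in candidates])
    # Flag the n_members highest-scoring candidates as predicted training members.
    top = np.argsort(scores)[::-1][:n_members]
    predicted = np.zeros(len(candidates), dtype=bool)
    predicted[top] = True
    return predicted

if __name__ == "__main__":
    # Toy stand-in discriminator and data (assumptions, for demonstration only):
    # it simply scores brighter "images" higher.
    rng = np.random.default_rng(0)
    toy_candidates = rng.random((10, 3, 32, 32))
    toy_discriminator = lambda batch: batch.mean()
    print(whitebox_membership_attack(toy_discriminator, toy_candidates, n_members=3))
```

In the black-box setting mentioned in the abstract, one natural realization is for the adversary to first train a local GAN on samples drawn from the target generator and then apply the same ranking using the local discriminator's scores.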
