SecureNN: 3-Party Secure Computation for Neural Network Training


Abstract

Neural Networks (NN) provide a powerful method for machine learning training and inference. To train effectively, it is desirable for multiple parties to combine their data; however, doing so conflicts with data privacy. In this work, we provide novel three-party secure computation protocols for various NN building blocks such as matrix multiplication, convolutions, Rectified Linear Units, Maxpool, normalization, and so on (the secret sharing these protocols operate over is sketched after the list below). This enables us to construct three-party secure protocols for training and inference of several NN architectures such that no single party learns any information about the data. Experimentally, we implement our system over Amazon EC2 servers in different settings. Our work advances the state-of-the-art of secure computation for neural networks in three ways:

1. Scalability: We are the first work to provide neural network training on Convolutional Neural Networks (CNNs) that achieve an accuracy of > 99% on the MNIST dataset;

2. Performance: For secure inference, our system outperforms prior 2- and 3-server works (SecureML, MiniONN, Chameleon, Gazelle) by 6×–113×, with larger gains obtained on more complex networks. Our total execution times are 2–4× faster than even just the online times of these works. For secure training, compared to the only prior work (SecureML), which considered a much smaller fully connected network, our protocols are 79× and 7× faster than their 2- and 3-server protocols, respectively. In the WAN setting, these improvements are even more dramatic: we obtain an improvement of 553×!

3. Security: Our protocols provide two kinds of security: full security (privacy and correctness) against one semi-honest corruption, and the notion of privacy against one malicious corruption [Araki et al., CCS'16]. All prior works provide only semi-honest security, and ours is the first system to provide any security against malicious adversaries for the secure computation of complex algorithms such as neural network inference and training.
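To make the setting above concrete: the protocols operate on data that is 2-out-of-2 additively secret shared over the ring Z_{2^64} between two of the parties, with the third party assisting. The following minimal Python sketch is our own illustration of that sharing, not the paper's code; it shows sharing, reconstruction, and the fact that additions of shared values are purely local.

```python
# Illustrative sketch of 2-out-of-2 additive secret sharing over Z_{2^64}.
# Not SecureNN's implementation; names and structure are ours.
import secrets

RING = 1 << 64  # shares live in the ring of 64-bit integers


def share(x):
    """Split secret x into shares x0, x1 with x0 + x1 = x (mod 2^64)."""
    x0 = secrets.randbelow(RING)
    x1 = (x - x0) % RING
    return x0, x1


def reconstruct(x0, x1):
    """Recombine the shares; either share alone is uniformly random."""
    return (x0 + x1) % RING


if __name__ == "__main__":
    a0, a1 = share(1234)
    b0, b1 = share(5678)
    # Addition is local: each party simply adds the shares it already holds.
    assert reconstruct((a0 + b0) % RING, (a1 + b1) % RING) == 1234 + 5678
```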

Our gains come from a significant reduction in communication, achieved by eliminating expensive garbled circuits and oblivious transfer protocols.
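One way to see why a third party helps here: with an honest-majority helper, the correlated randomness (Beaver multiplication triples) needed to multiply secret-shared values can simply be dealt out by that party, instead of being generated with oblivious transfer or garbled circuits as in two-party protocols. The sketch below is a rough Python illustration under our own assumptions (semi-honest parties, scalar case; the matrix version replaces each product with a matrix product), not the paper's actual protocol.

```python
# Illustrative Beaver-triple multiplication on additive shares over Z_{2^64},
# with a helper party dealing the triple. Names and structure are ours.
import secrets

RING = 1 << 64


def share(x):
    x0 = secrets.randbelow(RING)
    return x0, (x - x0) % RING


def reconstruct(x0, x1):
    return (x0 + x1) % RING


def helper_deals_triple():
    """The helper samples a, b and secret-shares (a, b, c = a*b); no OT needed."""
    a, b = secrets.randbelow(RING), secrets.randbelow(RING)
    return share(a), share(b), share(a * b % RING)


def beaver_multiply(x_sh, y_sh):
    """The two computing parties obtain shares of x*y from shares of x, y."""
    (a0, a1), (b0, b1), (c0, c1) = helper_deals_triple()
    # The masked values e = x - a and f = y - b are opened; they reveal nothing
    # about x and y because a and b are uniformly random masks.
    e = reconstruct((x_sh[0] - a0) % RING, (x_sh[1] - a1) % RING)
    f = reconstruct((y_sh[0] - b0) % RING, (y_sh[1] - b1) % RING)
    # z_i = c_i + e*b_i + f*a_i, with the public term e*f added by one party.
    z0 = (c0 + e * b0 + f * a0 + e * f) % RING
    z1 = (c1 + e * b1 + f * a1) % RING
    return z0, z1


if __name__ == "__main__":
    x, y = 31337, 4242
    assert reconstruct(*beaver_multiply(share(x), share(y))) == (x * y) % RING
```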

References

  • [1] Eigen Library. http://eigen.tuxfamily.org/. Version: 3.3.3.

  • [2] Fixed-point data type. http://dec64.com. Last Updated: 2018-01-20.

  • [3] MNIST database. http://yann.lecun.com/exdb/mnist/. Accessed: 2017-09-24.

  • [4] Stanford CS231n: Convolutional Neural Networks for Visual Recognition. http://cs231n.github.io/convolutionalnetworks/.

  • [5] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data and repealing Directive 95/46/EC (GDPR). Official Journal of the European Union, L119, May 2016.

  • [6] Toshinori Araki, Jun Furukawa, Yehuda Lindell, Ariel Nof, and Kazuma Ohara. High-throughput semi-honest secure three-party computation with an honest majority. In ACM CCS 2016.

  • [7] Donald Beaver. Efficient multiparty protocols using circuit randomization. In Annual International Cryptology Conference, pages 420–432. Springer, 1991.

  • [8] Michael Ben-Or, Shafi Goldwasser, and Avi Wigderson. Completeness theorems for non-cryptographic fault-tolerant distributed computation (extended abstract). In ACM STOC 1988.

  • [9] Dan Bogdanov, Sven Laur, and Jan Willemson. Sharemind: A framework for fast privacy-preserving computations. In ESORICS, pages 192–206, 2008.

  • [10] Dan Bogdanov, Margus Niitsoo, Tomas Toft, and Jan Willemson. High-performance secure multi-party computation for data mining applications. International Journal of Information Security, 11(6):403–418, 2012.

  • [11] Raphael Bost, Raluca Ada Popa, Stephen Tu, and Shafi Goldwasser. Machine learning classification over encrypted data. In NDSS 2015.

  • [12] R. Canetti. Universally composable security: A new paradigm for cryptographic protocols. In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science (FOCS '01), 2001.

  • [13] Ran Canetti. Security and composition of multiparty cryptographic protocols. Journal of Cryptology, 13(1):143–202, 2000.

  • [14] Octavian Catrina and Sebastiaan de Hoogh. Improved primitives for secure multiparty integer computation. In Security and Cryptography for Networks, 7th International Conference, SCN 2010, Amalfi, Italy, September 13–15, 2010, Proceedings, pages 182–199, 2010.

  • [15] Centers for Medicare & Medicaid Services. The Health Insurance Portability and Accountability Act of 1996 (HIPAA). Online at http://www.cms.hhs.gov/hipaa/, 1996.

  • [16] Hervé Chabanne, Amaury de Wargny, Jonathan Milgram, Constance Morel, and Emmanuel Prouff. Privacy-preserving classification on deep neural network. Cryptology ePrint Archive, Report 2017/035, 2017. https://eprint.iacr.org/2017/035.

  • [17] David Chaum, Claude Crépeau, and Ivan Damgård. Multiparty unconditionally secure protocols (extended abstract). In ACM STOC 1988.

  • [18] Koji Chida, Daniel Genkin, Koki Hamada, Dai Ikarashi, Ryo Kikuchi, Yehuda Lindell, and Ariel Nof. Fast large-scale honest-majority MPC for malicious adversaries. In CRYPTO, pages 34–64, 2018.

  • [19] Benny Chor and Eyal Kushilevitz. A zero-one law for Boolean privacy. SIAM J. Discrete Math., 4(1), 1991.

  • [20] Ivan Damgård, Martin Geisler, and Mikkel Krøigaard. Homomorphic encryption and secure comparison. In IJACT 2008.

  • [21] Daniel Demmler, Thomas Schneider, and Michael Zohner. ABY – A framework for efficient mixed-protocol secure two-party computation. In NDSS 2015.

  • [22] Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. 2014.

  • [23] Jun Furukawa, Yehuda Lindell, Ariel Nof, and Or Weinstein. High-throughput secure three-party computation for malicious adversaries and an honest majority. In IACR Eurocrypt 2017.

  • [24] Ran Gilad-Bachrach, Nathan Dowlin, Kim Laine, Kristin E. Lauter, Michael Naehrig, and John Wernsing. CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy. In ICML 2016.

  • [25] Oded Goldreich, Silvio Micali, and Avi Wigderson. How to play any mental game or a completeness theorem for protocols with honest majority. In ACM STOC 1987.

  • [26] Chiraag Juvekar, Vinod Vaikuntanathan, and Anantha Chandrakasan. Gazelle: A low latency framework for secure neural network inference. In USENIX Security 2018.

  • [27] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

  • [28] Yehuda Lindell and Ariel Nof. A framework for constructing fast MPC over arithmetic circuits with malicious adversaries and an honest-majority. In ACM CCS 2017.

  • [29] Yehuda Lindell and Benny Pinkas. Privacy preserving data mining. In Annual International Cryptology Conference, pages 36–54. Springer, 2000.

  • [30] Jian Liu, Mika Juuti, Yao Lu, and N. Asokan. Oblivious neural network predictions via MiniONN transformations. In ACM CCS 2017.

  • [31] Payman Mohassel and Peter Rindal. ABY3: A mixed protocol framework for machine learning. In ACM CCS 2018.

  • [32] Payman Mohassel, Mike Rosulek, and Ye Zhang. Fast and secure three-party computation: The garbled circuit approach. In ACM CCS 2015.

  • [33] Payman Mohassel and Yupeng Zhang. SecureML: A system for scalable privacy-preserving machine learning. In IEEE S&P 2017.

  • [34] Valeria Nikolaenko, Stratis Ioannidis, Udi Weinsberg, Marc Joye, Nina Taft, and Dan Boneh. Privacy-preserving matrix factorization. In ACM CCS, pages 801–812, 2013.

  • [35] Takashi Nishide and Kazuo Ohta. Multiparty computation for interval, equality, and comparison without bit-decomposition protocol. In PKC 2007.

  • [36] M. Sadegh Riazi, Christian Weinert, Oleksandr Tkachenko, Ebrahim M. Songhori, Thomas Schneider, and Farinaz Koushanfar. Chameleon: A hybrid secure computation framework for machine learning applications. In AsiaCCS 2018.

  • [37] Reza Shokri and Vitaly Shmatikov. Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pages 1310–1321. ACM, 2015.

  • [38] Sameer Wagh, Divya Gupta, and Nishanth Chandran. SecureNN: 3-Party Secure Computation for Neural Network Training. https://eprint.iacr.org/2018/442.pdf, 2019.

  • [39] David J. Wu, Tony Feng, Michael Naehrig, and Kristin E. Lauter. Privately evaluating decision trees and random forests. PoPETs, 2016.

  • [40] Andrew Chi-Chih Yao. How to generate and exchange secrets (extended abstract). In IEEE FOCS 1986.

  • [41] Xiangxin Zhu, Carl Vondrick, Charless Fowlkes, and Deva Ramanan. Do we need more training data? International Journal of Computer Vision, 2016.
