Due to the recent “Right to be Forgotten” (RTBF) ruling, for queries about an individual, Google and other search engines now delist links to web pages that contain “inadequate, irrelevant or no longer relevant, or excessive” information about that individual. In this paper we take a data-driven approach to study the RTBF in traditional media outlets, its consequences, and its susceptibility to inference attacks. First, we perform a content analysis of 283 known delisted UK media pages, using both manual investigation and Latent Dirichlet Allocation (LDA). We find that the strongest topic themes are violent crime, road accidents, drugs, murder, prostitution, financial misconduct, and sexual assault. Informed by this content analysis, we then show how a third party can discover delisted URLs along with the requesters’ names, thereby putting the efficacy of the RTBF for delisted media links in question. As a proof of concept, we perform an experiment that discovers two previously unknown delisted URLs and their corresponding requesters. We also determine 80 requesters for the 283 known delisted media pages, and examine whether they suffer from the “Streisand effect,” a phenomenon whereby an attempt to hide a piece of information has the unintended consequence of publicizing the information more widely. To measure the presence (or absence) of a Streisand effect, we develop novel metrics and methodology based on Google Trends and Twitter data. Finally, we carry out a demographic analysis of the 80 known requesters. We hope the results and observations in this paper can inform lawmakers as they refine RTBF laws in the future.
Consider users who share their data (e.g., location) with an untrusted service provider to obtain a personalized (e.g., location-based) service. Data obfuscation is a prevalent user-centric approach to protecting users’ privacy in such systems: the untrusted entity receives only a noisy version of the user’s data. Perturbing data before sharing it, however, comes at the price of the user’s utility (service quality), which is an inseparable design factor of obfuscation mechanisms. The entanglement of utility loss and the privacy guarantee, together with the lack of a comprehensive notion of privacy, has led to obfuscation mechanisms that are suboptimal in terms of utility loss, ignore the user’s past information leakage, or are limited to very specific notions of privacy that, for example, do not protect against adaptive inference attacks or an adversary with arbitrary background knowledge.
In this paper, we design user-centric obfuscation mechanisms that impose the minimum utility loss while guaranteeing the user’s privacy. We optimize utility subject to a joint guarantee of differential privacy (indistinguishability) and distortion privacy (inference error). This double shield of protection limits both the information leakage through the obfuscation mechanism and the posterior inference. We show that the privacy achieved through joint differential-distortion mechanisms against optimal attacks is as large as the maximum privacy achievable by either mechanism alone, and that their utility cost is no larger than what either the differential or the distortion mechanism imposes. We model the optimization problem as a leader-follower game between the designer of the obfuscation mechanism and the potential adversary, and design adaptive mechanisms that anticipate and protect against optimal inference algorithms. The resulting obfuscation mechanism is thus optimal against any inference algorithm.
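To make the two privacy notions concrete, the sketch below is an illustration, not the paper's optimal leader-follower mechanism. It builds an exponential obfuscation channel on a toy location grid, verifies a geo-indistinguishability-style ratio bound (the exponential mechanism satisfies such a bound with a factor of two in epsilon), and computes both the expected utility loss and the Bayes-optimal adversary's expected inference error, i.e., the distortion-privacy measure. The grid, prior, epsilon, and distance metric are all assumed for illustration.

```python
# Illustrative sketch: exponential obfuscation on a toy 1-D location grid,
# checked against a (2*eps) indistinguishability ratio bound, plus the
# Bayes-optimal adversary's expected inference error (distortion privacy).
import math

locations = [0, 1, 2, 3]          # toy location grid (assumed)
prior = [0.25] * 4                # adversary's background knowledge (assumed)
eps = 0.7                         # indistinguishability parameter (assumed)
d = lambda x, y: abs(x - y)       # distance / distortion metric (assumed)

def channel(x):
    # p(o|x) proportional to exp(-eps * d(x, o))  (exponential mechanism)
    w = [math.exp(-eps * d(x, o)) for o in locations]
    s = sum(w)
    return [wi / s for wi in w]

P = [channel(x) for x in locations]  # P[x][o] = Pr[report o | true location x]

# Indistinguishability check: p(o|x) <= exp(2*eps*d(x,x')) * p(o|x').
# The factor 2 is the standard exponential-mechanism slack from the
# normalization constants.
ok = all(
    P[x][o] <= math.exp(2 * eps * d(x, xp)) * P[xp][o] + 1e-12
    for x in locations for xp in locations for o in locations
)

# Expected utility loss: average distortion the user pays for the noise.
util_loss = sum(prior[x] * P[x][o] * d(x, o)
                for x in locations for o in locations)

def inference_error():
    # Bayes-optimal adversary: for each observation o, guess the g that
    # minimizes expected distortion; sum gives the expected inference error.
    err = 0.0
    for o in locations:
        err += min(
            sum(prior[x] * P[x][o] * d(x, g) for x in locations)
            for g in locations
        )
    return err

print(f"indistinguishability bound holds: {ok}")
print(f"expected utility loss: {util_loss:.3f}")
print(f"adversary inference error: {inference_error():.3f}")
```

The inference error can never exceed the utility loss here, since "guess the reported location" is one feasible adversary strategy; a joint differential-distortion design, as in the abstract, additionally constrains this error from below while minimizing the utility loss.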