Search Results

1–10 of 102 items







The decreasing costs of molecular profiling have supplied the biomedical research community with a plethora of new types of biomedical data, enabling a breakthrough towards more precise and personalized medicine. Naturally, the increasing availability of data also enables physicians to compare patients' data and treatments easily and to find similar patients in order to propose the optimal therapy. Such similar patient queries (SPQs) are of utmost importance to medical practice and will be relied upon in future health information exchange systems. While privacy-preserving solutions have been studied previously, they are limited to genomic data and ignore the other newly available types of biomedical data.

In this paper, we propose new cryptographic techniques for finding similar patients in a privacy-preserving manner across various types of biomedical data, including genomic, epigenomic and transcriptomic data, as well as their combination. We design protocols for two of the most common similarity metrics in biomedicine: the Euclidean distance and the Pearson correlation coefficient. Moreover, unlike previous approaches, we account for the fact that certain locations contribute differently to a given disease or phenotype by allowing the query to be limited to the relevant locations and assigning them different weights. Our protocols are specifically designed to be highly efficient in terms of communication and bandwidth, requiring only one or two rounds of communication and thus enabling scalable parallel queries. We rigorously prove our protocols secure based on cryptographic games and instantiate our technique with three of the most important types of biomedical data, namely DNA, microRNA expression, and DNA methylation. Our experimental results show that our protocols can compute a similarity query over a typical number of positions against a database of 1,000 patients in a few seconds. Finally, we propose and formalize strategies to mitigate the threat of malicious users or hospitals.
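In the clear (i.e., before any cryptographic protection is applied), the two weighted, position-restricted similarity metrics this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's protocol; all function and parameter names are our own:

```python
import math

def weighted_euclidean(x, y, positions, weights):
    """Weighted Euclidean distance restricted to the queried positions."""
    return math.sqrt(sum(w * (x[i] - y[i]) ** 2
                         for i, w in zip(positions, weights)))

def weighted_pearson(x, y, positions, weights):
    """Weighted Pearson correlation coefficient over the queried positions."""
    xs = [x[i] for i in positions]
    ys = [y[i] for i in positions]
    wsum = sum(weights)
    mx = sum(w * v for w, v in zip(weights, xs)) / wsum  # weighted means
    my = sum(w * v for w, v in zip(weights, ys)) / wsum
    cov = sum(w * (a - mx) * (b - my) for w, a, b in zip(weights, xs, ys))
    vx = sum(w * (a - mx) ** 2 for w, a in zip(weights, xs))
    vy = sum(w * (b - my) ** 2 for w, b in zip(weights, ys))
    return cov / math.sqrt(vx * vy)
```

Restricting the computation to `positions` and weighting each term is what lets a query target only the disease-relevant locations; the paper's contribution is computing such quantities under encryption.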


We systematize the knowledge on data breaches into concise, step-by-step breach workflows and use them to describe breach methods. We present the most plausible workflows for 10 famous data breaches. We use information from a variety of sources to develop our breach workflows; however, we emphasize that for many data breaches, information about crucial steps was absent. We researched such steps to develop complete breach workflows; as such, our workflows provide descriptions of data breaches that were previously unavailable. For generalizability, we also present a general workflow covering 50 data breaches from 2015. Based on our data breach analysis, we develop requirements that organizations need to meet to thwart data breaches. We describe which requirements are met by existing security technologies and propose future research directions for thwarting data breaches.


A number of studies have recently been made on discrete distribution estimation in the local model, in which users obfuscate their personal data (e.g., location, response in a survey) by themselves and a data collector estimates the distribution of the original personal data from the obfuscated data. Unlike the centralized model, in which a trusted database administrator can access all users' personal data, the local model does not suffer from the risk of data leakage. A representative privacy metric in this model is LDP (Local Differential Privacy), which controls the amount of information leakage by a parameter ε called the privacy budget. When ε is small, a large amount of noise is added to the personal data, and therefore users' privacy is strongly protected. However, when the number of users n is small (e.g., a small-scale enterprise may not be able to collect large samples) or when most users adopt a small value of ε, estimating the distribution becomes a very challenging task. The goal of this paper is to accurately estimate the distribution in the cases explained above. To achieve this goal, we focus on the EM (Expectation-Maximization) reconstruction method, a state-of-the-art statistical inference method, and propose a method to correct its estimation error (i.e., the difference between the estimate and the true value) using the theory of Rilstone et al. We prove that the proposed method reduces the MSE (Mean Square Error) under some assumptions. We also evaluate the proposed method using three large-scale datasets, two of which contain location data while the other contains census data. The results show that the proposed method significantly outperforms the EM reconstruction method on all of the datasets when n or ε is small.
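As a concrete illustration of the local model this abstract describes, here is a minimal sketch of k-ary randomized response (a standard LDP mechanism) together with the simple unbiased frequency estimator obtained by inverting the known perturbation probabilities. The paper's EM reconstruction and its Rilstone-based error correction are more elaborate than this; all names here are our own:

```python
import math
import random
from collections import Counter

def krr_perturb(value, k, eps, rng):
    """k-ary randomized response: report the true value with probability p,
    otherwise report one of the k-1 other values uniformly at random."""
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p:
        return value
    other = rng.randrange(k - 1)          # pick a value different from the truth
    return other if other < value else other + 1

def estimate_distribution(reports, k, eps):
    """Unbiased estimate of the original distribution, obtained by
    inverting the known report probabilities p and q."""
    n = len(reports)
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    q = 1.0 / (math.exp(eps) + k - 1)     # prob. of reporting a specific wrong value
    counts = Counter(reports)
    return [(counts[v] / n - q) / (p - q) for v in range(k)]

# Example: users whose true values follow a skewed distribution (0.5, 0.3, 0.2)
rng = random.Random(0)
k, eps = 3, 2.0
true_values = [0] * 10000 + [1] * 6000 + [2] * 4000
reports = [krr_perturb(v, k, eps, rng) for v in true_values]
est = estimate_distribution(reports, k, eps)
```

The estimator's variance grows as ε or n shrinks, which is exactly the regime where the paper's corrected EM reconstruction is claimed to help.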


Do you remember the times when a copyright or a patent had no economic value? Neither do I, because that was more than 300 years ago, when printing took place entirely without regulation. It was in the eighteenth century that France, England, Germany and the United Kingdom realized that the author was quite important for the state, and the first regulations appeared. Exactly like intellectual property then, in the new era of technology, dynamic change and growing e-commerce, personal data is the newest economic good. More and more studies and journals show that in the near future personal information will also have an economic value, since databases are so important for businesses, but also for other institutions such as the police or even intelligence agencies. The current article is the first in a series of a larger research effort on the importance of personal data in the current economy and its actual value to an organization. Further studies will be needed in order to conclude and create a model for measuring the value of personal data. This first step is a research effort and a detailed analysis of the current status quo. The changes that appeared after the entry into force of the European General Data Protection Regulation will be analyzed. Another significant section of the article is a close review of the personal data black market. In order to present this aspect as clearly and objectively as possible, further research on the dark internet (Onion) was conducted, and prices for cloned credit cards, Amazon or PayPal accounts and cloned personal documents were examined and charted.


Cloud computing has emerged as the most dominant computational paradigm in recent times, and there are tremendous benefits for enterprises adopting cloud technologies. It provides resources and services, including infrastructure, platform and software services, on an on-demand, pay-as-you-go basis. However, a number of security threats and challenges are still associated with utilizing cloud computing. Proper access control is a fundamental security requirement in any cloud environment, to avoid unauthorized access to cloud systems. As cloud computing supports multi-tenancy and serves various categories of users with different sets of security requirements, traditional access control models and policies cannot be used. This paper discusses the various access control models used in cloud environments and presents a detailed requirement analysis for developing an access control model specifically for the cloud. A comprehensive study of the various security problems associated with outsourced data on the cloud and their existing solutions is also presented, along with future research directions.
