Search Results

You are looking at 1–6 of 6 items for

  • Author: George Danezis
Open access

Jamie Hayes and George Danezis

Abstract

“Entry” guards protect the Tor onion routing system from variants of the “predecessor” attack, which would allow an adversary controlling a fraction of routers to eventually de-anonymize some users. Research has shown, however, that the three-guard scheme has drawbacks, and in 2014 Dingledine et al. proposed that each user instead rely on a single long-term guard. We first show that such a guard selection strategy would be optimal if the Tor network were failure-free and static. Under realistic failure conditions, however, the one-guard proposal still suffers from classic fingerprinting attacks that uniquely identify users. Furthermore, under dynamic network conditions, using single guards offers smaller anonymity sets to users of fresh guards. We propose and analyze an alternative guard selection scheme that groups guards together to form shared guard sets. We compare the security and performance of guard sets with the three-guard scheme and the one-guard proposal, and show that guard sets provide increased resistance to a number of attacks, with no significant degradation in performance or bandwidth utilization.
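A minimal sketch of the guard-set idea, assuming a hypothetical grouping and assignment procedure (the set size, partitioning, and hash-based mapping here are illustrative simplifications, not the paper's exact algorithm):

```python
import hashlib
import random

def form_guard_sets(guards, set_size=3):
    """Partition the guard list into fixed-size guard sets (illustrative only)."""
    random.shuffle(guards)
    return [guards[i:i + set_size] for i in range(0, len(guards), set_size)]

def assign_guard_set(user_id, guard_sets):
    """Deterministically map a user to one shared guard set."""
    digest = hashlib.sha256(user_id.encode()).digest()
    index = int.from_bytes(digest, "big") % len(guard_sets)
    return guard_sets[index]

# All users mapped to the same set share its guards, so a fresh user
# joins an existing anonymity set rather than being fingerprinted by
# a unique single guard.
sets = form_guard_sets([f"guard{i}" for i in range(12)])
print(assign_guard_set("alice", sets))
```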

Open access

Nikita Borisov, George Danezis and Ian Goldberg

Abstract

Users of social applications like to be notified when their friends are online. Typically, this is done by a central server keeping track of who is online and offline, as well as of all of the users’ “buddy lists”, which contain sensitive information. We present DP5, a cryptographic service that implements online presence indication in a privacy-friendly way. DP5 allows clients to register their online presence and query the presence of their list of friends while keeping this list secret. Besides presence, high-integrity status updates are supported, to facilitate key update and rendezvous protocols. While infrastructure services are required for DP5 to operate, they are designed to not require any long-term secrets and provide perfect forward secrecy in case of compromise. We provide security arguments for the indistinguishability properties of the protocol, as well as an evaluation of its scalability and performance.
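A toy illustration of the presence-registration idea, assuming (as a simplification) that each pair of friends shares a symmetric key and that presence records are keyed by a per-epoch PRF output; the real DP5 protocol layers PIR and further machinery on top of this so the servers learn neither the friend lists nor the queries:

```python
import hmac
import hashlib

def presence_tag(shared_key: bytes, epoch: int) -> bytes:
    """Per-epoch pseudorandom tag derived from a pairwise friend key."""
    return hmac.new(shared_key, epoch.to_bytes(8, "big"), hashlib.sha256).digest()

def register(server_db: set, my_keys: list, epoch: int) -> None:
    """Registration: upload one opaque tag per friend for the current epoch.
    The server stores tags without learning anything about the friend list."""
    for k in my_keys:
        server_db.add(presence_tag(k, epoch))

def is_online(server_db: set, shared_key: bytes, epoch: int) -> bool:
    """Lookup: recompute the shared tag and check for it. DP5 replaces this
    direct lookup with PIR so the server cannot see which tag is queried."""
    return presence_tag(shared_key, epoch) in server_db

db = set()
k_ab = b"\x01" * 32          # key shared by Alice and Bob (illustrative)
register(db, [k_ab], epoch=42)
print(is_online(db, k_ab, epoch=42))  # True
```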

Open access

Raphael R. Toledo, George Danezis and Ian Goldberg

Abstract

Private Information Retrieval (PIR), despite being well studied, is computationally costly and arduous to scale. We explore lower-cost relaxations of information-theoretic PIR, based on dummy queries, sparse vectors, and compositions with an anonymity system. We prove the security of each scheme using a flexible differentially private definition for private queries that can capture notions of imperfect privacy. We show that basic schemes are weak, but some of them can be made arbitrarily safe by composing them with large anonymity systems.
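A minimal sketch of the simplest relaxation, dummy queries: the client hides the index it wants among uniformly random decoys, so the server sees an unordered set of indices rather than a single one. The parameters and sampling here are illustrative:

```python
import random

def dummy_query(database, real_index, num_dummies):
    """Fetch the real record hidden among random dummy indices."""
    decoys = random.sample(
        [i for i in range(len(database)) if i != real_index], num_dummies
    )
    indices = decoys + [real_index]
    random.shuffle(indices)                        # server sees an unordered set
    responses = {i: database[i] for i in indices}  # server answers all of them
    return responses[real_index]

db = [f"record-{i}" for i in range(100)]
print(dummy_query(db, real_index=7, num_dummies=9))
```

On its own such a scheme leaks under repeated queries (the real index recurs while decoys vary), which is one reason the basic schemes are weak and why composing them with a large anonymity system strengthens them.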

Open access

Luís T. A. N. Brandão, Nicolas Christin, George Danezis and Anonymous

Abstract

Online public and governmental services requiring citizen authentication have expanded considerably in recent years. This expansion has strained the usability and security of credential management for users and service providers alike. To address the problem, some countries have proposed nation-scale identification/authentication systems intended to greatly reduce the burden of credential management while seemingly offering desirable privacy benefits. In this paper we analyze two such systems: the Federal Cloud Credential Exchange (FCCX) in the United States and GOV.UK Verify in the United Kingdom, which together aim to serve more than a hundred million citizens. Both systems adopt a brokered identification architecture, in which an online central hub mediates user authentications between identity providers and service providers. We show that both FCCX and GOV.UK Verify suffer from serious privacy and security shortcomings, fail to comply with the privacy-preserving guidelines they are meant to follow, and may actually degrade user privacy. Notably, the hub can link interactions of the same user across different service providers and has visibility over citizens' personally identifiable information. In the event of malicious compromise, it can also undetectably impersonate users. Within the structural design constraints placed on these nation-scale brokered identification systems, we propose feasible technical solutions to the privacy and security issues we identified. We conclude with a strong recommendation that FCCX and GOV.UK Verify undergo a more in-depth technical and public review, based on a defined and comprehensive threat model, and adopt adequate structural adjustments.
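A toy illustration of the linkage problem identified here: if the hub sees a persistent user identifier in each mediated authentication, it can trivially correlate one citizen's activity across service providers. The record fields below are hypothetical, not FCCX or GOV.UK Verify message formats:

```python
from collections import defaultdict

# What a curious hub might log per mediated authentication (illustrative).
transcripts = [
    {"user": "citizen-42", "idp": "idp-A", "sp": "tax-office"},
    {"user": "citizen-42", "idp": "idp-A", "sp": "health-service"},
    {"user": "citizen-77", "idp": "idp-B", "sp": "tax-office"},
]

profiles = defaultdict(set)
for t in transcripts:
    profiles[t["user"]].add(t["sp"])  # hub links one user across providers

print(dict(profiles))
# e.g. {'citizen-42': {'tax-office', 'health-service'}, 'citizen-77': {'tax-office'}}
```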

Open access

Carmela Troncoso, Marios Isaakidis, George Danezis and Harry Halpin

Abstract

Decentralized systems are a subset of distributed systems where multiple authorities control different components and no authority is fully trusted by all. This implies that any component in a decentralized system is potentially adversarial. We review fifteen years of research on decentralization and privacy, and provide an overview of key systems, as well as key insights for designers of future systems. We show that decentralized designs can enhance privacy, integrity, and availability, but also require careful trade-offs in terms of system complexity, properties provided, and degree of decentralization. These trade-offs need to be understood and navigated by designers. We argue that a combination of insights from cryptography, distributed systems, and mechanism design, aligned with the development of adequate incentives, is necessary to build scalable and successful privacy-preserving decentralized systems.

Open access

Jamie Hayes, Luca Melis, George Danezis and Emiliano De Cristofaro

Abstract

Generative models estimate the underlying distribution of a dataset in order to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator’s capacity to learn statistical differences between distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters, and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performance of the generative models in terms of training stability and/or sample quality.
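A sketch of the white-box attack's core intuition: an overfitted discriminator assigns systematically higher confidence to training points, so ranking candidates by discriminator score exposes likely members. The discriminator below is a stand-in stub built for demonstration; the paper evaluates real trained GAN discriminators:

```python
import numpy as np

def whitebox_membership_attack(discriminator, candidates, num_members):
    """Rank candidates by discriminator confidence; the top-scoring
    records are predicted to have been in the training set."""
    scores = np.array([discriminator(x) for x in candidates])
    ranked = np.argsort(scores)[::-1]  # highest confidence first
    return set(ranked[:num_members])

# Stand-in discriminator (illustrative): scores a point by proximity to the
# training data, mimicking how overfitting inflates confidence on members.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(50, 8))
holdout = rng.normal(0.0, 1.0, size=(50, 8))
discriminator = lambda x: float(-np.min(np.linalg.norm(train - x, axis=1)))

candidates = np.vstack([train, holdout])
predicted = whitebox_membership_attack(candidates=candidates,
                                       discriminator=discriminator,
                                       num_members=50)
accuracy = len(predicted & set(range(50))) / 50  # indices 0..49 are members
print(f"attack accuracy: {accuracy:.2f}")
```

In the black-box variant the paper describes, the adversary lacks direct discriminator access and instead trains its own GAN on samples from the target model, then applies the same ranking idea with the local discriminator.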