Search Results

Showing 1–3 of 3 items for

  • Author: Jamie Hayes
Open access

Jamie Hayes and George Danezis

Abstract

“Entry” guards protect the Tor onion routing system from variants of the “predecessor” attack that would allow an adversary controlling a fraction of routers to eventually de-anonymize some users. However, research has shown that the three-guard scheme has drawbacks, and in 2014 Dingledine et al. proposed that each user adopt a single long-term guard. We first show that such a guard selection strategy would be optimal if the Tor network were failure-free and static. Under realistic failure conditions, however, the one-guard proposal still suffers from classic fingerprinting attacks that uniquely identify users. Furthermore, under dynamic network conditions, single guards offer smaller anonymity sets to users of fresh guards. We propose and analyze an alternative guard selection scheme that groups guards together to form shared guard sets. We compare the security and performance of guard sets with the three-guard scheme and the one-guard proposal, and show that guard sets provide increased resistance to a number of attacks, with no significant degradation in performance or bandwidth utilization.
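
The abstract does not include the construction itself, so the Python sketch below is only a rough illustration of the guard-set idea: guards are partitioned into disjoint sets, each client is tied to one long-term set, and the client falls back within that set when members go offline. The set size, assignment rule, and fallback policy are assumptions made for illustration, not the paper's parameters.

```python
import random

SET_SIZE = 3  # illustrative set size, not a parameter from the paper

def build_guard_sets(guards, set_size=SET_SIZE):
    # Shuffle so set membership is independent of the input ordering,
    # then cut the shuffled list into disjoint, full-sized sets.
    shuffled = random.sample(guards, len(guards))
    return [shuffled[i:i + set_size]
            for i in range(0, len(shuffled) - set_size + 1, set_size)]

def pick_entry_guard(client_set, online):
    # Route through any online member of the client's own set; the whole
    # set, rather than one uniquely identifying guard, is what an
    # observer can link to the client.
    candidates = [g for g in client_set if g in online]
    return random.choice(candidates) if candidates else None

# Toy usage: 12 guards yield four sets; a client sticks to one set long-term.
guards = [f"guard{i}" for i in range(12)]
guard_sets = build_guard_sets(guards)
my_set = random.choice(guard_sets)  # one long-term set per client
entry = pick_entry_guard(my_set, online=set(guards[:8]))
print(my_set, "->", entry)
```

The intuition this sketch captures is that failures no longer force a client onto a fresh, small-anonymity-set guard: any member of the same shared set can serve, so many clients remain behind the same set.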

Open access

Giovanni Cherubin, Jamie Hayes and Marc Juarez

Abstract

Website Fingerprinting (WF) allows a passive network adversary to learn which websites a client visits by analyzing traffic patterns that are unique to each website. These attacks have recently been shown to be particularly effective against .onion sites, anonymous web servers hosted within the Tor network. Given the sensitive nature of the content of these services, the implications of WF for the Tor network are alarming. Prior work has considered only client-side defenses, arguing that web servers lack incentives to adopt countermeasures. Furthermore, most of these defenses are designed to operate on the stream of network packets, making practical deployment difficult. In this paper, we propose two application-level defenses, including the first server-side defense against WF, as .onion services have incentives to support it. The other is a lightweight client-side defense implemented as a browser add-on, improving ease of deployment over previous approaches. In our evaluations, the server-side defense reduces WF accuracy on Tor .onion sites from 69.6% to 10%, and the client-side defense reduces accuracy from 64% to 31.5%.
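
As a toy illustration of what an application-level, server-side defense can mean (this is not the paper's actual design), the sketch below pads response bodies up to fixed size buckets so that pages of different sizes collapse into a small number of size classes and become harder to distinguish by traffic volume. The bucket size and the null-byte padding are illustrative assumptions.

```python
import math

BUCKET = 2048  # illustrative bucket size in bytes

def pad_body(body: bytes, bucket: int = BUCKET) -> bytes:
    # Pad the response body up to the next multiple of `bucket`. Real
    # content would need padding the browser ignores (e.g. inside an
    # HTML comment); null bytes keep this sketch short.
    target = max(bucket, math.ceil(len(body) / bucket) * bucket)
    return body + b"\0" * (target - len(body))

# Toy usage: two differently sized pages become identical in length.
print(len(pad_body(b"a" * 1500)), len(pad_body(b"b" * 300)))  # 2048 2048
```

Because the padding happens at the application layer, a server operator can deploy it without touching the packet stream, which is exactly the deployment advantage the abstract emphasizes.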

Open access

Jamie Hayes, Luca Melis, George Danezis and Emiliano De Cristofaro

Abstract

Generative models estimate the underlying distribution of a dataset in order to generate realistic samples from that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of the training dataset, using the discriminator’s capacity to learn statistical differences between distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performance of the generative models in terms of training stability and/or sample quality.
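
To make the attack intuition concrete, here is a minimal Python sketch of a discriminator-score membership test in the spirit of the white-box setting described above: candidates that the discriminator scores most confidently as “real” are predicted to be training members, because an overfit discriminator tends to be more confident on points it was trained on. The discriminator stand-in and the score distributions are invented for illustration and are not the paper's experimental setup.

```python
import numpy as np

def membership_attack(discriminator, candidates, n_members):
    # Score each candidate with the discriminator and predict the
    # top-scoring n_members as training-set members.
    scores = np.array([discriminator(x) for x in candidates])
    predicted = np.zeros(len(candidates), dtype=bool)
    predicted[np.argsort(scores)[::-1][:n_members]] = True
    return predicted

# Toy usage: members get systematically higher "real" scores than
# non-members, so ranking by score recovers most of them.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.9, 0.05, 50),   # training points
                         rng.normal(0.5, 0.20, 50)])  # unseen points
pred = membership_attack(lambda s: s, scores, n_members=50)
print("members recovered:", int(pred[:50].sum()), "of 50")
```

In a black-box variant, the adversary would first train a shadow GAN on samples drawn from the target model and use its discriminator for scoring; the ranking step stays the same.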