Search Results

Showing 2 of 2 items, filtered by Author: Sebastian Meiser and Subject: IT-Security and Cryptology.

Abstract

In this paper, we present a rigorous methodology for quantifying the anonymity provided by Tor against a variety of structural attacks, i.e., attacks in which an adversary corrupts Tor nodes and thereby eavesdrops on traffic to deanonymize Tor users. First, we provide an algorithmic approach for computing the anonymity impact of such structural attacks against Tor. The algorithm is parametric in the considered path selection algorithm and is hence capable of reasoning about variants of Tor as well as alternative path selection algorithms. Second, we present formalizations of various instantiations of structural attacks against Tor and show that the computed anonymity impact of each of these adversaries indeed constitutes a worst-case anonymity bound for the cryptographic realization of Tor. Third, we use our methodology to conduct a rigorous, large-scale evaluation of Tor’s anonymity which establishes worst-case anonymity bounds against various structural attacks for Tor and for alternative path selection algorithms such as DistribuTor, SelekTOR, and LASTor. This yields the first rigorous anonymity comparison between different path selection algorithms. As part of our analysis, we quantify the anonymity impact of a path selection transition phase, i.e., a phase in which a small number of users run an alternative algorithm while the vast majority still uses the original one. The source code of our implementation is publicly available.
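To give intuition for the kind of quantity such an algorithm computes, here is a minimal Python sketch, not the paper’s algorithm: it estimates the probability that a structural adversary who corrupts the k highest-bandwidth relays controls both ends of a circuit under a simplified bandwidth-weighted path selection. The Relay fields, the k-collusion rule, and the independence assumption are all illustrative simplifications (real Tor, for instance, never selects the same relay as both entry and exit, and enforces family and subnet constraints).

```python
from dataclasses import dataclass

@dataclass
class Relay:
    name: str
    bandwidth: float   # consensus bandwidth weight
    guard: bool        # eligible as entry guard
    exit: bool         # eligible as exit

def selection_prob(relays, role):
    """Bandwidth-weighted selection probability per relay for a role.
    A simplified stand-in for a real path selection algorithm."""
    eligible = [r for r in relays if getattr(r, role)]
    total = sum(r.bandwidth for r in eligible)
    return {r.name: r.bandwidth / total for r in eligible}

def deanonymization_prob(relays, corrupted, select=selection_prob):
    """Probability that both entry and exit of a fresh circuit are
    corrupted, so the adversary sees both ends and can correlate traffic.
    The `select` parameter makes this parametric in the selection rule."""
    p_guard = select(relays, "guard")
    p_exit = select(relays, "exit")
    p_bad_guard = sum(p for n, p in p_guard.items() if n in corrupted)
    p_bad_exit = sum(p for n, p in p_exit.items() if n in corrupted)
    return p_bad_guard * p_bad_exit  # independence is a simplification

# A k-collusion structural adversary: corrupt the k largest relays.
relays = [
    Relay("A", 500.0, guard=True,  exit=False),
    Relay("B", 300.0, guard=True,  exit=True),
    Relay("C", 200.0, guard=False, exit=True),
    Relay("D", 100.0, guard=True,  exit=True),
]
k = 2
corrupted = {r.name for r in sorted(relays, key=lambda r: -r.bandwidth)[:k]}
print(corrupted, deanonymization_prob(relays, corrupted))
```

Making the selection rule a parameter mirrors the abstract’s point that the analysis is parametric in the path selection algorithm: swapping in a different rule (say, one modeling DistribuTor’s reweighting) changes the computed bound without touching the adversary model.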

Abstract

Quantifying the privacy loss of a privacy-preserving mechanism on potentially sensitive data is a complex and well-researched topic; the de facto standard privacy measures are ε-differential privacy (DP) and its versatile relaxation, (ε, δ)-approximate differential privacy (ADP). Recently, novel variants of (A)DP have focused on giving tighter privacy bounds under continual observation. In this paper, we unify many previous works via the privacy loss distribution (PLD) of a mechanism. We show that for non-adaptive mechanisms, the privacy loss under sequential composition undergoes a convolution and converges to a Gauss distribution (the central limit theorem for DP). We derive several relevant insights: we can now characterize mechanisms by their privacy loss class, i.e., by the Gauss distribution to which their PLD converges, which allows us to give novel ADP bounds for mechanisms based on their privacy loss class; we derive exact analytical guarantees for the approximate randomized response mechanism, and an exact analytical closed formula for the Gauss mechanism that, given ε, calculates δ such that the mechanism is (ε, δ)-ADP (not an over-approximating bound).
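As a concrete illustration of these notions, the following Python sketch assumes the standard PLD definitions (privacy loss ℓ(o) = ln(Pr[M(x)=o] / Pr[M(x′)=o]) and δ(ε) = E[(1 − e^{ε−ℓ})⁺]). It composes the PLD of plain randomized response by convolution and evaluates a closed-form Gauss-mechanism δ(ε); the parameters p, n, eps, sigma, and sensitivity are illustrative choices, and the closed formula matches the standard exact Gaussian-mechanism ADP expression rather than reproducing the paper’s derivation.

```python
import numpy as np
from scipy.stats import binom, norm

# Randomized response: answer truthfully w.p. 1 - p, flip w.p. p (p < 1/2).
# Its privacy loss takes two values, +L w.p. 1 - p and -L w.p. p,
# with L = ln((1 - p) / p).
p, n, eps = 0.45, 50, 2.0
L = np.log((1 - p) / p)

# Non-adaptive sequential composition convolves the PLDs. For i.i.d. +/-L
# losses, the n-fold convolution is binomial: with k flips (k ~ Bin(n, p))
# the total loss is (n - 2k) * L. By the CLT this tends to a Gaussian,
# the mechanism's "privacy loss class".
k = np.arange(n + 1)
total_loss = (n - 2 * k) * L
pmf = binom.pmf(k, n, p)

# delta(eps) read off a PLD: delta = E[(1 - exp(eps - loss))^+].
delta_rr = np.sum(pmf * np.maximum(0.0, 1.0 - np.exp(eps - total_loss)))
print(f"randomized response, {n}-fold composition: delta({eps}) = {delta_rr:.3e}")

# Gauss mechanism: its PLD is itself Gaussian, N(mu, 2*mu) with
# mu = sensitivity^2 / (2 * sigma^2), which yields an exact closed formula
# for delta given eps (not an over-approximating bound).
def gauss_adp_delta(eps, sigma, sensitivity=1.0):
    mu = sensitivity**2 / (2 * sigma**2)   # mean of the privacy loss
    sd = sensitivity / sigma               # std. dev., equals sqrt(2 * mu)
    return norm.cdf((mu - eps) / sd) - np.exp(eps) * norm.cdf(-(mu + eps) / sd)

print(f"Gauss mechanism (sigma = 4): delta({eps}) = {gauss_adp_delta(eps, 4.0):.3e}")
```

The binomial shortcut is possible only because this PLD has a two-point support; for general mechanisms the composed PLD is computed by numerical convolution, which is exactly where the convergence to the Gauss privacy loss class becomes useful.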