Search Results

1 - 10 of 12 items

Author: Ian Goldberg

Abstract

Anonymous communications networks enable individuals to maintain their privacy online. The most popular such network is Tor, with about two million daily users; however, Tor is reaching the limits of its scalability. One of the main scalability bottlenecks of Tor and similar network designs originates from the requirement of distributing a global view of the servers in the network to all network clients. This requirement is in place to avoid epistemic attacks, in which adversaries who know which parts of the network certain clients do and do not know about can rule those clients in or out as being responsible for particular network traffic.

In this work, we introduce a novel solution to this scalability problem by leveraging oblivious RAM constructions and trusted execution environments to enable clients to fetch only the parts of the network view they require, without the directory servers learning which parts are being fetched. We compare the performance of our design with the current Tor mechanism and other related works, showing one to two orders of magnitude better performance from an end-to-end perspective. We analyse what deploying such a scheme today would entail and conclude that only a small fraction (<2.5%) of the relays would need the necessary hardware support; moreover, these relays can perform their roles with minimal network bandwidth requirements.
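
As a rough illustration of the access-pattern-hiding idea at the core of this design (this is not the paper's construction; the function names are ours, and a real deployment would run an actual ORAM inside a trusted enclave such as SGX), consider a lookup whose memory accesses are independent of the requested index:

    # Minimal sketch: an enclave-style oblivious lookup. Every record is
    # touched exactly once, so the access pattern reveals nothing about
    # which relay descriptor the client asked for.
    def oblivious_fetch(records: list[bytes], index: int) -> bytes:
        result = b"\x00" * len(records[0])
        for i, rec in enumerate(records):
            mask = 0xFF if i == index else 0x00   # branch-free in real code
            result = bytes(r | (b & mask) for r, b in zip(result, rec))
        return result

    # Toy directory of fixed-size relay records.
    directory = [f"relay-{i:03d}".encode().ljust(16, b"\x00") for i in range(100)]
    print(oblivious_fetch(directory, 42))

A linear scan like this costs O(n) work per query; the point of using ORAM constructions is to obtain the same access-pattern privacy at sublinear cost.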

Abstract

Trust and user-generated feedback have become increasingly vital to the normal functioning of the modern internet. However, deployed systems that currently incorporate such feedback do not guarantee users much in the way of privacy, despite a wide swath of research on how to do so spanning over 15 years. Meanwhile, research on systems that maintain user privacy while helping users track and update each other's reputations has failed to standardize terminology or to converge on which privacy guarantees should be important. Too often, this leads to misunderstandings of the tradeoffs underpinning design decisions. Further, key insights from some approaches to designing such systems have not circulated to other approaches, leaving open significant opportunity for new research directions. This SoK investigates 42 privacy-preserving reputation systems described in the literature from 2003 to 2019 in order to organize previous work and suggest directions for future work. Our three key contributions are the systematization of this body of research, the detailing of the tradeoffs implied by overarching design choices, and the identification of under-researched areas that provide promising opportunities for future work.

Abstract

Website fingerprinting allows a local, passive observer monitoring a web-browsing client’s encrypted channel to determine her web activity. Previous attacks have shown that website fingerprinting could be a threat to anonymity networks such as Tor under laboratory conditions. However, there are significant differences between laboratory conditions and realistic conditions. First, in laboratory tests we collect the training data set together with the testing data set, so the training data set is fresh, but an attacker may not be able to maintain a fresh data set. Second, laboratory packet sequences correspond to a single page each, but for realistic packet sequences the split between pages is not obvious. Third, packet sequences may include background noise from other types of web traffic. These differences adversely affect website fingerprinting under realistic conditions. In this paper, we tackle these three problems to bridge the gap between laboratory and realistic conditions for website fingerprinting. We show that we can maintain a fresh training set with minimal resources. We demonstrate several classification-based techniques that allow us to split full packet sequences effectively into sequences corresponding to a single page each. We describe several new algorithms for tackling background noise. With our techniques, we are able to build the first website fingerprinting system that can operate directly on packet sequences collected in the wild.
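
To make the page-splitting problem concrete, here is a crude time-gap baseline in Python (our illustration only; the paper's splitting techniques are classification-based and considerably more robust):

    def split_trace(packets, gap_threshold=2.0):
        """packets: list of (timestamp, direction) tuples sorted by time.
        Returns candidate per-page packet sequences, splitting wherever
        the inter-packet silence exceeds gap_threshold seconds."""
        pages, current = [], [packets[0]]
        for prev, cur in zip(packets, packets[1:]):
            if cur[0] - prev[0] > gap_threshold:   # long silence => new page
                pages.append(current)
                current = []
            current.append(cur)
        pages.append(current)
        return pages

    trace = [(0.0, +1), (0.1, -1), (0.4, -1), (5.2, +1), (5.3, -1)]
    print(len(split_trace(trace)))   # 2 candidate pages

A fixed threshold fails when users load pages in quick succession or when background traffic fills the gaps, which is precisely why learned classifiers are needed.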

Abstract

Censorship circumvention is often characterized as a cat-and-mouse game between a nation-state censor and the developers of censorship resistance systems. Decoy routing systems offer an approach to censorship resistance that has the potential to tilt this race in favour of the censorship resister by using real connections to unblocked, overt sites to deliver censored content to users. This is achieved by employing the help of Internet Service Providers (ISPs) or Autonomous Systems (ASes) that own routers in the middle of the network. However, the deployment of decoy routers has yet to reach fruition. Obstacles to deployment, such as the heavy requirements on routers that host decoy routing relay stations and the impact on quality of service for customers whose traffic passes through these routers, have deterred potential participants from deploying existing systems. Furthermore, connections from clients to overt sites often follow different paths in the upstream and downstream directions, making some existing designs impractical. Although decoy routing systems that lessen the burden on participating routers and accommodate asymmetric flows have been proposed, these arguably more deployable systems suffer from security vulnerabilities that put their users at risk of discovery or make them prone to censorship or denial-of-service attacks. In this paper, we propose a technique for supporting route asymmetry in previously symmetric decoy routing systems. The resulting asymmetric solution is more secure than previous asymmetric proposals and provides an option for tiered deployment, allowing more cautious ASes to deploy a lightweight, non-blocking relay station that aids in defending against routing-capable adversaries. We also provide an experimental evaluation of relay station performance on off-the-shelf hardware, as well as additional security improvements to recently proposed systems.
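
For readers unfamiliar with decoy routing, the core mechanism is a covert tag that an on-path relay station can recognize in otherwise ordinary traffic. The sketch below is deliberately simplified (it uses a pre-shared key of our invention, whereas deployed designs such as Telex or TapDance derive the tag with public-key cryptography so that only the station can detect it):

    import hmac, hashlib, os

    STATION_KEY = b"station-secret-key"          # hypothetical pre-shared key

    def make_tag(nonce: bytes) -> bytes:
        """Client: embed this in an innocuous protocol field."""
        return hmac.new(STATION_KEY, nonce, hashlib.sha256).digest()[:16]

    def station_sees(nonce: bytes, field: bytes) -> bool:
        """Relay station: flag flows carrying a valid tag for redirection."""
        return hmac.compare_digest(make_tag(nonce), field)

    nonce = os.urandom(16)
    assert station_sees(nonce, make_tag(nonce))      # tagged flow detected
    assert not station_sees(nonce, os.urandom(16))   # ordinary flow passes

Route asymmetry is hard precisely because the station may see only the upstream or only the downstream half of such a flow.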

Abstract

A deniable authenticated key exchange (DAKE) protocol establishes a secure channel without producing cryptographic evidence of communication. A DAKE offers strong deniability if transcripts provide no evidence even if long-term key material is compromised (offline deniability) and no outsider can obtain evidence even when interactively colluding with an insider (online deniability). Unfortunately, existing strongly deniable DAKEs have not been adopted by secure messaging tools due to security and deployability weaknesses.

In this work, we propose three new strongly deniable key exchange protocols—DAKEZ, ZDH, and XZDH—that are designed to be used in modern secure messaging applications while eliminating the weaknesses of previous approaches. DAKEZ offers strong deniability in synchronous network environments, while ZDH and XZDH can be used to construct asynchronous secure messaging systems with offline and partial online deniability. DAKEZ and XZDH provide forward secrecy against active adversaries, and all three protocols can provide forward secrecy against future quantum adversaries while remaining classically secure if attacks against quantum-resistant cryptosystems are found.

We seek to reduce barriers to adoption by describing our protocols from a practitioner’s perspective, including complete algebraic specifications, cryptographic primitive recommendations, and prototype implementations. We evaluate concrete instantiations of our DAKEs and show that they are the most efficient strongly deniable schemes; with all of our classical security guarantees, our exchanges require only 1 ms of CPU time on a typical desktop computer and at most 464 bytes of data transmission. Our constructions are nearly as efficient as key exchanges with weaker deniability, such as the ones used by the popular OTR and Signal protocols.
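
The following is a generic sketch of the pattern such exchanges build on (hashing several Diffie-Hellman values, mixing long-term and ephemeral keys, into one session key); it is not DAKEZ, ZDH, or XZDH themselves, and the 64-bit group parameters are toys, not secure choices:

    import hashlib, secrets

    P = 0xFFFFFFFFFFFFFFC5   # toy prime (2**64 - 59); real protocols use X25519/X448
    G = 2

    def keypair():
        x = secrets.randbelow(P - 2) + 1
        return x, pow(G, x, P)

    # Long-term identity keys and per-session ephemeral keys for both parties.
    a_id, A_id = keypair()
    a_eph, A_eph = keypair()
    b_id, B_id = keypair()
    b_eph, B_eph = keypair()

    def derive(dh_values):
        h = hashlib.sha256()
        for v in dh_values:
            h.update(v.to_bytes(8, "big"))
        return h.hexdigest()

    # Each side mixes identity-ephemeral and ephemeral-ephemeral shares,
    # so the session key is both fresh and tied to the identity keys.
    k_alice = derive([pow(B_id, a_eph, P), pow(B_eph, a_id, P), pow(B_eph, a_eph, P)])
    k_bob   = derive([pow(A_eph, b_id, P), pow(A_id, b_eph, P), pow(A_eph, b_eph, P)])
    assert k_alice == k_bob

Because each DH value can be computed by either party alone given the other's public keys, a transcript never proves who actually participated; this is the intuition behind offline deniability in this family of protocols.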

Abstract

In recent years, much research has been conducted on processing privacy policies and presenting them in ways that are easy for users to understand. However, understanding privacy policies has little utility if the website's data processing code does not match the privacy policy. Although systems have been proposed to achieve compliance of internal software with access control policies, they assume a large trusted computing base and are not designed to provide a proof of compliance to an end user. We design Mitigator, a system that enforces compliance of a website's source code with a privacy policy model while addressing these two drawbacks of previous work. We use trusted hardware platforms to provide a guarantee to end users that their data is only handled by code that is compliant with the privacy policy. The end user need only trust a small module in the hardware of the remote back-end machine and related libraries, but not the entire OS. We also provide a proof-of-concept implementation of Mitigator and evaluate its latency. We conclude that it incurs only a small overhead with respect to an unmodified system that does not provide such a guarantee.
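
The guarantee hinges on remote attestation: before handing over data, the user's side checks that the attestation binds the expected code measurement to the expected policy hash. A conceptual sketch follows (field names are ours; real attestation, e.g., SGX quotes, involves signatures from the CPU verified against the vendor's infrastructure):

    import hashlib

    EXPECTED_CODE_HASH   = hashlib.sha256(b"compliant-handler-v1").hexdigest()
    EXPECTED_POLICY_HASH = hashlib.sha256(b"privacy-policy-v1").hexdigest()

    def verify_attestation(quote: dict) -> bool:
        """Accept only if the attested code and policy are the expected ones."""
        return (quote.get("code_hash") == EXPECTED_CODE_HASH and
                quote.get("policy_hash") == EXPECTED_POLICY_HASH)

    quote = {"code_hash": EXPECTED_CODE_HASH, "policy_hash": EXPECTED_POLICY_HASH}
    if verify_attestation(quote):
        print("attested: data will only reach policy-compliant code")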

Abstract

Users of social applications like to be notified when their friends are online. Typically, this is done by a central server keeping track of who is online and offline, as well as of all of the users’ “buddy lists”, which contain sensitive information. We present DP5, a cryptographic service that implements online presence indication in a privacy-friendly way. DP5 allows clients to register their online presence and query the presence of their list of friends while keeping this list secret. Besides presence, high-integrity status updates are supported, to facilitate key update and rendezvous protocols. While infrastructure services are required for DP5 to operate, they are designed to not require any long-term secrets and provide perfect forward secrecy in case of compromise. We provide security arguments for the indistinguishability properties of the protocol, as well as an evaluation of its scalability and performance.
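
A simplified view of the lookup structure (our sketch; DP5 additionally splits registration and lookup across epochs and retrieves table entries via private information retrieval so that the infrastructure learns nothing):

    import hmac, hashlib

    def epoch_id(shared_key: bytes, epoch: int) -> bytes:
        """Per-epoch label that only the two friends can compute."""
        return hmac.new(shared_key, epoch.to_bytes(8, "big"),
                        hashlib.sha256).digest()[:16]

    k_alice_bob = b"alice-bob-shared-key"   # established out of band
    epoch = 4242

    # Alice registers her presence under an unlinkable per-epoch label ...
    presence_table = {epoch_id(k_alice_bob, epoch): b"online"}
    # ... and Bob, holding the same shared key, can look her up.
    print(presence_table.get(epoch_id(k_alice_bob, epoch)))   # b'online'

Because the labels are rederived every epoch from pairwise keys, neither the servers nor non-friends can link registrations across epochs or enumerate a user's buddy list.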

Abstract

Private Information Retrieval (PIR), despite being well studied, is computationally costly and arduous to scale. We explore lower-cost relaxations of information-theoretic PIR, based on dummy queries, sparse vectors, and compositions with an anonymity system. We prove the security of each scheme using a flexible differentially private definition for private queries that can capture notions of imperfect privacy. We show that basic schemes are weak, but some of them can be made arbitrarily safe by composing them with large anonymity systems.
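
As a worked example of the dummy-query family (our back-of-the-envelope reading of the definition, not the paper's exact theorem), suppose the client requests the real index together with d-1 uniformly random dummies:

    import math, random

    def dummy_query(real_index: int, n: int, d: int) -> list[int]:
        dummies = random.sample([i for i in range(n) if i != real_index], d - 1)
        queries = dummies + [real_index]
        random.shuffle(queries)           # hide which position is real
        return queries

    n, d = 10_000, 100
    print(dummy_query(7, n, d))

    # A fixed index appears with probability 1 if it is the real one and
    # (d-1)/(n-1) otherwise, so this basic scheme leaks at most
    eps = math.log((n - 1) / (d - 1))
    print(f"eps = {eps:.2f}")             # smaller d => weaker privacy

This makes precise why the basic schemes are weak on their own: driving eps down requires d to approach n, at which point the scheme is as costly as downloading the whole database.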

Abstract

Secret sharing schemes are desirable across a variety of real-world settings due to the security and privacy properties they can provide, such as availability and separation of privilege. However, transitioning secret sharing schemes from theoretical research to practical use must account for gaps in achieving these properties that arise from the realities of concrete implementations, threat models, and use cases. We present a formalization and analysis, using Ellison's notion of ceremonies, that demonstrates how simple variations in the use cases of secret sharing schemes can result in the loss of some security properties, a result that cannot be derived from analysis of the underlying cryptographic protocol alone. Our framework accounts for such variations in the design and analysis of secret sharing implementations by presenting a more detailed, user-focused process and by defining previously overlooked assumptions about user roles and actions within the scheme. We identify existing mechanisms that, when applied to an appropriate implementation, close these security gaps. We present an implementation that includes these mechanisms, together with a corresponding security assessment using our framework.
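
For context, the cryptographic layer that such ceremonies wrap is typically Shamir secret sharing; a textbook 2-of-3 sketch follows (the paper's point is that the surrounding human steps, not this arithmetic, are where properties can be lost):

    import secrets

    P = 2**127 - 1   # prime field modulus

    def split(secret: int, k: int = 2, n: int = 3):
        """Evaluate a random degree-(k-1) polynomial with f(0) = secret."""
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
        return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 over the prime field."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = split(123456789)
    assert reconstruct(shares[:2]) == 123456789   # any 2 of 3 shares suffice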

Abstract

The growth of content delivery networks (CDNs) has engendered centralized control over the serving of internet content. An unwanted by-product of this growth is that CDNs are fast becoming global arbiters of which content requests are allowed and which are blocked in an attempt to stanch malicious traffic. In particular, honest users (especially those behind shared IP addresses, including users of privacy tools such as Tor, VPNs, and I2P) can be unfairly targeted by 'catch-all' solutions that assume these users are acting maliciously. In this work, we provide a solution to prevent users from being exposed to a disproportionate number of internet challenges such as CAPTCHAs. These challenges are at the very least annoying and at their worst, when coupled with bad implementations, can completely block access to web resources. We detail a 1-RTT cryptographic protocol (based on an implementation of an oblivious pseudorandom function) that allows users to receive a significant number of anonymous tokens for each challenge solution they provide. These tokens can be exchanged in the future for access without having to interact with a challenge. We have implemented our initial solution in a browser extension named "Privacy Pass", and have worked with the Cloudflare CDN to deploy compatible server-side components in their infrastructure. However, we envisage that our solution could be used more generally for many applications where anonymous and honest access can be granted (e.g., anonymous wiki editing). The anonymity guarantee of our solution makes it immediately appropriate for users of Tor, VPNs, and I2P. We also publish figures from Cloudflare indicating the potential impact of a global release of Privacy Pass.
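
The token flow can be sketched as a blinded evaluation of a pseudorandom function (the sketch below uses a toy multiplicative group of our choosing; the deployed protocol works over elliptic curves and adds a zero-knowledge proof that the server used its committed key):

    import hashlib, math, secrets

    P = 2**127 - 1                  # toy prime; NOT production parameters
    Q = P - 1                       # order of the exponent group
    G = 3

    def h2g(token: bytes) -> int:
        """Toy hash-to-group; real schemes use proper hash-to-curve."""
        return pow(G, int.from_bytes(hashlib.sha256(token).digest(), "big") % Q, P)

    server_key = secrets.randbelow(Q - 1) + 1

    # Client: blind a fresh random token with an invertible blinding factor.
    token = secrets.token_bytes(32)
    r = secrets.randbelow(Q - 1) + 1
    while math.gcd(r, Q) != 1:
        r = secrets.randbelow(Q - 1) + 1
    blinded = pow(h2g(token), r, P)

    # Server: evaluate the PRF on the blinded element after the client
    # solves one challenge; the server learns nothing about the token.
    signed = pow(blinded, server_key, P)

    # Client: unblind and store; (token, unblinded) is later redeemed in
    # place of solving another CAPTCHA.
    unblinded = pow(signed, pow(r, -1, Q), P)
    assert unblinded == pow(h2g(token), server_key, P)

Because the server only ever sees blinded elements at issuance and unlinkable (token, evaluation) pairs at redemption, it cannot connect a redemption back to the challenge session that produced it.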