We conducted only a preliminary evaluation in this study; a fuller account of the usability of the hierarchy remains open. As future work, a thorough investigation of the evaluation methodology is needed, including user studies and comprehensive metrics for navigation performance. Acknowledgements The work described in this article is an extension study funded by the National Natural Science Foundation of China (Grant No. 70903008). It is also supported by the COGS Lab in the School of Government, Beijing Normal University. Heartfelt thanks also go to the anonymous reviewers.
Habituation is a key factor behind the lack of attention paid to permission authorization dialogs during third-party application installation. Various solutions have been proposed to direct users' attention towards these permissions. However, users continue to ignore the dialogs and authorize dangerous permissions, which leads to security and privacy breaches.
We leverage eye-tracking to approach this problem and propose a mechanism for enforcing user attention towards application permissions before users are able to authorize them. We initially deactivate the dialog's decision buttons and use feedback from the eye-tracker to ensure that the user has looked at the permissions; once user attention has been established, the buttons are activated. We implemented a prototype of our approach as a Chrome browser extension and conducted a user study on Facebook's application authorization dialogs. Using participants' permission identification, eye-gaze fixations, and authorization decisions, we evaluate participants' attention towards permissions. Participants who used our approach on authorization dialogs were able to identify the permissions better than the rest of the participants, even after the habituation period, and their average number of eye-gaze fixations on the permission text was significantly higher than that of the other group. However, examining the rate at which participants denied a dangerous and unnecessary permission, the hypothesized increase from the control group to the treatment group was not statistically significant.
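The gating logic described above can be sketched as follows: the decision buttons stay disabled until the eye-tracker has reported at least one fixation on every permission region. This is a minimal illustrative sketch, not the authors' actual Chrome-extension implementation; all class and method names here are assumptions.

```python
class GazeGatedDialog:
    """Toy model of an authorization dialog whose buttons unlock only
    after every permission line has received a gaze fixation."""

    def __init__(self, permission_ids):
        self.pending = set(permission_ids)   # permissions not yet looked at
        self.buttons_enabled = False

    def on_fixation(self, region_id):
        """Called for each eye-tracker fixation landing on a UI region."""
        self.pending.discard(region_id)
        if not self.pending:                 # user has viewed every permission
            self.buttons_enabled = True
        return self.buttons_enabled


dialog = GazeGatedDialog(["email", "friend_list", "post_on_behalf"])
dialog.on_fixation("email")
dialog.on_fixation("friend_list")
assert not dialog.buttons_enabled            # one permission still unread
dialog.on_fixation("post_on_behalf")
assert dialog.buttons_enabled                # all permissions fixated: unlock
```

In the real system, fixation events would arrive asynchronously from the eye-tracker driver rather than via direct method calls.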
Research shows that context is important to the privacy perceptions associated with technology. With Bluetooth Low Energy beacons, one of the latest technologies for providing proximity and indoor tracking, the current identifiers that characterize a beacon are not sufficient for ordinary users to make informed privacy decisions about the location information that could be shared. One solution would be to have standardized category and privacy labels, produced by beacon providers or an independent third party. An alternative is an approach driven by users, for users. In this paper, we propose a novel crowdsourcing-based approach to introduce elements of context into beacon encounters. We demonstrate the effectiveness of this approach through a user study in which participants use a crowd-based mobile app, designed as a scavenger-hunt game, to collect beacon category and privacy information. Results show that our approach was effective in helping users label beacons according to the specific context of a given beacon encounter, as well as the privacy perceptions associated with it, with an accuracy of 92% and an acceptance rate of 82% for recommended crowd labels. Lastly, we show how crowdsourcing for context can be used towards a user-centric framework for privacy management during beacon encounters.
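The core of any such crowd-labeling scheme is aggregating individual reports into a recommended label. A minimal sketch, assuming simple majority voting with a report threshold (the paper's actual aggregation rule and label vocabulary are not specified here):

```python
from collections import Counter

def recommend_label(reports, min_reports=3):
    """Return the majority label among crowd reports for one beacon,
    or None if there are too few reports to recommend anything.
    The threshold of 3 is an assumed parameter for illustration."""
    if len(reports) < min_reports:
        return None
    label, _count = Counter(reports).most_common(1)[0]
    return label

reports = ["store entrance", "store entrance", "checkout", "store entrance"]
print(recommend_label(reports))   # -> store entrance
```

A deployed system would likely also weight reporters by reputation and expire stale labels when a beacon is moved.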
We present the design, implementation, and evaluation of MATRIX, a system developed to protect the privacy of mobile device users from location inference and sensor side-channel attacks. MATRIX gives users control and visibility over location and sensor (e.g., accelerometer and gyroscope) accesses by mobile apps. It implements a PrivoScope service that audits all location and sensor accesses by apps on the device and generates real-time notifications and graphs for visualizing these accesses, and a Synthetic Location service that enables users to provide obfuscated or synthetic location trajectories or sensor traces to apps they find useful but do not trust with their private information. The services are designed to be extensible and easy to use, hiding all of the underlying complexity from users. MATRIX also implements a Location Provider component that generates realistic privacy-preserving synthetic identities and trajectories for users by incorporating traffic information from historical Google Maps Directions API data, and accelerations from statistical information gathered in user driving experiments. These mobility patterns are generated by modeling and solving user schedules with a randomized linear program and user driving behavior with a quadratic program. We extensively evaluated MATRIX using user studies, popular location-driven apps, and machine learning techniques, and demonstrate that it is portable to most Android devices globally, is reliable, has low overhead, and generates synthetic trajectories that are difficult for an adversary to distinguish from real mobility trajectories.
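To convey the idea of handing an untrusted app an obfuscated trace rather than the true one, here is a deliberately simplified stand-in: bounded random jitter on each location fix. MATRIX itself synthesizes whole realistic trajectories from traffic and driving-behavior models; this Gaussian perturbation is only an illustrative baseline, and the noise scale is an assumed parameter.

```python
import random

def obfuscate_trace(trace, sigma_deg=0.001, seed=None):
    """Perturb each (lat, lon) fix with Gaussian noise of roughly
    ~100 m scale, so an app still sees a plausible nearby trace.
    This is NOT the MATRIX algorithm, just a naive baseline."""
    rng = random.Random(seed)
    return [(lat + rng.gauss(0, sigma_deg),
             lon + rng.gauss(0, sigma_deg))
            for lat, lon in trace]

true_trace = [(37.7749, -122.4194), (37.7751, -122.4189)]
fake_trace = obfuscate_trace(true_trace, seed=42)
```

Naive jitter like this is detectable by an adversary who models physical movement constraints, which is exactly why MATRIX instead generates trajectories from schedule and driving models.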
An important line of privacy research is investigating the design of systems for secure input and output (I/O) within Internet browsers. These systems would allow users' information to be encrypted and decrypted by the browser, so that the specific web applications only have access to the users' information in encrypted form. The state-of-the-art approach for a secure I/O system within Internet browsers is ShadowCrypt, a system created by UC Berkeley researchers. This paper explores the limitations of ShadowCrypt in order to provide a foundation for the general principles that must be followed when designing a secure I/O system within Internet browsers. First, we developed a comprehensive UI attack that cannot be mitigated with popular UI defenses, and tested the efficacy of the attack through a user study administered on Amazon Mechanical Turk. Only 1 of the 59 participants who were under attack successfully noticed the UI attack, which validates the stealthiness of the attack. Second, we present multiple attack vectors against ShadowCrypt that do not rely upon UI deception. These attack vectors expose the privacy weaknesses of Shadow DOM, the key browser primitive leveraged by ShadowCrypt. Finally, we present a sketch of potential countermeasures that can enable the design of future secure I/O systems within Internet browsers.
The ability to track users’ activities across different websites and visits is a key tool in advertising and surveillance. The HTML5 DeviceMotion interface creates a new opportunity for such tracking via fingerprinting of smartphone motion sensors. We study the feasibility of carrying out such fingerprinting under real-world constraints and on a large scale. In particular, we collect measurements from several hundred users under realistic scenarios and show that the state-of-the-art techniques provide very low accuracy in these settings. We then improve fingerprinting accuracy by changing the classifier as well as incorporating auxiliary information. We also show how to perform fingerprinting in an open-world scenario where one must distinguish between known and previously unseen users.
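A toy illustration of the fingerprinting pipeline described above: summarize each device's accelerometer trace into simple statistical features, then match a new trace to the closest known device. Real pipelines use richer spectral features and stronger classifiers; the feature choice and nearest-centroid matching here are assumptions made purely for illustration.

```python
import math

def features(trace):
    """Reduce a raw accelerometer trace to (mean, std) features.
    Per-device calibration offsets show up in exactly these statistics."""
    n = len(trace)
    mean = sum(trace) / n
    var = sum((x - mean) ** 2 for x in trace) / n
    return (mean, math.sqrt(var))

def identify(known, trace):
    """known: {device_id: feature tuple}. Return the device whose
    stored features are closest (squared Euclidean) to the new trace."""
    f = features(trace)
    return min(known, key=lambda d: sum((a - b) ** 2
                                        for a, b in zip(known[d], f)))

known = {"phoneA": features([0.10, 0.12, 0.11, 0.13]),
         "phoneB": features([0.50, 0.55, 0.52, 0.57])}
print(identify(known, [0.11, 0.12, 0.10, 0.14]))   # -> phoneA
```

The open-world setting mentioned above additionally requires a rejection threshold, so that a trace far from every known centroid is reported as a previously unseen user.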
We next consider the problem of developing fingerprinting countermeasures; we evaluate the usability of a previously proposed obfuscation technique and a newly developed quantization technique via a large-scale user study. We find that both techniques are able to drastically reduce fingerprinting accuracy without significantly impacting the utility of the sensors in web applications.
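The quantization countermeasure can be sketched in a few lines: rounding raw sensor readings to a coarse step removes the tiny per-device calibration offsets that fingerprinting relies on, while keeping values usable for typical web applications. The step size below is an assumed parameter, not the value from the study.

```python
def quantize(readings, step=0.05):
    """Snap each sensor reading to the nearest multiple of `step`,
    collapsing device-specific sub-step offsets into shared bins."""
    return [round(x / step) * step for x in readings]

raw = [0.123, 0.127, 0.131]   # near-identical, device-revealing values
print(quantize(raw))           # values collapse into the 0.10 / 0.15 bins
```

Choosing the step is the usability trade-off the study evaluates: too coarse and motion-driven features (e.g., tilt games) degrade, too fine and the calibration fingerprint survives.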
Over the past decade, research has explored managing the availability of shared personal online data, with particular focus on longitudinal aspects of privacy. Yet, there is no taxonomy that takes user perspective and technical approaches into account. In this work, we systematize research on longitudinal privacy management of publicly shared personal online data from these two perspectives: user studies capturing users’ interactions related to the availability of their online data and technical proposals limiting the availability of data. Following a systematic approach, we derive conflicts between these two sides that have not yet been addressed appropriately, resulting in a list of challenging open problems to be tackled by future research. While limitations of data availability in proposed approaches and real systems are mostly time-based, users’ desired models are rather complex, taking into account content, audience, and the context in which data has been shared. Our systematic evaluation reveals interesting challenges broadly categorized by expiration conditions, data co-ownership, user awareness, and security and trust.
The EU General Data Protection Regulation (GDPR) is one of the most demanding and comprehensive privacy regulations to date. A year after it went into effect, we study its impact on the landscape of privacy policies online. We conduct the first longitudinal, in-depth, and at-scale assessment of privacy policies before and after the GDPR. We gauge the complete consumption cycle of these policies, from the first user impressions to the compliance assessment. We create a diverse corpus of two sets of 6,278 unique English-language privacy policies from inside and outside the EU, covering their pre-GDPR and post-GDPR versions. The results of our tests and analyses suggest that the GDPR has been a catalyst for a major overhaul of privacy policies inside and outside the EU. This overhaul, manifesting in extensive textual changes, especially for EU-based websites, brings mixed benefits to users.
While privacy policies have become considerably longer, our user study with 470 participants on Amazon MTurk indicates a significant improvement in the visual representation of privacy policies from the users' perspective for EU websites. We further develop a new workflow for the automated assessment of requirements in privacy policies. Using this workflow, we show that post-GDPR privacy policies cover more data practices and are more consistent with seven compliance requirements. We also assess how transparent organizations are about their privacy practices by performing a specificity analysis, in which we find evidence of positive changes triggered by the GDPR, with the specificity level improving on average. Still, we find the landscape of privacy policies to be in a transitional phase; many policies still do not meet several key GDPR requirements, or their improved coverage comes with reduced specificity.
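The idea of an automated coverage check can be conveyed with a minimal sketch: scan a policy's text for phrases signalling each data practice of interest. The study's actual workflow is far more sophisticated; this keyword lookup, and the phrase lists in it, are assumptions made only to illustrate the concept of coverage assessment.

```python
# Hypothetical practice/phrase mapping -- NOT the study's taxonomy.
PRACTICES = {
    "data_retention":      ["retain", "retention period", "how long we keep"],
    "right_to_erasure":    ["delete your data", "right to erasure"],
    "third_party_sharing": ["share with third parties", "partners"],
}

def coverage(policy_text):
    """Return, per practice, whether any signalling phrase appears."""
    text = policy_text.lower()
    return {practice: any(kw in text for kw in keywords)
            for practice, keywords in PRACTICES.items()}

policy = "We retain your data for 30 days and may share with third parties."
print(coverage(policy))
```

Specificity analysis goes one step further than this binary coverage: it asks whether the matched statement names concrete data types, durations, and recipients rather than generic categories.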