Search Results

1–10 of 29 items

  • "User Studies" x
Clear All


Abstract

The European Union’s General Data Protection Regulation (GDPR) requires websites to ask for consent to the use of cookies for specific purposes. This enlarges the relevant design space for consent dialogs. Websites could try to maximize click-through rates and positive consent decisions, even at the risk of users agreeing to more purposes than intended. We evaluate a practice observed on popular websites by conducting an experiment with one control and two treatment groups (N = 150 university students in two countries). We hypothesize that users’ consent decisions are influenced by (1) the number of options, connecting to the theory of choice proliferation, and (2) the presence of a highlighted default button (“select all”), connecting to theories of social norms and deception in consumer research. The results show that participants who see a default button accept cookies for more purposes than the control group, while being less able to correctly recall their choice. After being reminded of their choice, they regret it more often and perceive the consent dialog as more deceptive than the control group does. Whether users are presented with one or three purposes has no significant effect on their decisions and perceptions. We discuss the results and outline policy implications.
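As a rough illustration only (not the paper’s actual analysis), the per-group counts of accepted purposes could be compared with a non-parametric test; the numbers below are made-up toy data, not the study’s results.

```python
# Illustrative only: toy data, not the study's results. A Mann-Whitney U test
# compares the number of purposes accepted by a control group and a group shown
# a highlighted "select all" default button.
from scipy.stats import mannwhitneyu

control = [0, 1, 1, 0, 2, 1, 0, 1]    # purposes accepted, no default button (hypothetical)
treatment = [3, 3, 2, 3, 1, 3, 3, 2]  # purposes accepted, "select all" shown (hypothetical)

stat, p = mannwhitneyu(control, treatment, alternative="two-sided")
print(f"U={stat}, p={p:.4f}")
```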

We did a preliminary evaluation in this study; more details on the usability of the hierarchy are expected. As future work, a thorough investigation into the evaluation methodology is needed, including user studies and comprehensive metrics for navigation performance. Acknowledgements: The work described in this article is an extension study funded by the National Natural Science Foundation of China (Grant No. 70903008). It is also supported by the COGS Lab in the School of Government, Beijing Normal University. Heartfelt thanks also go to the anonymous reviewers.

Abstract

Habituation is a key factor behind the lack of attention towards permission authorization dialogs during third party application installation. Various solutions have been proposed to combat the problem of achieving attention switch towards permissions. However, users continue to ignore these dialogs, and authorize dangerous permissions, which leads to security and privacy breaches.

We leverage eye-tracking to approach this problem, and propose a mechanism for enforcing user attention towards application permissions before users are able to authorize them. We deactivate the dialog’s decision buttons initially, and use feedback from the eye-tracker to ensure that the user has looked at the permissions. After determining user attention, the buttons are activated. We implemented a prototype of our approach as a Chrome browser extension, and conducted a user study on Facebook’s application authorization dialogs. Using participants’ permission identification, eye-gaze fixations, and authorization decisions, we evaluate participants’ attention towards permissions. The participants who used our approach on authorization dialogs were able to identify the permissions better than the rest of the participants, even after the habituation period. Their average number of eye-gaze fixations on the permission text was significantly higher than that of the other participants. However, examining the rate at which participants denied a dangerous and unnecessary permission, the hypothesized increase from the control group to the treatment group was not statistically significant.
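The attention-gating mechanism can be sketched compactly. The snippet below is a minimal illustration under assumed names and thresholds, not the authors’ Chrome extension: the decision buttons stay disabled until enough gaze fixations fall inside the permission-text region.

```python
# Minimal sketch (not the authors' extension): gate the dialog's decision buttons
# on eye-gaze fixations landing inside the permission-text region.
# Fixation, PermissionRegion, and the thresholds are illustrative assumptions.
from dataclasses import dataclass

MIN_FIXATIONS = 3        # assumed number of fixations required before unlocking
MIN_DURATION_MS = 100.0  # assumed minimum duration for a fixation to count

@dataclass
class Fixation:
    x: float             # screen coordinates reported by the eye tracker
    y: float
    duration_ms: float

@dataclass
class PermissionRegion:
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, f: Fixation) -> bool:
        return self.left <= f.x <= self.right and self.top <= f.y <= self.bottom

def buttons_enabled(fixations: list[Fixation], region: PermissionRegion) -> bool:
    """Enable the Allow/Deny buttons only after enough fixations land on the permission text."""
    on_text = [f for f in fixations
               if region.contains(f) and f.duration_ms >= MIN_DURATION_MS]
    return len(on_text) >= MIN_FIXATIONS
```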

Abstract

Research shows that context is important to the privacy perceptions associated with technology. With Bluetooth Low Energy beacons, one of the latest technologies for providing proximity and indoor tracking, the current identifiers that characterize a beacon are not sufficient for ordinary users to make informed privacy decisions about the location information that could be shared. One solution would be to have standardized category and privacy labels, produced by beacon providers or an independent third party. An alternative solution is to find an approach driven by users, for users. In this paper, we propose a novel crowdsourcing-based approach to introduce elements of context in beacon encounters. We demonstrate the effectiveness of this approach through a user study, where participants use a crowd-based mobile app designed to collect beacon category and privacy information as a scavenger hunt game. Results show that our approach was effective in helping users label beacons according to the specific context of a given beacon encounter, as well as the privacy perceptions associated with it. This labeling was done with an accuracy of 92%, and with an acceptance rate of 82% of all recommended crowd labels. Lastly, we conclusively show how crowdsourcing for context can be used towards a user-centric framework for privacy management during beacon encounters.
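The crowd-labeling step can be illustrated with a simple aggregation rule; the vote and agreement thresholds below are assumptions made for this sketch, not the mechanism used in the paper’s app.

```python
# Illustrative majority-vote aggregation of crowd labels for a beacon encounter.
# min_votes and min_agreement are assumed thresholds, not values from the paper.
from collections import Counter

def recommend_label(crowd_labels, min_votes=3, min_agreement=0.6):
    """Return the majority category for a beacon, or None if agreement is too low."""
    if len(crowd_labels) < min_votes:
        return None
    label, count = Counter(crowd_labels).most_common(1)[0]
    return label if count / len(crowd_labels) >= min_agreement else None

# Example: five volunteers labelled the same beacon during the scavenger-hunt game.
print(recommend_label(["cafe", "cafe", "cafe", "retail", "cafe"]))  # -> "cafe"
```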

Abstract

We present the design, implementation and evaluation of a system, called MATRIX, developed to protect the privacy of mobile device users from location inference and sensor side-channel attacks. MATRIX gives users control and visibility over location and sensor (e.g., accelerometer and gyroscope) accesses by mobile apps. It implements a PrivoScope service that audits all location and sensor accesses by apps on the device and generates real-time notifications and graphs for visualizing these accesses, and a Synthetic Location service that enables users to provide obfuscated or synthetic location trajectories or sensor traces to apps they find useful but do not trust with their private information. The services are designed to be extensible and easy to use, hiding all of the underlying complexity from users. MATRIX also implements a Location Provider component that generates realistic privacy-preserving synthetic identities and trajectories for users by incorporating traffic information using historical data from the Google Maps Directions API, and accelerations using statistical information from user driving experiments. These mobility patterns are generated by modeling and solving the user’s schedule as a randomized linear program and the user’s driving behavior as a quadratic program. We extensively evaluated MATRIX using user studies, popular location-driven apps, and machine learning techniques, and demonstrate that it is portable to most Android devices globally, is reliable, has low overhead, and generates synthetic trajectories that are difficult for an adversary to distinguish from real mobility trajectories.
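As a heavily simplified sketch of what trajectory synthesis involves, the snippet below interpolates a route between waypoints at a jittered driving speed; the paper’s randomized linear program and quadratic program are not reproduced, and all names and parameters are illustrative.

```python
# Toy sketch only: straight-line interpolation between waypoints at a jittered
# driving speed. MATRIX itself derives schedules from a randomized linear program
# and driving behavior from a quadratic program; none of that is reproduced here.
import math
import random

def synthetic_trajectory(waypoints, speed_mps=12.0, step_s=1.0, jitter=0.15):
    """Yield (lat, lon) samples along straight segments between waypoints."""
    for (lat1, lon1), (lat2, lon2) in zip(waypoints, waypoints[1:]):
        # crude planar distance in metres (adequate for short urban segments)
        dx = (lon2 - lon1) * 111_320 * math.cos(math.radians(lat1))
        dy = (lat2 - lat1) * 110_540
        dist = math.hypot(dx, dy)
        if dist == 0:
            continue
        steps = max(1, int(dist / (speed_mps * step_s)))
        for i in range(steps):
            v = speed_mps * (1 + random.uniform(-jitter, jitter))  # jittered speed
            frac = min(1.0, i * v * step_s / dist)
            yield lat1 + frac * (lat2 - lat1), lon1 + frac * (lon2 - lon1)

samples = list(synthetic_trajectory([(40.7128, -74.0060), (40.7180, -74.0000)]))
```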

Abstract

An important line of privacy research is investigating the design of systems for secure input and output (I/O) within Internet browsers. These systems would allow users’ information to be encrypted and decrypted by the browser, so that web applications only have access to the users’ information in encrypted form. The state-of-the-art approach for a secure I/O system within Internet browsers is a system called ShadowCrypt created by UC Berkeley researchers. This paper explores the limitations of ShadowCrypt in order to provide a foundation for the general principles that must be followed when designing a secure I/O system within Internet browsers. First, we developed a comprehensive UI attack that cannot be mitigated with popular UI defenses, and tested the efficacy of the attack through a user study administered on Amazon Mechanical Turk. Only 1 of the 59 participants who were under attack successfully noticed the UI attack, which validates the stealthiness of the attack. Second, we present multiple attack vectors against ShadowCrypt that do not rely upon UI deception. These attack vectors expose the privacy weaknesses of Shadow DOM, the key browser primitive leveraged by ShadowCrypt. Finally, we present a sketch of potential countermeasures that can enable the design of future secure I/O systems within Internet browsers.

Abstract

The ability to track users’ activities across different websites and visits is a key tool in advertising and surveillance. The HTML5 DeviceMotion interface creates a new opportunity for such tracking via fingerprinting of smartphone motion sensors. We study the feasibility of carrying out such fingerprinting under real-world constraints and on a large scale. In particular, we collect measurements from several hundred users under realistic scenarios and show that the state-of-the-art techniques provide very low accuracy in these settings. We then improve fingerprinting accuracy by changing the classifier as well as incorporating auxiliary information. We also show how to perform fingerprinting in an open-world scenario where one must distinguish between known and previously unseen users.
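A minimal sketch of this style of fingerprinting, assuming generic per-axis summary statistics and an off-the-shelf classifier rather than the paper’s exact feature set and pipeline:

```python
# Hedged sketch: per-axis summary statistics from accelerometer traces feed a
# standard classifier. The paper's actual features, classifier choice, and
# auxiliary information are not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(trace: np.ndarray) -> np.ndarray:
    """trace: (n_samples, 3) accelerometer readings -> per-axis summary statistics."""
    return np.concatenate([trace.mean(axis=0), trace.std(axis=0),
                           np.abs(np.fft.rfft(trace, axis=0)).mean(axis=0)])

# X, y would hold one feature vector and one device label per recorded trace:
#   clf = RandomForestClassifier(n_estimators=200).fit(X, y)
# In an open-world setting, a prediction is only accepted when the classifier's
# confidence (clf.predict_proba) exceeds a tuned threshold; otherwise the trace
# is treated as coming from a previously unseen device.
```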

We next consider the problem of developing fingerprinting countermeasures; we evaluate the usability of a previously proposed obfuscation technique and a newly developed quantization technique via a large-scale user study. We find that both techniques are able to drastically reduce fingerprinting accuracy without significantly impacting the utility of the sensors in web applications.
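The quantization idea can be illustrated in a few lines; the bin width below is an assumed value for the sketch, not the one evaluated in the study.

```python
# Minimal illustration of quantization as a countermeasure: round DeviceMotion
# readings to a coarse grid so small device-specific calibration offsets are
# masked, while the values stay usable for typical web applications.
def quantize(value: float, bin_width: float = 0.05) -> float:
    return round(value / bin_width) * bin_width

reading = {"x": 0.0312, "y": -0.0187, "z": 9.8123}   # m/s^2, illustrative sample
print({axis: quantize(v) for axis, v in reading.items()})
# coarsened readings, roughly x -> 0.05, y -> 0.0, z -> 9.8
```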

Abstract

Over the past decade, research has explored managing the availability of shared personal online data, with particular focus on longitudinal aspects of privacy. Yet, there is no taxonomy that takes both the user perspective and technical approaches into account. In this work, we systematize research on longitudinal privacy management of publicly shared personal online data from these two perspectives: user studies capturing users’ interactions related to the availability of their online data, and technical proposals limiting the availability of data. Following a systematic approach, we derive conflicts between these two sides that have not yet been addressed appropriately, resulting in a list of challenging open problems to be tackled by future research. While limitations of data availability in proposed approaches and real systems are mostly time-based, users’ desired models are rather complex, taking into account content, audience, and the context in which data has been shared. Our systematic evaluation reveals interesting challenges, broadly categorized into expiration conditions, data co-ownership, user awareness, and security and trust.

Abstract

The EU General Data Protection Regulation (GDPR) is one of the most demanding and comprehensive privacy regulations of all time. A year after it went into effect, we study its impact on the landscape of privacy policies online. We conduct the first longitudinal, in-depth, and at-scale assessment of privacy policies before and after the GDPR. We gauge the complete consumption cycle of these policies, from users’ first impressions to compliance assessment. We create a diverse corpus of two sets of 6,278 unique English-language privacy policies from inside and outside the EU, covering their pre-GDPR and post-GDPR versions. The results of our tests and analyses suggest that the GDPR has been a catalyst for a major overhaul of the privacy policies inside and outside the EU. This overhaul of the policies, manifesting in extensive textual changes, especially for the EU-based websites, brings mixed benefits to users.

While the privacy policies have become considerably longer, our user study with 470 participants on Amazon MTurk indicates a significant improvement in the visual representation of privacy policies from the users’ perspective for the EU websites. We further develop a new workflow for the automated assessment of requirements in privacy policies. Using this workflow, we show that privacy policies cover more data practices and are more consistent with seven compliance requirements after the GDPR. We also assess how transparent the organizations are with their privacy practices by performing specificity analysis. In this analysis, we find evidence for positive changes triggered by the GDPR, with the specificity level improving on average. Still, we find the landscape of privacy policies to be in a transitional phase; many policies still do not meet several key GDPR requirements, or their improved coverage comes with reduced specificity.
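A toy sketch of an automated requirement check, assuming simple phrase matching rather than the paper’s trained workflow; the requirement names and phrase lists are illustrative assumptions.

```python
# Toy requirement-coverage check using phrase matching. The requirement names and
# phrase lists are illustrative assumptions, not the paper's actual workflow.
REQUIREMENT_PHRASES = {
    "right to erasure":   ["right to erasure", "right to be forgotten", "delete your data"],
    "data portability":   ["data portability", "export your data"],
    "lawful basis":       ["legal basis", "lawful basis", "legitimate interest"],
    "controller contact": ["data protection officer", "contact our data protection"],
}

def coverage(policy_text: str) -> dict:
    """Flag which assumed GDPR requirements are mentioned in a policy's text."""
    text = policy_text.lower()
    return {req: any(phrase in text for phrase in phrases)
            for req, phrases in REQUIREMENT_PHRASES.items()}

sample = "You may contact our data protection officer to exercise your right to erasure."
print(coverage(sample))
```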