Search Results

Showing 1–10 of 11 items for

  • Author: Rachel Greenstadt
Open access

Rachel Greenstadt, Damon McCoy and Carmela Troncoso

Open access

Janith Weerasinghe, Kediel Morales and Rachel Greenstadt

Abstract

Recent studies have shown that machine learning can identify individuals with mental illnesses by analyzing their social media posts. Topics and words related to mental health are among the top predictors. These findings have implications for early detection of mental illnesses. However, they also raise numerous privacy concerns. To fully evaluate the implications for privacy, we analyze the performance of different machine learning models in the absence of tweets that talk about mental illnesses. Our results show that machine learning can be used to make predictions even if the users do not actively talk about their mental illness. To fully understand the implications of these findings, we analyze the features that make these predictions possible. We analyze bag-of-words, word clusters, part-of-speech n-gram features, and topic models to understand the machine learning model and to discover language patterns that differentiate individuals with mental illnesses from a control group. This analysis confirmed some known language patterns and uncovered several new ones. We then discuss possible applications of machine learning to identify mental illnesses and their feasibility, the associated privacy implications, and the feasibility of potential mitigations.
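As a rough illustration of the kind of inference the abstract describes, the sketch below builds bag-of-words profiles with explicit mental-health terms filtered out and assigns a user to the most similar group. Everything here is a hypothetical toy (the `EXPLICIT_TERMS` list, the centroid-style group profiles, the cosine scoring); the paper's actual models are far richer. The point is only that signal can remain after direct mentions are removed.

```python
from collections import Counter

# Hypothetical keyword list: explicit mental-health terms are excluded from
# the features, mirroring the setting of predicting without direct mentions.
EXPLICIT_TERMS = {"depression", "anxiety", "diagnosed", "therapy"}

def bag_of_words(text):
    """Lowercased token counts, with explicit mental-health terms removed."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    return Counter(t for t in tokens if t and t not in EXPLICIT_TERMS)

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def predict(user_text, group_profiles):
    """Assign the user to the group whose aggregate profile is most similar."""
    vec = bag_of_words(user_text)
    return max(group_profiles, key=lambda g: cosine(vec, group_profiles[g]))
```

A usage example: build a profile per group from pooled posts, then score a new user's text against each profile and take the best match.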

Open access

Claudia Diaz, Rachel Greenstadt and Damon McCoy

Open access

Claudia Diaz, Rachel Greenstadt and Damon McCoy

Open access

Rachel Greenstadt, Damon McCoy and Carmela Troncoso

Open access

Claudia Diaz, Rachel Greenstadt and Damon McCoy

Open access

Rachel Greenstadt, Damon McCoy and Carmela Troncoso

Open access

Rachel Greenstadt, Damon McCoy and Carmela Troncoso

Open access

Claudia Diaz, Rachel Greenstadt and Damon McCoy

Open access

Rebekah Overdorf and Rachel Greenstadt

Abstract

Stylometry is a form of authorship attribution that relies on linguistic information to attribute documents of unknown authorship based on the writing styles of a suspect set of authors. This paper focuses on the cross-domain subproblem, where the known and suspect documents differ in the setting in which they were created. Three distinct domains, Twitter feeds, blog entries, and Reddit comments, are explored in this work. We determine that state-of-the-art methods in stylometry do not perform as well in cross-domain situations (34.3% accuracy) as they do in in-domain situations (83.5% accuracy), and we propose feature-level and classification-level techniques that improve cross-domain accuracy to up to 70%. In addition to testing these approaches on a large real-world dataset, we also examine real-world adversarial cases where an author is actively attempting to hide their identity. Being able to identify authors across domains facilitates linking identities across the Internet, making this a key security and privacy concern: users can take other measures to ensure their anonymity, but due to their unique writing styles, they may not be as anonymous as they believe.
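The attribution setting the abstract describes can be sketched with character n-grams, a style feature commonly used in stylometry because it is less tied to topic than word choice and so can transfer across domains. This is a minimal illustrative assumption, not the paper's feature or classification techniques; `attribute` and its cosine scoring are placeholders for much more sophisticated pipelines.

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-gram counts: a topic-agnostic style representation."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def attribute(unknown_doc, candidate_docs, n=3):
    """Pick the candidate author whose known writing is most similar
    to the unknown document, by cosine similarity over n-gram counts."""
    u = char_ngrams(unknown_doc, n)

    def sim(doc):
        v = char_ngrams(doc, n)
        dot = sum(u[k] * v[k] for k in u if k in v)
        nu = sum(x * x for x in u.values()) ** 0.5
        nv = sum(x * x for x in v.values()) ** 0.5
        return dot / (nu * nv) if nu and nv else 0.0

    return max(candidate_docs, key=lambda a: sim(candidate_docs[a]))
```

In the cross-domain case, `candidate_docs` would hold writing from one platform (say, blog entries) while `unknown_doc` comes from another (say, a Reddit comment), which is exactly where the abstract reports accuracy dropping.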