Natã M. Barbosa, Joon S. Park, Yaxing Yao, and Yang Wang
Smart home devices challenge the long-held notion that the home is a private and protected place. With this in mind, many developers market their products with a focus on privacy in order to gain user trust, yet privacy tensions arise with the growing adoption of these devices and the risk of inappropriate data practices in the smart home (e.g., secondary use of collected data). It is therefore important for developers to consider individual user preferences and how they would change under varying circumstances, in order to identify actionable steps toward developing user trust and exercising privacy-preserving data practices. To help achieve this, we present the design and evaluation of machine learning models that predict (1) personalized allow/deny decisions for different information flows involving various attributes, purposes, and devices (AUC .868), (2) what circumstances may change original decisions (AUC .899), and (3) how much (in US dollars) one may be willing to pay or accept in exchange for smart home privacy (RMSE 12.459). We show how developers can use our models to derive actionable steps toward privacy-preserving data practices in the smart home.
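To make the first prediction task concrete, the sketch below (not the authors' actual models or data) shows the general shape of such a pipeline: each information flow is described by categorical features (device, attribute, purpose), one-hot encoded, fed to a logistic-regression classifier that predicts an allow/deny decision, and evaluated with AUC on held-out flows. All feature vocabularies, the synthetic labeling rule, and the training setup here are illustrative assumptions, using only the Python standard library.

```python
import math
import random

# Hypothetical vocabularies for describing an information flow; the
# actual study uses richer sets of devices, attributes, and purposes.
DEVICES = ["camera", "speaker", "thermostat"]
ATTRIBUTES = ["video", "audio", "temperature"]
PURPOSES = ["safety", "advertising", "device_function"]

def one_hot(flow):
    """Encode a (device, attribute, purpose) flow as a 0/1 feature vector."""
    vec = []
    for value, vocab in zip(flow, (DEVICES, ATTRIBUTES, PURPOSES)):
        vec.extend(1.0 if value == v else 0.0 for v in vocab)
    return vec

def synthetic_label(flow):
    """Toy ground truth: users tend to deny advertising-driven flows."""
    _device, _attribute, purpose = flow
    p_allow = 0.1 if purpose == "advertising" else 0.9
    return 1 if random.random() < p_allow else 0  # 1 = allow, 0 = deny

def train_logreg(X, y, lr=0.5, epochs=200):
    """Fit logistic regression by plain stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of log loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict_proba(w, b, x):
    """Predicted probability that the user allows this flow."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def auc(y_true, scores):
    """AUC as the probability a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
flows = [(random.choice(DEVICES), random.choice(ATTRIBUTES),
          random.choice(PURPOSES)) for _ in range(300)]
X = [one_hot(f) for f in flows]
y = [synthetic_label(f) for f in flows]

w, b = train_logreg(X[:200], y[:200])          # train on 200 flows
scores = [predict_proba(w, b, x) for x in X[200:]]  # score held-out 100
print(f"held-out AUC: {auc(y[200:], scores):.3f}")
```

The same featurize-train-evaluate loop extends naturally to the other two tasks: predicting which circumstances flip a decision is another classification problem, while predicting a dollar amount is a regression problem scored with RMSE instead of AUC.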