Internet-connected voice-controlled speakers, also known as smart speakers, are increasingly popular due to their convenience for everyday tasks such as asking about the weather forecast or playing music. However, this convenience comes with privacy risks: smart speakers must constantly listen in order to activate when the “wake word” is spoken, and they are known to transmit audio from their environment and record it on cloud servers. This paper focuses on the privacy risk from smart speaker misactivations, i.e., cases where a device activates, transmits, and/or records audio from its environment even though the wake word was not spoken. To enable repeatable, scalable experiments that expose smart speakers to conversations without wake words, we play audio from popular TV shows spanning diverse genres. After playing two rounds of 134 hours of content from 12 TV shows near popular smart speakers in both the US and the UK, we observed misactivation rates of up to 0.95 per hour, or 1.43 misactivations for every 10,000 words spoken, and for some devices 10% of misactivations lasted at least 10 seconds. We characterize the sources of these misactivations and their implications for consumers, and discuss potential mitigations.
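The three metrics reported above (misactivations per hour, misactivations per 10,000 words spoken, and the fraction of misactivations lasting at least 10 seconds) can be computed directly from an activation log. The sketch below illustrates this with hypothetical numbers; the durations, word count, and activation count are illustrative assumptions, not the paper's data.

```python
def misactivation_metrics(durations_s, hours_played, words_spoken):
    """Compute the three summary metrics from a misactivation log.

    durations_s  -- duration (in seconds) of each observed misactivation
    hours_played -- total hours of audio played near the device
    words_spoken -- total words spoken in the played audio
    Returns (rate per hour, rate per 10,000 words,
             fraction of misactivations lasting >= 10 s).
    """
    n = len(durations_s)
    per_hour = n / hours_played
    per_10k_words = n / words_spoken * 10_000
    frac_long = sum(1 for d in durations_s if d >= 10) / n if n else 0.0
    return per_hour, per_10k_words, frac_long


# Hypothetical example: 127 misactivations over 134 hours of TV audio
# containing roughly 890,000 spoken words; 13 activations lasted >= 10 s.
durations = [3.2] * 114 + [12.5] * 13
per_hour, per_10k, frac_long = misactivation_metrics(
    durations, hours_played=134, words_spoken=890_000)
```

With these made-up inputs the rates come out near the ranges reported in the abstract (about 0.95 per hour and 1.4 per 10,000 words), which is only meant to show how the units relate, not to reproduce the study's measurements.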