VideoDP: A Flexible Platform for Video Analytics with Differential Privacy

Han Wang, Shangyu Xie, and Yuan Hong
Illinois Institute of Technology

Abstract

Massive amounts of video are ubiquitously generated by personal devices and dedicated video recording facilities. Analyzing such data would be extremely beneficial in the real world (e.g., for urban traffic analysis). However, videos contain considerable sensitive information, such as human faces, identities, and activities. Most existing video sanitization techniques simply obfuscate the video by detecting and blurring regions of interest (e.g., faces, vehicle plates, locations, and timestamps). Unfortunately, the privacy leakage in a blurred video cannot be effectively bounded, especially against unknown background knowledge. In this paper, to the best of our knowledge, we propose the first differentially private video analytics platform (VideoDP), which flexibly supports different video analyses with a rigorous privacy guarantee. Given an input video, VideoDP randomly generates a utility-driven private video in which adding or removing any sensitive visual element (e.g., a human or an object) does not significantly affect the output video. Then, different video analyses requested by untrusted video analysts can be flexibly performed over the sanitized video with differential privacy. Finally, we conduct experiments on real videos, and the results demonstrate that VideoDP generates accurate results for video analytics.
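The guarantee sketched above can be read as an element-level variant of ε-differential privacy. The following formulation is an illustrative reading of the abstract rather than notation taken from the paper: let V and V′ be neighboring videos, i.e., V′ is obtained from V by adding or removing a single sensitive visual element (one person or one object), and let A denote the randomized sanitization mechanism. Then, for every set S of possible output videos,

    Pr[A(V) ∈ S] ≤ e^ε · Pr[A(V′) ∈ S].

Intuitively, the privacy parameter ε bounds how much any single element's presence or absence can shift the distribution over sanitized videos, which is what allows the leakage to be bounded even against unknown background knowledge.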
