Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/134773
Type: Conference paper
Title: Oriole: Thwarting Privacy Against Trustworthy Deep Learning Models
Author: Chen, L.
Wang, H.
Zhao, B.Z.H.
Xue, M.
Qian, H.
Citation: Lecture Notes in Computer Science, 2021 / Baek, J., Ruj, S. (eds), vol. 13083, pp. 550-568
Publisher: Springer International Publishing
Publisher Place: Switzerland
Issue Date: 2021
Series/Report no.: Lecture Notes in Computer Science; 13083
ISBN: 9783030905668
ISSN: 0302-9743 (print); 1611-3349 (electronic)
Conference Name: Australasian Conference on Information Security and Privacy (ACISP) (1-3 Dec 2021, held online)
Editor: Baek, J.
Ruj, S.
Statement of Responsibility: Liuqiao Chen, Hu Wang, Benjamin Zi Hao Zhao, Minhui Xue, and Haifeng Qian
Abstract: Deep neural networks have achieved unprecedented success in face recognition, to the point that any individual can crawl others' images from the Internet without explicit permission and train a high-precision face recognition model, creating a serious violation of privacy. Recently, a well-known system named Fawkes [37] (published at USENIX Security 2020) claimed that this privacy threat can be neutralized by uploading cloaked user images instead of the originals. In this paper, we present Oriole, a system that combines the advantages of data poisoning attacks and evasion attacks to thwart the protection offered by Fawkes, by training the attacker's face recognition model on multi-cloaked images generated by Oriole. Consequently, the face recognition accuracy of the attack model is maintained and the weaknesses of Fawkes are revealed. Experimental results show that the proposed Oriole system effectively interferes with the performance of the Fawkes system and achieves promising attack results. Our ablation study highlights the principal factors that affect the performance of the Oriole system: the DSSIM perturbation budget, the ratio of leaked clean user images, and the number of multi-cloaks per uncloaked image. We also identify and discuss at length the vulnerabilities of Fawkes. We hope that the methodology presented in this paper will alert the security community to the need for more robust privacy-preserving deep learning models.
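The DSSIM perturbation budget mentioned in the abstract refers to the structural dissimilarity index, DSSIM = (1 - SSIM) / 2, which bounds how far a cloaked image may visually drift from the original. The following Python sketch (not code from the paper; the budget value, image sizes, and the dssim helper are illustrative assumptions) shows how such a budget check could be computed with scikit-image:

    # Minimal DSSIM budget check, assuming float images in [0, 1].
    import numpy as np
    from skimage.metrics import structural_similarity

    def dssim(original: np.ndarray, cloaked: np.ndarray) -> float:
        # DSSIM = (1 - SSIM) / 2; 0 means the images are identical.
        ssim = structural_similarity(original, cloaked,
                                     channel_axis=-1, data_range=1.0)
        return (1.0 - ssim) / 2.0

    # Hypothetical usage: accept a cloak only while it stays under budget.
    budget = 0.007                     # illustrative value, not from the paper
    rng = np.random.default_rng(0)
    face = rng.random((112, 112, 3))   # stand-in for a face image
    cloak = np.clip(face + rng.normal(0.0, 0.01, face.shape), 0.0, 1.0)
    print(dssim(face, cloak) <= budget)

A larger budget permits stronger cloaks (and, per the ablation study above, affects how Oriole's multi-cloaks behave), at the cost of more visible distortion in the uploaded image.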
Keywords: Data poisoning; Deep learning privacy; Facial recognition; Multi-cloaks
Rights: © Springer Nature Switzerland AG 2021
DOI: 10.1007/978-3-030-90567-5_28
Grant ID: http://purl.org/au-research/grants/arc/DP210102670
Published version: https://link.springer.com/conference/acisp
Appears in Collections: Australian Institute for Machine Learning publications

Files in This Item:
There are no files associated with this item.

