Adversarial Patch Attacks on Deep-Learning-Based Face Recognition Systems Using Generative Adversarial Networks
Abstract
1. Introduction
- We propose an adversarial patch method for attacking face recognition systems that is applicable in the physical world. It requires no knowledge of the deep learning model's parameters (black-box) to achieve an effective attack.
- To demonstrate the reliability of our approach, we performed comprehensive attack tests over all one-to-one combinations. Both the number of subjects and the number of tests per subject exceed those in previous literature. The results show a dodging attack success rate of 57.99% and an impersonation attack success rate of 46.78% in the digital world; in the physical world, the dodging attack success rate reaches 81.77% and the impersonation attack success rate reaches 63.85%.
- The proposed method uses an adversarial patch that occupies only a small area of the face, rather than an adversarial example that covers the whole face, so the attacker can adjust the noise region as required. In our case, we hide the adversarial perturbation in eyeglasses so that the wearer is judged to be someone else. As a result, it is difficult for a layperson to recognize the attack intent, which poses a significant threat to face recognition systems.
- Using two face databases with different numbers of enrolled people, we found that the database size affects the attack success rate: the success rate increases as the number of people in the database grows. In short, larger databases are easier to attack.
- We propose a novel defense mechanism against the GAN-based adversarial patch method. The results show that it detects almost all dodging attacks and more than half of the impersonation attacks, demonstrating high defense effectiveness.
- We explored the relationship between the decision threshold and attack success and showed that the two are correlated. In addition, we attacked different models in a no-box setting, showing that our attack method is transferable.
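The eyeglass-patch idea in the contributions above amounts to a masking operation: the generator produces perturbation pixels, and only the pixels falling inside an eyeglass-shaped mask replace the original face pixels. Below is a minimal sketch of that overlay step, not the authors' code; the image size, mask location, and function names are illustrative assumptions.

```python
import numpy as np

def apply_patch(face, patch, mask):
    """Keep the face everywhere except where mask == 1,
    where the adversarial patch pixels are substituted."""
    assert face.shape == patch.shape == mask.shape
    return face * (1 - mask) + patch * mask

rng = np.random.default_rng(0)
face = rng.random((112, 112, 3))    # aligned face image, values in [0, 1]
patch = rng.random((112, 112, 3))   # stand-in for the generator's output
mask = np.zeros((112, 112, 3))
mask[35:55, 20:92, :] = 1.0         # rough eyeglass band across the eyes (assumed)

adv = apply_patch(face, patch, mask)
# Pixels outside the mask are untouched; pixels inside come from the patch.
assert np.allclose(adv[0, 0], face[0, 0])
assert np.allclose(adv[40, 50], patch[40, 50])
```

Because the perturbation is confined to the mask, the rest of the face stays photorealistic, which is what makes the attack inconspicuous to a human observer.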
2. Background
2.1. Face Recognition
2.2. Adversarial Attack
2.2.1. Adversarial Example
2.2.2. Adversarial Patch
2.3. Attention Area of Face Recognition
3. Related Works
4. Proposed Method
Algorithm 1 Training AdvFace in dodging attack
Algorithm 2 Training AdvFace in impersonation attack
4.1. Generator
4.2. Discriminator
4.3. Face Matcher
5. Experimental Results
5.1. Evaluation Metric
5.2. Datasets
5.3. Face Recognition Systems in Digital World
5.4. Face Recognition Systems in Physical World
5.5. Results
5.5.1. Dodging Attacks in Digital World
5.5.2. Impersonation Attacks in Digital World
5.5.3. Dodging Attacks in Physical World
5.5.4. Impersonation Attacks in Physical World
5.6. Comparison of Dodging Attack Success Rates of Different Methods
Defense Mechanism
5.7. Threshold
5.8. Time Efficiency
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
DL | Deep Learning |
FR | Face Recognition |
ABC | Automatic Border Control |
GAN | Generative Adversarial Network |
VGG | Visual Geometry Group |
LFW | Labeled Faces in the Wild |
FGSM | Fast Gradient Sign Method |
BIM | Basic Iterative Method |
PGD | Projected Gradient Descent |
FAR | False Accept Rate |
References
- Hariri, W. Efficient masked face recognition method during the COVID-19 pandemic. Signal Image Video Process. (SIViP) 2022, 16, 605–612. [Google Scholar] [CrossRef] [PubMed]
- Facial Recognition Market Worth $8.5 Billion by 2025. Available online: https://www.marketsandmarkets.com/PressReleases/facial-recognition.asp (accessed on 19 September 2022).
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- Brown, T.B.; Mané, D.; Roy, A.; Abadi, M.; Gilmer, J. Adversarial Patch. Available online: https://arxiv.org/abs/1712.09665 (accessed on 19 September 2022).
- Kurakin, A.; Goodfellow, I.J.; Bengio, S. Adversarial examples in the physical world. In Artificial Intelligence Safety and Security; Yampolskiy, R.V., Ed.; Chapman & Hall/CRC: New York, NY, USA, 2018; pp. 99–112. [Google Scholar]
- Li, J.; Schmidt, F.; Kolter, Z. Adversarial camera stickers: A physical camera-based attack on deep learning systems. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019. [Google Scholar]
- Bhambri, S.; Muku, S.; Tulasi, A.; Buduru, A.B. A Survey of Black-Box Adversarial Attacks on Computer Vision Models. Available online: https://arxiv.org/abs/1912.01667 (accessed on 19 November 2022).
- Deb, D.; Zhang, J.; Jain, A.K. AdvFaces: Adversarial Face Synthesis. In Proceedings of the 2020 IEEE International Joint Conference on Biometrics (IJCB), Houston, TX, USA, 28 June–1 October 2020. [Google Scholar]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the Neural Information Processing Systems (NIPS), Montreal, QC, Canada, 8–13 December 2014. [Google Scholar]
- Ye, Q.; Huang, P.; Zhang, Z.; Zheng, Y.; Fu, L.; Yang, W. Multiview Learning With Robust Double-Sided Twin SVM. IEEE Trans. Cybern. 2022, 52, 12745–12758. [Google Scholar] [CrossRef] [PubMed]
- Lahaw, Z.B.; Essaidani, D.; Seddik, H. Robust Face Recognition Approaches Using PCA, ICA, LDA Based on DWT, and SVM Algorithms. In Proceedings of the 2018 41st International Conference on Telecommunications and Signal Processing (TSP), Athens, Greece, 4–6 July 2018. [Google Scholar]
- Ruan, Y.; Xiao, Y.; Hao, Z.; Liu, B. A Convex Model for Support Vector Distance Metric Learning. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 3533–3546. [Google Scholar] [CrossRef]
- Taigman, Y.; Yang, M.; Ranzato, M.; Wolf, L. DeepFace: Closing the Gap to Human-Level Performance in Face Verification. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
- Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A Unified Embedding for Face Recognition and Clustering. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar]
- Cao, Q.; Shen, L.; Xie, W.; Parkhi, O.M.; Zisserman, A. VGGFace2: A Dataset for Recognising Faces across Pose and Age. In Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Los Alamitos, CA, USA, 15–19 May 2018. [Google Scholar]
- Chen, J.; Chen, J.; Wang, Z.; Liang, C.; Lin, C.-W. Identity-Aware Face Super-Resolution for Low-Resolution Face Recognition. IEEE Signal Process. Lett. 2020, 27, 645–649. [Google Scholar] [CrossRef]
- Wang, Q.; Wu, T.; Zheng, H.; Guo, G. Hierarchical Pyramid Diverse Attention Networks for Face Recognition. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Usman, M.; Latif, S.; Qadir, J. Using deep autoencoders for facial expression recognition. In Proceedings of the 2017 13th International Conference on Emerging Technologies (ICET), Islamabad, Pakistan, 27–28 December 2017. [Google Scholar]
- Iranmanesh, S.M.; Riggan, B.; Hu, S.; Nasrabadi, N.M. Coupled generative adversarial network for heterogeneous face recognition. Image Vis. Comput. 2020, 94, 1–10. [Google Scholar]
- Rong, C.; Zhang, X.; Lin, Y. Feature-Improving Generative Adversarial Network for Face Frontalization. IEEE Access 2020, 8, 68842–68851. [Google Scholar] [CrossRef]
- Liu, X.; Guo, Z.; You, J.; Vijaya Kumar, B.V.K. Dependency-Aware Attention Control for Image Set-Based Face Recognition. IEEE Trans. Inf. Forensics Secur. 2020, 15, 1501–1512. [Google Scholar] [CrossRef] [Green Version]
- Rao, Y.; Lu, J.; Zhou, J. Attention-Aware Deep Reinforcement Learning for Video Face Recognition. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. [Google Scholar]
- Deng, J.; Guo, J.; Xue, N.; Zafeiriou, S. ArcFace: Additive Angular Margin Loss for Deep Face Recognition. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
- Bromley, J.; Guyon, I.; LeCun, Y.; Säckinger, E.; Shah, R. Signature Verification using a “Siamese” Time Delay Neural Network. Int. J. Pattern Recognit. Artif. Intell. 1993, 7, 669–688. [Google Scholar] [CrossRef] [Green Version]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. Available online: https://arxiv.org/abs/1409.1556 (accessed on 21 September 2022).
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Los Alamitos, CA, USA, 27–30 June 2016. [Google Scholar]
- Fuad, M.T.H.; Fime, A.A.; Sikder, D.; Iftee, M.A.R.; Rabbi, J.; Al-Rakhami, M.S.; Gumaei, A.; Sen, O.; Fuad, M.; Islam, M.N. Recent Advances in Deep Learning Techniques for Face Recognition. IEEE Access 2021, 9, 99112–99142. [Google Scholar] [CrossRef]
- Pidhorskyi, S.; Adjeroh, D.; Doretto, G. Adversarial Latent Autoencoders. Available online: https://arxiv.org/abs/2004.04467 (accessed on 19 November 2022).
- Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks. Available online: https://arxiv.org/abs/1706.06083 (accessed on 22 September 2022).
- Carlini, N.; Wagner, D. Towards Evaluating the Robustness of Neural Networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–24 May 2017. [Google Scholar]
- Thys, S.; Ranst, W.V.; Goedeme, T. Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Long Beach, CA, USA, 16–20 June 2019. [Google Scholar]
- Liu, A.; Liu, X.; Fan, J.; Ma, Y.; Zhang, A.; Xie, H.; Tao, D. Perceptual-Sensitive GAN for Generating Adversarial Patches. Proc. AAAI Conf. Artif. Intell. 2019, 33, 1028–1035. [Google Scholar] [CrossRef] [Green Version]
- Castanon, G.; Byrne, J. Visualizing and Quantifying Discriminative Features for Face Recognition. In Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi’an, China, 15–19 May 2018. [Google Scholar]
- Sharif, M.; Bhagavatula, S.; Bauer, L.; Reiter, M.K. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016. [Google Scholar]
- Zhou, Z.; Tang, D.; Wang, X.; Han, W.; Liu, X.; Zhang, K. Invisible Mask: Practical Attacks on Face Recognition with Infrared. Available online: https://arxiv.org/abs/1803.04683 (accessed on 21 September 2022).
- Komkov, S.; Petiushko, A. AdvHat: Real-World Adversarial Attack on ArcFace Face ID System. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021. [Google Scholar]
- Pautov, M.; Melnikov, G.; Kaziakhmedov, E.; Kireev, K.; Petiushko, A. On Adversarial Patches: Real-World Attack on ArcFace-100 Face Recognition System. In Proceedings of the 2019 International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON), Novosibirsk, Russia, 21–27 October 2019. [Google Scholar]
- Shen, M.; Liao, Z.; Zhu, L.; Xu, K.; Du, X. VLA: A Practical Visible Light-Based Attack on Face Recognition Systems in Physical World. ACM Interact. Mob. Wearable Ubiquitous Technol. 2019, 3, 1–19. [Google Scholar] [CrossRef]
- Nguyen, D.; Arora, S.S.; Wu, Y.; Yang, H. Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
- Yin, B.; Wang, W.; Yao, T.; Guo, J.; Kong, Z.; Ding, S.; Li, J.; Liu, C. Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition. Available online: https://arxiv.org/abs/2105.03162 (accessed on 21 September 2022).
- Xiao, C.; Li, B.; Zhu, J.-Y.; He, W.; Liu, M.; Song, D. Generating Adversarial Examples with Adversarial Networks. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018. [Google Scholar]
- Tong, L.; Chen, Z.; Ni, J.; Cheng, W.; Song, D.; Chen, H.; Vorobeychik, Y. FACESEC: A Fine-grained Robustness Evaluation Framework for Face Recognition Systems. Available online: https://arxiv.org/abs/2104.04107 (accessed on 18 November 2022).
- Shen, M.; Yu, H.; Zhu, L.; Xu, K.; Li, Q.; Hu, J. Effective and Robust Physical-World Attacks on Deep Learning Face Recognition Systems. IEEE Trans. Inf. Forensics Secur. 2021, 16, 4063–4077. [Google Scholar] [CrossRef]
- Deb, D. AdvFaces: Adversarial Face Synthesis. Available online: https://github.com/ronny3050/AdvFaces (accessed on 19 November 2022).
- Serengil, S.I. Deepface. Available online: https://github.com/serengil/deepface (accessed on 22 September 2022).
- Singh, M.; Arora, A.S. Computer Aided Face Liveness Detection with Facial Thermography. Wirel. Pers. Commun. 2020, 111, 2465–2476. [Google Scholar] [CrossRef]
- Theagarajan, R.; Bhanu, B. Defending Black Box Facial Recognition Classifiers Against Adversarial Attacks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
- Chen, P.-Y.; Zhang, H.; Sharma, Y.; Yi, J.; Hsieh, C.-J. ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models. Available online: https://arxiv.org/abs/1708.03999 (accessed on 19 November 2022).
Previous Works | Database | Method | Accuracy | Year |
---|---|---|---|---|
[10] | Tufts face | MvRDTSVM | 91.55% | 2022 |
[10] | Tufts face | MvFRDTSVM | 88.82% | 2022 |
[11] | AT&T face | DWT + PCA + SVM | 96% | 2018 |
[11] | AT&T face | DWT + LDA + SVM | 96% | 2018 |
[11] | AT&T face | DWT + ICA + SVM | 94.5% | 2018 |
[12] | PubFig83 | CSV-DML | 84.6% | 2022 |
[13] | LFW | DeepFace-ensemble | 97.35% | 2014 |
[14] | LFW | Siamese Network (ZFNet + Inception-v1) | 99.63% | 2015 |
[15] | VGGFace2 | ResNet-50 | 99.6% | 2018 |
[16] | LFW | LightCNN-v29 | 98.98% | 2020 |
[17] | VGGFace2-FP | PDA | 95.32% | 2020 |
[18] | VGGFace2-FP | HOG + Autoencoders | 99.60% | 2017 |
[19] | CASIA NIR-VIS 2.0 | CpGAN | 96.63% | 2020 |
[20] | LFW | FI-GAN | 99.6% | 2020 |
[21] | IJB-A | DAC | 0.976 ± 0.01 | 2020 |
[22] | YTF | ADRL | 96.52 ± 0.54% | 2017 |
Related Works | Attack Method | Situation | Generate Object | Adversarial Capacity |
---|---|---|---|---|
[34] | L-BFGS | Physical | Patches | White-Box |
[35] | LED | Physical | LED Infrared | White-Box |
[36] | FGSM | Physical | Patches | White-Box |
[37] | MI-FGSM | Physical | Patches | White-Box |
[38] | VLA | Physical | Visible Light | Black-Box |
[39] | Transformation-Invariant Adversarial Pattern | Physical | Visible Light | Black-Box |
[8] | GAN-based | Digital | Digital Face Image | Black-Box |
[40] | GAN-based | Physical | Adv-Makeup | Black-Box |
[41] | GAN-based | Digital | Digital Image | Black-Box |
[42] | FACESEC | Physical | Eyeglass | Black-Box |
[43] | GAN-based | Physical | Sticker | Black-Box |
Layer | Type | Filters/Neurons | Stride | Padding |
---|---|---|---|---|
1 | Conv | 64 (kernel size = 7) | 1 | 3 |
2 | Conv | 128 (kernel size = 4) | 2 | - |
3 | Conv | 256 (kernel size = 4) | 3 | - |
- | residual block | kernel size = 3 | - | - |
- | residual block | kernel size = 3 | - | - |
- | residual block | kernel size = 3 | - | - |
4 | Upsampling | 128 (kernel size = 5) | 1 | 2 |
5 | Upsampling | 64 (kernel size = 5) | 1 | 2 |
6 | Conv | 3 (kernel size = 7) | 1 | 3 |
Layer | Type | Filters/Neurons | Stride | Padding |
---|---|---|---|---|
1 | Conv | 32 (kernel size = 4) | 2 | - |
2 | Conv | 64 (kernel size = 4) | 2 | - |
3 | Conv | 128 (kernel size = 4) | 2 | - |
4 | Conv | 256 (kernel size = 4) | 2 | - |
5 | Conv | 512 (kernel size = 4) | 2 | - |
6 | Conv | 1 (kernel size = 1) | 1 | - |
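As a sanity check on the discriminator table above, the spatial size after each convolution follows floor((n + 2p − k)/s) + 1. The table omits padding for the stride-2 layers, so the sketch below assumes padding 1 and a 112 × 112 input; both values are assumptions, not figures stated in the paper.

```python
def conv_out(n, k, s, p):
    """Spatial output size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

size = 112  # assumed input resolution
trace = [size]
# five stride-2 convolutions with kernel size 4 (padding 1 assumed)
for _ in range(5):
    size = conv_out(size, k=4, s=2, p=1)
    trace.append(size)
# final 1x1 convolution (layer 6) keeps the spatial size
size = conv_out(size, k=1, s=1, p=0)
trace.append(size)

print(trace)  # [112, 56, 28, 14, 7, 3, 3]
```

Under these assumptions the discriminator ends in a 3 × 3 grid of real/fake scores, a PatchGAN-style output; the exact sizes depend on the padding the authors actually used.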
Number | Attack Success Rate |
---|---|
No.1 | 92.18% |
No.2 | 1.56% |
No.3 | 10.93% |
No.4 | 4.68% |
No.5 | 35.93% |
No.6 | 32.81% |
No.7 | 62.5% |
No.8 | 85.93% |
No.9 | 100% |
No.10 | 93.75% |
Average | 52.02% |
Number | Attack Success Rate |
---|---|
No.1 | 0% |
No.2 | 31.25% |
No.3 | 78.13% |
No.4 | 0% |
No.5 | 37.5% |
No.6 | 0% |
No.7 | 0% |
No.8 | 28.13% |
No.9 | 34% |
No.10 | 32.81% |
Average | 24.18% |
Number | Attack Success Rate | Number | Attack Success Rate |
---|---|---|---|
No.1 | 10.9% | No.12 | 0% |
No.2 | 100% | No.13 | 98.4% |
No.3 | 100% | No.14 | 1.5% |
No.4 | 43.75% | No.15 | 100% |
No.5 | 57.8% | No.16 | 100% |
No.6 | 0% | No.17 | 100% |
No.7 | 1.5% | No.18 | 96.8% |
No.8 | 100% | No.19 | 39% |
No.9 | 56.2% | No.20 | 95.3% |
No.10 | 54.6% | No.21 | 34.3% |
No.11 | 35.9% | No.22 | 50% |
Average | 57.99% |
Origin Number \ Attack Number | No.1 | No.2 | No.3 | No.4 | No.5 | No.6 | No.7 | No.8 | No.9 | No.10 | Average |
---|---|---|---|---|---|---|---|---|---|---|---|
No.1 | - | 90% | 10% | 30% | 70% | 90% | 60% | 0% | 30% | 70% | 50% |
No.2 | 50% | - | 10% | 30% | 60% | 90% | 60% | 30% | 50% | 60% | 50% |
No.3 | 30% | 80% | - | 10% | 60% | 80% | 60% | 10% | 60% | 50% | 48.9% |
No.4 | 70% | 90% | 30% | - | 80% | 90% | 60% | 10% | 30% | 70% | 58.9% |
No.5 | 40% | 70% | 10% | 30% | - | 80% | 60% | 20% | 40% | 60% | 45.6% |
No.6 | 10% | 50% | 0% | 20% | 40% | - | 40% | 30% | 10% | 10% | 23.3% |
No.7 | 50% | 80% | 20% | 30% | 50% | 70% | - | 0% | 10% | 50% | 40% |
No.8 | 10% | 90% | 10% | 20% | 60% | 90% | 50% | - | 30% | 70% | 47.8% |
No.9 | 70% | 90% | 10% | 40% | 40% | 80% | 40% | 10% | - | 80% | 51.1% |
No.10 | 30% | 100% | 20% | 30% | 70% | 90% | 60% | 20% | 50% | - | 52.2% |
Total |  |  |  |  |  |  |  |  |  |  | 48.78% |
Number | Attack Success Rate |
---|---|
No.1 | 45.4% |
No.2 | 54.5% |
No.3 | 36.3% |
No.4 | 27.2% |
No.5 | 36.3% |
No.6 | 54.5% |
No.7 | 18.1% |
No.8 | 45.4% |
No.9 | 27.2% |
No.10 | 54.5% |
Average | 39.94% |
Number | Attack Success Rate |
---|---|
No.1 | 81.8% |
No.2 | 72.7% |
No.3 | 63.6% |
No.4 | 100% |
No.5 | 72.7% |
No.6 | 81.8% |
No.7 | 81.8% |
No.8 | 90.9% |
No.9 | 81.8% |
No.10 | 90.9% |
Average | 81.77% |
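The reported averages can be reproduced directly from the per-subject rows. For the physical-world dodging table above, averaging the ten per-subject rates gives 81.8%, which matches the reported 81.77% up to the rounding of the individual entries (e.g., 9 successes out of 11 trials ≈ 81.8%). A quick check, taking the rates verbatim from the table:

```python
# Per-subject dodging success rates in the physical world (from the table above).
rates = [81.8, 72.7, 63.6, 100.0, 72.7, 81.8, 81.8, 90.9, 81.8, 90.9]
average = sum(rates) / len(rates)
print(round(average, 2))  # 81.8 from the rounded entries; the paper reports 81.77%
```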
Origin Number \ Attack Number | No.1 | No.2 | No.3 | No.4 | No.5 | No.6 | No.7 | No.8 | No.9 | No.10 | Average |
---|---|---|---|---|---|---|---|---|---|---|---|
No.1 | - | 90% | 0% | 50% | 90% | 90% | 80% | 40% | 90% | 30% | 62.2% |
No.2 | 50% | - | 0% | 40% | 80% | 90% | 60% | 50% | 90% | 90% | 63.3% |
No.3 | 70% | 80% | - | 50% | 90% | 90% | 90% | 40% | 90% | 30% | 70% |
No.4 | 60% | 100% | 20% | - | 100% | 90% | 70% | 40% | 100% | 80% | 73.3% |
No.5 | 40% | 80% | 20% | 40% | - | 80% | 90% | 40% | 90% | 20% | 55.5% |
No.6 | 40% | 80% | 10% | 50% | 80% | - | 80% | 40% | 90% | 20% | 54.4% |
No.7 | 50% | 90% | 30% | 50% | 90% | 90% | - | 30% | 90% | 50% | 63.3% |
No.8 | 40% | 70% | 10% | 50% | 90% | 90% | 90% | - | 80% | 60% | 64.4% |
No.9 | 20% | 100% | 40% | 50% | 90% | 90% | 80% | 40% | - | 70% | 64.4% |
No.10 | 40% | 100% | 30% | 50% | 80% | 90% | 90% | 40% | 90% | - | 67.7% |
Total |  |  |  |  |  |  |  |  |  |  | 63.85% |
Original No. | Attack Target | Attack Success Rate |
---|---|---|
No.1 | No.3 | 70% |
No.2 | No.10 | 100% |
No.3 | No.9 | 40% |
No.4 | No.6 | 50% |
No.5 | No.4 | 100% |
No.6 | No.7 | 90% |
No.7 | No.5 | 90% |
No.8 | No.2 | 50% |
No.9 | No.4 | 100% |
No.10 | No.2 | 90% |
Average | 78% |
Literature | Generate Object | Face Recognition Model | Adversarial Capacity | Number of Subjects | Number of People in Database | Dodging Attack’s Success Rate |
---|---|---|---|---|---|---|
[34] | Patches | VGG-Face | White-Box | 3 | 10 | 97.22% |
[38] | Visible Light | FaceNet | Black-Box | 9 | 5749 (LFW) | 85.7% |
[39] | Visible Light | Commercial | Black-Box | 10 | 50 | 70% |
[42] | Eyeglass | FaceNet | Black-Box | 10 | 5749 (LFW) | 54% |
Ours | Patches | FaceNet | Black-Box | 10 | 22 | 81.77% |
Literature | Generate Object | Face Recognition Model | Adversarial Capacity | Number of Subjects | Number of Attack Targets | Number of People in Database | Impersonation Attack's Success Rate |
---|---|---|---|---|---|---|---|
[34] | Patches | VGG-Face | White-Box | 3 | 1 | 10 | 75% |
[38] | Visible Light | FaceNet | Black-Box | 9 | 60 | 5749 (LFW) | 32.4% |
[39] | Visible Light | Commercial | Black-Box | 25 | 1 | 50 | 60% |
[40] | Adv-Makeup | Commercial | Black-Box | 1 | 1 | 20 | 52.92% |
[43] | Sticker | FaceNet | Black-Box | 20 | 3 | 20 (VolFace) | 55.32% |
Ours | Patches | FaceNet | Black-Box | 10 | 10 | 22 | 63.85% |
Ours | Patches | FaceNet | Black-Box | 10 | 1 | 22 | 78% |
Number | Attack Success Rate |
---|---|
No.1 | 0% |
No.2 | 0% |
No.3 | 6.25% |
No.4 | 0% |
No.5 | 0% |
No.6 | 0% |
No.7 | 0% |
No.8 | 0% |
No.9 | 0% |
No.10 | 0% |
Average | 0.06% |
Number | Defense Rate |
---|---|
No.1 | 100% |
No.2 | 81.25% |
No.3 | 85.93% |
No.4 | 78.12% |
No.5 | 100% |
No.6 | 56.25% |
No.7 | 100% |
No.8 | 100% |
No.9 | 100% |
No.10 | 95.3% |
Average | 89.69% |
Origin Number \ Attack Number | No.1 | No.2 | No.3 | No.4 | No.5 | No.6 | No.7 | No.8 | No.9 | No.10 | Average |
---|---|---|---|---|---|---|---|---|---|---|---|
No.1 | - | 50% | 10% | 30% | 0% | 80% | 30% | 0% | 0% | 70% | 30% |
No.2 | 50% | - | 0% | 30% | 10% | 90% | 10% | 0% | 0% | 60% | 27.7% |
No.3 | 20% | 80% | - | 10% | 0% | 80% | 20% | 0% | 0% | 80% | 32.2% |
No.4 | 50% | 70% | 20% | - | 10% | 90% | 20% | 0% | 0% | 70% | 37.8% |
No.5 | 30% | 50% | 10% | 30% | - | 80% | 10% | 0% | 0% | 60% | 30% |
No.6 | 10% | 30% | 0% | 20% | 0% | - | 10% | 0% | 0% | 10% | 8.9% |
No.7 | 10% | 50% | 20% | 30% | 0% | 70% | - | 0% | 0% | 50% | 25.6% |
No.8 | 10% | 70% | 10% | 10% | 0% | 90% | 0% | - | 0% | 70% | 28.9% |
No.9 | 10% | 70% | 0% | 40% | 0% | 80% | 0% | 0% | - | 80% | 31.1% |
No.10 | 30% | 70% | 20% | 30% | 0% | 90% | 30% | 10% | 0% | - | 31.1% |
Total |  |  |  |  |  |  |  |  |  |  | 28.33% |
Origin Number \ Attack Number | No.1 | No.2 | No.3 | No.4 | No.5 | No.6 | No.7 | No.8 | No.9 | No.10 | Average |
---|---|---|---|---|---|---|---|---|---|---|---|
No.1 | - | 40% | 0% | 0% | 70% | 10% | 30% | 40% | 80% | 0% | 34.4% |
No.2 | 30% | - | 10% | 20% | 80% | 0% | 90% | 90% | 10% | 10% | 37.8% |
No.3 | 10% | 10% | - | 0% | 60% | 0% | 50% | 90% | 90% | 0% | 33.3% |
No.4 | 20% | 20% | 10% | - | 80% | 10% | 70% | 90% | 10% | 0% | 34.4% |
No.5 | 60% | 50% | 60% | 20% | - | 20% | 80% | 80% | 10% | 20% | 44.4% |
No.6 | 90% | 70% | 10% | 70% | 90% | - | 90% | 10% | 10% | 90% | 58.9% |
No.7 | 90% | 50% | 70% | 30% | 90% | 30% | - | 90% | 10% | 40% | 55.6% |
No.8 | 0% | 20% | 10% | 0% | 60% | 0% | 50% | - | 80% | 0% | 24.4% |
No.9 | 80% | 30% | 70% | 20% | 90% | 20% | 10% | 80% | - | 20% | 46.7% |
No.10 | 40% | 30% | 40% | 20% | 80% | 0% | 60% | 90% | 10% | - | 41.1% |
Total |  |  |  |  |  |  |  |  |  |  | 41.1% |
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hwang, R.-H.; Lin, J.-Y.; Hsieh, S.-Y.; Lin, H.-Y.; Lin, C.-L. Adversarial Patch Attacks on Deep-Learning-Based Face Recognition Systems Using Generative Adversarial Networks. Sensors 2023, 23, 853. https://doi.org/10.3390/s23020853