An Adversarial Attack Method against Specified Objects Based on Instance Segmentation
Abstract
1. Introduction
2. Contributions of This Paper
- Unlike existing methods, which craft adversarial examples over the full image, our algorithm can attack local objects in the image according to the requirements of the application scenario; it also converges faster and is more flexible.
- Within the proposed optimization framework, we implement two forms of attack, adversarial perturbation and adversarial patch, both of which attack the objects inside the designated region effectively (a hedged sketch of the mask-restricted idea follows this list).
- The proposed method can delineate one or more attack regions to match the demands of complex scenes and hide the objects inside them from the detector; even partially exposed objects can be hidden.
- Compared with several classic adversarial algorithms, ours converges faster and modifies fewer pixels. Experiments on the ImageNet and COCO2017 datasets show that it outperforms traditional attack methods.
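To make the mask-restricted idea concrete, the following is a minimal sketch of a PGD-style perturbation confined to an instance-segmentation mask. It is an illustration under assumptions, not the paper's actual Re-AEG optimization: the differentiable `detector_loss` callable, the budget `eps`, the step size `alpha`, and the step count are hypothetical placeholders.

```python
# Illustrative sketch only; NOT the authors' exact Re-AEG method.
import torch

def masked_perturbation_attack(image, mask, detector_loss,
                               eps=8 / 255, alpha=1 / 255, steps=40):
    """PGD-style perturbation confined to a binary instance mask.

    image: (C, H, W) float tensor in [0, 1]
    mask:  (1, H, W) binary tensor (1 inside the attacked region)
    detector_loss: differentiable scalar loss that stays high while the
        detector still finds the masked objects (hypothetical placeholder)
    """
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = detector_loss(adv)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            # Step against the detection loss, only where the mask allows.
            adv = adv - alpha * grad.sign() * mask
            # Project back into the L-inf budget and the valid pixel range;
            # multiplying by the mask keeps pixels outside the region intact.
            adv = image + (adv - image).clamp(-eps, eps) * mask
            adv = adv.clamp(0, 1)
    return adv.detach()
```

The adversarial-patch form mentioned above would, under the same assumptions, drop the perturbation budget and instead composite an unconstrained patch texture through the same mask, e.g. `adv = image * (1 - mask) + patch * mask`.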
3. Related Works
3.1. Adversarial Examples Generation
3.2. Adversarial Patches Generation
3.3. Instance Segmentation
3.4. Object Detectors
4. Our Methodology
4.1. Roadmap of the Proposed Approach
4.2. Framework of the Proposed Approach
4.3. Optimization Function
4.3.1. Overview of the Optimization Function
4.3.2. Restricted Adversarial Perturbation Generation
4.3.3. Restricted Adversarial Patch Generation
5. Experiments
5.1. Experiment Configuration
5.2. Baseline Attacks and Ablation Study
5.3. Adversarial Attack on Simple Scene
5.3.1. Experiment Description
5.3.2. Experiment Results
5.4. Adversarial Examples in Complex Environments
5.5. Analysis of Attack Accuracy
5.6. Influence of Parameters on Algorithm
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848.
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199.
- Richter, M.L.; Schöning, J.; Wiedenroth, A.; Krumnack, U. Should you go deeper? Optimizing convolutional neural network architectures without training. arXiv 2021, arXiv:2106.12307.
- Finlayson, S.G.; Chung, H.W.; Kohane, I.S.; Beam, A.L. Adversarial attacks against medical deep learning systems. arXiv 2018, arXiv:1804.05296.
- Croce, F.; Hein, M. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. arXiv 2020, arXiv:2003.01690.
- Nguyen, A.; Yosinski, J.; Clune, J. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 427–436.
- Tabacof, P.; Valle, E. Exploring the space of adversarial images. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 426–433.
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. arXiv 2014, arXiv:1412.6572.
- Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards deep learning models resistant to adversarial attacks. arXiv 2017, arXiv:1706.06083.
- Moosavi-Dezfooli, S.-M.; Fawzi, A.; Frossard, P. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2574–2582.
- Moosavi-Dezfooli, S.-M.; Fawzi, A.; Fawzi, O.; Frossard, P. Universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1765–1773.
- Carlini, N.; Wagner, D. Towards evaluating the robustness of neural networks. In Proceedings of the IEEE Symposium on Security and Privacy (S&P), San Jose, CA, USA, 22–26 May 2017; pp. 39–57.
- Shi, Y.; Han, Y. Schmidt: Image augmentation for black-box adversarial attack. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), San Diego, CA, USA, 23–27 July 2018; pp. 1–6.
- Liu, Y.; Zhang, W.; Yu, N. Query-free embedding attack against deep learning. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), Shanghai, China, 8–12 July 2019; pp. 380–386.
- Gao, L.; Zhang, Q.; Song, J.; Liu, X.; Shen, H.T. Patch-wise attack for fooling deep neural network. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2020; pp. 307–322.
- Zügner, D.; Akbarnejad, A.; Günnemann, S. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 2847–2856.
- Marra, F.; Gragnaniello, D.; Verdoliva, L.; Poggi, G. Do GANs leave artificial fingerprints? In Proceedings of the 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), San Jose, CA, USA, 28–30 March 2019; pp. 506–511.
- Lang, D.; Chen, D.; Shi, R.; He, Y. Attention-guided digital adversarial patches on visual detection. Secur. Commun. Netw. 2021, 2021, 6637936.
- Sharif, M.; Bhagavatula, S.; Bauer, L.; Reiter, M.K. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 23rd ACM Conference on Computer and Communications Security (CCS), Vienna, Austria, 24–28 October 2016; ACM: New York, NY, USA, 2016; pp. 1528–1540.
- Xu, K.; Zhang, G.; Liu, S.; Fan, Q.; Sun, M.; Chen, H.; Chen, P.-Y.; Wang, Y.; Lin, X. Adversarial T-shirt! Evading person detectors in a physical world. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 665–681.
- Ariyanto, M.; Purnamasari, P.D. Object detection system for self-checkout cashier system based on faster region-based convolutional neural network and YOLO9000. In Proceedings of the 2021 17th International Conference on Quality in Research (QIR): International Symposium on Electrical and Computer Engineering, Virtual Conference, 13–15 October 2021; pp. 153–157.
- He, R.; Cao, J.; Song, L.; Sun, Z.; Tan, T. Adversarial cross-spectral face completion for NIR-VIS face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 1025–1037.
- Thys, S.; Van Ranst, W.; Goedemé, T. Fooling automated surveillance cameras: Adversarial patches to attack person detection. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 49–55.
- Duan, R.; Ma, X.; Wang, Y.; Bailey, J.; Qin, A.K.; Yang, Y. Adversarial camouflage: Hiding physical-world attacks with natural styles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
- Liu, X.; Yang, H.; Liu, Z.; Song, L.; Li, H.; Chen, Y. DPatch: An adversarial patch attack on object detectors. arXiv 2018, arXiv:1806.02299.
- Mou, L.; Hua, Y.; Zhu, X.X. A relation-augmented fully convolutional network for semantic segmentation in aerial scenes. arXiv 2019, arXiv:1904.05730.
- Yu, C.; Wang, J.; Gao, C.; Yu, G.; Shen, C.; Sang, N. Context prior for scene segmentation. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
- Zhang, X.; Zhang, Y.; Hu, M.; Ju, X. Insulator defect detection based on YOLO and SPP-Net. In Proceedings of the 2020 International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE), Bangkok, Thailand, 30 October–1 November 2020; pp. 403–407.
- Wang, T.; Zhang, X.; Yuan, L.; Feng, J. Few-shot adaptive Faster R-CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 7166–7175.
- He, K.; Sun, J.; Zhang, X.; Ren, S.; Microsoft Technology Licensing LLC. Spatial Pyramid Pooling Networks for Image Processing. U.S. Patent 9,542,621, 10 January 2017.
- Ganesan, K. Machine learning data detection poisoning attacks using resource schemes multi-linear regression. Neural Parallel Sci. Comput. 2020, 28, 73–82.
- Brendel, W.; Rauber, J.; Kurakin, A.; Papernot, N.; Bethge, M. Adversarial vision challenge. arXiv 2018, arXiv:1808.01976.
- Asi, H.; Duchi, J.C. The importance of better models in stochastic optimization. Proc. Natl. Acad. Sci. USA 2019, 116, 22924–22930.
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
- Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Zitnick, C.L. Microsoft COCO: Common objects in context. In Proceedings of the 13th European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014.
| Models | Clean Images | Re-AEG | FGSM | BIM | ILLC |
|---|---|---|---|---|---|
| VGG16 | 94.83% | 1.37% | 2.09% | 5.44% | 16.94% |
| IncV3 | 97.22% | 1.64% | 6.16% | 5.45% | 4.13% |
| Res50 | 95.31% | 2.88% | 16.74% | 16.33% | 13.08% |
| IncRes152 | 90.67% | 1.28% | 7.87% | 4.48% | 2.03% |
| Models | ImageNet-Single | ImageNet-Multiple | COCO-Single | COCO-Multiple |
|---|---|---|---|---|
| YOLOv4 | 94.23% | 92.63% | 95.75% | 95.52% |
| Faster-RCNN VGG16 | 94.29% | 92.79% | 96.01% | 95.63% |
| R-FCN ResNet101 | 93.66% | 93.28% | 95.83% | 95.86% |
| Parameter value | Accuracy | Parameter value | Accuracy |
|---|---|---|---|
| −3 | 98.95% | 3 | 52.69% |
| −2 | 98.95% | 4 | 56.49% |
| −1 | 99.18% | 5 | 62.49% |
| 0 | 98.58% | 6 | 63.46% |
| 1 | 94.86% | 7 | 62.94% |
| 2 | 76.24% | 8 | 62.79% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).