Camouflage Backdoor Attack against Pedestrian Detection
Abstract
1. Introduction
- We reveal the backdoor threat to pedestrian detection in the autonomous driving domain, and we design and implement an effective and stealthy backdoor attack against it.
- We employ image scaling to disguise the backdoor attack; furthermore, we contaminate only the training images, not their labels, making the attack more covert (a minimal sketch of the scaling disguise follows this list).
- Extensive experiments on the benchmark KITTI dataset verify the effectiveness and stealthiness of our attack.
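To make the mechanism behind the image-scaling disguise concrete, the sketch below assumes the victim's preprocessing downscales inputs with nearest-neighbor interpolation; the published scaling attacks (Xiao et al., cited below) instead solve an optimization that also covers bilinear and bicubic kernels. All names and sizes here are illustrative, not the paper's code.

```python
import cv2
import numpy as np

def craft_scaling_camouflage(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Embed `target` so it appears only after nearest-neighbor downscaling.

    OpenCV's INTER_NEAREST samples source pixel floor(dst_index * scale) for
    each output pixel, so overwriting just those sampled pixels makes the
    downscaled result equal `target` while the full-resolution image stays
    visually near-identical to `source`. Assumes `source` is larger than
    `target` in both dimensions (so the sampling map is injective).
    """
    attack = source.copy()
    sh, sw = source.shape[:2]
    th, tw = target.shape[:2]
    for y in range(th):
        for x in range(tw):
            sy = min(y * sh // th, sh - 1)  # source row sampled by INTER_NEAREST
            sx = min(x * sw // tw, sw - 1)  # source column sampled by INTER_NEAREST
            attack[sy, sx] = target[y, x]
    return attack

# Sanity check with an integer scaling ratio: the camouflaged image
# downscales exactly to the hidden target.
src = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
tgt = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
atk = craft_scaling_camouflage(src, tgt)
down = cv2.resize(atk, (64, 64), interpolation=cv2.INTER_NEAREST)
assert np.array_equal(down, tgt)
```

Only about (64 × 64)/(512 × 512) ≈ 1.6% of the pixels are modified, which is why the full-resolution image still looks benign to a human reviewer.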
2. Related Work
2.1. Pedestrian Detection
2.2. Backdoor Attacks
3. Methodology
3.1. Overview
3.2. Concepts and Attack Goal
3.3. Attack Image Crafting
Algorithm 1: Image Matting
Input: image I, trimap T (marking foreground, background, and unknown regions)
Output: foreground image F, background image B
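Algorithm 1's input/output contract follows the standard matting compositing model I = αF + (1 − α)B. Below is a minimal sketch of how the two layers can be separated once an alpha matte has been estimated from the trimap; the matting model itself is treated as a black box, and `split_by_matte` and its parameters are our illustrative names, not the paper's.

```python
import numpy as np

def split_by_matte(image: np.ndarray, alpha: np.ndarray):
    """Split an image into foreground/background layers given an alpha matte.

    Matting assumes the compositing model I = alpha * F + (1 - alpha) * B.
    Alpha-weighting the observed image is exact wherever the matte is fully
    opaque or fully transparent (the known trimap regions) and only a rough
    estimate in the unknown band.
    """
    a = alpha[..., None].astype(np.float32)   # HxW -> HxWx1, values in [0, 1]
    img = image.astype(np.float32)
    foreground = a * img                       # pedestrian layer
    background = (1.0 - a) * img               # scene layer
    return foreground.astype(image.dtype), background.astype(image.dtype)
```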
3.4. Camouflage Backdoor Attack
Algorithm 2: Creating Poisoned Images
Input: extension dataset, pedestrian dataset, image matting model, trigger pixels
Output: poisoned image
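The pipeline below is our illustrative reading of Algorithm 2, combining the two earlier sketches; the trigger and pedestrian placements, the function names, and the `train_size` parameter are all assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def make_poisoned_image(extended_img: np.ndarray,
                        pedestrian_img: np.ndarray,
                        alpha_matte: np.ndarray,
                        trigger: np.ndarray,
                        train_size: tuple) -> np.ndarray:
    """Sketch of the poisoning pipeline.

    1. Matte the extended image into foreground and background layers.
    2. Stamp the trigger patch onto the pedestrian image.
    3. Composite the trigger-embedded pedestrian over the background.
    4. Camouflage the composite inside the extended image so that it
       emerges only after the training pipeline downscales the image.
    """
    _, background = split_by_matte(extended_img, alpha_matte)  # Section 3.3 sketch
    stamped = pedestrian_img.copy()
    th, tw = trigger.shape[:2]
    stamped[:th, :tw] = trigger                # trigger placement is assumed
    composite = background.copy()
    ph, pw = stamped.shape[:2]
    composite[:ph, :pw] = stamped              # pedestrian placement is assumed
    # train_size is (width, height), the resolution the training loader uses.
    small = cv2.resize(composite, train_size, interpolation=cv2.INTER_NEAREST)
    return craft_scaling_camouflage(extended_img, small)  # Section 1 sketch
```

Because only the images are modified, the poisoned samples keep their original (clean) labels, which is what makes the attack hard to spot during dataset auditing.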
4. Experiments
4.1. Experimental Setup
4.1.1. Model and Dataset
4.1.2. Evaluation Metrics
4.1.3. Implementation Details
4.2. Experimental Results and Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Deng, Y.; Zhang, T.; Lou, G.; Zheng, X.; Jin, J.; Han, Q.L. Deep learning-based autonomous driving systems: A survey of attacks and defenses. IEEE Trans. Ind. Inform. 2021, 17, 7897–7912. [Google Scholar] [CrossRef]
- Bogdoll, D.; Nitsche, M.; Zöllner, J.M. Anomaly detection in autonomous driving: A survey. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 4488–4499. [Google Scholar]
- Gao, C.; Wang, G.; Shi, W.; Wang, Z.; Chen, Y. Autonomous driving security: State of the art and challenges. IEEE Internet Things J. 2021, 9, 7572–7595. [Google Scholar] [CrossRef]
- Chi, C.; Zhang, S.; Xing, J.; Lei, Z.; Li, S.Z.; Zou, X. PedHunter: Occlusion robust pedestrian detector in crowded scenes. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 10639–10646. [Google Scholar]
- Chen, L.; Lin, S.; Lu, X.; Cao, D.; Wu, H.; Guo, C.; Liu, C.; Wang, F.Y. Deep neural network based vehicle and pedestrian detection for autonomous driving: A survey. IEEE Trans. Intell. Transp. Syst. 2021, 22, 3234–3246. [Google Scholar] [CrossRef]
- Khan, A.H.; Nawaz, M.S.; Dengel, A. Localized Semantic Feature Mixers for Efficient Pedestrian Detection in Autonomous Driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 5476–5485. [Google Scholar]
- Liu, Y.; Ma, X.; Bailey, J.; Lu, F. Reflection backdoor: A natural backdoor attack on deep neural networks. In Proceedings of the Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part X 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 182–199. [Google Scholar]
- Wu, Y.; Song, M.; Li, Y.; Tian, Y.; Tong, E.; Niu, W.; Jia, B.; Huang, H.; Li, Q.; Liu, J. Improving convolutional neural network-based webshell detection through reinforcement learning. In Proceedings of the Information and Communications Security: 23rd International Conference, ICICS 2021, Chongqing, China, 19–21 November 2021; Proceedings, Part I 23. Springer: Berlin/Heidelberg, Germany, 2021; pp. 368–383. [Google Scholar]
- Ge, Y.; Wang, Q.; Zheng, B.; Zhuang, X.; Li, Q.; Shen, C.; Wang, C. Anti-distillation backdoor attacks: Backdoors can really survive in knowledge distillation. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual Event, 20–24 October 2021; pp. 826–834. [Google Scholar]
- Wang, Z.; Wang, B.; Zhang, C.; Liu, Y.; Guo, J. Robust Feature-Guided Generative Adversarial Network for Aerial Image Semantic Segmentation against Backdoor Attacks. Remote Sens. 2023, 15, 2580. [Google Scholar] [CrossRef]
- Ye, Z.; Yan, D.; Dong, L.; Deng, J.; Yu, S. Stealthy backdoor attack against speaker recognition using phase-injection hidden trigger. IEEE Signal Process. Lett. 2023, 30, 1057–1061. [Google Scholar] [CrossRef]
- Zeng, Y.; Tan, J.; You, Z.; Qian, Z.; Zhang, X. Watermarks for Generative Adversarial Network Based on Steganographic Invisible Backdoor. In Proceedings of the 2023 IEEE International Conference on Multimedia and Expo, Brisbane, Australia, 10–14 July 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1211–1216. [Google Scholar]
- Jiang, L.; Ma, X.; Chen, S.; Bailey, J.; Jiang, Y.G. Black-box adversarial attacks on video recognition models. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 864–872. [Google Scholar]
- Kiourti, P.; Wardega, K.; Jha, S.; Li, W. TrojDRL: Evaluation of backdoor attacks on deep reinforcement learning. In Proceedings of the 2020 57th ACM/IEEE Design Automation Conference, Virtual Event, 20–24 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–6. [Google Scholar]
- Bagdasaryan, E.; Shmatikov, V. Blind backdoors in deep learning models. In Proceedings of the 30th USENIX Security Symposium, Vancouver, BC, Canada, 11–13 August 2021; pp. 1505–1521. [Google Scholar]
- Chen, K.; Meng, Y.; Sun, X.; Guo, S.; Zhang, T.; Li, J.; Fan, C. BadPre: Task-agnostic backdoor attacks to pre-trained NLP foundation models. arXiv 2021, arXiv:2110.02467. [Google Scholar]
- Gan, L.; Li, J.; Zhang, T.; Li, X.; Meng, Y.; Wu, F.; Yang, Y.; Guo, S.; Fan, C. Triggerless backdoor attack for NLP tasks with clean labels. arXiv 2021, arXiv:2111.07970. [Google Scholar]
- Xiao, Q.; Chen, Y.; Shen, C.; Chen, Y.; Li, K. Seeing is not believing: Camouflage attacks on image scaling algorithms. In Proceedings of the 28th USENIX Security Symposium, Santa Clara, CA, USA, 14–16 August 2019; pp. 443–460. [Google Scholar]
- Li, Y.; Li, Y.; Wu, B.; Li, L.; He, R.; Lyu, S. Invisible backdoor attack with sample-specific triggers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 16463–16472. [Google Scholar]
- Han, X.; Xu, G.; Zhou, Y.; Yang, X.; Li, J.; Zhang, T. Physical backdoor attacks to lane detection systems in autonomous driving. In Proceedings of the 30th ACM International Conference on Multimedia, Lisbon, Portugal, 10–14 October 2022; pp. 2957–2968. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28, 91–99. [Google Scholar] [CrossRef] [PubMed]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part I 14. Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37. [Google Scholar]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017. [Google Scholar]
- Tian, Z.; Shen, C.; Chen, H.; He, T. FCOS: Fully Convolutional One-Stage Object Detection. arXiv 2019, arXiv:1904.01355. [Google Scholar]
- Zhou, X.; Wang, D.; Krähenbühl, P. Objects as Points. arXiv 2019, arXiv:1904.07850. [Google Scholar]
- Cai, Z.; Vasconcelos, N. Cascade R-CNN: High Quality Object Detection and Instance Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 1483–1498. [Google Scholar] [CrossRef] [PubMed]
- IARPA. TrojAI: Trojans in Artificial Intelligence. 2021. Available online: https://www.iarpa.gov/index.php/research-programs/trojai (accessed on 1 September 2023).
- M14 Intelligence. Autonomous Vehicle Data Annotation Market Analysis. 2020. Available online: https://www.researchandmarkets.com/reports/4985697/autonomous-vehicledata-annotation-market-analysis (accessed on 1 September 2023).
- Luo, C.; Li, Y.; Jiang, Y.; Xia, S.T. Untargeted backdoor attack against object detection. In Proceedings of the ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing, Rhodes Island, Greece, 4–9 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–5. [Google Scholar]
- Quiring, E.; Rieck, K. Backdooring and poisoning neural networks with image-scaling attacks. In Proceedings of the 2020 IEEE Security and Privacy Workshops, Virtual, 18–20 May 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 41–47. [Google Scholar]
- Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef]
- Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common objects in context. In Proceedings of the Computer Vision—ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part V 13. Springer: Berlin/Heidelberg, Germany, 2014; pp. 740–755. [Google Scholar]
- Li, Y.; Zhong, H.; Ma, X.; Jiang, Y.; Xia, S.T. Few-shot backdoor attacks on visual object tracking. arXiv 2022, arXiv:2201.13178. [Google Scholar]
| Symbol | Concept Name | Concept Description |
|---|---|---|
|  | Extended images | The base images used to create attack images. |
|  | Foreground images | The foreground images that are obtained through image matting from extended images. |
|  | Background images | The background images that are obtained through image matting from extended images. |
|  | Trigger-embedded images | The pedestrian images with specific triggers. |
|  | Attack images | The images implanted with a specific trigger. |
|  | Benign images | The images that are considered safe, harmless, or non-malicious. |
|  | Poisoned images | The images generated by combining benign images with attack images. |
Results on the benign testing dataset:

| Models | Methods | mAP | AP50 | AP75 | mAR |
|---|---|---|---|---|---|
| Cascade R-CNN | Benign | 30.9 ± 0.0 | 63.9 ± 0.1 | 26.1 ± 0.0 | 40.3 ± 0.5 |
| Cascade R-CNN | Poisoned (Ours) | 30.6 ± 0.2 | 63.5 ± 0.1 | 24.9 ± 0.6 | 39.8 ± 0.5 |
| RetinaNet | Benign | 28.6 ± 0.7 | 63.4 ± 0.2 | 20.4 ± 1.2 | 41.5 ± 0.3 |
| RetinaNet | Poisoned (Ours) | 28.4 ± 0.1 | 62.4 ± 0.0 | 21.4 ± 0.7 | 41.2 ± 0.2 |
| Faster R-CNN | Benign | 28.5 ± 0.2 | 61.9 ± 0.1 | 22.2 ± 1.6 | 39.4 ± 0.1 |
| Faster R-CNN | Poisoned (Ours) | 28.6 ± 0.0 | 63.0 ± 0.2 | 21.9 ± 0.2 | 39.4 ± 0.1 |
| FCOS | Benign | 30.9 ± 1.0 | 67.5 ± 1.0 | 23.6 ± 0.6 | 42.0 ± 0.8 |
| FCOS | Poisoned (Ours) | 29.2 ± 0.1 | 65.8 ± 0.1 | 21.5 ± 0.4 | 39.8 ± 0.0 |
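The columns are COCO-style detection metrics: mAP averages AP over IoU thresholds 0.5:0.95, AP50 and AP75 fix the threshold at 0.5 and 0.75, and mAR is mean average recall. A short snippet showing how such numbers are typically computed with pycocotools follows; the file paths are placeholders (the paper does not publish its evaluation script), assuming KITTI annotations converted to COCO format.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground truth and detector outputs; both paths are illustrative.
coco_gt = COCO("kitti_pedestrian_val.json")
coco_dt = coco_gt.loadRes("detections.json")

ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()

# pycocotools stats layout: 0 -> mAP@[.5:.95], 1 -> AP50, 2 -> AP75,
# 8 -> mAR@100; these correspond to the table's columns.
mAP, ap50, ap75, mar = ev.stats[0], ev.stats[1], ev.stats[2], ev.stats[8]
```

Note how small the benign-performance gap is (at most 1.7 mAP points, on FCOS): the poisoned detectors remain competitive on clean data, which is exactly what makes the backdoor hard to detect by accuracy auditing alone.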