Adversarial Patch Attack on Multi-Scale Object Detection for UAV Remote Sensing Images
Abstract
1. Introduction
- (1) To the best of our knowledge, this is the first work to evaluate attack effectiveness on objects of different scales, and we perform physical adversarial attacks on multi-scale objects. The experimental data are captured by a DJI Mini 2 at flight heights from 25 m to 120 m.
- (2) For the optimization of the adversarial patch, we formulate a joint optimization problem to generate a more effective adversarial patch. Moreover, to make the generated patch valid for multi-scale objects in the real world, we rescale the adversarial patch during the digital attack using a scale factor derived from the height label of each image (see the sketch after this list).
- (3) To verify the superiority of our method, we carry out several comparison experiments on digital attacks against Yolo-V3 and Yolo-V5. The experimental results demonstrate that our method outperforms the baseline methods. In addition, we perform experiments to test the effectiveness of our method in the physical world.
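To make contribution (2) concrete, the following is a minimal sketch of a height-dependent patch rescaling. The inverse-proportional scaling rule, the reference height, and all names (`REF_HEIGHT_M`, `rescale_patch`, `apply_patch`) are illustrative assumptions, not the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

REF_HEIGHT_M = 25.0  # assumed height at which the patch is optimized at full size

def rescale_patch(patch: torch.Tensor, height_m: float) -> torch.Tensor:
    """Shrink a (C, H, W) patch as flight height grows, since objects
    appear smaller in images captured from higher altitudes."""
    scale = REF_HEIGHT_M / height_m  # assumed inverse-proportional scale factor
    side = max(1, int(patch.shape[-1] * scale))
    return F.interpolate(patch.unsqueeze(0), size=(side, side),
                         mode="bilinear", align_corners=False).squeeze(0)

def apply_patch(image: torch.Tensor, patch: torch.Tensor,
                cx: int, cy: int) -> torch.Tensor:
    """Paste the rescaled patch onto a (C, H, W) image at (cx, cy).
    Assumes the patch lies fully inside the image bounds."""
    _, h, w = patch.shape
    patched = image.clone()
    patched[:, cy:cy + h, cx:cx + w] = patch
    return patched

# usage: small = rescale_patch(patch, height_m=100.0)   # smaller patch at 100 m
#        adv   = apply_patch(image, small, cx=200, cy=150)
```

In a digital attack of this kind, the height label attached to each image would select the scale factor before the patch is pasted onto the object.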
2. Related Work
2.1. Digital Attack and Physical Attack
2.2. Adversarial Attack in the Remote Sensing Field
3. Approach
3.1. The Flowchart of the Proposed Method
3.2. Problem Formulation
3.3. Transformations of Adversarial Patch
3.4. Adversarial Patch Optimization
4. Experiments
4.1. Experimental Setup
4.2. Digital Attack
4.3. Physical Attack
4.4. Ablation Studies
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Liu, R.; Kuffer, M.; Persello, C. The Temporal Dynamics of Slums Employing a CNN-Based Change Detection Approach. Remote Sens. 2019, 11, 2844.
- Peng, B.; Meng, Z.; Huang, Q.; Wang, C. Patch Similarity Convolutional Neural Network for Urban Flood Extent Mapping Using Bi-Temporal Satellite Multispectral Imagery. Remote Sens. 2019, 11, 2492.
- Zhang, X.; Han, L.; Dong, Y.; Shi, Y.; Huang, W.; Han, L.; González-Moreno, P.; Ma, H.; Ye, H.; Sobeih, T. A Deep Learning-Based Approach for Automated Yellow Rust Disease Detection from High-Resolution Hyperspectral UAV Images. Remote Sens. 2019, 11, 1554.
- Liu, H.; Li, J.; He, L.; Wang, Y. Superpixel-Guided Layer-Wise Embedding CNN for Remote Sensing Image Classification. Remote Sens. 2019, 11, 174.
- Matos-Carvalho, J.P.; Moutinho, F.; Salvado, A.B.; Carrasqueira, T.; Campos-Rebelo, R.; Pedro, D.; Campos, L.M.; Fonseca, J.M.; Mora, A. Static and Dynamic Algorithms for Terrain Classification in UAV Aerial Imagery. Remote Sens. 2019, 11, 2501.
- Guan, Z.; Miao, X.; Mu, Y.; Sun, Q.; Ye, Q.; Gao, D. Forest Fire Segmentation from Aerial Imagery Data Using an Improved Instance Segmentation Model. Remote Sens. 2022, 14, 3159.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Adv. Neural Inf. Process. Syst. 2015, 28, 91–99.
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 8–16 October 2016; pp. 21–37.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Redmon, J.; Farhadi, A. Yolov3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934.
- Jocher, G.; Stoken, A.; Borovec, J.; Tao, X.; Kwon, Y.; Michael, K.; Liu, C.; Fang, J.; Abhiram, V.; Skalski, P.; et al. Ultralytics/yolov5: V6.0—YOLOv5n ‘Nano’ Models, Roboflow Integration, TensorFlow Export, OpenCV DNN Support. Available online: https://zenodo.org/record/5563715 (accessed on 20 September 2021).
- Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. arXiv 2022, arXiv:2209.02976.
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. arXiv 2022, arXiv:2207.02696.
- Dong, X.; Qin, Y.; Gao, Y.; Fu, R.; Liu, S.; Ye, Y. Attention-Based Multi-Level Feature Fusion for Object Detection in Remote Sensing Images. Remote Sens. 2022, 14, 3735.
- Zhao, Y.; Li, J.; Li, W.; Shan, P.; Wang, X.; Li, L.; Fu, Q. MS-IAF: Multi-Scale Information Augmentation Framework for Aircraft Detection. Remote Sens. 2022, 14, 3696.
- Zheng, Y.; Sun, P.; Zhou, Z.; Xu, W.; Ren, Q. ADT-Det: Adaptive Dynamic Refined Single-Stage Transformer Detector for Arbitrary-Oriented Object Detection in Satellite Optical Imagery. Remote Sens. 2021, 13, 2623.
- Mohamed, A.R.; Dahl, G.E.; Hinton, G. Acoustic Modeling Using Deep Belief Networks. IEEE Trans. Audio Speech Lang. Process. 2011, 20, 14–22.
- Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to Sequence Learning with Neural Networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems (NeurIPS), Montreal, QC, Canada, 8–13 December 2014; pp. 3104–3112.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NeurIPS), Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
- Qin, H.; Cai, Z.; Zhang, M.; Ding, Y.; Zhao, H.; Yi, S.; Liu, X.; Su, H. BiPointNet: Binary Neural Network for Point Clouds. arXiv 2020, arXiv:2010.05501.
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing Properties of Neural Networks. arXiv 2013, arXiv:1312.6199.
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv 2014, arXiv:1412.6572.
- Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv 2017, arXiv:1706.06083.
- Moosavi-Dezfooli, S.M.; Fawzi, A.; Frossard, P. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2574–2582.
- Moosavi-Dezfooli, S.M.; Fawzi, A.; Fawzi, O.; Frossard, P. Universal Adversarial Perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1765–1773.
- Carlini, N.; Wagner, D. Towards Evaluating the Robustness of Neural Networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2017; pp. 39–57.
- Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z.B.; Swami, A. The Limitations of Deep Learning in Adversarial Settings. In Proceedings of the 2016 IEEE European Symposium on Security and Privacy (EuroS&P), Saarbrücken, Germany, 21–24 March 2016; pp. 372–387.
- Sharif, M.; Bhagavatula, S.; Bauer, L.; Reiter, M.K. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS), Vienna, Austria, 24–28 October 2016; pp. 1528–1540.
- Xu, K.; Zhang, G.; Liu, S.; Fan, Q.; Sun, M.; Chen, H.; Chen, P.Y.; Wang, Y.; Lin, X. Adversarial T-Shirt! Evading Person Detectors in a Physical World. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 665–681.
- Brown, T.B.; Mané, D.; Roy, A.; Abadi, M.; Gilmer, J. Adversarial Patch. arXiv 2017, arXiv:1712.09665.
- Kurakin, A.; Goodfellow, I.J.; Bengio, S. Adversarial Examples in the Physical World. In Proceedings of the Workshop of the 5th International Conference on Learning Representations (ICLR), Toulon, France, 24–26 April 2017.
- Thys, S.; Van Ranst, W.; Goedemé, T. Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019.
- Hu, Z.; Huang, S.; Zhu, X.; Sun, F.; Zhang, B.; Hu, X. Adversarial Texture for Fooling Person Detectors in the Physical World. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 19–24 June 2022; pp. 13307–13316.
- Xu, Y.; Du, B.; Zhang, L. Assessing the Threat of Adversarial Examples on Deep Neural Networks for Remote Sensing Scene Classification: Attacks and Defenses. IEEE Trans. Geosci. Remote Sens. 2020, 59, 1604–1617.
- Chan-Hon-Tong, A.; Lenczner, G.; Plyer, A. Demotivate Adversarial Defense in Remote Sensing. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Brussels, Belgium, 11–16 July 2021; pp. 3448–3451.
- Chen, L.; Zhu, G.; Li, Q.; Li, H. Adversarial Example in Remote Sensing Image Recognition. arXiv 2019, arXiv:1910.13222.
- Czaja, W.; Fendley, N.; Pekala, M.; Ratto, C.; Wang, I.J. Adversarial Examples in Remote Sensing. In Proceedings of the 26th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL 2018), Seattle, WA, USA, 6–9 November 2018; pp. 408–411.
- Xu, Y.; Ghamisi, P. Universal Adversarial Examples in Remote Sensing: Methodology and Benchmark. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15.
- Chen, L.; Xu, Z.; Li, Q.; Peng, J.; Wang, S.; Li, H. An Empirical Study of Adversarial Examples on Remote Sensing Image Scene Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7419–7433.
- Xu, Y.; Du, B.; Zhang, L. Self-Attention Context Network: Addressing the Threat of Adversarial Attacks for Hyperspectral Image Classification. IEEE Trans. Image Process. 2021, 30, 8671–8685.
- Lu, M.; Li, Q.; Chen, L.; Li, H. Scale-Adaptive Adversarial Patch Attack for Remote Sensing Image Aircraft Detection. Remote Sens. 2021, 13, 4078.
- Du, A.; Chen, B.; Chin, T.J.; Law, Y.W.; Sasdelli, M.; Rajasegaran, R.; Campbell, D. Physical Adversarial Attacks on an Aerial Imagery Object Detector. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 4–8 January 2022; pp. 1796–1806.
- den Hollander, R.; Adhikari, A.; Tolios, I.; van Bekkum, M.; Bal, A.; Hendriks, S.; Kruithof, M.; Gross, D.; Jansen, N.; Perez, G.; et al. Adversarial Patch Camouflage against Aerial Detection. In Proceedings of the Artificial Intelligence and Machine Learning in Defense Applications II, Online, 21–25 September 2020; Volume 11543, p. 115430F.
- Chow, K.H.; Liu, L.; Gursoy, M.E.; Truex, S.; Wei, W.; Wu, Y. TOG: Targeted Adversarial Objectness Gradient Attacks on Real-Time Object Detection Systems. arXiv 2020, arXiv:2004.04320.
- Liu, X.; Yang, H.; Liu, Z.; Song, L.; Li, H.; Chen, Y. Dpatch: An Adversarial Patch Attack on Object Detectors. arXiv 2018, arXiv:1806.02299.
- Athalye, A.; Engstrom, L.; Ilyas, A.; Kwok, K. Synthesizing Robust Adversarial Examples. In Proceedings of the International Conference on Machine Learning (ICML), Stockholm, Sweden, 10–15 July 2018; pp. 284–293.
- Evtimov, I.; Eykholt, K.; Fernandes, E.; Kohno, T.; Li, B.; Prakash, A.; Rahmati, A.; Song, D. Robust Physical-World Attacks on Deep Learning Models. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018.
- Chen, S.T.; Cornelius, C.; Martin, J.; Chau, D.H.P. ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML-PKDD), Dublin, Ireland, 10–14 September 2018; pp. 52–68.
- Eykholt, K.; Evtimov, I.; Fernandes, E.; Li, B.; Song, D.; Kohno, T.; Rahmati, A.; Prakash, A.; Tramer, F. Note on Attacking Object Detectors with Adversarial Stickers. arXiv 2017, arXiv:1712.08062.
- Lu, J.; Sibai, H.; Fabry, E. Adversarial Examples That Fool Detectors. arXiv 2017, arXiv:1712.02494.
- Song, D.; Eykholt, K.; Evtimov, I.; Fernandes, E.; Li, B.; Rahmati, A.; Tramer, F.; Prakash, A.; Kohno, T. Physical Adversarial Examples for Object Detectors. In Proceedings of the 12th USENIX Workshop on Offensive Technologies (WOOT 2018), Co-Located with USENIX Security 2018, Baltimore, MD, USA, 13–14 August 2018.
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525.
- Wang, Y.; Lv, H.; Kuang, X.; Zhao, G.; Tan, Y.A.; Zhang, Q.; Hu, J. Towards a Physical-World Adversarial Patch for Blinding Object Detection Models. Inf. Sci. 2021, 556, 459–471.
- Wang, J.; Liu, A.; Yin, Z.; Liu, S.; Tang, S.; Liu, X. Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 8565–8574.
All values are ASR (%).

| Confidence | Method | Group 1 (25–40 m) | Group 2 (45–60 m) | Group 3 (65–80 m) | Group 4 (85–100 m) | Group 5 (105–120 m) | Total |
|---|---|---|---|---|---|---|---|
| 0.1 | Raw | 0.00 | 0.00 | 0.69 | 0.68 | 0.00 | 0.27 |
| 0.1 | Random Patch | 0.00 | 0.00 | 1.38 | 0.68 | 0.00 | 0.40 |
| 0.1 | Dpatch | 0.00 | 0.00 | 7.59 | 51.35 | 52.23 | 22.59 |
| 0.1 | OBJ | 73.33 | 56.76 | 72.41 | 64.19 | 45.22 | 62.17 |
| 0.1 | Patch-Noobj | 16.00 | 20.27 | 59.31 | 77.03 | 71.97 | 49.06 |
| 0.1 | Ours | 81.33 | 75.68 | 81.38 | 85.14 | 79.62 | 80.61 |
| 0.2 | Raw | 0.00 | 0.00 | 0.69 | 0.68 | 0.00 | 0.27 |
| 0.2 | Random Patch | 0.00 | 0.00 | 1.38 | 1.35 | 0.64 | 0.67 |
| 0.2 | Dpatch | 0.00 | 0.00 | 9.66 | 62.16 | 59.24 | 26.60 |
| 0.2 | OBJ | 86.67 | 72.97 | 79.31 | 72.30 | 54.14 | 72.86 |
| 0.2 | Patch-Noobj | 24.67 | 27.03 | 66.21 | 82.43 | 77.71 | 55.75 |
| 0.2 | Ours | 89.33 | 83.78 | 86.21 | 87.16 | 81.53 | 85.56 |
| 0.3 | Raw | 0.00 | 0.00 | 1.38 | 0.68 | 0.00 | 0.40 |
| 0.3 | Random Patch | 0.00 | 0.00 | 1.38 | 1.35 | 0.64 | 0.67 |
| 0.3 | Dpatch | 0.00 | 0.00 | 13.10 | 70.27 | 66.24 | 30.35 |
| 0.3 | OBJ | 90.00 | 83.11 | 84.14 | 74.32 | 59.24 | 77.94 |
| 0.3 | Patch-Noobj | 30.00 | 32.43 | 72.41 | 85.14 | 78.34 | 59.76 |
| 0.3 | Ours | 91.33 | 88.51 | 89.66 | 88.51 | 84.71 | 88.50 |
| 0.4 | Raw | 0.67 | 0.00 | 2.07 | 0.68 | 0.00 | 0.67 |
| 0.4 | Random Patch | 0.67 | 0.00 | 2.07 | 1.35 | 1.27 | 1.07 |
| 0.4 | Dpatch | 0.67 | 0.68 | 13.79 | 76.35 | 70.70 | 32.89 |
| 0.4 | OBJ | 92.00 | 84.46 | 87.59 | 77.03 | 63.06 | 80.61 |
| 0.4 | Patch-Noobj | 33.33 | 37.84 | 82.07 | 85.14 | 79.62 | 63.64 |
| 0.4 | Ours | 93.33 | 91.89 | 92.41 | 89.86 | 85.35 | 90.51 |
| 0.5 | Raw | 0.67 | 0.00 | 2.07 | 1.35 | 0.64 | 0.94 |
| 0.5 | Random Patch | 0.67 | 0.68 | 2.76 | 2.03 | 1.91 | 1.60 |
| 0.5 | Dpatch | 2.00 | 1.35 | 20.69 | 77.70 | 78.34 | 36.50 |
| 0.5 | OBJ | 96.67 | 93.24 | 88.28 | 79.73 | 63.69 | 84.09 |
| 0.5 | Patch-Noobj | 40.00 | 46.62 | 84.83 | 86.49 | 81.53 | 67.91 |
| 0.5 | Ours | 96.00 | 93.92 | 95.17 | 91.22 | 87.26 | 92.65 |
| 0.6 | Raw | 1.33 | 1.35 | 2.07 | 2.03 | 0.64 | 1.47 |
| 0.6 | Random Patch | 1.33 | 1.35 | 3.45 | 2.70 | 1.91 | 2.14 |
| 0.6 | Dpatch | 2.00 | 2.03 | 30.34 | 81.08 | 80.25 | 39.57 |
| 0.6 | OBJ | 97.33 | 95.27 | 91.03 | 80.41 | 66.88 | 85.96 |
| 0.6 | Patch-Noobj | 50.67 | 50.00 | 87.59 | 87.84 | 83.44 | 71.93 |
| 0.6 | Ours | 96.67 | 97.97 | 96.55 | 92.57 | 89.17 | 94.52 |
All values are ASR (%).

| Confidence | Method | Group 1 (25–40 m) | Group 2 (45–60 m) | Group 3 (65–80 m) | Group 4 (85–100 m) | Group 5 (105–120 m) | Total |
|---|---|---|---|---|---|---|---|
| 0.1 | Raw | 0.00 | 0.00 | 0.69 | 0.68 | 0.00 | 0.27 |
| 0.1 | Random Patch | 0.00 | 0.00 | 1.38 | 0.68 | 0.00 | 0.40 |
| 0.1 | Dpatch | 0.00 | 0.00 | 0.00 | 0.00 | 3.18 | 0.67 |
| 0.1 | OBJ | 8.00 | 3.38 | 3.45 | 7.43 | 12.74 | 7.09 |
| 0.1 | Patch-Noobj | 0.00 | 0.00 | 1.38 | 3.38 | 3.18 | 1.60 |
| 0.1 | Ours | 10.00 | 10.14 | 13.10 | 27.03 | 29.30 | 18.05 |
| 0.2 | Raw | 0.00 | 0.00 | 1.38 | 1.35 | 0.00 | 0.53 |
| 0.2 | Random Patch | 0.00 | 0.00 | 2.07 | 2.03 | 0.64 | 0.94 |
| 0.2 | Dpatch | 0.00 | 0.00 | 0.00 | 2.03 | 6.37 | 1.74 |
| 0.2 | OBJ | 18.00 | 9.46 | 9.66 | 16.22 | 25.48 | 15.91 |
| 0.2 | Patch-Noobj | 0.00 | 0.68 | 2.07 | 6.76 | 10.19 | 4.01 |
| 0.2 | Ours | 20.00 | 20.27 | 26.90 | 35.81 | 50.96 | 31.02 |
| 0.3 | Raw | 0.00 | 0.00 | 1.38 | 2.03 | 0.00 | 0.67 |
| 0.3 | Random Patch | 0.00 | 0.00 | 2.07 | 2.03 | 1.27 | 1.07 |
| 0.3 | Dpatch | 0.00 | 0.00 | 0.00 | 3.38 | 11.46 | 3.07 |
| 0.3 | OBJ | 29.33 | 15.54 | 13.79 | 20.27 | 41.40 | 24.33 |
| 0.3 | Patch-Noobj | 0.00 | 1.35 | 3.45 | 10.14 | 24.20 | 8.02 |
| 0.3 | Ours | 28.00 | 29.73 | 33.10 | 45.95 | 63.69 | 40.37 |
| 0.4 | Raw | 0.00 | 0.00 | 2.07 | 2.03 | 0.64 | 0.94 |
| 0.4 | Random Patch | 0.00 | 0.00 | 2.07 | 2.70 | 1.91 | 1.34 |
| 0.4 | Dpatch | 0.00 | 0.00 | 0.00 | 5.41 | 14.01 | 4.01 |
| 0.4 | OBJ | 38.67 | 20.27 | 23.45 | 30.41 | 56.05 | 34.09 |
| 0.4 | Patch-Noobj | 0.00 | 1.35 | 4.14 | 13.51 | 33.76 | 10.83 |
| 0.4 | Ours | 36.67 | 41.22 | 45.52 | 54.05 | 77.07 | 51.20 |
| 0.5 | Raw | 0.00 | 0.00 | 2.07 | 2.70 | 1.27 | 1.20 |
| 0.5 | Random Patch | 0.00 | 0.00 | 3.45 | 2.70 | 1.27 | 1.47 |
| 0.5 | Dpatch | 0.00 | 0.00 | 2.76 | 9.46 | 19.75 | 6.55 |
| 0.5 | OBJ | 46.67 | 31.08 | 30.34 | 38.51 | 64.97 | 42.65 |
| 0.5 | Patch-Noobj | 0.00 | 2.70 | 6.21 | 23.65 | 44.59 | 15.78 |
| 0.5 | Ours | 49.33 | 51.35 | 56.55 | 60.81 | 81.53 | 60.16 |
| 0.6 | Raw | 0.00 | 0.68 | 2.07 | 2.70 | 1.27 | 1.34 |
| 0.6 | Random Patch | 0.00 | 0.68 | 2.76 | 3.38 | 1.27 | 1.60 |
| 0.6 | Dpatch | 0.00 | 0.00 | 6.90 | 20.27 | 29.94 | 11.63 |
| 0.6 | OBJ | 56.67 | 47.30 | 44.83 | 51.35 | 76.43 | 55.61 |
| 0.6 | Patch-Noobj | 0.00 | 5.41 | 13.79 | 37.16 | 57.32 | 23.13 |
| 0.6 | Ours | 59.33 | 63.51 | 66.90 | 70.95 | 85.99 | 69.52 |
Number of objects with an adversarial patch, per height group.

| Model | Group 1 (25–40 m) | Group 2 (45–60 m) | Group 3 (65–80 m) | Group 4 (85–100 m) | Group 5 (105–120 m) | Total |
|---|---|---|---|---|---|---|
| Yolo-V3 | 1780 | 1770 | 1840 | 1600 | 1750 | 8740 |
| Yolo-V5 | 1540 | 1470 | 1500 | 1500 | 1530 | 7540 |
All values are ASR (%).

| Model | Group 1 (25–40 m) | Group 2 (45–60 m) | Group 3 (65–80 m) | Group 4 (85–100 m) | Group 5 (105–120 m) | Total |
|---|---|---|---|---|---|---|
| Yolo-V3 | 0.45 | 0.73 | 0.60 | 0.24 | 0.12 | 0.43 |
| Yolo-V5 | 0.44 | 0.50 | 0.50 | 0.09 | 0.00 | 0.30 |
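The two tables above report patched-object counts and per-group ASR. Assuming ASR is the percentage of patched objects that the detector fails to find at a given confidence threshold (a common definition; the paper's exact metric is not quoted here), a minimal helper might look like:

```python
def attack_success_rate(num_missed: int, num_patched: int) -> float:
    """Assumed metric: percentage of patched objects the detector misses.

    num_missed  -- patched objects with no surviving detection
    num_patched -- all objects carrying an adversarial patch
    """
    return 100.0 * num_missed / num_patched

# Hypothetical arithmetic: if 7045 of 8740 patched objects go undetected,
# attack_success_rate(7045, 8740) ≈ 80.61 (%).
```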
All values are ASR (%).

| Confidence | Loss | Group 1 (25–40 m) | Group 2 (45–60 m) | Group 3 (65–80 m) | Group 4 (85–100 m) | Group 5 (105–120 m) | Total |
|---|---|---|---|---|---|---|---|
| 0.1 | Object Loss | 74.00 | 65.54 | 74.48 | 80.41 | 71.97 | 73.26 |
| 0.1 | Detection Loss | 30.67 | 56.08 | 69.66 | 75.68 | 63.06 | 58.96 |
| 0.1 | Object Loss + 0.001 × Detection Loss | 68.67 | 62.16 | 68.97 | 77.70 | 73.89 | 70.32 |
| 0.1 | Object Loss + 0.005 × Detection Loss | 80.00 | 65.54 | 80.00 | 80.41 | 67.52 | 74.60 |
| 0.1 | Object Loss + 0.01 × Detection Loss | 77.33 | 63.51 | 73.10 | 75.00 | 67.52 | 71.26 |
| 0.1 | Object Loss + 0.02 × Detection Loss | 81.33 | 75.68 | 81.38 | 85.14 | 79.62 | 80.61 |
| 0.1 | Object Loss + 0.04 × Detection Loss | 70.67 | 58.78 | 74.48 | 77.70 | 72.61 | 70.86 |
| 0.3 | Object Loss | 90.00 | 83.11 | 84.83 | 88.51 | 79.62 | 85.16 |
| 0.3 | Detection Loss | 51.33 | 74.32 | 84.83 | 83.78 | 75.80 | 73.93 |
| 0.3 | Object Loss + 0.001 × Detection Loss | 84.00 | 75.00 | 82.76 | 84.46 | 80.89 | 81.42 |
| 0.3 | Object Loss + 0.005 × Detection Loss | 93.33 | 83.78 | 87.59 | 83.78 | 77.71 | 85.16 |
| 0.3 | Object Loss + 0.01 × Detection Loss | 90.00 | 82.43 | 84.14 | 83.11 | 80.89 | 84.09 |
| 0.3 | Object Loss + 0.02 × Detection Loss | 91.33 | 88.51 | 89.66 | 88.51 | 84.71 | 88.50 |
| 0.3 | Object Loss + 0.04 × Detection Loss | 83.33 | 77.70 | 88.28 | 83.78 | 73.25 | 81.15 |
| 0.5 | Object Loss | 92.67 | 92.57 | 89.66 | 91.22 | 82.17 | 89.57 |
| 0.5 | Detection Loss | 74.00 | 85.14 | 87.59 | 88.51 | 83.44 | 83.69 |
| 0.5 | Object Loss + 0.001 × Detection Loss | 93.33 | 89.86 | 91.03 | 87.16 | 85.99 | 89.44 |
| 0.5 | Object Loss + 0.005 × Detection Loss | 95.33 | 93.24 | 88.28 | 88.51 | 83.44 | 89.71 |
| 0.5 | Object Loss + 0.01 × Detection Loss | 95.33 | 91.22 | 91.03 | 85.81 | 84.71 | 89.57 |
| 0.5 | Object Loss + 0.02 × Detection Loss | 96.00 | 97.97 | 96.55 | 92.57 | 89.17 | 94.52 |
| 0.5 | Object Loss + 0.04 × Detection Loss | 90.00 | 87.84 | 92.41 | 89.86 | 86.62 | 89.30 |
All values are ASR (%).

| Confidence | Loss | Group 1 (25–40 m) | Group 2 (45–60 m) | Group 3 (65–80 m) | Group 4 (85–100 m) | Group 5 (105–120 m) | Total |
|---|---|---|---|---|---|---|---|
| 0.1 | Object Loss | 5.33 | 8.78 | 6.90 | 10.14 | 19.11 | 10.16 |
| 0.1 | Detection Loss | 0.00 | 0.00 | 0.00 | 0.00 | 11.46 | 2.41 |
| 0.1 | Object Loss + 0.001 × Detection Loss | 11.33 | 8.78 | 13.79 | 15.54 | 23.57 | 14.71 |
| 0.1 | Object Loss + 0.005 × Detection Loss | 10.00 | 10.14 | 13.10 | 27.03 | 29.30 | 18.05 |
| 0.1 | Object Loss + 0.01 × Detection Loss | 13.33 | 4.73 | 6.90 | 8.78 | 14.01 | 9.63 |
| 0.1 | Object Loss + 0.02 × Detection Loss | 8.00 | 5.41 | 8.28 | 12.84 | 14.65 | 9.89 |
| 0.1 | Object Loss + 0.04 × Detection Loss | 9.33 | 3.38 | 2.76 | 6.08 | 10.83 | 6.55 |
| 0.3 | Object Loss | 19.33 | 25.68 | 28.97 | 35.81 | 49.04 | 31.95 |
| 0.3 | Detection Loss | 0.00 | 0.00 | 0.69 | 6.08 | 25.48 | 6.68 |
| 0.3 | Object Loss + 0.001 × Detection Loss | 30.67 | 27.03 | 33.10 | 35.14 | 56.69 | 36.76 |
| 0.3 | Object Loss + 0.005 × Detection Loss | 28.00 | 29.73 | 33.10 | 45.95 | 63.69 | 40.37 |
| 0.3 | Object Loss + 0.01 × Detection Loss | 40.00 | 21.62 | 24.83 | 29.05 | 34.39 | 30.08 |
| 0.3 | Object Loss + 0.02 × Detection Loss | 27.33 | 24.32 | 26.21 | 33.78 | 36.31 | 29.68 |
| 0.3 | Object Loss + 0.04 × Detection Loss | 37.33 | 16.22 | 13.79 | 22.30 | 29.94 | 24.06 |
| 0.5 | Object Loss | 38.00 | 37.84 | 44.14 | 50.68 | 70.06 | 48.40 |
| 0.5 | Detection Loss | 0.00 | 1.35 | 1.38 | 20.95 | 42.04 | 13.50 |
| 0.5 | Object Loss + 0.001 × Detection Loss | 56.00 | 42.57 | 52.41 | 56.76 | 79.62 | 57.75 |
| 0.5 | Object Loss + 0.005 × Detection Loss | 49.33 | 51.35 | 56.55 | 60.81 | 81.53 | 60.16 |
| 0.5 | Object Loss + 0.01 × Detection Loss | 60.67 | 43.92 | 42.76 | 50.00 | 63.06 | 52.27 |
| 0.5 | Object Loss + 0.02 × Detection Loss | 53.33 | 44.59 | 37.93 | 43.24 | 57.32 | 47.46 |
| 0.5 | Object Loss + 0.04 × Detection Loss | 59.33 | 34.46 | 33.79 | 43.24 | 49.04 | 44.12 |
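Reading the row labels of the two ablation tables as a single objective, the patch appears to be optimized against a weighted sum of the two losses. A reconstruction in notation suggested by those labels (the symbols are ours, not quoted from the paper):

```latex
\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{obj}} + w\,\mathcal{L}_{\mathrm{det}},
\qquad w \in \{0.001,\ 0.005,\ 0.01,\ 0.02,\ 0.04\}
```

Note that the first ablation table peaks at w = 0.02 at every confidence threshold, while the second peaks at w = 0.005 (matching the "Ours" rows of the respective ASR tables), so the best weight is detector-dependent.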
| Model | Loss | AP (%) | Precision (%) | Recall (%) |
|---|---|---|---|---|
| Yolo-V3 | Raw | 99.0 | 99.0 | 97.3 |
| Yolo-V3 | Object Loss | 65.7 | 68.0 | 61.9 |
| Yolo-V3 | Detection Loss | 29.6 | 31.3 | 39.8 |
| Yolo-V3 | Detection Loss + Object Loss | 59.7 | 58.2 | 59.3 |
| Yolo-V5 | Raw | 99.1 | 99.4 | 96.6 |
| Yolo-V5 | Object Loss | 46.1 | 42.0 | 44.2 |
| Yolo-V5 | Detection Loss | 2.83 | 4.77 | 30.7 |
| Yolo-V5 | Detection Loss + Object Loss | 36.0 | 32.9 | 46.1 |