An Empirical Study of Fully Black-Box and Universal Adversarial Attack for SAR Target Recognition
Abstract
1. Introduction
- 1. We propose a novel FBUA framework that effectively attacks DNN-based SAR ATR models without any access to the target model's gradients, parameters, architecture, or even training data. Moreover, the generated single adversarial perturbation is universal: it can attack a variety of models and a large fraction of target images. Comprehensive experimental evaluations on the MSTAR and SARSIM datasets demonstrate the efficacy of our proposal.
- 2. This article conducts a comprehensive evaluation that reveals the black-box adversarial threat to ATR systems posed by the proposed FBUA method. We find that popular DNN-based SAR ATR models are vulnerable to UAPs generated by FBUA: with a relatively small perturbation magnitude (16 out of 255), a single UAP achieves a fooling ratio of 64.6% averaged over eight DNN models.
- 3. We find that the vulnerability of the target DNN models exhibits a high degree of class-wise variability; that is, data points within a class share similar robustness to the UAP generated by FBUA.
- 4. We empirically demonstrate that the UAPs created by FBUA function primarily by activating spurious features, which couple with clean features to form robust features supporting several dominant labels. DNNs therefore exhibit class-wise vulnerability to UAPs: classes that do not conform to the dominant labels are easily misled, whereas the other classes remain robust.
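The fooling ratio used throughout these contributions can be made concrete with a minimal sketch. This is not the authors' implementation: the `predict` callable, the array shapes, and the use of numpy in place of a deep-learning framework are all assumptions for illustration, and the 16/255 L-infinity budget follows the magnitude quoted above.

```python
import numpy as np

def apply_uap(images, uap, eps=16 / 255):
    """Add a single universal perturbation to every image, clipping the
    perturbation to an L-infinity ball of radius eps and the perturbed
    images back to the valid pixel range [0, 1]."""
    delta = np.clip(uap, -eps, eps)           # enforce ||delta||_inf <= eps
    return np.clip(images + delta, 0.0, 1.0)

def fooling_ratio(predict, images, uap, eps=16 / 255):
    """Fraction of images whose predicted label changes after the UAP is
    added -- the universal-attack metric reported in this article."""
    clean_labels = predict(images)
    adv_labels = predict(apply_uap(images, uap, eps))
    return float(np.mean(clean_labels != adv_labels))
```

Because the same `uap` array is added to every image, a high fooling ratio here corresponds exactly to the "single perturbation, many images and models" property that makes the attack universal.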
2. Preliminaries and Experimental Settings
2.1. Problem Description of Universal Adversarial Attack to SAR ATR
2.2. UAP Generation Algorithms
2.2.1. DeepFool-UAP
2.2.2. Dominant Feature-UAP (DF-UAP)
2.2.3. CosSim-UAP
2.2.4. Fast Feature Fool (FFF) and Generalizable Data-Free UAP (GD-UAP)
2.3. Database
2.3.1. MSTAR Dataset
2.3.2. SARSIM Dataset
2.4. Implementation Details
2.4.1. Environment
2.4.2. DNN Models and Training Details
2.4.3. Implementation Details
2.4.4. Metric
3. Results
3.1. Quantitative Results
3.2. Qualitative Results
3.3. Comparison to Random Noise
3.4. Robustness of the UAP
3.5. Analysis of the Class-Wise Vulnerability
- The dominant labels, where the misclassified data concentrate, are at the same time the robust classes, whose clean data are difficult to attack successfully. Moreover, data within the robust classes also tend to be flipped to one of the dominant labels; for example, with DenseNet121, BMP2 data are easily induced to T62.
- The dominant labels of the diverse models share a certain similarity, even though they are all induced by a single substitute model. For instance, BMP2 appears as a dominant label for all eight models, and T72 for seven models (all except DenseNet121). This indicates a universal representation shared across the target models.
- The class-wise vulnerability to UAPs is more universal than the attack selectivity reported in previous image-dependent attack studies [9,10]. For image-dependent adversarial examples, the dominant labels are highly class-dependent; for example, most adversarial examples of D7 are recognized as ZIL131, ZSU234, or T62, while examples of other classes are misclassified into different labels. The behavior also differs from universal attacks on optical image classifiers trained on 1000 categories, where the adversarial images collapse into a single dominant class [24,44].
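The dominant-label analysis sketched in the bullets above amounts to tallying, per true class, where the adversarial examples land. The following is a minimal sketch of that bookkeeping, not the authors' code; the function name and the label arrays are hypothetical.

```python
import numpy as np
from collections import Counter

def dominant_labels(true_labels, adv_preds, top_k=2):
    """For each true class, count the wrong classes its adversarial
    examples are pushed into; the most frequent destinations are the
    'dominant labels' discussed in Section 3.5."""
    per_class = {}
    for c in np.unique(true_labels):
        # predictions for class-c samples that were actually flipped
        dest = adv_preds[(true_labels == c) & (adv_preds != c)]
        per_class[int(c)] = Counter(dest.tolist()).most_common(top_k)
    return per_class
```

Running this per model and comparing the resulting destination counts across the eight DNNs is one way to surface the cross-model similarity of dominant labels (e.g., BMP2 recurring for every model) noted above.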
3.6. Summary
4. Conclusions and Discussion
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
ATR | Automatic Target Recognition |
BIM | Basic Iterative Method |
CosSim | Cosine Similarity |
DF | Dominant Feature |
DNN | Deep Neural Network |
FFF | Fast Feature Fool |
FGSM | Fast Gradient Sign Method |
GD | Generalizable Data-Free |
MSTAR | Moving and Stationary Target Acquisition and Recognition |
SAR | Synthetic Aperture Radar |
UAP | Universal Adversarial Perturbation |
Appendix A
Noise | AlexNet | VGG11 | ResNet50 | Dense | Mobile | AConvNet | Shuffle | Squeeze | |
---|---|---|---|---|---|---|---|---|---|
Uniform | 3.6 | 0.2 | 0.4 | 27.5 | 7.4 | 0 | 29.7 | 0.2 | |
Gaussian | 0.6 | 0.1 | 0.3 | 1.0 | 1.3 | 0 | 4.0 | 0.1 | |
UAP | 47.7 | 24.7 | 32.5 | 55.8 | 47.3 | 10.4 | 57.2 | 12.7 | |
Uniform | 24.5 | 6.4 | 2.6 | 62.9 | 44.4 | 0 | 72.9 | 0.2 | |
Gaussian | 5.2 | 0.4 | 0.6 | 37.0 | 12.3 | 0 | 38.6 | 0.2 | |
UAP | 68.2 | 51.2 | 67.0 | 79.6 | 75.0 | 38.3 | 76.6 | 43.0 | |
Uniform | 42.0 | 17.7 | 9.2 | 77.5 | 70.4 | 0.1 | 85.1 | 0.7 | |
Gaussian | 16.7 | 28.3 | 1.4 | 59.0 | 36.2 | 0 | 65.7 | 0.2 | |
UAP | 74.5 | 58.7 | 77.9 | 89.9 | 80.7 | 57.3 | 82.2 | 60.6 | |
Uniform | 52.8 | 27.2 | 18.8 | 85.9 | 84.2 | 2.1 | 89.3 | 6.3 | |
Gaussian | 29.1 | 9.2 | 3.8 | 67.3 | 53.4 | 0 | 78.1 | 0.2 | |
UAP | 80.4 | 68.2 | 81.4 | 90.1 | 88.6 | 64.0 | 89.1 | 66.2 |
Noise | AlexNet | VGG11 | ResNet50 | Dense | Mobile | AConvNet | Shuffle | Squeeze | |
---|---|---|---|---|---|---|---|---|---|
Uniform | 3.6 | 0.2 | 0.4 | 27.5 | 7.4 | 0 | 29.7 | 0.2 | |
Gaussian | 0.6 | 0.1 | 0.3 | 1.0 | 1.3 | 0 | 4.0 | 0.1 | |
UAP | 44.9 | 24.9 | 34.8 | 35.4 | 48.6 | 16.2 | 40.4 | 17.9 | |
Uniform | 24.5 | 6.4 | 2.6 | 62.9 | 44.4 | 0 | 72.9 | 0.2 | |
Gaussian | 5.2 | 0.4 | 0.6 | 37.0 | 12.3 | 0 | 38.6 | 0.2 | |
UAP | 65.0 | 52.4 | 65.9 | 73.1 | 82.7 | 31.8 | 77.0 | 42.4 | |
Uniform | 42.0 | 17.7 | 9.2 | 77.5 | 70.4 | 0.1 | 85.1 | 0.7 | |
Gaussian | 16.7 | 28.3 | 1.4 | 59.0 | 36.2 | 0 | 65.7 | 0.2 | |
UAP | 72.9 | 62.9 | 83.0 | 89.1 | 86.0 | 50.8 | 80.9 | 64.9 | |
Uniform | 52.8 | 27.2 | 18.8 | 85.9 | 84.2 | 2.1 | 89.3 | 6.3 | |
Gaussian | 29.1 | 9.2 | 3.8 | 67.3 | 53.4 | 0 | 78.1 | 0.2 | |
UAP | 80.9 | 70.5 | 87.4 | 90.0 | 87.4 | 61.1 | 82.5 | 76.1 |
Interference | AlexNet | VGG11 | ResNet50 | Dense | Mobile | AConvNet | Shuffle | Squeeze | |
---|---|---|---|---|---|---|---|---|---|
Clean | 47.7 | 24.7 | 32.5 | 55.8 | 47.3 | 10.4 | 57.2 | 12.7 | |
Additive | 49.2 | 26.2 | 32.7 | 59.1 | 50.3 | 10.1 | 61.2 | 12.9 | |
Multiplicative | 35.6 | 13.6 | 16.7 | 53.8 | 37.7 | 3.6 | 52.6 | 6.0 | |
Gaussian | 40.5 | 18.2 | 28.1 | 17.7 | 26.5 | 11.1 | 22.6 | 11.3 | |
Median | 48.4 | 33.9 | 43.1 | 36.9 | 43.6 | 20.8 | 38.4 | 21.5 | |
Clean | 68.2 | 51.2 | 67.0 | 79.6 | 75.0 | 38.3 | 76.6 | 43.0 | |
Additive | 67.1 | 50.2 | 65.6 | 77.1 | 73.6 | 35.7 | 76.1 | 40.5 | |
Multiplicative | 56.8 | 38.7 | 42.6 | 70.1 | 63.1 | 15.2 | 73.2 | 19.7 | |
Gaussian | 64.8 | 49.8 | 66.0 | 66.9 | 66.3 | 40.7 | 54.6 | 41.5 | |
Median | 66.7 | 54.4 | 72.6 | 73.3 | 70.9 | 55.0 | 63.3 | 55.3 | |
Clean | 74.5 | 58.7 | 77.9 | 89.9 | 80.7 | 57.3 | 82.2 | 60.6 | |
Additive | 73.0 | 58.0 | 77.8 | 89.7 | 79.6 | 55.9 | 81.2 | 59.8 | |
Multiplicative | 66.9 | 49.9 | 62.6 | 75.5 | 74.8 | 30.6 | 78.7 | 40.5 | |
Gaussian | 68.5 | 58.4 | 75.4 | 78.9 | 75.3 | 59.5 | 70.7 | 60.8 | |
Median | 74.1 | 62.8 | 77.8 | 77.7 | 79.3 | 66.6 | 73.5 | 66.2 | |
Clean | 80.4 | 68.2 | 81.4 | 90.1 | 88.6 | 64.0 | 89.1 | 66.2 | |
Additive | 80.1 | 67.0 | 81.3 | 90.1 | 87.7 | 63.5 | 88.9 | 66.3 | |
Multiplicative | 69.5 | 55.0 | 68.9 | 87.3 | 79.4 | 41.6 | 80.1 | 50.8 | |
Gaussian | 75.3 | 66.3 | 79.5 | 84.0 | 82.1 | 66.1 | 78.3 | 67.4 | |
Median | 77.2 | 69.5 | 81.4 | 87.3 | 84.7 | 70.7 | 80.8 | 73.8 |
Interference | AlexNet | VGG11 | ResNet50 | Dense | Mobile | AConvNet | Shuffle | Squeeze | |
---|---|---|---|---|---|---|---|---|---|
Clean | 44.9 | 24.9 | 34.8 | 35.4 | 48.6 | 16.2 | 40.4 | 17.9 | |
Additive | 40.0 | 20.0 | 28.0 | 31.6 | 42.4 | 11.5 | 35.3 | 13.3 | |
Multiplicative | 26.0 | 5.1 | 12.5 | 22.1 | 21.3 | 4.1 | 26.4 | 6.1 | |
Gaussian | 42.1 | 19.8 | 31.1 | 20.3 | 30.2 | 15.1 | 25.1 | 14.8 | |
Median | 48.8 | 26.6 | 40.1 | 28.6 | 37.7 | 23.2 | 32.2 | 21.5 | |
Clean | 65.0 | 52.4 | 65.9 | 73.1 | 82.7 | 31.8 | 77.0 | 42.4 | |
Additive | 63.6 | 50.4 | 62.0 | 70.0 | 80.7 | 27.9 | 75.9 | 39.5 | |
Multiplicative | 51.2 | 34.4 | 31.4 | 67.3 | 64.0 | 8.0 | 69.1 | 17.7 | |
Gaussian | 62.4 | 48.7 | 58.6 | 63.3 | 65.9 | 31.4 | 52.9 | 37.3 | |
Median | 67.7 | 54.6 | 68.1 | 76.0 | 71.1 | 42.7 | 56.5 | 48.1 | |
Clean | 72.9 | 62.9 | 83.0 | 89.1 | 86.0 | 50.8 | 80.9 | 64.9 | |
Additive | 72.2 | 61.8 | 82.3 | 88.8 | 85.7 | 49.3 | 81.1 | 63.2 | |
Multiplicative | 64.2 | 47.3 | 54.2 | 71.6 | 82.4 | 17.8 | 80.4 | 34.5 | |
Gaussian | 69.8 | 60.7 | 79.4 | 81.5 | 78.5 | 51.9 | 63.2 | 59.0 | |
Median | 73.5 | 65.5 | 85.0 | 81.7 | 77.9 | 62.6 | 71.6 | 68.9 | |
Clean | 80.9 | 70.5 | 87.4 | 90.0 | 87.4 | 61.1 | 82.5 | 76.1 | |
Additive | 80.6 | 69.4 | 87.3 | 90.0 | 87.0 | 59.4 | 81.6 | 75.1 | |
Multiplicative | 70.0 | 55.4 | 67.0 | 85.9 | 87.2 | 27.4 | 83.3 | 47.7 | |
Gaussian | 76.9 | 69.5 | 86.6 | 82.0 | 83.0 | 64.9 | 75.3 | 72.9 | |
Median | 79.9 | 76.8 | 88.3 | 89.0 | 85.4 | 75.3 | 82.8 | 79.4 |
Class | AlexNet | VGG11 | ResNet50 | DenseNet121 | MobileNetV2 | AConvNet | ShuffleNetV2 | SqueezeNet |
---|---|---|---|---|---|---|---|---|
2S1 | 100 | 99 | 100 | 100 | 100 | 78 | 100 | 85 |
BMP2 | 9 | 5 | 48 | 48 | 5 | 26 | 2 | 7 |
BRDM2 | 89 | 68 | 77 | 89 | 92 | 43 | 91 | 37 |
BTR70 | 1 | 3 | 7 | 31 | 39 | 0 | 77 | 2 |
BTR60 | 99 | 88 | 87 | 98 | 98 | 67 | 98 | 82 |
D7 | 98 | 87 | 98 | 100 | 100 | 76 | 100 | 85 |
T72 | 14 | 0 | 1 | 100 | 24 | 0 | 2 | 0 |
T62 | 100 | 100 | 91 | 32 | 100 | 57 | 100 | 64 |
ZIL131 | 78 | 62 | 83 | 98 | 92 | 34 | 96 | 58 |
ZSU234 | 94 | 0 | 78 | 100 | 100 | 2 | 100 | 10 |
Total | 682 | 512 | 670 | 796 | 750 | 383 | 766 | 430 |
Class | AlexNet | VGG11 | ResNet50 | DenseNet121 | MobileNetV2 | AConvNet | ShuffleNetV2 | SqueezeNet |
---|---|---|---|---|---|---|---|---|
2S1 | 100 | 99 | 100 | 100 | 100 | 71 | 100 | 82 |
BMP2 | 17 | 5 | 60 | 29 | 2 | 26 | 0 | 9 |
BRDM2 | 85 | 54 | 67 | 80 | 88 | 19 | 82 | 28 |
BTR70 | 22 | 34 | 11 | 43 | 80 | 8 | 90 | 14 |
BTR60 | 94 | 89 | 79 | 92 | 98 | 59 | 98 | 90 |
D7 | 93 | 85 | 95 | 100 | 99 | 75 | 100 | 83 |
T72 | 9 | 0 | 4 | 100 | 76 | 0 | 9 | 0 |
T62 | 99 | 100 | 86 | 2 | 100 | 42 | 100 | 51 |
ZIL131 | 69 | 58 | 82 | 87 | 85 | 18 | 91 | 49 |
ZSU234 | 62 | 0 | 75 | 98 | 99 | 0 | 100 | 18 |
Total | 650 | 524 | 659 | 731 | 827 | 318 | 770 | 424 |
References
- Yue, D.; Xu, F.; Frery, A.C.; Jin, Y. Synthetic Aperture Radar Image Statistical Modeling: Part One-Single-Pixel Statistical Models. IEEE Geosci. Remote Sens. Mag. 2021, 9, 82–114. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the Conference and Workshop on Neural Information Processing Systems (NeurIPS); NIPS: Lake Tahoe, NV, USA, 2012; pp. 1097–1105. [Google Scholar]
- Liu, L.; Chen, J.; Zhao, G.; Fieguth, P.; Chen, X.; Pietikäinen, M. Texture Classification in Extreme Scale Variations Using GANet. IEEE Trans. Image Process. 2019, 28, 3910–3922. [Google Scholar] [CrossRef]
- Chen, S.; Wang, H.; Xu, F.; Jin, Y.Q. Target Classification Using the Deep Convolutional Networks for SAR Images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4806–4817. [Google Scholar] [CrossRef]
- Zhang, L.; Leng, X.; Feng, S.; Ma, X.; Ji, K.; Kuang, G.; Liu, L. Domain Knowledge Powered Two-Stream Deep Network for Few-Shot SAR Vehicle Recognition. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
- Li, Y.; Du, L.; Wei, D. Multiscale CNN Based on Component Analysis for SAR ATR. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12. [Google Scholar] [CrossRef]
- Ai, J.; Wang, F.; Mao, Y.; Luo, Q.; Yao, B.; Yan, H.; Xing, M.; Wu, Y. A Fine PolSAR Terrain Classification Algorithm Using the Texture Feature Fusion Based Improved Convolutional Autoencoder. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1. [Google Scholar] [CrossRef]
- Zhu, X.; Montazeri, S.; Ali, M.; Hua, Y.; Wang, Y.; Mou, L.; Shi, Y.; Xu, F.; Bamler, R. Deep Learning Meets SAR: Concepts, Models, Pitfalls, and Perspectives. IEEE Geosci. Remote Sens. Mag. 2021, 9, 143–172. [Google Scholar] [CrossRef]
- Chen, L.; Xu, Z.; Li, Q.; Peng, J.; Wang, S.; Li, H. An Empirical Study of Adversarial Examples on Remote Sensing Image Scene Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7419–7433. [Google Scholar] [CrossRef]
- Li, H.; Huang, H.; Chen, L.; Peng, J.; Huang, H.; Cui, Z.; Mei, X.; Wu, G. Adversarial Examples for CNN-Based SAR Image Classification: An Experience Study. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 1333–1347. [Google Scholar] [CrossRef]
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing Properties of Neural Networks. In Proceedings of the International Conference on Learning Representations (ICLR), Banff, AB, Canada, 14–16 April 2014. [Google Scholar]
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- Tanay, T.; Griffin, L. A Boundary Tilting Perspective on the Phenomenon of Adversarial Examples; Cornell University: Ithaca, NY, USA, 2016. [Google Scholar] [CrossRef]
- Fawzi, A.; Moosavi-Dezfooli, S.M.; Frossard, P. The Robustness of Deep Networks: A Geometrical Perspective. IEEE Signal Process. Mag. 2017, 34, 50–62. [Google Scholar] [CrossRef]
- Ortiz-Jiménez, G.; Modas, A.; Moosavi-Dezfooli, S.M.; Frossard, P. Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness. Proc. IEEE 2021, 109, 635–659. [Google Scholar] [CrossRef]
- Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks. In Proceedings of the International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
- Ma, C.; Chen, L.; Yong, J.H. Simulating Unknown Target Models for Query-Efficient Black-Box Attacks. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), Online, 21–25 June 2021; pp. 11835–11844. [Google Scholar]
- Chen, W.; Zhang, Z.; Hu, X.; Wu, B. Boosting Decision-Based Black-Box Adversarial Attacks with Random Sign Flip. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 276–293. [Google Scholar]
- Shi, Y.; Han, Y.; Hu, Q.; Yang, Y.; Tian, Q. Query-efficient Black-box Adversarial Attack with Customized Iteration and Sampling. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 1. [Google Scholar] [CrossRef] [PubMed]
- Dong, Y.; Pang, T.; Su, H.; Zhu, J. Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 4312–4321. [Google Scholar]
- Lin, J.; Song, C.; He, K.; Wang, L.; Hopcroft, J. Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks. In Proceedings of the International Conference on Learning Representations (ICLR), New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
- Xie, C.; Zhang, Z.; Zhou, Y.; Bai, S.; Wang, J.; Ren, Z.; Yuille, A.L. Improving Transferability of Adversarial Examples With Input Diversity. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
- Wang, X.; He, K. Enhancing the Transferability of Adversarial Attacks Through Variance Tuning. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 1924–1933. [Google Scholar]
- Moosavi-Dezfooli, S.M.; Fawzi, A.; Fawzi, O.; Frossard, P. Universal Adversarial Perturbations. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
- Poursaeed, O.; Katsman, I.; Gao, B.; Belongie, S. Generative Adversarial Perturbations. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
- Mopuri, K.R.; Ojha, U.; Garg, U.; Babu, R.V. NAG: Network for Adversary Generation. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
- Mopuri, K.R.; Garg, U.; Babu, R.V. Fast Feature Fool: A Data Independent Approach to Universal Adversarial Perturbations. arXiv 2017, arXiv:1707.05572. [Google Scholar]
- Khrulkov, V.; Oseledets, I. Art of Singular Vectors and Universal Adversarial Perturbations. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
- Mopuri, K.R.; Ganeshan, A.; Babu, R.V. Generalizable Data-Free Objective for Crafting Universal Adversarial Perturbations. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 2452–2465. [Google Scholar] [CrossRef] [PubMed]
- Zhang, C.; Benz, P.; Lin, C.; Karjauv, A.; Wu, J.; Kweon, I.S. A Survey on Universal Adversarial Attack. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Online, 19–26 August 2021; pp. 4687–4694. [Google Scholar]
- Huang, T.; Zhang, Q.; Liu, J.; Hou, R.; Wang, X.; Li, Y. Adversarial Attacks on Deep-Learning-Based SAR Image Target Recognition. J. Netw. Comput. Appl. 2020, 162, 102632. [Google Scholar] [CrossRef]
- Du, C.; Zhang, L. Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network. Remote Sens. 2021, 13, 4358. [Google Scholar] [CrossRef]
- Du, C.; Huo, C.; Zhang, L.; Chen, B.; Yuan, Y. Fast C&W: A Fast Adversarial Attack Algorithm to Fool SAR Target Recognition With Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
- Liu, Z.; Xia, W.; Lei, Y. SAR-GPA: SAR Generation Perturbation Algorithm. In Proceedings of the 2021 3rd International Conference on Advanced Information Science and System (AISS 2021), Sanya, China, 26–28 November 2021; pp. 1–6. [Google Scholar]
- Xu, L.; Feng, D.; Zhang, R.; Wang, X. High-Resolution Range Profile Deception Method Based on Phase-Switched Screen. IEEE Antennas Wirel. Propag. Lett. 2016, 15, 1665–1668. [Google Scholar] [CrossRef]
- Xu, H.; Guan, D.F.; Peng, B.; Liu, Z.; Yong, S.; Liu, Y. Radar One-Dimensional Range Profile Dynamic Jamming Based on Programmable Metasurface. IEEE Antennas Wirel. Propag. Lett. 2021, 20, 1883. [Google Scholar] [CrossRef]
- Peng, B.; Peng, B.; Zhou, J.; Xia, J.; Liu, L. Speckle-Variant Attack: Towards Transferable Adversarial Attack to SAR Target Recognition. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4509805. [Google Scholar] [CrossRef]
- Kusk, A.; Abulaitijiang, A.; Dall, J. Synthetic SAR Image Generation Using Sensor, Terrain and Target Models. In Proceedings of the 11th European Conference on Synthetic Aperture Radar (VDE), Hamburg, Germany, 6–9 June 2016; pp. 1–5. [Google Scholar]
- Malmgren-Hansen, D.; Kusk, A.; Dall, J.; Nielsen, A.A.; Engholm, R.; Skriver, H. Improving SAR Automatic Target Recognition Models with Transfer Learning from Simulated Data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1484–1488. [Google Scholar] [CrossRef]
- Moosavi-Dezfooli, S.M.; Fawzi, A.; Frossard, P. Deepfool: A Simple and Accurate Method to Fool Deep Neural Networks. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), Las Vegas, NV, USA, 26–30 June 2016; pp. 2574–2582. [Google Scholar]
- Zhang, C.; Benz, P.; Imtiaz, T.; Kweon, I.S. Understanding Adversarial Examples From the Mutual Influence of Images and Perturbations. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), Seoul, Korea, 14–19 June 2020. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Sutskever, I.; Martens, J.; Dahl, G.; Hinton, G. On the Importance of Initialization and Momentum in Deep Learning. In Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, 17–19 June 2013; pp. 1139–1147. [Google Scholar]
- Zhang, C.; Benz, P.; Karjauv, A.; Kweon, I.S. Data-Free Universal Adversarial Perturbation and Black-Box Attack. In Proceedings of the International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 7868–7877. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K. Densely Connected Convolutional Networks. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
- Ma, N.; Zhang, X.; Zheng, H.T.; Sun, J. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
- Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-Level Accuracy with 50× Fewer Parameters and <0.5 MB Model Size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
- Zhang, C.; Benz, P.; Imtiaz, T.; Kweon, I.S. CD-UAP: Class Discriminative Universal Adversarial Perturbation. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 6754–6761. [Google Scholar]
- Benz, P.; Zhang, C.; Imtiaz, T.; Kweon, I.S. Double Targeted Universal Adversarial Perturbations. In Proceedings of the Asian Conference on Computer Vision (ACCV), Kyoto, Japan, 30 November–4 December 2020. [Google Scholar]
- Shafahi, A.; Najibi, M.; Xu, Z.; Dickerson, J.; Davis, L.S.; Goldstein, T. Universal Adversarial Training. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 5636–5643. [Google Scholar]
Type | Class | Serial Number | Training Set (17°) | Test Set (15°) |
---|---|---|---|---|
Rocket launcher | 2S1 | b01 | 299 | 274 |
Armored personnel carrier | BMP2 | 9563 | 233 | 195 |
BRDM2 | E71 | 298 | 274 | |
BTR70 | c71 | 233 | 196 | |
BTR60 | k10yt7532 | 256 | 195 | |
Bulldozer | D7 | 92v13015 | 299 | 274 |
Tank | T72 | 132 | 232 | 196 |
T62 | A51 | 299 | 273 | |
Truck | ZIL131 | E12 | 299 | 274 |
Air defense unit | ZSU234 | d08 | 299 | 274 |
Total | 2747 | 2425 |
Type | Class | Training Set (17°) | Test Set (15°) |
---|---|---|---|
Bulldozer | 8020 | 216 | 216 |
13013 | 216 | 216 | |
Car | Peugeot607 | 216 | 216 |
Toyota | 216 | 216 | |
Humvee | 3663 | 216 | 216 |
9657 | 216 | 216 | |
Tank | 65047 | 216 | 216 |
86347 | 216 | 216 | |
Truck | 2096 | 216 | 216 |
2107 | 216 | 216 | |
Total | 2160 | 2160 |
Model | # Params. | FLOPs | MSTAR Acc. (%) | SARSIM Acc. (%) |
---|---|---|---|---|
AlexNet [2] | 58,299,082 | 1.06 | 96.3 | 94.2 | |
VGG11 [45] | 128,814,154 | 2.02 | 98.1 | 96.5 | |
ResNet50 [46] | 23,522,250 | 4.04 | 96.4 | 92.8 | |
DenseNet121 [47] | 6,957,706 | 2.80 | 97.8 | 96.4 | |
MobileNetV2 [48] | 2,236,106 | 0.31 | 97.4 | 96.0 | |
AConvNet [4] | 303,498 | 0.04 | 98.1 | 94.5 | |
ShuffleNetV2 [49] | 1,263,422 | 0.14 | 96.9 | 96.6 | |
SqueezeNet [50] | 726,474 | 0.26 | 97.1 | 94.0 |
Noise | AlexNet | VGG11 | ResNet50 | Dense | Mobile | AConvNet | Shuffle | Squeeze | |
---|---|---|---|---|---|---|---|---|---|
Uniform | 3.6 | 0.2 | 0.4 | 27.5 | 7.4 | 0 | 29.7 | 0.2 | |
Gaussian | 0.6 | 0.1 | 0.3 | 1.0 | 1.3 | 0 | 4.0 | 0.1 | |
UAP | 47.3 | 27.1 | 29.8 | 57.0 | 49.9 | 10.6 | 59.1 | 13.7 | |
Uniform | 24.5 | 6.4 | 2.6 | 62.9 | 44.4 | 0 | 72.9 | 0.2 | |
Gaussian | 5.2 | 0.4 | 0.6 | 37.0 | 12.3 | 0 | 38.6 | 0.2 | |
UAP | 67.4 | 53.0 | 69.2 | 81.2 | 74.4 | 45.3 | 77.4 | 48.7 | |
Uniform | 42.0 | 17.7 | 9.2 | 77.5 | 70.4 | 0.1 | 85.1 | 0.7 | |
Gaussian | 16.7 | 28.3 | 1.4 | 59.0 | 36.2 | 0 | 65.7 | 0.2 | |
UAP | 72.1 | 62.6 | 78.5 | 89.3 | 78.8 | 61.1 | 83.2 | 63.0 | |
Uniform | 52.8 | 27.2 | 18.8 | 85.9 | 84.2 | 2.1 | 89.3 | 6.3 | |
Gaussian | 29.1 | 9.2 | 3.8 | 67.3 | 53.4 | 0 | 78.1 | 0.2 | |
UAP | 77.8 | 73.8 | 82.3 | 90.0 | 83.6 | 65.7 | 88.8 | 72.6 |
Interference | AlexNet | VGG11 | ResNet50 | Dense | Mobile | AConvNet | Shuffle | Squeeze | |
---|---|---|---|---|---|---|---|---|---|
Clean | 47.3 | 27.1 | 29.8 | 57.0 | 49.9 | 10.6 | 59.1 | 13.7 | |
Additive | 41.0 | 21.4 | 25.1 | 54.2 | 43.8 | 8.2 | 54.2 | 10.4 | |
Multiplicative | 27.4 | 7.0 | 12.1 | 41.6 | 25.8 | 2.6 | 37.6 | 5.1 | |
Gaussian | 37.7 | 19.5 | 26.4 | 18.6 | 28.3 | 11.5 | 25.8 | 13.0 | |
Median | 50.8 | 35.8 | 46.0 | 34.6 | 45.7 | 24.6 | 40.0 | 24.2 | |
Clean | 67.4 | 53.0 | 69.2 | 81.2 | 74.4 | 45.3 | 77.4 | 48.7 | |
Additive | 66.6 | 51.8 | 67.3 | 77.5 | 73.3 | 41.8 | 76.7 | 47.3 | |
Multiplicative | 57.1 | 40.9 | 46.3 | 69.5 | 63.0 | 20.6 | 71.9 | 24.9 | |
Gaussian | 64.7 | 52.1 | 67.5 | 67.4 | 67.7 | 46.2 | 58.9 | 48.6 | |
Median | 66.9 | 54.4 | 72.6 | 73.0 | 71.3 | 55.7 | 64.1 | 55.5 | |
Clean | 72.1 | 62.6 | 78.5 | 89.3 | 78.8 | 61.1 | 83.2 | 63.0 | |
Additive | 70.7 | 61.7 | 77.5 | 88.8 | 78.0 | 60.0 | 82.5 | 62.0 | |
Multiplicative | 66.8 | 50.6 | 64.1 | 75.9 | 74.2 | 36.4 | 77.8 | 44.4 | |
Gaussian | 70.7 | 62.0 | 76.3 | 76.9 | 74.9 | 62.4 | 71.5 | 63.1 | |
Median | 74.2 | 62.7 | 77.8 | 77.3 | 79.3 | 66.9 | 73.7 | 66.4 | |
Clean | 77.8 | 73.8 | 82.3 | 90.0 | 83.6 | 65.7 | 88.8 | 72.6 | |
Additive | 77.3 | 72.8 | 81.8 | 90.0 | 83.0 | 65.2 | 88.7 | 71.6 | |
Multiplicative | 70.1 | 56.3 | 71.4 | 86.6 | 78.3 | 48.2 | 80.0 | 53.2 | |
Gaussian | 76.2 | 71.2 | 79.3 | 83.9 | 80.1 | 66.5 | 80.2 | 73.0 | |
Median | 77.3 | 70.2 | 81.2 | 87.6 | 84.3 | 71.0 | 80.5 | 75.5 |
Class | AlexNet | VGG11 | ResNet50 | Dense | Mobile | AConvNet | Shuffle | Squeeze |
---|---|---|---|---|---|---|---|---|
2S1 | 100 | 99 | 100 | 100 | 100 | 86 | 100 | 92 |
BMP2 | 10 | 13 | 66 | 63 | 8 | 55 | 4 | 27 |
BRDM2 | 89 | 69 | 80 | 91 | 92 | 46 | 95 | 44 |
BTR70 | 2 | 4 | 8 | 37 | 38 | 0 | 80 | 2 |
BTR60 | 99 | 89 | 89 | 98 | 98 | 70 | 98 | 86 |
D7 | 98 | 89 | 98 | 100 | 100 | 81 | 100 | 86 |
T72 | 11 | 0 | 0 | 100 | 16 | 0 | 0 | 0 |
T62 | 100 | 100 | 89 | 25 | 100 | 55 | 100 | 72 |
ZIL131 | 77 | 67 | 87 | 98 | 92 | 58 | 97 | 68 |
ZSU234 | 88 | 0 | 75 | 100 | 100 | 2 | 100 | 10 |
Total | 674 | 530 | 692 | 812 | 744 | 453 | 774 | 487 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Peng, B.; Peng, B.; Yong, S.; Liu, L. An Empirical Study of Fully Black-Box and Universal Adversarial Attack for SAR Target Recognition. Remote Sens. 2022, 14, 4017. https://doi.org/10.3390/rs14164017