A SAR Ship Detection Method Based on Adversarial Training
Abstract
1. Introduction
- (1) Adversarial learning is introduced into SAR ship detection, and its principles and formulas are described in detail (a standard formulation is sketched after this list).
- (2) The adversarial-learning objective is solved with an optimization method based on separated batch normalization (BN), a "maxing" strategy that perturbs against the larger of the classification and localization losses, and K-step gradient ascent paired with one-step gradient descent (see the code sketch after this list).
- (3) Experiments with classical detectors on public datasets show clear gains over traditional data augmentation methods.
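For context, contribution (1) refers to the min-max objective of adversarial training. The block below states that standard objective together with one plausible reading of the "maxing" strategy from contribution (2); the loss decomposition is our assumption, not a formula quoted from the paper:

```latex
% Outer minimization trains the detector; inner maximization crafts the perturbation.
\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}
\Big[ \max_{\|\delta\|_{\infty}\le\epsilon} L\big(f_{\theta}(x+\delta),\, y\big) \Big],
\qquad
L = \max\big(L_{\mathrm{cls}},\, L_{\mathrm{loc}}\big)
```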
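A minimal PyTorch sketch of the K-step gradient ascent from contribution (2). The detector interface and the loss-dict keys (`loss_cls`, `loss_loc`) are hypothetical, for illustration only, not the authors' exact implementation:

```python
import torch

def pgd_perturb(model, images, targets, eps=8/255, alpha=2/255, k=3):
    """K-step L_inf gradient ascent on the larger of the two detection losses.

    Assumes model(images, targets) returns a dict with scalar losses
    'loss_cls' and 'loss_loc' (hypothetical keys; adapt to your detector).
    """
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(k):                              # K ascent steps
        losses = model(images + delta, targets)
        # "Maxing": attack whichever task loss is currently larger.
        loss = torch.maximum(losses['loss_cls'], losses['loss_loc'])
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()      # signed ascent step
            delta.clamp_(-eps, eps)                 # project onto the eps-ball
        delta.grad.zero_()
    model.zero_grad()                               # discard grads accumulated by the attack
    return (images + delta).detach()
```

The one-step gradient descent then updates the detector weights once on the perturbed batch, alongside the clean batch when separated BN is used.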
2. Methodology
3. Experimental Results
3.1. Experiment Setup and Baseline
3.2. Performance on Different Datasets
3.2.1. Performance on SSDD
3.2.2. Performance on SAR-Ship-Dataset
3.2.3. Performance on AIR-SARShip
3.3. Ablation Experiments
3.4. The Detection Results
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Baseline results of the four detectors on the three datasets (FR: Faster R-CNN; RN: RetinaNet; Y3: YOLOv3; CN: CenterNet; AVE: average over the four detectors).

Datasets | FR | RN | Y3 | CN | AVE
---|---|---|---|---|---
SSDD | 0.84 | 0.90 | 0.87 | 0.94 | 0.89
SAR-Ship-Dataset | 0.86 | 0.89 | 0.79 | 0.86 | 0.85
AIR-SARShip | 0.82 | 0.86 | 0.46 | 0.82 | 0.74
Results on SSDD for different data augmentation methods (DAM). AVE1: average over the four detectors; AVE2: AVE1 averaged over the eight traditional methods versus the proposed method.

DAM | FR | RN | Y3 | CN | AVE1 | AVE2
---|---|---|---|---|---|---
Cutout | 0.91 | 0.93 | 0.75 | 0.88 | 0.87 | 0.85
Random erasing | 0.91 | 0.91 | 0.78 | 0.86 | 0.87 |
Mixup | 0.91 | 0.97 | 0.82 | 0.92 | 0.91 |
Cutmix | 0.73 | 0.77 | 0.71 | 0.72 | 0.73 |
Augmix | 0.92 | 0.94 | 0.85 | 0.91 | 0.91 |
Gridmask | 0.90 | 0.90 | 0.84 | 0.85 | 0.87 |
Mosaic | 0.88 | 0.89 | 0.80 | 0.87 | 0.86 |
Copy/paste | 0.82 | 0.91 | 0.65 | 0.88 | 0.82 |
Proposed | 0.94 | 0.95 | 0.89 | 0.95 | 0.93 | 0.93
Results on the SAR-Ship-Dataset for different data augmentation methods (columns as in the SSDD table).

DAM | FR | RN | Y3 | CN | AVE1 | AVE2
---|---|---|---|---|---|---
Cutout | 0.87 | 0.91 | 0.81 | 0.89 | 0.87 | 0.83
Random erasing | 0.86 | 0.88 | 0.84 | 0.83 | 0.85 |
Mixup | 0.89 | 0.93 | 0.80 | 0.90 | 0.88 |
Cutmix | 0.71 | 0.92 | 0.52 | 0.60 | 0.69 |
Augmix | 0.90 | 0.91 | 0.65 | 0.88 | 0.84 |
Gridmask | 0.85 | 0.89 | 0.78 | 0.84 | 0.84 |
Mosaic | 0.88 | 0.86 | 0.84 | 0.86 | 0.86 |
Copy/paste | 0.86 | 0.90 | 0.64 | 0.89 | 0.82 |
Proposed | 0.93 | 0.94 | 0.89 | 0.94 | 0.93 | 0.93
Results on AIR-SARShip for different data augmentation methods (columns as in the SSDD table).

DAM | FR | RN | Y3 | CN | AVE1 | AVE2
---|---|---|---|---|---|---
Cutout | 0.91 | 0.88 | 0.40 | 0.81 | 0.75 | 0.72
Random erasing | 0.91 | 0.85 | 0.42 | 0.84 | 0.76 |
Mixup | 0.90 | 0.81 | 0.38 | 0.86 | 0.74 |
Cutmix | 0.62 | 0.51 | 0.30 | 0.83 | 0.57 |
Augmix | 0.90 | 0.86 | 0.41 | 0.89 | 0.77 |
Gridmask | 0.81 | 0.76 | 0.20 | 0.84 | 0.65 |
Mosaic | 0.80 | 0.84 | 0.65 | 0.85 | 0.79 |
Copy/paste | 0.81 | 0.78 | 0.48 | 0.81 | 0.72 |
Proposed | 0.95 | 0.93 | 0.73 | 0.93 | 0.89 | 0.89
Ablation results, averaged over the four detectors; each "+" row adds one component of the proposed method to the previous configuration, and "Traditional methods" is the AVE2 of the preceding tables.

Methods | SSDD | SAR-Ship-Dataset | AIR-SARShip
---|---|---|---
Traditional methods | 0.85 | 0.83 | 0.72
Plain adversarial training | 0.88 | 0.87 | 0.81
+Separate BN | 0.89 | 0.89 | 0.84
+Maxing | 0.91 | 0.90 | 0.86
+Optimization | 0.93 | 0.93 | 0.89
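The "+Separate BN" row keeps distinct batch-normalization statistics for clean and adversarial inputs while sharing all other weights, in the spirit of the auxiliary-BN idea popularized by AdvProp. A minimal sketch with illustrative names, not the authors' code:

```python
import torch.nn as nn

class DualBNConv(nn.Module):
    """Conv block whose BN statistics are kept separate per input domain."""

    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, bias=False)
        self.bn_clean = nn.BatchNorm2d(c_out)  # statistics of clean batches
        self.bn_adv = nn.BatchNorm2d(c_out)    # statistics of adversarial batches
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, adversarial=False):
        # Route the batch through the BN branch matching its domain.
        bn = self.bn_adv if adversarial else self.bn_clean
        return self.act(bn(self.conv(x)))
```

At inference time only the clean-branch statistics are used, so the extra BN adds no test-time cost.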