YOLO-UAVShip: An Effective Method and Dataset for Multi-View Ship Detection in UAV Images
Abstract
1. Introduction
- Construction of a high-quality dataset: We introduce Marship-OBB9, a novel UAV ship detection dataset comprising 11,268 aerial images and 18,632 instances across nine representative ship categories.
- Design of a robust rotated detection framework: Building upon YOLO11, we propose the YOLO-UAVShip model, which integrates an oriented bounding box (OBB) detection mechanism, the deformable perception module CK_DCNv4, and the SGKLD loss function designed for robust position regression of ships with large aspect ratios in dense scenes. This framework considerably improves detection accuracy and robustness from the UAV perspective.
- Optimization of the accuracy–efficiency balance: On the Marship-OBB9 dataset, the proposed method achieves absolute improvements of 2.1% in mAP@0.5 and 2.3% in recall over the baseline model, with negligible impact on inference speed.
- Comprehensive ablation and comparison experiments: We perform comprehensive ablation studies to evaluate the efficacy of each proposed component and benchmark our methodology against existing horizontal and rotated object detection models. The results demonstrate that our framework offers a superior balance between accuracy, efficiency, and generalization ability.
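The SGKLD loss builds on the idea, established in the Gaussian Wasserstein distance and Kullback-Leibler divergence works cited below, of modelling a rotated box as a 2-D Gaussian so that angle error is penalised in proportion to aspect ratio. The exact SGKLD formulation is not reproduced here; the following minimal sketch shows only the underlying box-to-Gaussian conversion and the KL divergence between two such Gaussians (function names are illustrative, not from the paper):

```python
import numpy as np

def obb_to_gaussian(cx, cy, w, h, theta):
    """Convert an oriented box (center, size, angle in radians) to a
    2-D Gaussian: mean = center, covariance = R diag(w^2/4, h^2/4) R^T."""
    mu = np.array([cx, cy], dtype=float)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    S = np.diag([w ** 2 / 4.0, h ** 2 / 4.0])
    return mu, R @ S @ R.T

def gaussian_kld(mu_p, sig_p, mu_q, sig_q):
    """KL divergence D(N_p || N_q) between two 2-D Gaussians."""
    inv_q = np.linalg.inv(sig_q)
    diff = mu_q - mu_p
    term_tr = np.trace(inv_q @ sig_p)          # trace term
    term_mahal = diff @ inv_q @ diff           # Mahalanobis term
    term_log = np.log(np.linalg.det(sig_q) / np.linalg.det(sig_p))
    return 0.5 * (term_tr + term_mahal + term_log - 2.0)

# Identical boxes give zero divergence; rotating an elongated (40 x 10)
# box by 90 degrees makes the divergence grow, i.e. angle error is
# penalised more strongly for large aspect ratios.
p = obb_to_gaussian(0, 0, 40, 10, 0.0)
q = obb_to_gaussian(0, 0, 40, 10, np.pi / 2)
print(gaussian_kld(*p, *p))  # ~0.0
print(gaussian_kld(*p, *q))  # clearly > 0
```

This Gaussian view is what makes KLD-style losses differentiable and scale-aware, in contrast to rotated-IoU losses that are non-smooth near boundary angles.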
2. Related Work
2.1. Marine Ship Detection Datasets
2.2. Rotated Object Detection
2.3. Ship Detection
3. Proposed Network
3.1. YOLO11
3.2. YOLO11-OBB
3.3. Improved YOLO-UAVShip Detection Algorithm
3.3.1. Overview
3.3.2. Deformable Perception Module CK_DCNv4
3.3.3. Rotation-Robust Localization Loss Function
4. Experiments
4.1. Dataset Marship-OBB9
4.1.1. Image Acquisition and Preprocessing
4.1.2. Data Annotation
4.1.3. Evaluation Metrics
4.2. Experimental Details
4.3. Ablation Study
4.4. Comparative Experiment
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- MerchantFleet. Available online: https://unctadstat.unctad.org/datacentre/dataviewer/US.MerchantFleet (accessed on 26 June 2025).
- Feng, S.; Huang, Y.; Zhang, N. An Improved YOLOv8 OBB Model for Ship Detection through Stable Diffusion Data Augmentation. Sensors 2024, 24, 5850. [Google Scholar] [CrossRef]
- Gonçalves, L.; Damas, B. Automatic Detection of Rescue Targets in Maritime Search and Rescue Missions Using UAVs. In Proceedings of the 2022 International Conference on Unmanned Aircraft Systems (ICUAS), Dubrovnik, Croatia, 21–24 June 2022; pp. 1638–1643. [Google Scholar]
- Liu, Y.; Yan, J.; Zhao, X. Deep Reinforcement Learning Based Latency Minimization for Mobile Edge Computing with Virtualization in Maritime UAV Communication Network. IEEE Trans. Veh. Technol. 2022, 71, 4225–4236. [Google Scholar] [CrossRef]
- Cheng, S.; Zhu, Y.; Wu, S. Deep Learning Based Efficient Ship Detection from Drone-Captured Images for Maritime Surveillance. Ocean Eng. 2023, 285, 115440. [Google Scholar] [CrossRef]
- Robicquet, A.; Sadeghian, A.; Alahi, A.; Savarese, S. Learning Social Etiquette: Human Trajectory Understanding in Crowded Scenes. In Computer Vision–ECCV 2016, Proceedings of the 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 549–565. [Google Scholar]
- Zhu, P.; Wen, L.; Du, D.; Bian, X.; Fan, H.; Hu, Q.; Ling, H. Detection and Tracking Meet Drones Challenge. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 7380–7399. [Google Scholar] [CrossRef]
- Zhao, C.; Liu, R.W.; Qu, J.; Gao, R. Deep Learning-Based Object Detection in Maritime Unmanned Aerial Vehicle Imagery: Review and Experimental Comparisons. Eng. Appl. Artif. Intell. 2024, 128, 107513. [Google Scholar] [CrossRef]
- Shao, Z.; Wu, W.; Wang, Z.; Du, W.; Li, C. SeaShips: A Large-Scale Precisely Annotated Dataset for Ship Detection. IEEE Trans. Multimed. 2018, 20, 2593–2604. [Google Scholar] [CrossRef]
- Wei, S.; Zeng, X.; Qu, Q.; Wang, M.; Su, H.; Shi, J. HRSID: A High-Resolution SAR Images Dataset for Ship Detection and Instance Segmentation. IEEE Access 2020, 8, 120234–120254. [Google Scholar] [CrossRef]
- Zheng, Y.; Zhang, S. Mcships: A Large-Scale Ship Dataset for Detection and Fine-Grained Categorization in the Wild. In Proceedings of the 2020 IEEE International Conference on Multimedia and Expo (ICME), London, UK, 6–10 July 2020; pp. 1–6. [Google Scholar]
- Iancu, B.; Soloviev, V.; Zelioli, L.; Lilius, J. ABOships—An Inshore and Offshore Maritime Vessel Detection Dataset with Precise Annotations. Remote Sens. 2021, 13, 988. [Google Scholar] [CrossRef]
- Sun, X.; Wang, P.; Yan, Z.; Xu, F.; Wang, R.; Diao, W.; Chen, J.; Li, J.; Feng, Y.; Xu, T.; et al. FAIR1M: A Benchmark Dataset for Fine-Grained Object Recognition in High-Resolution Remote Sensing Imagery. ISPRS J. Photogramm. Remote Sens. 2022, 184, 116–130. [Google Scholar] [CrossRef]
- Zhang, T.; Zhang, X.; Li, J.; Xu, X.; Wang, B.; Zhan, X.; Xu, Y.; Ke, X.; Zeng, T.; Su, H.; et al. SAR Ship Detection Dataset (SSDD): Official Release and Comprehensive Data Analysis. Remote Sens. 2021, 13, 3690. [Google Scholar] [CrossRef]
- Wang, N.; Wang, Y.; Wei, Y.; Han, B.; Feng, Y. Marine Vessel Detection Dataset and Benchmark for Unmanned Surface Vehicles. Appl. Ocean Res. 2024, 142, 103835. [Google Scholar] [CrossRef]
- Yu, C.; Yin, H.; Rong, C.; Zhao, J.; Liang, X.; Li, R.; Mo, X. YOLO-MRS: An Efficient Deep Learning-Based Maritime Object Detection Method for Unmanned Surface Vehicles. Appl. Ocean Res. 2024, 153, 104240. [Google Scholar] [CrossRef]
- Han, J.; Ding, J.; Li, J.; Xia, G.-S. Align Deep Features for Oriented Object Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5602511. [Google Scholar] [CrossRef]
- Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal Loss for Dense Object Detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; IEEE: New York, NY, USA, 2017; pp. 2999–3007. [Google Scholar]
- Han, J.; Ding, J.; Xue, N.; Xia, G.-S. ReDet: A Rotation-Equivariant Detector for Aerial Object Detection. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; IEEE: New York, NY, USA, 2021; pp. 2785–2794. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.U.; Polosukhin, I. Attention is All You Need. In Proceedings of the Advances in Neural Information Processing Systems 30, Long Beach, CA, USA, 4–9 December 2017; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: New York, NY, USA, 2017; Volume 30. [Google Scholar]
- Ding, J.; Xue, N.; Long, Y.; Xia, G.-S.; Lu, Q. Learning RoI Transformer for Oriented Object Detection in Aerial Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
- Dai, L.; Liu, H.; Tang, H.; Wu, Z.; Song, P. AO2-DETR: Arbitrary-Oriented Object Detection Transformer. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 2342–2356. [Google Scholar] [CrossRef]
- Zeng, Y.; Chen, Y.; Yang, X.; Li, Q.; Yan, J. ARS-DETR: Aspect Ratio-Sensitive Detection Transformer for Aerial Oriented Object Detection. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5610315. [Google Scholar] [CrossRef]
- Yang, X.; Yan, J.; Ming, Q.; Wang, W.; Zhang, X.; Tian, Q. Rethinking Rotated Object Detection with Gaussian Wasserstein Distance Loss. In Proceedings of the 38th International Conference on Machine Learning, Online, 18–24 July 2021; PMLR: New York, NY, USA, 2021; pp. 11830–11841. [Google Scholar]
- Yang, X.; Yang, X.; Yang, J.; Ming, Q.; Wang, W.; Tian, Q.; Yan, J. Learning High-Precision Bounding Box for Rotated Object Detection via Kullback-Leibler Divergence. In Proceedings of the 35th International Conference on Neural Information Processing Systems, Online, 6–14 December 2021; Curran Associates, Inc.: New York, NY, USA, 2021; Volume 34, pp. 18381–18394. [Google Scholar]
- Zhang, B.; Liu, J.; Liu, R.W.; Huang, Y. Deep-Learning-Empowered Visual Ship Detection and Tracking: Literature Review and Future Direction. Eng. Appl. Artif. Intell. 2025, 141, 109754. [Google Scholar] [CrossRef]
- Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 886–893. [Google Scholar]
- Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Chen, B.; Liu, L.; Zou, Z.; Shi, Z. Target Detection in Hyperspectral Remote Sensing Image: Current Status and Challenges. Remote Sens. 2023, 15, 3223. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef] [PubMed]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: New York, NY, USA, 2016; pp. 779–788. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar] [CrossRef]
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar] [CrossRef]
- Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Computer Vision–ECCV 2016, Proceedings of the 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
- Zwemer, M.H.; Wijnhoven, R.G.J.; de With, P.H.N. Ship Detection in Harbour Surveillance Based on Large-Scale Data and CNNs. In Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2018), Funchal, Madeira, 27–29 January 2018; Volume 5, pp. 153–160. [Google Scholar] [CrossRef]
- Wang, Y.; Ning, X.; Leng, B.; Fu, H. Ship Detection Based on Deep Learning. In Proceedings of the 2019 IEEE International Conference on Mechatronics and Automation (ICMA), Tianjin, China, 4–7 August 2019; pp. 275–279. [Google Scholar]
- Tang, G.; Liu, S.; Fujino, I.; Claramunt, C.; Wang, Y.; Men, S. H-YOLO: A Single-Shot Ship Detection Approach Based on Region of Interest Preselected Network. Remote Sens. 2020, 12, 4192. [Google Scholar] [CrossRef]
- Li, H.; Deng, L.; Yang, C.; Liu, J.; Gu, Z. Enhanced YOLO v3 Tiny Network for Real-Time Ship Detection from Visual Image. IEEE Access 2021, 9, 16692–16706. [Google Scholar] [CrossRef]
- Guo, F.; Ma, H.; Li, L.; Lv, M.; Jia, Z. FCNet: Flexible Convolution Network for Infrared Small Ship Detection. Remote Sens. 2024, 16, 2218. [Google Scholar] [CrossRef]
- Dolgopolov, A.V.; Kazantsev, P.A.; Bezuhliy, N.N. Ship Detection in Images Obtained from the Unmanned Aerial Vehicle (UAV). Indian J. Sci. Technol. 2016, 9, 1–7. [Google Scholar] [CrossRef]
- Wang, Q.; Wang, J.; Wang, X.; Wu, L.; Feng, K.; Wang, G. A YOLOv7-Based Method for Ship Detection in Videos of Drones. J. Mar. Sci. Eng. 2024, 12, 1180. [Google Scholar] [CrossRef]
- Han, Y.; Guo, J.; Yang, H.; Guan, R.; Zhang, T. SSMA-YOLO: A Lightweight YOLO Model with Enhanced Feature Extraction and Fusion Capabilities for Drone-Aerial Ship Image Detection. Drones 2024, 8, 145. [Google Scholar] [CrossRef]
- Khanam, R.; Hussain, M. YOLOv11: An Overview of the Key Architectural Enhancements. arXiv 2024, arXiv:2410.17725. [Google Scholar] [CrossRef]
- Llerena, J.M.; Zeni, L.F.; Kirsten, L.N.; Jung, C. Gaussian Bounding Boxes and Probabilistic Intersection-over-Union for Object Detection. arXiv 2021, arXiv:2106.06072. [Google Scholar] [CrossRef]
- Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; Wei, Y. Deformable Convolutional Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. [Google Scholar]
- Zhao, Y.; Lv, W.; Xu, S.; Wei, J.; Wang, G.; Dang, Q.; Liu, Y.; Chen, J. DETRs Beat YOLOs on Real-Time Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 16965–16974. [Google Scholar]
Dataset | Ship Classes | Images | Instances | Annotation | Year | Source |
---|---|---|---|---|---|---|
SeaShips [9] | 6 | 31,455 | 40,077 | HBB | 2018 | Camera |
HRSID [10] | 1 | 5604 | 16,951 | HBB | 2020 | SAR |
McShips [11] | 13 | 14,709 | 26,529 | HBB | 2020 | Search engine |
ABOships [12] | 9 | 9880 | 41,967 | HBB | 2021 | Camera |
FAIR1M [13] | 9 | 2235 | - | OBB | 2021 | Satellite imagery |
SSDD [14] | 1 | 1160 | 2456 | HBB/OBB | 2021 | SAR |
MVDD13 [15] | 13 | 35,474 | 40,839 | HBB | 2024 | USV |
MODD-13 [16] | 13 | 9097 | - | HBB | 2024 | USV |
MS2ship [8] | 1 | 6470 | 13,697 | HBB | 2024 | UAV |
Marship-OBB9 | 9 | 11,268 | 18,632 | HBB/OBB | 2025 | UAV |
Group | Category | Quantity | Ratio |
---|---|---|---|
#1 | fishing boat | 3391 | 0.182 |
#2 | tug | 2521 | 0.135 |
#3 | general cargo vessel | 2367 | 0.127 |
#4 | bulk carrier | 2309 | 0.124 |
#5 | container ship | 2196 | 0.118 |
#6 | oil tanker | 2013 | 0.108 |
#7 | passenger ship | 2001 | 0.107 |
#8 | coast guard ship | 970 | 0.052 |
#9 | other | 864 | 0.046 |
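Marship-OBB9 ships with both HBB and OBB annotations. The release format is not specified in this excerpt; assuming DOTA-style four-corner OBB labels (an assumption, as is the helper name below), a minimal sketch converting corner annotations into the (cx, cy, w, h, θ) parameterisation consumed by most rotated detectors:

```python
import numpy as np

def corners_to_obb(pts):
    """Convert four rectangle corners, ordered around the box
    (DOTA-style), to (cx, cy, w, h, theta) with theta in radians.
    Assumes the points form a rectangle."""
    pts = np.asarray(pts, dtype=float)
    cx, cy = pts.mean(axis=0)          # center = mean of the corners
    e1 = pts[1] - pts[0]               # first edge vector
    e2 = pts[2] - pts[1]               # adjacent edge vector
    w, h = np.linalg.norm(e1), np.linalg.norm(e2)
    theta = np.arctan2(e1[1], e1[0])   # angle of the first edge
    return float(cx), float(cy), float(w), float(h), float(theta)

# Axis-aligned 40 x 10 box centred at (20, 5):
box = corners_to_obb([(0, 0), (40, 0), (40, 10), (0, 10)])
print(box)  # (20.0, 5.0, 40.0, 10.0, 0.0)
```

For non-rectangular quadrilaterals, implementations typically fit the minimum-area enclosing rectangle first (e.g. `cv2.minAreaRect`) before parameterising.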
Parameter | Values |
---|---|
Operating system | Ubuntu 18.04
CPU | Intel(R) Xeon(R) Gold 6430
GPU | NVIDIA RTX 4090 (24 GB)
CUDA | V12.1
Python | V3.10.14
PyTorch | V2.4.1
Hyperparameters | Values |
---|---|
Image Size | 640 × 640
Learning Rate | 0.01 |
Momentum | 0.937 |
Epochs | 200 |
Batch Size | 16 |
Optimizer | SGD |
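Assuming the Ultralytics implementation of YOLO11-OBB as the starting point, the hyperparameters in the table above map onto a training launch such as the following configuration sketch (the dataset YAML path is a placeholder, not the released config):

```python
# Hypothetical training launch with the tabulated hyperparameters;
# the Ultralytics package and a pretrained OBB checkpoint are assumed.
from ultralytics import YOLO

model = YOLO("yolo11n-obb.pt")        # YOLO11n-OBB baseline weights
model.train(
    data="marship-obb9.yaml",         # placeholder dataset config
    imgsz=640,                        # image size 640 x 640
    epochs=200,
    batch=16,
    optimizer="SGD",
    lr0=0.01,                         # initial learning rate
    momentum=0.937,
)
```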
YOLO11n-obb | DCNv4 | SGKLD | P (%) | R (%) | mAP@0.5 (%) | mAP@0.5–0.95 (%) | Model Size (M) | GFLOPs (G) | FPS |
---|---|---|---|---|---|---|---|---|---|
√ | 88.9 | 82.2 | 86.1 | 71.2 | 5.46 | 6.6 | 94 | ||
√ | √ | 89.5 | 84.4 | 86.7 | 71.5 | 5.57 | 6.7 | 83 | |
√ | √ | 90.2 | 82.2 | 86.3 | 71.7 | 5.46 | 6.6 | 100 | |
√ | √ | √ | 91.0 | 84.5 | 87.8 | 72.8 | 5.57 | 6.7 | 89 |
Method | Size (Pixels) | Recall (%) | mAP@0.5 (%) | Params (M) | GFLOPs (G) | FPS | |
---|---|---|---|---|---|---|---|
HBB | YOLOv8n | 640 | 79.5 | 80.7 | 3.01 | 8.1 | 133 |
YOLO11n | 640 | 80.4 | 81.5 | 2.58 | 6.3 | 110 | |
RT-DETR [52] | 640 | 86.5 | 86.3 | 32.00 | 103.5 | 36 | |
OBB | Rotated Faster R-CNN [33] | 640 | 73.3 | 68.04 | 41.13 | 91.01 | 15
RoI-Transformer [21] | 640 | 77.4 | 74.3 | 55.08 | 104.96 | 13 | |
ReDet [19] | 640 | 82.7 | 80.7 | 31.60 | 48.3 | 6 |
RetinaNet-OBB | 640 | 80.6 | 71.9 | 36.29 | 83.28 | 23 |
S2ANet | 640 | 80.7 | 79.1 | 38.57 | 76.96 | 15 |
YOLOv8n-OBB | 640 | 81.4 | 84.9 | 2.76 | 7.1 | 108 | |
YOLO11n-OBB | 640 | 82.2 | 86.1 | 2.66 | 6.6 | 94 | |
Ours | 640 | 84.5 | 87.8 | 2.72 | 6.7 | 89 |
Category | fb | tug | gc | bc | cs | ot | ps | cg | Other |
---|---|---|---|---|---|---|---|---|---|
YOLOv8n | 90.7 | 92.7 | 93.2 | 96.7 | 94.3 | 97.0 | 94.9 | 97.8 | 51.7 |
YOLO11n | 91.0 | 92.9 | 95.3 | 98.4 | 95.3 | 98.7 | 94.9 | 97.5 | 52.1 |
RT-DETR | 91.7 | 93.9 | 95.5 | 96.3 | 93.8 | 97.2 | 94.5 | 98.1 | 55.3 |
Rotated Faster R-CNN | 62.0 | 65.4 | 72.4 | 80.2 | 77.2 | 76.7 | 80.6 | 78.8 | 19.0 |
RoI-Transformer | 68.5 | 76.9 | 84.2 | 85.0 | 84.7 | 82.4 | 81.5 | 77.7 | 27.6 |
ReDet | 79.0 | 80.9 | 88.5 | 89.5 | 89.9 | 87.6 | 90.1 | 88.5 | 32.7 |
RetinaNet-OBB | 70.1 | 69.7 | 79.3 | 77.5 | 80.3 | 80.9 | 83.6 | 82.0 | 24.4 |
S2ANet | 77.0 | 79.5 | 86.4 | 86.1 | 87.4 | 80.1 | 89.6 | 79.0 | 46.7 |
YOLOv8n-OBB | 92.0 | 93.1 | 95.8 | 96.7 | 94.7 | 97.6 | 95.8 | 98.3 | 48.7 |
YOLO11n-OBB | 91.7 | 92.9 | 95.9 | 96.6 | 94.6 | 98.0 | 95.7 | 98.4 | 55.0 |
Ours | 92.1 | 94.7 | 96.1 | 98.9 | 96.9 | 99.1 | 96.7 | 98.5 | 57.7 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, Y.; Tian, Y.; Yuan, C.; Yu, K.; Yin, K.; Huang, H.; Yang, G.; Li, F.; Zhou, Z. YOLO-UAVShip: An Effective Method and Dataset for Multi-View Ship Detection in UAV Images. Remote Sens. 2025, 17, 3119. https://doi.org/10.3390/rs17173119