DCS-YOLOv8: A Lightweight Context-Aware Network for Small Object Detection in UAV Remote Sensing Imagery
Abstract
1. Introduction
- P2 detection layer: A new high-resolution detection head operating on the shallow P2/4 feature map is added to YOLOv8, directly improving detail capture for targets smaller than 50 pixels.
- C2f-DCAM module: For the first time, a Dynamic Convolutional Attention Mechanism (DCAM) is embedded into the C2f structure. It realizes joint modeling through a local multi-scale convolution branch (Lepe) and a global sparse attention branch, addressing the limited long-range dependency modeling of traditional CNNs.
- SCDown downsampling: A lightweight downsampling unit with spatial-channel decoupling is proposed, which reduces parameters while maintaining accuracy.
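To make the P2 motivation concrete, the following is a back-of-the-envelope sketch (not code from the paper) of how many feature-map cells a small target occupies at each YOLOv8 detection stride. A P2 head (stride 4) gives a sub-50-pixel object far more cells to be detected from than the default P3–P5 heads.

```python
# Sketch: side length of a target measured in feature-map cells at each
# detection stride. YOLOv8's default heads are P3/P4/P5 (strides 8/16/32);
# the added P2 head operates at stride 4.

def cells_covered(target_px: int, stride: int) -> float:
    """Side length of the target expressed in feature-map cells."""
    return target_px / stride

strides = {"P2": 4, "P3": 8, "P4": 16, "P5": 32}

for level, s in strides.items():
    c = cells_covered(50, s)
    print(f"{level} (stride {s:2d}): a 50 px target spans {c:.1f} x {c:.1f} cells")
```

At stride 32 a 50-pixel target collapses to under two cells, which is why the shallow P2 map matters for small-object recall.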
2. Materials and Methods
2.1. P2: Small Object Detection Head
2.2. DCS-Net Architecture
2.2.1. DCAM Module
2.2.2. C2f-DCAM
2.2.3. SCDown Module
- Pointwise convolution (1 × 1): Compresses the channel dimension, reducing redundancy and emphasizing salient features.
- Depthwise convolution (k × k, stride s): Performs channel-wise convolution for spatial downsampling, capturing scale-specific information with a reduced parameter count.
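The parameter saving of this decoupling can be illustrated with a quick count. The channel sizes below are assumptions for illustration only (not values from the paper), and biases and normalization layers are ignored.

```python
# Illustrative parameter count: standard strided k x k convolution versus
# the SCDown idea (1x1 pointwise channel compression followed by a
# depthwise k x k spatial downsampling). Channel sizes are assumed.

def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    return c_in * c_out * k * k

def scdown_params(c_in: int, c_out: int, k: int) -> int:
    pointwise = c_in * c_out   # 1x1 conv: every input-output channel pair
    depthwise = c_out * k * k  # k x k conv applied per channel
    return pointwise + depthwise

c_in, c_out, k = 256, 512, 3               # assumed example sizes
full = standard_conv_params(c_in, c_out, k)  # 1,179,648
lite = scdown_params(c_in, c_out, k)         # 131,072 + 4,608 = 135,680
print(f"standard: {full:,}  scdown: {lite:,}  ratio: {full / lite:.1f}x")
```

For these example sizes the decoupled form needs roughly an eighth of the parameters, which is consistent with the lightweighting claim.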
2.2.4. Overall Architecture
- Stage 1: The input image undergoes convolution and downsampling, reducing the feature map to 1/4 of the input size while increasing the number of channels to 64–128. This stage performs initial low-level feature extraction, focusing on edges and textures. The corresponding heat map mainly highlights edge contours, showing heightened sensitivity to low-level cues such as textures and boundaries, and captures the fine spatial details that ground subsequent feature extraction.
- Stage 2: The feature map is further reduced to 1/8 of the input size, emphasizing the extraction of fine local structure. This helps the model discern the boundary and shape characteristics of small targets. The heat map progressively narrows to the target regions, with pronounced highlights around small targets, indicating that the network is separating foreground from background and developing semantic recognition capability.
- Stage 3: The feature map is further downsampled through the C2f-DCAM module. The embedded Dynamic Convolutional Attention Mechanism (DCAM) enhances the contextual semantic representation of small targets via parallel local enhancement (Lepe branch) and global dependency modeling (Attention branch), notably improving detection in occluded, dense, or complex-background scenes. The heat map concentrates sharply on specific areas: responses in regions containing small targets are strongly amplified while background structural information is preserved. This suggests that DCAM's global attention steers the network toward long-range dependencies on critical regions, improving discrimination of small targets against cluttered backgrounds.
- Stage 4: The feature map undergoes a final compression step that departs from conventional large-stride convolution. The SCDown module performs depthwise spatial compression and separate pointwise channel compression, reducing the parameter count while preserving essential spatial structure and limiting information loss. Despite the further drop in spatial resolution, the heat map retains high responses in small-target areas, which we attribute to SCDown preserving crucial spatial layout features during compression. Finally, the SPPF module (Fast Spatial Pyramid Pooling) fuses feature maps from different scales, improving adaptability to multi-scale objects and supporting simultaneous detection of large and small targets.
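SPPF, used in Stage 4 above, replaces SPP's parallel 5/9/13 max-pools with three *sequential* 5 × 5 pools. A 1-D sketch (not the paper's code) shows why the two are equivalent: stride-1 max-pooling applied twice with window 5 equals a single window-9 pool, and three times equals window 13.

```python
# 1-D demonstration of the SPPF equivalence: composing stride-1 max pools
# with small windows reproduces a single max pool with a larger window.

def maxpool1d(x, k):
    """Stride-1 max pool with 'same' padding (window k must be odd)."""
    r = k // 2
    padded = [float("-inf")] * r + list(x) + [float("-inf")] * r
    return [max(padded[i:i + k]) for i in range(len(x))]

x = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9]

once = maxpool1d(x, 5)
twice = maxpool1d(once, 5)
thrice = maxpool1d(twice, 5)

assert twice == maxpool1d(x, 9)     # two 5-pools == one 9-pool
assert thrice == maxpool1d(x, 13)   # three 5-pools == one 13-pool
print("sequential 5x5 pools reproduce the 5/9/13 pyramid")
```

Reusing intermediate pool outputs is what makes SPPF faster than SPP while covering the same receptive fields.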
2.3. SDBIoU Loss Function
3. Results
3.1. Dataset
3.2. Experimental Environment and Training Strategy
3.3. Evaluation Metrics
- mAP@0.5 is calculated at an IoU threshold of 0.5.
- mAP@0.5:0.95 is computed by averaging the AP values across IoU thresholds ranging from 0.5 to 0.95 in increments of 0.05.
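Both metrics rest on the same box-overlap computation. A minimal sketch, with illustrative boxes in `(x1, y1, x2, y2)` pixel coordinates (the boxes are made up, not from the dataset):

```python
# Minimal intersection-over-union between two axis-aligned boxes.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred, gt = (10, 10, 50, 50), (20, 20, 60, 60)
print(f"IoU = {iou(pred, gt):.3f}")  # overlap 900 px², union 2300 px² → 0.391
```

Under mAP@0.5 this prediction would count as a miss (0.391 < 0.5), which illustrates how strict the threshold already is for small boxes, where a few pixels of offset moves IoU sharply.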
3.4. Experiment Results
3.4.1. Comparison of Loss Functions
3.4.2. Comparison with Different Mainstream Models
- YOLOv8 and previous versions: Compared with earlier models such as YOLOv3, YOLOv5s, and YOLOv7, DCS-YOLOv8 significantly improves detection accuracy at a lower or comparable parameter scale. For example, although YOLOv7 achieves a lightweight design with 3.1 M parameters, its mAP@0.5 is only 40.2%, while DCS-YOLOv8 reaches an mAP@0.5 of 44.5% with 9.9 M parameters, demonstrating that it balances compactness and detection reliability in complex aerial photography scenarios.
- YOLOv10 series: The parameter scales of YOLOv10n and YOLOv10s are 2.7 M and 8.1 M, respectively, giving them clear lightweight advantages, but their mAP@0.5 values are only 34.0% and 40.8%, far below the 44.5% of DCS-YOLOv8. This indicates that simply reducing parameters may sacrifice the ability to perceive small objects, while DCS-YOLOv8 achieves an accuracy improvement alongside a 1.2 M parameter reduction relative to the YOLOv8s baseline through the optimization of the SCDown module and C2f-DCAM structure.
- YOLOv11 series: YOLOv11s achieves an mAP@0.5 of 40.6% with 9.4 M parameters, while DCS-YOLOv8, with slightly more parameters (9.9 M), increases mAP@0.5 by 3.9 percentage points and raises mAP@0.5:0.95 from 24.8% to 26.9%. This benefits from the preservation of high-resolution features by the P2 detection layer of DCS-YOLOv8 and the dynamic adaptation of the SDBIoU loss to the scale of small objects.
- YOLOv12 series: The mAP@0.5 of YOLOv12s is 41.4%, lower than the 44.5% of DCS-YOLOv8, and its inference time (11.6 ms) is longer than DCS-YOLOv8's 10.1 ms. Although both adopt lightweight designs, the SCDown module of DCS-YOLOv8 better preserves the spatial details of small objects while reducing computational overhead by separating spatial and channel operations.
- Two-stage models: The mAP@0.5 values of Faster R-CNN and Cascade R-CNN are 36.6% and 39.4%, respectively, far below the 44.5% of DCS-YOLOv8. This is because two-stage models rely on region proposal mechanisms, which are prone to missed detections when dealing with dense, small-scale targets in UAV images.
- Transformer-based models: The mAP@0.5 of the Swin Transformer is 39.2%, but its window attention mechanism is prone to information discontinuity when target scales change drastically. In contrast, the DCAM module of DCS-YOLOv8 better captures global–local dependencies through the fusion of dynamic convolution and attention.
- Single-stage anchor-free models: The mAP@0.5 of CenterNet is 39.7%, but its localization accuracy for small targets in complex backgrounds is insufficient. DCS-YOLOv8 enhances the ability to distinguish low-resolution targets through the P2 layer and the SDBIoU loss.
3.5. Ablation Experiments
3.6. Visual Assessment
3.7. Real-Time Object Detection
3.8. Generalization Test
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Adil, M.; Song, H.; Jan, M.A.; Khan, M.K.; He, X.; Farouk, A.; Jin, Z. UAV-Assisted IoT Applications, QoS Requirements and Challenges with Future Research Directions. ACM Comput. Surv. 2024, 56, 35. [Google Scholar] [CrossRef]
- Cai, W.; Wei, Z. Remote Sensing Image Classification Based on a Cross-Attention Mechanism and Graph Convolution. IEEE Geosci. Remote Sens. Lett. 2020, 19, 8002005. [Google Scholar] [CrossRef]
- Peng, C.; Zhu, M.; Ren, H.; Emam, M. Small Object Detection Method Based on Weighted Feature Fusion and CSMA Attention Module. Electronics 2022, 11, 2546. [Google Scholar] [CrossRef]
- Feng, F.; Hu, Y.; Li, W.; Yang, F. Improved YOLOv8 algorithms for small object detection in aerial imagery. J. King Saud Univ.-Comput. Inf. Sci. 2024, 36, 102113. [Google Scholar] [CrossRef]
- Zhang, X.; Zhang, T.; Jiao, J.L. Remote Sensing Object Detection Meets Deep Learning: A metareview of challenges and advances. Geosci. Remote Sens. 2023, 11, 8–44. [Google Scholar] [CrossRef]
- Jiang, Y.; Xi, Y.; Zhang, L.; Wu, Y.; Tan, F.; Hou, Q. Infrared Small Target Detection Based on Local Contrast Measure With a Flexible Window. IEEE Geosci. Remote Sens. Lett. 2024, 21, 7001805. [Google Scholar] [CrossRef]
- Li, Z.; Dong, Y.; Shen, L.; Liu, Y.; Pei, Y.; Yang, H.; Zheng, L.; Ma, J. Development and challenges of object detection: A survey. Neurocomputing 2024, 598, 23. [Google Scholar] [CrossRef]
- Tang, G.; Ni, J.; Zhao, Y.; Gu, Y.; Cao, W. A Survey of Object Detection for UAVs Based on Deep Learning. Remote Sens. 2024, 16, 29. [Google Scholar] [CrossRef]
- Girshick, R. Fast R-CNN. In Proceedings of the International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015. [Google Scholar]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers; Springer International Publishing: Cham, Switzerland, 2020. [Google Scholar]
- Liu, X.; Li, H. A study on UAV target detection and 3D positioning methods based on the improved deformable DETR model and multi-view geometry. Adv. Mech. Eng. 2025, 17, 16878132251315505. [Google Scholar] [CrossRef]
- Liu, S.; Zeng, Z.; Ren, T.; Li, F.; Zhang, H.; Yang, J.; Li, C.Y.; Yang, J.; Su, H.; Zhu, J.J. Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection. arXiv 2023, arXiv:2303.05499. [Google Scholar]
- Wang, H.; Ma, J.; Chen, W.; Han, Q.; Lin, J.; Li, J.; Yao, Z. Personal Protective Equipment Detection for Industrial Environments: A Lightweight Model Based on RTDETR for Small Targets; IOP Publishing Ltd.: Bristol, UK, 2025. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the Computer Vision & Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger; IEEE: New York, NY, USA, 2017; pp. 6517–6525. [Google Scholar]
- Terven, J.; Cordova-Esparza, D.M.; Romero-Gonzalez, J.A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
- Bi, J.; Zhu, Z.; Meng, Q. Transformer in Computer Vision. In Proceedings of the 2021 IEEE International Conference on Computer Science, Electronic Information Engineering and Intelligent Control Technology (CEI), Fuzhou, China, 24–26 September 2021; pp. 178–188. [Google Scholar] [CrossRef]
- Xia, Z.; Pan, X.; Song, S.; Li, L.E.; Huang, G. Vision Transformer with Deformable Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
- Shah, S.; Tembhurne, J. Object detection using convolutional neural networks and transformer-based models: A review. J. Electr. Syst. Inf. Technol. 2023, 10, 1–35. [Google Scholar] [CrossRef]
- Islam, S.; Elmekki, H.; Pedrycz, R.W. A comprehensive survey on applications of transformers for deep learning tasks. Expert Syst. Appl. 2024, 241, 122666.1–122666.48. [Google Scholar] [CrossRef]
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar] [CrossRef]
- Lin, T.Y.; Dollar, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Liu, K.; Tang, H.; He, S.; Yu, Q.; Xiong, Y.; Wang, N. Performance Validation of Yolo Variants for Object Detection. In Proceedings of the BIC 2021: 2021 International Conference on Bioinformatics and Intelligent Computing, Harbin, China, 22–24 January 2021. [Google Scholar]
- Wei, L.; Tong, Y. Enhanced-YOLOv8: A new small target detection model. Digit. Signal Process. 2024, 153, 104611. [Google Scholar] [CrossRef]
- Xu, W.; Cui, C.; Ji, Y.; Li, X.; Li, S. YOLOv8-MPEB small target detection algorithm based on UAV images. Heliyon 2024, 10, 18. [Google Scholar] [CrossRef]
- Ding, X.; Zhang, Y.; Ge, Y.; Zhao, S.; Song, L.; Yue, X.; Shan, Y. UniRepLKNet: A Universal Perception Large-Kernel ConvNet for Audio, Video, Point Cloud, Time-Series and Image Recognition; IEEE: New York, NY, USA, 2023. [Google Scholar]
- Zheng, Z.; Wang, P.; Liu, W.; Li, J.; Ye, R.; Ren, D. Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression. arXiv 2019, arXiv:1911.08287. [Google Scholar] [CrossRef]
- Zheng, Z.; Wang, P.; Ren, D.; Liu, W.; Ye, R.; Hu, Q.; Zuo, W. Enhancing Geometric Factors in Model Learning and Inference for Object Detection and Instance Segmentation. arXiv 2020, arXiv:2005.03572. [Google Scholar] [CrossRef]
- Zhang, Y.F.; Ren, W.; Zhang, Z.; Jia, Z.; Wang, L.; Tan, T. Focal and Efficient IOU Loss for Accurate Bounding Box Regression. arXiv 2021, arXiv:2101.08158. [Google Scholar] [CrossRef]
- Yang, J.; Liu, S.; Wu, J.; Su, X.; Hai, N.; Huang, X. Pinwheel-shaped Convolution and Scale-based Dynamic Loss for Infrared Small Target Detection. arXiv 2024, arXiv:2412.16986. [Google Scholar] [CrossRef]
- Zhu, P.; Wen, L.; Du, D.; Bian, X.; Fan, H.; Hu, Q.; Ling, H. Detection and Tracking Meet Drones Challenge. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 7380–7399. [Google Scholar] [CrossRef] [PubMed]
- Cai, Z.; Vasconcelos, N. Cascade R-CNN: High Quality Object Detection and Instance Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 1483–1498. [Google Scholar] [CrossRef]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021. [Google Scholar]
- Duan, K.; Bai, S.; Xie, L.; Qi, H.; Huang, Q.; Tian, Q. CenterNet: Keypoint Triplets for Object Detection. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6568–6577. [Google Scholar] [CrossRef]
- Ma, S.; Xu, Y. MPDIoU: A Loss for Efficient and Accurate Bounding Box Regression. arXiv 2023, arXiv:2307.07662. [Google Scholar] [CrossRef]
- Li, Y.; Zhou, Z.; Pan, Y. YOLOv11-BSS: Damaged Region Recognition Based on Spatial and Channel Synergistic Attention and Bi-Deformable Convolution in Sanding Scenarios. Electronics 2025, 14, 1469. [Google Scholar] [CrossRef]
- Tanrıverdi, V.; Alemdar, K.D. Comparative Analysis of Data Augmentation Strategies Based on YOLOv12 and MCDM for Sustainable Mobility Safety: Multi-Model Ensemble Approach. Sustainability 2025, 17, 5638. [Google Scholar] [CrossRef]
- Tahir, N.U.A.; Long, Z.; Zhang, Z.; Asim, M.; Elaffendi, M. PVswin-YOLOv8s: UAV-Based Pedestrian and Vehicle Detection for Traffic Management in Smart Cities Using Improved YOLOv8. Drones 2024, 8, 84. [Google Scholar] [CrossRef]
- Wang, Y.; Pan, F.; Li, Z.; Xin, X.; Li, W. CoT-YOLOv8: Improved YOLOv8 for Aerial images Small Target Detection. In Proceedings of the 2023 China Automation Congress (CAC), Chongqing, China, 17–19 November 2023; pp. 4943–4948. [Google Scholar] [CrossRef]
- Zhang, H.; Li, G.; Wan, D.; Wang, Z.; Dong, J.; Lin, S.; Deng, L.; Liu, H. DS-YOLO: A dense small object detection algorithm based on inverted bottleneck and multi-scale fusion network. Microelectron. J. 2024, 4, 100190. [Google Scholar] [CrossRef]
- Yang, C.; Huang, Z.; Wang, N. QueryDet: Cascaded sparse query for accelerating high-resolution small object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 13668–13677. [Google Scholar]
- Xiao, C.; An, W.; Zhang, Y.; Su, Z.; Li, M.; Sheng, W.; Pietikäinen, M.; Liu, L. Highly efficient and unsupervised framework for moving object detection in satellite videos. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 11532–11539. [Google Scholar] [CrossRef]
- Wu, S.; Xiao, C.; Wang, Y.; Yang, J.; An, W. Sparsity-Aware Global Channel Pruning for Infrared Small-target Detection Networks. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5615011. [Google Scholar] [CrossRef]
Metrics | Precision | Recall | mAP@0.5 | mAP@0.5:0.95
---|---|---|---|---
CIoU | 51.8 | 39.4 | 40.6 | 24.3
DIoU | 52.0 | 38.9 | 40.6 | 24.5
EIoU | 49.8 | 39.4 | 40.0 | 24.3
MPDIoU [37] | 52.1 | 39.0 | 40.7 | 23.9
SDBIoU (d = 0.5) | 51.1 | 39.5 | 40.4 | 24.3
SDBIoU (d = 0.7) | 51.2 | 39.3 | 40.2 | 24.1
SDBIoU (d = 0.3) | 51.6 | 40.1 | 40.8 | 24.5
Models | Precision | Recall | mAP@0.5 | mAP@0.5:0.95 | Time/ms | Parameters/M
---|---|---|---|---|---|---
YOLOv3 | 53.8 | 43.1 | 42.2 | 23.2 | 210 | 18.4
YOLOv5s | 46.7 | 34.9 | 34.5 | 19.4 | 14.1 | 12.0
YOLOv7 | 51.6 | 42.3 | 40.2 | 21.9 | 73.3 | 1.7
YOLOv8n | 45.9 | 34.2 | 34.5 | 19.8 | 5.7 | 3.1
YOLOv8s | 51.8 | 39.4 | 40.6 | 24.3 | 7.1 | 11.1
YOLOv8m | 55.8 | 42.6 | 44.5 | 26.6 | 16.8 | 25.9
YOLOv10n | 45.5 | 33.5 | 34.0 | 19.8 | 8.0 | 2.7
YOLOv10s | 51.0 | 39.4 | 40.8 | 24.6 | 7.6 | 8.1
YOLOv11n [38] | 45.9 | 33.4 | 34.3 | 20.1 | 4.6 | 2.6
YOLOv11s | 52.1 | 39.4 | 40.6 | 24.8 | 8.0 | 9.4
YOLOv12n [39] | 43.4 | 34.6 | 33.7 | 19.8 | 6.6 | 2.5
YOLOv12s | 52.5 | 40.4 | 41.4 | 25.0 | 11.6 | 9.2
PVswin-YOLO [40] | 54.5 | 41.8 | 43.3 | 26.4 | 8.8 | 10.1
CoT-YOLO [41] | 53.2 | 41.1 | 42.7 | 25.7 | 12.2 | 10.6
DS-YOLO [42] | 52.4 | 41.6 | 43.1 | 26.0 | 19.7 | 9.3
DCS-YOLOv8 | 54.2 | 42.1 | 44.5 | 26.9 | 10.1 | 9.9
Models | mAP@0.5 | mAP@0.5:0.95
---|---|---
Faster R-CNN [11] | 36.6 | 21.1
Swin Transformer [35] | 39.7 | 23.1
CenterNet [36] | 39.2 | 22.7
Cascade R-CNN [34] | 39.4 | 24.2
RT-DETR-R18 [15] | 42.5 | 25.4
DINO [14] | 41.3 | 24.1
DCS-YOLOv8 | 44.5 | 26.9
Models | Ped | People | Bicycle | Car | Van | Truck | Tricy | A-Tricy | Bus | Motor | mAP@0.5
---|---|---|---|---|---|---|---|---|---|---|---
A | 44.2 | 34.3 | 13.9 | 80.0 | 45.5 | 40.2 | 28.5 | 16.6 | 57.8 | 44.8 | 40.6
B | 44.1 | 34.0 | 14.4 | 79.6 | 45.8 | 38.4 | 29.7 | 15.8 | 60.9 | 44.9 | 40.8
C | 50.0 | 40.7 | 16.6 | 83.3 | 46.7 | 39.7 | 29.1 | 16.0 | 60.5 | 50.6 | 43.3
D | 51.0 | 40.8 | 16.9 | 83.8 | 47.5 | 39.3 | 32.3 | 17.0 | 59.4 | 50.5 | 43.9
E | 51.5 | 42.5 | 17.2 | 83.8 | 48.5 | 39.4 | 33.3 | 18.1 | 59.8 | 50.6 | 44.5
Baseline | SDBIoU | P2 | DCAM | SCDown | Precision | Recall | mAP@0.5 | mAP@0.5:0.95 | Time/ms | Parameters/M
---|---|---|---|---|---|---|---|---|---|---
✓ | | | | | 51.8 | 39.4 | 40.6 | 24.3 | 7.1 | 11.1
✓ | ✓ | | | | 51.6 | 40.1 | 40.8 | 24.5 | 5.5 | 11.1
✓ | ✓ | ✓ | | | 53.9 | 41.2 | 43.3 | 26.1 | 6.6 | 10.6
✓ | ✓ | ✓ | ✓ | | 54.8 | 41.8 | 43.9 | 26.5 | 9.7 | 11.3
✓ | ✓ | ✓ | ✓ | ✓ | 54.2 | 42.1 | 44.5 | 26.9 | 10.1 | 9.9
Dataset | Models | Precision | Recall | mAP@0.5 | mAP@0.5:0.95
---|---|---|---|---|---
SSDD | YOLOv8s | 95.2 | 92.4 | 95.8 | 62.9
SSDD | YOLOv10s | 85.3 | 83.8 | 91.0 | 57.2
SSDD | YOLOv11s | 89.1 | 92.9 | 96.5 | 62.7
SSDD | YOLOv12s | 91.6 | 92.0 | 96.6 | 61.1
SSDD | DCS-YOLOv8 | 94.1 | 93.1 | 97.2 | 64.4
NWPU VHR-10 | YOLOv8s | 92.2 | 85.4 | 91.3 | 57.6
NWPU VHR-10 | YOLOv10s | 67.5 | 69.5 | 72.8 | 43.9
NWPU VHR-10 | YOLOv11s | 90.2 | 79.3 | 87.4 | 54.1
NWPU VHR-10 | YOLOv12s | 88.7 | 80.3 | 87.5 | 52.7
NWPU VHR-10 | DCS-YOLOv8 | 91.2 | 88.0 | 92.6 | 59.4
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Citation: Zhao, X.; Yang, Z.; Zhao, H. DCS-YOLOv8: A Lightweight Context-Aware Network for Small Object Detection in UAV Remote Sensing Imagery. Remote Sens. 2025, 17, 2989. https://doi.org/10.3390/rs17172989