DBM-YOLO: A Dual-Branch Model with Feature Sharing for UAV Object Detection in Low-Illumination Environments
Highlights
- We propose a parallel network framework that consists of a Zero-DCE illumination enhancement network and the backbone of a YOLOv11n-based object detection network.
- The DPSA module is introduced to enhance feature representation and multi-scale adaptability through dynamic channel and spatial attention, while the HLSAFM module refines high- and low-frequency features to enable richer feature extraction and improved discriminative capability.
- This parallel architecture establishes a dual-branch framework that enables collaborative feature training and real-time updates, effectively enhancing feature adaptability and object detection accuracy.
- Ablation studies confirm that the synergistic effect of the proposed modules yields more robust and discriminative feature representations, significantly enhancing detection performance in UAV scenarios in low-illumination environments.
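The enhancement branch named in the highlights is Zero-DCE, which brightens an image by repeatedly applying a learned quadratic curve LE(x) = x + α·x·(1 − x) to normalized intensities. A toy NumPy sketch of that curve follows; note that the real network predicts a per-pixel α map at each of (typically) 8 iterations, so the fixed scalar α here is purely illustrative:

```python
import numpy as np

def enhance(x, alpha, iterations=8):
    """Iteratively apply the Zero-DCE light-enhancement curve
    LE(x) = x + alpha * x * (1 - x) to intensities in [0, 1].
    The actual network predicts a per-pixel alpha map at every
    iteration; a single fixed scalar is used here for illustration."""
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)
    return x

dark = np.array([0.05, 0.20, 0.50])   # normalized low-light pixel values
bright = enhance(dark, alpha=0.3)     # brighter, still bounded by 1.0
```

For α in (0, 1] the curve strictly increases every intensity in (0, 1) while keeping the output within [0, 1], which is why the enhancement needs no paired ground truth.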
Abstract
1. Introduction
- (1) This work presents a parallel detection architecture in which an illumination enhancement network and an object detection network are jointly incorporated to achieve synchronized feature learning. The parallel design allows the two networks to adapt to each other during training while sharing feature representations, thereby improving feature adaptability and detection accuracy in low-illumination environments.
- (2) A High and Low Frequency Spatial-Adaptive Feature Modulation (HLSAFM) module is designed to enhance model robustness in extremely low-illumination scenarios by decomposing features into high- and low-frequency components and applying spatially adaptive modulation to each.
- (3) A Dynamic Pooling Synergistic Attention (DPSA) module is introduced, which integrates dynamic pooling, channel attention, spatial attention, and multi-scale convolution to improve discriminative capability for objects at different scales.
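HLSAFM's starting point is a split of features into high- and low-frequency components. The sketch below is only a generic stand-in for that idea, using a box blur as the low-pass filter; the module's actual decomposition operator and its spatial-adaptive modulation are not specified in this excerpt:

```python
import numpy as np

def split_frequencies(feat, k=3):
    """Split a 2-D feature map into a low-frequency part (k x k box
    blur) and a high-frequency residual. Generic illustration only:
    HLSAFM's real operator may differ."""
    pad = k // 2
    padded = np.pad(feat, pad, mode="edge")
    h, w = feat.shape
    low = np.empty_like(feat, dtype=float)
    for i in range(h):
        for j in range(w):
            low[i, j] = padded[i:i + k, j:j + k].mean()
    return low, feat - low   # low-pass part, high-pass residual

feat = np.arange(16.0).reshape(4, 4)
low, high = split_frequencies(feat)
```

Because the high-frequency part is defined as the residual, the decomposition is lossless: summing the two branches reconstructs the input exactly, so each branch can be modulated independently without discarding information.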
2. Materials and Methods
2.1. Dataset
2.2. DBM-YOLO
2.2.1. Architecture
2.2.2. Parallel Network Framework
2.2.3. HLSAFM
2.2.4. DPSA
2.3. Experimental Settings
2.3.1. Experimental Environment
2.3.2. Evaluation Metrics
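The result tables below report P, R, F1, and mAP. The per-class AP underlying mAP is conventionally computed with all-point interpolation, as in the Pascal VOC/COCO protocols; whether this paper uses exactly that variant is not stated in this excerpt, so treat the following as the standard reference formulation:

```python
import numpy as np

def average_precision(recalls, precisions):
    """All-point interpolated AP: append boundary points, replace the
    precision curve with its monotone (non-increasing) envelope, then
    integrate precision over recall."""
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([0.0], precisions, [0.0]))
    for i in range(len(p) - 2, -1, -1):   # right-to-left precision envelope
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]    # steps where recall changes
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```

For example, a detector whose precision drops from 1.0 to 0.5 as recall rises from 0.5 to 1.0 scores AP = 0.75; mAP@0.5 averages such APs over classes at an IoU threshold of 0.5, and mAP@0.5:0.95 additionally averages over IoU thresholds from 0.5 to 0.95.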
3. Experimental Results and Analysis
3.1. Comparison Experiments
3.1.1. Comparison of Low-Light Detection Methods on the VisDrone (Night) Dataset
| Group | Model | mAP@0.5:0.95/% | mAP@0.5/% | P/% | R/% | F1/% |
|---|---|---|---|---|---|---|
| A | RT-DETR [41] | 13.3 | 23.2 | 45.2 | 20.1 | 27.8 |
| | YOLOv5n | 8.37 | 17.5 | 40.8 | 18.3 | 25.3 |
| | YOLOv8n | 10.1 | 17.9 | 42.3 | 18.2 | 25.4 |
| | YOLOv11n | 10.2 | 18.1 | 38.8 | 19.6 | 26.0 |
| B | URetinex+YOLOv11n | 9.05 | 18.1 | 41.9 | 19.8 | 26.9 |
| | SCI+YOLOv11n | 9.24 | 18.2 | 42.0 | 18.6 | 25.8 |
| | Zero-DCE+YOLOv11n | 9.01 | 18.4 | 43.2 | 18.5 | 25.9 |
| | ENGAN+YOLOv11n | 9.66 | 19.8 | 48.3 | 21.1 | 29.4 |
| | RUAS+YOLOv11n | 8.45 | 16.5 | 39.7 | 20.3 | 26.9 |
| | ENGAN+YOLOv8n | 10.4 | 19.7 | 45.9 | 19.3 | 27.2 |
| | RUAS+YOLOv8n | 9.79 | 17.9 | 44.1 | 17.8 | 25.0 |
| | RUAS+YOLOv11n | 9.2 | 17.0 | 44.9 | 18.4 | 25.7 |
| | ENGAN+YOLOv11n | 10.8 | 20.0 | 39.5 | 22.9 | 29.0 |
| C | RetinaNet | 8.1 | 15.2 | 42.3 | 14.5 | 21.6 |
| | PG-YOLO | 9.36 | 18.5 | 38.0 | 19.6 | 25.9 |
| | GBS-YOLOv11n | 10.0 | 17.6 | 41.4 | 17.7 | 24.8 |
| | Drone-YOLO | 10.5 | 20.3 | 40.2 | 21.3 | 27.9 |
| | Our Method | 13.3 | 24.1 | 45.0 | 25.0 | 32.1 |
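A quick consistency check on the table above: the last entry of each row is the harmonic mean (F1) of the precision and recall values that precede it, which the rows reproduce to one decimal place:

```python
def f1(p, r):
    """F1 score (%), i.e. the harmonic mean of precision and recall."""
    return round(2 * p * r / (p + r), 1)

assert f1(45.2, 20.1) == 27.8   # RT-DETR row
assert f1(45.0, 25.0) == 32.1   # "Our Method" row
assert f1(40.8, 18.3) == 25.3   # YOLOv5n row
```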
3.1.2. Comparison of Heatmaps from Different Backbone Networks
3.2. Ablation Study
3.2.1. Ablation Study of the Improved Modules for Object Detection
3.2.2. Experimental Analysis
3.3. Visualization
4. Discussion
4.1. Advantages
4.2. Challenges and Limitations
4.3. Future Perspectives
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Ma, J.; Sun, Z.; Wu, Y.; Jin, D. Efficient Detection Model of UAVs under Low-Light Conditions Based on LL-YOLO and EnlightenGAN. Meas. Control 2025. [Google Scholar] [CrossRef]
- Zhao, D.; Shao, F.; Zhang, S.; Yang, L.; Zhang, H.; Liu, S.; Liu, Q. Advanced Object Detection in Low-Light Conditions: Enhancements to YOLOv7 Framework. Remote Sens. 2024, 16, 4493. [Google Scholar] [CrossRef]
- Weng, T.; Niu, X. Enhancing UAV Object Detection in Low-Light Conditions with ELS-YOLO: A Lightweight Model Based on Improved YOLOv11. Sensors 2025, 25, 4463. [Google Scholar] [CrossRef] [PubMed]
- Abdelnabi, A.A.B.; Rabadi, G. Human Detection From Unmanned Aerial Vehicles’ Images for Search and Rescue Missions: A State-of-the-Art Review. IEEE Access 2024, 12, 152009–152035. [Google Scholar] [CrossRef]
- Jaffri, S.M.A.A.; ul Haq, M.; Farhan, M. Enhancing Object Detection in Low Light Environments Using Image Enhancement Techniques and YOLO Architectures. In Proceedings of the 26th International Multi-Topic Conference (INMIC), Karachi, Pakistan, 30–31 December 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar]
- Liu, X.; Liu, C.; Zhou, X.; Fan, G. Enhancing Low-Light Object Detection with En-YOLO: Leveraging Dual Attention and Implicit Feature Learning. Multimed. Syst. 2025, 31, 249. [Google Scholar] [CrossRef]
- Peng, D.; Ding, W.; Zhen, T. A Novel Low-Light Object Detection Method Based on the YOLOv5 Fusion Feature Enhancement. Sci. Rep. 2024, 14, 4486. [Google Scholar] [CrossRef]
- Shovo Abir, S.U.A.; Kabir, M.G.R.; Mridha, M.M.; Mridha, F.M. Advancing Low-Light Object Detection with You Only Look Once Models: An Empirical Study and Performance Evaluation. Cogn. Comput. Syst. 2024, 6, 119–134. [Google Scholar] [CrossRef]
- Han, Z.; Yue, Z.; Liu, L. 3L-YOLO: A Lightweight Low-Light Object Detection Algorithm. Appl. Sci. 2025, 15, 90. [Google Scholar] [CrossRef]
- Li, J.; Qian, L. LLEYOLO: A Target Detection Algorithm Based on Improved YOLOv5 for Low-Light Environments. J. King Saud Univ. Comput. Inf. Sci. 2025, 37, 284. [Google Scholar] [CrossRef]
- Di, R.; Fan, H.; Ma, Y.; Wang, J.; Qian, R. GAME-YOLO: Global Attention and Multi-Scale Enhancement for Low-Visibility UAV Detection with Sub-Pixel Localization. Entropy 2025, 27, 1263. [Google Scholar] [CrossRef]
- Gong, B.; Zhang, H.; Ma, B.; Tao, Z. Enhancing Real-Time Low-Light Object Detection via Multi-Scale Edge and Illumination-Guided Features in YOLOv8. J. Supercomput. 2025, 81, 1120. [Google Scholar] [CrossRef]
- Cai, W.; Chen, Y.; Qiu, X.; Niu, M.; Li, J. LLD-YOLO: A Low-Light Object Detection Algorithm Based on Dynamic Weighted Fusion of Shallow and Deep Features. IEEE Access 2025, 13, 69967–69979. [Google Scholar] [CrossRef]
- Cai, T.; Chen, C.; Dong, F. Low-Light Object Detection Combining Transformer and Dynamic Feature Fusion. Comput. Eng. Appl. 2024, 60, 135–141. [Google Scholar]
- Fu, S.; Zhao, Q.; Liu, H.; Tao, Q.; Liu, D. Low-Light Object Detection via Adaptive Enhancement and Dynamic Feature Fusion. Alex. Eng. J. 2025, 126, 60–69. [Google Scholar] [CrossRef]
- Xu, Z.; Su, J.; Huang, K. A-RetinaNet: A Novel RetinaNet with an Asymmetric Attention Fusion Mechanism for Dim and Small Drone Detection in Infrared Images. Math. Biosci. Eng. 2023, 20, 6630–6651. [Google Scholar] [CrossRef] [PubMed]
- Zhang, X.; Di, X.; Liu, M. DBLDNet: Dual Branch Low Light Object Detector Based on Feature Localization and Multi-Scale Feature Enhancement. Multimed. Syst. 2025, 31, 288. [Google Scholar] [CrossRef]
- Ji, J.; Zhao, Y.; Zhang, Y.; Zuo, X.; Wang, C.; Shi, F. FCMA-Det: Low-Light Image Object Detection Based on Feature Complementarity and Multicontent Aggregation. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5636414. [Google Scholar] [CrossRef]
- Jung, M.; Cho, J. Enhancing Detection of Pedestrians in Low-Light Conditions by Accentuating Gaussian–Sobel Edge Features from Depth Maps. Appl. Sci. 2024, 14, 8326. [Google Scholar] [CrossRef]
- Qi, G.; Yu, Z.; Song, J. Multi-Scale Feature Fusion and Context-Enhanced Spatial Sparse Convolution Single-Shot Detector for Unmanned Aerial Vehicle Image Object Detection. Appl. Sci. 2025, 15, 924. [Google Scholar] [CrossRef]
- Wei, Y.; Tao, J.; Wu, W.; Yuan, D.; Hou, S. RHS-YOLOv8: A Lightweight Underwater Small Object Detection Algorithm Based on Improved YOLOv8. Appl. Sci. 2025, 15, 3778. [Google Scholar] [CrossRef]
- Telçeken, M.; Akgun, D.; Kacar, S. An Evaluation of Image Slicing and YOLO Architectures for Object Detection in UAV Images. Appl. Sci. 2024, 14, 11293. [Google Scholar] [CrossRef]
- Choutri, K.; Lagha, M.; Meshoul, S.; Batouche, M.; Bouzidi, F.; Charef, W. Fire Detection and Geo-Localization Using UAV’s Aerial Images and YOLO-Based Models. Appl. Sci. 2023, 13, 11548. [Google Scholar] [CrossRef]
- Jung, H.-K.; Choi, G.-S. Improved YOLOv5: Efficient Object Detection Using Drone Images under Various Conditions. Appl. Sci. 2022, 12, 7255. [Google Scholar] [CrossRef]
- Kim, J.; Huh, J.; Park, I.; Bak, J.; Kim, D.; Lee, S. Small Object Detection in Infrared Images: Learning from Imbalanced Cross-Domain Data via Domain Adaptation. Appl. Sci. 2022, 12, 11201. [Google Scholar] [CrossRef]
- Zhang, M.; Wang, Z.; Song, W.; Zhao, D.; Zhao, H. Efficient Small-Object Detection in Underwater Images Using the Enhanced YOLOv8 Network. Appl. Sci. 2024, 14, 1095. [Google Scholar] [CrossRef]
- Deng, H.; Zhang, S.; Wang, X.; Han, T.; Ye, Y. USD-YOLO: An Enhanced YOLO Algorithm for Small Object Detection in Unmanned Systems Perception. Appl. Sci. 2025, 15, 3795. [Google Scholar] [CrossRef]
- Miao, Y.; Wang, X.; Zhang, N.; Wang, K.; Shao, L.; Gao, Q. Research on a UAV-View Object-Detection Method Based on YOLOv7-Tiny. Appl. Sci. 2024, 14, 11929. [Google Scholar] [CrossRef]
- Zhu, P.; Wen, L.; Du, D.; Bian, X.; Fan, H.; Hu, Q.; Ling, H. Detection and Tracking Meet Drones Challenge. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 7380–7399. [Google Scholar] [CrossRef] [PubMed]
- Sun, Y.; Cao, B.; Zhu, P.; Hu, Q. Drone-Based RGB-Infrared Cross-Modality Vehicle Detection Via Uncertainty-Aware Learning. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 6700–6713. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
- Wu, W.; Weng, J.; Zhang, P.; Wang, X.; Yang, W.; Jiang, J. URetinex-Net: Retinex-Based Deep Unfolding Network for Low-Light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, June 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 5901–5910. [Google Scholar]
- Ma, L.; Ma, T.; Liu, R.; Fan, X.; Luo, Z. Toward Fast, Flexible, and Robust Low-Light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, June 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 5637–5646. [Google Scholar]
- Li, C.; Guo, C.; Loy, C.C. Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 4225–4238. [Google Scholar] [CrossRef]
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep Light Enhancement Without Paired Supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef] [PubMed]
- Liu, R.; Ma, L.; Zhang, J.; Fan, X.; Luo, Z. Retinex-Inspired Unrolling with Cooperative Prior Architecture Search for Low-Light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19–25 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 10561–10570. [Google Scholar]
- Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 318–327. [Google Scholar] [CrossRef]
- Guo, W.; Li, W.; Li, Z.; Gong, W.; Cui, J.; Wang, X. A Slimmer Network with Polymorphic and Group Attention Modules for More Efficient Object Detection in Aerial Images. Remote Sens. 2020, 12, 3750. [Google Scholar] [CrossRef]
- Liu, H.; Duan, X.; Lou, H.; Gu, J.; Chen, H.; Bi, L. Improved GBS-YOLOv5n Algorithm Based on YOLOv5n Applied to UAV Intelligent Traffic. Sci. Rep. 2023, 13, 9577. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Z. Drone-YOLO: An Efficient Neural Network Method for Target Detection in Drone Images. Drones 2023, 7, 526. [Google Scholar] [CrossRef]
- Kong, Y.; Shang, X.; Jia, S. Drone-DETR: Efficient Small Object Detection for Remote Sensing Image Using Enhanced RT-DETR Model. Sensors 2024, 24, 5496. [Google Scholar] [CrossRef]

| Group | Model | mAP@0.5:0.95/% | mAP@0.5/% | P/% | R/% | F1/% |
|---|---|---|---|---|---|---|
| A | RT-DETR [41] | 41.8 | 73.1 | 78.2 | 64.5 | 70.7 |
| | YOLOv5n | 43.2 | 72.4 | 79.0 | 63.0 | 70.1 |
| | YOLOv8n | 44.0 | 70.0 | 75.0 | 64.0 | 69.1 |
| | YOLOv11n | 46.6 | 72.4 | 75.5 | 68.1 | 71.6 |
| B | URetinex+YOLOv11n | 43.2 | 71.5 | 75.4 | 64.8 | 69.4 |
| | SCI+YOLOv11n | 42.6 | 71.2 | 74.3 | 64.6 | 69.1 |
| | Zero-DCE+YOLOv11n | 43.2 | 72.5 | 77.9 | 65.5 | 71.2 |
| | ENGAN+YOLOv11n | 42.9 | 71.8 | 76.5 | 64.0 | 69.7 |
| | RUAS+YOLOv11n | 41.6 | 70.2 | 76.9 | 62.2 | 68.8 |
| | ENGAN+YOLOv8n | 45.7 | 72.3 | 76.4 | 66.6 | 71.2 |
| | RUAS+YOLOv8n | 41.9 | 67.5 | 73.2 | 61.5 | 66.9 |
| | RUAS+YOLOv11n | 46.3 | 72.4 | 73.9 | 67.5 | 70.6 |
| | ENGAN+YOLOv11n | 45.4 | 71.8 | 75.3 | 65.2 | 69.9 |
| C | RetinaNet | 36.4 | 61.5 | 61.5 | 43.8 | 51.2 |
| | PG-YOLO | 38.2 | 70.3 | 75.1 | 63.8 | 69.0 |
| | GBS-YOLOv11n | 42.3 | 66.9 | 74.5 | 65.3 | 69.6 |
| | Drone-YOLO | 42.5 | 72.1 | 72.4 | 62.6 | 67.1 |
| | Our Method | 47.4 | 73.4 | 78.6 | 67.0 | 72.3 |
| Model | P/% | R/% | mAP@0.5/% | mAP@0.5:0.95/% | F1/% | FPS | GFLOPs |
|---|---|---|---|---|---|---|---|
| d | 43.0 | 23.6 | 22.1 | 12.2 | 30.5 | 75.2 | 84.3 |
| d+HLSAFM | 43.5 | 25.6 | 22.5 | 13.2 | 32.2 | 59.2 | 102.6 |
| d+DPSA | 34.4 | 24.9 | 23.5 | 12.9 | 28.8 | 61.6 | 101.7 |
| d+HLSAFM+DPSA | 45.0 | 25.0 | 24.1 | 13.3 | 32.1 | 63.3 | 119.9 |
| Model | P/% | R/% | mAP@0.5/% | mAP@0.5:0.95/% | F1/% | FPS | GFLOPs |
|---|---|---|---|---|---|---|---|
| d | 74.8 | 67.9 | 72.4 | 46.8 | 73.1 | 118.72 | 84.3 |
| d+HLSAFM | 75.7 | 68.3 | 73.5 | 47.7 | 71.8 | 125.49 | 102.6 |
| d+DPSA | 75.9 | 67.8 | 73.1 | 47.5 | 71.6 | 148.76 | 101.7 |
| d+HLSAFM+DPSA | 78.6 | 67.0 | 73.4 | 47.4 | 72.3 | 162.31 | 119.9 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Liu, L.; Li, H.; Fu, G.; Zhou, B.; Wang, Y.; Fan, R. DBM-YOLO: A Dual-Branch Model with Feature Sharing for UAV Object Detection in Low-Illumination Environments. Drones 2026, 10, 169. https://doi.org/10.3390/drones10030169