Target Detection in Underground Mines Based on Low-Light Image Enhancement
Abstract
1. Introduction
- We propose LIENet, a lightweight zero-reference image enhancement algorithm that quickly adjusts low-light images at the pixel level;
- We propose a full-layer feature extraction method whose two branch structures extract long-distance and local-area feature information, respectively;
- We validate the superiority of the proposed methods on our self-built low-light mine dataset, achieving performance improvements over other detection algorithms.
2. Materials and Methods
2.1. Dataset Construction
2.1.1. Data Sources and Composition
2.1.2. Low-Light Synthesis Method
2.2. Network Architecture
2.2.1. LIENet
2.2.2. Low-Light Enhancement Algorithm
2.3. Illumination Model Loss Function Design
1. Spatial Consistency Loss
2. Exposure Control Loss
3. Color Constancy Loss
4. Illuminance Smoothness Loss
5. Total Loss
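The loss symbols were lost in extraction. Following the zero-reference curve-estimation formulation of Zero-DCE cited in this paper, the total loss is typically a weighted sum of the four component losses; the notation and weights below are an assumed sketch, not the paper's exact formulation:

```latex
% Assumed Zero-DCE-style notation: L_spa (spatial consistency),
% L_exp (exposure control), L_col (color constancy),
% L_tvA (illumination smoothness), with W_col and W_tvA as balancing weights.
\mathcal{L}_{total} = \mathcal{L}_{spa} + \mathcal{L}_{exp}
  + W_{col}\,\mathcal{L}_{col} + W_{tv_{\mathcal{A}}}\,\mathcal{L}_{tv_{\mathcal{A}}}
```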
2.4. Improvement of YOLOv8 Object Detection Algorithm
2.5. Loss Function of Object Detection Model
2.6. Overall Training and Inference Pipeline
3. Results
3.1. Experiment Preparation
3.2. Image Enhancement Evaluation Index
3.3. Image Enhancement Effect Analysis
3.4. Target Detection Performance Evaluation
3.5. Ablation Experiment
4. Discussion
- Limitations of application scenarios: The algorithm proposed in this paper is developed primarily to address the low-light conditions prevalent in underground mine environments. It is important to clarify that this work focuses specifically on illumination enhancement and does not explicitly handle other common visual degradations in such settings, including but not limited to dust scattering, lens contamination, motion blur from equipment vibration, strong local glare, and dense occlusion. Consequently, when deployed in scenarios where these additional factors are prominent, the performance of the proposed method may be suboptimal, and further adaptation or integration with complementary techniques would be necessary to achieve robust performance.
- Integration with downstream tasks: The proposed algorithm performs well for target detection in low-light environments, but its applications extend beyond this task. For example, it could be combined with detection in research on underground mine positioning; such cases would require targeted modifications to the algorithm to optimize performance for the specific downstream task.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| HFE | hierarchical feature extraction |
References
- Shen, Z.; Xu, H.; Jiang, G.; Yu, M.; Du, B.; Luo, T.; Zhu, Z. Pseudo-retinex decomposition-based unsupervised underwater image enhancement and beyond. Digit. Signal Process. 2023, 137, 103993. [Google Scholar] [CrossRef]
- Yang, W.; Wang, W.; Huang, H.; Wang, S.; Liu, J. Sparse gradient regularized deep retinex network for robust low-light image enhancement. IEEE Trans. Image Process. 2021, 30, 2072–2086. [Google Scholar] [CrossRef] [PubMed]
- Chen, B.; Guo, Z.; Yao, W.; Ding, X.; Zhang, D. A novel low-light enhancement via fractional-order and low-rank regularized retinex model. Comput. Appl. Math. 2023, 42, 7. [Google Scholar] [CrossRef]
- Yang, S.; Zhou, D. Single image low-light enhancement via a dual-path generative adversarial network. Circuits Syst. Signal Process. 2023, 42, 4221–4237. [Google Scholar] [CrossRef]
- Chen, Y.; Zhu, G.; Wang, X.; Shen, Y. Fmr-net: A fast multi-scale residual network for low-light image enhancement. Multimed. Syst. 2024, 30, 73. [Google Scholar] [CrossRef]
- Yang, X.; Tian, L.; Cai, F. Thermal infrared imaging for conveyor roller fault detection in coal mines. PLoS ONE 2024, 19, e0307591. [Google Scholar]
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.; Shao, L. Learning enriched features for fast image restoration and enhancement. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 1934–1948. [Google Scholar] [CrossRef]
- Xie, S.; Ma, Y.; Xu, W.; Qiu, S.; Sun, Y. Semi-supervised learning for low-light image enhancement by pseudo low-light image. In Proceedings of the 16th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Taizhou, China, 28–30 October 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. Enlightengan: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef]
- Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. arXiv 2020, arXiv:2001.06826. [Google Scholar]
- Du, Q.; Zhang, S.; Wang, Z.; Liang, J.; Yang, S. A hybrid zero-reference and dehazing network for joint low-light underground image enhancement. Sci. Rep. 2025, 15, 10135. [Google Scholar] [CrossRef]
- Li, Y.; Tian, J.; Chen, Y.; Wang, H.; Yan, H.; Peng, Y.; Wang, T. Rw-Dm: Retinex and wavelet-based diffusion model for low-light image enhancement in underground coal mines. Complex Intell. Syst. 2025, 11, 327. [Google Scholar] [CrossRef]
- Han, W.; Xiao, Y.; Yin, Y. UM-GAN: Underground mine GAN for underground mine low-light image enhancement. IET Image Process. 2024, 18, 2154–2160. [Google Scholar] [CrossRef]
- Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning deep features for discriminative localization. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929. [Google Scholar]
- Ru, L.; Zhan, Y.; Yu, B.; Du, B. Learning affinity from attention: End-to-end weakly-supervised semantic segmentation with transformers. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 16846–16855. [Google Scholar]
- Li, R.; Mai, Z.; Zhang, Z.; Jang, J.; Sanner, S. Transcam: Transformer attention-based cam refinement for weakly supervised semantic segmentation. J. Visual Commun. Image Represent. 2023, 92, 103800. [Google Scholar] [CrossRef]
- Chen, L.; Guo, L.; Cheng, D.; Kou, Q. Structure-preserving and color-restoring up-sampling for single low-light image. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 1889–1902. [Google Scholar] [CrossRef]
- Lore, K.G.; Akintayo, A.; Sarkar, S. Llnet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662. [Google Scholar] [CrossRef]
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560. [Google Scholar] [CrossRef]
- Loh, Y.P.; Chan, C.S. Getting to know low-light images with the exclusively dark dataset. Comput. Vis. Image Underst. 2019, 178, 30–42. [Google Scholar] [CrossRef]
- Guo, X.; Li, Y.; Ling, H. Lime: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2016, 26, 982–993. [Google Scholar] [CrossRef]
- Jocher, G.; Chaurasia, A.; Qiu, J. Ultralytics YOLO, January 2023. Available online: https://github.com/ultralytics/ultralytics (accessed on 5 October 2025).
- Quan, Y.; Zhang, D.; Zhang, L.; Tang, J. Centralized feature pyramid for object detection. IEEE Trans. Image Process. 2023, 32, 4341–4354. [Google Scholar] [CrossRef]
- Tolstikhin, I.O.; Houlsby, N.; Kolesnikov, A.; Beyer, L.; Zhai, X.; Unterthiner, T.; Yung, J.; Steiner, A.; Keysers, D.; Uszkoreit, J.; et al. Mlp-mixer: An all-mlp architecture for vision. Adv. Neural Inf. Process. Syst. 2021, 34, 24261–24272. [Google Scholar]
- Yu, W.; Luo, M.; Zhou, P.; Si, C.; Zhou, Y.; Wang, X.; Feng, J.; Yan, S. Metaformer is actually what you need for vision. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 10819–10829. [Google Scholar]
- Lv, F.; Lu, F.; Wu, J.; Lim, C. Mbllen: Low-light image/video enhancement using cnns. BMVC 2018, 220, 4. [Google Scholar]
- Ma, L.; Ma, T.; Liu, R.; Fan, X.; Luo, Z. Toward fast, flexible, and robust low-light image enhancement. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5637–5646. [Google Scholar]
- Dong, X.; Pang, Y.; Wen, J. Fast efficient algorithm for enhancement of low lighting video. In ACM SIGGRAPH 2010 Posters; Association for Computing Machinery: New York, NY, USA, 2010; p. 1. [Google Scholar]
- Ying, Z.; Li, G.; Gao, W. A bio-inspired multi-exposure fusion framework for low-light image enhancement. arXiv 2017, arXiv:1711.00591. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.; Berg, A.C. Ssd: Single shot multibox detector. In Computer Vision – ECCV 2016; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
- Lin, T.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
- Jocher, G. YOLOv5 by Ultralytics, May 2020. Available online: https://github.com/ultralytics/yolov5 (accessed on 5 October 2025).
- Zhou, X.; Wang, D.; Krhenbuhl, P. Objects as points. arXiv 2019, arXiv:1904.07850. [Google Scholar]
- Tian, Z.; Shen, C.; Chen, H.; He, T. Fcos: Fully convolutional one-stage object detection. arXiv 2019, arXiv:1904.01355. [Google Scholar]
- Wang, T.; Qu, H.; Liu, C.; Zheng, T.; Lyu, Z. Lle-std: Traffic sign detection method based on low-light image enhancement and small target detection. Mathematics 2024, 12, 3125. [Google Scholar] [CrossRef]
- Peng, D.; Ding, W.; Zhen, T. A novel low light object detection method based on the yolov5 fusion feature enhancement. Sci. Rep. 2024, 14, 4486. [Google Scholar] [CrossRef] [PubMed]
- Zhou, Q.; Zhang, D.; Liu, H.; He, Y. Kcs-yolo: An improved algorithm for traffic light detection under low visibility conditions. Machines 2024, 12, 557. [Google Scholar] [CrossRef]
| Category | Number of Labels | Number of Pictures |
|---|---|---|
| Worker | 9758 | 1185 |
| Helmet | 14,541 | 3261 |
| Anchor bolt | 6531 | 2054 |
| Sum | 30,830 | 6500 |
| Enhanced Network | PSNR↑ | SSIM↑ | mAP@0.75 | Parameters/10³ | Inference Time/ms | Platform |
|---|---|---|---|---|---|---|
| RetinexNet | 18.01 | 0.545 | 80.86% | 555.21 | 128 | PyTorch (GPU) |
| MIRNetv2 | 19.73 | 0.613 | 81.74% | 5858.56 | 802 | PyTorch (GPU) |
| MBLLEN | 21.49 | 0.662 | 82.54% | 450.17 | 7890 | TensorFlow (GPU) |
| Zero-DCE | 18.44 | 0.582 | 82.13% | 79.42 | 8.89 | PyTorch (GPU) |
| SCI | 18.14 | 0.513 | 82.11% | 0.26 | 0.574 | PyTorch (GPU) |
| BIMEF | 15.62 | 0.452 | 76.54% | - | - | Python 3.9 |
| Dong | 16.21 | 0.482 | 77.13% | - | - | Python 3.9 |
| LIME | 17.58 | 0.515 | 79.65% | - | - | PyTorch (GPU) |
| LIENet | 22.59 | 0.721 | 83.44% | 67.3 | 7.4 | PyTorch (GPU) |
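As a concrete reading of the PSNR column above, the metric can be computed from the mean squared error between a reference and an enhanced image. The sketch below is illustrative (the function name and toy patches are not from the paper), assuming 8-bit images with a peak value of 255:

```python
import numpy as np

def psnr(reference, enhanced, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and an enhanced image."""
    mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy 4x4 grayscale patches: the "enhanced" image is uniformly off by 10 levels.
ref = np.full((4, 4), 200, dtype=np.uint8)
out = np.full((4, 4), 190, dtype=np.uint8)
print(round(psnr(ref, out), 2))  # 28.13
```

Higher PSNR means the enhanced output is closer to the reference; the 22.59 dB of LIENet in the table is therefore the strongest result among the learned methods compared.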
| Method | LOL PSNR↑ | LOL SSIM↑ | ExDark NIQE↓ | ExDark BRISQUE↓ | LIME NIQE↓ | LIME BRISQUE↓ |
|---|---|---|---|---|---|---|
| RetinexNet | 17.64 | 0.47 | 4.42 | 31.82 | 5.26 | 29.47 |
| MIRNetv2 | 23.54 | 0.84 | 3.15 | 24.53 | 3.82 | 21.24 |
| Zero-DCE | 14.86 | 0.56 | 3.22 | 25.95 | 3.96 | 23.73 |
| SCI | 15.12 | 0.51 | 3.95 | 27.56 | 4.28 | 24.41 |
| LIME | 16.75 | 0.56 | 3.38 | 26.43 | 4.35 | 22.31 |
| Dong | 16.72 | 0.48 | 3.85 | 29.51 | 4.24 | 26.22 |
| BIMEF | 13.86 | 0.60 | 3.18 | 25.46 | 3.92 | 24.25 |
| LIENet | 21.54 | 0.86 | 3.03 | 23.47 | 3.78 | 22.22 |
| Enhanced Network | Detection Network | Backbone | mAP@0.5 | mAP@0.75 | mAP@0.5:0.95 | FPS |
|---|---|---|---|---|---|---|
| LIENet | Faster R-CNN | ResNet50 | 94.25% | 71.62% | 63.42% | 18.22 |
| LIENet | SSD | VGG16 | 92.93% | 64.78% | 59.40% | 45.63 |
| LIENet | RetinaNet | ResNet50 | 94.85% | 72.57% | 63.23% | 23.66 |
| LIENet | YOLOv5s | CSP-DarkNet53 | 95.44% | 76.24% | 67.72% | 31.94 |
| LIENet | CenterNet | ResNet50 | 93.54% | 66.26% | 61.17% | 35.43 |
| LIENet | FCOS | ResNet50 | 94.27% | 68.50% | 60.54% | 26.79 |
| LIENet | YOLOv8s | CSP-DarkNet53 | 95.61% | 78.51% | 68.68% | 39.37 |
| Wang et al. [36] | - | - | 93.45% | 75.56% | 65.92% | 27.54 |
| Peng et al. [37] | - | - | 92.32% | 72.45% | 65.54% | 25.4 |
| Zhou et al. [38] | - | - | 95.94% | 79.56% | 69.53% | 28.54 |
| Proposed method | - | - | 96.96% | 83.44% | 72.50% | 35.2 |
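The mAP@0.5 and mAP@0.75 columns in the table above differ only in the intersection-over-union (IoU) threshold at which a predicted box counts as a true positive. A minimal IoU sketch (box format and values are illustrative, not from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection shifted by half its width overlaps the ground truth with IoU = 1/3:
# it counts as a hit at a 0.25 threshold but as a miss at mAP@0.5 and mAP@0.75.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```

Because the matching criterion tightens from 0.5 to 0.75, every method's score drops across those columns, which is why mAP@0.75 is the more discriminative comparison.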
| YOLOv8s | LIENet | PLP | PLEM | mAP@0.5 | mAP@0.75 | mAP@0.5:0.95 | Runtime/ms |
|---|---|---|---|---|---|---|---|
| √ | | | | 94.70% | 75.34% | 64.52% | 18 |
| √ | √ | | | 95.61% | 78.51% | 69.68% | 25.4 |
| √ | | √ | | 96.27% | 82.63% | 71.86% | 27.5 |
| √ | √ | √ | | 96.8% | 83.10% | 72.17% | 27.8 |
| √ | √ | √ | √ | 96.96% | 83.44% | 72.50% | 28.4 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Guo, H.; Lu, K.; Zhan, S.; Li, J.; Wu, Z. Target Detection in Underground Mines Based on Low-Light Image Enhancement. Digital 2026, 6, 13. https://doi.org/10.3390/digital6010013