Large–Small-Scale Structure Blended U-Net for Brightening Low-Light Images
Abstract
1. Introduction
- We propose a robust and practical low-light image enhancement (LLIE) method that processes the luminosity and color components of low-light images in the LAB color space. Experiments show that our LSUNet yields noise-free, natural-looking images with clear details and vivid colors. Our method also benefits object detection under low-light conditions.
- We propose a lightweight light-boosting network built on a large–small-scale structure that fully exploits multiscale features across scale spaces. Compared with existing multiscale structures, our method enhances illumination faster without introducing observable under- or over-enhancement.
- We propose a color correction network (CC-Net) based on a U-shaped network with a strided-convolution strategy to remove color casts and noise. An efficient feature interaction module (EFIM) is also presented to model the relationship between luminosity and color features, producing natural-looking visibility and clearer details; a minimal sketch of the overall pipeline follows this list.
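To make the three-part design concrete, here is a minimal PyTorch sketch of the pipeline (not the authors' released code; the class names and the use of kornia for the color transform are our assumptions): convert RGB to LAB, brighten the L channel with one branch, correct the A/B channels with the other, and convert back.

```python
# Minimal sketch of the LSUNet pipeline described above (not the authors' code).
# `light_net` and `cc_net` are placeholders for the light-boosting and color
# correction networks; kornia supplies the RGB <-> LAB transform.
import torch
import kornia.color as kc

class LSUNetPipeline(torch.nn.Module):
    def __init__(self, light_net: torch.nn.Module, cc_net: torch.nn.Module):
        super().__init__()
        self.light_net = light_net   # brightens the L (luminosity) channel
        self.cc_net = cc_net         # corrects the A/B (chroma) channels

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        lab = kc.rgb_to_lab(rgb)              # (B, 3, H, W); channels = L, A, B
        l, ab = lab[:, :1], lab[:, 1:]
        l_enh = self.light_net(l)             # light-boosting branch
        ab_enh = self.cc_net(ab)              # color correction branch
        return kc.lab_to_rgb(torch.cat([l_enh, ab_enh], dim=1))
```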
2. Related Work
2.1. Traditional Methods
2.2. Learning-Based Methods
3. Method
3.1. Motivation
3.2. Network Architecture
3.2.1. Color Space Transformation
3.2.2. Color Correction Network
3.2.3. Light-Boosting Network
3.2.4. Efficient Feature Interaction Module
3.3. Loss Function
4. Experiment and Analysis
4.1. Experimental Details
4.1.1. Training Details
4.1.2. Datasets and Evaluation Metrics
4.2. Ablation Study
4.2.1. Study of Color Space Transformation
4.2.2. Study of CC-Net
4.2.3. Study of Loss Function
4.3. Benchmark Evaluations
4.3.1. Comprehensive Evaluation on Synthetic Datasets
4.3.2. Comprehensive Evaluation on Real Datasets
4.4. Comprehensive Evaluation of Computational Complexity
4.5. Object Detection in the Dark
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Li, J.; Li, B.; Tu, Z.; Liu, X.; Guo, Q.; Juefei-Xu, F.; Xu, R.; Yu, H. Light the night: A multi-condition diffusion framework for unpaired low-light enhancement in autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 15205–15215.
2. Ghari, B.; Tourani, A.; Shahbahrami, A.; Gaydadjiev, G. Pedestrian detection in low-light conditions: A comprehensive survey. Image Vis. Comput. 2024, 148, 105106.
3. Wang, H.; Köser, K.; Ren, P. Large Foundation Model Empowered Discriminative Underwater Image Enhancement. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5609317.
4. Wei, R.; Wei, X.; Xia, S.; Chang, K.; Ling, M.; Nong, J.; Xu, L. Multi-scale wavelet feature fusion network for low-light image enhancement. Comput. Graph. 2025, 127, 104182.
5. Wu, W.; Weng, J.; Zhang, P.; Wang, X.; Yang, W.; Jiang, J. URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5901–5910.
6. Zou, D.; Yang, B. Infrared and low-light visible image fusion based on hybrid multiscale decomposition and adaptive light adjustment. Opt. Lasers Eng. 2023, 160, 107268.
7. Xu, H.; Wang, M.; Chen, S. Multiscale luminance adjustment-guided fusion for the dehazing of underwater images. J. Electron. Imaging 2024, 33, 013007.
8. Prinzi, F.; Currieri, T.; Gaglio, S.; Vitabile, S. Shallow and deep learning classifiers in medical image analysis. Eur. Radiol. Exp. 2024, 8, 26.
9. Ganga, B.; Lata, B.; Venugopal, K. Object detection and crowd analysis using deep learning techniques: Comprehensive review and future directions. Neurocomputing 2024, 597, 127932.
10. Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662.
11. Fan, G.; Yao, Z.; Chen, G.Y.; Su, J.N.; Gan, M. IniRetinex: Rethinking Retinex-type Low-Light Image Enhancer via Initialization Perspective. In Proceedings of the AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, 25 February–4 March 2025; Volume 39, pp. 2834–2842.
12. Zhang, M.; Yin, J.; Zeng, P.; Shen, Y.; Lu, S.; Wang, X. TSCnet: A text-driven semantic-level controllable framework for customized low-light image enhancement. Neurocomputing 2025, 625, 129509.
13. Zheng, Y.; Zhan, J.; He, S.; Dong, J.; Du, Y. Curricular contrastive regularization for physics-aware single image dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 5785–5794.
14. Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349.
15. Yi, X.; Xu, H.; Zhang, H.; Tang, L.; Ma, J. Diff-Retinex++: Retinex-Driven Reinforced Diffusion Model for Low-Light Image Enhancement. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 1–18.
16. Wang, H.; Zhang, W.; Ren, P. Self-organized underwater image enhancement. ISPRS J. Photogramm. Remote Sens. 2024, 215, 1–14.
17. Zhang, W.; Zhou, L.; Zhuang, P.; Li, G.; Pan, X.; Zhao, W.; Li, C. Underwater image enhancement via weighted wavelet visual perception fusion. IEEE Trans. Circuits Syst. Video Technol. 2023, 34, 2469–2483.
18. Dhal, K.G.; Das, A.; Ray, S.; Gálvez, J.; Das, S. Histogram equalization variants as optimization problems: A review. Arch. Comput. Methods Eng. 2021, 28, 1471–1496.
19. Huang, Z.; Wang, Z.; Zhang, J.; Li, Q.; Shi, Y. Image enhancement with the preservation of brightness and structures by employing contrast limited dynamic quadri-histogram equalization. Optik 2021, 226, 165877.
20. Dyke, R.M.; Hormann, K. Histogram equalization using a selective filter. Vis. Comput. 2023, 39, 6221–6235.
21. Yuan, Q.; Dai, S. Adaptive histogram equalization with visual perception consistency. Inf. Sci. 2024, 668, 120525.
22. Samraj, D.; Ramasamy, K.; Krishnasamy, B. Enhancement and diagnosis of breast cancer in mammography images using histogram equalization and genetic algorithm. Multidimens. Syst. Signal Process. 2023, 34, 681–702.
23. Sule, O.O.; Ezugwu, A.E. A two-stage histogram equalization enhancement scheme for feature preservation in retinal fundus images. Biomed. Signal Process. Control 2023, 80, 104384.
24. Rivera-Aguilar, B.A.; Cuevas, E.; Pérez, M.; Camarena, O.; Rodríguez, A. A new histogram equalization technique for contrast enhancement of grayscale images using the differential evolution algorithm. Neural Comput. Appl. 2024, 36, 12029–12045.
25. Jobson, D.J.; Rahman, Z.u.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
26. Rahman, Z.u.; Jobson, D.J.; Woodell, G.A. Retinex processing for automatic image enhancement. J. Electron. Imaging 2004, 13, 100–110.
27. Cai, R.; Chen, Z. Brain-like retinex: A biologically plausible retinex algorithm for low light image enhancement. Pattern Recognit. 2023, 136, 109195.
28. Jia, F.; Wong, H.S.; Wang, T.; Zeng, T. A reflectance re-weighted retinex model for non-uniform and low-light image enhancement. Pattern Recognit. 2023, 144, 109823.
29. Veluchamy, M.; Subramani, B. Detail preserving noise aware retinex model for low light image enhancement. J. Opt. 2025, 1–16.
30. Yang, W.; Gao, H.; Zou, W.; Liu, T.; Huang, S.; Ma, J. Low-Light Image Enhancement via Weighted Low-Rank Tensor Regularized Retinex Model. In Proceedings of the 2024 International Conference on Multimedia Retrieval, Phuket, Thailand, 10–14 June 2024; pp. 767–775.
31. Jeon, J.J.; Park, J.Y.; Eom, I.K. Low-light image enhancement using gamma correction prior in mixed color spaces. Pattern Recognit. 2024, 146, 110001.
32. Zhang, W.; Liu, Q.; Feng, Y.; Cai, L.; Zhuang, P. Underwater Image Enhancement via Principal Component Fusion of Foreground and Background. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 10930–10943.
33. Wang, H.; Sun, S.; Chang, L.; Li, H.; Zhang, W.; Frery, A.C.; Ren, P. INSPIRATION: A reinforcement learning-based human visual perception-driven image enhancement paradigm for underwater scenes. Eng. Appl. Artif. Intell. 2024, 133, 108411.
34. Jiang, Q.; Mao, Y.; Cong, R.; Ren, W.; Huang, C.; Shao, F. Unsupervised Decomposition and Correction Network for Low-Light Image Enhancement. IEEE Trans. Intell. Transp. Syst. 2022, 23, 19440–19455.
35. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex Decomposition for Low-Light Enhancement. In Proceedings of the British Machine Vision Conference, Newcastle, UK, 3–6 September 2018.
36. Zhang, Y.; Zhang, J.; Guo, X. Kindling the Darkness: A Practical Low-Light Image Enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, MM ’19, Nice, France, 21–25 October 2019; pp. 1632–1640.
37. Zhu, A.; Zhang, L.; Shen, Y.; Ma, Y.; Zhao, S.; Zhou, Y. Zero-Shot Restoration of Underexposed Images via Robust Retinex Decomposition. In Proceedings of the 2020 IEEE International Conference on Multimedia and Expo (ICME), London, UK, 6–10 July 2020; pp. 1–6.
38. Hai, J.; Xuan, Z.; Yang, R.; Hao, Y.; Zou, F.; Lin, F.; Han, S. R2RNet: Low-light image enhancement via real-low to real-normal network. J. Vis. Commun. Image Represent. 2023, 90, 103712.
39. Wu, H.; Wang, C.; Tu, L.; Patsch, C.; Jin, Z. CSPN: A category-specific processing network for low-light image enhancement. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 11929–11941.
40. Lim, S.; Kim, W. DSLR: Deep stacked Laplacian restorer for low-light image enhancement. IEEE Trans. Multimed. 2020, 23, 4272–4284.
41. Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 1780–1789.
42. Li, C.; Guo, C.; Loy, C.C. Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 4225–4238.
43. Zhang, Y.; Di, X.; Zhang, B.; Wang, C. Self-Supervised Image Enhancement Network: Training with Low-Light Images Only. arXiv 2020, arXiv:2002.11300.
44. Cai, Y.; Bian, H.; Lin, J.; Wang, H.; Timofte, R.; Zhang, Y. Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 12504–12513.
45. Xu, L.; Hu, C.; Hu, Y.; Jing, X.; Cai, Z.; Lu, X. UPT-Flow: Multi-scale transformer-guided normalizing flow for low-light image enhancement. Pattern Recognit. 2025, 158, 111076.
46. Kou, K.; Yin, X.; Gao, X.; Nie, F.; Liu, J.; Zhang, G. Lightweight two-stage transformer for low-light image enhancement and object detection. Digit. Signal Process. 2024, 150, 104521.
47. Jiang, K.; Wang, Q.; An, Z.; Wang, Z.; Zhang, C.; Lin, C.W. Mutual Retinex: Combining Transformer and CNN for Image Enhancement. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 2240–2252.
48. Wang, H.; Yan, X.; Hou, X.; Li, J.; Dun, Y.; Zhang, K. Division gets better: Learning brightness-aware and detail-sensitive representations for low-light image enhancement. Knowl.-Based Syst. 2024, 299, 111958.
49. Nguyen, C.M.; Chan, E.R.; Bergman, A.W.; Wetzstein, G. Diffusion in the Dark: A Diffusion Model for Low-Light Text Recognition. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2024; pp. 4146–4157.
50. Jiang, H.; Luo, A.; Liu, X.; Han, S.; Liu, S. LightenDiffusion: Unsupervised low-light image enhancement with latent-Retinex diffusion models. In Proceedings of the European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; pp. 161–179.
51. Yan, Q.; Feng, Y.; Zhang, C.; Pang, G.; Shi, K.; Wu, P.; Dong, W.; Sun, J.; Zhang, Y. HVI: A New Color Space for Low-Light Image Enhancement. arXiv 2025, arXiv:2502.20272.
52. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
53. Bychkovsky, V.; Paris, S.; Chan, E.; Durand, F. Learning photographic global tonal adjustment with a database of input/output image pairs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 97–104.
54. Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 586–595.
55. Ma, K.; Zeng, K.; Wang, Z. Perceptual Quality Assessment for Multi-Exposure Image Fusion. IEEE Trans. Image Process. 2015, 24, 3345–3356.
56. Wang, Q.; Fu, X.; Zhang, X.P.; Ding, X. A fusion-based method for single backlit image enhancement. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 4077–4081.
57. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “Completely Blind” Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212.
58. Ying, Z.; Li, G.; Gao, W. A bio-inspired multi-exposure fusion framework for low-light image enhancement. arXiv 2017, arXiv:1711.00591.
59. Loh, Y.P.; Chan, C.S. Getting to know low-light images with the Exclusively Dark dataset. Comput. Vis. Image Underst. 2019, 178, 30–42.
Input | Operator | Kernel Size | Channels | Output |
---|---|---|---|---|
Input_AB | ReLU(Conv) | 3 × 3 | 32 | Conv1_0 |
Conv1_0 | ReLU(Conv) | 3 × 3 | 64 | Conv1_1 |
Conv1_1 | ReLU(Conv) | 1 × 1 | 128 | Conv1_2 |
Conv1_2 | ReLU(Conv) | 3 × 3 | 64 | Conv1_3 |
Conv1_3 | Conv | 3 × 3 | 2 | Conv1_4 |
Conv1_4, Input_AB | ⨂ | - | 2 | Conv1 |
Input_AB | ReLU(Conv) | 3 × 3 | 32 | Conv2_0 |
Conv2_0 | ReLU(Conv) | 3 × 3 | 64 | Conv2_1 |
Conv2_1 | ReLU(Conv) | 1 × 1 | 128 | Conv2_2 |
Conv2_2 | ReLU(Conv) | 3 × 3 | 64 | Conv2_3 |
Conv2_3 | ReLU(Conv) | 3 × 3 | 64 | Conv2_4 |
Conv2_4 | ReLU(Conv) | 1 × 1 | 128 | Conv2_5 |
Conv2_5 | ReLU(Conv) | 3 × 3 | 64 | Conv2_6 |
Conv2_6 | Conv | 3 × 3 | 2 | Conv2_7 |
Conv2_7, Input_AB | ⨂ | - | 2 | Conv2 |
Conv1, Conv2 | ⨁ | - | 2 | Enhanced_AB |
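The table transcribes almost mechanically into PyTorch. Below is a minimal sketch under the assumptions that every convolution uses stride 1 with "same" padding, ⨂ denotes element-wise multiplication, and ⨁ element-wise addition; `CCNetSketch` and `conv_relu` are our names, not the authors'.

```python
import torch
import torch.nn as nn

def conv_relu(c_in: int, c_out: int, k: int) -> nn.Sequential:
    # "same" padding preserves spatial size; stride 1 is assumed throughout
    return nn.Sequential(nn.Conv2d(c_in, c_out, k, padding=k // 2),
                         nn.ReLU(inplace=True))

class CCNetSketch(nn.Module):
    """Two-branch color correction network transcribed from the table above."""
    def __init__(self):
        super().__init__()
        # Branch 1: Conv1_0 .. Conv1_4 (final layer has no ReLU)
        self.branch1 = nn.Sequential(
            conv_relu(2, 32, 3), conv_relu(32, 64, 3), conv_relu(64, 128, 1),
            conv_relu(128, 64, 3), nn.Conv2d(64, 2, 3, padding=1),
        )
        # Branch 2: Conv2_0 .. Conv2_7 (final layer has no ReLU)
        self.branch2 = nn.Sequential(
            conv_relu(2, 32, 3), conv_relu(32, 64, 3), conv_relu(64, 128, 1),
            conv_relu(128, 64, 3), conv_relu(64, 64, 3), conv_relu(64, 128, 1),
            conv_relu(128, 64, 3), nn.Conv2d(64, 2, 3, padding=1),
        )

    def forward(self, input_ab: torch.Tensor) -> torch.Tensor:
        conv1 = self.branch1(input_ab) * input_ab   # ⨂: element-wise product
        conv2 = self.branch2(input_ab) * input_ab
        return conv1 + conv2                        # ⨁: element-wise sum

x = torch.randn(1, 2, 256, 256)     # a 2-channel A/B input
print(CCNetSketch()(x).shape)       # torch.Size([1, 2, 256, 256])
```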
Color Spaces | PSNR ↑ | SSIM ↑ | AG ↑ |
---|---|---|---|
RGB | 22.57 | 0.8873 | 6.3470 |
HSV | 22.91 | 0.8977 | 6.3598 |
YCbCr | 22.03 | 0.8903 | 6.5210 |
Eigencolor | 21.69 | 0.8867 | 6.5001 |
LAB | 23.62 | 0.9009 | 6.5378 |
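PSNR and SSIM are standard, but average gradient (AG) has several variants in the literature. The sketch below implements one common definition (mean local gradient magnitude), which may differ from the exact formula used in the paper:

```python
import numpy as np

def average_gradient(img: np.ndarray) -> float:
    """One common definition of average gradient (AG) for a grayscale image:
    the mean magnitude of the local intensity gradient. Higher AG is usually
    read as sharper, more detailed content."""
    gy, gx = np.gradient(img.astype(np.float64))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```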
Model | PSNR ↑ | SSIM ↑ |
---|---|---|
w/o SM | 21.73 | 0.8968 |
w/o LM | 22.99 | 0.8975 |
Ours | 23.62 | 0.9009 |
No. | Loss Term 1 | Loss Term 2 | Loss Term 3 | PSNR ↑ | SSIM ↑ |
---|---|---|---|---|---|
1 | ✗ | ✗ | ✗ | 17.03 | 0.6949 |
2 | ✓ | ✗ | ✗ | 18.44 | 0.7032 |
3 | ✗ | ✓ | ✗ | 18.41 | 0.7511 |
4 | ✗ | ✗ | ✓ | 18.43 | 0.8540 |
5 | ✓ | ✓ | ✗ | 18.55 | 0.8374 |
6 | ✓ | ✗ | ✓ | 19.84 | 0.8557 |
7 | ✗ | ✓ | ✓ | 19.72 | 0.8938 |
Method | PSNR ↑ (MIT-Adobe FiveK) | SSIM ↑ (FiveK) | LPIPS ↓ (FiveK) | PSNR ↑ (LOL) | SSIM ↑ (LOL) | LPIPS ↓ (LOL) |
---|---|---|---|---|---|---|
RetinexNet [35] | 18.31 | 0.7808 | 0.244 | 16.77 | 0.6613 | 0.280 |
EnlightenGAN [14] | 19.49 | 0.8473 | 0.167 | 17.32 | 0.6785 | 0.215 |
KinD [36] | 18.15 | 0.8024 | 0.121 | 18.21 | 0.7490 | 0.147 |
RRDNet [37] | 15.31 | 0.7945 | 0.245 | 15.57 | 0.7029 | 0.229 |
Zero-DCE [41] | 16.10 | 0.8359 | 0.203 | 14.58 | 0.6107 | 0.163 |
R2RNet [38] | 19.00 | 0.8399 | 0.116 | 20.21 | 0.8460 | 0.085 |
LightenDiffusion [50] | 19.96 | 0.8013 | 0.127 | 20.45 | 0.8035 | 0.192 |
CSPN [39] | 23.57 | 0.9291 | 0.095 | 23.82 | 0.8543 | 0.085 |
DSLR [40] | 18.58 | 0.5971 | 0.143 | 21.17 | 0.6921 | 0.201 |
Ours | 23.62 | 0.9009 | 0.039 | 23.43 | 0.8664 | 0.036 |
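The three full-reference metrics above can be reproduced with widely used open-source implementations. A hedged sketch (the file names are placeholders, and LPIPS uses the AlexNet backbone here, which may not match the paper's configuration):

```python
import lpips                      # pip install lpips
import torch
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

pred = io.imread("enhanced.png")[..., :3] / 255.0    # placeholder file names
gt = io.imread("reference.png")[..., :3] / 255.0

psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)

loss_fn = lpips.LPIPS(net="alex")                    # expects NCHW tensors in [-1, 1]
to_t = lambda a: torch.from_numpy(a).permute(2, 0, 1)[None].float() * 2 - 1
lpips_val = loss_fn(to_t(pred), to_t(gt)).item()
print(f"PSNR {psnr:.2f}  SSIM {ssim:.4f}  LPIPS {lpips_val:.3f}")
```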
Method | NIQE ↓ (MEF) | NFERM ↓ (MEF) | IE ↑ (MEF) | LOE ↓ (MEF) | NIQE ↓ (Fusion) | NFERM ↓ (Fusion) | IE ↑ (Fusion) | LOE ↓ (Fusion) | NIQE ↓ (VV) | NFERM ↓ (VV) | IE ↑ (VV) | LOE ↓ (VV) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
RetinexNet [35] | 4.128 | 22.584 | 6.321 | 1258.403 | 4.069 | 15.507 | 7.098 | 1550.720 | 3.692 | 19.113 | 6.441 | 911.311 |
EnlightenGAN [14] | 4.319 | 15.670 | 9.891 | 1567.019 | 3.972 | 13.391 | 8.652 | 1339.109 | 3.429 | 16.098 | 8.910 | 609.801 |
KinD [36] | 3.718 | 18.758 | 7.252 | 875.813 | 4.1952 | 19.350 | 9.627 | 935.044 | 3.084 | 17.444 | 8.862 | 744.395 |
RRDNet [37] | 3.605 | 20.508 | 6.907 | 605.083 | 3.458 | 9.041 | 7.573 | 904.111 | 2.625 | 15.456 | 7.126 | 544.622 |
Zero-DCE [41] | 3.448 | 13.890 | 7.105 | 1389.023 | 3.820 | 11.346 | 7.943 | 1134.603 | 2.767 | 4.894 | 7.953 | 489.423 |
R2RNet [38] | 4.195 | 9.341 | 7.647 | 934.145 | 3.809 | 9.694 | 9.036 | 969.601 | 3.356 | 6.528 | 8.121 | 652.811 |
CIDNet [51] | 3.561 | 11.701 | 7.376 | 743.193 | 3.834 | 12.371 | 8.739 | 1203.746 | 2.876 | 4.972 | 7.803 | 598.367 |
LightenDiffusion [50] | 3.992 | 12.976 | 7.112 | 799.840 | 4.001 | 12.574 | 8.652 | 1467.101 | 3.610 | 5.079 | 7.521 | 666.096 |
DSLR [40] | 4.105 | 13.027 | 6.976 | 1199.605 | 3.998 | 15.599 | 6.987 | 1501.121 | 3.666 | 19.187 | 6.392 | 923.653 |
Ours | 3.382 | 10.473 | 14.087 | 404.736 | 3.050 | 7.541 | 12.528 | 754.142 | 2.577 | 5.345 | 11.824 | 534.580 |
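Of the four no-reference metrics, NIQE is available in the pyiqa toolbox and IE (information entropy) in scikit-image; NFERM and LOE are usually computed with their authors' original releases. A sketch for the first two (the file name is a placeholder):

```python
import pyiqa                      # pip install pyiqa
import torch
from skimage import io, color
from skimage.measure import shannon_entropy

img = io.imread("result.png")[..., :3] / 255.0          # placeholder file name
ie = shannon_entropy((color.rgb2gray(img) * 255).astype("uint8"))  # IE, in bits

niqe = pyiqa.create_metric("niqe")                      # lower is better
x = torch.from_numpy(img).permute(2, 0, 1)[None].float()  # NCHW in [0, 1]
print(f"NIQE {niqe(x).item():.3f}  IE {ie:.3f}")
```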
Method | MEF | Fusion | VV |
---|---|---|---|
RetinexNet [35] | 1.3284 | 1.2776 | 1.3489 |
EnlightenGAN [14] | 2.1698 | 3.9870 | 3.0982 |
KinD [36] | 3.2476 | 3.4276 | 4.0128 |
RRDNet [37] | 2.9488 | 3.6610 | 3.5821 |
Zero-DCE [41] | 4.1852 | 3.5223 | 3.1326 |
R2RNet [38] | 3.1689 | 2.1095 | 2.7691 |
CIDNet [51] | 3.8897 | 3.9853 | 3.8985 |
LightenDiffusion [50] | 3.4453 | 3.8769 | 3.7741 |
DSLR [40] | 3.0698 | 3.6672 | 2.9978 |
Ours | 4.3372 | 4.2769 | 4.2010 |
Method | Params (M) ↓ | FLOPs (G) ↓ | Time (s) ↓ |
---|---|---|---|
RetinexNet [35] | 1.23 | 6.79 | 0.5217 |
KinD [36] | 8.49 | 7.44 | 0.6445 |
RRDNet [37] | 28.83 | 53.47 | 0.9763 |
Zero-DCE [41] | 1.21 | 5.21 | 0.007 |
R2RNet [38] | - | - | 0.6894 |
CIDNet [51] | 1.98 | 8.03 | 0.7869 |
LightenDiffusion [50] | 101.71 | 210 | 1.2001 |
EnlightenGAN [14] | 8.64 | 7.88 | 0.6501 |
CSPN [39] | 60.921 | 1.40 | 0.149 |
DSLR [40] | 14.31 | 22.95 | 0.9210 |
Ours | 1.01 | 4.13 | 0.0059 |
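Parameter and FLOP counts like those above are commonly measured with the thop package (which, strictly speaking, reports multiply-accumulate operations). A sketch, assuming a 1 × 3 × 256 × 256 input, which may not match the authors' measurement protocol:

```python
import time
import torch
from thop import profile          # pip install thop

model = torch.nn.Conv2d(3, 3, 3, padding=1).eval()  # stand-in; substitute the full LSUNet
x = torch.randn(1, 3, 256, 256)

macs, params = profile(model, inputs=(x,))          # thop counts MACs, not FLOPs
with torch.no_grad():
    t0 = time.perf_counter()
    model(x)
    elapsed = time.perf_counter() - t0

print(f"Params {params / 1e6:.2f} M | MACs {macs / 1e9:.2f} G | Time {elapsed:.4f} s")
```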
Input | Bicycle | Boat | Bottle | Bus | Car | Cat | Chair | Cup | Dog | Motor | People | Table |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Original | 70.3 | 61.3 | 54.7 | 70.3 | 61.3 | 50.1 | 35.9 | 46.4 | 51.7 | 51.3 | 46.8 | 34.3 |
Enhanced | 76.1 | 66.0 | 65.7 | 82.9 | 81.9 | 60.1 | 59.1 | 68.8 | 62.3 | 65.0 | 65.9 | 48.1 |