DADNet: Dual-Branch Low-Light Image Enhancement Network Based on Attention Mechanism and Dark Channel Prior
Abstract
1. Introduction
- DADNet is a new dual-branch method for low-light image enhancement. The Illumination Enhancement Module (IEM) branch draws on the dark channel prior (DCP) to restore brightness, while the Color Transformation Module (CTM) branch corrects color distortion and adjusts saturation through an attention mechanism. DADNet effectively enhances low-light images and produces natural-looking results.
- The IEM branch generates the illumination-enhanced image based on the DCP and a pixel-level least squares model. The IEM extracts dark channel features through two components, the Dark Channel Feature Block (DCFB) and the Dark Channel Block (DCB). We also construct a physical model that combines the low-light image with multiplication and addition feature maps, achieving balanced brightness enhancement through pixel-wise computation.
- The CTM branch uses an adaptive attention mechanism. It focuses on color features and saturation adjustment across the entire image and significantly improves performance in downstream visual tasks.
- The experimental results demonstrate that DADNet enhances image brightness to appropriate levels while preserving detail and texture information and accurately restoring color. In both qualitative and quantitative assessments, DADNet outperforms state-of-the-art methods.
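The two ingredients of the IEM bullet above can be sketched in code. `dark_channel` follows He et al.'s definition of the dark channel prior (per-pixel minimum over the color channels, then a minimum filter over a local patch), and `pixelwise_enhance` is a hypothetical illustration of the pixel-wise physical model, I_enh = I_low · M + A, where M and A stand for the multiplication and addition feature maps a trained IEM would predict. Function names, the patch size, and the clipping range are our assumptions, not the paper's implementation:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel prior: per-pixel minimum over the three color
    channels, followed by a minimum filter over a local patch."""
    mins = img.min(axis=2)                       # channel-wise minimum, shape (H, W)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")      # replicate borders
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):                           # naive sliding-window minimum
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def pixelwise_enhance(low, mul_map, add_map):
    """Hypothetical pixel-wise physical model: combine the low-light
    input with multiplication and addition maps, then clip to [0, 1]."""
    return np.clip(low * mul_map + add_map, 0.0, 1.0)
```

A learned network would predict `mul_map` and `add_map` per pixel; here they can be scalars or arrays broadcastable to the image shape.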
2. Related Works
2.1. Grayscale Transformation-Based Methods
2.2. Physical Model-Based Methods
2.3. Deep Learning-Based Methods
3. Proposed Method
3.1. Overall Network Architecture
3.2. Illumination Enhancement Module
3.3. Color Transformation Module
3.4. Loss Functions
4. Experimental Results Analysis
4.1. Experimental Settings
4.2. Ablation Study
4.3. Qualitative Evaluation
4.4. Quantitative Evaluation
4.5. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Singh, P.; Bhandari, A.K. A Review on Computational Low-Light Image Enhancement Models: Challenges, Benchmarks, and Perspectives. Arch. Comput. Methods Eng. 2025, 32, 2853–2885. [Google Scholar] [CrossRef]
- Rahman, S.; Rahman, M.M.; Abdullah-Al-Wadud, M.; Al-Quaderi, G.D.; Shoyaib, M. An adaptive gamma correction for image enhancement. EURASIP J. Image Video Process. 2016, 2016, 35. [Google Scholar] [CrossRef]
- Ma, Q.; Wang, Y.; Zeng, T. Retinex-Based Variational Framework for Low-Light Image Enhancement and Denoising. IEEE Trans. Multimed. 2023, 25, 5580–5588. [Google Scholar] [CrossRef]
- Li, X.; Liu, M.; Ling, Q. Pixel-Wise Gamma Correction Mapping for Low-Light Image Enhancement. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 681–694. [Google Scholar] [CrossRef]
- Jeon, J.J.; Park, J.Y.; Eom, I.K. Low-light image enhancement using gamma correction prior in mixed color spaces. Pattern Recognit. 2024, 146, 110001. [Google Scholar] [CrossRef]
- Tang, H.; Zhu, H.; Fei, L.; Wang, T.; Cao, Y.; Xie, C. Low-Illumination Image Enhancement Based on Deep Learning Techniques: A Brief Review. Photonics 2023, 10, 198. [Google Scholar] [CrossRef]
- Land, E.H. The Retinex Theory of Color Vision. Sci. Am. 1977, 237, 108–129. [Google Scholar] [CrossRef]
- He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar] [CrossRef]
- Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662. [Google Scholar] [CrossRef]
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex Decomposition for Low-Light Enhancement. In Proceedings of the British Machine Vision Conference; British Machine Vision Association: Durham, UK, 2018. [Google Scholar]
- Wang, T.; Zhang, K.; Shen, T.; Luo, W.; Stenger, B.; Lu, T. Ultra-high-definition low-light image enhancement: A benchmark and transformer-based method. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023. [Google Scholar] [CrossRef]
- Mukhopadhyay, S.; Hossain, S.; Malakar, S.; Cuevas, E.; Sarkar, R. Image contrast improvement through a metaheuristic scheme. Soft Comput. 2023, 27, 13657–13676. [Google Scholar] [CrossRef]
- Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368. [Google Scholar] [CrossRef]
- Pisano, E.D.; Zong, S.; Hemminger, B.M.; DeLuca, M.; Johnston, R.E.; Muller, K.; Braeuning, M.P.; Pizer, S.M. Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms. J. Digit. Imaging 1998, 11, 193. [Google Scholar] [CrossRef] [PubMed]
- Abdullah-Al-Wadud, M.; Kabir, M.H.; Akber Dewan, M.A.; Chae, O. A Dynamic Histogram Equalization for Image Contrast Enhancement. IEEE Trans. Consum. Electron. 2007, 53, 593–600. [Google Scholar] [CrossRef]
- Liu, X.; Li, H.; Zhu, C. Joint Contrast Enhancement and Exposure Fusion for Real-World Image Dehazing. IEEE Trans. Multimed. 2022, 24, 3934–3946. [Google Scholar] [CrossRef]
- Rahman, Z.; Pu, Y.F.; Aamir, M.; Wali, S. Structure revealing of low-light images using wavelet transform based on fractional-order denoising and multiscale decomposition. Vis. Comput. 2021, 37, 865–880. [Google Scholar] [CrossRef]
- Tirumani, V.H.L.; Tenneti, M.; K, C.S.; Kotamraju, S.K. Image resolution and contrast enhancement with optimal brightness compensation using wavelet transforms and particle swarm optimization. IET Image Process. 2021, 15, 2833–2840. [Google Scholar] [CrossRef]
- Guo, X.; Li, Y.; Ling, H. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Trans. Image Process. 2017, 26, 982–993. [Google Scholar] [CrossRef]
- Wang, Y.F.; Liu, H.M.; Fu, Z.W. Low-Light Image Enhancement via the Absorption Light Scattering Model. IEEE Trans. Image Process. 2019, 28, 5679–5690. [Google Scholar] [CrossRef]
- Zhou, M.; Wu, X.; Wei, X.; Xiang, T.; Fang, B.; Kwong, S. Low-Light Enhancement Method Based on a Retinex Model for Structure Preservation. IEEE Trans. Multimed. 2024, 26, 650–662. [Google Scholar] [CrossRef]
- Lin, Y.H.; Lu, Y.C. Low-Light Enhancement Using a Plug-and-Play Retinex Model With Shrinkage Mapping for Illumination Estimation. IEEE Trans. Image Process. 2022, 31, 4897–4908. [Google Scholar] [CrossRef]
- Pu, T.; Zhu, Q. Non-Uniform Illumination Image Enhancement via a Retinal Mechanism Inspired Decomposition. IEEE Trans. Consum. Electron. 2024, 70, 747–756. [Google Scholar] [CrossRef]
- Jobson, D.J.; Rahman, Z.u.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976. [Google Scholar] [CrossRef] [PubMed]
- Yu, S.Y.; Zhu, H. Low-Illumination Image Enhancement Algorithm Based on a Physical Lighting Model. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 28–37. [Google Scholar] [CrossRef]
- Singh, K.; Parihar, A.S. Illumination estimation for nature preserving low-light image enhancement. Vis. Comput. 2024, 40, 121–136. [Google Scholar] [CrossRef]
- McCartney, E.J. Optics of the Atmosphere: Scattering by Molecules and Particles; John Wiley and Sons: New York, NY, USA, 1976. [Google Scholar]
- Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008. [Google Scholar] [CrossRef]
- Dong, X.; Pang, Y.A.; Wen, J.G. Fast efficient algorithm for enhancement of low lighting video. In ACM SIGGRAPH 2010 Posters; SIGGRAPH ’10; Association for Computing Machinery: New York, NY, USA, 2010. [Google Scholar] [CrossRef]
- Guo, Z.; Wang, C. Low Light Image Enhancement Algorithm Based on Retinex and Dehazing Model. In ICRAI ’20: Proceedings of the 6th International Conference on Robotics and Artificial Intelligence; Association for Computing Machinery: New York, NY, USA, 2021; pp. 84–90. [Google Scholar] [CrossRef]
- Abraham, N.J.; Daway, H.G.; Ali, R.A. Low lightness image enhancement using modified DCP based lightness mapping in lab color space. Int. J. Intell. Eng. Syst. 2022, 15, 244–251. [Google Scholar] [CrossRef]
- Lv, F.; Lu, F.; Wu, J.; Lim, C. MBLLEN: Low-Light Image/Video Enhancement Using CNNs. In Proceedings of the British Machine Vision Conference (BMVC); Northumbria University: Newcastle upon Tyne, UK, 2018; Volume 220, p. 4. [Google Scholar]
- Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Li, C.; Guo, C.; Loy, C.C. Learning to enhance low-light image via zero-reference deep curve estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4225–4238. [Google Scholar] [CrossRef]
- Liang, D.; Xu, Z.; Li, L.; Wei, M.; Chen, S. Pie: Physics-inspired low-light enhancement. Int. J. Comput. Vis. 2024, 132, 3911–3932. [Google Scholar] [CrossRef]
- Feng, Y.; Hou, S.; Lin, H.; Zhu, Y.; Wu, P.; Dong, W.; Sun, J.; Yan, Q.; Zhang, Y. DiffLight: Integrating Content and Detail for Low-light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Seattle, WA, USA, 16–22 June 2024. [Google Scholar]
- Qian, L.; Jiang, L. LIENet: A low-light image enhancement network for extreme darkness. J. King Saud Univ. Comput. Inf. Sci. 2025, 37, 344. [Google Scholar] [CrossRef]
- Hu, R.; Luo, T.; Jiang, G.; Chen, Y.; Xu, H.; Liu, L.; He, Z. DiffDark: Multi-prior integration driven diffusion model for low-light image enhancement. Pattern Recognit. 2025, 168, 111814. [Google Scholar] [CrossRef]
- Liu, J.; Wang, S.; Chen, C.; Hou, Q. DFP-Net: An unsupervised dual-branch frequency-domain processing framework for single image dehazing. Eng. Appl. Artif. Intell. 2024, 136, 109012. [Google Scholar] [CrossRef]
- Cui, Z.; Li, K.; Gu, L.; Su, S.; Gao, P.; Jiang, Z.; Qiao, Y.; Harada, T. You Only Need 90K Parameters to Adapt Light: A Light Weight Transformer for Image Enhancement and Exposure Correction. In Proceedings of the 33rd British Machine Vision Conference 2022, BMVC 2022, London, UK, 21–24 November 2022; BMVA Press: Lancaster, UK, 2022. [Google Scholar]
- Zhang, W.; Ding, Y.; Zhang, M.; Zhang, Y.; Cao, L.; Huang, Z.; Wang, J. TCPCNet: A transformer-CNN parallel cooperative network for low-light image enhancement. Multimed. Tools Appl. 2024, 83, 52957–52972. [Google Scholar] [CrossRef]
- Wu, X.; Lai, Z.; Zhou, J.; Hou, X.; Pedrycz, W.; Shen, L. Light-Aware Contrastive Learning for Low-Light Image Enhancement. ACM Trans. Multimed. Comput. Commun. Appl. 2024, 20, 1–20. [Google Scholar] [CrossRef]
- Dang, J.; Zhong, Y.; Qin, X. PPformer: Using pixel-wise and patch-wise cross-attention for low-light image enhancement. Comput. Vis. Image Underst. 2024, 241, 103930. [Google Scholar] [CrossRef]
- Wen, Y.; Xu, P.; Li, Z.; Xu, W. An illumination-guided dual attention vision transformer for low-light image enhancement. Pattern Recognit. 2025, 158, 111033. [Google Scholar] [CrossRef]
- He, R.; Li, X.; Wu, J. LEESDFormer: A lightweight unsupervised CNN-Transformer-based curve estimation network for low-light image enhancement, exposure suppression, and denoising. Neural Netw. 2025, 190, 107764. [Google Scholar] [CrossRef]
- Huang, W.; Zhu, Y.; Huang, R. Low Light Image Enhancement Network With Attention Mechanism and Retinex Model. IEEE Access 2020, 8, 74306–74314. [Google Scholar] [CrossRef]
- Jiang, S.; Shi, Y.; Zhang, Y.; Zhang, Y. An Improved Retinex-Based Approach Based on Attention Mechanisms for Low-Light Image Enhancement. Electronics 2024, 13, 3645. [Google Scholar] [CrossRef]
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep Light Enhancement Without Paired Supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef] [PubMed]
- Cai, Y.; Bian, H.; Lin, J.; Wang, H.; Timofte, R.; Zhang, Y. Retinexformer: One-stage retinex-based transformer for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023. [Google Scholar]
- Jiang, K.; Wang, Q.; An, Z.; Wang, Z.; Zhang, C.; Lin, C.W. Mutual Retinex: Combining Transformer and CNN for Image Enhancement. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 2240–2252. [Google Scholar] [CrossRef]
- Luo, Y.; Lv, G.; Ling, J.; Hu, X. Low-light image enhancement via an attention-guided deep Retinex decomposition model. Appl. Intell. 2025, 55, 1–13. [Google Scholar] [CrossRef]
- Chobola, T.; Liu, Y.; Zhang, H.; Schnabel, J.A.; Peng, T. Fast context-based low-light image enhancement via neural implicit representations. In Proceedings of the Computer Vision—ECCV 2024; Springer: Cham, Switzerland, 2025; pp. 413–430. [Google Scholar] [CrossRef]
- Gonzalez, R.C. Digital Image Processing; Pearson Education India: Noida, India, 2009. [Google Scholar]
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers. In Proceedings of the Computer Vision—ECCV 2020; Springer: Cham, Switzerland, 2020; pp. 213–229. [Google Scholar]
- Cai, J.; Gu, S.; Zhang, L. Learning a deep single image contrast enhancer from multi-exposure images. IEEE Trans. Image Process. 2018, 27, 2049–2062. [Google Scholar] [CrossRef]
- Dang-Nguyen, D.T.; Pasquini, C.; Conotter, V.; Boato, G. RAISE: A raw images dataset for digital image forensics. In Proceedings of the 6th ACM Multimedia Systems Conference (MMSys ’15), New York, NY, USA, 18–20 March 2015. [Google Scholar] [CrossRef]
- Ma, K.; Zeng, K.; Wang, Z. Perceptual Quality Assessment for Multi-Exposure Image Fusion. IEEE Trans. Image Process. 2015, 24, 3345–3356. [Google Scholar] [CrossRef]
- Horé, A.; Ziou, D. Image Quality Metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef] [PubMed]
- Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003. [Google Scholar] [CrossRef]
- Xue, W.; Zhang, L.; Mou, X.; Bovik, A.C. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Trans. Image Process. 2013, 23, 684–695. [Google Scholar] [CrossRef] [PubMed]
- Sharma, G.; Wu, W.; Dalal, E.N. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Res. Appl. 2005, 30, 21–30. [Google Scholar] [CrossRef]
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “Completely Blind” Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212. [Google Scholar] [CrossRef]
- Venkatanath, N.; Praneeth, D.; Bh, M.C.; Channappayya, S.S.; Medasani, S.S. Blind image quality evaluation using perception based features. In Proceedings of the 2015 Twenty First National Conference on Communications (NCC), Mumbai, India, 27 February–1 March 2015. [Google Scholar]
- Sun, S.; Ren, W.; Peng, J.; Song, F.; Cao, X. DI-Retinex: Digital-imaging Retinex model for low-light image enhancement. Int. J. Comput. Vis. 2025, 133, 8293–8314. [Google Scholar] [CrossRef]
- Brateanu, A.; Balmez, R.; Avram, A.; Orhei, C.; Ancuti, C. LYT-NET: Lightweight YUV Transformer-Based Network for Low-Light Image Enhancement. IEEE Signal Process. Lett. 2025, 32, 2065–2069. [Google Scholar] [CrossRef]
| Hyperparameter | Setting |
|---|---|
| Optimizer | Adam |
| Learning rate | 0.0005 |
| Weight decay | 0.000001 |
| Batch size | 8 |
| Epochs | 500 |
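The training settings in the table above can be collected into a framework-agnostic configuration dictionary; this is a sketch for reproducibility, and the key names are our own rather than from the paper's code:

```python
# Training configuration mirroring the hyperparameter table above.
# Key names are illustrative; plug the values into your framework's optimizer.
train_config = {
    "optimizer": "Adam",
    "learning_rate": 5e-4,   # 0.0005
    "weight_decay": 1e-6,    # 0.000001
    "batch_size": 8,
    "epochs": 500,
}
```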
| Method | IEM | CTM | PSNR ↑ | FSIM ↑ | MS-SSIM ↑ | GMSD ↓ | CIEDE2000 ↓ |
|---|---|---|---|---|---|---|---|
| Without IEM | ✗ | ✓ | 19.3047 | 0.8657 | 0.8443 | 0.1145 | 0.0695 |
| Without CTM | ✓ | ✗ | 18.1440 | 0.9022 | 0.8987 | 0.0922 | 0.0888 |
| DADNet (Ours) | ✓ | ✓ | 24.3585 | 0.9343 | 0.9354 | 0.0686 | 0.0456 |
| Method | PSNR ↑ | FSIM ↑ | MS-SSIM ↑ | GMSD ↓ | CIEDE2000 ↓ |
|---|---|---|---|---|---|
| Without | 24.5807 | 0.9349 | 0.9412 | 0.0713 | 0.0457 |
| 1st-position feature as | 23.6164 | 0.9340 | 0.9341 | 0.0702 | 0.0485 |
| 2nd-position feature as | 24.8902 | 0.9347 | 0.9415 | 0.0708 | 0.0450 |
| DADNet (Ours) | 24.3585 | 0.9343 | 0.9354 | 0.0686 | 0.0456 |
| Method | LOL-v1 PSNR ↑ | LOL-v1 FSIM ↑ | LOL-v1 MS-SSIM ↑ | LOL-v1 GMSD ↓ | RAISE PSNR ↑ | RAISE FSIM ↑ | RAISE MS-SSIM ↑ | RAISE GMSD ↓ | SICE PSNR ↑ | SICE FSIM ↑ | SICE MS-SSIM ↑ | SICE GMSD ↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MSRCR | 15.4146 | 0.8805 | 0.8788 | 0.0909 | 14.9279 | 0.9195 | 0.9192 | 0.0645 | 14.4296 | 0.8221 | 0.7875 | 0.1454 |
| CLAHE | 13.1818 | 0.8220 | 0.7781 | 0.1336 | 13.6161 | 0.7654 | 0.7644 | 0.1810 | 13.6467 | 0.8037 | 0.7545 | 0.1608 |
| LIME | 15.8091 | 0.9050 | 0.9093 | 0.0867 | 17.2111 | 0.8656 | 0.8709 | 0.1126 | 15.9160 | 0.8466 | 0.8090 | 0.1364 |
| Jeon et al. | 15.8813 | 0.8481 | 0.8385 | 0.1239 | 16.9431 | 0.9023 | 0.9091 | 0.0911 | 15.7785 | 0.8446 | 0.8031 | 0.1425 |
| NPLIE | 12.2643 | 0.8934 | 0.8439 | 0.0967 | 16.2839 | 0.8672 | 0.8856 | 0.1112 | 13.4317 | 0.8274 | 0.7829 | 0.1503 |
| RetinexNet | 15.7419 | 0.8473 | 0.7935 | 0.1230 | 17.5062 | 0.8707 | 0.8438 | 0.1211 | 17.4090 | 0.8041 | 0.7045 | 0.1623 |
| ZeroDCE | 14.1621 | 0.9038 | 0.8644 | 0.0869 | 17.6059 | 0.9156 | 0.9286 | 0.0772 | 15.3786 | 0.8440 | 0.7939 | 0.1338 |
| ZeroDCE++ | 14.0946 | 0.8444 | 0.8122 | 0.1559 | 17.3176 | 0.9141 | 0.9216 | 0.0683 | 14.8897 | 0.7863 | 0.7313 | 0.1906 |
| PIE | 10.9588 | 0.8651 | 0.7883 | 0.1180 | 14.5064 | 0.8709 | 0.8806 | 0.1136 | 12.3346 | 0.8049 | 0.7392 | 0.1559 |
| CoLIE | 12.8431 | 0.8980 | 0.8496 | 0.0917 | 14.7721 | 0.8625 | 0.8545 | 0.1076 | 14.3155 | 0.8409 | 0.7940 | 0.1412 |
| DI-Retinex | 16.9067 | 0.9092 | 0.8843 | 0.0844 | 16.1313 | 0.8781 | 0.8875 | 0.1108 | 15.2333 | 0.8271 | 0.7879 | 0.1471 |
| LYT-Net | 20.8864 | 0.9400 | 0.9447 | 0.0644 | 21.4561 | 0.9566 | 0.9635 | 0.0457 | 18.6950 | 0.8574 | 0.8250 | 0.1306 |
| DADNet | 24.3585 | 0.9343 | 0.9354 | 0.0686 | 25.1977 | 0.9560 | 0.9627 | 0.0489 | 22.0133 | 0.8714 | 0.8408 | 0.1237 |
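As a reference for the most prominent full-reference metric in the table above, a minimal PSNR sketch is shown below, assuming images normalized to [0, 1]; the function name and peak value are our choices, not tied to the paper's evaluation code:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between a ground-truth image
    `ref` and an enhanced image `test`, both scaled to [0, peak]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")               # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher values indicate that the enhanced output is closer to the ground truth; FSIM, MS-SSIM, and GMSD are structural metrics computed differently, but they compare the same image pair.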
| Method | LOL-v1 CIEDE2000 ↓ | RAISE CIEDE2000 ↓ | SICE CIEDE2000 ↓ |
|---|---|---|---|
| MSRCR | 0.1228 | 0.1175 | 0.1183 |
| CLAHE | 0.1462 | 0.1316 | 0.1304 |
| LIME | 0.1173 | 0.0850 | 0.0933 |
| Jeon et al. | 0.1272 | 0.1007 | 0.0990 |
| NPLIE | 0.1589 | 0.1077 | 0.1329 |
| RetinexNet | 0.1127 | 0.0818 | 0.0835 |
| ZeroDCE | 0.1318 | 0.0950 | 0.1058 |
| ZeroDCE++ | 0.1334 | 0.1018 | 0.1151 |
| PIE | 0.1781 | 0.1294 | 0.1508 |
| CoLIE | 0.1543 | 0.1330 | 0.1227 |
| DI-Retinex | 0.1002 | 0.1051 | 0.1042 |
| LYT-Net | 0.0651 | 0.0630 | 0.0767 |
| DADNet | 0.0456 | 0.0478 | 0.0561 |
| Method | LIME NIQE ↓ | LIME PIQUE ↓ | MEF NIQE ↓ | MEF PIQUE ↓ |
|---|---|---|---|---|
| MSRCR | 12.0923 | 11.5761 | 8.6239 | 14.8090 |
| CLAHE | 10.4246 | 13.0890 | 10.4044 | 8.0896 |
| LIME | 14.2610 | 17.9406 | 14.0343 | 19.8174 |
| Jeon et al. | 11.1577 | 10.7985 | 11.9930 | 8.6572 |
| NPLIE | 12.6595 | 10.2433 | 12.1399 | 8.3904 |
| RetinexNet | 16.5498 | 18.9426 | 10.6775 | 10.8139 |
| ZeroDCE | 11.8049 | 10.8743 | 11.8711 | 8.1916 |
| ZeroDCE++ | 11.8469 | 10.8599 | 11.9123 | 8.1105 |
| PIE | 12.5824 | 10.3639 | 12.8463 | 8.3032 |
| CoLIE | 12.7824 | 11.9858 | 12.2817 | 10.2174 |
| DI-Retinex | 13.1412 | 13.1136 | 11.9158 | 12.4605 |
| LYT-Net | 14.9682 | 17.9638 | 17.1333 | 14.1563 |
| DADNet | 11.9995 | 11.6010 | 11.9121 | 8.3768 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Wang, L.; Tang, M.; Li, H.; Yang, F.; Yuan, M. DADNet: Dual-Branch Low-Light Image Enhancement Network Based on Attention Mechanism and Dark Channel Prior. Symmetry 2026, 18, 564. https://doi.org/10.3390/sym18040564