LapECNet: Laplacian Pyramid Networks for Image Exposure Correction
Abstract
1. Introduction
- We present LapECNet, which integrates Laplacian pyramid decomposition into deep-learning-based exposure correction, providing explicit frequency separation so that illumination distribution and texture details can be optimized independently (a simplified decomposition sketch follows this list).
- We design a Feature Enhancement Module (FEM) that employs dual attention mechanisms to adaptively calibrate multi-frequency features, and a Dynamic Aggregation Module (DAM) that replaces fixed fusion rules with learnable adaptive weights (see the second sketch after this list).
- Our method achieves state-of-the-art results on multiple public datasets while maintaining good visual quality and computational efficiency.
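For readers unfamiliar with the decomposition, the sketch below illustrates the general idea behind a Laplacian pyramid: an image is split into a few high-frequency detail bands plus a low-frequency residual, and the image can be exactly reconstructed from them. This is a minimal illustration only, not the authors' implementation; the 5×5 Gaussian kernel, bilinear resampling, and three levels are assumptions.

```python
# Minimal Laplacian pyramid sketch (illustration only, not the paper's code).
import torch
import torch.nn.functional as F


def gaussian_kernel(channels: int, sigma: float = 1.0) -> torch.Tensor:
    """5x5 Gaussian kernel replicated per channel for a depthwise convolution."""
    coords = torch.arange(5, dtype=torch.float32) - 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    k2d = torch.outer(g, g)
    k2d = k2d / k2d.sum()
    return k2d.expand(channels, 1, 5, 5).contiguous()


def blur(x: torch.Tensor) -> torch.Tensor:
    """Depthwise Gaussian blur; reflection padding keeps the spatial size."""
    k = gaussian_kernel(x.shape[1]).to(x.device, x.dtype)
    x = F.pad(x, (2, 2, 2, 2), mode="reflect")
    return F.conv2d(x, k, groups=x.shape[1])


def build_laplacian_pyramid(img: torch.Tensor, levels: int = 3):
    """Decompose an image into `levels` detail bands plus a low-frequency residual."""
    pyramid, current = [], img
    for _ in range(levels):
        down = F.interpolate(blur(current), scale_factor=0.5,
                             mode="bilinear", align_corners=False)
        up = F.interpolate(down, size=current.shape[-2:],
                           mode="bilinear", align_corners=False)
        pyramid.append(current - up)   # band-pass detail (texture) layer
        current = down
    pyramid.append(current)            # low-frequency residual (illumination)
    return pyramid


def reconstruct_from_pyramid(pyramid):
    """Invert the decomposition: upsample the residual and add the details back."""
    img = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        img = F.interpolate(img, size=detail.shape[-2:],
                            mode="bilinear", align_corners=False)
        img = img + detail
    return img


if __name__ == "__main__":
    x = torch.rand(1, 3, 256, 256)
    pyr = build_laplacian_pyramid(x, levels=3)
    rec = reconstruct_from_pyramid(pyr)
    print([p.shape for p in pyr], float((x - rec).abs().max()))
```

Because global illumination ends up mainly in the low-frequency residual while edges and textures live in the detail bands, correcting each band separately is what enables the independent optimization described in the first contribution.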
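The second sketch gives one plausible reading of the two modules named above: a feature block that applies channel attention followed by spatial attention ("dual" attention), and a fusion block whose per-branch weights are learned rather than fixed. Channel counts, layer choices, and the softmax normalization of the fusion weights are assumptions for illustration, not the paper's exact FEM/DAM design.

```python
# Hedged sketch of dual-attention feature enhancement and learnable-weight fusion.
import torch
import torch.nn as nn


class DualAttentionFEM(nn.Module):
    """Recalibrate features with channel attention followed by spatial attention."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_att(x)                        # per-channel weights
        avg_map = x.mean(dim=1, keepdim=True)              # spatial statistics
        max_map, _ = x.max(dim=1, keepdim=True)
        x = x * self.spatial_att(torch.cat([avg_map, max_map], dim=1))
        return x


class LearnableFusionDAM(nn.Module):
    """Fuse same-sized feature maps with learnable, softmax-normalized weights."""

    def __init__(self, num_branches: int):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_branches))

    def forward(self, feats):                              # list of (B, C, H, W) tensors
        w = torch.softmax(self.weights, dim=0)
        return sum(w[i] * f for i, f in enumerate(feats))


if __name__ == "__main__":
    fem = DualAttentionFEM(channels=32)
    dam = LearnableFusionDAM(num_branches=3)
    feats = [fem(torch.rand(1, 32, 64, 64)) for _ in range(3)]
    print(dam(feats).shape)  # torch.Size([1, 32, 64, 64])
```

The point of the learnable weights is simply that the contribution of each frequency band is tuned by training rather than set by a hand-picked rule such as plain addition or concatenation.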
2. Related Work
2.1. Traditional Methods
2.2. Deep-Learning-Based Methods
3. Methods
3.1. Laplacian Pyramid
3.2. Feature Enhancement Module (FEM)
3.3. Dynamic Aggregation Module (DAM)
3.4. Loss Function
4. Experiments
4.1. Datasets
4.2. Implementation Details
4.3. Comparison with Other Methods
4.3.1. Quantitative Comparison
4.3.2. Qualitative Comparison
4.4. Ablation Studies
4.4.1. Impact of Laplacian Pyramid Levels
4.4.2. The Effectiveness of FEM and DAM
4.4.3. Impact of Loss Function Weights
4.5. User Study
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Yang, W.; Yuan, Y.; Ren, W.; Liu, J.; Scheirer, W.J.; Wang, Z.; Zhang, T.; Zhong, Q.; Xie, D.; Pu, S.; et al. Advancing Image Understanding in Poor Visibility Environments: A Collective Benchmark Study. IEEE Trans. Image Process. 2020, 29, 5737–5752.
- Strudel, R.; Garcia, R.; Laptev, I.; Schmid, C. Segmenter: Transformer for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 7262–7272.
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex decomposition for low-light enhancement. In Proceedings of the British Machine Vision Conference, Newcastle, UK, 3–6 September 2018; pp. 1–12.
- Yang, W.; Wang, W.; Huang, H.; Wang, S.; Liu, J. Sparse gradient regularized deep Retinex network for robust low-light image enhancement. IEEE Trans. Image Process. 2021, 30, 2072–2086.
- Wu, W.; Weng, J.; Zhang, P.; Wang, X.; Yang, W.; Jiang, J. URetinex-net: Retinex-based deep unfolding network for low-light image enhancement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5901–5910.
- Zhao, Z.; Xiong, B.; Wang, L.; Ou, Q.; Yu, L.; Kuang, F. RetinexDIP: A unified deep framework for low-light image enhancement. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 1076–1088.
- Liu, R.; Ma, L.; Zhang, J.; Fan, X.; Luo, Z. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10556–10565.
- Huang, J.; Liu, Y.; Fu, X.; Zhou, M.; Wang, Y.; Zhao, F.; Xiong, Z. Exposure normalization and compensation for multiple-exposure correction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 6043–6052.
- Huang, J.; Liu, Y.; Zhao, F.; Yan, K.; Zhang, J.; Huang, Y.; Zhou, M.; Xiong, Z. Deep Fourier-based exposure correction network with spatial-frequency interaction. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 163–180.
- Li, Z.; Shao, Y.; Zhang, F.; Zhang, J.; Wang, Y.; Sang, N. Difficulty-aware dynamic network for lightweight exposure correction. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 5033–5048.
- Li, Z.; Zhang, J.; Wang, Y. Half aggregation transformer for exposure correction. In Proceedings of the Chinese Conference on Pattern Recognition and Computer Vision, Xiamen, China, 13–15 October 2023; pp. 469–481.
- Li, Z.; Wang, Y.; Zhang, J. Low-light image enhancement with knowledge distillation. Neurocomputing 2023, 518, 332–343.
- Cheng, H.D.; Shi, X. A simple and effective histogram equalization approach to image enhancement. Digit. Signal Process. 2004, 14, 158–170.
- Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vision Graph. Image Process. 1987, 39, 355–368.
- Pizer, S.; Johnston, R.; Ericksen, J.; Yankaskas, B.; Muller, K. Contrast-limited adaptive histogram equalization: Speed and effectiveness. In Proceedings of the International Conference on Visualization in Biomedical Computing, Atlanta, GA, USA, 22–25 May 1990; pp. 337–345.
- Bennett, E.P.; McMillan, L. Video enhancement using per-pixel virtual exposures. ACM Trans. Graph. 2005, 24, 845–852.
- Yuan, L.; Sun, J. Automatic exposure correction of consumer photographs. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; pp. 771–785.
- Jobson, D.; Rahman, Z.; Woodell, G. Properties and performance of a center/surround Retinex. IEEE Trans. Image Process. 1997, 6, 451–462.
- Jobson, D.; Rahman, Z.; Woodell, G. A multiscale Retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
- Fu, X.; Zeng, D.; Huang, Y.; Zhang, X.P.; Ding, X. A weighted variational model for simultaneous reflectance and illumination estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2782–2790.
- Zhang, Q.; Yuan, G.; Xiao, C.; Zhu, L.; Zheng, W.S. High-quality exposure correction of underexposed photos. In Proceedings of the ACM International Conference on Multimedia, Seoul, Republic of Korea, 22–26 October 2018; pp. 582–590.
- Zhang, Q.; Nie, Y.; Zheng, W.S. Dual illumination estimation for robust exposure correction. Comput. Graph. Forum 2019, 38, 243–252.
- Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2017, 26, 982–993.
- Guo, X. LIME: A method for low-light image enhancement. In Proceedings of the ACM International Conference on Multimedia, Amsterdam, The Netherlands, 15–19 October 2016; pp. 87–91.
- Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-revealing low-light image enhancement via robust Retinex model. IEEE Trans. Image Process. 2018, 27, 2828–2841.
- Qu, Z.; Xing, Y.; Song, Y. An image enhancement method based on non-subsampled shearlet transform and directional information measurement. Information 2018, 9, 308.
- Yang, X.; Nie, L.; Zhang, Y.; Zhang, L. Image Generation and Super-Resolution Reconstruction of Synthetic Aperture Radar Images Based on an Improved Single-Image Generative Adversarial Network. Information 2025, 16, 370.
- Yang, Z.; Liu, F.; Li, J. Efcanet: Exposure fusion cross-attention network for low-light image enhancement. Appl. Sci. 2022, 13, 380.
- Li, G.; Li, G.; Han, G. Enhancement of low contrast images based on effective space combined with pixel learning. Information 2017, 8, 135.
- Li, Y.; Yang, M.; Bian, T.; Wu, H. MRI Super-Resolution Analysis via MRISR: Deep Learning for Low-Field Imaging. Information 2024, 15, 655.
- Liu, X.; Tong, Z.; Wang, H.; Li, P. ZRRD-MBNet: Zero-Reference Retinex Decomposition-Based Multi-Branch Network for Low-Light Image Enhancement. Res. Sq. 2024.
- Wang, C.; Gao, G.; Wang, J.; Lv, Y.; Li, Q.; Li, Z.; Wu, H. GCT-VAE-GAN: An Image Enhancement Network for Low-Light Cattle Farm Scenes by Integrating Fusion Gate Transformation Mechanism and Variational Autoencoder GAN. IEEE Access 2023, 11, 126650–126660.
- Liu, Z.; Huang, Y.; Zhang, R.; Lu, H.; Wang, W.; Zhang, Z. Low-Light Image Enhancement via Multistage Laplacian Feature Fusion. J. Electron. Imaging 2024, 33, 023020.
- Xue, W.; Wang, Y.; Qin, Z. Multiscale Feature Attention Module Based Pyramid Network for Medical Digital Radiography Image Enhancement. IEEE Access 2024, 12, 53686–53697.
- Lai, W.; Huang, J.; Ahuja, N.; Yang, M. Fast and Accurate Image Super-Resolution with Deep Laplacian Pyramid Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 2599–2613.
- Zhou, J.; Zhang, D.; Zou, P.; Zhang, W.; Zhang, W. Retinex-Based Laplacian Pyramid Method for Image Defogging. IEEE Access 2019, 7, 122459–122472.
- Mok, T.C.W.; Chung, A.C.S. Large Deformation Diffeomorphic Image Registration with Laplacian Pyramid Networks. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2020, Lima, Peru, 4–8 October 2020; pp. 211–221.
- Hu, L.; Chen, H.; Allebach, J. Joint Multi-Scale Tone Mapping and Denoising for HDR Image Enhancement. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 729–738.
- Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1777–1786.
- Li, C.; Guo, C.; Loy, C.C. Learning to enhance low-light image via zero-reference deep curve estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 4225–4238.
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349.
- Yang, W.; Wang, S.; Fang, Y.; Wang, Y.; Liu, J. From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 3060–3069.
- Yang, W.; Wang, S.; Fang, Y.; Wang, Y.; Liu, J. Band representation-based semi-supervised low-light image enhancement: Bridging the gap between signal fidelity and perceptual quality. IEEE Trans. Image Process. 2021, 30, 3461–3473.
- Liang, Z.; Li, C.; Zhou, S.; Feng, R.; Loy, C.C. Iterative prompt learning for unsupervised backlit image enhancement. In Proceedings of the IEEE International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 8094–8103.
- Li, Z.; Zhang, F.; Cao, M.; Zhang, J.; Shao, Y.; Wang, Y.; Sang, N. Real-time exposure correction via collaborative transformations and adaptive sampling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 2984–2994.
- Baek, J.H.; Kim, D.; Choi, S.M.; Lee, H.J.; Kim, H.; Koh, Y.J. Luminance-aware Color Transform for Multiple Exposure Correction. In Proceedings of the IEEE International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 6133–6142.
- Wang, Y.; Peng, L.; Li, L.; Cao, Y.; Zha, Z.J. Decoupling-and-aggregating for image exposure correction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 18115–18124.
- Li, Y.; Xu, K.; Hancke, G.P.; Lau, R.W. Color shift estimation-and-correction for image enhancement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 25389–25398.
- Afifi, M.; Derpanis, K.G.; Ommer, B.; Brown, M.S. Learning multi-scale photo exposure correction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 9157–9167.
- Wang, H.; Xu, K.; Lau, R.W. Local color distributions prior for image enhancement. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 343–359.
- Ignatov, A.; Kobyshev, N.; Timofte, R.; Vanhoey, K.; Van Gool, L. Dslr-quality photos on mobile devices with deep convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 3277–3285.
- Chen, C.; Chen, Q.; Xu, J.; Koltun, V. Learning to see in the dark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3291–3300.
- Ma, L.; Ma, T.; Liu, R.; Fan, X.; Luo, Z. Toward fast, flexible, and robust low-light image enhancement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5637–5646.
- Fu, Z.; Yang, Y.; Tu, X.; Huang, Y.; Ding, X.; Ma, K.K. Learning a simple low-light image enhancer from paired low-light instances. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 22252–22261.
Methods | Under PSNR ↑ | Under SSIM ↑ | Over PSNR ↑ | Over SSIM ↑ | Overall PSNR ↑ | Overall SSIM ↑
---|---|---|---|---|---|---
HE [13] | 16.52 | 0.6918 | 16.53 | 0.6991 | 16.53 | 0.6959 |
CLAHE [15] | 16.77 | 0.6211 | 14.45 | 0.5842 | 15.38 | 0.5990 |
LIME [23] | 13.98 | 0.6630 | 9.88 | 0.5700 | 11.52 | 0.6070 |
WVM [20] | 18.67 | 0.7280 | 12.75 | 0.6450 | 15.12 | 0.6780 |
RetinexNet [3] | 12.13 | 0.6209 | 10.47 | 0.5953 | 11.14 | 0.6048 |
URetinexNet [5] | 13.85 | 0.7371 | 9.81 | 0.6733 | 11.42 | 0.6988 |
DPED [51] | 20.06 | 0.6826 | 13.14 | 0.5812 | 15.91 | 0.6219 |
DRBN [42] | 19.74 | 0.8290 | 19.37 | 0.8321 | 19.52 | 0.8309 |
SID [52] | 19.37 | 0.8103 | 18.83 | 0.8055 | 19.04 | 0.8074 |
MSEC [49] | 20.52 | 0.8129 | 19.79 | 0.8156 | 20.08 | 0.8145 |
ZeroDCE [39] | 14.55 | 0.5887 | 10.40 | 0.5142 | 12.06 | 0.5441 |
Zero-DCE++ [40] | 13.82 | 0.5887 | 9.74 | 0.5142 | 11.37 | 0.5583 |
RUAS [7] | 13.43 | 0.6807 | 6.39 | 0.4655 | 9.20 | 0.5515 |
SCI [53] | 9.97 | 0.6681 | 5.84 | 0.5190 | 7.49 | 0.5786 |
PairLIE [54] | 11.78 | 0.6596 | 8.37 | 0.5887 | 9.73 | 0.6171 |
ENC-DRBN [8] | 22.72 | 0.8544 | 22.11 | 0.8521 | 22.35 | 0.8530 |
CLIP-LIT [44] | 17.79 | 0.7611 | 12.02 | 0.6894 | 14.32 | 0.7181 |
FECNet [9] | 22.96 | 0.8598 | 23.22 | 0.8748 | 23.12 | 0.8688 |
LCDPNet [50] | 22.35 | 0.8650 | 22.17 | 0.8476 | 22.30 | 0.8552 |
Ours | 23.10 | 0.8652 | 23.25 | 0.8729 | 23.18 | 0.8691 |
Methods | PSNR ↑ | SSIM ↑ | Param (M) ↓ | FLOPs (G) ↓ | Time (s) ↓ |
---|---|---|---|---|---|
HE [13] | 15.98 | 0.6840 | / | / | - |
CLAHE [15] | 16.33 | 0.6420 | / | / | - |
LIME [23] | 17.34 | 0.6860 | / | / | - |
WVM [20] | 18.16 | 0.7390 | / | / | - |
RetinexNet [3] | 16.20 | 0.6304 | 0.84 | 141.52 | 0.1529 |
URetinexNet [5] | 17.67 | 0.7369 | 1.32 | 228.34 | 0.1877 |
DPED [51] | 18.56 | 0.7120 | 0.39 | 144.25 | - |
DRBN [42] | 15.47 | 0.6979 | 0.58 | 37.55 | 0.1226 |
SID [52] | 21.89 | 0.8082 | 7.40 | 48.36 | 0.0387 |
MSEC [49] | 17.07 | 0.6428 | 7.04 | 18.45 | 0.0468 |
ZeroDCE [39] | 18.96 | 0.7743 | 0.079 | 20.60 | 0.0229 |
Zero-DCE++ [40] | 18.42 | 0.7669 | 0.01 | 0.027 | 0.0024 |
RUAS [7] | 13.93 | 0.6340 | 0.003 | 0.97 | 0.0281 |
SCI [53] | 15.96 | 0.6646 | 0.0003 | 0.55 | 0.0021 |
PairLIE [54] | 16.51 | 0.6667 | 0.34 | 89.59 | 0.0716 |
ENC-DRBN [8] | 23.08 | 0.8302 | 0.58 | 51.89 | 0.1869 |
CLIP-LIT [44] | 19.24 | 0.7477 | 0.28 | 72.99 | 0.0877 |
FECNet [9] | 22.34 | 0.8038 | 0.15 | 23.28 | 0.1261 |
LCDPNet [50] | 23.24 | 0.8420 | 0.96 | 8.18 | 0.0472 |
Ours | 23.65 | 0.8524 | 0.17 | 7.26 | 0.0536 |
Decomposition Layers | PSNR (dB) ↑ | SSIM ↑ |
---|---|---|
1 | 22.84 | 0.8371 |
2 | 23.12 | 0.8420 |
3 | 23.65 | 0.8524 |
4 | 23.67 | 0.8520 |
Settings | Feature Extraction (Res / FEM) | Feature Fusion (ADD / CAT / DAM) | PSNR (dB) ↑ | SSIM ↑
---|---|---|---|---
1 | ✔ | ✔ | 22.49 | 0.8313
2 | ✔ | ✔ | 22.65 | 0.8359
3 | ✔ | ✔ | 23.10 | 0.8475
4 | ✔ | ✔ | 23.06 | 0.8452
5 (Ours) | ✔ | ✔ | 23.65 | 0.8524
Loss Weight 1 | Loss Weight 2 | Loss Weight 3 | PSNR (dB) ↑ | SSIM ↑
---|---|---|---|---
10 | 1 | 0.1 | 23.65 | 0.8524 |
10 | 1 | 0.5 | 23.58 | 0.8542 |
10 | 1 | 1 | 23.45 | 0.8556 |
10 | 0.1 | 0.1 | 23.42 | 0.8491 |
10 | 5 | 0.1 | 23.51 | 0.8508 |
10 | 10 | 0.1 | 23.38 | 0.8485 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).