HVI-Based Spatial–Frequency-Domain Multi-Scale Fusion for Low-Light Image Enhancement
Abstract
1. Introduction
- We propose a mechanism that deeply couples spatial features with Fourier-domain features of the intensity (I) channel in the HVI color space; fusing the I-channel's illumination characteristics with frequency-domain information improves joint detail and illumination restoration (a minimal sketch of this coupling follows this list).
- We develop a dual-path spatial–frequency fusion model that processes features in parallel through a multi-scale spatial module and a Fourier–Intensity coupling module, then adaptively fuses the two paths to drive an encoder–decoder network for joint local–global feature optimization (see the fusion sketch after this list).
- Extensive experiments show that the proposed method achieves state-of-the-art performance on multiple public benchmarks, validating its effectiveness for low-light image enhancement.
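To make the first contribution concrete, the following is a minimal PyTorch sketch of Fourier-domain processing on the HVI intensity channel. It is an illustration, not the paper's exact FourBlock: the intensity definition I = max(R, G, B) follows the HVI color space of Yan et al., while the module name `FourierIntensitySketch`, the 1×1 convolution widths, and the amplitude/phase split are assumptions for exposition.

```python
import torch
import torch.nn as nn

class FourierIntensitySketch(nn.Module):
    """Hedged sketch of Fourier processing on the HVI I-channel.

    Assumption: intensity is I = max(R, G, B), as in the HVI color
    space of Yan et al.; conv widths here are illustrative only.
    """

    def __init__(self, channels: int = 16):
        super().__init__()
        # 1x1 convs refine amplitude (illumination/energy) and phase
        # (structure) separately in the frequency domain.
        self.amp_conv = nn.Sequential(
            nn.Conv2d(1, channels, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1),
        )
        self.pha_conv = nn.Sequential(
            nn.Conv2d(1, channels, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W) in [0, 1]; take the HVI intensity channel.
        i = rgb.max(dim=1, keepdim=True).values            # (B, 1, H, W)
        freq = torch.fft.rfft2(i, norm="backward")         # complex spectrum
        amp, pha = torch.abs(freq), torch.angle(freq)
        amp = torch.relu(self.amp_conv(amp))               # keep magnitudes >= 0
        pha = self.pha_conv(pha)                           # refine structure
        refined = torch.polar(amp, pha)                    # back to complex form
        return torch.fft.irfft2(refined, s=i.shape[-2:])   # (B, 1, H, W)

# Example: run a random low-light batch through the sketch.
x = torch.rand(2, 3, 64, 64) * 0.2
out = FourierIntensitySketch()(x)
print(out.shape)  # torch.Size([2, 1, 64, 64])
```

Because the FFT gives every output coefficient support over the whole image, the amplitude branch captures global illumination while the phase branch preserves structural layout, which is what motivates coupling it with a local spatial branch.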
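The dual-path adaptive fusion of the second contribution can likewise be sketched as a learned gate that mixes the spatial-branch and frequency-branch features before they enter the encoder–decoder. The gating form (a sigmoid over the concatenated branches) and the channel width are assumptions rather than the paper's specification.

```python
import torch
import torch.nn as nn

class AdaptiveFusionSketch(nn.Module):
    """Hedged sketch of adaptive spatial-frequency feature fusion.

    A 1x1 conv over the concatenated branches predicts a per-pixel,
    per-channel gate g; the output is g * spatial + (1 - g) * frequency.
    The gating form is an assumption, not the paper's exact module.
    """

    def __init__(self, channels: int = 32):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, spa: torch.Tensor, freq: torch.Tensor) -> torch.Tensor:
        # Gate decides, per location and channel, how much to trust
        # the local spatial path versus the global frequency path.
        g = self.gate(torch.cat([spa, freq], dim=1))
        return g * spa + (1.0 - g) * freq

# Example: fuse two 32-channel feature maps of matching size.
spa = torch.randn(2, 32, 64, 64)
freq = torch.randn(2, 32, 64, 64)
fused = AdaptiveFusionSketch()(spa, freq)
print(fused.shape)  # torch.Size([2, 32, 64, 64])
```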
2. Related Work
2.1. Traditional Low-Light Image Enhancement Methods
2.2. Deep-Learning-Based Low-Light Image Enhancement Methods
3. Methods
3.1. Overall Architecture
3.2. HVI Space Information
3.3. SpaBlock
3.4. FourBlock
3.5. Loss Function
4. Experiments
4.1. Datasets and Metrics
4.2. Experiment Settings
4.3. Comparison with State of the Arts
4.4. Ablation Study
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Sun, T.; Segu, M.; Postels, J.; Wang, Y.; Van Gool, L.; Schiele, B.; Tombari, F.; Yu, F. SHIFT: A synthetic driving dataset for continuous multi-task domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 21371–21382. [Google Scholar] [CrossRef]
- Liu, M.; Yurtsever, E.; Fossaert, J.; Zhou, X.; Zimmer, W.; Cui, Y.; Zagar, B.L.; Knoll, A.C. A Survey on Autonomous Driving Datasets: Statistics, Annotation Quality, and a Future Outlook. IEEE Trans. Intell. Veh. 2024, 9, 7138–7164. [Google Scholar] [CrossRef]
- Hong, S.; Marinescu, R.; Dalca, A.V.; Bonkhoff, A.K.; Bretzner, M.; Rost, N.S.; Golland, P. 3D-StyleGAN: A style-based generative adversarial network for generative modeling of three-dimensional medical images. In Proceedings of the MICCAI Workshop on Deep Generative Models, Strasbourg, France, 1 October 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 24–34. [Google Scholar] [CrossRef]
- Cap, Q.H.; Fukuda, A.; Iyatomi, H. A practical framework for unsupervised structure preservation medical image enhancement. Biomed. Signal Process. Control 2025, 100, 106918. [Google Scholar] [CrossRef]
- Rabbi, J.; Ray, N.; Schubert, M.; Chowdhury, S.; Chao, D. Small-object detection in remote sensing images with end-to-end edge-enhanced GAN and object detector network. Remote Sens. 2020, 12, 1432. [Google Scholar] [CrossRef]
- Li, J.; Li, J.; Hou, X.; Wang, H. Exploring Distortion Prior With Latent Diffusion Models for Remote Sensing Image Compression. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5623713. [Google Scholar] [CrossRef]
- Kim, H.U.; Koh, Y.J.; Kim, C.S. Global and local enhancement networks for paired and unpaired image enhancement. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 339–354. [Google Scholar] [CrossRef]
- Jung, H.; Jang, H.; Ha, N.; Sohn, K. Deep low-contrast image enhancement using structure tensor representation. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 2–9 February 2021; Volume 35, pp. 1725–1733. [Google Scholar] [CrossRef]
- Liu, R.; Ma, L.; Zhang, J.; Fan, X.; Luo, Z. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10561–10570. [Google Scholar] [CrossRef]
- Jiang, K.; Wang, Z.; Wang, Z.; Chen, C.; Yi, P.; Lu, T.; Lin, C.W. Degrade is upgrade: Learning degradation for low-light image enhancement. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 22 February–1 March 2022; Volume 36, pp. 1078–1086. [Google Scholar] [CrossRef]
- Jin, Y.; Yang, W.; Tan, R.T. Unsupervised night image enhancement: When layer decomposition meets light-effects suppression. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 404–421. [Google Scholar] [CrossRef]
- Wang, C.; Wu, H.; Jin, Z. FourLLIE: Boosting low-light image enhancement by Fourier frequency information. In Proceedings of the 31st ACM International Conference on Multimedia, Ottawa, ON, Canada, 29 October–3 November 2023; pp. 7459–7469. [Google Scholar] [CrossRef]
- Zhang, T.; Liu, P.; Zhao, M.; Lv, H. DMFourLLIE: Dual-stage and multi-branch Fourier network for low-light image enhancement. In Proceedings of the 32nd ACM International Conference on Multimedia, Melbourne, VIC, Australia, 28 October–1 November 2024; pp. 7434–7443. [Google Scholar] [CrossRef]
- Jiang, H.; Luo, A.; Liu, X.; Han, S.; Liu, S. Lightendiffusion: Unsupervised low-light image enhancement with latent-retinex diffusion models. In Proceedings of the European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 161–179. [Google Scholar] [CrossRef]
- Yan, Q.; Feng, Y.; Zhang, C.; Pang, G.; Shi, K.; Wu, P.; Dong, W.; Sun, J.; Zhang, Y. HVI: A new color space for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 11–15 June 2025; pp. 5678–5687. [Google Scholar] [CrossRef]
- Li, Z.; Jia, Z.; Yang, J.; Kasabov, N. Low illumination video image enhancement. IEEE Photonics J. 2020, 12, 3900613. [Google Scholar] [CrossRef]
- Zhang, Y.; Di, X.; Zhang, B.; Ji, R.; Wang, C. Better than reference in low-light image enhancement: Conditional re-enhancement network. IEEE Trans. Image Process. 2021, 31, 759–772. [Google Scholar] [CrossRef]
- Guo, X.; Hu, Q. Low-light image enhancement via breaking down the darkness. Int. J. Comput. Vis. 2023, 131, 48–66. [Google Scholar] [CrossRef]
- Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368. [Google Scholar] [CrossRef]
- Pisano, E.D.; Zong, S.; Hemminger, B.M.; DeLuca, M.; Johnston, R.E.; Muller, K.; Braeuning, M.P.; Pizer, S.M. Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms. J. Digit. Imaging 1998, 11, 193–200. [Google Scholar] [CrossRef]
- Arici, T.; Dikbas, S.; Altunbasak, Y. A histogram modification framework and its application for image contrast enhancement. IEEE Trans. Image Process. 2009, 18, 1921–1935. [Google Scholar] [CrossRef]
- Lee, C.; Lee, C.; Kim, C.S. Contrast enhancement based on layered difference representation of 2D histograms. IEEE Trans. Image Process. 2013, 22, 5372–5384. [Google Scholar] [CrossRef]
- Wang, X.; Chen, L. An effective histogram modification scheme for image contrast enhancement. Signal Process. Image Commun. 2017, 58, 187–198. [Google Scholar] [CrossRef]
- Jobson, D.J.; Rahman, Z.U.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976. [Google Scholar] [CrossRef]
- Rahman, Z.U.; Jobson, D.J.; Woodell, G.A. Retinex processing for automatic image enhancement. J. Electron. Imaging 2004, 13, 100–110. [Google Scholar] [CrossRef]
- Fu, X.; Zeng, D.; Huang, Y.; Zhang, X.P.; Ding, X. A weighted variational model for simultaneous reflectance and illumination estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2782–2790. [Google Scholar] [CrossRef]
- Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-revealing low-light image enhancement via robust retinex model. IEEE Trans. Image Process. 2018, 27, 2828–2841. [Google Scholar] [CrossRef]
- Xu, J.; Hou, Y.; Ren, D.; Liu, L.; Zhu, F.; Yu, M.; Wang, H.; Shao, L. Star: A structure and texture aware retinex model. IEEE Trans. Image Process. 2020, 29, 5022–5037. [Google Scholar] [CrossRef] [PubMed]
- Liu, Y.; Yan, Z.; Tan, J.; Li, Y. Multi-purpose oriented single nighttime image haze removal based on unified variational retinex model. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 1643–1657. [Google Scholar] [CrossRef]
- Lv, F.; Liu, B.; Lu, F. Fast enhancement for non-uniform illumination images using light-weight CNNs. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 1450–1458. [Google Scholar] [CrossRef]
- Zhang, F.; Li, Y.; You, S.; Fu, Y. Learning temporal consistency for low light video enhancement from single images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 4967–4976. [Google Scholar] [CrossRef]
- Ma, L.; Ma, T.; Liu, R.; Fan, X.; Luo, Z. Toward fast, flexible, and robust low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5637–5646. [Google Scholar] [CrossRef]
- Fu, Y.; Hong, Y.; Chen, L.; You, S. LE-GAN: Unsupervised low-light image enhancement network using attention module and identity invariant loss. Knowl.-Based Syst. 2022, 240, 108010. [Google Scholar] [CrossRef]
- Fu, Z.; Yang, Y.; Tu, X.; Huang, Y.; Ding, X.; Ma, K.K. Learning a simple low-light image enhancer from paired low-light instances. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 22252–22261. [Google Scholar] [CrossRef]
- Wu, Y.; Pan, C.; Wang, G.; Yang, Y.; Wei, J.; Li, C.; Shen, H.T. Learning semantic-aware knowledge guidance for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 1662–1671. [Google Scholar] [CrossRef]
- Hou, J.; Zhu, Z.; Hou, J.; Liu, H.; Zeng, H.; Yuan, H. Global structure-aware diffusion process for low-light image enhancement. Adv. Neural Inf. Process. Syst. 2023, 36, 79734–79747. [Google Scholar]
- Hai, J.; Xuan, Z.; Yang, R.; Hao, Y.; Zou, F.; Lin, F.; Han, S. R2rnet: Low-light image enhancement via real-low to real-normal network. J. Vis. Commun. Image Represent. 2023, 90, 103712. [Google Scholar] [CrossRef]
- Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 1780–1789. [Google Scholar] [CrossRef]
- Zheng, C.; Shi, D.; Shi, W. Adaptive unfolding total variation network for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 4439–4448. [Google Scholar] [CrossRef]
- Yi, X.; Xu, H.; Zhang, H.; Tang, L.; Ma, J. Diff-retinex: Rethinking low-light image enhancement with a generative diffusion model. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 12302–12311. [Google Scholar] [CrossRef]
- Cai, Y.; Bian, H.; Lin, J.; Wang, H.; Timofte, R.; Zhang, Y. Retinexformer: One-stage retinex-based transformer for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 4–6 October 2023; pp. 12504–12513. [Google Scholar] [CrossRef]
- Feng, Y.; Hou, S.; Lin, H.; Zhu, Y.; Wu, P.; Dong, W.; Sun, J.; Yan, Q.; Zhang, Y. Difflight: Integrating content and detail for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 17–18 June 2024; pp. 6143–6152. [Google Scholar] [CrossRef]
- Xu, Q.; Zhang, R.; Zhang, Y.; Wang, Y.; Tian, Q. A fourier-based framework for domain generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 14383–14392. [Google Scholar] [CrossRef]
- Fuoli, D.; Van Gool, L.; Timofte, R. Fourier space losses for efficient perceptual image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 2360–2369. [Google Scholar] [CrossRef]
- Jiang, L.; Dai, B.; Wu, W.; Loy, C.C. Focal frequency loss for image reconstruction and synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 13919–13929. [Google Scholar] [CrossRef]
- Guo, X.; Fu, X.; Zhou, M.; Huang, Z.; Peng, J.; Zha, Z.J. Exploring Fourier Prior for Single Image Rain Removal. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22), Vienna, Austria, 23–29 July 2022; pp. 935–941. [Google Scholar]
- Huang, J.; Liu, Y.; Zhao, F.; Yan, K.; Zhang, J.; Huang, Y.; Zhou, M.; Xiong, Z. Deep fourier-based exposure correction network with spatial-frequency interaction. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 163–180. [Google Scholar] [CrossRef]
- Li, C.; Guo, C.L.; Zhou, M.; Liang, Z.; Zhou, S.; Feng, R.; Loy, C.C. Embedding fourier for ultra-high-definition low-light image enhancement. arXiv 2023, arXiv:2302.11831. [Google Scholar]
- Lv, X.; Zhang, S.; Wang, C.; Zheng, Y.; Zhong, B.; Li, C.; Nie, L. Fourier priors-guided diffusion for zero-shot joint low-light enhancement and deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 25378–25388. [Google Scholar] [CrossRef]
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560. [Google Scholar]
- Yang, W.; Wang, W.; Huang, H.; Wang, S.; Liu, J. Sparse gradient regularized deep retinex network for robust low-light image enhancement. IEEE Trans. Image Process. 2021, 30, 2072–2086. [Google Scholar] [CrossRef]
- Bychkovsky, V.; Paris, S.; Chan, E.; Durand, F. Learning photographic global tonal adjustment with a database of input/output image pairs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 97–104. [Google Scholar] [CrossRef]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 586–595. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Loshchilov, I.; Hutter, F. Sgdr: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983. [Google Scholar]
- Ying, Z.; Li, G.; Gao, W. A bio-inspired multi-exposure fusion framework for low-light image enhancement. arXiv 2017, arXiv:1711.00591. [Google Scholar]
- Dong, X.; Pang, Y.; Wen, J. Fast efficient algorithm for enhancement of low lighting video. In Proceedings of the ACM SIGGRAPH 2010 Posters, Los Angeles, CA, USA, 26–30 July 2010; p. 1. [Google Scholar] [CrossRef]
- Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2016, 26, 982–993. [Google Scholar] [CrossRef]
- Fu, X.; Zeng, D.; Huang, Y.; Liao, Y.; Ding, X.; Paisley, J. A fusion-based enhancing method for weakly illuminated images. Signal Process. 2016, 129, 82–96. [Google Scholar] [CrossRef]
- Wang, S.; Zheng, J.; Hu, H.M.; Li, B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 2013, 22, 3538–3548. [Google Scholar] [CrossRef] [PubMed]
- Lim, S.; Kim, W. DSLR: Deep stacked Laplacian restorer for low-light image enhancement. IEEE Trans. Multimed. 2020, 23, 4272–4284. [Google Scholar] [CrossRef]
- Zhang, Y.; Zhang, J.; Guo, X. Kindling the darkness: A practical low-light image enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1632–1640. [Google Scholar] [CrossRef]
- Wang, Z.; Cun, X.; Bao, J.; Zhou, W.; Liu, J.; Li, H. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 17683–17693. [Google Scholar] [CrossRef]
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5728–5739. [Google Scholar] [CrossRef]
- Wang, T.; Zhang, K.; Shen, T.; Luo, W.; Stenger, B.; Lu, T. Ultra-high-definition low-light image enhancement: A benchmark and transformer-based method. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 2654–2662. [Google Scholar] [CrossRef]
- Zhang, Y.; Guo, X.; Ma, J.; Liu, W.; Zhang, J. Beyond brightening low-light images. Int. J. Comput. Vis. 2021, 129, 1013–1037. [Google Scholar] [CrossRef]
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H.; Shao, L. Learning enriched features for real image restoration and enhancement. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 492–511. [Google Scholar] [CrossRef]
- Xu, X.; Wang, R.; Fu, C.W.; Jia, J. Snr-aware low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 17714–17724. [Google Scholar] [CrossRef]
- Li, C.; Guo, C.; Loy, C.C. Learning to enhance low-light image via zero-reference deep curve estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4225–4238. [Google Scholar] [CrossRef]
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. Enlightengan: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef]
Quantitative comparisons with prior methods (each comparison is split across two consecutive tables; PSNR is in dB).

| Methods | BIMEF [58] | FEA [59] | LIME [60] | MF [61] | NPE [62] | RetinexNet [50] | DSLR [63] | KinD [64] |
|---|---|---|---|---|---|---|---|---|
| PSNR (dB) ↑ | 13.88 | 16.72 | 16.76 | 16.97 | 16.97 | 16.77 | 14.98 | 17.65 |
| SSIM ↑ | 0.595 | 0.478 | 0.445 | 0.508 | 0.484 | 0.425 | 0.596 | 0.772 |
| LPIPS ↓ | 0.326 | 0.385 | 0.395 | 0.380 | 0.405 | 0.474 | 0.376 | 0.175 |

| Methods | RUAS [9] | Uformer [65] | Restormer [66] | LLFormer [67] | Diff-Retinex [40] | LightenDiffusion [14] | CIDNet [15] | HVI-FourNet (Ours) |
|---|---|---|---|---|---|---|---|---|
| PSNR (dB) ↑ | 18.23 | 18.55 | 22.37 | 23.65 | 21.98 | 20.45 | 23.50 | 24.37 |
| SSIM ↑ | 0.717 | 0.721 | 0.816 | 0.816 | 0.863 | 0.803 | 0.870 | 0.863 |
| LPIPS ↓ | 0.354 | 0.321 | 0.141 | 0.169 | - | 0.192 | 0.105 | 0.091 |
| Methods | MF [61] | NPE [62] | RetinexNet [50] | KinD [64] | KinD++ [68] | MIRNet [69] | SGM [51] |
|---|---|---|---|---|---|---|---|
| PSNR (dB) ↑ | 18.72 | 17.33 | 16.08 | 20.01 | 20.59 | 22.11 | 20.06 |
| SSIM ↑ | 0.509 | 0.464 | 0.656 | 0.841 | 0.829 | 0.794 | 0.816 |
| LPIPS ↓ | 0.240 | 0.236 | 0.236 | 0.081 | 0.088 | 0.145 | 0.073 |

| Methods | FECNet [47] | Z_DCE [38] | SNR-Aware [70] | Bread [18] | FourLLIE [12] | CIDNet [15] | HVI-FourNet (Ours) |
|---|---|---|---|---|---|---|---|
| PSNR (dB) ↑ | 20.67 | 18.55 | 21.48 | 20.83 | 22.34 | 24.11 | 24.32 |
| SSIM ↑ | 0.795 | 0.713 | 0.849 | 0.822 | 0.847 | 0.871 | 0.876 |
| LPIPS ↓ | 0.100 | 0.172 | 0.074 | 0.095 | 0.051 | 0.108 | 0.122 |
| Methods | MF [61] | NPE [62] | RetinexNet [50] | KinD [64] | KinD++ [68] | MIRNet [69] | SGM [51] |
|---|---|---|---|---|---|---|---|
| PSNR (dB) ↑ | 17.50 | 16.60 | 18.28 | 22.62 | 21.17 | 22.52 | 22.05 |
| SSIM ↑ | 0.774 | 0.778 | 0.774 | 0.904 | 0.881 | 0.900 | 0.909 |
| LPIPS ↓ | 0.106 | 0.108 | 0.147 | 0.052 | 0.068 | 0.057 | 0.484 |

| Methods | FECNet [47] | Z_DCE [38] | SNR-Aware [70] | Bread [18] | FourLLIE [12] | CIDNet [15] | HVI-FourNet (Ours) |
|---|---|---|---|---|---|---|---|
| PSNR (dB) ↑ | 22.57 | 20.54 | 24.14 | 17.63 | 24.65 | 25.71 | 25.86 |
| SSIM ↑ | 0.894 | 0.854 | 0.928 | 0.838 | 0.919 | 0.942 | 0.942 |
| LPIPS ↓ | 0.070 | 0.069 | 0.032 | 0.068 | 0.039 | 0.045 | 0.042 |
| Methods | BIMEF [58] | FEA [59] | MF [61] | NPE [62] | SRIE [26] | RetinexNet [50] | DSLR [63] |
|---|---|---|---|---|---|---|---|
| PSNR (dB) ↑ | 17.97 | 15.23 | 17.63 | 17.38 | 18.63 | 12.51 | 20.24 |
| SSIM ↑ | 0.797 | 0.716 | 0.814 | 0.793 | 0.838 | 0.671 | 0.829 |
| LPIPS ↓ | 0.140 | 0.195 | 0.120 | 0.132 | 0.105 | 0.254 | 0.153 |

| Methods | KinD [64] | Z_DCE [38] | Z_DCE++ [71] | RUAS [9] | ELGAN [72] | CIDNet [15] | HVI-FourNet (Ours) |
|---|---|---|---|---|---|---|---|
| PSNR (dB) ↑ | 16.20 | 15.93 | 14.61 | 16.00 | 17.91 | 19.78 | 24.74 |
| SSIM ↑ | 0.784 | 0.767 | 0.406 | 0.786 | 0.836 | 0.785 | 0.849 |
| LPIPS ↓ | 0.150 | 0.165 | 0.231 | 0.140 | 0.143 | 0.138 | 0.089 |
| Model Variant | PSNR (dB) ↑ | SSIM ↑ | LPIPS ↓ |
|---|---|---|---|
| HVI-FourNet w/o FourBlock | 23.54 | 0.865 | 0.129 |
| HVI-FourNet w/o SpaBlock | 23.67 | 0.859 | 0.137 |
| HVI-FourNet w/o HVI (sRGB only) | 24.00 | 0.872 | 0.118 |
| HVI-FourNet w/o Norm Loss | 8.078 | 0.002 | 1.018 |
| HVI-FourNet w/o Edge Loss | 23.92 | 0.876 | 0.118 |
| HVI-FourNet w/o Perceptual Loss | 23.70 | 0.859 | 0.174 |
| HVI-FourNet w/o SSIM Loss | 24.19 | 0.874 | 0.119 |
| HVI-FourNet (full model) | 24.32 | 0.876 | 0.122 |
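The loss-ablation rows above correspond to the terms of the composite objective in Section 3.5: a pixel-wise norm term, an edge term, a perceptual term, and an SSIM term. As a hedged sketch of such an objective, the snippet below combines an L1 pixel term, a Sobel-based edge term, and a pooled-statistics SSIM term with illustrative weights; the choice of L1 for the norm term, all weights, and the simplified SSIM are assumptions, and the perceptual term (typically VGG-feature distances, as in LPIPS-style losses) is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def _sobel_edges(x: torch.Tensor) -> torch.Tensor:
    # Per-channel Sobel gradient magnitude as a simple edge map.
    c = x.shape[1]
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=x.device).view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    ky = kx.transpose(-1, -2)
    gx = F.conv2d(x, kx, padding=1, groups=c)
    gy = F.conv2d(x, ky, padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def _ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Local statistics via 3x3 mean pooling; a lightweight stand-in
    # for Gaussian-windowed SSIM (Wang et al.).
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    var_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def composite_loss(pred, gt, w_norm=1.0, w_edge=0.1, w_ssim=0.2):
    # Weights are illustrative, not the paper's reported configuration;
    # the perceptual (VGG-feature) term is omitted here for brevity.
    l_norm = F.l1_loss(pred, gt)
    l_edge = F.l1_loss(_sobel_edges(pred), _sobel_edges(gt))
    l_ssim = 1.0 - _ssim(pred, gt)
    return w_norm * l_norm + w_edge * l_edge + w_ssim * l_ssim

# Example: evaluate the sketch loss on a random prediction/target pair.
pred, gt = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
print(composite_loss(pred, gt).item())
```

The collapse to 8.078 dB PSNR when the norm term is removed is consistent with the pixel-wise term anchoring absolute brightness, while the edge, perceptual, and SSIM terms contribute smaller, structure-oriented refinements.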