Open Access | Feature Paper | Article
Variance-Driven U-Net Weighted Training and Chroma-Scale-Based Multi-Exposure Image Fusion
School of Electronic and Electrical Engineering, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu 41566, Republic of Korea
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(22), 3629; https://doi.org/10.3390/math13223629
Submission received: 14 October 2025 / Revised: 8 November 2025 / Accepted: 10 November 2025 / Published: 12 November 2025
Abstract
Multi-exposure image fusion (MEF) aims to generate a well-exposed image by combining multiple photographs captured at different exposure levels. However, deep-learning-based approaches are often highly dependent on the quality of the training data, which can lead to inconsistent color reproduction and loss of fine details. To address this issue, this study proposes a variance-driven hybrid MEF framework based on a U-Net architecture, which adaptively balances structural and chromatic information. In the proposed method, the variance of randomly cropped patches is used as a training weight, allowing the model to emphasize structurally informative regions and thereby preserve local details during the fusion process. Furthermore, a fusion strategy based on the geometric color distance, referred to as the Chroma scale, in the LAB color space is applied to preserve the original chroma characteristics of the input images and improve color fidelity. Visual gamma compensation is also employed to maintain perceptual luminance consistency and synthesize a natural final image with balanced tone and smooth contrast transitions. Experiments conducted on 86 exposure pairs demonstrate that the proposed model achieves superior fusion quality compared with conventional and deep-learning-based methods, obtaining high JNBM (17.91) and HyperIQA (70.37) scores. Overall, the proposed variance-driven U-Net effectively mitigates dataset dependency and color distortion, providing a reliable and computationally efficient solution for robust MEF applications.
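The two core ideas in the abstract — weighting training patches by their variance, and measuring chroma as a geometric color distance in LAB space — can be sketched as follows. This is a minimal illustration of the concepts only: the helper names, the normalization of the weights, and the toy patches are assumptions, not the paper's exact formulation.

```python
import numpy as np

def patch_variance_weight(patch: np.ndarray, eps: float = 1e-6) -> float:
    """Variance of a cropped patch, used here as a relative training weight.

    Hypothetical helper: structurally informative (textured) patches get
    larger weights than flat, textureless ones.
    """
    return float(np.var(patch) + eps)

def chroma_scale(lab: np.ndarray) -> np.ndarray:
    """Geometric color distance (chroma) in LAB space: sqrt(a^2 + b^2).

    `lab` is an (..., 3) array of (L, a, b) values; the L channel is
    ignored, so chroma depends only on the color-opponent axes.
    """
    a, b = lab[..., 1], lab[..., 2]
    return np.sqrt(a ** 2 + b ** 2)

# Toy example: a flat patch vs. a textured patch.
rng = np.random.default_rng(0)
flat = np.full((32, 32), 0.5)        # no detail -> near-zero variance
textured = rng.random((32, 32))      # fine detail -> larger variance
w = np.array([patch_variance_weight(flat),
              patch_variance_weight(textured)])
w = w / w.sum()                      # normalized per-patch training weights
```

Under this sketch, the textured patch receives nearly all of the weight, which matches the stated goal of emphasizing structurally informative regions; a neutral-gray LAB pixel (a = b = 0) has zero chroma, so fusion guided by the Chroma scale leaves achromatic regions untouched.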
Share and Cite
MDPI and ACS Style
Son, C.-W.; Go, Y.-H.; Lee, S.-H.; Lee, S.-H. Variance-Driven U-Net Weighted Training and Chroma-Scale-Based Multi-Exposure Image Fusion. Mathematics 2025, 13, 3629. https://doi.org/10.3390/math13223629
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.