Unsupervised Low-Light Image Enhancement via Virtual Diffraction Information in Frequency Domain
Abstract
1. Introduction
- (1) Model-based methods aim to build an explicit model to enhance low-light images, but suboptimal lighting conditions dramatically increase model complexity. These methods therefore require complex manual parameter tuning and even the idealization of some mathematical processes, which makes dynamic adjustment challenging and optimal enhancement results even harder to achieve;
- (2) Data-driven methods typically employ convolutional kernels of limited size to extract image features, whose restricted receptive field cannot capture the global illumination information needed for adaptive enhancement. Consequently, bright areas in the original image may become over-exposed after enhancement, degrading overall visibility. Furthermore, a natural concern for data-driven methods is the need for large amounts of high-quality training data, which are costly and difficult to acquire, especially when the data must be captured under real-world illumination conditions for the same scenes;
- (3) Moreover, although deep neural networks have shown impressive performance in image enhancement and restoration, their massive number of parameters leads to large memory requirements and long inference times, making them unsuitable for resource-limited and real-time devices. To address these issues, designing deep neural networks with optimized structures and fewer parameters is crucial for practical engineering and real-time applications, where low computational cost and fast inference speed are highly desired.
- (1) Inspired by previous work [12], we propose a novel low-light image enhancement method that maps the physics of virtual diffraction in the frequency domain into a deep neural network architecture to build a more efficient enhancement algorithm (a minimal illustrative sketch of this frequency-domain operation follows this list). The proposed method balances the broad applicability of model-based methods with the performance of data-driven ones, as well as data efficiency against the large training-data requirements of purely learned approaches;
- (2) Considering the strong feature consistency of images under varying lighting conditions, we design an unsupervised learning network based on recursive gated convolution blocks to obtain global illumination information from the low-light image. The unsupervised network requires neither paired nor unpaired training data. Through this process, the network extracts higher-order, consistent illumination features, supporting the global adaptive enhancement task without large amounts of high-quality data;
- (3) The superiority of the proposed unsupervised algorithm is verified by comparative experiments against state-of-the-art unsupervised algorithms on different public low-light datasets. Furthermore, the extended experiments demonstrate that ULEFD can be accelerated at both the physical-modeling and network-structure levels while maintaining impressive enhancement performance, giving it great potential for deployment on resource-limited devices for real-time image enhancement.
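To make the virtual-diffraction idea concrete, the minimal sketch below loosely follows the frequency-domain operation described by VEViD [12], which the BAFD branch builds on: the brightness channel is biased, propagated through a low-pass spectral phase kernel via the FFT, and read out from the phase of the resulting field. The Gaussian kernel shape and the parameters `strength`, `bias`, `gain`, and `sigma` are illustrative assumptions, not the values learned or used by ULEFD.

```python
import numpy as np

def virtual_diffraction_enhance(v, strength=0.3, bias=0.15, gain=1.4, sigma=0.35):
    """VEViD-style frequency-domain brightness adjustment (illustrative sketch).

    v        : 2-D array in [0, 1], e.g., the V channel of an HSV image.
    strength : scale of the spectral phase kernel (assumed value).
    bias     : regularization added before the virtual propagation (assumed).
    gain     : phase-detection gain on the imaginary part (assumed).
    sigma    : width of the Gaussian low-pass phase kernel (assumed).
    """
    h, w = v.shape
    # Frequency grid (cycles per pixel) for the virtual propagation kernel.
    ky = np.fft.fftfreq(h)[:, None]
    kx = np.fft.fftfreq(w)[None, :]
    # Low-pass, Gaussian-shaped spectral phase kernel.
    phase_kernel = strength * np.exp(-(kx**2 + ky**2) / (2.0 * sigma**2))
    # Virtual diffraction: apply the phase kernel in the frequency domain.
    field = np.fft.ifft2(np.fft.fft2(v + bias) * np.exp(-1j * phase_kernel))
    # Coherent-detection step: read the enhanced channel out of the phase.
    v_out = np.arctan2(gain * np.imag(field), v + bias)
    # Rescale to [0, 1] for display.
    v_out -= v_out.min()
    return v_out / max(v_out.max(), 1e-8)
```

Because this adjustment is a global frequency-domain operation governed by only a handful of scalar parameters, making those parameters estimable from the input (as the BAFD branch does) keeps the trainable-parameter budget very small.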
2. Related Work
2.1. Model-Based Methods
2.2. Data-Driven Methods
3. Materials and Methods
3.1. Brightness Adjustment in Frequency-Domain Component
3.1.1. Physical Brightness Adjustment
3.1.2. Mathematical Modeling
3.1.3. Dynamic Adjustment Tuning
3.2. Global Enhancement Net
3.3. Loss Function
3.3.1. Loss for Brightness Adjustment in Frequency-Domain Component
3.3.2. Loss for Global Enhancement Component
4. Experiment and Results
4.1. Implementation Details
4.2. Quantitative Evaluation
4.3. Qualitative Evaluation
4.4. Ablation Study
4.4.1. Contribution of BAFD Component
4.4.2. Contribution of Each Loss
4.5. Pedestrian Detection in the Dark
5. Discussion
- Deep-learning-based methods have recently attracted significant attention in the image processing field. Owing to their powerful feature representation ability, data-driven methods can learn more general visual features, which helps them cope with challenges such as poor illumination conditions. Our research aims to combine a physical brightness adjustment model based on frequency-domain information with a data-driven low-light image enhancement method to improve dynamic enhancement of low-light images. Moreover, the proposed method is built on a lightweight network design, which gives it flexible generalization capability and real-time inference speed. The quantitative results in Table 2 and Table 3 show that, when training data are sufficient, data-driven methods enhance all the test sets better than the conventional methods, because the data-driven approach relies on the powerful feature extraction capability of deep networks to adjust the brightness of each pixel dynamically. Among data-driven methods, supervised learning usually yields better enhancement results because it can rely on normally exposed images to guide network learning. However, collecting image pairs in natural environments is very time-consuming, and this data dependence also limits the generalization ability of supervised models: they degrade in scenarios that differ significantly from the training data. In contrast, unsupervised learning reduces the reliance on paired data and generalizes better. Our results show that the proposed method outperforms all unsupervised learning methods on the key metrics and surpasses some of the supervised methods, with only a small gap to the state-of-the-art supervised methods. In general, this performance rests on each branch of the network. First, the physical brightness model in the frequency domain used by the BAFD component improves performance by providing interpretability for algorithm optimization, even with a limited amount of training data. Moreover, the integrated physics modeling procedure is more robust than other methods when the enhanced scenes change, and it significantly reduces the complexity of designing the network architecture, which is why the proposed method has only 70K parameters. Second, the GEN component architecture, inspired by image restoration methods and by the variable effective receptive field of the recursive gated convolution (a simplified sketch of such a block follows this paragraph), retains high-order features and detail information (such as texture) to preserve the original structure of the image and to suppress the noise introduced by the enhancement process.
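For reference, the sketch below shows a simplified HorNet-style recursive gated convolution (gnConv) [40] of the kind the GEN component is described as using. The channel-split schedule, the plain 7x7 depthwise convolution, and the absence of normalization layers are simplifying assumptions rather than the exact ULEFD block.

```python
import torch
import torch.nn as nn

class RecursiveGatedConv(nn.Module):
    """Simplified HorNet-style recursive gated convolution (gnConv) [40].

    Illustrative sketch only; `dim` should be divisible by 2 ** (order - 1).
    """

    def __init__(self, dim: int, order: int = 3):
        super().__init__()
        # Channel widths per interaction order, e.g. [16, 32, 64] for dim=64.
        self.dims = [dim // 2**i for i in range(order)][::-1]
        self.proj_in = nn.Conv2d(dim, 2 * dim, kernel_size=1)
        self.dwconv = nn.Conv2d(sum(self.dims), sum(self.dims), kernel_size=7,
                                padding=3, groups=sum(self.dims))
        self.pws = nn.ModuleList([
            nn.Conv2d(self.dims[i], self.dims[i + 1], kernel_size=1)
            for i in range(order - 1)
        ])
        self.proj_out = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split the projection into a gating branch and stacked value branches.
        gate, value = torch.split(self.proj_in(x),
                                  [self.dims[0], sum(self.dims)], dim=1)
        values = torch.split(self.dwconv(value), self.dims, dim=1)
        # Recursive gating: each order multiplies the running feature map
        # by a progressively wider, spatially mixed branch.
        out = gate * values[0]
        for pw, v in zip(self.pws, values[1:]):
            out = pw(out) * v
        return self.proj_out(out)
```

For example, `RecursiveGatedConv(64)(torch.randn(1, 64, 32, 32))` returns a tensor of the same shape; the recursive gating models high-order spatial interactions with an input-dependent effective receptive field, which is the property the text refers to.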
- Through ablation experiments, this paper analyzes the reasons for the performance improvement of the algorithm from two aspects. First, the ablation experiments demonstrate the effectiveness of the two-branch network structure: one branch introduces a channel characterizing image brightness through the frequency-domain feature model under the virtual light field assumption, which effectively achieves brightness adjustment, and a lightweight parameter-estimation network makes this adjustment dynamic. Meanwhile, the other branch relies on global image information to preserve the original structure, color contrast, and other critical information during enhancement, so that noise in the enhanced image is better suppressed. Second, the contribution of the loss functions constraining the unsupervised learning is analyzed through ablation experiments. From these results, it is easy to see that, for the brightness adjustment branch, the histogram prior loss used in this paper effectively preserves the original distribution of image information during brightness adjustment, so that brightness can be adjusted without losing the original semantic structure of the image, while the illumination smoothness loss lets the network reduce the impact of noise on the overall enhancement result while learning the adjustment. For the global enhancement branch, the network is constrained to retain high-level image features from two aspects: color gradient consistency and image gradient consistency, so that the enhanced images improve significantly in both quantitative and qualitative evaluation (Table 2, Table 3, Figure 3, and Figure 4). Meanwhile, the exposure consistency loss further enhances the perceptual quality of the results. Hedged sketches of two of these loss terms follow this paragraph.
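The sketch below shows common formulations of two of the losses mentioned above, an illumination smoothness term and an exposure consistency term, in the spirit of Zero-DCE [6]. The exact formulations, weights, and target exposure used in ULEFD may differ; the values here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def illumination_smoothness_loss(illum: torch.Tensor) -> torch.Tensor:
    """Total-variation-style smoothness on an illumination map of shape (N, C, H, W).

    A common unsupervised formulation; the loss actually used by the BAFD branch
    may be weighted or edge-aware in ways not shown here.
    """
    dh = illum[:, :, 1:, :] - illum[:, :, :-1, :]   # vertical differences
    dw = illum[:, :, :, 1:] - illum[:, :, :, :-1]   # horizontal differences
    return dh.pow(2).mean() + dw.pow(2).mean()


def exposure_consistency_loss(enhanced: torch.Tensor,
                              well_exposed: float = 0.6,
                              patch: int = 16) -> torch.Tensor:
    """Pull the average brightness of local patches toward a well-exposed level.

    The target level and the patch size are illustrative assumptions; the same
    idea is used, e.g., by Zero-DCE [6].
    """
    luminance = enhanced.mean(dim=1, keepdim=True)   # (N, 1, H, W)
    patch_mean = F.avg_pool2d(luminance, patch)      # per-patch average brightness
    return (patch_mean - well_exposed).pow(2).mean()
```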
- To analyze the potential of the proposed algorithm for real-time applications, this paper first compares the parameter counts and inference times of the various algorithms in Table 2. The number of parameters of the proposed method is lower than that of most comparison methods, and its inference speed is only slightly slower than that of Zero-DCE++ [6], which makes it sufficiently lightweight and fast for practical applications. A rough sketch of how such a parameter and runtime comparison can be measured follows this paragraph.
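As a rough guide, the snippet below shows one way to reproduce this kind of parameter-count and inference-time comparison. The input resolution, warm-up policy, and number of timed runs are assumptions; the numbers in Table 2 come from the authors' own measurement setup.

```python
import time
import torch

def profile_model(model: torch.nn.Module,
                  input_shape=(1, 3, 400, 600),
                  runs: int = 50):
    """Return (parameter count in millions, average forward time in seconds)."""
    params_m = sum(p.numel() for p in model.parameters()) / 1e6
    x = torch.randn(*input_shape)
    model.eval()
    with torch.no_grad():
        model(x)                                   # warm-up pass
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
    return params_m, (time.perf_counter() - start) / runs
```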
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
CNN | Convolutional Neural Networks |
ULEFD | Unsupervised Low-Light Image Enhancement via Virtual Diffraction in Frequency Domain |
BAFD | Brightness Adjustment in Frequency Domain |
GEN | Global Enhancement Net |
FT | Fourier Transform |
FFT | Fast Fourier Transform |
IFFT | Inverse Fast Fourier Transform |
MLP | Multi-Layer Perceptron |
MSE | Mean-Square Error |
References
- Wang, S.; Zheng, J.; Hu, H.M.; Li, B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 2013, 22, 3538–3548.
- Wang, L.; Xiao, L.; Liu, H.; Wei, Z. Variational Bayesian method for retinex. IEEE Trans. Image Process. 2014, 23, 3381–3396.
- Pisano, E.D.; Zong, S.; Hemminger, B.M.; DeLuca, M.; Johnston, R.E.; Muller, K.; Braeuning, M.P.; Pizer, S.M. Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms. J. Digit. Imaging 1998, 11, 193–200.
- Pizer, S.M.; Johnston, R.E.; Ericksen, J.P.; Yankaskas, B.C.; Muller, K.E. Contrast-limited adaptive histogram equalization: Speed and effectiveness. In Proceedings of the First Conference on Visualization in Biomedical Computing, Atlanta, GA, USA, 22–25 May 1990; Volume 337, p. 1.
- Li, C.; Guo, C.; Han, L.; Jiang, J.; Cheng, M.M.; Gu, J.; Loy, C.C. Low-light image and video enhancement using deep learning: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 9396–9416.
- Li, C.; Guo, C.; Loy, C.C. Learning to enhance low-light image via zero-reference deep curve estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4225–4238.
- Quan, Y.; Fu, D.; Chang, Y.; Wang, C. 3D Convolutional Neural Network for Low-Light Image Sequence Enhancement in SLAM. Remote Sens. 2022, 14, 3985.
- Fan, S.; Liang, W.; Ding, D.; Yu, H. LACN: A lightweight attention-guided ConvNeXt network for low-light image enhancement. Eng. Appl. Artif. Intell. 2023, 117, 105632.
- Ying, Z.; Li, G.; Ren, Y.; Wang, R.; Wang, W. A new image contrast enhancement algorithm using exposure fusion framework. In Proceedings of the Computer Analysis of Images and Patterns: 17th International Conference, CAIP 2017, Ystad, Sweden, 22–24 August 2017; Proceedings, Part II 17. Springer: Berlin/Heidelberg, Germany, 2017; pp. 36–46.
- Chen, Y.S.; Wang, Y.C.; Kao, M.H.; Chuang, Y.Y. Deep photo enhancer: Unpaired learning for image enhancement from photographs with GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6306–6314.
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349.
- Jalali, B.; MacPhee, C. VEViD: Vision Enhancement via Virtual diffraction and coherent Detection. eLight 2022, 2, 24.
- Farid, H. Blind inverse gamma correction. IEEE Trans. Image Process. 2001, 10, 1428–1433.
- Lee, Y.; Zhang, S.; Li, M.; He, X. Blind inverse gamma correction with maximized differential entropy. Signal Process. 2022, 193, 108427.
- Coltuc, D.; Bolon, P.; Chassery, J.M. Exact histogram specification. IEEE Trans. Image Process. 2006, 15, 1143–1152.
- Ibrahim, H.; Kong, N.S.P. Brightness preserving dynamic histogram equalization for image contrast enhancement. IEEE Trans. Consum. Electron. 2007, 53, 1752–1758.
- Stark, J.A. Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Trans. Image Process. 2000, 9, 889–896.
- Lee, C.; Lee, C.; Kim, C.S. Contrast enhancement based on layered difference representation of 2D histograms. IEEE Trans. Image Process. 2013, 22, 5372–5384.
- Singh, K.; Kapoor, R. Image enhancement using exposure based sub image histogram equalization. Pattern Recognit. Lett. 2014, 36, 10–14.
- Fu, X.; Zeng, D.; Huang, Y.; Zhang, X.P.; Ding, X. A weighted variational model for simultaneous reflectance and illumination estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2782–2790.
- Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2016, 26, 982–993.
- Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-revealing low-light image enhancement via robust retinex model. IEEE Trans. Image Process. 2018, 27, 2828–2841.
- Zhang, F.; Shao, Y.; Sun, Y.; Zhu, K.; Gao, C.; Sang, N. Unsupervised low-light image enhancement via histogram equalization prior. arXiv 2021, arXiv:2112.01766.
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560.
- Zhang, Y.; Zhang, J.; Guo, X. Kindling the darkness: A practical low-light image enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1632–1640.
- Zhang, Y.; Guo, X.; Ma, J.; Liu, W.; Zhang, J. Beyond brightening low-light images. Int. J. Comput. Vis. 2021, 129, 1013–1037.
- Jiang, N.; Lin, J.; Zhang, T.; Zheng, H.; Zhao, T. Low-Light Image Enhancement via Stage-Transformer-Guided Network. In IEEE Transactions on Circuits and Systems for Video Technology; IEEE: Piscataway Township, NJ, USA, 2023.
- Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662.
- Lv, F.; Lu, F.; Wu, J.; Lim, C. MBLLEN: Low-Light Image/Video Enhancement Using CNNs. In Proceedings of the BMVC, Newcastle, UK, 3–6 September 2018; Volume 220, p. 4.
- Bychkovsky, V.; Paris, S.; Chan, E.; Durand, F. Learning photographic global tonal adjustment with a database of input/output image pairs. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; IEEE: Piscataway Township, NJ, USA, 2011; pp. 97–104.
- Wang, R.; Zhang, Q.; Fu, C.W.; Shen, X.; Zheng, W.S.; Jia, J. Underexposed photo enhancement using deep illumination estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 6849–6857.
- Ren, W.; Liu, S.; Ma, L.; Xu, Q.; Xu, X.; Cao, X.; Du, J.; Yang, M.H. Low-light image enhancement via a deep hybrid network. IEEE Trans. Image Process. 2019, 28, 4364–4375.
- Wu, W.; Weng, J.; Zhang, P.; Wang, X.; Yang, W.; Jiang, J. URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 5901–5910.
- Xu, K.; Yang, X.; Yin, B.; Lau, R.W. Learning to restore low-light images via decomposition-and-enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2281–2290.
- Fu, Y.; Hong, Y.; Chen, L.; You, S. LE-GAN: Unsupervised low-light image enhancement network using attention module and identity invariant loss. Knowl.-Based Syst. 2022, 240, 108010.
- Yang, W.; Wang, S.; Fang, Y.; Wang, Y.; Liu, J. From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 3063–3072.
- Saravanan, G.; Yamuna, G.; Nandhini, S. Real time implementation of RGB to HSV/HSI/HSL and its reverse color space models. In Proceedings of the 2016 International Conference on Communication and Signal Processing (ICCSP), Melmaruvathur, India, 6–8 April 2016; IEEE: Piscataway Township, NJ, USA, 2016; pp. 0462–0466.
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 5728–5739.
- Chen, L.; Chu, X.; Zhang, X.; Sun, J. Simple baselines for image restoration. In Proceedings of the Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, 23–27 October 2022; Proceedings, Part VII. Springer: Berlin/Heidelberg, Germany, 2022; pp. 17–33.
- Rao, Y.; Zhao, W.; Tang, Y.; Zhou, J.; Lim, S.N.; Lu, J. HorNet: Efficient high-order spatial interactions with recursive gated convolutions. Adv. Neural Inf. Process. Syst. 2022, 35, 10353–10366.
- Zhang, Y.; Di, X.; Zhang, B.; Li, Q.; Yan, S.; Wang, C. Self-supervised low light image enhancement and denoising. arXiv 2021, arXiv:2103.00832.
- Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part II 14. Springer: Berlin/Heidelberg, Germany, 2016; pp. 694–711.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212.
- Liu, J.; Xu, D.; Yang, W.; Fan, M.; Huang, H. Benchmarking low-light image enhancement and beyond. Int. J. Comput. Vis. 2021, 129, 1153–1184.
- Yang, X.; Gong, J.; Wu, L.; Yang, Z.; Shi, Y.; Nie, F. Reference-free low-light image enhancement by associating hierarchical wavelet representations. Expert Syst. Appl. 2023, 213, 118920.
- Wang, H.; Xu, K.; Lau, R.W. Local Color Distributions Prior for Image Enhancement. In Proceedings of the European Conference on Computer Vision (ECCV), Tel Aviv, Israel, 23–27 October 2022.
- Cai, J.; Gu, S.; Zhang, L. Learning a deep single image contrast enhancer from multi-exposure images. IEEE Trans. Image Process. 2018, 27, 2049–2062.
- Yuan, Y.; Yang, W.; Ren, W.; Liu, J.; Scheirer, W.J.; Wang, Z. UG 2+ Track 2: A Collective Benchmark Effort for Evaluating and Advancing Image Understanding in Poor Visibility Environments. arXiv 2019, arXiv:1904.04474.
- Li, J.; Wang, Y.; Wang, C.; Tai, Y.; Qian, J.; Yang, J.; Wang, C.; Li, J.; Huang, F. DSFD: Dual shot face detector. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5060–5069.
- Yang, S.; Luo, P.; Loy, C.C.; Tang, X. WIDER FACE: A face detection benchmark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 5525–5533.
Method | Model-Based | Data-Driven
---|---|---
Advantage | Data efficient | Require limited priors
 | Physics are universal | High performance
 | Resource-friendly | Dynamic adjustment
Disadvantage | Require precise modeling | Careful data selection
 | Suboptimal performance | Efficiency depends on structure
 | No adaptive adjustment | High computational cost
Learning | Method | LOL PSNR ↑ | LOL SSIM ↑ | LOL NIQE ↓ | VE-LOL PSNR ↑ | VE-LOL SSIM ↑ | VE-LOL NIQE ↓ | Params (M) ↓ | FLOPs (G) ↓ | Test Time (s) ↓
---|---|---|---|---|---|---|---|---|---|---
Conventional | LIME (2016) [21] | 16.76 | 0.56 | 10.61 | 14.77 | 0.53 | 10.85 | - | - | 0.491 (on CPU)
 | VEViD (2022) [12] | 17.23 | 0.65 | 10.53 | 14.92 | 0.56 | 10.64 | - | - | 0.0012
Supervised | KinD++ (2021) [26] | 21.30 | 0.82 | 11.02 | 20.87 | 0.80 | 11.60 | 8.28 | 268.79 | 0.829
 | Restormer (2022) [38] | 23.17 | 0.84 | 10.14 | 22.49 | 0.82 | 10.53 | 8.19 | 231.56 | 0.821
 | LACN (2023) [8] | 23.54 | 0.84 | 10.11 | 23.09 | 0.83 | 10.19 | 7.25 | 195.63 | 0.744
Unsupervised | Zero-DCE++ (2021) [6] | 14.86 | 0.57 | 10.95 | 16.93 | 0.68 | 10.81 | 0.01 | 28.76 | 0.0012
 | Reference-free LLIE (2023) [46] | 16.85 | 0.58 | 10.74 | 19.41 | 0.69 | 10.42 | 0.08 | 91.27 | 0.011
 | EnlightenGAN (2021) [11] | 16.21 | 0.59 | 14.74 | 17.48 | 0.65 | 14.42 | 8.63 | 273.24 | 0.871
 | LE-GAN (2022) [35] | 21.38 | 0.82 | 11.32 | 21.50 | 0.82 | 10.71 | 9.92 | 294.12 | 0.907
 | Ours (trained on LOL) | 21.97 | 0.83 | 10.23 | 21.63 | 0.83 | 10.21 | 0.07 | 71.45 | 0.008
 | Ours (trained on VE-LOL) | 21.44 | 0.82 | 10.19 | 22.12 | 0.84 | 10.13 | 0.07 | 71.45 | 0.008
All values are NIQE ↓/BRISQUE ↓.

Learning | Method | DICM [18] | LIME [21] | VV 1 | LCDP [47] | SCIE [48] | Avg
---|---|---|---|---|---|---|---
Conventional | LIME (2016) [21] | 11.823/5573.418 | 10.612/5062.801 | 11.672/6375.428 | 9.456/3443.928 | 10.818/4099.466 | 10.876/4911.008
 | VEViD (2022) [12] | 11.168/4604.262 | 12.605/3697.681 | 10.679/5617.055 | 10.574/3371.317 | 11.197/4589.276 | 11.245/4375.908
Supervised | KinD++ (2021) [26] | 15.043/3836.451 | 10.911/3341.541 | 11.449/4986.575 | 9.461/3241.841 | 11.451/4634.521 | 11.663/4008.186
 | Restormer (2022) [38] | 14.012/5852.303 | 10.290/4383.280 | 11.128/5916.383 | 9.352/3018.414 | 10.787/3983.399 | 11.114/4630.756
 | LACN (2023) [8] | 9.532/2579.112 | 10.531/2611.333 | 10.597/2287.331 | 9.796/2922.254 | 10.133/2681.562 | 10.118/2616.318
Unsupervised | Zero-DCE++ (2021) [6] | 10.995/7965.129 | 10.932/2996.481 | 10.645/5885.046 | 10.217/4294.057 | 10.560/3917.639 | 10.701/5011.670
 | Reference-free LLIE (2023) [46] | 13.645/7658.416 | 14.792/6084.275 | 10.690/7173.563 | 11.622/3788.007 | 11.153/3858.341 | 12.380/5712.520
 | EnlightenGAN (2021) [11] | 15.201/4444.962 | 11.335/4248.576 | 11.298/5024.721 | 9.251/3315.532 | 10.546/2858.341 | 11.526/3978.426
 | LE-GAN (2022) [35] | 11.928/3630.062 | 10.690/4153.124 | 10.41/2940.849 | 10.364/4926.882 | 10.588/2905.512 | 10.796/3711.286
 | Ours | 10.037/3261.936 | 10.084/3148.224 | 10.504/3585.173 | 9.336/3141.579 | 10.245/2962.109 | 10.041/3219.804
Loss Functions (Relative Losses) | BAFD Component | LOL PSNR | LOL SSIM | VE-LOL PSNR | VE-LOL SSIM
---|---|---|---|---|---
✓ | 17.52 | 0.80 | 18.87 | 0.73 | |||
✓ | ✓ | ✓ | 19.05 | 0.81 | 19.42 | 0.82 | |
✓ | ✓ | ✓ | 20.39 | 0.82 | 21.55 | 0.83 | |
✓ | ✓ | ✓ | ✓ | 21.44 | 0.82 | 22.12 | 0.84 |
Loss Functions | LOL PSNR | LOL SSIM | VE-LOL PSNR | VE-LOL SSIM
---|---|---|---|---
✓ | 12.62 | 0.54 | 14.26 | 0.57 | ||||
✓ | ✓ | 17.88 | 0.68 | 18.49 | 0.70 | |||
✓ | ✓ | ✓ | 18.24 | 0.70 | 18.86 | 0.71 | ||
✓ | ✓ | ✓ | ✓ | 20.72 | 0.77 | 21.60 | 0.79 | |
✓ | ✓ | ✓ | ✓ | ✓ | 21.44 | 0.82 | 22.12 | 0.84 |
Method | IoU = 0.5 | IoU = 0.7 | IoU = 0.9
---|---|---|---
low-light image | 0.231278 | 0.007296 | 0.000002 |
LIME [21] | 0.293970 | 0.013417 | 0.000007 |
KinD++ [26] | 0.243714 | 0.008616 | 0.000003 |
Restormer [38] | 0.304128 | 0.017581 | 0.000007 |
Zero-DCE++ [6] | 0.289232 | 0.014772 | 0.000006 |
EnlightenGAN [11] | 0.276574 | 0.015545 | 0.000003 |
LE-GAN [35] | 0.294977 | 0.017107 | 0.000005 |
Ours | 0.303135 | 0.017204 | 0.000009 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).