# Fast, Zero-Reference Low-Light Image Enhancement with Camera Response Model


## Abstract


## 1. Introduction

- A fast LLIE method, the ZRCRN, is proposed. It establishes a double-layer parameter-generating network that automatically extracts the exposure ratio of a camera response model to realize enhancement. The process is simplified, and the speed can reach more than twice that of similar methods, while the ZRCRN still obtains accuracy comparable to SOTA methods.
- A contrast-preserving brightness loss is proposed to retain the brightness distribution of the original input and enhance the contrast of the final output. It converts the input image to a single-channel brightness map to avoid the serious color deviations that subsequent operations would otherwise cause, and linearly stretches the brightness map to obtain the expected target. It effectively improves brightness and contrast without requiring references from the dataset, which improves the generalization ability of the model.
- An edge-preserving smoothness loss is proposed to remove noise and enhance details. The variation trend of the input image pixel values is selectively promoted or suppressed to reach the desired goal. While maintaining the advantage of zero references, it also drives the model to achieve sharpening and noise reduction, further refining its performance.

## 2. Related Works

#### 2.1. Camera Response Model

#### 2.2. Zero-Reference LLIE Loss Function

## 3. Methods

#### 3.1. Parameter-Generating Network

#### 3.2. Loss Function

#### 3.2.1. Contrast-Preserving Brightness Loss

- To reduce the impact on color information, the three RGB channels are fused into a single channel to obtain the brightness map. The following conversion formula, which is more suitable for human perception, is used:$$\Phi \left({\mathbf{x}}_{i}\right)=0.299\cdot {r}_{i}+0.587\cdot {g}_{i}+0.114\cdot {b}_{i}.$$
- After applying $\Phi (\cdot)$ to each pixel of ${\mathbf{P}}_{\mathbf{0}}$, the resulting brightness map (${\mathbf{E}}_{\mathbf{brightness}}$) is dark, so it is linearly stretched to enhance brightness and contrast. The following equation extends the pixel values of ${\mathbf{E}}_{\mathbf{brightness}}$ linearly onto the interval $\left[a,b\right]$ to obtain the expected brightness map (${\mathbf{E}}_{\mathbf{exptar}}$):$${\mathbf{E}}_{\mathbf{exptar}}=a+{\displaystyle \frac{b-a}{\max\left({\mathbf{E}}_{\mathbf{brightness}}\right)-\min\left({\mathbf{E}}_{\mathbf{brightness}}\right)}}\left({\mathbf{E}}_{\mathbf{brightness}}-\min\left({\mathbf{E}}_{\mathbf{brightness}}\right)\right).$$
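The two steps above, luminance fusion followed by linear stretching, can be sketched in a few lines of NumPy. This is a minimal illustration: the function names are my own, and the default interval $[a,b]=[0.2,1]$ follows the values chosen in the ablation study.

```python
import numpy as np

def brightness_map(img):
    """Fuse the RGB channels into a single perceptual brightness map (Phi)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def linear_stretch(e, a=0.2, b=1.0):
    """Linearly stretch the pixel values of e onto the interval [a, b]."""
    lo, hi = e.min(), e.max()
    return a + (b - a) * (e - lo) / (hi - lo)

# Example: a dark image whose values occupy roughly [0, 0.1]
img = np.random.rand(8, 8, 3) * 0.1
target = linear_stretch(brightness_map(img))  # expected brightness map E_exptar
```

After stretching, the minimum and maximum of the target map land exactly on $a$ and $b$, which is what makes the lower limit $a$ such an influential parameter.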

#### 3.2.2. Edge-Preserving Smoothness Loss

**Edge Detection.** The edge diagram is generated mainly with the Canny algorithm [32]. First, horizontal and vertical Gaussian filters are applied to ${\mathbf{P}}_{\mathbf{0}}$ to suppress the influence of noise on edge detection; the filter size and standard deviation keep the default values of 5 and 1. Then, the gradient information is extracted: horizontal and vertical Sobel operators of size $3\times 3$ are applied to the filtered result to compute gradient maps in both directions for each of the three RGB channels, and the amplitude of the total gradient at each position is obtained by summing the gradient amplitudes over the channels. Next, the double-threshold algorithm locates the strong and weak edges. Finally, the edge diagram (${\mathbf{E}}_{\mathbf{edge}}$) is obtained by checking connectivity and removing isolated weak edges. The edge refinement operation with non-maximum suppression from the original Canny algorithm is skipped, so there is no need to compute the direction of the total gradient; the purpose is to keep the wider gradient edges of the original image and make the enhancement result more natural. The two thresholds $\alpha $ and $\beta $ of the double-threshold algorithm strongly influence the edge detection results. Their best values, determined by the experiments described in the next section, are 0.707 and 1.414, respectively.

**Gradient Extraction.** The horizontal and vertical Sobel operators are applied directly to ${\mathbf{P}}_{\mathbf{0}}$ to obtain six gradient maps (three channels, two directions). The total gradient amplitude (the sum of the gradient amplitudes over all channels) is then taken as the pixel value of the final gradient map (${\mathbf{E}}_{\mathbf{grad}}$).
The entire gradient extraction process can be expressed as the following equation:$${\mathbf{E}}_{\mathbf{grad}}=\Psi \left({\mathbf{P}}_{\mathbf{0}}\right).$$

**Selective Scaling.** The edge diagram (${\mathbf{E}}_{\mathbf{edge}}$) is used as a mask and multiplied element-wise with the gradient map (${\mathbf{E}}_{\mathbf{grad}}$), which filters out the non-edge part of ${\mathbf{E}}_{\mathbf{grad}}$. The result is then scaled by an amplification factor ($\gamma $) to obtain the expected target image (${\mathbf{E}}_{\mathbf{smoothtar}}$):$${\mathbf{E}}_{\mathbf{smoothtar}}=\gamma \,{\mathbf{E}}_{\mathbf{grad}}\odot {\mathbf{E}}_{\mathbf{edge}}.$$
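The target construction described above (per-channel Sobel gradients summed into a total amplitude, double-threshold edges with a weak-edge connectivity check, then masked scaling) can be sketched in NumPy. This is an illustration only: non-maximum suppression is skipped as in the text, the loop-based convolution favors clarity over speed, and the helper names are my own.

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal Sobel
KY = KX.T                                                         # vertical Sobel

def grad_magnitude(img):
    """Sum of per-channel Sobel gradient magnitudes (the map E_grad)."""
    h, w, c = img.shape
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    total = np.zeros((h, w))
    for ch in range(c):
        gx = np.zeros((h, w))
        gy = np.zeros((h, w))
        for i in range(3):          # apply the 3x3 kernels by shifted accumulation
            for j in range(3):
                patch = pad[i:i + h, j:j + w, ch]
                gx += KX[i, j] * patch
                gy += KY[i, j] * patch
        total += np.hypot(gx, gy)
    return total

def edge_mask(grad, alpha=0.707, beta=1.414):
    """Double-threshold edges: strong pixels plus weak pixels connected to them."""
    strong = grad >= beta
    weak = grad >= alpha
    edges = strong.copy()
    h, w = grad.shape
    while True:                     # grow edges into connected weak pixels
        padded = np.pad(edges, 1)
        neigh = np.zeros_like(edges)
        for di in (0, 1, 2):
            for dj in (0, 1, 2):
                neigh |= padded[di:di + h, dj:dj + w]
        grown = edges | (weak & neigh)
        if np.array_equal(grown, edges):
            break
        edges = grown
    return edges

def smooth_target(img, gamma=2.0):
    """E_smoothtar = gamma * (E_grad elementwise-multiplied with E_edge)."""
    grad = grad_magnitude(img)
    return gamma * grad * edge_mask(grad)
```

For a step edge between a dark and a bright region, `smooth_target` is nonzero only along the step, so the loss can push the model to keep that gradient while flattening everything else.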

A small constant of $10^{-8}$ is added to the base number to ensure that the training process remains continuous.

#### 3.2.3. Total Loss Function

## 4. Experiments and Results

#### 4.1. Implementation Details

#### 4.2. Ablation Study

#### 4.2.1. Influence of Key Parameters

**Number of Layers and Channels in PGN.** Using deeper and wider convolution structures gives the model stronger representation but leads to more computation, lower efficiency and more difficulty in finding model parameters that meet the expected goals. $m=1,2,3,4$ and $n=1,2,4$ were combined to generate parameter pairs for the quantitative evaluation. The results are shown in Table 1. The scheme with $m=1$, which obtains single-channel enhancement parameters directly from the three-channel input image, achieved an SSIM of only 0.44; the structure is too simple to extract enough features and cannot reach sufficient accuracy. The combination of $m=n=2$ achieved the highest SSIM on the test set and processed images extremely quickly. Structures with $m>2$ or $n>2$ failed to achieve higher accuracy with the current learning strategy. Although setting the additional convolution kernels to specific values should yield the same output as the $m=n=2$ combination, a deeper and wider model structure has a larger parameter space and is more likely to fall into other local minima during optimization, which leads to sub-optimal results; it also slows down enhancement. Considering accuracy and efficiency, a network structure with $m=n=2$ was chosen.
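As a shape-level illustration of the chosen $m=n=2$ structure (two 3×3 convolution layers with two intermediate channels, mapping the three-channel input to a single-channel parameter map $\mathbf{K}$), the following NumPy sketch may help. The random weights and the ReLU between layers are assumptions made for the sketch, and the inverse CRF transformation that precedes the convolutions in the paper is omitted.

```python
import numpy as np

def conv2d_same(x, w):
    """3x3 'same' convolution: x is HxWxCin, w is 3x3xCinxCout."""
    h, wd, _ = x.shape
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros((h, wd, w.shape[3]))
    for i in range(3):
        for j in range(3):
            # accumulate each kernel tap as a channel-mixing matmul
            out += np.einsum("hwc,co->hwo", pad[i:i + h, j:j + wd, :], w[i, j])
    return out

rng = np.random.default_rng(0)
w1 = rng.normal(size=(3, 3, 3, 2))  # layer 1: 3 input channels -> n=2 channels
w2 = rng.normal(size=(3, 3, 2, 1))  # layer 2: 2 channels -> single-channel K

def pgn(p0):
    """m=2 parameter-generating network sketch: P0 (HxWx3) -> K (HxWx1)."""
    hidden = np.maximum(conv2d_same(p0, w1), 0.0)  # ReLU is an assumption
    return conv2d_same(hidden, w2)

k = pgn(rng.random((16, 16, 3)))  # K holds one enhancement parameter per pixel
```

Because every layer is a plain 3×3 convolution, the whole PGN stays a fraction of the cost of the multi-layer curve-estimation networks it is compared against in Section 4.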

**Minimum of the Linear Stretching Target Interval in Contrast-Preserving Brightness Loss.** The brightness and contrast of low-light images can be improved by linear stretching. The brightness distribution of low-light images is concentrated near zero, so the choice of the lower limit of the target interval has a great influence on the enhancement effect: a larger value brightens the image remarkably. Values of $a$ = 0, 0.1, 0.2, 0.3, 0.4 and 0.5, combined with $b=1$, were used to obtain the linear stretching target interval $[a,1]$ for comparison. The results are shown in Table 2. When $a$ was too large ($a=0.5$) or too small ($a=0.0$), the SSIM between the output of the network and the target image degraded significantly. Although the SSIMs of the other four settings differed little, the PSNR peaked at $a=0.2$. We believe that $a=0.2$ builds a good reference for most low-light images, so $a=0.2$ was chosen as the minimum of the linear stretching target interval in the contrast-preserving brightness loss.

**Double Thresholds and Amplification Factor in Edge-Preserving Smoothness Loss.** The double thresholds ($\alpha $ and $\beta $) in the edge-preserving smoothness loss directly determine the final position of the detected edge: details are easily lost when the thresholds are too large, and a large amount of noise is retained when they are too small. The amplification factor ($\gamma $) changes the amplitude of the gradient at the edge, and an inappropriate value may trigger additional color variations. Considering that a great change in any color channel can indicate the presence of an edge, the candidate parameters were set based on the theoretical maximum of the single-channel gradient amplitude. The larger threshold ($\beta $) was set to 0.304, 0.707, 1.414 and 2.828, each halved to obtain the smaller threshold ($\alpha $), and combined with $\gamma $ = 1, 2, 3 for comparison. The results are shown in Table 3. Both metrics were poor at $\gamma =1$. Although the SSIM was similar, the PSNR was better at $\gamma =2$ than at $\gamma =3$ for the same $\beta $, and the PSNR peaked at $\beta =1.414$. Thus, $\alpha =0.707$, $\beta =1.414$ and $\gamma =2$ were chosen as the double thresholds and the amplification factor in the edge-preserving smoothness loss. These values should build good references for most low-light images.

#### 4.2.2. Effect of Inverse Transformation

#### 4.2.3. Effect of Loss Functions

#### 4.3. Experimental Configuration

#### 4.4. Benchmark Evaluations

#### 4.4.1. Qualitative Comparison

#### 4.4.2. Quantitative Comparison

#### 4.5. Night Face Detection

## 5. Discussion

## 6. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

- Ibrahim, H.; Kong, N.S.P. Brightness Preserving Dynamic Histogram Equalization for Image Contrast Enhancement. IEEE Trans. Consum. Electron. **2007**, 53, 1752–1758.
- Chen, S.D.; Ramli, A.R. Minimum mean brightness error bi-histogram equalization in contrast enhancement. IEEE Trans. Consum. Electron. **2004**, 49, 1310–1319.
- Lu, L.; Zhou, Y.; Panetta, K.; Agaian, S. Comparative study of histogram equalization algorithms for image enhancement. Proc. SPIE Int. Soc. Opt. Eng. **2010**, 7708, 770811-1.
- Ren, X.; Yang, W.; Cheng, W.H.; Liu, J. LR3M: Robust Low-Light Enhancement via Low-Rank Regularized Retinex Model. IEEE Trans. Image Process. **2020**, 29, 5862–5876.
- Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-Revealing Low-Light Image Enhancement Via Robust Retinex Model. IEEE Trans. Image Process. **2018**, 27, 2828–2841.
- Seonhee, P.; Byeongho, M.; Seungyong, K.; Soohwan, Y.; Joonki, P. Low-light image enhancement using variational optimization-based Retinex model. In Proceedings of the 2017 IEEE International Conference on Consumer Electronics (ICCE), Berlin, Germany, 3–6 September 2017; pp. 70–71.
- Loza, A.; Bull, D.R.; Hill, P.R.; Achim, A.M. Automatic contrast enhancement of low-light images based on local statistics of wavelet coefficients. Digit. Signal Process. **2013**, 23, 1856–1866.
- Malm, H.; Oskarsson, M.; Warrant, E.; Clarberg, P.; Hasselgren, J.; Lejdfors, C. Adaptive enhancement and noise reduction in very low light-level video. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; pp. 1–8.
- Dong, X.; Wang, G.; Pang, Y.; Li, W.; Wen, J.; Meng, W.; Lu, Y. Fast efficient algorithm for enhancement of low lighting video. In Proceedings of the 2011 IEEE International Conference on Multimedia and Expo, Barcelona, Spain, 11–15 July 2011; pp. 1–6.
- Muhammad, N.; Khan, H.; Bibi, N.; Usman, M.; Ahmed, N.; Khan, S.N.; Mahmood, Z. Frequency component vectorisation for image dehazing. J. Exp. Theor. Artif. Intell. **2021**, 33, 919–932.
- Zhu, Y.; Jia, Z.; Yang, J.; Kasabov, N.K. Change detection in multitemporal monitoring images under low illumination. IEEE Access **2020**, 126700–126712.
- Ren, Y.R.; Ying, Z.Q.; Li, T.H.; Li, G. LECARM: Low-Light Image Enhancement Using the Camera Response Model. IEEE Trans. Circuits Syst. Video Technol. **2019**, 29, 968–981.
- Guo, X.J.; Li, Y.; Ling, H.B. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Trans. Image Process. **2017**, 26, 982–993.
- Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A Deep Autoencoder Approach to Natural Low-light Image Enhancement. Pattern Recognit. **2017**, 61, 650–662.
- Yi, W.; Dong, L.; Liu, M.; Hui, M.; Kong, L.; Zhao, Y. SID-Net: Single image dehazing network using adversarial and contrastive learning. Multimed. Tools Appl. **2024**, 83, 71619–71638.
- Khan, H.; Xiao, B.; Li, W.; Muhammad, N. Recent advancement in haze removal approaches. Multimed. Syst. **2022**, 28, 687–710.
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep Light Enhancement Without Paired Supervision. IEEE Trans. Image Process. **2021**, 30, 2340–2349.
- Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 1777–1786.
- Li, C.; Guo, C.; Loy, C.C. Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation. IEEE Trans. Pattern Anal. Mach. Intell. **2022**, 44, 4225–4238.
- Xiang, S.; Wang, Y.; Deng, H.; Wu, J.; Yu, L. Zero-shot Learning for Low-light Image Enhancement Based on Dual Iteration. J. Electron. Inf. Technol. **2022**, 44, 3379–3388.
- Zheng, S.; Gupta, G. Semantic-Guided Zero-Shot Learning for Low-Light Image/Video Enhancement. In Proceedings of the 22nd IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2022; pp. 581–590.
- Xia, Y.; Xu, F.; Zheng, Q. Zero-shot Adaptive Low Light Enhancement with Retinex Decomposition and Hybrid Curve Estimation. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), Gold Coast, Australia, 18–23 June 2023.
- Grossberg, M.D.; Nayar, S.K. Modeling the space of camera response functions. IEEE Trans. Pattern Anal. Mach. Intell. **2004**, 26, 1272–1282.
- Eilertsen, G.; Kronander, J.; Denes, G.; Mantiuk, R.K.; Unger, J. HDR image reconstruction from a single exposure using deep CNNs. ACM Trans. Graph. **2017**, 36.
- Wu, Y.; Liu, F. Zero-shot contrast enhancement and denoising network for low-light images. Multimed. Tools Appl. **2023**, 83, 4037–4064.
- Tian, J.; Zhang, J. A Zero-Shot Low Light Image Enhancement Method Integrating Gating Mechanism. Sensors **2023**, 23, 7306.
- Kar, A.; Dhara, S.K.; Sen, D.; Biswas, P.K. Zero-shot Single Image Restoration through Controlled Perturbation of Koschmieder’s Model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 16200–16210.
- Xie, C.; Tang, H.; Fei, L.; Zhu, H.; Hu, Y. IRNet: An Improved Zero-Shot Retinex Network for Low-Light Image Enhancement. Electronics **2023**, 12, 3162.
- Zhang, Q.; Zou, C.; Shao, M.; Liang, H. A Single-Stage Unsupervised Denoising Low-Illumination Enhancement Network Based on Swin-Transformer. IEEE Access **2023**, 11, 75696–75706.
- Kendall, A.; Gal, Y.; Cipolla, R. Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics. In Proceedings of the 31st IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7482–7491.
- Sener, O.; Koltun, V. Multi-Task Learning as Multi-Objective Optimization. In Proceedings of the 32nd Conference on Neural Information Processing Systems (NIPS), Montréal, Canada, 3–8 December 2018; Volume 31.
- Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. **1986**, 8, 679–698.
- Cai, J.; Gu, S.; Zhang, L. Learning a deep single image contrast enhancer from multi-exposure images. IEEE Trans. Image Process. **2018**, 27, 2049–2062.
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. 2017. Available online: https://xxx.lanl.gov/abs/1412.6980 (accessed on 24 June 2024).
- Loshchilov, I.; Hutter, F. SGDR: Stochastic Gradient Descent with Restarts. 2016. Available online: http://xxx.lanl.gov/abs/1608.03983 (accessed on 24 June 2024).
- Yang, W.; Wang, W.; Huang, H.; Wang, S.; Liu, J. Sparse Gradient Regularized Deep Retinex Network for Robust Low-Light Image Enhancement. IEEE Trans. Image Process. **2021**, 30, 2072–2086.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. **2004**, 13, 600–612.
- Hao, S.J.; Han, X.; Guo, Y.R.; Xu, X.; Wang, M. Low-Light Image Enhancement With Semi-Decoupled Decomposition. IEEE Trans. Multimed. **2020**, 22, 3025–3038.
- Zhang, F.; Li, Y.; You, S.D.; Fu, Y. Learning Temporal Consistency for Low Light Video Enhancement from Single Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 4965–4974.
- Wu, W.H.; Weng, J.; Zhang, P.P.; Wang, X.; Yang, W.H.; Jiang, J.M. URetinex-Net: Retinex-based Deep Unfolding Network for Low-light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5891–5900.
- Ma, L.; Ma, T.Y.; Liu, R.S.; Fan, X.; Luo, Z.X. Toward Fast, Flexible, and Robust Low-Light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5627–5636.
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex Decomposition for Low-Light Enhancement. 2018. Available online: http://xxx.lanl.gov/abs/1808.04560 (accessed on 24 June 2024).
- Lee, C.; Lee, C.; Kim, C.S. Contrast enhancement based on layered difference representation. In Proceedings of the 19th IEEE International Conference on Image Processing (ICIP), Orlando, FL, USA, 30 September–3 October 2012; pp. 965–968.
- Ma, K.; Zeng, K.; Wang, Z. Perceptual Quality Assessment for Multi-Exposure Image Fusion. IEEE Trans. Image Process. **2015**, 24, 3345–3356.
- Yang, W.; Yuan, Y.; Ren, W.; Liu, J.; Scheirer, W.J.; Wang, Z.; Zhang, T.; Zhong, Q.; Xie, D.; Pu, S.; et al. Advancing Image Understanding in Poor Visibility Environments: A Collective Benchmark Study. IEEE Trans. Image Process. **2020**, 29, 5737–5752.
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. **2012**, 21, 4695–4708.
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “Completely Blind” Image Quality Analyzer. IEEE Signal Process. Lett. **2013**, 20, 209–212.
- Li, J.; Wang, Y.; Wang, C.; Tai, Y.; Qian, J.; Yang, J.; Wang, C.; Li, J.; Huang, F. DSFD: Dual Shot Face Detector. In Proceedings of the 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 5055–5064.
- Yang, S.; Luo, P.; Loy, C.C.; Tang, X. WIDER FACE: A Face Detection Benchmark. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 27–30 June 2016; pp. 5525–5533.

**Figure 1.** The relationship between the pixel values of the image and the incident radiation in the same scene under two different exposure conditions.

**Figure 2.** The framework of the ZRCRN. f is the CRF, and g is the BTF. The parameter-generating network performs two main operations, inverse CRF transformation and convolutional model extraction. It extracts the parameter $\mathbf{K}$ from the input low-light image ${\mathbf{P}}_{\mathbf{0}}$. The BTF uses $\mathbf{K}$ to transform ${\mathbf{P}}_{\mathbf{0}}$ into the enhancement result ${\mathbf{P}}_{\mathbf{1}}$.

**Figure 3.** The structure of the PGN. The number at the bottom represents the number of channels of the corresponding image or convolution layer. ${f}^{-1}$ indicates the inverse CRF transformation. The low-light image (${\mathbf{P}}_{\mathbf{0}}$) acquired by the camera first passes through the inverse CRF transformation to obtain the low-light incident radiation map (${\mathbf{E}}_{\mathbf{0}}$) and then passes through a series of layers to obtain the single-channel pixel-wise parameter diagram ($\mathbf{K}$) required for enhancement.

**Figure 5.** A comparison of the ablation of loss functions. The left side of “-” in each title indicates the state of the brightness loss (Our: contrast-preserving brightness loss; Const: fixed-target-exposure loss; No: not used), and the right side indicates the state of the smoothness loss.

**Figure 6.** The qualitative comparison of eight methods on FR datasets with the input and ground truth (GT).

**Figure 8.** A comparison of enhancement methods on the DARK FACE dataset. “ori” indicates no enhancement. Targets were detected by DSFD from the outputs of each enhancement method to obtain the metric.

| | SSIM | PSNR | TIME/ms |
|---|---|---|---|
| m1 | 0.44 | 12.32 | 0.41 |
| m2-n1 | 0.69 | 17.99 | 0.50 |
| m2-n2 | 0.69 | 18.20 | 0.51 |
| m2-n4 | 0.68 | 17.74 | 0.61 |
| m3-n1 | 0.54 | 15.80 | 0.56 |
| m3-n2 | 0.61 | 16.98 | 0.59 |
| m3-n4 | 0.61 | 16.98 | 0.73 |
| m4-n1 | 0.59 | 17.28 | 0.64 |
| m4-n2 | 0.60 | 17.48 | 0.64 |
| m4-n4 | 0.68 | 17.67 | 0.87 |

| | SSIM | PSNR | SSIM-EXPTAR2BONL |
|---|---|---|---|
| a = 0.0 | 0.67 | 17.13 | 0.4 |
| a = 0.1 | 0.69 | 17.73 | 0.65 |
| a = 0.2 | 0.69 | 18.20 | 0.73 |
| a = 0.3 | 0.69 | 18.09 | 0.73 |
| a = 0.4 | 0.68 | 17.84 | 0.69 |
| a = 0.5 | 0.64 | 16.94 | 0.65 |

| | SSIM | PSNR |
|---|---|---|
| $\gamma =1$, $\beta =0.304$ | 0.69 | 17.78 |
| $\gamma =1$, $\beta =0.707$ | 0.68 | 17.33 |
| $\gamma =1$, $\beta =1.414$ | 0.67 | 16.85 |
| $\gamma =1$, $\beta =2.828$ | 0.43 | 11.67 |
| $\gamma =2$, $\beta =0.304$ | 0.69 | 17.98 |
| $\gamma =2$, $\beta =0.707$ | 0.69 | 18.14 |
| $\gamma =2$, $\beta =1.414$ | 0.69 | 18.20 |
| $\gamma =2$, $\beta =2.828$ | 0.69 | 18.01 |
| $\gamma =3$, $\beta =0.304$ | 0.69 | 17.85 |
| $\gamma =3$, $\beta =0.707$ | 0.69 | 17.94 |
| $\gamma =3$, $\beta =1.414$ | 0.69 | 18.05 |
| $\gamma =3$, $\beta =2.828$ | 0.69 | 17.87 |

| | SSIM | PSNR |
|---|---|---|
| With inverse | 0.69 | 18.20 |
| Without inverse | 0.65 | 17.2 |

| | SSIM | PSNR |
|---|---|---|
| OurExp-OurSmooth | 0.69 | 18.20 |
| OurExp-NoSmooth | 0.69 | 18.00 |
| ConstExp-OurSmooth | 0.68 | 17.93 |
| NoExp-OurSmooth | 0.28 | 10.02 |

| | LOL SSIM | LOL PSNR | LOL_v2 SSIM | LOL_v2 PSNR | Mean SSIM | Mean PSNR |
|---|---|---|---|---|---|---|
| LECARM | 0.67 | 14.42 | 0.67 | 17.66 | 0.67 | 16.04 |
| SDD | 0.67 | 13.25 | 0.7 | 16.51 | 0.69 | 14.88 |
| StableLLVE | 0.78 | 17.25 | 0.78 | 19.8 | 0.78 | 18.52 |
| URetinex_Net | 0.86 | 19.55 | 0.85 | 20.66 | 0.85 | 20.11 |
| SCI | 0.63 | 13.8 | 0.63 | 17.3 | 0.63 | 15.55 |
| EnlightenGAN | 0.74 | 17.44 | 0.74 | 18.62 | 0.74 | 18.03 |
| Zero_DCE | 0.7 | 14.84 | 0.69 | 18.13 | 0.69 | 16.49 |
| Ours | 0.72 | 16.44 | 0.69 | 18.2 | 0.71 | 17.32 |

| | DICM BRISQUE | DICM NIQE | LIME BRISQUE | LIME NIQE | MEF BRISQUE | MEF NIQE | Mean BRISQUE | Mean NIQE |
|---|---|---|---|---|---|---|---|---|
| LECARM | 26.70 | 4.24 | 20.97 | 3.99 | 19.74 | 3.00 | 22.47 | 3.74 |
| SDD | 30.74 | 3.94 | 27.09 | 4.01 | 29.73 | 3.86 | 29.19 | 3.94 |
| StableLLVE | 33.73 | 4.24 | 29.12 | 4.20 | 34.11 | 4.30 | 32.32 | 4.25 |
| URetinex_Net | 28.04 | 4.07 | 26.38 | 4.41 | 25.14 | 3.73 | 26.52 | 4.07 |
| SCI | 23.04 | 3.67 | 20.68 | 4.17 | 19.24 | 3.16 | 20.99 | 3.67 |
| EnlightenGAN | 23.80 | 3.50 | 19.62 | 3.54 | 21.06 | 3.07 | 21.49 | 3.37 |
| Zero_DCE | 25.04 | 3.57 | 22.17 | 3.90 | 21.09 | 3.03 | 22.77 | 3.50 |
| Ours | 21.83 | 3.40 | 22.39 | 3.87 | 18.25 | 3.23 | 20.82 | 3.50 |

| | FLOPs/G | #Params/k | TIME/ms |
|---|---|---|---|
| LECARM | - | - | 71.310 |
| SDD | - | - | 5952.140 |
| StableLLVE | 165.316 | 4316.259 | 3.306 |
| URetinex_Net | 939.912 | 340.105 | 89.750 |
| SCI | 0.272 | 0.258 | 0.676 |
| EnlightenGAN | 275.020 | 8636.675 | 19.390 |
| Zero_DCE | 85.769 | 79.416 | 1.300 |
| Ours | 0.081 | 0.075 | 0.518 |


© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Wang, X.; Huang, L.; Li, M.; Han, C.; Liu, X.; Nie, T.
Fast, Zero-Reference Low-Light Image Enhancement with Camera Response Model. *Sensors* **2024**, *24*, 5019.
https://doi.org/10.3390/s24155019
