Article

Efficient Gamma-Based Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement

School of Mathematics and Computer Sciences, Nanchang University, Nanchang 330031, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(13), 7382; https://doi.org/10.3390/app15137382
Submission received: 16 May 2025 / Revised: 25 June 2025 / Accepted: 27 June 2025 / Published: 30 June 2025
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

In recent years, the continuous advancement of deep learning technology and its integration into the domain of low-light image enhancement have led to a steady improvement in enhancement effects. However, this progress has been accompanied by an increase in model complexity, imposing significant constraints on applications that demand high real-time performance. To address this challenge, inspired by the state-of-the-art Zero-DCE approach, we introduce a novel method that transforms the low-light image enhancement task into a curve estimation task tailored to each individual image, utilizing a lightweight shallow neural network. Specifically, we first design a novel curve formula based on Gamma correction, which we call the Gamma-based light-enhancement (GLE) curve. This curve enables outstanding performance in the enhancement task by directly mapping the input low-light image to the enhanced output at the pixel level, thereby eliminating the need for multiple iterative mappings as required in the Zero-DCE algorithm. As a result, our approach significantly improves inference speed. Additionally, we employ a lightweight network architecture to minimize computational complexity and introduce a novel global channel attention (GCA) module to enhance the nonlinear mapping capability of the neural network. The GCA module assigns distinct weights to each channel, allowing the network to focus more on critical features. Consequently, it enhances the effectiveness of low-light image enhancement while incurring a minimal computational cost. Finally, our method is trained using a set of zero-reference loss functions, akin to the Zero-DCE approach, without relying on paired or unpaired data. This ensures the practicality and applicability of our proposed method. The experimental results of both quantitative and qualitative comparisons demonstrate that, despite its lightweight design, the images enhanced using our method not only exhibit perceptual quality, authenticity, and contrast comparable to those of mainstream state-of-the-art (SOTA) methods but in some cases even surpass them. Furthermore, our model demonstrates very fast inference speed, making it suitable for real-time inference in resource-constrained or mobile environments, with broad application prospects.

1. Introduction

In practical scenarios, images acquired under insufficient or constrained lighting conditions frequently manifest characteristics such as low illumination, high noise levels, and poor contrast [1,2,3,4]. These adverse characteristics not only severely impair human visual perception and degrade user experience but also severely hinder the performance of critical computer vision tasks, such as image segmentation, object detection, and face recognition [5,6,7,8]. Consequently, the enhancement of images captured under low-light conditions constitutes a vital research priority. At present, the existing low-light image enhancement techniques can be broadly classified into two categories: traditional methods and deep learning-based approaches.
The traditional methods for low-light image enhancement primarily encompass histogram equalization [9,10,11], Gamma correction [12,13,14], and Retinex theory [15,16,17]. These techniques have been widely utilized due to their simplicity and intuitive principles, yet they suffer from various limitations when applied to complex low-light scenarios. Histogram equalization is a well-established technique that enhances image contrast by redistributing the pixel value distribution. However, it often leads to over-enhancement, noise amplification, and color distortion [18], particularly in images with narrow or skewed histograms. These artifacts can significantly degrade the visual quality of the enhanced images, making histogram equalization less suitable for low-light-enhancement tasks that require fine detail preservation and natural appearance. Retinex theory-based methods decompose images into reflection and illumination components [19], achieving enhancement by adjusting the illumination component. While these methods can effectively enhance the overall brightness and contrast of low-light images, they are computationally intensive due to the complex decomposition and reconstruction processes. Additionally, the parameter tuning required for optimal performance is often non-trivial, making it challenging to adapt these methods to diverse low-light scenarios without extensive manual intervention [20]. Gamma correction is a classical nonlinear transformation method that is computationally simple and efficient. Its nonlinear characteristics align well with the human visual system’s perception of brightness [21], making it suitable for enhancing the overall luminance of low-light images. However, most Gamma correction methods typically employ a fixed Gamma value for the entire image, which cannot adequately adapt to the varying brightness levels within an image. This limitation often results in overexposure in brighter regions or insufficient brightness adjustment in darker areas, thereby failing to achieve balanced enhancement across the entire image. Other traditional approaches, such as those based on wavelet transforms [22,23], fuzzy set theory [24,25], and multi-scale fusion [26], have also been explored for low-light image enhancement. The wavelet transform-based methods excel at effectively separating and processing image information at different scales, enhancing details while suppressing noise, with good edge preservation. For example, the method proposed by Jung et al. [22] in 2017 achieved effective contrast enhancement and noise reduction. However, these methods are often sensitive to the choice of thresholds and basis functions, may lead to artifacts or color distortion if mishandled, and are relatively computationally complex. The fuzzy set theory-based methods can naturally model the fuzziness and uncertainty of image brightness, offering flexible nonlinear mapping rules and strong adaptability. Yet, their performance heavily relies on the design of membership functions and fuzzy rules, and they may introduce an unnatural appearance or noise if not applied carefully. The multi-scale fusion-based methods can integrate complementary information from different scales or processing results, leveraging the strengths of various techniques to typically achieve more robust and natural enhancement results. For instance, the multi-scale fusion approach proposed by Zhang et al. [26] in 2023 yielded promising results.
Nevertheless, these methods are generally computationally intensive, requiring the generation and processing of multiple inputs, the design of fusion weights is crucial and challenging, and improper fusion can easily introduce artifacts or blurring. In summary, while the traditional methods have laid the foundation for low-light image enhancement, their inherent limitations in terms of computational complexity, parameter tuning, and adaptability to diverse scenarios necessitate the development of more advanced and flexible techniques to achieve high-quality enhancement results in complex low-light environments.
In recent years, the field of image enhancement has seen a significant increase in the development of low-illumination enhancement methods based on deep learning. These methods have not only achieved superior performance compared to traditional approaches but also demonstrated promising prospects for further development. The advent of deep learning has fundamentally transformed the landscape of low-light image enhancement by leveraging the powerful representation learning capabilities of neural networks. The pioneering work in this domain can be traced back to 2017, when Lore et al. [27] proposed a deep autoencoder approach for natural low-light image enhancement, known as LLNet, marking the beginning of a new era in low-light enhancement. Their work demonstrated that a variant of a stacked sparse denoising autoencoder trained on synthetic data could effectively enhance low-light and noisy images, thereby highlighting the potential of deep learning in addressing the challenges of low-light imaging. Since then, a plethora of deep learning-based low-light-enhancement methods have been proposed, broadly categorized into supervised and unsupervised approaches. In the realm of supervised learning, Wei et al. [28] introduced Deep Retinex Decomposition (RetinexNet), which leverages deep convolutional neural networks to achieve end-to-end learning of Retinex theory. This method significantly advanced the state of the art by integrating the well-established Retinex model with modern deep learning techniques. Subsequently, Zhang et al. [29] proposed KinD, a simple yet effective network that decomposes the enhancement task into three distinct sub-networks: a layer decomposition net, a reflectance restoration net, and an illumination adjustment net. This modular design enables KinD to effectively address the complexities of low-light enhancement by tackling different aspects of the problem separately. More recently, Cai et al. [6] introduced Retinexformer, the first Transformer-based algorithm for low-light image enhancement. By estimating illumination and directly predicting the illumination enhancement map, Retinexformer avoids the numerical instability issues often encountered in traditional methods that rely on estimating the illumination map. This innovative approach leverages the strengths of Transformer architectures, such as their ability to capture long-range dependencies and contextual information, thereby achieving state-of-the-art performance in low-light-enhancement tasks. However, supervised learning methods, despite their impressive performance, face significant limitations. One of the primary challenges is the requirement for annotated or paired training datasets. Currently, the availability of paired low-light images is limited, which restricts the scalability and applicability of these methods. Moreover, supervised low-light-enhancement methods often exhibit insufficient generalization capabilities when confronted with complex and varied scenes. This limitation is particularly evident when these methods are applied to real-world low-light images with diverse lighting conditions, where they may fail to effectively adapt and produce satisfactory results. In contrast, unsupervised models have emerged as a promising alternative, exhibiting stronger generalization capabilities. In 2021, Jiang et al. [30] proposed EnlightenGAN, a pioneering method that leverages generative adversarial networks (GANs) for deep-light enhancement without paired supervision. 
This work demonstrated that GANs could be effectively utilized to enhance low-light images by learning the mapping between low-light and normal-light images without requiring paired training data. Building on this foundation, Liang et al. [31] explored the potential of Contrastive Language–Image Pretraining (CLIP) for pixel-level image enhancement and proposed CLIP-LIT, a novel unsupervised backlight image enhancement method. By leveraging the rich semantic information captured by CLIP, CLIP-LIT achieves improved enhancement results while maintaining strong generalization capabilities. In 2024, Chobola et al. [32] introduced CoLIE, a method that redefines the enhancement process by mapping the 2D coordinates of underexposed images to their illumination components. This innovative approach achieves improved results by effectively capturing the spatial relationships and contextual information within the image, thereby providing a more holistic enhancement strategy. In summary, while supervised deep learning methods have achieved remarkable success in low-light image enhancement, their reliance on paired training data and limited generalization capabilities pose significant challenges. In contrast, unsupervised methods, such as EnlightenGAN, CLIP-LIT, and CoLIE, have demonstrated the potential to overcome these limitations by leveraging advanced techniques, such as GANs and CLIP. These developments highlight the ongoing evolution of low-light image enhancement, driven by the continuous integration of deep learning techniques and innovative algorithmic designs.
As mentioned above, it is evident that traditional low-light-enhancement methods often necessitate complex manual prior knowledge or intricate iterative algorithms to achieve satisfactory enhancement results, leading to high computational complexity. On the other hand, while the existing supervised and unsupervised deep learning-based low-light-enhancement models have been continuously improving in terms of enhancement effects, these methods still have certain limitations, particularly regarding inference speed. Most deep learning-based low-light-enhancement methods rely on complex network architectures, resulting in large model parameters and high computational costs, making them challenging to run in real time on resource-constrained devices. In recent years, Guo et al. [33] proposed the Zero-Reference Deep Curve Estimation (Zero-DCE) method for low-light image enhancement, which demonstrates unique advantages with its lightweight design and zero-reference training strategy. This method directly adjusts the original image by learning pixel-level curve mapping functions, circumventing the complex feature extraction process and effectively reducing computational complexity. Zero-DCE does not depend on paired datasets for training; instead, it is guided by zero-reference loss functions such as spatial consistency loss, exposure control loss, and color constancy loss, ensuring the naturalness of the enhancement results. Subsequently, to further improve computational speed and reduce the model size, Li et al. [34] introduced an accelerated and lightweight version of Zero-DCE, called Zero-DCE++. However, despite the partial improvements made by Zero-DCE++, its computational speed is still unsatisfactory, indicating the need for further research and development to enhance the efficiency and practicality of low-light image enhancement methods for real-time applications on resource-limited devices.
As is known, the Zero-DCE and Zero-DCE++ methods design a light-enhancement curve (LE curve), a quadratic function that necessitates eight iterations to achieve satisfactory enhancement results. This iterative process, while effective, presents an opportunity for further improvement in terms of computational efficiency. Drawing from the insights of Zero-DCE, we recognize that the low-light image enhancement task can be reformulated as the estimation of a specific curve, enabling the design of a dedicated curve tailored for low-light image enhancement. Consequently, in this work, our objective is to devise a powerful curve that can directly and effectively accomplish the enhancement task, namely the pixel brightness mapping task, without the need for iterative processing. To this end, we conducted an in-depth analysis of the properties of the LE curve. After continuous efforts, we ultimately designed a completely new curve based on Gamma correction theory, termed the Gamma-based light-enhancement (GLE) curve. Furthermore, to enhance both computational speed and enhancement effects, we redesigned the network architecture and introduced a novel global channel attention (GCA) mechanism for channel reconstruction. This attention mechanism builds upon the SE [35] attention mechanism, simplifying the complex squeeze-and-excitation process and retaining the channel weight allocation strategy. By incorporating the GCA module, our model focuses more on important regions, thereby enhancing its generalization ability and overall enhancement performance. In summary, our contributions are as follows:
(1)
We propose a novel curve formula, the Gamma-based light-enhancement (GLE) curve, specifically tailored for low-light image enhancement tasks, which executes independent pixel-wise transformations and conducts distinct parameter estimation for each of the three RGB channels. This formula achieves pixel brightness mapping with a single application, thereby avoiding the parameter redundancy and computational slowdown associated with multiple iterations. In contrast to Zero-DCE and Zero-DCE++, which require eight iterations for curve application, our GLE curve is applied only once, resulting in a significantly faster computational speed and enhanced efficiency.
(2)
We introduce a new channel attention mechanism, i.e., GCA. The GCA module retains only the essential channel weight allocation process, simplifying the computationally intensive squeeze-and-excitation operations that can impede speed. We redesign the network architecture to fully leverage the GCA module. The proposed model boasts a significantly reduced parameter count of merely 8000, in stark contrast to the 80,000 parameters of the Zero-DCE model. This streamlined architecture, comprising only basic convolutional layers, activation layers, and the GCA module, ensures both effective enhancement and high execution efficiency.
(3)
Our proposed method not only achieves low-light image enhancement results comparable to or even better than those of the state-of-the-art methods in terms of various objective evaluation metrics but also boasts the fastest execution speed. Consequently, our model demonstrates broader application prospects, particularly in scenarios requiring real-time processing and deployment on resource-constrained devices.

2. Related Works

The Zero-DCE [33] method, introduced by Guo et al. in 2020, represents a zero-reference approach for enhancing low-light images. Drawing inspiration from the curve adjustment techniques used in photo editing software to modify image brightness, the authors designed a specific curve for low-light image enhancement, referred to as the LE curve. This innovation effectively transforms the low-light-enhancement task into an image-specific curve estimation problem. The mathematical formulation of the LE curve is given by
$LE(I(x); \alpha) = I(x) + \alpha I(x)\left(1 - I(x)\right),$
where $I(x)$ denotes the normalized pixel value at position $x$, constrained within the range [0, 1], and $\alpha$ is a learnable parameter that governs the curvature of the enhancement curve. This pixel-wise adjustment mechanism facilitates spatially varying enhancement, allowing the method to adapt dynamically to different regions within the image.
The network architecture of Zero-DCE, known as DCE-Net, is composed of seven layers, with skip connections incorporated to further preserve image details. This structure is both concise and efficient, utilizing 3 × 3 convolutional kernels and ReLU activation functions in each layer, while the final layer employs a Tanh activation function, ensuring a lightweight design. The output layer generates 24 channels, which are then reshaped into 8 three-channel maps. These maps serve as adjustment parameters for the RGB channels in multiple iterations of the curve mapping function. By iteratively applying the LE curve, the model can gradually optimize the brightness and contrast of the image through progressive refinements. Moreover, Zero-DCE employs a set of carefully designed loss functions to guide the training process, eliminating the need for paired images. These loss functions include spatial consistency loss, exposure control loss, illumination smoothness loss, and color constancy loss. These non-reference losses work in conjunction with the iterative curve mapping process to generate pixel-specific adjustment parameters during training, enabling the model to adaptively enhance each pixel based on its surrounding context and content characteristics. Unlike other supervised learning methods that require a large amount of paired images, the zero-reference learning strategy of Zero-DCE significantly reduces the dependence on training data. This approach not only achieves satisfactory visual quality but also demonstrates clear advantages in computational efficiency and practicality, offering a solution for low-light image enhancement that balances effectiveness and efficiency.
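For concreteness, the iterative curve application used by Zero-DCE can be sketched as follows. This is a minimal illustration based on the description above, not the authors' released code; the tensor names and shapes (a 24-channel parameter map reshaped into eight three-channel maps) are our own assumptions.

```python
import torch

def le_curve_step(x, alpha):
    # One application of the LE curve: LE(I; alpha) = I + alpha * I * (1 - I)
    return x + alpha * x * (1.0 - x)

def zero_dce_enhance(x, A, n_iter=8):
    """Iteratively apply the LE curve, as in Zero-DCE.

    x : (B, 3, H, W) low-light image normalized to [0, 1]
    A : (B, 24, H, W) curve parameters, one 3-channel map per iteration
    """
    alphas = torch.split(A, 3, dim=1)      # eight maps of shape (B, 3, H, W)
    for k in range(n_iter):
        x = le_curve_step(x, alphas[k])
    return x
```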
To further lighten the model and accelerate computational speed, Li et al. [34] introduced Zero-DCE++, an enhanced version of Zero-DCE. One of the primary improvements is the adoption of depthwise-separable convolutions instead of standard convolutions, which significantly reduces the number of network parameters. Additionally, the number of output channels is decreased from twenty-four to three. Unlike Zero-DCE, which requires 24 different parameter mappings over eight iterations, Zero-DCE++ only needs to reuse the same three-channel (R, G, and B) map in each iteration. Furthermore, Zero-DCE++ incorporates downsampling to achieve a better balance between enhancement performance and computational cost.

3. Methodology

In this section, the Gamma-nonlinear enhancement curve is formulated as a specialized mapping operator for low-light image restoration. Its theoretical foundation is derived from adaptive Gamma correction, ensuring pixel-wise adjustments and separate enhancement of the three RGB channels. To preserve critical feature discriminability during illumination transformation, a lightweight global channel attention (GCA) module is strategically incorporated, achieving enhanced perceptual quality within a computationally efficient framework. The subsequent subsections are organized as follows: First, the mathematical derivation of the GLE curve is formalized, followed by the architectural specification of the GCA mechanism. The hierarchical neural network topology is then delineated, concluding with an introduction to the hybrid loss function governing model training.

3.1. Basic Idea

While the Zero-DCE method utilizes a simple quadratic light-enhancement curve, achieving satisfactory image enhancement necessitates multiple iterations. However, these iterative applications still leave room for significant improvement in terms of computational efficiency. To address this, we posit that low-light image enhancement can be mathematically formulated as a direct curve mapping process, formally defined through Gamma transformations. Building upon the principles of Gamma correction, we redesign a specialized curve tailored specifically for low-light image enhancement. Consequently, our primary objective is to develop a dedicated curve that is capable of accomplishing the enhancement task through a single-step direct-mapping process.

3.2. Gamma-Based Light-Enhancement (GLE) Curve

Our proposed formula is designed to meet the following critical requirements: (1) Enhanced images should be confined to the [0, 1] normalized range, avoiding information loss from overflow truncation. (2) The curve should possess both brightness enhancement and overexposure suppression capabilities. (3) Akin to the LE curve, our curve must maintain monotonicity within the [0, 1] range to preserve the differences between adjacent pixels. In the subsequent sections, we will detail the step-by-step derivation of our final formula, starting from the standard Gamma correction.
Here, we first introduce the classic Gamma correction formula, which is defined as follows:
$y = c x^{\gamma}, \quad x \in [0, 1],$
where $x$ represents the input intensity, $c$ is the scaling coefficient, $y$ represents the output intensity, and $\gamma$ is the correction parameter.
First, we address the constraint that enhanced pixel values must remain within the normalized range. In our work, the parameter $c$ is set to 1, ensuring that, when input pixel values $x$ lie within the [0, 1] interval, the corrected pixel values $y$ remain naturally limited to the [0, 1] interval. Based on this property, we retain the design using the original pixel $x$ as the base, such that the formula becomes
$y = x^{\gamma}, \quad x \in [0, 1].$
Next, we analyze how the curve can simultaneously possess the dual functions of brightness enhancement and overexposure suppression. As illustrated in Figure 1, it is evident that, when the γ value is between 0 and 1, the pixel values after Gamma transformation are increased relative to the original values, thereby achieving image brightness enhancement. Conversely, when the γ value exceeds 1, the transformed pixel values decrease, resulting in reduced image brightness and effective suppression of overexposure. To achieve both enhancement of low-illumination regions and suppression of overexposed regions, we modify the formula as follows:
$y = x^{1 + g}, \quad x \in [0, 1].$
In this formula, x still represents the normalized original pixel value, y is the adjusted pixel value, and g represents the trainable curve parameter. Through the Tanh activation function, we strictly limit the range of g to the [−1, 1] interval, with different pixel values x corresponding to different parameters g. This design enables the curve function to simultaneously meet the requirements of enhancing low-light regions and suppressing high-light regions. However, in practical applications, the above formula poses potential problems. As shown in Figure 2a, when the pixel value x is small and g is negative, the adjusted pixel value y undergoes dramatic changes compared to the original value x, leading to unstable model training. Particularly when parameter g approaches −1, the function curve becomes abnormally steep, potentially triggering a gradient explosion issue that severely affects the model’s convergence performance. To resolve the stability issue, we need to introduce an adaptive regulation factor into the formula to limit the magnitude of changes during brightness adjustment and avoid excessive enhancement. Based on this idea, we attempt to incorporate the original pixel value x as a regulation factor into the calculation of the exponent, forming the following formula:
$y = x^{1 + gx}, \quad x \in [0, 1].$
Unfortunately, experimental results indicate that directly employing x as a regulation factor remains suboptimal. This direct application results in excessive regulation, thereby overly suppressing the model’s ability to enhance dark regions. This can be observed in Figure 2b, where, when the parameter g is set to −1, low-pixel-value regions exhibit almost no noticeable changes, which contradicts our original intention of enhancing dark areas. Through in-depth analysis, we discover that the previous regulation factor excessively suppresses the model’s ability to enhance dark regions. This necessitates an optimized design of the regulation factor to provide moderate suppression in dark areas without being overly restrictive. This requirement calls for a nonlinear transformation of the regulation factor. When x is directly used as the regulation factor, its gradient remains constant, resulting in poor enhancement effects in dark regions. The ideal regulation factor should be a function that monotonically increases within the [0, 1] range but with gradually decreasing derivatives. Based on the above analysis, we decide to apply an exponential transformation to the regulation factor, adopting the form $x^{a}$, where a is a hyperparameter. As validated through ablation studies, when a is set to 0.13, the curve achieves good enhancement effects in dark regions while maintaining moderate suppression. As such, considering all the above design factors, our GLE curve formula can be defined as
$y = x^{1 + g x^{a}}, \quad x \in [0, 1].$
Theoretical analysis demonstrates that our proposed formula exhibits significant adaptive advantages. When the pixel value x approaches 0, although the regulation factor $x^{0.13}$ also approaches 0, its gradient changes relatively smoothly, ensuring that the formula maintains stable and effective enhancement capabilities in low-light regions. Conversely, when x approaches 1, the regulation factor approaches 1, effectively suppressing excessive enhancement of high-light regions and preventing overexposure distortion. To further enhance performance, we observe that, while using the Tanh activation function limits parameter g to the [−1, 1] range, this constrains the formula’s ability to suppress overexposure. To address this issue, we implement an asymmetric mapping for parameter g after Tanh activation, amplifying its value by a factor of 10 when g > 0 (used for overexposure suppression) while keeping it unchanged when g < 0 (used for brightening dark regions). This asymmetric design significantly enhances the model’s control over high-light regions while preserving the fine adjustment capability for dark areas. Figure 3 illustrates the curve characteristics corresponding to different g values under the fixed hyperparameter a = 0.13. With this, the derivation of the proposed GLE formula and the selection of its parameter settings are complete. The proposed GLE formula satisfies the three objectives we previously established. It requires only a single application to accomplish the mapping of pixel brightness levels, thereby enhancing low-illumination regions while suppressing overexposed areas.
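To make the final formula concrete, a minimal sketch of the GLE mapping is given below, including the asymmetric scaling of g after the Tanh activation. The function and tensor names, as well as the small epsilon used for numerical safety, are our own assumptions rather than the authors' implementation.

```python
import torch

def gle_curve(x, g, a=0.13, eps=1e-6):
    """Single-pass GLE mapping y = x^(1 + g * x^a), applied per pixel and per RGB channel.

    x : (B, 3, H, W) input image normalized to [0, 1]
    g : (B, 3, H, W) curve parameter from the network's Tanh output, in [-1, 1]
    """
    # Asymmetric mapping: amplify positive g (overexposure suppression) by 10,
    # keep negative g (dark-region brightening) unchanged, giving g in [-1, 10].
    g = torch.where(g > 0, 10.0 * g, g)
    x = x.clamp(eps, 1.0)                  # avoid degenerate powers at exactly zero
    exponent = 1.0 + g * x.pow(a)          # regulation factor x^a with a = 0.13
    return x.pow(exponent)
```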

3.3. Global Channel Attention (GCA)

In this section, we introduce a novel channel attention mechanism specifically designed for channel reconstruction. Our approach represents an advancement based on the SE [35] attention mechanism, which enhances feature representation through a two-step process. Initially, the squeeze operation compresses spatial information of each channel into a single descriptor via global average pooling:
$z_c = F_{sq}(u_c) = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} u_c(i, j),$
where $z_c$ is the channel descriptor, $u_c$ represents the c-th channel of the feature map, H and W denote the height and width of the feature map, and $F_{sq}$ is the squeeze function. Subsequently, the excitation operation captures nonlinear interdependencies between channels through a two-layer fully connected network:
$s = F_{ex}(z, W) = \sigma(g(z, W)) = \sigma\left(W_2\, \delta(W_1 z)\right),$
where $s$ represents the channel attention weights, $F_{ex}$ is the excitation function, $\delta$ represents the ReLU activation function, $W_1 \in \mathbb{R}^{\frac{C}{r} \times C}$ and $W_2 \in \mathbb{R}^{C \times \frac{C}{r}}$ denote the dimensionality-reduction and dimensionality-increasing parameter matrices, respectively, and $\sigma$ is the sigmoid activation function. The final reconstruction occurs through channel-wise multiplication:
$\tilde{x}_c = F_{scale}(u_c, s_c) = s_c \cdot u_c,$
where $\tilde{x}_c$ is the recalibrated feature, $s_c$ is the attention weight for channel c, and $F_{scale}$ denotes the channel-wise scaling function. Building upon our analysis of the SE module, we propose an exceptionally lightweight channel attention mechanism. For a feature map $x \in \mathbb{R}^{C \times H \times W}$, we initially apply global average pooling to acquire channel descriptors:
$z_c = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} x_c(i, j),$
where $z_c$ is the global descriptor for channel c, $x_c$ represents the c-th channel of the input feature map, and H and W are the spatial dimensions. Next, we remove the other tedious steps in Equation (8) and directly apply the softmax activation function to these channel descriptors:
$a_c = \frac{e^{z_c}}{\sum_{k=1}^{C} e^{z_k}},$
where $a_c$ is the attention weight for channel c, and the formula represents the softmax activation function that normalizes the exponential values across all channels. Finally, we reconstruct the original features using the generated channel weights:
$\tilde{x}_c = x_c \times a_c,$
where $\tilde{x}_c$ is the recalibrated output for channel c, $x_c$ is the original feature, and $a_c$ is the computed attention weight. Therefore, our approach retains only the channel weight assignment mechanism of the SE attention mechanism, removing modules such as the dimensionality-reduction and dimensionality-increasing layers. The proposed attention module can be inserted after any convolutional layer to perform channel reconstruction. Despite its simplicity, our experimental results demonstrate that this channel attention mechanism maintains performance while substantially reducing computational complexity, offering a novel approach for lightweight network architecture design.
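A minimal PyTorch sketch of the GCA module described above is shown below; it keeps only global average pooling, a softmax over channels, and channel-wise rescaling, and therefore contains no learnable parameters. Module and variable names are ours, not the authors' code.

```python
import torch
import torch.nn as nn

class GCA(nn.Module):
    """Global channel attention: global average pooling, softmax over channels, rescaling."""

    def forward(self, x):                            # x: (B, C, H, W)
        z = x.mean(dim=(2, 3))                       # squeeze: global average pooling -> (B, C)
        a = torch.softmax(z, dim=1)                  # attention weights over the C channels
        return x * a.unsqueeze(-1).unsqueeze(-1)     # channel-wise recalibration
```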

3.4. A Lightweight Neural Network

Based on the aforementioned GCA attention mechanism and GLE formula, we design a lightweight neural network for low-light image enhancement. The architecture of our network is depicted in Figure 4, which consists of five convolutional layers integrated with two channel reconstruction modules. Regarding the convolutional layer design, the first four layers employ 3 × 3 convolutional kernels and maintain 16 feature channels to extract comprehensive feature representations. Each of these layers is followed by a ReLU activation function to introduce nonlinear transformations. The final layer utilizes a 3 × 3 convolutional kernel to reduce the feature dimensionality to three output channels, with a Tanh activation function applied to constrain the output range within [−1, 1]. The two channel reconstruction modules are strategically positioned after the second and fourth convolutional layers. These modules implement our GCA channel attention mechanism through global average pooling combined with softmax activation, effectively enhancing the network’s selective capability toward information-rich channels. Subsequently, conditional value amplification is applied, selectively magnifying all positive parameter regions by a factor of 10 while preserving negative value regions unchanged. This asymmetric approach extends the final parameter g output range to [−1, 10], thereby enabling enhanced exposure suppression capabilities. The network ultimately outputs a three-channel mapping g, which is used for pixel adjustment across the R, G, and B channels, resulting in the enhanced image. With a significantly reduced parameter count of merely 8 K, compared to the 80 K parameters of the Zero-DCE model, the proposed model gains a substantial advantage in terms of execution efficiency. The experimental results demonstrate that this network architecture achieves an optimal balance between computational efficiency and parameter estimation accuracy, providing a practical and effective solution for low-light image enhancement.
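A sketch of the enhancement network as we understand it from the description above: five 3 × 3 convolutions (16 channels in the first four), ReLU activations, GCA after the second and fourth layers, and a Tanh output. It reuses the `GCA` and `gle_curve` sketches from the previous subsections, and for consistency the asymmetric ×10 amplification of positive g is applied inside `gle_curve`. Padding set to 1 (to preserve spatial size) and other unstated details are assumptions; the convolution weights of this sketch come to roughly 7.8 K parameters, consistent with the reported 8 K.

```python
import torch
import torch.nn as nn

class GLENet(nn.Module):
    """Lightweight curve-estimation network: five convolutions and two GCA modules."""

    def __init__(self, channels=16):
        super().__init__()
        self.conv1 = nn.Conv2d(3, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv4 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv5 = nn.Conv2d(channels, 3, 3, padding=1)
        self.gca = GCA()                               # parameter-free channel reconstruction
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        f = self.relu(self.conv1(x))
        f = self.gca(self.relu(self.conv2(f)))         # GCA after the 2nd conv layer
        f = self.relu(self.conv3(f))
        f = self.gca(self.relu(self.conv4(f)))         # GCA after the 4th conv layer
        g = torch.tanh(self.conv5(f))                  # curve parameter map, g in [-1, 1]
        # gle_curve (see the Section 3.2 sketch) applies the asymmetric x10 amplification
        # of positive g and the single-pass mapping y = x^(1 + g * x^a).
        return gle_curve(x, g), g                      # enhanced image and parameter map
```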

3.5. Loss Functions

To enable end-to-end training of our network without reference images, we employ a suite of differentiable non-reference loss functions, which facilitate zero-reference training and allow our model to produce enhanced results with optimized visual quality. These loss functions are directly adopted from the Zero-DCE method, and we have retained the original formulations without modification. The total loss function, which integrates four loss items, is expressed as
$L_{total} = w_{spa} L_{spa} + w_{exp} L_{exp} + w_{col} L_{col} + w_{tv} L_{tv}(g).$
Here, $L_{spa}$, $L_{exp}$, $L_{col}$, and $L_{tv}$ represent the spatial consistency loss, exposure control loss, color constancy loss, and illumination smoothness loss, respectively. $w_{spa}$, $w_{exp}$, $w_{col}$, and $w_{tv}$ represent the weights of these loss functions, which were originally established in the Zero-DCE method. These weights have been meticulously calibrated to balance the contributions of the different loss components. Our empirical evaluations have also confirmed their effectiveness for our task, leading us to use these loss functions and keep the same weights as the Zero-DCE approach throughout our experiments to ensure optimal performance.
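For reference, a sketch of how such a zero-reference objective can be assembled is given below. The exposure control, color constancy, and illumination smoothness terms follow the formulations described in the Zero-DCE paper as we understand them; the well-exposedness level of 0.6 over 16 × 16 local regions is an assumption, the spatial consistency term is omitted for brevity, and the loss weights are placeholders (the paper adopts the values from the Zero-DCE source code).

```python
import torch
import torch.nn.functional as F

def exposure_loss(y, well_exposed=0.6, patch=16):
    # Distance between the average intensity of local regions and a target exposure level
    gray = y.mean(dim=1, keepdim=True)                 # (B, 1, H, W)
    local_mean = F.avg_pool2d(gray, patch)             # non-overlapping patch averages
    return (local_mean - well_exposed).abs().mean()

def color_constancy_loss(y):
    # Penalize deviations between the average intensities of the R, G, and B channels
    mean_rgb = y.mean(dim=(2, 3))                      # (B, 3)
    r, g, b = mean_rgb[:, 0], mean_rgb[:, 1], mean_rgb[:, 2]
    return ((r - g) ** 2 + (r - b) ** 2 + (g - b) ** 2).mean()

def tv_loss(g):
    # Illumination smoothness on the curve-parameter map g
    dh = (g[:, :, 1:, :] - g[:, :, :-1, :]) ** 2
    dw = (g[:, :, :, 1:] - g[:, :, :, :-1]) ** 2
    return dh.mean() + dw.mean()

def total_loss(y, g, w_exp=1.0, w_col=1.0, w_tv=1.0):
    # Placeholder weights; spatial consistency loss is not reproduced in this sketch
    return w_exp * exposure_loss(y) + w_col * color_constancy_loss(y) + w_tv * tv_loss(g)
```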

4. Results

In this section, we detail our experimental setup, implementation specifics, and comparative results with the existing methods.

4.1. Implementation Details

To comprehensively train the proposed model, enabling it to enhance low-light regions and suppress overexposed areas, we select multi-exposure sequence images from the SICE dataset [36] as our training set, totaling 2002 images. This dataset encompasses scenes under various exposure conditions, providing a rich array of samples for the model to learn image features under different lighting scenarios. All the training images are resized to 512 × 512 pixels to ensure input consistency and enhance training efficiency. We implement our proposed method using the PyTorch 2.1.0 framework and conduct training and testing on a desktop computer equipped with an NVIDIA RTX 3090 GPU (NVIDIA, Santa Clara, CA, USA). For the training parameter settings, the batch size is set to 8. Network weight initialization employs a standard Gaussian distribution with zero mean and a standard deviation of 0.02, while the bias term is initialized as a constant. The optimization algorithm used is the Adam optimizer with a fixed learning rate of 0.0001. The weight parameters in the loss functions, $w_{spa}$, $w_{exp}$, $w_{col}$, and $w_{tv}$, are set to values consistent with those provided in the Zero-DCE source code to balance the contribution of different loss components.
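A sketch of this training configuration (Adam with a fixed learning rate of 1e-4, batch size 8, zero-mean Gaussian weight initialization with standard deviation 0.02, constant bias initialization) is shown below. It reuses the `GLENet` and `total_loss` sketches above; the data loader, the bias constant of 0, and the number of epochs are assumptions not specified in the text.

```python
import torch
import torch.nn as nn

def init_weights(m):
    # Zero-mean Gaussian (std 0.02) for convolution weights, constant for biases
    if isinstance(m, nn.Conv2d):
        nn.init.normal_(m.weight, mean=0.0, std=0.02)
        if m.bias is not None:
            nn.init.constant_(m.bias, 0.0)

model = GLENet().cuda()
model.apply(init_weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

num_epochs = 100                                   # placeholder; not specified in the paper
# `train_loader` is assumed to yield batches of 512x512 images from the SICE subset
for epoch in range(num_epochs):
    for low in train_loader:
        low = low.cuda()
        enhanced, g = model(low)
        loss = total_loss(enhanced, g)             # zero-reference losses (Section 3.5 sketch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```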
We conducted comprehensive evaluations across the following six datasets: LOLv1 [28], LOLv2-Real, LOLv2-Synthetic [37], LSRW-huawei [38], DICM [39], and MEF [40], obtaining extensive experimental data. We adopted a comprehensive evaluation strategy, combining qualitative and quantitative analyses to verify the effectiveness of the proposed method. Qualitative assessment involved analyzing the visual perception of enhanced images in terms of detail preservation, color restoration, and contrast balance. Quantitative evaluation employed two categories of objective evaluation metrics: reference-based metrics, including PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity Index Measure) [41], and LPIPS (Learned Perceptual Image Patch Similarity) [42], and no-reference metrics, such as NIQE (Natural Image Quality Evaluator) [43]. Note that higher values are preferred for PSNR and SSIM, while lower values indicate better performance for LPIPS and NIQE. The comparison methods included RetinexNet [28], Zero-DCE [33], R2RNet [38], RUAS [44], Zero-DCE++ [34], CLIP-LIT [31], GCP [45], and CoLIE [32]. Comprehensive experimental results demonstrated that our proposed method achieves satisfactory performance in both subjective visual quality and objective evaluation metrics. Particularly, we also conducted a comparison of the execution efficiency of various methods. The experimental data indicates that the proposed model is the most efficient in terms of execution speed, achieving the intended design objectives.
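As an illustration of the reference-based part of this evaluation, PSNR and SSIM can be computed with scikit-image as sketched below; LPIPS and NIQE require their respective third-party implementations and are not reproduced here. The function name and the uint8 input convention are our own assumptions.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(enhanced, reference):
    """enhanced, reference: HxWx3 uint8 arrays of the same size."""
    enhanced = enhanced.astype(np.float32) / 255.0
    reference = reference.astype(np.float32) / 255.0
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
    ssim = structural_similarity(reference, enhanced, channel_axis=-1, data_range=1.0)
    return psnr, ssim
```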

4.2. Ablation Experiment

To validate the optimal value of hyperparameter a and the effectiveness of the GCA attention mechanism, we conducted a series of ablation studies. Through these experiments, we were able to quantitatively assess the impact of these key components on the overall model performance, thereby providing empirical support for our choices.

4.2.1. Hyperparameter a

From the previous analysis and Figure 4, it is evident that hyperparameter a plays a crucial role in controlling the intensity of image enhancement. When a is either too small or too large, achieving the desired enhancement effect becomes challenging. Therefore, it is necessary to investigate the appropriate value of a. Table 1 presents the variation in the PSNR and SSIM metrics across a combined dataset of LOLv1, LOLv2-Real, LOLv2-Synthetic, and LSRW-huawei as a ranges from 0.10 to 0.16. The results demonstrate that the model achieves its best performance when a lies between 0.13 and 0.14. Specifically, the PSNR is higher when a is closer to 0.13, and the SSIM is higher when a is closer to 0.14. Considering that the enhancement of darker regions is more effective when a is set to 0.13 compared to 0.14, we select 0.13 as the value for hyperparameter a. Additionally, these metrics consistently decline when a deviates from the range of 0.13 to 0.14. The possible explanation for this trend is that, when a is smaller, the curve’s enhancement capability becomes excessively strong, leading to images with increased noise or overexposure. Conversely, when a is larger, the enhancement effect on dark regions becomes less pronounced, and the curve’s ability to enhance low-light conditions deteriorates, failing to adequately meet the task requirements.

4.2.2. Channel Attention Mechanism

Channel attention mechanisms enable our model to focus more on important features while suppressing less important ones, thereby better accomplishing the low-light image enhancement task. In our design, to prevent degradation of model generalization capacity caused by using attention at every layer, we strategically implemented attention only after two network layers, specifically following the second and fourth convolutional layers. To demonstrate the effectiveness of this design, we conducted comparative validation between implementations with and without our global channel attention mechanism. We again selected a combined dataset comprising LOLv1, LOLv2-Real, LOLv2-Synthetic, and LSRW-huawei for validation. As shown in Table 2, compared to the configuration without the GCA attention mechanism, the model incorporating the GCA module clearly achieved better scores across the PSNR, SSIM, and LPIPS metrics. This demonstrates that our designed GCA attention module contributes significantly to the image enhancement task. All the subsequent experiments were conducted on the foundation of models incorporating the GCA module.

4.3. Visual and Perceptual Comparisons

In this subsection, we conduct visual comparisons on the LOLv1, LOLv2-Real, LOLv2-Synthetic, LSRW-huawei, DICM, and MEF datasets to gain a more intuitive understanding of the effectiveness of our proposed brightness adjustment method. As illustrated in Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10, our method achieves superior visual perception in many scenarios. Furthermore, as demonstrated in Figure 7, our proposed method is also effective in scenarios where the input images originally have better lighting conditions. In contrast, methods such as RetinexNet, Zero-DCE, R2RNet, RUAS, and Zero-DCE++ result in varying degrees of overexposure or deviation from the original images.

4.4. Quantitative Comparisons

To further evaluate the effectiveness and advancement of our proposed model, we conducted comprehensive metric assessments on six different datasets, including LOLv1, LOLv2-Real, LOLv2-Synthetic, LSRW-huawei, DICM, and MEF. In all the reference metric evaluations, we highlight the best results in bold and underline, while the second- and third-best results are marked in bold. We measured reference metrics PSNR, SSIM, LPIPS, and the non-reference metric NIQE, as shown in Table 3, Table 4 and Table 5 below. Table 3 and Table 4 display the comparative results of the reference metrics. It can be observed that our method demonstrates significant advantages regarding the PSNR, SSIM, and LPIPS metrics. In the PSNR metric comparisons, our method is only slightly inferior to R2RNet on the LOLv1 dataset while achieving the highest metrics on the other three datasets. In terms of the SSIM metric comparisons, our method achieves commendable results, attaining the best performance on the LOLv1 and LSRW-huawei datasets, and securing a position within the top three on the LOLv2-Real dataset. Regarding the LPIPS metric comparison, our approach achieves the best results on both the LOLv2-Synthetic and LSRW-huawei datasets, and also ranks third on the remaining two datasets. Additionally, since the numerical results in the top-performing entries are closely clustered, we introduce a standard deviation column in Table 4 to further demonstrate the stability of the enhancement performance. Overall, judging from the standard deviation indicator in the experimental data, the standard deviation value of the proposed algorithm ranks in the upper-middle level among all the algorithms, showing a certain degree of stability, which basically meets our original design intention of achieving enhancement effects comparable to mainstream algorithms while ensuring optimal computational efficiency. The above results with a reference metric evaluation indicate that our proposed method has a better enhancement effect, and the enhanced image is close to the real image under normal light. Throughout these comparisons, we observed that R2RNet achieved relatively good results across multiple metrics and datasets, possibly because it employs supervised learning with a more complex model, whereas our model is unsupervised with a simpler structure. Furthermore, Table 3 and Table 4 demonstrate that, across all the datasets, our method consistently outperforms Zero-DCE and Zero-DCE++ in the PSNR, SSIM, and LPIPS metrics, validating the superiority of our curve estimation method, which generates enhanced images more closely resembling the real normal-light reference images.
In Table 5, we present a comprehensive comparison of various methods across six datasets using the no-reference indicator NIQE. The results demonstrate that our proposed model achieves good performance across the various datasets. Specifically, our model attains the best metric on the LOLv2-Synthetic dataset, outperforming all the other methods. Additionally, on the LOLv1, LOLv2-Real, and LSRW-huawei datasets, our NIQE metrics consistently rank within the top three, further validating the effectiveness of our approach. It should be noted that, although the proposed method does not rank in the top three on the DICM and MEF datasets, the difference in performance compared to the third-ranked algorithm is quite small.

4.5. Computation Time

To demonstrate the fast computational capability of our model, we conducted experiments using 100 images on the LOLv2-Real dataset and obtained the average inference time for one image. As illustrated in Table 6, our model achieves remarkable inference speed. It consistently outperforms all the other compared algorithms, securing the top position in terms of computational efficiency. In summary, our approach effectively reconciles the trade-off between rapid inference and superior performance metrics. With a computation time of 0.7 ms, our method is highly suitable for implementation on mobile devices, satisfying the real-time processing requirements. Moreover, the compact model size facilitates deployment on resource-constrained mobile platforms.
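A minimal sketch of how such an average per-image inference time can be measured on the GPU is given below; warm-up iterations and CUDA synchronization are included to obtain stable timings. The helper name and the assumption that `images` is a list of preprocessed (1, 3, H, W) tensors are ours.

```python
import time
import torch

@torch.no_grad()
def average_inference_time(model, images, warmup=10):
    """Return the mean per-image GPU inference time in seconds."""
    model.eval().cuda()
    for _ in range(warmup):                  # warm-up to exclude CUDA initialization overhead
        model(images[0].cuda())
    torch.cuda.synchronize()
    start = time.time()
    for img in images:                       # e.g., 100 test images from LOLv2-Real
        model(img.cuda())
    torch.cuda.synchronize()
    return (time.time() - start) / len(images)
```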

5. Conclusions

In this work, we propose a novel lightweight low-light image enhancement method and validate its performance through extensive experiments. The proposed method retains the zero-reference learning strategy of the Zero-DCE method, eliminating the need for paired or unpaired datasets during training, thus making it highly versatile across various application scenarios. The key differentiator between our approach and the Zero-DCE method lies in the significant advancements achieved in both inference speed and image quality. These improvements are primarily attributed to our redesigned GLE curve and a lightweight network architecture that incorporates a GCA mechanism. Specifically, we design a new curve based on Gamma correction principles that requires only a single mapping to complete low-light image enhancement, substantially improving the processing efficiency compared to the Zero-DCE method, which requires eight iterations of the LE curve application to accomplish low-light image enhancement. Additionally, we introduce a GCA mechanism, which enables the model to focus more effectively on key features, thereby significantly enhancing performance in low-light-enhancement tasks. Although our method demonstrates encouraging outcomes, its image quality performance still lags behind certain advanced supervised and unsupervised techniques. Consequently, subsequent research will investigate strategies for boosting generated image quality without compromising the model’s lightweight nature, providing more efficient and robust solutions for practical application scenarios.

Author Contributions

S.X. and H.Z. contributed to the conception of the study, H.Z. drafted the main manuscript text, S.X. significantly contributed to the manuscript revision, and L.P., H.H. and S.J. made significant contributions to the analysis and performed the associated experiments. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of China, grant number 62162043.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors would like to express their sincere gratitude to the contributors of the source code for this study, which helped us to conduct our research and improve the quality and reproducibility of our work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, T.; Zhang, K.; Zhang, Y.; Luo, W.; Stenger, B.; Lu, T.; Kim, T.K.; Liu, W. LLDiffusion: Learning degradation representations in diffusion models for low-light image enhancement. Pattern Recognit. 2025, 166, 111628. [Google Scholar] [CrossRef]
  2. Zhang, N.; Han, X.; Liu, C.; Gang, R.; Ma, S.; Cao, Y. Joint Luminance Adjustment and Color Correction for Low-Light Image Enhancement Network. Appl. Sci. 2024, 14, 6320. [Google Scholar] [CrossRef]
  3. Kapoor, S.; Arora, Y.; Bansal, N.; Virdi, K.; Ismail, F.S.M.; Malik, S. Low Light Image Enhancement: A Special Click. In Proceedings of the 2025 2nd International Conference on Computational Intelligence, Communication Technology and Networking (CICTN), Ghaziabad, India, 6–7 February 2025; pp. 532–537. [Google Scholar]
  4. Li, L.; Peng, J.; Wan, Y. Patch-Wise-Based Diffusion Model with Uncertainty Guidance for Low-Light Image Enhancement. Appl. Sci. 2025, 15, 1604. [Google Scholar] [CrossRef]
  5. Pei, X.; Huang, Y.; Su, W.; Zhu, F.; Liu, Q. FFTFormer: A spatial-frequency noise aware CNN-Transformer for low light image enhancement. Knowl.-Based Syst. 2025, 314, 113055. [Google Scholar] [CrossRef]
  6. Cai, Y.; Bian, H.; Lin, J.; Wang, H.; Timofte, R.; Zhang, Y. Retinexformer: One-stage retinex-based transformer for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 12504–12513. [Google Scholar]
  7. Ryu, J.; Lim, H.; Oh, H.; Oh, J.; Paik, J. Low-Light Image Enhancement and Color Correction Using a Contrast-Driven Neural Network. In Proceedings of the 2025 IEEE International Conference on Consumer Electronics (ICCE), Kaohsiung, Taiwan, 16–18 July 2025; pp. 1–3. [Google Scholar]
  8. Shen, Z.; Wang, C.; Li, F.; Liang, J.; Li, X.; Qu, D. Self-Guided Pixel-Wise Calibration for Low-Light Image Enhancement. Appl. Sci. 2024, 14, 11033. [Google Scholar] [CrossRef]
  9. Singh, K.; Kapoor, R.; Sinha, S.K. Enhancement of low exposure images via recursive histogram equalization algorithms. Optik 2015, 126, 2619–2625. [Google Scholar] [CrossRef]
  10. Abdullah-Al-Wadud, M.; Kabir, M.H.; Dewan, M.A.A.; Chae, O. A dynamic histogram equalization for image contrast enhancement. IEEE Trans. Consum. Electron. 2007, 53, 593–600. [Google Scholar] [CrossRef]
  11. Tan, T.; Sim, K.; Tso, C.P. Image enhancement using background brightness preserving histogram equalisation. Electron. Lett. 2012, 48, 155–157. [Google Scholar] [CrossRef]
  12. Huang, S.C.; Cheng, F.C.; Chiu, Y.S. Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Trans. Image Process. 2012, 22, 1032–1041. [Google Scholar] [CrossRef]
  13. Khunteta, A.; Ghosh, D.; Ribhu. Fuzzy rule-based image exposure level estimation and adaptive gamma correction for contrast enhancement in dark images. In Proceedings of the 2012 IEEE 11th International Conference on Signal Processing, Beijing, China, 21–25 October 2012; Volume 1, pp. 667–672. [Google Scholar]
  14. Singh, H.; Kumar, A.; Balyan, L.; Singh, G.K. Swarm intelligence optimized piecewise gamma corrected histogram equalization for dark image enhancement. Comput. Electr. Eng. 2018, 70, 462–475. [Google Scholar] [CrossRef]
  15. Rahman, Z.u.; Jobson, D.J.; Woodell, G.A. Multi-scale retinex for color image enhancement. In Proceedings of the 3rd IEEE International Conference on Image Processing, Lausanne, Switzerland, 19 September 1996; Volume 3, pp. 1003–1006. [Google Scholar]
  16. Rahman, Z.u.; Jobson, D.J.; Woodell, G.A. Retinex processing for automatic image enhancement. J. Electron. Imaging 2004, 13, 100–110. [Google Scholar]
  17. Fu, X.; Sun, Y.; LiWang, M.; Huang, Y.; Zhang, X.P.; Ding, X. A novel retinex based approach for image enhancement with illumination adjustment. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 1190–1194. [Google Scholar]
  18. Kong, N.S.P.; Ibrahim, H.; Hoo, S.C. A literature review on histogram equalization and its variations for digital image enhancement. Int. J. Innov. Manag. Technol. 2013, 4, 386. [Google Scholar] [CrossRef]
  19. Bai, J.; Yin, Y.; He, Q.; Li, Y.; Zhang, X. Retinexmamba: Retinex-based mamba for low-light image enhancement. arXiv 2024, arXiv:2405.03349. [Google Scholar]
  20. Yi, X.; Xu, H.; Zhang, H.; Tang, L.; Ma, J. Diff-Retinex++: Retinex-Driven Reinforced Diffusion Model for Low-Light Image Enhancement. IEEE Trans. Pattern Anal. Mach. Intell. 2025; early access. [Google Scholar]
  21. Poynton, C. Digital Video and HD: Algorithms and Interfaces; Elsevier: Amsterdam, The Netherlands, 2012. [Google Scholar]
  22. Jung, C.; Yang, Q.; Sun, T.; Fu, Q.; Song, H. Low light image enhancement with dual-tree complex wavelet transform. J. Vis. Commun. Image Represent. 2017, 42, 28–36. [Google Scholar] [CrossRef]
  23. Aamir, M.; Rehman, Z.; Pu, Y.F.; Ahmed, A.; Abro, W.A. Image enhancement in varying light conditions based on wavelet transform. In Proceedings of the 2019 16th International Computer Conference on Wavelet Active Media Technology and Information Processing, Chengdu, China, 14–15 December 2019; pp. 317–322. [Google Scholar]
  24. Jebadass, J.R.; Balasubramaniam, P. Low light enhancement algorithm for color images using intuitionistic fuzzy sets with histogram equalization. Multimed. Tools Appl. 2022, 81, 8093–8106. [Google Scholar] [CrossRef]
  25. Cepeda-Negrete, J.; Sanchez-Yanez, R.E. Automatic selection of color constancy algorithms for dark image enhancement by fuzzy rule-based reasoning. Appl. Soft Comput. 2015, 28, 1–10. [Google Scholar] [CrossRef]
  26. Zhang, E.; Guo, L.; Guo, J.; Yan, S.; Li, X.; Kong, L. A low-brightness image enhancement algorithm based on multi-scale fusion. Appl. Sci. 2023, 13, 10230. [Google Scholar] [CrossRef]
  27. Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662. [Google Scholar] [CrossRef]
  28. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560. [Google Scholar]
  29. Zhang, Y.; Zhang, J.; Guo, X. Kindling the darkness: A practical low-light image enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1632–1640. [Google Scholar]
  30. Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. Enlightengan: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef]
  31. Liang, Z.; Li, C.; Zhou, S.; Feng, R.; Loy, C.C. Iterative prompt learning for unsupervised backlit image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 8094–8103. [Google Scholar]
  32. Chobola, T.; Liu, Y.; Zhang, H.; Schnabel, J.A.; Peng, T. Fast Context-Based Low-Light Image Enhancement via Neural Implicit Representations. In Proceedings of the European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 413–430. [Google Scholar]
  33. Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1780–1789. [Google Scholar]
  34. Li, C.; Guo, C.; Loy, C.C. Learning to enhance low-light image via zero-reference deep curve estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4225–4238. [Google Scholar] [CrossRef]
  35. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
  36. Cai, J.; Gu, S.; Zhang, L. Learning a deep single image contrast enhancer from multi-exposure images. IEEE Trans. Image Process. 2018, 27, 2049–2062. [Google Scholar] [CrossRef]
  37. Yang, W.; Wang, W.; Huang, H.; Wang, S.; Liu, J. Sparse gradient regularized deep retinex network for robust low-light image enhancement. IEEE Trans. Image Process. 2021, 30, 2072–2086. [Google Scholar] [CrossRef] [PubMed]
  38. Hai, J.; Xuan, Z.; Yang, R.; Hao, Y.; Zou, F.; Lin, F.; Han, S. R2rnet: Low-light image enhancement via real-low to real-normal network. J. Vis. Commun. Image Represent. 2023, 90, 103712. [Google Scholar] [CrossRef]
  39. Lee, C.; Lee, C.; Kim, C.S. Contrast enhancement based on layered difference representation of 2D histograms. IEEE Trans. Image Process. 2013, 22, 5372–5384. [Google Scholar] [CrossRef] [PubMed]
  40. Ma, K.; Zeng, K.; Wang, Z. Perceptual quality assessment for multi-exposure image fusion. IEEE Trans. Image Process. 2015, 24, 3345–3356. [Google Scholar] [CrossRef]
  41. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  42. Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 586–595. [Google Scholar]
  43. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212. [Google Scholar] [CrossRef]
  44. Liu, R.; Ma, L.; Zhang, J.; Fan, X.; Luo, Z. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10561–10570. [Google Scholar]
  45. Jeon, J.J.; Park, J.Y.; Eom, I.K. Low-light image enhancement using gamma correction prior in mixed color spaces. Pattern Recognit. 2024, 146, 110001. [Google Scholar] [CrossRef]
Figure 1. The curves generated by the formula y = x^γ for different values of the parameter γ.
Figure 2. Comparison of the effects of including the regulation factor x versus excluding it for different values of the parameter g. (a): The curves generated by the formula y = x^(1 + g) without a regulation factor. (b): The curves generated by the formula y = x^(1 + gx) with a regulation factor.
Figure 3. The curves generated by the formula y = x^(1 + g·x^a) with asymmetric parameter values of g. (a): The exposure-enhancement and exposure-suppression curves with g limited to the range [−1, 1]. (b): The exposure-suppression curves after g is expanded by a factor of 10 when g is greater than 0.
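For illustration, the minimal NumPy sketch below plots the GLE curve family y = x^(1 + g·x^a) described in Figures 1–3. It is only a sketch of the curve formula as stated in the captions; the plotting ranges and the default a = 0.13 (the best-performing value in Table 1) are our choices, and the code is not the authors' released implementation.

```python
import numpy as np
import matplotlib.pyplot as plt

def gle_curve(x, g, a=0.13):
    """Gamma-based light-enhancement (GLE) curve, y = x ** (1 + g * x ** a).

    x is a normalized intensity in [0, 1]; g < 0 pushes the exponent below 1
    and brightens the pixel, while g > 0 suppresses exposure. Parameter
    ranges and the default a are illustrative assumptions.
    """
    return np.power(x, 1.0 + g * np.power(x, a))

x = np.linspace(0.0, 1.0, 256)
plt.figure(figsize=(5, 4))
for g in np.linspace(-1.0, 1.0, 9):
    plt.plot(x, gle_curve(x, g), label=f"g = {g:+.2f}")
plt.xlabel("input intensity x")
plt.ylabel("enhanced intensity y")
plt.legend(fontsize=7)
plt.tight_layout()
plt.show()
```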
Figure 4. Overview of our proposed neural network. Both the input and output images are in the RGB color space. During training, the parameter g is updated through the zero-reference loss functions. During testing, the parameter g is first estimated by the network, and the brightness of the R, G, and B channels of the original image is then adjusted pixel-wise using the proposed GLE curve. E is the target exposure level of the exposure control loss, set to 0.6 by default.
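As a concrete reference for the exposure control loss mentioned in the caption (with E = 0.6), the PyTorch sketch below follows the Zero-DCE-style formulation, which penalizes the distance between the local average intensity of the enhanced image and the target exposure level E. The 16 × 16 patch size and the squared penalty are assumptions carried over from Zero-DCE rather than details stated here.

```python
import torch
import torch.nn.functional as F

def exposure_control_loss(enhanced: torch.Tensor, E: float = 0.6,
                          patch_size: int = 16) -> torch.Tensor:
    """Zero-reference exposure control loss (Zero-DCE-style sketch).

    enhanced: (N, 3, H, W) tensor in [0, 1]. The loss drives the mean
    intensity of each non-overlapping patch toward the target level E.
    The patch size and squared penalty are illustrative assumptions.
    """
    gray = enhanced.mean(dim=1, keepdim=True)      # per-pixel mean over RGB
    local_mean = F.avg_pool2d(gray, patch_size)    # patch-wise average exposure
    return torch.mean((local_mean - E) ** 2)
```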
Figure 5. Visual comparisons of various methods on LOLv1 dataset. (a) input image. (b) SSIM = 0.602; (c) SSIM = 0.400; (d) SSIM = 0.587; (e) SSIM = 0.590; (f) SSIM = 0.337; (g) SSIM = 0.511; (h) SSIM = 0.549; (i) SSIM = 0.608; (j) SSIM = 0.644.
Figure 6. Visual comparisons of various methods on LOLv2-Real dataset. (a) input image. (b) SSIM = 0.512; (c) SSIM = 0.346; (d) SSIM = 0.649; (e) SSIM = 0.460; (f) SSIM = 0.325; (g) SSIM = 0.805; (h) SSIM = 0.624; (i) SSIM = 0.707; (j) SSIM = 0.670.
Figure 7. Visual comparisons of various methods on LOLv2-Synthetic dataset. (a) input image. (b) SSIM = 0.972; (c) SSIM = 0.718; (d) SSIM = 0.386; (e) SSIM = 0.208; (f) SSIM = 0.685; (g) SSIM = 0.968; (h) SSIM = 0.892; (i) SSIM = 0.925; (j) SSIM = 0.964.
Figure 8. Visual comparisons of various methods on LSRW-huawei dataset. (a) input image. (b) SSIM = 0.611; (c) SSIM = 0.388; (d) SSIM = 0.660; (e) SSIM = 0.698; (f) SSIM = 0.375; (g) SSIM = 0.672; (h) SSIM = 0.675; (i) SSIM = 0.730; (j) SSIM = 0.756. The image enhanced by RetinexNet exhibits noticeable distortion, whereas the image enhanced by our method shows more natural colors and a better overall result.
Figure 9. Visual comparisons of various methods on DICM dataset. (a) input image. (b) NIQE = 3.142; (c) NIQE = 2.040; (d) NIQE = 4.894; (e) NIQE = 4.496; (f) NIQE = 2.289; (g) NIQE = 2.428; (h) NIQE = 2.691; (i) NIQE = 2.126; (j) NIQE = 2.064.
Figure 10. Visual comparisons of various methods on MEF dataset. (a) input image. (b) NIQE = 2.501; (c) NIQE = 2.141; (d) NIQE = 4.307; (e) NIQE = 5.859; (f) NIQE = 2.350; (g) NIQE = 2.179; (h) NIQE = 2.597; (i) NIQE = 2.560; (j) NIQE = 2.347.
Table 1. Performance comparison of our method with hyperparameter a ranging from 0.10 to 0.16.

|        | a = 0.10 | a = 0.11 | a = 0.12 | a = 0.13 | a = 0.14 | a = 0.15 | a = 0.16 |
|--------|----------|----------|----------|----------|----------|----------|----------|
| PSNR ↑ | 18.672   | 18.926   | 19.063   | 19.079   | 18.955   | 18.790   | 18.602   |
| SSIM ↑ | 0.610    | 0.628    | 0.629    | 0.635    | 0.646    | 0.644    | 0.635    |
Table 2. Validation of the effectiveness of the GCA module.

| GCA | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|-----|--------|--------|---------|
| ✗   | 18.422 | 0.585  | 0.184   |
| ✓   | 19.079 | 0.635  | 0.178   |
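Table 2 isolates the contribution of the global channel attention (GCA) module. Since the paper relates channel attention to squeeze-and-excitation networks [35], the PyTorch block below sketches an SE-style channel-attention layer as a stand-in; the reduction ratio and layer sizes are our assumptions, not the authors' exact GCA design.

```python
import torch
import torch.nn as nn

class GlobalChannelAttention(nn.Module):
    """SE-style channel attention, used here as a stand-in for the GCA module.

    Global average pooling summarizes each channel, a small bottleneck MLP
    produces per-channel weights in (0, 1), and the input features are then
    rescaled channel-wise. The reduction ratio is an assumed value.
    """
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # global average pooling -> (N, C)
        w = self.fc(w).view(n, c, 1, 1)  # per-channel attention weights
        return x * w                     # channel-wise rescaling
```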
Table 3. Comparison of reference metrics (PSNR and SSIM) across the LOLv1, LOLv2-Real, LOLv2-Synthetic, and LSRW-Huawei datasets.

| Method     | PSNR ↑ LOLv1 | PSNR ↑ LOLv2-Real | PSNR ↑ LOLv2-Synthetic | PSNR ↑ LSRW-Huawei | SSIM ↑ LOLv1 | SSIM ↑ LOLv2-Real | SSIM ↑ LOLv2-Synthetic | SSIM ↑ LSRW-Huawei |
|------------|--------------|-------------------|------------------------|--------------------|--------------|-------------------|------------------------|--------------------|
| RetinexNet | 16.774 | 16.097 | 17.137 | 16.819 | 0.546 | 0.440 | 0.795 | 0.556 |
| Zero-DCE   | 16.752 | 17.866 | 18.070 | 17.956 | 0.461 | 0.494 | 0.538 | 0.451 |
| R2RNet     | 18.179 | 17.949 | 16.089 | 17.277 | 0.605 | 0.654 | 0.524 | 0.554 |
| RUAS       | 16.405 | 15.326 | 13.404 | 15.690 | 0.553 | 0.485 | 0.549 | 0.571 |
| Zero-DCE++ | 15.901 | 18.352 | 18.223 | 17.138 | 0.433 | 0.507 | 0.536 | 0.418 |
| CLIP-LIT   | 12.394 | 15.182 | 16.190 | 13.561 | 0.526 | 0.563 | 0.718 | 0.555 |
| GCP        | 17.393 | 16.610 | 16.620 | 16.550 | 0.521 | 0.436 | 0.668 | 0.541 |
| CoLIE      | 16.310 | 16.910 | 17.697 | 17.249 | 0.565 | 0.516 | 0.751 | 0.590 |
| Ours       | 18.065 | 19.044 | 19.309 | 18.940 | 0.610 | 0.606 | 0.673 | 0.619 |
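For readers reproducing the full-reference scores in Table 3, the snippet below shows how PSNR and SSIM can be computed with scikit-image. The exact evaluation settings used by the authors (color handling, data range) are not stated here, so this is only an illustrative recipe under those assumptions.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(enhanced: np.ndarray, reference: np.ndarray):
    """Compute PSNR and SSIM for one RGB image pair with values in [0, 1].

    channel_axis=2 treats the last axis as color; data_range=1.0 matches
    normalized inputs. Both settings are illustrative assumptions.
    """
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
    ssim = structural_similarity(reference, enhanced,
                                 channel_axis=2, data_range=1.0)
    return psnr, ssim
```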
Table 4. Comparison of the reference metric (LPIPS ↓, mean ± standard deviation) across the LOLv1, LOLv2-Real, LOLv2-Synthetic, and LSRW-Huawei datasets.

| Method     | LOLv1         | LOLv2-Real    | LOLv2-Synthetic | LSRW-Huawei   |
|------------|---------------|---------------|-----------------|---------------|
| RetinexNet | 0.364 ± 0.098 | 0.425 ± 0.110 | 0.199 ± 0.062   | 0.360 ± 0.072 |
| Zero-DCE   | 0.255 ± 0.075 | 0.268 ± 0.097 | 0.140 ± 0.048   | 0.293 ± 0.072 |
| R2RNet     | 0.170 ± 0.058 | 0.183 ± 0.063 | 0.226 ± 0.078   | 0.260 ± 0.059 |
| RUAS       | 0.180 ± 0.088 | 0.206 ± 0.062 | 0.281 ± 0.091   | 0.346 ± 0.102 |
| Zero-DCE++ | 0.233 ± 0.078 | 0.228 ± 0.091 | 0.129 ± 0.053   | 0.260 ± 0.064 |
| CLIP-LIT   | 0.274 ± 0.101 | 0.256 ± 0.117 | 0.164 ± 0.073   | 0.285 ± 0.070 |
| GCP        | 0.331 ± 0.115 | 0.349 ± 0.143 | 0.153 ± 0.077   | 0.308 ± 0.102 |
| CoLIE      | 0.260 ± 0.107 | 0.259 ± 0.124 | 0.140 ± 0.066   | 0.286 ± 0.087 |
| Ours       | 0.215 ± 0.084 | 0.217 ± 0.100 | 0.111 ± 0.052   | 0.253 ± 0.072 |
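The LPIPS scores in Table 4 (lower is better) can be computed with the reference lpips package accompanying [42]. The AlexNet backbone below is the package default; which backbone was actually used for Table 4 is not stated here, so treat that choice as an assumption.

```python
import torch
import lpips  # pip install lpips

# AlexNet backbone is the package default; the backbone used for Table 4
# is not specified here, so this choice is an assumption.
loss_fn = lpips.LPIPS(net='alex')

def lpips_distance(enhanced: torch.Tensor, reference: torch.Tensor) -> float:
    """enhanced, reference: (1, 3, H, W) tensors scaled to [-1, 1]."""
    with torch.no_grad():
        return loss_fn(enhanced, reference).item()
```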
Table 5. Comparison of the non-reference metric (NIQE ↓) across the LOLv1, LOLv2-Real, LOLv2-Synthetic, LSRW-Huawei, DICM, and MEF datasets.

| Method     | LOLv1 | LOLv2-Real | LOLv2-Synthetic | LSRW-Huawei | DICM  | MEF   |
|------------|-------|------------|-----------------|-------------|-------|-------|
| RetinexNet | 8.734 | 9.254      | 5.346           | 4.063       | 4.438 | 4.344 |
| Zero-DCE   | 7.755 | 8.079      | 4.485           | 3.511       | 3.807 | 3.435 |
| R2RNet     | 4.721 | 4.984      | 5.214           | 5.121       | 5.203 | 4.886 |
| RUAS       | 6.291 | 6.501      | 6.530           | 5.699       | 6.762 | 5.369 |
| Zero-DCE++ | 7.871 | 8.106      | 4.522           | 3.757       | 3.947 | 3.537 |
| CLIP-LIT   | 8.102 | 8.213      | 4.553           | 3.902       | 3.938 | 3.461 |
| GCP        | 9.211 | 9.156      | 4.665           | 4.020       | 3.880 | 3.586 |
| CoLIE      | 8.204 | 8.283      | 4.503           | 3.494       | 3.761 | 3.410 |
| Ours       | 7.600 | 7.860      | 4.455           | 3.716       | 3.882 | 3.465 |
Table 6. Comparison of average computation time per image (ms) on the LOLv2-Real dataset.

| Method    | RetinexNet | R2RNet | RUAS | Zero-DCE++ | CLIP-LIT | GCP  | CoLIE  | Ours |
|-----------|------------|--------|------|------------|----------|------|--------|------|
| Time (ms) | 95.1       | 1345.0 | 3.5  | 1.0        | 1.9      | 49.0 | 1243.0 | 0.7  |
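Per-image runtimes such as those in Table 6 depend heavily on hardware and warm-up. A typical GPU timing protocol is sketched below; the warm-up count and averaging scheme are our assumptions, not the authors' stated measurement procedure.

```python
import time
import torch

def average_inference_time_ms(model, images, warmup: int = 10) -> float:
    """Average forward-pass time per image in milliseconds.

    images: list of (1, 3, H, W) tensors already on the target device.
    cuda.synchronize() keeps asynchronous kernel launches from skewing the
    timing; warm-up passes exclude one-off setup costs from the average.
    """
    model.eval()
    with torch.no_grad():
        for img in images[:warmup]:                 # warm-up passes
            model(img)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        start = time.perf_counter()
        for img in images:
            model(img)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / len(images)
```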