Article

DADNet: Dual-Branch Low-Light Image Enhancement Network Based on Attention Mechanism and Dark Channel Prior

School of Big Data Engineering, Kaili University, Kaili 556011, China
*
Author to whom correspondence should be addressed.
Symmetry 2026, 18(4), 564; https://doi.org/10.3390/sym18040564
Submission received: 14 February 2026 / Revised: 19 March 2026 / Accepted: 23 March 2026 / Published: 26 March 2026
(This article belongs to the Section Computer)

Abstract

Images captured in low-light conditions often suffer from poor visibility, low contrast, and color distortion caused by uneven lighting. Most existing enhancement methods exhibit unstable brightness recovery and color cast, which degrade both visual quality and the performance of downstream vision tasks. To address these issues, we propose DADNet, a dual-branch network built on an attention mechanism and the dark channel prior, comprising an Illumination Enhancement Module (IEM) and a Color Transformation Module (CTM). The IEM extracts multi-scale features and improves lighting based on the dark channel prior, while the CTM employs the attention mechanism to process color features and adjust saturation adaptively. Experimental results on three datasets show that DADNet performs well in both qualitative and quantitative evaluations: it effectively preserves image structure and texture details while achieving a good balance between overall brightness and color quality.

1. Introduction

The performance of computer vision systems is highly dependent on environmental conditions. In low-light or poorly exposed environments, imaging devices often produce images with low contrast, poor visibility, and color distortion. When such image degradation occurs, it can significantly impact the effectiveness of computer vision tasks. Therefore, effective enhancement of low-light images becomes crucial for maintaining system performance.
To address the negative impact of low-light conditions on computer vision tasks, researchers have conducted extensive work [1] on low-light image enhancement (LLIE), which has evolved through three main stages [2,3,4,5,6]: grayscale transformation, physical models, and deep neural networks. Traditional methods achieved some success: Retinex theory [7] and the dark channel prior (DCP) [8] provide the theoretical foundation for LLIE, but methods built on them performed poorly in color restoration and texture preservation and relied on manual parameter tuning, which limits their robustness. Deep learning methods use neural networks and large-scale datasets to learn characteristic patterns from low-light images. Among early deep learning methods, LLNet [9], the first end-to-end deep learning method for LLIE, used stacked autoencoders trained on synthetic data to perform enhancement and denoising simultaneously. RetinexNet [10] is the first end-to-end learning framework that implements the Retinex theory using convolutional neural networks, while LLFormer [11] introduced the Transformer architecture, reducing the attention complexity to linear. These deep learning methods demonstrate significantly better enhancement performance than traditional methods. However, they still have limitations in restoring illumination and correcting color degradation under complex lighting conditions.
In this work, we develop a dual-branch network based on the attention mechanism and dark channel prior (DADNet). DADNet effectively enhances illumination and restores color while balancing brightness, contrast, and saturation to produce visually natural results. The following is a summary of the contributions.
  • DADNet is a new dual-branch method for low-light enhancement. The Illumination Enhancement Module (IEM) branch uses the DCP theory to restore brightness while the Color Transformation Module (CTM) branch corrects color distortion and adjusts saturation through the attention mechanism. DADNet effectively enhances low-light images with natural visual outputs.
  • The IEM branch generates the illumination enhancement image based on the DCP theory and pixel-level least squares model. The IEM extracts dark channel features through two components, including the Dark Channel Feature Block (DCFB) and the Dark Channel Block (DCB). We also construct a physical model that combines the low-light image with multiplication and addition feature maps, achieving balanced brightness enhancement through pixel-wise computation.
  • The CTM branch uses the adaptive attention mechanism. It focuses on color features and saturation adjustment across the entire image and significantly improves performance in visual tasks.
  • The experimental results demonstrate that DADNet effectively enhances image brightness to appropriate levels. It also preserves details and texture information while accurately restoring image colors. In both qualitative and quantitative assessments, DADNet outperforms state-of-the-art methods.
The remainder of this paper is organized as follows: Section 2 provides a brief review of relevant research work, Section 3 describes the proposed DADNet, Section 4 presents the experimental results, and Section 5 concludes this paper.

2. Related Works

Existing LLIE methods can be broadly classified into grayscale transformation (Section 2.1), physical model (Section 2.2), and deep learning-based methods (Section 2.3). This section briefly reviews LLIE methods from each category.

2.1. Grayscale Transformation-Based Methods

Early research mainly used grayscale transformation methods such as histogram equalization and gamma correction [2,5,12], which improve brightness by expanding the grayscale range. However, these methods only work well for images with a limited dynamic range and have major drawbacks. Adaptive Histogram Equalization (AHE) [13], Contrast-Limited Adaptive Histogram Equalization (CLAHE) [14], and Dynamic Histogram Equalization (DHE) [15] enhance image contrast directly but often ignore image content features, which leads to poor enhancement results. As research progressed, researchers tended to combine multiple grayscale transformation methods to achieve better brightness enhancement. Liu et al. [16] integrated gamma correction and histogram equalization within a multi-scale exposure fusion framework to improve image quality. Jeon et al. [5] introduced a method that adaptively adjusts local gamma values for high- and low-illumination regions, which improves computational speed. Recent studies [17,18] focus on applying non-spatial-domain information to low-light enhancement. These methods analyze frequency or gradient features to enhance local structure and details and often achieve better enhancement results.

2.2. Physical Model-Based Methods

Physical model-based methods aim to recover high-quality images by modeling the image degradation process and solving the inverse problem [19,20,21,22,23]. The Retinex theory proposed by Land et al. [7] suggests that object color remains constant under varying illumination. This theory splits an image into two independent components, reflectance and illumination, and image enhancement is achieved by separately processing and optimizing these components. Based on this foundation, Multi-Scale Retinex with Color Restoration (MSRCR) [24] generates enhanced images through pixel-wise operations. Guo et al. [19] proposed a low-light image enhancement method named LIME that estimates only the illumination component, while Lin et al. [22] designed a new regularizer to smooth the illumination map and reformulated the ADMM algorithm for Retinex decomposition, significantly improving illumination adjustment performance. To address the halo effect under complex illumination, Yu et al. [25] treated ambient light and light attenuation rates as pointwise variables, which avoids over-enhancement but may introduce false color in extremely dark regions. Zhou et al. [21] proposed a reflectance component-based adaptive partitioning method that improves low-light image quality through structural weight map generation. NPLIE [26] achieves natural lighting variations in images by perceiving the initial illumination structure, maintaining color consistency throughout this process. These methods depend on preset parameters to estimate reflectance, which often causes high computational cost, color distortion, and loss of detail.
The atmospheric scattering model [27] is one of the most widely used models in the field of image enhancement. Tan [28] used Markov random fields to restore local contrast in foggy images; this method tends to produce noticeable halos in areas of dense fog. The DCP method developed by He et al. [8] effectively addresses the shortcomings of previous methods, though it may introduce halo effects and sky region distortion in areas with significant changes in scene depth. Dong et al. [29] discovered that inverted low-light images share similar characteristics with foggy images. Guo et al. [30] combined the DCP theory with the Retinex model, using dual-parameter control to achieve anti-overexposure, high detail retention, and precise color restoration. Abraham et al. [31] optimized images based on the DCP theory and then enhanced the brightness component in the Lab color space using an S-shaped nonlinear mapping function, which effectively improves the overall contrast and brightness of the image.

2.3. Deep Learning-Based Methods

In recent years, deep learning has greatly improved the visual quality of LLIE. CNN-based methods [32,33,34,35,36,37,38,39] effectively restore illumination by extracting spatial information and local features. MBLLEN [32] utilizes CNNs to extract multi-level features and combines the outputs of subnetworks for optimal results. Although CNNs work well for enhancement, they struggle with long-range dependencies, which often causes detail loss and noise amplification. To solve these problems, researchers have added attention mechanisms to create CNN–attention architectures [40,41,42,43,44,45]. IAT [40] created a lightweight method that uses attention to dynamically adjust image signal processor parameters. TCPCNet [41] maintains global brightness while adding local details, and He et al. [45] developed a CNN–Transformer framework that simplifies enhancement into a curve estimation task, reducing computation needs. However, attention models have their own limitations. They often miss local details, require heavy computation, and lack clear physical interpretation. The integration of attention mechanisms with Retinex theory has enabled better global feature learning. These methods [46,47] improve model interpretability while restoring image details and structure more effectively. EnlightenGAN [48] uses self-feature preservation, perceptual loss, and attention mechanisms for end-to-end enhancement without synthetic data. Cai et al. [49] designed a single-stage network that guides Transformer interactions between different lighting areas, which reduces noise and color issues. Jiang et al. [50] combined Retinex theory with self-attention and CNN advantages for better color consistency and natural results, while Luo et al. [51] proposed a Retinex-based method using polynomial regularized decomposition and attention–GAN fusion. To improve scene adaptability, researchers have also explored unsupervised and zero-shot learning methods. CoLIE [52] uses implicit neural functions combined with an embedded guided filter for image enhancement, but its ability to improve brightness is limited.
LLIE is evolving from single CNN architectures toward approaches that integrate attention mechanisms, Retinex theory, and zero-shot learning. As a result, model interpretability is steadily improving. However, existing methods still struggle to preserve color consistency while restoring brightness. To address this issue, we propose a new LLIE method based on Retinex theory. DADNet consists of two branches: the IEM branch incorporates the DCP to restore brightness, and the CTM branch introduces the attention mechanism to prevent color distortion.

3. Proposed Method

3.1. Overall Network Architecture

Deep learning-based LLIE methods can be viewed as a mapping problem between the low-light image and the ground truth image:

$$I_{gt} = \mathrm{Net}(I_{low}; \theta)$$

where $\mathrm{Net}(\cdot)$ represents the network model and $\theta$ represents the learnable parameters. Our main goal is to obtain the enhanced image $I_{enhanced}$ without over-amplifying contrast or saturation. The difference between the enhanced image $I_{enhanced}$ and the ground truth image $I_{gt}$ should be minimized:

$$\hat{\theta} = \arg\min_{\theta} L(I_{enhanced}, I_{gt})$$

where $L(\cdot)$ is the loss function. To achieve this goal, we propose DADNet. As shown in Figure 1, DADNet improves the low-light image $I_{low}$ through two branches based on Retinex theory. The IEM branch obtains the illumination component $I_{IEM} \in \mathbb{R}^{H \times W \times C}$ based on the DCP, and the CTM branch obtains the reflection component $I_{CTM} \in \mathbb{R}^{3 \times 3}$ through the attention mechanism. The complete process can be described as

$$I_{enhanced} = I_{IEM} \cdot I_{CTM}$$
where · is the dot product. The outputs of the two branches are combined to produce the final enhanced image I enhanced . To reduce computational costs, we only use the L1 loss function to train DADNet. The enhanced images show balanced exposure, improved contrast, suitable saturation, and better overall quality. Further details about DADNet will be provided in the following subsections.
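For concreteness, a minimal PyTorch sketch of this two-branch composition is shown below. The class and method names are illustrative, the IEM and CTM submodules are treated as black boxes, and the per-pixel application of the 3 × 3 color matrix together with the final clamp to [0, 1] are our assumptions about how the product above is realized, not the released implementation.

```python
import torch
import torch.nn as nn

class DADNetSketch(nn.Module):
    """Illustrative two-branch skeleton: the IEM restores illumination and the
    CTM predicts a per-image 3x3 color transform applied to every pixel."""
    def __init__(self, iem: nn.Module, ctm: nn.Module):
        super().__init__()
        self.iem = iem   # expected to return I_IEM with shape (B, 3, H, W)
        self.ctm = ctm   # expected to return I_CTM with shape (B, 3, 3)

    def forward(self, i_low: torch.Tensor) -> torch.Tensor:
        i_iem = self.iem(i_low)                 # illumination-enhanced image
        i_ctm = self.ctm(i_low)                 # per-image color matrix
        b, c, h, w = i_iem.shape
        flat = i_iem.reshape(b, c, h * w)       # (B, 3, H*W) pixel vectors
        enhanced = torch.bmm(i_ctm, flat)       # apply the 3x3 color transform
        return enhanced.reshape(b, c, h, w).clamp(0.0, 1.0)  # assumed output range
```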

3.2. Illumination Enhancement Module

The IEM branch restores the brightness of the input image. The DCP assumes that, in most non-sky local regions, at least one color channel contains pixels with very low intensity:

$$I_{dark}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in \{R,G,B\}} I_{low}^{c}(y) \right) \to 0$$

where $\Omega(x)$ represents the local region of the image subjected to minimum value filtering and $c \in \{R,G,B\}$ denotes the RGB color channels; the dark channel image $I_{dark}$ is generated by calculating the minimum pixel value across the R, G, and B channels of the original image. The generated image $I_{dark}$ is then fed into the DCFB module, which uses eight consecutive 3 × 3 convolutional layers to extract shallow features and expand dimensions. To preserve the information in the intermediate layers, the LeakyReLU activation is applied only at the final step, producing the feature map $F_{DCFB}$. The complete DCFB process can be represented as

$$F_{DCFB} = \mathrm{LeakyReLU}\big(\underbrace{\mathrm{Conv}(\cdots \mathrm{Conv}}_{\times 8}(I_{dark})\cdots)\big)$$
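A small sketch of this step is shown below, assuming the dark channel is computed with a local minimum filter (implemented as a negated max-pool) and that the eight DCFB convolutions progressively widen the channel dimension; the patch size and channel widths are illustrative choices rather than values reported here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def dark_channel(i_low: torch.Tensor, patch: int = 15) -> torch.Tensor:
    """Dark channel of a (B, 3, H, W) image in [0, 1]: per-pixel minimum over
    RGB, then a local minimum filter over the patch Omega(x)."""
    min_rgb = i_low.min(dim=1, keepdim=True).values          # (B, 1, H, W)
    return -F.max_pool2d(-min_rgb, kernel_size=patch,
                         stride=1, padding=patch // 2)       # local min filter

class DCFB(nn.Module):
    """Dark Channel Feature Block sketch: eight stacked 3x3 convolutions that
    expand channels, with LeakyReLU applied only after the last one."""
    def __init__(self, out_channels: int = 16):
        super().__init__()
        widths = [1, 2, 4, 8, 8, 16, 16, 16, out_channels]   # assumed widths
        self.convs = nn.ModuleList(
            nn.Conv2d(widths[i], widths[i + 1], 3, padding=1) for i in range(8))
        self.act = nn.LeakyReLU(0.2)

    def forward(self, i_dark: torch.Tensor) -> torch.Tensor:
        x = i_dark
        for conv in self.convs:
            x = conv(x)            # no intermediate activation, per the text
        return self.act(x)
```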
The feature map $F_{DCFB}$ is then passed to two separate branches. Each branch contains four stacked DCB modules, which improve the Pixel-wise Enhancement Module (PEM) [40] by adding a Nonlinear Normalization (NLN) layer. With $F(x)$ representing the input features, the NLN process can be represented as

$$F_{AT} = \alpha \cdot F(x) + \beta, \qquad F_{ANA} = F_{AT} \cdot \sigma\big(w_{adp} \cdot F_{AT}\big), \qquad F_{NLN} = F_{ANA} \cdot c$$

where $\alpha$ and $\beta$ are learnable vectors initialized to all ones and all zeros, respectively, which apply an affine transformation to the input features. The learnable slope parameter $w_{adp}$ is introduced with an initial value of 1.0; its value controls the gating effect on features, with a larger slope increasing feature suppression or enhancement. $\sigma(\cdot)$ is the sigmoid function, and $c$ is a learnable transformation matrix initialized as the identity matrix. The DCB process can be represented as
$$F_{PER} = F(x) + \mathrm{Conv}(F(x))$$
$$F_{DCR} = \mathrm{Conv}\big(\mathrm{GDConv}(\mathrm{Conv}(\mathrm{NLN}(F_{PER})))\big) \cdot \gamma_1 + F_{PER}$$
$$F_{DCB} = \mathrm{Conv}\big(\mathrm{GELU}(\mathrm{Conv}(\mathrm{NLN}(F_{DCR})))\big) \cdot \gamma_2 + F_{DCR}$$

where $F_{PER}$, $F_{DCR}$, and $F_{DCB}$ are all residual connections, and $\gamma_1, \gamma_2 \in \mathbb{R}^{1 \times C \times 1 \times 1}$ are learnable scaling factors with an initial value of $10^{-4}$. Through residual connections and learnable scaling, the DCB enables effective fusion of local feature extraction and channel-wise adaptive transformation. To preserve the original information and prevent feature degradation, we use element-wise addition and apply a 3 × 3 convolution to reduce channel dimensions. Finally, LeakyReLU generates the multiplication feature map $I_{mul}$, and the Hardtanh function generates the addition feature map $I_{add}$:
$$I_{mul} = \mathrm{LeakyReLU}\Big(\mathrm{Conv}\big(F_{DCFB} + F_{DCB}^{(4)}(F_{DCFB})\big)\Big), \qquad I_{add} = \mathrm{Hardtanh}\Big(\mathrm{Conv}\big(F_{DCFB} + F_{DCB}^{(4)}(F_{DCFB})\big)\Big)$$

where $F_{DCB}^{(4)}(\cdot)$ denotes the output of the four stacked DCB modules in the corresponding branch.
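The sketch below shows one plausible PyTorch realization of the NLN layer and a single DCB, assuming that GDConv is a depthwise (grouped) 3 × 3 convolution, that the channel matrix $c$ is applied to every pixel's feature vector, and that the residual scaling factors start at $10^{-4}$; layer widths and kernel sizes are our own choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NLN(nn.Module):
    """Nonlinear Normalization sketch: affine transform, sigmoid gating with a
    learnable slope, then a learnable channel-mixing matrix (identity init)."""
    def __init__(self, channels: int):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.w_adp = nn.Parameter(torch.tensor(1.0))
        self.c = nn.Parameter(torch.eye(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f_at = self.alpha * x + self.beta                  # affine transform
        f_ana = f_at * torch.sigmoid(self.w_adp * f_at)    # adaptive gating
        # apply the channel matrix c to each pixel's feature vector
        return torch.einsum('oc,bchw->bohw', self.c, f_ana)

class DCB(nn.Module):
    """Dark Channel Block sketch: two NLN-gated sub-blocks whose outputs are
    scaled by learnable gammas and added back through residual connections."""
    def __init__(self, channels: int):
        super().__init__()
        self.pre = nn.Conv2d(channels, channels, 3, padding=1)
        self.nln1, self.nln2 = NLN(channels), NLN(channels)
        self.conv1 = nn.Conv2d(channels, channels, 1)
        self.gdconv = nn.Conv2d(channels, channels, 3, padding=1,
                                groups=channels)           # depthwise (assumed GDConv)
        self.conv2 = nn.Conv2d(channels, channels, 1)
        self.conv3 = nn.Conv2d(channels, channels, 1)
        self.conv4 = nn.Conv2d(channels, channels, 1)
        self.gamma1 = nn.Parameter(torch.full((1, channels, 1, 1), 1e-4))
        self.gamma2 = nn.Parameter(torch.full((1, channels, 1, 1), 1e-4))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f_per = x + self.pre(x)
        f_dcr = self.conv2(self.gdconv(self.conv1(self.nln1(f_per)))) * self.gamma1 + f_per
        return self.conv4(F.gelu(self.conv3(self.nln2(f_dcr)))) * self.gamma2 + f_dcr
```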
Assume that the input image $I_{low}$ has globally uniform low illumination, with no shadows and no complex lighting. In this case, the simple illumination problem can be approximated as a linear transformation [53] between the low-light image and the enhanced image. The enhanced image $I_{IEM}$ can be represented as

$$I_{IEM} = I_{mul} \cdot I_{low} + I_{add}$$

where the multiplication feature map $I_{mul}$ controls the overall contrast and the addition feature map $I_{add}$ controls the overall brightness offset. This formulation provides a simple and effective enhancement method.
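A compact sketch of this output stage is given below, assuming that each branch's stacked-DCB features are fused with $F_{DCFB}$ by element-wise addition before a 3 × 3 channel-reduction convolution; the channel count is illustrative.

```python
import torch
import torch.nn as nn

class IEMHead(nn.Module):
    """IEM output-stage sketch: fuse DCFB and per-branch DCB features, reduce to
    3 channels, and apply the pixel-wise linear model I_IEM = I_mul*I_low + I_add."""
    def __init__(self, channels: int = 16):
        super().__init__()
        self.to_mul = nn.Sequential(nn.Conv2d(channels, 3, 3, padding=1),
                                    nn.LeakyReLU(0.2))       # contrast map
        self.to_add = nn.Sequential(nn.Conv2d(channels, 3, 3, padding=1),
                                    nn.Hardtanh())           # brightness offset map

    def forward(self, f_dcfb, f_dcb_mul, f_dcb_add, i_low):
        i_mul = self.to_mul(f_dcfb + f_dcb_mul)   # branch producing I_mul
        i_add = self.to_add(f_dcfb + f_dcb_add)   # branch producing I_add
        return i_mul * i_low + i_add              # pixel-wise linear enhancement
```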

3.3. Color Transformation Module

In the CTM branch, the low-light image $I_{low}$ first goes through the Low-light Feature Block (LFB), a module that uses progressive downsampling for multi-scale feature extraction. The input image passes through three convolutional layers with a stride of 2, which gradually reduces the spatial resolution, prevents sudden information loss, and extracts high-level semantic features with 64 channels. After the first convolution, we use the Gaussian Error Linear Unit (GELU) activation function, which avoids introducing excessive nonlinearity that could damage features. The final layer applies batch normalization to stabilize the output distribution. The process can be represented as

$$F_{LFB} = \mathrm{BatchNorm}\big(\mathrm{Conv}(\mathrm{Conv}(\mathrm{GELU}(\mathrm{Conv}(I_{low}))))\big)$$
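A possible realization of the LFB is sketched below; the stride-2 convolutions, the GELU after the first convolution, the final 64-channel width, and the trailing batch normalization follow the description above, while the intermediate channel widths are assumptions.

```python
import torch
import torch.nn as nn

class LFB(nn.Module):
    """Low-light Feature Block sketch: three stride-2 convolutions (8x spatial
    reduction) ending in 64 channels, with GELU after the first convolution and
    batch normalization on the output."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, stride=2, padding=1)
        self.act = nn.GELU()
        self.conv2 = nn.Conv2d(16, 32, 3, stride=2, padding=1)
        self.conv3 = nn.Conv2d(32, 64, 3, stride=2, padding=1)
        self.bn = nn.BatchNorm2d(64)

    def forward(self, i_low: torch.Tensor) -> torch.Tensor:
        return self.bn(self.conv3(self.conv2(self.act(self.conv1(i_low)))))
```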
The extracted features $F_{LFB}$ are subsequently fed into the Color Prediction Block (CPB) for further processing. To effectively learn from the feature $F_{LFB}$, a learnable fixed-query attention mechanism $F_{FQ}$ is introduced for color prediction and restoration. This design is inspired by the query concept in DETR [54]. Unlike DETR, we initialize the query $Q \in \mathbb{R}^{N_h \times 12 \times d_k}$ as a tensor of all ones, ensuring all queries start learning from the same baseline. The key ($K$) and value ($V$) components are generated from the input $X \in \mathbb{R}^{N_l \times D}$ through bias-free fully connected layers. Here, $K, V \in \mathbb{R}^{N_h \times N_l \times d_k}$, $D$ denotes the feature dimension of $F_{LFB}$, and $N_l$ is the sequence length. The number of attention heads $N_h$ is set to 4, and the size of each projection head is controlled by the parameter $d_k$:

$$F_{FQ} = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$
The learnable fixed-query attention mechanism $F_{FQ}$ enables the model to adaptively focus on color-related features of the input. The attention feature $F_{ATT}$ can be represented as follows:

$$F_{ATT} = F_{FQ}\big(\mathrm{LayerNorm}(F_{LFB} + \mathrm{Conv}(F_{LFB}))\big)$$
To capture more local features, the attention features $F_{ATT}$ are further processed through GELU activation, LayerNorm, and fully connected layers. These steps produce the final output, represented as

$$F_{CPB} = F_{ATT} + \mathrm{Linear}\big(\mathrm{GELU}(\mathrm{Linear}(\mathrm{LayerNorm}(F_{ATT})))\big)$$

The CPB ultimately outputs 12 learned parameters. Among these, the features at positions 4 to 12 form the 3 × 3 color matrix $F_{CCM}$, and the feature at the 3rd position serves as the color offset factor $F_{Offset}$ for global color adjustment. Further details of $F_{Offset}$ are provided in Table 3. The final output $I_{CTM}$ is obtained by adding the offset factor $F_{Offset}$ to every element of the color matrix $F_{CCM}$ through broadcasting. The color component $I_{CTM}$ is as follows:

$$I_{CTM} = F_{CCM} + F_{Offset}$$
The CTM branch effectively addresses color distortion during image enhancement. It significantly improves the visual quality of enhanced results and maintains good balance between noise reduction and detail preservation.
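The sketch below illustrates the fixed-query attention and the assembly of the color matrix and offset. For brevity it operates directly on flattened LFB features (the residual convolution and layer normalization applied before the attention are assumed to be folded into the input), and the head-merging projection and hidden sizes are our own choices.

```python
import torch
import torch.nn as nn

class ColorPredictionBlock(nn.Module):
    """CPB sketch: a learnable all-ones fixed query attends over flattened LFB
    features and yields 12 parameters; the 3rd is the global offset F_Offset and
    positions 4-12 form the 3x3 color matrix F_CCM, so I_CTM = F_CCM + F_Offset."""
    def __init__(self, dim: int = 64, heads: int = 4, d_k: int = 16):
        super().__init__()
        self.heads, self.d_k = heads, d_k
        self.query = nn.Parameter(torch.ones(heads, 12, d_k))   # fixed learnable Q
        self.to_k = nn.Linear(dim, heads * d_k, bias=False)     # bias-free K
        self.to_v = nn.Linear(dim, heads * d_k, bias=False)     # bias-free V
        self.merge = nn.Linear(heads * d_k, 1)                  # heads -> 12 scalars
        self.norm = nn.LayerNorm(12)
        self.mlp = nn.Sequential(nn.Linear(12, 48), nn.GELU(), nn.Linear(48, 12))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, _ = x.shape                                        # x: (B, N_l, dim)
        k = self.to_k(x).reshape(b, n, self.heads, self.d_k).transpose(1, 2)
        v = self.to_v(x).reshape(b, n, self.heads, self.d_k).transpose(1, 2)
        attn = torch.softmax(self.query @ k.transpose(-2, -1) / self.d_k ** 0.5, dim=-1)
        f_fq = attn @ v                                          # (B, heads, 12, d_k)
        f_att = self.merge(f_fq.transpose(1, 2).reshape(b, 12, -1)).squeeze(-1)
        f_cpb = f_att + self.mlp(self.norm(f_att))               # residual refinement
        f_ccm = f_cpb[:, 3:12].reshape(b, 3, 3)                  # positions 4-12
        f_offset = f_cpb[:, 2].reshape(b, 1, 1)                  # 3rd position
        return f_ccm + f_offset                                  # broadcast -> I_CTM
```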

3.4. Loss Functions

The L1 loss function is used to calculate the pixel-wise absolute difference between the enhanced image $I_{enhanced}$ and the ground truth image $I_{gt}$:

$$L_{L1} = \frac{1}{N}\sum_{i=1}^{N} \left| I_{enhanced}^{\,i} - I_{gt}^{\,i} \right|$$

where $i$ indexes the $i$-th pixel of the image and $N$ is the total number of pixels. Low-light images contain a large amount of noise, and the L1 loss function is robust to outliers, which makes the quality of the enhanced results more stable. Furthermore, the L1 loss function better preserves edge features, so it is well suited to low-light enhancement, which requires the preservation of high-frequency details.
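A minimal training-step sketch using only this L1 objective is shown below; the function and variable names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_step(model: nn.Module, optimizer: torch.optim.Optimizer,
               i_low: torch.Tensor, i_gt: torch.Tensor) -> float:
    """One optimization step: forward pass, mean |I_enhanced - I_gt|, backprop."""
    model.train()
    optimizer.zero_grad()
    i_enhanced = model(i_low)
    loss = F.l1_loss(i_enhanced, i_gt)   # pixel-wise L1 loss
    loss.backward()
    optimizer.step()
    return loss.item()
```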

4. Experimental Results Analysis

4.1. Experimental Settings

Datasets. To evaluate the performance of DADNet, we conducted comparative experiments on three widely used paired LLIE benchmark datasets, LOL-v1 [10], SICE [55], and RAISE [56], together with the unpaired LIME [19] and MEF [57] datasets. The LOL-v1 dataset consists of 500 low/normal-light image pairs, split into 485 pairs for training and 15 pairs for validation. To ensure proper sequence alignment, we selected 229 image pairs from the SICE dataset with an exposure value of 0.5; among these, 15 pairs are used for validation and the remaining 214 for training. The RAISE dataset provides 1000 synthetically generated image pairs, as described in [10]. We adopt the LIME and MEF datasets for visual comparison. To increase the number of training samples, all training image pairs were cropped to a size of 256 × 256 pixels, while the original full-resolution images were used during validation. Due to limited computational resources, all images from the SICE dataset were resized to a resolution of 600 × 400 pixels to reduce the hardware load during training.
Metrics. To objectively evaluate the performance of different low-light image enhancement methods, we assess image quality from multiple perspectives, including pixel fidelity, structural features, and color difference. The metrics include Peak Signal-to-Noise Ratio (PSNR) [58], Feature Similarity Index (FSIM) [59], Multi-Scale Structural Similarity Index (MS-SSIM) [60], Gradient Magnitude Similarity Deviation (GMSD) [61], and the color difference metric CIEDE2000 [62]. PSNR measures the pixel-level fidelity between the generated image and the reference image; a higher PSNR value indicates better overall reconstruction quality. FSIM evaluates the preservation of structural details and texture information by analyzing phase consistency and gradient features; a higher FSIM value reflects better fidelity of key visual features, making FSIM suitable for assessing the detail loss often encountered in image enhancement. MS-SSIM imitates the multi-resolution perception of the human visual system and comprehensively assesses luminance, contrast, and structural information at different scales; a higher MS-SSIM score signifies better preservation of visual structure. GMSD quantifies the dissimilarity between the gradient maps of two images to reflect the naturalness of local structures; a lower GMSD value indicates less gradient distortion, making it suitable for evaluating potential artifacts in enhanced images. The CIEDE2000 color difference metric is used to evaluate color fidelity, where a lower value means better color accuracy. To fully evaluate the naturalness and visual comfort of enhanced images, we also use the Natural Image Quality Evaluator (NIQE) [63] and the Perception-based Image Quality Evaluator (PIQUE) [64]: NIQE measures how closely an image matches the statistical properties of natural images, and PIQUE focuses on blocking artifacts and noise in the image.
Implementation details. The experiments were conducted on a hardware system with an Intel® Core™ i7-10700K @3.80GHz×16 CPU and a Quadro RTX 4000 GPU. All experiments were implemented on Ubuntu 22.04.5 LTS using the PyCharm IDE and the PyTorch deep learning framework (torch version 2.5.1+cu118). To train the DADNet model, we used the Adam optimizer, where the initial learning rate was set to 0.0005 for all datasets, with a weight decay of 0.000001. The detailed training hyperparameters are listed in Table 1. To validate the effectiveness of DADNet on real low-light images, we compared it with several methods, including classical methods such as MSRCR [24], CLAHE [14], LIME [19], and RetinexNet [10], as well as state-of-the-art methods like ZeroDCE [33], ZeroDCE++ [34], Jeon et al. [5], NPLIE [26], PIE [35], CoLIE [52], DI-Retinex [65], and LYT-Net [66]. All methods and evaluation metrics were implemented in Python, and parameters and pretrained models were kept consistent with those provided by the original authors to ensure a fair comparison. Image quality metrics were implemented using the pyiqa library.
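As an illustration of the evaluation setup, the sketch below scores an enhanced/reference pair with pyiqa. The metric identifiers are assumptions about the pyiqa registry and should be checked against pyiqa.list_models() in the installed version; the CIEDE2000 color difference is not included here.

```python
import torch
import pyiqa  # image-quality assessment library used for the metrics above

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Assumed metric identifiers; verify with pyiqa.list_models().
METRIC_NAMES = ['psnr', 'fsim', 'ms_ssim', 'gmsd', 'niqe']
METRICS = {name: pyiqa.create_metric(name, device=device) for name in METRIC_NAMES}

def evaluate_pair(enhanced: torch.Tensor, reference: torch.Tensor) -> dict:
    """Score one enhanced/reference pair of (1, 3, H, W) tensors in [0, 1]."""
    scores = {}
    for name, metric in METRICS.items():
        if name == 'niqe':                       # no-reference metric
            scores[name] = metric(enhanced).item()
        else:                                    # full-reference metrics
            scores[name] = metric(enhanced, reference).item()
    return scores
```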

4.2. Ablation Study

To evaluate the effectiveness of each module in DADNet, we conducted an ablation study on the LOL-v1 dataset. The study separately assesses the contributions of the IEM and CTM modules, analyzes the impact of the positional choice of $F_{Offset}$, and compares the visual enhancement effects of different module combinations. As shown in Figure 2, for Image 1, the output without CTM (without-CTM) results in overly saturated leaves, while the decorative object in the lower left retains a normal color. In the output without IEM (without-IEM), the leaves appear natural, but the decorative object in the lower left becomes overexposed. For Image 2, the output without IEM suffers from overexposure on the table. Although the without-CTM output maintains proper brightness on the table, the orchid in the lower left shows abnormal color recovery. These visual examples highlight the respective contributions of IEM and CTM. As shown in Table 2, the full DADNet achieves significant improvements across all evaluation metrics on the LOL-v1 dataset compared to both the without-CTM and without-IEM variants.
Regarding the choice of the $F_{Offset}$ position, as shown in Figure 2, the configurations without $F_{Offset}$ (without-$F_{Offset}$) and using the second-position feature of $F_{CPB}$ as $F_{Offset}$ achieve reasonable quantitative results in Table 3. However, their visual performance is poor in Image 1 and Image 2: in Image 1, the without-$F_{Offset}$ output shows unnatural leaf colors. When using the first-position feature as $F_{Offset}$, the brightness of the decorative object in the lower left corner is slightly overexposed, and the corresponding metrics in Table 3 are also unsatisfactory.
The IEM module effectively enhances image brightness, while the CTM module helps preserve color fidelity. Together, IEM and CTM jointly improve overall image quality. The ablation studies confirm the complementary roles of these modules and validate the rationality and robustness of the overall architecture.

4.3. Qualitative Evaluation

Qualitative analysis visually demonstrates the advantages of methods in terms of visual quality. Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 present a comparative analysis of enhancement results from DADNet and other methods across three datasets. As shown in Figure 3, methods such as CLAHE, PIE, and NPLIE introduce visible block effects and structural artifacts near brightness transitions. Figure 4 shows that LIME, Jeon et al., NPLIE, PIE, and CoLIE improve brightness in high-contrast scenes, but their contrast enhancement is insufficient. As a result, the structural details of the ceiling decoration are not effectively restored. MSRCR, RetinexNet, ZeroDCE, ZeroDCE++, DI-Retinex, and LYT-Net recover these structural details, yet they introduce color distortion. Figure 5 demonstrates that CLAHE, LIME, Jeon et al., NPLIE, and RetinexNet produce unnatural details during the restoration process. On the LIME and MEF datasets, as shown in Figure 6 and Figure 7, Jeon et al., NPLIE, RetinexNet, PIE, CoLIE, DI-Retinex, and LYT-Net fail to recover details effectively. Our DADNet method successfully reveals details in dark regions while maintaining natural contrast, with the enhanced images achieving the appropriate brightness, producing visually comfortable results with rich details.
DADNet achieves balanced and stable brightness enhancement across all test image sets. Most comparison methods show inconsistent results on different image types. As shown in Figure 8, Figure 9 and Figure 10, MSRCR suffers from insufficient overall contrast. NPLIE, PIE, CoLIE, ZeroDCE, ZeroDCE++, and DI-Retinex exhibit unstable brightness restoration. As shown in Figure 9, PIE and CoLIE fail to adequately restore brightness for Image 4, with PIE producing an overly dark result for Image 6, while CoLIE makes it too bright. As shown in Figure 10, NPLIE, PIE, and CoLIE achieve moderate brightness for Image 5 and Image 6 but lead to excessive contrast in the other images. ZeroDCE and ZeroDCE++ consistently yield low contrast, while DI-Retinex overexposes Images 2, 4, 5, 7, and 8, causing a complete loss of sky details in Images 7 and 8, while Image 6 suffers from insufficient contrast. LIME, Jeon et al., and LYT-Net generally restore brightness well but still have limitations. Though LIME performs well in Figure 8 and Figure 9, as shown in Figure 10, it overexposes the light source areas in Images 2, 4, and 7, as well as producing excessive contrast in Image 7 and Image 8. Jeon et al. generate excessive contrast for Images 2, 3, 4, and 7 in Figure 10, while LYT-Net tends to be overly bright overall in Figure 9. Additionally, both CLAHE and RetinexNet introduce varying degrees of unnatural artifacts during restoration. RetinexNet produces artifacts in the light area of Image 6 in Figure 8, shows insufficient contrast in Figure 9, fails to recover details in Image 6 in Figure 10, and introduces artifacts in the sky region of Image 7.
In terms of color restoration, LIME, Jeon et al., and RetinexNet tend to produce oversaturated colors. MSRCR, LIME, RetinexNet, ZeroDCE++, and LYT-Net exhibit varying degrees of color cast. As shown in Figure 9, MSRCR appears greenish on Images 1, 2, and 3 and purplish on Image 4. LIME and RetinexNet show a noticeable green tint in Image 4, while LYT-Net yields an overall yellowish cast in Figure 8 and ZeroDCE++ introduces perceptible tonal deviations across the entire image. In contrast, our proposed method effectively reconstructs color information degraded under low-light conditions. The enhanced images display rich yet natural colors without significant color distortion, color casts, or insufficient saturation.
DADNet effectively restores brightness, preserves details, reconstructs colors, and suppresses noise and artifacts. In qualitative evaluations, our method produces images that are more natural looking, richer in information, and more visually comfortable compared to other methods, producing results that align better with human visual preferences.

4.4. Quantitative Evaluation

Qualitative analysis can be influenced by personal preference and other subjective factors. To evaluate low-light enhancement methods more accurately and objectively, we also conducted quantitative analysis. Table 4 presents the experimental results using full-reference image quality metrics, Table 5 shows the results of the color difference metric CIEDE2000, and Table 6 evaluates the generalization ability of the methods. To clearly highlight the top performers, the three best results in each table are marked, with the best value shown in red and bold, the second best in blue and bold, and the third best in black and bold.
Table 4 and Figure 11 show that DADNet achieves the best performance on all metrics on the SICE dataset. Compared with the second-best method, LYT-Net, DADNet improves PSNR by 17.75%, FSIM by 1.63%, and MS-SSIM by 1.91%, as well as reducing GMSD by 5.28%. On the LOLv1 and RAISE datasets, DADNet still leads in PSNR, with improvements of 16.62% and 17.44%, respectively. This suggests that DADNet can effectively preserve image reconstruction quality at the pixel level. Although DADNet ranks second in FSIM and MS-SSIM on LOLv1 and RAISE, slightly behind LYT-Net, the values are very close. Moreover, compared with the third-ranked method, DADNet reduces GMSD by 18.74% and 24.08%, respectively, indicating that DADNet has a clear advantage in preserving image quality related to human visual perception. In summary, the proposed DADNet demonstrates good overall performance in low-light image enhancement tasks. Table 5 and Figure 11 show that DADNet achieves the best CIEDE2000 on all datasets. On the LOL-v1, RAISE, and SICE datasets, DADNet reduces the metric by 29.89%, 24.14%, and 26.88%, respectively, compared to the second-best method. This indicates that DADNet performs consistently well in color restoration and effectively recovers color information in enhanced images. As shown in Table 6, although DADNet did not rank among the top three methods, its performance on both datasets is mostly above average, validating the stability of DADNet.
Based on the above analysis, our proposed DADNet outperforms state-of-the-art methods in image quality, structure preservation, color accuracy, and detail retention. Experimental results show that DADNet effectively addresses the low-light enhancement problem and significantly improves visual quality while maintaining image realism, showing good practical utility.

4.5. Discussion

As shown in Table 4 and Table 5, our DADNet achieves competitive average PSNR, FSIM, MS-SSIM, and GMSD scores, along with the best CIEDE2000 performance across three paired datasets. This advantage is mainly attributed to its two-branch design. The IEM branch focuses on restoring global illumination using the dark channel prior, while the CTM branch refines color recovery through an attention mechanism. This structure enables the model to better balance brightness enhancement and color preservation in complex scenes, such as the global low-light conditions in Figure 9 and the high-contrast scenes in Figure 10. As a result, DADNet avoids the color casts seen in MSRCR, RetinexNet, and ZeroDCE++, as well as the insufficient brightness recovery observed in NPLIE, PIE, and CoLIE. The ablation studies in Table 2 and Table 3 further confirm the contribution of each module. Removing either branch degrades the quality of the reconstructed images, demonstrating that both are essential. To evaluate generalization ability, we directly applied the model to the unpaired LIME and MEF datasets without fine-tuning. DADNet still produces visually natural results with clear details. However, as shown in Table 6, its generalization performance is not the best among the compared methods. We attribute this limitation to the LOLv1 training dataset, which mainly consists of indoor scenes and lacks diverse lighting conditions. This points to future directions: developing more lightweight network designs and incorporating more varied training data to improve model robustness.

5. Conclusions

In this paper, we propose a low-light image enhancement method, DADNet, based on the attention mechanism and dark channel prior. Unlike previous deep learning methods, our method first uses the IEM branch to obtain an initial illumination map through the dark channel prior and then extracts multi-scale features through convolution for brightness enhancement. Similarly, the CTM branch employs attention mechanisms to restore color information and adaptively adjust saturation. Through qualitative and quantitative comparisons, we find that DADNet better balances brightness enhancement and color recovery while minimizing distortion during the enhancement process compared to traditional methods and existing deep learning low-light enhancement methods. The method shows advantages in image quality, naturalness, contrast, color restoration, and saturation. Despite achieving significant results, the proposed method still has room for improvement. The current network architecture consists of only two branches. The brightness enhancement branch is relatively complex and consumes substantial computational resources. The color correction branch learns only a color matrix through the attention mechanism, ignoring other color features. Additionally, DADNet shows limited generalization ability. Future work will focus on developing lightweight deep learning models to better preserve color information while also improving generalization performance.

Author Contributions

L.W.: conceptualization, methodology, software, validation, supervision, writing—original draft, and resources. M.T.: writing—review and editing, formal analysis, data curation, and resources. H.L.: writing—review and editing and resources. F.Y.: investigation, visualization, and software. M.Y.: visualization. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Foundation Research Project of Kaili University (grant No. 2025YB028), the Guizhou Provincial Science and Technology Projects (grant No. QKHJC [2024] Youth 408), the Science and Technology Innovation Talent Team Project of Data Science and Computing Intelligence of Guizhou Province (grant No. QKHRC-CXTD2025038), the Scientific Research Platform (Key Laboratory) Project of Kaili University (grant No. YTH-PT202501), the Big Data and Pattern Recognition Research Team Project of Kaili University (grant No. YTH-TD20252I), the Specialized Fund for the Doctoral of Kaili University (grant No. BS20240102), and the Basic Research Program of Qiandongnan Miao and Dong Autonomous Prefecture (grant No. [2024] 0003).

Data Availability Statement

The data are contained within the article.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Singh, P.; Bhandari, A.K. A Review on Computational Low-Light Image Enhancement Models: Challenges, Benchmarks, and Perspectives. Arch. Comput. Methods Eng. 2025, 32, 2853–2885. [Google Scholar] [CrossRef]
  2. Rahman, S.; Rahman, M.M.; Abdullah-Al-Wadud, M.; Al-Quaderi, G.D.; Shoyaib, M. An adaptive gamma correction for image enhancement. EURASIP J. Image Video Process. 2016, 2016, 35. [Google Scholar] [CrossRef]
  3. Ma, Q.; Wang, Y.; Zeng, T. Retinex-Based Variational Framework for Low-Light Image Enhancement and Denoising. IEEE Trans. Multimed. 2023, 25, 5580–5588. [Google Scholar] [CrossRef]
  4. Li, X.; Liu, M.; Ling, Q. Pixel-Wise Gamma Correction Mapping for Low-Light Image Enhancement. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 681–694. [Google Scholar] [CrossRef]
  5. Jeon, J.J.; Park, J.Y.; Eom, I.K. Low-light image enhancement using gamma correction prior in mixed color spaces. Pattern Recognit. 2024, 146, 110001. [Google Scholar] [CrossRef]
  6. Tang, H.; Zhu, H.; Fei, L.; Wang, T.; Cao, Y.; Xie, C. Low-Illumination Image Enhancement Based on Deep Learning Techniques: A Brief Review. Photonics 2023, 10, 198. [Google Scholar] [CrossRef]
  7. Land, E.H. The Retinex Theory of Color Vision. Sci. Am. 1977, 237, 108–129. [Google Scholar] [CrossRef]
  8. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar] [CrossRef]
  9. Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662. [Google Scholar] [CrossRef]
  10. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex Decomposition for Low-Light Enhancement. In Proceedings of the British Machine Vision Conference; British Machine Vision Association: Durham, UK, 2018. [Google Scholar]
  11. Wang, T.; Zhang, K.; Shen, T.; Luo, W.; Stenger, B.; Lu, T. Ultra-high-definition low-light image enhancement: A benchmark and transformer-based method. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023. [Google Scholar] [CrossRef]
  12. Mukhopadhyay, S.; Hossain, S.; Malakar, S.; Cuevas, E.; Sarkar, R. Image contrast improvement through a metaheuristic scheme. Soft Comput. 2023, 27, 13657–13676. [Google Scholar] [CrossRef]
  13. Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368. [Google Scholar] [CrossRef]
  14. Pisano, E.D.; Zong, S.; Hemminger, B.M.; DeLuca, M.; Johnston, R.E.; Muller, K.; Braeuning, M.P.; Pizer, S.M. Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms. J. Digit. Imaging 1998, 11, 193. [Google Scholar] [CrossRef] [PubMed]
  15. Abdullah-Al-Wadud, M.; Kabir, M.H.; Akber Dewan, M.A.; Chae, O. A Dynamic Histogram Equalization for Image Contrast Enhancement. IEEE Trans. Consum. Electron. 2007, 53, 593–600. [Google Scholar] [CrossRef]
  16. Liu, X.; Li, H.; Zhu, C. Joint Contrast Enhancement and Exposure Fusion for Real-World Image Dehazing. IEEE Trans. Multimed. 2022, 24, 3934–3946. [Google Scholar] [CrossRef]
  17. Rahman, Z.; Pu, Y.F.; Aamir, M.; Wali, S. Structure revealing of low-light images using wavelet transform based on fractional-order denoising and multiscale decomposition. Vis. Comput. 2021, 37, 865–880. [Google Scholar] [CrossRef]
  18. Tirumani, V.H.L.; Tenneti, M.; K, C.S.; Kotamraju, S.K. Image resolution and contrast enhancement with optimal brightness compensation using wavelet transforms and particle swarm optimization. IET Image Process. 2021, 15, 2833–2840. [Google Scholar] [CrossRef]
  19. Guo, X.; Li, Y.; Ling, H. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Trans. Image Process. 2017, 26, 982–993. [Google Scholar] [CrossRef]
  20. Wang, Y.F.; Liu, H.M.; Fu, Z.W. Low-Light Image Enhancement via the Absorption Light Scattering Model. IEEE Trans. Image Process. 2019, 28, 5679–5690. [Google Scholar] [CrossRef]
  21. Zhou, M.; Wu, X.; Wei, X.; Xiang, T.; Fang, B.; Kwong, S. Low-Light Enhancement Method Based on a Retinex Model for Structure Preservation. IEEE Trans. Multimed. 2024, 26, 650–662. [Google Scholar] [CrossRef]
  22. Lin, Y.H.; Lu, Y.C. Low-Light Enhancement Using a Plug-and-Play Retinex Model With Shrinkage Mapping for Illumination Estimation. IEEE Trans. Image Process. 2022, 31, 4897–4908. [Google Scholar] [CrossRef]
  23. Pu, T.; Zhu, Q. Non-Uniform Illumination Image Enhancement via a Retinal Mechanism Inspired Decomposition. IEEE Trans. Consum. Electron. 2024, 70, 747–756. [Google Scholar] [CrossRef]
  24. Jobson, D.J.; Rahman, Z.u.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976. [Google Scholar] [CrossRef] [PubMed]
  25. Yu, S.Y.; Zhu, H. Low-Illumination Image Enhancement Algorithm Based on a Physical Lighting Model. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 28–37. [Google Scholar] [CrossRef]
  26. Singh, K.; Parihar, A.S. Illumination estimation for nature preserving low-light image enhancement. Vis. Comput. 2024, 40, 121–136. [Google Scholar] [CrossRef]
  27. McCartney, E.J. Optics of the Atmosphere: Scattering by Molecules and Particles; John Wiley and Sons: New York, NY, USA, 1976. [Google Scholar]
  28. Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008. [Google Scholar] [CrossRef]
  29. Dong, X.; Pang, Y.A.; Wen, J.G. Fast efficient algorithm for enhancement of low lighting video. In ACM SIGGRAPH 2010 Posters; SIGGRAPH ’10; Association for Computing Machinery: New York, NY, USA, 2010. [Google Scholar] [CrossRef]
  30. Guo, Z.; Wang, C. Low Light Image Enhancement Algorithm Based on Retinex and Dehazing Model. In ICRAI ’20: Proceedings of the 6th International Conference on Robotics and Artificial Intelligence; Association for Computing Machinery: New York, NY, USA, 2021; pp. 84–90. [Google Scholar] [CrossRef]
  31. Abraham, N.J.; Daway, H.G.; Ali, R.A. Low lightness image enhancement using modified DCP based lightness mapping in lab color space. Int. J. Intell. Eng. Syst. 2022, 15, 244–251. [Google Scholar] [CrossRef]
  32. Lv, F.; Lu, F.; Wu, J.; Lim, C. MBLLEN: Low-light image/video enhancement using cnns. In Proceedings of the BMVC; Northumbria University: Tyne, UK, 2018; Volume 220, p. 4. [Google Scholar]
  33. Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
  34. Li, C.; Guo, C.; Loy, C.C. Learning to enhance low-light image via zero-reference deep curve estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4225–4238. [Google Scholar] [CrossRef]
  35. Liang, D.; Xu, Z.; Li, L.; Wei, M.; Chen, S. Pie: Physics-inspired low-light enhancement. Int. J. Comput. Vis. 2024, 132, 3911–3932. [Google Scholar] [CrossRef]
  36. Feng, Y.; Hou, S.; Lin, H.; Zhu, Y.; Wu, P.; Dong, W.; Sun, J.; Yan, Q.; Zhang, Y. DiffLight: Integrating Content and Detail for Low-light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Seattle, WA, USA, 16–22 June 2024. [Google Scholar]
  37. Qian, L.; Jiang, L. LIENet: A low-light image enhancement network for extreme darkness. J. King Saud Univ. Comput. Inf. Sci. 2025, 37, 344. [Google Scholar] [CrossRef]
  38. Hu, R.; Luo, T.; Jiang, G.; Chen, Y.; Xu, H.; Liu, L.; He, Z. DiffDark: Multi-prior integration driven diffusion model for low-light image enhancement. Pattern Recognit. 2025, 168, 111814. [Google Scholar] [CrossRef]
  39. Liu, J.; Wang, S.; Chen, C.; Hou, Q. DFP-Net: An unsupervised dual-branch frequency-domain processing framework for single image dehazing. Eng. Appl. Artif. Intell. 2024, 136, 109012. [Google Scholar] [CrossRef]
  40. Cui, Z.; Li, K.; Gu, L.; Su, S.; Gao, P.; Jiang, Z.; Qiao, Y.; Harada, T. You Only Need 90K Parameters to Adapt Light: A Light Weight Transformer for Image Enhancement and Exposure Correction. In Proceedings of the 33rd British Machine Vision Conference 2022, BMVC 2022, London, UK, 21–24 November 2022; BMVA Press: Lancaster, UK, 2022. [Google Scholar]
  41. Zhang, W.; Ding, Y.; Zhang, M.; Zhang, Y.; Cao, L.; Huang, Z.; Wang, J. TCPCNet: A transformer-CNN parallel cooperative network for low-light image enhancement. Multimed. Tools Appl. 2024, 83, 52957–52972. [Google Scholar] [CrossRef]
  42. Wu, X.; Lai, Z.; Zhou, J.; Hou, X.; Pedrycz, W.; Shen, L. Light-Aware Contrastive Learning for Low-Light Image Enhancement. ACM Trans. Multimed. Comput. Commun. Appl. 2024, 20, 1–20. [Google Scholar] [CrossRef]
  43. Dang, J.; Zhong, Y.; Qin, X. PPformer: Using pixel-wise and patch-wise cross-attention for low-light image enhancement. Comput. Vis. Image Underst. 2024, 241, 103930. [Google Scholar] [CrossRef]
  44. Wen, Y.; Xu, P.; Li, Z.; Xu, W. An illumination-guided dual attention vision transformer for low-light image enhancement. Pattern Recognit. 2025, 158, 111033. [Google Scholar] [CrossRef]
  45. He, R.; Li, X.; Wu, J. LEESDFormer: A lightweight unsupervised CNN-Transformer-based curve estimation network for low-light image enhancement, exposure suppression, and denoising. Neural Netw. 2025, 190, 107764. [Google Scholar] [CrossRef]
  46. Huang, W.; Zhu, Y.; Huang, R. Low Light Image Enhancement Network With Attention Mechanism and Retinex Model. IEEE Access 2020, 8, 74306–74314. [Google Scholar] [CrossRef]
  47. Jiang, S.; Shi, Y.; Zhang, Y.; Zhang, Y. An Improved Retinex-Based Approach Based on Attention Mechanisms for Low-Light Image Enhancement. Electronics 2024, 13, 3645. [Google Scholar] [CrossRef]
  48. Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep Light Enhancement Without Paired Supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef] [PubMed]
  49. Cai, Y.; Bian, H.; Lin, J.; Wang, H.; Timofte, R.; Zhang, Y. Retinexformer: One-stage retinex-based transformer for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023. [Google Scholar]
  50. Jiang, K.; Wang, Q.; An, Z.; Wang, Z.; Zhang, C.; Lin, C.W. Mutual Retinex: Combining Transformer and CNN for Image Enhancement. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 2240–2252. [Google Scholar] [CrossRef]
  51. Luo, Y.; Lv, G.; Ling, J.; Hu, X. Low-light image enhancement via an attention-guided deep Retinex decomposition model. Appl. Intell. 2025, 55, 1–13. [Google Scholar] [CrossRef]
  52. Chobola, T.; Liu, Y.; Zhang, H.; Schnabel, J.A.; Peng, T. Fast context-based low-light image enhancement via neural implicit representations. In Proceedings of the Computer Vision—ECCV 2024; Springer: Cham, Switzerland, 2025; pp. 413–430. [Google Scholar] [CrossRef]
  53. Gonzalez, R.C. Digital Image Processing; Pearson Education India: Noida, India, 2009. [Google Scholar]
  54. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers. In Proceedings of the Computer Vision—ECCV 2020; Springer: Cham, Switzerland, 2020; pp. 213–229. [Google Scholar]
  55. Cai, J.; Gu, S.; Zhang, L. Learning a deep single image contrast enhancer from multi-exposure images. IEEE Trans. Image Process. 2018, 27, 2049–2062. [Google Scholar] [CrossRef]
  56. Dang-Nguyen, D.T.; Pasquini, C.; Conotter, V.; Boato, G. RAISE: A raw images dataset for digital image forensics. In Proceedings of the 6th ACM Multimedia Systems Conference (MMSys ’15), New York, NY, USA, 18–20 March 2015. [Google Scholar] [CrossRef]
  57. Ma, K.; Zeng, K.; Wang, Z. Perceptual Quality Assessment for Multi-Exposure Image Fusion. IEEE Trans. Image Process. 2015, 24, 3345–3356. [Google Scholar] [CrossRef]
  58. Horé, A.; Ziou, D. Image Quality Metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010. [Google Scholar] [CrossRef]
  59. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef] [PubMed]
  60. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 9–12 November 2003. [Google Scholar] [CrossRef]
  61. Xue, W.; Zhang, L.; Mou, X.; Bovik, A.C. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Trans. Image Process. 2013, 23, 684–695. [Google Scholar] [CrossRef] [PubMed]
  62. Sharma, G.; Wu, W.; Dalal, E.N. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Res. Appl. 2005, 30, 21–30. [Google Scholar] [CrossRef]
  63. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “Completely Blind” Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212. [Google Scholar] [CrossRef]
  64. N, V.; D, P.; Bh, M.C.; Channappayya, S.S.; Medasani, S.S. Blind image quality evaluation using perception based features. In Proceedings of the 2015 Twenty First National Conference on Communications (NCC), Mumbai, India, 27 February–1 March 2015. [Google Scholar]
  65. Sun, S.; Ren, W.; Peng, J.; Song, F.; Cao, X. DI-Retinex: Digital-imaging Retinex model for low-light image enhancement. Int. J. Comput. Vis. 2025, 133, 8293–8314. [Google Scholar] [CrossRef]
  66. Brateanu, A.; Balmez, R.; Avram, A.; Orhei, C.; Ancuti, C. LYT-NET: Lightweight YUV Transformer-Based Network for Low-Light Image Enhancement. IEEE Signal Process. Lett. 2025, 32, 2065–2069. [Google Scholar] [CrossRef]
Figure 1. Overview of the proposed method, DADNet. Initially, the low-light image and its corresponding dark channel map are fed into two separate branches. The Illumination Enhancement Module (IEM) branch consists of the Dark Channel Feature Block (DCFB) and the Dark Channel Block (DCB). The Color Transformation Module (CTM) branch consists of the Low-light Feature Block (LFB) and the attention mechanism. Detailed network architectures for each block are provided in the schematic diagram below the overall framework.
Figure 2. Visual comparison of the ablation study. The red box highlights the key area of attention. The output without IEM suffers from local overexposure. The output without CTM exhibits oversaturation. The outputs without $F_{Offset}$ and with the second-position feature as $F_{Offset}$ show abnormal color recovery. Please zoom in for a better view.
Figure 3. Global and local visual comparison between DADNet and state-of-the-art methods on the LOL-v1 dataset. DI-Retinex, LYT-Net, and DADNet produce overall brightness levels closest to the ground truth. In the glove detail, CLAHE, PIE, and NPLIE show artifacts. Please zoom in for a better view.
Figure 4. Global and local visual comparison between DADNet and state-of-the-art methods on the RAISE dataset. MSRCR, RetinexNet, ZeroDCE, ZeroDCE++, DI-Retinex, and LYT-Net show color casts. LIME, Jeon et al., NPLIE, and DADNet achieve overall results closest to the ground truth. In the zoomed-in details, LIME, Jeon et al., NPLIE, PIE, and CoLIE fail to recover fine details effectively. Please zoom in for a better view.
Figure 5. Global and local visual comparison between DADNet and state-of-the-art methods on the SICE dataset. Jeon et al., RetinexNet, LYT-Net, and DADNet have overall brightness closest to the ground truth. CLAHE, LIME, Jeon et al., NPLIE, and RetinexNet recover details unnaturally. Please zoom in for a better view.
Figure 6. Global and local visual comparison between DADNet and state-of-the-art methods on the LIME dataset. Jeon et al., NPLIE, RetinexNet, CoLIE, DI-Retinex, and LYT-Net fail to recover clear details in the extremely dark region in the top left. DADNet ensures clear details in that area. Please zoom in for a better view.
Figure 7. Global and local visual comparison between DADNet and state-of-the-art methods on the MEF dataset. Jeon et al., NPLIE, PIE, CoLIE, DI-Retinex, and LYT-Net fail to sufficiently restore brightness, resulting in blurred details in extremely dark regions. DADNet preserves clear details in these areas. Please zoom in for a clearer view.
Figure 8. Visualization comparisons between DADNet and state-of-the-art methods on the SICE dataset. Please zoom in for a better view.
Figure 9. Visualization comparisons between DADNet and state-of-the-art methods on the LOL-v1 dataset. Please zoom in for a better view.
Figure 10. Visualization comparisons between DADNet and state-of-the-art methods on the RAISE dataset. Please zoom in for a better view.
Figure 11. Radar chart comparison between DADNet and state-of-the-art methods. All data are uniformly normalized, with outer values denoting superior performance. Please zoom in for a better view.
Table 1. Hyperparameter settings for model training.
Hyperparameter | Setting
Optimizer | Adam
Learning rate | 0.0005
Weight decay | 0.000001
Batch size | 8
Epochs | 500
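Assuming a PyTorch-style training setup (the framework is not stated in this excerpt), the hyperparameters in Table 1 translate into roughly the following configuration; the placeholder model stands in for DADNet and is purely illustrative.

```python
import torch

# Placeholder module standing in for DADNet; the actual dual-branch
# architecture (IEM + CTM) is shown in Figure 1 and not reproduced here.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Adam optimizer configured with the values listed in Table 1.
optimizer = torch.optim.Adam(model.parameters(), lr=0.0005, weight_decay=0.000001)

batch_size = 8    # images per training batch
num_epochs = 500  # total training epochs
```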
Table 2. Ablation study of key components in DADNet, conducted on the LOL-v1 dataset. The best value is shown in red and bold, the second best in blue and bold.
Method | IEM | CTM | PSNR ↑ | FSIM ↑ | MS-SSIM ↑ | GMSD ↓ | CIEDE2000 ↓
Without IEM | ✗ | ✓ | 19.3047 | 0.8657 | 0.8443 | 0.1145 | 0.0695
Without CTM | ✓ | ✗ | 18.1440 | 0.9022 | 0.8987 | 0.0922 | 0.0888
DADNet (Ours) | ✓ | ✓ | 24.3585 | 0.9343 | 0.9354 | 0.0686 | 0.0456
Table 3. Ablation study on F Offset in DADNet, validating the selection of the third-position feature. The best and second-best values follow the same convention as in Table 2, and the third best is shown in black and bold.
Method | PSNR ↑ | FSIM ↑ | MS-SSIM ↑ | GMSD ↓ | CIEDE2000 ↓
Without F Offset | 24.5807 | 0.9349 | 0.9412 | 0.0713 | 0.0457
1st-position feature as F Offset | 23.6164 | 0.9340 | 0.9341 | 0.0702 | 0.0485
2nd-position feature as F Offset | 24.8902 | 0.9347 | 0.9415 | 0.0708 | 0.0450
DADNet (Ours) | 24.3585 | 0.9343 | 0.9354 | 0.0686 | 0.0456
Table 4. Experimental results with image quality assessment metrics on the paired datasets.
Method | LOL-v1: PSNR ↑ / FSIM ↑ / MS-SSIM ↑ / GMSD ↓ | RAISE: PSNR ↑ / FSIM ↑ / MS-SSIM ↑ / GMSD ↓ | SICE: PSNR ↑ / FSIM ↑ / MS-SSIM ↑ / GMSD ↓
MSRCR | 15.4146 / 0.8805 / 0.8788 / 0.0909 | 14.9279 / 0.9195 / 0.9192 / 0.0645 | 14.4296 / 0.8221 / 0.7875 / 0.1454
CLAHE | 13.1818 / 0.8220 / 0.7781 / 0.1336 | 13.6161 / 0.7654 / 0.7644 / 0.1810 | 13.6467 / 0.8037 / 0.7545 / 0.1608
LIME | 15.8091 / 0.9050 / 0.9093 / 0.0867 | 17.2111 / 0.8656 / 0.8709 / 0.1126 | 15.9160 / 0.8466 / 0.8090 / 0.1364
Jeon et al. | 15.8813 / 0.8481 / 0.8385 / 0.1239 | 16.9431 / 0.9023 / 0.9091 / 0.0911 | 15.7785 / 0.8446 / 0.8031 / 0.1425
NPLIE | 12.2643 / 0.8934 / 0.8439 / 0.0967 | 16.2839 / 0.8672 / 0.8856 / 0.1112 | 13.4317 / 0.8274 / 0.7829 / 0.1503
RetinexNet | 15.7419 / 0.8473 / 0.7935 / 0.1230 | 17.5062 / 0.8707 / 0.8438 / 0.1211 | 17.4090 / 0.8041 / 0.7045 / 0.1623
ZeroDCE | 14.1621 / 0.9038 / 0.8644 / 0.0869 | 17.6059 / 0.9156 / 0.9286 / 0.0772 | 15.3786 / 0.8440 / 0.7939 / 0.1338
ZeroDCE++ | 14.0946 / 0.8444 / 0.8122 / 0.1559 | 17.3176 / 0.9141 / 0.9216 / 0.0683 | 14.8897 / 0.7863 / 0.7313 / 0.1906
PIE | 10.9588 / 0.8651 / 0.7883 / 0.1180 | 14.5064 / 0.8709 / 0.8806 / 0.1136 | 12.3346 / 0.8049 / 0.7392 / 0.1559
CoLIE | 12.8431 / 0.8980 / 0.8496 / 0.0917 | 14.7721 / 0.8625 / 0.8545 / 0.1076 | 14.3155 / 0.8409 / 0.7940 / 0.1412
DI-Retinex | 16.9067 / 0.9092 / 0.8843 / 0.0844 | 16.1313 / 0.8781 / 0.8875 / 0.1108 | 15.2333 / 0.8271 / 0.7879 / 0.1471
LYT-Net | 20.8864 / 0.9400 / 0.9447 / 0.0644 | 21.4561 / 0.9566 / 0.9635 / 0.0457 | 18.6950 / 0.8574 / 0.8250 / 0.1306
DADNet | 24.3585 / 0.9343 / 0.9354 / 0.0686 | 25.1977 / 0.9560 / 0.9627 / 0.0489 | 22.0133 / 0.8714 / 0.8408 / 0.1237
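The PSNR values in Table 4 follow the standard peak signal-to-noise ratio definition; a minimal NumPy sketch, assuming enhanced and ground-truth images normalized to [0, 1], is given below. FSIM, MS-SSIM, and GMSD require their respective reference implementations and are not reproduced here.

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two images scaled to [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```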
Table 5. Experimental results with the CIEDE2000 metric on the paired datasets.
Method | LOL-v1 (CIEDE2000 ↓) | RAISE (CIEDE2000 ↓) | SICE (CIEDE2000 ↓)
MSRCR | 0.1228 | 0.1175 | 0.1183
CLAHE | 0.1462 | 0.1316 | 0.1304
LIME | 0.1173 | 0.0850 | 0.0933
Jeon et al. | 0.1272 | 0.1007 | 0.0990
NPLIE | 0.1589 | 0.1077 | 0.1329
RetinexNet | 0.1127 | 0.0818 | 0.0835
ZeroDCE | 0.1318 | 0.0950 | 0.1058
ZeroDCE++ | 0.1334 | 0.1018 | 0.1151
PIE | 0.1781 | 0.1294 | 0.1508
CoLIE | 0.1543 | 0.1330 | 0.1227
DI-Retinex | 0.1002 | 0.1051 | 0.1042
LYT-Net | 0.0651 | 0.0630 | 0.0767
DADNet | 0.0456 | 0.0478 | 0.0561
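CIEDE2000 measures perceptual color difference in CIELAB space (lower is better). A minimal scikit-image sketch is shown below; note that it returns the raw mean ΔE00 per image pair, whereas the normalization used to obtain the 0–1 scale reported in Table 5 is not specified in this excerpt.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def mean_ciede2000(pred_rgb: np.ndarray, target_rgb: np.ndarray) -> float:
    """Mean CIEDE2000 color difference between two RGB images with values in [0, 1]."""
    # Convert both images to CIELAB before computing the per-pixel Delta E00.
    delta = deltaE_ciede2000(rgb2lab(pred_rgb), rgb2lab(target_rgb))
    return float(delta.mean())
```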
Table 6. Experimental results with no-reference metrics on the unpaired datasets.
Method | LIME: NIQE ↓ / PIQUE ↓ | MEF: NIQE ↓ / PIQUE ↓
MSRCR | 12.0923 / 11.5761 | 8.6239 / 14.8090
CLAHE | 10.4246 / 13.0890 | 10.4044 / 8.0896
LIME | 14.2610 / 17.9406 | 14.0343 / 19.8174
Jeon et al. | 11.1577 / 10.7985 | 11.9930 / 8.6572
NPLIE | 12.6595 / 10.2433 | 12.1399 / 8.3904
RetinexNet | 16.5498 / 18.9426 | 10.6775 / 10.8139
ZeroDCE | 11.8049 / 10.8743 | 11.8711 / 8.1916
ZeroDCE++ | 11.8469 / 10.8599 | 11.9123 / 8.1105
PIE | 12.5824 / 10.3639 | 12.8463 / 8.3032
CoLIE | 12.7824 / 11.9858 | 12.2817 / 10.2174
DI-Retinex | 13.1412 / 13.1136 | 11.9158 / 12.4605
LYT-Net | 14.9682 / 17.9638 | 17.1333 / 14.1563
DADNet | 11.9995 / 11.6010 | 11.9121 / 8.3768