Article

Cycle-Iterative Image Dehazing Based on Noise Evolution

College of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
*
Author to whom correspondence should be addressed.
Electronics 2025, 14(17), 3392; https://doi.org/10.3390/electronics14173392
Submission received: 17 July 2025 / Revised: 15 August 2025 / Accepted: 20 August 2025 / Published: 26 August 2025

Abstract

Benefiting from advances in machine learning, deep learning-based image dehazing algorithms have made remarkable progress. However, limited by (1) the incompleteness of symmetric datasets, (2) insufficient extraction of deep priors, and (3) the excessive scale of the networks, such algorithms have poor generalization ability on real-world datasets and lack real-time performance. To address these issues, this paper proposes a noise evolution-based cycle-iterative dehazing algorithm. In our method, the noise evolution in each iteration includes an atmospheric scattering model (ASM)-based dehazing module, a random haze addition module, and a Retinex-based inverse enhancement module. More specifically, the ASM-based image dehazing module initially clarifies hazy images by extracting haze-related features according to the ASM. The random haze addition module combines the depth-related parameters extracted by the previous module with a random adjustment or assignment mechanism to expand the samples, thereby addressing the problem of data shortage. The Retinex-based inverse enhancement module is introduced to mine “depth” features related to illumination, aiming to ensure the extraction of richer priors from the Retinex model. It is worth noting that both the dehazing module and the inverse enhancement module use the low-complexity U-Net as the main backbone, and the random haze addition module involves only simple operations. This design effectively limits the deployment scale and computational complexity of our algorithm. Experiments reveal that the proposed algorithm not only robustly restores hazy images but also exhibits promising advantages in terms of running time and the scale of network parameters.

1. Introduction

Haze, dust, and suspended particles in the atmosphere, under adverse weather conditions, typically scatter and absorb reflected light during imaging, resulting in reduced image contrast and the loss of crucial details. This degradation severely impacts both the visual quality of images and the performance of downstream high-level vision tasks that rely on high-quality inputs, such as object detection [1] and image recognition [2]. As a result, developing an efficient haze removal technique has become a critical research challenge, drawing significant attention from both academia and industry.
According to the existing literature, dehazing algorithms can be broadly classified into three categories: image enhancement-based methods, atmospheric scattering model-based methods, and learning-based methods. Image enhancement-based methods, unlike the other two categories, do not rely on prior knowledge or physical models. Instead, they directly perform global or local contrast stretching, color correction, and detail enhancement to restore hazy images. Common techniques in this group include histogram equalization (HE) [3], color correction methods [4], and Retinex-based methods [5]. These algorithms are relatively simple and computationally efficient. However, they often neglect the underlying optical degradation mechanisms involved in the imaging process, making them susceptible to over-enhancement, artifacts, and other undesired side effects during the dehazing process.
In contrast, atmospheric scattering model-based methods treat the hazy image as a result of the superposition of the scene’s true radiance and the airlight component after atmospheric attenuation. Notable examples include the dark channel prior (DCP) [6], the color attenuation prior (CAP) [7], and the gamma correction prior (GCP) [8]. These methods are more effective in preserving real color and structural details, thanks to the incorporation of physical constraints. Nevertheless, they often rely on prior assumptions that may not hold in more complex scenarios, such as those involving dense or uneven haze. This can lead to inaccuracies in the estimation of atmospheric light and transmission maps, ultimately impairing the effectiveness of the dehazing results.
With the rapid development of artificial intelligence, deep learning models have gradually become the main research strategy in the field of image dehazing, offering superior performance over traditional methods in both restoration quality and computational efficiency. However, persistent challenges, namely incomplete symmetric datasets, inadequate depth prior extraction, and oversized network architectures, continue to restrain the generalization capability and real-time applicability of existing learning-based dehazing algorithms.
Recently, diffusion models [9] have gained widespread use in computer vision, leading to several impactful advancements. A diffusion model typically consists of two stages, noise addition and denoising, which allows the deep model to explore a wider range of possibilities. Motivated by this, we propose a cycle-iterative image dehazing algorithm from the perspective of noise evolution, which effectively integrates denoising, noise addition, and re-denoising into a unified framework. Specifically, the noise evolution in each iteration comprises three modules: a dehazing module, a random haze addition module, and an inverse enhancement module. These components simulate haze degradation and restoration, optimizing feature extraction and enhancing the algorithm’s applicability in real-world scenarios. It is important to note a key distinction between our noise evolution and traditional diffusion models. Unlike the two-stage process of diffusion (which involves only noise addition and denoising), our method adopts a cyclic iterative strategy of denoising, adding noise, and re-denoising, which helps the network learn a richer range of degradations. Moreover, the atmospheric scattering model (ASM) is embedded within the dehazing module, while the Retinex model is incorporated in the inverse enhancement module. This integration leverages physical models to guide network training. Specifically, the Retinex model is applied to enhance the inverse of the hazy images synthesized from the result produced by the ASM-based dehazing module. As such, the two models work independently without interfering with one another, each contributing its own functionality for denoising. Experiments demonstrate that the proposed algorithm achieves impressive dehazing results, significantly improving generalization in real-world environments. Furthermore, it outperforms existing state-of-the-art methods in terms of computational efficiency. Our contributions are summarized as follows:
  • To the best of our knowledge, we are the first to propose a novel dehazing framework grounded in the concept of noise evolution. This innovative approach not only enables the virtualization of haze data during the process but also facilitates the extraction of richer, more informative priors from both the atmospheric scattering model (ASM) and Retinex model. By integrating these models within a dynamic iterative cycle, our framework introduces a new paradigm for image dehazing that enhances both the interpretability and effectiveness of haze removal.
  • In response to the limitations of existing symmetric datasets, particularly in terms of sample diversity and completeness, we introduce a novel random haze addition module. This module innovatively simulates haze and applies it to the existing dataset, effectively expanding the sample space. By generating virtual haze variations, our method not only enriches the dataset but also significantly enhances its robustness. This expansion improves the performance of our approach across both real-world images and their virtual counterparts, marking a substantial advancement in dehazing techniques by addressing the gap in diverse, high-quality training data.
  • To address the limitation of existing algorithms, which primarily focus on haze-related depth features or overlook the Retinex model, we propose an innovative inverse enhancement module that leverages a reverse strategy. This module is designed to extract and refine depth features related to illumination, thereby solving the critical issues of over-dehazing and under-dehazing. By incorporating illumination-driven enhancements, our approach enables a more balanced and accurate dehazing process, offering a significant advancement over traditional methods that fail to account for these complex scenarios.
The paper is organized as follows. Section 2 provides a comprehensive review of the related works in image dehazing, highlighting key advancements and identifying the limitations of existing approaches. Section 3 introduces the proposed method, detailing its core components and the technical innovations. In Section 4, we present the experimental results along with a comparative analysis against state-of-the-art methods. Finally, Section 5 concludes the paper.

2. Related Works

In this section, a brief overview of the research related to single-image dehazing techniques is provided, which can be generally divided into Retinex-based image dehazing algorithms, atmospheric scattering model (ASM)-based dehazing algorithms, and learning-based image dehazing algorithms [10].

2.1. Retinex-Based Image Dehazing

Early image dehazing algorithms primarily relied on image enhancement techniques to remove haze. These methods typically improve visual perception by enhancing the local or global contrast of images. Essentially, such approaches lack a physically grounded degradation model; they do not consider the underlying causes of image degradation but instead focus on visual appearance, aiming to improve image quality through direct pixel-level processing. Among them, the most representative is the well-known Retinex algorithm, which builds on Retinex theory [11] to achieve enhancement by decomposing an image into reflection and illumination components. Mathematically, it can be expressed as
$$ I(x, y) = R(x, y) \cdot L(x, y) \tag{1} $$
where $(x, y)$ denotes a pixel coordinate in the image, and the observed image $I$ is expressed as the product of the reflection component $R$ and the illumination component $L$.
Traditional Retinex methods, e.g., Single-Scale Retinex (SSR) [5] and Multi-Scale Retinex (MSR) [12], can significantly brighten low-light images and improve the corresponding contrast. However, they suffer from high computational cost and require careful tuning of enhancement parameters to prevent over-enhancement and color artifacts. Recent work has therefore focused on integrating Retinex principles into deep neural networks for more efficient and robust low-light enhancement. For instance, Zhang et al. [13] introduced the Kind++ network, which explicitly decomposes input images into reflectance and illumination components. Moreover, it also employs a multi-scale illumination attention module to boost restoration accuracy. In Ref. [14], Yi et al. fused Retinex concepts with diffusion-based generative models, embedding a supervised attention mechanism guided by Retinex theory to learn and extract illumination-invariant features. More recently, to predict the initial illumination map directly, Fan et al. [15] eliminated the traditional Retinex illumination refinement stage and instead trained a convolutional neural network, streamlining the enhancement pipeline.
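For concreteness, the sketch below shows a minimal single-scale Retinex implementation in Python. It assumes an OpenCV/NumPy environment; the Gaussian sigma and the per-channel rescaling are illustrative choices rather than parameters taken from the cited methods.

```python
import cv2
import numpy as np

def single_scale_retinex(image, sigma=80.0, eps=1e-6):
    """Minimal SSR: log(I) - log(Gaussian-blurred I) approximates log(R) in Equation (1)."""
    img = image.astype(np.float32) + eps             # avoid log(0)
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)   # smooth estimate of the illumination L
    retinex = np.log(img) - np.log(blurred + eps)    # reflectance in the log domain
    # Rescale each channel to [0, 255] for display (illustrative post-processing).
    out = np.zeros_like(retinex)
    for c in range(retinex.shape[2]):
        ch = retinex[..., c]
        out[..., c] = 255.0 * (ch - ch.min()) / (ch.max() - ch.min() + eps)
    return out.astype(np.uint8)
```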
Image enhancement-based dehazing algorithms offer certain advantages, especially simplicity of operation and flexibility of implementation, and they have been widely applied in various image processing scenarios. However, owing to the limitations of the Retinex model, these methods lack modeling of the atmospheric scattering process and often ignore the physical interactions between illumination and the medium. As a result, images restored with this model may suffer from poor physical plausibility, leading to deficiencies in color fidelity and detail restoration. Furthermore, most Retinex-based methods rely on global processing strategies, even when implemented with learning, and are thus inadequate in handling the spatial non-uniformity of haze distribution. Over-enhancement or under-enhancement therefore occurs in certain regions, restricting their applicability in high-quality image restoration tasks.

2.2. Atmospheric Scattering Model-Based Image Dehazing

Atmospheric scattering model (ASM)-based [16] image dehazing algorithms are typically grounded in prior knowledge used to estimate the imaging parameters of this model, which simulates the image degradation process under hazy weather conditions by modeling the underlying physical imaging mechanisms. Mathematically, the ASM can be formally expressed as follows:
$$ I(x, y) = J(x, y) \cdot t(x, y) + A \cdot \big( 1 - t(x, y) \big) \tag{2} $$
where $I$ is the observed hazy image, $J$ is the clear image, $A$ is the atmospheric light, and $t$ is the transmission map. It is clear that recovering $J$ requires accurate estimates of both $A$ and $t$ from $I$; thus, their estimation accuracy directly governs the quality of the recovery result.
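For intuition, once $A$ and $t$ are available, recovering $J$ from Equation (2) reduces to an element-wise inversion. The sketch below assumes NumPy; the lower bound on the transmission is a common stabilization heuristic, not part of the model itself.

```python
import numpy as np

def recover_scene(I, A, t, t_min=0.1):
    """Invert Equation (2), I = J*t + A*(1 - t), for J; clamp t to avoid amplifying noise."""
    t = np.clip(t, t_min, 1.0)[..., None]            # broadcast transmission over color channels
    J = (I.astype(np.float32) - A * (1.0 - t)) / t
    return np.clip(J, 0.0, 1.0)                      # keep the result in a valid intensity range
```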
The nature of ASM-based dehazing methods is to utilize handcrafted priors to solve for the unknown parameters of the ASM and then recover the clear scene using these parameters. A typical example is the well-known dark channel prior (DCP) by He et al. [6], which states that, in clear outdoor images, most local patches contain at least one color channel with near-zero intensity; this insight is then employed to infer the transmission map. In Ref. [7], Zhu et al. proposed the color attenuation prior (CAP), formulating a relationship between scene depth and the difference between image brightness and saturation. Subsequently, Ju et al. introduced the region line prior (RLP) [17] and the gamma correction prior (GCP) [8] to leverage inherent spatial structures for constraining the haze degradation. To enable joint dehazing and exposure compensation in a unified framework, Ju et al. [18] augmented the scattering model with a transmission-dependent absorption coefficient.
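As an illustration of how such priors are turned into transmission estimates, the following sketch implements a DCP-style dark channel and transmission map (assuming SciPy is available). The atmospheric light $A$ is taken as given here, and the patch size of 15 and $\omega = 0.95$ are the values commonly reported for DCP, used only for illustration.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(I, patch=15):
    """Per-pixel minimum over color channels followed by a local minimum filter (the dark channel)."""
    return minimum_filter(I.min(axis=2), size=patch)

def estimate_transmission(I, A, omega=0.95, patch=15):
    """DCP-style transmission estimate: t = 1 - omega * dark_channel(I / A)."""
    normalized = I / np.maximum(A, 1e-6)     # A can be a scalar or a per-channel vector
    return 1.0 - omega * dark_channel(normalized, patch)
```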
Although the above ASM-based dehazing methods have achieved remarkable progress in visual quality, their effectiveness is often constrained by the applicability of the underlying prior assumptions. For example, the DCP algorithm tends to fail when processing images with large bright regions or low-contrast scenes, as its assumption no longer holds in such cases. Moreover, these methods typically rely on a globally selected atmospheric light value, which further limits their adaptability under complex lighting conditions, thereby hindering their robustness and generalization ability in real-world applications.

2.3. Learning-Based Image Dehazing

The surge of deep learning has propelled learning-based dehazing into the forefront of research. These methods fall into two broad classes. The first class embeds the ASM within a neural network to mitigate the reliance on handcrafted priors for parameter estimation. For instance, Ren et al. [19] employed a dual-branch architecture—coarse and fine scales—to estimate the transmission map and thereby refine image details. In Ref. [20], Li et al. introduced a framework combining fuzzy region segmentation with haze-density decomposition to adapt the standard scattering model and apply multi-scale fusion to bolster enhancement quality. To further improve the performance, Liu et al. [21] proposed an illumination-adaptive scattering model to synthesize training data and incorporate an illumination-adaptive module. The other class learns haze characteristics directly from data via end-to-end networks. For example, Zhang et al. [22] unified depth estimation and dehazing objectives in a single network, fostering mutual reinforcement between the two tasks, while Su et al. [23] improved generalization by combining multiple priors with physics-driven domain transfer. Cheng et al. [24] leveraged contrastive learning to update negative samples dynamically, ensuring reconstructed images converge toward clear references while remaining distinct from adversarial examples. Fang et al. [25] designed a structure-guided dehazing network that exploits YCbCr-domain texture and channel cues to inform RGB-domain feature learning, which is able to boost color fidelity.
Benefiting from leveraging powerful nonlinear mapping capabilities and feature extraction mechanisms, deep learning-based dehazing methods have demonstrated superior performance over prior-based and enhancement-based algorithms in terms of detail restoration, edge preservation, and color fidelity. However, most deep learning-based dehazing models are trained on synthetic datasets, which are usually generated using simplified physical models and fail to fully capture the diversity and complexity of haze distributions in real-world scenarios. As a result, these models often exhibit limited generalization ability when applied to real hazy images. Moreover, due to the imbalanced distributions of haze density, illumination conditions, and scene content in synthetic datasets, the trained models may struggle to learn comprehensive degradation patterns during training, thereby affecting robustness and practical applicability. In summary, while deep learning-based dehazing approaches have achieved groundbreaking advances in both theoretical research and experimental validation, further improvements are still needed in terms of data realism, model generalization, and computational efficiency.

2.4. Iterative Image Dehazing and Enhancement

Iterative frameworks have become a promising approach in image dehazing and enhancement, which address the ill-posed nature of these tasks through progressive refinement. For instance, Fu et al. [26] introduced IPC-Dehaze, an iterative predictor–critic architecture that leverages progressively refined codebook priors through cyclic prediction–evaluation iterations, enabling stable dehazed outputs while mitigating error accumulation in real-world scenarios. Zheng et al. [27] proposed T-Net and constructed a recursively unfolded Stack T-Net, using it as a submodule. By iteratively inputting “stage output + original image,” the system achieves coarse-to-fine haze removal. In Ref. [28], Sun et al. proposed a novel framework that leverages large vision-language models (VLMs) to generate textual instructions describing normal illumination conditions. These instructions are iteratively refined to guide a diffusion model, enabling controllable and high-quality low-light image enhancement. Liang et al. [29] proposed an iterative prompt learning framework that progressively refines positive and negative textual prompts to optimize Contrastive Language-Image Pretraining (CLIP) priors, effectively minimizing the distribution gap among backlit images, enhanced results, and well-lit images in the latent space, which achieves high-quality and generalizable image enhancement without paired training data. Liu et al. [30] developed a cyclic “brightness stretching coefficient estimation-fusion” framework to progressively correct non-uniform low illumination. This method iteratively optimizes local luminance mapping through two-stage refinement while preserving structural details and color fidelity.

3. Cycle-Iterative Image Dehazing Based on Noise Evolution

3.1. Overall Network Architecture

To address the challenges mentioned above, we introduce a cycle-iterative image dehazing framework inspired by the perspective of noise evolution. Its core idea is to alternately denoise and add noise to an input image in order to enhance our method’s generalization capability. More specifically, each iteration consists of three sequential modules. The atmospheric scattering model (ASM)-based dehazing module (evolution 1) extracts haze-related features from an input image and performs initial denoising to attenuate the haze effect using the ASM. The random haze addition module (evolution 2) re-injects controlled synthetic haze by randomly sampling the atmospheric light and combining it with the estimated transmission map. This evolution generates varied haze intensities, effectively augmenting the training data. The Retinex-based inverse enhancement module (evolution 3) inverts the synthetically hazed image, processes it through a Retinex-based image enhancement network, and then re-inverts the output to reconstruct the dehazed image. This invert–enhance cycle preserves structural details and delivers improved contrast and color fidelity in the final result. Figure 1 provides an overview of the proposed framework, illustrating the hierarchical relationships and interactions among its core components. A more detailed depiction of the overall architecture is shown in Figure 2. As demonstrated, the proposed method iteratively refines the input through cycles of denoising and noise evolution, leading to enhanced detail preservation and visual clarity. This iterative strategy significantly improves the robustness and generalization ability of the model when applied to complex real-world hazy scenes.
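The following high-level sketch outlines one forward pass of the cycle. The three module callables are placeholders for the networks described in Sections 3.2, 3.3 and 3.4, and the number of cycles as well as the way each new cycle is seeded are assumptions of this sketch rather than details confirmed by the architecture figures.

```python
def cycle_iteration(I_hazy, dehaze_net, enhance_net, add_random_haze, num_cycles=2):
    """One pass of the noise-evolution cycle: dehaze -> add random haze -> inverse enhance.

    `dehaze_net`, `enhance_net`, and `add_random_haze` are placeholders for the modules of
    Sections 3.2-3.4; `num_cycles` and the seeding of the next cycle are assumptions here.
    """
    outputs = []
    x = I_hazy
    for _ in range(num_cycles):
        J, K = dehaze_net(x)                # evolution 1: ASM-based dehazing (image and joint parameter K)
        x_rehazed = add_random_haze(J, K)   # evolution 2: synthetic haze with random A and adjusted t
        inverted = 1.0 - x_rehazed          # invert: the re-hazed image now resembles a low-light image
        H = 1.0 - enhance_net(inverted)     # evolution 3: Retinex-based enhancement, then re-invert
        outputs.append((J, H))              # both outputs are supervised in the loss (Section 3.5)
        x = x_rehazed                       # seed the next cycle with the re-hazed image (assumption)
    return outputs
```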

3.2. ASM-Based Dehazing Module (Evolution 1)

Motivated by the unified parameter estimation strategy proposed in AOD-Net [31], this paper combines the atmospheric light $A$ and the transmission $t(x, y)$ into a single parameter matrix $K(x, y)$. Under this strategy, the ASM, i.e., Equation (2), can be reformulated as
$$ J(x, y) = K(x, y) \cdot I(x, y) - K(x, y) + b \tag{3} $$
where
$$ K(x, y) = \frac{\frac{1}{t(x, y)} \big( I(x, y) - A \big) + (A - b)}{I(x, y) - 1} \tag{4} $$
where $K$ encapsulates the atmospheric light $A$ and the transmission map $t$, and $b$ denotes a constant offset.
To reduce the complexity and computational burden associated with the AOD-Net, this paper replaces conventional CNN architectures with a U-Net encoder–decoder structure [32], augmented by residual blocks [33] to facilitate effective feature representation and propagation. In the encoder stage, multi-scale features are progressively extracted through max-pooling operations combined with residual connections. In the decoder stage, features are up-sampled via transposed convolutions, while skip connections concatenate corresponding encoder feature maps to recover detailed information. A final 1 × 1 convolutional layer outputs the joint estimation parameter. The network then produces the dehazed image by applying Equation (3) using K and the original hazy input I . The architecture of this dehazing module is illustrated on the left side in Figure 3. As the initial step in the proposed method using noise evolution, this module reduces haze and suppresses noise in the input image.
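A compact sketch of such an encoder–decoder $K$-estimator is given below (PyTorch). The channel widths and depth are illustrative, the offset $b = 1$ follows AOD-Net’s default, and the real module additionally uses residual blocks and more scales than shown here.

```python
import torch
import torch.nn as nn

class KEstimator(nn.Module):
    """Tiny U-Net-style estimator of the joint parameter K(x, y); a sketch, not the full network."""

    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.MaxPool2d(2),
                                  nn.Conv2d(ch, ch * 2, 3, padding=1), nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose2d(ch * 2, ch, kernel_size=2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(ch * 2, ch, 3, padding=1), nn.ReLU(inplace=True),
                                 nn.Conv2d(ch, 3, kernel_size=1))   # final 1x1 conv outputs K

    def forward(self, I, b=1.0):
        e1 = self.enc1(I)                                   # full-resolution features
        e2 = self.enc2(e1)                                  # downsampled features
        K = self.dec(torch.cat([self.up(e2), e1], dim=1))   # upsample + skip connection
        J = K * I - K + b                                   # Equation (3): J = K * I - K + b
        return J.clamp(0.0, 1.0), K
```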

3.3. Random Haze Addition Module (Evolution 2)

As aforementioned, the random haze addition module is designed to simulate hazy images and thereby expand the training datasets to some extent. Meanwhile, the joint parameter $K$ obtained from the previous dehazing module is related to depth and can therefore further serve as a key input for the haze addition module. To align $K$ and $t$ as much as possible, we utilize Equation (4) with the assumption $A = 1$ to compute the coarse transmission map $\bar{t}^{i}$ in the $i$-th iteration. We remark that, although the coarse transmission map is produced with $A = 1$ and $K$, it still provides a relatively accurate result, since the atmospheric light value in real-world images is typically less than 1. Now, we only need to randomly reassign the value of the atmospheric light and the distribution of the transmission map to simulate hazy images. Two strategies are proposed:
  • The first strategy applies a statistically guided linear normalization to the initial transmission map. Specifically, it is formulated as
    $$ t^{i} = \alpha^{i} \cdot \bar{t}^{i} + \beta^{i} \tag{5} $$
    where $\alpha^{i}$ and $\beta^{i}$ denote the scaling coefficient and the offset term of the linear transformation, respectively. These parameters are derived from the statistical distribution of $\bar{t}^{i}$. By applying the linear transformation in Equation (5), the distribution of $\bar{t}^{i}$ is normalized to $[0, 1]$ as follows:
    $$ \alpha^{i} \cdot \bar{t}^{i}_{\min} + \beta^{i} = 0, \qquad \alpha^{i} \cdot \bar{t}^{i}_{\max} + \beta^{i} = 1 \tag{6} $$
    where $\bar{t}^{i}_{\max}$ and $\bar{t}^{i}_{\min}$ denote the maximum and minimum values of the coarse transmission map $\bar{t}^{i}$, respectively. To reduce the influence of outliers, the 5th and 95th percentiles are used in place of the global extrema in our experiments. This linear transformation maps the original values into the standard interval $[0, 1]$, ensuring the physical plausibility of the generated transmission map.
  • The other strategy is based on gamma correction [8] to perform nonlinear adjustment. To effectively refine the transmission map t, the gamma value γ is randomly sampled within the range [1.1, 1.5]. Setting this range aims to balance low- and high-transmission regions and avoid excessive correction. By introducing randomness into gamma sampling, this method simulates the non-uniformity of haze in natural environments, thereby enhancing the realism and diversity of the synthesized hazy images. Mathematically, the expression of such a transformation is given by
    $$ t^{i} = \left( \bar{t}^{i} \right)^{\gamma^{i}} \tag{7} $$
    where $\gamma^{i}$ denotes the randomly selected gamma value in the $i$-th iteration. Subsequently, the adjusted transmission map $t$ and the randomly generated atmospheric light $A$ are substituted into Equation (2) to synthesize a hazy image. This second stage of each iteration not only facilitates effective collaboration with the subsequent enhancement module but also significantly diversifies the training data through controlled noise introduction. For clarity, the architecture of the haze addition module is illustrated on the right side of Figure 3. As the second stage within the proposed noise evolution framework, the random haze addition module systematically injects synthetic haze with controlled noise into the output of the previous dehazing stage. The benefit is that it expands the dataset beyond the original samples, which may contain non-uniform haze cover, thereby improving the robustness and generalization of our method. A sketch of both re-hazing strategies is given below.
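A minimal sketch of the two re-hazing strategies is given below (NumPy). The percentile bounds and the gamma interval follow the text above, whereas the sampling range for the atmospheric light is an assumption of this sketch.

```python
import numpy as np

def add_random_haze(J, t_coarse, strategy="gamma", rng=None):
    """Re-synthesize a hazy image from a dehazed estimate J (H x W x 3, in [0, 1])
    and a coarse transmission map t_coarse (H x W)."""
    rng = np.random.default_rng() if rng is None else rng
    if strategy == "linear":
        # Statistically guided linear normalization (Equations (5)-(6)),
        # using the 5th/95th percentiles instead of the global extrema.
        lo, hi = np.percentile(t_coarse, [5, 95])
        t = np.clip((t_coarse - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    else:
        # Random gamma correction (Equation (7)); the sampling interval follows Section 3.3.
        gamma = rng.uniform(1.1, 1.5)
        t = np.clip(t_coarse, 0.0, 1.0) ** gamma
    A = rng.uniform(0.7, 1.0)           # random atmospheric light (this range is an assumption)
    t3 = t[..., None]                   # broadcast the transmission over color channels
    return J * t3 + A * (1.0 - t3)      # Equation (2): I = J * t + A * (1 - t)
```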

3.4. Retinex-Based Inverse Enhancement Module (Evolution 3)

Low-light images typically exhibit characteristics such as reduced brightness and blurred details. Following the assumption in [8] that the inversion of a hazy image closely resembles a low-light scenario, we invert hazy images to generate visually low-light equivalents (see Figure 4). By applying Retinex-based image enhancement to these inverted images and subsequently re-inverting the enhanced outputs, an effective dehazing result can be achieved. This is because employing Retinex to enhance an image not only performs visual enhancement by learning a deep prior on illumination but also distills latent knowledge of the Retinex model into the proposed network.
Motivated by this fact, a Retinex-based inverse enhancement module is proposed, whose purpose is to deeply exploit the illumination-related depth features in hazy images through a reverse strategy and then use them to enhance the dehazed image quality. According to Ref. [8], the mathematical formulation corresponding to the complete dehazing process can be written as
$$ H = 1 - f(1 - L) \tag{8} $$
where $L$ represents the output of the haze addition module, $f(\cdot)$ denotes the image enhancement network, and $H$ is the final enhanced image obtained after applying the inversion operation. As the equation shows, the module first inverts the hazy image produced by the haze addition module, then processes the inverted image with a Retinex deep network, and finally applies the inversion operation again to obtain the output. Once the inverted images are produced, they are fed into a decomposition network composed of multiple convolutional layers with ReLU activations for feature extraction. The network ultimately outputs the reflectance component $R_{low}^{i}$ and illumination component $L_{low}^{i}$ of the inverted hazy image, alongside the reflectance component $R_{normal}^{i}$ and illumination component $L_{normal}^{i}$ of the inverted clear image.
The enhancement network takes the illumination component $L_{low}^{i}$ from the decomposition network and processes it with a U-Net architecture to generate an illumination variation map $\hat{L}_{low}^{i}$ with the same dimensions as the input. This map captures the illumination corrections needed to enhance the low-light representation. It is then combined with the denoised reflectance $\hat{R}_{low}^{i}$ to reconstruct the enhanced image. Finally, an inverse transformation brings the enhanced result back into the natural image domain, thereby effectively improving the visual quality. The architecture of this module is depicted in Figure 5.
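The invert–enhance–invert cycle of Equation (8) can be summarized by the following sketch, in which the decomposition and enhancement networks are placeholders and the recombination of reflectance and corrected illumination follows the Retinex formulation of Equation (1).

```python
def inverse_enhance(L_hazy, decompose_net, enhance_net):
    """Equation (8): invert, enhance with Retinex-style networks, then invert back.

    `decompose_net` and `enhance_net` are placeholders for the decomposition and U-Net
    enhancement networks described above; this is a sketch of the data flow only.
    """
    x_inv = 1.0 - L_hazy                    # the inverted hazy image behaves like a low-light image
    R_low, L_low = decompose_net(x_inv)     # reflectance / illumination decomposition
    L_hat = enhance_net(L_low)              # U-Net predicts the illumination variation map
    enhanced_inv = R_low * L_hat            # recombine (Retinex: I = R * L, Equation (1))
    return 1.0 - enhanced_inv               # re-invert to the natural image domain
```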
Relative to Retinex-Net [34], the proposed module incorporates an inversion operation, reduces the convolutional depth of the decomposition network, and leverages a U-Net architecture to extract the illumination variation map $\hat{L}_{low}^{i}$. This design effectively reduces computational complexity while preserving the core capabilities of image decomposition. As the third stage in the proposed noise evolution framework, the Retinex-based inverse enhancement module brightens and enhances the hazy image again from the perspective of the Retinex model, ultimately producing a clean output.

3.5. Loss Function

To ensure the effectiveness and robustness of the proposed dehazing framework, a set of multi-type constraints is introduced and optimized to enhance overall network performance. The dehazing module employs a mean squared error (MSE) loss to measure the pixel-wise discrepancy between the initial dehazed output and the clear image. By accumulating this loss across multiple iterations, the method reinforces the reliability and accuracy of the dehazing process. The corresponding mathematical formulation is given as
$$ L_{1} = \frac{1}{n} \sum_{i=1}^{N} \left\| J - J_{dehaze}^{i} \right\|_{2}^{2} \tag{9} $$
where $J_{dehaze}^{i}$ denotes the output of the dehazing module at the $i$-th iteration, $J$ denotes the corresponding clear image, $\| \cdot \|_{2}$ represents the $L_{2}$ norm, $n$ is the total number of image pixels, and $N$ is the total number of iterations.
In the inverse enhancement module, following the design principles of Retinex-Net, three loss terms are jointly employed to supervise learning: reconstruction loss, reflectance consistency loss, and relighting loss. The reconstruction loss ensures the accuracy of the illumination and reflectance components predicted by the decomposition network. The reflectance consistency loss enforces alignment between the reflectance components of the inverted (low-light) and clear images. The relighting loss encourages the illumination condition of the inverted image to better approximate that of the clear image. The overall loss function for this module is defined as
$$ L_{2} = \sum_{i=1}^{N} \left( L_{rec}^{i} + L_{rcl}^{i} + L_{rl}^{i} \right) \tag{10} $$
where $L_{rec}^{i}$, $L_{rcl}^{i}$, and $L_{rl}^{i}$ represent the reconstruction loss, reflectance consistency loss, and relighting loss at the $i$-th iteration, respectively:
$$ L_{rec}^{i} = \sum_{m \in \{low, normal\}} \sum_{n \in \{low, normal\}} \lambda_{mn} \left\| R_{m}^{i} \times L_{n}^{i} - S_{n}^{i} \right\|_{1}, \quad L_{rcl}^{i} = \left\| R_{low}^{i} - R_{normal}^{i} \right\|_{1}, \quad L_{rl}^{i} = \left\| R_{low}^{i} \cdot \hat{L}_{low}^{i} - S_{normal}^{i} \right\|_{1} \tag{11} $$
where $S_{normal}^{i}$ denotes the inverted version of the clear image at the $i$-th iteration, and $\| \cdot \|_{1}$ represents the $L_{1}$ norm.
The MSE loss and total variation (TV) loss are introduced as global constraints at each iteration. The MSE loss encourages the enhanced output to remain aligned with the clear image, while the TV loss penalizes abrupt gradient changes to suppress noise and artifacts during dehazing and enhancement, thereby preserving the natural appearance of the image. The corresponding mathematical expressions are defined as
$$ L_{3} = \sum_{i=1}^{N} \left( L_{mse}^{i} + L_{TV}^{i} \right) \tag{12} $$
where
$$ L_{mse}^{i} = \frac{1}{n} \left\| J - J_{enh}^{i} \right\|_{2}^{2} \tag{13} $$
$$ L_{TV}^{i} = \sum_{x, y} \left( \left| \nabla_{h} M^{i}(x, y) \right|^{2} + \left| \nabla_{v} M^{i}(x, y) \right|^{2} \right) \tag{14} $$
where $J_{enh}^{i}$ denotes the output of the inverse enhancement module at iteration $i$, while $\nabla_{h} M^{i}(x, y)$ and $\nabla_{v} M^{i}(x, y)$ represent the horizontal and vertical gradients at pixel location $(x, y)$, respectively.
A weighted combination of the aforementioned loss terms is employed to construct a unified total loss function, enabling joint optimization across all modules and ensuring the robustness of the network in hazy environments. The total loss function is formulated as follows:
$$ L_{total} = \beta_{1} L_{1} + \beta_{2} L_{2} + \beta_{3} L_{3} \tag{15} $$
where $\beta_{1}$, $\beta_{2}$, and $\beta_{3}$ are the weighting coefficients used to balance the contribution of each loss term.
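A sketch of how the per-iteration terms can be combined into Equation (15) is shown below (PyTorch). The Retinex-Net terms of Equation (11) are passed in as precomputed values, and the weighting coefficients are illustrative rather than the values used in our experiments.

```python
import torch.nn.functional as F

def total_variation(x):
    """TV term of Equation (14): squared horizontal and vertical forward differences, summed."""
    dh = x[..., :, 1:] - x[..., :, :-1]
    dv = x[..., 1:, :] - x[..., :-1, :]
    return (dh ** 2).sum() + (dv ** 2).sum()

def total_loss(J_clear, dehaze_outs, enhance_outs, retinex_losses, betas=(1.0, 1.0, 1.0)):
    """Weighted combination of Equations (9), (10), and (12) into Equation (15).

    `dehaze_outs` and `enhance_outs` hold the per-iteration outputs J_dehaze^i and J_enh^i;
    `retinex_losses` holds the per-iteration Retinex-Net terms (Equation (11)).
    The beta values here are illustrative, not the weights used in the paper.
    """
    L1 = sum(F.mse_loss(J, J_clear) for J in dehaze_outs)                          # Equation (9)
    L2 = sum(retinex_losses)                                                       # Equation (10)
    L3 = sum(F.mse_loss(H, J_clear) + total_variation(H) for H in enhance_outs)    # Equation (12)
    b1, b2, b3 = betas
    return b1 * L1 + b2 * L2 + b3 * L3
```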

4. Experiment Results Analysis

4.1. Experiment Settings

The proposed method is implemented using the PyTorch framework and trained on a system equipped with an NVIDIA GeForce RTX 3080 GPU, an Intel Core i5-10900X CPU, and 32 GB of RAM. During training, images are randomly cropped to a fixed size of 640 × 480, followed by random augmentations, such as rotation at arbitrary angles, horizontal and vertical flipping, or remaining unchanged, to enhance data diversity. The batch size is set to 8, with the Adam optimizer employed and an initial learning rate of 0.001.
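A minimal sketch of the corresponding augmentation and optimizer setup is shown below (PyTorch/torchvision); the exact transform composition is an assumption, since only the crop size, flip/rotation augmentations, optimizer, batch size, and learning rate are specified above.

```python
import torch
from torchvision import transforms

# Augmentation matching Section 4.1: random 640x480 crops plus random flips and rotation.
train_transform = transforms.Compose([
    transforms.RandomCrop((480, 640)),        # torchvision expects (height, width)
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=180),   # rotation at an arbitrary angle
    transforms.ToTensor(),
])

def make_optimizer(model):
    """Adam with the initial learning rate reported above; batch size 8 is set in the DataLoader."""
    return torch.optim.Adam(model.parameters(), lr=1e-3)
```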

4.2. Dataset and Benchmark

The RESIDE [35] dataset is a large-scale synthetic hazy image benchmark that provides both indoor and outdoor training subsets. In this study, the Indoor Training Set (ITS) and Outdoor Training Set (OTS) are employed to train the proposed network. For evaluation, the O-HAZE [36], I-HAZE [37], Real-world Task-Driven Testing Set (RTTS) [35], and Hybrid Subjective Testing Set (HSTS) [35] datasets are used. The proposed approach is compared against several typical or state-of-the-art dehazing algorithms, including DCP [6], T-Net [27], IPC-Dehaze [26], C2PNet [38], RIDCP [39], KANet [40], DCMPNet [20], and DNMGDT [23]. It is worth noting that both T-Net and IPC-Dehaze adopt iterative strategies similar to our noise evolution framework; including them in the comparison therefore helps to highlight the advantages and effectiveness of the proposed algorithm.

4.3. Experiment on Synthetic Datasets

To quantitatively assess the dehazing performance on synthetic hazy datasets, the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) are employed as objective evaluation metrics. The PSNR quantifies the intensity-level differences between the dehazed and reference clear images, while the SSIM evaluates their structural similarity in terms of luminance, contrast, and structural consistency. Higher values of the PSNR and SSIM indicate superior dehazing quality. In addition, the runtime of each algorithm is recorded as an indicator of computational efficiency to provide a comprehensive performance comparison.
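For reference, both metrics can be computed as in the sketch below, assuming scikit-image (version 0.19 or later, which uses the channel_axis argument for color images).

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(dehazed, reference):
    """PSNR (dB) and SSIM between a dehazed result and its haze-free reference, both in [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, dehazed, data_range=1.0)
    ssim = structural_similarity(reference, dehazed, data_range=1.0, channel_axis=-1)
    return psnr, ssim
```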
As shown in Table 1 and Table 2, the proposed method outperforms most of the selected benchmark approaches on synthetic datasets, demonstrating the superiority of our approach. It is worth noting that although the PSNR and SSIM scores are slightly lower than those of DNMGDT and DCMPNet, our method offers significantly faster computational speed (see Table 3). These results highlight the effective balance between high-quality dehazing and computational efficiency, making our approach particularly suitable for practical applications, such as on mobile devices and in real-time video processing, where low latency is essential. Moreover, both DNMGDT and DCMPNet struggle to process outdoor images collected from real-world scenarios, as will be further demonstrated and discussed later.
As illustrated in Figure 6 and Figure 7, there are clear disparities in image restoration quality among different dehazing algorithms. The DCP algorithm, which relies on the dark channel prior, effectively eliminates haze but results in overall low image brightness and introduces artifacts in high-intensity regions. Among the deep learning-based approaches, T-Net and IPC-Dehaze exhibit limitations in local chromatic adaptation, resulting in noticeable hue inconsistencies on objects such as chairs. C2PNet and RIDCP exhibit limited capability in restoring details, often leaving residual haze and causing localized color distortions. KANet tends to produce overexposed outputs, leading to the loss of detail in bright areas. Moreover, DCMPNet and DNMGDT demonstrate unsatisfactory performance in complex scenes, such as sidewalks and distant regions, where haze remnants or structural blurring remain apparent. In contrast, the proposed algorithm exhibits notable superiority. Specifically, it significantly enhances the overall brightness of the dehazed images and achieves superior performance in haze removal, color fidelity, and detail preservation, producing clearer and more visually realistic restoration results.

4.4. Experiment on Real-World Datasets

To further evaluate the performance of the proposed algorithm on real-world hazy images, we conducted comparative experiments using widely adopted real hazy images from the RTTS and HSTS datasets. As shown in Figure 8, the traditional dark channel prior (DCP) algorithm exhibits notable limitations when applied to real hazy images, often resulting in underexposed dehazed outputs. Although deep learning-based approaches, such as DCMPNet and DNMGDT, show promising results on synthetic datasets, they still struggle with real-world applications. This is because the haze in synthetic datasets is simulated with simplified physical rules, which prevents these methods from fully learning the haze characteristics present in the real world. For example, as seen in the sixth row of Figure 8, DNMGDT fails to adequately remove haze in background regions, leading to incomplete detail recovery. Furthermore, color restoration is unstable in some cases, with DCMPNet and IPC-Dehaze introducing noticeable color distortions in reconstructed structures, such as roads and buildings. In contrast, our algorithm effectively bridges the domain gap between synthetic and real-world images by not only removing haze more thoroughly but also preserving natural color fidelity and structural details. This advantage can be attributed to our noise addition module, which simulates richer haze features that better align with real-world conditions.

4.5. Ablation Experiments

To systematically validate the effectiveness and contribution of the key components in the proposed algorithm, a set of ablation experiments was designed based on hazy images from various scenarios. The ablation study includes the following four comparative variants:
  • Variant I: Both the inverse enhancement module and the random haze addition module are removed.
  • Variant II: The random haze addition module is removed, while the dehazing and inverse enhancement modules are retained.
  • Variant III: The complete network structure is preserved, but the haze addition module adopts a linear transformation strategy to calibrate the initial transmission map, replacing the gamma correction method.
  • Variant IV: The random haze addition module uses the gamma correction strategy, but the transmission-related parameter γ i is fixed to 1, and the atmospheric light value is set to 0.85. This variant examines the model’s robustness and generalization ability under fixed parameter settings.
As the baseline for comparison, the complete proposed network that uses gamma correction as the calibration method is employed. All the experiments are conducted under the same training environment, using identical training parameters and data processing strategies to ensure result comparability and consistency. The experiment results are quantitatively evaluated using the PSNR and SSIM metrics, as shown in Table 4, with qualitative dehazing comparisons presented in Figure 9.
According to the results shown in Table 4, comparing Variant I (only the basic dehazing module) with Variant II (with the added inverse enhancement module), the PSNR improves by 3.19 dB on the indoor set and 1.73 dB on the outdoor set. This validates the inverse enhancement module’s role in enhancing illumination and detail visibility, aiding the model in recovering the real structure and color of the scene. The SSIM also increases, indicating improved structural consistency and visual perception. Adding the random haze addition module (i.e., the complete method) further boosts the PSNR by 2.42 dB and 2.7 dB and the SSIM by 0.229 and 0.177 on the indoor and outdoor sets, respectively, compared to Variant II. This highlights the module’s ability to simulate varying haze intensities and distributions.
Between the gamma correction and linear transformation strategies in the haze addition module, the gamma correction approach yielded a higher PSNR and SSIM, indicating its superior capability in simulating the non-uniform lighting and perceptual consistency in real hazy images. The nonlinear nature of gamma correction better approximates the complex attenuation effects caused by scattering and absorption in natural scenes, thereby narrowing the domain gap between synthetic and real data. In addition, the algorithm proposed incorporates a strategy based on a random parameter generation mechanism. Compared with Variant IV, which utilizes fixed parameter settings, both the PSNR and SSIM metrics demonstrate improvement. This suggests that by introducing a stochastic process for generating atmospheric light values and transmittance, the training bias caused by fixed parameters can be effectively mitigated. Consequently, the model exhibits enhanced adaptability and stability when confronted with complex and dynamic real-world environments.
In summary, through the stepwise integration and quantitative evaluation of key modules, the experiment results fully validate the effectiveness of the inverse enhancement module, the random haze addition module, and the random parameter sampling mechanism in enhancing the dehazing performance. These modules work in synergy to provide solid theoretical support for building a high-performance dehazing network architecture.
By deeply analyzing the ablation experiment results, the following findings can be drawn. In terms of the dehazing effect, Variant I (only the basic dehazing module) and Variant II (dehazing module + reverse enhancement module) still leave obvious haze residue. In terms of color restoration quality, as the configuration progresses from Variant I to the complete algorithm, the results gradually present a natural color distribution closest to the real scene, as shown in the framed area of Figure 9. This series of changes strongly demonstrates the synergy between the reverse enhancement module and the parameter randomization mechanism in improving color fidelity. In addition, as illustrated in the second-row image of Figure 9, comparing the edge detail preservation of Variant III and the complete algorithm shows that the complete algorithm has an obvious advantage in preserving vegetation details. We can therefore conclude that the gamma correction strategy adapts better to the characteristics of different scenes than the linear correction strategy, thereby achieving finer detail preservation during dehazing.

4.6. Performance Test on Different Gamma Value Ranges

To further evaluate the influence of different gamma ranges, we implemented randomized value selection across five gamma intervals. Given that gamma values approaching zero exert minimal impact on transmission map correction, our experimental setup begins at $\gamma = 0.1$. The quantitative evaluation results for these intervals are presented in Table 5, and their corresponding visual effects are illustrated in Figure 10. As shown in Table 5 and Figure 10, a smaller gamma value ($\gamma \in [0.1, 1.0)$) results in a small amount of haze remaining in the restored result. On the contrary, a larger gamma value ($\gamma \in [1, 2.5]$) removes the haze cover thoroughly, but the over-enhancement becomes more obvious as $\gamma$ increases. As a trade-off, $\gamma \in [1, 1.5]$ is selected as the optimal range in this work.

4.7. Impact of Algorithm on Object Detection Performance

To further evaluate the impact of the proposed algorithm on high-level vision tasks, we conducted an experiment focusing on object detection. Given that hazy weather conditions can significantly degrade image quality in practical applications, adversely affecting the performance of subsequent visual tasks, it is essential to assess the impact of dehazing on these downstream tasks. For this purpose, we utilized the representative real-world hazy dataset RTTS (Real-world Task-Driven Testing Set) and deployed YOLOv10 [41] as the object detection model. Both hazy images and the clear images produced by the various dehazing algorithms were used as input. To ensure consistency in the experiment results, all the parameters of the detection model, including input resolution and confidence thresholds, were held constant.
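A sketch of this evaluation protocol is given below. It assumes the ultralytics package with YOLOv10 weights is available; the specific weight file, confidence threshold, and input size are placeholders rather than the exact settings used in our experiments.

```python
from ultralytics import YOLO  # assumes an ultralytics release that ships YOLOv10 weights

detector = YOLO("yolov10n.pt")  # the specific weight file is an assumption of this sketch

def detect_after_dehazing(image_paths, dehaze_fn, conf=0.25, imgsz=640):
    """Run the same detector, with fixed thresholds and input resolution, on dehazed images."""
    results = []
    for path in image_paths:
        dehazed = dehaze_fn(path)   # any of the compared dehazing algorithms
        results.append(detector.predict(dehazed, conf=conf, imgsz=imgsz, verbose=False))
    return results
```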
Figure 11 illustrates the hazy images and their corresponding detection outcomes from different dehazing algorithms. The visual results suggest that detecting directly on hazy images leads to a significant number of missed detections, particularly in the case of distant targets or those in low-contrast regions, which are frequently overlooked. In contrast, after applying the proposed method, the overall image clarity is markedly improved, with effective minimization of background interference, allowing the detector to more accurately locate and recognize targets. In comparison to other dehazing algorithms, the method proposed exhibits notable advantages in visual quality; it not only effectively restores color fidelity but also enhances detail in critical areas, thereby improving the robustness of the detection model. The experiment results indicate that our algorithm can effectively meet the requirements of object detection tasks without introducing additional semantic errors, highlighting its potential for real-world applications.

5. Conclusions

In this paper, we proposed a cycle-iterative dehazing algorithm based on the concept of noise evolution. The algorithm organically integrates an ASM-based dehazing module, a random haze addition module, and a Retinex-based inverse enhancement module within a multi-stage alternating iterative framework. The main advantage of this design is that it can simulate additional virtual data with different haze levels to enhance the performance of the network. Another advantage is that our method is better able to learn the mapping between hazy images and their clear versions under the joint supervision of the Retinex model and the ASM. The experiment results demonstrate that the proposed algorithm outperforms existing state-of-the-art methods across multiple objective evaluation metrics and subjective visual assessments, particularly excelling in color fidelity and texture detail preservation. Moreover, the algorithm features a concise architecture and high computational efficiency, making it well-suited for deployment in resource-constrained real-world environments.

Author Contributions

Conceptualization, G.H.; methodology, G.H.; software, G.H.; validation, G.H., H.W.; formal analysis, G.H.; investigation, G.H.; resources, M.J.; data curation, G.H.; writing—original draft preparation, G.H., H.W.; writing—review and editing, G.H., H.W.; visualization, G.H., H.W.; supervision, M.J.; project administration, M.J.; funding acquisition, M.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 62471253).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yang, L.; Zhao, H.; Li, H.; Qiao, L.; Yang, Z.; Li, X. GCSTG: Generating Class-Confusion-Aware Samples With a Tree-Structure Graph for Few-Shot Object Detection. IEEE Trans. Image Process. 2025, 34, 772–784.
  2. Feng, K.Y.; Gong, M.; Pan, K.; Zhao, H.; Wu, Y.; Sheng, K. Model sparsification for communication-efficient multi-party learning via contrastive distillation in image classification. IEEE Trans. Emerg. Top. Comput. Intell. 2023, 8, 150–163.
  3. Ketcham, D.J.; Lowe, R.W.; Weber, J.W. Image enhancement techniques for cockpit displays. Technical report, 1974.
  4. Buchsbaum, G. A spatial processor model for object colour perception. J. Frankl. Inst. 1980, 310, 1–26.
  5. Jobson, D.J.; Rahman, Z.u.; Woodell, G.A. Properties and performance of a center/surround retinex. IEEE Trans. Image Process. 1997, 6, 451–462.
  6. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353.
  7. Zhu, Q.; Mai, J.; Shao, L. A fast single image haze removal algorithm using color attenuation prior. IEEE Trans. Image Process. 2015, 24, 3522–3533.
  8. Ju, M.; Ding, C.; Guo, Y.J.; Zhang, D. IDGCP: Image dehazing based on gamma correction prior. IEEE Trans. Image Process. 2019, 29, 3104–3118.
  9. Ho, J.; Jain, A.; Abbeel, P. Denoising diffusion probabilistic models. Adv. Neural Inf. Process. Syst. 2020, 33, 6840–6851.
  10. Guo, X.; Yang, Y.; Wang, C.; Ma, J. Image dehazing via enhancement, restoration, and fusion: A survey. Inf. Fusion 2022, 86, 146–170.
  11. Land, E.H. The retinex theory of color vision. Sci. Am. 1977, 237, 108–129.
  12. Jobson, D.J.; Rahman, Z.u.; Woodell, G.A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
  13. Zhang, Y.; Guo, X.; Ma, J.; Liu, W.; Zhang, J. Beyond brightening low-light images. Int. J. Comput. Vis. 2021, 129, 1013–1037.
  14. Yi, X.; Xu, H.; Zhang, H.; Tang, L.; Ma, J. Diff-Retinex++: Retinex-Driven Reinforced Diffusion Model for Low-Light Image Enhancement. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 6823–6841.
  15. Fan, G.; Yao, Z.; Chen, G.Y.; Su, J.N.; Gan, M. IniRetinex: Rethinking Retinex-type Low-Light Image Enhancer via Initialization Perspective. In Proceedings of the AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, 25 February–4 March 2025; Volume 39, pp. 2834–2842.
  16. Narasimhan, S.G.; Nayar, S.K. Vision and the atmosphere. Int. J. Comput. Vis. 2002, 48, 233–254.
  17. Ju, M.; Ding, C.; Guo, C.A.; Ren, W.; Tao, D. IDRLP: Image dehazing using region line prior. IEEE Trans. Image Process. 2021, 30, 9043–9057.
  18. Ju, M.; Ding, C.; Ren, W.; Yang, Y.; Zhang, D.; Guo, Y.J. IDE: Image dehazing and exposure using an enhanced atmospheric scattering model. IEEE Trans. Image Process. 2021, 30, 2180–2192.
  19. Ren, W.; Liu, S.; Zhang, H.; Pan, J.; Cao, X.; Yang, M.H. Single image dehazing via multi-scale convolutional neural networks. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 154–169.
  20. Li, T.; Liu, Y.; Ren, W.; Shiri, B.; Lin, W. Single Image Dehazing Using Fuzzy Region Segmentation and Haze Density Decomposition. IEEE Trans. Circuits Syst. Video Technol. 2025; early access.
  21. Liu, H.; Hu, H.M.; Jiang, Y.; Liu, Y. PEIE: Physics Embedded Illumination Estimation for Adaptive Dehazing. In Proceedings of the AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, 25 February–4 March 2025; Volume 39, pp. 5469–5477.
  22. Zhang, Y.; Zhou, S.; Li, H. Depth information assisted collaborative mutual promotion network for single image dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–24 June 2024; pp. 2846–2855.
  23. Su, Y.; Wang, N.; Cui, Z.; Cai, Y.; He, C.; Li, A. Real scene single image dehazing network with multi-prior guidance and domain transfer. IEEE Trans. Multimed. 2025; early access.
  24. Cheng, D.; Li, Y.; Zhang, D.; Wang, N.; Sun, J.; Gao, X. Progressive negative enhancing contrastive learning for image dehazing and beyond. IEEE Trans. Multimed. 2024, 26, 8783–8798.
  25. Fang, W.; Fan, J.; Zheng, Y.; Weng, J.; Tai, Y.; Li, J. Guided real image dehazing using YCbCr color space. In Proceedings of the AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, 25 February–4 March 2025; Volume 39, pp. 2906–2914.
  26. Fu, J.; Liu, S.; Liu, Z.; Guo, C.L.; Park, H.; Wu, R.; Wang, G.; Li, C. Iterative Predictor-Critic Code Decoding for Real-World Image Dehazing. In Proceedings of the Computer Vision and Pattern Recognition Conference, Nashville, TN, USA, 11–15 June 2025; pp. 12700–12709.
  27. Zheng, L.; Li, Y.; Zhang, K.; Luo, W. T-Net: Deep stacked scale-iteration network for image dehazing. IEEE Trans. Multimed. 2022, 25, 6794–6807.
  28. Sun, X.; Wang, L.; Wang, C.; Jin, Y.; Lam, K.M.; Su, Z.; Yang, Y.; Pan, J. Adapting Large VLMs with Iterative and Manual Instructions for Generative Low-light Enhancement. arXiv 2025, arXiv:2507.18064.
  29. Liang, Z.; Li, C.; Zhou, S.; Feng, R.; Loy, C.C. Iterative prompt learning for unsupervised backlit image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 4–6 October 2023; pp. 8094–8103.
  30. Liu, C.; Wu, F.; Wang, X. EFINet: Restoration for low-light images via enhancement-fusion iterative network. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 8486–8499.
  31. Li, B.; Peng, X.; Wang, Z.; Xu, J.; Feng, D. AOD-Net: All-in-one dehazing network. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4770–4778.
  32. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
  33. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  34. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560.
  35. Li, B.; Ren, W.; Fu, D.; Tao, D.; Feng, D.; Zeng, W.; Wang, Z. Benchmarking single-image dehazing and beyond. IEEE Trans. Image Process. 2018, 28, 492–505.
  36. Ancuti, C.O.; Ancuti, C.; Timofte, R.; De Vleeschouwer, C. O-HAZE: A dehazing benchmark with real hazy and haze-free outdoor images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 754–762.
  37. Ancuti, C.; Ancuti, C.O.; Timofte, R.; De Vleeschouwer, C. I-HAZE: A dehazing benchmark with real hazy and haze-free indoor images. In Proceedings of the International Conference on Advanced Concepts for Intelligent Vision Systems, Poitiers, France, 24–27 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 620–631.
  38. Zheng, Y.; Zhan, J.; He, S.; Dong, J.; Du, Y. Curricular contrastive regularization for physics-aware single image dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 5785–5794.
  39. Wu, R.Q.; Duan, Z.P.; Guo, C.L.; Chai, Z.; Li, C. RIDCP: Revitalizing real image dehazing via high-quality codebook priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 22282–22291.
  40. Feng, Y.; Ma, L.; Meng, X.; Zhou, F.; Liu, R.; Su, Z. Advancing real-world image dehazing: Perspective, modules, and training. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 9303–9320.
  41. Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-time end-to-end object detection. Adv. Neural Inf. Process. Syst. 2024, 37, 107984–108011.
Figure 1. A synoptic schema of the proposed network.
Figure 2. Overall architecture of the proposed dehazing network.
Figure 3. Architecture of the ASM-based dehazing module and the random haze addition module.
Figure 4. Hazy images and their corresponding inverted images.
Figure 5. Architecture of Retinex-based inverse enhancement module.
Figure 6. Experiment results of different methods on the synthetic images from the SOTS dataset.
Figure 7. Experiment results of different methods on the images from the I-HAZE and O-HAZE datasets.
Figure 8. Experiment results of different methods on real-world images from RTTS and HSTS.
Figure 9. Ablation results of different variants in terms of visual quality.
Figure 10. Visual results using different ranges of gamma values.
Figure 11. Experiment results on object detection performance.
Table 1. Objective metrics comparison of different methods on the SOTS dataset.

| Method | Indoor PSNR (dB) | Indoor SSIM | Outdoor PSNR (dB) | Outdoor SSIM |
|---|---|---|---|---|
| DCP | 17.13 | 0.446 | 16.62 | 0.392 |
| T-Net | 21.79 | 0.813 | 19.03 | 0.674 |
| IPC-Dehaze | 22.87 | 0.793 | 20.06 | 0.694 |
| C2PNet | 21.32 | 0.605 | 17.35 | 0.525 |
| RIDCP | 22.54 | 0.542 | 17.26 | 0.461 |
| KANet | 23.22 | 0.725 | 19.18 | 0.618 |
| DCMPNet | 24.53 | 0.756 | 21.25 | 0.750 |
| DNMGDT | 24.55 | 0.873 | 21.12 | 0.757 |
| Proposed Method | 23.83 | 0.834 | 20.23 | 0.702 |
Table 2. Objective metrics comparison of different methods on the I-HAZE and O-HAZE datasets.

| Method | I-HAZE PSNR (dB) | I-HAZE SSIM | O-HAZE PSNR (dB) | O-HAZE SSIM |
|---|---|---|---|---|
| DCP | 16.33 | 0.477 | 16.03 | 0.407 |
| T-Net | 20.13 | 0.803 | 19.23 | 0.594 |
| IPC-Dehaze | 22.09 | 0.699 | 19.97 | 0.673 |
| C2PNet | 19.79 | 0.577 | 19.05 | 0.563 |
| RIDCP | 20.71 | 0.537 | 19.52 | 0.515 |
| KANet | 22.17 | 0.725 | 18.89 | 0.598 |
| DCMPNet | 23.29 | 0.773 | 21.25 | 0.714 |
| DNMGDT | 23.55 | 0.786 | 21.17 | 0.710 |
| Proposed Method | 23.83 | 0.834 | 21.23 | 0.702 |
Table 3. Runtime comparison of different methods on the SOTS dataset.

| Method | Publication | Platform | Time (s) |
|---|---|---|---|
| DCP | CVPR'09 | Matlab | 0.3276 |
| T-Net | TMM'22 | PyTorch | 0.297 |
| IPC-Dehaze | CVPR'25 | PyTorch | 0.0393 |
| C2PNet | CVPR'23 | PyTorch | 0.0284 |
| RIDCP | CVPR'23 | PyTorch | 0.0541 |
| KANet | TPAMI'24 | PyTorch | 0.0326 |
| DCMPNet | CVPR'24 | PyTorch | 0.0380 |
| DNMGDT | TMM'25 | PyTorch | 0.0367 |
| Proposed Method | – | PyTorch | 0.0276 |
Table 4. Ablation results of different variants in terms of PSNR and SSIM.

| Method | Indoor PSNR (dB) | Indoor SSIM | Outdoor PSNR (dB) | Outdoor SSIM |
|---|---|---|---|---|
| Variant I | 18.13 | 0.546 | 16.62 | 0.592 |
| Variant II | 21.32 | 0.605 | 18.35 | 0.525 |
| Variant III | 21.45 | 0.674 | 18.26 | 0.653 |
| Variant IV | 22.56 | 0.765 | 19.36 | 0.674 |
| Proposed Method | 23.74 | 0.834 | 21.05 | 0.702 |
Table 5. Experiment results for different ranges of the gamma value on SOTS.

| Gamma Range | PSNR (dB) | SSIM |
|---|---|---|
| [0.1, 0.5) | 18.61 | 0.513 |
| [0.5, 1.0) | 21.32 | 0.627 |
| [1.0, 1.5) | 22.97 | 0.723 |
| [1.5, 2.0) | 21.51 | 0.677 |
| [2.0, 2.5] | 20.98 | 0.691 |