Article

Investigations into Picture Defogging Techniques Based on Dark Channel Prior and Retinex Theory

College of Optoelectronic Engineering, Xi’an Technological University, Xi’an 710021, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(15), 8319; https://doi.org/10.3390/app15158319
Submission received: 16 June 2025 / Revised: 22 July 2025 / Accepted: 24 July 2025 / Published: 26 July 2025

Abstract

To address the concerns of contrast deterioration, detail loss, and color distortion in images produced under haze conditions in scenarios such as intelligent driving and remote sensing detection, an image defogging algorithm that combines Retinex theory and the dark channel prior is proposed in this paper. The method builds a two-stage optimization framework: in the first stage, global contrast enhancement is achieved by Retinex preprocessing, which effectively improves the detail information in dark areas and the accuracy of the transmittance map and atmospheric light intensity estimation; in the second stage, a compensation model for the dark channel prior is constructed, and a depth-map-guided transmittance correction mechanism is introduced to obtain a refined transmittance map. At the same time, the atmospheric light intensity is accurately calculated by the Otsu algorithm and edge constraints, which effectively suppresses the halo artifacts and color deviation of the sky region in the dark channel prior defogging algorithm. Experiments based on self-collected data and public datasets show that the proposed algorithm presents better detail preservation (the visible edge ratio is improved by at least 0.1305) and color reproduction (the saturated pixel ratio is reduced to approximately 0) in the subjective evaluation, and among the objective indexes the average gradient ratio reaches a maximum value of 3.8009, an improvement of 36–56% over the classical DCP and Tarel algorithms. The method provides a robust image defogging solution for computer vision systems under complex meteorological conditions.

1. Introduction

Image quality is critically important in vision-based applications such as object detection, intelligent driving, and video surveillance systems. However, under foggy conditions, suspended atmospheric particles cause light absorption, refraction, and reflection, significantly degrading image visibility, contrast, texture details, and color fidelity. This degradation severely impairs the performance of computer vision algorithms, making defogging an essential preprocessing step for visual perception tasks. The critical importance of defogging is further underscored by its diverse applications: In intelligent driving systems, it restores obscured traffic elements (e.g., pedestrians and road signs), directly improving object detection accuracy and mitigating safety risks. For remote sensing and aerial monitoring, it corrects atmospheric distortions in satellite/UAV imagery, enabling precise land cover classification and disaster assessment. In surveillance security systems, edge-preserving algorithms recover identifiable features (e.g., faces and license plates) from foggy footage for forensic analysis. For drone-based topographic mapping, it rectifies contrast loss in mountainous terrain where localized fog causes significant detail distortion. These mission-critical applications highlight defogging’s societal impact in maintaining computer vision reliability under environmental degradation. To address these needs, current defogging algorithms are primarily categorized into three approaches: image enhancement methods [1,2,3], image restoration techniques [4,5,6,7], and hybrid approaches [8,9,10].
Image enhancement techniques aim to improve the visual quality of images by enhancing contrast and sharpness. These methods directly process the image to remove haze and improve visual clarity. Although they do not explicitly model the physical effects of atmospheric scattering, they can effectively address issues such as uneven illumination and enhance image details. This makes them particularly useful for improving the accuracy of atmospheric light estimation and transmittance computation in hazy images. Early enhancement algorithms were based on the gray-level histogram of the image, such as histogram equalization [11], gamma correction [12], and logarithmic transformation [13]. These methods, however, often fail to consider the local relationships between pixels, leading to the potential over-enhancement or under-enhancement of the image. More recent approaches, such as those based on wavelet transforms [14] and Retinex theory [15,16,17,18], have shown greater promise in addressing these limitations. Retinex theory, in particular, has gained significant attention in the field of image enhancement. This theory, which models the way the human visual system perceives color and brightness, decomposes an image into illumination and reflectance components. By manipulating these components, Retinex-based methods can effectively enhance image details and improve overall visual quality. For instance, the Center–Surround Retinex (CSR) model computes the local contrast of an image, which can be used to enhance details in dark areas and improve the overall image quality [15]. Zhuang et al. [16] proposed a super-Laplacian prior for reflectance and used an L1-norm regularization for the multi-level gradient of the reflectance to highlight important structural elements and fine-scale details in images. In the context of non-uniform illumination, Retinex-based methods have been particularly effective. These methods simulate the perception of physical reflectance images by the visual system through the computation of contrast. The image is decomposed into perceived reflectance and perceived illumination components. Subsequently, the dynamic range of the illumination component is adjusted in accordance with the visual system’s response to light intensity, thereby achieving image enhancement. This approach not only enhances the details in under-exposed regions but also maintains the overall naturalness of the image [17]. Specifically, in the calculation of the perceived reflectance and perceived light components of an image using Center–Surround Retinex (CSR) models, by adjusting the dynamic range of the perceived illumination component, we were able to enhance the details in dark areas while maintaining the naturalness of the image [18]. This method has shown promising results in improving the visual quality of images with non-uniform illumination.
Image restoration defogging approaches are grounded in the physics of aerosol scattering on imaging. An atmospheric scattering model describes this effect, as seen in Figure 1. The characteristics of a clear image are analyzed to estimate the transmission map and atmospheric light intensity of a foggy image. Next, the model’s inverse process is carried out to recover an image clear of haze. Through a statistical analysis of extensive outdoor clear image datasets, researchers such as He et al. observed that in most non-sky patches, at least one RGB channel exhibits very low intensity values, often approaching zero. Dark channel prior [19] (DCP) theory, which is used to create a transmission map and calculate atmospheric light intensity, was put forth in response to this phenomenon. He et al. optimized the transmission map obtained from the DCP using a guided filter [20] to provide a refined transmission map. However, this method fails to account for sky regions, leading to inaccurate estimates that cause artifacts like intense halos and color distortion in dehazed images. To address these limitations, numerous researchers have proposed methods focusing on refining transmission maps and atmospheric light estimation. A haze reduction approach based on global illumination [21] adjustment was presented by Zheng et al. to lessen the effects of background interference on atmospheric light and transmission map estimation. A vector orthogonality-based technique [22] for determining the atmospheric light vector’s amplitude and direction was presented by Kong et al. In order to accomplish accurate transmission map estimation, Cheng et al. applied gamma correction [23] to the transmission map that was previously acquired from the dark channel. They then fused the original and corrected transmission maps using a weighted averaging image fusion algorithm. In order to obtain accurate transmission maps, Li et al. combined the benefits of the image enhancement and image restoration defogging procedures by introducing a Gaussian-weighted image fusion method [24]. In order to restore transmission maps and atmospheric light intensity, Guo et al. [25] suggested a Rayleigh scattering and adaptive color compensation method that uses chromaticity and luminance variations in areas where the dark channel prior is ineffective for regional segmentation. Hong et al. proposed a pixel-level transmission estimation technique based on estimated radiance patches in order to address the problem of outliers brought on by depth discontinuities during the refinement of initial transmission maps [26]. Meng et al. [27] employed a boundary-constrained method for coarse transmittance estimation, followed by refinement using a weighted L1-norm regularization. Fattal et al. [28] presented a haze removal method based on independent component analysis; however, it is ineffective under dense haze conditions. A quick defogging technique based on the atmospheric dissipation coefficient was presented by Tarel et al. [29]. Although this technique offers high processing speed, it requires extensive parameter tuning, and its performance is highly sensitive to parameter changes, reducing its flexibility. Additionally, it can cause brightness reduction and halo artifacts at depth edges in dehazed images. The color attenuation prior (CAP) [30] was created from the observation by Zhu et al. that haze concentration is correlated with the difference between brightness and saturation in hazy images.
Based on this prior, a linear regression model extracts scene depth information from hazy images, which is then combined with the atmospheric scattering model for haze removal.
In order to provide high-quality visual data, image fusion-based defogging algorithms aggregate and recombine pertinent or valuable information [31] from many input sources. This method usually uses certain enhancement approaches, first making hazy images clearer, then integrating the improved images using fusion algorithms to provide a dehazed output. For example, Zhu et al. applied nonlinear gamma correction [32] with different settings to enhance hazy images, increasing the resilience of image defogging. The final haze-free image was then created by fusing four luminance-adjusted images. For defogging, Zheng et al. combined multi-exposure image fusion with adaptive structural decomposition [33]. As deep learning has advanced, researchers have turned their attention to multi-feature fusion algorithms based on image fusion, which greatly enhance defogging efficiency through dynamic optimization mechanisms and multi-level feature interaction. In order to significantly improve detail recovery in areas with intense fog, Wang et al. devised MFID-Net [34], which makes use of a multi-feature fusion (MF) module that dynamically weights channel and pixel-level information.
The effectiveness of image enhancement techniques for defogging in the approaches mentioned above is limited; the excessive suppression of high-frequency components during the enhancement process can result in the blurring of edge details, and artifacts can be introduced during frequency domain inverse transformation. Nonetheless, some inhomogeneous haze conditions can be addressed by these techniques, and they can produce better outcomes when paired with restoration-based defogging algorithms. While supervised deep learning-based fusion techniques typically require well-annotated samples (incurring acquisition costs), unsupervised methods (e.g., GANs) and synthetic data generation (e.g., 3D-rendered hazy images) offer alternatives to reduce annotation dependency. Nevertheless, hybrid approaches combining physical models with efficient learning remain competitive in latency-sensitive applications. In order to overcome problems including imprecise atmospheric light estimation, halo artifacts, and color distortion in sky regions that are inherent in the dark channel prior approach, this study combines the benefits of image enhancement and restoration techniques. We propose a defogging approach that integrates Retinex theory with the dark channel prior. This algorithm’s primary contributions are as follows:
(1)
To reduce the effects of non-uniform lighting on the estimation of atmospheric light intensity and transmission maps, improve contrast, and enrich the detail information in foggy images, a naturalness-preserving Retinex enhancement for non-uniformly illuminated images is applied.
(2)
Since the estimation of atmospheric light intensity is susceptible to the influence of bright objects, the Otsu algorithm and edge extraction are used to exclude the region of bright objects in the image to realize the accurate estimation of the atmospheric light value.
(3)
To solve the problems of the halo effect and the color distortion of the sky region in the defogged image, the dark channel is compensated by mean value filtering to suppress the halo effect and refine the transmittance; furthermore, the color distortion is resolved by correcting the transmittance of the sky region of the dark channel using a depth map estimated with the CAP.
This paper’s remaining sections are arranged as follows: Section 2 introduces the theoretical basis and key elements of Retinex-based image enhancement and dark channel a priori defogging. Section 3 introduces a novel defogging algorithm that integrates the dark channel prior with Retinex theory to address the limitations of existing defogging methods. Section 4 presents both subjective assessments and quantitative comparisons with current state-of-the-art defogging algorithms, followed by conclusions and discussions in Section 5.

2. Theoretical Foundations and Key Components

This section provides the relevant theoretical foundations of our proposed defogging algorithm. Specifically, we detail two core components of our approach: an image enhancement method based on Retinex theory for solving the non-uniform illumination problem and enriching the detail information (Section 2.1) and an atmospheric scattering model combined with the dark channel prior (DCP) for estimating the key parameters related to haze, i.e., the transmittance map and atmospheric light intensity (Section 2.2).

2.1. Image Enhancement

Retinex theory is one of the models describing how the visual system perceives the surface reflectance of a scene, proposed to explain the color vision of the human eye [18]. The main goal of the theory is to decompose a given image into two component images: the physical reflectance image and the illumination image. That is, for an image coordinate x, the following equation can be used:
I(x) = L(x) · R(x)          (1)
where I(x) denotes the foggy image, L(x) denotes the illumination image, and R(x) denotes the reflectance image.
It has been shown that the visual system is more sensitive to relative changes in light and darkness in a scene, i.e., the contrast of the scene, and insensitive to the absolute intensity of light [35]. Thus, changes in light and dark in a scene excite the visual system to produce a corresponding neural response that can be approximated using visual contrast. Considering Retinex theory’s claim that the visual system has a preferential perception of physical reflectance, the reflectance R(x) in this paper is approximated using the method of calculating contrast. The classical CSR model implements a method for calculating image contrast [36], as in Equation (2):
R(x) = log I(x) − log S_σ(x)          (2)
where S_σ(x) denotes the neighborhood mean of the foggy image I(x), which is generally calculated using a Gaussian smoothing window, as in Equation (3):
S_σ(x) = I(x) ∗ w_σ(x)          (3)
where ∗ denotes the convolution operator, w_σ(x) denotes the Gaussian smoothing window, and σ denotes the standard deviation of the Gaussian window. According to the literature [36], Equation (2) is also approximately equal to the reflectance image contrast at that point.
The R(x) obtained in Equation (2) represents a contrast image with a distribution of positive and negative values. Since the responses of the visual system to optical stimuli are all positive, this paper uses Equation (4) as an expression for the perceived intensity of the visual system to the physical reflectance, i.e., the reflectance R_p(x):
R_p(x) = max[0, R(x) + 1]          (4)
After obtaining R_p(x) according to Equation (4), the perceived illumination L_p(x) follows from Equation (1):
L_p(x) = I(x) / R_p(x)          (5)
Since the illumination image has a large dynamic range, this dynamic range is compressed to enhance the details in shadow areas.
Experimentally, it has been demonstrated that the retinal response to light is a power function of the light intensity [35]. In other words, the retinal neural response is proportional to a gamma transformation of the physical light, with the following expression in Equation (6):
L_M(x) = ω · (L_p(x) / ω)^(1/γ)          (6)
where L_M(x) denotes the adjusted illumination image, and ω and γ are constants, where for an 8-bit image, ω = 255. It is found that when the value of γ is in the range of [2.0, 3.0], better visual quality can be obtained; a larger γ yields a stronger dark-area enhancement effect but also causes over-enhancement more easily. In this paper, the recommended value of γ = 2.2 achieves a good balance between enhancing the details of dark areas and avoiding over-enhancement.
Finally, the reflectance image R_p(x) and illumination image L_M(x) obtained by solving Equations (4) and (6) are substituted into Equation (1) to obtain the final enhanced image I_E(x), as in Equation (7):
I_E(x) = R_p(x) · L_M(x)          (7)
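As an operational summary, the following is a minimal sketch (not the authors’ released code) of Equations (1)–(7) in Python/NumPy; the Gaussian window width sigma = 15 and the use of scipy.ndimage.gaussian_filter are illustrative assumptions, while gamma = 2.2 and omega = 255 follow the values recommended above.

```python
# Illustrative sketch of the Retinex enhancement in Equations (1)-(7),
# assuming an 8-bit RGB input array of shape (H, W, 3).
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_enhance(img, sigma=15.0, gamma=2.2, omega=255.0, eps=1e-6):
    """Return the enhanced image I_E = R_p * L_M for an 8-bit RGB image."""
    I = img.astype(np.float64) + eps                       # avoid log(0)
    # Eq. (3): neighborhood mean via a Gaussian smoothing window, per channel
    S = np.stack([gaussian_filter(I[..., c], sigma) for c in range(I.shape[-1])], axis=-1)
    # Eq. (2): center-surround contrast R(x) = log I - log S
    R = np.log(I) - np.log(S + eps)
    # Eq. (4): perceived reflectance, clipped to non-negative responses
    R_p = np.maximum(0.0, R + 1.0)
    # Eq. (5): perceived illumination L_p = I / R_p
    L_p = I / (R_p + eps)
    # Eq. (6): gamma-type dynamic-range compression of the illumination
    L_M = omega * (np.clip(L_p, 0, omega) / omega) ** (1.0 / gamma)
    # Eq. (7): recombine reflectance and adjusted illumination
    return np.clip(R_p * L_M, 0, 255).astype(np.uint8)
```

With these settings, the routine brightens shadowed regions while leaving well-lit regions largely unchanged, which is the behavior the subsequent transmission and atmospheric light estimation relies on.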

2.2. Dark Channel Prior for Image Defogging

The atmospheric scattering model for foggy images is frequently employed in image processing. This model describes the degradation of images under haze conditions, where ambient light is scattered by atmospheric particles, while object-reflected light is attenuated. Its mathematical formulation is expressed as follows:
I(x) = J(x) · t(x) + A(1 − t(x))          (8)
where I(x) denotes the foggy image, denoted in this paper as the enhanced image I_E(x); J(x) denotes the recovered image; A denotes the global atmospheric light intensity; and t(x) denotes the medium transmission map, as in Equation (9):
t(x) = e^(−β d(x))          (9)
where d(x) denotes the scene depth, and β denotes the atmospheric scattering coefficient with a value of 1.
By analyzing more than 5000 high-definition fog-free images of outdoor environments with the sky excluded, He et al. [19] found that in most natural scene images containing shadowed, low-brightness, or colorful areas, at least one of the RGB color channels has pixel values close to 0 in these areas. Then, for such a haze-free image, the dark channel image J_dark(x) tends to 0, as in Equation (10):
J_dark(x) = min_{y∈Ω(x)} ( min_{c∈{R,G,B}} J^c(y) ) → 0          (10)
where Ω(x) denotes the local filter region centered on pixel x, J^c denotes the values of the three RGB color channels of the haze-free image, and J_dark(x) denotes the dark channel image.
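As a concrete illustration, the dark channel of Equation (10) can be computed as a channel-wise minimum followed by a patch-wise minimum filter; the 15 × 15 patch size below is an assumption, not a value taken from this paper.

```python
# Minimal sketch of the dark channel in Equation (10): per-pixel channel
# minimum followed by a minimum filter over the local patch Omega(x).
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """img: RGB array of shape (H, W, 3); returns the dark channel (H, W)."""
    channel_min = img.min(axis=2)                    # min over c in {R, G, B}
    return minimum_filter(channel_min, size=patch)   # min over y in Omega(x)
```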
To relate the DCP to the scattering model, we assume the following:
  • Known atmospheric light A: A is estimated separately.
  • Locally constant transmission: Within a small patch Ω(x), t(x) is approximately constant, denoted as t̃(x).
Under assumption 1, we divide both sides of Equation (8) by A:
I^c(x) / A = (J^c(x) / A) · t(x) + (1 − t(x))          (11)
where I^c(x) denotes the foggy image in RGB channel c, and J^c(x) denotes the clear, fog-free image in RGB channel c.
Under assumption 2, the channel-wise minimum and the local minimum filter are applied to both sides of Equation (11) simultaneously, as in Equation (12):
min_{y∈Ω(x)} ( min_{c∈{R,G,B}} I^c(y) / A ) = t̃(x) · min_{y∈Ω(x)} ( min_{c∈{R,G,B}} J^c(y) / A ) + (1 − t̃(x))          (12)
According to the dark channel prior for clear, fog-free images, the dark channel value tends to 0, so t̃(x) can be estimated as in Equation (13):
t̃(x) ≈ 1 − min_{y∈Ω(x)} ( min_{c∈{R,G,B}} I^c(y) / A )          (13)
Atmospheric Light Estimation: Atmospheric light is estimated by selecting the brightest 0.1% of pixels in the dark channel image I_dark(x), mapping these pixels to the original hazy image I_E(x), and taking the maximum intensity in each RGB channel as the atmospheric light A.
Finally, the fog-free image J(x) is recovered by inverting the atmospheric scattering model (Equation (8)) using the estimated values t̃(x) and A, as in Equation (14):
J(x) = (I(x) − A) / max(t̃(x), t_0) + A          (14)
where t_0 (typically 0.1) denotes a lower bound that prevents division by zero and noise amplification in low-transmission regions.
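The baseline DCP steps described above (atmospheric light from the brightest 0.1% of dark channel pixels, the transmission estimate of Equation (13), and the inversion of Equation (14)) can be sketched as follows; this is an illustrative reimplementation that reuses the dark_channel helper sketched earlier, not the authors’ code, and it assumes an 8-bit RGB input.

```python
import numpy as np

def estimate_atmospheric_light(img, dark, top_frac=0.001):
    """Per-channel atmospheric light from the brightest 0.1% of dark channel pixels."""
    flat_dark = dark.ravel()
    n = max(1, int(top_frac * flat_dark.size))
    idx = np.argpartition(flat_dark, -n)[-n:]         # brightest dark channel pixels
    candidates = img.reshape(-1, 3)[idx]
    return candidates.max(axis=0)                     # maximum intensity in each RGB channel

def dehaze_dcp(img, patch=15, t0=0.1):
    """Baseline DCP recovery following Equations (13) and (14)."""
    I = img.astype(np.float64)
    dark = dark_channel(I, patch)                     # helper sketched after Equation (10)
    A = estimate_atmospheric_light(I, dark)
    t = 1.0 - dark_channel(I / A, patch)              # Eq. (13): transmission estimate
    t = np.maximum(t, t0)[..., np.newaxis]            # lower bound t0 from Eq. (14)
    J = (I - A) / t + A                               # Eq. (14): invert the scattering model
    return np.clip(J, 0, 255).astype(np.uint8), t.squeeze()
```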

3. The Proposed Methods

Figure 2 shows the flow of the image defogging algorithm. First, we perform Retinex-based enhancement on the input image. Next, the image is binarized using Otsu thresholding to segment potentially dense haze and high-brightness regions. The binarization result is then fused with an edge map to isolate haze-affected regions while excluding bright objects. To improve the localization accuracy of atmospheric light regions, a morphological dilation operation is applied to the fusion result to obtain an approximate atmospheric light region. Within this region, the pixel values at the positions corresponding to the top 0.1% brightest pixels in the enhanced image are selected, and their average is taken as the atmospheric light estimate. At the stage of acquiring the dark channel map, an edge-preserving optimization technique is introduced to protect the edge information of the image: a mean filter is used to obtain the mean dark channel map, and the weighted fusion of the minimum dark channel map (the original dark channel image) with the mean dark channel map compensates the original dark channel while refining the transmittance values. Furthermore, the depth map estimated using the CAP corrects the transmittance values of the dark channel prior in the sky region. Finally, we recover the fog-free image using the atmospheric scattering model.

3.1. Dark Channel Prior Compensation

When calculating the dark channel values in regions with large depth-of-field changes at image edges, the minimum filter may push the dark channel values too low, leading to overestimated transmittance values, which in turn causes serious halo effects in the defogged image; therefore, a mean filter is used to obtain a mean dark channel map. The mean filter can raise the dark channel value and reduce the transmittance value of a local area, but it may also darken the overall color of the recovered image. We therefore use a weighted combination of the mean-filtered and minimum-filtered values as the new dark channel value of the halo area, suppressing the halo effect while limiting the overall darkening.
First, the channel-wise minimum of the enhanced image is taken to determine the R, G, and B channel minimum I_min(x), as in Equation (15):
I_min(x) = min_{c∈{R,G,B}} I_E^c(x)          (15)
where c denotes the R, G, and B color channels.
Subsequently, the results obtained from Equation (15) are subjected to minimum filtering and mean filtering to derive the dark channel map and the mean dark channel map, respectively. The expressions are as follows:
I_dark1(x) = min_{y∈Ω(x)} ( min_{c∈{R,G,B}} I_E^c(y) )          (16)
I_dark2(x) = mean_{y∈Ω(x)} ( min_{c∈{R,G,B}} I_E^c(y) )          (17)
where I_dark1(x) denotes the original dark channel image, and I_dark2(x) represents the mean dark channel image.
As noted above, mean filtering raises the dark channel value in a local area and thus reduces the transmittance there, improving the defogging effect, but it may darken the overall color of the image after defogging. Therefore, this paper adopts the weighted combination of the mean-filtered and minimum-filtered values as the new dark channel value of the halo region, in order to suppress the halo effect while limiting the darkening of the overall color. The halo region is determined as follows.
The absolute difference between the original dark channel image I_dark1(x) and the minimum-value grayscale map I_min(x) is computed pixel by pixel to obtain the difference image V(x), as in Equation (18):
V(x) = | I_dark1(x) − I_min(x) |          (18)
where V(x) denotes the edge image obtained from the difference, which marks the halo regions.
Finally, an adaptive threshold T_0 is introduced, whereby pixels with a difference V(x) greater than or equal to T_0, identified as edge pixels, require correction. This correction is achieved through the weighted fusion of the original dark channel map and the mean dark channel map, resulting in a refined dark channel image I_dark(x). The optimal value of α, determined through multiple experiments, is 0.8, as in Equations (19) and (20):
T_0 = max( V(x) ) / 4          (19)
I_dark(x) = (1 − α) · I_dark1(x) + α · I_dark2(x),   V(x) ≥ T_0
I_dark(x) = I_dark1(x),                              V(x) < T_0          (20)
The weighting coefficient α = 0.8 is determined by minimizing the halo artifact metric H, defined as the gradient variance in depth-discontinuous regions:
H(α) = Σ_{p∈ε} ( t(p) − t̄ )²          (21)
where ε denotes the set of edge pixels and t̄ denotes the mean transmittance over ε. As shown in Figure 3d, H(α) reaches its minimum at α = 0.8 across 100 test images from the RESIDE dataset. Values within [0.75, 0.85] yield comparable results (H < 1.2 · min(H)), confirming robustness.
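A compact sketch of the compensation procedure in Equations (15)–(20) is given below; the patch size is an illustrative assumption, and scipy’s uniform_filter stands in for the mean filter. The routine works on a float image in any value range, since T_0 is derived from the data itself.

```python
# Sketch of the mean-compensated dark channel, Equations (15)-(20).
import numpy as np
from scipy.ndimage import minimum_filter, uniform_filter

def compensated_dark_channel(I_E, patch=15, alpha=0.8):
    I_min = I_E.min(axis=2)                        # Eq. (15): channel minimum
    dark1 = minimum_filter(I_min, size=patch)      # Eq. (16): minimum-filtered dark channel
    dark2 = uniform_filter(I_min, size=patch)      # Eq. (17): mean-filtered dark channel
    V = np.abs(dark1 - I_min)                      # Eq. (18): difference image
    T0 = V.max() / 4.0                             # Eq. (19): adaptive threshold
    dark = dark1.copy()
    halo = V >= T0                                 # halo (depth-edge) pixels
    # Eq. (20): weighted fusion applied only in the halo regions
    dark[halo] = (1 - alpha) * dark1[halo] + alpha * dark2[halo]
    return dark
```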
Figure 3 presents the enhanced foggy image, the minimum-value dark channel map, the mean-compensated dark channel image, and a sensitivity analysis of the weighting coefficient α. In Figure 3b, blocky artifacts are observed in regions with significant depth-of-field variation (building edges and tree branches), which directly contribute to halo artifacts in the restored image. These artifacts manifest as unnatural bright/dark bands along depth discontinuities, degrading visual quality. Figure 3c demonstrates substantial improvements: 1. enhanced detail preservation at object boundaries; 2. smoother transitions in homogeneous regions (uniform sky areas); 3. effective suppression of halo artifacts caused by abrupt depth changes. The quantitative basis for these improvements is established in Figure 3d, where the halo artifact metric H(α) shows a distinct minimum at α = 0.8:
  • Halo Artifact Reduction: 42% fewer halo artifacts are obtained than when using α = 0.5.
  • Edge–Halo Balance: 23% better edge preservation is achieved than when using α = 1.0, together with effective halo suppression.
The synergistic relationship between the visual improvements in Figure 3c and the quantitative optimization in Figure 3d confirms that α = 0.8 optimally addresses the artifacts shown in Figure 3b while preserving critical image details.

3.2. Optimization of Atmospheric Light Values

The dark channel prior defogging algorithm selects the top 0.1% of the highest-brightness pixel values in the dark channel map as the global atmospheric light estimate, assuming that the brightest position corresponds to a point at infinity. When a bright object is present in the image, however, the algorithm is incorrectly localized on that object, resulting in an erroneous estimate of the atmospheric light value. In this paper, we adopt the Otsu algorithm and an edge extraction method to estimate the atmospheric light value.
First, the Otsu algorithm is used to perform binary segmentation on the enhanced image in Figure 4a. The segmented image is shown in Figure 4b. The mathematical expression of Otsu’s algorithm is shown below:
B(x) = I_E1(x),   p(x) ≥ T_1
B(x) = I_E2(x),   p(x) < T_1          (22)
In this formula, p(x) represents the pixel value of the image; T_1 represents the optimal threshold for binary segmentation, which takes the value of 0.65; I_E1(x) represents the foggy and bright areas of the image; and I_E2(x) represents the foggy and fog-free areas. The white car present in I_E1(x) is a bright object, which will affect the judgment of the atmospheric light area and needs to be excluded separately. Considering that the overall gray level of foggy images is relatively smooth and contains considerable noise, and that the Sobel operator locates edges relatively accurately in high-noise images with gradual grayscale changes, the Sobel operator is used to extract the edges. The extraction results are shown in Figure 4c. The gradient components in the x and y directions are G_x(x) and G_y(x), respectively:
G_x(x) = I_E(x) ∗ g_x
G_y(x) = I_E(x) ∗ g_y          (23)
where I_E(x) denotes the Retinex-enhanced image, and g_x and g_y denote the convolution kernels in the x and y directions, respectively.
Given the gradients G_x(x) and G_y(x) obtained from Equation (23), the sum of their absolute values is calculated as the edge extraction result G(x), with the following expression:
G(x) = | G_x(x) | + | G_y(x) |          (24)
where G(x) represents the edge extraction result.
To eliminate the influence of highlighted objects on the atmospheric light region, we fuse the result obtained by the Otsu algorithm with the edge extraction result. The specific fusion process is as follows: invert the edge extraction result G(x), so that the original white edge areas become black and the original black non-edge areas become white; then, apply a pixel-wise AND operation to the Otsu result I_E1(x) and the inverted G(x) to obtain the fusion result D(x), from which the highlighted objects are removed. The fusion result is shown in Figure 4d, and its mathematical expression is as follows:
D(x) = I_E1(x) ∧ ¬G(x)          (25)
where D(x) denotes the fusion result. Since D(x) contains holes and cracks as well as a large amount of noise, a dilation operation is performed on D(x) to improve the localization accuracy of the atmospheric light region, yielding the approximate atmospheric light region D_1(x), as shown in Figure 4e. The pixel values at the positions corresponding to the top 0.1% brightest pixels of D_1(x) in the enhanced foggy image I_E(x) are selected, and their average is taken as the atmospheric light estimate.
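The atmospheric light estimation described above can be sketched as follows; the edge binarization threshold, the number of dilation iterations, and the use of scikit-image for Otsu thresholding are illustrative assumptions, since the paper does not specify them. The input is assumed to be a float RGB image in [0, 1].

```python
# Hedged sketch of Section 3.2: Otsu segmentation, Sobel edge extraction,
# fusion to remove bright objects, dilation, and averaging of the brightest
# 0.1% pixels inside the approximate atmospheric light region.
import numpy as np
from scipy.ndimage import sobel, binary_dilation
from skimage.filters import threshold_otsu

def estimate_A(I_E, top_frac=0.001, edge_thresh=0.1):
    gray = I_E.mean(axis=2)                                        # luminance proxy in [0, 1]
    bright = gray >= threshold_otsu(gray)                          # Eq. (22): Otsu segmentation I_E1
    G = np.abs(sobel(gray, axis=0)) + np.abs(sobel(gray, axis=1))  # Eqs. (23)-(24)
    edges = G > edge_thresh * G.max()                              # binarize the edge map (assumption)
    D = bright & ~edges                                            # Eq. (25): mask out bright objects
    D1 = binary_dilation(D, iterations=3)                          # morphological dilation
    if not D1.any():
        D1 = np.ones_like(D1, dtype=bool)                          # fall back to the whole image
    candidates = gray[D1]
    n = max(1, int(top_frac * candidates.size))
    thresh = np.partition(candidates, -n)[-n]                      # top 0.1% brightest within D1
    sel = D1 & (gray >= thresh)
    return I_E[sel].mean(axis=0)                                   # average RGB = atmospheric light A
```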

3.3. The Correction of the Transmittance Value in the Sky Area

Because the sky region has a high average brightness and contains almost no dark elements, the calculated transmittance value there is too small, causing the final result to be over-amplified and producing color distortion in the sky region. The color attenuation prior proposed by Zhu et al. [30] estimates the scene depth map by building a linear model of depth versus brightness and saturation, which performs better in sky regions. We therefore use the depth map estimated by the CAP to correct the transmittance value of the DCP in the sky region; this reflects the fog concentration in these regions more accurately, allowing the CAP depth map to compensate for the deficiency of the DCP there.
According to the color attenuation prior, the fog concentration at any pixel of a foggy image is positively correlated with the difference between the brightness and the saturation of that pixel, as shown in Equation (26):
c(x) ∝ v(x) − s(x)          (26)
where c(x) denotes the fog density at the pixel location, v(x) denotes the pixel brightness, and s(x) denotes the pixel saturation. The linear model expressing depth in terms of brightness and saturation is as follows:
d_cap(x) = θ_0 + θ_1 · v(x) + θ_2 · s(x) + ε(x)          (27)
where θ_0, θ_1, and θ_2 denote the linear coefficients (θ_0 = 0.121779; θ_1 = 0.959710; θ_2 = −0.780245), and ε(x) denotes a random error term with zero mean. Equation (27) can be substituted into Equation (9) to produce the following:
t_cap(x) = e^(−β d_cap(x))          (28)
where t_cap(x) denotes the transmittance derived from the color attenuation prior. Subsequently, the sky region is detected using dual thresholds for luminance and saturation, as in Equation (29):
mask(x) = 1,   if v(x) ≥ T_v and s(x) ≤ T_s
mask(x) = 0,   otherwise          (29)
When mask(x) equals 1, it indicates the sky region in the image; a value of 0 signifies a non-sky area. T_v is the luminance threshold, and T_s is the saturation threshold, where T_v = 0.85 and T_s = 0.12, based on sky region statistics from 100 foggy images in the RESIDE dataset. Within the sky region, weights are adjusted based on luminance and saturation to prevent edge discontinuities caused by segmentation. A weighting function is employed to correct the transmittance in color-distorted areas, where t_dcp denotes the transmittance value obtained by the original dark channel prior defogging algorithm, as in Equations (30) and (31):
β(x) = [ (v(x) − T_v) / max(v(x) − T_v) ] · [ (T_s − s(x)) / T_s ]          (30)
t̃(x) = β(x) · t_cap(x) + (1 − β(x)) · t_dcp(x)          (31)
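A possible implementation of the sky region correction in Equations (27)–(31) is sketched below, assuming a float RGB image in [0, 1] and an existing DCP transmission map t_dcp; the HSV brightness and saturation are computed directly from the channel maxima and minima, and the noise term of Equation (27) is omitted.

```python
# Sketch of the CAP-guided sky-region transmission correction, Eqs. (27)-(31).
import numpy as np

def hsv_value_saturation(I_E):
    """HSV value and saturation from channel extrema (I_E in [0, 1])."""
    v = I_E.max(axis=2)
    s = np.where(v > 1e-6, (v - I_E.min(axis=2)) / np.maximum(v, 1e-6), 0.0)
    return v, s

def correct_sky_transmission(I_E, t_dcp, T_v=0.85, T_s=0.12,
                             theta=(0.121779, 0.959710, -0.780245)):
    v, s = hsv_value_saturation(I_E)
    d_cap = theta[0] + theta[1] * v + theta[2] * s          # Eq. (27), noise term omitted
    t_cap = np.exp(-d_cap)                                   # Eq. (28) with beta = 1
    sky = (v >= T_v) & (s <= T_s)                            # Eq. (29): sky mask
    beta = np.zeros_like(v)                                  # Eq. (30): blending weight
    if sky.any():
        excess = v[sky] - T_v
        beta[sky] = (excess / max(excess.max(), 1e-6)) * ((T_s - s[sky]) / T_s)
    return beta * t_cap + (1.0 - beta) * t_dcp               # Eq. (31)
```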

3.4. An Evaluation of the Results of Image Defogging

At present, the commonly used quality evaluation standards for restored images are mainly divided into two categories: subjective evaluation and objective evaluation. Subjective evaluation considers the subjective human visual experience of an image, while objective evaluation relies on quantitative evaluation metrics. Usually, a combination of subjective and objective evaluation is used.

3.4.1. Subjective Evaluation

Subjective evaluation is a qualitative method of analysis. It is based on people’s subjective impressions and visual experience and judges the quality of an image by observing its clarity, resolution, color accuracy, and level of detail. This evaluation method requires considerable time and human resources and is easily affected by subjective feelings, but combining it with objective evaluation indicators makes the results more reliable.

3.4.2. Objective Evaluation

Objective evaluation is a quantitative analysis methodology based on specific parameters and metrics, categorized into reference-based and non-reference-based approaches. It evaluates images by analyzing pixel values, structural information, and statistical features to provide empirical data support. Reference-based evaluation requires fog-free images of the same scene as a benchmark, typically utilizing synthetically generated image pairs for assessment. Non-reference evaluation does not require the original clear image; it directly analyzes a single dehazed image. Considering the difficulty of acquiring clear images in real foggy conditions, we adopt the visibility assessment method based on perceptual edges proposed by Nicolas Hautière et al. as an objective criterion [37] for algorithm performance comparison. This method evaluates image visibility through three specific metrics.
(1)
Ratio of visible edges ( e ):
The visibility of edges in the dehazed image is quantified by the visible edge ratio ( e ), where a higher value of e indicates a greater number of detectable edges and more detailed information in the dehazed image. Its mathematical expression is as follows:
e = (n_r − n_0) / n_0          (32)
where n_0 denotes the number of visible edges in the original foggy image, and n_r denotes the number of visible edges after image defogging.
(2)
Ratio of average gradient ( r ¯ ):
The mean visibility of the dehazed image is quantified by the average gradient ratio ( r ¯ ), where a higher value of r ¯ indicates a clearer dehazed image. Its mathematical expression is as follows:
r̄ = ḡ_r / ḡ_0          (33)
where ḡ_r denotes the mean gradient of the dehazed image, and ḡ_0 denotes the mean gradient of the original foggy image.
(3)
Saturation pixel ratio ( σ ):
The number of completely black and completely white pixels in the dehazed image is quantified using the saturation pixel percentage ( σ ). A lower value of σ indicates higher image contrast and a more effective defogging process, expressed by the following formula:
σ = n_s / (dim_r × dim_c)          (34)
where n_s denotes the number of completely black or completely white pixels in the dehazed image, dim_r denotes the number of rows in the image, and dim_c denotes the number of columns.
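For reference, simplified approximations of the three indicators can be computed as below; the full method of Hautière et al. [37] defines edge visibility more carefully, so the gradient threshold used here is only an illustrative stand-in, and both inputs are assumed to be grayscale float images in [0, 1].

```python
# Simplified, illustrative approximations of Equations (32)-(34).
import numpy as np
from scipy.ndimage import sobel

def _gradient(gray):
    return np.hypot(sobel(gray, axis=0), sobel(gray, axis=1))

def defogging_metrics(foggy, dehazed, edge_thresh=0.1):
    g0, gr = _gradient(foggy), _gradient(dehazed)
    n0 = np.count_nonzero(g0 > edge_thresh)          # visible edges before defogging
    nr = np.count_nonzero(gr > edge_thresh)          # visible edges after defogging
    e = (nr - n0) / max(n0, 1)                       # Eq. (32): visible edge ratio
    r_bar = gr.mean() / max(g0.mean(), 1e-6)         # Eq. (33): average gradient ratio
    ns = np.count_nonzero((dehazed <= 0.0) | (dehazed >= 1.0))
    sigma = ns / dehazed.size                        # Eq. (34): saturated pixel ratio
    return e, r_bar, sigma
```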

4. Experimental Outcomes and Discussion

The hardware environment for this experiment is an 11th Generation Intel(R) Core(TM) i7-11800H CPU operating at 2.30 GHz under Windows 11. The technique is implemented on the MATLAB 2023a platform. To verify the effectiveness and generalizability of this technique, we conducted comparative experiments with the MSR [15], DCP [19], Meng [27], Tarel [29], and Grid_Net [38] algorithms.
As shown in Figure 5, we selected 10 test images: 6 self-selected images (Img1–6) and 4 public dataset images (Img7–10). Img7 and Img8 are from the dataset NH-HAZE, and Img9 and Img10 are from the dataset RESIDE. Img1, Img3, Img4, Img5, Img7, and Img9 contain sky regions, while Img2, Img6, Img8, and Img10 do not. A comprehensive evaluation of the proposed Retinex and dark channel prior defogging algorithm was performed using both subjective and objective evaluations, with particular attention paid to halo artifact suppression in non-sky regions, color fidelity in sky scenes, and consistency across datasets.

4.1. Subjective Evaluation

With an emphasis on image contrast, feature preservation, and color fidelity, Figure 6 shows the outcomes of defogging for non-sky images from multiple datasets: Img2 and Img6, Img8 (NH-HAZE), and Img10 (RESIDE). The original hazy images exhibit monochromatic tones, low contrast, and fog-obscured details.
The MSR enhancement technique improves contrast but retains noticeable haze, failing to enhance overall visibility. For non-sky images, the conventional dark channel prior (DCP) algorithm boosts contrast and recovers significant features without major color distortion, though mild halo artifacts persist at depth discontinuities. Meng et al.’s technique restores natural colors effectively but lacks flexibility in complex scenes, as evidenced by obscured low-contrast details. Tarel et al.’s method achieves satisfactory contrast but introduces localized color variations (bluish tint on roads in Img8). Grid_Net robustly removes haze, but it overly smooths textures, and the defogged images show relatively severe color distortion (such as the colors of houses in Img2 and trees in Img5).
In contrast, our proposed algorithm achieves an optimal balance between the following: contrast enhancement (vivid foreground vegetation in Img5); detail recovery (preserved road structures in Img8); and color integrity (natural tones without halos in Img9). This consistency across self-collected and public datasets (NH-HAZE and RESIDE) demonstrates strong adaptability to diverse haze distributions.
Using several defogging techniques, Figure 7 shows the defogging outcomes for sky-containing images from multiple sources: self-collected data (Img1, Img3, Img4, Img5); the NH-HAZE benchmark (Img7); and the RESIDE dataset (Img9).
The conventional dark channel prior produces significant sky region artifacts, including color distortion (cyan cast in Img4 and Img9 sky) and halo effects (Img5 tree boundaries). Meng’s weighted L1-norm approach effectively removes haze but introduces color shifts (yellowish buildings in Img3) and inconsistent saturation (darkened skies in Img9). Tarel’s method causes over-saturation (mountain in Img3) and amplifies halo artifacts (Img5 tree edges). Grid_Net demonstrates competitive performance on synthetic haze (Img9) but exhibits limitations in real-world conditions: patchy sky restoration in Img1 and texture loss in dense haze (Img4 foreground).
Our proposed method overcomes these limitations by achieving the following: 1. Eliminating color distortion through depth-guided transmission correction (neutral skies in Img5/Img9). 2. Suppressing halo artifacts via mean dark channel compensation (clean horizon in Img4 and Img5). 3. Preserving structural details in diverse haze densities. This consistent performance across self-collected and public benchmarks validates our approach’s robustness for sky-containing hazy images.

4.2. Objective Evaluation

Three established metrics were used across all datasets: the visible edge ratio (e), the average gradient ratio (r̄), and the saturation pixel ratio (σ) (Table 1). Key findings demonstrate our method’s consistent superiority: 1. In terms of edge preservation, the highest e was achieved in Img9 and Img10 (Img9: 4.86 vs. Grid_Net’s 4.01), a 68% average improvement over traditional methods on the self-collected data. 2. Regarding structural detail recovery, r̄ dominated in all test cases, with significant gains of 23% over the DCP in Img4 and 8% over Grid_Net in Img10. 3. With respect to color fidelity, the σ value obtained is 0 in Img8 and Img9, outperforming Grid_Net (0 vs. 0.0009 average σ) and showing a minor compromise only in dense haze (Img4’s σ is 0.024). The public data confirm the generalizability of these results: on NH-HAZE the method obtains a 15% higher r̄ than Grid_Net, and on RESIDE a 12% better e than the best alternative. In comparison, Grid_Net matches the proposed method’s edge recovery in synthetic haze (Img9) but falls short in color preservation.
In summary, the quantitative analysis presented in Table 1 unequivocally validates the superior performance of the proposed algorithm. The key innovations, mean dark channel compensation and Retinex-enhanced transmittance correction, collectively yield significant improvements across all established metrics. Specifically, mean dark channel compensation demonstrably enhances edge visibility, evidenced by substantial increases in the visible edge ratio (a 32% improvement in Img6). Concurrently, the Retinex preprocessing method combined with refined transmittance estimation significantly boosts structural detail recovery, reflected in consistently higher average gradient ratios. Crucially, our approach excels in preserving color fidelity, achieving near-zero saturation pixel ratios in most test images, and effectively mitigating color distortion. This comprehensive superiority over both conventional (MSR, DCP, Meng, Tarel) and deep learning (Grid_Net) benchmarks underscores the efficacy of integrating Retinex theory with the enhanced dark channel prior framework. The robust quantitative results across diverse datasets (self-collected, NH-HAZE, RESIDE) and scene types (with/without sky) confirm the algorithm’s reliability and generalizability in producing high-quality, haze-free images.

4.3. Computational Efficiency Analysis

To analyze computational complexity and runtime performance, we measured the average processing time of all compared algorithms on the complete RESIDE dataset (400 × 400 images). The results in Table 2 show the competitive efficiency of our approach.
As demonstrated in Table 2, the proposed algorithm achieves a favorable balance between computational efficiency and restoration quality. While MSR exhibits the fastest processing time (0.004 s) owing to its computationally lightweight multi-scale convolution, traditional restoration methods like the DCP (0.018 s) incur higher costs primarily due to the computationally intensive guided filtering step for transmission refinement. Meng (0.006 s) and Tarel (0.007 s) show moderate efficiency. The deep learning-based Grid_Net, despite its strong performance, is the slowest (0.059 s) due to the inherent complexity of its multi-scale CNN architecture. In contrast, our method processes images in 0.023 s on average, representing a roughly 2.6× speedup over Grid_Net. This efficiency stems from the deliberate design choices within our two-stage optimization framework: (1) The Retinex preprocessing method and subsequent parameter estimation steps (atmospheric light via Otsu/edge fusion, compensated dark channel via mean/min fusion) rely primarily on efficient filtering and thresholding operations, avoiding heavy optimization or learning. (2) The depth-guided transmission correction using the CAP leverages a simple linear model, adding minimal overhead compared to deep feature extraction. In practical applications, such as those subject to a real-time constraint of less than 100 ms in the perception layer of autonomous driving, the defogging algorithm proposed in this paper currently consumes an average of 23 ms, which satisfies the delay constraint of 20–100 ms. Crucially, this computational efficiency is attained without compromising the significant visual quality improvements demonstrated in Section 4.1 and Section 4.2 and quantified in Table 1. The proposed method thus establishes a practical efficiency–quality trade-off, being substantially faster than state-of-the-art deep learning alternatives while maintaining competitive timing with traditional methods and delivering superior defogging results.

5. Conclusions and Discussion

5.1. Discussion

Despite the improvements achieved, the proposed algorithm still exhibits several limitations. 1. Extreme haze density: In scenarios with near-zero visibility (heavy sandstorms or dense fog), Retinex preprocessing may amplify noise due to a severely degraded signal-to-noise ratio, leading to artifacts in the recovered image. 2. Night-time or extremely low-light scenes: The reliance on the dark channel prior (DCP) is compromised under low-light environments, as the DCP assumes minimal illumination in local patches, a condition often violated by artificial light sources at night. This may cause inaccurate transmission estimation and color shifts. 3. Highly dynamic scenes: Rapidly moving objects (vehicles in traffic) may introduce motion blur during the sequential processing steps. The current pipeline lacks explicit motion modeling, potentially degrading edge preservation. Future work will integrate adaptive noise suppression for extreme haze, explore fusion with low-light enhancement networks for night-time defogging, and investigate temporal consistency mechanisms for video-based dynamic scenes.

5.2. Conclusions

Aiming at the shortcomings of the traditional dark channel prior defogging algorithm, an image defogging algorithm combining Retinex and an improved dark channel prior is proposed. First, foggy images are enhanced using the Retinex enhancement algorithm to reduce illumination inhomogeneity and enrich the detail information. Secondly, for foggy images with local highlighted areas, an atmospheric light value estimation module based on the Otsu algorithm and edge extraction is proposed to avoid the influence of the highlighted areas on atmospheric light estimation and to improve its accuracy. To suppress the halo effect produced by the traditional dark channel prior when acquiring dark channel maps, a mean-value dark channel compensation model is proposed that integrates the initial dark channel with the mean dark channel using a weighted fusion strategy; this effectively suppresses the halo effect and realizes the pixel-by-pixel refinement of the transmittance parameter. For the color distortion in the sky region of the recovered image, the depth map estimated by the color attenuation prior is used to correct the transmittance of the sky region obtained from the dark channel prior. Finally, the image is recovered using the atmospheric scattering model. The results of the subjective and objective experiments show that the proposed algorithm effectively suppresses the halo effect, solves the problem of color distortion in the sky region, and improves the contrast of the image, and the detailed information in the image is also better recovered and preserved.

Author Contributions

Conceptualization, L.Y. and Z.Z.; methodology, Z.Z. and H.G.; software, H.G.; validation, Y.L. and S.G.; formal analysis, S.G.; investigation, L.Y.; data curation, Y.L.; writing—original draft preparation, Z.Z.; writing—review and editing, K.H.; visualization, Z.Z.; supervision, L.Y.; project administration, Y.L.; funding acquisition, L.Y. All authors have read and agreed to the published version of the manuscript.

Funding

Shaanxi Province Key Research and Development Program General Fund Project [2025SF-YBXM-287].

Institutional Review Board Statement

No ethical approval was required.

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author [zengzhi@st.xatu.edu.cn] upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Sahu, G.; Seal, A.; Krejcar, O.; Yazidi, A. Single image defogging using a new color channel. J. Vis. Commun. Image Represent. 2021, 74, 103008. [Google Scholar] [CrossRef]
  2. Zhou, C.; Yang, X.; Zhang, B. An adaptive image enhancement method for a recirculating aquaculture system. Sci. Rep. 2017, 7, 6243. [Google Scholar] [CrossRef]
  3. Fu, Q.; Jung, C.; Xu, K. Retinex-based perceptual contrast enhancement in images using luminance adaptation. IEEE Access 2018, 6, 61277–61286. [Google Scholar] [CrossRef]
  4. Sahu, G.; Seal, A. Image defogging based on luminance stretching. In Proceedings of the 2019 International Conference on Information Technology (ICIT), Bhubaneswar, India, 20–23 December 2019; pp. 388–393. [Google Scholar] [CrossRef]
  5. Sahu, G.; Seal, A.; Yazidi, A.; Krejcar, O. A dual-channel dehaze-net for single image defogging in visual internet of things using PYNQ-Z2 board. IEEE Trans. Autom. Sci. Eng. 2024, 21, 305–319. [Google Scholar] [CrossRef]
  6. Sahu, G.; Seal, A.; Bhattacharjee, D.; Nasipuri, M.; Brida, P.; Krejcar, O. Trends and prospects of techniques for haze removal from degraded images: A survey. IEEE Trans. Emerg. Top. Comput. Intell. 2022, 6, 762–782. [Google Scholar] [CrossRef]
  7. Sahu, G.; Seal, A.; Bhattacharjee, D.; Frischer, R.; Krejcar, O. A novel parameter adaptive dual channel MSPCNN based single image defogging for intelligent transportation systems. IEEE Trans. Intell. Transp. Syst. 2023, 24, 3027–3047. [Google Scholar] [CrossRef]
  8. Galdran, A. Image defogging by artificial multiple-exposure image fusion. Signal Process. 2018, 149, 135–147. [Google Scholar] [CrossRef]
  9. Zhang, W.; Dong, L.; Pan, X.; Zhou, J.; Qin, L.; Xu, W. Single Image Defogging Based on Multi-Channel Convolutional MSRCR. IEEE Access 2019, 7, 72492–72504. [Google Scholar] [CrossRef]
  10. Yadav, S.; Raj, K. Underwater Image Enhancement via Color Balance and Stationary Wavelet Based Fusion. In Proceedings of the 2020 IEEE International Conference for Innovation in Technology (INOCON), Bangluru, India, 6–8 November 2020; pp. 1–5. [Google Scholar] [CrossRef]
  11. Ganji, M.; Prasad, S.D.K.; Manikonda, S.; Pobbathi, Y.; Bagade, S.; Veeramachaneni, S. Meliorated Single Image Defogging using Dark Channel Prior with Histogram Equalization. In Proceedings of the 2022 Third International Conference on Intelligent Computing Instrumentation and Control Technologies (ICICICT), Kannur, India, 11–12 August 2022; pp. 944–950. [Google Scholar] [CrossRef]
  12. Dharejo, F.A.; Zhou, Y.; Deeba, F.; Jatoi, M.A.; Du, Y.; Wang, X. A remote-sensing image enhancement algorithm based on patchwise dark channel prior and histogram equalisation with colour correction. IET Image Process. 2021, 15, 47–56. [Google Scholar] [CrossRef]
  13. Song, X.; Zhou, D.; Li, W.; Ding, H.; Dai, Y.; Zhang, L. WSAMF-Net: Wavelet Spatial Attention-Based MultiStream Feedback Network for Single Image Defogging. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 575–588. [Google Scholar] [CrossRef]
  14. Dharejo, F.A.; Zhou, Y.; Deeba, F.; Jatoi, M.A.; Khan, M.A.; Mallah, G.A.; Ghaffar, A.; Chhattal, M.; Du, Y.; Wang, X. A deep hybrid neural network for single image defogging via wavelet transform. Optik 2021, 231, 166462. [Google Scholar] [CrossRef]
  15. Pazhani, A.A.J.; Periyanayagi, S. A novel haze removal computing architecture for remote sensing images using multi-scale Retinex technique. Earth Sci Inform 2022, 15, 1147–1154. [Google Scholar] [CrossRef]
  16. Zhuang, P.; Wu, J.; Porikli, F.; Li, C. Underwater Image Enhancement With Hyper-Laplacian Reflectance Priors. IEEE Trans. Image Process. 2022, 31, 5442–5455. [Google Scholar] [CrossRef]
  17. Yu, H.; Li, C.; Liu, Z.; Guo, Y.; Xie, Z.; Zhou, S. A Novel Nighttime Defogging Model Integrating Retinex Algorithm and Atmospheric Scattering Model. In Proceedings of the 2022 3rd International Conference on Geology, Mapping and Remote Sensing (ICGMRS), Zhoushan, China, 22–24 April 2022; pp. 111–115. [Google Scholar] [CrossRef]
  18. Unnikrishnan, H.; Azad, R.B. Non-local retinex based defogging and low light enhancement of images. Traitement Signal. 2022, 39, 879–892. [Google Scholar] [CrossRef]
  19. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 11–12 August 2009; pp. 1956–1963. [Google Scholar] [CrossRef]
  20. He, K.; Sun, J.; Tang, X. Guided Image Filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef]
  21. Zheng, J.; Xu, C.; Zhang, W. Single Image Defogging Using Global Illumination Compensation. Sensors 2022, 22, 4169. [Google Scholar] [CrossRef]
  22. Kong, L.; Jiang, D.; Liu, X.; Zhang, Y.; Qin, K. Atmospheric light estimation through color vector orthogonality for image defogging. Opt. Eng. 2024, 63, 083103. [Google Scholar] [CrossRef]
  23. Cheng, S.; Yang, B. An efficient single image defogging algorithm based on transmission map estimation with image fusion. Eng. Sci. Technol. Int. J. 2025, 35, 101190. [Google Scholar] [CrossRef]
  24. Li, C.; Yu, H.; Zhou, S.; Liu, Z.; Guo, Y.; Yin, X.; Zhang, W. Efficient Defogging Method for Outdoor and Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 4516–4528. [Google Scholar] [CrossRef]
  25. Guo, X.; Sun, Q.; Zhao, J. Single-image defogging method based on Rayleigh Scattering and adaptive color compensation. PLoS ONE 2025, 20, e0315176. [Google Scholar] [CrossRef]
  26. Hong, S.; Kang, M.G. Single image defogging based on pixel-wise transmission estimation with estimated radiance patches. Neurocomputing 2022, 492, 545–560. [Google Scholar] [CrossRef]
  27. Meng, G.; Wang, Y.; Duan, J.; Xiang, S.; Pan, C. Efficient Image Defogging with Boundary Constraint and Contextual Regularization. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 617–624. [Google Scholar] [CrossRef]
  28. Fattal, R. Dehazing Using Color-Lines. ACM Trans. Graph. 2015, 34, 1–14. [Google Scholar] [CrossRef]
  29. Tarel, J.P.; Hautiere, N. Fast visibility restoration from a single color or gray level image. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2201–2208. [Google Scholar] [CrossRef]
  30. Zhu, Y.; Tang, G.; Zhang, X.; Jiang, J.; Tian, Q. Haze removal method for natural restoration of images with sky. Neurocomputing 2018, 275, 499–510. [Google Scholar] [CrossRef]
  31. Guo, X.; Yang, Y.; Wang, C.; Ma, J. Image defogging via enhancement, restoration, and fusion: A survey. Inf. Fusion 2022, 86, 146–170. [Google Scholar] [CrossRef]
  32. Zhu, Z.; Wei, H.; Hu, G.; Li, Y.; Qi, G.; Mazur, N. A Novel Fast Single Image Defogging Algorithm Based on Artificial Multiexposure Image Fusion. IEEE Trans. Instrum. Meas. 2021, 70, 5001523. [Google Scholar] [CrossRef]
  33. Zheng, M.; Qi, G.; Zhu, Z.; Li, Y.; Wei, H.; Liu, Y. Image Defogging by an Artificial Image Fusion Method Based on Adaptive Structure Decomposition. IEEE Sens. J. 2020, 20, 8062–8072. [Google Scholar] [CrossRef]
  34. Wang, S.; Wang, S.; Jiang, Y. Discerning Reality through Haze: An Image Defogging Network Based on Multi-Feature Fusion. Appl. Sci. 2024, 14, 3243. [Google Scholar] [CrossRef]
  35. Stevens, S.S. On the psychophysical law. Psychol. Rev. 1957, 64, 153–181. [Google Scholar] [CrossRef]
  36. Jobson, D.J.; Rahman, Z.U.; Woodell, G.A. Properties and performance of a center/surround Retinex. IEEE Trans. Image Process. 1997, 6, 451–462. [Google Scholar] [CrossRef]
  37. Hautière, N.; Tarel, J.-P.; Aubert, D. Blind contrast enhancement assessment by gradient ratioing at visible edges. Image Anal. Stereol. 2011, 27, 87–95. [Google Scholar] [CrossRef]
  38. Liu, X.; Ma, Y. GridDehazeNet: Attention-Based Multi-Scale Network for Image Defogging. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 7313–7322. [Google Scholar] [CrossRef]
Figure 1. Model of foggy images.
Figure 2. Image defogging algorithm flow.
Figure 3. Dark channel compensation analysis: (a) enhanced image I_E(x); (b) minimum dark channel image I_dark1(x); (c) compensated dark channel I_dark(x); (d) sensitivity analysis of the weighting coefficient α on halo artifact suppression.
Figure 4. Selection of the atmospheric light region: (a) enhanced fog image I_E(x); (b) result of the Otsu algorithm I_E1(x); (c) Sobel operator edge extraction result G(x); (d) fusion result D(x) of the Otsu binary segmentation and Sobel edge extraction; (e) morphological dilation result D_1(x).
Figure 5. Test images: (a–f) self-collected foggy images (Img1–6); (g,h) NH-HAZE dataset (Img7–8); (i,j) RESIDE dataset (Img9–10).
Figure 6. Comparison of defogging results for each algorithm without sky region.
Figure 7. Comparison of defogging results of algorithms including sky region.
Table 1. Comparison of evaluation metrics for self-captured foggy images.

Foggy Image | Method | e | r̄ | σ
Img1 | MSR | 0.0008 | 1.0011 | 0.0022
Img1 | DCP | 0.5687 | 2.0009 | 0.0117
Img1 | Meng | 0.3232 | 1.6211 | 0.0453
Img1 | Tarel | 0.2107 | 1.2582 | 0.0022
Img1 | Grid_Net | 0.8925 | 2.5410 | 0.0018
Img1 | Proposed | 0.6992 | 2.0055 | 0
Img2 | MSR | 0.0362 | 1.2362 | 0.0415
Img2 | DCP | 0.5605 | 1.5997 | 0
Img2 | Meng | 0.3024 | 1.3730 | 0.0154
Img2 | Tarel | 0.4921 | 1.7855 | 0.0415
Img2 | Grid_Net | 1.2050 | 3.1020 | 0.0005
Img2 | Proposed | 1.4194 | 3.8009 | 0
Img3 | MSR | 2.9802 | 1.6086 | 0.0003
Img3 | DCP | 4.9689 | 2.0683 | 0
Img3 | Meng | 2.6579 | 1.7179 | 0.0213
Img3 | Tarel | 4.6870 | 1.2912 | 0.0022
Img3 | Grid_Net | 5.0103 | 2.0250 | 0.0001
Img3 | Proposed | 5.3332 | 2.1504 | 0
Img4 | MSR | 1.5254 | 0.6189 | 0.0569
Img4 | DCP | 3.0503 | 0.0445 | 0.0421
Img4 | Meng | 1.9864 | 0.7489 | 0.0951
Img4 | Tarel | 3.4631 | 0.8243 | 0.0873
Img4 | Grid_Net | 4.8501 | 1.6020 | 0.0201
Img4 | Proposed | 5.2086 | 1.7704 | 0.0235
Img5 | MSR | 0.0003 | 1.0887 | 0.0159
Img5 | DCP | 0.2382 | 1.5477 | 0.0123
Img5 | Meng | 0.1142 | 0.9552 | 0.0372
Img5 | Tarel | 0.1506 | 0.9499 | 0.0199
Img5 | Grid_Net | 0.5010 | 2.1010 | 0.0009
Img5 | Proposed | 0.6885 | 2.3039 | 0.0012
Img6 | MSR | 0.0465 | 1.0246 | 0.0002
Img6 | DCP | 0.8413 | 1.1869 | 0.0002
Img6 | Meng | 1.0524 | 1.6780 | 0.0460
Img6 | Tarel | 0.8455 | 1.3792 | 0
Img6 | Grid_Net | 2.9500 | 2.5210 | 0.0031
Img6 | Proposed | 3.6210 | 2.7397 | 0
Img7 | MSR | 0.1021 | 1.2056 | 0.0388
Img7 | DCP | 1.8560 | 1.8744 | 0.0217
Img7 | Meng | 1.2014 | 1.5328 | 0.0362
Img7 | Tarel | 2.1301 | 1.6892 | 0.0255
Img7 | Grid_Net | 3.8558 | 2.9578 | 0
Img7 | Proposed | 4.1205 | 3.0511 | 0
Img8 | MSR | 0.0890 | 1.1024 | 0.0410
Img8 | DCP | 1.9287 | 1.8059 | 0.0189
Img8 | Meng | 1.3055 | 1.6025 | 0.0273
Img8 | Tarel | 2.2545 | 1.7548 | 0.0201
Img8 | Grid_Net | 3.9278 | 2.9875 | 0.0010
Img8 | Proposed | 4.3279 | 3.1053 | 0.0008
Img9 | MSR | 0.2176 | 1.3056 | 0.0350
Img9 | DCP | 2.1050 | 1.9527 | 0.0159
Img9 | Meng | 1.5208 | 1.7039 | 0.0220
Img9 | Tarel | 2.4012 | 1.8216 | 0.0180
Img9 | Grid_Net | 4.0137 | 3.1528 | 0.0009
Img9 | Proposed | 4.8594 | 3.3051 | 0
Img10 | MSR | 0.1980 | 1.2851 | 0.0362
Img10 | DCP | 2.0010 | 1.9038 | 0.0161
Img10 | Meng | 1.4278 | 1.6852 | 0.0235
Img10 | Tarel | 2.3529 | 1.8022 | 0.0192
Img10 | Grid_Net | 3.9548 | 3.1005 | 0.0011
Img10 | Proposed | 4.7278 | 3.2820 | 0
Table 2. Average processing time on RESIDE.

Algorithm | MSR | DCP | Meng | Tarel | Grid_Net | Proposed
Avg. Time (s) | 0.004 | 0.018 | 0.006 | 0.007 | 0.059 | 0.023