Article

United Scattering Transmission Model for Haze Removal

1 School of Physics and Electronic Information, Yantai University, 30 Qingquan Road, Laishan District, Yantai 264005, China
2 Faculty of Information Science and Engineering, Ocean University of China, 238 Songling Road, Laoshan District, Qingdao 266100, China
* Author to whom correspondence should be addressed.
Symmetry 2026, 18(3), 472; https://doi.org/10.3390/sym18030472
Submission received: 30 January 2026 / Revised: 3 March 2026 / Accepted: 4 March 2026 / Published: 10 March 2026
(This article belongs to the Special Issue Symmetry and Asymmetry in Computer Vision Under Extreme Environments)

Abstract

Haze removal methods based on the estimation of the scene depth ratio in the Atmospheric Scattering Model (ASM) have achieved satisfactory results. However, the ASM ignores the blur, equivalent to a point spread function, caused by forward scattering. This paper proposes a simplified United Scattering Transmission Model (USTM), in which both forward scattering and back scattering are taken into consideration physically. It uses a Taylor expansion to correlate the hazy image and its second-order operator with the dehazed image. Additionally, we establish a layered decomposition mechanism for the scattering medium; by fitting the limiting expression to the image signal at infinity, the parameters related to the inherent optical properties used in the model can be obtained. When stable transmittance estimation approaches are applied to the USTM, the scene radiance can be restored effectively. We conducted evaluation experiments on datasets including RESIDE-RTTS (Real-world Task-Driven Testing Set), Haze4K, and Dense-Haze, using metrics such as PSNR, SSIM, newly visible edges, and the ratio of the gradients. The results demonstrate that the USTM achieves satisfactory results across multiple evaluation dimensions. Regarding the core objective fidelity metric PSNR, it achieves an optimal score of 11.87 dB, representing an approximate 3.85% improvement over the second-best method. Compared to the traditional ASM, the USTM shows an average improvement of approximately 23.5% in edge restoration capability (newly visible edges) and an average improvement of approximately 18.1% in gradient fidelity (the mean ratio of the gradients). Furthermore, compared with advanced deep learning dehazing methods, our method remains highly competitive in edge and gradient restoration metrics, and its lightweight design provides excellent efficiency and compatibility with downstream tasks. The comprehensive results show that the USTM achieves effective improvements in both physical accuracy and detail restoration performance.

1. Introduction

Images captured in bad weather suffer from the scattering of haze, fog, and dust in the atmosphere. The scattering reduces scene visibility, attenuates color, and blurs details. With the development of outdoor computer vision systems, single image dehazing approaches have been widely studied in recent years. These approaches can be divided into two categories: physical-model-based approaches and non-physical-model-based approaches.
The prime reason for the low visibility of hazy images is the absorption and scattering of light by suspended particles in the atmosphere [1]; the scattering attenuates light in the transmission process between the target and the camera and adds a layer of scattered air light [2]. Narasimhan et al. [3] built the Atmospheric Scattering Model (ASM) to describe these scattering effects. The model, widely used for dehazing, is formulated by the following equation:
$$I = J e^{-\beta z} + A\left(1 - e^{-\beta z}\right) \tag{1}$$
where I is the observed hazy image, J is the scene radiance, A is the air light, $e^{-\beta z}$ is the transmittance of the scattering medium, $\beta$ is the attenuation coefficient of the scattering medium, and z is the scene depth.
Since light scattering is the main cause of image degradation in bad weather, physically grounded ASM-based single image dehazing has gained popularity. The essence of these algorithms is to estimate the transmittance and air light using priors [4,5,6]. Deep learning strategies [7,8,9] have also been proposed to estimate the parameters of the ASM more accurately, without artificial priors.
Decreases in contrast and saturation are among the most remarkable features of hazy images, and most non-physical-model-based approaches aim to increase contrast or saturation to suit human vision. In the early days, image contrast enhancement methods including Histogram Equalization (HE) algorithms [10,11], Retinex algorithms [12,13,14], and image fusion algorithms [15,16] were used for haze removal; as they do not consider physical constraints, the effectiveness of these enhancement methods is quite limited. Current data-driven methods [17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32] that map the input hazy image directly to the dehazed image can obtain satisfactory results. However, there are still many challenges to overcome. Methods based on priors suffer from certain shortcomings, such as inadequate edge information recovery and the emergence of halo artifacts in sky regions. Data-driven deep learning algorithms, meanwhile, are not based on physical processes and are prone to generating artifacts or color shifts under dense or non-uniform haze. Furthermore, the acquisition of large-scale, high-quality paired datasets has long been a persistent challenge for such algorithms.
Therefore, ideas for improving the physical scattering model have been investigated. Ju et al. [33] established a concise gamma correction-based dehazing model (GDM) built on the inner relationship between gamma correction and the ASM.
The GDM compensates for the brightness deficiency of the ASM to a certain extent, but the defects of the model itself are hard to overcome. Because the transmittance t takes the form of an exponential function, it decreases dramatically with scene depth.
Considering these issues, this paper proposes a new scattering model, called the United Scattering Transmission Model (USTM), based on the image degradation model:
$$I = J * h(z) + b(z) \tag{2}$$
where $h(z)$ is the point spread function caused by forward scattering, and $b(z)$ is the noise caused by back scattering, both of which are related to the depth z. In building the model, both forward scattering and back scattering are taken into consideration physically. Then, the haze-free image, the hazy image, and its edge operator are related by a Taylor expansion, as shown in Figure 1.
Haze removal with the USTM is carried out and compared with other scattering models and also with deep learning networks. We summarize the contributions of our work as follows:
  • In contrast to existing scattering models in the literature that merely superimpose forward and backward scattering effects in a decoupled manner, we remodel the medium using a layered model and propose a physically self-consistent unified framework, USTM, that achieves the coupled modeling of the two scattering mechanisms via diffusion equations. It overcomes the drawbacks of traditional combined models, such as premature convergence in large-depth scenes and insufficient compensation for high-frequency details.
  • We use a Taylor expansion to introduce a derivative edge operator into the USTM, which contributes to detail protection.
  • The depth parameter in the USTM is formulated as the attenuation optical depth $\beta z$, not the transmittance $e^{-\beta z}$. Since the dynamic range of the attenuation optical depth is larger than that of the transmittance, it plays an important role in constraining image over-saturation, and the model performs better in sky regions and large-depth areas.

2. Related Works

2.1. GC-Based Dehazing Model (GDM)

Gamma correction (GC) is a non-linear, power-law image enhancement method. Coincidentally, the transmittance $e^{-\beta z}$ in the ASM also takes an exponential form. Ju [33] proposed the GDM to describe the inner relationship between GC and the ASM. The GDM can be derived as
$$J = A \left( \frac{I}{A} \right)^{e^{\beta z}} \tag{3}$$
The ASM is a linear expression in the transmittance $e^{-\beta z}$, which brings about halo/blocking artifacts; the GDM is able to suppress them.

2.2. Physical Model-Based Methods

He et al. [4] mapped the transmittance to the dark channel of hazy images via the dark channel prior (DCP), a statistic of haze-free outdoor images. Zhu [5] introduced the Color Attenuation Prior (CAP) for estimating scene depth maps. Berman [6] used a non-local dehazing (NLD) strategy to estimate the transmission map through haze-lines.
With the exponential growth of data and the improvement of computing power, more and more deep learning-based approaches are being put into application. Cai [7] presented DehazeNet, based on a Convolutional Neural Network (CNN), to estimate the transmission from input hazy images. Later, the end-to-end All-in-One Dehazing Network (AODNet) [8] and the Densely Connected Pyramid Dehazing Network (DCPDN) [9] were proposed, both of which jointly estimate transmission and atmospheric illumination to reduce error.

2.3. Non-Physical Model-Based Methods

Qu [17] proposed an Enhanced Pix2pix Dehazing Network (EPDN), embedded in a generative adversarial network and followed by a well-designed enhancer, for direct dehazing. Chen [18] proposed a Gated Context Aggregation Network (GCANet) to reduce grid artifacts by using a new smoothed dilation technique. Chen [19] proposed a Principled Synthetic-to-real Dehazing (PSD) framework to improve the generalization performance when dehazing real-world hazy images. To narrow the solution space of contrastive learning, Zheng [20] proposed C2PNet, which obtained better dehazing effects. Wu [21] proposed RIDCP, which integrates a phenomenological degradation pipeline for realistic hazy data synthesis with a codebook prior-driven network. Gui [22] proposed IC-Dehazing, a multi-output network with illumination controllability based on the Retinex model, which leverages a Transformer backbone with dual decoders and unsupervised prior-based training to generate user-selectable multi-outputs without paired data. Ma [23] proposed a new computational flow called Compression and Adaptation (CoA) to tackle the challenges of deep learning methods and the domain gap between real-world and synthetic data, obtaining impressive results. Fu [24] proposed IPC-Dehaze, an iterative predictor-critic code decoding framework. Unlike one-shot decoding approaches, it progressively refines code predictions by using high-quality codes from prior iterations as guidance, with a novel Code-Critic module introduced to assess and resample codes based on their correlations. Addressing the limited transport mapping capability of GANs in unpaired training, Lan [25] proposed DehazeSB, a framework built on the Schrödinger Bridge. It leverages optimal transport theory to directly bridge the distributions between hazy and clear images. Lan [26] also proposed Diff-Dehazer, an unpaired framework that leverages the strong generative prior of diffusion models for real-world image dehazing, using a pre-trained diffusion model as a bijective mapping learner within the classic CycleGAN structure. Shin [27] reformulated the Atmospheric Scattering Model into an ordinary differential equation, proposing the HazeFlow framework. It incorporates a non-homogeneous haze generation method to improve its adaptability to real-world scenes. Wang [28] proposed a novel “hazing-to-dehazing” pipeline, consisting of two key components: a Realistic Hazy Image Generation framework (HazeGen) and a Diffusion-based Dehazing framework (DiffDehaze). Chen [29] proposed DehazeXL, which balances global context and local feature extraction, enabling the end-to-end modeling of large images on mainstream GPU hardware. Liu [30] introduced a diffusion-based framework that explicitly operates in the frequency domain for unpaired image dehazing, leveraging the diffusion model’s generative prior while using frequency-domain transformations to guide the restoration of attenuated high-frequency details. Mehra [31] proposed ReViewNet, a fast and lightweight deep learning network designed specifically for image dehazing in autonomous driving systems. Fan [32] proposed a unified framework to jointly tackle video dehazing and depth estimation from monocular hazy driving videos. A depth-centric learning paradigm that tightly couples the ASM with the Brightness Consistency Constraint (BCC) through a shared depth network has been designed, allowing each task to mutually enhance the other.
These deep learning methods model the dehazing task from different perspectives and have achieved impressive results. However, it is worth noting that many methods suffer from domain gaps. Since they do not strictly adhere to physical constraints and rely entirely on haze generators during training to establish dehazing mappings, their performance may degrade when applied to real-world dehazing. In addition, deep learning methods require considerable computational resources, especially those proposed in recent years, making them difficult to deploy on edge devices such as autonomous driving systems.
In contrast, the USTM strictly follows physical constraints, enabling it to maintain robustness under various real-world conditions and suppress unnatural artifacts. Moreover, the USTM has low computational requirements and can run efficiently on a CPU alone, making it suitable for deployment in a wide range of scenarios.

3. United Scattering Transmission Model

Figure 2 shows a layered transmission imaging model in a scattering medium [34], in which the forward scattering and the back scattering can be explicitly expressed by the inherent optical property (IOP) of the scattering medium. Suppose the scattering is symmetrically distributed around the optical axis of the imaging sensor (represented by the Z axis); we then divide the scattering medium into m layers along the Z axis, each with a small thickness Δ . If the distance from the sensor to the target is z, then z = m Δ . Each layer of medium acts as an independent transmission unit with forward scattering and back scattering. Thus, the degradation of image transmission is described as an accumulated output of the successive degradation of sub-systems of the scattering medium. As shown below, such a degradation can be modeled as a diffusion process, fitting the general diffusion equation under the Gaussian blur approximation of the forward scattering.

3.1. Description of the Forward Scattering

With the layered transmission model, the point spread function (PSF) associated with a certain distance z, denoted $h(z)$, is represented as the convolution of the equally distributed PSFs of the single layers in the volume between zero and z, each denoted $h_\Delta$. We model the sub-PSF $h_\Delta$ of each layer with a Gaussian approximation [34], whose Fourier transform is formulated as
$$\mathcal{F}(h_\Delta) = e^{-\Theta_\Delta (u^2 + v^2)} \tag{4}$$
where $(u, v)$ are the frequency-domain coordinates and $\Theta_\Delta$ is the parameter associated with the PSF of a single layer. Thus, the Fourier transform of $h(z)$ is
$$\mathcal{F}(h(z)) = \left[\mathcal{F}(h_\Delta)\right]^m = e^{-m \Theta_\Delta (u^2 + v^2)} \tag{5}$$
For a continuous expression, let $\Delta \to 0$; thus, $\Theta_\Delta \to 0$, and we define $\rho = \Theta_\Delta / \Delta$ as the unit-thickness transfer parameter. Moreover, considering the attenuation during transmission between zero and z in the medium volume, we can reformulate Equation (5) into the following form:
$$\mathcal{F}(h(z)) = e^{-z\left[\rho(u^2+v^2) + \beta\right]} \tag{6}$$
When taking only forward scattering and radiance intensity attenuation into account, the Fourier transform of the degraded image I at distance z is:
$$\mathcal{F}(I) = \mathcal{F}(J)\,\mathcal{F}(h(z)) \tag{7}$$
The frequency-domain expressions simplify the derivation of the diffusion equation that describes the image degradation in transmission.
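To make this concrete, a minimal numerical sketch (with an illustrative grid size, per-layer parameter $\Theta_\Delta$, and layer count m of our own choosing) verifies that composing m identical Gaussian layer transfer functions by multiplication in the frequency domain reproduces the closed form of Equation (5):

```python
import numpy as np

# Numerical check of Equation (5): stacking m identical Gaussian layer PSFs
# is one multiplication per layer in the frequency domain,
# i.e., F(h(z)) = F(h_Delta)^m. theta_delta and m are illustrative values.
theta_delta = 1e-3   # per-layer PSF parameter (assumption)
m = 200              # number of layers (assumption)

u = np.fft.fftfreq(256)
v = np.fft.fftfreq(256)
U, V = np.meshgrid(u, v)
r2 = U**2 + V**2

F_layer = np.exp(-theta_delta * r2)        # Equation (4), one layer
F_stack = F_layer**m                       # m layers composed by product
F_direct = np.exp(-m * theta_delta * r2)   # Equation (5), closed form

print(np.allclose(F_stack, F_direct))      # True: the composition is exact
```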
We employ two parameters, $k_1$ and $k_2$, to fit the following diffusion equation:
$$\nabla^2 I - k_1 \partial_z I - k_2 I = 0 \tag{8}$$
where $\nabla^2$ is the Laplace operator. The Fourier transform of Equation (8) gives rise to
$$\mathcal{F}(\nabla^2 I) - k_1 \mathcal{F}(\partial_z I) - k_2 \mathcal{F}(I) = 0 \tag{9}$$
According to Equation (7), we write down the following equations:
$$\mathcal{F}(\nabla^2 I) = -(u^2+v^2)\,\mathcal{F}(J)\,\mathcal{F}[h(z)] \tag{10}$$
$$\mathcal{F}(\partial_z I) = \mathcal{F}(J)\,\frac{\partial \mathcal{F}[h(z)]}{\partial z} = -\left[\rho(u^2+v^2) + \beta\right]\mathcal{F}(J)\,\mathcal{F}[h(z)] \tag{11}$$
Substituting Equations (6), (10) and (11) into Equation (9), we can obtain
$$k_1 = 1/\rho, \qquad k_2 = \beta/\rho \tag{12}$$

3.2. Description of the Back Scattering

With the layered transmission model, each layer is illuminated by the incident light A, and the back-scattered light generated by each layer passes through the scattering volume over a different distance to the sensor. The total back-scattered light at distance z is the cumulative output of the back-scattered light generated by all layers. Since forward scattering and attenuation occur in this process, the back-scattered light contributed by a single layer $b_\Delta$ at distance z from the sensor is
$$b_\Delta = \kappa I \Delta\, e^{-\beta z} * h_\Delta \tag{13}$$
where $\kappa$ is the back-scattering coefficient per unit thickness of the medium and $*$ is the convolution operator.
Let $dz = \Delta \to 0$; the integration of Equation (13) in the frequency domain then gives rise to the total back-scattered light. Since the mean component at zero frequency (at $u = 0$ and $v = 0$ in the frequency-domain PSF h) of the back-scattered light dominates the image degradation, the formula for the back-scattering noise is simplified to
$$b(z) = \frac{\kappa I}{\beta}\left(1 - e^{-\beta z}\right) \tag{14}$$
It is important to note that although b ( z ) is a function of I and I itself depends on b ( z ) , Equation (14) is not intended to directly solve for b ( z ) or I. Instead, it serves solely as an intermediate step for constructing the back scattering compensatory term introduced in the following derivation. Thus, the risk of circular dependency in the derivation is not a concern.
Consequently, the expression of Equation (9) is reformulated as
$$\mathcal{F}(\nabla^2 I) - k_1 \mathcal{F}(\partial_z I) - k_2 \mathcal{F}(I) = \phi(z) \tag{15}$$
Given that both back scattering and forward scattering occur simultaneously within the same medium, the back-scattering compensatory term $\phi(z)$ takes the form $\phi(z) = \nabla^2 b(z) - k_1 \partial_z b(z) - k_2 b(z)$.
Together with Equation (14), $\phi(z)$ can be reformulated as
$$\phi(z) = -k_1 \kappa I \tag{16}$$
Finally, the unified diffusion equation is described as
$$\partial_z I = \frac{\nabla^2 I}{k_1} - \frac{k_2}{k_1} I + \kappa I \tag{17}$$

3.3. Formation of the USTM by Solving Diffusion Equation

Equation (17) describes the image degradation in the scattering medium, unifying the forward and back scattering effects. For the purpose of image dehazing, however, we can simplify the solution under proper approximations to derive an explicit dehazing model, which we call the United Scattering Transmission Model (USTM). We now expand the image function in a Taylor series at $z - \Delta$, where $\Delta$ is sufficiently small, and obtain
$$I(z - \Delta) = I(z) - \partial_z I\, \Delta + R_n(z) \tag{18}$$
where $R_n(z)$ is the higher-order term, which can be neglected for the image signal when $\Delta$ is small under a first-order approximation. Substituting Equation (17), we obtain
$$I(z - \Delta) = I + \frac{k_2}{k_1} I \Delta - \frac{1}{k_1} \nabla^2 I\, \Delta - \kappa I \Delta \tag{19}$$
Taking the Taylor expansion of $I(z - \Delta)$ in the same way and propagating the same routine up to $z = 0$, with the first-order approximation holding throughout, we have
$$I(z - 2\Delta) = I + 2\frac{k_2}{k_1} I \Delta - \frac{1}{k_1}\left[\nabla^2 I + \nabla^2 I(z - \Delta)\right]\Delta - 2\kappa I \Delta + \left(\frac{k_2}{k_1} I - \frac{\nabla^2 I}{k_1} - \kappa I\right)\Delta^2 \tag{20}$$
Since $\Delta$ is infinitesimal, the term including $\Delta^2$ becomes vanishingly small. Meanwhile, according to the diffusion equation $\nabla^2 I = a^2 \partial_z I$, it is possible to approximate $\nabla^2 I(z - \Delta)$ by $\nabla^2 I$. In this way, returning through the m layers in turn and substituting $k_1$ and $k_2$ from Equation (12), we finally obtain
$$J = I + \left[I - \frac{\rho}{\beta} \nabla^2 I - \frac{\kappa I}{\beta}\right] \beta z \tag{21}$$
The power spectrum of the back scattered light can be expressed as
$$P_n(z) = \frac{(\kappa I)^2}{\left[\rho(u^2+v^2) + \beta\right]^2}\left(1 - e^{-\left[\rho(u^2+v^2)+\beta\right] z}\right)^2 \tag{22}$$
Suppose the air light is available in the image, where $z \to \infty$; then, evaluating the statistics of Equations (14) and (22) in the air-light region gives rise to the following expressions:
$$b(\infty) = \frac{\kappa I}{\beta} = A \tag{23}$$
$$P_n(\infty) = \frac{(\kappa I)^2}{\left[\rho(u^2+v^2)+\beta\right]^2} = \frac{A^2}{\left[(u^2+v^2)/k_2 + 1\right]^2} \tag{24}$$
Thus, by sampling the local image of the air light, we can apply least-squares fitting to estimate $k_2$. The air light A, as it appears in Equation (1), can alternatively be obtained by employing well-developed estimation routines from image dehazing algorithms.
Hereby, we formulate the so-called United Scattering Transmission Model:
$$J = I + \left[I - \frac{\nabla^2 I}{k_2} - A\right] \beta z \tag{25}$$
Unlike other physical models that merely superimpose forward and backward scattering effects, the USTM directly characterizes the scattering process using the diffusion equation and unifies the two mechanisms intrinsically. This ensures that the scattering mechanism conforms to the actual laws of light propagation at the fundamental level.
The most distinctive components of the USTM are the Laplacian operator $\nabla^2 I$ and the optical depth $\beta z$, which set it apart from the ASM. Mathematically, the Laplace operator is derived from the Taylor expansion and serves as a correction term in Equation (25). Meanwhile, the exponential transmittance is incorporated into the derivation of the USTM and replaced, in form, by the linear attenuation optical depth $\beta z$. Physically, the Laplace operator is introduced to compensate for the high-frequency detail loss caused by forward scattering, and the linear $\beta z$ prevents excessive attenuation as the scene depth increases, enabling the USTM to achieve more explicit and effective restoration of edges and distant-region details than the ASM and its variations.
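For illustration, a minimal sketch of the restoration step in Equation (25) is given below; it assumes a single image channel and that the attenuation optical depth map $t = \beta z$, the air light A, and $k_2$ have already been estimated (the function name and the final clipping are our own choices):

```python
import numpy as np
from scipy.ndimage import laplace

def ustm_restore(I, t, A, k2):
    """Minimal sketch of Equation (25): J = I + (I - lap(I)/k2 - A) * beta*z.

    I  : hazy image channel, float array in [0, 1]
    t  : attenuation optical depth map beta*z, same shape as I
    A  : estimated air light for this channel (scalar)
    k2 : scattering-medium characteristic parameter
    """
    correction = laplace(I) / k2        # Laplacian detail-compensation term
    J = I + (I - correction - A) * t    # Equation (25)
    return np.clip(J, 0.0, 1.0)
```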

4. Simulated Scattering Experiment

In order to compare the characteristics of the linear constraint between the optical distance and the actual distance in the above three models, we set up a simulation experiment environment as shown in Figure 3.
We used a sink full of muddy water as the scattering medium, then positioned a camera at one end of the sink, and a light source at the other end to assist in the generation of slit light. The blackboard could slide and stop at a certain distance from the camera. In this way, we acquired a series of slit images at different depths, which are listed in Figure 4.
Let the optical depth $\alpha = \beta z$, as in Equations (1), (3) and (25). Since each slit image was taken at a fixed depth z, the optical depth $\alpha$ should be a constant. However, the attenuation coefficient $\beta$ of the scattering medium is unknown, which makes it impossible to restore the haze-free image with the exact value of $\alpha$. We therefore chose various values of $\alpha$ for restoration and evaluated the restored results; the $\alpha$ corresponding to the optimal restored result should approximate the actual optical depth. We separately applied the ASM, GDM, and USTM to the sets of slit images in Figure 4, obtaining the restoration results of each model for different $\beta z$ values, as shown in Figure 5. Because the light attenuation coefficients of different wavelengths are dissimilar in water, we transformed the color from RGB space to HSI space and restored the image in the intensity subchannel.
In order to evaluate the restoration results, we compared the optical depths corresponding to the optimal restoration results, determined by the PSNR of each image, with the actual depth; due to the simplicity of the slit images, the required clean reference images were generated directly in code. Figure 6 shows the actual depth curve and the optical depth curves obtained by the ASM, GDM, and USTM. It can be noted from the figure that the optical depths estimated by the USTM retain a proportional relationship with the actual depths in both small and large scenes, while the other two models have the problem of premature convergence at larger depths.
It is worth noting that this experiment is not intended to simulate atmospheric scattering conditions, but to verify the feasibility of the USTM modeling method and the applicability of the model’s physical constraints on the correlation between optical depth and actual depth. Although aqueous and atmospheric scattering media differ in their properties, they follow the same fundamental scattering laws, both exhibiting the superposition of forward and back scattering during light propagation. The experimental results confirm that the USTM can maintain a linear correlation between optical depth and actual depth, which validates the rationality of the model’s layered medium decomposition and diffusion equation derivation.

5. Haze Removal Using USTM

Using the USTM to remove haze also requires two estimates: one estimates the attenuation optical depth map; the other finds the sky region and then uses the power spectrum and mean value of this region to fit $k_2$ and A, which are called the scattering medium characteristic parameters. The pseudocode for the dehazing process is provided in Algorithm 1.
The algorithm takes the hazy image I as input and outputs the haze-free image J as the final result. In Steps 1 and 2, the atmospheric light A and the transmission map t are first estimated using the methods described below. In Steps 3 and 4, the power spectrum of the sky region is calculated. In Steps 5 to 11, a frequency map is constructed based on the sky region, and the zero-frequency component is removed via a mask. The term $r^2$ in Step 10 corresponds to $(u^2 + v^2)$ in the denominator of Equation (24), and $P_{masked}$ represents the power spectrum after excluding the zero-frequency point. In Steps 12 and 13, the parameter $k_2$ is estimated via the least-squares method by combining $P_{masked}$ and $r^2$. Finally, in Steps 14 to 16, the Laplacian of the image and the dehazed result given by the USTM are computed.

5.1. Scattering Medium: Characteristic Parameters’ Estimation

Air light A in Equations (1), (3) and (25) represents the intensity of atmospheric light at infinity. In images containing sky regions, the sky’s brightness is used as an approximation of this distant atmospheric light. In images without clear sky regions, a small portion of the brightest pixels or the brightest regions is typically selected, and their average value is used as an estimate of A. This is because the brightest parts of an image usually correspond to areas with extremely large scene depth or to inherently bright, highlighted objects. Moreover, this approach has been demonstrated to be robust even for images without a clear sky region, as evidenced by numerous publications in the field.
Algorithm 1: Single Image Dehazing Algorithm.

Input: Hazy image I
Output: Dehazed image J

// Estimate air light (this estimation module can be replaced)
 1   (A, sky_area) ← A_estimation(I);
// Estimate transmission map t(x) (this estimation module can be replaced)
 2   t ← tx_estimation(I);
// Calculate parameter k2 using the power spectrum method
// Compute the 2D Fourier transform and power spectrum
 3   F ← fft2(sky_area);
 4   P ← |F|²;
// Create the frequency coordinate grid
 5   [M, N] ← size(sky_area);
 6   u ← fftshift(fftfreq(N));
 7   v ← fftshift(fftfreq(M));
 8   [U, V] ← meshgrid(u, v);
// Exclude the zero-frequency point
 9   mask ← (U ≠ 0) ∨ (V ≠ 0);
10   r² ← U[mask]² + V[mask]²;
11   P_masked ← P[mask];
// Least-squares fitting to estimate k2
12   Define model: f(r², k2) = A² / (r²/k2 + 1)²;
13   k2 ← curve_fit(f, r², P_masked, initial_guess = 1.0);
// Perform the dehazing calculation
14   L ← compute_laplacian(I);
15   J ← I + I·t − (1/k2)·L·t − A·t;
16   return J
In our algorithm, we adopt a quadtree-based method to estimate the air light A. This method recursively divides the image into four sub-blocks, calculates a score for each sub-block (defined as the average intensity minus the standard deviation), and selects the sub-block with the highest score for further iterative partitioning. The iteration terminates when the size of the current sub-block is less than a predefined minimum block size (e.g., 50 × 50 pixels). Finally, the average intensity of the selected sub-block is used as the estimate for the air light A. A compact sketch of this search is given below.
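The mean-minus-standard-deviation score and the 50 × 50 minimum block size in this sketch follow the description above, while the function name and the grayscale input are our own assumptions:

```python
import numpy as np

def estimate_airlight_quadtree(gray, min_size=50):
    """Recursively keep the sub-block with the highest mean-minus-std
    score; once the block is smaller than min_size, return its mean."""
    h, w = gray.shape
    if h < min_size or w < min_size:
        return gray.mean()
    blocks = [gray[:h // 2, :w // 2], gray[:h // 2, w // 2:],
              gray[h // 2:, :w // 2], gray[h // 2:, w // 2:]]
    scores = [b.mean() - b.std() for b in blocks]
    best = blocks[int(np.argmax(scores))]
    return estimate_airlight_quadtree(best, min_size)
```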
After we obtain the air light A, the one-dimensional power spectrum converted from the sky region can be used to fit the parameters in Equation (25). It should be mentioned here that, because the edge operator term in the USTM is very small, the result is robust as long as the fitted parameter value is correct to within an order of magnitude.
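As a sketch, the fitting described here and in Steps 3 to 13 of Algorithm 1 can be realized with scipy.optimize.curve_fit, assuming a grayscale sky patch and a previously estimated A (function and variable names are ours):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_k2(sky_area, A):
    """Fit k2 from the power spectrum of a sky patch via Equation (24)."""
    P = np.abs(np.fft.fft2(sky_area)) ** 2
    M, N = sky_area.shape
    U, V = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(M))
    mask = (U != 0) | (V != 0)             # drop the zero-frequency point
    r2 = U[mask] ** 2 + V[mask] ** 2
    model = lambda r2, k2: A ** 2 / (r2 / k2 + 1.0) ** 2
    (k2,), _ = curve_fit(model, r2, P[mask], p0=[1.0])
    return k2
```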

5.2. Depth Map Estimation

We use three prior-based (DCP, CAP, and NLD) depth map estimation methods in this paper.
The DCP is based on a statistical law drawn from a large number of haze-free images: the minimal gray level among the three RGB color channels of each image is very low and tends to zero. For any input image, the mathematical expression of its dark channel is defined as
$$f^{dark}(J) = \min_{k \in \Omega(j)} \left( \min_{c \in \{r,g,b\}} f^c(k) \right) \tag{26}$$
For a clear haze-free image J, the mathematical representation of the DCP is $J^{dark} \to 0$. What is more, based on statistical analysis of the dark channels of the edge operators of a considerable number of hazy images, the DCP is also valid there; in other words, $(\nabla^2 I)^{dark} \to 0$. Then, assuming that the attenuation optical depth is constant within the same patch $\Omega$, we normalize the USTM’s equation by A and perform the dark channel operation on both sides:
$$\left(\frac{J}{A}\right)^{dark} = \left(\frac{I}{A}\right)^{dark} + \left[\left(\frac{I}{A}\right)^{dark} - \frac{1}{k_2}\nabla^2 \left(\frac{I}{A}\right)^{dark} - 1\right] \beta z \tag{27}$$
Since $(J/A)^{dark}$ and $\nabla^2 (I/A)^{dark}$ are both zero, the above formula can be simplified to:
$$\beta z = \left(\frac{I}{A}\right)^{dark} \Big/ \left[1 - \omega \left(\frac{I}{A}\right)^{dark}\right] \tag{28}$$
To prevent the denominator from reaching zero, we use a constraint coefficient $\omega = 0.95$. Finally, we refine the estimated attenuation optical depth map with a guided image filter [33]. In addition, transmission estimation methods that are independent of the scattering model, such as CAP and NLD, can be applied directly to the USTM. Since the dynamic range of the attenuation optical depth is larger than that of the transmittance $e^{-\beta z}$, we compare the depth maps estimated by DCP under both the ASM and the USTM. Figure 7 shows the comparison of transmission maps estimated by the two models. The groups on the left are hazy images with a larger depth of field, and the groups on the right are images with a smaller depth scale. The results of the two models do not differ significantly in the right groups, but in the large-depth regions of the left groups, the USTM shows a greater anti-interference ability.
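A minimal sketch of this DCP-based estimation of Equation (28) is given below, assuming a scalar or per-channel air light A and an illustrative 15 × 15 patch size; the guided-filter refinement step is omitted:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def optical_depth_dcp(I, A, patch=15, omega=0.95):
    """Estimate the attenuation optical depth beta*z via Equation (28).

    I : hazy RGB image, float array in [0, 1]; A : estimated air light.
    """
    # Dark channel of I/A: channel-wise minimum, then a local minimum filter
    dark = minimum_filter((I / A).min(axis=2), size=patch)
    return dark / (1.0 - omega * dark)   # Equation (28)
```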

6. Experimental Results

6.1. Datasets

To make the comparison more comprehensive, our validation dataset employs both paired synthetic data and unpaired real-world data, covering diverse indoor/outdoor scenes and varying haze densities. Since obtaining reference-based evaluation metrics for the latter is difficult, we use no-reference metrics on the unpaired data.
RESIDE-RTTS [35]: The Real-world Task-Driven Testing Set (RTTS) is a subset of the RESIDE dataset created by Boyi Li et al. It contains over 4000 real hazy traffic images with varying resolutions and is suitable for multiple evaluation metrics.
Haze4K: Haze4K is a synthetic dataset comprising 4000 hazy images, each paired with corresponding clean images, ground truth transmission maps, and atmospheric light matrices. The images have a resolution of 400 × 400 pixels. Released in 2021 by multiple academic institutions, it serves as a large-scale, high-quality paired synthetic dataset.
Dense-Haze [36]: Dense-Haze consists of 33 pairs of real hazy and corresponding haze-free images, along with 22 additional pairs from the I-Haze and O-Haze datasets. The hazy images were captured under controlled conditions using a professional haze-generating machine, which produces dense and uniform haze that closely simulates real hazy environments. Furthermore, since the images were collected in a controlled setting, both hazy and haze-free ground truth images were captured under identical lighting conditions.
Classic hazy images: We collected about 30 classic hazy images frequently used in previous works as the dataset for real-world hazy scene images.
UA-DETRAC [37]: This dataset is used for the application experiment in Section 6.6. UA-DETRAC is a benchmark dataset for vehicle detection with precise annotations, captured in clear weather conditions. It contains four vehicle categories: Car, Bus, Van, and Other. In this work, it is utilized to train the object recognition model for the application experiment.
HazyDet [38]: This is a paired synthetic dataset captured by drones, designed for image dehazing research. In the application experiment of this paper, a subset is extracted and re-annotated according to the UA-DETRAC category standard to construct a hazy vehicle detection dataset for testing purposes.
For the comparative experiments, the code was executed on a cloud server with the following configuration (Table 1):

6.2. Evaluation Metrics

We categorize the evaluation metrics into reference-based metrics, which require paired clear images, and no-reference metrics, which do not require paired images.
The reference-based metrics are PSNR, SSIM, and NIMA. The calculation methods for the first two metrics are as follows:
$$\mathrm{PSNR} = 10 \cdot \log_{10} \frac{\mathrm{MAX}_I^2}{\mathrm{MSE}} \tag{29}$$
$$\mathrm{MSE} = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[I(i,j) - K(i,j)\right]^2 \tag{30}$$
$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)} \tag{31}$$
For the PSNR calculation in Equation (29), $\mathrm{MAX}_I$ denotes the maximum possible pixel value of the image, and MSE in Equation (30) is the Mean Squared Error, where m and n represent the image height and width in pixels, and $I(i,j)$ and $K(i,j)$ are the pixel values at position (i, j) in the original clear image and the processed dehazed image, respectively. For the SSIM index in Equation (31), which is computed between two local image patches x and y, $\mu_x$ and $\mu_y$ represent the mean pixel intensities (for luminance), $\sigma_x$ and $\sigma_y$ are the standard deviations (for contrast), and $\sigma_{xy}$ is the covariance (for structural similarity). The constants $C_1$ and $C_2$ are small values introduced to stabilize the division and prevent a denominator of zero.
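For reference, both metrics can be computed with the implementations in scikit-image; the arrays below are stand-in data for illustration only:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# J_gt: paired clear reference, J_hat: dehazed result, both float in [0, 1]
J_gt = np.random.rand(128, 128, 3)    # stand-in data for illustration
J_hat = np.clip(J_gt + 0.05 * np.random.randn(*J_gt.shape), 0, 1)

psnr = peak_signal_noise_ratio(J_gt, J_hat, data_range=1.0)    # Equation (29)
ssim = structural_similarity(J_gt, J_hat, channel_axis=2,
                             data_range=1.0)                   # Equation (31)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```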
NIMA (Neural Image Assessment) [39] is a deep learning-based method for image quality evaluation. It typically employs a pre-trained Convolutional Neural Network (CNN), such as Inception-v3, as a feature extractor. The extracted features are then fed into a regression or classification head to predict a quality score that correlates with human perceptual judgments.
The calculation methods in [34,40] for the no-reference evaluation metrics are as follows:
BRISQUE (Blind/Referenceless Image Spatial Quality Evaluator) is a no-reference image quality assessment method. Its core principle is that natural images have predictable statistical regularities (Natural Scene Statistics; NSS), and distortions disrupt these regularities. The key step is to compute the Mean Subtracted Contrast Normalized (MSCN) coefficients to eliminate local content variations. The formula is as follows:
$$\hat{I}(i,j) = \frac{I(i,j) - \mu(i,j)}{\sigma(i,j) + C} \tag{32}$$
where $\mu(i,j)$ and $\sigma(i,j)$ are the local mean and standard deviation calculated using a Gaussian-weighted window, and C is a small constant for stabilization. Subsequently, the algorithm extracts features from the distributions of the MSCN coefficients and their adjacent product coefficients (such as shape parameters fitted with a Generalized Gaussian Distribution; GGD) and inputs these features into a pre-trained Support Vector Regression (SVR) model to output a final quality score.
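A minimal sketch of the MSCN computation in Equation (32) is shown below; the Gaussian window width follows a common BRISQUE choice and is an assumption here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(gray, sigma=7 / 6, C=1e-3):
    """Mean Subtracted Contrast Normalized coefficients (Equation (32))."""
    mu = gaussian_filter(gray, sigma)                       # local mean
    var = gaussian_filter(gray * gray, sigma) - mu * mu     # local variance
    sd = np.sqrt(np.maximum(var, 0.0))                      # local std dev
    return (gray - mu) / (sd + C)
```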
Newly visible edges e can be obtained with the following equation:
$$e = \frac{n_r - n_o}{n_o} \tag{33}$$
where $n_r$ and $n_o$ are the numbers of visible edges, defined by adaptive thresholds, in the restored clear image and the original hazy image, respectively. The contrast of an edge must be greater than 5%.
This metric takes into account the sensitivity of human eyes to high-contrast objects, focuses on the evaluation of local detail restoration, and can show the ability of the dehazing algorithm to recover the edges and texture details of objects.
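As a rough illustration only, the sketch below computes e with Canny edge maps standing in for the adaptive-threshold visible-edge detector used by the original metric:

```python
import numpy as np
from skimage.feature import canny

def newly_visible_edges(hazy_gray, restored_gray):
    """Rough illustration of Equation (33); Canny edge-pixel counts stand
    in for the adaptive-threshold visible-edge detector of the metric."""
    n_o = canny(hazy_gray).sum()       # edge pixels in the hazy image
    n_r = canny(restored_gray).sum()   # edge pixels in the restored image
    return (n_r - n_o) / max(n_o, 1)
```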
The mean ratio of the gradients, r, is defined as below:
$$r = \frac{VL_r}{VL_o} \tag{34}$$
where $VL_r$ and $VL_o$ denote the visibility levels of the object in the restored and original images. The visibility level can be obtained with the formulation below:
$$VL = \frac{C_{actual}}{C_{threshold}} \tag{35}$$
where C is the Weber luminous contrast:
$$C = \frac{\Delta L}{L_b} \tag{36}$$
r is mathematically well defined because only the gradients of the visible edges in the restored image are considered. This metric also focuses on edges but, unlike e, the mean ratio of the gradients is more sensitive to variations in optical depth; therefore, combining the two metrics yields a more objective and accurate evaluation.
The mean saturated pixels metric s quantifies the proportion of pixels reaching the maximum or minimum luminance value in the processed image. It effectively evaluates the degree of dynamic range loss induced by contrast enhancement in dehazing algorithms. Higher s values indicate more severe image quality degradation.
The mathematical formulation is given by
$$s = \frac{N_{saturated}}{H \times W} \tag{37}$$
Here, $N_{saturated}$ denotes the number of pixels with a brightness value of 0 or 255, and $H \times W$ is the size of the image.
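For 8-bit images, this metric reduces to a one-line computation; a sketch:

```python
import numpy as np

def saturated_ratio(img_u8):
    """Equation (37): fraction of pixels clipped to 0 or 255 (uint8 input)."""
    return np.mean((img_u8 == 0) | (img_u8 == 255))
```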

6.3. Haze Removal Comparison of the ASM, GDM, and USTM

In this experiment, we compared the haze removal quality of the ASM, GDM, and USTM under different prior estimations. A quick demonstration using DCP from Section 4 is shown in Figure 8.
We used five widely used classic real hazy images as samples: Bank (Figure 9), Stadium (Figure 10), and, from left to right in Figure 11, City, Snowy day, and Square. Additionally, we evaluated their newly visible edges e, ratio of the gradients r, and saturated pixels s, as shown in Table 2.
We can observe that the USTM achieves excellent performance across all scenarios, and its restoration of color, contrast, and edge details surpasses that of ASM and GDM in most cases. In terms of key metrics, the USTM shows an average improvement of approximately 23.5% in newly visible edges and an average improvement of approximately 18.1% in the mean ratio of the gradients.
Because the minimum constraint on the transmission in the power term of the GDM is not equal to zero, the saturated pixels s of the GDM are lower than those of the other two models. Both the e and r scores of the Stadium example are consistent with the subjective perception that restoration using the USTM is better. In the Bank, Snowy day, and Square examples, due to the heavy fog, the transition discontinuity produces false edges at large depth in the ASM results, which leads to higher e. In the dense fog images, with the same transmittance estimation, the USTM suppresses the transition discontinuity very well. In the misty outdoor image of the City example, the ASM stretches the contrast in both the near and far regions and brightens the image, leading to a greater r.
From the perspective of the visualization results, we can see that, whether in fog or in mist on a clear day, the ASM tends to over-saturate the sky, and the transition is not smooth, as indicated by the red arrow in Figure 9; on the contrary, the USTM preserves the true color of the sky. The USTM is better at suppressing super-saturation in the farther sky region than the ASM. The zoomed area in the frame reveals more detail.
The color fidelity of the GDM is also slightly better than that of the ASM, but its contrast is lower. The USTM is also excellent at highlighting details, as shown in green in the local amplification diagrams. Additionally, the further examples in Figure 11 show that the USTM performs better in terms of both over-saturation suppression and detail restoration.
To investigate why the three models exhibit these visual differences, we conducted a statistical analysis of the intensity of pixels at different brightness levels and optical distances. It should be mentioned that, unlike the ASM and USTM, the GDM does not have uniform effects on bright and dark pixels, as shown in Figure 12. The ASM and GDM have a positive exponential correlation with the optical depth $\beta z$, whereas the USTM has a linear correlation with $\beta z$. This explains why the ASM over-recovers rapidly as $\beta z$ increases at greater depths, while the GDM avoids this, but only for dark pixels.
Generally, regardless of subjective or objective evaluation, the performance of the USTM is satisfactory.

6.4. Comparison Between USTM and Deep Learning

In this experiment, we implemented several representative data-driven deep learning algorithms and compared their results with the USTM using various evaluation metrics. These deep learning methods can learn high-level features of the image and obtain excellent dehazing effects. Considering the effectiveness and running time of the transmittance estimation methods DCP, CAP, and NLD, we chose NLD for the USTM when comparing it with the deep learning approaches. In the case of dense fog, the results are converted to HSB space and compensated in the brightness component to account for light attenuation.
We selected seven real hazy images from the classic hazy image dataset, as shown in Figure 13, and calculated the e and r metrics of the dehazing results for each method, which are two of the most representative indicators of the restoration quality. The results are presented in Figure 14 and Figure 15. More dehazing results of deep learning algorithms in this experiment are provided in the Supplementary Materials.
It can be noticed that the USTM performs better than the other competing methodologies in both e and r. Since the USTM accounts for forward scattering blur, the dehazing results of this model ensure edge recovery. Meanwhile, PSD is relatively effective in contrast enhancement but suffers from supersaturation, and EPDN appears able to recover more visible edges.
To better compare with the latest deep learning algorithms and make the experimental results fairer and more rigorous, we conducted supplementary experiments based on the 55 real hazy image pairs provided by the Dense-Haze dataset, using the widely recognized metrics PSNR, SSIM, NIMA, and BRISQUE as the basis for quantitative analysis. Then, we randomly selected hazy images from the RTTS and Haze4K datasets and added them to the test set, resulting in approximately 100 hazy data samples covering both synthetic and real-world scenarios, used to demonstrate the visualization results.
The test results on the Dense-Haze dataset are shown in Table 3. In terms of PSNR, the USTM achieves the optimal score of 11.87 dB, representing an approximate 3.85% improvement over the second-best method. The USTM ranks second in SSIM, with a gap of about 0.01 from first place and an approximately 11.76% improvement over the average level, and ranks third in the NIMA and BRISQUE scores. The USTM is thus still highly competitive among the dehazing algorithms proposed in recent years.
The visualization results are shown in Figure 16 and Figure 17. Figure 16 is from the Dense-Haze dataset, with the ground truth (GT) captured under the same exposure conditions without using a smoke machine.
It can be observed that algorithms such as RIDCP and DehazeSB remove haze unevenly and may produce artifacts like those in the DehazeSB results. This may be related to the fact that the domain mapping fitted by deep learning during training does not strictly follow physical constraints. In contrast, AOD-Net, PSD, and the USTM, which incorporate more physical constraints, do not generate artifacts.
Notably, HazeFlow shows blue color distortion in the dehazing results and suffers from the excessive loss of dark details in some scenes. In comparison, the USTM demonstrates accurate depth estimation even in different haze conditions, and maintains an excellent overall readability of the image content. This approach effectively preserves scene details while avoiding common issues such as over-saturation and color distortion.
The comprehensive evaluation based on these metrics further validates the effectiveness of the USTM in image dehazing. Benefiting from its robust physical framework, the USTM can effectively address the challenges posed by various haze conditions. This enables the USTM to achieve accurate detail restoration and recover image information.

6.5. Efficiency Comparison

Deep learning algorithms have higher hardware requirements, and the wide range of dehazing applications has made the demand for efficient algorithms increasingly important. In this experiment, we select representative algorithms with performance requirements ranging from low to high to conduct an efficiency comparison with the USTM.
The experiment is conducted on the Dense-Haze dataset, and the evaluation metrics include the number of parameters, Floating Point Operations (FLOPs), and algorithm speed. The results are shown in Table 4.
Notably, the USTM is not a deep learning algorithm, so the metric of the number of parameters is not applicable. In addition, since the USTM is not implemented based on deep learning frameworks (e.g., PyTorch and TensorFlow), it is difficult to obtain an accurate calculation of FLOPs at the code level. According to our estimation, its FLOPs are approximately 0.28 G.
The USTM performs excellently in terms of its lightweight design and computational speed. Even when running on the CPU, it outperforms most compared algorithms.

6.6. Application Experiments

As an upstream task, dehazing does not need to be evaluated solely based on human visual perception; its compatibility with downstream tasks is equally important. In this experiment, we test two widely used versions of the YOLO (You Only Look Once) model, namely YOLOv5 and YOLOv11.
First, we train the model on the UA-DETRAC dataset for 100 epochs. Then, we sample 15 pairs of road-containing images from the HazyDet dataset and perform dehazing on these images using the USTM, resulting in three subsets: Hazy, Clear, and Dehazed. Hazy and Clear are obtained from the HazyDet dataset, while Dehazed denotes the dehazed images generated by the USTM with Hazy as input.
Notably, due to the scarcity of public detection datasets with paired hazy images, we re-annotated the sampled images from HazyDet. Considering the domain gap caused by differences in shooting angles and focal lengths between the two datasets, the performance evaluation values of clear images from HazyDet are lower than those on the UA-DETRAC validation set. Therefore, when analyzing the experimental results, we take the detection accuracy on clear images as the baseline and focus on the relative improvement.
We evaluate object detection performance using two core metrics: mAP50 and mAP50-95, both based on Intersection over Union (IoU)—the standard metric to quantify the alignment between predicted and ground-truth bounding boxes. mAP50 reflects the basic detection accuracy at 0.5 IoU, while mAP50-95 averages the mAP over IoU thresholds from 0.5 to 0.95 to rigorously measure localization precision. Our results, in terms of mAP50 and mAP50-95, are presented in Table 5.
To intuitively demonstrate the improvement of the USTM on the detection task, we selected a subset of experimental results for visualization, as shown in Figure 18.
From the experimental results, it can be seen that there exists a performance gap between the two YOLO versions. The bounding boxes of YOLOv5 are less affected by haze, but still suffer from classification errors, while YOLOv11 is more vulnerable to haze interference.
It is clear that the detection results combined with USTM dehazing are much closer to those obtained directly using clean images. For YOLOv11, mAP50 is improved by 112.70% and mAP50-95 by 102.96%. For YOLOv5, mAP50 is improved by 6.65% and mAP50-95 by 12.57%.
These experiments demonstrate that the USTM has great potential for practical deployment in downstream vision tasks. We will further improve its applicability and generalization in future work.

7. Discussion

The traditional Atmospheric Scattering Model describes the main cause of image degradation, namely back scattering, and has been proven effective in the restoration of mist images. However, whether air light and transmission are estimated on the basis of priors or from deep-level image features by deep learning, it remains powerless for dense fog images or regions at infinite distance, such as the sky. Subsequent fusion-based deep learning dehazing models perform well in terms of reducing halos and enhancing contrast, but bring about depth confusion by not considering the physical degradation mechanism. Moreover, deep learning approaches need a large quantity of data to train model parameters, as well as strong computing hardware to support the training process.
Consequently, returning to a physical model is an optimal way to solve these problems. The forward scattering neglected in previous scattering models is an important cause of edge blur, which is imperceptible in mist but prominent when the depth ratio is large. The USTM presented in this paper unifies both the forward and back scattering in the process of scattering degradation.
USTM is well-suited for edge devices due to its lightweight inference design. Its core dehazing computation consists of simple pixel-wise operations combined with efficient prior estimation methods, enabling the USTM to achieve an excellent balance between dehazing performance and inference efficiency on edge platforms and demonstrating strong deployment potential for real-time dehazing tasks at low resolutions.
To fully unlock its deployment potential under the stringent memory and computational constraints of edge hardware, two adaptations can be made to the USTM. First, choose an appropriate image downsampling ratio: smaller images occupy less memory and require less computation for dehazing, so studying the specific application scenario of the edge device before deployment to determine the downsampling ratio can make effective use of device resources. Second, adopt more lightweight parameter estimation methods; the most direct approach is to modify the convolution kernel size of existing methods and use low-precision estimation suited to the application scenario. Additionally, if the device is equipped with a sensor that can directly acquire scene depth, it can be used to replace the transmittance estimation step.
The Laplace operator of the USTM delivers outstanding performance in edge detail preservation. However, if the image resolution is extremely low and the parameter estimation deviates beyond the valid range, it may slightly amplify the cosine noise introduced by image formatting. Our future work will focus on addressing this issue.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/sym18030472/s1. More results of the deep learning dehazing algorithms used in the comparative experiments are shown in the Supplementary Materials.

Author Contributions

Conceptualization, R.W.; methodology, R.W.; software, R.W., Z.W. and A.L.; investigation, Z.W., A.L. and T.J.; writing—original draft, R.W.; and writing—review and editing, R.W., Z.W., A.L. and T.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was sponsored by the National Natural Science Foundation of China (grant number: 62201490).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Koschmieder, H. Theorie der Horizontalen Sichtweite: Kontrast und Sichtweite; Keim and Nemnich: Leipzig-Munich, Germany, 1924. [Google Scholar]
  2. McCartney, E.J.; Hall, F.F. Optics of the Atmosphere: Scattering by Molecules and Particles; John Wiley and Sons: New York, NY, USA, 1976. [Google Scholar]
  3. Narasimhan, S.G.; Nayar, S.K. Vision and the Atmosphere. Int. J. Comput. Vis. 2002, 48, 233–254. [Google Scholar] [CrossRef]
  4. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar]
  5. Zhu, Q.; Mai, J.; Shao, L. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior. IEEE Trans. Image Process. 2015, 24, 3522–3533. [Google Scholar] [CrossRef] [PubMed]
  6. Berman, D.; Treibitz, T.; Avidan, S. Non-local Image Dehazing. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); IEEE: Piscataway, NJ, USA, 2016; pp. 1674–1682. [Google Scholar]
  7. Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. Dehazenet: An end-to-end system for single image haze removal. IEEE Trans. Image Process. 2016, 25, 5187–5198. [Google Scholar] [CrossRef]
  8. Li, B.; Peng, X.; Wang, Z.; Xu, J.; Feng, D. AOD-Net: All-in-One Dehazing Network. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV); IEEE: Piscataway, NJ, USA, 2017; pp. 4770–4778. [Google Scholar]
  9. Zhang, H.; Patel, V.M. Densely Connected Pyramid Dehazing Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 3194–3203. [Google Scholar]
  10. Zuiderveld, K. Contrast Limited Adaptive Histogram Equalization; Graphics Gems; Academic Press: Cambridge, MA, USA, 1994. [Google Scholar]
  11. Abdullah-Al-Wadud, M.; Kabir, M.H.; Dewan, M.; Chae, O. A dynamic histogram equalization for image contrast enhancement. IEEE Trans. Consum. Electron. 2007, 53, 593–600. [Google Scholar] [CrossRef]
  12. Jobson, D.J.; Rahman, Z.U.; Woodell, G.A. Properties and performance of a center/surround retinex. IEEE Trans. Image Process. 1997, 6, 451–462. [Google Scholar] [CrossRef]
  13. Rizzi, A.; Gatta, C.; Marini, D. A new algorithm for unsupervised global and local color correction. Pattern Recognit. Lett. 2003, 24, 1663–1677. [Google Scholar] [CrossRef]
  14. Provenzi, E.; De Carli, L.; Rizzi, A.; Marini, D. Mathematical definition and analysis of the retinex algorithm. J. Opt. Soc. Am. A 2005, 22, 2613–2621. [Google Scholar] [CrossRef]
  15. Ancuti, C.O.; Ancuti, C. Single image dehazing by multi–scale fusion. IEEE Trans. Image Process. 2013, 22, 3271–3282. [Google Scholar] [CrossRef]
  16. Choi, L.K.; You, J.; Bovik, A.C. Referenceless prediction of perceptual fog density and perceptual image defogging. IEEE Trans. Image Process. 2015, 24, 3888–3901. [Google Scholar] [CrossRef]
  17. Qu, Y.; Chen, Y.; Huang, J.; Xie, Y. Enhanced Pix2pix Dehazing Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 8152–8160. [Google Scholar]
  18. Chen, D.; He, M.; Fan, Q.; Liao, J.; Zhang, L.; Hou, D.; Yuan, L.; Hua, G. Gated Context Aggregation Network for Image Dehazing and Deraining. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa Village, HI, USA, 7–11 January 2019; pp. 1375–1383. [Google Scholar]
  19. Chen, Z.; Wang, Y.; Yang, Y.; Liu, D. PSD: Principled Synthetic-to-Real Dehazing Guided by Physical Priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 7180–7189. [Google Scholar]
  20. Zheng, Y.; Zhan, J.; He, S.; Dong, J.; Du, Y. Curricular contrastive regularization for physics-aware single image dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 18–22 June 2023; pp. 5812–5820. [Google Scholar]
  21. Wu, R.Q.; Duan, Z.P.; Guo, C.L.; Chai, Z.; Li, C. RIDCP: Revitalizing real image dehazing via high-quality codebook priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023. [Google Scholar]
  22. Gui, J.; Cong, X.; He, L.; Tang, Y.Y.; Kwok, J.T.Y. Illumination controllable dehazing network based on unsupervised retinex embedding. IEEE Trans. Multimed. 2024, 26, 4819–4830. [Google Scholar] [CrossRef]
  23. Ma, L.; Feng, Y.; Zhang, Y.; Liu, J.; Wang, W.; Chen, G.Y.; Xu, C.; Su, Z. CoA: Towards real image dehazing via compression-and-adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 11–15 June 2025. [Google Scholar]
  24. Fu, J.; Liu, S.; Liu, Z.; Guo, C.-L.; Park, H.; Wu, R.; Wang, G.; Li, C. Iterative Predictor-Critic Code Decoding for Real-World Image Dehazing. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), Nashville, TN, USA, 11–15 June 2025. [Google Scholar]
  25. Lan, Y.; Cui, Z.; Luo, X.; Liu, C.; Wang, N.; Zhang, M.; Su, Y.; Liu, D. When Schrödinger Bridge Meets Real-World Image Dehazing with Unpaired Training. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Honolulu, HI, USA, 19–23 October 2025. [Google Scholar]
26. Lan, Y.; Cui, Z.; Liu, C.; Peng, J.; Wang, N.; Luo, X.; Liu, D. Exploiting Diffusion Prior for Real-World Image Dehazing with Unpaired Training. Proc. AAAI Conf. Artif. Intell. 2025, 39, 4455–4463. [Google Scholar] [CrossRef]
  27. Shin, J.; Chung, S.; Yang, Y.; Kim, T.H. HazeFlow: Revisit Haze Physical Model as ODE and Non-Homogeneous Haze Generation for Real-World Dehazing. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Honolulu, HI, USA, 19–23 October 2025; pp. 6263–6272. [Google Scholar]
  28. Wang, R.; Zheng, Y.; Zhang, Z.; Li, C.; Liu, S.; Zhai, G.; Liu, X. Learning Hazing to Dehazing: Towards Realistic Haze Generation for Real-World Image Dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 11–15 June 2025. [Google Scholar]
  29. Chen, J.; Yan, X.; Xu, Q.; Li, K. Tokenize Image Patches: Global Context Fusion for Effective Haze Removal in Large Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 11–15 June 2025; pp. 2258–2268. [Google Scholar]
  30. Liu, C.; Qi, L.; Pan, J.; Qian, X.; Yang, M.-H. Frequency Domain-Based Diffusion Model for Unpaired Image Dehazing. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Honolulu, HI, USA, 19–23 October 2025. [Google Scholar]
  31. Mehra, A.; Mandal, M.; Narang, P.; Chamola, V. ReViewNet: A Fast and Resource Optimized Network for Enabling Safe Autonomous Driving in Hazy Weather Conditions. IEEE Trans. Intell. Transp. Syst. 2021, 22, 4256–4266. [Google Scholar] [CrossRef]
32. Fan, J.; Wang, K.; Yan, Z.; Chen, X.; Gao, S.; Li, J.; Yang, J. Depth-Centric Dehazing and Depth-Estimation from Real-World Hazy Driving Video. Proc. AAAI Conf. Artif. Intell. 2025, 39, 2852–2860. [Google Scholar] [CrossRef]
  33. Ju, M.; Ding, C.; Guo, Y.J.; Zhang, D. IDGCP: Image Dehazing Based on Gamma Correction Prior. IEEE Trans. Image Process. 2019, 29, 3104–3118. [Google Scholar] [CrossRef]
  34. Wang, R.; Wang, G.Y. Single image recovery in scattering medium by propagating deconvolution. Opt. Express 2014, 22, 8114–8119. [Google Scholar] [CrossRef]
  35. Li, B.; Ren, W.; Fu, D.; Tao, D.; Feng, D.; Zeng, W.; Wang, Z. Benchmarking Single-Image Dehazing and Beyond. IEEE Trans. Image Process. 2018, 28, 492–505. [Google Scholar] [CrossRef] [PubMed]
36. Ancuti, C.O.; Ancuti, C.; Sbert, M.; Timofte, R. Dense Haze: A Benchmark for Image Dehazing with Dense-Haze and Haze-Free Images. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019. [Google Scholar]
  37. Wen, L.; Du, D.; Cai, Z.; Lei, Z.; Chang, M.-C.; Qi, H.; Lim, J.; Yang, M.-H.; Lyu, S. UA-DETRAC: A new benchmark and protocol for multi-object detection and tracking. Comput. Vis. Image Underst. 2020, 193, 102907. [Google Scholar] [CrossRef]
  38. Feng, C.; Chen, Z.; Li, X.; Wang, C.; Yang, J.; Cheng, M.-M.; Dai, Y.; Fu, Q. HazyDet: Open-Source Benchmark for Drone-View Object Detection with Depth-Cues in Hazy Scenes. arXiv 2025, arXiv:2409.19833. [Google Scholar]
  39. Talebi, H.; Milanfar, P. NIMA: Neural Image Assessment. IEEE Trans. Image Process. 2018, 27, 3998–4011. [Google Scholar] [CrossRef] [PubMed]
  40. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
Figure 1. Illustration of United Scattering Transmission Model (USTM).
Figure 2. Layered decomposition of the scattering medium between the target and the sensor.
Figure 3. Simulated environment that produces a series of slit images at different depths.
Figure 4. Subfigures, from left to right and from top to bottom, show the slit images obtained when the blackboard is 6 cm, 8 cm, 10 cm, …, 36 cm away from the camera, respectively.
Figure 5. Comparison of the restoration capability of the three models. (a–c) show the ASM, GDM, and USTM, respectively; the bordered sub-image marks the optimal restoration result, as determined by PSNR, for each model.
Figure 6. Optical depth associated with the optimal restoration results and actual depth curve of the ASM, GDM, and USTM.
Figure 7. Examples of transmission depth maps obtained by DCP-ASM and DCP-USTM, respectively.
Figure 8. Example results obtained by DCP-ASM and DCP-USTM, respectively.
Figure 9. Results for the Bank example, obtained by applying the transmissions estimated by CAP and NLD to the ASM, GDM, and USTM, respectively. The colored boxes show the same locally magnified region in each result: red for the ASM, blue for the GDM, and green for the USTM.
Figure 10. Haze removal results for the Stadium example, obtained by applying the transmissions estimated by CAP and NLD to the ASM, GDM, and USTM, respectively. The colored boxes show the same locally magnified region in each result: red for the ASM, blue for the GDM, and green for the USTM.
Figure 11. Results for further examples (City, Snowy day, and Square), obtained by applying the CAP and NLD transmissions to the ASM, GDM, and USTM, respectively.
Figure 12. (a) The effects on dark pixels of the ASM, GDM, and USTM; (b) the effects on bright pixels of the ASM, GDM, and USTM. The optical depth is βz.
Figure 13. Example results using DehazeNet, AOD-Net, DCPDN, EPDN, GCANet, PSD, and the proposed USTM, respectively.
Figure 14. Bar chart of the newly visible edges e from samples in Figure 13.
Figure 15. Bar chart of the mean ratio of the gradients r from samples in Figure 13.
Figure 16. Example results of AODNet, DehazeSB, Diff-Dehazer, HazeFlow, PSD, and RIDCP.
Figure 17. Demonstration of results in different scenes.
Figure 18. Demonstration of object detection results.
Table 1. Configuration for comparative experiments.

Component | Specification
OS | Ubuntu 22.04
CUDA Version | 12.8
GPU | NVIDIA GeForce RTX 4090 (24 GB VRAM)
CPU | Intel(R) Xeon(R) Platinum 8352V CPU @ 2.10 GHz (16 vCPU)
RAM | 120 GB
Table 2. Quality assessments of newly visible edges (e), the mean ratio of the gradients (r), and the saturated pixels (s) of the examples in Figure 9, Figure 10 and Figure 11.

Indicator | CAP-ASM | CAP-GDM | CAP-USTM | NLD-ASM | NLD-GDM | NLD-USTM

Example 1. Bank
e | 0.1957 | 0.0492 | 0.1873 | 0.5359 | 0.0730 | 0.4535
r | 1.2024 | 1.1589 | 1.5115 | 1.5522 | 1.2237 | 2.4963
s | 0.0941 | 0.0002 | 0.0713 | 0.0277 | 0.0001 | 0.0132

Example 2. Stadium
e | 0.3890 | 0.2372 | 0.4244 | 0.3539 | 0.3287 | 0.4759
r | 2.098 | 1.5245 | 2.3664 | 2.2664 | 1.7961 | 2.4297
s | 0 | 0 | 0 | 0 | 0 | 0

Example 3. City
e | 0.0522 | 0.0176 | 1.1027 | 0.6959 | 0.1915 | 0.7214
r | 1.5538 | 1.2055 | 1.4670 | 1.6995 | 1.2830 | 1.6531
s | 0.0089 | 0 | 0.0087 | 0.0013 | 0 | 0.0017

Example 4. Snowy day
e | 0.2919 | 0.3855 | 0.4642 | 0.3475 | 0.236 | 0.5133
r | 2.499 | 1.6359 | 2.9668 | 3.2531 | 1.4182 | 3.0443
s | 0.0115 | 0 | 0.002 | 0.0011 | 0 | 0

Example 5. Square
e | 0.5224 | 0.1648 | 0.5024 | 0.7552 | 0.2651 | 0.6280
r | 0.9910 | 1.0193 | 1.3757 | 1.2748 | 0.9737 | 1.4217
s | 0.1428 | 0.0001 | 0.0767 | 0.0361 | 0 | 0.0406

Note: Bold values indicate the best result among different algorithms on the same samples.
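As a rough guide to how these blind indicators can be reproduced, the sketch below gives a simplified, gradient-based approximation of e (newly visible edges), r (mean ratio of the gradients), and s (fraction of saturated pixels). The function name, the Sobel-based gradients, and the edge threshold are illustrative assumptions, not the exact blind-assessment procedure used for the table.

```python
# Simplified approximation of the blind contrast indicators e, r, s.
# A sketch under stated assumptions, not the paper's exact code.
import cv2
import numpy as np

def blind_contrast_indicators(hazy, restored, edge_thresh=0.1):
    """hazy, restored: grayscale float images in [0, 1] with equal shape."""
    def grad_mag(img):
        # Gradient magnitude via 3x3 Sobel filtering.
        gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
        return np.sqrt(gx**2 + gy**2)

    g_hazy, g_rest = grad_mag(hazy), grad_mag(restored)
    edges_hazy = g_hazy > edge_thresh
    edges_rest = g_rest > edge_thresh

    # e: relative increase in the number of visible edges after restoration.
    n0, n1 = int(edges_hazy.sum()), int(edges_rest.sum())
    e = (n1 - n0) / max(n0, 1)

    # r: geometric mean of the gradient ratios on the restored edges.
    ratios = g_rest[edges_rest] / np.maximum(g_hazy[edges_rest], 1e-6)
    r = float(np.exp(np.log(np.maximum(ratios, 1e-6)).mean())) if ratios.size else 1.0

    # s: fraction of pixels that become fully black or white (saturated).
    s = float(np.mean((restored <= 0.0) | (restored >= 1.0)))
    return e, r, s
```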
Table 3. Quality assessments of PSNR, SSIM, NIMA, and BRISQUE of different methods.

Method | PSNR | SSIM | NIMA ↑ | BRISQUE ↓
DehazeSB | 11.24 | 0.39 | 5.12 | 25.30
AOD-Net | 9.73 | 0.31 | 5.00 | 21.16
Diff-Dehazer | 8.97 | 0.30 | 4.18 | 18.49
HazeFlow | 11.43 | 0.36 | 5.57 | 13.15
PSD | 10.31 | 0.30 | 4.83 | 19.65
RIDCP | 9.91 | 0.37 | 3.81 | 11.36
USTM | 11.87 | 0.38 | 5.36 | 15.32

Note: ↑ denotes positive correlation between value and image quality; ↓ denotes negative correlation.
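The two full-reference scores in this table can be computed with scikit-image as in the minimal sketch below; the no-reference scores (NIMA and BRISQUE) depend on pretrained assessment models and are not reproduced here. The helper name is ours.

```python
# Full-reference quality scores (PSNR, SSIM) with scikit-image.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def full_reference_scores(restored, ground_truth):
    """restored, ground_truth: uint8 RGB images of identical shape."""
    psnr = peak_signal_noise_ratio(ground_truth, restored, data_range=255)
    ssim = structural_similarity(ground_truth, restored,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim
```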
Table 4. Parameters, FLOPs, and speeds of different dehazing methods.

Method | Number of Parameters | FLOPs | Speed (s/image)
AOD-Net | 0.002 M | 0.28 G | 0.97
EPDN | 16.574 M | 101.84 G | 1.10
GCANet | 0.702 M | 543.90 G | 1.08
HazeFlow | 83.053 M | 2.845 T | 0.54
PSD | 5.917 M | 1.154 T | 1.69
DehazeSB | 14.636 M | 341.373 G | 0.70
RIDCP | 28.115 M | 10.791 T | 1.06
USTM | N/A | 0.28 G | 0.60

N/A: Not Applicable.
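For the learned baselines, figures of this kind are commonly measured as sketched below, assuming PyTorch and the third-party thop profiler; the input resolution, warm-up count, and run count are illustrative choices rather than the exact protocol used here. Note that thop reports multiply-accumulate operations (MACs), which are often quoted as FLOPs.

```python
# Sketch: parameter count, MACs, and per-image speed of a PyTorch dehazer.
import time
import torch
from thop import profile  # pip install thop

def profile_dehazer(model, size=(1, 3, 480, 640), device="cuda", runs=20):
    """Return (params in M, MACs in G, seconds per image)."""
    model = model.to(device).eval()
    x = torch.randn(size, device=device)
    macs, params = profile(model, inputs=(x,), verbose=False)

    def sync():
        if device == "cuda":
            torch.cuda.synchronize()

    with torch.no_grad():
        for _ in range(3):  # warm-up so one-off initialization is excluded
            model(x)
        sync()
        start = time.time()
        for _ in range(runs):
            model(x)
        sync()
    return params / 1e6, macs / 1e9, (time.time() - start) / runs
```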
Table 5. Object detection performance on different image subsets.

mAP50
Method | HazyDet-Hazy | HazyDet-Dehazed | HazyDet-Clear | UA-DETRAC-Val
YOLOv5 | 0.571 | 0.609 | 0.772 | 0.649
YOLOv11 | 0.189 | 0.402 | 0.337 | 0.605

mAP50-95
Method | HazyDet-Hazy | HazyDet-Dehazed | HazyDet-Clear | UA-DETRAC-Val
YOLOv5 | 0.342 | 0.385 | 0.508 | 0.474
YOLOv11 | 0.135 | 0.274 | 0.216 | 0.472
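A minimal sketch of the COCO-style detection evaluation behind these numbers, using the ultralytics package, is shown below; the weight files and the dataset configuration hazydet.yaml are placeholders for the actual checkpoints and the hazy/dehazed/clear subset splits.

```python
# Sketch: evaluate detectors on one image subset with ultralytics.
from ultralytics import YOLO  # pip install ultralytics

# "hazydet.yaml" is a hypothetical dataset config pointing at the chosen split.
for weights in ("yolov5su.pt", "yolo11n.pt"):  # placeholder checkpoints
    model = YOLO(weights)
    metrics = model.val(data="hazydet.yaml")
    print(weights, f"mAP50={metrics.box.map50:.3f}",
          f"mAP50-95={metrics.box.map:.3f}")
```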
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
