Article

Pixel-Dehaze: Deciphering Dehazing Through Regression-Based Depth and Scattering Estimation

1 Computer Science Engineering Department, Thapar Institute of Engineering Technology, Patiala 147004, Punjab, India
2 Department of Mechanical Engineering, Indian Institute of Technology, Patna 801106, Bihar, India
* Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2025, 9(11), 282; https://doi.org/10.3390/bdcc9110282
Submission received: 20 September 2025 / Revised: 1 November 2025 / Accepted: 4 November 2025 / Published: 8 November 2025

Abstract

Haze significantly reduces visibility in critical applications such as autonomous driving, surveillance, and firefighting, making its removal essential for safety and reliability. Motivated by the limited robustness of the existing methods under non-uniform haze conditions, this study introduces a novel regression-based dehazing model that simultaneously incorporates the atmospheric light constant, transmission map, and scattering coefficient for improved restoration. Instead of relying on complex deep networks, the model leverages brightness–saturation cues and regression-driven scattering estimation with localized haze detection to reconstruct clearer images efficiently. Evaluated on the RESIDE dataset, the approach consistently surpasses state-of-the-art techniques including Dark Channel Prior, AOD-Net, FFA-Net, and Single U-Net, achieving SSIM = 0.99, PSNR = 22.25 dB, VIF = 1.08, and the lowest processing time of 0.038 s, demonstrating both accuracy and practicality for real-world deployment.

1. Introduction

Traditionally, haze is an atmospheric condition in which visibility and clarity are degraded by suspended dry particulates such as smoke, dust, and other pollutants. This phenomenon arises from various sources, including air pollution [1], forest fires [2], and other environmental disturbances. Haze not only diminishes visibility [3], but also creates significant challenges in critical situations, such as slowing down vehicles or obstructing the ability to identify individuals during emergencies like fires. Numerous models have been proposed in the past to address this problem by manipulating parameters of the hazy image equation—particularly focusing on atmospheric light constants or transmission maps [4]. However, despite these efforts, existing methods often lack robustness, efficiency, and accuracy when applied under diverse and extreme real-world conditions.
Real-time haze and smoke removal has therefore become an urgent research priority due to its relevance across multiple safety-critical domains. For example, in the automotive industry, effective dehazing enhances driver visibility and strengthens the performance of Advanced Driver Assistance Systems (ADAS). In firefighting operations, real-time dehazing technologies can enable clearer visibility for victim identification and hazard analysis under smoky conditions. Similarly, in surveillance and security, maintaining clarity in polluted or foggy environments is essential for reliable monitoring. The inefficiencies of previous methods and the widespread demand for strong real-time dehazing solutions in such applications were the primary reasons for designing and developing our proposed model.
To address these challenges, this paper proposes a novel image dehazing model that integrates multiple aspects of the hazy image equation with innovative design strategies. Unlike traditional models that optimize only one parameter, our method simultaneously incorporates the atmospheric light constant (A) and the transmission map ( t ( x ) ), with particular emphasis on the scattering coefficient ( β ) in the transmission formulation. A distinctive feature of this model is its approach to depth map calculation, which leverages a reference image of the target environment. This reference-based methodology allows for accurate recovery of depth information and enhances the fidelity of haze removal. Furthermore, because haze is inherently non-uniform across images, the model first identifies the haziest region using grayscale conversion and a quad-tree-based method. The atmospheric constant is then estimated based on this localized haze concentration, and combined with regression-based estimation of the scattering coefficient and the depth map for precise reconstruction of the dehazed image.
The effectiveness of the proposed model is validated through extensive comparisons against state-of-the-art methods, including CLAHE, AOD-Net, Dark Channel Prior, FFA, and Single U-Net. Across multiple quantitative metrics such as SSIM, PSNR, VIF, FADE, UIQI, and C-H Ratio, our approach consistently demonstrates superior performance. Notably, it achieves the highest SSIM (0.99), PSNR (22.25), and VIF (1.08), indicating remarkable structural preservation, image quality retention, and visual fidelity; while FADE values are slightly higher (63.87), they reflect the model’s effective balance between enhancement and detail preservation, ensuring practicality in real-world scenarios. Comparative analyses with visual examples further illustrate the superiority of our method over prior solutions such as Single U-Net, FFA-Net, AOD-Net, and Dark Channel. Summarizing the above contributions, the novelty of the work is as follows.
  • It combines the transmission map ( t ( x ) ) with the atmospheric light constant (A) in the dehazing equation, while dynamically estimating the scattering coefficient ( β ) using a linear regression model, unlike standard single-parameter methods.
  • It employs a quad-tree-based method to identify the haziest regions of an image, enabling precise measurement of atmospheric light (A) and improving dehazing performance in unevenly hazy conditions.
  • It utilizes grayscale conversion, depth estimation, and linear regression within a unified dehazing system. It incorporates a reference image to generate an accurate depth map ( d ( x ) ), thereby improving transmission map reliability with prior scene information.
  • It achieves state-of-the-art results, including SSIM = 0.99, PSNR = 22.25, and VIF = 1.08, surpassing classic methods such as Single-U-Net, Double-U-Net, and Dark Channel Prior while preserving structural details and overall image fidelity.
  • It demonstrates wide-ranging applications, such as enhanced road safety, improved surveillance and emergency response capabilities, higher-quality aerial/satellite imagery for environmental monitoring, and safer autonomous navigation in foggy environments.

2. Related Works

The restoration of visibility in hazy images is one of the primary challenges in image processing, and many dehazing techniques rely on airlight estimates. One technique [4] improves transmission map estimates by employing a global dynamic template and a positive depth of field to overcome DCP’s shortcomings in situations with objects that resemble atmospheric light. For improved transmission map and airlight estimation, a hybrid approach [5] that combines histogram equalization with DCP performs better on the D-Hazy dataset. A rapid airlight estimate approach [6] decreases computational burden by 92.82%, and Down-sampling DCP (DS-DCP) [7] reduces computational complexity by 98% while retaining an error rate of 0.22%. The paper [8] suggests a Modified DCP (MDCP) approach to overcome common DCP problems, such as artifacts, halos, color distortions, and high computational cost. It utilizes a pixel-based dark channel as the reference image in filtering and incorporates adaptive scaling for airlight estimates, resulting in improved assessments and a 5.12 times increase in processing speed. These developments show notable gains in DCP’s precision and effectiveness, fixing long-standing problems and boosting the algorithm’s performance in various applications.
A U-Net-based segmentation network in [9] enhances the quality of medium transmission map estimation from a hazy image by employing a modified dark channel prior approach to calculate global atmospheric light. This method works well and consistently across a variety of datasets. Another technique [10] achieves a thirty-fold speedup in processing large images by estimating the transmission function using a linear model with a quadtree search. The paper [11] presents a robust single-image dehazing method that optimizes the transmission map using image features such as contrast energy, entropy, and sharpness to estimate the extinction coefficient more accurately. The authors also introduced an adaptive atmospheric light model to handle non-uniform illumination, unlike conventional homogeneous light assumptions. A heuristic model for an ideal transmission map is proposed by [12], guaranteeing depth consistency for precise dehazing. Outperforming state-of-the-art methods, ref. [13] enhances transmission map estimation by merging foreground and sky regions, maintaining features, and reducing artifacts. The paper [14] presents SIDGAN, a U-Net-based GAN for single-image dehazing that improves performance by utilizing color consistency and perceptual loss. Using convolution modules, channel attention, and gate mechanism residual blocks, ref. [15] introduces zUNet, a lightweight network for real-time systems that outperforms current techniques in latency, parameter count, and PSNR. This research shows how important the transmission map is for dehazing and provides practical ways to enhance image quality.
A novel atmospheric scattering model (NASM) with a scattering compensation coefficient addresses color cast in dehazing through the IDACC method [16], which produces better results. Another study suggests a single-image dehazing technique that uses an average saturation prior and sky recognition to independently estimate transmission in sky and non-sky regions for high-quality haze removal [17]. In terms of efficiency and performance, ref. [18] surpasses state-of-the-art approaches by applying the color attenuation prior with a linear model for depth restoration. The paper [19] uses adaptive gamma correction and k-means clustering to improve visual quality in areas dominated by sky. By addressing color-related problems collectively, these techniques provide creative solutions for enhanced dehazing and effectiveness.
The integration of the atmospheric scattering model (ASM) with segmentation to ensure precise recovery is introduced in [20]. The paper [21] improves dehazing in areas with abrupt depth changes by estimating ambient light using quad-tree subdivision and a linear transformation in the minimum channel. For better performance with less complexity, ref. [22] models a linear relationship between the depth map and the minimum channel of the hazy image. The paper [23] improves PSNR and SSIM, especially in bright areas, by refining the Dark Channel Prior (DCP) with multiple linear regression, which lowers estimation errors. These strategies show how regression techniques improve dehazing efficiency and accuracy in foggy conditions.
The technique presented in [24] utilizes Repeated Averaging Filters to address halo artifacts and improve radiance recovery when estimating ambient light from a single foggy image. The paper [25] offers a quick single-image dehazing technique that balances speed and quality for real-time systems by utilizing gray projection and the atmospheric scattering model. The method in [26] demonstrates efficacy in severe weather without increasing noise by introducing a combined dehazing and denoising technique that reduces noise amplification in foggy conditions. The paper [27] demonstrates exceptional performance on dehazing benchmarks by proposing a deep learning-based dehazing method with a pre-dehazer for guiding haze removal. By bypassing transmission map estimation and learning a nonlinear mapping from hazy to dehazed images directly, the deep residue learning (DRL) method in [28] outperforms existing approaches in terms of both objective and subjective quality. The paper [29] introduces a depth estimation method that evaluates dehazing techniques by integrating geometry and edge information, utilizing a synthetic outdoor dataset. The paper [30] provides the hyperspectral multi-level hazy image dataset SHIA to test dehazing techniques and determine how well they handle multi-level hazy images.
In order to improve restoration quality, ref. [31] adds a dynamic scattering coefficient for visibility across haze levels. The paper [32] rapidly and accurately estimates atmospheric light, particularly in bright regions, using pixel-based dark and bright channel constraints. In the paper [33], the Non-Homogeneous RESIDE dataset is used to improve processing speed and dehazing precision. These techniques prioritize real-world performance and visibility enhancement. AEDNet, an attention-based model focusing on fine detail retention, is presented in [34]; it includes a channel shuffling mechanism to maintain image features during dehazing. Utilizing adversarial learning and removing the requirement for paired training datasets, ref. [35] uses CycleGAN for unsupervised dehazing. These methods differ in their strategies, from physical models to attention mechanisms and adversarial learning, yet they all aim to increase image quality, estimate atmospheric light, and process data in real time.
Several techniques demonstrate the importance of depth maps in enhancing dehazing. A physics-based network used a simulated depth map from virtual environments [36] to provide consistent training and improved benchmark performance without real-world data. In [37], object detection is enhanced by combining CNN-based transmission map estimation with adaptive color correction, reframing dehazing as a depth estimation problem, while [38] enhanced images for underwater scenarios by employing haze as a depth cue, surpassing traditional techniques. In [39], a GAN-based architecture combines depth-guided refinement with physical restoration to overcome unpaired learning restrictions and restore distant details. The paper [40] enhanced depth maps using a second-order variational framework to accomplish robust haze removal across conditions and maintain structures.
Recent advances in image dehazing have investigated various methods to improve visual restoration in multiple settings. Perceptual loss functions such as PSNR-HVS and HaarPSI, for example, perform better in CNN-based dehazing models, as [41] demonstrates. In [42], a lightweight DMCGF network employing gate fusion modules and multi-scale feature extraction is proposed for real-time dehazing. Color correction and dark channel enhancement are used in [43] to correct color variation in dusty photos. The paper [44] presents a fog density fusion technique for reduced halo effects and seamless transitions. ZRD-Net, introduced in [45], addresses dehazing in an unsupervised, zero-shot manner, removing the need for large datasets. The paper [46] illustrates a hybrid U-Net and AOD-Net method with adaptive loss for improved accuracy. The paper [47] achieves effective image restoration by integrating methods for haze reduction in SAR imaging, such as multi-scale dehazing and guided filters. Together, this research highlights advancements in loss functions, architectures, and adaptive techniques for robust dehazing.
In addition to the classical and early deep-learning-based dehazing algorithms discussed above, several recent studies have extended these architectures with improved feature fusion, transformer integration, and cycle-consistent training mechanisms. For instance, Dudhane and Murala proposed RYF-Net [48], a deep fusion network that estimates transmission maps by combining RGB and YCbCr representations through separate sub-networks and a fusion stage. Jain [49] proposed an enhanced Feature Fusion Attention Network combined with CycleGAN for multi-weather restoration tasks, demonstrating improved adaptability across haze, snow, and rain. Li et al. [50] introduced UTCR-Dehaze, a transformer-augmented U-Net and cycle-consistent generative model for unpaired remote sensing image dehazing, which effectively captures both global and local contextual information. Furthermore, Majid and Isa [51] presented an updated CLAHE variant that adapts contrast enhancement at a local level for low-visibility conditions, reaffirming the continued relevance of traditional histogram-based approaches in modern enhancement pipelines. These contemporary developments, all from 2025, represent the evolution of earlier classical, CNN-based, and attention-driven frameworks, and were therefore adopted as baselines for comparison in Table 1.
In conclusion, the reviewed literature demonstrates significant progress in image dehazing, focusing on improving image quality, processing efficiency, and handling complex scenarios. Among the key advancements are deep learning models, enhanced Dark Channel Prior methods, and the utilization of depth information for more precise transmission map estimation. Lightweight networks and hybrid approaches offer real-time applications without compromising quality. These efforts pave the way for future developments in real-world scenarios by demonstrating the importance of combining deep learning methods with physical models for more effective and successful dehazing.

3. Methodology

Before detailing the methodology, we provide an overview of the proposed algorithm for training the dehazing model. The complete workflow is clearly represented in Figure 1, which outlines the major steps from pre-processing to evaluation. The algorithm begins with pre-processing of clear and hazy images, including resizing and normalization, followed by conversion to HSV color space to extract brightness ( η ) and saturation ( χ ) features. A depth map is then computed from the difference between clear and hazy images, and the atmospheric scattering coefficient ( β ) is estimated. A regression model is trained on [ η , χ ] features to predict β , which is subsequently used to compute the transmission map t ( x ) . Finally, the hazy image is restored using the estimated transmission map, followed by brightness and contrast adjustments, and the quality of the dehazed output is evaluated through metrics such as SSIM and PSNR.
The proposed regression-based dehazing approach uses a mathematical regression formulation for parameter estimation rather than iterative deep learning optimization. A linear regression model trained on the extracted brightness–saturation features ($\eta$, $\chi$) and their corresponding scattering coefficients ($\beta$) was used to obtain the coefficients $c_1, c_2, c_3$, which describe the scattering relationship. The least-squares minimization outlined in Equation (8) was solved analytically, guaranteeing convergence in a single computational step.
A grid search mechanism was used during the dehazing process to identify the haziest regions within an image by creating quadrant-based subdivisions (quadruples), not for hyperparameter tuning. This allowed for better restoration in non-uniform haze distributions and precise assessment of localized atmospheric light.
Every experiment was carried out in the Kaggle cloud GPU environment; the method requires only simple mathematical computations rather than complex deep learning architectures. The derivation of the coefficients $c_1, c_2, c_3$ and the associated regression parameters was the only machine learning component. This resulted in a very low average processing time of about 0.038 s per image, which is significantly faster than traditional deep-learning-based dehazing techniques. It should be noted, nonetheless, that the processing time during real-time deployment may differ depending on the hardware configuration and computational resources, including embedded GPUs or processors.

3.1. Dataset Description

Image dehazing tasks are frequently evaluated using the RESIDE dataset [52,53], which includes both the RESIDE-Indoor (ITS) and RESIDE-Outdoor (OTS) subsets. Furthermore, the RESIDE-6K collection includes synthetic images from indoor and outdoor environments, providing a wide range of hazy conditions. The test data in RESIDE, known as the Synthetic Objective Testing Set (SOTS), is divided into indoor and outdoor subsets for performance analysis. The inclusion of real hazy photographs improves the dataset's relevance to real-world situations, while the synthetic images are created using a physical scattering model as specified in the standardized dataset, guaranteeing consistency in haze simulation.
The dataset is divided into two principal folders: one for training and validation, and another for testing. The subfolders of each folder contain a combination of indoor and outdoor scenes, along with ground-truth and hazy photographs. The training, validation, and testing subsets used in this study were separated using the conventional 70:20:10 ratio. The regression model was trained on 70% of the data, 20% was used for validation and parameter adjustment, and the remaining 10% was used for independent performance evaluation. This ratio was chosen to guarantee a fair trade-off between learning generalizable regression coefficients ($c_1, c_2, c_3$) and avoiding overfitting to particular haze patterns. The model's performance is not affected by particular data divisions, as demonstrated by empirical validation of the selected split, which produced consistent findings across several random seeds. The observed changes in SSIM and PSNR were negligible (within ±0.5%) when the dataset division was changed (e.g., 80:10:10 or 60:20:20), indicating the stability and robustness of the regression-based approach. Therefore, the 70:20:10 split offers the best compromise between evaluation reliability and model generalization.

3.2. Data Pre-Processing

The data preprocessing for the image dehazing task involves implementing a custom dataset class, Dehazing-Dataset, which efficiently handles paired hazy and clean images. The dataset is loaded from specified directories, filtering only valid image formats (e.g., .jpg, .jpeg, .png) and ensuring alignment by truncating excess images to maintain an equal number of hazy and clean image pairs. A filtering mechanism validates that only image pairs with identical dimensions are retained, ensuring consistency in input-output mappings during model training. The rgb_loader function is integral to the pipeline, loading images and converting them to RGB format for compatibility with standard machine learning frameworks. Additionally, the show_grayscale_image method offers a utility for visualizing grayscale images, aiding in inspecting and validating preprocessing results. The images are further processed with optional transformations, such as resizing or augmentation, to standardize the dataset and enhance its variability. This comprehensive preprocessing pipeline ensures high-quality, well-aligned, and properly formatted data, facilitating the adequate training and evaluation of image dehazing models.
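As a rough illustration, the sketch below shows how such a paired dataset class might be implemented in a PyTorch-style pipeline; the class and argument names, and the filtering logic, are assumptions based on the description above rather than the authors' actual code.

```python
import os
from PIL import Image
from torch.utils.data import Dataset

VALID_EXTS = (".jpg", ".jpeg", ".png")

def rgb_loader(path):
    # Load an image from disk and force RGB mode for framework compatibility.
    with open(path, "rb") as f:
        return Image.open(f).convert("RGB")

class DehazingDataset(Dataset):
    """Pairs hazy and clear images by sorted filename order (illustrative sketch)."""

    def __init__(self, hazy_dir, clear_dir, transform=None):
        hazy = sorted(f for f in os.listdir(hazy_dir) if f.lower().endswith(VALID_EXTS))
        clear = sorted(f for f in os.listdir(clear_dir) if f.lower().endswith(VALID_EXTS))
        n = min(len(hazy), len(clear))              # truncate excess images
        self.pairs = []
        for h, c in zip(hazy[:n], clear[:n]):
            h_path, c_path = os.path.join(hazy_dir, h), os.path.join(clear_dir, c)
            # Keep only pairs whose dimensions match, as described above.
            if rgb_loader(h_path).size == rgb_loader(c_path).size:
                self.pairs.append((h_path, c_path))
        self.transform = transform

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        h_path, c_path = self.pairs[idx]
        hazy, clear = rgb_loader(h_path), rgb_loader(c_path)
        if self.transform is not None:
            hazy, clear = self.transform(hazy), self.transform(clear)
        return hazy, clear
```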
Feature extraction is performed by converting the image from RGB to HSV color space and computing the mean brightness and saturation, from which the parameters η and ξ are derived. In the initial stage, the image is converted from RGB to HSV (hue, saturation, value) color format, as HSV is more suitable for color quality analysis since it distinguishes between luminance (value) and color saturation (saturation). The brightness (V channel) and saturation (S channel) are extracted, as haze affects these components differently: brightness determines how light or dark the image is, while saturation reflects how vivid the colors appear. The mean brightness and saturation are then calculated for the entire image, and two parameters are derived: η , which indicates the proportion of brightness to the total brightness and saturation and reflects how much light is influencing the scene, and  ξ , which indicates the proportion of saturation to the total brightness and saturation and characterizes the available color information relative to brightness.
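A minimal sketch of this feature extraction step is given below, assuming an 8-bit BGR input and OpenCV's HSV conversion; the function name and normalization are illustrative.

```python
import cv2
import numpy as np

def brightness_saturation_features(img_bgr):
    """Return (eta, xi): relative brightness and saturation ratios of an image."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    s = hsv[:, :, 1].astype(np.float64) / 255.0   # saturation channel
    v = hsv[:, :, 2].astype(np.float64) / 255.0   # value (brightness) channel
    mu_v, mu_s = v.mean(), s.mean()
    eta = mu_v / (mu_v + mu_s)   # brightness share of the total
    xi = mu_s / (mu_v + mu_s)    # saturation share of the total
    return eta, xi
```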

3.3. Mathematical Formulations for Image Dehazing

To systematically recover clear images from degraded observations, the dehazing process is formulated through a sequence of mathematical steps. Each step builds upon the previous one, starting from the fundamental hazy image equation and progressing toward brightness–saturation analysis, regression-based parameter estimation, and final image restoration. The entire process is summarized in Algorithm 1.
Step 1.
Hazy Image Equation
$I(x) = J(x)\, t(x) + A\,\bigl(1 - t(x)\bigr)$
where I ( x ) represents the perceived hazy image, J ( x ) denotes the scene radiance, t ( x ) is the transmission map indicating the amount of light reaching the camera without scattering, A is the global atmospheric illumination, and x corresponds to the pixel location in the image.
Step 2.
Calculate Mean Brightness and Saturation
Let V and S represent the value and saturation channels in the HSV color space, respectively. The mean brightness and mean saturation are calculated as
$\text{mean\_brightness} = \mu_V = \frac{1}{N}\sum_{i=1}^{N} V_i$
$\text{mean\_saturation} = \mu_S = \frac{1}{N}\sum_{i=1}^{N} S_i$
Here, V i and S i correspond to the pixel intensities in the value and saturation channels of the HSV image, respectively, while N represents the total number of pixels in the image.
Step 3.
Brightness Sum
The total brightness is the aggregate of the mean brightness and mean saturation.
$\text{brightness\_sum} = \mu_V + \mu_S$
Step 4.
Ratios η and ξ
The ratios η and ξ , which represent the relative contributions of brightness and saturation to the overall image characteristics, are defined as
$\eta = \dfrac{\mu_V}{\mu_V + \mu_S}$
$\xi = \dfrac{\mu_S}{\mu_V + \mu_S}$
Step 5.
Determining Optimal Values for a, b, and c Through Regression Analysis
The optimal values of a, b, and c are determined by solving a linear regression problem. Define the feature matrix X and the target vector y as follows:
$X = \begin{bmatrix} 1 & \eta_1 & \xi_1 \\ 1 & \eta_2 & \xi_2 \\ \vdots & \vdots & \vdots \\ 1 & \eta_n & \xi_n \end{bmatrix}, \qquad y = \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_n \end{bmatrix}$
The parameter vector $\theta = [a \;\, b \;\, c]^\top$ can thereafter be estimated by minimizing the sum of squared residuals:
$\theta = (X^\top X)^{-1} X^\top y$
where $X^\top$ denotes the transpose of $X$ and $(X^\top X)^{-1}$ denotes the inverse of the matrix $X^\top X$.
This yields the values of a, b, and c that best fit the data.
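A short NumPy sketch of this least-squares fit follows; it uses `np.linalg.lstsq` rather than forming the explicit inverse of $X^\top X$, which is numerically safer but otherwise equivalent to the closed-form solution above. Function names are illustrative.

```python
import numpy as np

def fit_scattering_regression(eta, xi, beta):
    """Solve theta = (X^T X)^{-1} X^T y for the coefficients [a, b, c]."""
    eta, xi, beta = map(np.asarray, (eta, xi, beta))
    X = np.column_stack([np.ones_like(eta), eta, xi])   # design matrix [1, eta, xi]
    # lstsq avoids explicitly inverting X^T X while giving the same minimizer.
    theta, *_ = np.linalg.lstsq(X, beta, rcond=None)
    return theta                                         # a, b, c

def predict_beta(theta, eta, xi):
    """Predicted scattering coefficient for new brightness-saturation features."""
    a, b, c = theta
    return a + b * eta + c * xi
```

The predicted $\beta$ is then used in the transmission map of the next step.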
Step 6.
Refining the Transmission Map
The transmission map t ( x ) for pixel x is computed using the estimated β as
$t(x) = e^{-\beta(x)\, d(x)}$
where d ( x ) denotes the depth map, calculated as the absolute difference between the hazy and clear images at each pixel.
Step 7.
Identifying the Haziest Region
The haziest region in the image is determined using a haze score, which considers the uniformity of pixel intensities and the lack of edges. For a region R, the haze score is calculated as
$\text{Haze Score} = \dfrac{1}{\sigma_R \cdot (\text{edge count} + 1)}$
where $\sigma_R$ denotes the standard deviation of pixel intensities within region R, and edge count represents the number of edges within region R, detected using the Canny edge detector.
Regions are recursively divided into quadrants using a quadtree algorithm until a maximum depth is reached or the region size is minimal. The area with the highest haze score is designated as the most hazy.
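A compact sketch of this quad-tree search is shown below, assuming an 8-bit grayscale input; the recursion depth limit, minimum region size, and Canny thresholds are illustrative defaults, not values from the paper.

```python
import cv2
import numpy as np

def haze_score(region_gray):
    """Higher score = flatter intensities and fewer edges, i.e. more haze."""
    sigma = region_gray.std() + 1e-6                     # avoid division by zero
    edge_count = int(np.count_nonzero(cv2.Canny(region_gray, 100, 200)))
    return 1.0 / (sigma * (edge_count + 1))

def haziest_region(gray, depth=0, max_depth=4, min_size=32):
    """Recursively split into quadrants; return ((x, y, w, h), score) of the haziest box."""
    h, w = gray.shape
    if depth >= max_depth or min(h, w) < 2 * min_size:
        return (0, 0, w, h), haze_score(gray)
    best_box, best_score = (0, 0, w, h), -np.inf
    for y0, x0 in [(0, 0), (0, w // 2), (h // 2, 0), (h // 2, w // 2)]:
        quad = gray[y0:y0 + h // 2, x0:x0 + w // 2]
        (bx, by, bw, bh), score = haziest_region(quad, depth + 1, max_depth, min_size)
        if score > best_score:
            best_score = score
            best_box = (x0 + bx, y0 + by, bw, bh)       # offsets back to parent coords
    return best_box, best_score
```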
Step 8.
Estimating the Airlight Value A
To identify the airlight value A, the most luminous pixels in the haziest area are examined. The procedure is outlined as follows:
  • Normalize the pixel intensities of the region to [ 0 , 1 ] :
    $I'(x) = \dfrac{I(x)}{255}$
  • Identify the top 20% brightest pixels within the region:
    $I_{\text{brightest}} = \{\, I'(x) \mid I'(x) \geq \text{Percentile}(I', 80) \,\}$
  • Determine the average intensity of the brightest pixels in order to estimate A:
    $A = \dfrac{1}{|I_{\text{brightest}}|} \sum_{I'(x) \in I_{\text{brightest}}} I'(x)$
The estimated A represents the intensity of the scattered light dominating the haziest region.
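A minimal sketch of this airlight estimate, assuming an 8-bit grayscale crop of the haziest region:

```python
import numpy as np

def estimate_airlight(region_gray):
    """Estimate A as the mean of the top 20% brightest pixels in the haziest region."""
    intensities = region_gray.astype(np.float64).ravel() / 255.0   # normalize to [0, 1]
    threshold = np.percentile(intensities, 80)                     # 80th-percentile cut-off
    brightest = intensities[intensities >= threshold]
    return brightest.mean()
```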
Step 9.
Restoration of the Dehazed Image
The transmission map t ( x ) and the airlight value A are employed to restore the dehazed image J ( x ) :
$J(x) = \dfrac{I(x)}{t(x) + \epsilon}$
where I ( x ) is the observed hazy image and ϵ is a small constant that is included to prevent division by zero.
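The transmission map of Step 6 and the restoration of Step 9 can be sketched together as follows, assuming 8-bit inputs and a reference clear image for the depth proxy; the clipping and epsilon value are illustrative.

```python
import numpy as np

def restore(hazy, clear_ref, beta, eps=1e-6):
    """Compute t(x) = exp(-beta * d(x)) and recover J(x) = I(x) / (t(x) + eps)."""
    hazy_f = hazy.astype(np.float64) / 255.0
    clear_f = clear_ref.astype(np.float64) / 255.0
    depth = np.abs(clear_f - hazy_f)               # reference-based depth proxy d(x)
    t = np.exp(-beta * depth)                      # transmission map
    dehazed = np.clip(hazy_f / (t + eps), 0.0, 1.0)
    return (dehazed * 255).astype(np.uint8)
```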
Step 10.
 Brightness and Contrast Adjustment
The restored image is multiplied by 255 to transform it from a floating-point to an 8-bit format. Brightness and contrast adjustments are performed using the parameters α and β :
  • Brightness adjustment: the β parameter shifts pixel values by adding or subtracting a constant.
  • Contrast adjustment: the α parameter scales pixel values, enhancing contrast by making dark regions darker and bright regions brighter.
Step 11.
 Gamma Correction
Gamma correction is used to fine-tune brightness and compensate for the nonlinear way the human eye perceives light. It involves raising pixel values to the power of $1/\gamma$. The value of $\gamma$ determines the image's overall contrast: greater values make it darker, while lower values make it brighter. After haze removal, this step ensures that the image appears natural to the human eye.
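A possible OpenCV sketch of Steps 10 and 11 is given below; the α, β, and γ values are placeholders rather than tuned parameters from the paper.

```python
import cv2
import numpy as np

def adjust_and_gamma(img_uint8, alpha=1.1, beta_shift=5, gamma=1.2):
    """Apply linear contrast/brightness (alpha, beta), then gamma correction via a lookup table."""
    adjusted = cv2.convertScaleAbs(img_uint8, alpha=alpha, beta=beta_shift)
    inv_gamma = 1.0 / gamma
    # Precompute the gamma curve for all 256 intensity levels.
    lut = np.array([((i / 255.0) ** inv_gamma) * 255 for i in range(256)], dtype=np.uint8)
    return cv2.LUT(adjusted, lut)
```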
Algorithm 1 Model training of proposed approach.
Require: Training data, uniform dimensions, normalized pixel values
Ensure: Dehazed images
1: Load clear and hazy images, resize, normalize
2: Convert images to HSV color space
3: Compute brightness and saturation adjustment factors ($\eta$ and $\chi$):
   $\eta = \dfrac{\text{mean\_brightness}}{\text{mean\_saturation} + \text{mean\_brightness}}, \qquad \chi = \dfrac{\text{mean\_brightness}}{2 \cdot \text{mean\_saturation}}$
4: Compute depth map:
   $\text{depth\_map} = \lvert\, \text{clear\_image} - \text{hazy\_image} \,\rvert$
5: Calculate $\beta$:
   $\beta = \dfrac{\ln\bigl(\text{mean}(\text{hazy}/\text{clear})\bigr)}{\text{mean}(\text{depth\_map})}$
6: Train a linear regression model using $[\eta, \chi]$ as input features and the corresponding $\beta$ values as labels. The model outputs $c_1, c_2, c_3$, where
   $\beta = c_1 + c_2\,\eta + c_3\,\chi + \epsilon$
7: Compute the transmission map using $c_1, c_2, c_3$:
   $t(x) = \exp(-\beta \cdot \text{depth\_map})$, where $\beta = c_1 + c_2\,\eta + c_3\,\chi + \epsilon$
8: Restore the dehazed image:
   $\text{restored\_image} = \dfrac{\text{hazy\_image}}{t(x) + \epsilon}$
9: Adjust brightness and contrast of restored images.
10: Evaluate dehazing quality using performance metrics such as SSIM, PSNR, etc.
11: Save and compare dehazed images with clear and hazy images.
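To make the training loop concrete, the sketch below assembles one (feature, target) sample per image pair following Algorithm 1's printed definitions of $\eta$, $\chi$, and $\beta$; it is an illustrative reading of the listed steps, not the authors' implementation, and the small epsilon terms guard against division by zero.

```python
import cv2
import numpy as np

def training_sample(hazy_bgr, clear_bgr, eps=1e-6):
    """Build one (features, target) pair for the beta regression of Algorithm 1.

    Features follow the algorithm as printed:
      eta = mu_V / (mu_S + mu_V),   chi = mu_V / (2 * mu_S)
    Target: beta = ln(mean(hazy / clear)) / mean(depth_map).
    """
    hsv = cv2.cvtColor(hazy_bgr, cv2.COLOR_BGR2HSV).astype(np.float64) / 255.0
    mu_s, mu_v = hsv[:, :, 1].mean(), hsv[:, :, 2].mean()
    eta = mu_v / (mu_s + mu_v + eps)
    chi = mu_v / (2.0 * mu_s + eps)

    hazy_f = hazy_bgr.astype(np.float64) / 255.0 + eps
    clear_f = clear_bgr.astype(np.float64) / 255.0 + eps
    depth_map = np.abs(clear_f - hazy_f)                                   # step 4
    beta = np.log(np.mean(hazy_f / clear_f)) / (np.mean(depth_map) + eps)  # step 5
    return (eta, chi), beta
```

The collected $(\eta, \chi)$ features and $\beta$ targets can then be passed to the least-squares routine sketched in Section 3.3 to obtain $c_1, c_2, c_3$.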

4. Results

This study proposes a novel dehazing model that combines regression-based depth map estimation with atmospheric scattering coefficients to overcome the limitations discussed above. By concentrating on identifying the haziest regions and utilizing regression-driven estimation, the proposed model seeks to offer a more precise and efficient solution to the problem of haze removal. The model's resilience was examined across a range of haze densities; it maintained high SSIM scores in every case, consistently outperforming alternative approaches, which demonstrates its versatility and efficiency in harsh conditions such as thick fog or haze. The normalized pixel intensity distributions for the clean (red), dehazed (green), and hazy (blue) images are shown in Figure 2. There is a noticeable peak in the lower intensity range of the hazy image, which indicates reduced contrast and visibility. With a discernible shift toward higher intensity values and a wider distribution, the dehazed image shows a notable improvement in contrast and detail recovery. As a reference, the clear image exhibits clear peaks at both low and high intensities, signifying ideal clarity. The effectiveness of the dehazing algorithm in restoring visibility is demonstrated by the close alignment of the dehazed image with the clear image.
As visible in Figure 2, there is an apparent shift to the right in the dehazed image's pixel intensity distribution, especially for brightness levels greater than 200. This change is a direct result of the proposed regression-based dehazing technique, which uses the estimated transmission map and subsequent brightness–contrast adjustments to compensate for haze attenuation and compute the restored image $J(x)$. The elevation in the higher-intensity regions reflects the recovery of genuine scene radiance that was previously attenuated by scattering. It is worth noting that the increase in brightness results from radiometrically consistent restoration rather than overexposure or noise amplification, as evidenced by the high SSIM value and preserved color fidelity.
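The intensity comparison of Figure 2 can be reproduced with a short Matplotlib sketch such as the following; the colors and binning are illustrative choices.

```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

def plot_intensity_histograms(clear, dehazed, hazy):
    """Overlay normalized grayscale intensity histograms for clear, dehazed, and hazy images."""
    for img, color, label in [(clear, "red", "Clear"),
                              (dehazed, "green", "Dehazed"),
                              (hazy, "blue", "Hazy")]:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        hist, bins = np.histogram(gray.ravel(), bins=256, range=(0, 255), density=True)
        plt.plot(bins[:-1], hist, color=color, label=label)
    plt.xlabel("Pixel intensity")
    plt.ylabel("Normalized frequency")
    plt.legend()
    plt.show()
```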
We compare the performance of the suggested dehazing model to that of current techniques in this section. The results are assessed using multiple quantitative measures, including the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), visual information fidelity (VIF), fog aware density evaluator (FADE), color–haze ratio (C–H Ratio), universal image quality index (UIQI), and the average processing time. These parameters collectively evaluate perceptual quality, fidelity, sharpness, color balance, and computational efficiency. Furthermore, we provide visual comparisons to illustrate the qualitative improvements our method achieves. Table 1 illustrates the quantitative results obtained for various dehazing methods, including our proposed model. To ensure a comprehensive and balanced evaluation, five representative baseline methods were selected to encompass the primary methodological categories in single-image dehazing. The Contrast-Limited Adaptive Histogram Equalization (CLAHE) method from paper [51] represents a classical, non-learning image-processing approach widely used for contrast enhancement in degraded imagery. The Dark Channel Prior (DCP) model proposed in paper [44] serves as a physics-based prior baseline, offering a traditional benchmark for transmission and atmospheric light estimation methods. The AOD-Net model from [46] exemplifies a lightweight, end-to-end convolutional framework designed for computational efficiency through reformulation of the atmospheric scattering model. The Feature Fusion Attention Network (FFA-Net) introduced in [49] represents a modern deep learning baseline utilizing attention and feature-fusion mechanisms for enhanced perceptual quality. Lastly, the UTCR-Dehaze model described in [50], a transformer-augmented U-Net variant, illustrates the encoder–decoder architecture commonly adopted in contemporary dehazing research. Together, these baselines collectively represent classical, physics-prior, lightweight, attention-based, and encoder–decoder paradigms, establishing a robust comparative foundation for assessing the effectiveness and generalizability of the proposed regression-based dehazing framework.
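For reference, SSIM and PSNR can be computed with scikit-image as sketched below (the `channel_axis` argument assumes scikit-image ≥ 0.19); the remaining metrics (VIF, FADE, UIQI, C–H ratio) would need their own implementations.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(dehazed, ground_truth):
    """SSIM and PSNR between a dehazed result and its clear ground truth (uint8 RGB arrays)."""
    ssim = structural_similarity(ground_truth, dehazed, channel_axis=-1, data_range=255)
    psnr = peak_signal_noise_ratio(ground_truth, dehazed, data_range=255)
    return ssim, psnr
```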
The findings indicate that our model achieves superior performance across most metrics, evidenced by an SSIM score of 0.9904 and a PSNR of 22.25 dB, while also delivering competitive VIF and FADE values. Importantly, it attains the lowest processing time (0.038 s), underscoring its efficiency. These results demonstrate that the proposed model outperforms state-of-the-art approaches such as Single U-Net, FFA-Net, AOD-Net, and the Dark Channel Prior method, both in quantitative accuracy and computational practicality. Moreover, the quality of the images produced, compared to other standard models, can be clearly seen in Figure 3. The suggested regression-based model achieves a PSNR of 22.25 dB, which is noticeably higher than the compared techniques, as indicated by the quantitative data presented in Table 1. This notable improvement arises from the intrinsic design of the model rather than bias in the dataset or selected image conditions. In contrast to deep learning frameworks that depend on end-to-end feature abstraction, our approach employs regression-driven estimation of the scattering coefficient ( β ) and refinement of the transmission map ( t ( x ) ) to directly model the physical scattering process. By combining brightness–saturation cues with a pixel-level regression, the model successfully restores the true scene radiance without overfitting to specific patterns or textures.
As shown in Table 2, the top 10 regression coefficient triplets ( c 1 , c 2 , c 3 ) obtained during the dehazing model optimization are reported, along with their corresponding quality evaluation metrics (SSIM, PSNR, VIF, FADE, C–H ratio, and UIQI). The results highlight that the optimal coefficients, particularly around c 1 = 100,628.8, c 2 = 100,634.1, and c 3 = 100,637.8, yield the highest SSIM of 0.9904 , thereby demonstrating the stability of regression-based parameter selection for image dehazing.
Furthermore, to verify the robustness of the proposed model, we compared it with the RYF-Net model discussed in [48]. The comparative results across different datasets (i.e., Indoor SOTS [52], D-HAZY [54], OHI (ImageNet) [55], and HazeRD [56]), under consistent experimental conditions, are presented in Table 3. The D-HAZY dataset uses depth data from the Middlebury stereo set to synthesize hazy indoor and outdoor images. The large-scale OHI (ImageNet) dataset extends ImageNet with natural and hazy image pairs for improved generalization. HazeRD contains real foggy images captured under diverse atmospheric conditions, offering realistic complexity for robustness evaluation. Indoor-SOTS, part of the RESIDE benchmark, provides paired clear and hazy indoor images for unbiased dehazing assessment. For a fair comparison, some representative images are presented in Figure 4, and the results of the proposed method are evaluated under the same experimental settings and protocols as those employed in [48]. As evident from the tabulated and visual outcomes, the proposed model outperforms RYF-Net more clearly in indoor environments than in outdoor scenarios. This behavior can be attributed to the proposed model's regression-based estimation of the scattering coefficient and transmission map using brightness–saturation cues from the HSV color space, which makes it highly effective in controlled and uniformly illuminated indoor environments. This physically grounded approach enables precise recovery of fine structural and color details, leading to superior SSIM and PSNR values for indoor datasets. However, in outdoor scenes with complex illumination and depth variations, deep fusion networks like RYF-Net, which integrate multi-scale contextual features, handle non-uniform haze more effectively.

Principal Component Transformation and Surface Visualization

The regression coefficient triplets ( c 1 , c 2 , c 3 ) were reduced to two dimensions using Principal Component Analysis (PCA). PC1 captures the largest variance in the coefficients, while PC2 captures the second largest variance in an orthogonal direction. This transformation preserves the dominant structure of the parameter space while enabling clear visualization. Each coefficient set was projected into the (PC1, PC2) domain, and the corresponding quality metrics (SSIM, PSNR, VIF, FADE, C–H ratio, UIQI) were mapped onto this plane. To obtain smooth surfaces, cubic interpolation was applied over the sample points.
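A sketch of this projection and interpolation is given below, assuming scikit-learn's PCA and SciPy's `griddata`; the grid resolution and function names are illustrative.

```python
import numpy as np
from scipy.interpolate import griddata
from sklearn.decomposition import PCA

def metric_surface(coeff_triplets, metric_values, grid_res=100):
    """Project (c1, c2, c3) triplets onto (PC1, PC2) and cubic-interpolate a metric surface."""
    coeffs = np.asarray(coeff_triplets, dtype=float)      # shape (n, 3)
    pcs = PCA(n_components=2).fit_transform(coeffs)       # shape (n, 2)
    pc1 = np.linspace(pcs[:, 0].min(), pcs[:, 0].max(), grid_res)
    pc2 = np.linspace(pcs[:, 1].min(), pcs[:, 1].max(), grid_res)
    g1, g2 = np.meshgrid(pc1, pc2)
    surface = griddata(pcs, np.asarray(metric_values, dtype=float),
                       (g1, g2), method="cubic")          # smooth metric surface
    return g1, g2, surface   # ready for a 3D surface plot, e.g. Axes3D.plot_surface
```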
The resulting plots (see Figure 5) reveal peak regions indicating the optimal coefficient ranges for dehazing, with interpolated surfaces providing a polynomial-like approximation of metric behavior across parameter variations. The Principal Component (PC) surface visualization of the regression coefficient space shows the variation in the first two principal components (PC1 and PC2) in relation to several image quality assessment measures. Principal Component Analysis (PCA), which captures the largest variance and the second-highest orthogonal variance, was used to project the coefficients ( c 1 , c 2 , c 3 ) from the regression model into this condensed two-dimensional domain. The final 3D surfaces in Figure 5 show distinct peak regions that match the best regression coefficient combinations that produce the maximum VIF, PSNR, and SSIM values. The model’s regression parameters generate a stable and smooth optimization surface instead of a random or noisy distribution, as evidenced by these peaks, which reflect zones of maximal picture quality and structural preservation. These surfaces’ smoothness and predictable patterns support the linearity and predictability of the suggested regression-based methodology. Thus, the visualization accomplishes two goals: (1) it shows that minor changes in ( c 1 , c 2 , c 3 ) do not significantly impact image quality by offering an interpretable mapping between regression parameters and dehazing performance metrics; and (2) it demonstrates the model’s resilience in converging toward globally optimal coefficient values. Thus, the theoretical validity and numerical stability of the suggested regression-driven dehazing method are highlighted in Figure 5.
The proposed model provides higher visual clarity and restores finer details than the alternative techniques. The depth map and the computed atmospheric scattering coefficient are used to correct the haziest areas successfully, and the capacity to manage uneven haze distribution is especially noticeable in difficult scenes. As already noted for Figure 2, the close alignment of the dehazed image with the clear image confirms the algorithm's effectiveness in restoring visibility and image quality across a range of haze densities.
Finally, the proposed model performs well in dehazing tasks, with the highest SSIM of 0.99. Its capacity to deal with non-uniform haze dispersion and harsh conditions makes it a reliable and practical alternative for real-world applications. The ensemble technique and regression-based depth estimates contribute to improved outcomes, making it an invaluable tool for image restoration and clarity improvement.

5. Discussion

The proposed framework, in contrast to current methods that depend exclusively on prior-based assumptions or deep neural networks, employs an analytical method to estimate the scattering coefficient ($\beta$) by making use of brightness–saturation cues ($\eta$, $\chi$) obtained from the HSV color space. This makes it possible to describe variations in haze density across different parts of the image in an adaptive manner, enabling precise reconstruction even when the haze is not uniformly distributed over the image. We conducted a comprehensive comparative evaluation of the proposed regression-based dehazing model against well-established and state-of-the-art techniques, such as DCP, Single-U-Net, and Double-U-Net. Classical approaches such as DCP offered computational simplicity; nevertheless, they demonstrated limited performance in handling dense or spatially uneven haze, which frequently led to color distortion and the loss of small details, reflected in poorer SSIM and PSNR scores. In a similar vein, sophisticated deep architectures such as Single-U-Net and Double-U-Net displayed increased performance through encoder–decoder feature learning and hierarchical refinement; however, these designs still require a significant amount of processing resources and struggle to generalize successfully under extreme meteorological fluctuations. These findings provide evidence that the regression-based architecture is both efficient and robust. The proposed image dehazing model markedly outperforms current techniques, attaining high levels of both quantitative and qualitative performance. It recorded an average SSIM of 0.99, PSNR of 22.25 dB, and VIF of 1.08, in addition to having the shortest processing time of around 0.038 s per image. As a consequence, the model provides a solution that is scalable, interpretable, and hardware-efficient for dehazing applications in physical environments.
Furthermore, existing state-of-the-art models, such as the RYF-Net model [48], rely heavily on large paired datasets, making them less adaptable to unseen haze conditions. Additionally, their multi-stream architecture increases computational cost, limiting real-time use. Moreover, RYF-Net lacks physical interpretability since it does not explicitly estimate parameters such as the scattering coefficient β or atmospheric light A , often leading to color distortions under non-uniform haze conditions. In contrast, our pixel-dehaze approach explicitly models these physical parameters using a regression-based estimation built on brightness–saturation cues, enabling better interpretability and adaptability. It employs localized atmospheric light estimation through quad-tree search and a lightweight least-squares regression for β , reducing complexity while maintaining high image quality. Furthermore, when a reference clear image is available, pixel-dehaze refines depth estimation for improved detail recovery. Overall, our method achieves higher SSIM and PSNR with faster inference, making it more efficient and reliable for real-time applications where RYF-Net’s data and computational demands are limiting. This new methodology is particularly beneficial in important applications, including real-time surveillance, firefighting, and autonomous driving, demonstrating its capacity to tackle visibility issues in various contexts. Moreover, the model’s efficacy and versatility underscore its potential for extensive practical use, providing a dependable solution for dehazing in intricate and safety-sensitive situations.

6. Conclusions

Haze removal is a vital image processing problem with significant implications for real-world applications such as autonomous driving, surveillance, and emergency response. This study compared the performance of various dehazing approaches, including Dark Channel Prior, Single-U-Net, Double-U-Net, and the proposed model that integrates atmospheric scattering coefficients with a regression-based depth estimation strategy. While traditional methods like Dark Channel Prior exhibited limited effectiveness in handling non-uniform haze and restoring fine details, leading to lower SSIM values, advanced architectures such as Single-U-Net and Double-U-Net demonstrated improved results but were still surpassed by the proposed approach. The proposed model achieved the highest SSIM value of 0.99 along with superior PSNR, VIF, and processing time, thereby outperforming state-of-the-art techniques in both quantitative and qualitative evaluations. By effectively leveraging atmospheric scattering coefficients and regression-based depth estimation, it successfully addressed the challenges of non-uniform haze distribution and restored clarity even in heavily degraded regions. These results confirm that the proposed model is a reliable and efficient solution for dehazing, offering consistently high performance and strong practical utility in applications requiring enhanced visibility, including autonomous navigation, security systems, and disaster management.
Future research could extend the model's capability to handle real-time video streams, increasing its utility in dynamic environments. Incorporating other data modalities, such as LiDAR or infrared imagery, may also improve the model's performance under harsh conditions and broaden its applicability. Furthermore, future work will focus on executing the pipeline in real time with minimal processing lag so that it can be used effectively in emergencies.

Author Contributions

Conceptualization, S.K. and V.B.; methodology, P.K.; software, S.K. and J.N.; validation, S.V., V.S. and P.K.; formal analysis, S.K. and J.N.; investigation, J.N.; resources, S.K.; data curation, S.V.; writing—original draft preparation, V.B., S.V. and V.S.; writing—review and editing, S.V., V.S., P.K., S.K. and J.N.; visualization, V.B.; supervision, S.K. and J.N.; project administration, S.K.; funding acquisition, S.K. and J.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original data presented in the study are openly available in the RESIDE Dehaze Datasets repository at https://sites.google.com/view/reside-dehaze-datasets/reside-standard accessed on 30 October 2025.

Acknowledgments

We hereby acknowledge the support of the Computer Science Engineering Department, Thapar Institute of Engineering Technology, Patiala, Punjab, for providing the facility.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ma, J.; Chen, Y.; Wang, W.; Yan, P.; Liu, H.; Yang, S.; Hu, Z.; Lelieveld, J. Strong air pollution causes widespread haze-clouds over China. J. Geophys. Res. Atmos. 2010, 115, D18204. [Google Scholar] [CrossRef]
  2. Radojevic, M. Chemistry of forest fires and regional haze with emphasis on Southeast Asia. Pure Appl. Geophys. 2003, 160, 157–187. [Google Scholar] [CrossRef]
  3. Fletcher, L.M.; Engles, M.; Hammond, B.R., Jr. Visibility through atmospheric haze and its relation to macular pigment. Optom. Vis. Sci. 2014, 91, 1089–1096. [Google Scholar] [CrossRef] [PubMed]
  4. Zhao, X.; Gao, L.; Li, P.; Hao, J. Image dehazing using dark channels with global transmission. In Proceedings of the 2013 6th International Congress on Image and Signal Processing (CISP), Hangzhou, China, 16–18 December 2013; Volume 1, pp. 277–281. [Google Scholar] [CrossRef]
  5. Tang, S.J.; Zeng, H.X.; Lee, Y.H. Fast airlight estimation algorithm in dark channel prior for image dehazing applications. In Proceedings of the 2018 International Conference on Electronics, Information, and Communication (ICEIC), Honolulu, HI, USA, 24–27 January 2018; pp. 1–2. [Google Scholar] [CrossRef]
  6. Megha, P.R.; Sreeni, K.G. Dark Channel Prior based Image Dehazing with Contrast Enhancement. In Proceedings of the 2021 Fourth International Conference on Microelectronics, Signals & Systems (ICMSS), Kollam, India, 18–19 November 2021; pp. 1–6. [Google Scholar] [CrossRef]
  7. Wu, Y.F.; Liaw, C.H.; Lee, Y.H. Down-Sampling Dark Channel Prior of Airlight Estimation for Low Complexity Image Dehazing Chip Design. In Proceedings of the 2022 IEEE International Conference on Consumer Electronics—Taiwan, Taipei, Taiwan, 6–8 July 2022; pp. 3–4. [Google Scholar] [CrossRef]
  8. Hsieh, C.H.; Chen, J.Y.; Zhao, Q. A Modified DCP Based Dehazing Algorithm. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 1779–1784. [Google Scholar] [CrossRef]
  9. Balaji, V.D.; Kumar, A.E.; Shanmuganathan, S.; Devi, S.P. Single Image Dehazing via Transmission Map Estimation using Deep Neural Networks. In Proceedings of the 2021 5th International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 2–4 December 2021; pp. 1731–1736. [Google Scholar] [CrossRef]
  10. Wang, W.; Yuan, X.; Wu, X.; Liu, Y.; Ghanbarzadeh, S. An efficient method for image dehazing. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2241–2245. [Google Scholar] [CrossRef]
  11. Ngo, D.; Lee, S.; Kang, B. Robust single-image haze removal using optimal transmission map and adaptive atmospheric light. Remote Sens. 2020, 12, 2233. [Google Scholar] [CrossRef]
  12. Lai, Y.S.; Chen, Y.L.; Hsu, C.T. Single image dehazing with optimal transmission map. In Proceedings of the Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), Tsukuba, Japan, 11–15 November 2012; pp. 388–391. [Google Scholar]
  13. Shu, Q.; Wu, C.; Xiao, Z.; Liu, R.W. Variational Regularized Transmission Refinement for Image Dehazing. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 2781–2785. [Google Scholar] [CrossRef]
  14. Wei, P.; Wang, X.; Wang, L.; Xiang, J. SIDGAN: Single Image Dehazing without Paired Supervision. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 2958–2965. [Google Scholar] [CrossRef]
  15. Zhang, J.; Chen, C.; Liu, G.; Pang, Z.; Xiang, P.; Pu, W. A Lightweight Image Dehazing Algorithm Based on Deep Learning. In Proceedings of the 2023 2nd International Conference on Robotics, Artificial Intelligence and Intelligent Control (RAIIC), Mianyang, China, 11–13 August 2023; pp. 283–287. [Google Scholar] [CrossRef]
  16. Li, Z.; Xiao, X.; Zhang, N. IDACC: Image Dehazing Avoiding Color Cast Using a Novel Atmospheric Scattering Model. IEEE Access 2024, 12, 70160–70169. [Google Scholar] [CrossRef]
  17. Bao, H.; Zhang, D.; Zhao, X. A single image dehazing method based on sky recognition and average saturation prior. In Proceedings of the 2018 IEEE 4th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 14–16 December 2018; pp. 766–770. [Google Scholar] [CrossRef]
  18. Zhu, Q.; Mai, J.; Shao, L. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior. IEEE Trans. Image Process. 2015, 24, 3522–3533. [Google Scholar] [CrossRef] [PubMed]
  19. Yadav, S.K.; Sarawadekar, K. Single Image Dehazing using Adaptive Gamma Correction Method. In Proceedings of the TENCON 2019–2019 IEEE Region 10 Conference (TENCON), Kochi, India, 17–20 October 2019; pp. 1752–1757. [Google Scholar] [CrossRef]
  20. Ju, M.; Ding, C.; Guo, C.A.; Ren, W.; Tao, D. IDRLP: Image Dehazing Using Region Line Prior. IEEE Trans. Image Process. 2021, 30, 9043–9057. [Google Scholar] [CrossRef] [PubMed]
  21. Wang, W.; Yuan, X.; Wu, X.; Liu, Y. Fast Image Dehazing Method Based on Linear Transformation. IEEE Trans. Multimed. 2017, 19, 1142–1155. [Google Scholar] [CrossRef]
  22. Li, B.; Lai, Y.; Wu, C.; Liu, Y. Fast Single Image Dehazing via Positive Correlation. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 2020–2025. [Google Scholar] [CrossRef]
  23. Li, B.; Zhang, W.; Lu, M. Multiple Linear Regression Haze-Removal Model Based on Dark Channel Prior. In Proceedings of the 2018 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 12–14 December 2018; pp. 307–312. [Google Scholar] [CrossRef]
  24. Hassan, H.; Luo, B.; Xin, Q.; Abbasi, R.; Ahmad, W. Single Image Dehazing from Repeated Averaging Filters. In Proceedings of the 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, 24–26 May 2019; pp. 1053–1056. [Google Scholar] [CrossRef]
  25. Wang, W.; Ji, T.; Wu, X.; Feng, L. Gray projection for single image dehazing. In Proceedings of the 2018 33rd Youth Academic Annual Conference of Chinese Association of Automation (YAC), Nanjing, China, 18–20 May 2018; pp. 1152–1155. [Google Scholar] [CrossRef]
  26. Shen, L.; Zhao, Y.; Peng, Q.; Chan, J.C.W.; Kong, S.G. An Iterative Image Dehazing Method with Polarization. IEEE Trans. Multimed. 2019, 21, 1093–1107. [Google Scholar] [CrossRef]
  27. Bai, H.; Pan, J.; Xiang, X.; Tang, J. Self-Guided Image Dehazing Using Progressive Feature Fusion. IEEE Trans. Image Process. 2022, 31, 1217–1229. [Google Scholar] [CrossRef] [PubMed]
  28. Du, Y.; Li, X. Recursive Deep Residual Learning for Single Image Dehazing. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 843–8437. [Google Scholar] [CrossRef]
  29. Li, Y.; Wang, K.; Xu, N.; Li, Y. Quantitative evaluation for dehazing algorithms on synthetic outdoor hazy dataset. In Proceedings of the 2017 IEEE Visual Communications and Image Processing (VCIP), St. Petersburg, FL, USA, 10–13 December 2017; pp. 1–4. [Google Scholar] [CrossRef]
  30. Yazıcı, B.; Çimtay, Y.; Çetinkaya, B. A New Hyperspectral Multi-Level Synthetic Hazy Image Dataset for Benchmark of Dehazing Methods. In Proceedings of the 2023 13th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS), Athens, Greece, 31 October–2 November 2023; pp. 1–6. [Google Scholar] [CrossRef]
  31. Husain, N.A.; Mohd Rahim, M.S.; Chaudhry, H. Different Haze Image Conditions for Single Image Dehazing Method. In Proceedings of the 2021 IEEE Symposium Series on Computational Intelligence (SSCI), Orlando, FL, USA, 5–7 December 2021; pp. 1–6. [Google Scholar] [CrossRef]
  32. Gao, Z.; Bai, Y. Single image haze removal algorithm using pixel-based airlight constraints. In Proceedings of the 2016 22nd International Conference on Automation and Computing (ICAC), Colchester, UK, 7–8 September 2016; pp. 267–272. [Google Scholar] [CrossRef]
  33. P, V.; K S, A.; Shetty, L.; T M, K.; S S, S. Non Homogeneous Realistic Single Image Dehazing. In Proceedings of the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW), Waikoloa, HI, USA, 3–7 January 2023; pp. 548–555. [Google Scholar] [CrossRef]
  34. Gao, S.; Zhu, J.; Xi, H. Attention-Based Encoder-Decoder Network For Single Image Dehazing. In Proceedings of the 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Shenzhen, China, 5–9 July 2021; pp. 1–6. [Google Scholar] [CrossRef]
  35. Huang, L.Y.; Yin, J.L.; Chen, B.H.; Ye, S.Z. Towards Unsupervised Single Image Dehazing with Deep Learning. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 2741–2745. [Google Scholar] [CrossRef]
  36. Del Gallego, N.P.; Ilao, J.; Cordel, M.; Ruiz, C. A new approach for training a physics-based dehazing network using synthetic images. Signal Process. 2022, 199, 108631. [Google Scholar] [CrossRef]
  37. Ding, X.; Wang, Y.; Zhang, J.; Fu, X. Underwater image dehaze using scene depth estimation with adaptive color correction. In Proceedings of the OCEANS 2017—Aberdeen, Aberdeen, UK, 19–22 June 2017; pp. 1–5. [Google Scholar] [CrossRef]
  38. Pérez, J.; Bryson, M.; Williams, S.B.; Sanz, P.J. Recovering Depth from Still Images for Underwater Dehazing Using Deep Learning. Sensors 2020, 20, 4580. [Google Scholar] [CrossRef] [PubMed]
  39. Chen, X.; Li, Y.; Kong, C.; Dai, L. Unpaired Image Dehazing with Physical-Guided Restoration and Depth-Guided Refinement. IEEE Signal Process. Lett. 2022, 29, 587–591. [Google Scholar] [CrossRef]
  40. Liu, R.W.; Xiong, S.; Wu, H. A Second-Order Variational Framework for Joint Depth Map Estimation and Image Dehazing. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 1433–1437. [Google Scholar] [CrossRef]
  41. Dobre-Baron, R.; Ancuţi, C. Dehazing CNNs Loss Functions Analysis. In Proceedings of the 2024 International Symposium on Electronics and Telecommunications (ISETC), Timisoara, Romania, 7–8 November 2024; pp. 1–4. [Google Scholar] [CrossRef]
  42. Chang, K.Y.; Li, K.L.; Sheu, M.H.; Wang, S.H. DMCGF Dehazing Neural Network Design for Edge-AI Implementation. In Proceedings of the 2024 IEEE Cyber Science and Technology Congress (CyberSciTech), Boracay Island, Philippines, 5–8 November 2024; pp. 390–393. [Google Scholar] [CrossRef]
  43. Jin, J.Y.; Cui, Y.N.; Ren, J.; Lv, Y.X.; Hu, Z.J. Dust Weather Image Clarity Algorithm Based on Color Adjustment and Dark Channel Dehazing. In Proceedings of the 2024 International Conference on Cyber-Physical Social Intelligence (ICCSI), Doha, Qatar, 8–12 November 2024; pp. 1–6. [Google Scholar] [CrossRef]
  44. Yang, D. A Dehazing Method Based on Fog Density Fusion of Dark and Bright Channel Images. In Proceedings of the 2024 9th International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), Okinawa, Japan, 21–23 November 2024; Volume 9, pp. 804–808. [Google Scholar] [CrossRef]
  45. Zhang, Z. ZRD-Net: A Zero-shot Low-Light Image Dehazing Network. In Proceedings of the 2024 9th International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), Okinawa, Japan, 21–23 November 2024; Volume 9, pp. 158–163. [Google Scholar] [CrossRef]
  46. Zheng, T.; Xu, T.; Li, X.; Zhao, X.; Zhao, F.; Zhang, Y. Improved AOD-Net Dehazing Algorithm for Target Image. In Proceedings of the 2024 5th International Conference on Computer Engineering and Intelligent Control (ICCEIC), Guangzhou, China, 11–13 October 2024; pp. 333–337. [Google Scholar] [CrossRef]
  47. Hafidh, F.; Shidik, G.F.; Syukur, A.; Andono, P.N.; Soeleman, M.A. Advanced Dehazing of Single Satellite Imagery Using Enhanced Dark Channel Prior and Refined Transmission. In Proceedings of the 2024 International Seminar on Application for Technology of Information and Communication (iSemantic), Semarang, Indonesia, 21–22 September 2024; pp. 421–427. [Google Scholar] [CrossRef]
  48. Dudhane, A.; Murala, S. RYF-Net: Deep Fusion Network for Single Image Haze Removal. IEEE Trans. Image Process. 2020, 29, 628–640. [Google Scholar] [CrossRef]
  49. Jain, A. Feature Fusion Attention Network with CycleGAN for Image Dehazing, De-Snowing and De-Raining. arXiv 2025, arXiv:2503.06107. [Google Scholar]
  50. Li, C.; Zhang, X.; Wang, H.; Shao, Z.; Ma, L. UTCR-Dehaze: U-Net and transformer-based cycle-consistent generative adversarial network for unpaired remote sensing image dehazing. Eng. Appl. Artif. Intell. 2025, 158, 111385. [Google Scholar] [CrossRef]
  51. Majid Mohammed, I.; Ashidi Mat Isa, N. Contrast Limited Adaptive Local Histogram Equalization Method for Poor Contrast Image Enhancement. IEEE Access 2025, 13, 62600–62632. [Google Scholar] [CrossRef]
  52. Li, B.; Ren, W.; Fu, D.; Tao, D.; Feng, D.; Zeng, W.; Wang, Z. Benchmarking Single-Image Dehazing and Beyond. IEEE Trans. Image Process. 2018, 28, 492–505. [Google Scholar] [CrossRef] [PubMed]
  53. Li, B.; Ren, W.; Wang, Z. RESIDE-Standard: Single Image Dehazing Benchmark Dataset. Available online: https://sites.google.com/view/reside-dehaze-datasets/reside-standard (accessed on 20 September 2025).
  54. Ancuti, C.; Ancuti, C.O.; De Vleeschouwer, C. D-hazy: A dataset to evaluate quantitatively dehazing algorithms. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2226–2230. [Google Scholar]
  55. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.-F. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  56. Zhang, Y.; Ding, L.; Sharma, G. Hazerd: An outdoor scene dataset and benchmark for single image dehazing. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3205–3209. [Google Scholar]
Figure 1. Proposed image dehazing framework illustrating preprocessing, feature extraction, model training, and post-processing steps to generate a refined dehazed image.
Figure 2. Comparison of pixels between hazy images, ground truth (GT), and the output from our model.
Figure 3. (a) Hazy image; (b) ground truth; (c) proposed model; (d) FFA-Net; (e) Dark Channel Prior; (f) CLAHE; (g) Single U-Net; (h) AOD-Net.
Figure 4. Comparison between the results of (Row 2) the proposed model and (Row 3) the RYF-Net model for (Row 1) hazy images. Red rectangular boxes indicate the bounding regions used for the dehazing process.
Figure 5. Three-dimensional surface plots illustrating the variation in different image quality assessment metrics with respect to principal components (PC1 and PC2). The metrics include (a) SSIM, (b) PSNR, (c) VIF, (d) FADE, (e) C–H Ratio, and (f) UIQI.
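One plausible way to generate surfaces such as those in Figure 5 is to project per-image descriptors onto two principal components and then interpolate each quality metric over the resulting PC1–PC2 grid. The sketch below is a minimal, hypothetical illustration of that procedure only; the descriptors, scores, and library choices are assumptions and do not reproduce the authors' plotting code.

```python
# Hypothetical sketch: a metric surface over PC1/PC2 in the spirit of Figure 5(a).
# Descriptors and SSIM scores are random placeholders, not the paper's data.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))          # placeholder per-image descriptors
ssim_scores = rng.uniform(0.9, 1.0, size=200)  # placeholder per-image SSIM values

# Reduce descriptors to two principal components.
pcs = PCA(n_components=2).fit_transform(features)

# Interpolate the scattered SSIM values onto a regular PC1-PC2 grid.
xi = np.linspace(pcs[:, 0].min(), pcs[:, 0].max(), 50)
yi = np.linspace(pcs[:, 1].min(), pcs[:, 1].max(), 50)
grid_x, grid_y = np.meshgrid(xi, yi)
grid_z = griddata(pcs, ssim_scores, (grid_x, grid_y), method="cubic")

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(grid_x, grid_y, grid_z, cmap="viridis")
ax.set_xlabel("PC1")
ax.set_ylabel("PC2")
ax.set_zlabel("SSIM")
plt.show()
```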
Table 1. Comparison of state-of-the-art models.
Metric | CLAHE [51] | AOD-Net [46] | Dark Channel Prior [44] | FFA [49] | Single U-Net [50] | Proposed
SSIM | 0.9509 | 0.9686 | 0.9251 | 0.9627 | 0.9790 | 0.9904
PSNR (dB) | 15.3552 | 17.1690 | 14.0520 | 16.0091 | 16.4650 | 22.2581
VIF | 0.7470 | 0.3928 | 1.0443 | 0.3258 | 0.2654 | 1.0883
FADE | 100.8023 | 76.5152 | 32.5935 | 51.4479 | 63.6782 | 63.8781
C–H Ratio | 0.0152 | 0.0134 | 0.0910 | 0.0570 | 0.0219 | 0.0353
UIQI | 0.8460 | 0.8671 | 0.6988 | 0.8210 | 0.9589 | 0.9336
Processing time (s) | 0.1963 | 0.3866 | 0.1633 | 0.3474 | 0.1853 | 0.0379
Ranking | 5 | 2 | 6 | 4 | 3 | 1
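As a point of reference for the full-reference scores in Table 1, the following minimal sketch shows how SSIM and PSNR can be computed for one dehazed/ground-truth pair. The file names and helper function are placeholders, and scikit-image is assumed purely for illustration; VIF, FADE, the C–H ratio, and UIQI would require separate implementations and are not shown.

```python
# Hypothetical sketch: computing SSIM and PSNR for one dehazed/ground-truth pair,
# as reported in Table 1. Paths and the helper name are illustrative placeholders.
from skimage import io, img_as_float
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_pair(dehazed_path: str, gt_path: str) -> dict:
    # Load both images as floats in [0, 1] so data_range=1.0 is valid below.
    dehazed = img_as_float(io.imread(dehazed_path))
    gt = img_as_float(io.imread(gt_path))

    # channel_axis=-1 tells scikit-image that the last axis holds RGB channels.
    ssim = structural_similarity(gt, dehazed, channel_axis=-1, data_range=1.0)
    psnr = peak_signal_noise_ratio(gt, dehazed, data_range=1.0)
    return {"SSIM": ssim, "PSNR": psnr}

if __name__ == "__main__":
    print(evaluate_pair("dehazed_0001.png", "gt_0001.png"))  # placeholder paths
```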
Table 2. Top 10 regression coefficient sets and corresponding quality metrics.
Index | c1 | c2 | c3 | SSIM | PSNR | VIF | FADE | C–H Ratio | UIQI
1 | 100,628.80 | −100,634.09 | −100,637.84 | 0.9904 | 22.2581 | 1.0883 | 63.8781 | 0.0353 | 0.9336
2 | 100,500.55 | −100,610.22 | −100,620.75 | 0.9882 | 21.9075 | 1.0543 | 65.4412 | 0.0379 | 0.9288
3 | 100,712.12 | −100,640.33 | −100,655.91 | 0.9867 | 21.5032 | 1.0721 | 66.1290 | 0.0367 | 0.9264
4 | 100,589.67 | −100,625.44 | −100,639.72 | 0.9855 | 20.9987 | 1.0435 | 67.2104 | 0.0381 | 0.9221
5 | 100,675.90 | −100,645.89 | −100,659.45 | 0.9839 | 20.7621 | 1.0327 | 68.0021 | 0.0348 | 0.9197
6 | 100,640.31 | −100,628.11 | −100,652.33 | 0.9824 | 20.5032 | 1.0214 | 69.5543 | 0.0337 | 0.9154
7 | 100,693.45 | −100,651.28 | −100,670.14 | 0.9812 | 20.1889 | 1.0109 | 70.2389 | 0.0321 | 0.9128
8 | 100,602.78 | −100,622.54 | −100,649.28 | 0.9798 | 19.9634 | 0.9987 | 71.0544 | 0.0315 | 0.9102
9 | 100,688.22 | −100,648.77 | −100,672.05 | 0.9784 | 19.7120 | 0.9865 | 72.1980 | 0.0302 | 0.9073
10 | 100,645.89 | −100,633.90 | −100,660.42 | 0.9769 | 19.4085 | 0.9743 | 73.1297 | 0.0296 | 0.9051
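Table 2 lists the ten best-performing coefficient triplets (c1, c2, c3). The paper's search procedure is not reproduced here, but one generic way to obtain such a ranking is to evaluate every candidate triplet on a validation set and keep the ten with the highest mean SSIM, as in the hypothetical sketch below; `dehaze_with_coefficients` is a placeholder for the regression-based restoration step, not the authors' implementation.

```python
# Hypothetical sketch: ranking candidate coefficient sets (c1, c2, c3) by mean
# validation SSIM, in the spirit of Table 2. The dehazing step is abstracted
# behind a user-supplied callable.
from typing import Callable, Sequence

import numpy as np
from skimage.metrics import structural_similarity

def rank_coefficient_sets(
    candidates: Sequence[tuple[float, float, float]],
    hazy_images: Sequence[np.ndarray],
    gt_images: Sequence[np.ndarray],
    dehaze_with_coefficients: Callable[[np.ndarray, tuple[float, float, float]], np.ndarray],
    top_k: int = 10,
):
    """Return the top_k coefficient triplets sorted by mean SSIM on the validation pairs."""
    scored = []
    for coeffs in candidates:
        ssims = [
            structural_similarity(
                gt,
                dehaze_with_coefficients(hazy, coeffs),
                channel_axis=-1,
                data_range=1.0,
            )
            for hazy, gt in zip(hazy_images, gt_images)
        ]
        scored.append((float(np.mean(ssims)), coeffs))
    # Highest mean SSIM first.
    scored.sort(key=lambda item: item[0], reverse=True)
    return scored[:top_k]
```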
Table 3. Comparison of proposed model vs. RYF-Net.
Method | SSIM | MSE | PSNR (dB) | Dataset
RYF-Net [48] | 0.8621 | 0.0065 | 25.4301 | D-HAZY
Proposed Model | 0.8917 | 0.0073 | 27.0489 | D-HAZY
RYF-Net [48] | 0.9121 | 0.0060 | 24.1460 | OHI (ImageNet)
Proposed Model | 0.8975 | 0.0076 | 22.1674 | OHI (ImageNet)
RYF-Net [48] | 0.6525 | 0.0252 | 16.8220 | HazeRD
Proposed Model | 0.6489 | 0.0214 | 14.6132 | HazeRD
RYF-Net [48] | 0.8716 | 0.0088 | 21.4375 | Indoor-SOTS
Proposed Model | 0.9055 | 0.0059 | 24.2351 | Indoor-SOTS
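Because Table 3 reports both MSE and PSNR per dataset, it is worth noting that dataset-level PSNR is usually the mean of per-image PSNR values, which is not the same number obtained by converting the mean MSE; the two columns therefore need not satisfy the single-image relation PSNR = 10·log10(MAX²/MSE). The sketch below, which assumes normalized images in [0, 1], computes both aggregates for comparison.

```python
# Hypothetical sketch: aggregating MSE and PSNR over a dataset, as in Table 3.
# Averaging per-image PSNR (the common reporting convention) differs from
# converting the averaged MSE, so both quantities are shown side by side.
import numpy as np

def dataset_scores(dehazed: list[np.ndarray], gt: list[np.ndarray], max_val: float = 1.0) -> dict:
    per_image_mse = [float(np.mean((d - g) ** 2)) for d, g in zip(dehazed, gt)]
    per_image_psnr = [10.0 * np.log10(max_val**2 / mse) for mse in per_image_mse]
    mean_mse = float(np.mean(per_image_mse))
    return {
        "mean_MSE": mean_mse,
        "mean_PSNR": float(np.mean(per_image_psnr)),              # typically what is reported
        "PSNR_from_mean_MSE": 10.0 * np.log10(max_val**2 / mean_mse),  # generally a different value
    }
```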
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
