Article

Synthetic Fog Generation Using High-Performance Dehazing Networks for Surveillance Applications

Heekwon Lee, Byeongseon Park, Yong-Kab Kim and Sungkwan Youm
Department of Information & Communication Engineering, Wonkwang University, Iksan 54538, Republic of Korea
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(12), 6503; https://doi.org/10.3390/app15126503
Submission received: 27 February 2025 / Revised: 31 May 2025 / Accepted: 5 June 2025 / Published: 9 June 2025

Abstract

This research addresses visibility challenges in surveillance systems under foggy conditions through a novel synthetic fog generation method leveraging the GridNet dehazing architecture. Our approach uniquely reverses GridNet, originally developed for fog removal, to synthesize realistic foggy images. The proposed Fog Generator Model incorporates perceptual and dark channel consistency losses to enhance fog realism and structural consistency. Comparative experiments on the O-HAZE dataset demonstrate that dehazing models trained on our synthetic fog outperform those trained on conventional methods, achieving superior Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) scores. These findings confirm that integrating high-performance dehazing networks into fog synthesis improves the realism and effectiveness of fog removal solutions, offering significant benefits for real-world surveillance applications.

1. Introduction

The integration of fog removal capabilities into fixed surveillance cameras is a critical advancement for enhancing safety, security, and public order [1]. In environments where visibility is frequently compromised by atmospheric conditions such as fog, the ability to maintain clear and reliable imagery becomes paramount. Surveillance systems, tasked with monitoring critical infrastructure, public spaces, and transportation networks, must operate effectively under adverse weather conditions to ensure continuous protection and situational awareness [2]. Fog, as a natural phenomenon, obscures visual details, reduces contrast, and hampers the detection of objects or individuals, posing significant risks to security operations. Consequently, equipping surveillance cameras with robust fog removal functionalities is not merely an enhancement but a necessity for modern safety systems [3,4].
Extensive research has been conducted on fog removal networks, yielding remarkable progress in restoring visibility from degraded images [5,6]. Techniques ranging from traditional approaches like the Dark Channel Prior (DCP) [7] to advanced deep learning methods such as convolutional neural networks (CNNs) and GridNet architectures [8] have demonstrated substantial improvements in dehazing performance. These studies have collectively contributed to a deeper understanding of image restoration under foggy conditions, achieving impressive results in various scenarios [9,10,11]. However, a significant challenge persists in the domain of fog removal research: the acquisition of paired datasets—images of the same scene captured both with and without fog. For effective training and evaluation of dehazing algorithms, datasets must include both indoor and outdoor scenes with synthetically or naturally induced fog [12]. In practice, obtaining such paired images is exceedingly difficult due to the variability of environmental conditions and the impracticality of controlling natural fog in real-world settings. Photographing identical scenes under foggy and clear conditions simultaneously is a logistical challenge, often rendering traditional data collection methods insufficient.
To address this limitation, the generation of synthetic fog becomes a pivotal step in advancing fog removal research [13]. While numerous studies have focused on dehazing [14,15], fewer have explored the equally critical task of fog synthesis [16]. The ability to artificially create fog in a controlled manner enables researchers to simulate diverse foggy conditions, thereby facilitating the development and validation of dehazing algorithms. Existing fog generation techniques include simplistic approaches, such as uniform fog application [17], as well as more sophisticated methods like dark channel-based synthesis [7] and distance-aware (depth-based) fog generation [18]. Although these methods provide a viable means to simulate fog, they often lack the realism and adaptability required to mirror natural fog distributions accurately. Moreover, synthetic fog generated through rudimentary techniques may not adequately challenge the robustness of modern dehazing networks, potentially leading to overfitting or suboptimal performance when applied to real-world foggy scenes.
Among the various approaches to fog synthesis, the development of a dedicated fog generation network emerges as a promising solution [16]. Unlike traditional methods that rely on heuristic assumptions or physical models, a neural network-based fog generator can learn complex patterns of fog distribution directly from data, producing more realistic and contextually relevant foggy images. A key insight driving this research is that the efficacy of a fog generation network is closely tied to the architecture of the dehazing network it aims to complement [8]. Specifically, leveraging a high-performing dehazing network as the backbone of the fog generation process ensures that the synthesized fog aligns with the characteristics that advanced dehazing algorithms are designed to address. This symbiotic relationship between fog generation and removal enhances the overall performance of the system, as the generated fog can effectively test and validate the limits of the dehazing network.
In this study, we propose and validate a novel approach to fog generation and removal tailored for surveillance applications. Our research focuses on training a fog generation network, utilizing a state-of-the-art dehazing neural network as its foundation [8], to produce synthetic foggy images. These images are then used to evaluate and compare the performance of various fog generation techniques—including simple fog [17], dark channel-based fog [7], and depth-aware fog [18]—against our proposed network-based method [16]. Unlike existing fog generation methods based on physical models or depth priors, our approach uniquely leverages a high-performance dehazing network (GridNet) in reverse to synthesize fog that is better aligned with modern dehazing pipelines. This network-driven synthesis strategy produces more realistic and structurally consistent fog patterns, enabling more effective evaluation and improvement of dehazing performance. Our experiments demonstrate that the proposed fog generation network, informed by the best-performing dehazing architecture, achieves superior results, as evidenced by quantitative metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). Through this comparative analysis, we confirm that a network-driven fog synthesis approach not only enhances the realism of generated fog but also maximizes the effectiveness of fog removal, offering a robust solution for real-world surveillance systems [1].
The main contributions of this paper are as follows:
  • We propose a neural network-based fog generation model built upon the GridNet architecture, enabling realistic and context-aware synthetic fog for surveillance imagery.
  • We design a training strategy that incorporates perceptual and dark channel consistency losses to enhance the visual quality of the generated fog.
  • We perform comparative evaluations with existing fog generation techniques using standardized dehazing networks and demonstrate superior restoration performance in terms of PSNR and SSIM.
  • We validate the practical applicability of our approach in real-world surveillance scenarios, highlighting its effectiveness under adverse weather conditions.

2. Related Work

Fog generation and removal have been extensively studied in computer vision to address visibility degradation in adverse weather conditions [5]. These efforts are pivotal for applications such as surveillance systems [1], autonomous driving [13], and outdoor imaging [17]. This section reviews key methodologies for fog removal and generation, focusing on GridNet [8], Simple Fog Generation [17], Depth-Aware Fog Generation [18], and Dark Channel Prior-based approaches [7]. Each method offers unique strengths and limitations, which we discuss in the context of their applicability to real-world scenarios.

2.1. GridNet for Fog Removal

GridNet is a sophisticated deep learning architecture designed for image restoration tasks, including fog removal. Introduced by Liu et al. [8], GridNet leverages a grid-like structure of convolutional layers to process multi-scale features effectively. The network employs Residual Dense Blocks (RDBs) to enhance feature extraction and combines downsampling and upsampling operations to capture both local and global contexts. This hierarchical design allows GridNet to handle varying fog densities and complex scene structures, making it particularly suitable for surveillance applications where image clarity is critical [4,19].

2.2. Simple Fog Generation

Simple Fog Generation is a foundational approach to simulating fog in images, often based on uniform fog application techniques [17]. This method applies a uniform fog layer using a simplified physical model, making it computationally efficient and easy to implement. It has been used to generate synthetic foggy datasets for training dehazing algorithms [13]. However, its uniform application of fog overlooks scene depth variations and atmospheric heterogeneity, leading to unrealistic results in complex outdoor environments.
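To make this concrete, a minimal Python sketch of uniform fog application under the standard scattering model is given below; the function name and default values are illustrative assumptions, not the exact procedure of [17].

import numpy as np

def apply_uniform_fog(clear: np.ndarray, t: float = 0.6, A: float = 0.9) -> np.ndarray:
    """Blend a clear RGB image (float values in [0, 1]) with a uniform fog layer.

    Implements I = J * t + A * (1 - t) with a single scalar transmission t,
    so every pixel is fogged equally regardless of scene depth.
    """
    foggy = clear * t + A * (1.0 - t)
    return np.clip(foggy, 0.0, 1.0)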

2.3. Depth-Aware Fog Generation

Depth-Aware Fog Generation enhances the realism of synthetic fog by incorporating scene depth information into the fog synthesis process. Proposed by Hwang et al. [18], this method modulates the transmission map based on pixel-wise depth values, better simulating the natural phenomenon of fog where visibility decreases with distance. It has been shown to improve the performance of dehazing networks by providing more challenging and realistic training data [13].
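A common realization of this idea attenuates the transmission exponentially with depth (Beer-Lambert); the sketch below illustrates that standard formulation and is an assumption on our part rather than the exact algorithm of [18].

import numpy as np

def apply_depth_aware_fog(clear: np.ndarray, depth: np.ndarray,
                          beta: float = 1.0, A: float = 0.9) -> np.ndarray:
    """Synthesize fog whose density grows with scene depth.

    clear: HxWx3 image in [0, 1]; depth: HxW map normalized to [0, 1].
    Transmission follows t(x) = exp(-beta * d(x)), so distant pixels
    receive more atmospheric light A and less scene radiance.
    """
    t = np.exp(-beta * depth)[..., None]  # HxWx1, broadcasts over RGB
    foggy = clear * t + A * (1.0 - t)
    return np.clip(foggy, 0.0, 1.0)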

2.4. Dark Channel Prior for Fog Removal and Generation

The Dark Channel Prior (DCP), as refined for fog removal by Sun et al. [7], is a seminal method that exploits the observation that, in most non-sky patches of a clear image, at least one color channel has a very low intensity. Beyond fog removal, the DCP has been adapted for fog generation by reversing the process [7], producing realistic fog effects, particularly in scenes with strong depth cues.
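The dark channel itself is simple to compute: take the minimum over color channels, then over a local patch. The sketch below shows this textbook construction; the patch size and function name are illustrative.

import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image: np.ndarray, patch: int = 15) -> np.ndarray:
    """Dark channel of an HxWx3 image in [0, 1].

    Clear outdoor images yield values near zero almost everywhere,
    which is the statistical observation the DCP exploits.
    """
    per_pixel_min = image.min(axis=2)                 # minimum over channels
    return minimum_filter(per_pixel_min, size=patch)  # minimum over local patch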
In contrast to these methods, our approach integrates a high-performance dehazing network (GridNet) directly into the fog generation process. This integration enables the generation of synthetic fog that closely reflects the degradation characteristics targeted by modern dehazing networks. Additionally, our method employs perceptual and dark channel consistency losses during training—techniques rarely explored in prior fog synthesis studies—resulting in improved realism and downstream restoration performance.
The reviewed methods—GridNet [8], Simple Fog Generation [17], Depth-Aware Fog Generation [18], and Dark Channel Prior [7]—represent a spectrum of approaches to fog removal and generation. A notable research gap exists in leveraging high-performance dehazing networks for fog generation [16], which could unify the strengths of these approaches.

3. Fog Generator Model

In this section, we present the detailed design and implementation of our proposed Fog Generator Model, a neural network-based approach for synthesizing realistic foggy images. Leveraging the architectural strengths of GridNet [8], our Fog Generator integrates advanced feature extraction and fog synthesis mechanisms.
The overall architecture of the proposed Fog Generator Model is illustrated in Figure 1. This figure provides a high-level overview of how the model synthesizes foggy images from clear inputs, highlighting the use of a GridNet-based backbone, attention modules, and a multi-scale structure. This visual summary helps guide the reader through the subsequent detailed descriptions of each module in this section.


3.1. Model Architecture

The proposed Fog Generator Model is built upon the GridNet framework [8], which was originally designed for image dehazing. In this study, we adapt the GridNet architecture for the inverse task of fog synthesis. The model comprises three primary components: the GridNet backbone, a transmission estimation network, and fog parameter layers.
The network takes a haze-free clear image J(x) as input and generates a synthetic foggy image I(x) by simulating atmospheric scattering effects. This process is guided by learned fog features and a predicted transmission map.
The fog synthesis process is modeled as follows:
I(x) = J(x) · t(x) + A · (1 − t(x)) + α · F(x)        (1)
In this formulation, J(x) denotes the scene radiance, representing the haze-free ground-truth image, while I(x) is the synthesized foggy image generated by the model. The transmission map is represented by t(x), and A denotes the global atmospheric light. The learned fog feature map F(x), extracted through the GridNet backbone, captures complex fog structures, and α is a blending factor (set to 0.1 in our experiments) that controls the fog density.
This extended formulation builds upon the conventional atmospheric scattering model by integrating learned, data-driven fog features alongside the traditional transmission and atmospheric light components. As a result, it enhances both the realism and diversity of synthetic fog patterns beyond what heuristic methods can achieve.
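For illustration, Equation (1) reduces to a few lines of PyTorch; the tensor shapes and the final clamp to [0, 1] are our assumptions rather than details taken from the paper's code.

import torch

def synthesize_fog(J: torch.Tensor, t: torch.Tensor, A: torch.Tensor,
                   F: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """Compose a foggy image from the extended scattering model.

    J: clear image (B, 3, H, W) in [0, 1]; t: transmission map (B, 1, H, W);
    A: atmospheric light broadcastable to (B, 3, 1, 1); F: learned fog
    feature map (B, 3, H, W) from the GridNet backbone; alpha: blending
    factor weighting the learned fog term.
    """
    I = J * t + A * (1.0 - t) + alpha * F
    return I.clamp(0.0, 1.0)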

3.1.1. GridNet Backbone

The backbone of our Fog Generator is a modified GridNet, consisting of a grid-like structure with height H = 3 and width W = 6. It employs Residual Dense Blocks (RDBs) with n = 4 dense layers and a growth rate of 16, enabling rich feature extraction across multiple scales. The network begins with an input convolution layer (conv_in) that maps the three-channel RGB image to 16 feature channels, followed by a series of RDBs and downsampling/upsampling modules. The output is processed through an output RDB and a final convolution layer (conv_out) to generate the fog features F(x). This hierarchical design ensures that both local details and global fog patterns are captured effectively.
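A minimal PyTorch sketch of a Residual Dense Block with these hyperparameters (four dense layers, growth rate 16) follows; the channel counts and 1 × 1 fusion layout are assumptions in the style of GridDehazeNet, not the authors' exact implementation.

import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Dense 3x3 convolutions, 1x1 feature fusion, and a residual skip."""

    def __init__(self, channels: int = 16, growth_rate: int = 16, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth_rate, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            in_ch += growth_rate  # each layer sees all previous feature maps
        self.fusion = nn.Conv2d(in_ch, channels, kernel_size=1)  # local feature fusion

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return x + self.fusion(torch.cat(features, dim=1))  # residual connection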

3.1.2. Transmission Estimation Network

To model the transmission map t(x), we introduce a shallow convolutional neural network comprising three layers:
  • A 3 × 3 convolution with 32 filters and ReLU activation;
  • A 3 × 3 convolution with 16 filters and ReLU activation;
  • A 3 × 3 convolution with 1 filter followed by a sigmoid activation.
This network takes the clear image J(x) as input and outputs a single-channel transmission map constrained between 0 and 1. To enhance realism, we apply a depth-dependent modulation:
t′(x) = t(x) · (1 − 0.4 · G_y(x)),        (2)
where G_y(x) is a vertical gradient ranging from 0 (top) to 1 (bottom), simulating the natural increase in fog density with distance.
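A direct PyTorch rendering of this subnetwork, including the modulation of Equation (2), might look as follows; the batch layout and the linspace-based construction of G_y are our assumptions.

import torch
import torch.nn as nn

class TransmissionNet(nn.Module):
    """Shallow CNN predicting a transmission map in (0, 1) from a clear image."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=3, padding=1), nn.Sigmoid())

    def forward(self, J: torch.Tensor) -> torch.Tensor:
        t = self.net(J)  # (B, 1, H, W), sigmoid keeps values in (0, 1)
        # Vertical gradient G_y: 0 at the top row, 1 at the bottom row.
        h = J.shape[2]
        gy = torch.linspace(0.0, 1.0, h, device=J.device).view(1, 1, h, 1)
        return t * (1.0 - 0.4 * gy)  # depth-dependent modulation of Equation (2)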

3.1.3. Fog Parameter Layers

The atmospheric light A and fog strength are modeled as learnable parameters. The base fog color is initialized as A_base = [0.8, 0.8, 0.9] with a variance A_var = [0.1, 0.1, 0.1], adjusted via a tanh function during training. The fog intensity is controlled by a scalar parameter β, initialized at 0.5 and fine-tuned to match target fog levels (e.g., 0.2 in our experiments). These parameters allow the model to adaptively synthesize fog with varying characteristics.
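One plausible parameterization consistent with this description is sketched below; the exact form A = A_base + A_var · tanh(w) and the placement of β are our assumptions.

import torch
import torch.nn as nn

class FogParameters(nn.Module):
    """Learnable atmospheric light and scalar fog strength."""

    def __init__(self):
        super().__init__()
        self.register_buffer("A_base", torch.tensor([0.8, 0.8, 0.9]))
        self.register_buffer("A_var", torch.tensor([0.1, 0.1, 0.1]))
        self.A_weight = nn.Parameter(torch.zeros(3))  # drives the tanh adjustment
        self.beta = nn.Parameter(torch.tensor(0.5))   # fog strength, tuned in training

    def atmospheric_light(self) -> torch.Tensor:
        # tanh keeps A within A_base +/- A_var; shape broadcasts over image tensors.
        A = self.A_base + self.A_var * torch.tanh(self.A_weight)
        return A.view(1, 3, 1, 1)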

3.2. Training Methodology

The Fog Generator is trained on paired clear and foggy images from the O-HAZE dataset [12], augmented with synthetic foggy images generated via traditional methods [7,17,18]. We employ a composite loss function:
L = L_L1 + 0.04 · L_perceptual + 0.1 · L_dark,        (3)
where
  • L_L1 = ‖I_pred(x) − I_gt(x)‖₁ is the L1 loss between the predicted and ground-truth foggy images;
  • L_perceptual = MSE(φ(I_pred), φ(I_gt)) uses VGG-16 features up to layer 16 for perceptual similarity;
  • L_dark = MSE(min(I_pred), min(I_gt)) enforces dark channel consistency.
Training is conducted over 100 epochs using the Adam optimizer with a learning rate of 10⁻⁴, on an NVIDIA GPU with CUDA support. The model is saved periodically, with the final weights stored as fog_generator.pth.
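The composite loss of Equation (3) can be sketched in PyTorch as follows, assuming images normalized to [0, 1]; the use of torchvision's pretrained VGG-16 weights and the per-pixel channel minimum as the dark channel are our assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class FogGenLoss(nn.Module):
    """L = L1 + 0.04 * perceptual + 0.1 * dark channel consistency."""

    def __init__(self):
        super().__init__()
        # Frozen VGG-16 features up to layer 16 for the perceptual term.
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16]
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg.eval()

    def forward(self, pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
        l1 = F.l1_loss(pred, gt)
        perceptual = F.mse_loss(self.vgg(pred), self.vgg(gt))
        # Per-pixel minimum over the channel dimension approximates the dark channel.
        dark = F.mse_loss(pred.min(dim=1).values, gt.min(dim=1).values)
        return l1 + 0.04 * perceptual + 0.1 * dark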

3.3. Model Visualization

Figure 2 illustrates the architecture of the Fog Generator Model. The diagram highlights the flow from the input clear image through the GridNet backbone, transmission estimation, and fog parameter integration, culminating in the foggy output.
The Fog Generator Model integrates the strengths of GridNet’s multi-scale processing with a physically inspired synthesis process, offering a versatile tool for generating realistic foggy images. Unlike traditional methods, it adapts to scene content through learned features, addressing the uniformity limitations of Simple Fog Generation and the depth dependency of depth-aware methods. The use of GridNet ensures compatibility with high-performance dehazing networks, aligning with our research goal of optimizing fog removal in surveillance systems.

4. Experiments and Results Analysis

This section outlines the experimental setup and results analysis conducted to evaluate our proposed Fog Generator Model and its impact on fog removal performance using the O-HAZE dataset. We detail the experimental procedure, including dataset description, training of the fog generation network, fog synthesis, fog removal training, and performance evaluation. The results are analyzed through quantitative metrics and visual comparisons, with key findings illustrated using figures and tables.

4.1. Experimental Procedure

4.1.1. O-HAZE Dataset

The O-HAZE dataset [12], a benchmark for outdoor dehazing, comprises 45 pairs of real hazy and corresponding haze-free images captured in diverse outdoor scenes. Each pair consists of a foggy image and its ground-truth clear counterpart, making it ideal for training and evaluating both fog generation and removal models. The images vary in fog density, scene complexity, and lighting conditions, providing a robust testbed for our experiments. For training, we split the dataset into 36 pairs (80%) for training and 9 pairs (20%) for evaluation, ensuring a balanced representation of foggy conditions. The training set is augmented with synthetic foggy images generated by our proposed method and baseline techniques to enhance model robustness.
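A minimal sketch of a reproducible 80/20 split is shown below; the file-naming pattern is illustrative rather than the dataset's exact layout.

import random

# 45 O-HAZE pairs split into 36 training and 9 evaluation pairs (80/20).
pairs = [(f"{i:02d}_hazy.jpg", f"{i:02d}_GT.jpg") for i in range(1, 46)]
random.seed(0)  # fixed seed so the split is reproducible
random.shuffle(pairs)
train_pairs, eval_pairs = pairs[:36], pairs[36:]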

4.1.2. Fog Generation Network Training

The Fog Generator Model, described in Section 3, was trained using the O-HAZE training subset. Clear images were input to the network, which synthesized foggy outputs to match the real hazy images in the dataset. The training utilized a composite loss function combining L1, perceptual, and dark channel losses, as defined in Equation (3). We employed the Adam optimizer with a learning rate of 1 × 10⁻⁴ and trained for 100 epochs on an NVIDIA RTX 3090 GPU with CUDA support. The batch size was set to 1 due to memory constraints. This process ensured that the network learned to generate realistic fog distributions aligned with real-world conditions observed in O-HAZE.
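The training procedure reduces to a standard PyTorch loop, sketched below; the model, composite loss, and data loader are passed in as arguments, since this is an outline of the described setup rather than the authors' exact script.

import torch

def train_fog_generator(model: torch.nn.Module, loss_fn: torch.nn.Module,
                        loader, epochs: int = 100) -> None:
    """Train the fog generator with Adam at lr 1e-4, as described above.

    loader is assumed to yield (clear, foggy) image pairs with batch size 1.
    """
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model, loss_fn = model.to(device), loss_fn.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        for clear, foggy in loader:
            clear, foggy = clear.to(device), foggy.to(device)
            loss = loss_fn(model(clear), foggy)  # synthesized vs. real foggy image
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    torch.save(model.state_dict(), "fog_generator.pth")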

4.1.3. Fog Synthesis

Using the trained Fog Generator, we synthesized foggy images from the clear images in both the training and evaluation subsets of O-HAZE. The fog intensity was set to 0.2, reflecting a moderate fog level suitable for surveillance applications. Additionally, we generated foggy images using three baseline methods: Simple Fog Generation (Simple), Depth-Aware Fog Generation (DA), and Dark Channel Prior-based Fog Generation (DC).
Figure 3 illustrates examples of synthetic fog generated by each method, showcasing their visual differences. For instance, the Depth-Aware Fog method highlights a depth-dependent fog distribution, while our GridNet-based approach demonstrates nuanced fog patterns.

4.1.4. Fog Removal Training

Four GridNet-based dehazing models were trained using the synthetic foggy images generated by each method:
  • ModelS: Trained on images with simple fog;
  • ModelU: Trained on images with GridNet-based fog;
  • ModelDA: Trained on images with depth-aware fog;
  • ModelDC: Trained on images with dark channel fog.
Each model was trained for 30 epochs using the same GridNet architecture, with a loss function combining L1 and perceptual losses (using VGG-16 features). The training process mirrored that of the Fog Generator. The goal was to assess how the quality of synthetic fog influences dehazing performance.

4.1.5. Fog Removal Performance Evaluation

The trained dehazing models were evaluated on the O-HAZE evaluation subset (nine real foggy images). Performance was measured using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), standard metrics for image restoration quality. For each model, we computed the average PSNR and SSIM across the evaluation set and visualized the results for qualitative analysis. Example dehazing outputs are shown in Figure 4, with the GridNet-based model demonstrating superior restoration quality.
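Both metrics are available in scikit-image; the following sketch averages them over an evaluation set (the list-based interface and function name are illustrative).

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(dehazed, ground_truth):
    """Average PSNR (dB) and SSIM over paired RGB images in [0, 1]."""
    psnrs, ssims = [], []
    for out, gt in zip(dehazed, ground_truth):
        psnrs.append(peak_signal_noise_ratio(gt, out, data_range=1.0))
        ssims.append(structural_similarity(gt, out, data_range=1.0, channel_axis=2))
    return float(np.mean(psnrs)), float(np.mean(ssims))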

4.2. Results Analysis

4.2.1. Training Dynamics

Figure 5 and Figure 6 depict the training loss curves for the Fog Generator and dehazing models, respectively. The Fog Generator exhibits a steady decline in loss, converging after approximately 80 epochs, indicating effective learning of fog synthesis patterns. Similarly, the dehazing models show consistent loss reduction, with ModelU achieving the lowest final loss, suggesting that training on GridNet-generated fog enhances dehazing optimization.

4.2.2. Quantitative Performance

Table 1 summarizes the dehazing performance of each model on the O-HAZE evaluation set. ModelU, trained on GridNet-generated fog, achieves the highest PSNR (17.6018 dB) and SSIM (0.7697), outperforming ModelS (14.8382 dB, 0.7186), ModelDA (14.6777 dB, 0.7125), and ModelDC (14.5749 dB, 0.7007). This result validates our hypothesis that a network-driven fog generation approach improves dehazing efficacy by providing more realistic and challenging training data.

4.2.3. Sample-Level Analysis

Figure 7 presents the sample-level PSNR and SSIM distributions across the evaluation set. The combined PSNR plot shows ModelU consistently achieving higher PSNR values across samples, while the SSIM plot confirms its structural fidelity. These distributions underscore ModelU’s robustness and consistency compared to baseline-trained models.

4.3. Visual Comparison of Real and Synthetic Fog

To evaluate the visual realism of the synthetic fog generated by our method, we compare it with real-world fog and other baseline fog generation methods, as shown in Figure 8. The figure presents two real images (a clear image and a foggy image captured under natural fog conditions) alongside synthetic fog samples produced using four different approaches: simple fog, depth-aware fog, Dark Channel Prior, and our proposed GridNet-based method.
Among these, the synthetic fog generated by our model exhibits a more natural fog distribution and density that closely resembles the characteristics of real fog. The fog appears more spatially consistent and visually plausible, especially in its interaction with background structures and atmospheric depth. In contrast, the fog produced by the baseline methods tends to be either too sparse or uniformly spread, failing to capture the heterogeneous and complex nature of real fog. These observations visually support the superiority of our method in generating realistic fog, which is critical for training effective dehazing networks in surveillance scenarios.

4.4. Discussion

While our study demonstrates the effectiveness of the proposed fog generation model in improving dehazing performance, we acknowledge that the evaluation did not include comparisons with recent state-of-the-art dehazing architectures such as FFA-Net, DehazeFormer, or DehazeGS. This was a deliberate decision to isolate the effect of fog generation by maintaining a consistent dehazing network (GridNet) across all experiments. While this helped ensure a controlled comparison among fog synthesis methods, we recognize that incorporating more advanced dehazing networks in future work will provide a broader and more generalizable validation of our synthetic fog generation approach.
Additionally, we acknowledge that a component-wise ablation analysis was not conducted. In particular, we did not individually evaluate the contributions of components such as the perceptual loss, dark channel consistency, and the GridNet-based architectural design. We believe that such analysis would offer important insights into the role of each element in performance improvement. We plan to conduct systematic ablation studies in future work to more rigorously validate our model’s design choices and improve its efficiency.

5. Conclusions

This study successfully validates the efficacy of a network-driven fog generation approach for improving dehazing performance in surveillance systems. By leveraging the GridNet architecture as the backbone for both fog synthesis and removal, the proposed Fog Generator Model produces realistic synthetic fog that enhances the robustness of dehazing networks. Comparative experiments reveal that dehazing models trained on GridNet-generated fog outperform those trained on traditional methods, as evidenced by superior PSNR and SSIM scores on the O-HAZE dataset. These findings confirm that integrating high-performance dehazing networks into fog generation not only bridges the gap in paired dataset availability but also elevates the quality of visibility restoration under adverse weather conditions. The proposed solution offers a practical and effective advancement for real-world surveillance, ensuring reliable operation in foggy environments. Moreover, the proposed fog generation model can be effectively applied in various practical domains such as traffic surveillance, autonomous driving, smart city security cameras, and drone-based environmental monitoring, where robust vision systems are required under adverse weather conditions. Future research could explore real-time implementation, broader dataset applications, and further optimization of the fog generation-dehazing pipeline to enhance its applicability across diverse scenarios.

Author Contributions

Conceptualization, S.Y.; methodology, S.Y.; software, S.Y.; validation, H.L.; formal analysis, S.Y.; investigation, S.Y. and B.P.; writing—original draft preparation, B.P.; writing—review and editing, S.Y.; visualization, Y.-K.K.; supervision, S.Y.; project administration, S.Y.; funding acquisition, S.Y.; resources, B.P.; data curation, B.P. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was supported by Wonkwang University in 2025.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon reasonable request from the corresponding author.

Acknowledgments

The authors thank the anonymous reviewers and editors for their insightful comments and suggestions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Miclea, R.-C.; Ungureanu, V.-I.; Sandru, F.-D.; Silea, I. Visibility Enhancement and Fog Detection: Solutions Presented in Recent Scientific Papers with Potential for Application to Mobile Systems. Sensors 2021, 21, 3370.
  2. Younis, R.; Bastaki, N. Accelerated Fog Removal from Real Images for Car Detection. In Proceedings of the 2017 9th IEEE-GCC Conference and Exhibition (GCCCE), Manama, Bahrain, 8–11 May 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6.
  3. Carter, V.; Verbrugghe, N.; Lobos-Roco, F.; Del Río, C.; Albornoz, F.; Khan, A.Z. Unlocking the Fog: Assessing Fog Collection Potential and Need as a Complementary Water Resource in Arid Urban Lands—The Alto Hospicio, Chile Case. Front. Environ. Sci. 2025, 13, 1537058.
  4. Liu, X.; Hong, L.; Lin, Y. Rapid Fog-Removal Strategies for Traffic Environments. Sensors 2023, 23, 7506.
  5. Shen, M.; Lv, T.; Liu, Y.; Zhang, J.; Ju, M. A Comprehensive Review of Traditional and Deep-Learning-Based Defogging Algorithms. Electronics 2024, 13, 3392.
  6. Kim, H.; Tyagi, S.; Tiwari, N. A Review on Fog Removal with Its Techniques and Types. Int. J. Smart Bus. Technol. 2015, 3, 39–48.
  7. Sun, C.-C.; Lai, H.-C.; Sheu, M.-H.; Huang, Y.-H. Single Image Fog Removal Algorithm Based on an Improved Dark Channel Prior Method. In Proceedings of the 2016 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Phuket, Thailand, 24–27 October 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–4.
  8. Liu, X.; Ma, Y.; Shi, Z.; Chen, J. GridDehazeNet: Attention-Based Multi-Scale Network for Image Dehazing. arXiv 2019, arXiv:1908.03245.
  9. Yi, Q.; Jiang, A.; Deng, X.; Liu, C. MSNet: A Novel End-to-end Single Image Dehazing Network with Multiple Inter-scale Dense Skip-connections. IET Image Process. 2021, 15, 143–154.
  10. Asha, C.S.; Siddiq, A.B.; Akthar, R.; Rajan, M.R.; Suresh, S. ODD-Net: A Hybrid Deep Learning Architecture for Image Dehazing. Sci. Rep. 2024, 14, 30619.
  11. Kijima, D.; Kushida, T.; Kitajima, H.; Tanaka, K.; Kubo, H.; Funatomi, T.; Mukaigawa, Y. Time-of-Flight Imaging in Fog Using Multiple Time-Gated Exposures. Opt. Express 2021, 29, 6453.
  12. Ancuti, C.O.; Ancuti, C.; Timofte, R.; De Vleeschouwer, C. O-HAZE: A Dehazing Benchmark with Real Hazy and Haze-Free Outdoor Images. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 867–8678.
  13. Xie, Y.; Wei, H.; Liu, Z.; Wang, X.; Ji, X. SynFog: A Photo-Realistic Synthetic Fog Dataset Based on End-to-End Imaging Simulation for Advancing Real-World Defogging in Autonomous Driving. arXiv 2024, arXiv:2403.17094.
  14. Yu, J.; Wang, Y.; Lu, Z.; Guo, J.; Li, Y.; Qin, H.; Zhang, X. DehazeGS: Seeing Through Fog with 3D Gaussian Splatting. arXiv 2025, arXiv:2501.03659.
  15. Xue, M.; Fan, S.; Palaiahnakote, S.; Zhou, M. UR2P-Dehaze: Learning a Simple Image Dehaze Enhancer via Unpaired Rich Physical Prior. arXiv 2025, arXiv:2501.06818.
  16. Gong, R.; Dai, D.; Chen, Y.; Li, W.; Paudel, D.P.; Van Gool, L. Analogical Image Translation for Fog Generation. Proc. AAAI Conf. Artif. Intell. 2021, 35, 1433–1441.
  17. Kim, K.; Kim, S.; Kim, K. Effective Image Enhancement Techniques for Fog-affected Indoor and Outdoor Images. IET Image Process. 2018, 12, 465–471.
  18. Hwang, S.-H.; Kwon, K.-W.; Im, T.-H. AEA-RDCP: An Optimized Real-Time Algorithm for Sea Fog Intensity and Visibility Estimation. Appl. Sci. 2024, 14, 8033.
  19. Morales, P.; Klinghoffer, T.; Lee, S.J. Feature Forwarding for Efficient Single Image Dehazing. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–20 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2078–2085.
Figure 1. GridDehazeNet: attention-based multi-scale network architecture.
Figure 2. Architecture of the Fog Generator Model. The clear image J(x) is processed through the GridNet backbone to extract fog features F(x), while a transmission network generates t(x). These are combined with learned fog parameters (A, β) to produce the foggy image I(x).
Figure 3. Synthetic fog samples generated from the O-HAZE dataset.
Figure 4. Dehazing results on the O-HAZE evaluation set: (a) ModelS, (b) ModelU, (c) ModelDA, and (d) ModelDC. Each image shows the ground truth, foggy input, and dehazed output. This study primarily focuses on the design and evaluation of a fog generation network suitable for real-time learning, rather than dehazing itself; further analysis of the visual degradation artifacts and improvements in dehazing performance is left to future work.
Figure 5. Loss curve of the Fog Generator Model during training, illustrating convergence over 100 epochs.
Figure 6. Loss curves of dehazing models (ModelS, ModelU, ModelDA, and ModelDC) over 30 epochs, highlighting ModelU’s superior convergence.
Figure 7. Sample-level performance: (a) combined PSNR across models and (b) combined SSIM across models.
Figure 8. Comparison of real-world clear and foggy images with synthetic fog samples generated by different fog-generation methods.
Table 1. Dehazing performance comparison on the O-HAZE evaluation set.

Model      PSNR (dB)   SSIM
ModelS     14.8382     0.7186
ModelU     17.6018     0.7697
ModelDA    14.6777     0.7125
ModelDC    14.5749     0.7007