Search Results (234)

Search Parameters:
Keywords = dehazing

15 pages, 27119 KiB  
Article
Dehazing Algorithm Based on Joint Polarimetric Transmittance Estimation via Multi-Scale Segmentation and Fusion
by Zhen Wang, Zhenduo Zhang and Xueying Cao
Appl. Sci. 2025, 15(15), 8632; https://doi.org/10.3390/app15158632 - 4 Aug 2025
Abstract
To address the significant degradation of image visibility and contrast in turbid media, this paper proposes an enhanced image dehazing algorithm. Unlike traditional polarimetric dehazing methods that exclusively attribute polarization information to airlight, our approach integrates object radiance polarization and airlight polarization for haze removal. First, sky regions are localized through multi-scale fusion of polarization and intensity segmentation maps. Second, region-specific transmittance estimation is performed by differentiating haze-occluded regions from haze-free regions. Finally, target radiance is solved using boundary constraints derived from non-haze regions. Compared with other dehazing algorithms, the method proposed in this paper demonstrates greater adaptability across diverse scenarios. It achieves higher-quality restoration of targets with results that more closely resemble natural appearances, avoiding noticeable distortion. Not only does it deliver excellent dehazing performance for land fog scenes, but it also effectively handles maritime fog environments.
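The decomposition the abstract describes — attributing part of the polarized signal to airlight and recovering target radiance from an estimated transmittance — extends the classical polarimetric baseline, which can be sketched as follows. This is a minimal numpy sketch of that traditional baseline, not the paper's joint multi-scale method; `p_airlight` (degree of polarization of the airlight) and `airlight_inf` (airlight at infinite distance) are assumed calibration constants.

```python
import numpy as np

def polarimetric_dehaze(i_max, i_min, p_airlight=0.3, airlight_inf=1.0, t_min=0.1):
    """Classic polarimetric dehazing baseline (not the paper's joint method).

    i_max, i_min: intensity images taken through a polarizer at the
    orientations giving maximal/minimal airlight.
    """
    total = i_max + i_min                      # total intensity I
    airlight = (i_max - i_min) / p_airlight    # estimated airlight A
    t = 1.0 - airlight / airlight_inf          # transmittance t = 1 - A / A_inf
    t = np.clip(t, t_min, 1.0)                 # avoid division blow-up in dense haze
    radiance = (total - airlight) / t          # object radiance J = (I - A) / t
    return np.clip(radiance, 0.0, 1.0)
```

The paper's contribution is precisely that the `airlight` term above should not absorb all polarization; part of it belongs to the object radiance.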

21 pages, 6628 KiB  
Article
MCA-GAN: A Multi-Scale Contextual Attention GAN for Satellite Remote-Sensing Image Dehazing
by Sufen Zhang, Yongcheng Zhang, Zhaofeng Yu, Shaohua Yang, Huifeng Kang and Jingman Xu
Electronics 2025, 14(15), 3099; https://doi.org/10.3390/electronics14153099 - 3 Aug 2025
Abstract
With the growing demand for ecological monitoring and geological exploration, high-quality satellite remote-sensing imagery has become indispensable for accurate information extraction and automated analysis. However, haze reduces image contrast and sharpness, significantly impairing quality. Existing dehazing methods, primarily designed for natural images, struggle with remote-sensing images due to their complex imaging conditions and scale diversity. Given this, we propose a novel Multi-Scale Contextual Attention Generative Adversarial Network (MCA-GAN), specifically designed for satellite image dehazing. Our method integrates multi-scale feature extraction with global contextual guidance to enhance the network’s comprehension of complex remote-sensing scenes and its sensitivity to fine details. MCA-GAN incorporates two self-designed key modules: (1) a Multi-Scale Feature Aggregation Block, which employs multi-directional global pooling and multi-scale convolutional branches to bolster the model’s ability to capture land-cover details across varying spatial scales; (2) a Dynamic Contextual Attention Block, which uses a gated mechanism to fuse three-dimensional attention weights with contextual cues, thereby preserving global structural and chromatic consistency while retaining intricate local textures. Extensive qualitative and quantitative experiments on public benchmarks demonstrate that MCA-GAN outperforms other existing methods in both visual fidelity and objective metrics, offering a robust and practical solution for remote-sensing image dehazing.

22 pages, 24173 KiB  
Article
ScaleViM-PDD: Multi-Scale EfficientViM with Physical Decoupling and Dual-Domain Fusion for Remote Sensing Image Dehazing
by Hao Zhou, Yalun Wang, Wanting Peng, Xin Guan and Tao Tao
Remote Sens. 2025, 17(15), 2664; https://doi.org/10.3390/rs17152664 - 1 Aug 2025
Abstract
Remote sensing images are often degraded by atmospheric haze, which not only reduces image quality but also complicates information extraction, particularly in high-level visual analysis tasks such as object detection and scene classification. State-space models (SSMs) have recently emerged as a powerful paradigm for vision tasks, showing great promise due to their computational efficiency and robust capacity to model global dependencies. However, most existing learning-based dehazing methods lack physical interpretability, leading to weak generalization. Furthermore, they typically rely on spatial features while neglecting crucial frequency domain information, resulting in incomplete feature representation. To address these challenges, we propose ScaleViM-PDD, a novel network that enhances an SSM backbone with two key innovations: a Multi-scale EfficientViM with Physical Decoupling (ScaleViM-P) module and a Dual-Domain Fusion (DD Fusion) module. The ScaleViM-P module synergistically integrates a Physical Decoupling block within a Multi-scale EfficientViM architecture. This design enables the network to mitigate haze interference in a physically grounded manner at each representational scale while simultaneously capturing global contextual information to adaptively handle complex haze distributions. To further address detail loss, the DD Fusion module replaces conventional skip connections by incorporating a novel Frequency Domain Module (FDM) alongside channel and position attention. This allows for a more effective fusion of spatial and frequency features, significantly improving the recovery of fine-grained details, including color and texture information. Extensive experiments on nine publicly available remote sensing datasets demonstrate that ScaleViM-PDD consistently surpasses state-of-the-art baselines in both qualitative and quantitative evaluations, highlighting its strong generalization ability.

14 pages, 21956 KiB  
Article
Evaluating Image Quality Metrics as Loss Functions for Image Dehazing
by Rareș Dobre-Baron, Adrian Savu-Jivanov and Cosmin Ancuți
Sensors 2025, 25(15), 4755; https://doi.org/10.3390/s25154755 - 1 Aug 2025
Abstract
The difficulty and manual nature of procuring human evaluators for ranking the quality of images affected by various types of degradations, and of those cleaned up by developed algorithms, have led to the widespread adoption of automated metrics, like the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Metric (SSIM). However, disparities between rankings given by these metrics and those given by human evaluators have encouraged the development of improved image quality assessment (IQA) metrics that are a better fit for this purpose. These methods have previously been used solely for quality assessment and not as objectives in the training of neural networks for high-level vision tasks, despite the potential improvements that may come from directly optimizing for desired metrics. This paper examines the adequacy of ten recent IQA metrics, compared with standard loss functions, within two trained dehazing neural networks, and observes broad improvements in their performance.
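The two baseline metrics the abstract names are simple to state. A minimal numpy sketch of PSNR, a single-window (global) SSIM, and a metric-as-loss wrapper follows; note the standard SSIM averages the same formula over local Gaussian windows, and a real training loop would need a differentiable implementation.

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=1.0, k1=0.01, k2=0.03):
    """Single-window (global) SSIM; the standard metric averages this
    statistic over local Gaussian windows instead of the whole image."""
    c1, c2 = (k1 * max_val) ** 2, (k2 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def metric_loss(pred, target):
    """Using the metrics as a training objective: minimizing the negation
    maximizes PSNR and SSIM directly."""
    return -psnr(pred, target) - ssim_global(pred, target)
```

The paper's point is that the ten newer IQA metrics can be slotted into the same `metric_loss` role in place of PSNR/SSIM-style objectives.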
(This article belongs to the Special Issue Sensing and Imaging in Computer Vision)
16 pages, 123395 KiB  
Article
Semi-Supervised Image-Dehazing Network Based on a Trusted Library
by Wan Li and Chenyang Chang
Electronics 2025, 14(15), 2956; https://doi.org/10.3390/electronics14152956 - 24 Jul 2025
Abstract
In the field of image dehazing, many deep learning-based methods have demonstrated promising results. However, these methods often neglect crucial frequency-domain information and rely heavily on labeled datasets, which limits their applicability to real-world hazy images. To address these issues, we propose a semi-supervised image-dehazing network based on a trusted library (WTS-Net). We construct a dual-branch wavelet transform network (DBWT-Net). It fuses high- and low-frequency features via a frequency-mixing module and enhances global context through attention mechanisms. Building on DBWT-Net, we embed this backbone in a teacher–student model to reduce reliance on labeled data. To enhance the reliability of the teacher network, we introduce a trusted library guided by NR-IQA. In addition, we employ a two-stage training strategy for the network. Experiments show that WTS-Net achieves superior generalization and robustness in both synthetic and real-world dehazing scenarios.

22 pages, 7562 KiB  
Article
FIGD-Net: A Symmetric Dual-Branch Dehazing Network Guided by Frequency Domain Information
by Luxia Yang, Yingzhao Xue, Yijin Ning, Hongrui Zhang and Yongjie Ma
Symmetry 2025, 17(7), 1122; https://doi.org/10.3390/sym17071122 - 13 Jul 2025
Abstract
Image dehazing technology is a crucial component in the fields of intelligent transportation and autonomous driving. However, most existing dehazing algorithms only process images in the spatial domain, failing to fully exploit the rich information in the frequency domain, which leads to residual haze in the images. To address this issue, we propose a novel Frequency-domain Information Guided Symmetric Dual-branch Dehazing Network (FIGD-Net), which utilizes the spatial branch to extract local haze features and the frequency branch to capture the global haze distribution, thereby guiding the feature learning process in the spatial branch. The FIGD-Net mainly consists of three key modules: the Frequency Detail Extraction Module (FDEM), the Dual-Domain Multi-scale Feature Extraction Module (DMFEM), and the Dual-Domain Guidance Module (DGM). First, the FDEM employs the Discrete Cosine Transform (DCT) to convert the spatial domain into the frequency domain. It then selectively extracts high-frequency and low-frequency features based on predefined proportions. The high-frequency features, which contain haze-related information, are correlated with the overall characteristics of the low-frequency features to enhance the representation of haze attributes. Next, the DMFEM utilizes stacked residual blocks and gradient feature flows to capture local detail features. Specifically, frequency-guided weights are applied to adjust the focus of feature channels, thereby improving the module’s ability to capture multi-scale features and distinguish haze features. Finally, the DGM adjusts channel weights guided by frequency information. This smooths out redundant signals and enables cross-branch information exchange, which helps to restore the original image colors. Extensive experiments demonstrate that the proposed FIGD-Net achieves superior dehazing performance on multiple synthetic and real-world datasets.
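The DCT-based frequency split that the FDEM performs can be illustrated in a few lines. This is a minimal numpy sketch of the general idea only; `low_ratio` is an assumed stand-in for the paper's "predefined proportions", and the real module operates on learned feature maps rather than raw images.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0, :] = np.sqrt(1.0 / n)
    return c

def split_frequencies(img, low_ratio=0.25):
    """Split a 2-D array into low- and high-frequency parts via the DCT.

    Coefficients in the top-left low_ratio fraction of the spectrum count
    as low frequency; the high band is the residual, so low + high == img.
    """
    h, w = img.shape
    ch, cw = dct_matrix(h), dct_matrix(w)
    coeffs = ch @ img @ cw.T                       # forward 2-D DCT
    mask = np.zeros_like(coeffs)
    mask[: int(h * low_ratio), : int(w * low_ratio)] = 1.0
    low = ch.T @ (coeffs * mask) @ cw              # inverse DCT of the low band
    high = img - low                               # residual = high band
    return low, high
```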
(This article belongs to the Section Computer)

17 pages, 5189 KiB  
Article
YOLO-Extreme: Obstacle Detection for Visually Impaired Navigation Under Foggy Weather
by Wei Wang, Bin Jing, Xiaoru Yu, Wei Zhang, Shengyu Wang, Ziqi Tang and Liping Yang
Sensors 2025, 25(14), 4338; https://doi.org/10.3390/s25144338 - 11 Jul 2025
Abstract
Visually impaired individuals face significant challenges in navigating safely and independently, particularly under adverse weather conditions such as fog. To address this issue, we propose YOLO-Extreme, an enhanced object detection framework based on YOLOv12, specifically designed for robust navigation assistance in foggy environments. The proposed architecture incorporates three novel modules: the Dual-Branch Bottleneck Block (DBB) for capturing both local spatial and global semantic features, the Multi-Dimensional Collaborative Attention Module (MCAM) for joint spatial-channel attention modeling to enhance salient obstacle features and reduce background interference in foggy conditions, and the Channel-Selective Fusion Block (CSFB) for robust multi-scale feature integration. Comprehensive experiments conducted on the Real-world Task-driven Traffic Scene (RTTS) foggy dataset demonstrate that YOLO-Extreme achieves state-of-the-art detection accuracy and maintains high inference speed, outperforming existing dehazing-and-detect and mainstream object detection methods. To further verify the generalization capability of the proposed framework, we also performed cross-dataset experiments on the Foggy Cityscapes dataset, where YOLO-Extreme consistently demonstrated superior detection performance across diverse foggy urban scenes. The proposed framework significantly improves the reliability and safety of assistive navigation for visually impaired individuals under challenging weather conditions, offering practical value for real-world deployment.
(This article belongs to the Section Navigation and Positioning)

37 pages, 20758 KiB  
Review
A Comprehensive Review of Image Restoration Research Based on Diffusion Models
by Jun Li, Heran Wang, Yingjie Li and Haochuan Zhang
Mathematics 2025, 13(13), 2079; https://doi.org/10.3390/math13132079 - 24 Jun 2025
Abstract
Image restoration is an indispensable and challenging task in computer vision, aiming to enhance the quality of images degraded by various forms of degradation. Diffusion models have achieved remarkable progress in AIGC (Artificial Intelligence Generated Content) image generation, and numerous studies have explored their application in image restoration, achieving performance surpassing that of other methods. This paper provides a comprehensive overview of diffusion models for image restoration, starting with an introduction to the background of diffusion models. It summarizes relevant theories and research in utilizing diffusion models for image restoration in recent years, elaborating on six commonly used methods and their unified paradigm. Based on these six categories, this paper classifies restoration tasks into two main areas: image super-resolution reconstruction and frequency-selective image restoration. The frequency-selective image restoration category includes image deblurring, image inpainting, image deraining, image desnowing, image dehazing, image denoising, and low-light enhancement. For each area, this paper delves into the technical principles and modeling strategies. Furthermore, it analyzes the specific characteristics and contributions of the diffusion models employed in each application category. This paper summarizes commonly used datasets and evaluation metrics for these six applications to facilitate comprehensive evaluation of existing methods. Finally, it concludes by identifying the limitations of current research, outlining challenges, and offering perspectives on future applications.

18 pages, 3132 KiB  
Article
ICAFormer: An Image Dehazing Transformer Based on Interactive Channel Attention
by Yanfei Chen, Tong Yue, Pei An, Hanyu Hong, Tao Liu, Yangkai Liu and Yihui Zhou
Sensors 2025, 25(12), 3750; https://doi.org/10.3390/s25123750 - 15 Jun 2025
Abstract
Single image dehazing is a fundamental task in computer vision, aiming to recover a clear scene from a hazy input image. To address the limitations of traditional dehazing algorithms—particularly in global feature association and local detail preservation—this study proposes a novel Transformer-based dehazing model enhanced by an interactive channel attention mechanism. The proposed architecture adopts a U-shaped encoder–decoder framework, incorporating key components such as a feature extraction module and a feature fusion module based on interactive attention. Specifically, the interactive channel attention mechanism facilitates cross-layer feature interaction, enabling the dynamic fusion of global contextual information and local texture details. The network architecture leverages a multi-scale feature pyramid to extract image information across different dimensions, while an improved cross-channel attention weighting mechanism enhances feature representation in regions with varying haze densities. Extensive experiments conducted on both synthetic and real-world datasets—including the RESIDE benchmark—demonstrate the superior performance of the proposed method. Quantitatively, it achieves PSNR gains of 0.53 dB for indoor scenes and 1.64 dB for outdoor scenes, alongside SSIM improvements of 1.4% and 1.7%, respectively, compared with the second-best performing method. Qualitative assessments further confirm that the proposed model excels in restoring fine structural details in dense haze regions while maintaining high color fidelity. These results validate the effectiveness of the proposed approach in enhancing both perceptual quality and quantitative accuracy in image dehazing tasks.
(This article belongs to the Section Sensing and Imaging)

20 pages, 21844 KiB  
Article
DWTMA-Net: Discrete Wavelet Transform and Multi-Dimensional Attention Network for Remote Sensing Image Dehazing
by Xin Guan, Runxu He, Le Wang, Hao Zhou, Yun Liu and Hailing Xiong
Remote Sens. 2025, 17(12), 2033; https://doi.org/10.3390/rs17122033 - 12 Jun 2025
Abstract
Haze caused by atmospheric scattering often leads to color distortion, reduced contrast, and diminished clarity, which significantly degrade the quality of remote sensing images. To address these issues, we propose a novel network called DWTMA-Net that integrates discrete wavelet transform with multi-dimensional attention, aiming to restore image information in both the frequency and spatial domains to enhance overall image quality. Specifically, we design a wavelet transform-based downsampling module that effectively fuses frequency and spatial features. The input first passes through a discrete wavelet block to extract frequency-domain information. These features are then fed into a multi-dimensional attention block, which incorporates pixel attention, Fourier frequency-domain attention, and channel attention. This combination allows the network to capture both global and local characteristics while enhancing deep feature representations through dimensional expansion, thereby improving spatial-domain feature extraction. Experimental results on the SateHaze1k, HRSD, and HazyDet datasets demonstrate the effectiveness of the proposed method in handling remote sensing images with varying haze levels and drone-view scenarios. By recovering both frequency and spatial details, our model achieves significant improvements in dehazing performance compared to existing state-of-the-art approaches.
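The wavelet-based downsampling idea — halving resolution while keeping frequency detail as explicit sub-bands instead of discarding it — is easy to see with the simplest wavelet. This is a minimal numpy sketch using a one-level Haar transform; DWTMA-Net's actual block is learned and the paper does not specify Haar.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar discrete wavelet transform.

    Returns four half-resolution sub-bands (LL, LH, HL, HH): LL approximates
    the image, the others carry horizontal, vertical, and diagonal detail.
    Height and width must be even.
    """
    a = img[0::2, 0::2]          # the four pixels of each 2x2 block
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # approximation (low-low)
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh
```

Unlike strided pooling, this downsampling is invertible: the original pixels are exact linear combinations of the four sub-bands, so no frequency information is lost.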
(This article belongs to the Special Issue Artificial Intelligence Remote Sensing for Earth Observation)

14 pages, 13345 KiB  
Article
Synthetic Fog Generation Using High-Performance Dehazing Networks for Surveillance Applications
by Heekwon Lee, Byeongseon Park, Yong-Kab Kim and Sungkwan Youm
Appl. Sci. 2025, 15(12), 6503; https://doi.org/10.3390/app15126503 - 9 Jun 2025
Abstract
This research addresses visibility challenges in surveillance systems under foggy conditions through a novel synthetic fog generation method leveraging the GridNet dehazing architecture. Our approach uniquely reverses GridNet, originally developed for fog removal, to synthesize realistic foggy images. The proposed Fog Generator Model incorporates perceptual and dark channel consistency losses to enhance fog realism and structural consistency. Comparative experiments on the O-HAZY dataset demonstrate that dehazing models trained on our synthetic fog outperform those trained on conventional methods, achieving superior Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) scores. These findings confirm that integrating high-performance dehazing networks into fog synthesis improves the realism and effectiveness of fog removal solutions, offering significant benefits for real-world surveillance applications.
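The "conventional methods" this learned generator is compared against typically synthesize fog with the classical atmospheric scattering model. A minimal numpy sketch of that physics-based baseline follows; `beta` (scattering coefficient) and `airlight` are assumed constants here, whereas the paper replaces this whole step with a reversed GridNet.

```python
import numpy as np

def synthesize_fog(clear, depth, beta=1.0, airlight=0.9):
    """Physics-based fog synthesis via the atmospheric scattering model:
    I = J * t + A * (1 - t), with transmittance t = exp(-beta * depth).

    clear: haze-free image J in [0, 1]; depth: per-pixel scene depth.
    """
    t = np.exp(-beta * depth)            # transmittance falls with depth
    return clear * t + airlight * (1.0 - t)
```

At zero depth the output equals the clear image; at large depth it converges to the airlight color, which is exactly the uniform-veiling behavior the learned generator is meant to improve on.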

16 pages, 5141 KiB  
Article
Multi-Channel Attention Fusion Algorithm for Railway Image Dehazing
by Haofei Xu, Ziyu Cai, Shanshan Li, Siyang Hu, Junrong Tu, Song Chen, Kai Xie and Wei Zhang
Electronics 2025, 14(11), 2241; https://doi.org/10.3390/electronics14112241 - 30 May 2025
Abstract
Railway safety inspections, a critical component of modern transportation systems, face significant challenges from adverse weather conditions, like fog and rain, which degrade image quality and compromise inspection accuracy. To address this limitation, we propose a novel deep learning-based image dehazing algorithm optimized for outdoor railway environments. Our method integrates adaptive high-pass filtering and bilateral grid processing during the feature extraction phase to enhance detail preservation while maintaining computational efficiency. The framework uniquely combines RGB color channels with atmospheric brightness channels to disentangle environmental interference from critical structural information, ensuring balanced restoration across all spectral components. A dual-attention mechanism (channel and spatial attention modules) is incorporated during feature fusion to dynamically prioritize haze-relevant regions and suppress weather-induced artifacts. Comprehensive evaluations demonstrate the algorithm’s superior performance: On the SOTS-Outdoor benchmark, it achieves state-of-the-art PSNR (35.27) and SSIM (0.9869) scores. When tested on a specialized railway inspection dataset containing 12,840 fog-affected track images, the method attains a PSNR of 30.41 and SSIM of 0.9511, with the SSIM marginally lower (by 0.0017) than DeHamer’s while outperforming other comparative methods in perceptual clarity. Quantitative and qualitative analyses confirm that our approach effectively restores critical infrastructure details obscured by atmospheric particles, improving defect detection accuracy by 18.6% compared to non-processed images in simulated inspection scenarios. This work establishes a robust solution for weather-resilient railway monitoring systems, demonstrating practical value for automated transportation safety applications.
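The channel half of the dual-attention mechanism follows the familiar squeeze-and-excitation pattern: globally pool each channel, pass the result through a small bottleneck, and gate the channels with the output. A minimal numpy sketch of that general mechanism is below; the bottleneck weights `w1` and `w2` would be learned in practice, and this is not the paper's exact module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) map.

    A global-average "squeeze" followed by a two-layer bottleneck
    (weights w1: (r, C), w2: (C, r), assumed pretrained) yields one gate
    per channel; haze-relevant channels are amplified, others suppressed.
    """
    squeeze = feat.mean(axis=(1, 2))                   # (C,) global descriptor
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0))   # (C,) gates in (0, 1)
    return feat * gate[:, None, None]                  # reweight channels
```

Spatial attention is the transposed idea: pool across channels to a (H, W) map and gate each pixel instead of each channel.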
(This article belongs to the Special Issue Application of Machine Learning in Graphics and Images, 2nd Edition)

16 pages, 15339 KiB  
Article
MLKD-Net: Lightweight Single Image Dehazing via Multi-Head Large Kernel Attention
by Jiwon Moon and Jongyoul Park
Appl. Sci. 2025, 15(11), 5858; https://doi.org/10.3390/app15115858 - 23 May 2025
Abstract
Haze significantly degrades image quality by reducing contrast and blurring object boundaries, which impairs the performance of computer vision systems. Among various approaches, single-image dehazing remains particularly challenging due to the absence of depth information. While Vision Transformer (ViT)-based models have achieved remarkable results by leveraging multi-head attention and large effective receptive fields, their high computational complexity limits their applicability in real-time and embedded systems. To address this limitation, we propose MLKD-Net, a lightweight CNN-based model that incorporates a novel Multi-Head Large Kernel Block (MLKD), which is based on the Multi-Head Large Kernel Attention (MLKA) mechanism. This structure preserves the benefits of large receptive fields and a multi-head design while also ensuring compactness and computational efficiency. MLKD-Net achieves a PSNR of 37.42 dB on the SOTS-Outdoor dataset while using 90.9% fewer parameters than leading Transformer-based models. Furthermore, it demonstrates real-time performance with 55.24 ms per image (18.2 FPS) on the NVIDIA Jetson Orin Nano in TensorRT-INT8 mode. These results highlight its effectiveness and practicality for resource-constrained, real-time image dehazing applications.
(This article belongs to the Section Robotics and Automation)

21 pages, 4536 KiB  
Article
Feature Attention Cycle Generative Adversarial Network: A Multi-Scene Image Dehazing Method Based on Feature Attention
by Na Li, Na Liu, Yanan Duan and Yuyang Chai
Appl. Sci. 2025, 15(10), 5374; https://doi.org/10.3390/app15105374 - 12 May 2025
Abstract
For image dehazing, it is difficult to obtain datasets of paired hazy and clear images. Currently, most algorithms are trained on synthetic datasets with insufficient complexity, which leads to model overfitting. At the same time, the physical characteristics of fog in the real world are ignored in most current algorithms; that is, the degree of fog is related to the depth of field and scattering coefficient. Moreover, most current dehazing algorithms only consider the image dehazing of land scenes and ignore maritime scenes. To address these problems, we propose a multi-scene image dehazing algorithm based on an improved cycle generative adversarial network (CycleGAN). The generator structure is improved based on the CycleGAN model, and a feature fusion attention module is proposed. This module obtains relevant contextual information by extracting different levels of features. The obtained feature information is fused using the idea of residual connections. An attention mechanism is introduced in this module to retain more feature information by assigning different weights. During the training process, the atmospheric scattering model is established to guide the learning of the neural network using its prior information. The experimental results show that, compared with the baseline model, the peak signal-to-noise ratio (PSNR) increases by 32.10%, the structural similarity index (SSIM) increases by 31.07%, the information entropy (IE) increases by 4.79%, and the NIQE index is reduced by 20.1% in quantitative comparison. Meanwhile, it demonstrates better visual effects than other advanced algorithms in qualitative comparisons on synthetic datasets and real datasets.

22 pages, 23370 KiB  
Article
A Dehazing Method for UAV Remote Sensing Based on Global and Local Feature Collaboration
by Chenyang Li, Suiping Zhou, Ting Wu, Jiaqi Shi and Feng Guo
Remote Sens. 2025, 17(10), 1688; https://doi.org/10.3390/rs17101688 - 11 May 2025
Abstract
Non-homogeneous haze in UAV-based remote sensing images severely deteriorates image quality, introducing significant challenges for downstream interpretation and analysis tasks. To tackle this issue, we propose UAVD-Net, a novel dehazing framework specifically designed to enhance UAV remote sensing imagery affected by spatially varying haze. UAVD-Net integrates both global and local feature extraction mechanisms to effectively remove non-uniform haze across different spatial regions. A Transformer-based Multi-layer Global Information Capturing (MGIC) module is introduced to progressively capture and integrate global contextual features across multiple layers, enabling the model to perceive and adapt to spatial variations in haze distribution. This design significantly enhances the network’s ability to model large-scale structures and correct non-homogeneous haze across the image. In parallel, a local information extraction sub-network equipped with an Adaptive Local Information Enhancement (ALIE) module is used to refine texture and edge details. Additionally, a Cross-channel Feature Fusion (CFF) module is incorporated in the decoder stage to effectively merge global and local features through a channel-wise attention mechanism, generating dehazed outputs that are both structurally coherent and visually natural. Extensive experiments on synthetic and real-world datasets demonstrate that UAVD-Net consistently outperforms existing state-of-the-art dehazing methods.
