Search Results (46)

Search Parameters:
Keywords = remote-sensing image dehazing

29 pages, 21103 KB  
Article
Dehazing of Panchromatic Remote Sensing Images Based on Histogram Features
by Hao Wang, Yalin Ding, Xiaoqin Zhou, Guoqin Yuan and Chao Sun
Remote Sens. 2025, 17(20), 3479; https://doi.org/10.3390/rs17203479 - 18 Oct 2025
Viewed by 246
Abstract
During long-range imaging, the turbid medium in the atmosphere absorbs and scatters light, resulting in reduced contrast, a narrowed dynamic range, and obscured detail in remote sensing images. Prior-based methods have the advantages of good real-time performance and a wide application range; however, few existing prior-based methods are applicable to the dehazing of panchromatic images. In this paper, we propose a prior-based dehazing method for panchromatic remote sensing images built on statistical histogram features. First, the hazy image is divided into plain image patches and mixed image patches according to their histogram features. Next, the average occurrence differences between adjacent gray levels (AODAGs) are computed for the plain patches, and the average distance to the gray-level gravity center (ADGG) is computed for the mixed patches. The transmission map is then obtained from the statistical relation equation, and the atmospheric light of each image patch is estimated separately from the patch's maximum gray level using the threshold segmentation method. Finally, the dehazed image is recovered via the physical model. Extensive experiments on synthetic and real-world panchromatic hazy remote sensing images show that the proposed algorithm outperforms state-of-the-art dehazing methods in both efficiency and dehazing effect. Full article
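The final recovery step is the standard inversion of the atmospheric scattering model I = J·t + A·(1 − t). A minimal Python sketch of that step only; the AODAG/ADGG transmission statistics are paper-specific, so the transmission `t` and atmospheric light `A` are taken here as given inputs, and the clamp `t_min` is an illustrative default:

```python
def dehaze_patch(patch, A, t, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)
    to recover scene radiance J for one image patch.
    `patch` is a list of observed gray levels, `A` the patch's atmospheric
    light, `t` its transmission; t is clamped to avoid amplifying noise."""
    t = max(t, t_min)
    return [(i - A) / t + A for i in patch]
```

Synthesizing a hazy patch with known J, A, and t and passing it back through `dehaze_patch` recovers J exactly whenever t exceeds the clamp.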

23 pages, 7046 KB  
Article
Atmospheric Scattering Prior Embedded Diffusion Model for Remote Sensing Image Dehazing
by Shanqin Wang and Miao Zhang
Atmosphere 2025, 16(9), 1065; https://doi.org/10.3390/atmos16091065 - 10 Sep 2025
Viewed by 799
Abstract
Remote sensing image dehazing presents substantial challenges in balancing physical fidelity with generative flexibility, particularly under complex atmospheric conditions and sensor-specific degradation patterns. Traditional physics-based methods often struggle with nonlinear haze distributions, while purely data-driven approaches tend to lack interpretability and physical consistency. To bridge this gap, we propose the Atmospheric Scattering Prior embedded Diffusion Model (ASPDiff), a novel framework that seamlessly integrates atmospheric physics into the diffusion-based generative restoration process. ASPDiff establishes a closed-loop feedback mechanism by embedding the atmospheric scattering model as a physics-driven regularization throughout both the forward degradation simulation and the reverse denoising trajectory. The framework operates through the following three synergistic components: (1) an Atmospheric Prior Estimation Module that uses the Dark Channel Prior to generate initial estimates of the transmission map and global atmospheric light, which are then refined through learnable adjustment networks; (2) a Diffusion Process with Atmospheric Prior Embedding, where the refined priors serve as conditional guidance during the reverse diffusion sampling, ensuring physical plausibility; and (3) a Haze-Aware Refinement Module that adaptively enhances structural details and compensates for residual haze via frequency-aware decomposition and spatial attention. Extensive experiments on both synthetic and real-world remote sensing datasets demonstrate that ASPDiff significantly outperforms existing methods, achieving state-of-the-art performance while maintaining strong physical interpretability. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
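The Dark Channel Prior initialization described in component (1) has a well-known closed form: t(x) = 1 − ω · min over a local patch of min over channels of I/A. A small pure-Python sketch of that initial estimate (patch size and ω are illustrative defaults, not values from ASPDiff):

```python
def dark_channel(img, patch=3):
    """Minimum over color channels, then over a local window.
    `img` is an H x W list of (r, g, b) tuples in [0, 1]."""
    H, W, r = len(img), len(img[0]), patch // 2
    mins = [[min(px) for px in row] for row in img]
    dc = []
    for i in range(H):
        dc.append([])
        for j in range(W):
            window = [mins[ii][jj]
                      for ii in range(max(i - r, 0), min(i + r + 1, H))
                      for jj in range(max(j - r, 0), min(j + r + 1, W))]
            dc[i].append(min(window))
    return dc

def transmission_dcp(img, A, omega=0.95):
    """Initial transmission estimate from the Dark Channel Prior:
    t = 1 - omega * dark_channel(I / A)."""
    norm = [[tuple(c / a for c, a in zip(px, A)) for px in row] for row in img]
    dc = dark_channel(norm)
    return [[1.0 - omega * v for v in row] for row in dc]
```

A fully haze-colored region (pixels equal to A) yields t = 1 − ω, while a region with any near-zero channel yields t ≈ 1; ASPDiff then refines such estimates with its learnable adjustment networks.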

26 pages, 62819 KB  
Article
Low-Light Image Dehazing and Enhancement via Multi-Feature Domain Fusion
by Jiaxin Wu, Han Ai, Ping Zhou, Hao Wang, Haifeng Zhang, Gaopeng Zhang and Weining Chen
Remote Sens. 2025, 17(17), 2944; https://doi.org/10.3390/rs17172944 - 25 Aug 2025
Viewed by 1028
Abstract
The acquisition of nighttime remote-sensing visible-light images is often accompanied by low-illumination effects and haze interference, resulting in significant image quality degradation and greatly affecting subsequent applications. Existing low-light enhancement and dehazing algorithms can handle each problem individually, but their simple cascade cannot effectively address unknown real-world degradations. Therefore, we design a joint processing framework, WFDiff, which fully exploits the advantages of Fourier–wavelet dual-domain features and innovatively integrates the inverse diffusion process through differentiable operators to construct a multi-scale degradation collaborative correction system. Specifically, in the reverse diffusion process, a dual-domain feature interaction module is designed, and the joint probability distribution of the generated image and real data is constrained through differentiable operators: on the one hand, a global frequency-domain prior is established by jointly constraining Fourier amplitude and phase, effectively maintaining the radiometric consistency of the image; on the other hand, wavelets are used to capture high-frequency details and edge structures in the spatial domain to improve the prediction process. On this basis, a cross-overlapping-block adaptive smoothing estimation algorithm is proposed, which achieves dynamic fusion of multi-scale features through a differentiable weighting strategy, effectively solving the problem of restoring images of different sizes and avoiding local inconsistencies. In view of the current lack of remote-sensing data for low-light haze scenarios, we constructed the Hazy-Dark dataset. Physical experiments and ablation experiments show that the proposed method outperforms existing single-task or simple cascade methods in terms of image fidelity, detail recovery capability, and visual naturalness, providing a new paradigm for remote-sensing image processing under coupled degradations. 
Full article
(This article belongs to the Section AI Remote Sensing)
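The joint Fourier amplitude-and-phase constraint can be written as an L1 penalty on the two spectra. A 1-D pure-Python sketch using a direct DFT; the actual 2-D transform, weighting, and integration into the diffusion loss are assumptions here, not details from the abstract:

```python
import cmath
import math

def dft(x):
    """Direct 1-D discrete Fourier transform (O(N^2), illustration only)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fourier_amp_phase_loss(pred, target):
    """L1 distance between amplitude spectra plus L1 distance between
    phase spectra -- the dual frequency-domain constraint in sketch form."""
    Fp, Ft = dft(pred), dft(target)
    amp = sum(abs(abs(a) - abs(b)) for a, b in zip(Fp, Ft))
    phase = sum(abs(cmath.phase(a) - cmath.phase(b)) for a, b in zip(Fp, Ft))
    return amp + phase
```

Constraining amplitude preserves the overall radiometric energy of the image, while constraining phase preserves where structures sit, which matches the radiometric-consistency motivation given above.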

21 pages, 25577 KB  
Article
DFFNet: A Dual-Domain Feature Fusion Network for Single Remote Sensing Image Dehazing
by Huazhong Jin, Zhang Chen, Zhina Song and Kaimin Sun
Sensors 2025, 25(16), 5125; https://doi.org/10.3390/s25165125 - 18 Aug 2025
Viewed by 933
Abstract
Single remote sensing image dehazing aims to eliminate atmospheric scattering effects without auxiliary information. It serves as a crucial preprocessing step for enhancing the performance of downstream tasks in remote sensing images. Conventional approaches often struggle to balance haze removal and detail restoration under non-uniform haze distributions. To address this issue, we propose a Dual-domain Feature Fusion Network (DFFNet) for remote sensing image dehazing. DFFNet consists of two specialized units: the Frequency Restore Unit (FRU) and the Context Extract Unit (CEU). As haze primarily manifests as low-frequency energy in the frequency domain, the FRU effectively suppresses haze across the entire image by adaptively modulating low-frequency amplitudes. Meanwhile, to reconstruct details attenuated due to dense haze occlusion, we introduce the CEU. This unit extracts multi-scale spatial features to capture contextual information, providing structural guidance for detail reconstruction. Furthermore, we introduce the Dual-Domain Feature Fusion Module (DDFFM) to establish dependencies between features from FRU and CEU via a designed attention mechanism. This leverages spatial contextual information to guide detail reconstruction during frequency domain haze removal. Experiments on the StateHaze1k, RICE and RRSHID datasets demonstrate that DFFNet achieves competitive performance in both visual quality and quantitative metrics. Full article

21 pages, 6628 KB  
Article
MCA-GAN: A Multi-Scale Contextual Attention GAN for Satellite Remote-Sensing Image Dehazing
by Sufen Zhang, Yongcheng Zhang, Zhaofeng Yu, Shaohua Yang, Huifeng Kang and Jingman Xu
Electronics 2025, 14(15), 3099; https://doi.org/10.3390/electronics14153099 - 3 Aug 2025
Cited by 1 | Viewed by 548
Abstract
With the growing demand for ecological monitoring and geological exploration, high-quality satellite remote-sensing imagery has become indispensable for accurate information extraction and automated analysis. However, haze reduces image contrast and sharpness, significantly impairing quality. Existing dehazing methods, primarily designed for natural images, struggle with remote-sensing images due to their complex imaging conditions and scale diversity. Given this, we propose a novel Multi-Scale Contextual Attention Generative Adversarial Network (MCA-GAN), specifically designed for satellite image dehazing. Our method integrates multi-scale feature extraction with global contextual guidance to enhance the network’s comprehension of complex remote-sensing scenes and its sensitivity to fine details. MCA-GAN incorporates two self-designed key modules: (1) a Multi-Scale Feature Aggregation Block, which employs multi-directional global pooling and multi-scale convolutional branches to bolster the model’s ability to capture land-cover details across varying spatial scales; (2) a Dynamic Contextual Attention Block, which uses a gated mechanism to fuse three-dimensional attention weights with contextual cues, thereby preserving global structural and chromatic consistency while retaining intricate local textures. Extensive qualitative and quantitative experiments on public benchmarks demonstrate that MCA-GAN outperforms other existing methods in both visual fidelity and objective metrics, offering a robust and practical solution for remote-sensing image dehazing. Full article
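The "multi-directional global pooling" in the Multi-Scale Feature Aggregation Block presumably reduces a feature map along each spatial axis separately; a minimal sketch of that idea (the exact directions and aggregation used in MCA-GAN are not specified in the abstract):

```python
def directional_pool(feat):
    """Global average pooling along rows and along columns of a 2-D
    feature map, yielding one descriptor per direction. Axis-wise pooling
    keeps positional information along the non-pooled axis, unlike a
    single global average."""
    H, W = len(feat), len(feat[0])
    row_pool = [sum(row) / W for row in feat]                              # H values
    col_pool = [sum(feat[i][j] for i in range(H)) / H for j in range(W)]  # W values
    return row_pool, col_pool
```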

22 pages, 24173 KB  
Article
ScaleViM-PDD: Multi-Scale EfficientViM with Physical Decoupling and Dual-Domain Fusion for Remote Sensing Image Dehazing
by Hao Zhou, Yalun Wang, Wanting Peng, Xin Guan and Tao Tao
Remote Sens. 2025, 17(15), 2664; https://doi.org/10.3390/rs17152664 - 1 Aug 2025
Viewed by 584
Abstract
Remote sensing images are often degraded by atmospheric haze, which not only reduces image quality but also complicates information extraction, particularly in high-level visual analysis tasks such as object detection and scene classification. State-space models (SSMs) have recently emerged as a powerful paradigm for vision tasks, showing great promise due to their computational efficiency and robust capacity to model global dependencies. However, most existing learning-based dehazing methods lack physical interpretability, leading to weak generalization. Furthermore, they typically rely on spatial features while neglecting crucial frequency domain information, resulting in incomplete feature representation. To address these challenges, we propose ScaleViM-PDD, a novel network that enhances an SSM backbone with two key innovations: a Multi-scale EfficientViM with Physical Decoupling (ScaleViM-P) module and a Dual-Domain Fusion (DD Fusion) module. The ScaleViM-P module synergistically integrates a Physical Decoupling block within a Multi-scale EfficientViM architecture. This design enables the network to mitigate haze interference in a physically grounded manner at each representational scale while simultaneously capturing global contextual information to adaptively handle complex haze distributions. To further address detail loss, the DD Fusion module replaces conventional skip connections by incorporating a novel Frequency Domain Module (FDM) alongside channel and position attention. This allows for a more effective fusion of spatial and frequency features, significantly improving the recovery of fine-grained details, including color and texture information. Extensive experiments on nine publicly available remote sensing datasets demonstrate that ScaleViM-PDD consistently surpasses state-of-the-art baselines in both qualitative and quantitative evaluations, highlighting its strong generalization ability. Full article

20 pages, 21844 KB  
Article
DWTMA-Net: Discrete Wavelet Transform and Multi-Dimensional Attention Network for Remote Sensing Image Dehazing
by Xin Guan, Runxu He, Le Wang, Hao Zhou, Yun Liu and Hailing Xiong
Remote Sens. 2025, 17(12), 2033; https://doi.org/10.3390/rs17122033 - 12 Jun 2025
Cited by 1 | Viewed by 1811
Abstract
Haze caused by atmospheric scattering often leads to color distortion, reduced contrast, and diminished clarity, which significantly degrade the quality of remote sensing images. To address these issues, we propose a novel network called DWTMA-Net that integrates discrete wavelet transform with multi-dimensional attention, aiming to restore image information in both the frequency and spatial domains to enhance overall image quality. Specifically, we design a wavelet transform-based downsampling module that effectively fuses frequency and spatial features. The input first passes through a discrete wavelet block to extract frequency-domain information. These features are then fed into a multi-dimensional attention block, which incorporates pixel attention, Fourier frequency-domain attention, and channel attention. This combination allows the network to capture both global and local characteristics while enhancing deep feature representations through dimensional expansion, thereby improving spatial-domain feature extraction. Experimental results on the SateHaze1k, HRSD, and HazyDet datasets demonstrate the effectiveness of the proposed method in handling remote sensing images with varying haze levels and drone-view scenarios. By recovering both frequency and spatial details, our model achieves significant improvements in dehazing performance compared to existing state-of-the-art approaches. Full article
(This article belongs to the Special Issue Artificial Intelligence Remote Sensing for Earth Observation)
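The discrete wavelet block splits features into low- and high-frequency bands. A one-level 1-D Haar transform shows the mechanics; DWTMA-Net's actual wavelet basis and 2-D arrangement are not given in the abstract, so this is illustration only:

```python
def haar_dwt(x):
    """One-level 1-D Haar DWT: pairwise scaled sums carry the
    low-frequency (approximation) band, pairwise scaled differences the
    high-frequency (detail) band. Input length must be even."""
    s = 2 ** -0.5
    approx = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    """Exact inverse of haar_dwt: perfect reconstruction."""
    s = 2 ** -0.5
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * s, (a - d) * s]
    return out
```

Because each band is half the input length, the transform doubles as the downsampling step the module describes, and the detail band exposes exactly the edges and textures the attention blocks then enhance.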

22 pages, 23370 KB  
Article
A Dehazing Method for UAV Remote Sensing Based on Global and Local Feature Collaboration
by Chenyang Li, Suiping Zhou, Ting Wu, Jiaqi Shi and Feng Guo
Remote Sens. 2025, 17(10), 1688; https://doi.org/10.3390/rs17101688 - 11 May 2025
Cited by 3 | Viewed by 1334
Abstract
Non-homogeneous haze in UAV-based remote sensing images severely deteriorates image quality, introducing significant challenges for downstream interpretation and analysis tasks. To tackle this issue, we propose UAVD-Net, a novel dehazing framework specifically designed to enhance UAV remote sensing imagery affected by spatially varying haze. UAVD-Net integrates both global and local feature extraction mechanisms to effectively remove non-uniform haze across different spatial regions. A Transformer-based Multi-layer Global Information Capturing (MGIC) module is introduced to progressively capture and integrate global contextual features across multiple layers, enabling the model to perceive and adapt to spatial variations in haze distribution. This design significantly enhances the network’s ability to model large-scale structures and correct non-homogeneous haze across the image. In parallel, a local information extraction sub-network equipped with an Adaptive Local Information Enhancement (ALIE) module is used to refine texture and edge details. Additionally, a Cross-channel Feature Fusion (CFF) module is incorporated in the decoder stage to effectively merge global and local features through a channel-wise attention mechanism, generating dehazed outputs that are both structurally coherent and visually natural. Extensive experiments on synthetic and real-world datasets demonstrate that UAVD-Net consistently outperforms existing state-of-the-art dehazing methods. Full article

33 pages, 44660 KB  
Article
NAF-MEEF: A Nonlinear Activation-Free Network Based on Multi-Scale Edge Enhancement and Fusion for Railway Freight Car Image Denoising
by Jiawei Chen, Jianhai Yue, Hang Zhou and Zhunqing Hu
Sensors 2025, 25(9), 2672; https://doi.org/10.3390/s25092672 - 23 Apr 2025
Viewed by 1197
Abstract
Railway freight cars operating in heavy-load and complex outdoor environments are frequently subject to adverse conditions such as haze, temperature fluctuations, and transmission interference, which significantly degrade the quality of the acquired images and introduce substantial noise. Furthermore, the structural complexity of freight cars, coupled with the small size, diversity, and complex structure of defect areas, poses serious challenges for image denoising. Specifically, it becomes extremely difficult to remove noise while simultaneously preserving fine-grained textures and edge details. These challenges distinguish railway freight car image denoising from conventional image restoration tasks, necessitating the design of specialized algorithms that can achieve both effective noise suppression and precise structural detail preservation. To address the challenges of incomplete denoising and poor preservation of details and edge information in railway freight car images, this paper proposes a novel image denoising algorithm named the Nonlinear Activation-Free Network based on Multi-Scale Edge Enhancement and Fusion (NAF-MEEF). The algorithm constructs a Multi-scale Edge Enhancement Initialization Layer to strengthen edge information at multiple scales. Additionally, it employs a Nonlinear Activation-Free feature extractor that effectively captures local and global image information. Leveraging the network’s multi-branch parallelism, a Multi-scale Rotation Fusion Attention Mechanism is developed to perform weight analysis on information across various scales and dimensions. To ensure consistency in image details and structure, this paper introduces a fusion loss function. The experimental results show that compared with recent advanced methods, the proposed algorithm has better noise suppression and edge preservation performance.
The proposed method achieves significant denoising performance on railway freight car images affected by Gaussian, composite, and simulated real-world noise, with PSNR gains of 1.20 dB, 1.45 dB, and 0.69 dB, and SSIM improvements of 2.23%, 2.72%, and 1.08%, respectively. On public benchmarks, it attains average PSNRs of 30.34 dB (Set12) and 28.94 dB (BSD68), outperforming several state-of-the-art methods. In addition, this method also performs well in railway image dehazing tasks and demonstrates good generalization ability in denoising tests of remote sensing ship images, further proving its robustness and practical application value in diverse image restoration tasks. Full article
(This article belongs to the Section Sensing and Imaging)
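The PSNR figures quoted above follow the standard definition, 10·log10(MAX²/MSE); a minimal sketch for two equal-length pixel sequences:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher is better, and identical
    inputs give infinity (zero mean-squared error)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float('inf') if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)
```

As a sense check on the reported margins, a +1.20 dB PSNR gain corresponds to roughly a 24% reduction in MSE (10^(−1.2/10) ≈ 0.76).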

27 pages, 45366 KB  
Article
U-Shaped Dual Attention Vision Mamba Network for Satellite Remote Sensing Single-Image Dehazing
by Tangyu Sui, Guangfeng Xiang, Feinan Chen, Yang Li, Xiayu Tao, Jiazu Zhou, Jin Hong and Zhenwei Qiu
Remote Sens. 2025, 17(6), 1055; https://doi.org/10.3390/rs17061055 - 17 Mar 2025
Cited by 4 | Viewed by 1489
Abstract
In remote sensing single-image dehazing (RSSID), adjacency effects and the multi-scale characteristics of the land surface–atmosphere system highlight the importance of a network’s effective receptive field (ERF) and its ability to capture multi-scale features. Although multi-scale hybrid models combining convolutional neural networks and Transformers show promise, the quadratic complexity of Transformer complicates the balance between ERF and efficiency. Recently, Mamba achieved global ERF with linear complexity and excelled in modeling long-range dependencies, yet its design for sequential data and channel redundancy limits its direct applicability to RSSID. To overcome these challenges and improve performance in RSSID, we present a novel Mamba-based dehazing network, U-shaped Dual Attention Vision Mamba Network (UDAVM-Net) for Satellite RSSID, which integrates multi-path scanning and incorporates dual attention mechanisms to better capture non-uniform haze features while reducing redundancy. The core module, Residual Vision Mamba Blocks (RVMBs), are stacked within a U-Net architecture to enhance multi-scale feature learning. Furthermore, to enhance the model’s applicability to real-world remote sensing data, we abandoned overly simplified haze image degradation models commonly used in existing works, instead adopting an atmospheric radiative transfer model combined with a cloud distortion model to construct a submeter-resolution satellite RSSID dataset. Experimental results demonstrate that UDAVM-Net consistently outperforms competing methods on the StateHaze1K dataset, our newly proposed dataset, and real-world remote sensing images, underscoring its effectiveness in diverse scenarios. Full article

28 pages, 5216 KB  
Article
VBI-Accelerated FPGA Implementation of Autonomous Image Dehazing: Leveraging the Vertical Blanking Interval for Haze-Aware Local Image Blending
by Dat Ngo, Jeonghyeon Son and Bongsoon Kang
Remote Sens. 2025, 17(5), 919; https://doi.org/10.3390/rs17050919 - 5 Mar 2025
Cited by 1 | Viewed by 1577
Abstract
Real-time image dehazing is crucial for remote sensing systems, particularly in applications requiring immediate and reliable visual data. By restoring contrast and fidelity as images are captured, real-time dehazing enhances image quality on the fly. Existing dehazing algorithms often prioritize visual quality and color restoration but rely on computationally intensive methods, making them unsuitable for real-time processing. Moreover, these methods typically perform well under moderate to dense haze conditions but lack adaptability to varying haze levels, limiting their general applicability. To address these challenges, this paper presents an autonomous image dehazing method and its corresponding FPGA-based accelerator, which effectively balance image quality and computational efficiency for real-time processing. Autonomous dehazing is achieved by fusing the input image with its dehazed counterpart, where fusion weights are dynamically determined based on the local haziness degree. The FPGA accelerator performs computations with strict timing requirements during the vertical blanking interval, ensuring smooth and flicker-free processing of input data streams. Experimental results validate the effectiveness of the proposed method, and hardware implementation results demonstrate that the FPGA accelerator achieves a processing rate of 45.34 frames per second at DCI 4K resolution while maintaining efficient utilization of hardware resources. Full article
(This article belongs to the Special Issue Optical Remote Sensing Payloads, from Design to Flight Test)
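The fusion at the heart of the autonomous dehazing step is a per-pixel convex blend whose weight tracks the local haziness degree. A sketch with a deliberately crude haziness proxy; the paper's actual estimator is not specified in the abstract, so `haziness_proxy` is a stand-in:

```python
def haziness_proxy(patch):
    """Toy local haziness score in [0, 1]: bright, low-contrast patches
    (typical of dense haze) score high. Pixel values assumed in [0, 1].
    Stand-in for the paper's real haziness estimator."""
    mean = sum(patch) / len(patch)
    contrast = max(patch) - min(patch)
    return max(0.0, min(1.0, mean - contrast))

def blend(hazy, dehazed, weight):
    """Convex per-pixel fusion: output = (1 - w) * hazy + w * dehazed,
    so heavily hazed regions lean on the dehazed counterpart while clear
    regions pass through nearly untouched."""
    return [(1.0 - weight) * a + weight * b for a, b in zip(hazy, dehazed)]
```

Because the blend is a fixed-latency, pixel-parallel operation, it maps naturally onto the vertical-blanking-interval scheduling the FPGA accelerator uses.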

21 pages, 9794 KB  
Article
Weamba: Weather-Degraded Remote Sensing Image Restoration with Multi-Router State Space Model
by Shuang Wu, Xin He and Xiang Chen
Remote Sens. 2025, 17(3), 458; https://doi.org/10.3390/rs17030458 - 29 Jan 2025
Cited by 1 | Viewed by 1522
Abstract
Adverse weather conditions, such as haze and raindrops, consistently degrade the quality of remote sensing images and affect subsequent vision-based applications. Recent years have witnessed advancements in convolutional neural networks (CNNs) and Transformers in the field of remote sensing image restoration. However, these methods either suffer from limited receptive fields or incur quadratic computational overhead, leading to an imbalance between performance and model efficiency. In this paper, we propose an effective vision state space model (called Weamba) for remote sensing image restoration by modeling long-range pixel dependencies with linear complexity. Specifically, we develop a local-enhanced state space module to better aggregate rich local and global information, both of which are complementary and beneficial for high-quality image reconstruction. Furthermore, we design a multi-router scanning strategy for spatially varying feature extraction, alleviating the issue of redundant information caused by repeated scanning directions in existing methods. Extensive experiments on multiple benchmarks show that the proposed Weamba performs favorably against state-of-the-art approaches. Full article

19 pages, 3737 KB  
Article
End-to-End Multi-Scale Adaptive Remote Sensing Image Dehazing Network
by Xinhua Wang, Botao Yuan, Haoran Dong, Qiankun Hao and Zhuang Li
Sensors 2025, 25(1), 218; https://doi.org/10.3390/s25010218 - 2 Jan 2025
Cited by 3 | Viewed by 1613
Abstract
Satellites frequently encounter atmospheric haze during imaging, leading to the loss of detailed information in remote sensing images and significantly compromising image quality. This detailed information is crucial for applications such as Earth observation and environmental monitoring. In response to the above issues, this paper proposes an end-to-end multi-scale adaptive feature extraction method for remote sensing image dehazing (MSD-Net). In our network model, we introduce a dilated convolution adaptive module to extract global and local detail features of remote sensing images. The design of this module can extract important image features at different scales. Dilated convolution enlarges the receptive field to capture broader contextual information, thereby obtaining a more global feature representation. At the same time, a self-adaptive attention mechanism allows the module to automatically adjust the size of its receptive field based on image content. In this way, important features suitable for different scales can be flexibly extracted to better adapt to the changes in detail in remote sensing images. To fully utilize the features at different scales, we also adopt feature fusion technology. By fusing and integrating features from different scales, more accurate and richer feature representations can be obtained. This process aids in retrieving lost detailed information from remote sensing images, thereby enhancing the overall image quality. Extensive experiments were conducted on the HRRSD and RICE datasets, and the results show that our proposed method better restores the original details and texture information of remote sensing images and is superior to current state-of-the-art dehazing methods. Full article
(This article belongs to the Section Sensing and Imaging)
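Dilation widens a kernel's receptive field by spacing its taps apart without adding weights; a 1-D valid-mode sketch of the basic operation such a module builds on (the adaptive receptive-field selection in MSD-Net is learned and not reproduced here):

```python
def dilated_conv1d(x, kernel, dilation=1):
    """Valid-mode 1-D dilated convolution (cross-correlation form):
    tap j reads x[i + j * dilation], so a k-tap kernel spans
    (k - 1) * dilation + 1 input samples."""
    k = len(kernel)
    span = (k - 1) * dilation + 1
    return [sum(kernel[j] * x[i + j * dilation] for j in range(k))
            for i in range(len(x) - span + 1)]
```

With `dilation=2`, a 3-tap kernel already covers 5 input samples, which is how stacked dilated branches obtain the broad contextual view described above at unchanged parameter cost.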

20 pages, 13020 KB  
Article
Multi-Dimensional and Multi-Scale Physical Dehazing Network for Remote Sensing Images
by Hao Zhou, Le Wang, Qiao Li, Xin Guan and Tao Tao
Remote Sens. 2024, 16(24), 4780; https://doi.org/10.3390/rs16244780 - 22 Dec 2024
Cited by 4 | Viewed by 1851
Abstract
Haze obscures remote sensing images, making it difficult to extract valuable information. To address this problem, we propose a fine detail extraction network that aims to restore image details and improve image quality. Specifically, to capture fine details, we design multi-scale and multi-dimensional extraction blocks and then fuse them to optimize feature extraction. The multi-scale extraction block adopts multi-scale pixel attention and channel attention to extract and combine global and local information from the image. Meanwhile, the multi-dimensional extraction block uses depthwise separable convolutional layers to capture additional dimensional information. Additionally, we integrate an atmospheric scattering model unit into the network to enhance both the dehazing effectiveness and stability. Our experiments on the SateHaze1k and HRSD datasets demonstrate that the proposed method efficiently handles remote sensing images with varying levels of haze, successfully recovers fine details, and achieves superior results compared to existing state-of-the-art dehazing techniques. Full article
(This article belongs to the Special Issue Deep Learning for Remote Sensing Image Enhancement)

17 pages, 3307 KB  
Article
MCADNet: A Multi-Scale Cross-Attention Network for Remote Sensing Image Dehazing
by Tao Tao, Haoran Xu, Xin Guan and Hao Zhou
Mathematics 2024, 12(23), 3650; https://doi.org/10.3390/math12233650 - 21 Nov 2024
Cited by 1 | Viewed by 1668
Abstract
Remote sensing image dehazing (RSID) aims to remove haze from remote sensing images to enhance their quality. Although existing deep learning-based dehazing methods have made significant progress, it is still difficult to completely remove the uneven haze, which often leads to color or structural differences between the dehazed image and the original image. In order to overcome this difficulty, we propose the multi-scale cross-attention dehazing network (MCADNet), which offers a powerful solution for RSID. MCADNet integrates multi-kernel convolution and a multi-head attention mechanism into the U-Net architecture, enabling effective multi-scale information extraction. Additionally, we replace traditional skip connections with a cross-attention-based gating module, enhancing feature extraction and fusion across different scales. This synergy enables the network to maximize the overall similarity between the restored image and the real image while also restoring the details of the complex texture areas in the image. We evaluate MCADNet on two benchmark datasets, Haze1K and RICE, demonstrating its superior performance. Ablation experiments further verify the importance of our key design choices in enhancing dehazing effectiveness. Full article
(This article belongs to the Special Issue Image Processing and Machine Learning with Applications)
