

AI-Driven Remote Sensing Image Restoration and Generation

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 30 June 2026 | Viewed by 1968

Special Issue Editors

Faculty of Science and Technology, University of Macau, Macau 999078, China
Interests: remote sensing; image processing; image restoration

Guest Editor
Department of Computer and Information Science, University of Macau, Macau 999078, China
Interests: remote sensing; computer vision; artificial intelligence

Guest Editor
Department of Computer Vision, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi 23201, United Arab Emirates
Interests: image restoration; remote sensing; computer vision

Guest Editor
Data Science in Earth Observation, Technical University of Munich, Arcisstr. 21, 80333 München, Germany
Interests: satellite image/video processing; image fusion; model and data-driven; machine learning/deep learning

Special Issue Information

Dear Colleagues,

Advanced artificial intelligence techniques have made substantial contributions to intelligent remote sensing image interpretation. However, the availability of high-quality data for such data-intensive and model-heavy approaches is fundamentally limited by high acquisition costs and by degradation arising from atmospheric variability, sensor-induced distortions, and environmental factors. AI-driven remote sensing image restoration and generation leverage advanced AI techniques to enhance data quality and usability, offering superior modeling capability and adaptability compared with traditional methods. Remote sensing image restoration focuses on recovering reliable information from degraded observations, ensuring accurate data for downstream analysis. Remote sensing image generation supplies diverse simulated data to support model training under increasingly stringent requirements on data scale, diversity, and quality.

Recent progress in artificial intelligence, particularly in multimodal foundation models and agent techniques, has established a new methodological paradigm in modern image processing. This Special Issue aims to explore recent advances, methodologies, and applications in AI-driven remote sensing image restoration and generation, covering novel theories, algorithms, and frameworks for improving image quality or synthesizing realistic remote sensing imagery, as well as emerging challenges and future opportunities.

Articles may address, but are not limited to, the following topics:

  1. Advanced learning-based architectures for remote sensing image processing (e.g., multimodal foundation models, agent-based frameworks);
  2. Remote sensing image and video restoration tasks (e.g., super-resolution, denoising, dehazing);
  3. Remote sensing image and video generation frameworks (e.g., diffusion models, GANs, flow matching);
  4. Multimodal and cross-domain fusion techniques for remote sensing data (e.g., visible, hyperspectral, multispectral, SAR);
  5. Other related applications for remote sensing imagery (e.g., evaluation metrics, benchmarks, datasets).
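To make the restoration topic concrete: the degradation models underlying tasks such as dehazing are often simulated during training. Below is a minimal, hypothetical sketch (not tied to any submission) of the classic atmospheric scattering model I = J·t + A·(1 − t); the function names and parameter values are our own illustrative choices.

```python
import numpy as np

def synthesize_haze(clear, transmission=0.6, airlight=0.9):
    """Apply the atmospheric scattering model I = J*t + A*(1 - t).

    clear        -- clean image as a float array scaled to [0, 1]
    transmission -- scalar or per-pixel transmission t in (0, 1]
    airlight     -- global atmospheric light A
    """
    clear = np.asarray(clear, dtype=np.float64)
    hazy = clear * transmission + airlight * (1.0 - transmission)
    return np.clip(hazy, 0.0, 1.0)

def dehaze(hazy, transmission=0.6, airlight=0.9, t_min=0.1):
    """Invert the model: J = (I - A*(1 - t)) / max(t, t_min)."""
    t = np.maximum(transmission, t_min)
    recovered = (np.asarray(hazy, dtype=np.float64) - airlight * (1.0 - t)) / t
    return np.clip(recovered, 0.0, 1.0)
```

With a known transmission map the round trip is exact; real dehazing methods must estimate t and A from the hazy observation, which is where learning-based approaches come in.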

Dr. Shi Chen
Prof. Dr. Yicong Zhou
Dr. Jiaqi Ma
Dr. Jiang He
Dr. Kui Jiang
Dr. Xiangyong Cao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • deep learning
  • machine learning
  • image processing
  • image restoration
  • image generation
  • data fusion

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

26 pages, 11823 KB  
Article
MDD-VIR: Vis-to-IR Remote Sensing Image Generation Method Based on Mechanism-Data Dual-Driven Strategy
by Yue Li, Dechang Sun, Xiaorui Wang, Fafa Ren and Chao Zhang
Remote Sens. 2026, 18(10), 1502; https://doi.org/10.3390/rs18101502 - 11 May 2026
Viewed by 66
Abstract
High-fidelity infrared remote sensing imagery serves as a critical foundation for the development of technologies such as infrared scene simulation and long-range imaging detection. To address the core limitations of the two existing categories of methods, namely the low fidelity and efficiency of traditional physical modeling methods and the insufficient interpretability and weak generalization of deep learning-based generation methods, we propose a visible-to-infrared (Vis-to-IR) remote sensing image generation method based on the multi-dimensional features of scene elements and a mechanism-data dual-driven strategy (MDD-VIR). First, a scene element multi-dimensional feature extractor (SEMFE) is designed by analyzing and reconstructing limited datasets, bridging physical mechanisms and intelligent learning. From a game-theoretic perspective, we present a Unet3+-based frequency-domain adaptive spatial channel reconstruction convolution module (FASCRC_Unet3+) and a feature fusion discrimination method based on proactive material weighting (FFD_PMW) to enhance the model's ability to learn and transform high-value regional and multi-scale features. Furthermore, a collaborative optimization loss function (LossCO) is designed to integrate the advantages of the dual-driven paradigm and facilitate efficient iteration. Experiments show that the average SSIM of MDD-VIR simulated images reached 91.07%. By fusing physical algorithms with intelligent models, this approach enables the Vis-to-IR remote sensing image generation model to achieve the multiple objectives of robust physical consistency, high fidelity, and high efficiency.
(This article belongs to the Special Issue AI-Driven Remote Sensing Image Restoration and Generation)
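The SSIM score reported above can be illustrated with a few lines of NumPy. The sketch below is a simplified single-window SSIM (the standard formulation slides a Gaussian window over the image, e.g. as implemented in scikit-image); the constants follow the usual C1 = (0.01L)² and C2 = (0.03L)² convention, and the function name is ours, not the paper's.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over the whole image (no sliding window)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()           # luminance terms
    vx, vy = x.var(), y.var()             # contrast terms
    cov = ((x - mx) * (y - my)).mean()    # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1; the score falls toward 0 as luminance, contrast, or structure diverge between the simulated and reference infrared images.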

24 pages, 4824 KB  
Article
PCRDiff: A Perlin Noise-Based Cloud Removal Diffusion Model for Remote Sensing Images
by Danjun Liu, Weidong Cao, Zeqing Feng, Zhongbo Li and Yongqiang Xie
Remote Sens. 2026, 18(8), 1171; https://doi.org/10.3390/rs18081171 - 14 Apr 2026
Viewed by 437
Abstract
Remote sensing imagery is a crucial component of remote sensing data. However, in its application to downstream tasks, cloud cover can hinder effective data utilization, making the removal of cloud occlusion from remote sensing images a persistent and important research direction. Recently, diffusion models have demonstrated powerful performance in conditional image generation. However, their direct application to cloud removal yields suboptimal results, as the interference pattern of random Gaussian noise differs significantly from that of actual cloud occlusion. To address this, we developed the Perlin Noise-Based Cloud Removal Diffusion Model (PCRDiff). Compared to traditional diffusion models, PCRDiff abandons random Gaussian noise and instead utilizes Perlin noise to simulate the interference pattern of cloud occlusion on images. Based on this, we designed a novel training and iterative denoising process, along with a corresponding Perlin noise intensity quantization module. Furthermore, we developed a multi-attention fusion module as the backbone of the model to enhance its performance. Extensive experiments on two commonly used benchmark datasets demonstrate that our method achieves superior performance across multiple metrics.
(This article belongs to the Special Issue AI-Driven Remote Sensing Image Restoration and Generation)
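The Perlin noise that PCRDiff substitutes for Gaussian noise is straightforward to generate with NumPy. The sketch below is a textbook 2-D gradient-noise implementation, not the authors' code; all names and parameters are our own.

```python
import numpy as np

def perlin_noise(height, width, cells=4, seed=0):
    """Classic 2-D Perlin (gradient) noise over a `cells` x `cells` lattice."""
    rng = np.random.default_rng(seed)
    # Random unit gradient vectors at the lattice corners.
    angles = rng.uniform(0.0, 2.0 * np.pi, (cells + 1, cells + 1))
    grad = np.stack([np.cos(angles), np.sin(angles)], axis=-1)  # (gx, gy)

    ys = np.linspace(0.0, cells, height, endpoint=False)
    xs = np.linspace(0.0, cells, width, endpoint=False)
    yv, xv = np.meshgrid(ys, xs, indexing="ij")
    y0, x0 = yv.astype(int), xv.astype(int)   # lattice cell indices
    fy, fx = yv - y0, xv - x0                 # position inside each cell

    def corner_dot(iy, ix, dy, dx):
        # Dot product of the corner gradient with the offset vector.
        g = grad[iy, ix]
        return g[..., 0] * dx + g[..., 1] * dy

    n00 = corner_dot(y0,     x0,     fy,       fx)
    n01 = corner_dot(y0,     x0 + 1, fy,       fx - 1.0)
    n10 = corner_dot(y0 + 1, x0,     fy - 1.0, fx)
    n11 = corner_dot(y0 + 1, x0 + 1, fy - 1.0, fx - 1.0)

    def fade(t):
        # Perlin's quintic smoothstep: 6t^5 - 15t^4 + 10t^3.
        return t * t * t * (t * (t * 6.0 - 15.0) + 10.0)

    u, v = fade(fx), fade(fy)
    top = n00 * (1.0 - u) + n01 * u
    bottom = n10 * (1.0 - u) + n11 * u
    return top * (1.0 - v) + bottom * v
```

Rescaled to [0, 1], such noise can serve as an alpha map for synthetic cloud overlays; the paper builds a full forward and reverse diffusion process around this idea.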

23 pages, 131728 KB  
Article
Hyperspectral Image Reconstruction Based on State Space Models
by Xuguang Wang, Haozhe Zhou, Tongxin Wei and Yanchao Zhang
Remote Sens. 2026, 18(7), 990; https://doi.org/10.3390/rs18070990 - 25 Mar 2026
Viewed by 591
Abstract
To address the high hardware costs associated with hyperspectral imaging in precision agriculture, spectral reconstruction (SR) is emerging as a feasible solution for obtaining hyperspectral images. However, existing methods, mainly including CNN and Transformer, face a notable dilemma: convolutional neural networks (CNNs) are limited by their local receptive fields, while Transformers encounter the problem of quadratic computational complexity. Effectively balancing computational efficiency with the capture of long-range spatial dependencies remains a significant challenge. To this end, this study proposes FGA-Mamba (Frequency-Gradient Attention Mamba), a novel reconstruction network based on the Mamba architecture. This network introduces a Frequency-Visual State Space (F-VSS) module, which combines the linear long-range modeling capability of state space models (SSMs) with a frequency-domain self-calibration mechanism to enhance global structural consistency by explicitly modulating frequency features. In addition, we designed an Enhanced Gradient Attention Module (EGAM). This module optimizes local feature representation through a gradient-aware mechanism, effectively compensating for the loss of spatial details. Experimental results on three datasets show that FGA-Mamba achieves significant improvements in both quantitative and qualitative metrics. Moreover, the high consistency observed in vegetation index (VI) calculations confirms its potential for practical agricultural applications.
(This article belongs to the Special Issue AI-Driven Remote Sensing Image Restoration and Generation)
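The frequency-domain modulation idea behind F-VSS can be illustrated, much simplified, with an FFT-based radial gain that scales low- and high-frequency bands of a feature map separately. This is a sketch under our own assumptions, not the paper's module; a learned version would predict the gain map rather than fix it.

```python
import numpy as np

def frequency_modulate(feat, low_gain=1.0, high_gain=0.5):
    """Scale low vs. high frequency content of a 2-D feature map via the FFT.

    A radial mask interpolates from `low_gain` at the spectrum centre (DC)
    to `high_gain` at the corners (highest frequencies).
    """
    h, w = feat.shape
    spectrum = np.fft.fftshift(np.fft.fft2(feat))
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    radius /= radius.max()                    # 0 at centre, 1 at corners
    gain = low_gain + (high_gain - low_gain) * radius
    return np.fft.ifft2(np.fft.ifftshift(spectrum * gain)).real
```

With high_gain below 1 this behaves as a smooth low-pass filter, suppressing fine detail while preserving the global structure encoded in low frequencies.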

29 pages, 17945 KB  
Article
Map Feature Perception Metric for Map Generation Quality Assessment and Loss Optimization
by Jing Bai, Chenxing Sun, Hongyu Chen, Xiechun Lu and Zhanlong Chen
Remote Sens. 2026, 18(6), 924; https://doi.org/10.3390/rs18060924 - 18 Mar 2026
Viewed by 322
Abstract
Evaluating the quality of synthesized maps remains a critical challenge in generative cartography. Prevailing methods rely on pixel-wise computer vision metrics (e.g., PSNR, SSIM). However, these metrics prioritize low-level signal fidelity over high-level geographical logic features and treat pixels as independent units, which prevents them from capturing the complex topological interdependencies and global semantics inherent in maps. Consequently, they inadequately assess essential cartographic features and spatial relationships, often producing outputs with semantic and structural artifacts. To address this limitation, we introduce the map feature perception (MFP) metric, a novel approach that quantifies disparities in high-level cartographic structures and spatial configurations. Unlike pixel-based comparisons, MFP extracts deep elemental-level features to encode cartographic structural integrity and topological relationships comprehensively. Experimental validation demonstrates MFP’s superior capability in evaluating cartographic semantics. Furthermore, when implemented as a loss function, our MFP-based objective consistently outperforms conventional loss functions across diverse generative frameworks and benchmarks. Our findings establish that explicitly optimizing for cartographic features and spatial coherence is crucial for enhancing the geographical plausibility of synthesized maps.
(This article belongs to the Special Issue AI-Driven Remote Sensing Image Restoration and Generation)
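The paper's critique of pixel-wise metrics is easy to demonstrate: a metric built on structural features (here, simple image gradients) ignores distortions that PSNR penalizes heavily, such as a uniform brightness shift. The sketch below is our own illustration of that gap, not the MFP metric itself.

```python
import numpy as np

def psnr(x, y, data_range=1.0):
    """Pixel-wise peak signal-to-noise ratio in decibels."""
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def gradient_distance(x, y):
    """Mean L1 distance between horizontal and vertical finite differences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dist = 0.0
    for axis in (0, 1):
        dist += np.mean(np.abs(np.diff(x, axis=axis) - np.diff(y, axis=axis)))
    return dist
```

A uniform brightness shift of 0.1 leaves every gradient unchanged (distance 0) yet drives PSNR down to 20 dB; MFP pursues the same intuition at the level of deep cartographic features rather than raw gradients.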
