
AI-Driven Remote Sensing Image Restoration and Generation

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 30 June 2026

Special Issue Editors

Faculty of Science and Technology, University of Macau, Macau 999078, China
Interests: remote sensing; image processing; image restoration

Guest Editor
Department of Computer and Information Science, University of Macau, Macau 999078, China
Interests: remote sensing; computer vision; artificial intelligence

Guest Editor
Department of Computer Vision, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi 23201, United Arab Emirates
Interests: image restoration; remote sensing; computer vision

Guest Editor
Chair of Data Science in Earth Observation, Technical University of Munich, 81675 Munich, Germany
Interests: satellite image/video processing; image fusion; multi-task collaboration; data-driven methods

Special Issue Information

Dear Colleagues,

Advanced artificial intelligence techniques have made substantial contributions to intelligent remote sensing image interpretation. However, the availability of high-quality data for such data-intensive, model-heavy approaches is fundamentally limited by high acquisition costs and by degradation arising from atmospheric variability, sensor-induced distortions, and environmental factors. AI-driven remote sensing image restoration and generation aim to leverage advanced AI techniques to enhance data quality and usability, offering superior modeling capability and adaptability compared with traditional methods. Remote sensing image restoration focuses on recovering reliable information from degraded observations, ensuring accurate data for downstream analysis. Remote sensing image generation supplies diverse simulated data to support model training under increasingly stringent requirements on data scale, diversity, and quality.

Recent advances in artificial intelligence, particularly multimodal foundation models and agent techniques, have established these approaches as a central methodological paradigm in modern image processing. This Special Issue aims to explore recent advances, methodologies, and applications in AI-driven remote sensing image restoration and generation, covering novel theories, algorithms, and frameworks for improving image quality or synthesizing realistic remote sensing imagery, as well as emerging challenges and future opportunities.

Articles may address, but are not limited to, the following topics:

  1. Advanced learning-based architectures for remote sensing image processing (e.g., multimodal foundation models, agent-based frameworks);
  2. Remote sensing image and video restoration tasks (e.g., super-resolution, denoising, dehazing);
  3. Remote sensing image and video generation frameworks (e.g., diffusion models, GANs, flow matching);
  4. Multimodal and cross-domain fusion techniques for remote sensing data (e.g., visible, hyperspectral, multispectral, SAR);
  5. Other related applications for remote sensing imagery (e.g., evaluation metrics, benchmarks, datasets).

Dr. Shi Chen
Prof. Dr. Yicong Zhou
Dr. Jiaqi Ma
Dr. Jiang He
Dr. Kui Jiang
Dr. Xiangyong Cao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • deep learning
  • machine learning
  • image processing
  • image restoration
  • image generation
  • data fusion

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

29 pages, 17945 KB  
Article
Map Feature Perception Metric for Map Generation Quality Assessment and Loss Optimization
by Jing Bai, Chenxing Sun, Hongyu Chen, Xiechun Lu and Zhanlong Chen
Remote Sens. 2026, 18(6), 924; https://doi.org/10.3390/rs18060924 - 18 Mar 2026
Abstract
Evaluating the quality of synthesized maps remains a critical challenge in generative cartography. Prevailing methods rely on pixel-wise computer vision metrics (e.g., PSNR, SSIM). However, these metrics prioritize low-level signal fidelity over high-level geographical logic features and treat pixels as independent units, which prevents them from capturing the complex topological interdependencies and global semantics inherent in maps. Consequently, they inadequately assess essential cartographic features and spatial relationships, often producing outputs with semantic and structural artifacts. To address this limitation, we introduce the map feature perception (MFP) metric, a novel approach that quantifies disparities in high-level cartographic structures and spatial configurations. Unlike pixel-based comparisons, MFP extracts deep elemental-level features to encode cartographic structural integrity and topological relationships comprehensively. Experimental validation demonstrates MFP’s superior capability in evaluating cartographic semantics. Furthermore, when implemented as a loss function, our MFP-based objective consistently outperforms conventional loss functions across diverse generative frameworks and benchmarks. Our findings establish that explicitly optimizing for cartographic features and spatial coherence is crucial for enhancing the geographical plausibility of synthesized maps. Full article
(This article belongs to the Special Issue AI-Driven Remote Sensing Image Restoration and Generation)
