

City Future: The Innovative Fusion of Artificial Intelligence and Multi-Source Remote Sensing Data

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: 31 October 2026

Special Issue Editors


Guest Editor
Faculty of Geosciences and Engineering, Southwest Jiaotong University, Chengdu, China
Interests: remote sensing; multi-modal data; data fusion; land-use classification; change detection

Guest Editor
College of Geoscience and Surveying Engineering, China University of Mining and Technology (Beijing), Beijing, China
Interests: urban big data; remote sensing; deep learning; data fusion; environmental monitoring

Guest Editor
Alba Regia Technical Faculty, Obuda University, Budapest, Hungary
Interests: remote sensing; digital photogrammetry; urban ecology studies; 3D modeling

Guest Editor
Institute for Earth Observation, Eurac Research, Bolzano, Italy
Interests: multi-source data fusion; remote sensing; generative model; deep learning; data augmentation; super-resolution; land cover mapping; onboard AI

Guest Editor Assistant
Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
Interests: remote sensing; urban environment analysis; deep learning; climate change

Special Issue Information

Dear Colleagues,

The rapid evolution of artificial intelligence (AI), coupled with the explosive growth of multi-source remote sensing data, is profoundly transforming the way we perceive, analyze, and manage cities. In recent years, the integration of data from optical, SAR, LiDAR, hyperspectral, and social sensing platforms has provided unprecedented opportunities to monitor urban dynamics across multiple spatial and temporal scales. Meanwhile, breakthroughs in deep learning, self-supervised learning, and foundation models have enabled machines to extract complex spatial patterns and semantic knowledge from massive heterogeneous data sources, bridging the gap between observation and intelligent interpretation. As global cities face growing challenges such as climate change, resource scarcity, and population expansion, the fusion of AI and remote sensing offers a powerful path toward intelligent urban governance, sustainable development, and resilient city design.

This Special Issue aims to provide an international forum for sharing innovative theories, methodologies, and applications that harness AI to unlock the full potential of multi-source remote sensing data for future cities. We welcome original research focusing on the synergy between AI and remote sensing for urban studies. Topics of interest include, but are not limited to, the following:

  • AI-driven fusion and interpretation of multi-source data such as optical, SAR, LiDAR, hyperspectral, and social sensing data;
  • Foundation models and self-supervised learning for large-scale urban analysis;
  • Urban land use and land cover mapping, change detection, and 3D reconstruction;
  • Data reliability assessment and uncertainty modeling in heterogeneous urban datasets;
  • Dynamic monitoring of urban infrastructure, transportation, and environment;
  • Intelligent urban planning, risk assessment, and sustainable development supported by remote sensing.

Dr. Maofan Zhao
Dr. Shouhang Du
Prof. Dr. Jancsó Tamás
Dr. Abhishek Singh
Guest Editors

Dr. Jiahao Wu
Guest Editor Assistant

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • urban remote sensing
  • multi-source remote sensing
  • foundation models
  • land use and change detection
  • 3D reconstruction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (3 papers)


Research

34 pages, 8230 KB  
Article
DPF-DETR: Enhancing Drone Image Detection with Density Perception and Multi-Scale Feature Fusion
by Sidi Lai, Zhensong Li, Xiaotan Wei, Yutong Wang and Shiliang Zhu
Remote Sens. 2026, 18(8), 1221; https://doi.org/10.3390/rs18081221 - 17 Apr 2026
Abstract

The DPF-DETR model has been designed to address the challenges encountered in object detection within drone imagery, particularly in scenarios involving significant target scale variations, dense targets, and complex backgrounds. To overcome the limitations of traditional object detection methods, the Density Sensing Mechanism (DSM) and Adaptive Density Map Loss (AdaptiveDM Loss) have been incorporated into the model to provide fine-grained supervision signals. The DSM optimizes the query selection mechanism by utilizing density maps, enabling the number of queries to be adaptively adjusted based on the distribution density of targets, thus improving detection accuracy in dense regions. Furthermore, the precision of the model in detecting dense targets is enhanced by AdaptiveDM Loss, which dynamically adjusts the weights for object localization and classification. Multi-scale feature fusion capabilities are also improved by the Multi-Scale Feature Fusion Network (MSFFN) and the Selective Feature Integration Module (SFIM). The MSFFN refines the fusion of features, which improves the detection of targets across various scales, particularly in complex scenes. Additionally, SFIM enhances the detection accuracy for small targets and complex backgrounds by integrating low-level spatial features with high-level semantic information. The Context-Sensitive Feature Interaction Module (CSFIM) further optimizes multi-scale feature fusion through context-guided interactions, bridging the semantic gap between features of different scales, thus improving the robustness of the model in dense scenarios. Experimental results have shown that DPF-DETR outperforms traditional models and state-of-the-art detection methods across multiple datasets, demonstrating superior robustness and accuracy, especially in dense target detection and complex background scenarios.

26 pages, 9001 KB  
Article
PSiam-HDSFNet: A Pseudo-Siamese Hybrid Dilation Spiral Feature Network for Flood Inundation Change Detection Based on Heterogeneous Remote Sensing Imagery
by Yichuang Luo, Xunqiang Gong, Yuanxin Ye, Pengyuan Lv, Shuting Yang, Ailong Ma and Yanfei Zhong
Remote Sens. 2026, 18(5), 788; https://doi.org/10.3390/rs18050788 - 4 Mar 2026
Abstract

Flood change detection from remote sensing data can be used to identify post-disaster flooded areas, providing decision support for emergency rescue and post-disaster reconstruction. Although the combination of SAR and optical images effectively addresses obscuration by clouds and rain, the inherent difference in their imaging mechanisms poses a challenge to improving the accuracy of flood area change detection. Furthermore, existing flood inundation change detection methods based on heterogeneous remote sensing imagery struggle to distinguish small ground objects within the background from the actual inundated regions. Therefore, a pseudo-Siamese hybrid dilation spiral feature network (PSiam-HDSFNet) is proposed in this paper. Firstly, the feature extraction pipeline progressively processes optical and SAR images through five-layer Enhanced Deep Residual Blocks and five-layer Residual Dense Blocks, respectively. A Hybrid Dilated Pyramid (HDP) module based on a sawtooth-wave-like dilation coefficient is designed to enhance the multi-scale semantics of deep features, selectively reinforcing semantic features in flood areas while weakening the noise semantics from small ground objects. Then, a Spiral Feature Pyramid (SFP) module is designed to make the deep features of SAR and optical images more consistent in spatial structure and numerical distribution, so that the features of flood areas become more prominent while the noise semantics from small ground objects are further suppressed. After that, Galerkin-type attention with linear complexity is introduced into the decoder, rapidly reconstructing the abstract semantic information of floods into interpretable flood features. Finally, the Align OPT-SAR (AlignOS) method is designed to align SAR and optical image features, enabling subsequent flood area detection. Seven metrics are used to compare PSiam-HDSFNet with 14 other methods. The results indicate that PSiam-HDSFNet improves change detection accuracy by extracting and processing the deep features of the two image types without image domain translation, and its F1 scores improve by 7.704%, 7.664%, 4.353%, and 1.111% on the four flood coverage detection tasks compared with the second-best method.

22 pages, 3276 KB  
Article
AFR-CR: An Adaptive Frequency Domain Feature Reconstruction-Based Method for Cloud Removal via SAR-Assisted Remote Sensing Image Fusion
by Xiufang Zhou, Qirui Fang, Xunqiang Gong, Shuting Yang, Tieding Lu, Yuting Wan, Ailong Ma and Yanfei Zhong
Remote Sens. 2026, 18(2), 201; https://doi.org/10.3390/rs18020201 - 8 Jan 2026
Abstract

Optical imagery is often contaminated by clouds to varying degrees, which greatly affects the interpretation and analysis of images. Synthetic Aperture Radar (SAR) can penetrate clouds and mist, and a common strategy in SAR-assisted cloud removal is to fuse SAR and optical data and leverage deep learning networks to reconstruct cloud-free optical imagery. However, these methods do not fully consider the characteristics of the frequency domain during feature integration, resulting in blurred edges in the generated cloud-free optical images. Therefore, an adaptive frequency domain feature reconstruction-based cloud removal method is proposed to solve this problem. The proposed method comprises four key sequential stages. First, shallow features are extracted by fusing optical and SAR images. Second, a Transformer-based encoder captures multi-scale semantic features. Subsequently, the Frequency Domain Decoupling Module (FDDM), using a Dynamic Mask Generation mechanism, explicitly decomposes features into low-frequency structures and high-frequency details, effectively suppressing cloud interference while preserving surface textures. Finally, robust information interaction is facilitated by the Cross-Frequency Reconstruction Module (CFRM) via transposed cross-attention, ensuring precise fusion and reconstruction. Experimental evaluation on the M3R-CR dataset confirms that the proposed approach achieves the best results on all four evaluated metrics, surpassing eight other state-of-the-art methods and demonstrating its effectiveness in the task of SAR-optical fusion for cloud removal.
