
Intelligent Image Analysis: Advancing Remote Sensing with Artificial Intelligence

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 May 2026 | Viewed by 4459

Special Issue Editors


Guest Editor
School of Artificial Intelligence, Xidian University, Xi’an 710071, China
Interests: quantum-inspired evolutionary computation; computational intelligence; machine learning; pattern recognition

Guest Editor
Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi’an 710071, China
Interests: remote sensing image captioning

Special Issue Information

Dear Colleagues,

The rapid evolution of remote sensing technologies has enabled the daily acquisition of massive volumes of remote sensing images (RSIs). However, the exponential growth of RSIs in volume, diversity, and complexity has fundamentally challenged conventional image analysis methods that rely on manually crafted features and shallow machine learning architectures. In recent years, artificial intelligence (AI) has achieved revolutionary breakthroughs, providing a transformative solution for RSI analysis. By leveraging advanced AI techniques, researchers have significantly enhanced the accuracy and efficiency of RSI analysis. Notably, the emergence of remote sensing foundation models has driven substantial progress in cognitive reasoning capabilities for remote sensing applications.

We have therefore organized a Special Issue titled “Intelligent Image Analysis: Advancing Remote Sensing with Artificial Intelligence” in Remote Sensing. This Special Issue aims to provide a platform for sharing and discussing studies on advanced AI technology for RSI analysis. The research topics cover classification, detection, segmentation, and other image analysis tasks, and the data sources include, but are not limited to, optical, hyperspectral, and SAR imagery. We welcome submissions on intelligent RSI interpretation, remote sensing foundation models, multi-modal RSI analysis, remote sensing 3D reconstruction, open-world RSI interpretation, and other remote sensing interpretation applications using advanced AI technology.

Prof. Dr. Yangyang Li
Dr. Tianyang Zhang
Dr. Xinghua Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent RSI interpretation
  • remote sensing foundation models development and downstream applications
  • multi-modal RSI interpretation
  • remote sensing 3D reconstruction
  • open-world remote sensing image interpretation
  • remote sensing interpretation applications with advanced AI technology

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

24 pages, 2660 KB  
Article
SpaA: A Spatial-Aware Network for 3D Object Detection from LiDAR Point Clouds
by Jianfeng Song, Chu Zhang, Cheng Zhang, Li Song, Ruobin Wang and Kun Xie
Remote Sens. 2026, 18(8), 1104; https://doi.org/10.3390/rs18081104 - 8 Apr 2026
Viewed by 315
Abstract
Grid-based 3D object detection methods effectively leverage mature point cloud processing techniques and convolutional neural networks for feature extraction and object localization. However, unlike the 2D object detection domain, the unique characteristics of point cloud data being unevenly and sparsely distributed in space necessitate that detection networks possess a certain level of spatial structural perception. Learning spatial information such as point cloud density and distribution patterns can significantly benefit 3D detection networks. This paper proposes a Spatial-aware Network for 3D object detection (SpaA). Based on the 3D sparse convolution network, we designed a Variable Sparse Convolution network (VS-Conv) capable of perceiving the importance of locations. To address the issue of set abstraction operations completely ignoring spatial structure during local feature aggregation, we proposed a Spatial-aware Density-based Local Aggregation (SDLA) method. Experiments demonstrate that enhancing the spatial-awareness capability of detection networks is crucial for complex 3D object detection. Detection results on the KITTI dataset validate the effectiveness of our method. The test set results of SpaA achieved 3D AP values of 82.20%, 44.04%, and 70.34% for the Car, Pedestrian, and Cyclist categories, respectively, and a competitive 3D mAP of 67.23%, outperforming several published methods.

27 pages, 24363 KB  
Article
Application of High-Precision Classification Method Based on Spatiotemporal Stable Samples and Land Use Policy in Oasis–Desert Mosaic Landscape Areas
by Jinghan Wang, Yuefei Zhou, Miaohang Zhou, Zengjing Song, Xiangyu Ji and Xujun Han
Remote Sens. 2025, 17(23), 3859; https://doi.org/10.3390/rs17233859 - 28 Nov 2025
Viewed by 705
Abstract
Land cover products are essential tools in environmental and ecological research. However, limited attention has been paid to their data quality issues. Many existing products suffer from pronounced spatiotemporal inconsistencies, characterized by frequent and repetitive classification fluctuations in specific regions and years, which substantially compromise the accuracy of analyses and models that rely on them. To address these challenges, this study introduces a method for deriving spatiotemporally stable samples to support high-precision land cover classification. The approach integrates national and regional land-use policies to assess temporal stability and incorporates advanced time-series processing techniques together with innovative vegetation indices to facilitate effective sample reuse. Experimental results show that this method markedly improves classification accuracy across vegetation types and reduces the extent of areas prone to frequent land-cover changes by 22.64%. Compared with existing products of similar spatial resolution, our approach achieves an overall classification accuracy of 91.1%, providing stable, high-quality input data that underpin precise and reliable regional-scale environmental and ecological modeling.

19 pages, 3397 KB  
Article
FEMNet: A Feature-Enriched Mamba Network for Cloud Detection in Remote Sensing Imagery
by Weixing Liu, Bin Luo, Jun Liu, Han Nie and Xin Su
Remote Sens. 2025, 17(15), 2639; https://doi.org/10.3390/rs17152639 - 30 Jul 2025
Cited by 3 | Viewed by 1204
Abstract
Accurate and efficient cloud detection is critical for maintaining the usability of optical remote sensing imagery, particularly in large-scale Earth observation systems. In this study, we propose FEMNet, a lightweight dual-branch network that combines state space modeling with convolutional encoding for multi-class cloud segmentation. The Mamba-based encoder captures long-range semantic dependencies with linear complexity, while a parallel CNN path preserves spatial detail. To address the semantic inconsistency across feature hierarchies and limited context perception in decoding, we introduce the following two targeted modules: a cross-stage semantic enhancement (CSSE) block that adaptively aligns low- and high-level features, and a multi-scale context aggregation (MSCA) block that integrates contextual cues at multiple resolutions. Extensive experiments on five benchmark datasets demonstrate that FEMNet achieves state-of-the-art performance across both binary and multi-class settings, while requiring only 4.4M parameters and 1.3G multiply–accumulate operations. These results highlight FEMNet’s suitability for resource-efficient deployment in real-world remote sensing applications.

24 pages, 19550 KB  
Article
TMTS: A Physics-Based Turbulence Mitigation Network Guided by Turbulence Signatures for Satellite Video
by Jie Yin, Tao Sun, Xiao Zhang, Guorong Zhang, Xue Wan and Jianjun He
Remote Sens. 2025, 17(14), 2422; https://doi.org/10.3390/rs17142422 - 12 Jul 2025
Cited by 1 | Viewed by 1466
Abstract
Atmospheric turbulence severely degrades high-resolution satellite videos through spatiotemporally coupled distortions, including temporal jitter, spatial-variant blur, deformation, and scintillation, thereby constraining downstream analytical capabilities. Restoring turbulence-corrupted videos poses a challenging ill-posed inverse problem due to the inherent randomness of turbulent fluctuations. While existing turbulence mitigation methods for long-range imaging demonstrate partial success, they exhibit limited generalizability and interpretability in large-scale satellite scenarios. Inspired by refractive-index structure constant (Cn2) estimation from degraded sequences, we propose a physics-informed turbulence signature (TS) prior that explicitly captures spatiotemporal distortion patterns to enhance model transparency. Integrating this prior into a lucky imaging framework, we develop a Physics-Based Turbulence Mitigation Network guided by Turbulence Signature (TMTS) to disentangle atmospheric disturbances from satellite videos. The framework employs deformable attention modules guided by turbulence signatures to correct geometric distortions, iterative gated mechanisms for temporal alignment stability, and adaptive multi-frame aggregation to address spatially varying blur. Comprehensive experiments on synthetic and real-world turbulence-degraded satellite videos demonstrate TMTS’s superiority, achieving 0.27 dB PSNR and 0.0015 SSIM improvements over the DATUM baseline while maintaining practical computational efficiency. By bridging turbulence physics with deep learning, our approach provides both performance enhancements and interpretable restoration mechanisms, offering a viable solution for operational satellite video processing under atmospheric disturbances.
