New Insights in Remote Sensing Image Interpretation with Deep Learning

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 30 October 2025 | Viewed by 731

Special Issue Editors


Guest Editor
Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
Interests: artificial intelligence; deep learning; remote sensing; semantic segmentation; change detection

Guest Editor
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
Interests: deep learning framework; remote sensing image processing; semantic segmentation; large foundational model; vector extraction

Guest Editor
National Geomatics Center of China, Beijing 100830, China
Interests: remote sensing; deep learning; computer vision; 3D modeling

Guest Editor
School of Computer Science, China University of Geosciences, Wuhan 430074, China
Interests: deep learning; vector data rendering and processing; GIS applications; artificial intelligence applications in GIS and RS

Special Issue Information

Dear Colleagues,

Remote sensing image interpretation is essential for monitoring environmental changes, managing natural resources, supporting disaster response, and guiding urban planning. It provides timely information about the Earth’s surface conditions, helping decision-makers to make scientifically sound judgments. In recent years, advances in remote sensing technology have enabled us to collect vast amounts of data, but interpreting these data remains a significant challenge due to their complexity and variability.

This Special Issue focuses on the latest advances in the field of remote sensing image interpretation, especially the application of deep learning technology. In the past few years, deep learning has greatly promoted the development of remote sensing data processing, significantly improving the performance of tasks such as object detection, semantic segmentation, image classification, and change detection. Given the broad scope of the Remote Sensing journal, which encompasses various aspects of computer vision and pattern recognition, this Special Issue aligns well with the journal’s mission to promote cutting-edge research in these areas.

This Special Issue aims to bring together the latest research results and explore how advanced deep learning models can overcome the challenges that traditional methods face in tasks such as semantic segmentation and change detection for complex scenes, small samples, and multi-modal data fusion. We invite original research papers on the design of novel network architectures, large-scale pre-trained models, cross-domain adaptation, and multimodal data fusion processing algorithms in order to provide new insights into remote sensing image interpretation. We welcome submissions of research articles, review papers, and case studies that contribute to the theoretical and practical advancement of this field. We also welcome work on the latest large foundational models as applied to remote sensing image semantic segmentation, object detection, change detection, and vector target extraction.

We look forward to your contributions that will further advance the theory and practice of this field.

Dr. Shiyan Pang
Dr. Mi Zhang
Dr. Huiwei Jiang
Dr. Yongyang Xu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • deep learning
  • remote sensing
  • semantic segmentation
  • change detection
  • classification
  • object detection

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

36 pages, 25361 KiB  
Article
Remote Sensing Image Compression via Wavelet-Guided Local Structure Decoupling and Channel–Spatial State Modeling
by Jiahui Liu, Lili Zhang and Xianjun Wang
Remote Sens. 2025, 17(14), 2419; https://doi.org/10.3390/rs17142419 (registering DOI) - 12 Jul 2025
Abstract
As the resolution and data volume of remote sensing imagery continue to grow, achieving efficient compression without sacrificing reconstruction quality remains a major challenge, given that traditional handcrafted codecs often fail to balance rate-distortion performance and computational complexity, while deep learning-based approaches offer superior representational capacity. However, challenges remain in achieving a balance between fine-detail adaptation and computational efficiency. Mamba, a state–space model (SSM)-based architecture, offers linear-time complexity and excels at capturing long-range dependencies in sequences. It has been adopted in remote sensing compression tasks to model long-distance dependencies between pixels. However, despite its effectiveness in global context aggregation, Mamba’s uniform bidirectional scanning is insufficient for capturing high-frequency structures such as edges and textures. Moreover, existing visual state–space (VSS) models built upon Mamba typically treat all channels equally and lack mechanisms to dynamically focus on semantically salient spatial regions. To address these issues, we present an innovative architecture for remote sensing image compression, called the Multi-scale Channel Global Mamba Network (MGMNet). MGMNet integrates a spatial–channel dynamic weighting mechanism into the Mamba architecture, enhancing global semantic modeling while selectively emphasizing informative features. It comprises two key modules. The Wavelet Transform-guided Local Structure Decoupling (WTLS) module applies multi-scale wavelet decomposition to disentangle and separately encode low- and high-frequency components, enabling efficient parallel modeling of global contours and local textures. The Channel–Global Information Modeling (CGIM) module enhances conventional VSS by introducing a dual-path attention strategy that reweights spatial and channel information, improving the modeling of long-range dependencies and edge structures. We conducted extensive evaluations on three distinct remote sensing datasets to assess MGMNet. The results show that MGMNet outperforms the current SOTA models across various performance metrics.
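The core idea behind WTLS, separating an image into a low-frequency approximation (global contours) and high-frequency detail bands (edges, textures) via wavelet decomposition, can be sketched with a hand-rolled one-level 2-D Haar transform. This is an illustrative NumPy sketch of the general technique, not the authors' implementation:

```python
import numpy as np

def haar_dwt2(img):
    """One level of 2-D Haar wavelet decomposition.
    Returns the low-frequency approximation cA and the three
    high-frequency detail bands (horizontal, vertical, diagonal)."""
    # Row transform: pairwise averages (low) and differences (high).
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Column transform on each half-resolution band.
    cA = (lo[0::2, :] + lo[1::2, :]) / 2.0   # smooth approximation
    cH = (lo[0::2, :] - lo[1::2, :]) / 2.0   # horizontal details
    cV = (hi[0::2, :] + hi[1::2, :]) / 2.0   # vertical details
    cD = (hi[0::2, :] - hi[1::2, :]) / 2.0   # diagonal details
    return cA, (cH, cV, cD)

# Toy "image": a smooth horizontal gradient plus one sharp vertical edge.
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
img[:, 31:] += 1.0  # the edge is high-frequency content

cA, (cH, cV, cD) = haar_dwt2(img)
# The smooth gradient lands in cA; the sharp edge shows up
# almost entirely in the vertical detail band cV.
print(cA.shape, np.abs(cV).max() > np.abs(cH).max())
```

In a WTLS-style design, each band could then be routed to a separately parameterized encoder, so that contour and texture statistics are modeled in parallel rather than entangled in one feature map.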

19 pages, 5785 KiB  
Article
RPFusionNet: An Efficient Semantic Segmentation Method for Large-Scale Remote Sensing Images via Parallel Region–Patch Fusion
by Shiyan Pang, Weimin Zeng, Yepeng Shi, Zhiqi Zuo, Kejiang Xiao and Yujun Wu
Remote Sens. 2025, 17(13), 2158; https://doi.org/10.3390/rs17132158 - 24 Jun 2025
Viewed by 350
Abstract
Mainstream deep learning segmentation models are designed for small-sized images, and when applied to high-resolution remote sensing images, the limited information contained in small-sized images greatly restricts a model’s ability to capture complex contextual information at a global scale. To mitigate this challenge, we present RPFusionNet, a novel parallel semantic segmentation framework that is specifically designed to efficiently integrate both local and global features. RPFusionNet leverages two distinct feature representations: REGION (representing large areas) and PATCH (representing smaller regions). This framework comprises two parallel branches: the REGION branch initially downsamples the entire image, then extracts features via a convolutional neural network (CNN)-based encoder, and subsequently captures multi-level information using pooled kernels of varying sizes. This design enables the model to adapt effectively to objects of different scales. In contrast, the PATCH branch utilizes a pixel-level feature extractor to enrich the high-dimensional features of the local region, thereby enhancing the representation of fine-grained details. To model the semantic correlation between the two branches, we have developed the Region–Patch scale fusion module. This module ensures that the network can comprehend a wider range of image contexts while preserving local details, thus bridging the gap between regional and local information. Extensive experiments were conducted on three public datasets: WBDS, AIDS, and Vaihingen. Compared to other state-of-the-art methods, our network achieved the highest accuracy on all three datasets, with an IoU score of 92.08% on the WBDS dataset, 89.99% on the AIDS dataset, and 88.44% on the Vaihingen dataset.
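The two-branch region/patch idea, coarse global context from the downsampled full image fused with a full-resolution local crop, can be sketched in NumPy. The helper below is hypothetical and stands in for the learned encoders with simple pooling and cropping; it is not the RPFusionNet implementation:

```python
import numpy as np

def region_patch_fuse(img, patch_xy, patch=64, region=32):
    """Sketch of a REGION/PATCH two-branch scheme (hypothetical helper).

    REGION branch: the whole image, average-pooled to a coarse
    region x region grid, stands in for global context features.
    PATCH branch: a full-resolution crop provides local detail.
    Fusion: the coarse grid is resampled onto the patch and
    stacked with it as an extra channel."""
    h, w = img.shape
    # REGION branch: average-pool the full image to region x region.
    ry, rx = h // region, w // region
    region_feat = img[:region * ry, :region * rx] \
        .reshape(region, ry, region, rx).mean(axis=(1, 3))
    # PATCH branch: full-resolution local crop at (row, col) = patch_xy.
    y, x = patch_xy
    patch_feat = img[y:y + patch, x:x + patch]
    # Fusion: nearest-neighbour resample of the coarse global grid
    # onto the patch's pixel grid, then channel-wise stacking.
    ys = (y + np.arange(patch)) * region // h
    xs = (x + np.arange(patch)) * region // w
    global_ctx = region_feat[np.ix_(ys, xs)]
    return np.stack([patch_feat, global_ctx], axis=0)

img = np.random.rand(512, 512)
fused = region_patch_fuse(img, (128, 256))
print(fused.shape)  # (2, 64, 64): local detail + global context
```

The appeal of this layout is that each patch "sees" the whole scene through the resampled coarse grid at a fixed, small cost, instead of the quadratic cost of attending over every full-resolution pixel.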
