Remote Sensing Image Change Detection and Feature Enhancement Based on Deep Learning

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 28 February 2026 | Viewed by 962

Special Issue Editors


Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
Interests: intelligent interpretation of remote sensing images; quantitative remote sensing and time-series analysis; multi-source remote sensing image change detection; UAV intelligent application

Guest Editor
School of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan 430072, China
Interests: image processing; change detection and ecological remote sensing monitoring
Department of Geographical Sciences, University of Maryland, College Park, MD 20742, USA
Interests: geospatial information science and remote sensing

Guest Editor Assistant
School of Computer Science, Hubei University of Technology, Wuhan 430068, China
Interests: remote sensing change detection; land use/cover change detection; deep learning

Guest Editor Assistant
School of Computer and Communication Engineering, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
Interests: multi-modal and multi-temporal remote sensing image analysis

Special Issue Information

Dear Colleagues,

Change detection using deep learning has become essential for understanding various environmental changes, including land use and cover transformations, biodiversity shifts, urbanization, and disaster response. A critical aspect of this process is feature enhancement, which strengthens the discriminative representations a network learns from imagery. By innovating deep learning techniques for change detection and enhancing features, we can achieve more efficient and interpretable results, ultimately facilitating informed decision-making for both environmental and socio-economic challenges.
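At its core, bi-temporal change detection compares two co-registered images of the same scene and flags pixels whose appearance has changed; deep learning methods replace hand-set thresholds with learned feature comparisons. The following minimal sketch, using hypothetical toy data and a fixed threshold, illustrates only this general idea:

```python
# Conceptual sketch of bi-temporal change detection by image differencing.
# Toy data and threshold are hypothetical; learned methods compare deep
# features rather than raw pixel values.

def change_mask(img_t1, img_t2, threshold=0.2):
    """Return a binary change mask from two co-registered single-band images."""
    if len(img_t1) != len(img_t2) or any(
        len(r1) != len(r2) for r1, r2 in zip(img_t1, img_t2)
    ):
        raise ValueError("images must share the same dimensions")
    return [
        [1 if abs(a - b) > threshold else 0 for a, b in zip(row1, row2)]
        for row1, row2 in zip(img_t1, img_t2)
    ]

t1 = [[0.1, 0.1], [0.9, 0.1]]  # time 1 reflectance (toy values)
t2 = [[0.1, 0.8], [0.9, 0.1]]  # time 2: only the top-right pixel changed
mask = change_mask(t1, t2)
```

Feature enhancement, in this framing, amounts to transforming the inputs so that genuine changes produce large differences while illumination or seasonal variation does not.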

This Special Issue seeks to compile advanced research on deep learning-based remote sensing image change detection and feature enhancement, emphasizing innovative algorithms, architectures, and applications that improve accuracy, efficiency, and interpretability.

Submissions may address, but are not limited to, the following topics:

  • Remote sensing image change detection.
  • Knowledge-guided, data-driven change detection.
  • Remote sensing intelligent interpretation.
  • Multi-source remote sensing image analysis.
  • Multi-modal image analysis.
  • Multi-temporal remote sensing image analysis.
  • Deep learning for time-series analysis.
  • Object detection.
  • Semantic segmentation.
  • Image matching.
  • Scene association.
  • Land cover/land use change detection.
  • Feature fusion.
  • Feature enhancement.
  • Multi-scale feature extraction.
  • Ecological remote sensing monitoring.

Prof. Dr. Kaimin Sun
Dr. Wenzhuo Li
Dr. Xin Tao
Guest Editors

Dr. Ting Bai
Dr. Wangbin Li
Guest Editor Assistants

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing change detection
  • feature enhancement
  • feature fusion
  • semantic segmentation
  • deep learning
  • multi-modal image analysis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

36 pages, 22245 KB  
Article
CMSNet: A SAM-Enhanced CNN–Mamba Framework for Damaged Building Change Detection in Remote Sensing Imagery
by Jianli Zhang, Liwei Tao, Wenbo Wei, Pengfei Ma and Mengdi Shi
Remote Sens. 2025, 17(23), 3913; https://doi.org/10.3390/rs17233913 - 3 Dec 2025
Viewed by 394
Abstract
In war and explosion scenarios, buildings often suffer varying degrees of damage characterized by complex, irregular, and fragmented spatial patterns, posing significant challenges for remote sensing–based change detection. Additionally, the scarcity of high-quality datasets limits the development and generalization of deep learning approaches. To overcome these issues, we propose CMSNet, an end-to-end framework that integrates the structural priors of the Segment Anything Model (SAM) with the efficient temporal modeling and fine-grained representation capabilities of CNN–Mamba. Specifically, CMSNet adopts CNN–Mamba as the backbone to extract multi-scale semantic features from bi-temporal images, while SAM-derived visual priors guide the network to focus on building boundaries and structural variations. A Pre-trained Visual Prior-Guided Feature Fusion Module (PVPF-FM) is introduced to align and fuse these priors with change features, enhancing robustness against local damage, non-rigid deformations, and complex background interference. Furthermore, we construct a new RWSBD (Real-world War Scene Building Damage) dataset based on Gaza war scenes, comprising 42,732 annotated building damage instances across diverse scales, offering a strong benchmark for real-world scenarios. Extensive experiments on RWSBD and three public datasets (CWBD, WHU-CD, and LEVIR-CD+) demonstrate that CMSNet consistently outperforms eight state-of-the-art methods in both quantitative metrics (F1, IoU, Precision, Recall) and qualitative evaluations, especially in fine-grained boundary preservation, small-scale change detection, and complex scene adaptability. Overall, this work introduces a novel detection framework that combines foundation model priors with efficient change modeling, along with a new large-scale war damage dataset, contributing valuable advances to both research and practical applications in remote sensing change detection. 
Additionally, the strong generalization ability and efficient architecture of CMSNet highlight its potential for scalable deployment and practical use in large-area post-disaster assessment.
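The abstract describes fusing foundation-model priors with change features. As a loose, hypothetical sketch of that general idea only (not CMSNet's actual PVPF-FM, whose details are in the paper), a boundary-confidence prior in [0, 1] can re-weight bi-temporal difference features so that changes near building structures are emphasized:

```python
# Hypothetical sketch of prior-guided feature fusion: blend raw difference
# features with prior-emphasized ones. Illustrative only; the function name,
# alpha blending, and toy values are assumptions, not the published module.

def prior_guided_fusion(diff_feat, prior, alpha=0.5):
    """Blend difference features with versions weighted by a structural prior."""
    return [
        [(1 - alpha) * d + alpha * d * p for d, p in zip(drow, prow)]
        for drow, prow in zip(diff_feat, prior)
    ]

diff = [[0.2, 0.8], [0.4, 0.0]]    # bi-temporal difference magnitudes (toy)
prior = [[0.0, 1.0], [0.5, 0.0]]   # e.g., boundary confidence from a segmenter
fused = prior_guided_fusion(diff, prior)
```

Responses supported by the prior pass through at full strength, while responses in background regions are damped, which mirrors the stated goal of suppressing complex background interference.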

27 pages, 10767 KB  
Article
HCTANet: Hierarchical Cross-Temporal Attention Network for Semantic Change Detection in Complex Remote Sensing Scenes
by Zhuli Xie, Gang Wan, Zhanji Wei, Nan Li and Guangde Sun
Remote Sens. 2025, 17(23), 3906; https://doi.org/10.3390/rs17233906 - 2 Dec 2025
Viewed by 204
Abstract
Semantic change detection has become a key technology for monitoring the evolution of land cover and land use categories at the semantic level. However, existing methods often lack effective information interaction and fail to capture changes at multiple granularities using single-scale features, resulting in inconsistent outcomes and frequent missed or false detections. To address these challenges, we propose a three-branch model HCTANet, which enhances spatial and semantic feature representations at each time stage and models semantic correlations and differences between multi-temporal images through three innovative modules. First, the multi-scale change mapping association module extracts and fuses multi-resolution dual-temporal difference features in parallel, explicitly constraining semantic segmentation results with the change area output. Second, an adaptive collaborative semantic attention mechanism is introduced, modeling the semantic correlations of dual-temporal features via dynamic weight fusion and cross-temporal cross-attention. Third, the spatial semantic residual aggregation module aggregates global context and high-resolution shallow features through residual connections to restore pixel-level boundary details. HCTANet is evaluated on the SECOND, SenseEarth 2020 and AirFC datasets, and the results show that it outperforms existing methods in metrics such as mIoU and SeK, demonstrating its superior capability and effectiveness in accurately detecting semantic changes in complex remote sensing scenarios.
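Cross-temporal cross-attention, mentioned in the abstract, generally means that feature vectors from one acquisition date query features from the other via scaled dot-product attention. The sketch below illustrates only this standard mechanism with toy vectors; it is not HCTANet's module, which additionally involves dynamic weight fusion:

```python
# Illustrative scaled dot-product cross-attention between two dates:
# each time-1 vector attends over all time-2 vectors. Toy data; the
# function name and shapes are assumptions for illustration.
import math

def cross_temporal_attention(q_feats, kv_feats):
    """Return time-1 features re-expressed as softmax-weighted time-2 features."""
    d = len(q_feats[0])
    out = []
    for q in q_feats:
        # similarity of this t1 vector to every t2 vector, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in kv_feats]
        m = max(scores)  # subtract max for numerical stability
        weights = [math.exp(s - m) for s in scores]
        z = sum(weights)
        weights = [w / z for w in weights]
        out.append([sum(w * v[j] for w, v in zip(weights, kv_feats))
                    for j in range(d)])
    return out

t1 = [[1.0, 0.0], [0.0, 1.0]]  # time-1 feature vectors (toy)
t2 = [[1.0, 0.0], [0.0, 1.0]]  # time-2 feature vectors (toy)
attended = cross_temporal_attention(t1, t2)
```

Because each time-1 vector is most similar to its time-2 counterpart here, attention concentrates on the matching vector, which is the behavior such modules exploit to align unchanged regions across dates.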
