Intelligent Processing and Analysis of Multi-modal Remote Sensing Data

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 25 May 2025

Special Issue Editors


Dr. Haopeng Zhang
Guest Editor
School of Astronautics, Beihang University, Beijing 102206, China
Interests: image processing; pattern recognition; computer vision; machine learning; remote sensing; aerospace exploration

Dr. Giorgio Licciardi
Guest Editor
Hypatia Research Consortium, Via del Politecnico SNC, C/O Italian Space Agency, 00133 Rome, Italy
Interests: hyperspectral image analysis; machine learning; deep learning techniques; dimensionality reduction; super-resolution; spectral unmixing; data fusion

Special Issue Information

Dear Colleagues,

The processing and automatic interpretation of the extensive remote sensing data obtained by satellites in orbit, near-space vehicles, or manned/unmanned aerial vehicles (UAVs) has always been a popular topic in the field of remote sensing. In recent years, artificial intelligence (AI) technology, especially deep learning, has developed rapidly and significantly accelerated research in a range of areas. Intelligent processing and analysis have broad application potential across multiple remote sensing modalities, including panchromatic, multispectral, hyperspectral, infrared, SAR, and LiDAR data. This Special Issue aims to compile recent works in this field that provide methodological contributions and innovative applications. Novel research that employs intelligent methods to address practical problems in remote sensing applications is also welcome. The scope of this Special Issue includes, but is not limited to, the following topics:

  • Intelligent image processing for remote sensing, including restoration (denoising, deblurring and dehazing), super-resolution, enhancement, registration, pan-sharpening, etc.;
  • Unsupervised/weakly-supervised/self-supervised learning for image processing and analysis of remote sensing data;
  • Data fusion/multi-modal data analysis;
  • Domain adaptation for cross-modal data analysis;
  • AI methods for image/scene classification;
  • AI methods for target detection/recognition/tracking;
  • Cloud detection and removal in remote sensing images;
  • Change detection/semantic segmentation for remote sensing;
  • Large models for remote sensing;
  • Deep learning for remote sensing applications.

Dr. Haopeng Zhang
Dr. Giorgio Licciardi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • image processing
  • image analysis
  • artificial intelligence
  • deep learning
  • data fusion
  • multispectral and hyperspectral data
  • synthetic aperture radar
  • LiDAR

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

22 pages, 3682 KiB  
Article
MDFA-Net: Multi-Scale Differential Feature Self-Attention Network for Building Change Detection in Remote Sensing Images
by Yuanling Li, Shengyuan Zou, Tianzhong Zhao and Xiaohui Su
Remote Sens. 2024, 16(18), 3466; https://doi.org/10.3390/rs16183466 - 18 Sep 2024
Abstract
Building change detection (BCD) from remote sensing images is an essential field for urban studies. In this well-developed field, Convolutional Neural Networks (CNNs) and Transformers have been leveraged to empower BCD models in handling multi-scale information. However, accurately detecting subtle changes remains challenging for current models and has been the main bottleneck to improving detection accuracy. In this paper, a multi-scale differential feature self-attention network (MDFA-Net) is proposed to effectively integrate CNN and Transformer by balancing the global receptive field from the self-attention mechanism and the local receptive field from convolutions. In MDFA-Net, two innovative modules were designed. In particular, a hierarchical multi-scale dilated convolution (HMDConv) module was proposed to extract local features with hybrid dilated convolutions, which can ameliorate the effect of the CNN's local bias. In addition, a differential feature self-attention (DFA) module was developed to apply the self-attention mechanism to multi-scale difference feature maps, overcoming the problem that local details may be lost in the global receptive field of the Transformer. The proposed MDFA-Net achieves state-of-the-art accuracy in comparison with related works, e.g., USSFC-Net, on three open datasets: WHU-CD, CDD-CD, and LEVIR-CD. Based on the experimental results, MDFA-Net significantly exceeds other models in F1 score, IoU, and overall accuracy; the F1 score is 93.81%, 95.52%, and 91.21% on the WHU-CD, CDD-CD, and LEVIR-CD datasets, respectively. Furthermore, MDFA-Net achieved first or second place in precision and recall on all three datasets, indicating a better balance between precision and recall than other models. We also found that subtle changes, i.e., small-sized building changes and irregular boundary changes, are better detected thanks to the introduction of HMDConv and DFA. With its better ability to leverage multi-scale differential information than traditional methods, MDFA-Net provides a novel and effective avenue for integrating CNN and Transformer in BCD. Further studies could focus on improving the model's insensitivity to hyper-parameters and its generalizability in practical applications. Full article
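The hybrid dilated-convolution idea behind HMDConv can be illustrated with a minimal numpy sketch. This is an illustrative assumption of the mechanism only, not the authors' implementation (a learned CNN module): each branch applies a 3x3 kernel at a growing dilation rate to the previous branch's output, and the intermediate feature maps are fused, widening the receptive field while retaining local detail.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    # "same"-padded single-channel 2D cross-correlation with a dilated kernel
    # (deep-learning "convolutions" are cross-correlations; no kernel flip)
    kh, kw = kernel.shape
    pad = dilation * (kh // 2)
    xp = np.pad(x, pad)                      # zero-pad both spatial dims
    H, W = x.shape
    out = np.zeros((H, W))
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * xp[i * dilation : i * dilation + H,
                                     j * dilation : j * dilation + W]
    return out

def hmdconv(x, kernels, dilations=(1, 2, 3)):
    # Hierarchical multi-scale branch: each dilation acts on the previous
    # branch's output, and all intermediate feature maps are fused by
    # summation (the fusion rule here is an assumption for illustration).
    feats, h = [], x
    for k, d in zip(kernels, dilations):
        h = dilated_conv2d(h, k, d)
        feats.append(h)
    return sum(feats)
```

With dilations (1, 2, 3) and 3x3 kernels, the cascaded receptive field grows well beyond a plain 3x3 stack, which is the property the abstract credits for detecting small-sized building changes.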

25 pages, 16150 KiB  
Article
Multi-Degradation Super-Resolution Reconstruction for Remote Sensing Images with Reconstruction Features-Guided Kernel Correction
by Yi Qin, Haitao Nie, Jiarong Wang, Huiying Liu, Jiaqi Sun, Ming Zhu, Jie Lu and Qi Pan
Remote Sens. 2024, 16(16), 2915; https://doi.org/10.3390/rs16162915 - 9 Aug 2024
Abstract
A variety of factors cause a reduction in remote sensing image resolution. Unlike super-resolution (SR) reconstruction methods with a single degradation assumption, multi-degradation SR methods aim to learn the degradation kernel from low-resolution (LR) images and reconstruct high-resolution (HR) images, making them more suitable for restoring the resolution of remote sensing images. However, existing multi-degradation SR methods only utilize the given LR images to learn the representation of the degradation kernel. Mismatches between the estimated degradation kernel and the real-world degradation kernel lead to significant performance deterioration in these methods. To address this issue, we design a reconstruction features-guided kernel correction SR network (RFKCNext) for multi-degradation SR reconstruction of remote sensing images. Specifically, the proposed network not only utilizes LR images to extract degradation kernel information but also employs features from SR images to correct the estimated degradation kernel, thereby enhancing its accuracy. RFKCNext utilizes the ConvNext Block (CNB) for global feature modeling, employing CNBs as fundamental units to construct the SR reconstruction subnetwork module (SRConvNext) and the reconstruction features-guided kernel corrector (RFGKCorrector). The SRConvNext reconstructs SR images based on the estimated degradation kernel, and the RFGKCorrector corrects the estimated degradation kernel using reconstruction features from the generated SR images. The two networks iterate alternately, forming an end-to-end trainable network. More importantly, the SRConvNext uses the degradation kernel estimated by the RFGKCorrector for reconstruction, allowing it to perform well even if the kernel deviates from the real-world scenario. In experiments, three levels of noise and five Gaussian blur kernels are considered on the NWPU-RESISC45 remote sensing image dataset to synthesize degraded remote sensing images for training and testing. Compared to existing super-resolution methods, the experimental results demonstrate that our proposed approach achieves significant reconstruction advantages in both quantitative and qualitative evaluations. Additionally, the UCMERCED remote sensing dataset and the real-world remote sensing image dataset provided by the "Tianzhi Cup" Artificial Intelligence Challenge are utilized for further testing. Extensive experiments show that our method delivers more visually plausible results, demonstrating its potential for real-world application. Full article
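The core principle of the kernel corrector — a better reconstruction yields a better degradation-kernel estimate, motivating the alternation between the two networks — can be sketched in 1D with numpy. Everything here (Gaussian-only kernels, a candidate grid, least-squares selection) is a toy assumption standing in for the learned RFGKCorrector:

```python
import numpy as np

def gaussian_kernel(sigma, size=9):
    # 1D Gaussian blur kernel, normalized to sum to 1
    t = np.arange(size) - size // 2
    k = np.exp(-t**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(x, k):
    return np.convolve(x, k, mode="same")

def correct_kernel(lr, reconstruction, candidates):
    # Stand-in for the corrector: pick the degradation kernel whose
    # re-blurred reconstruction best explains the observed LR signal.
    errs = [np.mean((blur(reconstruction, gaussian_kernel(s)) - lr) ** 2)
            for s in candidates]
    return candidates[int(np.argmin(errs))]

# Synthetic 1D "image": a step signal degraded by an unknown Gaussian blur.
hr = np.zeros(128)
hr[40:60] = 1.0
lr = blur(hr, gaussian_kernel(2.0))          # true degradation: sigma = 2.0

candidates = [0.8, 1.2, 1.6, 2.0, 2.4]
# A poor reconstruction (here, the LR input itself) yields an underestimated
# kernel; a good reconstruction recovers the true kernel — which is why
# RFKCNext alternates reconstruction and kernel correction.
sigma_poor = correct_kernel(lr, lr, candidates)    # -> 0.8 (too narrow)
sigma_good = correct_kernel(lr, hr, candidates)    # -> 2.0 (true kernel)
```

In the paper's actual design the corrector refines a learned kernel representation from SR feature maps rather than grid-searching Gaussian widths, but the feedback loop it closes is the same.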
