Machine Learning and Deep Learning Applied to Remote Sensing Image Analysis

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 28 September 2025

Special Issue Editors


Guest Editor
School of Information Science and Technology, Southwest Jiaotong University, Chengdu 610031, China
Interests: remote sensing image super-resolution; deep learning; multi-modal learning

Guest Editor
School of Astronautics, Beihang University, Beijing 100191, China
Interests: hyperspectral remote sensing image processing; image generation; deep learning

Guest Editor
School of Astronautics, Beihang University, Beijing 100191, China
Interests: remote sensing image processing; deep learning

Special Issue Information

Dear Colleagues,

Remote sensing technology has significantly enhanced earth observation by providing vast amounts of high-resolution multimodal data across various sensing platforms, including satellite, airborne, and UAV systems. The rapid advancement of machine learning (ML) and deep learning (DL) techniques has further improved our ability to process and interpret these data, leading to breakthroughs in land cover classification, object detection, environmental monitoring, etc.

This Special Issue will focus on the latest developments in ML and DL methods for remote sensing image analysis. We aim to highlight novel algorithms, innovative applications, and emerging trends that address the challenges of remote sensing interpretation and remote sensing image generation. We welcome original research articles, methodological contributions, and application-driven studies that will advance this field. Submissions can include work on various remote sensing data types, such as multispectral, hyperspectral, synthetic aperture radar (SAR), LiDAR, etc. Potential topics of interest include, but are not limited to, the following:

  • Supervised, unsupervised, and self-supervised learning for remote sensing images;
  • Advanced deep learning architectures for scene classification, object detection, image segmentation, and change detection;
  • Generative models (e.g., generative adversarial networks (GANs), diffusion models, autoregressive models) for image quality improvement and data generation;
  • Multimodal learning and cross-modal learning for remote sensing applications;
  • Vision–language modeling for image captioning, referring segmentation, visual grounding, and visual question answering;
  • Domain adaptation and transfer learning for cross-sensor and cross-region generalization;
  • Real-time processing of remote sensing images.

Dr. Sen Lei
Dr. Liqin Liu
Prof. Dr. Zhengxia Zou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing
  • machine learning
  • deep learning
  • generative models
  • multimodal learning
  • multispectral
  • hyperspectral
  • synthetic aperture radar

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

23 pages, 5811 KiB  
Article
Multi-Attitude Hybrid Network for Remote Sensing Hyperspectral Images Super-Resolution
by Chi Chen, Yunhan Sun, Xueyan Hu, Ning Zhang, Hao Feng, Zheng Li and Yongcheng Wang
Remote Sens. 2025, 17(11), 1947; https://doi.org/10.3390/rs17111947 - 4 Jun 2025
Abstract
Benefiting from the development of deep learning, super-resolution technology for remote sensing hyperspectral images (HSIs) has achieved impressive progress. However, due to the high coupling of complex components in remote sensing HSIs, it is challenging to fully characterize their internal information, which in turn limits the precise reconstruction of detailed texture and spectral features. Therefore, we propose the multi-attitude hybrid network (MAHN) for extracting and characterizing information from multiple feature spaces. On the one hand, we construct the spectral hypergraph cross-attention module (SHCAM) and the spatial hypergraph self-attention module (SHSAM) based on high- and low-frequency features in the spectral and spatial domains, respectively, which are used to capture the main structure and detail changes within the image. On the other hand, high-level semantic information in mixed pixels is parsed by spectral mixture analysis, and a semantic hypergraph 3D module (SH3M) is constructed based on the abundance of each category to enhance the propagation and reconstruction of semantic information. Furthermore, to mitigate domain discrepancies among features, we introduce a sensitive bands attention mechanism (SBAM) to enhance the cross-guidance and fusion of multi-domain features. Extensive experiments demonstrate that our method achieves optimal reconstruction results compared to other state-of-the-art algorithms while effectively reducing computational complexity.
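The self-attention component named in the abstract builds on the standard scaled dot-product pattern. As a rough illustration only (this is not the paper's SHSAM implementation; the function name, random placeholder weights, and token shapes are all assumptions), attention over flattened spatial tokens can be sketched as:

```python
import numpy as np

def spatial_self_attention(tokens, seed=0):
    """Scaled dot-product self-attention over flattened spatial tokens.

    tokens: (N, D) array, one row per pixel/patch position.
    Projection weights are random placeholders; a trained model learns them.
    """
    n, d = tokens.shape
    rng = np.random.default_rng(seed)
    wq, wk, wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(d)                  # (N, N) pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ v                             # (N, D) attended tokens
```

Each output token is a convex combination of all value projections, so spatially distant but similar pixels can reinforce one another, which is the usual motivation for attention in super-resolution.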

24 pages, 5723 KiB  
Article
A Robust Multispectral Reconstruction Network from RGB Images Trained by Diverse Satellite Data and Application in Classification and Detection Tasks
by Xiaoning Zhang, Zhaoyang Peng, Yifei Wang, Fan Ye, Tengying Fu and Hu Zhang
Remote Sens. 2025, 17(11), 1901; https://doi.org/10.3390/rs17111901 - 30 May 2025
Abstract
Multispectral images contain richer spectral signatures than readily available RGB images, making them promising for information perception. However, the relatively high cost of multispectral sensors and their lower spatial resolution limit the widespread application of multispectral data, and existing reconstruction algorithms suffer from a lack of diverse training datasets and insufficient reconstruction accuracy. In response to these issues, this paper proposes a novel and robust network for reconstructing multispectral images from low-cost natural color RGB images, trained on freely available satellite images covering various land cover types. First, to supplement paired natural color RGB and multispectral images, the Houston hyperspectral dataset was used, together with CIE standard colorimetric theory, to train a convolutional neural network, Model-TN, that generates natural color RGB images from true color images. Then, EuroSAT multispectral satellite images covering eight land cover types were converted to natural RGB images with Model-TN to form training image pairs, which were input into a residual network integrating channel attention mechanisms to train the multispectral image reconstruction model, Model-NM. Finally, the feasibility of the reconstructed multispectral images is verified through image classification and target detection. The mean relative absolute error is 0.0081 for generating natural color RGB images and 0.0397 for reconstructing multispectral images. Compared to RGB images, classification and detection accuracies using reconstructed multispectral images improved by 16.67% and 3.09%, respectively. This study further reveals the potential of reconstructing multispectral images from natural color RGB images and its effectiveness in target detection, promoting low-cost visual perception for intelligent unmanned systems.
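The "residual network integrating channel attention mechanisms" mentioned in the abstract follows a widely used squeeze-and-excitation pattern. A minimal sketch of one such block (purely illustrative, not Model-NM itself; function names, the reduction ratio, and the random placeholder weights are assumptions) might look like:

```python
import numpy as np

def channel_attention(feat, reduction=4, seed=0):
    """Squeeze-and-excitation style channel attention on a (C, H, W) feature map.

    Global-average-pool each channel, pass the result through a small
    two-layer bottleneck, and rescale channels by the resulting sigmoid
    gates. Weights here are random placeholders; a real model learns them.
    """
    c = feat.shape[0]
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    squeeze = feat.mean(axis=(1, 2))                # (C,) global average pool
    hidden = np.maximum(w1 @ squeeze, 0.0)          # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # per-channel sigmoid gate
    return feat * gates[:, None, None]              # rescale each channel

def residual_block(feat):
    """Residual block: identity skip plus channel-attended features."""
    return feat + channel_attention(feat)
```

The identity skip keeps gradients flowing through deep stacks of such blocks, while the gates let the network emphasize the spectral channels most informative for reconstruction.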
