Remote Sensing Image Thorough Analysis by Advanced Machine Learning

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (28 February 2025) | Viewed by 4616

Special Issue Editors

Guest Editor
School of Computer Science, Northwestern Polytechnical University, Xi’an 710129, China
Interests: remote sensing; image processing; machine learning

Guest Editor
School of Computer Science, Xi’an University of Posts & Telecommunications, Xi’an 710121, China
Interests: remote sensing; image processing; machine learning

Guest Editor
School of Computer Science, Northwestern Polytechnical University, Xi’an 710129, China
Interests: image processing; machine learning; pattern recognition

Special Issue Information

Dear Colleagues,

Remote sensing imaging captures electromagnetic radiation in various wavelengths, such as visible and infrared, reflected or emitted from Earth's surface or atmosphere, using sensors mounted on aircraft and satellites. It provides valuable information about the physical properties and characteristics of the observed area in a non-invasive and cost-effective manner. Thus, remote sensing images are widely utilized in fields such as environmental monitoring, agriculture, disaster response, urban planning, and geological exploration. Realizing these wide applications depends on thoroughly analyzing the semantic content of each image using advanced machine learning techniques.

Although machine learning techniques, especially deep learning, have achieved great success in the domain of remote sensing image analysis, thoroughly analyzing the semantic content of each remote sensing image is still confronted by significant challenges, e.g., difficulty in matching or fusing cross-modality images; limited generalization capacity in cross-domain, few-shot, or zero-shot settings; and high computational and memory costs. Therefore, more effort should be devoted to exploiting advanced machine learning techniques, e.g., meta-learning, self-supervised learning, and large foundation models, to mitigate these challenges and facilitate the wide application of remote sensing images.

For this Special Issue, we encourage submissions that focus on addressing the challenges in thorough remote sensing analysis using advanced machine learning techniques. This Special Issue welcomes high-quality submissions that provide the community with the most recent advancements on all aspects of remote sensing technologies and applications, including but not limited to the following:

  • Noisy, rainy, and foggy remote sensing image restoration;
  • Spatial and spectral remote sensing image super-resolution;
  • Remote sensing image matching;
  • Multi-modal remote sensing image fusion;
  • Remote sensing image segmentation;
  • Remote sensing object detection;
  • Remote sensing image analysis model acceleration or compression;
  • Other topics on applications of remote sensing image analysis.

Prof. Dr. Lei Zhang
Dr. Chen Ding
Prof. Dr. Wei Wei
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image restoration
  • image super-resolution
  • image matching
  • image fusion
  • image segmentation
  • object detection
  • model acceleration or compression

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research


23 pages, 7802 KiB  
Article
Can Separation Enhance Fusion? An Efficient Framework for Target Detection in Multimodal Remote Sensing Imagery
by Yong Wang, Jiexuan Jia, Rui Liu, Qiusheng Cao, Jie Feng, Danping Li and Lei Wang
Remote Sens. 2025, 17(8), 1350; https://doi.org/10.3390/rs17081350 - 10 Apr 2025
Viewed by 233
Abstract
Target detection in remote sensing images has garnered significant attention due to its wide range of applications. Many traditional methods primarily rely on unimodal data, which often struggle to address the complexities of remote sensing environments. Furthermore, small-target detection remains a critical challenge in remote sensing image analysis, as small targets occupy only a few pixels, making feature extraction difficult and prone to errors. To address these challenges, this paper revisits the existing multimodal fusion methodologies and proposes a novel framework of separation before fusion (SBF). Leveraging this framework, we present Sep-Fusion—an efficient target detection approach tailored for multimodal remote sensing aerial imagery. Within the modality separation module (MSM), the method separates the three RGB channels of visible light images into independent modalities aligned with infrared image channels. Each channel undergoes independent feature extraction through the unimodal block (UB) to effectively capture modality-specific features. The extracted features are then fused using the feature attention fusion (FAF) module, which integrates channel attention and spatial attention mechanisms to enhance multimodal feature interaction. To improve the detection of small targets, an image regeneration module is exploited during the training stage. It incorporates the super-resolution strategy with attention mechanisms to further optimize high-resolution feature representations for subsequent positioning and detection. Sep-Fusion is currently developed on the YOLO series to make itself a potential real-time detector. Its lightweight architecture enables the model to achieve high computational efficiency while maintaining the desired detection accuracy. Experimental results on the multimodal VEDAI dataset show that Sep-Fusion achieves 77.9% mAP50, surpassing many state-of-the-art models. Ablation experiments further illustrate the respective contribution of modality separation and attention fusion. The adaptation of our multimodal method to unimodal target detection is also verified on NWPU VHR-10 and DIOR datasets, which proves Sep-Fusion to be a suitable alternative to current detectors in various remote sensing scenarios. Full article
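The channel-attention half of a fusion module such as FAF can be illustrated with a minimal numpy sketch. This is a hypothetical simplification, not the paper's implementation: the actual FAF module also applies spatial attention, and a learned network would replace the parameter-free gating shown here.

```python
import numpy as np

def channel_attention_fuse(feat_a, feat_b):
    """Fuse two (C, H, W) feature maps with a squeeze-style channel
    attention: concatenate along channels, derive per-channel weights
    from global average pooling, and rescale each channel.
    Illustrative stand-in for an attention-based fusion step."""
    fused = np.concatenate([feat_a, feat_b], axis=0)   # (2C, H, W)
    squeeze = fused.mean(axis=(1, 2))                  # per-channel descriptor
    weights = 1.0 / (1.0 + np.exp(-squeeze))           # sigmoid gate in (0, 1)
    return fused * weights[:, None, None]              # reweighted channels
```

Because the gate lies in (0, 1), each channel is attenuated in proportion to how weakly it responds on average, so informative channels from either modality dominate the fused representation.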
(This article belongs to the Special Issue Remote Sensing Image Thorough Analysis by Advanced Machine Learning)

21 pages, 24146 KiB  
Article
SMEP-DETR: Transformer-Based Ship Detection for SAR Imagery with Multi-Edge Enhancement and Parallel Dilated Convolutions
by Chushi Yu and Yoan Shin
Remote Sens. 2025, 17(6), 953; https://doi.org/10.3390/rs17060953 - 7 Mar 2025
Viewed by 738
Abstract
Synthetic aperture radar (SAR) serves as a pivotal remote sensing technology, offering critical support for ship monitoring, environmental observation, and national defense. Although optical detection methods have achieved good performance, SAR imagery still faces challenges, including speckle, complex backgrounds, and small, dense targets. Reducing false alarms and missed detections while improving detection performance remains a key objective in the field. To address these issues, we propose SMEP-DETR, a transformer-based model with multi-edge enhancement and parallel dilated convolutions. This model integrates a speckle denoising module, a multi-edge information enhancement module, and a parallel dilated convolution and attention pyramid network. Experimental results demonstrate that SMEP-DETR achieves a high mAP of 98.6% on SSDD, 93.2% on HRSID, and 80.0% on LS-SSDD-v1.0, surpassing several state-of-the-art algorithms. Visualization results validate the model’s capability to effectively mitigate the impact of speckle noise while preserving valuable information in both inshore and offshore scenarios. Full article
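The idea behind parallel dilated convolutions can be sketched in one dimension: the same kernel is applied at several dilation rates, enlarging the receptive field without adding parameters, and the branch outputs are merged. This is an illustrative toy, not the SMEP-DETR implementation.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1D convolution with a dilated kernel: taps are spaced
    `dilation` samples apart, so the effective receptive field grows
    with the rate while the parameter count stays fixed."""
    k = len(kernel)
    span = (k - 1) * dilation + 1            # effective receptive field
    out_len = len(x) - span + 1
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(out_len)
    ])

def parallel_dilated(x, kernel, rates=(1, 2, 4)):
    """Run the same kernel at several dilation rates in parallel and
    sum the outputs (cropped to a common length) -- the core idea of a
    parallel dilated convolution branch."""
    outs = [dilated_conv1d(x, kernel, r) for r in rates]
    min_len = min(len(o) for o in outs)      # align branch outputs
    return sum(o[:min_len] for o in outs)
```

Combining several rates lets one branch respond to fine edges while another captures wider context, which is why such pyramids help with both small, dense targets and cluttered backgrounds.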
(This article belongs to the Special Issue Remote Sensing Image Thorough Analysis by Advanced Machine Learning)

22 pages, 12168 KiB  
Article
Multi-Scale Long- and Short-Range Structure Aggregation Learning for Low-Illumination Remote Sensing Imagery Enhancement
by Yu Cao, Yuyuan Tian, Xiuqin Su, Meilin Xie, Wei Hao, Haitao Wang and Fan Wang
Remote Sens. 2025, 17(2), 242; https://doi.org/10.3390/rs17020242 - 11 Jan 2025
Viewed by 548
Abstract
Profiting from their surprising non-linear expressive capacity, deep convolutional neural networks have driven substantial progress in low illumination (LI) remote sensing image enhancement. The key lies in sufficiently exploiting both the specific long-range (e.g., non-local similarity) and short-range (e.g., local continuity) structures distributed across different scales of each input LI image to build an appropriate deep mapping function from the LI images to their corresponding high-quality counterparts. However, most existing methods can only individually exploit the general long-range or short-range structures shared across most images at a single scale, thus limiting their generalization performance in challenging cases. We propose a multi-scale long–short range structure aggregation learning network for remote sensing imagery enhancement. It features a flexible architecture for exploiting features at different scales of the input low illumination (LI) image, with branches including a short-range structure learning module and a long-range structure learning module. These modules extract and combine structural details from the input image at different scales and cast them into pixel-wise scale factors to enhance the image at a finer granularity. The network sufficiently leverages the specific long-range and short-range structures of the input LI image for superior enhancement performance, as demonstrated by extensive experiments on both synthetic and real datasets. Full article
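Casting predicted structure into pixel-wise scale factors amounts to multiplying the image by a per-pixel gain map. Below is a minimal sketch of that final step, with a hypothetical toy gain predictor standing in for the learned network described in the abstract.

```python
import numpy as np

def enhance_with_scale_map(img, scale_map):
    """Apply a pixel-wise scale map to a low-illumination image in
    [0, 1]: each pixel gets its own gain, allowing finer-granularity
    enhancement than a single global factor. Output is clipped back
    to the valid range."""
    return np.clip(img * scale_map, 0.0, 1.0)

def toy_scale_map(img, eps=1e-3):
    """Hypothetical gain predictor: brighten dark pixels more than
    bright ones (inverse-square-root of local luminance). A trained
    network would produce this map from learned structure."""
    luminance = np.maximum(img.mean(axis=-1, keepdims=True), eps)
    return 1.0 / np.sqrt(luminance)
```

The per-pixel formulation is what lets the network brighten shadowed regions aggressively while leaving already well-exposed areas nearly untouched.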
(This article belongs to the Special Issue Remote Sensing Image Thorough Analysis by Advanced Machine Learning)

19 pages, 2735 KiB  
Article
Hierarchical Spectral–Spatial Transformer for Hyperspectral and Multispectral Image Fusion
by Tianxing Zhu, Qin Liu and Lixiang Zhang
Remote Sens. 2024, 16(22), 4127; https://doi.org/10.3390/rs16224127 - 5 Nov 2024
Cited by 1 | Viewed by 1189
Abstract
This paper presents the Hierarchical Spectral–Spatial Transformer (HSST) network, a novel approach applicable to both drone-based and broader remote sensing platforms for integrating hyperspectral (HSI) and multispectral (MSI) imagery. The HSST network improves upon conventional multi-head self-attention transformers by integrating cross attention, effectively capturing spectral and spatial features across different modalities and scales. The network’s hierarchical design facilitates the extraction of multi-scale information and employs a progressive fusion strategy to incrementally refine spatial details through upsampling. Evaluations on three prominent hyperspectral datasets confirm the HSST’s superior efficacy over existing methods. The findings underscore the HSST’s utility for applications, including drone operations, where the high-fidelity fusion of HSI and MSI data is crucial. Full article
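The cross attention that HSST adds to conventional multi-head self-attention is, at its core, scaled dot-product attention in which queries come from one modality and keys/values from the other. A single-head numpy sketch of that primitive (not the HSST network itself):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross attention: each query token (e.g., from
    MSI features) becomes a weighted mixture of the other modality's
    value tokens (e.g., from HSI features), with weights given by
    query-key affinity."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)     # (Nq, Nk) affinities
    return softmax(scores, axis=-1) @ values   # (Nq, d_v) fused tokens
```

Stacking such blocks hierarchically, as the abstract describes, lets spatial detail from one modality progressively refine spectral detail from the other across scales.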
(This article belongs to the Special Issue Remote Sensing Image Thorough Analysis by Advanced Machine Learning)

Review


51 pages, 21553 KiB  
Review
Deep Feature-Based Hyperspectral Object Tracking: An Experimental Survey and Outlook
by Yuchao Wang, Xu Li, Xinyan Yang, Fuyuan Ge, Baoguo Wei, Lixin Li and Shigang Yue
Remote Sens. 2025, 17(4), 645; https://doi.org/10.3390/rs17040645 - 13 Feb 2025
Viewed by 1123
Abstract
With the rapid advancement of hyperspectral imaging technology, hyperspectral object tracking (HOT) has become a research hotspot in the field of remote sensing. Advanced HOT methods have been continuously proposed and validated on scarce datasets in recent years, and they can be roughly divided into handcrafted feature-based methods and deep feature-based methods. Compared with methods based on handcrafted features, deep feature-based methods can extract highly discriminative semantic features from hyperspectral images (HSIs) and achieve excellent tracking performance, making them more favored by the hyperspectral tracking community. However, deep feature-based HOT still faces challenges such as data hunger, band gaps, and low tracking efficiency. Therefore, it is necessary to conduct a thorough review of current trackers and unresolved problems in the HOT field. In this survey, we systematically classify and conduct a comprehensive analysis of 13 state-of-the-art deep feature-based hyperspectral trackers. First, we classify and analyze the trackers based on the framework and tracking process. Second, the trackers are compared and analyzed in terms of tracking accuracy and speed on two datasets for cross-validation. Finally, we design a specialized experiment for small object tracking (SOT) to further validate the tracking performance. Through in-depth investigation, the advantages and weaknesses of current HOT technology based on deep features are clearly demonstrated, which also points out the directions for future development. Full article
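Tracking accuracy in such surveys is commonly summarized as the area under the success plot over IoU thresholds. A minimal sketch of that metric follows; the exact protocol used by the survey is an assumption here, and real benchmarks typically use many more thresholds and sequences.

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def success_curve_auc(pred_boxes, gt_boxes, thresholds=np.linspace(0, 1, 21)):
    """Fraction of frames whose predicted-vs-ground-truth IoU exceeds
    each overlap threshold, averaged over thresholds -- the AUC of the
    success plot reported in tracking benchmarks."""
    ious = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return np.mean([(ious > t).mean() for t in thresholds])
```

Averaging over thresholds rather than fixing a single one rewards trackers that localize tightly, which matters for the small-object experiment the survey describes.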
(This article belongs to the Special Issue Remote Sensing Image Thorough Analysis by Advanced Machine Learning)
