Efficient Object Detection Based on Remote Sensing Images

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 20 April 2026

Special Issue Editors

School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876, China
Interests: remote sensing image understanding; imaging detection and intelligent perception; trustworthy artificial intelligence

Guest Editor
School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
Interests: SAR system design; on-board SAR image processing; integration of imaging and detection

Guest Editor
Radar Research Laboratory, School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
Interests: radar system; SAR imaging technology; artificial intelligence and pattern recognition

Special Issue Information

Dear Colleagues,

With the rapid development of artificial intelligence for object detection and recognition in remote sensing images, an increasing number of detection and recognition models have achieved very high precision. However, certain specialized applications require detection frameworks, models, and methods not only to maintain high accuracy but also to deliver real-time performance and fast inference, demands that stand in stark contrast to the large parameter counts of current models. Therefore, designing and implementing efficient detection frameworks, particularly for complex remote sensing images such as synthetic aperture radar (SAR) and optical images, has become a critical issue in intelligent perception for remote sensing. Furthermore, in the context of future satellite constellations with massive numbers of sensing nodes, it is imperative to develop on-orbit detection and recognition capabilities for satellites, which requires intelligent models not only to ensure accuracy but also to significantly improve inference efficiency. This Special Issue therefore invites papers focusing on the following areas:

  • New lightweight neural network architectures for object detection and recognition, considering on-orbit computation on satellites or drones.
  • Real-time weak and small object detection in high-resolution aerial and satellite images.
  • Trustworthy object detection and recognition, including explainability, robustness, etc.
  • Efficient object detection through new AI techniques, such as few/zero-shot learning, incremental/transfer learning, model distillation, etc.
  • Attention mechanisms, feature mining, and feature refinement.
  • New remote sensing datasets, including high-resolution optical and SAR datasets, etc.
  • Review articles on related topics.

Dr. Yanan You
Prof. Dr. Wei Yang
Dr. Yan Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • SAR weak target detection
  • small object detection
  • trustworthy target detection
  • integration of imaging and detection
  • model compression and acceleration
  • real-time object detection
  • efficient detection architectures
  • optimization of on-orbit detection models

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

20 pages, 13039 KiB  
Article
An Azimuth Ambiguity Suppression Method for SAR Based on Time-Frequency Joint Analysis
by Gangbing Zhou, Ze Yu, Xianxun Yao and Jindong Yu
Remote Sens. 2025, 17(13), 2327; https://doi.org/10.3390/rs17132327 - 7 Jul 2025
Abstract
Azimuth ambiguity caused by spectral aliasing severely degrades the quality of Synthetic Aperture Radar (SAR) images. To suppress azimuth ambiguity while preserving image details as much as possible, this paper proposes an azimuth ambiguity suppression method for SAR based on time-frequency joint analysis. By exploiting the distribution differences of ambiguous signals across different sub-spectra, the method locates azimuth ambiguity in the time domain through multi-sub-spectrum change detection and fusion, followed by ambiguity suppression in the azimuth time-frequency domain. Experimental results demonstrate that the proposed method effectively suppresses azimuth ambiguity while maintaining superior performance in preserving genuine targets.
(This article belongs to the Special Issue Efficient Object Detection Based on Remote Sensing Images)
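The multi-sub-spectrum localization idea described above can be illustrated with a short sketch. Everything below is my simplification, not the authors' implementation: sub-looks are formed from disjoint azimuth sub-bands of a complex signal, and a simple per-pixel variance statistic across sub-looks stands in for the paper's change detection and fusion step.

```python
import numpy as np

def sublook_images(signal, n_sub=3):
    """Split the azimuth spectrum into n_sub disjoint sub-bands and form
    one magnitude image per sub-look. Ambiguous energy occupies the
    sub-spectra unevenly, so sub-looks differ where ambiguity is present."""
    spec = np.fft.fft(signal, axis=-1)
    n = spec.shape[-1]
    edges = np.linspace(0, n, n_sub + 1, dtype=int)
    looks = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.zeros_like(spec)
        band[..., lo:hi] = spec[..., lo:hi]  # keep only one azimuth sub-band
        looks.append(np.abs(np.fft.ifft(band, axis=-1)))
    return np.stack(looks)

def ambiguity_mask(signal, n_sub=3, k=3.0):
    """Flag pixels whose sub-look variance is anomalously high
    (a toy stand-in for multi-sub-spectrum change detection and fusion)."""
    var = sublook_images(signal, n_sub).var(axis=0)
    return var > var.mean() + k * var.std()
```

In a real pipeline the flagged regions would then be notched in the azimuth time-frequency domain rather than simply masked.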

25 pages, 34278 KiB  
Article
Complementary Local–Global Optimization for Few-Shot Object Detection in Remote Sensing
by Yutong Zhang, Xin Lyu, Xin Li, Siqi Zhou, Yiwei Fang, Chenlong Ding, Shengkai Gao and Jiale Chen
Remote Sens. 2025, 17(13), 2136; https://doi.org/10.3390/rs17132136 - 21 Jun 2025
Abstract
Few-shot object detection (FSOD) in remote sensing remains challenging due to the scarcity of annotated samples and the complex background environments in aerial images. Existing methods often struggle to capture fine-grained local features or suffer from bias during global adaptation to novel categories, leading to misclassification as background. To address these issues, we propose a framework that simultaneously enhances local feature learning and global feature adaptation. Specifically, we design an Extensible Local Feature Aggregator Module (ELFAM) that reconstructs object structures via multi-scale recursive attention aggregation. We further introduce a Self-Guided Novel Adaptation (SGNA) module that employs a teacher-student collaborative strategy to generate high-quality pseudo-labels, thereby refining the semantic feature distribution of novel categories. In addition, a Teacher-Guided Dual-Branch Head (TG-DH) is developed to supervise both classification and regression using pseudo-labels generated by the teacher model, further stabilizing and enhancing the semantic features of novel classes. Extensive experiments on the DIOR and iSAID datasets demonstrate that our method achieves superior performance compared to existing state-of-the-art FSOD approaches and validate the effectiveness of all proposed components.
(This article belongs to the Special Issue Efficient Object Detection Based on Remote Sensing Images)
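The teacher-student pseudo-labeling pattern that SGNA and TG-DH build on can be sketched at a high level. The confidence threshold, the EMA momentum, and all function names below are generic assumptions of mine, not the authors' implementation:

```python
import numpy as np

def select_pseudo_labels(teacher_scores, teacher_boxes, thresh=0.8):
    """Keep only high-confidence teacher detections as pseudo-labels.
    Score thresholding is one common filter; the paper's SGNA module
    may use a different criterion."""
    keep = teacher_scores >= thresh
    return teacher_boxes[keep], teacher_scores[keep]

def ema_update(teacher, student, momentum=0.99):
    """Exponential-moving-average teacher update over a dict of parameter
    arrays, a standard choice in teacher-student detection frameworks."""
    return {k: momentum * teacher[k] + (1.0 - momentum) * student[k]
            for k in teacher}
```

The student trains on the filtered pseudo-labels while the teacher tracks a slow average of the student's weights, which tends to stabilize the labels for novel categories.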

27 pages, 9528 KiB  
Article
RSNet: Compact-Align Detection Head Embedded Lightweight Network for Small Object Detection in Remote Sensing
by Qing Dong, Tianxin Han, Gang Wu, Baiyou Qiao and Lina Sun
Remote Sens. 2025, 17(12), 1965; https://doi.org/10.3390/rs17121965 - 6 Jun 2025
Abstract
Detecting small objects in high-resolution remote sensing images presents persistent challenges due to their limited pixel coverage, complex backgrounds, and dense spatial distribution. These difficulties are further exacerbated by the increasing resolution and volume of remote sensing data, which impose stringent demands on both detection accuracy and computational efficiency. Addressing these challenges requires lightweight yet robust detection frameworks tailored for small object detection under resource-constrained conditions. To this end, we propose RSNet, a lightweight and highly efficient detection framework designed specifically for small object detection in high-resolution remote sensing images. At its core, RSNet features the Compact-Align Detection Head (CADH), which enhances scale-adaptive localization and improves detection sensitivity for densely distributed small-scale targets. To preserve features during spatial downsampling, we introduce the Adaptive Downsampling module (ADown), which effectively balances computational efficiency with semantic retention. RSNet also integrates GSConv to enable efficient multi-scale feature fusion while minimizing resource consumption. We further adopt a K-fold cross-validation strategy to enhance the stability and credibility of model evaluation under spatially heterogeneous remote sensing data conditions. To evaluate RSNet, we conduct extensive experiments on two widely recognized remote sensing benchmarks, DOTA and NWPU VHR-10. The results show that RSNet achieves mean Average Precision (mAP) scores of 76.4% and 92.1%, respectively, significantly surpassing existing state-of-the-art models in remote sensing object detection. These findings confirm RSNet's ability to balance high detection accuracy with computational efficiency, making it highly suitable for real-time applications on resource-constrained platforms such as satellite-based remote sensing systems or edge computing devices.
(This article belongs to the Special Issue Efficient Object Detection Based on Remote Sensing Images)
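The K-fold cross-validation strategy mentioned in the abstract is standard; a minimal index-splitting sketch looks like the following (the fold count and shuffling seed are placeholders, and the paper may split spatially rather than randomly):

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) pairs for K-fold cross-validation:
    shuffle once, split into k near-equal folds, and hold out each fold
    in turn as the validation set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```

Averaging the mAP over the k held-out folds gives a more stable estimate than a single fixed split, which is the motivation the abstract cites.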

25 pages, 13827 KiB  
Article
SFG-Net: A Scattering Feature Guidance Network for Oriented Aircraft Detection in SAR Images
by Qingyang Ke, Youming Wu, Wenchao Zhao, Qingbiao Meng, Tian Miao and Xin Gao
Remote Sens. 2025, 17(7), 1193; https://doi.org/10.3390/rs17071193 - 27 Mar 2025
Abstract
Synthetic Aperture Radar (SAR) aircraft detection plays a crucial role in various civilian applications. Benefiting from the powerful feature extraction and analysis capacity of deep learning, general-purpose visual intelligence methods have improved aircraft detection performance. However, the inherent imaging mechanisms of SAR differ fundamentally from those of optical imaging, which poses challenges for SAR aircraft detection. Aircraft targets in SAR imagery typically exhibit indistinct details, discrete features, and weak contextual associations, and are prone to non-target interference, making it difficult for existing visual detectors to capture the critical features of aircraft and limiting further optimization of their performance. To address these issues, we propose the scattering feature guidance network (SFG-Net), which integrates feature extraction, global feature fusion, and label assignment with the essential scattering distribution of targets. This enables the network to focus on critical discriminative features and leverage robust scattering features as guidance to enhance detection accuracy while suppressing interference. The core components of the proposed method are the detail feature supplement (DFS) module and the context-aware scattering feature enhancement (CAFE) module. The former integrates low-level texture and contour features to mitigate detail ambiguity and noise interference, while the latter leverages the global context of strong scattering information to generate more discriminative feature representations, guiding the network to focus on critical scattering regions and improving the learning of essential features. Additionally, a feature scattering center-based label assignment (FLA) strategy is introduced, which utilizes the spatial distribution of scattering information to adaptively adjust sample coverage and ensure that strong scattering regions are prioritized during training. A series of experiments was conducted on the CSAR-AC dataset to validate the effectiveness and generalizability of the proposed method.
(This article belongs to the Special Issue Efficient Object Detection Based on Remote Sensing Images)
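The FLA idea of prioritizing samples near strong scattering responses can be caricatured in a few lines. The neighborhood-maximum weighting below is my own toy stand-in, not the paper's strategy:

```python
import numpy as np

def scattering_weighted_priority(anchor_centers, scatter_map, radius=2):
    """Assign each anchor a priority in [0, 1] from the strongest
    scattering response in a small window around its center, so anchors
    over strong scatterers are favored as positive samples."""
    h, w = scatter_map.shape
    weights = np.zeros(len(anchor_centers))
    for i, (r, c) in enumerate(anchor_centers):
        r0, r1 = max(0, r - radius), min(h, r + radius + 1)
        c0, c1 = max(0, c - radius), min(w, c + radius + 1)
        weights[i] = scatter_map[r0:r1, c0:c1].max()
    return weights / (weights.max() + 1e-12)
```

A real assigner would combine such a prior with IoU or centerness scores when choosing training samples.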

22 pages, 6239 KiB  
Article
Fine-Grained Aircraft Recognition Based on Dynamic Feature Synthesis and Contrastive Learning
by Huiyao Wan, Pazlat Nurmamat, Jie Chen, Yice Cao, Shuai Wang, Yan Zhang and Zhixiang Huang
Remote Sens. 2025, 17(5), 768; https://doi.org/10.3390/rs17050768 - 23 Feb 2025
Cited by 2
Abstract
With the rapid development of deep learning, significant progress has been made in remote sensing image target detection. However, deep learning-based methods face several challenges: (1) the inherent limitations of activation functions and downsampling operations in convolutional networks lead to frequency deviations and the loss of local detail information, affecting fine-grained object recognition; (2) class imbalance and long-tail distributions further degrade the performance of minority categories; and (3) large intra-class variations and small inter-class differences make it difficult for traditional deep learning methods to extract fine-grained discriminative features. To address these issues, we propose a novel remote sensing aircraft recognition method. First, to mitigate the loss of local detail information, we introduce a learnable Gabor filter-based texture feature extractor, which enhances the discriminative feature representation of aircraft categories by capturing detailed texture information. Second, to tackle the long-tail distribution problem, we design a dynamic feature hallucination module that synthesizes diverse hallucinated samples, thereby improving the feature diversity of tail categories. Finally, to handle large intra-class variations and small inter-class differences, we propose a contrastive learning module that enhances the spatial discriminative features of the targets. Extensive experiments on the large-scale fine-grained datasets FAIR1M and MAR20 demonstrate the effectiveness of our method, achieving detection accuracies of 53.56% and 89.72%, respectively, surpassing state-of-the-art performance. These results validate that our approach effectively addresses the key challenges in remote sensing aircraft recognition.
(This article belongs to the Special Issue Efficient Object Detection Based on Remote Sensing Images)
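The learnable Gabor texture extractor builds on the classical Gabor kernel; a minimal NumPy construction is shown below. In the learnable variant the abstract describes, parameters such as sigma, theta, and the wavelength would be trainable; the fixed values here are placeholders:

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0, gamma=0.5):
    """Real-valued Gabor kernel: a Gaussian envelope modulated by a
    cosine wave of wavelength lam along orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # rotate coordinates into the filter's orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / lam + psi)
```

Convolving an image with a bank of such kernels at several orientations yields oriented texture responses, which is the kind of detail information the abstract says convolutional downsampling tends to lose.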
