Advanced Methods and Applications in SAR (Synthetic Aperture Radar) Image Target Detection and Recognition

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 15 June 2026 | Viewed by 4113

Special Issue Editors


Guest Editor
Prof. Dr. Kefeng Ji
College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
Interests: remote sensing information processing; synthetic aperture radar (SAR) image interpretation; machine learning

Guest Editor
Dr. Veraldo Liesenberg
Department of Forest Engineering, Santa Catarina State University (UDESC), Florianópolis 88035-901, SC, Brazil
Interests: SAR; data fusion; data integration; change detection; environmental modeling; hyperspectral remote sensing; spatial analysis

Guest Editor
Prof. Dr. Zhiyong Lv
School of Computer Science and Engineering, Xi’an University of Technology, Xi’an 710054, China
Interests: multimodal remote sensing image collaboration; intelligent surface interpretation; change monitoring

Guest Editor
Dr. Haohao Ren
School of Information and Communication Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, China
Interests: synthetic aperture radar (SAR); image processing; feature extraction; automatic target detection and recognition; deep learning

Guest Editor Assistant
Dr. Zhongzhen Sun
College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
Interests: SAR image object detection and recognition; AI for SAR ship detection; multi-temporal SAR image interpretation

Special Issue Information

Dear Colleagues,

Synthetic aperture radar (SAR), a core remote sensing tool offering all-weather, day-and-night operation, penetration capability, and stable ground imaging, is rapidly evolving toward higher resolution, broader coverage, shorter revisit times, multi-constellation and multi-platform collaborative networking, and multi-frequency, multi-polarization, multi-baseline three-dimensional imaging. Driven by critical needs in ocean monitoring, early disaster warning, and national land and infrastructure security, the acquisition and operational deployment of vast amounts of SAR data are creating a regime of large data volumes, fast updates, and tight timeliness requirements. Concurrently, intelligent technologies such as deep learning, generative large models, self-supervised learning, multi-modal fusion, and cloud-edge collaborative computing are being integrated at an accelerating pace, providing a new paradigm for the data–information–knowledge–decision loop. Against this backdrop of both opportunities and challenges, intelligent detection and recognition in SAR imagery urgently require methodological and engineering breakthroughs, particularly in weak-target detection, detection in complex scenes, cross-sensor and cross-domain generalization, real-time and automated processing, and explainable, trustworthy AI, in order to fully unlock the application potential of next-generation SAR systems.

This Special Issue aims to bring together advanced methods and applications in SAR image target detection and recognition. We invite you to contribute your latest research findings to this issue. Both original research articles and review papers will be accepted.

Research areas may include (but are not limited to) the following:

  1. Satellite/airborne SAR image target detection and recognition;
  2. SAR image detection and recognition assisted by multi-modal data;
  3. Applications of large-model technologies in SAR target detection and recognition;
  4. Semi-supervised and unsupervised learning for SAR target detection and recognition;
  5. Edge/cloud computing-based intelligent SAR image detection and recognition;
  6. Multi-modal/multi-source information fusion for target detection and recognition;
  7. Explainability in SAR image intelligent detection and recognition.

Prof. Dr. Kefeng Ji
Dr. Veraldo Liesenberg
Prof. Dr. Zhiyong Lv
Dr. Haohao Ren
Guest Editors

Dr. Zhongzhen Sun
Guest Editor Assistant

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • synthetic aperture radar (SAR)
  • target detection and recognition
  • multi-modal information fusion
  • explainability

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (6 papers)


Research


39 pages, 9763 KB  
Article
SAR-DRBNet: Adaptive Feature Weaving and Algebraically Equivalent Aggregation for High-Precision Rotated SAR Detection
by Lanfang Lei, Sheng Chang, Zhongzhen Sun, Xinli Zheng, Changyu Liao, Wenjun Wei, Long Ma and Ping Zhong
Remote Sens. 2026, 18(4), 619; https://doi.org/10.3390/rs18040619 - 16 Feb 2026
Viewed by 434
Abstract
Synthetic aperture radar (SAR) imagery is widely used for target detection in complex backgrounds and adverse weather conditions. However, high-precision detection of rotated small targets remains challenging due to severe speckle noise, significant scale variations, and the need for robust rotation-aware representations. To address these issues, we propose SAR-DRBNet, a high-precision rotated small-target detection framework built upon YOLOv13. First, we introduce a Detail-Enhanced Oriented Bounding Box detection head (DEOBB), which leverages multi-branch enhanced convolutions to strengthen fine-grained feature extraction and improve oriented bounding box regression, thereby enhancing rotation sensitivity and localization accuracy for small targets. Second, we design a Ck-MultiDilated Reparameterization Block (CkDRB) that captures multi-scale contextual cues and suppresses speckle interference via multi-branch dilated convolutions and an efficient reparameterization strategy. Third, we propose a Dynamic Feature Weaving module (DynWeave) that integrates global–local dual attention with dynamic large-kernel convolutions to adaptively fuse features across scales and orientations, improving robustness in cluttered SAR scenes. Extensive experiments on three widely used SAR rotated object detection benchmarks (HRSID, RSDD-SAR, and DSSDD) demonstrate that SAR-DRBNet achieves a strong balance between detection accuracy and computational efficiency compared with state-of-the-art oriented bounding box detectors, while exhibiting superior cross-dataset generalization. These results indicate that SAR-DRBNet provides an effective and reliable solution for rotated small-target detection in SAR imagery.

24 pages, 4319 KB  
Article
HLNet: A Lightweight Network for Ship Detection in Complex SAR Environments
by Xiaopeng Guo, Fan Deng, Jie Gong, Jing Zhang, Jiajia Guo, Yong Wang, Yinmei Zeng and Gongquan Li
Remote Sens. 2026, 18(4), 577; https://doi.org/10.3390/rs18040577 - 12 Feb 2026
Viewed by 244
Abstract
The coherent speckle noise in synthetic aperture radar (SAR) imagery, together with complex sea clutter and large variations in ship target scales, poses significant challenges to accurate and robust ship detection, particularly under the strict lightweight constraints required by satellite-borne and airborne platforms. To address this issue, this paper proposes a high-precision lightweight detection network, termed High-Lightweight Net (HLNet), specifically designed for SAR ship detection. The network incorporates a novel multi-scale backbone, Multi-Scale Net (MSNet), which integrates dynamic feature completion and multi-core parallel convolutions to alleviate small-target feature loss and suppress background interference. To further enhance multi-scale feature fusion while reducing model complexity, a lightweight path aggregation feature pyramid network, High-Lightweight Feature Pyramid (HLPAFPN), is introduced by reconstructing fusion pathways and removing redundant channels. In addition, a lightweight detection head, High-Lightweight Head (HLHead), is designed by combining grouped convolutions with distribution focal loss to improve localization robustness under low signal-to-noise ratio conditions. Extensive experiments conducted on the public SSDD and HRSID datasets demonstrate that HLNet achieves mAP50 scores of 98.3% and 91.7%, respectively, with only 0.66 M parameters. Extensive evaluations on the more challenging CSID subset, composed of complex scenes selected from SSDD and HRSID, demonstrate that HLNet attains an mAP50 of 75.9%, outperforming the baseline by 4.3%. These results indicate that HLNet achieves an effective balance between detection accuracy and computational efficiency, making it well-suited for deployment on resource-constrained SAR platforms.

22 pages, 13053 KB  
Article
Lightweight Complex-Valued Siamese Network for Few-Shot PolSAR Image Classification
by Yinyin Jiang, Rongzhen Du, Wanying Song, Peng Zhang, Lei Liu and Zhenxi Zhang
Remote Sens. 2026, 18(2), 344; https://doi.org/10.3390/rs18020344 - 20 Jan 2026
Viewed by 353
Abstract
Complex-valued convolutional neural networks (CVCNNs) have demonstrated strong capabilities for polarimetric synthetic aperture radar (PolSAR) image classification by effectively integrating both the amplitude and phase information inherent in polarimetric data. However, their practical deployment faces significant challenges due to high computational costs and performance degradation caused by extremely limited labeled samples. To address these challenges, a lightweight CV Siamese network (LCVSNet) is proposed for few-shot PolSAR image classification. Considering the constraints of limited hardware resources in practical applications, simple one-dimensional (1D) CV convolutions along the scattering dimension are combined with two-dimensional (2D) lightweight CV convolutions. In this way, the inter-element dependencies of the polarimetric coherency matrix and the spatial correlations between neighboring units can be captured effectively, while simultaneously reducing computational costs. Furthermore, LCVSNet incorporates a contrastive learning (CL) projection head to explicitly optimize the feature space. This optimization effectively enhances feature discriminability, leading to accurate classification with a limited number of labeled samples. Experiments on three real PolSAR datasets demonstrate the effectiveness and practical utility of LCVSNet for PolSAR image classification with a small number of labeled samples.

27 pages, 27172 KB  
Article
Shadow Spatiotemporal Track-Before-Detect Approach for Distributed UAV-Borne Video SAR
by Liwu Wen, Ming Ke, Ming Jiang, Jinshan Ding and Xuejun Huang
Remote Sens. 2026, 18(2), 343; https://doi.org/10.3390/rs18020343 - 20 Jan 2026
Viewed by 482
Abstract
Shadow detection has become a key technology for ground-based moving target indication in video synthetic aperture radar (SAR). However, single-platform video SAR faces the issue of moving-target shadows being occluded. This paper proposes a new dynamic programming-based spatiotemporal track-before-detect (DP-ST-TBD) algorithm for moving-target shadow indication based on a distributed unmanned aerial vehicle (UAV)-borne video SAR system. First, this approach establishes a spatiotemporal cooperative shadow detection model, which extends the temporal accumulation of traditional DP-TBD to spatiotemporal accumulation through state temporal transition and spatial mapping. Second, an adaptive state transition method is proposed to address the difficulty that the fixed state transition of traditional DP-TBD has with maneuvering-target detection. It utilizes the target's Doppler features from heterogeneous-view range-Doppler (RD) spectra to assist the search for the target's shadow within the image domain. Finally, a state shrinking–sparseness strategy is used to reduce the computational burden caused by dense states in the spatiotemporal search; thus, multi-platform, multi-frame accumulation of moving-target shadows can be realized based on sparse states. Comparative experiments demonstrate that the proposed DP-ST-TBD improves shadow-detection performance through heterogeneous-view measurements while reducing the number of frames required for reliable detection compared with the conventional two-step detection method (single-platform shadow detection followed by multi-platform track fusion).

43 pages, 42157 KB  
Article
SAREval: A Multi-Dimensional and Multi-Task Benchmark for Evaluating Visual Language Models on SAR Image Understanding
by Ziyan Wang, Lei Liu, Gang Wan, Yuchen Lu, Fengjie Zheng, Guangde Sun, Yixiang Huang, Shihao Guo, Xinyi Li and Liang Yuan
Remote Sens. 2026, 18(1), 82; https://doi.org/10.3390/rs18010082 - 25 Dec 2025
Viewed by 1079
Abstract
Vision-Language Models (VLMs) demonstrate significant potential for remote sensing interpretation through multimodal fusion and semantic representation of imagery. However, their adaptation to Synthetic Aperture Radar (SAR) remains challenging due to fundamental differences in imaging mechanisms and physical properties compared to optical remote sensing. SAREval, the first comprehensive benchmark specifically designed for SAR image understanding, incorporates SAR-specific characteristics, including scattering mechanisms and polarization features, through a hierarchical framework spanning perception, reasoning, and robustness capabilities. It encompasses 20 tasks, ranging from image classification to physical-attribute inference, with over 10,000 high-quality image–text pairs. Extensive experiments conducted on 11 mainstream VLMs reveal substantial limitations in SAR image interpretation. Models achieve merely 25.35% accuracy in fine-grained ship classification tasks and demonstrate significant difficulties in establishing mappings between visual features and physical parameters. Furthermore, some models exhibit unexpected performance improvements under certain noise conditions, challenging conventional understanding of robustness. SAREval establishes an essential foundation for developing and evaluating VLMs in SAR image interpretation, providing standardized assessment protocols and quality-controlled annotations for cross-modal remote sensing research.

Review


36 pages, 5941 KB  
Review
Physics-Driven SAR Target Detection: A Review and Perspective
by Xinyi Li, Lei Liu, Gang Wan, Fengjie Zheng, Shihao Guo, Guangde Sun, Ziyan Wang and Xiaoxuan Liu
Remote Sens. 2026, 18(2), 200; https://doi.org/10.3390/rs18020200 - 7 Jan 2026
Viewed by 902
Abstract
Synthetic Aperture Radar (SAR) is highly valuable for target detection due to its all-weather, day-night operational capability and a degree of ground-penetration potential. However, traditional SAR target detection methods often directly adapt algorithms designed for optical imagery, simplistically treating SAR data as grayscale images. This approach overlooks SAR's unique physical nature, failing to account for key factors such as backscatter variations across polarizations, changes in target representation across resolutions, and detection-threshold shifts due to clutter background heterogeneity. Consequently, these limitations lead to insufficient cross-polarization adaptability, feature masking, and degraded recognition accuracy under clutter interference. To address these challenges, this paper systematically reviews recent research advances in SAR target detection, focusing on physical constraints including polarization characteristics, scattering mechanisms, signal-domain properties, and resolution effects. Finally, it outlines promising research directions to guide future developments in physics-aware SAR target detection.
