Search Results (137)

Search Parameters:
Keywords = radar automatic target recognition

23 pages, 6967 KB  
Article
Semantics- and Physics-Guided Generative Network for Radar HRRP Generalized Zero-Shot Recognition
by Jiaqi Zhou, Tao Zhang, Siyuan Mu, Yuze Gao, Feiming Wei and Wenxian Yu
Remote Sens. 2026, 18(1), 4; https://doi.org/10.3390/rs18010004 - 19 Dec 2025
Abstract
High-resolution range profile (HRRP) target recognition has garnered significant attention in radar automatic target recognition (RATR) research for its rich structural information and low computational costs. With the rapid advancements in deep learning, methods for HRRP target recognition that leverage deep neural networks have emerged as the dominant approaches. Nevertheless, these traditional closed-set recognition methods require labeled data for every class in training, while in reality, seen classes and unseen classes coexist. Therefore, it is necessary to explore methods that can identify both seen and unseen classes simultaneously. To this end, a semantics- and physics-guided generative network (SPGGN) is proposed for HRRP generalized zero-shot recognition; it combines a constructed knowledge graph with attribute vectors to comprehensively represent semantics and reconstructs strong scattering points to introduce physical constraints. Specifically, to boost robustness, we reconstruct the strong scattering points from deep features of HRRPs, where class-aware contrastive learning in the middle layer effectively mitigates the influence of target-aspect variations. In the classification stage, discriminative features are produced through attention-based feature fusion to capture multi-faceted information, while a balancing loss abates the bias towards seen classes. Experiments on two measured aircraft HRRP datasets validate the superior recognition performance of our method. Full article
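As a rough illustration of the class-aware contrastive learning mentioned in this abstract, the following NumPy sketch implements a generic supervised contrastive loss over feature vectors: same-class pairs are pulled together and different-class pairs pushed apart. The function name, temperature, and formulation are illustrative assumptions, not the SPGGN implementation.

```python
import numpy as np

def supervised_contrastive_loss(feats, labels, tau=0.1):
    """Class-aware contrastive loss (generic sketch): for each anchor,
    maximize the softmax probability of its same-class neighbors."""
    # L2-normalise features so dot products are cosine similarities
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T / tau                      # pairwise similarity logits
    n = len(labels)
    mask_self = np.eye(n, dtype=bool)
    logits = np.where(mask_self, -np.inf, sim)   # exclude self-pairs
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss = 0.0
    for i in range(n):
        pos = (labels == labels[i]) & ~mask_self[i]
        if pos.any():
            loss += -log_prob[i, pos].mean()     # average over positives
    return loss / n
```

With well-separated same-class clusters the loss is near zero; with labels scattered across clusters it grows large.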

27 pages, 13327 KB  
Article
Boosting SAR ATR Trustworthiness via ERFA: An Electromagnetic Reconstruction Feature Alignment Method
by Yuze Gao, Dongying Li, Weiwei Guo, Jianyu Lin, Yiren Wang and Wenxian Yu
Remote Sens. 2025, 17(23), 3855; https://doi.org/10.3390/rs17233855 - 28 Nov 2025
Viewed by 269
Abstract
Deep learning-based synthetic aperture radar (SAR) automatic target recognition (ATR) methods exhibit a tendency to overfit specific operating conditions—such as radar parameters and background clutter—which frequently leads to high sensitivity against variations in these conditions. A novel electromagnetic reconstruction feature alignment (ERFA) method is proposed in this paper, which integrates electromagnetic reconstruction with feature alignment into a fully convolutional network, forming the ERFA-FVGGNet. The ERFA-FVGGNet comprises three modules: electromagnetic reconstruction using our proposed orthogonal matching pursuit with image-domain cropping-optimization (OMP-IC) algorithm for efficient, high-precision attributed scattering center (ASC) reconstruction and extraction; the designed FVGGNet combining transfer learning with a lightweight fully convolutional network to enhance feature extraction and generalization; and feature alignment employing a dual-loss to suppress background clutter while improving robustness and interpretability. Experimental results demonstrate that ERFA-FVGGNet boosts trustworthiness by enhancing robustness, generalization and interpretability. Full article

50 pages, 6455 KB  
Review
Deep Learning-Based SAR Target Recognition: A Dual-Perspective Survey of Closed Set and Open Set
by Ying Yang and Haitao Zhao
Appl. Sci. 2025, 15(23), 12501; https://doi.org/10.3390/app152312501 - 25 Nov 2025
Viewed by 499
Abstract
Owing to the all-weather, day-and-night imaging capability of Synthetic Aperture Radar (SAR), SAR automatic target recognition (ATR) has long been a central focus in academia and industry. Since 2013, deep learning has become the dominant paradigm for SAR ATR owing to its end-to-end learning capability and robust feature-extraction capacity. To the best of our knowledge, this work provides the first systematic survey of SAR target recognition from dual closed-set and open-set perspectives and identifies four major performance bottlenecks: data scarcity, algorithmic limitations, hardware constraints, and application barriers. To address the first three bottlenecks, an in-depth analysis of closed-set solutions is presented, covering data augmentation, network optimization, and lightweight architectures. For the fourth challenge, a comprehensive analysis of open-set SAR recognition methods is provided. The intrinsic relationship and distinctions between closed-set and open-set recognition are further examined. To tackle the open-set challenge, an enhanced domain-adaptive algorithm for open-set recognition is proposed. Experiments on the OpenSAR and FUSAR datasets demonstrate at least a 3% improvement in open-set accuracy (OSA) over seven recent domain-adaptation algorithms. The rejection rate of unknown targets (RRU) reaches 80.30%, demonstrating a strong ability to distinguish unknown-class targets and offering practical insights for future research. Finally, potential directions for advancing SAR ATR are outlined, providing a comprehensive reference for the continued development of deep-learning-based SAR recognition. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
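The reported open-set accuracy (OSA) and rejection rate of unknown targets (RRU) can be computed, under one common threshold-based open-set protocol, roughly as below; the survey's exact metric definitions may differ, and the function and parameter names are assumptions.

```python
import numpy as np

def open_set_metrics(logits, y_true, threshold, unknown=-1):
    """Threshold-based open-set decision (generic sketch): reject a sample
    as 'unknown' when its max softmax confidence falls below `threshold`."""
    z = logits - logits.max(axis=1, keepdims=True)       # stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)
    pred = np.where(conf >= threshold, probs.argmax(axis=1), unknown)
    # OSA here: knowns classified correctly, unknowns rejected
    osa = float((pred == y_true).mean())
    unk = y_true == unknown
    # RRU: fraction of true unknowns that were rejected
    rru = float((pred[unk] == unknown).mean()) if unk.any() else float("nan")
    return osa, rru
```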

19 pages, 2019 KB  
Article
Out-of-Distribution Knowledge Inference-Based Approach for SAR Imagery Open-Set Recognition
by Changjie Cao, Ying Yang, Zhongli Zhou, Bingli Liu, Bizao Wu, Cheng Li and Yunhui Kong
Remote Sens. 2025, 17(22), 3669; https://doi.org/10.3390/rs17223669 - 7 Nov 2025
Viewed by 522
Abstract
The efficacy of data-driven automatic target recognition (ATR) algorithms relies on the prior knowledge acquired from the target sample set. However, the lack of knowledge of high-value unknown target samples hinders the practical application of existing ATR models, as the acquisition of this SAR imagery is often challenging. In this paper, we propose an out-of-distribution knowledge inference-based approach for the implementation of open-set-recognition tasks in SAR imagery. The proposed method integrates two modules: out-of-distribution feature inference and a knowledge-sharing retrain mechanism. First, the proposed out-of-distribution feature inference module aims to provide the requisite prior knowledge for the ATR model to effectively recognize unknown target samples. Furthermore, the aforementioned module also employs a compact feature extraction scheme to mitigate the potential overlap between the constructed out-of-distribution feature distribution and the known sample feature distribution. Finally, the proposed method employs the novel knowledge-sharing retraining mechanism to learn prior knowledge of unknown SAR target samples. Several experimental results show the superiority of the proposed approach based on the Moving and Stationary Target Acquisition and Recognition (MSTAR) data set. Some ablation experiments also demonstrate the role of each module of the proposed approach. Even when one category of target sample information is completely absent from the training set, the recognition accuracy of the proposed approach still achieves 90.31%. Full article

23 pages, 16607 KB  
Article
Few-Shot Class-Incremental SAR Target Recognition with a Forward-Compatible Prototype Classifier
by Dongdong Guan, Rui Feng, Yuzhen Xie, Xiaolong Zheng, Bangjie Li and Deliang Xiang
Remote Sens. 2025, 17(21), 3518; https://doi.org/10.3390/rs17213518 - 23 Oct 2025
Viewed by 708
Abstract
In practical Synthetic Aperture Radar (SAR) applications, new-class objects can appear at any time as large-scale, high-volume SAR imagery rapidly accumulates, and they are usually supported by only limited instances in most cooperative scenarios. Hence, powering advanced deep-learning (DL)-based SAR Automatic Target Recognition (SAR ATR) systems with the ability to continuously learn new concepts from few-shot samples without forgetting the old ones is important. In this paper, we tackle the Few-Shot Class-Incremental Learning (FSCIL) problem in the SAR ATR field and propose a Forward-Compatible Prototype Classifier (FCPC) by emphasizing the model’s forward compatibility to incoming targets before and after deployment. Specifically, the classifier’s sensitivity to diversified cues of emerging targets is improved in advance by a Virtual-class Semantic Synthesizer (VSS), considering the class-agnostic scattering parts of targets in SAR imagery and semantic patterns of the DL paradigm. After deploying the classifier in dynamic worlds, since novel target patterns from few-shot samples are highly biased and unstable, the model’s representability to general patterns and its adaptability to class-discriminative ones are balanced by a Decoupled Margin Adaptation (DMA) strategy, in which only the model’s high-level semantic parameters are timely tuned by improving the similarity of few-shot boundary samples to class prototypes and the dissimilarity to interclass ones. For inference, a Nearest-Class-Mean (NCM) classifier is adopted for prediction by comparing the semantics of unknown targets with prototypes of all classes based on the cosine criterion. In experiments, contributions of the proposed modules are verified by ablation studies, and our method achieves considerable performance on three FSCIL of SAR ATR datasets, i.e., SAR-AIRcraft-FSCIL, MSTAR-FSCIL, and FUSAR-FSCIL, compared with numerous benchmarks, demonstrating its superiority and effectiveness in dealing with the FSCIL of SAR ATR. Full article
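The NCM inference rule described in this abstract (compare a sample's features with all class prototypes under the cosine criterion) is simple enough to sketch directly; the function names are illustrative.

```python
import numpy as np

def class_prototypes(feats, labels, n_classes):
    """Prototype per class: the mean of that class's feature vectors."""
    return np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])

def ncm_predict(x, prototypes):
    """Nearest-Class-Mean prediction under the cosine criterion: assign each
    sample to the prototype with the highest cosine similarity."""
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)
    pn = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return (xn @ pn.T).argmax(axis=1)
```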

22 pages, 1057 KB  
Article
Relation-Guided Embedding Transductive Propagation Network with Residual Correction for Few-Shot SAR ATR
by Xuelian Yu, Hailong Yu, Yan Peng, Lei Miao and Haohao Ren
Remote Sens. 2025, 17(17), 2980; https://doi.org/10.3390/rs17172980 - 27 Aug 2025
Viewed by 698
Abstract
Deep learning-based methods have shown great promise for synthetic aperture radar (SAR) automatic target recognition (ATR) in recent years. These methods demonstrate superior performance compared to traditional approaches across various recognition tasks. However, these methods often face significant challenges due to the limited availability of labeled samples, which is a common issue in SAR image analysis owing to the high cost and difficulty of data annotation. To address this issue, a variety of few-shot learning approaches have been proposed and have demonstrated promising results under data-scarce conditions. Nonetheless, a notable limitation of many existing few-shot methods is that their performance tends to plateau when more labeled samples become available. Most few-shot methods are optimized for scenarios with extremely limited data. As a result, they often fail to leverage the advantages of larger datasets. This leads to suboptimal recognition performance compared to conventional deep learning techniques when sufficient training data is available. Therefore, there is a pressing need for approaches that not only excel in few-shot scenarios but also maintain robust performance as the number of labeled samples increases. To this end, we propose a novel method, termed relation-guided embedding transductive propagation network with residual correction (RGE-TPNRC), specifically designed for few-shot SAR ATR tasks. By leveraging mechanisms such as relation node modeling, relation-guided embedding propagation, and residual correction, RGE-TPNRC can fully utilize limited labeled samples by deeply exploring inter-sample relations, enabling better scalability as the support set size increases. Consequently, it effectively addresses the plateauing performance problem of existing few-shot learning methods when more labeled samples become available. 
Firstly, input samples are transformed into support-query relation nodes, explicitly capturing the dependencies between support and query samples. Secondly, the known relations among support samples are utilized to guide the propagation of embeddings within the network, enabling manifold smoothing and allowing the model to generalize effectively to unseen target classes. Finally, a residual correction propagation classifier refines predictions by correcting potential errors and smoothing decision boundaries, ensuring robust and accurate classification. Experimental results on the moving and stationary target acquisition and recognition (MSTAR) and OpenSARShip datasets demonstrate that our method can achieve state-of-the-art performance in few-shot SAR ATR scenarios. Full article
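A minimal sketch of the transductive propagation idea underlying such methods: build an affinity graph over support and query samples and propagate support labels with the standard closed-form solution F = (I - alpha*S)^-1 Y. RGE-TPNRC's relation-guided embedding and residual correction are not reproduced here; all names and hyperparameters are assumptions.

```python
import numpy as np

def label_propagation(feats, y_support, n_classes, alpha=0.5, sigma=1.0):
    """Transductive label propagation on a Gaussian affinity graph
    (generic formulation, not the RGE-TPNRC model itself)."""
    n = len(feats)
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))           # Gaussian affinities
    np.fill_diagonal(W, 0.0)                     # no self-loops
    Dinv = 1.0 / np.sqrt(W.sum(axis=1))
    S = Dinv[:, None] * W * Dinv[None, :]        # symmetric normalisation
    Y = np.zeros((n, n_classes))
    for i, c in y_support.items():               # one-hot rows for support
        Y[i, c] = 1.0
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F.argmax(axis=1)
```

With two labelled anchors in well-separated clusters, the unlabelled neighbors inherit the nearby anchor's class.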

24 pages, 4345 KB  
Article
Single-Domain Generalization via Multilevel Data Augmentation for SAR Target Recognition Training on Fully Simulated Data
by Wenyu Shu, Ronghui Zhan, Shiqi Chen, Yue Guo and Huiqiang Zhang
Remote Sens. 2025, 17(17), 2966; https://doi.org/10.3390/rs17172966 - 27 Aug 2025
Viewed by 1539
Abstract
Due to the existence of domain shift, the synthetic aperture radar (SAR) automatic target recognition (ATR) model trained on simulated data will have significant performance degradation when applied to real-world measured data. To bridge the domain gap, this paper proposes a single-domain generalization (SDG) method based on multilevel data augmentation (MLDA), enabling SAR-ATR models that have been fully trained on simulated data to be generalized to unseen real SAR data. The proposed method aims to enhance the model’s generalizable capability through three key components: (1) pixel-level augmentation, which enriches data distribution via random Gaussian noise injection in the spatial domain and high-frequency perturbation in the frequency domain to enhance pixel-level diversity; (2) feature-level style augmentation, which probabilistically mixes instance-wise feature statistics, generating hybrid-styled feature maps to enhance feature-level diversity; (3) domain-adversarial training, which constructs an adversarial framework between a feature extractor and discriminator to enforce the learning of domain-invariant representations. Experiments on two simulation-to-reality SAR datasets demonstrate that the proposed method outperforms existing baselines and other SDG algorithms, achieving state-of-the-art (SOTA) performance on both datasets (96.76% accuracy on the public SAMPLE dataset and 93.70% accuracy on the self-built S2M-5 dataset). Full article
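The pixel-level augmentation component (1) can be sketched as Gaussian noise in the spatial domain plus a random perturbation of high-frequency FFT coefficients; the cutoff, strengths, and function names below are illustrative, not the paper's settings.

```python
import numpy as np

def pixel_level_augment(img, rng, noise_std=0.02, hf_strength=0.1, cutoff=0.25):
    """Pixel-level augmentation sketch: spatial Gaussian noise plus random
    scaling of the high-frequency band of the 2-D spectrum."""
    out = img + rng.normal(0.0, noise_std, img.shape)      # spatial noise
    F = np.fft.fftshift(np.fft.fft2(out))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)                   # radial frequency
    hf = r > cutoff * min(h, w)                            # high-freq band
    F[hf] *= 1.0 + hf_strength * rng.standard_normal(hf.sum())
    # random real scaling breaks Hermitian symmetry; keep the real part
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```

The DC component is untouched, so low-frequency content (e.g. the image mean) is roughly preserved while fine detail is perturbed.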

20 pages, 28899 KB  
Article
MSDP-Net: A Multi-Scale Domain Perception Network for HRRP Target Recognition
by Hongxu Li, Xiaodi Li, Zihan Xu, Xinfei Jin and Fulin Su
Remote Sens. 2025, 17(15), 2601; https://doi.org/10.3390/rs17152601 - 26 Jul 2025
Cited by 1 | Viewed by 908
Abstract
High-resolution range profile (HRRP) recognition serves as a foundational task in radar automatic target recognition (RATR), enabling robust classification under all-day and all-weather conditions. However, existing approaches often struggle to simultaneously capture the multi-scale spatial dependencies and global spectral relationships inherent in HRRP signals, limiting their effectiveness in complex scenarios. To address these limitations, we propose a novel multi-scale domain perception network tailored for HRRP-based target recognition, called MSDP-Net. MSDP-Net introduces a hybrid spatial–spectral representation learning strategy through a multiple-domain perception HRRP (DP-HRRP) encoder, which integrates multi-head convolutions to extract spatial features across diverse receptive fields, and frequency-aware filtering to enhance critical spectral components. To further enhance feature fusion, we design a hierarchical scale fusion (HSF) branch that employs stacked semantically enhanced scale fusion (SESF) blocks to progressively aggregate information from fine to coarse scales in a bottom-up manner. This architecture enables MSDP-Net to effectively model complex scattering patterns and aspect-dependent variations. Extensive experiments on both simulated and measured datasets demonstrate the superiority of MSDP-Net, achieving 80.75% accuracy on the simulated dataset and 94.42% on the measured dataset, highlighting its robustness and practical applicability. Full article

10 pages, 4354 KB  
Article
A Residual Optronic Convolutional Neural Network for SAR Target Recognition
by Ziyu Gu, Zicheng Huang, Xiaotian Lu, Hongjie Zhang and Hui Kuang
Photonics 2025, 12(7), 678; https://doi.org/10.3390/photonics12070678 - 5 Jul 2025
Viewed by 496
Abstract
Deep learning (DL) has shown great capability in remote sensing and automatic target recognition (ATR). However, huge computational costs and power consumption are challenging the development of current DL methods. Optical neural networks have recently been proposed to provide a new mode to replace artificial neural networks. Here, we develop a residual optronic convolutional neural network (res-OPCNN) for synthetic aperture radar (SAR) recognition tasks. We implement almost all computational operations in optics and significantly decrease the network computational costs. Compared with digital DL methods, res-OPCNN offers ultra-fast speed, low computation complexity, and low power consumption. Experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset demonstrate the lightweight nature of the optronic method and its feasibility for SAR target recognition. Full article

24 pages, 3716 KB  
Article
HRRPGraphNet++: Dynamic Graph Neural Network with Meta-Learning for Few-Shot HRRP Radar Target Recognition
by Lingfeng Chen, Zhiliang Pan, Qi Liu and Panhe Hu
Remote Sens. 2025, 17(12), 2108; https://doi.org/10.3390/rs17122108 - 19 Jun 2025
Cited by 2 | Viewed by 1822
Abstract
High-Resolution Range Profile (HRRP) radar recognition suffers from data scarcity challenges in real-world applications. We present HRRPGraphNet++, a framework combining dynamic graph neural networks with meta-learning for few-shot HRRP recognition. Our approach generates graph representations dynamically through multi-head self attention (MSA) mechanisms that adapt to target-specific scattering characteristics, integrated with a specialized meta-learning framework employing layer-wise learning rates. Experiments demonstrate state-of-the-art performance in 1-shot (82.3%), 5-shot (91.8%), and 20-shot (94.7%) settings, with enhanced noise robustness (68.7% accuracy at 0 dB SNR). Our hybrid graph mechanism combines physical priors with learned relationships, significantly outperforming conventional methods in challenging scenarios. Full article
(This article belongs to the Special Issue Advanced AI Technology for Remote Sensing Analysis)

18 pages, 2585 KB  
Article
Incremental SAR Automatic Target Recognition with Divergence-Constrained Class-Specific Dictionary Learning
by Xiaojie Ma, Xusong Bu, Dezhao Zhang, Zhaohui Wang and Jing Li
Remote Sens. 2025, 17(12), 2090; https://doi.org/10.3390/rs17122090 - 18 Jun 2025
Viewed by 777
Abstract
Synthetic aperture radar (SAR) automatic target recognition (ATR) plays a pivotal role in SAR image interpretation. While existing approaches predominantly rely on batch learning paradigms, their practical deployment is constrained by the sequential arrival of training data and high retraining costs. To overcome this challenge, this paper introduces a divergence-constrained incremental dictionary learning framework that enables progressive model updates without full data reprocessing. Specifically, the method first learns class-specific dictionaries for each target category via sub-dictionary learning, where the learning process for a specific class does not involve data from other classes. Second, an intra-class divergence constraint is incorporated during sub-dictionary learning to address the challenges of significant intra-class variations and minor inter-class differences in SAR targets. Third, the sparse representation coefficients of the target to be classified are solved across all sub-dictionaries, followed by the computation of corresponding reconstruction errors and intra-class divergence metrics to achieve classification. Finally, when targets of new categories are obtained, the corresponding class-specific dictionaries are calculated and added to the learned dictionary set. In this way, the incremental update of the SAR ATR system is completed. Experimental results on the MSTAR dataset indicate that our method attains >96.62% accuracy across various incremental scenarios. Compared with other state-of-the-art methods, it demonstrates better recognition performance and robustness. Full article
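The classification step (solve representation coefficients over every class-specific sub-dictionary, then pick the class with the smallest reconstruction error) can be sketched as below. For brevity this uses least squares in place of a true sparse solver and omits the intra-class divergence term, so it illustrates the decision rule rather than the paper's full method.

```python
import numpy as np

def classify_by_residual(x, dictionaries):
    """Assign x to the class whose sub-dictionary reconstructs it best
    (least-squares stand-in for the sparse coding step)."""
    errors = []
    for D in dictionaries:                       # D: (feature_dim, n_atoms)
        a, *_ = np.linalg.lstsq(D, x, rcond=None)
        errors.append(np.linalg.norm(x - D @ a)) # per-class residual
    return int(np.argmin(errors))
```

Incremental updates then amount to appending a newly learned sub-dictionary to the `dictionaries` list, without retraining the existing ones.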

19 pages, 3825 KB  
Article
A Semi-Supervised Attention-Temporal Ensembling Method for Ground Penetrating Radar Target Recognition
by Li Liu, Dajiang Yu, Xiping Zhang, Hang Xu, Jingxia Li, Lijun Zhou and Bingjie Wang
Sensors 2025, 25(10), 3138; https://doi.org/10.3390/s25103138 - 15 May 2025
Cited by 1 | Viewed by 1000
Abstract
Ground penetrating radar (GPR) is an effective and efficient nondestructive testing technology for subsurface investigations. Deep learning-based methods have been successfully used in automatic underground target recognition. However, these methods are mostly based on supervised learning, requiring large amounts of labeled training data to guarantee high accuracy and generalization ability, which is a challenge in GPR fields due to time-consuming and labor-intensive data annotation work. To alleviate the demand for abundant labeled data, a semi-supervised deep learning method named attention–temporal ensembling (Attention-TE) is proposed for underground target recognition using GPR B-scan images. This method integrates a semi-supervised temporal ensembling architecture with a triplet attention module to enhance the classification performance. Experimental results of laboratory and field data demonstrate that the proposed method can automatically recognize underground targets with an average accuracy of above 90% using less than 30% of labeled data in the training dataset. Ablation experimental results verify the efficiency of the triplet attention module. Moreover, comparative experimental results validate that the proposed Attention-TE algorithm outperforms the supervised method based on transfer learning and four semi-supervised state-of-the-art methods. Full article
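The temporal ensembling architecture that Attention-TE builds on maintains an exponential moving average of each sample's predictions across epochs (Laine and Aila's formulation). A minimal sketch of one epoch's target update, with illustrative names and the usual startup bias correction:

```python
import numpy as np

def temporal_ensemble_update(Z, preds, epoch, alpha=0.6):
    """One epoch of the temporal-ensembling target update: EMA of
    per-sample class predictions, bias-corrected for early epochs."""
    Z = alpha * Z + (1 - alpha) * preds          # accumulate ensemble
    targets = Z / (1 - alpha ** (epoch + 1))     # startup bias correction
    return Z, targets
```

The bias-corrected `targets` then serve as soft labels for the unlabeled samples in the consistency loss.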

17 pages, 6722 KB  
Article
MCT-CNN-LSTM: A Driver Behavior Wireless Perception Method Based on an Improved Multi-Scale Domain-Adversarial Neural Network
by Kaiyu Chen, Yue Diao, Yucheng Wang, Xiafeng Zhang, Yannian Zhou, Minming Gu, Bo Zhang, Bin Hu, Meng Li, Wei Li and Shaoxi Wang
Sensors 2025, 25(7), 2268; https://doi.org/10.3390/s25072268 - 3 Apr 2025
Cited by 2 | Viewed by 1244
Abstract
Driving behavior recognition based on Frequency-Modulated Continuous-Wave (FMCW) radar systems has become a widely adopted paradigm. Numerous methods have been developed to accurately identify driving behaviors. Recently, deep learning has gained significant attention in radar signal processing due to its ability to eliminate the need for intricate signal preprocessing and its automatic feature extraction capabilities. In this article, we present a network that incorporates multi-scale and channel-time attention modules, referred to as MCT-CNN-LSTM. Initially, a multi-channel convolutional neural network (CNN) combined with a Long Short-Term Memory (LSTM) network is employed. This model captures both the spatial features and the temporal dependencies from the input radar signal. Subsequently, an Efficient Channel Attention (ECA) module is utilized to allocate adaptive weights to the feature channels that carry the most relevant information. In the final step, domain-adversarial training is applied to extract common features from both the source and target domains, which helps reduce the domain shift. This approach enables accurate classification of driving behaviors by effectively bridging the gap between domains. Evaluation results show that our method reached an accuracy of 97.3% on a real measured dataset. Full article
(This article belongs to the Section Sensor Networks)

25 pages, 2998 KB  
Article
Graph-Based Few-Shot Learning for Synthetic Aperture Radar Automatic Target Recognition with Alternating Direction Method of Multipliers
by Jing Jin, Zitai Xu, Nairong Zheng and Feng Wang
Remote Sens. 2025, 17(7), 1179; https://doi.org/10.3390/rs17071179 - 26 Mar 2025
Cited by 1 | Viewed by 1327
Abstract
Synthetic aperture radar (SAR) automatic target recognition (ATR) underpins various remote sensing tasks, such as defense surveillance, environmental monitoring, and disaster management. However, the scarcity of annotated SAR data significantly limits the performance of conventional data-driven methods. To address this challenge, we propose a novel few-shot learning (FSL) framework: the alternating direction method of multipliers–graph convolutional network (ADMM-GCN) framework. ADMM-GCN integrates a GCN with ADMM to enhance SAR ATR under limited data conditions, effectively capturing both global and local structural information from SAR samples. Additionally, it leverages a mixed regularized loss to mitigate overfitting and employs an ADMM-based optimization strategy to improve training efficiency and model stability. Extensive experiments conducted on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset demonstrate the superiority of ADMM-GCN, achieving an impressive accuracy of 92.18% on the challenging three-way 10-shot task and outperforming the benchmarks by 3.25%. Beyond SAR ATR, the proposed approach also advances FSL for real-world applications in remote sensing and geospatial analysis, where learning from scarce data is essential. Full article
(This article belongs to the Special Issue SAR-Based Signal Processing and Target Recognition (Second Edition))

21 pages, 6412 KB  
Article
Inverse Synthetic Aperture Radar Image Multi-Modal Zero-Shot Learning Based on the Scattering Center Model and Neighbor-Adapted Locally Linear Embedding
by Xinfei Jin, Hongxu Li, Xinbo Xu, Zihan Xu and Fulin Su
Remote Sens. 2025, 17(4), 725; https://doi.org/10.3390/rs17040725 - 19 Feb 2025
Viewed by 1731
Abstract
Inverse Synthetic Aperture Radar (ISAR) images are extensively used in Radar Automatic Target Recognition (RATR) for non-cooperative targets. However, acquiring training samples for all target categories is challenging. Recognizing target classes without training samples is called Zero-Shot Learning (ZSL). When ZSL involves multiple modalities, it becomes Multi-modal Zero-Shot Learning (MZSL). To achieve MZSL, a framework is proposed for generating ISAR images with optical image aiding. The process begins by extracting edges from optical images to capture the structure of ship targets. These extracted edges are used to estimate the potential locations of the target’s scattering centers. Using the Geometric Theory of Diffraction (GTD)-based scattering center model, the edges’ ISAR images are generated from the scattering centers. Next, a mapping is established between the edges’ ISAR images and the actual ISAR images. Neighbor-Adapted Locally Linear Embedding (NALLE) generates pseudo-ISAR images for the unseen classes by combining the edges’ ISAR images with the actual ISAR images from the seen classes. Finally, these pseudo-ISAR images serve as training samples, enabling the recognition of test samples. In contrast to network-based approaches, this method requires only a limited number of training samples. Experiments based on simulated and measured data validate the effectiveness of the proposed method. Full article
(This article belongs to the Section Remote Sensing Image Processing)
