Article

Radar HRRP Sequence Target Recognition Based on a Lightweight Spatiotemporal Fusion Network

1 Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
2 School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
* Author to whom correspondence should be addressed.
Sensors 2026, 26(1), 334; https://doi.org/10.3390/s26010334
Submission received: 5 November 2025 / Revised: 19 December 2025 / Accepted: 26 December 2025 / Published: 4 January 2026
(This article belongs to the Special Issue Radar Target Detection, Imaging and Recognition)

Abstract

High-resolution range profile (HRRP) sequence recognition in radar automatic target recognition faces several practical challenges, including severe category imbalance, degradation of robustness under complex and variable operating conditions, and strict requirements for lightweight models suitable for real-time deployment on resource-limited platforms. To address these problems, this paper proposes a lightweight spatiotemporal fusion-based (LSTF) HRRP sequence target recognition method. First, a lightweight Transformer encoder based on group linear transformations (TGLT) is designed to effectively model temporal dynamics while significantly reducing parameter size and computation, making it suitable for edge-device applications. Second, a transform-domain spatial feature extraction network is introduced, combining the fractional Fourier transform with an enhanced squeeze-and-excitation fully convolutional network (FSCN). This design fully exploits multi-domain spatial information and enhances class separability by leveraging discriminative scattering-energy distributions at specific fractional orders. Finally, an adaptive focal loss with label smoothing (AFL-LS) is constructed to dynamically adjust class weights for improved performance on long-tail classes, while label smoothing alleviates overfitting and enhances generalization. Experiments on the MSTAR and CVDomes datasets demonstrate that the proposed method consistently outperforms existing baseline approaches across three representative scenarios.

1. Introduction

Radar technology has consistently played a pivotal role in both military and civilian domains. With the rapid development of modern radar technology, radar imaging has advanced well beyond traditional detection limits. High-resolution techniques, primarily achieved through wideband signal transmission and pulse compression, enable the acquisition of detailed target signatures such as the high-resolution range profile (HRRP), making fine-grained target recognition by radar possible. Consequently, radar automatic target recognition (RATR) has become an important task in radar applications. The core principle of this technology involves extracting and analyzing features from radar measurements using advanced signal processing algorithms and pattern recognition methods to achieve automatic target classification and identification. HRRP, derived from wideband radar systems, represents the vector sum of target scattering echoes projected along the radar’s radial direction, containing crucial discriminative information such as target shapes, structural dimensions, and scattering center distributions [1]. HRRP data exhibit unique advantages due to their ease of acquisition, convenient processing, efficient storage, and rich structural information [2]. These distinct characteristics make HRRP an important enabling technology for RATR, offering reliable solutions for efficient and precise target identification.
From a methodological perspective, HRRP-based RATR methods can primarily be categorized into three groups: (1) feature extraction-based methods that identify robust features from HRRP signals and leverage designed classifiers for target differentiation, (2) statistical model-based methods that construct probability distribution models using training data and classify target categories by calculating the likelihood probabilities of test samples, and (3) deep learning-based methods that enable end-to-end recognition through neural networks for autonomous feature extraction [3].
In feature extraction-based methods, the recognition performance is critically influenced by the effectiveness and accuracy of feature selection. The core challenge lies in constructing feature spaces with translation invariance and intrinsic target separability, the design of which heavily relies on empirical expertise. Existing research has achieved preliminary progress: Studies [4,5,6] demonstrated feasibility in target recognition tasks by selecting physically meaningful features with strong representational capabilities from HRRP data. Meanwhile, through the integration of signal processing and pattern recognition theories, these methods have evolved into multidimensional frameworks incorporating innovative approaches such as time-frequency analysis [7,8], sparse representation [9,10], and nonlinear transformations [11,12]. Although such methods exhibit theoretical advantages in characterizing target scattering properties, significant technical limitations persist. Crucially, such approaches exhibit limited generalizability in multi-target recognition contexts, constrained by category-specific feature engineering and vulnerability to long-tailed distributions characterized by dominant majority classes and severely scarce minority categories that induce biased learning.
Statistical model-based HRRP target recognition methods characterize the distribution patterns of target scattering properties through probabilistic modeling. The core principle involves treating radar echoes as stochastic processes and establishing probabilistic mapping relationships between target types and echo data via statistical inference. Representative statistical models include adaptive Gaussian classifiers, Gamma mixture models, Gamma–Gaussian mixture models, and factor analysis (FA) models. Du et al. systematically developed statistical modeling approaches to address recognition robustness in noisy environments [13,14,15]. Notably, since the introduction of FA models into HRRP statistical recognition in 2008, significant research advancements have been achieved in this domain [16,17,18]. These methods fundamentally depend on prior distribution assumptions. This intrinsic constraint creates a theoretical mismatch with HRRP’s inherent non-Gaussian characteristics and multimodality, ultimately limiting complex scattering pattern characterization.
Deep learning approaches leverage HRRP sequences’ inherent spatiotemporal characteristics, where temporal dependencies from azimuth variations and spatial correlations across range cells provide complementary discriminative information. Initial efforts focused on temporal modeling through recurrent architectures, achieving notable success in aircraft target recognition [2,19,20]. While existing studies have achieved progress in target recognition through temporal feature modeling with recurrent neural networks, their frameworks exhibit notable limitations by neglecting spatial correlation characteristics within HRRP sequences. Consequently, the effective integration of spatiotemporal joint features in HRRP has emerged as a critical research focus in this field. Wan et al. [21] developed a hybrid recognition framework integrating convolutional neural networks (CNNs) with bidirectional recurrent neural networks (BiRNNs), where CNN modules extract spatial correlation features from HRRP data, while BiRNN modules capture temporal dependencies across range cells. Wang et al. [22] innovatively combined CNNs with Bidirectional Encoder Representations from Transformers (BERT), employing convolutional modules to characterize local spatial structures and utilizing BERT’s multi-head attention mechanisms to extract temporal features from HRRP sequences. Wu et al. [23] advanced this approach by constructing a fusion network that initially pre-extracts features through BERT, subsequently employs multi-scale CNNs and bidirectional gated recurrent networks to capture local characteristics and long-range dependencies, respectively, and finally concatenates features to integrate advantages from diverse networks. Establishing effective spatiotemporal feature extraction mechanisms to achieve complementary fusion of HRRP’s spatiotemporal characteristics and extending these to multi-target recognition scenarios remains a highly valuable research frontier.
With the rapid advancement of deep learning and growing academic interest in Transformer methodologies, recent years have witnessed increasing exploration of their applications in HRRP recognition. Zhang et al. [24] developed a feature-guided Transformer model that integrates manually designed features into attention modules to focus on HRRP range cells with rich scattering information, significantly improving recognition accuracy under small-sample conditions. Diao et al. [25] proposed a positional embedding-free CNN–Transformer hybrid architecture optimized for HRRP data characteristics, effectively eliminating the need for traditional positional encoding. Wang et al. [26] employed dual-branch Transformer encoders to separately extract temporal and spatial features, with designed attention fusion mechanisms achieving adaptive feature weighting. Gao et al. [27] innovatively introduced polarization preprocessing modules that combine artificial features with CNNs to enhance feature representation, constructing a Vision Transformer-based framework that substantially improves local and temporal feature extraction. Although current studies improve performance through module stacking, they commonly suffer from overfitting risks caused by excessive parameters. Insufficient emphasis on model lightweighting poses challenges in practical deployment on edge devices.
Current HRRP radar target recognition research primarily focuses on two typical data environments: (1) recognition methodologies under class-balanced conditions, primarily addressing cooperative target identification tasks. Under such conditions, controllable detection conditions enable researchers to acquire abundant high-quality samples, thereby facilitating the adoption of complex models like deep neural networks to leverage their powerful feature extraction capabilities for high-precision recognition. (2) Recognition technology exploration under data-limited and class-imbalanced conditions, mainly targeting non-cooperative target identification. Due to factors such as long detection ranges and complex electromagnetic environments, effective sample acquisition for such targets proves challenging, resulting in datasets exhibiting typical long-tailed distribution characteristics. It is noteworthy that in practical applications such as national defense and aerospace, the identification of non-cooperative targets with high observational value presents more urgent demands. These operational environments face three core challenges: First, target echoes are susceptible to noise interference, leading to significantly reduced signal-to-noise ratios; Second, there exists a prominent contradiction between target category diversity and limited training samples; Third, measured data inherently exhibit long-tailed distribution properties. Establishing effective recognition models under multi-category imbalanced data conditions has emerged as a critical scientific challenge in advancing the practical application of radar automatic recognition technology.
Researchers are actively exploring diverse technical approaches to address class imbalance issues in HRRP target recognition. Yin et al. [28] proposed an Adaptive Uniform Manifold Approximation and Projection (AUMAP) segmentation algorithm, which mitigates data imbalance by modifying the CNN loss function into a focal loss formulation. Jia et al. [29] developed a memory-based neural network (MBNN) for imbalanced data conditions: CNN-extracted features are processed through a memory module that records misclassified and low-confidence samples, followed by long short-term memory (LSTM) based fusion of classified samples with buffer-stored similar samples for final decision-making. Zhang et al. [30] introduced an open-set imbalanced recognition network integrating dual-attention mechanisms, memory modules, across functions, and decoupled training strategies, optimizing intra-class and inter-class similarity constraints via angular penalty loss. Guo et al. [31] established a transfer learning framework involving pretrained models with source domain data, subsequently resetting fully-connected and output layer parameters, and proposed a novel loss function to suppress inter-class bias for measured HRRP datasets with limited samples and class imbalance. Wu et al. [32] created a weighted synthetic minority oversampling technique (SMOTE) algorithm that dynamically allocates synthetic weights based on Euclidean distances among minority samples, combined with multi-scale CNN–Transformer encoder attention models to enhance multi-level feature classification accuracy. Tian et al. [33] designed a gradient-guided class re-balancing loss (GRB Loss) for space micro-motion targets, which dynamically assigns weights according to accumulated positive-negative gradient ratios at classification nodes, ensuring recognition robustness across varying imbalance ratios.
As a core technology of RATR, HRRP sequence-based target recognition plays a critical role in determining battlefield perception capabilities within complex environments. Nevertheless, existing methodologies exhibit significant technical limitations in feature synergy utilization, computational efficiency, and scenario generalization. Primarily, inadequate collaborative modeling of spatiotemporal features restricts cross-dataset adaptability. While HRRP sequences inherently contain temporal pose evolution characteristics and spatial scattering structural information, current approaches predominantly focus on temporal feature extraction via recurrent neural networks or spatial correlation mining through convolutional architectures, resulting in inadequate collaborative representation of spatiotemporal features. This deficiency leads to drastic performance degradation in complex battlefield environments involving variant targets or low signal-to-noise conditions, revealing fundamental weaknesses in generalization robustness. Secondly, structural redundancy in models elevates overfitting risks. Contemporary mainstream approaches predominantly adopt complex module stacking strategies, such as deep Transformer encoders and multi-branch convolutional architectures, to enhance recognition performance. This architectural complexity results in exponential escalation of both parameter volumes and computational demands. However, the inherent contradiction between embedded radar devices’ limited computing power and resource-intensive model requirements not only hinders deployment feasibility but also exacerbates overfitting risks due to insufficient training data, severely compromising practical utility. Finally, biased data distribution exacerbates generalization deterioration. Current research predominantly concentrates on class-balanced datasets, with insufficient investigation into multi-class long-tailed distribution challenges. Traditional cross-entropy loss functions, which apply uniform weighting to all samples, induce excessive model adaptation to majority classes while neglecting discriminative feature learning for high-value tail categories. Consequently, this imbalance triggers catastrophic performance degradation in critical minority class recognition.
To address the above technical challenges in HRRP sequence target recognition, this paper proposes a lightweight spatiotemporal fusion-based (LSTF) HRRP sequence target recognition method. The main innovations are summarized as follows:
1. LSTF-based HRRP sequence target recognition framework: We propose an LSTF HRRP sequence target recognition method with a dual-branch feature extraction architecture, in which a temporal feature encoding module and a spatial fully convolutional module are jointly fused to capture both target pose evolution and scattering structure distribution.
2. Lightweight Transformer encoder for temporal modeling: A Transformer encoder based on group linear transformations (TGLT) is designed to effectively model temporal dynamics in HRRP sequences while significantly reducing parameter count and computational complexity, enabling real-time deployment on edge devices.
3. Transform-domain spatial feature extraction network: We develop a transform-domain spatial feature extraction network that integrates the fractional Fourier transform (FrFT) with an enhanced squeeze-and-excitation fully convolutional network (FSCN). By exploiting multi-domain spatial representations, the proposed network enhances the discriminability of scattering energy distributions across target classes at specific fractional orders, leading to improved classification performance.
4. Adaptive loss for long-tail recognition: An adaptive focal loss with label smoothing (AFL-LS) is introduced to dynamically reweight imbalanced classes, improving recognition of long-tail categories. The incorporation of label smoothing further mitigates overfitting and significantly enhances the model’s generalization ability.
The remainder of this paper is organized as follows. Section 2 details the proposed LSTF network architecture, including the overall framework, the TGLT-based temporal feature encoder module, the transform-domain spatial feature fully convolutional module, and the decision fusion and recognition module. Section 3 introduces the proposed AFL-LS function, designed to address class imbalance. Section 4 presents the experimental results and analysis, including dataset descriptions, recognition performance comparisons, ablation studies, and feature visualizations. Finally, Section 5 concludes the paper.
Notably, the proposed method also exhibits strong potential for extension to target detection tasks in high-frequency surface wave radar (HFSWR) systems, particularly in high-resolution maritime surveillance scenarios. For ship detection and discrimination from sea clutter on range–Doppler maps, the lightweight TGLT-based encoder can effectively capture the temporal dynamics of ship echoes, while the transform-domain FSCN module enhances feature separability by exploiting scattering energy distributions in the fractional Fourier domain. Moreover, the AFL-LS loss mitigates the severe imbalance between sparse ship targets and dense sea clutter, thereby improving detection robustness. Benefiting from its lightweight architecture and low computational overhead, the proposed framework is well aligned with the real-time processing requirements of operational HFSWR systems. These characteristics indicate that the proposed method is not limited to HRRP sequence recognition but also holds broader practical value for cross-domain radar target analysis.

2. Method

2.1. Overall Network Architecture

The architecture of the proposed recognition method is shown in Figure 1, featuring a dual-branch structure with three main components: the temporal feature encoder, the spatial feature convolutional module, and the decision fusion module. In the temporal branch, the TGLT efficiently models temporal dynamics by capturing dependencies across different time steps through a self-attention mechanism, while reducing parameters and computational cost for real-time edge deployment. In the spatial branch, a transform-domain feature extraction network combining FrFT with an enhanced FSCN exploits multi-domain spatial information to enhance feature discriminability. Specifically, the FrFT projects the data into a joint time-frequency domain to enhance separability, and the subsequent FSCN (comprising Conv1D, SE blocks, and pooling) extracts multi-scale local structural features. Finally, the decision fusion module employs an AFL-LS to dynamically adjust class weights for improved long-tail recognition and applies label smoothing to boost generalization.
We assume the input HRRP sequence is $X = [x_1, x_2, \ldots, x_n]^T \in \mathbb{R}^{n \times c}$, where $n$ is the length of the HRRP sequence and $c$ is its channel dimension. For the temporal branch, a high-dimensional linear mapping first projects the channel dimension from $c$ to $d_{\mathrm{model}}$ as
$$Y_{\mathrm{token}} = X W_e$$
where $Y_{\mathrm{token}}$ denotes the sequence after the linear transformation, with channel dimension $d_{\mathrm{model}}$, and $W_e \in \mathbb{R}^{c \times d_{\mathrm{model}}}$ is the linear transformation matrix. Positional encoding is added to $Y_{\mathrm{token}}$ as
$$Y = Y_{\mathrm{token}} + X_p$$
where $X_p \in \mathbb{R}^{n \times d_{\mathrm{model}}}$ is the positional encoding matrix. The temporal feature encoder then processes $Y$ with a Transformer encoder to extract features, yielding the temporal feature $O_T$. The spatial branch takes $X^T$ as input and extracts features through multiple stacked layers of one-dimensional convolution, batch normalization, squeeze-and-excitation, and pooling, ultimately yielding the spatial feature $O_S$.
The decision fusion recognition module uses $O_T$ and $O_S$ as inputs and performs classification with a fully connected layer and a Softmax function for each branch, producing the classification results $P_T$ for the temporal branch and $P_S$ for the spatial branch. To achieve adaptive fusion of the temporal and spatial branches, we establish learnable decision weight matrices $W_T$ and $W_S$. The final recognition result $P$ is computed as
$$P = \mathrm{Softmax}\left(W_T \odot P_T + W_S \odot P_S\right)$$
where $W_T, W_S \in \mathbb{R}^{1 \times t}$ are the learnable decision weight matrices, $\odot$ denotes element-wise multiplication, and $t$ is the number of target categories. These matrices are model parameters optimized during training. Each vector contains $t$ scalar weights, one per target class, so the fusion is adaptive and category-specific. The Softmax function then constrains the output values of the final recognition result.
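For illustration, a minimal PyTorch sketch of this decision-level fusion is given below, assuming both branches output per-class score vectors of length $t$; the element-wise weighting, initialization, and names (e.g., DecisionFusion) are assumptions of the sketch rather than the authors’ released code.

```python
import torch
import torch.nn as nn

class DecisionFusion(nn.Module):
    """Adaptive decision-level fusion of the temporal and spatial branches.

    Both branches are assumed to output class scores of shape (batch, t);
    the learnable 1 x t weight vectors rescale each class before the final
    Softmax, as in P = Softmax(W_T * P_T + W_S * P_S).
    """

    def __init__(self, num_classes: int):
        super().__init__()
        # Learnable per-class decision weights, initialized to equal fusion.
        self.w_t = nn.Parameter(torch.ones(1, num_classes))
        self.w_s = nn.Parameter(torch.ones(1, num_classes))

    def forward(self, p_t: torch.Tensor, p_s: torch.Tensor) -> torch.Tensor:
        # Element-wise, category-specific weighting followed by Softmax.
        return torch.softmax(self.w_t * p_t + self.w_s * p_s, dim=-1)

# Usage with dummy branch outputs for a 10-class problem.
fusion = DecisionFusion(num_classes=10)
p_t = torch.softmax(torch.randn(4, 10), dim=-1)   # temporal branch probabilities
p_s = torch.softmax(torch.randn(4, 10), dim=-1)   # spatial branch probabilities
p = fusion(p_t, p_s)                              # fused prediction, shape (4, 10)
```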
Weather factors such as rain, fog, and snow primarily affect HRRP data by reducing signal-to-noise ratio (SNR) and distorting scattering center features, and our proposed LSTF method addresses these challenges through its core designs: the TGLT-based lightweight Transformer encoder maintains stable temporal feature extraction under low SNR, the FrFT-integrated FSCN module decouples weather-induced noise from spatial structural information, and the AFL-LS loss function dynamically adjusts weights to focus on reliable features distorted by weather. Experimental results on low-SNR datasets (Dataset 3) show that the method achieves over 93% accuracy at 15 dB SNR (simulating light rain/fog) and 79.96% accuracy at 5 dB SNR (simulating heavy rain/snow), outperforming baselines by 4–7% under harsh conditions. Overall, the method exhibits robust adaptability across typical weather scenarios, maintaining high recognition performance in mild weather and retaining usability under extreme conditions, which is crucial for practical radar deployment.

2.2. Temporal Feature Encoder Module

The encoder is a critical component of the Transformer architecture. For the standard Transformer encoder, the requirement to process global information from the input data results in high computational complexity, which leads to a large number of model parameters and extended training times. These characteristics are not conducive to deployment on edge devices. In the method proposed in this paper, we utilize group linear transformations (GLTs) to lighten the encoder, as illustrated in Figure 2. GLTs leverage the fact that HRRP sequences exhibit localized temporal dependencies and redundant interchannel correlations. By partitioning input channels into groups, GLTs introduce structured sparsity that aligns with the low-dimensional nature of radar scattering signatures. This design reduces model capacity while preserving key temporal dynamics, effectively mitigating overfitting with limited training data. From a matrix approximation perspective, GLTs act as a block-diagonal low-rank approximation of a full linear projection, retaining sufficient expressiveness for pose variations while significantly reducing computational complexity. This approach significantly reduces the computational load and training time by simplifying the feature transformation and attention operations within the encoder. Specifically, the temporal feature encoder is built with a group linear transformation layer that performs efficient dimension expansion and reduction, followed by a self-attention layer to capture long-range temporal dependencies. A dropout layer and residual connections are included to stabilize optimization and prevent overfitting, while a lightweight feed-forward network further decreases the parameter size through compressed intermediate dimensions. Finally, a pooling layer aggregates the temporal features into a compact representation for subsequent recognition.
The group linear transformation layer consists of two operations: dimensionality expansion and reduction. Initially, the input data are divided into multiple groups, each of which is mapped to a higher-dimensional space and expanded. By concatenating these vectors, high-dimensional features that are more discriminative are obtained. After processing these features with the rectified linear unit (ReLU) function, they are reduced in dimensionality within each group, and the resulting low-dimensional features are concatenated to serve as the input to the self-attention layer. This two-stage transformation effectively integrates features across different dimensions, yielding better performance than simple linear encoding. Consequently, replacing the multi-head attention mechanism in the standard Transformer with self-attention and reducing the input dimensionality to half that of the original significantly decreases the computational load.
The self-attention layer calculates the relevance of sequence data using query and key vectors to derive the attention matrix, which captures the global dependencies within the sequence. The resulting attention features are expressed as
$$A = \mathrm{Attention}(Q, K, V) = \mathrm{Softmax}\!\left(\frac{Q K^T}{\sqrt{d_k}}\right) V$$
where the query vector $Q = Y W_Q$, the key vector $K = Y W_K$, and the value vector $V = Y W_V$ are defined with learnable weight matrices $W_Q$, $W_K$, and $W_V$. To maintain consistency between the input and output dimensions and to prevent the loss of original features as the model deepens, the attention feature $A$ undergoes dimensionality expansion and a residual connection. The feature $F$ is computed as
$$F = W_L A + Y$$
where $W_L$ is the parameter matrix that expands the dimensionality of the attention feature $A$.
In the lightweight feed-forward neural network, the input dimensionality $d_m$ is first reduced to $d_m/4$ and then expanded back, reducing the parameter count to 1/16 of the original while maintaining performance. The final features are computed as
$$O_T = \mathrm{GAP}\!\left(F + \mathrm{ReLU}(F W_1 + b_1)\, W_2 + b_2\right)$$
where $W_1$ and $b_1$ are the parameter matrix and bias vector for dimensionality reduction, $W_2$ and $b_2$ are the parameter matrix and bias vector for dimensionality expansion, $\mathrm{GAP}(\cdot)$ denotes the global average pooling operation, and $\mathrm{ReLU}(\cdot)$ represents the nonlinear activation function.
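To make this structure concrete, a compact PyTorch sketch of a GLT-lightened encoder block is given below; the group count, hidden widths (expansion to $2d$, reduction to $d/2$ before attention, $d/4$ in the feed-forward branch), and the module names are illustrative assumptions consistent with the description above, not the exact published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupLinear(nn.Module):
    """Group linear transformation: channels are split into `groups` chunks,
    each mapped by its own small linear layer, then concatenated."""

    def __init__(self, in_dim: int, out_dim: int, groups: int = 4):
        super().__init__()
        assert in_dim % groups == 0 and out_dim % groups == 0
        self.groups = groups
        self.fcs = nn.ModuleList(
            nn.Linear(in_dim // groups, out_dim // groups) for _ in range(groups)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        chunks = x.chunk(self.groups, dim=-1)
        return torch.cat([fc(c) for fc, c in zip(self.fcs, chunks)], dim=-1)

class TGLTEncoder(nn.Module):
    """Sketch of the lightweight temporal encoder: GLT expand/reduce,
    single-head self-attention at reduced width, residual connections,
    a compressed feed-forward block (d -> d/4 -> d), and global pooling."""

    def __init__(self, d_model: int = 64, groups: int = 4, dropout: float = 0.1):
        super().__init__()
        d_half = d_model // 2
        self.glt_expand = GroupLinear(d_model, 2 * d_model, groups)
        self.glt_reduce = GroupLinear(2 * d_model, d_half, groups)
        self.q = nn.Linear(d_half, d_half)
        self.k = nn.Linear(d_half, d_half)
        self.v = nn.Linear(d_half, d_half)
        self.expand_back = nn.Linear(d_half, d_model)    # plays the role of W_L
        self.dropout = nn.Dropout(dropout)
        self.ffn = nn.Sequential(                        # lightweight FFN
            nn.Linear(d_model, d_model // 4), nn.ReLU(),
            nn.Linear(d_model // 4, d_model),
        )

    def forward(self, y: torch.Tensor) -> torch.Tensor:   # y: (batch, n, d_model)
        z = self.glt_reduce(F.relu(self.glt_expand(y)))    # grouped expand -> reduce
        q, k, v = self.q(z), self.k(z), self.v(z)
        attn = torch.softmax(q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5), dim=-1)
        a = attn @ v                                       # self-attention features
        f = self.dropout(self.expand_back(a)) + y          # F = W_L A + Y (residual)
        f = f + self.ffn(f)                                # residual lightweight FFN
        return f.mean(dim=1)                               # GAP over time -> O_T
```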

2.3. Transform Domain Spatial Feature Fully Convolutional Module

The local features of HRRP target sequences contain valuable information, such as the peaks and troughs of the echo signals, which are crucial for target recognition and classification. CNNs can effectively extract local features through convolution operations, thereby capturing the spatial structural information of HRRP data. The spatial feature fully convolutional module in the proposed method consists of one-dimensional convolutional layers, batch normalization (BN) layers, ReLU activation functions, squeeze-and-excitation (SE) blocks, and pooling layers.
The convolutional blocks, composed of one-dimensional convolutional layers, batch normalization layers, and ReLU activation functions, are employed to extract spatial features from the input data. The spatial feature extraction module consists of three convolutional blocks with kernel sizes of 8, 5, and 3, respectively. These sizes enable the capture of spatial information at different scales, thereby enhancing the accuracy of target recognition. Following the first two convolutional blocks, SE blocks are incorporated, as illustrated in Figure 3. The SE blocks perform feature recalibration using adaptive average pooling and fully connected layers, leveraging global information to adjust the weights of each channel. This process enhances the representation of globally significant features in the HRRP sequences, suppresses noise and less important features, and thereby strengthens the network’s expressive capability. The input HRRP data are first transformed using the FrFT to enhance feature representation in the fractional Fourier domain. The FrFT of order $a$ for a signal $\chi(t)$ is defined as
$$X_a(u) = \mathcal{F}^a[\chi(t)](u) = \int_{-\infty}^{+\infty} \chi(t)\, K_a(t, u)\, dt$$
where $K_a(t, u)$ is the transformation kernel:
$$K_a(t, u) = \begin{cases} \sqrt{\dfrac{1 - j\cot\alpha}{2\pi}} \exp\!\left(j\,\dfrac{t^2 + u^2}{2}\cot\alpha - j u t \csc\alpha\right), & \alpha \neq n\pi \\ \delta(t - u), & \alpha = 2n\pi \\ \delta(t + u), & \alpha = (2n + 1)\pi \end{cases}$$
Here, $\alpha = a\pi/2$ is the rotation angle in the distance–frequency plane. This kernel is the fundamental operator of the FrFT; varying the order $a$ effectively rotates the signal representation in the joint distance–frequency plane, enabling the search for a domain where target-specific scattering characteristics are optimally concentrated and separated. The transformed data are then processed through the subsequent spatial feature extraction module. The calculation process for the spatial feature $O_S$ can be expressed as
$$O_S^{(m)} = \begin{cases} \mathrm{SE}\big(\mathrm{ReLU}\big(\mathrm{BN}\big(\mathrm{Conv1D}(X^T)\big)\big)\big), & m = 1 \\ \mathrm{SE}\big(\mathrm{ReLU}\big(\mathrm{BN}\big(\mathrm{Conv1D}(O_S^{(1)})\big)\big)\big), & m = 2 \\ \mathrm{ReLU}\big(\mathrm{BN}\big(\mathrm{Conv1D}(O_S^{(2)})\big)\big), & m = 3 \end{cases}$$
$$O_S = \frac{1}{n}\sum_{i=1}^{n} O_S^{(3)}(i)$$
where $\mathrm{SE}(\cdot)$ represents the squeeze-and-excitation block, $\mathrm{ReLU}(\cdot)$ is the nonlinear activation function, $\mathrm{BN}(\cdot)$ denotes the batch normalization layer, $\mathrm{Conv1D}(\cdot)$ is the one-dimensional convolution layer, and $n$ is the length of the HRRP sequence. The pooling layer uses adaptive average pooling to reduce the dimensionality of the convolution-layer output, yielding the global spatial feature $O_S$.
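The following sketch illustrates this transform-domain spatial branch; the discrete FrFT is approximated by a fractional power of the unitary DFT matrix (one common numerical route, not necessarily the discretization used by the authors), and the channel widths and SE reduction ratio are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.linalg import fractional_matrix_power

def frft_approx(x: np.ndarray, order: float) -> np.ndarray:
    """Rough discrete FrFT of a 1-D signal: fractional power of the unitary
    DFT matrix (a common numerical approximation of the kernel above)."""
    n = x.shape[-1]
    dft = np.fft.fft(np.eye(n)) / np.sqrt(n)          # unitary DFT matrix
    frac = fractional_matrix_power(dft, order)        # F^a
    return frac @ x                                   # complex transform-domain signal

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global pooling + two FC layers reweight channels."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                              # x: (batch, C, L)
        w = self.fc(x.mean(dim=-1))                    # squeeze over length
        return x * w.unsqueeze(-1)                     # excite (channel reweighting)

class FSCN(nn.Module):
    """Sketch of the spatial branch: three Conv1D-BN-ReLU blocks with kernel
    sizes 8/5/3, SE blocks after the first two, then adaptive average pooling."""
    def __init__(self, in_channels: int = 1, num_filters=(128, 256, 128)):
        super().__init__()
        c1, c2, c3 = num_filters
        self.block1 = nn.Sequential(nn.Conv1d(in_channels, c1, 8, padding=4),
                                    nn.BatchNorm1d(c1), nn.ReLU())
        self.se1 = SEBlock(c1)
        self.block2 = nn.Sequential(nn.Conv1d(c1, c2, 5, padding=2),
                                    nn.BatchNorm1d(c2), nn.ReLU())
        self.se2 = SEBlock(c2)
        self.block3 = nn.Sequential(nn.Conv1d(c2, c3, 3, padding=1),
                                    nn.BatchNorm1d(c3), nn.ReLU())

    def forward(self, x):                              # x: (batch, C, L), FrFT-domain input
        x = self.se1(self.block1(x))
        x = self.se2(self.block2(x))
        x = self.block3(x)
        return x.mean(dim=-1)                          # adaptive average pooling -> O_S
```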

2.4. Decision Fusion and Recognition Module

After extracting temporal correlation features and spatial structural features from the temporal feature encoder module and spatial feature fully convolutional module, respectively, the decision fusion and recognition module uses two fully connected layers to classify the temporal feature O T and spatial feature O S , resulting in the temporal recognition result P T and the spatial recognition result P S . Given the sensitivity of HRRP data to time shifts and target orientations, we establish learnable decision weight matrices W T and W S , where W T ,   W S R 1 × t , and t is the number of target categories. These matrices adaptively adjust the contributions of temporal and spatial features for each category, addressing time shift and orientation sensitivity issues to enhance model robustness. The calculation process for the final recognition result is described in Figure 4.

3. Loss Function Under Unbalanced Data

3.1. Focal Loss

Focal loss [34] is a loss function designed to address the issue of class imbalance. By introducing a coefficient factor on top of the standard cross-entropy loss, focal loss reduces the emphasis on easy samples and increases the focus on hard samples, thereby enhancing the model’s classification capability. In this work, hard samples primarily refer to training samples that are misclassified or classified with low confidence, often stemming from minority classes or challenging conditions. The calculation of focal loss is
$$\mathrm{FL} = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)$$
where $p_t$ denotes the predicted probability of the true class for a sample, $\alpha_t$ is a balancing factor that adjusts the influence of positive and negative samples, and $\gamma$ is a modulation factor that adjusts the focus on hard-to-classify samples. By tuning these weighting factors, focal loss effectively addresses the issue in which the loss value is dominated by the majority class and easy samples due to class imbalance.
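For reference, a minimal PyTorch sketch of this focal loss is given below; the function name and the default α/γ values are common choices for illustration, not the settings used in the experiments.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, target: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """Multi-class focal loss, FL = -alpha * (1 - p_t)^gamma * log(p_t).

    `logits` has shape (batch, classes); `target` holds class indices.
    """
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, target.unsqueeze(1)).squeeze(1)   # log p_t per sample
    pt = log_pt.exp()
    return (-alpha * (1.0 - pt) ** gamma * log_pt).mean()
```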

3.2. Label-Smoothing Enhanced Focal Loss

Label smoothing is a regularization technique designed to prevent the model from becoming overly confident in its predictions on the training data, thereby improving generalization. The core idea of label smoothing is to convert the hard one-hot encoded vectors of true labels into a softer probability distribution. This method effectively reduces overfitting and enhances the model’s robustness to noisy data. Let $y$ denote the true label and $k$ the number of classes, with one-hot encoding $y_i$ (only the $i$-th position is 1, and all others are 0) [35]. After label smoothing, $y_i^{\mathrm{smooth}}$ can be represented as
$$y_i^{\mathrm{smooth}} = (1 - \varepsilon)\, y_i + \frac{\varepsilon}{k}$$
where $\varepsilon$ is the smoothing factor. Typically, label smoothing is combined with the cross-entropy (CE) loss [36], which is given by
$$\mathrm{CE}_{\mathrm{smooth}} = -\sum_{i=1}^{k} \left[(1 - \varepsilon)\, y_i + \frac{\varepsilon}{k}\right] \log(p_i)$$
where $p_i$ denotes the model’s predicted probability for class $i$. In the proposed method, label smoothing [37] is integrated with focal loss, resulting in the following calculation:
$$\mathrm{FL}_{\mathrm{smooth}} = -\sum_{i=1}^{k} \left[(1 - \varepsilon)\, y_i + \frac{\varepsilon}{k}\right] \alpha_i (1 - p_i)^{\gamma} \log(p_i)$$
Label smoothing provides a smoothed target distribution, making the loss function’s predictions more balanced across classes. Meanwhile, focal loss adjusts the loss weights to increase the model’s sensitivity to hard samples. The combination of these methods enhances both the model’s robustness to noisy data and its ability to recognize minority-class samples.

3.3. Adaptive Focal Loss

In standard focal loss, the balance factor $\alpha$ and the modulation factor $\gamma$ are typically fixed values, and their settings significantly influence the model’s ability to recognize hard samples. In the proposed method, both the balance factor $\alpha$ and the modulation factor $\gamma$ are dynamically adjusted based on the per-class sample counts and prediction probabilities. The balance factor $\alpha_i$ for each category is computed as
$$\alpha_i = \frac{N}{c_i}$$
where $N$ is the total number of samples across all categories and $c_i$ is the number of samples in the $i$-th category. For minority classes, the balance factor $\alpha_i$ is larger, making the model pay more attention to the minority classes. The modulation factor $\gamma_i$ for the $i$-th class is computed as
$$\gamma_i = \gamma_0 + \beta (1 - p_i)$$
where $\gamma_0$ denotes the initial value of the dynamic adjustment factor, $\beta$ is a tunable hyperparameter, and $p_i$ represents the predicted probability for the $i$-th class. For harder samples, i.e., those with smaller $p_i$, the modulation factor $\gamma_i$ is larger, thereby giving more focus to these samples. Combining this with the label-smoothed focal loss above, the AFL-LS is computed as
$$\mathrm{AFL\text{-}LS} = -\sum_{i=1}^{k} \left[(1 - \varepsilon)\, y_i + \frac{\varepsilon}{k}\right] \frac{N}{c_i}\, (1 - p_i)^{\gamma_0 + \beta (1 - p_i)} \log(p_i)$$
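A compact PyTorch sketch of AFL-LS under these definitions is given below; the batch-averaging convention, the clamping constant, and the example class counts are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def afl_ls_loss(logits: torch.Tensor, target: torch.Tensor,
                class_counts: torch.Tensor, eps: float = 0.2,
                gamma0: float = 2.0, beta: float = 2.0) -> torch.Tensor:
    """Sketch of the adaptive focal loss with label smoothing (AFL-LS).

    alpha_i = N / c_i comes from the training-set class counts,
    gamma_i = gamma0 + beta * (1 - p_i) depends on the predicted probability,
    and the smoothed target (1 - eps) * y_i + eps / k weights each term.
    """
    k = logits.size(-1)
    p = torch.softmax(logits, dim=-1).clamp_min(1e-8)          # (batch, k)
    y = F.one_hot(target, num_classes=k).float()
    y_smooth = (1.0 - eps) * y + eps / k                        # label smoothing
    alpha = class_counts.sum() / class_counts.float()           # N / c_i, shape (k,)
    gamma = gamma0 + beta * (1.0 - p)                           # dynamic focusing factor
    loss = -(y_smooth * alpha * (1.0 - p) ** gamma * torch.log(p)).sum(dim=-1)
    return loss.mean()

# Example: 10 classes with a long-tailed count vector (theta = 0.6 decay, illustrative sizes).
counts = torch.tensor([int(270 * 0.6 ** i) + 1 for i in range(10)])
logits = torch.randn(8, 10)
target = torch.randint(0, 10, (8,))
loss = afl_ls_loss(logits, target, counts)
```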

4. Results and Analysis

4.1. Datasets

In order to verify the validity of the proposed model and the loss function, we conducted experiments on the MSTAR dataset [38] and the CVDomes dataset [39]. The MSTAR dataset is widely used for radar automatic target recognition. It is sourced from a high-resolution synthetic aperture radar operating in the X-band with HH polarization and includes 10 target categories such as BMP2, BTR70, and T72.
We use data with a pitch angle of 17° as the training set and data with a pitch angle of 15° as the test set. We also include four variant targets—BMP2(SN-9563), BMP2(SN-C21), T72(SN-812), and T72(SN-S7)—in the test set to create Dataset 1, which tests the model’s generalization performance. Similarly, we use data with a pitch angle of 17° as the training set, but multiply the sample count of each category by a decay factor $\theta$. The reduced number of samples for the $i$-th category is calculated as
$$\mathrm{Num}_i^{\mathrm{Reduced}} = \theta^{\,i} \times \mathrm{Num}_i$$
where $\mathrm{Num}_i$ denotes the original sample count of the $i$-th category and $i$ ranges over $[0, 9]$. In this experiment, $\theta$ is set to 0.6. We randomly select $\mathrm{Num}_i^{\mathrm{Reduced}}$ samples from the original dataset to construct a class-imbalanced training set. The test set remains the same, with data at a pitch angle of 15°, including the four variant targets, which results in Dataset 2. Following the method in reference [40], we convert SAR images into HRRP sequences. The composition of the MSTAR HRRP sequence dataset is shown in Table 1 and Table 2.
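As a concrete illustration of this subsampling procedure, a short NumPy sketch is given below; the mapping from class label to decay exponent and the example sample counts are assumptions for illustration.

```python
import numpy as np

def build_longtail_subset(labels: np.ndarray, theta: float = 0.6,
                          seed: int = 0) -> np.ndarray:
    """Return indices of a class-imbalanced subset in which class i keeps
    round(theta**i * Num_i) samples, i.e., Num_i^Reduced = theta^i * Num_i."""
    rng = np.random.default_rng(seed)
    keep = []
    for i, cls in enumerate(np.unique(labels)):
        idx = np.flatnonzero(labels == cls)
        n_keep = max(1, int(round(theta ** i * idx.size)))
        keep.append(rng.choice(idx, size=n_keep, replace=False))
    return np.concatenate(keep)

# Example: 10 classes with roughly 2700 samples each, theta = 0.6 as in the paper.
labels = np.repeat(np.arange(10), 2700)
subset = build_longtail_subset(labels, theta=0.6)
```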
The specific transformation steps are as follows: first, the SAR images undergo a one-dimensional inverse fast Fourier transform (IFFT) to obtain complex-valued range profiles, and the modulus of each profile is then taken to obtain the HRRP sequences. For each SAR image, 100 HRRP sequence samples are generated. By averaging every ten HRRP sequence samples, we obtain one averaged HRRP sample. Consequently, the training set of Dataset 1 contains 27,470 samples, while the training set of Dataset 2 contains 6809 samples. Both datasets have a test set containing 32,030 samples.
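A short NumPy sketch of this conversion is given below, assuming rows of the SAR chip correspond to azimuth lines and columns to range samples; the axis convention and the example image size are illustrative assumptions.

```python
import numpy as np

def sar_to_hrrp(sar_image: np.ndarray, group: int = 10) -> np.ndarray:
    """Sketch of the SAR-to-HRRP conversion described above (after [40]):
    a 1-D IFFT along the range axis gives complex range profiles, the modulus
    gives HRRPs, and every `group` consecutive profiles are averaged."""
    complex_profiles = np.fft.ifft(sar_image, axis=1)       # 1-D IFFT per azimuth line
    hrrp = np.abs(complex_profiles)                         # modulus -> real-valued HRRP
    n = (hrrp.shape[0] // group) * group                    # drop the remainder, if any
    return hrrp[:n].reshape(-1, group, hrrp.shape[1]).mean(axis=1)

# Example: a 100 x 128 complex SAR chip yields 10 averaged HRRPs of length 128.
chip = np.random.randn(100, 128) + 1j * np.random.randn(100, 128)
averaged_hrrp = sar_to_hrrp(chip)    # shape (10, 128)
```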
The CVDomes dataset, released in 2009, contains simulated X-band signatures of 10 civilian vehicle categories, including Toyota Camry, Honda Civic 4dr, and 1993 Jeep. Radar azimuth angles cover the full range from 0° to 359°. Acquired directly in HRRP format, this dataset requires no IFFT processing, and HRRP sequences can be generated through sliding window operations. For experimental validation, we select HH-polarized data at a 30-degree elevation angle and simulate varying signal-to-noise ratio conditions by superimposing Gaussian noise. Following the long-tail distribution strategy consistent with the MSTAR dataset, 60% of samples constitute the class-imbalanced training set, with 20% allocated to the validation and test sets, respectively. Detailed configuration parameters are provided in Table 3.
The imbalance factor (IF) [41] is a metric used to quantify the degree of class imbalance in classification problems. By calculating the IF, one can measure the disparity in the number of samples across different classes within a dataset. In this study, the IF values for Datasets 1–3 are measured at 0.997, 0.702, and 0.714, respectively. These metrics indicate that Dataset 1 maintains a near-ideal balanced state, whereas Datasets 2 and 3 exhibit significant class imbalance, manifesting characteristic long-tailed distribution patterns in sample category quantities.

4.2. Recognition Performance

To validate the effectiveness of the proposed method for HRRP sequence recognition, two types of comparative experiments are designed: one to verify the effectiveness of the LSTF model and the other to evaluate the AFL-LS loss function. The hardware environment for these experiments includes a 64-bit operating system, an Intel Core i5-13490F processor, an RTX 3060 GPU, and 32 GB of RAM. The software environment comprises Python 3.9 and PyTorch 2.0.1.

4.2.1. Validation of the Effectiveness of the Proposed LSTF Model

To validate the effectiveness of the proposed method, we select LSTM-FCN [42], GRU-FCN [43], gMLP [44], XCM [45], GTN [46], RLAT [47], and LViT [48] as baseline methods for comparison. Each method runs five times under identical parameter conditions to minimize randomness caused by dropout and sample selection. LSTM-FCN and GRU-FCN are both hybrid models combining recurrent neural networks with convolutional neural networks. They leverage long short-term memory (LSTM) and gated recurrent units (GRU), respectively, to capture temporal dependencies, while integrating fully convolutional networks (FCN) to extract local spatial features, making them suitable for time-series data classification. gMLP centers on a multilayer perceptron, achieving feature modeling through gating mechanisms and spatial projections. It efficiently captures data correlations while dispensing with traditional attention mechanisms. XCM is an explainable convolutional neural network designed for multivariate time-series classification, leveraging convolutional structures to mine local data features while providing interpretability. GTN, or gated Transformer network, optimizes attention distribution in Transformers through gating mechanisms, enhancing the capture of critical information in multivariate time-series data. RLAT is a lightweight Transformer designed for high-resolution range profile sequence recognition. It reduces parameter size and computational overhead through structural simplification, meeting edge-device deployment requirements. LViT is a lightweight visual Transformer that employs optimized designs like competitive blocks to enhance feature extraction efficiency while maintaining compactness. It has been applied to visual tasks such as fingerprint recognition.
The experimental configuration employs a single-layer temporal feature encoder in the LSTF model, with group linear transformations (GLTs) divided into four groups. Preprocessed HRRP sequences of length 32 serve as the input, utilizing standard cross-entropy loss for Dataset 1 and AFL-LS for Datasets 2 and 3.
On class-balanced Dataset 1, all comparative models demonstrate competent baseline performance. However, the proposed LSTF exhibits significant superiority in both recognition accuracy and generalization capability. As detailed in Table 4, LSTF achieves 99.52% overall recognition accuracy in the 10-class identification task, outperforming the second-best model by 0.25 percentage points, which validates the spatiotemporal fusion mechanism’s sensitivity to subtle target differences. Except for a few target categories (2S1, BTR70, and T72), on which the proposed method is not optimal, it achieves the best per-category recognition performance across all remaining categories. These results confirm that LSTF maintains high precision while demonstrating exceptional generalization capability, establishing a technical foundation for subsequent research under class-imbalanced data conditions.
Further comparative analysis on class-imbalanced Dataset 2 reveals the recognition accuracy of competing models, as quantitatively detailed in Table 5. The LSTF model maintains superior overall recognition rates while achieving optimal performance across four target categories. Given Dataset 2’s imbalanced training distribution and test set containing variant targets, the experimental setup imposes heightened demands on model generalization capabilities. The results demonstrate that LSTF attains 99.96% and 99.98% accuracy for BRDM-2 and T62 categories with variant models, respectively, confirming the effectiveness of the dynamic decision weighting mechanism in mitigating feature confusion. For tail classes 2S1, BTR70, D7, T72, and ZIL131, the model achieves 100.00% accuracy, showing marked improvements over Transformer-based GTN and RLAT models. In challenging BMP2, BTR60, and ZSU23/4 category recognition, the performance of LSTF is slightly inferior. Furthermore, as illustrated in Figure 5, LSTF exhibits enhanced recognition stability on Dataset 2 with a maximum fluctuation limited to 0.61%, demonstrating significant advantages over comparative models.
To systematically evaluate model performance under class-imbalanced conditions, the experiment incorporates four metrics: macro F1-score, G-mean, macro ROC-AUC, and macro PR-AUC for comprehensive analysis, with quantitative results presented in Table 6. LSTF demonstrates superior performance across the four metrics: achieving a macro F1-score of 0.9935 indicates balanced inter-class prediction capabilities; its G-mean value significantly outperforms other baselines, validating strong robustness against skewed data distributions; both macro ROC-AUC and PR-AUC approach theoretical maximums, confirming the model’s decision reliability under high-confidence thresholds.
To validate the model’s recognition robustness and noise immunity under class-imbalanced and low signal-to-noise ratio (SNR) conditions, comparative experiments were conducted on the class-imbalanced Dataset 3. Gaussian noise was introduced during data preprocessing to construct training sets with an SNR of 20 dB, with model performance evaluated under test conditions of 20 dB, 15 dB, 10 dB, and 5 dB SNR levels, as detailed in Table 7. Experimental results reveal differentiated performance degradation patterns across models as SNR decreases. Under test conditions of 15 dB, the proposed model achieved recognition accuracy second only to the parameter-intensive GTN model, while demonstrating optimal performance under other SNR conditions. This indicates that the proposed method effectively overcomes existing models’ reliance on high-quality signals, exhibiting enhanced engineering applicability in complex battlefield environments with dynamically varying SNR levels.
This study further compares parameter counts and computational complexity across models to evaluate engineering applicability, as shown in Table 8. Among Transformer-based variants, GTN demonstrates superior performance on noise-contaminated Dataset 3, where its deep architecture and high parameter count facilitate complex feature extraction, yet its large model size hinders real-time processing on embedded devices. In contrast, LSTF achieves significant reductions in both parameters and computations compared to GTN, while maintaining deep feature extraction capabilities through lightweight modifications. Although the most lightweight model, RLAT, excels on class-balanced Dataset 1, its recognition accuracy declines on imbalanced Datasets 2 and 3, revealing insufficient stability in identifying minority-class samples. While the proposed model’s lightweight characteristics remain inferior to baseline methods like LSTM-FCN and GRU-FCN, it effectively reduces the inherent high parameter and computational loads of the Transformer attention mechanism. This optimization achieves complexity reduction without compromising recognition performance, making Transformer-based architectures more suitable for deployment on edge devices.
For a more intuitive comparison, Figure 6 is plotted. This dual-axis bar chart clearly compares the performance trade-offs of eight temporal classification models (including the proposed model) in terms of computational complexity (MACs, left blue axis) and model size (Parameters, right red axis). It reveals that while the GTN model achieves significantly higher values than other models across both metrics, most other models—including the proposed model—exhibit MACs and parameters concentrated within a lower range, demonstrating efficiency while maintaining relatively low resource consumption.

4.2.2. Verification of AFL-LS Effectiveness

To verify the effectiveness of the proposed AFL-LS, several comparison methods are selected: standard cross-entropy loss (CE), data distribution loss (DD Loss) [49], standard focal loss (FL), focal loss with label smoothing (FL-LS), and adaptive focal loss (AFL). These methods are evaluated using the LSTF model on Dataset 2. The smoothing factor $\varepsilon$ is set to 0.2, while $\gamma_0$ and $\beta$ are both set to 2.0. Each method is run five times, and the average recognition rates for each method are shown in Table 9.
Experimental results demonstrate that the proposed AFL-LS loss function exhibits significant performance advantages under class-imbalanced conditions. It achieves superior overall accuracy compared to all baseline methods, outperforming the second-best approach, CE, by 0.3 percentage points. For the severely imbalanced ZSU23/4 and BMP2 classes, AFL-LS achieved recognition rates of 99.48% and 95.97%, respectively. Meanwhile, CE effectively guided model attention toward minority-class samples, yielding high recognition rates. For BRDM-2 and BTR60, although label smoothing strategies enhanced model generalization, they simultaneously increased recognition difficulty for minority classes, causing CE-LS and FL-LS to completely fail in these categories. For the remaining targets, AFL-LS maintained higher accuracy than FL, confirming LS’s effectiveness in improving majority-class generalization. Notably, on the BRDM-2 target with complex scattering patterns, the state-of-the-art DD loss achieved only a 98.83% recognition rate. AFL-LS, however, elevated performance to 99.96% through dynamic class weight balancing. By integrating adaptive mechanisms with label smoothing techniques, this method not only intensifies focus on minority classes but also enhances generalization capabilities for variant targets, ultimately achieving the highest overall average recognition rate.
Furthermore, comparative analysis was conducted using four metrics: macro-F1 score, G-mean, macro ROC-AUC, and macro PR-AUC, as shown in Table 10. The proposed AFL-LS method achieves optimal performance across all metrics, while CE, CE-LS, and FL-LS completely fail on minority classes, resulting in G-mean values of zero and lower macro-F1 scores. Although the state-of-the-art DD Loss demonstrates outstanding performance in the macro ROC-AUC and macro PR-AUC metrics, its values still fell short of those of the algorithm proposed in this paper.
Additionally, comparative experiments with different loss functions were performed on Dataset 3 under low signal-to-noise ratio (SNR=5 dB) conditions, as shown in Table 11 and Table 12. AFL-LS attained the highest overall recognition accuracy of 96.03%, outperforming the second-best method, AFL, by 4.19%, and achieved the highest recognition rates in 8 out of 10 target classes. Analysis of other evaluation metrics demonstrates that AFL-LS exhibits marked superiority in macro-F1 score and G-mean, confirming its ability to balance recognition performance between majority and tail classes. Furthermore, its exceptional performance in macro ROC-AUC and macro PR-AUC reflects high-confidence decision boundaries in noisy environments, proving that this loss function significantly enhances model robustness and noise resistance under low-SNR conditions.

4.3. Ablation Experiment

To validate the effectiveness of the spatiotemporal feature complementary mechanism and adaptive weighting decision module, ablation studies were conducted on the MSTAR and CVDomes datasets, as presented in Table 13. Single-branch features exhibited significant performance fluctuations across data conditions. The spatial branch achieved 3.78% and 25.72% higher accuracy than the temporal branch in Datasets 1 and 2, whereas in Dataset 3, the spatial branch declined to 87.13%, while the temporal branch maintained robust performance at 94.13%. This demonstrates the differentiated sensitivity of spatiotemporal features to environmental characteristics: spatial features excel in geometric configuration analysis, while temporal features exhibit stronger noise robustness. Although the dual-branch framework enhances model applicability, static equal-weight fusion strategies tend to suffer from limited optimization and feature quality instability. The proposed adaptive weighting mechanism achieved performance improvements of 0.27%, 0.96%, and 1.62% across the three datasets.
The experimental results indicate that the coordinated design of the dual-branch framework and adaptive weighting mechanism effectively addresses feature modality conflicts and scene adaptability limitations in conventional methods, demonstrating superior generalization capability in complex data distributions and noisy environments.

4.4. Complexity Analysis

The primary computational cost of the proposed method comes from the Transformer–FCN backbone. Let the input sequence length be $T_S$, the model dimension be $d$, the feed-forward hidden dimension be $d_{ff}$, and the number of layers be $L$.
Then, the computational cost of a single forward pass is approximately
$$O\!\left(L\left(T_S^2\, d + T_S\, d\, d_{ff}\right)\right)$$
Considering that backpropagation takes approximately 2–3 times the cost of forward propagation, the training complexity per epoch is
$$O\!\left(\frac{N}{B} \cdot 3L\left(T_S^2\, d + T_S\, d\, d_{ff}\right)\right)$$
where $N/B$ denotes the number of training batches per epoch. The space complexity is primarily determined by the parameters and the attention map:
$$O\!\left(L\left(d^2 + d\, d_{ff}\right) + B\, T_S\, d + B\, T_S^2\right)$$
Compared to traditional convolutional models, this approach exhibits slightly higher computational complexity when the sequence length $T_S$ is large, but demonstrates significant advantages in feature representation and classification accuracy.
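As a rough worked example, the dominant forward-pass term can be evaluated numerically as follows; the model dimension and feed-forward width below are assumed values, with only the sequence length (32) taken from Section 4.2.

```python
def transformer_forward_macs(seq_len: int, d_model: int, d_ff: int, layers: int) -> int:
    """Order-of-magnitude MAC estimate of the dominant terms
    L * (T_S^2 * d + T_S * d * d_ff); constants and the GLT/lightweight-FFN
    savings are ignored, so the figure is only indicative."""
    return layers * (seq_len ** 2 * d_model + seq_len * d_model * d_ff)

# Illustrative numbers (d_model and d_ff are assumptions, seq_len = 32 as in Section 4.2):
print(transformer_forward_macs(seq_len=32, d_model=64, d_ff=16, layers=1))  # ~0.1 M MACs
```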
Notably, the proposed method achieves real-time inference performance on resource-limited edge devices. Benchmark tests on an Intel Core i5-13490F processor show that the inference latency for a single HRRP sequence (length = 32) is only 2.3 ms, which is far below the 10 ms real-time threshold required for radar target recognition systems. This real-time capability is attributed to the TGLT-based lightweight design, which reduces computational complexity by 92.4% compared to the standard Transformer encoder (GTN), enabling seamless deployment in time-sensitive scenarios.

4.5. Visualization and Evaluation of Feature Extraction

To validate the effectiveness of spatiotemporal feature extraction in the proposed model, t-distributed stochastic neighbor embedding (t-SNE) was employed to visualize features from the imbalanced Dataset 2 and Dataset 3, as shown in Figure 7 and Figure 8. t-SNE projects high-dimensional features into a 2D space (with axes t-SNE Dimension 1 and t-SNE Dimension 2), enabling intuitive observation of feature distributions and clustering patterns. Specifically, Figure 7a and Figure 8a display temporal correlation features of HRRP sequences extracted by the temporal encoder, while Figure 7b and Figure 8b illustrate spatial structural features obtained from the full-convolution spatial module. In all subfigures, each point represents a sample, and its color corresponds to a specific target class (see legend).
The visualization results reveal that both temporal and spatial features exhibit distinct inter-class separability and intra-class compactness after dimensionality reduction (i.e., points of the same color form tight clusters, and clusters of different colors are well separated), confirming the dual-branch architecture’s capability to characterize target scattering mechanisms. This visualization provides interpretable evidence for the model’s robustness in complex electromagnetic environments, demonstrating the theoretical superiority of the spatiotemporal feature complementary enhancement mechanism.

5. Conclusions

In this study, we proposed an LSTF method for HRRP sequence target recognition to address the challenges of category imbalance and robustness under complex conditions. The designed TGLT-based lightweight Transformer encoder effectively captured temporal dynamics while reducing computational overhead, which satisfied the requirements of real-time edge deployment. The transform-domain spatial feature extraction network, incorporating the fractional Fourier transform and FSCN, fully exploited multi-domain spatial information and enhanced feature discriminability. Furthermore, the AFL-LS improved recognition for long-tail classes and strengthened generalization capability. Experimental results on the MSTAR and CVDomes datasets showed that our method outperformed existing baselines in multiple scenarios, especially in minority class recognition and variant identification, while maintaining stable robustness under noise interference. Overall, the proposed approach significantly improved recognition performance and demonstrated strong adaptability for practical non-cooperative target recognition.
Future research can further extend this work in several directions. First, more effective recognition strategies for extremely short, incomplete, or missing HRRP sequences should be explored to better match realistic non-cooperative radar scenarios. Second, self-supervised and semi-supervised learning can be introduced to reduce reliance on labeled data and improve generalization for long-tail and unseen target categories. Third, adaptive long-tail optimization and incremental learning mechanisms may be developed to handle dynamically changing class distributions and emerging targets. In addition, multi-view and multi-modal fusion of HRRP with complementary radar representations can be investigated to enhance robustness under complex observation conditions. Incorporating physics-informed priors into data-driven models is also a promising direction to improve interpretability and domain generalization. Finally, further hardware-aware optimization and model compression should be studied to support efficient, low-latency deployment on real-time edge radar platforms.

Author Contributions

Conceptualization, J.Y. (Junjun Yin) and Y.S.; data curation, X.L.; formal analysis, X.L.; funding acquisition, J.Y. (Junjun Yin); investigation, Y.S.; methodology, J.Y. (Junjun Yin); project administration, J.Y. (Junjun Yin); resources, J.Y. (Junjun Yin); software, Y.S.; supervision, J.Y. (Jian Yang); validation, Y.S.; visualization, Y.S.; writing—original draft preparation, Y.S.; writing—review and editing, X.Z. and J.Y. (Junjun Yin). All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key Research and Development Program of China under Grant 2024YFB3909800, the open research fund of Pinghu Space Awareness Laboratory Technology Co., Ltd., and the NSFC under Grants 62222102 and 62171023.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive suggestions on improving the quality of the paper.

Conflicts of Interest

The authors declare that this study received funding from Pinghu Space Awareness Laboratory Technology Co., Ltd. The funder was not involved in the study design; the collection, analysis, or interpretation of data; the writing of this article; or the decision to submit it for publication.

Figure 1. LSTF network structure.
Figure 2. Illustration of the structure of the group linear transformations layer.
Figure 3. Squeeze-and-excitation block.
Figure 4. Decision fusion and recognition pipeline.
Figure 5. Accuracy of the 10 targets in Dataset 2.
Figure 6. Lightweight comparison of each model.
Figure 7. Visualization results of feature extraction on Dataset 2. (a) Temporal features. (b) Spatial features.
Figure 8. Visualization results of feature extraction on Dataset 3. (a) Temporal features. (b) Spatial features.
Table 1. MSTAR HRRP Sequence Dataset 1.
Target | Training Set (17°) | Target | Testing Set (15°)
2S1 | 2990 | 2S1 | 2740
BMP2(SN-9566) | 2330 | BMP2(SN-9566) | 1960
 |  | BMP2(SN-9563) | 1950
 |  | BMP2(SN-C21) | 1960
BRDM-2 | 2980 | BRDM-2 | 2740
BTR70(SN-C71) | 2330 | BTR70(SN-C71) | 1960
BTR60 | 2560 | BTR60 | 1950
D7 | 2990 | D7 | 2740
T62 | 2990 | T62 | 2730
T72(SN-132) | 2320 | T72(SN-132) | 1960
 |  | T72(SN-812) | 1950
 |  | T72(SN-S7) | 1910
ZIL131 | 2990 | ZIL131 | 2740
ZSU23/4 | 2990 | ZSU23/4 | 2740
Total | 27,470 | Total | 32,030
Table 2. MSTAR HRRP Sequence Dataset 2.
Target | Training Set (17°) | Target | Testing Set (15°)
2S1 | 2990 | 2S1 | 2740
BMP2(SN-9566) | 1398 | BMP2(SN-9566) | 1960
 |  | BMP2(SN-9563) | 1950
 |  | BMP2(SN-C21) | 1960
BRDM-2 | 1072 | BRDM-2 | 2740
BTR70(SN-C71) | 503 | BTR70(SN-C71) | 1960
BTR60 | 331 | BTR60 | 1950
D7 | 232 | D7 | 2740
T62 | 139 | T62 | 2730
T72(SN-132) | 64 | T72(SN-132) | 1960
 |  | T72(SN-812) | 1950
 |  | T72(SN-S7) | 1910
ZIL131 | 50 | ZIL131 | 2740
ZSU23/4 | 30 | ZSU23/4 | 2740
Total | 6809 | Total | 32,030
Table 3. CVDomes HRRP Sequence Dataset 3.
Target | Training Set | Testing Set
Toyota Camry | 3648 | 1200
Honda Civic 4dr | 2188 | 1200
1993 Jeep | 1313 | 1200
1999 Jeep | 787 | 1200
Nissan Maxima | 472 | 1200
Mazda MPV | 283 | 1200
Mitsubishi | 170 | 1200
Nissan Sentra | 102 | 1200
Toyota Avalon | 61 | 1200
Toyota Tacoma | 36 | 1200
Total | 9060 | 12,000
Table 4. Recognition accuracy of comparative experiments on Dataset 1 (%).
Target | LSTM-FCN | GRU-FCN | gMLP | XCM | GTN | RLAT | LViT | Proposed
2S1 | 100.00 | 100.00 | 100.00 | 99.87 | 97.91 | 99.97 | 99.34 | 98.47
BMP2 | 96.94 | 98.31 | 91.22 | 77.59 | 93.32 | 99.01 | 99.08 | 99.91
BRDM-2 | 96.47 | 95.54 | 92.22 | 84.42 | 68.24 | 92.06 | 98.50 | 99.01
BTR70(SN-C71) | 100.00 | 100.00 | 86.37 | 98.38 | 89.07 | 99.32 | 94.26 | 97.60
BTR60 | 99.09 | 99.82 | 96.31 | 82.99 | 99.01 | 98.36 | 96.62 | 99.24
D7 | 100.00 | 100.00 | 100.00 | 99.98 | 98.25 | 100.00 | 100.00 | 100.00
T62 | 99.91 | 99.67 | 98.83 | 91.27 | 93.81 | 97.46 | 100.00 | 100.00
T72 | 100.00 | 100.00 | 94.98 | 91.62 | 99.36 | 99.99 | 97.71 | 99.79
ZIL131 | 100.00 | 100.00 | 99.27 | 95.41 | 98.54 | 99.94 | 100.00 | 100.00
ZSU23/4 | 100.00 | 100.00 | 99.76 | 99.97 | 99.67 | 99.97 | 100.00 | 100.00
Overall Accuracy | 99.07 | 99.27 | 95.56 | 90.73 | 94.22 | 98.77 | 98.65 | 99.52
Table 5. Recognition accuracy of comparative experiments on Dataset 2 (%).
Target | LSTM-FCN | GRU-FCN | gMLP | XCM | GTN | RLAT | LViT | Proposed
2S1 | 99.43 | 96.93 | 99.92 | 99.61 | 99.94 | 99.88 | 99.78 | 100.00
BMP2 | 89.60 | 94.30 | 98.54 | 96.38 | 95.58 | 93.37 | 89.85 | 95.97
BRDM-2 | 74.65 | 87.49 | 89.41 | 98.32 | 65.11 | 79.36 | 88.91 | 99.96
BTR70(SN-C71) | 100.00 | 99.81 | 96.33 | 100.00 | 84.74 | 94.47 | 81.44 | 100.00
BTR60 | 93.90 | 99.16 | 98.55 | 99.94 | 95.81 | 97.42 | 81.52 | 99.95
D7 | 100.00 | 100.00 | 99.96 | 100.00 | 99.93 | 98.32 | 99.59 | 100.00
T62 | 98.61 | 97.60 | 95.62 | 99.19 | 94.14 | 71.97 | 99.26 | 99.98
T72 | 99.97 | 99.28 | 90.87 | 99.71 | 99.16 | 98.67 | 99.62 | 100.00
ZIL131 | 99.19 | 99.94 | 96.81 | 100.00 | 82.33 | 73.90 | 99.85 | 100.00
ZSU23/4 | 99.71 | 99.99 | 95.60 | 98.86 | 71.19 | 89.31 | 99.63 | 99.68
Overall Accuracy | 95.29 | 97.22 | 95.82 | 98.94 | 90.41 | 90.62 | 94.37 | 99.55
Table 6. Performance comparison of each model on other evaluation indicators on Dataset 2.
Metric | LSTM-FCN | GRU-FCN | gMLP | XCM | GTN | RLAT | LViT | Proposed
Macro F1 | 0.9467 | 0.9722 | 0.9574 | 0.9894 | 0.8804 | 0.8764 | 0.9427 | 0.9935
G-mean | 0.9457 | 0.9711 | 0.9598 | 0.9920 | 0.8760 | 0.8723 | 0.9464 | 0.9953
Macro ROC-AUC | 0.9990 | 0.9718 | 0.9981 | 0.9999 | 0.9882 | 0.9878 | 0.9971 | 1.0000
Macro PR-AUC | 0.9949 | 0.9998 | 0.9908 | 0.9993 | 0.9441 | 0.9454 | 0.9801 | 1.0000
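The macro-averaged indicators reported in Table 6 (and in Tables 10 and 12 below) can be reproduced from per-class predictions with scikit-learn. The sketch below uses random placeholder predictions and assumes that G-mean denotes the geometric mean of per-class recalls, which is a common convention but an assumption here.

```python
# Hedged sketch of the evaluation indicators (placeholder predictions, not the paper's outputs).
import numpy as np
from sklearn.metrics import f1_score, recall_score, roc_auc_score, average_precision_score
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)
num_classes = 10
y_true = rng.integers(0, num_classes, size=500)
scores = rng.random((500, num_classes))
scores /= scores.sum(axis=1, keepdims=True)   # placeholder class probabilities
y_pred = scores.argmax(axis=1)

macro_f1 = f1_score(y_true, y_pred, average="macro")

# Assumed G-mean: geometric mean of per-class recalls (zero if any class is never recovered).
per_class_recall = recall_score(y_true, y_pred, average=None)
g_mean = float(np.prod(per_class_recall) ** (1.0 / num_classes))

# Macro ROC-AUC and PR-AUC computed one-vs-rest on binarized labels.
y_bin = label_binarize(y_true, classes=np.arange(num_classes))
macro_roc_auc = roc_auc_score(y_bin, scores, average="macro")
macro_pr_auc = average_precision_score(y_bin, scores, average="macro")

print(macro_f1, g_mean, macro_roc_auc, macro_pr_auc)
```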
Table 7. Recognition accuracy of each model under different SNR levels on Dataset 3 (%).
SNR (dB) | LSTM-FCN | GRU-FCN | gMLP | XCM | GTN | RLAT | LViT | Proposed
20 | 81.16 | 85.18 | 96.53 | 82.26 | 97.53 | 88.00 | 80.14 | 98.98
15 | 78.68 | 84.07 | 93.67 | 77.88 | 94.67 | 85.89 | 72.05 | 93.50
10 | 72.23 | 74.40 | 85.68 | 71.74 | 89.21 | 81.95 | 47.64 | 89.23
5 | 60.21 | 63.17 | 66.77 | 58.13 | 79.83 | 72.88 | 33.98 | 79.96
Table 8. Lightweight comparison of each model.
Metric | LSTM-FCN | GRU-FCN | gMLP | XCM | GTN | RLAT | LViT | Proposed
MACs (M) | 21.59 | 19.45 | 25.37 | 53.13 | 386.60 | 5.66 | 4.65 | 29.62
Params (M) | 0.67 | 0.59 | 0.79 | 1.06 | 5.83 | 0.22 | 0.29 | 0.94
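Figures such as those in Table 8 are typically obtained by counting learnable parameters and multiply-accumulate operations. The sketch below shows one generic way to do this in PyTorch for an arbitrary placeholder model, not for the networks compared here; dedicated profilers are usually used for full MAC counts.

```python
# Generic "Params (M)" count and a manual MAC estimate for linear layers (placeholder model).
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))

# Parameter count in millions: sum of element counts over all learnable tensors.
params_m = sum(p.numel() for p in model.parameters()) / 1e6

# MACs for a Linear layer processing one input vector = in_features * out_features.
macs = sum(m.in_features * m.out_features
           for m in model.modules() if isinstance(m, nn.Linear))
macs_m = macs / 1e6

print(f"Params: {params_m:.2f} M, MACs per sample: {macs_m:.2f} M")
```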
Table 9. Recognition accuracy of each loss function on Dataset 2 (%).
Target | CE | CE-LS | DD Loss | FL | FL-LS | AFL | LV-LS | AFL-LS
2S1 | 100.00 | 99.12 | 99.85 | 86.04 | 100.00 | 100.00 | 85.97 | 100.00
BMP2 | 100.00 | 100.00 | 99.87 | 100.00 | 96.47 | 99.86 | 99.94 | 95.97
BRDM-2 | 96.46 | 95.85 | 98.83 | 96.11 | 98.91 | 99.92 | 96.05 | 99.96
BTR70(SN-C71) | 93.61 | 99.69 | 89.33 | 72.14 | 91.81 | 98.29 | 72.91 | 100.00
BTR60 | 99.84 | 99.64 | 76.59 | 95.21 | 98.43 | 99.94 | 93.18 | 99.95
D7 | 97.72 | 98.58 | 100.00 | 99.93 | 100.00 | 89.02 | 97.88 | 100.00
T62 | 98.36 | 84.67 | 98.11 | 95.07 | 99.12 | 100.00 | 85.41 | 100.00
T72 | 99.95 | 99.15 | 97.45 | 98.19 | 100.00 | 99.91 | 95.12 | 100.00
ZIL131 | 100.00 | 91.00 | 99.89 | 99.52 | 100.00 | 100.00 | 98.25 | 100.00
ZSU23/4 | 100.00 | 90.00 | 100.00 | 100.00 | 100.00 | 100.00 | 99.46 | 99.48
Overall Accuracy | 98.91 | 95.77 | 96.62 | 94.77 | 98.47 | 98.69 | 92.41 | 99.21
Table 10. Comparison of the performance of each loss function on other evaluation indicators (Dataset 2).
Metric | CE | CE-LS | DD Loss | FL | FL-LS | AFL | LV-LS | AFL-LS
Macro F1 | 0.9874 | 0.8615 | 0.9628 | 0.9516 | 0.9850 | 0.9860 | 0.948 | 0.9935
G-mean | 0.9893 | 0.0000 | 0.9697 | 0.9671 | 0.9848 | 0.9856 | 0.9021 | 0.9953
Macro ROC-AUC | 1.0000 | 0.9972 | 0.9996 | 1.0000 | 0.9999 | 0.9999 | 0.9865 | 1.0000
Macro PR-AUC | 1.0000 | 0.9764 | 0.9973 | 1.0000 | 0.9993 | 0.9994 | 0.9798 | 1.0000
Table 11. Recognition accuracy of each loss function on Dataset 3 (%).
Target | CE | CE-LS | DD Loss | FL | FL-LS | AFL | LV-LS | AFL-LS
Toyota Camry | 97.17 | 89.54 | 92.79 | 87.62 | 85.80 | 87.08 | 86.52 | 98.28
Honda Civic 4dr | 87.40 | 70.18 | 79.79 | 92.95 | 94.56 | 94.86 | 82.56 | 96.53
1993 Jeep | 100.00 | 87.15 | 98.07 | 91.79 | 83.05 | 98.57 | 97.95 | 98.09
1999 Jeep | 65.63 | 89.11 | 91.20 | 91.19 | 79.67 | 80.68 | 88.25 | 90.22
Nissan Maxima | 96.73 | 95.10 | 98.94 | 98.94 | 89.32 | 93.13 | 92.47 | 98.29
Mazda MPV | 96.15 | 66.39 | 99.83 | 100.00 | 96.74 | 95.09 | 94.36 | 97.24
Mitsubishi | 70.41 | 76.04 | 89.98 | 76.24 | 82.37 | 85.41 | 86.78 | 88.17
Nissan Sentra | 100.00 | 98.03 | 92.47 | 90.34 | 96.31 | 96.44 | 97.72 | 98.06
Toyota Avalon | 98.42 | 88.04 | 71.33 | 100.00 | 99.37 | 96.76 | 96.08 | 98.49
Toyota Tacoma | 95.77 | 0.00 | 92.69 | 99.59 | 100.00 | 97.40 | 95.48 | 96.88
Overall Accuracy | 88.80 | 82.47 | 89.53 | 91.83 | 89.38 | 91.84 | 91.82 | 96.03
Table 12. Comparison of the performance of each loss function on other evaluation indicators (Dataset 3).
Metric | CE | CE-LS | DD Loss | FL | FL-LS | AFL | LV-LS | AFL-LS
Macro F1 | 0.8640 | 0.7822 | 0.8867 | 0.9154 | 0.8806 | 0.9138 | 0.9112 | 0.9364
G-mean | 0.8237 | 0.0000 | 0.8696 | 0.9092 | 0.8598 | 0.9066 | 0.9027 | 0.9319
Macro ROC-AUC | 0.9956 | 0.9930 | 0.9942 | 0.9953 | 0.9961 | 0.9958 | 0.9923 | 0.9986
Macro PR-AUC | 0.9717 | 0.9556 | 0.9644 | 0.9807 | 0.9753 | 0.9769 | 0.9635 | 0.9893
Table 13. Recognition accuracy of ablation experiments.
Temporal Branch | Spatial Branch | Adaptive Weighting | Accuracy (%): Dataset 1 | Dataset 2 | Dataset 3
 |  |  | 94.94 | 73.36 | 94.13
 |  |  | 98.72 | 99.08 | 87.13
 |  |  | 99.40 | 98.28 | 95.34
 |  |  | 99.67 | 99.24 | 96.96
