Technical Note

Limited Sample Radar HRRP Recognition Using FWA-GAN

1 Radar Technology Research Institute, Beijing Institute of Technology, Beijing 100081, China
2 Zhengzhou Academy of Intelligent Technology, Beijing Institute of Technology, Zhengzhou 450000, China
3 Electromagnetic Sensing Research Center of CEMEE State Key Laboratory, Beijing Institute of Technology, Beijing 100081, China
4 Beijing Key Laboratory of Embedded Real-Time Information Processing Technology, Beijing 100081, China
5 Chongqing Innovation Center, Beijing Institute of Technology, Chongqing 401120, China
6 Advanced Technology Research Institute, Beijing Institute of Technology, Jinan 250300, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(16), 2963; https://doi.org/10.3390/rs16162963
Submission received: 5 July 2024 / Revised: 4 August 2024 / Accepted: 9 August 2024 / Published: 12 August 2024
(This article belongs to the Special Issue State-of-the-Art and Future Developments: Short-Range Radar)

Abstract

In radar High-Resolution Range Profile (HRRP) target recognition, the targets of interest are always non-cooperative, posing a significant challenge in acquiring sufficient samples. This limitation results in the prevalent issue of limited sample availability. To mitigate this problem, researchers have sought to integrate handcrafted features into deep neural networks, thereby augmenting the information content. Nevertheless, existing methodologies for fusing handcrafted and deep features often resort to simplistic addition or concatenation approaches, which fail to fully capitalize on the complementary strengths of both feature types. To address these shortcomings, this paper introduces a novel radar HRRP feature fusion technique grounded in the Feature Weight Assignment Generative Adversarial Network (FWA-GAN) framework. This method leverages the generative adversarial network architecture to facilitate feature fusion in an innovative manner. Specifically, it employs the Feature Weight Assignment Model (FWA) to adaptively assign attention weights to both handcrafted and deep features. This approach enables a more efficient utilization and seamless integration of both feature modalities, thereby enhancing the overall recognition performance under conditions of limited sample availability. As a result, the recognition rate increases by over 4% compared to other state-of-the-art methods on both the simulation and experimental datasets.

1. Introduction

The radar High-Resolution Range Profile (HRRP) represents the projection of the target scattering points along the radar line of sight (LOS), effectively capturing the spatial distribution of these scattering points in the range dimension [1,2,3,4]. This profile contains crucial information on the target geometry, structural arrangement, scattering point energies, and other vital attributes, which are indispensable for supporting radar automatic target recognition (RATR) [5,6,7,8,9,10,11,12]. To acquire this information, numerous feature extraction methods have been applied. According to how the features are derived, these methods can be divided into two categories: handcrafted and deep feature extraction [13]. Handcrafted features are manually designed based on domain knowledge and experience; they require careful engineering and selection to ensure that the relevant information in the radar signature is effectively captured [14,15,16]. Deep features, in contrast, are learned automatically by deep learning models from raw data through training. They are not explicitly designed by humans but are discovered by the model as it learns to solve the task, and they can capture complex patterns and representations that are difficult to engineer manually [17,18].
However, targets of interest in RATR are typically non-cooperative, and acquiring their radar HRRPs is challenging, whereas recognition methods based on deep features require a sufficient number of HRRP samples. A limited sample size leads to insufficient generalization of the deep neural network and a significant decline in recognition performance [19]. In contrast, handcrafted features are only weakly affected by the number of samples. Consequently, when samples are limited, fusing handcrafted features with deep features can alleviate the insufficient generalization ability and enhance the accuracy and stability of recognition [9].
Feature fusion methods that combine handcrafted and deep features are widely used to solve small sample problems. Ref. [20] integrated handcrafted scattering point features with deep features and proposed a weight assignment module based on the attention mechanism to enhance the stability of the feature fusion process. Ref. [21] introduced collaborative loss, which facilitates the interaction of information between deep features and handcrafted features and adapts the importance between various features. Ref. [22] extracted alignment-invariant subsets of handcrafted and deep features by constructing multi-region, multi-scale subsets, which were then fused to enhance generalization under complex observation conditions. Refs. [23,24] embedded handcrafted features into deep features to enhance the generalization capability for small sample data.
This paper proposes a feature fusion method based on the Feature Weight Assignment Generative Adversarial Network (FWA-GAN) to enhance radar HRRP target recognition performance in the small-sample case. FWA-GAN comprises a generative module, a sample discriminator, and a feature discriminator. First, both handcrafted features and deep features are input into the generative module simultaneously. The generative module incorporates the prior information carried by the handcrafted features into the deep features, thereby obtaining the fused samples and fused features. The sample discriminator then classifies the fused samples, yielding the adversarial loss and the sample category loss; the adversarial loss supervises the training of the generative adversarial network as a whole, while the sample category loss optimizes the classification performance of the network. Furthermore, the fused features are categorized by the feature discriminator to obtain the feature category loss, which optimizes the feature fusion performance of the generative module and guarantees that the network learns the prior information provided by the handcrafted features. The main innovations of this paper are as follows:
(1)
A novel FWA-GAN feature fusion method is proposed. The method fuses deep features with handcrafted features using a generative module, uses a sample discriminator to supervise the target recognition task, and introduces a feature discriminator to supervise the fusion process of handcrafted features. This makes the handcrafted feature fusion process more stable.
(2)
A new loss function consisting of adversarial loss, sample category loss, and feature category loss is employed to integrate the deep feature and the handcrafted feature. This loss function is specifically designed to foster dynamic knowledge matching and mutual learning between the two domains.
(3)
This paper proposes a method for adaptively assigning the weights of handcrafted features and deep features. The loss weights are assigned according to the correlation between the two features and the original sample.

2. Proposed Method

2.1. General Framework

In this section, a target recognition method for limited samples based on FWA-GAN is proposed, as shown in Figure 1. The model adopts a dual-discriminator GAN structure comprising a generative module, a discriminative module, and a feature weight assignment module (FWAM). First, the handcrafted features of the HRRP are extracted with the handcrafted feature method, and the deep features are extracted with a convolutional neural network. The handcrafted and deep features are then input into the generative module, which fuses them to obtain the fusion vector, consisting of the generative sample and the generative feature. The generative sample is fed into the discriminative module to obtain the adversarial loss, while the generative feature and the generative sample are fed into the feature weight assignment module.
During training, the generative module and the discriminative module are optimized by backpropagating the adversarial loss L_A, which supervises whether the generated samples are realistic HRRPs. The generative module and the FWAM are optimized by backpropagating the feature fusion loss L_F, which supervises the category of the generated samples and assigns the weights of the sample branch and the feature branch. During testing, the model retains only the generative module and the FWAM: the handcrafted and deep features of the HRRP are extracted, the generative module fuses the HRRP with the handcrafted features to produce the fused sample, and the generative sample and generative feature are input into the FWAM to estimate the category of the HRRP.
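As a concrete illustration of this test-time flow, the following sketch (PyTorch, a minimal illustration rather than the authors' released code) assumes hypothetical callables handcrafted_extractor, generator, and fwam with the interfaces implied above: the generator returns the fused sample and fused feature, and the FWAM maps them to class scores.

```python
import torch

@torch.no_grad()
def predict(hrrp, handcrafted_extractor, generator, fwam):
    """Test-time flow: only the generative module and the FWAM are kept.

    hrrp: (B, 1, L) tensor of range profiles.
    All module names here are illustrative assumptions, not the authors' code.
    """
    t_h = handcrafted_extractor(hrrp)   # handcrafted feature vector T_H, shape (B, 13)
    s_r, t_r = generator(t_h, hrrp)     # fused (generative) sample and feature
    logits = fwam(s_r, t_r)             # FWAM weighs both branches and classifies
    return logits.argmax(dim=-1)        # estimated HRRP category
```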

2.2. Network Structure

The following describes each module of the adversarial feature fusion network in detail.
The handcrafted features include 13 features, such as the number of scattering points, peak point distance, and valley point distance. The handcrafted features are represented as:
T_H = F_H(x)
where x denotes the input HRRP and the handcrafted feature vector T_H consists of the 13 features.
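For illustration, the sketch below computes a small subset of such descriptors from a single HRRP (number of scattering points and mean peak/valley spacing). The amplitude threshold and the choice of descriptors are assumptions made for this sketch; the paper's full 13-feature set is not reproduced here.

```python
import numpy as np
from scipy.signal import find_peaks

def handcrafted_features(x, rel_height=0.3):
    """Illustrative subset of handcrafted HRRP descriptors (3 of the 13 features).

    x: 1-D range profile amplitudes. The 0.3 relative threshold and the choice of
    descriptors are assumptions, not the authors' definitions.
    """
    x = np.abs(x) / (np.abs(x).max() + 1e-12)       # normalize the amplitude
    peaks, _ = find_peaks(x, height=rel_height)     # scattering-point candidates
    valleys, _ = find_peaks(-x)                     # local minima between peaks
    n_scatter = len(peaks)                          # number of scattering points
    peak_dist = float(np.mean(np.diff(peaks))) if n_scatter > 1 else 0.0
    valley_dist = float(np.mean(np.diff(valleys))) if len(valleys) > 1 else 0.0
    return np.array([n_scatter, peak_dist, valley_dist], dtype=np.float32)
```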
The HRRP deep features are then extracted using the deep feature extraction module F_E:
T_D = F_E(x)
F_E is an AlexNet-like convolutional neural network.
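A minimal sketch of such an AlexNet-like 1-D extractor is given below; the channel counts, kernel sizes, and output dimension are assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn

class DeepFeatureExtractor(nn.Module):
    """AlexNet-like 1-D CNN for HRRP deep features (layer sizes are assumptions)."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=11, stride=2, padding=5), nn.ReLU(),
            nn.MaxPool1d(3, stride=2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(3, stride=2),
            nn.Conv1d(64, 96, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(96, feat_dim)

    def forward(self, x):                          # x: (B, 1, L) range profiles
        return self.fc(self.conv(x).squeeze(-1))   # deep feature T_D, shape (B, feat_dim)
```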
The complementary strengths of handcrafted and deep features are harnessed by using them jointly. Because the two feature types are extracted from the same data by different methods, they capture different information and have different vector dimensions. To allow seamless integration in the subsequent stages, each feature type is first processed by its own backbone and mapped to a common vector dimension. This standardization puts both feature types on an equal footing and lets the network dynamically adjust their respective weights according to their relative significance. The aligned features are then concatenated and fused by the feature fusion module F_F to obtain the fused features:
T_F = F_F(concat(T_H, T_D))
F_F is a VGG16-like convolutional neural network. VGG16 is a classical convolutional architecture that achieves effective feature fusion with modest computational cost.
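The sketch below shows one way to realize F_F under these constraints: project both feature types to a common dimension, concatenate them, and fuse with a small VGG-style stack of 3-wide 1-D convolutions. All dimensions and layer counts are assumptions.

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Sketch of F_F: align both feature types, concatenate, fuse with a VGG-style
    1-D conv stack (all dimensions are assumptions)."""

    def __init__(self, hand_dim=13, deep_dim=128, common_dim=64, fused_dim=128):
        super().__init__()
        self.proj_h = nn.Linear(hand_dim, common_dim)   # map T_H to the common dimension
        self.proj_d = nn.Linear(deep_dim, common_dim)   # map T_D to the common dimension
        self.fuse = nn.Sequential(
            nn.Conv1d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 16, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4), nn.Flatten(),
            nn.Linear(32 * 4, fused_dim),
        )

    def forward(self, t_h, t_d):
        z = torch.cat([self.proj_h(t_h), self.proj_d(t_d)], dim=-1)   # equal-footing concat
        return self.fuse(z.unsqueeze(1))                              # fused feature T_F
```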
The fused features are input into the reconstruction module F_R to obtain the reconstructed HRRP sample S_R and the reconstructed feature T_R:
S_R, T_R = F_R(T_F)
F_R is a VGG16-like deconvolutional neural network for reconstructing samples and features. Its structure mirrors that of the feature fusion module, which keeps the dimensions of the reconstructed HRRP sample and feature consistent with those of the original HRRP sample and reduces reconstruction differences.
The feature extraction module, the feature fusion module, and the reconstruction module together form the generative module G, so the equations above can be combined as:
S_R, T_R = F_R(F_F(concat(T_H, F_E(x)))) = G(T_H, x)
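The sketch below composes the three modules into G as in the equation above. The deconvolutional layer sizes, the assumed 256-bin HRRP length, and the way T_R is produced are assumptions; DeepFeatureExtractor and FeatureFusion refer to the earlier sketches, passed in as constructor arguments.

```python
import torch.nn as nn

class Reconstructor(nn.Module):
    """Sketch of F_R: a deconvolutional head that mirrors F_F and outputs the fused
    sample S_R and fused feature T_R (sizes and the T_R head are assumptions)."""

    def __init__(self, fused_dim=128, hand_dim=13):
        super().__init__()
        self.to_seq = nn.Linear(fused_dim, 32 * 16)
        self.deconv = nn.Sequential(
            nn.ConvTranspose1d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose1d(16, 8, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 64
            nn.ConvTranspose1d(8, 1, 4, stride=4),                          # 64 -> 256 range bins
        )
        self.to_feat = nn.Linear(fused_dim, hand_dim)

    def forward(self, t_f):
        s_r = self.deconv(self.to_seq(t_f).view(-1, 32, 16))   # reconstructed sample S_R
        t_r = self.to_feat(t_f)                                # reconstructed feature T_R
        return s_r, t_r

class Generator(nn.Module):
    """G(T_H, x) = F_R(F_F(concat(T_H, F_E(x)))), composed from the sketches above."""

    def __init__(self, extractor, fusion, reconstructor):
        super().__init__()
        self.fe, self.ff, self.fr = extractor, fusion, reconstructor

    def forward(self, t_h, x):
        t_d = self.fe(x)          # deep feature T_D
        t_f = self.ff(t_h, t_d)   # fused feature T_F
        return self.fr(t_f)       # (S_R, T_R)
```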
The reconstructed sample contains both the original HRRP sample information and the handcrafted feature information. It is input into the sample discriminator F_DR, and Softmax layers are used to predict the category of the reconstructed sample and whether it is a real sample:
y_SC = softmax(MLP_C(F_DR(S_R))) = D_C(S_R)
y_SA = softmax(MLP_A(F_DR(S_R))) = D_A(S_R)
where F_DR is a VGG16-like convolutional neural network.
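A minimal sketch of F_DR with the two heads D_C and D_A sharing one backbone is shown below; the layer sizes and head widths are assumptions.

```python
import torch
import torch.nn as nn

class SampleDiscriminator(nn.Module):
    """Sketch of F_DR with the two heads D_C (category) and D_A (real vs. reconstructed)
    sharing one VGG-like 1-D backbone; layer sizes are assumptions."""

    def __init__(self, n_classes=3, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(                  # F_DR applied to the (B, 1, L) sample
            nn.Conv1d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 16, 3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4), nn.Flatten(),
            nn.Linear(32 * 4, feat_dim), nn.ReLU(),
        )
        self.mlp_c = nn.Linear(feat_dim, n_classes)     # MLP_C -> D_C, target category
        self.mlp_a = nn.Linear(feat_dim, 2)             # MLP_A -> D_A, real / reconstructed

    def forward(self, s):
        h = self.backbone(s)
        y_sc = torch.softmax(self.mlp_c(h), dim=-1)     # category prediction y_SC
        y_sa = torch.softmax(self.mlp_a(h), dim=-1)     # authenticity prediction y_SA
        return y_sc, y_sa
```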
To retain more of the valid information in the handcrafted features and ensure that it acts on the reconstructed samples, the fusion process of the handcrafted features is supervised by the feature discriminator F_T:
y_T = softmax(VGG16(T_R)) = F_T(T_R)
The feature discriminator is a pre-trained convolutional neural network with a VGG16-like structure. It is pre-trained on all the training data used in the fusion experiments, augmented with noise and with random upsampling applied to the minority target classes.
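As an illustration of the pre-train-then-freeze procedure, the sketch below uses a small MLP as a stand-in for the VGG16-like feature discriminator (the MLP substitute and its sizes are assumptions); the helper freezes its parameters for the FWA-GAN training stage described in Section 2.3.

```python
import torch.nn as nn

def build_feature_discriminator(n_classes=3, hand_dim=13):
    """Stand-in for the pre-trained feature discriminator F_T / D_TC: a small MLP over
    the 13-dimensional feature vector (an assumption, not the paper's exact network)."""
    return nn.Sequential(
        nn.Linear(hand_dim, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, n_classes),
    )

def freeze(module):
    """Fix the pre-trained discriminator's parameters during FWA-GAN training."""
    for p in module.parameters():
        p.requires_grad = False
    return module.eval()
```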

2.3. Loss Function

The feature fusion loss is introduced to distribute the influence of the two classes of features rationally while providing supervision from the handcrafted features. It consists of the adversarial loss, the sample category loss, and the feature category loss.
The feature fusion network improves on the traditional GAN while retaining its adversarial loss, which optimizes the network by evaluating the difference between real and reconstructed samples. The generative module G and the adversarial discriminator D_A play a zero-sum game: the generative module constructs samples that should be identified as real, whereas the discriminator aims to distinguish the reconstructed samples from the real ones. The adversarial loss can be expressed as:
l_A = \min_G \max_{D_A} \mathbb{E}_{x \sim P(x)}[D_A(x)] - \mathbb{E}_{x \sim P_G(x)}[D_A(x)] = \min_G \max_{D_A} \mathbb{E}_{x \sim P(x)}[D_A(x)] - \mathbb{E}_{x \sim P(x)}[D_A(G(T_H, x))]
where x ~ P(x) denotes that the sample x follows the distribution P(x); P(x) is the real sample distribution, and P_G(x) is the reconstructed sample distribution.
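In implementation terms, this Wasserstein-style objective splits into a discriminator term and a generator term. The sketch below is one common way to compute them, assuming D_A is reduced to a per-sample scalar score (e.g. the "real" entry of the softmax head); this reduction is an assumption, not specified in the paper.

```python
def adversarial_losses(d_a_real, d_a_fake):
    """Split the min-max objective for l_A into the two terms used in training.

    d_a_real, d_a_fake: per-sample scalar scores D_A(x) and D_A(G(T_H, x)) as torch
    tensors; reducing the softmax head to a scalar score is an assumption.
    """
    loss_d = -(d_a_real.mean() - d_a_fake.mean())   # discriminator maximizes E[D_A(x)] - E[D_A(G)]
    loss_g = -d_a_fake.mean()                       # generator raises D_A on reconstructions
    return loss_d, loss_g
```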
The difference between the reconstructed sample and the real sample in the category space is assessed by the sample category loss; the smaller the difference, the smaller the loss. Sample category identification can likewise be described as a zero-sum game, in which the category discriminator D_C must identify the reconstructed samples as belonging to other classes while classifying the real samples into their true classes. The sample category loss can be expressed as follows:
l_{SC} = \min_G \max_{D_A} \mathbb{E}_{x \sim P(x)}[D_A(x)] - \mathbb{E}_{x \sim P_G(x)}[D_C(x)] = \min_G \max_{D_A} \mathbb{E}_{x \sim P(x)}[D_A(x)] - \mathbb{E}_{x \sim P(x)}[D_C(G(T_H, x))]
The reconstructed features are passed through the category discriminator to obtain their corresponding categories. The feature category loss describes the difference between the distributions of the reconstructed features and the handcrafted features. The category discriminator D_TC is obtained by pre-training on the handcrafted features of real HRRPs, and its parameters are fixed during FWA-GAN training. The feature category loss can be expressed as follows:
l_{TC} = \min_G \mathbb{E}_{x \sim P(x)}[D_{TC}(G(T_H, x))]
Traditional feature fusion methods compute the loss of each feature type separately, implicitly assuming that all features contribute equally to recognition. This paper instead proposes an adaptive loss weighting scheme that determines the weights of the handcrafted and deep features according to their importance for recognition: the weights α and β are derived automatically from the significance of each branch in the recognition task. This allows a more accurate and efficient fusion of the multiple features and improves recognition performance. The overall network loss is:
L = l_A + \alpha l_{SC} + \beta l_{TC}
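The paper states that the weights are assigned according to the correlation between the two branches and the original sample, but does not give the exact rule. The sketch below is one plausible reading based on cosine similarity, with α attached to the sample-branch loss and β to the feature-branch loss; both the correlation measure and this mapping are assumptions, not the authors' formula.

```python
import torch.nn.functional as F

def adaptive_weights(s_r, t_r, x, t_h):
    """Assumed weighting rule: score each branch by how strongly its output correlates
    with its reference (fused sample vs. original HRRP, fused feature vs. handcrafted
    feature), then normalize the scores into alpha and beta."""
    corr_s = F.cosine_similarity(s_r.flatten(1), x.flatten(1), dim=1).mean().clamp(min=0)
    corr_t = F.cosine_similarity(t_r, t_h, dim=1).mean().clamp(min=0)
    total = corr_s + corr_t + 1e-8
    return corr_s / total, corr_t / total        # alpha (sample branch), beta (feature branch)

def total_loss(l_a, l_sc, l_tc, alpha, beta):
    """Overall objective L = l_A + alpha * l_SC + beta * l_TC."""
    return l_a + alpha * l_sc + beta * l_tc
```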

3. Experimental Results

In this section, the proposed FWA-GAN network is verified experimentally on both simulation and measured data. The simulation data come from the AFRL dataset, and the measured data come from the MSTAR inversion dataset. Considering the influence of target model, elevation, orientation, and other factors, experiments with different training and test set configurations are designed accordingly.

3.1. Basic Performance Evaluation

In the AFRL simulation data, three types of targets were selected: cars, commercial vehicles, and pickup trucks. An azimuth range of 0° to 90° was sampled uniformly at 0.5° intervals, giving 180 positions in total. To verify the performance of the model under different sample sizes, the directions were randomly divided into 10 groups, and the first X groups were selected as the known directions. The original HRRP samples were augmented with noise: AWGN was added 20 times per sample, with the peak signal-to-noise ratio (PSNR) set to 20 dB. HRRP samples at 30° elevation were selected as the training dataset, and HRRP samples at 40° elevation as the test dataset. The training and test set settings of the basic experiment on the AFRL simulation dataset are shown in Table 1.
In this section, the proposed FWA-GAN-based HRRP feature fusion method is compared with handcrafted feature, deep feature, and existing feature fusion methods. The handcrafted feature methods are SVM and Random Forest; the deep feature methods are AlexNet [25], VGG16 [26], Transformer [27], TCNN [28], and MFCNN [29]; and the feature fusion baselines fuse the features by element-wise addition (add) and concatenation (concat), respectively [30]. For these fusion baselines, the same GAN structure as the proposed method is used, with the feature category discriminator removed.
As shown in Figure 2, the recognition rate of the handcrafted feature method is relatively low but insensitive to the number of samples. The recognition rate of the deep feature method is higher than that of the handcrafted feature method, but it is more sensitive to the number of samples. Among the feature fusion methods, the add method has no significant impact on the recognition rate, while the concat method can improve the target recognition rate, especially when the number of samples is small. The proposed FWA-GAN significantly improves the recognition rate, especially by more than 5% when the number of samples is small.
Using handcrafted features alone does not achieve ideal results, with a recognition rate below 60%. However, the handcrafted feature methods are insensitive to the number of samples, dropping only 7% when the sample factor decreases from 10 to 1, whereas the deep feature methods drop by more than 15%.
The recognition rate of the deep feature method is superior to that of the handcrafted feature method. When the sample factor is 1, the recognition rate is more than 9% higher than that of the handcrafted feature method, and when the sample factor is 10, it is 20% higher. The AlexNet method has a simple structure and the lowest recognition rate among the deep feature methods. Other methods have structural improvements compared to AlexNet, resulting in higher recognition rates. VGG16 achieves recognition rates comparable to other more complex methods with fewer parameters. However, deep feature methods are overly sensitive to the number of samples, with a significant decrease in the recognition rate when the sample factor decreases.
In feature fusion methods, the add method directly adds handcrafted features to deep features. When the sample factor is low, the add method fails to effectively integrate useful information from handcrafted features into deep features, resulting in a lower recognition rate than VGG16. The concat method concatenates handcrafted features to deep features, allowing the network to autonomously learn the relationship between them. When the sample factor is low, the recognition rate is higher than VGG16, but the effect is not significant. The proposed FWA-GAN adopts a generative adversarial network structure and uses a feature discriminator to supervise the feature fusion process, effectively integrating handcrafted features into deep features. When the sample factor is 1, the recognition rate increases by more than 5%, and when the sample factor is 10, it increases by 3%. Through feature fusion, FWA-GAN achieves a more significant improvement in the recognition rate when the number of samples is small.
For the MSTAR measured data, AWGN was added 100 times per sample, with the peak signal-to-noise ratio (PSNR) set to 20 dB. HRRP samples at 17° elevation were selected as the training dataset, and HRRP samples at 15° elevation as the test dataset. The training and test set settings of the basic experiment on the MSTAR measured dataset are shown in Table 2.
Figure 3 compares the recognition performance of the proposed method with that of the other methods on the MSTAR measured dataset; the proposed method is superior to all of them. The recognition rate of the handcrafted feature methods is about 70%, so handcrafted features alone do not perform well. The recognition rates of the deep learning methods exceed 80%, clearly better than the traditional handcrafted feature methods. According to their main computing modules, the deep methods fall into two categories, convolutional neural network-based and self-attention-based; AlexNet and VGG16 belong to the convolutional family. The results show that AlexNet and VGG16 achieve comparable recognition performance and are clearly superior to the handcrafted feature methods. Although VGG16 has far more convolutional layers than AlexNet, its recognition rate is lower, indicating that merely deepening the convolutional network does not guarantee better recognition.

3.2. Generalization with Different Target Models

When the target model differs, the scattering characteristics of the target change. In this section, model experiments are designed to verify the effectiveness of the proposed method. In the MSTAR measured data, three types of targets were selected, namely infantry fighting vehicles, armored transport vehicles, and tanks, with different tank variants used as the test set. The training and test set settings of the model experiment on the MSTAR measured dataset are shown in Table 3.
Figure 4 compares the recognition performance of the proposed method with that of the other methods in the model experiment; the proposed method is superior to all of them. The recognition rate of the handcrafted feature methods is about 70%, so handcrafted features alone do not perform well. The recognition rates of the deep learning methods exceed 80%, clearly better than the traditional handcrafted feature methods. According to their main computing modules, the deep methods fall into two categories: convolutional neural network-based methods (AlexNet and VGG16) and self-attention-based methods (Transformer). The results show that AlexNet and VGG16 achieve comparable recognition performance and are clearly superior to the handcrafted feature methods. Although VGG16 has far more convolutional layers than AlexNet, its recognition rate is lower, indicating that merely deepening the convolutional network does not guarantee better recognition. The Transformer realizes global information awareness through its self-attention mechanism and allocates more attention to the important parts, performing best among the existing methods. The proposed method improves on these baselines, with a recognition rate 2.3% higher than that of the Transformer.

3.3. Generalization with Different Elevation Angles

An elevation difference between the observation platform and the target results in inconsistent radar HRRP distributions. To verify the effectiveness of the proposed fusion method under a large elevation difference, an elevation experiment was designed.
In the AFRL simulation data, the three targets (cars, commercial vehicles, and pickup trucks) and an azimuth range of 0° to 90° were again selected, with non-uniform sampling at 0.5° intervals. AWGN was added 100 times per sample, with the peak signal-to-noise ratio (PSNR) set to 20 dB. HRRP samples at 30° elevation were selected as the training dataset, and HRRP samples at 50° elevation as the test dataset. The training and test set settings of the elevation experiment on the AFRL simulation dataset are shown in Table 4.
In the MSTAR measured data, the three targets (infantry fighting vehicles, armored transport vehicles, and tanks) and an azimuth range of 0° to 90° were selected, with non-uniform sampling at intervals of 1° to 2°. AWGN was added 100 times per sample, with the peak signal-to-noise ratio (PSNR) set to 20 dB. HRRP samples at 17° elevation were selected as the training dataset, and HRRP samples at 30° elevation as the test dataset. The training and test set settings of the elevation experiment on the MSTAR measured dataset are shown in Table 5.
Figure 5 and Figure 6 compare the recognition performance of the proposed method with that of the other methods; the proposed method is superior on both the simulation and measured datasets. The recognition rate of the handcrafted feature methods is about 70%, so handcrafted features alone do not perform well. The recognition rates of the deep learning methods exceed 80%, clearly better than the traditional handcrafted feature methods. According to their main computing modules, the deep methods fall into two categories: convolutional neural network-based methods (AlexNet and VGG16) and self-attention-based methods (Transformer). The results show that AlexNet and VGG16 achieve comparable recognition performance and are clearly superior to the handcrafted feature methods. Although VGG16 has far more convolutional layers than AlexNet, its recognition rate is lower, indicating that merely deepening the convolutional network does not guarantee better recognition. In the elevation experiment, the proposed method achieves a recognition rate 6.4% higher than that of the Transformer.

3.4. Ablation Experiment

In this section, we conduct ablation experiments to verify the effectiveness of the feature fusion module and the FWA (Feature Weight Assignment) module in the network. The dataset settings for the ablation experiments are consistent with those of the basic experiments. Since VGG16 is used as the backbone module in FWA-GAN, the VGG16 results serve as the baseline for the ablation experiments. We then add the feature fusion module to the network and finally introduce the FWA module to obtain FWA-GAN.
As can be seen from Figure 7, introducing the feature fusion module significantly improves the recognition rate when the sample factor is low. In the AFRL experiments, the recognition rate increases by approximately 3% when the sample factor is 1 or 2; in the MSTAR experiments, it improves by about 2%. Incorporating the FWA module enables adaptive allocation of weights to the different features during recognition, which enhances the stability of recognition and increases the recognition rate by 3% on average.

4. Conclusions

The proposed feature fusion method based on adversarial neural networks feeds both handcrafted features and deep features into the generative module, which integrates the handcrafted features into the deep features to produce generated samples and generated features. A discriminator classifies the generated samples and produces an adversarial loss that supervises the overall training of the generative adversarial network. In addition, the generated samples and generated features are jointly fed into the FWA (Feature Weight Assignment) module, which computes their contributions to the recognition task and adaptively adjusts their weights, enabling self-adaptive attention allocation. Experiments on the simulation and measured datasets show that the method improves the recognition rate under limited-sample conditions compared with existing methods.

Author Contributions

All the authors contributed significantly to this study. Y.S.: data curation, formal analysis, software, validation, writing—original draft; L.Z.: validation, writing—review and editing; Y.W.: funding acquisition, methodology, resources, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (Project Grant No. 62388102) and in part by the Shandong Provincial Natural Science Foundation (Project Grant No. ZR2021MF134).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, H.; Yang, S. Using Range Profiles as Feature Vectors to Identify Aerospace Objects. IEEE Trans. Antennas Propag. 1993, 41, 261–268. [Google Scholar] [CrossRef]
  2. Curry, G.R. A low-cost space-based radar system concept. IEEE Aerosp. Electron. Syst. Mag. 1996, 11, 21–24. [Google Scholar] [CrossRef]
  3. Slomka, S.; Gibbins, D.; Gray, D.; Haywood, B. Features for High Resolution Radar Range Profile Based Ship Classification. In Proceedings of the Fifth International Symposium on Signal Processing and its Applications, Brisbane, QLD, Australia, 22–25 August 1999; pp. 329–332. [Google Scholar] [CrossRef]
  4. Xing, M.; Bao, Z.; Pei, B. Properties of High-resolution Range Profiles. Opt. Eng. 2002, 41, 493–504. [Google Scholar] [CrossRef]
  5. Du, L.; Liu, H.; Bao, Z.; Xing, M. Radar HRRP target recognition based on higher order spectra. IEEE Trans. Signal Process. 2005, 53, 2359–2368. [Google Scholar] [CrossRef]
  6. Zhou, D.; Shen, X.; Yang, W. Radar Target Recognition Based on Fuzzy Optimal Transformation Using High-Resolution Range Profile. Pattern Recognit. Lett. 2013, 34, 256–264. [Google Scholar] [CrossRef]
  7. Lei, S.Q.; Yue, D.X.; Wang, F. Natural Scene Recognition Based on HRRP Statistical Modeling. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 4944–4947. [Google Scholar] [CrossRef]
  8. Wang, Y.; Ma, Y.; Zhang, Z.; Zhang, X.; Zhang, L. Type-Aspect Disentanglement Network for HRRP Target Recognition With Missing Aspects. IEEE Geosci. Remote Sens. Lett. 2023, 20, 3509305. [Google Scholar] [CrossRef]
  9. Liu, Q.; Zhang, X.; Liu, Y. A Prior-Knowledge-Guided Neural Network Based on Supervised Contrastive Learning for Radar HRRP Recognition. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 2854–2873. [Google Scholar] [CrossRef]
  10. Zhou, Q.; Wang, Y.; Zhang, X.; Zhang, L.; Long, T. Domain-Adaptive HRRP Generation Using Two-Stage Denoising Diffusion Probability Model. IEEE Geosci. Remote Sens. Lett. 2024, 21, 3504305. [Google Scholar] [CrossRef]
  11. Liu, Y.; Long, T.; Zhang, L.; Wang, Y.; Zhang, X.; Li, Y. SDHC: Joint Semantic-Data Guided Hierarchical Classification for Fine-Grained HRRP Target Recognition. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 3993–4009. [Google Scholar] [CrossRef]
  12. Yang, L.; Feng, W.; Wu, Y.; Huang, L.; Quan, Y. Radar-Infrared Sensor Fusion Based on Hierarchical Features Mining. IEEE Signal Process. Lett. 2024, 31, 66–70. [Google Scholar] [CrossRef]
  13. Kan, S.; Cen, Y.; He, Z. Supervised Deep Feature Embedding with Hand Crafted Feature. IEEE Trans. Image Process. 2019, 28, 5809–5823. [Google Scholar] [CrossRef] [PubMed]
  14. Cristianini, N. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
  15. Wei, Z.; Jie, W.; Jian, G. An efficient SAR target recognition algorithm based on contour and shape context. In Proceedings of the 3rd International Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), Seoul, Republic of Korea, 26–30 September 2011; pp. 1–4. [Google Scholar]
  16. Park, J.I.; Park, S.H.; Kim, K.T. New discrimination features for SAR automatic target recognition. IEEE Geosci. Remote Sens. Lett. 2013, 10, 476–480. [Google Scholar] [CrossRef]
  17. Ai, J.; Mao, Y.; Luo, Q.; Jia, L.; Xing, M. SAR target classification using the multikernel-size feature fusion-based convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5214313. [Google Scholar] [CrossRef]
  18. Chen, S.; Wang, H.; Xu, F.; Jin, Y.-Q. Target classification using the deep convolutional networks for SAR images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4806–4817. [Google Scholar] [CrossRef]
  19. Shi, L.; Liang, Z.; Wen, Y.; Zhuang, Y.; Huang, Y.; Ding, X. One-shot HRRP generation for radar target recognition. IEEE Geosci. Remote Sens. Lett. 2022, 19, 3504405. [Google Scholar] [CrossRef]
  20. Zhang, J.; Xing, M.; Xie, Y. FEC: A feature fusion framework for SAR target recognition based on electromagnetic scattering features and deep CNN features. IEEE Trans. Geosci. Remote Sens. 2021, 59, 2174–2187. [Google Scholar] [CrossRef]
  21. Zheng, H.; Hu, Z.; Yang, L. Multifeature Collaborative Fusion Network with Deep Supervision for SAR Ship Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5212614. [Google Scholar] [CrossRef]
  22. Liu, Z.; Wang, L.; Wen, Z. Multilevel Scattering Center and Deep Feature Fusion Learning Framework for SAR Target Recognition. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5227914. [Google Scholar] [CrossRef]
  23. Li, Y.; Du, L.; Wei, D. Multiscale CNN Based on Component Analysis for SAR ATR. IEEE Trans. Geosci. Remote Sens. 2024, 60, 5211212. [Google Scholar] [CrossRef]
  24. Qin, J.; Zou, B.; Chen, Y. Scattering Attribute Embedded Network for Few-Shot SAR ATR. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 4182–4197. [Google Scholar] [CrossRef]
  25. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Stateline, NV, USA, 3–6 December 2012; Volume 25, pp. 1097–1105. [Google Scholar]
  26. Yadav, D.; Kohli, N.; Agarwal, A. Fusion of Handcrafted and Deep Learning Features for Large-scale Multiple Iris Presentation Attack Detection. In Proceedings of the International Conference on Computer Vision and Pattern Recognition-Workshop on Biometrics (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar] [CrossRef]
  27. Zhang, L.; Han, C.; Wang, Y.; Li, Y.; Long, T. Polarimetric HRRP recognition based on feature-guided transformer model. Electron. Lett. 2021, 57, 705–707. [Google Scholar] [CrossRef]
  28. Wan, J.; Chen, B.; Xu, B.; Liu, H.; Jin, L. Convolutional neural networks for radar HRRP target recognition and rejection. EURASIP J. Adv. Signal Process. 2019, 2019, 5. [Google Scholar] [CrossRef]
  29. Cho, J.H.; Park, C.G. Multiple feature aggregation using convolutional neural networks for SAR image-based automatic target recognition. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1882–1886. [Google Scholar] [CrossRef]
  30. Yang, J.; Yang, J.-Y.; Zhang, D. Feature fusion: Parallel strategy vs. serial strategy. Pattern Recognit. 2003, 36, 1369–1381. [Google Scholar] [CrossRef]
Figure 1. Structure of feature fusion model based on FWA-GAN.
Figure 2. Accuracy curve under different limited factors of the AFRL simulation dataset in the basic experiment.
Figure 3. Accuracy curve under different limited factors of the MSTAR experimental dataset in the basic experiment.
Figure 4. Accuracy curve under different limited factors of the MSTAR experimental dataset in the model experiment.
Figure 5. Accuracy curve under different limited factors of the AFRL simulation dataset in the elevation experiment.
Figure 6. Accuracy curve under different limited factors of the MSTAR experimental dataset in the elevation experiment.
Figure 7. Accuracy curve under different limited factors in the ablation experiment: (a) AFRL; (b) MSTAR.
Table 1. Simulation data training set and test set in the basic experiment.
Target Type | Training Set: Elevation / Azimuths / Samples | Test Set: Elevation / Azimuths / Samples
Toyota Camry | 30° / 18 × X / 1800 × X | 40° / 180 / 18,000
Mazda MPV | 30° / 18 × X / 1800 × X | 40° / 180 / 18,000
Toyota Tacoma | 30° / 18 × X / 1800 × X | 40° / 180 / 18,000
Table 2. Measured data training set and test set in the basic experiment.
Target Type | Training Set: Elevation / Azimuths / Samples | Test Set: Elevation / Azimuths / Samples
BMP2 | 17° / 12 × X / 1200 × X | 15° / 45 / 4500
BTR70 | 17° / 11 × X / 1100 × X | 15° / 54 / 5400
T72 | 17° / 12 × X / 1200 × X | 15° / 42 / 4200
Table 3. Experimental data training set and test set in the model experiment.
Target Type | Training Set: Elevation / Azimuths / Model / Samples | Test Set: Elevation / Model / Azimuths / Samples
BMP2 | 17° / 12 × X / 9563 / 1200 × X | 15° / 9563 / 45 / 4500
BTR70 | 17° / 11 × X / C71 / 1100 × X | 15° / C71 / 54 / 5400
T72 | 17° / 12 × X / 132 / 1200 × X | 15° / A32, A62, A63, A64 / 192 / 19,200
Table 4. Simulation data training set and test set in the elevation experiment.
Target Type | Training Set: Elevation / Azimuths / Samples | Test Set: Elevation / Azimuths / Samples
Toyota Camry | 30° / 18 × X / 1800 × X | 50° / 180 / 18,000
Mazda MPV | 30° / 18 × X / 1800 × X | 50° / 180 / 18,000
Toyota Tacoma | 30° / 18 × X / 1800 × X | 50° / 180 / 18,000
Table 5. Experimental data training set and test set in the elevation experiment.
Target Type | Training Set: Elevation / Azimuths / Samples | Test Set: Elevation / Azimuths / Samples
BMP2 | 17° / 12 × X / 1200 × X | 30° / 45 / 4500
BTR70 | 17° / 11 × X / 1200 × X | 30° / 54 / 5400
T72 | 17° / 12 × X / 1200 × X | 30° / 42 / 4200
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Song, Y.; Zhang, L.; Wang, Y. Limited Sample Radar HRRP Recognition Using FWA-GAN. Remote Sens. 2024, 16, 2963. https://doi.org/10.3390/rs16162963