Article

Radar HRRP Target Recognition Based on Dynamic Learning with Limited Training Data

National Lab. of Radar Signal Processing, Xidian University, Xi’an 710071, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(4), 750; https://doi.org/10.3390/rs13040750
Submission received: 20 January 2021 / Revised: 8 February 2021 / Accepted: 15 February 2021 / Published: 18 February 2021

Abstract:
For high-resolution range profile (HRRP)-based radar automatic target recognition (RATR), adequate training data are required to characterize a target signature effectively and achieve good recognition performance. However, collecting enough training data covering HRRP samples from every target orientation is difficult. To tackle the HRRP-based RATR task with limited training data, a novel dynamic learning strategy is proposed based on a single-hidden-layer feedforward network (SLFN) with an assistant classifier. In the offline training phase, the training data are used to pretrain the SLFN using a reduced kernel extreme learning machine (RKELM). In the online classification phase, the collected test data are first labeled by fusing the recognition results of the current SLFN and the assistant classifier. Then the test samples with reliable pseudolabels are used as additional training data to update the parameters of the SLFN with the online sequential RKELM (OS-RKELM). Moreover, to improve the accuracy of label estimation for the test data, a novel semi-supervised learning method named constraint propagation-based label propagation (CPLP) was developed as the assistant classifier. The proposed method dynamically accumulates knowledge from the training and test data through online learning, thereby reinforcing the performance of the RATR system under limited training data. Experiments conducted on simulated HRRP data from 10 civilian vehicles and real HRRP data from three military vehicles demonstrated the effectiveness of the proposed method when training data are limited.

1. Introduction

A high-resolution range profile (HRRP) provides the geometrical shape and structural characteristics of a target along the radar line-of-sight (LOS). Compared with synthetic aperture radar (SAR) [1] and inverse SAR images, an HRRP is easier to acquire and store. Therefore, HRRPs have been widely used in radar automatic target recognition (RATR) systems.
A typical RATR system comprises feature extraction followed by a powerful classifier. Various target features derived from HRRPs have been developed over the years, such as spectral features [2,3], scattering center features [4,5], statistical features [6,7,8], and high-level features learned by deep networks [9,10]. Typical classifiers applied to HRRP-based RATR include the template matching method (TMM) [4,5], the Bayes classifier [6,7,8], the hidden Markov model (HMM) [11], etc.
To achieve good recognition performance, the methods above require complete training data that effectively represent the target's signature. However, acquiring enough target HRRPs is difficult in many real applications. The main reasons are as follows: First, the targets of interest may be noncooperative, and their echoes can be hard to collect. Second, owing to the aspect sensitivity of the target HRRP, complete training data should involve HRRP samples of all possible orientations, which is unrealistic in practice. When the available training data are limited, the RATR system will exhibit unsatisfactory performance even if the extracted features and the classifier are good enough. Therefore, developing an HRRP-based RATR method for limited training data is important.
Several studies have addressed the RATR task with limited training data. The support vector machine (SVM) [12] has been widely used for HRRP-based target recognition [13], since it minimizes the generalization error and is suitable for small-sample learning. However, the SVM does not consider the overall characteristics of the target feature, which makes it sensitive to missing data. In [14], a dictionary learning (DL) method, the K-SVD algorithm, was employed to classify HRRPs of a target when training data were limited, as it shares latent information among samples. Motivated by [14], reference [15] proposed a robust DL method that overcomes the uncertainty of sparse representations and is robust to the amplitude variations of adjacent HRRPs; it achieves good recognition performance with limited training data. In [16], a factor analysis model based on multitask learning (MTL) was proposed, in which the relevant information among HRRP samples with different target aspects is employed and the aspect-dependent parameters are estimated collectively. This method reduces the number of HRRP samples required in each target aspect-frame. However, the MTL and robust DL models still require dozens of HRRP samples in each target aspect-frame to estimate parameters, which may not be feasible in practice. In [17], the discriminant deep autoencoder (DDAE) was proposed to extract high-level features of HRRPs and train on HRRP samples globally, which enhances recognition performance with little training data compared with most methods.
The methods above are supervised learning approaches whose classifiers remain unchanged during the classification process. They only take advantage of the offline training data and ignore the useful information in new samples collected during classification. Considering this, Yver [18] proposed a dynamic learning strategy that updates the classifier using test samples. This strategy requires the RATR system to accumulate knowledge dynamically and exploit it to facilitate future learning and classification in the update process. Four online learning approaches were presented: online self-training label propagation (LP) [19], self-training LASVM [20], a combination of LP and LASVM, and online TSVM. Nevertheless, these methods have several drawbacks. The self-training method teaches itself using the learned knowledge, so it cannot reduce the bias caused by the limited training data [21]. LASVM obtains the approximate classification model in one-by-one mode rather than chunk-by-chunk mode, so its computation is time-consuming. In [22], an updating convolutional neural network (CNN) was proposed for SAR images. The initial CNN model, trained on seed images, was updated using test images whose pseudolabels were assigned by an SVM. This method exhibited good recognition performance on the MSTAR dataset. However, it must store all the test data during the update process, which burdens memory resources.
Single-hidden layer feedforward neural networks (SLFNs) have been widely used in pattern recognition (PR). The reduced kernel extreme learning machine (RKELM) [23] is a kernel-based learning technique for SLFN in which the mapping samples are selected from the training dataset and the output weights are analytically determined. In theory, it can provide good generalization performance at high learning speed. The online sequential RKELM (OS-RKELM) [24] is a fast online learning algorithm that can process data in chunk-by-chunk mode and discard data after they have been learned. Hence, it achieves savings in terms of both processing time and memory resources. However, the OS-RKELM is a supervised learning method that requires the online received data to be labeled, and it cannot be applied directly to the HRRP-based RATR task.
In this paper, a novel dynamic learning method based on the SLFN with an assistant classifier is proposed to enhance the performance of HRRP-based RATR with limited training data. In the offline training stage, the initial SLFN model is trained with the RKELM algorithm using the labeled training data. In the classification stage, two steps, i.e., pseudolabel assignment and SLFN parameter updating, are iterated. Once a certain number of test samples are collected, their pseudolabels are assigned. Then, the test samples with reliable pseudolabels are treated as additional training data to update the SLFN parameters via the OS-RKELM. In particular, to improve the accuracy of label estimation for test data, a novel semi-supervised learning method named constraint propagation-based label propagation (CPLP) was developed as an assistant classifier, in which the offline training samples are viewed as labeled data and the newly collected test samples as unlabeled data. The pseudolabel of each test sample is assigned by fusing the classification results of the CPLP and the SLFN. Through the dynamic learning strategy, the HRRP-based RATR system dynamically accumulates knowledge from both the offline training data and the online test data, thereby reinforcing the performance of the RATR system.
Our contributions are summarized as follows:
  • To deal with the HRRP-based RATR task with limited training data, a dynamic learning strategy is introduced based on the SLFN with an assistant classifier. The proposed method processes data chunk-by-chunk and discards the test data once they have been learned, so it requires less memory and processing time.
  • A novel semi-supervised learning method named constraint propagation-based label propagation (CPLP) is proposed as an assistant classifier to improve the label estimation accuracy for test data.
In the experiments, the effectiveness of the proposed method was demonstrated on two datasets: simulated data of 10 civilian vehicles and measured data of three military vehicles. The superiority of the CPLP algorithm is verified first. Then, the recognition performance of the proposed SLFN with the CPLP algorithm is presented along the update process. By learning information from the test data, a performance improvement was achieved when training data were limited. Next, comparative experiments show that the proposed method achieved very competitive recognition performance compared with state-of-the-art methods on the two datasets. Finally, the computational complexity is analyzed.
The remainder of this paper is organized as follows. Section 2 introduces the proposed algorithm in detail. Section 3 provides the experiment results on the simulated HRRP data of 10 civilian vehicles and the measured data of three military vehicles. Section 4 presents the conclusions.

2. Methodology

The framework of the proposed recognition method is shown in Figure 1. In the RKELM and OS-RKELM algorithms, the offline training data have to be stored in order to compute the kernel matrix in the online classification stage. Consequently, for the proposed CPLP algorithm, the offline training data are viewed as labeled samples, and the newly collected test data chunk (TDC) as unlabeled samples. The test samples with reliable pseudolabels are viewed as additional training data to update the SLFN model. Once a TDC is learned, it is discarded. The method continues to classify newly collected test data and exploit them to update the SLFN parameters. In this way, the knowledge of both the offline training data and the online test data can be accumulated in the SLFN model, and be used to facilitate future learning and classification alongside the update process.
For simplicity, some notation is introduced as follows. $X_L = \{\mathbf{x}_1, \ldots, \mathbf{x}_{N_l}\}$ is the offline training dataset (OTD), containing $N_l$ HRRP samples from $C$ classes. $\mathbf{Y}_L = [\mathbf{y}_1, \ldots, \mathbf{y}_{N_l}]^{T} \in \mathbb{R}^{N_l \times C}$ is the label matrix, with $y_{nc} = 1$ if the label of sample $\mathbf{x}_n$ is $c$ and $y_{nc} = 0$ otherwise, where $n = 1, 2, \ldots, N_l$ and $c = 1, 2, \ldots, C$. $X_m^U = \{\mathbf{x}_1^{u}, \ldots, \mathbf{x}_{N_u}^{u}\}$ is the $m$th chunk of test HRRP data collected in the classification stage. The HRRP samples in $X_m^U$ are processed together as one chunk during the update process.
In what follows, the SLFN classifier, CPLP algorithm, and decision fusion of SLFN and CPLP classifiers are introduced.

2.1. Single-Hidden Layer Feedforward Neural Network

In our proposed method, the SLFN has $N_l$ hidden nodes and $C$ output nodes. The parameters of the SLFN are initialized on the OTD $X_L$ by the RKELM. Let $\beta \in \mathbb{R}^{N_l \times C}$ denote the output weights connecting the hidden layer and the output layer; its optimal value can be derived in closed form as [23]
$$\beta_0 = \left( \frac{\mathbf{I}}{\xi} + \mathbf{K}_0^{T}\mathbf{K}_0 \right)^{-1} \mathbf{K}_0^{T}\mathbf{Y}_L \qquad (1)$$
where $\mathbf{K}_0$ is the kernel matrix whose $(i,j)$th entry is computed as $K(\mathbf{x}_i, \mathbf{x}_j)$, and the parameter $\xi$ is used to alleviate overfitting of the SLFN model.
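As a concrete illustration of Equation (1), the offline RKELM training step can be sketched in a few lines. The paper's experiments used MATLAB; the NumPy sketch below is our own, with the Gaussian kernel form and $b = 0.6$ taken from Section 3, while the toy data and the value of $\xi$ are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, b=0.6):
    # K[i, j] = exp(-||A[i] - B[j]||^2 / b), the kernel form used in Section 3
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / b)

def rkelm_train(X_L, Y_L, xi=100.0, b=0.6):
    # Closed-form output weights, Eq. (1): beta0 = (I/xi + K0^T K0)^-1 K0^T Y_L
    K0 = gaussian_kernel(X_L, X_L, b)                  # N_l x N_l kernel matrix
    G0 = np.linalg.inv(np.eye(K0.shape[1]) / xi + K0.T @ K0)
    beta0 = G0 @ K0.T @ Y_L                            # N_l x C output weights
    return beta0, G0                                   # G0 is reused by OS-RKELM

# toy example: 6 samples, 2 classes (all values are illustrative)
rng = np.random.default_rng(0)
X_L = rng.normal(size=(6, 4))
Y_L = np.eye(2)[np.array([0, 0, 0, 1, 1, 1])]          # one-hot label matrix
beta0, G0 = rkelm_train(X_L, Y_L)
pred = gaussian_kernel(X_L, X_L) @ beta0               # training-set predictions
```

On this toy set the training predictions recover the labels, since the kernel matrix is close to the identity for well-separated samples.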
In the online classification stage, when the HRRP TDC $X_m^U$ is collected, the OS-RKELM updates $\beta_m$ as follows [24]:
$$\mathbf{G}_m = \mathbf{G}_{m-1} - \mathbf{G}_{m-1}\mathbf{K}_m^{T}\left( \mathbf{I} + \mathbf{K}_m\mathbf{G}_{m-1}\mathbf{K}_m^{T} \right)^{-1}\mathbf{K}_m\mathbf{G}_{m-1} \qquad (2)$$
$$\beta_m = \beta_{m-1} + \mathbf{G}_m\mathbf{K}_m^{T}\left( \mathbf{Y}_m^{s} - \mathbf{K}_m\beta_{m-1} \right) \qquad (3)$$
where $\mathbf{K}_m = K(X_m^U, X_L)$, $\mathbf{G}_0 = \left( \mathbf{I}/\xi + \mathbf{K}_0^{T}\mathbf{K}_0 \right)^{-1}$, and $\mathbf{Y}_m^{s} = [\mathbf{y}_1^{s}, \ldots, \mathbf{y}_{N_u}^{s}]^{T}$ is the pseudolabel matrix of $X_m^U$.
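The update in Equations (2) and (3) is algebraically equivalent to recomputing the batch solution over all chunks seen so far, which is easy to verify numerically. The following NumPy sketch is our own reading of the recursion; the toy matrices and $\xi = 10$ are arbitrary assumptions.

```python
import numpy as np

def os_rkelm_update(beta_prev, G_prev, K_m, Y_s):
    # Eq. (2): G_m = G_{m-1} - G_{m-1} K_m^T (I + K_m G_{m-1} K_m^T)^-1 K_m G_{m-1}
    S = np.linalg.inv(np.eye(K_m.shape[0]) + K_m @ G_prev @ K_m.T)
    G_m = G_prev - G_prev @ K_m.T @ S @ K_m @ G_prev
    # Eq. (3): beta_m = beta_{m-1} + G_m K_m^T (Y_m^s - K_m beta_{m-1})
    beta_m = beta_prev + G_m @ K_m.T @ (Y_s - K_m @ beta_prev)
    return beta_m, G_m

# toy check against the equivalent batch solution (all values are illustrative)
rng = np.random.default_rng(1)
xi = 10.0
K0, Y0 = rng.normal(size=(5, 5)), rng.normal(size=(5, 2))
G0 = np.linalg.inv(np.eye(5) / xi + K0.T @ K0)
beta0 = G0 @ K0.T @ Y0
Km, Ym = rng.normal(size=(3, 5)), rng.normal(size=(3, 2))
beta1, G1 = os_rkelm_update(beta0, G0, Km, Ym)
G_batch = np.linalg.inv(np.eye(5) / xi + K0.T @ K0 + Km.T @ Km)
beta_batch = G_batch @ (K0.T @ Y0 + Km.T @ Ym)
```

Because only $\mathbf{G}_m$ and $\beta_m$ are carried forward, each chunk can be discarded once learned.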
Let $\mathbf{x}_n^{u}$ be the $n$th test sample in $X_m^U$; its prediction is computed as [24]
$$\mathbf{t}_n^{u} = K(\mathbf{x}_n^{u}, X_L)\,\beta_{m-1} \qquad (4)$$
Since the elements of $\mathbf{t}_n^{u}$ are not probability values, we define the label probability vector $\mathbf{p}_n^{u} = [p_1, p_2, \ldots, p_C]$ of $\mathbf{x}_n^{u}$ as
$$p_c = \frac{\exp\left( -a \left\| \mathbf{t}_n^{u} - \mathbf{t}_c \right\|_2^2 \right)}{\sum_{c'=1}^{C} \exp\left( -a \left\| \mathbf{t}_n^{u} - \mathbf{t}_{c'} \right\|_2^2 \right)} \qquad (5)$$
where $p_c$ is the probability of $\mathbf{x}_n^{u}$ belonging to the $c$th class, $a > 0$ is a constant, and $\mathbf{t}_c$ is a $C$-dimensional row vector whose $c$th element is 1 and whose other elements are 0. Then
$$\mathbf{P}_m^{U} = \left[ \mathbf{p}_1^{u}, \ldots, \mathbf{p}_{N_u}^{u} \right]^{T} \qquad (6)$$
is the label probability matrix of $X_m^U$ computed by the SLFN.
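The mapping of Equations (5) and (6) from raw SLFN outputs to label probabilities can be sketched as follows (a NumPy illustration of ours; $a = 1$ and the toy outputs are assumptions).

```python
import numpy as np

def label_probability(t_u, a=1.0):
    # Eq. (5): p_c proportional to exp(-a * ||t - t_c||^2),
    # with t_c the one-hot row vector of class c
    C = t_u.shape[1]
    d2 = ((t_u[:, None, :] - np.eye(C)[None, :, :]) ** 2).sum(-1)
    E = np.exp(-a * d2)
    return E / E.sum(axis=1, keepdims=True)   # rows of P_m^U, Eq. (6), sum to 1

P = label_probability(np.array([[0.9, 0.1, 0.0],
                                [0.2, 0.2, 0.6]]))
```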

2.2. Constraint Propagation-Based Label Propagation

LP is a graph-based semi-supervised learning method which propagates the label information of labeled data to unlabeled data according to the intrinsic geometrical structure of the data. The graph construction method plays an important role in label estimation for unlabeled data.
According to the LP method, the label probability matrix $\mathbf{F}_m^{U}$ of $X_m^U$ is estimated as [19]:
$$\mathbf{F}_m^{U} = \left( \mathbf{I} - \bar{\mathbf{W}}_{UU} \right)^{-1} \bar{\mathbf{W}}_{UL} \mathbf{Y}_L \qquad (7)$$
where the $n$th row $\mathbf{f}_n^{u}$ of $\mathbf{F}_m^{U}$ is the probability vector whose element $f_{n,c}$ is the probability of $\mathbf{x}_n^{u}$ belonging to the $c$th class. $\bar{\mathbf{W}}_{UL} \in \mathbb{R}^{N_u \times N_l}$ and $\bar{\mathbf{W}}_{UU} \in \mathbb{R}^{N_u \times N_u}$ are the normalized probabilistic transition matrices, which measure the similarities between unlabeled samples and labeled ones, and the similarities among unlabeled samples, respectively. They are usually constructed in an unsupervised manner, such as using a k-nearest neighbor (k-NN) graph [25], which ignores the label information of the labeled data. Considering this, we exploit the label information to encode the pairwise constraints [26,27] between labeled samples, and then construct $\bar{\mathbf{W}}_{UL}$ by propagating the constraint information via the similarity. The motivation for this idea is twofold.
  • If samples $\mathbf{x}_k \in X^U$ and $\mathbf{x}_i \in X_L$ are similar, and $\mathbf{x}_j \in X_L$ has the same class as $\mathbf{x}_i$, then $\mathbf{x}_k$ tends to be similar to $\mathbf{x}_j$.
  • If samples $\mathbf{x}_k, \mathbf{x}_i \in X^U$ are similar, and $\mathbf{x}_k$ and $\mathbf{x}_j \in X_L$ are similar, then both $\mathbf{x}_i$ and $\mathbf{x}_k$ tend to have the same label as $\mathbf{x}_j$.
The proposed CPLP algorithm consists of two steps. In the first step, the traditional unsupervised graph is constructed. Let $\mathbf{W}_{UU} \in \mathbb{R}^{N_u \times N_u}$ denote the affinity matrix characterizing the similarities among the unlabeled data, and $\mathbf{W}_{UL} \in \mathbb{R}^{N_u \times N_l}$ denote the affinity matrix describing the similarities between labeled and unlabeled samples. Since HRRPs lie in a high-dimensional space, the k-reciprocal nearest neighbors (k-RNNs) [28,29] are adopted to compute the $(i,j)$th entries of $\mathbf{W}_{UU}$ and $\mathbf{W}_{UL}$ as follows.
$$w_{ij} = \begin{cases} \exp\left( -\dfrac{d(\mathbf{x}_i, \mathbf{x}_j)}{2\sigma^2} \right), & \mathbf{x}_i \in N(\mathbf{x}_j) \wedge \mathbf{x}_j \in N(\mathbf{x}_i) \\ 0, & \text{otherwise} \end{cases} \qquad (8)$$
where $N(\mathbf{x})$ is the k-NN sample set of $\mathbf{x}$. For $\mathbf{W}_{UU}$, we set $\mathbf{x}_i, \mathbf{x}_j \in X_m^U$. For $\mathbf{W}_{UL}$, we define a dataset $D_k = \{\mathbf{x}_k, X_m^U\}$ with $\mathbf{x}_k \in X_L$, and the entries in the $k$th column are computed by setting $\mathbf{x}_i, \mathbf{x}_j \in D_k$.
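A minimal NumPy sketch of the k-RNN affinity of Equation (8) for the $\mathbf{W}_{UU}$ case is given below; it is our own illustration (the column-wise construction of $\mathbf{W}_{UL}$ from the sets $D_k$ is omitted), and $k$, $\sigma^2$, and the toy data are assumptions.

```python
import numpy as np

def krnn_affinity(X, k=4, sigma2=0.05):
    # Eq. (8): w_ij = exp(-d(x_i, x_j) / (2*sigma2)) if x_i and x_j are in
    # each other's k-NN sets (k-reciprocal neighbours), and 0 otherwise.
    N = X.shape[0]
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # pairwise distances
    nn = np.zeros((N, N), dtype=bool)
    for i in range(N):
        nn[i, np.argsort(D[i])[1:k + 1]] = True   # k-NN set, excluding x_i itself
    recip = nn & nn.T                             # keep only reciprocal neighbours
    return np.where(recip, np.exp(-D / (2 * sigma2)), 0.0)

W_UU = krnn_affinity(np.random.default_rng(2).normal(size=(20, 8)))
```

The reciprocity requirement makes the resulting affinity matrix symmetric and sparser than a plain k-NN graph.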
In the second step, we aim to propagate the constraint information of the labeled data to the unlabeled data via the affinity matrices $\mathbf{W}_{UU}$ and $\mathbf{W}_{UL}$.
First, to incorporate the known label information, the affinity matrix $\mathbf{W}_{LL} \in \mathbb{R}^{N_l \times N_l}$ among the labeled data is constructed by imposing two kinds of pairwise constraints: if the samples $\mathbf{x}_i$ and $\mathbf{x}_j$ are from the same class, a must-link constraint is imposed; otherwise, a cannot-link constraint is set. The entries of $\mathbf{W}_{LL}$ are computed as [27]
$$w_{ij} = \begin{cases} 1, & y_i = y_j \\ 0, & y_i \neq y_j \end{cases} \qquad (9)$$
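Since $\mathbf{Y}_L$ is one-hot, the constraint matrix of Equation (9) is simply the Gram matrix of the label matrix, as the short sketch below shows (our observation, with illustrative labels):

```python
import numpy as np

labels = np.array([0, 0, 1, 2])        # illustrative class labels
Y_L = np.eye(3)[labels]                # one-hot label matrix (Section 2 notation)
W_LL = Y_L @ Y_L.T                     # Eq. (9): w_ij = 1 iff y_i = y_j
```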
Then, the pairwise constraints are propagated to their nearest neighbors by an iterative process [27,30]. In the $t$th iteration, the $i$th row of $\mathbf{W}_{UL}^{(t)}$ is computed as
$$\mathbf{W}_{UL,i}^{(t)} = (1-\alpha)\,\mathbf{W}_{UL,i}^{(0)} + \alpha \left( \sum_{j=1}^{N_l} M_{UL,ij}\, \mathbf{W}_{LL,j}^{(t-1)} + \sum_{k=1}^{N_u} M_{UU,ik}\, \mathbf{W}_{UL,k}^{(t-1)} \right) \qquad (10)$$
where $t = 0, 1, 2, \ldots$ is the iteration index, $\mathbf{W}_{UL}^{(0)} = \mathbf{W}_{UL}$, $\mathbf{W}_{LL}^{(t)} = \mathbf{W}_{LL}$ is clamped, $\mathbf{W}_{UL,i}^{(t)}$ denotes the $i$th row of $\mathbf{W}_{UL}^{(t)}$, $\alpha \in (0,1)$ is the trade-off parameter, and $M_{UL,ij}$ and $M_{UU,ik}$ are the $(i,j)$th entry of $\mathbf{M}_{UL}$ and the $(i,k)$th entry of $\mathbf{M}_{UU}$, calculated by (11) and (12), respectively.
$$M_{UL,ij} = \frac{W_{UL,ij}}{\sum_{j'=1}^{N_l} W_{UL,ij'} + \sum_{k=1}^{N_u} W_{UU,ik}} \qquad (11)$$
$$M_{UU,ik} = \frac{W_{UU,ik}}{\sum_{j=1}^{N_l} W_{UL,ij} + \sum_{k'=1}^{N_u} W_{UU,ik'}} \qquad (12)$$
The matrix form of Equation (10) is presented as
$$\mathbf{W}_{UL}^{(t)} = (1-\alpha)\,\mathbf{W}_{UL}^{(0)} + \alpha\,\mathbf{M}_{UL}\mathbf{W}_{LL} + \alpha\,\mathbf{M}_{UU}\mathbf{W}_{UL}^{(t-1)} \qquad (13)$$
When $t \to \infty$, $\mathbf{W}_{UL}^{(t)}$ reaches a steady state $\mathbf{W}_{UL}^{*}$, which is calculated as
$$\mathbf{W}_{UL}^{*} = \left( \mathbf{I} - \alpha \mathbf{M}_{UU} \right)^{-1} \left( (1-\alpha)\,\mathbf{W}_{UL} + \alpha\,\mathbf{M}_{UL}\mathbf{W}_{LL} \right) \qquad (14)$$
Let $\mathbf{W}_U = \left[ \mathbf{W}_{UL}^{*},\ \mathbf{W}_{UU} \right]$; the normalized probabilistic transition matrices $\bar{\mathbf{W}}_{UL}$ and $\bar{\mathbf{W}}_{UU}$ are derived as
$$\bar{W}_{UL,ij} = \frac{W_{UL,ij}^{*}}{\sum_{k=1}^{N_l+N_u} W_{U,ik}} \qquad (15)$$
$$\bar{W}_{UU,ij} = \frac{W_{UU,ij}}{\sum_{k=1}^{N_l+N_u} W_{U,ik}} \qquad (16)$$
Then $\mathbf{F}_m^{U}$ is estimated using Equation (7).
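Equations (11)-(16), together with the label propagation estimate of Equation (7), can be collected into one small routine. The NumPy sketch below is our own reading of these formulas on a toy graph; the affinity values and $\alpha$ are assumptions, and safeguards such as zero row sums are ignored.

```python
import numpy as np

def cplp_estimate(W_UL, W_UU, W_LL, Y_L, alpha=0.6):
    Nu = W_UL.shape[0]
    row = W_UL.sum(1, keepdims=True) + W_UU.sum(1, keepdims=True)
    M_UL, M_UU = W_UL / row, W_UU / row                 # Eqs. (11)-(12)
    # steady state of the constraint propagation, Eq. (14)
    W_star = np.linalg.inv(np.eye(Nu) - alpha * M_UU) @ (
        (1 - alpha) * W_UL + alpha * M_UL @ W_LL)
    # row-normalise [W_star, W_UU] into transition matrices, Eqs. (15)-(16)
    z = W_star.sum(1, keepdims=True) + W_UU.sum(1, keepdims=True)
    Wb_UL, Wb_UU = W_star / z, W_UU / z
    # label propagation estimate, Eq. (7)
    return np.linalg.inv(np.eye(Nu) - Wb_UU) @ Wb_UL @ Y_L

# toy graph: 2 labeled samples (one per class), 2 unlabeled samples
W_UL = np.array([[0.9, 0.1], [0.1, 0.9]])
W_UU = np.array([[0.0, 0.2], [0.2, 0.0]])
F = cplp_estimate(W_UL, W_UU, np.eye(2), np.eye(2), alpha=0.5)
```

On this toy graph each row of F is a valid probability vector, and each unlabeled sample is assigned to the class of its closest labeled sample.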
The procedure of the proposed CPLP algorithm is depicted in Figure 2.

2.3. Decision Fusion

To improve the accuracy of label estimation for test data, we fuse the decisions of the SLFN and CPLP to get the final label probability of the test samples. The fused probability vector $\mathbf{z}_n^{u}$ of $\mathbf{x}_n^{u}$ is obtained by
$$\mathbf{z}_n^{u} = \kappa_n \mathbf{p}_n^{u} + (1-\kappa_n)\,\mathbf{f}_n^{u} \qquad (17)$$
where $\kappa_n$ is a parameter controlling the relative contributions of the classification results of the SLFN and CPLP algorithms for $\mathbf{x}_n^{u}$. The more reliable the classification result of CPLP, the smaller the parameter $\kappa_n$. The classifier energy-based evaluation [21], which assumes that neighboring samples should have the same labels, is used to compute $\kappa_n$ as follows.
Let $\mathbf{Y} = \left[ X_L, X_m^U \right]$, $\mathbf{E}_1 = \left[ \mathbf{Y}_L^{T}, \mathbf{P}_m^{U\,T} \right]$, $\mathbf{E}_2 = \left[ \mathbf{Y}_L^{T}, \mathbf{F}_m^{U\,T} \right]$, and $KR_n^u$ be the indices of the k-RNNs of $\mathbf{x}_n^{u}$ in $\mathbf{Y}$. The energy-based evaluation of classifier $v$ for $\mathbf{x}_n^{u}$ is defined as
$$r_v(n) = \sum_{i=1}^{\left| KR_n^u \right|} \left\| \mathbf{E}_v(N_l+n) - \mathbf{E}_v\!\left( KR_n^u(i) \right) \right\|_2^2, \quad v = 1, 2 \qquad (18)$$
where $\mathbf{E}_v(N_l+n)$ is the $(N_l+n)$th column of $\mathbf{E}_v$, and $\left| KR_n^u \right|$ is the number of samples in $KR_n^u$. Then $\kappa_n$ is computed as
$$\kappa_n = \frac{r_2(n)}{r_1(n) + r_2(n)} \qquad (19)$$
Then the pseudolabel of $\mathbf{x}_n^{u}$ is assigned as $c^{*} = \arg\max_{c} \mathbf{z}_n^{u}$, and the corresponding label probability is $\max \mathbf{z}_n^{u}$. We select the test samples in the top $p$ percent of label probability as additional training data to update the SLFN parameters.
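The fusion and selection step can be sketched as follows (our NumPy illustration; the energy scores $r_1, r_2$ of Equation (18) are assumed to be precomputed, and all toy values are illustrative).

```python
import numpy as np

def fuse_and_select(P, F, r1, r2, p=90):
    # Eq. (19): kappa is small when the CPLP energy r2 is small (CPLP reliable)
    kappa = r2 / (r1 + r2)
    Z = kappa[:, None] * P + (1 - kappa[:, None]) * F   # Eq. (17)
    labels, conf = Z.argmax(1), Z.max(1)                # pseudolabels and confidences
    n_keep = max(1, int(round(len(conf) * p / 100)))    # top p percent
    keep = np.argsort(-conf)[:n_keep]                   # indices of retained samples
    return labels, conf, keep

P = np.array([[0.9, 0.1], [0.4, 0.6]])                  # SLFN probabilities
F = np.array([[0.8, 0.2], [0.7, 0.3]])                  # CPLP probabilities
labels, conf, keep = fuse_and_select(P, F, np.array([1.0, 1.0]),
                                     np.array([1.0, 3.0]), p=50)
```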

3. Experiment Results and Analysis

In this section, the effectiveness of the proposed HRRP-based RATR method is demonstrated by experiments that used limited training data. Two datasets, the simulated HRRP dataset of 10 civilian vehicles and the measured HRRP dataset of three military vehicles, were tested. First, the superiority of the proposed CPLP method is verified. Then, the recognition performance of the proposed SLFN with CPLP method is presented with limited training data. Finally, the computational complexity is analyzed.
For the recognition results to be more persuasive, S-fold cross-validation was utilized in the following experiments. The dataset of each target was divided into $S$ groups, of which the $\gamma$th group contained the HRRP samples with indices $\gamma, \gamma+S, \gamma+2S, \ldots$, where $\gamma = 1, 2, \ldots, S$. In the $\gamma$th experiment, the $\gamma$th group was used as the complete training dataset and the remaining data as the test dataset. The limited training data were simulated by uniformly choosing samples from the complete training dataset. The recognition results were computed by averaging the results of the $S$ experiments.
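The interleaved group construction described above can be sketched as follows (a Python illustration of ours, using 0-based indices):

```python
import numpy as np

def interleaved_folds(N, S):
    # group g holds indices g, g+S, g+2S, ... (0-based version of the text's
    # gamma, gamma+S, gamma+2S, ...)
    return [np.arange(g, N, S) for g in range(S)]

folds = interleaved_folds(12, 4)   # toy example: 12 samples, S = 4
```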
The time-shift sensitivity and amplitude-scale sensitivity are two issues that must be addressed in HRRP-based RATR. In our experiments, the zero phase sensitivity alignment method [31] was used to tackle the time-shift sensitivity, and the l2-norm normalization was performed to deal with the amplitude-scale sensitivity.
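The amplitude normalization step can be sketched as follows (our illustration; the zero-phase alignment method of [31] is not reproduced here):

```python
import numpy as np

def l2_normalize(hrrp):
    # scale each HRRP (one per row) to unit l2 norm to remove
    # amplitude-scale variation; alignment is assumed to be done beforehand
    norms = np.linalg.norm(hrrp, axis=1, keepdims=True)
    return hrrp / np.maximum(norms, 1e-12)

X = l2_normalize(np.array([[3.0, 4.0], [0.0, 2.0]]))
```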
All the experiments were performed with MATLAB code on a PC with 16 GB of RAM and an Intel i7 CPU running at 3.6 GHz.

3.1. Simulated Data of 10 Civilian Vehicles

3.1.1. Dataset Description

The first HRRP dataset was the Air Force Research Laboratory's (AFRL) publicly released dataset. The dataset consists of simulated scattering data from 10 civilian vehicles: a Toyota Camry, Honda Civic 4dr, 1993 Jeep, 1999 Jeep, Nissan Maxima, Mazda MPV, Mitsubishi, Nissan Sentra, Toyota Avalon, and Toyota Tacoma. The center frequency is 9.6 GHz, and 128 frequencies are equally spaced by 10.48 MHz. The azimuth angle changes from 0° to 360° with an interval of 0.0625°. Since the CAD models of the targets have azimuthal symmetry, the data with azimuth angles in [0°, 180°] were exploited, resulting in 2880 samples per target. A detailed description of this dataset can be found in [32]. In our experiments, the subdataset with an elevation angle of 30° was exploited, since it simulates the HRRPs of targets over the ground plane, which is more practical. The normalized HRRPs of the 10 civilian vehicles are shown in Figure 3.
The 8-fold cross-validation was exploited as mentioned before. Consequently, the complete training dataset of each target contained 360 HRRP samples with an azimuth interval of 0.5°, and 2520 test samples constituted the test dataset of each target. In each validation, 30 Monte Carlo experiments were conducted.

3.1.2. Recognition Performance of the CPLP Algorithm

In this section, the recognition performance of the CPLP algorithm is demonstrated on the simulated data of 10 civilian vehicles. First, the influences of some parameters on the recognition performance are presented. Then, we study the recognition performance versus the sizes of the unlabeled and labeled datasets.
 (1) Parameter Setup
In this section, the effects on the recognition performance of the CPLP algorithm of the parameters $\alpha$, $\sigma^2$, and the $k$ in the k-RNN algorithm used for computing $\mathbf{W}_{UL}$ (denoted $k_1$) and $\mathbf{W}_{UU}$ (denoted $k_2$) are given. The labeled dataset consists of a tenth of the complete training data, i.e., 36 HRRPs from each target. The unlabeled dataset contains 1000 HRRP samples, i.e., 100 samples randomly selected from the test dataset of each target.
First, we evaluated the influences of the parameters $k_1$ and $k_2$ on the recognition performance of the CPLP algorithm. The parameter $k_1$ controls the number of RNNs between labeled and unlabeled samples, and $k_2$ is related to the number of RNNs among unlabeled samples. Letting the values of $k_1$ and $k_2$ range from 3 to 20 with an interval of 1, we obtained the results shown in Figure 4. We can see that the recognition accuracy first increased and then decreased with increasing $k_1$ and $k_2$, and the best performance was obtained for $k_1 \in [4, 9]$ and $k_2 \in [6, 12]$. In the following experiments conducted on the data of 10 civilian vehicles, we fixed $k_1 = 6$ and $k_2 = 10$.
Next, we analyzed the recognition accuracy for values of $\sigma^2$ ranging from 0.01 to 0.2 with an interval of 0.01; the results are shown in Figure 5. Clearly, the recognition rates increased as $\sigma^2$ varied from 0.01 to 0.05, and then reduced gradually. Hence, $\sigma^2 = 0.05$ was used for the following experiments on the data of 10 civilian vehicles.
Finally, the effect of the parameter $\alpha$ was evaluated. The other parameters were fixed as $\sigma^2 = 0.05$, $k_1 = 6$, and $k_2 = 10$. As shown in Section 2.2, the parameter $\alpha$ affects the value of $\mathbf{W}_{UL}^{*}$. We set $\alpha = 0, 0.3, 0.6, 0.99$; the computed matrices $\mathbf{W}_{UL}^{*}$ are shown in Figure 6.
As can be seen, with increased $\alpha$, more edges are built between labeled samples and unlabeled ones, and the edge weights become larger. Samples belonging to the same class may not be connected when $\alpha$ is small, whereas edges between samples belonging to different classes may be built when $\alpha$ is large. Both cases result in unsatisfactory recognition performance. Thus, an appropriate value of $\alpha$ is important for the performance of the CPLP algorithm. It should be noted that the matrix $\mathbf{W}_{UL}^{*}$ degenerates to the unsupervised version $\mathbf{W}_{UL}$ when $\alpha = 0$. Hence, the LP algorithm, which computes $\mathbf{W}_{UL}^{*}$ in an unsupervised manner, can be regarded as a special case of the proposed CPLP algorithm.
Figure 7 shows the recognition results when the parameter $\alpha$ varies from 0 to 0.99 with an interval of 0.01. It can be seen that the recognition accuracy increases gradually when $\alpha \in [0, 0.4]$, then maintains a relatively stable value when $\alpha \in [0.4, 0.75]$, and finally decreases gradually. In the following experiments on the HRRP data of 10 civilian vehicles, $\alpha = 0.6$ was chosen.
 (2) Performance Versus Size of Unlabeled Dataset
The recognition performance of the proposed CPLP algorithm is compared with that of the LP algorithm with regard to the size of the unlabeled dataset. The number of unlabeled samples of each target ranged from 50 to 500 with an interval of 50; the recognition results are shown in Figure 8. It can be observed that (1) the recognition accuracy of both methods increased as the size of the unlabeled dataset increased, because more unlabeled samples facilitate more accurate descriptions of the data structure; and (2) the proposed CPLP algorithm exhibited better performance than the LP algorithm due to the constraint propagation for computing the matrix $\mathbf{W}_{UL}^{*}$.
 (3) Performance Versus Size of Labeled Dataset
We also conducted experiments to compare the CPLP and LP algorithms under different sizes of the labeled dataset. The recognition results when the number of labeled samples of each target ranged from 24 to 72 with an interval of 12 are shown in Figure 9. We see that the recognition performance improved with increasing size of the labeled dataset. The reason is that a small labeled dataset is insufficient for discovering the data structure of each target. Moreover, the CPLP algorithm outperformed the LP algorithm, especially for small labeled datasets, because of the constraint propagation for computing the matrix $\mathbf{W}_{UL}^{*}$.

3.1.3. Recognition Performance of SLFN with CPLP Method

In this section, the effectiveness of the proposed SLFN with CPLP method is demonstrated on the simulated data of 10 civilian vehicles. First, the recognition performance of the proposed method is investigated with varying sizes of the OTD and TDC, and varying values of the parameter p. Then, comparative experiments with state-of-the-art methods are presented, including the self-training OS-RKELM method, the OS-RKELM with an SVM, the incremental Laplacian regularization extreme learning machine (ILR-ELM) [33], the SVM, the K-SVD, and the DDAE.
In the following experiments, a Gaussian kernel of the form $K(\mathbf{x}_1, \mathbf{x}_2) = \exp\left( -\left\| \mathbf{x}_1 - \mathbf{x}_2 \right\|_2^2 / b \right)$ was selected for Equation (1). The parameter $b$ was set to 0.6, which was the optimal value according to our observations. The TDC was randomly selected from the test dataset. After being learned, it was removed from the test dataset.
 (1) Performance Versus Size of the OTD
In this section, the recognition performance of the proposed SLFN with CPLP method is studied with varying sizes of the OTD. The recognition results with learning steps are shown in Figure 10. As expected, for all the sizes of OTD, the proposed method exhibited increasing recognition performance along learning steps. This indicates that the proposed method can improve the recognition performance by exploiting knowledge from test data. In addition, the larger the size of the OTD, the better the recognition performance, since a large OTD can describe the target signature in more detail.
 (2) Performance Versus Size of the TDC
In this section, we study the recognition performance of the proposed method with different sizes of the TDC. The results are shown in Figure 11. The observations were as follows. First, for all sizes of TDC, the recognition performance increased during the update process, which demonstrates the effectiveness of the proposed method. Second, a larger TDC yielded a greater improvement in recognition performance. The reason is that the CPLP algorithm exhibits better recognition performance with more test samples, so the new training data used for updating the SLFN parameters contained fewer mislabeled samples.
 (3) Performance Versus Parameter p
In this section, we study the influence of the parameter p on the recognition performance of the proposed SLFN with CPLP algorithm. Figure 12 shows the recognition results with learning steps under different values of p.
We observed that (1) when $p \geq 30$, the recognition performance improved steadily with the learning steps; (2) when $p = 10$ and $20$, the recognition accuracy first increased and then decreased with the learning steps; and (3) the recognition performance gained the largest improvement when $p = 90$. The reasons are analyzed in what follows. As illustrated in [34], in the update process, the samples with true pseudolabels but low confidence carry more information and contribute more to improving the SLFN performance, whereas employing only the samples with high confidence may result in overfitting of the classifiers. When p is small, the dataset used for updating the SLFN parameters consists only of samples with high confidence that match the current SLFN and CPLP models well, so the SLFN model may overfit the learned samples as the learning steps proceed. When p increases, more samples with low confidence are used for updating the SLFN parameters, resulting in a larger improvement in recognition performance. However, the number of mislabeled samples also increases with increasing p, so the recognition performance with $p = 100$ is worse than that with $p = 90$.
Figure 13 shows the variation of recognition results after the update process with the parameter p. We can see that when the size of the OTD is fixed, the trends in the performance curves of different sizes of TDC are almost the same as with the parameter p. This indicates that the size of the TDC has little effect on the optimal value of p. However, the optimal value of p becomes greater with an increase in the size of the OTD. The reason is that the greater the size of the OTD, the smaller the number of error-labeled samples contained in the TDC. When the size of the OTD of each target is 24, the optimal value of p is 85, so a small number of error-labeled samples were used to update the SLFN parameters. When the size of the OTD of each target goes up to 72, the optimal value of p is 100. The number of error-labeled samples contained in the TDC was small enough that all samples with pseudolabels in TDC were used to update the SLFN parameters.
 (4) Performance Comparison
The proposed SLFN with CPLP algorithm is evaluated through a comparison with the state-of-the-art methods. Figure 14 shows the recognition results with learning steps under different sizes of OTD and TDC. In addition, we provide the baseline results as an upper bound in which all the test data are labeled correctly and were adopted for updating the SLFN parameters.
We found that the proposed method outperformed the other methods for all sizes of TDC and OTD. The recognition accuracy of the self-training OS-RKELM method first increased but then reduced gradually with the learning steps. This is because this method selects the test data with high confidence annotated by itself, which reinforces the bias of the currently encoded model along the learning steps. The SLFN with SVM method exhibited the same trend as the self-training OS-RKELM method on the HRRPs of the 10 civilian vehicles. The reason may be that the SVM classifier only considers the structure of the training data and ignores the information contained in the test data. Moreover, it treats all the learned test samples as training data for the SVM, which accumulates mislabeled samples and leads to worse recognition performance. The ILR-ELM depends on the manifold assumption to utilize the unlabeled test data. However, this assumption does not hold for all samples, which may actually hurt recognition accuracy. The SVM, K-SVD, and DDAE algorithms do not utilize the test data, so their recognition rates were unchanged during the classification process. Since the SVM method does not utilize all the training samples, it exhibited worse performance than the RKELM algorithm when the training data were incomplete. The K-SVD algorithm performed poorly in terms of recognition because of a lack of latent information among the training samples of the 10 civilian vehicles. For the DDAE algorithm, the limited training data of the 10 civilian vehicles were not enough to obtain appropriate weights for the deep network, so overfitting could occur and the generalization performance was unsatisfactory.

3.2. Measured Data of 3 Military Vehicles

3.2.1. Dataset Description

In this section, the effectiveness of the proposed method is demonstrated using the measured data of three military vehicles with sizes of 5.8 × 2.9 m, 5.5 × 2.6 m, and 6.5 × 3 m, respectively. The measured data were collected in an open field using a radar system placed on a platform 100 m above the ground plane. A stepped-frequency pulsed waveform was transmitted with a frequency step of 5 MHz and a resultant bandwidth of 640 MHz. Figure 15 shows the top views of the geometries of the three military vehicles relative to the radar. The first military vehicle was spinning around its center while it was illuminated by electromagnetic waves, and 1991 HRRP samples were collected. The second military vehicle moved along an elliptical path while 1398 HRRP samples were acquired. The echoes of the third vehicle were collected at eight azimuth angles, with 120 HRRP samples collected at each azimuth angle.
The 4-fold, 3-fold, and 4-fold cross-validations were used for the data of the three military vehicles, respectively. As a result, the complete training datasets of the three targets contain 497, 466, and 240 HRRP samples, respectively, and 1494, 932, and 720 samples constitute the corresponding test datasets. In the following experiments, 80 Monte Carlo trials were conducted in each validation.
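The stated sizes are consistent with a split in which a single fold forms the (deliberately small) training set and the remaining folds form the test set, e.g., 1991 samples / 4 folds → 497 training and 1494 test HRRPs. A sketch of that protocol (the authors' exact shuffling is unknown; this is an inference from the stated sizes):

```python
import numpy as np

def one_fold_train_split(n_samples, n_folds, seed=0):
    """Use ONE fold's worth of samples for training and the remainder
    for testing -- the inverse of the usual k-fold convention, which is
    what limited-training-data experiments call for.  A sketch inferred
    from the dataset sizes reported above, not the authors' code.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = n_samples // n_folds        # one fold trains
    return idx[:n_train], idx[n_train:]   # the rest tests

# The three military-vehicle datasets: (total samples, folds)
for n, k in [(1991, 4), (1398, 3), (960, 4)]:
    train, test = one_fold_train_split(n, k)
    print(len(train), len(test))
```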

3.2.2. Recognition Performance of CPLP Algorithm

The recognition performance of the CPLP algorithm was studied on the data of the three military vehicles. First, the effects of some parameters are examined. Then, the performance versus the sizes of the unlabeled and labeled datasets is analyzed.
 (1) Parameter Setup
In this section, we present the impact of the parameters α, σ², k1, and k2 on the recognition performance using the measured data of the three military vehicles. The labeled dataset consists of a tenth of the complete training data, i.e., 47 HRRPs from the first target, 50 HRRPs from the second target, and 24 HRRPs from the third target. The unlabeled dataset contains 300 HRRP samples, i.e., 100 samples randomly selected from the test data of each target.
First, we analyze the effect of the parameters k1 and k2. In the experiments, k1 was varied from 10 to 100 and k2 from 1 to 30, both with an interval of 1. The average recognition rates are shown in Figure 16. As shown, the CPLP algorithm achieved its best performance for k1 ∈ [42, 48] and k2 = 4. In the following experiments conducted on the measured data of the three military vehicles, k1 = 46 and k2 = 4 were chosen.
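The parameters k1 and k2 govern the KRNN (k-reciprocal nearest neighbor) graph construction. A generic numpy sketch of a KRNN affinity matrix, in which an edge survives only when the neighbor relation is mutual (illustrative of the technique referenced here and in Section 3.3, not the authors' exact construction; `k` stands in for k1 or k2):

```python
import numpy as np

def krnn_graph(X, k, sigma2):
    """k-reciprocal nearest-neighbor graph: keep edge i--j only if j is
    among the k nearest neighbors of i AND i is among the k nearest
    neighbors of j, which suppresses one-sided (often spurious) links.
    Surviving edges get Gaussian weights in squared Euclidean distance.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dist.
    order = np.argsort(d2, axis=1)[:, 1:k + 1]           # kNN, self excluded
    knn = np.zeros_like(d2, dtype=bool)
    np.put_along_axis(knn, order, True, axis=1)          # one-sided kNN mask
    reciprocal = knn & knn.T                             # mutual neighbors only
    return np.where(reciprocal, np.exp(-d2 / sigma2), 0.0)
```

Too small a k leaves the graph disconnected, while too large a k admits cross-class edges, which is consistent with the interior optimum for k1 observed in Figure 16.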
The performance of the CPLP algorithm versus the parameter σ² was evaluated next. Figure 17 shows the recognition results. Clearly, the recognition accuracy first increased and then decreased, with the best performance achieved at σ² = 0.06. In the following experiments conducted on the measured data of the three military vehicles, σ² = 0.06 was chosen.
Finally, the impact of the parameter α is analyzed. The constructed matrices W_UL* are shown in Figure 18 for α = 0, 0.35, 0.7, and 0.99. We can see that a larger α leads to more constructed edges and larger edge weights, consistent with what was observed in Figure 6.
Figure 19 shows the recognition accuracy when α ranges from 0 to 0.99 with an interval of 0.01. Both overly large and overly small values of α degrade recognition performance; the best performance was achieved for α ∈ [0.26, 0.4]. In the following experiments conducted on the measured data of the three military vehicles, α = 0.32 was chosen.
 (2) Performance Versus Size of Unlabeled Dataset
The performance of the CPLP and LP algorithms was studied versus the size of the unlabeled dataset. The recognition results are presented in Figure 20 for the number of unlabeled samples of each target varying from 50 to 250 with an interval of 50. The recognition accuracy of both methods improved as the unlabeled dataset grew. Moreover, owing to the constraint propagation applied to the matrix W_UL*, the proposed CPLP algorithm achieved higher recognition accuracy than the LP algorithm.
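The LP baseline compared here (ref. [19]) admits a closed form that makes the role of α explicit. A minimal numpy sketch under the standard formulation; CPLP additionally replaces the unlabeled-to-labeled affinities with the constraint-propagated matrix W_UL*, which is not reproduced here:

```python
import numpy as np

def label_propagation(W, Y, alpha=0.32):
    """Graph label propagation in closed form: iterating
        F <- alpha * S @ F + (1 - alpha) * Y
    on the normalized affinity S = D^{-1/2} W D^{-1/2} converges to
        F* = (1 - alpha) * (I - alpha * S)^{-1} @ Y.
    W is a symmetric affinity matrix over all (labeled + unlabeled)
    nodes and Y holds one-hot labels (zero rows for unlabeled nodes).
    The default alpha matches the value chosen above; the routine is a
    textbook sketch, not the paper's CPLP implementation.
    """
    d = W.sum(axis=1)
    d[d == 0] = 1.0                                   # guard isolated nodes
    d_inv_sqrt = 1.0 / np.sqrt(d)
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    n = W.shape[0]
    F = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F.argmax(axis=1)                           # predicted class per node
```

Because labels spread only along graph edges, the quality of W dominates accuracy, which is why the constraint-propagated W_UL* of CPLP beats plain LP in Figure 20.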
 (3) Performance Versus Size of Labeled Dataset
We compared the recognition performance of the CPLP and LP algorithms versus the size of the labeled dataset. The recognition results are shown in Figure 21 for the size of the labeled dataset ranging from 60 to 300 with an interval of 60. The recognition rates became higher as the labeled dataset grew, and the proposed CPLP algorithm outperformed the LP algorithm at all sizes of the labeled dataset.

3.2.3. Recognition Performance of SLFN with CPLP Method

In this section, the superiority of the SLFN with CPLP method is demonstrated on the measured data of the three military vehicles. First, the effects of the size of the OTD, the size of the TDC, and the parameter p on recognition performance are analyzed; comparative experiments were then conducted.
According to our observations, the optimal value of the parameter b in Equation (1) is 0.5. The TDC contained samples selected randomly from the test dataset; after being learned, these samples were removed from the test dataset.
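The fusion-and-selection step these experiments build on can be sketched as follows: combine the two classifiers' score matrices with weight b (b = 0.5 weights SLFN and CPLP equally, matching the choice above) and keep the p most confident test samples as reliably pseudolabeled update data. Function and variable names are illustrative, and the paper's exact Equation (1) is not reproduced here:

```python
import numpy as np

def fuse_and_select(scores_slfn, scores_cplp, p, b=0.5):
    """Fuse two (N_u, C) normalized score matrices with weight b, then
    select the p test samples whose fused top score is highest.  These
    become the pseudolabeled chunk used to update the SLFN; the rest of
    the chunk is left unlabeled.  Hypothetical sketch of the step
    described in the text.
    """
    fused = b * scores_slfn + (1 - b) * scores_cplp  # weighted fusion
    pseudo = fused.argmax(axis=1)                    # fused pseudolabels
    conf = fused.max(axis=1)                         # fused confidence
    keep = np.argsort(conf)[::-1][:p]                # p most confident
    return keep, pseudo[keep]
```

This makes the trade-off behind p concrete: a larger p contributes more update data per step but admits lower-confidence (riskier) pseudolabels, which is consistent with the OTD-dependent optimum reported below.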
 (1) Performance Versus Size of OTD
In this section, the recognition performance of the proposed method for the three military vehicles is studied under different sizes of OTD. Figure 22 plots the recognition results versus learning steps. As can be seen, the performance of the proposed method improves with learning steps, which indicates its effectiveness. Furthermore, more offline training samples lead to higher recognition rates, in agreement with the previous analysis.
 (2) Performance Versus Size of TDC
The recognition performance of the proposed method for the three military vehicles under different sizes of TDC is studied in this section. The recognition results are shown in Figure 23. As can be seen, the recognition rates increase with learning steps for all sizes of TDC, which verifies the effectiveness of the proposed method. Moreover, a TDC containing more samples yielded better performance, which is consistent with the observations from Figure 11.
 (3) Performance with Parameter p
In this section, the influence of the parameter p on the recognition performance of the SLFN with CPLP algorithm is studied on the measured data of the three military vehicles. The recognition results versus learning steps under different values of p are presented in Figure 24. We observe that (1) when the size of OTD = 120, the recognition rates increase with learning steps for all values of p, and the greatest improvement in recognition performance is gained at p = 80; (2) when the size of OTD = 240, the recognition rates increase with learning steps when p ≥ 30, and the best recognition performance is achieved at p = 90. These phenomena demonstrate the effectiveness of the proposed method; the reason is explained in Section 3.1.3.
Figure 25 shows the recognition results for the three military vehicles after the update process versus the parameter p. As expected, the optimal value of p is little influenced by the size of the TDC but is affected by the size of the OTD: the more offline training samples there are, the larger the optimal p.
 (4) Performance Comparison
The following experiments examined the performance of the proposed method and the other methods on the measured data. Figure 26 presents the recognition results versus learning steps under different sizes of OTD and TDC. Obviously, the proposed method exceeds the compared methods at all sizes of OTD and TDC. When the size of OTD = 120, the DDAEs algorithm exhibits better performance than the proposed method before the update, but it is overtaken by the proposed method as the learning process proceeds.

3.3. Computation Analysis

In the online stage, the proposed SLFN with CPLP method comprises two steps, i.e., the pseudolabel assignment for the test data and the SLFN parameter update by the OS-RKELM algorithm. For the sake of clarity, we analyze the computational complexity of each step.
The pseudolabel assignment for the test data comprises the prediction by the SLFN, the prediction by the CPLP algorithm, and the decision fusion of the SLFN and CPLP outputs. Let L be the length of an HRRP; then the computational complexity of the SLFN prediction is O(N_u N_l (L + C)). For the CPLP algorithm, the most time-consuming parts are the KRNN graph construction for computing W_UU and W_UL, the construction of W_UL*, and the computation of F_m^U by Equation (7). The KRNN graph construction has a computational complexity on the order of O(N_u² log₂ N_u). The construction of W_UL* has a running time of O(N_u³ + N_u N_l² + N_u² N_l). Computing F_m^U has a complexity of O(N_u³ + N_u² N_l + N_u N_l C). Thus, the CPLP algorithm has a computational complexity on the order of O(N_u³ + N_u² N_l + N_u N_l² + N_u N_l C + N_u² log₂ N_u). For the decision fusion of the SLFN and CPLP outputs, the computational complexity is O((N_u + N_l)² log₂(N_u + N_l)).
For the SLFN parameter update by the OS-RKELM algorithm, the computational complexity is O(N_l (N_l² + N_u² + N_l N_u)) [24].
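The OS-RKELM update of ref. [24] has the recursive-least-squares form shared by the OS-ELM family, which is what gives a per-chunk cost instead of full retraining. A generic sketch of that recursion, with a plain hidden-layer output matrix H standing in for the reduced-kernel features (not the authors' exact implementation):

```python
import numpy as np

def os_elm_update(beta, P, H, T):
    """One sequential chunk update of the output weights beta, in the
    recursive-least-squares form used by OS-ELM-style methods:
        K    = P H^T (I + H P H^T)^{-1}
        P   <- P - K H P
        beta <- beta + P H^T (T - H beta)
    H is the (chunk_size, n_features) feature matrix of the new chunk
    and T its (chunk_size, n_outputs) targets.  Each chunk is absorbed
    without revisiting earlier data, matching the online setting above.
    """
    m = H.shape[0]
    # (I + H P H^T)^{-1} (H P), solved rather than explicitly inverted
    G = np.linalg.solve(np.eye(m) + H @ P @ H.T, H @ P)
    P = P - P @ H.T @ G                   # covariance update
    beta = beta + P @ H.T @ (T - H @ beta)  # weight correction
    return beta, P
```

Initialized with P = (H₀ᵀH₀)⁻¹ and the batch solution on the offline chunk, the recursion reproduces the least-squares solution over all data seen so far.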
Table 1 and Table 2 show the average computational times of the proposed method and the compared methods on the simulated HRRP data of the 10 civilian vehicles and the measured HRRP data of the three military vehicles, respectively. Clearly, the computational time increases with the sizes of the TDC and OTD. Owing to its SVM training process, the SLFN with SVM method is the most time-consuming of all the recognition methods. The proposed SLFN with CPLP method takes less time than the SLFN with SVM method, but more time than the ILR-ELM and self-training OS-RKELM methods because of the CPLP algorithm. The SVM, K-SVD, and DDAEs algorithms only classify samples and do not update the classifier parameters in the test phase, so they take much less time than the methods that perform online updates.

4. Conclusions

A novel dynamic learning strategy based on the SLFN, assisted by the CPLP algorithm, was proposed to tackle the HRRP target recognition task with limited training data. In the offline training phase, the initial parameters of the SLFN are obtained from the training data. In the classification phase, the collected test data are first labeled by fusing the recognition results of the current SLFN and the CPLP algorithm. Then the test samples with reliable pseudolabels are used as additional training data to update the SLFN parameters with the OS-RKELM algorithm. The proposed method dynamically accumulates knowledge from the training and test data through online learning, thereby reinforcing the performance of the RATR system when training data are limited. In the experiments, the superiority of the SLFN with CPLP method was demonstrated on the simulated HRRP data of 10 civilian vehicles and the real HRRP data of three military vehicles.
In the future, more pattern recognition methods should be investigated to improve the accuracy of the pseudolabels of the test data, and other online learning methods should be studied to deal with the RATR problem with limited training data. Moreover, we plan to extend our work to polarimetric HRRP-based RATR.

Author Contributions

J.W. conceived the main idea, designed the experiments, wrote the MATLAB code, and wrote the manuscript. Z.L. reviewed the manuscript. R.X. sourced the funding. L.R. sourced the funding and reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Nature Science Foundation of China under grant 62001346, in part by the Equipment Pre-search Field Foundation under grant 61424010408, in part by the Fundamental Research Funds for the Central Universities under grant JB190206, and in part by the China Postdoctoral Science Foundation under grant 2019M663632.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments to improve the paper quality.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ran, L.; Liu, Z.; Xie, R.; Zhang, L. Focusing high-squint synthetic aperture radar data based on factorized back-projection and precise spectrum fusion. Remote Sens. 2019, 11, 2885.
2. Zhang, X.; Shi, Y.; Bao, Z. A new feature vector using selected bispectra for signal classification with application in radar target recognition. IEEE Trans. Signal Process. 2001, 49, 1875–1885.
3. Du, L.; Liu, H.; Bao, Z.; Xing, M. Radar HRRP target recognition based on higher order spectra. IEEE Trans. Signal Process. 2005, 53, 2359–2368.
4. Wang, J.; Liu, Z.; Li, T.; Ran, L.; Xie, R. Radar HRRP target recognition via statistics-based scattering center set registration. IET Radar Sonar Navig. 2019, 13, 1264–1271.
5. Du, L.; He, H.; Zhao, L.; Wang, P. Noise robust radar HRRP target recognition based on scatterer matching algorithm. IEEE Sens. J. 2016, 16, 1743–1753.
6. Jacobs, S.P.; O'Sullivan, J.A. Automatic target recognition using sequences of high resolution radar range-profiles. IEEE Trans. Aerosp. Electron. Syst. 2000, 36, 364–381.
7. Copsey, K.; Webb, A. Bayesian gamma mixture model approach to radar target recognition. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 1201–1217.
8. Du, L.; Liu, H.; Bao, Z.; Zhang, J. A two-distribution compounded statistical model for radar HRRP target recognition. IEEE Trans. Signal Process. 2006, 54, 2226–2238.
9. Feng, B.; Chen, B.; Liu, H. Radar HRRP target recognition with deep networks. Pattern Recognit. 2017, 61, 379–393.
10. Pan, M.; Jiang, J.; Kong, Q.; Shi, J.; Sheng, Q.; Zhou, T. Radar HRRP target recognition based on t-SNE segmentation and discriminant deep belief network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1609–1613.
11. Liao, X.; Runkle, P.; Carin, L. Identification of ground targets from sequential high-range-resolution radar signatures. IEEE Trans. Aerosp. Electron. Syst. 2002, 38, 1230–1242.
12. Burges, C.J.C. A tutorial on support vector machines for pattern recognition. Data Min. Knowl. Discov. 1998, 2, 121–167.
13. Li, L.; Liu, Z.; Tao, L. Radar high resolution range profile recognition via multi-SV method. J. Syst. Eng. Electron. 2017, 28, 879–889.
14. Feng, B.; Du, L.; Liu, H.; Li, F. Radar HRRP target recognition based on K-SVD algorithm. In Proceedings of the 2011 IEEE CIE International Conference on Radar, Chengdu, China, 24–27 October 2011; pp. 642–645.
15. Feng, B.; Du, L.; Shao, C.; Wang, P.; Liu, H. Radar HRRP target recognition based on robust dictionary learning with small training data size. In Proceedings of the 2013 IEEE Radar Conference (RadarCon13), Ottawa, ON, Canada, 29 April–3 May 2013; pp. 1–4.
16. Du, L.; Liu, H.; Wang, P.; Feng, B.; Pan, M.; Bao, Z. Noise robust radar HRRP target recognition based on multitask factor analysis with small training data size. IEEE Trans. Signal Process. 2012, 60, 3546–3559.
17. Pan, M.; Jiang, J.; Li, Z.; Cao, J.; Zhou, T. Radar HRRP recognition based on discriminant deep autoencoders with small training data size. Electron. Lett. 2016, 52, 1725–1727.
18. Yver, B. Online semi-supervised learning: Application to dynamic learning from RADAR data. In Proceedings of the 2009 International Radar Conference "Surveillance for a Safer World" (RADAR 2009), Bordeaux, France, 12–16 October 2009; pp. 1–6.
19. Zhu, X.; Ghahramani, Z. Learning from labeled and unlabeled data with label propagation. Tech. Rep. 2002, 3175, 237–244.
20. Bordes, A.; Ertekin, S.; Weston, J.; Bottou, L. Fast kernel classifiers with online and active learning. J. Mach. Learn. Res. 2005, 6, 1579–1619.
21. Zhang, R.; Rudnicky, A.I. A new data selection principle for semi-supervised incremental learning. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR'06), Hong Kong, China, 20–24 August 2006; pp. 780–783.
22. Cui, Z.; Tang, C.; Cao, Z.; Dang, S. SAR unlabeled target recognition based on updating CNN with assistant decision. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1585–1589.
23. Deng, W.; Ong, Y.-S.; Zhang, Q. A fast reduced kernel extreme learning machine. Neural Netw. 2016, 76, 29–38.
24. Deng, W.; Ong, Y.-S.; Tan, P.S.; Zheng, Q. Online sequential reduced kernel extreme learning machine. Neurocomputing 2016, 174, 72–84.
25. Belkin, M.; Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 2003, 15, 1373–1396.
26. Liu, W.; Zhang, T. Bidirectional label propagation over graphs. Int. J. Softw. Inform. 2013, 7, 419–433.
27. Gong, C.; Fu, K.; Wu, Q.; Tu, E.; Yang, J. Semi-supervised classification with pairwise constraints. Neurocomputing 2014, 139, 130–137.
28. Qin, D.; Gammeter, S.; Bossard, L.; Quack, T.; Gool, L.V. Hello neighbor: Accurate object retrieval with k-reciprocal nearest neighbors. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 20–25 June 2011; pp. 777–784.
29. Zhong, Z.; Zheng, L.; Cao, D.; Li, S. Re-ranking person re-identification with k-reciprocal encoding. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3652–3661.
30. Liu, W.; Tian, X.; Tao, D.; Liu, J. Constrained metric learning via distance gap maximization. Proc. Nat. Conf. Artif. Intell. (AAAI) 2010, 24, 1.
31. Chen, B.; Liu, H.; Bao, Z. Analysis of three kinds of classification based on different absolute alignment methods. Mod. Radar 2006, 28, 58–62.
32. Dungan, K.E.; Austin, C.; Nehrbass, J.; Potter, L.C. Civilian vehicle radar data domes. SPIE Proc. 2010, 7699, 731–739.
33. Yang, L.; Yang, S.; Li, S.; Liu, Z.; Jiao, L. Incremental Laplacian regularization extreme learning machine for online learning. Appl. Soft Comput. 2017, 59, 546–555.
34. Zhang, Y.; Er, M.J. Sequential active learning using meta-cognitive extreme learning machine. Neurocomputing 2016, 173, 835–844.
Figure 1. Framework of the proposed method.
Figure 2. Flowchart of the proposed constraint propagation-based label propagation (CPLP) algorithm.
Figure 3. Normalized high-resolution range profiles (HRRPs) of 10 civilian vehicles. (a) Camry; (b) Jeep99; (c) Mitsubishi; (d) MazdaMPV; (e) HondaCivic4dr; (f) Jeep93; (g) Maxima; (h) Sentra; (i) ToyotaAvalon; (j) ToyotaTacoma.
Figure 4. Recognition results of the CPLP algorithm for 10 civilian vehicles with different values of k1 and k2. The parameters α and σ² were searched to get the best recognition results.
Figure 5. Recognition results of the CPLP algorithm for 10 civilian vehicles in relation to the parameter σ². In the experiments, k1 = 6 and k2 = 10, and α was searched to get the best recognition results.
Figure 6. Matrix W_UL* computed in the CPLP algorithm when α = 0, 0.3, 0.6, and 0.99, respectively. The labeled data and unlabeled data are arranged in the same target order: Camry, 1999 Jeep, Mitsubishi, Mazda MPV, Honda Civic 4dr, 1993 Jeep, Nissan Maxima, Nissan Sentra, Toyota Avalon, and Toyota Tacoma. (a) α = 0; (b) α = 0.3; (c) α = 0.6; (d) α = 0.99.
Figure 7. Recognition results of the CPLP algorithm for 10 civilian vehicles with different values of α .
Figure 8. Recognition results of CPLP and label-propagation (LP) algorithms for 10 civilian vehicles in relation to the size of unlabeled dataset of each target. In the experiments, labeled dataset consists of 360 HRRP samples, i.e., 36 HRRP samples from each target. The unlabeled data are randomly selected from test dataset.
Figure 9. Recognition results of CPLP and LP algorithms for 10 civilian vehicles in relation to the size of the labeled dataset of each target. In the experiments, we randomly selected 100 HRRP samples from the test dataset of each target, yielding 1000 HRRPs in the unlabeled dataset.
Figure 10. Recognition results of the proposed SLFN with CPLP method for 10 civilian vehicles versus learning steps under different sizes of offline training dataset (OTD). The test data chunk (TDC) contained 200 HRRP samples from each target, and the parameter p was set as 90 in the experiments.
Figure 11. Recognition results for 10 civilian vehicles with number of learned samples per target under different sizes of TDC. In the experiments, the size of OTD from each target was fixed as 36, and the parameter p was set as 100.
Figure 12. Recognition accuracy of the proposed SLFN with CPLP method for 10 civilian vehicles with learning steps when p ranged from 10 to 100 with an interval of 10. The OTD consisted of a tenth of the complete training data, and the size of TDC from each target was 200.
Figure 13. Recognition results for 10 civilian vehicles after the update process under varying values of parameter p. (a) Size of OTD from each target = 24; (b) size of OTD from each target = 36; (c) size of OTD from each target = 48; (d) size of OTD from each target = 60; (e) size of OTD from each target = 72.
Figure 14. Recognition performances of the proposed SLFN with CPLP method and other methods for 10 civilian vehicles with respect to learning steps; (a) size of OTD per target = 36 and size of TDC per target = 100; (b) size of OTD per target = 36 and size of TDC per target = 200; (c) size of OTD per target = 36 and size of TDC per target = 300; (d) size of OTD per target = 36 and size of TDC per target = 400; (e) size of OTD per target = 72 and size of TDC per target = 200; (f) size of OTD per target = 24 and size of TDC per target = 200.
Figure 15. Top views of the geometries of the 3 military vehicles relative to radar. (a) Military vehicle 1; (b) Military vehicle 2; (c) Military vehicle 3.
Figure 16. Recognition results of the CPLP algorithm for 3 military vehicles with different values of k1 and k2. In the experiments, the parameters σ² and α were searched to provide the best recognition performance.
Figure 17. Recognition results of the CPLP algorithm for 3 military vehicles versus the parameter σ². The parameter α was searched to acquire the best results in the experiments.
Figure 18. Matrix W_UL* computed in the CPLP algorithm using the measured data of 3 military vehicles when α = 0, 0.35, 0.7, and 0.99. The labeled data and unlabeled data are arranged in the same target order. (a) α = 0; (b) α = 0.35; (c) α = 0.7; (d) α = 0.99.
Figure 19. Recognition results of CPLP algorithm for 3 military vehicles versus parameter α .
Figure 20. Recognition results of CPLP and LP algorithm for three military vehicles versus size of unlabeled dataset. In the experiments, a tenth of the complete training data constituted the labeled dataset.
Figure 21. Recognition results of CPLP and LP algorithms for three military vehicles versus size of labeled dataset. In the experiments, the unlabeled dataset of each target contained 100 HRRP samples randomly selected from the test dataset.
Figure 22. Recognition results of the proposed SLFN with CPLP method for three military vehicles versus learning steps under different sizes of OTD. In the experiments, the TDC contained a total of 300 samples.
Figure 23. Recognition results of the proposed SLFN with CPLP method for 3 military vehicles versus learning steps under different sizes of TDC. In the experiments, the size of OTD was fixed as 120.
Figure 24. Recognition accuracy of the proposed SLFN with CPLP method for 3 military vehicles with learning steps when p ranges from 10 to 100 with an interval of 10. In the experiments, the size of TDC = 300. (a) Size of OTD = 120; (b) size of OTD = 240.
Figure 25. Recognition results for 3 military vehicles after update process under varying values of parameter p. (a) Size of OTD = 120; (b) size of OTD = 180; (c) size of OTD = 240.
Figure 26. Recognition performances of the proposed SLFN with CPLP algorithm and other methods for 3 military vehicles versus learning steps. (a) Size of OTD = 60 and size of TDC = 300; (b) size of OTD = 120 and size of TDC = 300; (c) size of OTD = 120 and size of TDC = 500; (d) size of OTD = 180 and size of TDC = 300; (e) size of OTD = 240 and size of TDC = 300; (f) size of OTD = 300 and size of TDC = 300.
Table 1. Average computational times of the proposed SLFN with CPLP method and other methods on the HRRP dataset of 10 civilian vehicles. (The SVM, K-SVD and DDAEs baselines do not perform online updating, so their times are reported once per OTD size.)

| Size of OTD | Size of TDC | SLFN with CPLP (s) | ILR-ELM (s) | SLFN with SVM (s) | OS-RKELM Self-Training (s) | SVM (ms) | K-SVD (ms) | DDAEs (ms) |
|---|---|---|---|---|---|---|---|---|
| 240 | 1000 | 0.521 | 0.221 | 13.637 | 0.211 | 0.906 | 0.968 | 5.338 |
| 240 | 2000 | 1.398 | 0.381 | 13.953 | 0.351 | | | |
| 240 | 3000 | 3.000 | 0.623 | 14.543 | 0.543 | | | |
| 240 | 4000 | 5.335 | 0.931 | 15.070 | 0.811 | | | |
| 360 | 1000 | 0.609 | 0.276 | 15.160 | 0.305 | 0.968 | 0.975 | 7.024 |
| 360 | 2000 | 1.570 | 0.455 | 15.281 | 0.520 | | | |
| 360 | 3000 | 3.218 | 0.707 | 15.702 | 0.867 | | | |
| 360 | 4000 | 5.696 | 1.036 | 16.724 | 1.420 | | | |
| 720 | 1000 | 0.897 | 0.477 | 19.220 | 0.595 | 1.106 | 0.983 | 7.297 |
| 720 | 2000 | 1.953 | 0.687 | 19.515 | 1.244 | | | |
| 720 | 3000 | 3.828 | 0.960 | 21.021 | 2.473 | | | |
| 720 | 4000 | 6.451 | 1.373 | 22.911 | 4.251 | | | |
Table 2. Average computational times of the proposed SLFN with CPLP method and other methods on the HRRP dataset of 3 military vehicles. (As in Table 1, the SVM, K-SVD and DDAEs times are reported once per OTD size.)

| Size of OTD | Size of TDC | SLFN with CPLP (s) | ILR-ELM (s) | SLFN with SVM (s) | OS-RKELM Self-Training (s) | SVM (ms) | K-SVD (ms) | DDAEs (ms) |
|---|---|---|---|---|---|---|---|---|
| 60 | 200 | 0.078 | 0.014 | 0.344 | 0.008 | 0.228 | 0.300 | 5.317 |
| 60 | 300 | 0.100 | 0.022 | 0.356 | 0.012 | | | |
| 60 | 500 | 0.154 | 0.041 | 0.365 | 0.020 | | | |
| 120 | 200 | 0.080 | 0.015 | 0.396 | 0.011 | 0.294 | 0.304 | 6.309 |
| 120 | 300 | 0.103 | 0.023 | 0.397 | 0.017 | | | |
| 120 | 500 | 0.156 | 0.041 | 0.405 | 0.032 | | | |
| 240 | 200 | 0.082 | 0.017 | 0.463 | 0.018 | 0.308 | 0.306 | 7.077 |
| 240 | 300 | 0.103 | 0.025 | 0.465 | 0.026 | | | |
| 240 | 500 | 0.163 | 0.047 | 0.471 | 0.041 | | | |
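The per-method average runtimes reported in Tables 1 and 2 are wall-clock averages over repeated train-and-classify runs. A minimal sketch of such a timing harness is shown below; the `nearest_centroid` classifier is a toy stand-in used only to exercise the harness, not any of the methods compared in the tables, and all names here are illustrative assumptions rather than the authors' code.

```python
import time
import numpy as np

def average_runtime(fit_predict, X_train, y_train, X_test, n_trials=10):
    """Average wall-clock time of a train-and-classify run over n_trials runs.

    `fit_predict` is any callable that trains on (X_train, y_train) and
    labels X_test.
    """
    times = []
    for _ in range(n_trials):
        start = time.perf_counter()
        fit_predict(X_train, y_train, X_test)
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

def nearest_centroid(X_train, y_train, X_test):
    # Toy classifier: assign each test sample to the class with the
    # nearest per-class mean feature vector.
    classes = np.unique(y_train)
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    d = ((X_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

# Synthetic stand-in for HRRP feature vectors (240 training, 1000 test samples).
rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(240, 64)), rng.integers(0, 3, size=240)
X_te = rng.normal(size=(1000, 64))
print(f"average runtime: {average_runtime(nearest_centroid, X_tr, y_tr, X_te):.4f} s")
```

Averaging over several trials with `time.perf_counter` (a monotonic high-resolution clock) smooths out scheduling jitter, which matters when the quantities being compared differ by milliseconds, as in the tables above.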
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Citation: Wang, J.; Liu, Z.; Xie, R.; Ran, L. Radar HRRP Target Recognition Based on Dynamic Learning with Limited Training Data. Remote Sens. 2021, 13, 750. https://doi.org/10.3390/rs13040750
