Article

RPREC: A Radar Plot Recognition Algorithm Based on Adaptive Evidence Classification

1 Engineering Comprehensive Training Center, Xi’an University of Architecture and Technology, Xi’an 710055, China
2 School of Mechanical and Electrical Engineering, Xi’an University of Architecture and Technology, Xi’an 710055, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(22), 12511; https://doi.org/10.3390/app132212511
Submission received: 19 September 2023 / Revised: 14 November 2023 / Accepted: 17 November 2023 / Published: 20 November 2023
(This article belongs to the Topic Radar Signal and Data Processing with Applications)

Abstract

When radar receives target echoes and forms plots, it is inevitably affected by clutter, which introduces a great deal of imprecise and uncertain information into target recognition. Traditional radar plot recognition algorithms often perform poorly when dealing with such information. To solve this problem, a radar plot recognition algorithm based on adaptive evidence classification (RPREC) is proposed in this paper. The RPREC can be regarded as an evidential classification method built on belief functions. First, a recognition framework based on belief functions for target, clutter, and uncertainty is created, and a deep-neural-network classifier that outputs the class of radar plots is designed. Secondly, according to the classification results of each iteration round, decision evidence is constructed and fused; before fusion, the evidence is corrected based on the distribution of the radar plots. Finally, based on the global fusion results, the class labels of all radar plots are updated, and the classifier is retrained and updated; this iteration continues until the class labels of all radar plots no longer change. The performance of the RPREC is verified and analyzed on real radar plot datasets by comparison with other related methods.

1. Introduction

Due to limited detection accuracy and the influence of clutter, radars form a large number of clutter plots when processing target echoes, which is not conducive to target recognition. Especially in areas of high clutter density, the target plots can hardly be seen [1,2]. In order to reduce radar detection errors, multiple radars often work together. However, this can also cause clutter plots to overlap with each other, forming dense clutter areas with irregular spatial distributions that seriously affect the recognition accuracy and real-time performance of radar data processing. It can be seen that, in radar plot processing, effectively detecting targets among a large number of clutter and uncertain plots is the key to achieving precise target tracking [3,4,5]. In order to ensure efficient and fast processing of radar plots, traditional radar plot recognition algorithms usually use binary classification rules to decide whether a plot is a target or clutter [6,7,8,9]. However, it is well known that targets cannot always be accurately classified. Because a yes-or-no decision cannot effectively represent uncertain measurements, it increases the error rate and is not conducive to correct decision analysis in these traditional methods. There are two main ways to solve this problem: one is to use belief functions to accurately represent uncertain information, and the other is to use deep neural networks, with their stronger sample-learning ability, for classification. Belief functions have significant advantages in dealing with uncertain and imprecise information [10,11,12,13,14].
So, they have been applied in some fields such as pattern recognition [15,16], data clustering [17,18,19,20,21,22,23], data classification [16,24,25,26,27], security assessment [28,29], sensor information fusion [30], abnormal detection [31], tumor segmentation [32,33], decision-making [34,35,36], community detection [37,38], and items of interest recommendation [39,40]. Considering the good performance of belief functions in application, evidence classification algorithms have been studied to improve the accuracy of target recognition. Among these methods, the evidence K-nearest neighbor (EK-NN) is the most representative [25,41]. Subsequently, in order to adapt to different application scenarios, some modifications of the EK-NN have been studied. For instance, many optimization methods based on the EK-NN were proposed in [42,43,44,45]. Generally, these improved algorithms can have good performances in some domains, such as machine diagnosis [46], process control [47], remote sensing [48], medical image processing [49], and bioinformatics [50,51], among others.
Compared with traditional classification methods, a deep neural network model has strong learning ability and the advantage of being continuously updatable as samples increase [52,53,54,55,56]. Therefore, radar plot recognition algorithms based on neural networks have been studied. In reference [53], a fully connected neural network (FNN) is used to classify radar clutter and real targets. The authors designed a five-layer network with 8, 64, 128, 32, and 2 nodes per layer. With 6276 training samples and 2000 testing samples, the classification accuracy of the FNN reaches approximately 0.83 to 0.88. In reference [54], a multi-layer perceptron optimized by particle swarm optimization (PSO-MLP) was used for radar plot recognition, achieving a good recognition accuracy of 0.857. In reference [55], a convolutional neural network (CNN) was compared with the fully connected neural network (FNN) and the support vector machine (SVM). The experimental results show that when the training and testing samples are sufficient, the recognition accuracy of the CNN can reach 0.943. In reference [56], the same authors used a recurrent neural network (RNN) instead of the CNN to further improve classification accuracy. When the number of training radar plots exceeds 10,000, the recognition accuracy can reach 0.991, which is impressive.
Therefore, in order to effectively characterize the uncertain data and also improve the recognition accuracy, a radar plot recognition algorithm based on adaptive evidence classification (RPREC) is proposed in this paper. In the RPREC, a confidence recognition framework is first created that includes target, clutter, and uncertainty, and an updatable classifier based on deep neural networks is also designed. Then, based on the network classification model obtained in each round, the category of all radar plots can be confirmed. If the network classification model does not have samples for training, the class of each radar plot will be randomly initially given. Finally, the class of each radar plot is updated through the fusion of belief functions, and the plots after updating the category label can be used for classifier training and parameter optimization. The optimized classifier can also be reused to obtain the class of each plot. This cycle continues until the category labels of each plot are no longer updated or iterated to a certain number. The performance of the RPREC is verified through some experiments based on the real radar plot dataset. The results show that the RPREC can effectively handle clutter and uncertain data compared to other typical algorithms. So, it can improve the recognition accuracy of radar plots. In addition, the RPREC has less dependence on training samples, making it easy to apply to other scenarios.
The rest of this paper is organized as follows. In Section 2, we will recall the belief functions and the evidence K-nearest neighbor classification, respectively. In Section 3, we will introduce the proposed radar plot recognition algorithm based on adaptive evidence classification. Finally, experiments will be presented in Section 4, and the paper will be concluded in Section 5.

2. Related Work

Firstly, belief function theory is introduced in Section 2.1. Then, the evidential K-nearest neighbor classification is reviewed in Section 2.2.

2.1. Belief Functions

The belief functions can be seen as a generalization of probability theory. They have been proven to be an effective theoretical framework, especially in the processing of uncertain and imprecise information.
In belief functions, the set $\Omega = \{\omega_1, \ldots, \omega_c\}$ is called the frame of discernment, which can be extended to the power set $2^\Omega$. For example, if $\Omega = \{\omega_1, \omega_2, \omega_3\}$, then $2^\Omega = \{\emptyset, \{\omega_1\}, \{\omega_2\}, \{\omega_3\}, \{\omega_1, \omega_2\}, \{\omega_1, \omega_3\}, \{\omega_2, \omega_3\}, \Omega\}$. The mass function $m$ is used to express the belief in the different elements of $2^\Omega$; it is a mapping from $2^\Omega$ to the interval [0, 1] defined by:
$$m(\emptyset) = 0, \qquad \sum_{A \in 2^\Omega,\, A \neq \emptyset} m(A) = 1$$
Subsets $A \in 2^\Omega$ satisfying $m(A) > 0$ are called focal sets. Each number $m(A)$ is interpreted as the probability that the evidence supports exactly the assertion $A$. In particular, $m(\Omega)$ is the probability that the evidence tells us nothing about $\omega$, i.e., the probability of knowing nothing. For any subset $A \in 2^\Omega$, the probability that the evidence supports $A$ and the probability that the evidence does not contradict $A$ can be defined as:
$$Bel(A) = \sum_{B \subseteq A} m(B)$$
$$Pl(A) = \sum_{B \cap A \neq \emptyset} m(B).$$
Functions Bel and Pl are called, respectively, the belief function and the plausibility function associated with m. They can be regarded as providing lower and upper bounds for the degree of belief that can be attached to each subset of Ω.
In the Dempster–Shafer (D-S) theory, independent evidence can be combined with each other to ultimately form the belief that supports decision-making. Assume that on the same frame of discernment, there are two pieces of evidence represented by m1 and m2, respectively. Then, the evidence combination based on the D-S rule is defined as:
$$(m_1 \oplus m_2)(A) = \frac{\sum_{B \cap C = A} m_1(B)\, m_2(C)}{1 - k}$$
And:
$$k = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)$$
Here, $k$ represents the degree of conflict between the two pieces of evidence $m_1$ and $m_2$. If $k = 0$, the two pieces of evidence are completely consistent and there is no conflict between them. If $k = 1$, they are completely contradictory and cannot be combined.
For example, let us consider Ω = { ω 1 , ω 2 } and the two pieces of evidence providing the following mass function:
$$m_1(\{\omega_1\}) = 0.8, \quad m_1(\{\omega_2\}) = 0.2, \qquad m_2(\{\omega_1\}) = 0.7, \quad m_2(\{\omega_2\}) = 0.3$$
Using the D-S rule to combine these two pieces of evidence, one obtains:
$$m_{DS}(\{\omega_1\}) = m_1(\{\omega_1\})\, m_2(\{\omega_1\}) = 0.56, \qquad m_{DS}(\{\omega_2\}) = m_1(\{\omega_2\})\, m_2(\{\omega_2\}) = 0.06,$$
$$m_{DS}(\{\omega_1, \omega_2\}) = m_1(\{\omega_1\})\, m_2(\{\omega_2\}) + m_1(\{\omega_2\})\, m_2(\{\omega_1\}) = 0.38$$
Here, $k = m_{DS}(\{\omega_1, \omega_2\}) = 0.38$ is the degree of conflict between $m_1$ and $m_2$; in this example, the conflicting mass is assigned to the whole frame rather than renormalized.
Now consider two pieces of evidence providing the following mass functions:
$$m_1(\{\omega_1\}) = 1, \quad m_1(\{\omega_2\}) = 0, \qquad m_2(\{\omega_1\}) = 0, \quad m_2(\{\omega_2\}) = 1$$
Combining these two pieces of evidence with the D-S rule, one obtains:
$$m_{DS}(\{\omega_1\}) = 0, \quad m_{DS}(\{\omega_2\}) = 0, \quad m_{DS}(\{\omega_1, \omega_2\}) = 1$$
Then, $k = m_{DS}(\{\omega_1, \omega_2\}) = 1$. The fusion result is meaningless because the two sources of information fully contradict each other. Therefore, when the degree of conflict is very high, the two pieces of evidence cannot be combined.
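To make the combination mechanics concrete, here is a minimal Python sketch of pairwise evidence combination over the frame $\Omega = \{\omega_1, \omega_2\}$; following the worked example above, the conflicting mass is kept on the whole frame rather than renormalized, and all names are our own:

```python
OMEGA = frozenset({"w1", "w2"})

def conflict(m1, m2):
    # degree of conflict k: total mass on pairs of focal sets with empty intersection
    return sum(v1 * v2
               for A, v1 in m1.items()
               for B, v2 in m2.items()
               if not (A & B))

def combine(m1, m2, frame=OMEGA):
    # Pairwise products of focal-set masses; as in the worked example,
    # the conflicting mass stays on the whole frame (not renormalized).
    out = {}
    for A, v1 in m1.items():
        for B, v2 in m2.items():
            key = (A & B) or frame   # empty intersection -> whole frame
            out[key] = out.get(key, 0.0) + v1 * v2
    return out

# first worked example from the text
m1 = {frozenset({"w1"}): 0.8, frozenset({"w2"}): 0.2}
m2 = {frozenset({"w1"}): 0.7, frozenset({"w2"}): 0.3}
k = conflict(m1, m2)        # 0.38
m = combine(m1, m2)         # {w1}: 0.56, {w2}: 0.06, Omega: 0.38
```

The second worked example corresponds to `conflict` returning 1, in which case combination is meaningless.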

2.2. Evidence Classification

In the design of evidence classification algorithms based on belief function frameworks, the evidential K-nearest neighbor classification (EK-NN) plays a significant role. In the EK-NN, each neighbor of the sample is considered as evidence, which is used to provide decision support for the class label of the sample. The final decision is presented by combining these K-nearest neighbor pieces of evidence.
Consider a data classification problem in which an object $o$ is to be assigned to one of the classes in $\Omega = \{\omega_1, \ldots, \omega_c\}$. If the nearest neighbor $o_j$, at feature distance $d_j$ from the object $o$, belongs to class $\omega_{k(j)}$, then the evidence provided by this neighbor can be represented by the following formula:
$$m_j(\{\omega_{k(j)}\}) = \varphi(d_j), \qquad m_j(\Omega) = 1 - \varphi(d_j)$$
where $\varphi$ is a non-increasing mapping from [0, +∞) to [0, 1]; it was proposed to choose $\varphi$ as:
$$\varphi(d_j) = \alpha_0 \exp(-\gamma_q d_j)$$
The set of these nearest neighbors is denoted by $N_K$; the combined mass function over the nearest neighbors is therefore:
$$m = \bigoplus_{j \in N_K} m_j.$$
At this point, the final decision can be made: the object is assigned to the class $\omega_q$ with the highest confidence level. The EK-NN provides a good basis for data classification with uncertain information.
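A compact Python sketch of the EK-NN decision procedure just described; the $\alpha_0$ and $\gamma$ values and the toy neighbor set are illustrative, not from the paper:

```python
import math
from functools import reduce

def dempster(m1, m2):
    # Dempster's rule with normalization of the conflicting mass.
    out, k = {}, 0.0
    for A, v1 in m1.items():
        for B, v2 in m2.items():
            inter = A & B
            if inter:
                out[inter] = out.get(inter, 0.0) + v1 * v2
            else:
                k += v1 * v2
    return {A: v / (1.0 - k) for A, v in out.items()}

def eknn(neighbors, classes, alpha0=0.95, gamma=1.0):
    # neighbors: list of (d_j, label). Each neighbor yields the simple mass
    # m_j({label}) = phi(d_j), m_j(Omega) = 1 - phi(d_j); the masses are
    # combined and the singleton with the largest mass is chosen.
    frame = frozenset(classes)
    masses = []
    for d, lbl in neighbors:
        phi = alpha0 * math.exp(-gamma * d)
        masses.append({frozenset({lbl}): phi, frame: 1.0 - phi})
    m = reduce(dempster, masses)
    return max((A for A in m if len(A) == 1), key=lambda A: m[A])

pred = eknn([(0.2, "target"), (0.3, "target"), (0.9, "clutter")],
            ["target", "clutter"])
```

With two close "target" neighbors and one distant "clutter" neighbor, the combined evidence favors the target class.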
In order to present the characteristics of evidence classification more intuitively and clearly, the flowchart of evidence classification is shown below in Figure 1:

3. Proposed Method

In this section, the radar plot recognition algorithm based on adaptive evidence classification (RPREC) is presented in detail. The RPREC mainly includes three parts. In Section 3.1, the confidence recognition framework for target, clutter, and uncertainty is constructed, and a deep neural network classification model is designed. In Section 3.2, the mass function of each radar plot is constructed, and the evidence is corrected and fused. In Section 3.3, the category labels of the target data are iteratively updated based on real-time optimization of the classifier.

3.1. Design of a Neural Network Classifier

In the RPREC, the recognition framework $\Omega = \{C_o, C_n, \Theta\}$ for target, clutter, and uncertainty plots is first constructed, which differs from the binary target-or-clutter recognition rule of traditional methods. Here, $C_o$ represents the category of real target plots, $C_n$ represents the category of clutter, and $\Theta$ represents the category of uncertainty. The mathematical relationships between them are: $C_o \cap C_n = \emptyset$, $C_o \cap \Theta = C_o$, and $C_n \cap \Theta = C_n$.
As shown in Figure 2, a fully connected neural network has been designed, which can continuously optimize as the confidence function of the plots’ category is iteratively updated. The specific network parameters are set as follows:
Input layer: The network input is the collected radar plot data, including target, clutter, and uncertainty. Before being input into the network classifier for category recognition, all plots are modeled using the feature vector recommended in [53]. Its mathematical representation is $F = [R, E, R_w, A_w, M_a, A_a]$. Here, $R$ is the distance from the target to the radar, $E$ the altitude information, $R_w$ the extent of the target in range, $A_w$ the extent of the target in azimuth, $M_a$ the maximum amplitude of all echoes participating in condensation, and $A_a$ the average amplitude of all echoes participating in condensation.
Therefore, the number of nodes in this network input layer is set to 6, which is consistent with the input feature dimension.
Output layer: The network output represents the category membership of plots belonging to $C_o$, $C_n$, and $\Theta$ under the confidence recognition framework $\Omega$, i.e., the confidence that a radar plot belongs to target, clutter, and uncertainty, respectively. Its mathematical representation is $\mu = [\mu_{C_o}, \mu_{C_n}, \mu_{\Theta}]$; therefore, the number of output nodes is set to 3.
Hidden layer: The design of the hidden layers is usually related to the size and distribution characteristics of the dataset. After some preliminary experiments, setting 30 hidden layers with 50 nodes per layer and using the sigmoid function for nonlinear processing gave the best results.
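The input/output layout above can be sketched as a small NumPy forward pass; for brevity the sketch uses 3 hidden layers rather than the 30 reported, with random untrained weights, and a softmax output stands in for the membership normalization:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_fnn(n_in=6, n_hidden=50, depth=3, n_out=3):
    # Random, untrained weights; the paper reports 30 hidden layers of
    # 50 nodes each -- depth=3 here only keeps the sketch small.
    sizes = [n_in] + [n_hidden] * depth + [n_out]
    return [(rng.normal(0.0, 0.3, (a, b)), np.zeros(b))
            for a, b in zip(sizes, sizes[1:])]

def forward(layers, f):
    # f = [R, E, Rw, Aw, Ma, Aa]  ->  mu = [mu_Co, mu_Cn, mu_Theta]
    h = np.asarray(f, dtype=float)
    for W, b in layers[:-1]:
        h = sigmoid(h @ W + b)          # sigmoid hidden layers
    W, b = layers[-1]
    z = h @ W + b
    e = np.exp(z - z.max())             # softmax so memberships sum to 1
    return e / e.sum()
```

A trained classifier would replace the random weights; the output vector always sums to 1, matching the membership constraint of Section 3.2.1.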

3.2. Construction and Correction Fusion of Belief Functions

The belief function is often referred to as the mass function. The construction and fusion of the mass function for the target plot mainly include three parts: initial classification of target data, construction of mass function, and correction and combination of evidence.

3.2.1. Initial Classification of Radar Plots

For the radar plot dataset $O = \{o_1, \ldots, o_n\}$, based on the deep network classifier constructed under the confidence recognition framework $\Omega = \{C_o, C_n, \Theta\}$, the category membership of target $o_i\ (i = 1, \ldots, n)$ can be obtained as $\mu_i = (\mu_{i1}, \ldots, \mu_{ic})$, which satisfies:
$$\mu_{ij} \in [0, 1], \qquad \sum_{j=1}^{c} \mu_{ij} = 1$$
Due to the definition of the framework $\Omega$, the number of categories $c$ is 3, representing target, clutter, and uncertainty, respectively. The numbers of radar plots belonging to classes $C_o$, $C_n$, and $\Theta$ are counted separately as $N_1$, $N_2$, and $N_3$. The total belief of each category is defined as:
$$Bel(C_o) = \sum_{i=1}^{N_1} \mu_i(C_o), \qquad Bel(C_n) = \sum_{i=1}^{N_2} \mu_i(C_n), \qquad Bel(\Theta) = \sum_{i=1}^{N_3} \mu_i(\Theta)$$
Here, the total belief can be seen as a description of the confidence density of radar plots in various categories. The sample data around the target can be used as auxiliary evidence for category decision-making, among which samples belonging to the same category can be used for confidence enhancement, while heterogeneous samples can be used for confidence correction. Therefore, a certain number of nearest neighbor samples can be selected from the three types of target plots formed by the above initial classification to construct decision evidence.
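The total-belief bookkeeping of this initial classification step can be sketched as follows (the array layout and names are our own):

```python
import numpy as np

def total_belief(mu, labels):
    # mu: (n, 3) membership rows [mu_Co, mu_Cn, mu_Theta]; labels: index
    # (0=Co, 1=Cn, 2=Theta) each plot received in the initial classification.
    # Bel(C) sums the membership in C over the plots currently labelled C.
    mu, labels = np.asarray(mu, dtype=float), np.asarray(labels)
    return {name: float(mu[labels == k, k].sum())
            for k, name in enumerate(("Co", "Cn", "Theta"))}
```

For instance, three plots with memberships [0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.6, 0.3, 0.1] and initial labels (Co, Cn, Co) give Bel(Co) = 1.3, Bel(Cn) = 0.8, Bel(Θ) = 0.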

3.2.2. Construction of Mass Function

Samples are selected separately from the sample sets of categories $C_o$, $C_n$, and $\Theta$, with the specific numbers proportional to the total belief, as shown in the following formula:
$$K_i = \frac{Bel(i)}{\sum_{i \in \{C_o, C_n, \Theta\}} Bel(i)} \times K$$
where the value of $K$ is related to the size and distribution of the radar plots and needs to be determined experimentally for different application backgrounds. After obtaining the nearest-neighbor samples of the target plot, the decision evidence set $\Phi_K(i)$ of target $o_i$ can be constructed. The basic confidence assignment function for each sample in this set is defined as:
$$m_i(C_j) = \alpha_{ij}, \qquad m_i(\Theta) = 1 - \alpha_{ij}, \qquad j = 1, \ldots, K$$
With:
$$\alpha_{ij} = \begin{cases} \lambda e^{-M_{ij}}, & C_j \in \{C_o, C_n\} \\ 0, & C_j = \Theta \end{cases}$$
where $M_{ij}$ is the confidence similarity between the target data and the decision sample, defined as:
$$M_{ij} = \sum_{C_i, C_j \in \{C_o, C_n, \Theta\}} \left( Bel_i(C_i) - Bel_j(C_j) \right)^2$$
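A hedged sketch of the neighbor allocation of Eq. (11) and the mass construction above; the largest-remainder rounding of $K_i$ and the value of $\lambda$ are our own assumptions, since the paper does not specify them:

```python
import math

def neighbor_counts(bel, K):
    # Eq. (11): allocate the K decision neighbors across categories in
    # proportion to total belief. Largest-remainder rounding (our choice)
    # guarantees the counts sum to K.
    total = sum(bel.values())
    raw = {c: bel[c] * K / total for c in bel}
    counts = {c: int(r) for c, r in raw.items()}
    for c in sorted(raw, key=lambda c: raw[c] - counts[c],
                    reverse=True)[: K - sum(counts.values())]:
        counts[c] += 1
    return counts

def neighbor_mass(M_ij, cls, lam=0.9):
    # Simple support mass for one decision sample; lambda is a placeholder.
    if cls == "Theta":
        return {"Theta": 1.0}          # alpha_ij = 0 when C_j = Theta
    a = lam * math.exp(-M_ij)
    return {cls: a, "Theta": 1.0 - a}
```

For example, with Bel = (3, 5, 2) over (Co, Cn, Θ) and K = 7, the allocation is 2, 4, and 1 neighbors, respectively.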

3.2.3. Correction and Combination of Evidence

This section mainly focuses on revising and combining decision evidence. Firstly, the evidence belonging to the same category is combined, and then the combination results belonging to different categories are fused.
1.
Combining evidence with the same category
Firstly, combine the evidence with the same category assignment in the decision evidence set $\Phi_K(i)$; that is, combine the evidence within the decision evidence subset $T_q$ of each category:
$$m_t^{T_q}(C_q) = 1 - \prod_{j \in T_q} m_{i,j}(\Theta), \qquad m_t^{T_q}(\Theta) = \prod_{j \in T_q} m_{i,j}(\Theta)$$
Assuming that E r is the error rate of the radar plot, the setting of the evidence discount factor is defined as:
$$W_{C_q} = (1 - E_r) \times \frac{Bel(C_q)}{\sum_{C_q \in \{C_o, C_n, \Theta\}} Bel(C_q)}$$
Based on the correction factor, the update of evidence can be achieved as follows:
$$dm_t^{T_q}(C_q) = W_{C_q}\Big(1 - \prod_{j \in T_q} m_{t,j}(\Theta)\Big), \qquad dm_t^{T_q}(\Theta) = 1 - W_{C_q}\Big(1 - \prod_{j \in T_q} m_{t,j}(\Theta)\Big)$$
2.
Combining the results under different categories
There may be certain confidence conflicts between different categories of decision evidence. The definition of evidence conflict between different categories is as follows:
$$T_c = \prod_{q=1}^{c} dm_t^{T_q}(C_q)$$
Therefore, for the initial fusion evidence d m t T q of different categories T q , a combination based on conflict resolution rules can be obtained as follows:
$$m_t(C_q) = \frac{dm_t^{T_q}(C_q) \prod_{h \in \{1,\ldots,c\},\, h \neq q} dm_t^{T_h}(\Theta)}{1 - T_c}, \qquad m_t(\Theta) = 1 - \sum_{q=1}^{c} \frac{dm_t^{T_q}(C_q) \prod_{h \in \{1,\ldots,c\},\, h \neq q} dm_t^{T_h}(\Theta)}{1 - T_c}$$
Here, $m_t$ is the global mass function obtained from the fusion of the decision evidence set $\Phi_K(i)$. Then, the belief $Bel$ and the plausibility $Pl$ of target $o_i$ belonging to each pattern category $C_i$ can be calculated:
$$Bel_i(C_i) = \sum_{C_q \in \{C_o, C_n, \Theta\},\, C_q \subseteq C_i} m_t(C_q)$$
$$Pl_i(C_i) = \sum_{C_q \in \{C_o, C_n, \Theta\},\, C_q \cap C_i \neq \emptyset} m_t(C_q)$$
The criteria for updating the category labels of radar plots are:
Criterion 1: Update categories with maximum confidence;
Criterion 2: The minimum confidence difference between the new category and other categories must be greater than the threshold T1, which means that there must be sufficient confidence differences between different categories;
Criterion 3: The difference between the probability and confidence of the updated category must be less than the threshold T2, which means that the uncertainty of the category cannot be too high.
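The three criteria can be read as a single decision rule; a sketch follows, in which the threshold values are placeholders:

```python
def update_label(bel, pl, current, T1=0.1, T2=0.3):
    # bel/pl: Bel and Pl values per category {'Co', 'Cn', 'Theta'};
    # T1, T2 are placeholder thresholds, not values from the paper.
    best = max(bel, key=bel.get)                              # Criterion 1
    margin = min(bel[best] - bel[c] for c in bel if c != best)
    if margin <= T1:                                          # Criterion 2
        return current
    if pl[best] - bel[best] >= T2:                            # Criterion 3
        return current
    return best
```

If either criterion fails, the plot simply keeps its current label until the next iteration.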
For example, let us assume that there are four pieces of evidence in the decision evidence set $\Phi_K(i)$:
$$m_1(C_o) = 0.8, \quad m_1(\Theta) = 0.2; \qquad m_2(C_o) = 0.7, \quad m_2(\Theta) = 0.3;$$
$$m_3(C_n) = 0.8, \quad m_3(\Theta) = 0.2; \qquad m_4(C_n) = 0.6, \quad m_4(\Theta) = 0.4$$
Firstly, combine the evidence with the same category. That is, evidence 1 and 2 need to be combined, and evidence 3 and 4 need to be combined. One obtains:
$$m_{1,2}(C_o) = 0.94, \quad m_{1,2}(\Theta) = 0.06; \qquad m_{3,4}(C_n) = 0.92, \quad m_{3,4}(\Theta) = 0.08$$
Assuming that the error rate of the radar plot is 0.1, the setting of the evidence discount factor is:
$$W_{C_o} = (1 - 0.1) \times \frac{0.94}{0.94 + 0.92} = 0.455, \qquad W_{C_n} = (1 - 0.1) \times \frac{0.92}{0.94 + 0.92} = 0.445$$
Based on the correction factor, the update of evidence can be achieved as follows:
$$dm_{1,2}(C_o) = 0.428, \quad dm_{1,2}(\Theta) = 0.572; \qquad dm_{3,4}(C_n) = 0.409, \quad dm_{3,4}(\Theta) = 0.591$$
Then, we combine the results under different categories, and obtain:
$$T_c = 0.428 \times 0.409 = 0.175, \qquad m_t(C_o) = 0.308, \quad m_t(C_n) = 0.284, \quad m_t(\Theta) = 0.408$$
Here, $m_t$ is the global mass function obtained from the fusion of the decision evidence. In this example, uncertainty has the highest confidence because there is some conflict between the pieces of evidence: evidence 1 and 2 support the plot being a target, while evidence 3 and 4 support it being clutter. However, as the number of pieces of decision evidence increases, this problem is effectively resolved.
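The worked example can be reproduced end to end; the sketch below follows the same-category combination, discounting, and cross-category fusion formulas of Section 3.2.3, and its outputs match the values above to within the text's intermediate rounding:

```python
import math

def combine_same(theta_masses):
    # Same-category combination: the Theta masses within one subset T_q
    # multiply, and the remaining mass goes to C_q.
    prod = math.prod(theta_masses)
    return 1.0 - prod, prod

def discount(m_cq, w):
    # Discounting: scale the category mass by W_Cq; the remainder moves to Theta.
    return w * m_cq, 1.0 - w * m_cq

# evidence 1-2 support Co, evidence 3-4 support Cn (values from the text)
m12_Co, m12_Th = combine_same([0.2, 0.3])   # 0.94, 0.06
m34_Cn, m34_Th = combine_same([0.2, 0.4])   # 0.92, 0.08

Er = 0.1                                    # assumed plot error rate
W_Co = (1 - Er) * m12_Co / (m12_Co + m34_Cn)
W_Cn = (1 - Er) * m34_Cn / (m12_Co + m34_Cn)
dm12_Co, dm12_Th = discount(m12_Co, W_Co)   # ~0.428, ~0.572
dm34_Cn, dm34_Th = discount(m34_Cn, W_Cn)   # ~0.409, ~0.591

# cross-category fusion with conflict T_c
Tc = dm12_Co * dm34_Cn                      # ~0.175
mt_Co = dm12_Co * dm34_Th / (1 - Tc)        # ~0.306
mt_Cn = dm34_Cn * dm12_Th / (1 - Tc)        # ~0.284
mt_Th = 1 - mt_Co - mt_Cn                   # ~0.410
```

Computed without intermediate rounding, the fused masses come out as roughly 0.306, 0.284, and 0.410, agreeing with the text's rounded 0.308, 0.284, and 0.408.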
The implementation process of constructing a confidence function is shown in Figure 3.

3.3. Iterative Update of Target Category Confidence

Based on the network classification model of each round, the category membership of the target can be obtained. If the network classification model is not trained, the category membership of the target will be randomly initially given. Then, the target category is updated through the correction fusion of the belief function, and the sample data after updating the category label can be used for classifier training and parameter optimization. The optimized classifier can also be reused to obtain the category membership of the target. This cycle continues until the category labels of each point in the target dataset are no longer updated or iterated a certain number of times. The value of iteration times is usually related to the timeliness of engineering applications and can be reasonably configured according to specific scenarios. The specific implementation of the update strategy is as follows:
Step 1: The global mass function $m_t$ of each test sample $O_i$ can be obtained through confidence evidence fusion based on the current deep network model classifier. The target dataset can then be divided into two subsets, $\Phi_1$ and $\Phi_2$, based on $m_t$, defined as follows:
$$\Phi_1 = \{O_i : \max_{C_q \in \{C_o, C_n, \Theta\}} m_t(C_q) \neq m_t(\Theta)\}$$
$$\Phi_2 = \{O_j : \max_{C_q \in \{C_o, C_n, \Theta\}} m_t(C_q) = m_t(\Theta)\}$$
Here, the category of a sample in subset $\Phi_1$ is target or clutter, while the category of a sample in subset $\Phi_2$ is uncertain.
Step 2: In each round of iterative updates, the samples in subset $\Phi_1$, after being classified by the deep learning network model, can be temporarily used as training data $\{(O_i, Y_i), i = 1, \ldots, N_{\Phi_1}\}$, where $N_{\Phi_1}$ is the total number of samples in $\Phi_1$ and $Y_i$ is the category label of target $O_i$. Then, by combining the mass functions of the samples, the center of each pattern category can be obtained as follows:
$$m_{C_q} = \frac{1}{N_{\Phi_1}} \sum_{C_q \in \{C_o, C_n\},\, O_i \in \Phi_1} m_t(O_i)$$
Step 3: For the sample O j in subset Φ 2 , the confidence similarity d between it and the center of categories C o and C n is calculated separately.
$$d(O_j, C_q) = \left\| m_t(O_j) - m_{C_q} \right\|, \qquad C_q \in \{C_o, C_n\}$$
According to the value of confidence similarity, the sample data are sequentially assigned to the pattern categories with the closest confidence.
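Steps 1-3 can be sketched as one NumPy routine (the row layout [m(Co), m(Cn), m(Θ)] is our own convention):

```python
import numpy as np

def update_labels(mt):
    # mt: (n, 3) rows of global masses [m(Co), m(Cn), m(Theta)].
    mt = np.asarray(mt, dtype=float)
    hard = mt.argmax(axis=1)
    phi1 = np.where(hard != 2)[0]          # Step 1: confident plots
    phi2 = np.where(hard == 2)[0]          #         uncertain plots
    # Step 2: per-class centers of the confident mass vectors
    centers = np.stack([mt[phi1[hard[phi1] == q]].mean(axis=0) for q in (0, 1)])
    # Step 3: uncertain plots join the class with the closest mass vector
    labels = hard.copy()
    for j in phi2:
        labels[j] = int(np.argmin(np.linalg.norm(centers - mt[j], axis=1)))
    return labels, phi1, phi2
```

The sketch assumes both confident classes are non-empty; a production version would guard against an empty class in $\Phi_1$.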
In RPREC, the iterative update process of target category confidence is shown in Figure 4.
Based on the above implementation process, it can be seen that easily classified plots can provide additional evidence to help classify uncertain plots, especially when clutter and target plots are highly similar in the feature space. This is the advantage of the iterative optimization classification strategy. We summarize the RPREC in Algorithm 1.
Algorithm 1. RPREC Algorithm.
Require: Radar dataset O = { o 1 , , o n } ; classification thresholds T1 and T2; deep learning classifier; number of decision evidence K; maximum number of iteration updates Th;
Initialization: Set the number of network layers and nodes in each layer. If there are samples with class labels, the classifier is initially trained based on recognition framework Ω = { C o , C n , Θ } .
1: s ← 0;
2: repeat
3: Calculate the category membership μ i = ( μ i 1 , , μ i c ) of each target O i ;
4: Calculate the total belief of each category: B e l ( C o ) , B e l ( C n ) , B e l ( Θ ) .
5: Calculate the number K i of samples that should be selected in each pattern category based on Equation (11), and build the decision evidence set Φ K ( i ) of each target O i ;
6: Calculate the confidence similarity M i j between the target data and the decision sample;
7: Build the basic confidence assignment function m i for each sample;
8: Combine the evidence in the decision evidence subset T q with the same category, and obtain fusion confidence assignment function m t T q ;
9: Calculate the evidence discount factor W C q , and obtain the corrected evidence d m t T q ( C q ) ;
10: Combine the correction evidence and obtain the global mass function m t ;
11: Calculate the confidence level B e l of target O i belonging to each pattern category;
12: Update category labels for each target data based on classification rules;
13: s ← s + 1;
14: until s = Th or the category labels of the targets are no longer updated;
15: return the category assignment relationship of each target { m ( O i , C q ) } i = 1 N
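Algorithm 1's outer loop can be summarized as a driver skeleton; `classify`, `build_and_fuse_evidence`, and `update_labels` are hypothetical callables standing in for the steps of Sections 3.1-3.3:

```python
def rprec(plots, classify, build_and_fuse_evidence, update_labels, max_iter=20):
    # High-level driver mirroring Algorithm 1; the three callables stand in
    # for the classifier (Sec. 3.1), evidence construction/fusion (Sec. 3.2),
    # and the label-update rule (Sec. 3.3).
    labels = None
    for _ in range(max_iter):                         # line 2: repeat
        mu = classify(plots, labels)                  # lines 3-4: memberships
        mt = build_and_fuse_evidence(plots, mu)       # lines 5-10: evidence fusion
        new_labels = update_labels(mt)                # lines 11-12: label update
        if new_labels == labels:                      # line 14: convergence
            break
        labels = new_labels
    return labels                                     # line 15

# tiny smoke run with stand-in callables
demo = rprec(
    plots=[None, None],
    classify=lambda plots, labels: [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],
    build_and_fuse_evidence=lambda plots, mu: mu,
    update_labels=lambda mt: [0, 1],
)
```

The loop terminates either at the iteration cap `Th` (here `max_iter`) or as soon as a full pass leaves every label unchanged.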

4. Experiments

In this section, some experimental results using real radar plots are presented to show the effectiveness of the RPREC. All algorithms were tested on the MATLAB platform; the software used in these experiments is MATLAB R2023a under Windows 11. We evaluate the recognition accuracy and the run time of the considered radar plot recognition algorithms. The computations were executed on a Microsoft Surface Book with an Intel(R) Core(TM) i9-12900HX CPU @ 2.5 GHz and 16 GB of memory. There are mainly two types of scenarios: one with high-density clutter and the other with low-density clutter. The datasets used were all collected from an X-type ATC radar. Each group of plot data was processed by a track processing program. According to the test environment and plots, both the possible false tracks and the real tracks were retained. Firstly, the plot data corresponding to real tracks are marked as targets. Then, the plot data corresponding to false tracks are marked as uncertain. Finally, the remaining plots are marked as clutter.
1. The dataset with high-clutter density
As shown in Table 1, the dataset with high-clutter density includes 1150 targets, 2350 clutters, and 325 uncertainties, and their specific distribution is shown in Figure 5. The distribution of data containing only targets and clutter is shown in Figure 6.
In Figure 6, it can be seen that the target plots are very clear and easy to identify when not affected by clutter. In Figure 5, some target plots are almost completely submerged by clutter, which poses a certain challenge to the performance of recognition algorithms.
2. The dataset with low-clutter density
As shown in Table 2, the dataset with low-clutter density includes 171 targets, 213 clutters, and 19 uncertainties, with the specific distribution shown in Figure 7.
As shown in Figure 7, under low-clutter density, the target plot is clearly visible, while uncertain data are mainly distributed near the intersection of different target plots. This dataset is mainly used to verify the performance of each algorithm when the sample size is small.
Specific experimental verification is carried out from four aspects. First, the performance of the RPREC is evaluated with respect to some classical radar plot recognition algorithms in Section 4.1. In Section 4.2, the correlation between recognition accuracy and iteration number is provided. In Section 4.3 and Section 4.4, the impact of algorithm parameters, such as classifier iteration updates and confidence threshold on algorithm performance, is analyzed.

4.1. Radar Plot Recognition

This experiment is based on the two radar-measured datasets, with high-density clutter and low-density clutter, introduced at the beginning of Section 4. In each scenario, half of the radar plot data were randomly selected as training samples, and the remainder were used as testing samples. In order to effectively verify the performance of the RPREC, we compared it with some classic radar plot recognition algorithms, including PSO-SVM [3], PSO-MLP [54], FNN [53], CNN [55], and RNN [56]. Here, the recognition accuracy and the CPU time are used as the main indicators of the performance of these algorithms.
The experimental results based on the high-density clutter dataset are shown in Table 3, and the experimental results based on the low-density clutter dataset are shown in Table 4. In these tables, ω1, ω2, and ω3 represent, respectively, target, clutter, and uncertainty.
The experimental results shown in Table 3 indicate that PSO-SVM, PSO-MLP, FNN, CNN, and RNN have some shortcomings in the representation of uncertain data. Therefore, these algorithms focus on the binary classification of targets and clutter, so the uncertain data represented by ω3 will be hard to classify into targets or clutter. This will affect their recognition accuracy. The recognition accuracy of PSO-SVM is about 0.83, and the CPU time is 1.64. The recognition accuracy of PSO-MLP and the FNN is similar, but the FNN takes more time. Compared to the FNN, the RNN and CNN have advantages in deep feature extraction of radar plots, resulting in a higher recognition accuracy of over 0.91. Of course, the CPU time has also doubled accordingly. The most significant feature of the RPREC proposed in this article is its ability to characterize and measure uncertain data. Therefore, it can be seen from the fifth row in Table 3 that uncertain plots have been effectively identified with a recognition accuracy of 0.945. In addition, the RPREC also maintains good performance in target and clutter recognition, with recognition accuracy rates of 0.921 and 0.932, respectively. However, the RPREC also has a significant limitation in that it has a large CPU time, which is 3 to 15 times that of other algorithms.
The results in Table 4 show that on the low-clutter dataset, which contains fewer radar plots, the recognition accuracy of all algorithms decreases except for PSO-SVM and the RPREC. PSO-SVM is an optimized variant of the SVM that is well suited to small samples and reaches an accuracy of 0.854, comparable to the RNN in this case. Comparing Table 3 and Table 4, the accuracy of PSO-MLP, FNN, CNN, and RNN drops by approximately 2 to 6 percentage points. The RPREC is not significantly affected because its classifier iteratively learns the inherent characteristics of the radar plots and keeps optimizing itself, at the cost of a CPU time several times that of the other algorithms.
Overall, PSO-MLP, FNN, CNN, and RNN perform well on the high-clutter-density radar plot dataset, while PSO-SVM shows its advantage on the low-clutter-density dataset. The RPREC maintains good accuracy on both datasets but has the highest CPU time. It is therefore the best choice for adapting to varied scenarios, but not when computational timeliness is the priority.

4.2. The Impact of Training Sample Sets

As is well known, the performance of recognition algorithms is closely related to the training samples, so this section mainly analyzes the impact of the training samples on each algorithm. The dataset used here is a mixture of the high-clutter-density and low-clutter-density samples. It contains a total of 4228 radar plots: 1321 target plots, 2653 clutter plots, and 344 uncertainty plots. Four scenarios are set, denoted S1 to S4. In each scenario, a certain number of samples are selected for classifier training, and the remaining samples are used for testing. The specific numbers of training and testing samples for S1 to S4 are listed in Table 5. Recognition accuracy and CPU time are again used as evaluation indicators, and the test results of each algorithm are summarized in Table 6.
Table 6 shows that PSO-MLP, FNN, CNN, and RNN are significantly affected by the number of training samples. In scenario S4, with sufficient samples, their recognition accuracy is about 6 to 9 percentage points higher than in S1, where samples are insufficient. Their CPU time also decreases, since the number of test samples shrinks from S1 to S4. The accuracy of PSO-SVM improves only marginally, by roughly 1 to 2 percentage points, as the training set grows from S1 to S4, confirming that PSO-SVM is a good choice for small-sample datasets. The accuracy of the RPREC is likewise not significantly affected by the sample count, but at the cost of considerable time spent optimizing the classifier: its CPU time is 18.31 s in S4 but 52.55 s in S1, almost three times higher.
In addition, the proposed algorithm maintains essentially the best recognition accuracy compared with the other algorithms, although it inevitably has the largest CPU time in every scenario. The reason is that the classifiers of the other algorithms are trained offline and applied directly during testing, whereas the RPREC keeps analyzing radar plots during the testing process to optimize its classifier. Specifically, the network classifier is first trained on a limited set of samples; self-learning is then achieved through iterative updates of the class confidence of the test data, gradually improving classification accuracy. This is also why fewer training samples mean a longer optimization time before the classifier performs well. How to match the timeliness of the other algorithms is therefore a focus of future research.
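The retrain-until-stable behaviour described above can be sketched as a self-training fixed-point loop. The `fit` and `predict_label` functions below are deliberately trivial stand-ins for the deep network classifier and the evidence-fusion decision, which this sketch does not reproduce:

```python
def rprec_style_loop(train_set, unlabeled, fit, predict_label, max_iter=50):
    """Schematic fixed-point version of the iterative update: train,
    relabel the test plots from the classifier's decisions, retrain,
    and stop when no label changes."""
    labels = [None] * len(unlabeled)
    for _ in range(max_iter):
        pseudo = [(x, y) for x, y in zip(unlabeled, labels) if y is not None]
        model = fit(train_set + pseudo)
        new_labels = [predict_label(model, x) for x in unlabeled]
        if new_labels == labels:   # labels stable: iteration has converged
            break
        labels = new_labels
    return labels

def fit(samples):
    """Toy stand-in for the network: per-class mean of a 1-D feature."""
    sums = {}
    for (x,), y in samples:
        s, n = sums.get(y, (0.0, 0))
        sums[y] = (s + x, n + 1)
    return {y: s / n for y, (s, n) in sums.items()}

def predict_label(means, point):
    """Toy stand-in for the evidential decision: nearest class mean."""
    (x,) = point
    return min(means, key=lambda y: abs(means[y] - x))

train = [((0.0,), "clutter"), ((1.0,), "target")]
labels = rprec_style_loop(train, [(0.1,), (0.9,)], fit, predict_label)
```

In the real algorithm the relabelling step operates on fused belief masses rather than hard labels, so convergence behaves differently; the sketch only illustrates the stop-when-labels-stable control flow that drives the CPU-time cost discussed above.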

4.3. The Impact of Iteration Times

In this section, the relationship between the recognition accuracy, CPU time, and number of classifier iterations of the RPREC is analyzed. We conducted three repeated experiments on the dataset described in Section 4.2, updating the classifier for 200, 500, 1000, and 1500 iterations in each experiment. The recognition accuracy as a function of the number of iterations is plotted in Figure 8, and the accuracy and CPU time of each experiment are recorded in Table 7.
It can be seen that once the recognition accuracy of the RPREC reaches a certain level, it no longer improves significantly with more iterations. As shown in Figure 8, the recognition accuracy for targets, clutter, and uncertainty never reaches 0.98; beyond about 1000 iterations it approaches its upper limit of 0.976.
Table 7 shows that CPU time is roughly proportional to the number of iterations in each experiment. Increasing the iterations from 200 to 500 improves recognition accuracy rapidly, from around 0.35 to around 0.86. From 500 to 1000 iterations, the improvement slows to a gain of about 6 to 9 percentage points. Beyond 1000 iterations, accuracy no longer changes significantly: in the three experiments it remains around 0.951, 0.935, and 0.961, respectively. This indicates that the classifier first improves as it learns the radar plots and then reaches a performance ceiling, after which further iterations only add CPU time. A reasonable parameter setting therefore has to balance accuracy and timeliness for the specific scenario, and adaptive parameter configuration for different radar plots remains a topic for future research.
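The plateau in Table 7 suggests a simple checkpoint-based stopping heuristic; the sketch below and its `min_gain` value are our illustration, not part of the RPREC:

```python
def last_useful_checkpoint(accuracies, min_gain=0.005):
    """Index of the last checkpoint whose accuracy gain over the
    previous checkpoint still exceeds `min_gain`; training beyond it
    mostly adds CPU time."""
    best = 0
    for i in range(1, len(accuracies)):
        if accuracies[i] - accuracies[i - 1] >= min_gain:
            best = i
    return best

# Checkpoint accuracies shaped like Exp. 1 in Table 7
# (200, 500, 1000, and 1500 iterations):
curve = [0.339, 0.857, 0.951, 0.952]
stop_at = last_useful_checkpoint(curve)  # the 1000-iteration checkpoint
```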

4.4. Confidence Threshold Parameter

In this section, the impact of the thresholds T1 and T2 on the recognition performance of the proposed algorithm is analyzed. Three evaluation indicators were selected: recognition accuracy, convergence time, and number of loop iterations, abbreviated in the tables as Ra, Ct, and Li, respectively. The reported values are averages over 1000 Monte Carlo simulations. The experimental results are shown in Table 8, Table 9, and Figure 9.
As shown in Table 8, changing the value of T1 does not significantly affect Ct or Li. As shown in Figure 9, the recognition accuracy first increases and then decreases as T1 grows. At T1 = 0.4, the accuracy reaches the maximum value measured in this experiment, 0.962. When T1 is small, the algorithm's ability to eliminate clutter weakens, increasing the number of false targets; a large T1 improves the suppression of clutter plots but also reduces the ability to recognize real targets that resemble clutter. According to the experimental statistics, T1 = 0.4 is therefore the best choice in this scenario.
The experimental results in Figure 9 also show that the recognition accuracy gradually decreases as T2 increases. This indicates that narrowing the uncertainty interval of a plot's class helps distinguish true targets from false ones, but it inevitably slows the convergence of the algorithm: Table 9 shows that as T2 decreases, both Ct and Li increase. If enough time is available, T2 = 0.1 is therefore the best choice for this scenario.
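One plausible reading of the two thresholds can be sketched as follows; this is an assumption for illustration, not the paper's exact decision rule:

```python
def decide(bel_target, pl_target, t1=0.4, t2=0.1):
    """Hypothetical threshold rule: a plot whose belief-plausibility
    interval is wider than T2 stays in the uncertainty class, and a
    plot whose target belief falls below T1 is rejected as clutter.
    (An illustrative assumption, not the RPREC's actual rule.)"""
    if pl_target - bel_target > t2:
        return "uncertain"   # T2 bounds the tolerated uncertainty interval
    if bel_target < t1:
        return "clutter"     # T1 controls how aggressively clutter is cut
    return "target"

# With the experimentally best values T1 = 0.4 and T2 = 0.1:
# decide(0.8, 0.85) -> "target", decide(0.2, 0.25) -> "clutter",
# decide(0.5, 0.9) -> "uncertain"
```

This reading is consistent with the observed trade-offs: raising `t1` rejects more clutter but also more clutter-like targets, while shrinking `t2` forces more fusion iterations before the interval is narrow enough to decide.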
Setting the thresholds T1 and T2 reasonably thus remains a challenge in the RPREC; effective, situation-dependent parameter adjustment deserves particular attention in future work.

5. Conclusions

In this paper, the RPREC was proposed to improve the recognition of radar plots with the help of a deep neural network classifier whose basic belief assignments are optimized iteratively. The RPREC first constructs a belief framework over the target, clutter, and uncertainty classes to represent radar plots effectively. A deep network classifier then produces the class confidence of each plot online, and decision evidence is used to correct and update the class labels. Finally, the updated labels drive the iterative optimization of the classifier, achieving accurate recognition of radar plots. The effectiveness of the proposed algorithm was verified on real radar plot datasets: its recognition accuracy reaches almost 93%, outperforming traditional recognition algorithms. Moreover, when training samples are scarce, the RPREC can still gradually improve its recognition ability by iteratively learning the inherent distribution characteristics of the data.
In the future, this work can be extended by equipping the RPREC with adaptive parameter configuration, so that the recognition algorithm applies more readily to different types of radar plot scenarios.

Author Contributions

Writing—original draft preparation, R.Y.; writing—review and editing, Y.Z.; data curation, Y.S.; validation, R.Y. and Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China, grant number 61804120, and in part by the Natural Science Basic Research Program of Shaanxi, grant number 2021JQ-515.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The processed data required to reproduce these findings cannot be shared, as the data also form part of an ongoing study.

Acknowledgments

The authors are grateful to the reviewers for all their remarks that helped us to clarify and improve the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, D. Research on Radar Plot Processing under Complex Conditions; Xidian University: Xi’an, China, 2020. [Google Scholar]
  2. Duan, C.D.; Han, C.L.; Yang, Z.W. Inshore ambiguity clutter suppression method aided by clutter classification. J. Xidian Univ. 2021, 48, 64–71. [Google Scholar]
  3. Peng, W.; Lin, Q. An Identification Method of True and False Plots Based on PSO-SVM Algorithm. Radar Sci. Technol. 2021, 49, 429–437. [Google Scholar]
  4. Luo, X.W.; Zhang, B.Y.; Liu, J. Researches on the Method of Clutter Suppression in Radar Data Processing. Syst. Eng. Electron. 2016, 38, 37–44. [Google Scholar]
  5. Lin, J.X.; Shen, X.Y.; Lou, Q.Z. An AdaBoost Based Method for Suppression of Radar Residual Clutter. Electron. Opt. Control. 2020, 27, 53–57. [Google Scholar]
  6. Hu, Q.Y.; Xie, J.W.; Liu, Z.Q. False-Targets Discrimination Method Based on Measurement Fusion. J. Nanjing Univ. Posts Telecommun. Nat. Sci. Ed. 2017, 37, 88–92. [Google Scholar]
  7. Zhang, X.; Yang, L.; He, W.-K.; Bi, F.-H. Wind farm clutter suppression for air surveillance radar based on a combined method of clutter map and K-SVD algorithm. IET Radar Sonar Navig. 2020, 14, 1354–1364. [Google Scholar]
  8. Lv, F.Q.; Tang, S.H.; He, G.H. Point cloud extraction and monomer of airborne LiDAR buildings based on DBSCAN algorithm. Sci. Technol. Eng. 2022, 22, 3446–3452. [Google Scholar]
  9. Zhong, J.X.; Jin, L.N. Robust and adaptive clustering for point cloud with millimeter wave radar. Sci. Technol. Eng. 2022, 22, 1936–1943. [Google Scholar]
  10. Dempster, A.P. Upper and lower probabilities induced by a multi-valued mapping. Ann. Math. Stat. 1967, 38, 325–339. [Google Scholar] [CrossRef]
  11. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976. [Google Scholar]
  12. Denoeux, T. 40 years of Dempster-Shafer theory. Int. J. Approx. Reason. 2016, 79, 1–6. [Google Scholar] [CrossRef]
  13. Denoeux, T.; Sriboonchitta, S.; Kanjanatarakul, O. Logistic regression, neural networks and Dempster-Shafer theory: A new perspective. Knowl.-Based Syst. 2019, 176, 54–67. [Google Scholar] [CrossRef]
  14. Denoeux, T.; Dubois, D.; Prade, H. Representations of uncertainty in artificial intelligence: Beyond probability and possibility. In A Guided Tour of Artificial Intelligence Research; Marquis, P., Papini, O., Prade, H., Eds.; Springer: Cham, Switzerland, 2020; Volume 1, pp. 119–150. [Google Scholar]
  15. Liu, S.T.; Li, X.J.; Zhou, Z.J. Review on the application of evidence theory in pattern classification. J. CAEIT 2022, 17, 247–258. [Google Scholar]
  16. Meng, J.T. Research and Application of Data Classification Based on the Theory of Belief Functions; University of Science and Technology Beijing: Beijing, China, 2021. [Google Scholar]
  17. Masson, M.H.; Denoeux, T. ECM: An evidential version of the fuzzy c-means algorithm. Pattern Recog. 2008, 41, 1384–1397. [Google Scholar] [CrossRef]
  18. Liu, Z.G.; Pan, Q.; Dezert, J.; Mercier, G. Credal c-means clustering method based on belief functions. Knowl.-Based Syst. 2015, 74, 119–132. [Google Scholar] [CrossRef]
  19. Denoeux, T.; Kanjanatarakul, O. Evidential clustering: A review. In Proceedings of the International Symposium on Integrated Uncertainty in Knowledge Modelling and Decision Making, Da Nang, Vietnam, 30 November–2 December 2016; pp. 24–35. [Google Scholar]
  20. Denoeux, T.; Sriboonchitta, S.; Kanjanatarakul, O. Evidential clustering of large dissimilarity data. Knowl.-Based Syst. 2016, 106, 179–195. [Google Scholar] [CrossRef]
  21. Jiao, L.; Denoeux, T.; Liu, Z.-G.; Pan, Q. EGMM: An evidential version of the gaussian mixture model for clustering. Appl. Soft Comput. 2022, 129, 109619. [Google Scholar] [CrossRef]
  22. Denoeux, T. NN-EVCLUS: Neural network-based evidential clustering. Inf. Sci. 2021, 572, 297–330. [Google Scholar] [CrossRef]
  23. Zhang, Z.; Liu, Z.; Martin, A.; Zhou, K. Dynamic evidential clustering algorithm. Knowl.-Based Syst. 2021, 213, 106643. [Google Scholar] [CrossRef]
  24. Liu, Z.G.; Qiu, G.H.; Mercier, G.; Pan, Q. A transfer classification method for heterogeneous data based on evidence theory. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 1–13. [Google Scholar] [CrossRef]
  25. Denoeux, T. A k-nearest neighbor classification rule based on Dempster-Shafer theory. IEEE Trans. Syst. Man Cybern. 1995, 25, 804–813. [Google Scholar] [CrossRef]
  26. Ma, Z.F.; Tian, H.P.; Liu, Z.C.; Zhang, Z.W. A new incomplete pattern belief classification method with multiple estimations based on KNN. Appl. Soft. Comput. 2020, 90, 106175. [Google Scholar] [CrossRef]
  27. Zhang, Z.; Tian, H.; Yan, L.; Martin, A.; Zhou, K. Learning a credal classifier with optimized and adaptive multiestimation for missing data imputation. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 4092–4104. [Google Scholar] [CrossRef]
  28. Han, X.X.; Wang, J.; Chen, Y. Safety assessment of water supply and drainage based on evidential reasoning rule. Sci. Technol. Eng. 2021, 21, 13758–13764. [Google Scholar]
  29. Liu, S.; Guo, X.J.; Zhang, L.Y. FMEA evaluation of all-electric ship propulsion based on fuzzy confidence theory. Control. Eng. China 2021, 28, 1807–1813. [Google Scholar]
  30. Xie, B.L. Research on Multi-Sensor Data Fusion Method Based on DS Evidence Theory; Henan University: Kaifeng, China, 2022. [Google Scholar]
  31. He, K.X.; Wang, T.; Su, Z.Y. Improved abnormal condition detection based on evidence K-nearest neighbor and its Application. Control. Eng. China 2022, 29, 655–660. [Google Scholar]
  32. Lian, C.; Ruan, S.; Denoeux, T.; Li, H.; Vera, P. Spatial evidential clustering with adaptive distance metric for tumor segmentation in FDG-PET images. IEEE Trans. Biomed. Eng. 2017, 65, 21–30. [Google Scholar] [CrossRef]
  33. Lian, C.; Ruan, S.; Denoeux, T.; Li, H.; Vera, P. Joint Tumor Segmentation in PET-CT Images Using Co-Clustering and Fusion Based on Belief Functions. IEEE Trans. Image Process. 2018, 28, 755–766. [Google Scholar] [CrossRef]
  34. Han, D.; Dezert, J.; Yang, Y. Belief Interval-Based Distance Measures in the Theory of Belief Functions. IEEE Trans. Syst. Man Cybern. Syst. 2016, 48, 833–850. [Google Scholar] [CrossRef]
  35. Liu, Z.; Zhang, X.; Niu, J.; Dezert, J. Combination of Classifiers with Different Frames of Discernment Based on Belief Functions. IEEE Trans. Fuzzy Syst. 2020, 29, 1764–1774. [Google Scholar] [CrossRef]
  36. Denoeux, T. Decision-Making with Belief Functions: A Review. Int. J. Approx. Reason. 2019, 109, 87–110. [Google Scholar] [CrossRef]
  37. Zhou, K.; Martin, A.; Pan, Q.; Liu, Z.-G. Median evidential c-means algorithm and its application to community detection. Knowl.-Based Syst. 2015, 74, 69–88. [Google Scholar] [CrossRef]
  38. Zhou, K.; Martin, A.; Pan, Q.; Liu, Z.G. SELP: Semi-supervised evidential label propagation algorithm for graph data clustering. Int. J. Approx. Reason. 2018, 92, 139–154. [Google Scholar] [CrossRef]
  39. Abdelkhalek, R.; Boukhris, I.; Elouedi, Z. An Evidential Collaborative Filtering Approach Based on Items Contents Clustering. In Proceedings of the International Conference on Belief Functions, Theory and Applications, Paris, France, 26–28 October 2022; pp. 1–9. [Google Scholar]
  40. Abdelkhalek, R.; Boukhris, I.; Elouedi, Z. An Evidential Clustering for Collaborative Filtering Based on Users’ Preferences. In Proceedings of the International Conference on Modeling Decisions for Artificial Intelligence, Milan, Italy, 4–6 September 2019; pp. 224–235. [Google Scholar]
  41. Zouhal, L.; Denoeux, T. An evidence-theoretic k-NN rule with parameter optimization. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 1998, 28, 263–271. [Google Scholar] [CrossRef]
  42. Liu, Z.-G.; Pan, Q.; Dezert, J. Evidential classifier for imprecise data based on belief functions. Knowl.-Based Syst. 2013, 52, 246–257. [Google Scholar] [CrossRef]
  43. Zhang, Y.; Hou, J.; Liu, Z.G. A new evidential K-nearest neighbor data classification method. Fire Control. Command. Control. 2013, 38, 58–60. [Google Scholar]
  44. Lian, C.; Ruan, S.; Denœux, T. An evidential classifier based on feature selection and two-step classification strategy. Pattern Recognit. 2015, 48, 2318–2327. [Google Scholar] [CrossRef]
  45. Pal, N.; Ghosh, S. Some classification algorithms integrating Dempster-Shafer theory of evidence with the rank nearest neighbor rules. IEEE Trans. Syst. Man Cybern. Part A 2001, 31, 59–66. [Google Scholar] [CrossRef]
  46. Yang, B.-S.; Kim, K.J. Application of Dempster–Shafer theory in fault diagnosis of induction motors using vibration and current signals. Mech. Syst. Signal Process. 2006, 20, 403–420. [Google Scholar] [CrossRef]
  47. Su, Z.-G.; Wang, P.-H. Improved adaptive evidential k-NN rule and its application for monitoring level of coal powder filling in ball mill. J. Process. Control. 2009, 19, 1751–1762. [Google Scholar] [CrossRef]
  48. Zhu, H.; Basir, O. An adaptive fuzzy evidential nearest neighbor formulation for classifying remote sensing images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1874–1889. [Google Scholar]
  49. Capelle, A.-S.; Colot, O.; Fernandez-Maloigne, C. Evidential segmentation scheme of multi-echo MR images for the detection of brain tumors using neighborhood information. Inf. Fusion 2004, 5, 203–216. [Google Scholar] [CrossRef]
  50. Shen, H.-B.; Chou, K.-C. Predicting protein subnuclear location with optimized evidence-theoretic K-nearest classifier and pseudo amino acid composition. Biochem. Biophys. Res. Commun. 2005, 337, 752–756. [Google Scholar] [CrossRef] [PubMed]
  51. Shen, H.; Chou, K.-C. Using optimized evidence-theoretic K-nearest neighbor classifier and pseudo-amino acid composition to predict membrane protein types. Biochem. Biophys. Res. Commun. 2005, 334, 288–292. [Google Scholar] [CrossRef] [PubMed]
  52. Shi, D.Y.; Lin, Q.; Hu, B. Radar clutter suppression method based on neural network optimized by Genetic Algorithm. Mod. Def. Technol. 2021, 49, 79–89. [Google Scholar]
  53. Qi, Y.; Yu, C.; Dai, X. Research on Radar Plot Classification Based on Fully Connected Neural Network. In Proceedings of the 2019 3rd International Conference on Electronic Information Technology and Computer Engineering, Xiamen, China, 18–20 October 2019; pp. 698–703. [Google Scholar]
  54. Peng, W.; Lin, Q. Research on Plot Authenticity Identification Method Based on PSO-MLP. J. Phys. Conf. Ser. 2020, 1486, 042003. [Google Scholar] [CrossRef]
  55. Liu, Z.; Qi, Y.; Dai, X. Radar Plot Classification Based on Machine Learning. In Proceedings of the 2021 5th International Conference on Electronic Information Technology and Computer Engineering, Xiamen, China, 22–24 October 2021; pp. 537–541. [Google Scholar]
  56. Liu, Z.; Qi, Y.; Dai, X. Radar Plot Classification Method Based on Recurrent Neural Network. In Proceedings of the 2020 4th International Conference on Electronic Information Technology and Computer Engineering, Xiamen, China, 6–8 November 2020; pp. 611–615. [Google Scholar]
Figure 1. The flowchart of evidence classification.
Figure 2. The fully connected network structure designed in the RPREC.
Figure 3. Implementation process of confidence function construction.
Figure 4. The flowchart for implementing category updates.
Figure 5. The distribution of the dataset with high-clutter density.
Figure 6. The distribution of target and uncertain data.
Figure 7. The distribution of the dataset with low-clutter density.
Figure 8. The specific relationship curve between recognition accuracy and iteration number. The red line represents the upper limit of recognition accuracy that the algorithm can approach.
Figure 9. The relationship curve between recognition accuracy and threshold parameters.
Table 1. The sample dataset with high-clutter density.
Dataset 1    Target    Clutter    Uncertain
Number       1150      2350       325
Table 2. The sample dataset with low-clutter density.
Dataset 2    Target    Clutter    Uncertain
Number       171       213        19
Table 3. The recognition accuracy and CPU time of six different algorithms on the high-density clutter dataset. Each cell gives recognition accuracy/CPU time (s).
The Radar Plots    PSO-SVM       PSO-MLP       FNN           CNN           RNN           RPREC
ω1                 0.821/0.52    0.853/0.73    0.861/1.21    0.911/2.71    0.923/3.55    0.921/9.55
ω2                 0.837/1.12    0.847/1.71    0.832/3.26    0.913/5.13    0.927/6.32    0.932/18.35
ω3                 /             /             /             /             /             0.945/3.37
All                0.829/1.64    0.850/2.44    0.847/4.47    0.902/7.84    0.925/9.87    0.932/31.27
Table 4. The recognition accuracy and CPU time of six different algorithms on the low-density clutter dataset. Each cell gives recognition accuracy/CPU time (s).
The Radar Plots    PSO-SVM       PSO-MLP       FNN           CNN           RNN           RPREC
ω1                 0.852/0.12    0.833/0.43    0.821/0.89    0.851/1.08    0.873/1.32    0.923/12.55
ω2                 0.856/0.31    0.842/0.98    0.812/1.09    0.863/1.53    0.857/2.21    0.936/15.39
ω3                 /             /             /             /             /             0.925/11.37
All                0.854/0.43    0.836/1.41    0.817/1.98    0.857/2.61    0.865/3.53    0.928/39.31
Table 5. The training and testing sample settings in different experimental scenarios. Each cell gives training samples/testing samples.
Experimental Scenario    Target      Clutter      Uncertain
S1                       200/1121    500/2036     100/244
S2                       300/1021    1000/1563    150/194
S3                       500/821     1500/1063    200/144
S4                       900/421     2000/563     250/94
Table 6. The recognition accuracy and CPU time of different algorithms. Each cell gives recognition accuracy/CPU time (s).
Scenario    PSO-SVM       PSO-MLP       FNN           CNN           RNN           RPREC
S1          0.852/2.12    0.833/3.43    0.821/4.89    0.841/7.08    0.843/8.32    0.923/52.55
S2          0.856/1.31    0.852/2.98    0.842/4.09    0.863/5.53    0.894/6.21    0.936/35.39
S3          0.873/1.24    0.881/2.44    0.877/3.47    0.903/4.03    0.925/5.92    0.931/29.27
S4          0.871/1.03    0.906/1.41    0.895/1.98    0.927/2.61    0.931/4.53    0.929/18.31
Table 7. The statistical results under different iterations for each experiment.
Experiment Number    Iterations    CPU Time (s)    Recognition Accuracy
Exp. 1               200           4.35            0.339
                     500           10.39           0.857
                     1000          25.83           0.951
                     1500          35.21           0.952
Exp. 2               200           3.13            0.323
                     500           12.27           0.876
                     1000          23.69           0.935
                     1500          35.78           0.934
Exp. 3               200           3.31            0.359
                     500           9.36            0.867
                     1000          22.57           0.961
                     1500          37.31           0.962
Table 8. The recognition results under different values of T1.
T1     Ra       Ct (s)    Li
0.1    0.942    8.73      1297
0.2    0.947    8.91      1284
0.3    0.959    9.15      1389
0.4    0.962    9.36      1458
0.5    0.938    8.62      1245
0.6    0.941    9.02      1231
0.7    0.931    9.51      1321
0.8    0.921    9.32      1297
0.9    0.913    8.79      1193
Table 9. The recognition results under different values of T2.
T2     Ra       Ct (s)    Li
0.1    0.959    16.92     2831
0.2    0.956    15.30     2627
0.3    0.935    14.97     2398
0.4    0.941    14.92     2291
0.5    0.937    9.36      1530
0.6    0.919    9.16      1439
0.7    0.922    9.28      1481
0.8    0.918    8.75      1321
0.9    0.912    8.12      1021
Citation: Yang, R.; Zhao, Y.; Shi, Y. RPREC: A Radar Plot Recognition Algorithm Based on Adaptive Evidence Classification. Appl. Sci. 2023, 13, 12511. https://doi.org/10.3390/app132212511