Multi-Source T-S Target Recognition via an Intuitionistic Fuzzy Method

To realize aerial target recognition in a complex environment, we propose a multi-source Takagi–Sugeno (T-S) intuitionistic fuzzy rules method (MTS-IFRM). In the proposed method, to improve the robustness of the training process of the model, the features of the aerial targets are classified and used as inputs to the corresponding T-S target recognition models. The intuitionistic fuzzy approach and the ridge regression method are used in the consequent identification, which constructs a regression model. To train the premise parameters and reduce the influence of data noise, a novel intuitionistic fuzzy C-regression clustering based on dynamic optimization is proposed. Moreover, a modified adaptive weight algorithm is presented to obtain the final outputs, which improves the classification accuracy of the corresponding model. Finally, the experimental results show that the proposed method can effectively recognize typical aerial targets in error-free and error-prone environments, and that its performance is better than that of other methods proposed for aerial target recognition.


Introduction
The complexity of the battlefield environment has been increased significantly by high-tech equipment, which has introduced great difficulties to the acquisition of target information. As the battlefield expands to the five-dimensional space of sea, land, air, sky, and the electromagnetic spectrum, the collection of target information is affected not only by the accuracy and stability of sensor equipment, the climate, and the complex electromagnetic environment, but also by other factors that lead to deviations or even errors in the collected target information. In addition, the enemy may intentionally deploy interference and decoy equipment, which increases the uncertainty of the observation of the target. Therefore, it is difficult for a single information source to obtain accurate and complete intelligence in such a complex environment, or to meet the requirements of actual aerial combat.
With the development of multi-source detection technology, a structure able to track multiple targets and realize target recognition is essential to a multi-sensor data fusion system. Information fusion can recognize a target from multiple dimensions and directions, and the data can then be comprehensively processed using the complementarity and redundancy of information, eliminating the influence of the inaccuracy and incompleteness of information obtained from a single source. Moreover, multi-feature fusion processing is designed to obtain more accurate target features by fusing data from two or more sensors, thus breaking the limits of single-sensor detection, in which the equipment generally collects the information of only one feature within its sensing range [1]. Target features obtained by different sensors are imprecise and conflicting under the influence of complex environments, interference signals, and so on; for example, impulsive noise may cause the collected data to deviate from the original range, leading to wrong conclusions in the target recognition system. Therefore, multi-feature fusion and improving the interpretability of target recognition are particularly important.
With regard to the framework of the BPA, several modeling approaches have been provided. Dempster's combination rule is then applied to transform the BPA into a probability distribution; the quality of the BPA in evidence theory determines whether the recognition result is reasonable. Yin et al. [26] proposed a measurement model to achieve uncertainty management of the BPA via the processing of negation and the links between uncertain data and entropy. Jiang et al. [27] constructed a correlation coefficient to describe the non-intersection and the distinctions between the focal elements. Wang et al. [28] proposed a belief divergence measurement that presented the correlation of various kinds of subsets with a belief function and an appropriate probability distribution. Kaur et al. [8] processed nonnegative and symmetric divergence measures for the BPA. Hu et al. [9] proposed the cross-information to change the comprehensive BPA. However, algorithms based on decision-level data fusion require extensive data preprocessing, and the decision-making methods lack a general structure after the characterized distributions of basic reliability are obtained.
When coping with highly conflicting evidence, D-S evidence theory may lead to counter-intuitive recognition results. Therefore, many methods have been proposed, including Yager's combination rules method [29], Murphy's arithmetically averaged model of bodies of evidence [30], Li's trust-based method [31], and so on. Target recognition methods based on fuzzy set theory need only a small amount of prior knowledge to achieve more efficient and accurate recognition. Wang [32] proposed the intuitionistic fuzzy dynamic Bayesian network to transform the outputs of intuitionistic fuzzy rules into probabilities. Jiang [7] established a hybrid decision-making fuzzy rough and hesitant sets model and developed a machine learning mechanism to construct the relative loss functions. Guo [33] proposed a recognition structure for UAVs based on a recurrent convolutional strategy, which influenced the degrees of super-resolution realization by setting the numbers of cycles and iterations with changes in the blur degree. Moreover, intuitionistic fuzzy sets (IFS) can overcome the inaccuracy and limitations of traditional fuzzy sets in representing specific information, and eliminate the bottleneck of excessive reliance on Bayesian models. Lei [34] proposed an intuitionistic fuzzy reasoning (IFR) framework to obtain the membership and non-membership degrees of the property variables of a recognition model. Dolgiy [35] combined the D-S method and the Takagi-Sugeno (T-S) fuzzy system to develop the empirical process of an expert system of probability estimates based on subjective preferences of the description of typical sensors. Therefore, a novel hybrid T-S and intuitionistic fuzzy inference system is applied to target recognition in our method.

Our Contributions
In this paper, a novel MTS-IFRM is proposed for high-performance multi-target recognition in error-free and error-prone environments. The main novelties of our method include:

•
Improving the robustness of the training process of the model: the features of the aerial targets are classified as inputs to the corresponding T-S target recognition model, so that the features are divided into multi-level features according to the target properties;

•
In the T-S model algorithm, identifying the premise and consequence parameters has been a key question. We apply an intuitionistic fuzzy C-means method based on the dynamic particle swarm optimization (DPSO) algorithm and the ridge regression model to identify the premise and consequence parameters of the T-S intuitionistic fuzzy model, respectively, which better realizes the parametric identification of the model;

•
High classification accuracy can be guaranteed in error-free and error-prone environments. The adaptive weight algorithm reduces the weight corresponding to a model with a low degree of discrimination and increases the weight corresponding to a model with a high degree of discrimination, so that the input features are better distinguished.

Organization of the Article
The organization of the article is as follows: the fuzzy target recognition model is given in Section 2; model construction and parameter identification are presented in Section 3; the simulation results and an analysis against comparable methods are given in Section 4; finally, the conclusions are drawn in Section 5. The notation used in the article is listed in Table 1.

Preliminaries
In this section, the preliminaries of the Dempster-Shafer evidence theory and Takagi-Sugeno intuitionistic fuzzy rules method are first introduced.

Evidence Theory
Dempster-Shafer evidence theory has flexibility and effectiveness in modeling uncertainties without prior information [19]. A discernment frame Θ consisting of all possible propositions is defined as follows:

Θ = {θ_1, θ_2, ..., θ_n}.

A mass function m mapping from 2^Θ to [0, 1] is defined as the BPA, which satisfies the following conditions:

m(∅) = 0,   Σ_{θ ∈ 2^Θ} m(θ) = 1.

If m(θ) > 0, then θ is described as a focal element. Suppose two independent basic belief assignments m_1, m_2 are combined into the form m_1 ⊕ m_2 according to Dempster's rule of combination, which can be expressed as follows:

(m_1 ⊕ m_2)(θ) = (1 / (1 − K)) Σ_{E ∩ F = θ} m_1(E) m_2(F),   θ ≠ ∅,

with

K = Σ_{E ∩ F = ∅} m_1(E) m_2(F),

where E, F ∈ 2^Θ and K is the conflict coefficient of m_1 and m_2. When the evidence is highly conflicting, the evidence fusion process will lead to counter-intuitive results. For a multi-source target recognition system, each sensor provides information with some degree of conflict, so dealing with the conflicts between the pieces of evidence is the key to applying evidence-based theories for accurate target recognition.
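As a minimal sketch of the combination rule above (the mass values and target labels below are illustrative only, not taken from the paper's experiments):

```python
def combine(m1, m2):
    """Dempster's rule of combination for two BPAs.

    m1, m2: dicts mapping frozenset focal elements to mass in [0, 1].
    """
    fused, conflict = {}, 0.0
    for e, p1 in m1.items():
        for f, p2 in m2.items():
            inter = e & f
            if inter:
                fused[inter] = fused.get(inter, 0.0) + p1 * p2
            else:
                conflict += p1 * p2          # mass falling on the empty set (K)
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence (K = 1)")
    return {theta: p / (1.0 - conflict) for theta, p in fused.items()}

# Illustrative masses from two sensors over {Br, Fr}
m1 = {frozenset({"Br"}): 0.6, frozenset({"Fr"}): 0.3, frozenset({"Br", "Fr"}): 0.1}
m2 = {frozenset({"Br"}): 0.5, frozenset({"Fr"}): 0.2, frozenset({"Br", "Fr"}): 0.3}
fused = combine(m1, m2)
```

Note that the normalization by 1 − K is exactly what produces counter-intuitive results when K approaches 1, which motivates the alternative combination rules cited above.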
The common features of aerial targets, such as flight speed, acceleration, and flight height, can be detected by a multi-source system. Owing to various forms of signal interference and other factors, the target information detected by the system contains considerable uncertainty. Most methods based on decision-level data fusion, such as D-S and Yager, require a high level of data preprocessing and display low interpretability. To improve the interpretability of the information fusion and of the aerial target recognition process, the T-S intuitionistic fuzzy model is introduced to establish a mapping between the feature space and the target space. The T-S intuitionistic fuzzy model has strong learning ability and robustness: historically detected targets can be labeled with the correct categories, and their feature information input into the T-S intuitionistic fuzzy model for training after intuitionistic fuzzification, forming a correct mapping relationship. By continuously learning target features, the final trained model can accurately capture the mapping relationship between the features and the targets.

Takagi-Sugeno Intuitionistic Fuzzy Rules Method
When the number of input variables increases, the number of rules of the T-S model increases exponentially, degrading training performance. For typical aerial targets, we therefore divide the features of the aerial targets into two or three groups for modeling. Figure 1 illustrates the classification and the process of target recognition. First, the features are divided into primary features and secondary features according to the target properties, with each secondary feature containing two or three primary features. Then, the model is trained on the training data to obtain the premise and consequence parameters, and the primary features are fused and judged by the trained MTS-IFRM. Finally, the identity estimation results of the target are fused over the secondary features to obtain the final recognition result of the target.
The main difficulty of aerial target recognition lies in the fusion of multiple features; achieving accurate recognition of targets from imprecise and conflicting feature data is the key. This section introduces the proposed aerial target recognition algorithm. The MTS-IFRM is designed by taking the radar graphic (RG) as an example. The inputs of the model are the feature values of aspect ratio (AR) and cross-sectional area (CA) after intuitionistic fuzzification, and the rules of the MTS-IFRM based on an intuitionistic fuzzy set are defined as:

Rule l: if z_CA is A^l_1 and z_AR is A^l_2, then

f^l_RG(z_RG) = p^l_RG0 + p^l_RG1 S(z_CA) + p^l_RG2 S(z_AR),

where the part after "if" denotes the premise and the part after "then" denotes the consequence of the rule. z_CA = {⟨CA, µ(CA), υ(CA)⟩ | CA ∈ E_CA} and z_AR = {⟨AR, µ(AR), υ(AR)⟩ | AR ∈ E_AR} denote the inputs of the CA and AR after intuitionistic fuzzification, respectively. µ(·) and υ(·) are the degrees of membership and non-membership, respectively, which define the intuitionistic fuzzy number, and π(·) = 1 − µ(·) − υ(·) ≥ 0 denotes the intuitionistic index of the intuitionistic fuzzy number. The specific process can be referenced in [36]. E_CA and E_AR denote the universes of discourse of the CA and AR, respectively. A^l_1 and A^l_2 denote the intuitionistic fuzzy subsets corresponding to the inputs z_CA and z_AR of rule l, respectively. The input vector z_RG = [z_CA, z_AR] denotes the premise variable of the model. p^l_RG = (p^l_RG0, p^l_RG1, p^l_RG2) denotes the consequence parameter vector. S(·) denotes the scoring function, with the abilities of sequencing and decision-making, which converts an intuitionistic fuzzy set into a definite numerical value [37]. L_RG denotes the number of RG fuzzy rules. Therefore, the weighted average y^0_RG of the outputs f^l_RG(z_RG) of the rules is obtained by

y^0_RG = Σ_{l=1}^{L_RG} µ̄^l(z_RG) f^l_RG(z_RG),

where µ̄^l(z_RG) denotes the normalized fuzzy membership degree of fuzzy rule l for input z_RG. The normalization method is defined as

µ̄^l(z_RG) = µ^l(z_RG) / Σ_{l=1}^{L_RG} µ^l(z_RG),

where the µ_{A^l_i}(·) composing µ^l(z_RG) are calculated by the premise parameter identification.
µ_{A^l_i}(·) can be expressed by using a suitable index λ (generally setting λ_1 = 1 and λ_2 = 0). Similarly, the MTS-IFRM based on the motion (M) and location (L) features can be established. The output results of the corresponding models are defined as follows: where z_FS, z_A, z_VS, z_FH and z_DD denote the flight speed, acceleration, vertical speed, flight height, and detection distance features after intuitionistic fuzzification, respectively.
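The rule-based inference above can be sketched as follows, under two assumptions not fixed by the text: the score function is taken as S(α) = µ − υ (a common choice for intuitionistic fuzzy numbers), and the premise memberships are Gaussian. The rule parameters below are hypothetical.

```python
import math

def score(mu, nu):
    # Assumed score function for an intuitionistic fuzzy number: S = mu - nu.
    return mu - nu

def ts_output(z, rules):
    """Normalized weighted-average T-S inference for one secondary feature (e.g. RG).

    z     : list of (mu, nu) intuitionistic fuzzy inputs, e.g. [z_CA, z_AR]
    rules : each rule has Gaussian premise params [(c, sigma), ...] per input
            and consequent coefficients p = [p0, p1, p2].
    """
    s = [score(mu, nu) for mu, nu in z]          # crisp scores of the inputs
    firing, outputs = [], []
    for r in rules:
        w = 1.0
        for si, (c, sg) in zip(s, r["premise"]):
            w *= math.exp(-(si - c) ** 2 / (2 * sg ** 2))  # premise membership
        firing.append(w)
        outputs.append(r["p"][0] + r["p"][1] * s[0] + r["p"][2] * s[1])
    total = sum(firing)
    return sum(w / total * f for w, f in zip(firing, outputs))
```

The normalized firing strengths play the role of µ̄^l(z_RG), and each rule's linear consequent is f^l_RG(z_RG).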

Aerial Target Recognition Methods Based on the MTS-IFRM
According to the above analysis, parameter identification plays a central role in a T-S rule-based system and determines the quality of the rule modeling. Therefore, the construction of the MTS-IFRM comprises the identification of the consequent parameters based on the ridge regression method, the identification of the premise part with a novel intuitionistic fuzzy C-means (IFCM) clustering model, and the adaptive weight algorithm.

Construction of MTS-IFRM
In this section, we take the training of the RG consequence parameters as an example. First, according to Equations (5) and (6), let: where s = [S(z_CA), S(z_AR)] denotes the scoring function set of the input z, so that: where µ^l(z_RG) is acquired from Equation (7). Next, the output of the model is denoted as: In Equation (19), we obtain the RG output of the MTS-IFRM. To solve the target recognition problem, each secondary feature needs a corresponding output; therefore, the MTS-IFRM is constructed. The ridge regression method, a modification of least-squares estimation, can deal with multicollinearity by introducing a small bias that stabilizes the estimator. To obtain a more reliable estimate of the consequent parameters, ridge regression analysis is used to train the model: Equation (20) contains the minimization of both the empirical risk and the structural risk, where p_{g,m,RG} denotes the consequent parameter of the m-th aerial target, N denotes the number of training samples, y_{n,m} denotes the M-dimensional label vector of the n-th training sample, and γ_1 represents the regularization parameter. To adjust the consequent parameter p_{g,m,RG}, the final optimization result is calculated from the first-order necessary condition: In Equation (21), p_{g,m,RG} is as follows: Therefore, a new MTS-IFRM of RG for aerial target recognition can be expressed as follows, according to Equations (5) and (22): where z′_CA and z′_AR are the CA features and AR features after intuitionistic fuzzification, respectively. p^{l′}_{g,m,RGi} denotes the consequent parameter corresponding to rule l′ of model m, where i = 0, 1, 2, and L′_RG denotes the number of rules. Similarly, the corresponding rules of the MTS-IFRM for the motion feature (MF) and location feature (LF) can be established by the same construction procedure.
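A minimal sketch of the closed-form ridge solution implied by Equations (20)-(22); the construction of the design matrix from the weighted rule regressors is abbreviated, and `gamma` plays the role of γ_1.

```python
import numpy as np

def ridge_consequents(X, Y, gamma):
    """Closed-form ridge solution: minimize ||X P - Y||^2 + gamma ||P||^2.

    X     : (N, q) design matrix built from the weighted rule regressors
    Y     : (N, M) label vectors of the N training samples
    gamma : regularization parameter (gamma_1 in the text)
    """
    q = X.shape[1]
    # First-order necessary condition: (X^T X + gamma I) P = X^T Y
    return np.linalg.solve(X.T @ X + gamma * np.eye(q), X.T @ Y)
```

With gamma = 0 this reduces to ordinary least squares; gamma > 0 shrinks the estimate, which is what makes the solution robust to multicollinear regressors.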

Premise Identification
IFCM and FCM clustering are very sensitive to the initial clustering center positions and are prone to converging to a local optimum in a noisy environment. Therefore, the variation factors of dynamic evolution theory are introduced into the PSO algorithm to improve the clustering optimization model [38].
Suppose the position of the i-th particle is X_i = (x_{i,1}, x_{i,2}, ..., x_{i,d}), its velocity is V_i = (v_{i,1}, v_{i,2}, ..., v_{i,d}), and P_i = (p_{i,1}, p_{i,2}, ..., p_{i,d}) is its personal optimal solution in the d-dimensional space, where i = 1, 2, ..., G and G is the size of the particle swarm. The velocity and position updates in the j-th dimension at iteration t are:

v_{i,j}(t+1) = w v_{i,j}(t) + c_1 r_1 (p_{i,j} − x_{i,j}(t)) + c_2 r_2 (p_{g,j} − x_{i,j}(t)),

x_{i,j}(t+1) = x_{i,j}(t) + v_{i,j}(t+1),

where P_g = (p_{g,1}, p_{g,2}, ..., p_{g,d}) denotes the current global optimal solution and w denotes the inertia weight. r_1 and r_2 are random numbers in the interval [0, 1], and c_1 and c_2 denote the learning factors of the DPSO, which are defined as follows: where t denotes the number of iterations in this round and T denotes the maximum number of iterations. c_1 and c_2 change dynamically with the increase in the number of iterations; therefore, the algorithm can adaptively expand the local search range in the early stage of iteration and accelerate global convergence in the late stage. This learning mechanism is used to accelerate the overall convergence.
In the iterative process, the inertia weight affects the search range of the current round according to the speed of the previous round. At the end of each round of iterations, the fitness of the selected particle swarm is computed, and the inertia weights are dynamically adjusted based on the fitness values, which gives the particles in this round of iterations a more balanced position. The nonlinear adaptive inertia weight strategy is used to calculate the inertia weight, as follows: where w_max and w_min are the maximum and minimum inertia weights, respectively, f_min and f_max represent the minimum and maximum fitness values of the particle swarm in this round, respectively, and f_avg represents the average fitness of the particle swarm. Depending on a particle's fitness relative to the average, its speed either mainly refers to the speed of the previous round, increasing the activity of the particle swarm, or mainly refers to the local optimal position and the global optimal position, accelerating the particle swarm toward the dominant space. Suppose Z = {z_1, z_2, ..., z_N} is the dataset. The objective function is given below:

J_m(U, V) = Σ_{n=1}^{N} Σ_{m=1}^{M} µ_nm^{c_0} d_nm²(z_n, v_m),

where µ_nm is the membership degree of the n-th sample in the m-th class, U = [µ_nm]_{N×M} denotes the fuzzy membership matrix of Z, c_0 ∈ [1, +∞) denotes the fuzzification index, and d_nm²(z_n, v_m) denotes the ordinary Euclidean distance between the measurement point z_n and the clustering center v_m, which is defined as: where p_i = (1/p, 1/p, ..., 1/p), and µ_{z_n}(x_i), υ_{z_n}(x_i) and π_{z_n}(x_i) are the fuzzy membership degree, non-membership degree, and intuitionistic index of the input data z_n, respectively. Similarly, µ_{v_m}(x_i), υ_{v_m}(x_i) and π_{v_m}(x_i) are the fuzzy membership degree, non-membership degree, and intuitionistic index of the clustering center v_m, respectively.
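The DPSO update described above can be sketched as follows. The exact schedules for c_1, c_2 (Equation (26)) and the nonlinear inertia weight (Equation (28)) are not reproduced in the text, so the forms below are plausible stand-ins with the qualitative behavior the text describes, not the paper's formulas.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamic_factors(t, T, c_start=2.5, c_end=0.5):
    """Stand-in dynamic learning factors: c1 shrinks and c2 grows with t/T,
    widening the local search early and speeding global convergence late."""
    c1 = c_start - (c_start - c_end) * t / T
    c2 = c_end + (c_start - c_end) * t / T
    return c1, c2

def adaptive_inertia(f_i, f_min, f_avg, w_max=0.9, w_min=0.4):
    """Stand-in nonlinear adaptive inertia weight: particles whose fitness value
    is at most average (better, when minimizing) get a smaller w (exploit);
    the rest keep w_max (explore)."""
    if f_i <= f_avg:
        return w_min + (w_max - w_min) * (f_i - f_min) / max(f_avg - f_min, 1e-12)
    return w_max

def pso_step(x, v, p_best, g_best, t, T, fitness):
    """One DPSO velocity/position update (Equations (24) and (25))."""
    c1, c2 = dynamic_factors(t, T)
    f = np.array([fitness(xi) for xi in x])
    for i in range(len(x)):
        w = adaptive_inertia(f[i], f.min(), f.mean())
        r1 = rng.random(x.shape[1])
        r2 = rng.random(x.shape[1])
        v[i] = w * v[i] + c1 * r1 * (p_best[i] - x[i]) + c2 * r2 * (g_best - x[i])
        x[i] = x[i] + v[i]
    return x, v
```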
Therefore, to obtain the optimal objective function by DPSO, it can be considered that the smaller the value of the objective function J_m(U, V), the better the fitness of the particle, so the particle fitness can be expressed by the following: where v_{i,m} denotes the intuitionistic fuzzy number of the m-th dimension of particle x_i and also denotes the m-th clustering center, and λ is a constant that can be manually adjusted according to the specific situation. The main steps of DPSO-IFCM are summarized in Figure 2.

As shown in Figure 2, the proposed DPSO-IFCM clustering algorithm includes the following steps:

1. Initialization: initialize G particles to form the first-generation particle swarm, where each particle randomly generates M clustering centers. The fitness value of each particle i is calculated by Equation (31) and determines its current optimal position, and the position of the particle with the highest fitness in the current swarm is p_g;
2. Compute the velocity and position of each particle in the new particle swarm using Equations (24) and (25);
3. Compute the fitness value of each particle in the new particle swarm using Equation (31) and compare it with the previous generation. For the same individual, if the fitness in the new population is larger than that of the corresponding individual in the previous generation, replace the individual of the previous generation; this becomes the optimal position of particle i. Otherwise, it remains unchanged;
4. Compare the fitness value of the optimal individual of the new particle swarm with that of the previous generation; if it is greater, update the optimal position of the population to the optimal position of the new particle swarm. Otherwise, it remains unchanged. Then set t = t + 1;
5. Repeat Steps 2-4 until a stopping criterion is met, usually a sufficiently good fitness or a maximum number of iterations;
6. Take the individual position with the highest fitness value as the initial clustering center of the IFCM algorithm;
7. Compute the membership degree µ_nm of each sample to each clustering center and the premise parameters µ_{A^m_i}(x_i), υ_{A^m_i}(x_i), π_{A^m_i}(x_i) of the model. A detailed method can be found in Ref. [36].
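Step 7 above can be sketched as follows, assuming a plain Euclidean distance on the (µ, υ, π) triples and the standard FCM membership update; the paper's distance may weight the components differently.

```python
import numpy as np

def ifs_distance2(z, v):
    """Squared Euclidean distance between two intuitionistic fuzzy points,
    each represented as (mu, nu, pi) triples per dimension."""
    return float(np.sum((np.asarray(z) - np.asarray(v)) ** 2))

def ifcm_memberships(Z, V, c0=2.0):
    """FCM-style membership update for intuitionistic-fuzzy data.

    Z : (N, p, 3) samples as (mu, nu, pi) triples
    V : (M, p, 3) clustering centers (e.g. seeded by the DPSO result)
    Returns U : (N, M) membership matrix with rows summing to 1.
    """
    N, M = len(Z), len(V)
    U = np.zeros((N, M))
    for n in range(N):
        d2 = np.array([ifs_distance2(Z[n], V[m]) for m in range(M)])
        if np.any(d2 == 0):                      # sample coincides with a center
            U[n, d2 == 0] = 1.0 / np.sum(d2 == 0)
            continue
        # standard FCM update: u_nm proportional to (1/d2_nm)^(1/(c0-1))
        inv = (1.0 / d2) ** (1.0 / (c0 - 1.0))
        U[n] = inv / inv.sum()
    return U
```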
Finally, we input the intuitionistic fuzzy features into the trained MTS-IFRM. The output of the j-th model is:

Adaptive Weight Algorithm
From Equation (32), we know that every target has a corresponding MTS-IFRM; each model is trained and produces a corresponding label vector output. If the features of the input data are more similar to a certain class, the value of the corresponding class in the label vector output will be closer to one; otherwise, it will be closer to zero. When the values of more than one class are relatively close, the class cannot be well distinguished from the input features; that is, the degree of discrimination is not obvious. In this case, we can rely on the other models to realize the classification and recognition of the target; that is, reduce the weight corresponding to the model with a low degree of discrimination and increase the weight corresponding to the model with a high degree of discrimination. Initially, the weight of each model is 1/h, where h denotes the number of secondary features. The weight distribution is also governed by the following two cases:

1. For a certain secondary feature, if all the values in the output vector of the corresponding model are less than 0.5, the possibility that the feature belongs to any of the targets being classified is too low. Therefore, according to the impact of this secondary feature on the classification results, its weight is reduced and assigned to the other features. Suppose that the maximum value of the label vector output by the model is x_max; then the weight of the corresponding model can be expressed as: The final output matrix can be obtained: where h_1 and h_2 are two constants that control the speed of weight change. Figure 3 shows the weight change under h_1 = 20, h_2 = 0.25.
In Figure 3, when x_max is less than 0.5, the weight of the corresponding model gradually decreases; near x_max = 0.25 it decreases rapidly; and when x_max is below 0.1, the corresponding model weight is close to 0. The larger weight is then allocated to the models that discriminate better, which yields higher recognition accuracy.

2. For the output of the corresponding T-S IFM for a certain secondary feature, if the maximum value in the label vector is greater than 0.5 and the difference between the maximum value and the second-largest value is less than 0.3, then the classification ability of the secondary feature over all of the targets to be classified is weak. However, because the maximum value in the label vector is greater than 0.5, the feature has some classification ability for one or several types of targets, but it cannot determine which type the input feature data belongs to. Therefore, the corresponding weight can be appropriately reduced and assigned to the other features.
Suppose that the difference between the maximum value and the second-largest value in the label vector output by the model is Δx. Figure 4 shows the weight adjustment under h_1 = 20, h_2 = 0. Different from case 1, in case 2 the model cannot clearly distinguish which category the target belongs to; because there is a value greater than 0.5 in the label vector, the weight is only appropriately reduced. From Figure 4, the weight is reduced to at most half of its original value.
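One hypothetical weight curve with the behavior described for Figures 3 and 4 (the paper's exact formula is not reproduced in the text) is the logistic:

```python
import math

def model_weight(x, h, h1=20.0, h2=0.25):
    """Hypothetical logistic weight reduction. x is x_max (case 1, h2 = 0.25)
    or the max/second-max difference (case 2, h2 = 0); h is the number of
    secondary features, so the unpenalized weight is 1/h."""
    return (1.0 / h) / (1.0 + math.exp(-h1 * (x - h2)))
```

With h_1 = 20 and h_2 = 0.25 the weight stays near 1/h above 0.5, drops sharply around 0.25, and vanishes below 0.1; with h_2 = 0 it is at most halved, matching the two cases above.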
According to the above two cases, the final weight allocation of each model is designed as follows. To assign the reduced weight portion of the models in cases 1 and 2 equally to the other models, first, the number of secondary features that do not satisfy the above two cases can be expressed as: Equation (35) denotes the number of models with an obvious classification effect. Then, the final weight adjustment of each model can be expressed as: where W_i denotes the final weight of the i-th model and f(x_j) denotes the weight of the corresponding model when case 1 or case 2 occurs. Therefore, the final fusion results are calculated as follows:
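The redistribution and fusion described by Equations (35)-(37) can be sketched as follows; the equal-sharing scheme matches the description above, though the paper's exact expressions are not reproduced here.

```python
import numpy as np

def fuse_outputs(label_vectors, weights):
    """Weighted fusion of the per-model label vectors: redistribute the weight
    removed from weak models (cases 1 and 2) equally to the remaining ones,
    then form the weighted sum of the label vectors."""
    weights = np.asarray(weights, dtype=float)
    full = 1.0 / len(weights)                    # initial weight of each model
    strong = weights >= full - 1e-12             # models not penalized
    freed = np.sum(full - weights[~strong])      # weight removed from weak models
    final = weights.copy()
    if strong.any():
        final[strong] += freed / strong.sum()    # share freed weight equally
    fused = np.sum(final[:, None] * np.asarray(label_vectors), axis=0)
    return final, fused
```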

Computational Complexity Analysis
In the proposed MTS-IFRM, the main computation comprises the DPSO-IFCM algorithm and the identification of the consequence parameters based on the ridge regression method. The total computational complexity of the ridge regression is O(L · N · M²), where L is the number of intuitionistic T-S fuzzy rules, N is the number of samples, and M is the number of label vector dimensions. In the DPSO-IFCM algorithm, the total computational complexity of the main DPSO loop is O(G · T · d), where G is the size of the particle swarm, T is the maximum number of iterations, and d is the dimension of the solution space; the calculation time of the IFCM is mainly spent on the fuzzy memberships µ_nm, with computational complexity O(L · N · M). In summary, the computational cost of the proposed algorithm is determined by L, N, M, G, T, and d.

Simulation Results and Analysis
To evaluate the performance of the MTS-IFRM approach to the problem of recognizing aerial targets in a complex environment, two examples were used to compare the recognition performance of the MTS-IFRM with that of the standard forms of D-S [19], Yager [29], Murphy [30], the multi-sensor data fusion algorithm (MSDF) [32], Kaur [8], and Hu [9]. Table 2 presents the feature ranges of five typical aerial targets: bomber (Br), fighter (Fr), helicopter (Hr), air-to-ground missile (AGM), and tactical ballistic missile (TBM). The complete discernment frame is Θ = {Br, Fr, Hr, AGM, TBM}, and the target recognition feature set is E = {E_A, E_B, E_C, E_D, E_E, E_F, E_G}, which represents the credibility of the evidence of the flight height (FH), detection distance (DD), flight speed (FS), acceleration (A), vertical speed (VS), cross-section area (CA), and aspect ratio (AR), respectively. The training data are generated within the scope of the feature ranges; the experiment uses 125 sets of target feature data within the appropriate range for the training phase, with nine sets of rules. Table 3 presents seven training datasets drawn from the training data. The historical feature datasets collected over 13-14 days, together with their results, are used as the training datasets; the testing target is recognized according to the trained MTS-IFRM, and then the feature datasets and model parameters of the target are updated with the recognition result.
The fuzzy membership function is very important for the initial recognition process because of the uncertainty in the feature data. By analyzing the features of aerial targets, the Gaussian membership function in Equation (38) is used to recognize the target, and Table 4 presents δ and x of five typical aerial targets with different features, from which the fuzzy membership functions corresponding to the detection distance are obtained.
Table 4. Five typical aerial targets with different features. Table 4 shows that an appropriate membership function µ(x_i) can be designed by adjusting δ and x according to the different target features x_i, obtained by analyzing the various feature attributes of each target in Table 2. Take the feature of detection distance as an example: Figure 5 presents the fuzzy membership functions corresponding to the detection distance. From Figure 5, the fuzzy membership degree of each target differs with different values of the primary features. When the detection distance is 450 km, the fuzzy membership degree belonging to target Br is the highest, at 0.8226, and the fuzzy membership degree belonging to target AGM is the lowest, approaching zero. When the target features obtained by the radar system are inaccurate and uncertain, the features are calculated by the membership function, thus effectively recognizing the target initially. Figure 6 shows the target recognition framework based on fuzzy membership degree and evidence theory.
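As an illustration of the Gaussian membership computation, the following sketch uses hypothetical (center, δ) pairs for the detection-distance feature, chosen so that the bomber's membership at 450 km reproduces the 0.8226 quoted above; the actual parameters are those of Table 4 and are not reproduced here.

```python
import math

def gaussian_membership(x, center, delta):
    """Gaussian fuzzy membership: mu(x) = exp(-(x - center)^2 / (2 * delta^2))."""
    return math.exp(-((x - center) ** 2) / (2 * delta ** 2))

# Hypothetical detection-distance parameters (km) for two targets.
br  = gaussian_membership(450, center=500, delta=80)  # bomber: long detection range
agm = gaussian_membership(450, center=60,  delta=40)  # air-to-ground missile: short range
# br is about 0.8226, while agm is vanishingly small, matching the
# behaviour described for Figure 5 at a detection distance of 450 km.
```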

In Figure 6, the supporting information of the target obtained by the fuzzy membership function may not be consistent.We use the recognition result of the target obtained by the fuzzy membership function as the confidence degree, and evidence theory is used to fuse the confidence degree and obtain a target recognition result.

Example 1: The Data Does Not Contain Fault Features
In this example, data without fault features are employed to show the performance of the methods; that is, all target features support a certain target. Suppose the radar detects a suspicious target with the target features: A = 23 km, B = 450 km, C = 350 m/s, D = 10 m/s², E = 40 m/s, F = 0.31 m², and G = 1.0. Table 5 shows the corresponding BPA functions, where X denotes the unknown term. The features are expressed with fuzzy membership for the unknown targets detected by radar; all the features of the unknown target have high credibility for the target Br, and no feature opposes Br. Tables 6-9 show the recognition results of the target with different numbers of pieces of evidence in an error-free environment. From Tables 6-9, as the quantity of evidence increases, the recognition accuracy of the other six methods steadily improves, except for Yager. The reason is that Yager assigns all the conflicts between evidence to X, which leads to cumulative conflicts in the synthetic evidence, and the value of X increases as the quantity of fused conflicting evidence increases. When the quantity of evidence is small, the MTS-IFRM maintains better target recognition performance and faster convergence because it deals with the uncertainty well. Regardless of whether fewer or more features are available, the MTS-IFRM has higher accuracy when recognizing the targets.
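For reference, the classical Dempster combination rule used by the D-S baseline [19] can be sketched as follows. The two BPAs are illustrative only, not the values of Table 5; note how the conflicting mass is discarded and the remainder renormalised, which is the behaviour that becomes problematic under high conflict.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: fuse two BPAs over the same frame of discernment.

    m1, m2: dicts mapping frozenset hypotheses to mass values.  Mass on
    pairs with an empty intersection (conflict) is dropped, and the
    remaining masses are renormalised by 1 - conflict.
    """
    fused = {}
    conflict = 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + p * q
            else:
                conflict += p * q
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {h: v / (1.0 - conflict) for h, v in fused.items()}

# Two pieces of evidence that both mainly support Br (error-free case).
Br, Fr = frozenset({"Br"}), frozenset({"Fr"})
e1 = {Br: 0.8, Fr: 0.2}
e2 = {Br: 0.7, Fr: 0.3}
fused = dempster_combine(e1, e2)
```

With consistent evidence, the combined belief in Br rises above either input mass, matching the improving accuracy reported in Tables 6-9.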

Example 2: The Data Contains Fault Features
The dataset simulated in this example contains one or more fault features obtained by the equipment, so that the multiple features do not all support a certain target. Suppose the radar detects a suspicious target with the target features: A = 23 km, B = 450 km, C = 350 m/s, D = 10 m/s², E = 40 m/s, F = 0.31 m², and G = 4.1. Except for the target aspect ratio, all features are the same as in Example 1. Due to the influence of factors such as noise and the working status of the sensor device, the target aspect ratio feature is abnormal, and the BPA of the aspect ratio can be expressed accordingly: the aspect ratio has a high degree of support for target Hr, while the support degree for Br is 0. Therefore, E_G shows significant conflict with the other evidence. Tables 10 and 11 compare the target recognition performance of the algorithms. They show that, because of the conflicting evidence E_G, D-S finally determines Fr as the final result, which is counter-intuitive. Meanwhile, Yager is also unable to correctly recognize the target because it assigns the high-conflict part of the evidence to X. Murphy, MSDF, Kaur, Hu, and the MTS-IFRM can process the conflicting evidence and produce reasonable results. The Murphy method has slower convergence because it averages the evidence without considering the correlations between pieces of evidence; the MSDF method modifies the entropy method to calculate the weight of the evidence; and the Kaur and Hu methods improve the credibility of the evidence by analyzing the discrepancies in different aspects. Moreover, the accuracy of the MTS-IFRM is higher than that of the other methods when fewer features are available. The MTS-IFRM establishes a structure with higher stability and reliability when confronting uncertainty.
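The behaviour attributed to Yager [29] above, assigning the conflicting mass to the unknown term X instead of renormalising, can be sketched as follows. The BPAs are illustrative, not the values of Tables 10 and 11.

```python
def yager_combine(m1, m2, frame):
    """Yager's rule: like Dempster's rule, but the conflicting mass is
    assigned to the whole frame of discernment X rather than being
    renormalised away."""
    fused = {}
    conflict = 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + p * q
            else:
                conflict += p * q
    X = frozenset(frame)
    fused[X] = fused.get(X, 0.0) + conflict  # conflict goes to the unknown term
    return fused

# Hypothetical conflicting evidence: the aspect-ratio evidence supports Hr
# while the accumulated earlier evidence supports Br.
Br, Hr = frozenset({"Br"}), frozenset({"Hr"})
frame = {"Br", "Fr", "Hr", "AGM", "TBM"}
m_prev = {Br: 0.9, frozenset(frame): 0.1}
m_G    = {Hr: 0.9, frozenset(frame): 0.1}
fused = yager_combine(m_prev, m_G, frame)
```

Here most of the mass (0.81) is conflicting and ends up on X, so the fused result no longer identifies any concrete target, which is why Yager's accuracy degrades as conflicting evidence accumulates.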
The reasons why the MTS-IFRM shows better performance for aerial target recognition can be explained as follows.First, the MTS-IFRM is constructed according to intuitionistic fuzzy theory, which deals with uncertainty data of aerial targets using DPSO-IFCM clustering.Second, the adaptive weight algorithm is used to further improve the classification accuracy of the model, which is crucial for addressing the target recognition problem in an error-free or error-prone environment.
To further verify the effectiveness of the method, a dataset of 10,000 target features is randomly generated within the ranges given in Table 12 as the test dataset of the simulation. The data model for the simulation feature parameters is as follows: f_ij denotes the j-th feature of target i, corresponding to the deviation δ_ij, and randn denotes a normally distributed random number with a mean of 0 and a variance of 1. The six methods with the highest recognition rates are employed in the experiment.
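A minimal sketch of this data model, with a hypothetical nominal value and deviation standing in for the Table 12 ranges:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def simulate_feature(f_nominal, delta):
    """Sample a noisy feature: f = f_nominal + delta * randn, where randn is a
    standard normal variate, matching the data model described in the text."""
    return f_nominal + delta * rng.standard_normal()

# Hypothetical nominal flight speed (m/s) and deviation for one target;
# the real ranges come from Table 12.
speed = simulate_feature(350.0, 10.0)
```

Repeating this draw 10,000 times per feature yields the test dataset used in the simulation.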
In Table 13, a(•) represents the recognition rate of "•", obtained by dividing the number of correctly recognized samples by the total number of testing samples; bold indicates the best simulation result under the same conditions. After fusing the seven features, Figure 7 shows the final recognition rates of the six algorithms. In Figure 7, the MTS-IFRM algorithm performs better than the other five methods overall but is slightly inferior to the other algorithms for the Hr. The main reasons are as follows: in the other methods, the preliminary recognition of the target with the fuzzy membership function has high accuracy, and the results are then fused by the evidence theory method. Moreover, Table 2 shows that the features of flight height and speed for the Hr differ greatly from those of the other targets; for example, suppose the radar detects a suspicious target with the target features: A = 1.

Conclusions
In this paper, a target recognition approach based on the MTS-IFRM is proposed, which constructs a fuzzy classification model to enhance the robustness of the recognition process. Intuitionistic fuzzy theory and the ridge regression method are employed in the consequent identification, while the intuitionistic fuzzy C-regression clustering based on dynamic optimization realizes the premise identification. Then, the adaptive weight algorithm improves the classification accuracy of the corresponding model. The experimental results show that the MTS-IFRM can effectively recognize aerial targets in error-free and error-prone environments, and that its performance is better than that of other methods proposed for aerial target recognition.
Although the proposed MTS-IFRM shows encouraging results for target recognition, many issues remain. For example, when fusing the outputs of multiple models, the weight distribution method is still relatively rough. As the number of target features increases, a more complete weight allocation algorithm is needed to fuse the outputs of multiple models accurately. In the future, further methods can be proposed to improve accuracy by extending the models to adapt to different types of datasets and by developing more efficient objective functions for the MTS-IFRM using specific samples.

Figure 5 .
Figure 5. Fuzzy membership functions of detection distance.

Figure 6 .
Figure 6. The target recognition framework based on fuzzy membership and evidence theory.


4.1.
Example 1: The Data Does Not Contain Fault Features. In this example, data without fault features are employed to show the performance of the methods; that is, all target features support a certain target. Suppose the radar detects a suspicious target with the target features: A = 23 km, B = 450 km, C = 350 m/s, D = 10 m/s², E = 40 m/s, F = 0.31 m², and G = 1.0.

Figure 7 .
Figure 7. The recognition rates of six algorithms.

X_i, V_i, P_i: position, velocity, and optimal solution of the i-th particle. RG: outputs of the model. f_min, f_max, f_avg: minimum, maximum, and average fitness of the particle swarm.

Table 2 .
Feature ranges of five aerial targets.

Table 3 .
The feature data of aerial targets.
Table 5 presents the BPA example of multi-source information fusion.

Table 5 .
The BPA example of the multi-source information fusion.

Table 6 .
Comparison of algorithms with E_A and E_B in an error-free environment.

Table 7 .
Comparison of algorithms with E_C, E_D, and E_E in an error-free environment.

Table 8 .
Comparison of algorithms with E_F and E_G in an error-free environment.

Table 9 .
Comparison of algorithms in an error-free environment.

Table 10 .
Comparison of algorithms with E_F and E_G in an error-prone environment.

Table 11 .
Comparison of algorithms with E in an error-prone environment.

Table 12 .
Range of the test dataset.