Article

A New Multi-Sensor Fusion Target Recognition Method Based on Complementarity Analysis and Neutrosophic Set

1 School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, China
2 Peng Cheng Laboratory, Shenzhen 518055, China
* Authors to whom correspondence should be addressed.
Symmetry 2020, 12(9), 1435; https://doi.org/10.3390/sym12091435
Submission received: 1 August 2020 / Revised: 25 August 2020 / Accepted: 27 August 2020 / Published: 31 August 2020

Abstract: To improve the efficiency, accuracy, and intelligence of target detection and recognition, multi-sensor information fusion technology has broad application prospects in many areas. Compared with a single sensor, multi-sensor data contains more target information, and effective fusion of multi-source information can improve the accuracy of target recognition. However, different sensors have different recognition capabilities, so the complementarity between sensors needs to be analyzed during information fusion. This paper proposes a multi-sensor fusion recognition method based on complementarity analysis and the neutrosophic set. The proposed method has two main parts: complementarity analysis and data fusion. In the complementarity analysis, the features of a validation set are extracted and input into the trained sensors to obtain recognition results on that set; from these results, the multi-sensor complementarity vectors are obtained. The recognition probabilities output by the sensors and the complementarity vectors are then used to generate groups of neutrosophic sets. Next, each group of neutrosophic sets is merged through the simplified neutrosophic weighted average (SNWA) operator. Finally, the fused neutrosophic sets are converted into crisp numbers, and the maximum value gives the recognition result. The practicality and effectiveness of the proposed method are demonstrated through examples.

1. Introduction

In daily life, target recognition is involved in many aspects of our lives, such as intelligent video surveillance and face recognition, and these applications have made target recognition technology increasingly widespread. The development of related technologies has greatly enriched the application scenarios of target recognition and tracking theories, and research on the corresponding theoretical methods has received extensive attention. Target recognition involves image processing, computer vision, pattern recognition, and other subjects.
Generalized target recognition includes two stages: feature extraction and classification. Through feature extraction, images, videos, and other target observation data are preprocessed to extract feature information, and a classifier then implements target classification based on that information [1]. Common image features can be divided into color and gray-level statistical features, texture and edge features, algebraic features, and variation coefficient features; the corresponding feature extraction methods are the color histogram, the gray-level co-occurrence matrix, principal component analysis, and the wavelet transform [2]. Classic classification algorithms include decision trees, support vector machines [3], neural networks [4], logistic regression, and naive Bayes classification [5,6,7,8]. Building on single classifiers, ensemble classifiers integrate the results of a series of weak classifiers through ensemble learning to obtain a better classification effect than any single classifier; the main approaches are Bagging and Boosting [9]. With the development of high-performance computing equipment and the growth in the amount of available data, deep learning theories and methods have developed rapidly, and deep neural networks have allowed target recognition to break through the limitations of traditional methods [10,11,12]. Classification network models include LeNet, AlexNet, and VggNet.
In practical applications, as the number of sensors increases, multi-source sensor data often has to be processed for decision analysis [13,14,15,16,17,18,19]. For the problem of processing uncertain information, there are many theoretical tools, such as Dempster-Shafer evidence theory [20,21,22,23,24,25,26], fuzzy set theory [27,28], D numbers [29,30,31,32], and rough set theory [33].
Smarandache [34] first generalized the concepts of fuzzy sets [35], intuitionistic fuzzy sets (IFS) [36], and interval-valued intuitionistic fuzzy sets (IVIFS) [37], and proposed the neutrosophic set, which is well suited to handling the uncertain and inconsistent information encountered in the real world. However, its truth, indeterminacy, and falsity membership functions are defined on real standard or non-standard subsets, and non-standard intervals are inconvenient for scientific and engineering applications. Therefore, Ye [38] introduced the simplified neutrosophic set (SNS), which restricts the truth, indeterminacy, and falsity membership functions to the real standard interval [0, 1]. SNSs include single-valued neutrosophic sets (SVNS) [39,40,41] and interval neutrosophic sets (INS) [42].
As a new kind of fuzzy set, neutrosophic sets [43,44] have been used in many fields, such as decision-making [45,46,47,48], data analysis [49,50], fault diagnosis [51], and the shortest path problem [52]. There has also been much progress in the related theory, for example, the score function of the pentagonal neutrosophic set [53,54].
Existing multi-sensor fusion methods, such as evidence theory, involve complex calculations and long running times [55], and few methods apply neutrosophic sets to multi-sensor fusion. Therefore, this paper proposes a multi-sensor fusion method based on the neutrosophic set. First, the complementarity vectors between the sensors are calculated. Then these complementarity vectors and the probabilities output by the sensors are used to form groups of neutrosophic sets, which are fused through the SNWA operator. Finally, the fused neutrosophic sets are converted to crisp numbers, and the maximum value gives the recognition result. The proposed method is computationally simple and fast, and can effectively improve the accuracy of target recognition.
The rest of this article is organized as follows: Section 2 introduces some necessary concepts, such as the neutrosophic set and multi-category evaluation criteria. The proposed multi-sensor fusion recognition method is described step by step in Section 3. In Section 4, an example is used to illustrate the effectiveness of the proposed method. The results are discussed in Section 5.

2. Preliminaries

2.1. Neutrosophic Set

Definition 1.
The simplified neutrosophic set (SNS) is defined as follows [38]:
Let X be a finite set, with an element of X denoted by x. A neutrosophic set P in X contains three parts: a truth-membership function T_P, an indeterminacy-membership function I_P, and a falsity-membership function F_P, which satisfy
0 ≤ T_P(x) ≤ 1
0 ≤ I_P(x) ≤ 1
0 ≤ F_P(x) ≤ 1
0 ≤ T_P(x) + I_P(x) + F_P(x) ≤ 3
A single-valued neutrosophic set P on X is defined as:
P = { ⟨x, T_P(x), I_P(x), F_P(x)⟩ | x ∈ X }
This is called an SNS. In particular, if X includes only one element, N = ⟨x, T_P(x), I_P(x), F_P(x)⟩ is called an SNN (simplified neutrosophic number) and is denoted by α = ⟨μ, π, ν⟩. The numbers μ, π, and ν denote the degree of truth-membership, the degree of indeterminacy-membership, and the degree of falsity-membership, respectively.
Definition 2.
The crisp number of each SNN is deneutrosophicated and calculated as follows [56]:
S_i = μ_i + π_i × ( μ_i / (μ_i + ν_i) )
S_i can be regarded as the score of the SNN, so SNNs can be ranked according to their crisp numbers S_i.
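As a minimal sketch of this deneutrosophication, assuming an SNN is stored as a plain (μ, π, ν) triple (the function name and data layout here are ours, not the paper's):

```python
def snn_score(mu: float, pi: float, nu: float) -> float:
    """Crisp score of an SNN <mu, pi, nu> per Definition 2."""
    return mu + pi * (mu / (mu + nu))  # assumes mu + nu > 0

# Example: rank two SNNs by their crisp scores.
a, b = (0.6, 0.2, 0.3), (0.5, 0.4, 0.2)
print(snn_score(*a), snn_score(*b))  # the larger score ranks higher
```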

2.2. Commonly Used Evaluation Indicators for Multi-Classification Problems

The index for evaluating the performance of a classifier is generally the accuracy of the classifier, which is defined as the ratio of the number of samples correctly classified by the classifier to the total number of samples for a given test data set [57].
The commonly used evaluation indicators for classification problems are precision and recall. Usually, the category of interest is regarded as the positive category and the other categories as the negative category. Each prediction of the classifier on the test data set is either correct or incorrect, which gives the following four cases:
  • True Positive (TP): The true category is positive, and the predicted category is positive.
  • False Positive (FP): The true category is negative, and the predicted category is positive.
  • False Negative (FN): The true category is positive, and the predicted category is negative.
  • True Negative (TN): The true category is negative, and the predicted category is negative.
Based on the above basic concepts, the commonly used evaluation indicators for multi-classification problems are as follows (a computational sketch is given after the list).
  • Precision (P):
    P = TP / (TP + FP)
  • Recall (R):
    R = TP / (TP + FN)
  • The F1 score measures the overall accuracy of a classification model by taking both precision and recall into account; it is the harmonic mean of the two, with a maximum value of 1 and a minimum value of 0:
    F1 = (2 × P × R) / (P + R)
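For a multi-class problem these indicators are computed per class in one-vs-rest fashion. A short sketch, assuming rows of the confusion matrix are true classes and columns are predicted classes (the convention used in the tables of Section 4); the example matrix is the Sensor 1 test result (Table 19), and the printed values reproduce the Sensor 1 rows of Table 23:

```python
import numpy as np

def per_class_metrics(cm: np.ndarray):
    """Per-class precision, recall, and F1 from a confusion matrix
    whose rows are true classes and columns are predicted classes."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp          # predicted as class c but wrong
    fn = cm.sum(axis=1) - tp          # true class c but missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Rows/columns: sailboat, cargo ship, speed boat, fishing boat.
cm = np.array([[25, 0, 3, 0],
               [0, 31, 4, 0],
               [1, 1, 23, 0],
               [0, 0, 0, 35]])
print(per_class_metrics(cm))
```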

2.3. AdaBoost Algorithm

AdaBoost is essentially an iterative algorithm [58]. Its core idea is to train a weak classifier h_i on the current sample set using a decision tree algorithm and then use that classifier to classify the sample set. For each training sample point, its weight is adjusted according to whether it was classified accurately: if h_i classifies it correctly, the weight of the sample point is reduced; otherwise, it is increased. The adjusted weights are calculated from the accuracy of the detection results. The reweighted sample set becomes the training set for the next round and is used to train the next classifier. Iterating step by step yields new classifiers until the final classifier h_m is obtained and the detection error rate on the samples approaches zero.
The classifiers h_1, h_2, …, h_m are then combined according to their sample detection error rates: a weak classifier with a larger error receives a smaller weight in the combined classifier, and one with a smaller error receives a larger weight.
The algorithm is essentially a comprehensive improvement of the weak classifiers trained by the basic decision tree algorithm. Through repeated training and weight adjustment, multiple classifiers are obtained and combined by weight into a comprehensive classifier with improved classification ability. The whole process is as follows:
  • Train weak classifiers with sample sets.
  • Calculate the error rate of the weak classifier, and obtain the correct and incorrect sample sets.
  • Adjust the sample set weight according to the classification result to obtain a redistributed sample set.
After M cycles, M weak classifiers are obtained; their joint weights are calculated from the detection accuracy of each weak classifier, and a strong classifier is finally obtained.
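A minimal sketch of this training loop using scikit-learn's AdaBoostClassifier with decision stumps as the weak classifiers; the data below is a random stand-in for extracted image features, and the first constructor argument is named base_estimator in scikit-learn versions before 1.2:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Random stand-ins for feature vectors (e.g., HOG or Gabor) and 4 class labels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 16)), rng.integers(0, 4, size=200)

# Shallow trees act as the weak classifiers h_1, ..., h_M; AdaBoost reweights
# the samples after each round and combines the h_i by their error rates.
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=50)
clf.fit(X, y)
print(clf.predict_proba(X[:2]))  # per-class probabilities, used later in fusion
```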

2.4. HOG Feature

The histogram of oriented gradients (HOG) is a feature descriptor for target detection. The technique counts occurrences of gradient orientations in local portions of an image. It is similar to edge orientation histograms and the scale-invariant feature transform, but differs in that HOG is computed on a dense grid of uniformly spaced cells, which improves accuracy. Navneet Dalal and Bill Triggs first proposed HOG in 2005 for pedestrian detection in static images and videos [59].
The core idea of HOG is that the shape of a local object can be described by the distribution of intensity gradients or edge directions. The image is divided into small connected regions called cells; each cell yields a histogram of the gradient or edge directions of its pixels, and the descriptor is formed by combining these histograms. To improve accuracy, the local histograms can be contrast-normalized by computing an intensity measure over a larger region of the image (called a block) and then using this value to normalize all cells in the block. This normalization yields better invariance to illumination and shadow. Compared with other descriptors, HOG descriptors remain invariant to geometric and photometric transformations (except for changes in object orientation).
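A minimal sketch of extracting such a descriptor with scikit-image; the cell and block sizes here are the common defaults from [59], not values specified by this paper:

```python
import numpy as np
from skimage.feature import hog

image = np.random.rand(128, 64)  # stand-in for a grayscale ship image

# 9-bin gradient-orientation histograms over 8x8-pixel cells,
# contrast-normalized over 2x2-cell blocks, concatenated into one vector.
descriptor = hog(image, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2), block_norm='L2-Hys')
print(descriptor.shape)
```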

2.5. Gabor Feature

The Gabor feature [60] describes the texture information of an image. The frequency and orientation selectivity of Gabor filters resemble those of the human visual system, making them particularly suitable for texture representation and discrimination. The Gabor feature relies on the Gabor kernel to window the signal in the frequency domain, thereby describing the local frequency content of the signal.
For feature extraction, Gabor wavelet transforms compare favorably with other methods: on the one hand, they process less data and can meet the real-time requirements of a system; on the other hand, they are insensitive to illumination changes and can tolerate a certain degree of image rotation and deformation. When recognition is based on Euclidean distance, the feature pattern and the feature to be measured do not need to correspond strictly, so the robustness of the system is improved.
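A minimal sketch of building a small Gabor filter bank with OpenCV and pooling the responses into texture features; the kernel parameters are illustrative choices, not values taken from this paper:

```python
import cv2
import numpy as np

image = np.random.rand(128, 128).astype(np.float32)  # stand-in grayscale image

# Filter the image with Gabor kernels at four orientations and pool the
# responses; the pooled statistics form a simple texture descriptor.
features = []
for theta in np.arange(0, np.pi, np.pi / 4):
    kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                lambd=10.0, gamma=0.5, psi=0)
    response = cv2.filter2D(image, cv2.CV_32F, kernel)
    features.extend([response.mean(), response.var()])
print(features)
```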

2.6. D-AHP Theory

The Analytic Hierarchy Process (AHP) [61] is a systematic, hierarchical analysis method that combines qualitative and quantitative analysis. Its characteristic is that, based on in-depth study of the nature, influencing factors, and internal relations of a complex decision-making problem, it mathematizes the thinking process of decision-making using relatively little quantitative information, thereby providing a simple decision-making approach for multi-objective, multi-criteria, or unstructured complex problems.
The D-AHP method [62] extends the traditional AHP method. In D-AHP, the derived rankings and priority weights of alternatives are affected by the credibility of the provided information. A parameter λ expresses the credibility of the information, and its value is associated with the cognitive ability of the experts: if the comparison information used in decision-making is provided by an authoritative expert, λ takes a smaller value; if it comes from an expert whose judgments carry low belief, λ takes a larger value.

3. The Proposed Method

In general sensor recognition, a training set is input to train the sensor via feature extraction, and a test set is then input to obtain recognition results. To improve the accuracy of multi-sensor fusion recognition, this paper proposes a fusion recognition method based on the neutrosophic set. The proposed method is divided into two parts: complementarity analysis and data fusion. Its main steps are shown in Figure 1.
The essence of the complementarity analysis is to calculate weights for the recognition ability of each base sensor in each category. To this end, the data set is divided into a training set, a validation set, and a test set. First, the base sensor preference matrices are obtained from the recognition matrices of the trained base sensors on the validation set, and then a sensor complementarity vector is calculated for each category.
In the data fusion stage, for each hypothesized target type, the recognition results of the different sensors are converted into neutrosophic sets based on the sensor complementarity vector. The neutrosophic sets are then fused within each category, and finally the fused neutrosophic sets are converted into crisp numbers, the maximum of which is taken as the recognition result.

3.1. Complementarity

The main steps of the multi-sensor complementarity analysis are proposed as follows:
  • According to the test results on the data, the sensor recognition matrices are obtained.
  • The sensor preference matrices for the different target types are obtained from the sensor recognition matrices.
  • The sensor complementarity vectors are obtained from the sensor preference matrices.

3.1.1. Sensor Recognition Matrix

Suppose there are n types of sensors X 1 , X 2 , X 3 , , X n , and m types of target types Y 1 , Y 2 , Y 3 , , Y m . For a certain data set i, the recognition results of all its samples can be represented by the following recognition matrix R i .
R_i = [ r_i11  r_i12  ⋯  r_i1m
        r_i21  r_i22  ⋯  r_i2m
          ⋮      ⋮    ⋱    ⋮
        r_in1  r_in2  ⋯  r_inm ]

3.1.2. Sensor Preference Matrix

To obtain the preference relationship matrix between sensors, preference between sensors must first be defined. For two sensors, if the recognition performance of sensor X_1 on target Y_j is better than that of sensor X_2, then for target Y_j, sensor X_1 is preferred to sensor X_2. The recognition results of a given sensor on the samples of data set i can also be organized as a matrix whose rows and columns are the true category and the recognized category of the samples. For a target of category Y_j, this recognition matrix yields Table 1:
Record the matrix in Table 1 as R_j^i, where r_ijj is the number of category-Y_j samples recognized correctly by the sensor, ∑_{k≠j} r_ikj is the number of non-target samples misrecognized as Y_j, ∑_{k≠j} r_ijk is the number of category-Y_j samples misrecognized as other categories, and ∑_{l≠j} ∑_{k≠j} r_ikl is the number of samples falling into none of the above three cases. The optimal recognition performance of a sensor is expressed as the matrix:
I_j = [ ∑_{l=1}^{h} r_ijl              0
             0           ∑_{k≠j} ∑_{l=1}^{h} r_ikl ]
which corresponds to the category being recognized completely correctly. The recognition performance of a sensor for category Y_j is then defined as 1/‖R_j^i − I_j‖. Therefore, for two sensors X_k and X_l, the preference value of the recognition accuracy of X_k over X_l on this category (representing the priority of the recognition ability of X_k over X_l on Y_j) is defined as:
p_j^kl = (1/‖R_k^j − I_j‖) / ( 1/‖R_k^j − I_j‖ + 1/‖R_l^j − I_j‖ ) = ‖R_l^j − I_j‖ / ( ‖R_k^j − I_j‖ + ‖R_l^j − I_j‖ )
In the same way, the recognition accuracy preference value of sensor X l vs X k on category Y j is defined as follows:
p_j^lk = ‖R_k^j − I_j‖ / ( ‖R_k^j − I_j‖ + ‖R_l^j − I_j‖ )
It follows immediately from the above two formulas that p_j^kl + p_j^lk = 1, i.e., the two preference values sum to 1, and if l = k then p_j^kl = 0.5. When multiple sensors recognize simultaneously, the pairwise values form a preference relationship matrix P_j for category Y_j; for each target category in data set i, a corresponding preference relationship matrix is obtained.
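A minimal sketch of this pairwise preference computation; since the paper does not specify the matrix norm ‖·‖, the Frobenius norm is assumed here:

```python
import numpy as np

def preference(Rk: np.ndarray, Rl: np.ndarray, I: np.ndarray) -> float:
    """p_j^kl: preference of sensor X_k over X_l on one category,
    assuming the Frobenius norm for the (unspecified) matrix norm."""
    dk = np.linalg.norm(Rk - I)  # distance of X_k from the ideal matrix I_j
    dl = np.linalg.norm(Rl - I)
    return dl / (dk + dl)

# Toy 2x2 recognition matrices in the layout of Table 1 (ideal I_j is diagonal).
I  = np.array([[28.0, 0.0], [0.0, 95.0]])
Rk = np.array([[25.0, 3.0], [2.0, 93.0]])
Rl = np.array([[20.0, 8.0], [5.0, 90.0]])
pkl = preference(Rk, Rl, I)
print(pkl, 1.0 - pkl)  # p_j^kl + p_j^lk = 1
```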

3.1.3. Sensor Complementarity Vector

Next, using the method of D-AHP theory [62], the complementarity vector is calculated from the preference relationship matrix. Consider the sensor preference relation matrix P_j of category Y_j:
P_j = [ p_j^11  p_j^12  ⋯  p_j^1n
        p_j^21  p_j^22  ⋯  p_j^2n
          ⋮       ⋮     ⋱    ⋮
        p_j^n1  p_j^n2  ⋯  p_j^nn ]
From P_j, the sensor complementarity vector C_j of category Y_j is calculated; C_j is a 1 × n vector. The flowchart of the C_j calculation is presented in Figure 2, and the calculation steps are as follows (a rough sketch is given after the list):
  • Express the importance of each index relative to the evaluation target through the preference relationships, and construct the D number preference matrix R_D.
  • According to the integrated representation of D numbers, transform the D number preference matrix into a crisp number matrix R_I.
  • Construct a probability matrix R_P from the crisp number matrix R_I, giving the preference probability between each pair of compared indices.
  • Convert the probability matrix R_P into a triangularized probability matrix R_TP, and rank the indices by importance.
  • According to the ranking result, rewrite the crisp number matrix R_I as a triangularized matrix R_TI, from which C_j is finally obtained.
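The full D-AHP computation (the triangularization and the credibility parameter λ) follows [62]. As a rough surrogate only, and not the D-AHP weights themselves, the normalized row sums of P_j already indicate the priority ordering, as this sketch with the Table 9 values shows:

```python
import numpy as np

# Preference matrix P_j for the sailboat category (values of Table 9).
P = np.array([[0.500, 0.801, 0.821, 0.690],
              [0.198, 0.500, 0.532, 0.355],
              [0.178, 0.467, 0.500, 0.326],
              [0.309, 0.644, 0.673, 0.500]])

# Normalized row sums yield the same sensor ordering (1 > 4 > 2 > 3)
# as C(1,1) in Table 13, though not the same weight values.
row = P.sum(axis=1)
print(row / row.sum())
```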

3.2. Data Fusion

For an image with unknown target type, a probability matrix is formed from the recognition result vectors of the sensors:
Q = [ q_11  ⋯  q_1m
       ⋮    ⋱    ⋮
      q_n1  ⋯  q_nm ]
where q i j is the probability that the sensor X i considers the unknown target as the target type Y j .
At this time, the complementary vector is normalized to obtain the weight coefficient:
H_j = C_j / ( C_j^min + C_j^max )
If the hypothesized target type is Y_j, the weight vector H_j and Q can be combined to obtain neutrosophic sets α = ⟨μ, π, ν⟩. Writing H_ji for the i-th component of H_j, the n sensors yield a group of n neutrosophic sets for each hypothesized Y_j:
α_i = ⟨ μ = q_ij × H_ji, π = (1 − q_ij) × H_ji, ν = 1 − H_ji ⟩,  i = 1, …, n
Fusing the n neutrosophic sets within each group gives the target recognition neutrosophic set under the hypothesized target type Y_j. Since there are m target types, the SNWA operator [63] finally yields m fused neutrosophic sets:
α_Tj = W_1 × α_1 + W_2 × α_2 + ⋯ + W_n × α_n = ⟨ 1 − ∏_{i=1}^{n} (1 − μ_i)^{W_i}, ∏_{i=1}^{n} (π_i)^{W_i}, ∏_{i=1}^{n} (ν_i)^{W_i} ⟩
Finally, convert these m SNNs into crisp numbers, and take the maximum value as the recognition result.
R_S = Max(S) = Max[ S_1 = μ_T1 + π_T1 × μ_T1/(μ_T1 + ν_T1), …, S_m = μ_Tm + π_Tm × μ_Tm/(μ_Tm + ν_Tm) ]
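A compact sketch of this fusion pipeline, run on the Table 14 probabilities and Table 15 weights from Section 4. As in Section 4.6, equal SNWA weights W_i = 1/n are assumed; the resulting argmax reproduces the paper's conclusion (category 4, fishing boat), though intermediate component values may differ slightly from the worked example depending on component ordering:

```python
import numpy as np

def category_score(q_col, H_row, W=None):
    """Build the n sensor SNNs for one hypothesized category, aggregate them
    with the SNWA operator, and deneutrosophicate to a crisp score."""
    mu, pi, nu = q_col * H_row, (1 - q_col) * H_row, 1 - H_row
    W = np.full(len(H_row), 1 / len(H_row)) if W is None else W
    mu_T = 1 - np.prod((1 - mu) ** W)
    pi_T = np.prod(pi ** W)
    nu_T = np.prod(nu ** W)
    return mu_T + pi_T * mu_T / (mu_T + nu_T)

# Rows of Q: sensors; columns: target types (Table 14).
# Rows of H: per-type weight vectors (Table 15).
Q = np.array([[0.256, 0.122, 0.180, 0.442],
              [0.136, 0.237, 0.315, 0.312],
              [0.078, 0.107, 0.352, 0.463],
              [0.099, 0.162, 0.286, 0.453]])
H = np.array([[0.817, 0.238, 0.183, 0.487],
              [0.833, 0.167, 0.262, 0.355],
              [0.853, 0.147, 0.393, 0.311],
              [0.789, 0.274, 0.789, 0.210]])
scores = [category_score(Q[:, j], H[j]) for j in range(4)]
print(scores, "-> type", int(np.argmax(scores)) + 1)  # type 4: fishing boat
```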

4. Simulation

4.1. Data Set

The experimental data come from two source types: visible light images and infrared images. There are four target types: sailboat (1), cargo ship (2), speed boat (3), and fishing boat (4). The structure of the experimental data is shown in Table 2 and Table 3. The data consist of a training set, a validation set, and a test set; the validation set and the test set are the same pictures. Since the information of the visible and infrared sensors is fused on the validation and test sets, each target requires a visible image and an infrared image taken at the same location, as shown in Figure 3, Figure 4, Figure 5 and Figure 6. Because data are scarce, K = 8-fold cross-validation is used for validation and testing: the validation (test) pool is divided into eight groups, and when one group is tested, the remaining seven groups serve as the validation set (a sketch of this split follows). The image features are Gabor and HOG, and the classifier is AdaBoost.
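A minimal sketch of this split, assuming the 123 pooled images are simply indexed 0-122:

```python
import numpy as np

K = 8
indices = np.random.permutation(123)      # 28 + 35 + 25 + 35 pooled images
groups = np.array_split(indices, K)
for k in range(K):
    test_idx = groups[k]                  # one group is tested...
    val_idx = np.concatenate([g for i, g in enumerate(groups) if i != k])
    # ...while the other seven form the validation set used to derive the
    # complementarity vectors before fusing and scoring test_idx.
```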

4.2. Sensor

Two data sources (visible light, infrared), two image features (HOG, Gabor), and the AdaBoost classification algorithm are combined to obtain four classifiers, as shown in Table 4. Since the background of this research is target recognition, these classifiers are regarded as different sensors that each recognize the target and produce their own recognition results. The recognition process is shown in Figure 7.

4.3. Base Sensor Recognition Confusion Matrix

The validation set is input into the trained base sensors, and the recognition confusion matrix of each base sensor is obtained from its recognition results. Table 5, Table 6, Table 7 and Table 8 are the recognition confusion matrices of the base sensors on the validation set.

4.4. Preference Matrix

Table 9, Table 10, Table 11 and Table 12 show the preference comparison matrix of the four sensors for the four types of target.

4.5. Complementarity Vector

According to the preceding derivation, the complementarity vectors are obtained from the preference matrices, as shown in Table 13; for example, C(1,1) reflects the complementarity between the four sensors when the target to be identified is the first type (sailboat). This information also reflects the importance of each sensor, so the complementarity vector is used as the weight for generating the neutrosophic sets during fusion.

4.6. Data Fusion

When identifying an unknown target, each of the four sensors outputs class probabilities through its trained classifier, as shown in Table 14.
According to these probabilities, by Equation (16), the complementarity vectors are converted into weight vectors, as shown in Table 15.
Equation (17) is used to combine the probabilities (Table 14) with the weight vectors (Table 15), giving four neutrosophic sets per category. The current recognition framework has four categories, so four groups of neutrosophic sets are obtained, as follows:
α_1: α_11 = ⟨0.209, 0.607, 0.184⟩, α_12 = ⟨0.032, 0.205, 0.761⟩, α_13 = ⟨0.014, 0.168, 0.817⟩, α_14 = ⟨0.048, 0.439, 0.513⟩
α_2: α_21 = ⟨0.102, 0.731, 0.167⟩, α_22 = ⟨0.039, 0.127, 0.833⟩, α_23 = ⟨0.029, 0.242, 0.729⟩, α_24 = ⟨0.057, 0.297, 0.645⟩
α_3: α_31 = ⟨0.154, 0.699, 0.147⟩, α_32 = ⟨0.046, 0.101, 0.853⟩, α_33 = ⟨0.138, 0.255, 0.607⟩, α_34 = ⟨0.089, 0.222, 0.689⟩
α_4: α_41 = ⟨0.348, 0.442, 0.210⟩, α_42 = ⟨0.085, 0.189, 0.726⟩, α_43 = ⟨0.365, 0.425, 0.210⟩, α_44 = ⟨0.095, 0.116, 0.789⟩
Combine these four groups of neutrosophic sets according to Equation (18) to obtain 4 neutrosophic sets:
α_1 = 0.25 × α_11 + 0.25 × α_12 + 0.25 × α_13 + 0.25 × α_14 = ⟨0.079, 0.492, 0.310⟩
α_2 = 0.25 × α_21 + 0.25 × α_22 + 0.25 × α_23 + 0.25 × α_24 = ⟨0.058, 0.506, 0.286⟩
α_3 = 0.25 × α_31 + 0.25 × α_32 + 0.25 × α_33 + 0.25 × α_34 = ⟨0.108, 0.479, 0.251⟩
α_4 = 0.25 × α_41 + 0.25 × α_42 + 0.25 × α_43 + 0.25 × α_44 = ⟨0.234, 0.399, 0.253⟩
Furthermore, the four neutrosophic sets are transformed into crisp numbers by Equation (19).
R_S = Max(S) = Max[ S_1 = 0.181, S_2 = 0.142, S_3 = 0.252, S_4 = 0.427 ] = S_4
Therefore, the recognition result of the unknown target is fishing boat.

4.7. Recognition Result

After all the test sets are evaluated, the results of the method proposed in this paper are shown in Table 16. The results of two other fusion methods, simple fusion (Table 17) and D-S fusion (Table 18), are given for comparison with the proposed method.
The recognition results of the 4 base sensors on the test set are shown in Table 19, Table 20, Table 21 and Table 22:
The multi-category evaluation criteria introduced above are used to evaluate the classification results, as shown in Table 23. It can be seen that the method proposed in this paper improves on the recall rate, accuracy rate, F1 score, and other indicators of the four sensors. After multi-sensor fusion recognition, the correct rate relative to a single sensor increases by 3.25% at the lowest and 35.77% at the highest, improving the average single-sensor performance by 21.13%. Compared with the other two fusion methods, the method proposed in this paper also performs better. The accuracy of fusion recognition is thus significantly improved.

5. Results

Aiming at the problem of multi-sensor target recognition, this paper proposes a new method that exploits the complementary characteristics of sensors in the fusion of neutrosophic sets, improving the accuracy of target type recognition. Using the identification of sea surface vessel types as the verification scenario, category-oriented sensor complementarity vectors are constructed through feature extraction and sensor training on the targets' infrared and visible image data, and a multi-sensor neutrosophic set model is applied to the target to be recognized to realize multi-sensor fusion recognition. Compared with other methods, the proposed method achieves better recognition accuracy. Compared with other fuzzy mathematics theories, neutrosophic set theory is more helpful for handling the complementary information between sensors; the three membership functions of the neutrosophic set allow the weights and other parameters to be adjusted flexibly, and since the neutrosophic set calculations are simple, the program takes less time to run. Further research will concentrate on applying the proposed method to more complicated studies to further demonstrate its efficiency.

Author Contributions

Y.G. wrote this paper, W.J., X.D. and M.W. reviewed and improved this article, Y.G. and Z.M. discussed and analyzed the numerical results. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Equipment Pre-Research Fund (Program No. 61400010109) and the Seed Foundation of Innovation and Creation for Graduate Students in Northwestern Polytechnical University (Program No. CX2020151).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Akula, A.; Singh, A.; Ghosh, R.; Kumar, S.; Sardana, H.K. Target Recognition in Infrared Imagery Using Convolutional Neural Network; Springer: Singapore, 2017; pp. 25–34.
  2. Liu, J.; Tang, Y.Y. An evolutionary autonomous agents approach to image feature extraction. IEEE Trans. Evol. Comput. 1997, 1, 141–158.
  3. Hearst, M.A.; Dumais, S.T.; Osman, E.; Platt, J.; Scholkopf, B. Support vector machines. IEEE Intell. Syst. Their Appl. 1998, 13, 18–28.
  4. Sutskever, I.; Vinyals, O.; Le, Q. Sequence to Sequence Learning with Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems 27 (NIPS 2014), Montreal, QC, Canada, 8–13 December 2014; pp. 3104–3112.
  5. Calders, T.; Verwer, S. Three naive Bayes approaches for discrimination-free classification. Data Min. Knowl. Discov. 2010, 21, 277–292.
  6. Jiang, W.; Cao, Y.; Deng, X. A Novel Z-network Model Based on Bayesian Network and Z-number. IEEE Trans. Fuzzy Syst. 2020, 28, 1585–1599.
  7. Zhang, L.; Wu, X.; Qin, Y.; Skibniewski, M.J.; Liu, W. Towards a Fuzzy Bayesian Network Based Approach for Safety Risk Analysis of Tunnel-Induced Pipeline Damage. Risk Anal. 2016, 36, 278–301.
  8. Huang, Z.; Yang, L.; Jiang, W. Uncertainty measurement with belief entropy on the interference effect in the quantum-like Bayesian Networks. Appl. Math. Comput. 2019, 347, 417–428.
  9. Wood, D.; Underwood, J.; Avis, P. Integrated learning systems in the classroom. Comput. Educ. 1999, 33, 91–108.
  10. Geng, J.; Deng, X.; Ma, X.; Jiang, W. Transfer Learning for SAR Image Classification Via Deep Joint Distribution Adaptation Networks. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5377–5392.
  11. Geng, J.; Jiang, W.; Deng, X. Multi-scale deep feature learning network with bilateral filtering for SAR image classification. ISPRS J. Photogramm. Remote Sens. 2020, 167, 201–213.
  12. Jiang, W.; Huang, K.; Geng, J.; Deng, X. Multi-Scale Metric Learning for Few-Shot Learning. IEEE Trans. Circuits Syst. Video Technol. 2020.
  13. He, Z.; Jiang, W. An evidential dynamical model to predict the interference effect of categorization on decision making. Knowl.-Based Syst. 2018, 150, 139–149.
  14. He, Z.; Jiang, W. An evidential Markov decision making model. Inf. Sci. 2018, 467, 357–372.
  15. Fu, C.; Chang, W.; Xue, M.; Yang, S. Multiple criteria group decision making with belief distributions and distributed preference relations. Eur. J. Oper. Res. 2019, 273, 623–633.
  16. Sun, C.; Li, S.; Deng, Y. Determining Weights in Multi-Criteria Decision Making Based on Negation of Probability Distribution under Uncertain Environment. Mathematics 2020, 8, 191.
  17. Fu, C.; Chang, W.; Liu, W.; Yang, S. Data-driven group decision making for diagnosis of thyroid nodule. Sci.-China-Inf. Sci. 2019, 62, 212205.
  18. Xiao, F. GIQ: A generalized intelligent quality-based approach for fusing multi-source information. IEEE Trans. Fuzzy Syst. 2020.
  19. Xiao, F.; Cao, Z.; Jolfaei, A. A novel conflict measurement in decision making and its application in fault diagnosis. IEEE Trans. Fuzzy Syst. 2020.
  20. Li, S.; Liu, G.; Tang, X.; Lu, J.; Hu, J. An Ensemble Deep Convolutional Neural Network Model with Improved D-S Evidence Fusion for Bearing Fault Diagnosis. Sensors 2017, 17, 1729.
  21. Jiang, W. A correlation coefficient for belief functions. Int. J. Approx. Reason. 2018, 103, 94–106.
  22. Deng, X.; Jiang, W.; Wang, Z. Zero-sum polymatrix games with link uncertainty: A Dempster-Shafer theory solution. Appl. Math. Comput. 2019, 340, 101–112.
  23. Deng, X.; Jiang, W.; Wang, Z. Weighted belief function of sensor data fusion in engine fault diagnosis. Soft Comput. 2020, 24, 2329–2339.
  24. Fei, L.; Feng, Y.; Liu, L. Evidence combination using OWA-based soft likelihood functions. Int. J. Intell. Syst. 2019, 34, 2269–2290.
  25. Fei, L.; Lu, J.; Feng, Y. An extended best-worst multi-criteria decision-making method by belief functions and its applications in hospital service evaluation. Comput. Ind. Eng. 2020, 142, 106355.
  26. Mao, S.; Han, Y.; Deng, Y.; Pelusi, D. A hybrid DEMATEL-FRACTAL method of handling dependent evidences. Eng. Appl. Artif. Intell. 2020, 91, 103543.
  27. Zimmermann, H.J. Fuzzy set theory. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 317–332.
  28. Xiao, F. A distance measure for intuitionistic fuzzy sets and its application to pattern classification problems. IEEE Trans. Syst. Man Cybern. Syst. 2019.
  29. Deng, X.; Jiang, W. D number theory based game-theoretic framework in adversarial decision making under a fuzzy environment. Int. J. Approx. Reason. 2019, 106, 194–213.
  30. Deng, X.; Jiang, W. Evaluating green supply chain management practices under fuzzy environment: A novel method based on D number theory. Int. J. Fuzzy Syst. 2019, 21, 1389–1402.
  31. Deng, X.; Jiang, W. A total uncertainty measure for D numbers based on belief intervals. Int. J. Intell. Syst. 2019, 34, 3302–3316.
  32. Liu, B.; Deng, Y. Risk Evaluation in Failure Mode and Effects Analysis Based on D Numbers Theory. Int. J. Comput. Commun. Control 2019, 14, 672–691.
  33. Qian, Y.; Liang, J.; Pedrycz, W.; Dang, C. Positive approximation: An accelerator for attribute reduction in rough set theory. Artif. Intell. 2010, 174, 597–618.
  34. Smarandache, F. Neutrosophic Probability, Set, and Logic (First Version); American Research Press: Rehoboth, DE, USA, 2016; pp. 41–48.
  35. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
  36. Atanassov, K.T. Intuitionistic fuzzy sets. Fuzzy Sets Syst. 1986, 20, 87–96.
  37. Atanassov, K.; Gargov, G. Interval valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 31, 343–349.
  38. Ye, J. A multicriteria decision-making method using aggregation operators for simplified neutrosophic sets. J. Intell. Fuzzy Syst. 2014, 26, 2459–2466.
  39. Haibin, W.; Smarandache, F.; Zhang, Y.; Sunderraman, R. Single Valued Neutrosophic Sets. 2010. Available online: http://fs.unm.edu/SingleValuedNeutrosophicSets.pdf (accessed on 31 July 2020).
  40. Zhang, C.; Li, D.; Broumi, S.; Sangaiah, A.K. Medical Diagnosis Based on Single-Valued Neutrosophic Probabilistic Rough Multisets over Two Universes. Symmetry 2018, 10, 213.
  41. Saber, Y.; Alsharari, F.; Smarandache, F. On Single-Valued Neutrosophic Ideals in Šostak Sense. Symmetry 2020, 12, 193.
  42. Haibin, W.; Smarandache, F.; Zhang, Y.Q.; Sunderraman, R. Interval Neutrosophic Sets and Logic: Theory and Applications in Computing. 2012. Available online: https://arxiv.org/abs/cs/0505014 (accessed on 31 July 2020).
  43. Smarandache, F. Neutrosophy: Neutrosophic Probability, Set, and Logic: Analytic Synthesis and Synthetic Analysis; American Research Press: Rehoboth, DE, USA, 1998.
  44. Zhou, X.; Li, P.; Smarandache, F.; Khalil, A.M. New Results on Neutrosophic Extended Triplet Groups Equipped with a Partial Order. Symmetry 2019, 14, 1514.
  45. Ma, Y.X.; Wang, J.-Q.; Wang, J.; Wu, X.-H. An interval neutrosophic linguistic multi-criteria group decision-making method and its application in selecting medical treatment options. Neural Comput. Appl. 2017, 28, 2745–2765.
  46. Wei, G.; Zhang, Z. Some single-valued neutrosophic Bonferroni power aggregation operators in multiple attribute decision making. J. Ambient. Intell. Humaniz. Comput. 2018, 10, 863–882.
  47. Ye, J. Another Form of Correlation Coefficient between Single Valued Neutrosophic Sets and Its Multiple Attribute Decision-Making Method. Neutrosophic Sets Syst. 2013, 1, 8–12.
  48. Ye, J. Trapezoidal neutrosophic set and its application to multiple attribute decision-making. Neural Comput. Appl. 2015, 26, 1157–1166.
  49. Kandasamy, V.W.B.; Kandasamy, I.; Smarandache, F. Neutrosophic Components Semigroups and Multiset Neutrosophic Components Semigroups. Symmetry 2020, 14, 818.
  50. Yang, W.; Cai, L.; Edalatpanah, S.A.; Smarandache, F. Triangular Single Valued Neutrosophic Data Envelopment Analysis: Application to Hospital Performance Measurement. Symmetry 2020, 14, 588.
  51. Gou, L.; Zhong, Y. A New Fault Diagnosis Method Based on Attributes Weighted Neutrosophic Set. IEEE Access 2019, 7, 117740–117748.
  52. Broumi, S.; Nagarajan, D.; Bakali, A.; Talea, M.; Smarandache, F.; Lathamaheswari, M. The shortest path problem in interval valued trapezoidal and triangular neutrosophic environment. Complex Intell. Syst. 2019, 5, 391–402.
  53. Chakraborty, A. A new score function of pentagonal neutrosophic number and its application in networking problem. Int. J. Neutrosophic Sci. 2020, 1, 40–51.
  54. Broumi, S.; Nagarajan, D.; Bakali, A.; Talea, M.; Smarandache, F.; Lathamaheswari, M.; Kavikumar, J. Implementation of neutrosophic function memberships using matlab program. Neutrosophic Sets Syst. 2019, 14, 44–52.
  55. Zhang, L.; Wu, X.; Zhu, H.; AbouRizk, S.M. Perceiving safety risk of buildings adjacent to tunneling excavation: An information fusion approach. Autom. Constr. 2017, 73, 88–101.
  56. Kavita; Yadav, S.P.; Kumar, S. A Multi-Criteria Interval-Valued Intuitionistic Fuzzy Group Decision Making for Supplier Selection with TOPSIS Method; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5908.
  57. Li, H. Statistical Learning Methods; Tsinghua University Press: Beijing, China, 2012.
  58. Rätsch, G.; Onoda, T.; Müller, K.R. Soft Margins for AdaBoost. Mach. Learn. 2001, 42, 287–320.
  59. Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, CA, USA, 20–25 June 2005; Volume 2.
  60. Liu, C.; Koga, M.; Fujisawa, H. Gabor feature extraction for character recognition: Comparison with gradient feature. In Proceedings of the Eighth International Conference on Document Analysis and Recognition, Seoul, Korea, 31 August–1 September 2005; Volumes 1 and 2, pp. 121–125.
  61. Saaty, T.L. Analytic Hierarchy Process. 2013. Available online: https://onlinelibrary.wiley.com/doi/abs/10.1002/0470011815.b2a4a002 (accessed on 31 July 2020).
  62. Deng, X.; Deng, Y. D-AHP method with different credibility of information. Soft Comput. 2019, 23, 683–691.
  63. Peng, J.-J.; Wang, J.-Q.; Wang, J.; Zhang, H.-Y.; Chen, X.-H. Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems. Int. J. Syst. Sci. 2016, 47.
Figure 1. Target fusion recognition based on sensor complementarity and neutrosophic set.
Figure 2. Complementary vector generation process [62].
Figure 3. Visible light image and infrared light image for the same sailboat.
Figure 4. Visible light image and infrared light image for the same cargo ship.
Figure 5. Visible light image and infrared light image for the same speedboat.
Figure 6. Visible light image and infrared light image for the same fishing boat.
Figure 7. The work process of sensor.
Table 1. R_j^i: the recognition of targets of category Y_j by a given sensor.

Real \ Recognized | Y_j | non-Y_j
Y_j | r_ijj | ∑_{k≠j} r_ijk
non-Y_j | ∑_{k≠j} r_ikj | ∑_{l≠j} ∑_{k≠j} r_ikl
Table 2. Target recognition images of infrared data.

Infrared Light Data | Train Set | Validation Set | Test Set
Sailboat | 65 | 28 | 28
Cargo ship | 68 | 35 | 35
Speed boat | 63 | 25 | 25
Fishing boat | 79 | 35 | 35
Table 3. Target recognition images of visible light data.

Visible Light Data | Train Set | Validation Set | Test Set
Sailboat | 65 | 28 | 28
Cargo ship | 79 | 35 | 35
Speed boat | 70 | 25 | 25
Fishing boat | 78 | 35 | 35
Table 4. Composition of the four base sensors.

Sensor 1 | Visible light + HOG + AdaBoost
Sensor 2 | Infrared light + HOG + AdaBoost
Sensor 3 | Visible light + GABOR + AdaBoost
Sensor 4 | Infrared light + GABOR + AdaBoost
Table 5. Recognition confusion matrix of sensor 1.

Real \ Identified | Sailboat | Cargo Ship | Speedboat | Fishing Boat
Sailboat | 23 | 0 | 2 | 0
Cargo ship | 0 | 29 | 3 | 0
Speed boat | 1 | 1 | 20 | 0
Fishing boat | 0 | 0 | 0 | 31
Table 6. Recognition confusion matrix of sensor 2.

Real \ Identified | Sailboat | Cargo Ship | Speedboat | Fishing Boat
Sailboat | 16 | 3 | 6 | 0
Cargo ship | 1 | 13 | 17 | 1
Speed boat | 0 | 0 | 18 | 4
Fishing boat | 0 | 1 | 9 | 21
Table 7. Recognition confusion matrix of sensor 3.

Real \ Identified | Sailboat | Cargo Ship | Speedboat | Fishing Boat
Sailboat | 16 | 1 | 8 | 0
Cargo ship | 5 | 17 | 10 | 0
Speed boat | 0 | 0 | 22 | 0
Fishing boat | 0 | 0 | 0 | 31
Table 8. Recognition confusion matrix of sensor 4.

Real \ Identified | Sailboat | Cargo Ship | Speedboat | Fishing Boat
Sailboat | 25 | 0 | 0 | 0
Cargo ship | 5 | 25 | 0 | 2
Speed boat | 0 | 10 | 11 | 1
Fishing boat | 0 | 0 | 6 | 25
Table 9. Preference matrix P(1,1) of Sailboat.

P(1,1) | Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4
Sensor 1 | 0.500 | 0.801 | 0.821 | 0.690
Sensor 2 | 0.198 | 0.500 | 0.532 | 0.355
Sensor 3 | 0.178 | 0.467 | 0.500 | 0.326
Sensor 4 | 0.309 | 0.644 | 0.673 | 0.500
Table 10. Preference matrix P(1,2) of Cargo ship.

P(1,2) | Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4
Sensor 1 | 0.500 | 0.859 | 0.826 | 0.794
Sensor 2 | 0.140 | 0.500 | 0.436 | 0.386
Sensor 3 | 0.173 | 0.563 | 0.500 | 0.448
Sensor 4 | 0.205 | 0.614 | 0.551 | 0.500
Table 11. Preference matrix P(1,3) of Speedboat.

P(1,3) | Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4
Sensor 1 | 0.500 | 0.856 | 0.769 | 0.802
Sensor 2 | 0.143 | 0.500 | 0.358 | 0.403
Sensor 3 | 0.230 | 0.641 | 0.500 | 0.548
Sensor 4 | 0.197 | 0.596 | 0.451 | 0.500
Table 12. Preference matrix P(1,4) of Fishing boat.

P(1,4) | Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4
Sensor 1 | 0.500 | 1.000 | 0.500 | 1.000
Sensor 2 | 0 | 0.500 | 0 | 0.561
Sensor 3 | 0.500 | 1.000 | 0.500 | 1.000
Sensor 4 | 0 | 0.438 | 0 | 0.500
Table 13. Complementarity vector: C.

 | Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4
C(1,1) | 0.473 | 0.138 | 0.106 | 0.283
C(1,2) | 0.513 | 0.102 | 0.166 | 0.219
C(1,3) | 0.500 | 0.086 | 0.231 | 0.183
C(1,4) | 0.382 | 0.132 | 0.382 | 0.104
Table 14. Probability of the 4 sensors to recognize the unknown target.

Type | Sailboat | Cargo Ship | Speedboat | Fishing Boat
Sensor 1 | 0.256 | 0.122 | 0.180 | 0.442
Sensor 2 | 0.136 | 0.237 | 0.315 | 0.312
Sensor 3 | 0.078 | 0.107 | 0.352 | 0.463
Sensor 4 | 0.099 | 0.162 | 0.286 | 0.453
Table 15. Weight vector: H.

 | Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4
W(1,1) | 0.817 | 0.238 | 0.183 | 0.487
W(1,2) | 0.833 | 0.167 | 0.262 | 0.355
W(1,3) | 0.853 | 0.147 | 0.393 | 0.311
W(1,4) | 0.789 | 0.274 | 0.789 | 0.210
Table 16. The proposed method recognition result.

Real \ Identified | Sailboat | Cargo Ship | Speedboat | Fishing Boat
Sailboat | 27 | 0 | 1 | 0
Cargo ship | 1 | 31 | 3 | 0
Speed boat | 0 | 0 | 25 | 0
Fishing boat | 0 | 0 | 0 | 35
Table 17. Simple fusion result.

Real \ Identified | Sailboat | Cargo Ship | Speedboat | Fishing Boat
Sailboat | 27 | 0 | 1 | 0
Cargo ship | 1 | 33 | 1 | 0
Speed boat | 1 | 1 | 21 | 2
Fishing boat | 0 | 0 | 0 | 35
Table 18. D-S fusion result.

Real \ Identified | Sailboat | Cargo Ship | Speedboat | Fishing Boat
Sailboat | 26 | 1 | 1 | 0
Cargo ship | 2 | 28 | 5 | 0
Speed boat | 0 | 0 | 25 | 0
Fishing boat | 0 | 0 | 2 | 33
Table 19. Sensor 1 recognition result.

Real \ Identified | Sailboat | Cargo Ship | Speedboat | Fishing Boat
Sailboat | 25 | 0 | 3 | 0
Cargo ship | 0 | 31 | 4 | 0
Speed boat | 1 | 1 | 23 | 0
Fishing boat | 0 | 0 | 0 | 35
Table 20. Sensor 2 recognition result.

Real \ Identified | Sailboat | Cargo Ship | Speedboat | Fishing Boat
Sailboat | 17 | 3 | 8 | 0
Cargo ship | 1 | 14 | 18 | 2
Speed boat | 0 | 1 | 19 | 5
Fishing boat | 0 | 1 | 10 | 24
Table 21. Sensor 3 recognition result.

Real \ Identified | Sailboat | Cargo Ship | Speedboat | Fishing Boat
Sailboat | 17 | 1 | 10 | 0
Cargo ship | 6 | 18 | 11 | 0
Speed boat | 0 | 0 | 25 | 0
Fishing boat | 0 | 0 | 0 | 35
Table 22. Sensor 4 recognition result.

Real \ Identified | Sailboat | Cargo Ship | Speedboat | Fishing Boat
Sailboat | 28 | 0 | 0 | 0
Cargo ship | 6 | 27 | 0 | 2
Speed boat | 0 | 12 | 1 | 12
Fishing boat | 0 | 0 | 6 | 29
Table 23. Recognition result analysis (count time and correct rate are per method).

Method | Category | Accuracy Rate | Recall Rate | F1 Score | Count Time | Correct Rate
Sensor 1 | Sailboat | 0.962 | 0.893 | 0.926 | 54 s | 92.68%
Sensor 1 | Cargo ship | 0.969 | 0.886 | 0.925
Sensor 1 | Speedboat | 0.767 | 0.920 | 0.836
Sensor 1 | Fishing boat | 1.000 | 1.000 | 1.000
Sensor 2 | Sailboat | 0.944 | 0.607 | 0.739 | 52 s | 60.16%
Sensor 2 | Cargo ship | 0.737 | 0.400 | 0.519
Sensor 2 | Speedboat | 0.345 | 0.760 | 0.475
Sensor 2 | Fishing boat | 0.774 | 0.686 | 0.727
Sensor 3 | Sailboat | 0.739 | 0.607 | 0.667 | 162 s | 77.24%
Sensor 3 | Cargo ship | 0.947 | 0.514 | 0.667
Sensor 3 | Speedboat | 0.543 | 1.000 | 0.704
Sensor 3 | Fishing boat | 1.000 | 1.000 | 1.000
Sensor 4 | Sailboat | 0.824 | 1.000 | 0.903 | 158 s | 69.11%
Sensor 4 | Cargo ship | 0.692 | 0.771 | 0.730
Sensor 4 | Speedboat | 0.143 | 0.040 | 0.063
Sensor 4 | Fishing boat | 0.674 | 0.829 | 0.744
D-S fusion | Sailboat | 0.929 | 0.929 | 0.929 | 434 s | 91.56%
D-S fusion | Cargo ship | 0.966 | 0.800 | 0.875
D-S fusion | Speedboat | 0.758 | 1.000 | 0.862
D-S fusion | Fishing boat | 1.000 | 0.943 | 0.971
Simple fusion | Sailboat | 0.931 | 0.964 | 0.947 | 413 s | 94.30%
Simple fusion | Cargo ship | 0.971 | 0.943 | 0.957
Simple fusion | Speedboat | 0.913 | 0.840 | 0.875
Simple fusion | Fishing boat | 1.000 | 0.946 | 0.972
Proposed method | Sailboat | 0.964 | 0.964 | 0.964 | 398 s | 95.93%
Proposed method | Cargo ship | 1.000 | 0.886 | 0.940
Proposed method | Speedboat | 0.862 | 1.000 | 0.926
Proposed method | Fishing boat | 1.000 | 1.000 | 1.000
