A Weighted Combination Method for Conflicting Evidence in Multi-Sensor Data Fusion

Dempster–Shafer evidence theory is widely applied in various fields related to information fusion. However, avoiding counter-intuitive results when combining highly conflicting pieces of evidence remains an open issue. To handle this problem, a weighted combination method for conflicting pieces of evidence in multi-sensor data fusion is proposed that considers both the interplay between the pieces of evidence and the impacts of the pieces of evidence themselves. First, the degree of credibility of each piece of evidence is determined on the basis of the modified cosine similarity measure of the basic probability assignment. Then, the degree of credibility is adjusted by leveraging the belief entropy function to measure the information volume of the evidence. Finally, the final weight of each piece of evidence generated from the above steps is adopted to modify the bodies of evidence before Dempster's combination rule is applied. A numerical example illustrates that the proposed method is reasonable and efficient in handling conflicting pieces of evidence. In addition, applications in data classification and motor rotor fault diagnosis validate the practicability of the proposed method with better accuracy.


Introduction
Multi-sensor data fusion technology has received significant attention in a variety of fields, as combining the information collected from multiple sensors can enhance the robustness and safety of a system. In wireless sensor network applications, however, the data collected from the sensors are often imprecise and uncertain [1]. How to model and handle uncertain information is still an open issue. To address this problem, many mathematical approaches have been presented: fuzzy set theory [2,3], which focuses on intuitive reasoning by taking human subjectivity and imprecision into account; intuitionistic fuzzy set theory [4], which generalizes fuzzy sets by considering the uncertainty in the assignment of the membership degree, known as the hesitation degree; evidence theory [5][6][7], a general framework for reasoning with uncertainty that has well-understood connections to other frameworks such as probability, possibility, and imprecise probability theories; rough set theory [8,9], which is concerned with the classification and analysis of imprecise, uncertain, or incomplete information and knowledge, and is considered one of the first non-statistical approaches in data analysis; evidential reasoning [10,11], a generic evidence-based multi-criteria decision analysis (MCDA) approach for problems with both quantitative and qualitative criteria under various uncertainties, including ignorance and randomness; Z-numbers [12,13], which intend to provide a basis for computation with numbers that are not totally reliable; D-numbers theory [14][15][16][17], a generalization of Dempster-Shafer theory that does not follow the commutative law; and others [18][19][20][21].
In addition, mixed intelligent methods have been applied in decision making [22], risk analysis [23], supplier selection [24], pattern recognition [25], classification [26], human reliability analysis [27], and fault diagnosis [28], etc. In this paper, we focus on evidence theory to deal with the uncertain problem of multi-sensor data fusion.
Dempster-Shafer evidence theory was first presented by Dempster [5] in 1967 and later extended by Shafer [6] in 1976. It can effectively model both uncertainty and imprecision without prior information, so it is widely applied in various fields for information fusion [29][30][31][32]. Nevertheless, it may produce counter-intuitive results when combining highly conflicting pieces of evidence [33]. To address this issue, many methods have been presented in recent years [34][35][36]. On the one hand, some researchers focused on amending Dempster's combination rule; on the other hand, some researchers pretreated the bodies of evidence before applying Dempster's combination rule. In terms of amending Dempster's combination rule, the major works include Smets's unnormalized combination rule [37], Dubois and Prade's disjunctive combination rule [38], and Yager's combination rule [39]. However, modifying the combination rule often breaks its desirable properties, such as commutativity and associativity. Furthermore, if sensor failure gives rise to the counter-intuitive results, modifying the combination rule is considered unreasonable. Therefore, in order to resolve the fusion problem of highly conflicting pieces of evidence, researchers prefer to pretreat the bodies of evidence. With respect to pretreating the bodies of evidence, the main works include Murphy's simple averaging of the bodies of evidence [40] and Deng et al.'s weighted average of the masses based on the distance of evidence [41]. Deng et al.'s method [41] overcame the deficiency of the method in [40]; however, it neglected the impact of the evidence itself in the decision-making process.
Hence, in this paper, a weighted combination method for conflicting pieces of evidence in multi-sensor data fusion is proposed to resolve the fusion problem of highly conflicting evidence. First, the credibility degree of each piece of evidence is determined on the basis of the modified cosine similarity measure of the basic probability assignment [42]. Then, the credibility degree of each piece of evidence is modified by adopting the belief entropy function [43] to measure the information volume of the evidence. Finally, the modified credibility degree of each piece of evidence is used to adjust its corresponding body of evidence to obtain the weighted average evidence before Dempster's combination rule is applied. A numerical example is given to illustrate the feasibility and effectiveness of the proposed method. Additionally, the proposed method is applied to data classification and motor rotor fault diagnosis, which validates its practicability.
The rest of this paper is organized as follows. Section 2 briefly introduces the preliminaries of this paper. Section 3 proposes the novel method, which is based on the similarity measure of evidence and the belief entropy function. Section 4 gives a numerical example to show the effectiveness of the proposed method. A statistical experiment is carried out in Section 5. Afterwards, in Section 6, the proposed method is applied to Iris data set classification and motor rotor fault diagnosis. Finally, Section 7 gives the conclusions.

Data Fusion
Data fusion can be defined as the combination of multiple sources to obtain information that is less expensive, of higher quality, or more relevant [44]. General data fusion structures can be classified into three types according to the stage at which fusion occurs: data-level, feature-level, and decision-level, as described in [45].
In the data-level fusion, all raw data from the sensors for a measured object are combined directly, and a feature vector is then extracted from the fused data. Fusion at this level retains the maximum information and can therefore generate good results. However, the sensors used in data-level fusion, such as sensors reporting vibration signals, must be homogeneous. As a consequence, data-level fusion is limited in actual application environments, because many physical quantities can be measured for a more comprehensive analysis. In the feature-level fusion, heterogeneous sensors can be used to report the data. According to the types of collected raw data, features are extracted from the sensors, and these heterogeneous sensor data are combined at the feature-level stage. All of the feature vectors are combined into a single feature vector, which is then utilized in a specific classification model for decision-making. In the decision-level fusion, the processes of feature extraction and pattern recognition are sequentially conducted for the data collected from each sensor. Then, the produced decision vectors are combined by using decision-level fusion techniques such as the Bayesian method, Dempster-Shafer evidence theory, or behavior knowledge space.
In this paper, we focus on decision-level fusion, and try to improve the performance of the system based on Dempster-Shafer evidence theory.

Dempster-Shafer Evidence Theory
Dempster-Shafer evidence theory was first proposed by Dempster [5] and was then further developed by Shafer [6]. As a generalization of Bayesian inference, it requires weaker conditions, which makes it more flexible and effective for modeling both uncertainty and imprecision. The basic concepts are introduced below.
Definition 1. Let U be a set of mutually exclusive and collectively exhaustive events, indicated by U = {E_1, E_2, ..., E_n}. The set U is called the frame of discernment. The power set of U is indicated by 2^U, where 2^U = {∅, {E_1}, ..., {E_n}, {E_1, E_2}, ..., U} and ∅ is the empty set. If A ∈ 2^U, A is called a proposition or hypothesis.

Definition 2.
For a frame of discernment U, a mass function is a mapping m from 2^U to [0, 1], formally defined by m: 2^U → [0, 1], which satisfies the following conditions: m(∅) = 0 and Σ_{A ∈ 2^U} m(A) = 1. In Dempster-Shafer evidence theory, a mass function is also called a basic probability assignment (BPA). If m(A) > 0, A is called a focal element, and the union of all focal elements is known as the core of the mass function.
Definition 3. For a frame of discernment U, the belief function Bel: 2^U → [0, 1] is defined as Bel(A) = Σ_{B ⊆ A} m(B), and the plausibility function Pl: 2^U → [0, 1] is defined as Pl(A) = Σ_{B ∩ A ≠ ∅} m(B). Apparently, Pl(A) is equal to or greater than Bel(A), where the function Bel is the lower limit function of proposition A and the function Pl is the upper limit function of proposition A.

Definition 4. Let the two BPAs be m_1 and m_2 on the frame of discernment U. Assuming that these BPAs are independent, Dempster's rule of combination, denoted by m = m_1 ⊕ m_2 and known as the orthogonal sum, is defined as below:

m(A) = (1 / (1 − K)) Σ_{B ∩ D = A} m_1(B) m_2(D) for A ≠ ∅, and m(∅) = 0,

with

K = Σ_{B ∩ D = ∅} m_1(B) m_2(D),

where B and D are also elements of 2^U, and K is a constant that represents the conflict between the two BPAs.
Note that Dempster's combination rule is only applicable to two BPAs under the condition K < 1.
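Dempster's rule in Definition 4 can be sketched in a few lines of Python. The representation of a BPA as a dict mapping frozenset propositions to masses, and the names below, are illustrative choices rather than the paper's notation; the example reproduces the classic counter-intuitive behavior on highly conflicting evidence mentioned in the Introduction.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPAs (dicts mapping frozenset propositions to masses)
    with Dempster's rule of combination (the orthogonal sum)."""
    # K: total mass assigned to conflicting (disjoint) pairs of focal elements
    K = sum(m1[B] * m2[D] for B, D in product(m1, m2) if not (B & D))
    if K >= 1.0:
        raise ValueError("Total conflict (K = 1): Dempster's rule is undefined")
    combined = {}
    for B, D in product(m1, m2):
        A = B & D
        if A:  # the empty intersection is discarded
            combined[A] = combined.get(A, 0.0) + m1[B] * m2[D]
    # normalize the remaining mass by 1 - K
    return {A: v / (1.0 - K) for A, v in combined.items()}

# Zadeh-style conflicting example on U = {a, b, c}
m1 = {frozenset('a'): 0.99, frozenset('b'): 0.01}
m2 = {frozenset('c'): 0.99, frozenset('b'): 0.01}
fused = dempster_combine(m1, m2)
print(fused)  # all mass collapses onto {b}: counter-intuitive
```

Despite both sources giving {b} only 0.01 support, the fused BPA assigns it essentially all of the belief, which is exactly the pathology the pretreatment methods discussed above are designed to avoid.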

Modified Cosine Similarity Measure of BPAs
A modified cosine similarity measure was proposed by Jiang [42]. Because it considers three important factors, namely, the angle, the distance, and the vector norms, the modified cosine similarity measure can measure the similarity between vectors more precisely. The modified cosine similarity measure among the BPAs can determine whether the pieces of evidence conflict with each other: a large similarity indicates that a piece of evidence has more support from another piece of evidence, while a small similarity indicates that it has less support.

Definition 5. Let E = [e_1, e_2, ..., e_n] and F = [f_1, f_2, ..., f_n] be two vectors of R^n. The modified cosine similarity between the vectors E and F is defined as

SI(E, F) = α^{−P} · min(|E|/|F|, |F|/|E|) · si_cos(E, F),

where α is a constant whose value is greater than 1, P is the Euclidean distance between the two vectors E and F, α^{−P} is the distance-based similarity measure, min(|E|/|F|, |F|/|E|) is the minimum of |E|/|F| and |F|/|E|, and si_cos(E, F) is the cosine similarity. The larger α is, the greater the impact of the distance on the vector similarity.

Definition 6. Let m_1 and m_2 be BPAs in the frame of discernment U = {C_1, C_2, ..., C_N}, with associated belief function vectors Bel_1 and Bel_2 and plausibility function vectors Pl_1 and Pl_2. The belief function vector similarity SI(Bel_1, Bel_2) and the plausibility function vector similarity SI(Pl_1, Pl_2) can then be calculated with Definition 5. The new similarity of the BPAs is defined as a combination of SI(Bel_1, Bel_2) and SI(Pl_1, Pl_2) weighted by λ, where λ is the total uncertainty of the BPAs. The larger the uncertainty λ is, the greater its influence on the similarity of the BPAs will be.
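Definition 5 can be expressed as a short function: the product of the distance-based factor, the norm-ratio factor, and the cosine similarity. This is a sketch of the formula as stated; the default α = 2 is an assumption, since the value of α is a tunable constant left open here.

```python
import math

def modified_cosine_similarity(E, F, alpha=2.0):
    """Modified cosine similarity of Definition 5: combines the cosine
    similarity with the distance-based factor alpha**(-P) and the
    norm-ratio factor min(|E|/|F|, |F|/|E|). alpha > 1 tunes how strongly
    the Euclidean distance P reduces the similarity."""
    norm_E = math.sqrt(sum(e * e for e in E))
    norm_F = math.sqrt(sum(f * f for f in F))
    if norm_E == 0.0 or norm_F == 0.0:
        return 0.0  # a zero vector carries no directional information
    cos_sim = sum(e * f for e, f in zip(E, F)) / (norm_E * norm_F)
    P = math.sqrt(sum((e - f) ** 2 for e, f in zip(E, F)))  # Euclidean distance
    norm_ratio = min(norm_E / norm_F, norm_F / norm_E)
    return alpha ** (-P) * norm_ratio * cos_sim

# identical vectors: every factor equals 1, so the similarity is 1
print(modified_cosine_similarity([0.6, 0.3, 0.1], [0.6, 0.3, 0.1]))  # ≈ 1.0
```

Unlike the plain cosine similarity, this measure also decreases when two vectors point in the same direction but differ in length or lie far apart, which is why it discriminates between BPAs more precisely.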

Belief Entropy
A novel type of belief entropy, known as the Deng entropy, was first proposed by Deng [43]. When the uncertain information is expressed by a probability distribution, the Deng entropy degenerates to the Shannon entropy; hence, the Deng entropy is regarded as a generalization of the Shannon entropy. It is an efficient mathematical tool for measuring uncertain information, especially when that information is expressed by a BPA. Because of this advantage, the Deng entropy is applied in a variety of areas [57,58]. The basic concepts are introduced below.

Definition 7.
Let B be a hypothesis or proposition of the BPA m in the frame of discernment U, and let |B| be the cardinality of B. The Deng entropy of the BPA m is defined as follows:

E_d(m) = − Σ_{B ⊆ U} m(B) log_2 ( m(B) / (2^{|B|} − 1) ).

When the belief value is only allocated to singletons, 2^{|B|} − 1 = 1 and the Deng entropy degenerates to the Shannon entropy, i.e.,

E_d(m) = − Σ_{B ⊆ U} m(B) log_2 m(B).

The larger the cardinality of the hypothesis or proposition, the larger the Deng entropy of the evidence, which means that the piece of evidence involves more information. Therefore, if a piece of evidence has a large Deng entropy value, it has more support from other pieces of evidence, indicating that this piece of evidence plays an important role in the evidence combination.
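The Deng entropy of Definition 7 is straightforward to compute; the dict-of-frozensets BPA representation below is an illustrative choice. The second example shows the extra uncertainty contributed by a compound focal element, which a Shannon-style entropy would miss.

```python
import math

def deng_entropy(m):
    """Deng entropy E_d(m) = -sum_B m(B) * log2( m(B) / (2**|B| - 1) ),
    where m maps frozenset focal elements to masses."""
    return -sum(v * math.log2(v / (2 ** len(B) - 1))
                for B, v in m.items() if v > 0)

# Bayesian BPA (singletons only): Deng entropy equals Shannon entropy
m_bayes = {frozenset('a'): 0.5, frozenset('b'): 0.5}
print(deng_entropy(m_bayes))  # 1.0 bit

# all mass on a compound set: larger entropy, log2(2**2 - 1) = log2(3)
m_vague = {frozenset(('a', 'b')): 1.0}
print(deng_entropy(m_vague))  # ≈ 1.585
```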

The Proposed Method
In this paper, a weighted combination method for conflicting pieces of evidence in multi-sensor data fusion is proposed by combining the modified cosine similarity measure of evidence with the belief entropy function. In contrast to the method of Jiang et al. [42], the proposed method considers the impact of the evidence itself in the fusion of multiple pieces of evidence by leveraging the belief entropy [43], a useful uncertainty measure, to quantify the information volume of each piece of evidence, so that multiple pieces of evidence can be combined with greater accuracy. This is discussed further in the following sections.

Process Steps
The proposed method is composed of the following procedures. The credibility degree of the pieces of evidence is first determined on the basis of the similarity measure among the BPAs. Then, the credibility degree is modified by leveraging the belief entropy function to measure the information volume of the evidence. Afterwards, the final weight of each piece of evidence is obtained and adopted to adjust the body of evidence before using Dempster's combination rule. The specific calculation processes are listed as follows. The flowchart of the proposed method is shown in Figure 1.
Step 1: Measure the similarities between the pieces of evidence. The similarity measure SI_BPA(ij) between the BPAs m_i and m_j can be obtained by Equations (11)-(13). Then, a similarity measure matrix (SMM) can be constructed as follows:

SMM =
[ 1           SI_BPA(12)  ...  SI_BPA(1k) ]
[ SI_BPA(21)  1           ...  SI_BPA(2k) ]
[ ...         ...         ...  ...        ]
[ SI_BPA(k1)  SI_BPA(k2)  ...  1          ]

Step 2: Obtain the support degrees of the pieces of evidence. The support degree of the BPA m_i (i = 1, ..., k), denoted as SD(m_i), is defined as follows:

SD(m_i) = Σ_{j=1, j≠i}^{k} SI_BPA(ij).

Step 3: Calculate the credibility degrees of the pieces of evidence. The credibility degree of the BPA m_i (i = 1, ..., k), denoted as CD(m_i), is defined as follows:

CD(m_i) = SD(m_i) / Σ_{j=1}^{k} SD(m_j).

Step 4: Measure the information volume of the pieces of evidence. According to Equation (14), the belief entropy E_d(m_i) of the BPA m_i (i = 1, ..., k) can be calculated. To avoid assigning zero weight to the evidence, the information volume IV(m_i) is used for measuring the uncertain information of m_i. It is defined as follows:

IV(m_i) = e^{E_d(m_i)}.

Step 5: Normalize the information volume of the pieces of evidence. The information volume of the BPA m_i (i = 1, ..., k) is normalized as below:

ĨV(m_i) = IV(m_i) / Σ_{j=1}^{k} IV(m_j).

Step 6: Modify the credibility degrees of the pieces of evidence. Based on the normalized information volume, the credibility degree of the BPA m_i (i = 1, ..., k) is modified, denoted as MCD(m_i):

MCD(m_i) = CD(m_i) × ĨV(m_i).

Step 7: Normalize the modified credibility degrees of the pieces of evidence. The modified credibility degree MCD(m_i) of the BPA m_i (i = 1, ..., k) is normalized as below and is considered as the final weight w(m_i) to adjust the bodies of evidence:

w(m_i) = MCD(m_i) / Σ_{j=1}^{k} MCD(m_j).

Step 8: Obtain the weighted average evidence. Based on the final weights, the weighted average evidence WAE(m) is defined as follows:

WAE(m) = Σ_{i=1}^{k} w(m_i) × m_i.

Step 9: Fuse the multiple weighted average pieces of evidence. When k pieces of evidence exist, the weighted average evidence is fused through Dempster's combination rule, Equation (7), k − 1 times:

Fus(m) = ( ... ((WAE(m) ⊕ WAE(m)) ⊕ WAE(m)) ... ) ⊕ WAE(m),

with WAE(m) appearing k times. Ultimately, we obtain the final fusion result of the evidence.
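The nine steps above can be sketched end-to-end in Python. Two caveats: the `similarity` function below is a plain cosine similarity over the joint focal elements, used only as a stand-in for the modified cosine similarity of Equations (11)-(13), and the BPAs in the usage example are illustrative, not those of the paper's experiments.

```python
import math
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for BPAs given as dicts mapping
    frozenset propositions to masses."""
    K = sum(m1[B] * m2[D] for B, D in product(m1, m2) if not (B & D))
    out = {}
    for B, D in product(m1, m2):
        A = B & D
        if A:
            out[A] = out.get(A, 0.0) + m1[B] * m2[D] / (1.0 - K)
    return out

def deng_entropy(m):
    """Deng entropy of a BPA."""
    return -sum(v * math.log2(v / (2 ** len(B) - 1))
                for B, v in m.items() if v > 0)

def similarity(mi, mj):
    """Stand-in similarity between two BPAs: a plain cosine similarity over
    the joint focal elements (the paper uses Equations (11)-(13) instead)."""
    keys = list(set(mi) | set(mj))
    vi = [mi.get(k, 0.0) for k in keys]
    vj = [mj.get(k, 0.0) for k in keys]
    dot = sum(a * b for a, b in zip(vi, vj))
    ni = math.sqrt(sum(a * a for a in vi))
    nj = math.sqrt(sum(b * b for b in vj))
    return dot / (ni * nj)

def weighted_combination(bpas):
    k = len(bpas)
    # Steps 1-2: similarities and support degrees SD(m_i)
    sd = [sum(similarity(bpas[i], bpas[j]) for j in range(k) if j != i)
          for i in range(k)]
    # Step 3: credibility degrees CD(m_i)
    cd = [s / sum(sd) for s in sd]
    # Steps 4-5: information volumes IV(m_i) = exp(E_d(m_i)), then normalized
    iv = [math.exp(deng_entropy(m)) for m in bpas]
    total_iv = sum(iv)
    iv = [v / total_iv for v in iv]
    # Steps 6-7: modified credibility degrees, normalized to final weights
    mcd = [c * v for c, v in zip(cd, iv)]
    w = [x / sum(mcd) for x in mcd]
    # Step 8: weighted average evidence WAE(m)
    wae = {}
    for wi, m in zip(w, bpas):
        for A, v in m.items():
            wae[A] = wae.get(A, 0.0) + wi * v
    # Step 9: fuse WAE(m) with itself k - 1 times via Dempster's rule
    fused = dict(wae)
    for _ in range(k - 1):
        fused = dempster_combine(fused, wae)
    return fused

# illustrative example with one highly conflicting report
m1 = {frozenset('a'): 0.6, frozenset('b'): 0.3, frozenset(('a', 'c')): 0.1}
m2 = {frozenset('b'): 0.9, frozenset('c'): 0.1}   # conflicting sensor
m3 = {frozenset('a'): 0.7, frozenset('b'): 0.1, frozenset(('a', 'c')): 0.2}
result = weighted_combination([m1, m2, m3])
print(max(result, key=result.get))  # the fused evidence favors {a}
```

Because the conflicting report m2 receives both a low credibility degree and a low information volume, its influence on the weighted average evidence is damped before Dempster's rule is applied.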

Algorithm
Let m = {m 1 , . . . , m i , . . . , m k } be a set of multiple pieces of evidence. After receiving k pieces of evidence, a fusion result is expected to be generated for decision-making support. The weighted fusion method for multiple pieces of evidence is outlined in Algorithm 1.
Algorithm 1 provides a formal expression of the specific calculation processes of the proposed method listed in Section 3.1. To be specific, Lines 2-7 explain how to measure the similarities between the pieces of evidence and construct the similarity measure matrix for k pieces of evidence. Lines 9-11 show how to obtain the support degrees for k pieces of evidence. Lines 13-15 represent how to calculate the credibility degrees for k pieces of evidence. Lines 17-19 explain how to measure the information volumes for k pieces of evidence. Lines 21-23 express how to normalize the information volumes for k pieces of evidence. Lines 25-27 state how to modify the credibility degrees for k pieces of evidence. Lines 29-31 show how to normalize the modified credibility degrees for k pieces of evidence. Line 33 describes how to obtain the weighted average evidence based on k pieces of evidence. Lines 35-37 depict how to generate the fusion result.

Numerical Example
In this section, in order to demonstrate the feasibility and effectiveness of the proposed method, a numerical example is illustrated.

Example 1.
Consider the decision-making problem of the multi-sensor-based target recognition system from [59], associated with five different kinds of sensors observing objects, where U = {a, b, c}. Here, a, b, and c are the three objects in the frame of discernment U. The five BPAs collected by the system are listed in Table 1.
Step 1: The similarity measure SI_BPA(ij) (i, j = 1, 2, 3, 4, 5) between the BPAs m_i and m_j is computed, from which the similarity measure matrix is constructed.
Step 2: The support degree SD(m_i) of the BPA m_i (i = 1, 2, 3, 4, 5) is calculated as shown in Table 2.
Step 3: The credibility degree CD(m_i) of the BPA m_i (i = 1, 2, 3, 4, 5) is obtained as shown in Table 2.
Step 4: The information volume IV(m_i) of the BPA m_i (i = 1, 2, 3, 4, 5) is measured as shown in Table 2.
Step 5: The information volume of the BPA m_i (i = 1, 2, 3, 4, 5) is normalized as shown in Table 2, denoted by ĨV(m_i).
Table 2. The calculated results in terms of the support degree, credibility degree, information volume, normalized information volume, and modified credibility degree of the BPAs.
Step 8: The weighted average evidence WAE(m) is computed as shown in Table 3.
Step 9: By fusing the weighted average evidence via Dempster's combination rule four times, the final fusion result Fus(m) of the evidence is produced as shown in Table 3.
From Example 1, it is obvious that m_2 highly conflicts with the other pieces of evidence. The fusion results obtained by the different combination approaches are presented in Table 4. In addition, the comparisons of target a's BPA under the different combination rules are shown in Figure 2.

Table 4. Fusion results of the different combination methods with five pieces of evidence.

Method             m(a)    m(b)    m(c)    m({a,c})  Target
Dempster [5]       0.0000  0.9153  0.0847  0.0000    b
Murphy [40]        0.8389  0.1502  0.0099  0.0010    a
Deng et al. [41]   0.9499  0.0411  0.0080  0.0010    a
Qian et al. [59]   0.9525  0.0393  0.0074  0.0008    a
Proposed method    0.9713  0.0204  0.0073  0.0010    a

As shown in Table 4, no matter how many pieces of evidence support target a, Dempster's combination method [5] always generates a counterintuitive result. When the number of pieces of evidence increases to three, Murphy's combination method [40] and Deng et al.'s combination method [41] cannot deal with the highly conflicting pieces of evidence very well, because the BPA values of object a generated by these two methods are 33.24% and 44.77%, respectively, which are smaller than 50%. When the number of pieces of evidence increases from four to five, both methods work well, and the BPA values of object a generated by Murphy's method [40] and Deng et al.'s method [41] increase up to 83.89% and 94.99%, respectively.
On the other hand, as shown in Table 4, Qian et al.'s combination method [59] and the proposed method show reasonable results and can efficiently deal with the highly conflicting pieces of evidence as the number of pieces of evidence increases from three to five. With five pieces of evidence, the BPA value of object a generated by the proposed method increases to 97.13%, which is higher than that of the other combination approaches, as shown in Figure 2. Therefore, it can be concluded that the proposed method is feasible and effective in handling highly conflicting pieces of evidence.

Statistical Experiment
In this section, in order to make a sound comparison, a statistical experiment with multiple sets of initial data is carried out to compare the proposed method with other related methods.
This statistical experiment is based on Example 1. In the experimental setting, the initial data are generated 100 times by randomly varying each BPA value of m_1 within the variation range [−0.1, 0.1].
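The perturbation scheme can be sketched as follows. The clipping at zero and the renormalization are our assumptions (the paper only states the variation range [−0.1, 0.1]), and the BPA values of m1 below are illustrative placeholders rather than the values of Table 1.

```python
import random

def perturb_bpa(m, delta=0.1, rng=random):
    """Shift each mass by a uniform random amount in [-delta, +delta],
    clip at zero, and renormalize so the masses again sum to 1."""
    noisy = {A: max(v + rng.uniform(-delta, delta), 0.0) for A, v in m.items()}
    total = sum(noisy.values())
    return {A: v / total for A, v in noisy.items()}

# illustrative BPA standing in for m1 of Example 1
m1 = {frozenset('a'): 0.41, frozenset('b'): 0.29, frozenset('c'): 0.30}
trials = [perturb_bpa(m1) for _ in range(100)]  # 100 perturbed data sets
```

Each perturbed copy of m1 is then fused with the remaining (unchanged) pieces of evidence by every method under comparison, giving 100 fusion results per method.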
Then, the generated multiple pieces of initial data are fused by utilizing the different methods, namely, Dempster's combination method [5], Murphy's combination method [40], Deng et al.'s combination method [41], Jiang et al.'s combination method [42], and the proposed method.
The experimental results of target a's BPA generated by the different combination methods are shown in Figure 3. From the comparison results, it is obvious that Murphy's combination method [40], Deng et al.'s combination method [41], Jiang et al.'s combination method [42], and the proposed method are more efficient than Dempster's combination method [5], because Dempster's combination method cannot effectively deal with the conflicting pieces of evidence and thus always generates counterintuitive results in which target a's BPA value is 0 (below 0.5). In contrast, the other methods can effectively cope with the conflicting evidence and recognize target a, whose corresponding BPA value is always larger than 0.5 across the experiments. On the other hand, because Murphy's combination method is a simple averaging approach to the bodies of evidence, its overall performance is somewhat poorer than that of Deng et al.'s combination method, Jiang et al.'s combination method, and the proposed method.
Furthermore, as shown in Figure 3a, Jiang et al.'s combination method [42], which is based on the modified cosine similarity measure, is on the whole more effective than Deng et al.'s combination method [41], which is based on the Jousselme distance. This is why the modified cosine similarity measure is adopted in this study.
To improve the performance of Jiang et al.'s combination method, we observe that the impact of the evidence itself is overlooked in their fusion of multiple pieces of evidence. Hence, the proposed method also takes the belief entropy into consideration to measure the information volume of each piece of evidence in the course of fusion. Consequently, as shown in Figure 3b, the proposed method is superior to Jiang et al.'s combination method [42], with a higher BPA value for target a.

Applications
In this section, the proposed approach is applied to Iris data set classification and motor rotor fault diagnosis, respectively, to validate its practicability, in which the experimental data in [48,59] are leveraged for the comparison among different approaches.

Iris Data Set Classification
Consider the Iris data set classification problem associated with a frame of discernment U consisting of three species of Iris flowers, U = {setosa, versicolor, virginica} = {Se, Ve, Vi}, and four numerical attributes of Iris flowers, {sepal length (SL), sepal width (SW), petal length (PL), petal width (PW)}, where the BPAs of the Iris instances are modeled with noisy data and given in Table 5 from [59].

The fusion results of the different combination approaches applied to the Iris data set are presented in Table 6. From the experimental results, it can be seen that Dempster's combination method [5] and Murphy's combination method [40] always generate counterintuitive results and classify the species of Iris flower as versicolor, even when the number of pieces of evidence increases from two (m_SL, m_SW) to four (m_SL, m_SW, m_PL, m_PW). By contrast, Deng et al.'s combination method [41] works well when the number of pieces of evidence increases to four (m_SL, m_SW, m_PL, m_PW), because it classifies the species of Iris flower as the target setosa with a belief value of 73.01%.

Table 6. Fusion results of the different combination methods with four pieces of evidence (m_SL, m_SW, m_PL, m_PW); the first three columns are the BPAs of Se, Ve, and Vi, the following columns are the BPAs of the compound focal elements, and the last column is the classification result.

Murphy [40]        0.4422  0.5546  0.0032  8 × 10^−8  5 × 10^−10  6 × 10^−7  3 × 10^−11  Ve
Deng et al. [41]   0.7301  0.2652  0.0047  1 × 10^−7  7 × 10^−10  9 × 10^−7  5 × 10^−11  Se
Qian et al. [59]   0.8338  0.1617  0.0045  9 × 10^−8  6 × 10^−10  9 × 10^−7  4 × 10^−11  Se
Proposed method    0.8693  0.1254  0.0053  1 × 10^−7  7 × 10^−10  1 × 10^−6  5 × 10^−11  Se

Obviously, Qian et al.'s combination method [59] and the proposed method show reasonable results and classify the species of Iris flower as the target setosa with belief values of 83.38% and 86.93%, respectively. Therefore, we can conclude that the proposed method is more efficient than the other related methods, with better accuracy of data classification, as shown in Figure 4.
The reason is that the proposed method not only takes the interplay between the pieces of evidence into account, but also considers the impacts of the pieces of evidence themselves.

Motor Rotor Fault Diagnosis
Suppose there are three types of faults for a motor rotor, {F_1, F_2, F_3} = {rotor unbalance, rotor misalignment, pedestal looseness}, in the frame of discernment U. A set of vibration acceleration sensors S = {S_1, S_2, S_3} is placed at different locations to gather the vibration signals.
The acceleration vibration frequency amplitudes at the 1X, 2X, and 3X frequencies are considered as the fault feature variables. The collected sensor reports at the 1X, 2X, and 3X frequencies, modeled as BPAs, are shown in Tables 7-9, respectively, in which m_1(·), m_2(·), and m_3(·) represent the BPAs modeled from the three vibration acceleration sensors S_1, S_2, and S_3.

Motor Rotor Fault Diagnosis at 1X Frequency
By conducting the steps in Section 3, the weighted average evidence with regard to motor rotor fault diagnosis at the 1X frequency is obtained. Then, the final fusion results for motor rotor fault diagnosis at the 1X frequency are computed.

Motor Rotor Fault Diagnosis at 2X Frequency
By carrying out the steps in Section 3, the weighted average evidence with respect to motor rotor fault diagnosis at the 2X frequency is obtained. Afterwards, the final fusion results for motor rotor fault diagnosis at the 2X frequency are generated.

Motor Rotor Fault Diagnosis at 3X Frequency
By applying the steps in Section 3, the weighted average evidence with respect to motor rotor fault diagnosis at the 3X frequency is obtained, and the final combination results at the 3X frequency are then computed. From the experimental results shown in Tables 10-12, it can be seen that the proposed method diagnoses the fault type as F_2, in accordance with Jiang et al.'s method [48].
Furthermore, the proposed method outperforms Jiang et al.'s method [48] in dealing with the uncertainty, as shown in Figures 5-7: by utilizing the proposed method, the belief degrees allocated to the target fault type F_2 at the 1X, 2X, and 3X frequencies increase up to 90.55%, 98.22%, and 63.21%, respectively, whereas by using Jiang et al.'s method [48], the belief degrees allocated to the target F_2 at the 1X, 2X, and 3X frequencies are 88.61%, 96.21%, and 59.04%, respectively.
Additionally, by utilizing the proposed method, the uncertainty {F_1, F_2} falls from 0.0582 to 0.0541 and the uncertainty {F_1, F_2, F_3} falls from 0.0555 to 0.0404 at the 1X frequency; the uncertainty {F_1, F_2, F_3} falls from 0.0371 to 0.0178 at the 2X frequency; and the uncertainty {F_1, F_2} falls from 0.0651 to 0.0333 and the uncertainty {F_1, F_2, F_3} falls from 0.0061 to 0.0001 at the 3X frequency. As a result, the proposed method can diagnose motor rotor faults more accurately than the related work.

Conclusions
In this paper, a weighted combination method for conflicting evidence in multi-sensor data fusion was proposed by combining the modified cosine similarity measure of the pieces of evidence with the belief entropy function. The proposed method pretreats the bodies of evidence, which makes it effective in handling conflicting pieces of evidence in a multi-sensor environment. A numerical example was given to show the feasibility and effectiveness of the proposal. In addition, applications in data classification and motor rotor fault diagnosis were presented to validate the practicability of the proposed method, where it outperformed the related methods with better accuracy.
Author Contributions: F.X. contributed most of the work in this paper. B.Q. contributed the experiments in this paper.