Conflict Management for Target Recognition Based on PPT Entropy and Entropy Distance

Conflicting evidence affects the final target recognition results, so managing conflicting evidence efficiently can help to improve the belief degree of the true target. Existing approaches based on belief entropy use belief entropy alone to measure evidence conflict. However, characterizing evidence conflict through belief entropy alone is not convincing. To solve this problem, we comprehensively consider the influence of both self-belief entropy and mutual belief entropy on conflict measurement, and propose a novel approach based on an improved belief entropy and entropy distance. The improved belief entropy, built on the pignistic probability transformation function, is named pignistic probability transformation (PPT) entropy; it measures the conflict between evidences from the perspective of self-belief entropy. Compared with state-of-the-art belief entropies, it measures the uncertainty of evidence more accurately and makes full use of the intersection information of evidence to estimate the degree of evidence conflict more reasonably. Entropy distance is a new distance measurement method and is used to measure the conflict between evidences from the perspective of mutual belief entropy. The two measures are mutually complementary. The results of numerical examples and target recognition applications demonstrate that our proposed approach has a faster convergence speed and a higher belief degree of the true target than existing methods.


Introduction
Information is affected by various subjective factors and the objective environment, and therefore carries some uncertainty. How to measure uncertainty has become an open issue. Some theoretical tools have been proposed, including probability theory [1], fuzzy set theory [2,3], D-numbers [4,5], Z-numbers [6,7], rough set theory [8,9], Dempster-Shafer (D-S) evidence theory [10,11], fractal theory [12,13], etc. D-S evidence theory is one of the most effective tools among them. It not only allocates belief to the power set of propositions [14], but also accepts an incomplete model without prior probabilities [15]. For this reason, D-S evidence theory has been widely applied in risk analysis [16,17], uncertainty measurement [18], fault diagnosis [19], decision making [20], and so on.
However, a counterintuitive result is generated when there is a high degree of conflict between the evidences. To address this issue, researchers have proposed a variety of improved methods, which can be divided into those modifying Dempster's combination rule [21-23] and those modifying the original evidence. For the first class, Smets et al. think that the conflict should be assigned to the empty set [21]. Lefevre et al. propose a modified method which proportionally distributes the conflict information to the focal element sets [22]. However, the flaw of modifying Dempster's combination rule is that desirable properties, such as commutativity and associativity, are destroyed. Therefore, many researchers are inclined to modify the original evidence. For the second class, the initial evidences are modified by corresponding weights, and rational combined results can be achieved with the classical Dempster's rule; the most commonly used method is therefore weighted evidence combination. Jiang et al.'s method obtains the weight of evidence by jointly using the distance of evidence and Deng entropy [24]. Tao et al. propose a modified averaging method to combine conflicting evidence based on belief entropy and the induced ordered weighted averaging operator [25]. Li et al. define a new discount coefficient to modify the original evidence based on belief entropy and negation [26]. In contrast, some existing methods [27-29] are based on belief entropy and use belief entropy itself to measure evidence conflict. Obviously, characterizing evidence conflict through belief entropy alone is not convincing. Especially when the belief entropy of conflict evidence and normal evidence is close, the conflict evidence will be assigned a weight almost equal to that of normal evidence, and conflicts are not handled effectively.
To overcome these shortcomings, we propose a novel method based on an improved belief entropy and entropy distance, taking into account the impacts of both self-belief entropy and mutual belief entropy on conflict management. The improved belief entropy is used to quantify the uncertainty of the evidence, so as to measure the degree of conflict from a global perspective. PPT entropy introduces the pignistic probability transformation function into Yan et al.'s entropy [27]. Compared with state-of-the-art belief entropies [27,30-34], it measures the uncertainty of evidence more accurately and makes full use of the intersection information of evidence to better estimate the degree of evidence conflict. Given the good performance of PPT entropy, this paper chooses it to measure the uncertainty of a body of evidence (BOE). Entropy distance describes the entropy difference between evidences, so as to measure the degree of conflict from a local perspective. These two measures are mutually complementary.
The proposed method deals with conflicting evidence in a more comprehensive way, so that the conflict measurement is more accurate and reasonable. Whether the belief entropy of conflict evidence and normal evidence is equal, close, or differs greatly, the method can still identify the conflict evidence and assign it a smaller weight. Not only are the drawbacks of existing methods overcome, but better convergence performance is also achieved. Two numerical examples in the experiments illustrate that the novel method is feasible and superior in dealing with conflicting evidence: the belief degree of the correct hypothesis shows 3.8% and 1.6% increases over existing methods, respectively. Furthermore, the belief degree of the true target increases to 98.8% in the target recognition application. In this research, the support degree of a BOE is a fraction in which PPT entropy is the numerator and the sum of entropy distances is the denominator. The greater the conflict between the evidences, the greater the entropy distance and the smaller the weight of the conflicting evidence.
In this paper, two contributions can be summarized as follows:
• We propose PPT entropy based on the pignistic probability transformation function. It fully considers the influence of the intersection between propositions on uncertainty and makes the uncertainty measurement of evidence more accurate, with a wider range.
• A novel method for conflict management is presented based on PPT entropy and entropy distance. It measures the conflict between evidences from the perspectives of both self-belief entropy and mutual belief entropy. Not only does it have better convergence performance, but it also obtains a higher belief value for the correct hypothesis and the true target.
To facilitate our discussion, Section 2 introduces some basic concepts. In Section 3, we propose PPT entropy and entropy distance. The property and requirements of behaviour of PPT entropy are discussed, and some examples are provided to illustrate the validity of our proposed belief entropy. Based on that, a novel method of conflict management is presented. In Section 4, an application in target recognition is shown to verify the effectiveness of our proposed method. Section 5 is the conclusion.

Preliminaries
In this section, some preliminaries are briefly introduced, including D-S evidence theory, several typical belief entropies and pignistic probability transform, for the purpose of understanding the descriptions in the rest of this paper.

D-S Evidence Theory
Suppose Θ is a set of mutually exclusive and exhaustive elements F_i (i = 1, 2, 3, 4, . . . , N). It can be defined as [32]

Θ = {F_1, F_2, . . . , F_N},    (1)

where Θ is called the frame of discernment (FOD), and F_i is named a single-element proposition or subset. We define 2^Θ as the power set, which contains 2^N elements and can be described as

2^Θ = {∅, {F_1}, . . . , {F_N}, {F_1, F_2}, . . . , Θ},    (2)

where ∅ is the empty set in Equation (2). The basic probability assignment (BPA) function m is defined as a mapping of the power set 2^Θ to [0, 1] satisfying

m(∅) = 0,  ∑_{A⊆Θ} m(A) = 1,    (3)

where the mass function m(A) represents the probability of support to A, and A is called a focal element, proposition or subset. This paper studies classical D-S evidence theory, so the mass function m(∅) is equal to 0. The belief function Bel(A) can be defined as [31]

Bel(A) = ∑_{B⊆A} m(B),

where Bel(A) represents the total belief in proposition A. In D-S evidence theory, two BPAs can be combined with Dempster's rule of combination, defined as follows [35]:

m(A) = (m_1 ⊕ m_2)(A) = (1 / (1 − K)) ∑_{B∩C=A} m_1(B) m_2(C),    (4)

in which

K = ∑_{B∩C=∅} m_1(B) m_2(C),    (5)

where ⊕ represents Dempster's combination rule. K is called the conflict coefficient, and its range is [0, 1]. The bigger K is, the more conflict there is between the two evidences.
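As an illustration, Dempster's rule in Equations (4)-(5) can be sketched in Python. This is a minimal sketch for the closed-world setting of this section; the representation of propositions as `frozenset` objects is our own choice, not part of the original method:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPAs (dicts mapping frozenset -> mass) with Dempster's rule."""
    combined, K = {}, 0.0
    for (B, b), (C, c) in product(m1.items(), m2.items()):
        inter = B & C
        if inter:                      # mass flows to non-empty intersections
            combined[inter] = combined.get(inter, 0.0) + b * c
        else:                          # conflict coefficient K accumulates here
            K += b * c
    if K >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {A: v / (1.0 - K) for A, v in combined.items()}

m1 = {frozenset("A"): 0.6, frozenset("AB"): 0.4}
m2 = {frozenset("B"): 0.5, frozenset("AB"): 0.5}
fused = dempster_combine(m1, m2)       # here K = 0.6 * 0.5 = 0.3
```

Note that the normalization by 1 − K is exactly what produces counterintuitive results when K approaches 1, which motivates the conflict management discussed later.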

Entropy
Several typical belief entropies are briefly introduced.

Shannon Entropy
Shannon entropy is an uncertainty measure of information volume and is denoted by [27]

H = −∑_{i=1}^{N} p_i log2 p_i,    (6)

where N is the number of basic states in the system and p_i is the probability of state i, satisfying ∑_{i=1}^{N} p_i = 1. The larger the Shannon entropy H is, the higher the degree of uncertainty.
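A quick Python check of this formula (our own illustrative snippet):

```python
import math

def shannon_entropy(p):
    """H = -sum_i p_i * log2(p_i); terms with p_i = 0 contribute nothing."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

H_uniform = shannon_entropy([0.25, 0.25, 0.25, 0.25])  # maximal uncertainty: 2 bits
H_certain = shannon_entropy([1.0])                     # no uncertainty: 0 bits
```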

Deng Entropy
Shannon entropy has made a great contribution to the measurement of uncertainty, but it has some limitations when a BPA is given, because it measures uncertainty based on probability. To solve this problem, Deng proposed Deng entropy in the framework of D-S evidence theory. It is a generalization of Shannon entropy and is defined as [30]

E_d(m) = −∑_{A⊆Θ} m(A) log2 ( m(A) / (2^|A| − 1) ),    (7)

where |A| is the cardinality of proposition A, that is, the total number of single-element subsets contained in proposition A.
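Deng entropy can be sketched directly from this definition (the `frozenset` BPA representation is our own convention):

```python
import math

def deng_entropy(m):
    """E_d(m) = -sum_A m(A) * log2( m(A) / (2**|A| - 1) )."""
    return -sum(v * math.log2(v / (2 ** len(A) - 1))
                for A, v in m.items() if v > 0)

# Vacuous BPA on a 3-element FOD: E_d = log2(2**3 - 1) = log2(7) ~ 2.807 bits,
# larger than the 1.585 bits Shannon entropy of a uniform Bayesian BPA.
E_vac = deng_entropy({frozenset("ABC"): 1.0})
```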

Zhou et al.'s Entropy
Zhou et al.'s belief entropy considers the influence of the scale of the BOE on uncertainty and is denoted by [31]

E_Md(m) = −∑_{A⊆Θ} m(A) log2 ( (m(A) / (2^|A| − 1)) e^{(|A|−1)/|S|} ),    (8)

where |S| is the cardinality of Θ, that is, the total number of single-element subsets contained in the FOD.
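A Python sketch, assuming the commonly cited form E_Md(m) = −∑ m(A) log2( (m(A)/(2^|A|−1)) e^{(|A|−1)/|S|} ) (this form is our reconstruction and should be checked against [31]):

```python
import math

def zhou_entropy(m, fod_size):
    """Assumed form: E_Md(m) = -sum_A m(A)*log2( m(A)/(2**|A|-1) * e**((|A|-1)/|S|) )."""
    return -sum(v * math.log2(v / (2 ** len(A) - 1)
                              * math.exp((len(A) - 1) / fod_size))
                for A, v in m.items() if v > 0)

# Vacuous BPA on Theta = {A, B, C}: log2(7) - (2/3)*log2(e), smaller than Deng entropy
E = zhou_entropy({frozenset("ABC"): 1.0}, 3)
```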

Yan et al.'s Entropy
Yan et al.'s belief entropy uses the belief function to extend the uncertainty measure and is an improvement on Zhou et al.'s belief entropy [27]:

H_n(m) = −∑_{A⊆Θ} m(A) log2 ( (Bel(A) / (2^|A| − 1)) e^{(|A|−1)/|S|} ).    (9)

Pignistic Probability Transform
Pignistic probability transform is defined as follows [36]:

BetP_m(A) = ∑_{B⊆Θ} (|A ∩ B| / |B|) · (m(B) / (1 − m(∅))),    (10)

where |A ∩ B| is the cardinality of the intersection of A and B. The mass function m(∅) is equal to 0 in a closed world (i.e., in classical D-S evidence theory), so it can be simplified to the following form:

BetP_m(A) = ∑_{B⊆Θ} (|A ∩ B| / |B|) m(B).    (11)
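The closed-world form of Equation (11) can be sketched in a few lines of Python (our own illustrative snippet):

```python
def betp(m, A):
    """BetP_m(A) = sum_B (|A & B| / |B|) * m(B), closed world (m(empty) = 0)."""
    return sum(len(A & B) / len(B) * v for B, v in m.items())

m = {frozenset("A"): 0.4, frozenset("AB"): 0.6}
p_A = betp(m, frozenset("A"))   # 0.4 + 0.6/2 = 0.7: {A,B}'s mass is split equally
p_B = betp(m, frozenset("B"))   # 0.6/2 = 0.3
```

Over the singletons, BetP always forms a probability distribution, which is why it is the standard bridge from a BPA back to probabilities.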

Proposed Method for Conflict Management
In evidence theory, a counterintuitive result is generated when combining highly conflicting evidence. To address this problem, it is necessary to assign weights to the evidence reasonably. Some existing approaches use self-belief entropy to determine the weights of evidence. However, they fail to identify the conflicting evidence when the belief entropy of conflicting evidence and normal evidence is equal or close. In this paper, by comprehensively considering the influence of self-belief entropy and mutual belief entropy on the conflict, we propose a novel method based on PPT entropy and entropy distance. In the following, we first propose PPT entropy and then entropy distance. Based on these, a novel method of conflict management is presented.

PPT Entropy
In D-S evidence theory, the pignistic probability transform is described as allocating belief to each proposition equally [37]. Its essence is the influence of the intersection between propositions on a proposition's belief degree. In this paper, we introduce the pignistic probability transformation function to extend the method of uncertainty measurement. The proposed belief entropy is denoted as follows:

H_p(m) = −∑_{A⊆Θ} m(A) log2 ( (BetP_m(A) / (2^|A| − 1)) e^{(|A|−1)/|S|} ),    (12)

where BetP_m(A) is the pignistic probability transform function shown in Equation (11). The proposed belief entropy is named PPT entropy. We can infer from Equation (12) that PPT entropy degenerates into Zhou et al.'s belief entropy when the multi-element propositions do not intersect, and degenerates further into Shannon entropy when there are only single-element propositions.
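PPT entropy can be sketched in Python. This sketch uses our reading of Equation (12), with BetP_m(A) in place of the mass inside the logarithm, which should be checked against the original equation:

```python
import math

def ppt_entropy(m, fod_size):
    """Assumed reading of Eq. (12):
       H_p(m) = -sum_A m(A) * log2( BetP_m(A)/(2**|A|-1) * e**((|A|-1)/|S|) )."""
    def betp(A):   # pignistic probability of A, Eq. (11), closed world
        return sum(len(A & B) / len(B) * v for B, v in m.items())
    return -sum(v * math.log2(betp(A) / (2 ** len(A) - 1)
                              * math.exp((len(A) - 1) / fod_size))
                for A, v in m.items() if v > 0)

# Probability consistency: a Bayesian BPA reduces to Shannon entropy (1 bit here)
H = ppt_entropy({frozenset("A"): 0.5, frozenset("B"): 0.5}, 2)
```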

Probability Consistency
When m is a Bayesian BPA, PPT entropy degenerates into Shannon entropy. Hence, probability consistency holds.
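Under our reading of Equation (12) (with BetP_m inside the logarithm, an assumption to be checked against the original), this reduction can be written out explicitly: for a Bayesian BPA every focal element is a singleton, so |A| = 1 and BetP_m({F_i}) = m({F_i}) = p_i, giving

```latex
\begin{aligned}
H_p(m) &= -\sum_{A\subseteq\Theta} m(A)\,
  \log_2\!\left(\frac{BetP_m(A)}{2^{|A|}-1}\, e^{\frac{|A|-1}{|S|}}\right)\\
       &= -\sum_{i=1}^{N} p_i \log_2\!\left(\frac{p_i}{2^{1}-1}\, e^{0}\right)
        = -\sum_{i=1}^{N} p_i \log_2 p_i ,
\end{aligned}
```

which is exactly Shannon entropy.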

Set Consistency
Suppose that PPT entropy satisfies the property of set consistency, which requires that H_p(m) = log2 |A| whenever there exists a set A such that m(A) = 1. Substituting m(A) = 1 (so that BetP_m(A) = 1) into Equation (12), we get

H_p(m) = log2 (2^|A| − 1) − ((|A| − 1)/|S|) log2 e ≠ log2 |A|

in general. Hence, set consistency does not hold.

Range
Mathematically, the value range of PPT entropy is [0, +∞). First, PPT entropy is always non-negative.
Second, the maximum value of PPT entropy is infinite.
The proposition A and the FOD Θ consist of at least one single-element subset, and there is no upper limit; thus, the ranges of |A| and |S| are [1, +∞). Moreover, the ranges of m(A) and BetP_m(A) are [0, 1]. Obviously, the maximum value of PPT entropy is infinite according to Equation (12).
Finally, we can conclude that the value range of PPT entropy is [0, +∞).

Additivity
Suppose that PPT entropy satisfies additivity, which means that it satisfies the equation H_p(m) = H_p(m_1) + H_p(m_2), where Θ_1 × Θ_2 is the product space of the sets Θ_1 and Θ_2, m is a BPA on Θ_1 × Θ_2, and m_1 and m_2 are the corresponding marginal BPAs. Computing the marginal BPAs for a counterexample on X × Y (where z_ij = (x_i, y_j)), we get H_p(m) > H_p(m_1) + H_p(m_2). Therefore, additivity does not hold.

Subadditivity
If PPT entropy satisfies the condition H_p(m) ≤ H_p(m_1) + H_p(m_2), where m is a BPA on the space Θ_1 × Θ_2 and m_1 and m_2 are the marginal BPAs on Θ_1 and Θ_2, then PPT entropy is said to satisfy subadditivity.
According to the property of additivity, we have already obtained H_p(m) > H_p(m_1) + H_p(m_2). Therefore, subadditivity does not hold.

Monotonicity
Given two FODs Θ_1 and Θ_2 with Θ_1 ⊆ Θ_2, PPT entropy is said to satisfy monotonicity if the uncertainty it assigns does not decrease when the FOD is enlarged.
For PPT entropy, enlarging the FOD never decreases the measured uncertainty; therefore, monotonicity holds.
To analyze the proposed entropy, Table 1 shows the properties of different entropies. Although the PPT entropy satisfies the probability consistency and monotonicity which can be considered as two important properties for uncertainty measure, it does not satisfy set consistency, additivity and subadditivity, which will bring some challenges to the extension of the uncertainty measure on more general theories. In addition, its scope of application will be limited to a certain extent.

Requirements of Behaviour for PPT Entropy
The requirements of behaviour (RB) for uncertainty measures suggested by Abellan and Masegosa [40] can be expressed in the following way:

RB1: The calculation of the uncertainty measure should be simple.

RB2: The uncertainty measure should reflect the uncertainty of conflict and non-specificity co-existing in D-S evidence theory.

RB3: The uncertainty measure should be sensitive to changes of the BPA.

RB4: The extension of the uncertainty measure in D-S evidence theory to more general theories must be possible.
In the next section, we will discuss the above requirements of behaviour for PPT entropy.

Low Computing Complexity
The PPT entropy has a relatively simple calculation and it is only necessary to obtain the pignistic probability transform function BetP m (A) according to the given BPA. When the mass assignments are only transferred to single-element propositions, the PPT entropy is degenerated into Shannon entropy and the calculation is simpler.

Concealment of Conflict and Non-Specificity
A simple transformation of Equation (12) is as follows:

H_p(m) = ∑_{A⊆Θ} m(A) log2 ( (2^|A| − 1) e^{(1−|A|)/|S|} ) − ∑_{A⊆Θ} m(A) log2 BetP_m(A),

where the last term cannot be further transformed into a form similar to Shannon entropy, so it does not measure the uncertainty of conflict. Similarly, the first term ∑_{A⊆Θ} m(A) log2 ( (2^|A| − 1) e^{(1−|A|)/|S|} ) cannot be further transformed into a form similar to I [40] and cannot measure the uncertainty of non-specificity. In other words, PPT entropy cannot be converted into a linear combination of S* and I. Here, the function S* is used as a conflict measure and the function I as a non-specificity measure, where I has the expression

I(m) = ∑_{A⊆Θ} m(A) log2 |A|.

Therefore, PPT entropy has no clear separation between conflict and non-specificity.

Sensitivity to Changes in Evidence
The PPT entropy is sensitive to change of the BPA. A detailed analysis is presented in Example 5. The value of H p could change with the change of the BPA as shown in Figure 1.

Extension on More General Theories
Currently, belief entropies based on Deng entropy in the D-S evidence theory framework are limited to the closed world, where the FOD is assumed to be complete. Tang et al. propose a nonzero mass function of the empty set (i.e., m(∅) ≠ 0), which extends belief entropy theory to the open world [34]. The calculation of H_p contains m(∅), as shown in Equation (10), so the extension of PPT entropy to more general belief entropy theory is possible when m(∅) ≠ 0. However, there are two key problems when it is extended to more general belief entropy theory. On the one hand, PPT entropy only satisfies the property of probability consistency with respect to Klir's five requirements, which is a huge challenge for extending belief entropy theory. On the other hand, PPT entropy becomes meaningless when m(∅) = 1.

Examples
The belief entropy is calculated with Deng entropy, Zhou et al.'s belief entropy, Yan et al.'s belief entropy, and our proposed belief entropy, respectively. In Example 1, for evidence S_1, the intersection of the three multi-element subsets is the single-element subset B. However, for evidences S_2 and S_3, the intersection of two multi-element subsets is a single-element subset: for evidence S_2, the intersections of the multi-element subsets are the single-element subsets B and D, while for evidence S_3 they are the single-element subsets A and B. What is more, the basic probability assignment m(B, D) is higher than m(A, B) in the BOEs. Based on the above analysis, it is obvious that the uncertainty of the three BOEs is not the same. However, the belief entropy of the three evidences obtained by Deng entropy, Zhou et al.'s, and Yan et al.'s belief entropy is the same, which is contrary to common sense. Our proposed belief entropy obtains a much more reasonable and satisfactory result, and the uncertainty of S_1, S_2 and S_3 increases in turn. This is in line with human intuition, as shown in Figure 2.
Factors influencing the degree of uncertainty of evidence include the cardinality of the subset, the scale of BOE and the intersection between subsets. The three uncertainty measures fail to distinguish Example 1. The reason is that Deng entropy only considers the cardinality of the subset, while Zhou's entropy just considers the cardinality of the subset and the scale of BOE. Although Yan's entropy considers intrinsic connection between subsets in BOE, it only makes full use of the information of inclusion relationship between subsets and ignores the intersection between subsets. When there is only an intersection relationship among the subsets and no inclusion relationship, Yan's entropy fails to distinguish Example 1 and can be degenerated into Zhou's entropy. It is the reason that the uncertain value is the same obtained by Yan's entropy and Zhou's belief entropy. The comparison results of four uncertainty measures are given in Table 2.
The belief entropy is calculated with Deng entropy and with our proposed belief entropy. In Example 2, compared with evidence S_2, evidence S_1 has a greater number of multi-element subsets whose intersection is the single-element subset C; the same holds for evidences S_3 and S_4. Obviously, the uncertainty of evidence S_1 differs from that of evidence S_2, and the uncertainty of evidence S_3 differs from that of evidence S_4.
The comparison results of different uncertainty measures are given in Table 3.
According to the results, we can conclude that the proposed belief entropy overcomes the limitations of the previous three methods and can validly distinguish the uncertainty of the four BOEs, whereas the other three methods fail. In addition, for the same BOE from Example 2, the degree of uncertainty calculated by the four methods is monotonically decreasing, because each later method makes more use of the potential internal information in the BOE. The example is cited from [27].
As we can see from the results, when the FOD has only one single-element proposition, the value of uncertainty is 0, and the four entropies and Shannon entropy give the same result. The example is cited from [27].
The results of Examples 3 and 4 show that the four entropies and Shannon entropy agree when the BPA consists only of single-element propositions. At the same time, this also illustrates that PPT entropy retains the characteristics of Shannon entropy. For the four uncertainty measures, the uncertainty values of a vacuous BPA are all larger than those of a Bayesian one. This is intuitive, because a vacuous BPA represents information that is completely unknown to the information system, while a Bayesian one provides more certain information. It shows that PPT entropy inherits the advantages of Deng entropy, as shown in Figure 4.
The values of E_d, E_Md, H_n and H_p all change with the change of the BPA in each step. The uncertainty values of a vacuous BPA are larger than those of a categorical one. This is intuitive, because a vacuous BPA means completely unknown, while a categorical one means being quite certain. When m(θ_1) = 1, the uncertainty values of the different uncertainty measures all drop to zero. This is also intuitive, because the four belief entropies all degenerate into Shannon entropy when there is only one single-element proposition, as mentioned in Example 3. It once again illustrates that PPT entropy retains the characteristics of Shannon entropy.
As shown in Figure 5, when |B| has a smaller value, the values of E_d, E_Md, H_n and H_p change faster at each step, because the same mass assignments are assigned to a proposition with smaller cardinality. When there is one multi-element proposition in the BOE, i.e., m(Θ) = 1 or m(B) = 1, the values of E_Md, H_n and H_p are equal. However, E_d differs from the other uncertainty measures, because there is no interaction between propositions when there is only one multi-element proposition, and the scale of the FOD plays a decisive role in the uncertainty measure; only E_d does not consider the influence of the scale of the FOD on the uncertainty measurement. What is more, the uncertainty value of H_p is always the smallest, which illustrates that PPT entropy has less information loss than the other uncertainty measures and gives the most accurate result.
As shown in Figure 6, the values of E_d, E_Md, H_n and H_p all increase with the increase of |A| in the first 8 steps and decrease or increase in the last step. The reasons are as follows. According to monotonicity, the uncertainty of a BOE consisting of two multi-element propositions undoubtedly increases in the first 8 steps.
Comparatively, because the BPA is shifted from two multi-element propositions to one multi-element proposition in the last step, the changes in uncertainty are inconsistent and may be large or small. If a has a larger value, they increase even more, because relatively more mass is assigned to a proposition with larger cardinality. In addition, we can see that the values of H_n and H_p get closer and closer, almost overlapping at some points, as a increases, because the intersection between multi-element propositions has less influence on the uncertainty when m(Θ) decreases. Here A represents a proposition, and the cardinality of proposition A varies from 1 to 14; the example is cited from [27]. The belief entropy calculated with the four uncertainty measures is shown in Figure 7.
As we can see in Figure 7, as the cardinality of proposition A continues to increase, the values of E_d, E_Md, H_n and H_p increase monotonically, which further illustrates the monotonicity of PPT entropy. Furthermore, by taking into consideration the intersection between propositions, PPT entropy takes advantage of more valuable information in the BOE, and its uncertainty degree is always smaller than that of the other three uncertainty measures, which ensures that it is more reasonable and effective as an uncertainty measure.

Entropy Distance
In this paper, we introduce the concept of entropy distance. It is a novel distance measurement method used to measure the difference between the belief entropies of evidences, thereby characterizing the degree of conflict between evidences from the perspective of mutual belief entropy. The more similar two BOEs are, the smaller the entropy distance between them.
Suppose that A_1, A_2, · · · , A_K are the focal elements of the BPAs m_i and m_j; then the entropy distance between m_i and m_j is defined as follows:

d_H(m_i, m_j) = max_{1≤k≤K} | H_p(m_i(A_k)) − H_p(m_j(A_k)) |,    (14)

where H_p(m_i(A_k)) is the belief entropy of the focal element A_k, and d_H(m_i, m_j) denotes the maximum value of the difference in belief entropy over the corresponding focal elements. Entropy distance, as a novel distance measurement method, has the following two advantages over relative entropy [42]. On the one hand, entropy distance satisfies symmetry. On the other hand, it can measure the difference between two BOEs more accurately and over a wider range. Next, an example is used to verify the advantages of entropy distance. The distance values between m_1 and m_2, between m_3 and m_4, and between m_5 and m_6 are calculated with relative entropy and with entropy distance, respectively. The distance value between m_1 and m_2 obtained by entropy distance is the same as that between m_2 and m_1, which shows that entropy distance satisfies symmetry, whereas relative entropy does not. The difference between m_1 and m_2 is intuitively different from the difference between m_3 and m_4, but the two distance values obtained by relative entropy are both equal to 0.841. The two distance values obtained by entropy distance are not equal, which illustrates that our proposed method can distinguish the two situations and measure the difference between two BOEs more accurately. Since the denominator is 0, the distance value between m_5 and m_6 cannot be calculated with relative entropy.
In contrast, the distance value between the two BOEs can be measured by entropy distance, which verifies that entropy distance measures the difference between two BOEs over a wider range. The comparison results of the two distance methods are given in Table 4. In the existing methods, belief entropy itself measures the conflict between evidences from a global point of view, whereas entropy distance measures the conflict between evidences from a local perspective. By introducing entropy distance into conflict management, conflicts are measured from both global and local points of view, and the conflict measurement becomes more accurate and comprehensive. Even when the belief entropy of conflict evidence and normal evidence is equal or close, the method can still effectively identify the conflict evidence.
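To make Equation (14) concrete, here is a Python sketch. It assumes that the per-focal-element entropy H_p(m_i(A_k)) is the A_k term of PPT entropy (our reading, to be checked against the original), with a missing focal element contributing 0:

```python
import math

def ppt_terms(m, fod_size):
    """Per-focal-element PPT entropy terms (assumed reading of H_p(m(A_k)))."""
    def betp(A):   # pignistic probability of A, closed world
        return sum(len(A & B) / len(B) * v for B, v in m.items())
    return {A: -v * math.log2(betp(A) / (2 ** len(A) - 1)
                              * math.exp((len(A) - 1) / fod_size))
            for A, v in m.items() if v > 0}

def entropy_distance(mi, mj, fod_size):
    """d_H(m_i, m_j) = max_k |H_p(m_i(A_k)) - H_p(m_j(A_k))|."""
    ti, tj = ppt_terms(mi, fod_size), ppt_terms(mj, fod_size)
    return max(abs(ti.get(A, 0.0) - tj.get(A, 0.0)) for A in set(ti) | set(tj))

m1 = {frozenset("A"): 0.7, frozenset("B"): 0.3}
m2 = {frozenset("A"): 0.3, frozenset("B"): 0.7}
d12 = entropy_distance(m1, m2, 2)
d21 = entropy_distance(m2, m1, 2)   # symmetry: d12 == d21
```

Unlike relative entropy, this distance stays well-defined when some focal element has zero mass in one of the BPAs, since the missing term simply contributes 0.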

Proposed Method for Conflict Management Based on PPT Entropy and Entropy Distance
The flowchart of the proposed conflict management approach based on PPT entropy and entropy distance is given in Figure 8.
Suppose that Θ is the FOD and there are n pieces of evidence.

1. Calculate the belief entropy of the focal elements and BOEs with Equation (12).
2. Construct the entropy distance matrix with Equation (14).
3. Calculate the support degree of the BOEs with Equation (15):

Sup(m_i) = H_p(m_i) / ∑_{j=1, j≠i}^{n} d_H(m_i, m_j),    (15)

where Sup(m_i) represents the support degree of BOE m_i. It is a fraction in which PPT entropy is the numerator and the sum of entropy distances is the denominator. Entropy distance is inversely proportional to the weight of evidence, which is consistent with the intuitive analysis. In contrast, for the existing methods the support degree of a BOE is calculated by Equation (16) and is equal to the belief entropy itself:

Sup(m_i) = H(m_i),    (16)

where H(m_i) represents the different belief entropies.

4. Measure the weight value of the BOEs with Equation (17):

w_i = Sup(m_i) / ∑_{j=1}^{n} Sup(m_j).    (17)

5. Get the modified BPA by weighted averaging of the original evidences.
6. Using Dempster's rule of combination, Equation (4), combine the modified BPA n − 1 times.
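The whole pipeline can be sketched end-to-end in Python, assuming our readings of Equations (12), (14), (15) and (17); the sensor BPAs below are made-up illustrative values, not those of Table 5:

```python
import math
from itertools import product

def betp(m, A):                       # Eq. (11), closed world
    return sum(len(A & B) / len(B) * v for B, v in m.items())

def ppt_terms(m, s):                  # per-focal-element terms of Eq. (12) (our reading)
    return {A: -v * math.log2(betp(m, A) / (2 ** len(A) - 1)
                              * math.exp((len(A) - 1) / s))
            for A, v in m.items() if v > 0}

def d_entropy(mi, mj, s):             # Eq. (14): max difference of focal-element entropies
    ti, tj = ppt_terms(mi, s), ppt_terms(mj, s)
    return max(abs(ti.get(A, 0.0) - tj.get(A, 0.0)) for A in set(ti) | set(tj))

def dempster(m1, m2):                 # Eq. (4), normalizing out the conflict K
    out, K = {}, 0.0
    for (B, b), (C, c) in product(m1.items(), m2.items()):
        if B & C:
            out[B & C] = out.get(B & C, 0.0) + b * c
        else:
            K += b * c
    return {A: v / (1.0 - K) for A, v in out.items()}

def fuse(bpas, s):
    n = len(bpas)
    sup = []
    for i, m in enumerate(bpas):      # Steps 1-3: Sup = H_p / sum of entropy distances
        h = sum(ppt_terms(m, s).values())
        d = sum(d_entropy(m, bpas[j], s) for j in range(n) if j != i)
        sup.append(h / d if d > 0 else h)
    w = [x / sum(sup) for x in sup]   # Step 4: normalized weights, Eq. (17)
    avg = {}                          # Step 5: weighted-average (modified) BPA
    for wi, m in zip(w, bpas):
        for A, v in m.items():
            avg[A] = avg.get(A, 0.0) + wi * v
    out = avg                         # Step 6: combine the modified BPA n-1 times
    for _ in range(n - 1):
        out = dempster(out, avg)
    return out

A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
bpas = [{A: 0.70, B: 0.20, C: 0.10},  # illustrative sensor readings; the second
        {A: 0.05, B: 0.90, C: 0.05},  # one conflicts with the other three
        {A: 0.60, B: 0.20, C: 0.20},
        {A: 0.65, B: 0.20, C: 0.15}]
result = fuse(bpas, 3)                # hypothesis A should dominate after fusion
```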

Numerical Example 9
Example 12. Suppose that, in the same frame of discernment Θ = {A, B, C}, the system has obtained data from four different types of sensors, and the BPA of each sensor is shown in Table 5. Intuitively, the evidence m_2 is highly conflicting with the other evidence, and the hypothesis A should obtain the highest belief. According to the proposed method, the specific calculation process is shown below.
Step 1: Calculate the belief entropy of focal elements and BOEs. The belief entropy of focal elements can be obtained as follows: Step 4: Measure the weight of the four BOEs.
Step 5: Get the modified BPA. Step 6: Use the Dempster's rule of combination to combine the modified BPA 3 times.
The fusion results of four BPAs with different methods are shown in the Table 6.
In this example, the belief entropy of each of the four evidences is 0.469 according to the calculated results; the belief entropy of the conflict evidence and the normal evidence is thus equal. As seen from the results, the existing methods can obtain the correct result. However, the same weight is allocated to each evidence, so the conflicting evidence fails to be handled effectively in the subsequent fusion process. The reason is that the existing methods use only belief entropy itself as the support degree to assign the weight of evidence.
Compared with existing methods, a smaller weight is assigned to the conflicting evidence m_2 by our proposed approach, which reduces the impact of conflict evidence on the subsequent evidence combination. It not only has better convergence performance, but also obtains a higher belief degree for the hypothesis A. The feasibility of the proposed method of conflict management is thus verified.
Example 13. Suppose that, in the same frame of discernment Θ = {A, B, C}, the system has obtained data from five different types of sensors [43], and the BPA of each sensor is shown in Table 7. Intuitively, the evidence m_3 is highly conflicting with the other evidence, and the hypothesis A should obtain the highest belief. The example is cited from [43]. Table 7 shows the basic probability assignment (BPA) of the multi-sensor data. The fusion results of the five BPAs with different methods are shown in Table 8.
As seen from the results, the convergence speed of the proposed method is faster: m(A) = 0.704 after three evidences, including the conflict evidence m_3, are combined, which is higher than that of the existing approaches. What is more, the combination results promptly converge to the desired value as the number of evidences increases: m(A) = 0.934 after four evidences are combined, and m(A) = 0.982 after five evidences are fused. The belief degree of the hypothesis A increases by 1.6% compared with the existing methods. The reason is that our proposed method comprehensively considers the impact of self-belief entropy and mutual belief entropy on conflict management. It not only measures the degree of conflict between evidences effectively, but also further strengthens the influence of normal evidence while further weakening the influence of conflicting evidence. Additionally, the belief entropies of the five evidences are 1.190, 1.238, 1.109, 1.238 and 1.238 by calculation; the belief entropy of the conflict evidence is thus close to that of the normal evidences. So we can conclude from Tables 6 and 8 that conflict can be effectively handled whether the belief entropy of conflict evidence and normal evidence is equal or close. The robustness and superiority of the proposed approach are fully illustrated.

Application
In this section, the effectiveness of the proposed method in a target recognition application is shown. Not only are the results consistent with those of existing methods, but the belief degree of the true target is also improved. The example is cited from [15,24,26,28,29].

Example 14.
In a multi-target recognition system, three targets are denoted A, B and C. Suppose that there are five sensors in total, each providing one piece of evidence. The BPA of each sensor is shown in Table 9. Intuitively, the evidence m2 is highly conflicting with the other evidence, so the target A should obtain the highest belief. According to the proposed method, the specific calculation process is shown below.
Step 1: Calculate the belief entropy of the focal elements and the BOEs.
Step 2: Establish the entropy distance matrix D5×5.
Step 4: Measure the weights of the five BOEs.
Step 5: Obtain the modified BPA.

The fusion results of the five BPAs with the different methods are shown in Table 10. As can be seen from Table 10, the target recognized by our improved approach is consistent with that of the existing methods. On this basis, the belief degree of the true target is also improved, from 98.2% to 98.8%; as shown in Examples 12 and 13, the greater the conflict between evidences, the greater the performance improvement. Two reasons for the effectiveness of the proposed method can be given. On the one hand, we measure conflict between evidences from both an overall and a local point of view, so that conflicts can be measured accurately. On the other hand, the introduction of entropy distance makes the difference between evidences more precise, so conflicting evidence is assigned a smaller weight than normal evidence. Furthermore, according to the calculation results, the belief entropies of the five evidences are 1.566, 0.469, 1.219, 1.313 and 1.224, respectively.
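Steps 4 and 5 can be sketched as a Murphy-style scheme: average the BPAs using the computed weights, then combine the averaged BPA with itself n-1 times by Dempster's rule. This is an assumed realisation (the paper's exact modification rule may differ), and the BPAs and weights below are hypothetical.

```python
from functools import reduce
from itertools import product

def dempster(m1, m2):
    """Dempster's rule of combination for two BPAs over the same frame."""
    out, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in out.items()}

def modified_bpa(bpas, weights):
    """Weighted average of the BPAs: pool the focal elements and scale
    each mass by its evidence weight (weights sum to 1)."""
    avg = {}
    for bpa, w in zip(bpas, weights):
        for focal, mass in bpa.items():
            avg[focal] = avg.get(focal, 0.0) + w * mass
    return avg

def fuse(bpas, weights):
    """Average the n BPAs by weight, then combine the average with
    itself n-1 times using Dempster's rule."""
    avg = modified_bpa(bpas, weights)
    return reduce(dempster, [avg] * len(bpas))

A, B = frozenset("A"), frozenset("B")
bpas = [{A: 0.7, B: 0.3}, {A: 0.6, B: 0.4}, {A: 0.1, B: 0.9}]  # m3 conflicts
weights = [0.4, 0.4, 0.2]  # hypothetical: conflicting evidence down-weighted
result = fuse(bpas, weights)
```

Because the conflicting third evidence carries a smaller weight, the fused belief still favours A, illustrating how the weighting step shields the combination from conflict.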
Clearly, there is a large difference between the belief entropy of the conflicting evidence and that of the normal evidences, unlike in Examples 12 and 13. This fully demonstrates that our proposed approach can deal with conflict effectively over a wider range of cases.
It should be pointed out that when the sum of entropy distances ∑_{j=1}^{n} d_H(m_i, m_j), which appears as a denominator, is equal to zero, the proposed method becomes meaningless. To solve this problem, we simply replace ∑_{j=1}^{n} d_H(m_i, m_j) with e^{∑_{j=1}^{n} d_H(m_i, m_j)}; since e^0 = 1, the denominator is never zero and the proposed method remains effective.
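This guard can be sketched as follows. The inverse-distance weighting used here is a hypothetical scheme chosen only to exhibit the substitution: when every pairwise entropy distance is zero (all evidences identical), the exponential replacement keeps the weights well defined.

```python
import math

def weights_from_distances(dist_matrix):
    """Turn a matrix of pairwise entropy distances into evidence weights.

    Hypothetical scheme: each evidence's total distance to the others
    appears in a denominator, so a smaller total distance (more support
    from the other evidences) yields a larger weight. If any total is
    zero, substitute e**total for it, as suggested in the text.
    """
    totals = [sum(row) for row in dist_matrix]
    if any(t == 0 for t in totals):
        totals = [math.exp(t) for t in totals]  # e**0 = 1: no zero division
    support = [1.0 / t for t in totals]
    norm = sum(support)
    return [s / norm for s in support]
```

With identical evidences the distance matrix is all zeros, so the substitution produces uniform weights, which is exactly the behaviour one would want in the absence of any conflict.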

Conclusions
In this paper, we comprehensively consider the influences of both the belief entropy itself and the mutual belief entropy on conflict management, and propose a novel approach based on PPT entropy and entropy distance. PPT entropy measures conflict from the perspective of self-belief entropy. Compared with the state-of-the-art belief entropy, it measures the uncertainty of evidence more accurately and makes full use of the intersection information of evidence to estimate the degree of evidence conflict more reasonably. Entropy distance is a new distance measure used to quantify the conflict between evidences from the perspective of mutual belief entropy. The combination of the two measures makes conflict measurement more accurate and comprehensive. The example results show that the proposed method assigns a smaller weight to conflicting evidence so as to minimize the influence of conflict on the subsequent evidence combination, no matter whether the belief entropy of the conflicting evidence and the normal evidence is equal, close, or very different. It not only has a faster convergence speed, but the belief degree of the correct hypothesis also increases by 3.8% and 1.6% compared to existing methods in the two experiments, respectively. Furthermore, our proposed method can efficiently manage conflict over a wider range of cases, as shown in the target recognition application.
Since the proposed method has so far only been shown to obtain satisfactory results in target recognition, follow-up studies will test its performance in univariate time series classification. First, we choose the Multilayer Perceptron, the Fully Convolutional Network and the Residual Network as three classifiers. Then, univariate time series classification datasets from the UCR archive are used to obtain classification results on the three classifiers. Finally, classification results that conflict with each other are processed by our proposed method.
Author Contributions: In this research activity, all the authors were involved in the data collection and preprocessing phase, developing the theoretical concept of the model, empirical research, results analysis and discussion, and manuscript preparation. All authors have agreed to the published version of the manuscript.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

PPT    pignistic probability transformation
D-S    Dempster-Shafer
BOE    body of evidence
FOD    frame of discernment
RB     requirements of behaviour
NSFC   National Natural Science Foundation of China