Article

Improved Base Belief Function-Based Conflict Data Fusion Approach Considering Belief Entropy in the Evidence Theory

School of Big Data and Software Engineering, Chongqing University, Chongqing 401331, China
* Author to whom correspondence should be addressed.
Entropy 2020, 22(8), 801; https://doi.org/10.3390/e22080801
Submission received: 19 June 2020 / Revised: 14 July 2020 / Accepted: 20 July 2020 / Published: 22 July 2020
(This article belongs to the Section Signal and Data Analysis)

Abstract

Due to the nature of the Dempster combination rule, it may produce counterintuitive results. Therefore, an improved method for conflicting evidence fusion is proposed. In this paper, the belief entropy in D–S theory is used to measure the uncertainty of each piece of evidence. First, an initial belief degree is constructed using an improved base belief function. Then, the information volume of each evidence group is obtained by calculating its belief entropy, which modifies the belief degree to yield more reasonable final evidence. After evidence modification, applying the Dempster combination rule produces the final result, which helps solve conflicting data fusion problems. The rationality and validity of the proposed method are verified by numerical examples and by applying the method to a classification data set.

1. Introduction

Dempster–Shafer theory (D–S theory) [1,2] plays a vital role in addressing uncertainty in medical diagnosis [3], target recognition [4,5], fault diagnosis [6], classification [7,8,9], clustering [10,11,12], risk analysis [13] and many other fields [14]. D–S theory can clearly measure the uncertainty of events and thus provide a basis for decision-making from data fusion results. However, due to the complexity of data, evidence conflicts are often encountered in practical data processing. In [15], concepts of conflict from different perspectives are proposed to clarify what conflict is and where conflicts come from. In [16,17], Zadeh points out that if a conflict exists between bodies of evidence, the normalization step of classical evidence theory often yields the opposite of the intuitive result. Due to the nature of the Dempster combination rule, it may produce counterintuitive results [16,17]. Smets analyzes the 'jungle' of combination rules and the nature of the combinations [18]. Classical evidence theory cannot deal with conflicting data effectively, which greatly restricts the promotion and application of evidence theory. Therefore, this paper studies conflicting data fusion.
Because pieces of evidence from multiple information sources are often inconsistent, the data are often in conflict. Many experts and scholars have studied conflicting data fusion, and many methods now exist [19,20]. Part of the research focuses on proposing new combination rules, such as an inconsistency measure-based rule [21] and a combination rule considering evidence dependence [22], or on improving the original combination rules using a belief entropy-based method [23] or fuzzy elements [24], so as to improve the results of conflicting data fusion. In [25], some basic principles are proposed after a systematic review of existing fusion rules, which can reasonably handle fusion when incomplete information exists. In [26], a new combination rule is proposed based on the analysis and illustration of similarity collision, aiming to solve the conflict problem. In [27], the Decision Making Trial and Evaluation Laboratory (DEMATEL) method is proposed to merge conflicting data; the new combination rule can effectively solve problems in the recognition field. In [28], an improved combination rule of D numbers is applied to emitter identification. In [29], a method is proposed to select source behavior based on a very general and expressive fusion scheme; its important advantage is that it can clearly explain the assumptions about the sources. Furthermore, incomplete information should also be considered in conflicting data fusion [30,31,32].
In addition, another approach to conflict fusion is to manage the uncertainty in the evidence sources before fusion. Entropy is a typical method for uncertainty measurement and management [33]. In evidence theory, belief entropy [34,35,36] or an uncertainty measure of the mass function [37,38] is used to quantify the information volume of evidence and thus modify the evidence sources. In [39], Deng entropy is proposed, which deals with the uncertainty of basic probability assignments both effectively and correctly. Deng entropy, as a belief entropy, has been widely used in many applications such as risk analysis [40]. In [41], a multiple-criteria decision-making method based on D numbers and belief entropy is proposed, which can deal with conflict problems effectively. In [42], a novel belief entropy based on the belief and plausibility functions is proposed to measure the uncertainty of basic probability assignments; it can handle conflicts reasonably during information fusion. In [43], an improved method for combining conflicting evidence is proposed based on a similarity measure between bodies of evidence and belief function entropy.
Besides, constructing an initial belief for each piece of evidence can reduce the conflict between BPAs. In [44], a base belief function is proposed, which modifies the BPAs to deal with conflicting data fusion. Based on this, an improved method is proposed in [45] to manage conflicting data by assigning an elementary belief. Following this strategy, the improved base belief function [45] and belief entropy [39] are used here to solve the conflicting data fusion problem. The procedure of the proposed method is as follows. First, the BPAs are modified by the improved base belief function method. Second, belief entropy is used to calculate the information volume and obtain the weight of each evidence group. Third, the weights are used to modify the BPAs again. Finally, the Dempster combination rule is used for data fusion.
The proposed method can solve data conflict problems effectively and obtain better combination results. It considers both the focal elements in the current evidence and the propositions in the power set space. In addition, it reallocates the BPAs for conflicting data, which also handles propositions whose initial BPA is zero in an evidence group. At the same time, because different information sources have different influences on the final result, the proposed method distributes weights according to the information volume of the information sources. The final BPAs obtained using belief entropy make the data fusion results more logical.
The remainder of this paper is organized as follows. In Section 2, we review some basic concepts. In Section 3, we propose an improved approach that uses information volume to weight basic probability assignments, so as to obtain a reasonable combination result using D–S theory; a few examples are given to verify the correctness of the proposed method. In Section 4, classification experiments are presented to show the effectiveness of the proposed method. Open issues are discussed in Section 5. Finally, conclusions are given in Section 6.

2. Preliminaries

In this section, some preliminaries are introduced.

2.1. Dempster–Shafer Evidence Theory

Dempster–Shafer theory [1,2], also known as belief function theory, extends Bayesian subjective probability theory. The theory was developed by Shafer, who also introduced the concept of the belief function; he formed a set of mathematical methods for "evidence" and "combination" to handle uncertain reasoning. D–S evidence theory does not require prior probabilities and can represent "uncertainty" well, so it is widely used to deal with uncertain data. As an uncertain reasoning method, it is mainly applied to information fusion, expert systems, legal case analysis, and multi-attribute decision-making. Its most distinctive characteristic is the use of "interval estimation" instead of "point estimation" to describe uncertain information, which distinguishes the unknown from the uncertain and accurately reflects the collected evidence, showing great flexibility.
Let U be the frame of discernment (FOD). A Basic Probability Assignment (BPA) is a mass function $m: 2^U \to [0,1]$ that satisfies
$$m(\emptyset) = 0, \qquad \sum_{A \subseteq U} m(A) = 1.$$
In the FOD, the belief function is defined as
$$Bel(A) = \sum_{B \subseteq A} m(B).$$
The plausibility function [46] is defined as
$$Pl(A) = \sum_{B \cap A \neq \emptyset} m(B).$$
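Both functions can be computed directly from a mass function represented as a dict keyed by frozenset focal elements; the following is a minimal sketch (the helper names `bel` and `pl` are ours, not from the paper):

```python
def bel(m, A):
    # Bel(A): total mass committed to subsets of A
    return sum(v for B, v in m.items() if B <= A)

def pl(m, A):
    # Pl(A): total mass of focal elements compatible with A (B ∩ A ≠ ∅)
    return sum(v for B, v in m.items() if B & A)

m = {frozenset('a'): 0.5, frozenset('ab'): 0.3, frozenset('abc'): 0.2}
interval = (bel(m, frozenset('a')), pl(m, frozenset('a')))  # belief interval for {a}
```

Here the belief interval for {a} is [0.5, 1.0]: only m({a}) supports {a} directly, while every focal element intersects it.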
The Dempster combination rule is a key step to combine the output of multiple principals. For two mass functions m 1 and m 2 , the Dempster combination rule can be defined as follows:
$$m_{1,2}(A) = (m_1 \oplus m_2)(A) = \frac{\sum_{B, C \subseteq \Omega,\; B \cap C = A} m_1(B) \times m_2(C)}{1 - \sum_{B \cap C = \emptyset} m_1(B) \times m_2(C)},$$
where a coefficient K is defined as follows:
$$K = \sum_{B \cap C = \emptyset} m_1(B) \times m_2(C).$$
The advantages of the Dempster combination rule are mainly reflected in cases of low evidence conflict. However, the rule also has disadvantages. If there is high conflict between two pieces of evidence, the following defects appear: the rule may assign 100% belief to a proposition with little support, producing counterintuitive results, and it is very sensitive to the allocation of basic belief.
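This counterintuitive behavior is easy to reproduce. Below is a minimal sketch of Equations (4) and (5) applied to Zadeh's example, with mass functions as frozenset-keyed dicts (illustrative code, not from the paper):

```python
def dempster(m1, m2):
    """Dempster's rule for two mass functions over frozenset focal elements."""
    fused, K = {}, 0.0
    for B, vb in m1.items():
        for C, vc in m2.items():
            if B & C:
                fused[B & C] = fused.get(B & C, 0.0) + vb * vc
            else:
                K += vb * vc  # conflict coefficient, Eq. (5)
    return {A: v / (1 - K) for A, v in fused.items()}

# Zadeh's example: near-total conflict between the two sources
a, b, c, ab = frozenset('a'), frozenset('b'), frozenset('c'), frozenset('ab')
m1 = {a: 0.99, ab: 0.01}
m2 = {b: 0.01, c: 0.99}
fused = dempster(m1, m2)  # all mass collapses onto the barely supported {b}
```

With K = 0.9999, normalization assigns (essentially) full belief to {b}, even though neither source considered {b} plausible.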
In D–S theory, for hypothesis A in FOD, the belief function B e l ( A ) and plausibility function P l ( A ) are calculated according to the basic probability assignment BPA to form the belief interval [ B e l ( A ) , P l ( A ) ] , which is used to indicate the degree of confirmation of hypothesis A.

2.2. Belief Entropy

Belief entropy is one of the hot issues in the field of information fusion; many types of belief entropy have been proposed, such as Dubois–Prade entropy [47], Jirousek–Shenoy entropy [48,49], Deng entropy [39], and so on [50]. Deng entropy, as a measure of uncertain information, is defined as follows [39,51]:
$$E_d(m) = -\sum_{A \subseteq X} m(A) \log_2 \frac{m(A)}{2^{|A|} - 1},$$
where m is a mass function defined on the FOD X, A is a focal element of m, and |A| stands for the cardinality of A.
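Equation (6) transcribes directly into code; in this sketch (variable names are ours), a BPA whose focal elements are all singletons recovers the Shannon entropy, since 2^|A| − 1 = 1:

```python
import math

def deng_entropy(m):
    # Deng entropy, Eq. (6): m maps frozenset focal elements to masses
    return -sum(v * math.log2(v / (2 ** len(A) - 1))
                for A, v in m.items() if v > 0)

shannon_like = deng_entropy({frozenset('a'): 0.5, frozenset('b'): 0.5})  # = 1.0
vacuous = deng_entropy({frozenset('abc'): 1.0})  # = log2(7) ≈ 2.807
```

The second call shows how multi-element focal sets raise the entropy: total ignorance over a three-element FOD yields log2(2^3 − 1).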

2.3. Improved Base Belief Function

The improved base belief function is proposed in [45] to obtain modified BPAs before data fusion. Let θ be a set of N mutually exclusive possible values. The power set of θ is 2^θ, which contains 2^N elements. If the FOD is complete, m(∅) = 0. Let λ be the number of propositions assigned an initial belief degree in the evidence group. The improved base belief function n(R_i) is then defined as [45]:
$$n(R_i) = \frac{1}{\lambda},$$
where $R_i$ represents a subset of the FOD $\Omega$ and $\lambda$ represents the number of propositions with an initial belief degree in the evidence group. Then $n(R_i)$ is adopted to modify the initial BPA m through the arithmetic mean [45]:
$$m'(R_i) = \frac{n(R_i) + m(R_i)}{1 + \frac{2^{|\Omega|} - 1}{\lambda}}.$$
The following is an example of calculating the improved base belief function [45]. For FOD Ω = { a , b , c } , the BPAs are as follows:
m1(a) = 0.99, m1(a,b) = 0.01;
m2(b) = 0.01, m2(c) = 0.99.
There are four focal elements {a}, {b}, {c} and {a,b} in m1 and m2, so λ = 4. Using Equation (7), the value of the improved base belief function is n(R_i) = 1/λ = 1/4. There are 3 elements in the FOD, so the size of its power set (excluding the empty set) is 2^3 − 1 = 7. With Equation (8), the modified BPAs are as follows:
m′1(a) = (0.99 + 1/4)/(1 + 7/4) = 0.45, m′1(a,b) = (0.01 + 1/4)/(1 + 7/4) = 0.09,
m′1(b) = m′1(c) = m′1(a,c) = m′1(b,c) = m′1(a,b,c) = (0 + 1/4)/(1 + 7/4) = 0.09;
m′2(b) = (0.01 + 1/4)/(1 + 7/4) = 0.09, m′2(c) = (0.99 + 1/4)/(1 + 7/4) = 0.45,
m′2(a) = m′2(a,b) = m′2(a,c) = m′2(b,c) = m′2(a,b,c) = (0 + 1/4)/(1 + 7/4) = 0.09.
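A quick arithmetic check of Equation (8) on this example (plain Python; the variable names are ours):

```python
lam = 4                             # focal elements across m1 and m2
n = 1 / lam                         # improved base belief, Eq. (7)
denom = 1 + (2 ** 3 - 1) / lam      # |Omega| = 3, so 2^3 - 1 = 7
m1_a    = (0.99 + n) / denom        # proposition {a}, heavily supported
m1_ab   = (0.01 + n) / denom        # proposition {a, b}
m1_rest = (0.0 + n) / denom         # the five propositions with zero initial BPA
total = m1_a + m1_ab + 5 * m1_rest  # modified BPA still sums to 1
```

The blend keeps the dominant proposition dominant (about 0.45) while every other proposition receives a small nonzero mass, which is what later prevents the Dempster rule from vetoing propositions outright.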

3. Proposed Method

In this section, the improved base belief function in [45] and a belief entropy in [39] are adopted to construct a new data fusion method.

3.1. Method

The procedure of the proposed method is listed as follows, and the flowchart of the proposed improvement is shown in Figure 1.
Step 1: 
For potentially conflicting data, the improved base belief function n in Equation (7) is used to modify the BPAs, yielding the modified evidence m′ of Equation (8).
Based on the improved base belief function, the situation where the belief of a proposition is zero can be avoided, which overcomes the shortcoming of the Dempster combination rule in conflicting data fusion.
Step 2: 
For the ith piece of evidence, the information volume $I_v(i)$ is calculated from the Deng entropy $E_d(i)$ [39]. $I_v$ is defined as follows:
$$I_v(i) = e^{E_d(i)} = e^{-\sum_{A} m_i(A) \log \frac{m_i(A)}{2^{|A|} - 1}}.$$
Calculating information volume is the basis of obtaining weight.
Step 3: 
For each evidence, the weight w ( i ) is defined as follows:
$$w(i) = \frac{I_v(i)}{\sum_{i=1}^{n} I_v(i)}.$$
Because different information sources have different influences on the final result, the weight represents the impact of each evidence group on the final result. In this way, each evidence group is assigned an appropriate weight, which is more reasonable in applications.
Step 4: 
The weights obtained in Step 3 are used to modify the BPAs before fusion. After evidence modification using the base belief function and the information volume-based uncertainty measure, the final evidence for data fusion is calculated as follows:
$$m_w(A) = \sum_{i=1}^{n} w(i)\, m'_i(A).$$
The weight factor modifies the BPAs once more to obtain the final evidence, and fusing this final evidence yields results that are more realistic.
Step 5: 
The final evidence obtained in Step 4 is fused through the Dempster combination rule in Equation (4) to get the final result. If there are n bodies of evidence, the modified evidence is fused n − 1 times.
Step 6: 
Decision making based on the data fusion result.

3.2. Examples and Discussion

Numerical examples are given to explain and verify the rationality of the proposed method.
Example 1.
Suppose the FOD is Ω = {a, b} and the BPAs are given as
m1(a) = 1, m1(b) = 0, m1(a,b) = 0; m2(a) = 0, m2(b) = 1, m2(a,b) = 0.
The improved base belief function based on Equation (7) is
n(a) = n(b) = n(a,b) = 1/3.
Then, the improved base belief function is used to modify the BPAs based on Equation (8). The modified BPAs are
m′1(a) = 0.6667, m′1(b) = 0.1667, m′1(a,b) = 0.1667; m′2(a) = 0.1667, m′2(b) = 0.6667, m′2(a,b) = 0.1667.
After evidence modification with the improved base belief function method, the information volume of E1 and E2 can be measured by belief entropy as:
I_v(E1) = 2.8596, I_v(E2) = 2.8596,
and the final evidence functions are
m(a) = 0.4167, m(b) = 0.4167, m(a,b) = 0.1667.
Dempster combination rule is used for data fusion. The final result is:
m(a) = 0.4787, m(b) = 0.4787, m(a,b) = 0.0426.
In this example, the given evidence sources are completely conflicting. According to the fusion result, the propositions {a} and {b} have equal belief degrees, which is consistent with the initial belief assignment in the BPAs: m1(a) = m2(b) = 1. In addition, a certain amount of belief is assigned to the proposition {a,b}, which reflects the uncertainty between the propositions {a} and {b}.
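Steps 1–5 can be sketched end to end on this example (illustrative Python with frozenset-keyed mass functions; note that the printed I_v = 2.8596 is reproduced with a natural logarithm in the entropy exponent, and since the two weights here are equal, the choice of log base does not affect the fused result):

```python
import math
from itertools import combinations

def powerset(omega):
    elems = sorted(omega)
    return [frozenset(c) for r in range(1, len(elems) + 1)
            for c in combinations(elems, r)]

def modify(m, omega, lam):
    # Step 1, Eq. (8): blend each BPA with the base belief n = 1/lam
    denom = 1 + (2 ** len(omega) - 1) / lam
    return {A: (1 / lam + m.get(A, 0.0)) / denom for A in powerset(omega)}

def info_volume(m):
    # Step 2: I_v = e^{E_d}, natural-log variant of Deng entropy
    return math.exp(-sum(v * math.log(v / (2 ** len(A) - 1))
                         for A, v in m.items() if v > 0))

def dempster(m1, m2):
    # Step 5, Eqs. (4)-(5)
    fused, K = {}, 0.0
    for B, vb in m1.items():
        for C, vc in m2.items():
            if B & C:
                fused[B & C] = fused.get(B & C, 0.0) + vb * vc
            else:
                K += vb * vc
    return {A: v / (1 - K) for A, v in fused.items()}

omega = {'a', 'b'}
m1 = {frozenset('a'): 1.0}
m2 = {frozenset('b'): 1.0}

lam = 3                    # all three propositions carry an initial belief here
m1p, m2p = modify(m1, omega, lam), modify(m2, omega, lam)

iv1, iv2 = info_volume(m1p), info_volume(m2p)   # both ≈ 2.8596
w1, w2 = iv1 / (iv1 + iv2), iv2 / (iv1 + iv2)   # Step 3: equal weights

mw = {A: w1 * m1p[A] + w2 * m2p[A] for A in powerset(omega)}  # Step 4
result = dempster(mw, mw)  # Step 5: two bodies of evidence, fused once
```

The fused result matches the example: roughly 0.4787 on each of {a} and {b}, with the remainder on {a,b}.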
Example 2.
Suppose the FOD is Ω = {a, b, c} and two sets of BPAs (adopted from Zadeh [16,17]) are given as follows.
m1(a) = 0.99, m1(a,b) = 0.01;
m2(b) = 0.01, m2(c) = 0.99.
The improved base belief function is
n(a) = n(b) = n(c) = n(a,b) = n(a,c) = n(b,c) = n(a,b,c) = 1/4.
Then, the modified BPAs using improved base belief function are:
m′1(a) = 0.4509, m′1(a,b) = 0.0945, m′1(b) = m′1(c) = m′1(a,c) = m′1(b,c) = m′1(a,b,c) = 0.0909;
m′2(b) = 0.0945, m′2(c) = 0.4509, m′2(a) = m′2(a,b) = m′2(a,c) = m′2(b,c) = m′2(a,b,c) = 0.0909.
After data modification, the information volume of E1 and E2 and the final evidence are:
I_v(E1) = 7.9692, I_v(E2) = 7.9375,
m(a) = 0.2713, m(b) = 0.0927,
m(c) = 0.2706, m(a,b) = 0.0927,
m(a,c) = m(b,c) = m(a,b,c) = 0.0909.
The final evidence is calculated by Dempster combination rule. The final result is:
m(a) = 0.3762, m(b) = 0.1200,
m(c) = 0.3729, m(a,b) = 0.0400,
m(a,c) = m(b,c) = 0.0389, m(a,b,c) = 0.0129.
According to the final result, the high belief degrees for {a} and {c} are logical, because {a} and {c} have a 99% belief assignment in the original evidence sources: m1(a) = m2(c) = 0.99. The proposition {b} does not receive much support, which is consistent with the initial belief assignment: m1(a,b) = 0.01 and m2(b) = 0.01. In addition, a certain amount of belief is assigned to the propositions {b,c}, {a,c} and {a,b,c}, which reflects the uncertainty among the events {a}, {b} and {c}.
Example 3.
Suppose the FOD is Ω = {a, b, c} and two sets of BPAs are given as
m1(a) = 0.9, m1(a,b) = 0.1;
m2(c) = 0.9, m2(a,b,c) = 0.1.
The improved base belief function is
n(a) = n(b) = n(c) = n(a,b) = n(a,c) = n(b,c) = n(a,b,c) = 1/4.
After evidence modification, the information volume of each evidence (E1 and E2) is:
I_v(E1) = 8.6395, I_v(E2) = 8.6395.
The final result can be calculated by the Dempster combination rule, as shown in Table 1. The fusion result is compared with the methods using only the classical Dempster combination rule and only the improved base belief function, as shown in Table 1 and Figure 2. From the original evidence, m1(a) = m2(c) = 0.9: {a} and {c} receive the same belief as singletons, but m1 assigns no belief to {c}; thus, the proposed method assigns a higher belief to {a} than to {c}. In addition, {b}, {a,c} and {b,c} are assigned belief degrees, which shows the advantage of the base belief assignment method compared with applying the Dempster rule directly.
Example 4.
Suppose the FOD is Ω = {a, b, c} and two sets of BPAs are given as
m1(a) = 0.9, m1(b) = 0.05, m1(c) = 0.05; m2(a) = 0.05, m2(b) = 0.05, m2(c) = 0.9.
The data fusion results with the proposed method and the methods with only classical Dempster combination rule and only improved base belief function are shown in Table 2 and Figure 3.
Compared with the method using only the Dempster combination rule, the proposed method can reflect the uncertainty among the events {a}, {b} and {c}. In addition, {a,b}, {a,c}, {b,c} and {a,b,c} also receive belief assignments, which is reasonable. Compared with the method using only the improved base belief function, the proposed method assigns more belief to {a} and {c}. This is because in the initial BPAs, m1(a) = m2(c) = 0.9, so {a} and {c} have significantly higher belief than the other propositions.
Example 5.
Suppose the FOD is Ω = {a, b, c} and two sets of BPAs are given as
m1(a) = 0.9, m1(a,b,c) = 0.1;
m2(a) = 0.05, m2(b) = 0.05, m2(c) = 0.9.
The final results with the proposed method and the methods with only classical Dempster combination rule and improved base belief function are shown in Table 3 and Figure 4.
Compared with the method using only the Dempster combination rule, the proposed method reflects the uncertainty among the events {a}, {b} and {c} reasonably. Compared with the method using only the improved base belief function, the proposed method assigns more belief to {a}. This is due to the belief assignment m2(a) = 0.05 on {a} in m2, while there is no corresponding belief assignment on {c} in m1. The uncertainty in multi-subset propositions is reflected by assigning a lower belief degree to {c} than the method using only the improved base belief function, which also reflects the differences in the initial BPAs between this example and the previous one.
The results of Examples 3, 4 and 5 verify the effectiveness and rationality of the proposed method for conflicting data fusion. Compared with the method using only the Dempster combination rule or only the improved base belief function, the proposed method obtains a more rational fusion result, because it considers both the initial belief assignment from the base belief function and the information volume measured by belief entropy.

4. Application of Proposed Method

In this section, the classical Iris classification example from machine learning is adopted to evaluate the rationality and effectiveness of the proposed method. The real data set comes from the UCI machine learning repository, and the BPAs after evidence modelling are adopted from [44,52]. The Iris data set contains three species (Setosa (a), Versicolor (b), and Virginica (c)), each with 50 instances. Each instance has four attributes: sepal length (SL), sepal width (SW), petal length (PL), and petal width (PW).

4.1. Experiment 1

In [44], Wang et al. randomly select 40 instances from each species, so the remaining 10 form the test set. An instance is randomly selected from the Setosa (a) species of the test set to generate BPAs. The BPAs of the four attributes are shown in Table 4.
According to the steps of the proposed method, the calculation procedure of this experiment is shown in Figure 5. The BPAs of the first two attributes (SL and SW) assign belief degrees across the whole power set space, so they do not lead to potentially counterintuitive fusion results caused by zero values when using the Dempster combination rule. Therefore, only the BPAs of the last two attributes (PL and PW) are modified using the proposed method. The improved base belief function can be calculated as:
n(a) = n(b) = n(c) = n(a,b) = n(a,c) = n(b,c) = n(a,b,c) = 1/7.
According to the data modification steps based on the improved base belief function in the proposed method, the modified BPAs of the two attributes PL and PW are shown in Table 5.
After evidence modification, the information volume of each evidence is:
I_v(E1) = 4.1451, I_v(E2) = 5.7692, I_v(E3) = 7.1579, I_v(E4) = 7.2326,
and the final evidence functions are
m(a) = 0.3770, m(b) = 0.2449,
m(c) = 0.1552, m(a,b) = 0.0547,
m(a,c) = 0.0546, m(b,c) = 0.0632, m(a,b,c) = 0.0504.
After the BPAs of attributes PL and PW are modified based on the belief entropy, the final evidence is fused three times using the Dempster combination rule to compute the final result. Table 6 shows the final data fusion results using the proposed method and using only the improved base belief function. From the final results, the belief degree assigned by the test case to the Setosa (a) species is the highest, which is consistent with the actual situation and indicates the rationality of the proposed method. In addition, the belief degree assigned to Setosa (a) by the proposed method is 67.98%, higher than the 62.32% obtained using only the improved base belief function. This shows the validity and rationality of the proposed method.

4.2. Experiment 2

In [52], Yuan et al. take 120 specimens as the training set and the remaining 30 specimens as the test set to generate the BPAs. The generated BPAs of the four attributes of the Setosa samples are shown in Table 7, where θ denotes the set of all three species {a, b, c}.
All attributes of each sample have data conflicts, and using only the Dempster combination rule could lead to illogical fusion results owing to zero values. So, according to Step 1 of the proposed method shown in Figure 5, all BPAs generated by the four attributes of the Setosa samples are first modified using the improved base belief function. Then the remaining steps of the proposed method are executed to get the final evidence using belief entropy. The final combination results using the proposed method and using only the improved base belief function are shown in Table 8.
It can be seen from the combination results of the two methods that the BPA of the proposition {a} is the highest in each sample; according to the final results, the sample is obviously Setosa. In addition, the BPA of hypothesis {a} using the proposed method is higher than that using only the improved base belief function, so the proposed method can deal with data conflicts more effectively. The results verify the validity and rationality of the proposed method.

5. Open Issues

Some open issues exist in the current work. First of all, uncertainty measurement in D–S theory is still an open issue. How to measure the reliability and independence of evidence needs further study. Is belief entropy good enough for this open issue [33,39,50,51]?
Secondly, the Dempster combination rule is axiomatically justified in [15,18,53]. Dempster's rule can be used under the condition that the sources are entirely reliable and independent. But this is also the source of the problem: the practical world is full of uncertainty, and it is hard to find two sources that are entirely reliable and independent. Among the many improved rules [29,54], how can the proper one be found for specific applications and cases?
The third issue is information fusion under the open-world assumption [30,31]. Unknown and new information should be taken into consideration; dynamic evidential reasoning may be a choice [55].
For further work, the experiments should be conducted on several additional data sets.

6. Conclusions

Regarding conflicting data fusion problems, an improved method is proposed in this paper based on belief entropy and an improved base belief function in D–S theory. First, in the power set space of the evidence, the initial belief is calculated using the improved base belief function, according to the number of propositions carrying belief. Then, belief entropy is used to measure the information volume of each piece of evidence, and the improved base belief function and the information volume are used to modify the evidence. Finally, data fusion is performed with the Dempster combination rule. The effectiveness and rationality of the proposed method are verified by numerical examples and two applications of the proposed method on a classification data set.
The proposed method considers not only the focal elements assigned an initial belief in the current evidence, but also the propositions in the power set space, such as propositions with zero value or no assigned belief degree. However, some shortcomings remain: the proposed method applies only under the closed-world assumption, whereas uncertain factors increase in the open world.

Author Contributions

Conceptualization, Y.T.; Formal analysis, Y.L. and Y.T.; Funding acquisition, Y.L.; Methodology, S.N., Y.L. and Y.T.; Supervision, Y.T.; Validation, S.N. and Y.T.; Visualization, S.N.; Writing—original draft, S.N.; Writing—review & editing, Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

The work is partially supported by the Chongqing Technology Innovation and Application Development Project (Grant No. cstc2019jscx-dxwtBX0012).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dempster, A.P. Upper and Lower Probabilities Induced by a Multi-Valued Mapping. Ann. Math. Stat. 1967, 38, 325–339.
  2. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976.
  3. Wang, J.; Hu, Y.; Xiao, F.; Deng, X.; Deng, Y. A novel method to use fuzzy soft sets in decision making based on ambiguity measure and Dempster–Shafer theory of evidence: An application in medical diagnosis. Artif. Intell. Med. 2016, 69, 1–11.
  4. Han, Y.; Deng, Y. An evidential fractal analytic hierarchy process target recognition method. Def. Sci. J. 2018, 68, 367.
  5. Ding, B.; Wen, G.; Huang, X.; Ma, C.; Yang, X. Target recognition in synthetic aperture radar images via matching of attributed scattering centers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3334–3347.
  6. Gong, Y.; Su, X.; Qian, H.; Yang, N. Research on fault diagnosis methods for the reactor coolant system of nuclear power plant based on DS evidence theory. Ann. Nucl. Energy 2018, 112, 395–399.
  7. Meng, J.; Fu, D.; Tang, Y.; Yang, T.; Zhang, D. A novel semi-supervised classification method based on soft evidential label propagation. IEEE Access 2019, 7, 62210–62220.
  8. Xu, X.; Zhang, D.; Bai, Y.; Chang, L.; Li, J. Evidence reasoning rule-based classifier with uncertainty quantification. Inf. Sci. 2020, 516, 192–204.
  9. Liu, Z.; Liu, Y.; Dezert, J.; Cuzzolin, F. Evidence Combination Based on Credal Belief Redistribution for Pattern Classification. IEEE Trans. Fuzzy Syst. 2020, 28, 618–631.
  10. Zhou, K.; Martin, A.; Pan, Q.; Liu, Z. SELP: Semi-supervised evidential label propagation algorithm for graph data clustering. Int. J. Approx. Reason. 2018, 92, 139–154.
  11. Meng, J.; Fu, D.; Tang, Y. Belief-peaks clustering based on fuzzy label propagation. Appl. Intell. 2020, 50, 1259–1271.
  12. Su, Z.-G.; Denoeux, T. BPEC: Belief-peaks evidential clustering. IEEE Trans. Fuzzy Syst. 2018, 27, 111–123.
  13. Zheng, H.; Tang, Y. A Novel Failure Mode and Effects Analysis Model Using Triangular Distribution-Based Basic Probability Assignment in the Evidence Theory. IEEE Access 2020, 8, 66813–66827.
  14. Wang, H.; Guo, L.; Dou, Z.; Lin, Y. A new method of cognitive signal recognition based on hybrid information entropy and DS evidence theory. Mob. Netw. Appl. 2018, 23, 677–685.
  15. Destercke, S.; Burger, T. Toward an axiomatic definition of conflict between belief functions. IEEE Trans. Cybern. 2013, 43, 585–596.
  16. Zadeh, L.A. Review of a mathematical theory of evidence. AI Mag. 1984, 5, 81.
  17. Zadeh, L.A. A simple view of the Dempster–Shafer theory of evidence and its implication for the rule of combination. AI Mag. 1986, 7, 85.
  18. Smets, P. Analyzing the combination of conflicting belief functions. Inf. Fusion 2007, 8, 387–412.
  19. Yuan, K.; Xiao, F.; Fei, L.; Kang, B.; Deng, Y. Conflict management based on belief function entropy in sensor fusion. SpringerPlus 2016, 5, 638.
  20. Fei, L.; Deng, Y.; Hu, Y. DS-VIKOR: A new multi-criteria decision-making method for supplier selection. Int. J. Fuzzy Syst. 2019, 21, 157–175.
  21. Zhao, Y.; Jia, R.; Shi, P. A novel combination method for conflicting evidence based on inconsistent measurements. Inf. Sci. 2016, 367, 125–142.
  22. Su, X.; Li, L.; Qian, H.; Mahadevan, S.; Deng, Y. A new rule to combine dependent bodies of evidence. Soft Comput. 2019, 23, 9793–9799.
  23. Dong, Y.; Zhang, J.; Li, Z.; Hu, Y.; Deng, Y. Combination of evidential sensor reports with distance function and belief entropy in fault diagnosis. Int. J. Comput. Commun. Control 2019, 14, 329–343.
  24. Zhang, L.; Ding, L.; Wu, X.; Skibniewski, M.J. An improved Dempster–Shafer approach to construction safety risk perception. Knowl.-Based Syst. 2017, 132, 30–46.
  25. Dubois, D.; Liu, W.; Ma, J.; Prade, H. The basic principles of uncertain information fusion. An organised review of merging rules in different representation frameworks. Inf. Fusion 2016, 32, 12–39.
  26. Wang, J.; Qiao, K.; Zhang, Z. An improvement for combination rule in evidence theory. Future Gener. Comput. Syst. 2019, 91, 1–9.
  27. Zhang, W.; Deng, Y. Combining conflicting evidence using the DEMATEL method. Soft Comput. 2019, 23, 8207–8216.
  28. Guan, X.; Liu, H.; Yi, X.; Zhao, J. The improved combination rule of D numbers and its application in radiation source identification. Math. Probl. Eng. 2018, 2018, 6025680.
  29. Pichon, F.; Destercke, S.; Burger, T. A consistency-specificity trade-off to select source behavior in information fusion. IEEE Trans. Cybern. 2014, 45, 598–609.
  30. Deng, Y. Generalized evidence theory. Appl. Intell. 2015, 43, 530–543.
  31. Jiang, W.; Zhan, J. A modified combination rule in generalized evidence theory. Appl. Intell. 2017, 46, 630–640. [Google Scholar] [CrossRef]
  32. Zhou, X.; Tang, Y. A Note on Incomplete Information Modeling in the Evidence Theory. IEEE Access 2019, 7, 166410–166414. [Google Scholar] [CrossRef]
  33. Dragos, V.; Ziegler, J.; de Villiers, J.P.; de Waal, A.; Jousselme, A.; Blasch, E. Entropy-Based Metrics for URREF Criteria to Assess Uncertainty in Bayesian Networks for Cyber Threat Detection. In Proceedings of the 2019 22th International Conference on Information Fusion (FUSION), Ottawa, ON, Canada, 2–5 July 2019; pp. 1–8. [Google Scholar]
  34. Li, Y.; Deng, Y. Generalized ordered propositions fusion based on belief entropy. Int. J. Comput. Commun. Control 2018, 13, 792–807. [Google Scholar] [CrossRef]
  35. Li, M.; Xu, H.; Deng, Y. Evidential decision tree based on belief entropy. Entropy 2019, 21, 897. [Google Scholar] [CrossRef] [Green Version]
  36. Kang, B.; Deng, Y. The maximum Deng entropy. IEEE Access 2019, 7, 120758–120765. [Google Scholar] [CrossRef]
  37. Huang, Z.; Yang, L.; Jiang, W. Uncertainty measurement with belief entropy on the interference effect in the quantum-like Bayesian networks. Appl. Math. Comput. 2019, 347, 417–428. [Google Scholar] [CrossRef] [Green Version]
  38. Deng, X.; Jiang, W. On the negation of a Dempster–Shafer belief structure based on maximum uncertainty allocation. Inf. Sci. 2020, 516, 346–352. [Google Scholar] [CrossRef] [Green Version]
  39. Deng, Y. Deng entropy. Chaos Solitons Fractals 2016, 91, 549–553. [Google Scholar] [CrossRef]
  40. Zheng, H.; Tang, Y. Deng Entropy Weighted Risk Priority Number Model for Failure Mode and Effects Analysis. Entropy 2020, 22, 280. [Google Scholar] [CrossRef] [Green Version]
  41. Xiao, F. A multiple-criteria decision-making method based on D numbers and belief entropy. Int. J. Fuzzy Syst. 2019, 21, 1144–1153. [Google Scholar] [CrossRef]
  42. Pan, L.; Deng, Y. A new belief entropy to measure uncertainty of basic probability assignments based on belief function and plausibility function. Entropy 2018, 20, 842. [Google Scholar] [CrossRef] [Green Version]
  43. Xiao, F. An improved method for combining conflicting evidences based on the similarity measure and belief function entropy. Int. J. Fuzzy Syst. 2018, 20, 1256–1266. [Google Scholar] [CrossRef]
  44. Wang, Y.; Zhang, K.; Deng, Y. Base belief function: An efficient method of conflict management. J. Ambient Intell. Humaniz. Comput. 2019, 10, 3427–3437. [Google Scholar] [CrossRef]
  45. Li, R.; Li, H.; Tang, Y. An Improved Method to Manage Conflict Data Using Elementary Belief Assignment Function in the Evidence Theory. IEEE Access 2020, 8, 37926–37932. [Google Scholar] [CrossRef]
  46. Yang, M.S.; Chen, T.C.; Wu, K.L. Generalized belief function, plausibility function, and Dempster’s combinational rule to fuzzy sets. Int. J. Intell. Syst. 2003, 18, 925–937. [Google Scholar] [CrossRef]
  47. Dubois, D.; Prade, H. A note on measures of specificity for fuzzy sets. Int. J. Gen. Syst. 1985, 10, 279–283. [Google Scholar] [CrossRef]
  48. Jiroušek, R.; Shenoy, P.P. A new definition of entropy of belief functions in the Dempster–Shafer theory. Int. J. Approx. Reason. 2018, 92, 49–65. [Google Scholar] [CrossRef] [Green Version]
  49. Jiroušek, R.; Shenoy, P.P. On properties of a new decomposable entropy of Dempster–Shafer belief functions. Int. J. Approx. Reason. 2020, 119, 260–279. [Google Scholar] [CrossRef]
  50. Qin, M.; Tang, Y.; Wen, J. An Improved Total Uncertainty Measure in the Evidence Theory and Its Application in Decision Making. Entropy 2020, 22, 487. [Google Scholar] [CrossRef]
  51. Wu, D.; Tang, Y. An improved failure mode and effects analysis method based on uncertainty measure in the evidence theory. Qual. Reliab. Eng. Int. 2020, 36, 1786–1807. [Google Scholar] [CrossRef]
  52. Yuan, K.; Deng, Y. Conflict evidence management in fault diagnosis. Int. J. Mach. Learn. Cybern. 2019, 10, 121–130. [Google Scholar] [CrossRef]
  53. Dubois, D.; Prade, H. On the unicity of dempster rule of combination. Int. J. Intell. Syst. 1986, 1, 133–142. [Google Scholar] [CrossRef]
  54. Destercke, S.; Dubois, D. Idempotent conjunctive combination of belief functions: Extending the minimum rule of possibility theory. Inf. Sci. 2011, 181, 3925–3945. [Google Scholar] [CrossRef] [Green Version]
  55. Liu, Z.; Dezert, J.; Mercier, G.; Pan, Q. Dynamic evidential reasoning for change detection in remote sensing images. IEEE Trans. Geosci. Remote Sens. 2012, 50, 1955–1967. [Google Scholar] [CrossRef]
Figure 1. The flowchart of the proposed method.
Figure 2. Comparison of fusion results using different methods in Example 3.
Figure 3. Comparison of fusion results using different methods in Example 4.
Figure 4. Comparison of fusion results using different methods in Example 5.
Figure 5. The calculation process of this experiment.
Table 1. Results of three combination methods of Example 3.
| Method | m(a) | m(b) | m(c) | m(a,b) | m(a,c) | m(b,c) | m(a,b,c) |
|---|---|---|---|---|---|---|---|
| Dempster's rule | 0.9000 | 0.1000 | 0 | 0 | 0 | 0 | 0 |
| Improved base belief function [45] | 0.3587 | 0.1405 | 0.3278 | 0.0656 | 0.0468 | 0.0468 | 0.0193 |
| Proposed method | 0.3669 | 0.1278 | 0.3479 | 0.0541 | 0.0426 | 0.0426 | 0.0180 |
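The counterintuitive behavior of Dempster's rule seen in these tables comes from its normalization over the conflict mass. Below is a minimal sketch of the rule; since the input BPAs of Example 3 are not reproduced in this excerpt, it is illustrated with Zadeh's classic two-source example instead.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset of hypotheses -> mass)
    with Dempster's rule, normalizing out the conflict mass K."""
    combined = {}
    conflict = 0.0
    for A, x in m1.items():
        for B, y in m2.items():
            C = A & B  # intersection of focal elements
            if C:
                combined[C] = combined.get(C, 0.0) + x * y
            else:
                conflict += x * y  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Zadeh's classic highly conflicting example (illustrative only):
m1 = {frozenset('a'): 0.99, frozenset('b'): 0.01}
m2 = {frozenset('c'): 0.99, frozenset('b'): 0.01}
fused = dempster_combine(m1, m2)
# Nearly all mass (K = 0.9999) is conflict, so normalization pushes the
# full belief onto the weakly supported singleton {b}.
```

This is exactly the failure mode the improved base belief function and the entropy-based modification are designed to avoid.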
Table 2. Results of three combination methods of Example 4.
| Method | m(a) | m(b) | m(c) | m(a,b) | m(a,c) | m(b,c) | m(a,b,c) |
|---|---|---|---|---|---|---|---|
| Dempster's rule | 0.4865 | 0.0270 | 0.4865 | 0 | 0 | 0 | 0 |
| Improved base belief function [45] | 0.3365 | 0.1653 | 0.3365 | 0.0485 | 0.0485 | 0.0485 | 0.0162 |
| Proposed method | 0.3446 | 0.1571 | 0.3446 | 0.0461 | 0.0461 | 0.0461 | 0.0154 |
Table 3. Results of three combination methods of Example 5.
| Method | m(a) | m(b) | m(c) | m(a,b) | m(a,c) | m(b,c) | m(a,b,c) |
|---|---|---|---|---|---|---|---|
| Dempster's rule | 0.3448 | 0.0345 | 0.6207 | 0 | 0 | 0 | 0 |
| Improved base belief function [45] | 0.3502 | 0.1418 | 0.3480 | 0.0469 | 0.0469 | 0.0469 | 0.0193 |
| Proposed method | 0.3734 | 0.1302 | 0.3481 | 0.0433 | 0.0433 | 0.0433 | 0.0184 |
Table 4. BPAs of four attributes.
| Attribute | m(a) | m(b) | m(c) | m(a,b) | m(a,c) | m(b,c) | m(a,b,c) |
|---|---|---|---|---|---|---|---|
| SL | 0.3337 | 0.3165 | 0.2816 | 0.0307 | 0.0052 | 0.0272 | 0.0052 |
| SW | 0.3164 | 0.2501 | 0.2732 | 0.0304 | 0.0481 | 0.0515 | 0.0304 |
| PL | 0.6699 | 0.3258 | 0 | 0 | 0 | 0.0043 | 0 |
| PW | 0.6996 | 0.2778 | 0 | 0 | 0 | 0.0226 | 0 |
Table 5. Modified BPAs of four attributes.
| Attribute | m(a) | m(b) | m(c) | m(a,b) | m(a,c) | m(b,c) | m(a,b,c) |
|---|---|---|---|---|---|---|---|
| SL | 0.3337 | 0.3165 | 0.2816 | 0.0307 | 0.0052 | 0.0272 | 0.0052 |
| SW | 0.3164 | 0.2501 | 0.2732 | 0.0304 | 0.0481 | 0.0515 | 0.0304 |
| PL | 0.4064 | 0.2343 | 0.0714 | 0.0714 | 0.0714 | 0.0736 | 0.0714 |
| PW | 0.4212 | 0.2103 | 0.0714 | 0.0714 | 0.0714 | 0.0827 | 0.0714 |
Table 6. Experiment results of different combination rules in Iris recognition.
| Method | m(a) | m(b) | m(c) | m(a,b,c) |
|---|---|---|---|---|
| Improved base belief function [45] | 0.6232 | 0.2671 | 0.1083 | 0 |
| Proposed method | 0.6798 | 0.2385 | 0.0869 | 0 |
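The belief entropy used to modify the attribute BPAs is Deng entropy [39]. The sketch below computes it and derives normalized information-volume weights via exp(Ed); the exponential-and-normalize step is one common scheme in entropy-weighted fusion, used here as an assumption rather than a reproduction of the paper's exact formula.

```python
import math

def deng_entropy(m):
    """Deng entropy: Ed(m) = -sum_A m(A) * log2( m(A) / (2^|A| - 1) ),
    where |A| is the cardinality of focal element A."""
    return -sum(mass * math.log2(mass / (2 ** len(A) - 1))
                for A, mass in m.items() if mass > 0)

# A categorical (fully certain) BPA carries zero Deng entropy...
m_certain = {frozenset('a'): 1.0}
# ...while total ignorance over {a, b, c} yields log2(2^3 - 1) = log2(7).
m_vague = {frozenset('abc'): 1.0}

# Hypothetical weighting: each body of evidence weighted by its
# normalized information volume exp(Ed).
bodies = {'certain': m_certain, 'vague': m_vague}
volumes = {k: math.exp(deng_entropy(m)) for k, m in bodies.items()}
total = sum(volumes.values())
weights = {k: v / total for k, v in volumes.items()}
```

With weights in hand, each BPA can be rescaled before the final Dempster combination, which is how Table 5 is obtained from Table 4 in the proposed pipeline.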
Table 7. BPAs of four attributes of Setosa samples.
| Sample | Mass | PL | PW | SL | SW |
|---|---|---|---|---|---|
| Sample 1 | m(a) | 0.6486 | 0.7477 | 0.8650 | 0 |
| | m(b) | 0 | 0 | 0 | 0.9000 |
| | m(a,b) | 0 | 0 | 0.0821 | 0 |
| | m(b,c) | 0 | 0 | 0 | 0.1000 |
| | m(θ) | 0.3514 | 0.2523 | 0.0529 | 0 |
| Sample 2 | m(a) | 0.6486 | 0.7477 | 0.2712 | 0 |
| | m(b) | 0 | 0 | 0 | 0.9000 |
| | m(a,b) | 0 | 0 | 0 | 0 |
| | m(b,c) | 0 | 0 | 0 | 0.1000 |
| | m(θ) | 0.3514 | 0.2523 | 0.7288 | 0 |
| Sample 3 | m(a) | 0.6486 | 0.7547 | 0.1356 | 0 |
| | m(b) | 0 | 0 | 0 | 0.9000 |
| | m(a,b) | 0 | 0 | 0 | 0 |
| | m(b,c) | 0 | 0 | 0 | 0.1000 |
| | m(θ) | 0.3514 | 0.2453 | 0.8644 | 0 |
| Sample 4 | m(a) | 0.6857 | 0 | 0.8650 | 0 |
| | m(b) | 0 | 0 | 0 | 0.9000 |
| | m(a,b) | 0 | 0 | 0.0821 | 0 |
| | m(b,c) | 0 | 0 | 0 | 0.1000 |
| | m(θ) | 0.3143 | 1 | 0.0529 | 0 |
| Sample 5 | m(a) | 0 | 0.3738 | 0.7560 | 0 |
| | m(b) | 0 | 0 | 0 | 0.9000 |
| | m(a,b) | 0 | 0 | 0.1484 | 0 |
| | m(b,c) | 0 | 0 | 0 | 0.1000 |
| | m(θ) | 1 | 0.6262 | 0.0956 | 0 |
| Sample 6 | m(a) | 0.8649 | 0.7477 | 0.6780 | 0 |
| | m(b) | 0 | 0 | 0 | 0.9000 |
| | m(a,b) | 0 | 0 | 0 | 0 |
| | m(b,c) | 0 | 0 | 0 | 0.1000 |
| | m(θ) | 0.1351 | 0.2523 | 0.3220 | 0 |
| Sample 7 | m(a) | 0.6857 | 0.7547 | 0.7560 | 0 |
| | m(b) | 0 | 0 | 0 | 0.9000 |
| | m(a,b) | 0 | 0 | 0.1484 | 0 |
| | m(b,c) | 0 | 0 | 0 | 0.1000 |
| | m(θ) | 0.3143 | 0.2453 | 0.0956 | 0 |
| Sample 8 | m(a) | 0.8649 | 0.7547 | 0.4068 | 0 |
| | m(b) | 0 | 0 | 0 | 0.9000 |
| | m(a,b) | 0 | 0 | 0 | 0 |
| | m(b,c) | 0 | 0 | 0 | 0.1000 |
| | m(θ) | 0.1351 | 0.2453 | 0.5932 | 0 |
| Sample 9 | m(a) | 0.9143 | 0.7547 | 0.5253 | 0 |
| | m(b) | 0 | 0 | 0 | 0.9000 |
| | m(a,b) | 0 | 0 | 0.2887 | 0 |
| | m(b,c) | 0 | 0 | 0 | 0.1000 |
| | m(θ) | 0.0857 | 0.2453 | 0.1860 | 0 |
| Sample 10 | m(a) | 0.8649 | 0.7547 | 0.8650 | 0 |
| | m(b) | 0 | 0 | 0 | 0.9000 |
| | m(a,b) | 0 | 0 | 0.0821 | 0 |
| | m(b,c) | 0 | 0 | 0 | 0.1000 |
| | m(θ) | 0.1351 | 0.2453 | 0.0529 | 0 |
Table 8. The final results of two different methods.
| Sample | Method | m(a) | m(b) | m(c) | m(a,b) | m(a,c) | m(b,c) | m(θ) |
|---|---|---|---|---|---|---|---|---|
| Sample 1 | Improved base belief function | 0.6158 | 0.2641 | 0.1014 | 0.0258 | 0.1014 | 0.0134 | 0.0017 |
| | Proposed method | 0.6639 | 0.2259 | 0.0862 | 0.0300 | 0.0099 | 0.0120 | 0.0022 |
| Sample 2 | Improved base belief function | 0.4781 | 0.3376 | 0.1346 | 0.0299 | 0.0193 | 0.0255 | 0.0051 |
| | Proposed method | 0.5303 | 0.2690 | 0.1229 | 0.0394 | 0.0218 | 0.0253 | 0.0092 |
| Sample 3 | Improved base belief function | 0.4604 | 0.3549 | 0.1401 | 0.0310 | 0.0207 | 0.0273 | 0.0056 |
| | Proposed method | 0.4992 | 0.2853 | 0.1316 | 0.0422 | 0.0251 | 0.0290 | 0.0118 |
| Sample 4 | Improved base belief function | 0.4811 | 0.3378 | 0.1295 | 0.0281 | 0.0167 | 0.0217 | 0.0035 |
| | Proposed method | 0.4893 | 0.2999 | 0.1283 | 0.0459 | 0.0240 | 0.0279 | 0.0110 |
| Sample 5 | Improved base belief function | 0.3875 | 0.4038 | 0.1404 | 0.0329 | 0.0214 | 0.0281 | 0.0055 |
| | Proposed method | 0.4216 | 0.3237 | 0.1410 | 0.0557 | 0.0301 | 0.0345 | 0.0168 |
| Sample 6 | Improved base belief function | 0.6168 | 0.2589 | 0.1070 | 0.0260 | 0.0116 | 0.0151 | 0.0022 |
| | Proposed method | 0.6639 | 0.2209 | 0.0897 | 0.0278 | 0.0107 | 0.0129 | 0.0025 |
| Sample 7 | Improved base belief function | 0.6057 | 0.2868 | 0.1001 | 0.0278 | 0.0104 | 0.0135 | 0.0018 |
| | Proposed method | 0.6623 | 0.2305 | 0.0836 | 0.0326 | 0.0096 | 0.0115 | 0.0021 |
| Sample 8 | Improved base belief function | 0.5658 | 0.2973 | 0.1192 | 0.0281 | 0.0144 | 0.0187 | 0.0030 |
| | Proposed method | 0.6021 | 0.2469 | 0.1061 | 0.0336 | 0.0157 | 0.0185 | 0.0050 |
| Sample 9 | Improved base belief function | 0.6304 | 0.3157 | 0.0913 | 0.0315 | 0.0087 | 0.0112 | 0.0013 |
| | Proposed method | 0.6614 | 0.2443 | 0.0744 | 0.0377 | 0.0077 | 0.0093 | 0.0014 |
| Sample 10 | Improved base belief function | 0.6680 | 0.2355 | 0.0902 | 0.0251 | 0.0080 | 0.0103 | 0.0011 |
| | Proposed method | 0.7023 | 0.2098 | 0.0748 | 0.0254 | 0.0071 | 0.0087 | 0.0011 |

Ni, S.; Lei, Y.; Tang, Y. Improved Base Belief Function-Based Conflict Data Fusion Approach Considering Belief Entropy in the Evidence Theory. Entropy 2020, 22, 801. https://doi.org/10.3390/e22080801
