# Multi-Source Information Fusion Based on Negation of Reconstructed Basic Probability Assignment with Padded Gaussian Distribution and Belief Entropy


## Abstract


## 1. Introduction

- Since the initial BPAs have a significant influence on the fusion results, Gaussian functions estimated by the maximum likelihood method are used to determine the initial BPAs. To enhance the generalizability of the method, we assume that although the multi-source information involved in the fusion follows a complex nonlinear joint distribution, each individual feature is normally distributed; this hypothesis has proven valid and is widely accepted [40]. It is therefore conventional to use Gaussian functions to build the initial BPA determination model. Furthermore, before maximum likelihood estimation, the original data of each feature are padded with samples equal to that feature's mean, in order to improve generalizability and mitigate the overfitting caused by over-dependence on the provided data. The padding strategy was first used in mathematical statistics to supplement missing information or to reduce dimensionality [41,42]. Lopez-Martin et al. showed that embedding the features of samples into a mapping space improves detection accuracy [43]; they embedded sample labels in self-supervised learning networks to accomplish network intrusion detection.
- To improve the ability to discern the uncertainty of information, several techniques are applied to extract more valid information from the original sources. Following Weng et al.’s method [37], the BPA is first reconstructed by reassigning the original BPAs: mass from BPAs with a high degree of uncertainty is partially reassigned to the BPAs of subset focal elements. Additionally, following Yin et al.’s research [38] on the negation of BPA, the reconstructed BPAs of the subset focal elements are further transformed by negation to enhance the representation of uncertainty information. We denote the result of this process as nrBPA. Such processing reduces the uncertainty of the BPAs while preserving their uncertainty information, which enriches the information involved in DS fusion and can improve decision-making accuracy.
- To reduce the impact of conflicting information from individual sources on DS evidence fusion and to make the fusion results more robust, an improved belief entropy is first employed to measure the information entropy of each source. The initial fusion BPAs are then weighted by the entropy weight method based on the improved belief entropy, and the weighted BPAs enter the subsequent Dempster’s combination rule calculation to obtain the results.
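As a concrete illustration of the first point, the sketch below evaluates each class's fitted Gaussian density at an observed feature value and normalizes the memberships into a BPA over singletons. The helper name `gaussian_bpa` and the restriction to singleton focal elements are illustrative assumptions; the full method in Section 3 also assigns mass to composite focal elements.

```python
import math

def gaussian_bpa(x, params):
    """Sketch: evaluate each class's fitted Gaussian density at the observed
    feature value x and normalize the memberships into a BPA over singletons.
    `params` maps class -> (mu, sigma). Hypothetical helper; the full method
    also assigns mass to composite focal elements."""
    member = {
        c: math.exp(-(x - mu) ** 2 / (2.0 * s ** 2)) / (s * math.sqrt(2.0 * math.pi))
        for c, (mu, s) in params.items()
    }
    total = sum(member.values())  # normalize so the masses sum to 1
    return {c: v / total for c, v in member.items()}
```

The class whose Gaussian is closest to the observed value receives the largest initial mass, which is the intuition behind the intersection construction shown later in Figure 4.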

## 2. Preliminaries

#### 2.1. Dempster-Shafer Evidence Theory

**Definition 1.**

**Definition 2.**

**Definition 3.**

**Definition 4.**

**Definition 5.**

#### 2.2. Negation of BPA

**Definition 6.**

#### 2.3. Belief Entropy

#### 2.3.1. Deng Entropy

**Definition 7.**

**Definition 8.**

#### 2.3.2. Entropy Weight Method

**Definition 9.**

#### 2.4. Hypothesis Testing Based on Gaussian Probability Density Function

**Definition 10.**

**Definition 11.**

## 3. Proposed Method

**Step 1.1**. Obtain the feature data set of known fusion results. The set of known fusion results $R=\{{r}_{1},{r}_{2},\dots ,{r}_{O}\}$ corresponds to the identification framework $\theta$ in DS evidence theory, and ${r}_{1},{r}_{2},\dots ,{r}_{O}$ are the fusion results, corresponding to the elements of the framework. The data set is represented as:

**Step 1.2**. Let N be the total number of samples; the original data structure of each sample to be fused is assumed to be:

**Step 1.3**. The individual features of the training data are used to estimate the parameters $\widehat{\sigma}$ and $\widehat{\mu}$ of the Gaussian function by the maximum likelihood method. Notably, to avoid overfitting of the generated Gaussian model, each feature is supplemented with a certain proportion of mean-valued data when calculating the variance. For example, suppose the original training data volume is $N\ast t$, where $N$ is the total number of samples and $0<t\le 1$ is the training proportion. For a given feature, let $\mu$ be the mean value of a certain event and $p$ the padding proportion, where $0\le p\le 1$. Then $(N\ast t)\ast p$ samples with the value $\mu$ are added, and the size of the padded data set becomes $(N\ast t)\ast (1+p)$.
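Step 1.3 can be sketched as follows. `padded_gaussian_mle` is a hypothetical helper name, and the biased (1/N) standard deviation is the standard maximum likelihood choice:

```python
import numpy as np

def padded_gaussian_mle(feature, p=0.2):
    """Estimate Gaussian parameters by maximum likelihood after padding the
    sample with a proportion p of mean-valued points (Step 1.3).
    `padded_gaussian_mle` is a hypothetical helper, not from the paper."""
    mu = float(np.mean(feature))
    n_pad = int(round(len(feature) * p))                    # (N*t)*p padded samples
    padded = np.concatenate([feature, np.full(n_pad, mu)])  # size (N*t)*(1+p)
    # MLE estimates on the padded sample: the mean is unchanged, while the
    # mean-valued padding pulls the variance estimate toward the sample mean.
    mu_hat = float(np.mean(padded))
    sigma_hat = float(np.std(padded))  # MLE uses the biased (1/N) estimator
    return mu_hat, sigma_hat
```

With p = 0 the estimates coincide with plain maximum likelihood on the raw sample.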

**Step 3.1**. For a BPA, the more elements a focal element points to, the greater the uncertainty of that BPA and the more ambiguous the information it contains. Weng et al.’s method [37] has been shown to measure the uncertainty of a BPA and to reduce information uncertainty; all BPAs are reconstructed according to Equation (15).

**Step 3.2**. The reconstructed BPAs are normalized according to Equation (16) in order to comply with the construction criterion of the BPA and to facilitate the subsequent operations.

**Step 3.3**. The reconstructed BPAs are transformed into nrBPAs, ${m}_{nr}$. Exploring both the positive and negative information of the evidence through Yin et al.’s method [38], the negation of the reconstructed BPAs is obtained through Equation (6).
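The negation step of Equation (6) can be sketched as follows (the reconstruction of Equation (15) is omitted, since its exact form is given in [37]). Each focal element's mass becomes $(1-m(A))/(n-1)$, where $n$ is the number of focal elements, so the negated masses still sum to 1:

```python
def negation_of_bpa(bpa):
    """Negation of a BPA (Equation (6), following Yin et al. [38]): each focal
    element's mass becomes (1 - m(A)) / (n - 1), where n is the number of
    focal elements. The negated masses still sum to 1."""
    n = len(bpa)
    return {focal: (1.0 - mass) / (n - 1) for focal, mass in bpa.items()}
```

Applied to the SL row of the normalized reconstructed-BPA table below, this reproduces the SL row of the nrBPA table.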

**Step 4.1**. The uncertainties of BPAs are measured by improved belief entropy [51]. Equation (8) is applied to obtain the information entropy of each BPA, denoted as ${E}_{1},{E}_{2},{E}_{3}\dots {E}_{M}$.

**Step 4.2**. Equation (9) is referenced to convert the information entropy into weights to obtain ${w}_{1},{w}_{2},{w}_{3}\dots {w}_{M}$.
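Steps 4.1 and 4.2 can be sketched as follows. The sketch uses Deng entropy (Definition 7) in place of the improved belief entropy of [51], whose exact corrective term is given by Equation (8), and assumes an inverse-entropy normalization for Equation (9); both substitutions are illustrative, not the paper's exact formulas.

```python
import math

def deng_entropy(bpa):
    """Deng entropy (Definition 7): E_d = -sum_A m(A) * log2(m(A) / (2^|A| - 1)),
    where |A| is the cardinality of the focal element (here: length of its key)."""
    e = 0.0
    for focal, mass in bpa.items():
        if mass > 0.0:
            e -= mass * math.log2(mass / (2 ** len(focal) - 1))
    return e

def entropy_weights(entropies):
    """Turn per-source entropies into fusion weights. This sketch assumes an
    inverse-entropy normalization; the paper's Equation (9) may differ."""
    inv = [1.0 / e for e in entropies]
    total = sum(inv)
    return [v / total for v in inv]
```

A source with lower entropy (less uncertain evidence) receives a larger weight, matching the ordering of the entropy and weight tables in Section 4.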

**Step 4.3**. The final BPA of each focal element is obtained by multiplying each BPA by its corresponding entropy weight and then multiplying, across sources, the weighted BPAs of the same focal element. Taking a focal element ${A}_{i}$ belonging to the framework $\mathsf{\Theta}$ as an example, with $M$ the total number of features, the final BPA ${m}^{\prime}\left({A}_{i}\right)$ is calculated as in Equation (17).
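The weighted BPAs then enter Dempster's combination rule. A minimal sketch of the pairwise rule, with focal elements represented as frozensets (the representation is an assumption of this sketch):

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two BPAs keyed by frozenset focal elements:
    m(A) = sum over B ∩ C = A of m1(B) m2(C) / (1 - K), K the conflict mass."""
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}
```

Fusing the M weighted BPAs amounts to folding this pairwise rule over the list of sources.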

## 4. Experiments

#### 4.1. Demonstration of the Proposed Method

#### 4.2. Application to Realistic Classification Tasks

## 5. Comparative Analysis

#### 5.1. Discussion on Effectiveness of the Improved Method

#### 5.1.1. Discussion on Effectiveness of Using Gaussian BPA Function

#### 5.1.2. Discussion on Effect of the Padding Strategy for Generating BPA Function

#### 5.2. Discussion on Robustness

- All data are divided into 10 parts;
- The model is built by taking each of the 10 parts in turn, without repetition, as the test set and using the other nine parts as the training set; the accuracy ${A}_{i}$ of the method on the test set is then calculated. Positive samples classified correctly are counted as true positives (TP), positive samples classified incorrectly as false negatives (FN), negative samples classified correctly as true negatives (TN), and negative samples classified incorrectly as false positives (FP). The formula for the accuracy A is given in Equation (19).$$A=\frac{TP+TN}{FP+TP+FN+TN}$$
- The 10 accuracies are averaged to obtain the final accuracy rate, as shown in Equation (20).$${A}_{\left(10\right)}=\frac{1}{10}\sum _{i=1}^{10}{A}_{i}$$
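The procedure above can be sketched as follows; `train_and_eval` is a hypothetical callback that fits the model on the training split and returns the number of correctly classified test samples:

```python
import random

def ten_fold_accuracy(samples, labels, train_and_eval):
    """10-fold cross-validation (Equations (19) and (20)): split the data into
    10 disjoint parts, use each part once as the test set, train on the other
    nine, and average the 10 per-fold accuracies."""
    idx = list(range(len(samples)))
    random.shuffle(idx)
    folds = [idx[i::10] for i in range(10)]   # 10 disjoint index sets
    accs = []
    for k in range(10):
        held_out = set(folds[k])
        train = [i for i in idx if i not in held_out]
        correct = train_and_eval(
            [samples[i] for i in train], [labels[i] for i in train],
            [samples[i] for i in folds[k]], [labels[i] for i in folds[k]],
        )
        accs.append(correct / len(folds[k]))  # per-fold accuracy A_i
    return sum(accs) / 10.0                   # A_(10)
```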

## 6. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## References

1. Tang, M.; Liao, H. Failure mode and effect analysis considering the fairness-oriented consensus of a large group with core-periphery structure. Reliab. Eng. Syst. Saf. **2021**, 215, 107821.
2. Li, H.; Huang, H.Z.; Li, Y.F.; Zhou, J.; Mi, J. Physics of failure-based reliability prediction of turbine blades using multi-source information fusion. Appl. Soft Comput. **2018**, 72, 624–635.
3. Guo, Y.; Yin, C.; Li, M.; Ren, X.; Liu, P. Mobile e-commerce recommendation system based on multi-source information fusion for sustainable e-business. Sustainability **2018**, 10, 147.
4. Wu, L.; Wang, L.; Li, N.; Sun, T.; Qian, T.; Jiang, Y.; Wang, F.; Xu, Y. Modeling the COVID-19 Outbreak in China through Multi-source Information Fusion. Innovation **2020**, 1, 100033.
5. Rogova, G.L. Information quality in information fusion and decision making with applications to crisis management. In Fusion Methodologies in Crisis Management; Springer: Berlin, Germany, 2016; pp. 65–86.
6. Fan, Z.P.; Li, G.M.; Liu, Y. Processes and methods of information fusion for ranking products based on online reviews: An overview. Inf. Fusion **2020**, 60, 87–97.
7. Rodríguez, R.M.; Bedregal, B.; Bustince, H.; Dong, Y.; Farhadinia, B.; Kahraman, C.; Martínez, L.; Torra, V.; Xu, Y.; Xu, Z.; et al. A position and perspective analysis of hesitant fuzzy sets on information fusion in decision making. Towards high quality progress. Inf. Fusion **2016**, 29, 89–97.
8. Liu, Y.; Fan, X.; Lv, C.; Wu, J.; Li, L.; Ding, D. An innovative information fusion method with adaptive Kalman filter for integrated INS/GPS navigation of autonomous vehicles. Mech. Syst. Signal Process. **2018**, 100, 605–616.
9. Zhang, C.; Yang, Z.; He, X.; Deng, L. Multimodal intelligence: Representation learning, information fusion, and applications. IEEE J. Sel. Top. Signal Process. **2020**, 14, 478–493.
10. Xie, C.; Bai, J.; Zhu, W.; Lu, G.; Wang, H. Lightning risk assessment of transmission lines based on DS theory of evidence and entropy-weighted grey correlation analysis. In Proceedings of the 2017 IEEE Conference on Energy Internet and Energy System Integration (EI2), Beijing, China, 26–28 November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6.
11. Liu, Z.; Zhang, X.; Niu, J.; Dezert, J. Combination of Classifiers With Different Frames of Discernment Based on Belief Functions. IEEE Trans. Fuzzy Syst. **2021**, 29, 1764–1774.
12. Pan, Y.; Zhang, L.; Wu, X.; Skibniewski, M.J. Multi-classifier information fusion in risk analysis. Inf. Fusion **2020**, 60, 121–136.
13. Liu, Z.G.; Liu, Y.; Dezert, J.; Cuzzolin, F. Evidence combination based on credal belief redistribution for pattern classification. IEEE Trans. Fuzzy Syst. **2020**, 28, 618–631.
14. Li, P.; Wei, C. An emergency decision-making method based on DS evidence theory for probabilistic linguistic term sets. Int. J. Disaster Risk Reduct. **2019**, 37, 101178.
15. Dempster, A.P. Upper and Lower Probabilities Induced by a Multi-valued Mapping. Ann. Math. Stat. **1967**, 38, 325–339.
16. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976.
17. Zadeh, L.A. A simple view of the Dempster-Shafer theory of evidence and its implication for the rule of combination. AI Mag. **1986**, 7, 85.
18. He, Z.; Jiang, W. An evidential dynamical model to predict the interference effect of categorization on decision making results. Knowl.-Based Syst. **2018**, 150, 139–149.
19. Ren, Z.; Liao, H. Combining conflicting evidence by constructing evidence’s angle-distance ordered weighted averaging pairs. Int. J. Fuzzy Syst. **2021**, 23, 494–505.
20. Tang, Y.; Wu, D.; Liu, Z. A new approach for generation of generalized basic probability assignment in the evidence theory. Pattern Anal. Appl. **2021**, 24, 1007–1023.
21. Fei, L.; Xia, J.; Feng, Y.; Liu, L. A novel method to determine basic probability assignment in Dempster-Shafer theory and its application in multi-sensor information fusion. Int. J. Distrib. Sens. Netw. **2019**, 15, 1550147719865876.
22. Jiang, W.; Zhan, J.; Zhou, D.; Li, X. A method to determine generalized basic probability assignment in the open world. Math. Probl. Eng. **2016**, 2016, 3878634.
23. Wang, K. A new multi-Sensor target recognition framework based on Dempster-Shafer evidence theory. Int. J. Perform. Eng. **2018**, 14, 1224.
24. Deng, Y.; Sadiq, R.; Jiang, W.; Tesfamariam, S. Risk analysis in a linguistic environment: A fuzzy evidential reasoning-based approach. Expert Syst. Appl. **2011**, 38, 15438–15446.
25. Pan, Y.; Zhang, L.; Li, Z.; Ding, L. Improved fuzzy Bayesian network-based risk analysis with interval-valued fuzzy sets and D-S evidence theory. IEEE Trans. Fuzzy Syst. **2019**, 28, 2063–2077.
26. Lin, S.; Li, C.; Xu, F.; Li, W. The strategy research on electrical equipment condition-based maintenance based on cloud model and grey DS evidence theory. Intell. Decis. Technol. **2018**, 12, 283–292.
27. Zhu, C.; Qin, B.; Xiao, F.; Cao, Z.; Pandey, H.M. A fuzzy preference-based Dempster-Shafer evidence theory for decision fusion. Inf. Sci. **2021**, 570, 306–322.
28. Deng, X.; Han, D.; Dezert, J.; Deng, Y.; Shyr, Y. Evidence combination from an evolutionary game theory perspective. IEEE Trans. Cybern. **2015**, 46, 2070–2082.
29. Zangeneh Soroush, M.; Maghooli, K.; Setarehdan, S.K.; Nasrabadi, A.M. A novel approach to emotion recognition using local subset feature selection and modified Dempster-Shafer theory. Behav. Brain Funct. **2018**, 14, 1–15.
30. Jiang, W.; Zhan, J. A modified combination rule in generalized evidence theory. Appl. Intell. **2017**, 46, 630–640.
31. Yager, R.R. Arithmetic and other operations on Dempster-Shafer structures. Int. J. Man-Mach. Stud. **1986**, 25, 357–366.
32. Smarandache, F.; Dezert, J. Advances and Applications of DSmT for Information Fusion (Collected Works); AMRES: Belgrade, Serbia, 2006; Volume 2.
33. Gao, X.; Liu, F.; Pan, L.; Deng, Y.; Tsai, S.B. Uncertainty measure based on Tsallis entropy in evidence theory. Int. J. Intell. Syst. **2019**, 34, 3105–3120.
34. Lin, Y.; Li, Y.; Yin, X.; Dou, Z. Multisensor fault diagnosis modeling based on the evidence theory. IEEE Trans. Reliab. **2018**, 67, 513–521.
35. Murphy, C.K. Combining belief functions when evidence conflicts. Decis. Support Syst. **2000**, 29, 1–9.
36. Song, Y.; Wang, X.; Zhu, J.; Lei, L. Sensor dynamic reliability evaluation based on evidence theory and intuitionistic fuzzy sets. Appl. Intell. **2018**, 48, 3950–3962.
37. Weng, J.; Xiao, F.; Cao, Z. Uncertainty modelling in multi-agent information fusion systems. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, Auckland, New Zealand, 9–13 May 2020; pp. 1494–1502.
38. Yin, L.; Deng, X.; Deng, Y. The negation of a basic probability assignment. IEEE Trans. Fuzzy Syst. **2018**, 27, 135–143.
39. Wu, B.; Qiu, W.; Huang, W.; Meng, G.; Huang, J.; Xu, S. A multi-source information fusion approach in tunnel collapse risk analysis based on improved Dempster-Shafer evidence theory. Sci. Rep. **2022**, 12, 1–17.
40. Perdikaris, P.; Raissi, M.; Damianou, A.; Lawrence, N.D.; Karniadakis, G.E. Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling. Proc. R. Soc. A Math. Phys. Eng. Sci. **2017**, 473, 20160751.
41. Price, D.T.; McKenney, D.W.; Nalder, I.A.; Hutchinson, M.F.; Kesteven, J.L. A comparison of two statistical methods for spatial interpolation of Canadian monthly mean climate data. Agric. For. Meteorol. **2000**, 101, 81–94.
42. Malik, A.; Sikka, G.; Verma, H.K. An image interpolation based reversible data hiding scheme using pixel value adjusting feature. Multimed. Tools Appl. **2017**, 76, 13025–13046.
43. Lopez-Martin, M.; Sanchez-Esguevillas, A.; Arribas, J.I.; Carro, B. Supervised contrastive learning over prototype-label embeddings for network intrusion detection. Inf. Fusion **2022**, 79, 200–228.
44. Wang, X.; Song, Y. Uncertainty measure in evidence theory with its applications. Appl. Intell. **2018**, 48, 1672–1688.
45. Yager, R.R. Entropy and specificity in a mathematical theory of evidence. In Classic Works of the Dempster-Shafer Theory of Belief Functions; Springer: Berlin, Germany, 2008; pp. 291–310.
46. Höhle, U. A general theory of fuzzy plausibility measures. J. Math. Anal. Appl. **1987**, 127, 346–364.
47. Song, Y.; Wang, X.; Wu, W.; Quan, W.; Huang, W. Evidence combination based on credibility and non-specificity. Pattern Anal. Appl. **2018**, 21, 167–180.
48. Jousselme, A.L.; Liu, C.; Grenier, D.; Bossé, É. Measuring ambiguity in the evidence theory. IEEE Trans. Syst. Man Cybern.-Part A Syst. Hum. **2006**, 36, 890–903.
49. Harmanec, D.; Klir, G.J. Measuring total uncertainty in Dempster-Shafer theory: A novel approach. Int. J. Gen. Syst. **1994**, 22, 405–419.
50. Deng, Y. Deng entropy. Chaos Solitons Fractals **2016**, 91, 549–553.
51. Yan, H.; Deng, Y. An improved belief entropy in evidence theory. IEEE Access **2020**, 8, 57505–57516.
52. Fisher, R.A. On the mathematical foundations of theoretical statistics. Philos. Trans. R. Soc. Lond. Ser. A Contain. Pap. A Math. Phys. Character **1922**, 222, 309–368.
53. Ranneby, B. The maximum spacing method. An estimation method related to the maximum likelihood method. Scand. J. Stat. **1984**, 11, 93–112.
54. Fisher, R. Iris; UCI Machine Learning Repository: Irvine, CA, USA, 1988.
55. Wine; UCI Machine Learning Repository: Irvine, CA, USA, 1991.
56. Wolberg, W.; Street, W.; Mangasarian, O. Breast Cancer Wisconsin (Diagnostic); UCI Machine Learning Repository: Irvine, CA, USA, 1995.
57. Koklu, M.; Ozkan, I.A. Multiclass classification of dry beans using computer vision and machine learning techniques. Comput. Electron. Agric. **2020**, 174, 105507.
58. Geisser, S. A predictive approach to the random effect model. Biometrika **1974**, 61, 101–107.
59. Xiao, F. A new divergence measure for belief functions in D–S evidence theory for multisensor data fusion. Inf. Sci. **2020**, 514, 462–483.
60. Chen, Q.; Whitbrook, A.; Aickelin, U.; Roadknight, C. Data classification using the Dempster-Shafer method. J. Exp. Theor. Artif. Intell. **2014**, 26, 493–517.
61. Thirunavukkarasu, K.; Singh, A.S.; Rai, P.; Gupta, S. Classification of IRIS dataset using classification based KNN algorithm in supervised learning. In Proceedings of the 2018 4th International Conference on Computing Communication and Automation (ICCCA), Greater Noida, India, 14–15 December 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–4.
62. Eldem, A.; Eldem, H.; Üstün, D. A model of deep neural network for iris classification with different activation functions. In Proceedings of the 2018 International Conference on Artificial Intelligence and Data Processing (IDAP), Malatya, Turkey, 28–30 September 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–4.

**Figure 4.** Eigenvalues and Gaussian distribution functions of the three irises under the corresponding SL, PL, SW, PW features. BPAs were generated based on the intersection of the eigenvalues with the Gaussian functions under the corresponding features.

**Figure 5.** Accuracy under different training ratios and padding ratios in the wine classification task.

**Figure 8.** Accuracy of the proposed method with different padding ratios on different training ratios.

| Parameters | Category | SL | SW | PL | PW |
|---|---|---|---|---|---|
| $\mu$ | iris-setosa | 4.983 | 3.393 | 1.478 | 0.243 |
| | iris-versicolor | 5.950 | 2.796 | 4.261 | 1.322 |
| | iris-virginica | 6.566 | 2.989 | 5.532 | 2.030 |
| $\sigma$ | iris-setosa | 1.267 | 1.302 | 0.678 | 0.373 |
| | iris-versicolor | 1.782 | 1.093 | 1.717 | 0.744 |
| | iris-virginica | 2.345 | 1.286 | 2.010 | 1.094 |

| SL | SW | PL | PW | Ground Truth |
|---|---|---|---|---|
| 5.9 | 3.0 | 5.1 | 1.8 | iris-virginica |

| Feature | m(A) | m(B) | m(C) | m(A,B) | m(A,C) | m(B,C) | m(A,B,C) |
|---|---|---|---|---|---|---|---|
| SL | 0.111 | 0.212 | 0.172 | 0.111 | 0.111 | 0.172 | 0.111 |
| SW | 0.132 | 0.149 | 0.173 | 0.132 | 0.132 | 0.149 | 0.132 |
| PL | 0.000 | 0.293 | 0.415 | 0.000 | 0.000 | 0.293 | 0.000 |
| PW | 0.000 | 0.273 | 0.454 | 0.000 | 0.000 | 0.273 | 0.000 |

| Feature | ${\mathit{m}}_{\mathit{nr}}$(A) | ${\mathit{m}}_{\mathit{nr}}$(B) | ${\mathit{m}}_{\mathit{nr}}$(C) | ${\mathit{m}}_{\mathit{nr}}$(A,B) | ${\mathit{m}}_{\mathit{nr}}$(A,C) | ${\mathit{m}}_{\mathit{nr}}$(B,C) | ${\mathit{m}}_{\mathit{nr}}$(A,B,C) |
|---|---|---|---|---|---|---|---|
| SL | 0.148 | 0.131 | 0.138 | 0.148 | 0.148 | 0.138 | 0.148 |
| SW | 0.145 | 0.142 | 0.138 | 0.145 | 0.145 | 0.142 | 0.145 |
| PL | 0.167 | 0.118 | 0.098 | 0.167 | 0.167 | 0.118 | 0.167 |
| PW | 0.167 | 0.121 | 0.092 | 0.167 | 0.167 | 0.121 | 0.167 |

| ${\mathit{E}}_{\mathit{SL}}$ | ${\mathit{E}}_{\mathit{SW}}$ | ${\mathit{E}}_{\mathit{PL}}$ | ${\mathit{E}}_{\mathit{PW}}$ |
|---|---|---|---|
| 1.852 | 1.846 | 1.868 | 1.870 |

| ${\mathit{W}}_{\mathit{SL}}$ | ${\mathit{W}}_{\mathit{SW}}$ | ${\mathit{W}}_{\mathit{PL}}$ | ${\mathit{W}}_{\mathit{PW}}$ |
|---|---|---|---|
| 0.251 | 0.252 | 0.249 | 0.249 |

| ${\mathit{m}}_{\mathit{w}}$(A) | ${\mathit{m}}_{\mathit{w}}$(B) | ${\mathit{m}}_{\mathit{w}}$(C) | ${\mathit{m}}_{\mathit{w}}$(A,B) | ${\mathit{m}}_{\mathit{w}}$(A,C) | ${\mathit{m}}_{\mathit{w}}$(B,C) | ${\mathit{m}}_{\mathit{w}}$(A,B,C) |
|---|---|---|---|---|---|---|
| 0.157 | 0.128 | 0.116 | 0.157 | 0.157 | 0.130 | 0.157 |

| m(A) | m(B) | m(C) |
|---|---|---|
| 0.628 | 0.230 | 0.141 |

| A | B | C |
|---|---|---|
| 59 | 71 | 48 |

**Table 11.** Comparison of the classification accuracy on each category, mean accuracy and variance of the proposed method with other methods.

| Method | Iris-Setosa | Iris-Versicolor | Iris-Virginica | Average | Variance |
|---|---|---|---|---|---|
| Dempster’s method [15] | 1.0000 | 0.9969 | 0.7898 | 0.9289 | 0.0097 |
| Murphy’s method [35] | 1.0000 | 0.9969 | 0.7898 | 0.9289 | 0.0097 |
| Xiao’s method [59] | 1.0000 | 0.9969 | 0.8039 | 0.9336 | 0.0084 |
| Chen et al.’s method [60] | 1.0000 | 0.9000 | 0.9600 | 0.9533 | 0.0017 |
| Proposed Method | 1.0000 | 0.9255 | 0.9420 | 0.9558 | 0.0010 |


© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
