A Variation of the Algorithm to Achieve the Maximum Entropy for Belief Functions
Abstract
1. Introduction
2. Background
2.1. Theory of Evidence
2.2. Uncertainty Measures in Evidence Theory
2.3. Algorithm to Compute the Maximum Entropy
1. Find a nonempty set $A \subseteq X$ such that $\frac{\mathrm{Bel}(A)}{|A|}$ is maximal. If there exists more than one set $A$ that attains that maximum, choose the one with maximal cardinality.
2. For each $x \in A$, put $p(x) = \frac{\mathrm{Bel}(A)}{|A|}$.
3. For each $B \subseteq X \setminus A$, put $\mathrm{Bel}(B) = \mathrm{Bel}(B \cup A) - \mathrm{Bel}(A)$.
4. Put $X = X \setminus A$.
5. If $X \neq \emptyset$ and $\mathrm{Bel}(X) > 0$, then go to Step 1.
6. If $X \neq \emptyset$ and $\mathrm{Bel}(X) = 0$, then put $p(x) = 0$ for all $x \in X$.
7. Calculate the maximum entropy $S = -\sum_{x \in X} p(x) \log_2 p(x)$.

A runnable sketch of these steps is given below.
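To make these steps concrete, the following is a minimal Python sketch of the procedure. It assumes the belief function is given by its mass assignment as a dictionary from frozensets to masses; the names `bel`, `max_entropy_meyerowitz`, and `entropy` are our own illustration, not from Meyerowitz et al., and the Step 3 update is realized on the masses by projecting the focal sets onto $X \setminus A$.

```python
from itertools import combinations
from math import log2

def bel(m, A):
    """Bel(A): total mass of the focal sets contained in A."""
    return sum(v for E, v in m.items() if E <= A)

def max_entropy_meyerowitz(m, X):
    """Maximum-entropy distribution for the belief function with masses m.
    m: dict {frozenset: mass}; X: frozenset (the frame of discernment)."""
    p = {}
    while X and bel(m, X) > 0:  # Step 5 is the loop condition
        # Step 1: nonempty A maximizing Bel(A)/|A|; ties broken by largest |A|.
        subsets = [frozenset(c) for r in range(1, len(X) + 1)
                   for c in combinations(sorted(X), r)]
        A = max(subsets, key=lambda S: (bel(m, S) / len(S), len(S)))
        # Step 2: every element of A receives Bel(A)/|A|.
        share = bel(m, A) / len(A)
        for x in A:
            p[x] = share
        # Step 3: Bel(B) = Bel(B ∪ A) − Bel(A) on the reduced frame, realized
        # on the masses: focal sets inside A are absorbed, the rest lose A.
        new_m = {}
        for E, v in m.items():
            R = E - A
            if R:
                new_m[R] = new_m.get(R, 0.0) + v
        m = new_m
        X = X - A  # Step 4
    for x in X:    # Step 6: Bel(X) = 0, so the leftover elements get 0.
        p[x] = 0.0
    return p

def entropy(p):
    """Step 7: Shannon entropy (in bits) of the resulting distribution."""
    return -sum(v * log2(v) for v in p.values() if v > 0)
```

Enumerating every subset makes each pass exponential in $|X|$, so this sketch only mirrors the steps; it is not an efficient implementation.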
3. A Computational Improvement of Meyerowitz et al.’s Algorithm
1. Find a nonempty set $A \subseteq X$ such that $\frac{\mathrm{Bel}(A)}{|A|}$ is maximal. If there exists more than one set $A$ that attains that maximum, choose the one with maximal cardinality.
2. Find a nonempty set $B \subseteq X$ such that $\frac{\mathrm{Pl}(B)}{|B|}$ is minimal. If there exists more than one set $B$ that attains that minimum, choose the one with minimal cardinality.
3. For each $x \in A$, put $p(x) = \frac{\mathrm{Bel}(A)}{|A|}$.
4. For each $C \subseteq X \setminus A$, put $\mathrm{Bel}(C) = \mathrm{Bel}(C \cup A) - \mathrm{Bel}(A)$.
5. Put $X = X \setminus A$.
6. For each $x \in B$, put $p(x) = \frac{\mathrm{Pl}(B)}{|B|}$.
7. Put $X = X \setminus B$.
8. For each $C \subseteq X$, put $\mathrm{Pl}(C) = \mathrm{Pl}(C \cup B) - \mathrm{Pl}(B)$.
9. If $X \neq \emptyset$ and $\mathrm{Bel}(X) > 0$, then go to the first step.
10. If $X \neq \emptyset$ and $\mathrm{Bel}(X) = 0$, then put $p(x) = 0$ for all $x \in X$.
11. Calculate the maximum entropy $S = -\sum_{x \in X} p(x) \log_2 p(x)$.

A Python sketch of one reading of these steps is given below.
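Under the same representation, here is a sketch of one reading of the variation, in which each pass fixes both the block with maximal $\mathrm{Bel}(A)/|A|$ and, when it is disjoint from it, the block with minimal $\mathrm{Pl}(B)/|B|$. The overlap handling and the absorption of the focal sets that meet $B$ are our own assumptions about Steps 6–8, not the authors' reference implementation; `pl` and `max_entropy_variation` are illustrative names, and `bel`, `combinations`, and the other helpers are reused from the previous sketch.

```python
def pl(m, B):
    """Pl(B): total mass of the focal sets that intersect B."""
    return sum(v for E, v in m.items() if E & B)

def max_entropy_variation(m, X):
    """Two-ended variation (assumed reading of Steps 1-11)."""
    p = {}
    while X and bel(m, X) > 0:  # Step 9 is the loop condition
        subsets = [frozenset(c) for r in range(1, len(X) + 1)
                   for c in combinations(sorted(X), r)]
        # Steps 1-2: extreme blocks from both ends of the ordering.
        A = max(subsets, key=lambda S: (bel(m, S) / len(S), len(S)))
        B = min(subsets, key=lambda S: (pl(m, S) / len(S), len(S)))
        share_A = bel(m, A) / len(A)
        for x in A:              # Step 3
            p[x] = share_A
        if B & A:
            B = frozenset()      # overlapping blocks: keep only the A-step
        else:
            share_B = pl(m, B) / len(B)
            for x in B:          # Step 6
                p[x] = share_B
        removed = A | B
        # Steps 4-5 and 7-8: shrink the frame and update the masses; focal
        # sets inside A or meeting B are absorbed (assumed dual update).
        new_m = {}
        for E, v in m.items():
            if B and E & B:
                continue
            R = E - removed
            if R:
                new_m[R] = new_m.get(R, 0.0) + v
        m = new_m
        X = X - removed
    for x in X:  # Step 10
        p[x] = 0.0
    return p
```

Fixing both extreme blocks per pass is what reduces the number of iterations in the examples below.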
3.1. Justification
- For $n = 1$:
- For $n = 2$:
- Now, we assume the result holds for $n$; then
3.2. Example 1
- Meyerowitz et al.’s algorithm
- First iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is reached for a first set $A$. So, we have $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we update the function $\mathrm{Bel}$, and as $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a second iteration.
- Second iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is attained for a new set $A$. Thereby, it holds that $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we update the function $\mathrm{Bel}$, and since $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a third iteration.
- Third iteration: For this last iteration, a single set remains, and its elements receive the remaining probability. With this, we arrive at $X = \emptyset$; so, we can now calculate $S$. Now, we proceed to the calculation of the maximum entropy:
- Improvement of Meyerowitz et al.’s algorithm
- First iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is reached for a first set $A$. Therefore, we have $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we look at the minimum value of $\mathrm{Pl}(B)/|B|$, attained for a set $B$, and we have $p(x) = \mathrm{Pl}(B)/|B|$ for each $x \in B$. We update the value of $X$, the function $\mathrm{Bel}$, and the function $\mathrm{Pl}$, and since $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a second iteration.
- Second iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is attained for a new set $A$. So, it is satisfied that $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we look at the minimum value of $\mathrm{Pl}(B)/|B|$, which in this case is the same value that we obtained for $A$, and we have $p(x) = \mathrm{Pl}(B)/|B|$ for each $x \in B$. With this, we arrive at $X = \emptyset$. Now, we can calculate $S$, and we proceed to the calculation of the maximum entropy (an illustrative run on a toy mass assignment is sketched after this list):
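Because the concrete sets and masses of this example are not reproduced above, the following run uses a small mass assignment of our own choosing (purely hypothetical, not the one in this example) to show how the two sketches are exercised:

```python
# Hypothetical frame and mass assignment, chosen only for illustration.
X = frozenset({"x1", "x2", "x3"})
m = {frozenset({"x1"}): 0.5,
     frozenset({"x1", "x2", "x3"}): 0.5}

p1 = max_entropy_meyerowitz(dict(m), X)  # needs two iterations
p2 = max_entropy_variation(dict(m), X)   # finishes in a single iteration
print(p1)           # x1 -> 0.5, x2 -> 0.25, x3 -> 0.25 (key order may vary)
print(p2 == p1)     # True: both reach the same distribution
print(entropy(p1))  # 1.5 bits
```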
3.3. Example 2
- Meyerowitz et al.’s algorithm
- First iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is reached for a first set $A$. So, we have $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we update the function $\mathrm{Bel}$, and as $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a second iteration.
- Second iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is attained for a new set $A$. Thereby, it holds that $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we update the function $\mathrm{Bel}$, and since $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a third iteration.
- Third iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is reached for a new set $A$. In this way, we have $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we update the function $\mathrm{Bel}$, and as $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a fourth iteration.
- Fourth iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is attained for a new set $A$. Hence, it is satisfied that $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we update the function $\mathrm{Bel}$, and since $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a fifth iteration.
- Fifth iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is reached for a new set $A$. Consequently, it holds that $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we update the function $\mathrm{Bel}$, and as $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a sixth iteration.
- Sixth iteration: For this last iteration, a single set remains, and its elements receive the remaining probability. With this, we arrive at $X = \emptyset$; so, we can now calculate $S$. Now, we proceed to the calculation of the maximum entropy:
- Improvement of Meyerowitz et al.’s algorithm
- First iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is reached for a first set $A$. Therefore, we have $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we look at the minimum value of $\mathrm{Pl}(B)/|B|$, attained for a set $B$, and we have that $p(x) = \mathrm{Pl}(B)/|B|$ for each $x \in B$. We update the value of $X$, the function $\mathrm{Bel}$, and the function $\mathrm{Pl}$, and since $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a second iteration.
- Second iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is attained for a new set $A$. So, it is satisfied that $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we look at the minimum value of $\mathrm{Pl}(B)/|B|$, attained for a new set $B$, and we have that $p(x) = \mathrm{Pl}(B)/|B|$ for each $x \in B$. We update the value of $X$, the function $\mathrm{Bel}$, and the function $\mathrm{Pl}$, and as $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a third iteration.
- Third iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is reached for a new set $A$. Thus, we have $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we look at the minimum value of $\mathrm{Pl}(B)/|B|$, attained for a last set $B$, and we have that $p(x) = \mathrm{Pl}(B)/|B|$ for each $x \in B$. With this, we arrive at $X = \emptyset$. Now, we can calculate $S$, and we proceed to the calculation of the maximum entropy (a worked evaluation with made-up values is shown after this list):
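The final step of such a walkthrough is a direct evaluation of Shannon's formula. With a hypothetical resulting distribution $p = (0.4, 0.3, 0.15, 0.15)$ (made-up values, not those of this example), Step 11 reads:

$$S(p) = -\sum_{x \in X} p(x) \log_2 p(x) = -\left(0.4 \log_2 0.4 + 0.3 \log_2 0.3 + 2 \cdot 0.15 \log_2 0.15\right) \approx 1.87 \text{ bits}.$$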
3.4. Example 3
- Meyerowitz et al.’s algorithm
- First iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is reached for a first set $A$. So, we have $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we update the function $\mathrm{Bel}$, and as $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a second iteration.
- Second iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is attained for a new set $A$. Thereby, it holds that $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we update the function $\mathrm{Bel}$, and since $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a third iteration.
- Third iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is reached for a new set $A$. In this way, we have $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we update the function $\mathrm{Bel}$, and as $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a fourth iteration.
- Fourth iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is attained for a new set $A$. Hence, it is satisfied that $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we update the function $\mathrm{Bel}$, and since $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a fifth iteration.
- Fifth iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is reached for a new set $A$. Consequently, it holds that $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we update the function $\mathrm{Bel}$, and as $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a sixth iteration.
- Sixth iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is reached for a new set $A$. In this way, we have $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we update the function $\mathrm{Bel}$, and as $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a seventh iteration.
- Seventh iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is reached for a new set $A$. Hence, it is satisfied that $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we update the function $\mathrm{Bel}$, and as $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need an eighth iteration.
- Eighth iteration: For this last iteration, a single set remains, and its elements receive the remaining probability. With this, we arrive at $X = \emptyset$; so, we can now calculate $S$. Now, we proceed to the calculation of the maximum entropy:
- Improvement of Meyerowitz et al.’s algorithm
- First iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is reached for a first set $A$. Therefore, we have $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we look at the minimum value of $\mathrm{Pl}(B)/|B|$, attained for a set $B$, and we have $p(x) = \mathrm{Pl}(B)/|B|$ for each $x \in B$. We update the value of $X$, the function $\mathrm{Bel}$, and the function $\mathrm{Pl}$, and since $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a second iteration.
- Second iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is attained for a new set $A$. So, it is satisfied that $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we look at the minimum value of $\mathrm{Pl}(B)/|B|$, attained for a new set $B$, and we have that $p(x) = \mathrm{Pl}(B)/|B|$ for each $x \in B$. We update the value of $X$, the function $\mathrm{Bel}$, and the function $\mathrm{Pl}$, and as $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a third iteration.
- Third iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is reached for a new set $A$. Thus, we have $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we look at the minimum value of $\mathrm{Pl}(B)/|B|$, attained for a new set $B$, and we have that $p(x) = \mathrm{Pl}(B)/|B|$ for each $x \in B$. We update the value of $X$, the function $\mathrm{Bel}$, and the function $\mathrm{Pl}$, and as $X \neq \emptyset$ and there are sets whose $\mathrm{Bel}$ value is nonzero, we need a fourth iteration.
- Fourth iteration: The maximum of the function $\mathrm{Bel}(A)/|A|$ is reached for a last set $A$. Thus, we have $p(x) = \mathrm{Bel}(A)/|A|$ for each $x \in A$. Now, we look at the minimum value of $\mathrm{Pl}(B)/|B|$, attained for a last set $B$, and we have $p(x) = \mathrm{Pl}(B)/|B|$ for each $x \in B$. With this, we arrive at $X = \emptyset$. Consequently, we can now calculate $S$, and we proceed to the calculation of the maximum entropy:
3.5. Experiments
4. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Dempster, A.P. Upper and Lower Probabilities Induced by a Multivalued Mapping. Ann. Math. Stat. 1967, 38, 325–339.
- Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976.
- Beynon, M.; Curry, B.; Morgan, P. The Dempster–Shafer theory of evidence: An alternative approach to multicriteria decision modelling. Omega 2000, 28, 37–50.
- Denœux, T. A k-Nearest Neighbor Classification Rule Based on Dempster-Shafer Theory. In Classic Works of the Dempster-Shafer Theory of Belief Functions; Yager, R.R., Liu, L., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 737–760.
- Buede, D.M.; Girardi, P. A target identification comparison of Bayesian and Dempster-Shafer multisensor fusion. IEEE Trans. Syst. Man Cybern.-Part A Syst. Hum. 1997, 27, 569–577.
- Ip, H.H.S.; Ng, J.M.C. Human face recognition using Dempster-Shafer theory. In Proceedings of the 1st International Conference on Image Processing, Austin, TX, USA, 13–16 November 1994; Volume 2, pp. 292–295.
- Zheng, H.; Tang, Y. Deng Entropy Weighted Risk Priority Number Model for Failure Mode and Effects Analysis. Entropy 2020, 22, 280.
- Tang, Y.; Tan, S.; Zhou, D. An Improved Failure Mode and Effects Analysis Method Using Belief Jensen–Shannon Divergence and Entropy Measure in the Evidence Theory. Arab. J. Sci. Eng. 2022, 48, 7163–7176.
- Basir, O.; Yuan, X. Engine fault diagnosis based on multi-sensor information fusion using Dempster-Shafer evidence theory. Inf. Fusion 2007, 8, 379–386.
- Frittella, S.; Manoorkar, K.; Palmigiano, A.; Tzimoulis, A.; Wijnberg, N. Toward a Dempster-Shafer theory of concepts. Int. J. Approx. Reason. 2020, 125, 14–25.
- Chen, T.M.; Venkataramanan, V. Dempster-Shafer theory for intrusion detection in ad hoc networks. IEEE Internet Comput. 2005, 9, 35–41.
- Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
- Yager, R.R. Entropy and specificity in a mathematical theory of evidence. Int. J. Gen. Syst. 1983, 9, 249–260.
- Klir, G.; Wierman, M. Uncertainty-Based Information: Elements of Generalized Information Theory; Studies in Fuzziness and Soft Computing; Physica-Verlag HD: Heidelberg, Germany, 1999.
- Abellán, J.; Masegosa, A. Requirements for total uncertainty measures in Dempster-Shafer theory of evidence. Int. J. Gen. Syst. 2008, 37, 733–747.
- Harmanec, D.; Klir, G.J. Measuring total uncertainty in Dempster-Shafer Theory: A novel approach. Int. J. Gen. Syst. 1994, 22, 405–419.
- Meyerowitz, A.; Richman, F.; Walker, E. Calculating maximum-entropy probability densities for belief functions. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 1994, 2, 377–389.
- Deng, Y. Deng entropy. Chaos Solitons Fractals 2016, 91, 549–553.
- Abellán, J. Analyzing properties of Deng entropy in the theory of evidence. Chaos Solitons Fractals 2017, 95, 195–199.
- Abellán, J.; Bossé, É. Critique of Recent Uncertainty Measures Developed Under the Evidence Theory and Belief Intervals. IEEE Trans. Syst. Man Cybern. Syst. 2020, 50, 1186–1192.
- Abellán, J.; Moral, S. Building classification trees using the total uncertainty criterion. Int. J. Intell. Syst. 2003, 18, 1215–1225.
- Abellán, J. Ensembles of decision trees based on imprecise probabilities and uncertainty measures. Inf. Fusion 2013, 14, 423–430.
- Abellán, J.; Mantas, C.J.; Castellano, J.G. AdaptativeCC4.5: Credal C4.5 with a rough class noise estimator. Expert Syst. Appl. 2018, 92, 363–379.
- Moral-García, S.; Mantas, C.J.; Castellano, J.G.; Benítez, M.D.; Abellán, J. Bagging of credal decision trees for imprecise classification. Expert Syst. Appl. 2020, 141, 112944.
- Moral-García, S.; Abellán, J. Maximum of Entropy for Belief Intervals Under Evidence Theory. IEEE Access 2020, 8, 118017–118029.
- Hartley, R.V.L. Transmission of Information. Bell Syst. Tech. J. 1928, 7, 535–563.
- Dubois, D.; Prade, H. A note on measures of specificity for fuzzy sets. Int. J. Gen. Syst. 1985, 10, 279–283.
- Abellán, J.; Moral, S. A Non-specificity measure for convex sets of probability distributions. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2000, 8, 357–367.
- Cui, H.; Liu, Q.; Zhang, J.; Kang, B. An Improved Deng Entropy and Its Application in Pattern Recognition. IEEE Access 2019, 7, 18284–18292.
- Kang, B.; Deng, Y. The Maximum Deng Entropy. IEEE Access 2019, 7, 120758–120765.
- Zhu, R.; Chen, J.; Kang, B. Power Law and Dimension of the Maximum Value for Belief Distribution With the Maximum Deng Entropy. IEEE Access 2020, 8, 47713–47719.
- Voorbraak, F. A computationally efficient approximation of Dempster-Shafer theory. Int. J. Man-Mach. Stud. 1989, 30, 525–536.
- Cobb, B.R.; Shenoy, P.P. On the plausibility transformation method for translating belief function models to probability models. Int. J. Approx. Reason. 2006, 41, 314–330.
- Jirousek, R.; Shenoy, P.P. A new definition of entropy of belief functions in the Dempster–Shafer theory. Int. J. Approx. Reason. 2018, 92, 49–65.
- Pan, Q.; Zhou, D.; Tang, Y.; Li, X.; Huang, J. A Novel Belief Entropy for Measuring Uncertainty in Dempster-Shafer Evidence Theory Framework Based on Plausibility Transformation and Weighted Hartley Entropy. Entropy 2019, 21, 163.
- Zhao, Y.; Ji, D.; Yang, X.; Fei, L.; Zhai, C. An Improved Belief Entropy to Measure Uncertainty of Basic Probability Assignments Based on Deng Entropy and Belief Interval. Entropy 2019, 21, 1122.
| Original Alg. | Improved Alg. | Percentage of Improvement |
|---|---|---|
| 29.10 | 26.69 | 8.28% |
| 120.05 | 101.20 | 15.70% |
| 1077.24 | 868.66 | 19.36% |
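As a rough indication of how timings like these can be reproduced with the sketches from Sections 2.3 and 3, the harness below generates random mass assignments and times both implementations. The frame size, the number of focal sets, and the `random_mass` generator are hypothetical choices of ours, not the experimental setup of the paper.

```python
import random
import time

def random_mass(X, n_focal, rng):
    """Build a normalized random mass assignment with up to n_focal focal sets."""
    elements = sorted(X)
    focal = [frozenset(rng.sample(elements, rng.randint(1, len(elements))))
             for _ in range(n_focal)]
    weights = [rng.random() for _ in focal]
    total = sum(weights)
    m = {}
    for E, w in zip(focal, weights):
        m[E] = m.get(E, 0.0) + w / total  # merge duplicate focal sets
    return m

rng = random.Random(0)
X = frozenset(f"x{i}" for i in range(8))            # frame size: made-up choice
cases = [random_mass(X, 5, rng) for _ in range(100)]
for algo in (max_entropy_meyerowitz, max_entropy_variation):
    start = time.perf_counter()
    for m in cases:
        algo(dict(m), X)
    print(algo.__name__, round(time.perf_counter() - start, 2), "s")
```

Both sketches enumerate every subset in each pass, so their absolute times are dominated by that enumeration; the point of the comparison is the reduced number of passes in the variation.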