Intrinsically Interpretable Gaussian Mixture Model
Abstract
1. Introduction
2. Background
2.1. GMM
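As a refresher on the model this paper builds on: a GMM represents the data density as a weighted sum of Gaussian components, and each point receives posterior "responsibilities" over the components. A minimal numpy sketch (the names below are illustrative, not the paper's code):

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Multivariate normal density N(x | mean, cov)."""
    d = len(mean)
    diff = x - mean
    norm = np.sqrt(((2 * np.pi) ** d) * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / norm

def responsibilities(x, weights, means, covs):
    """Posterior p(k | x): how much each component explains the point x."""
    likelihoods = np.array([w * gaussian_pdf(x, m, c)
                            for w, m, c in zip(weights, means, covs)])
    return likelihoods / likelihoods.sum()

# Toy 2-component mixture in 2-D.
weights = [0.5, 0.5]
means = [np.zeros(2), np.array([5.0, 5.0])]
covs = [np.eye(2), np.eye(2)]
r = responsibilities(np.array([0.1, -0.2]), weights, means, covs)
# A point near the first mean is assigned almost entirely to component 0.
```

These responsibilities are exactly the soft cluster memberships that the interpretation methods in Section 4 operate on.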
2.2. Interpretability
3. Related Work
4. Contribution: Intrinsic GMM Interpretations
4.1. Global Interpretation
Algorithm 1 Global interpretation

Algorithm 2 Local interpretation
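Only the captions of the two algorithm listings survive here. The global interpretation rests on divergence between the fitted Gaussian components; as one building block in that spirit (the Bhattacharyya distance between distributions recurs throughout the reference list), here is a sketch of its closed form for two Gaussians. This illustrates the flavor of the computation, not the authors' exact algorithm:

```python
import numpy as np

def bhattacharyya_gaussians(mu1, cov1, mu2, cov2):
    """Closed-form Bhattacharyya distance between N(mu1, cov1) and N(mu2, cov2)."""
    cov = (cov1 + cov2) / 2.0                      # averaged covariance
    diff = mu1 - mu2
    term_mean = 0.125 * diff @ np.linalg.solve(cov, diff)
    term_cov = 0.5 * np.log(np.linalg.det(cov) /
                            np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term_mean + term_cov

mu = np.zeros(2)
I = np.eye(2)
print(bhattacharyya_gaussians(mu, I, mu, I))        # identical components -> 0.0
print(bhattacharyya_gaussians(mu, I, mu + 3.0, I))  # separated components -> 2.25
```

The mean term measures separation of the component centers; the covariance term penalizes mismatched shapes, so well-separated components get large distances.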
4.2. Local Interpretation
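The local-interpretation tables in Section 5.3 report, for each feature, the cluster probability and distance when that feature is removed. One plausible realization of such removal (an illustrative sketch, not necessarily the paper's procedure) marginalizes the dropped feature out of each Gaussian, which for a multivariate normal is exact, and then recomputes the responsibilities:

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    d = len(mean)
    diff = x - mean
    norm = np.sqrt(((2 * np.pi) ** d) * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / norm

def cluster_probs(x, weights, means, covs):
    lik = np.array([w * gaussian_pdf(x, m, c)
                    for w, m, c in zip(weights, means, covs)])
    return lik / lik.sum()

def drop_feature(x, means, covs, j):
    """Marginalize feature j out of every component: delete its entry of the
    mean and its row/column of the covariance (exact for Gaussians)."""
    keep = [i for i in range(len(x)) if i != j]
    return (x[keep],
            [m[keep] for m in means],
            [c[np.ix_(keep, keep)] for c in covs])

# Two toy components that differ only in feature 0: removing feature 0
# should push the posterior toward 50/50.
weights = [0.5, 0.5]
means = [np.array([0.0, 0.0]), np.array([4.0, 0.0])]
covs = [np.eye(2), np.eye(2)]
x = np.array([0.5, 0.1])
p_full = cluster_probs(x, weights, means, covs)
x_r, means_r, covs_r = drop_feature(x, means, covs, 0)
p_drop = cluster_probs(x_r, weights, means_r, covs_r)
# p_full strongly favors component 0; p_drop is exactly [0.5, 0.5].
```

Comparing `p_full` with `p_drop` per feature gives exactly the kind of per-feature probability shifts tabulated in Section 5.3.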
5. Results and Discussion
5.1. Data Sets and Performance Metrics
- Iris: likely the best-known dataset in the machine learning literature. It has three classes, each representing a distinct type of iris plant described by four features: sepal length, sepal width, petal length, and petal width.
- Swiss banknotes [39]: measurements of the shape of genuine and forged bills. Six real-valued features (length, left, right, bottom, top, and diagonal) correspond to two classes: counterfeit (1) or genuine (0).
- Seeds: a University of California, Irvine (UCI) dataset of geometrical measurements of wheat kernels, described by seven real-valued features: area A, perimeter P, compactness C, length of the kernel, width of the kernel, asymmetry coefficient, and length of the kernel groove. The measurements correspond to three distinct varieties of wheat. Compactness is calculated as C = 4πA/P².
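The compactness feature of Seeds is derived from area and perimeter, and equals 1 only for a perfect circle. A one-line check (the kernel measurements below are made-up illustrative values, not rows from the dataset):

```python
import math

def compactness(area, perimeter):
    """Seeds 'compactness': C = 4*pi*A / P^2 (equals 1.0 for a perfect circle)."""
    return 4 * math.pi * area / perimeter ** 2

# A circle of radius 1 has A = pi and P = 2*pi, so C = 1 exactly.
print(round(compactness(math.pi, 2 * math.pi), 6))  # -> 1.0
# An illustrative elongated kernel: area ~15 mm^2, perimeter ~14.5 mm.
print(round(compactness(15.0, 14.5), 3))            # -> 0.897
```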
5.2. Global Interpretation
5.2.1. Iris Dataset
Iris Global Interpretation
5.2.2. Swiss Banknote Dataset
Swiss Banknote Global Interpretation
5.2.3. Seeds Dataset
5.3. Local Interpretation
5.3.1. Iris Dataset
5.3.2. Swiss Banknote Dataset
5.3.3. Seeds Dataset
5.4. Comparisons with LIME
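For readers unfamiliar with the baseline: LIME explains one prediction by sampling perturbations around the instance, weighting them by proximity, and fitting a linear surrogate whose coefficients serve as local feature importances. A self-contained numpy sketch of that idea (this is not the lime package and not the paper's exact comparison setup; all names are illustrative):

```python
import numpy as np

def lime_style_weights(predict, x, n_samples=2000, scale=0.5, seed=0):
    """Fit a locally weighted linear surrogate to `predict` around `x`.

    Returns one coefficient per feature; larger magnitude means more
    local influence on the black-box output.
    """
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0.0, scale, size=(n_samples, len(x)))  # perturbations
    y = np.array([predict(z) for z in Z])
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / (2 * scale ** 2))               # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])               # add intercept column
    sw = np.sqrt(w)[:, None]                                  # weighted least squares
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]                                          # drop the intercept

# A black box that depends only on feature 0: the surrogate should recover that.
f = lambda z: 3.0 * z[0] + 0.0 * z[1]
coefs = lime_style_weights(f, np.array([1.0, 1.0]))
# coefs ~ [3.0, 0.0]
```

In the comparisons below, such surrogate coefficients play the role that the intrinsic per-feature probability shifts play for the proposed GMM interpretation.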
5.4.1. Iris Dataset
5.4.2. Swiss Banknote Dataset
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Seeds
0.25 | 0.238 | 0.257 | 0.256 | 0.208 | 0.173 | |
0.239 | 0.300 | 0.276 | 0.281 | 0.180 | ||
0.246 | 0.318 | 0.592 | 0.576 | |||
0.282 | 0.464 | 0.347 | ||||
0.264 | 0.328 | |||||
0.715 |
0.006 | 0.0058 | 0.0057 | 0.00550 | 0.00560 | 0.00610 | |
0.0043 | 0.0038 | 0.00450 | 0.00684 | 0.00688 | ||
0.0159 | 0.00720 | 0.62900 | 0.02000 | |||
0.00742 | 0.07030 | 0.08100 | ||||
0.01790 | 0.00609 | |||||
0.10900 |
0.1825 | 0.1888 | 0.1831 | 0.1869 | 0.1740 | 0.17340 | |
0.1860 | 0.1715 | 0.1849 | 0.1991 | 0.20300 | ||
0.2826 | 0.2113 | 0.9308 | 0.21280 | |||
0.2166 | 0.3874 | 0.28760 | ||||
0.2877 | 0.16115 | |||||
0.30840 |
Appendix B. Used Data Points
Point | Feature values
---|---
iris-1 | [5.6, 3.0, 4.5, 1.5]
iris-2 | [6.1, 2.8, 4.7, 1.2]
iris-3 | [6.3, 3.3, 4.7, 1.6]
iris-4 | [7.2, 3.2, 6.0, 1.8]
Swiss-1 | [214.9, 130.3, 130.1, 8.7, 11.7, 140.2]
Swiss-2 | [214.9, 130.2, 130.2, 8.0, 11.2, 139.6]
References
- Michie, D. Machine learning in the next five years. In Proceedings of the 3rd European Conference on European Working Session on Learning, Glasgow, UK, 3–5 October 1988; Pitman Publishing, Inc.: Glasgow, UK, 1988; pp. 107–122.
- Shukla, P.; Verma, A.; Verma, S.; Kumar, M. Interpreting SVM for medical images using Quadtree. Multimed. Tools Appl. 2020, 79, 29353–29373.
- Palczewska, A.; Palczewski, J.; Robinson, R.M.; Neagu, D. Interpreting random forest classification models using a feature contribution method. In Integration of Reusable Systems; Springer: Berlin/Heidelberg, Germany, 2014; pp. 193–218.
- Samek, W.; Wiegand, T.; Müller, K.R. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv 2017, arXiv:1708.08296.
- Holzinger, A.; Saranti, A.; Molnar, C.; Biecek, P.; Samek, W. Explainable AI methods-a brief overview. In Proceedings of the xxAI-Beyond Explainable AI: International Workshop, Held in Conjunction with ICML 2020, Vienna, Austria, 18 July 2020; Revised and Extended Papers. Springer: Berlin/Heidelberg, Germany, 2022; pp. 13–38.
- Bennetot, A.; Donadello, I.; Qadi, A.E.; Dragoni, M.; Frossard, T.; Wagner, B.; Saranti, A.; Tulli, S.; Trocan, M.; Chatila, R.; et al. A practical tutorial on explainable ai techniques. arXiv 2021, arXiv:2111.14260.
- Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006.
- Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F.; Giannotti, F.; Pedreschi, D. A survey of methods for explaining black box models. ACM Comput. Surv. CSUR 2019, 51, 93.
- Tulio Ribeiro, M.; Singh, S.; Guestrin, C. “Why should i trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; ACM: New York, NY, USA, 2016; pp. 1135–1144.
- Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 818–833.
- Simonyan, K.; Vedaldi, A.; Zisserman, A. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. arXiv 2013, arXiv:cs.CV/1312.6034.
- Kim, B.; Rudin, C.; Shah, J.A. The bayesian case model: A generative approach for case-based reasoning and prototype classification. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 1952–1960.
- Wellawatte, G.P.; Seshadri, A.; White, A.D. Model agnostic generation of counterfactual explanations for molecules. Chem. Sci. 2022, 13, 3697–3705.
- Koh, P.W.; Liang, P. Understanding black-box predictions via influence functions. In Proceedings of the International Conference on Machine Learning, Sydney, NSW, Australia, 6–11 August 2017; pp. 1885–1894.
- Craven, M.; Shavlik, J.W. Extracting tree-structured representations of trained networks. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 1996; pp. 24–30.
- Henelius, A.; Puolamäki, K.; Boström, H.; Asker, L.; Papapetrou, P. A peek into the black box: Exploring classifiers by randomization. Data Min. Knowl. Discov. 2014, 28, 1503–1529.
- Pelleg, D.; Moore, A. Mixtures of rectangles: Interpretable soft clustering. In Proceedings of the Eighteenth International Conference on Machine Learning, Williamstown, MA, USA, 28 June–1 July 2001; pp. 401–408.
- Chen, J.; Chang, Y.; Hobbs, B.; Castaldi, P.; Cho, M.; Silverman, E.; Dy, J. Interpretable clustering via discriminative rectangle mixture model. In Proceedings of the 2016 IEEE 16th International Conference on Data Mining (ICDM), Barcelona, Spain, 12–15 December 2016; pp. 823–828.
- Saisubramanian, S.; Galhotra, S.; Zilberstein, S. Balancing the tradeoff between clustering value and interpretability. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, USA, 7–9 February 2020; pp. 351–357.
- De Koninck, P.; De Weerdt, J.; vanden Broucke, S.K. Explaining clusterings of process instances. Data Min. Knowl. Discov. 2017, 31, 774–808.
- Kim, B.; Khanna, R.; Koyejo, O.O. Examples are not enough, learn to criticize! criticism for interpretability. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 2280–2288.
- Carrizosa, E.; Kurishchenko, K.; Marín, A.; Morales, D.R. Interpreting clusters via prototype optimization. Omega 2022, 107, 102543.
- Dasgupta, S.; Frost, N.; Moshkovitz, M.; Rashtchian, C. Explainable k-means and k-medians clustering. In Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, 13–18 July 2020; pp. 12–18.
- Hsueh, P.Y.S.; Das, S. Interpretable Clustering for Prototypical Patient Understanding: A Case Study of Hypertension and Depression Subgroup Behavioral Profiling in National Health and Nutrition Examination Survey Data. In Proceedings of the AMIA, Washington, DC, USA, 4–8 November 2017.
- Kim, B.; Shah, J.A.; Doshi-Velez, F. Mind the Gap: A Generative Approach to Interpretable Feature Selection and Extraction. In Advances in Neural Information Processing Systems 28; Cortes, C., Lawrence, N.D., Lee, D.D., Sugiyama, M., Garnett, R., Eds.; Curran Associates, Inc.: Montreal, QC, Canada, 2015; pp. 2260–2268.
- Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 4765–4774.
- Slack, D.; Hilgard, S.; Jia, E.; Singh, S.; Lakkaraju, H. Fooling lime and shap: Adversarial attacks on post hoc explanation methods. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, USA, 7–9 February 2020; pp. 180–186.
- Sun, H.; Wang, S. Measuring the component overlapping in the Gaussian mixture model. Data Min. Knowl. Discov. 2011, 23, 479–502.
- Krzanowski, W.J. Distance between populations using mixed continuous and categorical variables. Biometrika 1983, 70, 235–243.
- Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
- Sibson, R. Information radius. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 1969, 14, 149–160.
- Bhattacharyya, A. On a measure of divergence between two statistical populations defined by their probability distributions. Bull. Calcutta Math. Soc. 1943, 35, 99–109.
- Matusita, K. Decision rule, based on the distance, for the classification problem. Ann. Inst. Stat. Math. 1956, 8, 67–77.
- AbdAllah, L.; Kaiyal, M. Distances over Incomplete Diabetes and Breast Cancer Data Based on Bhattacharyya Distance. Int. J. Med Health Sci. 2018, 12, 314–319.
- Kailath, T. The divergence and Bhattacharyya distance measures in signal selection. IEEE Trans. Commun. Technol. 1967, 15, 52–60.
- Nielsen, F.; Nock, R. Cumulant-free closed-form formulas for some common (dis) similarities between densities of an exponential family. arXiv 2020, arXiv:2003.02469.
- Guillerme, T.; Cooper, N. Effects of missing data on topological inference using a total evidence approach. Mol. Phylogenet. Evol. 2016, 94, 146–158.
- Garthwaite, P.H.; Koch, I. Evaluating the contributions of individual variables to a quadratic form. Aust. N. Z. J. Stat. 2016, 58, 99–119.
- Flury, B. Multivariate Statistics: A Practical Approach; Chapman & Hall, Ltd.: London, UK, 1988.
- Grinshpun, V. Application of Andrew’s plots to visualization of multidimensional data. Int. J. Environ. Sci. Educ. 2016, 11, 10539–10551.
- Cai, W.; Zhou, H.; Xu, L. Clustering Preserving Projections for High-Dimensional Data. J. Phys. Conf. Ser. 2020, 1693, 012031.
- Saranti, A.; Hudec, M.; Mináriková, E.; Takáč, Z.; Großschedl, U.; Koch, C.; Pfeifer, B.; Angerschmid, A.; Holzinger, A. Actionable Explainable AI (AxAI): A Practical Example with Aggregation Functions for Adaptive Classification and Textual Explanations for Interpretable Machine Learning. Mach. Learn. Knowl. Extr. 2022, 4, 924–953.
- Yeom, S.K.; Seegerer, P.; Lapuschkin, S.; Binder, A.; Wiedemann, S.; Müller, K.R.; Samek, W. Pruning by explaining: A novel criterion for deep neural network pruning. Pattern Recognit. 2021, 115, 107899.
- Covert, I.; Lundberg, S.M.; Lee, S.I. Explaining by Removing: A Unified Framework for Model Explanation. J. Mach. Learn. Res. 2021, 22, 9477–9566.
- Hase, P.; Xie, H.; Bansal, M. The out-of-distribution problem in explainability and search methods for feature importance explanations. Adv. Neural Inf. Process. Syst. 2021, 34, 3650–3666.
- Gevaert, A.; Saeys, Y. PDD-SHAP: Fast Approximations for Shapley Values using Functional Decomposition. arXiv 2022, arXiv:2208.12595.
Ref. | Approach | Config. | Output | Post Hoc |
---|---|---|---|---|
Continuous data ||||
[17] | Fits data into M-dimensional hyper-rectangles | # of clusters | Rule | No
[18] | Discriminative model learns rectangular decision rules | Domain expert for decision boundaries, # of clusters | Rule | No
Discrete data ||||
[12] | Uses a discrete mixture model, then applies a uniform distribution over all data to find the representative instance per cluster | # of clusters | Prototype | No
[19] | Simultaneously optimizes distance and interpretability | # of clusters, interpretability level, feature of interest | Rule | No
[23] | Uses k-means to extract class labels from cluster assignments, then returns a tree with k leaves | # of clusters | Tree | No
[24] | Supervised learner for similarity, then hierarchical clustering for the key features defining the different outcomes | Label provided by physician | Features, Prototype | Yes
[20] | After clustering the data, uses k SVM models (on cluster results) | Attribute template, search depth, early-stop parameter | Rule | Yes
[22] | Finds the prototype that maximizes true positives and minimizes false positives | Dissimilarity between individuals | Prototype | Yes
Binary data ||||
[25] | Finds a set of distinguishable dimensions per cluster by searching over logical formulas | # of clusters | Features | No
Features | ||||
---|---|---|---|---|
, | 0.27 | 0.810 | 0.00004 | 0.0002 |
, | 0.89 | 0.944 | 0.40000 | 0.3000 |
, | 0.50 | 0.640 | 0.00015 | 0.0015 |
Features | , | , | , | , | , | , |
---|---|---|---|---|---|---|
, | 0.0768 | - | - | - | - | - |
, | 0.8699 | 0.19689 | 0.307 | 0.3779 | 0.2137 | 0.246 |
, | 0.0658 | - | - | - | - | - |
Features | , , | , , | , , | , , |
---|---|---|---|---|
, | 0.17 | 0.24 | 0.1 | 0.16 |
, | 0.98 | 0.83 | 0.77 | 0.4 | 0.75 | 0.1 |
Change | 0 | 0 | 0 | 0.0003 | 0.008059 | 0.03862 |
0.75 | 0.70 | 0.40 | 0.73 | 0.100 | |
0.73 | 0.33 | 0.64 | 0.090 | ||
0.30 | 0.60 | 0.070 | |||
0.06 | 0.017 | ||||
0.089 |
, | 0.280 | 0.380 | 0.720 | 0.650 | 0.350 | 0.790 | 0.930 |
, | 0.006 | 0.008 | 0.660 | 0.080 | 0.020 | 0.910 | 0.120 |
, | 0.190 | 0.210 | 0.990 | 0.410 | 0.300 | 0.950 | 0.310 |
0.073 | 0.073 | 0.065 | 0.028 | 0.002 | 0.004 | 0.000 | |
0.094 | 0.110 | 0.085 | 0.067 | 0.040 | 0.040 | 0.047 | |
0.020 | 0.033 | 0.020 | 0.039 | 0.039 | 0.039 | 0.039 |
Clu. | Prob. | Dist. | ||||
---|---|---|---|---|---|---|
62% | 7.65 | 4.00 | 0.50 | 1.75 | 1.38 | |
38% | 6.86 | 0.78 | 1.15 | 1.72 | 3.20 | |
98% | 1.70 | - | 0.20 | 0.03 | 1.50 | |
2% | 6.80 | - | 1.00 | 2.54 | 3.20 | |
46% | 7.23 | 3.50 | - | 1.90 | 1.80 | |
54% | 4.20 | 0.50 | - | 1.90 | 1.90 | |
85% | 5.10 | 2.00 | 0.60 | - | 2.50 | |
15% | 6.70 | 1.80 | 1.10 | - | 3.80 | |
22% | 7.50 | 3.80 | 0.90 | 2.80 | - | |
78% | 3.80 | 0.70 | 0.30 | 2.80 | - |
Clu. | Prob. | Dist. | | | |
---|---|---|---|---|---|---|
70.0% | 10.21 | 0.50 | 0.10 | 7.04 | 2.6 | |
30.0% | 10.20 | 0.08 | 0.30 | 1.57 | 8.3 | |
94.4% | 6.60 | - | 0.03 | 4.20 | 2.4 | |
5.6% | 9.60 | - | 0.30 | 0.90 | 8.4 | |
69.0% | 9.80 | 0.40 | - | 7.10 | 2.3 | |
31.0% | 8.55 | 0.18 | - | 1.70 | 6.7 | |
99.5% | 0.96 | 0.22 | 0.14 | - | 0.6 | |
0.5% | 9.50 | 0.07 | 0.30 | - | 9.1 | |
62.0% | 3.75 | 0.40 | 0.01 | 3.30 | - | |
38.0% | 3.50 | 0.14 | 0.08 | 3.30 | - |
Clu. | Prob. | Dist. | | | |
---|---|---|---|---|---|---|
98.0% | 4.64 | 0.120 | 1.97 | 0.42 | 2.12 | |
2.0% | 11.01 | 0.420 | 3.80 | 3.70 | 3.10 | |
98.1% | 4.10 | - | 1.80 | 0.09 | 2.19 | |
1.9% | 9.30 | - | 4.13 | 1.90 | 3.30 | |
93.0% | 3.70 | 0.023 | - | 0.54 | 3.11 | |
7.0% | 6.06 | 0.920 | - | 3.90 | 1.23 | |
94.0% | 4.60 | 0.040 | 2.06 | - | 2.50 | |
6.0% | 8.01 | 0.100 | 3.80 | - | 4.10 | |
92.0% | 4.02 | 0.095 | 2.70 | 1.20 | - | |
8.0% | 7.80 | 0.530 | 1.97 | 5.30 | - |
Clu. | Prob. | Dist. | ||||
---|---|---|---|---|---|---|
0% | 25.30 | 0.040 | 0.002 | 23.50 | 1.80 | |
100% | 4.30 | 0.900 | 1.200 | 0.46 | 1.76 | |
1% | 19.20 | - | 0.004 | 17.10 | 2.16 | |
99% | 4.19 | - | 1.350 | 1.10 | 1.70 | |
0% | 25.00 | 0.070 | - | 23.50 | 1.46 | |
100% | 2.60 | 1.240 | - | 0.40 | 0.94 | |
16% | 9.50 | 2.500 | 0.050 | - | 6.95 | |
84% | 4.20 | 1.400 | 1.200 | - | 1.60 | |
0% | 24.85 | 0.005 | 0.030 | 24.80 | - | |
100% | 1.58 | 1.030 | 0.420 | 0.13 | - |
Point | Original Prediction | Comprehensiveness | Sufficiency |
---|---|---|---|
iris-1 | : 62% | : 25% (37%) | : 93% (31%) |
iris-2 | : 70% | : 66% (4%) | : 99.5% (29.5%) |
iris-4 | : 100% | : 81% (19%) | : 100% (0%) |
Clu. | Prob. | Dist. | ||||||
---|---|---|---|---|---|---|---|---|
100% | 10.70 | 0.0500 | 0.100 | 0.70 | 5.80 | 0.0002 | 4.07 | |
0% | 22.40 | 0.1000 | 0.080 | 0.40 | 2.88 | 9.8000 | 9.10 | |
39% | 3.11 | 0.0005 | 0.014 | 0.18 | 2.90 | - | - | |
61% | 2.20 | 0.3400 | 0.500 | 0.95 | 0.40 | - | - |
Clu. | Prob. | Dist. | ||||||
---|---|---|---|---|---|---|---|---|
100.0% | 10.70 | 0.0500 | 0.1000 | 0.70 | 5.80 | 0.0002 | 4.07 | |
0.0% | 22.40 | 0.1000 | 0.0800 | 0.40 | 2.88 | 9.8000 | 9.10 | |
99.7% | 10.01 | - | 0.0500 | 0.70 | 5.40 | 0.0020 | 3.90 | |
0.3% | 22.30 | - | 0.1200 | 0.40 | 2.80 | 9.8000 | 9.30 | |
98.9% | 10.13 | 0.0200 | - | 0.50 | 5.70 | 0.0002 | 3.90 | |
1.1% | 20.50 | 0.1600 | - | 0.19 | 2.60 | 9.0400 | 8.70 | |
99.8% | 8.70 | 0.0400 | 0.0034 | - | 5.26 | 0.0008 | 3.40 | |
0.2% | 22.30 | 0.0700 | 0.0300 | - | 3.04 | 10.0000 | 9.15 | |
99.8% | 3.50 | 0.0070 | 0.0900 | 0.40 | - | 0.9000 | 2.10 | |
0.2% | 16.20 | 0.0200 | 0.0015 | 0.73 | - | 5.7000 | 9.70 | |
68.0% | 9.40 | 0.0300 | 0.1200 | 0.70 | 4.80 | - | 3.80 | |
32.0% | 11.40 | 0.0600 | 0.0400 | 0.80 | 0.30 | - | 10.15 | |
99.7% | 3.20 | 0.0004 | 0.0140 | 0.17 | 2.90 | 0.1100 | - | |
0.3% | 15.40 | 0.4000 | 0.0170 | 0.50 | 3.50 | 11.0000 | - |
Clu. | Prob. | Dist. | ||||||
---|---|---|---|---|---|---|---|---|
100% | 14.2 | 0.0300 | 0.25 | 0.012 | 11.300 | 1.5 | 1.1 | |
0% | 27.2 | 0.0009 | 0.40 | 1.800 | 0.120 | 1.9 | 23.1 | |
3% | 9.8 | 0.0010 | 0.30 | - | 8.500 | 1 | - | |
97% | 3.5 | 0.1000 | 0.20 | - | 0.003 | 3.2 | - |
Clu. | Prob. | Dist. | ||||||
---|---|---|---|---|---|---|---|---|
100.0% | 14.20 | 0.03000 | 0.250 | 0.012 | 11.300 | 1.500 | 1.10 | |
0.0% | 27.20 | 0.00090 | 0.400 | 1.800 | 0.120 | 1.900 | 23.10 | |
99.8% | 13.60 | - | 0.340 | 0.009 | 10.800 | 1.500 | 1.00 | |
0.2% | 27.04 | - | 0.370 | 1.800 | 0.100 | 1.900 | 22.80 | |
99.0% | 14.00 | 0.07000 | - | 0.080 | 11.300 | 1.500 | 1.12 | |
1.0% | 24.60 | 0.00980 | - | 1.100 | 0.200 | 1.400 | 21.90 | |
99.5% | 14.00 | 0.03000 | 0.330 | - | 11.120 | 1.500 | 1.00 | |
0.5% | 25.52 | 0.03000 | 0.060 | - | 0.050 | 2.150 | 23.20 | |
100.0% | 0.42 | 0.06000 | 0.300 | 0.030 | - | 0.006 | 0.05 | |
0.0% | 27.20 | 0.00004 | 0.400 | 1.700 | - | 2.120 | 22.94 | |
100.0% | 7.40 | 0.00800 | 0.150 | 0.007 | 6.600 | - | 0.64 | |
0.0% | 26.44 | 0.00340 | 0.200 | 2.000 | 0.600 | - | 23.70 | |
7.0% | 9.93 | 0.00200 | 0.400 | 0.020 | 8.500 | 1.100 | - | |
93.0% | 5.10 | 0.25000 | 0.006 | 1.980 | 0.005 | 2.800 | - |
Point | Original Prediction | Comprehensiveness | Sufficiency |
---|---|---|---|
Swiss-1 | : 100% | : 30% (70%) | : 100% (0%) |
Swiss-2 | : 100% | : 12% (88%) | : 100% (0%) |
Clu. | Prob. | Dist. | |||||||
---|---|---|---|---|---|---|---|---|---|
72.0% | 21.50 | 2.20 | 0.050 | 0.1000 | 8.27 | 0.00200 | 2.300 | 8.60 | |
28.0% | 20.60 | 1.50 | 11.300 | 5.9000 | 0.08 | 1.30000 | 0.025 | 0.53 | |
0.5% | 19.94 | - | 0.360 | 0.0002 | 8.75 | 0.03700 | 2.300 | 8.50 | |
99.5% | 7.44 | - | 1.700 | 3.5000 | 0.12 | 1.50000 | 0.020 | 0.63 | |
0.5% | 19.80 | 0.33 | - | 0.0014 | 8.90 | 0.01700 | 2.300 | 8.30 | |
99.5% | 7.15 | 1.70 | - | 3.0000 | 0.20 | 1.50000 | 0.030 | 0.65 | |
0.3% | 20.10 | 0.70 | 0.090 | - | 8.50 | 0.00005 | 2.300 | 8.50 | |
97.0% | 6.10 | 2.10 | 0.400 | - | 0.10 | 3.20000 | 0.050 | 0.30 | |
98.0% | 15.20 | 6.30 | 0.200 | 1.5000 | - | 0.00500 | 2.000 | 5.10 | |
2.0% | 20.34 | 1.30 | 11.500 | 5.7000 | - | 1.30000 | 0.030 | 0.60 | |
65.0% | 21.20 | 1.80 | 0.030 | 0.1400 | 8.30 | - | 2.300 | 8.60 | |
35.0% | 20.50 | 1.14 | 12.000 | 6.7000 | 0.10 | - | 0.020 | 2.70 | |
86.0% | 18.90 | 2.05 | 0.007 | 0.0005 | 7.32 | 0.17000 | - | 9.36 | |
14.0% | 19.50 | 1.20 | 10.600 | 5.8000 | 0.08 | 1.26000 | - | 0.55 | |
99.9% | 6.80 | 1.70 | 0.100 | 0.0080 | 2.08 | 0.06000 | 2.800 | - | |
0.1% | 20.30 | 1.20 | 12.200 | 5.5000 | 0.17 | 1.20000 | 0.050 | - |
Clu. | Prob. | Dist. | |||||
---|---|---|---|---|---|---|---|
0.01% | 31.80 | 0.650 | 12.400000 | 1.8000 | 2.400 | 14.550 | |
99.99% | 5.00 | 2.200 | 0.200000 | 1.1000 | 0.080 | 1.400 | |
0.01% | 26.30 | - | 10.000000 | 0.3000 | 2.500 | 13.500 | |
99.99% | 4.60 | - | 0.000001 | 3.4000 | 0.004 | 1.100 | |
20.00% | 8.10 | 1.800 | - | 0.6000 | 1.900 | 3.800 | |
80.00% | 5.00 | 2.200 | - | 1.2000 | 0.090 | 1.500 | |
0.01% | 24.10 | 0.300 | 8.600000 | - | 2.700 | 12.500 | |
99.99% | 5.00 | 2.800 | 0.380000 | - | 0.100 | 1.800 | |
0.01% | 30.20 | 1.200 | 11.500000 | 2.9000 | - | 14.600 | |
99.99% | 5.00 | 2.100 | 0.190000 | 1.2000 | - | 1.500 | |
63.00% | 4.00 | 0.100 | 1.500000 | 0.0002 | 2.400 | - | |
37.00% | 4.60 | 2.000 | 0.800000 | 1.6400 | 0.120 | - |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Alangari, N.; Menai, M.E.B.; Mathkour, H.; Almosallam, I. Intrinsically Interpretable Gaussian Mixture Model. Information 2023, 14, 164. https://doi.org/10.3390/info14030164