XAI in the Context of Predictive Process Monitoring: An Empirical Analysis Framework
Abstract
1. Introduction
1.1. Problem Statement
1.2. Contributions
- A framework for comparing the explanations produced by XAI methods globally and locally (i.e., for the entire event log or for selected process instances), both in isolation and against each other. The comparison framework uses different PPM workflow settings with predefined criteria based on the characteristics of the underlying data, the predictive models, and the XAI methods.
- An empirical analysis of explanations generated by three global and two local XAI methods for the predictions of two predictive models over process instances from 27 event logs, each preprocessed with two different preprocessing combinations.
2. Preliminaries
2.1. Predictive Process Monitoring
PPM Workflow
1. PPM Offline Stage. This stage starts with constructing a prefix log from the input event log. A prefix log is needed to provide the predictive model with incomplete process instances (i.e., partial traces) during the training phase of an ML model. Prefixes are generated by truncating process instances in an event log up to a predefined number of events. A process instance can be truncated up to the first k events of its trace, or up to k events with a gap step (g) separating every two events, where k and g are user-defined. The latter approach is denoted as gap-based prefixing.
Prefix preprocessing comprises two sub-steps: bucketing and encoding. Prefix bucketing groups the prefixes according to certain criteria (e.g., the number of activities or reaching a certain state during process execution), which are defined by the bucketing technique [3]. Single, state-based, prefix length-based, clustering, and domain knowledge-based are examples of prefix bucketing techniques. Prefix encoding then transforms a prefix into a numerical feature vector that serves as input to the predictive model, either for training or for making predictions. Encoding techniques include static, aggregation, index-based, and last state encoding [2,3].
In the following step, a predictive model is constructed. Depending on the PPM task, an appropriate predictive model is chosen; the prediction task may be classification or regression. Next, the predictive model is trained on encoded prefixes of completed process instances. A dedicated predictive model is trained per bucket, i.e., the number of predictive models depends on the chosen bucketing technique. Finally, the performance of the predictive model is evaluated.
2. PPM Online Stage. This stage starts with a running process instance that has not completed yet. The buckets formed in the offline stage are recalled to determine the bucket suitable for the running process instance, based on the similarity between the running instance and the prefixes in a bucket. The running process instance is then encoded according to the encoding method chosen for the PPM task. The encoded running instance constitutes the input to the prediction method, after the relevant predictive model has been determined among the models created in the offline stage. Finally, the predictive model generates a prediction for the running process instance according to the predefined goal of the PPM task. A minimal code sketch of both stages follows this list.
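The following Python sketch illustrates the two stages under simplifying assumptions: a toy event log, prefix-length bucketing, aggregation encoding, and scikit-learn's GradientBoostingClassifier as a stand-in for the predictive models evaluated in this paper. Column names, the gap parameter, and the encoding details are illustrative choices, not the exact preprocessing pipeline used in the experiments.

```python
# Minimal sketch of the PPM offline/online stages described above.
# The event log, its columns, and the encoding are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# --- Toy event log: one row per event, grouped into traces by case_id ---
log = pd.DataFrame({
    "case_id":  [1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3],
    "activity": ["A", "B", "C", "D", "A", "C", "D", "A", "B", "B", "C", "D"],
    "duration": [5, 3, 8, 2, 4, 9, 1, 6, 2, 3, 7, 2],
    "outcome":  [1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1],
})

def extract_prefixes(log, max_k=3, gap=1):
    """Truncate each trace up to its first k events (gap-based prefixing when gap > 1)."""
    prefixes = []
    for case_id, trace in log.groupby("case_id"):
        for k in range(1, min(max_k, len(trace)) + 1, gap):
            prefixes.append((case_id, k, trace.head(k)))
    return prefixes

def aggregate_encode(prefix, activities=("A", "B", "C", "D")):
    """Aggregation encoding: activity frequencies plus aggregated numeric attributes."""
    features = {f"count_{a}": (prefix["activity"] == a).sum() for a in activities}
    features["duration_sum"] = prefix["duration"].sum()
    features["duration_mean"] = prefix["duration"].mean()
    return features

# --- Offline stage: bucket prefixes by length and train one model per bucket ---
buckets = {}
for case_id, k, prefix in extract_prefixes(log):
    buckets.setdefault(k, []).append(
        (aggregate_encode(prefix), prefix["outcome"].iloc[0])
    )

models = {}
for k, items in buckets.items():
    X = pd.DataFrame([feats for feats, _ in items])
    y = [label for _, label in items]
    if len(set(y)) > 1:                      # a classifier needs both classes
        models[k] = GradientBoostingClassifier().fit(X, y)

# --- Online stage: encode a running (incomplete) instance and predict ---
running = pd.DataFrame({"activity": ["A", "B"], "duration": [4, 5]})
k = len(running)                             # prefix-length bucketing
if k in models:
    x = pd.DataFrame([aggregate_encode(running)])
    print("Predicted outcome probability:", models[k].predict_proba(x)[0, 1])
```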
2.2. eXplainable Artificial Intelligence
1. How to explain. This dimension is concerned with the approach used to explain how a predictive model derives its predictions from the given inputs. Corresponding approaches have been categorised along different perspectives, including design goals and evaluation measures, transparency of the explained model and explanation scope [14,15], granularity [16], and relation to the black-box model [17]. For example, one group of approaches generates an explanation by simplification. These approaches simplify a complex model by using a more interpretable model, called a surrogate or proxy model. The simplified model is supposed to generate understandable predictions that achieve an accuracy level comparable to that of the black-box model (a minimal surrogate-model sketch follows this list). Another group of approaches studies feature relevance; they aim to trace back the importance of a feature for deriving a prediction. A further family of approaches explains by example, selecting representative samples that allow for insights into the model’s internal reasoning [14,16]. The final category in this dimension explains through visualisation, i.e., intermediate representations and layers of a predictive model are visualised with the aim of qualitatively determining what a model has learned [16].
Approaches belonging to this XAI dimension are further categorised into model-agnostic and model-specific methods. Model-agnostic approaches are able to explain any type of ML predictive model, whereas model-specific approaches can only be used on top of specific models.
2. How much to explain. An explanation may be generated at various levels of granularity. It can be localised to a specific instance (local explanation), or it can provide global insights into which factors contributed to the decisions of a predictive model (global explanation). The scope of an explanation and, subsequently, the chosen technique depend on several factors. One of these factors is the purpose of the explanation, e.g., whether it shall allow debugging the model or gaining trust in its predictions. The target stakeholders constitute another determining factor. For example, an ML engineer prefers gaining a holistic overview of the factors driving the reasoning process of a predictive model, whereas an end user is only interested in why the model made a certain prediction for a given instance.
3. How to present. The form in which an explanation is presented is determined by the way the explanation is generated, the characteristics of the end user (e.g., level of expertise), the scope of the explanation, and the purpose of generating the explanation (e.g., to visualise effects of feature interactions on the decisions of the respective predictive model). Three categories of presentation forms were introduced in [15]. The first category comprises visual explanations, which use visual elements like saliency maps [10] and charts to describe the deterministic factors of a decision with respect to the model perspective being explained. Verbal explanations provide another way of presenting explanations, where natural language is used to describe model reasoning (e.g., in recommender systems). The final form is the analytic explanation, where a combination of numerical metrics and visualisations is used to reveal model structure or parameters, e.g., using heatmaps and hierarchical decision trees.
4. When to explain. This dimension is concerned with the point in time at which an explanation shall be provided. Given that explainability is a subjective matter depending on the receiver’s understanding and needs, the provisioning of explainability may be regarded from two perspectives. The first perspective considers explainability as gaining an understanding of the decisions of a predictive model within the bounds of the model’s characteristics. Adopting this perspective imposes explainability through mechanisms put in place while constructing the model, resulting in a white-box predictive model, i.e., an intrinsic explanation. Using an explanation method to understand the reasoning process of a model in terms of its outcomes, on the other hand, is called post-hoc explanation. The latter provides an understanding of the reasons behind the mapping between inputs and outputs, as well as a holistic view of the input characteristics that led to the decisions of the predictive model.
5. Explain to whom. Studying the target group of an explainability solution is necessary to tailor the explanations and to present them in a way that maximises the interpretability of the predictive model, allowing the receiver to form a mental model of it. The receivers of an explanation should be at the centre of attention when designing an explainability solution. They can be categorised into different user groups, including novice users, decision makers and experienced users, system practitioners, and regulatory bodies [14,15]. Targeting each user group with suitable explanations contributes to achieving the purpose of the explanation process. This purpose may be to understand how a predictive model works, how it makes decisions, or which patterns are formed during the learning process. Therefore, it is crucial to understand each user group, identify its relevant needs, and define design goals accordingly.
3. Research Questions
4. XAI Comparison Framework
4.1. Framework Composition
4.1.1. Data Dimension
- Sepsis. This event log belongs to the healthcare domain and reports cases of sepsis, a life-threatening condition.
- Traffic fines. This governmental event log is extracted from an Italian information system for managing road traffic fines.
- BPIC2017. This event log documents the loan application process of a Dutch financial institution.
4.1.2. Preprocessing Dimension
4.1.3. ML Model Dimension
4.1.4. XAI Dimension
- Ability of the explainability method to overcome the shortcomings of other methods that explain the same aspects of the reasoning process of a predictive model. For example, Accumulated Local Effects (ALE) [12] adopts the same approach as the Partial Dependence Plots (PDP) method [6]. Unlike PDP, however, ALE takes the effects of certain data characteristics (e.g., correlations) into account when studying feature effects [4].
- Comprehensiveness regarding explanation coverage when using both local and global explainability methods. Through local explanations, the influence of certain features on a single prediction can be observed, whereas global explanations allow inspecting the reasoning process the predictive model has followed. This allows reaching conclusions that provide a holistic view of both the data and the model applied to the data. It is hard to find a single explainability method that provides explanations at both levels. However, one of the applied methods (i.e., SHAP [5]) starts at the local level by calculating the contributions of the features to a prediction and then aggregates these contributions at the global level to convey the impact a feature has on the predictions over a given dataset.
- Availability of a reliable implementation of the explainability method. This implementation should enable the integration of the explainability method with the chosen predictive model as well as with the underlying PPM workflow (a minimal sketch based on such off-the-shelf implementations follows this list).
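The sketch below applies the selected methods side by side on a toy classification task: PDP and ALE for feature effects, SHAP values aggregated from local contributions into a global ranking, and PFI. It assumes the scikit-learn, shap, and alibi packages and uses a GradientBoostingClassifier as a stand-in for the XGBoost model of the experiments; data and parameters are illustrative only.

```python
# Hedged sketch: the compared XAI methods applied side by side on a toy task.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence, permutation_importance
import shap
from alibi.explainers import ALE

X, y = make_classification(n_samples=400, n_features=6, n_informative=3, random_state=1)
feature_names = [f"f{i}" for i in range(X.shape[1])]
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Feature effects: PDP averages over the marginal distribution, whereas ALE
# accumulates local differences and is less distorted by correlated features.
pdp = partial_dependence(model, X, features=[0])
ale = ALE(model.predict_proba, feature_names=feature_names).explain(X)
print("PDP for f0 (first grid points):", np.round(pdp["average"][0][:5], 3))
print("ALE for f0 (first bins, class 1):", np.round(ale.ale_values[0][:5, 1], 3))

# SHAP: local feature contributions per instance, aggregated into a global ranking.
shap_values = shap.TreeExplainer(model).shap_values(X)   # shape (n_instances, n_features)
global_impact = np.abs(shap_values).mean(axis=0)
print("Global SHAP ranking:", [feature_names[i] for i in np.argsort(global_impact)[::-1]])

# PFI: performance drop when a feature is permuted (global, model-agnostic).
pfi = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print("PFI ranking:", [feature_names[i] for i in np.argsort(pfi.importances_mean)[::-1]])
```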
Global Explainability Analysis
Local Explainability Analysis
5. Results and Observations
5.1. Global Methods Comparability
5.1.1. Comparability
5.1.2. Execution Times
5.2. Local Methods Comparability
6. Discussion
- Both studied encoding techniques inflate the event log with a large number of derived features. The situation is worse for index-based encoding, as the number of resulting features increases proportionally to the number of dynamic attributes, and especially to the number of categorical levels of a dynamic categorical attribute. This dimensionality explosion affects the generated explanations in the same way it affects the generated predictions. On the one hand, explaining high-dimensional event logs becomes expensive in terms of computational resources, especially for XAI methods that run multiple iterations to rank features based on their importance, as is the case for PFI. We denote this as the horizontal effect of dimensionality. On the other hand, some XAI methods cannot handle low-cardinality features (e.g., ALE), or can handle them without yielding useful insights. Moreover, a high-dimensional event log may contain features that are not useful for explaining the generated predictions, even though the predictive model may have used them. We denote this as the vertical effect of dimensionality. SHAP is the only one among the compared XAI methods that is able to mitigate the effect of low cardinality and to produce meaningful explanations, while highlighting the effect of interactions between the analysed features in dependency plots. These effects are observed more strongly in explanations of process instances from event logs preprocessed with index-based encoding than from aggregation-based preprocessed event logs.
- Increased collinearity in the underlying data is another problem resulting, to varying degrees, from the encoding techniques. The effect of collinearity can be observed in index-based preprocessed event logs, while not being completely absent in aggregation-based ones. This collinearity is reflected in the explanations of predictions on process instances from prefix-indexed event logs as the length of a prefix increases. Another effect of collinearity is the instability of LIME explanations. This instability is due to the approximating surrogate model being affected by collinear features (in terms of unstable feature coefficients) and by high dimensionality (in terms of unstable feature sets).
- PFI proved to be more stable and consistent across two execution runs, whereas the stability of SHAP is affected by the underlying predictive model and may or may not be sensitive to the characteristics of the underlying data. ALE is mostly unstable, hampered by its inability to accurately analyse the effects of changes in categorical attributes on the generated predictions. A toy stability check illustrating these observations follows this list.
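As a rough illustration of the stability observations above, the following sketch re-runs PFI with two different seeds and compares two LIME explanations of the same instance via the overlap of their selected feature sets, a simplified stand-in for the VSI/CSI indices of Visani et al. [19]. Data, model, and parameter choices are toy assumptions, not the experimental setup of this study.

```python
# Hedged stability check: PFI across two runs, LIME across two explanations.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=300, n_features=10, n_informative=3, random_state=2)
feature_names = [f"f{i}" for i in range(X.shape[1])]
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# PFI stability: compare feature rankings across two independent runs.
runs = [permutation_importance(model, X, y, n_repeats=5, random_state=s) for s in (0, 1)]
rankings = [np.argsort(r.importances_mean)[::-1] for r in runs]
print("PFI top-3, run 1 vs run 2:", rankings[0][:3], rankings[1][:3])

# LIME stability: two explanations of the same instance may select different
# feature sets because a new local surrogate is fitted on fresh perturbations.
def top_features(seed):
    explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                     mode="classification", random_state=seed)
    exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
    return {name for name, _ in exp.as_list()}

sets = [top_features(s) for s in (0, 1)]
jaccard = len(sets[0] & sets[1]) / len(sets[0] | sets[1])
print("LIME feature-set overlap (Jaccard):", round(jaccard, 2))
```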
7. Related Work
7.1. Leveraging PPM with Explanations
7.2. Using Transparent Models in PPM Tasks
8. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Meaning |
---|---|
ML | Machine Learning |
PPM | Predictive Process Monitoring |
XAI | eXplainable Artificial Intelligence |
GAM | Generalised Additive Model |
RQ | Research Question |
MI | Mutual Information |
LR | Logistic Regression |
PFI | Permutation Feature Importance |
PDP | Partial Dependence Plots |
ALE | Accumulated Local Effects |
SHAP | SHapley Additive exPlanations |
VSI | Variable Stability Index |
CSI | Coefficient Stability Index |
LRP | Layer-wise Relevance Propagation |
Appendix A
References
- Van der Aalst, W. Process Mining: Data Science in Action, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2016.
- Verenich, I.; Dumas, M.; La Rosa, M.; Maggi, F.M.; Teinemaa, I. Survey and Cross-benchmark Comparison of Remaining Time Prediction Methods in Business Process Monitoring. ACM Trans. Intell. Syst. Technol. 2019, 10, 34.
- Teinemaa, I.; Dumas, M.; La Rosa, M.; Maggi, F.M. Outcome-Oriented Predictive Process Monitoring: Review and Benchmark. ACM Trans. Knowl. Discov. Data 2019, 13, 57.
- Molnar, C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. 2019. Available online: https://christophm.github.io/interpretable-ml-book/ (accessed on 6 June 2022).
- Lundberg, S.; Lee, S. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 4768–4777.
- Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Statist. 2001, 29, 1189–1232.
- Shrikumar, A.; Greenside, P.; Kundaje, A. Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 3145–3153.
- Binder, A.; Montavon, G.; Lapuschkin, S.; Müller, K.R.; Samek, W. Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers. In Artificial Neural Networks and Machine Learning—ICANN; Villa, A., Masulli, P., Rivero, A.J.P., Eds.; Springer: Cham, Switzerland, 2016; Volume 9887, pp. 63–71.
- Wachter, S.; Mittelstadt, B.; Russell, C. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harv. J. Law Technol. 2018, 31, 841.
- Kindermans, P.J.; Hooker, S.; Adebayo, J.; Alber, M.; Schütt, K.T.; Dähne, S.; Erhan, D.; Kim, B. The (Un)reliability of Saliency Methods. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning; Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., Müller, K., Eds.; Springer: Cham, Switzerland, 2019; Volume 11700, pp. 267–280.
- Ribeiro, M.T.; Singh, S.; Guestrin, C. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144.
- Apley, D.W.; Zhu, J. Visualizing the effects of predictor variables in black box supervised learning models. J. R. Stat. Soc. 2020, 82, 1059–1086.
- Doshi-Velez, F.; Kortz, M. Accountability of AI under the Law: The Role of Explanation; Berkman Klein Center for Internet & Society Working Paper; 2017. Available online: http://nrs.harvard.edu/urn-3:HUL.InstRepos:34372584 (accessed on 6 June 2022).
- Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; Garcia, S.; Gil-López, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115.
- Mohseni, S.; Zarei, N.; Ragan, E.D. A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems. ACM Trans. Interact. Intell. Syst. 2021, 11, 1–45.
- Zhou, J.; Gandomi, A.H.; Chen, F.; Holzinger, A. Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics. Electronics 2021, 10, 593.
- Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F.; Giannotti, F.; Pedreschi, D. A Survey of Methods for Explaining Black Box Models. ACM Comput. Surv. 2019, 51, 1–42.
- Nguyen, A.; Martínez, M.R. On Quantitative Aspects of Model Interpretability. 2020. Available online: http://arxiv.org/pdf/2007.07584v1 (accessed on 6 June 2022).
- Visani, G.; Bagli, E.; Chesani, F.; Poluzzi, A.; Capuzzo, D. Statistical stability indices for LIME: Obtaining reliable explanations for machine learning models. J. Oper. Res. Soc. 2021, 12, 1–11.
- Outcome-Oriented Predictive Process Monitoring Benchmark—GitHub. Available online: https://github.com/irhete/predictive-monitoring-benchmark (accessed on 26 April 2022).
- Elkhawaga, G.; Abuelkheir, M.; Reichert, M. Explainability of Predictive Process Monitoring Results: Can You See My Data Issues? arXiv 2022, arXiv:2202.08041.
- 4TU Centre for Research Data. Available online: https://data.4tu.nl/Eindhoven_University_of_Technology (accessed on 26 April 2022).
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
- Alibi Explain. Available online: https://github.com/SeldonIO/alibi (accessed on 26 April 2022).
- Weinzierl, S.; Zilker, S.; Brunk, J.; Revoredo, K.; Matzner, M.; Becker, J. XNAP: Making LSTM-Based Next Activity Predictions Explainable by Using LRP. In Business Process Management Workshops (Lecture Notes in Business Information Processing); Ortega, A.D.R., Leopold, H., Santoro, F.M., Eds.; Springer: Cham, Switzerland, 2020; Volume 397, pp. 129–141.
- Galanti, R.; Coma-Puig, B.; de Leoni, M.; Carmona, J.; Navarin, N. Explainable Predictive Process Monitoring. In Proceedings of the 2nd International Conference on Process Mining (ICPM), Padua, Italy, 4–9 October 2020; pp. 1–8.
- Rizzi, W.; Di Francescomarino, C.; Maggi, F.M. Explainability in Predictive Process Monitoring: When Understanding Helps Improving. In Business Process Management Forum (Lecture Notes in Business Information Processing); Fahland, D., Ghidini, C., Becker, J., Dumas, M., Eds.; Springer: Cham, Switzerland, 2020; Volume 392, pp. 141–158.
- Verenich, I.; Dumas, M.; La Rosa, M.; Nguyen, H. Predicting process performance: A white-box approach based on process models. J. Softw. Evol. Proc. 2019, 31, 26.
- Sindhgatta, R.; Moreira, C.; Ouyang, C.; Barros, A. Exploring Interpretable Predictive Models for Business Processes. In Business Process Management (LNCS); Fahland, D., Ghidini, C., Becker, J., Dumas, M., Eds.; Springer: Cham, Switzerland, 2020; Volume 12168, pp. 257–272.
- Jain, S.; Wallace, B.C. Attention is not Explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MN, USA, 3–5 June 2019; pp. 3543–3556.
- Wiegreffe, S.; Pinter, Y. Attention is not not Explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China, 3–7 November 2019; pp. 11–20.
Event Log | #Traces | Min Trace Len. | Avg. Trace Len. | Max Trace Len. | Max Prefix Len. | #Trace Variants | %Pos Class | #Event Classes | #Static Cols | #Dynamic Cols | #Cat Cols | #Num Cols | #Cat Levels (Static Cols) | #Cat Levels (Dynamic Cols) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Sepsis1 | 776 | 5 | 14 | 185 | 20 | 703 | 0.0026 | 14 | 24 | 13 | 28 | 14 | 76 | 38 |
Sepsis2 | 776 | 4 | 13 | 60 | 13 | 650 | 0.14 | 14 | 24 | 13 | 28 | 14 | 76 | 39 |
Sepsis3 | 776 | 4 | 13 | 185 | 31 | 703 | 0.14 | 14 | 24 | 13 | 28 | 14 | 76 | 39 |
Traffic fines | 129,615 | 2 | 4 | 20 | 10 | 185 | 0.455 | 10 | 4 | 14 | 13 | 11 | 54 | 173 |
BPIC2017_Accepted | 31,413 | 10 | 35 | 180 | 20 | 2087 | 0.41 | 26 | 3 | 20 | 12 | 13 | 6 | 682 |
BPIC2017_Cancelled | 31,413 | 10 | 35 | 180 | 20 | 2087 | 0.47 | 26 | 3 | 20 | 12 | 13 | 6 | 682 |
BPIC2017_Refused | 31,413 | 10 | 35 | 180 | 20 | 2087 | 0.12 | 26 | 3 | 20 | 12 | 13 | 6 | 682 |
XAI Method | How (Explain) | Specificity | How Much | How (Present) | When | Whom |
---|---|---|---|---|---|---|
Permutation Feature Importance (PFI) | Feature importance | Model-agnostic | Global | Numerical | Post-hoc | Systems practitioners |
ALE | Feature effects | Model-agnostic | Global | Analytic | Post-hoc | Systems practitioners, decision makers & experienced users |
SHAP | Feature contributions | Model-agnostic | Global & Local | Analytic | Post-hoc | Novice users, systems practitioners |
LIME | Simplification | Model-agnostic | Local | Analytic | Post-hoc | Novice users, systems practitioners, decision makers & experienced users |
Event Log | XGBoost Prediction | XGBoost SHAP | XGBoost ALE | XGBoost PFI | LR Prediction | LR SHAP | LR ALE | LR PFI |
---|---|---|---|---|---|---|---|---|
Sepsis1 | 8.71 | 0.20 | 15.5605 | 10.54 | 0.1265 | 0.00645 | 8.00679 | 4.773 |
Sepsis2 | 16.74 | 2.87 | 13.3611 | 12.32 | 0.061 | 0.00687 | 5.77876 | 4.8206 |
Sepsis3 | 30.53 | 9.55 | 19.15117 | 19.71 | 0.092 | 0.00585 | 8.2588 | 6.4126 |
Traffic_fines | 4285.7 | 91,145.55 | 6516.123 | 4834.37 | 10.395 | 0.6789 | 5769.03 | 139.29 |
BPIC2017_Accepted | 3794.15 | 288.73 | 147,845.645 | 2179.73 | 29.89 | 2.638 | 144,387.72 | 1782.8 |
BPIC2017_Cancelled | 10,000.33 | 34,131.79 | 149,374.35 | 11421.7 | 36.36 | 2.666 | 144,490.6677 | 955.43 |
BPIC2017_Refused | 5294.84 | 764.84 | 148,574.512 | 2992.14 | 25.278 | 2.655 | 144,083.7028 | 554.774 |
Event Log | XGBoost Prediction | XGBoost SHAP | XGBoost ALE | XGBoost PFI | LR Prediction | LR SHAP | LR ALE | LR PFI |
---|---|---|---|---|---|---|---|---|
Sepsis1_1 | 0.388 | 0.0098 | 3.575 | 6.61 | 0.013 | 0.00051 | 0.346 | 3.16 |
Sepsis1_6 | 0.75 | 0.0116 | 9.059 | 5.509 | 0.025 | 0.00045 | 1.1537 | 3.7387 |
Sepsis1_11 | 1.112 | 0.012 | 15.948 | 7.01 | 0.021 | 0.00075 | 2.137 | 4.166 |
Sepsis1_16 | 0.496 | 0.0055 | 19.394 | 6.59 | 0.015 | 0.00031 | 2.421 | 3.6 |
Sepsis2_1 | 0.77 | 0.1797 | 3.804 | 6.41 | 0.01 | 0.00022 | 0.2696 | 3.26 |
Sepsis2_6 | 1.74 | 0.138 | 8.757 | 11.249 | 0.0075 | 0.00052 | 1.08999 | 5.92 |
Sepsis2_11 | 1.203 | 0.028 | 15.122 | 7.52 | 0.007 | 0.000558 | 2.36226 | 4.38 |
Sepsis3_1 | 0.475 | 0.101 | 3.4351 | 8.725 | 0.0678 | 0.00013 | 0.2655 | 4.366 |
Sepsis3_6 | 0.999 | 0.0704 | 8.93579 | 6.566 | 0.0684 | 0.00045 | 1.1067 | 5.8 |
Sepsis3_11 | 1.73 | 0.0416 | 16.015989 | 6.943 | 0.0947 | 0.00055 | 2.2307 | 4.788 |
Sepsis3_16 | 0.6166 | 0.0083 | 18.9257 | 7.216 | 0.067483 | 0.0003 | 1.7351 | 5.054 |
Sepsis3_21 | 0.426 | 0.00284 | 20.954 | 6.6596 | 0.041555 | 0.00016 | 1.4537 | 5.134 |
Sepsis3_26 | 0.245 | 0.00017 | 22.42477 | 7.23 | 0.029335 | 0.00008 | 1.49295 | 5.749 |
Sepsis3_31 | 0.16 | 0.00111 | 24.785 | 8.676 | 0.026648 | 0.000072 | 1.5815 | 6.022 |
Traffic_fines_1 | 1999.87 | 74,009.516 | 335.4324 | 2632.54 | 0.903 | 0.21602 | 72.1658 | 32.99 |
Traffic_fines_6 | 157.018 | 10.881 | 233.545 | 47.776 | 0.422 | 0.05684 | 78.58 | 20.71 |
BPIC2017_Refused_1 | 113.23 | 212.403 | 21.412 | 319.73 | 0.07 | 0.0125 | 5.84036 | 27.45 |
BPIC2017_Refused_6 | 847 | 414.45 | 860.8566 | 515.45 | 1.0348 | 0.21099 | 345.113 | 84.71 |
BPIC2017_Refused_11 | 2521.397 | 341.585 | 9184.41 | 2012.69 | 2.488 | 0.7547 | 3744.7 | 461.968 |
BPIC2017_Refused_16 | 3775.526 | 279.275 | 29,963.995 | 6542.17 | 3.8 | 1.99307 | 12,625.59 | 2331.89 |