Open Access Article
A Multi-Criteria Decision-Making Approach for the Selection of Explainable AI Methods
by Miroslava Matejová and Ján Paralič *
Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Kosice, Letna 9, 040 01 Košice, Slovakia
* Author to whom correspondence should be addressed.
Mach. Learn. Knowl. Extr. 2025, 7(4), 158; https://doi.org/10.3390/make7040158
Submission received: 31 October 2025 / Revised: 25 November 2025 / Accepted: 27 November 2025 / Published: 1 December 2025
Abstract
The growing use of artificial intelligence models across many areas increases the need for a proper understanding of how they function and make decisions. Although these models achieve high predictive accuracy, their lack of transparency poses major obstacles to trust. Explainable artificial intelligence (XAI) has emerged as a key discipline offering a wide range of methods for explaining model decisions. Selecting the most appropriate XAI method for a given application is a non-trivial problem that requires careful consideration of the nature of the method and other aspects. This paper proposes a systematic approach to this problem using multi-criteria decision-making (MCDM) techniques: ARAS, CODAS, EDAS, MABAC, MARCOS, PROMETHEE II, TOPSIS, VIKOR, WASPAS, and WSM. The final score aggregates the results of these methods using the Borda count. We present a framework that integrates objective and subjective criteria for selecting XAI methods. The proposed methodology comprises two main phases: in the first, methods that meet the specified parameters are filtered; in the second, the most suitable alternative is selected based on criterion weights using multi-criteria decision-making and sensitivity analysis. Metric weights can be entered directly, derived from pairwise comparisons, or calculated objectively with the CRITIC method. The framework is demonstrated on concrete use cases in which we compare several popular XAI methods on tasks in different domains. The results show that the proposed approach provides a transparent and robust mechanism for objectively selecting the most appropriate XAI method, thereby helping researchers and practitioners make more informed decisions when deploying explainable AI systems. Sensitivity analysis confirmed the robustness of our XAI method selection: LIME dominated 98.5% of tests in the first use case, and Tree SHAP dominated 94.3% in the second.
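The aggregation step described in the abstract — combining the rankings produced by the individual MCDM methods via the Borda count — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the alternative names and rankings are hypothetical placeholders.

```python
def borda_count(rankings):
    """Aggregate several best-first rankings into Borda scores.

    In a ranking of n alternatives, the alternative at position p
    (0-indexed) earns n - p - 1 points, so the top-ranked alternative
    earns n - 1 points and the last earns 0.
    """
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for position, alternative in enumerate(ranking):
            scores[alternative] = scores.get(alternative, 0) + (n - position - 1)
    # Return alternatives sorted from highest to lowest aggregate score.
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Illustrative rankings of three XAI alternatives from three of the
# ten MCDM methods (values invented for the example):
rankings = [
    ["LIME", "SHAP", "Anchors"],   # e.g. TOPSIS
    ["LIME", "Anchors", "SHAP"],   # e.g. VIKOR
    ["SHAP", "LIME", "Anchors"],   # e.g. WSM
]

print(borda_count(rankings))  # LIME leads with 5 points
```

The same pattern extends directly to all ten methods: each contributes one ranking, and the Borda scores give the consensus ordering reported as the final score.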
Share and Cite
MDPI and ACS Style
Matejová, M.; Paralič, J.
A Multi-Criteria Decision-Making Approach for the Selection of Explainable AI Methods. Mach. Learn. Knowl. Extr. 2025, 7, 158.
https://doi.org/10.3390/make7040158