Article

Integration Modes Between MCDM Methods and Machine Learning Algorithms: A Structured Approach for Framework Development

by Hatice Kocaman * and Umut Asan
Department of Industrial Engineering, Istanbul Technical University, Macka, Istanbul 34367, Turkey
* Author to whom correspondence should be addressed.
Mathematics 2026, 14(1), 33; https://doi.org/10.3390/math14010033
Submission received: 28 November 2025 / Revised: 15 December 2025 / Accepted: 16 December 2025 / Published: 22 December 2025
(This article belongs to the Special Issue Multi-criteria Decision Making and Data Mining, 2nd Edition)

Abstract

Decision-making is increasingly guided by the integration of Multi-Criteria Decision-Making (MCDM) and Machine Learning (ML) approaches. Despite their complementary strengths, the literature lacks clarity on which forms of integration exist, what contributions they offer, and how to determine the most effective form for a given decision problem. This study systematically investigates integration modes through a methodology that combines a literature review, expert judgment, and statistical analyses. It develops a novel categorization of integration modes based on methodological characteristics, resulting in five distinct modes: sequential approaches (ML → MCDM and MCDM → ML), hybrid integration (MCDM + ML), and performance comparison approaches, including ML vs. MCDM and ML vs. ML evaluated through MCDM. In addition, new evaluation criteria are introduced to ensure rigor, comparability, and reliability in assessing integration forms. By applying correspondence, cluster, and discriminant analyses, the study reveals distinctive patterns, relationships, and gaps across integration modes. The primary outcome is a novel evidence-based framework designed to guide researchers and practitioners in selecting the appropriate integration modes based on problem characteristics, methodological requirements, and application context. The findings reveal that sequential approaches (ML → MCDM and MCDM → ML) are most appropriate when efficiency, structured decision workflows, bias reduction, minimal human intervention, and the management of complex multi-variable decision problems are key objectives. Hybrid integration (MCDM + ML) is better suited to dynamic and data-rich environments that require flexibility, continuous adaptation, and a high level of automation. Performance comparison approaches are most appropriate for validation-oriented studies that evaluate outputs (MCDM[ML vs. ML]) and benchmark alternative methods (ML vs. MCDM), thereby supporting reliable method selection. 
Furthermore, the study underscores the predominance of integration modes that combine value-based MCDM methods with classification-based ML algorithms, particularly for enhancing interpretability. Environmental science and healthcare emerge as leading domains of adoption, primarily due to their high data complexity and the need to balance diverse, multi-criteria stakeholder requirements.

Graphical Abstract

1. Introduction

Decision-making processes in the academic literature and practical applications have been addressed through a variety of methodological approaches that differ in how they represent, process, and synthesize inputs for evaluating alternatives. Among these, Multi-Criteria Decision-Making (MCDM) and Machine Learning (ML) methods have emerged as prominent and complementary approaches. MCDM, as a prescriptive approach grounded in normative decision theory, provides structured frameworks for evaluating and prioritizing alternatives across multiple, often conflicting, criteria. Notably, MCDM is not exclusively expert-driven. While early applications frequently relied on subjective judgments to elicit criteria weights and preference structures, the methodological scope of MCDM has progressively expanded to incorporate objective, data-driven inputs as well. Contemporary MCDM techniques can integrate subjective judgments, objective measurements, or hybrid combinations thereof, making them applicable to both judgmental and non-judgmental decision environments. In this context, the defining characteristic of MCDM lies in its explicit modeling of preferences, trade-offs, and aggregation mechanisms that guide decision-oriented outcomes (e.g., ranking, sorting, or selection), rather than in the mere origin of the inputs as expert judgments or empirical datasets. Objective weighting schemes have been widely employed in empirical settings where expert judgments are unavailable, impractical, or intentionally excluded. These approaches preserve the normative structure of MCDM while leveraging observed data to infer relative importance among criteria. ML, on the other hand, encompasses a range of algorithms capable of learning patterns and relationships from large and complex datasets to support prediction, classification, and optimization tasks, offering high adaptability and computational efficiency [1].
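The objective weighting schemes mentioned above can be made concrete with a short sketch. The entropy-weighting method below derives criterion weights purely from the dispersion of observed data, with no expert input; the function name and the toy decision matrix are illustrative, not taken from the studies reviewed here.

```python
import math

def entropy_weights(matrix):
    """Objective criterion weights via Shannon entropy (illustrative sketch).

    Criteria whose values vary more across alternatives carry more
    discriminating information and therefore receive larger weights.
    """
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    diversification = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)  # entropy in [0, 1]
        diversification.append(1.0 - e)
    s = sum(diversification)
    return [d / s for d in diversification]

# Three alternatives scored on two criteria (hypothetical values).
# Criterion 2 is identical for every alternative, so its weight collapses to ~0.
X = [[0.2, 5.0], [0.4, 5.0], [0.4, 5.0]]
w = entropy_weights(X)
```

Because the second criterion does not discriminate between alternatives at all, virtually the entire weight mass shifts to the first criterion, illustrating how such schemes infer relative importance from observed data alone.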
Advancements in data analytics and intelligent systems have increasingly blurred the boundaries between these methodological streams, fostering the development of integrated decision-making frameworks. This study focuses on the integration of MCDM and ML methods, enabling decision-makers to harness their complementary strengths—the interpretability, structured multi-criteria evaluation, and transparency of MCDM, together with the adaptability, learning capability, and predictive power of ML algorithms—to enhance both the effectiveness and quality of decision-making.
Although the number of studies combining both approaches has recently increased [2], only a few studies [3,4,5,6,7] have examined how researchers use ML and MCDM together and the directions in which these methods are evolving, particularly through structured literature reviews and bibliometric analyses. However, none of these studies have investigated how the two approaches can be effectively integrated, nor have they systematically compared the advantages of different integration forms or their applicability across decision contexts. As a result, the literature offers little practical guidance on selecting appropriate integration modes for specific decision contexts, and there are no established criteria to inform the choice of ML–MCDM integration modes. In the absence of structured categorization and evaluation criteria, the choice of integration modes is often arbitrary, driven by convenience or precedent rather than specific problem requirements. This lack of clarity constitutes the core research problem addressed in this study, as it may lead to underutilization or inappropriate application of ML–MCDM integration, and limit its potential benefits for complex decision-making problems.
Therefore, to gain a deeper understanding of the integration of these approaches, this study aims to address the following questions:
  • Which forms of integration between ML algorithms and MCDM methods exist for addressing decision problems?
  • What criteria should be considered when selecting the most appropriate form of integration?
  • What specific contributions are delivered by these integration forms?
  • What are the current trends in research on the joint use of MCDM and ML methods?
  • Which application areas have adopted integrated approaches most extensively?
  • What are promising future research directions?
To answer these questions, this study employs a structured methodological approach that involves a systematic literature review, categorization, and expert evaluation of different integration forms, and the development of a framework. Based on the review of the current literature, the various ways in which MCDM methods and ML algorithms have been integrated are identified and categorized, while evaluation criteria are established drawing on prior research and expert input to benchmark these integration forms. The relevant studies are then systematically mapped to both categories and criteria. The use of expert judgments in this process ensures a reliable and comprehensive view of how integration forms are employed and what contributions they deliver. These relationships are further examined through correspondence analysis (CA) and cluster analyses, which uncover patterns within and between integration modes and evaluation dimensions. The findings offer an evidence-based framework for selecting a suitable integration approach based on the inherent features, structure, and requirements of the decision problem. The validity of the proposed framework is then assessed through discriminant and Procrustes analyses. Finally, key insights from the analysis, such as the most commonly used integration modes, common method–algorithm pairings, and dominant application areas, are presented, along with limitations and suggestions for future research to advance the integration of MCDM methods and ML algorithms.
Consequently, the theoretical and practical contributions of the study can be summarized as follows:
  • Follows a methodology that synthesizes insights from the literature with expert opinions, thereby ensuring both theoretical depth and practical relevance.
  • Establishes a new systematic categorization of integration modes between ML algorithms and MCDM methods, grounded in their methodological characteristics and application contexts.
  • Introduces, for the first time, a structured set of criteria that enable standardized and reliable evaluation of integration modes, ensuring comparability and rigor.
  • Demonstrates how the complementary strengths of ML and MCDM enhance decision-making, overcome their individual limitations, and generate added value when combined.
  • Employs correspondence analysis (CA) and cluster analysis to visually map the relationships between integration modes and evaluation criteria, revealing clusters, patterns, and associations that highlight dominant integration modes, trends, and potential gaps in the literature.
  • Proposes a new evidence-based framework that consolidates theoretical insights with practical considerations. It provides researchers and practitioners with clear guidance on selecting the appropriate integration mode, ensuring effective alignment between methodological choices and problem-specific requirements. The proposed framework is designed to address a wide range of complex decision problems where data is available, enhancing the effectiveness and efficiency of the decision process.
  • Validates the spatial associations and cluster structure revealed by CA and clustering using discriminant analysis and further confirms the robustness of the proposed framework through Procrustes and discriminant analyses conducted on a holdout sample.
  • Finally, outlines future research directions to address current gaps, leverage emerging trends, and further advance ML–MCDM integration.
The rest of this paper is organized as follows: Section 2 provides an overview of MCDM and ML methods. Section 3 describes the research methodology, including the systematic literature review, expert evaluation, mapping, framework development, and validation. Section 4 provides an overview of the current state of research, emerging trends in the joint use of MCDM and ML methods, and key findings. Section 5 provides future research directions. Finally, Section 6 presents the concluding remarks and limitations of the study.

2. Overview of MCDM Methods and ML Algorithms

2.1. MCDM Methods and Their Strengths and Weaknesses

MCDM is a branch of operations research that provides a powerful framework for analyzing and evaluating complex decision problems. MCDM methods can be categorized into several methodological families. Elementary methods rely on simple mathematical aggregation, with Simple Additive Weighting (SAW) being a representative example [8]. Value-based methods, such as the Analytic Hierarchy Process (AHP), the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), and Grey Relational Analysis (GRA), construct value or utility functions to evaluate alternatives. Outranking methods—including the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) and Elimination and Choice Expressing Reality (ELECTRE)—use pairwise dominance relations to identify preferred alternatives without requiring full compensability among criteria. Other notable families include goal programming and reference point methods, as well as interactive approaches that involve the decision-maker iteratively in the search for the most preferred solution.
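As a minimal illustration of the elementary family, the sketch below implements SAW on a toy decision matrix (three alternatives, one cost criterion and one benefit criterion; all data are hypothetical): values are min-max normalized per criterion, inverted for cost criteria, then aggregated as a weighted sum.

```python
def saw_rank(matrix, weights, benefit):
    """Simple Additive Weighting (SAW) on a toy decision matrix.

    Each criterion is min-max normalized (inverted for cost criteria),
    then scores are aggregated as a weighted sum per alternative.
    """
    m, n = len(matrix), len(matrix[0])
    scores = [0.0] * m
    for j in range(n):
        col = [row[j] for row in matrix]
        lo, hi = min(col), max(col)
        for i in range(m):
            if hi == lo:
                r = 1.0                                 # no discrimination
            elif benefit[j]:
                r = (matrix[i][j] - lo) / (hi - lo)     # larger is better
            else:
                r = (hi - matrix[i][j]) / (hi - lo)     # smaller is better
            scores[i] += weights[j] * r
    return scores

# Alternatives x criteria: [cost, quality]; the first criterion is a cost.
scores = saw_rank([[100, 7], [120, 9], [90, 5]], [0.5, 0.5], [False, True])
best = scores.index(max(scores))
```

Here the first alternative wins by balancing a moderate cost with moderate quality, which is exactly the compensatory trade-off behavior that characterizes the elementary and value-based families.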
MCDM methods offer several advantages, particularly in complex decision-making scenarios where multiple, often conflicting criteria must be considered. By accommodating multiple criteria simultaneously, these methods allow decision-makers to achieve a more comprehensive understanding of the trade-offs and interactions among different factors [9,10,11,12,13], especially in complex decision problems [14,15,16]. These methods improve decision-making by incorporating quantitative data such as cost, time, and efficiency, along with qualitative judgments such as expert opinions and stakeholder preferences [17,18], while also effectively integrating historical data and expert judgments to quantify subjective opinions [19].
While MCDM methods offer numerous advantages, they also have weaknesses and limitations. Some MCDM methods can be computationally intensive, requiring substantial processing power and time, especially when dealing with many criteria and alternatives [20]. As a result, their applicability may be restricted in real-time or resource-constrained scenarios [21,22]. Furthermore, humans’ ability to process and analyze large volumes of data is inherently limited [23]. This limitation often necessitates the use of advanced analytical tools and techniques to aid decision-making.
MCDM methods also depend heavily on comprehensive and high-quality data for all criteria considered. When data are incomplete, uncertain, or unreliable, the resulting evaluations may become misleading, potentially compromising the quality of decisions. Finally, relying on expert judgment to define criteria and assign weights can yield valuable insights to mitigate these issues; however, it is essential to remain aware of potential biases and inconsistencies that may arise [10,13,15,24,25,26].

2.2. ML Algorithms and Their Strengths and Weaknesses

ML is a subfield of artificial intelligence (AI) that involves developing algorithms and statistical models. These models enable computers to learn from data and make predictions and decisions without requiring explicit programming instructions. ML algorithms are classified into three categories: supervised learning, unsupervised learning, and reinforcement learning [27]. Supervised learning algorithms learn from a labeled dataset, in which each training example is associated with an output label. The main objective is to enable the model to learn a relationship between inputs and outputs so that it can accurately predict the labels of new data points. Common algorithms in this category include Linear Regression, Logistic Regression, Decision Trees, and Support Vector Machines (SVMs). Unsupervised learning, on the other hand, involves training algorithms on unlabeled data to uncover hidden patterns or structures within the dataset [28]. These algorithms include k-means clustering, hierarchical clustering, and Principal Component Analysis (PCA). Finally, reinforcement learning focuses on the interaction between an agent and its environment: the agent learns from the outcomes of its actions rather than receiving direct instructions [27].
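The first two categories can be contrasted in miniature. The toy functions below (hypothetical names and data, no library dependencies) show learning from labeled examples versus discovering structure in unlabeled data.

```python
def nearest_neighbor(train, query):
    """Supervised learning in miniature: 1-nearest-neighbor prediction
    from labeled (x, y) examples on a 1-D feature."""
    x, y = min(train, key=lambda p: abs(p[0] - query))
    return y

def two_means_1d(xs, iters=10):
    """Unsupervised learning in miniature: 2-means clustering of an
    unlabeled 1-D dataset, returning the two cluster centers."""
    c = [min(xs), max(xs)]                    # initialize at the extremes
    for _ in range(iters):
        groups = ([], [])
        for x in xs:
            groups[abs(x - c[1]) < abs(x - c[0])].append(x)  # nearest center
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return sorted(c)

# Supervised: labels guide prediction for a new point.
label = nearest_neighbor([(1.0, "low"), (9.0, "high")], 8.2)
# Unsupervised: structure (two clusters) emerges without any labels.
centers = two_means_1d([1.0, 1.2, 0.8, 9.0, 9.2, 8.8])
```

The distinction carries over directly to the integration modes discussed later: supervised outputs (predictions, class labels) and unsupervised outputs (clusters, components) enter MCDM pipelines in different roles.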
Machine learning integrates various elements of computer science, applied mathematics, and statistics, supported by a robust theoretical foundation, making it successful in producing more accurate results [19,29]. These algorithms can process and analyze large volumes of data that would be impractical for humans to handle manually. Their ability to recognize complex patterns and relationships within large datasets provides deeper insights for decision-makers, making them invaluable tools [9,30,31,32]. ML algorithms improve decision-making quality by transforming data into information without human intervention, thus reducing human errors [17,26,33].
Even though ML algorithms are powerful and versatile, they have several disadvantages and challenges that can hinder their practical implementation and reliability. One key challenge is their dependency on high-quality data; machine learning models require large, clean, and representative datasets to perform well, and poor-quality or imbalanced data can lead to inaccurate or biased outcomes [34]. Complexity and interpretability issues are also significant, as many advanced models—particularly deep learning architectures—are often treated as “black boxes”, making it difficult for users to understand how decisions are made, which reduces transparency and trust in critical applications [35]. Machine learning models can also be resource-intensive, requiring substantial computational power, memory, and storage—especially during training—which can limit their accessibility and scalability for smaller organizations or edge devices [36]. Additionally, machine learning projects often rely on domain expertise for feature engineering, problem formulation, and interpretation of results, which may not be readily available in all contexts [37]. Lastly, scalability issues can arise when deploying models in real-world settings with vast and dynamic data streams, requiring robust infrastructure and architecture to support real-time processing and updates [19,23].
The strengths and limitations of ML algorithms and MCDM methods are discussed separately in this section, not as independent evaluations, but to emphasize their complementary nature. Specifically, the strengths and limitations of each approach, when used independently, highlight the need for integrating ML and MCDM methods.

3. Methodology

This study suggests a structured methodological approach to examine the integration modes between MCDM methods and ML algorithms and to provide insight to researchers and practitioners. The process begins with the formulation of research questions (presented in the Introduction), which serve as the foundation for the study’s overall design. To address these questions, two primary sources of input are utilized: expert opinion and a comprehensive review of the literature. Expert insights draw on practical experience and contextual knowledge, while the literature review ensures theoretical depth and coverage of the state of the art.
Building on these inputs, two parallel tasks are undertaken. First, the categorization of integration modes identifies and classifies the different ways in which MCDM methods and ML algorithms are combined in the existing body of research. Second, the evaluation criteria are determined, drawing on expert input and prior studies to establish a set of benchmarks for assessing these integration modes.
Once the categories and criteria are established, articles are associated with relevant categories to capture the integration modes employed in each study and to analyze how different methods are combined or applied systematically. The experts then evaluate each article against the criteria to highlight the contributions and outcomes reported in prior studies. To ensure consistency in judgments, inter-rater reliability is assessed to verify the level of agreement among the experts. These associations and evaluations are then synthesized to reveal the relationships between categories and criteria, providing a structured view of how different forms of integration perform against the established evaluation criteria.
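The text does not specify which agreement statistic is computed; one common choice for categorical judgments is Cohen's kappa, which corrects raw agreement for chance. The sketch below uses hypothetical mode labels assigned by two raters.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgments (sketch).

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e the agreement expected by chance from the marginal frequencies.
    """
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum((ca[lab] / n) * (cb[lab] / n) for lab in set(ca) | set(cb))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical integration-mode labels assigned to ten articles by two experts.
a = ["ML->MCDM"] * 4 + ["MCDM->ML"] * 3 + ["MCDM+ML"] * 3
b = ["ML->MCDM"] * 4 + ["MCDM->ML"] * 2 + ["MCDM+ML"] * 4
k = cohens_kappa(a, b)
```

With nine of ten matching labels, kappa comes out near 0.85, which conventional scales would read as near-perfect agreement between the raters.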
To analyze these associations in depth, CA and cluster analysis are employed. These techniques enable the identification of patterns and connections between integration modes and evaluation criteria, offering an evidence-based foundation for further interpretation. The results of this analysis inform the development of a structured framework to identify the appropriate integration form based on the nature of the decision problem. The framework is then validated by confirming the spatial associations and cluster structure through discriminant analysis, and by further demonstrating its robustness using Procrustes and discriminant analyses on an independent holdout sample.
The methodology concludes with the presentation of key findings from the analysis, followed by suggestions for future research that highlight promising avenues to advance the integration of MCDM methods and ML algorithms. Figure 1 illustrates the structured methodological approach employed in this study, while the following sections explain each step in detail.

3.1. Literature Review

The literature review for this study was conducted using the Scopus (Elsevier) and Web of Science (Clarivate) databases, selected for their extensive coverage of high-quality publications in engineering, computer science, and decision science—ensuring alignment with the research focus. To capture the evolution and development of the concept, we included publications spanning 1999 to 2026. Only peer-reviewed journal articles were considered to ensure fully validated research, prioritizing methodological rigor, reliability, and completeness. Table 1 summarizes the key details of the search process for Scopus, and a similar process was followed for Web of Science.
As shown in Figure 2, the systematic selection process began by retrieving 1146 records from the Scopus and Web of Science databases. After removing duplicate studies that appeared in both databases, the remaining 816 records were further examined for relevance, resulting in the exclusion of 254 papers that were deemed out of scope based on their keywords. After this initial filtering, 562 papers were selected for further evaluation through an additional title and abstract screening. The titles and abstracts of these papers were then reviewed, resulting in the exclusion of 246 additional papers, bringing the total to 316 papers that underwent full-text screening. A more detailed full-text screening was then conducted, during which 86 papers were discarded for reasons such as lack of original content, insufficient detail, or irrelevance to the study’s objectives. Ultimately, 230 studies were deemed relevant, of which 208 were included in the final review, ensuring a refined and focused set of studies for the literature review. The remaining 22 studies (approximately 10% of the relevant set) were reserved for validation.

3.2. Categorization

The literature has recently focused on combining MCDM and ML approaches to make more informed decisions. To enhance the outcomes of these approaches, an effective combination of structured decision-making processes, human judgment, analytics (machines), and relevant data is essential for achieving more informed and high-quality decisions. This integrated perspective is clearly stated in the DECAS model proposed by Elgendy et al. [38], where decision-making effectiveness relies on the structured interaction of five key components: the decision-making process, the decision-maker, the decision, data, and analytics. This comprehensive approach emphasizes integrating these fundamental components in decision-making contexts.
Growing interest in combining expert-based and data-driven decision-making approaches underscores the need for a deeper understanding of how to achieve effective human–machine integration. To this end, Ransbotham et al. [39] identified five primary modes of interaction, each shaped by the decision context and type:
  • Machines make decisions jointly with humans and implement them together.
  • Machines decide, while humans are responsible for implementation.
  • Machines provide recommendations, and humans make the final decision.
  • Machines generate insights to inform the human decision-making process.
  • Humans create options and hypothetical scenarios, which machines then evaluate.
The discussion in these studies predominantly frames the distinction as expert-based versus data-driven, implicitly suggesting that the choice of integration is determined solely by the nature of the data and the sequence of methods. As noted in the Introduction, however, numerous MCDM studies also employ objective weighting schemes in non-judgmental or data-driven settings.
In another study, conducting a bibliometric analysis, Düzen et al. [6] suggested purpose-oriented models to describe different types of combinations of ML and MCDM methods, with particular emphasis on the sequence of methods and the comparison of methodological performances. Unlike studies that categorize integration types, Liao et al. [4] review a particular type of integration by outlining how machine learning technologies contribute to MCDM. Specifically, they conduct an in-depth examination of ML-supported MCDM through four key procedures: criteria extraction, criteria interaction, parameter determination, and decision support system development. The study by Reyes-Norambuena et al. [7] examines how MCDM and ML are combined to enhance decision-making in pedestrian dynamics. Rather than proposing a systematic categorization of integration types, the review paper provides examples from the literature. The findings indicate that hybrid MCDM algorithms can effectively assess the performance of ML models.
Building on these insights, selecting and implementing a suitable and effective mode of integration is not straightforward. To better understand the integration dynamics, the literature review enabled us to identify and group the different ways MCDM methods and ML algorithms have been integrated into broad categories based on common characteristics. As a result, five primary categories of integration between these two approaches were identified to more effectively represent the diversity and scope of recent studies. The integration modes are presented below with their respective mathematical formulations and explanations. The formulations are provided in a generalized form, with article-specific details omitted for clarity and conciseness. For consistency, the notation used for each mode is specified as follows. For further information, please refer to the Supplementary Materials.
Notation
The notation used in the mathematical formulations of the integration modes is presented below.
A: set of decision alternatives, where A = {A_1, A_2, …, A_m}
C: set of evaluation criteria, where C = {C_1, C_2, …, C_n}
x_ij: value of alternative i under criterion j
X_i: feature vector for alternative A_i, where X_i = [x_i1, x_i2, …, x_in]
p_ij: observed performance of ML algorithm A_i on performance criterion C_j
N(·): normalization operator
r_ij: normalized or scaled criterion value
I_j: importance of feature j derived from the ML model
W_MCDM: MCDM weighting method
w_j^ML: weight of criterion j derived by normalizing I_j
w_j^exp: weight of criterion j derived from expert judgment
w_j: final weight of criterion j used in MCDM
W: weight-fusion operator combining multiple weight sources
D(A_i): overall decision score or ranking value for alternative A_i
θ: parameters of the ML model
f(·; θ): ML function parameterized by θ
Λ: parameters of the MCDM aggregation method
A_Λ(·): MCDM aggregation function
Y: observed target or response values
ŷ: predicted outputs from the ML model(s), e.g., predictions, probabilities, class labels, cluster assignments, feature importances, or learned weights
ŷ_i: predicted output from the ML model for alternative A_i
L(ŷ, Y): loss or objective function used to optimize the ML parameters
η: learning rate
λ: trade-off parameter controlling MCDM influence
E_ML: evaluation metrics for ML results
E_MCDM: evaluation metrics for MCDM results
ρ(·,·): performance comparison or correlation function
g(·): function for estimating feature importance in the ML model
Φ(·): transformation or concatenation operator
Ω(·): synthesis or fusion operator
δ(·): selection/decision function that chooses the preferred algorithm(s) given scores and additional constraints
Category 1: Using ML Outputs as Inputs to MCDM Methods (ML → MCDM)
This category involves integrating outputs from ML algorithms or data analytics techniques as inputs to MCDM methods. For instance, predictions, scores, extracted features, or feature importances derived from large datasets can serve as criterion values or weights in the decision-making process. This integration enables decision-makers to leverage data-driven insights, improve the objectivity of criteria evaluation, and enhance the overall quality and robustness of multi-criteria decisions. Several studies have demonstrated this integration, such as Pirouz et al. [40], who utilized a Bayesian network–supported AHP method for COVID-19 analysis, while Yadegaridehkordi et al. [16] applied segmentation-based TOPSIS to select green hotels. Similarly, Elomiya et al. [41] integrated Random Forest with MCDM methods such as Analytic Hierarchy Process (AHP), fuzzy AHP, and Stepwise Weight Assessment Ratio Analysis (SWARA) to select optimal sites for electric vehicle charging stations. The general formulation of the integration mode ML → MCDM can be written concisely as:
A* = argmax_i A_Λ({N_j(f_j(A_i; θ_j)), W(w_j^ML, w_j^exp)} for all j),
where f_j denotes the ML model that produces usable data for MCDM, N_j denotes normalizing or representing the data (numerical, probabilistic, or fuzzy), W denotes the operator combining ML-derived and expert-derived weights, and A_Λ denotes aggregating all weighted and normalized criterion values into a final decision score. See Table 2 for a summary of the stages of the integration mode.
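A minimal sketch of this formulation, with illustrative data throughout: a plain weighted sum stands in for the aggregation A_Λ, and a hard-coded importance vector stands in for ML output (a real study might instead use random-forest feature importances and TOPSIS).

```python
def ml_to_mcdm(matrix, importances, expert_w, lam=0.5):
    """Category 1 sketch (ML -> MCDM): ML-derived feature importances
    become criterion weights, fused with expert weights by a convex
    combination W, then aggregated by a weighted sum playing A_Lambda.
    All data and names are illustrative.
    """
    s = sum(importances)
    w_ml = [v / s for v in importances]              # normalize I_j -> w_j^ML
    # W(w^ML, w^exp): convex combination controlled by lam
    w = [lam * a + (1 - lam) * b for a, b in zip(w_ml, expert_w)]
    # A_Lambda: weighted-sum aggregation over (already normalized) values
    scores = [sum(wj * x for wj, x in zip(w, row)) for row in matrix]
    return scores, w

# Two alternatives, two criteria; values assumed already normalized by N_j.
X = [[0.9, 0.2], [0.5, 0.8]]
scores, w = ml_to_mcdm(X, importances=[3.0, 1.0], expert_w=[0.5, 0.5])
best = scores.index(max(scores))
```

Here the ML importances pull the fused weight toward the first criterion, so the alternative that is strong on that criterion ranks first; with expert weights alone the two alternatives would be much closer.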
Category 2: Using MCDM Results as Input in ML Algorithms (MCDM → ML)
This category integrates results obtained with MCDM methods as input to ML algorithms. The criteria weights or alternative rankings/sortings derived from MCDM can be used to train machine learning models or to improve decision-making. This integration enables ML algorithms to incorporate structured expert knowledge and multi-criteria preferences, enhancing their accuracy and practical relevance. Notable examples of this approach include Oliveira de Sousa et al. [42], who used AHP-based multi-criteria analysis for flood risk assessment with machine learning, and Saleh et al. [43], who applied TOPSIS followed by machine learning classifiers to prioritize maintenance tasks in healthcare facilities. Additionally, Sotiropoulou et al. [44] integrated PROMETHEE II with machine learning regression methods, such as k-Nearest Neighbors and Support Vector Machines, for land use suitability analysis. The general formulation of the integration mode MCDM → ML can be written concisely as:
ŷ_i = f(Φ(X_i, A_Λ({N_j(x_ij), w_j}), {w_j}); θ),
where A_Λ(·) produces aggregated decision results from multi-criteria data, Φ(·) fuses those structured results into the ML input space, and f(·; θ) learns mappings from the enriched inputs to outcomes, guided by the loss L. See Table 3 for a summary of the stages of the integration mode.
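The sketch below instantiates this formulation under strong simplifications: a weighted sum plays the role of A_Λ, its thresholded scores label the training data, and a nearest-centroid rule stands in for the ML model f(·; θ). All names and data are illustrative.

```python
def mcdm_to_ml(features, weights, threshold=0.5):
    """Category 2 sketch (MCDM -> ML): weighted-sum MCDM scores D(A_i)
    label the training data, then a nearest-centroid classifier learns
    to reproduce the prioritization for unseen alternatives.
    """
    labels = []
    for x in features:
        d = sum(w * v for w, v in zip(weights, x))   # A_Lambda: weighted sum
        labels.append(1 if d >= threshold else 0)    # prioritized vs. not
    # "Training" f(.; theta): one centroid per class.
    cents = {}
    for lab in (0, 1):
        pts = [x for x, l in zip(features, labels) if l == lab]
        if pts:
            cents[lab] = [sum(dim) / len(pts) for dim in zip(*pts)]
    def predict(x):
        return min(cents, key=lambda l: sum((a - b) ** 2
                                            for a, b in zip(x, cents[l])))
    return labels, predict

# Four alternatives with two normalized criterion values each (toy data).
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
labels, predict = mcdm_to_ml(X, weights=[0.5, 0.5])
new_label = predict([0.85, 0.75])   # classify a previously unseen alternative
```

The point of the pattern is the division of labor: the MCDM step encodes the preference structure once, and the learned model then generalizes that structure to new alternatives without re-running the elicitation.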
Category 3: Combining ML and MCDM Methods (MCDM + ML)
This category involves jointly applying ML and MCDM methods within the same decision-making process. By integrating the data-driven analytical and predictive capabilities of ML with the multi-criteria evaluation and weighting strengths of MCDM, this hybrid approach supports comprehensive and consistent decisions that account for both quantitative data and qualitative judgments. Such combinations allow decision-makers to exploit the complementary advantages of the two methods, leading to more accurate, balanced, and practically relevant outcomes. Several studies have demonstrated this integration, such as Guerrero-Gómez-Olmedo et al. [45], who enhanced the interpretability of deep neural networks (DNNs) using multi-criteria analysis. Rafiei-Sardooi et al. [46] integrated state-of-the-art machine learning methods (support vector machine, random forest, and boosted regression tree) with TOPSIS for urban flood vulnerability analysis, and Parishani et al. [47] developed a new weighting method for multi-class classification in disease diagnosis by combining the confusion matrix and MCDM methods. The general formulation of the integration mode MCDM + ML can be concisely expressed as:
$$S(A_{i}) = \Omega\big(A_{\Lambda}(\{N_{j}(x_{ij},\; \hat{y}_{ij}),\; w_{j}\}),\; \hat{y}_{i}\big),$$
where the ML model $f(X;\theta)$ produces insights or predictions that inform the multi-criteria aggregation process $A_{\Lambda}$, MCDM aggregation yields structured decision outcomes $D(A_{i}) = A_{\Lambda}(\{N_{j}(x_{ij}, \hat{y}_{ij}),\; w_{j}\}\ \text{for all } j)$ that can also regulate or be fused with ML predictions, and the synthesis function $\Omega(\cdot)$ combines both perspectives to produce a comprehensive decision score $S(A_{i}) = \Omega(D(A_{i}), \hat{y}_{i})$. See Table 4 for a summary of the stages of the integration mode.
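A compact sketch of this hybrid formulation follows, where hypothetical ML predictions enter the synthesis step and $\Omega$ is realized as a convex combination of the MCDM score and the prediction. All values, including the mixing parameter beta, are illustrative.

```python
import numpy as np

# Decision matrix: 5 alternatives x 2 observed criteria (illustrative)
X = np.array([[7., 0.4], [5., 0.9], [9., 0.2], [6., 0.7], [8., 0.5]])
w = np.array([0.6, 0.4])                  # criterion weights (assumed)

# ML predictions per alternative, e.g. a predicted suitability in [0, 1]
# (hypothetical outputs of a trained model f(X; theta))
y_hat = np.array([0.30, 0.75, 0.20, 0.60, 0.45])

# A_Lambda: max-normalize the observed criteria and aggregate -> D(A_i)
D = (X / X.max(axis=0)) @ w

# Omega: convex synthesis of the MCDM score and the ML prediction
beta = 0.7                                # weight of the MCDM view (assumed)
S = beta * D + (1 - beta) * y_hat
best = int(np.argmax(S))
print(best, np.round(S, 3))
```

More elaborate choices of $\Omega$ (e.g., rank fusion or constraint-based filtering) fit the same template.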
Category 4: Comparing the Performances of ML Algorithms and MCDM Methods (ML vs. MCDM)
This category involves evaluating and comparing the performance of ML algorithms and MCDM methods across different decision-making contexts. Given their distinct strengths—ML in handling large-scale data and uncovering patterns, and MCDM in structuring complex problems and incorporating expert judgment—such comparisons aim to determine which approach is more effective for a given problem setting. Fernández et al. [48] compared support vector regression (SVR) and VIKOR combined with entropy weighting for multi-material extrusion processes, while Almansi et al. [49] compared multilayer perceptron (MLP) and AHP for hospital site suitability mapping in Malacca, Malaysia. The general formulation of the integration mode ML vs. MCDM can be concisely expressed as:
If observed target values or true labels are available,
$$M^{*} = \arg\max\; \Omega\big(\mathrm{Eval}_{\mathrm{MCDM}}(A_{\Lambda}(\{N_{j}(X),\; w_{j}\}),\; Y),\; \mathrm{Eval}_{\mathrm{ML}}(f(X;\theta),\; Y),\; \rho(D(A), \hat{y})\big).$$
If observed target values or true labels are not available,
$$M^{*} = \arg\max\; \Omega\big(\mathrm{Eval}_{\mathrm{MCDM}}(A_{\Lambda}(\{N_{j}(X),\; w_{j}\})),\; \mathrm{Eval}_{\mathrm{ML}}(f(X;\theta)),\; \rho(D(A), \hat{y})\big).$$
This formulation shows:
  • Both methods are applied independently on the same data.
  • Each is evaluated separately via its own metric.
  • Their results are compared through performance, consistency, or correlation measures.
  • The best-performing or most consistent approach is selected via the synthesis function $\Omega$.
See Table 5 for a summary of the stages of the integration mode.
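The steps above can be sketched as follows. Both methods are applied independently to the same data, each is evaluated against the available target values, their rankings are compared via a Spearman correlation for $\rho$, and $\Omega$ selects the better performer. All scores are illustrative.

```python
import numpy as np

def spearman_rho(a, b):
    """rho(D(A), y_hat): Spearman rank correlation between two score vectors."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Observed target values Y for 6 alternatives (illustrative)
Y = np.array([0.9, 0.4, 0.7, 0.2, 0.6, 0.8])

# Independently produced scores on the same data (hypothetical outputs)
mcdm_scores = np.array([0.85, 0.35, 0.75, 0.30, 0.55, 0.70])  # A_Lambda(...)
ml_scores   = np.array([0.80, 0.50, 0.60, 0.25, 0.65, 0.85])  # f(X; theta)

# Eval: each method scored by its own metric (here, correlation with Y)
eval_mcdm = np.corrcoef(mcdm_scores, Y)[0, 1]
eval_ml = np.corrcoef(ml_scores, Y)[0, 1]
agreement = spearman_rho(mcdm_scores, ml_scores)

# Omega: pick the better-performing method
winner = "MCDM" if eval_mcdm >= eval_ml else "ML"
print(winner, round(eval_mcdm, 3), round(eval_ml, 3), round(agreement, 3))
```

When true labels are unavailable, the evaluation metrics would be replaced by internal measures (e.g., consistency or stability indices), as in the second formulation.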
Category 5: Comparison of ML Algorithm Performances Using MCDM Methods (MCDM[ML vs. ML])
This category compares the performance of ML algorithms using MCDM methods. This approach allows ML algorithms to be evaluated and ranked according to multiple performance criteria. Various criteria, such as accuracy, precision, recall, F1 score, and processing time, are usually taken into account when evaluating the performance of ML algorithms. When there are trade-offs among these criteria, MCDM methods enable a comprehensive comparison of algorithms. Tashakkori et al. [50] applied the TOPSIS method to compare ML algorithms such as Support Vector Machine (SVM), Decision Tree (DT), Gaussian Process (GP), Multilayer Perceptron (MLP), and the ensemble models Random Forest (RF) and Bagging for neonatal mortality prediction, while Mekouar [51] proposed an AHP-based MCDM framework for selecting the most suitable classifier from algorithms such as Random Forest, Stochastic Gradient Descent (SGD), Multilayer Perceptron (MLP), Decision Tree (DT), Naive Bayes (NB), Logistic Regression (LR), and k-Nearest Neighbors (KNN). The general formulation of the integration mode MCDM[ML vs. ML] can be concisely expressed as:
$$M^{*} = \arg\max_{i}\; \delta\big(\mathrm{rank}(A_{\Lambda}(\{N_{j}(p_{ij}),\; W_{\mathrm{MCDM}}(\{r_{ij}\}\ \text{or expert inputs})\})),\; \text{constraints}\big),$$
where $M^{*}$ represents the optimal ML model selected from a set of candidates $\{A_{i}\}$, $\delta(\cdot)$ is the decision or selection operator that identifies the most preferred alternative according to a specified rule, the $\mathrm{rank}$ operator orders alternatives based on their aggregated evaluation scores $A_{\Lambda}$, and $W_{\mathrm{MCDM}}(\cdot)$ provides the criterion importance weights, derived either from expert judgments $r_{ij}$ or from other multi-criteria weighting procedures. See Table 6 for a summary of the stages of the integration mode.
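Since TOPSIS is the method most often cited for this mode, a minimal TOPSIS implementation over a hypothetical performance matrix $p_{ij}$ illustrates the formulation. The model names in the comments, the metric values, and the weights are all assumed for demonstration.

```python
import numpy as np

def topsis(P, w, benefit):
    """Rank alternatives with TOPSIS: P is a models x metrics matrix."""
    N = P / np.linalg.norm(P, axis=0)          # N_j: vector normalization
    V = N * w                                  # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)             # closeness coefficients

# p_ij: performance of 4 candidate models on 3 criteria (illustrative)
#            accuracy  F1     time (s)
P = np.array([[0.91,   0.89,  12.0],     # e.g. a random forest
              [0.88,   0.90,   3.0],     # e.g. an SVM
              [0.93,   0.86,  45.0],     # e.g. an MLP
              [0.85,   0.84,   1.0]])    # e.g. naive Bayes
w = np.array([0.5, 0.3, 0.2])            # W_MCDM: assumed criterion weights
benefit = np.array([True, True, False])  # processing time is a cost criterion

cc = topsis(P, w, benefit)
M_star = int(np.argmax(cc))              # delta: select the top-ranked model
print(M_star, np.round(cc, 3))
```

Here the fast, balanced model wins despite not having the highest accuracy, which is exactly the kind of trade-off the text describes.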
In response to Research Question 1, the analysis identified five distinct modes of integration between ML algorithms and MCDM methods, distinguished by purpose and methodological approach. This categorization provides a comprehensive framework for understanding how MCDM methods and ML algorithms can be integrated in decision-making. By combining these approaches, more effective decision-making and prediction processes can be achieved, both theoretically and practically.

3.3. Criteria Identification

To address Research Question 2, a set of fourteen criteria was introduced to evaluate integration modes across diverse decision-making problems, thereby establishing benchmarks for systematic assessment. By applying these criteria to the reviewed studies, we aim to develop a more informed and structured framework that generates valuable insights into the applications of ML–MCDM integration. Moreover, this framework is intended to guide future research by providing concrete recommendations and identifying promising directions for further exploration.
To achieve this, the criteria were derived from an extensive literature review and refined through expert input, with their definitions carefully determined to ensure conceptual clarity. The selection process was deliberately designed to align with the most current and relevant research, thereby providing a rigorous and well-founded methodological foundation. The extracted criteria then underwent a meticulous content analysis, which involved a detailed examination of their definitions and meanings. During this process, closely related criteria were identified based on their conceptual overlap and thematic similarities. For example, criteria such as “Adaptability,” “Scalability,” “Responsiveness,” “Adjustability,” “Expandability,” and “Compatibility” were found to share a conceptual relationship with “Flexibility.” In this way, conceptually related items were consolidated under common headings wherever possible. Nevertheless, complete independence among the 14 identified criteria cannot be fully ensured, and some degree of conceptual overlap may remain. This does not present a methodological concern for the subsequent analyses, as the correspondence analysis employed here is specifically designed to capture and represent associations among categories within a low-dimensional space. In other words, CA does not assume independence among (row or column) categories, and it is acceptable for integration modes or criteria to exhibit internal correlations. The complete set of criteria, their closely related criteria, and the corresponding source citations are provided in Appendix A (Table A1).
The evaluation criteria and their detailed descriptions are presented below.
Novelty (C1): An approach is considered “novel” if it introduces a new and original idea, method, or feature. Novel approaches typically differ from existing models or methods and provide innovative ways to address specific problems. Accordingly, the novelty of an integration mode is closely linked to its degree of innovation and originality [52,53,54]. This study examines the extent to which the integration of MCDM methods and ML algorithms contributes to novel methodological developments.
Complexity (C2): A decision problem is considered complex when it involves multiple components, interdependencies, a large number of alternatives, and/or conflicting objectives, and can manifest at different levels depending on the context [55]. In this study, complexity refers to the challenges of understanding, analyzing, and managing a problem or system arising from numerous variables, interdependencies, operational steps, and data heterogeneity. The research examines whether integrating MCDM and ML approaches can help manage this complexity.
Validation (C3): Validation is the process of assessing the extent to which a method produces accurate, reliable, and meaningful results in relation to its intended purpose [56]. In this study, the validation criterion is defined as the systematic process of assessing and comparing the performance and outcomes of MCDM and/or ML approaches.
Subjectivity (C4): In decision-making, subjectivity refers to the variability in judgments that arises from individual perspectives, interpretations, and biases, rather than from objective data or knowledge gaps [57]. In other words, subjectivity highlights the human element of decision-making. It reflects the personal influence of decision-makers’ backgrounds, expertise, and values on the evaluation process. This study examines whether integrating MCDM methods with ML algorithms reduces subjectivity in decision-making.
Knowledge Base (C5): Knowledge-based approaches typically rely on the insights, principles, and intuitions accumulated by experts or professionals over time to make decisions or solve problems [58]. In this study, this criterion indicates the extent to which integrating MCDM methods and ML algorithms requires expert knowledge.
Effectiveness (C6): Effectiveness denotes the capacity of a method to generate accurate predictions, classifications, or decisions from input data [59]. In this study, the effectiveness criterion refers to the ability of integrating MCDM and ML approaches to successfully and accurately perform the specified task.
Efficiency (C7): Efficiency refers to completing a task or process with minimal use of resources. In this study, the efficiency criterion evaluates how well the integration of MCDM and ML approaches performs regarding resource utilization, speed, and computational cost while achieving its objectives [3,60].
Applicability (C8): This criterion essentially captures whether a method is suitable, practical, and relevant for addressing a particular decision problem, given the characteristics of the data, criteria, decision context, and objectives. In this study, the applicability of an integration approach reflects its practical usability and the extent to which it can effectively address real-world decision problems [61].
Flexibility (C9): Flexibility refers to a method’s or model’s ability to adapt or adjust under various conditions, data types, or tasks. A flexible method can accommodate variations or different requirements and function effectively across diverse datasets or contexts [55]. Flexibility, in this study, refers to the capacity of a model or method to be modified or adapted when needed and to respond effectively to unexpected changes.
Consistency (C10): Consistency refers to a method’s ability to produce results of similar quality and performance under different conditions, datasets, or timeframes. A consistent method is essential for obtaining repeatable outcomes and making reliable decisions or predictions [62]. In this study, consistency reflects the extent to which the integration of MCDM methods and ML algorithms yields results of comparable quality and repeatability across varying conditions, datasets, or decision contexts.
Automation (C11): Automation refers to the full or partial automation of a method’s operation or functionality. It enables the method to perform specific tasks or processes without human intervention [63]. In this study, automation reflects the extent to which integrating MCDM methods and ML algorithms facilitates the automatic execution of decision-related tasks.
Sequential Processing (C12): Sequential Processing refers to the stepwise use of MCDM methods and ML algorithms, where one method’s output serves as the input or basis for the next in solving a decision problem. This allows for maintaining methodological coherence and enhancing traceability of the decision-making. In this study, the sequential processing criterion examines whether the methods/algorithms are applied in a defined sequence rather than simultaneously, allowing each to contribute at different stages of the decision-making process and thereby complement one another.
Dynamic Nature (C13): Being dynamic refers to a method’s ability to handle evolving conditions, data, or requirements over time. Dynamic models are designed to handle situations where key factors or parameters are not fixed and may change, allowing them to adapt and model these time-dependent variations effectively [64]. This study examines whether the integration of MCDM methods and ML algorithms successfully accounts for the dynamic nature of the decision problem being modeled.
Explainability (C14): Explainability reduces a model’s black-box nature by making its internal workings and decisions more transparent and understandable [65,66]. Enhancing a model’s explainability helps users understand the reasoning behind its predictions or decisions, thereby fostering greater trust in the model. In this study, the explainability criterion examines whether integrating MCDM methods and ML algorithms enhances the interpretability and transparency of the decision-making process.
A brief description and a guiding question for each criterion are provided in Appendix A to support consistent expert evaluation of ML–MCDM integration modes.

3.4. Evaluation

After defining the categories and evaluation criteria, the next step is to determine how well each category (i.e., integration mode) meets the specified criteria. This evaluation provides a structured understanding of the distinctive characteristics and comparative strengths of different integration modes. However, evaluating categories directly against criteria without an in-depth review of the source articles may lead to oversimplified or even misleading associations, as contextual nuances and methodological details would likely be overlooked. Therefore, a thorough examination of articles that apply different integration modes is necessary to provide essential evidence for evaluation. This approach establishes a clear link between the conceptual framework (categories and criteria) and empirical evidence (the reviewed articles), thereby enhancing methodological rigor and credibility. To ensure this, a three-stage procedure was employed, with its details elaborated in the following paragraphs:
  • Mapping: Drawing on expert input, 208 relevant articles were carefully reviewed and systematically mapped to the corresponding categories. This step ensured that every category was grounded in representative studies.
  • Scoring: The same set of articles was evaluated by experts according to the extent to which the approach proposed in each article satisfies each evaluation criterion. This stage translated qualitative insights from the literature into structured, comparable scores.
  • Synthesis: The outputs of the mapping and scoring stages were combined through matrix multiplication to determine how well each category satisfies the evaluation criteria, ensuring that the findings are firmly grounded in evidence derived from prior research.
Six qualified experts, including the authors, recognized for their substantial experience and knowledge in decision-making and ML, participated in the evaluation process. In decision-making studies, there is no strict, universally accepted number of experts; emphasis is placed on the quality of judgments rather than their quantity [67,68]. Accordingly, a sample of six experts is considered adequate and reasonable for this study. Table 7 presents detailed information about their fields of specialization and professional experience, demonstrating the diversity and relevance of expertise represented in the study. This diversity helped mitigate individual biases and subjectivity.
Given the large number of articles to be evaluated, the task was challenging and potentially tedious, increasing the risk of errors in assessment. To improve both judgment accuracy and efficiency, each expert was assigned a subset of articles, consistent with approaches in previous research [67,69]. All experts were assumed to have equal credibility, and consistency was ensured by assigning each article to three experts.
Accordingly, the reviewed articles were first classified into one of five distinct categories based on the integration mode employed, and experts were provided with a brief description (see Section 3.2) to ensure consistent understanding and application. Table 8 presents the results for a subset of the 208 articles, while the complete classification is provided in Appendix B.
Subsequently, experts examined the extent to which the identified criteria were addressed in the reviewed articles to highlight their contributions and outcomes.
To ensure consistency in expert evaluations, a standardized form was provided to all experts (see Appendix C). The form includes each article’s title, a brief description of the relevant criterion, the evaluation question, and brief guidance on which parts of the article experts could focus on during their review. Experts evaluated each article using the following scale in response to the evaluation questions, which are provided in Appendix A (Table A2):
0: Does not satisfy the criterion at all
1: Partially satisfies the criterion
2: Fully satisfies the criterion
Experts were also required to provide supporting evidence by citing keywords, sentences, or sections, accompanied by a brief descriptive note justifying the score. They were given three months to complete their evaluations. An example evaluation is provided in Appendix C (Table A5).
After completing the evaluation process, scores were compared across experts to assess consistency. The standard deviations among the experts’ scores were first examined. In cases of high deviations, experts whose ratings differed from the group were asked to review and, if necessary, revise their evaluations. After the revisions, the standard deviations ranged between 0 and 1.155. Among the 2912 comparisons, only 15 exhibited the maximum deviation value (1.155), indicating an acceptable level of consistency.
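The reported maximum deviation can be verified directly: on the 0–2 rating scale with three raters per article, the largest possible sample standard deviation arises from maximally split patterns such as (0, 2, 2), which yields exactly the 1.155 value cited above.

```python
import statistics

# Each article-criterion pair was scored by three experts on a 0-2 scale.
# The sample standard deviation flags disagreement; (0, 2, 2) gives the
# largest possible value on this scale.
patterns = [(2, 2, 2), (1, 2, 2), (0, 1, 2), (0, 2, 2)]
for p in patterns:
    print(p, round(statistics.stdev(p), 3))
```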
Next, interrater reliability was measured using the Intraclass Correlation Coefficient (ICC). ICC was chosen for its suitability in assessing agreement among more than two raters for both continuous and ordinal data, and for accounting for variance components [83]. A two-way mixed model with absolute agreement was employed for the calculations.
The average-measure ICC results, along with their 95% confidence intervals, were reported separately for the two expert groups and are presented in Appendix D. Except for the Effectiveness criterion, all average-measure ICC values were 0.815 or higher for Group 1 and 0.805 or higher for Group 2. According to Koo and Li [84], ICC values below 0.50, between 0.50 and 0.75, between 0.75 and 0.90, and above 0.90 indicate poor, moderate, good, and excellent reliability, respectively. Accordingly, the reliability levels observed in this study range from good to excellent. Overall, interrater reliability was high across all criteria, except Effectiveness, which showed lower reliability (0.629 in Group 1 and 0.651 in Group 2). This was due to the frequent use of the maximum score (“2”), which made the criterion sensitive to small differences.
The final evaluation for each criterion was determined by taking the median of the revised scores from all experts, resulting in a reliable and representative set of scores for the assessed criteria. Table 9 presents the results for a subset of the 208 articles.
Following the mapping and scoring stages, the results presented in Table 8 and Table 9 were synthesized to uncover the relationships between categories and criteria, providing a structured view of how different forms of integration perform against the established evaluation criteria. Since the list of articles is common to both tables (article × category and article × criteria), a matrix multiplication approach was employed to link categories with criteria (see Figure 3). Before this, the article–category table, which classifies articles into five integration modes, was transposed.
The resulting category–criteria table is presented in Table 10, offering a comprehensive overview of how each category satisfies the established evaluation criteria. The final column indicates the total number of articles classified under each category.
It is worth noting that categories with more articles may appear disproportionately dominant with respect to the criteria, which could lead to misleading interpretations. To ensure a more reliable comparison and improve the interpretability of the categories with respect to the criteria, normalization was applied to the results in Table 10. The normalization formula is defined as follows:
$$X' = \frac{X}{X_{\mathrm{category}}} \times 100,$$
where $X$ is the original value for a given category-criterion pair, $X_{\mathrm{category}}$ represents the total number of articles within that category, and $X'$ is the normalized value.
For example, for Novelty (C1) in the ML → MCDM category, the original value 27 was divided by the total number of articles in that category (refer to the last column of Table 10), yielding $27/64 \times 100 = 42.2$. This normalization procedure was applied to all values in the category-criteria table, with the results presented in Table 11. Each value in the table represents the relative contribution of a category with respect to a specific criterion, allowing for a more systematic comparison.
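The synthesis and normalization steps can be sketched together on toy data. For simplicity, the sketch uses binary indicators of whether each article satisfies each criterion (a simplification of the 0–2 median scores); the article counts and scores are illustrative only.

```python
import numpy as np

# Article x category indicator matrix: 6 articles mapped to 3 modes (toy data)
art_cat = np.array([[1, 0, 0],
                    [1, 0, 0],
                    [0, 1, 0],
                    [0, 1, 0],
                    [0, 0, 1],
                    [0, 0, 1]])

# Article x criteria indicators: 1 if the article satisfies the criterion
# (2 criteria shown; illustrative)
art_crit = np.array([[1, 1],
                     [1, 0],
                     [1, 1],
                     [0, 1],
                     [1, 1],
                     [1, 1]])

# Synthesis: transpose the article-category table, then multiply
cat_crit = art_cat.T @ art_crit            # category x criteria totals

# Normalization: divide each row by its category's article count, times 100
n_articles = art_cat.sum(axis=0)           # articles per category
cat_crit_norm = cat_crit / n_articles[:, None] * 100
print(cat_crit)
print(cat_crit_norm)
```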

3.5. Framework Development

3.5.1. Correspondence Analysis

To reveal underlying patterns and aid in interpreting the distinguishing characteristics of different integration modes, Correspondence Analysis (CA) was applied as a dimensionality reduction and visualization technique. CA is a multivariate statistical technique used to explore relationships in categorical variables. It reduces complex data into a low-dimensional map, where rows and columns of a contingency table are represented as profile points. The profiles are obtained by dividing each cell frequency by its corresponding marginal total, forming conditional frequency distributions [85]. The relative positions and distances of these profile points, computed using chi-square distances, reveal similarities, associations, and underlying patterns in the data [86]. More precisely, points that are close to each other on the map indicate similar profiles, reflecting a strong associative pattern between the corresponding row (or column) categories.
In the joint map, the position of every point in one set is determined by all the points in the other set, and vice versa. Therefore, distances are most meaningful when comparing points within the same set (rows with rows or columns with columns), while proximity between row and column points only provides an approximate indication of association [87]. Usually, however, points from different sets tend to be positioned close together when the observed frequency is higher than expected, and farther apart when it is lower than expected [88]. For example, if a specific integration mode lies close to an evaluation criterion on the map (compared with the other modes), it suggests that the articles within that category exhibit similar patterns of association with that criterion or strongly satisfy it. Conversely, categories that are far from the criterion suggest a weaker relationship with it.
Each axis represents a latent dimension extracted through singular value decomposition, capturing the maximum possible variation (inertia) in the associations among categories [89]. The contribution of each point to a given dimension (principal axis) indicates its relative importance in defining that dimension. A point contributes strongly to the inertia of a principal axis either because it lies far from the origin (even with a small mass) or because it has a large mass despite being relatively close to the center [90]. Here, the mass of a row (or column) is its total in the correspondence matrix (see Appendix E), representing its relative weight in the data. Conversely, points that contribute little to the inertia of an axis are nearly identical to the average profile and therefore appear near the center of the map. This two-dimensional representation allows a visual identification of which integration modes align with which evaluation criteria, while the axes and clusters further highlight the most significant underlying associations in the data. Further details about the computational procedure and interpretation are provided in Appendix E.
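The computational core of CA described above (profiles, chi-square metric, and singular value decomposition) can be sketched compactly with NumPy. The contingency table below is a toy example, not the study's data.

```python
import numpy as np

def correspondence_analysis(F):
    """CA of a contingency table F: row/column principal coordinates via SVD."""
    P = F / F.sum()                       # correspondence matrix
    r = P.sum(axis=1)                     # row masses
    c = P.sum(axis=0)                     # column masses
    # Standardized residuals under the chi-square metric
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U * sv) / np.sqrt(r)[:, None]     # row principal coordinates
    cols = (Vt.T * sv) / np.sqrt(c)[:, None]  # column principal coordinates
    inertia = sv ** 2                         # principal inertias (eigenvalues)
    return rows, cols, inertia

# Toy mode x criteria frequency table (3 modes, 4 criteria; illustrative)
F = np.array([[40., 10., 5., 20.],
              [12., 35., 8., 15.],
              [6., 9., 30., 11.]])
rows, cols, inertia = correspondence_analysis(F)
explained = inertia / inertia.sum() * 100     # percentage of inertia per axis
print(np.round(explained, 1))
```

With three row categories, at most two nontrivial dimensions exist, so the first two axes capture essentially all of the inertia in this toy table; in the study's 5 x 14 table, four nontrivial dimensions arise, of which the first two explain 85.89%.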
The rest of this section provides the results of the CA and their interpretation. First, a test of independence between the rows (integration modes) and the columns (criteria) is performed. The chi-square statistic ($\chi^{2} = 1284.369$; df = 52; $p < 0.001$) confirms that nonrandom associations exist among the categories—i.e., there are statistically significant relationships in the contingency structure.
Table 12 presents the eigenvalues, along with the explained and cumulative percentages of variance for each of the first four dimensions. The first two dimensions, F1 (62.31%) and F2 (23.58%), together account for 85.89% of the variance, capturing most of the variance in the data structure, whereas F3 and F4 explain less than 15%. A two-dimensional solution was therefore deemed adequate for further analysis, as the additional variance explained by higher dimensions does not justify the increased complexity of interpreting the results.
Table 13 summarizes how each criterion (column points) contributes to and is represented by the two principal dimensions of the CA, using symmetrical normalization. The contribution of a point to the inertia of a dimension indicates how much a criterion defines each axis, which is important for axis interpretation. Strong contributors to F1 include Validation (0.620), Sequential Processing (0.206), Dynamic Nature (0.040), and Novelty (0.042), while strong contributors to F2 include Sequential Processing (0.447), Novelty (0.267), Flexibility (0.133), and Dynamic Nature (0.088). Moreover, with respect to the principal coordinates, F1 mainly contrasts criteria, including Validation (high positive score), with Sequential Processing, Dynamic Nature, Novelty, and Automation (high negative scores), while F2 mainly contrasts Sequential Processing with Novelty, Dynamic Nature, and Flexibility. This suggests that F1 may represent a continuum from developmental aspects to evaluative characteristics, while F2 may represent a continuum from complementary aspects to evolutionary characteristics. The last column in Table 13 (total contribution of the dimensions to inertia of each point) indicates that Validation, Sequential Processing, Automation, Novelty, Flexibility, and Dynamic Nature are most influential and best represented in the two-dimensional solution.
The symmetric plot obtained from the CA is presented in Figure 4. In the plot, blue points represent modes (rows), whereas red points represent criteria (columns). According to the plot, F1 clearly separates ML → MCDM and MCDM → ML modes from the ML vs. MCDM and MCDM[ML vs. ML] modes, while F2 distinguishes MCDM + ML from the ML → MCDM and MCDM → ML modes.

3.5.2. Cluster Analysis

To complement and objectively confirm the spatial associations revealed by the CA, a subsequent clustering analysis was performed using the CA principal coordinates. While CA effectively maps the relative proximities between row and column profiles in a low-dimensional space, it does not formally define groups. Clustering is a necessary algorithmic step that assigns every row (or column) object to a specific non-overlapping cluster, providing a clearer picture of the underlying structures. Additionally, clustering validates the visual patterns observed on the CA map, ensuring that the proximities seen visually correspond to statistically meaningful groupings and thereby strengthening the robustness of the interpretation.
Following this rationale, the analysis proceeded through the following steps. First, the coordinates of the first two principal axes of the CA were used as input to the hierarchical clustering algorithm. Ward’s method with Euclidean distance was then applied to derive the cluster solution. The resulting dendrogram is shown in Figure 5. Based on the largest increases in linkage distance in the dendrogram, solutions with 2, 3, or 4 clusters appear most plausible.
Next, to objectively determine the number of clusters, both the Silhouette scores and variance decompositions were utilized (see Table 14). The Silhouette scores indicate moderate clustering quality, with 2 clusters performing slightly better overall. However, the 2-cluster solution shows a high within-class variance (69.51%). On the other hand, the 4-cluster solution has a slightly lower Silhouette score and better within-class variance. However, one of the clusters fails to include an integration mode, limiting its interpretability. The 3-cluster solution, with a reasonable Silhouette score (0.473) and between-class variance (65.74%), aligns with the number of visual groupings identified in the CA plot. Also, the heat map presented in Figure 6 supports the 3-cluster solution (explained below), as the resulting clusters correspond to coherent, interpretable groupings of integration modes and criteria, an advantage that outweighs the small drop in Silhouette score.
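The procedure of clustering CA coordinates with Ward's method and comparing candidate solutions via Silhouette scores can be sketched as follows. The coordinates below are invented stand-ins for the CA principal coordinates, chosen only to show three separable groups.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def silhouette(X, labels):
    """Mean silhouette coefficient computed from pairwise Euclidean distances."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    s = []
    for i, li in enumerate(labels):
        same = (labels == li) & (np.arange(len(X)) != i)
        if not same.any():
            s.append(0.0)
            continue
        a = D[i, same].mean()                      # mean intra-cluster distance
        b = min(D[i, labels == lj].mean()          # nearest other cluster
                for lj in set(labels) if lj != li)
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

# Hypothetical (F1, F2) coordinates for modes and criteria: three visible groups
pts = np.array([[ 1.2,  0.1], [ 1.0, -0.2], [ 0.9,  0.3],   # comparison group
                [-0.8,  0.9], [-0.7,  1.1],                  # sequential group
                [-0.9, -0.8], [-1.1, -0.6]])                 # hybrid group

Z = linkage(pts, method="ward")                  # Ward's method, Euclidean
sil, labels_by_k = {}, {}
for k in (2, 3, 4):
    labels = fcluster(Z, t=k, criterion="maxclust")
    labels_by_k[k], sil[k] = labels, silhouette(pts, labels)
    print(k, round(sil[k], 3))
```

On well-separated data like this, the 3-cluster cut recovers the three groups exactly; in the study, the choice additionally weighs within- and between-class variance, as discussed above.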
Once the clusters were defined, they were formally characterized in terms of the CA dimensions. To facilitate clear interpretation and reporting of each cluster’s distinct profile, a heat map was generated. The heat map visualizes how each integration mode and criterion is positioned along the two principal dimensions obtained from CA. The colors represent the relative magnitude and direction of each point’s F1 and F2 coordinates, with red shades indicating strong negative contributions and green shades indicating strong positive contributions (see Figure 6).
The first principal dimension (F1) reveals a pronounced separation among three distinct groups of integration modes and evaluation criteria. Located far on the positive side of F1 (bright green) is the cluster of performance comparison approaches, which includes the integration modes ML vs. MCDM and MCDM[ML vs. ML], together with the criteria Validation, Applicability, and Effectiveness. On the opposite end of F1 (strong negative values, red shades), the cluster of sequential approaches contains the modes ML → MCDM and MCDM → ML, which cluster tightly with the criterion Sequential Processing and weakly with Subjectivity. A third, moderately negative group, the hybrid cluster, contains the mode MCDM + ML and the criteria Novelty, Dynamic Nature, Automation, and Flexibility. While also on the negative side of F1, this cluster is clearly distinct from the sequential cluster, reflecting a different conceptual orientation centered on adaptability, innovation, and integrated automation.
The second principal dimension (F2) appears to capture a slightly different conceptual contrast. It distinguishes the sequential modes (ML → MCDM and MCDM → ML, shown in green) from the fully integrated mode MCDM + ML (shown in red). The first group aligns with Sequential Processing and, to a moderate degree, with Efficiency and Complexity. In contrast, MCDM + ML is more strongly linked to Novelty and Dynamic Nature, moderately to Flexibility, and weakly to Automation.
The combination of (i) distinct color gradients, (ii) clear separation between the three major branches in the dendrogram, and (iii) strong within-cluster similarity versus between-cluster contrast provides robust visual evidence in favor of a three-cluster structure. These three color patterns correspond exactly to the three clusters described above.

3.5.3. Evidence-Based Framework

The results of both CA and cluster analysis demonstrate the presence of three higher-level clusters, within which certain primary and secondary criteria emerge notably, while others remain consistently shared across all categories. In other words, while the five modes of integration differ in purpose, differences based on criteria are evident only among the three higher-level clusters formed by these modes:
  • Cluster 1 consisting of sequential approaches.
  • Cluster 2 consisting of hybrid approaches.
  • Cluster 3 consisting of performance comparison approaches.
Based on these findings, and in response to Research Question 3, an evidence-based framework has been established, as illustrated in Figure 7, where each cluster offers unique perspectives and insights into the associations within and between the categories. Sequential Processing, Validation, and Novelty serve as highly distinguishing criteria for the clusters, as explored in the CA. In contrast, Knowledge Base, Consistency, and Explainability are positioned equidistantly from all three clusters and lie close to the center, indicating that they are relatively neutral and shared across all integration modes.
The proposed framework has several important implications for both practitioners and researchers. It enables a comparative assessment of integration modes and serves as a guide for selecting the appropriate approach tailored to specific purposes and problem characteristics. The primary and secondary distinguishing criteria outlined in the framework help researchers precisely position their work within the field.
For the integration modes in Cluster 1 (i.e., ML ↔ MCDM), the primary distinguishing criterion is Sequential Processing, which emphasizes the complementary interplay between ML and MCDM, where each method builds on the other’s output in a stepwise, mutually reinforcing manner. Beyond this distinction, this cluster also aligns closely with the criteria Efficiency, Complexity, Subjectivity, and Automation, and shows a weaker but notable association with Explainability. Therefore, a decision-maker may favor sequential approaches for their ability to optimize resource utilization, manage problem complexity, reduce individual biases, provide improved decision support with minimal human intervention, and enhance model transparency. For example, MCDM methods complement ML models by improving efficiency through structured evaluations that narrow down alternatives and by reducing complexity through breaking problems into manageable criteria (refer to Equations (8) and (13)–(16)). Conversely, ML algorithms strengthen MCDM methods by handling large-scale and complex data, reducing cognitive and computational burden (refer to Equations (2)–(7)). They also contribute to efficiency by processing vast amounts of data at speed, and to automation by enabling decision workflows to run with minimal human intervention. Moreover, ML uncovers hidden patterns and relationships among criteria that may be difficult for experts to detect manually (refer to Equations (3)–(6)). In particular, advanced ML and deep learning techniques serve as highly accurate predictive models that rely on large datasets and often require minimal expert input. However, a significant limitation of these models is their inherent “black-box” nature, which makes them difficult to interpret or explain [91]. 
Here, MCDM plays a critical role by evaluating and prioritizing features, identifying which features most strongly influence model predictions, and thereby enhancing explainability (refer to Equations (8) and (13)–(15)).
Cluster 2 involves hybrid approaches (MCDM + ML) with Novelty as their distinguishing criterion, indicating that these approaches introduce more innovative methodologies compared to other integration modes. Hybrid approaches are also associated with Flexibility, Automation, and Dynamic Nature, emphasizing adaptability and responsiveness to varying conditions and decision-making contexts, enabling certain tasks to be carried out with minimal human intervention and effectively handling changes over time (refer to Equations (18)–(22)). These abilities make Cluster 2 advantageous in dynamic, rapidly changing environments such as intelligent transportation, energy management, and smart manufacturing, where static decision rules can quickly become outdated. Therefore, problems requiring adaptive decision-making, minimal human intervention, and/or time-dependent modeling may benefit more from Cluster 2. It is worth noting that Automation is not unique to this cluster; rather, it is a shared characteristic of both hybrid and sequential MCDM–ML integration modes.
The integration modes in Cluster 3, described as performance comparison approaches, primarily focus on evaluating and benchmarking ML algorithms and MCDM methods. The distinguishing criterion for this cluster is Validation, while Applicability and Effectiveness also play a reasonable role in defining the contributions of the approaches. Researchers aiming to systematically evaluate methods and ensure both accuracy and practical relevance in their results may find Cluster 3 particularly valuable, as it offers a structured framework for comparing alternative models and identifying the most suitable approach for a given decision problem (refer to Equations (26)–(30) and (32)–(35)).
By examining integration modes according to distinguishing criteria, the framework reveals underexplored areas, such as developing methods that simultaneously improve explainability and computational efficiency or designing hybrid approaches that address the black-box limitations of learning models (see Section 5 for details).
The framework also illustrates that no single integration mode is universally superior; each has strengths and limitations. For example, sequential approaches offer structured and complementary decision flows but may lack novelty. Hybrid approaches drive innovation and flexibility but can increase methodological complexity. Performance comparison approaches contribute to benchmarking and validation but do not directly enhance problem-solving capacity (see Section 4.7 for details).
The following sections first demonstrate the validation of the proposed framework through statistical analyses and then elaborate on the key findings and their implications.

3.6. Validation of the Framework

Two complementary approaches have been employed to validate the findings of the previous analyses and the proposed framework: cluster validity (internal validation) and out-of-sample validity (external validation).

3.6.1. Clustering Validity

To assess cluster validity, discriminant analysis was conducted as a post hoc validation of the 3-cluster framework derived from the correspondence and cluster analyses. Using the principal coordinates and the assigned cluster memberships, this analysis tested the statistical robustness and interpretive clarity of the clustering solution.
The discriminant analysis first tested whether the clusters were statistically distinct by examining differences in their mean coordinate profiles, providing evidence of structural separation. The results offer strong empirical validation for the three-cluster framework. Both principal dimensions, F1 and F2, significantly and strongly discriminate the groups (Wilks’ Lambda < 0.40, p < 0.001), indicating that each dimension contributes substantially to the separation of the clusters (see Table 15). The multivariate tests (Wilks’ Lambda = 0.123, χ2 = 32.48, p < 0.001) further confirm that the discriminant functions (i.e., linear combinations of the principal dimensions) jointly provide a significant and clear distinction among the three groups.
The analysis also evaluated how accurately the categories could be reclassified into their assigned clusters, thereby assessing the stability and robustness of the clustering solution. Table 16 indicates that 89.5% of the original cases and 84.2% of the cross-validated cases are correctly assigned to their respective clusters. Note that cross-validation was performed using the leave-one-out procedure, in which each case is classified by discriminant functions derived from all other cases except itself [86]. This approach provides a more realistic estimate of predictive performance by ensuring that no case is classified using information from its own data. Cluster 1 shows perfect stability, with 100% correct classification in both the original and cross-validated results, indicating a clearly defined and distinct group. Cluster 2 also demonstrates strong robustness, with complete accuracy in the original classification and 80% in cross-validation. Cluster 3 exhibits moderate stability, with 60% accuracy in both analyses, reflecting limited overlap with Cluster 1 while maintaining a recognizable, distinct profile. The posterior probabilities reported in Table 17 confirm that the misclassified Cluster-3 criteria (Effectiveness and Applicability) lie closer to the Cluster-1 centroid. A similar pattern is observed for the criterion Automation in the cross-validation, where it is reassigned to Cluster 1. These criteria, therefore, require careful interpretation, as their positions suggest partial affiliation with both clusters and indicate that they may be shared across two groups of integration modes. In contrast, no misclassifications occurred between Clusters 1 and 2 or between Clusters 2 and 3 in the original dataset. Overall, the accuracy levels are notably high, particularly given the small sample size, and demonstrate that the clusters are statistically stable and well-defined.
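The reclassification check can be outlined as below. The coordinates and cluster memberships are illustrative stand-ins (the study's actual inputs are not reproduced here); the sketch shows only the mechanics of resubstitution versus leave-one-out accuracy for a discriminant model.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(7)
# Stand-ins for the (F1, F2) coordinates of modes/criteria and their cluster labels.
X = np.vstack([
    rng.normal([1.0, 0.0], 0.2, size=(7, 2)),    # e.g. performance-comparison group
    rng.normal([-0.8, 0.6], 0.2, size=(6, 2)),   # e.g. sequential group
    rng.normal([-0.5, -0.7], 0.2, size=(6, 2)),  # e.g. hybrid group
])
y = np.array([0] * 7 + [1] * 6 + [2] * 6)

# Original (resubstitution) classification accuracy.
resub = LinearDiscriminantAnalysis().fit(X, y).score(X, y)
# Leave-one-out: each case is classified by functions fit on all other cases.
loo = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut()).mean()
print(resub, loo)
```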
Note that, although the main assumption in discriminant analysis—namely, the equality of covariance matrices across groups, tested by Box’s M—is violated (p < 0.001), this outcome is expected given the small and uneven group sizes. Notably, the discriminant solution remains robust and interpretable despite this limitation.
Taken together, these findings demonstrate that the three clusters are statistically distinct, reasonably stable across validation, and well supported by the underlying structure of the data, thereby providing solid validation for the proposed framework.

3.6.2. Out-of-Sample Validity

To assess the external validity of the proposed framework, a separate CA was conducted on 22 holdout articles intentionally reserved for validation during the literature review (see Section 3.1). These articles were randomly selected from studies published in the last three years to ensure contemporary, unbiased coverage. In the holdout set (see Appendix F), MCDM → ML, ML → MCDM, and MCDM[ML vs. ML] are each represented by six studies, followed by ML vs. MCDM with five studies and MCDM + ML with one study. It is worth noting that three articles in the holdout set were assigned to more than one integration mode, reflecting the methodological hybridity and increasing sophistication of recent studies in this domain. Thus, the holdout set covers all ML–MCDM integration modes and exhibits a balanced distribution across the major categories identified in the main dataset, thereby providing a suitable basis for evaluating the generalizability of the proposed framework beyond the main sample. The key question was whether these newly examined articles exhibit patterns of association between integration modes and evaluation criteria that are consistent with the correspondence–cluster structure established using the initial set of 208 articles. The articles were independently evaluated by two experts using the same procedures followed for the original set of articles, ensuring consistency in coding and assessment across both samples (see Appendix F). All average-measure ICC values were 0.788 or higher, indicating a good to excellent level of interrater reliability between the two experts.
Consistent with the main analysis, the test of independence between the rows (integration modes) and the columns (criteria) confirmed statistically significant associations in the holdout set (χ² = 1820.864; df = 52; p < 0.001). Moreover, the first two dimensions, F1 (51.43%) and F2 (39.30%), together account for 90.73% of the variance, capturing most of the structure in the data.
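A test of independence of this kind can be run as follows. The frequency table below is purely illustrative; the study's actual table relates the 5 integration modes to the evaluation criteria, consistent with df = 52.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative mode-by-criterion frequency table (rows: integration modes,
# columns: evaluation criteria); the real table is larger.
table = np.array([
    [30,  5,  4],
    [ 6, 28,  5],
    [ 4,  6, 27],
])

# chi2_contingency tests the null hypothesis that rows and columns are independent.
chi2, p, dof, expected = chi2_contingency(table)
print(round(chi2, 2), dof, p)
```

A small p-value, as here and in the holdout analysis, indicates that integration modes and criteria are associated, which is the precondition for a meaningful CA map.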
It is important to note that the CA solutions derived from the 208-article sample and the 22-article holdout sample may differ in their origin, scale, and axis orientation due to sampling variability, meaning that the coordinates of the two maps are not directly comparable. To compare whether the geometry of the two CA solutions is structurally similar, Procrustes analysis was performed. The CA configuration from the 22-article set was rotated, translated, and scaled to optimally align with the 208-article configuration, and the goodness-of-fit was evaluated using the Procrustes sum of squared residuals and the Procrustes correlation (i.e., the overall similarity between the two configurations), with statistical significance assessed via 999-permutation testing [92]. The results of the Procrustes analysis indicate a strong congruence between the two CA solutions. A high Procrustes correlation (R = 0.9207) combined with a low sum of squared residuals (SS_Res = 0.1524) suggests that the geometric structure of the holdout dataset closely matches that of the main dataset. Furthermore, a 999-permutation test—conducted by permuting the rows of the holdout configuration and recalculating the Procrustes correlation—yielded a permutation p-value of 0.001, confirming that the observed congruence is statistically significant at α = 0.05. Overall, these results provide preliminary evidence of the structural stability of the proposed evidence-based framework beyond the initial dataset. Figure 8 presents the aligned configurations from the Procrustes analysis along with the residual distances between corresponding points.
According to the plot, as observed in the primary analysis, F1 clearly separates ML → MCDM and MCDM → ML modes from the ML vs. MCDM and MCDM[ML vs. ML] modes, while F2 distinguishes MCDM + ML from both the ML → MCDM and MCDM → ML modes. Moreover, with respect to the principal coordinates, the pattern observed in the holdout set is consistent with the main analysis: F1 mainly contrasts criteria such as Validation (high positive score) with Sequential Processing, Dynamic Nature, Novelty, and Automation (high negative scores), while F2 mainly contrasts criteria such as Sequential Processing with Novelty, Dynamic Nature, and Flexibility.
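The Procrustes alignment and permutation test described above can be sketched as follows; the two CA configurations are simulated here (one is a rotated, lightly noised copy of the other) rather than taken from the study's data.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(1)
# Stand-in for the main-sample CA configuration (rows = modes/criteria).
main = rng.normal(size=(14, 2))
# Holdout configuration: same geometry, rotated and noised, mimicking congruence.
theta = 0.6
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
holdout = main @ rot.T + rng.normal(scale=0.05, size=(14, 2))

# procrustes() translates, scales, and rotates the second configuration onto the
# first and returns the sum of squared residuals (disparity) of the best fit.
_, _, ss_obs = procrustes(main, holdout)

# Permutation test: shuffle the row correspondence and recompute the fit.
n_perm = 199
count = sum(
    procrustes(main, holdout[rng.permutation(14)])[2] <= ss_obs
    for _ in range(n_perm)
)
p_value = (count + 1) / (n_perm + 1)
print(round(ss_obs, 4), p_value)
```

A small residual together with a small permutation p-value, as in the study, indicates that the two maps share the same geometry beyond chance alignment.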
To further test whether the clusters identified in the main dataset generalize, discriminant analysis was applied to the original coordinates of the holdout set. According to the results, both principal dimensions, F1 and F2, significantly and moderately discriminate the groups (F1: Wilks’ Lambda = 0.447, p = 0.002; F2: Wilks’ Lambda = 0.629, p = 0.025), indicating that each dimension contributes substantially to the separation of the clusters. The multivariate tests (Wilks’ Lambda = 0.280, χ2 = 19.745, p < 0.001) further confirm that the discriminant functions jointly provide a significant and clear distinction among the three groups. The analysis also evaluated how accurately the categories could be reclassified into their assigned clusters (see Section 3.5.2), showing that 89.5% of the original cases and 78.9% of the cross-validated cases were correctly assigned to their respective clusters. Details of the classification accuracies and corresponding posterior probabilities are presented in Appendix G.
All these results provide supportive and exploratory evidence that the structural patterns identified in the original analysis remain consistent when applied to a small, independent, and recent set of studies. While the limited sample size constrains statistical stability and generalization, the findings suggest that the proposed framework exhibits a consistent internal structure beyond the initial dataset.

4. Findings and Discussion

4.1. Most Utilized Integration Mode

Identifying the most frequently utilized category offers valuable insights into the prevailing trends in the integration of ML algorithms and MCDM methods. By examining the distribution of these categories, we can better understand which integration strategies are most widely adopted and the underlying reasons driving their prominence. The bar chart in Figure 9 visualizes the distribution of the five integration modes. The MCDM → ML category has the highest usage, with 64 instances, followed by ML → MCDM with 56 instances, reflecting the complementary nature of these approaches, where one method addresses the limitations of the other. Performance comparison approaches, including MCDM[ML vs. ML] (39 instances) and ML vs. MCDM (38 instances), also demonstrate significant application. Finally, the hybrid approach MCDM + ML is the least frequently used, with only 18 instances. This relatively low number of studies may stem from challenges in integrating methods into hybrid forms and the need for novelty in such approaches. However, the adoption of this approach is steadily increasing as it offers more comprehensive and adaptive decision-making capabilities.

4.2. Usage Trends of Integration Modes

Figure 10 illustrates the trends in the use of five ML-MCDM integration modes from 2015 to 2025, highlighting the relative frequency of each mode over time. In response to Research Question 4, the findings reveal a general upward trend across integration modes, particularly after 2020, reflecting their growing adoption in the literature. The MCDM → ML category (purple line) has shown the highest growth in recent years, surpassing all other modes. ML → MCDM (green line) also demonstrates a steady increase, reaching a high frequency by 2025. The two performance comparison categories (blue line: comparison of ML algorithm performance using MCDM methods; red line: comparison of the performances of ML algorithms and MCDM methods) follow similar upward trends but grow more slowly. The hybrid MCDM + ML category (cyan line) shows the slowest growth and remains the least utilized mode; although its adoption has increased slightly in recent years, it is still less common than the other categories. Given the methodological demands of this category, such an outcome is to be expected. Nevertheless, this area holds considerable potential and may develop further in the future.

4.3. Frequency of MCDM Methods Across Integration Modes

Figure 11 illustrates the distribution of different MCDM methods across ML-MCDM integration modes. Among them, AHP and TOPSIS are the most frequently used. AHP is particularly dominant in the MCDM → ML category, highlighting its importance in defining criteria weights and guiding feature selection to improve ML models [79,93,94,95,96]. TOPSIS is also widely applied across multiple categories, especially in the MCDM[ML vs. ML] and ML → MCDM categories, as it directly utilizes quantitative performance data and reduces reliance on expert judgment, making it well-suited for data-driven decision-making.
Other methods, such as VIKOR, PROMETHEE, and Fuzzy MCDM methods, show comparatively lower usage across categories. VIKOR and PROMETHEE are less frequently employed in ML–MCDM integration due to their complexity and parameter requirements. VIKOR’s compromise ranking [97] and PROMETHEE’s preference modeling [98] are less directly compatible with quantitative ML performance data. Fuzzy MCDM methods, however, occur in multiple categories despite lower overall numbers, suggesting their unique role in addressing uncertainty and subjectivity within ML-MCDM integration.
It is observed that, while some methods dominate particular integration modes, almost all considered MCDM methods appear at least once across the categories, indicating a diversity of approaches rather than strict boundaries in method selection.

4.4. Frequency of ML Algorithms Across Integration Modes

Figure 12 presents the distribution of the most used ML algorithms across ML-MCDM integrations. Classification algorithms such as Random Forest, Decision Tree, Boosting Algorithms, SVM, Naïve Bayes, Neural Networks, K-Nearest Neighbors (KNN), and Logistic Regression play a crucial role in ML-MCDM integration. There are several reasons why classification-based ML algorithms are widely integrated with MCDM methods. For example, Logistic Regression frequently appears in MCDM[ML vs. ML], indicating its widespread use in benchmarking due to its ability to provide explainable results. Similarly, some classification algorithms, such as decision trees or tree-based models, retain a level of interpretability that aligns with MCDM’s structured nature. This transparency allows stakeholders to understand the reasoning behind predictions and decisions, including how different features are weighted in the model [99]. Additionally, algorithms such as Random Forest and Boosting are robust to overfitting and capable of handling high-dimensional spaces, making them suitable for integration into MCDM methods that may involve complex and heterogeneous criteria [100]. Furthermore, classification algorithms are well-suited for decision-making in finance, healthcare, and marketing industries, where labeled data is prevalent. They help translate complex decision-making frameworks into actionable classifications that can directly support business and operational strategies.
Notably, Neural Networks, SVM, and KNN are comparable in the MCDM → ML category (purple bars). Before the application of these algorithms, MCDM was mainly used for weighting [42,79,93,96,101,102,103,104,105,106] and labeling [107,108] purposes. These three algorithms also demonstrate broad applicability across diverse datasets and problem domains, and they are frequently integrated with MCDM approaches due to their versatility in addressing complex decision-making tasks [76,109,110,111,112].
In contrast, Linear Regression has a relatively lower presence across categories, suggesting that it is less commonly integrated into ML-MCDM frameworks. Similarly, clustering algorithms are less frequently used than classification algorithms, but they remain relevant in decision-support applications, particularly when dealing with unlabeled data. Clustering algorithms (such as K-means, Hierarchical Clustering, and Fuzzy C-means) have a balanced presence across all categories, suggesting their role in analyzing patterns and grouping similar decision alternatives.

4.5. MCDM Methods and ML Algorithms Commonly Used Together

Figure 13 presents a Sankey diagram that shows the MCDM methods and ML algorithms jointly used in ML–MCDM applications. By examining the thickness of the connections in the diagram, we can identify the most frequently used combinations of MCDM methods and ML algorithms. TOPSIS is the most commonly used MCDM method, often integrated with Boosting algorithms (38 times), Decision Trees (31 times), Random Forest (26 times), Support Vector Machines (22 times), Artificial Neural Networks (18 times), and K-Nearest Neighbors (17 times). AHP also shows considerable versatility, frequently integrated with Random Forest (24 times), Boosting algorithms (23 times), Artificial Neural Networks (20 times), Logistic Regression (16 times), Support Vector Machine (16 times), K-Nearest Neighbors (13 times), and Decision Trees (11 times). Furthermore, Fuzzy MCDM methods exhibit strong linkages with Random Forest (14 times), clustering algorithms (13 times), Artificial Neural Networks (10 times), SVM (10 times), Boosting algorithms (10 times), Decision Tree (7 times), and KNN (7 times). In contrast, VIKOR and PROMETHEE appear less frequently and have limited integrations across algorithms, suggesting they play more specialized roles in ML–MCDM applications. Other MCDM methods, although less commonly used, provide valuable specialized techniques for integrating with various ML algorithms, enabling the solution of complex decision-making problems that require customized approaches.

4.6. Dominant Application Areas

Integrating MCDM methods and ML algorithms has gained significant attention across a wide range of application domains. The extent of adoption varies between fields, depending on factors such as data availability, the complexity of decision-making processes, and specific industry needs. Understanding why a particular domain favors ML-MCDM integration can help clarify its influence on decision-making practices and the value it brings in different contexts. Figure 14 illustrates the distribution of application areas where integrated approaches have been utilized.
Regarding Research Question 5, the findings indicate that integrated ML–MCDM approaches have been most extensively adopted in the fields of Environmental Science and Healthcare, which account for the largest share of applications, indicating their significance in decision-making research [13,42,43,47,50,70,113,114,115,116]. The rapid advancement of medical technologies and environmental monitoring has led to the generation of massive volumes of both structured and unstructured data. While ML algorithms excel at processing these datasets by uncovering hidden patterns and producing predictive assessments, MCDM methods play a complementary role by interpreting and prioritizing these insights for practical decision-making. Both domains also require collaboration among multiple experts, including doctors, researchers, policymakers, environmental scientists, and public health officials. Integrated approaches are particularly valuable in these contexts, as they combine data-driven insights with structured, criteria-based reasoning to support complex, multidisciplinary decision-making. With rising global health crises and environmental challenges, the integration of data-driven and expert-based decision-making has become not only valuable but essential for building a sustainable future.
Supply Chain, Finance, and Sustainability also exhibit notable implementation [26,117,118,119,120,121,122,123,124]. Conversely, fields such as Manufacturing, Cybersecurity, Renewable Energy, and Tourism have lower levels of representation. Civil Engineering, Biology, Facility Location, Safety, and other application areas appear at the lower end of the tail, suggesting either limited application or emerging research interest in these areas.
Regarding the application areas of the different integration modes, the ML → MCDM category is versatile and broadly applied across Healthcare [125], Environmental Science [13], Supply Chain Management [126], and Marketing [127]. MCDM → ML and MCDM + ML are concentrated in Healthcare [47,128] and Environmental Science [42,129], indicating that these fields particularly benefit from structured expert input alongside ML. The integration modes ML vs. MCDM and MCDM[ML vs. ML] have more focused applications, implying that comparative or hybrid evaluation approaches are used primarily when domain complexity or decision-criticality demands careful method selection. The observed patterns suggest that certain integration strategies are applied more frequently in specific domains, potentially reflecting their suitability for the challenges of those fields.

4.7. Limitations of Integration Modes

MCDM → ML: One significant concern with this integration mode is the potential for subjectivity. The use of expert-driven criterion weights in MCDM can introduce bias, which in turn may compromise the objectivity of the ML model. This issue is compounded by methodological sensitivity, as the output of an MCDM method is heavily dependent on the chosen method and its specific parameters (e.g., normalization method) [130]. This instability can result in volatile inputs, making the subsequent ML models highly sensitive to arbitrary methodological choices. Additionally, scalability limitations present a practical barrier, as traditional MCDM methods struggle to manage the high-dimensional, large-volume datasets typical of modern ML, creating a critical preprocessing bottleneck.
ML → MCDM: The lack of interpretability in ML models raises ethical and practical concerns, particularly in contexts where transparency is essential for accountability and trust. While MCDM provides a structured framework to address such challenges [131], its inclusion also introduces methodological complexity. Furthermore, the reliance on large, well-labeled datasets in ML can introduce challenges when integrating it with MCDM methods, which themselves demand detailed criteria definitions and evaluations. Another limitation is the propagation of errors, uncertainties, or biases that are present in the output of ML models. These issues tend to be carried over into the subsequent MCDM analysis. Although the final ranking or prioritization produced by the MCDM process may appear precise and objective, it may be based on uncertain foundations inherited from the ML stage.
MCDM + ML: One of the foremost issues with the hybrid approach is the lack of standardized, proven methodologies for achieving genuine integration between the two methods. As a result, most implementations remain ad hoc, tailored to specific problems or datasets, and lack generalizability across domains. A further point to consider is that a tightly coupled hybrid model is inherently more complex than either of its components alone. This complexity introduces significant challenges in scalability and computational cost, particularly in real-time systems. Moreover, this integration faces a fundamental conceptual mismatch between the two approaches: ML is inherently data-driven, whereas MCDM generally focuses on decision-makers’ values [6]. Establishing a coherent connection requires careful theoretical grounding and methodological innovation.
MCDM[ML vs. ML]: A primary limitation is that the choice and structural characteristics of the MCDM method used to compare the performances of ML algorithms can strongly influence the final rankings [132]. Another challenge is that the final ranking is sensitive to the weights assigned to conflicting performance criteria. MCDM methods require trade-offs, for example, determining how much interpretability one is willing to sacrifice for a marginal improvement in accuracy. This sensitivity raises concerns about the robustness of the evaluation process, as even small changes in weighting can substantially alter the resulting algorithm rankings.
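This weight sensitivity is easy to demonstrate with a minimal TOPSIS sketch (benefit criteria only, vector normalization; the model scores are hypothetical): a modest shift in the weights of two conflicting criteria flips the top-ranked algorithm.

```python
import numpy as np

def topsis(X, w):
    """Minimal TOPSIS for benefit criteria: returns closeness coefficients."""
    R = X / np.linalg.norm(X, axis=0)           # vector normalization per criterion
    V = R * w                                   # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)  # ideal and anti-ideal points
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)

# Hypothetical performance of three ML models on (accuracy, interpretability).
X = np.array([[0.9, 0.1],    # model A: accurate but opaque
              [0.1, 0.9],    # model B: interpretable but weaker
              [0.5, 0.5]])   # model C: balanced

best_1 = np.argmax(topsis(X, np.array([0.55, 0.45])))  # slight accuracy emphasis
best_2 = np.argmax(topsis(X, np.array([0.45, 0.55])))  # slight interpretability emphasis
print(best_1, best_2)
```

With the first weight vector model A wins; with the second, model B does, even though the weights moved by only 0.10. This is the robustness concern described above.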
ML vs. MCDM: Finally, the comparative performance of ML and MCDM is inherently context-dependent, indicating that neither approach can be expected to perform effectively across all study areas, contexts, or dataset environments [77]. Many MCDM methods are designed for small-scale decision problems, making them computationally inefficient when applied to larger problems or high-volume datasets. In contrast, ML algorithms are generally better suited to handle such challenges. Additionally, MCDM methods perform effectively in static decision environments, while ML approaches can adapt to evolving conditions over time through model updates. This mismatch reduces the ability of this integration to support real-time or continuously updating decision environments and limits comparability across studies [20].

5. Future Research Suggestions

The integration of MCDM methods and ML algorithms has shown considerable promise, yet several methodological and practical challenges remain unresolved. As applications expand toward larger datasets, real-time decision-making, and complex dynamic environments, current approaches face limitations in scalability, interpretability, generalizability, adaptability, and in handling uncertainty and complexity. Addressing these challenges requires advancing both methodological innovations and domain-specific applications. In response to Research Question 6, this study has identified five research directions for future research, building on the findings from the literature review and expert evaluations.
Research Focus 1. Methodological Advancements in MCDM and ML Integration
This research focus addresses the fundamental computational and structural limitations of existing ML–MCDM integrations. The objective is to develop models that are scalable, less dependent on subjective manual input, and capable of handling complex internal relationships.
1.1. Improving Scalability and Efficiency: MCDM methods often face challenges with large and complex datasets due to computational demands and the manual effort required. Future research should therefore emphasize preprocessing techniques, such as clustering and dimensionality reduction, that preserve essential information while reducing data volume. Utilizing ML-based optimization approaches (e.g., genetic algorithms, particle swarm optimization) can further enhance accuracy and resource utilization throughout the criteria-weighting and alternative-selection stages. As real-time and large-scale applications continue to grow, developing integrated approaches that balance accuracy, efficiency, and resource use will become increasingly important. Building on this line, future research could also explore MCDM-based evaluations of how advancements in ML—such as architectural innovations, parameter optimization, and emerging approaches like deep learning, preference learning, and transfer learning [4]—influence both the effectiveness and the computational efficiency of decision-making methods.
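As a concrete illustration of such preprocessing, the following minimal sketch compresses a large criteria matrix with PCA (computed via SVD) before any MCDM step is applied. The data, dimensions, and the choice of five components are synthetic and purely illustrative.

```python
import numpy as np

def pca_reduce(X, k):
    """Project an (alternatives x criteria) matrix onto its first k principal
    components, shrinking the criteria space handed to the MCDM stage while
    retaining most of the variance in the original data."""
    Xc = X - X.mean(axis=0)                      # center each criterion
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    return Xc @ Vt[:k].T, explained              # scores and variance retained

rng = np.random.default_rng(0)
X = rng.random((500, 40))                        # 500 alternatives, 40 raw criteria
Z, var_kept = pca_reduce(X, k=5)                 # 5 composite criteria for MCDM
```

A practical design question left open here is interpretability: composite criteria are harder to explain to decision-makers than the original ones, which is exactly the trade-off between efficiency and transparency discussed throughout this section.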
1.2. Modeling Complex Interdependencies: A further challenge lies in the interdependence among criteria or alternatives. While methods such as ANP and DEMATEL account for these relationships, they remain highly expert-dependent and time-consuming. Advanced approaches such as neural networks, structural equation modeling (SEM), or learning fuzzy cognitive maps may offer more efficient ways to address interdependencies and improve MCDM [15].
1.3. Automating Processes and Reducing Bias: Reducing reliance on expert judgment is another promising direction. Combining ML algorithms with objective MCDM weighting and aggregation methods (e.g., Entropy, CRITIC, Cumulative Belief Degree [133]) can substantially automate decision-making tasks, reduce biases, and increase responsiveness. Such systems can operate autonomously, making data-driven decisions with minimal human intervention and enhancing the objectivity of the process.
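For instance, the entropy weight method mentioned above can be sketched in a few lines of NumPy; the decision matrix below is illustrative and not drawn from any cited study.

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights via Shannon entropy: criteria whose values
    vary more across alternatives receive larger weights."""
    m, _ = X.shape
    P = X / X.sum(axis=0)                        # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)           # entropy per criterion, in [0, 1]
    d = 1.0 - e                                  # degree of divergence
    return d / d.sum()                           # normalized weights

X = np.array([[250., 16., 12.],                  # alternatives x criteria
              [200., 16.,  8.],
              [300., 32., 16.],
              [275., 32.,  8.]])
w = entropy_weights(X)
scores = (X / X.max(axis=0)) @ w                 # simple weighted-sum ranking
```

Because the weights are derived entirely from the dispersion of the data, no expert pairwise comparisons are needed, which is what enables the autonomous operation described above.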
Research Focus 2. Adaptability to Real-World Data and Dynamics
This research focus concentrates on enhancing the robustness of ML–MCDM models, enabling them to function effectively in environments characterized by changing conditions, uncertainty, and imbalanced data.
2.1. Adapting to Dynamic Environments: Rapidly changing environments, such as financial markets or social media, require models that evolve continuously with new data. Enhanced adaptability would allow MCDM methods to deliver more accurate, timely, and context-aware outcomes. Utilizing online learning, incremental learning, reinforcement learning, and adaptive neuro-fuzzy systems can enable MCDM methods to dynamically update their results in response to changing data over time [120,134,135]. Furthermore, combining non-linear ML models with MCDM methods [42,97] could better capture complex, time-varying behaviors that linear approaches fail to represent.
2.2. Managing Uncertainty: Many practical decision-making situations involve uncertainty due to incomplete or unreliable data. Advancing this area requires developing ML–MCDM frameworks that explicitly capture and incorporate uncertainty into decision models. Probabilistic ML models (e.g., Bayesian approaches) and ensemble learning can be adapted to MCDM methods to provide more robust evaluations under uncertainty. Such integrations can enhance both methodological reliability and confidence in decision outcomes by quantifying the credibility of alternatives.
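One minimal way such an integration might look is to bootstrap a simple model so that the spread of its predictions enters the evaluation as an uncertainty criterion alongside the mean score. The least-squares model, the synthetic data, and the 0.5 risk-aversion factor below are all illustrative assumptions, not a prescribed design.

```python
import numpy as np

def bootstrap_scores(X, y, X_alt, fit, predict, B=50, seed=0):
    """Bootstrap a model to obtain a mean score and an uncertainty estimate
    (standard deviation) for each candidate alternative in X_alt."""
    rng = np.random.default_rng(seed)
    preds = np.empty((B, len(X_alt)))
    for b in range(B):
        idx = rng.integers(0, len(X), size=len(X))   # resample training data
        model = fit(X[idx], y[idx])
        preds[b] = predict(model, X_alt)
    return preds.mean(axis=0), preds.std(axis=0)

# Illustrative model: ordinary least squares via numpy's lstsq.
fit = lambda A, t: np.linalg.lstsq(A, t, rcond=None)[0]
predict = lambda coef, A: A @ coef

rng = np.random.default_rng(1)
X = rng.random((200, 3))
y = X @ np.array([1.0, 2.0, 0.5]) + 0.1 * rng.standard_normal(200)
X_alt = rng.random((10, 3))                          # 10 candidate alternatives
mu, sigma = bootstrap_scores(X, y, X_alt, fit, predict)
risk_adjusted = mu - 0.5 * sigma                     # penalize uncertain alternatives
```

The pair (mu, sigma) can then feed an MCDM step as a benefit and a cost criterion, respectively, so that the credibility of each alternative is made explicit rather than hidden in a point estimate.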
2.3. Addressing Data-Specific Challenges: Another practical barrier in ML–MCDM research is the demand for large and carefully labeled datasets, particularly for deep learning. MCDM methods (e.g., sorting methods) can help reduce labeling costs by identifying the most informative samples and improving data efficiency.
Imbalanced datasets pose another challenge, as ML models often perform well on the majority classes but poorly on minority classes [124]. Various approaches, such as resampling techniques, algorithm-level methods, and anomaly detection, have addressed this issue in the literature.
Additionally, the careful selection of evaluation metrics is essential in imbalanced ML settings, as conventional measures often fail to capture performance across minority classes. Future research could investigate how MCDM methods can be applied to evaluate multiple performance metrics, enabling fairer model assessment. Furthermore, MCDM can help prioritize features that are particularly relevant to minority classes, thereby improving model robustness in imbalanced settings.
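A multi-metric assessment of this kind could, for example, be sketched with TOPSIS; the candidate models, metric values, and weights below are fabricated for illustration only.

```python
import numpy as np

def topsis(X, w, benefit):
    """Rank alternatives (rows) with TOPSIS: relative closeness to the ideal
    solution over possibly conflicting criteria (columns)."""
    R = X / np.linalg.norm(X, axis=0)            # vector-normalize each criterion
    V = R * w                                    # apply criteria weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)               # higher closeness = better

# Illustrative metrics per model: [minority recall, F1, AUC, training time (cost)]
X = np.array([[0.62, 0.71, 0.88, 120.],
              [0.80, 0.68, 0.85,  45.],
              [0.55, 0.75, 0.90, 300.]])
w = np.array([0.4, 0.3, 0.2, 0.1])               # emphasis on minority-class recall
closeness = topsis(X, w, benefit=np.array([True, True, True, False]))
best = int(np.argmax(closeness))
```

Weighting minority-class metrics more heavily, as in this sketch, directly operationalizes the fairer model assessment argued for above.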
Research Focus 3. Enhancing Model Interpretability and Explainability
Beyond adaptability, real-world deployment also demands human interpretability and transparency.
3.1. Enhancing Explainability: Moving ML–MCDM research into real-world practice requires integrated models that are transparent, interpretable, and aligned with human decision processes. Transparent models are particularly critical in risk-averse domains such as medicine, where the reliance on black-box ML models poses significant barriers to adoption [4]. MCDM methods, particularly those that prioritize features, can enhance model explainability by identifying the most influential features in ML predictions.
Another promising direction for future study is to explore the integration of MCDM methods with explainability techniques, such as Shapley Additive Explanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), and Anchors [136]. By aggregating and contextualizing the outputs of these techniques, MCDM methods can provide users with holistic and transparent insights into model behavior.
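A minimal sketch of such an aggregation could use a simple weighted-sum MCDM step over per-technique feature-importance vectors. The attribution matrices below are stand-ins, not actual SHAP, LIME, or Anchors outputs, and the trust weights are illustrative.

```python
import numpy as np

def aggregate_attributions(attributions, weights):
    """Combine per-technique feature-importance vectors into one ranking.
    Each row of `attributions` holds one explainability technique's absolute
    importance scores for the same set of features."""
    A = np.abs(np.asarray(attributions, dtype=float))
    A = A / A.sum(axis=1, keepdims=True)         # normalize each technique's scores
    combined = weights @ A                       # weighted-sum aggregation
    order = np.argsort(combined)[::-1]           # features, most important first
    return combined, order

# Stand-in scores for 5 features from three techniques; values are fabricated.
attributions = [[0.40, 0.25, 0.15, 0.12, 0.08],
                [0.35, 0.30, 0.10, 0.15, 0.10],
                [0.50, 0.20, 0.10, 0.10, 0.10]]
weights = np.array([0.5, 0.3, 0.2])              # trust placed in each technique
combined, order = aggregate_attributions(attributions, weights)
```

The resulting consensus ranking gives users a single, transparent view of which features drive the model, even when individual explainability techniques disagree.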
Beyond interpretation, MCDM can also be employed to simulate alternative decision scenarios, allowing users to examine how variations in input features or model parameters influence the resulting outcomes. Such simulation-based transparency can strengthen user understanding, increase accountability, and encourage ethical adoption of ML–MCDM systems.
Research Focus 4. Expanding Applications Across Sectors and Datasets
Building upon methodological advances and improvements in interpretability, this research focus concentrates on expanding the application of MCDM–ML integrations across diverse real-world sectors and datasets.
4.1. Generalizing MCDM–ML Integrations Across Diverse Sectors: A key research priority is examining the generalizability of MCDM–ML integrations across different sectors, datasets, and conditions [135]. Studies should test these integrations in diverse contexts such as finance, urban planning, and social media, as well as in high-impact areas such as emergency management and healthcare, to validate their robustness and adaptability [4,118]. Sector-specific evaluations will not only confirm applicability and adaptability but also encourage wider adoption.
4.2. Improving Performance Metrics for Real-World Applications: In practice, performance metrics are commonly used as criteria when comparing ML algorithms through MCDM methods. However, relying exclusively on these metrics can be inadequate in real-world contexts, where broader factors such as scalability, interpretability, and complexity also play critical roles [131]. In sectors like healthcare, for instance, the interpretability of an ML model can be just as important as its accuracy, as it provides insights into the decision-making process that medical professionals can trust and act on. Because such criteria are often challenging to define and measure, fuzzy sets can be employed to represent them as linguistic labels, allowing for a more flexible evaluation process that accounts for subjectivity in assessments [65,137,138].
Research Focus 5. Leveraging Emerging Technologies
The final research focus highlights the potential of next-generation machine learning technologies to advance ML–MCDM integration.
5.1. Exploring Novel ML Technologies: Novel ML technologies such as generative pre-trained transformers (GPTs), federated learning, deep reinforcement learning, and metamodeling remain largely unexplored in the ML–MCDM literature. These methods could enhance decision support by enabling distributed learning, improving sequential decision-making, or capturing highly non-linear relationships in complex datasets. Comparative studies are needed to establish guidelines for selecting algorithms most relevant to specific problem domains.
Together, these five research focuses provide a comprehensive roadmap for advancing ML–MCDM integration, from foundational methodological improvements to practical applications and the adoption of emerging technologies. By addressing scalability, adaptability, interpretability, real-world applicability, and frontier technologies, ML–MCDM can evolve into a powerful tool for complex decision-making. Table 18 links the suggested future research directions for ML–MCDM integration with the relevant modes.

6. Conclusions and Limitations

This study investigated the integration of ML algorithms and MCDM methods through a comprehensive approach that combines insights from a literature review, expert judgment, and statistical analysis to ensure both theoretical rigor and practical relevance. It establishes a new systematic categorization of integration modes based on their methodological characteristics and application contexts. It introduces a set of previously undefined criteria for standardized and reliable evaluation, enabling comparability and rigor. By employing CA and cluster analysis, the study maps the relationships between integration modes and evaluation criteria, revealing patterns, gaps, and distinguishing criteria among the integration modes, and demonstrates how the complementary strengths of ML and MCDM enhance decision-making while overcoming individual limitations. Building on these insights, a new evidence-based framework is proposed and validated to guide researchers and practitioners in selecting the most appropriate integration modes and aligning methodological choices with problem-specific requirements to address complex decision problems effectively.
Framed by the six research questions guiding the study, the key findings and their implications can be summarized as follows:
  • First, the initial categorization from the literature review identified five main integration modes based on the purpose of integration; however, expert evaluations and subsequent analyses indicated that these could be consolidated into three broader clusters: (i) sequential approaches, (ii) hybrid approaches, and (iii) performance comparison approaches.
  • Second, the results showed that specific criteria, such as Sequential Processing, Validation, and Novelty, act as key differentiators. In contrast, others, like Knowledge Base, Consistency, and Explainability, remain relatively neutral and are shared across all categories.
  • Third, among the integration modes, MCDM → ML and ML → MCDM are the most widely used. The findings suggest that these categories are particularly suitable when the objective is to optimize resource utilization and computational efficiency, address complex decision-making scenarios involving multiple variables, reduce individual biases, and provide transparent, interpretable outcomes to support informed decisions. The study also revealed that most research has emphasized combining existing methods rather than introducing methodological innovations, leaving room for more original forms of integration. Although less common, hybrid approaches (MCDM + ML) are well-positioned to address this gap, offering strong potential for flexibility and dynamic modeling while reflecting the broader shift toward reducing human intervention in decision-making. When the goal is to validate model outputs and assess their practical applicability in real-world contexts, the findings showed that performance comparison approaches are the most appropriate. These approaches emphasize the critical role of comprehensive benchmarking in ML–MCDM integrations, providing a structured means to evaluate competing methods and generate reliable evidence to guide method selection.
  • The fourth question regarding current trends in the joint use of MCDM and ML methods revealed that all categories exhibit a general upward trend, particularly after 2020, indicating a growing adoption in the literature. From a methodological perspective, the findings show that MCDM methods such as AHP and TOPSIS have emerged as the most dominant, often paired with classification-based ML algorithms such as Random Forest, SVM, and KNN. These pairings underscore a preference for interpretable, structured decision-making tools. While clustering and fuzzy methods were less frequent, their consistent presence across categories suggests an increasing focus on managing uncertainty and subjective judgments, particularly in complex decision environments.
  • Fifth, the findings provide insights into the adoption patterns and opportunities across various application domains. In particular, environmental science and healthcare are the leading fields adopting ML–MCDM integrations, driven by their high data complexity and multi-criteria multi-stakeholder decision-making requirements. These domains particularly benefit from the synergy between data-driven insights provided by ML and expert-based prioritization enabled by MCDM. Other sectors, including finance, supply chain, and sustainability, also demonstrate considerable adoption, whereas areas such as cybersecurity, civil engineering, and tourism remain underrepresented, highlighting promising directions for future application and growth.
  • Finally, this study has identified several promising directions for future research. The directions highlight key avenues for future research that can strengthen the effectiveness, robustness, and applicability of ML–MCDM integrations.
While this study offers valuable theoretical and practical contributions, it also has several limitations that should be acknowledged:
  • The categorization and evaluation rely on the currently available literature, which may evolve as new integration approaches and application areas emerge. To address this limitation, future studies should update the classification with newly published research, extend the scope to additional domains, and incorporate longitudinal analyses to capture shifts in ML–MCDM integration.
  • The definition of evaluation criteria and the assessment of articles are based on expert judgment. Although this enhances domain relevance, it may also introduce subjectivity and potential bias. Implementing structured consensus-building techniques or semi-automated evaluation tools using natural language processing (NLP) could help reduce subjectivity and improve the consistency of article assessments.
  • Additionally, the evaluation was based on a limited number of experts, which may introduce bias due to the subjective judgment of a small group. Future studies could involve a larger and more diverse group of experts to improve reliability and reduce potential biases.
  • The criterion of uncertainty was not directly considered in this study, although it is indirectly captured through factors such as subjectivity, complexity, and dynamic nature. Incorporating uncertainty-related evaluation criteria in future work could further enhance the robustness of the proposed framework and extend its applicability across diverse contexts.
  • In terms of the criteria considered in this study, certain integration modes (e.g., MCDM → ML and ML → MCDM) were found to exhibit similar characteristics, making them difficult to distinguish clearly. Future research could incorporate additional or more fine-grained criteria to enable a more precise distinction among these modes and provide deeper insights into their unique contributions and applications.
  • Another limitation of this study is that the relationships among the evaluation criteria were partially examined. Future research could address this gap by incorporating methods that explicitly model interdependencies between criteria, thereby providing a more comprehensive evaluation framework.
  • Although the literature review was conducted using the Scopus and Web of Science databases, these sources may not cover all relevant journals. As a result, some articles, particularly from niche or emerging research fields, may have been overlooked, leading to an incomplete view of the literature. A more comprehensive review could be achieved by also including additional databases, such as IEEE Xplore and Google Scholar.
  • The majority of the findings were derived from English-language journals, resulting in the exclusion of journals published in other languages. While this implies that the review is not exhaustive, the authors believe it offers a comprehensive overview and covers most of the relevant studies published in scholarly journals.
  • Finally, applying and validating the framework across diverse domains would strengthen its generalizability and practical utility. Since the interpretation and relevance of certain criteria may vary by context, adapting the criteria definitions or developing domain-specific sub-frameworks could further enhance the accuracy and applicability of the evaluation process.
In conclusion, this study lays the groundwork for a systematic and comprehensive understanding of ML–MCDM integration. By providing a structured categorization and evaluation framework, it offers valuable guidance for researchers and practitioners seeking to improve decision-making in complex environments.

Supplementary Materials

The following supporting information can be downloaded at: https://drive.google.com/file/d/1ObuH4Vc6IazfIkcnsstbtGkKbUBp3zAP/view?usp=share_link.

Author Contributions

Conceptualization, H.K. and U.A.; methodology, H.K. and U.A.; validation, H.K. and U.A.; formal analysis, H.K. and U.A.; investigation, H.K.; writing—original draft preparation, H.K.; writing—review and editing, U.A.; visualization, H.K.; project administration, U.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Grouping of related criteria under representative headings.

Criteria | Closely Related Criteria
Novelty [53,54] | Creativity [52,53]; Innovative [130]; Authenticity [53,131]; Specificity [131]; Originality [132]
Complexity [55,130,133,134] | Simplicity [53]
Validation [56] | —
Subjectivity [57] | Reasonable [135]; Rationality [136]
Knowledge Base [58,137] | —
Effectiveness [59,138] | Accuracy [130,139,140,141]
Efficiency [3,60,133] | Computational time [139]
Applicability [61,142] | Generalizability [143,144]
Flexibility [55] | Adaptability [145,146,147]; Scalability [36,139,148]; Responsiveness [149,150,151]; Adjustability [152]; Expandability [134]; Compatibility [153]
Consistency [62,154] | Stability [133]; Reliability [141,143]; Maintainability [155]; Robustness [130,156]
Automation [63,145,157] | Automated [139,141,148]; Autonomous [137]
Sequential Processing | —
Dynamic Nature [64] | —
Explainability [65,66] | Interpretability [66,133,139]; Understandability [133]; Comprehensibility [158]
Table A2. Criteria, descriptions, and questions for expert evaluation.

Criterion | Description | Question
Novelty | The integration mode introduces a new or original idea, method, or feature beyond what the MCDM method, the ML algorithm, or even both can achieve individually. | To what extent does the integration mode proposed in the article satisfy the criterion of Novelty as described?
Complexity | The integration mode improves the ability to understand, analyze, and manage complex decision problems—characterized by numerous variables, interdependencies, operational steps, and data heterogeneity—beyond what the MCDM method, the ML algorithm, or even both can achieve individually. | To what extent does the integration mode proposed in the article satisfy the criterion of Complexity as described?
Validation | The integration mode proposes a systematic approach for assessing and comparing performance and outcomes—such as the accuracy, reliability, and meaningfulness of results—beyond what the MCDM method, the ML algorithm, or even both can achieve individually. | To what extent does the integration mode proposed in the article satisfy the criterion of Validation as described?
Subjectivity | The integration mode helps reduce the influence of individual perspectives, interpretations, and biases in decision-making—enhancing objectivity beyond what the MCDM method, ML algorithm, or even both can achieve individually. | To what extent does the integration mode proposed in the article satisfy the criterion of Subjectivity as described?
Knowledge Base | The integration mode enhances the contribution of expert knowledge, experience, and domain insights to the decision-making process beyond what the MCDM method, the ML algorithm, or even both can achieve individually. | To what extent does the integration mode proposed in the article satisfy the criterion of Knowledge Base as described?
Effectiveness | The integration mode enhances the ability to generate accurate, reliable, and successful outcomes—such as predictions, classifications, or decisions—beyond what the MCDM method, ML algorithm, or even both can achieve individually. | To what extent does the integration mode proposed in the article satisfy the criterion of Effectiveness as described?
Efficiency | The integration mode enhances resource utilization and computational speed while reducing computational cost beyond what the MCDM method, ML algorithm, or even both can achieve individually. | To what extent does the integration mode proposed in the article satisfy the criterion of Efficiency as described?
Applicability | The integration mode introduces a practical and context-sensitive approach that appropriately addresses real-world decision problems beyond what the MCDM method, the ML algorithm, or even both can achieve individually. | To what extent does the integration mode proposed in the article satisfy the criterion of Applicability as described?
Flexibility | The integration mode introduces an approach that can adapt and scale to diverse conditions, data types, and decision tasks, effectively managing large-scale challenges, variations, and unexpected changes beyond what the MCDM method, the ML algorithm, or even both individually can achieve. | To what extent does the integration mode proposed in the article satisfy the criterion of Flexibility as described?
Consistency | The integration mode introduces a robust approach that produces results that are repeatable and of comparable quality across varying conditions, datasets, and decision contexts, beyond what the MCDM method, the ML algorithm, or even both individually can achieve. | To what extent does the integration mode proposed in the article satisfy the criterion of Consistency as described?
Automation | The integration mode introduces an approach that allows decision-related tasks to be executed partially or fully without human intervention—beyond what the MCDM method, the ML algorithm, or even both individually can achieve. | To what extent does the integration mode proposed in the article satisfy the criterion of Automation as described?
Sequential Processing | The integration mode introduces a stepwise approach in which the output of one method serves as the input for the next, maintaining methodological coherence, enhancing traceability, and addressing the limitations of individual methods that go beyond what the MCDM method, the ML algorithm, or even both individually can achieve. | To what extent does the integration mode proposed in the article satisfy the criterion of Sequential Processing as described?
Dynamic Nature | The integration mode introduces an approach capable of handling evolving conditions, data, and requirements over time—allowing adaptation to time-dependent variations and changes beyond what the MCDM method, the ML algorithm, or even both individually can achieve. | To what extent does the integration mode proposed in the article satisfy the criterion of Dynamic Nature as described?
Explainability | The integration mode enhances the interpretability and transparency of the decision-making process—making the model’s reasoning behind its predictions or decisions more understandable beyond what the MCDM method, the ML algorithm, or even both individually can achieve. | To what extent does the integration mode proposed in the article satisfy the criterion of Explainability as described?

Appendix B

Table A3. Mapping of Articles to Categories.

MCDM[ML vs. ML]: [159], [160], [50], [114], [161], [162], [163], [164], [165], [166], [167], [113], [168], [169], [75], [170], [141], [171], [172], [173], [174], [175], [139], [54], [176], [177], [178], [179], [51], [180], [181], [182], [80], [183], [184], [185], [186], [187], [188]

ML vs. MCDM: [189], [190], [191], [192], [70], [193], [73], [194], [195], [196], [197], [48], [198], [199], [200], [201], [202], [77], [203], [204], [49], [205], [206], [21], [207], [208], [209], [210], [211], [10], [212], [35], [213], [214]

ML vs. MCDM and ML → MCDM: [215], [216], [217], [218], [219], [99]

ML → MCDM: [220], [221], [222], [223], [224], [225], [226], [227], [228], [229], [230], [231], [71], [125], [232], [233], [123], [122], [126], [41], [234], [235], [236], [237], [238], [239], [240], [241], [242], [243], [121], [244], [97], [127], [245], [13], [78], [246], [247], [115], [11], [248], [33], [98], [249], [250], [251], [252], [253], [254], [23], [124], [40], [100], [255]

ML → MCDM and MCDM → ML: [16], [256], [257]

MCDM → ML: [258], [14], [259], [260], [261], [9], [42], [101], [93], [72], [128], [43], [129], [112], [107], [120], [262], [102], [109], [263], [44], [116], [108], [104], [94], [264], [111], [103], [119], [76], [265], [110], [79], [26], [266], [267], [268], [105], [95], [269], [270], [271], [96], [272], [273], [34], [274], [275], [276], [277], [106], [278], [279]

MCDM + ML: [280], [82], [74], [117], [29], [281], [282], [283], [46], [284], [81], [45], [285], [286], [47], [118], [287], [288]

Appendix C

Table A4. Scoring guideline.

Article No. | Title of the Article
Criterion | Description of the criterion.
Question | Evaluation question for the given criterion.
Explanation | Please pay particular attention to the sections that most directly describe the integration approach, including its underlying rationale, procedural structure, implementation details, contributions, limitations, and resulting outcomes.
Score | 0 ☐     1 ☐     2 ☐
Evidence | If any keyword, sentence, or entire section is related to this criterion, please provide descriptive information and a precise pointer (e.g., “Section 1. Introduction”) here.
Table A5. Example evaluation.

Article No. 114 | A weighted Bonferroni-OWA operator-based cumulative belief degree approach to personnel selection based on automated video interview assessment data.
C12. Sequential Processing | The integration mode introduces a stepwise approach in which the output of one method serves as the input for the next, maintaining methodological coherence, enhancing traceability, and addressing the limitations of individual methods that go beyond what the MCDM method, the ML algorithm, or even both individually can achieve.
Q12. | To what extent does the approach proposed in the article satisfy the criterion of Sequential Processing as described above?
Explanation | Please pay particular attention to the sections that most directly describe the integration approach, including its underlying rationale, procedural structure, implementation details, contributions, limitations, and resulting outcomes.
Score | 0 ☐     1 ☐     2 ☒
Evidence | Abstract: “In order to address these issues, an effective and practical approach is proposed that is able to transform, weight, combine, and rank automated AVI assessments obtained through AI technologies and machine learning.” Introduction: “The proposed approach is a prime example of the integration of learning-based techniques with MCDM techniques. This is the first study that transforms and aggregates automated AVI assessments in a multi-criteria environment for personnel selection.” Section 4. Proposed Approach: Steps of the proposed approach and Figure 1.

Appendix D. Intraclass Correlation Coefficients

Table A6. Intraclass Correlation Coefficients for Group 1—Average Measures.

Criterion | ICC b | 95% CI Lower Bound | 95% CI Upper Bound | F Value (True Value 0) | df1 | df2 | Sig.
Novelty | 0.902 | 0.864 | 0.930 | 10.130 | 103 | 206 | <0.001
Complexity | 0.940 | 0.917 | 0.958 | 17.365 | 103 | 206 | <0.001
Validation | 0.964 | 0.949 | 0.974 | 28.433 | 103 | 206 | <0.001
Subjectivity | 0.948 | 0.928 | 0.963 | 19.641 | 103 | 206 | <0.001
Knowledge Base | 0.906 | 0.866 | 0.935 | 11.426 | 103 | 206 | <0.001
Effectiveness | 0.629 | 0.485 | 0.737 | 2.685 | 103 | 206 | <0.001
Efficiency | 0.905 | 0.868 | 0.933 | 10.616 | 103 | 206 | <0.001
Applicability | 0.815 | 0.744 | 0.869 | 5.449 | 103 | 206 | <0.001
Flexibility | 0.930 | 0.903 | 0.950 | 14.587 | 103 | 206 | <0.001
Consistency | 0.915 | 0.882 | 0.939 | 11.797 | 103 | 206 | <0.001
Automation | 0.888 | 0.845 | 0.921 | 9.067 | 103 | 206 | <0.001
Sequential Processing | 0.965 | 0.951 | 0.975 | 28.273 | 103 | 206 | <0.001
Dynamic Nature | 0.904 | 0.865 | 0.933 | 11.095 | 103 | 206 | <0.001
Explainability | 0.889 | 0.846 | 0.921 | 8.949 | 103 | 206 | <0.001
b Type A intraclass correlation coefficients using an absolute agreement definition.
Table A7. Intraclass Correlation Coefficients for Group 2—Average Measures.
| Criterion | ICC b | 95% CI Lower Bound | 95% CI Upper Bound | F (True Value 0) | df1 | df2 | Sig. |
|---|---|---|---|---|---|---|---|
| Novelty | 0.926 | 0.898 | 0.948 | 13.592 | 103 | 206 | <0.001 |
| Complexity | 0.924 | 0.892 | 0.947 | 14.055 | 103 | 206 | <0.001 |
| Validation | 0.976 | 0.967 | 0.983 | 41.746 | 103 | 206 | <0.001 |
| Subjectivity | 0.942 | 0.918 | 0.959 | 18.245 | 103 | 206 | <0.001 |
| Knowledge Base | 0.854 | 0.794 | 0.898 | 7.348 | 103 | 206 | <0.001 |
| Effectiveness | 0.651 | 0.517 | 0.753 | 2.872 | 103 | 206 | <0.001 |
| Efficiency | 0.923 | 0.888 | 0.947 | 14.198 | 103 | 206 | <0.001 |
| Applicability | 0.805 | 0.725 | 0.863 | 5.442 | 103 | 206 | <0.001 |
| Flexibility | 0.923 | 0.892 | 0.946 | 13.656 | 103 | 206 | <0.001 |
| Consistency | 0.890 | 0.845 | 0.923 | 9.725 | 103 | 206 | <0.001 |
| Automation | 0.914 | 0.880 | 0.940 | 12.294 | 103 | 206 | <0.001 |
| Sequential Processing | 0.961 | 0.946 | 0.972 | 26.423 | 103 | 206 | <0.001 |
| Dynamic Nature | 0.840 | 0.778 | 0.886 | 6.276 | 103 | 206 | <0.001 |
| Explainability | 0.894 | 0.852 | 0.926 | 9.920 | 103 | 206 | <0.001 |
b Type A intraclass correlation coefficients using an absolute agreement definition.
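The coefficients in Tables A6 and A7 are average-measures, absolute-agreement intraclass correlations from a two-way model (ICC(A,k) in McGraw and Wong's notation); the degrees of freedom (103 and 206) are consistent with 104 articles rated by three experts per group. The following sketch shows how such a coefficient is obtained from the ANOVA mean squares; the function name `icc_a_k` and the toy ratings are our own illustration, not data from the study:

```python
import numpy as np

def icc_a_k(ratings):
    """Two-way, absolute-agreement, average-measures ICC -- ICC(A,k) in
    McGraw & Wong's notation. Rows = subjects (articles), columns = raters."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)               # per-subject means
    col_means = x.mean(axis=0)               # per-rater means
    ms_rows = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((col_means - grand) ** 2).sum() / (k - 1)
    resid = x - row_means[:, None] - col_means[None, :] + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

# A constant rater offset lowers absolute agreement even when rankings match:
print(icc_a_k([[1, 2], [2, 3], [3, 4]]))  # 0.8
```

Because the definition penalizes systematic rater offsets, a consistency-type ICC on the same toy data would equal 1.0 while the absolute-agreement coefficient stays below it.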

Appendix E. Correspondence Analysis Theory

Based on the study of Greenacre [85,289], the theory can be summarized as follows:
  • Let $N$ be the $I \times J$ data matrix with positive row and column sums. The correspondence matrix $P$ is obtained by dividing all entries of $N$ by the grand total $n = \sum_{i} \sum_{j} n_{ij}$:
$$P = \frac{1}{n} N \tag{A1}$$
  • Let $r$ and $c$ denote the vectors of row and column sums of $P$, and let $D_r$ and $D_c$ be the diagonal matrices containing $r$ and $c$ on their diagonals.
  • The algorithm for computing the coordinates of the row and column profiles along the principal axes is based on the singular value decomposition (SVD):
Step 1: Compute the SVD of the matrix of standardized residuals $S$:
$$S = D_r^{-1/2} \left( P - r c^{T} \right) D_c^{-1/2} = U D_{\alpha} V^{T} \tag{A2}$$
$$U^{T} U = V^{T} V = I \tag{A3}$$
where $D_{\alpha}$ is the diagonal matrix of the singular values in descending order: $\alpha_1 \geq \alpha_2 \geq \alpha_3 \geq \cdots$
Step 2: Compute the standard coordinates of the rows and columns:
$$X = D_r^{-1/2} U, \qquad Y = D_c^{-1/2} V \tag{A4}$$
Step 3: Compute the principal coordinates $F$ and $G$:
$$F = X D_{\alpha}, \qquad G = Y D_{\alpha} \tag{A5}$$
  • The total variance, called the inertia, is
$$\sum_{i} \sum_{j} \frac{\left( p_{ij} - r_i c_j \right)^2}{r_i c_j} \tag{A6}$$
which equals the chi-squared statistic calculated on the original table divided by $n$. Geometrically, the inertia measures how "far" the row profiles (or the column profiles) are from their average profile. A row or column profile is the corresponding row or column of the table divided by its respective total.
Step 4: Compute the principal inertias $\lambda_k$:
$$\lambda_k = \alpha_k^2, \quad k = 1, 2, \ldots, K, \quad \text{where } K = \min\{I - 1, J - 1\} \tag{A7}$$
The squared singular values $\alpha_1^2, \alpha_2^2, \ldots$ are the principal inertias, representing the portion of the total inertia explained by each principal axis.
  • The results of CA are presented as a map of points representing the rows and columns with respect to a selected pair of principal axes, or dimensions. These axes correspond to pairs of columns in the coordinate matrices, usually the first two columns for the first two principal axes.
  • The symmetric map uses the first two columns of $F$ for the row coordinates and the first two columns of $G$ for the column coordinates, both in principal coordinates (see Equation (A5)).
  • The distances between profiles are measured using the chi-square distance, a weighted Euclidean distance. The total inertia is the weighted average of the squared chi-square distances between each row profile and the average row profile (and, similarly, between the column profiles and their average).
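The four steps above can be sketched numerically as follows. The 3 × 3 contingency table `N` is hypothetical and only illustrates the computation; note that $P - rc^{T}$ has zero row and column sums, so the last singular value vanishes and exactly $K = \min\{I-1, J-1\}$ axes carry inertia:

```python
import numpy as np

# Hypothetical contingency table (not data from the study).
N = np.array([[20.0, 10.0, 5.0],
              [10.0, 25.0, 10.0],
              [5.0, 10.0, 30.0]])

n = N.sum()
P = N / n                        # correspondence matrix
r = P.sum(axis=1)                # row masses
c = P.sum(axis=0)                # column masses
Dr_is = np.diag(1.0 / np.sqrt(r))
Dc_is = np.diag(1.0 / np.sqrt(c))

# Step 1: SVD of the standardized residuals (singular values come back
# in descending order from numpy).
S = Dr_is @ (P - np.outer(r, c)) @ Dc_is
U, alpha, Vt = np.linalg.svd(S, full_matrices=False)

# Step 2: standard coordinates.
X = Dr_is @ U
Y = Dc_is @ Vt.T

# Step 3: principal coordinates (rows of F/G are plotted in the symmetric map).
F = X @ np.diag(alpha)
G = Y @ np.diag(alpha)

# Step 4: principal inertias; their sum equals chi-squared / n.
inertias = alpha ** 2
total_inertia = inertias.sum()
expected = n * np.outer(r, c)
chi2 = ((N - expected) ** 2 / expected).sum()
assert np.isclose(total_inertia, chi2 / n)
```

The final assertion verifies the identity stated above: the total inertia equals the chi-squared statistic of the original table divided by the grand total.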

Appendix F

Table A8. Two experts’ evaluations of holdout articles.
| Studies | Year | Integration Modes | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 | C11 | C12 | C13 | C14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [290] | 2025 | ML vs. MCDM | {0, 0} | {1, 1} | {2, 2} | {2, 1} | {2, 2} | {2, 2} | {1, 2} | {2, 2} | {1, 1} | {2, 2} | {1, 0} | {0, 0} | {0, 0} | {0, 0} |
| [291] | 2025 | ML → MCDM, MCDM → ML | {1, 2} | {1, 2} | {0, 0} | {2, 2} | {0, 0} | {2, 2} | {2, 2} | {2, 2} | {0, 1} | {1, 1} | {2, 2} | {2, 2} | {2, 2} | {0, 1} |
| [292] | 2025 | MCDM[ML vs. ML] | {2, 2} | {1, 1} | {2, 2} | {0, 1} | {1, 1} | {2, 2} | {2, 2} | {2, 2} | {1, 1} | {0, 1} | {1, 1} | {0, 2} | {0, 0} | {1, 2} |
| [293] | 2025 | MCDM → ML | {0, 0} | {2, 2} | {0, 0} | {0, 0} | {1, 2} | {2, 2} | {1, 2} | {2, 2} | {0, 0} | {1, 1} | {1, 1} | {2, 2} | {0, 0} | {2, 2} |
| [294] | 2025 | MCDM[ML vs. ML] | {0, 0} | {0, 1} | {2, 2} | {0, 0} | {2, 1} | {2, 2} | {2, 2} | {2, 2} | {0, 1} | {1, 1} | {0, 0} | {0, 0} | {0, 0} | {1, 1} |
| [295] | 2025 | MCDM[ML vs. ML] | {1, 0} | {1, 2} | {2, 2} | {0, 0} | {2, 2} | {2, 2} | {1, 1} | {1, 2} | {0, 1} | {1, 1} | {0, 0} | {0, 0} | {0, 0} | {0, 0} |
| [296] | 2025 | ML → MCDM | {0, 0} | {2, 2} | {0, 0} | {2, 2} | {1, 1} | {2, 2} | {2, 2} | {1, 2} | {2, 2} | {1, 1} | {1, 1} | {2, 2} | {0, 0} | {0, 0} |
| [297] | 2025 | ML vs. MCDM | {0, 1} | {1, 2} | {1, 2} | {2, 2} | {2, 2} | {2, 2} | {1, 1} | {2, 2} | {2, 2} | {2, 2} | {1, 0} | {0, 0} | {1, 2} | {0, 0} |
| [298] | 2025 | ML → MCDM | {1, 2} | {2, 2} | {0, 0} | {2, 2} | {2, 2} | {2, 2} | {0, 1} | {1, 1} | {0, 1} | {2, 2} | {1, 0} | {2, 2} | {0, 0} | {0, 0} |
| [299] | 2025 | MCDM[ML vs. ML] | {1, 1} | {0, 0} | {2, 2} | {0, 1} | {0, 0} | {2, 2} | {0, 0} | {1, 1} | {2, 2} | {1, 1} | {1, 2} | {0, 0} | {0, 0} | {1, 1} |
| [300] | 2025 | ML vs. MCDM | {0, 0} | {0, 0} | {2, 2} | {0, 0} | {2, 2} | {2, 2} | {0, 0} | {2, 2} | {0, 0} | {1, 1} | {0, 0} | {0, 0} | {1, 1} | {0, 0} |
| [296] | 2025 | ML vs. MCDM | {0, 0} | {2, 2} | {2, 2} | {0, 0} | {2, 2} | {2, 2} | {2, 2} | {2, 2} | {0, 1} | {1, 1} | {1, 0} | {0, 0} | {0, 0} | {0, 0} |
| [301] | 2025 | ML → MCDM | {0, 0} | {2, 2} | {0, 0} | {1, 1} | {2, 2} | {2, 2} | {0, 1} | {2, 2} | {2, 2} | {2, 2} | {0, 0} | {2, 2} | {0, 0} | {0, 1} |
| [302] | 2025 | ML → MCDM | {2, 1} | {2, 2} | {0, 0} | {2, 2} | {0, 0} | {2, 2} | {2, 2} | {2, 2} | {2, 2} | {2, 2} | {2, 2} | {2, 2} | {2, 2} | {2, 2} |
| [303] | 2025 | MCDM → ML | {1, 1} | {1, 2} | {0, 0} | {2, 2} | {0, 0} | {2, 2} | {2, 2} | {2, 2} | {0, 1} | {0, 1} | {2, 2} | {2, 2} | {0, 0} | {0, 0} |
| [304] | 2025 | MCDM → ML | {0, 0} | {2, 2} | {0, 0} | {0, 0} | {2, 2} | {2, 2} | {2, 2} | {2, 2} | {1, 1} | {2, 2} | {1, 0} | {2, 2} | {2, 2} | {0, 0} |
| [305] | 2025 | ML → MCDM, ML vs. MCDM | {1, 2} | {0, 0} | {2, 1} | {2, 2} | {1, 1} | {2, 2} | {1, 2} | {2, 2} | {2, 2} | {2, 2} | {2, 2} | {2, 2} | {0, 0} | {1, 1} |
| [306] | 2025 | MCDM[ML vs. ML] | {0, 0} | {2, 2} | {2, 2} | {1, 2} | {1, 0} | {2, 2} | {0, 1} | {2, 2} | {2, 2} | {0, 1} | {2, 2} | {0, 0} | {0, 0} | {2, 1} |
| [307] | 2026 | MCDM → ML, ML vs. MCDM | {2, 1} | {2, 2} | {2, 1} | {2, 2} | {2, 2} | {1, 1} | {1, 2} | {1, 1} | {1, 1} | {1, 2} | {1, 1} | {2, 2} | {0, 0} | {0, 0} |
| [308] | 2026 | MCDM[ML vs. ML] | {1, 2} | {0, 1} | {2, 2} | {2, 2} | {0, 0} | {2, 2} | {2, 2} | {2, 2} | {2, 2} | {2, 2} | {2, 2} | {0, 0} | {2, 2} | {0, 0} |
| [309] | 2025 | MCDM → ML | {1, 1} | {2, 2} | {0, 1} | {2, 2} | {2, 1} | {2, 2} | {2, 2} | {2, 2} | {1, 2} | {1, 2} | {1, 1} | {2, 2} | {1, 0} | {0, 2} |
| [310] | 2024 | MCDM + ML | {2, 2} | {2, 2} | {0, 0} | {1, 2} | {1, 2} | {2, 2} | {2, 2} | {2, 2} | {2, 2} | {1, 2} | {2, 2} | {0, 0} | {1, 2} | {1, 0} |

Appendix G

Table A9. Classification results for out-of-sample data.
| Holdout Sample | Cluster | Pred. 1 | Pred. 2 | Pred. 3 | Total |
|---|---|---|---|---|---|
| Original a, Count | 1 | 9 | 0 | 0 | 9 |
| | 2 | 0 | 5 | 0 | 5 |
| | 3 | 2 | 0 | 3 | 5 |
| Original a, % | 1 | 100.0 | 0.0 | 0.0 | 100.0 |
| | 2 | 0.0 | 100.0 | 0.0 | 100.0 |
| | 3 | 40.0 | 0.0 | 60.0 | 100.0 |
| Cross-validated b, Count | 1 | 7 | 2 | 0 | 9 |
| | 2 | 0 | 5 | 0 | 5 |
| | 3 | 2 | 0 | 3 | 5 |
| Cross-validated b, % | 1 | 77.8 | 22.2 | 0.0 | 100.0 |
| | 2 | 20.0 | 80.0 | 0.0 | 100.0 |
| | 3 | 40.0 | 0.0 | 60.0 | 100.0 |
a. 89.5% of original grouped cases correctly classified. b. 78.9% of cross-validated grouped cases correctly classified.
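The correct-classification percentages reported in the footnotes can be recomputed directly from the counts in Table A9; the confusion matrices below are transcribed from the table, and the helper function `hit_ratio` is our own illustration:

```python
# Confusion matrices from Table A9 (rows = actual cluster,
# columns = predicted cluster) for the 19 holdout cases.
original = [
    [9, 0, 0],   # cluster 1: all 9 cases correctly classified
    [0, 5, 0],   # cluster 2: all 5 correct
    [2, 0, 3],   # cluster 3: 2 misclassified into cluster 1, 3 correct
]
cross_validated = [
    [7, 2, 0],
    [0, 5, 0],
    [2, 0, 3],
]

def hit_ratio(confusion):
    """Percentage of correctly classified cases: diagonal over grand total."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return 100.0 * correct / total

print(round(hit_ratio(original), 1))         # 89.5, matching footnote a
print(round(hit_ratio(cross_validated), 1))  # 78.9, matching footnote b
```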
Table A10. Casewise statistics for out-of-sample data.
| Integration Mode / Criterion | Actual Group | Predicted Group (Original) | P(1) | P(2) | P(3) | Predicted Group (Cross-Validated) | P(1) | P(2) | P(3) |
|---|---|---|---|---|---|---|---|---|---|
| ML → MCDM | 1 | 1 | 0.873 | 0.086 | 0.041 | 1 | 0.853 | 0.098 | 0.049 |
| MCDM → ML | 1 | 1 | 0.891 | 0.066 | 0.042 | 1 | 0.873 | 0.077 | 0.050 |
| MCDM + ML | 2 | 2 | 0.103 | 0.886 | 0.010 | 2 | 0.116 | 0.870 | 0.014 |
| ML vs. MCDM | 3 | 3 | 0.178 | 0.025 | 0.797 | 3 | 0.194 | 0.031 | 0.775 |
| MCDM[ML vs. ML] | 3 | 3 | 0.058 | 0.021 | 0.921 | 3 | 0.070 | 0.027 | 0.903 |
| Novelty | 2 | 2 | 0.096 | 0.894 | 0.010 | 2 | 0.109 | 0.878 | 0.013 |
| Complexity | 1 | 1 | 0.487 | 0.474 | 0.039 | 2 * | 0.454 | 0.499 | 0.047 |
| Validation | 3 | 3 | 0.000 | 0.000 | 1.000 | 3 | 0.000 | 0.000 | 1.000 |
| Subjectivity | 1 | 1 | 0.662 | 0.267 | 0.071 | 1 | 0.641 | 0.277 | 0.082 |
| Knowledge Base | 1 | 1 | 0.440 | 0.310 | 0.250 | 1 | 0.384 | 0.340 | 0.276 |
| Effectiveness | 3 | 1 * | 0.502 | 0.221 | 0.277 | 1 * | 0.596 | 0.250 | 0.154 |
| Efficiency | 1 | 1 | 0.468 | 0.448 | 0.084 | 2 * | 0.432 | 0.468 | 0.100 |
| Applicability | 3 | 1 * | 0.499 | 0.262 | 0.239 | 1 * | 0.596 | 0.290 | 0.114 |
| Flexibility | 2 | 2 | 0.233 | 0.685 | 0.082 | 2 | 0.264 | 0.643 | 0.093 |
| Consistency | 1 | 1 | 0.586 | 0.238 | 0.176 | 1 | 0.557 | 0.254 | 0.189 |
| Automation | 2 | 2 | 0.243 | 0.726 | 0.031 | 2 | 0.259 | 0.704 | 0.037 |
| Sequential Processing | 1 | 1 | 0.993 | 0.000 | 0.007 | 1 | 1.000 | 0.000 | 0.000 |
| Dynamic Nature | 2 | 2 | 0.084 | 0.913 | 0.003 | 2 | 0.102 | 0.894 | 0.004 |
| Explainability | 1 | 1 | 0.549 | 0.170 | 0.281 | 1 | 0.515 | 0.190 | 0.295 |
* Misclassified case.

References

  1. Sayardoost Tabrizi, S.; Yakideh, K.; Moradi, M.; Ebrahimpour, M. Clustering with Machine Learning and Using NDEA in Development Planning: A Case Study in the Petrochemical Two-Stage SSC. Int. J. Res. Ind. Eng. 2025, 14, 355–384. [Google Scholar] [CrossRef]
  2. Lagzi, M.D.; Farkhondeh, F.; Amoozad Mahdiraji, H.; Sakka, G. Exploring Data-Driven Decision-Making Practices: A Comprehensive Review with Bibliometric Insights and Future Directions. EuroMed J. Bus. 2025. [Google Scholar] [CrossRef]
  3. Ali, Y.A.; Awwad, E.M.; Al-Razgan, M.; Maarouf, A. Hyperparameter Search for Machine Learning Algorithms for Optimizing the Computational Complexity. Processes 2023, 11, 349. [Google Scholar] [CrossRef]
  4. Liao, H.; He, Y.; Wu, X.; Wu, Z.; Bausys, R. Reimagining Multi-Criterion Decision Making by Data-Driven Methods Based on Machine Learning: A Literature Review. Inf. Fusion 2023, 100, 101970. [Google Scholar] [CrossRef]
  5. Ati, A.; Bouchet, P.; Ben Jeddou, R. Using Multi-Criteria Decision-Making and Machine Learning for Football Player Selection and Performance Prediction: A Systematic Review. Data Sci. Manag. 2024, 7, 79–88. [Google Scholar] [CrossRef]
  6. Düzen, M.A.; Bölükbaşı, İ.B.; Çalık, E. How to Combine ML and MCDM Techniques: An Extended Bibliometric Analysis. J. Innov. Eng. Nat. Sci. 2024, 4, 642–657. [Google Scholar] [CrossRef]
  7. Reyes-Norambuena, P.; Pinto, A.A.; Martínez, J.; Karbassi Yazdi, A.; Tan, Y. The Application of Machine Learning and Deep Learning with a Multi-Criteria Decision Analysis for Pedestrian Modeling: A Systematic Literature Review (1999–2023). Sustainability 2025, 17, 41. [Google Scholar] [CrossRef]
  8. Thakkar, J.J. Multi-Criteria Decision Making; Studies in Systems, Decision and Control; Springer: Singapore, 2021; Volume 336; ISBN 978-981-33-4744-1. [Google Scholar]
  9. Nouib, H.; Qadech, H.; Andaloussi, M.B.; Chowdhury, S.J.; Moumen, A. Predicting Graduate Employability Using Hybrid AHP-TOPSIS and Machine Learning: A Moroccan Case Study. Technologies 2025, 13, 385. [Google Scholar] [CrossRef]
  10. Khosravi, K.; Shahabi, H.; Pham, B.T.; Adamowski, J.; Shirzadi, A.; Pradhan, B.; Dou, J.; Ly, H.B.; Gróf, G.; Ho, H.L.; et al. A Comparative Assessment of Flood Susceptibility Modeling Using Multi-Criteria Decision-Making Analysis and Machine Learning Methods. J. Hydrol. 2019, 573, 311–323. [Google Scholar] [CrossRef]
  11. Nilashi, M.; Mardani, A.; Liao, H.; Ahmadi, H.; Manaf, A.A.; Almukadi, W. A Hybrid Method with TOPSIS and Machine Learning Techniques for Sustainable Development of Green Hotels Considering Online Reviews. Sustainability 2019, 11, 6013. [Google Scholar] [CrossRef]
  12. De Araújo Costa, I.P.; Basílio, M.P.; Do Nascimento Maêda, S.M.; Rodrigues, M.V.G.; Moreira, M.Â.L.; Gomes, C.F.S.; Dos Santos, M. Algorithm Selection for Machine Learning Classification: An Application of the MELCHIOR Multicriteria Method. Front. Artif. Intell. Appl. 2021, 341, 154–161. [Google Scholar] [CrossRef]
  13. Sarkar, S.K.; Ansar, S.B.; Ekram, K.M.M.; Khan, M.H.; Talukdar, S.; Naikoo, M.W.; Islam, A.R.T.; Rahman, A.; Mosavi, A. Developing Robust Flood Susceptibility Model with Small Numbers of Parameters in Highly Fertile Regions of Northwest Bangladesh for Sustainable Flood and Agriculture Management. Sustainability 2022, 14, 3982. [Google Scholar] [CrossRef]
  14. Şimşek, A.İ.; Gür, Y.E.; Ünal, E. Innovative MCDM-ML Algorithms-Based Decision-Support System for Electric Vehicle Selection. Environ. Dev. Sustain. 2025, 1–26. [Google Scholar] [CrossRef]
  15. Harikrishnakumar, R.; Dand, A.; Nannapaneni, S.; Krishnan, K. Supervised Machine Learning Approach for Effective Supplier Classification. In Proceedings of the 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA), Boca Raton, FL, USA, 16–19 December 2019; pp. 240–245. [Google Scholar] [CrossRef]
  16. Yadegaridehkordi, E.; Nilashi, M.; Nizam Bin Md Nasir, M.H.; Momtazi, S.; Samad, S.; Supriyanto, E.; Ghabban, F. Customers Segmentation in Eco-Friendly Hotels Using Multi-Criteria and Machine Learning Techniques. Technol. Soc. 2021, 65, 101528. [Google Scholar] [CrossRef]
  17. Gorski, E.G.; Racha Loures, E.d.F.; Santos, E.A.P.; Kondo, R.E.; Martins, G.R.D.N. Towards a Smart Workflow in CMMS/EAM Systems: An Approach Based on ML and MCDM. J. Ind. Inf. Integr. 2022, 26, 100278. [Google Scholar] [CrossRef]
  18. Farhadi, H.; Esmaeily, A.; Najafzadeh, M. Flood Monitoring by Integration of Remote Sensing Technique and Multi-Criteria Decision Making Method. Comput. Geosci. 2022, 160, 105045. [Google Scholar] [CrossRef]
  19. Akinsola, J.E.T.; Awodele, O.; Kuyoro, S.O.; Kasali, F.A. Performance Evaluation of Supervised Machine Learning Algorithms Using Multi-Criteria Decision Making Techniques. In Proceedings of the International Conference on Information Technology in Education and Development (ITED), Valencia, Spain, 11–13 March 2019; pp. 17–34. [Google Scholar]
  20. Kumar, R. A Comprehensive Review of MCDM Methods, Applications, and Emerging Trends. Decis. Mak. Adv. 2025, 3, 185–199. [Google Scholar] [CrossRef]
  21. Pham, Q.B.; Achour, Y.; Ali, S.A.; Parvin, F.; Vojtek, M.; Vojteková, J.; Al-Ansari, N.; Achu, A.L.; Costache, R.; Khedher, K.M.; et al. A Comparison among Fuzzy Multi-Criteria Decision Making, Bivariate, Multivariate and Machine Learning Models in Landslide Susceptibility Mapping. Geomat. Nat. Hazards Risk 2021, 12, 1741–1777. [Google Scholar] [CrossRef]
  22. Wilson, V.H.; Prasad, A.; Shankharan, A.; Kapoor, S.; Rajan, J. Ranking of Supplier Performance Using Machine Learning Algorithm of Random Forest. Int. J. Adv. Res. Eng. Technol. (IJARET) 2020, 11, 298–308. [Google Scholar]
  23. Fu, C.; Xu, C.; Xue, M.; Liu, W.; Yang, S. Data-Driven Decision Making Based on Evidential Reasoning Approach and Machine Learning Algorithms. Appl. Soft Comput. 2021, 110, 107622. [Google Scholar] [CrossRef]
  24. Fernando, X.; Thavarajah, N.; Avramova, T.; Peneva, T.; Ivanov, A. Overview of Existing Multi-Criteria Decision-Making (MCDM) Methods Used in Industrial Environments. Technologies 2025, 13, 444. [Google Scholar] [CrossRef]
  25. Li, J.; Dai, Y.; Jiang, R.; Li, J. Objective Multi-Criteria Decision-Making for Optimal Firefighter Protective Clothing Size Selection. Int. J. Occup. Saf. Ergon. 2024, 30, 968–976. [Google Scholar] [CrossRef]
  26. de Paula Vidal, G.H.; Caiado, R.G.G.; Scavarda, L.F.; Ivson, P.; Garza-Reyes, J.A. Decision Support Framework for Inventory Management Combining Fuzzy Multicriteria Methods, Genetic Algorithm, and Artificial Neural Network. Comput. Ind. Eng. 2022, 174, 108777. [Google Scholar] [CrossRef]
  27. Chu, H.C.; Liao, Y.X.; Chang, L.H.; Lee, Y.H. Traffic Light Cycle Configuration of Single Intersection Based on Modified Q-Learning. Appl. Sci. 2019, 9, 4558. [Google Scholar] [CrossRef]
  28. Kim, R.G.; Abisado, M.; Villaverde, J.; Sampedro, G.A. A Survey of Image-Based Fault Monitoring in Additive Manufacturing: Recent Developments and Future Directions. Sensors 2023, 23, 6821. [Google Scholar] [CrossRef]
  29. Mohsin, M.; Ali, S.A.; Shamim, S.K.; Ahmad, A. A GIS-Based Novel Approach for Suitable Sanitary Landfill Site Selection Using Integrated Fuzzy Analytic Hierarchy Process and Machine Learning Algorithms. Environ. Sci. Pollut. Res. 2022, 29, 31511–31540. [Google Scholar] [CrossRef]
  30. Muttakin, F.; Wang, J.T.; Mulyanto, M.; Leu, J.S. Evaluation of Feature Selection Methods on Psychosocial Education Data Using Additive Ratio Assessment. Electronics 2021, 11, 114. [Google Scholar] [CrossRef]
  31. Kavya, R.; Christopher, J.; Panda, S. ScaPMI: Scaling Parameter for Metric Importance. In Proceedings of the 14th International Conference on Agents and Artificial Intelligence, Vienna, Austria, 3–5 February 2022; SCITEPRESS—Science and Technology Publications, Lda.: Setubal, Portugal, 2022; pp. 83–90. [Google Scholar]
  32. Abushark, Y.B.; Khan, A.I.; Alsolami, F.; Almalawi, A.; Alam, M.M.; Agrawal, A.; Kumar, R.; Khan, R.A. Cyber Security Analysis and Evaluation for Intrusion Detection Systems. Comput. Mater. Contin. 2022, 72, 1765–1783. [Google Scholar] [CrossRef]
  33. Ye, Y.; Zhao, Y.; Shang, J.; Zhang, L. A Hybrid IT Framework for Identifying High-Quality Physicians Using Big Data Analytics. Int. J. Inf. Manag. 2019, 47, 65–75. [Google Scholar] [CrossRef]
  34. Kartal, H.; Oztekin, A.; Gunasekaran, A.; Cebi, F. An Integrated Decision Analytic Framework of Machine Learning with Multi-Criteria Decision Making for Multi-Attribute Inventory Classification. Comput. Ind. Eng. 2016, 101, 599–613. [Google Scholar] [CrossRef]
  35. Sobrie, O.; Lazouni, M.E.A.; Mahmoudi, S.; Mousseau, V.; Pirlot, M. A New Decision Support Model for Preanesthetic Evaluation. Comput. Methods Programs Biomed. 2016, 133, 183–193. [Google Scholar] [CrossRef] [PubMed]
  36. Shivashankar, K.; Al Hajj, G.S.; Martini, A. Scalability and Maintainability Challenges and Solutions in Machine Learning: Systematic Literature Review. Big Data Res. 2025, 40. [Google Scholar] [CrossRef]
  37. Karaahmetoğlu, A.; Yıldız, M.; Ünal, E.; Aydın, U.; Koraş, M.; Akgün, B. Efficient, Interpretable and Automated Feature Engineering for Bank Data. Big Data Res. 2025, 40, 100524. [Google Scholar] [CrossRef]
  38. Elgendy, N.; Elragal, A.; Päivärinta, T. DECAS: A Modern Data-Driven Decision Theory for Big Data and Analytics. J. Decis. Syst. 2022, 31, 337–373. [Google Scholar] [CrossRef]
  39. Ransbotham, S.; Khodabandeh, S.; Kiron, D.; Candelon, F.; Chu, M.; Lafountain, B. Expanding AI’s Impact with Organizational Learning; MIT Sloan Management Review: Cambridge, MA, USA, 2020; Volume 8245. [Google Scholar]
  40. Pirouz, B.; Ferrante, A.P.; Pirouz, B.; Piro, P. Machine Learning and Geo-Based Multi-Criteria Decision Support Systems in Analysis of Complex Problems. ISPRS Int. J. Geo-Inf. 2021, 10, 424. [Google Scholar] [CrossRef]
  41. Elomiya, A.; Křupka, J.; Jovčić, S.; Simic, V.; Švadlenka, L.; Pamucar, D. A Hybrid Suitability Mapping Model Integrating GIS, Machine Learning, and Multi-Criteria Decision Analytics for Optimizing Service Quality of Electric Vehicle Charging Stations. Sustain. Cities Soc. 2024, 106, 105397. [Google Scholar] [CrossRef]
  42. Oliveira de Sousa, F.; Ariza Flores, V.A.; Cunha, C.S.; Oda, S.; Xavier Ratton Neto, H. Multi-Criteria Assessment of Flood Risk on Railroads Using a Machine Learning Approach: A Case Study of Railroads in Minas Gerais. Infrastructures 2025, 10, 12. [Google Scholar] [CrossRef]
  43. Saleh, N.; Gamal, O.; Eldosoky, M.A.A.; Shaaban, A.R. An Integrative Approach to Medical Laboratory Equipment Risk Management. Sci. Rep. 2024, 14, 4045. [Google Scholar] [CrossRef] [PubMed]
  44. Sotiropoulou, K.F.; Vavatsikos, A.P. A Decision-Making Framework for Spatial Multicriteria Suitability Analysis Using PROMETHEE II and k Nearest Neighbor Machine Learning Models. J. Geovis. Spat. Anal. 2023, 7, 20. [Google Scholar] [CrossRef]
  45. Guerrero-Gómez-Olmedo, R.; Salmeron, J.L.; Kuchkovsky, C. LRP-Based Path Relevances for Global Explanation of Deep Architectures. Neurocomputing 2020, 381, 252–260. [Google Scholar] [CrossRef]
  46. Rafiei-Sardooi, E.; Azareh, A.; Choubin, B.; Mosavi, A.H.; Clague, J.J. Evaluating Urban Flood Risk Using Hybrid Method of TOPSIS and Machine Learning. Int. J. Disaster Risk Reduct. 2021, 66, 102614. [Google Scholar] [CrossRef]
  47. Parishani, M.; Rasti-Barzoki, M. CWBCM Method to Determine the Importance of Classification Performance Evaluation Criteria in Machine Learning: Case Studies of COVID-19, Diabetes, and Thyroid Disease. Omega 2024, 127, 103096. [Google Scholar] [CrossRef]
  48. Fernández, D.; Rodríguez-Prieto, Á.; Camacho, A.M. Data-Analytics-Driven Selection of Die Material in Multi-Material Co-Extrusion of Ti-Mg Alloys. Mathematics 2024, 12, 813. [Google Scholar] [CrossRef]
  49. Almansi, K.Y.; Shariff, A.R.M.; Kalantar, B.; Abdullah, A.F.; Ismail, S.N.S.; Ueda, N. Performance Evaluation of Hospital Site Suitability Using Multilayer Perceptron (MLP) and Analytical Hierarchy Process (AHP) Models in Malacca, Malaysia. Sustainability 2022, 14, 3731. [Google Scholar] [CrossRef]
  50. Tashakkori, R.; Mozdgir, A.; Karimi, A.; BozorgzadehVostaKolaei, S. The Prediction of NICU Admission and Identifying Influential Factors in Four Different Categories Leveraging Machine Learning Approaches. Biomed. Signal Process. Control. 2024, 90, 105844. [Google Scholar] [CrossRef]
  51. Mekouar, S. Classifiers Selection Based on Analytic Hierarchy Process and Similarity Score for Spam Identification. Appl. Soft Comput. 2021, 113, 108022. [Google Scholar] [CrossRef]
  52. Boden, M.A. Computer Models of Creativity. AI Mag. 2009, 30, 23–34. [Google Scholar] [CrossRef]
  53. Kharkhurin, A.V. Creativity.4in1: Four-Criterion Construct of Creativity. Creat. Res. J. 2014, 26, 338–352. [Google Scholar] [CrossRef]
  54. Hong Yun, Z.; Alshehri, Y.; Alnazzawi, N.; Ullah, I.; Noor, S.; Gohar, N. A Decision-Support System for Assessing the Function of Machine Learning and Artificial Intelligence in Music Education for Network Games. Soft Comput. 2022, 26, 11063–11075. [Google Scholar] [CrossRef]
  55. Caputo, C.; Cardin, M.A. The role of machine learning for flexibility and real options analysis in engineering systems design. Proc. Des. Soc. 2021, 1, 3121–3130. [Google Scholar] [CrossRef]
  56. Sornette, D.; Davis, A.B.; Ide, K.; Vixie, K.R.; Pisarenko, V.; Kamm, J.R. Algorithm for Model Validation: Theory and Applications. Proc. Natl. Acad. Sci. USA 2007, 104, 6562–6567. [Google Scholar] [CrossRef]
  57. Bozdag, E.; Asan, U.; Soyer, A.; Serdarasan, S. Risk Prioritization in Failure Mode and Effects Analysis Using Interval Type-2 Fuzzy Sets. Expert Syst. Appl. 2015, 42, 4000–4015. [Google Scholar] [CrossRef]
  58. Cohendet, P.; Dupouët, O.; Llerena, P.; Naggar, R.; Rampa, R. Knowledge-Based Approaches to the Firm: An Idea-Driven Perspective. Ind. Corp. Change 2025, 34, 479–501. [Google Scholar] [CrossRef]
  59. Sarker, I.H. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Comput. Sci. 2021, 2, 160. [Google Scholar] [CrossRef]
  60. Sivakumar, M.; Parthasarathy, S.; Padmapriya, T. A Simplified Approach for Efficiency Analysis of Machine Learning Algorithms. PeerJ Comput. Sci. 2024, 10, e2418. [Google Scholar] [CrossRef]
  61. Weaver, S.; Gleeson, M.P. The Importance of the Domain of Applicability in QSAR Modeling. J. Mol. Graph. Model. 2008, 26, 1315–1326. [Google Scholar] [CrossRef] [PubMed]
  62. Wang, L.; Ghosh, D.; Gonzalez Diaz, M.T.; Farahat, A.; Alam, M.; Gupta, C.; Chen, J.; Marathe, M. Wisdom of the Ensemble: Improving Consistency of Deep Learning Models. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 13 November 2020; Neural Information Processing Systems Foundation: San Diego, CA, USA, 2020; Volume 2020. [Google Scholar]
  63. Hutter, F.; Kotthoff, L.; Vanschoren, J. Automated Machine Learning: Methods, Systems, Challenges; The Springer Series on Challenges in Machine Learning; Hutter, F., Kotthoff, L., Vanschoren, J., Eds.; Springer International Publishing: Cham, Switzerland, 2019; ISBN 978-3-030-05317-8. [Google Scholar]
  64. Jenkins, D.A.; Sperrin, M.; Martin, G.P.; Peek, N. Dynamic Models to Predict Health Outcomes: Current Status and Methodological Challenges. Diagn. Progn. Res. 2018, 2, 23. [Google Scholar] [CrossRef] [PubMed]
  65. Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy 2020, 23, 18. [Google Scholar] [CrossRef]
  66. Kamath, U.; Liu, J. Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1–310. [Google Scholar] [CrossRef]
  67. Uresin, U.; Asan, U. Conceptualizing and Modeling Factors Influencing Digital Twin Performance in Industrial Contexts: A Fuzzy Cognitive Mapping Approach. IEEE Access 2024, 12, 197645–197677. [Google Scholar] [CrossRef]
  68. Robinson, R.S. Purposive Sampling. In Encyclopedia of Quality of Life and Well-Being Research; Michalos, A.C., Ed.; Springer: Dordrecht, The Netherlands, 2014; pp. 5243–5245. ISBN 978-94-007-0753-5. [Google Scholar]
  69. Soner, O.; Asan, U.; Celik, M. Use of HFACS-FCM in Fire Prevention Modelling on Board Ships. Saf. Sci. 2015, 77, 25–41. [Google Scholar] [CrossRef]
  70. Hasan, M.M.; Talha, M.; Akter, M.M.; Ferdous, M.T.; Mojumder, P.; Roy, S.K.; Refat Nasher, N.M. Assessing the Performance of Machine Learning and Analytical Hierarchy Process (AHP) Models for Rainwater Harvesting Potential Zone Identification in Hilly Region, Bangladesh. J. Asian Earth Sci. X 2025, 13, 100189. [Google Scholar] [CrossRef]
  71. Yang, S.; Liao, H.; Wu, X. Prescriptive Analytics for Dynamic Multi-Criterion Decision Making Considering Learned Knowledge of Alternatives. Expert Syst. Appl. 2025, 268, 126350. [Google Scholar] [CrossRef]
  72. Sotiropoulou, K.F.; Vavatsikos, A.P.; Botsaris, P.N. A Hybrid AHP-PROMETHEE II Onshore Wind Farms Multicriteria Suitability Analysis Using KNN and SVM Regression Models in Northeastern Greece. Renew. Energy 2024, 221, 119795. [Google Scholar] [CrossRef]
  73. Segue, W.S.; Njilah, I.K.; Fossi, D.H.; Nsangou, D. Advancements in Mapping Landslide Susceptibility in Bafoussam and Its Surroundings Area Using Multi-Criteria Decision Analysis, Statistical Methods, and Machine Learning Models. J. Afr. Earth Sci. 2024, 213, 105237. [Google Scholar] [CrossRef]
  74. Fan, S.; Liu, G.; Tu, Y.; Zhu, J.; Zhang, P.; Tian, Z. Improved Multi-Criteria Decision Making Method Integrating Machine Learning for Patent Competitive Potential Evaluation: A Case Study in Water Pollution Abatement Technology. J. Clean. Prod. 2023, 403, 136896. [Google Scholar] [CrossRef]
  75. Alamleh, A.; Almatarneh, S.; Samara, G.; Rasmi, M. Machine Learning-Based Detection of Smartphone Malware: Challenges and Solutions. Mesop. J. Cybersecur. 2023, 2023, 134–157. [Google Scholar] [CrossRef]
  76. Lavate, S.H.; Srivastava, P.K. Optimal Channel Allocation: A Dual Approach with MCDM and Machine Learning. Int. J. Intell. Syst. Appl. Eng. 2023, 12, 196–206. [Google Scholar]
  77. Khalil, U.; Imtiaz, I.; Aslam, B.; Ullah, I.; Tariq, A.; Qin, S. Comparative Analysis of Machine Learning and Multi-Criteria Decision Making Techniques for Landslide Susceptibility Mapping of Muzaffarabad District. Front. Environ. Sci. 2022, 10, 1028373. [Google Scholar] [CrossRef]
  78. Srivastava, P.R.; Eachempati, P. Intelligent Employee Retention System for Attrition Rate Analysis and Churn Prediction: An Ensemble Machine Learning and Multi-Criteria Decision-Making Approach. J. Glob. Inf. Manag. (JGIM) 2021, 29, 1–29. [Google Scholar] [CrossRef]
  79. Sarkar, D.; Saha, S.; Maitra, M.; Mondal, P. Site Suitability for Aromatic Rice Cultivation by Integrating Geo-Spatial and Machine Learning Algorithms in Kaliyaganj, C.D. Block, India. Artif. Intell. Geosci. 2021, 2, 179–191. [Google Scholar] [CrossRef]
  80. Hooda, N.; Bawa, S.; Rana, P.S. Optimizing Fraudulent Firm Prediction Using Ensemble Machine Learning: A Case Study of an External Audit. Appl. Artif. Intell. 2020, 34, 20–30. [Google Scholar] [CrossRef]
  81. Chen, C.; Wang, C.; Qiu, T.; Xu, Z.; Song, H. A Robust Active Safety Enhancement Strategy with Learning Mechanism in Vehicular Networks. IEEE Trans. Intell. Transp. Syst. 2019, 21, 5160–5176. [Google Scholar] [CrossRef]
  82. Delibašić, B.; Radovanović, S.; Jovanović, M.; Bohanec, M.; Suknović, M. Integrating Knowledge from DEX Hierarchies into a Logistic Regression Stacking Model for Predicting Ski Injuries. J. Decis. Syst. 2018, 27, 201–208. [Google Scholar] [CrossRef]
  83. Denham, B.E. Categorical Statistics for Communication Research; John Wiley & Sons: Pondicherry, India, 2017. [Google Scholar]
  84. Koo, T.K.; Li, M.Y. A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research. J. Chiropr. Med. 2016, 15, 155–163. [Google Scholar] [CrossRef]
  85. Greenacre, M. Correspondence Analysis in Practice, 2nd ed.; Keiding, N., Morgan, B., Speed, T., van der Heijden, P., Eds.; Taylor & Francis Group: Barcelona, Spain, 2007; ISBN 1-58488-616-1. [Google Scholar]
  86. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis, 7th ed.; Pearson: Harlow, UK, 2014. [Google Scholar]
  87. Van der Heijden, P.G.M. Correspondence Analysis of Longitudinal Categorical Data. Psychometrika 1987, 54, 165–166. [Google Scholar] [CrossRef]
  88. Clausen, S.E. Applied Correspondence Analysis: An Introduction; Sage Publications, Inc.: Thousand Oaks, CA, USA, 1998; Volume 121. [Google Scholar]
  89. Greenacre, M. Correspondence Analysis in Practice, 3rd ed.; Chapman and Hall/CRC: Barcelona, Spain, 2017. [Google Scholar]
  90. Hoffman, D.L.; Franke, G.R. Correspondence Analysis: Graphical Representation of Categorical Data in Marketing Research. J. Mark. Res. 1986, 23, 213–227. [Google Scholar] [CrossRef]
  91. Yusuf, H.; Yang, K.; Panoutsos, G. Fuzzy Multi-Criteria Decision-Making: Example of an Explainable Classification Framework. In Proceedings of the Advances in Computational Intelligence Systems; Jansen, T., Jensen, R., Mac Parthaláin, N., Lin, C.-M., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 15–26. [Google Scholar]
  92. Dryden, I.L.; Mardia, K.V. Statistical Shape Analysis, with Applications in R, 2nd ed.; Wiley: Hoboken, NJ, USA, 2016; pp. 1–454. [Google Scholar] [CrossRef]
  93. Halder, S.; Bose, S. Addressing Water Scarcity Challenges through Rainwater Harvesting: A Comprehensive Analysis of Potential Zones and Model Performance in Arid and Semi-Arid Regions—A Case Study on Purulia, India. HydroResearch 2024, 7, 201–212. [Google Scholar] [CrossRef]
  94. Omeka, M.E.; Igwe, O.; Onwuka, O.S.; Nwodo, O.M.; Ugar, S.I.; Undiandeye, P.A.; Anyanwu, I.E. Efficacy of GIS-Based AHP and Data-Driven Intelligent Machine Learning Algorithms for Irrigation Water Quality Prediction in an Agricultural-Mine District within the Lower Benue Trough, Nigeria. Environ. Sci. Pollut. Res. 2024, 31, 54204–54233. [Google Scholar] [CrossRef]
  95. Guhathakurata, S.; Saha, S.; Kundu, S.; Chakraborty, A.; Banerjee, J.S. South Asian Countries Are Less Fatal Concerning COVID-19: A Fact-Finding Procedure Integrating Machine Learning & Multiple Criteria Decision-Making (MCDM) Technique. J. Inst. Eng. (India) Ser. B 2021, 102, 1249–1263. [Google Scholar] [CrossRef]
  96. Bhattacharya, G.; Ghosh, K.; Chowdhury, A.S. Granger Causality Driven AHP for Feature Weighted KNN. Pattern Recognit. 2017, 66, 425–436. [Google Scholar] [CrossRef]
  97. da Silva, D.C.; Batista, J.O.R.; de Sousa, M.A.F.; Mostaço, G.M.; de Castro Monteiro, C.; Bressan, G.; Cugnasca, C.E.; Silveira, R.M. A Novel Approach to Multi-Provider Network Slice Selector for 5G and Future Communication Systems. Sensors 2022, 22, 6066. [Google Scholar] [CrossRef]
  98. Ariyani, N.; Fauzi, A.; Umar, F. Predicting Determinant Factors and Development Strategy for Tourist Villages. Can. Decis. Sci. Lett. 2023, 12, 137–148. [Google Scholar] [CrossRef]
  99. Abdulla, A.; Baryannis, G.; Badi, I. An Integrated Machine Learning and MARCOS Method for Supplier Evaluation and Selection. Decis. Anal. J. 2023, 9, 100342. [Google Scholar] [CrossRef]
  100. Arabameri, A.; Yamani, M.; Pradhan, B.; Melesse, A.; Shirani, K.; Tien Bui, D. Novel Ensembles of COPRAS Multi-Criteria Decision-Making with Logistic Regression, Boosted Regression Tree, and Random Forest for Spatial Prediction of Gully Erosion Susceptibility. Sci. Total Environ. 2019, 688, 903–916. [Google Scholar] [CrossRef] [PubMed]
  101. Yiğit Uzunali, Ş.; Berberoğlu, S. Agricultural Land Suitability Analysis with Parametric and Nonparametric Techniques: The Case of Büyük Menderes River Basin, Türkiye. Comput. Electron. Agric. 2025, 229, 109754. [Google Scholar] [CrossRef]
  102. Sahoo, S.; Singha, C.; Govind, A. Prediction of Pulse Suitability in Rice Fallow Areas Using Fuzzy AHP-Based Machine Learning Methods in Eastern India. Paddy Water Environ. 2024, 22, 341–359. [Google Scholar] [CrossRef]
  103. Asiri, M.M.; Aldehim, G.; Alruwais, N.; Allafi, R.; Alzahrani, I.; Nouri, A.M.; Assiri, M.; Ahmed, N.A. Coastal Flood Risk Assessment Using Ensemble Multi-Criteria Decision-Making with Machine Learning Approaches. Environ. Res. 2024, 245, 118042. [Google Scholar] [CrossRef]
  104. Saha, A.; Villuri, V.G.K.; Bhardwaj, A. Development and Assessment of a Novel Hybrid Machine Learning-Based Landslide Susceptibility Mapping Model in the Darjeeling Himalayas. Stoch. Environ. Res. Risk Assess. 2023, 39, 4145–4168. [Google Scholar] [CrossRef]
  105. Devarakonda, P.; Sadasivuni, R.; Nobrega, R.A.A.; Wu, J. Application of Spatial Multicriteria Decision Analysis in Healthcare: Identifying Drivers and Triggers of Infectious Disease Outbreaks Using Ensemble Learning. J. Multi-Criteria Decis. Anal. 2022, 29, 23–36. [Google Scholar] [CrossRef]
  106. Costache, R.; Țîncu, R.; Elkhrachy, I.; Pham, Q.B.; Popa, M.C.; Diaconu, D.C.; Avand, M.; Costache, I.; Arabameri, A.; Bui, D.T. New Neural Fuzzy-Based Machine Learning Ensemble for Enhancing the Prediction Accuracy of Flood Susceptibility Mapping. Hydrol. Sci. J. 2020, 65, 2816–2837. [Google Scholar] [CrossRef]
  107. Albahri, A.S.; Joudar, S.S.; Hamid, R.A.; Zahid, I.A.; Alqaysi, M.E.; Albahri, O.S.; Alamoodi, A.H.; Kou, G.; Sharaf, I.M. Explainable Artificial Intelligence Multimodal of Autism Triage Levels Using Fuzzy Approach-Based Multi-Criteria Decision-Making and LIME. Int. J. Fuzzy Syst. 2024, 26, 274–303. [Google Scholar] [CrossRef]
  108. Roy, A.; Islam, M.; Karim, M.; Ahmed, K.A.; Khan, A.R.; Uddin, M.; Xames, M.D. Comparative Analysis of KNN and SVM in Multicriteria Inventory Classification Using TOPSIS. Int. J. Inf. Technol. 2023, 15, 3613–3622. [Google Scholar] [CrossRef]
  109. Nasiri Khiavi, A.; Vafakhah, M. Using Algorithmic Game Theory to Improve Supervised Machine Learning: A Novel Applicability Approach in Flood Susceptibility Mapping. Environ. Sci. Pollut. Res. 2024, 31, 52740–52757. [Google Scholar] [CrossRef]
  110. Kodipalli, A.; Devi, S. Prediction of PCOS and Mental Health Using Fuzzy Inference and SVM. Front. Public Health 2021, 9, 789569. [Google Scholar] [CrossRef] [PubMed]
  111. Das, B.; Desai, S.; Daripa, A.; Anand, G.C.; Kumar, U.; Khalkho, D.; Thangavel, V.; Kumar, N.; Obi Reddy, G.P.; Kumar, P. Land Degradation Vulnerability Mapping in a West Coast River Basin of India Using Analytical Hierarchy Process Combined Machine Learning Models. Environ. Sci. Pollut. Res. 2023, 30, 83975–83990. [Google Scholar] [CrossRef]
  112. Rai, A.K.; Malakar, S.; Goswami, S. Evaluating Seismic Risk by MCDM and Machine Learning for the Eastern Coast of India. Environ. Monit. Assess. 2024, 196, 471. [Google Scholar] [CrossRef] [PubMed]
  113. Debnath, A.; Tarafdar, A.; Reddy, A.P.; Bhattacharya, P. ROVM Integrated Advanced Machine Learning-Based Malaria Prediction Strategy in Tripura. J. Supercomput. 2024, 80, 15725–15762. [Google Scholar] [CrossRef]
  114. Lamrani, A.Y.; Benmir, M.; Aboulaich, R. Machine Learning Models Selection under Uncertainty: Application in Cancer Prediction. Math. Model. Comput. 2024, 11, 230–238. [Google Scholar] [CrossRef]
  115. Albahri, O.S.; Al-Obaidi, J.R.; Zaidan, A.A.; Albahri, A.S.; Zaidan, B.B.; Salih, M.M.; Qays, A.; Dawood, K.A.; Mohammed, R.T.; Abdulkareem, K.H.; et al. Helping Doctors Hasten COVID-19 Treatment: Towards a Rescue Framework for the Transfusion of Best Convalescent Plasma to the Most Critical Patients Based on Biological Requirements via ML and Novel MCDM Methods. Comput. Methods Programs Biomed. 2020, 196, 105617. [Google Scholar] [CrossRef]
  116. Antunes, J.; Hadi-Vencheh, A.; Jamshidi, A.; Tan, Y.; Wanke, P. TEA-IS: A Hybrid DEA-TOPSIS Approach for Assessing Performance and Synergy in Chinese Health Care. Decis. Support Syst. 2023, 171, 113916. [Google Scholar] [CrossRef]
  117. Samal, S.; Dash, R. Developing a Novel Stock Index Trend Predictor Model by Integrating Multiple Criteria Decision-Making with an Optimized Online Sequential Extreme Learning Machine. Granul. Comput. 2023, 8, 411–440. [Google Scholar] [CrossRef]
  118. Nafei, A.; Azizi, S.P.; Edalatpanah, S.A.; Huang, C.Y. Smart TOPSIS: A Neural Network-Driven TOPSIS with Neutrosophic Triplets for Green Supplier Selection in Sustainable Manufacturing. Expert Syst. Appl. 2024, 255, 124744. [Google Scholar] [CrossRef]
  119. Dohale, V.; Kamble, S.; Ambilkar, P.; Gold, S.; Belhadi, A. An Integrated MCDM-ML Approach for Predicting the Carbon Neutrality Index in Manufacturing Supply Chains. Technol. Forecast. Soc. Change 2024, 201, 123243. [Google Scholar] [CrossRef]
  120. Chen, Q.; Li, J.; Feng, J.; Qian, J. Dynamic Comprehensive Quality Assessment of Post-Harvest Grape in Different Transportation Chains Using SAHP–CatBoost Machine Learning. Food Qual. Saf. 2024, 8, fyae007. [Google Scholar] [CrossRef]
  121. Ijadi Maghsoodi, A.; Torkayesh, A.E.; Wood, L.C.; Herrera-Viedma, E.; Govindan, K. A Machine Learning Driven Multiple Criteria Decision Analysis Using LS-SVM Feature Elimination: Sustainability Performance Assessment with Incomplete Data. Eng. Appl. Artif. Intell. 2023, 119, 105785. [Google Scholar] [CrossRef]
  122. Davoodi, S.; Fereydooni, A.; Rastegar, M.A. Can Portfolio Construction Considering ESG Still Gain High Profits? Res. Int. Bus. Financ. 2024, 67, 102126. [Google Scholar] [CrossRef]
  123. Liu, Y.; Wen, X. Sustainability Assessment of Cities Using Multicriteria Decision-Making Combined with Deep Learning Methods. Sustain. Cities Soc. 2024, 111, 105571. [Google Scholar] [CrossRef]
  124. Minguez Salido, R.; Del Pozo Rubio, R.; García-Centeno, M.d.C. Financial Viability of Households in the Long-Term Care System in Spain: Regional Evidence. Stud. Appl. Econ. 2021, 39, 22. [Google Scholar] [CrossRef]
  125. Amiri, A.S.; Babaei, A.; Khedmati, M. Country-Level Assessment of COVID-19 Performance: A Cluster-Based MACONT-CRITIC Analysis. Appl. Soft Comput. 2025, 171, 112762. [Google Scholar] [CrossRef]
  126. Abdulla, A.; Baryannis, G. A Hybrid Multi-Criteria Decision-Making and Machine Learning Approach for Explainable Supplier Selection. Supply Chain Anal. 2024, 7, 100074. [Google Scholar] [CrossRef]
  127. Darko, A.P.; Liang, D. Modeling Customer Satisfaction through Online Reviews: A FlowSort Group Decision Model under Probabilistic Linguistic Settings. Expert Syst. Appl. 2022, 195, 116649. [Google Scholar] [CrossRef]
  128. Thakur, V.; Hossain, M.K.; Mangla, S.K. Factors to Vaccine Cold Chain Management for Sustainable and Resilient Healthcare Delivery. J. Clean. Prod. 2024, 434, 140116. [Google Scholar] [CrossRef]
  129. Khan, Z.; Mohsin, M.; Ali, S.A.; Vashishtha, D.; Husain, M.; Parveen, A.; Shamim, S.K.; Parvin, F.; Anjum, R.; Jawaid, S.; et al. Comparing the Performance of Machine Learning Algorithms for Groundwater Mapping in Delhi. J. Indian. Soc. Remote Sens. 2024, 52, 17–39. [Google Scholar] [CrossRef]
  130. Amato, G.; Behrmann, M.; Bimbot, F.; Caramiaux, B.; Falchi, F.; Garcia, A.; Geurts, J.; Gibert, J.; Gravier, G.; Holken, H.; et al. AI in the Media and Creative Industries. arXiv 2019, arXiv:1905.04175. [Google Scholar] [CrossRef]
  131. Avdeeff, M. Artificial Intelligence & Popular Music: SKYGGE, Flow Machines, and the Audio Uncanny Valley. Arts 2019, 8, 130. [Google Scholar] [CrossRef]
  132. Corazza, G.E. Potential Originality and Effectiveness: The Dynamic Definition of Creativity. Creat. Res. J. 2016, 28, 258–267. [Google Scholar] [CrossRef]
  133. Nakhaeizadeh, G. Development of Multi-Criteria Metrics for Evaluation of Data Mining Algorithms. In Proceedings of the 3rd International Conference on Knowledge Discovery and Data Mining (KDD), Newport Beach, CA, USA, 14 August 1997; pp. 37–42. [Google Scholar]
  134. Zhou, H.; Zhou, Y.; Zhang, H.; Huang, H.; Li, W. Botzone: A Competitive and Interactive Platform for Game AI Education. In Proceedings of the ACM Turing 50th Celebration Conference, Shanghai, China, 12 May 2017; ACM: New York, NY, USA, 2017; pp. 1–5. [Google Scholar]
  135. Wang, H.; Sui, L.; Bian, J.; Yu, H.; Li, G. Integrated Operation Risk Assessment of Distribution Network Based on Improved Subjective and Objective Combination Weighting and ISODATA. Electr. Power Syst. Res. 2024, 233, 110469. [Google Scholar] [CrossRef]
  136. Mao, Q.; Gao, Y.; Fan, J. An Integrated MCDM Framework for Tidal Current Power Plant Site Selection Based on Interval 2-Tuple Linguistic. Reg. Stud. Mar. Sci. 2024, 74, 103518. [Google Scholar] [CrossRef]
  137. Tian, L. Development of Online Music Education Autonomous Learning. In Proceedings of the IOP Conference Series: Materials Science and Engineering, Shenyang, China, 27 December 2020; IOP Publishing: Bristol, UK, 2020; p. 012012. [Google Scholar]
  138. Sidana, M. A Review of the Use of Artificial Intelligence in the Field of Education. Int. J. Artif. Intell. Mach. Learn. 2019, 1. Available online: https://www.ijaiml.com/volume-1-issue-3-paper-1/ (accessed on 1 December 2025).
  139. Bueno, I.; Carrasco, R.A.; Ureña, R.; Herrera-Viedma, E. A Business Context Aware Decision-Making Approach for Selecting the Most Appropriate Sentiment Analysis Technique in e-Marketing Situations. Inf. Sci. 2022, 589, 300–320. [Google Scholar] [CrossRef]
  140. Velmurugan, M.; Ouyang, C.; Moreira, C.; Sindhgatta, R. Evaluating Fidelity of Explainable Methods for Predictive Process Analytics; Springer: Cham, Switzerland, 2021; Volume 424, ISBN 9783030791070. [Google Scholar]
  141. Saroja, S.; Haseena, S.; Madavan, R. Dissolved Gas Analysis of Transformer: An Approach Based on ML and MCDM. IEEE Trans. Dielectr. Electr. Insul. 2023, 30, 2429–2438. [Google Scholar] [CrossRef]
  142. Khurshid, S.; Loganathan, B.K.; Duvinage, M. Comparative Evaluation of Applicability Domain Definition Methods for Regression Models. arXiv 2024, arXiv:2411.00920. [Google Scholar] [CrossRef]
  143. Leung, L. Validity, Reliability, and Generalizability in Qualitative Research. J. Fam. Med. Prim. Care 2015, 4, 324. [Google Scholar] [CrossRef]
  144. Maleki, F.; Ovens, K.; Gupta, R.; Reinhold, C.; Spatz, A.; Forghani, R. Generalizability of Machine Learning Models: Quantitative Evaluation of Three Methodological Pitfalls. Radiol. Artif. Intell. 2022, 5, e220028. [Google Scholar] [CrossRef]
  145. Jamshidi, F.; Marghitu, D.; Chapman, R. Developing an Online Music Teaching and Practicing Platform via Machine Learning: A Review Paper. In Proceedings of the International Conference on Human-Computer Interaction, Washington, DC, USA, 24 July 2021; Springer: Cham, Switzerland, 2021; Volume 12769, pp. 95–108. [Google Scholar]
  146. Ivanov, D.; Das, A.; Choi, T.M. New Flexibility Drivers for Manufacturing, Supply Chain and Service Operations. Int. J. Prod. Res. 2018, 56, 3359–3368. [Google Scholar] [CrossRef]
  147. Kumar, R.; Althaqafi, E.; Patro, S.G.K.; Simic, V.; Babbar, A.; Pamucar, D.; Singh, S.K.; Verma, A. Machine and Deep Learning Methods for Concrete Strength Prediction: A Bibliometric and Content Analysis Review of Research Trends and Future Directions. Appl. Soft Comput. 2024, 164, 111956. [Google Scholar] [CrossRef]
  148. Falatah, M.M.; Batarfi, O.A. Cloud Scalability Considerations. Int. J. Comput. Sci. Eng. Surv. 2014, 5, 37. [Google Scholar] [CrossRef]
  149. Bhanarkar, N.; Paul, A.; Mehta, A. Responsive Web Design and Its Impact on User Experience. Int. J. Adv. Res. Sci. Commun. Technol. 2023, 3, 50–55. [Google Scholar] [CrossRef]
  150. Narver, J.C.; Slater, S.F.; MacLachlan, D.L. Responsive and Proactive Market Orientation and New-Product Success. J. Prod. Innov. Manag. 2004, 21, 334–347. [Google Scholar] [CrossRef]
  151. Fruchter, G.E.; Wiszniewska-Matyszkiel, A. How Responsive Should a Firm Be to Customers’ Expectations? Eur. J. Oper. Res. 2024, 314, 323–339. [Google Scholar] [CrossRef]
  152. Liu, H.; Zhao, Y.; Gu, C.; Ge, S.; Yang, Z. Adjustable Capability of the Distributed Energy System: Definition, Framework, and Evaluation Model. Energy 2021, 222, 119674. [Google Scholar] [CrossRef]
  153. Yeh, T.M.; Lu, H.Y.; Pai, F.Y. Applying Multi-Criteria Decision Analysis Methods to Explore the Key Factors in Using Interactive Intelligent Health Promotion Equipment. Sage Open 2025, 15, 21582440251327474. [Google Scholar] [CrossRef]
  154. Antunes, D.R.; Rodrigues, J.D. Endless Running Game to Support Sign Language Learning by Deaf Children. In Lecture Notes in Computer Science; Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics; Springer: Berlin/Heidelberg, Germany, 2021; Volume 12769, pp. 25–40. [Google Scholar]
  155. Deissenboeck, F.; Wagner, S.; Pizka, M.; Teuchert, S.; Girard, J.F. An Activity-Based Quality Model for Maintainability. In Proceedings of the IEEE International Conference on Software Maintenance ICSM 2007, Paris, France, 2–5 October 2007; pp. 184–193. [Google Scholar] [CrossRef]
  156. Olamide, K.; ‘Shade, K.; Monday, E.; Oludele, A. Autonomous Systems and Reliability Assessment: A Systematic Review. Am. J. Artif. Intell. 2020, 4, 30. [Google Scholar] [CrossRef]
  157. Mazzei, D.; Ramjattan, R. Machine Learning for Industry 4.0: A Systematic Review Using Deep Learning-Based Topic Modelling. Sensors 2022, 22, 8641. [Google Scholar] [CrossRef]
  158. Löfström, H.; Hammar, K.; Johansson, U. A Meta Survey of Quality Evaluation Criteria in Explanation Methods. Lect. Notes Bus. Inf. Process. 2022, 452, 55–63. [Google Scholar] [CrossRef]
  159. Fermanian, J.D.; Xidonas, P.; Corrente, S. Machine Learning & Fairness: An Integrated Multicriteria Approach for the Evaluation of Supervised Classifiers. J. Oper. Res. Soc. 2025, 1–13. [Google Scholar] [CrossRef]
  160. Qasim Jebur Al-Zaidawi, M.; Çevik, M. Advanced Deep Learning Models for Improved IoT Network Monitoring Using Hybrid Optimization and MCDM Techniques. Symmetry 2025, 17, 388. [Google Scholar] [CrossRef]
  161. Kumar, A.; Kaur, K. A Novel MCDM-Based Framework to Recommend Machine Learning Techniques for Diabetes Prediction. Int. J. Eng. Technol. Innov. 2024, 14, 29–43. [Google Scholar] [CrossRef]
  162. Angaitkar, P.; Ram Janghel, R.; Prasad Sahu, T. An MCDM Approach for Reverse Vaccinology Model to Predict Bacterial Protective Antigens. Vaccine 2024, 42, 3874–3882. [Google Scholar] [CrossRef]
  163. Ghiasi, Y.; Seifbarghy, M.; Pishva, D. Diabetes Detection via Machine Learning Using Four Implemented Spanning Tree Algorithms. J. Optim. Ind. Eng. 2024, 36, 1. [Google Scholar] [CrossRef]
  164. Dhiman, B.; Kamboj, S.; Srivastava, V. Explainable AI Based Efficient Ensemble Model for Breast Cancer Classification Using Optical Coherence Tomography. Biomed. Signal Process. Control. 2024, 91, 106007. [Google Scholar] [CrossRef]
  165. Shayea, G.G.; Zabil, M.H.M.; Albahri, A.S.; Joudar, S.S.; Hamid, R.A.; Albahri, O.S.; Alamoodi, A.H.; Zahid, I.A.; Sharaf, I.M. Fuzzy Evaluation and Benchmarking Framework for Robust Machine Learning Model in Real-Time Autism Triage Applications. Int. J. Comput. Intell. Syst. 2024, 17, 151. [Google Scholar] [CrossRef]
  166. Aljohani, A. Optimizing Patient Stratification in Healthcare: A Comparative Analysis of Clustering Algorithms for EHR Data. Int. J. Comput. Intell. Syst. 2024, 17, 173. [Google Scholar] [CrossRef]
  167. Wyrembek, M.; Baryannis, G. Using MCDM Methods to Optimise Machine Learning Decisions for Supply Chain Delay Prediction: A Stakeholder-Centric Approach. Logforum 2024, 20, 175–189. [Google Scholar] [CrossRef]
  168. Basu, S.; Agarwal, R.; Srivastava, V. Development of an Intelligent Full-Field Polarization Sensitive Optical Coherence Tomography for Breast Cancer Classification. J. Biophotonics 2023, 16, e202200385. [Google Scholar] [CrossRef]
  169. Nasiri Khiavi, A.; Mostafazadeh, R.; Adhami, M. Groundwater Quality Modeling and Determining Critical Points: A Comparison of Machine Learning to Best-Worst Method. Environ. Sci. Pollut. Res. 2023, 30, 115758–115775. [Google Scholar] [CrossRef]
  170. Xu, C.; Zhou, K.; Xiong, X.; Gao, F.; Lu, Y. Prediction of Mining Induced Subsidence by Sparrow Search Algorithm with Extreme Gradient Boosting and TOPSIS Method. Acta Geotech. 2023, 18, 4993–5009. [Google Scholar] [CrossRef]
  171. Uzun Ozsahin, D.; Onakpojeruo, E.P.; Uzun, B.; Mustapha, M.T.; Ozsahin, I. Mathematical Assessment of Machine Learning Models Used for Brain Tumor Diagnosis. Diagnostics 2023, 13, 618. [Google Scholar] [CrossRef] [PubMed]
  172. Salih, M.M.; Ahmed, M.A.; Al-Bander, B.; Hasan, K.F.; Shuwandy, M.L.; Al-Qaysi, Z.T. Benchmarking Framework for COVID-19 Classification Machine Learning Method Based on Fuzzy Decision by Opinion Score Method. Iraqi J. Sci. 2023, 64, 922–943. [Google Scholar] [CrossRef]
  173. Das, R.; Saleh, S.; Nielsen, I.; Kaviraj, A.; Sharma, P.; Dey, K.; Saha, S. Performance Analysis of Machine Learning Algorithms and Screening Formulae for β-Thalassemia Trait Screening of Indian Antenatal Women. Int. J. Med. Inf. 2022, 167, 104866. [Google Scholar] [CrossRef] [PubMed]
  174. do Amaral, J.V.S.; de Carvalho Miranda, R.; Montevechi, J.A.B.; dos Santos, C.H.; da Silva, A.F. Data Envelopment Analysis for Algorithm Efficiency Assessment in Metamodel-Based Simulation Optimization. Int. J. Adv. Manuf. Technol. 2022, 121, 7493–7507. [Google Scholar] [CrossRef]
  175. Chowdhury, N.K.; Kabir, M.A.; Rahman, M.M.; Islam, S.M.S. Machine Learning for Detecting COVID-19 from Cough Sounds: An Ensemble-Based MCDM Method. Comput. Biol. Med. 2022, 145, 105405. [Google Scholar] [CrossRef]
  176. Gayathri, R.; Rani, S.U.; Čepová, L.; Rajesh, M.; Kalita, K. A Comparative Analysis of Machine Learning Models in Prediction of Mortar Compressive Strength. Processes 2022, 10, 1387. [Google Scholar] [CrossRef]
  177. Tripathy, J.; Dash, R.; Pattanayak, B.K.; Mishra, S.K.; Mishra, T.K.; Puthal, D. Combination of Reduction Detection Using TOPSIS for Gene Expression Data Analysis. Big Data Cogn. Comput. 2022, 6, 24. [Google Scholar] [CrossRef]
  178. Kadkhodazadeh, M.; Anaraki, M.V.; Morshed-Bozorgdel, A.; Farzin, S. A New Methodology for Reference Evapotranspiration Prediction and Uncertainty Analysis under Climate Change Conditions Based on Machine Learning, Multi Criteria Decision Making and Monte Carlo Methods. Sustainability 2022, 14, 2601. [Google Scholar] [CrossRef]
  179. Al-Mhiqani, M.N.; Ahmad, R.; Abidin, Z.Z.; Abdulkareem, K.H.; Mohammed, M.A.; Gupta, D.; Shankar, K. A New Intelligent Multilayer Framework for Insider Threat Detection. Comput. Electr. Eng. 2022, 97, 107597. [Google Scholar] [CrossRef]
  180. Mallidis, I.; Yakavenka, V.; Konstantinidis, A.; Sariannidis, N. A Goal Programming-Based Methodology for Machine Learning Model Selection Decisions: A Predictive Maintenance Application. Mathematics 2021, 9, 2405. [Google Scholar] [CrossRef]
  181. Seifi, A.; Ehteram, M.; Dehghani, M. A Robust Integrated Bayesian Multi-Model Uncertainty Estimation Framework (IBMUEF) for Quantifying the Uncertainty of Hybrid Meta-Heuristic in Global Horizontal Irradiation Predictions. Energy Convers. Manag. 2021, 241, 114292. [Google Scholar] [CrossRef]
  182. Sharma, G.; Kotia, A.; Ghosh, S.K.; Rana, P.S.; Bawa, S.; Ali, M.K.A. Kinematic Viscosity Prediction of Nanolubricants Employed in Heavy Earth Moving Machinery Using Machine Learning Techniques. Int. J. Precis. Eng. Manuf. 2020, 21, 1921–1932. [Google Scholar] [CrossRef]
  183. Mohammed, M.A.; Abdulkareem, K.H.; Al-Waisy, A.S.; Mostafa, S.A.; Al-Fahdawi, S.; Dinar, A.M.; Alhakami, W.; Baz, A.; Al-Mhiqani, M.N.; Alhakami, H.; et al. Benchmarking Methodology for Selection of Optimal COVID-19 Diagnostic Model Based on Entropy and TOPSIS Methods. IEEE Access 2020, 8, 99115–99131. [Google Scholar] [CrossRef]
  184. Reščič, N.; Eftimov, T.; Seljak, B.K.; Luštrek, M. Optimising an FFQ Using a Machine Learning Pipeline to Teach an Efficient Nutrient Intake Predictive Model. Nutrients 2020, 12, 3789. [Google Scholar] [CrossRef] [PubMed]
  185. Song, Y.; Peng, Y. A MCDM-Based Evaluation Approach for Imbalanced Classification Methods in Financial Risk Prediction. IEEE Access 2019, 7, 84897–84906. [Google Scholar] [CrossRef]
  186. Lo, Y.T.; Fujita, H.; Pai, T.W. Prediction of Coronary Artery Disease Based on Ensemble Learning Approaches and Co-Expressed Observations. J. Mech. Med. Biol. 2016, 16, 1640010. [Google Scholar] [CrossRef]
  187. Khademolqorani, S.; Zeinal Hamadani, A.; Mokhatab Rafiei, F. A Hybrid Analysis Approach to Improve Financial Distress Forecasting: Empirical Evidence from Iran. Math. Probl. Eng. 2015, 2015, 178197. [Google Scholar] [CrossRef]
  188. Kou, G.; Peng, Y.; Lu, C. MCDM Approach to Evaluating Bank Loan Default Models. Technol. Econ. Dev. Econ. 2014, 20, 292–311. [Google Scholar] [CrossRef]
  189. Kumar, A.; Das, M.; Pramanik, M.; Baghel, T.; Mukhopadhyay, A. Urbanization and Groundwater Resilience: Pre- and Post-Monsoon Mapping Using AHP and Hybrid Machine Learning Modelling. Int. J. River Basin Manag. 2025, 1–25. [Google Scholar] [CrossRef]
  190. Hussain, J.; Ali, N.; Fu, X.; Chen, J.; Iqbal, S.M.; Hussain, A.; Salam, H. Geospatial Mapping of Potential Aggregate Resources Using Integrated GIS-AHP, Geotechnical, Petrographic and Machine Learning Approaches. Earth Sci. Inf. 2025, 18, 336. [Google Scholar] [CrossRef]
  191. Zhao, L.Q.; Dragićević, S.; Balram, S.; Perez, L. Assessing the Number of Criteria in GIS-Based Multicriteria Evaluation: A Machine Learning Approach. Geogr. Anal. 2025, 57, 489–506. [Google Scholar] [CrossRef]
  192. Chauhan, V.; Gupta, L.; Dixit, J. Landslide Susceptibility Assessment for Uttarakhand, a Himalayan State of India, Using Multi-Criteria Decision Making, Bivariate, and Machine Learning Models. Geoenviron. Disasters 2025, 12, 2. [Google Scholar] [CrossRef]
  193. Asadollahzadeh, D.; Behnam, B. Machine Learning Approaches for Seismic Vulnerability Assessment of Urban Buildings: A Comparative Study with Analytic Hierarchy Process. Prog. Disaster Sci. 2025, 25, 100398. [Google Scholar] [CrossRef]
  194. Rahman, M.; Ningsheng, C.; Islam, M.M.; Dewan, A.; Iqbal, J.; Washakh, R.M.A.; Shufeng, T. Flood Susceptibility Assessment in Bangladesh Using Machine Learning and Multi-Criteria Decision Analysis. Earth Syst. Environ. 2019, 3, 585–601. [Google Scholar] [CrossRef]
  195. Kshetrimayum, A.; Ramesh, H.; Goyal, A. Exploring Different Approaches for Landslide Susceptibility Zonation Mapping in Manipur: A Comparative Study of AHP, FR, Machine Learning, and Deep Learning Models. J. Spat. Sci. 2024, 1–30. [Google Scholar] [CrossRef]
  196. Khalid, R.; Khan, U.T. Flood Susceptibility Mapping Using ANNs: A Case Study in Model Generalization and Accuracy from Ontario, Canada. Geocarto Int. 2024, 39, 2316653. [Google Scholar] [CrossRef]
  197. Das, R.; Chattoraj, S.L.; Singh, M.; Bisht, A. Synergetic Use of Geospatial and Machine Learning Techniques in Modelling Landslide Susceptibility in Parts of Shimla to Kinnaur National Highway, Himachal Pradesh. Model. Earth Syst. Environ. 2024, 10, 4163–4183. [Google Scholar] [CrossRef]
  198. Khuc, T.D.; Truong, X.Q.; Tran, V.A.; Bui, D.Q.; Bui, D.P.; Ha, H.; Tran, T.H.M.; Pham, T.T.T.; Yordanov, V. Comparison of Multi-Criteria Decision Making, Statistics, and Machine Learning Models for Landslide Susceptibility Mapping in Van Yen District, Yen Bai Province, Vietnam. Int. J. Geoinform. 2023, 19, 33–45. [Google Scholar] [CrossRef]
  199. Jari, A.; Khaddari, A.; Hajaj, S.; Bachaoui, E.M.; Mohammedi, S.; Jellouli, A.; Mosaid, H.; El Harti, A.; Barakat, A. Landslide Susceptibility Mapping Using Multi-Criteria Decision-Making (MCDM), Statistical, and Machine Learning Models in the Aube Department, France. Earth 2023, 4, 698–713. [Google Scholar] [CrossRef]
  200. Khanorkar, Y.; Kane, P.V. Selective Inventory Classification Using ABC Classification, Multi-Criteria Decision Making Techniques, and Machine Learning Techniques. Mater. Today Proc. 2023, 72, 1270–1274. [Google Scholar] [CrossRef]
  201. Chen, C.-W. A Feasibility Discussion: Is ML Suitable for Predicting Sustainable Patterns in Consumer Product Preferences? Sustainability 2023, 15, 3983. [Google Scholar] [CrossRef]
  202. Achu, A.L.; Thomas, J.; Aju, C.D.; Remani, P.K.; Gopinath, G. Performance Evaluation of Machine Learning and Statistical Techniques for Modelling Landslide Susceptibility with Limited Field Data. Earth Sci. Inf. 2023, 16, 1025–1039. [Google Scholar] [CrossRef]
  203. Touati, I.; Ellouze, M.; Graja, M.; Hadrich Belguith, L. Appraisal of Two Arabic Opinion Summarization Methods: Statistical Versus Machine Learning. Comput. J. 2022, 65, 192–202. [Google Scholar] [CrossRef]
  204. Aslam, B.; Maqsoom, A.; Khalil, U.; Ghorbanzadeh, O.; Blaschke, T.; Farooq, D.; Tufail, R.F.; Suhail, S.A.; Ghamisi, P. Evaluation of Different Landslide Susceptibility Models for a Local Scale in the Chitral District, Northern Pakistan. Sensors 2022, 22, 3107. [Google Scholar] [CrossRef]
  205. Yazici, I.; Beyca, O.F.; Gurcan, O.F.; Zaim, H.; Delen, D.; Zaim, S. A Comparative Analysis of Machine Learning Techniques and Fuzzy Analytic Hierarchy Process to Determine the Tacit Knowledge Criteria. Ann. Oper. Res. 2022, 308, 753–776. [Google Scholar] [CrossRef]
  206. Saha, R.; Ginwal, H.S.; Chandra, G.; Barthwal, S. A Comparative Study on Grey Relational Analysis and C5.0 Classification Algorithm on Adventitious Rhizogenesis of Eucalyptus. Trees-Struct. Funct. 2021, 35, 43–52. [Google Scholar] [CrossRef]
  207. Vojtek, M.; Vojteková, J.; Costache, R.; Pham, Q.B.; Lee, S.; Arshad, A.; Sahoo, S.; Linh, N.T.T.; Anh, D.T. Comparison of Multi-Criteria-Analytical Hierarchy Process and Machine Learning-Boosted Tree Models for Regional Flood Susceptibility Mapping: A Case Study from Slovakia. Geomat. Nat. Hazards Risk 2021, 12, 1153–1180. [Google Scholar] [CrossRef]
  208. Kumar, R.; Dwivedi, S.B.; Gaur, S. A Comparative Study of Machine Learning and Fuzzy-AHP Technique to Groundwater Potential Mapping in the Data-Scarce Region. Comput. Geosci. 2021, 155, 104855. [Google Scholar] [CrossRef]
  209. Ali, S.A.; Parvin, F.; Pham, Q.B.; Vojtek, M.; Vojteková, J.; Costache, R.; Linh, N.T.T.; Nguyen, H.Q.; Ahmad, A.; Ghorbani, M.A. GIS-Based Comparative Assessment of Flood Susceptibility Mapping Using Hybrid Multi-Criteria Decision-Making Approach, Naïve Bayes Tree, Bivariate Statistics and Logistic Regression: A Case of Topľa Basin, Slovakia. Ecol. Indic. 2020, 117, 106620. [Google Scholar] [CrossRef]
  210. Arabameri, A.; Roy, J.; Saha, S.; Blaschke, T.; Ghorbanzadeh, O.; Bui, D.T. Application of Probabilistic and Machine Learning Models for Groundwater Potentiality Mapping in Damghan Sedimentary Plain, Iran. Remote Sens. 2019, 11, 3015. [Google Scholar] [CrossRef]
  211. Naderpour, M.; Rizeei, H.M.; Khakzad, N.; Pradhan, B. Forest Fire Induced Natech Risk Assessment: A Survey of Geospatial Technologies. Reliab. Eng. Syst. Saf. 2019, 191, 106558. [Google Scholar] [CrossRef]
  212. Baccour, L. Amended Fused TOPSIS-VIKOR for Classification (ATOVIC) Applied to Some UCI Data Sets. Expert Syst. Appl. 2018, 99, 115–125. [Google Scholar] [CrossRef]
  213. Marjanović, M.; Kovačević, M.; Bajat, B.; Voženílek, V. Landslide Susceptibility Assessment Using SVM Machine Learning Algorithm. Eng. Geol. 2011, 123, 225–234. [Google Scholar] [CrossRef]
  214. Hu, Y.C. Bankruptcy Prediction Using ELECTRE-Based Single-Layer Perceptron. Neurocomputing 2009, 72, 3150–3157. [Google Scholar] [CrossRef]
  215. Wu, D.; Olson, D.L. A TOPSIS Data Mining Demonstration and Application to Credit Scoring. Int. J. Data Warehous. Min. 2006, 2, 16–26. [Google Scholar] [CrossRef]
  216. Liu, F.; Liao, H.; Al-Barakati, A. Physician Selection Based on User-Generated Content Considering Interactive Criteria and Risk Preferences of Patients. Omega 2023, 115, 102784. [Google Scholar] [CrossRef]
  217. Gharibi, A.; Babazadeh, R.; Hasanzadeh, R. Machine Learning and Multi-Criteria Decision Analysis for Polyethylene Air-Gasification Considering Energy and Environmental Aspects. Process Saf. Environ. Prot. 2024, 183, 46–58. [Google Scholar] [CrossRef]
  218. Sari, F. Assessment of the Effects of Different Variable Weights on Wildfire Susceptibility. Eur. J. For. Res. 2024, 143, 651–670. [Google Scholar] [CrossRef]
  219. Khoshvaght, P.; Tanveer, J.; Rahmani, A.M.; Mohammadi, M.; Mehranzadeh, A.; Lansky, J.; Hosseinzadeh, M. H-TERF: A Hybrid Approach Combining Fuzzy Multi-Criteria Decision-Making Techniques and Enhanced Random Forest to Improve WBAN-IoT. Internet Things 2025, 32, 101613. [Google Scholar] [CrossRef]
  220. Wang, Z.; Liu, H.; Fan, X. Hybrid Machine Learning and MCDM Framework for Consumer Preference Extraction and Decision Support in Dynamic Markets. Technol. Soc. 2025, 82, 102926. [Google Scholar] [CrossRef]
  221. Aggarwal, A.G.; Aggarwal, S.; Jindal, V. Ranking of Hotels Using Customer Reviews: An LDA—Picture Fuzzy TOPSIS Approach. Int. J. Syst. Assur. Eng. Manag. 2025, 16, 1885–1898. [Google Scholar] [CrossRef]
  222. Liu, X.; Lyu, H.-M.; Shen, S.-L. Assessment of Geo-Disaster Risk Levels Induced by Extreme Rainfall Using Integrated FCM-VIKOR Approach. Georisk: Assess. Manag. Risk Eng. Syst. Geohazards 2025, 19, 755–774. [Google Scholar] [CrossRef]
  223. Ajin, M.L.; Moses, J.; Priya, M.; Sayın, F.E.; Topaloğlu, G.; Ozbay, B.; Danach, K.; Harb, H.; Ramadan, A.; Haddad, S. Enhancing Multi-Criteria Decision-Making in Blockchain Security: A Hybrid Machine Learning and PROMETHEE Approach. Eng. Res. Express 2025, 7, 0352c6. [Google Scholar] [CrossRef]
  224. Saini, R.; Vaidya, O.S.; Venkitasubramony, R.; Daultani, Y. Identifying Critical Criteria for Warehouse Performance Using Machine Learning Based Hybrid Methodology. OPSEARCH 2025, 388. [Google Scholar] [CrossRef]
  225. Wu, B.; Hu, Z.; Gu, Z.; Zheng, Y.; Lv, J. Credit Evaluation of Technology-Based Small and Micro Enterprises: An Innovative Weighting Method Based on Machine Learning and AHP. Data 2025, 10, 9. [Google Scholar] [CrossRef]
  226. Arslan, A.E.; Arslan, O. Machine Learning-Based Multi-Criteria Decision-Making Optimization of a Geothermal Integrated System. Geothermics 2025, 133, 103472. [Google Scholar] [CrossRef]
  227. Kanji, S.; Das, S. Assessing Groundwater Potentialities and Replenishment Feasibility Using Machine Learning and MCDM Models Considering Hydro-Geological Aspects and Water Quality Constituents. Environ. Earth Sci. 2025, 84, 16. [Google Scholar] [CrossRef]
  228. Tufail, F.; Gul, R.; Shabir, M.; Khalaf Alharbi, S.; Abd El-Wahed Khalifa, H. An Enhanced Machine Learning Covering-Based Bipolar L-Fuzzy Rough Set PROMETHEE Model for Battery Storage Systems in Renewable Energy. Expert Syst. Appl. 2025, 287, 127951. [Google Scholar] [CrossRef]
  229. Huo, G.; Liu, X.; Chen, T. Prediction of Physical Fitness and Performance of Wushu Athletes Based on Machine Learning and Fuzzy TOPSIS Method. Entertain. Comput. 2025, 55, 101017. [Google Scholar] [CrossRef]
  230. Salehi, A.; Alimohammadi, M.; Khedmati, M.; Ghousi, R. Spatial-Temporal Dynamics in Country-Level Sustainable Energy Performance Using Ensemble Learning and Analytic Hierarchy Process. J. Clean. Prod. 2025, 508, 145497. [Google Scholar] [CrossRef]
  231. Muhammadun; Jannaty, B.; Thinakaran, R.; Rachman, T. Support Vector Machine with Rule Extraction to Improve Diabetes Prediction Using Fuzzy AHP-Sugeno and Nearest Neighbor. Int. J. Adv. Comput. Sci. Appl. 2025, 16, 731–740. [Google Scholar] [CrossRef]
  232. Zhou, W.; Xie, Z. Sealing Rubber Ring Design Based on Machine Learning Algorithm Combined Progressive Optimization Method. Tribol. Int. 2025, 201, 110173. [Google Scholar] [CrossRef]
  233. Apichonbancha, P.; Lin, R.H.; Chuang, C.L. Integration of Principal Component Analysis with AHP-QFD for Improved Product Design Decision-Making. Appl. Sci. 2024, 14, 5976. [Google Scholar] [CrossRef]
  234. von Linde, H.; Riedel, O. A Methodology for Evaluating Feature Selection and Clustering Methods with Project-Specific Requirements. Int. J. Prod. Res. 2025, 63, 1692–1706. [Google Scholar] [CrossRef]
  235. Panigrahi, G.R.; Sethy, P.K.; Behera, S.K.; Gupta, M.; Alenizi, F.A.; Suanpang, P.; Nanthaamornphong, A. Analytical Validation and Integration of CIC-Bell-DNS-EXF-2021 Dataset on Security Information and Event Management. IEEE Access 2024, 12, 83043–83056. [Google Scholar] [CrossRef]
  236. Zakeri, S.; Konstantas, D.; Sorooshian, S.; Chatterjee, P. A Novel ML-MCDM-Based Decision Support System for Evaluating Autonomous Vehicle Integration Scenarios in Geneva’s Public Transportation. Artif. Intell. Rev. 2024, 57, 310. [Google Scholar] [CrossRef]
  237. Saranya, A.; Al Mazroa, A.; Maashi, M.; Nithya, T.M.; Priya, V. Remote Sensing and Machine Learning Approach for Zoning of Wastewater Drainage System. Desalination Water Treat. 2024, 319, 100549. [Google Scholar] [CrossRef]
238. Joe Anand, M.C.; Kalaiarasi, K.; Martin, N.; Ranjitha, B.; Priyadharshini, S.S.; Tiwari, M. Fuzzy C-Means Clustering with MAIRCA-MCDM Method in Classifying Feasible Logistic Suppliers of Electrical Products. In Proceedings of the 2023 1st International Conference on Cyber Physical Systems, Power Electronics and Electric Vehicles, ICPEEV 2023, Hyderabad, India, 28 September 2023; Institute of Electrical and Electronics Engineers Inc.: Hyderabad, India, 2023; pp. 1–7. [Google Scholar]
  239. Xie, S.; Zhang, J. TOPSIS-Based Comprehensive Measure of Variable Importance in Predictive Modelling. Expert Syst. Appl. 2023, 232, 120682. [Google Scholar] [CrossRef]
  240. Sun, C.; Wang, K.; Liu, Q.; Wang, P.; Pan, F. Machine-Learning-Based Comprehensive Properties Prediction and Mixture Design Optimization of Ultra-High-Performance Concrete. Sustainability 2023, 15, 15338. [Google Scholar] [CrossRef]
  241. Milosavljević, M.; Radovanović, S.; Delibašić, B. What Drives the Performance of Tax Administrations? Evidence from Selected European Countries. Econ. Model. 2023, 121, 106217. [Google Scholar] [CrossRef]
  242. Alves, M.A.; Meneghini, I.R.; Gaspar-Cunha, A.; Guimarães, F.G. Machine Learning-Driven Approach for Large Scale Decision Making with the Analytic Hierarchy Process. Mathematics 2023, 11, 627. [Google Scholar] [CrossRef]
  243. Yilmaz, I.; Adem, A.; Dağdeviren, M. A Machine Learning-Integrated Multi-Criteria Decision-Making Approach Based on Consensus for Selection of Energy Storage Locations. J. Energy Storage 2023, 69, 107941. [Google Scholar] [CrossRef]
  244. Biswas, S.; Singh, Y.; Mukherjee, M.; Datta, S.; Barman, S.; Raja, M. Design of Multi-Material Model for Wire Electro-Discharge Machining of SS304 and SS316 Using Machine Learning and MCDM Techniques. Arab. J. Sci. Eng. 2022, 47, 15755–15778. [Google Scholar] [CrossRef]
  245. Asan, U.; Soyer, A. A Weighted Bonferroni-OWA Operator Based Cumulative Belief Degree Approach to Personnel Selection Based on Automated Video Interview Assessment Data. Mathematics 2022, 10, 1582. [Google Scholar] [CrossRef]
  246. Song, Y.; Thatcher, D.; Li, Q.; McHugh, T.; Wu, P. Developing Sustainable Road Infrastructure Performance Indicators Using a Model-Driven Fuzzy Spatial Multi-Criteria Decision Making Method. Renew. Sustain. Energy Rev. 2021, 138, 110538. [Google Scholar] [CrossRef]
  247. Pham, B.T.; Luu, C.; Phong, T.V.; Nguyen, H.D.; Le, H.V.; Tran, T.Q.; Ta, H.T.; Prakash, I. Flood Risk Assessment Using Hybrid Artificial Intelligence Models Integrated with Multi-Criteria Decision Analysis in Quang Nam Province, Vietnam. J. Hydrol. 2021, 592, 125815. [Google Scholar] [CrossRef]
  248. Ahani, A.; Nilashi, M.; Yadegaridehkordi, E.; Sanzogni, L.; Tarik, A.R.; Knox, K.; Samad, S.; Ibrahim, O. Revealing Customers’ Satisfaction and Preferences through Online Review Analysis: The Case of Canary Islands Hotels. J. Retail. Consum. Serv. 2019, 51, 331–343. [Google Scholar] [CrossRef]
  249. Deng, S.; Zhang, J.; Zhang, C.; Luo, M.; Ni, M.; Li, Y.; Zeng, T. Prediction and Optimization of Gas Distribution Quality for High-Temperature PEMFC Based on Data-Driven Surrogate Model. Appl. Energy 2022, 327, 120000. [Google Scholar] [CrossRef]
  250. Choudhary, S.; Sharma, K.; Bajaj, M. Effectual Seed Pick Framework Focusing on Maximizing Influence in Social Networks. Wirel. Commun. Mob. Comput. 2023, 2023, 3185391. [Google Scholar] [CrossRef]
  251. Alazemi, F.K.A.O.H.; Ariffin, M.K.A.B.M.; Bin Mustapha, F.; Bin Supeni, E.E. A New Fuzzy TOPSIS-Based Machine Learning Framework for Minimizing Completion Time in Supply Chains. Int. J. Fuzzy Syst. 2022, 24, 1669–1695. [Google Scholar] [CrossRef]
  252. Mahpour, A.; El-Diraby, T. Application of Machine-Learning in Network-Level Road Maintenance Policy-Making: The Case of Iran. Expert Syst. Appl. 2022, 191, 116283. [Google Scholar] [CrossRef]
  253. Albogami, S.M.; Khairol, M.; Bin, A.; Ariffin, M.; Ahmad, K.A.; Supeni, M.K.A.B.M.; Ahmad, E.E.B.; Adrangi, B.; Swishchuk, A.; My, K.A.A. A New Hybrid AHP and Dempster—Shafer Theory of Evidence Method for Project Risk Assessment Problem. Mathematics 2021, 9, 3225. [Google Scholar] [CrossRef]
  254. Shirazi, A.; Hezarkhani, A.; Beiranvand Pour, A.; Shirazy, A.; Hashim, M. Neuro-Fuzzy-AHP (NFAHP) Technique for Copper Exploration Using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Geological Datasets in the Sahlabad Mining Area, East Iran. Remote Sens. 2022, 14, 5562. [Google Scholar] [CrossRef]
  255. Aggarwal, M. On Learning of Weights through Preferences. Inf. Sci. 2015, 321, 90–102. [Google Scholar] [CrossRef]
  256. Geng, Z.Q.; Qin, L.; Han, Y.M.; Zhu, Q.X. Energy Saving and Prediction Modeling of Petrochemical Industries: A Novel ELM Based on FAHP. Energy 2017, 122, 350–362. [Google Scholar] [CrossRef]
  257. Mishra, M.; Sarkar, T. A Multistage Hybrid Model for Landslide Risk Mapping: Tested in and around Mussoorie in Uttarakhand State of India. Environ. Earth Sci. 2020, 79, 449. [Google Scholar] [CrossRef]
  258. Boopathiraja, M.; Karthikeyan, V.V.; Sumathi, P.; Karthik, S. Hybrid Modelling for Land Suitability of Biological Wastewater Treatment: A Fuzzy-AHP and Machine Learning Approach. Desalination Water Treat. 2025, 324, 101444. [Google Scholar] [CrossRef]
  259. Kanji, S.; Das, S. Exploring the Morpho-Tectonic Nature, Hydrological and Physical Characteristics of a Watershed and Prioritizing Sub-Watersheds Surface Runoff Potentialities by Integrating MCDM and Ensemble Machine Learning Models. J. Environ. Manag. 2025, 386, 125772. [Google Scholar] [CrossRef] [PubMed]
  260. Nair, P.G.; Medhe, R.S.; Das, S.; Chatterjee, U.; Singh, D.; Singh, T.P.; Ghosh, A. GIS-Based Flood Vulnerability Mapping in a Tropical River Basin Using Analytical Hierarchy Process (AHP) and Machine Learning Approach. Geocarto Int. 2025, 40, 2551261. [Google Scholar] [CrossRef]
  261. Shuwandy, M.L.; Alasad, Q.; Hammood, M.M.; Yass, A.A.; Abdulateef, S.K.; Alsharida, R.A.; Qaddoori, S.L.; Thalij, S.H.; Frman, M.; Kutaibani, A.H.; et al. A Robust Behavioral Biometrics Framework for Smartphone Authentication via Hybrid Machine Learning and TOPSIS. J. Cybersecur. Priv. 2025, 5, 20. [Google Scholar] [CrossRef]
  262. Erdogan, Z.; Altuntas, S.; Dereli, T. Predicting Patent Quality Based on Machine Learning Approach. IEEE Trans. Eng. Manag. 2024, 71, 3144–3157. [Google Scholar] [CrossRef]
  263. Xue, Y.D.; Zhang, W.; Wang, Y.L.; Luo, W.; Jia, F.; Li, S.T.; Pang, H.J. Serviceability Evaluation of Highway Tunnels Based on Data Mining and Machine Learning: A Case Study of Continental United States. Tunn. Undergr. Space Technol. 2023, 142, 105418. [Google Scholar] [CrossRef]
  264. Adiwijaya, I.R.; Indratno, S.W.; Siallagan, M.; Widodo, A.; Gandara, E. Integration of the Hybrid Decision Support System and Machine Learning Algorithm to Determine Government Assistance Recipients: A Case Study in the Indonesian Funding Program. MENDEL 2023, 29, 15–24. [Google Scholar] [CrossRef]
  265. Albahri, A.S.; Zaidan, A.A.; AlSattar, H.A.; Hamid, R.A.; Albahri, O.S.; Qahtan, S.; Alamoodi, A.H. Towards Physician’s Experience: Development of Machine Learning Model for the Diagnosis of Autism Spectrum Disorders Based on Complex T-Spherical Fuzzy-Weighted Zero-Inconsistency Method. Comput. Intell. 2023, 39, 225–257. [Google Scholar] [CrossRef]
  266. Reinhartz-Berger, I.; Abbas, S. Extracting Domain Behaviors through Multi-Criteria, Polymorphism-Inspired Variability Analysis. Inf. Syst. 2022, 108, 101882. [Google Scholar] [CrossRef]
  267. Pourkhodabakhsh, N.; Mamoudan, M.M.; Bozorgi-Amiri, A. Effective Machine Learning, Meta-Heuristic Algorithms and Multi-Criteria Decision Making to Minimizing Human Resource Turnover. Appl. Intell. 2023, 53, 16309–16331. [Google Scholar] [CrossRef] [PubMed]
  268. Omari, Y.; Hamdadou, D.; Mami, M.A. Coupling Multi-Criteria Analysis and Machine Learning for Agent Based Group Decision Support: Spatial Localization. Int. J. Comput. Digit. Syst. 2021, 12, 55–72. [Google Scholar] [CrossRef]
  269. Ahmed, R.; Nasiri, F.; Zayed, T. A Novel Neutrosophic-Based Machine Learning Approach for Maintenance Prioritization in Healthcare Facilities. J. Build. Eng. 2021, 42, 102480. [Google Scholar] [CrossRef]
  270. Jain, N.; Tomar, A.; Jana, P.K. A Novel Scheme for Employee Churn Problem Using Multi-Attribute Decision Making Approach and Machine Learning. J. Intell. Inf. Syst. 2021, 56, 279–302. [Google Scholar] [CrossRef]
  271. Geng, Z.; Li, H.; Zhu, Q.; Han, Y. Production Prediction and Energy-Saving Model Based on Extreme Learning Machine Integrated ISM-AHP: Application in Complex Chemical Processes. Energy 2018, 160, 898–909. [Google Scholar] [CrossRef]
  272. Costache, R.; Pham, Q.B.; Sharifi, E.; Linh, N.T.T.; Abba, S.I.; Vojtek, M.; Vojteková, J.; Nhi, P.T.T.; Khoi, D.N. Flash-Flood Susceptibility Assessment Using Multi-Criteria Decision Making and Machine Learning Supported by Remote Sensing and GIS Techniques. Remote Sens. 2019, 12, 106. [Google Scholar] [CrossRef]
  273. Costa, W.S.; Pinheiro, P.R.; dos Santos, N.M.; Cabral, L.d.A.F. Aligning the Goals Hybrid Model for the Diagnosis of Mental Health Quality. Sustainability 2023, 15, 5938. [Google Scholar] [CrossRef]
  274. Rai, K.A.; Machkour, M.; Antari, J. Unsupervised Learning-Based New Seed-Expanding Approach Using Influential Nodes for Community Detection in Social Networks. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 753–766. [Google Scholar] [CrossRef]
  275. Musbah, H.; Ali, G.; Aly, H.H.; Little, T.A. Energy Management Using Multi-Criteria Decision Making and Machine Learning Classification Algorithms for Intelligent System. Electr. Power Syst. Res. 2022, 203, 107645. [Google Scholar] [CrossRef]
  276. Chang, T.C.; Wu, M.H.; Kim, P.Z.; Yu, M.H. Smart Driver Drowsiness Detection Model Based on Analytic Hierarchy Process. Sens. Mater. 2021, 33, 485–497. [Google Scholar] [CrossRef]
  277. Majumder, P.; Biswas, P.; Majumder, S. Application of New TOPSIS Approach to Identify the Most Significant Risk Factor and Continuous Monitoring of Death of COVID-19. Electron. J. Gen. Med. 2020, 17, em234. [Google Scholar] [CrossRef]
  278. Seelammal, C.; Vimala Devi, K. Multi-Criteria Decision Support for Feature Selection in Network Anomaly Detection System. Int. J. Data Anal. Tech. Strateg. 2018, 10, 334–350. [Google Scholar] [CrossRef]
  279. Li, H.; Sun, J. Hybridizing Principles of the Electre Method with Case-Based Reasoning for Data Mining: Electre-CBR-I and Electre-CBR-II. Eur. J. Oper. Res. 2009, 197, 214–224. [Google Scholar] [CrossRef]
  280. Montenegro de Barros, G.M.; Pereira, V.; Roboredo, M.C. ELECTRE Tree: A Machine Learning Approach to Infer ELECTRE Tri-B Parameters. Data Technol. Appl. 2021, 55, 586–608. [Google Scholar] [CrossRef]
  281. Wang, Z.; Zhang, G.; Wang, C.; Xing, S. Assessment of the Gully Erosion Susceptibility Using Three Hybrid Models in One Small Watershed on the Loess Plateau. Soil. Tillage Res. 2022, 223, 105481. [Google Scholar] [CrossRef]
  282. Rafiei Sardooi, E.; Azareh, A.; Mesbahzadeh, T.; Soleimani Sardoo, F.; Parteli, E.J.R.; Pradhan, B. A Hybrid Model Using Data Mining and Multi-Criteria Decision-Making Methods for Landslide Risk Mapping at Golestan Province, Iran. Environ. Earth Sci. 2021, 80, 487. [Google Scholar] [CrossRef]
  283. Guo, M.; Zhang, Q.; Liao, X.; Chen, F.Y.; Zeng, D.D. A Hybrid Machine Learning Framework for Analyzing Human Decision-Making through Learning Preferences. Omega 2021, 101, 102263. [Google Scholar] [CrossRef]
  284. Kamps, C.; Jassemi-Zargani, R. Decision making in dynamic environments an application of machine learning to the analytical hierarchy process. Int. J. Anal. Hierarchy Process 2021, 13, 27–50. [Google Scholar] [CrossRef]
  285. Al-Obeidat, F.; Belacel, N. Alternative Approach for Learning and Improving the MCDA Method PROAFTN. Int. J. Intell. Syst. 2011, 26, 444–463. [Google Scholar] [CrossRef]
  286. Amini, A.; Abdollahi, A.; Hariri-Ardebili, M.A. An Automated Machine-Learning-Assisted Stochastic-Fuzzy Multi-Criteria Decision Making Tool: Addressing Record-to-Record Variability in Seismic Design. Appl. Soft Comput. 2024, 154, 111354. [Google Scholar] [CrossRef]
  287. Belmecheri, N.; Aribi, N.; Lazaar, N.; Lebbah, Y.; Loudni, S. Boosting the Learning for Ranking Patterns. Algorithms 2023, 16, 218. [Google Scholar] [CrossRef]
  288. Liu, L.; Chen, M.; Luo, P.; Duan, W.; Hu, M. Quantitative Model Construction for Sustainable Security Patterns in Social–Ecological Links Using Remote Sensing and Machine Learning. Remote Sens. 2023, 15, 3837. [Google Scholar] [CrossRef]
  289. Greenacre, M.J. Correspondence Analysis. In Encyclopedia of Statistical Sciences; Wiley: Hoboken, NJ, USA, 2004. [Google Scholar]
  290. Akajiaku, U.C.; Ohimain, E.I.; Olodiama, E.-e.B.; Eteh, D.R.; Winston, A.G.; Chukwuemeka, P.; Otutu, A.O.; Bamiekumo, B.P.; Imoni, O. Identifying Suitable Dam Sites Using Geospatial Data and Machine Learning: A Case Study of the Katsina-Ala River in Benue State, Nigeria. Earth Sci. Inform. 2025, 18, 497. [Google Scholar] [CrossRef]
  291. Khan, F.; Khan, O.; Parvez, M.; Almujibah, H.; Pachauri, P.; Yahya, Z.; Ahamad, T.; Yadav, A.K.; Ağbulut, Ü. Innovative Hydrogen Production from Waste Bio-Oil via Steam Methane Reforming: An Advanced ANN-AHP-k-Means Modelling Approach Using Extreme Machine Learning Weighted Clustering. Int. J. Hydrogen Energy 2025, 105, 1080–1091. [Google Scholar] [CrossRef]
  292. Anderková, V.; Babič, F.; Paraličová, Z.; Javorská, D.; Anderková, V.; Babič, F.; Paraličová, Z.; Javorská, D. Intelligent System Using Data to Support Decision-Making. Appl. Sci. 2025, 15, 7724. [Google Scholar] [CrossRef]
  293. Singha, C.; Chakraborty, N.; Sahoo, S.; Pham, Q.B.; Xuan, Y. A Novel Framework for Flood Susceptibility Assessment Using Hybrid Analytic Hierarchy Process-Based Machine Learning Methods. Nat. Hazards 2025, 121, 13765–13810. [Google Scholar] [CrossRef]
  294. Gupta, I.; Martinez, A.; Correa, S.; Wicaksono, H. A Comparative Assessment of Causal Machine Learning and Traditional Methods for Enhancing Supply Chain Resiliency and Efficiency in the Automotive Industry. Supply Chain Anal. 2025, 10, 100116. [Google Scholar] [CrossRef]
  295. Shah, S.Z.A.; Abdulkader, O.A.; Jan, S.; Shah, M.A.; Anwar, M. A Holistic Evaluation of Machine Learning Algorithms for Text-Based Emotion Detection. Int. J. Adv. Appl. Sci. 2025, 12, 55–75. [Google Scholar] [CrossRef]
  296. Zhao, X.; Su, Y.; Su, H.; Li, W. Evaluating the Sustainability of Recycled Plastic Furniture Design Using the Analytic Hierarchy Process-Fuzzy Comprehensive Evaluation and Machine Learning Models Integrated Evaluation Method. J. Clean. Prod. 2025, 518, 145782. [Google Scholar] [CrossRef]
  297. Matoc, D.A.; Maheta, N.; Kanabar, B.; Sata, A. Hybrid Framework for Assessing Additive Manufacturing Complexity Index: Integration of Analytical Hierarchy Process and Machine Learning for VAT Photopolymerization. Prog. Addit. Manuf. 2025, 10, 9939–9954. [Google Scholar] [CrossRef]
  298. Ndlovu, M.; Ngcobo, N.; Aigbavboa, C.O.; Mahachi, J. Multicriteria Decision-Making Framework for Proactive Maintenance of Water Distribution Pipelines in South Africa. J. Pipeline Syst. Eng. 2025, 16, 04025062. [Google Scholar] [CrossRef]
  299. Gjorgjevikj, A.; Nikolikj, A.; Koroušić Seljak, B.; Eftimov, T. User-Defined Trade-Offs in LLM Benchmarking: Balancing Accuracy, Scale, and Sustainability. Knowl. Based Syst. 2025, 330, 114405. [Google Scholar] [CrossRef]
  300. Chandan, R.; Sandeep, K.; Boraiaha, C.K. Landslide Susceptibility Analysis of a Part of Western Ghats in South-Western India Using Geospatial Techniques: A Comparison of AHP and Logistic Regression Methods. J. Indian Soc. Remote Sens. 2025, 53, 4051–4064. [Google Scholar] [CrossRef]
  301. Sharma, P.; Mehlawat, M.K.; Gupta, P.; Ding, W. Integrating Feature Selection and Fuzzy Decision-Making: A Spherical Triangular Fuzzy Number Based Framework for Large-Scale Decision-Making. Appl. Soft Comput. 2025, 182, 113535. [Google Scholar] [CrossRef]
  302. Lee, E.; You, Y.W.; Jung, Y.H.; Kam, J. Explainable AI-Based Risk Assessment for Pluvial Floods over South Korea. J. Environ. Manag. 2025, 385, 125640. [Google Scholar] [CrossRef]
  303. Guo, F.; Zheng, X.; Guo, M.; Chen, Y.; Han, C.; Li, J. Assessing and Interpreting Driving Risks through Trajectory Data Analysis across Vehicle Types. Transp. A Transp. Sci. 2025, 1–30. [Google Scholar] [CrossRef]
  304. Zhu, X.; Su, P.; Yu, J.; Pei, J.; Teng, Z.; Li, Y.; Liu, Y. A Prediction Model for Hazard Levels of Shallow Natural Gas in Tunnel Based on K-Means Clustering and Tabular Prior-Data Fitted Network. Results Eng. 2025, 27, 106873. [Google Scholar] [CrossRef]
  305. Varela, D.A.B.; Ongsakul, W. A Machine Learning-Driven MCDA-TOPSIS Framework for Wave Energy Converter Selection in the Philippines. Energy Sustain. Dev. 2025, 89, 101860. [Google Scholar] [CrossRef]
  306. Correa, G.L.; Campello, B.S.C.; Duarte, L.T. Multi-Criteria Decision Analysis as a Tool for Post-Processing Bias Mitigation in Machine Learning Algorithms. Comput. Ind. Eng. 2025, 210, 111552. [Google Scholar] [CrossRef]
  307. Basheer Ahammed, K.K.; Pandey, A.C.; Wasim, M.D. A High-Resolution Coastal Risk Assessment Framework: Integrating Knowledge Driven and Machine Learning Models for the Andhra Pradesh Coastline. Ocean Coast. Manag. 2026, 271, 107947. [Google Scholar] [CrossRef]
  308. Abraham, S.E.; Kovoor, B.C. MHSA-Enhanced CNNs with TOPSIS-Driven Ensemble Learning for Automated Diabetic Retinopathy Grading. Biomed. Signal Process. Control 2026, 112, 108614. [Google Scholar] [CrossRef]
  309. Ragragui, H.; Kaibi, O.; Aouragh, M.H.; El Hmaidi, A. Assessment and Prediction of the Plio-Quaternary Aquifer’s Nitrate Vulnerability Using AHP, Artificial Intelligence and SHAP in the Saiss Basin, Morocco. Model. Earth Syst. Environ. 2025, 11, 388. [Google Scholar] [CrossRef]
  310. Meng, Z.; Lin, R.; Wu, B. Preference Learning Based on Adaptive Graph Neural Network for Multi-Criteria Decision Support. Appl. Soft Comput. 2024, 167, 112312. [Google Scholar] [CrossRef]
Figure 1. Structured methodological approach.
Figure 2. Overview of the systematic selection process for the literature review.
Figure 3. Deriving evaluations of categories against criteria using matrix multiplication.
Figure 4. Correspondence map.
Figure 5. Dendrogram.
Figure 6. Heat map.
Figure 7. Proposed framework.
Figure 8. Procrustes-aligned CA configurations (original sample vs. holdout sample).
Figure 9. Utilization of ML-MCDM integration approaches.
Figure 10. Trends in ML-MCDM integration modes over time.
Figure 11. Use of MCDM methods across ML-MCDM integration modes.
Figure 12. Use of ML algorithms across ML-MCDM integration modes.
Figure 13. Frequently used method–algorithm pairs.
Figure 14. Application areas.
Table 1. Search criteria for literature review.
Search Criteria | Details
Databases | Elsevier’s Scopus
Search Strings | (KEY (“Machine Learning”) AND KEY (“multi criteria”) OR KEY (“multi attribute”) OR KEY (“multiple criteria”) OR KEY (“multiple attribute”) OR KEY (“MADM”) OR KEY (“MCDM”) OR KEY (“ANP”) OR KEY (“VIKOR”) OR KEY (“ELECTRE”) OR KEY (“PROMETHEE”) OR KEY (“TOPSIS”) OR KEY (“AHP”)) AND PUBYEAR > 1999 AND PUBYEAR < 2026 AND (LIMIT-TO (DOCTYPE, “ar”)) AND (LIMIT-TO (LANGUAGE, “English”)) AND (LIMIT-TO (SRCTYPE, “j”))
Time Frame | From: 1999, To: 2026
Document Type | Article
Language | English
Source Type | Journal
Document Results | 763
Table 2. Summary of the stages of integration mode ML → MCDM.
Stage | Mathematical Representation | Equation | Purpose
Generate ML outputs | $\hat{y}_{ij} = f_j(A_i; \theta)$ | (2) | Produce predictive ML outputs (e.g., scores, probabilities, class labels, or estimates) for alternative $A_i$ under criterion $C_j$.
Derive feature importance | $I_j = g(f_j, X)$ | (3) | Derive feature importance or weight indicators from ML models (e.g., SHAP, Gini importance, coefficients).
Construct input for MCDM | $x_{ij} = \hat{y}_{ij}$; $w_j^{\mathrm{ML}} = \operatorname{normalize}(I_j)$; $w_j = W(w_j^{\mathrm{ML}}, w_j^{\mathrm{exp}}; \alpha)$ | (4)–(6) | Use ML outputs as criterion values ($x_{ij}$) and/or as weights ($w_j$) in the MCDM framework. If needed, ML-derived weights can be combined with expert-derived weights.
Normalize (optional) | $r_{ij} = N_j(x_{ij})$ | (7) | Normalize ML-derived data to make them comparable across criteria or scales.
Aggregate normalized data using MCDM | $D(A_i) = A_{\Lambda}(\{r_{ij}, w_j\})$ | (8) | Aggregate the ML-based inputs through the MCDM method (e.g., Simple Additive Weighting, TOPSIS, AHP) to obtain overall decision scores for each alternative.
Rank or select alternatives | $\operatorname{rank}(A_i)$ = position of $A_i$ when sorting by $D(A_i)$; $A^{*} = \arg\max_{i} D(A_i)$ | (9)–(10) | Rank alternatives or select the best one based on the integrated ML–MCDM evaluation results.
Validate | $E = \operatorname{Eval}(D(A_i), \mathrm{GroundTruth})$ | (11) | Validate and refine the decision model by comparing MCDM outcomes with observed or expert-evaluated results.
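The ML → MCDM pipeline summarized in Table 2 can be sketched in a few lines of NumPy. This is a minimal illustration under assumed inputs, not an implementation from any reviewed study: the prediction matrix `y_hat` stands in for outputs of trained per-criterion models (Eq. (2)), the `importance` vector for indicators such as mean absolute SHAP values (Eq. (3)), min-max scaling plays the role of $N_j$ (Eq. (7)), and simple additive weighting serves as the aggregation operator (Eq. (8)).

```python
import numpy as np

# Hypothetical ML predictions y_hat[i, j] (Eq. 2): 4 alternatives x 3 criteria,
# standing in for outputs of trained per-criterion models f_j.
y_hat = np.array([
    [0.72, 120.0, 0.35],
    [0.55, 150.0, 0.20],
    [0.90, 100.0, 0.50],
    [0.60, 130.0, 0.40],
])

# Hypothetical importance indicators I_j (Eq. 3), e.g. mean |SHAP| per criterion.
importance = np.array([0.5, 0.3, 0.2])
w = importance / importance.sum()        # Eq. (5): normalized ML-derived weights

# Eq. (7): min-max normalization per criterion (all treated as benefit criteria).
r = (y_hat - y_hat.min(axis=0)) / (y_hat.max(axis=0) - y_hat.min(axis=0))

# Eq. (8): simple additive weighting as the aggregation operator.
scores = r @ w

# Eqs. (9)-(10): rank alternatives (best first) and select the best one.
ranking = np.argsort(-scores)
best = int(np.argmax(scores))
```

Replacing the additive aggregation with TOPSIS or another operator, or blending `w` with expert weights as in Eq. (6), leaves the surrounding structure unchanged.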
Table 3. Summary of the stages of integration mode MCDM → ML.
Stage | Mathematical Representation | Equation | Purpose
Compute MCDM results | $D(A_i) = A_{\Lambda}(\{r_{ij}, w_j\})$ | see (8) | Compute overall decision scores, ranks, or classes for each alternative from normalized multi-criteria data ($r_{ij}$) and weights ($w_j$).
Construct MCDM-informed feature vectors | $Z_i = \Phi(X_i, D(A_i), \{w_j\})$ | (13) | Combine the MCDM-derived outputs with other features or contextual data to form the enriched feature vector for ML.
Train ML model using enriched data | $\hat{y}_i = f(Z_i; \theta)$; $\min_{\theta} L(\hat{y}, Y) = \sum_i L(f(Z_i; \theta), Y_i)$ | (14)–(15) | Train the ML model to learn data-driven patterns from MCDM-enhanced features.
Model prediction or inference | $\hat{y}^{*} = f(\Phi(X^{*}, D(A^{*}), w_1, \ldots, w_n); \theta)$ | (16) | Once trained, employ MCDM-informed features within the model to predict outputs for new alternatives ($A^{*}$).
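A compact sketch of the MCDM → ML direction in Table 3, with invented data: a simple additive weighting score supplies $D(A_i)$, the enriched vector $Z_i$ appends it to the raw features (Eq. (13)), and an ordinary least-squares fit stands in for model training (Eqs. (14)–(15)) and prediction (Eq. (16)).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw data: 50 alternatives described by 3 criteria.
X = rng.uniform(size=(50, 3))
w = np.array([0.5, 0.3, 0.2])            # criterion weights (sum to 1)

# Eq. (8): MCDM decision scores via simple additive weighting.
D = X @ w

# Eq. (13): enriched feature vectors Z_i = Phi(X_i, D(A_i)).
Z = np.column_stack([X, D])

# Synthetic target correlated with the MCDM score (illustration only).
y = 2.0 * D + 0.1 * rng.normal(size=50)

# Eqs. (14)-(15): "training" reduced to least squares on [Z, 1].
A = np.column_stack([Z, np.ones(len(Z))])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Eq. (16): prediction for a new alternative from its MCDM-informed features.
x_new = np.array([0.4, 0.6, 0.8])
z_new = np.append(np.append(x_new, x_new @ w), 1.0)
y_pred = float(z_new @ theta)
```

Because $D$ here is an exact linear combination of the features, the design matrix is rank-deficient; `lstsq` returns the minimum-norm solution, for which the prediction is still well defined. Any nonlinear learner would be substituted at the same point.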
Table 4. Summary of the stages of integration mode MCDM + ML.
Stage | Mathematical Representation | Equation | Purpose
Generate ML outputs | $\hat{y} = f(X; \theta)$ | (18) | Extract patterns, predictions, or importances from data that inform or support the multi-criteria evaluation process.
Perform multi-criteria evaluation | $D(A_i) = A_{\Lambda}(\{N_j(x_{ij}, \hat{y}_{ij}), w_j\})$ | (19) | Provide a structured multi-criteria evaluation of alternatives considering normalized data and ML-derived insights.
Implement coupled optimization | $w_j^{(\mathrm{new})} = w_j^{(\mathrm{old})} + \eta \cdot \partial L / \partial w_j$; $\min_{\theta} \left[ L(\hat{y}, Y) + \lambda \cdot P(D(A_i)) \right]$ | (20)–(21) | Implement coupled optimization so that the two components update each other iteratively: (a) criterion weights are adjusted based on ML optimization; (b) MCDM results regularize or constrain ML training by embedding decision preferences into learning.
Synthesize | $S(A_i) = \Omega(D(A_i), \hat{y}_i)$ | (22) | Integrate both MCDM and ML outputs into one final score or decision.
Conduct iterative refinement (optional) | update $(\theta, w_j) \leftarrow \arg\min \left[ L(\hat{y}, Y) + \lambda \cdot (1 - \rho(D, \hat{y})) \right]$ | (23) | Find both ML parameters and MCDM weights that jointly minimize prediction error while maximizing alignment between ML outputs and MCDM-based evaluations. This enables mutual adaptation and balance.
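The coupled optimization idea of Table 4 can be illustrated with a toy gradient loop on synthetic data: criterion weights are updated to reduce the squared misalignment between the MCDM scores and the ML outputs (a descent form of the weight update in Eq. (20), with a loss in the spirit of Eq. (23)). A real study would substitute its own models, loss, and regularizer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical normalized decision matrix r[i, j]: 30 alternatives, 3 criteria.
r = rng.uniform(size=(30, 3))

# Hypothetical ML evaluation favoring the third criterion (stand-in for Eq. 18).
y_hat = 0.1 * r[:, 0] + 0.1 * r[:, 1] + 0.8 * r[:, 2]

w = np.full(3, 1.0 / 3.0)                # start from equal criterion weights
eta = 0.5                                # learning rate

for _ in range(300):
    D = r @ w                            # Eq. (19): current MCDM scores
    # Gradient of the squared misalignment between MCDM and ML evaluations
    # (cf. Eq. 23); descent form of the weight update in Eq. (20).
    grad = 2.0 * r.T @ (D - y_hat) / len(D)
    w = w - eta * grad

# Keep the weights interpretable: nonnegative and summing to one.
w = np.clip(w, 0.0, None)
w = w / w.sum()

# Eq. (22): one possible synthesis of both evaluations.
S = 0.5 * (r @ w) + 0.5 * y_hat
```

With these synthetic targets the loop recovers the implicit preference structure: the third criterion ends up carrying most of the weight.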
Table 5. Summary of the stages of integration mode ML vs. MCDM.
Stage | Mathematical Representation | Equation | Purpose
Compute MCDM results | $D(A_i) = A_{\Lambda}(\{N_j(x_{ij}), w_j\})$ | see (8) | Compute overall decision scores, ranks, or classes for each alternative from normalized multi-criteria data ($r_{ij}$) and weights ($w_j$).
Generate ML outputs | $\hat{y}_i = f(X_i; \theta)$ | see (18) | Produce predictive ML outputs (e.g., scores, probabilities, class labels, or estimates) for alternative $A_i$.
Evaluate performance of both models separately | $E_{\mathrm{MCDM}} = \operatorname{Eval}_{\mathrm{MCDM}}(D(A), Y)$ if $Y$ is available, $\operatorname{Eval}_{\mathrm{MCDM}}(D(A))$ otherwise; $E_{\mathrm{ML}} = \operatorname{Eval}_{\mathrm{ML}}(\hat{y}, Y)$ if $Y$ is available, $\operatorname{Eval}_{\mathrm{ML}}(\hat{y})$ otherwise | (26)–(27) | Compute performance indicators for each approach separately using appropriate evaluation functions (e.g., R², RMSE, accuracy, consistency, correlation).
Compare results through performance or correlation measures | $\Delta = \rho(D(A), \hat{y})$ | (28) | Assess alignment between ML and MCDM results (or between either and the ground truth) using comparison or agreement metrics such as ranking consistency, value proximity, correlation, or predictive accuracy.
Establish comparative evaluation framework | $\Pi = \{(E_{\mathrm{MCDM}}, E_{\mathrm{ML}}, \rho(D(A), \hat{y}))\}$ | (29) | Construct a performance comparison framework to identify which approach yields higher accuracy, the degree of decision agreement, and each method’s contextual suitability.
Select the superior approach | $M^{*} = \arg\max \Omega(E_{\mathrm{ML}}, E_{\mathrm{MCDM}}, \rho)$ | (30) | Identify the best-performing or most consistent approach via the synthesis function.
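A sketch of the comparison step in Table 5: Spearman's rank correlation, implemented directly, serves as the agreement measure $\rho$ of Eq. (28). The two score vectors are invented for illustration, and tie handling is omitted for brevity.

```python
import numpy as np

def ranks(x):
    """1-based ranks of the entries of x (no tie handling, for brevity)."""
    order = np.argsort(x)
    out = np.empty(len(x))
    out[order] = np.arange(1, len(x) + 1)
    return out

def spearman(a, b):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    ra = ranks(a) - (len(a) + 1) / 2.0
    rb = ranks(b) - (len(b) + 1) / 2.0
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Hypothetical evaluations of six alternatives by the two approaches.
D_mcdm = np.array([0.61, 0.42, 0.75, 0.33, 0.58, 0.49])   # MCDM scores, Eq. (8)
y_ml = np.array([0.64, 0.40, 0.70, 0.30, 0.52, 0.55])     # ML outputs, Eq. (18)

rho = spearman(D_mcdm, y_ml)             # Eq. (28): rank agreement
```

Here the two rankings disagree only in the last two positions, so $\rho$ is high but below one; a value near one supports decision agreement, while a low value flags the need for the benchmarking step of Eq. (29).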
Table 6. Summary of the stages of integration mode MCDM[ML vs. ML].
Stage | Mathematical Representation | Equation | Purpose
Construct performance evaluation matrix | $E = [p_{ij}]$, $i = 1, \ldots, m$, $j = 1, \ldots, n$ | (32) | Capture each ML algorithm’s ($A_i$) quantitative performance $p_{ij}$ under multiple criteria $\{C_j\}$.
Normalize or scale the performance values | $r_{ij} = N_j(p_{ij})$ | (33) | Normalize the $p_{ij}$ values to ensure comparability across different performance scales and units (i.e., criteria).
Determine criterion weights | $w_j = W_{\mathrm{MCDM}}(\{r_{ij}\}\ \text{or expert inputs})$, with $\sum_j w_j = 1$ and $w_j \geq 0$ | (34) | Compute or obtain the relative importance of each performance criterion (e.g., accuracy vs. computation time) using a weighting method such as expert-based AHP, a data-driven entropy measure, or a hybrid approach.
Aggregate normalized values using an MCDM operator | $D(A_i) = A_{\Lambda}(\{r_{ij}, w_j\})$ | see (8) | Combine normalized performance values and weights to compute an overall score for each ML algorithm.
Rank the ML algorithms | $\operatorname{rank}(A_i)$ = position of $A_i$ when sorting by $D(A_i)$ | see (9) | Order ML algorithms from best to worst according to their aggregated scores.
Select the most suitable ML algorithm | $M^{*} = \arg\max_{i} \delta(\{\operatorname{rank}(A_i), \mathrm{constraints}\})$ | (35) | Identify the most appropriate ML algorithm given multi-criteria performance trade-offs. $\delta$ may choose the top-k, those meeting thresholds, or a Pareto-optimal subset.
Perform sensitivity, robustness, and statistical validation (optional) | Weight sensitivity: $\partial \operatorname{rank} / \partial w$; if $p_{ij} \sim U$, $\mu_i = E[D(A_i)]$, $CI(D(A_i))$; Bootstrap: $CI(D^{(b)}(A_i))$ | (36)–(38) | Evaluate stability and robustness of results by varying weights, propagating performance uncertainty, bootstrapping confidence intervals for scores and ranks, and applying statistical tests (e.g., t-test, Wilcoxon, Friedman) to verify significant performance differences.
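The MCDM[ML vs. ML] mode of Table 6 is often operationalized with TOPSIS over an algorithm-by-criterion performance matrix. The sketch below uses hypothetical accuracy, F1, and training-time values for three candidate algorithms; the weights and the benefit/cost labeling are assumptions for illustration only.

```python
import numpy as np

# Eq. (32): hypothetical performance matrix for 3 ML algorithms (rows) under
# accuracy (benefit), F1 (benefit), and training time in seconds (cost).
P = np.array([
    [0.91, 0.88, 120.0],
    [0.89, 0.90, 30.0],
    [0.85, 0.83, 10.0],
])
benefit = np.array([True, True, False])
w = np.array([0.5, 0.3, 0.2])            # Eq. (34): criterion weights

# Eq. (33): vector normalization, then weighting (classic TOPSIS).
R = P / np.linalg.norm(P, axis=0)
V = R * w

# Ideal and anti-ideal points respect benefit/cost directions.
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)      # TOPSIS form of the aggregation

ranking = np.argsort(-closeness)         # Eq. (9) / Eq. (35): best first
```

With these invented numbers the large training-time spread dominates the normalized matrix, so the fastest algorithm ranks first; sensitivity analysis over `w` (Eq. (36)) is what guards such a choice.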
Table 7. Expert profiles.
Expert | Affiliation Type | Field of Specialization | Years of Experience | Current Position/Title
E1 | Academic | Multi-Criteria Decision-Making, Marketing Analytics | 25 | Associate Professor
E2 | Academic | Multi-Criteria Decision-Making, Human Resources Analytics | 25 | Associate Professor
E3 | Academic | Industrial Engineering, Multi-Criteria Decision-Making | 22 | Associate Professor
E4 | Academic | Industrial Engineering, Decision Sciences, Machine Learning | 9 | Researcher
E5 | Industry Professional | Artificial Intelligence, Deep Learning, Machine Learning | 8 | Research and Development Center Manager
E6 | Industry Professional | Strategic Decision-Making, Artificial Intelligence | 15 | Technology Consultant
Table 8. Mapping of articles to categories.
Author(s) | Year | ML → MCDM | MCDM → ML | MCDM + ML | ML vs. MCDM | MCDM[ML vs. ML]
[41] | 2024
[50] | 2024
[70] | 2025
[71] | 2025
[72] | 2024
[73] | 2024
[74] | 2023
[75] | 2023
[76] | 2023
[77] | 2022
[78] | 2021
[79] | 2021
[80] | 2020
[81] | 2019
[82] | 2018
Table 9. Evaluation of articles against criteria.
Author(s) | Year | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 | C11 | C12 | C13 | C14
[41] | 2024 | 0 | 2 | 0 | 2 | 1 | 2 | 1 | 1 | 1 | 0 | 1 | 2 | 0 | 2
[50] | 2024 | 0 | 1 | 2 | 0 | 1 | 2 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 1
[70] | 2025 | 0 | 1 | 2 | 0 | 1 | 2 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
[71] | 2025 | 2 | 2 | 0 | 0 | 1 | 2 | 0 | 2 | 2 | 1 | 0 | 2 | 2 | 1
[72] | 2024 | 1 | 2 | 0 | 2 | 2 | 2 | 0 | 2 | 2 | 1 | 1 | 2 | 0 | 1
[73] | 2024 | 0 | 0 | 2 | 2 | 2 | 2 | 1 | 2 | 1 | 0 | 0 | 0 | 0 | 0
[74] | 2023 | 1 | 2 | 0 | 2 | 1 | 2 | 0 | 1 | 2 | 1 | 1 | 0 | 0 | 0
[75] | 2023 | 0 | 1 | 2 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 1
[76] | 2023 | 0 | 2 | 0 | 0 | 1 | 2 | 1 | 2 | 2 | 1 | 1 | 2 | 1 | 1
[77] | 2022 | 0 | 0 | 2 | 1 | 1 | 2 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0
[78] | 2021 | 0 | 2 | 1 | 0 | 2 | 2 | 2 | 2 | 1 | 2 | 1 | 2 | 1 | 1
[79] | 2021 | 0 | 0 | 0 | 0 | 2 | 2 | 0 | 1 | 1 | 1 | 1 | 2 | 0 | 1
[80] | 2020 | 0 | 1 | 2 | 0 | 1 | 2 | 1 | 2 | 1 | 1 | 1 | 0 | 0 | 1
[81] | 2019 | 0 | 0 | 1 | 2 | 2 | 2 | 0 | 2 | 2 | 1 | 1 | 1 | 2 | 1
[82] | 2018 | 2 | 1 | 0 | 0 | 2 | 2 | 0 | 1 | 2 | 2 | 1 | 0 | 0 | 0
Table 10. Evaluations of categories against criteria.
Integration Mode | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 | C11 | C12 | C13 | C14 | # of Articles
ML → MCDM | 27 | 99 | 13 | 62 | 86 | 127 | 64 | 105 | 67 | 59 | 68 | 126 | 23 | 50 | 64
MCDM → ML | 13 | 78 | 11 | 43 | 87 | 110 | 35 | 100 | 72 | 46 | 63 | 109 | 13 | 39 | 56
MCDM + ML | 18 | 18 | 3 | 16 | 27 | 36 | 10 | 29 | 33 | 12 | 24 | 8 | 11 | 11 | 18
ML vs. MCDM | 5 | 23 | 71 | 34 | 50 | 80 | 22 | 76 | 14 | 19 | 15 | 12 | 3 | 7 | 40
MCDM[ML vs. ML] | 6 | 61 | 77 | 14 | 52 | 76 | 24 | 62 | 49 | 40 | 27 | 3 | 3 | 38 | 39
Table 11. Normalized evaluations of categories against criteria.
Integration Mode | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 | C11 | C12 | C13 | C14
ML → MCDM | 42.2 | 154.7 | 20.3 | 96.9 | 134.4 | 198.4 | 100.0 | 164.1 | 104.7 | 92.2 | 106.3 | 196.9 | 35.9 | 78.1
MCDM → ML | 23.2 | 139.3 | 19.6 | 76.8 | 155.4 | 196.4 | 62.5 | 178.6 | 128.6 | 82.1 | 112.5 | 194.6 | 23.2 | 69.6
MCDM + ML | 100.0 | 100.0 | 16.7 | 88.9 | 150.0 | 200.0 | 55.6 | 161.1 | 183.3 | 66.7 | 133.3 | 44.4 | 61.1 | 61.1
ML vs. MCDM | 12.5 | 57.5 | 177.5 | 85.0 | 125.0 | 200.0 | 55.0 | 190.0 | 35.0 | 47.5 | 37.5 | 30.0 | 7.5 | 17.5
MCDM[ML vs. ML] | 15.4 | 156.4 | 197.4 | 35.9 | 133.3 | 194.9 | 61.5 | 159.0 | 125.6 | 102.6 | 69.2 | 7.7 | 7.7 | 97.4
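The relationship between Tables 10 and 11 can be checked directly: each normalized value is the raw criterion total divided by the number of articles in that mode, scaled to 100 (so the maximum attainable value is 200, since each article contributes at most 2 per criterion). A check using the ML → MCDM row:

```python
import numpy as np

# Raw criterion totals for the ML -> MCDM row of Table 10 and its article count.
raw = np.array([27, 99, 13, 62, 86, 127, 64, 105, 67, 59, 68, 126, 23, 50])
n_articles = 64

# Table 11 normalization: per-article average, scaled to 100 (max = 200).
normalized = raw / n_articles * 100.0
```

Rounding to one decimal reproduces the published Table 11 row (the C11 entry falls on an exact half, so published and recomputed values can differ by 0.05).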
Table 12. Eigenvalues and percentages of inertia.
 | F1 | F2 | F3 | F4
Eigenvalue | 0.117 | 0.044 | 0.023 | 0.003
Inertia (%) | 62.307 | 23.581 | 12.292 | 1.820
Cumulative % | 62.307 | 85.888 | 98.180 | 100
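The inertia percentages in Table 12 follow directly from the eigenvalues: each dimension's share of the total inertia. Recomputing from the rounded eigenvalues reproduces the published shares only approximately, but confirms that the first two dimensions account for roughly 86% of the inertia:

```python
# Principal inertias (eigenvalues) from Table 12.
eigenvalues = [0.117, 0.044, 0.023, 0.003]

total_inertia = sum(eigenvalues)
share = [100.0 * ev / total_inertia for ev in eigenvalues]
cumulative = [sum(share[:k + 1]) for k in range(len(share))]
```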
Table 13. Overview column points (for symmetrical normalization).

Criterion | Score in Dim. 1 | Score in Dim. 2 | Contrib. of Point to Inertia of Dim. 1 | Contrib. of Point to Inertia of Dim. 2 | Contrib. of Dim. 1 to Inertia of Point | Contrib. of Dim. 2 to Inertia of Point | Total
Novelty | −0.415 | −0.647 | 0.042 | 0.267 | 0.273 | 0.664 | 0.937
Complexity | −0.032 | 0.077 | 0.001 | 0.012 | 0.018 | 0.104 | 0.122
Validation | 1.073 | 0.056 | 0.620 | 0.004 | 0.997 | 0.003 | 1.000
Subjectivity | −0.062 | 0.020 | 0.002 | 0.000 | 0.042 | 0.004 | 0.047
Knowledge Base | 0.035 | −0.018 | 0.001 | 0.001 | 0.154 | 0.041 | 0.195
Effectiveness | 0.097 | −0.002 | 0.012 | 0.000 | 0.582 | 0.000 | 0.582
Efficiency | −0.021 | 0.117 | 0.000 | 0.015 | 0.012 | 0.351 | 0.363
Applicability | 0.122 | 0.035 | 0.016 | 0.003 | 0.440 | 0.036 | 0.476
Flexibility | −0.169 | −0.264 | 0.021 | 0.133 | 0.224 | 0.546 | 0.770
Consistency | 0.040 | 0.044 | 0.001 | 0.003 | 0.044 | 0.055 | 0.099
Automation | −0.258 | −0.132 | 0.038 | 0.026 | 0.769 | 0.202 | 0.971
Sequential Process | −0.591 | 0.534 | 0.206 | 0.447 | 0.548 | 0.449 | 0.997
Dynamic Nature | −0.489 | −0.445 | 0.040 | 0.088 | 0.508 | 0.421 | 0.929
Explainability | −0.035 | −0.016 | 0.000 | 0.000 | 0.010 | 0.002 | 0.012
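In this standard CA output layout, the two "contribution of dimension to inertia of point" values sum (up to rounding) to the Total column, i.e., the point's representation quality in the two-dimensional solution. A small consistency check on a few rows of Table 13:

```python
# Sanity check on Table 13: for each criterion, the contributions of
# dimensions 1 and 2 to the inertia of the point should sum (up to
# rounding) to the Total column, the point's representation quality.
rows = {
    "Novelty":            (0.273, 0.664, 0.937),
    "Validation":         (0.997, 0.003, 1.000),
    "Automation":         (0.769, 0.202, 0.971),
    "Sequential Process": (0.548, 0.449, 0.997),
}
for name, (d1, d2, total) in rows.items():
    assert abs((d1 + d2) - total) < 0.005, name
print("dim1 + dim2 contributions match the Total column for all checked rows")
```

This confirms, for example, that Validation (Total = 1.000) is almost entirely captured by dimension 1, whereas Explainability (Total = 0.012) is poorly represented in the two-dimensional map.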
Table 14. Silhouette scores and variance decomposition.

 | 2-Cluster Solution | 3-Cluster Solution | 4-Cluster Solution
Silhouette Score | 0.503 | 0.473 | 0.494
Within-class variance | 69.51% | 34.26% | 21.46%
Between-class variance | 30.49% | 65.74% | 78.54%
Total | 100.00% | 100.00% | 100.00%
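The within/between split in Table 14 follows the standard sum-of-squares decomposition, SST = SSW + SSB, so the two percentages always total 100%. A toy illustration with made-up one-dimensional data and cluster labels (not the study's data):

```python
# Toy illustration of the variance decomposition reported in Table 14:
# total sum of squares = within-cluster SS + between-cluster SS.
# The data points and labels below are invented for illustration only.
data   = [1.0, 1.2, 0.8, 5.0, 5.4, 4.6]
labels = [0, 0, 0, 1, 1, 1]

grand_mean = sum(data) / len(data)
clusters = {c: [x for x, l in zip(data, labels) if l == c] for c in set(labels)}
means = {c: sum(xs) / len(xs) for c, xs in clusters.items()}

ss_total   = sum((x - grand_mean) ** 2 for x in data)
ss_within  = sum((x - means[l]) ** 2 for x, l in zip(data, labels))
ss_between = sum(len(xs) * (means[c] - grand_mean) ** 2 for c, xs in clusters.items())

assert abs(ss_total - (ss_within + ss_between)) < 1e-9
print(f"within: {ss_within / ss_total:.1%}, between: {ss_between / ss_total:.1%}")
```

A low within-class share with a high between-class share (as in the 3- and 4-cluster solutions) indicates compact, well-separated clusters; the silhouette score weighs this against over-partitioning.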
Table 15. Tests of equality of group means.

Dimension | Wilks' Lambda | F | df1 | df2 | Sig.
F1 | 0.392 | 12.417 | 2 | 16 | 0.001
F2 | 0.326 | 16.514 | 2 | 16 | 0.000
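For a one-way design with g = 3 groups and N = 19 cases (df1 = g − 1 = 2, df2 = N − g = 16), the univariate Wilks' Λ and F statistic are linked by F = ((1 − Λ)/Λ)·(df2/df1). Recomputing from the rounded Λ values reproduces Table 15's F values up to rounding:

```python
# Relation between univariate Wilks' lambda and F in a one-way design:
# F = ((1 - lam) / lam) * (df2 / df1), with df1 = g - 1 and df2 = N - g.
# Here g = 3 clusters and N = 19 cases, consistent with Tables 15 and 16.
def f_from_wilks(lam, df1, df2):
    return (1.0 - lam) / lam * (df2 / df1)

f1 = f_from_wilks(0.392, df1=2, df2=16)  # ~12.41 (reported: 12.417)
f2 = f_from_wilks(0.326, df1=2, df2=16)  # ~16.54 (reported: 16.514)
print(round(f1, 2), round(f2, 2))
```

The small discrepancies stem only from Λ being printed to three decimals; both dimensions discriminate significantly between the clusters (Sig. ≤ 0.001).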
Table 16. Classification results for original and cross-validated data.

Sample | Cluster | Predicted 1 | Predicted 2 | Predicted 3 | Total
Original a (Count) | 1 | 9 | 0 | 0 | 9
 | 2 | 0 | 5 | 0 | 5
 | 3 | 2 | 0 | 3 | 5
Original a (%) | 1 | 100.0 | 0.0 | 0.0 | 100.0
 | 2 | 0.0 | 100.0 | 0.0 | 100.0
 | 3 | 40.0 | 0.0 | 60.0 | 100.0
Cross-validated b (Count) | 1 | 9 | 0 | 0 | 9
 | 2 | 1 | 4 | 0 | 5
 | 3 | 2 | 0 | 3 | 5
Cross-validated b (%) | 1 | 100.0 | 0.0 | 0.0 | 100.0
 | 2 | 20.0 | 80.0 | 0.0 | 100.0
 | 3 | 40.0 | 0.0 | 60.0 | 100.0
a. 89.5% of original grouped cases correctly classified. b. 84.2% of cross-validated grouped cases correctly classified.
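The footnote percentages follow directly from the confusion matrices: correctly classified cases lie on the diagonal, and accuracy is the diagonal sum divided by the 19 total cases. A short check:

```python
# Accuracy from the confusion matrices in Table 16:
# rows = actual cluster, columns = predicted cluster.
original = [
    [9, 0, 0],
    [0, 5, 0],
    [2, 0, 3],
]
cross_validated = [
    [9, 0, 0],
    [1, 4, 0],
    [2, 0, 3],
]

def accuracy(matrix):
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total * 100

print(f"{accuracy(original):.1f}%")         # 89.5% (17 of 19 cases)
print(f"{accuracy(cross_validated):.1f}%")  # 84.2% (16 of 19 cases)
```

The drop from 89.5% to 84.2% under leave-one-out cross-validation reflects a single additional misclassification (one cluster-2 case predicted as cluster 1).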
Table 17. Casewise statistics.

Integration Mode/Criterion | Actual Group | Predicted (Original) | P(1) | P(2) | P(3) | Predicted (Cross-Validated) | P(1) | P(2) | P(3)
ML → MCDM | 1 | 1 | 0.977 | 0.002 | 0.021 | 1 | 0.971 | 0.003 | 0.026
MCDM → ML | 1 | 1 | 0.974 | 0.001 | 0.025 | 1 | 0.968 | 0.002 | 0.030
MCDM + ML | 2 | 2 | 0.003 | 0.997 | 0.000 | 2 | 0.005 | 0.995 | 0.000
ML vs. MCDM | 3 | 3 | 0.042 | 0.000 | 0.958 | 3 | 0.053 | 0.000 | 0.947
MCDM[ML vs. ML] | 3 | 3 | 0.089 | 0.000 | 0.911 | 3 | 0.109 | 0.001 | 0.890
Novelty | 2 | 2 | 0.000 | 1.000 | 0.000 | 2 | 0.000 | 1.000 | 0.000
Complexity | 1 | 1 | 0.837 | 0.003 | 0.160 | 1 | 0.821 | 0.003 | 0.176
Validation | 3 | 3 | 0.000 | 0.000 | 1.000 | 3 | 0.000 | 0.000 | 1.000
Subjectivity | 1 | 1 | 0.860 | 0.011 | 0.129 | 1 | 0.836 | 0.014 | 0.150
Knowledge Base | 1 | 1 | 0.728 | 0.012 | 0.260 | 1 | 0.679 | 0.017 | 0.304
Effectiveness | 3 | 1 * | 0.616 | 0.005 | 0.379 | 1 * | 0.781 | 0.005 | 0.214
Efficiency | 1 | 1 | 0.827 | 0.001 | 0.172 | 1 | 0.811 | 0.002 | 0.187
Applicability | 3 | 1 * | 0.568 | 0.002 | 0.430 | 1 * | 0.693 | 0.002 | 0.305
Flexibility | 2 | 2 | 0.056 | 0.941 | 0.003 | 2 | 0.074 | 0.921 | 0.005
Consistency | 1 | 1 | 0.730 | 0.003 | 0.267 | 1 | 0.706 | 0.004 | 0.290
Automation | 2 | 2 | 0.401 | 0.588 | 0.011 | 1 * | 0.605 | 0.376 | 0.019
Sequential Process | 1 | 1 | 0.999 | 0.000 | 0.001 | 1 | 1.000 | 0.000 | 0.000
Dynamic Nature | 2 | 2 | 0.000 | 1.000 | 0.000 | 2 | 0.000 | 1.000 | 0.000
Explainability | 1 | 1 | 0.821 | 0.021 | 0.158 | 1 | 0.784 | 0.026 | 0.190
* Misclassified case.
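In Table 17, each case's predicted group is the one with the highest posterior probability, and a case is flagged (*) when that argmax differs from the actual group, as for Effectiveness and Applicability in the original solution. A small sketch using a few rows from the table:

```python
# Predicted group = argmax of the posterior probabilities (Table 17, original
# solution). A case is misclassified when the argmax differs from the actual
# group, which is how the * flags arise.
cases = {
    # name: (actual_group, [P(1), P(2), P(3)])
    "ML -> MCDM":    (1, [0.977, 0.002, 0.021]),
    "Effectiveness": (3, [0.616, 0.005, 0.379]),
    "Applicability": (3, [0.568, 0.002, 0.430]),
    "Flexibility":   (2, [0.056, 0.941, 0.003]),
}
for name, (actual, probs) in cases.items():
    predicted = probs.index(max(probs)) + 1  # groups are 1-indexed
    flag = " *" if predicted != actual else ""
    print(f"{name}: actual {actual}, predicted {predicted}{flag}")
```

Note that all five integration modes are classified into their own clusters with high probability; only three of the fourteen criteria are ever misclassified, and only Automation flips under cross-validation.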
Table 18. Criteria and integration modes associated with the future research suggestions.

Suggestion | Involved Integration Mode(s)
Research Focus 1. Methodological Advancements in MCDM and ML Integration
1.1 Improving Scalability and Efficiency | ML → MCDM, MCDM → ML, MCDM + ML, MCDM[ML vs. ML]
1.2 Modeling Complex Interdependencies | ML → MCDM, MCDM → ML, MCDM + ML
1.3 Automating Processes and Reducing Bias | ML → MCDM, MCDM → ML
Research Focus 2. Adaptability to Real-World Data and Dynamics
2.1 Adapting to Dynamic Environments | MCDM + ML
2.2 Managing Uncertainty | ML → MCDM, MCDM → ML, MCDM + ML
2.3 Addressing Data-Specific Challenges | MCDM → ML, MCDM + ML, MCDM[ML vs. ML]
Research Focus 3. Enhancing Model Interpretability and Explainability
3.1 Enhancing Explainability | ML → MCDM, MCDM → ML
Research Focus 4. Expanding Applications Across Sectors and Datasets
4.1 Generalizing MCDM–ML Integrations Across Diverse Sectors | ML → MCDM, MCDM → ML, MCDM + ML, ML vs. MCDM
4.2 Improving Performance Metrics for Real-World Applications | MCDM[ML vs. ML]
Research Focus 5. Leveraging Emerging Technologies
5.1 Exploring Novel ML Technologies | ML → MCDM, MCDM → ML, MCDM + ML, MCDM[ML vs. ML]