Article

Weighted Decision Aggregation for Dispersed Data in Unified and Diverse Coalitions

by
Małgorzata Przybyła-Kasperek
*,† and
Jakub Sacewicz
Institute of Computer Science, University of Silesia in Katowice, Bȩdzińska 39, 41-200 Sosnowiec, Poland
*
Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2026, 16(1), 103; https://doi.org/10.3390/app16010103
Submission received: 22 November 2025 / Revised: 15 December 2025 / Accepted: 19 December 2025 / Published: 22 December 2025

Abstract

Dispersed data classification presents significant challenges due to structural variations, restricted information exchange, and the need for robust decision-making strategies. This study introduces a dynamic classification system based on coalition formation using local models trained on independently collected local data. We explore two distinct coalition strategies: unified coalitions, which group models with similar prediction behaviors, and diverse coalitions, which aggregate models exhibiting contrasting decision tendencies. The impact of weighted and unweighted prediction aggregation is also examined to determine the influence of model reliability on global decision-making. Our framework uses Pawlak’s conflict analysis to form optimal coalitions. Experimental evaluations on multiple datasets demonstrate that coalition-based approaches significantly improve classification accuracy compared to individual models operating in isolation. The weighted diverse coalitions produce the most stable results. Statistical analyses confirm the effectiveness of the proposed methodology, highlighting the advantages of adaptive coalition strategies in dispersed environments.

1. Introduction

The rapid digitalization of various domains, including healthcare, finance, and mobile applications, has led to the increasing proliferation of dispersed data: data that exists across multiple independently maintained sources without centralized coordination. Traditional centralized databases store data in one location with a uniform structure. In contrast, dispersed data is distributed across multiple entities, and this distribution is often constrained by privacy regulations, security concerns, or logistical limitations. This fragmentation poses significant challenges for machine learning models, which traditionally rely on well-structured, consolidated datasets for training and decision-making. As a result, developing effective machine learning methods for dispersed data is a crucial research problem, requiring innovative approaches that accommodate decentralized learning while preserving data privacy and integrity.
One widely explored approach for handling dispersed data is distributed and federated learning. These paradigms train local models independently on separate datasets and subsequently aggregate their outputs through fusion strategies, such as hierarchical or parallel combination methods. However, ensemble-based methods relying on aggregated local model predictions face challenges related to conflicting outputs and decision uncertainty. This is particularly relevant when models trained on different subsets of data produce diverse or even contradictory predictions, making final decision-making non-trivial. Dependencies between local models—often overlooked in standard fusion techniques—play a significant role in improving classification accuracy and should be systematically analyzed when determining the global decision.
To address this challenge, this study proposes a coalition-based classification framework for dispersed data. Our approach uses Pawlak’s conflict analysis model, a method originally developed in rough set theory, to analyze and resolve conflicts between classifiers. Unlike traditional ensemble learning, which assumes that all classifiers contribute equally to the final decision, our method explicitly examines the relationships between local models, identifying compatible and conflicting models based on their prediction vectors. We introduce two distinct coalition strategies. Unified coalitions group classifiers with high agreement in their predictions, assuming that consensus-driven decisions provide higher accuracy. Diverse coalitions cluster classifiers that provide differing predictions, aiming to preserve diversity and avoid biases that arise from overfitting to specific data patterns.
Beyond coalition formation, this study explores the impact of weighted decision aggregation, where local models are assigned importance scores based on their classification accuracy. By incorporating both coalition strategies (unified vs. diverse) and weighted vs. unweighted approaches, we systematically evaluate how these factors influence classification performance across multiple datasets. Additionally, we employ three fundamental strategies for final decision-making: selecting the single strongest coalition, combining the two strongest coalitions, and aggregating all the coalitions. To further enhance classification performance, we explore a diverse range of local model-building techniques, including single-model approaches, such as Decision Trees and k-Nearest Neighbors, as well as ensemble methods, like Random Forests, Gradient Boosting, Random Subspace, and AdaBoost. The key contributions of this study are as follows:
  • A novel coalition-based approach for dispersed data classification with unified and diverse coalition formation strategies and their impact on accuracy.
  • Integration of Pawlak’s conflict model to systematically identify compatible and diverse classifier groups.
  • Evaluation of weighted vs. unweighted prediction aggregation, demonstrating how weighting schemes affect decision reliability.
  • Introduction and evaluation of three strategies for coalition-based classification: single strongest coalition, two strongest coalitions, and all coalitions.
The structure of this paper is organized as follows. Section 2 presents an overview of the related work. Section 3 details the proposed methodology, introducing coalition formation strategies (unified and diverse), weighted decision aggregation, and different strategies for making final classifications. Section 4 describes the experimental setup, including the datasets, evaluation metrics, and methodology used for performance comparison. Section 5 provides a comprehensive analysis of the results, highlighting the impact of coalition-based approaches, weighting mechanisms, and decision strategies on classification performance. Section 6 discusses the key findings, limitations of the study, and potential directions for future research.

2. Related Works

The increasing volume, variety, and velocity of data have driven the development of various machine learning paradigms capable of handling dispersed, distributed, and privacy-sensitive data. Several families of approaches address this issue: distributed knowledge and distributed learning, federated learning, and dispersed classification.
Distributed learning is a broad paradigm where computational processes and model training are distributed across multiple nodes [1,2]. This approach is particularly useful in large-scale machine learning tasks, where data is too vast or sensitive to be centrally stored. In traditional distributed learning, the data is assumed to be accessible in its entirety, but computational resources are distributed across multiple machines. Recent studies propose methods to enhance model robustness in distributed learning by handling heterogeneous data distributions across nodes [3]. Unlike federated learning, many distributed learning techniques assume full access to the dataset, making them less suitable for privacy-sensitive applications.
Federated learning (FL) is a distributed machine learning paradigm that emphasizes privacy preservation by keeping raw data on local devices and only sharing model updates [4,5]. It has gained significant attention due to its ability to facilitate collaborative model training across distributed devices while ensuring data privacy [6]. FL is particularly useful in scenarios where data is non-independent and identically distributed (non-IID), which poses challenges for model convergence and performance [7,8]. Knowledge distillation (KD) is a technique used to enhance FL by sharing knowledge between the central server and edge devices. It involves transferring knowledge from a larger model (teacher) to a smaller model (student) to improve performance and efficiency [1]. KD can be particularly effective in addressing non-IID data challenges and improving communication efficiency [7]. Dispersed classification involves training models across distributed datasets to improve classification accuracy and efficiency. Techniques such as the traveling model (TM) setup have proven effective in scenarios with limited data availability, demonstrating the potential of dispersed classification in real-world applications [9].
Conflict recognition and coalition formation in dispersed data classification are critical aspects of federated learning and distributed machine learning systems. Techniques such as clustered aggregation and knowledge distillation-based regularization have been proposed to address this issue [10]. For instance, CADIS handles cluster-skewed non-IID data by clustering clients and applying knowledge distillation to improve model performance. Clustering clients based on data similarities can enhance model performance and communication efficiency. FedGK groups clients with similar data distributions, improving accuracy and reducing communication costs [11]. Similarly, cluster-based FL methods group clients with different learning tasks or resource capabilities to optimize aggregation [10]. Concept drift, where data distributions change over time, is another challenge. Drift detection techniques, such as error rate-based and data distribution-based methods, help to identify when the global model needs updating to maintain accuracy [12]. Conflict recognition in dispersed data classification involves addressing non-IID data and concept drift, while coalition formation uses clustering and adaptive combination rules to improve model performance and communication efficiency. Adaptive strategies are essential for changing and diverse environments as they enable ongoing adjustment to new data and system conditions [13]. However, the existing literature has yet to explore the integration of local models within a conflict analysis framework. This paper addresses that gap by proposing a novel approach to conflict analysis in dispersed data classification tasks.
In this paper, we examine the Pawlak conflict model [14,15], a fundamental framework based on rough set theory [16] that provides valuable insights into the nature of conflict. Rough set theory is a powerful mathematical tool for analyzing imprecise or incomplete information, utilizing lower and upper approximations to effectively manage uncertainty and conflict in decision-making. Over the years, this theory has been extensively studied by numerous researchers [17,18], leading to various extensions and refinements. One significant development is the three-way decision theory proposed by Yao [19], which introduces a nuanced approach to decision-making under uncertainty. Additionally, alternative conflict analysis models, such as the three-sectioning of agents and issues, leverage rough set approximations to enhance decision-making frameworks [20,21]. Stepaniuk and Skowron have further explored the application of rough set theory in hierarchical and constraint-based conflict scenarios, revealing its adaptability to complex decision environments [22]. Moreover, rough set theory has been successfully applied in multiple criteria decision analysis (MCDA) [23,24], reinforcing its suitability for handling conflicts characterized by high uncertainty and incomplete information [25,26]. These contributions collectively highlight the versatility of rough set-based approaches in conflict resolution and decision analysis, paving the way for more robust and interpretable models.
The initial experiments, which preliminarily applied Pawlak’s analysis, were presented in [27], where one type of local model was used with a classical approach to coalition formation. Further studies [28,29] explored Pawlak’s approach and a static model for dispersed data, assuming the same attribute sets in all local data. However, the present work introduces significant extensions and adopts a novel perspective on classification based on dispersed data.

3. Research Methods and Approach

Dispersed data is collected independently, without standardized coordination, leading to potential inconsistencies and significant structural variations. Moreover, such data often must be protected and not shared between local units. A solution is to develop local models based on the individual datasets separately. In this study, we assume that local data is represented in tabular form as local decision tables. We have access to local decision tables represented as $D_i = (U_i, A_i, d)$ for $i \in \{1, \ldots, n\}$. In this context, $U_i$ represents the universe, a set of objects; $A_i$ is a set of conditional attributes describing features of the objects; and $d$ denotes the decision attribute, which represents labels. To realistically model dispersed data, we assume that both objects and attributes may vary across local tables, with some appearing in multiple datasets. Additionally, we assume that it is not possible to determine which objects are shared between local tables, as identifying overlaps would require information exchange, which is restricted.
In the proposed approach, local models built from local datasets are characterized by their prediction vectors and grouped into coalitions. Various types of local classifiers are employed in this study, including single-model approaches and ensembles. Classifier ensembles are used to improve noise tolerance and the stability of prediction vectors. A key aspect is also the level of the prediction vector that each local model generates. We use two different levels of prediction: the abstract level and the measurement level. More formally, a local classifier $i$ is built based on local table $D_i$ and, for a new object $\hat{x}$, it generates a prediction vector $[\mu_{i,1}(\hat{x}), \ldots, \mu_{i,c}(\hat{x})]$ whose dimension $c$ equals the number of decision classes. When the local classifier is an ensemble, the coordinate $\mu_{i,j}(\hat{x})$ is the number of votes cast by the base classifiers for the $j$-th decision class. When the local classifier is a single-model approach, the value 1 is assigned to the selected class in the prediction vector and 0 to all other classes present in the system. In this way, we obtain vectors at the measurement level, where each coordinate corresponds to the support for the object $\hat{x}$ belonging to a given decision class. At the abstract level, the prediction vector is composed of zeros and ones, indicating exactly one decision class for the test object: the class with the highest support is selected as the decision and assigned one, and all other classes are assigned zero. These prediction vectors are used to determine global decisions based on dispersed data. It will be clearly stated whenever vectors from the measurement level or the abstract level are used. In the second stage of the proposed framework, the identification of the compatibility and diversity of classifiers, vectors from the measurement level are used.
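To make the two prediction levels concrete, the following is a minimal sketch in Python (the language used for the experiments); predict_proba, the helper names, and the tie-breaking convention are illustrative assumptions rather than the authors’ implementation.

import numpy as np

def measurement_vector(model, x):
    """Measurement-level vector: per-class support for object x.

    For a probabilistic model this is the class-probability vector;
    for a voting ensemble it would instead hold per-class vote counts.
    """
    return model.predict_proba(np.asarray(x).reshape(1, -1))[0]

def abstract_vector(measurement):
    """Abstract-level vector: 1 for the top-supported class, 0 elsewhere.

    Ties are broken in favour of the first maximal coordinate here,
    which is one possible convention; the paper assigns identical
    values to classes with equal support.
    """
    out = np.zeros_like(measurement, dtype=float)
    out[int(np.argmax(measurement))] = 1.0
    return out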
We propose a dynamic classification system for dispersed data. For each object to be classified, relations between local models are analyzed based on the measurement level prediction vectors. These relations are evaluated using Pawlak’s conflict analysis model [14]. Previous studies have focused only on homogeneous coalitions [19,22], while, in this study, we analyze two approaches to forming groups of local models. The first approach identifies compatible coalitions, where models generating similar prediction vectors are grouped based on their agreement in decision-making. We call these unified coalitions. The second approach, in contrast, forms diverse coalitions, grouping models that exhibit significant differences in their decision evaluations for a given object. This dual strategy enables a more comprehensive assessment of classification reliability. The unified coalitions approach assumes that a consensus majority is correct and should be followed. In contrast, the diversity approach is inspired by the ensemble perspective, emphasizing the importance of preserving diversity, which significantly improves model quality.
Formally, for each classified object, we use the measurement-level prediction vectors and create an information system $S = (LM, V)$, where $LM$ is the set of local models and $V$ is the set of decision attribute values; this system stores information about the conflict situation. For each $v \in V$, the function $v: LM \to \{-1, 0, 1\}$, with $i \in LM$, is defined as

$$v(i) = \begin{cases} 1 & \text{if } \mu_{i,v}(\hat{x}) \text{ is the maximum value in the prediction vector,} \\ 0 & \text{if } \mu_{i,v}(\hat{x}) \text{ is the second-highest value,} \\ -1 & \text{otherwise.} \end{cases} \qquad (1)$$

The above function can be interpreted as follows. For each model, the decision with the strongest support is assigned a value of 1, indicating the model’s endorsement. The second most supported decision receives a value of 0, signifying neutrality. All remaining decisions are assigned a value of −1, indicating opposition. If multiple decision classes have the same probability, they are assigned identical values. We use the conflict function $\rho: LM \times LM \to [0, 1]$ to evaluate the consistency of decisions made by local models:

$$\rho(i, j) = \frac{\mathrm{card}\{v \in V : v(i) \neq v(j)\}}{\mathrm{card}\{V\}}, \qquad (2)$$

where $\mathrm{card}\{V\}$ is the cardinality of the set of decision classes and $i, j \in LM$.
Conflict function values are calculated for each pair of local models. In the classic Pawlak approach, models are considered allies if the conflict function value is below 0.5. Similarly, in the proposed approach, this criterion is used to establish unified coalitions: a set $X \subseteq LM$ is a unified coalition of local models if, for every $i, j \in X$, we have $\rho(i, j) < 0.5$.
Conversely, if the primary objective is to form diverse coalitions, we group models that are in conflict with each other. As a result, a diverse coalition is a set $X \subseteq LM$ of local models such that, for every $i, j \in X$, we have $\rho(i, j) \geq 0.5$. Coalitions can overlap, allowing a single local model to belong to multiple coalitions simultaneously. As a result, during voting, a single model may contribute to multiple coalitions when making a global decision. The complete algorithmic procedure for distinguishing coalitions is presented in Algorithm 1, and a sketch of one possible implementation follows it. The conflict function is straightforward and computationally simple: a comparison of vector values established locally. However, the procedure involves exhaustive pairwise computations, making it appropriate for moderately sized systems; it can be used in larger setups when combined with suitable optimization strategies.
Algorithm 1: Creation of coalitions
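Algorithm 1 is presented as a figure in the original article. The following Python sketch is one way to realize the described procedure under the assumptions stated in the text (issue values 1/0/−1, the 0.5 threshold, overlapping coalitions); enumerating coalitions as maximal cliques of the pairwise relation graph via networkx is an implementation choice of this sketch, not necessarily the authors’ method.

import numpy as np
import networkx as nx  # used here only to enumerate maximal cliques

def issue_values(pred):
    """Map a measurement-level prediction vector to Pawlak issue values:
    1 for the maximally supported class(es), 0 for the second-highest
    support, -1 for the rest; tied supports receive identical values."""
    support = np.asarray(pred, dtype=float)
    top = support.max()
    below = support[support < top]
    second = below.max() if below.size else top
    vals = np.full(support.shape, -1, dtype=int)
    vals[support == second] = 0
    vals[support == top] = 1
    return vals

def conflict(vi, vj):
    """Pawlak conflict function rho: the fraction of decision classes
    (issues) on which two models take different stances."""
    return float(np.mean(vi != vj))

def coalitions(pred_vectors, diverse=False):
    """Enumerate (possibly overlapping) maximal coalitions.

    Unified: rho < 0.5 for every pair; diverse: rho >= 0.5 for every
    pair. A coalition is then a maximal clique of the relation graph;
    isolated models yield single-model coalitions, as in Example 1.
    """
    vals = [issue_values(p) for p in pred_vectors]
    g = nx.Graph()
    g.add_nodes_from(range(len(vals)))
    for i in range(len(vals)):
        for j in range(i + 1, len(vals)):
            rho = conflict(vals[i], vals[j])
            if (rho >= 0.5) if diverse else (rho < 0.5):
                g.add_edge(i, j)
    return [set(c) for c in nx.find_cliques(g)]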
Once the coalitions are formed, we move to the third stage: global decision-making. At this stage, we use prediction vectors from the measurement or the abstract level, depending on the approach, and employ the coalitions in the classification process. In this study, we utilize three methods. The first approach uses the strongest coalitions, determined by the number of models they contain. The selected coalitions are those with the largest membership; if multiple coalitions share the maximum size, they are all included. The prediction vectors of all the models within these coalitions are then aggregated (summed) to create a single prediction vector. If a model belongs to multiple coalitions, its prediction vector is counted each time it appears. The final decision is the class with the largest coefficient in the aggregated prediction vector. Additionally, a variant of the single strongest coalition method with weights is applied. In this approach, when summing the prediction vectors, each local model’s assigned weight is taken into account (the prediction vector is multiplied by the weight), ensuring a weighted influence on the final decision. The weight assigned to each model corresponds to its accuracy, which is estimated by evaluating the model on a validation set.

The validation set was not dispersed, as global classification requires complete conditional attribute information for each object. Cross-validation was not applied due to the complexity introduced by data dispersion; instead, a train/test split was used. To ensure robustness and reduce the impact of random partitioning, each experiment was repeated ten times with different random seeds, and the reported results represent metrics averaged across these repetitions. The weights assigned to local models do not depend on the level of dispersion but are derived from their classification accuracy on the validation set. This design ensures that weighting reflects intrinsic model performance rather than the structural properties of the data distribution. The risk of overfitting to the validation set is minimal because the validation process is used exclusively for weight estimation and does not involve retraining, hyperparameter tuning, or iterative optimization. Furthermore, the validation set remains separate from the test set used for global decision-making, preventing any leakage between evaluation phases.

The second approach involves selecting the two strongest coalitions: we first identify the coalition with the highest number of local models, followed by those ranking second in size. The rest is analogous to the previous approach, and again both weighted and unweighted versions are used. The final approach incorporates all coalitions. An aggregate prediction vector (weighted or unweighted) is computed for each coalition, and these vectors are then summed. Unlike the standard fusion method (the sum rule), this method counts models that belong to multiple coalitions multiple times. All the described stages of the proposed hierarchical framework with conflict analysis for ensembles of local models are shown in Figure 1. The fusion process is described in Algorithm 2.
Algorithm 2: Weighted fusion of prediction vectors from selected coalitions
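Algorithm 2 is likewise presented as a figure in the original. The following is a minimal sketch of the fusion step under the conventions described above; the function and parameter names are illustrative, weights default to 1 for the unweighted variants, and coalitions are expected in the form produced by the previous sketch.

import numpy as np

def fuse(pred_vectors, coalitions, weights=None, strategy="one"):
    """Aggregate prediction vectors from the selected coalitions.

    strategy: "one" keeps the largest coalition(s) (ties included),
    "two" keeps the two largest sizes, "all" keeps every coalition.
    A model belonging to several selected coalitions is counted once
    per coalition, as described in the text.
    """
    if weights is None:
        weights = np.ones(len(pred_vectors))  # unweighted variant
    sizes = sorted({len(c) for c in coalitions}, reverse=True)
    kept = {"one": sizes[:1], "two": sizes[:2], "all": sizes}[strategy]
    total = np.zeros(len(pred_vectors[0]))
    for coalition in (c for c in coalitions if len(c) in kept):
        for i in coalition:
            total += weights[i] * np.asarray(pred_vectors[i], dtype=float)
    # The global decision is the class with the largest coefficient.
    return int(np.argmax(total)), total

For the unified coalitions of Example 1 below, which are five singletons tied at the maximum size, every strategy selects all five models, which reproduces the aggregated vectors reported in the example.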
To better illustrate the application of the proposed coalition approach, consider the following example.
Example 1.
Consider a classification task for object $x$ based on five local tables $D_1, \ldots, D_5$, representing dispersed data. Additionally, assume that each local table shares the same label with four possible values, resulting in four decision classes $V = \{v_1, \ldots, v_4\}$. No restrictions are imposed on the attribute sets or object sets. A local model is trained on each local table. Each local model generates a prediction vector from the measurement level, as shown in Table 1.
Based on these prediction vectors, an information system $S = (LM, V)$ is constructed according to Formula (1), presented in Table 2. Next, the conflict function values are calculated for each pair of models, as shown in Table 3. Notably, no pair of models shares an allied relation, as none of the conflict function values fall below 0.5. Consequently, if we focus on unified coalitions, only single-model sets will be coalitions: $\{1\}, \{2\}, \{3\}, \{4\}, \{5\}$. However, when considering diverse coalitions, four distinct coalitions will be created: $\{2, 3, 5\}$, $\{1, 5\}$, $\{1, 4\}$, $\{4, 3\}$. In this case, diverse coalitions provide a clearer and more insightful interpretation of the conflict situation.
Next, we will explore different strategies for making the final decision. Suppose that, based on the validation set, weights for the local models were assigned as follows: $\omega_1 = 0.4$, $\omega_2 = 0.05$, $\omega_3 = 0.05$, $\omega_4 = 0.1$, and $\omega_5 = 0.4$. In the case of unified coalitions, all approaches (the one strongest coalition, the two strongest coalitions, and all coalitions) will yield the same result:
  • without weights: aggregated vector $[1.6, 1.1, 1.2, 1.1]$, final decision $v_1$;
  • with weights: aggregated vector $[0.295, 0.245, 0.315, 0.145]$, final decision $v_3$.
However, when considering diverse coalitions, the results will differ. For the single strongest coalition $\{2, 3, 5\}$, we obtain
  • without weights: aggregated vector $[1.1, 0.6, 0.7, 0.6]$, final decision $v_1$;
  • with weights: aggregated vector $[0.125, 0.135, 0.175, 0.065]$, final decision $v_3$.
For the two strongest coalitions and for all coalitions, we obtain
  • without weights: aggregated vector $[2.9, 2, 2.2, 1.9]$, final decision $v_1$;
  • with weights: aggregated vector $[0.575, 0.48, 0.62, 0.275]$, final decision $v_3$.
    As demonstrated in the example, the choice of coalition type, the decision-making strategy, and the application of weights all play a crucial role in determining the final decision. This analysis underscores the necessity of selecting the appropriate coalition structure and weighting strategy based on the specific problem domain to improve classification accuracy and decision reliability.

4. Datasets and Experimental Design

The study evaluated the proposed approaches using four datasets from the UC Irvine Machine Learning Repository [30,31,32,33], as well as one empirical dataset [34]. The Avian Influenza dataset is a real-world dataset that tracks human infections globally, with data collected from 12 countries. It is organized into four local tables based on object counts and includes data from regions such as Egypt, Vietnam, and Indonesia. Each table contains conditional attributes tailored to its region, enabling researchers to build predictive models and conduct epidemiological studies. This structured dataset plays a critical role in understanding the global impact of avian influenza on human health.
The Car Evaluation dataset comprises six categorical features, including buying price, maintenance cost, number of doors, seating capacity, luggage boot size, and safety level. The target variable categorizes cars into four distinct classes: unacceptable, acceptable, good, and very good. Its clear structure and categorical nature make it particularly suitable for benchmarking machine learning algorithms in classification tasks.
The Soya dataset is focused on diagnosing soybean diseases, featuring 19 categorical attributes related to factors such as plant conditions, precipitation levels, and infection symptoms. The dataset includes four target classes representing different types of soybean diseases. Its categorical data make it an excellent resource for testing algorithms in multi-class classification problems.
The Vehicle dataset is designed to classify four different types of vehicles based on a set of 18 numerical features extracted from silhouettes of the vehicles. These features include measures such as compactness, circularity, and rectangularity, among others. The target variable distinguishes between four vehicle classes: bus, opel, saab, and van.
The Lymphography dataset contains 18 categorical features extracted from lymphographic examinations, providing detailed descriptions of lymph node characteristics, such as shape, structural blockages, and extravasation patterns. These features capture essential information used in medical diagnostics, supporting the analysis of lymphatic system conditions. The target variable categorizes the instances into four distinct classes, representing different diagnostic conditions: normal, metastases, malign lymph, and fibrosis.
A summary of the datasets’ characteristics is presented in Table 4, highlighting their diverse structures and features, which contribute to a comprehensive evaluation of the proposed approaches. Table 5 contains lists of all classes for each set.
The research examines two methodologies for data dispersion: attribute dispersion and object dispersion. The datasets obtained from the UCI Machine Learning Repository were initially available as single tables and were then dispersed across various local tables. For each dataset, five versions of dispersion were prepared: into 3, 5, 7, 9, and 11 local tables. For the Soybean, Vehicle Silhouettes, and Lymphography datasets, attributes were dispersed randomly across the tables, ensuring that each table contained a balanced number of attributes. While the same objects were included in all tables, their identifiers were omitted to mimic a real-world scenario [35], where recognizing identical objects across different sources is impossible. This simulates a dispersed data environment with attribute-level dispersion. In contrast, the Car Evaluation dataset was dispersed based on objects. Objects were divided among the tables randomly, in a stratified manner, ensuring that all attributes were retained within each table. This methodology aligns with scenarios where dispersed data represents different subsets of objects across tables but retains complete attribute information. The Avian Influenza dataset was divided into four local tables based on geographic origin. Data was collected from 12 countries, and each local table corresponds to a specific region, such as Egypt, Vietnam, or Indonesia. These tables include conditional attributes and objects from the respective countries, enabling the development of localized predictive models and facilitating regional epidemiological analyses.
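As an illustration of the two dispersion schemes, consider the sketch below; the balanced random column split and the use of StratifiedKFold to obtain stratified object subsets are plausible realizations of the description above, not the exact generation code used in the study.

import numpy as np
from sklearn.model_selection import StratifiedKFold

def disperse_attributes(X, n_tables, seed=0):
    """Attribute dispersion: randomly split columns into n_tables
    balanced groups; every local table keeps all objects (no ids)."""
    rng = np.random.default_rng(seed)
    groups = np.array_split(rng.permutation(X.shape[1]), n_tables)
    return [X[:, np.sort(g)] for g in groups]

def disperse_objects(X, y, n_tables, seed=0):
    """Object dispersion: stratified random split of objects into
    n_tables local tables, each keeping all attributes."""
    skf = StratifiedKFold(n_splits=n_tables, shuffle=True, random_state=seed)
    return [(X[idx], y[idx]) for _, idx in skf.split(X, y)]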
To ensure robust and reliable evaluation, each test was repeated 10 times to assess the stability of all methods. Despite the stochastic nature of some approaches, a set of 10 random seeds was pre-selected to ensure reproducibility. Each test followed a consistent pipeline, beginning with training local models on the dispersed tables, generating validation sets, and concluding with global decision-making using the evaluated methodologies. Validation sets were obtained by splitting each test set in half in a stratified way, ensuring that all decision classes are present in both the test and validation sets.
The classification quality was assessed using a diverse set of performance metrics, including classification accuracy (Acc), recall, precision (Prec.), and balanced accuracy (BAcc). These metrics provide a comprehensive evaluation of the methods, capturing different aspects of performance, such as overall accuracy, sensitivity, and precision in classification tasks.
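In code, the stratified half-split and the reported metrics might look as follows; X_holdout, y_holdout, y_pred, and seed are placeholder names, and macro averaging for precision and recall is an assumption, as the averaging scheme is not stated in the text.

from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             precision_score, recall_score)

# Split the held-out data in half, stratified on the labels, so that
# every decision class appears in both the validation and test parts.
X_val, X_test, y_val, y_test = train_test_split(
    X_holdout, y_holdout, test_size=0.5, stratify=y_holdout,
    random_state=seed)

# Quality measures used throughout the paper.
scores = {
    "Acc": accuracy_score(y_test, y_pred),
    "BAcc": balanced_accuracy_score(y_test, y_pred),
    "Prec.": precision_score(y_test, y_pred, average="macro"),
    "Recall": recall_score(y_test, y_pred, average="macro"),
}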
Each dataset, characterized by varying degrees of dispersion, is evaluated using three approaches: without coalitions, with coalitions representing agents’ agreement, and with coalitions representing agents’ disagreement. Additionally, three distinct approaches to global decision-making were employed: the single strongest coalition, the two strongest coalitions, and all coalitions, along with two methods for generating prediction vectors: the abstract level and the measurement level. Furthermore, the study evaluated both weighted and unweighted approaches to decision aggregation, analyzing the impact of weights on the final outcomes. The following notations are used:
  • AL—prediction vectors from the abstract level;
  • ML—prediction vectors from the measurement level;
  • W—means that the weights for the prediction vectors were used;
  • U—means that unified coalitions were used;
  • D—means that diverse coalitions were used;
  • 1—means that the final decision is made by one strongest coalition;
  • 2—means that the final decision is made by two strongest coalitions;
  • All—means that the final decision is made by all coalitions.
By systematically combining the above options to explore all feasible and meaningful configurations, 4 baseline approaches without coalitions and 24 distinct approaches with coalitions and voting were obtained:
  • AL—local classifiers generate predictions from the abstract level, and then the vectors are summed.
  • AL-W—local classifiers generate predictions from the abstract level, and then the vectors are summed using weights proportional to the quality of the model evaluated on the validation set.
  • U-AL-1—unified coalitions are created, local classifiers generate predictions from the abstract level, and, finally, the prediction vectors from only the one strongest coalition are summed.
  • U-AL-All—unified coalitions are created, local classifiers generate predictions from the abstract level, and, finally, the prediction vectors from all coalitions are summed.
  • U-AL-2—unified coalitions are created, local classifiers generate predictions from the abstract level, and, finally, the prediction vectors from only the two strongest coalitions are summed.
  • U-AL-W-1—similar to U-AL-1 but, when summing prediction vectors, weights proportional to the quality of the local model are used.
  • U-AL-W-All—similar to U-AL-All but, when summing prediction vectors, weights proportional to the quality of the local model are used.
  • U-AL-W-2—similar to U-AL-2 but, when summing prediction vectors, weights proportional to the quality of the local model are used.
  • D-AL-1—diverse coalitions are created, local classifiers generate predictions from the abstract level, and, finally, the prediction vectors from only the one strongest coalition are summed.
  • D-AL-All—diverse coalitions are created, local classifiers generate predictions from the abstract level, and, finally, the prediction vectors from all coalitions are summed.
  • D-AL-2—diverse coalitions are created, local classifiers generate predictions from the abstract level, and, finally, the prediction vectors from only the two strongest coalitions are summed.
  • D-AL-W-1—similar to D-AL-1 but, when summing prediction vectors, weights proportional to the quality of the local model are used.
  • D-AL-W-All—similar to D-AL-All but, when summing prediction vectors, weights proportional to the quality of the local model are used.
  • D-AL-W-2—similar to D-AL-2 but, when summing prediction vectors, weights proportional to the quality of the local model are used.
    All the above approaches were also used in combination with prediction vectors from the measurement level. Thus, we obtained approaches ML, ML-W, U-ML-1, U-ML-All, U-ML-2, U-ML-W-1, U-ML-W-All, U-ML-W-2, D-ML-1, D-ML-All, D-ML-2, D-ML-W-1, D-ML-W-All, and D-ML-W-2.
In addition, during the experiments, we tested different models for generating local classifiers:
  • k-Nearest Neighbors (KNN);
  • Decision Tree (DT);
  • Gradient Boosting (GB);
  • Random Forest (RF);
  • Random Subspace (RS);
  • AdaBoost.
Various numbers of estimators were tested for Gradient Boosting, Random Forest, Random Subspace, and AdaBoost, specifically 20, 50, 100, 200, and 500. The tables present the best result achieved for each dataset, along with the corresponding number of estimators that produced this outcome. The following parameter values were tested for the k-Nearest Neighbors classifier: 1, 2, 3, 4, and 5. The Decision Tree was built with the defaults of the sklearn library in Python 3.13.0. The stages of building the Random Subspace model from Decision Trees are presented in Algorithm 3. From all these parameters, the best results were selected and are included in Tables A1–A21 in Appendix A. The values presented in the tables represent the average metrics calculated across all 10 evaluations. For each dataset, the best-performing score is highlighted in blue to indicate the top result.
Algorithm 3: Creation of Random Subspace model
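Algorithm 3 is also given as a figure in the original. A compact sketch of a Random Subspace ensemble built from Decision Trees follows; the subspace size of half the attributes is an assumed default, since the fraction is not specified in the text.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

class RandomSubspaceTrees:
    """Random Subspace ensemble: each Decision Tree is trained on all
    objects but only a random subset of the conditional attributes."""

    def __init__(self, n_estimators=100, subspace_frac=0.5, random_state=None):
        self.n_estimators = n_estimators
        self.subspace_frac = subspace_frac  # assumed default, see lead-in
        self.random_state = random_state

    def fit(self, X, y):
        rng = np.random.default_rng(self.random_state)
        n_features = X.shape[1]
        k = max(1, int(self.subspace_frac * n_features))
        self.classes_ = np.unique(y)
        self.members_ = []
        for _ in range(self.n_estimators):
            feats = np.sort(rng.choice(n_features, size=k, replace=False))
            tree = DecisionTreeClassifier().fit(X[:, feats], y)
            self.members_.append((feats, tree))
        return self

    def votes(self, X):
        """Measurement-level output: per-class vote counts of the trees."""
        counts = np.zeros((X.shape[0], self.classes_.size))
        for feats, tree in self.members_:
            pred = tree.predict(X[:, feats])
            for ci, c in enumerate(self.classes_):
                counts[pred == c, ci] += 1
        return counts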

5. Experimental Results and Comparisons

The experiments covered a wide range of combinations. A total of 28 different approaches were tested using six distinct local models across 21 dispersed dataset versions, resulting in 3528 unique experiment configurations. Each experiment was repeated 10 times, and parameter values (for the ensemble methods and k-NN) were also varied to ensure robustness. Given the extensive volume of results, the detailed numerical outcomes are provided in Tables A1–A21 in Appendix A. However, to enhance clarity and facilitate interpretation, the key findings are presented graphically below. For each of the 21 dataset versions, two types of visual representations were prepared. The first type illustrates four evaluation metrics (precision, recall, balanced accuracy, and accuracy), each displayed separately. These metrics are collected for all tested approaches 1–28 (marked on the Y-axis), and the results are aggregated across different local classifiers (in one row), offering a comprehensive overview of the performance trends. The graphs are presented in Figures 2–22 for each dataset separately.
The second type of graph includes heat maps that visualize performance across datasets and approaches, specifically highlighting the balanced accuracy metric for the various local classifier models. These heat maps provide a clear comparison of classification effectiveness, helping to identify patterns and trends across methods. The corresponding charts are shown in Figures 23–27.
Some general comments can be made based on the figures and the tables in Appendix A. For the Avian dataset, methods using measurement-level predictions with any of the coalition approaches obtained the best results. Random Forest proved to be the best local classifier.
For the Car dataset, when the dispersion level was low, all the methods performed similarly well. However, as the dispersion increased (especially with 7, 9, or 11 local tables), performance differences became more visible. The most effective approaches were those that incorporated coalitions and utilized abstract-level predictions, while the choice of final decision-making method had a less significant impact. Among the local classifiers, Decision Tree and Gradient Boosting demonstrated the best performance for this dataset.
The Soybean dataset shows clear performance differences across approaches, independently of the dispersion level. The most significant factor influencing performance is the use of weights, which consistently led to the best results. Among the weighted approaches, those employing unified coalitions with abstract-level predictions demonstrated a slight advantage over the others. The local classifiers that achieved the best results for the Soybean data were Random Forest and Gradient Boosting.
As the degree of dispersion increases, the performance gap between different approaches becomes more apparent for the Vehicle dataset. While at lower levels of dispersion the differences are less noticeable, at higher dispersion levels it becomes evident that diverse coalition approaches lead to the weakest results. This suggests that maintaining unified, compatible coalitions is essential for this particular dataset to ensure stability and accuracy. Alternatively, treating all local classifiers equally, without a coalition-based approach, is also a good strategy. The Random Subspace method was the most effective among the various classifiers tested, consistently delivering superior results compared to the other models.
For the Lymphography dataset, the performance of the tested approaches differs significantly depending on the degree of dispersion. In configurations with 5, 7, or 9 local tables, the most effective methods incorporated diverse coalitions, abstract-level predictions, and weighted aggregation. However, with 11 local tables, the unified coalition approach outperformed the others. Regarding the local classifier model, no single method consistently proved to be the best, as different models performed best under different dispersion configurations.
The results demonstrate that no single approach consistently outperforms the others across all the datasets, emphasizing the importance of dataset-specific adaptation when selecting classification methods. Coalition-based methods proved beneficial for some datasets, such as Car and Soybean, where abstract-level predictions and weighted aggregation improved accuracy. Weighted approaches consistently improved performance on datasets such as Soybean and Lymphography, highlighting the significance of weighting prediction vectors for better decision-making. The findings suggest that the most effective approach depends on the dataset characteristics, the degree of dispersion, and the coalition structure used. While weighted and coalition-based methods generally improve classification accuracy, their success is not universal, reinforcing the need for strategies adapted to specific data properties. More general comparisons will be made using statistical inference.
Statistical tests were conducted to assess the average differences between the results obtained using different methods and approaches. Specifically, the comparisons were based on the balanced accuracy measure presented in Tables A1–A21. Balanced accuracy (BAcc) was selected as the primary metric because it accounts for class imbalance and provides a more reliable evaluation than raw accuracy in multi-class settings; in dispersed data environments, the distribution of decision classes across local tables is highly heterogeneous, leading to potential bias when using standard accuracy. The first comparison aimed to identify the best-performing method out of the 28 tested. To achieve this, 28 dependent samples were created, each consisting of 126 observations, representing the results for each dataset, dispersion version, and model used for local classifiers. Given that balanced accuracy is ratio-scaled and that the normality assumption could not be confirmed, the Friedman test was employed to evaluate whether there were statistically significant differences in the balanced accuracy values across the methods. The Friedman test indicated significant differences among the approaches ($\chi^2(27) = 326.96$, $p < 0.000001$), with Kendall’s $W = 0.42$, suggesting a moderate effect size. The Iman–Davenport extension confirmed these findings ($F(27, 783) = 19.63$, $p < 0.000001$). For completeness, the Skillings–Mack test for missing data yielded consistent results ($SM = 258.69$, $p < 0.000001$). Post hoc comparisons were conducted using the Dunn–Bonferroni procedure ($\alpha = 0.05$), and adjusted p-values are reported in Table 6.

A comparative box plot (Figure 28) illustrates that the balanced accuracy results are slightly higher for weighted approaches than for their unweighted counterparts. The figure also demonstrates the superiority of approaches D-AL-W-1, D-AL-W-All, and D-AL-W-2 over the others. In summary, the statistical test identified a homogeneous group of best-performing methods that are comparable in terms of balanced accuracy: D-AL-1, D-AL-2, D-AL-W-1, D-AL-W-All, D-AL-W-2, D-ML-1, D-ML-2, D-ML-W-1, D-ML-W-All, and D-ML-W-2. The methods in this group are statistically indistinguishable; there are no significant differences in their performance based on the statistical test. This group consists of diverse coalition-based approaches that use different prediction levels (abstract or measurement). Some methods incorporate weighted prediction vectors. The inclusion of different decision-making strategies (strongest coalition, two strongest, or all coalitions) within this group suggests that the choice of decision mechanism does not significantly affect performance in these cases. However, when considering explainability and computational complexity, the most effective approach is to use the single strongest coalition: focusing on a small, well-defined group provides a clearer and more interpretable basis for decision-making, and it significantly reduces computational complexity, making it both practical and efficient.
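For reference, the omnibus and post hoc tests reported above can be reproduced with standard tooling; results is a hypothetical 126 × 28 array of balanced accuracies (rows are dataset/dispersion/local-model configurations, columns are methods), and the third-party scikit-posthocs package is assumed for Dunn’s test.

import numpy as np
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp  # third-party package: scikit-posthocs

# Omnibus Friedman test across the 28 dependent samples (one per method).
stat, p = friedmanchisquare(*(results[:, m] for m in range(results.shape[1])))

# Dunn's post hoc test with Bonferroni adjustment; returns a matrix of
# adjusted p-values for every pair of methods, as in Table 6.
pvals = sp.posthoc_dunn(list(results.T), p_adjust="bonferroni")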
The effectiveness of these approaches demonstrates that diverse coalitions provide a significant advantage in classification tasks as they integrate multiple perspectives and reduce the risk of overfitting to specific patterns. This highlights the importance of considering the diversity in one coalition, confirming that combining distinct decision-making strategies leads to more stable and reliable results.
The next step involved comparing the models used to build local classifiers to determine which one produces the best results. Balanced accuracy was once again used as the evaluation metric. Six dependent groups were created, each corresponding to a specific type of local model: KNN, DT, GB, RF, RS, and AdaBoost. Each group comprised 588 observations, representing the results obtained from 28 different methods across the various datasets (as presented in Tables A1–A21). The Friedman test was employed to evaluate whether there were statistically significant differences in the balanced accuracy values across the models. The test revealed a statistically significant difference in mean balanced accuracy among the six local models ($\chi^2(587, 5) = 701.05$, $p = 0.000001$). To identify the specific differences, a post hoc Dunn–Bonferroni test was conducted, with statistically significant results highlighted in blue in Table 7. A comparative box plot is presented in Figure 29. The comparisons revealed that the Decision Tree performs best as a local classifier, consistently obtaining the best results. The Random Forest and Gradient Boosting approaches follow closely, securing second place. In contrast, the remaining methods analyzed produce significantly poorer outcomes. The poorest-performing methods for the proposed dispersed framework are k-Nearest Neighbors and AdaBoost.
In the next comparison, we delve deeper into the final approach for determining the global decision. Specifically, we aim to test whether considering only the single strongest coalition generates results comparable to the other methods. To achieve this, three dependent samples were created, each containing corresponding results for the various datasets, coalition generation methods, and local classifier models. These samples represent three groups: the single strongest coalition, the two strongest coalitions, and all coalitions. Each group contained 1008 observations. The Friedman test confirmed a statistically significant difference in mean balanced accuracy among the three methods for global decision-making ($\chi^2(1007, 2) = 9.52$, $p = 0.009$). To identify the specific differences, a post hoc Dunn–Bonferroni test was conducted. This test showed no statistically significant differences between any pair of the tested samples (Table 8), which is also confirmed by the box plot (Figure 30). We can conclude that focusing on a single well-defined coalition provides a more interpretable decision-making process, which is beneficial in scenarios where transparency is critical. Additionally, processing only the strongest coalition reduces computational complexity, making this method more efficient, especially for large-scale problems.
The experimental results highlight the complexity of designing effective classification strategies for dispersed datasets. Let us now summarize the key findings of this study. The proposed coalition-based approaches demonstrated significant advantages over individual classifiers on most datasets, especially when employing diverse and weighted coalitions. However, their effectiveness varied depending on the degree of dispersion and the nature of the dataset. Some datasets benefited from diverse coalitions, while others performed better with unified coalitions. For example, diverse coalitions performed best on the Soybean and Lymphography datasets, where heterogeneous classifiers allowed for better generalization. Unified coalitions were more effective on the Vehicle dataset when dispersion was high, ensuring consistency in classification decisions. Across different datasets, weighted decision strategies consistently improved classification accuracy. Approaches with diverse coalitions, predictions from the abstract level, and weights (D-AL-W-1, D-AL-W-All, and D-AL-W-2) were among the highest-ranking approaches across multiple datasets. The choice of local classifiers in the proposed framework for dispersed data had a major impact on classification accuracy: Decision Tree, Random Forest, and Gradient Boosting consistently outperformed the others, making them reliable choices across different datasets. The study confirmed that the final decision-making strategies (one strongest coalition, two strongest, and all coalitions) had a limited impact on classification accuracy when a coalition-based approach was used. Using only the strongest coalition was the most computationally efficient strategy, making it preferable in real-world applications where speed and efficiency are critical.

6. Conclusions

This study introduced a coalition-based classification framework for dispersed data, focusing on two distinct coalition formation strategies: unified coalitions, which group classifiers with similar decision tendencies, and diverse coalitions, which aggregate classifiers with contrasting predictions. Additionally, we examined the impact of weighted and unweighted decision aggregation, demonstrating that assigning model-based weights significantly improves classification accuracy.
The experimental results across multiple datasets confirmed that coalition-based approaches outperform methods without coalition recognition, particularly in highly dispersed data environments. While unified coalitions leverage agreement-based decision-making, providing stability in structured data distributions, diverse coalitions improve generalization by preserving classifier diversity. Weighted coalition methods were particularly effective, enhancing classification robustness by prioritizing more reliable models.
Statistical analyses validated the superiority of diverse coalitions with weighted aggregation, particularly in challenging classification tasks. However, the choice between unified and diverse coalitions depends on dataset characteristics and dispersion levels. These findings emphasize the importance of adaptive strategies in decentralized learning settings, where balancing agreement and diversity plays a key role in optimizing classification performance.
The proposed framework provides a flexible and effective approach for dispersed data classification, but several limitations should be acknowledged. The coalition-based approach improves classification performance, but its computational cost increases with the number of local models and dispersion levels. Optimizations in coalition formation and decision aggregation should be explored. The weighting mechanism proposed in the paper is based on model validation accuracy. Alternative adaptive weighting strategies, such as confidence-based weighting, reinforcement learning approaches, or evolutionary algorithms to determine weights, could further enhance decision-making. We plan to address this in future work. We also plan to combine unified and diverse coalitions adaptively, depending on dataset characteristics, to improve classification in complex environments.

Author Contributions

Conceptualization, M.P.-K. and J.S.; methodology, M.P.-K. and J.S.; software, J.S.; validation, M.P.-K. and J.S.; formal analysis, M.P.-K. and J.S.; investigation, M.P.-K. and J.S.; writing—original draft preparation, M.P.-K.; writing—review and editing, M.P.-K. and J.S.; visualization, M.P.-K. and J.S.; supervision, M.P.-K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in UCI Repository at https://archive.ics.uci.edu/ (accessed on 5 December 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Detailed Experimental Results

The appendix contains detailed tables summarizing the experimental results for each dataset analyzed in the study. Each table corresponds to a specific dataset: Avian Influenza (Table A1), Car Evaluation with different versions of dispersion (Tables A2–A6), Soya with different versions of dispersion (Tables A7–A11), Vehicle with different versions of dispersion (Tables A12–A16), and Lymphography with different versions of dispersion (Tables A17–A21). The appendix provides a comprehensive overview of the performance of the tested methods under various configurations. The results include evaluations based on key metrics, such as precision, recall, balanced accuracy, and classification accuracy, allowing for an assessment of each approach. For each dataset, the best-performing score is highlighted in blue to indicate the top result. Heat maps of balanced accuracy for all datasets are also presented in Figures 23–27.
Table A1. Avian dataset—results of precision (Prec.), recall, balanced accuracy (BAcc), and classification accuracy (Acc). The best results are marked in blue.
Method       | K-Nearest Neighbors       | Decision Tree             | Gradient Boosting
             | Prec.  Recall BAcc  Acc   | Prec.  Recall BAcc  Acc   | Prec.  Recall BAcc  Acc
AL           | 0.673  0.62   0.194 0.62  | 0.712  0.66   0.312 0.66  | 0.646  0.609  0.31  0.609
ML           | 0.701  0.711  0.232 0.711 | 0.737  0.678  0.329 0.678 | 0.704  0.68   0.348 0.68
AL-W         | 0.716  0.696  0.263 0.696 | 0.719  0.671  0.308 0.671 | 0.688  0.66   0.26  0.66
ML-W         | 0.715  0.711  0.227 0.711 | 0.717  0.664  0.277 0.664 | 0.67   0.667  0.246 0.667
U-AL-1       | 0.591  0.553  0.182 0.553 | 0.691  0.656  0.328 0.656 | 0.66   0.609  0.305 0.609
U-AL-All     | 0.65   0.629  0.197 0.629 | 0.732  0.678  0.329 0.678 | 0.683  0.629  0.328 0.629
U-AL-2       | 0.659  0.66   0.217 0.66  | 0.712  0.66   0.312 0.66  | 0.646  0.609  0.31  0.609
U-AL-W-1     | 0.655  0.631  0.212 0.631 | 0.717  0.671  0.307 0.671 | 0.688  0.66   0.26  0.66
U-AL-W-All   | 0.701  0.707  0.254 0.707 | 0.717  0.671  0.307 0.671 | 0.688  0.66   0.26  0.66
U-AL-W-2     | 0.702  0.707  0.254 0.707 | 0.716  0.664  0.304 0.664 | 0.688  0.66   0.26  0.66
D-AL-1       | 0.6    0.551  0.186 0.551 | 0.699  0.658  0.317 0.658 | 0.658  0.611  0.311 0.611
D-AL-All     | 0.657  0.584  0.189 0.584 | 0.732  0.678  0.329 0.678 | 0.683  0.629  0.328 0.629
D-AL-2       | 0.65   0.596  0.195 0.596 | 0.699  0.658  0.317 0.658 | 0.658  0.611  0.311 0.611
D-AL-W-1     | 0.602  0.502  0.191 0.502 | 0.718  0.667  0.304 0.667 | 0.688  0.66   0.26  0.66
D-AL-W-All   | 0.628  0.591  0.191 0.591 | 0.718  0.667  0.304 0.667 | 0.688  0.66   0.26  0.66
D-AL-W-2     | 0.634  0.589  0.19  0.589 | 0.718  0.667  0.304 0.667 | 0.688  0.66   0.26  0.66
U-ML-1       | 0.713  0.713  0.228 0.713 | 0.707  0.651  0.287 0.651 | 0.704  0.68   0.348 0.68
U-ML-All     | 0.717  0.713  0.228 0.713 | 0.704  0.653  0.292 0.653 | 0.704  0.68   0.348 0.68
U-ML-2       | 0.717  0.713  0.228 0.713 | 0.704  0.653  0.292 0.653 | 0.704  0.68   0.348 0.68
U-ML-W-1     | 0.714  0.713  0.228 0.713 | 0.713  0.656  0.273 0.656 | 0.67   0.667  0.246 0.667
U-ML-W-All   | 0.72   0.713  0.228 0.713 | 0.713  0.656  0.273 0.656 | 0.67   0.667  0.246 0.667
U-ML-W-2     | 0.72   0.713  0.228 0.713 | 0.713  0.656  0.273 0.656 | 0.67   0.667  0.246 0.667
D-ML-1       | 0.642  0.678  0.211 0.678 | 0.69   0.656  0.321 0.656 | 0.704  0.68   0.348 0.68
D-ML-All     | 0.669  0.678  0.219 0.678 | 0.7    0.649  0.291 0.649 | 0.704  0.68   0.348 0.68
D-ML-2       | 0.694  0.698  0.224 0.698 | 0.701  0.651  0.298 0.651 | 0.704  0.68   0.348 0.68
D-ML-W-1     | 0.681  0.656  0.214 0.656 | 0.714  0.66   0.275 0.66  | 0.67   0.667  0.246 0.667
D-ML-W-All   | 0.685  0.66   0.215 0.66  | 0.713  0.656  0.273 0.656 | 0.67   0.667  0.246 0.667
D-ML-W-2     | 0.704  0.702  0.225 0.702 | 0.713  0.656  0.273 0.656 | 0.67   0.667  0.246 0.667
Method       | Random Forest             | Random Subspace           | AdaBoost
             | Prec.  Recall BAcc  Acc   | Prec.  Recall BAcc  Acc   | Prec.  Recall BAcc  Acc
AL           | 0.678  0.664  0.303 0.664 | 0.691  0.678  0.325 0.678 | 0.743  0.56   0.255 0.56
ML           | 0.803  0.767  0.35  0.767 | 0.756  0.729  0.348 0.729 | 0.714  0.689  0.227 0.689
AL-W         | 0.764  0.729  0.33  0.729 | 0.71   0.684  0.33  0.684 | 0.788  0.622  0.302 0.622
ML-W         | 0.777  0.74   0.301 0.74  | 0.732  0.724  0.34  0.724 | 0.758  0.689  0.23  0.689
U-AL-1       | 0.667  0.656  0.298 0.656 | 0.706  0.673  0.332 0.673 | 0.698  0.558  0.264 0.558
U-AL-All     | 0.695  0.64   0.278 0.64  | 0.714  0.669  0.341 0.669 | 0.701  0.547  0.266 0.547
U-AL-2       | 0.678  0.664  0.303 0.664 | 0.699  0.682  0.327 0.682 | 0.743  0.56   0.255 0.56
U-AL-W-1     | 0.76   0.727  0.329 0.727 | 0.72   0.702  0.329 0.702 | 0.788  0.622  0.302 0.622
U-AL-W-All   | 0.76   0.727  0.329 0.727 | 0.72   0.702  0.329 0.702 | 0.788  0.622  0.302 0.622
U-AL-W-2     | 0.76   0.727  0.329 0.727 | 0.72   0.702  0.329 0.702 | 0.788  0.622  0.302 0.622
D-AL-1       | 0.754  0.72   0.331 0.72  | 0.706  0.693  0.363 0.693 | 0.717  0.547  0.249 0.547
D-AL-All     | 0.74   0.7    0.312 0.7   | 0.738  0.72   0.373 0.72  | 0.701  0.547  0.266 0.547
D-AL-2       | 0.759  0.724  0.332 0.724 | 0.75   0.733  0.368 0.733 | 0.717  0.547  0.249 0.547
D-AL-W-1     | 0.785  0.749  0.342 0.749 | 0.71   0.696  0.358 0.696 | 0.788  0.622  0.302 0.622
D-AL-W-All   | 0.79   0.751  0.343 0.751 | 0.768  0.744  0.358 0.744 | 0.788  0.622  0.302 0.622
D-AL-W-2     | 0.79   0.751  0.343 0.751 | 0.768  0.744  0.358 0.744 | 0.788  0.622  0.302 0.622
U-ML-1       | 0.803  0.767  0.35  0.767 | 0.753  0.729  0.345 0.729 | 0.714  0.689  0.227 0.689
U-ML-All     | 0.803  0.767  0.35  0.767 | 0.755  0.731  0.345 0.731 | 0.714  0.689  0.227 0.689
U-ML-2       | 0.803  0.767  0.35  0.767 | 0.753  0.729  0.344 0.729 | 0.714  0.689  0.227 0.689
U-ML-W-1     | 0.777  0.74   0.301 0.74  | 0.732  0.724  0.34  0.724 | 0.758  0.689  0.23  0.689
U-ML-W-All   | 0.777  0.74   0.301 0.74  | 0.732  0.724  0.34  0.724 | 0.758  0.689  0.23  0.689
U-ML-W-2     | 0.777  0.74   0.301 0.74  | 0.732  0.724  0.34  0.724 | 0.758  0.689  0.23  0.689
D-ML-1       | 0.803  0.767  0.35  0.767 | 0.723  0.724  0.343 0.724 | 0.714  0.689  0.227 0.689
D-ML-All     | 0.803  0.767  0.35  0.767 | 0.726  0.729  0.344 0.729 | 0.714  0.689  0.227 0.689
D-ML-2       | 0.803  0.767  0.35  0.767 | 0.741  0.729  0.352 0.729 | 0.714  0.689  0.227 0.689
D-ML-W-1     | 0.778  0.738  0.3   0.738 | 0.725  0.724  0.343 0.724 | 0.758  0.689  0.23  0.689
D-ML-W-All   | 0.777  0.74   0.301 0.74  | 0.726  0.729  0.344 0.729 | 0.758  0.689  0.23  0.689
D-ML-W-2     | 0.777  0.74   0.301 0.74  | 0.736  0.727  0.351 0.727 | 0.758  0.689  0.23  0.689
Table A2. Car dataset, 3 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors (Prec. / Recall / BAcc / Acc) | Decision Tree (Prec. / Recall / BAcc / Acc) | Gradient Boosting (Prec. / Recall / BAcc / Acc)
AL0.7780.7840.4660.7840.9390.9370.8640.9370.9630.9620.8360.962
ML0.8110.8120.4520.8120.9440.9420.8610.9420.9670.9650.8740.965
AL-W0.7840.7850.4590.7850.9360.9350.8590.9350.9620.9610.8310.961
ML-W0.7980.8020.460.8020.9360.9350.8590.9350.9660.9640.8690.964
U-AL-10.7790.7860.480.7860.9380.9370.8620.9370.9620.9610.8310.961
U-AL-All0.8030.7920.4670.7920.9440.9420.8610.9420.9620.9610.8310.961
U-AL-20.7780.7870.4850.7870.9380.9370.8620.9370.9620.9620.8330.962
U-AL-W-10.7840.7880.4770.7880.9360.9350.8590.9350.9620.9610.8310.961
U-AL-W-All0.7850.7880.4770.7880.9360.9350.8590.9350.9620.9610.8310.961
U-AL-W-20.7850.7880.4770.7880.9360.9350.8590.9350.9620.9610.8310.961
D-AL-10.7520.750.4830.750.940.9380.8610.9380.9560.9560.8280.956
D-AL-All0.780.7720.440.7720.9440.9420.8610.9420.9580.9570.8180.957
D-AL-20.760.7780.4750.7780.9380.9370.860.9370.9560.9560.8190.956
D-AL-W-10.7540.7590.4890.7590.9360.9350.8590.9350.9520.9520.8120.952
D-AL-W-All0.7680.7860.4890.7860.9360.9350.8590.9350.9620.9580.8220.958
D-AL-W-20.7680.7860.4890.7860.9360.9350.8590.9350.9620.9580.8220.958
U-ML-10.7890.8030.4620.8030.9370.9360.8560.9360.9670.9650.8740.965
U-ML-All0.7890.8030.4620.8030.9370.9360.8590.9360.9670.9650.8740.965
U-ML-20.7890.8030.4620.8030.9370.9360.8560.9360.9670.9650.8740.965
U-ML-W-10.7950.8030.4670.8030.9360.9350.8590.9350.9660.9640.8690.964
U-ML-W-All0.7950.8030.4670.8030.9360.9350.8590.9350.9660.9640.8690.964
U-ML-W-20.7950.8030.4670.8030.9360.9350.8590.9350.9660.9640.8690.964
D-ML-10.7810.7980.4650.7980.9380.9370.8590.9370.960.9570.8350.957
D-ML-All0.780.7980.4640.7980.9380.9370.8590.9370.9620.9590.8370.959
D-ML-20.7840.8010.4680.8010.9370.9360.8540.9360.9620.9590.8370.959
D-ML-W-10.7840.7960.4670.7960.9360.9350.8590.9350.960.9570.8350.957
D-ML-W-All0.7830.7960.4660.7960.9360.9350.8590.9350.9620.9590.8370.959
D-ML-W-20.7870.7990.4670.7990.9360.9350.8590.9350.9620.9590.8370.959
Method | Random Forest (Prec. / Recall / BAcc / Acc) | Random Subspace (Prec. / Recall / BAcc / Acc) | AdaBoost (Prec. / Recall / BAcc / Acc)
AL0.9040.9030.7270.9030.7560.7630.3430.7630.8420.8520.6060.852
ML0.9160.9120.7360.9120.7660.7680.3330.7680.8340.8430.590.843
AL-W0.9050.9050.7390.9050.7630.7660.3610.7660.8450.8540.6090.854
ML-W0.9170.9130.7470.9130.7680.7680.3370.7680.8360.8450.5910.845
U-AL-10.9050.9040.7320.9040.7720.780.3690.780.8420.8520.6060.852
U-AL-All0.9030.9010.7140.9010.7640.7770.3490.7770.8460.8550.610.855
U-AL-20.9040.9040.7290.9040.7720.780.3660.780.8420.8520.6060.852
U-AL-W-10.9050.9050.7390.9050.7740.7810.3760.7810.8450.8540.6090.854
U-AL-W-All0.9050.9050.7390.9050.7740.7810.3760.7810.8450.8540.6090.854
U-AL-W-20.9050.9050.7390.9050.7740.7810.3760.7810.8450.8540.6090.854
D-AL-10.8920.8960.710.8960.7650.7780.3680.7780.8250.8320.5890.832
D-AL-All0.9020.9020.7010.9020.7620.7780.3620.7780.8420.840.590.84
D-AL-20.8950.90.7140.90.7650.7790.3710.7790.8390.8450.5980.845
D-AL-W-10.8990.9020.7490.9020.7730.7850.390.7850.8290.8340.5820.834
D-AL-W-All0.9040.9070.7290.9070.770.7820.3840.7820.8420.850.6030.85
D-AL-W-20.9040.9070.7290.9070.7710.7820.3840.7820.8420.850.6030.85
U-ML-10.9140.9110.7360.9110.7660.7640.3310.7640.8340.8430.590.843
U-ML-All0.9150.9120.7430.9120.7620.7630.3290.7630.8340.8430.590.843
U-ML-20.9150.9120.7390.9120.7660.7640.3310.7640.8340.8430.590.843
U-ML-W-10.9160.9130.7470.9130.7680.7680.3370.7680.8360.8450.5910.845
U-ML-W-All0.9160.9130.7470.9130.7680.7680.3370.7680.8360.8450.5910.845
U-ML-W-20.9180.9140.7520.9140.7680.7680.3370.7680.8360.8450.5910.845
D-ML-10.9140.9120.7440.9120.760.7650.3350.7650.8370.8410.5870.841
D-ML-All0.9170.9150.7520.9150.7580.7620.3280.7620.8340.8390.5720.839
D-ML-20.9180.9150.7530.9150.7620.7620.330.7620.8340.8390.5720.839
D-ML-W-10.9140.9120.7540.9120.7640.7690.3420.7690.8370.8410.5760.841
D-ML-W-All0.9180.9150.760.9150.7660.7680.3390.7680.8350.840.5740.84
D-ML-W-20.9180.9160.7620.9160.7670.7670.3380.7670.8350.840.5740.84
Table A3. Car dataset, 5 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors (Prec. / Recall / BAcc / Acc) | Decision Tree (Prec. / Recall / BAcc / Acc) | Gradient Boosting (Prec. / Recall / BAcc / Acc)
AL0.7650.7950.4280.7950.9240.9230.8040.9230.9330.9250.7580.925
ML0.7380.770.3930.770.9320.9270.7960.9270.9370.9290.750.929
AL-W0.7720.7950.4240.7950.9310.9290.8410.9290.9370.9280.7650.928
ML-W0.7510.7790.4070.7790.9310.9290.8430.9290.9370.9290.7510.929
U-AL-10.7660.7950.4220.7950.9260.9240.810.9240.9330.9260.7590.926
U-AL-All0.7760.7930.4010.7930.9320.9270.7960.9270.9390.930.7650.93
U-AL-20.7660.7920.4240.7920.9240.9230.8040.9230.9330.9250.7580.925
U-AL-W-10.770.7960.4250.7960.9310.9290.8430.9290.9370.9280.7650.928
U-AL-W-All0.7720.7940.4240.7940.930.9280.8430.9280.9370.9280.7650.928
U-AL-W-20.7770.7950.4420.7950.9310.9290.8430.9290.9370.9280.7650.928
D-AL-10.70.7030.420.7030.9190.9190.7840.9190.8910.8880.7090.888
D-AL-All0.7640.7410.4090.7410.9320.9270.7960.9270.8950.9010.6440.901
D-AL-20.7360.7150.410.7150.9250.9230.7990.9230.9010.8980.7290.898
D-AL-W-10.7080.7020.4360.7020.9310.9290.8430.9290.9060.9040.7580.904
D-AL-W-All0.7470.7550.4230.7550.9310.9290.8430.9290.9020.9040.6990.904
D-AL-W-20.7370.7070.4090.7070.9310.9290.8430.9290.9110.9090.7650.909
U-ML-10.7360.7680.3870.7680.9260.9240.8140.9240.9370.9290.750.929
U-ML-All0.7360.770.4020.770.9260.9240.8140.9240.9370.9290.7510.929
U-ML-20.7390.7720.4140.7720.9260.9240.8140.9240.9370.9290.7510.929
U-ML-W-10.7540.780.4150.780.9310.9290.8430.9290.9370.9290.7510.929
U-ML-W-All0.7520.780.4150.780.930.9280.8430.9280.9370.9290.7510.929
U-ML-W-20.7550.7830.4350.7830.9310.9290.8430.9290.9370.9290.7510.929
D-ML-10.7490.7620.4740.7620.9240.9230.8020.9230.8940.9050.690.905
D-ML-All0.7430.760.4560.760.9250.9230.8070.9230.8960.9090.6860.909
D-ML-20.750.7650.4730.7650.9250.9230.8050.9230.8950.9050.6920.905
D-ML-W-10.7420.7610.4760.7610.9310.9290.8430.9290.8930.9030.6880.903
D-ML-W-All0.740.7620.4520.7620.9310.9290.8430.9290.8970.9090.6860.909
D-ML-W-20.7270.7560.4540.7560.9310.9290.8430.9290.8950.9050.6930.905
Method | Random Forest (Prec. / Recall / BAcc / Acc) | Random Subspace (Prec. / Recall / BAcc / Acc) | AdaBoost (Prec. / Recall / BAcc / Acc)
AL0.8860.8830.6390.8830.7410.7440.3110.7440.8360.830.4460.83
ML0.8970.8930.6370.8930.7240.7450.3010.7450.8430.8460.4560.846
AL-W0.8870.8840.6410.8840.7370.7460.3190.7460.8460.8370.4520.837
ML-W0.8860.8830.6350.8830.7340.7490.3050.7490.8410.8440.4540.844
U-AL-10.8860.8830.6420.8830.7540.7630.3280.7630.8370.8330.4480.833
U-AL-All0.8870.8830.6260.8830.7550.7640.330.7640.8540.8410.4590.841
U-AL-20.8840.880.6420.880.7550.7630.3270.7630.8360.830.4460.83
U-AL-W-10.8840.8810.6420.8810.7530.7630.3290.7630.8460.8370.4520.837
U-AL-W-All0.8870.8840.6410.8840.7530.7630.3290.7630.8460.8370.4520.837
U-AL-W-20.8870.8840.6410.8840.7550.7630.3290.7630.8460.8370.4520.837
D-AL-10.8440.8550.6070.8550.7180.7540.3560.7540.7910.7850.510.785
D-AL-All0.8540.8730.5680.8730.7410.7670.3430.7670.8220.8230.4260.823
D-AL-20.8470.860.610.860.7260.7590.3510.7590.8050.7950.4520.795
D-AL-W-10.8530.860.6590.860.7410.770.3770.770.8230.8260.5560.826
D-AL-W-All0.8540.870.6140.870.7480.770.3580.770.8190.8220.4530.822
D-AL-W-20.8580.8650.6530.8650.7460.7750.3680.7750.8070.810.4820.81
U-ML-10.8860.8830.6390.8830.7250.7440.30.7440.8430.8460.4560.846
U-ML-All0.8970.8930.6390.8930.7270.7450.30.7450.8430.8460.4560.846
U-ML-20.8970.8930.640.8930.7250.7440.30.7440.8430.8460.4560.846
U-ML-W-10.8870.8830.6380.8830.7340.7490.3050.7490.8410.8440.4540.844
U-ML-W-All0.8970.8930.6370.8930.7330.7490.3050.7490.8410.8440.4540.844
U-ML-W-20.8970.8930.6370.8930.7330.7490.3050.7490.8410.8440.4540.844
D-ML-10.8610.8630.6230.8630.7180.7450.3030.7450.8470.850.4770.85
D-ML-All0.8680.8690.6240.8690.7140.7440.3010.7440.850.8520.4790.852
D-ML-20.8630.8650.620.8650.7260.7380.3020.7380.850.8520.4790.852
D-ML-W-10.8640.8670.6260.8670.7230.7490.3090.7490.8450.8480.4750.848
D-ML-W-All0.8690.870.6230.870.7250.7480.3060.7480.8480.850.4770.85
D-ML-W-20.8670.8690.6280.8690.7220.7480.3060.7480.8480.8510.4780.851
Table A4. Car dataset, 7 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors (Prec. / Recall / BAcc / Acc) | Decision Tree (Prec. / Recall / BAcc / Acc) | Gradient Boosting (Prec. / Recall / BAcc / Acc)
AL0.7230.7460.3410.7460.9020.9020.7370.9020.9550.9490.7660.949
ML0.6950.7370.3170.7370.9010.90.7140.90.9460.9360.7080.936
AL-W0.7360.7550.350.7550.9010.9020.7480.9020.9550.9490.7710.949
ML-W0.6910.7360.3220.7360.9010.9020.7480.9020.9460.9360.7080.936
U-AL-10.7260.7480.3480.7480.9010.9020.7390.9020.9540.9480.7650.948
U-AL-All0.7310.750.3480.750.9010.90.7140.90.9550.9480.7550.948
U-AL-20.7090.740.3260.740.9020.9030.7380.9030.9550.9490.7660.949
U-AL-W-10.7330.7540.3470.7540.9010.9020.7480.9020.9550.9490.7710.949
U-AL-W-All0.7260.7490.340.7490.9010.9020.7480.9020.9550.9490.7710.949
U-AL-W-20.7180.7470.3320.7470.9010.9020.7480.9020.9550.9490.7710.949
D-AL-10.7140.6590.3880.6590.90.9010.7390.9010.860.8540.6440.854
D-AL-All0.6930.6950.3680.6950.9010.90.7140.90.8690.8850.5040.885
D-AL-20.7220.6680.4120.6680.90.9010.730.9010.880.8720.6690.872
D-AL-W-10.6940.6850.4130.6850.9010.9020.7480.9020.8750.8590.6680.859
D-AL-W-All0.7220.7030.4010.7030.9010.9020.7480.9020.8740.8760.5920.876
D-AL-W-20.7170.6550.4150.6550.9010.9020.7480.9020.8870.8680.6790.868
U-ML-10.6930.7370.3240.7370.90.90.7270.90.9470.9360.7080.936
U-ML-All0.6880.7350.3190.7350.90.90.7270.90.9460.9360.7080.936
U-ML-20.6790.7310.3150.7310.90.90.7270.90.9460.9360.7080.936
U-ML-W-10.7010.7410.3280.7410.9010.9020.7480.9020.9470.9360.7080.936
U-ML-W-All0.690.7370.3210.7370.9010.9020.7480.9020.9460.9360.7080.936
U-ML-W-20.690.7370.3210.7370.9010.9020.7480.9020.9460.9360.7080.936
D-ML-10.7120.7370.390.7370.9020.9020.740.9020.8380.8470.4940.847
D-ML-All0.710.7370.3760.7370.9010.9020.740.9020.8620.8760.5110.876
D-ML-20.7130.7380.3970.7380.9020.9020.7350.9020.8470.8560.5040.856
D-ML-W-10.7080.740.4030.740.9010.9020.7480.9020.8450.8530.5010.853
D-ML-W-All0.7150.740.380.740.9010.9020.7480.9020.8660.8780.5160.878
D-ML-W-20.7180.7420.4240.7420.9010.9020.7480.9020.8530.8620.5110.862
Method | Random Forest (Prec. / Recall / BAcc / Acc) | Random Subspace (Prec. / Recall / BAcc / Acc) | AdaBoost (Prec. / Recall / BAcc / Acc)
AL0.8690.8670.5790.8670.7420.7570.3160.7570.8370.8410.4640.841
ML0.8680.8670.5560.8670.7420.7490.3060.7490.8460.8440.4590.844
AL-W0.870.8670.5790.8670.7440.7580.3170.7580.8440.8430.4640.843
ML-W0.870.8680.560.8680.7410.750.3070.750.8460.8440.4590.844
U-AL-10.8690.8660.5760.8660.7660.7770.3410.7770.8360.840.4610.84
U-AL-All0.870.8660.570.8660.7650.7770.3410.7770.8430.8420.4560.842
U-AL-20.8670.8660.570.8660.7620.7750.3390.7750.8370.8410.4640.841
U-AL-W-10.8710.8670.5810.8670.7670.7780.3440.7780.8440.8430.4640.843
U-AL-W-All0.870.8660.5740.8660.7650.7770.3410.7770.8440.8430.4640.843
U-AL-W-20.8680.8660.5740.8660.7650.7770.3410.7770.8440.8430.4640.843
D-AL-10.810.8220.590.8220.7450.7720.4070.7720.7830.7660.5220.766
D-AL-All0.8320.8450.4820.8450.7510.7810.3850.7810.8250.8210.4540.821
D-AL-20.8110.8250.5710.8250.7370.7720.3960.7720.7970.7980.4870.798
D-AL-W-10.8420.8470.6770.8470.7760.7960.4520.7960.80.7880.590.788
D-AL-W-All0.8330.8440.5430.8440.7520.7820.3960.7820.8060.7990.4750.799
D-AL-W-20.8470.8520.6780.8520.7820.8020.4390.8020.8010.7930.5520.793
U-ML-10.870.8680.5570.8680.7390.7480.3050.7480.8460.8440.4590.844
U-ML-All0.8770.8710.560.8710.7430.7480.3050.7480.8460.8440.4590.844
U-ML-20.8760.870.5560.870.7380.7480.3040.7480.8460.8440.4590.844
U-ML-W-10.8710.8690.560.8690.7390.750.3070.750.8460.8440.4590.844
U-ML-W-All0.8780.8720.5640.8720.7410.750.3060.750.8460.8440.4590.844
U-ML-W-20.8760.870.5550.870.7390.750.3060.750.8460.8440.4590.844
D-ML-10.8580.8590.5240.8590.7320.750.3090.750.8230.8380.4550.838
D-ML-All0.8530.8550.5060.8550.7170.7420.3010.7420.8160.830.4570.83
D-ML-20.8580.8590.5240.8590.7250.7480.3070.7480.8230.8380.4550.838
D-ML-W-10.8590.860.5270.860.7290.7510.310.7510.8270.8390.4560.839
D-ML-W-All0.8410.850.5150.850.7150.7450.3030.7450.8160.830.4570.83
D-ML-W-20.8590.8610.530.8610.7240.750.3090.750.8270.8390.4560.839
Table A5. Car dataset, 9 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors (Prec. / Recall / BAcc / Acc) | Decision Tree (Prec. / Recall / BAcc / Acc) | Gradient Boosting (Prec. / Recall / BAcc / Acc)
AL0.7180.7420.3310.7420.8870.890.7270.890.9330.9170.670.917
ML0.670.7230.2890.7230.8940.8930.720.8930.9350.9190.6540.919
AL-W0.7260.7480.3350.7480.8920.8930.7470.8930.9320.9190.6760.919
ML-W0.6590.720.2920.720.8920.8930.7470.8930.9350.9190.6540.919
U-AL-10.7190.7480.3270.7480.8880.8910.7260.8910.9320.9170.6730.917
U-AL-All0.7220.7480.3260.7480.8940.8930.720.8930.9350.9170.6750.917
U-AL-20.7190.7450.3280.7450.8870.890.7320.890.9330.9170.6750.917
U-AL-W-10.720.7510.3330.7510.8920.8930.7470.8930.9320.9190.6760.919
U-AL-W-All0.7230.7480.3310.7480.8920.8930.7470.8930.9320.9190.6760.919
U-AL-W-20.7250.750.3320.750.8920.8930.7470.8930.9320.9190.6760.919
D-AL-10.7170.6250.3810.6250.8860.8890.7260.8890.8170.80.5840.8
D-AL-All0.7560.6570.3760.6570.8940.8930.720.8930.8360.840.460.84
D-AL-20.730.6320.3950.6320.8880.890.7230.890.8350.8180.5860.818
D-AL-W-10.7280.6530.3960.6530.8920.8930.7470.8930.8690.850.7020.85
D-AL-W-All0.7160.6830.3780.6830.8920.8930.7470.8930.8430.8440.5330.844
D-AL-W-20.7250.6540.3920.6540.8920.8930.7470.8930.8770.8590.7080.859
U-ML-10.670.7230.2880.7230.8890.8910.7330.8910.9350.9190.6540.919
U-ML-All0.6690.7220.2920.7220.8890.8910.7330.8910.9350.9190.6540.919
U-ML-20.6710.7230.2890.7230.8890.8910.7330.8910.9350.9190.6540.919
U-ML-W-10.6630.7220.2920.7220.8920.8930.7470.8930.9350.9190.6540.919
U-ML-W-All0.6660.7220.2930.7220.8920.8930.7470.8930.9350.9190.6540.919
U-ML-W-20.6660.7220.2930.7220.8920.8930.7470.8930.9350.9190.6540.919
D-ML-10.630.6690.3250.6690.8890.8910.7330.8910.8390.8550.5210.855
D-ML-All0.6610.6880.3220.6880.8890.8910.7330.8910.8510.8580.5230.858
D-ML-20.6260.6680.3190.6680.8870.8890.7280.8890.8440.8590.5260.859
D-ML-W-10.6310.6720.3320.6720.8920.8930.7470.8930.8450.8580.5290.858
D-ML-W-All0.6550.6880.3240.6880.8920.8930.7470.8930.8480.8590.520.859
D-ML-W-20.6290.6720.3250.6720.8920.8930.7470.8930.850.8620.5330.862
Method | Random Forest (Prec. / Recall / BAcc / Acc) | Random Subspace (Prec. / Recall / BAcc / Acc) | AdaBoost (Prec. / Recall / BAcc / Acc)
AL0.8620.8590.550.8590.7470.760.3180.760.8740.8280.4510.828
ML0.8550.8560.5450.8560.7490.7540.3110.7540.830.8210.4460.821
AL-W0.8590.8580.5530.8580.7460.7620.3220.7620.8410.8380.4520.838
ML-W0.8560.8570.5430.8570.7530.7580.3160.7580.8290.8210.4440.821
U-AL-10.8620.8590.5470.8590.7510.7670.3290.7670.8740.8280.4520.828
U-AL-All0.8560.8550.5410.8550.7520.7690.3310.7690.8550.8370.4550.837
U-AL-20.8610.8580.5450.8580.7540.7690.3290.7690.8730.8270.4510.827
U-AL-W-10.8590.8570.550.8570.7530.7720.3310.7720.8410.8380.4520.838
U-AL-W-All0.8570.8550.5470.8550.7520.770.330.770.8410.8380.4520.838
U-AL-W-20.8570.8560.5470.8560.7520.770.330.770.8410.8380.4520.838
D-AL-10.7930.8070.5440.8070.7330.7580.3910.7580.7710.7360.50.736
D-AL-All0.8090.8260.4770.8260.7410.7730.3720.7730.8270.8080.4410.808
D-AL-20.7970.8130.5480.8130.7420.7680.4030.7680.7760.7490.4890.749
D-AL-W-10.8240.8350.6240.8350.7840.7890.4690.7890.7720.7680.5260.768
D-AL-W-All0.8180.8320.5640.8320.7460.7740.3910.7740.8090.80.4570.8
D-AL-W-20.8330.8440.6350.8440.7830.7960.4560.7960.7790.7780.530.778
U-ML-10.8540.8550.5350.8550.7480.7540.3110.7540.830.8210.4460.821
U-ML-All0.8550.8550.5340.8550.7470.7540.3110.7540.830.8210.4460.821
U-ML-20.8540.8550.5370.8550.7480.7540.3110.7540.830.8210.4460.821
U-ML-W-10.8560.8560.5380.8560.7520.7580.3160.7580.8290.8210.4440.821
U-ML-W-All0.8560.8570.5410.8570.7520.7580.3160.7580.8290.8210.4440.821
U-ML-W-20.8560.8560.5380.8560.7520.7580.3160.7580.8290.8210.4440.821
D-ML-10.8290.8310.5040.8310.7280.7490.3120.7490.8240.8180.450.818
D-ML-All0.8270.8290.490.8290.7220.7460.3060.7460.7990.8030.440.803
D-ML-20.830.8320.5020.8320.7260.7480.3120.7480.8230.8160.440.816
D-ML-W-10.8310.8330.5050.8330.7290.7510.3180.7510.8190.8070.4460.807
D-ML-W-All0.8260.830.4910.830.7330.7510.3110.7510.7990.8030.440.803
D-ML-W-20.8310.8330.5040.8330.7270.750.3180.750.8210.8130.4390.813
Table A6. Car dataset, 11 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors (Prec. / Recall / BAcc / Acc) | Decision Tree (Prec. / Recall / BAcc / Acc) | Gradient Boosting (Prec. / Recall / BAcc / Acc)
AL0.7160.7460.320.7460.8780.8730.6380.8730.9170.9070.6480.907
ML0.6560.7190.2730.7190.8780.870.6170.870.9180.9080.6350.908
AL-W0.7170.7470.3220.7470.880.8770.6510.8770.9150.9060.6410.906
ML-W0.650.7180.2710.7180.8720.8710.6310.8710.9180.9080.6350.908
U-AL-10.720.7480.3240.7480.8760.870.630.870.9160.9070.6440.907
U-AL-All0.6970.7280.3090.7280.8780.870.6170.870.9160.9070.6510.907
U-AL-20.7010.7290.3140.7290.8780.8720.6360.8720.9160.9060.650.906
U-AL-W-10.7150.7460.3230.7460.8720.8710.6310.8710.9150.910.6460.91
U-AL-W-All0.6970.7280.3090.7280.8720.870.6310.870.9150.9080.6520.908
U-AL-W-20.7050.730.3230.730.8720.8710.6310.8710.9130.9060.6490.906
D-AL-10.6880.660.3770.660.8710.8710.6360.8710.7920.7570.570.757
D-AL-All0.7660.6420.3720.6420.8780.870.6170.870.8020.8020.4250.802
D-AL-20.7380.6250.3750.6250.8820.8730.6340.8730.7960.7620.570.762
D-AL-W-10.7540.640.3960.640.8720.870.6310.870.8040.7620.5880.762
D-AL-W-All0.7390.670.3790.670.8720.870.6310.870.8090.790.5210.79
D-AL-W-20.7540.640.3960.640.8720.870.6310.870.8060.7660.5940.766
U-ML-10.6640.7210.2760.7210.8780.8720.6360.8720.9180.9090.6370.909
U-ML-All0.6520.7170.2710.7170.8780.8720.6360.8720.9170.9090.640.909
U-ML-20.6550.7170.2720.7170.8780.8720.6360.8720.9180.9090.6390.909
U-ML-W-10.6590.7210.2740.7210.8720.8710.6310.8710.9180.9090.6370.909
U-ML-W-All0.6520.7180.2710.7180.8720.870.6310.870.9170.9090.640.909
U-ML-W-20.6530.7170.2710.7170.8720.8710.6310.8710.9180.9090.640.909
D-ML-10.6070.6710.2920.6710.8740.8720.6370.8720.8370.8430.5070.843
D-ML-All0.6550.6920.2990.6920.8780.8720.6360.8720.8410.8520.5120.852
D-ML-20.6230.6710.3160.6710.8710.8720.6340.8720.8360.8420.5080.842
D-ML-W-10.630.680.3120.680.8720.870.6310.870.8390.8440.5070.844
D-ML-W-All0.660.6940.3150.6940.8720.870.6310.870.840.850.5110.85
D-ML-W-20.6350.6770.3240.6770.8720.870.6310.870.8370.8430.5070.843
Method | Random Forest (Prec. / Recall / BAcc / Acc) | Random Subspace (Prec. / Recall / BAcc / Acc) | AdaBoost (Prec. / Recall / BAcc / Acc)
AL0.8390.8310.4620.8310.7380.7620.3230.7620.8440.8450.4550.845
ML0.8380.8380.4520.8380.7270.7560.3140.7560.8640.8280.4550.828
AL-W0.8430.8350.470.8350.7380.7640.3220.7640.8420.8450.4530.845
ML-W0.8370.8370.4510.8370.7260.7550.3120.7550.8630.8270.4540.827
U-AL-10.830.8270.4590.8270.7690.7860.3530.7860.8360.8440.4610.844
U-AL-All0.8440.8350.460.8350.7690.7860.3510.7860.850.8480.460.848
U-AL-20.8420.8340.4750.8340.7670.7840.3490.7840.8430.8440.4540.844
U-AL-W-10.8360.830.4640.830.7620.7820.3470.7820.8430.8480.470.848
U-AL-W-All0.8410.8330.4630.8330.7660.7830.3480.7830.8470.850.4570.85
U-AL-W-20.8430.8350.4730.8350.7650.7830.3480.7830.8470.850.4570.85
D-AL-10.7660.7850.5080.7850.7630.7860.4270.7860.7180.6750.4560.675
D-AL-All0.8030.8230.4960.8230.7460.7840.3880.7840.8160.7930.4420.793
D-AL-20.7760.790.5220.790.7650.7870.4250.7870.720.690.4610.69
D-AL-W-10.7810.7920.5430.7920.7770.7920.4270.7920.6820.6850.4950.685
D-AL-W-All0.7830.80.5260.80.7450.780.3950.780.8070.7870.5460.787
D-AL-W-20.7860.7950.5430.7950.7790.7920.4230.7920.6890.6920.4990.692
U-ML-10.8380.840.4620.840.7250.7550.3120.7550.8640.8280.4550.828
U-ML-All0.8380.8370.4510.8370.7270.7570.3140.7570.8640.8280.4550.828
U-ML-20.8370.8380.4610.8380.7250.7550.3120.7550.8640.8280.4550.828
U-ML-W-10.840.840.4640.840.7260.7550.3120.7550.8630.8270.4540.827
U-ML-W-All0.8420.840.4510.840.7270.7550.3130.7550.8630.8270.4540.827
U-ML-W-20.8390.8390.4650.8390.7270.7550.3130.7550.8630.8270.4540.827
D-ML-10.7950.8130.4930.8130.7250.7530.3220.7530.810.7940.4710.794
D-ML-All0.8010.8140.4770.8140.7260.7530.3130.7530.8220.8050.4810.805
D-ML-20.7960.8140.4910.8140.7270.7530.3220.7530.810.7940.4710.794
D-ML-W-10.7930.8150.4890.8150.7270.7540.3230.7540.8060.7910.4680.791
D-ML-W-All0.80.8140.480.8140.7270.7530.3140.7530.8220.8050.4810.805
D-ML-W-20.7910.8130.4830.8130.7260.7530.3220.7530.8060.7910.4680.791
Table A7. Soybean dataset, 3 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors (Prec. / Recall / BAcc / Acc) | Decision Tree (Prec. / Recall / BAcc / Acc) | Gradient Boosting (Prec. / Recall / BAcc / Acc)
AL0.5350.2970.2550.2970.5870.3190.3160.3190.5770.3140.280.314
ML0.6740.3480.2810.3480.6470.510.4970.510.6230.3240.2450.324
AL-W0.6070.5910.5480.5910.680.6660.7090.6660.6910.6850.6920.685
ML-W0.5950.5930.5550.5930.680.6660.7090.6660.6940.6870.6930.687
U-AL-10.5440.2970.2540.2970.5810.3220.3190.3220.5890.310.2730.31
U-AL-All0.4490.2770.1660.2770.420.2690.2230.2690.6540.270.1850.27
U-AL-20.5440.2970.2540.2970.5810.3220.3190.3220.5890.310.2730.31
U-AL-W-10.6070.5910.5480.5910.680.6660.7090.6660.6910.6850.6920.685
U-AL-W-All0.6070.5910.5480.5910.680.6660.7090.6660.6910.6850.6920.685
U-AL-W-20.6070.5910.5480.5910.680.6660.7090.6660.6910.6850.6920.685
D-AL-10.5310.3310.2890.3310.1460.1020.0630.1020.5860.3070.2620.307
D-AL-All0.4550.320.2450.320.1780.1470.0760.1470.6540.270.1850.27
D-AL-20.5350.3510.3020.3510.1990.1450.0740.1450.5980.3050.2690.305
D-AL-W-10.6070.5910.5480.5910.680.6660.7090.6660.6910.6850.6920.685
D-AL-W-All0.6070.5910.5480.5910.680.6660.7090.6660.6910.6850.6920.685
D-AL-W-20.6070.5910.5480.5910.680.6660.7090.6660.6910.6850.6920.685
U-ML-10.6210.2520.2120.2520.6750.4770.4390.4770.6230.3240.2450.324
U-ML-All0.6210.2520.2120.2520.6750.4770.4390.4770.6230.3240.2450.324
U-ML-20.6210.2520.2120.2520.6750.4770.4390.4770.6230.3240.2450.324
U-ML-W-10.5840.5810.5420.5810.680.6660.7090.6660.6940.6870.6930.687
U-ML-W-All0.5840.5810.5420.5810.680.6660.7090.6660.6940.6870.6930.687
U-ML-W-20.5840.5810.5420.5810.680.6660.7090.6660.6940.6870.6930.687
D-ML-10.5520.2910.2750.2910.6680.3850.4030.3850.6230.3240.2450.324
D-ML-All0.5710.3340.30.3340.6680.3840.4040.3840.6230.3240.2450.324
D-ML-20.5750.3310.2980.3310.6750.4770.4390.4770.6230.3240.2450.324
D-ML-W-10.5830.580.540.580.6690.5930.6810.5930.6940.6870.6930.687
D-ML-W-All0.5830.580.540.580.680.6660.7090.6660.6940.6870.6930.687
D-ML-W-20.5830.580.540.580.680.6660.7090.6660.6940.6870.6930.687
Method | Random Forest (Prec. / Recall / BAcc / Acc) | Random Subspace (Prec. / Recall / BAcc / Acc) | AdaBoost (Prec. / Recall / BAcc / Acc)
AL0.6380.3450.3030.3450.6340.3950.3190.3950.2410.2060.120.206
ML0.7560.6060.5810.6060.6760.4450.3580.4450.1890.2240.1480.224
AL-W0.730.7190.730.7190.7130.7130.7250.7130.3460.3760.2280.376
ML-W0.7340.7260.7360.7260.710.7080.7040.7080.2070.270.1650.27
U-AL-10.6380.3450.3030.3450.6080.3980.3210.3980.2620.2150.1260.215
U-AL-All0.6910.280.20.280.5930.3430.2130.3430.160.2130.1390.213
U-AL-20.6380.3450.3030.3450.6140.3920.3210.3920.2620.2150.1260.215
U-AL-W-10.730.7190.730.7190.7170.7170.7220.7170.3460.3760.2280.376
U-AL-W-All0.730.7190.730.7190.7170.7170.7220.7170.3460.3760.2280.376
U-AL-W-20.730.7190.730.7190.7170.7170.7220.7170.3460.3760.2280.376
D-AL-10.670.3920.350.3920.6090.4120.3480.4120.3040.2210.1280.221
D-AL-All0.7340.3410.2250.3410.5660.360.2450.360.160.2130.1390.213
D-AL-20.6330.3580.3240.3580.6230.4120.3420.4120.2970.2150.1240.215
D-AL-W-10.730.7190.730.7190.7170.7170.7220.7170.3460.3760.2280.376
D-AL-W-All0.730.7190.730.7190.7170.7170.7220.7170.3460.3760.2280.376
D-AL-W-20.730.7190.730.7190.7170.7170.7220.7170.3460.3760.2280.376
U-ML-10.7560.6060.5810.6060.6780.4470.3640.4470.1890.2240.1480.224
U-ML-All0.7560.6060.5810.6060.6820.4480.3660.4480.1890.2240.1480.224
U-ML-20.7560.6060.5810.6060.6780.4470.3640.4470.1890.2240.1480.224
U-ML-W-10.7340.7260.7360.7260.710.7080.7040.7080.2070.270.1650.27
U-ML-W-All0.7340.7260.7360.7260.710.7080.7040.7080.2070.270.1650.27
U-ML-W-20.7340.7260.7360.7260.710.7080.7040.7080.2070.270.1650.27
D-ML-10.7560.6060.5810.6060.6790.4480.3670.4480.1890.2240.1480.224
D-ML-All0.7560.6060.5810.6060.6780.4470.3640.4470.1890.2240.1480.224
D-ML-20.7560.6060.5810.6060.6780.4470.3640.4470.1890.2240.1480.224
D-ML-W-10.7340.7260.7360.7260.710.7080.7040.7080.2070.270.1650.27
D-ML-W-All0.7340.7260.7360.7260.710.7080.7040.7080.2070.270.1650.27
D-ML-W-20.7340.7260.7360.7260.710.7080.7040.7080.2070.270.1650.27
Table A8. Soybean dataset, 5 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors (Prec. / Recall / BAcc / Acc) | Decision Tree (Prec. / Recall / BAcc / Acc) | Gradient Boosting (Prec. / Recall / BAcc / Acc)
AL0.4730.3310.2520.3310.5150.2350.1060.2350.4650.1990.0840.199
ML0.610.3660.3260.3660.6070.5470.5430.5470.6830.5140.4610.514
AL-W0.6070.5910.5480.5910.690.6790.7290.6790.6910.6850.6920.685
ML-W0.5940.580.5490.580.690.6790.7290.6790.6930.6860.6920.686
U-AL-10.5320.3480.2480.3480.5050.2250.10.2250.470.20.0860.2
U-AL-All0.5580.4090.2460.4090.3530.1890.080.1890.380.2010.0840.201
U-AL-20.5320.3480.2480.3480.5050.2250.10.2250.470.20.0860.2
U-AL-W-10.6070.5910.5480.5910.690.6790.7290.6790.6910.6850.6920.685
U-AL-W-All0.6070.5910.5480.5910.690.6790.7290.6790.6910.6850.6920.685
U-AL-W-20.6070.5910.5480.5910.690.6790.7290.6790.6910.6850.6920.685
D-AL-10.5360.4420.3160.4420.6450.5260.5620.5260.4970.2160.0940.216
D-AL-All0.5480.4510.2970.4510.6940.5070.4380.5070.380.2010.0840.201
D-AL-20.5360.4420.3160.4420.6450.5260.5620.5260.4970.2160.0940.216
D-AL-W-10.60.5780.5270.5780.690.6790.7290.6790.6910.6850.6920.685
D-AL-W-All0.60.5780.5270.5780.690.6790.7290.6790.6910.6850.6920.685
D-AL-W-20.60.5780.5270.5780.690.6790.7290.6790.6910.6850.6920.685
U-ML-10.5610.3560.3260.3560.6110.5470.5380.5470.6830.5140.4610.514
U-ML-All0.5610.3560.3260.3560.6110.5470.5380.5470.6830.5140.4610.514
U-ML-20.5610.3560.3260.3560.6110.5470.5380.5470.6830.5140.4610.514
U-ML-W-10.5790.5810.5260.5810.690.6790.7290.6790.6930.6860.6920.686
U-ML-W-All0.5790.5810.5260.5810.690.6790.7290.6790.6930.6860.6920.686
U-ML-W-20.5790.5810.5260.5810.690.6790.7290.6790.6930.6860.6920.686
D-ML-10.5720.4210.3470.4210.690.5850.6960.5850.6830.5140.4610.514
D-ML-All0.5760.4240.3480.4240.690.5850.6960.5850.6830.5140.4610.514
D-ML-20.5720.4210.3470.4210.690.5850.6960.5850.6830.5140.4610.514
D-ML-W-10.5880.5660.5270.5660.690.6790.7290.6790.6930.6860.6920.686
D-ML-W-All0.5880.5660.5270.5660.690.6790.7290.6790.6930.6860.6920.686
D-ML-W-20.5880.5660.5270.5660.690.6790.7290.6790.6930.6860.6920.686
Method | Random Forest (Prec. / Recall / BAcc / Acc) | Random Subspace (Prec. / Recall / BAcc / Acc) | AdaBoost (Prec. / Recall / BAcc / Acc)
AL0.3690.1930.0870.1930.5090.2630.1080.2630.3680.2090.0820.209
ML0.7230.5460.4550.5460.5550.1890.0720.1890.160.2660.1050.266
AL-W0.730.7190.730.7190.7210.720.7270.720.4410.3760.2280.376
ML-W0.7270.7220.7310.7220.7570.660.6210.660.220.2960.130.296
U-AL-10.3730.1930.0870.1930.5290.2910.1450.2910.3790.2110.0830.211
U-AL-All0.3020.1940.0820.1940.3240.2120.090.2120.0610.1380.0530.138
U-AL-20.3660.1930.0870.1930.5810.2540.1060.2540.3790.2110.0830.211
U-AL-W-10.730.7190.730.7190.7170.7170.7240.7170.4410.3760.2280.376
U-AL-W-All0.730.7190.730.7190.7170.7170.7240.7170.4410.3760.2280.376
U-AL-W-20.730.7190.730.7190.7170.7170.7240.7170.4410.3760.2280.376
D-AL-10.6510.2140.1990.2140.4640.2050.1010.2050.3650.2060.080.206
D-AL-All0.6110.2150.1930.2150.5480.2190.0940.2190.0610.1380.0530.138
D-AL-20.6510.2140.1990.2140.4860.2110.1030.2110.3650.2060.080.206
D-AL-W-10.7270.7160.7240.7160.7070.6890.7020.6890.4410.3760.2280.376
D-AL-W-All0.730.7190.730.7190.7150.7120.7140.7120.4410.3760.2280.376
D-AL-W-20.7270.7160.7240.7160.7070.6890.7020.6890.4410.3760.2280.376
U-ML-10.7230.5460.4550.5460.60.1970.0760.1970.160.2660.1050.266
U-ML-All0.7230.5460.4550.5460.5560.190.0720.190.160.2660.1050.266
U-ML-20.7230.5460.4550.5460.5560.190.0720.190.160.2660.1050.266
U-ML-W-10.7270.7220.7310.7220.7510.6650.6320.6650.220.2960.130.296
U-ML-W-All0.7270.7220.7310.7220.7310.6570.6170.6570.220.2960.130.296
U-ML-W-20.7270.7220.7310.7220.7310.6570.6170.6570.220.2960.130.296
D-ML-10.690.4490.4180.4490.4120.1870.10.1870.160.2660.1050.266
D-ML-All0.690.4490.4180.4490.4610.1890.1010.1890.160.2660.1050.266
D-ML-20.690.4490.4180.4490.4220.190.1020.190.160.2660.1050.266
D-ML-W-10.7240.7160.7250.7160.7360.4460.3950.4460.220.2960.130.296
D-ML-W-All0.7280.7230.7320.7230.7380.4610.3960.4610.220.2960.130.296
D-ML-W-20.7240.7160.7250.7160.7260.4660.4170.4660.220.2960.130.296
Table A9. Soybean dataset, 7 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors (Prec. / Recall / BAcc / Acc) | Decision Tree (Prec. / Recall / BAcc / Acc) | Gradient Boosting (Prec. / Recall / BAcc / Acc)
AL0.5890.1430.1040.1430.3520.180.0690.180.3480.1680.0640.168
ML0.6960.0790.0720.0790.3370.2160.1040.2160.380.2370.110.237
AL-W0.4980.4940.4680.4940.5190.4980.4960.4980.5760.5610.5640.561
ML-W0.4910.4920.4630.4920.5150.4930.4930.4930.5630.5670.5560.567
U-AL-10.5890.1430.1040.1430.3410.1720.0650.1720.3480.1680.0640.168
U-AL-All0.4740.1350.1030.1350.3380.2090.0790.2090.3450.2010.0760.201
U-AL-20.5890.1430.1040.1430.3340.1820.0690.1820.3480.1680.0640.168
U-AL-W-10.4980.4940.4680.4940.4970.4790.4850.4790.5760.5610.5640.561
U-AL-W-All0.4980.4940.4680.4940.5090.4890.4890.4890.5760.5610.5640.561
U-AL-W-20.4980.4940.4680.4940.5160.4950.4930.4950.5760.5610.5640.561
D-AL-10.4710.2750.2690.2750.5240.4910.4910.4910.3480.1680.0640.168
D-AL-All0.4010.2550.2730.2550.5240.4910.4910.4910.3450.2010.0760.201
D-AL-20.4710.2750.2690.2750.5240.4910.4910.4910.3480.1680.0640.168
D-AL-W-10.4980.4940.4680.4940.5190.4980.4960.4980.5760.5610.5640.561
D-AL-W-All0.4980.4940.4680.4940.5190.4980.4960.4980.5760.5610.5640.561
D-AL-W-20.4980.4940.4680.4940.5190.4980.4960.4980.5760.5610.5640.561
U-ML-10.8330.1110.0870.1110.3410.2160.1030.2160.380.2370.110.237
U-ML-All0.8330.1110.0870.1110.340.2160.1030.2160.380.2370.110.237
U-ML-20.8330.1110.0870.1110.340.2160.1030.2160.380.2370.110.237
U-ML-W-10.4840.4940.4730.4940.5110.4890.4880.4890.5630.5670.5560.567
U-ML-W-All0.4840.4940.4730.4940.5110.4890.4890.4890.5630.5670.5560.567
U-ML-W-20.4840.4940.4730.4940.5150.4930.4930.4930.5630.5670.5560.567
D-ML-10.5060.2780.2760.2780.5070.4220.4670.4220.380.2370.110.237
D-ML-All0.5060.2780.2760.2780.5070.4220.4670.4220.380.2370.110.237
D-ML-20.5060.2780.2760.2780.5070.4220.4670.4220.380.2370.110.237
D-ML-W-10.4840.4940.4730.4940.5160.4930.4930.4930.5630.5670.5560.567
D-ML-W-All0.4840.4940.4730.4940.5160.4930.4930.4930.5630.5670.5560.567
D-ML-W-20.4840.4940.4730.4940.5160.4930.4930.4930.5630.5670.5560.567
Method | Random Forest (Prec. / Recall / BAcc / Acc) | Random Subspace (Prec. / Recall / BAcc / Acc) | AdaBoost (Prec. / Recall / BAcc / Acc)
AL0.3290.1840.0780.1840.3170.1870.0710.1870.2030.2310.0880.231
ML0.5010.2190.0970.2190.1380.1380.0530.1380.2560.2550.0970.255
AL-W0.5660.5490.5420.5490.5760.5690.5280.5690.3790.3250.1550.325
ML-W0.570.5540.5130.5540.6820.3340.1920.3340.2560.2550.0970.255
U-AL-10.3290.1840.0780.1840.3080.1930.0760.1930.2030.2310.0880.231
U-AL-All0.3550.2020.0770.2020.2540.1980.0750.1980.2710.2120.0810.212
U-AL-20.3290.1840.0780.1840.3160.1890.0720.1890.2030.2310.0880.231
U-AL-W-10.5660.5490.5420.5490.5780.5680.5270.5680.3790.3250.1550.325
U-AL-W-All0.5660.5490.5420.5490.5730.5650.520.5650.3790.3250.1550.325
U-AL-W-20.5660.5490.5420.5490.5730.5650.520.5650.3790.3250.1550.325
D-AL-10.5650.2990.1710.2990.2860.1560.0770.1560.2070.2330.0890.233
D-AL-All0.6570.30.1690.30.2880.1550.0640.1550.2710.2120.0810.212
D-AL-20.5650.2990.1710.2990.3080.1550.0750.1550.2070.2330.0890.233
D-AL-W-10.5660.5490.5420.5490.5310.4490.390.4490.3790.3250.1550.325
D-AL-W-All0.5660.5490.5420.5490.5190.4030.320.4030.3790.3250.1550.325
D-AL-W-20.5660.5490.5420.5490.5170.4470.3920.4470.3790.3250.1550.325
U-ML-10.5010.2190.0970.2190.1380.1380.0530.1380.2560.2550.0970.255
U-ML-All0.5010.2190.0970.2190.1380.1380.0530.1380.2560.2550.0970.255
U-ML-20.5020.220.10.220.1380.1380.0530.1380.2560.2550.0970.255
U-ML-W-10.570.5540.5130.5540.6880.3430.2090.3430.2560.2550.0970.255
U-ML-W-All0.570.5540.5130.5540.6840.3340.1880.3340.2560.2550.0970.255
U-ML-W-20.570.5540.5130.5540.6840.3340.1880.3340.2560.2550.0970.255
D-ML-10.6320.3280.1810.3280.4770.1740.0810.1740.2560.2550.0970.255
D-ML-All0.6460.3290.1840.3290.4860.1730.0780.1730.2560.2550.0970.255
D-ML-20.6320.3280.1810.3280.4980.1760.0820.1760.2560.2550.0970.255
D-ML-W-10.570.5540.5130.5540.5460.3340.2420.3340.2560.2550.0970.255
D-ML-W-All0.570.5540.5130.5540.640.3210.2230.3210.2560.2550.0970.255
D-ML-W-20.570.5540.5130.5540.5520.3380.2430.3380.2560.2550.0970.255
Table A10. Soybean dataset, 9 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors (Prec. / Recall / BAcc / Acc) | Decision Tree (Prec. / Recall / BAcc / Acc) | Gradient Boosting (Prec. / Recall / BAcc / Acc)
AL0.4960.1660.1220.1660.2760.1510.0750.1510.3980.1860.0720.186
ML0.0640.0640.0530.0640.3540.2340.1790.2340.3970.2120.1270.212
AL-W0.5910.5040.5130.5040.5460.5280.4750.5280.5610.5470.5110.547
ML-W0.5530.4970.5040.4970.5520.5020.4680.5020.5420.5310.4970.531
U-AL-10.5190.1380.0980.1380.380.1860.0880.1860.3980.1860.0720.186
U-AL-All0.5760.130.0840.130.3580.1610.0890.1610.1380.1380.0530.138
U-AL-20.5560.1040.0780.1040.3480.2040.1570.2040.3980.1860.0720.186
U-AL-W-10.5910.5040.5130.5040.540.5160.4590.5160.5610.5470.5110.547
U-AL-W-All0.5910.5040.5130.5040.5440.5250.4670.5250.5610.5470.5110.547
U-AL-W-20.5910.5040.5130.5040.550.5340.480.5340.5610.5470.5110.547
D-AL-10.4340.2860.2650.2860.2420.140.060.140.3690.1860.0720.186
D-AL-All0.4310.2820.2590.2820.2590.140.0580.140.1380.1380.0530.138
D-AL-20.4340.2860.2650.2860.2420.140.060.140.3690.1860.0720.186
D-AL-W-10.5880.5010.5070.5010.5460.4980.4630.4980.5610.5470.5110.547
D-AL-W-All0.5880.5010.5070.5010.5410.4970.4630.4970.5610.5470.5110.547
D-AL-W-20.5880.5010.5070.5010.5460.4980.4630.4980.5610.5470.5110.547
U-ML-10.1470.0650.0560.0650.3720.2270.1630.2270.3970.2120.1270.212
U-ML-All0.0640.0640.0530.0640.4130.2770.2480.2770.3970.2120.1270.212
U-ML-20.0640.0640.0530.0640.3980.220.2210.220.3970.2120.1270.212
U-ML-W-10.530.4980.4730.4980.5410.4810.4430.4810.5420.5310.4970.531
U-ML-W-All0.5140.4850.4470.4850.5420.5090.4650.5090.5420.5310.4970.531
U-ML-W-20.5130.4880.4480.4880.5510.5190.4730.5190.5420.5310.4970.531
D-ML-10.4460.1640.2130.1640.5330.3530.3590.3530.3970.2120.1270.212
D-ML-All0.4460.1640.2130.1640.5330.3530.3590.3530.3970.2120.1270.212
D-ML-20.4460.1640.2130.1640.5330.3530.3590.3530.3970.2120.1270.212
D-ML-W-10.5550.4770.4890.4770.5420.4830.4640.4830.5420.5310.4970.531
D-ML-W-All0.5550.4770.4890.4770.5420.4830.4640.4830.5420.5310.4970.531
D-ML-W-20.5550.4770.4890.4770.5420.4830.4640.4830.5420.5310.4970.531
Method | Random Forest (Prec. / Recall / BAcc / Acc) | Random Subspace (Prec. / Recall / BAcc / Acc) | AdaBoost (Prec. / Recall / BAcc / Acc)
AL0.3860.1770.0690.1770.2490.1490.060.1490.3610.2820.110.282
ML0.4640.2370.1210.2370.1380.1380.0530.1380.1340.1320.0840.132
AL-W0.5660.540.5140.540.4970.4420.3680.4420.4180.3890.2030.389
ML-W0.5810.5390.5040.5390.4960.2150.1210.2150.1010.1370.0930.137
U-AL-10.3860.1770.0690.1770.2180.1490.060.1490.360.2810.110.281
U-AL-All0.1380.1380.0530.1380.150.140.0540.140.3890.3370.1310.337
U-AL-20.3860.1770.0690.1770.2260.1510.060.1510.360.2810.110.281
U-AL-W-10.5660.540.5140.540.4970.440.3670.440.4190.3890.2030.389
U-AL-W-All0.5660.540.5140.540.4970.440.3670.440.4190.3890.2030.389
U-AL-W-20.5660.540.5140.540.4970.440.3670.440.4190.3890.2030.389
D-AL-10.6510.3120.2060.3120.220.1580.0610.1580.3740.2790.1090.279
D-AL-All0.6880.3030.2020.3030.1970.1460.0560.1460.3890.3370.1310.337
D-AL-20.6510.3120.2060.3120.220.1580.0610.1580.3740.2790.1090.279
D-AL-W-10.5660.540.5140.540.5240.5180.4170.5180.4190.3890.2030.389
D-AL-W-All0.5660.540.5140.540.5180.5040.4190.5040.4190.3890.2030.389
D-AL-W-20.5660.540.5140.540.5240.5180.4170.5180.4190.3890.2030.389
U-ML-10.4640.2370.120.2370.1380.1380.0530.1380.1340.1320.0840.132
U-ML-All0.4640.2370.120.2370.1380.1380.0530.1380.1340.1320.0840.132
U-ML-20.4650.2380.1220.2380.1380.1380.0530.1380.1340.1320.0840.132
U-ML-W-10.5810.5390.5040.5390.4960.2150.1210.2150.1010.1370.0930.137
U-ML-W-All0.5810.5390.5040.5390.4960.2150.1210.2150.1010.1370.0930.137
U-ML-W-20.5810.5390.5040.5390.4960.2150.1210.2150.1010.1370.0930.137
D-ML-10.5750.3450.2370.3450.2090.1520.0670.1520.1340.1320.0840.132
D-ML-All0.5750.3450.2370.3450.2090.1520.0670.1520.1340.1320.0840.132
D-ML-20.5750.3450.2370.3450.2090.1520.0670.1520.1340.1320.0840.132
D-ML-W-10.5810.5390.5040.5390.5350.2680.1550.2680.1010.1370.0930.137
D-ML-W-All0.5810.5390.5040.5390.5190.2440.1420.2440.1010.1370.0930.137
D-ML-W-20.5810.5390.5040.5390.5350.2680.1550.2680.1010.1370.0930.137
Table A11. Soybean dataset, 11 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors (Prec. / Recall / BAcc / Acc) | Decision Tree (Prec. / Recall / BAcc / Acc) | Gradient Boosting (Prec. / Recall / BAcc / Acc)
AL0.4880.140.1580.140.1470.1450.0560.1450.1980.1460.0550.146
ML0.430.0910.0970.0910.3270.260.1770.260.3090.2630.1950.263
AL-W0.4350.4020.4110.4020.4540.4060.3690.4060.4580.4370.420.437
ML-W0.4190.380.390.380.5150.4280.410.4280.4980.4380.4250.438
U-AL-10.5390.1410.1540.1410.1580.1450.0560.1450.1560.1440.0550.144
U-AL-All0.2870.160.1210.160.1430.1540.0750.1540.1380.1380.0530.138
U-AL-20.340.1350.1380.1350.2670.2140.1250.2140.1760.1450.0550.145
U-AL-W-10.4170.3990.4060.3990.4750.4090.3630.4090.4840.4620.4380.462
U-AL-W-All0.4190.3880.3930.3880.4880.4460.3930.4460.4840.4620.4380.462
U-AL-W-20.430.3860.3850.3860.4780.4470.3940.4470.4840.4620.4380.462
D-AL-10.3230.1820.1850.1820.2880.0840.0660.0840.1820.1450.0550.145
D-AL-All0.2910.1790.1820.1790.3180.0860.0660.0860.1380.1380.0530.138
D-AL-20.3230.1820.1850.1820.2880.0840.0660.0840.1820.1450.0550.145
D-AL-W-10.4350.4020.4110.4020.4790.4170.3750.4170.4860.4630.4380.463
D-AL-W-All0.4350.4020.4110.4020.4780.4230.3780.4230.4860.4630.4380.463
D-AL-W-20.4350.4020.4110.4020.4790.4170.3750.4170.4860.4630.4380.463
U-ML-10.5360.0980.1160.0980.3160.2570.1710.2570.3090.2630.1950.263
U-ML-All0.6240.0920.1090.0920.4020.3010.2460.3010.3090.2630.1950.263
U-ML-20.6280.0930.110.0930.4490.2510.2830.2510.3090.2630.1950.263
U-ML-W-10.4210.3830.3920.3830.5210.4070.3680.4070.4980.4380.4250.438
U-ML-W-All0.4820.3010.3140.3010.5020.4360.4140.4360.4980.4380.4250.438
U-ML-W-20.4750.2850.2860.2850.4960.4490.4130.4490.4980.4380.4250.438
D-ML-10.3170.1390.1310.1390.3230.1580.310.1580.3090.2630.1950.263
D-ML-All0.3170.1390.1310.1390.3230.1580.310.1580.3090.2630.1950.263
D-ML-20.3170.1390.1310.1390.3230.1580.310.1580.3090.2630.1950.263
D-ML-W-10.4040.3820.4090.3820.5080.4190.3860.4190.4980.4380.4250.438
D-ML-W-All0.4040.3820.4090.3820.5080.4190.3860.4190.4980.4380.4250.438
D-ML-W-20.4040.3820.4090.3820.5080.4190.3860.4190.4980.4380.4250.438
Method | Random Forest (Prec. / Recall / BAcc / Acc) | Random Subspace (Prec. / Recall / BAcc / Acc) | AdaBoost (Prec. / Recall / BAcc / Acc)
AL0.1980.1670.0780.1670.1290.1450.0550.1450.2020.1830.0750.183
ML0.3410.2510.1830.2510.1380.1380.0530.1380.1070.1430.0620.143
AL-W0.470.4230.4190.4230.2590.2380.1590.2380.2680.2920.1450.292
ML-W0.5280.4380.4030.4380.2080.1480.0560.1480.1120.1540.0840.154
U-AL-10.1980.1670.0780.1670.1250.1450.0550.1450.1960.1840.0730.184
U-AL-All0.1680.1640.0620.1640.1380.1380.0530.1380.1190.1410.0540.141
U-AL-20.1980.1670.0780.1670.1440.1430.0540.1430.2430.1920.0730.192
U-AL-W-10.4850.4470.4220.4470.2990.2690.1430.2690.3310.3160.1680.316
U-AL-W-All0.490.450.4230.450.2990.2690.1430.2690.3310.3160.1680.316
U-AL-W-20.4850.4470.4220.4470.2990.2690.1430.2690.3310.3160.1680.316
D-AL-10.2520.1890.1350.1890.2210.1470.0610.1470.2110.1940.0790.194
D-AL-All0.2670.1720.1140.1720.1390.140.0580.140.1190.1410.0540.141
D-AL-20.2520.1890.1350.1890.2210.1470.0610.1470.2110.1940.0790.194
D-AL-W-10.4810.4460.4210.4460.3780.3020.2020.3020.3310.3160.1680.316
D-AL-W-All0.4810.4460.4210.4460.3750.3060.1980.3060.3310.3160.1680.316
D-AL-W-20.4810.4460.4210.4460.3780.3020.2020.3020.3310.3160.1680.316
U-ML-10.3410.2510.1830.2510.1380.1380.0530.1380.1070.1430.0620.143
U-ML-All0.3410.2510.1830.2510.1380.1380.0530.1380.1070.1430.0620.143
U-ML-20.3410.2510.1830.2510.1380.1380.0530.1380.1070.1430.0620.143
U-ML-W-10.5280.4380.4030.4380.2020.1490.0570.1490.1120.1540.0840.154
U-ML-W-All0.5280.4380.4030.4380.2020.1490.0570.1490.1120.1540.0840.154
U-ML-W-20.5280.4380.4030.4380.2020.1490.0570.1490.1120.1540.0840.154
D-ML-10.3470.2580.2010.2580.2050.1460.060.1460.1070.1430.0620.143
D-ML-All0.3470.2580.2010.2580.2050.1460.060.1460.1070.1430.0620.143
D-ML-20.3470.2580.2010.2580.2050.1460.060.1460.1070.1430.0620.143
D-ML-W-10.5280.4380.4030.4380.3690.1980.1210.1980.1120.1540.0840.154
D-ML-W-All0.5280.4380.4030.4380.4160.2070.120.2070.1120.1540.0840.154
D-ML-W-20.5280.4380.4030.4380.3690.1980.1210.1980.1120.1540.0840.154
Table A12. Vehicle dataset, 3 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors (Prec. / Recall / BAcc / Acc) | Decision Tree (Prec. / Recall / BAcc / Acc) | Gradient Boosting (Prec. / Recall / BAcc / Acc)
AL0.640.640.6270.640.720.7090.690.7090.7470.7310.7180.731
ML0.6090.6210.6010.6210.7380.7220.7050.7220.7560.7420.730.742
AL-W0.620.6260.6090.6260.7210.7090.6910.7090.750.7320.720.732
ML-W0.630.6350.6140.6350.7320.720.7020.720.7580.7440.7320.744
U-AL-10.6390.640.6280.640.7220.7080.690.7080.7520.7340.7220.734
U-AL-All0.6370.6240.6170.6240.7380.7220.7050.7220.7520.7280.7170.728
U-AL-20.6420.6430.6310.6430.720.7070.6890.7070.7540.7350.7230.735
U-AL-W-10.6210.6190.6080.6190.7260.7140.6980.7140.7490.7280.7160.728
U-AL-W-All0.6210.6190.6080.6190.7260.7140.6980.7140.7490.7280.7160.728
U-AL-W-20.6220.6190.6080.6190.7260.7140.6980.7140.7490.7280.7160.728
D-AL-10.5940.5790.5680.5790.720.7050.6870.7050.7080.6940.6830.694
D-AL-All0.5930.5830.5690.5830.7380.7220.7050.7220.7070.6920.680.692
D-AL-20.6130.5920.5830.5920.7340.7190.7020.7190.7080.6940.6830.694
D-AL-W-10.6030.5870.5770.5870.7260.7140.6980.7140.6950.6680.6580.668
D-AL-W-All0.6090.5920.5810.5920.7260.7140.6980.7140.7050.690.6780.69
D-AL-W-20.6140.5970.5890.5970.7260.7140.6980.7140.7050.690.6780.69
U-ML-10.6310.6370.6170.6370.7210.7060.6870.7060.7550.7390.7280.739
U-ML-All0.6310.6360.6160.6360.7280.7130.6940.7130.7550.7390.7280.739
U-ML-20.6320.6380.6170.6380.7210.7060.6870.7060.7550.7390.7280.739
U-ML-W-10.630.6370.6160.6370.7260.7140.6980.7140.7560.7410.7290.741
U-ML-W-All0.6290.6350.6150.6350.7260.7140.6980.7140.7560.7410.7290.741
U-ML-W-20.6320.6390.6180.6390.7260.7140.6980.7140.7560.7410.7290.741
D-ML-10.6290.6220.6070.6220.7270.7110.6920.7110.7480.7420.7260.742
D-ML-All0.6340.6280.6120.6280.7270.7110.6920.7110.7480.7430.7270.743
D-ML-20.6290.6280.610.6280.7210.7060.6870.7060.7520.7490.7320.749
D-ML-W-10.6290.6220.6070.6220.7260.7140.6980.7140.7460.740.7240.74
D-ML-W-All0.630.6330.6130.6330.7260.7140.6980.7140.7470.7410.7250.741
D-ML-W-20.6320.6320.6130.6320.7260.7140.6980.7140.7510.7480.7320.748
Method | Random Forest (Prec. / Recall / BAcc / Acc) | Random Subspace (Prec. / Recall / BAcc / Acc) | AdaBoost (Prec. / Recall / BAcc / Acc)
AL0.7520.7620.7410.7620.7480.760.7340.760.6690.6280.6210.628
ML0.7650.7770.7520.7770.7710.7830.7570.7830.6940.6670.6540.667
AL-W0.7520.7610.740.7610.7510.7610.7360.7610.6590.6250.6170.625
ML-W0.7670.780.7530.780.7680.780.7540.780.6940.6660.6530.666
U-AL-10.7530.7620.7410.7620.7520.7560.7340.7560.6750.6190.6170.619
U-AL-All0.7550.760.7420.760.7510.750.7340.750.6490.630.6190.63
U-AL-20.7530.7620.7410.7620.7480.7610.7350.7610.660.620.6110.62
U-AL-W-10.7530.7610.740.7610.7520.7560.7340.7560.6540.6180.610.618
U-AL-W-All0.7530.7610.740.7610.7520.7560.7340.7560.6540.6180.610.618
U-AL-W-20.7520.760.7390.760.7520.7560.7340.7560.6560.620.6130.62
D-AL-10.7410.7450.7260.7450.7180.7170.6970.7170.6490.6090.5970.609
D-AL-All0.7430.7480.7290.7480.7340.7350.7130.7350.6750.6430.6330.643
D-AL-20.7440.7520.7310.7520.7380.7480.7230.7480.6510.6190.610.619
D-AL-W-10.7260.7240.7050.7240.7090.7050.6870.7050.6540.6090.5960.609
D-AL-W-All0.7460.7520.7310.7520.7330.730.7130.730.6470.6060.5990.606
D-AL-W-20.7440.750.730.750.7340.7310.7140.7310.6470.6060.5990.606
U-ML-10.7630.7760.7490.7760.770.7820.7560.7820.6940.6680.6550.668
U-ML-All0.7630.7760.750.7760.770.7820.7560.7820.6940.6680.6550.668
U-ML-20.7630.7760.7490.7760.770.7820.7560.7820.6940.6680.6550.668
U-ML-W-10.7660.7790.7520.7790.7680.780.7540.780.6950.6680.6550.668
U-ML-W-All0.7660.7790.7520.7790.7680.780.7540.780.6950.6680.6550.668
U-ML-W-20.7660.7790.7520.7790.7680.780.7540.780.6950.6680.6550.668
D-ML-10.7570.7690.7440.7690.7610.7730.750.7730.6720.6340.6270.634
D-ML-All0.7580.7710.7450.7710.7680.780.7560.780.6860.6470.6430.647
D-ML-20.7590.7720.7460.7720.7660.780.7550.780.6870.6470.6430.647
D-ML-W-10.7580.770.7450.770.7610.7730.750.7730.6710.6320.6260.632
D-ML-W-All0.7590.7710.7460.7710.7680.780.7570.780.6820.6430.6380.643
D-ML-W-20.7610.7730.7480.7730.7690.7790.7540.7790.6840.6430.6390.643
Table A13. Vehicle dataset, 5 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors (Prec. / Recall / BAcc / Acc) | Decision Tree (Prec. / Recall / BAcc / Acc) | Gradient Boosting (Prec. / Recall / BAcc / Acc)
AL0.5960.5920.5770.5920.7380.7370.7170.7370.7380.7450.7210.745
ML0.6030.6140.5910.6140.7360.7320.7150.7320.7140.7240.70.724
AL-W0.5930.5880.5740.5880.7330.7270.7080.7270.7280.7350.7110.735
ML-W0.5990.6130.5870.6130.7330.7270.7080.7270.7150.7240.7010.724
U-AL-10.5960.590.5770.590.740.7390.7190.7390.730.7380.7140.738
U-AL-All0.6060.5950.5850.5950.7360.7320.7150.7320.7390.7420.7190.742
U-AL-20.5970.5940.580.5940.7360.7350.7160.7350.7320.7390.7170.739
U-AL-W-10.5870.5820.5680.5820.7330.7270.7080.7270.7280.7360.7120.736
U-AL-W-All0.5920.5870.5740.5870.7330.7270.7080.7270.7270.7370.7120.737
U-AL-W-20.5910.5870.5740.5870.7330.7270.7080.7270.7270.7360.7110.736
D-AL-10.5820.5520.5410.5520.730.730.710.730.6470.6350.6170.635
D-AL-All0.5920.5610.5480.5610.7360.7320.7150.7320.6830.6740.6540.674
D-AL-20.5960.5640.5540.5640.7370.7350.7150.7350.6540.6470.6270.647
D-AL-W-10.580.5430.5330.5430.7330.7270.7080.7270.6320.6210.6010.621
D-AL-W-All0.5920.5720.560.5720.7330.7270.7080.7270.6680.6690.6490.669
D-AL-W-20.5920.5610.5530.5610.7330.7270.7080.7270.6420.6350.6150.635
U-ML-10.6180.6240.6010.6240.7420.740.7210.740.7140.7230.6990.723
U-ML-All0.6080.6170.5920.6170.7420.740.7210.740.7150.7260.7020.726
U-ML-20.6080.6170.5920.6170.7420.740.7210.740.7150.7270.7030.727
U-ML-W-10.6210.6260.6030.6260.7330.7270.7080.7270.7140.7240.70.724
U-ML-W-All0.6120.6190.5940.6190.7330.7270.7080.7270.7150.7270.7030.727
U-ML-W-20.6110.6190.5940.6190.7330.7270.7080.7270.7160.7280.7030.728
D-ML-10.6420.6310.6190.6310.7380.7350.7160.7350.6660.6720.650.672
D-ML-All0.6470.6380.6250.6380.740.7370.7180.7370.6930.70.6780.7
D-ML-20.6490.6380.6260.6380.7380.7350.7150.7350.6790.6870.6640.687
D-ML-W-10.630.6180.6060.6180.7330.7270.7080.7270.6620.6690.6470.669
D-ML-W-All0.6380.6290.6150.6290.7330.7270.7080.7270.6880.6950.6730.695
D-ML-W-20.6370.6260.6130.6260.7330.7270.7080.7270.6780.6850.6620.685
Method | Random Forest (Prec. / Recall / BAcc / Acc) | Random Subspace (Prec. / Recall / BAcc / Acc) | AdaBoost (Prec. / Recall / BAcc / Acc)
AL0.7630.7720.7480.7720.7670.7730.7530.7730.6570.6680.6430.668
ML0.7570.7690.7420.7690.7750.7830.7630.7830.6660.6470.6380.647
AL-W0.760.7660.7430.7660.760.7690.7480.7690.6660.6720.6460.672
ML-W0.760.7710.7440.7710.7750.7840.7630.7840.6640.6450.6350.645
U-AL-10.7620.7710.7470.7710.7690.7760.7570.7760.6590.6680.6420.668
U-AL-All0.7630.7720.7490.7720.7720.7760.7580.7760.6470.6640.6380.664
U-AL-20.7610.770.7460.770.7690.7760.7570.7760.6530.6620.6370.662
U-AL-W-10.7570.7640.7410.7640.760.7680.7490.7680.6660.6720.6440.672
U-AL-W-All0.7570.7640.7410.7640.760.7680.7480.7680.6590.6660.640.666
U-AL-W-20.7560.7630.740.7630.7620.7690.750.7690.6570.6640.6380.664
D-AL-10.7140.7110.6930.7110.6830.670.6560.670.5660.5320.520.532
D-AL-All0.7480.740.7260.740.7530.7550.7360.7550.6550.5830.5710.583
D-AL-20.7350.7370.7160.7370.7080.70.6850.70.5980.5590.5460.559
D-AL-W-10.7420.7280.7140.7280.6780.6650.6490.6650.6060.5590.5430.559
D-AL-W-All0.7470.7410.7270.7410.7220.7290.710.7290.610.5790.5670.579
D-AL-W-20.7520.7460.7310.7460.6950.6910.6710.6910.6080.5680.5560.568
U-ML-10.760.7710.7440.7710.7730.7810.7610.7810.6470.6310.6190.631
U-ML-All0.7580.7690.7430.7690.7740.7810.7610.7810.6620.6440.6320.644
U-ML-20.7550.7680.740.7680.7760.7830.7630.7830.6670.6490.6380.649
U-ML-W-10.7640.7740.7480.7740.7740.7830.7620.7830.6420.6510.6220.651
U-ML-W-All0.7620.7720.7460.7720.7740.7830.7620.7830.6550.6380.6260.638
U-ML-W-20.7610.7720.7450.7720.7760.7850.7640.7850.6570.6390.6290.639
D-ML-10.7530.7650.740.7650.770.7750.750.7750.6430.5870.5830.587
D-ML-All0.7580.7690.7430.7690.7740.7810.7560.7810.6590.6060.6020.606
D-ML-20.7570.7690.7430.7690.7710.7780.7530.7780.6560.6020.5970.602
D-ML-W-10.7530.7630.7390.7630.7660.7740.7510.7740.6340.5910.5870.591
D-ML-W-All0.7560.7650.740.7650.7770.7830.7590.7830.6490.6060.6010.606
D-ML-W-20.7570.7670.7420.7670.7710.7790.7540.7790.6460.6030.5980.603
Table A14. Vehicle dataset, 7 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors (Prec. / Recall / BAcc / Acc) | Decision Tree (Prec. / Recall / BAcc / Acc) | Gradient Boosting (Prec. / Recall / BAcc / Acc)
AL0.6120.620.6050.620.7150.7120.6950.7120.7440.7420.7290.742
ML0.5980.6130.5890.6130.7110.7070.6930.7070.7580.7540.7430.754
AL-W0.6130.620.6060.620.7150.7130.6970.7130.7450.7430.730.743
ML-W0.5880.6020.5780.6020.7180.7150.6990.7150.7580.7540.7420.754
U-AL-10.6050.6130.5960.6130.7240.720.7050.720.7490.7470.7340.747
U-AL-All0.6160.6230.6090.6230.7110.7070.6930.7070.7450.7390.7270.739
U-AL-20.6210.6280.6150.6280.7240.7220.7060.7220.7460.7420.7290.742
U-AL-W-10.6110.6170.6030.6170.7160.7130.6980.7130.7490.7460.7340.746
U-AL-W-All0.6130.620.6070.620.7160.7130.6980.7130.7460.7410.7290.741
U-AL-W-20.6180.6240.6110.6240.7160.7130.6980.7130.7480.7420.7290.742
D-AL-10.5320.5070.4980.5070.7090.7040.690.7040.6310.6040.6020.604
D-AL-All0.6110.5680.5670.5680.7110.7070.6930.7070.6280.6160.6070.616
D-AL-20.5610.5380.5330.5380.7150.7110.6960.7110.6390.6190.6130.619
D-AL-W-10.5520.540.530.540.7160.7130.6980.7130.6440.6210.6180.621
D-AL-W-All0.60.60.5880.60.7160.7130.6980.7130.6520.6370.6290.637
D-AL-W-20.570.5580.5510.5580.7160.7130.6980.7130.6420.620.6150.62
U-ML-10.6080.6220.5970.6220.7240.720.7050.720.7570.7550.7430.755
U-ML-All0.5980.610.5860.610.7180.7170.7010.7170.7540.750.7380.75
U-ML-20.6010.6140.590.6140.7240.720.7050.720.7590.7510.740.751
U-ML-W-10.6020.6140.5890.6140.7160.7130.6980.7130.7580.7550.7430.755
U-ML-W-All0.5940.6070.5830.6070.7160.7130.6980.7130.7570.7510.740.751
U-ML-W-20.5950.6080.5840.6080.7160.7130.6980.7130.760.7520.740.752
D-ML-10.5910.5940.5740.5940.7140.7120.6950.7120.6930.6870.6770.687
D-ML-All0.6040.6040.5910.6040.7190.7170.7020.7170.6920.6870.6760.687
D-ML-20.5920.5950.5820.5950.7140.7120.6950.7120.6910.6860.6750.686
D-ML-W-10.5980.5870.5740.5870.7160.7130.6980.7130.6940.6880.6770.688
D-ML-W-All0.610.6140.6010.6140.7160.7130.6980.7130.6920.6870.6760.687
D-ML-W-20.590.5940.5780.5940.7160.7130.6980.7130.6920.6870.6760.687
Method | Random Forest (Prec. / Recall / BAcc / Acc) | Random Subspace (Prec. / Recall / BAcc / Acc) | AdaBoost (Prec. / Recall / BAcc / Acc)
AL0.7440.7510.7310.7510.7520.7560.7390.7560.7030.6720.6670.672
ML0.7320.750.7220.750.7560.7680.7450.7680.7010.6980.680.698
AL-W0.7350.7440.7240.7440.750.7540.7370.7540.7120.6750.670.675
ML-W0.7360.7540.7260.7540.7570.7670.7450.7670.70.6970.6780.697
U-AL-10.740.7480.7280.7480.7550.7590.7410.7590.6920.6660.6530.666
U-AL-All0.7410.750.730.750.7580.760.7430.760.7230.6790.6780.679
U-AL-20.7390.7470.7270.7470.7580.7580.7420.7580.7140.6740.670.674
U-AL-W-10.7380.7470.7270.7470.7530.7550.7380.7550.6890.6650.6520.665
U-AL-W-All0.7390.7470.7270.7470.7520.7560.7390.7560.7140.6730.670.673
U-AL-W-20.7390.7480.7280.7480.750.7540.7360.7540.7170.6760.6720.676
D-AL-10.6660.6540.6380.6540.6560.6310.620.6310.5560.4980.5040.498
D-AL-All0.7090.6980.6820.6980.6970.6820.6670.6820.6380.6050.6020.605
D-AL-20.6950.6750.660.6750.6620.630.6190.630.5560.4980.5020.498
D-AL-W-10.680.6540.6430.6540.6460.6020.5980.6020.5520.5380.5330.538
D-AL-W-All0.7160.7030.6870.7030.6920.680.6680.680.6140.6020.6020.602
D-AL-W-20.6930.6660.6540.6660.6530.6090.6060.6090.5610.5480.5440.548
U-ML-10.7370.7530.7260.7530.7550.7650.7430.7650.6950.6870.6710.687
U-ML-All0.7370.7540.7270.7540.7510.7620.7390.7620.7050.6950.6810.695
U-ML-20.7370.7530.7260.7530.7490.760.7370.760.7010.6890.6760.689
U-ML-W-10.740.7560.7290.7560.7540.7610.740.7610.6940.6870.6690.687
U-ML-W-All0.7380.7550.7280.7550.7520.7610.740.7610.7020.6930.6780.693
U-ML-W-20.7410.7570.730.7570.7510.7590.7370.7590.7010.690.6760.69
D-ML-10.7040.720.6930.720.7170.720.6950.720.6590.650.6390.65
D-ML-All0.7130.7280.7010.7280.7210.7350.7070.7350.6680.6570.6450.657
D-ML-20.7010.7170.6890.7170.7160.7220.6970.7220.660.650.6370.65
D-ML-W-10.7110.7260.70.7260.710.7140.690.7140.660.6530.640.653
D-ML-W-All0.720.7350.7080.7350.7210.7360.7080.7360.6710.6590.6460.659
D-ML-W-20.7070.7220.6960.7220.7110.7190.6950.7190.6620.6520.6390.652
Table A15. Vehicle set, 9 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors: Prec. Recall BAcc Acc | Decision Tree: Prec. Recall BAcc Acc | Gradient Boosting: Prec. Recall BAcc Acc
AL | 0.601 0.608 0.584 0.608 | 0.68 0.677 0.659 0.677 | 0.722 0.737 0.71 0.737
ML | 0.627 0.634 0.61 0.634 | 0.698 0.689 0.674 0.689 | 0.73 0.746 0.718 0.746
AL-W | 0.597 0.607 0.58 0.607 | 0.687 0.684 0.666 0.684 | 0.725 0.741 0.714 0.741
ML-W | 0.63 0.639 0.614 0.639 | 0.686 0.683 0.665 0.683 | 0.742 0.758 0.73 0.758
U-AL-1 | 0.599 0.612 0.586 0.612 | 0.697 0.691 0.674 0.691 | 0.727 0.742 0.715 0.742
U-AL-All | 0.604 0.609 0.584 0.609 | 0.698 0.689 0.674 0.689 | 0.722 0.735 0.71 0.735
U-AL-2 | 0.606 0.615 0.59 0.615 | 0.697 0.69 0.673 0.69 | 0.725 0.739 0.713 0.739
U-AL-W-1 | 0.596 0.61 0.583 0.61 | 0.686 0.683 0.665 0.683 | 0.727 0.743 0.715 0.743
U-AL-W-All | 0.599 0.612 0.585 0.612 | 0.685 0.683 0.664 0.683 | 0.729 0.743 0.717 0.743
U-AL-W-2 | 0.599 0.612 0.585 0.612 | 0.686 0.683 0.665 0.683 | 0.72 0.734 0.708 0.734
D-AL-1 | 0.494 0.454 0.45 0.454 | 0.691 0.684 0.666 0.684 | 0.523 0.502 0.495 0.502
D-AL-All | 0.621 0.554 0.549 0.554 | 0.698 0.689 0.674 0.689 | 0.611 0.569 0.55 0.569
D-AL-2 | 0.511 0.487 0.483 0.487 | 0.687 0.682 0.664 0.682 | 0.534 0.505 0.496 0.505
D-AL-W-1 | 0.493 0.444 0.43 0.444 | 0.686 0.683 0.665 0.683 | 0.569 0.528 0.517 0.528
D-AL-W-All | 0.589 0.546 0.543 0.546 | 0.686 0.683 0.665 0.683 | 0.595 0.576 0.561 0.576
D-AL-W-2 | 0.494 0.469 0.462 0.469 | 0.686 0.683 0.665 0.683 | 0.572 0.535 0.524 0.535
U-ML-1 | 0.635 0.643 0.619 0.643 | 0.684 0.681 0.663 0.681 | 0.727 0.744 0.715 0.744
U-ML-All | 0.626 0.634 0.609 0.634 | 0.684 0.681 0.663 0.681 | 0.734 0.751 0.722 0.751
U-ML-2 | 0.636 0.641 0.62 0.641 | 0.684 0.681 0.663 0.681 | 0.728 0.744 0.716 0.744
U-ML-W-1 | 0.626 0.639 0.612 0.639 | 0.686 0.683 0.665 0.683 | 0.738 0.755 0.726 0.755
U-ML-W-All | 0.627 0.636 0.61 0.636 | 0.685 0.683 0.664 0.683 | 0.743 0.76 0.732 0.76
U-ML-W-2 | 0.626 0.635 0.61 0.635 | 0.686 0.683 0.665 0.683 | 0.739 0.755 0.727 0.755
D-ML-1 | 0.586 0.581 0.569 0.581 | 0.684 0.678 0.661 0.678 | 0.673 0.685 0.663 0.685
D-ML-All | 0.608 0.604 0.586 0.604 | 0.69 0.686 0.669 0.686 | 0.667 0.681 0.659 0.681
D-ML-2 | 0.593 0.591 0.58 0.591 | 0.688 0.685 0.667 0.685 | 0.671 0.683 0.661 0.683
D-ML-W-1 | 0.58 0.585 0.569 0.585 | 0.686 0.683 0.665 0.683 | 0.676 0.683 0.661 0.683
D-ML-W-All | 0.609 0.605 0.588 0.605 | 0.686 0.683 0.665 0.683 | 0.67 0.683 0.66 0.683
D-ML-W-2 | 0.586 0.583 0.571 0.583 | 0.686 0.683 0.665 0.683 | 0.673 0.681 0.658 0.681
Method | Random Forest: Prec. Recall BAcc Acc | Random Subspace: Prec. Recall BAcc Acc | AdaBoost: Prec. Recall BAcc Acc
AL | 0.733 0.745 0.722 0.745 | 0.734 0.744 0.719 0.744 | 0.68 0.667 0.652 0.667
ML | 0.74 0.751 0.724 0.751 | 0.764 0.775 0.751 0.775 | 0.676 0.642 0.636 0.642
AL-W | 0.727 0.739 0.716 0.739 | 0.738 0.749 0.726 0.749 | 0.686 0.673 0.658 0.673
ML-W | 0.744 0.754 0.726 0.754 | 0.765 0.776 0.753 0.776 | 0.674 0.646 0.638 0.646
U-AL-1 | 0.732 0.743 0.72 0.743 | 0.735 0.744 0.719 0.744 | 0.68 0.668 0.655 0.668
U-AL-All | 0.732 0.744 0.721 0.744 | 0.748 0.754 0.733 0.754 | 0.677 0.672 0.656 0.672
U-AL-2 | 0.729 0.742 0.718 0.742 | 0.743 0.751 0.728 0.751 | 0.686 0.676 0.66 0.676
U-AL-W-1 | 0.732 0.744 0.72 0.744 | 0.742 0.754 0.73 0.754 | 0.686 0.674 0.659 0.674
U-AL-W-All | 0.729 0.741 0.717 0.741 | 0.744 0.75 0.728 0.75 | 0.683 0.672 0.657 0.672
U-AL-W-2 | 0.729 0.741 0.717 0.741 | 0.748 0.754 0.732 0.754 | 0.687 0.676 0.661 0.676
D-AL-1 | 0.63 0.614 0.606 0.614 | 0.605 0.574 0.564 0.574 | 0.498 0.446 0.452 0.446
D-AL-All | 0.721 0.72 0.703 0.72 | 0.672 0.653 0.631 0.653 | 0.597 0.535 0.534 0.535
D-AL-2 | 0.639 0.627 0.614 0.627 | 0.59 0.565 0.553 0.565 | 0.478 0.428 0.434 0.428
D-AL-W-1 | 0.66 0.649 0.633 0.649 | 0.638 0.612 0.598 0.612 | 0.555 0.498 0.494 0.498
D-AL-W-All | 0.709 0.714 0.696 0.714 | 0.68 0.681 0.659 0.681 | 0.582 0.545 0.541 0.545
D-AL-W-2 | 0.669 0.658 0.642 0.658 | 0.632 0.613 0.599 0.613 | 0.539 0.488 0.487 0.488
U-ML-1 | 0.745 0.756 0.729 0.756 | 0.768 0.78 0.756 0.78 | 0.668 0.646 0.64 0.646
U-ML-All | 0.74 0.752 0.723 0.752 | 0.764 0.774 0.751 0.774 | 0.67 0.642 0.637 0.642
U-ML-2 | 0.738 0.751 0.725 0.751 | 0.763 0.773 0.75 0.773 | 0.683 0.646 0.643 0.646
U-ML-W-1 | 0.745 0.756 0.729 0.756 | 0.772 0.782 0.757 0.782 | 0.669 0.649 0.641 0.649
U-ML-W-All | 0.741 0.751 0.723 0.751 | 0.761 0.772 0.747 0.772 | 0.675 0.654 0.646 0.654
U-ML-W-2 | 0.734 0.749 0.721 0.749 | 0.768 0.777 0.753 0.777 | 0.675 0.655 0.647 0.655
D-ML-1 | 0.741 0.749 0.727 0.749 | 0.732 0.741 0.719 0.741 | 0.604 0.57 0.569 0.57
D-ML-All | 0.739 0.749 0.725 0.749 | 0.737 0.748 0.726 0.748 | 0.603 0.557 0.557 0.557
D-ML-2 | 0.745 0.752 0.73 0.752 | 0.733 0.744 0.721 0.744 | 0.604 0.568 0.567 0.568
D-ML-W-1 | 0.733 0.743 0.721 0.743 | 0.732 0.739 0.718 0.739 | 0.609 0.574 0.571 0.574
D-ML-W-All | 0.738 0.749 0.724 0.749 | 0.739 0.75 0.727 0.75 | 0.599 0.582 0.581 0.582
D-ML-W-2 | 0.737 0.746 0.724 0.746 | 0.732 0.742 0.718 0.742 | 0.609 0.572 0.569 0.572
Table A16. Vehicle set, 11 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors: Prec. Recall BAcc Acc | Decision Tree: Prec. Recall BAcc Acc | Gradient Boosting: Prec. Recall BAcc Acc
AL | 0.574 0.58 0.564 0.58 | 0.697 0.707 0.684 0.707 | 0.712 0.722 0.698 0.722
ML | 0.587 0.613 0.581 0.613 | 0.696 0.701 0.68 0.701 | 0.718 0.731 0.705 0.731
AL-W | 0.567 0.579 0.56 0.579 | 0.695 0.706 0.682 0.706 | 0.719 0.728 0.704 0.728
ML-W | 0.586 0.609 0.578 0.609 | 0.692 0.701 0.677 0.701 | 0.716 0.728 0.704 0.728
U-AL-1 | 0.578 0.585 0.568 0.585 | 0.694 0.703 0.68 0.703 | 0.714 0.721 0.698 0.721
U-AL-All | 0.571 0.575 0.559 0.575 | 0.696 0.701 0.68 0.701 | 0.712 0.724 0.7 0.724
U-AL-2 | 0.574 0.585 0.566 0.585 | 0.694 0.704 0.68 0.704 | 0.704 0.715 0.691 0.715
U-AL-W-1 | 0.582 0.586 0.569 0.586 | 0.69 0.699 0.675 0.699 | 0.717 0.724 0.701 0.724
U-AL-W-All | 0.561 0.572 0.555 0.572 | 0.692 0.701 0.677 0.701 | 0.715 0.724 0.7 0.724
U-AL-W-2 | 0.574 0.584 0.566 0.584 | 0.69 0.699 0.675 0.699 | 0.707 0.716 0.692 0.716
D-AL-1 | 0.51 0.448 0.444 0.448 | 0.696 0.704 0.681 0.704 | 0.478 0.461 0.456 0.461
D-AL-All | 0.578 0.528 0.525 0.528 | 0.696 0.701 0.68 0.701 | 0.537 0.507 0.488 0.507
D-AL-2 | 0.513 0.451 0.447 0.451 | 0.689 0.695 0.673 0.695 | 0.472 0.457 0.452 0.457
D-AL-W-1 | 0.521 0.467 0.463 0.467 | 0.691 0.699 0.675 0.699 | 0.469 0.453 0.447 0.453
D-AL-W-All | 0.576 0.524 0.52 0.524 | 0.691 0.699 0.675 0.699 | 0.494 0.493 0.482 0.493
D-AL-W-2 | 0.533 0.474 0.471 0.474 | 0.691 0.699 0.675 0.699 | 0.48 0.466 0.46 0.466
U-ML-1 | 0.598 0.62 0.587 0.62 | 0.696 0.706 0.682 0.706 | 0.724 0.736 0.711 0.736
U-ML-All | 0.585 0.609 0.577 0.609 | 0.696 0.706 0.682 0.706 | 0.72 0.731 0.706 0.731
U-ML-2 | 0.58 0.605 0.573 0.605 | 0.696 0.706 0.682 0.706 | 0.717 0.73 0.704 0.73
U-ML-W-1 | 0.603 0.62 0.589 0.62 | 0.69 0.699 0.675 0.699 | 0.722 0.734 0.71 0.734
U-ML-W-All | 0.592 0.606 0.578 0.606 | 0.692 0.701 0.677 0.701 | 0.72 0.731 0.706 0.731
U-ML-W-2 | 0.582 0.606 0.574 0.606 | 0.69 0.699 0.675 0.699 | 0.716 0.728 0.703 0.728
D-ML-1 | 0.557 0.536 0.528 0.536 | 0.689 0.7 0.676 0.7 | 0.527 0.534 0.519 0.534
D-ML-All | 0.565 0.555 0.547 0.555 | 0.696 0.704 0.681 0.704 | 0.539 0.554 0.535 0.554
D-ML-2 | 0.558 0.541 0.534 0.541 | 0.689 0.7 0.676 0.7 | 0.532 0.539 0.524 0.539
D-ML-W-1 | 0.561 0.543 0.537 0.543 | 0.691 0.699 0.675 0.699 | 0.548 0.554 0.54 0.554
D-ML-W-All | 0.568 0.56 0.553 0.56 | 0.691 0.699 0.675 0.699 | 0.565 0.579 0.559 0.579
D-ML-W-2 | 0.561 0.543 0.538 0.543 | 0.691 0.699 0.675 0.699 | 0.556 0.562 0.547 0.562
Method | Random Forest: Prec. Recall BAcc Acc | Random Subspace: Prec. Recall BAcc Acc | AdaBoost: Prec. Recall BAcc Acc
AL | 0.73 0.742 0.713 0.742 | 0.72 0.742 0.712 0.742 | 0.625 0.629 0.609 0.629
ML | 0.733 0.746 0.717 0.746 | 0.727 0.747 0.716 0.747 | 0.599 0.59 0.574 0.59
AL-W | 0.727 0.738 0.709 0.738 | 0.716 0.739 0.707 0.739 | 0.628 0.642 0.616 0.642
ML-W | 0.731 0.744 0.715 0.744 | 0.724 0.745 0.715 0.745 | 0.595 0.588 0.571 0.588
U-AL-1 | 0.728 0.743 0.714 0.743 | 0.725 0.745 0.716 0.745 | 0.621 0.621 0.604 0.621
U-AL-All | 0.731 0.743 0.714 0.743 | 0.725 0.739 0.715 0.739 | 0.618 0.636 0.609 0.636
U-AL-2 | 0.728 0.74 0.711 0.74 | 0.718 0.74 0.711 0.74 | 0.623 0.626 0.607 0.626
U-AL-W-1 | 0.726 0.74 0.711 0.74 | 0.719 0.736 0.71 0.736 | 0.617 0.62 0.599 0.62
U-AL-W-All | 0.725 0.739 0.709 0.739 | 0.716 0.733 0.707 0.733 | 0.613 0.627 0.601 0.627
U-AL-W-2 | 0.725 0.739 0.709 0.739 | 0.717 0.739 0.709 0.739 | 0.616 0.628 0.603 0.628
D-AL-1 | 0.582 0.572 0.562 0.572 | 0.543 0.515 0.502 0.515 | 0.441 0.421 0.423 0.421
D-AL-All | 0.653 0.647 0.632 0.647 | 0.675 0.634 0.623 0.634 | 0.423 0.425 0.414 0.425
D-AL-2 | 0.588 0.579 0.565 0.579 | 0.554 0.527 0.514 0.527 | 0.421 0.407 0.403 0.407
D-AL-W-1 | 0.6 0.58 0.564 0.58 | 0.573 0.542 0.527 0.542 | 0.495 0.437 0.429 0.437
D-AL-W-All | 0.667 0.671 0.65 0.671 | 0.658 0.654 0.635 0.654 | 0.512 0.455 0.445 0.455
D-AL-W-2 | 0.605 0.587 0.57 0.587 | 0.583 0.554 0.539 0.554 | 0.516 0.454 0.445 0.454
U-ML-1 | 0.732 0.745 0.716 0.745 | 0.736 0.753 0.726 0.753 | 0.606 0.606 0.587 0.606
U-ML-All | 0.738 0.749 0.72 0.749 | 0.733 0.752 0.722 0.752 | 0.609 0.615 0.597 0.615
U-ML-2 | 0.739 0.75 0.721 0.75 | 0.736 0.754 0.724 0.754 | 0.609 0.602 0.587 0.602
U-ML-W-1 | 0.732 0.747 0.718 0.747 | 0.732 0.752 0.723 0.752 | 0.602 0.607 0.587 0.607
U-ML-W-All | 0.739 0.75 0.721 0.75 | 0.73 0.75 0.721 0.75 | 0.61 0.617 0.598 0.617
U-ML-W-2 | 0.737 0.75 0.72 0.75 | 0.733 0.752 0.722 0.752 | 0.602 0.609 0.589 0.609
D-ML-1 | 0.703 0.704 0.688 0.704 | 0.691 0.701 0.678 0.701 | 0.588 0.539 0.536 0.539
D-ML-All | 0.707 0.709 0.693 0.709 | 0.705 0.712 0.689 0.712 | 0.605 0.554 0.551 0.554
D-ML-2 | 0.702 0.703 0.688 0.703 | 0.694 0.704 0.682 0.704 | 0.588 0.537 0.533 0.537
D-ML-W-1 | 0.705 0.706 0.689 0.706 | 0.7 0.709 0.686 0.709 | 0.599 0.555 0.553 0.555
D-ML-W-All | 0.713 0.713 0.697 0.713 | 0.709 0.718 0.696 0.718 | 0.6 0.545 0.542 0.545
D-ML-W-2 | 0.707 0.708 0.691 0.708 | 0.704 0.713 0.689 0.713 | 0.599 0.553 0.55 0.553
Table A17. Lymphography set, 3 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors: Prec. Recall BAcc Acc | Decision Tree: Prec. Recall BAcc Acc | Gradient Boosting: Prec. Recall BAcc Acc
AL | 0.455 0.455 0.333 0.455 | 0.538 0.398 0.287 0.398 | 0.638 0.43 0.625 0.43
ML | 0.455 0.455 0.333 0.455 | 0.437 0.384 0.279 0.384 | 0.463 0.42 0.625 0.42
AL-W | 0.455 0.455 0.333 0.455 | 0.491 0.461 0.332 0.461 | 0.672 0.575 0.72 0.575
ML-W | 0.634 0.473 0.345 0.473 | 0.467 0.445 0.323 0.445 | 0.765 0.568 0.724 0.568
U-AL-1 | 0.455 0.455 0.333 0.455 | 0.52 0.398 0.287 0.398 | 0.649 0.443 0.634 0.443
U-AL-All | 0.455 0.455 0.333 0.455 | 0.45 0.336 0.245 0.336 | 0.474 0.332 0.56 0.332
U-AL-2 | 0.455 0.455 0.333 0.455 | 0.52 0.398 0.287 0.398 | 0.688 0.457 0.644 0.457
U-AL-W-1 | 0.455 0.455 0.333 0.455 | 0.474 0.45 0.325 0.45 | 0.663 0.575 0.72 0.575
U-AL-W-All | 0.455 0.455 0.333 0.455 | 0.474 0.45 0.325 0.45 | 0.664 0.575 0.72 0.575
U-AL-W-2 | 0.455 0.455 0.333 0.455 | 0.474 0.45 0.325 0.45 | 0.667 0.575 0.72 0.575
D-AL-1 | 0.62 0.518 0.371 0.518 | 0.495 0.355 0.38 0.355 | 0.572 0.232 0.357 0.232
D-AL-All | 0.455 0.455 0.333 0.455 | 0.465 0.393 0.284 0.393 | 0.246 0.102 0.389 0.102
D-AL-2 | 0.578 0.498 0.358 0.498 | 0.491 0.425 0.305 0.425 | 0.573 0.255 0.374 0.255
D-AL-W-1 | 0.668 0.591 0.415 0.591 | 0.472 0.448 0.321 0.448 | 0.658 0.557 0.453 0.557
D-AL-W-All | 0.668 0.591 0.415 0.591 | 0.472 0.448 0.321 0.448 | 0.658 0.557 0.453 0.557
D-AL-W-2 | 0.668 0.591 0.415 0.591 | 0.472 0.448 0.321 0.448 | 0.658 0.557 0.453 0.557
U-ML-1 | 0.455 0.455 0.333 0.455 | 0.532 0.411 0.298 0.411 | 0.463 0.42 0.625 0.42
U-ML-All | 0.455 0.455 0.333 0.455 | 0.532 0.411 0.298 0.411 | 0.463 0.42 0.625 0.42
U-ML-2 | 0.455 0.455 0.333 0.455 | 0.532 0.411 0.298 0.411 | 0.463 0.42 0.625 0.42
U-ML-W-1 | 0.634 0.473 0.345 0.473 | 0.461 0.443 0.32 0.443 | 0.758 0.568 0.724 0.568
U-ML-W-All | 0.634 0.473 0.345 0.473 | 0.461 0.443 0.32 0.443 | 0.759 0.568 0.724 0.568
U-ML-W-2 | 0.634 0.473 0.345 0.473 | 0.461 0.443 0.32 0.443 | 0.761 0.568 0.724 0.568
D-ML-1 | 0.769 0.55 0.394 0.55 | 0.485 0.289 0.458 0.289 | 0.657 0.234 0.199 0.234
D-ML-All | 0.769 0.55 0.394 0.55 | 0.467 0.4 0.287 0.4 | 0.658 0.257 0.216 0.257
D-ML-2 | 0.769 0.55 0.394 0.55 | 0.467 0.4 0.287 0.4 | 0.65 0.261 0.219 0.261
D-ML-W-1 | 0.769 0.55 0.394 0.55 | 0.465 0.436 0.313 0.436 | 0.753 0.539 0.449 0.539
D-ML-W-All | 0.769 0.55 0.394 0.55 | 0.465 0.436 0.313 0.436 | 0.753 0.539 0.449 0.539
D-ML-W-2 | 0.769 0.55 0.394 0.55 | 0.465 0.436 0.313 0.436 | 0.753 0.539 0.449 0.539
Method | Random Forest: Prec. Recall BAcc Acc | Random Subspace: Prec. Recall BAcc Acc | AdaBoost: Prec. Recall BAcc Acc
AL | 0.487 0.461 0.338 0.461 | 0.487 0.457 0.335 0.457 | 0.458 0.455 0.333 0.455
ML | 0.491 0.466 0.341 0.466 | 0.457 0.455 0.333 0.455 | 0.481 0.477 0.667 0.477
AL-W | 0.481 0.47 0.343 0.47 | 0.539 0.48 0.348 0.48 | 0.461 0.455 0.333 0.455
ML-W | 0.649 0.507 0.367 0.507 | 0.457 0.455 0.333 0.455 | 0.697 0.493 0.358 0.493
U-AL-1 | 0.486 0.457 0.334 0.457 | 0.547 0.468 0.374 0.468 | 0.451 0.455 0.333 0.455
U-AL-All | 0.459 0.455 0.333 0.455 | 0.465 0.452 0.363 0.452 | 0.447 0.455 0.333 0.455
U-AL-2 | 0.486 0.457 0.334 0.457 | 0.519 0.464 0.371 0.464 | 0.458 0.455 0.333 0.455
U-AL-W-1 | 0.48 0.47 0.343 0.47 | 0.534 0.48 0.381 0.48 | 0.452 0.455 0.333 0.455
U-AL-W-All | 0.48 0.47 0.343 0.47 | 0.534 0.48 0.381 0.48 | 0.452 0.455 0.333 0.455
U-AL-W-2 | 0.48 0.47 0.343 0.47 | 0.534 0.48 0.381 0.48 | 0.461 0.455 0.333 0.455
D-AL-1 | 0.647 0.511 0.398 0.511 | 0.684 0.443 0.382 0.443 | 0.214 0.189 0.455 0.189
D-AL-All | 0.484 0.457 0.335 0.457 | 0.527 0.473 0.376 0.473 | 0.465 0.455 0.333 0.455
D-AL-2 | 0.652 0.516 0.37 0.516 | 0.676 0.498 0.392 0.498 | 0.214 0.189 0.455 0.189
D-AL-W-1 | 0.652 0.561 0.396 0.561 | 0.729 0.525 0.409 0.525 | 0.472 0.443 0.322 0.443
D-AL-W-All | 0.658 0.555 0.393 0.555 | 0.665 0.509 0.399 0.509 | 0.472 0.443 0.322 0.443
D-AL-W-2 | 0.658 0.555 0.393 0.555 | 0.665 0.509 0.399 0.509 | 0.472 0.443 0.322 0.443
U-ML-1 | 0.521 0.468 0.342 0.468 | 0.458 0.455 0.333 0.455 | 0.481 0.477 0.667 0.477
U-ML-All | 0.52 0.464 0.339 0.464 | 0.458 0.455 0.333 0.455 | 0.481 0.477 0.667 0.477
U-ML-2 | 0.521 0.468 0.342 0.468 | 0.458 0.455 0.333 0.455 | 0.481 0.477 0.667 0.477
U-ML-W-1 | 0.649 0.507 0.367 0.507 | 0.458 0.455 0.333 0.455 | 0.517 0.47 0.343 0.47
U-ML-W-All | 0.649 0.507 0.367 0.507 | 0.458 0.455 0.333 0.455 | 0.517 0.47 0.343 0.47
U-ML-W-2 | 0.649 0.507 0.367 0.507 | 0.458 0.455 0.333 0.455 | 0.697 0.493 0.358 0.493
D-ML-1 | 0.762 0.552 0.395 0.552 | 0.551 0.482 0.351 0.482 | 0.709 0.5 0.681 0.5
D-ML-All | 0.754 0.552 0.395 0.552 | 0.521 0.477 0.348 0.477 | 0.709 0.5 0.681 0.5
D-ML-2 | 0.754 0.552 0.395 0.552 | 0.521 0.477 0.348 0.477 | 0.709 0.5 0.681 0.5
D-ML-W-1 | 0.68 0.543 0.387 0.543 | 0.583 0.493 0.358 0.493 | 0.638 0.534 0.38 0.534
D-ML-W-All | 0.714 0.545 0.389 0.545 | 0.581 0.489 0.355 0.489 | 0.638 0.534 0.38 0.534
D-ML-W-2 | 0.692 0.534 0.381 0.534 | 0.551 0.484 0.352 0.484 | 0.638 0.534 0.38 0.534
Table A18. Lymphography set, 5 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors: Prec. Recall BAcc Acc | Decision Tree: Prec. Recall BAcc Acc | Gradient Boosting: Prec. Recall BAcc Acc
AL | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.525 0.418 0.623 0.418
ML | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.449 0.439 0.638 0.439
AL-W | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
ML-W | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.525 0.418 0.623 0.418
U-AL-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.673 0.386 0.6 0.386
U-AL-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.529 0.434 0.635 0.434
U-AL-W-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
D-AL-1 | 0.604 0.493 0.356 0.493 | 0.663 0.473 0.52 0.473 | 0.625 0.186 0.358 0.186
D-AL-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.023 0.023 0.333 0.023
D-AL-2 | 0.613 0.493 0.356 0.493 | 0.641 0.484 0.497 0.484 | 0.669 0.195 0.302 0.195
D-AL-W-1 | 0.603 0.516 0.367 0.516 | 0.743 0.732 0.501 0.732 | 0.746 0.693 0.473 0.693
D-AL-W-All | 0.577 0.48 0.349 0.48 | 0.743 0.732 0.501 0.732 | 0.746 0.693 0.473 0.693
D-AL-W-2 | 0.603 0.516 0.367 0.516 | 0.743 0.732 0.501 0.732 | 0.746 0.693 0.473 0.693
U-ML-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.449 0.439 0.638 0.439
U-ML-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.491 0.477 0.667 0.477
U-ML-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.491 0.477 0.667 0.477
U-ML-W-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
D-ML-1 | 0.496 0.464 0.334 0.464 | 0.266 0.182 0.45 0.182 | 0.006 0.023 0.333 0.023
D-ML-All | 0.496 0.464 0.334 0.464 | 0.455 0.455 0.333 0.455 | 0.006 0.023 0.333 0.023
D-ML-2 | 0.496 0.464 0.334 0.464 | 0.386 0.3 0.537 0.3 | 0.006 0.023 0.333 0.023
D-ML-W-1 | 0.571 0.502 0.358 0.502 | 0.453 0.455 0.394 0.455 | 0.458 0.418 0.334 0.418
D-ML-W-All | 0.496 0.464 0.334 0.464 | 0.449 0.45 0.327 0.45 | 0.458 0.418 0.334 0.418
D-ML-W-2 | 0.571 0.502 0.358 0.502 | 0.453 0.455 0.394 0.455 | 0.458 0.418 0.334 0.418
Method | Random Forest: Prec. Recall BAcc Acc | Random Subspace: Prec. Recall BAcc Acc | AdaBoost: Prec. Recall BAcc Acc
AL | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
ML | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
AL-W | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
ML-W | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.483 0.473 0.6 0.473
U-AL-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
D-AL-1 | 0.637 0.464 0.482 0.464 | 0.679 0.566 0.624 0.566 | 0.514 0.466 0.478 0.466
D-AL-All | 0.507 0.47 0.63 0.47 | 0.487 0.475 0.633 0.475 | 0.491 0.477 0.667 0.477
D-AL-2 | 0.64 0.466 0.483 0.466 | 0.669 0.573 0.566 0.573 | 0.461 0.393 0.462 0.393
D-AL-W-1 | 0.757 0.711 0.676 0.711 | 0.767 0.673 0.786 0.673 | 0.667 0.634 0.713 0.634
D-AL-W-All | 0.754 0.709 0.643 0.709 | 0.766 0.67 0.753 0.67 | 0.667 0.634 0.713 0.634
D-AL-W-2 | 0.73 0.684 0.66 0.684 | 0.766 0.67 0.753 0.67 | 0.667 0.634 0.713 0.634
U-ML-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.483 0.473 0.6 0.473
U-ML-W-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.483 0.473 0.6 0.473
U-ML-W-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.483 0.473 0.6 0.473
D-ML-1 | 0.476 0.439 0.385 0.439 | 0.517 0.468 0.374 0.468 | 0.491 0.477 0.667 0.477
D-ML-All | 0.458 0.443 0.388 0.443 | 0.517 0.468 0.374 0.468 | 0.491 0.477 0.667 0.477
D-ML-2 | 0.477 0.439 0.385 0.439 | 0.517 0.468 0.374 0.468 | 0.491 0.477 0.667 0.477
D-ML-W-1 | 0.546 0.464 0.396 0.464 | 0.607 0.509 0.463 0.509 | 0.659 0.657 0.761 0.657
D-ML-W-All | 0.491 0.441 0.382 0.441 | 0.607 0.509 0.463 0.509 | 0.659 0.657 0.761 0.657
D-ML-W-2 | 0.525 0.466 0.398 0.466 | 0.607 0.509 0.463 0.509 | 0.659 0.657 0.761 0.657
Table A19. Lymphography set, 7 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors: Prec. Recall BAcc Acc | Decision Tree: Prec. Recall BAcc Acc | Gradient Boosting: Prec. Recall BAcc Acc
AL | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
ML | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
AL-W | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
ML-W | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
D-AL-1 | 0.639 0.5 0.361 0.5 | 0.706 0.618 0.626 0.618 | 0.531 0.373 0.425 0.373
D-AL-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.573 0.405 0.297 0.405
D-AL-2 | 0.593 0.489 0.352 0.489 | 0.692 0.609 0.619 0.609 | 0.527 0.366 0.39 0.366
D-AL-W-1 | 0.603 0.516 0.367 0.516 | 0.763 0.755 0.835 0.755 | 0.585 0.541 0.7 0.541
D-AL-W-All | 0.484 0.457 0.335 0.457 | 0.751 0.741 0.635 0.741 | 0.476 0.436 0.605 0.436
D-AL-W-2 | 0.603 0.516 0.367 0.516 | 0.763 0.755 0.835 0.755 | 0.551 0.507 0.679 0.507
U-ML-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
D-ML-1 | 0.496 0.464 0.334 0.464 | 0.491 0.477 0.667 0.477 | 0.48 0.455 0.65 0.455
D-ML-All | 0.496 0.464 0.334 0.464 | 0.455 0.455 0.333 0.455 | 0.472 0.441 0.545 0.441
D-ML-2 | 0.496 0.464 0.334 0.464 | 0.491 0.477 0.667 0.477 | 0.48 0.455 0.65 0.455
D-ML-W-1 | 0.496 0.464 0.334 0.464 | 0.538 0.45 0.646 0.45 | 0.538 0.45 0.646 0.45
D-ML-W-All | 0.496 0.464 0.334 0.464 | 0.491 0.477 0.667 0.477 | 0.48 0.439 0.638 0.439
D-ML-W-2 | 0.496 0.464 0.334 0.464 | 0.538 0.45 0.646 0.45 | 0.538 0.45 0.646 0.45
Method | Random Forest: Prec. Recall BAcc Acc | Random Subspace: Prec. Recall BAcc Acc | AdaBoost: Prec. Recall BAcc Acc
AL | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
ML | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
AL-W | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
ML-W | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
D-AL-1 | 0.718 0.6 0.612 0.6 | 0.614 0.493 0.571 0.493 | 0.355 0.28 0.363 0.28
D-AL-All | 0.487 0.475 0.633 0.475 | 0.511 0.473 0.658 0.473 | 0.491 0.477 0.667 0.477
D-AL-2 | 0.653 0.566 0.59 0.566 | 0.62 0.52 0.558 0.52 | 0.356 0.28 0.332 0.28
D-AL-W-1 | 0.767 0.716 0.806 0.716 | 0.763 0.755 0.835 0.755 | 0.772 0.734 0.82 0.734
D-AL-W-All | 0.747 0.691 0.759 0.691 | 0.763 0.755 0.835 0.755 | 0.772 0.734 0.82 0.734
D-AL-W-2 | 0.767 0.716 0.806 0.716 | 0.763 0.755 0.835 0.755 | 0.772 0.734 0.82 0.734
U-ML-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
D-ML-1 | 0.453 0.45 0.33 0.45 | 0.485 0.459 0.368 0.459 | 0.491 0.477 0.667 0.477
D-ML-All | 0.455 0.455 0.333 0.455 | 0.458 0.457 0.367 0.457 | 0.491 0.477 0.667 0.477
D-ML-2 | 0.453 0.45 0.33 0.45 | 0.485 0.459 0.368 0.459 | 0.491 0.477 0.667 0.477
D-ML-W-1 | 0.702 0.48 0.505 0.48 | 0.517 0.468 0.406 0.468 | 0.491 0.477 0.667 0.477
D-ML-W-All | 0.521 0.443 0.482 0.443 | 0.517 0.468 0.406 0.468 | 0.491 0.477 0.667 0.477
D-ML-W-2 | 0.702 0.48 0.505 0.48 | 0.517 0.468 0.406 0.468 | 0.491 0.477 0.667 0.477
Table A20. Lymphography set, 9 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors: Prec. Recall BAcc Acc | Decision Tree: Prec. Recall BAcc Acc | Gradient Boosting: Prec. Recall BAcc Acc
AL | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
ML | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
AL-W | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
ML-W | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
D-AL-1 | 0.591 0.577 0.398 0.577 | 0.66 0.366 0.414 0.366 | 0.577 0.323 0.417 0.323
D-AL-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.023 0.023 0.333 0.023
D-AL-2 | 0.598 0.58 0.4 0.58 | 0.357 0.286 0.463 0.286 | 0.605 0.341 0.398 0.341
D-AL-W-1 | 0.716 0.636 0.419 0.636 | 0.763 0.755 0.835 0.755 | 0.763 0.755 0.835 0.755
D-AL-W-All | 0.699 0.627 0.413 0.627 | 0.501 0.498 0.454 0.498 | 0.491 0.477 0.667 0.477
D-AL-W-2 | 0.716 0.636 0.419 0.636 | 0.507 0.493 0.676 0.493 | 0.763 0.755 0.835 0.755
U-ML-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
D-ML-1 | 0.496 0.464 0.334 0.464 | 0.023 0.023 0.333 0.023 | 0.113 0.057 0.358 0.057
D-ML-All | 0.496 0.464 0.334 0.464 | 0.353 0.25 0.5 0.25 | 0.023 0.023 0.333 0.023
D-ML-2 | 0.496 0.464 0.334 0.464 | 0.189 0.114 0.4 0.114 | 0.113 0.057 0.358 0.057
D-ML-W-1 | 0.496 0.464 0.334 0.464 | 0.702 0.525 0.695 0.525 | 0.77 0.534 0.703 0.534
D-ML-W-All | 0.496 0.464 0.334 0.464 | 0.491 0.477 0.667 0.477 | 0.491 0.477 0.667 0.477
D-ML-W-2 | 0.496 0.464 0.334 0.464 | 0.491 0.477 0.667 0.477 | 0.77 0.534 0.703 0.534
Method | Random Forest: Prec. Recall BAcc Acc | Random Subspace: Prec. Recall BAcc Acc | AdaBoost: Prec. Recall BAcc Acc
AL | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
ML | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
AL-W | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
ML-W | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
D-AL-1 | 0.649 0.459 0.545 0.459 | 0.614 0.323 0.418 0.323 | 0.648 0.357 0.442 0.357
D-AL-All | 0.487 0.475 0.633 0.475 | 0.374 0.264 0.478 0.264 | 0.491 0.477 0.667 0.477
D-AL-2 | 0.649 0.459 0.545 0.459 | 0.683 0.348 0.465 0.348 | 0.364 0.307 0.478 0.307
D-AL-W-1 | 0.763 0.755 0.835 0.755 | 0.763 0.755 0.835 0.755 | 0.763 0.755 0.835 0.755
D-AL-W-All | 0.614 0.602 0.742 0.602 | 0.763 0.755 0.835 0.755 | 0.541 0.511 0.678 0.511
D-AL-W-2 | 0.743 0.732 0.821 0.732 | 0.763 0.755 0.835 0.755 | 0.763 0.755 0.835 0.755
U-ML-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
D-ML-1 | 0.455 0.455 0.333 0.455 | 0.514 0.461 0.338 0.461 | 0.491 0.477 0.667 0.477
D-ML-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.491 0.477 0.667 0.477
D-ML-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.491 0.477 0.667 0.477
D-ML-W-1 | 0.706 0.509 0.559 0.509 | 0.563 0.511 0.463 0.511 | 0.458 0.457 0.367 0.457
D-ML-W-All | 0.504 0.475 0.538 0.475 | 0.553 0.5 0.456 0.5 | 0.491 0.477 0.667 0.477
D-ML-W-2 | 0.676 0.502 0.555 0.502 | 0.553 0.5 0.456 0.5 | 0.458 0.457 0.367 0.457
Table A21. Lymphography set, 11 local tables—results of precision (Prec.), recall, balanced accuracy (Bacc), and classification accuracy (Acc). The best results are marked in blue.
Method | K-Nearest Neighbors: Prec. Recall BAcc Acc | Decision Tree: Prec. Recall BAcc Acc | Gradient Boosting: Prec. Recall BAcc Acc
AL | 0.652 0.611 0.426 0.611 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
ML | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
AL-W | 0.737 0.732 0.501 0.732 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
ML-W | 0.496 0.464 0.334 0.464 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-1 | 0.652 0.611 0.426 0.611 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-All | 0.594 0.52 0.37 0.52 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-2 | 0.652 0.611 0.426 0.611 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-1 | 0.737 0.732 0.501 0.732 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-All | 0.737 0.732 0.501 0.732 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-2 | 0.737 0.732 0.501 0.732 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
D-AL-1 | 0.503 0.359 0.403 0.359 | 0.707 0.23 0.485 0.23 | 0.697 0.218 0.477 0.218
D-AL-All | 0.455 0.455 0.333 0.455 | 0.673 0.386 0.6 0.386 | 0.023 0.023 0.333 0.023
D-AL-2 | 0.518 0.368 0.411 0.368 | 0.702 0.227 0.483 0.227 | 0.662 0.22 0.478 0.22
D-AL-W-1 | 0.431 0.455 0.333 0.455 | 0.023 0.023 0.333 0.023 | 0.023 0.023 0.333 0.023
D-AL-W-All | 0.455 0.455 0.333 0.455 | 0.023 0.023 0.333 0.023 | 0.023 0.023 0.333 0.023
D-AL-W-2 | 0.431 0.455 0.333 0.455 | 0.023 0.023 0.333 0.023 | 0.023 0.023 0.333 0.023
U-ML-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-1 | 0.496 0.464 0.334 0.464 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-All | 0.496 0.464 0.334 0.464 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-2 | 0.496 0.464 0.334 0.464 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
D-ML-1 | 0.496 0.464 0.334 0.464 | 0.179 0.091 0.067 0.091 | 0.023 0.023 0.333 0.023
D-ML-All | 0.455 0.455 0.333 0.455 | 0.179 0.091 0.067 0.091 | 0.023 0.023 0.333 0.023
D-ML-2 | 0.496 0.464 0.334 0.464 | 0.179 0.091 0.067 0.091 | 0.023 0.023 0.333 0.023
D-ML-W-1 | 0.484 0.457 0.335 0.457 | 0.023 0.023 0.333 0.023 | 0.023 0.023 0.333 0.023
D-ML-W-All | 0.577 0.48 0.349 0.48 | 0.023 0.023 0.333 0.023 | 0.023 0.023 0.333 0.023
D-ML-W-2 | 0.484 0.457 0.335 0.457 | 0.023 0.023 0.333 0.023 | 0.023 0.023 0.333 0.023
Method | Random Forest: Prec. Recall BAcc Acc | Random Subspace: Prec. Recall BAcc Acc | AdaBoost: Prec. Recall BAcc Acc
AL | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
ML | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
AL-W | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
ML-W | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-AL-W-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
D-AL-1 | 0.654 0.364 0.412 0.364 | 0.682 0.398 0.501 0.398 | 0.362 0.284 0.525 0.284
D-AL-All | 0.437 0.427 0.567 0.427 | 0.44 0.43 0.63 0.43 | 0.491 0.477 0.667 0.477
D-AL-2 | 0.69 0.377 0.422 0.377 | 0.569 0.355 0.406 0.355 | 0.366 0.289 0.528 0.289
D-AL-W-1 | 0.622 0.432 0.607 0.432 | 0.717 0.582 0.715 0.582 | 0.749 0.739 0.601 0.739
D-AL-W-All | 0.392 0.377 0.589 0.377 | 0.779 0.616 0.734 0.616 | 0.702 0.548 0.703 0.548
D-AL-W-2 | 0.423 0.339 0.55 0.339 | 0.772 0.605 0.726 0.605 | 0.749 0.739 0.601 0.739
U-ML-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-1 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
U-ML-W-2 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
D-ML-1 | 0.455 0.455 0.333 0.455 | 0.216 0.198 0.145 0.198 | 0.455 0.455 0.333 0.455
D-ML-All | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455 | 0.455 0.455 0.333 0.455
D-ML-2 | 0.455 0.455 0.333 0.455 | 0.375 0.352 0.258 0.352 | 0.455 0.455 0.333 0.455
D-ML-W-1 | 0.57 0.477 0.475 0.477 | 0.375 0.35 0.32 0.35 | 0.458 0.457 0.367 0.457
D-ML-W-All | 0.469 0.464 0.467 0.464 | 0.422 0.468 0.438 0.468 | 0.458 0.457 0.367 0.457
D-ML-W-2 | 0.55 0.47 0.471 0.47 | 0.347 0.37 0.334 0.37 | 0.458 0.457 0.367 0.457

References

  1. Trajdos, P.; Burduk, R. Ensemble of classifiers based on score function defined by clusters and decision boundary of linear base learners. Knowl.-Based Syst. 2024, 303, 112411. [Google Scholar] [CrossRef]
  2. Stepka, I.; Lango, M.; Stefanowski, J. A Multi–Criteria Approach for Selecting an Explanation from the Set of Counterfactuals Produced by an Ensemble of Explainers. Int. J. Appl. Math. Comput. Sci. 2024, 34, 119–133. [Google Scholar] [CrossRef]
  3. Głowania, S.; Kozak, J.; Juszczuk, P. New voting schemas for heterogeneous ensemble of classifiers in the problem of football results prediction. Procedia Comput. Sci. 2022, 207, 3393–3402. [Google Scholar] [CrossRef]
  4. Ayeelyan, J.; Utomo, S.; Rouniyar, A.; Hsu, H.C.; Hsiung, P.A. Federated learning design and functional models: Survey. Artif. Intell. Rev. 2025, 58, 21. [Google Scholar] [CrossRef]
  5. Pekala, B.; Szkoła, J.; Grochowalski, P.; Gil, D.; Kosior, D.; Dyczkowski, K. A Novel Method for Human Fall Detection Using Federated Learning and Interval-Valued Fuzzy Inference Systems. J. Artif. Intell. Soft Comput. Res. 2025, 15, 77–90. [Google Scholar] [CrossRef]
  6. Saidi, A.; Amira, A.; Nouali, O. Securing decentralized federated learning: Cryptographic mechanisms for privacy and trust. Clust. Comput. 2025, 28, 144. [Google Scholar] [CrossRef]
  7. Kim, S.; Park, H.; Chikontwe, P.; Kang, M.; Jin, K.H.; Adeli, E.; Pohl, K.M.; Park, S.H. Communication Efficient Federated Learning for Multi-Organ Segmentation via Knowledge Distillation with Image Synthesis. IEEE Trans. Med. Imaging 2025, 44, 2079–2092. [Google Scholar] [CrossRef] [PubMed]
  8. Mao, W.; Yu, B.; Zhang, C.; Qin, A.K.; Xie, Y. FedKT: Federated learning with knowledge transfer for non-IID data. Pattern Recognit. 2025, 159, 111143. [Google Scholar] [CrossRef]
  9. Huang, Z.; Lei, H.; Chen, G.; Li, H.; Li, C.; Gao, W.; Chen, Y.; Wang, Y.; Xu, H.; Ma, G.; et al. Multi-center sparse learning and decision fusion for automatic COVID-19 diagnosis. Appl. Soft Comput. 2022, 115, 108088. [Google Scholar] [CrossRef]
  10. Nguyen, N.H.; Nguyen, D.L.; Nguyen, T.B.; Nguyen, T.H.; Pham, H.H.; Nguyen, T.T.; Le Nguyen, P. Cadis: Handling cluster-skewed non-iid data in federated learning with clustered aggregation and knowledge distilled regularization. In Proceedings of the 2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid), Bangalore, India, 1–4 May 2023; pp. 249–261. [Google Scholar]
  11. Zhang, W.; Liu, X.; Tarkoma, S. FedGK: Communication-Efficient Federated Learning through Group-Guided Knowledge Distillation. ACM Trans. Internet Technol. 2024, 24, 25. [Google Scholar] [CrossRef]
  12. Casado, F.E.; Lema, D.; Iglesias, R.; Regueiro, C.V.; Barro, S. Ensemble and continual federated learning for classification tasks. Mach. Learn. 2023, 112, 3413–3453. [Google Scholar] [CrossRef]
  13. Cheng, L.; Yu, F.; Huang, P.; Liu, G.; Zhang, M.; Sun, R. Game-theoretic evolution in renewable energy systems: Advancing sustainable energy management and decision optimization in decentralized power markets. Renew. Sustain. Energy Rev. 2025, 217, 115776. [Google Scholar] [CrossRef]
  14. Pawlak, Z. Some remarks on conflict analysis. Eur. J. Oper. Res. 2005, 166, 649–654. [Google Scholar] [CrossRef]
  15. Pawlak, Z. Conflict analysis. In Proceedings of the Fifth European Congress on Intelligent Techniques and Soft Computing (EUFIT’97), Aachen, Germany, 8–12 September 1997; pp. 1589–1591. [Google Scholar]
  16. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356. [Google Scholar] [CrossRef]
  17. Przybyła-Kasperek, M.; Wakulicz-Deja, A. A dispersed decision-making system–The use of negotiations during the dynamic generation of a system’s structure. Inf. Sci. 2014, 288, 194–219. [Google Scholar] [CrossRef]
  18. Gillani, Z.; Bashir, Z.; Aquil, S. A game theoretic conflict analysis model with linguistic assessments and two levels of game play. Inf. Sci. 2024, 677, 120840. [Google Scholar] [CrossRef]
  19. Yao, Y. Three-way decisions with probabilistic rough sets. Inf. Sci. 2010, 180, 341–353. [Google Scholar] [CrossRef]
  20. Dou, H.; Li, S.; Li, J. Three-way conflict analysis model under agent-agent mutual selection environment. Inf. Sci. 2024, 673, 120718. [Google Scholar] [CrossRef]
  21. Li, X.; Yan, Y. A dynamic three-way conflict analysis model with adaptive thresholds. Inf. Sci. 2024, 657, 119999. [Google Scholar] [CrossRef]
  22. Stepaniuk, J.; Skowron, A. Three-way approximation of decision granules based on the rough set approach. Int. J. Approx. Reason. 2023, 155, 1–16. [Google Scholar] [CrossRef]
  23. Cinelli, M.; Kadziński, M.; Gonzalez, M.; Słowiński, R. How to support the application of multiple criteria decision analysis? Let us start with a comprehensive taxonomy. Omega 2020, 96, 102261. [Google Scholar] [CrossRef] [PubMed]
  24. Corrente, S.; Figueira, J.R.; Greco, S.; Słowiński, R. Multiple criteria decision support. In Handbook of Group Decision and Negotiation; Springer: Cham, Switzerland, 2021; pp. 893–920. [Google Scholar]
  25. Han, X.; Dleu, T.; Nguyen, L.; Xu, H. Conflict analysis based on rough set in e-commerce. Int. J. Adv. Manag. Sci. 2013, 2, 1–8. [Google Scholar]
  26. Skowron, A.; Ramanna, S.; Peters, J.F. Conflict analysis and information systems: A rough set approach. In Proceedings of the First International Conference on Rough Sets and Knowledge Technology (RSKT 2006), Chongqing, China, 24–26 July 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 233–240. [Google Scholar]
  27. Przybyła-Kasperek, M.; Sacewicz, J. Ensembles of random trees with coalitions-a classification model for dispersed data. Procedia Comput. Sci. 2024, 246, 1599–1608. [Google Scholar] [CrossRef]
  28. Przybyła-Kasperek, M.; Kusztal, K.; Addo, B.A. Dispersed Data Classification Model with Conflict Analysis and Parameterized Allied Relations. Procedia Comput. Sci. 2024, 246, 2215–2224. [Google Scholar] [CrossRef]
  29. Przybyła-Kasperek, M.; Kusztal, K. Rules’ quality generated by the classification method for independent data sources using the Pawlak conflict analysis model. In International Conference on Computational Science; Springer Nature: Cham, Switzerland, 2023; pp. 390–405. [Google Scholar]
  30. Bohanec, M. Car Evaluation. UCI Machine Learning Repository. 1997. Available online: https://archive.ics.uci.edu/dataset/19/car+evaluation (accessed on 5 December 2025).
  31. Michalski, R.S.; Chilausky, R.L. Knowledge acquisition by encoding expert rules versus computer induction from examples: A case study involving soybean pathology. Int. J. Hum.-Comput. Stud. 1999, 51, 239–263. [Google Scholar] [CrossRef]
  32. Mowforth, P.; Shepherd, B. Statlog (Vehicle Silhouettes) [Dataset]. UCI Machine Learning Repository. Available online: https://archive.ics.uci.edu/dataset/149/statlog+vehicle+silhouettes (accessed on 5 December 2025).
  33. Zwitter, M.; Soklic, M. Lymphography Domain; University Medical Center, Institute of Oncology: Ljubljana, Yugoslavia, 1988. [Google Scholar]
  34. Fiebig, L.; Soyka, J.; Buda, S.; Buchholz, U.; Dehnert, M.; Haas, W. Avian Influenza A(H5N1) in Humans—Line List. 2011. Available online: https://edoc.rki.de/handle/176904/7480 (accessed on 15 February 2024).
  35. Marfo, K.F.; Przybyła-Kasperek, M. Objects Diversity and its Impact on Classification Quality in Dispersed Data Environments. Vietnam J. Comput. Sci. 2024, 12, 253–275. [Google Scholar] [CrossRef]
Figure 1. Stages of proposed model.
Figure 2. Comparison results of precision, recall, balanced accuracy, and accuracy for Avian dataset.
Figure 3. Comparison results of precision, recall, balanced accuracy, and accuracy for Car dataset with 3 local tables.
Figure 4. Comparison results of precision, recall, balanced accuracy, and accuracy for Car dataset with 5 local tables.
Figure 5. Comparison results of precision, recall, balanced accuracy, and accuracy for Car dataset with 7 local tables.
Figure 6. Comparison results of precision, recall, balanced accuracy, and accuracy for Car dataset with 9 local tables.
Figure 7. Comparison results of precision, recall, balanced accuracy, and accuracy for Car dataset with 11 local tables.
Figure 8. Comparison results of precision, recall, balanced accuracy, and accuracy for Soybean dataset with 3 local tables.
Figure 9. Comparison results of precision, recall, balanced accuracy, and accuracy for Soybean dataset with 5 local tables.
Figure 10. Comparison results of precision, recall, balanced accuracy, and accuracy for Soybean dataset with 7 local tables.
Figure 11. Comparison results of precision, recall, balanced accuracy, and accuracy for Soybean dataset with 9 local tables.
Figure 12. Comparison results of precision, recall, balanced accuracy, and accuracy for Soybean dataset with 11 local tables.
Figure 13. Comparison results of precision, recall, balanced accuracy, and accuracy for Vehicle dataset with 3 local tables.
Figure 14. Comparison results of precision, recall, balanced accuracy, and accuracy for Vehicle dataset with 5 local tables.
Figure 15. Comparison results of precision, recall, balanced accuracy, and accuracy for Vehicle dataset with 7 local tables.
Figure 16. Comparison results of precision, recall, balanced accuracy, and accuracy for Vehicle dataset with 9 local tables.
Figure 17. Comparison results of precision, recall, balanced accuracy, and accuracy for Vehicle dataset with 11 local tables.
Figure 18. Comparison results of precision, recall, balanced accuracy, and accuracy for Lymphography dataset with 3 local tables.
Figure 19. Comparison results of precision, recall, balanced accuracy, and accuracy for Lymphography dataset with 5 local tables.
Figure 20. Comparison results of precision, recall, balanced accuracy, and accuracy for Lymphography dataset with 7 local tables.
Figure 21. Comparison results of precision, recall, balanced accuracy, and accuracy for Lymphography dataset with 9 local tables.
Figure 22. Comparison results of precision, recall, balanced accuracy, and accuracy for Lymphography dataset with 11 local tables.
Figure 23. Heat map for balanced accuracy and Avian dataset.
Figure 24. Heat maps for balanced accuracy and Car dataset.
Figure 25. Heat maps for balanced accuracy and Soybean dataset.
Figure 26. Heat maps for balanced accuracy and Vehicle dataset.
Figure 27. Heat maps for balanced accuracy and Lymphography dataset.
Figure 28. Comparison of balanced accuracy obtained for approaches: AL, ML, AL-W, ML-W, U-AL-1, U-AL-All, U-AL-2, U-AL-W-1, U-AL-W-All, U-AL-W-2, D-AL-1, D-AL-All, D-AL-2, D-AL-W-1, D-AL-W-All, D-AL-W-2, U-ML-1, U-ML-All, U-ML-2, U-ML-W-1, U-ML-W-All, U-ML-W-2, D-ML-1, D-ML-All, D-ML-2, D-ML-W-1, D-ML-W-All, and D-ML-W-2.
Figure 29. Comparison of balanced accuracy obtained for approaches: KNN, DT, GB, RF, RS, and AdaBoost.
Figure 30. Comparison of balanced accuracy obtained for approaches: the single strongest coalition, the two strongest coalitions, and all coalitions.
Table 1. Example—prediction vector from the measurement level.
LM | v1 | v2 | v3 | v4
1 | 0.4 | 0.2 | 0.3 | 0.1
2 | 0.3 | 0.2 | 0.2 | 0.3
3 | 0.6 | 0.1 | 0.1 | 0.2
4 | 0.1 | 0.3 | 0.2 | 0.4
5 | 0.2 | 0.3 | 0.4 | 0.1
Table 2. Example—information system S = (LM, V).
LM | v1 | v2 | v3 | v4
1 | 1 | −1 | 0 | −1
2 | 1 | 0 | 0 | 1
3 | 1 | −1 | −1 | 0
4 | −1 | 0 | −1 | 1
5 | −1 | 0 | 1 | −1
Table 3. Example—the conflict function values.
LM | 1 | 2 | 3 | 4 | 5
1 | 0 | 0.5 | 0.5 | 1 | 0.75
2 | 0.5 | 0 | 0.75 | 0.5 | 0.75
3 | 0.5 | 0.75 | 0 | 0.75 | 1
4 | 1 | 0.5 | 0.75 | 0 | 0.5
5 | 0.75 | 0.75 | 1 | 0.5 | 0
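The worked example in Tables 1–3 can be verified mechanically. The following Python sketch is illustrative only: the variable names are ours, and the discretization of the prediction vectors of Table 1 into the opinions of Table 2 follows the rule described in the methodology and is taken here as given input. The snippet computes the conflict function of Table 3 as the fraction of decision classes on which two local models express different opinions:

from itertools import combinations

# Opinions of the five local models on decision classes v1..v4 (Table 2):
# 1 = in favour of the class, -1 = against it, 0 = neutral.
opinions = {
    1: ( 1, -1,  0, -1),
    2: ( 1,  0,  0,  1),
    3: ( 1, -1, -1,  0),
    4: (-1,  0, -1,  1),
    5: (-1,  0,  1, -1),
}

def conflict(x, y):
    """Fraction of decision classes on which two local models disagree."""
    return sum(a != b for a, b in zip(x, y)) / len(x)

# Reproduces Table 3; for example, conflict(opinions[1], opinions[4]) == 1.0.
for i, j in combinations(sorted(opinions), 2):
    print(i, j, conflict(opinions[i], opinions[j]))

With this function, Table 3 follows directly from Table 2; pairs of local models with low conflict values are candidates for unified coalitions, while pairs with high values are candidates for diverse coalitions, in the sense of the two strategies compared in this study.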
Table 4. Dataset characteristics. # X means the cardinality of the set X.
Dataset | # The Training Set | # The Test Set | # Conditional Attributes | Attribute Type | # Decision Classes | Source
Avian Influenza | 205 | 89 | 5 | Categorical and Integer | 4 | [34]
Car Evaluation | 1210 | 518 | 6 | Categorical | 4 | [30]
Soybean | 307 | 376 | 35 | Categorical | 19 | [31]
Vehicle Silhouettes | 592 | 254 | 18 | Integer | 4 | [32]
Lymphography | 104 | 44 | 18 | Categorical | 4 | [33]
Table 5. List of decision classes for each dataset.
Dataset | Decision Classes
Avian Influenza | probable, confirmed fatal, probable fatal, suspected under treatment
Car Evaluation | Unacceptable, Acceptable, Good, Very good
Soybean | diaporthe-stem-canker, charcoal-rot, rhizoctonia-root-rot, phytophthora-rot, brown-stem-rot, powdery-mildew, downy-mildew, brown-spot, bacterial-blight, bacterial-pustule, purple-seed-stain, anthracnose, phyllosticta-leaf-spot, alternarialeaf-spot, frog-eye-leaf-spot, diaporthe-pod-&-stem-blight, cyst-nematode, 2-4-d-injury, herbicide-injury
Vehicle Silhouettes | Bus, Opel, Saab, Van
Lymphography | normal find, metastases, malign lymph, fibrosis
Table 6. p-values for the post hoc Dunn–Bonferroni test for methods designated as 1—AL, 2—ML, 3—AL-W, 4—ML-W, 5—U-AL-1, 6—U-AL-All, 7—U-AL-2, 8—U-AL-W-1, 9—U-AL-W-All, 10—U-AL-W-2, 11—D-AL-1, 12—D-AL-All, 13—D-AL-2, 14—D-AL-W-1, 15—D-AL-W-All, 16—D-AL-W-2, 17—U-ML-1, 18—U-ML-All, 19—U-ML-2, 20—U-ML-W-1, 21—U-ML-W-All, 22—U-ML-W-2, 23—D-ML-1, 24—D-ML-All, 25—D-ML-2, 26—D-ML-W-1, 27—D-ML-W-All, and 28—D-ML-W-2. Blue color indicates statistically significant results.
No. | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28
1 | – 1 1 1 1 1 1 1 1 1 0.01 1 0.01 0.01 0.01 0.01 1 1 1 1 1 1 1 1 1 0.01 0.01 0.01
2 | 1 – 1 1 1 1 1 1 1 1 0.01 1 0.01 0.01 0.01 0.01 1 1 1 1 1 1 1 1 1 0.01 0.01 0.01
3 | 1 1 – 1 1 1 1 1 1 1 0.01 1 0.02 0.01 0.01 0.01 1 1 1 1 1 1 1 1 1 0.03 0.07 0.01
4 | 1 1 1 – 1 1 1 1 1 1 0.01 1 0.05 0.01 0.01 0.01 1 1 1 1 1 1 1 1 1 0.05 0.12 0.03
5 | 1 1 1 1 – 1 1 1 1 1 0.01 1 0.01 0.01 0.01 0.01 1 1 1 1 1 1 1 1 1 0.01 0.03 0.01
6 | 1 1 1 1 1 – 1 1 1 1 0.01 1 0.01 0.01 0.01 0.01 1 1 1 1 1 1 1 1 1 0.01 0.01 0.01
7 | 1 1 1 1 1 1 – 1 1 1 0.01 1 0.01 0.01 0.01 0.01 1 1 1 1 1 1 1 1 1 0.01 0.03 0.01
8 | 1 1 1 1 1 1 1 – 1 1 0.01 1 0.04 0.01 0.01 0.01 1 1 1 1 1 1 1 1 1 0.05 0.12 0.02
9 | 1 1 1 1 1 1 1 1 – 1 0.01 1 0.04 0.01 0.01 0.01 1 1 1 1 1 1 1 1 1 0.05 0.12 0.02
10 | 1 1 1 1 1 1 1 1 1 – 0.01 1 0.04 0.01 0.01 0.01 1 1 1 1 1 1 1 1 1 0.05 0.12 0.02
11 | 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 – 1 1 1 1 1 0.01 0.01 0.01 0.01 0.01 0.01 1 0.33 1 1 1 1
12 | 1 1 1 1 1 1 1 1 1 1 1 – 1 0.04 0.62 0.10 1 1 1 1 1 1 1 1 1 1 1 1
13 | 0.01 0.01 0.02 0.05 0.01 0.01 0.01 0.04 0.04 0.04 1 1 – 1 1 1 0.01 0.01 0.01 0.03 0.03 0.03 1 1 1 1 1 1
14 | 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 1 0.04 1 – 1 1 0.01 0.01 0.01 0.01 0.01 0.01 0.18 0.01 0.07 1 1 1
15 | 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 1 0.62 1 1 – 1 0.01 0.01 0.01 0.01 0.01 0.01 1 0.15 0.98 1 1 1
16 | 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 1 0.10 1 1 1 – 0.01 0.01 0.01 0.01 0.01 0.01 0.40 0.02 0.17 1 1 1
17 | 1 1 1 1 1 1 1 1 1 1 0.01 1 0.01 0.01 0.01 0.01 – 1 1 1 1 1 1 1 1 0.01 0.02 0.01
18 | 1 1 1 1 1 1 1 1 1 1 0.01 1 0.01 0.01 0.01 0.01 1 – 1 1 1 1 1 1 1 0.01 0.02 0.01
19 | 1 1 1 1 1 1 1 1 1 1 0.01 1 0.01 0.01 0.01 0.01 1 1 – 1 1 1 1 1 1 0.01 0.02 0.01
20 | 1 1 1 1 1 1 1 1 1 1 0.01 1 0.03 0.01 0.01 0.01 1 1 1 – 1 1 1 1 1 0.03 0.08 0.02
21 | 1 1 1 1 1 1 1 1 1 1 0.01 1 0.03 0.01 0.01 0.01 1 1 1 1 – 1 1 1 1 0.03 0.08 0.02
22 | 1 1 1 1 1 1 1 1 1 1 0.01 1 0.03 0.01 0.01 0.01 1 1 1 1 1 – 1 1 1 0.04 0.09 0.02
23 | 1 1 1 1 1 1 1 1 1 1 1 1 1 0.18 1 0.40 1 1 1 1 1 1 – 1 1 1 1 1
24 | 1 1 1 1 1 1 1 1 1 1 0.33 1 1 0.01 0.15 0.02 1 1 1 1 1 1 1 – 1 1 1 1
25 | 1 1 1 1 1 1 1 1 1 1 1 1 1 0.07 0.98 0.17 1 1 1 1 1 1 1 1 – 1 1 1
26 | 0.01 0.01 0.03 0.05 0.01 0.01 0.01 0.05 0.05 0.05 1 1 1 1 1 1 0.01 0.01 0.01 0.03 0.03 0.04 1 1 1 – 1 1
27 | 0.01 0.01 0.07 0.12 0.03 0.01 0.03 0.12 0.12 0.12 1 1 1 1 1 1 0.02 0.02 0.02 0.08 0.08 0.09 1 1 1 1 – 1
28 | 0.01 0.01 0.01 0.03 0.01 0.01 0.01 0.02 0.02 0.02 1 1 1 1 1 1 0.01 0.01 0.01 0.02 0.02 0.02 1 1 1 1 1 –
Table 7. p-values for the post hoc Dunn–Bonferroni test for approaches: KNN, DT, GB, RF, RS, and AdaBoost. Blue color indicates statistically significant results.
p-Value | KNN | DT | GB | RF | RS | AdaBoost
KNN | – | 0.01 | 0.01 | 0.01 | 0.01 | 0.06
DT | 0.01 | – | 1 | 0.50 | 0.01 | 0.01
GB | 0.01 | 1 | – | 0.01 | 0.01 | 0.01
RF | 0.01 | 0.50 | 0.01 | – | 0.01 | 0.01
RS | 0.01 | 0.01 | 0.01 | 0.01 | – | 0.01
AdaBoost | 0.06 | 0.01 | 0.01 | 0.01 | 0.01 | –
Table 8. p-values for the post hoc Dunn–Bonferroni test for approaches: the single strongest coalition, the two strongest coalitions, and all coalitions.
p-Value | One Strongest | Two Strongest | All Coalitions
One strongest | – | 0.67 | 0.89
Two strongest | 0.67 | – | 0.07
All coalitions | 0.89 | 0.07 | –
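For readers who wish to reproduce this kind of analysis, pairwise Dunn–Bonferroni p-value matrices such as those in Tables 6–8 can be computed, for example, with the scikit-posthocs package. The snippet below is a minimal sketch under our own assumptions: the balanced-accuracy samples are placeholders, not the values measured in this study, and only the shape of the analysis is shown:

import scikit_posthocs as sp

# Placeholder balanced-accuracy samples for the three aggregation variants
# compared in Table 8 (one value per experimental configuration).
samples = [
    [0.61, 0.70, 0.72, 0.55],  # single strongest coalition
    [0.62, 0.71, 0.70, 0.53],  # two strongest coalitions
    [0.60, 0.73, 0.71, 0.56],  # all coalitions
]

# Dunn's test with Bonferroni adjustment returns a symmetric DataFrame of
# pairwise p-values, analogous in form to Tables 6-8.
pvals = sp.posthoc_dunn(samples, p_adjust='bonferroni')
print(pvals)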