Article

NiaAutoARM: Automated Framework for Constructing and Evaluating Association Rule Mining Pipelines

Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška Cesta 46, 2000 Maribor, Slovenia
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(12), 1957; https://doi.org/10.3390/math13121957
Submission received: 8 May 2025 / Revised: 29 May 2025 / Accepted: 12 June 2025 / Published: 13 June 2025
(This article belongs to the Section E1: Mathematics and Computer Science)

Abstract

Numerical Association Rule Mining (NARM), which simultaneously handles both numerical and categorical attributes, is a powerful approach for uncovering meaningful associations in heterogeneous datasets. However, designing effective NARM solutions is a complex task involving multiple sequential steps, such as data preprocessing, algorithm selection, hyper-parameter tuning, and the definition of rule quality metrics, which together form a complete processing pipeline. In this paper, we introduce NiaAutoARM, a novel Automated Machine Learning (AutoML) framework that leverages stochastic population-based metaheuristics to automatically construct full association rule mining pipelines. Extensive experimental evaluation on ten benchmark datasets demonstrated that NiaAutoARM consistently identifies high-quality pipelines, improving both rule accuracy and interpretability compared to baseline configurations. Furthermore, NiaAutoARM achieves superior or comparable performance to the state-of-the-art VARDE algorithm while offering greater flexibility and automation. These results highlight the framework’s practical value for automating NARM tasks, reducing the need for manual tuning, and enabling broader adoption of association rule mining in real-world applications.

1. Introduction

The design of Machine Learning (ML) pipelines usually demands user interaction to select appropriate preprocessing methods, perform feature engineering, select the most appropriate ML method, and set a combination of hyper-parameters [1]. Therefore, preparing an ML pipeline is complex and largely inaccessible to non-specialists in the data science or artificial intelligence domains [2]. On the other hand, tuning the entire pipeline to produce the best results can also be very time-consuming for users, especially when dealing with very complex datasets.
Automated Machine Learning (AutoML) methods have emerged to bring the application of ML methods closer to users (in the sense of ML democratization) [2,3]. The main benefit of these methods is that they search for the best pipeline for different ML tasks automatically. To date, AutoML approaches can be found for solving classification problems, neural architecture search, regression problems [4], and reinforcement learning.
Association Rule Mining (ARM) is an ML method for discovering relationships between items in transaction databases. Bare ARM is limited because it operates on categorical attributes only, so numerical attributes must be discretized before use. Recently, Numerical Association Rule Mining (NARM) was proposed as a variant of bare ARM that handles numerical and categorical attributes concurrently, thus removing this bottleneck. NARM also delivers several benefits: its results can be more reliable and accurate, and they contain less noise than those of bare ARM. Currently, the NARM problem is mainly tackled using population-based meta-heuristics, which can cope with large search spaces effectively. (Please note that the acronym ARM is used as a synonym for NARM in this paper.)
The ARM pipeline (see Figure 1) is far from simple, since it consists of several components: (1) data preprocessing, (2) mining algorithm selection, (3) hyper-parameter optimization, (4) evaluation metric selection, and (5) evaluation. Each of these components can be implemented using several ML methods.
Consequently, composing the ARM pipeline manually requires a great deal of human intervention and is a potentially time-consuming task. Automating this composition therefore leads to ARM democratization and, consequently, to a new AutoML domain, i.e., AutoARM.
The data entering the ARM pipeline are in the form of a transaction database. The optional first component of the ARM pipeline is preprocessing, where the data can be further preprocessed using various ML methods. The selection of the proper processing component presents a crucial step, where the most appropriate population-based meta-heuristic nature-inspired (NI) algorithm needs to be determined for ARM. NI algorithms mainly encompass two classes of population-based algorithms: Evolutionary Algorithms (EAs) [5] and Swarm Intelligence (SI)-based algorithms [6].
According to previous studies, no universal population-based meta-heuristic exists for ARM that achieves the best results on all datasets. This phenomenon is also justified by the No Free Lunch (NFL) theorem of Wolpert and Macready [7]. The next component in the pipeline is hyper-parameter optimization for the selected population-based meta-heuristic, where the best combination of hyper-parameters is searched for. Finally, the selection of favorable association rules depends on the composition of the most suitable metrics captured in the fitness function. In our case, the fitness function is represented as a linear combination of several ARM metrics (e.g., support, confidence, amplitude, etc.), each weighted with a particular weight.
A structured comparison of existing ARM approaches is presented in Table 1, focusing on their level of automation, hyper-parameter tuning (HPT) capabilities, and optimization techniques. The table illustrates the diversity in methodological design, ranging from manual, heuristic-based systems to fully automated, data-driven solutions.
To the best of the authors’ knowledge, no specific AutoML methods exist for constructing ARM pipelines automatically. Therefore, the contributions of this study are as follows:
  • To propose the first AutoARM solution for searching for the best ARM pipeline, where this automatic search is represented as an optimization problem.
  • To dedicate special attention to the preprocessing steps of ARM, which have been somewhat neglected in recent research.
  • To implement the proposed framework as a new Python package, NiaAutoARM v0.1.1 (for Python 3.9).
  • To evaluate the proposed framework rigorously on several datasets.
The structure of the remainder of this paper is as follows: The materials and methods, needed for understanding the observed subjects that follow, are discussed in Section 2. The proposed method for automated ARM is described in Section 3 in detail. The experiments and the obtained results are the subjects of Section 4, where a short discussion of the results is also presented. This paper is then concluded in Section 5 with a summarization of the performed work and an outlining of the potential directions for future work.

2. Materials and Methods

This section highlights the topics necessary for understanding the subjects of this paper. In line with this, the following topics are covered:
  • NI meta-heuristics.
  • AutoML.
  • NiaAML.
  • NiaARM.
The mentioned topics are discussed in detail in the remainder of this section.

2.1. NI Meta-Heuristics

Exact solving of NP-hard optimization problems [16] requires enormous time and space resources. However, in practical applications, exact solutions are often unnecessary as we are typically satisfied with high-quality approximate solutions obtained within a reasonable time. Consequently, interest in approximate, or heuristic, approaches for solving intractable problems has grown significantly, especially following the emergence of nature-inspired (NI) algorithms.
Heuristics solve the optimization problems directly, i.e., on the lower level. The term “meta-heuristic” refers to a higher-level procedure or heuristic in the fields of computer science, mathematical optimization, and engineering, and it is used to search for, find, generate, or select a heuristic that may offer a good solution to an optimization problem, particularly for large problems (i.e., NP-hard problems) or in cases of limited, incomplete, or imperfect information [17].
One of the first meta-heuristic concepts that used NI algorithms was introduced by Grefenstette in [18], who applied a meta-Genetic Algorithm (meta-GA) to control the parameters of another GA. Recently, this approach has become increasingly widespread, especially in the field of Machine Learning (ML), where meta-heuristics are used for setting the hyper-parameters of neural networks (NNs) [19,20].
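To make the meta-optimization concept concrete, the following minimal sketch shows a higher-level loop tuning a hyper-parameter of a lower-level heuristic. It is purely illustrative and not taken from any of the cited works: the inner heuristic is a toy Gaussian random search minimizing the sphere function, and the outer level tunes its step size.

```python
import random

def inner_optimizer(step_size, fes=200):
    """Toy lower-level heuristic: Gaussian random search on the sphere function."""
    x = [random.uniform(-5, 5) for _ in range(5)]
    best = sum(v * v for v in x)
    for _ in range(fes):
        cand = [v + random.gauss(0, step_size) for v in x]
        f = sum(v * v for v in cand)
        if f < best:
            x, best = cand, f
    return best

# Higher (meta) level: search for the step size that makes the inner heuristic perform best.
best_step, best_score = None, float("inf")
for _ in range(30):
    step = random.uniform(0.01, 2.0)
    score = inner_optimizer(step)
    if score < best_score:
        best_step, best_score = step, score
print(f"best step size: {best_step:.3f} (inner fitness {best_score:.4f})")
```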

2.2. AutoML

Using ML methods in practice demands experienced human ML experts, who are typically expensive and hard to find on the market. On the other hand, computing is becoming cheaper day by day. This fact has led to the advent of AutoML, which is capable of constructing ML pipelines of a similar, or even better, quality than those designed by human experts [2]. Consequently, AutoML enables the so-called democratization of ML. This means that the usage of ML methods is drawn closer to the user by AutoML; thus, this technology tries to avoid the human-in-the-loop principle [21].
AutoML automates ML methods through ML pipelines. Indeed, these pipelines are the control points of the AutoML system. Typically, an ML pipeline consists of the following processing steps:
  • Preprocessing.
  • Processing with specific ML methods.
  • Hyper-parameter optimization.
  • Evaluation.
AutoML is currently a heavily studied research area. The recent advances in the field have been summarized in several review papers [1,3,22,23]. There also exist dozens of AutoML applications [24,25], among which a special position is held by NiaAML, discussed in more detail in the remainder of this section.

2.3. NiaAML

NiaAML is an AutoML method based on stochastic NI algorithms for optimization, where AutoML is modeled as an optimization problem. The first version of NiaAML [26] covers composing classification pipelines, where a stochastic NI algorithm searches for the best classification pipeline. The AutoML pipeline includes the following steps: automatic feature selection, feature scaling, classifier selection, and hyper-parameter optimization. Each classifier configuration found by the optimizer was tested using cross-validation.
Following NiaAML, the NiaAML2 method [27] was proposed to eliminate the main weakness of the original NiaAML, in which hyper-parameter optimization is performed simultaneously with the construction of the classification pipelines in a single phase, so that only one instance of the stochastic algorithm is needed. In NiaAML2, pipeline construction and hyper-parameter optimization are divided into two separate phases, with two instances of nature-inspired algorithms deployed one after the other: the first phase covers the composition of the classification pipeline, while the second is devoted to hyper-parameter optimization.

2.4. NiaARM

NiaARM is a Python framework [28] that comprehensively implements the ARM algorithm from [12], where ARM is modeled as a single-objective, continuous optimization problem. The fitness function in NiaARM is defined as a weighted sum of arbitrary evaluation metrics. One of the key strengths of NiaARM is that it is based on the NiaPy framework [29]; thus, different NI algorithms can be used in the optimizer role. To the best of the authors’ knowledge, NiaARM is the only comprehensive framework for NARM in which all NARM steps are implemented, i.e., preprocessing, optimization, and visualization. Other benefits of NiaARM are its good documentation and the many examples provided by its maintainers.
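For orientation, mining rules with NiaARM can look roughly as follows. This is a minimal sketch assuming the Dataset/get_rules interface advertised in the NiaARM documentation; exact signatures may differ between versions, and the dataset path is an assumption for illustration.

```python
# Minimal NiaARM usage sketch (assumes the Dataset/get_rules interface of the
# niaarm package; signatures may vary between versions).
from niaarm import Dataset, get_rules
from niapy.algorithms.basic import DifferentialEvolution

data = Dataset("Abalone.csv")                 # local CSV path (an assumption)
algo = DifferentialEvolution(population_size=50)
metrics = ("support", "confidence")           # metrics forming the weighted fitness
rules, run_time = get_rules(data, algo, metrics, max_iters=30)

print(f"{len(rules)} rules mined in {run_time:.2f} s")
```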

3. Proposed Framework: NiaAutoARM

The proposed NiaAutoARM framework was mainly inspired by the meta-heuristic concept, in which a higher-level meta-heuristic controls the hyper-parameters of a lower-level heuristic. Both levels draw on algorithm implementations from the NiaPy library (Figure 2).
Indeed, the NiaAutoARM higher-level meta-heuristic controls the behavior of the lower-level NI heuristic devoted to problem solving, i.e., ARM. The task of the control meta-heuristic is to search for the optimal hyper-parameter setting of the lower-level heuristic. These hyper-parameter settings direct the ARM pipeline construction. As can be observed from Figure 2, there is two-way communication between the control and the problem heuristics: (1) the pipeline constructed by the higher-level meta-heuristic is transmitted to the lower-level heuristic, and (2) the results of the constructed pipeline are transmitted back to the higher-level meta-heuristic, which evaluates them in order to identify the best one.

3.1. Higher-Level Meta-Heuristic

We defined the problem of ARM pipeline construction as a continuous optimization problem. This means that an arbitrary population-based NI meta-heuristic operating in a continuous search space can be applied to solve it. In the NiaAutoARM higher-level meta-heuristic, each individual in the population of solutions represents one feasible ARM pipeline, encoded into the representation of an individual:
$$
x_i^{(t)} = \Big(\, \underbrace{x_{i,1}^{(t)}}_{\text{ALGORITHM}},\; \underbrace{y_{i,1}^{(t)},\, y_{i,2}^{(t)}}_{\text{CONTROL-PARAM}},\; \underbrace{p_{i,1}^{(t)}, \ldots, p_{i,P}^{(t)}}_{\text{PREPROCESSING}},\; \underbrace{z_{i,1}^{(t)}, \ldots, z_{i,M}^{(t)}}_{\text{METRICS}},\; \underbrace{w_{i,1}^{(t)}, \ldots, w_{i,M}^{(t)}}_{\text{METRIC-WEIGHTS}} \,\Big), \tag{1}
$$
where parameter P denotes the number of potential preprocessing methods, and parameter M is the number of potential ARM metrics to be applied. As is evident from Equation (1), each real-valued element of a solution in the genotype search space lies within the interval $[0, 1]$ and decodes a particular NiaAutoARM hyper-parameter of the pipeline in the phenotype solution space, as presented in Table 2; each hyper-parameter is determined from its corresponding domain of values.
As is evident from the table, the ALGORITHM component denotes the specific stochastic NI population-based algorithm, which is chosen from the pool of available algorithms (typically provided by the user from the NiaPy library [28]) according to the value of $x_{i,1}^{(t)}$. The CONTROL-PARAM component determines the magnitudes of two algorithm parameters: the maximum number of individuals NP and the maximum number of fitness function evaluations MAXFES, which serves as the termination condition of the lower-level heuristic. Both values, $y_{i,1}^{(t)}$ and $y_{i,2}^{(t)}$, are mapped in genotype–phenotype mapping to the specific domains of the mentioned parameters, as proposed by Mlakar et al. in [30]. The PREPROCESSING component determines which methods from the pool of available preprocessing methods are applied to the dataset: if $P = 0$, no preprocessing method is applied, while if $P > 0$ and $p_{i,j}^{(t)} > 0.5$ for $j = 1, \ldots, P$, then the j-th preprocessing method from the pool is applied. For instance, the pool of preprocessing methods in Table 2 consists of the following: “Min-Max normalization” (MM), “Z-Score normalization” (ZS), “Data Squashing” (DS), “Remove Highly Correlated features” (RHC), and “Discretization K-means” (DK). The METRICS component is reserved for the pool of M rule evaluation metrics devoted to estimating the quality of the mined association rules. Additionally, the weights of the metrics are included via the METRIC-WEIGHTS component, which weighs the influence of each evaluation metric on the appropriate association rule.
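The decoding just described can be summarized with a short sketch. This is a simplified illustration, not the framework’s actual implementation; the pools follow Table 2, while the NP and MAXFES domains are assumed example values.

```python
# Simplified genotype -> phenotype decoding of one individual (illustrative sketch;
# pools follow Table 2, while the NP/MAXFES domains below are assumed example values).
import random

ALGORITHMS = ["PSO", "DE", "GA", "ILSHADE", "LSHADE", "jDE"]
PREPROCESSING = ["MM", "ZS", "DS", "RHC", "DK"]
METRICS = ["Supp", "Conf", "Cover", "Amp", "Incl", "Comp"]
NP_DOMAIN = (10, 30)           # assumed domain of the population size NP
MAXFES_DOMAIN = (2000, 10000)  # assumed domain of the evaluation budget MAXFES

def scale(gene, lo, hi):
    """Map a gene from [0, 1] to an integer in [lo, hi]."""
    return lo + round(gene * (hi - lo))

def decode(x, P=len(PREPROCESSING), M=len(METRICS)):
    algorithm = ALGORITHMS[min(int(x[0] * len(ALGORITHMS)), len(ALGORITHMS) - 1)]
    np_ = scale(x[1], *NP_DOMAIN)
    maxfes = scale(x[2], *MAXFES_DOMAIN)
    prep = [PREPROCESSING[j] for j in range(P) if x[3 + j] > 0.5]    # threshold 0.5
    metrics = [METRICS[j] for j in range(M) if x[3 + P + j] > 0.5]   # threshold 0.5
    weights = x[3 + P + M:3 + P + 2 * M]                             # metric weights
    return algorithm, np_, maxfes, prep, metrics, weights

x = [random.random() for _ in range(3 + 5 + 2 * 6)]  # a random genotype
print(decode(x))
```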
Typically, the evaluation metrics illustrated in Table 3 are employed in the NiaAutoARM higher-level meta-heuristic. These metrics were chosen because they reflect both the statistical strength and the practical usefulness of the discovered rules. The framework uses these metrics in the fitness function of the lower-level heuristic. Consequently, this allows the higher-level meta-heuristic to be directed into the more promising areas of the underlying hyper-parameter search space while still catering to a dataset-specific context.
Although the quality of the mined association rules is calculated in the lower-level algorithm using a weighted linear combination of the ARM metrics, the higher-level meta-heuristic, for fairness, estimates the quality of the pipeline using the following fitness function:
$$
f(x_i^{(t)}) = \frac{\alpha \cdot \mathrm{supp}(X \Rightarrow Y) + \beta \cdot \mathrm{conf}(X \Rightarrow Y)}{\alpha + \beta}, \tag{2}
$$
where α and β designate the impact of each ARM metric on the quality of the solution. A pipeline is discarded if it produces no rules or fails to decode into the solution space.
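In code, the pipeline-level evaluation of Equation (2) reduces to a few lines. This is a sketch under the stated convention that a failed or rule-less pipeline is discarded, modeled here by returning negative infinity.

```python
# Pipeline-level fitness of Equation (2) (sketch; the discard of a failed or
# rule-less pipeline is modeled by returning negative infinity).
def pipeline_fitness(best_support, best_confidence, alpha=1.0, beta=1.0):
    if best_support is None or best_confidence is None:  # no rules produced -> discard
        return float("-inf")
    return (alpha * best_support + beta * best_confidence) / (alpha + beta)
```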
The pseudo-code of the proposed NiaAutoARM higher-level meta-heuristic for constructing the ARM pipelines is presented in Algorithm 1. The higher-level meta-heuristic starts with a random initialization of the population (function Initialize_real-valued_vectors_randomly in Line 1). After evaluating the population according to Equation (2) and determining the best solution (function Eval_and_select_the_best in Line 2), the evolution cycle is started (Lines 3–15); it is terminated using the function Termination_condition_not_met. Within the evolution cycle, each individual x_i in the population P (Lines 4–14) is first modified (function Modify_using_NI_algorithm in Line 5). This modification produces a trial solution x_trial. Next, both the trial and target solutions are mapped to the phenotype solution space, producing the trial pipeline and the target cur_pipeline (and also the current best) solutions (Lines 6 and 7). If the fitness value of the trial pipeline, evaluated using the Eval function, is better than that of the current pipeline (Line 8), the target solution is replaced by the trial one (Line 9). Finally, if the trial pipeline is also better than the global best pipeline best_pipeline (Line 11), the global best pipeline becomes the trial pipeline (Line 12).
Algorithm 1 A pseudo-code of the NiaAutoARM higher-level meta-heuristic.
 1: P ← Initialize_real-valued_vectors_randomly(x_i)
 2: best_pipeline ← Eval_and_select_the_best(P)
 3: while Termination_condition_not_met do
 4:     for each x_i ∈ P do
 5:         x_trial ← Modify_using_NI_algorithm(x_i)
 6:         pipeline ← Construct_pipeline(x_trial)
 7:         cur_pipeline ← Construct_pipeline(x_i)
 8:         if Eval(pipeline) ≥ Eval(cur_pipeline) then
 9:             x_i ← x_trial                ▹ Replace the worse individual
10:         end if
11:         if Eval(pipeline) ≥ Eval(best_pipeline) then
12:             best_pipeline ← pipeline
13:         end if
14:     end for
15: end while
16: return best_pipeline

3.2. Lower-Level Heuristics

The NiaAutoARM lower-level heuristic can be any NI algorithm from the NiaPy library, which contains implementations of NI algorithms that can be used for solving the ARM problem. The lower-level heuristic is controlled via hyper-parameters, such as the algorithm’s parameters, the preprocessing methods, and the directives for constructing the fitness function. It is devoted to solving the problem and returning the corresponding results.
Because the design and implementation of the lower-level heuristic algorithms are described in detail in the corresponding documentation of the NiaPy library, we focus here only on the construction of the fitness function, which is defined as follows:
$$
f(\mathbf{x}) = \sum_{i=1}^{M} w_i \cdot z_i(\mathbf{x}), \tag{3}
$$
where the variable $w_i$ denotes the weight of the corresponding ARM metric, and $z_i(\mathbf{x})$ is a pointer to the function for calculating the corresponding ARM metric. Please note that the sum of all weights should be one, in other words, $\sum_{i=1}^{M} w_i = 1.0$.
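A direct transcription of Equation (3) is sketched below. It is illustrative, not NiaARM’s internal code; the metric functions are assumed to be supplied by the caller, and the weights are normalized so that they sum to 1.0, as required above.

```python
# Weighted-sum rule fitness of Equation (3) (illustrative sketch; the metric
# functions are assumed to be supplied by the caller).
def rule_fitness(rule, metric_fns, weights):
    total = sum(weights.values())
    # Normalize so that the effective weights sum to 1.0, as required above.
    return sum((w / total) * metric_fns[name](rule) for name, w in weights.items())

# Example (hypothetical metric functions supp_fn and conf_fn):
# f = rule_fitness(rule, {"Supp": supp_fn, "Conf": conf_fn}, {"Supp": 0.7, "Conf": 0.3})
```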

3.3. An Example of Genotype–Phenotype Mapping

An example of decoding an ARM pipeline into the solution space is illustrated in Figure 3, where the parameters are set as P = 1 and M = 6. Let us suppose that the domains of the hyper-parameters are given in accordance with Table 2 and that the individual in the genotype space is defined as presented in Table 3.
Then, the higher-level meta-heuristic algorithm transmits the hyper-parameters to the lower-level heuristic algorithm via the following program call:
$$
\mathrm{Alg}\big[\Gamma(x_{i,1})\big]\Big(\, \underbrace{P,\, M}_{\text{Param}},\; \underbrace{\Gamma(y_{i,1})}_{NP},\; \underbrace{\Gamma(y_{i,2})}_{\text{MAXFES}},\; \underbrace{\Gamma(\mathrm{Prep}, \mathbf{p})}_{\text{Preprocess}},\; \underbrace{\Gamma(\mathrm{Metr}, \mathbf{z})}_{\text{Metrics}},\; \underbrace{\Gamma(\mathrm{Metr}, \mathbf{w})}_{\text{Weights}} \,\Big),
$$
where the function Γ denotes the mapping of genotype values to phenotype values. Let us mention that the scalar values of ’Algorithm call’, NP, and MAXFES are decoded by mapping their values from the interval [0, 1] to the domain values in the solution space. On the other hand, the preprocessing methods and ARM metrics represent sets, where each member is taken from the sets Prep and Metr when the corresponding element of the vectors p and z exceeds the threshold of 0.5. Interestingly, the weight vector can be treated either statically or adaptively, depending on the setting of the parameter weight_adaptation. When this parameter is set to true, the adapted values from vector w indicate the impact of each ARM metric in the linear combination of ARM metrics within the fitness function. If this parameter is set to false, the values are fixed to 1.0.
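The weight_adaptation switch described above can be expressed compactly. The snippet below is an illustrative sketch; the parameter name follows the text, while the function itself is hypothetical.

```python
# Illustrative sketch of the weight_adaptation switch (function name is hypothetical).
def effective_weights(w, weight_adaptation=True):
    if weight_adaptation:
        return list(w)        # adaptive: use the evolved weights from the genotype
    return [1.0] * len(w)     # static: every selected metric weighs 1.0
```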
As a result of the pipeline application, the support and confidence of the best association rule are returned to the higher-level meta-heuristic.

4. Results

The primary goal of the experiments was to evaluate whether NiaAutoARM can automatically find an optimal pipeline for solving various ARM problems. A series of experiments on the most commonly used publicly available ARM datasets was conducted to test this hypothesis.
The UCI ML datasets, listed in Table 4, were used for evaluating the performance of the proposed method [31]. Each dataset is characterized by the number of transactions, the number of attributes, and their types, which can be either categorical (discrete) or numerical (real). These datasets were selected since they vary in terms of the number of transactions, the types of attributes, and the total number of attributes they contain. They are also commonly used in the ARM literature [30], making them appropriate benchmarks for evaluating the generalizability of the proposed NiaAutoARM framework. It is worth mentioning that the proposed method automatically determines the most suitable preprocessing algorithm as a part of its process; therefore, no manual preprocessing was applied to the original datasets.
In our experiments, we used two NI algorithms as the higher-level meta-heuristics for ARM pipeline optimization, namely DE and PSO. Both have appeared in several recent studies in the ARM domain in either original or hybridized form [32,33,34]. To ensure a fair comparison, the most important parameters of both algorithms were set equally. The population size was set to NP = 30, and the maximum number of fitness function evaluations was set to MAXFES = 1000 (i.e., the number of pipeline evaluations), following the parameter ranges used in prior AutoML and NARM studies and balancing computational feasibility with optimization performance. These parameters were selected empirically after preliminary tuning runs, ensuring that the optimization had sufficient search power without incurring prohibitive computational costs. All other parameters of the NI algorithms (i.e., GA, DE, PSO, jDE, LSHADE, and ILSHADE) were left at their default settings, as implemented in the NiaPy framework, to maintain fairness across comparisons. In all the experiments, the lower-level optimization algorithms for ARM were selected as in the example illustrated in Table 2.
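For reproducibility, the two higher-level meta-heuristics can be instantiated with the above settings via NiaPy, as sketched below (assuming the NiaPy 2.x interface; the pipeline-construction problem object itself is hypothetical and omitted here).

```python
# Instantiating the two higher-level meta-heuristics with NP = 30 via NiaPy
# (assuming the NiaPy 2.x API); the MAXFES = 1000 budget is enforced through the
# NiaPy Task. The pipeline-construction problem object itself is hypothetical here.
from niapy.algorithms.basic import DifferentialEvolution, ParticleSwarmAlgorithm
from niapy.task import Task

de = DifferentialEvolution(population_size=30)
pso = ParticleSwarmAlgorithm(population_size=30)
# task = Task(problem=pipeline_problem, max_evals=1000)  # pipeline_problem: hypothetical
# best_x, best_fitness = de.run(task)
```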
Each experimental run produced the best pipeline for a combination of the specific dataset and algorithm. Considering the stochastic nature of the DE and PSO algorithms, the reported results are the average fitness function values of the best obtained pipelines over 30 independent runs.
The quality of the constructed pipeline was evaluated regarding Equation (2) in the higher-level meta-heuristic algorithm, while the fitness function in the lower-level heuristic algorithm was calculated as a weighted sum of the ARM metrics decoded from the corresponding individual by the NiaAutoARM framework.

4.1. Experimental Evaluation

The following experiments were conducted to analyze the newly proposed NiaAutoARM thoroughly:
  • A baseline ARM pipeline optimization that allowed just one preprocessing component and disabled ARM metric weight adaptation.
  • A study of the influence of adapting the ARM metric weights on the quality of the ARM pipeline construction.
  • A study of the influence of selecting more preprocessing components on the quality of the ARM pipeline construction.
  • A comparison with the state-of-the-art VARDE algorithm.
In the remainder of this section, all of the experimental results are presented in detail, showcasing the usefulness and efficiency of the proposed method.

4.1.1. Baseline ARM Pipeline Construction

The purpose of the first experiment was to establish a foundational comparison for all the subsequent experiments. In this experiment, no ARM metric weight adaptation was applied, ensuring that the generated pipelines operated in their default configurations. Additionally, each generated pipeline was restricted to a single preprocessing method, eliminating the variability introduced by multiple preprocessing components.
All the results for this experiment are reported numerically in Table 5 and Table 6, and they are represented graphically in Figure 4 for the PSO and DE higher-level meta-heuristics, respectively. The tables are structured as follows: The column ’Preprocessing method’ denotes the frequency of the preprocessing algorithms in the best obtained pipelines over all 30 runs. The column ’Hyper-parameters’ reports the average obtained population sizes (NP) and maximum function evaluations (MAXFES) of the best obtained ARM pipelines. Lastly, the column ‘Metrics & Weights’ reports the average value of each used ARM evaluation metric; the number in parentheses denotes the number of pipelines in which the specific metric was used. Since no ARM metric weight adaptation was used in the baseline experiment, all the values are equal to 1. Each row in the tables refers to one experimental dataset.
Figure 4 presents the obtained average fitness values along with the average number of rules generated by the best obtained pipelines. Additionally, the frequencies of the lower-level heuristic algorithms are depicted. The fitness values are marked with blue dash-dotted lines, whereas the number of rules is marked with a red dotted line. The frequencies of the lower-level heuristic algorithms are presented as differently colored lines radiating from the center of the graph outward toward each dataset.
The results in Table 5, produced by the PSO higher-level meta-heuristic algorithm, show that, among the preprocessing methods, MM, ZS, and RHC were selected most frequently, although ’No preprocessing’ was selected in most of the pipelines overall, regardless of the dataset. The ARM metrics support, confidence, and coverage appeared consistently across most datasets. Notably, support and confidence were present in nearly all the pipelines for datasets like Abalone, Balance scale, and Basketball, indicating that these metrics are essential for the underlying optimization process. Metrics like amplitude, which were used less frequently, are absent in many datasets, suggesting that the current algorithm configuration does not prioritize such metrics. The hyper-parameters NP and MAXFES varied depending on the dataset, influencing the ARM pipeline optimization process.
Table 6 shows the results for the DE higher-level meta-heuristic algorithm. Similar to the results of the PSO, key ARM metrics, like support, confidence, and coverage, are found consistently in many of the generated pipelines. However, there are subtle differences in the distribution of these metrics across the pipelines. For instance, the metric amplitude was selected just for the dataset German. Regarding the preprocessing methods and hyper-parameters, a similar distribution can be found as in the results of the PSO algorithm.
The graphical results show that both DE and PSO obtained similar results in terms of fitness value. The number of rules was slightly dispersed, although no large deviations were detected. The key differences were in the selection of the lower-level heuristic algorithm. For the majority of datasets, the PSO and jDE algorithms were selected most often as the lower-level heuristic algorithms, and this held for both higher-level meta-heuristic algorithms. The other algorithms, such as GA, DE, ILSHADE, and LSHADE, were rarely selected as the lower-level heuristic, probably due to their complexity (or lack thereof).
To summarize the results of the baseline experiment, we can conclude that the best results were obtained when either no preprocessing was applied or when MM was used on the dataset. The NP parameter tended to be higher for more complex datasets (i.e., those with more attributes), such as Buying, German, House16, and Ionosphere, while it remained lower for the other, less demanding datasets. Regarding the selection of specific ARM evaluation metrics, both algorithms focused on the more common ones, i.e., those usually used in Evolutionary ARM [30]. Overall, these results indicate the robustness of the DE and PSO algorithms as higher-level meta-heuristics, while reinforcing the potential benefits of further exploration into ARM metric weight adaptation and diversified preprocessing strategies.
Please note that all the subsequent results are reported in the same manner.

4.1.2. Influence of ARM Metric Weight Adaptation on the Quality of ARM Pipeline Construction

The purpose of this experiment was to analyze the impact of ARM metric weight adaptation on the performance of the ARM pipeline construction. The ARM metric weights play a crucial role in guiding the optimization process, as they influence the evaluation and selection of the candidate association rules. By incorporating the ARM weight adaptation mechanism, the pipeline can dynamically adjust the importance of ARM metrics, such as support, confidence, coverage, and others, tailored to the characteristics of the dataset. This experiment aimed to determine whether adapting these weights improved the quality of the discovered rules, as reflected in the pipelines’ metrics. The results were compared to the baseline configuration, where no weight adaptation was applied.
Table 7 and Table 8 present the results obtained by the PSO and DE higher-level meta-heuristic algorithms, respectively. A similar selection of preprocessing methods to that in the previous experiment was observed, with the preprocessing methods MM, ZS, and None applied most frequently. The hyper-parameters yielded higher values for the harder datasets. Considering the ARM metrics, support and confidence still appeared with high weight values in the majority of the pipelines, whereas metrics like amplitude or comprehensibility were utilized less often and with lower weights.
From the results in Figure 5, we can draw similar conclusions to those of the baseline experiment, although the ARM metric weight adaptation provided slightly higher fitness values than those achieved previously. While these differences were not statistically significant according to the Wilcoxon test (p-value = 0.41), the adaptation still yielded overall better ARM pipelines for the majority of datasets.
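The significance check reported above corresponds to a paired Wilcoxon signed-rank test over the per-run best fitness values; a sketch with placeholder data (not the actual experimental values) is shown below.

```python
# Paired Wilcoxon signed-rank test over per-run best fitness values, as used for
# the significance check above (placeholder data, not the actual experimental values).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
baseline = rng.normal(0.80, 0.05, 30)             # 30 runs without weight adaptation
adapted = baseline + rng.normal(0.01, 0.03, 30)   # 30 runs with weight adaptation
stat, p = wilcoxon(baseline, adapted)
print(f"Wilcoxon p-value: {p:.3f}")               # p > 0.05 -> not statistically significant
```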

4.1.3. Influence of Selecting More Preprocessing Methods on the Quality of ARM Pipeline Construction

The parameter P controls the number of preprocessing components allowed in an ARM pipeline. By increasing P beyond 1, we introduce the possibility of combining multiple dataset preprocessing methods, which can potentially enhance the quality of the generated rules. This increased flexibility enables the pipeline to address complex data characteristics (e.g., variability in feature scaling, noise reduction, or dimensionality reduction) more effectively. However, it also poses challenges, including higher computational costs and a broader search space to be explored by the inner optimization algorithms. In this section, we analyze the impact of setting the parameter P > 1 on the quality of the ARM pipelines, focusing on the resulting ARM metrics and their corresponding weights, as well as on the computational trade-offs for the experimental datasets. The results of the selected preprocessing algorithms are depicted as heatmaps of all the possible combinations. The results in Table 9 and Table 10 suggest that the support and confidence ARM metrics were again heavily included in the calculation of the fitness function, achieving high values in the majority of the pipelines for both higher-level meta-heuristic algorithms. The coverage and inclusion ARM metrics were also involved in many pipelines, although their average weights were smaller. There was no notable difference in the selected hyper-parameters compared to the previous two experiments.
Since this experiment included selecting more preprocessing methods, their selection frequency is reported as heatmaps in Figure 6b for the PSO meta-heuristic algorithm and Figure 7b for the DE meta-heuristic algorithm. The selection of the preprocessing method varied, of course, per dataset, as the data were distributed differently. However, looking at the overall selection process, specific combinations stand out. For the PSO algorithm, the most frequent combinations were {MM, RHC} and MM, while, for the DE meta-heuristic algorithm, they were {RHC, ZS}, {MM, RHC, ZS}, and RHC. The MM preprocessing method was selected frequently across all datasets by both algorithms, likely due to its ability to normalize feature values to a standard range (which enhances the ability of the inner optimization algorithm to explore the search space more efficiently). This preprocessing method ensures that all features contribute equally during the optimization process, mitigating the influence of features with larger numeric ranges and facilitating better rule generation.
Figure 6a and Figure 7a illustrate the fitness values and the number of generated rules for the PSO and DE meta-heuristic algorithms. The DE meta-heuristic algorithm produced ARM pipelines with slightly higher fitness values, while the PSO meta-heuristic algorithm generated a greater number of rules. It is also evident that the PSO algorithm was selected the most as the lower-level heuristic algorithm in both scenarios.

4.1.4. Comparison with the VARDE State-of-the-Art Algorithm

The last experiment was reserved for an indirect comparison with the state-of-the-art VARDE algorithm [30] for ARM, which is a hybridized version of DE designed specifically for the exploration and exploitation of the ARM search space. The best reported variations of VARDE were used in this comparative study. The comparison was not direct, since the pipelines produced by NiaAutoARM are dataset-specific. Therefore, for each dataset, we observed which components of the pipeline provided the best results (i.e., the lower-level heuristic algorithm, the preprocessing component, and the rule evaluation metrics), and we performed 30 independent runs with these settings. The results of these dataset-specific independent runs were compared to the results of VARDE using the Wilcoxon signed-rank test.
The results are depicted in Table 11.
As is evident from the table, the pipelines found by NiaAutoARM provided significantly better results than the VARDE method in some instances. Therefore, NiaAutoARM can be distinguished as an effective framework for ARM.

4.2. Discussion

The results show notable trends in the optimization of ARM pipelines. The PSO algorithm was selected predominantly over jDE, DE, LSHADE, and ILSHADE as the lower-level heuristic method. This preference can be attributed to the PSO’s ability to balance exploration and exploitation effectively, enabling it to navigate the search space efficiently and avoid premature convergence. In contrast, the other algorithms may converge too quickly, potentially limiting their effectiveness in identifying diverse high-quality pipelines, thus making them less suitable for this specific optimization task. Min-max scaling was the most frequently used preprocessing method, likely due to its simplicity and ability to standardize data efficiently. Additionally, support and confidence were the dominant metrics in the generated pipelines, reflecting their fundamental role in ARM.
While the approach exhibits a slightly higher computational complexity due to the iterative optimization and exploration of diverse preprocessing combinations, this is a manageable trade-off (see Table 12). The superior results achieved, particularly in comparison to the VARDE state-of-the-art hybrid DE method, underscore the robustness of the approach. Notably, the method operates without requiring prior knowledge of the algorithms or datasets, making it adaptable and versatile for various applications.
In summary, the NiaAutoARM framework is capable of finding the best association rules automatically, without any intervention from the user. This aligns the framework with the goals of democratizing ML. However, a basic problem remains unsolved from the user’s perspective, i.e., how to make explanations and predictions on the basis of the mined association rules. Therefore, the primary research direction for the future remains to integrate NiaAutoARM with emerging technologies, like eXplainable AI (XAI). On the other hand, the hybridization of meta-heuristics presents a promising research issue for the future.

5. Conclusions

This paper presents NiaAutoARM, an innovative framework designed for the optimization of ARM pipelines using stochastic population-based NI algorithms. The framework integrates the selection of a lower-level heuristic, its hyper-parameter optimization, dataset preprocessing techniques, and the search for the most suitable fitness function, represented as a weighted sum of ARM evaluation metrics (where the weights are subject to adaptation). Extensive evaluations on ten widely used datasets from the UC Irvine repository underscore the framework’s effectiveness, particularly for users with limited domain expertise. A comparative analysis against the state-of-the-art hybrid DE algorithm VARDE highlights the superior performance of the proposed framework in generating high-quality ARM pipelines. In general, the obtained results underscore the effectiveness of NiaAutoARM’s layered meta-heuristic design in optimizing full NARM pipelines, offering clear advantages over conventional or single-layer optimization methods in terms of flexibility, adaptability, and overall performance.
Our future work aims to address several key areas: First, integrating additional NI algorithms with adaptive parameter tuning could enhance the pipeline optimization process further. Second, incorporating other advanced preprocessing techniques and alternative metrics might improve pipeline diversity and domain-specific applicability. Third, exploring parallel and distributed computing strategies could mitigate computational complexity, making the framework more scalable for larger datasets and more complex mining tasks.
In addition, extending the framework to support multi-objective optimization would allow a deeper exploration of trade-offs between potentially conflicting metrics, advancing its utility for real-world applications that demand interpretable and actionable rule sets. Furthermore, a promising and underexplored direction is to investigate how the heterogeneity of attribute types, specifically the varying proportions of numerical and categorical attributes, influences the performance, quality, and interpretability of the mined association rules. To date, this question has received little systematic attention in the literature, and examining it could lead to tailored strategies that further enhance the effectiveness of NiaAutoARM across mixed-attribute datasets.

Author Contributions

Conceptualization, I.F.J. and I.F.; Methodology, U.M. and I.F.J.; Software, U.M. and I.F.J.; Validation, U.M.; Formal analysis, U.M. and I.F.; Investigation, I.F.J. and I.F.; Writing—original draft, U.M., I.F.J. and I.F.; Writing—review and editing, U.M., I.F.J. and I.F.; Visualization, U.M.; Supervision, I.F.; Project administration, I.F.; Funding acquisition, U.M. and I.F.J. All authors have read and agreed to the published version of the manuscript.

Funding

Iztok Fister, Jr. wishes to thank the Slovenian Research Agency (Program No. P2-0057) for their financial support. Uroš Mlakar also wishes to thank the Slovenian Research and Innovation Agency (Program No. P2-0041) for their financial support.

Data Availability Statement

The data used in this study are available on request from the corresponding authors. The code of the proposed NiaAutoARM is publicly available on https://github.com/firefly-cpp/NiaAutoARM (accessed on 30 April 2025).

Acknowledgments

The authors express their gratitude to Žiga Stupan for his insightful input during the initial discussions of this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yao, Q.; Wang, M.; Chen, Y.; Dai, W.; Li, Y.F.; Tu, W.W.; Yang, Q.; Yu, Y. Taking human out of learning applications: A survey on automated machine learning. arXiv 2018, arXiv:1810.13306. [Google Scholar]
  2. Hutter, F.; Kotthoff, L.; Vanschoren, J. Automated Machine Learning: Methods, Systems, Challenges; Springer Nature: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  3. He, X.; Zhao, K.; Chu, X. AutoML: A survey of the state-of-the-art. Knowl.-Based Syst. 2021, 212, 106622. [Google Scholar] [CrossRef]
  4. Conrad, F.; Mälzer, M.; Schwarzenberger, M.; Wiemer, H.; Ihlenfeldt, S. Benchmarking AutoML for regression tasks on small tabular data in materials design. Sci. Rep. 2022, 12, 19350. [Google Scholar] [CrossRef]
  5. Eiben, A.E.; Smith, J.E. Introduction to Evolutionary Computing, 2nd ed.; Springer Publishing Company: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  6. Blum, C.; Merkle, D. Swarm Intelligence: Introduction and Applications; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar] [CrossRef]
  7. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  8. Agrawal, R.; Srikant, R. Fast Algorithms for Mining Association Rules in Large Databases. In Proceedings of the 20th International Conference on Very Large Data Bases, VLDB’94, San Francisco, CA, USA, 12–15 September 1994; pp. 487–499. [Google Scholar]
  9. Han, J.; Cheng, H.; Xin, D.; Yan, X. Frequent Pattern Mining: Current Status and Future Directions. Data Min. Knowl. Discov. 2007, 15, 55–86. [Google Scholar] [CrossRef]
  10. Alatas, B.; Akin, E.; Karci, A. MODENAR: Multi-objective differential evolution algorithm for mining numeric association rules. Appl. Soft Comput. 2008, 8, 646–656. [Google Scholar] [CrossRef]
  11. Altay, E.V.; Alatas, B. Differential evolution and sine cosine algorithm based novel hybrid multi-objective approaches for numerical association rule mining. Inf. Sci. 2021, 554, 198–221. [Google Scholar] [CrossRef]
  12. Fister, I.; Iglesias, A.; Galvez, A.; Del Ser, J.; Osaba, E.; Fister, I. Differential evolution for association rule mining using categorical and numerical attributes. In Proceedings of the Intelligent Data Engineering and Automated Learning–IDEAL 2018: 19th International Conference, Madrid, Spain, 21–23 November 2018; Proceedings, Part I 19. Springer: Berlin/Heidelberg, Germany, 2018; pp. 79–88. [Google Scholar]
  13. Minaei-Bidgoli, B.; Barmaki, R.; Nasiri, M. Mining numerical association rules via multi-objective genetic algorithms. Inf. Sci. 2013, 233, 15–24. [Google Scholar] [CrossRef]
  14. Heraguemi, K.E.; Kamel, N.; Drias, H. Association rule mining based on bat algorithm. J. Comput. Theor. Nanosci. 2015, 12, 1195–1200. [Google Scholar] [CrossRef]
  15. Kuo, R.J.; Chao, C.M.; Chiu, Y. Application of particle swarm optimization to association rule mining. Appl. Soft Comput. 2011, 11, 326–336. [Google Scholar] [CrossRef]
  16. Garey, M.R.; Johnson, D.S. Computers and Intractability: A Guide to the Theory of NP-Completeness, 1st ed.; Series of Books in the Mathematical Sciences; W. H. Freeman: New York, NY, USA, 1979. [Google Scholar]
  17. Glover, F.; Kochenberger, G.A. (Eds.) Handbook of Metaheuristics; International Series in Operations Research & Management Science; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
  18. Grefenstette, J. Optimization of control parameters for genetic algorithms. IEEE Trans. Syst. Man Cybern. 1986, 16, 122–128. [Google Scholar] [CrossRef]
  19. Cui, H.; Bai, J. A new hyperparameters optimization method for convolutional neural networks. Pattern Recognit. Lett. 2019, 125, 828–834. [Google Scholar] [CrossRef]
  20. Stang, M.; Meier, C.; Rau, V.; Sax, E. An Evolutionary Approach to Hyper-Parameter Optimization of Neural Networks. In Human Interaction and Emerging Technologies, Proceedings of the 1st International Conference on Human Interaction and Emerging Technologies (IHIET 2019), Nice, France, 22–24 August 2019; Ahram, T., Taiar, R., Colson, S., Choplin, A., Eds.; Springer: Cham, Switzerland, 2020; pp. 713–718. [Google Scholar]
  21. Holzinger, A. Interactive machine learning for health informatics: When do we need the human-in-the-loop? Brain Inform. 2016, 3, 119–131. [Google Scholar] [CrossRef]
  22. Zöller, M.A.; Huber, M.F. Benchmark and survey of automated machine learning frameworks. J. Artif. Intell. Res. 2021, 70, 409–472. [Google Scholar] [CrossRef]
  23. Escalante, H.J. Automated Machine Learning—A Brief Review at the End of the Early Years. In Automated Design of Machine Learning and Search Algorithms; Springer: Cham, Switzerland, 2021; pp. 11–28. [Google Scholar]
  24. Musigmann, M.; Akkurt, B.H.; Krähling, H.; Nacul, N.G.; Remonda, L.; Sartoretti, T.; Henssen, D.; Brokinkel, B.; Stummer, W.; Heindel, W.; et al. Testing the applicability and performance of Auto ML for potential applications in diagnostic neuroradiology. Sci. Rep. 2022, 12, 13648. [Google Scholar] [CrossRef]
  25. Barreiro, E.; Munteanu, C.R.; Cruz-Monteagudo, M.; Pazos, A.; González-Díaz, H. Net-Net auto machine learning (AutoML) prediction of complex ecosystems. Sci. Rep. 2018, 8, 12340. [Google Scholar] [CrossRef]
  26. Fister, I.; Zorman, M.; Fister, D.; Fister, I. Continuous optimizers for automatic design and evaluation of classification pipelines. In Frontier Applications of Nature Inspired Computation; Springer Tracts in Nature-Inspired Computing; Springer: Singapore, 2020; pp. 281–301. [Google Scholar]
  27. Pečnik, L.; Fister, I.; Fister, I., Jr. NiaAML2: An Improved AutoML Using Nature-Inspired Algorithms. In Proceedings of the Advances in Swarm Intelligence: 12th International Conference, ICSI 2021, Qingdao, China, 17–21 July 2021; Proceedings, Part II 12. Springer: Berlin/Heidelberg, Germany, 2021; pp. 243–252. [Google Scholar]
  28. Stupan, Ž.; Fister, I. NiaARM: A minimalistic framework for Numerical Association Rule Mining. J. Open Source Softw. 2022, 7, 4448. [Google Scholar] [CrossRef]
  29. Vrbančič, G.; Brezočnik, L.; Mlakar, U.; Fister, D.; Fister, I., Jr. NiaPy: Python microframework for building nature-inspired algorithms. J. Open Source Softw. 2018, 3, 613. [Google Scholar] [CrossRef]
  30. Mlakar, U.; Fister, I. Variable-Length Differential Evolution for Numerical and Discrete Association Rule Mining. IEEE Access 2023, 12, 4239–4254. [Google Scholar] [CrossRef]
  31. Dua, D.; Graff, C. UCI Machine Learning Repository. 2017. Available online: https://archive.ics.uci.edu/ (accessed on 30 October 2024).
  32. Telikani, A.; Gandomi, A.H.; Shahbahrami, A. A survey of evolutionary computation for association rule mining. Inf. Sci. 2020, 524, 318–352. [Google Scholar] [CrossRef]
  33. Yan, D.; Zhao, X.; Lin, R.; Bai, D. PPQAR: Parallel PSO for quantitative association rule mining. Peer-to-Peer Netw. Appl. 2019, 12, 1433–1444. [Google Scholar] [CrossRef]
  34. Su, T.; Xu, H.; Zhou, X. Particle swarm optimization-based association rule mining in big data environment. IEEE Access 2019, 7, 161008–161016. [Google Scholar] [CrossRef]
Figure 1. The structure of the basic ARM pipeline.
Figure 2. NiaAutoARM framework for automated ARM.
Figure 3. An example of the genotype–phenotype mapping within the ARM pipeline construction.
Figure 4. Results for the baseline ARM pipeline optimization, where the averages of the best pipelines in terms of fitness values, number of generated rules, and the used lower-level heuristic algorithms are reported. (a) Results for the PSO higher-level meta-heuristic algorithm without ARM metric weight adaptation and just one preprocessing method. (b) Results for the DE higher-level meta-heuristic algorithm without ARM metric weight adaptation and just one preprocessing method.
Figure 5. Results for the ARM pipeline construction using ARM metric weight adaptation, where the averages of the best pipelines in terms of fitness values, number of generated rules, and the used inner optimization algorithms are presented. (a) Results for the PSO higher-level meta-heuristic algorithm with ARM metric weight adaptation and just one preprocessing method. (b) Results for the DE higher-level meta-heuristic algorithm with ARM metric weight adaptation and just one preprocessing method.
Figure 6. Results of the PSO ARM pipeline optimization using ARM metric weight adaptation and selecting more preprocessing components, where the averages of the best pipelines in terms of fitness values, number of generated rules, and the used lower-level heuristic algorithms and preprocessing methods are reported. (a) Results of the preprocessing components for the PSO higher-level meta-heuristic algorithm with ARM metric weight adaptation and more preprocessing methods. (b) Heatmap of the preprocessing components for the PSO higher-level meta-heuristic algorithm with ARM metric weight adaptation and more preprocessing methods.
Figure 7. Results for the DE ARM pipeline optimization using ARM metric weight adaptation and selecting more preprocessing methods, where the averages of the best pipelines in terms of fitness values, number of generated rules, and the used inner optimization algorithms and preprocessing methods are reported. (a) Results of the preprocessing components for the DE higher-level meta-heuristic algorithm with ARM metric weight adaptation and more preprocessing methods. (b) Heatmap of the preprocessing components for the DE higher-level meta-heuristic algorithm with ARM metric weight adaptation and more preprocessing methods.
Table 1. Comparison of association rule mining methods.

Method | Type | AutoML | HPT | Optimization
Apriori [8] | Traditional | | | None
FP-Growth [9] | Traditional | | | None
MODENAR [10] | NARM | | | DE
Hybrid DE-SCA [11] | NARM | | | DE + SCA ¹
NARM-DE [12] | NARM | | | DE
Multi-objective GA [13] | NARM | | | GA
BA [14] | NARM | | | BA
PSO [15] | NARM | | | PSO
NiaAutoARM | AutoML-NARM | | | AutoARM pipeline

¹ Sine cosine algorithm.
Table 2. Hyper-parameters and their domains.

Nr. | Hyper-Parameter | Domain
1 | ALGORITHM | {PSO, DE, GA, ILSHADE, LSHADE, jDE}
2 | CONTROL-PARAM | {NP, MAXFES}
3 | PREPROCESSING | {MM, ZS, DS, RHC, DK}
4 | METRICS | {Supp, Conf, Cover, Amp, Incl, Comp}
5 | METRIC-WEIGHTS | $\sum_{i=1}^{M} w_i = 1.0$
Table 3. ARM metrics used for evaluating the mined rules.

Metric | Evaluation Function
Support | $\mathrm{Supp}(X \Rightarrow Y) = \frac{|\{\,t_i \mid t_i \supseteq X \wedge t_i \supseteq Y\,\}|}{N}$
Confidence | $\mathrm{Conf}(X \Rightarrow Y) = \frac{\mathrm{Supp}(X \Rightarrow Y)}{\mathrm{Supp}(X)}$
Coverage | $\mathrm{Cover}(X \Rightarrow Y) = \frac{|\{\,t_i \mid t_i \supseteq Y\,\}|}{M}$
Amplitude | $\mathrm{Amp}(X \Rightarrow Y) = \mathrm{Supp}(X \Rightarrow Y) - \frac{\mathrm{Supp}(X)\,\mathrm{Supp}(Y)}{N}$
Inclusion | $\mathrm{Incl}(X \Rightarrow Y) = \frac{\mathrm{Supp}(X \Rightarrow Y)}{\mathrm{Supp}(X)}$
Comprehensibility | $\mathrm{Comp}(X \Rightarrow Y) = \frac{\mathrm{Supp}(X \Rightarrow Y)}{\mathrm{Supp}(Y)}$
Table 4. The evaluation datasets used in the experiments.

Dataset | Nr. of Inst. | Nr. of Attr. | Attr. Type [D/N]
Abalone | 4177 | 9 | D/N
Balance scale | 625 | 5 | D/N
Basketball | 96 | 5 | N
Bolts | 40 | 8 | N
Buying | 100 | 40 | N
German | 1000 | 20 | D/N
House16 | 22,784 | 17 | N
Ionosphere | 351 | 35 | D/N
Quake | 2178 | 4 | N
Wine | 178 | 14 | N
Table 5. Results for the PSO algorithm, with P = 1, without ARM metric weight adaptation.

| Dataset | MM | ZS | DS | RHC | KM | N ᵃ | NP | MAXFES | Supp | Conf | Cover | Amp | Incl | Comp |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Abalone | 0.27 | 0.07 | – | 0.20 | – | 0.47 | 11.7 ± 5.2 | 9656.2 ± 796.4 | 1.00 ± 0.00 (25) | 1.00 ± 0.00 (23) | 1.00 ± 0.00 (19) | – | 1.00 ± 0.00 (22) | 1.00 ± 0.00 (15) |
| Balance scale | 0.30 | 0.07 | – | 0.10 | – | 0.53 | 17.6 ± 9.0 | 8370.3 ± 2598.6 | 1.00 ± 0.00 (24) | 1.00 ± 0.00 (28) | 1.00 ± 0.00 (8) | – | 1.00 ± 0.00 (19) | 1.00 ± 0.00 (16) |
| Basketball | 0.47 | – | – | – | – | 0.53 | 11.7 ± 4.8 | 9851.2 ± 543.7 | 1.00 ± 0.00 (25) | 1.00 ± 0.00 (26) | 1.00 ± 0.00 (18) | – | 1.00 ± 0.00 (29) | 1.00 ± 0.00 (12) |
| Bolts | 0.23 | 0.10 | – | 0.07 | – | 0.60 | 13.2 ± 6.6 | 8946.9 ± 2189.4 | 1.00 ± 0.00 (23) | 1.00 ± 0.00 (25) | 1.00 ± 0.00 (16) | 1.00 ± 0.00 (2) | 1.00 ± 0.00 (26) | 1.00 ± 0.00 (8) |
| Buying | 0.23 | 0.20 | – | – | 0.07 | 0.50 | 17.5 ± 8.3 | 9039.1 ± 1742.6 | 1.00 ± 0.00 (25) | 1.00 ± 0.00 (18) | 1.00 ± 0.00 (11) | 1.00 ± 0.00 (3) | 1.00 ± 0.00 (2) | 1.00 ± 0.00 (8) |
| German | – | – | 0.97 | – | – | 0.03 | 20.1 ± 7.2 | 5871.8 ± 3046.2 | 1.00 ± 0.00 (16) | 1.00 ± 0.00 (20) | 1.00 ± 0.00 (12) | 1.00 ± 0.00 (12) | 1.00 ± 0.00 (15) | 1.00 ± 0.00 (17) |
| House16 | 0.30 | 0.13 | – | 0.10 | – | 0.47 | 15.5 ± 8.3 | 8642.5 ± 2038.8 | 1.00 ± 0.00 (25) | 1.00 ± 0.00 (21) | 1.00 ± 0.00 (22) | 1.00 ± 0.00 (3) | 1.00 ± 0.00 (7) | 1.00 ± 0.00 (12) |
| Ionosphere | 0.17 | – | – | 0.03 | 0.10 | 0.70 | 18.3 ± 9.2 | 8600.9 ± 2393.7 | 1.00 ± 0.00 (28) | 1.00 ± 0.00 (19) | 1.00 ± 0.00 (7) | 1.00 ± 0.00 (1) | 1.00 ± 0.00 (3) | 1.00 ± 0.00 (2) |
| Quake | 0.30 | 0.03 | – | 0.07 | – | 0.60 | 11.4 ± 4.3 | 9622.0 ± 1074.7 | 1.00 ± 0.00 (27) | 1.00 ± 0.00 (24) | 1.00 ± 0.00 (17) | 1.00 ± 0.00 (1) | 1.00 ± 0.00 (17) | 1.00 ± 0.00 (18) |
| Wine | 0.23 | 0.03 | – | 0.10 | – | 0.63 | 12.6 ± 6.3 | 9471.0 ± 1301.1 | 1.00 ± 0.00 (24) | 1.00 ± 0.00 (25) | 1.00 ± 0.00 (20) | 1.00 ± 0.00 (1) | 1.00 ± 0.00 (14) | 1.00 ± 0.00 (18) |
| Average | | | | | | | 14.94 ± 3.06 | 8807.19 ± 1089.31 | 1.00 ± 0.00 (24.20 ± 3.06) | 1.00 ± 0.00 (22.90 ± 3.11) | 1.00 ± 0.00 (15.00 ± 4.92) | 1.00 ± 0.00 (3.29 ± 3.65) | 1.00 ± 0.00 (15.40 ± 8.73) | 1.00 ± 0.00 (12.60 ± 5.00) |

ᵃ No preprocessing of the dataset.
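In Tables 5–10, the metric weights act as a convex combination over the selected ARM metrics. The sketch below assumes that the fitness of a mined rule is the weighted sum of its metric values, with the weights normalized so that they sum to 1.0 as required by the METRIC-WEIGHTS constraint in Table 2; the concrete numbers are placeholders.

```python
# Assumed rule fitness: weighted sum of the selected ARM metric values,
# with the weights summing to 1.0 (METRIC-WEIGHTS constraint in Table 2).

def rule_fitness(metric_values, weights):
    """metric_values and weights are dicts keyed by metric name."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(w * metric_values[m] for m, w in weights.items())

values = {"Supp": 0.42, "Conf": 0.91, "Incl": 0.60}   # placeholder metric values
weights = {"Supp": 0.5, "Conf": 0.3, "Incl": 0.2}     # placeholder weights
print(round(rule_fitness(values, weights), 3))        # 0.603
```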
Table 6. Results for the DE algorithm, with P = 1, without ARM metric weight adaptation.

| Dataset | MM | ZS | DS | RHC | KM | N ᵃ | NP | MAXFES | Supp | Conf | Cover | Amp | Incl | Comp |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Abalone | 0.43 | 0.10 | – | 0.20 | – | 0.27 | 13.2 ± 5.4 | 9360.9 ± 1150.8 | 1.00 ± 0.00 (27) | 1.00 ± 0.00 (22) | 1.00 ± 0.00 (20) | – | 1.00 ± 0.00 (23) | 1.00 ± 0.00 (8) |
| Balance scale | 0.33 | 0.07 | – | 0.20 | – | 0.40 | 14.8 ± 7.4 | 8216.3 ± 2234.2 | 1.00 ± 0.00 (23) | 1.00 ± 0.00 (28) | 1.00 ± 0.00 (7) | – | 1.00 ± 0.00 (23) | 1.00 ± 0.00 (20) |
| Basketball | 0.47 | 0.17 | – | 0.13 | – | 0.23 | 12.9 ± 3.7 | 9160.8 ± 1468.8 | 1.00 ± 0.00 (22) | 1.00 ± 0.00 (25) | 1.00 ± 0.00 (17) | – | 1.00 ± 0.00 (28) | 1.00 ± 0.00 (9) |
| Bolts | 0.27 | 0.13 | – | 0.10 | – | 0.50 | 15.4 ± 6.3 | 9107.0 ± 1343.4 | 1.00 ± 0.00 (25) | 1.00 ± 0.00 (21) | 1.00 ± 0.00 (15) | – | 1.00 ± 0.00 (23) | 1.00 ± 0.00 (10) |
| Buying | 0.33 | 0.17 | – | 0.10 | 0.10 | 0.30 | 13.8 ± 6.6 | 8793.3 ± 1813.5 | 1.00 ± 0.00 (28) | 1.00 ± 0.00 (13) | 1.00 ± 0.00 (6) | – | 1.00 ± 0.00 (1) | – |
| German | – | – | 1.00 | – | – | – | 18.7 ± 7.4 | 7992.1 ± 2403.1 | 1.00 ± 0.00 (16) | 1.00 ± 0.00 (13) | 1.00 ± 0.00 (15) | 1.00 ± 0.00 (19) | 1.00 ± 0.00 (15) | 1.00 ± 0.00 (13) |
| House16 | 0.50 | 0.20 | – | 0.03 | – | 0.27 | 14.2 ± 6.4 | 8751.8 ± 1865.9 | 1.00 ± 0.00 (23) | 1.00 ± 0.00 (25) | 1.00 ± 0.00 (23) | – | 1.00 ± 0.00 (8) | 1.00 ± 0.00 (17) |
| Ionosphere | 0.30 | – | – | 0.10 | 0.10 | 0.50 | 15.1 ± 6.6 | 8769.9 ± 2080.5 | 1.00 ± 0.00 (30) | 1.00 ± 0.00 (19) | 1.00 ± 0.00 (4) | – | – | 1.00 ± 0.00 (2) |
| Quake | 0.13 | 0.23 | – | 0.17 | – | 0.47 | 11.1 ± 2.9 | 9406.5 ± 899.3 | 1.00 ± 0.00 (24) | 1.00 ± 0.00 (18) | 1.00 ± 0.00 (18) | – | 1.00 ± 0.00 (21) | 1.00 ± 0.00 (15) |
| Wine | 0.27 | 0.07 | – | 0.13 | – | 0.53 | 11.8 ± 2.8 | 9506.5 ± 827.2 | 1.00 ± 0.00 (27) | 1.00 ± 0.00 (28) | 1.00 ± 0.00 (15) | – | 1.00 ± 0.00 (21) | 1.00 ± 0.00 (10) |
| Average | | | | | | | 14.09 ± 2.03 | 8906 ± 478.47 | 1.00 ± 0.00 (24.50 ± 3.72) | 1.00 ± 0.00 (21.20 ± 5.21) | 1.00 ± 0.00 (14.00 ± 5.98) | 1.00 ± 0.00 (19.00 ± 0.00) | 1.00 ± 0.00 (18.11 ± 8.10) | 1.00 ± 0.00 (11.56 ± 5.06) |

ᵃ No preprocessing of the dataset.
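Regarding the preprocessing abbreviations used throughout these tables, the sketch below assumes that MM denotes min-max scaling and ZS denotes z-score standardization, and it illustrates both on a single numerical attribute; it is an illustration under those assumptions, not code from the framework.

```python
# Assumed meanings: MM = min-max scaling to [0, 1], ZS = z-score standardization.
import numpy as np

def min_max(col):
    """Rescale a numerical attribute to the unit interval."""
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo) if hi > lo else np.zeros_like(col)

def z_score(col):
    """Shift to zero mean and scale to unit variance."""
    return (col - col.mean()) / col.std()

attribute = np.array([4.0, 8.0, 6.0, 2.0])
print(min_max(attribute))  # [0.33 1.   0.67 0.  ] (approximately)
print(z_score(attribute))  # zero mean, unit variance
```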
Table 7. Results for the outer algorithm PSO with ARM metric weight adaptation and a single preprocessing method.

| Dataset | MM | ZS | DS | RHC | KM | N ᵃ | NP | MAXFES | Supp | Conf | Cover | Amp | Incl | Comp |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Abalone | 0.40 | 0.07 | – | 0.10 | – | 0.43 | 11.6 ± 5.0 | 9448.7 ± 1608.1 | 0.89 ± 0.23 (23) | 0.81 ± 0.29 (25) | 0.67 ± 0.33 (17) | – | 0.63 ± 0.35 (23) | 0.41 ± 0.29 (11) |
| Balance scale | 0.40 | 0.10 | – | 0.10 | – | 0.40 | 16.6 ± 8.8 | 6563.6 ± 3507.9 | 0.56 ± 0.39 (23) | 0.77 ± 0.30 (23) | 0.62 ± 0.25 (8) | – | 0.66 ± 0.31 (14) | 0.74 ± 0.27 (15) |
| Basketball | 0.63 | – | – | 0.07 | – | 0.30 | 14.8 ± 7.9 | 9285.8 ± 1723.5 | 0.83 ± 0.28 (29) | 0.84 ± 0.22 (24) | 0.63 ± 0.36 (10) | – | 0.76 ± 0.34 (22) | 0.88 ± 0.24 (9) |
| Bolts | 0.23 | 0.07 | – | 0.03 | – | 0.67 | 10.9 ± 3.6 | 8642.9 ± 2285.0 | 0.86 ± 0.21 (19) | 0.68 ± 0.32 (19) | 0.75 ± 0.28 (15) | – | 0.84 ± 0.27 (25) | 0.98 ± 0.04 (5) |
| Buying | 0.43 | 0.03 | – | 0.13 | 0.03 | 0.37 | 17.6 ± 8.4 | 8695.0 ± 2184.4 | 0.75 ± 0.31 (27) | 0.83 ± 0.33 (13) | 0.61 ± 0.40 (6) | 1.00 ± 0.00 (1) | 0.98 ± 0.00 (1) | 0.99 ± 0.01 (2) |
| German | – | – | 1.00 | – | – | – | 20.4 ± 7.0 | 5921.3 ± 2437.6 | 0.53 ± 0.28 (13) | 0.60 ± 0.35 (14) | 0.47 ± 0.36 (15) | 0.62 ± 0.36 (11) | 0.66 ± 0.35 (15) | 0.61 ± 0.29 (19) |
| House16 | 0.30 | 0.03 | – | 0.03 | – | 0.63 | 13.7 ± 6.5 | 9141.0 ± 1947.1 | 0.79 ± 0.28 (24) | 0.88 ± 0.20 (18) | 0.62 ± 0.36 (14) | 0.03 ± 0.03 (4) | 0.67 ± 0.46 (6) | 0.41 ± 0.29 (10) |
| Ionosphere | 0.23 | – | – | 0.13 | 0.03 | 0.60 | 13.3 ± 6.1 | 8799.0 ± 2451.1 | 0.77 ± 0.35 (29) | 0.68 ± 0.34 (20) | 0.73 ± 0.22 (3) | 0.03 ± 0.00 (1) | – | 0.34 ± 0.20 (2) |
| Quake | 0.40 | – | – | 0.13 | – | 0.47 | 12.1 ± 5.9 | 9941.2 ± 239.3 | 0.80 ± 0.29 (25) | 0.74 ± 0.34 (16) | 0.83 ± 0.21 (15) | – | 0.72 ± 0.32 (17) | 0.87 ± 0.29 (13) |
| Wine | 0.37 | 0.07 | – | 0.03 | – | 0.53 | 10.8 ± 2.0 | 9454.8 ± 1539.8 | 0.85 ± 0.24 (23) | 0.88 ± 0.26 (25) | 0.74 ± 0.30 (10) | – | 0.73 ± 0.33 (24) | 0.70 ± 0.23 (6) |
| Average | | | | | | | 14.16 ± 3.00 | 8589.34 ± 1240.34 | 0.76 ± 0.12 (23.50 ± 4.54) | 0.77 ± 0.09 (19.70 ± 4.24) | 0.67 ± 0.10 (11.30 ± 4.38) | 0.42 ± 0.41 (4.25 ± 4.09) | 0.74 ± 0.10 (16.33 ± 7.89) | 0.69 ± 0.23 (9.20 ± 5.29) |

ᵃ No preprocessing of the dataset.
Table 8. Results for the outer algorithm DE with ARM metric weight adaptation and a single preprocessing method.

| Dataset | MM | ZS | DS | RHC | KM | N ᵃ | NP | MAXFES | Supp | Conf | Cover | Amp | Incl | Comp |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Abalone | 0.60 | 0.03 | – | 0.10 | – | 0.27 | 12.1 ± 3.9 | 8808.1 ± 1628.4 | 0.78 ± 0.29 (24) | 0.84 ± 0.18 (20) | 0.67 ± 0.32 (19) | – | 0.63 ± 0.33 (26) | 0.65 ± 0.30 (11) |
| Balance scale | 0.37 | 0.10 | – | 0.03 | – | 0.50 | 19.3 ± 8.0 | 8727.0 ± 1780.1 | 0.66 ± 0.30 (25) | 0.80 ± 0.19 (20) | 0.66 ± 0.29 (9) | – | 0.85 ± 0.26 (15) | 0.66 ± 0.36 (15) |
| Basketball | 0.37 | 0.17 | – | 0.27 | – | 0.20 | 13.0 ± 5.1 | 8858.1 ± 1383.7 | 0.70 ± 0.31 (22) | 0.85 ± 0.21 (22) | 0.63 ± 0.26 (11) | – | 0.69 ± 0.34 (27) | 0.56 ± 0.33 (9) |
| Bolts | 0.17 | 0.20 | – | 0.07 | – | 0.57 | 16.1 ± 7.7 | 8495.1 ± 2678.4 | 0.67 ± 0.28 (20) | 0.59 ± 0.34 (23) | 0.69 ± 0.36 (16) | 0.25 ± 0.00 (1) | 0.59 ± 0.27 (23) | 0.78 ± 0.30 (12) |
| Buying | 0.27 | 0.13 | – | 0.10 | 0.07 | 0.43 | 16.8 ± 6.9 | 9124.0 ± 1631.4 | 0.73 ± 0.33 (30) | 0.68 ± 0.33 (14) | 0.69 ± 0.34 (3) | – | 0.10 ± 0.00 (1) | 0.49 ± 0.44 (2) |
| German | – | – | 1.00 | – | – | – | 19.4 ± 7.6 | 5848.0 ± 3014.7 | 0.87 ± 0.17 (12) | 0.88 ± 0.17 (10) | 0.57 ± 0.30 (10) | 0.61 ± 0.37 (11) | 0.63 ± 0.27 (12) | 0.59 ± 0.28 (10) |
| House16 | 0.33 | 0.07 | – | 0.23 | 0.03 | 0.33 | 15.4 ± 6.7 | 8682.6 ± 1810.6 | 0.64 ± 0.30 (23) | 0.76 ± 0.31 (16) | 0.67 ± 0.32 (20) | – | 0.41 ± 0.37 (9) | 0.60 ± 0.33 (18) |
| Ionosphere | 0.40 | – | – | 0.07 | 0.13 | 0.40 | 14.7 ± 6.4 | 8727.3 ± 1754.6 | 0.72 ± 0.27 (28) | 0.73 ± 0.32 (14) | 0.79 ± 0.23 (6) | – | 0.46 ± 0.40 (3) | 0.48 ± 0.17 (3) |
| Quake | 0.33 | 0.10 | – | 0.17 | – | 0.40 | 11.1 ± 2.6 | 9471.8 ± 1115.7 | 0.66 ± 0.32 (26) | 0.68 ± 0.25 (18) | 0.69 ± 0.30 (18) | – | 0.74 ± 0.30 (13) | 0.59 ± 0.33 (17) |
| Wine | 0.40 | 0.10 | – | 0.10 | – | 0.40 | 11.8 ± 4.0 | 9293.9 ± 1261.7 | 0.75 ± 0.26 (24) | 0.77 ± 0.24 (19) | 0.54 ± 0.35 (9) | – | 0.61 ± 0.34 (20) | 0.55 ± 0.34 (11) |
| Average | | | | | | | 14.95 ± 2.84 | 8603 ± 961.74 | 0.72 ± 0.07 (23.40 ± 4.67) | 0.76 ± 0.09 (17.60 ± 3.85) | 0.66 ± 0.07 (12.10 ± 5.52) | 0.43 ± 0.18 (6.00 ± 5.00) | 0.57 ± 0.20 (14.90 ± 8.62) | 0.60 ± 0.08 (10.80 ± 5.02) |

ᵃ No preprocessing of the dataset.
Table 9. Results for the PSO higher-level meta-heuristic algorithm with ARM metric weight adaptation and the selection of multiple preprocessing methods.

| Dataset | MM | ZS | DS | RHC | KM | N ᵃ | NP | MAXFES | Supp | Conf | Cover | Amp | Incl | Comp |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Abalone | – | – | – | – | – | – | 15.6 ± 8.4 | 9570.9 ± 1477.9 | 0.83 ± 0.30 (17) | 0.82 ± 0.24 (21) | 0.83 ± 0.28 (19) | – | 0.65 ± 0.40 (17) | 0.76 ± 0.36 (16) |
| Balance scale | – | – | – | – | – | – | 14.8 ± 7.6 | 7869.1 ± 2986.2 | 0.69 ± 0.37 (23) | 0.74 ± 0.29 (24) | 0.48 ± 0.35 (10) | – | 0.82 ± 0.27 (16) | 0.70 ± 0.28 (14) |
| Basketball | – | – | – | – | – | – | 13.4 ± 6.5 | 9700.8 ± 907.4 | 0.73 ± 0.34 (24) | 0.83 ± 0.30 (19) | 0.88 ± 0.25 (11) | – | 0.66 ± 0.38 (21) | 0.76 ± 0.33 (10) |
| Bolts | – | – | – | – | – | – | 15.7 ± 7.9 | 8379.6 ± 2697.1 | 0.79 ± 0.29 (25) | 0.86 ± 0.24 (18) | 0.82 ± 0.24 (14) | – | 0.76 ± 0.28 (21) | 0.79 ± 0.27 (8) |
| Buying | – | – | – | – | – | – | 19.3 ± 8.7 | 9364.9 ± 1770.2 | 0.80 ± 0.29 (26) | 0.88 ± 0.21 (13) | 0.79 ± 0.32 (7) | – | – | 0.66 ± 0.23 (2) |
| German | – | – | – | – | – | – | 19.4 ± 6.5 | 6091.4 ± 3015.5 | 0.61 ± 0.29 (13) | 0.67 ± 0.30 (14) | 0.51 ± 0.38 (13) | 0.76 ± 0.28 (14) | 0.66 ± 0.31 (18) | 0.54 ± 0.33 (14) |
| House16 | – | – | – | – | – | – | 16.0 ± 8.3 | 8451.8 ± 2975.0 | 0.71 ± 0.33 (24) | 0.80 ± 0.29 (22) | 0.65 ± 0.32 (17) | 0.01 ± 0.00 (2) | 0.48 ± 0.37 (10) | 0.52 ± 0.42 (10) |
| Ionosphere | – | – | – | – | – | – | 21.3 ± 8.2 | 6776.0 ± 3324.5 | 0.64 ± 0.41 (23) | 0.82 ± 0.32 (14) | 0.25 ± 0.20 (5) | 0.76 ± 0.23 (5) | 0.81 ± 0.16 (3) | 0.59 ± 0.41 (2) |
| Quake | – | – | – | – | – | – | 11.6 ± 4.8 | 9585.9 ± 899.3 | 0.91 ± 0.20 (19) | 0.87 ± 0.24 (18) | 0.64 ± 0.40 (15) | – | 0.68 ± 0.36 (13) | 0.71 ± 0.33 (16) |
| Wine | – | – | – | – | – | – | 14.4 ± 7.3 | 8685.9 ± 2585.8 | 0.82 ± 0.31 (24) | 0.86 ± 0.24 (21) | 0.69 ± 0.30 (18) | 0.33 ± 0.29 (2) | 0.53 ± 0.38 (17) | 0.74 ± 0.31 (13) |
| Average | | | | | | | 16.15 ± 2.86 | 8447.63 ± 1170.97 | 0.75 ± 0.09 (21.80 ± 3.92) | 0.82 ± 0.06 (18.40 ± 3.56) | 0.65 ± 0.19 (12.90 ± 4.41) | 0.47 ± 0.32 (5.75 ± 4.92) | 0.67 ± 0.11 (15.11 ± 5.40) | 0.68 ± 0.09 (10.50 ± 4.92) |

ᵃ No preprocessing of the dataset.
Table 10. Results for the DE higher-level meta-heuristic algorithm with ARM metric weight adaptation and the selection of multiple preprocessing methods.

| Dataset | MM | ZS | DS | RHC | KM | N ᵃ | NP | MAXFES | Supp | Conf | Cover | Amp | Incl | Comp |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Abalone | – | – | – | – | – | – | 11.6 ± 4.2 | 8989.0 ± 1818.6 | 0.73 ± 0.23 (25) | 0.74 ± 0.29 (17) | 0.85 ± 0.23 (15) | – | 0.72 ± 0.30 (21) | 0.67 ± 0.40 (9) |
| Balance scale | – | – | – | – | – | – | 15.0 ± 6.7 | 7358.0 ± 3076.0 | 0.61 ± 0.31 (24) | 0.74 ± 0.32 (24) | 0.44 ± 0.32 (12) | – | 0.82 ± 0.27 (16) | 0.60 ± 0.29 (10) |
| Basketball | – | – | – | – | – | – | 13.6 ± 5.5 | 8971.7 ± 1704.6 | 0.69 ± 0.27 (25) | 0.72 ± 0.33 (19) | 0.57 ± 0.33 (13) | – | 0.74 ± 0.31 (20) | 0.61 ± 0.37 (15) |
| Bolts | – | – | – | – | – | – | 15.6 ± 6.5 | 8468.6 ± 2388.3 | 0.73 ± 0.27 (21) | 0.76 ± 0.28 (20) | 0.71 ± 0.35 (17) | 0.34 ± 0.17 (3) | 0.81 ± 0.23 (26) | 0.63 ± 0.36 (10) |
| Buying | – | – | – | – | – | – | 15.1 ± 6.3 | 9024.1 ± 1431.3 | 0.72 ± 0.32 (30) | 0.61 ± 0.32 (12) | 0.67 ± 0.33 (2) | – | – | – |
| German | – | – | – | – | – | – | 22.2 ± 7.6 | 6033.7 ± 2926.3 | 0.55 ± 0.33 (11) | 0.62 ± 0.33 (22) | 0.40 ± 0.35 (14) | 0.57 ± 0.32 (15) | 0.59 ± 0.31 (13) | 0.72 ± 0.32 (11) |
| House16 | – | – | – | – | – | – | 15.8 ± 7.3 | 7880.9 ± 2238.0 | 0.77 ± 0.29 (25) | 0.74 ± 0.28 (23) | 0.68 ± 0.29 (21) | – | 0.55 ± 0.36 (13) | 0.54 ± 0.38 (14) |
| Ionosphere | – | – | – | – | – | – | 16.8 ± 7.3 | 8059.6 ± 2564.3 | 0.71 ± 0.34 (28) | 0.82 ± 0.28 (21) | 0.52 ± 0.36 (5) | – | 0.53 ± 0.39 (5) | – |
| Quake | – | – | – | – | – | – | 11.8 ± 2.8 | 8982.1 ± 1247.7 | 0.78 ± 0.29 (27) | 0.66 ± 0.30 (15) | 0.73 ± 0.28 (13) | 0.21 ± 0.00 (1) | 0.64 ± 0.36 (18) | 0.71 ± 0.30 (18) |
| Wine | – | – | – | – | – | – | 14.6 ± 5.9 | 9342.5 ± 1265.8 | 0.65 ± 0.34 (24) | 0.83 ± 0.24 (29) | 0.66 ± 0.33 (17) | 0.08 ± 0.00 (1) | 0.63 ± 0.33 (22) | 0.67 ± 0.32 (11) |
| Average | | | | | | | 15.22 ± 2.82 | 8311.03 ± 963.66 | 0.69 ± 0.07 (24.00 ± 4.92) | 0.72 ± 0.07 (20.20 ± 4.58) | 0.62 ± 0.13 (12.90 ± 5.36) | 0.30 ± 0.18 (5.00 ± 5.83) | 0.67 ± 0.10 (17.11 ± 5.86) | 0.64 ± 0.05 (12.25 ± 2.90) |

ᵃ No preprocessing of the dataset.
Table 11. Results of the Wilcoxon test when comparing the NiaAutoARM-generated pipelines with VARDE.

| Method | Baseline (PSO) | Baseline (DE) | WO, P = 1 (PSO) | WO, P = 1 (DE) | WO, P > 1 (PSO) | WO, P > 1 (DE) |
|---|---|---|---|---|---|---|
| VARDE_pos_15_2000 [30] | 0.03 | 0.34 | 0.01 | 0.08 | 0.01 | 0.01 |
| VARDE_neg_15_2000 [30] | 0.61 | 0.17 | 0.97 | 0.54 | 0.75 | 0.98 |
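A Wilcoxon comparison of this kind can be carried out with SciPy as in the sketch below; the two fitness samples are invented placeholders rather than the experimental data behind Table 11.

```python
# Paired Wilcoxon signed-rank test on two fitness samples (placeholder data).
from scipy.stats import wilcoxon

niaautoarm = [0.76, 0.81, 0.69, 0.74, 0.80, 0.72, 0.78, 0.75, 0.70, 0.77]
varde      = [0.71, 0.78, 0.66, 0.72, 0.74, 0.70, 0.73, 0.74, 0.68, 0.73]

stat, p = wilcoxon(niaautoarm, varde)
print(f"p = {p:.3f}")  # p < 0.05 suggests a statistically significant difference
```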
Table 12. Average execution times (in seconds) of both higher-level meta-heuristic algorithms needed to find the best pipelines for each experimental dataset.

| Dataset | PSO | DE |
|---|---|---|
| Abalone | 27584.0 ± 7238.7 | 23486.5 ± 4702.6 |
| Balance scale | 15356.1 ± 6617.0 | 11598.7 ± 1298.3 |
| Basketball | 23442.6 ± 5271.6 | 15476.7 ± 1893.6 |
| Bolts | 22325.9 ± 9694.9 | 18603.7 ± 4979.5 |
| Buying | 33819.2 ± 10046.0 | 34449.2 ± 4134.3 |
| German | 25322.6 ± 10027.3 | 25958.7 ± 3230.3 |
| House | 34444.4 ± 8286.6 | 34464 ± 7709.4 |
| Ionosphere | 32299.7 ± 9396.3 | 40831.1 ± 7365.6 |
| Quake | 17897.9 ± 4523.5 | 18393.1 ± 4162.1 |
| Wine | 28541.7 ± 7341.7 | 24963.4 ± 3111.6 |
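The times in Table 12 correspond to complete pipeline searches. A wall-clock measurement of this kind can be taken as in the following sketch, where run_pipeline_search is a hypothetical stand-in for a single optimization run:

```python
# Wall-clock timing of one pipeline search (run_pipeline_search is hypothetical).
import time

def run_pipeline_search():
    time.sleep(0.01)  # placeholder for an actual pipeline optimization run

start = time.perf_counter()
run_pipeline_search()
print(f"elapsed: {time.perf_counter() - start:.2f} s")
```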