Article

Threshold Adaptation for Improved Wrapper-Based Evolutionary Feature Selection

Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška Cesta 46, 2000 Maribor, Slovenia
*
Author to whom correspondence should be addressed.
Biomimetics 2025, 10(10), 670; https://doi.org/10.3390/biomimetics10100670
Submission received: 31 August 2025 / Revised: 24 September 2025 / Accepted: 2 October 2025 / Published: 5 October 2025
(This article belongs to the Section Biological Optimisation and Management)

Abstract

Feature selection is essential for enhancing classification accuracy, reducing overfitting, and improving interpretability in high-dimensional datasets. Evolutionary Feature Selection (EFS) methods employ a threshold parameter θ to decide feature inclusion, yet the widely used static setting θ = 0.5 may not yield optimal results. This paper presents the first large-scale, systematic evaluation of threshold adaptation mechanisms in wrapper-based EFS across a diverse set of benchmark datasets. We examine deterministic, adaptive, and self-adaptive threshold parameter control under a unified framework, which can be used in an arbitrary bio-inspired algorithm. Extensive experiments and statistical analyses of classification accuracy, feature subset size, and convergence properties demonstrate that adaptive mechanisms significantly outperform static threshold parameter control. In particular, they not only provide superior tradeoffs between accuracy and subset size but also surpass the state-of-the-art feature selection methods on multiple benchmarks. Our findings highlight the critical role of threshold adaptation in EFS and establish practical guidelines for its effective application.

Graphical Abstract

1. Introduction

Feature selection is a critical step in data preprocessing, especially in high-dimensional datasets often encountered in fields such as bioinformatics [1] and data mining [2]. The goal of feature selection is to identify a subset of relevant features that contribute the most to the predictive power of a model, thereby enhancing performance, reducing overfitting, and improving interpretability. Among the various feature selection methods, bio-inspired algorithms have gained significant attention due to their ability to search large and complex spaces efficiently. Additionally, hybrid approaches have been developed by combining evolutionary algorithms with other simpler optimization techniques, such as Simulated Annealing [3] or traditional filter methods [4], to balance exploration and exploitation more effectively. These hybrid methods often result in faster convergence and improved accuracy, as they capitalize on the strengths of each individual algorithm.
Evolutionary Feature Selection (EFS) methods, which apply bio-inspired algorithms, like evolutionary algorithms (EAs) [5] and Swarm Intelligence (SI)-based algorithms [6], to the task of selecting the optimal subset of features, have shown great promise in handling high-dimensional data. These methods rely on models from biology (like natural selection, the behavior of bird swarms, schools of fish, etc.) to evolve feature subsets iteratively, balancing between exploration and exploitation to avoid local optima. The balancing depends crucially on a threshold parameter that determines the inclusion or exclusion of a particular feature in the solution subset. Indeed, this parameter influences the tradeoff between model complexity and generalization capability. A higher threshold might result in a smaller subset, potentially leading to underfitting if relevant features are discarded. Conversely, a lower threshold might retain too many features, increasing the risk of overfitting and computational cost. Therefore, finding the optimal threshold is essential for maximizing the effectiveness of EFS methods.
Recent works have explored various aspects of EFS, such as the development of new crossover and mutation strategies, hybrid approaches combining EFS with other optimization techniques, and the application of EFS in different domains. Wang et al. [1] introduced a PSO-based feature selection algorithm with a dynamic adjustment mechanism for the inertia weight, enhancing convergence speed and solution quality. Many studies highlight the potential of EFS methods in feature selection, but often overlook the impact of the threshold parameter, focusing primarily on algorithmic innovations. Moreover, some recent studies have recognized the need to optimize feature selection thresholds but have approached it from a heuristic or domain-specific angle. Deng et al. [7] proposed a novel approach for high-dimensional feature selection, named the Feature-Thresholds-Guided Genetic Algorithm (FTGGA). Traditional genetic algorithms suffer from unguided crossover and mutation operations, leading to slow convergence and suboptimal feature subsets. To address these challenges, FTGGA introduces a multi-objective feature scoring mechanism that updates feature thresholds during the evolutionary process, allowing for more targeted crossover and mutation. Notably, the algorithm integrates the ReliefF [8] technique to filter out most of the redundant features initially, followed by a genetic algorithm guided by continuously updated feature thresholds. Li et al. [9] proposed an Improved Sticky Binary Particle Swarm Optimization (ISBPSO) algorithm for feature selection in high-dimensional classification tasks. The method enhances the standard SBPSO by integrating three key mechanisms: a feature-weighted initialization using mutual information, a dynamic bit masking strategy that reduces the search space progressively by freezing unpromising features, and a genetic refinement process applied to the particles' personal bests to prevent premature convergence.
A notable distinction of ISBPSO is the use of a feature selection threshold of 0.6 instead of the typical 0.5, in order to retain only strongly activated features. Fister et al. [10] introduced a novel Self-Adaptive Differential Evolution Algorithm for feature selection, enhanced by a threshold mechanism. Their approach improved feature selection by updating feature presence thresholds dynamically by means of complex adaptation during the evolutionary process.
In contrast to these method-specific designs, our study tests various threshold parameter control mechanisms systematically under a unified, optimizer-agnostic framework. According to Eiben and Smith [5], the parameter control techniques in evolutionary computation can be classified into one of the following three categories: (1) deterministic, (2) adaptive, and (3) self-adaptive. With deterministic parameter control, the algorithm's parameters are altered according to some deterministic rule. Adaptive parameter control means that there is some feedback from the search process, which determines the direction or magnitude of the change to the control parameters. With self-adaptive parameter control, the parameters are encoded into the representation of individuals and, together with the problem variables, undergo the effects of the variation operators. In this sense, we benchmark deterministic schedules, population-level feedback mechanisms, and self-adaptive per-individual threshold adaptations across multiple bio-inspired optimizers and datasets, using a common interface and evaluation protocol. This isolates the mechanism effect of threshold adaptation from confounding factors (e.g., prefiltering or operator choice) and yields generalizable guidance on when and how adaptive thresholding improves the quality of the selected feature subsets. In order to present as comprehensive a picture as possible, the random search algorithm with no parameter control is included in the comparative study.
To the best of our knowledge, this study represents the first large-scale systematic investigation of several mechanisms for feature selection threshold adaptation across a wide range of bio-inspired algorithms and datasets. This allows us to uncover generalizable insights about threshold behavior that are independent of the underlying evolutionary operator. The obtained results suggest that using a higher static threshold already achieves significant improvements in classification accuracy and subset compactness, highlighting the importance of tuning the feature threshold. To summarize, the proposed paper introduces the following key novelties:
  • Proposing a threshold adaptation mechanism, which can be used in an arbitrary bio-inspired algorithm for EFS;
  • Comparing different feature threshold adaptation mechanisms to the baseline method (θ = 0.5);
  • Investigating the balance of classification accuracy and feature subset size in the fitness function by using five different threshold mechanisms in bio-inspired algorithms;
  • A large-scale study of five feature threshold adaptation mechanisms in bio-inspired algorithms and their influence on the quality of the selected feature subset;
  • A large-scale study of five feature threshold adaptation mechanisms in bio-inspired algorithms and their influence on the size of the selected feature subset;
  • Investigating the convergence properties of the bio-inspired algorithm in regard to using different feature threshold adaptation mechanisms;
  • Comparing the best adaptation mechanism (according to the obtained results) to the state of the art.
The rest of the paper is organized as follows: In Section 2, the foundation of EFS is explained in detail. Section 3 illustrates the design and implementation of the proposed method. The experimental work and the analysis of the obtained results are the subjects of Section 4. Finally, the paper is concluded with Section 5, where we explain the potential directions of future work.

2. Materials and Methods

This section introduces the foundational knowledge necessary to understand the concepts that follow. Firstly, the feature selection problem is defined and presented as an optimization problem. Then, the idea of wrapper-based feature selection is introduced. Finally, the application of wrapper-based feature selection using an arbitrary evolutionary algorithm is defined.

2.1. Feature Selection

Feature selection is a preprocessing mechanism, which involves identifying the most relevant subset of features from a given dataset, thereby reducing the dimension of the problem and improving the learning algorithm efficiency and performance. Mathematically, the feature selection problem can be formulated as follows: Let $X = \{F_1, F_2, \ldots, F_n\}$ represent the set of all $n$ features in a dataset, and let $Y$ denote the corresponding target class variable. The goal of feature selection is to find a subset of features $S \subseteq X$, such that a model trained on $S$ achieves optimal performance in terms of an evaluation metric $f(S)$. Formally, the feature selection is expressed as follows:
$$\arg\min \left\{ f(S) : S \subseteq X \right\},$$
where $f(S)$ represents the performance of the trained model on the selected feature subset $S$ and depends on the selected feature selection method.

2.2. Evolutionary Feature Selection

EFS applies bio-inspired optimization to identify the most relevant subset of features. Unlike traditional methods, which often rely on deterministic algorithms, bio-inspired computation comprises a class of stochastic, nature-inspired, population-based search algorithms suitable for solving the hardest optimization problems. These algorithms iteratively evolve a population of candidate feature subsets by selecting, combining, and mutating them to explore the search space effectively. This process not only enhances model performance by reducing overfitting, but also improves interpretability and computational efficiency by eliminating irrelevant or redundant features. EFS is particularly useful in high-dimensional datasets where the feature space is vast, making an exhaustive search impractical.
During the optimization, a bio-inspired algorithm maintains a population of solutions $\mathbf{x} = \{x_1, x_2, \ldots, x_{Np}\}$, where $Np$ denotes the population size. Each solution $x_i = \{x_{i,1}, x_{i,2}, \ldots, x_{i,n}\}$ is a vector of $n$ values, where $n$ corresponds to the number of all features in the dataset. Each element $x_{i,j}$ of the vector represents a feature from the dataset. All of the reviewed bio-inspired algorithms for feature selection use a threshold mechanism internally for selecting the relevant features in the search space of the algorithm. This can be expressed mathematically as follows:
$$S = \{ x_j : x_j \geq \theta \},$$
where the variable $S$ denotes the feature subset, which will be used for training the selected machine learning algorithm, and the parameter $\theta$ is a threshold determining whether the specific feature will be included in the feature subset or not. Let us mention that the value of the threshold $\theta$ is typically set to 0.5 in most of the reviewed literature.
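The decoding rule in Equation (2) can be illustrated with a minimal sketch; this is an illustration of the mechanism, not the authors' implementation:

```python
# Minimal sketch of the genotype-to-phenotype decoding in Equation (2):
# a real-valued solution vector is mapped to a feature subset by keeping
# every position whose value reaches the threshold theta.

def decode_subset(solution, theta=0.5):
    """Return the indices of the features selected by thresholding."""
    return [j for j, value in enumerate(solution) if value >= theta]

solution = [0.9, 0.2, 0.5, 0.7, 0.1]
print(decode_subset(solution))             # default theta = 0.5 -> [0, 2, 3]
print(decode_subset(solution, theta=0.8))  # a stricter threshold -> [0]
```

Raising the threshold shrinks the selected subset, which is exactly the lever that threshold adaptation mechanisms manipulate.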

2.3. Wrapper-Based Feature Selection

Wrapper-based methods evaluate the usefulness of feature subsets by using the predictive model as a black box to assess their performance directly. By iteratively searching for and selecting subsets of features that optimize a given performance metric, wrapper-based approaches can identify the most relevant features for the model effectively. This method is computationally intensive but often yields superior results compared to filter-based methods, as it considers feature dependencies and interactions within the context of the specific predictive algorithm. The wrapper-based feature selection process in our study is implemented using a bio-inspired algorithm, which iteratively evolves a population of feature subsets towards optimal solutions, inspired by models which have arisen in biology.
The fitness function for evaluating a solution in a wrapper-based method is defined as follows:
$$Acc = \mathrm{Classify}(C, S),$$
where the variable $C$ designates the selected classifier and the variable $S$ is the feature subset. The variable $Acc \in [0, 1]$ denotes the classification accuracy of the observed feature subset $S$.
Since the feature subset size also plays an important role, the following fitness function was adopted in this study:
$$f(S) = \beta \, (1 - Acc) + (1 - \beta) \, \frac{L(S)}{L(X)},$$
where $L(\cdot)$ is the length function, which counts the number of features in a subset $S$, and $\beta$ is the weighting factor for balancing the importance between the classification accuracy and the number of selected features.
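As a quick sanity check, the fitness in Equation (4) can be computed directly; the numbers below are illustrative, not taken from the experiments:

```python
# Sketch of the fitness in Equation (4): a weighted sum of the classification
# error (1 - Acc) and the relative subset size L(S)/L(X). Lower is better.

def fitness(acc, subset_size, total_features, beta=0.9):
    return beta * (1.0 - acc) + (1.0 - beta) * subset_size / total_features

# A subset of 10 out of 100 features reaching 95% accuracy:
print(round(fitness(0.95, 10, 100), 4))  # -> 0.055
```

With β = 0.9, accuracy dominates the evaluation, while the subset-size term still penalizes needlessly large subsets.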

3. The Proposed Evolutionary Feature Selection and Threshold Adaptation Mechanisms

This section describes the complete EFS framework used throughout our experiments. The framework builds upon a generic bio-inspired optimization algorithm using a threshold-based genotype–phenotype mapping to control feature subset generation. It is implemented as a wrapper-based approach and is compatible with an arbitrary population-based bio-inspired algorithm (Figure 1).
As is evident from Figure 1, a general feature selection process is divided into the following four key steps:
  • Dataset splitting;
  • Subset discovery;
  • Subset evaluation;
  • Validation of results.
In the first step (i.e., dataset splitting), a dataset is divided into training and validation sets with respect to some predefined ratio. Thus, the former set is used for the training phase, while the latter is used for the validation phase of the EFS. The second step (i.e., subset discovery) involves generating candidate subsets of features from the full feature space. In the context of evolutionary algorithms, this corresponds to evolving a population of individuals, where each individual encodes a potential feature subset. The third step (i.e., subset evaluation) is achieved by applying a fitness function to assess the quality of each feature subset. The last step (i.e., validation of results) evaluates the selected subset on a separate testing set to assess its generalization performance. This step is crucial to avoid overfitting and to ensure the robustness of the selected features across different data splits. The result of the process is the best subset S according to the fitness value, as produced by the chosen bio-inspired algorithm, together with its accuracy Acc.
Let us mention that the second and third steps are entrusted to a particular bio-inspired algorithm by the framework. Although the concept of bio-inspired algorithms captures two classes of nature-inspired algorithms (EAs and SI-based), they share common characteristics that enable us to deal with them similarly. Moreover, some efforts were made by Fister et al. in defining the universal framework of these stochastic nature-inspired population-based algorithms [11]. As a result, the generic bio-inspired algorithm can be defined as illustrated in the pseudo-code of Algorithm 1.
Algorithm 1 The pseudo-code of a generic bio-inspired algorithm.
1: INITIALIZE_population_randomly
2: EVALUATE_each_individual
3: while Termination_condition_not_met do
4:     MODIFY_each_individual
5:     EVALUATE_each_trial
6:     SELECT_individuals_for_the_next_generation
7:     FIND_global_best_individual
8: end while
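The loop of Algorithm 1 can be sketched in a few lines of Python; the Gaussian perturbation standing in for MODIFY_each_individual below is a deliberately simple placeholder for the algorithm-specific operator (DE mutation, PSO velocity update, etc.):

```python
import random

# Illustrative skeleton of Algorithm 1 (not the paper's implementation).

def generic_bioinspired(fitness, dim, pop_size=30, max_evals=3000, seed=1):
    rng = random.Random(seed)
    # INITIALIZE_population_randomly and EVALUATE_each_individual
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    fits = [fitness(x) for x in pop]
    evals = pop_size
    while evals < max_evals:  # Termination_condition_not_met
        for i in range(pop_size):
            # MODIFY_each_individual: a simple random perturbation placeholder
            trial = [min(max(g + rng.gauss(0, 0.1), 0.0), 1.0) for g in pop[i]]
            f_trial = fitness(trial)            # EVALUATE_each_trial
            evals += 1
            if f_trial < fits[i]:               # SELECT_individuals (greedy)
                pop[i], fits[i] = trial, f_trial
    best = min(range(pop_size), key=fits.__getitem__)  # FIND_global_best
    return pop[best], fits[best]

best_x, best_f = generic_bioinspired(lambda x: sum(x), dim=5)
print(round(best_f, 3))
```

Swapping the placeholder perturbation for a concrete update mechanism yields a specific EA or SI-based algorithm, which is precisely why the framework can treat both classes uniformly.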
Indeed, these algorithms follow a common evolutionary paradigm and differ only in their specific update/modification mechanisms (i.e., function ‘MODIFY_each_individual’). In summary, the function in EAs consists of the following three functions [5]:
  • SELECT_parents;
  • RECOMBINE_pairs_of_parents;
  • MUTATE_the_resulting_offspring.
However, in SI-based algorithms, this function represents the implementation of some biological model that serves as an inspiration for the search process design captured in the 'MODIFY_each_individual' function. Therefore, our main effort in the design was to adapt the specific bio-inspired algorithm to be capable of solving the feature selection problem as an optimization problem. Indeed, the adaptation demands two modifications of the bio-inspired algorithm, namely the following:
  • genotype–phenotype mapping;
  • fitness function evaluation.
The genotype–phenotype mapping decodes the representation of an encoded solution in the search space to the solution in the problem space. The solution of the EFS in the genotype search space is represented as a real-valued vector of length equal to the number of features, while, in the phenotype space, this is decoded as a binary feature mask derived by applying a threshold to the genotype vector. Specifically, features with values above the threshold θ are included in the selected subset, while others are excluded (see Equation (2) and Figure 2).
In our study, a wrapper-based approach is used, where the fitness function considers the classification accuracy and the number of selected features jointly (see Equation (4)).

Threshold Parameter Control

To study the control of the threshold parameter systematically, we grouped the different parameter control mechanisms into three classes that can be applied to an arbitrary bio-inspired algorithm:
  • Deterministic schedules, varying the threshold over time according to a preset curriculum (e.g., linear ramps, cosine cycles), shaping feature selection pressure without using any feedback from the population;
  • Population-level feedback mechanisms updating a single global threshold by regulating the measurable metrics, such as improvement/success rate or diversity, thereby tightening or relaxing selection as the search progresses;
  • Self-adaptive per-individual thresholds treating the threshold as a gene, which is co-evolved with the features, allowing different individuals to investigate the search space on their own.
All the mechanisms expose the same interface, which consumes population summaries and outputs either the global threshold parameter $\theta_t$ or the local, i.e., per-individual, threshold parameter $\theta_{i,t}$, so they can be compared fairly and used across all bio-inspired methods under the common objective.
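A minimal sketch of such a shared controller interface is shown below; the class and method names are our own illustration, not taken from the paper's code:

```python
# Hypothetical sketch of the common interface: every mechanism consumes
# population summaries and returns the next threshold(s).

class ThresholdController:
    def update(self, t, T, summary):
        """Return a global theta (float) or per-individual thetas (list)."""
        raise NotImplementedError

class StaticThreshold(ThresholdController):
    """The baseline: no adaptation, theta stays fixed."""
    def __init__(self, theta=0.5):
        self.theta = theta

    def update(self, t, T, summary):
        return self.theta

controller = StaticThreshold()
print(controller.update(t=0, T=100, summary={}))  # -> 0.5
```

Each of the mechanisms described next would be one more subclass implementing `update`, which is what makes them interchangeable across optimizers.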
Two deterministic threshold parameter control mechanisms were used, namely, Linear Ramp (LR) and Cosine Ramp (CR). The LR increases the threshold linearly over generations as follows:
$$\theta_t = \theta_{\min} + \frac{t}{T}\left(\theta_{\max} - \theta_{\min}\right), \quad t = 0, \ldots, T,$$
while the CR increases the threshold by following a half-cosine curve, thus ensuring a slow increase at the beginning and end, with a faster transition in the middle; in other words,
$$\theta_t = \theta_{\min} + \frac{\theta_{\max} - \theta_{\min}}{2}\left(1 - \cos\frac{\pi t}{T}\right),$$
where $t$ denotes the current generation, $T$ the maximum number of generations, and $\theta_{\min}$ and $\theta_{\max}$ are the minimum and the maximum values of the threshold parameter.
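Both schedules follow directly from Equations (5) and (6); a short sketch using the bounds θ_min = 0.1 and θ_max = 0.9 from the experimental setup:

```python
import math

# Sketch of the two deterministic schedules: Linear Ramp and Cosine Ramp.

def linear_ramp(t, T, theta_min=0.1, theta_max=0.9):
    return theta_min + (t / T) * (theta_max - theta_min)

def cosine_ramp(t, T, theta_min=0.1, theta_max=0.9):
    return theta_min + (theta_max - theta_min) / 2 * (1 - math.cos(math.pi * t / T))

# Both start at 0.1 and end at 0.9; CR moves slowly near the endpoints.
for t in (0, 25, 50, 75, 100):
    print(t, round(linear_ramp(t, 100), 3), round(cosine_ramp(t, 100), 3))
```

At t = T/2 the two schedules coincide at the midpoint of the interval, while CR lags LR in the first half and leads it in the second.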
We implemented two adaptive threshold parameter control mechanisms: The first mechanism, called Proportional Control (PC), regulates the threshold parameter toward a target feature subset size. It uses a target selection rate $\rho$, which represents the desired fraction of selected features. At each generation $t$, we compute the realized selection rate $\rho_t$ from the population and form the error $e_t = \rho_t - \rho$. The threshold is then updated with a learning rate $\eta > 0$. A larger positive error leads to a larger increase in the threshold, which yields a smaller feature subset in the next generation, while a negative error produces the opposite effect. The projection operator $\Pi_{[\theta_{\min}, \theta_{\max}]}(\cdot)$ keeps the threshold parameter within the desired interval $[\theta_{\min}, \theta_{\max}]$, i.e.,
$$\theta_{t+1} = \Pi_{[\theta_{\min}, \theta_{\max}]}\left( \theta_t + \eta \, (\rho_t - \rho) \right).$$
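A sketch of this update rule follows; the target rate ρ = 0.3 below is purely illustrative, while η = 0.05 and the bounds [0.1, 0.9] are the values used in the experimental setup:

```python
# Sketch of the Proportional Control update: the threshold moves by eta times
# the error between the realized and the target selection rate, then is
# clipped to [theta_min, theta_max] (the projection operator).

def pc_update(theta, realized_rate, target_rate=0.3, eta=0.05,
              theta_min=0.1, theta_max=0.9):
    theta = theta + eta * (realized_rate - target_rate)
    return min(max(theta, theta_min), theta_max)

# Too many features selected (0.5 > 0.3) -> the threshold is raised slightly.
print(round(pc_update(0.5, realized_rate=0.5), 3))  # -> 0.51
```

The controller thus nudges the population toward the desired subset size rather than jumping to it, and the projection prevents runaway thresholds.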
The second mechanism, called Success Rate Adaptation (SRA), adjusts the threshold parameter $\theta_{t+1}$ using the fraction of individuals improving their fitness in the current generation. For instance, let $SR_t \in [0, 1]$ denote the success rate and $SR = 0.15$ be the target success rate, which is determined empirically. When the observed success rate falls below the target one, the threshold is increased by 15%, to promote smaller subsets and stronger exploitation. When the observed success rate exceeds the target, the threshold is decreased by 15%, to promote larger subsets and additional exploration; in other words,
$$SR_t = \frac{1}{N} \sum_{i=1}^{N} \left[ f(x_i)_t < f(x_i)_{t-1} \right],$$
$$\theta_{t+1} = \Pi_{[\theta_{\min}, \theta_{\max}]} \begin{cases} \theta_t \cdot c_1, & SR_t > SR, \\ \theta_t \cdot c_2, & SR_t \leq SR, \end{cases}$$
where $[\cdot]$ denotes the indicator function, and $c_1 = 0.85$ and $c_2 = 1.15$ are constants which control the value of the threshold $\theta_{t+1}$ in the new generation.
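The SRA rule reduces to a two-branch scaling followed by clipping, as a short sketch shows (an illustration of the rule, not the authors' code):

```python
# Sketch of the Success Rate Adaptation rule: given the observed success
# rate SR_t, scale the threshold by 1.15 when SR_t is at or below the target
# (promoting smaller subsets) and by 0.85 when it exceeds the target, then
# clip the result to [theta_min, theta_max].

def sra_update(theta, sr_t, target=0.15, theta_min=0.1, theta_max=0.9):
    factor = 0.85 if sr_t > target else 1.15
    return min(max(theta * factor, theta_min), theta_max)

print(round(sra_update(0.5, sr_t=0.05), 4))  # low success rate  -> 0.575
print(round(sra_update(0.5, sr_t=0.30), 4))  # high success rate -> 0.425
```

Because the update is multiplicative, repeated triggers change the threshold geometrically until the projection bounds take over.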
The final parameter control mechanism, called Self-Adaptation (SA), assigns each individual its own threshold parameter $\theta_i \in [\theta_{\min}, \theta_{\max}]$. This parameter is encoded in the genome and updated by the variation operators of the evolutionary search process together with the problem variables. Each individual therefore operates with its own selection rate and can adjust it over time. Different solutions explore different areas of the search space at the same time, which increases population diversity and allows each individual to progress at its own pace without relying on a single global controller. Thus, the $\theta_i$ threshold control parameter becomes a part of the solution; in other words,
$$x_i = \left( x_{i,1}, \ldots, x_{i,N}, \theta_i \right),$$
where $x_{i,j}$ for $j = 1, \ldots, N$ denote the problem variables, and $N$ is the number of features. Thus, each individual produces the feature subset $S_i$ using its own threshold, as follows:
$$S_i = \{ x_{i,j} : x_{i,j} \geq \theta_i \}, \quad \text{for } j = 1, \ldots, N.$$

4. Experiments and Results

This section describes the results of the experimental work conducted in this study. The main goal of the experimental work was to check whether the threshold value of the feature selection process in the search space of the algorithm has any implications for the quality of the feature selection process. In line with this, the following experiments were conducted:
  • Determining the best baseline bio-inspired algorithm;
  • Investigating the impact of different threshold parameter control mechanisms on the classification accuracy;
  • Investigating the impact of different threshold parameter control mechanisms on the feature subset size;
  • Analyzing the algorithm’s convergence;
  • Comparing the best parameter control mechanism with state-of-the-art algorithms.
Although various evolutionary algorithms have been proposed and refined to address the challenges of feature selection, we selected a broad set of both classical and state-of-the-art bio-inspired algorithms for our experimental work. Specifically, we considered Differential Evolution (DE) [12], Particle Swarm Optimization (PSO) [13], Self-Adaptive Differential Evolution (jDE) [14], Linear Population Reduction Success History Adaptive Differential Evolution (LSHADE) [15], genetic algorithm (GA) [7], and Artificial Bee Colony algorithm (ABC) [16]. These algorithms were chosen because they represent the most widely used methods in Evolutionary Feature Selection, as reported in recent surveys [17,18,19], and have also been shown to perform well in related studies (e.g., LSHADE in the CEC competition). To provide a baseline and to cover fixed parameter strategies, we further included random search (RS) [20]. In line with the majority of the literature, all algorithms operate in a continuous search space, with candidate solutions mapped to the binary feature space using the standard threshold θ = 0.5 .
The implementations of all the considered algorithms were taken from the NiaPy framework [21]. To ensure a fair comparison, the population size of all the algorithms was set to 30, along with 3000 maximum function evaluations. Due to the stochastic nature of evolutionary algorithms, each experiment was executed 30 times for each dataset and algorithm. Let us emphasize that the default values of the other algorithms' parameters, as proposed in the corresponding literature, were employed during the experimental work. Because the selected feature selection approach was wrapper-based, the KNN machine learning algorithm was adopted with a value of K = 5. This classifier was selected due to its simplicity, computational efficiency, and robustness, which make it particularly suitable for wrapper-based feature selection. KNN has no explicit training phase, allowing for rapid evaluation of candidate feature subsets across many iterations, which is essential in large-scale experimental setups like ours. Furthermore, KNN is used commonly in the literature [22]. By focusing only on the KNN classifier, we ensured that variability in results comes primarily from the threshold adaptation mechanisms and evolutionary algorithms in the study, rather than from differences in classifier behavior. While the threshold adaptation mechanisms may behave differently with classifiers such as Support Vector Machines (SVMs) or Random Forest (RF), the present work deliberately isolates the effect of threshold adaptation. Consequently, all results should be understood as classifier-dependent. In all the described mechanisms, the parameters were set as follows: $\theta_{\min} = 0.1$, $\theta_{\max} = 0.9$, the target success rate $SR = 0.15$, and the learning rate $\eta = 0.05$.
The choice of fixed θ bounds ensured comparability across datasets, even though these values may have different implications in lower- and higher-dimensional feature spaces.
For evaluating the quality of the feature selection process, the considered metric in Equation (4) was applied using the final feature subset size variable | S | , and the classification accuracy of the selected feature subset parameter was weighted by parameter β . In this study, we tested three values of the weighting factor β , namely, β { 0.9 , 0.7 , 0.5 } . The value β = 0.9 is a commonly adopted setting in wrapper-based EFS methods [9] and ensures that classification accuracy has a dominant influence in the evaluation of feature subsets, while maintaining a lower weight for subset size. The lower values β = 0.7 and β = 0.5 were tested, to check whether the lower classification weight (and a higher feature weight) has any significant effect on selecting the final feature subsets. Lower values of β were not considered, as placing a higher weight on the feature subset size can severely degrade algorithm performance by driving the search toward extreme solutions (i.e., selecting almost no features; consequently, the trained classifier fails to capture relevant patterns and achieves low accuracy). Similarly, excessively high β values were avoided to ensure that feature subset size retained at least some influence in the fitness evaluation.
Although we agree that accuracy has limitations, especially in imbalanced or multi-class datasets, the majority of state-of-the-art feature selection studies report accuracy as the primary evaluation metric, which allows us to make a direct comparison with existing works. For consistency and comparability, we therefore adopted accuracy as our main performance measure. Accuracy was computed as the overall classification accuracy across all samples. While alternative metrics such as balanced accuracy, F1-score, or AUC could provide additional insights, incorporating them is beyond the scope of this study and represents an interesting direction for future work.
All the experiments were performed on a desktop computer, with the following configuration:
  • Intel(R) Core(TM) i9-10900KF CPU @ 3.70 GHz;
  • RAM: 65 GB;
  • Operating system: Linux Ubuntu 22.04 Jammy Jellyfish.
To evaluate the impact of a threshold in EFS, the datasets listed in Table 1 were used during the experimental work. The characteristics of each dataset are presented in terms of the number of instances, features, and number of classes. All the datasets contain diverse classification problems, as they contain a different number of instances and features [23]. These datasets are used commonly in the research literature [24].
Each dataset was split randomly into training and testing sets, with 70% of the samples going to the training and 30% to the testing set. When dividing, we ensured that the class proportions were preserved in both sets. One algorithm run consists of selecting the relevant features from the training set and then evaluating the performance of the selected features on the test set. During the training phase, a 5-fold cross-validation scheme was employed on the training set to ensure the robustness and generalizability of the selected feature subsets. Cross-validation is widely recognized as the standard approach to mitigate classifier overfitting in EFS, although it cannot fully eliminate the risk. The test set remained completely unseen throughout the training and feature selection process, and was only used for the final evaluation of the selected features. Note that the datasets were normalized so that no feature with a larger range disproportionately influenced the KNN classifier.
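The stratified 70/30 split described above can be sketched in pure Python (a simplified illustration; the study's actual pipeline additionally applies 5-fold cross-validation on the training part):

```python
import random
from collections import defaultdict

# Sketch of a stratified 70/30 split: samples are grouped by class and each
# class is divided in the same proportion, so the class balance is preserved
# in both the training and the testing set.

def stratified_split(labels, train_frac=0.7, seed=42):
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    train, test = [], []
    for indices in by_class.values():
        rng.shuffle(indices)
        cut = round(train_frac * len(indices))
        train.extend(indices[:cut])
        test.extend(indices[cut:])
    return sorted(train), sorted(test)

labels = [0] * 10 + [1] * 10
train, test = stratified_split(labels)
print(len(train), len(test))  # -> 14 6, with 7 samples of each class in training
```

Splitting each class separately is what guarantees the "equal division of classes" between the two sets.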
To assess the statistically significant difference between the algorithms in the test, we used the non-parametric Friedman test, which is used for comparing multiple algorithms over multiple datasets by ranks [25,26]. For each dataset, the algorithms were ranked according to their performance. The Friedman statistic was then computed from these ranks and used to test the null hypothesis assuming that all algorithms are equivalent. This means that they have the same expected rank. We performed post hoc analysis only when the Friedman null hypothesis was rejected.
Following Demšar [25], we applied the Nemenyi post hoc test to obtain pairwise comparisons based on average ranks, and to visualize the results with critical difference diagrams, which show which average ranks differ significantly [27]. The Nemenyi procedure is conservative, especially when many algorithms are compared, or when the number of datasets is modest, so its statistical power can be limited, and some pairs may remain indistinguishable [27].
To increase sensitivity, we identified a control method, defined as the algorithm with the lowest average rank, and then applied the Wilcoxon signed-rank test for paired comparisons between each algorithm and the control [26]. This choice follows the recommendation of Benavoli et al., who advocate paired distribution-free tests over procedures that rely only on mean ranks, because they offer greater power and a clearer interpretation [28]. In our reporting, the Nemenyi test provides graphical summaries through critical difference diagrams, while the Wilcoxon test provides the primary significance assessment. All tests were conducted at a significance level of α = 0.05.
The pairwise observations used in the tests were constructed as follows: for each of the 15 datasets, we considered two summary statistics of the experimental outcomes, namely, the mean and the median. This yielded 2 × 15 = 30 paired measurements for each algorithmic comparison and defined the effective sample size for the Wilcoxon analyses reported in the paper.
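The test sequence described above can be sketched with SciPy on hypothetical per-dataset accuracies (the data below are synthetic, for illustration only):

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(42)

# Synthetic per-dataset accuracies for three hypothetical algorithms
# over 15 datasets (the paper pairs mean and median summaries, giving 30).
base = rng.uniform(0.70, 0.90, size=15)
ctrl = base + rng.uniform(0.02, 0.05, size=15)   # control: best average rank
alg_b = base
alg_c = base - rng.uniform(0.01, 0.03, size=15)

# Omnibus Friedman test on ranks across datasets.
stat, p_friedman = friedmanchisquare(ctrl, alg_b, alg_c)

# Post hoc pairwise Wilcoxon signed-rank tests against the control,
# performed only when the Friedman null hypothesis is rejected.
p_b = p_c = None
if p_friedman < 0.05:
    p_b = wilcoxon(ctrl, alg_b).pvalue
    p_c = wilcoxon(ctrl, alg_c).pvalue
```

The Nemenyi critical differences used for the diagrams are not part of SciPy; packages such as scikit-posthocs provide them, but the Wilcoxon comparisons against the control carry the primary significance assessment here.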
In the remainder of this section, we present the detailed results of the statistical tests obtained from the conducted experiments.

4.1. Determining the Best Baseline Bio-Inspired Algorithm

The purpose of the first experiment was to identify the best-performing bio-inspired algorithm using a fixed value of the threshold control parameter, to be used later as the baseline for comparing the other bio-inspired algorithms equipped with different adaptation mechanisms. The control algorithm was identified in terms of the fitness function (Equation (4)), with the threshold control parameter held static at θ = 0.5 throughout the whole evolutionary run. Given the large number of results, only the aggregated statistical results over all datasets are reported in the corresponding tables.
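Equation (4) is not reproduced in this excerpt; the sketch below assumes a common wrapper-based fitness form that weights classification accuracy against the relative subset size by β, with a real-valued individual decoded into a feature mask via the threshold θ. The function names and the exact formula are illustrative assumptions, not the paper's definition:

```python
import numpy as np

def decode(x, theta=0.5):
    """Feature i is included in the subset iff x[i] > theta."""
    return np.asarray(x) > theta

def fitness(accuracy, mask, beta=0.9):
    """Weighted tradeoff between accuracy and relative subset size
    (to be maximized). Illustrative form; the paper's Equation (4)
    may differ in detail."""
    size_term = 1.0 - mask.sum() / mask.size
    return beta * accuracy + (1.0 - beta) * size_term

x = np.array([0.9, 0.2, 0.7, 0.4, 0.6])   # a real-valued individual
mask = decode(x, theta=0.5)                # selects features 0, 2, and 4
f = fitness(0.85, mask, beta=0.9)          # 0.9*0.85 + 0.1*(1 - 3/5) = 0.805
```

Under this form, β = 0.9 makes accuracy dominate, while β = 0.5 balances accuracy against the number of selected features, matching the three settings examined in the experiments.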
The results of the Friedman and Wilcoxon tests are presented in Figure 3, Figure 4 and Figure 5 for the weighting factor values β = 0.9, β = 0.7, and β = 0.5, respectively. Each figure is divided into two parts: a table presenting the results numerically and a diagram illustrating them graphically. The table contains the results of the Friedman tests, together with the corresponding Nemenyi and Wilcoxon post hoc tests, while the diagram presents the calculated Friedman ranks. The results of the Nemenyi post hoc test are represented as critical difference intervals, where the difference between two algorithms is statistically significant if their critical difference intervals do not overlap. The Friedman ranks of the individual baseline bio-inspired algorithms were compared with the rank obtained by the control algorithm in order to identify the best bio-inspired algorithm. The results of the Wilcoxon non-parametric test are reported as p-values, where a significant difference between two algorithms is indicated when p < 0.05. In Figure 3b, Figure 4b and Figure 5b, the best baseline algorithm identified by the Nemenyi post hoc test becomes the control algorithm for the Wilcoxon test. The control algorithm serves as the basis for comparison with the other baseline bio-inspired algorithms and is therefore denoted with the symbol ‡ in the table. Moreover, a significant difference between the control algorithm and the corresponding bio-inspired algorithm is marked with the symbol †.
The Nemenyi post hoc test results are presented graphically through the corresponding diagrams in Figure 3a, Figure 4a and Figure 5a. Each diagram displays the average ranks, represented by squares, while lines indicate the confidence intervals (critical differences) for the algorithms being compared. Lower rank values signify better-performing algorithms.
In summary, the jDE baseline bio-inspired algorithm attained the lowest average rank and was taken as the control algorithm. LSHADE was consistently the closest competitor, followed by DE, with GA, PSO, ABC, and RS trailing behind. Under the Nemenyi test, the confidence intervals of jDE overlapped with those of LSHADE (and marginally with DE) at the weighting factors β = 0.9 and β = 0.7, so these algorithms were not significantly distinguished from the control algorithm, whereas GA, PSO, ABC, and RS were significantly worse. At β = 0.5, DE's interval no longer overlapped with that of jDE; therefore, DE performed significantly worse. LSHADE remained statistically indistinguishable from the control across all three weighting factor values. The Wilcoxon signed-rank test, which has higher power than the Nemenyi test, corroborated this finding with a stronger separation. For each β, the pairwise Wilcoxon tests indicated significant differences between jDE and the other methods (all p ≤ 0.05), while the differences between jDE and LSHADE were not significant.
Overall, these results suggest a stable two-tier structure, where jDE and LSHADE form the top tier, tied statistically under both post hoc procedures across the tested β values, while DE occupies a borderline position that becomes clearly inferior when the objective places more weight on feature subset size ( β = 0.5 ). The algorithms GA, PSO, ABC, and RS constituted the lower tier, being consistently worse than the control under both post hoc analyses. For selecting a baseline optimizer, jDE is therefore a reasonable default, with LSHADE as an equally competitive alternative, depending on the implementation or runtime preferences.

4.2. Impact of Different Threshold Parameter Control Mechanisms on the Classification Accuracy

The purpose of this study was to analyze how different threshold parameter control mechanisms influence the classification accuracy. Since the jDE algorithm obtained the best results in the first experiment, it was used as the basic algorithm whose results were to be improved using the various threshold parameter controls. The jDE algorithm was therefore executed 30 times for each evaluation dataset using five different threshold parameter control mechanisms: LR, CR, SRA, PC, and SA. The obtained classification accuracies and final thresholds are presented in Table 2 for all observed weighting factor values β. In the table, the row marked “# best” indicates the number of datasets on which each mechanism obtained the best results at a specific β value.
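The exact update rules of these mechanisms are specified earlier in the paper and are not reproduced in this excerpt. Purely as an illustration of the two families compared here, a deterministic linear schedule and a jDE-style per-individual self-adaptation might be sketched as follows; the function names, endpoints, and the rate τ are assumptions, not the paper's definitions:

```python
import random

def linear_threshold(gen, max_gen, theta_start=0.9, theta_end=0.1):
    """Deterministic schedule: theta moves linearly from theta_start to
    theta_end over the run (endpoint values are illustrative)."""
    frac = gen / max_gen
    return theta_start + frac * (theta_end - theta_start)

def self_adapt_theta(theta, tau=0.1, rng=random):
    """jDE-style per-individual self-adaptation: with probability tau the
    individual's threshold is re-sampled uniformly from [0, 1]; otherwise
    it is inherited unchanged and evolves alongside the solution."""
    return rng.uniform(0.0, 1.0) if rng.random() < tau else theta
```

A deterministic schedule changes θ as a pure function of the generation counter, whereas a self-adaptive θ is encoded in each individual and survives only when it yields fitter solutions, which is why the two families behave differently across the β settings below.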
Table 2 shows that within jDE, threshold parameter control is most beneficial when accuracy dominates the objective (β = 0.9). In line with this, the SA threshold parameter control achieves the most wins (5), followed by the PC parameter control (4), with many of the best runs converging to higher thresholds (often 0.88–0.90). The jDE algorithm incorporating deterministic parameter control wins only sporadically, and the fixed baseline (θ = 0.5) tops just a few datasets. As the objective gives more weight to the size of the feature subset (β = 0.7), the fixed-baseline algorithm regains ground (6 wins), while the jDE employing the PC parameter control remains a strong, stable second (5 wins). The same algorithm with the SA threshold parameter control stays competitive (3 wins), indicating that regulating a target selection rate is often sufficient. At β = 0.5, where accuracy and subset size are balanced, the results diversify: the baseline algorithm again leads (6 wins), the SA threshold parameter control remains effective on several problems (4 wins), and the more reactive mechanisms (SRA and CR) register isolated wins (two each), consistent with scenarios where mid or lower thresholds are preferable. Overall, the jDE applying the SA threshold parameter control offers the highest upside across datasets, while the PC parameter control is the safest default for β ∈ {0.9, 0.7}. We can conclude that deterministic threshold parameter control is inconsistent, while the SRA threshold parameter control is dataset-sensitive.
We can also observe a clear trend that on high-dimensional datasets, the SA and PC threshold parameter controls tend to achieve the best results.
Figure 6, Figure 7 and Figure 8 extend the jDE analysis by comparing the threshold parameter control mechanisms at the three weighting factor values β. The Friedman test identified the SA threshold parameter control as the best method at β = 0.9 and β = 0.5, and the PC threshold parameter control at β = 0.7. At β = 0.9, the Nemenyi post hoc test did not declare the other threshold parameter control mechanisms significantly different from SA, while the Wilcoxon test detected that the LR, CR, and SRA mechanisms were significantly worse (all p ≤ 0.03), whereas the Baseline and PC mechanisms were not (p = 0.19 and 0.36, respectively). At β = 0.7, the PC threshold parameter control clearly leads, since the Friedman and Wilcoxon tests both separated it from the LR, CR, and SRA mechanisms (all significant), while the SA mechanism remained statistically comparable (Wilcoxon p = 0.27). At β = 0.5, the SA threshold parameter control again attained the best rank, with the post hoc Nemenyi test marking the SRA mechanism as significantly worse. Additionally, the Wilcoxon pairwise non-parametric statistical tests reported the PC, SRA, LR, and CR mechanisms as worse (p ≤ 0.05), while the Baseline remained indistinguishable (p = 0.65). Overall, the SA per-individual threshold parameter control was the most reliable choice when accuracy dominates or is balanced with the feature subset size in the fitness function, while the PC mechanism was preferred at the intermediate setting (β = 0.7).

4.3. Impact of the Feature Threshold Parameter Control Mechanisms on the Feature Subset Size

The purpose of this study was to analyze the effect of different feature threshold parameter control mechanisms on the size of the final feature subset. The obtained feature subset sizes, along with final thresholds, are presented in Table 3 for all β values.
Table 3 reports the selected feature subset sizes for jDE using different threshold parameter control mechanisms and three weighting factor values β = {0.9, 0.7, 0.5}. Overall, the jDE using the adaptive parameter control mechanisms shrank the subset noticeably compared to the Baseline jDE algorithm, with the “# best” row indicating that the PC threshold parameter control dominated when accuracy was emphasized (β = 0.9: 9 wins), while the SA threshold parameter control became progressively stronger as the importance of the feature subset size increased (β = 0.7: 6 wins; β = 0.5: 8 wins). At β = 0.9, the jDE incorporating the PC threshold parameter control achieved large reductions on high-dimensional datasets (e.g., BrainTumor1: 2764 → 1104; LungCancer: 6084 → 2338), while the same algorithm with the SA threshold parameter control was close behind. The deterministic threshold parameter controls incorporated into the jDE algorithm reduced the feature subset size, but less aggressively, while the SRA threshold parameter control was occasionally unstable, even inflating the subset (e.g., UrbanLandCover: 48.9 → 81.0). At β = 0.7, the jDE using the SA threshold parameter control was best on most large-feature datasets (for example, BrainTumor1: 877; LungCancer: 1609), while the PC threshold parameter control remained a strong second. The deterministic schedules (LR and CR) were best on a few smaller datasets (e.g., Musk1, HillValley). At β = 0.5, the jDE employing the SA threshold parameter control was the clear winner in terms of minimizing the subset size, reaching the smallest subsets on the majority of datasets (e.g., ProstateTumor1: 611 vs. baseline 2745; BrainTumor1: 681 vs. 2734), while PC still provided substantial reductions.
Across all weighting factor values β, it was notable that datasets with fewer features (e.g., German, Segmentation, Ionosphere) naturally reached small absolute feature subset sizes, sometimes matching the baseline floor, whereas on high-dimensional datasets the gains from adaptive threshold parameter control were both larger in magnitude and typically lower in variance (see, for instance, Isolet5 and Madelon). Within the jDE algorithm, the PC threshold parameter control was better suited when accuracy predominates, while the SA threshold parameter control produced better results as the fitness function placed more weight on the number of selected features.
Figure 9, Figure 10 and Figure 11 compare the threshold parameter control mechanisms built into the jDE algorithm with respect to the size of the selected feature subset for all observed weighting factor values β = {0.9, 0.7, 0.5}. When accuracy dominated (β = {0.9, 0.7}), the PC threshold parameter control attained the best average rank, while the SA threshold parameter control was statistically indistinguishable from it according to the Wilcoxon non-parametric statistical test, whereas the Baseline jDE and the jDE using the LR, CR, and SRA threshold parameter controls yielded significantly larger subsets (all p ≤ 0.05). As the objective placed more weight on limiting the number of selected features (β = 0.5), the jDE incorporating the SA threshold parameter control became the best and consistently delivered the smallest subsets. The same algorithm using the PC threshold parameter control remained the closest competitor but produced significantly larger subset sizes according to the Wilcoxon non-parametric statistical test. Across all weighting factor values β, the jDE using deterministic threshold parameter controls (i.e., LR and CR) reduced the feature subset sizes relative to the Baseline jDE algorithm, but remained significantly worse than the jDE with the SA threshold parameter control. The SRA threshold parameter control stood out as the most unreliable, often producing the largest subsets. In short, a global target on the selection rate (the PC mechanism) is sufficient when accuracy is important, while the SA per-individual threshold parameter control becomes decisively superior as the fitness function increasingly rewards smaller feature subsets.

4.4. Convergence Analysis

This experiment compared the convergence rates of different bio-inspired algorithms for wrapper-based feature selection under different feature selection threshold parameter controls. The convergence rates are presented in terms of fitness convergence (see Figure 12, Figure 13 and Figure 14) and by using the Generation to Convergence (GTC) metric, which is defined as the generation number at which the best fitness value was first obtained during the run of the evolutionary algorithm. This metric captures how quickly an algorithm is able to reach its “optimal” solution, providing additional insight into its convergence speed. The results of the GTC metric are reported in Table 4.
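Under this definition, the GTC can be computed directly from the per-generation best-fitness history; the helper below is an illustrative sketch (assuming a maximized fitness and 1-indexed generations):

```python
def generation_to_convergence(best_fitness_history):
    """GTC: the (1-indexed) generation at which the run's best fitness
    value first appears, assuming a maximized fitness."""
    best = max(best_fitness_history)
    return best_fitness_history.index(best) + 1

# Example run: the best fitness 0.81 is first reached at generation 4
# and only plateaus afterwards, so GTC = 4.
history = [0.61, 0.70, 0.78, 0.81, 0.81, 0.81]
gtc = generation_to_convergence(history)
```

A lower GTC thus indicates that the algorithm reached its final best solution earlier in the run.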
Note that the mechanisms that previously delivered strong accuracy and smaller feature subsets also tended to converge in fewer generations, although the balance depends on the weighting factor β. At β = 0.9, the jDE armed with the SA threshold parameter control most often reached convergence earliest, for example, on BrainTumor1, Leukemia1, LungCancer, HillValley, Ionosphere, Sonar, and UrbanLandCover. This aligns with its accuracy gains and near-minimal subset sizes at this setting. The jDE using the PC threshold parameter control was close behind and was best on some datasets, e.g., Musk1 and Madelon. When the fitness function placed more weight on limiting the number of selected features, that is, at β = 0.7, the same algorithm with the PC threshold parameter control frequently achieved the lowest GTC on the high-dimensional datasets in the first block of datasets, while the CR threshold parameter control converged fastest on several medium-size datasets, e.g., Ionosphere, Isolet5, Libras, Musk1, Segmentation, and Sonar. This mirrors the earlier results, where the jDE using the SA threshold parameter control minimized the subset size, but the PC or CR threshold parameter controls often reached convergence earlier, indicating a balance between speed and compactness. At β = 0.5, the jDE with the PC threshold parameter control was best on Leukemia1, ProstateTumor1, Libras, Madelon, Musk1, Segmentation, and UrbanLandCover, while the same algorithm with the SA threshold parameter control was best on BrainTumor1 and LungCancer. The CR threshold parameter control was the fastest on a few datasets, such as Arrhythmia, HillValley, and Ionosphere.
Across all observed weighting factor values β, the jDE using the SRA threshold parameter control rarely minimized the GTC. The Baseline jDE algorithm occasionally converged the fastest on very low-dimensional datasets, such as German and Segmentation at β = 0.9, which is consistent with the smaller search space. For early convergence with competitive accuracy, the jDE armed with the PC threshold parameter control is a safe default, especially at β = 0.7 or β = 0.5. The SA threshold parameter control offers similar or better convergence speed at β = 0.9 and remains attractive when the goal is fast convergence, strong accuracy, and small feature subsets. Deterministic threshold parameter controls can accelerate convergence on some datasets, but this should be weighed against their subpar performance.
The fitness convergence rates are reported in Figure 12, Figure 13 and Figure 14 for all values of the weighting factor β = {0.9, 0.7, 0.5}, where the solid lines represent the average fitness and the dotted lines represent the average θ for each dataset.

4.5. Comparison of the Best Threshold Parameter Control Mechanism with the State-of-the-Art Algorithms

To assess the effectiveness and external validity of the proposed threshold parameter control mechanisms, we compared our best-performing bio-inspired algorithm (i.e., the jDE using the SA threshold parameter control) with three representative state-of-the-art algorithms from the literature. We selected studies that reported average classification accuracy under wrapper-based settings and extracted the published mean classification accuracies and standard deviations. The comparison was restricted to datasets that overlap with ours, to ensure a like-for-like evaluation. One of the selected papers [9] departed from the conventional fixed threshold θ = 0.5 and used θ = 0.6, with the same fitness function formulation and the weighting factor value β = 0.9. The other two papers [29,30] implemented method-specific procedures within PSO or DE, and used a fixed threshold value θ = 0.5. For each study, we ran a paired Wilcoxon non-parametric signed-rank statistical test, with the null hypothesis that the jDE with the SA threshold parameter control and the method taken from the literature perform equally. The resulting p-values in Table 5 indicate statistically significant improvements in favor of the jDE using the SA threshold parameter control, including against the method that increased the fixed threshold to θ = 0.6. These findings suggest that modifying the threshold during the evolutionary run provides a measurable advantage over fixed-threshold designs and over method-specific heuristics. We note that the original studies may differ in their train and test data partitioning. Despite this heterogeneity, the direction and magnitude of the changes are consistent across the overlapping datasets, which supports threshold parameter control as a default component in wrapper-based EFS.

4.6. Discussion

The results consistently support threshold parameter control as a key design choice in wrapper-based EFS. In the algorithm-level comparison, the jDE algorithm emerged as the strongest baseline, LSHADE was statistically indistinguishable from it in several settings, and the DE, GA, PSO, ABC, and RS algorithms ranked lower according to both the Nemenyi and Wilcoxon non-parametric statistical tests. Within jDE, the obtained accuracies show that the SA threshold parameter control attained the largest number of per-dataset wins at β = 0.9, with the PC threshold parameter control a close second. As the weight on the feature subset size increased (β = 0.7 and β = 0.5), the jDE using the PC threshold parameter control remained competitive, while the same algorithm armed with the SA threshold parameter control continued to produce the best results. Deterministic threshold parameter control in jDE can be useful but rarely dominated, while the SRA threshold parameter control was the most sensitive to short-term improvements.
The subset size analysis aligned with these trends. When accuracy dominated at the weighting factor value β = { 0.9 , 0.7 } , the jDE using the PC threshold parameter control produced the smallest sets on most high-dimensional datasets, while the SA threshold parameter control was not statistically different from it. When the fitness function placed more emphasis on limiting the number of selected features, the jDE using the SA threshold parameter control became the best choice and won most frequently at the weighting factor β = 0.5 . The deterministic threshold parameter controls reduced the feature subsets relative to the baseline jDE algorithm but remained significantly worse than the best mechanism, while the SRA threshold parameter control often yielded the largest feature subsets. These patterns confirm that a global target on the selection rate is effective when accuracy is the priority, whereas the SA per-individual threshold parameter control becomes advantageous once feature subset size is more important.
The results also show that the obtained accuracies tend to vary only slightly across independent runs, whereas the size of the selected feature subsets exhibits greater variability. This difference can be attributed to the stochastic initialization of the algorithms, which encourages exploration of diverse regions of the search space. Importantly, despite fluctuations in the subset size, classification accuracy remained stable, suggesting that different subsets found can yield comparably good performance.
The convergence analysis results are consistent with the accuracy and feature subset size findings. At β = 0.9, the jDE using the SA threshold parameter control most often converged in fewer generations, while maintaining better accuracy and smaller feature subsets. At β = 0.7 and β = 0.5, the jDE with the built-in PC threshold parameter control frequently converged fastest on large datasets, while the CR threshold parameter control could be the fastest on some medium-size datasets. The fixed-baseline jDE algorithm can converge quickly on very low-dimensional datasets, which was expected given the small search space. The jDE using the SRA threshold parameter control rarely minimized the number of generations and can be unstable in terms of convergence.
A comparison with the results of state-of-the-art algorithms also confirms these conclusions. Using the overlapping datasets and the reported means and standard deviations from three representative studies, the Wilcoxon non-parametric statistical test shows that the jDE with the SA threshold parameter control mechanism achieved significantly higher accuracy than all three references at β = 0.9. This includes a method that had already improved over the conventional setting by fixing θ = 0.6. The direction of the differences was consistent across the shared datasets, despite minor differences in the evaluation protocols.
While the results demonstrate the effectiveness of threshold parameter control in wrapper-based EFS, several limitations of the present study should be noted. First, although cross-validation was employed to mitigate classifier overfitting, wrapper-based approaches remain vulnerable in extremely high-dimensional datasets with limited samples, where the risk of overfitting cannot be fully eliminated. Second, certain datasets exhibit substantial class imbalance, which may bias the KNN classifier toward majority classes and affect reported accuracy. Cross-validation reduces but does not remove these biases completely. Finally, it is important to note that the obtained results may not generalize directly to more complex classifiers.

5. Conclusions

This study examined different threshold adaptation mechanisms in wrapper-based EFS for classification. We evaluated five referenced bio-inspired algorithms and a random search method on widely used benchmark datasets. The analysis covered three aspects of performance, namely, classification accuracy, the size of the selected feature subset, and the number of generations to convergence. By holding the objective and the evaluation protocol fixed, we isolated the effect of threshold control from other algorithmic factors.
The results show that threshold adaptation in feature selection should be considered a default design choice. Within jDE, which emerged as a strong baseline with LSHADE as a close alternative, SA achieved the highest classification accuracy, most often when classification accuracy had a higher weight in the fitness function. As the fitness function placed more weight on limiting the number of selected features, SA also produced the smallest subsets most frequently. The algorithm also converged faster when using the SA mechanism and a higher weight on classification in the fitness function, whereas PC often converged earlier on larger datasets when the feature subset size gained more importance.
A comparison with representative state-of-the-art methods on overlapping datasets further supports these conclusions. Based on statistical tests on the reported results, SA obtained significantly better results. This held even when the literature had already improved on the fixed threshold by moving from the default value to a larger constant. The combination of internal benchmarks and external comparisons therefore indicates that adapting the threshold during the evolutionary run provides measurable benefits over fixed-threshold designs.
For future research, we want to test more recent bio-inspired algorithms and use even larger datasets. It would also be interesting to investigate hybrid adaptation mechanisms that combine a global target with per-individual threshold mechanisms, and to study per-feature threshold adaptation. Another priority is a multi-objective formulation of the problem that treats accuracy and feature subset size as separate objectives. In addition, extending the study beyond the KNN classifier would strengthen the generalizability of the findings and reveal whether threshold adaptation interacts differently with classifiers of varying complexity. Finally, theoretical analysis of the stability of threshold dynamics and their interaction with population diversity would deepen the understanding of when and why adaptation is useful.

Author Contributions

Conceptualization, U.M.; formal analysis, U.M. and I.F.; funding acquisition, U.M. and I.F.J.; investigation, U.M.; methodology, U.M. and I.F.J.; project administration, I.F.; software, U.M. and I.F.J.; supervision, I.F.; validation, U.M.; visualization, U.M.; writing—original draft, U.M., I.F.J. and I.F.; writing—review and editing, U.M., I.F.J. and I.F. All authors have read and agreed to the published version of the manuscript.

Funding

Iztok Fister Jr. thanks the financial support from the Slovenian Research Agency (Program No. P2-0057). Uroš Mlakar thanks the financial support from the Slovenian Research and Innovation Agency (Program No. P2-0041 and Program No. J2-60046).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this study is available on request from the corresponding author.

Acknowledgments

The authors would like to thank the editors and reviewers for providing useful comments and suggestions to improve the quality of this article. During the preparation of this work, the authors used language tools such as Writefull, DeepL, and ChatGPT 5.0 in order to improve the article’s readability. After using these tools, the authors reviewed and edited the content as needed and take full responsibility for the content of the published article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, L.; Wang, Y.; Chang, Q. Feature selection methods for big data bioinformatics: A survey from the search perspective. Methods 2016, 111, 21–31. [Google Scholar] [CrossRef] [PubMed]
  2. Nguyen, B.H.; Xue, B.; Zhang, M. A survey on swarm intelligence approaches to feature selection in data mining. Swarm Evol. Comput. 2020, 54, 100663. [Google Scholar] [CrossRef]
  3. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  4. Hoque, N.; Bhattacharyya, D.K.; Kalita, J.K. MIFS-ND: A mutual information-based feature selection method. Expert Syst. Appl. 2014, 41, 6371–6385. [Google Scholar] [CrossRef]
  5. Eiben, A.E.; Smith, J.E. Introduction to Evolutionary Computing, 2nd ed.; Springer Publishing Company: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  6. Blum, C.; Merkle, D. Swarm Intelligence: Introduction and Applications; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar] [CrossRef]
  7. Deng, S.; Li, Y.; Wang, J.; Cao, R.; Li, M. A feature-thresholds guided genetic algorithm based on a multi-objective feature scoring method for high-dimensional feature selection. Appl. Soft Comput. 2023, 148, 110765. [Google Scholar] [CrossRef]
  8. Kononenko, I. Estimating attributes: Analysis and extensions of RELIEF. In Proceedings of the European Conference on Machine Learning, Catania, Italy, 6–8 April 1994; pp. 171–182. [Google Scholar]
  9. Li, A.D.; Xue, B.; Zhang, M. Improved binary particle swarm optimization for feature selection with new initialization and search space reduction strategies. Appl. Soft Comput. 2021, 106, 107302. [Google Scholar] [CrossRef]
  10. Fister, D.; Fister, I.; Jagrič, T.; Fister, I., Jr.; Brest, J. A novel self-adaptive differential evolution for feature selection using threshold mechanism. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI), Bangalore, India, 18–21 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 17–24. [Google Scholar]
  11. Fister, I., Jr.; Brest, J.; Mlakar, U.; Fister, I. Towards the universal framework of stochastic nature-inspired population-based algorithms. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Athens, Greece, 6–9 December 2016; pp. 1–8. [Google Scholar] [CrossRef]
  12. Hancer, E.; Xue, B.; Zhang, M. Differential evolution for filter feature selection based on information theory and feature ranking. Knowl.-Based Syst. 2018, 140, 103–119. [Google Scholar] [CrossRef]
  13. Osei-Kwakye, J.; Han, F.; Amponsah, A.A.; Ling, Q.H.; Abeo, T.A. A diversity enhanced hybrid particle swarm optimization and crow search algorithm for feature selection. Appl. Intell. 2023, 53, 20535–20560. [Google Scholar] [CrossRef]
  14. Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Trans. Evol. Comput. 2006, 10, 646–657. [Google Scholar] [CrossRef]
  15. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1658–1665. [Google Scholar]
  16. Karaboğa, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report; Erciyes University: Kayseri, Turkey, 2005. [Google Scholar]
  17. Agrawal, P.; Abutarboush, H.F.; Ganesh, T.; Mohamed, A.W. Metaheuristic algorithms on feature selection: A survey of one decade of research (2009–2019). IEEE Access 2021, 9, 26766–26791. [Google Scholar] [CrossRef]
  18. Dokeroglu, T.; Deniz, A.; Kiziloz, H.E. A comprehensive survey on recent metaheuristics for feature selection. Neurocomputing 2022, 494, 269–296. [Google Scholar] [CrossRef]
  19. Abu Khurma, R.; Aljarah, I.; Sharieh, A.; Abd Elaziz, M.; Damaševičius, R.; Krilavičius, T. A review of the modification strategies of the nature inspired algorithms for feature selection problem. Mathematics 2022, 10, 464. [Google Scholar] [CrossRef]
  20. Anderson, R.L. Recent Advances in Finding Best Operating Conditions. J. Am. Stat. Assoc. 1953, 48, 789–798. [Google Scholar] [CrossRef]
  21. Vrbančič, G.; Brezočnik, L.; Mlakar, U.; Fister, D.; Fister, I., Jr. NiaPy: Python microframework for building nature-inspired algorithms. J. Open Source Softw. 2018, 3, 613. [Google Scholar] [CrossRef]
  22. Rostami, M.; Berahmand, K.; Nasiri, E.; Forouzandeh, S. Review of swarm intelligence-based feature selection methods. Eng. Appl. Artif. Intell. 2021, 100, 104210. [Google Scholar] [CrossRef]
  23. Dua, D.; Graff, C. UCI Machine Learning Repository. 2017. Available online: https://archive.ics.uci.edu/ (accessed on 20 September 2025).
  24. Song, X.; Zhang, Y.; Zhang, W.; He, C.; Hu, Y.; Wang, J.; Gong, D. Evolutionary computation for feature selection in classification: A comprehensive survey of solutions, applications and challenges. Swarm Evol. Comput. 2024, 90, 101661. [Google Scholar] [CrossRef]
  25. Demšar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar]
  26. Rey, D.; Neuhäuser, M. Wilcoxon-Signed-Rank Test. In International Encyclopedia of Statistical Science; Springer: Berlin/Heidelberg, Germany, 2011; pp. 1658–1659. [Google Scholar] [CrossRef]
  27. Nemenyi, P. Distribution-Free Multiple Comparisons. Ph.D. Thesis, Princeton University, Princeton, NJ, USA, 1963. [Google Scholar]
  28. Benavoli, A.; Corani, G.; Mangili, F. Should we really use post-hoc tests based on mean-ranks? J. Mach. Learn. Res. 2016, 17, 152–161. [Google Scholar]
  29. Hancer, E. Differential evolution for feature selection: A fuzzy wrapper–filter approach. Soft Comput. 2019, 23, 5233–5248. [Google Scholar] [CrossRef]
  30. Hancer, E.; Xue, B.; Zhang, M. Fuzzy filter cost-sensitive feature selection with differential evolution. Knowl.-Based Syst. 2022, 241, 108259. [Google Scholar] [CrossRef]
Figure 1. General process of EFS.
Figure 2. Example of a genotype–phenotype mapping of an individual using the feature threshold. This example depicts the feature selection process on a dataset with 7 features and a threshold θ = 0.60. The selected feature subset S according to the threshold is {F1, F2, F5}.
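The genotype–phenotype mapping described in the caption above can be sketched in a few lines. The gene values below are hypothetical, chosen only to reproduce the subset {F1, F2, F5} from the example; the mapping assumes a feature is selected when its gene exceeds the threshold θ.

```python
def select_features(genotype, theta):
    """Genotype -> phenotype mapping: feature F_i enters the subset S
    when its real-valued gene exceeds the threshold theta."""
    return [f"F{i + 1}" for i, gene in enumerate(genotype) if gene > theta]

# Hypothetical gene values chosen to reproduce the subset from Figure 2.
genotype = [0.72, 0.81, 0.33, 0.15, 0.95, 0.48, 0.59]
subset = select_features(genotype, theta=0.60)  # ['F1', 'F2', 'F5']
```

With a static threshold this mapping is fixed for the whole run; the adaptive mechanisms studied in the paper instead vary θ during the search.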
Figure 3. (a): Graphical representation of the Friedman critical distances for determining the best baseline bio-inspired algorithm using weighting factor β = 0.9; (b): Friedman and Wilcoxon statistical tests to determine the best baseline bio-inspired algorithm using weighting factor β = 0.9. The ‡ symbol denotes the best-performing method according to the Friedman test, whereas the † indicates a statistically significant difference.
Figure 4. (a): Graphical representation of the Friedman critical distances for determining the best baseline bio-inspired algorithm using weighting factor β = 0.7; (b): Friedman and Wilcoxon statistical tests to determine the best baseline bio-inspired algorithm using weighting factor β = 0.7. The ‡ symbol denotes the best-performing method according to the Friedman test, whereas the † indicates a statistically significant difference.
Figure 5. (a): Graphical representation of the Friedman critical distances for determining the best baseline bio-inspired algorithm using weighting factor β = 0.5; (b): Friedman and Wilcoxon statistical tests to determine the best baseline bio-inspired algorithm using weighting factor β = 0.5. The ‡ symbol denotes the best-performing method according to the Friedman test, whereas the † indicates a statistically significant difference.
Figure 6. (a): Graphical representation of the Friedman critical distances for the classification accuracy using the jDE algorithm with weighting factor β = 0.9; (b): Friedman and Wilcoxon statistical tests on classification accuracy using different threshold parameter control mechanisms for the jDE algorithm with weighting factor β = 0.9. The ‡ symbol denotes the best-performing method according to the Friedman test, whereas the † indicates a statistically significant difference.
Figure 7. (a): Graphical representation of the Friedman critical distances for the classification accuracy using the jDE algorithm with weighting factor β = 0.7; (b): Friedman and Wilcoxon statistical tests on classification accuracy using different threshold parameter control mechanisms for the jDE algorithm with weighting factor β = 0.7. The ‡ symbol denotes the best-performing method according to the Friedman test, whereas the † indicates a statistically significant difference.
Figure 8. (a): Graphical representation of the Friedman critical distances for the classification accuracy using the jDE algorithm with weighting factor β = 0.5; (b): Friedman and Wilcoxon statistical tests on classification accuracy using different threshold parameter control mechanisms for the jDE algorithm with weighting factor β = 0.5. The ‡ symbol denotes the best-performing method according to the Friedman test, whereas the † indicates a statistically significant difference.
Figure 9. (a): Graphical representation of the Friedman critical distances for the feature subset size using the jDE algorithm with weighting factor β = 0.9; (b): Friedman and Wilcoxon statistical tests on feature subset size using different threshold parameter control mechanisms for the jDE algorithm with weighting factor β = 0.9. The ‡ symbol denotes the best-performing method according to the Friedman test, whereas the † indicates a statistically significant difference.
Figure 10. (a): Graphical representation of the Friedman critical distances for the feature subset size using the jDE algorithm with weighting factor β = 0.7; (b): Friedman and Wilcoxon statistical tests on feature subset size using different threshold parameter control mechanisms for the jDE algorithm with weighting factor β = 0.7. The ‡ symbol denotes the best-performing method according to the Friedman test, whereas the † indicates a statistically significant difference.
Figure 11. (a): Graphical representation of the Friedman critical distances for the feature subset size using the jDE algorithm with weighting factor β = 0.5; (b): Friedman and Wilcoxon statistical tests on feature subset size using different threshold parameter control mechanisms for the jDE algorithm with weighting factor β = 0.5. The ‡ symbol denotes the best-performing method according to the Friedman test, whereas the † indicates a statistically significant difference.
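The Friedman critical-distance plots in the figures above follow the standard Demšar procedure: methods are ranked per dataset, the Friedman statistic is computed over average ranks, and the Nemenyi critical distance decides which rank differences are significant. A minimal stdlib-only sketch, using as sample input the Baseline, PC, and SA accuracies of the first five datasets of Table 2 (β = 0.9); restricting to three methods and five datasets keeps the example short.

```python
import math

# Per-dataset accuracies taken from Table 2 (beta = 0.9) for three columns.
scores = {
    "Baseline": [0.838, 0.876, 0.907, 0.889, 0.627],
    "PC":       [0.852, 0.883, 0.910, 0.910, 0.601],
    "SA":       [0.832, 0.898, 0.913, 0.910, 0.611],
}
methods = list(scores)
k, n = len(methods), len(scores["Baseline"])

# Average Friedman ranks: rank 1 = best accuracy per dataset, ties averaged.
avg_rank = {m: 0.0 for m in methods}
for i in range(n):
    ordered = sorted((scores[m][i] for m in methods), reverse=True)
    for m in methods:
        tied = [pos for pos, v in enumerate(ordered, start=1) if v == scores[m][i]]
        avg_rank[m] += sum(tied) / len(tied) / n

# Friedman chi-square statistic over the average ranks.
chi2 = 12 * n / (k * (k + 1)) * (
    sum(r * r for r in avg_rank.values()) - k * (k + 1) ** 2 / 4
)

# Nemenyi critical distance CD = q_alpha * sqrt(k(k+1)/(6N)); two methods
# differ significantly when their average ranks differ by more than CD.
Q_ALPHA_05 = {2: 1.960, 3: 2.343, 4: 2.569, 5: 2.728, 6: 2.850}  # Demsar (2006)
cd = Q_ALPHA_05[k] * math.sqrt(k * (k + 1) / (6 * n))
```

With so few datasets the critical distance is large (≈1.48 for three methods over five datasets), which is why the paper's tests use all fifteen datasets.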
Figure 12. Convergence analysis of fitness and θ for jDE and all datasets for β = 0.9: (a): BrainTumor1, (b): Leukemia1, (c): LungCancer, (d): ProstateTumor1, (e): Arrhythmia, (f): German, (g): HillValley, (h): Ionosphere, (i): Isolet5, (j): Libras, (k): Madelon, (l): Musk1, (m): Segmentation, (n): Sonar, and (o): UrbanLandCover.
Figure 13. Convergence analysis of fitness and θ for jDE and all datasets for β = 0.7: (a): BrainTumor1, (b): Leukemia1, (c): LungCancer, (d): ProstateTumor1, (e): Arrhythmia, (f): German, (g): HillValley, (h): Ionosphere, (i): Isolet5, (j): Libras, (k): Madelon, (l): Musk1, (m): Segmentation, (n): Sonar, and (o): UrbanLandCover.
Figure 14. Convergence analysis of fitness and θ for jDE and all datasets for β = 0.5: (a): BrainTumor1, (b): Leukemia1, (c): LungCancer, (d): ProstateTumor1, (e): Arrhythmia, (f): German, (g): HillValley, (h): Ionosphere, (i): Isolet5, (j): Libras, (k): Madelon, (l): Musk1, (m): Segmentation, (n): Sonar, and (o): UrbanLandCover.
Table 1. Experimental datasets used in the study.
Dataset | Features | Instances | Classes
Arrhythmia | 279 | 452 | 13
German | 20 | 1000 | 2
HillValley | 100 | 1212 | 2
Ionosphere | 34 | 351 | 2
Isolet5 | 617 | 1559 | 26
Libras | 90 | 360 | 15
Madelon | 500 | 2600 | 2
Musk1 | 166 | 476 | 2
Segmentation | 19 | 210 | 7
Sonar | 60 | 208 | 2
UrbanLandCover | 147 | 675 | 9
BrainTumor1 | 5920 | 90 | 5
ProstateTumor1 | 5966 | 102 | 2
LungCancer | 12,600 | 203 | 5
Leukemia1 | 5327 | 72 | 3
Table 2. Results of the average classification accuracies for all datasets using all threshold parameter control mechanisms and the basic jDE algorithm for all observed weighting factor values β; values in parentheses report the corresponding average threshold θ. The best-performing threshold parameter control mechanism for each dataset is highlighted in bold.
Dataset | Baseline | LR | CR | SRA | PC | SA
β = 0.9
BrainTumor1 | 0.838 ± 0.03 (0.50 ± 0.00) | 0.844 ± 0.02 (0.88 ± 0.03) | 0.831 ± 0.04 (0.89 ± 0.01) | 0.841 ± 0.03 (0.85 ± 0.07) | 0.852 ± 0.03 (0.90 ± 0.00) | 0.832 ± 0.03 (0.90 ± 0.01)
Leukemia1 | 0.876 ± 0.03 (0.50 ± 0.00) | 0.868 ± 0.06 (0.88 ± 0.04) | 0.880 ± 0.05 (0.88 ± 0.03) | 0.892 ± 0.05 (0.87 ± 0.06) | 0.883 ± 0.06 (0.90 ± 0.00) | 0.898 ± 0.05 (0.90 ± 0.00)
LungCancer | 0.907 ± 0.02 (0.50 ± 0.00) | 0.911 ± 0.02 (0.88 ± 0.03) | 0.910 ± 0.02 (0.89 ± 0.02) | 0.911 ± 0.02 (0.84 ± 0.07) | 0.910 ± 0.03 (0.90 ± 0.00) | 0.913 ± 0.02 (0.90 ± 0.01)
ProstateTumor1 | 0.889 ± 0.03 (0.50 ± 0.00) | 0.894 ± 0.04 (0.89 ± 0.01) | 0.897 ± 0.04 (0.90 ± 0.01) | 0.890 ± 0.05 (0.83 ± 0.07) | 0.910 ± 0.03 (0.90 ± 0.00) | 0.910 ± 0.04 (0.90 ± 0.01)
Arrhythmia | 0.627 ± 0.03 (0.50 ± 0.00) | 0.617 ± 0.03 (0.84 ± 0.05) | 0.607 ± 0.04 (0.89 ± 0.02) | 0.616 ± 0.03 (0.85 ± 0.07) | 0.601 ± 0.03 (0.90 ± 0.00) | 0.611 ± 0.04 (0.89 ± 0.01)
German | 0.734 ± 0.01 (0.50 ± 0.00) | 0.733 ± 0.02 (0.47 ± 0.13) | 0.732 ± 0.01 (0.50 ± 0.18) | 0.731 ± 0.02 (0.56 ± 0.25) | 0.735 ± 0.01 (0.87 ± 0.05) | 0.734 ± 0.01 (0.86 ± 0.06)
HillValley | 0.552 ± 0.01 (0.50 ± 0.00) | 0.548 ± 0.01 (0.85 ± 0.05) | 0.551 ± 0.01 (0.88 ± 0.03) | 0.550 ± 0.01 (0.86 ± 0.07) | 0.555 ± 0.01 (0.90 ± 0.00) | 0.561 ± 0.01 (0.89 ± 0.01)
Ionosphere | 0.917 ± 0.03 (0.50 ± 0.00) | 0.922 ± 0.04 (0.69 ± 0.11) | 0.915 ± 0.04 (0.76 ± 0.11) | 0.919 ± 0.04 (0.79 ± 0.17) | 0.909 ± 0.04 (0.86 ± 0.04) | 0.919 ± 0.03 (0.89 ± 0.02)
Isolet5 | 0.848 ± 0.01 (0.50 ± 0.00) | 0.842 ± 0.01 (0.86 ± 0.04) | 0.841 ± 0.01 (0.89 ± 0.02) | 0.838 ± 0.01 (0.67 ± 0.15) | 0.850 ± 0.02 (0.90 ± 0.00) | 0.852 ± 0.01 (0.88 ± 0.02)
Libras | 0.741 ± 0.02 (0.50 ± 0.00) | 0.730 ± 0.02 (0.80 ± 0.07) | 0.737 ± 0.02 (0.85 ± 0.06) | 0.732 ± 0.03 (0.90 ± 0.02) | 0.736 ± 0.03 (0.90 ± 0.00) | 0.733 ± 0.02 (0.89 ± 0.02)
Madelon | 0.786 ± 0.02 (0.50 ± 0.00) | 0.813 ± 0.02 (0.88 ± 0.03) | 0.823 ± 0.02 (0.90 ± 0.01) | 0.778 ± 0.02 (0.59 ± 0.16) | 0.846 ± 0.01 (0.90 ± 0.00) | 0.839 ± 0.01 (0.90 ± 0.00)
Musk1 | 0.866 ± 0.03 (0.50 ± 0.00) | 0.864 ± 0.04 (0.82 ± 0.07) | 0.857 ± 0.03 (0.88 ± 0.02) | 0.873 ± 0.03 (0.88 ± 0.05) | 0.869 ± 0.04 (0.90 ± 0.00) | 0.879 ± 0.03 (0.87 ± 0.06)
Segmentation | 0.812 ± 0.01 (0.50 ± 0.00) | 0.818 ± 0.02 (0.42 ± 0.07) | 0.812 ± 0.01 (0.48 ± 0.16) | 0.813 ± 0.02 (0.49 ± 0.28) | 0.807 ± 0.02 (0.86 ± 0.06) | 0.813 ± 0.01 (0.75 ± 0.16)
Sonar | 0.817 ± 0.04 (0.50 ± 0.00) | 0.802 ± 0.04 (0.80 ± 0.08) | 0.809 ± 0.05 (0.82 ± 0.09) | 0.798 ± 0.04 (0.88 ± 0.05) | 0.802 ± 0.05 (0.90 ± 0.00) | 0.800 ± 0.04 (0.86 ± 0.05)
UrbanLandCover | 0.830 ± 0.04 (0.50 ± 0.00) | 0.842 ± 0.02 (0.87 ± 0.02) | 0.835 ± 0.02 (0.90 ± 0.00) | 0.762 ± 0.08 (0.23 ± 0.08) | 0.840 ± 0.02 (0.90 ± 0.00) | 0.837 ± 0.02 (0.89 ± 0.01)
β = 0.7
BrainTumor1 | 0.836 ± 0.02 (0.50 ± 0.00) | 0.826 ± 0.04 (0.89 ± 0.01) | 0.821 ± 0.04 (0.90 ± 0.00) | 0.822 ± 0.03 (0.82 ± 0.07) | 0.836 ± 0.03 (0.90 ± 0.00) | 0.825 ± 0.04 (0.90 ± 0.00)
Leukemia1 | 0.874 ± 0.03 (0.50 ± 0.00) | 0.858 ± 0.06 (0.89 ± 0.02) | 0.892 ± 0.05 (0.89 ± 0.01) | 0.879 ± 0.06 (0.80 ± 0.06) | 0.903 ± 0.05 (0.90 ± 0.00) | 0.885 ± 0.04 (0.90 ± 0.00)
LungCancer | 0.905 ± 0.02 (0.50 ± 0.00) | 0.916 ± 0.02 (0.90 ± 0.01) | 0.914 ± 0.02 (0.90 ± 0.00) | 0.904 ± 0.02 (0.79 ± 0.05) | 0.914 ± 0.02 (0.90 ± 0.00) | 0.920 ± 0.03 (0.90 ± 0.00)
ProstateTumor1 | 0.882 ± 0.04 (0.50 ± 0.00) | 0.908 ± 0.03 (0.89 ± 0.01) | 0.890 ± 0.05 (0.90 ± 0.00) | 0.891 ± 0.03 (0.82 ± 0.07) | 0.918 ± 0.04 (0.90 ± 0.00) | 0.908 ± 0.04 (0.90 ± 0.00)
Arrhythmia | 0.619 ± 0.03 (0.50 ± 0.00) | 0.582 ± 0.06 (0.89 ± 0.02) | 0.595 ± 0.05 (0.90 ± 0.01) | 0.610 ± 0.04 (0.75 ± 0.06) | 0.585 ± 0.05 (0.90 ± 0.00) | 0.589 ± 0.04 (0.90 ± 0.00)
German | 0.713 ± 0.01 (0.50 ± 0.00) | 0.707 ± 0.02 (0.33 ± 0.03) | 0.709 ± 0.02 (0.28 ± 0.04) | 0.695 ± 0.04 (0.29 ± 0.05) | 0.701 ± 0.02 (0.64 ± 0.02) | 0.711 ± 0.01 (0.88 ± 0.04)
HillValley | 0.552 ± 0.01 (0.50 ± 0.00) | 0.557 ± 0.01 (0.87 ± 0.03) | 0.558 ± 0.01 (0.89 ± 0.01) | 0.551 ± 0.01 (0.62 ± 0.12) | 0.560 ± 0.01 (0.89 ± 0.01) | 0.556 ± 0.01 (0.90 ± 0.01)
Ionosphere | 0.954 ± 0.01 (0.50 ± 0.00) | 0.934 ± 0.03 (0.48 ± 0.07) | 0.934 ± 0.03 (0.51 ± 0.09) | 0.920 ± 0.03 (0.44 ± 0.14) | 0.954 ± 0.01 (0.74 ± 0.04) | 0.957 ± 0.00 (0.89 ± 0.02)
Isolet5 | 0.846 ± 0.01 (0.50 ± 0.00) | 0.837 ± 0.02 (0.89 ± 0.01) | 0.833 ± 0.02 (0.90 ± 0.00) | 0.825 ± 0.02 (0.76 ± 0.05) | 0.845 ± 0.02 (0.90 ± 0.00) | 0.843 ± 0.02 (0.90 ± 0.00)
Libras | 0.735 ± 0.02 (0.50 ± 0.00) | 0.729 ± 0.03 (0.82 ± 0.06) | 0.725 ± 0.03 (0.86 ± 0.05) | 0.724 ± 0.04 (0.87 ± 0.08) | 0.740 ± 0.02 (0.90 ± 0.01) | 0.735 ± 0.03 (0.89 ± 0.02)
Madelon | 0.796 ± 0.01 (0.50 ± 0.00) | 0.823 ± 0.01 (0.90 ± 0.01) | 0.829 ± 0.02 (0.90 ± 0.00) | 0.779 ± 0.02 (0.60 ± 0.09) | 0.854 ± 0.01 (0.90 ± 0.00) | 0.852 ± 0.01 (0.90 ± 0.00)
Musk1 | 0.863 ± 0.04 (0.50 ± 0.00) | 0.856 ± 0.03 (0.88 ± 0.03) | 0.853 ± 0.03 (0.88 ± 0.04) | 0.864 ± 0.03 (0.76 ± 0.10) | 0.874 ± 0.04 (0.90 ± 0.00) | 0.876 ± 0.03 (0.90 ± 0.01)
Segmentation | 0.809 ± 0.00 (0.50 ± 0.00) | 0.815 ± 0.01 (0.34 ± 0.05) | 0.815 ± 0.01 (0.31 ± 0.05) | 0.812 ± 0.01 (0.24 ± 0.14) | 0.815 ± 0.01 (0.75 ± 0.05) | 0.809 ± 0.00 (0.83 ± 0.09)
Sonar | 0.810 ± 0.06 (0.50 ± 0.00) | 0.785 ± 0.05 (0.80 ± 0.07) | 0.797 ± 0.06 (0.86 ± 0.05) | 0.791 ± 0.06 (0.83 ± 0.11) | 0.795 ± 0.06 (0.90 ± 0.00) | 0.791 ± 0.06 (0.89 ± 0.02)
UrbanLandCover | 0.837 ± 0.03 (0.50 ± 0.00) | 0.819 ± 0.04 (0.89 ± 0.02) | 0.834 ± 0.02 (0.90 ± 0.00) | 0.777 ± 0.10 (0.39 ± 0.07) | 0.831 ± 0.02 (0.90 ± 0.00) | 0.834 ± 0.02 (0.90 ± 0.00)
β = 0.5
BrainTumor1 | 0.828 ± 0.04 (0.50 ± 0.00) | 0.826 ± 0.04 (0.90 ± 0.00) | 0.810 ± 0.04 (0.90 ± 0.01) | 0.820 ± 0.03 (0.80 ± 0.06) | 0.820 ± 0.03 (0.90 ± 0.00) | 0.828 ± 0.05 (0.90 ± 0.00)
Leukemia1 | 0.883 ± 0.02 (0.50 ± 0.00) | 0.892 ± 0.05 (0.90 ± 0.01) | 0.873 ± 0.04 (0.90 ± 0.00) | 0.894 ± 0.05 (0.81 ± 0.07) | 0.877 ± 0.06 (0.90 ± 0.00) | 0.888 ± 0.04 (0.90 ± 0.00)
LungCancer | 0.903 ± 0.02 (0.50 ± 0.00) | 0.909 ± 0.02 (0.90 ± 0.00) | 0.920 ± 0.02 (0.90 ± 0.01) | 0.904 ± 0.02 (0.77 ± 0.03) | 0.913 ± 0.02 (0.90 ± 0.00) | 0.907 ± 0.03 (0.90 ± 0.00)
ProstateTumor1 | 0.887 ± 0.04 (0.50 ± 0.00) | 0.891 ± 0.04 (0.90 ± 0.01) | 0.900 ± 0.04 (0.90 ± 0.00) | 0.898 ± 0.04 (0.80 ± 0.06) | 0.910 ± 0.03 (0.90 ± 0.00) | 0.923 ± 0.03 (0.90 ± 0.00)
Arrhythmia | 0.604 ± 0.03 (0.50 ± 0.00) | 0.593 ± 0.04 (0.89 ± 0.01) | 0.593 ± 0.04 (0.90 ± 0.00) | 0.601 ± 0.04 (0.76 ± 0.06) | 0.587 ± 0.04 (0.90 ± 0.00) | 0.585 ± 0.04 (0.90 ± 0.00)
German | 0.713 ± 0.01 (0.50 ± 0.00) | 0.689 ± 0.04 (0.30 ± 0.03) | 0.706 ± 0.03 (0.26 ± 0.04) | 0.690 ± 0.03 (0.27 ± 0.06) | 0.701 ± 0.04 (0.63 ± 0.02) | 0.704 ± 0.01 (0.87 ± 0.04)
HillValley | 0.556 ± 0.01 (0.50 ± 0.00) | 0.556 ± 0.01 (0.83 ± 0.06) | 0.557 ± 0.01 (0.87 ± 0.04) | 0.544 ± 0.01 (0.56 ± 0.10) | 0.556 ± 0.01 (0.82 ± 0.02) | 0.554 ± 0.01 (0.90 ± 0.01)
Ionosphere | 0.945 ± 0.03 (0.50 ± 0.00) | 0.926 ± 0.04 (0.50 ± 0.10) | 0.924 ± 0.04 (0.50 ± 0.13) | 0.922 ± 0.03 (0.42 ± 0.19) | 0.927 ± 0.04 (0.70 ± 0.02) | 0.947 ± 0.02 (0.89 ± 0.02)
Isolet5 | 0.832 ± 0.02 (0.50 ± 0.00) | 0.808 ± 0.02 (0.89 ± 0.01) | 0.819 ± 0.02 (0.90 ± 0.00) | 0.806 ± 0.02 (0.78 ± 0.04) | 0.827 ± 0.02 (0.90 ± 0.00) | 0.828 ± 0.02 (0.90 ± 0.00)
Libras | 0.732 ± 0.03 (0.50 ± 0.00) | 0.715 ± 0.05 (0.79 ± 0.07) | 0.706 ± 0.04 (0.87 ± 0.05) | 0.700 ± 0.04 (0.66 ± 0.18) | 0.707 ± 0.03 (0.89 ± 0.01) | 0.719 ± 0.03 (0.89 ± 0.01)
Madelon | 0.804 ± 0.02 (0.50 ± 0.00) | 0.825 ± 0.02 (0.89 ± 0.01) | 0.833 ± 0.01 (0.90 ± 0.00) | 0.787 ± 0.02 (0.75 ± 0.06) | 0.857 ± 0.01 (0.90 ± 0.00) | 0.860 ± 0.01 (0.90 ± 0.00)
Musk1 | 0.850 ± 0.04 (0.50 ± 0.00) | 0.839 ± 0.04 (0.87 ± 0.03) | 0.837 ± 0.03 (0.89 ± 0.01) | 0.842 ± 0.03 (0.74 ± 0.09) | 0.850 ± 0.04 (0.90 ± 0.00) | 0.857 ± 0.03 (0.90 ± 0.00)
Segmentation | 0.784 ± 0.01 (0.50 ± 0.00) | 0.772 ± 0.04 (0.33 ± 0.06) | 0.784 ± 0.02 (0.29 ± 0.07) | 0.786 ± 0.01 (0.26 ± 0.07) | 0.782 ± 0.02 (0.68 ± 0.03) | 0.786 ± 0.00 (0.88 ± 0.03)
Sonar | 0.783 ± 0.06 (0.50 ± 0.00) | 0.789 ± 0.06 (0.76 ± 0.08) | 0.792 ± 0.06 (0.83 ± 0.07) | 0.787 ± 0.08 (0.70 ± 0.20) | 0.802 ± 0.07 (0.88 ± 0.02) | 0.796 ± 0.07 (0.89 ± 0.02)
UrbanLandCover | 0.830 ± 0.02 (0.50 ± 0.00) | 0.813 ± 0.04 (0.89 ± 0.01) | 0.819 ± 0.02 (0.90 ± 0.00) | 0.785 ± 0.09 (0.54 ± 0.09) | 0.823 ± 0.02 (0.90 ± 0.00) | 0.826 ± 0.02 (0.90 ± 0.00)
# best | 3/6/6 | 3/1/0 | 0/0/2 | 0/0/2 | 4/5/1 | 5/3/4
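The weighting factor β in the tables balances classification accuracy against subset size in the wrapper fitness. The exact fitness function is not reproduced in this excerpt; the sketch below uses a common formulation, f = β·(1 − accuracy) + (1 − β)·|S|/D (minimized), with hypothetical Sonar-like candidate values (60 features) chosen only for illustration.

```python
def fitness(accuracy, subset_size, total_features, beta):
    """Assumed weighted wrapper fitness (minimized): beta trades
    classification error against the relative subset size |S|/D."""
    return beta * (1.0 - accuracy) + (1.0 - beta) * subset_size / total_features

# Hypothetical candidates on a 60-feature dataset:
accurate = (0.817, 20)  # (accuracy, subset size)
compact = (0.800, 13)
```

At β = 0.9 the more accurate candidate obtains the lower (better) fitness, while at β = 0.5 the smaller subset wins, mirroring the shrinking subset sizes observed in Table 3 as β decreases.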
Table 3. Results of the average feature subset sizes for all datasets obtained by the Baseline jDE algorithm and the jDE using different threshold parameter control mechanisms for all observed weighting factor values β. The best-performing threshold parameter control mechanism for each dataset is highlighted in bold.
Dataset | Baseline | LR | CR | SRA | PC | SA
β = 0.9
BrainTumor1 | 2764.43 ± 55.92 | 1443.17 ± 171.71 | 1324.83 ± 126.47 | 1404.70 ± 323.90 | 1103.50 ± 127.25 | 1108.93 ± 139.76
Leukemia1 | 2471.73 ± 67.89 | 1284.63 ± 167.55 | 1225.20 ± 188.58 | 1259.77 ± 251.19 | 1016.70 ± 109.03 | 1068.30 ± 108.17
LungCancer | 6084.33 ± 73.23 | 2905.17 ± 300.60 | 2832.83 ± 309.65 | 3152.43 ± 681.21 | 2337.73 ± 185.42 | 2453.80 ± 229.31
ProstateTumor1 | 2784.73 ± 60.62 | 1257.97 ± 130.21 | 1264.23 ± 94.00 | 1525.17 ± 368.47 | 1057.43 ± 135.87 | 1047.87 ± 115.46
Arrhythmia | 114.93 ± 5.28 | 63.10 ± 14.14 | 55.07 ± 8.61 | 62.83 ± 13.74 | 47.90 ± 8.14 | 47.70 ± 6.13
German | 5.00 ± 0.00 | 5.53 ± 0.73 | 5.53 ± 0.90 | 5.73 ± 1.23 | 5.23 ± 0.82 | 5.00 ± 0.00
HillValley | 26.50 ± 3.84 | 16.50 ± 2.93 | 14.30 ± 3.01 | 14.80 ± 4.88 | 15.27 ± 2.56 | 14.87 ± 2.78
Ionosphere | 3.63 ± 0.89 | 3.57 ± 1.01 | 3.23 ± 0.77 | 3.53 ± 1.22 | 3.60 ± 1.00 | 3.60 ± 0.81
Isolet5 | 266.73 ± 14.28 | 132.23 ± 17.80 | 126.87 ± 17.42 | 208.37 ± 58.55 | 123.53 ± 10.90 | 126.73 ± 10.02
Libras | 30.37 ± 4.31 | 20.27 ± 4.50 | 19.63 ± 2.93 | 18.20 ± 3.72 | 17.70 ± 2.88 | 17.70 ± 3.25
Madelon | 207.57 ± 9.89 | 88.73 ± 15.39 | 78.77 ± 13.10 | 191.73 ± 49.86 | 56.57 ± 10.47 | 63.33 ± 9.75
Musk1 | 68.83 ± 6.47 | 41.37 ± 7.50 | 36.00 ± 5.13 | 35.87 ± 6.27 | 34.70 ± 6.20 | 37.07 ± 6.71
Segmentation | 5.00 ± 0.00 | 5.50 ± 0.73 | 5.47 ± 1.01 | 5.80 ± 1.32 | 5.20 ± 0.41 | 5.00 ± 0.00
Sonar | 20.47 ± 2.66 | 13.93 ± 3.41 | 13.30 ± 3.64 | 13.03 ± 3.30 | 11.20 ± 1.88 | 12.70 ± 2.96
UrbanLandCover | 48.90 ± 5.35 | 23.33 ± 5.14 | 21.90 ± 4.24 | 80.97 ± 9.04 | 18.80 ± 3.01 | 18.80 ± 3.70
β = 0.7
BrainTumor1 | 2781.43 ± 55.97 | 1190.13 ± 195.41 | 1146.97 ± 112.59 | 1449.40 ± 332.07 | 937.40 ± 126.68 | 877.10 ± 147.00
Leukemia1 | 2476.07 ± 63.20 | 1116.00 ± 118.75 | 1042.90 ± 119.06 | 1421.53 ± 259.56 | 912.10 ± 115.75 | 899.83 ± 121.39
LungCancer | 6036.00 ± 63.68 | 2313.83 ± 215.98 | 2074.77 ± 311.04 | 3092.90 ± 733.62 | 1626.23 ± 293.06 | 1608.93 ± 279.72
ProstateTumor1 | 2775.90 ± 47.32 | 1118.93 ± 113.32 | 1098.67 ± 115.78 | 1463.50 ± 320.89 | 874.07 ± 154.77 | 870.37 ± 123.33
Arrhythmia | 78.93 ± 5.73 | 30.03 ± 6.05 | 30.40 ± 5.73 | 55.43 ± 10.99 | 23.90 ± 4.98 | 24.23 ± 4.52
German | 1.00 ± 0.00 | 1.47 ± 0.68 | 1.43 ± 0.73 | 2.10 ± 1.18 | 1.23 ± 0.43 | 1.00 ± 0.00
HillValley | 11.00 ± 2.88 | 5.27 ± 1.64 | 4.77 ± 1.25 | 11.03 ± 5.21 | 5.13 ± 1.78 | 5.30 ± 1.06
Ionosphere | 2.17 ± 0.38 | 2.70 ± 1.02 | 2.73 ± 0.94 | 3.37 ± 1.13 | 2.23 ± 0.43 | 2.03 ± 0.18
Isolet5 | 211.23 ± 8.70 | 89.33 ± 8.60 | 88.87 ± 10.30 | 150.23 ± 25.68 | 86.03 ± 9.76 | 85.43 ± 9.77
Libras | 15.43 ± 2.21 | 10.90 ± 2.28 | 10.90 ± 2.12 | 11.03 ± 2.41 | 10.57 ± 1.98 | 11.60 ± 2.47
Madelon | 161.40 ± 9.86 | 60.07 ± 9.62 | 61.23 ± 7.83 | 166.93 ± 29.93 | 39.53 ± 5.46 | 39.70 ± 8.76
Musk1 | 43.33 ± 4.00 | 20.80 ± 4.21 | 21.73 ± 3.77 | 31.00 ± 7.74 | 21.77 ± 3.70 | 23.17 ± 2.97
Segmentation | 4.00 ± 0.00 | 4.37 ± 0.67 | 4.60 ± 0.81 | 4.73 ± 1.11 | 4.43 ± 0.63 | 4.00 ± 0.00
Sonar | 9.00 ± 1.74 | 6.43 ± 1.33 | 6.53 ± 1.33 | 7.37 ± 1.96 | 6.57 ± 1.38 | 6.67 ± 1.18
UrbanLandCover | 27.07 ± 4.23 | 11.27 ± 2.73 | 11.27 ± 2.49 | 50.73 ± 7.96 | 9.07 ± 2.27 | 9.90 ± 1.63
β = 0.5
BrainTumor1 | 2733.80 ± 48.22 | 1024.07 ± 109.36 | 935.63 ± 129.98 | 1265.80 ± 293.25 | 685.50 ± 121.96 | 680.73 ± 114.62
Leukemia1 | 2454.00 ± 44.53 | 942.60 ± 110.39 | 858.63 ± 94.50 | 1163.17 ± 334.25 | 717.20 ± 113.81 | 645.60 ± 115.98
LungCancer | 5981.47 ± 56.50 | 1983.07 ± 219.26 | 1815.50 ± 271.64 | 2842.97 ± 461.63 | 1208.43 ± 171.96 | 1120.00 ± 232.41
ProstateTumor1 | 2745.30 ± 45.51 | 927.20 ± 136.08 | 987.60 ± 92.71 | 1271.60 ± 354.65 | 676.30 ± 86.29 | 610.77 ± 121.28
Arrhythmia | 49.83 ± 4.84 | 17.57 ± 4.26 | 18.10 ± 3.12 | 45.10 ± 9.94 | 12.80 ± 3.41 | 12.87 ± 2.58
German | 1.00 ± 0.00 | 1.50 ± 0.78 | 1.60 ± 0.77 | 2.47 ± 1.53 | 1.13 ± 0.35 | 1.00 ± 0.00
HillValley | 3.53 ± 1.04 | 3.23 ± 0.73 | 3.20 ± 0.55 | 8.70 ± 3.20 | 3.03 ± 0.18 | 3.00 ± 0.26
Ionosphere | 1.97 ± 0.18 | 2.50 ± 0.94 | 2.83 ± 1.29 | 3.03 ± 1.52 | 2.03 ± 0.18 | 2.00 ± 0.00
Isolet5 | 168.37 ± 9.60 | 70.00 ± 7.30 | 68.17 ± 7.60 | 126.37 ± 22.38 | 59.90 ± 6.55 | 58.83 ± 8.97
Libras | 8.63 ± 1.59 | 7.30 ± 1.47 | 6.77 ± 1.04 | 10.03 ± 3.18 | 6.83 ± 0.91 | 7.33 ± 1.06
Madelon | 131.83 ± 8.79 | 47.60 ± 6.25 | 46.17 ± 7.68 | 109.63 ± 21.71 | 29.23 ± 5.42 | 28.57 ± 4.26
Musk1 | 21.93 ± 2.88 | 10.83 ± 2.10 | 11.97 ± 1.88 | 22.63 ± 5.72 | 12.10 ± 2.12 | 13.33 ± 2.59
Segmentation | 2.00 ± 0.00 | 2.50 ± 0.73 | 2.70 ± 0.99 | 2.67 ± 0.80 | 2.27 ± 0.58 | 2.00 ± 0.00
Sonar | 5.20 ± 1.00 | 4.93 ± 1.17 | 4.73 ± 1.39 | 4.90 ± 2.04 | 4.67 ± 1.18 | 4.60 ± 1.30
UrbanLandCover | 16.77 ± 2.82 | 6.53 ± 1.50 | 6.23 ± 1.63 | 31.80 ± 6.16 | 5.00 ± 0.91 | 5.87 ± 1.20
# best | 2/2/3 | 0/2/1 | 2/1/1 | 0/0/0 | 9/4/2 | 2/6/8
Table 4. Results of the GTC metric for all datasets and weighting factor values β, using different feature threshold parameter control mechanisms built into the jDE algorithm. The best-performing threshold parameter control mechanism for each dataset is highlighted in bold.
Dataset | Baseline | LR | SRA | SA | CR | PC
β = 0.9
BrainTumor1 | 92.6 ± 7.1 | 95.4 ± 3.8 | 86.4 ± 11.3 | 85.4 ± 14.5 | 95.1 ± 4.2 | 86.7 ± 12.8
Leukemia1 | 89.6 ± 8.8 | 95.3 ± 5.4 | 91.7 ± 6.5 | 87.5 ± 11.3 | 92.7 ± 7.2 | 87.8 ± 10.0
LungCancer | 90.2 ± 10.8 | 95.3 ± 3.2 | 89.6 ± 8.4 | 87.4 ± 11.2 | 94.2 ± 5.0 | 89.7 ± 9.6
ProstateTumor1 | 92.2 ± 7.0 | 97.6 ± 1.4 | 87.6 ± 9.7 | 89.0 ± 9.4 | 95.7 ± 3.2 | 88.0 ± 10.6
Arrhythmia | 91.5 ± 7.7 | 90.7 ± 6.8 | 89.2 ± 8.4 | 87.8 ± 9.6 | 92.9 ± 5.1 | 88.0 ± 10.2
German | 32.9 ± 10.5 | 44.5 ± 15.8 | 53.3 ± 15.1 | 39.9 ± 19.3 | 48.8 ± 15.7 | 40.1 ± 18.4
HillValley | 91.1 ± 9.0 | 91.6 ± 6.5 | 91.9 ± 6.9 | 82.8 ± 12.6 | 91.2 ± 7.4 | 87.9 ± 9.4
Ionosphere | 67.9 ± 16.4 | 69.6 ± 13.8 | 79.0 ± 13.2 | 49.2 ± 19.6 | 72.8 ± 12.4 | 52.6 ± 19.7
Isolet5 | 93.5 ± 4.8 | 93.8 ± 4.5 | 93.1 ± 4.3 | 91.9 ± 8.7 | 92.6 ± 5.8 | 94.2 ± 5.5
Libras | 83.5 ± 16.2 | 86.2 ± 8.3 | 89.8 ± 7.4 | 85.7 ± 12.1 | 84.7 ± 10.1 | 84.1 ± 13.7
Madelon | 93.4 ± 5.5 | 95.6 ± 3.3 | 95.2 ± 3.5 | 94.0 ± 5.7 | 95.4 ± 4.0 | 93.4 ± 4.8
Musk1 | 86.1 ± 11.5 | 87.7 ± 8.4 | 90.3 ± 8.8 | 87.0 ± 9.8 | 91.2 ± 6.5 | 84.4 ± 10.1
Segmentation | 31.9 ± 12.3 | 38.6 ± 8.2 | 45.7 ± 15.4 | 38.2 ± 12.9 | 47.6 ± 15.8 | 32.5 ± 15.4
Sonar | 86.5 ± 14.1 | 85.4 ± 9.9 | 88.8 ± 8.4 | 81.5 ± 15.3 | 82.5 ± 13.0 | 85.2 ± 13.0
UrbanLandCover | 94.8 ± 5.3 | 95.0 ± 3.1 | 95.6 ± 4.6 | 86.9 ± 13.6 | 95.8 ± 2.8 | 89.1 ± 8.1
β = 0.7
BrainTumor1 | 92.0 ± 6.9 | 97.7 ± 2.0 | 96.7 ± 2.1 | 91.2 ± 7.6 | 90.3 ± 7.2 | 80.9 ± 14.5
Leukemia1 | 91.9 ± 10.9 | 96.9 ± 2.3 | 93.7 ± 4.2 | 90.9 ± 8.2 | 87.9 ± 9.8 | 83.0 ± 15.8
LungCancer | 93.3 ± 7.4 | 98.0 ± 1.2 | 96.7 ± 2.2 | 90.8 ± 5.2 | 90.4 ± 8.4 | 83.9 ± 12.5
ProstateTumor1 | 93.5 ± 4.8 | 97.3 ± 2.0 | 97.0 ± 2.0 | 91.4 ± 7.5 | 89.3 ± 9.3 | 84.3 ± 10.5
Arrhythmia | 94.7 ± 4.1 | 97.2 ± 2.2 | 96.4 ± 3.0 | 94.6 ± 4.7 | 90.5 ± 8.8 | 89.5 ± 9.7
German | 17.7 ± 4.2 | 27.1 ± 3.8 | 29.8 ± 3.9 | 34.5 ± 6.3 | 10.9 ± 2.6 | 5.0 ± 3.2
HillValley | 90.6 ± 5.3 | 93.7 ± 3.8 | 92.8 ± 5.3 | 95.5 ± 3.9 | 84.0 ± 15.0 | 82.6 ± 14.5
Ionosphere | 43.6 ± 10.8 | 46.6 ± 9.0 | 49.6 ± 7.1 | 65.3 ± 6.3 | 33.0 ± 14.0 | 33.1 ± 13.7
Isolet5 | 96.8 ± 2.5 | 96.9 ± 2.1 | 95.7 ± 2.9 | 93.3 ± 4.2 | 90.9 ± 7.0 | 93.4 ± 6.2
Libras | 89.1 ± 6.9 | 87.8 ± 7.7 | 86.5 ± 10.2 | 90.8 ± 6.1 | 77.0 ± 14.8 | 81.6 ± 16.4
Madelon | 97.3 ± 2.5 | 97.9 ± 1.4 | 97.2 ± 1.6 | 95.7 ± 2.8 | 95.2 ± 3.6 | 92.8 ± 11.3
Musk1 | 91.7 ± 9.0 | 96.0 ± 3.8 | 91.1 ± 7.6 | 92.3 ± 6.5 | 84.5 ± 10.9 | 86.1 ± 11.8
Segmentation | 22.1 ± 7.3 | 29.0 ± 6.7 | 32.6 ± 4.9 | 41.5 ± 11.2 | 19.9 ± 6.0 | 24.2 ± 8.8
Sonar | 88.7 ± 10.2 | 86.0 ± 9.2 | 86.3 ± 9.1 | 88.0 ± 7.3 | 79.5 ± 17.8 | 84.9 ± 13.2
UrbanLandCover | 95.1 ± 5.0 | 96.8 ± 2.6 | 96.2 ± 2.3 | 95.4 ± 3.3 | 93.3 ± 4.7 | 92.6 ± 6.3
β = 0.5
BrainTumor1 | 91.6 ± 8.9 | 98.5 ± 0.9 | 95.9 ± 3.4 | 88.4 ± 8.9 | 89.7 ± 13.9 | 88.4 ± 8.0
Leukemia1 | 95.4 ± 8.1 | 98.0 ± 1.2 | 96.2 ± 2.6 | 91.0 ± 6.4 | 89.3 ± 10.4 | 85.0 ± 12.6
LungCancer | 92.4 ± 7.1 | 98.4 ± 0.7 | 96.4 ± 3.0 | 88.3 ± 9.5 | 89.8 ± 11.6 | 88.5 ± 13.6
ProstateTumor1 | 93.4 ± 6.9 | 98.1 ± 1.1 | 97.2 ± 1.8 | 90.8 ± 7.2 | 89.9 ± 6.5 | 84.3 ± 17.5
Arrhythmia | 96.0 ± 2.6 | 97.9 ± 1.9 | 96.2 ± 2.3 | 93.2 ± 5.9 | 91.5 ± 7.9 | 91.8 ± 6.8
German | 15.8 ± 4.3 | 24.1 ± 3.4 | 27.8 ± 4.5 | 31.8 ± 7.1 | 11.1 ± 2.8 | 5.1 ± 3.6
HillValley | 91.8 ± 7.3 | 88.2 ± 8.0 | 87.8 ± 8.1 | 94.9 ± 4.4 | 74.7 ± 14.2 | 75.7 ± 13.2
Ionosphere | 44.9 ± 15.9 | 48.8 ± 12.2 | 49.3 ± 13.0 | 64.7 ± 12.4 | 30.9 ± 10.7 | 33.7 ± 16.5
Isolet5 | 96.4 ± 2.6 | 97.2 ± 1.9 | 96.8 ± 1.9 | 93.3 ± 6.0 | 94.1 ± 4.3 | 94.5 ± 5.5
Libras | 87.7 ± 9.5 | 84.1 ± 8.6 | 88.7 ± 8.8 | 92.9 ± 5.1 | 79.9 ± 14.6 | 77.9 ± 18.9
Madelon | 96.0 ± 2.8 | 97.6 ± 1.8 | 96.8 ± 2.4 | 96.1 ± 3.1 | 95.7 ± 3.5 | 94.5 ± 5.5
Musk1 | 94.3 ± 4.6 | 94.7 ± 3.5 | 94.6 ± 4.7 | 94.4 ± 5.5 | 90.0 ± 8.4 | 85.7 ± 13.0
Segmentation | 18.9 ± 6.0 | 27.3 ± 7.4 | 31.0 ± 6.1 | 35.5 ± 7.7 | 14.7 ± 4.4 | 14.3 ± 5.5
Sonar | 87.6 ± 8.8 | 80.9 ± 10.1 | 82.0 ± 11.4 | 90.7 ± 5.9 | 75.3 ± 15.2 | 77.3 ± 16.3
UrbanLandCover | 96.7 ± 2.4 | 97.2 ± 2.0 | 96.2 ± 2.9 | 95.5 ± 3.9 | 93.8 ± 4.6 | 91.0 ± 7.1
Table 5. Comparison of the best overall proposed jDE using the SA threshold parameter control mechanism with the state-of-the-art algorithms according to average classification accuracy.
Method | Bio-Inspired Algorithm | Included Datasets | Method Specifics | Wilcoxon Test (p-Value)
[9] | PSO | Sonar, Libras, HillValley, UrbanLandCover, Musk1, Arrhythmia, Madelon, Isolet5 | Binary encoded features, θ = 0.6, β = 0.9 | 0.015
[29] | DE | Sonar, Musk1, Madelon, Isolet5 | θ = 0.5, β = 0.9 | 0.039
[30] | DE | German, Ionosphere, Sonar, Libras, HillValley, UrbanLandCover, Musk1 | θ = 0.5, β = 0.9 | 0.021
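The p-values in Table 5 come from the Wilcoxon signed-rank test, which compares two methods over paired per-dataset accuracies without assuming normality. As a rough illustration only, the following minimal pure-Python sketch shows how such a pairwise comparison can be computed; the function name, sample accuracy vectors, and the normal approximation of the null distribution are assumptions here, not the authors' implementation.

```python
import math

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test (normal approximation).

    a, b: paired samples, e.g. per-dataset accuracies of two methods.
    Zero differences are dropped; tied |differences| get average ranks.
    Returns (W, p) where W = min(W+, W-).
    """
    diffs = [x - y for x, y in zip(a, b) if x != y]
    n = len(diffs)
    if n == 0:  # identical samples: no evidence of a difference
        return 0.0, 1.0
    # Rank the absolute differences (1-based), averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    w = min(w_plus, w_minus)
    # Normal approximation of W under the null hypothesis.
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return w, p

# Hypothetical accuracies (%) of two methods on eight datasets:
method_a = [90.0, 92.0, 95.0, 88.0, 91.0, 93.0, 89.0, 94.0]
method_b = [85.0, 89.0, 91.0, 86.0, 85.0, 92.0, 82.0, 86.0]
w, p = wilcoxon_signed_rank(method_a, method_b)
```

Since method_a dominates method_b on every dataset, W+ carries all the rank mass and the resulting p falls below 0.05, mirroring how the significance values in Table 5 are read. In practice, a library routine such as scipy.stats.wilcoxon (which also offers an exact null distribution for small samples) would typically be preferred over a hand-rolled version.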
Share and Cite

MDPI and ACS Style

Mlakar, U.; Fister, I., Jr.; Fister, I. Threshold Adaptation for Improved Wrapper-Based Evolutionary Feature Selection. Biomimetics 2025, 10, 670. https://doi.org/10.3390/biomimetics10100670
