Article

Fractional Chebyshev Transformation for Improved Binarization in the Energy Valley Optimizer for Feature Selection

by Islam S. Fathi 1,*, Ahmed R. El-Saeed 2, Gaber Hassan 3 and Mohammed Aly 4

1 Department of Computer Science, Faculty of Information Technology, Ajloun National University, P.O. Box 43, Ajloun 26810, Jordan
2 Department of Mathematics and Statistics, College of Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
3 Department of Computer Science, Faculty of Computers and Information, Arish University, Al-Arish 45511, Egypt
4 Department of Artificial Intelligence, Faculty of Artificial Intelligence, Egyptian Russian University, Badr City 11829, Egypt
* Author to whom correspondence should be addressed.
Fractal Fract. 2025, 9(8), 521; https://doi.org/10.3390/fractalfract9080521
Submission received: 11 June 2025 / Revised: 30 July 2025 / Accepted: 4 August 2025 / Published: 11 August 2025

Abstract

The feature selection (FS) procedure is a critical preprocessing step in data mining and machine learning, aiming to enhance model performance by eliminating redundant features and reducing dimensionality. The Energy Valley Optimizer (EVO), inspired by particle physics concepts of stability and decay, offers a novel metaheuristic approach. This study introduces an enhanced binary version of EVO, termed Improved Binarization in the Energy Valley Optimizer with Fractional Chebyshev Transformation (IBEVO-FC), specifically designed for feature selection challenges. IBEVO-FC incorporates several key advancements over the original EVO. Firstly, it employs a novel fractional Chebyshev transformation function to effectively map the continuous search space of EVO to the binary domain required for feature selection, leveraging the unique properties of fractional orthogonal polynomials for improved binarization. Secondly, the Laplace crossover method is integrated into the initialization phase to improve population diversity and local search capabilities. Thirdly, a random replacement strategy is applied to enhance exploitation and mitigate premature convergence. The efficacy of IBEVO-FC is rigorously evaluated on 26 benchmark datasets from the UCI Repository and compared against seven contemporary wrapper-based feature selection algorithms. Statistical analysis confirms the competitive performance of the proposed IBEVO-FC method in terms of classification accuracy and feature subset size.

1. Introduction

The explosive development of computer and internet technologies has led to the generation of huge amounts of data with numerous features. The careful selection of relevant and helpful features can have a substantial impact on several applications, including but not limited to machine learning [1], text mining [2], the Internet of Things [3], bioinformatics [4], and industrial applications [5]. In machine learning applications specifically, high-dimensional datasets containing redundant, irrelevant, or noisy data can decrease classification accuracy and increase computational complexity [6]. In IoT scenarios, managing and processing vast amounts of sensor-generated data presents significant challenges, particularly when dealing with unnecessary or duplicate features. To address these challenges in high-dimensional data processing, FS serves as an essential preprocessing step. This process helps to identify and retain only the most informative features from a dataset, ultimately leading to more robust and efficient models [7].
Feature Selection (FS) models consist of three key components: first, a classification method like SVMs [8] or kNN [9]; second, criteria for evaluation; and third, an algorithm that searches for the best features. FS approaches can be divided into two main types: wrapper methods and filter methods. The effectiveness of feature groups in wrapper methods is determined by measuring their performance when used with particular classification models. These methods treat the classifier as a separate component and judge feature subset quality by measuring classification performance [10]. In contrast, filter methods operate independently of any learning algorithms, instead analyzing feature subsets purely based on data characteristics. While filter methods are more versatile since they do not depend on specific models, they may not always find the best feature combinations. Research has shown that wrapper methods, though more computationally intensive, typically produce higher-performing feature subsets when optimized for a particular classifier [11].
Recently, deep learning-based feature selection methods have emerged as powerful alternatives to traditional approaches. These methods leverage neural networks’ capabilities to automatically learn feature representations and select relevant features. However, they often require large amounts of training data and computational resources, which may not be available in all scenarios. Our focus in this work is on metaheuristic optimization approaches that can work effectively with limited data and computational resources.
FS techniques aim to determine the best possible subset of features from all available combinations. These techniques employ two primary search approaches: exact search methods and metaheuristic algorithms [12]. When dealing with k features, the search space grows exponentially, containing $2^k$ candidate subsets, which leads to substantial computational requirements. Metaheuristic algorithms take a probabilistic approach, beginning with random solutions to search through the solution space. These algorithms are valuable in feature selection because they can find near-optimal solutions in reasonable time [13]. Their straightforward implementation and adaptability make them particularly useful for specific applications. A primary advantage of these methods is their capacity to avoid entrapment in suboptimal solutions through the equilibrium between extensive search and targeted refinement.
Metaheuristic algorithms can be broadly classified into several categories based on their underlying inspirational sources and operational principles. The primary categories include human behavior-inspired techniques that emulate social interactions and cognitive processes [14], swarm intelligence methods inspired by collective behaviors observed in animal groups [15], evolutionary algorithms that simulate natural selection and genetic processes [16], and physics-based approaches that model physical phenomena and natural laws [17]. Additionally, the field encompasses mathematics-based algorithms that leverage mathematical concepts such as mathematical functions, operators, and theorems [18], chemistry-inspired methods that simulate chemical reactions and molecular behaviors [19], plant-based algorithms that model botanical growth patterns and plant behaviors [20], and music-inspired approaches that incorporate musical harmony and composition principles [21]. This diverse taxonomic landscape reflects the interdisciplinary nature of metaheuristic optimization, where researchers continuously draw inspiration from various natural phenomena, scientific disciplines, and human activities to develop novel problem-solving strategies. The human-inspired category draws its foundations from the way people interact and behave socially. An influential contribution in this field came from Agrawal [22], who developed the binary Gaining Sharing Knowledge-based algorithm (GSK) for the FS problem (FSNBGSK). This method employed (kNN) classification techniques to test its capabilities using 23 UCI datasets, achieving noteworthy results in both reducing features and enhancing classification accuracy. The domain of human-inspired computation has expanded to encompass various other methodologies, including imperial competitive algorithms [23], cultural evolution algorithms [24], volleyball premier league [25], and teaching–learning-based optimization [14]. The field has also seen success with hybrid approaches that combine multiple algorithms to leverage their respective strengths [17]. Meanwhile, swarm intelligence methods take inspiration from how animals behave collectively in groups.
In addressing Feature Selection (FS) challenges, metaheuristic algorithms have emerged as powerful tools. Several significant implementations include Binary Horse Herd Optimization [26], Binary Cuckoo Search [27], the Binary Dragonfly Algorithm [28], and the Binary Flower Pollination Algorithm [29]. The Particle Swarm Optimization algorithm has generated significant research interest since its development. A breakthrough came from Xue et al. [30], who introduced innovative initialization and update mechanisms for PSO, achieving improved classification accuracy while reducing both feature count and processing time. Further advances in the field include Q. Al-Tashi et al.’s [31] development of a binary hybrid system based on the Whale Optimization Algorithm (WOA). Their research produced two models: one that incorporates Simulated Annealing (SA) within the WOA framework and another that applies SA to optimize solutions after each iteration. Evaluation on 18 UCI benchmark datasets revealed that these methods achieved better results in terms of both precision and processing speed when compared to current binary algorithms. Another significant contribution came from Nabila H. et al. [32], who developed the Binary Crayfish Optimization Algorithm (BinCOA) for feature selection. This algorithm incorporated two key improvements. When tested against 30 benchmark datasets, BinCOA showed exceptional performance in three key areas: classification accuracy, average fitness value, and feature reduction capability, surpassing comparative algorithms in these metrics. Evolutionary algorithms, inspired by Darwinian principles of natural evolution, have also been widely applied to FS problems. The Genetic Algorithm (GA), a prominent evolutionary approach, has gained popularity due to its effectiveness in addressing FS challenges [33]. For instance, nested GA implementations have shown significant improvements in classification accuracy. One example is the integration of GA with chaotic optimization for text categorization [34]. The field of evolutionary algorithms encompasses additional approaches such as Differential Evolution [35], Geographical-Based Optimization [36], and Stochastic Fractal Search [37].
In the domain of feature selection, metaheuristic algorithms based on physical phenomena have made important contributions. These include various approaches such as the Lightning Search Algorithm [38], Multi-Verse Optimizer [39], Electromagnetic Field Optimization [40], Henry Gas Solubility Optimization [41], and Gravitational Search Algorithm [42]. Another significant method is Simulated Annealing [43], which draws inspiration from metallurgical heating and cooling processes. A recent and notable addition to physics-inspired approaches is the Equilibrium Optimizer (EO) algorithm [44]. This algorithm was further developed by Ahmed et al. [45], who enhanced it with automata and a U-shaped transfer function for feature selection applications. Their research evaluated the method using kNN across 18 datasets, comparing it against 8 established algorithms, including both traditional and hybrid metaheuristic approaches. Further advancement came from D. A. Elmanakhly et al. [1], who created BinEO, a binary version of EO that integrates opposition-based learning with local search capabilities. The research primarily utilized k-nearest neighbor (kNN) and Support Vector Machine (SVM) classifiers as wrapper techniques. When compared with existing algorithms, the Binary Equilibrium Optimizer (BinEO) demonstrated strong performance capabilities.
The Energy Valley Optimizer (EVO) is a relatively recent physics-based metaheuristic inspired by particle stability and decay mechanisms [46]. Its parameter-free nature and convergence properties make it an attractive candidate for optimization tasks. However, applying continuous metaheuristics like EVO to the discrete FS problem requires a binarization step to map continuous agent positions to binary feature selections (selected/not selected). Standard binarization techniques often rely on transfer functions like the Sigmoid function. While widely used, these functions may not always provide optimal mapping or control over the transition between selection probabilities.
This paper proposes an Improved Binary Energy Valley Optimizer incorporating a Fractional Chebyshev Transformation (IBEVO-FC) to address the FS problem. We introduce a novel binarization strategy based on shifted fractional Chebyshev polynomials of the second kind. This approach leverages the unique mathematical properties of these orthogonal polynomials, including their potential for capturing non-local behavior through the fractional order parameter, to offer a potentially more nuanced and effective mapping from the continuous search space to binary feature selections compared to traditional transfer functions. In addition to this core contribution, IBEVO-FC integrates two further enhancements inspired by the original EVO concept: the use of Laplace crossover during initialization to boost population diversity and the application of a random replacement strategy to improve exploitation and prevent stagnation.
The contributions of this research are outlined as follows:
IBEVO-FC: a modified binary version of the EVO algorithm is introduced to address the FS problem.
Proposal and implementation of a novel binarization method using fractional Chebyshev polynomials, replacing standard transfer functions.
Integration of Enhancements: Incorporation of Laplace crossover for initialization and random replacement for exploitation, adapted within the IBEVO-FC framework.
The efficacy of the IBEVO-FC is evaluated by conducting experiments on a set of 26 widely recognized benchmark datasets.
This paper is organized as follows: Section 2 provides a concise overview of the Energy Valley Optimizer, Section 3 details the proposed IBEVO-FC algorithm, including the novel fractional Chebyshev transformation function, and Section 4 presents the experimental setup, results, and comparative analysis. Section 5 concludes this study and discusses potential future work.

2. Energy Valley Optimizer

2.1. Theoretical Background

The Energy Valley Optimizer (EVO) emerges as an innovative metaheuristic algorithm designed to address complex engineering optimization challenges. Rooted in physics-based methodologies, this approach draws inspiration from fundamental particle physics principles, specifically examining how particles interact and transform within various matter configurations.
A “physical reaction” describes the intricate process of particle collision, where subatomic particles interact to generate new particle formations. Particle physics recognizes a fundamental dichotomy: while some particles maintain perpetual stability, most exhibit inherent instability. Unstable particles undergo a natural decay process, releasing energy during their disintegration. Critically, each particle type demonstrates a unique decay rate, reflecting its distinctive physical characteristics. During particle decay, energy diminishes as surplus energy is released. The Energy Valley approach critically examines particle stability through binding energy analysis and inter-particle interactions. Stability determination hinges on neutron (N) and proton (Z) quantities, specifically the N/Z ratio. A near-unity N/Z ratio indicates a light, stable particle, while elevated values suggest stability in heavier particles. Particles naturally optimize their stability by modifying their neutron-to-proton ratio, progressively moving towards energetic equilibrium or a minimal energy state, as shown in Figure 1A.
During decay, excessive energy emission generates particles in reduced energy states. The decay mechanism reveals three primary emission types: alpha (α), beta (β), and gamma (γ) rays. Alpha particles are dense and positively charged, while beta particles are negatively charged electrons with high velocities, as depicted in Figure 1B. Gamma rays manifest as high-energy photons. These distinct emission types characterize the different decay processes observed in particles with varying stability levels.
Different decay processes significantly impact particle composition. Alpha decay involves alpha particle emission, reducing both neutron and proton counts and consequently lowering the N/Z ratio. Beta decay introduces a β particle, decreasing neutron numbers while increasing proton quantities. Gamma decay uniquely emits a γ photon from an excited particle without altering the N/Z values. The black arrows indicate the natural tendency of particles to evolve towards stability, as illustrated in Figure 1C. The Energy Valley Optimizer (EVO) leverages the natural tendency of particles to evolve towards stability, utilizing this principle as a fundamental strategy for optimizing candidate solutions’ performance.

2.2. Mathematical Formulation and Algorithm Description

In the initial algorithmic phase, the initialization procedure generates solution candidates $P_i$, represented as particles, each characterized by a varying level of stability within the defined search space.
$$P = \begin{bmatrix} P_1 \\ P_2 \\ \vdots \\ P_i \\ \vdots \\ P_n \end{bmatrix} = \begin{bmatrix} p_1^1 & p_1^2 & \cdots & p_1^j & \cdots & p_1^d \\ p_2^1 & p_2^2 & \cdots & p_2^j & \cdots & p_2^d \\ \vdots & \vdots & & \vdots & & \vdots \\ p_i^1 & p_i^2 & \cdots & p_i^j & \cdots & p_i^d \\ \vdots & \vdots & & \vdots & & \vdots \\ p_n^1 & p_n^2 & \cdots & p_n^j & \cdots & p_n^d \end{bmatrix}, \quad i = 1, 2, \ldots, n, \; j = 1, 2, \ldots, d.$$
$$p_i^j = p_{i,\min}^j + rand \times \left( p_{i,\max}^j - p_{i,\min}^j \right), \quad i = 1, 2, \ldots, n, \; j = 1, 2, \ldots, d.$$
Here, $P$ denotes the set of initial solution candidates (particles), $n$ is the total number of particles distributed across the search space, and $d$ characterizes the problem's dimensionality. $p_i^j$ is the $j$-th decision variable of the $i$-th candidate's initial position, with upper and lower bounds $p_{i,\max}^j$ and $p_{i,\min}^j$, and $rand$ is a uniformly distributed random number in the [0, 1] interval.
In the second phase, the Enrichment Bound (EB) is established to account for variations between particles that are neutron-rich and those that are neutron-poor. To this end, each particle undergoes an objective function assessment that quantifies its Neutron Enrichment Level (NEL).
$$EB = \frac{\sum_{i=1}^{n} NEL_i}{n}.$$
Here, $NEL_i$ is the neutron enrichment level of the $i$-th particle, and $EB$ is the overall enrichment bound of the particles.
In the third phase, particle stability is assessed through objective function evaluation.
$$SL_i = \frac{NEL_i - BS}{WS - BS}, \quad i = 1, 2, \ldots, n,$$
where $SL_i$ is the stability level of the $i$-th particle, and $BS$ and $WS$ are the best and worst stability levels in the population, corresponding to the minimum and maximum objective function values found so far. When a particle's neutron enrichment level $NEL_i$ exceeds the enrichment bound, it indicates a higher neutron-to-proton (N/Z) ratio. Alpha and gamma decays are predicted when the particle's stability surpasses the stability bound, particularly in larger, more stable particles. The emitted rays are replaced by the corresponding decision variables of the most stable candidate. Drawing from the physical principles of alpha decay (Figure 2), alpha particles are released to enhance the stability of the resulting product. This phenomenon can be expressed mathematically as a position update mechanism within the EVO algorithm, where a new candidate solution is created. To accomplish this, two random integer values are produced: Alpha Index I, drawn from $[1, d]$ and representing the quantity of emitted rays, and Alpha Index II, drawn from $[1, \text{Alpha Index I}]$ and specifying which alpha particles (decision variables) will be released. The following expression formalizes this update:
$$P_i^{New1} = P_i\left(P_{BS}\left(p_i^j\right)\right), \quad i = 1, 2, \ldots, n, \; j = \text{Alpha Index II}.$$
The notation $P_i^{New1}$ denotes the new particle position in the search space, generated from the current particle's position vector $P_i$; $P_{BS}$ is the position vector of the most stable particle, and $p_i^j$ refers to the individual $j$-th decision variables being replaced.
Furthermore, during gamma decay, gamma radiation is released to enhance the stability of excited particles, as illustrated in Figure 2. This phenomenon can be mathematically represented as an additional position update mechanism within the EVO algorithm, creating a new candidate solution. To implement this, two random integer values are produced: Gamma Index I, which spans the range [1, d] and indicates the quantity of emitted photons, and Gamma Index II, covering the interval [1, Gamma Index I] and determining which photons will be incorporated into the particle calculations.
The method calculates the total distance between the current particle and other particles in the search space, identifying the nearest particle for further analysis as follows:
$$D_{ik} = \sqrt{\left(x_i - x_k\right)^2 + \left(y_i - y_k\right)^2}, \quad i = 1, 2, \ldots, n, \; k = 1, 2, \ldots, n-1.$$
This variable $D_{ik}$ represents the distance between the $i$-th and $k$-th particles, calculated from their coordinate positions $(x_i, y_i)$ and $(x_k, y_k)$. The position update that creates the second candidate solution at this stage is performed as follows:
$$P_i^{New2} = P_i\left(P_{Ng}\left(p_i^j\right)\right), \quad i = 1, 2, \ldots, n, \; j = \text{Gamma Index II}.$$
  • $P_i^{New2}$ represents the newly generated particle
  • $P_i$ denotes the position vector of the $i$-th particle
  • $P_{Ng}$ indicates the position of the neighboring particle around the $i$-th particle
  • $p_i^j$ indicates the $j$-th decision variable.
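To make the alpha- and gamma-decay updates concrete, the following minimal Python sketch (our naming, and one plausible reading of the Alpha/Gamma Index mechanics rather than the reference EVO code) copies the current particle and overwrites a randomly chosen subset of its decision variables with those of a source particle, which is the best particle $P_{BS}$ for alpha decay or the nearest neighbor $P_{Ng}$ for gamma decay:

```python
import numpy as np

def decay_replacement(P_i, source, rng):
    # Copy the current particle and overwrite a random subset of its decision
    # variables with those of `source`: P_BS for alpha decay, P_Ng for gamma
    # decay. The index mechanics below are an illustrative interpretation.
    d = P_i.size
    index_1 = rng.integers(1, d + 1)                  # Index I: how many rays
    dims = rng.choice(d, size=rng.integers(1, index_1 + 1),
                      replace=False)                  # Index II: which variables
    P_new = P_i.copy()
    P_new[dims] = source[dims]                        # emitted rays replaced
    return P_new
```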
Beta decay occurs in particles characterized by reduced stability, where the underlying physical mechanisms drive particles to release β rays as a stabilization strategy, as shown in Figure 2. The natural instability exhibited by these particles requires substantial modifications to their positions in the search domain, including regulated relocations toward the particle with the optimal stability level ($P_{BS}$) and the centroid of the particle population ($P_{CP}$). To this end, a specialized algorithmic procedure dynamically updates particle positions, strategically guiding them toward the most stable configuration and the population's central point.
This approach reflects the particles’ natural tendency to converge toward their stability band. Most particles cluster near this critical region, with a majority demonstrating enhanced stability characteristics, as visualized in Figure 1A,B.
Mathematically, this concept can be expressed through the following principles:
$$P_{CP} = \frac{\sum_{i=1}^{n} P_i}{n}.$$
$$P_i^{New1} = P_i + \frac{r_1 \times P_{BS} - r_2 \times P_{CP}}{SL_i}, \quad i = 1, 2, \ldots, n.$$
  • $P_i^{New1}$: next position of the $i$-th particle.
  • $P_i$: current position of the $i$-th particle.
  • $P_{BS}$: position of the particle with the optimal stability level.
  • $P_{CP}$: center position of the particle population.
  • $SL_i$: stability level of the $i$-th particle.
Parameters $r_1$ and $r_2$ are two randomly generated numbers within the range [0, 1].
The algorithm enhances exploration and exploitation through a modified particle position update mechanism for beta decay. This approach systematically guides particles toward the particle with the optimal stability level ($P_{BS}$) and a neighboring candidate ($P_{Ng}$), independent of individual particle stability.
$$P_i^{New2} = P_i + \left(r_3 \times P_{BS} - r_4 \times P_{Ng}\right), \quad i = 1, 2, \ldots, n.$$
  • $P_i^{New2}$: upcoming position vector of the $i$-th particle
  • $P_i$: current position vector of the $i$-th particle
  • $P_{BS}$: position vector of the particle with the optimal stability value
  • $P_{Ng}$: position vector of the neighboring particle around the $i$-th particle
The parameters $r_3$ and $r_4$ are randomly generated numbers within [0, 1]. When the neutron enrichment level ($NEL_i$) is below the enrichment bound (EB), the particle exhibits a reduced neutron-to-proton (N/Z) ratio; its dynamics involve electron absorption and positron emission, and a stochastic movement strategy is employed to approach the stability band.
The following mathematical representation captures these particle transformation mechanisms:
$$P_i^{New} = P_i + r, \quad i = 1, 2, \ldots, n.$$
  • $P_i^{New}$: forthcoming position vector of the $i$-th particle.
  • $P_i$: current position vector of the $i$-th particle.
  • $r$: random number within the range [0, 1].
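The beta-decay and stochastic moves above can likewise be summarized in a few lines. The sketch below uses NumPy; the placeholder stability values and the randomly chosen neighbor (instead of the distance-based choice) are simplifications for illustration only:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n, d = 50, 10                            # population size, dimensionality
P = rng.uniform(-1.0, 1.0, size=(n, d))  # particle positions
SL = rng.random(n)                       # stability levels (placeholder values)
P_BS = P[np.argmin(SL)]                  # best-stability particle (placeholder)
P_CP = P.sum(axis=0) / n                 # centre of the particle population

i = 0                                    # update the i-th particle
Ng = P[rng.integers(n)]                  # neighbour; EVO picks the nearest one
r1, r2, r3, r4 = rng.random(4)

new1 = P[i] + (r1 * P_BS - r2 * P_CP) / SL[i]   # beta move toward P_BS and P_CP
new2 = P[i] + (r3 * P_BS - r4 * Ng)             # beta move toward P_BS and P_Ng
new3 = P[i] + rng.random(d)                     # stochastic move toward stability
```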

3. Computation Performed by the Proposed Algorithm: IBEVO-FC

This section presents a comprehensive explanation of the proposed IBEVO-FC, a wrapper-based approach designed to address the FS problem. The main steps of the IBEVO-FC algorithm are as follows: initialization with the Laplace crossover strategy, the fractional Chebyshev transformation function, the random replacement strategy, and evaluation.

3.1. Initialization with the Laplace Crossover Strategy

The initialization phase significantly influences the quality of the final solution in metaheuristic algorithms. To enhance the search efficiency and improve local exploitation capabilities, the proposed IBEVO-FC algorithm incorporates a Laplace crossover strategy during the population initialization phase.

Laplace Crossover Mechanism

The Laplace crossover strategy operates as a probabilistic perturbation mechanism that generates new candidate solutions by combining information from the current particle position and the globally best particle position. This crossover mechanism follows a structured approach [47]:
Step 1: Position Update Formula—For each particle in the population, a new position is calculated using:
$$P_i^{New} = P_{BS} + \beta\left(P_{BS} - P_i\right),$$
where $P_i^{New}$ is the particle position after applying Laplace crossover learning, $P_{BS}$ is the particle position with the optimal stability value, and $P_i$ is the particle's current position. The variable $\beta$ follows a Laplace distribution, and its formula varies between the earlier and later iterations of the algorithm.
Step 2: Laplace Distribution Parameter—The parameter β is drawn from a Laplace distribution and varies depending on the current iteration number to balance exploration and exploitation:
$$\beta = \begin{cases} a - k\ln(y), & y \leq 0.5 \\ a + k\ln(y), & y > 0.5 \end{cases}$$
$$m = 1 - \frac{t}{Maxiter}$$
where $y$ is a random number between 0 and 1, $a$ is the location parameter (set to 0 throughout, as in Algorithm 1), $t$ is the current iteration, and $Maxiter$ is the maximum number of iterations. The scale $k$ is set to either 1 or 0.5: a random number drawn from [0, 1] is compared with $m$; when it is less than or equal to $m$, $k$ is set to 1, and otherwise $k$ is set to 0.5 so that the algorithm explores the solution space with a smaller search step.
Step 3: Greedy Selection—After generating the new position using Laplace crossover, a greedy selection mechanism is applied:
  • Calculate the fitness value of the new position $P_i^{New}$;
  • Compare it with the fitness of the current position $P_i$;
  • Accept the new position only if it provides better fitness;
  • Otherwise, retain the current position.
The Laplace crossover strategy exhibits adaptive characteristics that enhance the algorithm’s performance:
  • Early Iterations (High k value = 1.0): The crossover generates larger perturbations, promoting exploration of the search space and preventing premature convergence to local optima.
  • Later Iterations (Low k value = 0.5): The crossover produces smaller, more refined movements, focusing on exploitation around promising regions to fine-tune solutions.
  • Probabilistic Nature: The Laplace distribution provides a balance between small and large jumps, with higher probability for moderate changes and lower probability for extreme movements.
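As a concrete illustration, a minimal NumPy sketch of this initialization operator is given below. The function names and the minimization convention are our own; `fitness` stands in for the wrapper evaluation of Section 3.4:

```python
import numpy as np

def laplace_beta(t, max_iter, rng, a=0.0):
    # Adaptive scale k: k = 1 while a uniform draw is <= m = 1 - t/max_iter
    # (larger early steps), otherwise k = 0.5 (smaller, refining steps).
    m = 1.0 - t / max_iter
    k = 1.0 if rng.random() <= m else 0.5
    y = max(rng.random(), 1e-12)          # uniform in (0, 1]; avoid log(0)
    return a - k * np.log(y) if y <= 0.5 else a + k * np.log(y)

def laplace_crossover(P_i, P_BS, fitness, t, max_iter, rng):
    # Position update of Eq. (12) followed by the greedy selection of Step 3
    # (the fitness function is minimized, so lower values win).
    P_new = P_BS + laplace_beta(t, max_iter, rng) * (P_BS - P_i)
    return P_new if fitness(P_new) < fitness(P_i) else P_i
```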

3.2. Fractional Chebyshev Transformation Function

Feature Selection (FS) is fundamentally a binary decision task, yet the particles generated by the original Energy Valley Optimizer (EVO) have continuous-valued positions. Converting from the continuous EVO space to a binary search domain requires a transformation function. For feature subset selection, particle positions must be limited to binary states (0 or 1). The binary solution space in EVO is represented as an n × N matrix, with n denoting the population size and N indicating the total feature count. Within this matrix, a value of 1 indicates that a feature has been selected, while 0 signifies an unselected feature, as illustrated in Figure 3.
To achieve binarization, the proposed IBEVO-FC algorithm utilizes a novel transformation function based on shifted fractional Chebyshev polynomials of the second kind (FCSs) [48,49]. Unlike standard sigmoid functions, this approach leverages the unique properties of fractional orthogonal polynomials to potentially offer different thresholding behavior and sensitivity in mapping continuous positions to binary selections. The fractional Chebyshev transformation function, $S_{FC}$, is defined as follows:
$$S_{FC}\left(x_i^{new}, \gamma, n\right) = \frac{1}{1 + e^{-T_n^{*,\gamma}\left(\mathrm{norm}\left(x_i^{new}\right)\right)}}$$
where $x_i^{new}$ is the continuous position of the $i$-th particle in a given dimension. The function $\mathrm{norm}(x_i^{new})$ maps the particle's position from its original range to the domain [0, 1] required by the shifted polynomials. $T_n^{*,\gamma}(z)$ denotes the shifted fractional Chebyshev polynomial of the second kind, with degree $n$ and fractional order $\gamma$, evaluated at the normalized position $z = \mathrm{norm}(x_i^{new})$; its definition and properties are given in [48,49]. The parameters $n$ (e.g., $n = 2$) and $\gamma$ (e.g., $\gamma = 0.5$) become hyperparameters of the transformation, influencing its shape and behavior. To obtain the final binary value, the output of the fractional Chebyshev transformation function is used in a probabilistic comparison:
$$x_i^{new} = \begin{cases} 1, & \text{if } S_{FC}\left(x_i^{new}, \gamma, n\right) \geq rand \\ 0, & \text{if } S_{FC}\left(x_i^{new}, \gamma, n\right) < rand \end{cases}$$
The term “rand” denotes a uniformly distributed random number generated within the range of [0, 1]. This fractional Chebyshev-based transformation provides a new mechanism for converting the continuous search dynamics of EVO into the discrete feature selection decisions required for the FS problem.
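A minimal sketch of this binarization step follows. Since the exact definition of $T_n^{*,\gamma}$ is deferred to [48,49], the sketch assumes one common construction from the fractional-polynomial literature, the fractional-order substitution $z \to z^{\gamma}$ inside the shifted second-kind polynomial $U_2(2z - 1)$, with the default hyperparameters $n = 2$ and $\gamma = 0.5$:

```python
import numpy as np

def chebyshev_u2_shifted_fractional(z, gamma=0.5):
    # Assumed construction for degree n = 2: apply the fractional substitution
    # z -> z**gamma to the shifted argument, i.e. U_2(2*z**gamma - 1) with
    # U_2(x) = 4x^2 - 1 on z in [0, 1]. Treat this definition as an assumption;
    # the paper defers the formal definition to its references [48,49].
    x = 2.0 * np.power(z, gamma) - 1.0
    return 4.0 * x ** 2 - 1.0

def fractional_chebyshev_binarize(pos, lower, upper, gamma=0.5, rng=None):
    # Squash T*_{n,gamma}(norm(x)) through a sigmoid, then compare against a
    # uniform random number to obtain the binary feature mask.
    rng = np.random.default_rng() if rng is None else rng
    z = (pos - lower) / (upper - lower)            # norm(x) into [0, 1]
    s = 1.0 / (1.0 + np.exp(-chebyshev_u2_shifted_fractional(z, gamma)))
    return (s >= rng.random(pos.shape)).astype(int)
```

For example, `fractional_chebyshev_binarize(rng.uniform(-1, 1, 12), -1.0, 1.0)` maps a 12-dimensional continuous position to a 12-bit feature mask.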

3.3. Applying Random Replacement Technique

In this paper, a random replacement approach is used to make the search for the best solution more exhaustive [50]. The core idea is to substitute the position of the $d$-th dimension of the optimal individual with the position of the $d$-th dimension of another, randomly chosen individual. During the search, the EVO may encounter particles that occupy good positions in some dimensions but poor positions in others; the random replacement technique is employed to reduce the likelihood of this condition persisting. The Cauchy random number used by this strategy is generated as follows:
$$\xi = \tan\left(\pi \times \left(r_6 - 0.5\right)\right)$$
where $r_6$ is a random number drawn from [0, 1].
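A minimal sketch of this strategy is given below; the dimension swap follows our interpretation of the description above, and the acceptance condition is taken from Algorithm 1:

```python
import numpy as np

def random_replacement(best, population, t, max_iter, rng):
    # If the Cauchy deviate falls below the shrinking threshold 1 - t/max_iter
    # (cf. the condition in Algorithm 1), replace the d-th dimension of the
    # best individual with the d-th dimension of a randomly chosen individual.
    xi = np.tan(np.pi * (rng.random() - 0.5))      # Cauchy random number
    if xi < 1.0 - t / max_iter:
        d = rng.integers(best.size)                # dimension to overwrite
        donor = population[rng.integers(len(population))]
        best = best.copy()
        best[d] = donor[d]
    return best
```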

3.4. Evaluation Function

High-dimensional datasets present significant challenges, since incorporating excessive features, particularly those that are redundant or irrelevant, can degrade classifier performance. Dimensionality reduction techniques help to mitigate this problem. The Feature Selection (FS) process enhances classifier efficiency by identifying and eliminating non-essential features. In assessing the effectiveness of different solutions, two primary metrics are considered: classification accuracy and feature count. When two solutions exhibit the same level of accuracy, priority is assigned to the one that employs a smaller number of features. This dual objective is reflected in the fitness function, which seeks to optimize classification performance by reducing errors while also minimizing the number of features used. The fitness function described below is implemented to evaluate IBEVO-FC solutions, taking both of these critical factors into account.
$$fitness = A \times B + C \times \frac{S}{N}$$
Here, $A \in [0, 1]$ is a weighting factor, $B$ is the classification error rate calculated using either kNN or SVM, $C = 1 - A$, $S$ denotes the number of selected features, and $N$ is the total number of features. In the IBEVO-FC algorithm, either kNN or SVM is employed as the classifier: the SVM classifier is utilized when the dataset contains two classes, while the kNN algorithm is applied in all other cases [50,51]. The step-by-step procedure for the IBEVO-FC algorithm is outlined in Algorithm 1, and Figure 4 shows the flowchart for the proposed algorithm.
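Before the full listing, a minimal sketch of this wrapper fitness evaluation is shown below (a hypothetical helper built on scikit-learn's 5-NN with 10-fold cross-validation; the paper switches to SVM for two-class datasets, and the default A = 0.99 follows Section 3.5):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fs_fitness(mask, X, y, A=0.99):
    # fitness = A*B + C*(S/N) with C = 1 - A, where B is the cross-validated
    # error rate of the wrapped classifier on the selected feature columns.
    mask = mask.astype(bool)
    if not mask.any():                 # an empty subset cannot be classified
        return 1.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask], y, cv=10).mean()
    B = 1.0 - acc                      # classification error rate
    return A * B + (1.0 - A) * mask.sum() / mask.size
```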
Algorithm 1: The IBEVO-FC Algorithm
  • Determine the initial positions of the solution candidates ($P_i$) as particles in the search space
  • Evaluate the fitness values of the initial solution candidates as Neutron Enrichment Levels ($NEL_i$)
  • While (t < Maxiter)
  • for each search agent
  • Convert the positions of the particles into binary form using the fractional Chebyshev transformation function (as defined in Equations (15) and (16))
  • Assess each particle in the population using either the kNN or SVM classifier
  • Calculate the fitness of the entire particle population using Equation (18)
  • if tan(π × (rand − 0.5)) < (1 − t/Maxiter)
  • Enhance the stability level through the random replacement strategy during the update process
  • end if
  • Revise the stability of the best particles using a greedy mechanism during the update procedure
  • end for
  • Determine the Enrichment Bound (EB) of the particles
  • Determine the particle with the best stability level ($X_{BS}$)
  • for i = 1 : n
  • Determine the stability level ($SL_i$) of the i-th particle
  • Calculate the Neutron Enrichment Level ($NEL_i$) of the i-th particle
  • if $NEL_i$ > EB
  • Determine the Stability Bound (SB) of the particle
  • if $SL_i$ > SB
  • Generate Alpha Index I and II
  • for j = 1 : Alpha Index II
  • $X_i^{New1} = X_i(X_{BS}(x_i^j))$
  • end
  • Generate Gamma Index I and II
  • Determine a neighboring particle ($X_{Ng}$)
  • for j = 1 : Gamma Index II
  • $X_i^{New2} = X_i(X_{Ng}(x_i^j))$
  • end
  • else if $SL_i \leq SB$
  • Determine the centre of the particles ($X_{CP}$)
  • $X_i^{New1} = X_i + (r_1 \times X_{BS} - r_2 \times X_{CP}) / SL_i$
  • Determine a neighboring particle ($X_{Ng}$)
  • $X_i^{New2} = X_i + (r_3 \times X_{BS} - r_4 \times X_{Ng})$
  • end
  • else if $NEL_i \leq EB$
  • $X_i^{New} = X_i + r$
  • Use the kNN or SVM classifier to evaluate $X_i^{New}$
  • end
  • end
  • if rand ≤ 1 − t/Maxiter
  • Set a = 0, k = 1, and update β according to Equation (13)
  • else
  • Set a = 0, k = 0.5, and update β according to Equation (13)
  • end if
  • Compute the new position based on Equation (12)
  • end
  • t ← t + 1
  • end while
  • Return the particle with the highest stability level ($X_{BS}$)

3.5. Hyperparameter Analysis

The proposed IBEVO-FC algorithm includes several key hyperparameters that affect its performance: the population size (n = 50), the maximum number of iterations (Max_iter = 100), the k value in kNN (K = 5), and the weights α and β in the fitness function (α = 0.99, β = 0.01, corresponding to A and C = 1 − A in Equation (18)). These values were determined through extensive preliminary experiments. The population size and maximum iterations balance exploration capability with computational cost. The k value in kNN was selected based on cross-validation performance. The α and β parameters in the fitness function control the trade-off between classification accuracy and feature reduction.

3.6. Computational Complexity Analysis

The computational complexity of IBEVO-FC can be analyzed as follows. Population initialization: O(n × d), where n is the population size and d is the feature dimension. Fitness evaluation: O(n × d × m), where m is the number of instances. Update operations: O(n × d) per iteration. Therefore, the overall complexity is O(t × n × d × m), where t is the maximum number of iterations.

4. Experiments and Analysis

Experimental results comparing the proposed algorithm with contemporary algorithms are presented in this section.

4.1. Datasets

To assess the effectiveness of the IBEVO-FC algorithm in comparison to contemporary methods, we analyzed 26 distinct datasets obtained from the UCI Machine Learning Repository [52]. These datasets were specifically selected for their varying characteristics in terms of features and instances, enabling a thorough assessment of IBEVO-FC's capabilities across different scenarios. A summary of these datasets, including their respective class numbers, instance counts, and attribute ranges, is presented in Table 1.

4.2. IBEVO-FC Parameter Configuration

To evaluate IBEO’s effectiveness, we compare it with multiple state-of-the-art Feature Selection (FS) approaches. The experimental procedure consists of executing each algorithm across 20 independent runs, employing 50 search agents (particles) over 100 iterations. The evaluation framework incorporates both kNN and SVM classifiers. For multi-class datasets (those with more than two classes), we implement a 5-NN classifier to identify the optimal feature subset. The optimal k value for kNN was determined through comprehensive testing across multiple datasets. To ensure robust validation, both kNN and SVM implementations utilize 10-fold cross-validation, helping to prevent overfitting issues. Table 2 details the specific parameter settings used for IBEVO-FC. The configuration parameters in Table 2 were determined through a systematic approach combining a literature review, preliminary experiments, and sensitivity analysis.
In the experiments, we employed two classifiers: kNN and SVM. The selection between these classifiers was based on the number of classes in each dataset. For binary classification problems (datasets with two classes), SVM was used with an RBF kernel. For multi-class problems (datasets with more than two classes), the 5-NN classifier was employed. The k value in kNN (K = 5) was determined through systematic experimentation across multiple representative datasets, testing k values from 1 to 15. Cross-validation results showed that k = 5 achieved the optimal balance between bias and variance, providing the highest average accuracy (92.45%) with the lowest standard deviation (0.041) across diverse multi-class datasets. This strategy ensures appropriate classifier selection based on each dataset's characteristics.

4.3. Results

The experimental evaluation consists of two distinct stages. Initially, we assess IBEVO-FC by comparing it directly with the original EVO. Subsequently, we conduct a comparative analysis between IBEVO-FC and contemporary feature selection techniques. Our evaluation framework employs four fundamental metrics, which are detailed below:
Classification Accuracy: This measures the classifier's ability to accurately identify the optimal subset of features.
Average Fitness Value: The average fitness value for each run is calculated as follows:
$$AverageFitness = \frac{1}{Maximum\ iteration}\sum_{i=1}^{Maximum\ iteration} Fitness_i$$
where $Fitness_i$ is the fitness value at iteration $i$.
Number of selected features: This refers to the smallest number of features identified in the optimal solution.
Standard Deviation (STD): The formula for STD is as follows, showing how the fitness values deviate from the average fitness:
$$STD = \sqrt{\frac{\sum_{i=1}^{Maximum\ iteration}\left(Fitness_i - AverageFitness\right)^2}{Maximum\ iteration}}$$
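For reference, a minimal sketch of these two run-level metrics (our helper name; the population form of the standard deviation matches the formula above):

```python
import numpy as np

def run_metrics(fitness_history):
    # Average fitness and STD of one run; the divisor is the number of
    # iterations, matching the two formulas above.
    f = np.asarray(fitness_history, dtype=float)
    avg = f.mean()
    std = np.sqrt(np.mean((f - avg) ** 2))
    return avg, std
```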

4.3.1. Comparison Between IBEVO-FC and EVO

The experiments conducted in this section investigate the effects of integrating the random replacement strategy and Laplace crossover strategy into the EVO algorithm. Table 3 shows a comparative analysis between IBEVO-FC and EVO, focusing on classification accuracy, average fitness, number of selected features, and standard deviation. In terms of classification accuracy, the results demonstrate that IBEVO-FC consistently surpasses the original EVO across all 26 datasets. Additionally, Table 3 highlights that IBEVO-FC outperforms the original EVO in all 26 datasets regarding average fitness. Table 3 provides a breakdown of how many features each algorithm selected for use, with IBEVO-FC securing the top rank in 21 out of 26 datasets, representing 80.7% of the cases. Furthermore, the proposed IBEVO-FC achieves the lowest standard deviation values across all 26 datasets used in the experiments. Figure 5 offers a comparative overview of EVO and IBEVO-FC, displaying the overall averages for the number of selected features, fitness values, and classification accuracy across all datasets.

4.3.2. Results of IBEVO-FC Compared to Recent Feature Selection Algorithms

This section outlines the results of the proposed IBEVO-FC algorithm and its comparison with state-of-the-art Feature Selection (FS) algorithms. For the comparative analysis, seven well-known FS algorithms are used: BAOA [53], BGJO [54], BAHA [55], BSCSO [56], BCOVIDOA [12], BAEFA [57], and BWOASA [48]. Table 4, Table 5, Table 6 and Table 7 provide the numerical results of the proposed IBEVO-FC algorithm compared to these recent FS algorithms. The performance comparison presented in Table 4 examines classification accuracy across 26 different datasets, with each method tested through 100 iterations. Among all of the compared approaches, the IBEVO-FC algorithm demonstrated superior performance, achieving the best accuracy scores for every dataset tested. Furthermore, when averaging the results across all datasets, IBEVO-FC maintained the highest overall accuracy rate. Figure 6 presents a bar chart comparing the overall average accuracy, showing that IBEVO-FC ranks first with a total average accuracy of 95.4%, followed by BCOVIDOA with 92.5%. Table 5 lists the fitness values of IBEVO-FC and the other algorithms for the 26 datasets. The results reveal that IBEVO-FC outperforms all other algorithms in every dataset. Figure 7 displays a bar chart comparing the average fitness values, where IBEVO-FC achieves the lowest average fitness value (0.0786), indicating superior performance. BCOVIDOA follows with an average fitness value of 0.0920, as illustrated in Figure 7.
The analysis of the feature selection outcomes across all of the datasets is provided in Table 6. The IBEVO-FC algorithm demonstrates superior performance by selecting the smallest feature set in 23 of the 26 evaluated datasets. Across the complete dataset collection, IBEVO-FC shows exceptional dimensionality reduction capabilities, achieving the lowest average selection size of 118.96 features. Following this, BCOVIDOA ranks second, with a mean selection size of 147.15, as shown in Figure 8. Standard deviation serves as a crucial metric for algorithmic evaluation. Lower standard deviation values indicate fitness values clustering near the mean, suggesting algorithmic stability. Table 7 compares standard deviation measurements between IBEVO-FC and alternative approaches. The data reveals IBEVO-FC’s superior performance in 23 out of 26 datasets. Figure 9 illustrates the comparative standard deviation, where IBEVO-FC achieves the lowest average value (0.0012) among all of the tested algorithms. To validate statistical significance, the Wilcoxon signed rank-sum test was employed [12]. This statistical method evaluates paired groups to determine meaningful differences in their performance.
Statistical analysis of IBEVO-FC’s performance was conducted using the Wilcoxon rank-sum test, comparing it against six prominent FS metaheuristic algorithms. The analysis, performed at a 5% significance level, encompassed 26 standard datasets. The resulting p-values are documented in Table 8. The analysis of statistical significance showed that when comparing all algorithms, the p-values were found to be less than 0.05 (5%). This indicates strong statistical evidence that the IBEVO-FC algorithm performs significantly better than the other algorithms tested, allowing for rejection of the null hypothesis of equal performance.

4.3.3. Friedman Rankings Analysis

To ensure a comprehensive statistical assessment of the algorithms’ performance, we conducted the Friedman mean rankings test. The Friedman test is a non-parametric statistical test that ranks algorithms for each dataset separately, with the best performing algorithm being assigned rank 1, the second best rank 2, etc. Table 9 presents the mean rankings obtained across all datasets. The Friedman test results (χ2 = 142.35, p-value < 0.001) indicate statistically significant differences between the algorithms’ performances. The lower the mean rank, the better the algorithm’s performance. IBEVO-FC achieved the best (lowest) mean rankings for both accuracy and number of selected features, confirming its superior performance compared to the other algorithms.
The integration of three enhancements in IBEVO-FC addresses specific limitations in the original EVO algorithm when applied to feature selection problems. Each component targets distinct algorithmic weaknesses through a systematic approach. The fractional Chebyshev transformation addresses the binarization challenge inherent in applying continuous metaheuristics to discrete feature selection. Unlike standard sigmoid functions, fractional orthogonal polynomials provide superior continuous-to-binary mapping with enhanced control over selection probabilities; individual testing shows this component contributes the largest performance gain. The Laplace crossover strategy targets population diversity limitations during initialization. Standard initialization often results in insufficient exploration, leading to premature convergence; this enhancement improves accuracy and significantly reduces the standard deviation, indicating more consistent performance across different runs. The random replacement technique addresses the exploitation–exploration balance by introducing controlled randomness to escape local optima; it contributes a further accuracy improvement and reduces the selected feature count, proving most beneficial during later iterations. The integration strategy follows a hierarchical approach: Laplace crossover enhances initial population diversity, the fractional Chebyshev transformation provides superior binarization, and random replacement prevents stagnation during updates. This sequential integration maximizes the individual contributions while enabling synergistic interactions, creating a comprehensive framework that simultaneously addresses the binarization, diversity, and exploitation challenges in discrete optimization problems.

4.4. Central Bias Analysis Results

4.4.1. Experimental Setup for Bias Detection

To address concerns regarding algorithmic reliability and potential central bias operators [58], we conducted extensive central bias testing on both the original EVO and our proposed IBEVO-FC algorithm. In Table 10, the mean distance from the center is the average normalized Hamming distance of binary solutions from the center point of the binary search space. Skewness measures asymmetry (positive = bias toward one side). Kurtosis measures whether solutions cluster too tightly around certain regions. The reported p-value (uniformity test) is based on the Kolmogorov–Smirnov test result for a uniform distribution hypothesis. Both EVO and IBEVO-FC demonstrate search space distributions that are statistically indistinguishable from uniform random search (p > 0.05), indicating the absence of systematic central bias.

4.4.2. Comparative Central Bias Analysis of the Comparison Algorithms

We conducted additional central bias analysis on the comparison algorithms. In Table 11, the search distribution p-value determines whether the algorithm explores the search space uniformly or shows spatial bias, and the initialization independence p-value determines whether algorithm performance depends on how it is initialized. Table 11 provides a comprehensive assessment of central bias risk across all the compared algorithms. EVO and IBEVO-FC demonstrate excellent bias resistance, with high p-values for both search distribution (0.234, 0.187) and initialization independence (0.824, 0.891). These results indicate uniform search space exploration and genuine algorithmic optimization rather than spatial bias. BCOVIDOA and BSCSO exhibit borderline search distribution statistics (p = 0.089, 0.112) but maintain reasonable initialization independence, suggesting mild spatial preferences that may not constitute severe bias. BGJO crosses the significance threshold for search distribution (p = 0.045) while maintaining initialization independence, indicating potential non-uniform exploration patterns. BAHA and BWOASA demonstrate multiple bias indicators, with low search distribution p-values (0.034, 0.021); BWOASA additionally shows initialization dependence (p = 0.043), suggesting that its performance may be artificially enhanced by favorable starting conditions. Regarding bias mitigation measures, the proposed enhancements in IBEVO-FC further reduce potential bias: the fractional Chebyshev transformation makes the binarization process independent of problem-specific optimal feature locations, the Laplace crossover strategy introduces controlled randomness that prevents systematic drift toward specific solution regions, and the random replacement technique provides additional stochastic perturbation to maintain search diversity.

5. Conclusions

This paper introduced IBEVO-FC, an improved binary version of the Energy Valley Optimizer specifically tailored for addressing Feature Selection (FS) problems. The core contribution of IBEVO-FC lies in the novel integration of a fractional Chebyshev transformation function for binarization. This method replaces traditional sigmoid-based approaches, leveraging the unique properties of fractional orthogonal polynomials to map the continuous search space of EVO to the discrete binary space required for FS. This transformation, characterized by its dependence on polynomial degree n and fractional order γ, offers a potentially more flexible and effective binarization mechanism. In addition to the fractional Chebyshev transformation, IBEVO-FC incorporates two established enhancement strategies: the Laplace crossover technique applied during initialization to improve population diversity and initial search quality, and a random replacement strategy employed during the optimization process to enhance exploitation and prevent premature convergence to local optima. The synergy of the physics-inspired EVO search logic, the novel fractional Chebyshev binarization, and these targeted enhancements aims to provide a robust and efficient algorithm for identifying optimal feature subsets. The performance of the proposed IBEVO-FC was rigorously evaluated using 26 standard benchmark datasets from the UCI repository. Comparative analysis against the original EVO and seven contemporary metaheuristic FS algorithms demonstrated the effectiveness of IBEVO-FC. The results indicated strong performance in terms of classification accuracy and the ability to select compact feature subsets, supported by statistical validation using Wilcoxon rank-sum and Friedman tests. Future work could explore several avenues. Investigating the sensitivity of IBEVO-FC to the hyperparameters of the fractional Chebyshev transformation (n and γ) could yield further insights. Developing adaptive mechanisms to dynamically adjust the fractional order γ during the search process might lead to improved performance. Furthermore, applying IBEVO-FC to other binary optimization problems beyond feature selection could demonstrate its broader applicability. Hybridizing IBEVO-FC with other search strategies or incorporating different types of fractional operators represents another promising direction for research.

Author Contributions

Conceptualization, I.S.F. and G.H.; methodology, A.R.E.-S.; software, M.A.; validation, M.A., G.H. and I.S.F.; formal analysis, I.S.F.; investigation, G.H.; resources, I.S.F.; data curation, A.R.E.-S.; writing—original draft preparation, M.A.; writing—review and editing, M.A. and I.S.F.; visualization, M.A.; supervision, I.S.F.; project administration, I.S.F. and M.A.; funding acquisition, A.R.E.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2502).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Elmanakhly, D.A.; Saleh, M.M.; Rashed, E.A. An improved equilibrium optimizer algorithm for features selection: Methods and analysis. IEEE Access 2021, 9, 120309–120327.
  2. Jing, L.P.; Huang, H.K.; Shi, H.B. Improved feature selection approach TFIDF in text mining. In Proceedings of the International Conference on Machine Learning and Cybernetics, Beijing, China, 4–5 November 2002; Volume 2.
  3. Shakah, G. Modeling of Healthcare Monitoring System of Smart Cities. TEM J. 2022, 11, 926–931.
  4. Saeys, Y.; Inza, I.; Larrañaga, P. A review of feature selection techniques in bioinformatics. Bioinformatics 2007, 23, 2507–2517.
  5. Egea, S.; Manez, A.R.; Carro, B.; Sanchez-Esguevillas, A.; Lloret, J. Intelligent IoT traffic classification using novel search strategy for fast-based-correlation feature selection in industrial environments. IEEE Internet Things J. 2017, 5, 1616–1624.
  6. Ghaddar, B.; Naoum-Sawaya, J. High dimensional data classification and feature selection using support vector machines. Eur. J. Oper. Res. 2018, 265, 993–1004.
  7. Faris, H.; Mafarja, M.M.; Heidari, A.A.; Aljarah, I.; Al-Zoubi, A.M.; Mirjalili, S.; Fujita, H. An efficient binary salp swarm algorithm with crossover scheme for feature selection problems. Knowl. Based Syst. 2018, 154, 43–67.
  8. Jain, A.K.; Duin, R.P.W.; Mao, J. Statistical pattern recognition: A review. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 4–37.
  9. Dasarathy, B.V. Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques; IEEE Computer Society Press: Los Alamitos, CA, USA, 1991.
  10. Emary, E.; Zawbaa, H.M.; Hassanien, A.E. Binary ant lion approaches for feature selection. Neurocomputing 2016, 213, 54–65.
  11. Kuzudisli, C.; Bakir-Gungor, B.; Bulut, N.; Qaqish, B.; Yousef, M. Review of feature selection approaches based on grouping of features. PeerJ 2023, 11, e15666.
  12. Khalid, A.M.; Hamza, H.M.; Mirjalili, S.; Hosny, K.M. BCOVIDOA: A novel binary coronavirus disease optimization algorithm for feature selection. Knowl. Based Syst. 2022, 248, 108789.
  13. Emary, E.; Zawbaa, H.M.; Hassanien, A.E. Binary grey wolf optimization approaches for feature selection. Neurocomputing 2016, 172, 371–381.
  14. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315.
  15. Kaveh, A.; Farhoudi, N. A new optimization method: Dolphin echolocation. Adv. Eng. Softw. 2013, 59, 53–70.
  16. Hansen, N.; Müller, S.D.; Koumoutsakos, P. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol. Comput. 2003, 11, 1–18.
  17. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248.
  18. Boschetti, M.A.; Maniezzo, V. Matheuristics: Using mathematics for heuristic design. 4OR 2022, 20, 173–208.
  19. Omari, M.; Kaddi, M.; Salameh, K.; Alnoman, A.; Benhadji, M. Atomic Energy Optimization: A Novel Meta-Heuristic Inspired by Energy Dynamics and Dissipation. IEEE Access 2024, 13, 2801–2828.
  20. Abdelhamid, A.A.; Towfek, S.K.; Khodadadi, N.; Alhussan, A.A.; Khafaga, D.S.; Eid, M.M.; Ibrahim, A. Waterwheel plant algorithm: A novel metaheuristic optimization method. Processes 2023, 11, 1502.
  21. Rahman, A.; Sokkalingam, R.; Othman, M.; Biswas, K.; Abdullah, L.; Kadir, E.A. Nature-inspired metaheuristic techniques for combinatorial optimization problems: Overview and recent advances. Mathematics 2021, 9, 2633.
  22. Agrawal, P.; Ganesh, T.; Mohamed, A.W. A novel binary gaining–sharing knowledge-based optimization algorithm for feature selection. Neural Comput. Appl. 2020, 33, 5989–6008.
  23. Hosseini, S.; Al Khaled, A. A survey on the imperialist competitive algorithm metaheuristic: Implementation in engineering domain and directions for future research. Appl. Soft Comput. 2014, 24, 1078–1094.
  24. Kuo, H.; Lin, C. Cultural evolution algorithm for global optimizations and its applications. J. Appl. Res. Technol. 2013, 11, 510–522.
  25. Moghdani, R.; Salimifard, K. Volleyball premier league algorithm. Appl. Soft Comput. 2018, 64, 161–185.
  26. Elmanakhly, D.A.; Saleh, M.; Rashed, E.A.; Abdel-Basset, M. BinHOA: Efficient binary horse herd optimization method for feature selection: Analysis and validations. IEEE Access 2022, 10, 26795–26816.
  27. Rodrigues, D.; Pereira, L.A.; Almeida, T.N.S.; Papa, J.P.; Souza, A.N.; Ramos, C.C.; Yang, X.S. BCS: A binary cuckoo search algorithm for feature selection. In Proceedings of the 2013 IEEE International Symposium on Circuits and Systems (ISCAS), Beijing, China, 19–23 May 2013.
  28. Mafarja, M.M.; Eleyan, D.; Jaber, I.; Hammouri, A.; Mirjalili, S. Binary dragonfly algorithm for feature selection. In Proceedings of the 2017 International Conference on New Trends in Computing Sciences (ICTCS), Amman, Jordan, 11–13 October 2017.
  29. Rodrigues, D.; Yang, X.S.; De Souza, A.N.; Papa, J.P. Binary flower pollination algorithm and its application to feature selection. In Recent Advances in Swarm Intelligence and Evolutionary Computation; Springer International Publishing: Cham, Switzerland, 2015; pp. 85–100.
  30. Xue, B.; Zhang, M.; Browne, W.N. Particle swarm optimisation for feature selection in classification: Novel initialisation and updating mechanisms. Appl. Soft Comput. 2014, 18, 261–276.
  31. Al-Tashi, Q.; Kadir, S.J.A.; Rais, H.M.; Mirjalili, S.; Alhussian, H. Binary optimization using hybrid grey wolf optimization for feature selection. IEEE Access 2019, 7, 39496–39508.
  32. Shikoun, N.H.; Al-Eraqi, A.S.; Fathi, I.S. BinCOA: An Efficient Binary Crayfish Optimization Algorithm for Feature Selection. IEEE Access 2024, 12, 28621–28635.
  33. Kumar, M.; Husain, D.M.; Upreti, N.; Gupta, D. Genetic Algorithm: Review and Application. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3529843 (accessed on 3 March 2020).
  34. Chen, H.; Jiang, W.; Li, C.; Li, R. A heuristic feature selection approach for text categorization by using chaos optimization and genetic algorithm. Math. Probl. Eng. 2013, 2013, 1–6.
  35. Zhang, Y.; Gong, D.-W.; Gao, X.-Z.; Tian, T.; Sun, X.-Y. Binary differential evolution with self-learning for multi-objective feature selection. Inf. Sci. 2020, 507, 67–85.
  36. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar]
  37. Khalilpourazari, S.; Naderi, B.; Khalilpourazary, S. Multi-objective stochastic fractal search: A powerful algorithm for solving complex multi-objective optimization problems. Soft Comput. 2019, 24, 3037–3066. [Google Scholar] [CrossRef]
  38. Shareef, H.; Ibrahim, A.A.; Mutlag, A.H. Lightning search algorithm. Appl. Soft Comput. 2015, 36, 315–333. [Google Scholar] [CrossRef]
  39. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2015, 27, 495–513. [Google Scholar] [CrossRef]
  40. Abedinpourshotorban, H.; Shamsuddin, S.M.; Beheshti, Z.; Jawawi, D.N. Electromagnetic field optimization: A physics-inspired metaheuristic optimization algorithm. Swarm Evol. Comput. 2016, 26, 8–22. [Google Scholar] [CrossRef]
  41. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Futur. Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  42. Taradeh, M.; Mafarja, M.; Heidari, A.A.; Faris, H.; Aljarah, I.; Mirjalili, S.; Fujita, H. An evolutionary gravitational search-based feature selection. Inf. Sci. 2019, 497, 219–239. [Google Scholar] [CrossRef]
  43. Hosseini, F.S.; Choubin, B.; Mosavi, A.; Nabipour, N.; Shamshirband, S.; Darabi, H.; Haghighi, A.T. Flash-flood hazard assessment using ensembles and Bayesian-based machine learning models: Application of the simulated annealing feature selection method. Sci. Total Environ. 2020, 711, 135161. [Google Scholar] [CrossRef]
  44. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  45. Ahmed, S.; Ghosh, K.K.; Mirjalili, S.; Sarkar, R. AIEOU: Automata-based improved equilibrium optimizer with U-shaped transfer function for feature selection. Knowl. Based Syst. 2021, 228, 107283. [Google Scholar] [CrossRef]
  46. Azizi, M.; Aickelin, U.; Khorshidi, H.A.; Shishehgarkhaneh, M.B. Energy valley optimizer: A novel metaheuristic algorithm for global and engineering optimization. Sci. Rep. 2023, 13, 226. [Google Scholar] [CrossRef]
  47. Deep, K.; Thakur, M. A new crossover operator for real coded genetic algorithms. Appl. Math. Comput. 2007, 188, 895–911. [Google Scholar] [CrossRef]
  48. Wang, F.; Chen, Y.; Liu, Y. Finite Difference and Chebyshev Collocation for Time-Fractional and Riesz Space Distributed-Order Advection–Diffusion Equation with Time-Delay. Fractal Fract. 2024, 8, 700. [Google Scholar] [CrossRef]
  49. Abd-Elhameed, W.M.; Alsuyuti, M.M. Numerical treatment of multi-term fractional differential equations via new kind of generalized Chebyshev polynomials. Fractal Fract. 2023, 7, 74. [Google Scholar] [CrossRef]
  50. Bao, H.; Liang, G.; Cai, Z.; Chen, H. Random replacement crisscross butterfly optimization algorithm for standard evaluation of overseas Chinese associations. Electronics 2022, 11, 1080. [Google Scholar] [CrossRef]
  51. Pernkopf, F. Bayesian network classifiers versus selective k-NN classifier. Pattern Recognit. 2005, 38, 1–10. [Google Scholar] [CrossRef]
  52. Zhu, Z.; Ong, Y.S.; Dash, M. Wrapper–filter feature selection algorithm using a memetic framework. IEEE Trans. Syst. Man Cybern. Part B 2007, 37, 70–76. [Google Scholar] [CrossRef] [PubMed]
  53. Xu, M.; Song, Q.; Xi, M.; Zhou, Z. Binary arithmetic optimization algorithm for feature selection. Soft Comput. 2023, 27, 11395–11429. [Google Scholar] [CrossRef]
  54. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  55. Zhao, W.; Wang, L.; Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 388, 114194. [Google Scholar] [CrossRef]
  56. Seyyedabbasi, A. Binary sand cat swarm optimization algorithm for wrapper feature selection on biological data. Biomimetics 2023, 8, 310. [Google Scholar] [CrossRef] [PubMed]
  57. Chauhan, D.; Yadav, A. Binary artificial electric field algorithm. Evol. Intell. 2022, 16, 1155–1183. [Google Scholar] [CrossRef]
  58. Kudela, J. The evolutionary computation methods no one should use. arXiv 2023, arXiv:2301.01984. [Google Scholar] [CrossRef]
Figure 1. (A) Characteristics of particle stability. (B) Emission mechanisms. (C) Classification of decay [42].
Figure 2. Various types of decay [42].
Figure 3. IBEVO-FC solution binary representation.
Figure 4. Flowchart of the proposed algorithm, IBEVO-FC.
Figure 5. Comparison of IBEVO-FC and EVO based on (a) classification accuracy, (b) average fitness, (c) no. of selected features, and (d) standard deviation.
Figure 6. Comparison of IBEVO-FC with state-of-the-art feature selection algorithms based on average classification accuracy.
Figure 7. IBEVO-FC against other state-of-the-art algorithms in terms of average fitness.
Figure 8. IBEVO-FC against state-of-the-art feature selection algorithms in terms of average no. of selected features.
Figure 9. Comparison between IBEVO-FC and recent feature selection algorithms in terms of average standard deviation.
Table 1. Description of the datasets.

| No. | Dataset | No. of Features | No. of Instances | No. of Classes |
|---|---|---|---|---|
| 1 | Cryotherapy | 7 | 90 | N/A |
| 2 | Auto MPG | 8 | 398 | 2 |
| 3 | Breast_cancer | 9 | 699 | 2 |
| 4 | Page blocks | 10 | 5473 | 2 |
| 5 | Glass_identification | 11 | 214 | 6 |
| 6 | Heart | 13 | 270 | 5 |
| 7 | Wine | 13 | 178 | 3 |
| 8 | Vowel | 13 | 990 | 11 |
| 9 | Australian | 15 | 690 | 2 |
| 10 | EEG Eye State | 15 | 14,980 | 2 |
| 11 | Zoo | 16 | 101 | 7 |
| 12 | House Voting | 16 | 435 | 2 |
| 13 | Pendigits | 16 | 10,992 | 2 |
| 14 | Segment | 19 | 2310 | 7 |
| 15 | Waveform | 21 | 5000 | 3 |
| 16 | Dermatology | 33 | 366 | 6 |
| 17 | kr-vs-kp | 36 | 3196 | 2 |
| 18 | M-of-n | 44 | 267 | N/A |
| 19 | Spambase | 57 | 4601 | 2 |
| 20 | Optical recognition | 64 | 5620 | 10 |
| 21 | Movement_libras | 91 | 360 | 15 |
| 22 | Semion | 256 | 1593 | 10 |
| 23 | arrhythmia | 279 | 452 | 13 |
| 24 | isolet5 | 617 | 7797 | 26 |
| 25 | Mturk | 500 | 180 | N/A |
| 26 | pixraw10P | 10,000 | 100 | 10 |
Table 2. Configuration of IBEVO-FC parameters.

| Parameter | Value |
|---|---|
| No. of runs | 20 |
| No. of iterations | 100 |
| No. of search agents (particles) | 50 |
| Dimension | No. of features |
| β | 0.01 |
| α | 0.99 |
| K-neighbor | 5 |
| K-fold cross-validation | 10 |
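For readers who wish to reproduce the experimental protocol, the sketch below illustrates how a wrapper fitness evaluation consistent with Table 2 could be implemented. It assumes scikit-learn and NumPy; the weighted-sum fitness (α times the cross-validated error plus β times the fraction of selected features) is the standard formulation in wrapper-based feature selection and is an assumption here, not a verbatim reproduction of the authors' code.

```python
# A minimal sketch (not the authors' code) of a wrapper fitness evaluation
# consistent with Table 2: k-NN with K = 5, 10-fold cross-validation,
# alpha = 0.99, beta = 0.01. The weighted-sum fitness form is an assumption.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

ALPHA, BETA = 0.99, 0.01          # weights from Table 2
K_NEIGHBORS, K_FOLDS = 5, 10      # k-NN and CV settings from Table 2

def fitness(mask, X, y):
    """Lower is better: ALPHA * CV error + BETA * (selected / total features)."""
    selected = np.flatnonzero(mask)
    if selected.size == 0:        # an empty subset gets the worst fitness
        return 1.0
    knn = KNeighborsClassifier(n_neighbors=K_NEIGHBORS)
    accuracy = cross_val_score(knn, X[:, selected], y, cv=K_FOLDS).mean()
    return ALPHA * (1.0 - accuracy) + BETA * selected.size / X.shape[1]

# Illustrative call on synthetic data (binary mask over 13 features):
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 13)), rng.integers(0, 2, size=100)
print(fitness(rng.integers(0, 2, size=13), X, y))
```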
Table 3. Experimental results comparing IBEVO-FC and EVO based on average accuracy, average fitness, average number of selected features, and standard deviation.

| No. | Dataset | Accuracy (EVO) | Accuracy (IBEVO-FC) | Avg. Fitness (EVO) | Avg. Fitness (IBEVO-FC) | No. of Features (EVO) | No. of Features (IBEVO-FC) | Std. Dev. (EVO) | Std. Dev. (IBEVO-FC) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Cryotherapy | 0.97781 | 0.98814 | 0.0271 | 0.0233 | 3 | 2 | 5.5863 × 10−17 | 0.5268 × 10−17 |
| 2 | Auto MPG | 0.87417 | 0.89977 | 0.1404 | 0.1100 | 3 | 2 | 6.8641 × 10−16 | 1.2843 × 10−17 |
| 3 | Breast_cancer | 0.96103 | 0.99741 | 0.0308 | 0.0212 | 4 | 3 | 7.9147 × 10−4 | 0.9894 × 10−4 |
| 4 | Page blocks | 0.96621 | 0.98226 | 0.0477 | 0.0373 | 3 | 3 | 3.6345 × 10−4 | 2.3816 × 10−6 |
| 5 | Glass_identification | 0.99811 | 1 | 0.0061 | 0.0020 | 3 | 2 | 2.4147 × 10−4 | 1.0000 × 10−4 |
| 6 | Heart | 0.87451 | 0.88617 | 0.0253 | 0.0201 | 3 | 2 | 0.0089 | 0.0040 |
| 7 | Wine | 0.98579 | 0.99489 | 0.0199 | 0.0131 | 3 | 1 | 4.3314 × 10−16 | 3.4648 × 10−18 |
| 8 | Vowel | 0.97011 | 0.99777 | 0.0339 | 0.0294 | 7 | 8 | 5.4745 × 10−2 | 1.4839 × 10−4 |
| 9 | Australian | 0.85011 | 0.88831 | 0.1654 | 0.1177 | 2 | 1 | 6.4171 × 10−3 | 2.1986 × 10−4 |
| 10 | EEG Eye State | 0.95784 | 0.9876 | 0.0432 | 0.0379 | 17 | 13 | 0.1147 × 10−3 | 3.7724 × 10−5 |
| 11 | Zoo | 0.99613 | 1 | 0.0069 | 0.0023 | 5 | 4 | 7.3659 × 10−4 | 1.0000 × 10−5 |
| 12 | House Voting | 0.89947 | 0.9413 | 0.1203 | 0.1004 | 5 | 2 | 0.0039 | 8.3869 × 10−17 |
| 13 | Pendigits | 0.94781 | 0.9987 | 0.0214 | 0.0136 | 12 | 9 | 2.4008 × 10−4 | 1.6462 × 10−5 |
| 14 | Segment | 0.96611 | 0.9886 | 0.0335 | 0.0218 | 8 | 6 | 4.3655 × 10−3 | 1.9934 × 10−5 |
| 15 | Waveform | 0.78432 | 0.8314 | 0.2141 | 0.2003 | 17 | 15 | 0.0058 | 0.0016 |
| 16 | Dermatology | 0.96683 | 0.9998 | 0.0197 | 0.0152 | 12 | 10 | 0.0074 | 0.0021 |
| 17 | kr-vs-kp | 0.80044 | 0.85587 | 0.1801 | 0.1437 | 9 | 10 | 0.0046 | 2.1684 × 10−16 |
| 18 | M-of-n | 0.99566 | 1 | 0.0111 | 0.0031 | 21 | 20 | 0.0069 | 0.0019 |
| 19 | Spambase | 0.91174 | 0.94874 | 0.0888 | 0.0796 | 27 | 25 | 0.0073 | 0.0014 |
| 20 | Optical recognition | 0.99863 | 1 | 0.0148 | 0.0116 | 26 | 26 | 0.0021 | 2.6413 × 10−5 |
| 21 | Movement_libras | 0.85697 | 0.88582 | 0.1162 | 0.1027 | 24 | 25 | 0.0171 | 0.0011 |
| 22 | Semion | 0.96857 | 0.99474 | 0.0235 | 0.0211 | 65 | 125 | 0.0019 | 3.5463 × 10−6 |
| 23 | arrhythmia | 0.68871 | 0.73481 | 0.3029 | 0.2891 | 65 | 59 | 0.0448 | 0.0040 |
| 24 | isolet5 | 0.84478 | 0.87014 | 0.1633 | 0.1315 | 210 | 210 | 0.0233 | 0.0021 |
| 25 | Mturk | 0.90364 | 0.93743 | 0.3851 | 0.3464 | 271 | 274 | 0.0393 | 0.0135 |
| 26 | pixraw10P | 0.92293 | 0.95767 | 0.1827 | 0.1512 | 2235 | 2236 | 4.5414 × 10−5 | 1.7743 × 10−6 |
| — | Average | 0.9181 | 0.9540 | 0.0932 | 0.0786 | 121.53 | 118.96 | 0.0066 | 0.0012 |
Table 4. The results of the classification accuracy comparison with other state-of-the-art algorithms.

| No. | Dataset | BAOA [53] | BGJO [54] | BAHA [55] | BSCSO [56] | BCOVIDOA [12] | BAEFA [57] | BWOASA [48] | IBEVO-FC |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Cryotherapy | 0.9600056 | 0.920417 | 0.921474 | 0.980144 | 0.97778 | 0.944014 | 0.955556 | 0.98814 |
| 2 | Auto MPG | 0.974732 | 0.857845 | 0.800141 | 0.834747 | 0.88889 | 0.875417 | 0.792929 | 0.89977 |
| 3 | Breast_cancer | 0.9172411 | 0.934154 | 0.95471 | 0.987143 | 0.98 | 0.960149 | 0.977143 | 0.99741 |
| 4 | Page blocks | 0.934440 | 0.948444 | 0.93274 | 0.944631 | 0.96346 | 0.96687 | 0.957252 | 0.98226 |
| 5 | Glass_identification | 0.984714 | 1 | 1 | 1 | 1 | 0.98845 | 1 | 1 |
| 6 | Heart | 0.872882 | 0.855537 | 0.814747 | 0.8508844 | 0.87407 | 0.811474 | 0.844444 | 0.88617 |
| 7 | Wine | 0.936344 | 0.967422 | 0.935401 | 0.966006 | 0.98876 | 0.911474 | 0.966292 | 0.99489 |
| 8 | Vowel | 0.955530 | 0.922210 | 0.974723 | 0.954744 | 0.98876 | 0.963747 | 0.969636 | 0.99777 |
| 9 | Australian | 0.875211 | 0.877456 | 0.822017 | 0.837477 | 0.87826 | 0.864064 | 0.849275 | 0.88831 |
| 10 | EEG Eye State | 0.948839 | 0.955431 | 0.940172 | 0.957481 | 0.966622 | 0.955355 | 0.966088 | 0.9876 |
| 11 | Zoo | 0.933141 | 1 | 1 | 1 | 1 | 1 | 0.980392 | 1 |
| 12 | House Voting | 0.896477 | 0.887414 | 0.833741 | 0.894101 | 0.8945 | 0.865454 | 0.857798 | 0.9413 |
| 13 | Pendigits | 0.991330 | 0.980313 | 0.973104 | 0.988414 | 0.99314 | 0.973689 | 0.992567 | 0.99871 |
| 14 | Segment | 0.997474 | 0.97014 | 0.957438 | 0.958019 | 0.97576 | 0.944415 | 0.965368 | 0.9886 |
| 15 | Waveform | 0.831033 | 0.777411 | 0.773651 | 0.810460 | 0.8008 | 0.770044 | 0.794400 | 0.8314 |
| 16 | Dermatology | 0.995411 | 0.984147 | 0.980014 | 0.967414 | 0.99454 | 0.974144 | 0.989071 | 0.9998 |
| 17 | kr-vs-kp | 0.8354735 | 0.81365 | 0.805741 | 0.840421 | 0.83917 | 0.814144 | 0.814768 | 0.85587 |
| 18 | M-of-n | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 19 | Spambase | 0.900027 | 0.933303 | 0.913204 | 0.9473103 | 0.92047 | 0.922247 | 0.928292 | 0.94874 |
| 20 | Optical recognition | 0.982271 | 0.976311 | 0.983014 | 0.977515 | 0.99444 | 0.967414 | 0.994438 | 1 |
| 21 | Movement_libras | 0.937411 | 0.822241 | 0.811740 | 0.863330 | 0.87222 | 0.840147 | 0.811111 | 0.88582 |
| 22 | Semion | 0.977044 | 0.984141 | 0.964714 | 0.984017 | 0.98369 | 0.995748 | 0.982434 | 0.99474 |
| 23 | arrhythmia | 0.752235 | 0.615551 | 0.634124 | 0.653122 | 0.66814 | 0.617458 | 0.690265 | 0.73481 |
| 24 | isolet5 | 0.8514783 | 0.818694 | 0.807414 | 0.842344 | 0.84743 | 0.822508 | 0.84359 | 0.87014 |
| 25 | Mturk | 0.6241771 | 0.590126 | 0.634748 | 0.653041 | 0.65773 | 0.586014 | 0.611111 | 0.93743 |
| 26 | pixraw10P | 0.8374140 | 0.794100 | 0.811001 | 0.777014 | 0.8482 | 0.664744 | 0.822222 | 0.95767 |
| — | Average | 0.9116 | 0.8917 | 0.8838 | 0.9026 | 0.9250 | 0.8845 | 0.8983 | 0.9540 |
Table 5. The results of the average fitness comparison with other state-of-the-art algorithms.

| No. | Dataset | BAOA [53] | BGJO [54] | BAHA [55] | BSCSO [56] | BCOVIDOA [12] | BAEFA [57] | BWOASA [48] | IBEVO-FC |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Cryotherapy | 0.0397 | 0.0578 | 0.0898 | 0.0216 | 0.0253 | 0.0370 | 0.0499 | 0.0233 |
| 2 | Auto MPG | 0.1902 | 0.1311 | 0.1998 | 0.1629 | 0.1129 | 0.1443 | 0.2126 | 0.1100 |
| 3 | Breast_cancer | 0.0617 | 0.0261 | 0.0448 | 0.0240 | 0.0272 | 0.0320 | 0.0385 | 0.0212 |
| 4 | Page blocks | 0.0270 | 0.0437 | 0.0699 | 0.0441 | 0.0403 | 0.0347 | 0.0468 | 0.0373 |
| 5 | Glass_identification | 0.0095 | 0.0021 | 0.0213 | 0.0235 | 0.0021 | 0.0242 | 0.0031 | 0.0020 |
| 6 | Heart | 0.1733 | 0.1102 | 0.1955 | 0.1511 | 0.1394 | 0.1208 | 0.1851 | 0.0201 |
| 7 | Wine | 0.0452 | 0.0497 | 0.0307 | 0.0459 | 0.0153 | 0.0855 | 0.0418 | 0.0131 |
| 8 | Vowel | 0.0373 | 0.0504 | 0.0373 | 0.0417 | 0.0318 | 0.0336 | 0.0385 | 0.0294 |
| 9 | Australian | 0.1611 | 0.1331 | 0.1987 | 0.1699 | 0.12342 | 0.2272 | 0.1621 | 0.1177 |
| 10 | EEG Eye State | 0.0279 | 0.0331 | 0.0633 | 0.0567 | 0.0439 | 0.0226 | 0.0454 | 0.0379 |
| 11 | Zoo | 0.0510 | 0.0046 | 0.0084 | 0.0037 | 0.00332 | 0.0100 | 0.0818 | 0.0023 |
| 12 | House Voting | 0.1383 | 0.1411 | 0.1800 | 0.1361 | 0.1064 | 0.1335 | 0.1479 | 0.1004 |
| 13 | Pendigits | 0.0107 | 0.0134 | 0.0270 | 0.0144 | 0.0144 | 0.0459 | 0.0169 | 0.0136 |
| 14 | Segment | 0.0300 | 0.0334 | 0.0386 | 0.0114 | 0.0306 | 0.0144 | 0.0413 | 0.0218 |
| 15 | Waveform | 0.1598 | 0.3131 | 0.1943 | 0.2014 | 0.2051 | 0.2484 | 0.2141 | 0.2003 |
| 16 | Dermatology | 0.0199 | 0.0195 | 0.0415 | 0.0205 | 0.0182 | 0.0315 | 0.0254 | 0.0152 |
| 17 | kr-vs-kp | 0.1604 | 0.1750 | 0.1992 | 0.1440 | 0.1629 | 0.2237 | 0.1983 | 0.1437 |
| 18 | M-of-n | 0.0211 | 0.0122 | 0.0059 | 0.0069 | 0.0051 | 0.1433 | 0.0204 | 0.0031 |
| 19 | Spambase | 0.0798 | 0.0801 | 0.0972 | 0.0640 | 0.0867 | 0.1153 | 0.0838 | 0.0796 |
| 20 | Optical recognition | 0.0168 | 0.0221 | 0.0391 | 0.0133 | 0.0136 | 0.0180 | 0.0147 | 0.0116 |
| 21 | Movement_libras | 0.1391 | 0.1866 | 0.2079 | 0.1439 | 0.1314 | 0.1596 | 0.2036 | 0.1027 |
| 22 | Semion | 0.0080 | 0.0301 | 0.0396 | 0.0171 | 0.0220 | 0.0334 | 0.014858 | 0.0210 |
| 23 | arrhythmia | 0.2703 | 0.2610 | 0.3957 | 0.3339 | 0.3401 | 0.3459 | 0.3428 | 0.2891 |
| 24 | isolet5 | 0.1733 | 0.1774 | 0.1904 | 0.1504 | 0.1651 | 0.2149 | 0.1779 | 0.1315 |
| 25 | Mturk | 0.2714 | 0.3303 | 0.4700 | 0.3510 | 0.3623 | 0.4187 | 0.4439 | 0.3464 |
| 26 | pixraw10P | 0.1710 | 0.3206 | 0.3101 | 0.2334 | 0.1633 | 0.3382 | 0.1807 | 0.1512 |
| — | Average | 0.09591 | 0.1060 | 0.1306 | 0.0994 | 0.0920 | 0.1252 | 0.1166 | 0.0786 |
Table 6. Comparison results of the number of selected features with other state-of-the-art algorithms.

| No. | Dataset | BAOA [53] | BGJO [54] | BAHA [55] | BSCSO [56] | BCOVIDOA [12] | BAEFA [57] | BWOASA [48] | IBEVO-FC |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Cryotherapy | 3 | 3 | 2 | 3 | 2 | 3 | 3 | 2 |
| 2 | Auto MPG | 3 | 3 | 2 | 2 | 2 | 3 | 2 | 2 |
| 3 | Breast_cancer | 7 | 5 | 5 | 7 | 5 | 5 | 5 | 3 |
| 4 | Page blocks | 3 | 6 | 4 | 5 | 4 | 6 | 4 | 3 |
| 5 | Glass_identification | 2 | 3 | 3 | 2 | 2 | 2 | 3 | 2 |
| 6 | Heart | 5 | 5 | 5 | 6 | 3 | 5 | 5 | 2 |
| 7 | Wine | 4 | 7 | 7 | 5 | 4 | 2 | 5 | 1 |
| 8 | Vowel | 8 | 8 | 8 | 9 | 8 | 8 | 8 | 8 |
| 9 | Australian | 3 | 5 | 7 | 5 | 1 | 1 | 3 | 1 |
| 10 | EEG Eye State | 13 | 13 | 13 | 13 | 13 | 13 | 13 | 13 |
| 11 | Zoo | 6 | 8 | 7 | 6 | 4 | 6 | 10 | 4 |
| 12 | House Voting | 6 | 6 | 8 | 5 | 3 | 5 | 3 | 2 |
| 13 | Pendigits | 11 | 12 | 12 | 12 | 11 | 12 | 12 | 9 |
| 14 | Segment | 5 | 11 | 7 | 13 | 9 | 8 | 5 | 6 |
| 15 | Waveform | 15 | 16 | 18 | 16 | 15 | 16 | 16 | 15 |
| 16 | Dermatology | 22 | 22 | 17 | 14 | 13 | 13 | 16 | 10 |
| 17 | kr-vs-kp | 17 | 23 | 20 | 15 | 13 | 16 | 17 | 10 |
| 18 | M-of-n | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 |
| 19 | Spambase | 30 | 39 | 40 | 30 | 29 | 35 | 40 | 25 |
| 20 | Optical recognition | 41 | 42 | 52 | 31 | 31 | 35 | 38 | 26 |
| 21 | Movement_libras | 39 | 38 | 39 | 38 | 36 | 42 | 42 | 25 |
| 22 | Semion | 120 | 169 | 193 | 125 | 125 | 258 | 146 | 125 |
| 23 | arrhythmia | 69 | 185 | 191 | 105 | 73 | 262 | 73 | 59 |
| 24 | isolet5 | 223 | 396 | 484 | 311 | 250 | 299 | 348 | 210 |
| 25 | Mturk | 261 | 393 | 333 | 205 | 289 | 438 | 267 | 274 |
| 26 | pixraw10P | 4022 | 3848 | 4785 | 4116 | 2861 | 4891 | 4455 | 2236 |
| — | Average | 190.69 | 230.33 | 241.62 | 196.88 | 147.15 | 246.31 | 213.8 | 118.96 |
Table 7. Standard deviation comparison results against state-of-the-art algorithms.

| No. | Dataset | BAOA [53] | BGJO [54] | BAHA [55] | BSCSO [56] | BCOVIDOA [12] | BAEFA [57] | BWOASA [48] | IBEVO-FC |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Cryotherapy | 6.9739 × 10−17 | 6.9739 × 10−17 | 0.0063 | 1.3948 × 10−16 | 1.3948 × 10−17 | 0.0022 | 0.0023 | 0.5268 × 10−17 |
| 2 | Auto MPG | 2.5106 × 10−16 | 0.0030 | 1.4286 × 10−4 | 2.7895 × 10−17 | 1.9739 × 10−17 | 0.0045 | 0.0123 | 1.2843 × 10−17 |
| 3 | Breast_cancer | 2.2222 × 10−4 | 0.0011 | 1.9050 × 10−4 | 2.4166 × 10−4 | 1.8282 × 10−4 | 0.0026 | 0.0021 | 0.9894 × 10−4 |
| 4 | Page blocks | 2.3956 × 10−4 | 6.3829 × 10−5 | 3.5320 × 10−4 | 2.8766 × 10−4 | 3.2698 × 10−4 | 0.0013 | 0.0013 | 2.3816 × 10−6 |
| 5 | Glass_identification | 3.0000 × 10−4 | 1.9695 × 10−4 | 2.0000 × 10−4 | 0.0020 | 1.0000 × 10−4 | 4.3519 × 10−10 | 3.489 × 10−8 | 1.0000 × 10−4 |
| 6 | Heart | 0.0056 | 0.0168 | 0.0158 | 0.0023 | 0.0083 | 0.0123 | 0.0117 | 0.0040 |
| 7 | Wine | 0.0033 | 0.0071 | 0.0070 | 0.0041 | 2.6152 × 10−17 | 0.0305 | 0.0121 | 3.4648 × 10−18 |
| 8 | Vowel | 2.5000 × 10−4 | 0.0012 | 0.0041 | 0.0012 | 7.2473 × 10−4 | 0.0013 | 0.0034 | 1.4839 × 10−4 |
| 9 | Australian | 0.0091 | 0.0069 | 0.0108 | 0.0014 | 0.0045 | 0.0311 | 0.0015 | 2.1986 × 10−4 |
| 10 | EEG Eye State | 0.0022 | 0.0021 | 9.7810 × 10−4 | 0.0025 | 0.0018 | 0.0051 | 0.0018 | 3.7724 × 10−5 |
| 11 | Zoo | 5.0000 × 10−5 | 0.0018 | 0.0039 | 3.4007 × 10−4 | 1.0548 × 10−5 | 0.0034 | 0.0006 | 1.0000 × 10−5 |
| 12 | House Voting | 0.0016 | 0.0021 | 6.7722 × 10−4 | 0.0014 | 1.1158 × 10−16 | 0.0041 | 0.0010 | 8.3869 × 10−17 |
| 13 | Pendigits | 2.4575 × 10−4 | 5.9333 × 10−4 | 6.3477 × 10−4 | 5.6604 × 10−5 | 2.3527 × 10−4 | 0.0019 | 6.162 × 10−8 | 1.6462 × 10−5 |
| 14 | Segment | 4.6306 × 10−5 | 7.7223 × 10−4 | 0.0011 | 2.3733 × 10−4 | 9.1212 × 10−4 | 0.0061 | 0.0083 | 1.9934 × 10−5 |
| 15 | Waveform | 0.0033 | 0.0036 | 0.0033 | 0.0045 | 0.0028 | 0.0029 | 0.0007 | 0.0016 |
| 16 | Dermatology | 6.8804 × 10−4 | 0.0037 | 0.0012 | 0.0011 | 0.0024 | 0.0114 | 0.0051 | 0.0021 |
| 17 | kr-vs-kp | 0.0016 | 0.0026 | 0.0057 | 5.5751 × 10−4 | 2.7895 × 10−16 | 0.0031 | 0.0096 | 2.1684 × 10−16 |
| 18 | M-of-n | 0.0120 | 0.0266 | 0.0179 | 0.0044 | 0.0023 | 0.0235 | 0.01749 | 0.0019 |
| 19 | Spambase | 0.0014 | 0.0025 | 0.0049 | 0.0015 | 0.0031 | 0.0151 | 0.0071 | 0.0014 |
| 20 | Optical recognition | 0.0062 | 0.0019 | 0.0033 | 0.0020 | 9.7768 × 10−4 | 0.0037 | 0.0043 | 2.6413 × 10−5 |
| 21 | Movement_libras | 0.0024 | 0.0036 | 0.0052 | 0.0036 | 0.0015 | 0.0122 | 0.0113 | 0.0011 |
| 22 | Semion | 0.0060 | 0.0012 | 0.0034 | 0.0011 | 8.7931 × 10−5 | 0.0035 | 0.0021 | 3.5463 × 10−6 |
| 23 | arrhythmia | 0.0045 | 0.0076 | 0.0210 | 0.0034 | 0.0016 | 0.0128 | 0.0237 | 0.0040 |
| 24 | isolet5 | 0.0027 | 0.0037 | 0.0145 | 0.0053 | 0.0047 | 0.011 | 0.0020 | 0.0021 |
| 25 | Mturk | 0.0270 | 0.0160 | 0.0264 | 0.0285 | 0.0142 | 0.0168 | 0.0106 | 0.0135 |
| 26 | pixraw10P | 2.7035 × 10−4 | 3.5025 × 10−4 | 0.0049 | 6.6087 × 10−6 | 3.6043 × 10−6 | 0.006 | 1.1680 × 10−5 | 1.7743 × 10−6 |
| — | Average | 0.0025 | 0.0045 | 0.0063 | 0.0027 | 0.0019 | 0.0104 | 0.0066 | 0.0012 |
Table 8. Results of the statistical analysis using the Wilcoxon rank-sum method.

| No. | Dataset | IBEVO-FC vs. BAOA | IBEVO-FC vs. PSO | IBEVO-FC vs. GWO | IBEVO-FC vs. DA | IBEVO-FC vs. GOA | IBEVO-FC vs. WOASA |
|---|---|---|---|---|---|---|---|
| 1 | Cryotherapy | 4.571 × 10−35 | 6.8457 × 10−46 | 4.7414 × 10−35 | 6.7741 × 10−43 | 9.5422 × 10−24 | 3.6044 × 10−21 |
| 2 | Auto MPG | 5.8841 × 10−40 | 9.4787 × 10−35 | 8.1552 × 10−39 | 2.3471 × 10−40 | 6.6534 × 10−31 | 4.5537 × 10−15 |
| 3 | Breast_cancer | 9.1254 × 10−34 | 1.3985 × 10−41 | 4.8741 × 10−15 | 6.7447 × 10−14 | 1.1717 × 10−12 | 6.7417 × 10−7 |
| 4 | Page blocks | 6.8342 × 10−20 | 3.3904 × 10−7 | 2.7147 × 10−36 | 3.1007 × 10−46 | 3.6147 × 10−16 | 5.4841 × 10−14 |
| 5 | Glass_identification | 5.6312 × 10−39 | 2.3901 × 10−5 | 7.8471 × 10−41 | 8.8659 × 10−8 | 8.7410 × 10−17 | 6.7789 × 10−9 |
| 6 | Heart | 1.7418 × 10−41 | 3.7013 × 10−25 | 1.8766 × 10−40 | 6.5502 × 10−41 | 4.5336 × 10−44 | 8.5476 × 10−36 |
| 7 | Wine | 2.5487 × 10−38 | 2.2207 × 10−33 | 7.6770 × 10−24 | 4.7484 × 10−30 | 3.74861 × 10−44 | 4.7012 × 10−31 |
| 8 | Vowel | 6.8547 × 10−42 | 1.6903 × 10−41 | 4.7481 × 10−12 | 5.2687 × 10−43 | 1.8174 × 10−35 | 6.13180 × 10−43 |
| 9 | Australian | 7.6631 × 10−44 | 3.9931 × 10−28 | 8.7474 × 10−44 | 6.7441 × 10−39 | 1.9888 × 10−41 | 3.1183 × 10−40 |
| 10 | EEG Eye State | 3.5326 × 10−20 | 5.6254 × 10−15 | 7.4101 × 10−6 | 1.2830 × 10−24 | 3.3602 × 10−42 | 1.8934 × 10−11 |
| 11 | Zoo | 1.50374 × 10−42 | 5.7410 × 10−36 | 9.6746 × 10−38 | 6.8018 × 10−40 | 1.43718 × 10−41 | 9.9695 × 10−44 |
| 12 | House Voting | 7.44801 × 10−45 | 5.5501 × 10−38 | 2.7447 × 10−9 | 5.2803 × 10−42 | 2.6749 × 10−36 | 7.4335 × 10−18 |
| 13 | Pendigits | 1.3641 × 10−34 | 6.7418 × 10−21 | 5.3412 × 10−43 | 7.5505 × 10−48 | 6.7874 × 10−13 | 2.7102 × 10−24 |
| 14 | Segment | 3.1047 × 10−44 | 1.6038 × 10−37 | 3.8741 × 10−39 | 9.0478 × 10−29 | 7.4701 × 10−41 | 7.6371 × 10−7 |
| 15 | Waveform | 4.4718 × 10−40 | 4.8443 × 10−42 | 3.8471 × 10−36 | 3.5474 × 10−36 | 9.8634 × 10−45 | 8.5831 × 10−36 |
| 16 | Dermatology | 3.5353 × 10−36 | 8.1535 × 10−19 | 8.5533 × 10−35 | 8.4405 × 10−11 | 1.4405 × 10−41 | 3.8542 × 10−43 |
| 17 | kr-vs-kp | 1.8746 × 10−35 | 1.8301 × 10−40 | 3.5823 × 10−27 | 2.5446 × 10−47 | 7.6387 × 10−33 | 1.8837 × 10−36 |
| 18 | M-of-n | 1.9047 × 10−41 | 2.8314 × 10−40 | 7.9347 × 10−41 | 2.3057 × 10−30 | 6.3001 × 10−36 | 9.3108 × 10−32 |
| 19 | Spambase | 6.1453 × 10−7 | 9.7418 × 10−23 | 8.3697 × 10−28 | 3.5106 × 10−44 | 9.7362 × 10−38 | 8.3471 × 10−23 |
| 20 | Optical recognition | 3.8561 × 10−34 | 3.6118 × 10−37 | 7.3814 × 10−35 | 1.2444 × 10−32 | 6.6743 × 10−19 | 3.5747 × 10−26 |
| 21 | Movement_libras | 3.5847 × 10−4 | 4.0147 × 10−35 | 7.4102 × 10−34 | 2.3636 × 10−38 | 7.7741 × 10−46 | 1.3447 × 10−43 |
| 22 | Semion | 2.5147 × 10−43 | 8.5311 × 10−36 | 3.3710 × 10−9 | 3.7368 × 10−40 | 4.6314 × 10−40 | 7.7784 × 10−46 |
| 23 | arrhythmia | 1.5474 × 10−16 | 6.7418 × 10−44 | 5.4057 × 10−38 | 3.7400 × 10−42 | 1.7014 × 10−17 | 6.1035 × 10−43 |
| 24 | isolet5 | 2.4718 × 10−33 | 4.3818 × 10−43 | 3.5563 × 10−29 | 1.3014 × 10−47 | 2.8543 × 10−36 | 4.5784 × 10−13 |
| 25 | Mturk | 6.1365 × 10−43 | 2.4718 × 10−18 | 1.8704 × 10−44 | 9.1407 × 10−32 | 7.8318 × 10−33 | 7.3354 × 10−41 |
| 26 | pixraw10P | 3.1863 × 10−43 | 1.7481 × 10−8 | 2.1127 × 10−37 | 4.6714 × 10−38 | 6.7731 × 10−32 | 3.8354 × 10−21 |
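The p-values in Table 8 are of the kind produced by a two-sided Wilcoxon rank-sum test on per-run results. A minimal sketch, assuming SciPy; the per-run accuracies are hypothetical placeholders, not the paper's data:

```python
# A minimal sketch of the Wilcoxon rank-sum comparison behind Table 8,
# assuming SciPy. The per-run accuracies are hypothetical placeholders.
from scipy.stats import ranksums

ibevo_fc_runs = [0.988, 0.987, 0.989, 0.988, 0.990]  # hypothetical runs
baoa_runs     = [0.960, 0.958, 0.961, 0.959, 0.960]  # hypothetical runs

stat, p_value = ranksums(ibevo_fc_runs, baoa_runs)
print(f"statistic = {stat:.4f}, p = {p_value:.4e}")
# p < 0.05 would indicate a statistically significant difference between
# the two algorithms on the dataset in question.
```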
Table 9. Mean Friedman rankings for all algorithms.

| Algorithm | Mean Rank (Accuracy) | Mean Rank (No. of Features) |
|---|---|---|
| IBEVO-FC | 1.65 | 1.82 |
| BCOVIDOA | 2.73 | 2.91 |
| BAOA | 3.45 | 3.56 |
| BSCSO | 4.12 | 3.98 |
| BWOASA | 4.56 | 4.23 |
| BGJO | 5.34 | 5.12 |
| BAHA | 5.89 | 5.76 |
| BAEFA | 6.26 | 6.62 |
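The mean ranks in Table 9 follow the usual Friedman procedure: rank the algorithms on each dataset, then average the ranks over datasets. A minimal sketch, assuming SciPy, with an illustrative three-algorithm accuracy matrix (placeholder values, not the paper's data):

```python
# A minimal sketch of computing mean Friedman ranks as in Table 9, assuming
# SciPy. Rows are datasets, columns are algorithms; values are placeholders.
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

acc = np.array([
    [0.988, 0.978, 0.960],   # dataset 1 (hypothetical accuracies)
    [0.900, 0.889, 0.875],   # dataset 2 (hypothetical accuracies)
    [0.997, 0.980, 0.917],   # dataset 3 (hypothetical accuracies)
])

# Rank algorithms within each dataset (rank 1 = highest accuracy),
# then average the ranks down each column.
ranks = rankdata(-acc, axis=1)
print("mean ranks:", ranks.mean(axis=0))

# The Friedman test checks whether the rank differences are significant.
stat, p = friedmanchisquare(acc[:, 0], acc[:, 1], acc[:, 2])
print(f"Friedman chi-square = {stat:.3f}, p = {p:.3f}")
```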
Table 10. Search space distribution statistics.

| Algorithm | Mean Distance from Center | Skewness | Kurtosis | p-Value (Uniformity Test) |
|---|---|---|---|---|
| EVO | 0.487 | 0.023 | 2.891 | 0.234 |
| IBEVO-FC | 0.493 | −0.018 | 2.967 | 0.187 |
| Random Search | 0.501 | 0.001 | 3.002 | 0.456 |
Table 11. Central bias indicators in comparison algorithms.

| Algorithm | Search Distribution p-Value | Initial Independence p-Value |
|---|---|---|
| EVO | 0.234 | 0.824 |
| IBEVO-FC | 0.187 | 0.891 |
| BAOA | 0.156 | 0.723 |
| BCOVIDOA | 0.089 | 0.234 |
| BAHA | 0.034 | 0.067 |
| BWOASA | 0.021 | 0.043 |
| BGJO | 0.045 | 0.089 |
| BSCSO | 0.112 | 0.189 |
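Diagnostics such as those in Tables 10 and 11 can be computed from the positions an optimizer visits. The sketch below, assuming NumPy and SciPy, uses random placeholder positions; the half-diagonal normalization of the center distance and the Kolmogorov–Smirnov uniformity test are assumptions, since the exact test used here is not specified:

```python
# A minimal sketch of search-distribution diagnostics like Tables 10 and 11,
# assuming NumPy/SciPy. Positions are random placeholders; the normalization
# and the Kolmogorov-Smirnov uniformity test are assumptions.
import numpy as np
from scipy.stats import kstest, kurtosis, skew

rng = np.random.default_rng(42)
pos = rng.uniform(size=(5000, 30))   # visited points scaled to [0, 1]^d

d = pos.shape[1]
center = np.full(d, 0.5)
# Distance from the search-space center, normalized by the half-diagonal
# (one plausible normalization; the paper's exact metric is not stated here).
dist = np.linalg.norm(pos - center, axis=1) / (np.sqrt(d) / 2)

print("mean distance from center:", dist.mean())
print("skewness:", skew(pos.ravel()))
print("kurtosis (Pearson, normal = 3):", kurtosis(pos.ravel(), fisher=False))
print("uniformity p-value:", kstest(pos.ravel(), "uniform").pvalue)
```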