Article

Enhanced Polar Lights Optimization with Cryptobiosis and Differential Evolution for Global Optimization and Feature Selection

School of Petroleum Engineering, Yangtze University, Wuhan 430100, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(1), 53; https://doi.org/10.3390/biomimetics10010053
Submission received: 24 December 2024 / Revised: 13 January 2025 / Accepted: 13 January 2025 / Published: 14 January 2025

Abstract

Optimization algorithms play a crucial role in solving complex problems across various fields, including global optimization and feature selection (FS). This paper presents the enhanced polar lights optimization with cryptobiosis and differential evolution (CPLODE), a novel improvement upon the original polar lights optimization (PLO) algorithm. CPLODE integrates a cryptobiosis mechanism and differential evolution (DE) operators to enhance PLO’s search capabilities. The original PLO’s particle collision strategy is replaced with DE’s mutation and crossover operators, enabling a more effective global exploration and using a dynamic crossover rate to improve convergence. Furthermore, a cryptobiosis mechanism records and reuses historically successful solutions, thereby improving the greedy selection process. The experimental results on 29 CEC 2017 benchmark functions demonstrate CPLODE’s superior performance compared to eight classical optimization algorithms, with higher average ranks and faster convergence. Moreover, CPLODE achieved competitive results in feature selection on ten real-world datasets, outperforming several well-known binary metaheuristic algorithms in classification accuracy and feature reduction. These results highlight CPLODE’s effectiveness for both global optimization and feature selection.

1. Introduction

The increasing importance of feature selection arises from the complexities introduced by high-dimensional datasets [1]. In such datasets, irrelevant or redundant features can obscure meaningful patterns, compromise model performance, and escalate computational demands [2]. By concentrating on the identification of a subset of features that maintains or enhances a model’s predictive power, feature selection boosts the efficiency and efficacy of machine learning workflows [3].
Feature selection methods can be broadly categorized into three main types: filter methods, embedded methods, and wrapper methods, each distinguished by their underlying principles and inherent trade-offs [4]. Filter methods employ statistical measures to assess and rank features independently of any specific predictive model. Widely used techniques encompass correlation coefficients [5], mutual information [6], and variance thresholds [7]. Although computationally efficient, filter methods often overlook interactions among features, limiting their effectiveness in more complex situations. Embedded methods, conversely, integrate the feature selection process directly within the model training phase. Examples include Lasso regression [8], which introduces a penalty term to shrink the coefficients of less relevant features to zero, and tree-based models, in which feature importance is derived from split criteria. These methods generally provide improved performance by aligning feature selection with the model's objectives but are restricted by the choice of the base algorithm. Wrapper methods take a more comprehensive and iterative approach, evaluating feature subsets using a predictive model [9]. Despite their computational overheads, they are effective at addressing feature interactions and customizing the selected subset for a particular problem. Techniques like forward selection, backward elimination, and recursive feature elimination illustrate this category, underscoring its ability to effectively optimize feature sets. Wrapper-based feature selection is, by its very nature, a global optimization problem, where the search for the optimal subset within an exponentially growing number of combinations necessitates efficient algorithms. Formally, this problem can be defined as follows:
Let $\mathcal{F} = \{f_1, f_2, \ldots, f_n\}$ denote the complete set of $n$ features, and let $x = (x_1, x_2, \ldots, x_n)$ be a binary vector in which $x_i \in \{0, 1\}$ indicates whether the $i$-th feature is selected ($x_i = 1$) or not ($x_i = 0$). The goal of wrapper-based feature selection is to identify the optimal feature subset $\mathcal{S} \subseteq \mathcal{F}$ that maximizes (or minimizes) a predefined objective function $J(\mathcal{S})$, which typically evaluates the performance of a predictive model trained on the selected features. The search space for this problem is combinatorial in nature, with a total of $2^n$ possible feature subsets.
Traditional methods, such as exhaustive search or greedy algorithms, often struggle with the curse of dimensionality, thereby prompting the adoption of metaheuristic approaches [10].
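To make this encoding concrete, the short sketch below (Python with NumPy; the variable names are illustrative, not from the paper) shows how a binary mask selects a feature subset and how quickly the $2^n$ search space grows.

```python
import numpy as np

# Illustrative binary encoding for wrapper-based feature selection:
# x[i] = 1 keeps feature f_i, x[i] = 0 drops it.
n_features = 10
rng = np.random.default_rng(seed=0)

x = rng.integers(0, 2, size=n_features)   # one candidate subset
selected = np.flatnonzero(x)              # indices of the selected features

print("mask:", x, "-> selected feature indices:", selected)
print("search space size: 2 **", n_features, "=", 2 ** n_features, "subsets")
```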
Metaheuristic algorithms have emerged as effective tools for addressing challenging optimization problems, particularly in high-dimensional, multimodal, and non-convex search spaces [11]. These algorithms can be generally categorized into two main types: evolutionary algorithms and swarm intelligence algorithms. Evolutionary algorithms, drawing inspiration from natural selection, encompass techniques such as genetic algorithms (GA) [12] and differential evolution (DE) [13], which emulate biological evolutionary processes. Conversely, swarm intelligence algorithms, inspired by the collective behaviors of animal groups, include methods like particle swarm optimization (PSO) [14] and ant colony optimization (ACO) [15]. While both categories emphasize the importance of balancing exploration and exploitation, they diverge in their foundational principles and operational mechanisms.
Over the past few years, metaheuristic algorithms, particularly swarm intelligence-based approaches, have shown significant promise in addressing FS challenges. Several studies have explored improved versions of established metaheuristic algorithms for FS. For instance, Gao et al. [16] introduced clustering probabilistic particle swarm optimization (CPPSO), which enhances traditional PSO with probabilistic velocity representation and a K-means clustering strategy to improve both exploration and exploitation for high-dimensional data. Similarly, hybrid approaches have gained traction, such as the particle swarm-guided bald eagle search (PS-BES) by Kwakye et al. [17], which combines the speed of PSO to guide bald eagle search, introducing an attack–retreat–surrender mechanism to better balance diversification and intensification. These studies showcase the effectiveness of leveraging different search mechanisms for improved performance on benchmark datasets and real-world problems. Other variations of metaheuristics have explored improved exploration strategies, such as a modified version of the forensic-based investigation algorithm (DCFBI) proposed by Hu et al. [18], incorporating dynamic individual selection and crisscross mechanisms for improved convergence and avoidance of local optima. Furthermore, Askr et al. [19] proposed binary-enhanced golden jackal optimization (BEGJO), using copula entropy for dimensionality reduction while integrating enhancement strategies to improve exploration and exploitation capabilities. Beyond the improved variations of metaheuristic algorithms, novel algorithms have also emerged. Lian et al. [20] presented the parrot optimizer (PO), inspired by parrot behaviors, integrating stochasticity to enhance population diversity and avoid local optima. Likewise, Singh et al. [21] explored combining emperor penguin optimization, bacterial foraging optimization, and their hybrid to optimize feature selection for glaucoma classification. These studies indicate the emergence of diverse metaheuristic strategies to balance exploration and exploitation for FS.
While metaheuristic approaches have shown considerable success in feature selection, the no free lunch (NFL) theorem highlights their inherent limitations [22]. The NFL theorem asserts that no single optimization algorithm can consistently outperform all others across all problem instances. This necessitates ongoing innovation and adaptation of metaheuristic strategies to address diverse feature selection challenges. Researchers are thus motivated to refine existing algorithms or explore the combination of multiple techniques, such as hybridizing algorithms or incorporating adaptive mechanisms, to enhance their generalizability and robustness [23,24]. Informed by these considerations and the need to overcome the constraints of current metaheuristic algorithms, this study introduces an innovative approach to enhance existing algorithms, aiming to advance their applicability to both feature selection and global optimization tasks.
The polar lights optimization (PLO) algorithm, a recent metaheuristic optimization approach proposed by Yuan et al. in 2024 [25], draws its inspiration from the natural phenomenon of the aurora. PLO emulates the movement of high-energy particles as they are affected by the Earth’s magnetic field and atmosphere, incorporating three fundamental mechanisms: gyration motion for local exploitation, aurora oval walk for global exploration, and particle collision to facilitate an escape from local optima. A key advantage of PLO lies in its ability to balance local and global search through the use of adaptive weights. However, similar to other metaheuristic algorithms, PLO’s performance can be susceptible to parameter settings, and its convergence may be challenged by high-dimensional problems. Therefore, further research is warranted to investigate parameter-tuning strategies and assess PLO’s performance across diverse real-world applications to validate its robustness and practical utility.
This paper introduces CPLODE, an enhanced version of the polar lights optimization (PLO) algorithm, designed to improve its search capabilities through the integration of a cryptobiosis mechanism and differential evolution (DE) operators. Specifically, the cryptobiosis mechanism refines the greedy selection process within PLO, allowing the algorithm to retain and reuse historically effective search directions. Moreover, the original particle collision strategy in PLO is replaced by DE’s mutation and crossover operators, which provide a more effective means for global exploration and employ a dynamic crossover rate to promote improved convergence. These modifications collectively contribute to the enhanced performance of CPLODE. The key contributions of this paper can be summarized as follows:
  • A novel enhanced polar lights optimization algorithm, CPLODE, is proposed by integrating a cryptobiosis mechanism and differential evolution operators to enhance the search effectiveness.
  • The DE mutation and crossover operators replace the original particle collision strategy and use a dynamic and adaptive crossover rate to enable better solution convergence.
  • The cryptobiosis mechanism replaces the greedy selection approach and allows for the preservation and reuse of historically successful solutions to improve the overall performance.
  • The performance of CPLODE is validated through comprehensive experiments, demonstrating its efficacy in solving complex optimization problems.
The remainder of this paper is organized as follows: Section 1 introduces the research background, motivation, and key contributions. Section 2 describes the fundamentals of the original PLO algorithm. Section 3 presents the proposed CPLODE algorithm, including detailed explanations of the cryptobiosis mechanism and the DE operators. Section 4 covers the experimental setup, results, and their analysis to evaluate CPLODE’s performance. Section 5 explores the application of the proposed CPLODE algorithm in feature selection. Finally, Section 6 concludes the paper by summarizing the key findings and outlining potential future work.

2. The Original PLO

Polar lights optimization (PLO), introduced by Yuan et al. [25] in 2024, is a novel metaheuristic algorithm that mimics the movement of high-energy particles interacting with the Earth’s geomagnetic field and atmosphere, inspired by the natural phenomenon of the aurora. The algorithm solves optimization problems by modeling this particle motion, which is divided into three core phases: gyration motion, aurora oval walk, and particle collision.
1. Gyration motion: Inspired by the spiraling trajectory of high-energy particles under Lorentz force and atmospheric damping, gyration motion facilitates local exploitation. Mathematically, this is represented by the following equation:
$$v(t) = C e^{-\frac{qB}{\alpha m} t} \tag{1}$$
where $v(t)$ is the particle's velocity at time $t$, $C$ is a constant, $q$ is the particle's charge, $B$ is the strength of the Earth's magnetic field, $\alpha$ is the atmospheric damping factor, and $m$ is the particle's mass. In the PLO algorithm, $C$, $q$, and $B$ are set to 1 for simplicity, $m$ is set to 100, and $\alpha$ is a random value in the range [1, 1.5]. The number of fitness evaluations already consumed by the current particle serves as the time $t$, so the velocity decays over the run, modeling the decaying spiral trajectory and enabling fine-grained local search.
2. Aurora Oval Walk: The aurora oval walk emulates the dynamic movement of energetic particles along the auroral oval, facilitating global exploration. This movement is influenced by a Levy flight distribution, the average population position, and a random search component. The aurora oval walk of each particle is calculated using the following equation:
$$A_o = \mathrm{Levy}(d) \times \left(X_{avg}(j) - X(i,j)\right) + LB + r_1 \times \frac{UB - LB}{2} \tag{2}$$
where $A_o$ is the movement of a particle in the aurora oval walk; $i$ indexes the $i$-th individual ($1 \le i \le N$, with $N$ the population size); $j$ indexes the $j$-th dimension ($1 \le j \le D$, with $D$ the problem dimension); $\mathrm{Levy}(d)$ is the Levy distribution with step size $d$; the term $X_{avg}(j) - X(i,j)$ pulls particles toward the average position of the population; $LB$ and $UB$ are the lower and upper bounds of the search space; and $r_1$ is a random number in [0, 1].
This auroral oval walk enables rapid exploration of the solution space through a seemingly random walk. To integrate both gyration motion and the auroral oval walk, the updated position of each particle ( X n e w ( i , j ) ) is computed as follows:
$$X_{new}(i,j) = X(i,j) + r_2 \times \left(W_1 \times v(t) + W_2 \times A_o\right) \tag{3}$$
where $X(i,j)$ is the current particle position and $r_2$ is a random number in [0, 1] that introduces randomness. $W_1$ and $W_2$ are adaptive weights that balance exploration and exploitation. They are updated in each iteration as follows:
$$W_1 = \frac{2}{1 + e^{-2(t/T)^4}} - 1 \tag{4}$$
$$W_2 = e^{-(2t/T)^3} \tag{5}$$
where $t$ is the current iteration and $T$ is the maximum number of iterations. $W_1$ increases over time, giving more weight to gyration motion, while $W_2$ decreases, giving less weight to the aurora oval walk; the search thus shifts from global exploration toward local exploitation.
3. Particle Collision: Inspired by the violent particle collisions in Earth’s magnetic field, which result in energy transfer and changes in particle directions, this strategy enables particles to escape local optima. In PLO, each particle may collide with any other particle in the swarm and is modeled mathematically with:
$$X_{new}(i,j) = \begin{cases} X(i,j) + \sin(r_3 \pi) \times \left(X(i,j) - X(a,j)\right), & r_4 < K \ \text{and} \ r_5 < 0.05 \\ X(i,j), & \text{otherwise} \end{cases} \tag{6}$$
where $X_{new}(i,j)$ is the new position of particle $i$ in dimension $j$, $X(i,j)$ is its current position, $X(a,j)$ is the position of a randomly selected particle in the population, and $r_3$, $r_4$, and $r_5$ are random numbers in [0, 1]. The sine function introduces a variable direction of movement after the collision. The collision probability $K$ increases with the iterations as follows:
$$K = t/T \tag{7}$$
The PLO algorithm iteratively updates the particle positions by combining gyration motion and the aurora oval walk by Equations (2) and (3). These motion patterns are balanced by adaptive weights that gradually shift emphasis from global exploration to local exploitation. The particle collision behavior occurs stochastically, allowing particles to escape local optima. This process continues until a maximum number of iterations is reached, resulting in a near-optimal solution. The core strength of PLO lies in the combination of these inspired physical behaviors to enable an effective search and a specific mechanism to avoid local optima. Figure 1 shows the flowchart of PLO.
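For illustration only, the following Python/NumPy sketch assembles Equations (1)-(5) into one vectorized PLO position update. The parameter choices follow the text ($C = q = B = 1$, $m = 100$, $\alpha$ drawn from [1, 1.5]); the Mantegna-style Levy step and the single scalar evaluation counter are our simplifying assumptions, as the PLO paper may implement both differently.

```python
import numpy as np
from scipy.special import gamma

def levy(dim, beta=1.5, rng=None):
    """Mantegna-style Levy step (an assumption; PLO may generate Levy(d) differently)."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def plo_step(X, fes, t, T, lb, ub, rng):
    """One vectorized PLO position update (sketch of Eqs. (1)-(5)).

    X   : (N, D) population matrix
    fes : fitness evaluations consumed so far, used as the 'time' in Eq. (1)
          (the paper tracks this per particle; simplified here to one scalar)
    """
    N, D = X.shape
    alpha = rng.uniform(1.0, 1.5)                  # atmospheric damping factor
    C = q = B = 1.0
    m = 100.0
    v = C * np.exp(-(q * B) / (alpha * m) * fes)   # Eq. (1) as reconstructed above
    W1 = 2.0 / (1.0 + np.exp(-2.0 * (t / T) ** 4)) - 1.0   # Eq. (4): grows with t
    W2 = np.exp(-(2.0 * t / T) ** 3)                        # Eq. (5): shrinks with t
    X_avg = X.mean(axis=0)
    Ao = levy(D, rng=rng) * (X_avg - X) + lb + rng.random((N, D)) * (ub - lb) / 2  # Eq. (2)
    X_new = X + rng.random((N, 1)) * (W1 * v + W2 * Ao)     # Eq. (3)
    return np.clip(X_new, lb, ub)                 # keep particles inside the bounds
```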

3. Proposed CPLODE

3.1. Differential Evolution

Differential evolution (DE) is a population-based evolutionary algorithm known for its effectiveness in solving optimization problems through mutation and crossover operators [13]. In this work, we leverage DE's mutation and crossover operators as an alternative to the particle collision strategy in the original PLO algorithm, aiming to enhance the algorithm's global exploration capability. This replacement provides a more effective solution generation strategy than the random collisions used previously. Furthermore, the $r_4 < K$ and $r_5 < 0.05$ condition for particle collision is replaced with a dynamic and adaptive crossover rate. The specific implementation of these DE operators within the improved PLO is detailed below.
1. Mutation: The mutation operator, crucial in DE, generates a trial vector by perturbing the current solution. We employ the “DE/best/1” mutation strategy [26], which perturbs a base vector by adding a scaled difference vector. This is mathematically expressed as:
$$M(i) = X_{best} + F \times \left(X_{r1} - X_{r2}\right) \tag{8}$$
where $X_{r1}$ and $X_{r2}$ are two randomly selected individuals from the population, $X_{best}$ is the best individual in the population, and $F$ is a scaling factor.
2. Crossover: Following mutation, a crossover operator increases population diversity by combining beneficial components of the mutated vector and the current solution. We employ binomial crossover, where each component of the offspring is taken from either the mutated vector or the current solution with crossover probability $C_r$. This is mathematically expressed as:
$$C(i,j) = \begin{cases} M(i,j), & rand < C_r \\ X(i,j), & \text{otherwise} \end{cases} \tag{9}$$
where $C(i,j)$ is the $j$-th dimension of the $i$-th offspring, $rand$ is a random number in [0, 1], and $C_r$ is the crossover rate given by Equation (10):
$$C_r = 0.5\, e^{-2\,(FEs/MaxFEs)^{1/2}} + 0.1 \tag{10}$$
where $FEs$ is the current number of fitness evaluations and $MaxFEs$ is the maximum number of fitness evaluations. The dynamic crossover rate $C_r$ starts high, promoting exploration, and gradually decreases as the search proceeds, shifting the algorithm from diversification to intensification: the entire solution space is searched broadly at first, and promising regions are exploited later, which enhances convergence.
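A minimal sketch of these operators in Python/NumPy follows, assuming a population matrix X of shape (N, D); forcing at least one mutant dimension (the usual j_rand rule in binomial crossover) is a standard DE detail not spelled out in the text.

```python
import numpy as np

def dynamic_cr(fes, max_fes):
    """Dynamic crossover rate of Eq. (10): starts near 0.6, decays toward ~0.17."""
    return 0.5 * np.exp(-2.0 * np.sqrt(fes / max_fes)) + 0.1

def de_trial(X, i, best, F, cr, rng):
    """DE/best/1 mutation (Eq. (8)) followed by binomial crossover (Eq. (9))."""
    N, D = X.shape
    r1, r2 = rng.choice([k for k in range(N) if k != i], size=2, replace=False)
    mutant = X[best] + F * (X[r1] - X[r2])       # Eq. (8)
    mask = rng.random(D) < cr                    # Eq. (9): per-dimension choice
    mask[rng.integers(D)] = True                 # j_rand rule: keep >= 1 mutant gene
    return np.where(mask, mutant, X[i])
```

Here cr would be recomputed from dynamic_cr(FEs, MaxFEs) before each trial, so early trials inherit more mutant components while later trials stay closer to the current solution.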

3.2. Cryptobiosis Mechanism

The cryptobiosis mechanism, proposed by Zheng et al. in 2024 as part of the moss growth optimizer (MGO) algorithm [27], is implemented to refine the greedy selection mechanism. It draws inspiration from the cryptobiosis phenomenon observed in moss, which can endure periods of inactivity and subsequently revive under favorable conditions, and it records historical information for each solution. In contrast to conventional methods that directly modify individuals, this mechanism stores the solutions generated in each iteration. Specifically, it maintains a record of a fixed number of past solutions and tracks the best-performing particle. When specific criteria are met, such as reaching the maximum number of records or the conclusion of a generation, the mechanism is triggered: the best historical solution among the recorded solutions replaces the current solution. This approach facilitates repeated exploration of promising areas, thereby preserving the population's global search capability, while replacing individuals with their best historical solutions enhances population quality. The mechanism remains active throughout the search, improving search efficiency by reintroducing previously successful solutions rather than restarting the search from scratch at each step.
The pseudo-code of the cryptobiosis mechanism is shown in Algorithm 1. Several variables manage the mechanism: $X_i$ is the $i$-th solution in the population; $rec\_num$ is the maximum number of records that can be kept before a cryptobiosis event is triggered; $record$ is a counter tracking the number of records currently stored; $X_{record}(i, \cdot)$ stores the recorded solutions of the $i$-th individual; $t$ is the current iteration and $T$ is the maximum number of iterations allowed before the next cryptobiosis cycle; and $X_{record\_best}(i)$ stores the best solution among the records of the $i$-th individual. The algorithm cycles until the maximum number of fitness evaluations ($MaxFEs$) is reached; within each cycle, solutions are recorded and the best recorded solution of each individual is identified.
Algorithm 1: Pseudo-code of the cryptobiosis mechanism
1. Input: X_i: the i-th solution; rec_num: the maximum number of records
2. Output: updated X_i
3. record = 0
4. While FEs < MaxFEs
5.     If record = 0
6.         X_record(i, record + 1) = X(i)
7.         record = record + 1
8.     End if
9.     Update X                                    /* PLO search operators */
10.    For i = 1:N
11.        X_record(i, record + 1) = X(i)
12.        record = record + 1
13.        If record > rec_num − 1 or t ≥ T
14.            X_record_best(i) = X_record(i, 1)
15.            For e = 1:record
16.                If Fitness(X_record(i, e)) < Fitness(X_record_best(i))
17.                    X_record_best(i) = X_record(i, e)
18.                End if
19.            End for
20.            X(i) = X_record_best(i)
21.            record = 0
22.        End if
23.    End For
24.    FEs = FEs + N
25. End while
26. Return X
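For readers who prefer executable code, a compact per-individual rendering of the record-and-revive logic might look as follows (a sketch mirroring Algorithm 1; the class and method names are our own, not from any released implementation).

```python
import numpy as np

class CryptobiosisRecord:
    """Per-individual history buffer: record candidate solutions, then
    'revive' the best one when the buffer is full (sketch of Algorithm 1)."""

    def __init__(self, rec_num):
        self.rec_num = rec_num          # max records before a revival is triggered
        self.solutions, self.fitnesses = [], []

    def record(self, x, fx):
        """Store a snapshot of solution x with its fitness fx."""
        self.solutions.append(np.array(x, copy=True))
        self.fitnesses.append(fx)

    def full(self):
        return len(self.solutions) >= self.rec_num

    def revive(self):
        """Return the best recorded solution (minimization) and reset the buffer."""
        e = int(np.argmin(self.fitnesses))
        best_x, best_f = self.solutions[e], self.fitnesses[e]
        self.solutions, self.fitnesses = [], []
        return best_x, best_f
```

In the main loop, each individual i would call record(X[i], f(X[i])) after every update and, once full() returns True (or at the end of a generation), replace X[i] with the solution returned by revive().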

3.3. The Proposed CPLODE

This section delineates the workflow of the proposed CPLODE algorithm, which integrates the cryptobiosis mechanism and DE operators into the original PLO framework. CPLODE commences by initializing the required parameters and generating an initial population of solutions, which is consistent with standard optimization algorithms. The algorithm then proceeds through the following primary steps. Initially, the gyration motion strategy of PLO is executed to perform a local search around the current particle. Subsequently, instead of employing the original particle collision strategy, CPLODE leverages the DE mutation and crossover operators, as detailed in the preceding section, to produce new candidate solutions. Specifically, the “DE/best/1” mutation operator perturbs the current solution based on a scaled difference vector, while the binomial crossover operator, utilizing a dynamic crossover rate C r , combines the mutated solution with the current solution. These steps ensure effective global exploration of the search space. Following the completion of gyration motion and mutation/crossover by all particles in the population, the cryptobiosis mechanism is activated. This mechanism records historical information for each particle throughout the previous iterations, and upon activation, it replaces the current solution with the best-recorded solution if a more effective historical solution is identified. The algorithm continues to iterate through these steps until a termination criterion is satisfied, at which point the algorithm returns the optimal or near-optimal solution. The overall workflow of CPLODE is illustrated in Figure 2.
Algorithm 2 provides the pseudo-code for the CPLODE.
Algorithm 2: Pseudo-code of CPLODE
Parameter initialization: FEs = 0, MaxFEs, t = 0.
Initialize the high-energy particle cluster X.
Calculate the fitness values f(X).
Sort X according to f(X).
Update the current optimal solution X_best.
While FEs < MaxFEs
    Calculate the velocity v(t) for each particle according to Equation (1).
    Calculate the aurora oval walk A_o for each particle according to Equation (2).
    Calculate the weights W1 and W2 according to Equations (4) and (5).
    For each energetic particle do
        Update the particle X_new using Equation (3).
        Apply the DE mutation and crossover operators: update X_new using Equations (8)-(10).
        Calculate the fitness f(X_new).
        FEs = FEs + 1.
    End For
    If f(X_new) < f(X)
        Update X using the cryptobiosis mechanism (Algorithm 1).
    End If
    Sort X according to f(X).
    Update the optimal solution X_best.
    t = t + 1.
End While
Return X_best.
The computational complexity of the proposed CPLODE algorithm depends primarily on population initialization, fitness evaluation, gyration motion, DE-based solution generation, and the cryptobiosis mechanism. Assuming a population size of N, a maximum number of iterations of T, and a solution dimension of D, the overall computational complexity can be approximated as O(CPLODE) ≈ O(T·N) + O(T·N·D) + O(T·N·D) + O(T·N) ≈ O(T·N·D). The cost is therefore dominated by the gyration motion and the DE-based solution generation, giving CPLODE a time complexity of O(T·N·D).

4. Global Optimization Performance Evaluation

This section presents a comprehensive evaluation of the proposed CPLODE algorithm’s performance on a set of 29 benchmark functions from the IEEE CEC 2017 test suite. These experiments aim to provide a rigorous assessment of CPLODE’s optimization capabilities. All tests were conducted under standardized conditions on an Ubuntu 22.04 system using MATLAB R2023b, with a consistent configuration to ensure a fair comparison. The experiments were performed on an AMD Ryzen 9 5900X processor with 64 GB of RAM. To evaluate the algorithm’s performance, each algorithm was executed 30 times, and the average and standard deviation of the results for each benchmark function were recorded. For these experiments, the population size was set to 30, the problem dimension was set to 30, and the maximum number of fitness evaluations was set to 300,000. The following analysis will detail these results and provide an in-depth performance comparison.

4.1. Detailed Description of Benchmark Functions

The performance of the proposed CPLODE algorithm was evaluated using a suite of 29 benchmark functions from the 2017 IEEE Congress on Evolutionary Computation (CEC 2017) test suite [28]. These functions encompass a diverse range of characteristics, categorized into four primary types: unimodal, multimodal, hybrid, and composition functions. This selection of functions ensures a robust evaluation of the algorithm’s optimization capabilities across various landscape complexities. Each function is defined with a specific global optimum, as summarized in Table 1, which provides the function name, its type, and its optimal objective value. These benchmark functions serve as a standard tool to analyze and compare the effectiveness of optimization algorithms.

4.2. Comparative Analysis with Classical Optimization Algorithms

To evaluate the performance of the proposed CPLODE algorithm, comparative experiments were conducted against eight classical optimization algorithms: PLO [25], SMA [29], WOA [30], GWO [31], MFO [32], SCA [33], FA [34], and DE [13]. These algorithms were chosen to provide a broad comparison across different optimization techniques. The experiments were performed on the 29 benchmark functions from the CEC 2017 test suite, and each algorithm was executed 30 times with a population size of 30, a solution dimension of 30, and a maximum of 300,000 function evaluations.
Table 2 summarizes the average (Avg) and standard deviation (Std) of the fitness values obtained by each algorithm on the 29 benchmark functions. Furthermore, Table 2 presents the overall rankings of each algorithm based on the Friedman test, along with the win/tie/loss results of CPLODE against each competitor. As shown in Table 2, CPLODE achieves the best average rank with a score of 1.6552, indicating its superior performance. In detail, CPLODE performs favorably on most functions and obtains results that are equal to or better than those of the other algorithms, especially on complex multimodal and hybrid functions such as F3, F14, F17, and F19, demonstrating its effectiveness in navigating complex optimization landscapes.
Table 3 provides the p-values from the Wilcoxon signed-rank test comparing CPLODE against each of the other algorithms on the 29 benchmark functions. A p-value below 0.05 indicates a statistically significant difference in performance. As shown in Table 3, the p-values are below 0.05 for the majority of the functions, demonstrating that CPLODE significantly outperforms the compared algorithms. In the cases where the p-value exceeds 0.05 (e.g., F12, F14, F21, F22, F23, F25, F26, F28, and F29), CPLODE and the corresponding algorithm deliver results of comparable quality.
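For reference, both statistical procedures used here are available in SciPy; the sketch below illustrates the workflow on placeholder result arrays (30 runs per algorithm on a single function), not on the actual experimental data.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(1)
# Placeholder fitness results: 30 independent runs for three algorithms.
cplode = rng.normal(450, 30, 30)
plo = rng.normal(475, 11, 30)
de = rng.normal(492, 10, 30)

# Friedman test across algorithms (the basis of the average ranks in Table 2).
stat_f, p_friedman = friedmanchisquare(cplode, plo, de)

# Pairwise Wilcoxon signed-rank test (the basis of the p-values in Table 3);
# p < 0.05 indicates a statistically significant difference.
stat_w, p_wilcoxon = wilcoxon(cplode, plo)
print(p_friedman, p_wilcoxon)
```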
The results presented in Table 2 and Table 3 demonstrate that CPLODE not only achieves a higher average rank but also exhibits a robust and consistent performance when compared to the selected classical optimization algorithms in these global optimization experiments.
Figure 3 illustrates the convergence behavior of CPLODE and the other comparison algorithms across several representative benchmark functions (F2, F4, F6, F7, F9, F11, F23, F25, and F29). The horizontal axis represents the number of fitness evaluations (FEs) performed, while the vertical axis shows the best fitness value achieved by each algorithm at each evaluation. The legend at the bottom of the figure identifies each algorithm.
A visual analysis of the convergence curves reveals that CPLODE (the red line with circles) consistently achieves a faster convergence rate and reaches better fitness values than the other algorithms. Notably, for most of the shown functions, CPLODE exhibits a steep descent in fitness during the initial evaluations, indicating rapid convergence and strong exploitation capacity. The red lines fall below the other colored lines across nearly all of the functions, showing that CPLODE navigates the search space effectively and escapes local optima to locate more promising solutions than the comparison algorithms. On the more challenging functions, such as F7, F9, and F29, the other algorithms are more prone to stagnating at local optima and converge much more slowly than CPLODE. These observations indicate that CPLODE combines stronger exploration of the solution space with better exploitation, achieving faster and more reliable convergence.

5. Application to Feature Selection

This section explores the application of the proposed CPLODE algorithm to feature selection problems. Feature selection is a critical task in machine learning, aimed at identifying the most relevant features of a dataset, thereby reducing dimensionality and improving model performance. Because CPLODE is designed for continuous domains while feature selection is a discrete problem, we employed a binary encoding strategy: the problem's upper and lower bounds are constrained to the interval [0, 1], and Equation (11) determines the selection status of each feature. This enables CPLODE to operate effectively within the binary search space of feature selection.
$$X(i,j) = \begin{cases} 0, & X(i,j) < 0.5 \\ 1, & X(i,j) \ge 0.5 \end{cases} \tag{11}$$
To transition from the continuous search space of CPLODE to the binary feature selection space, each dimension is converted using a threshold of 0.5. Specifically, if a dimension’s value is greater than or equal to 0.5, the corresponding feature is selected; otherwise, it is not selected. This process maps the continuous space to a binary selection space.
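In code, this thresholding is a one-line operation; a sketch in NumPy:

```python
import numpy as np

def binarize(x, threshold=0.5):
    """Map a continuous CPLODE position in [0, 1]^n to a feature mask (Eq. (11))."""
    return (np.asarray(x) >= threshold).astype(int)

# Example: positions at or above 0.5 select the corresponding feature.
print(binarize([0.12, 0.73, 0.50, 0.31]))   # -> [0 1 1 0]
```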
In this study, we use the K-nearest neighbors (KNN) classifier to evaluate the quality of the selected feature subsets and use the following fitness function, which seeks to simultaneously minimize the classification error and the number of selected features:
$$Fitness = (1 - \mu) \times E + \mu \times \frac{l}{L} \tag{12}$$
where $E$ is the classification error rate, $l$ is the number of selected features, $L$ is the total number of features, and $\mu \in [0, 1]$ is a constant controlling the trade-off between the error rate and the size of the selected subset. Because we primarily focus on the accuracy of the selected feature subset, we set $\mu$ to 0.05 to prioritize error minimization; this assigns a weight of 0.95 to the classification error and only 0.05 to the number of selected features.
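A sketch of this fitness function, assuming scikit-learn's KNeighborsClassifier with cross-validated accuracy (the text specifies KNN and 10-fold cross-validation, but k = 5 and the helper's name are our assumptions):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fs_fitness(mask, X, y, mu=0.05, k=5, cv=10):
    """Weighted sum of classification error and feature ratio (Eq. (12))."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():                       # an empty subset is invalid
        return 1.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=k),
                          X[:, mask], y, cv=cv).mean()
    error = 1.0 - acc                        # E: classification error rate
    ratio = mask.sum() / mask.size           # l / L: fraction of features kept
    return (1.0 - mu) * error + mu * ratio   # mu = 0.05 prioritizes accuracy
```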

5.1. Detailed Description of Datasets

To evaluate the performance of the proposed CPLODE algorithm for feature selection, experiments were conducted on ten datasets selected from the UCI Machine Learning Repository. These datasets represent a range of complexity with varying numbers of samples, features, and classes. The number of samples in these datasets ranges from 72 to 2310, while the number of features varies from 8 to 7130. These variations ensure that the performance of the algorithm is tested across diverse feature selection scenarios, from low-dimensional to high-dimensional data and with different class distributions. Table 4 provides a detailed description of each dataset used in this study, including the dataset name, number of samples, number of features, and number of classes.

5.2. Feature Selection Results and Discussion

This subsection presents the experimental results of CPLODE for feature selection, compared with five well-known binary metaheuristic algorithms: BPSO [35], BGSA [36], BALO [37], BBA [38], and BSSA [39]. These experiments were conducted on the ten real-world datasets described in Section 5.1. All algorithms were run with a population size of 30 and a maximum of 1000 iterations. To ensure robustness and avoid bias, a 10-fold cross-validation technique was used in all experiments. The detailed experimental results are shown in Table 5 and Table 6.
Table 5 presents the average classification error rates, with standard deviations in parentheses, obtained by each algorithm on each dataset. CPLODE achieves the lowest error rates on the majority of datasets, demonstrating its superior performance in feature selection. Notably, on datasets such as "Hepatitis_full_data" and "Segment", the error rates of CPLODE are well below those of the other binary metaheuristic algorithms. While some algorithms, such as BPSO and BSSA, perform well on datasets like "Leukemia", their results are not consistently better than CPLODE's across all datasets. On the "Heart" dataset, CPLODE, BGSA, and BSSA exhibited similar performances, and all algorithms outperformed BBA, which performed worst. These results indicate that CPLODE achieves better accuracy on many of the datasets considered.
Table 6 presents the average number of selected features, with standard deviations in parentheses, achieved by each algorithm on each dataset. While CPLODE shows a competitive ability to identify relevant features, it does not always select the smallest feature subsets. For example, BGSA selects fewer features on the "Hepatitis_full_data" and "Leukemia" datasets; however, this lower feature count comes at the expense of higher error rates, as shown in Table 5. Overall, CPLODE tends to select feature subsets that strike a good balance between classification performance and feature reduction, even though it does not always select the absolute minimum number of features.
In summary, the experimental results demonstrate that the proposed CPLODE algorithm provides a highly competitive performance for feature selection. The proposed CPLODE demonstrates superior performance on most of the datasets used in this study, achieving a better classification performance. These superior results can be attributed to the integration of effective global search mechanisms through DE while employing the cryptobiosis mechanism for population quality control and the gyration motion strategy of PLO for local exploitation.

6. Conclusions

In this work, we have introduced CPLODE, a novel enhancement of the PLO algorithm, achieved through the integration of a cryptobiosis mechanism and DE operators. These modifications were designed to improve the original PLO’s search capabilities. Specifically, we replaced the original particle collision strategy with DE’s mutation and crossover operators, which enables more effective global exploration while also employing a dynamic crossover rate to enhance convergence. Furthermore, the cryptobiosis mechanism was incorporated to refine the greedy selection approach by recording and reusing historically successful solutions.
The performance of CPLODE was assessed on 29 benchmark functions from the CEC 2017 test suite, demonstrating superior performance across diverse fitness landscapes when compared to eight classical optimization algorithms. CPLODE achieved a higher average rank and statistically significant improvements, according to the Wilcoxon signed-rank test, particularly on the more complex functions. Convergence curves further confirmed its faster convergence and better final function values. These results emphasize the effectiveness of the integrated cryptobiosis mechanism and DE operators within CPLODE, confirming its improved search capability.
Moreover, CPLODE was applied to ten real-world datasets for feature selection, showcasing competitive performance by outperforming several well-known binary metaheuristic algorithms on most datasets and achieving a good balance between classification accuracy and feature reduction.
Future research will focus on further refining CPLODE with advanced adaptive mechanisms and exploring its application to a wider range of real-world optimization and feature selection tasks, including comparisons with traditional methods. We also intend to investigate the integration of machine learning and reinforcement learning techniques for more intelligent optimization strategies.

Author Contributions

Y.G., Conceptualization, Software, Data Curation, Investigation, Writing—Original Draft, Project Administration; L.C., Methodology, Writing—Original Draft, Writing—Review and Editing, Validation, Formal Analysis, Supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science and Technology Major Project during the 13th Five-Year Plan under grant number 2016ZX05060004.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dhal, P.; Azad, C. A Comprehensive Survey on Feature Selection in the Various Fields of Machine Learning. Appl. Intell. 2022, 52, 4543–4581.
  2. Dokeroglu, T.; Deniz, A.; Kiziloz, H.E. A Comprehensive Survey on Recent Metaheuristics for Feature Selection. Neurocomputing 2022, 494, 269–296.
  3. Shan, W.; Hu, H.; Cai, Z.; Chen, H.; Liu, H.; Wang, M.; Teng, Y. Multi-Strategies Boosted Mutative Crow Search Algorithm for Global Tasks: Cases of Continuous and Discrete Optimization. J. Bionic Eng. 2022, 19, 1830–1849.
  4. Hu, H.; Shan, W.; Tang, Y.; Heidari, A.A.; Chen, H.; Liu, H.; Wang, M.; Escorcia-Gutierrez, J.; Mansour, R.F.; Chen, J. Horizontal and Vertical Crossover of Sine Cosine Algorithm with Quick Moves for Optimization and Feature Selection. J. Comput. Des. Eng. 2022, 9, 2524–2555.
  5. Peralta-Reyes, E.; Vizarretea-Vasquez, D.; Natividad, R.; Aizpuru, A.; Robles-Gomez, E.; Alanis, C.; Regalado-Mendez, A. Electrochemical Reforming of Glycerol into Hydrogen in a Batch-Stirred Electrochemical Tank Reactor Equipped with Stainless Steel Electrodes: Parametric Optimization, Total Operating Cost, and Life Cycle Assessment. J. Environ. Chem. Eng. 2022, 10, 108108.
  6. Vergara, J.R.; Estévez, P.A. A Review of Feature Selection Methods Based on Mutual Information. Neural Comput. Appl. 2014, 24, 175–186.
  7. Fida, M.A.F.A.; Ahmad, T.; Ntahobari, M. Variance Threshold as Early Screening to Boruta Feature Selection for Intrusion Detection System. In Proceedings of the 2021 13th International Conference on Information & Communication Technology and System (ICTS), Surabaya, Indonesia, 20–21 October 2021; pp. 46–50.
  8. Muthukrishnan, R.; Rohini, R. LASSO: A Feature Selection Technique in Predictive Modeling for Machine Learning. In Proceedings of the 2016 IEEE International Conference on Advances in Computer Applications (ICACA), Coimbatore, India, 24 October 2016; pp. 18–20.
  9. Chen, G.; Chen, J. A Novel Wrapper Method for Feature Selection and Its Applications. Neurocomputing 2015, 159, 219–226.
  10. Maldonado, J.; Riff, M.C.; Neveu, B. A Review of Recent Approaches on Wrapper Feature Selection for Intrusion Detection. Expert Syst. Appl. 2022, 198, 116822.
  11. Li, G.; Zhang, T.; Tsai, C.-Y.; Yao, L.; Lu, Y.; Tang, J. Review of the Metaheuristic Algorithms in Applications: Visual Analysis Based on Bibliometrics (1994–2023). Expert Syst. Appl. 2024, 255, 124857.
  12. Gonçalves, J.F.; de Magalhães Mendes, J.J.; Resende, M.G. A Hybrid Genetic Algorithm for the Job Shop Scheduling Problem. Eur. J. Oper. Res. 2005, 167, 77–95.
  13. Price, K.; Storn, R.M.; Lampinen, J.A. Differential Evolution: A Practical Approach to Global Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006.
  14. Poli, R.; Kennedy, J.; Blackwell, T. Particle Swarm Optimization. Swarm Intell. 2007, 1, 33–57.
  15. Dorigo, M.; Birattari, M.; Stutzle, T. Ant Colony Optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
  16. Gao, J.; Wang, Z.; Lei, Z.; Wang, R.-L.; Wu, Z.; Gao, S. Feature Selection with Clustering Probabilistic Particle Swarm Optimization. Int. J. Mach. Learn. Cybern. 2024, 15, 3599–3617.
  17. Kwakye, B.D.; Li, Y.; Mohamed, H.H.; Baidoo, E.; Asenso, T.Q. Particle Guided Metaheuristic Algorithm for Global Optimization and Feature Selection Problems. Expert Syst. Appl. 2024, 248, 123362.
  18. Hu, H.; Shan, W.; Chen, J.; Xing, L.; Heidari, A.A.; Chen, H.; He, X.; Wang, M. Dynamic Individual Selection and Crossover Boosted Forensic-Based Investigation Algorithm for Global Optimization and Feature Selection. J. Bionic Eng. 2023, 20, 2416–2442.
  19. Askr, H.; Abdel-Salam, M.; Hassanien, A.E. Copula Entropy-Based Golden Jackal Optimization Algorithm for High-Dimensional Feature Selection Problems. Expert Syst. Appl. 2024, 238, 121582.
  20. Lian, J.; Hui, G.; Ma, L.; Zhu, T.; Wu, X.; Heidari, A.A.; Chen, Y.; Chen, H. Parrot Optimizer: Algorithm and Applications to Medical Problems. Comput. Biol. Med. 2024, 172, 108064.
  21. Singh, L.K.; Khanna, M.; Garg, H.; Singh, R. Emperor Penguin Optimization Algorithm- and Bacterial Foraging Optimization Algorithm-Based Novel Feature Selection Approach for Glaucoma Classification from Fundus Images. Soft Comput. 2023, 28, 2431–2467.
  22. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
  23. Huang, X.; Hu, H.; Wang, J.; Yuan, B.; Dai, C.; Ablameyk, S.V. Dynamic Strongly Convex Sparse Operator with Learning Mechanism for Sparse Large-Scale Multi-Objective Optimization. In Proceedings of the 2024 6th International Conference on Data-Driven Optimization of Complex Systems (DOCS), Hangzhou, China, 16–18 August 2024; pp. 121–127.
  24. Hu, H.; Wang, J.; Huang, X.; Ablameyko, S.V. An Integrated Online-Offline Hybrid Particle Swarm Optimization Framework for Medium Scale Expensive Problems. In Proceedings of the 2024 6th International Conference on Data-Driven Optimization of Complex Systems (DOCS), Hangzhou, China, 16–18 August 2024; pp. 25–32.
  25. Yuan, C.; Zhao, D.; Heidari, A.A.; Liu, L.; Chen, Y.; Chen, H. Polar Lights Optimizer: Algorithm and Applications in Image Segmentation and Feature Selection. Neurocomputing 2024, 607, 128427.
  26. Yu, W.-J.; Shen, M.; Chen, W.-N.; Zhan, Z.-H.; Gong, Y.-J.; Lin, Y.; Liu, O.; Zhang, J. Differential Evolution with Two-Level Parameter Adaptation. IEEE Trans. Cybern. 2013, 44, 1080–1099.
  27. Zheng, B.; Chen, Y.; Wang, C.; Heidari, A.A.; Liu, L.; Chen, H. The Moss Growth Optimization (MGO): Concepts and Performance. J. Comput. Des. Eng. 2024, 11, 184–221.
  28. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; Technical Report; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Republic of Korea; Nanyang Technological University: Singapore, 2017.
  29. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime Mould Algorithm: A New Method for Stochastic Optimization. Future Gener. Comput. Syst. 2020, 111, 300–323.
  30. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  31. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  32. Mirjalili, S. Moth-Flame Optimization Algorithm: A Novel Nature-Inspired Heuristic Paradigm. Knowl. Based Syst. 2015, 89, 228–249.
  33. Mirjalili, S. SCA: A Sine Cosine Algorithm for Solving Optimization Problems. Knowl. Based Syst. 2016, 96, 120–133.
  34. Yang, X.-S.; He, X. Firefly Algorithm: Recent Advances and Applications. Int. J. Swarm Intell. 2013, 1, 36–50.
  35. Dara, S.; Banka, H. A Binary PSO Feature Selection Algorithm for Gene Expression Data. In Proceedings of the 2014 International Conference on Advances in Communication and Computing Technologies (ICACACT 2014), Mumbai, India, 10–11 August 2014; pp. 1–6.
  36. Taradeh, M.; Mafarja, M.; Heidari, A.A.; Faris, H.; Aljarah, I.; Mirjalili, S.; Fujita, H. An Evolutionary Gravitational Search-Based Feature Selection. Inform. Sci. 2019, 497, 219–239.
  37. Emary, E.; Zawbaa, H.M.; Hassanien, A.E. Binary Ant Lion Approaches for Feature Selection. Neurocomputing 2016, 213, 54–65.
  38. Mafarja, M.; Heidari, A.A.; Habib, M.; Faris, H.; Thaher, T.; Aljarah, I. Augmented Whale Feature Selection for IoT Attacks: Structure, Analysis and Applications. Future Gener. Comput. Syst. 2020, 112, 18–40.
  39. Shekhawat, S.S.; Sharma, H.; Kumar, S.; Nayyar, A.; Qureshi, B. bSSA: Binary Salp Swarm Algorithm with Hybrid Data Transformation for Feature Selection. IEEE Access 2021, 9, 14867–14882.
Figure 1. Flowchart of PLO.
Figure 2. Flowchart of CPLODE.
Figure 3. Convergence curves of CPLODE on benchmarks with other algorithms.
Table 1. CEC 2017 benchmark functions.

Function | Function Name | Class | Optimum
F1 | Shifted and Rotated Bent Cigar Function | Unimodal | 100
F2 | Shifted and Rotated Zakharov Function | Unimodal | 300
F3 | Shifted and Rotated Rosenbrock's Function | Multimodal | 400
F4 | Shifted and Rotated Rastrigin's Function | Multimodal | 500
F5 | Shifted and Rotated Expanded Scaffer's F6 Function | Multimodal | 600
F6 | Shifted and Rotated Lunacek Bi-Rastrigin Function | Multimodal | 700
F7 | Shifted and Rotated Non-Continuous Rastrigin's Function | Multimodal | 800
F8 | Shifted and Rotated Lévy Function | Multimodal | 900
F9 | Shifted and Rotated Schwefel's Function | Multimodal | 1000
F10 | Hybrid Function 1 (N = 3) | Hybrid | 1100
F11 | Hybrid Function 2 (N = 3) | Hybrid | 1200
F12 | Hybrid Function 3 (N = 3) | Hybrid | 1300
F13 | Hybrid Function 4 (N = 4) | Hybrid | 1400
F14 | Hybrid Function 5 (N = 4) | Hybrid | 1500
F15 | Hybrid Function 6 (N = 4) | Hybrid | 1600
F16 | Hybrid Function 7 (N = 5) | Hybrid | 1700
F17 | Hybrid Function 8 (N = 5) | Hybrid | 1800
F18 | Hybrid Function 9 (N = 5) | Hybrid | 1900
F19 | Hybrid Function 10 (N = 6) | Hybrid | 2000
F20 | Composition Function 1 (N = 3) | Composition | 2100
F21 | Composition Function 2 (N = 3) | Composition | 2200
F22 | Composition Function 3 (N = 4) | Composition | 2300
F23 | Composition Function 4 (N = 4) | Composition | 2400
F24 | Composition Function 5 (N = 5) | Composition | 2500
F25 | Composition Function 6 (N = 5) | Composition | 2600
F26 | Composition Function 7 (N = 6) | Composition | 2700
F27 | Composition Function 8 (N = 6) | Composition | 2800
F28 | Composition Function 9 (N = 3) | Composition | 2900
F29 | Composition Function 10 (N = 3) | Composition | 3000
Table 2. Results of CPLODE and other algorithms on CEC2017.
Table 2. Results of CPLODE and other algorithms on CEC2017.
F1 F2 F3
AvgStdAvgStdAvgStd
CPLODE4.0920 × 1034.1750 × 1033.0002 × 1021.0051 × 10−24.5068 × 1023.4430 × 101
PLO1.1590 × 1042.4939 × 1032.7023 × 1045.6270 × 1034.7492 × 1021.1119 × 101
SMA2.5115 × 1099.8658 × 1083.7557 × 1048.5901 × 1036.1469 × 1025.3174 × 101
WOA2.6952 × 1061.5313 × 1061.4743 × 1056.0775 × 1045.6002 × 1023.6325 × 101
GWO2.1219 × 1091.3846 × 1093.4397 × 1041.0051 × 1047.0155 × 1024.0283 × 102
MFO1.2988 × 10106.9423 × 1097.7996 × 1046.2218 × 1041.1193 × 1036.5315 × 102
SCA1.2347 × 10102.1109 × 1093.5606 × 1045.4682 × 1031.3672 × 1032.2062 × 102
FA1.4659 × 10101.4882 × 1096.3838 × 1048.0840 × 1031.4042 × 1031.5042 × 102
DE2.3830 × 1034.8570 × 1032.1028 × 1045.1500 × 1034.9188 × 1021.0679 × 101
F4 F5 F6
AvgStdAvgStdAvgStd
CPLODE5.4831 × 1027.8331 × 1006.0000 × 1029.1994 × 10−77.8753 × 1028.8299 × 100
PLO5.5092 × 1027.4005 × 1006.0412 × 1026.4002 × 10−18.2325 × 1029.5400 × 100
SMA7.1420 × 1022.9243 × 1016.4455 × 1028.0890 × 1001.0707 × 1034.4764 × 101
WOA7.6420 × 1024.8394 × 1016.6890 × 1029.1396 × 1001.2308 × 1037.8906 × 101
GWO6.0427 × 1022.7433 × 1016.0927 × 1023.9375 × 1008.6044 × 1024.4426 × 101
MFO7.1517 × 1026.3055 × 1016.4193 × 1021.1270 × 1011.1255 × 1032.3548 × 102
SCA7.7692 × 1021.9626 × 1016.4836 × 1024.2782 × 1001.1213 × 1033.1481 × 101
FA7.6207 × 1029.3450 × 1006.4410 × 1022.6151 × 1001.3824 × 1033.1254 × 101
DE6.0933 × 1028.3500 × 1006.0000 × 1020.0000 × 1008.4291 × 1027.5915 × 100
F7 F8 F9
AvgStdAvgStdAvgStd
CPLODE8.4608 × 1028.3623 × 1009.0020 × 1022.6584 × 10−13.2070 × 1032.6162 × 102
PLO8.5204 × 1029.2515 × 1001.2746 × 1031.3036 × 1023.2575 × 1032.5941 × 102
SMA9.7055 × 1022.6769 × 1015.5826 × 1031.0568 × 1035.3672 × 1035.6679 × 102
WOA1.0136 × 1035.8658 × 1018.1838 × 1032.3689 × 1036.2527 × 1036.5665 × 102
GWO8.8678 × 1021.9861 × 1011.7494 × 1036.1671 × 1024.1153 × 1039.3870 × 102
MFO1.0139 × 1035.2970 × 1017.1350 × 1031.9884 × 1035.4931 × 1037.9595 × 102
SCA1.0478 × 1031.5537 × 1015.1287 × 1031.0492 × 1038.1431 × 1033.2193 × 102
FA1.0509 × 1031.4648 × 1015.4305 × 1035.1404 × 1027.9251 × 1033.8444 × 102
DE9.0849 × 1021.0072 × 1019.0000 × 1029.6743 × 10−145.9854 × 1031.9299 × 102
F10 F11 F12
AvgStdAvgStdAvgStd
CPLODE1.1229 × 1039.1014 × 1001.2004 × 1058.5831 × 1041.8444 × 1042.1474 × 104
PLO1.1581 × 1031.6584 × 1015.0177 × 1052.4604 × 1051.4198 × 1045.7243 × 103
SMA1.5322 × 1031.0156 × 1021.2983 × 1087.5032 × 1071.5058 × 1061.6082 × 106
WOA1.4913 × 1039.8025 × 1014.8594 × 1073.1101 × 1071.3602 × 1059.4765 × 104
GWO1.8226 × 1036.6599 × 1021.1275 × 1083.2956 × 1081.3704 × 1073.9247 × 107
MFO5.9355 × 1035.1442 × 1033.5650 × 1088.1328 × 1083.1216 × 1087.5404 × 108
SCA2.0945 × 1032.2808 × 1021.1051 × 1092.9299 × 1083.4755 × 1081.2259 × 108
FA3.2599 × 1035.7410 × 1021.4017 × 1093.0724 × 1086.1809 × 1081.7734 × 108
DE1.1611 × 1032.0851 × 1011.5862 × 1068.2773 × 1052.9151 × 1041.1875 × 104
F13 F14 F15
AvgStdAvgStdAvgStd
CPLODE1.4384 × 1041.0938 × 1041.2813 × 1041.3823 × 1041.9560 × 1031.3213 × 102
PLO7.9649 × 1034.7388 × 1034.5866 × 1031.2584 × 1031.9970 × 1031.1299 × 102
SMA1.8264 × 1059.5737 × 1042.0160 × 1049.2830 × 1032.8795 × 1033.3202 × 102
WOA5.4997 × 1057.1671 × 1056.1537 × 1045.5170 × 1043.5053 × 1034.9078 × 102
GWO1.0984 × 1051.8864 × 1053.5527 × 1058.3062 × 1052.3715 × 1032.8305 × 102
MFO2.7980 × 1056.5096 × 1055.6788 × 1044.2509 × 1043.0976 × 1033.6698 × 102
SCA1.1542 × 1056.3321 × 1041.1045 × 1078.4052 × 1063.6068 × 1032.2862 × 102
FA1.8689 × 1057.2559 × 1046.2655 × 1073.1177 × 1073.4083 × 1032.1965 × 102
DE6.4866 × 1045.4400 × 1047.6915 × 1034.1350 × 1032.0677 × 1031.3933 × 102
F16 F17 F18
AvgStdAvgStdAvgStd
CPLODE1.7990 × 1037.6248 × 1011.8955 × 1052.7639 × 1051.7549 × 1041.6480 × 104
PLO1.8186 × 1033.9754 × 1011.3635 × 1057.8731 × 1042.9860 × 1036.7273 × 102
SMA2.3156 × 1032.1345 × 1024.8558 × 1053.3512 × 1055.0410 × 1055.3037 × 105
WOA2.6334 × 1032.8446 × 1023.3275 × 1062.9399 × 1062.2655 × 1062.8705 × 106
GWO1.9486 × 1031.1480 × 1025.9764 × 1055.7548 × 1058.0009 × 1051.6674 × 106
MFO2.5460 × 1033.1139 × 1023.1751 × 1067.8848 × 1061.2902 × 1073.6770 × 107
SCA2.3934 × 1031.5338 × 1023.3788 × 1061.7657 × 1063.3238 × 1072.1205 × 107
FA2.4670 × 1031.2003 × 1024.3534 × 1061.9992 × 1069.2454 × 1073.1871 × 107
DE1.8421 × 1035.1530 × 1013.1519 × 1051.7642 × 1058.2470 × 1033.5513 × 103
F19 F20 F21
AvgStdAvgStdAvgStd
CPLODE2.1010 × 1036.2362 × 1012.3480 × 1039.5720 × 1003.4641 × 1031.3586 × 103
PLO2.1670 × 1035.1289 × 1012.3505 × 1037.3562 × 1002.3855 × 1034.1707 × 102
SMA2.4134 × 1031.2866 × 1022.4785 × 1032.4241 × 1013.2081 × 1031.4174 × 103
WOA2.6641 × 1032.0926 × 1022.5797 × 1036.7176 × 1016.2597 × 1032.2427 × 103
GWO2.4343 × 1031.2789 × 1022.3797 × 1032.6664 × 1014.5712 × 1031.3470 × 103
MFO2.6919 × 1032.6365 × 1022.5126 × 1034.5540 × 1016.5400 × 1039.2570 × 102
SCA2.6031 × 1031.4985 × 1022.5514 × 1032.2652 × 1017.6999 × 1032.5971 × 103
FA2.5986 × 1037.5334 × 1012.5382 × 1031.5345 × 1013.8226 × 1031.3006 × 102
DE2.1400 × 1037.0218 × 1012.4102 × 1039.0096 × 1003.7627 × 1031.6700 × 103
| Algorithm | F22 Avg | F22 Std | F23 Avg | F23 Std | F24 Avg | F24 Std |
|---|---|---|---|---|---|---|
| CPLODE | 2.7010 × 10^3 | 7.8806 × 10^0 | 2.8693 × 10^3 | 6.4910 × 10^0 | 2.8878 × 10^3 | 5.8163 × 10^−1 |
| PLO | 2.6996 × 10^3 | 8.5181 × 10^0 | 2.8689 × 10^3 | 7.2496 × 10^0 | 2.8847 × 10^3 | 1.2452 × 10^0 |
| SMA | 2.8511 × 10^3 | 3.1216 × 10^1 | 3.0080 × 10^3 | 3.0863 × 10^1 | 3.0205 × 10^3 | 5.3642 × 10^1 |
| WOA | 3.0359 × 10^3 | 8.6584 × 10^1 | 3.1487 × 10^3 | 8.0901 × 10^1 | 2.9455 × 10^3 | 3.1040 × 10^1 |
| GWO | 2.7461 × 10^3 | 3.6003 × 10^1 | 2.9220 × 10^3 | 4.7272 × 10^1 | 2.9723 × 10^3 | 2.5394 × 10^1 |
| MFO | 2.8310 × 10^3 | 3.4083 × 10^1 | 2.9962 × 10^3 | 3.4826 × 10^1 | 3.3300 × 10^3 | 4.3061 × 10^2 |
| SCA | 2.9926 × 10^3 | 2.3532 × 10^1 | 3.1572 × 10^3 | 3.5056 × 10^1 | 3.1873 × 10^3 | 5.5685 × 10^1 |
| FA | 2.9135 × 10^3 | 1.1764 × 10^1 | 3.0649 × 10^3 | 1.1714 × 10^1 | 3.5444 × 10^3 | 1.1607 × 10^2 |
| DE | 2.7561 × 10^3 | 8.9800 × 10^0 | 2.9626 × 10^3 | 1.2120 × 10^1 | 2.8874 × 10^3 | 3.4067 × 10^−1 |
| Algorithm | F25 Avg | F25 Std | F26 Avg | F26 Std | F27 Avg | F27 Std |
|---|---|---|---|---|---|---|
| CPLODE | 4.1335 × 10^3 | 9.3910 × 10^1 | 3.2053 × 10^3 | 1.0043 × 10^1 | 3.1550 × 10^3 | 6.6740 × 10^1 |
| PLO | 3.9283 × 10^3 | 4.7938 × 10^2 | 3.2029 × 10^3 | 4.0598 × 10^0 | 3.2121 × 10^3 | 9.0455 × 10^0 |
| SMA | 5.2084 × 10^3 | 5.6927 × 10^2 | 3.2563 × 10^3 | 2.0406 × 10^1 | 3.4164 × 10^3 | 4.4305 × 10^1 |
| WOA | 7.7069 × 10^3 | 9.8268 × 10^2 | 3.3737 × 10^3 | 1.0329 × 10^2 | 3.3130 × 10^3 | 3.1227 × 10^1 |
| GWO | 4.5991 × 10^3 | 3.2501 × 10^2 | 3.2435 × 10^3 | 2.1859 × 10^1 | 3.4256 × 10^3 | 7.9194 × 10^1 |
| MFO | 6.0445 × 10^3 | 6.8971 × 10^2 | 3.2668 × 10^3 | 3.0744 × 10^1 | 4.4157 × 10^3 | 9.8818 × 10^2 |
| SCA | 6.9434 × 10^3 | 2.7207 × 10^2 | 3.4025 × 10^3 | 3.8445 × 10^1 | 3.8215 × 10^3 | 1.3113 × 10^2 |
| FA | 6.5170 × 10^3 | 1.5945 × 10^2 | 3.3344 × 10^3 | 1.4804 × 10^1 | 3.8905 × 10^3 | 8.7135 × 10^1 |
| DE | 4.6715 × 10^3 | 6.3597 × 10^1 | 3.2043 × 10^3 | 3.6717 × 10^0 | 3.1860 × 10^3 | 4.7932 × 10^1 |
| Algorithm | F28 Avg | F28 Std | F29 Avg | F29 Std |
|---|---|---|---|---|
| CPLODE | 3.4382 × 10^3 | 9.2089 × 10^1 | 1.0096 × 10^4 | 3.3843 × 10^3 |
| PLO | 3.4618 × 10^3 | 5.5133 × 10^1 | 2.0740 × 10^4 | 4.9557 × 10^3 |
| SMA | 4.0386 × 10^3 | 2.0859 × 10^2 | 5.2833 × 10^6 | 2.8997 × 10^6 |
| WOA | 4.8405 × 10^3 | 3.4810 × 10^2 | 9.3134 × 10^6 | 7.1522 × 10^6 |
| GWO | 3.7806 × 10^3 | 1.6576 × 10^2 | 4.5862 × 10^6 | 3.5436 × 10^6 |
| MFO | 4.2868 × 10^3 | 3.1200 × 10^2 | 6.1200 × 10^5 | 8.0878 × 10^5 |
| SCA | 4.6218 × 10^3 | 2.1985 × 10^2 | 7.0732 × 10^7 | 2.6337 × 10^7 |
| FA | 4.7160 × 10^3 | 1.2109 × 10^2 | 9.4462 × 10^7 | 3.0941 × 10^7 |
| DE | 3.5286 × 10^3 | 7.2288 × 10^1 | 1.2285 × 10^4 | 3.3857 × 10^3 |
Overall Rank

| Algorithm | Rank | +/=/− | Avg |
|---|---|---|---|
| CPLODE | 1 | ~ | 1.6552 |
| PLO | 2 | 12/14/3 | 1.8621 |
| SMA | 5 | 28/1/0 | 5.1379 |
| WOA | 7 | 29/0/0 | 7.0000 |
| GWO | 4 | 29/0/0 | 4.4483 |
| MFO | 6 | 29/0/0 | 6.7586 |
| SCA | 8 | 29/0/0 | 7.5517 |
| FA | 9 | 29/0/0 | 7.7241 |
| DE | 3 | 19/5/5 | 2.8621 |
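The Avg column of the rank summary is obtained by ranking the nine algorithms on each of the 29 functions by mean error and averaging those per-function ranks. A minimal sketch in Python, assuming the per-function means above have been collected into a 29 × 9 array (the file name cec2017_means.csv is hypothetical, not from the paper):

```python
import numpy as np
from scipy.stats import rankdata

algos = ["CPLODE", "PLO", "SMA", "WOA", "GWO", "MFO", "SCA", "FA", "DE"]
# means[i, j] = mean error of algorithm j on function i (29 x 9),
# filled from the Avg columns above; the CSV file is hypothetical.
means = np.loadtxt("cec2017_means.csv", delimiter=",")

ranks = rankdata(means, axis=1)   # rank within each row; 1 = best, ties averaged
avg_rank = ranks.mean(axis=0)     # average rank per algorithm across functions
for name, r in sorted(zip(algos, avg_rank), key=lambda t: t[1]):
    print(f"{name:7s} average rank = {r:.4f}")
```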
Table 3. The p-values of CPLODE versus other algorithms on CEC2017.

| Function | PLO | SMA | WOA | GWO | MFO | SCA | FA | DE |
|---|---|---|---|---|---|---|---|---|
| F1 | 5.752 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.319 × 10^−2 |
| F2 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 |
| F3 | 7.271 × 10^−3 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.127 × 10^−5 |
| F4 | 1.414 × 10^−1 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 |
| F5 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.953 × 10^−3 |
| F6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 2.127 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 |
| F7 | 2.105 × 10^−3 | 1.734 × 10^−6 | 1.734 × 10^−6 | 2.353 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 |
| F8 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 6.104 × 10^−5 |
| F9 | 7.971 × 10^−1 | 1.734 × 10^−6 | 1.734 × 10^−6 | 5.752 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 |
| F10 | 1.921 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 6.339 × 10^−6 |
| F11 | 3.882 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.921 × 10^−6 |
| F12 | 7.655 × 10^−1 | 1.734 × 10^−6 | 1.921 × 10^−6 | 1.921 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 2.183 × 10^−2 |
| F13 | 3.379 × 10^−3 | 2.353 × 10^−6 | 1.734 × 10^−6 | 1.251 × 10^−4 | 1.973 × 10^−5 | 1.734 × 10^−6 | 1.921 × 10^−6 | 5.216 × 10^−6 |
| F14 | 5.710 × 10^−2 | 4.492 × 10^−2 | 1.734 × 10^−6 | 1.238 × 10^−5 | 5.216 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 4.653 × 10^−1 |
| F15 | 2.134 × 10^−1 | 1.734 × 10^−6 | 1.734 × 10^−6 | 8.466 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 6.424 × 10^−3 |
| F16 | 8.972 × 10^−2 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.799 × 10^−5 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 6.836 × 10^−3 |
| F17 | 4.908 × 10^−1 | 1.965 × 10^−3 | 2.603 × 10^−6 | 4.534 × 10^−4 | 5.706 × 10^−4 | 1.734 × 10^−6 | 1.734 × 10^−6 | 2.765 × 10^−3 |
| F18 | 2.843 × 10^−5 | 3.182 × 10^−6 | 1.734 × 10^−6 | 4.729 × 10^−6 | 1.127 × 10^−5 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.657 × 10^−2 |
| F19 | 4.897 × 10^−4 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 4.277 × 10^−2 |
| F20 | 1.470 × 10^−1 | 1.734 × 10^−6 | 1.734 × 10^−6 | 3.182 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 |
| F21 | 1.020 × 10^−1 | 8.451 × 10^−1 | 2.843 × 10^−5 | 6.035 × 10^−3 | 1.734 × 10^−6 | 1.494 × 10^−5 | 6.035 × 10^−3 | 8.130 × 10^−1 |
| F22 | 8.290 × 10^−1 | 1.734 × 10^−6 | 1.734 × 10^−6 | 6.984 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 |
| F23 | 7.189 × 10^−1 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.921 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 |
| F24 | 2.127 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 2.127 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 3.162 × 10^−3 |
| F25 | 3.600 × 10^−1 | 3.182 × 10^−6 | 1.734 × 10^−6 | 6.339 × 10^−6 | 1.921 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 |
| F26 | 4.779 × 10^−1 | 1.734 × 10^−6 | 1.734 × 10^−6 | 2.127 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 8.130 × 10^−1 |
| F27 | 8.944 × 10^−4 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 6.511 × 10^−2 |
| F28 | 7.521 × 10^−2 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.921 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 8.944 × 10^−4 |
| F29 | 1.921 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 1.734 × 10^−6 | 5.193 × 10^−2 |
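The p-values in Table 3 come from pairwise Wilcoxon signed-rank tests over the independent runs on each function, and the +/=/− counts in the rank summary follow from thresholding them at a significance level. A minimal sketch, assuming each algorithm's 30 final errors per function are stored in plain-text files (the file naming here is illustrative, not the authors'):

```python
import numpy as np
from scipy.stats import wilcoxon

ALPHA = 0.05
wins = ties = losses = 0
for f in range(1, 30):                      # F1..F29
    x = np.loadtxt(f"CPLODE_F{f}.txt")      # 30 runs of CPLODE (hypothetical file)
    y = np.loadtxt(f"PLO_F{f}.txt")         # 30 runs of the competitor
    _, p = wilcoxon(x, y)                   # paired, two-sided signed-rank test
    if p >= ALPHA:
        ties += 1                           # '=': difference not significant
    elif x.mean() < y.mean():
        wins += 1                           # '+': CPLODE significantly better
    else:
        losses += 1                         # '−': CPLODE significantly worse
print(f"CPLODE vs PLO  +/=/- = {wins}/{ties}/{losses}")
```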
Table 4. Detailed description of datasets.

| Dataset | Samples | Features | Classes |
|---|---|---|---|
| Breast cancer | 286 | 9 | 2 |
| Heart EW | 270 | 13 | 2 |
| Lymphography | 148 | 18 | 4 |
| Hepatitis_full_data | 155 | 19 | 2 |
| Glass | 214 | 9 | 6 |
| Heart | 303 | 13 | 5 |
| Thyroid_2class | 187 | 8 | 2 |
| Leukemia | 72 | 7130 | 2 |
| Vote | 534 | 16 | 2 |
| Segment | 2310 | 18 | 7 |
Table 5. Classification error rates of different algorithms (mean, with standard deviation in parentheses).

| Dataset | CPLODE | BPSO | BGSA | BALO | BBA | BSSA |
|---|---|---|---|---|---|---|
| Breast cancer | 9.21 × 10^−2 (7.68 × 10^−3) | 9.76 × 10^−2 (9.52 × 10^−3) | 9.91 × 10^−2 (1.08 × 10^−2) | 9.36 × 10^−2 (1.29 × 10^−2) | 1.62 × 10^−1 (1.38 × 10^−2) | 9.42 × 10^−2 (1.36 × 10^−2) |
| Heart EW | 7.68 × 10^−2 (2.32 × 10^−2) | 8.10 × 10^−2 (4.21 × 10^−2) | 8.75 × 10^−2 (3.67 × 10^−3) | 8.36 × 10^−2 (3.52 × 10^−2) | 1.72 × 10^−1 (2.58 × 10^−2) | 8.32 × 10^−2 (3.14 × 10^−3) |
| Lymphography | 1.82 × 10^−2 (5.12 × 10^−3) | 2.42 × 10^−2 (4.23 × 10^−3) | 2.38 × 10^−2 (3.25 × 10^−3) | 1.97 × 10^−2 (2.46 × 10^−3) | 6.52 × 10^−2 (1.23 × 10^−3) | 2.01 × 10^−2 (4.02 × 10^−3) |
| Hepatitis_full_data | 1.28 × 10^−2 (1.47 × 10^−3) | 1.93 × 10^−2 (3.23 × 10^−2) | 2.57 × 10^−2 (5.36 × 10^−3) | 9.42 × 10^−3 (1.86 × 10^−3) | 2.37 × 10^−1 (8.45 × 10^−2) | 1.82 × 10^−2 (4.89 × 10^−3) |
| Glass | 9.86 × 10^−2 (6.49 × 10^−2) | 1.17 × 10^−1 (4.25 × 10^−2) | 1.07 × 10^−1 (4.99 × 10^−2) | 1.21 × 10^−1 (5.44 × 10^−2) | 2.92 × 10^−1 (1.08 × 10^−1) | 1.06 × 10^−1 (4.93 × 10^−2) |
| Heart | 6.18 × 10^−2 (3.54 × 10^−2) | 7.03 × 10^−2 (4.43 × 10^−2) | 5.58 × 10^−2 (5.21 × 10^−2) | 6.97 × 10^−2 (3.39 × 10^−2) | 2.63 × 10^−1 (8.96 × 10^−2) | 6.28 × 10^−2 (3.49 × 10^−2) |
| Thyroid_2class | 1.89 × 10^−1 (5.36 × 10^−2) | 2.01 × 10^−1 (6.49 × 10^−2) | 2.14 × 10^−1 (6.85 × 10^−2) | 2.08 × 10^−1 (7.31 × 10^−2) | 3.21 × 10^−1 (8.81 × 10^−2) | 2.17 × 10^−1 (7.59 × 10^−2) |
| Leukemia | 0.00 × 10^0 (0.00 × 10^0) | 0.00 × 10^0 (0.00 × 10^0) | 1.05 × 10^−1 (4.21 × 10^−2) | 0.00 × 10^0 (0.00 × 10^0) | 1.65 × 10^−2 (4.91 × 10^−2) | 0.00 × 10^0 (0.00 × 10^0) |
| Vote | 2.13 × 10^−2 (1.62 × 10^−2) | 2.76 × 10^−2 (3.68 × 10^−3) | 2.437 × 10^−2 (9.86 × 10^−2) | 8.96 × 10^−2 (2.83 × 10^−2) | 3.72 × 10^−2 (3.13 × 10^−2) | 3.92 × 10^−2 (4.07 × 10^−3) |
| Segment | 2.26 × 10^−2 (3.65 × 10^−3) | 2.35 × 10^−2 (6.58 × 10^−3) | 2.48 × 10^−2 (7.38 × 10^−3) | 2.42 × 10^−2 (6.74 × 10^−3) | 4.23 × 10^−2 (9.46 × 10^−3) | 2.88 × 10^−2 (5.33 × 10^−3) |
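The error rates in Table 5 are produced by a wrapper evaluation: each candidate feature mask is scored by training a classifier on only the selected columns. A minimal sketch of such a fitness function, assuming a 5-nearest-neighbor classifier with 10-fold cross-validation and the weighted objective f = α·error + (1 − α)·|selected|/|all| that is common in wrapper FS studies (the paper's exact classifier and weights may differ):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fs_fitness(mask: np.ndarray, X: np.ndarray, y: np.ndarray,
               alpha: float = 0.99) -> float:
    """Wrapper fitness: weighted sum of classification error and the
    fraction of selected features (lower is better)."""
    if not mask.any():                         # reject empty feature subsets
        return 1.0
    knn = KNeighborsClassifier(n_neighbors=5)
    acc = cross_val_score(knn, X[:, mask], y, cv=10).mean()
    error = 1.0 - acc
    ratio = mask.sum() / mask.size             # proportion of features kept
    return alpha * error + (1.0 - alpha) * ratio
```

With α close to 1 the objective is dominated by accuracy, so feature reduction acts mainly as a tie-breaker, which matches the pattern in Tables 5 and 6 where the stronger methods cut features without sacrificing accuracy.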
Table 6. Number of selected features by different algorithms (mean, with standard deviation in parentheses).

| Dataset | CPLODE | BPSO | BGSA | BALO | BBA | BSSA |
|---|---|---|---|---|---|---|
| Breast cancer | 4.2 (1.35) | 4.8 (1.47) | 5.8 (1.08) | 4.8 (0.66) | 5.3 (0.93) | 5.8 (1.36) |
| Heart EW | 5.3 (0.85) | 5.5 (1.44) | 6.2 (0.96) | 5.3 (0.84) | 4.4 (1.21) | 5.7 (0.93) |
| Lymphography | 4.2 (0.99) | 4.4 (0.68) | 4.3 (1.17) | 4.8 (1.23) | 8.4 (2.67) | 4.8 (2.05) |
| Hepatitis_full_data | 5.6 (2.32) | 6.3 (1.25) | 4.2 (1.71) | 6.3 (2.21) | 6.1 (2.37) | 5.3 (2.56) |
| Glass | 3.7 (0.92) | 3.9 (0.74) | 4.6 (1.36) | 3.9 (0.67) | 4.1 (1.53) | 4.3 (0.87) |
| Heart | 6.4 (0.63) | 6.1 (0.78) | 6.4 (0.62) | 5.9 (1.46) | 5.7 (1.05) | 6.2 (1.17) |
| Thyroid_2class | 4.1 (0.73) | 5.5 (1.28) | 4.3 (0.76) | 4.0 (1.20) | 3.9 (1.34) | 4.2 (1.37) |
| Leukemia | 1879.3 (26.32) | 5236.4 (67.45) | 2531.7 (23.32) | 2657.0 (46.44) | 3146.3 (38.52) | 2768.1 (94.68) |
| Vote | 3.8 (1.62) | 3.2 (1.33) | 3.1 (1.67) | 2.7 (1.24) | 6.3 (1.35) | 4.4 (2.50) |
| Segment | 5.2 (0.83) | 6.3 (0.94) | 5.4 (0.99) | 5.3 (1.48) | 7.6 (1.76) | 6.2 (1.66) |
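Because the feature masks counted above are binary while CPLODE searches a continuous space, a binarization step is needed before evaluating a position as a feature subset. The sketch below uses the classic S-shaped (sigmoid) transfer function popularized by binary PSO; it is illustrative only, since the paper's own binarization scheme is the one given in its methods section:

```python
import numpy as np

def binarize(position: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Map a continuous position vector to a 0/1 feature mask via the
    S-shaped (sigmoid) transfer function from binary PSO."""
    prob = 1.0 / (1.0 + np.exp(-position))     # selection probability per dimension
    return rng.random(position.size) < prob    # Bernoulli draw per feature

rng = np.random.default_rng(42)
mask = binarize(rng.normal(size=13), rng)      # e.g., 13 features as in Heart EW
print(mask.astype(int))                        # 0/1 mask over the features
```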