Article

Multi-Strategy Improved Sand Cat Swarm Optimization: Global Optimization and Feature Selection

1
School of Mechanical and Electrical Engineering, Guizhou Normal University, Guiyang 550025, China
2
Technical Engineering Center of Manufacturing Service and Knowledge Engineering, Guizhou Normal University, Guiyang 550025, China
3
State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
*
Author to whom correspondence should be addressed.
Biomimetics 2023, 8(6), 492; https://doi.org/10.3390/biomimetics8060492
Submission received: 30 September 2023 / Revised: 14 October 2023 / Accepted: 16 October 2023 / Published: 18 October 2023
(This article belongs to the Special Issue Bionic Artificial Neural Networks and Artificial Intelligence)

Abstract

The sand cat is a creature well adapted to living in the desert. Sand cat swarm optimization (SCSO) is a biomimetic swarm intelligence algorithm inspired by the lifestyle of the sand cat. Although the SCSO has achieved good optimization results, it still has drawbacks, such as being prone to falling into local optima, low search efficiency, and limited optimization accuracy arising from limitations in some innate biological conditions. To address these shortcomings, this paper proposes three improved strategies: a novel opposition-based learning strategy, a novel exploration mechanism, and a biological elimination update mechanism. Based on the original SCSO, a multi-strategy improved sand cat swarm optimization (MSCSO) is proposed. To verify the effectiveness of the proposed algorithm, the MSCSO algorithm is applied to two types of problems: global optimization and feature selection. The global optimization tests include twenty non-fixed dimensional functions (Dim = 30, 100, and 500) and ten fixed dimensional functions, while feature selection comprises 24 datasets. By analyzing and comparing the mathematical and statistical results from multiple perspectives with several state-of-the-art (SOTA) algorithms, the results show that the proposed MSCSO algorithm has good optimization ability and can adapt to a wide range of optimization problems.

1. Introduction

With the development of the information age, data volume is increasing explosively. The problems people encounter in fields such as engineering [1], ecology [2], information [3], manufacturing [4], design [5], and management [6] are becoming increasingly complex. Most of these problems exhibit characteristics such as being multi-objective [7] and high-dimensional [8]. Swarm intelligence algorithms are a critical way to solve optimization problems [9]; with simple principles, easy implementation, and excellent performance, they have been favored by more and more scholars, and research in this area is increasing [10]. Heuristic intelligence algorithms can reach better global optima by using a random search. Because they do not rely on function gradients, heuristic algorithms do not require the objective function to be continuously differentiable, making it possible to optimize objective functions that cannot be handled by gradient descent [11,12].
The SCSO is a recently proposed efficient swarm intelligence algorithm that simulates the lifestyle habits of sand cats for optimization. It belongs to the class of evolutionary algorithms that simulate biological behaviors [13]. The SCSO algorithm has a simple structure and is easy to implement. Compared with cat swarm optimization (CSO) [14], the grey wolf optimizer (GWO) [15], the whale optimization algorithm (WOA) [16], the salp swarm algorithm (SSA) [17], the gravitational search algorithm (GSA) [18], particle swarm optimization (PSO) [19], black widow optimization (BWO) [20], and other algorithms, it has strong local exploitation capabilities. Although the SCSO algorithm has achieved good results, its low population diversity and overly narrow exploration angle give it a slow convergence speed and low solution accuracy, and it is prone to falling into local optima during the exploration stage of complex problems. Because natural organisms are abundant and their survival habits are easy to understand and accept, evolutionary algorithms that simulate biological patterns have become a hot research topic for relevant experts and scholars; examples include the Genghis Khan shark optimizer (GKSO) [21], Harris hawks optimization (HHO) [22], the snake optimizer (SO) [23], the dung beetle optimizer (DBO) [24], the crayfish optimization algorithm (COA) [25], and so on. The food chain embodies the survival of the fittest, and many organisms have related shortcomings, which carry over into the evolutionary algorithms that simulate their habits; although such algorithms can solve numerous optimization problems, the optimization effect is sometimes unsatisfactory. Therefore, after constructing a mathematical model of the biological habits being optimized, improving some of its mathematical theory can yield an excellent mathematical model for optimization problem-solving [26]. This method often achieves good optimization results. Farzad Kiani et al. proposed chaotic sand cat swarm optimization [27], Seyyedabbasi proposed a binary sand cat swarm optimization algorithm [28], Amjad Qtaish et al. proposed memory-based sand cat swarm optimization [29], Wu et al. proposed a modified sand cat swarm optimization algorithm [30], and Farzad Kiani et al. proposed an enhanced sand cat swarm optimization inspired by the political system [31]. Our research team is also committed to improving the effectiveness of original biological intelligence algorithms by introducing mathematical theories. We proposed algorithms such as the enhanced snake optimizer (ESO) [32], hybrid golden jackal optimization and golden sine algorithm with dynamic lens-imaging learning (LSGJO) [33], and the reptile search algorithm considering different flight heights (FRSA) [34] to provide some ideas for solving optimization problems.
Although the SCSO has achieved certain results on optimization problems, it is not perfect. When encountering higher-dimensional and multi-feature problems, the convergence speed of the SCSO algorithm is slow, and its disadvantage of quickly falling into local optima is fully exposed. To improve the effectiveness of the SCSO and help it overcome some physiological limitations, this paper proposes three novel strategies: a novel opposition-based learning strategy, a novel exploration mechanism, and a biological elimination update mechanism. These strategies help the SCSO quickly jump out of local optima, accelerate convergence, and improve optimization accuracy.
To verify the effectiveness of the MSCSO algorithm proposed in this paper, the algorithm was applied to two kinds of problems, global optimization (containing 30 functions) [35] and feature selection (containing 24 datasets) [36], which are also common complex problems in many fields. Global optimization includes unimodal multidimensional functions, multimodal multidimensional functions, and multimodal fixed-dimensional functions. Unimodal functions are used to test the exploitation ability of optimization algorithms, multimodal functions are used to test the exploration ability of optimization algorithms, and multidimensional functions are used to test the stability of algorithms. Feature selection is considered an NP-hard problem: when a dataset has N features, 2^N feature subsets are generated. Metaheuristic algorithms are widely used to find near-optimal solutions to NP-hard problems. For example, Ma et al. [37], Wu et al. [38], and Fan et al. [39] used global optimization functions to test their proposed algorithms, while Wang et al. [40], Lahmar et al. [41], and Turkoglu et al. [42] used feature selection datasets to test theirs. Finally, by comparing the results with many advanced algorithms on global optimization and feature selection problems, it is shown that the improved algorithm proposed in this paper has excellent performance.
The main contribution of this paper is as follows:
Three improvement strategies (novel opposition-based learning strategy, novel exploration mechanism, and biological elimination update mechanism) are used to improve the optimization performance of the SCSO algorithm.
Thirty standard test functions for intelligent optimization algorithm testing are used to evaluate the proposed MSCSO and compare the results with 11 other advanced optimization algorithms.
Twenty-four feature selection datasets were used to evaluate the proposed MSCSO and compare the results with other advanced optimization algorithms.
The chapters of this paper are arranged as follows: Section 1 introduces the background of intelligent optimization algorithms, Section 2 introduces the original SCSO, Section 3 introduces the relevant improvement strategies designed in this paper, and introduces the proposed MSCSO. Section 4 applies the proposed MSCSO to global optimization and feature selection problems and describes the corresponding statistical results. Section 5 summarizes the entire paper and looks forward to future research directions.

2. Original SCSO

The sand cat is the only type of cat that lives in the desert and can walk on soft, hot sand. With its superb auditory ability, it can detect low-frequency noise, detect and track prey, whether moving on the ground or underground, and then carry out capture operations on the prey. The SCSO is a novel biomimetic optimization algorithm that simulates the behavior of the sand cat in nature to achieve the optimization process. In the SCSO process, the detection and tracking of prey by sand cats can be observed as the exploration phase of the algorithm, while the capture of prey by sand cats is the exploitation phase of the algorithm.
In SCSO, the generation method of the initial solutions for the sand cat is shown in Equations (1) and (2).
SC_i = [p_{i,1}, p_{i,2}, …, p_{i,j}, …, p_{i,d}]  (1)
p_{i,j} = lb_j + r_1 × (ub_j − lb_j)  (2)
where SC_i represents the position of the i-th sand cat, i ∈ [1, n], n is the population size, d is the dimension of the problem, p_{i,j} indicates the position of the i-th sand cat in the j-th dimension, r_1 is a random number between 0 and 1, and ub_j and lb_j are the upper and lower boundaries of the j-th dimension, respectively.
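As an illustration, Equations (1) and (2) can be sketched in Python with NumPy (the function name and the vectorized uniform draw are choices made here, not taken from the paper):

```python
import numpy as np

def init_population(n, d, lb, ub, rng=None):
    """Generate n sand cats uniformly in [lb, ub]^d, per Equations (1)-(2)."""
    rng = np.random.default_rng(rng)
    r1 = rng.random((n, d))        # r1 ~ U(0, 1), one draw per dimension
    return lb + r1 * (ub - lb)     # p_ij = lb_j + r1 * (ub_j - lb_j)
```

Each row of the returned array is one sand cat position SC_i.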
Due to its unique ear canal structure, sand cats can perceive low-frequency noise, which can help them make judgments based on the noise situation, search, or track prey, and achieve conversion between the stages of surround (exploration) and hunting (exploitation).  R  is a mathematical model for sand cats to sense low-frequency noise, as shown in Equations (3)–(5).
r_G = s_M − (2 × s_M × iter_c)/(iter_Max + iter_Max)  (3)
R = 2 × r_G × r_1 − r_G  (4)
r = r_G × r_2  (5)
where s_M is the constant 2, r_G decreases linearly from 2 to 0, iter_c represents the current iteration number, iter_Max represents the maximum number of iterations, R represents the transition control between exploration and exploitation, R ∈ [−r_G, r_G], r_1 and r_2 are random numbers between 0 and 1, and r represents the sensitivity range of each sand cat.
When |R| > 1, the SCSO enters the exploration phase, and its position update method is shown in Equation (6).
P_{i,j}^{t+1} = r × (P_{r,j}^t − r_3 × P_{i,j}^t)  (6)
where P_{i,j}^t represents the position of the i-th sand cat in the j-th dimension during the t-th iteration, P_{i,j}^{t+1} represents its position during the (t+1)-th iteration, P_{r,j}^t is the position of the r-th (randomly chosen) sand cat in the j-th dimension during the t-th iteration, and r_3 is a random number between 0 and 1.
When |R| ≤ 1, the SCSO has entered the exploitation phase, and its location update method is shown in Equations (7) and (8).
P_{rnd,j}^t = |r_4 × P_{b,j}^t − P_{i,j}^t|  (7)
P_{i,j}^{t+1} = P_{b,j}^t − r × P_{rnd,j}^t × cos(θ)  (8)
where P_{rnd,j}^t represents a random position at the t-th iteration, which ensures that the sand cat can approach its prey, P_{b,j}^t is the position of the optimal individual in the sand cat group in the j-th dimension during the t-th iteration, r_4 is a random number between 0 and 1, θ is assigned by the roulette wheel algorithm, and P_{i,j}^{t+1} represents the position of the i-th sand cat in the j-th dimension during the (t+1)-th iteration.
The pseudocode of the SCSO is shown in Algorithm 1.
Algorithm 1 The pseudocode of the SCSO
1. Initialize the population S_i
2. Calculate the fitness function based on the objective function
3. Initialize r, r_G, R
4. While (t ≤ iter_Max)
5.   For each search agent
6.     Obtain a random angle based on the Roulette Wheel Selection (0° ≤ θ ≤ 360°)
7.     If (|R| > 1)
8.       Update the search agent position based on Equation (6)
9.     Else
10.      Update the search agent position based on Equation (8)
11.    End
12.  End
13.  t = t + 1
14. End
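The pseudocode above can be turned into a minimal runnable sketch (Python/NumPy assumed; the roulette-wheel angle selection is simplified to a uniform random angle, and boundary handling by clipping is an added assumption not stated in Algorithm 1):

```python
import numpy as np

def scso(obj, n, d, lb, ub, iter_max, s_m=2.0, rng=None):
    """Minimal SCSO loop following Equations (1)-(8); variable names mirror the text."""
    rng = np.random.default_rng(rng)
    pos = lb + rng.random((n, d)) * (ub - lb)                 # Eqs (1)-(2): initialization
    best = min(pos, key=obj).copy()
    for it in range(1, iter_max + 1):
        r_g = s_m - (2.0 * s_m * it) / (iter_max + iter_max)  # Eq (3): decreases 2 -> 0
        for i in range(n):
            r = r_g * rng.random()                            # Eq (5): sensitivity range
            R = 2.0 * r_g * rng.random() - r_g                # Eq (4): phase control
            theta = np.radians(rng.uniform(0.0, 360.0))       # simplified random angle
            if abs(R) > 1:                                    # exploration, Eq (6)
                p_rand = pos[rng.integers(n)]
                pos[i] = r * (p_rand - rng.random() * pos[i])
            else:                                             # exploitation, Eqs (7)-(8)
                p_rnd = np.abs(rng.random() * best - pos[i])
                pos[i] = best - r * p_rnd * np.cos(theta)
            pos[i] = np.clip(pos[i], lb, ub)                  # keep agents in bounds
            if obj(pos[i]) < obj(best):
                best = pos[i].copy()
    return best, obj(best)
```

For example, minimizing a sphere function with `scso(lambda x: float(np.sum(x * x)), 20, 5, -10.0, 10.0, 100)` drives the best fitness toward zero.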

3. Proposed MSCSO

The SCSO was proposed by Seyyedabbasi and Kiani in 2022 as a biomimetic intelligent algorithm. Because sand cats have strong hunting and survival abilities, the SCSO obtains excellent optimization ability by simulating their habits. However, there is no free lunch: no single method solves all problems [43], and the SCSO encounters some difficulties when solving optimization problems. This paper introduces novel mathematical theories to improve the SCSO, enhance its effectiveness, and help it overcome some physiological limitations. This paper proposes three novel strategies, namely, a novel opposition-based learning strategy, a novel exploration mechanism, and a biological elimination update mechanism, to help the SCSO quickly jump out of local optima, accelerate convergence, and improve optimization accuracy.

3.1. Nonlinear Lens Imaging Strategy

The lens imaging strategy is a form of opposition-based learning [44]. By refracting an object on one side of a convex lens to the other side, a more optimal solution can be obtained. However, in traditional convex lens imaging mechanisms, the imaging coefficient is often a fixed value, which is not conducive to generating population diversity. Therefore, this paper proposes a novel lens imaging strategy that expands the diversity of the population and increases the possibility of obtaining high-quality solutions by setting a dynamically updated imaging coefficient. The dynamically updated imaging coefficient is defined here as k, which can be calculated by Equations (9)–(11). Figure 1 shows the difference between the static lens imaging strategy and the dynamic lens imaging strategy. The dynamic lens imaging strategy can search more effective regions to improve the population's diversity and enhance the algorithm's global search ability.
((ub + lb)/2 − P) / (P* − (ub + lb)/2) = h / h* = k  (9)
P* = (ub + lb)/2 + (ub + lb)/(2k) − P/k  (10)
k = exp((iter_c/iter_Max)^3 + 0.0001) − 1  (11)
where P is the original solution, P* is the new solution obtained through the lens imaging strategy, h and h* are the heights of the object and its image, and k is the dynamically updated imaging coefficient.
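A small sketch of the dynamic lens imaging step of Equations (10) and (11), assuming out-of-range opposite points are clipped back to the boundaries (a common convention, not stated explicitly above):

```python
import numpy as np

def lens_opposition(p, lb, ub, iter_c, iter_max):
    """Opposite solution via the dynamic lens imaging strategy, Eqs (10)-(11)."""
    k = np.exp((iter_c / iter_max) ** 3 + 0.0001) - 1.0   # Eq (11): grows toward e^1.0001 - 1
    mid = (ub + lb) / 2.0                                 # midpoint of the search range
    p_star = mid + mid / k - p / k                        # Eq (10)
    return np.clip(p_star, lb, ub)                        # assumed boundary handling
```

Early in the run k is tiny, so the opposite point lands far from P (exploration); late in the run k approaches e^1.0001 − 1 ≈ 1.72, so the opposite point stays close (refinement).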

3.2. Novel Exploration Mechanisms

In the exploitation phase of the original SCSO, the sensitivity range of the sand cat is assumed to be a circle, so the direction of movement can be determined by a random angle θ on that circle. Since the selected angle lies between 0° and 360°, cos(θ) takes values between −1 and 1. In this way, each group member can move in a different circumferential direction in the search space. The SCSO uses a roulette wheel selection algorithm to select a random angle for each sand cat.
Inspired by this idea, this paper proposes a novel exploration mechanism that also uses a random angle θ, enabling the sand cat to search for prey in different directions during the exploration phase. The novel exploration mechanism is represented by Equation (12). The random angle increases the randomness of exploration while still allowing the sand cat to approach the optimal individual's position. Figure 2 shows the variation form of the random angle θ. Using this method, the sand cat can approach the hunting position while reducing the risk of getting trapped in local optima.
P_{i,j}^{t+1} = P_{i,m_j}^t + (P_{i,m_1}^t − P_{i,m_j}^t) × r_5 × cos(θ)  (12)
where P_{i,j}^{t+1} represents the position of the i-th sand cat in the j-th dimension during the (t+1)-th iteration, r_5 is a random number between 1 and 2, m_j is a random integer between 1 and d, P_{i,m_j}^t represents the position of the i-th sand cat in the m_j-th dimension during the t-th iteration, and P_{i,m_1}^t represents the position of the i-th sand cat in the m_1-th dimension during the t-th iteration.
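Equation (12) can be sketched as follows; since the text defines only m_j as random, m_1 is read here as the first dimension, which is an interpretive assumption:

```python
import numpy as np

def novel_explore(pos_i, rng=None):
    """One exploration step following Eq (12) for a single sand cat."""
    rng = np.random.default_rng(rng)
    d = pos_i.size
    m_j = rng.integers(d)                         # random dimension index (m_j)
    r5 = 1.0 + rng.random()                       # r5 between 1 and 2
    theta = np.radians(rng.uniform(0.0, 360.0))   # random search angle
    new = pos_i.copy()
    # m_1 interpreted here as the first dimension (index 0)
    new[m_j] = pos_i[m_j] + (pos_i[0] - pos_i[m_j]) * r5 * np.cos(theta)
    return new
```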

3.3. Elimination and Update Mechanism

Although the sand cat is a highly viable organism, the population size of sand cats also changes during the exploration and exploitation stages due to changes in the external environment; some sand cats may even be attacked by higher-level predators in the food chain and die. Inspired by this phenomenon, this paper proposes an elimination and update mechanism to keep the population size of sand cats consistent during the optimization process. This mechanism randomly selects 10% of individuals for elimination and generates a new individual for each. If the fitness value of the new individual is lower (i.e., better for minimization), it replaces the old individual, which is in line with the survival of the fittest in the competition among organisms. The update mechanism is shown in Equation (13).
P_{new i,j}^{t+1} = r_6 × r_7 × P_{i,j}^{t+1} + r_8 × (ub − lb) × (iter_Max − iter_c)/iter_Max  (13)
where r_6, r_7, and r_8 are random numbers between 0 and 1.
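A sketch of the elimination and update mechanism, assuming minimization and boundary clipping (the 10% selection and greedy replacement follow the description above):

```python
import numpy as np

def eliminate_and_update(pos, fit, obj, lb, ub, iter_c, iter_max, rng=None):
    """Eq (13): regenerate a random 10% of the population, keeping the fitter individual."""
    rng = np.random.default_rng(rng)
    n = pos.shape[0]
    idx = rng.choice(n, size=max(1, n // 10), replace=False)  # random 10% marked for elimination
    for i in idx:
        r6, r7, r8 = rng.random(3)
        shrink = (iter_max - iter_c) / iter_max               # perturbation range shrinks over time
        new = r6 * r7 * pos[i] + r8 * (ub - lb) * shrink      # Eq (13)
        new = np.clip(new, lb, ub)
        f_new = obj(new)
        if f_new < fit[i]:                                    # survival of the fittest (minimization)
            pos[i], fit[i] = new, f_new
    return pos, fit
```

Because replacement is greedy, no individual's fitness can get worse in this step.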
Applying the proposed improvement strategies to the SCSO yields the MSCSO algorithm. The pseudocode of MSCSO is shown in Algorithm 2.
Algorithm 2. The pseudocode of MSCSO.
1. Initialize the population
2. Calculate the fitness function based on the objective function
3. Initialize r, r_G, R
4. While (t ≤ iter_Max)
5.   For each search agent
6.     Obtain a random angle based on the Roulette Wheel Selection (0° ≤ θ ≤ 360°)
7.     Obtain a new position by Equation (10)
8.     Calculate the fitness function values to obtain the optimal position
9.     If (|R| > 1)
10.      Update the search agent position based on Equation (12)
11.    Else
12.      Update the search agent position based on Equation (8)
13.    End
14.    Update the new position using Equation (13)
15.    Check the boundaries of the new position and calculate the fitness value
16.  End
17.  Find the current best solution
18.  t = t + 1
19. End

3.4. Time Complexity of MSCSO

In optimizing practical problems, in addition to pursuing accuracy, time is also a very significant factor, and the time complexity of an algorithm is an essential indicator for measuring it [45]. Time complexity represents the amount of computation an algorithm performs and is mainly reflected in three parts: algorithm initialization, fitness evaluation, and solution updating [46]. It is therefore important to compare the time complexity of the improved algorithm with that of the original algorithm.
The computational complexity of SCSO is O(N × D × T), where N is the population size, D is the dimension of the problem, and T is the number of iterations. MSCSO has three additional parts compared to SCSO. The time complexity of the novel opposition-based learning strategy is O(D × T); the novel exploration mechanism replaces the original exploration mechanism, so it adds no time complexity; and the time complexity of the elimination and update mechanism is O(0.1 × N × D × T). Therefore, the time complexity of MSCSO is O(N × D × T) + O(D × T) + O(0.1 × N × D × T) = O((1.1N + 1) × D × T) = O(N × D × T); thus, the MSCSO proposed in this paper has the same order of time complexity as SCSO.

3.5. Population Diversity of MSCSO

Population diversity is an important part of the qualitative analysis of algorithms. This section demonstrates the population changes of the SCSO and MSCSO algorithms during optimization through population diversity experiments. Taking global optimization as an example, unimodal and multimodal functions were selected, with the non-fixed dimensional functions set to 30 dimensions. The population diversity I_C can be calculated by Equations (14) and (15) [47]. The population diversity curves of SCSO and MSCSO are shown in Figure 3.
I_C(t) = Σ_{i=1}^{N} Σ_{d=1}^{D} (x_i^d(t) − c^d(t))^2  (14)
c^d(t) = (1/N) Σ_{i=1}^{N} x_i^d(t)  (15)
where c^d(t) is the d-th component of the population centroid c at iteration t, x_i^d(t) represents the d-th dimension value of the i-th individual during the t-th iteration, and I_C(t) measures the degree of dispersion between the population and the centroid in each iteration.
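Equations (14) and (15) reduce to a few lines of NumPy; `population_diversity` is a name chosen here for illustration:

```python
import numpy as np

def population_diversity(pos):
    """I_C of Eqs (14)-(15): summed squared distance of all individuals to the centroid."""
    c = pos.mean(axis=0)                  # Eq (15): centroid over the N individuals
    return float(np.sum((pos - c) ** 2))  # Eq (14)
```

For example, two individuals at (0, 0) and (2, 2) have centroid (1, 1) and diversity 4.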
During the entire algorithm optimization process, MSCSO has higher population diversity values and better population diversity than SCSO. This indicates that MSCSO has better search ability during the exploration phase, which can avoid the algorithm falling into local optima and premature convergence.

3.6. Exploration and Exploitation of MSCSO

During the optimization process, different algorithms have different design ideas, resulting in differences in exploration and exploitation. Therefore, when designing a new algorithm, it is necessary to measure the exploration and exploitation of the algorithm in order to conduct a practical analysis of the search strategies that affect these two factors. The percentage of exploration and exploitation can be calculated by Equations (16)–(18) [47]. The exploration and exploitation of MSCSO is shown in Figure 4.
Exploration(%) = (Div(t)/Div_max) × 100  (16)
Exploitation(%) = (|Div(t) − Div_max|/Div_max) × 100  (17)
Div(t) = (1/D) Σ_{d=1}^{D} (1/N) Σ_{i=1}^{N} |median(x^d(t)) − x_i^d(t)|  (18)
where Div(t) denotes the dimension-wise diversity measurement and Div_max is the maximum diversity over the whole iteration process.
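Equations (16)–(18) can be sketched as follows (function names are chosen here):

```python
import numpy as np

def div_measure(pos):
    """Div(t) of Eq (18): mean absolute deviation from the per-dimension median."""
    med = np.median(pos, axis=0)
    return float(np.mean(np.abs(med - pos)))  # averages over N and D at once

def xpl_xpt(div_t, div_max):
    """Exploration and exploitation percentages, Eqs (16)-(17)."""
    return div_t / div_max * 100.0, abs(div_t - div_max) / div_max * 100.0
```

When the current diversity is half its maximum, exploration and exploitation are balanced at 50% each.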
During the early part of the optimization process, MSCSO has a high exploration proportion, indicating that MSCSO has good search ability, which prevents the algorithm from falling into local optima and converging prematurely. In the later part, the exploitation proportion gradually increases and, building on the earlier search, convergence is accelerated to obtain the optimization results. Throughout the entire optimization process, the exploration and exploitation of MSCSO maintain a dynamic balance, indicating that the algorithm has good stability and optimization performance.

4. Experiments and Results Analysis

4.1. Benchmark Datasets

In this section, we conduct performance and effectiveness testing experiments on the proposed algorithm. Global optimization and feature selection, as common problems in daily life, have become the leading choices for testing optimization algorithms used to evaluate the comprehensive ability of algorithm exploration and exploitation. In terms of global optimization, this paper selected 30 well-known functions commonly used for optimization testing as the test set, including 20 non-fixed dimensional functions and ten fixed dimensional functions. In terms of feature selection, this paper selected 24 datasets commonly used for testing. The details of the global optimization function set are shown in Appendix A Table A1. The details of datasets for feature selection are shown in Table 1, which can be obtained from the website: https://archive.ics.uci.edu/datasets (accessed on 15 October 2023).

4.2. Parameter Settings

In order to better compare the results with other algorithms, in the global optimization section, this paper uses 11 famous algorithms as benchmark algorithms, including GA [48], PSO [19], GWO [15], HHO [22], ACO [49], WOA [16], SOGWO [50], EGWO [51], TACPSO [52], SCSO [13], etc. These algorithms have been used as comparative methods in many studies and have excellent performance in global optimization. The details of parameter setting for algorithms are shown in Table 2.
The feature selection problem is a binary optimization problem. When applying traditional optimization algorithms to the feature selection problem, binary transformation is required first, and a transfer function is used to map continuous values to their corresponding binary values [53].
Any optimization problem is transformed into a solution for the objective function [54]. In feature selection, the goal is to minimize the number of selected features and achieve the highest accuracy. Therefore, the objective function of the feature selection problem is shown in Equation (19).
fitness = α × Error + (1 − α) × |S|/|F|  (19)
where Error represents the classification error rate, |S| represents the number of selected features, |F| represents the total number of features, and α is the weight assigned to the classification error rate, α ∈ [0, 1].
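Equation (19) in code; the value of α is the experimenter's choice (0.99 is common in the feature selection literature, but the default below is an assumption, not a value stated in this paper):

```python
def fs_fitness(error, n_selected, n_total, alpha=0.99):
    """Eq (19): weighted sum of classification error and selected-feature ratio."""
    return alpha * error + (1.0 - alpha) * n_selected / n_total
```

For example, with α = 0.9, an error rate of 0.1, and 5 of 20 features selected, the fitness is 0.9 × 0.1 + 0.1 × 0.25 = 0.115.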
Table 3 shows eight binary transfer functions (including four S-shaped and four V-shaped transfer functions). This paper conducted extensive simulations to verify the efficiency of these transfer functions and found that V4 is the most feasible transfer function.
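As an illustration, a V-shaped transfer function and a thresholding rule can be sketched as follows; the form |(2/π)·arctan((π/2)x)| and the simple threshold-based binarization are common conventions assumed here, not details taken from Table 3:

```python
import math
import random

def v4(x):
    """A V-shaped transfer function: |(2/pi) * arctan((pi/2) * x)|, mapping R into [0, 1)."""
    return abs((2.0 / math.pi) * math.atan((math.pi / 2.0) * x))

def binarize(position, rng=random):
    """Map a continuous position to a binary feature mask by thresholding v4 against a random draw."""
    return [1 if rng.random() < v4(x) else 0 for x in position]
```

A dimension near zero is almost never selected, while a dimension with large magnitude (of either sign) is selected with probability approaching one.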
In the global optimization experiment, all algorithms adopt unified parameter settings to ensure the fairness of the results. The algorithm runs independently and continuously 30 times, with a population of 50 and algorithm iterations of 500. The simulation testing environment for this time is operating system Win10, 64-bit, CPU 11th Gen Intel (R) Core (TM) i7-11700K, memory 64 GB, primary frequency 3.60 GHz, and simulation software MATLAB 2016b.

4.3. Results Analysis

In global optimization problems, the higher the dimension of the optimization problem, the better it can demonstrate the robustness and performance of an algorithm. Therefore, this paper focuses on 20 non-fixed dimensional functions, using dimensions of 30, 100, and 500 to fully validate the effectiveness of the proposed and benchmark algorithms. This paper adopts five statistical metrics to assess the effectiveness of all algorithms: the mean (Mean), standard deviation (Std), p-value, Wilcoxon rank sum test, and Friedman test. The iterative convergence curves and box diagrams of the algorithms are also drawn for a full and comprehensive comparison.
Table 4 shows the results of 5 statistical metrics for 12 optimization algorithms in solving 30-dimensional non-fixed dimensional functions. Figure 5 shows the iteration curves of the 12 optimization algorithms in solving 30-dimensional non-fixed dimensional functions. Through the convergence curve, the MSCSO algorithm proposed in this paper outperforms other algorithms in terms of the convergence speed and optimization accuracy in F1, F2, F3, F4, F9, F10, F11, F13, F14, F15, F16, F17, and F18 functions. Figure 6 is a box diagram of the results of 12 optimization algorithms solving 30-dimensional non-fixed dimensional functions.
By analyzing the results in Table 4, Figure 5 and Figure 6, MSCSO achieved the most optimal values in the 30-dimensional non-fixed dimensional functions compared to the other 11 algorithms, with a quantity of 13. The Wilcoxon rank sum test and Friedman test show the overall results of each algorithm. In the Wilcoxon rank sum test, MSCSO achieved results of 190/22/8; in the Friedman test, MSCSO achieved the highest-ranking result with a value of 2.3750. The above results indicate that MSCSO has achieved better results than other algorithms in 30-dimensional non-fixed dimensional functions.
Table 5 shows the results of 5 statistical metrics for 12 optimization algorithms in solving 100-dimensional non-fixed dimensional functions. Figure 7 shows the iteration curves of the 12 optimization algorithms in solving 100-dimensional non-fixed dimensional functions. Through the convergence curve, the MSCSO algorithm proposed in this paper outperforms other algorithms in terms of convergence speed and optimization accuracy in F1, F2, F3, F4, F9, F10, F11, F13, F14, F15, F16, F17, and F18 functions. Figure 8 is a box diagram of the results of 12 optimization algorithms solving 100-dimensional non-fixed dimensional functions.
By analyzing the results in Table 5, Figure 7 and Figure 8, MSCSO achieved the most optimal values in the 100-dimensional non-fixed dimensional functions compared to the other 11 algorithms, with a quantity of 12. The Wilcoxon rank sum test and Friedman test show the overall results of each algorithm. In the Wilcoxon rank sum test, MSCSO achieved results of 194/20/6; in the Friedman test, MSCSO achieved the highest-ranking result with a value of 2.2125. The above results indicate that MSCSO has achieved better results than other algorithms in 100-dimensional non-fixed dimensional functions.
Table 6 shows the results of 5 statistical metrics for 12 optimization algorithms in solving 500-dimensional non-fixed dimensional functions. Figure 9 shows the iteration curves of the 12 optimization algorithms in solving 500-dimensional non-fixed dimensional functions. Through the convergence curve, the MSCSO algorithm proposed in this paper outperforms other algorithms in terms of convergence speed and optimization accuracy in F1, F2, F3, F4, F9, F10, F11, F13, F14, F15, F16, F17, and F18 functions. Figure 10 is a box diagram of the results of 12 optimization algorithms solving 500-dimensional non-fixed dimensional functions.
By analyzing the results in Table 6, Figure 9 and Figure 10, MSCSO achieved the most optimal values in the 500-dimensional non-fixed dimensional functions compared to the other 11 algorithms, with a quantity of 12. The Wilcoxon rank sum test and Friedman test show the overall results of each algorithm. In the Wilcoxon rank sum test, MSCSO achieved results of 191/24/5; in the Friedman test, MSCSO achieved the highest-ranking result with a value of 2.2500. The above results indicate that MSCSO has achieved better results than other algorithms in 500-dimensional non-fixed dimensional functions.
Table 7 shows the results of 5 statistical metrics for 12 optimization algorithms in solving fixed-dimensional functions. Figure 11 shows the iteration curves of the 12 optimization algorithms in solving fixed-dimensional functions. From the convergence curves, the MSCSO algorithm proposed in this paper has a higher convergence speed than the other algorithms in functions F21, F23, F24, F28, F29, and F30, and in functions F21, F23, F24, F25, F26, F28, F29, and F30, its optimization accuracy is higher than that of the other algorithms. Figure 12 is a box diagram of the results of the 12 optimization algorithms solving fixed-dimensional functions.
Through the analysis of the results in Table 7, Figure 11 and Figure 12, in the fixed-dimensional functions, MSCSO achieved the second highest number of optimal values compared to the other 11 algorithms, only one fewer than the best-performing TACPSO. The Wilcoxon rank sum test and Friedman test show the overall results of each algorithm. In the Wilcoxon rank sum test, MSCSO achieved a result of 60/24/26; in the Friedman test, MSCSO achieved the highest-ranking result with a value of 4.3750. The above results indicate that MSCSO has achieved better results than other algorithms in fixed-dimensional functions.
Accuracy and fitness values are commonly used comparative indicators in feature selection problems. To comprehensively demonstrate effectiveness, the average (Mean) and standard deviation (Std) of the accuracy and fitness values are calculated separately for the Friedman test, and the iterative convergence curves of the algorithms are drawn.
Table 8 shows the results of the proposed and benchmark algorithms on the 24 feature selection datasets, including the average and standard deviation of the feature selection accuracy. Across the 24 datasets, MSCSO achieved 15 optimal values, the highest number among all algorithms. Table 9 shows the Friedman test results for the average and standard deviation of the feature selection accuracy: MSCSO obtained a rank sum of 46.5 and an average rank of 1.9375, ranking first among all algorithms.
The results in Table 10 include the average and standard deviation of the feature selection fitness values. Across the 24 datasets, MSCSO achieved 13 optimal values, the highest number among all algorithms. Table 11 shows the Friedman test results for the average and standard deviation of the feature selection fitness values: MSCSO obtained a rank sum of 46 and an average rank of 1.9167, ranking first among all algorithms. Figure 13 shows the convergence curves on the feature selection datasets.
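The fitness value in wrapper feature selection typically balances the classification error rate against the number of selected features. A minimal sketch follows; the weight alpha = 0.99 is a conventional choice in the feature selection literature, not necessarily the exact formula used in this paper.

```python
# Common wrapper feature-selection fitness: a weighted sum of the
# classification error rate and the fraction of selected features.
# alpha = 0.99 is a conventional weight (an assumption, not taken
# from this paper).

def fs_fitness(error_rate, n_selected, n_total, alpha=0.99):
    # Lower is better: small error and a compact feature subset.
    return alpha * error_rate + (1.0 - alpha) * (n_selected / n_total)
```

With a perfect classifier and no selected features the fitness is 0, so minimizing fitness favors accurate and compact subsets.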
This section validates the proposed MSCSO algorithm on two problems: global optimization and feature selection. MSCSO and other advanced algorithms were tested on 30 well-known global optimization functions (20 non-fixed dimensional functions with Dim = 30, 100, and 500, and 10 fixed-dimensional functions) and 24 feature selection datasets. For the global optimization problems, five statistical metrics assess the effectiveness of all algorithms: Mean, Std, p-value, the Wilcoxon rank sum test, and the Friedman test. For the feature selection problems, the Mean and Std of the accuracy and fitness values are calculated separately for the Friedman test. The convergence curves, box plots, and experimental data tables show that MSCSO achieved the best results, demonstrating its strong exploitation ability and efficient spatial exploration ability and verifying the effectiveness of the proposed strategies and methods.
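The two statistical procedures used above can be reproduced with standard tools. A minimal sketch with synthetic run results (the numbers below are illustrative placeholders, not values from the paper's experiments):

```python
import numpy as np
from scipy import stats

# Hypothetical per-run best fitness values for two algorithms on one
# benchmark function (30 independent runs each); a real experiment
# would use the recorded run results.
rng = np.random.default_rng(42)
mscso_runs = rng.normal(loc=0.001, scale=0.0005, size=30)
other_runs = rng.normal(loc=0.10, scale=0.02, size=30)

# Wilcoxon rank sum test: is the difference between the two result
# samples statistically significant (commonly judged at p < 0.05)?
stat, p_value = stats.ranksums(mscso_runs, other_runs)

# Friedman test over the mean results of several algorithms on several
# functions; a lower mean rank indicates better overall performance.
results = np.array([
    [0.001, 0.10, 0.05],   # function 1: algorithms A, B, C
    [0.002, 0.20, 0.08],   # function 2
    [0.001, 0.15, 0.03],   # function 3
    [0.003, 0.12, 0.09],   # function 4
])
friedman_stat, friedman_p = stats.friedmanchisquare(*results.T)
mean_ranks = stats.rankdata(results, axis=1).mean(axis=0)
```

Here algorithm A attains rank 1 on every function, so its mean Friedman rank is 1.0, mirroring how the rankings in Tables 6, 7, 9, and 11 are computed.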

5. Conclusions and Future Works

The SCSO is a highly effective biomimetic swarm intelligence algorithm proposed in recent years; it simulates the living habits of sand cats to solve optimization problems and achieves good results. On some problems, however, its optimization performance is not ideal. Once a mathematical model of a creature's habits has been constructed, refining parts of its mathematical formulation can yield a stronger optimization model, and this approach often produces good results. To enhance the effectiveness of the SCSO and help it overcome some physiological limitations, this paper proposes three novel strategies: a new opposition-based learning strategy, a new exploration mechanism, and a biological elimination update mechanism. These strategies help the SCSO escape local optima more easily, accelerate its convergence, and improve its optimization accuracy.
To verify the effectiveness of the proposed MSCSO algorithm, it was applied to two problems that are common in many fields: global optimization (thirty functions) and feature selection (twenty-four datasets). Comparisons with many advanced algorithms on both problems demonstrate the excellent performance of the improved algorithm. For the global optimization problems, five statistical metrics assess the effectiveness of all algorithms: Mean, Std, p-value, the Wilcoxon rank sum test, and the Friedman test; for the feature selection problems, the Mean and Std of the accuracy and fitness values are calculated separately for the Friedman test. The convergence curves, box plots, and experimental data tables show that MSCSO achieved the best results, demonstrating strong exploitation ability and efficient spatial exploration ability. The experimental and statistical results show that, compared with other advanced algorithms, MSCSO has clear advantages in escaping local optima, improving convergence speed, and improving optimization accuracy. From both theoretical and practical perspectives, MSCSO has excellent optimization capabilities, which proves that it can adapt to a wide range of optimization problems and verifies the algorithm's robustness.
Although the strategies proposed in this article improve the optimization ability of the original SCSO, interpretation of the relevant mathematical models reveals that the proportion between exploration and exploitation is fixed, which prevents the algorithm from adapting its exploration and exploitation to the actual problem. In later research, nonlinear dynamic adjustment factors can be introduced for the exploration and exploitation stages of the algorithm. Future work will also focus on applying the proposed algorithm to more practical problems, such as feature selection in the text and image domains, and on adopting more effective heuristic algorithms for the hyperparameter optimization problems faced by machine learning and deep learning, in an effort to contribute to the improvement of artificial intelligence technology.
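As a sketch of this future-work idea: the SCSO control parameter rG decreases linearly from 2 to 0 over the iterations (Table 2), and a nonlinear schedule would reshape that decay. The cosine decay below is one illustrative choice, not a scheme proposed in the paper.

```python
import math

def linear_rg(t, t_max, s_m=2.0):
    # Original SCSO sensitivity range: decreases linearly from s_m to 0.
    return s_m - (s_m * t / t_max)

def nonlinear_rg(t, t_max, s_m=2.0):
    # Illustrative nonlinear alternative (assumption): cosine decay keeps
    # r_G large for longer (more exploration early) and shrinks it faster
    # near the end of the run (more exploitation late).
    return s_m * 0.5 * (1.0 + math.cos(math.pi * t / t_max))
```

Both schedules share the same endpoints (2 at t = 0, 0 at t = t_max), but the nonlinear one stays above the linear one through the first half of the run.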

Author Contributions

Conceptualization, L.Y., J.Y. and P.Y.; methodology, P.Y. and L.Y.; software, P.Y. and G.L.; writing—original draft, L.Y. and Y.L.; writing—review and editing, Y.L., T.Z. and J.Y.; data curation, J.Y. and G.L.; visualization, G.L. and J.Y.; supervision, T.Z. and Y.L.; funding acquisition, T.Z. All authors have read and agreed to the published version of the manuscript.

Funding

Guizhou Provincial Science and Technology Projects (Grant No. Qiankehejichu-ZK [2022] General 320), National Natural Science Foundation (Grant No. 72061006) and Academic New Seedling Foundation Project of Guizhou Normal University (Grant No. Qianshixinmiao-[2021] A30).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this article are publicly available and are described in the main text.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. The global optimization functions.
Name | Function | Dim | Range | Fmin | Type
Sphere | f_1(x) = \sum_{i=1}^{n} x_i^2 | 30, 100, 500 | [−100, 100] | 0 | Unimodal
Schwefel 2.22 | f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i| | 30, 100, 500 | [−1.28, 1.28] | 0 | Unimodal
Schwefel 1.2 | f_3(x) = \sum_{i=1}^{n} ( \sum_{j=1}^{i} x_j )^2 | 30, 100, 500 | [−100, 100] | 0 | Unimodal
Schwefel 2.21 | f_4(x) = \max_i \{ |x_i|, 1 \le i \le n \} | 30, 100, 500 | [−100, 100] | 0 | Unimodal
Rosenbrock | f_5(x) = \sum_{i=1}^{n-1} [ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 ] | 30, 100, 500 | [−30, 30] | 0 | Unimodal
Step | f_6(x) = \sum_{i=1}^{n} ( \lfloor x_i + 0.5 \rfloor )^2 | 30, 100, 500 | [−100, 100] | 0 | Unimodal
Quartic | f_7(x) = \sum_{i=1}^{n} i x_i^4 + random[0, 1) | 30, 100, 500 | [−1.28, 1.28] | 0 | Unimodal
Exponential | f_8(x) = \exp( 0.5 \sum_{i=1}^{n} x_i ) | 30, 100, 500 | [−10, 10] | 0 | Unimodal
Sum Power | f_9(x) = \sum_{i=1}^{n} |x_i|^{i+1} | 30, 100, 500 | [−1, 1] | 0 | Unimodal
Sum Square | f_{10}(x) = \sum_{i=1}^{n} i x_i^2 | 30, 100, 500 | [−10, 10] | 0 | Unimodal
Zakharov | f_{11}(x) = \sum_{i=1}^{n} x_i^2 + ( \sum_{i=1}^{n} 0.5 i x_i )^2 + ( \sum_{i=1}^{n} 0.5 i x_i )^4 | 30, 100, 500 | [−10, 10] | 0 | Unimodal
Trid | f_{12}(x) = (x_1 - 1)^2 + \sum_{i=2}^{n} i ( 2 x_i^2 - x_{i-1} )^2 | 30, 100, 500 | [−10, 10] | 0 | Unimodal
Elliptic | f_{13}(x) = \sum_{i=1}^{n} (10^6)^{(i-1)/(n-1)} x_i^2 | 30, 100, 500 | [−100, 100] | 0 | Unimodal
Cigar | f_{14}(x) = x_1^2 + 10^6 \sum_{i=2}^{n} x_i^2 | 30, 100, 500 | [−100, 100] | 0 | Unimodal
Generalized Schwefel's Problem 2.26 | f_{15}(x) = -\sum_{i=1}^{n} x_i \sin( \sqrt{|x_i|} ) | 30, 100, 500 | [−500, 500] | −418.9829 × n | Multimodal
Generalized Rastrigin's Function | f_{16}(x) = \sum_{i=1}^{n} [ x_i^2 - 10 \cos(2 \pi x_i) + 10 ] | 30, 100, 500 | [−5.12, 5.12] | 0 | Multimodal
Ackley's Function | f_{17}(x) = -20 \exp( -0.2 \sqrt{ \frac{1}{n} \sum_{i=1}^{n} x_i^2 } ) - \exp( \frac{1}{n} \sum_{i=1}^{n} \cos(2 \pi x_i) ) + 20 + e | 30, 100, 500 | [−32, 32] | 0 | Multimodal
Generalized Griewank Function | f_{18}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos( \frac{x_i}{\sqrt{i}} ) + 1 | 30, 100, 500 | [−600, 600] | 0 | Multimodal
Penalized 1 | f_{19}(x) = \frac{\pi}{n} \{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 [ 1 + 10 \sin^2(\pi y_{i+1}) ] + (y_n - 1)^2 \} + \sum_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + \frac{x_i + 1}{4} and u(x_i, a, k, m) = k (x_i - a)^m if x_i > a; 0 if -a \le x_i \le a; k (-x_i - a)^m if x_i < -a | 30, 100, 500 | [−50, 50] | 0 | Multimodal
Penalized 2 | f_{20}(x) = 0.1 \{ \sin^2(3 \pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 [ 1 + \sin^2(3 \pi x_{i+1}) ] + (x_n - 1)^2 [ 1 + \sin^2(2 \pi x_n) ] \} + \sum_{i=1}^{n} u(x_i, 5, 100, 4) | 30, 100, 500 | [−50, 50] | 0 | Multimodal
Shekel's Foxholes Function | f_{21}(x) = ( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{ j + \sum_{i=1}^{2} (x_i - a_{ij})^6 } )^{-1} | 2 | [−65.536, 65.536] | 1 | Multimodal
Kowalik's Function | f_{22}(x) = \sum_{i=1}^{11} [ a_i - \frac{ x_1 (b_i^2 + b_i x_2) }{ b_i^2 + b_i x_3 + x_4 } ]^2 | 4 | [−5, 5] | 0.0003 | Multimodal
Six-Hump Camel-Back Function | f_{23}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4 | 2 | [−5, 5] | −1.0316 | Multimodal
Branin Function | f_{24}(x) = ( x_2 - \frac{5.1}{4 \pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 )^2 + 10 ( 1 - \frac{1}{8 \pi} ) \cos x_1 + 10 | 2 | [−5, 5] | 0.398 | Multimodal
Goldstein-Price Function | f_{25}(x) = [ 1 + (x_1 + x_2 + 1)^2 ( 19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2 ) ] × [ 30 + (2 x_1 - 3 x_2)^2 ( 18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2 ) ] | 2 | [−2, 2] | 3 | Multimodal
Hartman's Function 1 | f_{26}(x) = -\sum_{i=1}^{4} c_i \exp( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 ) | 3 | [0, 1] | −3.86 | Multimodal
Hartman's Function 2 | f_{27}(x) = -\sum_{i=1}^{4} c_i \exp( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 ) | 6 | [0, 1] | −3.32 | Multimodal
Shekel's Family 1 | f_{28}(x) = -\sum_{i=1}^{5} [ (X - a_i)(X - a_i)^T + c_i ]^{-1} | 4 | [0, 10] | −10.1532 | Multimodal
Shekel's Family 2 | f_{29}(x) = -\sum_{i=1}^{7} [ (X - a_i)(X - a_i)^T + c_i ]^{-1} | 4 | [0, 10] | −10.4028 | Multimodal
Shekel's Family 3 | f_{30}(x) = -\sum_{i=1}^{10} [ (X - a_i)(X - a_i)^T + c_i ]^{-1} | 4 | [0, 10] | −10.5364 | Multimodal
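The entries of Table A1 translate directly into code. A minimal sketch of three of the benchmarks (Sphere, Rastrigin, and Ackley), each of which has a global minimum of 0 at the origin; a full experiment would implement all thirty functions.

```python
import numpy as np

# Three benchmark functions from Table A1.

def sphere(x):
    # F1, unimodal: sum of squares, f(0) = 0.
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    # F16, multimodal: many regularly spaced local minima, f(0) = 0.
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def ackley(x):
    # F17, multimodal: nearly flat outer region, deep hole at the origin.
    x = np.asarray(x, dtype=float)
    n = x.size
    term1 = -20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
    term2 = -np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
    return float(term1 + term2 + 20.0 + np.e)
```

Evaluating each function at the zero vector of the appropriate dimension recovers the Fmin column of Table A1 (up to floating-point rounding for Ackley).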

Figure 1. Variations of the static lens imaging strategy and the dynamic lens imaging strategy.
Figure 2. The variation form of random angle θ.
Figure 3. The population diversity curves of SCSO and MSCSO.
Figure 4. The exploration and exploitation of MSCSO.
Figure 5. The convergence curves of 30-dimensional non-fixed dimensional functions.
Figure 6. Boxplot analysis of 30-dimensional non-fixed dimensional functions.
Figure 7. The convergence curves of 100-dimensional non-fixed dimensional functions.
Figure 8. Boxplot analysis of 100-dimensional non-fixed dimensional functions.
Figure 9. The convergence curves of 500-dimensional non-fixed dimensional functions.
Figure 10. Boxplot analysis of 500-dimensional non-fixed dimensional functions.
Figure 11. The convergence curves of fixed-dimensional functions.
Figure 12. Boxplot analysis of fixed-dimensional functions.
Figure 13. The convergence curves of feature selection datasets.
Table 1. The feature selection datasets.

No. | Dataset | Features | Samples | No. | Dataset | Features | Samples
1 | Iris | 4 | 150 | 13 | Clean1 | 167 | 476
2 | Ionosphere | 34 | 351 | 14 | Waveform | 37 | 3325
3 | Zoo | 16 | 101 | 15 | PenglungEW | 21 | 5000
4 | Wine | 12 | 178 | 16 | Sonar | 60 | 208
5 | Glass | 9 | 214 | 17 | Vote | 16 | 435
6 | Musk1 | 467 | 476 | 18 | German | 24 | 1000
7 | HeartEW | 14 | 270 | 19 | Diabetes | 8 | 168
8 | Lymphography | 18 | 148 | 20 | KrvskpEW | 36 | 3196
9 | Parkinsons | 22 | 195 | 21 | Australian | 14 | 690
10 | Exactly | 13 | 1000 | 22 | Haberman | 4 | 306
11 | Breastcancer | 9 | 699 | 23 | Ecoli | 8 | 336
12 | BreastEW | 30 | 569 | 24 | Mammographic | 6 | 830
Table 2. Parameters and assignments setting for algorithms.

Algorithm | Parameters and Assignments
GA | α ∈ [0.5, 1.5]
PSO | c1 = 2, c2 = 2, Wmin = 0.2, Wmax = 0.9
GWO | a = 2 (linearly decreases over iterations), r1 ∈ [0, 1], r2 ∈ [0, 1]
HHO | J ∈ [0, 2]
ACO | α = 1, β = 2, ρ = 0.05
WOA | a ∈ [2, 0], A ∈ [2, 0], C = 2 · rand(0, 1), l ∈ [−1, 1], b = 1
SOGWO | a = 2 (linearly decreases over iterations), r1 ∈ [0, 1], r2 ∈ [0, 1]
EGWO | a = 2 (linearly decreases over iterations), r1 ∈ [0, 1], r2 ∈ [0, 1]
TACPSO | c1 = 2, c2 = 2, Wmin = 0.2, Wmax = 0.9
SCSO | rG ∈ [2, 0], R ∈ [−2rG, 2rG]
MSCSO | rG ∈ [2, 0], R ∈ [−2rG, 2rG]
Table 3. Details of binary transfer functions.

S-Shaped | Transfer Function | V-Shaped | Transfer Function
S1 | T(x) = 1 / (1 + e^(−2x)) | V1 | T(x) = |erf((√π/2) x)|
S2 | T(x) = 1 / (1 + e^(−x)) | V2 | T(x) = |tanh(x)|
S3 | T(x) = 1 / (1 + e^(−x/2)) | V3 | T(x) = |x / √(1 + x²)|
S4 | T(x) = 1 / (1 + e^(−x/3)) | V4 | T(x) = |(2/π) arctan((π/2) x)|
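In binary feature selection, a transfer function maps a continuous position to a probability of setting each bit. A minimal sketch using S1 and V2 from Table 3; the S-shaped update rule shown (bit = 1 when a uniform random number falls below T(x)) is the common convention, and the paper's exact binarization rule may differ.

```python
import numpy as np

# S-shaped (S1) and V-shaped (V2) transfer functions from Table 3.

def s1(x):
    return 1.0 / (1.0 + np.exp(-2.0 * x))

def v2(x):
    return np.abs(np.tanh(x))

def binarize_s(position, rng):
    # Common S-shaped rule: set bit i to 1 when rand_i < T(x_i).
    return (rng.random(position.shape) < s1(position)).astype(int)

rng = np.random.default_rng(0)
pos = np.array([-6.0, 0.0, 6.0])   # strongly negative, neutral, strongly positive
bits = binarize_s(pos, rng)
```

A strongly negative coordinate is almost never selected (T(x) near 0), a strongly positive one almost always is (T(x) near 1), and a coordinate near zero is selected with probability about one half.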
Table 4. Comparison of results on 30-dimensional non-fixed dimensional functions.

F(x) | Metric | GA | PSO | GWO | HHO | ACO | WOA | CMA-ES | SOGWO | EGWO | TACPSO | SCSO | MSCSO
F1 | Mean | 2.3882E+04 | 4.0252E+02 | 9.4854E-28 | 1.0834E-97 | 3.1450E-03 | 1.2385E-74 | 1.0751E-05 | 2.5836E-27 | 1.2941E-30 | 1.5302E-01 | 4.6963E-114 | 0.0000E+00
F1 | Std | 8.0237E+03 | 2.4914E+02 | 9.6209E-28 | 4.7523E-97 | 3.4279E-03 | 2.7928E-74 | 4.1260E-06 | 4.2308E-27 | 4.8761E-30 | 3.4246E-01 | 1.5788E-113 | 0.0000E+00
F1 | P | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | ——
F1 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F2 | Mean | 5.5958E+01 | 1.6450E+01 | 8.0038E-17 | 2.3736E-50 | 6.7519E-01 | 7.3908E-51 | 4.8772E-03 | 7.6346E-17 | 1.8008E-19 | 1.1305E+00 | 2.9535E-59 | 0.0000E+00
F2 | Std | 8.2859E+00 | 7.1315E+00 | 5.9003E-17 | 9.1308E-50 | 2.5363E+00 | 3.2765E-50 | 1.2516E-03 | 5.8232E-17 | 4.9257E-19 | 2.5175E+00 | 1.4573E-58 | 0.0000E+00
F2 | P | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | ——
F2 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F3 | Mean | 5.1764E+04 | 9.1934E+03 | 8.4566E-06 | 2.0375E-65 | 3.3823E+04 | 4.4908E+04 | 5.3514E+00 | 1.3368E-04 | 5.5742E-04 | 1.3350E+03 | 8.6436E-97 | 0.0000E+00
F3 | Std | 1.3440E+04 | 5.9570E+03 | 1.9506E-05 | 1.1160E-64 | 7.1006E+03 | 1.3375E+04 | 1.5242E+01 | 3.8736E-04 | 2.7963E-03 | 1.3539E+03 | 2.6872E-96 | 0.0000E+00
F3 | P | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | ——
F3 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F4 | Mean | 7.1899E+01 | 8.9183E+00 | 7.7056E-07 | 6.4756E-50 | 7.6759E+01 | 5.1504E+01 | 5.3379E+01 | 1.7477E-06 | 5.9639E-02 | 1.0062E+01 | 4.1841E-50 | 0.0000E+00
F4 | Std | 9.7846E+00 | 2.4777E+00 | 9.2735E-07 | 1.6146E-49 | 1.8395E+01 | 2.5189E+01 | 4.2940E+01 | 3.2233E-06 | 2.3079E-01 | 3.8593E+00 | 1.4644E-49 | 0.0000E+00
F4 | P | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | ——
F4 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F5 | Mean | 2.1880E+07 | 1.7601E+04 | 2.7143E+01 | 1.9384E-02 | 3.4312E+03 | 2.7964E+01 | 3.1524E+01 | 2.7193E+01 | 2.8053E+01 | 2.3597E+02 | 2.8028E+01 | 4.4841E-01
F5 | Std | 1.6446E+07 | 1.8892E+04 | 8.1726E-01 | 3.0642E-02 | 1.6368E+04 | 4.7168E-01 | 2.4178E+01 | 7.6903E-01 | 7.7618E-01 | 5.3466E+02 | 8.7610E-01 | 1.2419E+00
F5 | P | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 1.8608E-06 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | ——
F5 | Wr | (+) | (+) | (+) | (−) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F6 | Mean | 2.0342E+04 | 3.5598E+02 | 8.1840E-01 | 1.5320E-04 | 2.5508E-03 | 4.1990E-01 | 1.1798E-05 | 7.3664E-01 | 3.3872E+00 | 1.8392E-01 | 1.8160E+00 | 2.5493E-04
F6 | Std | 9.4131E+03 | 1.5762E+02 | 3.8138E-01 | 1.5441E-04 | 2.4175E-03 | 2.2643E-01 | 5.6766E-06 | 3.8153E-01 | 6.0128E-01 | 3.8507E-01 | 5.2389E-01 | 3.2794E-04
F6 | P | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 1.3732E-01 | 1.1737E-09 | 3.0199E-11 | 6.7220E-10 | 3.0199E-11 | 3.0199E-11 | 3.3384E-11 | 3.0199E-11 | ——
F6 | Wr | (+) | (+) | (+) | (=) | (+) | (+) | (−) | (+) | (+) | (+) | (+) | ——
F7 | Mean | 1.3664E+01 | 1.4687E+00 | 1.9841E-03 | 1.4422E-04 | 8.0345E-02 | 3.7002E-03 | 2.6953E-02 | 2.0066E-03 | 7.5552E-03 | 9.1955E-02 | 1.3950E-04 | 5.0156E-04
F7 | Std | 8.7006E+00 | 3.2863E+00 | 1.0485E-03 | 1.5180E-04 | 2.9376E-02 | 4.5906E-03 | 6.3070E-03 | 7.3785E-04 | 4.0167E-03 | 4.1518E-02 | 1.4087E-04 | 6.8478E-04
F7 | P | 3.0199E-11 | 3.0199E-11 | 2.1947E-08 | 6.6689E-03 | 3.0199E-11 | 4.1127E-07 | 3.0199E-11 | 4.5726E-09 | 4.0772E-11 | 3.0199E-11 | 4.8560E-03 | ——
F7 | Wr | (+) | (+) | (+) | (−) | (+) | (+) | (+) | (+) | (+) | (+) | (−) | ——
F8 | Mean | 4.8999E+04 | 3.6819E+01 | 2.1156E-03 | 1.3081E-04 | 1.9022E-01 | 5.5122E-03 | 2.5917E-02 | 1.8879E-03 | 7.1894E-03 | 4.3129E-01 | 8.3222E-05 | 2.2013E-04
F8 | Std | 2.4132E+04 | 3.5570E+01 | 1.1600E-03 | 1.3126E-04 | 7.7122E-02 | 6.3741E-03 | 8.7268E-03 | 1.1092E-03 | 3.4171E-03 | 2.3226E-01 | 1.1264E-04 | 2.2498E-04
F8 | P | 3.0199E-11 | 3.0199E-11 | 5.4941E-11 | 9.9258E-02 | 3.0199E-11 | 7.6950E-08 | 3.0199E-11 | 5.4941E-11 | 3.0199E-11 | 3.0199E-11 | 4.0330E-03 | ——
F8 | Wr | (+) | (+) | (+) | (=) | (+) | (+) | (+) | (+) | (+) | (+) | (−) | ——
F9 | Mean | 2.3492E-05 | 3.2403E-05 | 9.0901E-07 | 3.0590E-07 | 3.0590E-07 | 3.0590E-07 | 3.0590E-07 | 9.7708E-07 | 1.0893E-04 | 3.0590E-07 | 2.3376E-06 | 3.0590E-07
F9 | Std | 1.7190E-05 | 3.9762E-05 | 5.6252E-07 | 2.6922E-22 | 2.6922E-22 | 2.6922E-22 | 2.6922E-22 | 7.3193E-07 | 1.0355E-04 | 2.6922E-22 | 1.5292E-06 | 2.6922E-22
F9 | P | 1.2118E-12 | 1.0239E-12 | 1.2118E-12 | NaN | NaN | NaN | NaN | 1.2118E-12 | 1.2118E-12 | NaN | 1.2118E-12 | ——
F9 | Wr | (+) | (+) | (+) | (=) | (=) | (=) | (=) | (+) | (+) | (=) | (+) | ——
F10 | Mean | 1.0145E+17 | 8.4409E+08 | 8.1966E-87 | 5.5583E-118 | 3.3982E+08 | 7.6823E-102 | 1.8948E-06 | 6.0445E-87 | 2.1559E-88 | 3.7142E+06 | 2.7945E-207 | 0.0000E+00
F10 | Std | 3.9055E+17 | 2.6029E+09 | 4.2284E-86 | 2.1965E-117 | 1.8250E+09 | 4.2059E-101 | 2.9163E-06 | 3.1459E-86 | 1.1808E-87 | 1.8277E+07 | 0.0000E+00 | 0.0000E+00
F10 | P | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | ——
F10 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F11 | Mean | 4.1073E+03 | 1.9695E+01 | 1.2612E-28 | 6.3211E-98 | 1.3240E-03 | 2.7798E-76 | 5.5566E-07 | 1.3055E-28 | 6.1163E-32 | 2.3031E-02 | 6.5380E-111 | 0.0000E+00
F11 | Std | 2.5445E+03 | 9.5147E+00 | 1.9626E-28 | 3.4427E-97 | 1.9224E-03 | 9.6563E-76 | 2.5350E-07 | 2.5006E-28 | 1.4303E-31 | 7.7778E-02 | 3.5781E-110 | 0.0000E+00
F11 | P | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | ——
F11 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F12 | Mean | 1.1708E+04 | 3.4868E+02 | 6.6669E-01 | 2.4973E-01 | 9.6624E-01 | 6.6701E-01 | 6.6667E-01 | 6.6668E-01 | 6.7791E-01 | 3.2344E+00 | 6.6667E-01 | 2.4509E-01
F12 | Std | 8.6335E+03 | 1.2065E+03 | 5.8385E-05 | 7.7056E-04 | 7.7851E-01 | 3.9840E-04 | 1.3368E-05 | 2.6822E-05 | 6.0836E-02 | 4.0159E+00 | 2.1578E-07 | 1.1400E-02
F12 | P | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 1.0407E-04 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | ——
F12 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F13 | Mean | 2.9962E+05 | 4.9379E+00 | 0.0000E+00 | 6.1378E-243 | 0.0000E+00 | 0.0000E+00 | 2.7895E-01 | 0.0000E+00 | 0.0000E+00 | 2.2045E-67 | 0.0000E+00 | 0.0000E+00
F13 | Std | 5.6755E+05 | 1.4569E+01 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 8.4603E-01 | 0.0000E+00 | 0.0000E+00 | 7.4589E-67 | 0.0000E+00 | 0.0000E+00
F13 | P | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.7016E-08 | NaN | NaN | 1.2118E-12 | NaN | NaN | 1.2118E-12 | NaN | ——
F13 | Wr | (+) | (+) | (+) | (+) | (=) | (=) | (+) | (=) | (=) | (+) | (=) | ——
F14 | Mean | 6.6487E+05 | 1.4240E+03 | 1.0865E-199 | 2.1045E-100 | 3.5179E-291 | 7.7139E-107 | 4.5316E+01 | 2.5201E-78 | 1.5299E-262 | 6.3626E-57 | 1.8398E-240 | 0.0000E+00
F14 | Std | 1.4463E+06 | 3.4291E+03 | 0.0000E+00 | 1.1527E-99 | 0.0000E+00 | 2.8284E-106 | 3.7111E+01 | 1.3803E-77 | 0.0000E+00 | 2.8667E-56 | 0.0000E+00 | 0.0000E+00
F14 | P | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 1.2118E-12 | 5.8522E-09 | 1.2118E-12 | 1.2118E-12 | ——
F14 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F15 | Mean | -2.3265E+03 | -7.2536E+03 | -6.2034E+03 | -1.2569E+04 | -7.3992E+03 | -9.6875E+03 | -5.4185E+03 | -6.4035E+03 | -6.6208E+03 | -8.3848E+03 | -6.7829E+03 | -1.2568E+04
F15 | Std | 4.9200E+02 | 1.0589E+03 | 5.9953E+02 | 7.9718E-01 | 1.1202E+03 | 1.8009E+03 | 2.8361E+00 | 7.6384E+02 | 6.1800E+02 | 5.2569E+02 | 8.8936E+02 | 1.4088E+00
F15 | P | 3.0199E-11 | 3.0199E-11 | 3.0199E-11 | 1.0188E-05 | 3.0199E-11 | 3.0199E-11 | 3.1602E-12 | 3.1602E-12 | 3.1602E-12 | 3.1602E-12 | 3.1602E-12 | ——
F15 | Wr | (+) | (+) | (+) | (−) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F16 | Mean | 2.6969E+02 | 2.0122E+02 | 3.9097E+00 | 0.0000E+00 | 2.3545E+02 | 1.8948E-15 | 1.3219E+02 | 1.1182E+00 | 1.6074E+02 | 8.0006E+01 | 0.0000E+00 | 0.0000E+00
F16 | Std | 5.1650E+01 | 3.7574E+01 | 4.9670E+00 | 0.0000E+00 | 2.8249E+01 | 1.0378E-14 | 6.4843E+01 | 1.8024E+00 | 4.5943E+01 | 2.2926E+01 | 0.0000E+00 | 0.0000E+00
F16 | P | 1.2118E-12 | 1.2118E-12 | 1.1921E-12 | NaN | 1.2118E-12 | 3.3371E-01 | 1.2118E-12 | 1.1462E-12 | 1.2118E-12 | 1.2118E-12 | NaN | ——
F16 | Wr | (+) | (+) | (+) | (=) | (+) | (+) | (+) | (+) | (+) | (+) | (=) | ——
F17 | Mean | 1.9918E+01 | 6.2986E+00 | 1.0427E-13 | 8.8818E-16 | 1.2668E+01 | 4.7962E-15 | 1.0646E+01 | 1.0451E-13 | 1.8196E-01 | 2.3619E+00 | 8.8818E-16 | 8.8818E-16
Std4.0001 × 10−012.7508 × 10001.5169 × 10−140.0000 × 10009.8240 × 10002.1580 × 10−151.0128 × 10011.8093 × 10−146.9249 × 10−019.2248 × 10−010.0000 × 10000.0000 × 1000
P1.2118 × 10−121.2118 × 10−121.0947 × 10−12NaN1.2118 × 10−128.0416 × 10−111.2118 × 10−121.1453 × 10−128.6036 × 10−131.2118 × 10−12NaN——
Wr(+)(+)(+)(=)(+)(+)(+)(+)(+)(+)(=)——
F18Mean1.9614 × 10024.0914 × 10003.5675 × 10−030.0000 × 10008.9299 × 10−025.0132 × 10−031.3486 × 10−045.0762 × 10−035.8190 × 10−031.8384 × 10−010.0000 × 10000.0000 × 1000
Std7.7683 × 10011.6958 × 10007.6611 × 10−030.0000 × 10001.7726 × 10−012.7459 × 10−024.9410 × 10−059.6453 × 10−039.3203 × 10−032.0792 × 10−010.0000 × 10000.0000 × 1000
P1.2118 × 10−121.2118 × 10−121.1035 × 10−02NaN1.2118 × 10−123.3371 × 10−011.2118 × 10−122.7880 × 10−031.4552 × 10−041.2118 × 10−12NaN——
Wr(+)(+)(+)(=)(+)(=)(+)(+)(+)(+)(=)——
F19Mean2.3968 × 10075.3044 × 10005.3218 × 10−021.2764 × 10−052.4202 × 10002.3134 × 10−021.2759 × 10−064.7523 × 10−022.7727 × 10002.0992 × 10001.0182 × 10−011.6282 × 10−05
Std2.4219 × 10072.2030 × 10002.7722 × 10−021.1330 × 10−052.8971 × 10002.7220 × 10−024.8945 × 10−073.3122 × 10−022.9934 × 10001.7822 × 10004.8569 × 10−021.7897 × 10−05
P3.0199 × 10−113.0199 × 10−113.0199 × 10−117.3940 × 10−013.0199 × 10−113.0199 × 10−115.8587 × 10−063.0199 × 10−113.0199 × 10−113.0199 × 10−113.0199 × 10−11——
Wr(+)(+)(+)(=)(+)(+)(−)(+)(+)(+)(+)——
F20Mean9.3930 × 10077.9378 × 10017.1729 × 10−016.5993 × 10−051.5778 × 10005.5696 × 10−011.7743 × 10−056.7547 × 10−012.5680 × 10004.8580 × 10002.4786 × 10001.4439 × 10−04
Std7.8709 × 10073.0201 × 10022.2320 × 10−019.3433 × 10−052.2858 × 10002.3534 × 10−019.7499 × 10−062.3352 × 10−013.9348 × 10−015.6359 × 10003.1401 × 10−012.2463 × 10−04
P3.0199 × 10−113.0199 × 10−113.0199 × 10−112.1156 × 10−013.0199 × 10−113.0199 × 10−118.5641 × 10−043.0199 × 10−113.0199 × 10−113.0199 × 10−113.0199 × 10−11——
Wr(+)(+)(+)(=)(+)(+)(−)(+)(+)(+)(+)——
Wilcoxon’s rank sum test20/0/020/0/020/0/09/8/318/2/017/3/016/1/319/1/019/1/019/1/013/5/2——
Friedman value1.1450 × 10011.0350 × 10015.4625 × 10002.8875 × 10008.2375 × 10005.4625 × 10006.7625 × 10005.8125 × 10006.9125 × 10008.4750 × 10003.8125 × 10002.3750 × 1000
Friedman rank121142947681031
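The P rows in these tables are Wilcoxon rank-sum p-values comparing each competitor against MSCSO over the independent runs, and NaN marks comparisons where both algorithms return identical results on every run, so no ranking is possible. A minimal sketch of that test using the standard normal approximation with tie correction — the run data below are hypothetical placeholders, not values taken from the tables:

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation
    (with tie correction). x and y hold the final fitness values of two
    algorithms over repeated runs, e.g. 30 runs as in the tables above."""
    n1, n2 = len(x), len(y)
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * (n1 + n2)
    tie_sum = 0
    i = 0
    while i < len(combined):                 # average ranks within tie groups
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        t = j - i + 1
        tie_sum += t ** 3 - t
        for k in range(i, j + 1):
            ranks[combined[k][1]] = (i + j) / 2 + 1
        i = j + 1
    n = n1 + n2
    w = sum(ranks[:n1])                      # rank sum of the first sample
    mu = n1 * (n + 1) / 2
    var = n1 * n2 / 12 * ((n + 1) - tie_sum / (n * (n - 1)))
    if var == 0:                             # all runs identical in both samples
        return float("nan")                  # -> the NaN entries in the P rows
    z = (w - mu) / math.sqrt(var)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Hypothetical run results (placeholders, not values from the tables):
a = [1e-5 * (i + 1) for i in range(30)]   # one algorithm's 30 final errors
b = [1e-9 * (i + 1) for i in range(30)]   # a clearly better algorithm
c = [0.0] * 30                            # an algorithm that always reaches 0

print(rank_sum_p(a, b))   # fully separated 30-vs-30 samples -> p around 3e-11
print(rank_sum_p(c, c))   # identical zero samples -> nan
```

This also explains why p-values near 3.0199 × 10^−11 recur throughout the tables: it is the limiting value for two completely separated samples of 30 runs each.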
Table 5. Comparison of results on 100-dimensional non-fixed dimensional functions.
F(x) | Metric | GA | PSO | GWO | HHO | ACO | WOA | CMA-ES | SOGWO | EGWO | TACPSO | SCSO | MSCSO
F1 | Mean | 2.2796 × 10^05 | 4.1309 × 10^03 | 1.9649 × 10^−12 | 8.1783 × 10^−94 | 1.1445 × 10^05 | 7.9144 × 10^−73 | 1.7426 × 10^01 | 3.0552 × 10^−12 | 8.5587 × 10^−16 | 6.2226 × 10^03 | 8.4594 × 10^−104 | 0.0000 × 10^00
F1 | Std | 2.1903 × 10^04 | 1.5709 × 10^03 | 1.6092 × 10^−12 | 4.4712 × 10^−93 | 1.4191 × 10^04 | 4.1414 × 10^−72 | 3.4458 × 10^00 | 2.3656 × 10^−12 | 1.3989 × 10^−15 | 2.2899 × 10^03 | 4.4412 × 10^−103 | 0.0000 × 10^00
F1 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | ——
F1 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F2 | Mean | 2.6881 × 10^02 | 7.8502 × 10^01 | 4.3109 × 10^−08 | 6.2289 × 10^−50 | 2.8672 × 10^23 | 1.4848 × 10^−49 | 1.0948 × 10^01 | 4.3769 × 10^−08 | 1.0503 × 10^−10 | 1.0912 × 10^02 | 4.7898 × 10^−55 | 0.0000 × 10^00
F2 | Std | 1.6706 × 10^01 | 1.7924 × 10^01 | 1.6356 × 10^−08 | 3.1766 × 10^−49 | 1.2002 × 10^24 | 6.6602 × 10^−49 | 2.0627 × 10^00 | 1.6609 × 10^−08 | 8.7046 × 10^−11 | 3.1014 × 10^01 | 1.9093 × 10^−54 | 0.0000 × 10^00
F2 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | ——
F2 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F3 | Mean | 6.0279 × 10^05 | 1.2364 × 10^05 | 8.1177 × 10^02 | 9.7975 × 10^−65 | 5.4162 × 10^05 | 1.0311 × 10^06 | 4.0662 × 10^05 | 1.5336 × 10^03 | 2.7234 × 10^04 | 8.6711 × 10^04 | 3.5996 × 10^−87 | 0.0000 × 10^00
F3 | Std | 1.6306 × 10^05 | 4.8487 × 10^04 | 1.2470 × 10^03 | 5.2305 × 10^−64 | 6.0352 × 10^04 | 3.2664 × 10^05 | 5.0995 × 10^04 | 1.2227 × 10^03 | 1.3463 × 10^04 | 1.5929 × 10^04 | 1.9623 × 10^−86 | 0.0000 × 10^00
F3 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | ——
F3 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F4 | Mean | 9.3715 × 10^01 | 2.2901 × 10^01 | 1.2118 × 10^00 | 4.1398 × 10^−48 | 9.7205 × 10^01 | 7.6390 × 10^01 | 9.9136 × 10^01 | 1.4083 × 10^00 | 7.2801 × 10^01 | 4.7416 × 10^01 | 3.0384 × 10^−47 | 0.0000 × 10^00
F4 | Std | 2.6212 × 10^00 | 5.0716 × 10^00 | 1.8149 × 10^00 | 1.5216 × 10^−47 | 1.2321 × 10^00 | 1.9532 × 10^01 | 1.3058 × 10^00 | 1.1685 × 10^00 | 9.1623 × 10^00 | 3.4754 × 10^00 | 1.5472 × 10^−46 | 0.0000 × 10^00
F4 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | ——
F4 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F5 | Mean | 8.8575 × 10^08 | 2.9507 × 10^05 | 9.7907 × 10^01 | 6.1032 × 10^−02 | 1.1344 × 10^09 | 9.8113 × 10^01 | 9.2200 × 10^02 | 9.7793 × 10^01 | 9.8304 × 10^01 | 3.2176 × 10^06 | 9.8589 × 10^01 | 8.1960 × 10^−01
F5 | Std | 1.2162 × 10^08 | 1.7422 × 10^05 | 7.6201 × 10^−01 | 9.6318 × 10^−02 | 2.1787 × 10^08 | 2.7765 × 10^−01 | 6.9787 × 10^02 | 8.1377 × 10^−01 | 5.2734 × 10^−01 | 2.0883 × 10^06 | 1.9354 × 10^−01 | 1.4538 × 10^00
F5 | P | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 1.0188 × 10^−05 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | ——
F5 | Wr | (+) | (+) | (+) | (−) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F6 | Mean | 2.2850 × 10^05 | 4.0293 × 10^03 | 9.9389 × 10^00 | 2.4948 × 10^−04 | 1.1967 × 10^05 | 4.3402 × 10^00 | 1.7082 × 10^01 | 1.0292 × 10^01 | 1.5039 × 10^01 | 6.3488 × 10^03 | 1.4374 × 10^01 | 8.8979 × 10^−04
F6 | Std | 2.1925 × 10^04 | 1.4985 × 10^03 | 9.6829 × 10^−01 | 2.8808 × 10^−04 | 1.4465 × 10^04 | 1.3761 × 10^00 | 3.0428 × 10^00 | 9.8669 × 10^−01 | 1.0101 × 10^00 | 2.0877 × 10^03 | 1.3427 × 10^00 | 9.0667 × 10^−04
F6 | P | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.3679 × 10^−04 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | ——
F6 | Wr | (+) | (+) | (+) | (−) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F7 | Mean | 1.2880 × 10^03 | 1.8519 × 10^01 | 7.3043 × 10^−03 | 1.4652 × 10^−04 | 8.3847 × 10^02 | 4.8867 × 10^−03 | 1.6478 × 10^−01 | 5.9650 × 10^−03 | 3.1838 × 10^−02 | 1.4838 × 10^01 | 2.7908 × 10^−04 | 3.8534 × 10^−04
F7 | Std | 2.7904 × 10^02 | 3.4375 × 10^01 | 3.3281 × 10^−03 | 1.2302 × 10^−04 | 2.7992 × 10^02 | 5.4639 × 10^−03 | 2.7897 × 10^−02 | 2.2195 × 10^−03 | 1.7459 × 10^−02 | 1.3548 × 10^01 | 5.8690 × 10^−04 | 6.0105 × 10^−04
F7 | P | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 2.9047 × 10^−01 | 3.0199 × 10^−11 | 2.3897 × 10^−08 | 3.0199 × 10^−11 | 3.3384 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 4.6427 × 10^−01 | ——
F7 | Wr | (+) | (+) | (+) | (=) | (+) | (+) | (+) | (+) | (+) | (+) | (=) | ——
F8 | Mean | 4.6556 × 10^06 | 1.0718 × 10^04 | 8.7212 × 10^−03 | 1.3345 × 10^−04 | 3.1155 × 10^06 | 4.0181 × 10^−03 | 1.4693 × 10^00 | 9.0207 × 10^−03 | 3.5840 × 10^−02 | 1.5888 × 10^04 | 1.9658 × 10^−04 | 3.2212 × 10^−04
F8 | Std | 1.0644 × 10^06 | 3.0968 × 10^04 | 3.4001 × 10^−03 | 1.0575 × 10^−04 | 5.1347 × 10^05 | 4.7751 × 10^−03 | 4.9195 × 10^−01 | 3.4850 × 10^−03 | 2.0738 × 10^−02 | 1.1767 × 10^04 | 1.8249 × 10^−04 | 7.1252 × 10^−04
F8 | P | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 8.4180 × 10^−01 | 3.0199 × 10^−11 | 8.3520 × 10^−08 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 5.1060 × 10^−01 | ——
F8 | Wr | (+) | (+) | (+) | (=) | (+) | (+) | (+) | (+) | (+) | (+) | (=) | ——
F9 | Mean | 3.2977 × 10^−08 | 1.5079 × 10^−09 | 6.4755 × 10^−16 | 1.9287 × 10^−22 | 1.9287 × 10^−22 | 1.9287 × 10^−22 | 7.7732 × 10^−19 | 1.1438 × 10^−15 | 9.5797 × 10^−10 | 1.1098 × 10^−21 | 2.5733 × 10^−13 | 1.9287 × 10^−22
F9 | Std | 6.1594 × 10^−08 | 3.0777 × 10^−09 | 7.9144 × 10^−16 | 0.0000 × 10^00 | 1.0729 × 10^−36 | 0.0000 × 10^00 | 9.9456 × 10^−19 | 2.5022 × 10^−15 | 2.4510 × 10^−09 | 2.0133 × 10^−21 | 4.8276 × 10^−13 | 0.0000 × 10^00
F9 | P | 1.2118 × 10^−12 | 1.1064 × 10^−12 | 1.2118 × 10^−12 | NaN | 1.2864 × 10^−08 | NaN | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 3.0208 × 10^−07 | 1.2118 × 10^−12 | ——
F9 | Wr | (+) | (+) | (+) | (=) | (+) | (=) | (+) | (+) | (+) | (+) | (+) | ——
F10 | Mean | 5.1483 × 10^82 | 3.5975 × 10^51 | 2.6735 × 10^−53 | 3.4519 × 10^−121 | 1.1690 × 10^73 | 3.5429 × 10^−82 | 9.3166 × 10^48 | 5.8934 × 10^−51 | 2.6623 × 10^27 | 4.0074 × 10^44 | 4.6375 × 10^−211 | 0.0000 × 10^00
F10 | Std | 1.8609 × 10^83 | 1.8265 × 10^52 | 1.4630 × 10^−52 | 1.7459 × 10^−120 | 5.1268 × 10^73 | 1.9405 × 10^−81 | 3.9737 × 10^49 | 2.8320 × 10^−50 | 1.4581 × 10^28 | 1.8306 × 10^45 | 0.0000 × 10^00 | 0.0000 × 10^00
F10 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | ——
F10 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F11 | Mean | 3.9619 × 10^05 | 4.1432 × 10^03 | 7.0203 × 10^−13 | 1.8974 × 10^−95 | 5.3103 × 10^05 | 4.1676 × 10^−72 | 3.1191 × 10^00 | 9.9913 × 10^−13 | 9.9624 × 10^−16 | 2.3445 × 10^03 | 2.0590 × 10^−104 | 0.0000 × 10^00
F11 | Std | 8.6142 × 10^04 | 9.7870 × 10^03 | 5.6979 × 10^−13 | 7.8619 × 10^−95 | 1.1470 × 10^05 | 1.9547 × 10^−71 | 5.6696 × 10^−01 | 8.3325 × 10^−13 | 2.1026 × 10^−15 | 9.3347 × 10^02 | 1.1273 × 10^−103 | 0.0000 × 10^00
F11 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | ——
F11 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F12 | Mean | 1.2109 × 10^06 | 1.0441 × 10^04 | 6.8895 × 10^−01 | 2.5098 × 10^−01 | 7.4605 × 10^05 | 6.6762 × 10^−01 | 2.0708 × 10^01 | 6.6674 × 10^−01 | 6.6668 × 10^−01 | 5.1931 × 10^03 | 6.6667 × 10^−01 | 2.9749 × 10^−01
F12 | Std | 1.7705 × 10^05 | 2.2701 × 10^04 | 8.4553 × 10^−02 | 1.6496 × 10^−03 | 2.4831 × 10^05 | 9.8690 × 10^−04 | 5.6741 × 10^00 | 2.5311 × 10^−05 | 2.2236 × 10^−05 | 3.3124 × 10^03 | 6.3080 × 10^−07 | 8.4863 × 10^−02
F12 | P | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 7.6973 × 10^−04 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | ——
F12 | Wr | (+) | (+) | (+) | (−) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F13 | Mean | 6.9563 × 10^06 | 3.2556 × 10^00 | 0.0000 × 10^00 | 3.7898 × 10^−215 | 0.0000 × 10^00 | 0.0000 × 10^00 | 7.8789 × 10^00 | 0.0000 × 10^00 | 0.0000 × 10^00 | 1.7522 × 10^−66 | 0.0000 × 10^00 | 0.0000 × 10^00
F13 | Std | 2.4021 × 10^07 | 9.7576 × 10^00 | 0.0000 × 10^00 | 0.0000 × 10^00 | 0.0000 × 10^00 | 0.0000 × 10^00 | 1.6600 × 10^01 | 0.0000 × 10^00 | 0.0000 × 10^00 | 9.0765 × 10^−66 | 0.0000 × 10^00 | 0.0000 × 10^00
F13 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 5.8522 × 10^−09 | NaN | NaN | 1.2118 × 10^−12 | NaN | NaN | 1.2118 × 10^−12 | NaN | ——
F13 | Wr | (+) | (+) | (=) | (+) | (=) | (=) | (+) | (=) | (=) | (+) | (=) | ——
F14 | Mean | 1.0643 × 10^07 | 1.1869 × 10^03 | 1.5923 × 10^−202 | 1.7476 × 10^−99 | 2.1583 × 10^−296 | 2.6076 × 10^−102 | 2.7539 × 10^02 | 5.1224 × 10^−62 | 1.4005 × 10^−267 | 4.5004 × 10^−58 | 5.1791 × 10^−244 | 0.0000 × 10^00
F14 | Std | 3.3461 × 10^07 | 2.9411 × 10^03 | 0.0000 × 10^00 | 9.5623 × 10^−99 | 0.0000 × 10^00 | 1.0957 × 10^−101 | 3.0001 × 10^02 | 2.8057 × 10^−61 | 0.0000 × 10^00 | 1.1347 × 10^−57 | 0.0000 × 10^00 | 0.0000 × 10^00
F14 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.9457 × 10^−09 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | ——
F14 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F15 | Mean | −4.0513 × 10^03 | −1.4926 × 10^04 | −1.6092 × 10^04 | −4.1896 × 10^04 | −1.5725 × 10^04 | −3.5063 × 10^04 | −1.3104 × 10^04 | −1.6515 × 10^04 | −1.7161 × 10^04 | −2.2572 × 10^04 | −1.9187 × 10^04 | −4.1892 × 10^04
F15 | Std | 6.2481 × 10^02 | 2.0901 × 10^03 | 2.4194 × 10^03 | 3.3138 × 10^00 | 3.4541 × 10^03 | 5.4556 × 10^03 | 5.2254 × 10^02 | 3.3694 × 10^03 | 1.5809 × 10^03 | 1.6226 × 10^03 | 1.4736 × 10^03 | 7.4988 × 10^00
F15 | P | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 8.1200 × 10^−04 | 3.0199 × 10^−11 | 3.8202 × 10^−10 | 2.9691 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | ——
F15 | Wr | (+) | (+) | (+) | (−) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F16 | Mean | 1.5305 × 10^03 | 8.7163 × 10^02 | 1.0704 × 10^01 | 0.0000 × 10^00 | 1.4030 × 10^03 | 2.2737 × 10^−14 | 9.1420 × 10^02 | 9.2606 × 10^00 | 8.3838 × 10^02 | 4.5994 × 10^02 | 0.0000 × 10^00 | 0.0000 × 10^00
F16 | Std | 7.4282 × 10^01 | 9.8737 × 10^01 | 6.0539 × 10^00 | 0.0000 × 10^00 | 4.1987 × 10^01 | 6.9378 × 10^−14 | 3.3052 × 10^01 | 5.2711 × 10^00 | 1.5836 × 10^02 | 6.3770 × 10^01 | 0.0000 × 10^00 | 0.0000 × 10^00
F16 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 1.2118 × 10^−12 | 8.1404 × 10^−02 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | ——
F16 | Wr | (+) | (+) | (+) | (=) | (+) | (−) | (+) | (+) | (+) | (+) | (=) | ——
F17 | Mean | 2.0813 × 10^01 | 8.9150 × 10^00 | 1.1883 × 10^−07 | 8.8818 × 10^−16 | 2.0778 × 10^01 | 4.4409 × 10^−15 | 1.8915 × 10^01 | 1.3312 × 10^−07 | 6.8008 × 10^−09 | 1.2287 × 10^01 | 8.8818 × 10^−16 | 8.8818 × 10^−16
F17 | Std | 1.1073 × 10^−01 | 3.2976 × 10^00 | 4.2129 × 10^−08 | 0.0000 × 10^00 | 3.3228 × 10^−02 | 2.2853 × 10^−15 | 4.3504 × 10^00 | 4.9113 × 10^−08 | 5.1643 × 10^−09 | 9.6392 × 10^−01 | 0.0000 × 10^00 | 0.0000 × 10^00
F17 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 1.2118 × 10^−12 | 9.8401 × 10^−10 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | ——
F17 | Wr | (+) | (+) | (+) | (=) | (+) | (+) | (+) | (+) | (+) | (+) | (=) | ——
F18 | Mean | 2.0005 × 10^03 | 3.6336 × 10^01 | 5.6536 × 10^−03 | 0.0000 × 10^00 | 1.0152 × 10^03 | 0.0000 × 10^00 | 1.1646 × 10^00 | 9.6033 × 10^−03 | 7.9317 × 10^−03 | 5.8580 × 10^01 | 0.0000 × 10^00 | 0.0000 × 10^00
F18 | Std | 2.3542 × 10^02 | 1.5089 × 10^01 | 1.2226 × 10^−02 | 0.0000 × 10^00 | 1.3432 × 10^02 | 0.0000 × 10^00 | 2.2550 × 10^−02 | 1.3643 × 10^−02 | 1.2348 × 10^−02 | 1.9599 × 10^01 | 0.0000 × 10^00 | 0.0000 × 10^00
F18 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 1.2118 × 10^−12 | NaN | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.9154 × 10^−09 | 1.2118 × 10^−12 | NaN | ——
F18 | Wr | (+) | (+) | (+) | (=) | (+) | (=) | (+) | (+) | (+) | (+) | (=) | ——
F19 | Mean | 1.7363 × 10^09 | 1.0009 × 10^03 | 3.0634 × 10^−01 | 3.7823 × 10^−06 | 3.1386 × 10^09 | 4.9345 × 10^−02 | 3.2312 × 10^−01 | 2.8489 × 10^−01 | 1.5560 × 10^01 | 6.9676 × 10^04 | 3.4836 × 10^−01 | 1.3032 × 10^−05
F19 | Std | 4.2025 × 10^08 | 5.2710 × 10^03 | 7.2027 × 10^−02 | 4.4886 × 10^−06 | 2.8856 × 10^08 | 2.2000 × 10^−02 | 1.2786 × 10^−01 | 7.0244 × 10^−02 | 1.0702 × 10^01 | 9.3768 × 10^04 | 9.5973 × 10^−02 | 1.5304 × 10^−05
F19 | P | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.1830 × 10^−03 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | ——
F19 | Wr | (+) | (+) | (+) | (−) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F20 | Mean | 3.5933 × 10^09 | 5.6724 × 10^04 | 6.9081 × 10^00 | 1.5929 × 10^−04 | 5.5965 × 10^09 | 2.7952 × 10^00 | 4.0591 × 10^00 | 6.8193 × 10^00 | 2.7049 × 10^01 | 2.3673 × 10^06 | 9.7397 × 10^00 | 4.4916 × 10^−04
F20 | Std | 5.9842 × 10^08 | 8.4445 × 10^04 | 4.2975 × 10^−01 | 1.5891 × 10^−04 | 6.1779 × 10^08 | 8.5577 × 10^−01 | 1.0288 × 10^00 | 4.0145 × 10^−01 | 4.2160 × 10^01 | 2.7130 × 10^06 | 9.3944 × 10^−02 | 6.4329 × 10^−04
F20 | P | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 1.2235 × 10^−01 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | ——
F20 | Wr | (+) | (+) | (+) | (=) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
Wilcoxon’s rank sum test | 20/0/0 | 20/0/0 | 19/1/0 | 8/7/5 | 19/1/0 | 16/3/1 | 20/0/0 | 19/1/0 | 19/1/0 | 20/0/0 | 14/6/0 | ——
Friedman value | 1.1150 × 10^01 | 9.4750 × 10^00 | 5.4375 × 10^00 | 2.4500 × 10^00 | 9.7250 × 10^00 | 5.0250 × 10^00 | 8.2250 × 10^00 | 5.7625 × 10^00 | 6.2875 × 10^00 | 8.8000 × 10^00 | 3.4500 × 10^00 | 2.2125 × 10^00
Friedman rank | 12 | 10 | 5 | 2 | 11 | 4 | 8 | 6 | 7 | 9 | 3 | 1
Table 6. Comparison of results on 500-dimensional non-fixed dimensional functions.
F(x) | Metric | GA | PSO | GWO | HHO | ACO | WOA | CMA-ES | SOGWO | EGWO | TACPSO | SCSO | MSCSO
F1 | Mean | 1.4977 × 10^06 | 3.1397 × 10^04 | 1.7613 × 10^−03 | 8.8328 × 10^−95 | 1.5648 × 10^06 | 2.9543 × 10^−68 | 2.2126 × 10^05 | 2.2573 × 10^−03 | 1.0849 × 10^−05 | 2.9514 × 10^05 | 5.6077 × 10^−97 | 0.0000 × 10^00
F1 | Std | 4.4331 × 10^04 | 8.5563 × 10^03 | 5.5599 × 10^−04 | 3.3822 × 10^−94 | 3.5002 × 10^04 | 1.6151 × 10^−67 | 1.6771 × 10^04 | 8.5204 × 10^−04 | 1.2365 × 10^−05 | 1.9603 × 10^04 | 2.7758 × 10^−96 | 0.0000 × 10^00
F1 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | ——
F1 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F2 | Mean | 5.4992 × 10^222 | 4.2782 × 10^02 | 1.1083 × 10^−02 | 3.5377 × 10^−47 | 1.7181 × 10^268 | 2.5354 × 10^−49 | 1.0609 × 10^150 | 4.0119 × 10^150 | 1.3815 × 10^−04 | 4.2159 × 10^23 | 2.0238 × 10^−52 | 0.0000 × 10^00
F2 | Std | Inf | 7.8024 × 10^01 | 1.7558 × 10^−03 | 1.6076 × 10^−46 | Inf | 1.0333 × 10^−48 | 4.0119 × 10^150 | 1.6260 × 10^−03 | 7.1073 × 10^−05 | 2.2984 × 10^24 | 3.7943 × 10^−52 | 0.0000 × 10^00
F2 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | ——
F2 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F3 | Mean | 1.9802 × 10^07 | 3.0740 × 10^06 | 3.2480 × 10^05 | 9.0454 × 10^−46 | 1.3429 × 10^07 | 3.0265 × 10^07 | 9.0761 × 10^06 | 3.6470 × 10^05 | 1.4732 × 10^06 | 2.0009 × 10^06 | 6.2146 × 10^−81 | 0.0000 × 10^00
F3 | Std | 8.0170 × 10^06 | 1.8959 × 10^06 | 8.2783 × 10^04 | 4.9542 × 10^−45 | 2.0759 × 10^06 | 1.0334 × 10^07 | 8.7328 × 10^05 | 9.2667 × 10^04 | 2.5059 × 10^05 | 5.0885 × 10^05 | 3.3915 × 10^−80 | 0.0000 × 10^00
F3 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | ——
F3 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F4 | Mean | 9.9138 × 10^01 | 3.5549 × 10^01 | 6.4807 × 10^01 | 1.2872 × 10^−48 | 9.9396 × 10^01 | 8.2266 × 10^01 | 9.9812 × 10^01 | 6.6963 × 10^01 | 9.7072 × 10^01 | 6.9371 × 10^01 | 2.1131 × 10^−44 | 0.0000 × 10^00
F4 | Std | 2.6133 × 10^−01 | 4.8545 × 10^00 | 4.0676 × 10^00 | 4.1619 × 10^−48 | 2.8396 × 10^−01 | 2.4726 × 10^01 | 2.4606 × 10^−01 | 5.0286 × 10^00 | 6.4644 × 10^−01 | 1.7810 × 10^00 | 1.1365 × 10^−43 | 0.0000 × 10^00
F4 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | ——
F4 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F5 | Mean | 6.8192 × 10^09 | 1.0362 × 10^07 | 4.9804 × 10^02 | 1.6816 × 10^−01 | 4.9891 × 10^02 | 4.9631 × 10^02 | 3.5042 × 10^08 | 4.9807 × 10^02 | 9.7400 × 10^04 | 4.2102 × 10^08 | 4.9845 × 10^02 | 2.0871 × 10^01
F5 | Std | 2.8680 × 10^08 | 6.4980 × 10^06 | 2.6747 × 10^−01 | 2.8584 × 10^−01 | 4.1249 × 10^−02 | 3.8186 × 10^−01 | 5.0317 × 10^07 | 3.8521 × 10^−01 | 3.0459 × 10^05 | 4.9563 × 10^07 | 8.9807 × 10^−02 | 5.0497 × 10^01
F5 | P | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 4.6856 × 10^−08 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | ——
F5 | Wr | (+) | (+) | (+) | (−) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F6 | Mean | 1.4913 × 10^06 | 3.5389 × 10^04 | 9.1566 × 10^01 | 3.7310 × 10^−03 | 1.5608 × 10^06 | 3.2604 × 10^01 | 2.2310 × 10^05 | 9.1592 × 10^01 | 1.0576 × 10^02 | 3.0073 × 10^05 | 1.0674 × 10^02 | 7.0995 × 10^−03
F6 | Std | 4.2928 × 10^04 | 1.0628 × 10^04 | 2.2706 × 10^00 | 4.2920 × 10^−03 | 4.1429 × 10^04 | 9.6870 × 10^00 | 1.2747 × 10^04 | 1.8252 × 10^00 | 1.4150 × 10^00 | 1.4538 × 10^04 | 2.7812 × 10^00 | 1.1931 × 10^−02
F6 | P | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 1.3732 × 10^−01 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | ——
F6 | Wr | (+) | (+) | (+) | (=) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F7 | Mean | 5.6255 × 10^04 | 2.5865 × 10^03 | 5.0710 × 10^−02 | 2.3710 × 10^−04 | 6.0085 × 10^04 | 3.9607 × 10^−03 | 4.7686 × 10^03 | 4.5808 × 10^−02 | 2.1429 × 10^00 | 6.0645 × 10^03 | 1.1311 × 10^−04 | 4.8096 × 10^−04
F7 | Std | 2.6073 × 10^03 | 1.7603 × 10^03 | 1.2792 × 10^−02 | 4.0484 × 10^−04 | 2.5759 × 10^03 | 4.7070 × 10^−03 | 4.9426 × 10^02 | 9.6847 × 10^−03 | 2.0928 × 10^00 | 1.1985 × 10^03 | 9.3870 × 10^−05 | 9.7120 × 10^−04
F7 | P | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 7.5059 × 10^−01 | 3.0199 × 10^−11 | 5.0723 × 10^−10 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.6322 × 10^−01 | ——
F7 | Wr | (+) | (+) | (+) | (=) | (+) | (+) | (+) | (+) | (+) | (+) | (=) | ——
F8 | Mean | 2.0767 × 10^08 | 1.1618 × 10^06 | 1.1913 × 10^−01 | 1.8727 × 10^−04 | 2.2269 × 10^08 | 3.6961 × 10^−03 | 1.8301 × 10^07 | 1.2687 × 10^−01 | 4.4363 × 10^03 | 1.2149 × 10^07 | 2.7154 × 10^−04 | 4.1224 × 10^−04
F8 | Std | 1.0727 × 10^07 | 1.1187 × 10^06 | 2.7234 × 10^−02 | 1.5671 × 10^−04 | 7.6520 × 10^06 | 5.6367 × 10^−03 | 2.4005 × 10^06 | 2.3678 × 10^−02 | 2.2041 × 10^04 | 1.6307 × 10^06 | 4.9701 × 10^−04 | 6.7304 × 10^−04
F8 | P | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 2.3399 × 10^−01 | 3.0199 × 10^−11 | 1.0277 × 10^−06 | 1.6980 × 10^−08 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 2.5805 × 10^−01 | ——
F8 | Wr | (+) | (+) | (+) | (=) | (+) | (+) | (+) | (+) | (+) | (+) | (=) | ——
F9 | Mean | 1.7925 × 10^−10 | 1.3682 × 10^−18 | 2.0314 × 10^−38 | 2.6692 × 10^−109 | 2.2041 × 10^−108 | 2.6692 × 10^−109 | 7.8137 × 10^−65 | 3.8006 × 10^−38 | 6.4597 × 10^−24 | 8.7207 × 10^−101 | 3.7782 × 10^−30 | 2.6692 × 10^−109
F9 | Std | 9.1302 × 10^−10 | 5.7709 × 10^−18 | 5.5846 × 10^−38 | 9.6162 × 10^−125 | 3.4167 × 10^−108 | 9.6162 × 10^−125 | 2.0501 × 10^−64 | 1.5688 × 10^−37 | 3.1494 × 10^−23 | 4.7480 × 10^−100 | 2.0638 × 10^−29 | 9.6162 × 10^−125
F9 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2000 × 10^−12 | NaN | 1.2118 × 10^−12 | NaN | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.1651 × 10^−12 | 1.2118 × 10^−12 | ——
F9 | Wr | (+) | (+) | (+) | (=) | (+) | (=) | (+) | (+) | (+) | (+) | (+) | ——
F10 | Mean | Inf | Inf | 3.6934 × 10^10 | 4.0490 × 10^−119 | Inf | 1.2129 × 10^−101 | Inf | 2.5286 × 10^−04 | 3.7059 × 10^207 | 3.3333 × 10^232 | 1.3386 × 10^−210 | 0.0000 × 10^00
F10 | Std | NaN | NaN | 2.0229 × 10^11 | 2.2168 × 10^−118 | NaN | 6.6401 × 10^−101 | NaN | 1.3850 × 10^−03 | Inf | Inf | 0.0000 × 10^00 | 0.0000 × 10^00
F10 | P | 1.6853 × 10^−14 | 1.6853 × 10^−14 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.6853 × 10^−14 | 1.2118 × 10^−12 | 1.6853 × 10^−14 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | ——
F10 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F11 | Mean | 1.5676 × 10^07 | 3.6029 × 10^05 | 7.0041 × 10^−03 | 2.0151 × 10^−94 | 1.6670 × 10^07 | 4.9140 × 10^−70 | 8.8429 × 10^05 | 6.5687 × 10^−03 | 2.2807 × 10^00 | 8.7724 × 10^00 | 1.2842 × 10^−97 | 0.0000 × 10^00
F11 | Std | 6.3020 × 10^05 | 3.2012 × 10^05 | 2.3146 × 10^−03 | 2.0151 × 10^−94 | 6.6729 × 10^05 | 1.8243 × 10^−69 | 1.0381 × 10^05 | 2.4959 × 10^−03 | 8.7724 × 10^00 | 1.3034 × 10^05 | 4.7136 × 10^−97 | 0.0000 × 10^00
F11 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | ——
F11 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F12 | Mean | 5.2208 × 10^07 | 1.0206 × 10^06 | 8.1360 × 10^−01 | 2.5151 × 10^−01 | 5.6751 × 10^07 | 7.0253 × 10^−01 | 4.8663 × 10^06 | 8.8203 × 10^−01 | 1.7687 × 10^02 | 3.4207 × 10^06 | 6.6667 × 10^−01 | 7.0904 × 10^−01
F12 | Std | 2.0914 × 10^06 | 7.9598 × 10^05 | 1.5772 × 10^−01 | 3.7954 × 10^−03 | 1.7916 × 10^06 | 7.5690 × 10^−02 | 5.1206 × 10^05 | 1.5918 × 10^−01 | 5.5992 × 10^02 | 3.4454 × 10^05 | 7.6552 × 10^−07 | 7.0904 × 10^−01
F12 | P | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 8.7710 × 10^−02 | 5.5727 × 10^−10 | 3.0199 × 10^−11 | 1.9073 × 10^−01 | 3.0199 × 10^−11 | 6.5486 × 10^−04 | 1.0702 × 10^−09 | 3.0199 × 10^−11 | 7.7272 × 10^−02 | ——
F12 | Wr | (+) | (+) | (=) | (−) | (+) | (=) | (+) | (+) | (+) | (+) | (=) | ——
F13 | Mean | 5.5514 × 10^06 | 1.7456 × 10^00 | 0.0000 × 10^00 | 3.8344 × 10^−204 | 0.0000 × 10^00 | 0.0000 × 10^00 | 1.1121 × 10^01 | 0.0000 × 10^00 | 0.0000 × 10^00 | 4.6247 × 10^−67 | 0.0000 × 10^00 | 0.0000 × 10^00
F13 | Std | 7.7247 × 10^06 | 4.0383 × 10^00 | 0.0000 × 10^00 | 0.0000 × 10^00 | 0.0000 × 10^00 | 0.0000 × 10^00 | 2.0778 × 10^01 | 0.0000 × 10^00 | 0.0000 × 10^00 | 1.8707 × 10^−66 | 0.0000 × 10^00 | 0.0000 × 10^00
F13 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 1.9346 × 10^−10 | NaN | NaN | 1.2118 × 10^−12 | NaN | NaN | 1.2118 × 10^−12 | NaN | ——
F13 | Wr | (+) | (+) | (=) | (+) | (=) | (=) | (+) | (=) | (=) | (+) | (=) | ——
F14 | Mean | 8.2125 × 10^06 | 4.5534 × 10^02 | 1.9021 × 10^−206 | 2.5887 × 10^−100 | 6.4825 × 10^−294 | 4.6056 × 10^−102 | 3.6800 × 10^02 | 2.7047 × 10^−47 | 2.8033 × 10^−256 | 1.6283 × 10^−57 | 4.7963 × 10^−237 | 0.0000 × 10^00
F14 | Std | 1.3622 × 10^07 | 1.6572 × 10^03 | 0.0000 × 10^00 | 1.4145 × 10^−99 | 0.0000 × 10^00 | 2.3305 × 10^−101 | 2.6847 × 10^02 | 1.1742 × 10^−46 | 0.0000 × 10^00 | 5.5162 × 10^−57 | 0.0000 × 10^00 | 0.0000 × 10^00
F14 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.9346 × 10^−10 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | ——
F14 | Wr | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F15 | Mean | −9.4627 × 10^03 | −3.5233 × 10^04 | −5.4762 × 10^04 | −2.0948 × 10^05 | −3.0797 × 10^04 | −1.8122 × 10^05 | −2.2446 × 10^04 | −5.2633 × 10^04 | −4.7961 × 10^04 | −6.3985 × 10^04 | −6.1348 × 10^04 | −2.0944 × 10^05
F15 | Std | 1.7324 × 10^03 | 5.0990 × 10^03 | 1.2302 × 10^04 | 1.8715 × 10^01 | 6.6733 × 10^03 | 2.6623 × 10^04 | 2.2163 × 10^03 | 1.3708 × 10^04 | 4.3140 × 10^03 | 4.5266 × 10^03 | 3.6742 × 10^03 | 8.2445 × 10^01
F15 | P | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 2.1506 × 10^−02 | 3.0199 × 10^−11 | 9.9186 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | ——
F15 | Wr | (+) | (+) | (+) | (−) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F16 | Mean | 8.7350 × 10^03 | 4.2617 × 10^03 | 6.9260 × 10^01 | 0.0000 × 10^00 | 8.8896 × 10^03 | 0.0000 × 10^00 | 6.9922 × 10^03 | 9.7784 × 10^01 | 5.2329 × 10^03 | 4.4655 × 10^03 | 0.0000 × 10^00 | 0.0000 × 10^00
F16 | Std | 1.0567 × 10^02 | 4.7422 × 10^02 | 1.8520 × 10^01 | 0.0000 × 10^00 | 1.3020 × 10^02 | 0.0000 × 10^00 | 9.7784 × 10^01 | 2.6807 × 10^01 | 1.0524 × 10^03 | 1.5555 × 10^02 | 0.0000 × 10^00 | 0.0000 × 10^00
F16 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 1.2118 × 10^−12 | NaN | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | ——
F16 | Wr | (+) | (+) | (+) | (=) | (+) | (=) | (+) | (+) | (+) | (+) | (=) | ——
F17 | Mean | 2.1104 × 10^01 | 1.1179 × 10^01 | 1.7992 × 10^−03 | 8.8818 × 10^−16 | 2.1013 × 10^01 | 5.2699 × 10^−15 | 2.0988 × 10^01 | 1.9941 × 10^−03 | 1.4581 × 10^−04 | 1.8243 × 10^01 | 8.8818 × 10^−16 | 8.8818 × 10^−16
F17 | Std | 2.7540 × 10^−02 | 3.2017 × 10^00 | 3.0725 × 10^−04 | 0.0000 × 10^00 | 1.1582 × 10^−02 | 2.4120 × 10^−15 | 1.4466 × 10^−02 | 3.8350 × 10^−04 | 1.0052 × 10^−04 | 1.7898 × 10^−01 | 0.0000 × 10^00 | 0.0000 × 10^00
F17 | P | 1.2118 × 10^−12 | 1.2078 × 10^−12 | 1.2118 × 10^−12 | NaN | 1.2118 × 10^−12 | 1.1003 × 10^−10 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | ——
F17 | Wr | (+) | (+) | (+) | (=) | (+) | (+) | (+) | (+) | (+) | (+) | (=) | ——
F18 | Mean | 1.3531 × 10^04 | 3.0519 × 10^02 | 1.1862 × 10^−02 | 0.0000 × 10^00 | 1.4049 × 10^04 | 0.0000 × 10^00 | 1.9821 × 10^03 | 4.1634 × 10^−02 | 1.2083 × 10^−02 | 2.6489 × 10^03 | 0.0000 × 10^00 | 0.0000 × 10^00
F18 | Std | 3.4343 × 10^02 | 8.1281 × 10^01 | 3.1222 × 10^−02 | 0.0000 × 10^00 | 4.0180 × 10^02 | 0.0000 × 10^00 | 1.3456 × 10^02 | 5.8013 × 10^−02 | 2.6952 × 10^−02 | 1.2709 × 10^02 | 0.0000 × 10^00 | 0.0000 × 10^00
F18 | P | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 1.2118 × 10^−12 | NaN | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | ——
F18 | Wr | (+) | (+) | (+) | (=) | (+) | (=) | (+) | (+) | (+) | (+) | (=) | ——
F19 | Mean | 1.6871 × 10^10 | 2.3391 × 10^05 | 2.7477 × 10^05 | 3.2586 × 10^−06 | 1.8114 × 10^10 | 9.9567 × 10^−02 | 5.1739 × 10^08 | 7.9653 × 10^−01 | 4.3263 × 10^07 | 3.5742 × 10^08 | 7.8758 × 10^−01 | 7.6443 × 10^−06
F19 | Std | 8.2188 × 10^08 | 2.7477 × 10^05 | 6.3557 × 10^−02 | 4.7260 × 10^−06 | 8.1213 × 10^08 | 4.9571 × 10^−02 | 1.1508 × 10^08 | 7.9933 × 10^−02 | 4.7311 × 10^07 | 6.4163 × 10^07 | 7.2447 × 10^−02 | 8.5768 × 10^−06
F19 | P | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 9.0688 × 10^−03 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | ——
F19 | Wr | (+) | (+) | (+) | (−) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
F20 | Mean | 3.0778 × 10^10 | 4.4311 × 10^06 | 5.0740 × 10^01 | 4.7357 × 10^−04 | 3.2917 × 10^10 | 1.8085 × 10^01 | 1.1541 × 10^09 | 5.1221 × 10^01 | 1.2245 × 10^07 | 1.7574 × 10^07 | 4.9816 × 10^01 | 2.0716 × 10^−03
F20 | Std | 1.4788 × 10^09 | 3.3146 × 10^06 | 1.6573 × 10^00 | 6.1694 × 10^−04 | 1.5376 × 10^09 | 7.5770 × 10^00 | 1.8369 × 10^08 | 1.8298 × 10^00 | 1.7574 × 10^07 | 1.7874 × 10^08 | 7.2371 × 10^−02 | 3.0945 × 10^−03
F20 | P | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 8.1200 × 10^−04 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | ——
F20 | Wr | (+) | (+) | (+) | (−) | (+) | (+) | (+) | (+) | (+) | (+) | (+) | ——
Wilcoxon’s rank sum test | 20/0/0 | 20/0/0 | 18/2/0 | 8/7/5 | 19/1/0 | 15/5/0 | 20/0/0 | 19/1/0 | 19/1/0 | 20/0/0 | 13/7/0 | ——
Friedman value | 1.0888 × 10^01 | 8.7250 × 10^00 | 5.4375 × 10^00 | 2.5625 × 10^00 | 9.4750 × 10^00 | 4.6625 × 10^00 | 9.1750 × 10^00 | 6.3125 × 10^00 | 6.7000 × 10^00 | 8.5875 × 10^00 | 3.2250 × 10^00 | 2.2500 × 10^00
Friedman rank | 12 | 9 | 5 | 2 | 11 | 4 | 10 | 6 | 7 | 8 | 3 | 1
Table 7. Comparison of results on fixed-dimensional functions.
F(x) | Metric | GA | PSO | GWO | HHO | ACO | WOA | CMA-ES | SOGWO | EGWO | TACPSO | SCSO | MSCSO
F21 | Mean | 9.9996 × 10^−01 | 9.9800 × 10^−01 | 4.2992 × 10^00 | 1.6235 × 10^00 | 4.1334 × 10^00 | 3.6486 × 10^00 | 7.4200 × 10^00 | 3.2311 × 10^00 | 8.8986 × 10^00 | 9.9800 × 10^−01 | 4.5586 × 10^00 | 9.9800 × 10^−01
F21 | Std | 1.0163 × 10^−02 | 1.6020 × 10^−10 | 3.8844 × 10^00 | 1.5213 × 10^00 | 4.0030 × 10^00 | 3.5083 × 10^00 | 4.1400 × 10^00 | 2.6989 × 10^00 | 5.0473 × 10^00 | 1.4867 × 10^−16 | 4.0421 × 10^00 | 2.5409 × 10^−12
F21 | P | 3.6897 × 10^−11 | 2.6099 × 10^−10 | 9.9186 × 10^−11 | 3.3384 × 10^−11 | 1.8398 × 10^−01 | 4.0772 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.4548 × 10^−10 | 7.7632 × 10^−12 | 7.6950 × 10^−08 | ——
F21 | Wr | (+) | (+) | (+) | (+) | (=) | (+) | (+) | (+) | (+) | (−) | (+) | ——
F22 | Mean | 1.6642 × 10^−02 | 1.0024 × 10^−02 | 3.2080 × 10^−03 | 3.7552 × 10^−04 | 4.0808 × 10^−03 | 7.9626 × 10^−04 | 5.3984 × 10^−03 | 7.7841 × 10^−03 | 8.5570 × 10^−03 | 1.1908 × 10^−03 | 4.7696 × 10^−04 | 4.9924 × 10^−04
F22 | Std | 1.9314 × 10^−02 | 9.8328 × 10^−03 | 6.8552 × 10^−03 | 1.7199 × 10^−04 | 7.4081 × 10^−03 | 5.5383 × 10^−04 | 3.8866 × 10^−03 | 9.6429 × 10^−03 | 1.3336 × 10^−02 | 3.6376 × 10^−03 | 3.5300 × 10^−04 | 3.3424 × 10^−04
F22 | P | 4.9752 × 10^−11 | 1.3289 × 10^−10 | 5.1877 × 10^−02 | 7.9585 × 10^−01 | 1.6057 × 10^−06 | 3.8481 × 10^−03 | 3.0199 × 10^−11 | 3.9881 × 10^−04 | 2.0095 × 10^−01 | 1.5013 × 10^−02 | 7.4827 × 10^−02 | ——
F22 | Wr | (+) | (+) | (=) | (=) | (+) | (+) | (+) | (+) | (=) | (−) | (=) | ——
F23 | Mean | −9.7347 × 10^−01 | −1.0316 × 10^00 | −1.0316 × 10^00 | −1.0316 × 10^00 | −1.0316 × 10^00 | −1.0316 × 10^00 | −1.0316 × 10^00 | −1.0316 × 10^00 | −1.0316 × 10^00 | −1.0316 × 10^00 | −1.0316 × 10^00 | −1.0316 × 10^00
F23 | Std | 7.7637 × 10^−02 | 3.1472 × 10^−05 | 4.3488 × 10^−08 | 2.5625 × 10^−09 | 6.7752 × 10^−16 | 3.0772 × 10^−09 | 6.7752 × 10^−16 | 2.6709 × 10^−08 | 4.0944 × 10^−09 | 6.0459 × 10^−16 | 7.3591 × 10^−10 | 5.3275 × 10^−08
F23 | P | 3.0199 × 10^−11 | 3.6897 × 10^−11 | 4.4440 × 10^−07 | 9.0292 × 10^−04 | 1.2118 × 10^−12 | 2.8913 × 10^−03 | 1.2118 × 10^−12 | 1.8682 × 10^−05 | 4.1279 × 10^−03 | 1.2455 × 10^−11 | 5.7460 × 10^−02 | ——
F23 | Wr | (+) | (+) | (+) | (−) | (−) | (−) | (−) | (+) | (+) | (−) | (=) | ——
F24 | Mean | 3.9902 × 10^−01 | 3.9789 × 10^−01 | 3.9789 × 10^−01 | 3.9789 × 10^−01 | 3.9789 × 10^−01 | 3.9790 × 10^−01 | 3.9789 × 10^−01 | 3.9789 × 10^−01 | 3.9789 × 10^−01 | 3.9789 × 10^−01 | 3.9789 × 10^−01 | 3.9789 × 10^−01
F24 | Std | 2.3114 × 10^−03 | 4.4680 × 10^−06 | 1.1344 × 10^−06 | 1.8372 × 10^−06 | 0.0000 × 10^00 | 1.5164 × 10^−05 | 0.0000 × 10^00 | 8.6789 × 10^−07 | 1.6174 × 10^−07 | 0.0000 × 10^00 | 2.2747 × 10^−08 | 5.6349 × 10^−08
F24 | P | 3.6897 × 10^−11 | 1.3289 × 10^−10 | 2.4386 × 10^−09 | 4.1127 × 10^−07 | 1.2118 × 10^−12 | 9.9186 × 10^−11 | 1.2118 × 10^−12 | 4.6159 × 10^−10 | 6.7362 × 10^−06 | 1.2118 × 10^−12 | 9.4696 × 10^−01 | ——
F24 | Wr | (+) | (+) | (+) | (+) | (−) | (+) | (−) | (+) | (+) | (−) | (=) | ——
F25 | Mean | 5.6019 × 10^00 | 3.0003 × 10^00 | 3.0000 × 10^00 | 3.0000 × 10^00 | 6.6000 × 10^00 | 3.0001 × 10^00 | 5.4112 × 10^00 | 5.7000 × 10^00 | 1.1368 × 10^01 | 3.0000 × 10^00 | 3.0000 × 10^00 | 3.0004 × 10^00
F25 | Std | 7.1539 × 10^00 | 4.8541 × 10^−04 | 3.6279 × 10^−05 | 2.4443 × 10^−06 | 1.5426 × 10^01 | 2.2400 × 10^−04 | 1.3207 × 10^01 | 1.4789 × 10^01 | 2.5562 × 10^01 | 2.2281 × 10^−15 | 1.0066 × 10^−05 | 1.6404 × 10^−03
F25 | P | 6.2828 × 10^−06 | 1.7666 × 10^−03 | 2.0621 × 10^−01 | 8.1014 × 10^−10 | 3.5048 × 10^−09 | 7.3940 × 10^−01 | 1.4516 × 10^−10 | 1.9579 × 10^−01 | 1.0315 × 10^−02 | 2.7391 × 10^−11 | 3.6709 × 10^−03 | ——
F25 | Wr | (+) | (+) | (=) | (−) | (−) | (=) | (−) | (=) | (−) | (−) | (−) | ——
F26 | Mean | −3.2300 × 10^00 | −3.8601 × 10^00 | −3.8621 × 10^00 | −3.8593 × 10^00 | −3.8628 × 10^00 | −3.8544 × 10^00 | −3.8628 × 10^00 | −3.8607 × 10^00 | −3.8618 × 10^00 | −3.8628 × 10^00 | −3.8618 × 10^00 | −3.8618 × 10^00
F26 | Std | 4.4069 × 10^−01 | 3.7334 × 10^−03 | 1.8444 × 10^−03 | 4.3845 × 10^−03 | 2.7101 × 10^−15 | 1.5473 × 10^−02 | 2.7101 × 10^−15 | 3.1028 × 10^−03 | 2.5789 × 10^−03 | 2.5684 × 10^−15 | 2.4887 × 10^−03 | 2.6529 × 10^−03
F26 | P | 3.0199 × 10^−11 | 5.2978 × 10^−01 | 9.2344 × 10^−01 | 4.7138 × 10^−04 | 1.2118 × 10^−12 | 2.6695 × 10^−09 | 1.2118 × 10^−12 | 2.0621 × 10^−01 | 2.4157 × 10^−02 | 1.1364 × 10^−11 | 1.0763 × 10^−02 | ——
F26 | Wr | (+) | (=) | (=) | (+) | (−) | (+) | (−) | (=) | (−) | (−) | (−) | ——
F27 | Mean | −1.4744 × 10^00 | −3.0142 × 10^00 | −3.2249 × 10^00 | −3.0907 × 10^00 | −3.2784 × 10^00 | −3.2359 × 10^00 | −3.1633 × 10^00 | −3.2538 × 10^00 | −3.2658 × 10^00 | −3.2604 × 10^00 | −3.1612 × 10^00 | −3.2607 × 10^00
F27 | Std | 5.5810 × 10^−01 | −3.0142 × 10^00 | 1.0807 × 10^−01 | 9.3938 × 10^−02 | 5.8273 × 10^−02 | 9.1771 × 10^−02 | 1.8828 × 10^−01 | 8.2493 × 10^−02 | 8.0172 × 10^−02 | 6.3773 × 10^−02 | 2.1967 × 10^−01 | 7.4356 × 10^−02
F27 | P | 3.0199 × 10^−11 | 5.1857 × 10^−07 | 8.3026 × 10^−01 | 2.0152 × 10^−08 | 7.6093 × 10^−05 | 3.8481 × 10^−03 | 3.7449 × 10^−01 | 1.6687 × 10^−01 | 9.0490 × 10^−02 | 6.1841 × 10^−03 | 4.6427 × 10^−01 | ——
F27 | Wr | (+) | (+) | (=) | (+) | (−) | (+) | (=) | (=) | (=) | (−) | (=) | ——
F28 | Mean | −9.1447 × 10^−01 | −8.5278 × 10^00 | −9.0605 × 10^00 | −5.5277 × 10^00 | −5.4185 × 10^00 | −7.9164 × 10^00 | −7.3444 × 10^00 | −9.0563 × 10^00 | −6.0541 × 10^00 | −6.6356 × 10^00 | −5.2589 × 10^00 | −1.0153 × 10^01
F28 | Std | 6.9185 × 10^−01 | 2.5091 × 10^00 | 2.2568 × 10^00 | 1.4563 × 10^00 | 3.6642 × 10^00 | 2.7743 × 10^00 | 3.2173 × 10^00 | 2.2649 × 10^00 | 3.3166 × 10^00 | 3.2628 × 10^00 | 1.5331 × 10^00 | 1.5412 × 10^−04
F28 | P | 3.0199 × 10^−11 | 5.9615 × 10^−09 | 4.0772 × 10^−11 | 3.0199 × 10^−11 | 7.3554 × 10^−02 | 3.0199 × 10^−11 | 6.6056 × 10^−01 | 6.0658 × 10^−11 | 6.6955 × 10^−11 | 3.7805 × 10^−01 | 5.5727 × 10^−10 | ——
F28 | Wr | (+) | (+) | (+) | (+) | (=) | (+) | (=) | (+) | (+) | (=) | (+) | ——
F29 | Mean | −9.9176 × 10^−01 | −9.0792 × 10^00 | −9.8709 × 10^00 | −5.2550 × 10^00 | −7.0297 × 10^00 | −8.4873 × 10^00 | −9.1620 × 10^00 | −1.0401 × 10^01 | −7.9763 × 10^00 | −7.9478 × 10^00 | −6.9606 × 10^00 | −1.0403 × 10^01
F29 | Std | 4.9834 × 10^−01 | 2.4464 × 10^00 | 1.6171 × 10^00 | 9.3970 × 10^−01 | 3.6826 × 10^00 | 3.0282 × 10^00 | 2.8266 × 10^00 | 8.0404 × 10^−04 | 3.5316 × 10^00 | 3.3520 × 10^00 | 2.6969 × 10^00 | 3.6603 × 10^−05
F29 | P | 3.0199 × 10^−11 | 1.3854 × 10^−09 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 6.6168 × 10^−01 | 4.0772 × 10^−11 | 7.8548 × 10^−06 | 3.0199 × 10^−11 | 4.5043 × 10^−11 | 7.4445 × 10^−02 | 4.9980 × 10^−09 | ——
F29 | Wr | (+) | (+) | (+) | (+) | (=) | (+) | (−) | (+) | (+) | (=) | (+) | ——
F30 | Mean | −1.4199 × 10^00 | −1.0508 × 10^01 | −1.0354 × 10^01 | −5.0779 × 10^00 | −8.3101 × 10^00 | −7.4034 × 10^00 | −9.4552 × 10^00 | −9.9938 × 10^00 | −6.7009 × 10^00 | −8.1710 × 10^00 | −6.0465 × 10^00 | −1.0536 × 10^01
F30 | Std | 7.8499 × 10^−01 | 3.8371 × 10^−02 | 9.8702 × 10^−01 | 2.4010 × 10^−01 | 3.5013 × 10^00 | 3.2711 × 10^00 | 2.8037 × 10^00 | 2.0583 × 10^00 | 3.9600 × 10^00 | 3.2254 × 10^00 | 2.3410 × 10^00 | 3.5501 × 10^−05
F30 | P | 3.0199 × 10^−11 | 1.0808 × 10^−10 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 6.8412 × 10^−03 | 3.0199 × 10^−11 | 3.7124 × 10^−07 | 3.0199 × 10^−11 | 3.3384 × 10^−11 | 7.6240 × 10^−02 | 1.3111 × 10^−08 | ——
F30 | Wr | (+) | (+) | (+) | (+) | (−) | (+) | (−) | (+) | (+) | (=) | (+) | ——
Wilcoxon’s rank sum test | 10/0/0 | 9/1/0 | 6/4/0 | 7/1/2 | 1/3/6 | 8/1/1 | 2/2/6 | 7/3/0 | 6/2/2 | 0/3/7 | 4/4/2 | ——
Friedman value | 9.4500 × 10^00 | 6.2250 × 10^00 | 5.5000 × 10^00 | 6.0000 × 10^00 | 6.8750 × 10^00 | 7.2500 × 10^00 | 6.4750 × 10^00 | 6.2250 × 10^00 | 8.5250 × 10^00 | 4.5500 × 10^00 | 6.5500 × 10^00 | 4.3750 × 10^00
Friedman rank | 12 | 5 | 3 | 4 | 9 | 10 | 7 | 5 | 11 | 2 | 8 | 1
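The Friedman value rows give each algorithm's mean rank across the benchmark functions (lower is better), and the Friedman rank rows order the algorithms by that mean, with tied means sharing the smallest rank. A short illustration of that computation on a small, made-up score matrix (the tie-handling convention here mirrors the tied integer ranks visible in the rank rows; the paper's exact convention is not shown in this section):

```python
def friedman_mean_ranks(scores):
    """scores[f][a] = mean result of algorithm a on function f (lower is better).
    Returns each algorithm's mean rank across all functions; tied entries
    within a function share the smallest rank of the tie group."""
    n_alg = len(scores[0])
    totals = [0.0] * n_alg
    for row in scores:
        ordered = sorted(range(n_alg), key=lambda a: row[a])
        rank = [0] * n_alg
        r, i = 1, 0
        while i < n_alg:
            j = i
            while j + 1 < n_alg and row[ordered[j + 1]] == row[ordered[i]]:
                j += 1
            for k in range(i, j + 1):   # tied algorithms share the same rank
                rank[ordered[k]] = r
            r += j - i + 1
            i = j + 1
        for a in range(n_alg):
            totals[a] += rank[a]
    return [t / len(scores) for t in totals]

# Made-up 3-function x 3-algorithm example (not data from the tables):
scores = [
    [0.5, 0.1, 0.1],
    [2.0, 1.0, 0.5],
    [3.0, 3.0, 1.0],
]
print(friedman_mean_ranks(scores))   # algorithm 3 ranks first on every function
```

Sorting the resulting mean ranks then yields the final ordering reported in the Friedman rank rows.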
Table 8. Classification accuracy results on feature selection datasets.
| Dataset | Metric | BACO | BBOA | BHHO | BWOA | BGWO | BGA | BPSO | BMSCSO |
|---|---|---|---|---|---|---|---|---|---|
| N1 | Mean | 9.8333E−01 | 9.8333E−01 | 9.8333E−01 | 9.8000E−01 | 9.8333E−01 | 9.8333E−01 | 9.8333E−01 | 9.8333E−01 |
| | Std | 1.7568E−02 | 1.7568E−02 | 1.7568E−02 | 1.7213E−02 | 1.7568E−02 | 1.7568E−02 | 1.7568E−02 | 1.7568E−02 |
| N2 | Mean | 9.4000E−01 | 9.1857E−01 | 9.0571E−01 | 9.3714E−01 | 9.5286E−01 | 9.3571E−01 | 9.3143E−01 | 9.4571E−01 |
| | Std | 3.0713E−02 | 3.4372E−02 | 4.4772E−02 | 3.8803E−02 | 3.6916E−02 | 3.8832E−02 | 4.4057E−02 | 3.2156E−02 |
| N3 | Mean | 9.7000E−01 | 9.2000E−01 | 9.4000E−01 | 9.4000E−01 | 9.6000E−01 | 9.6500E−01 | 9.7000E−01 | 9.8500E−01 |
| | Std | 3.4960E−02 | 2.5820E−02 | 3.9441E−02 | 3.9441E−02 | 3.9441E−02 | 4.1164E−02 | 3.4960E−02 | 2.4152E−02 |
| N4 | Mean | 9.6571E−01 | 9.2571E−01 | 9.2857E−01 | 9.0857E−01 | 9.5714E−01 | 9.5429E−01 | 9.5714E−01 | 9.6571E−01 |
| | Std | 2.6255E−02 | 2.4094E−02 | 3.8686E−02 | 8.2808E−02 | 4.3121E−02 | 3.3537E−02 | 3.3672E−02 | 2.6255E−02 |
| N5 | Mean | 7.5238E−01 | 7.1429E−01 | 7.0714E−01 | 7.2381E−01 | 7.3810E−01 | 7.3571E−01 | 7.4048E−01 | 7.5238E−01 |
| | Std | 6.3690E−02 | 7.0093E−02 | 5.6176E−02 | 5.9603E−02 | 6.2492E−02 | 7.2261E−02 | 6.1935E−02 | 6.3690E−02 |
| N6 | Mean | 9.1684E−01 | 9.0526E−01 | 9.1263E−01 | 9.1158E−01 | 9.6421E−01 | 9.6421E−01 | 9.4842E−01 | 9.2737E−01 |
| | Std | 2.5520E−02 | 3.1773E−02 | 2.9377E−02 | 3.5158E−02 | 2.5879E−02 | 1.8699E−02 | 2.8699E−02 | 2.5520E−02 |
| N7 | Mean | 8.7963E−01 | 8.3704E−01 | 8.5000E−01 | 8.4630E−01 | 8.4815E−01 | 8.7037E−01 | 8.6667E−01 | 8.8519E−01 |
| | Std | 3.7293E−02 | 6.1605E−02 | 2.3827E−02 | 4.2811E−02 | 6.2830E−02 | 3.3810E−02 | 2.2764E−02 | 3.1232E−02 |
| N8 | Mean | 9.1034E−01 | 8.6207E−01 | 8.5862E−01 | 8.8621E−01 | 9.1379E−01 | 9.0345E−01 | 8.8276E−01 | 9.1724E−01 |
| | Std | 2.9078E−02 | 2.8155E−02 | 4.4368E−02 | 4.3161E−02 | 2.4383E−02 | 5.0887E−02 | 3.3314E−02 | 3.3314E−02 |
| N9 | Mean | 9.4615E−01 | 9.3077E−01 | 9.3077E−01 | 9.3846E−01 | 9.3333E−01 | 9.3333E−01 | 9.3590E−01 | 9.5385E−01 |
| | Std | 1.4555E−02 | 1.2386E−02 | 2.9731E−02 | 2.1622E−02 | 1.7928E−02 | 2.4772E−02 | 2.7695E−02 | 2.3562E−02 |
| N10 | Mean | 9.7950E−01 | 7.4950E−01 | 7.5000E−01 | 8.0400E−01 | 9.0400E−01 | 9.7200E−01 | 9.1500E−01 | 9.8850E−01 |
| | Std | 3.3702E−02 | 6.4354E−02 | 7.4498E−02 | 7.5085E−02 | 1.2449E−01 | 8.8544E−02 | 1.1350E−01 | 1.1316E−02 |
| N11 | Mean | 9.8705E−01 | 9.7842E−01 | 9.8129E−01 | 9.8201E−01 | 9.8129E−01 | 9.8345E−01 | 9.8561E−01 | 9.8705E−01 |
| | Std | 8.1676E−03 | 1.3566E−02 | 9.1001E−03 | 1.0858E−02 | 9.1001E−03 | 1.0751E−02 | 8.3072E−03 | 8.1676E−03 |
| N12 | Mean | 9.6283E−01 | 9.5398E−01 | 9.5044E−01 | 9.5487E−01 | 9.6018E−01 | 9.5487E−01 | 9.5221E−01 | 9.6372E−01 |
| | Std | 1.3060E−02 | 1.4925E−02 | 1.7301E−02 | 1.4116E−02 | 1.5185E−02 | 2.0202E−02 | 1.4571E−02 | 1.3486E−02 |
| N13 | Mean | 9.2737E−01 | 9.0526E−01 | 9.2526E−01 | 9.2316E−01 | 9.6105E−01 | 9.6947E−01 | 9.5579E−01 | 9.5053E−01 |
| | Std | 2.9125E−02 | 2.8070E−02 | 2.2440E−02 | 3.1403E−02 | 2.1658E−02 | 2.4536E−02 | 2.7982E−02 | 2.8527E−02 |
| N14 | Mean | 8.5714E−01 | 8.0000E−01 | 8.5714E−01 | 8.5714E−01 | 9.2857E−01 | 8.7857E−01 | 8.5000E−01 | 8.7857E−01 |
| | Std | 8.9087E−02 | 1.1567E−01 | 8.9087E−02 | 1.1168E−01 | 5.8321E−02 | 9.5535E−02 | 1.1394E−01 | 9.5535E−02 |
| N15 | Mean | 8.4230E−01 | 8.0540E−01 | 8.3090E−01 | 8.3710E−01 | 8.3900E−01 | 8.4000E−01 | 8.3980E−01 | 8.4270E−01 |
| | Std | 7.4095E−03 | 1.1108E−02 | 1.3076E−02 | 8.5434E−03 | 8.7560E−03 | 7.2265E−03 | 8.4564E−03 | 7.1032E−03 |
| N16 | Mean | 9.0488E−01 | 8.7805E−01 | 8.8049E−01 | 8.8780E−01 | 9.5854E−01 | 9.4390E−01 | 9.3902E−01 | 9.1951E−01 |
| | Std | 2.4254E−02 | 3.4493E−02 | 3.1383E−02 | 3.2924E−02 | 2.3139E−02 | 2.0080E−02 | 3.4969E−02 | 2.8281E−02 |
| N17 | Mean | 9.6782E−01 | 9.5402E−01 | 9.4943E−01 | 9.5977E−01 | 9.7356E−01 | 9.6667E−01 | 9.6322E−01 | 9.6897E−01 |
| | Std | 1.1871E−02 | 2.0274E−02 | 2.8774E−02 | 2.1842E−02 | 1.3328E−02 | 2.0597E−02 | 2.0845E−02 | 1.3328E−02 |
| N18 | Mean | 7.6450E−01 | 7.4200E−01 | 7.4900E−01 | 7.5650E−01 | 7.7550E−01 | 7.7200E−01 | 7.7050E−01 | 7.7350E−01 |
| | Std | 1.6406E−02 | 1.6700E−02 | 2.2211E−02 | 1.7329E−02 | 1.7393E−02 | 2.6055E−02 | 1.6907E−02 | 1.7167E−02 |
| N19 | Mean | 7.6536E−01 | 7.4183E−01 | 7.4379E−01 | 7.5556E−01 | 7.5948E−01 | 7.6013E−01 | 7.5425E−01 | 7.6536E−01 |
| | Std | 2.2314E−02 | 1.9303E−02 | 2.1960E−02 | 2.0714E−02 | 1.8435E−02 | 2.1579E−02 | 1.8792E−02 | 2.2314E−02 |
| N20 | Mean | 9.7418E−01 | 9.0407E−01 | 9.6103E−01 | 9.6651E−01 | 9.6964E−01 | 9.8482E−01 | 9.6901E−01 | 9.7653E−01 |
| | Std | 7.2751E−03 | 4.8269E−02 | 1.0497E−02 | 8.5779E−03 | 1.1193E−02 | 7.7391E−03 | 1.3954E−02 | 4.5476E−03 |
| N21 | Mean | 7.1667E−01 | 6.9420E−01 | 6.9203E−01 | 6.9710E−01 | 7.2246E−01 | 7.1087E−01 | 7.1304E−01 | 7.1884E−01 |
| | Std | 1.2528E−02 | 1.5579E−02 | 2.1127E−02 | 1.2221E−02 | 1.2340E−02 | 2.4739E−02 | 1.7148E−02 | 1.7684E−02 |
| N22 | Mean | 7.7869E−01 | 7.6557E−01 | 7.6885E−01 | 7.7541E−01 | 7.7377E−01 | 7.7213E−01 | 7.7869E−01 | 7.7869E−01 |
| | Std | 3.2097E−02 | 3.1910E−02 | 3.1343E−02 | 2.8967E−02 | 3.4387E−02 | 2.9376E−02 | 3.2097E−02 | 3.2097E−02 |
| N23 | Mean | 8.6866E−01 | 8.1791E−01 | 8.5672E−01 | 8.6716E−01 | 8.6716E−01 | 8.6866E−01 | 8.6269E−01 | 8.7015E−01 |
| | Std | 3.0507E−02 | 5.9619E−02 | 4.1146E−02 | 3.1817E−02 | 3.4790E−02 | 2.9684E−02 | 3.9676E−02 | 2.9892E−02 |
| N24 | Mean | 8.4036E−01 | 8.3735E−01 | 8.3313E−01 | 8.3675E−01 | 8.3795E−01 | 8.3976E−01 | 8.4036E−01 | 8.4036E−01 |
| | Std | 3.2781E−02 | 3.3117E−02 | 2.9655E−02 | 3.5746E−02 | 3.4131E−02 | 3.3745E−02 | 3.2781E−02 | 3.2781E−02 |
Table 9. Friedman’s test results of classification accuracy on feature selection datasets.
| Dataset | BACO | BBOA | BHHO | BWOA | BGWO | BGA | BPSO | BMSCSO |
|---|---|---|---|---|---|---|---|---|
| N1 | 4 | 4 | 4 | 8 | 4 | 4 | 4 | 4 |
| N2 | 3 | 7 | 8 | 4 | 1 | 5 | 6 | 2 |
| N3 | 2.5 | 8 | 6.5 | 6.5 | 5 | 4 | 2.5 | 1 |
| N4 | 1.5 | 7 | 6 | 8 | 3.5 | 5 | 3.5 | 1.5 |
| N5 | 1.5 | 7 | 8 | 6 | 4 | 5 | 3 | 1.5 |
| N6 | 5 | 8 | 6 | 7 | 1.5 | 1.5 | 3 | 4 |
| N7 | 2 | 8 | 5 | 7 | 6 | 3 | 4 | 1 |
| N8 | 3 | 7 | 8 | 5 | 2 | 4 | 6 | 1 |
| N9 | 2 | 7.5 | 7.5 | 3 | 5.5 | 5.5 | 4 | 1 |
| N10 | 2 | 8 | 7 | 6 | 5 | 3 | 4 | 1 |
| N11 | 1.5 | 8 | 6.5 | 5 | 6.5 | 4 | 3 | 1.5 |
| N12 | 2 | 6 | 8 | 4.5 | 3 | 4.5 | 7 | 1 |
| N13 | 5 | 8 | 6 | 7 | 2 | 1 | 3 | 4 |
| N14 | 5 | 8 | 5 | 5 | 1 | 2.5 | 7 | 2.5 |
| N15 | 2 | 8 | 7 | 6 | 5 | 3 | 4 | 1 |
| N16 | 5 | 8 | 7 | 6 | 1 | 2 | 3 | 4 |
| N17 | 3 | 7 | 8 | 6 | 1 | 4 | 5 | 2 |
| N18 | 5 | 8 | 7 | 6 | 1 | 3 | 4 | 2 |
| N19 | 1.5 | 8 | 7 | 5 | 4 | 3 | 6 | 1.5 |
| N20 | 3 | 8 | 7 | 6 | 4 | 1 | 5 | 2 |
| N21 | 3 | 7 | 8 | 6 | 1 | 5 | 4 | 2 |
| N22 | 2 | 8 | 7 | 4 | 5 | 6 | 2 | 2 |
| N23 | 2.5 | 8 | 7 | 4.5 | 4.5 | 2.5 | 6 | 1 |
| N24 | 2 | 6 | 8 | 7 | 5 | 4 | 2 | 2 |
| Sum of ranks | 69 | 177.5 | 164.5 | 138.5 | 81.5 | 85.5 | 101 | 46.5 |
| Sum of ranks squared | 4761 | 31,506.25 | 27,060.25 | 19,182.25 | 6642.25 | 7310.25 | 10,201 | 2162.25 |
| Average of ranks | 2.8750 | 7.3958 | 6.8542 | 5.7708 | 3.3958 | 3.5625 | 4.2083 | 1.9375 |
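Each per-dataset row above contains tie-averaged Friedman ranks computed from the accuracies in Table 8 (rank 1 goes to the highest accuracy, and tied algorithms share the average of the rank positions they occupy), and the final row divides each rank sum by the 24 datasets. A small self-contained sketch of that ranking step (the function and variable names are illustrative):

```python
def friedman_ranks(scores, higher_is_better=True):
    """Tie-averaged ranks: rank 1 goes to the best score."""
    order = sorted(scores, reverse=higher_is_better)
    # average the 1-based rank positions occupied by each distinct score
    rank_of = {}
    for s in set(scores):
        positions = [i + 1 for i, v in enumerate(order) if v == s]
        rank_of[s] = sum(positions) / len(positions)
    return [rank_of[s] for s in scores]

# Dataset N1 accuracies from Table 8: seven algorithms tie at 0.98333 and
# BWOA trails at 0.98000, reproducing row N1 of Table 9.
n1 = [0.98333, 0.98333, 0.98333, 0.98000, 0.98333, 0.98333, 0.98333, 0.98333]
print(friedman_ranks(n1))  # [4.0, 4.0, 4.0, 8.0, 4.0, 4.0, 4.0, 4.0]

# BMSCSO's average rank is its rank sum over the 24 datasets:
print(46.5 / 24)  # 1.9375
```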
Table 10. Fitness results on feature selection datasets.
| Dataset | Metric | BACO | BBOA | BHHO | BWOA | BGWO | BGA | BPSO | BMSCSO |
|---|---|---|---|---|---|---|---|---|---|
| N1 | Mean | 2.0750E−02 | 2.1500E−02 | 2.1500E−02 | 2.4800E−02 | 2.0750E−02 | 2.1250E−02 | 2.1000E−02 | 2.0750E−02 |
| | Std | 1.7210E−02 | 1.7472E−02 | 1.7989E−02 | 1.7689E−02 | 1.7210E−02 | 1.7750E−02 | 1.7504E−02 | 1.7210E−02 |
| N2 | Mean | 6.0459E−02 | 8.2908E−02 | 9.5755E−02 | 6.3846E−02 | 4.8260E−02 | 6.6496E−02 | 7.0856E−02 | 5.5890E−02 |
| | Std | 3.0147E−02 | 3.3392E−02 | 4.4206E−02 | 3.7801E−02 | 3.6420E−02 | 3.8245E−02 | 4.3603E−02 | 3.1943E−02 |
| N3 | Mean | 3.4075E−02 | 8.3263E−02 | 6.4650E−02 | 6.3150E−02 | 4.2850E−02 | 3.7963E−02 | 3.3638E−02 | 1.9288E−02 |
| | Std | 3.3048E−02 | 2.5051E−02 | 3.8308E−02 | 3.7395E−02 | 3.8556E−02 | 3.9903E−02 | 3.4338E−02 | 2.3161E−02 |
| N4 | Mean | 3.7276E−02 | 7.6710E−02 | 7.3714E−02 | 9.3681E−02 | 4.5262E−02 | 4.8007E−02 | 4.5929E−02 | 3.7776E−02 |
| | Std | 2.5626E−02 | 2.3663E−02 | 3.8177E−02 | 8.1330E−02 | 4.3066E−02 | 3.3071E−02 | 3.3196E−02 | 2.6817E−02 |
| N5 | Mean | 2.4925E−01 | 2.8741E−01 | 2.9482E−01 | 2.7787E−01 | 2.6295E−01 | 2.6553E−01 | 2.6137E−01 | 2.4937E−01 |
| | Std | 6.2291E−02 | 6.8497E−02 | 5.5994E−02 | 5.7787E−02 | 6.1638E−02 | 7.1237E−02 | 6.0426E−02 | 6.2452E−02 |
| N6 | Mean | 8.6609E−02 | 9.7187E−02 | 8.9784E−02 | 9.2832E−02 | 3.7938E−02 | 3.8817E−02 | 5.5394E−02 | 7.6658E−02 |
| | Std | 2.4783E−02 | 3.1049E−02 | 2.9625E−02 | 3.4197E−02 | 2.5889E−02 | 1.8765E−02 | 2.8520E−02 | 2.5526E−02 |
| N7 | Mean | 1.2247E−01 | 1.6433E−01 | 1.5158E−01 | 1.5571E−01 | 1.5333E−01 | 1.3210E−01 | 1.3477E−01 | 1.1759E−01 |
| | Std | 3.5687E−02 | 6.0503E−02 | 2.3815E−02 | 4.2621E−02 | 6.2034E−02 | 3.3254E−02 | 2.2460E−02 | 3.0119E−02 |
| N8 | Mean | 9.2481E−02 | 1.4016E−01 | 1.4358E−01 | 1.1566E−01 | 8.8345E−02 | 9.9086E−02 | 1.1946E−01 | 8.6153E−02 |
| | Std | 2.8815E−02 | 2.7508E−02 | 4.3007E−02 | 4.2646E−02 | 2.4154E−02 | 5.0409E−02 | 3.3471E−02 | 3.2651E−02 |
| N9 | Mean | 5.4717E−02 | 7.0993E−02 | 7.1084E−02 | 6.1969E−02 | 6.7318E−02 | 6.6955E−02 | 6.5280E−02 | 4.7510E−02 |
| | Std | 1.4138E−02 | 1.1693E−02 | 2.9006E−02 | 2.1395E−02 | 1.8088E−02 | 2.4493E−02 | 2.7141E−02 | 2.2621E−02 |
| N10 | Mean | 2.5526E−02 | 2.5253E−01 | 2.5250E−01 | 2.0204E−01 | 1.0004E−01 | 3.2643E−02 | 8.9381E−02 | 1.6616E−02 |
| | Std | 3.3737E−02 | 6.2487E−02 | 7.1911E−02 | 7.5714E−02 | 1.2376E−01 | 8.8631E−02 | 1.1273E−01 | 1.1380E−02 |
| N11 | Mean | 1.8153E−02 | 2.6478E−02 | 2.4185E−02 | 2.3139E−02 | 2.2518E−02 | 2.1715E−02 | 1.9689E−02 | 1.7820E−02 |
| | Std | 8.1168E−03 | 1.3261E−02 | 9.2854E−03 | 1.0615E−02 | 9.2964E−03 | 1.0488E−02 | 7.9495E−03 | 7.6715E−03 |
| N12 | Mean | 3.9930E−02 | 4.9191E−02 | 5.1629E−02 | 4.6881E−02 | 4.0658E−02 | 4.5781E−02 | 4.9776E−02 | 3.9220E−02 |
| | Std | 1.2495E−02 | 1.4799E−02 | 1.7656E−02 | 1.4514E−02 | 1.4897E−02 | 1.9883E−02 | 1.4003E−02 | 1.3138E−02 |
| N13 | Mean | 7.6294E−02 | 9.7263E−02 | 7.8259E−02 | 8.0505E−02 | 4.1067E−02 | 3.3401E−02 | 4.8062E−02 | 5.3943E−02 |
| | Std | 2.8333E−02 | 2.7740E−02 | 2.1545E−02 | 3.0766E−02 | 2.1286E−02 | 2.4403E−02 | 2.7740E−02 | 2.8149E−02 |
| N14 | Mean | 1.4285E−01 | 1.9944E−01 | 1.4249E−01 | 1.4244E−01 | 7.2157E−02 | 1.2278E−01 | 1.5229E−01 | 1.2177E−01 |
| | Std | 8.7565E−02 | 1.1365E−01 | 8.7845E−02 | 1.1021E−01 | 5.7467E−02 | 9.4502E−02 | 1.1271E−01 | 9.4325E−02 |
| N15 | Mean | 1.6441E−01 | 1.9884E−01 | 1.7560E−01 | 1.7013E−01 | 1.6591E−01 | 1.6540E−01 | 1.6555E−01 | 1.6373E−01 |
| | Std | 6.9856E−03 | 1.0952E−02 | 1.2407E−02 | 7.9506E−03 | 8.4674E−03 | 6.7921E−03 | 7.9944E−03 | 7.1104E−03 |
| N16 | Mean | 9.7071E−02 | 1.2332E−01 | 1.2197E−01 | 1.1344E−01 | 4.3115E−02 | 5.7770E−02 | 6.3966E−02 | 8.3466E−02 |
| | Std | 2.4255E−02 | 3.3907E−02 | 3.0706E−02 | 3.3025E−02 | 2.2877E−02 | 2.0283E−02 | 3.5103E−02 | 2.7712E−02 |
| N17 | Mean | 3.4425E−02 | 4.9455E−02 | 5.3256E−02 | 4.2453E−02 | 2.9297E−02 | 3.5812E−02 | 4.0351E−02 | 3.4474E−02 |
| | Std | 1.2014E−02 | 2.0127E−02 | 2.9055E−02 | 2.1762E−02 | 1.2744E−02 | 2.0038E−02 | 2.0549E−02 | 1.3335E−02 |
| N18 | Mean | 2.3719E−01 | 2.5950E−01 | 2.5278E−01 | 2.4640E−01 | 2.2546E−01 | 2.2989E−01 | 2.3150E−01 | 2.2898E−01 |
| | Std | 1.6083E−02 | 1.5701E−02 | 1.9984E−02 | 1.7786E−02 | 1.6847E−02 | 2.5991E−02 | 1.6296E−02 | 1.7136E−02 |
| N19 | Mean | 2.3654E−01 | 2.5971E−01 | 2.5802E−01 | 2.4700E−01 | 2.4237E−01 | 2.4210E−01 | 2.4779E−01 | 2.3654E−01 |
| | Std | 2.1125E−02 | 1.9679E−02 | 2.0966E−02 | 2.0618E−02 | 1.7589E−02 | 2.0959E−02 | 1.8135E−02 | 2.1032E−02 |
| N20 | Mean | 3.2424E−02 | 9.9694E−02 | 4.5327E−02 | 4.2044E−02 | 3.3667E−02 | 1.9556E−02 | 3.6148E−02 | 2.9656E−02 |
| | Std | 6.6407E−03 | 4.7989E−02 | 8.8712E−03 | 8.5118E−03 | 1.0476E−02 | 7.3417E−03 | 1.3914E−02 | 4.0424E−03 |
| N21 | Mean | 2.8414E−01 | 3.0474E−01 | 3.0811E−01 | 3.0294E−01 | 2.7790E−01 | 2.9045E−01 | 2.8723E−01 | 2.8135E−01 |
| | Std | 1.1812E−02 | 1.5496E−02 | 2.1654E−02 | 1.1677E−02 | 1.2370E−02 | 2.4283E−02 | 1.7214E−02 | 1.7065E−02 |
| N22 | Mean | 2.2477E−01 | 2.3675E−01 | 2.3384E−01 | 2.2834E−01 | 2.2930E−01 | 2.3126E−01 | 2.2477E−01 | 2.2477E−01 |
| | Std | 3.0760E−02 | 3.0239E−02 | 2.9796E−02 | 2.7559E−02 | 3.2866E−02 | 2.8226E−02 | 3.0760E−02 | 3.0760E−02 |
| N23 | Mean | 1.3512E−01 | 1.8670E−01 | 1.4914E−01 | 1.3894E−01 | 1.3794E−01 | 1.3717E−01 | 1.4208E−01 | 1.3512E−01 |
| | Std | 3.0151E−02 | 5.8644E−02 | 3.9245E−02 | 3.2795E−02 | 3.4572E−02 | 2.9483E−02 | 3.8550E−02 | 3.0151E−02 |
| N24 | Mean | 1.6244E−01 | 1.6542E−01 | 1.6980E−01 | 1.6622E−01 | 1.6443E−01 | 1.6304E−01 | 1.6244E−01 | 1.6244E−01 |
| | Std | 3.1679E−02 | 3.1196E−02 | 2.8157E−02 | 3.4901E−02 | 3.2767E−02 | 3.2648E−02 | 3.1679E−02 | 3.1679E−02 |
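In wrapper-based feature selection, the fitness minimized in Table 10 typically combines the classification error rate with the fraction of selected features, which is why the fitness rankings largely mirror the accuracy rankings of Table 8 while still penalizing larger feature subsets. The sketch below uses the common α = 0.99, β = 0.01 weighting; those constants and the function name are illustrative assumptions, not values confirmed by this excerpt:

```python
def fs_fitness(accuracy: float, n_selected: int, n_total: int,
               alpha: float = 0.99, beta: float = 0.01) -> float:
    """Typical wrapper feature-selection fitness (lower is better):
    alpha * error_rate + beta * feature_ratio.

    alpha/beta = 0.99/0.01 is a common choice in the literature and is
    an assumption here, not a value stated in this excerpt.
    """
    error_rate = 1.0 - accuracy
    return alpha * error_rate + beta * (n_selected / n_total)

# e.g. 95% accuracy using 5 of 20 features:
# 0.99 * 0.05 + 0.01 * 0.25 = 0.052
print(round(fs_fitness(0.95, 5, 20), 6))  # 0.052
```

Under this formulation, a solution that keeps accuracy unchanged but drops features always gets a (slightly) better fitness, which matches the lower-is-better reading of Table 10.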
Table 11. Friedman test results of fitness results on feature selection datasets.
| Dataset | BACO | BBOA | BHHO | BWOA | BGWO | BGA | BPSO | BMSCSO |
|---|---|---|---|---|---|---|---|---|
| N1 | 2 | 6.5 | 6.5 | 8 | 2 | 5 | 4 | 2 |
| N2 | 3 | 7 | 8 | 4 | 1 | 5 | 6 | 2 |
| N3 | 3 | 8 | 7 | 6 | 5 | 4 | 2 | 1 |
| N4 | 1 | 7 | 6 | 8 | 3 | 5 | 4 | 2 |
| N5 | 1 | 7 | 8 | 6 | 4 | 5 | 3 | 2 |
| N6 | 5 | 8 | 6 | 7 | 1 | 2 | 3 | 4 |
| N7 | 2 | 8 | 5 | 7 | 6 | 3 | 4 | 1 |
| N8 | 3 | 7 | 8 | 5 | 2 | 4 | 6 | 1 |
| N9 | 2 | 7 | 8 | 3 | 6 | 5 | 4 | 1 |
| N10 | 2 | 8 | 7 | 6 | 5 | 3 | 4 | 1 |
| N11 | 2 | 8 | 7 | 6 | 5 | 4 | 3 | 1 |
| N12 | 2 | 6 | 8 | 5 | 3 | 4 | 7 | 1 |
| N13 | 5 | 8 | 6 | 7 | 2 | 1 | 3 | 4 |
| N14 | 6 | 8 | 5 | 4 | 1 | 3 | 7 | 2 |
| N15 | 2 | 8 | 7 | 6 | 5 | 3 | 4 | 1 |
| N16 | 5 | 8 | 7 | 6 | 1 | 2 | 3 | 4 |
| N17 | 2 | 7 | 8 | 6 | 1 | 4 | 5 | 3 |
| N18 | 5 | 8 | 7 | 6 | 1 | 3 | 4 | 2 |
| N19 | 1.5 | 8 | 7 | 5 | 4 | 3 | 6 | 1.5 |
| N20 | 3 | 8 | 7 | 6 | 4 | 1 | 5 | 2 |
| N21 | 3 | 7 | 8 | 6 | 1 | 5 | 4 | 2 |
| N22 | 2 | 8 | 7 | 4 | 5 | 6 | 2 | 2 |
| N23 | 1.5 | 8 | 7 | 5 | 4 | 3 | 6 | 1.5 |
| N24 | 2 | 6 | 8 | 7 | 5 | 4 | 2 | 2 |
| Sum of ranks | 66 | 179.5 | 168.5 | 139 | 77 | 87 | 101 | 46 |
| Sum of ranks squared | 4356 | 32,220.25 | 28,392.25 | 19,321 | 5929 | 7569 | 10,201 | 2116 |
| Average of ranks | 2.7500 | 7.4792 | 7.0208 | 5.7917 | 3.2083 | 3.6250 | 4.2083 | 1.9167 |

Yao, L.; Yang, J.; Yuan, P.; Li, G.; Lu, Y.; Zhang, T. Multi-Strategy Improved Sand Cat Swarm Optimization: Global Optimization and Feature Selection. Biomimetics 2023, 8, 492. https://doi.org/10.3390/biomimetics8060492
