Article

MSBWO: A Multi-Strategies Improved Beluga Whale Optimization Algorithm for Feature Selection

1 School of Information and Artificial Intelligence, Nanchang Institute of Science & Technology, Nanchang 330108, China
2 School of Computer Science and Technology, Hubei Business College, Wuhan 430079, China
* Authors to whom correspondence should be addressed.
Biomimetics 2024, 9(9), 572; https://doi.org/10.3390/biomimetics9090572
Submission received: 24 August 2024 / Revised: 19 September 2024 / Accepted: 19 September 2024 / Published: 22 September 2024

Abstract

Feature selection (FS) is a classic and challenging optimization task in most machine learning and data mining projects. Recently, researchers have attempted to develop more effective methods by using metaheuristic methods in FS. To increase population diversity and further improve the effectiveness of the beluga whale optimization (BWO) algorithm, in this paper, we propose a multi-strategies improved BWO (MSBWO), which incorporates improved circle mapping and dynamic opposition-based learning (ICMDOBL) population initialization as well as elite pool (EP), step-adaptive Lévy flight and spiral updating position (SLFSUP), and golden sine algorithm (Gold-SA) strategies. Among them, ICMDOBL contributes to increasing the diversity during the search process and reducing the risk of falling into local optima. The EP technique also enhances the algorithm′s ability to escape from local optima. The SLFSUP, which is distinguished from the original BWO, aims to increase the rigor and accuracy of the development of local spaces. Gold-SA is introduced to improve the quality of the solutions. The hybrid performance of MSBWO was evaluated comprehensively on IEEE CEC2005 test functions, including a qualitative analysis and comparisons with other conventional methods as well as state-of-the-art (SOTA) metaheuristic approaches that were introduced in 2024. The results demonstrate that MSBWO is superior to other algorithms in terms of accuracy and maintains a better balance between exploration and exploitation. Moreover, according to the proposed continuous MSBWO, the binary MSBWO variant (BMSBWO) and other binary optimizers obtained by the mapping function were evaluated on ten UCI datasets with a random forest (RF) classifier. Consequently, BMSBWO has proven very competitive in terms of classification precision and feature reduction.

1. Introduction

With the advent of the information age, we have witnessed an unprecedented surge in data volume across various domains, ranging from engineering [1] to ecology [2] and from information technology [3] to manufacturing [4] and management [5]. The complexity of the problems in these fields is increasing and is often characterized by multiple objectives [6] and high-dimensional characteristics [7]. The high dimensionality and redundancy inherent in raw datasets can lead to excessive consumption of computational resources, adversely affecting the efficacy of learning algorithms. Thus, selecting more important data, even in datasets with a limited amount of data, is essential for increasing classification success. FS has emerged as an essential data preprocessing technique that has garnered substantial interest over recent decades. This method increases the classification accuracy and reduces the data size by selecting the most appropriate subset of features from the original dataset [8].
FS encompasses a spectrum of methods, broadly classified into filter, wrapper, and hybrid approaches [9]. Among them, filters are faster than wrappers, but they ignore the relationships among features and cannot deal with redundant information. While wrappers are relatively computationally expensive, they can attain better results than filters because of the utilization of learning techniques in the evaluation process [10]. The quest for the best feature subset has been a foundation of FS, with the determination of this subset relying heavily on the search methodologies and evaluation strategies of the candidate features. The evolution of optimization algorithms in FS has seen a transition from traditional full search, random search, sequential search, and incremental search methods to metaheuristic search approaches [8]. Metaheuristic algorithms (MAs) have become prevalent because of their ability to navigate large search spaces efficiently and effectively, avoiding the obstacles of local optima while seeking global optima [11,12]. Various metaheuristic methods, including several recent algorithms, have been applied to address FS problems [13,14,15,16,17,18]. Moreover, there are also some hybridizations or improved optimizers in the FS techniques [19,20,21,22,23]. The reason for the appearance of many such works on FS problems is that no FS technique can address all the varieties of FS problems. Hence, we need extensive opportunities to develop new efficient models for FS cases [24].
The BWO is a recently proposed population-based metaheuristic with promising optimization capabilities for addressing continuous problems [25]. The construction of BWO is inspired mainly by the behaviours of beluga whales, including swimming, preying, and whale fall. The BWO is a derivative-free optimization technique that is easy to implement and, compared with the whale optimization algorithm (WOA) [26], the grey wolf optimizer (GWO) [27], particle swarm optimization (PSO) [28], and other algorithms, has solid local exploitation capabilities. The main merit of this optimizer is the equilibrium between exploration and exploitation that ensures global convergence. Owing to its excellent advantages, BWO, or its modified versions, has been employed in many fields, such as image semantic segmentation [29], cluster routing in wireless sensor networks [30], landslide susceptibility modelling [31], speech emotion recognition [32], short-term hydrothermal scheduling [33], and demand-side management [34]. In addition, some modified versions of BWO have been developed to accomplish specific optimization problems [35,36,37]. However, as a novel optimizer, the effectiveness of BWO on a wider range of problems has been poorly studied. In other words, even though this method is an excellent optimizer, it also faces some challenges in terms of improving the search ability, accelerating the convergence rate, and addressing complex optimization problems. It is necessary to extend the application fields of BWO to make this optimizer more valuable.
Although the BWO algorithm can achieve certain optimization effects in the early stages of the algorithm, in the later stages, due to insufficient population diversity and a singular exploration angle, the algorithm often has difficulty obtaining better solutions and is prone to falling into local optima. Furthermore, as the problem′s dimensions and complexity increase, the optimization capability of the BWO algorithm decreases, exploration accuracy decreases, the convergence speed decreases, and it becomes difficult to find other high-quality solutions [35,37]. The food chain embodies the principle of survival of the fittest in nature, and each organism has certain limitations in its survival strategy [22]. These limitations inspire us to deeply analyze and improve the biological behaviour-based mathematical models when designing evolutionary algorithms that simulate biological habits. Although the existing evolutionary algorithms can address many optimization problems, by constructing mathematical models that optimize biological habits, and by refining some mathematical theories, we can construct excellent mathematical models for solving optimization problems, which has the potential to further enhance the performance of the algorithms [38].
To improve the effectiveness of the original BWO and help it overcome some physiological limitations, this paper introduces several mathematical theories. First, improved circle mapping (ICM) [39] and dynamic opposition-based learning (DOBL) [40] were introduced to increase the diversity of an algorithm during the search process, thereby reducing the risk of falling into local optima and enhancing the search efficiency and accuracy. Second, the EP of GWO [27] was integrated to maintain a subpopulation composed of the best individuals, which guided the main population to evolve towards the global optimum, enhancing the algorithm′s ability to escape from local optima. Third, we integrated the SLFSUP strategy so that the MSBWO could conduct a more detailed and in-depth search within local areas, enhancing the rigor and accuracy of the development of local spaces. Finally, by introducing the Gold-SA [41] to update the population, we accelerated the convergence speed of the algorithm while maintaining the diversity of the population and improving the quality of the solutions. We tested MSBWO on twenty-three benchmark continuous problems. Simultaneously, we interrogated the feature selection problem to evaluate this proposed approach.
The main contributions of this paper are as follows:
Four improvement strategies, namely, ICMDOBL population initialization, EP, SLFSUP, and Gold-SA, were used to improve the optimization performance of the BWO algorithm.
Twenty-three global optimization tasks for intelligent optimization algorithm testing were used to evaluate the proposed MSBWO and compare it with other conventional and SOTA advanced metaheuristic approaches.
The developed MSBWO was transformed into a binary model for tackling FS problems for the first time. Furthermore, the binary MSBWO was compared with other FS techniques on several UCI datasets.
The structure of this article is as follows: A detailed description of the standard BWO exploration and exploitation process is presented in Section 2. Section 3 introduces MSBWO, which incorporates several strategies, and proposes MSBWO for feature selection tasks. The experimental setup and results analysis of this study are shown in Section 4. Finally, Section 5 presents the conclusions and future work. To aid in understanding, this article includes a comprehensive list of relevant abbreviations, summarized in Table 1.

2. Original BWO

The beluga whale, adept at navigating and hunting in aquatic environments, serves as the inspiration for the BWO algorithm. This novel biomimetic optimization approach emulates the social behaviours and hunting strategies of beluga whales to address optimization challenges. In BWO, the exploration phase is akin to the belugas′ use of echolocation to detect and track prey, representing the algorithm′s global search for optimal solutions. Conversely, the exploitation phase mirrors the focused hunting tactics employed by these whales once a target has been located, signifying the algorithm′s searching refinement in a promising area. Additionally, the whale fall phase introduces a unique mechanism that simulates the natural phenomenon of a whale falling to the ocean floor after death, which serves as an ecological disturbance that can lead to new areas of exploration and potentially escape from local optima in the search space.

2.1. Initialization

The locations of beluga whales are regarded as search agents. In the initialization phase, the initial population is generated randomly, and the fitness value of each individual is computed. The population initialization model is expressed as:
$$X = \mathrm{rand}(m, n) \cdot (U_b - L_b) + L_b \qquad (1)$$
where Ub and Lb are the upper and lower boundaries of the optimization problem to be solved, m is the population size of beluga whales, and n is the dimension of the solution. In each iteration, BWO transfers from exploration to exploitation depending on the balance factor Bf, which is similar to the Harris hawks optimizer (HHO) [42]. The balance factor Bf is modelled as:
$$B_f = B_0 \left(1 - \frac{T}{2 T_{max}}\right) \qquad (2)$$
where $B_0$ is a random number in $(0, 1)$, $T$ is the current iteration, and $T_{max}$ is the maximum number of iterations. Exploration occurs when the balance factor $B_f > 0.5$, resulting in pairs of beluga whales swimming closely together in a synchronized or mirrored manner. In the case of $B_f \le 0.5$, the exploitation phase occurs, engaging the preying behaviour of beluga whales.
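To make the initialization and phase-switching rule concrete, the following Python sketch (our own illustration, not the authors' MATLAB code; the bound values and helper names are assumptions) implements Equations (1) and (2).

```python
import numpy as np

def initialize_population(m, n, lb, ub, rng=None):
    """Randomly place m beluga whales in the n-dimensional box [lb, ub] (Eq. (1))."""
    rng = rng or np.random.default_rng()
    return rng.random((m, n)) * (ub - lb) + lb

def balance_factor(t, t_max, rng=None):
    """Balance factor Bf = B0 * (1 - T / (2 * Tmax)), with B0 drawn from (0, 1) (Eq. (2))."""
    rng = rng or np.random.default_rng()
    return rng.random() * (1.0 - t / (2.0 * t_max))

# Example: the phase switch used in each iteration
pop = initialize_population(m=50, n=30, lb=-100.0, ub=100.0)
bf = balance_factor(t=10, t_max=500)
phase = "exploration" if bf > 0.5 else "exploitation"
```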

2.2. Exploration Phase

The exploration phase of BWO simulates the pair swimming behaviour of beluga whales. They randomly move in a synchronized or mirrored manner, expressed as follows:
$$\begin{cases} X_{i,j}^{T+1} = X_{i,p_j}^{T} + \left(X_{r,p_1}^{T} - X_{i,p_j}^{T}\right)(1 + r_1)\sin(2\pi r_2), & j = \text{even} \\ X_{i,j}^{T+1} = X_{i,p_j}^{T} + \left(X_{r,p_1}^{T} - X_{i,p_j}^{T}\right)(1 + r_1)\cos(2\pi r_2), & j = \text{odd} \end{cases} \qquad (3)$$
where $p_j\ (j = 1, 2, \ldots, d)$ is a random number selected from the $d$ dimensions, $X_{i,p_j}$ is the position of the $i$th beluga whale in the $p_j$th dimension, $X_{r,p_1}$ is the current position of a randomly selected $r$th beluga whale, $r_1$ and $r_2$ are random numbers between $(0, 1)$, and $\sin(2\pi r_2)$ and $\cos(2\pi r_2)$ indicate that the fins of the mirrored beluga whales are facing the surface.
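As a hedged illustration of Equation (3), the sketch below applies the pair-swimming update to one whale; the way the random dimension ordering $p_j$ and the partner whale are drawn follows our own reading of BWO rather than the reference implementation.

```python
import numpy as np

def bwo_exploration_step(X, i, rng=None):
    """One pair-swimming update of whale i following Eq. (3).

    How the permuted dimension indices p_j and the partner whale r are drawn
    is our own reading of BWO, not the reference implementation.
    """
    rng = rng or np.random.default_rng()
    m, d = X.shape
    r = rng.integers(m)                 # randomly selected partner whale
    p = rng.permutation(d)              # random dimension ordering p_1, ..., p_d
    r1, r2 = rng.random(), rng.random()
    x_new = X[i].copy()
    for j in range(d):
        pj = p[j]
        delta = (X[r, p[0]] - X[i, pj]) * (1.0 + r1)
        if j % 2 == 0:                  # even index -> sin term
            x_new[pj] = X[i, pj] + delta * np.sin(2.0 * np.pi * r2)
        else:                           # odd index -> cos term
            x_new[pj] = X[i, pj] + delta * np.cos(2.0 * np.pi * r2)
    return x_new
```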

2.3. Exploitation Phase

The exploitation phase of BWO mimics the preying behaviour of beluga whales. Beluga populations communicate with each other and share information on their positions to cooperatively forage and move according to the locations of nearby beluga whales. To enhance the algorithm′s convergence, the Lévy flight (LF) strategy is employed during the exploitation phase, which can be represented as:
$$X_i^{T+1} = r_3 X_{best}^{T} - r_4 X_i^{T} + C_1 \times LF \cdot \left(X_r^{T} - X_i^{T}\right) \qquad (4)$$
where $C_1 = 2 r_4 (1 - T/T_{max})$ is the random jump strength, $r_3$ and $r_4$ are random numbers between $(0, 1)$, $X_r$ is the current position of a random beluga whale, and $X_{best}$ is the best position among the beluga whales. $LF$ is the LF function, which is calculated as follows:
$$LF = 0.05 \times \frac{u \times \sigma}{|v|^{1/\beta}} \qquad (5)$$
$$\sigma = \left(\frac{\Gamma(1+\beta) \times \sin(\pi\beta/2)}{\Gamma((1+\beta)/2) \times \beta \times 2^{(\beta-1)/2}}\right)^{1/\beta} \qquad (6)$$
where u and v are normally distributed random numbers, β is the default constant equal to 1.5, and Γ ( x ) is the Γ function.
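A minimal Python sketch of Equations (4)-(6) is shown below; the helper names and the use of standard normal draws for u and v are our assumptions.

```python
import numpy as np
from math import gamma, pi, sin

def levy_flight(d, beta=1.5, rng=None):
    """Levy flight step of Eqs. (5)-(6): LF = 0.05 * u * sigma / |v|^(1/beta)."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = rng.standard_normal(d), rng.standard_normal(d)
    return 0.05 * u * sigma / np.abs(v) ** (1 / beta)

def bwo_exploitation_step(X, i, x_best, t, t_max, rng=None):
    """Preying update of Eq. (4) with random jump strength C1 = 2*r4*(1 - T/Tmax)."""
    rng = rng or np.random.default_rng()
    m, d = X.shape
    r = rng.integers(m)                 # randomly selected whale X_r
    r3, r4 = rng.random(), rng.random()
    c1 = 2.0 * r4 * (1.0 - t / t_max)
    return r3 * x_best - r4 * X[i] + c1 * levy_flight(d, rng=rng) * (X[r] - X[i])
```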

2.4. Whale Fall Phase

If the balance factor $B_f \le W_f$, whale fall occurs, where $W_f = 0.1 - 0.05\, T/T_{max}$ is the probability of whale fall for individual beluga whales. The whale fall is a phenomenon in which beluga whales are threatened by killer whales and humans during migration and foraging, and the dead beluga whale falls to the deep seabed. Afterwards, to maintain the population size, the updated position is established using the positions of the current beluga whale, a random individual beluga whale, and the step size of the whale fall:
$$X_i^{T+1} = r_5 X_i^{T} - r_6 X_r^{T} + r_7 X_{step} \qquad (7)$$
where r 5 , r 6 , and r 7 are random numbers between (0, 1), and X s t e p is the step size of the whale fall established as:
$$X_{step} = (U_b - L_b)\, e^{-C_2 T / T_{max}} \qquad (8)$$
where C 2 is the step factor, which is related to the probability of a whale fall and population size, C 2 = 2 W f × m .
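The whale fall update of Equations (7) and (8) can be sketched as follows (illustrative only; clipping the result back into the search bounds and other housekeeping are omitted).

```python
import numpy as np

def whale_fall_step(X, i, lb, ub, t, t_max, rng=None):
    """Whale-fall update of Eqs. (7)-(8).

    Wf = 0.1 - 0.05*T/Tmax is the whale-fall probability and C2 = 2*Wf*m the step factor.
    """
    rng = rng or np.random.default_rng()
    m, d = X.shape
    wf = 0.1 - 0.05 * t / t_max
    c2 = 2.0 * wf * m
    r = rng.integers(m)
    r5, r6, r7 = rng.random(3)
    x_step = (ub - lb) * np.exp(-c2 * t / t_max)   # Eq. (8)
    return r5 * X[i] - r6 * X[r] + r7 * x_step     # Eq. (7)
```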

3. Proposed MSBWO

The proposed MSBWO introduces four fruitful strategies: (1) the ICMDOBL population diversity strategy, (2) the EP mechanism, (3) the SLFSUP, and (4) the Gold-SA population update mechanism. In addition, the feature selection problem can be solved by updating MSBWO to a binary version.

3.1. ICMDOBL Population Diversity

Because the BWO algorithm generates its initial population randomly, the population distribution can be uneven, which may reduce population diversity and quality and thereby affect the convergence of the algorithm. Chaotic mapping is characterized by uncertainty, irreducibility, and unpredictability [43], which can lead to a more uniform population distribution than probability-dependent random generation. MSBWO generates an initial population with chaotic mapping to increase the diversity of potential solutions. Common chaotic mappings include logistic mapping, tent mapping [44], sine mapping [45], and circle mapping [46]. Circle mapping is more stable and has a higher coverage rate of chaotic values. Considering that circle mapping takes values more densely between [0.2, 0.6], the circle mapping formula is improved as follows:
$$X_{i+1,j} = \mathrm{mod}\left(3.85 X_{i,j} + 0.4 - \frac{0.7}{3.85\pi}\sin(3.85\pi X_{i,j}),\, 1\right) \qquad (9)$$
where X i , j represents the sequence value of the ith beluga whale on the jth dimension, and X i + 1 , j is the chaotic sequence value of the (i+1)th beluga whale on the jth dimension. Then, the values are scaled and shifted to generate X C i r c l e with values between L b and U b for each dimension:
$$X_{Circle} = (U_b - L_b) \cdot X_{i,j} + L_b \qquad (10)$$
DOBL [40,47] is introduced to further increase population diversity and improve the quality of the initial solutions. The specific formula is expressed as follows:
$$X_{DOBL} = X_{Circle} + r_8 \times \left(r_9 \times (L_b + U_b - X_{Circle}) - X_{Circle}\right) \qquad (11)$$
where $X_{Circle}$ is the population established with the ICM method, as shown in Equation (10), and $r_8$ and $r_9$ are random numbers between (0, 1). The DOBL generates an opposition population $X_{DOBL}$ from $X_{Circle}$ and then merges these two populations into a new population, $X_{new} = \{X_{DOBL} \cup X_{Circle}\}$. The fitness values of $X_{new}$ are calculated, and a greedy strategy is used for full competition within the new population. The best N individuals are then selected as the initial population. Using ICMDOBL, MSBWO starts iterating from individuals with better fitness, thereby enhancing convergence.
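The ICMDOBL initialization of Equations (9)-(11), followed by the greedy merge-and-select step described above, might look like the following sketch; the chaotic seed, the per-dimension iteration of the circle map, and the boundary clipping are our assumptions.

```python
import numpy as np

def icm_dobl_init(m, d, lb, ub, fitness, rng=None):
    """ICMDOBL initialization (Eqs. (9)-(11)) followed by greedy selection.

    The chaotic seed, the per-dimension iteration of the circle map, and the
    boundary clipping of the opposition population are our own assumptions.
    """
    rng = rng or np.random.default_rng()
    chaos = np.empty((m, d))
    chaos[0] = rng.random(d)
    for i in range(1, m):              # improved circle mapping, Eq. (9)
        chaos[i] = np.mod(3.85 * chaos[i - 1] + 0.4
                          - 0.7 / (3.85 * np.pi) * np.sin(3.85 * np.pi * chaos[i - 1]), 1.0)
    x_circle = (ub - lb) * chaos + lb                                    # Eq. (10)
    r8, r9 = rng.random((m, d)), rng.random((m, d))
    x_dobl = x_circle + r8 * (r9 * (lb + ub - x_circle) - x_circle)      # Eq. (11)
    merged = np.vstack([x_circle, np.clip(x_dobl, lb, ub)])
    fit = np.apply_along_axis(fitness, 1, merged)
    return merged[np.argsort(fit)[:m]]                                   # keep the best m agents

# Usage with the sphere function as a stand-in objective
pop = icm_dobl_init(50, 30, -100.0, 100.0, fitness=lambda x: np.sum(x ** 2))
```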

3.2. EP Strategy

In the case of location updating, beluga whales always use the best whale as prey. If the prey has fallen into a local optimum, all subsequent search agents will converge to it, leading to premature convergence of the algorithm. In the GWO algorithm [27], a hierarchical system was proposed to update the positions according to the mean position of the three best grey wolves, avoiding the shortcomings caused by being guided by a single search agent.
Inspired by GWO, the EP strategy is integrated into MSBWO. The first three best solutions obtained thus far, and their weighted average, are included as the candidate elites in the EP. The first three best solutions are conducive to exploration, whereas the weighted average position represents the evolutionary trend of the entire superior population, which is beneficial for exploitation. Position updating is guided by the agent randomly selected from the EP, improving the algorithm′s ability to escape from local optima.
The EP strategy is modelled as:
$$\begin{cases} Elite^{T} = \left[X_{best\_1}^{T}, X_{best\_2}^{T}, X_{best\_3}^{T}, X_{mean}^{T}\right] \\ X_{mean}^{T} = \sum_{i=1}^{best\_num} \theta_i X_{best\_i}^{T}, \quad \theta_i = \dfrac{\omega_i}{\sum_{j=1}^{best\_num} \omega_j}, \quad \omega_i = \dfrac{3 f_{best\_3} - f_{best\_i}}{3 f_{best\_3} - f_{best\_1}} \end{cases} \qquad (12)$$
where $f_{best\_i}$ is the fitness value of the $i$th best solution, and $best\_num$ is set to 3.
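A small sketch of the EP construction in Equation (12) is given below for a minimization problem; the epsilon added to the denominator is our own safeguard, and the weight formula follows our reconstruction of the garbled equation.

```python
import numpy as np

def build_elite_pool(X, fit):
    """Elite pool of Eq. (12): the three best agents plus their weighted mean.

    Assumes a minimization problem; the small epsilon in the denominator guards
    against division by zero when the three best fitness values coincide.
    """
    order = np.argsort(fit)[:3]
    f1, f2, f3 = fit[order]                       # f1 <= f2 <= f3
    best = X[order]
    omega = (3.0 * f3 - np.array([f1, f2, f3])) / (3.0 * f3 - f1 + 1e-12)
    theta = omega / omega.sum()
    x_mean = theta @ best                         # weighted mean of the three best
    return np.vstack([best, x_mean])              # shape (4, d)

# During position updating one elite is drawn at random from the pool:
# x_ep = elite_pool[np.random.default_rng().integers(4)]
```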

3.3. SLFSUP Strategy

In the exploitation phase, BWO uses LF with a fixed step to improve its convergence. However, at different stages of the algorithm, the expected step of LF may vary. The larger the step of LF is, the easier it is to find the optimum result, but it reduces the search precision. The smaller the step size is, the higher the search precision, but it reduces the search speed. Therefore, the step-adaptive LF strategy is used in MSBWO to improve its exploitation and convergence accuracy. In the early stages of iteration, MSBWO uses LF with a larger step so that it can fully exploit the solution space, whereas it becomes more refined in the later stage with a decreasing LF step. The step-adaptive LF strategy is calculated as follows:
$$LF = 0.05 \times \left(1 - \frac{T}{T_{max}/2}\right) \times e^{-T/T_{max}} \cdot \frac{u \times \sigma}{|v|^{1/\beta}} \qquad (13)$$
As stated in Equation (4), position updating in the exploitation phase of BWO involves a random agent, the best agent, and the current agent, so promising solutions may still be missed. In WOA [26], the spiral updating position strategy adjusts the distance between a whale and the prey, namely the best solution obtained so far, when searching for prey. Such a strategy can make full use of regional information and improve the search capability. Therefore, MSBWO introduces this method to enhance the algorithm's rigor and accuracy in the development of the local space and to strengthen the local search ability.
The position updating model in the exploitation process based on the SLFSUP is as follows:
$$X_i^{T+1} = r_3 X_{EP}^{T} - r_4 X_i^{T} + C_1 \times LF \cdot \left(X_r^{T} - X_i^{T}\right) \cdot \left|X_{EP}^{T} - X_i^{T}\right| \times \cos(2\pi l) \times e^{bl} \qquad (14)$$
where $X_{EP}$ is the position of an agent randomly selected from $Elite^T$, $X_r$ is the current position of a random beluga whale used to maintain diversity, $b$ is a constant defining the shape of the spiral, and $l$ is a random number in $[-1, 1]$. $b$ is set to 1 in MSBWO.
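The SLFSUP update of Equations (13) and (14) can be sketched as below; the sign conventions in the decay factor of Equation (13) follow our reconstruction of the formula and should be checked against the authors' implementation.

```python
import numpy as np
from math import gamma, pi, sin

def adaptive_levy(d, t, t_max, beta=1.5, rng=None):
    """Step-adaptive Levy flight of Eq. (13); the signs of the decay terms follow
    our reconstruction of the formula."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = rng.standard_normal(d), rng.standard_normal(d)
    scale = 0.05 * (1.0 - t / (t_max / 2.0)) * np.exp(-t / t_max)
    return scale * u * sigma / np.abs(v) ** (1 / beta)

def slfsup_step(X, i, x_ep, t, t_max, b=1.0, rng=None):
    """Exploitation update of Eq. (14): elite pool + adaptive Levy flight + spiral term."""
    rng = rng or np.random.default_rng()
    m, d = X.shape
    r = rng.integers(m)
    r3, r4, l = rng.random(), rng.random(), rng.uniform(-1.0, 1.0)
    c1 = 2.0 * r4 * (1.0 - t / t_max)
    spiral = np.abs(x_ep - X[i]) * np.cos(2.0 * np.pi * l) * np.exp(b * l)
    return r3 * x_ep - r4 * X[i] + c1 * adaptive_levy(d, t, t_max, rng=rng) * (X[r] - X[i]) * spiral
```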

3.4. Golden-SA Update Mechanism

Inspired by the relationship between the sine function and a one-unit radius circle, the Gold-SA [41] scans all values of the sine function. The algorithm has strong global searching capabilities. The golden section ratio is used in the position updating of Gold-SA so that it can completely scan the search space as much as possible, thus accelerating convergence and escaping from local optima.
In the MSBWO, the Gold-SA mechanism is utilized to update the beluga whale population to increase the global search ability. The position updating with Gold-SA is given as follows:
$$X_i^{T+1} = X_i^{T}\left|\sin(r_{10})\right| + r_{11}\sin(r_{10})\left|x_1 \times X_{EP}^{T} - x_2 \times X_i^{T}\right| \qquad (15)$$
where $r_{10}$ is a random number in the range $[0, 2\pi]$ and $r_{11}$ is a random number in the range $[0, \pi]$. $x_1$ and $x_2$ are the coefficients obtained via the golden section method, which narrows the search space and allows the current value to approach the target value. They can be expressed as follows:
$$\begin{cases} x_1 = a\tau + b(1-\tau) \\ x_2 = a(1-\tau) + b\tau \end{cases} \qquad (16)$$
where $\tau = (\sqrt{5} - 1)/2$ is the golden number, and the initial values of $a$ and $b$ are $-\pi$ and $\pi$, respectively.
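A compact sketch of the Gold-SA update in Equations (15) and (16) follows; the initial interval [a, b] = [−π, π] is taken from the Gold-SA reference [41], while the remaining details (such as keeping x1 and x2 fixed per call) are illustrative simplifications of ours.

```python
import numpy as np

def gold_sa_step(x_i, x_ep, rng=None):
    """Golden sine update of Eqs. (15)-(16) for one agent.

    The initial golden-section interval [a, b] = [-pi, pi] follows the Gold-SA
    reference; fixing x1 and x2 per call (rather than shrinking them over
    iterations) is a simplification of ours.
    """
    rng = rng or np.random.default_rng()
    tau = (np.sqrt(5.0) - 1.0) / 2.0              # golden number
    a, b = -np.pi, np.pi
    x1 = a * tau + b * (1.0 - tau)                # Eq. (16)
    x2 = a * (1.0 - tau) + b * tau
    r10 = rng.uniform(0.0, 2.0 * np.pi)
    r11 = rng.uniform(0.0, np.pi)
    return x_i * np.abs(np.sin(r10)) + r11 * np.sin(r10) * np.abs(x1 * x_ep - x2 * x_i)
```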
The proposed improvement strategies are applied to BWO, and the flow chart of the MSBWO is shown in Figure 1.

3.5. Binary MSBWO

FS is a binary decision optimization problem whose solution space grows exponentially with the number of features; a value of 1 represents the selection of a feature, and 0 represents its non-selection. As an improved version of the original BWO, the proposed MSBWO has a greatly improved search performance. Therefore, this study applies it to obtain a better feature subset. However, to apply MSBWO to the FS problem, the search space of the agents needs to be restricted. Moreover, a binary transformation is required to map the continuous values to the corresponding binary values [48].
To address the above issues, Equations (17) and (18) are used for initialization:
$$X_i = \left[X_{ij}\right], \quad 1 \le i \le m, \ 1 \le j \le d \qquad (17)$$
$$X_{ij} = \begin{cases} 0, & \text{if } rand < 0.5 \\ 1, & \text{if } rand \ge 0.5 \end{cases}, \quad 1 \le i \le m, \ 1 \le j \le d \qquad (18)$$
where $X_{ij}$ is the $j$th component of the $i$th agent, $d$ is the number of features, and $rand$ is a random number between (0, 1).
After position updating, the sigmoid function is used for discretization. The transfer function and position updating equation selected in this paper are shown in Equations (19) and (20).
$$S\left(X_{ij}(t)\right) = \frac{1}{1 + \exp\left(-X_{ij}(t)\right)} \qquad (19)$$
$$X_{ij}(t+1) = \begin{cases} 0, & \text{if } rand < S\left(X_{ij}(t+1)\right) \\ 1, & \text{if } rand \ge S\left(X_{ij}(t+1)\right) \end{cases} \qquad (20)$$
As a combination optimization, the FS has two main goals. One is to improve the classification performance, and the other is to minimize the number of selected features. Therefore, the fitness function is shown in Equation (21).
$$Fitness = r_{12}\, ER(D) + r_{13}\, \frac{|S|}{|D|} \qquad (21)$$
where $ER(D)$ represents the classification error rate of the RF classifier, $|D|$ denotes the number of features in the original dataset, and $|S|$ denotes the length of the selected feature subset. $r_{12}$ and $r_{13}$ are used to balance the relationship between the error rate and the ratio of selected features; $r_{12} \in [0, 1]$, $r_{13} = 1 - r_{12}$.
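To illustrate how the binary machinery of Equations (19)-(21) fits together, a short Python sketch is given below; the default weight r12 = 0.9 mirrors the setting reported in Section 4.5, and the function names and example values are our own.

```python
import numpy as np

def sigmoid_transfer(x):
    """Transfer function of Eq. (19): maps a continuous position to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def binarize(x_continuous, rng=None):
    """Discretization rule of Eq. (20), reproduced as written in the text."""
    rng = rng or np.random.default_rng()
    s = sigmoid_transfer(x_continuous)
    return np.where(rng.random(x_continuous.shape) < s, 0, 1)

def fs_fitness(error_rate, selected_mask, r12=0.9):
    """Fitness of Eq. (21): weighted sum of classification error and feature ratio."""
    r13 = 1.0 - r12
    return r12 * error_rate + r13 * selected_mask.sum() / selected_mask.size

# Example: a 10-feature binary agent evaluated with a hypothetical 5% error rate
mask = binarize(np.random.default_rng(1).normal(size=10))
print(fs_fitness(error_rate=0.05, selected_mask=mask))
```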

3.6. Computational Complexity

To gain a better understanding of the implementation process of the MSBWO algorithm proposed in this paper, its computational complexity is analyzed as follows. The computational complexity of the MSBWO relies on three main steps: initialization, fitness evaluation, and updating of the beluga whales. In the initialization phase of MSBWO, the computational complexity of each agent is assumed to be $O(d)$, where $d$ is the dimension of a particular problem. The computational complexity of ICMDOBL is $O(n \times d)$, where $n$ is the population size. After entering the iteration, the computational complexity of EP is $O(n \log n + n)$. In the exploration and exploitation phases, the novel exploitation mechanism replaces the original exploration mechanism, and the computational complexity is similar to that of BWO, which is represented as $O(n \times d \times T_{max})$. The computation of the whale fall phase can also be approximated as $O(0.1 \times n \times T_{max})$, similar to BWO. Additionally, the Gold-SA is an extra searching strategy whose computational complexity is $O(n \times d)$. Therefore, the total computational complexity of MSBWO is evaluated approximately as $O(n \times d + n \times T_{max} \times (1.1 + \log n + 2 \times d))$. Thus, the MSBWO algorithm proposed in this paper has greater computational complexity than the original BWO algorithm.

4. Experiments and Results Analysis

To confirm the performance of MSBWO, sufficient targeted experiments were performed in this work, and the results of the comparative observations are discussed in a comprehensive analysis. To decrease the influence of external factors, every task in this work was conducted in the same setting. With respect to the parameter settings of the metaheuristic algorithms, 50 search agents were used (except in the FS experiments), and multiple iterations were completed. To reduce the impact of experimental randomness, each algorithm was executed on each benchmark function 30 times.
We applied two statistical performance measures, the mean and standard deviation (std), which represent the robustness of the tested methods, to assess the optimization ability of the MSBWO. Furthermore, some statistically significant results were used to estimate the success of the MSBWO. In this study, we utilized the Wilcoxon rank-sum test to analyze the significant differences in the statistical results among the compared approaches. The significance level was set to 0.05. In the results of the Wilcoxon rank-sum test, the rows identified by ‘+/=/−’ are the results of the significance analysis. The symbol ‘+’ indicates that MSBWO outperforms the other compared approaches significantly, ‘=’ indicates that there is no significant difference between MSBWO and the other compared approaches, and ‘−’ indicates that MSBWO is worse than the other compared methods. Additionally, the Friedman test was applied to express the average ranking performance (denoted as ARV) of all the compared approaches more closely for further statistical comparison.
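For readers who want to reproduce the statistical protocol, the following sketch shows how the pairwise Wilcoxon rank-sum comparison (with the '+/=/−' marking at the 0.05 level) and the Friedman test could be computed with SciPy; the numbers used are synthetic placeholders purely to show the call pattern, not results from the paper.

```python
import numpy as np
from scipy.stats import friedmanchisquare, ranksums

# Best fitness values from 30 runs on one benchmark function (synthetic placeholders)
rng = np.random.default_rng(0)
results = {"MSBWO": rng.normal(0.0, 1e-3, 30),
           "BWO":   rng.normal(0.5, 1e-1, 30),
           "PSO":   rng.normal(1.0, 2e-1, 30)}

# Pairwise Wilcoxon rank-sum test at the 0.05 level, marked '+', '=', or '-'
stat, p = ranksums(results["MSBWO"], results["BWO"])
msbwo_better = results["MSBWO"].mean() < results["BWO"].mean()
mark = "=" if p >= 0.05 else ("+" if msbwo_better else "-")

# Friedman test over all compared algorithms (the paper's ARV is the mean per-function rank)
print(mark, friedmanchisquare(*results.values()))
```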
Section 4.1 presents an extensive scalability analysis to perform a more comprehensive investigation into the efficiency of MSBWO on CEC2005 benchmark problems [49], as shown in Table 2, Table 3 and Table 4. Section 4.2 investigates the impact of different optimization strategies on the final search for the global optimal solution. In Section 4.3, MSBWO is compared with other conventional MAs in terms of convergence speed and accuracy on the benchmark functions. Section 4.4 compares MSBWO to the SOTA metaheuristic approaches that were introduced in 2024. In Section 4.5, 10 datasets are selected from the UCI machine learning library to test the performance of the binary MSBWO in FS.
All the experiments were performed on a 2.60 GHz Intel i7-10750H CPU equipped with 16 GB of RAM and Windows 10 OS and were programmed in MATLAB R2023b.

4.1. Scalability Analysis of MSBWO

The dimensions of the optimization problems affect the efficiency of the algorithm. Therefore, it is necessary to conduct an extensive scalability analysis to perform a more comprehensive investigation into the efficiency of MSBWO. The purpose of the scalability evaluation experiment is to compare the performance of MSBWO with that of BWO as the number of dimensions increases. In this section, we test the first 13 of 23 benchmark problems with dimensions of 100, 200, 500, 1000, and 2000.
In each experiment, 30 independent runs were applied to each method to reduce the influence of randomness on the experimental results. Additionally, the maximum number of iterations was set to 500, and the population size was set to 50. The parameter initialization of all the methods was the same as that in their original references.
The results in Table 5 present the statistical values obtained for the 13 problems on each dimension. According to these statistical results, MSBWO is more successful than BWO in addressing the optimization problems in each dimension. Although the statistically significant results at 1000/2000 dimensions are somewhat reduced compared with those at 100/200/500 dimensions, the solutions achieved by MSBWO for each function with 1000/2000 dimensions are much closer to the optimal solution than those of BWO, according to the statistical measures (average values and standard deviations).
For the unimodal problems (F1-F7), MSBWO outperforms BWO, except for F5 and F6 at 1000/2000 dimensions, which indicates that MSBWO significantly strengthens the exploitative ability of BWO at 100/200/500 dimensions. For the multimodal problems (F8-F13), MSBWO is better than BWO for F8 in each dimension. MSBWO shows no difference from BWO when addressing F9-F11, and both attain their optimal solutions. However, BWO is superior to MSBWO for F12 and F13 with dimensions of 1000 and 2000. That is, at 1000/2000 dimensions, the advantage of MSBWO over the original BWO is not as pronounced as it is at 100/200/500 dimensions.
From the standard deviation perspective, the standard deviations of the MSBWO on each dimension are lower than those of the BWO when solving functions F2, F4, and F7, which are equal to those of the BWO with functions F1, F3, F9, F10, and F11. This indicates that the optimization ability of MSBWO is no less than that of BWO, and this stability is not significantly affected by the number of dimensions. Although the performance of MSBWO is not superior to that of BWO on F8, MSBWO can find a satisfactory solution. Moreover, MSBWO attains a lower ‘ARV’ than BWO does in each case of dimension, which clearly reveals the superiority of MSBWO without the dependence of dimension.
It can be concluded that the strategies integrated into MSBWO facilitate the balance of exploration and exploitation and significantly enhance the search ability in different dimensions for specific problems.

4.2. Cross-Evaluation of the Proposed MSBWO

To verify the contributions of various improvement strategies to MSBWO, this section compares the original BWO algorithm with five incomplete versions of the MSBWO algorithm. Section 3.1, Section 3.2, Section 3.3, Section 3.4 introduce four integration strategies, including ICMDOBL, EP, SLFSUP, and Golden-SA, into the original BWO. In this section, the performance after mixing and crossing is tested and compared, mainly by means of linear combinations. In Table 6, “1” means that the mechanism was selected, and “0” means that it was not. We refer to the BWO combined with ICMDOBL as ICMDOBL_BWO, and the fusion of the BWO and EP strategies as EP_BWO. The combination of BWO and SLFSUP is denoted as SLFSUP_BWO, and the fusion of BWO and Golden-SA is denoted as GSA_BWO. In addition, the EP_GSA_BWO integrates BWO with the EP and Golden-SA strategies. The dimensions of the various methods were set to 30. Each algorithm was executed on all 23 benchmark functions 30 times.
From the horizontal comparison in Table 7, it is not difficult to find that the improvement strategies introduced into MSBWO enhance the performance of BWO to varying degrees. ICMDOBL_BWO outperforms traditional BWO on the test functions except for F12. The mechanism helps the algorithm start searching from a broader solution space, ultimately stably converging to the optimal solution. EP_BWO emphasizes the inheritance of excellent agents while maintaining population diversity, which helps the algorithm quickly converge to a high-quality area in the solution space. EP_BWO significantly outperforms BWO on the F1–F6, F12–F14, F15, and F21–F23 functions, showing good performance in maintaining population diversity and accelerating the convergence speed. SLFSUP_BWO improves the algorithm′s exploration capability in the search space by adaptively adjusting the search step size, and it significantly outperforms BWO on the F1-F4 functions. GSA_BWO updates the population position by simulating the dynamic changes in the sine waves. This strategy stands out in that it improves the algorithm′s global search capability and helps find the global optimal solution. GSA_BWO significantly outperforms BWO except for F7–F11, and F15. Each improvement strategy has unique advantages and is suitable for different types of problems. EP_GSA_BWO performs best on multimodal problems, whether multimodal or fixed-dimensional multimodal problems, as it integrates the advantages of the EP and Gold-SA strategies.
As shown by the std, EP_GSA_BWO has the best stability on most test functions. Its std is generally zero or very small, except for F8. However, GSA_BWO and EP_GSA_BWO have larger stds on F8, indicating that the Gold-SA strategy has a significant fluctuation in performance on F8. EP_BWO generally has a smaller std on the test functions, indicating good stability. The std of ICMDOBL_BWO is generally similar to that of BWO, indicating that the improvement strategy has little impact on stability.
It can be seen from the Wilcoxon signed-rank test and ARV that the variant BWO clearly enhances the performance of BWO, although each improvement strategy has unique advantages and applicable scenarios.

4.3. Comparison with Conventional MAs

To further assess the optimization performance of MSBWO, in addition to BWO, we select four well-known MAs to participate in the competition, namely, dung beetle optimizer (DBO) [50], GWO [27], WOA [26], and PSO [28]. The parameter initialization of all the algorithms was the same as that of their original references. Additionally, the population size was 50, the dimension was 30, and the maximum iteration number was 500. In addition, each function was executed 30 times. Table 8 presents the statistical outcomes in terms of the mean and standard deviation (marked by ‘std’) of the proposed MSBWO compared with other selected algorithms on 23 benchmark problems. The statistical significances of values in Table 8 are shown in Table 9. Figure 2, Figure 3 and Figure 4 present the convergence curves of the three categories of algorithms.
From the statistical results listed in Table 8, MSBWO can find the best solutions, even the optimal solutions, on most of the functions, except for F12, F13, F17, and F19. For F12 and F13, the performance of MSBWO is worse than that of BWO; however, the results are close to those of BWO and are far better than those of the other four methods. For F17 and F19, MSBWO ranks second only to the best method, namely, DBO and PSO on F17 and PSO on F19. All algorithms present similar best average values for F16-F19 but different standard deviations.
According to the p values of the Wilcoxon rank-sum test, which analyzes the significant difference between the paired algorithms in Table 9, the performance of MSBWO has significantly positive differences in these four functions compared with that of the other compared methods. From the overall significant statistical results of the Wilcoxon rank-sum test on all the functions, the worst-case MSBWO produces 14 significantly better, 7 equal, and 2 significantly worse results than the PSO does, and the best case is that MSBWO overwhelmingly succeeds on almost all of the algorithms compared with GWO. It makes sense that MSBWO obtains the best ARV of 1.4565 in the Friedman test. Therefore, the conclusion can be drawn that the proposed MSBWO is the best approach with considerable advantages over five competitive swarm-based algorithms.
From the standard deviation perspective, the standard deviations of MSBWO are the lowest for 15 functions, although those of MSBWO are not the lowest for F1, F3, and F9-F11. These results indicate that the optimization ability of MSBWO is more stable than that of the other algorithms. The performance of MSBWO is not superior on F12, F13, F17, or F19; however, MSBWO can find satisfactory solutions when solving these functions.
The curves in Figure 2, Figure 3 and Figure 4 intuitively draw the convergence rates of the proposed MSBWO, BWO, DBO, GWO, WOA, and PSO for addressing the unimodal (F1, F2, F6, and F7), multimodal (F10–F13), and composition (F15, F21–F23) problems. According to Figure 2, Figure 3 and Figure 4, the MSBWO has powerful advantages in terms of the convergence rate over the other approaches in terms of F1, F2, F10, F11, F15, and F21–F23. Other approaches, especially DBO, GWO, WOA, and PSO, stagnate into local optima during early optimization on F1, F2, F6, F11, F12, and F15, whereas MSBWO has the fastest convergence rate and can obtain the best solutions on these functions. These trends indicate that the improvement in MSBWO is clearly confirmed in most cases of the unimodal, multimodal, and composition tasks.
Accordingly, these experimental results verify that the developed MSBWO has an efficient searching ability at an accelerated convergence speed, which benefits mainly from the ICMDOBL strategy and EP strategy. The ICMDOBL strategy helps the algorithm to have better initial random agents, and the individuals are more equally scattered in the global space and have a better chance to approach the global optimal solution. Moreover, the SLFSUP mechanism allows the algorithm to adjust the step size during the search process according to the current search situation, which can excellently achieve a reasonable balance between the exploitation and exploration abilities. Gold-SA also accelerates convergence and escape from local optima.

4.4. Comparison with SOTA Algorithms

To further investigate the advantages of the proposed MSBWO, the algorithm was compared against five SOTA MAs that were introduced in 2024, namely, the horned lizard optimization algorithm (HLOA) [51], hippopotamus optimization (HO) [52], parrot optimizer (PO) [53], crested porcupine optimizer (CPO) [54], and black-winged kite algorithm (BKA) [55]. The simulation results, including the Wilcoxon test and the Friedman test results, can be seen in Table 10. Table 11 records the p values of the Wilcoxon test, which indicate the statistical significance of the differences between MSBWO and each compared algorithm on the results in Table 10. These results are clearly illustrated in the convergence curves in Figure 5, Figure 6 and Figure 7.
As reported in Table 10, MSBWO achieves the best solutions for approximately 78% of the functions except for F7, F14, F15, and F20. For F7, MSBWO is only worse than PO. The performance of MSBWO on F14 is worse than that of HO, CPO, and BKA. For F15, the means of HO, PO, and CPO are better than those of MSBWO. Nevertheless, according to the p values in Table 11, for F15, there is no significant difference between MSBWO and the PO. For F20, the solution obtained via MSBWO approaches the best solutions.
According to the ARV, the established MSBWO is ranked the best, with a value of 2.2174. Additionally, from Table 11, we observe that MSBWO significantly outperforms the other competitors in general on F1–F13, except for F9–F11, for which all the algorithms achieve the best solutions. This indicates that MSBWO is significantly better than the other five algorithms in optimizing both unimodal and multimodal problems, which reflects its excellent exploitation ability and explorative ability. For F16–F23, MSBWO shows competitive optimization capability. This shows that the performance of MSBWO in handling composition problems is not worse than those of the above advanced methods. According to the above investigations, the performance of MSBWO is superior to that of these outstanding optimizers from an overall perspective.
The convergence curves again prove the merits of MSBWO in an obvious way. From Figure 5 for unimodal functions (F1, F3, F6, and F7) and Figure 6 for multimodal functions (F10–F13), MSBWO significantly achieves the best outcome and fastest convergence rate. In contrast, other competitors, including HO, stagnate into local optima during the early stage on F6, F12, and F13. For the fixed-dimensional functions in Figure 7, although MSBWO does not achieve the fastest convergence rate, the difference in convergence speed compared with the comparative algorithms is not significant, and it also obtains the global optimal solution.
Therefore, the multiple strategies integrated into the MSBWO contribute to strengthening the balance between diversification and intensification. MSBWO effectively has a faster convergence rate or better search ability than outstanding advanced optimizers such as the HLOA, HO, PO, CPO, and BKA.

4.5. Feature Selection Experiment

This section presents a more comprehensive study on the proposed MSBWO in a binary manner according to the feature selection (FS) rules. Distinct test datasets were used to test the proposed approach for FS. They are available from the UCI repository, which can be obtained from the website https://archive.ics.uci.edu/datasets (accessed on 5 May 2024). The details of the datasets used for feature selection are shown in Table 12. As revealed in Table 12, the datasets contain different sizes of features and instances and belong to different subject areas. The difference in the dataset is beneficial for testing the proposed method from different viewpoints.
In this study, we chose the common RF classifier. Simultaneously, four other FS approaches, including binary GWO (BGWO), binary WOA (BWOA), binary DBO (BDBO), and binary BWO (BBWO), are regarded as competitors against the proposed BMSBWO to confirm its efficiency. In the fitness function of Equation (21), the error-rate weight $r_{12}$ is set to 0.9. The number of decision trees in the RF classifier is set to 20. Additionally, the population size is 20, and the maximum number of iterations is 50. In addition, each function was executed 30 times.
The numerical results of comparing BMSBWO with BGWO, BWOA, BDBO, and BBWO on each dataset for FS problems are recorded in Table 13, Table 14, Table 15 and Table 16 in terms of fitness, error rate, mean feature selection size, and average running time. The metric mean feature selection size determines the FS ratio by dividing the FS size by the total size of the features in the original dataset.
As outlined in Table 13, the excellent performance of the BMSBWO is evidently superior to that of the BGWO, BWOA, BDBO, and BBWO on high-dimensional samples S5-S10 in terms of fitness. For the S1-S4 datasets, the BMSBWO is not the sole best, but it still demonstrates good performance. Notably, BBWO exhibits equally excellent fitness values on the S1-S4 datasets as BMSBWO. From the final ARV obtained, the average fitness values obtained by the BMSBWO are much lower than those of the other peers. This shows that the performance of the BMSBWO is superior to those of the other algorithms. According to the final rank values in Table 14, the classification accuracy obtained via the BMSBWO still exceeds those of the other algorithms. Except for S1, S4, and S8, the classification error rates of BMSBWO are lower than those of its rivals.
The ultimate goal of feature selection is to improve the prediction accuracy and reduce the dimensionality of the prediction results. Obtaining the optimal feature subset by eliminating features with little or no predictive information and strongly correlated redundant features is the core of this work. Table 15 shows that the BMSBWO algorithm obtains a subset of features with minimum dimensionality on each dataset, indicating that the BMSBWO algorithm has a better feature selection capability. Combined with the classification error rate in Table 14, it can always filter out fewer features with a low error rate. Furthermore, BMSBWO even achieves a 0% error rate by selecting the fewest features on S4.
A comparison of the time consumption results in Table 16 reveals that BMSBWO ranks fifth, which shows that it takes more time than most of the binary optimizers. This is because the improved strategies, such as EP, SLFSUP, and Golden-SA, somewhat increase the time cost, which can also be seen from the computational complexity of MSBWO. Although BMSBWO has a greater time cost, it is worthwhile considering the comprehensive performance shown in Table 13, Table 14 and Table 15: BMSBWO outperforms the other four binary optimizers in handling the feature selection problem. Of course, how to reduce the computing time of BMSBWO while ensuring performance remains a direction of our future research.
The best fitness values during the iterative process are presented below as convergence curves to make the experimental results more intuitive and clearer. Figure 8 shows the convergence curves of the compared algorithms on the 10 datasets. The Y-axis shows the average fitness value under ten independent executions, and the X-axis indicates the number of iterations. The convergence values of the BMSBWO are much smaller than those of the other algorithms on approximately 80% of the benchmark datasets. It can also be observed that the MSBWO method is not prone to falling into local optima, demonstrating stronger exploration capabilities on the S5 dataset. All of these benefits stem from the variety of update methods provided by the ICMDOBL, EP, and SLFSUP strategies, which ensure the diversity of the population and enable the algorithm to have more opportunities to explore optimal regions.
Handling the balance between the global exploration and local exploitation search phases is a significant factor that makes BMSBWO superior to the other algorithms. The experimental results indicate that its powerful search capability enables the BMSBWO to perform excellently on a wide range of complex problems.

5. Conclusions and Future Works

In this paper, a novel improved BWO was constructed to address the limited diversity of population positions and the exploration–exploitation imbalance of the original BWO. The proposed optimizer, called MSBWO, contains an initialization stage and an updating stage. In the initialization stage, the ICMDOBL strategy improves the diversity and quality of the initial population. In the updating stage, the EP, SLFSUP, and Golden-SA strategies were integrated with BWO to improve the rigor and accuracy of the algorithm in local space exploration, enhancing local search capabilities and accelerating the convergence speed of the algorithm.
The algorithm was applied to CEC2005 global optimization problems. The global optimization performance of MSBWO was verified by comparing it to other conventional algorithms, DBO, GWO, WOA, and PSO, as well as the SOTA algorithms HLOA, HO, PO, CPO, and BKA. The comprehensive results of the experiment indicated that the established MSBWO has excellent exploration abilities, which helps the algorithm jump out of local optimal values and accurately explore more promising regions in most cases. Thus, it is better than other optimizers in terms of search ability and convergence speed when tackling global optimization problems.
In addition, we mapped MSBWO into binary space via a mapping function based on the continuous version of MSBWO as a feature selection technique. Ten UCI datasets of different dimensions were utilized to benchmark the performance of binary MSBWO in feature selection. The experimental results clearly verified that the BMSBWO outperforms the other investigated methods with respect to fitness, mean feature selection size, and error rate measures compared with the other algorithms. This has important implications in terms of reducing the data dimensionality and improving the computing performance.
Accordingly, we can regard the proposed MSBWO algorithm as a potential global optimization method as well as a promising feature selection technique. However, the integration of improvement strategies, which contribute to enhancing the performance of the original BWO, resulted in more time costs to attain high-quality best solutions. Therefore, it is necessary to harmonize efficiency with accuracy when tackling practical problems. In future studies, a promising direction is to use the proposed method in multi-objective optimization tasks. We can also expand the application of this method to more real-life problems such as machine learning, medical applications, financial fields, and engineering optimization tasks. Moreover, research on integrating the novel BWO algorithm with other strategies to build a much better optimizer is a worthwhile endeavour.

Author Contributions

Conceptualization, Z.F. and Z.X.; methodology, Z.F. and Z.X.; software, C.Z.; validation, Z.H. and X.L.; data curation, C.Z.; writing—original draft preparation, Z.F. and Z.X.; writing—review and editing, Z.H. and X.L.; visualization, Z.H.; supervision, X.L.; project administration, X.L.; funding acquisition, Z.F. and Z.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Hubei Province of China, grant number 2022CFB536, and the Initial Scientific Research Foundation for Talented Scholars of Nanchang Institute of Science & Technology, grant number NGRCZX-20-18.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yuan, Y.G.; Wei, J.A.; Huang, H.S.; Jiao, W.D.; Wang, J.X.; Chen, H.L. Review of resampling techniques for the treatment of imbalanced industrial data classification in equipment condition monitoring. Eng. Appl. Artif. Intell. 2023, 126, 106911. [Google Scholar] [CrossRef]
  2. Liang, Y.C.; Minanda, V.; Gunawan, A. Waste collection routing problem: A mini-review of recent heuristic approaches and applications. Waste Manage. Res. 2022, 40, 519–537. [Google Scholar] [CrossRef] [PubMed]
  3. Kuo, R.; Li, S.S. Applying particle swarm optimization algorithm-based collaborative filtering recommender system considering rating and review. Appl. Soft Comput. 2023, 135, 110038. [Google Scholar] [CrossRef]
  4. Fan, S.K.S.; Lin, W.K.; Jen, C.H. Data-driven optimization of accessory combinations for final testing processes in semiconductor manufacturing. J. Manuf. Syst. 2022, 63, 275–287. [Google Scholar] [CrossRef]
  5. Kler, R.; Gangurde, R.; Elmirzaev, S.; Hossain, M.S.; Vo, N.V.T.; Nguyen, T.V.T.; Kumar, P.N. Optimization of Meat and Poultry Farm Inventory Stock Using Data Analytics for Green Supply Chain Network. Discrete Dyn. Nat. Soc. 2022, 2022, 8970549. [Google Scholar] [CrossRef]
  6. Yu, K.J.; Liang, J.; Qu, B.Y.; Luo, Y.; Yue, C.T. Dynamic Selection Preference-Assisted Constrained Multiobjective Differential Evolution. IEEE Trans. Syst. Man. Cybern. Syst. 2022, 52, 2954–2965. [Google Scholar] [CrossRef]
  7. Yu, K.J.; Sun, S.R.; Liang, J.; Chen, K.; Qu, B.Y.; Yue, C.T.; Wang, L. A bidirectional dynamic grouping multi-objective evolutionary algorithm for feature selection on high-dimensional classification. Inf. Sci. 2023, 648, 119619. [Google Scholar] [CrossRef]
  8. Uzer, M.S.; Inan, O.; Yilmaz, N. A hybrid breast cancer detection system via neural network and feature selection based on SBS, SFS and PCA. Neural Comput. Appl. 2013, 23, 719–728. [Google Scholar] [CrossRef]
  9. Guyon, I.; Elisseeff, A. An introduction to variable and feature selection. J. Mach. Learn. Res. 2003, 3, 1157–1182. [Google Scholar]
  10. Arora, S.; Anand, P. Binary butterfly optimization approaches for feature selection. Expert. Syst. Appl. 2019, 116, 147–160. [Google Scholar] [CrossRef]
  11. Karakoyun, M.; Ozkis, A. A binary tree seed algorithm with selection-based local search mechanism for huge-sized optimization problems. Appl. Soft Comput. 2022, 129, 109590. [Google Scholar] [CrossRef]
  12. Yilmaz, Ö.; Altun, A.A.; Köklü, M. Optimizing the learning process of multi-layer perceptrons using a hybrid algorithm based on MVO and SA. Int. J. Ind. Eng. Comput. 2022, 13, 617–640. [Google Scholar] [CrossRef]
  13. Zhang, R.Z.; Zhu, Y.J.; Liu, Z.S.; Feng, G.H.; Diao, P.F.; Wang, H.E.; Fu, S.H.; Lv, S.; Zhang, C. A Back Propagation Neural Network Model for Postharvest Blueberry Shelf-Life Prediction Based on Feature Selection and Dung Beetle Optimizer. Agriculture 2023, 13, 1784. [Google Scholar] [CrossRef]
  14. Wang, X.Y.; Yang, J.; Teng, X.L.; Xia, W.J.; Jensen, R. Feature selection based on rough sets and particle swarm optimization. Pattern Recognit. Lett. 2007, 28, 459–471. [Google Scholar] [CrossRef]
  15. Fang, L.L.; Liang, X.Y. A Novel Method Based on Nonlinear Binary Grasshopper Whale Optimization Algorithm for Feature Selection. J. Bionic Eng. 2023, 20, 237–252. [Google Scholar] [CrossRef]
  16. Akinola, O.; Oyelade, O.N.; Ezugwu, A.E. Binary Ebola Optimization Search Algorithm for Feature Selection and Classification Problems. Appl. Sci. 2022, 12, 11787. [Google Scholar] [CrossRef]
  17. Shikoun, N.H.; Al-Eraqi, A.S.; Fathi, I.S. BinCOA: An Efficient Binary Crayfish Optimization Algorithm for Feature Selection. IEEE Access 2024, 12, 28621–28635. [Google Scholar] [CrossRef]
  18. Sun, L.; Si, S.S.; Zhao, J.; Xu, J.C.; Lin, Y.J.; Lv, Z.Y. Feature selection using binary monarch butterfly optimization. Appl. Intell. 2023, 53, 706–727. [Google Scholar] [CrossRef]
  19. Ibrahim, R.A.; Ewees, A.A.; Oliva, D.; Abd Elaziz, M.; Lu, S.F. Improved salp swarm algorithm based on particle swarm optimization for feature selection. J. Ambient Intell. Hum. Comput. 2019, 10, 3155–3169. [Google Scholar] [CrossRef]
  20. Al-Wajih, R.; Abdulkadir, S.J.; Aziz, N.; Al-Tashi, Q.; Talpur, N. Hybrid Binary Grey Wolf With Harris Hawks Optimizer for Feature Selection. IEEE Access 2021, 9, 31662–31677. [Google Scholar] [CrossRef]
  21. Guo, W.Y.; Liu, T.; Dai, F.; Xu, P. An Improved Whale Optimization Algorithm for Feature Selection. Comput. Mater. Continua 2020, 62, 337–354. [Google Scholar] [CrossRef]
  22. Yao, L.G.; Yang, J.; Yuan, P.L.; Li, G.H.; Lu, Y.; Zhang, T.H. Multi-Strategy Improved Sand Cat Swarm Optimization: Global Optimization and Feature Selection. Biomimetics 2023, 8, 492. [Google Scholar] [CrossRef]
  23. Uzer, M.S.; Inan, O. A novel feature selection using binary hybrid improved whale optimization algorithm. J. Supercomput. 2023, 79, 10020–10045. [Google Scholar] [CrossRef]
  24. Faris, H.; Mafarja, M.M.; Heidari, A.A.; Aljarah, I.; Al-Zoubi, A.M.; Mirjalili, S.; Fujita, H. An efficient binary Salp Swarm Algorithm with crossover scheme for feature selection problems. Knowl.-Based Syst. 2018, 154, 43–67. [Google Scholar] [CrossRef]
  25. Zhong, C.T.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  26. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Software 2016, 95, 51–67. [Google Scholar] [CrossRef]
  27. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Software 2014, 69, 46–61. [Google Scholar] [CrossRef]
  28. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks (ICNN 95), The University of Western Australia, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  29. Anilkumar, P.; Venugopal, P. An improved beluga whale optimizer—Derived Adaptive multi-channel DeepLabv3+ for semantic segmentation of aerial images. PLoS ONE 2023, 18, e0290624. [Google Scholar] [CrossRef]
  30. Yuan, H.; Chen, Q.; Li, H.; Zeng, D.; Wu, T.; Wang, Y.; Zhang, W. Improved beluga whale optimization algorithm based cluster routing in wireless sensor networks. Math. Biosci. Eng. 2024, 21, 4587–4625. [Google Scholar] [CrossRef]
  31. Chen, Z.; Song, D. Modeling landslide susceptibility based on convolutional neural network coupling with metaheuristic optimization algorithms. Int. J. Digit. Earth 2023, 16, 3384–3416. [Google Scholar] [CrossRef]
  32. Deepika, C.; Kuchibhotla, S. Deep-CNN based knowledge learning with Beluga Whale optimization using chaogram transformation using intelligent sensors for speech emotion recognition. Meas. Sens. 2024, 32, 101030. [Google Scholar] [CrossRef]
  33. Shen, X.H.; Wu, Y.G.; Li, L.X.; Zhang, T.X. A modified adaptive beluga whale optimization based on spiral search and elitist strategy for short-term hydrothermal scheduling. Electr. Power Syst. Res. 2024, 228, 110051. [Google Scholar] [CrossRef]
  34. Youssef, H.; Kamel, S.; Hassan, M.H.; Mohamed, E.M.; Belbachir, N. Exploring LBWO and BWO Algorithms for Demand Side Optimization and Cost Efficiency: Innovative Approaches to Smart Home Energy Management. IEEE Access 2024, 12, 28831–28852. [Google Scholar] [CrossRef]
  35. Chen, H.M.; Wang, Z.; Wu, D.; Jia, H.M.; Wen, C.S.; Rao, H.H.; Abualigah, L. An improved multi-strategy beluga whale optimization for global optimization problems. Math. Biosci. Eng. 2023, 20, 13267–13317. [Google Scholar] [CrossRef] [PubMed]
  36. Horng, S.C.; Lin, S.S. Improved Beluga Whale Optimization for Solving the Simulation Optimization Problems with Stochastic Constraints. Mathematics 2023, 11, 1854. [Google Scholar] [CrossRef]
  37. Jia, H.; Wen, Q.; Wu, D.; Wang, Z.; Wang, Y.; Wen, C.; Abualigah, L. Modified beluga whale optimization with multi-strategies for solving engineering problems. J. Comput. Des. Eng. 2023, 10, 2065–2093. [Google Scholar] [CrossRef]
  38. Wei, J.A.; Wang, J.X.; Huang, H.S.; Jiao, W.D.; Yuan, Y.G.; Chen, H.L.; Wu, R.; Yi, J.H. Novel extended NI-MWMOTE-based fault diagnosis method for data-limited and noise-imbalanced scenarios. Expert. Syst. Appl. 2024, 238, 121799. [Google Scholar] [CrossRef]
  39. Li, H.S.; Lu, G.J.; Su, J.D.; Hou, T.; Huang, F.G.; Pan, Y.S. Improved Particle Swarm Fuzzy PID Temperature Control for the Pellet Grills. IEEE Access 2024, 12, 66373–66381. [Google Scholar] [CrossRef]
  40. Liu, K.W.; Wang, X.C.; Wang, L.D. An Improved Memetic Algorithm for Urban Rail Train Operation Strategy Optimization. Int. J. Innov. Comput. Inf. Control 2020, 16, 241–256. [Google Scholar] [CrossRef]
  41. Tanyildizi, E.; Demir, G. Golden Sine Algorithm: A Novel Math-Inspired Algorithm. Adv. Electr. Comput. Eng. 2017, 17, 71–78. [Google Scholar] [CrossRef]
42. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H.L. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  43. Onay, F.K.; Aydemir, S.B. Chaotic hunger games search optimization algorithm for global optimization and engineering problems. Math. Comput. Simul. 2022, 192, 514–536. [Google Scholar] [CrossRef]
  44. Cheng, Y.H.; Kuo, C.N.; Lai, C.M. Comparison of the adaptive inertia weight PSOs based on chaotic logistic map and tent map. In Proceedings of the 2017 IEEE International Conference on Information and Automation (ICIA), Macao, China, 18–20 July 2017; pp. 355–360. [Google Scholar]
  45. You, M.K.; Wu, Y.J.; Wang, Y.L.; Xie, X.Y.; Xu, C. Parameter Optimization of PID Controller Based on Improved Sine-SOA Algorithm. In Proceedings of the 19th IEEE International Conference on Mechatronics and Automation (IEEE ICMA), Tianjin, China, 7–10 August 2022; pp. 12–17. [Google Scholar]
  46. Arora, S.; Anand, P. Chaotic grasshopper optimization algorithm for global optimization. Neural Comput. Appl. 2019, 31, 4385–4405. [Google Scholar] [CrossRef]
  47. Sharma, S.R.; Kaur, M.; Singh, B. A Self-adaptive Bald Eagle Search optimization algorithm with dynamic opposition-based learning for global optimization problems. Expert. Syst. 2023, 40, e13170. [Google Scholar] [CrossRef]
  48. Ewees, A.A.; Ismail, F.H.; Sahlol, A.T. Gradient-based optimizer improved by Slime Mould Algorithm for global optimization and feature selection for diverse computation problems. Expert. Syst. Appl. 2023, 213, 118872. [Google Scholar] [CrossRef]
  49. Liang, J.J.; Suganthan, P.N.; Deb, K. Novel composition test functions for numerical global optimization. In Proceedings of the 2005 IEEE Swarm Intelligence Symposium, 2005, SIS 2005, Pasadena, CA, USA, 8–10 June 2005; pp. 68–75. [Google Scholar]
  50. Xue, J.K.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  51. Peraza-Vázquez, H.; Peña-Delgado, A.; Merino-Treviño, M.; Morales-Cepeda, A.B.; Sinha, N. A novel metaheuristic inspired by horned lizard defense tactics. Artif. Intell. Rev. 2024, 57, 59. [Google Scholar] [CrossRef]
  52. Amiri, M.H.; Hashjin, N.M.; Montazeri, M.; Mirjalili, S.; Khodadadi, N. Hippopotamus optimization algorithm: A novel nature-inspired optimization algorithm. Sci. Rep. 2024, 14, 5032. [Google Scholar] [CrossRef]
  53. Lian, J.B.; Hui, G.H.; Ma, L.; Zhu, T.; Wu, X.C.; Heidari, A.A.; Chen, Y.; Chen, H.L. Parrot optimizer: Algorithm and applications to medical problems. Comput. Biol. Med. 2024, 172, 108064. [Google Scholar] [CrossRef]
  54. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M. Crested Porcupine Optimizer: A new nature-inspired metaheuristic. Knowl.-Based Syst. 2024, 284, 111257. [Google Scholar] [CrossRef]
  55. Wang, J.; Wang, W.C.; Hu, X.X.; Qiu, L.; Zang, H.F. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024, 57, 98. [Google Scholar] [CrossRef]
Figure 1. Flowchart of MSBWO; the parts that have been improved are highlighted with coloured boxes.
Figure 2. Convergence curves of MSBWO and conventional MAs on F1, F2, F6, and F7.
Figure 3. Convergence curves of MSBWO and conventional MAs on F10 to F13.
Figure 4. Convergence curves of MSBWO and conventional MAs on F15, F21, F22, and F23.
Figure 5. Convergence curves of the MSBWO and SOTA algorithms on F1, F2, F6, and F7.
Figure 6. Convergence curves of the MSBWO and SOTA algorithms on F10 to F13.
Figure 7. Convergence curves of the MSBWO and SOTA algorithms on F15, F21, F22, and F23.
Figure 8. Convergence curves of the BMSBWO and other binary algorithms on 10 datasets.
Table 1. A comprehensive list of abbreviations utilized in this article.
Abbreviation | Description
FS | Feature selection
BWO | Beluga whale optimization
MSBWO | Multi-strategies improved beluga whale optimization
ICMDOBL | Improved circle mapping and dynamic opposition-based learning
EP | Elite pool
SLFSUP | Step-adaptive Lévy flight and spiral updating position
Gold-SA | Golden sine algorithm
SOTA | State-of-the-art
BMSBWO | Binary multi-strategies improved beluga whale optimization
RF | Random forest
MAs | Metaheuristic algorithms
WOA | Whale optimization algorithm
GWO | Grey wolf optimizer
PSO | Particle swarm optimization
ICM | Improved circle mapping
DOBL | Dynamic opposition-based learning
HHO | Harris hawks optimizer
LF | Lévy flight
std | Standard deviation
DBO | Dung beetle optimizer
HLOA | Horned lizard optimization algorithm
HO | Hippopotamus optimization
PO | Parrot optimizer
CPO | Crested porcupine optimizer
BKA | Black-winged kite algorithm
BGWO | Binary grey wolf optimizer
BWOA | Binary whale optimization algorithm
BDBO | Binary dung beetle optimizer
BBWO | Binary beluga whale optimization
Table 2. Unimodal benchmark functions.
Function | Range | fmin
F1(x) = \sum_{i=1}^{n} x_i^2 | [−100, 100] | 0
F2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i| | [−10, 10] | 0
F3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2 | [−100, 100] | 0
F4(x) = \max_i \{ |x_i|, 1 \le i \le n \} | [−100, 100] | 0
F5(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right] | [−30, 30] | 0
F6(x) = \sum_{i=1}^{n} \left( \lfloor x_i + 0.5 \rfloor \right)^2 | [−100, 100] | 0
F7(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1) | [−1.28, 1.28] | 0
Table 3. Multimodal benchmark functions.
Function | Range | fmin
F8(x) = \sum_{i=1}^{n} -x_i \sin\left( \sqrt{|x_i|} \right) | [−500, 500]^n * | −418.9829 × n
F9(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right] | [−5.12, 5.12]^n | 0
F10(x) = -20 \exp\left( -0.2 \sqrt{ \frac{1}{n} \sum_{i=1}^{n} x_i^2 } \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i) \right) + 20 + e | [−32, 32]^n | 0
F11(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1 | [−600, 600]^n | 0
F12(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + \frac{x_i + 1}{4} and u(x_i, a, k, m) = k(x_i - a)^m for x_i > a; 0 for −a ≤ x_i ≤ a; k(−x_i − a)^m for x_i < −a | [−50, 50]^n | 0
F13(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_i + 1) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2\pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4) | [−50, 50]^n | 0
* n is the dimension of the solution.
Table 4. Fixed-dimensional multimodal benchmark functions.
Function | Range | fmin
F14(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{ j + \sum_{i=1}^{2} (x_i - a_{ij})^6 } \right)^{-1} | [−65.536, 65.536]^2 | 1
F15(x) = \sum_{i=1}^{11} \left[ a_i - \frac{ x_1 (b_i^2 + b_i x_2) }{ b_i^2 + b_i x_3 + x_4 } \right]^2 | [−5, 5]^4 | 0.00030
F16(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4 | [−5, 5]^2 | −1.0316
F17(x) = \left( x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8\pi} \right) \cos x_1 + 10 | [−5, 5]^2 | 0.398
F18(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \right] | [−2, 2]^2 | 3
F19(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right) | [0, 1]^3 | −3.86
F20(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right) | [0, 1]^6 | −3.32
F21(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1} | [0, 10]^4 | −10.1532
F22(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1} | [0, 10]^4 | −10.4028
F23(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1} | [0, 10]^4 | −10.5363
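To make the definitions in Tables 2–4 concrete, the sketch below shows how two of the benchmarks (F1 and F10) can be evaluated for a candidate solution. It is a minimal NumPy illustration that assumes n = 30 and the search ranges listed above; it is not the implementation used in the reported experiments.

```python
import numpy as np

# Minimal illustrations of two benchmarks from Tables 2 and 3 (F1 and F10).
def f1_sphere(x):
    """F1: sum of squared components; global minimum 0 at the origin."""
    return np.sum(x ** 2)

def f10_ackley(x):
    """F10: Ackley function; global minimum 0 at the origin."""
    n = x.size
    term1 = -20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
    term2 = -np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
    return term1 + term2 + 20.0 + np.e

# Evaluate random candidates drawn from the stated search ranges (n = 30 assumed).
rng = np.random.default_rng(0)
print(f1_sphere(rng.uniform(-100.0, 100.0, size=30)))   # F1 range [-100, 100]
print(f10_ackley(rng.uniform(-32.0, 32.0, size=30)))    # F10 range [-32, 32]
```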
Table 5. Overall results of the scalability test on 13 problems at five dimensionalities (100, 200, 500, 1000, and 2000).
Fun | Metric | Dim = 100 | Dim = 200 | Dim = 500 | Dim = 1000 | Dim = 2000
(under each dimension column, the two sub-columns report MSBWO first, then BWO)
F1Mean03.1325 × 10−24003.9284 × 10−24202.6941 × 10−24304.8315 × 10−25003.3198 × 10−247
std0000000000
F2Mean4.9784 × 10−2581.7522 × 10−1224.4907 × 10−2571.9363 × 10−1221.4864 × 10−2582.0048 × 10−1221.4654 × 10−2583.6978 × 10−1287.5977 × 10−2564.7685 × 10−127
std07.4403 × 10−12206.7326 × 10−12208.5800 × 10−12201.2181 × 10−12701.1725 × 10−126
F3Mean02.8571 × 10−23603.2202 × 10−23202.0805 × 10−23703.3291 × 10−23001.1124 × 10−228
std0000000000
F4Mean3.5853 × 10−2568.0589 × 10−1213.0137 × 10−2572.1383 × 10−1194.6305 × 10−2583.7698 × 10−1212.3963 × 10−2512.6188 × 10−1117.1375 × 10−2545.4145 × 10−105
std04.054 × 10−12001.1281 × 10−11801.0441 × 10−12006.198 × 10−11101.9742 × 10−104
F5Mean3.4833 × 10−33.4724 × 10−22.9821 × 10−33.4543 × 10−23.3810 × 10−34.5225 × 10−22.6676 × 10−42.0253 × 10−51.6484 × 10−46.676 × 10−5
std2.9867 × 10−32.8795 × 10−22.5249 × 10−32.5515 × 10−23.4903 × 10−33.0284 × 10−27.8311 × 10−43.9018 × 10−52.9464 × 10−41.3279 × 10−4
F6Mean8.8239 × 10−63.8552 × 10−41.097 × 10−53.9011 × 10−48.1703 × 10−63.8265 × 10−42.3526 × 10−79.8573 × 10−131.2257 × 10−62.4559 × 10−12
std8.1335 × 10−61.9332 × 10−46.3314 × 10−61.8887 × 10−44.6765 × 10−61.7915 × 10−43.3537 × 10−71.2323 × 10−122.8654 × 10−64.625 × 10−12
F7Mean3.2832 × 10−55.7773 × 10−53.511 × 10−51.0244 × 10−43.717 × 10−57.8434 × 10−54.6477 × 10−57.7971 × 10−53.9363 × 10−58.8667 × 10−5
std2.8017 × 10−56.0136 × 10−52.6436 × 10−59.0007 × 10−52.8646 × 10−56.7439 × 10−53.8874 × 10−55.4964 × 10−52.0208 × 10−57.2232 × 10−5
F8Mean−1.5067 × 10114−4.0675 × 103−1.3525 × 10114−4.0179 × 103−2.3274 × 10112−4.0622 × 103−1.9952 × 10110−4.1898 × 105−4.3998 × 10113−8.3797 × 105
std6.2796 × 101141.6335 × 1025.3718 × 101141.962 × 1028.5034 × 101121.8575 × 1027.4347 × 101109.0956 × 10−82.4097 × 101141.7306 × 10−7
F9Mean0000000000
std0000000000
F10Mean4.4409 × 10−164.4409 × 10−164.4409 × 10−164.4409 × 10−164.4409 × 10−164.4409 × 10−164.4409 × 10−164.4409 × 10−164.4409 × 10−164.4409 × 10−16
std0000000000
F11Mean0000000000
std0000000000
F12Mean2.9151 × 10−68.7024 × 10−52.7176 × 10−61.0203 × 10−42.9509 × 10−67.8079 × 10−59.5097 × 10−103.5524 × 10−161.0891 × 10−96.5309 × 10−16
std1.6047 × 10−64.9517 × 10−51.9696 × 10−65.8337 × 10−51.8946 × 10−63.8323 × 10−52.1414 × 10−93.952 × 10−161.9733 × 10−91.1161 × 10−15
F13Mean9.7534 × 10−61.1854 × 10−49.5108 × 10−61.0988 × 10−47.6788 × 10−69.9716 × 10−57.652 × 10−73.1858 × 10−131.3665 × 10−75.3749 × 10−13
std1.1971 × 10−57.7015 × 10−59.3821 × 10−69.4893 × 10−56.4942 × 10−66.0157 × 10−52.9231 × 10−67.6574 × 10−133.5755 × 10−71.2843 × 10−12
+/=/− 10/3/010/3/010/3/06/3/46/4/3
ARV 1.11541.88461.11541.88461.11541.88461.42311.47691.38461.6154
Rank 1212121212
Table 6. BWO with one or more improvement strategies.
Algorithm | ICMDOBL | EP | SLFSUP | Golden-SA
BWO | 0 | 0 | 0 | 0
ICMDOBL_BWO | 1 | 0 | 0 | 0
EP_BWO | 0 | 1 | 0 | 0
SLFSUP_BWO | 0 | 0 | 1 | 0
GSA_BWO | 0 | 0 | 0 | 1
EP_GSA_BWO | 0 | 1 | 0 | 1
Table 7. Results of the BWO variants with the Wilcoxon signed-rank test.
Fun | Metric | BWO | ICMDOBL_BWO | EP_BWO | SLFSUP_BWO | GSA_BWO | EP_GSA_BWO
F1Mean7.0013 × 10−2601.4336 × 10−2624.9908 × 10−269000
std0000000
F2Mean2.8825 × 10−1332.5311 × 10−1333.7554 × 10−1372.5676 × 10−2112.4955 × 10−2272.917 × 10−224
std1.0065 × 10−1327.4073 × 10−1331.039 × 10−136000
F3Mean1.4033 × 10−2461.0023 × 10−2453.3437 × 10−25005.9322 × 10−3051.1606 × 10−304
std000000
F4Mean6.2102 × 10−1281.6457 × 10−1282.0551 × 10−1331.2706 × 10−2031.0216 × 10−2171.0178 × 10−216
std3.1092 × 10−1275.8050 × 10−1284.2469 × 10−133000
F5Mean2.4149 × 10−71.4948 × 10−79.0321 × 10−132.9282 × 10−72.3391 × 10−88.8411 × 10−14
std3.8877 × 10−72.603 × 10−74.1987 × 10−124.438 × 10−78.7891 × 10−84.7885 × 10−13
F6Mean5.7644 × 10−159.0922 × 10−1502.045 × 10−149.65231 × 10−160
std7.0405 × 10−151.6401 × 10−1402.5569 × 10−141.2376 × 10−150
F7Mean7.6282 × 10−57.3459 × 10−58.1907 × 10−56.4404 × 10−54.0187 × 10−53.7322 × 10−5
std6.5538 × 10−55.7310 × 10−58.9211 × 10−55.9111 × 10−53.3717 × 10−52.7464 × 10−5
F8Mean−1.257 × 104−1.257 × 104−1.257 × 104−1.257 × 104−3.4676 × 10148−1.485 × 10117
std2.8059 × 10−92.743 × 10−91.8501 × 10−123.4371 × 10−71.8709 × 101498.1337 × 10117
F9Mean000000
std000000
F10Mean4.4409 × 10−164.4409 × 10−164.4409 × 10−164.4409 × 10−164.4409 × 10−164.4409 × 10−16
std000000
F11Mean000000
std000000
F12Mean1.9893 × 10−141.9976 × 10−142.7406 × 10−322.7215 × 10−144.8608 × 10−157.148 × 10−32
std2.8979 × 10−142.9732 × 10−142.3483 × 10−324.5395 × 10−145.2525 × 10−152.3217 × 10−31
F13Mean1.3464 × 10−131.5827 × 10−131.2251 × 10−302.456 × 10−132.998 × 10−146.7572 × 10−32
std2.4346 × 10−133.2734 × 10−135.9353 × 10−303.4683 × 10−135.198 × 10−141.1523 × 10−31
F14Mean0.9980.9980.9980.9980.9980.998
std7.6912 × 10−72.0118 × 10−101.7869 × 10−141.7285 × 10−91.7885 × 10−112.3142 × 10−16
F15Mean3.4383 × 10−43.1749 × 10−43.4516 × 10−43.4657 × 10−43.3169 × 10−43.4711 × 10−4
std4.1472 × 10−56.4871 × 10−64.8746 × 10−56.3006 × 10−52.8705 × 10−54.6637 × 10−5
F16Mean−1.0315−1.0316−1.0316−1.0315−1.0316−1.0316
std1.3142 × 10−47.4459 × 10−54.148 × 10−51.8771 × 10−42.4755 × 10−51.8592 × 10−6
F17Mean0.3988 × 10−10.39970.40080.39950.39810.3982
std1.3235 × 10−31.8371 × 10−34.4741 × 10−32.1497 × 10−31.8057 × 10−43.4807 × 10−4
F18Mean3.56873.3993.91053.42923.00033.0004
std0.49290.35921.06800.42574.4043 × 10−44.2285 × 10−4
F19Mean−3.8594−3.8586−3.8543−3.8595−3.8622−3.8616
std2.71 × 10−32.7328 × 10−34.4753 × 10−32.4504 × 10−33.6126 × 10−47.9765 × 10−4
F20Mean−3.288−3.3023−3.2875−3.3004−3.3064−3.3185
std3.8818 × 10−29.5708 × 10−33.0113 × 10−22.3301 × 10−22.6131 × 10−24.4554 × 10−3
F21Mean−10.149−10.150−10.153−10.147−10.153−10.153
std5.5906 × 10−33.9356 × 10−32.028 × 10−66.9848 × 10−34.8783 × 10−63.3077 × 10−8
F22Mean−10.399−10.396−10.403−10.397−10.403−10.403
std3.8217 × 10−31.5631 × 10−22.7891 × 10−67.5756 × 10−23.4611 × 10−65.6416 × 10−7
F23Mean−10.531−10.533−10.536−10.527−10.536−10.536
std6.874 × 10−34.7865 × 10−33.0245 × 10−62.1245 × 10−25.6398 × 10−66.0013 × 10−7
+/=/− 1/18 /42/8/131/18/40/4/190/4/19
ARV 4.56524.2173.58694.2172.28262.0870
Rank 64.534.521
Table 8. Results of MSBWO and the original metaheuristic algorithms.
Fun | Metric | MSBWO | BWO | DBO | GWO | WOA | PSO
F1Mean01.4907 × 10−2601.4828 × 10−1011.7561 × 10−333.6949 × 10−850.1004
std008.1219 × 10−1012.09 × 10−331.9188 × 10−840.1005
F2Mean1.2231 × 10−2591.4586 × 10−1317.398 × 10−647.085 × 10−202.1589 × 10−514.115 × 10−2
std06.8565 × 10−1314.0335 × 10−635.5593 × 10−201.1804 × 10−502.1195 × 10−2
F3Mean01.7217 × 10−2441.0467 × 10−605.4544 × 10−82.5192 × 1041.1183 × 103
std005.52 × 10−601.3807 × 10−71.0986 × 1049.3686 × 102
F4Mean2.7801 × 10−2551.3531 × 10−1283.0078 × 10−602.3538 × 10−832.7035.8991
std05.1694 × 10−1281.1732 × 10−592.1745 × 10−826.1721.3136
F5Mean9.3369 × 10−82.0861 × 10−725.08226.80127.3262.5137 × 102
std3.6746 × 10−75.1346 × 10−70.20560.68730.36875.4071 × 102
F6Mean1.7025 × 10−181.143 × 10−144.349 × 10−80.56898.4008 × 10−28.5094 × 10−2
std3.1276 × 10−181.437 × 10−149.5797 × 10−80.34617.317 × 10−29.0455 × 10−2
F7Mean4.1334 × 10−57.2328 × 10−51.7576 × 10−31.2252 × 10−32.938 × 10−33.2197 × 10−2
std3.7571 × 10−55.8612 × 10−51.1258 × 10−36.0667 × 10−42.5824 × 10−31.1147 × 10−2
F8Mean−5.559 × 10112−1.2569 × 104−8.9143 × 103−6.2936 × 103−1.1169 × 104−8.0967 × 103
std2.7117 × 101132.6663 × 10−91.4104 × 1034.6811 × 1021.7026 × 1036.7177 × 102
F9Mean002.90501.29171.8948 × 10−1545.336
std007.44512.68271.0378 × 10−1413.798
F10Mean4.4409 × 10−164.4409 × 10−164.4409 × 10−164.2721 × 10−143.9968 × 10−150.22272
std0005.1399 × 10−152.4685 × 10−150.3444
F11Mean002.1263 × 10−32.2938 × 10−32.7297 × 10−30.1618
std001.1646 × 10−26.4787 × 10−31.4951 × 10−20.1025
F12Mean3.1128 × 10−111.8546 × 10−146.1145 × 10−103.258 × 10−21.3093 × 10−22.6513 × 10−2
std9.8555 × 10−113.5716 × 10−141.1332 × 10−91.7838 × 10−22.5505 × 10−25.8293 × 10−2
F13Mean3.2653 × 10−111.0811 × 10−138.5229 × 10−20.44210.22160.1118
std9.4472 × 10−111.418 × 10−130.12340.19180.18170.1284
F14Mean0.9980.9980.9982.15031.68730.998
std3.7695 × 10−126.1182 × 10−71.3675 × 10−161.89151.87344.1233 × 10−17
F15Mean3.1194 × 10−43.4182 × 10−47.5692 × 10−44.4166 × 10−37.5181 × 10−41.7582 × 10−3
std1.0641 × 10−54.3411 × 10−53.7425 × 10−48.1143 × 10−33.7044 × 10−45.0652 × 10−3
F16Mean−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
std1.7061 × 10−121.3142 × 10−46.5843 × 10−161.2002 × 10−87.3904 × 10−106.2532 × 10−16
F17Mean0.39790.39920.39790.39790.39790.3979
std1.1133 × 10−81.5727 × 10−302.4381 × 10−53.4691 × 10−60
F18Mean33.25943333
std1.0092 × 10−110.23361.5494 × 10−151.383 × 10−51.0270 × 10−51.6223 × 10−15
F19Mean−3.8628−3.8594−3.8622−3.8616−3.8604−3.8628
std3.7199 × 10−52.3940 × 10−31.9973 × 10−32.4837 × 10−33.0790 × 10−32.6543 × 10−15
F20Mean−3.3215−3.2808−3.2734−3.2777−3.2301−3.2643
std2.6160 × 10−44.2965 × 10−26.6944 × 10−27.2579 × 10−20.20176.3786 × 10−2
F21Mean−10.153−10.147−6.2746−9.1399−8.9500−5.8955
std3.2037 × 10−81.1124 × 10−22.45722.05852.45013.4184
F22Mean−10.403−10.399−7.6177−10.401−9.2588−7.3590
std3.2256 × 10−75.6792 × 10−32.87148.9801 × 10−42.35123.6012
F23Mean−10.536−10.529−8.6163−10.535−6.9987−6.9640
std8.709 × 10−71.4087 × 10−22.79836.5498 × 10−43.64033.8923
+/=/− 18/3/215/7/122/1/020/3/014/7/2
ARV 1.45652.73913.26094.47834.36964.6957
Rank 123546
Table 9. p values of the Wilcoxon rank-sum test comparing MSBWO with conventional algorithms on all functions.
Function | BWO | DBO | GWO | WOA | PSO
F11.21178 × 10−121.21178 × 10−121.21178 × 10−121.21178 × 10−121.21178 × 10−12
F23.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−11
F31.21178 × 10−121.21178 × 10−121.21178 × 10−121.21178 × 10−121.21178 × 10−12
F43.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−11
F51.44233 × 10−33.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−11
F63.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−11
F73.6439 × 10−23.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−11
F83.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−11
F912.15772 × 10−23.54361 × 10−120.333711.21178 × 10−12
F10118.9938 × 10−133.62921 × 10−91.21178 × 10−12
F1110.333714.19262 × 10−20.333711.21178 × 10−12
F121.42984 × 10−55.46175 × 10−93.01986 × 10−113.01986 × 10−113.01986 × 10−11
F136.20265 × 10−43.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−11
F142.93241 × 10−101.87212 × 10−93.05742 × 10−116.15792 × 10−111.00916 × 10−11
F151.06657 × 10−78.82569 × 10−70.559235.57265 × 10−103.14633 × 10−2
F163.00852 × 10−113.13637 × 10−123.00852 × 10−113.24821 × 10−78.8305 × 10−12
F173.01986 × 10−111.21178 × 10−123.33839 × 10−112.2539 × 10−41.21178 × 10−12
F183.01986 × 10−112.20334 × 10−113.01986 × 10−113.01986 × 10−112.78095 × 10−11
F193.01986 × 10−113.42449 × 10−81.56381 × 10−25.09117 × 10−64.08059 × 10−12
F203.01986 × 10−117.24419 × 10−27.959 × 10−31.07626 × 10−20.6586
F213.01986 × 10−111.92277 × 10−33.01986 × 10−113.01986 × 10−115.13599 × 10−2
F223.01986 × 10−110.982263.01986 × 10−113.01986 × 10−110.66015
F233.01986 × 10−112.65493 × 10−23.01986 × 10−113.01986 × 10−110.8762
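Table 9 (and Table 11 below) reports pairwise Wilcoxon rank-sum p-values, where values below 0.05 are read as a statistically significant difference between MSBWO and the compared algorithm on that function. As a hedged sketch of how such a comparison is typically produced from 30 independent runs, the snippet below uses SciPy's ranksums; the two arrays are placeholders, not the paper's run data.

```python
import numpy as np
from scipy.stats import ranksums

# Placeholder run results (NOT the paper's data): best fitness from 30
# independent runs of two optimizers on a single benchmark function.
rng = np.random.default_rng(1)
msbwo_runs = rng.normal(loc=1e-6, scale=1e-7, size=30)
baseline_runs = rng.normal(loc=1e-3, scale=1e-4, size=30)

# Two-sided Wilcoxon rank-sum test; p < 0.05 is read as a significant difference.
stat, p_value = ranksums(msbwo_runs, baseline_runs)
print(f"statistic = {stat:.3f}, p = {p_value:.3e}, significant: {p_value < 0.05}")
```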
Table 10. Results of MSBWO and five selected SOTA algorithms on 23 benchmark problems.
Fun | Metric | MSBWO | HLOA | HO | PO | CPO | BKA
F1Mean02.3456 × 10−24001.7748 × 10−472.1585 × 10−391.237 × 10−88
std0006.7514 × 10−471.1822 × 10−386.7752 × 10−88
F2Mean1.0224 × 10−2591.5503 × 10−1254.7132 × 10−1921.346 × 10−175.3952 × 10−223.0805 × 10−51
std05.7134 × 10−12507.3722 × 10−171.9341 × 10−211.499 × 10−50
F3Mean06.5747 × 10−23701.4656 × 10−351.8336 × 10−392.9127 × 10−87
std0008.0271 × 10−359.0774 × 10−391.5709 × 10−86
F4Mean1.8495 × 10−2562.4166 × 10−1285.8439 × 10−1911.3277 × 10−311.0666 × 10−202.8368 × 10−44
std06.1005 × 10−12807.1884 × 10−314.7757 × 10−201.2548 × 10−43
F5Mean5.306 × 10−824.8532.4008 × 10−21.2637 × 10−325.42227.303
std1.2493 × 10−79.91023.4737 × 10−22.0162 × 10−30.28921.0939
F6Mean6.7936 × 10−181.5435 × 10−46.1904 × 10−31.2590 × 10−56.6420 × 10−61.0359
std1.9820 × 10−172.1774 × 10−48.1137 × 10−31.7856 × 10−53.2049 × 10−60.836
F7Mean5.2429 × 10−52.2378 × 10−47.8394 × 10−59.2437 × 10−61.5765 × 10−32.1842 × 10−4
std4.1660 × 10−52.6434 × 10−46.9779 × 10−58.0807 × 10−69.3270 × 10−41.8647 × 10−4
F8Mean−4.7434 × 10114−7.4814 × 103−2.166 × 104−6.9567 × 103−9.0556 × 103−9.1644 × 103
std2.4502 × 101156.0516 × 1023.4142 × 1031.0963 × 1032.8966 × 1021.2571 × 103
F9Mean000000
std000000
F10Mean4.4409 × 10−164.4409 × 10−164.4409 × 10−164.4409 × 10−165.6251 × 10−164.4409 × 10−16
std00006.4863 × 10−160
F11Mean000000
std000000
F12Mean2.0917 × 10−111.0387 × 10−21.4667 × 10−41.5031 × 10−61.9325 × 10−75.4138 × 10−2
std9.8839 × 10−113.1659 × 10−22.9938 × 10−42.6090 × 10−61.0923 × 10−76.4073 × 10−2
F13Mean3.3187 × 10−111.9673 × 10−21.0453 × 10−35.2258 × 10−64.0682 × 10−61.6659
std1.0552 × 10−105.7383 × 10−22.6180 × 10−37.9707 × 10−62.0333 × 10−60.5876
F14Mean1.09755.04670.9983.79560.9980.998
std0.39954.18726.2508 × 10−144.579901.4283 × 10−16
F15Mean3.1449 × 10−47.2813 × 10−33.075 × 10−43.0881 × 10−43.0750 × 10−41.1101 × 10−3
std1.6827 × 10−51.3678 × 10−21.7418 × 10−82.0477 × 10−61.3987 × 10−84.0566 × 10−3
F16Mean−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
std1.7954 × 10−115.2156 × 10−163.4218 × 10−111.0717 × 10−106.4539 × 10−166.1158 × 10−16
F17Mean0.39790.39790.39790.39790.39790.3979
std2.1712 × 10−802.1185 × 10−105.6491 × 10−101.2421 × 10−130
F18Mean333333
std6.5647 × 10−111.4817 × 10−141.5442 × 10−91.0930 × 10−91.3424 × 10−151.3374 × 10−15
F19Mean−3.8627−3.8625−3.8627−3.8627−3.8627−3.8627
std4.7886 × 10−51.439 × 10−34.986 × 10−91.3162 × 10−52.7101 × 10−152.5243 × 10−15
F20Mean−3.3215−3.2369−3.2680−3.2657−3.3220−3.3060
std2.5687 × 10−57.5536 × 10−26.2937 × 10−27.7621 × 10−22.8448 × 10−144.1454 × 10−2
F21Mean−10.153−10.149−10.153−7.0944−10.153−9.6517
std3.9089 × 10−85.8287 × 10−34.5053 × 10−72.54025.6943 × 10−151.9085
F22Mean−10.403−9.3208−10.403−7.7453−10.403−10.403
std3.8351 × 10−72.81368.0294 × 10−72.70314.6649 × 10−169.4054 × 10−14
F23Mean−10.536−7.6298−10.536−6.3903−10.536−10.266
std6.9415 × 10−73.9015.3435 × 10−72.32642.6182 × 10−151.4815
+/=/− 16/7/08/12/313/8/212/7/415/6/2
ARV 2.21744.286134.15223.4133.9348
Rank 162534
Table 11. p values of the Wilcoxon rank-sum test comparing MSBWO with selected SOTA algorithms on all functions.
Function | HLOA | HO | PO | CPO | BKA
F11.21178 × 10−1211.21178 × 10−121.21178 × 10−121.21178 × 10−12
F23.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−11
F31.21178 × 10−1211.65725 × 10−111.21178 × 10−121.21178 × 10−12
F43.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−11
F53.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−11
F63.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−11
F78.12 × 10−40.137322.37682 × 10−73.01986 × 10−111.38525 × 10−6
F83.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−113.01986 × 10−11
F911111
F101110.333711
F1111111
F123.01986 × 10−113.01986 × 10−116.72195 × 10−103.01986 × 10−113.01986 × 10−11
F133.01986 × 10−113.01986 × 10−111.77691 × 10−103.01986 × 10−113.01986 × 10−11
F141.55785 × 10−44.8715 × 10−44.80348 × 10−63.32524 × 10−126.17693 × 10−9
F150.129643.01986 × 10−117.4827 × 10−23.01986 × 10−111.06441 × 10−7
F166.11988 × 10−113.36405 × 10−41.55665 × 10−88.57636 × 10−122.95423 × 10−11
F171.21178 × 10−123.80385 × 10−72.15403 × 10−61.72025 × 10−121.21178 × 10−12
F184.94171 × 10−116.73621 × 10−63.64589 × 10−85.21145 × 10−122.56756 × 10−11
F192.2771 × 10−103.01986 × 10−119.46827 × 10−31.21178 × 10−121.40589 × 10−11
F202.27802 × 10−50.379047.7272 × 10−27.82349 × 10−121.10772 × 10−6
F218.48477 × 10−91.84999 × 10−82.37147 × 10−104.08059 × 10−129.576 × 10−9
F223.82489 × 10−90.258054.1825 × 10−92.36567 × 10−121.69332 × 10−11
F238.4687 × 10−90.347833.68973 × 10−113.15782 × 10−124.24791 × 10−10
Table 12. Descriptions of datasets.
Symbol | Dataset | No. of Features | No. of Instances
S1 | Pima | 8 | 768
S2 | Vowel | 10 | 528
S3 | Australian | 14 | 690
S4 | Zoo | 16 | 101
S5 | Vehicle | 18 | 846
S6 | Robot | 24 | 5456
S7 | Wdbc | 30 | 569
S8 | Sonar | 60 | 208
S9 | Air | 64 | 359
S10 | DNA | 180 | 1186
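Binary optimizers such as those compared in the following tables are usually obtained from their continuous counterparts by passing each position component through a transfer function and thresholding it. The snippet below shows one common S-shaped (sigmoid) mapping as an assumed example; the specific mapping used for BMSBWO is defined in the methodology section and may differ.

```python
import numpy as np

def binarize(position, rng):
    """Map a continuous position to a 0/1 feature mask with an S-shaped
    (sigmoid) transfer function; shown as one common, assumed choice."""
    prob = 1.0 / (1.0 + np.exp(-position))    # per-dimension selection probability
    return rng.random(position.size) < prob   # True means the feature is kept

rng = np.random.default_rng(2)
mask = binarize(np.array([-2.0, 0.0, 3.5, 1.2]), rng)
print(mask.astype(int))   # e.g. [0 1 1 1], depending on the random draw
```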
Table 13. Comparison of the BMSBWO with other FS techniques in terms of fitness.
Dataset | BMSBWO | BGWO | BWOA | BDBO | BBWO
S1 | 0.21587 | 0.21587 | 0.21587 | 0.21587 | 0.21587
S2 | 0.13722 | 0.13722 | 0.14288 | 0.14288 | 0.13722
S3 | 0.14782 | 0.15647 | 0.14782 | 0.14782 | 0.14782
S4 | 0.04117 | 0.04117 | 0.04705 | 0.04117 | 0.04117
S5 | 0.24151 | 0.25288 | 0.25491 | 0.24407 | 0.25825
S6 | 0.03154 | 0.03924 | 0.03789 | 0.03169 | 0.03295
S7 | 0.84448 | 0.85331 | 0.85093 | 0.85127 | 0.85297
S8 | 0.09484 | 0.11405 | 0.10093 | 0.12177 | 0.11077
S9 | 0.06653 | 0.07551 | 0.08910 | 0.07333 | 0.07089
S10 | 0.15851 | 0.17827 | 0.17653 | 0.16958 | 0.16263
ARV | 1.65 | 3.65 | 3.65 | 3.10 | 2.95
Rank | 1 | 4.5 | 4.5 | 3 | 2
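The fitness values in Table 13 combine the classifier error and the size of the selected feature subset. A common wrapper-FS formulation is fitness = α · error + (1 − α) · (selected features / total features); the sketch below implements this with a random forest and α = 0.99, both of which are assumptions for illustration rather than the paper's exact configuration.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def fs_fitness(mask, X, y, alpha=0.99):
    """Wrapper FS fitness: alpha * error rate + (1 - alpha) * selected-feature ratio.

    mask  -- boolean NumPy vector of length n_features (the binary solution)
    alpha -- weight on the error term; 0.99 is an assumed, commonly used value
    """
    if not mask.any():                       # penalize empty feature subsets
        return 1.0
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    error = 1.0 - cross_val_score(clf, X[:, mask], y, cv=5).mean()
    feature_ratio = mask.sum() / mask.size
    return alpha * error + (1.0 - alpha) * feature_ratio
```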
Table 14. Comparison of the BMSBWO with other FS techniques in terms of the error rate.
Dataset | BMSBWO | BGWO | BWOA | BDBO | BBWO
S1 | 0.19047 | 0.19307 | 0.19047 | 0.19047 | 0.19047
S2 | 0.08176 | 0.09182 | 0.09685 | 0.09182 | 0.08301
S3 | 0.13327 | 0.14231 | 0.14711 | 0.13365 | 0.13750
S4 | 0 | 0 | 0.02150 | 0.01075 | 0
S5 | 0.23307 | 0.23937 | 0.24330 | 0.23779 | 0.24094
S6 | 0.01209 | 0.01295 | 0.01661 | 0.01307 | 0.01295
S7 | 0.01099 | 0.01893 | 0.01242 | 0.01832 | 0.01445
S8 | 0.07407 | 0.08994 | 0.07407 | 0.10582 | 0.08994
S9 | 0.03086 | 0.04938 | 0.05864 | 0.04629 | 0.04938
S10 | 0.14606 | 0.16179 | 0.16966 | 0.16067 | 0.16011
ARV | 1.3 | 3.6 | 4.1 | 3.2 | 2.8
Rank | 1 | 4 | 5 | 3 | 2
Table 15. Comparison of the BMSBWO with other FS techniques in terms of the mean feature selection size.
Dataset | BMSBWO | BGWO | BWOA | BDBO | BBWO
S1 | 0.44444 | 0.44444 | 0.44444 | 0.44444 | 0.44444
S2 | 0.6 | 0.63636 | 0.63636 | 0.63636 | 0.61818
S3 | 0.29333 | 0.33333 | 0.36 | 0.36 | 0.33333
S4 | 0.33333 | 0.44019 | 0.44019 | 0.45098 | 0.35294
S5 | 0.41052 | 0.43157 | 0.42105 | 0.42105 | 0.43157
S6 | 0.248 | 0.304 | 0.248 | 0.28 | 0.248
S7 | 0.18279 | 0.24731 | 0.23655 | 0.27956 | 0.22580
S8 | 0.35519 | 0.37704 | 0.38797 | 0.37158 | 0.36612
S9 | 0.34871 | 0.34871 | 0.39487 | 0.34871 | 0.46153
S10 | 0.35138 | 0.40883 | 0.38895 | 0.40552 | 0.4022
ARV | 1.4 | 3.75 | 3.35 | 3.7 | 2.8
Rank | 1 | 5 | 3 | 4 | 2
Table 16. Comparison of BMSBWO with other FS techniques in terms of average running time.
Dataset | BMSBWO | BGWO | BWOA | BDBO | BBWO
S1 | 28.45 | 13.26 | 13.4 | 13.42 | 14.91
S2 | 27.47 | 13.11 | 13.08 | 13.19 | 14.34
S3 | 29.72 | 13.74 | 14.01 | 13.86 | 15.48
S4 | 15.64 | 7.44 | 7.42 | 7.41 | 8.06
S5 | 43.05 | 19.88 | 19.91 | 20.08 | 22.35
S6 | 208.28 | 97.26 | 99.69 | 97.60 | 108.29
S7 | 69.74 | 32.86 | 33.13 | 32.65 | 36.48
S8 | 25.33 | 11.74 | 11.70 | 11.76 | 13.27
S9 | 36.26 | 16.73 | 16.65 | 16.70 | 18.94
S10 | 261.69 | 113.82 | 117.65 | 116.65 | 137.54
ARV | 5 | 1.7 | 2.1 | 2.2 | 4
Rank | 5 | 1 | 2 | 3 | 4
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
