Article

A Multi-Strategy Improvement Secretary Bird Optimization Algorithm for Engineering Optimization Problems

1 School of Art and Design, Xi’an University of Technology, Xi’an 710054, China
2 National Demonstration Center for Experimental Arts Education, Nankai University, Tianjin 300371, China
3 Department of Applied Mathematics, Xi’an University of Technology, Xi’an 710054, China
* Author to whom correspondence should be addressed.
Biomimetics 2024, 9(8), 478; https://doi.org/10.3390/biomimetics9080478
Submission received: 16 June 2024 / Revised: 21 July 2024 / Accepted: 1 August 2024 / Published: 8 August 2024
(This article belongs to the Special Issue Nature-Inspired Metaheuristic Optimization Algorithms 2024)

Abstract: Based on the meta-heuristic secretary bird optimization algorithm (SBOA), this paper develops a multi-strategy improvement secretary bird optimization algorithm (MISBOA) to further enhance solving accuracy and convergence speed for engineering optimization problems. Firstly, a feedback regulation mechanism based on incremental PID control is used to update the whole population according to the output value. Then, in the hunting stage, a golden sinusoidal guidance strategy is employed to enhance the success rate of capture. Meanwhile, to keep the population diverse, a cooperative camouflage strategy and an update strategy based on cosine similarity are introduced into the escaping stage. On the CEC2022 test suite, MISBOA achieves the best comprehensive performance for both 10 and 20 dimensions. When the dimension is increased, its advantage expands further: it ranks first on 10 test functions, accounting for 83.33% of the total. This illustrates that the introduced improvement strategies effectively enhance the searching accuracy and stability of MISBOA for various problems. On five real-world optimization problems, MISBOA also achieves the best fitness values, indicating a stronger searching ability with higher accuracy and stability. Finally, when it is applied to the shape optimization problem of the combined quartic generalized Ball interpolation (CQGBI) curve, the curve can be designed to be smoother according to the parameters obtained by MISBOA, improving power generation efficiency.

1. Introduction

Optimization problems in engineering design involve selecting a set of parameters (variables) to achieve the optimal value of a design index (goal) under a series of relevant constraints [1]. Such problems exist widely in various fields, such as production cost optimization [2], public transportation optimization [3], production decision optimization [4], structural design optimization [5], feature selection [6], path planning [7], and so on.
To solve those complex optimization problems effectively, meta-heuristic algorithms (MHAs) are studied and proposed, the core idea of which is to gradually approach the optimal solution of the problem by searching in the problem space [8]. Different from optimization algorithms based on precise gradient information or Hesse matrix, the main advantage of MHAs is that they can handle complex, non-linear problems and do not require assumptions about the specific model of the problem [9]. Furthermore, with the continuous development and improvement of computer hardware and software, MHAs have shown strong ability to solve practical problems, and the future development prospects will be broader [10].
As shown in Figure 1, according to their search mechanisms, MHAs are usually classified into four types: swarm algorithms, evolutionary algorithms, natural-like algorithms, and mathematical-like algorithms [11]. Swarm algorithms are currently the most popular MHAs, inspired by the adaptive, self-organizing group behaviors exhibited by social species in nature, such as foraging, hunting, working, or migrating [12]. For example, in the classical ant colony optimization algorithm (ACO), the foraging behavior based on pheromone concentration in the ant colony is modeled mathematically to form a basic search framework [13]. The grey wolf optimization algorithm (GWO), which also belongs to the swarm algorithms, was constructed by simulating the hunting behavior of grey wolves. Its special mechanism, based on a hierarchical structure and hunting behavior, balances exploration and development well, leading to good performance in convergence speed and solution accuracy [14]. Inspired by the migratory behavior of wild geese, which keep a special formation consisting of several small groups during long-distance migration, a wild geese migration optimization (GMO) algorithm was developed [15]. Evolutionary algorithms are constructed on the natural law of survival of the fittest [16]. The genetic algorithm (GA) simulates the theory of Darwinian biological evolution, mainly including natural selection and genetic mechanisms [17]. The differential evolution algorithm (DE), another representative of this type, is inspired by mutation, hybridization, and selection operations [18]. Different from GA, each individual in DE corresponds directly to a solution vector, and the complex encoding and decoding process is abandoned. The human evolutionary optimization algorithm (HEOA) was inspired by the human evolution process, mainly consisting of two distinct phases: human exploration and human development [19].
Natural-like algorithms are developed by modeling common natural phenomena, such as rain, snow, wind, and so on [20]. For example, inspired by the mechanism of frost and ice growth in nature, a novel rime optimization algorithm (RIME) was proposed in 2023. By studying the various growth processes of rime-ice, this algorithm develops a novel searching frame, including soft-rime search and hard-rime puncture strategies [21]. A snow ablation optimizer (SAO) was constructed in 2022, which mainly emulates the sublimation and melting behavior of snow to realize a balance between exploitation and exploration in the solution space and discourage premature convergence [22]. The Kepler optimization algorithm (KOA) is also a well-known natural-like algorithm based on the principle of celestial motion, simulating the motion law of planets around the sun to solve optimization problems [23]. The last category is mathematical-like algorithms, mainly including the sine cosine algorithm (SCA) [24], the gradient-based optimizer (GBO) [25], and the Runge Kutta optimizer (RUN) [26]. The SCA uses only the volatility and periodicity of sine and cosine functions as the design basis of its operators to search for and iterate toward the optimal solution. Compared with the GA, the SCA has the advantages of fewer parameters, a simple structure, and easy implementation. The GBO adopts a gradient-based approach to improve exploration trends and accelerate convergence to obtain better positions in the search space.
Though optimization problems can be solved by various MHAs, the accuracy and efficiency differ due to diverse algorithm structures. Thus, to enhance the performance of the original algorithms, different improved versions of the basic MHAs have been introduced [27]. Here, two common types of improvement methods are discussed. In the first class, hybrid algorithms are constructed by mixing different basic algorithms with various characteristics [28]. For example, Nenavath and Jatoth proposed a new hybrid SCA-DE by mixing the SCA and DE [29]. Compared with the basic SCA and DE, the proposed hybrid algorithm has a better capability to escape from local optima with faster convergence. Garg also presented a hybrid algorithm named PSO-GA for solving constrained optimization problems. It adopted the search direction of particle swarm optimization (PSO) and the decision operators of GA, leading to a better balance between the exploration and exploitation abilities [30]. However, this kind of method may inherit the weaknesses of the component algorithms along with their advantages, which limits the improvement effect. Thus, another kind of method is studied, in which several improvement strategies are incorporated into the original algorithm. For example, aiming to overcome the shortcomings of low accuracy, slow convergence speed, and easily falling into local optimums, Hu et al. proposed an improved BWO algorithm by introducing novel selection strategies, mutation methods, and adaptive parameters [31]. The proposed algorithm achieved higher classification accuracy while using less information from the original datasets. Liu et al. proposed an enhanced grey wolf optimization algorithm (NAS-GWO) for the agricultural UAV trajectory planning problem [32]. The improved algorithm mainly includes three key strategies. A boundary constraint mechanism is adopted to enhance the ability to achieve better solutions. The typical Gaussian mutation model and a spiral function are used to reduce the possibility of being trapped in local optimums. Meanwhile, to keep the balance of exploitation and exploration abilities, the NAS-GWO employs a nonlinear factor based on the Sigmoid function.
In this paper, a novel secretary bird optimization algorithm (SBOA) proposed in 2024, is analyzed and studied. By being tested on CEC2017 and CEC2022 benchmark test suites, the searching accuracy, convergence rate, and stability of SBOA have been verified [33]. However, limited by the single structure of the original algorithm, there is still room to develop an enhanced version to get better solutions. Thus, by introducing targeted improvement strategies, we propose a multi-strategy secretary bird optimization algorithm (MISBOA). The main contributions can be summarized as follows:
(1)
An enhanced algorithm MISBOA is proposed by integrating four specific improvement strategies.
A feedback regulation mechanism is introduced to fully make use of the global information, leading to faster global convergence.
To improve the development ability of SBOA, a golden sinusoidal guidance strategy is adopted in the hunting stage.
In the escape stage, a cooperative camouflage strategy is employed to enhance the global exploration ability.
An update strategy based on cosine similarity is used to keep the population diverse, which influences the ability to escape local optimums.
(2)
The ability of MISBOA is verified by solving basic test functions and well-known engineering optimization problems.
The MISBOA and eight other typical algorithms are tested on the CEC2022 functions with 10 and 20 dimensions. Accuracy, convergence speed, stability, and extensibility of the algorithms are analyzed.
The engineering application capability of the MISBOA is verified on five complex engineering optimization problems.
(3)
The MISBOA is applied for a shape optimization problem of combined curves.
With the aim of minimizing the total energy, a shape optimization model of combined quartic generalized Ball interpolation (CQGBI) curve is established.
The shape of wind-driven generator blades is optimized by MISBOA and other algorithms to increase the efficiency of power generation.
Figure 2 shows the motivation of this paper to illustrate the idea and value of the research more intuitively. The organization of our work can be summarized as follows: Section 2 introduces the original SBOA algorithm and the detailed steps for constructing the MISBOA. Section 3 analyzes the performance of MISBOA according to the results on the CEC2022 test suite. Section 4 tests the ability of MISBOA in solving engineering optimization problems and the established shape optimization model of combined quartic generalized Ball interpolation. In Section 5, the main conclusions of this paper and future work are given.

2. The Multi-Strategy Improvement Secretary Bird Optimization Algorithm

2.1. The Basic Secretary Bird Optimization Algorithm

Secretary birds are large terrestrial raptors that usually inhabit tropical savannas or semi-desert areas. They are natural enemies of snakes on the African continent, such as the black mamba, cobra, and others. Inspired by their survival strategy of searching for prey and evading pursuit from predators in harsh environments, the secretary bird optimization algorithm was constructed to deal with complex optimization problems. The SBOA consists of the following parts:

2.1.1. Initial Preparation Phase

Firstly, for a typical minimization problem f(x), the initial solutions used to start the search need to be determined. Here, N initial solutions form the initial population X = [X_1, X_2, …, X_N] of secretary birds. Each individual X_i in the population corresponds to a solution to the optimization problem, which is initialized by Equation (1):
X_i = lb + rand × (ub − lb),  1 ≤ i ≤ N,  (1)
where lb and ub are the lower and upper bounds of the decision variables, rand represents a random number in [0, 1], and M is the dimension of the problem. In addition, the fitness value of the solution X_i is noted as F_i = f(X_i) to measure its quality.
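The initialization of Equation (1) can be sketched in NumPy as follows; the function name and the fixed seed are illustrative choices, not part of the original algorithm:

```python
import numpy as np

def initialize_population(N, M, lb, ub, seed=0):
    """Initialize N candidate solutions uniformly in [lb, ub] via Eq. (1)."""
    rng = np.random.default_rng(seed)
    # X_i = lb + rand * (ub - lb), one row per secretary bird
    return lb + rng.random((N, M)) * (ub - lb)

X = initialize_population(N=5, M=3, lb=-10.0, ub=10.0)
```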

2.1.2. Hunting Strategy of Secretary Birds

Unlike other fierce predators, secretary birds adopt a more intelligent approach to hunting snakes. After spotting the target, they do not dive down and fight straightaway, but wander, jump, and pick quarrels near the snake patiently to observe and confuse opponents. Until a suitable time, they will strike quickly and kill the prey. Thus, the whole hunting process can be divided into three steps, which are searching for prey, consuming prey, and attacking prey. Then, mathematical models are used to characterize these stages.
(1)
Searching for prey
In this stage, the secretary birds need to search for prey at a safe distance. For optimization algorithms, this initial stage asks for stronger exploration ability to acquire enough information for the whole search area. As shown in Figure 3, the secretary bird can explore new potential areas by referring to the positions of the other two secretary birds. Therefore, differential mutation operations are introduced to keep enhancing algorithm diversity. For each individual Xi, it will use Equations (2) and (3) to update position when the current iteration time t is smaller than one-third of the maximum iterations T.
X_i^new(t) = X_i(t) + (X_r1(t) − X_r2(t)) × R_1,  (2)
X_i(t + 1) = X_i^new(t) if F_i^new < F_i; otherwise, X_i(t + 1) = X_i(t),  (3)
where X_r1(t) and X_r2(t) are two individuals randomly selected from the current population, and R_1 is a random vector of 1 × M elements selected in [0, 1].
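The searching step of Equations (2) and (3) can be sketched as below, assuming a NumPy population matrix and a simple sphere fitness function for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def search_for_prey(X, fitness, f):
    """One 'searching for prey' step: differential mutation (Eq. (2))
    followed by greedy selection (Eq. (3))."""
    N, M = X.shape
    X_next, F_next = X.copy(), fitness.copy()
    for i in range(N):
        # pick two distinct partners r1 != r2, both different from i
        r1, r2 = rng.choice([j for j in range(N) if j != i], size=2, replace=False)
        R1 = rng.random(M)                      # random vector in [0, 1]^M
        x_new = X[i] + (X[r1] - X[r2]) * R1     # Eq. (2)
        if f(x_new) < fitness[i]:               # Eq. (3): keep only improvements
            X_next[i], F_next[i] = x_new, f(x_new)
    return X_next, F_next

sphere = lambda x: float(np.sum(x ** 2))
X0 = rng.random((6, 4)) * 10 - 5
F0 = np.array([sphere(x) for x in X0])
X1, F1 = search_for_prey(X0, F0, sphere)
```

The greedy selection guarantees the fitness of each individual never worsens.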
(2)
Consuming prey
Once secretary birds find potential prey, they first hover around the snake with agile footwork and maneuvers. By watching and luring opponents in the process of circling, the patience of the prey will be worn down so that it lets down its guard. As shown in Figure 4, by regarding the current best individual as the prey, other secretary birds update their positions to close in on the prey. In this way, the success rate of hunting will be greatly improved. When T/3 < t < 2T/3, the Brownian motion (RB) is applied to model the random movement of the secretary birds, as shown in Equation (4),
RB = randn(1, M),  (4)
where randn(1,M) is a randomly generated vector satisfying the standard normal distribution, whose mean value is 0 and standard deviation is 1.
Then, secretary birds update their positions by Equations (5) and (6).
X_i^new(t) = X_best(t) + exp((t/T)^4) × (RB − 0.5) × (X_best(t) − X_i(t)),  (5)
X_i(t + 1) = X_i^new(t) if F_i^new < F_i; otherwise, X_i(t + 1) = X_i(t),  (6)
where Xbest is the best solution for the current population.
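A minimal sketch of the consuming-prey update (Equations (4)-(6)), again assuming a NumPy population and an illustrative sphere fitness function:

```python
import numpy as np

rng = np.random.default_rng(2)

def consume_prey(X, fitness, x_best, t, T, f):
    """'Consuming prey' step: Brownian-motion moves around the current best
    solution (Eqs. (4) and (5)), with greedy selection (Eq. (6))."""
    N, M = X.shape
    X_next, F_next = X.copy(), fitness.copy()
    for i in range(N):
        RB = rng.standard_normal(M)                     # Eq. (4): randn(1, M)
        x_new = (x_best + np.exp((t / T) ** 4)
                 * (RB - 0.5) * (x_best - X[i]))        # Eq. (5)
        if f(x_new) < fitness[i]:                       # Eq. (6)
            X_next[i], F_next[i] = x_new, f(x_new)
    return X_next, F_next

sphere = lambda x: float(np.sum(x ** 2))
X0 = rng.random((6, 4)) * 10 - 5
F0 = np.array([sphere(x) for x in X0])
X1, F1 = consume_prey(X0, F0, X0[np.argmin(F0)], t=500, T=1000, f=sphere)
```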
(3)
Attacking prey
After constant consumption, the prey will be exhausted, and it is time for the secretary birds to start the attack. Here, the Lévy flight strategy is used to simulate various attack behaviors, such as continuous steps and occasional long jumps in a short time. Equations (7) and (8) describe the characteristics of this stage. As shown in Figure 5, secretary birds quickly approach the prey, meaning candidate solutions are close to the current best solution. When t ≥ 2T/3, this strategy will be executed.
X_i^new(t) = X_best(t) + (1 − t/T)^(2t/T) × X_i(t) × RL,  (7)
X_i(t + 1) = X_i^new(t) if F_i^new < F_i; otherwise, X_i(t + 1) = X_i(t),  (8)
where RL represents the Lévy flight strategy, which is
RL = 0.5 × Levy(M),  (9)
where Levy(M) = s × u × σ / |v|^(1/φ), in which s and φ are fixed numbers of 0.01 and 1.5, respectively, and u and v are random numbers in [0, 1]. σ is set as follows:
σ = [Γ(1 + η) × sin(πη/2) / (Γ((1 + η)/2) × η × 2^((η−1)/2))]^(1/η),  (10)
where Γ is the gamma function and η is set as 0.5.
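The Lévy flight operator of Equations (9) and (10) can be sketched as follows; the Mantegna-style form of σ is reconstructed here, and the parameter defaults follow the values stated in the text:

```python
import numpy as np
from math import gamma, pi, sin

def levy(M, s=0.01, phi=1.5, eta=0.5, rng=None):
    """Levy flight vector used in Eqs. (9) and (10); s, phi, and eta
    follow the values given in the text."""
    rng = rng or np.random.default_rng()
    # sigma per Eq. (10), using the gamma function
    sigma = (gamma(1 + eta) * sin(pi * eta / 2)
             / (gamma((1 + eta) / 2) * eta * 2 ** ((eta - 1) / 2))) ** (1 / eta)
    u, v = rng.random(M), rng.random(M)
    return s * u * sigma / np.abs(v) ** (1 / phi)

RL = 0.5 * levy(5, rng=np.random.default_rng(3))   # Eq. (9)
```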

2.1.3. Escape Strategy for Secretary Birds

In nature, secretary birds also face the risk of being hunted when hunting other prey. The main enemies they need to face are eagles, hawks, foxes, and jackals. When they sense danger, various evasion strategies are necessary to protect themselves or their food. In this algorithm, camouflage and running modes are modeled to simulate the escape strategies.
(1)
Camouflage based on environment
Facing enemies, the secretary birds first choose to camouflage themselves to avoid danger. As shown in Figure 6, secretary birds update their positions around the prey (best individual), reflecting the behavior of trying to escape local optimums in the algorithms. The mathematical model of this strategy is shown in Equations (11) and (12).
X_i^new(t) = X_best(t) + (2 × RB − 1) × (1 − t/T)^2 × X_i(t),  (11)
X_i(t + 1) = X_i^new(t) if F_i^new < F_i; otherwise, X_i(t + 1) = X_i(t),  (12)
(2)
Running mode
If they cannot avoid the enemy, flight or rapid-running strategies will be employed to keep them safe. As shown in Figure 7, a random individual Xrand is selected as the leader for reference, avoiding being limited to local optimums. Secretary birds update their positions by Equations (13) and (14).
X_i^new(t) = X_best(t) + R_2 × (X_rand(t) − K × X_i(t)),  (13)
X_i(t + 1) = X_i^new(t) if F_i^new < F_i; otherwise, X_i(t + 1) = X_i(t),  (14)
where R_2 is a random vector with elements in [0, 1] and K is an integer randomly toggling between 1 and 2.
The above two modes are executed with equal probability to reflect the randomness of searching the feasible area. In addition, Figure 8 shows the flow chart of the original SBOA algorithm.
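The two escape modes can be sketched together as below; the greedy selection of Equations (12) and (14) is omitted for brevity, and the helper name is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(4)

def escape_step(x_i, x_best, x_rand, t, T):
    """Escape stage of the basic SBOA: environmental camouflage (Eq. (11))
    or running mode (Eq. (13)), chosen with equal probability."""
    M = x_i.size
    if rng.random() < 0.5:
        RB = rng.standard_normal(M)                            # Brownian vector
        return x_best + (2 * RB - 1) * (1 - t / T) ** 2 * x_i  # Eq. (11)
    R2 = rng.random(M)
    K = rng.integers(1, 3)          # K randomly toggles between 1 and 2
    return x_best + R2 * (x_rand - K * x_i)                    # Eq. (13)

x_new = escape_step(np.zeros(4), np.ones(4), np.full(4, 2.0), t=100, T=1000)
```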

2.2. The Multi-Strategy Improvement Secretary Bird Optimization Algorithm

Though the SBOA has shown excellent performance in solving various test functions compared with the other 15 advanced algorithms, there are still some shortcomings in the algorithm construction, which can be further improved to obtain faster convergence and solutions with higher accuracy. The main inadequacy is reflected in the following four aspects.
(1)
For the whole algorithm, the global information is not fully used to adjust the position update strategy. This defect may affect the overall performance of the SBOA, which means the balance between development ability and exploration ability.
(2)
In the hunting stage, the attack mode of secretary birds can be further enhanced by observing the performance of the prey, which is to improve the development ability of SBOA.
(3)
In the escape stage, secretary birds decide how to camouflage or escape using a few simple random strategies, which may influence the ability to escape local optimums.
Aiming at the above problems, this section will introduce the appropriate strategy to construct a novel version of SBOA.

2.2.1. Feedback Regulation Mechanism

Feedback regulation mechanism is a common structural design in the biological world. It can regulate the state of the system at the next stage according to the result at the current stage after receiving the stimulus of internal and external environmental changes (Figure 9). In the field of engineering control, the PID controller is one of the most widely used automatic controllers, which is constructed by the idea of feedback regulation [34]. Among various PID methods, incremental PID control is a typical recursive algorithm, which makes the system stable by adjusting the controlled object according to the output value [35]. This kind of control method can be used in the optimization algorithm, by regarding the searching process as a system and the best fitness value in each iteration as the output value. Thus, the discrete incremental PID control is adopted to form the feedback regulation mechanism.
Firstly, the system deviations need to be defined as Equation (15).
e_k(t) = X_best(t) − X(t),  (15)
where X_best(t) is the best solution in the t-th iteration. To facilitate the following calculation, the deviation of the previous iteration is denoted as e_{k−1}(t), and the deviation of the iteration before that as e_{k−2}(t). When t = 1, let e_{k−2}(t) = e_{k−1}(t) = e_k(t). When t > 1, e_{k−2}(t) = e_{k−1}(t − 1). Furthermore, to reduce the space complexity, e_{k−1}(t) can be calculated by
e_{k−1}(t) = e_k(t − 1) + X_best(t) − X_best(t − 1).  (16)
Then, the output of PID regulation in iteration t is
Δu(t) = K_p r_1 (e_k(t) − e_{k−1}(t)) + K_i r_2 e_k(t) + K_d r_3 (e_k(t) − 2e_{k−1}(t) + e_{k−2}(t)),  (17)
where r_1, r_2, r_3 are random numbers from 0 to 1, and K_p, K_i, and K_d are the adjustment coefficients of the proportional, integral, and differential terms, respectively.
Finally, the updated solutions can be calculated by the following equation:
X(t + 1) = X(t) + λΔu(t) + (1 − λ)H(t),  (18)
where λ = r_4 cos(t/T), and H(t) = (cos(1 − t/T) + ρ r_5 L) e_k(t) is a conditioning factor to prevent the algorithm from falling into a local optimum, in which ρ = (ln(T − t + 2)/ln T)^2 and L is a Lévy flight function. r_4 and r_5 are random numbers from 0 to 1.
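A hedged sketch of one feedback-regulation step (Equations (15)-(18)); the PID gains and the Gaussian stand-in for the Lévy term L are illustrative assumptions, not values prescribed by the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

def pid_feedback_update(X, x_best, x_best_prev, e_k_prev, e_k2, t, T,
                        Kp=1.0, Ki=0.5, Kd=1.2):
    """One feedback-regulation step. e_k_prev is e_k(t-1) and e_k2 is
    e_{k-2}(t); Kp, Ki, Kd are placeholder gains."""
    e_k = x_best - X                               # Eq. (15)
    e_k1 = e_k_prev + x_best - x_best_prev         # Eq. (16)
    r1, r2, r3, r4, r5 = rng.random(5)
    du = (Kp * r1 * (e_k - e_k1) + Ki * r2 * e_k
          + Kd * r3 * (e_k - 2 * e_k1 + e_k2))     # Eq. (17)
    lam = r4 * np.cos(t / T)
    rho = (np.log(T - t + 2) / np.log(T)) ** 2
    L = rng.standard_normal(X.shape)               # stand-in for the Levy term
    H = (np.cos(1 - t / T) + rho * r5 * L) * e_k
    return X + lam * du + (1 - lam) * H            # Eq. (18)

X0 = rng.random((5, 3))
x_best = X0[0].copy()
e0 = x_best - X0
X1 = pid_feedback_update(X0, x_best, x_best.copy(), e0, e0, t=1, T=1000)
```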

2.2.2. Golden Sinusoidal Guidance Strategy

After locking the prey, the secretary bird will start to attack the target. To enhance the success rate of capture, this section will adopt a golden sinusoidal guidance strategy to take advantage of the target’s location information [36]. In the algorithm, this strategy is useful to enhance the ability of developing local areas. For the individual i in the population, it will update its position by Equation (19).
X_i(t + 1) = X_i(t) × |sin(s_1)| + s_2 × sin(s_1) × |θ_1 × X_best(t) − θ_2 × X_i(t)|,  (19)
where s_1 is a random value in [0, 2π], and s_2 is a random value in [0, π]. θ_1 and θ_2 are the coefficients obtained by the golden section, namely θ_1 = −π + 2π(1 − τ) and θ_2 = −π + 2πτ, in which τ is the golden ratio number, defined as τ = (√5 − 1)/2.
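A sketch of the golden sinusoidal guidance update; the absolute-value form below follows the standard golden sine algorithm, so treat it as an assumption rather than the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(6)

# golden-section coefficients derived from the golden ratio number tau
TAU = (np.sqrt(5) - 1) / 2
THETA1 = -np.pi + 2 * np.pi * (1 - TAU)
THETA2 = -np.pi + 2 * np.pi * TAU

def golden_sine_update(x_i, x_best):
    """Golden sinusoidal guidance step, a sketch of Eq. (19)."""
    s1 = rng.uniform(0, 2 * np.pi)
    s2 = rng.uniform(0, np.pi)
    return (x_i * np.abs(np.sin(s1))
            + s2 * np.sin(s1) * np.abs(THETA1 * x_best - THETA2 * x_i))

x_new = golden_sine_update(np.array([1.0, 2.0]), np.zeros(2))
```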

2.2.3. Cooperative Camouflage Strategy

When secretary birds feel threatened, they first use the environment to disguise themselves. However, this kind of camouflage strategy cannot effectively avoid enemy attacks because the information exchange within the population is neglected in the original method. Thus, this section introduces a cooperative camouflage strategy that considers the positions of different individuals. First, three different individuals are selected, noted as X_a, X_b, and X_c. A new solution can then be generated by Equation (20).
X_i(t + 1) = X_a + r_6 × (X_b − X_c),  (20)
where r6 is a random number from 0 to 1.

2.2.4. Update Strategy Based on Cosine Similarity

When the camouflage strategy does not work, secretary birds have to run to escape enemies. Then, the key to success is how to choose the right escape direction. Generally speaking, an empty place will be the first choice to ensure sufficient space. For the individual Xi, the cosine similarity is used to measure the crowding degree to other individuals.
Firstly, construct vectors A and B by Equation (21).
A = X_i(t) − X_best(t),  B = X_j(t) − X_best(t),  (21)
where i ≠ j.
Then, calculate the similarity between Xi and Xj by Equation (22).
ψ_{i,j} = (A · B) / (‖A‖ ‖B‖),  (22)
After comparing Xi and others, the individual with the smallest cosine similarity will be selected as the update direction, noted as Xs. A new solution can be obtained by Equation (23).
X_i(t + 1) = X_best(t) + R × (X_s(t) − K × X_i(t)),  (23)
where R is a vector including random elements in [0, 1], and K is an integer, randomly toggling between 1 and 2.
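The cosine-similarity selection of Equations (21) and (22) can be sketched as a simple scan over the population; the function name is illustrative:

```python
import numpy as np

def least_similar_index(X, x_best, i):
    """Select X_s: the individual whose direction from the best solution
    has the smallest cosine similarity to X_i's direction."""
    A = X[i] - x_best                                  # Eq. (21)
    best_j, best_psi = -1, np.inf
    for j in range(len(X)):
        if j == i:
            continue
        B = X[j] - x_best
        denom = np.linalg.norm(A) * np.linalg.norm(B)
        psi = (A @ B) / denom if denom > 0 else 1.0    # Eq. (22)
        if psi < best_psi:
            best_j, best_psi = j, psi
    return best_j

X = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0]])
s = least_similar_index(X, np.zeros(2), 0)   # the most "opposite" direction wins
```

Choosing the least-similar individual steers the escaping bird toward the emptiest direction, which is exactly the crowding-avoidance idea described above.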
Figure 10 shows the flowchart of the MISBOA algorithm. The specific steps of the MISBOA algorithm are as follows:
Step 1. Initialize the related parameters and the population of secretary birds.
Step 2. Calculate the fitness value of each solution and obtain the best solution in the current population.
Step 3. According to the historical information, update solutions based on the feedback regulation mechanism in Equation (18).
Step 4. If t < T/3, secretary birds search for the prey in the hunting stage and update solutions by Equations (2) and (3).
Step 5. If T/3 < t < 2T/3, secretary birds start to consume the prey in the hunting stage and update solutions by Equations (5) and (6).
Step 6. If t > 2T/3, secretary birds are attacking the prey based on the Golden sinusoidal guidance strategy in the hunting stage and update solutions by Equation (19).
Step 7. Turn to the escape stage. If rand > 0.5, update solutions by Equation (20) based on the cooperative camouflage strategy; otherwise, update solutions by Equation (23) based on the cosine similarity.
Step 8. If t < T, turn to Step 1; otherwise, output the best solution.

2.2.5. Computational Complexity Analysis

Computational complexity is a significant metric to measure whether an optimization algorithm can be deployed and applied quickly. Based on the Big O notation, the computational complexity of basic SBOA is O(N × T × (Dim + 1)), where N, T, Dim are the population size, maximum iterations, and dimensions, respectively. For the proposed MISBOA, four introduced strategies may potentially increase the computational complexity. However, the golden sinusoidal guidance strategy, cooperative camouflage strategy, and update strategy based on cosine similarity are more effective upgraded versions of the original methods, which do not incur additional computational costs. Thus, the computational complexity is O(T × N) + O(T × N × Dim) + O(T × N × Dim), resulting in O(N × T × (2 × Dim + 1)).

3. Numerical Experiment on the Test Functions

This section will test the comprehensive ability of the proposed MISBOA algorithm. By analyzing the results of MISBOA and other selected algorithms in solving test functions of the CEC2022 test suite, the effectiveness of the introduced strategy will be discussed.

3.1. Test Functions and Parameter Setting

For the objectivity and rationality of the experiment, it is necessary to select other typical intelligent optimization algorithms as the control group. In this paper, eight other popular algorithms are selected, including the original SBOA [33], the golden jackal optimization algorithm (GJO) [37], the particle swarm optimization algorithm (PSO) [38], the PID-based search algorithm (PSA) [35], the quadratic interpolation optimization algorithm (QIO) [39], the Newton-Raphson-based optimizer (NRBO) [40], and the sand cat swarm optimization algorithm (SCSO) [41]. The parameters of the comparison metaheuristic algorithms are listed in Table 1.
For the CEC2022 test suite, there are 12 test functions, including unimodal functions, basic functions, hybrid functions, and composition functions. The dimension of test functions can be set as 10 or 20 [42]. Thus, for comprehensiveness and reliability, this section will test MISBOA and other comparison algorithms on the 12 test functions of 10 and 20 dimensions.
For the experimental conditions, the initial population size for all algorithms is set to 100, and the maximum number of iterations is set to 1000. The results of the various algorithms are recorded over 30 independent runs to avoid the influence of chance. The average value (Ave) and the standard deviation of the 30 results are used as the main indexes to evaluate the quality of the algorithms. Another index, rank, is determined by arranging the average values in ascending order; a smaller rank represents better performance. The average time cost (in seconds) for each function over 30 runs is also recorded to reflect the computational complexity of the algorithms.

3.2. Analysis and Discussion of the Results on CEC 2022 with 10 Dimensions

Firstly, the solving results on CEC2022 with 10 dimensions are summarized in Table 2. Data marked in bold indicate the best result among all algorithms for the corresponding evaluation index. The results show that the constructed MISBOA algorithm has the best performance on 6 test functions, followed by SBOA and QIO, which perform best on 1 and 4 test functions, respectively. Comparing MISBOA and the basic SBOA, MISBOA maintains an advantage over SBOA on 10 test functions, all except CEC09 and CEC12. For CEC03 and CEC05, MISBOA achieves the theoretical optimum after every run, whereas SBOA obtains the ideal results in only a few experiments, which leads to a larger standard deviation. The QIO algorithm also ranks first on 4 test functions, but its performance is not stable across functions with various characteristics. For example, QIO ranks only fourth on CEC01, CEC02, CEC03, and CEC08 and eighth on CEC12. The improved MISBOA ranks in the top three on all functions. This illustrates that the introduction of multiple strategies effectively balances the capacities of exploration and development, yielding stronger searching ability for various optimization problems. Based on the final ranking results, the performances of the algorithms are ordered as MISBOA > SBOA > QIO > PSA > PSO > PSOBKA > NRBO > SCSO > GJO. For time cost, MISBOA inevitably consumes slightly more time than the original SBOA, which is consistent with the complexity analysis. However, against NRBO and SCSO, MISBOA emerges as the winner, and it competes evenly with the QIO algorithm. That is, the proposed MISBOA achieves higher solving accuracy while minimizing the increase in time cost as much as possible.
To support the above conclusions statistically, Table 3 lists the p-values of the Wilcoxon rank sum test (WRST) between the proposed MISBOA and the other algorithms. If MISBOA performs better than another algorithm and the p-value is less than 0.05, MISBOA is significantly superior to the comparison algorithm, noted as ‘+’. Otherwise, if MISBOA performs worse than another algorithm and the p-value is less than 0.05, it is noted as ‘−’. If the p-value is over 0.05, there is no statistically significant difference between the two algorithms. From Table 3, compared with SBOA, the improved algorithm gains a statistical advantage on 9 test functions and is significantly inferior to the original version only on CEC12. For QIO, MISBOA performs significantly better on 7 test functions and worse on 4 test functions. For BKA, GJO, PSO, NRBO, and SCSO, the advantages of MISBOA on all test functions are supported by the WRST.
Figure 11 plots radar comparison maps between MISBOA and the other algorithms on CEC2022 with 10 dimensions. A smaller radar coverage area means the algorithm has a more stable solving ability across the test functions of CEC2022. Compared with BKA, GJO, PSO, NRBO, and SCSO, MISBOA holds an advantage on all functions. For PSA, QIO, and SBOA, though MISBOA cannot rank first on a few functions, its comprehensive performance is still excellent. The average iterative curves of the different algorithms over 30 runs are shown in Figure 12. From the curves, convergence rate and accuracy can be observed and analyzed visually. Compared with the original SBOA, the convergence rate of the proposed algorithm is obviously faster on CEC01, CEC02, CEC04, CEC06, CEC07, CEC08, CEC10, and CEC11 throughout the searching process. This advantage comes from the combined impact of the multiple strategies. The feedback regulation mechanism constantly adjusts the update strategies based on historical information to ensure the quality of the entire population. The golden sinusoidal guidance strategy improves the local development capacity, leading individuals to converge to better solutions faster. Thus, at the early stage, the curves of MISBOA fall faster. At the later stage, the cooperative camouflage strategy and the update strategy based on cosine similarity play roles in keeping the population diverse and avoiding local optimums. Compared with QIO, though MISBOA is at a disadvantage on CEC10 and CEC11, the performance of QIO on the other test functions lacks stability.

3.3. Analysis and Discussion of the Results on CEC2022 with 20 Dimensions

Then, to verify the scalability of the MISBOA, this section compares it with the other selected algorithms on the CEC2022 test functions with 20 dimensions. As the number of decision variables grows, fast convergence in the feasible space becomes even more important. The data collected over 30 runs are listed in Table 4. Except for CEC04 and CEC10, the MISBOA ranks first on the remaining 10 test functions. Compared with the results on the CEC2022 functions with 10 dimensions, the advantages of MISBOA are further expanded, illustrating that its performance remains stable as the dimension increases. Facing more decision variables, the original SBOA ranks first on only 1 test function; that is, without the improved strategies, the basic algorithm cannot effectively search for solutions with higher accuracy. The performance of the QIO algorithm is similar, ranking first only on CEC10. Thus, applying the feedback regulation mechanism, the golden sinusoidal guidance strategy, the cooperative camouflage strategy, and the update strategy based on cosine similarity is a helpful approach to balancing exploration and exploitation. With increased dimensions, all algorithms incur higher time costs, but the increases for MISBOA and SBOA are minimal compared with those for QIO, NRBO, and SCSO. That is, the structure of the MISBOA effectively mitigates the growth in time cost caused by the growth in problem dimension.
Table 5 lists the p-values of the WRST between MISBOA and the other algorithms on CEC2022 with 20 dimensions. With the increase in dimension, the difference between MISBOA and the other algorithms becomes more pronounced. For example, compared with the SBOA, MISBOA performs significantly better on 10 test functions at 20 dimensions, more than at 10 dimensions. The phenomenon is even more obvious against the QIO: the WRST results prove that the MISBOA is significantly superior to QIO on 9 test functions and worse on only 1. This illustrates that the MISBOA remains effective as the problem dimension increases, while the performance of QIO is greatly affected. The radar comparison maps between MISBOA and the others on CEC2022 with 20 dimensions in Figure 13 also support that conclusion.
Figure 14 plots the average iterative curves of the different algorithms on CEC2022 with 20 dimensions. Except on CEC06, the advantage of MISBOA in convergence speed over all the other algorithms is established from the very beginning, illustrating the value of the feedback regulation mechanism and the golden sinusoidal guidance strategy: by referring to historical information and the current best solution, the whole population quickly moves toward better solutions. Meanwhile, on CEC06, when the other algorithms can no longer find better solutions in the later stage, the iterative curve of MISBOA is still falling, exploring feasible solutions with higher precision. This is the value of the cooperative camouflage strategy and the update strategy based on cosine similarity: by using the information of random individuals and of individuals with small cosine similarity, the diversity of the population is maintained and local optima are avoided. Figure 15 shows the statistical ranking results on CEC2022 with 10 and 20 dimensions. The MISBOA has the smallest average rank at both 10 and 20 dimensions, and its average rank shrinks as the number of decision variables increases, demonstrating its competitiveness, universality, and stability on complex optimization problems with different characteristics.

4. The Application for Real-World Optimization Problems

In addition to the test functions of CEC2022, the performance of a novel algorithm must be verified on real-world optimization problems [43]. In this section, five typical engineering optimization problems are first used to test the abilities of MISBOA and the other algorithms [44]. Then, the MISBOA is employed to solve a shape optimization problem of combined curves with multiple shape parameters. Besides the 8 comparison algorithms of Section 3, four further algorithms widely used in practice are added: Harris hawks optimization (HHO) [45], the walrus optimizer (WO) [46], the grey wolf optimization algorithm (GWO) [14], and the whale optimization algorithm (WOA) [47].

4.1. Engineering Optimization Problems

The results are recorded after 10 independent runs of each algorithm. For all problems, the population size and the maximum number of iterations are 50 and 300, respectively. To measure the performance of the different algorithms fairly, the best value, average value, worst value, and standard deviation (Std) over the 10 runs are calculated. Meanwhile, based on the average value, a ranking is determined for each algorithm to illustrate its comprehensive performance.
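These summary indexes can be reproduced with a few lines of code. The sketch below uses our own helper functions (not part of the original experiments) to compute the four indexes for one algorithm and to rank a set of algorithms by their average fitness, assuming minimization:

```python
def summarize(runs):
    """Best, average, worst, and (population) standard deviation of one
    algorithm's fitness values over independent runs."""
    n = len(runs)
    avg = sum(runs) / n
    std = (sum((v - avg) ** 2 for v in runs) / n) ** 0.5
    return {'best': min(runs), 'avg': avg, 'worst': max(runs), 'std': std}

def rank_by_average(results):
    """results: {algorithm: [fitness per run]}.
    Smaller average fitness -> better (1-based) rank."""
    ordered = sorted(results, key=lambda alg: sum(results[alg]) / len(results[alg]))
    return {alg: i + 1 for i, alg in enumerate(ordered)}
```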

4.1.1. Step-Cone Pulley Design Problem

The main goal of this problem is to minimize the weight of the four-step cone pulley by adjusting five key variables: the diameter of each pulley (d1, d2, d3, d4) and the width of the pulley (w). As shown in Figure 16, the system contains 11 nonlinear constraints, which ensure, among other requirements, that the transmitted power is at least 0.75 hp. Its mathematical model is defined in Equation (24).
\[ \mathbf{x} = [x_1, x_2, x_3, x_4, x_5] = [d_1, d_2, d_3, d_4, w], \]
\[ f(\mathbf{x}) = \rho w \left[ d_1^2 \left( 1 + \left( \tfrac{N_1}{N} \right)^2 \right) + d_2^2 \left( 1 + \left( \tfrac{N_2}{N} \right)^2 \right) + d_3^2 \left( 1 + \left( \tfrac{N_3}{N} \right)^2 \right) + d_4^2 \left( 1 + \left( \tfrac{N_4}{N} \right)^2 \right) \right], \tag{24} \]
subject to
\[
\begin{aligned}
& h_1(\mathbf{x}) = C_1 - C_2 = 0, \quad h_2(\mathbf{x}) = C_1 - C_3 = 0, \quad h_3(\mathbf{x}) = C_1 - C_4 = 0, \\
& g_i(\mathbf{x}) = R_i - 2 \ge 0, \quad i = 1, 2, 3, 4, \\
& g_{4+i}(\mathbf{x}) = P_i - (0.75 \times 745.6998) \ge 0, \quad i = 1, 2, 3, 4,
\end{aligned}
\]
where
\[
\begin{aligned}
& C_i = \frac{\pi d_i}{2} \left( 1 + \frac{N_i}{N} \right) + \frac{\left( \frac{N_i}{N} - 1 \right)^2 d_i^2}{4a} + 2a, \quad i = 1, 2, 3, 4, \\
& R_i = \exp\!\left( u \left( \pi - 2 \sin^{-1}\!\left( \left( \frac{N_i}{N} - 1 \right) \frac{d_i}{2a} \right) \right) \right), \quad i = 1, 2, 3, 4, \\
& P_i = s t w \left( 1 - \frac{1}{R_i} \right) \frac{\pi d_i N_i}{60}, \quad i = 1, 2, 3, 4, \\
& t = 8~\mathrm{mm}, \; s = 1.75~\mathrm{MPa}, \; u = 0.35, \; \rho = 7200~\mathrm{kg/m^3}, \; a = 3~\mathrm{mm}, \\
& N_1 = 750, \; N_2 = 450, \; N_3 = 250, \; N_4 = 150, \; N = 350, \\
& 0 \le d_1, d_2, d_3, d_4, w \le 60.
\end{aligned}
\]
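For illustration, the model in Equation (24) can be evaluated with a simple static-penalty scheme, a common way to hand constrained problems to a metaheuristic. The sketch below follows the reconstructed formulas; the function name, penalty weight, and penalty form are our own choices, and the input must keep the arcsine argument within [−1, 1]:

```python
import math

# Constants from the model (units as stated in the text).
t, s, u, rho, a = 8.0, 1.75, 0.35, 7200.0, 3.0
Ns, N = [750.0, 450.0, 250.0, 150.0], 350.0

def pulley_fitness(x, penalty=1e6):
    """Penalised objective for the step-cone pulley problem.
    x = [d1, d2, d3, d4, w]; a sketch, not the authors' code."""
    d, w = x[:4], x[4]
    f = rho * w * sum(di ** 2 * (1 + (Ni / N) ** 2) for di, Ni in zip(d, Ns))
    C, R, P = [], [], []
    for di, Ni in zip(d, Ns):
        C.append(math.pi * di / 2 * (1 + Ni / N)
                 + (Ni / N - 1) ** 2 * di ** 2 / (4 * a) + 2 * a)
        R.append(math.exp(u * (math.pi - 2 * math.asin((Ni / N - 1) * di / (2 * a)))))
        P.append(s * t * w * (1 - 1 / R[-1]) * math.pi * di * Ni / 60)
    viol = sum(abs(C[0] - C[i]) for i in (1, 2, 3))          # h1..h3: equal belt lengths
    viol += sum(max(0.0, 2 - Ri) for Ri in R)                # tension ratio R_i >= 2
    viol += sum(max(0.0, 0.75 * 745.6998 - Pi) for Pi in P)  # power P_i >= 0.75 hp
    return f + penalty * viol
```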
Table 6 summarizes the results of MISBOA and the others for the step-cone pulley design problem. On the best value, the MISBOA and SBOA obtain the same fitness value, better than the others. However, on the worst value, the SBOA is inferior to MISBOA, meaning the stability of MISBOA is enhanced by the introduced strategies. Thus, in the final ranking, the proposed MISBOA performs excellently. The smaller Std also supports the conclusion that the MISBOA is not easily affected by accidental factors when solving engineering optimization problems. Table 7 lists the best variables obtained by the various algorithms for this design problem according to the best fitness values in Table 6.

4.1.2. Planetary Gear Train Design Problem

The main aim of the planetary gear train design optimization problem is to minimize the maximum error of the transmission ratios used in automobiles. To achieve this target, 9 variables need to be adjusted, as shown in Figure 17. Among them, 6 integer variables (N1, N2, N3, N4, N5, N6) represent the numbers of gear teeth, and 3 discrete variables represent the gear modules (m1, m2) and the number of planet gears (p). The model of the planetary gear train design system is given in Equation (25).
\[ \mathbf{x} = [x_1, x_2, \dots, x_9] = [N_1, N_2, N_3, N_4, N_5, N_6, p, m_1, m_2], \]
\[ f(\mathbf{x}) = \max_k \left| i_k - i_{ok} \right|, \quad k = 1, 2, R, \tag{25} \]
where
\[
i_1 = \frac{N_6}{N_4}, \quad i_{o1} = 3.11, \qquad
i_2 = \frac{N_6 (N_1 N_3 + N_2 N_4)}{N_1 N_3 (N_6 - N_4)}, \quad i_{o2} = 1.84, \qquad
i_R = -\frac{N_2 N_6}{N_1 N_3}, \quad i_{oR} = -3.11,
\]
subject to
\[
\begin{aligned}
& g_1(\mathbf{x}) = m_2 (N_6 + 2.5) - D_{\max} \le 0, \\
& g_2(\mathbf{x}) = m_1 (N_1 + N_2) + m_1 (N_2 + 2) - D_{\max} \le 0, \\
& g_3(\mathbf{x}) = m_2 (N_4 + N_5) + m_2 (N_5 + 2) - D_{\max} \le 0, \\
& g_4(\mathbf{x}) = \left| m_1 (N_1 + N_2) - m_2 (N_6 - N_3) \right| - m_1 - m_2 \le 0, \\
& g_5(\mathbf{x}) = -(N_1 + N_2) \sin(\pi/p) + N_2 + 2 + \delta_{22} \le 0, \\
& g_6(\mathbf{x}) = -(N_6 - N_3) \sin(\pi/p) + N_3 + 2 + \delta_{33} \le 0, \\
& g_7(\mathbf{x}) = -(N_4 + N_5) \sin(\pi/p) + N_5 + 2 + \delta_{55} \le 0, \\
& g_8(\mathbf{x}) = (N_3 + N_5 + 2 + \delta_{35})^2 - (N_6 - N_3)^2 - (N_4 + N_5)^2 + 2 (N_6 - N_3)(N_4 + N_5) \cos\!\left( \frac{2\pi}{p} - \beta \right) \le 0, \\
& g_9(\mathbf{x}) = N_4 - N_6 + 2 N_5 + 2 \delta_{56} + 4 \le 0, \\
& g_{10}(\mathbf{x}) = 2 N_3 - N_6 + N_4 + 2 \delta_{34} + 4 \le 0, \\
& h_1(\mathbf{x}) = \frac{N_6 - N_4}{p} = \text{integer},
\end{aligned}
\]
with
\[
\begin{aligned}
& \delta_{22} = \delta_{33} = \delta_{55} = \delta_{56} = 0.5, \quad
\beta = \cos^{-1}\!\left( \frac{(N_4 + N_5)^2 + (N_6 - N_3)^2 - (N_3 + N_5)^2}{2 (N_6 - N_3)(N_4 + N_5)} \right), \\
& p \in \{3, 4, 5\}, \quad m_1, m_2 \in \{1.75, 2.0, 2.25, 2.5, 2.75, 3.0\}, \quad D_{\max} = 220, \\
& 17 \le N_1 \le 96, \; 14 \le N_2 \le 54, \; 14 \le N_3 \le 51, \; 17 \le N_4 \le 46, \; 14 \le N_5 \le 51, \; 48 \le N_6 \le 124.
\end{aligned}
\]
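As a small illustration of the objective in Equation (25), the fragment below evaluates the worst-case ratio error for a candidate decision vector (constraint handling is omitted; the function name and the sample vector are our own):

```python
def gear_ratio_error(x):
    """Objective of the planetary gear train model: the worst-case
    deviation of the three realised ratios from their targets.
    x = [N1, N2, N3, N4, N5, N6, p, m1, m2]."""
    N1, N2, N3, N4, N5, N6 = x[:6]
    i1 = N6 / N4                                       # target 3.11
    i2 = N6 * (N1 * N3 + N2 * N4) / (N1 * N3 * (N6 - N4))  # target 1.84
    iR = -N2 * N6 / (N1 * N3)                          # target -3.11
    ratios = [(i1, 3.11), (i2, 1.84), (iR, -3.11)]
    return max(abs(ik - io) for ik, io in ratios)
```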
In Table 8, the solving results of MISBOA and the others are shown for the planetary gear train design problem. On the best value, the BKA and WOA perform better than the MISBOA, but the MISBOA's average value and Std have obvious advantages, which leads to the better ranking result and illustrates its superior comprehensive performance. An algorithm is not reliable if it achieves better solutions in only a few runs but performs poorly across many experiments; thus, especially for engineering optimization problems, the MISBOA is competitive due to its stronger stability. Table 9 shows the best variables of the selected algorithms according to the best solution.

4.1.3. Robot Gripper Design Problem

The robot gripper problem is a popular optimization problem in mechanical structure engineering. As shown in Figure 18, the system involves connecting-rod lengths, the geometric angle between connecting rods, vertical displacement, clamping pressure, actuator displacement, and horizontal displacement. Seven key parameters influence the performance of the system: the three link lengths (a, b, c), the vertical distance f between the first robotic arm node and the actuator end, the vertical displacement e of the links, the geometric angle d between the second and third links, and the horizontal distance l between the actuator end and the link node. By adjusting these key parameters, the difference between the maximum and minimum gripping force is to be minimized. The problem is described by Equation (26).
\[ \mathbf{x} = [x_1, x_2, \dots, x_7] = [a, b, c, e, f, l, d], \]
\[ f(\mathbf{x}) = \max_z F_k(\mathbf{x}, z) - \min_z F_k(\mathbf{x}, z), \tag{26} \]
subject to
\[
\begin{aligned}
& g_1(\mathbf{x}) = -Y_{\min} + y(\mathbf{x}, Z_{\max}) \le 0, \quad g_2(\mathbf{x}) = -y(\mathbf{x}, Z_{\max}) \le 0, \\
& g_3(\mathbf{x}) = Y_{\max} - y(\mathbf{x}, 0) \le 0, \quad g_4(\mathbf{x}) = y(\mathbf{x}, 0) - Y_G \le 0, \\
& g_5(\mathbf{x}) = l^2 + e^2 - (a + b)^2 \le 0, \quad g_6(\mathbf{x}) = b^2 - (a - e)^2 - (l - Z_{\max})^2 \le 0, \\
& g_7(\mathbf{x}) = Z_{\max} - l \le 0,
\end{aligned}
\]
where
\[
\begin{aligned}
& g = \sqrt{e^2 + (z - l)^2}, \quad \phi = \tan^{-1}\!\frac{e}{l - z}, \\
& \alpha = \cos^{-1}\!\left( \frac{a^2 + g^2 - b^2}{2 a g} \right) + \phi, \quad \beta = \cos^{-1}\!\left( \frac{b^2 + g^2 - a^2}{2 b g} \right) - \phi, \\
& y(\mathbf{x}, z) = 2 \left( e + f + c \sin(\beta + d) \right), \quad F_k(\mathbf{x}, z) = \frac{P b \sin(\alpha + \beta)}{2 c \cos\alpha}, \\
& Y_{\min} = 50, \; Y_{\max} = 100, \; Y_G = 150, \; Z_{\max} = 100, \; P = 100, \\
& 0 \le e \le 50, \; 100 \le c \le 200, \; 10 \le f, a, b \le 150, \; 1 \le d \le 3.14, \; 100 \le l \le 300.
\end{aligned}
\]
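The force model of Equation (26) can be sketched directly. In the fragment below (our own illustration), a uniform grid over the stroke z ∈ [0, Zmax] stands in for the inner min/max search; the input must describe a geometrically valid linkage so the arccosine arguments stay in [−1, 1]:

```python
import math

YMIN, YMAX, YG, ZMAX, P = 50.0, 100.0, 150.0, 100.0, 100.0

def _angles(x, z):
    a, b, c, e, f, l, d = x
    g = math.sqrt(e ** 2 + (z - l) ** 2)
    phi = math.atan2(e, l - z)
    alpha = math.acos((a * a + g * g - b * b) / (2 * a * g)) + phi
    beta = math.acos((b * b + g * g - a * a) / (2 * b * g)) - phi
    return alpha, beta

def grip_force(x, z):
    """Gripping force F_k at actuator displacement z."""
    a, b, c, e, f, l, d = x
    alpha, beta = _angles(x, z)
    return P * b * math.sin(alpha + beta) / (2 * c * math.cos(alpha))

def displacement(x, z):
    """Gripper opening y(x, z); used by constraints g1-g4."""
    a, b, c, e, f, l, d = x
    _, beta = _angles(x, z)
    return 2 * (e + f + c * math.sin(beta + d))

def gripper_objective(x, steps=50):
    """Spread between the largest and smallest force over the stroke."""
    forces = [grip_force(x, ZMAX * k / steps) for k in range(steps + 1)]
    return max(forces) - min(forces)
```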
Table 10 provides the results of the various algorithms for the robot gripper design problem. For this complex system, the MISBOA achieves the advantage on all four measurement indexes and can therefore be regarded as an effective and reliable approach to the robot gripper design problem. This preponderance comes from the suitable and reasonable combination of the improved strategies, which better balances exploration and exploitation. Table 11 shows the best control parameters of the robot gripper system according to the best fitness value.

4.1.4. Four-Stage Gearbox Design Problem

In this problem, the total weight of the gearbox system must be reduced by optimizing 22 key variables, as shown in Figure 19. These variables fall into four classes: the positions of the gears and pinions, the thickness of the gear blanks, and the numbers of teeth. Meanwhile, 86 constraints need to be satisfied, which increases the difficulty of the problem. These requirements involve the contact ratio, pitch, gear strength, gear assembly, kinematics, and gear size. The model can be represented by Equation (27):
\[ \mathbf{x} = [x_1, x_2, \dots, x_{22}] = [N_{p1}, \dots, N_{p4}, N_{g1}, \dots, N_{g4}, b_1, \dots, b_4, x_{p1}, x_{g1}, \dots, x_{g4}, y_{p1}, y_{g1}, \dots, y_{g4}], \]
\[ f(\mathbf{x}) = \frac{\pi}{1000} \sum_{i=1}^{4} \frac{b_i c_i^2 \left( N_{pi}^2 + N_{gi}^2 \right)}{\left( N_{pi} + N_{gi} \right)^2}, \tag{27} \]
subject to the bending-strength constraints
\[
g_i(\mathbf{x}) = \left( \frac{366000}{\pi w_1} \prod_{k=1}^{i-1} \frac{N_{gk}}{N_{pk}} + \frac{2 c_i N_{pi}}{N_{pi} + N_{gi}} \right) \frac{(N_{pi} + N_{gi})^2}{4 b_i c_i^2 N_{pi}} - \frac{\sigma_N J_R}{0.0167\, W K_o K_m} \le 0, \quad i = 1, \dots, 4,
\]
the pitting-resistance constraints
\[
g_{4+i}(\mathbf{x}) = \left( \frac{366000}{\pi w_1} \prod_{k=1}^{i-1} \frac{N_{gk}}{N_{pk}} + \frac{2 c_i N_{pi}}{N_{pi} + N_{gi}} \right) \frac{(N_{pi} + N_{gi})^3}{4 b_i c_i^2 N_{gi} N_{pi}^2} - \left( \frac{\sigma_H}{C_p} \right)^2 \frac{\sin\varphi \cos\varphi}{0.0334\, W K_o K_m} \le 0, \quad i = 1, \dots, 4,
\]
the contact-ratio constraints
\[
g_{8+i}(\mathbf{x}) = CR_{\min}\, \pi \cos\varphi + \frac{\sin\varphi \,(N_{pi} + N_{gi})}{2} - N_{pi} \sqrt{\frac{\sin^2\varphi}{4} + \frac{1}{N_{pi}} + \frac{1}{N_{pi}^2}} - N_{gi} \sqrt{\frac{\sin^2\varphi}{4} + \frac{1}{N_{gi}} + \frac{1}{N_{gi}^2}} \le 0, \quad i = 1, \dots, 4,
\]
the gear-size constraints
\[
g_{12+i}(\mathbf{x}) = d_{\min} - \frac{2 c_i N_{pi}}{N_{pi} + N_{gi}} \le 0, \qquad
g_{16+i}(\mathbf{x}) = d_{\min} - \frac{2 c_i N_{gi}}{N_{pi} + N_{gi}} \le 0, \quad i = 1, \dots, 4,
\]
the position (housing) constraints
\[
\begin{aligned}
& g_{21}(\mathbf{x}) = x_{p1} + \frac{(N_{p1} + 2) c_1}{N_{p1} + N_{g1}} - L_{\max} \le 0, \quad
g_{22\text{–}24}(\mathbf{x}) = -L_{\max} + \frac{(N_{pi} + 2) c_i}{N_{pi} + N_{gi}} + x_{g(i-1)} \le 0, \; i = 2, 3, 4, \\
& g_{25}(\mathbf{x}) = -x_{p1} + \frac{(N_{p1} + 2) c_1}{N_{p1} + N_{g1}} \le 0, \quad
g_{26\text{–}28}(\mathbf{x}) = \frac{(N_{pi} + 2) c_i}{N_{pi} + N_{gi}} - x_{g(i-1)} \le 0, \; i = 2, 3, 4, \\
& g_{29}(\mathbf{x}) = y_{p1} + \frac{(N_{p1} + 2) c_1}{N_{p1} + N_{g1}} - L_{\max} \le 0, \quad
g_{30\text{–}32}(\mathbf{x}) = -L_{\max} + \frac{(N_{pi} + 2) c_i}{N_{pi} + N_{gi}} + y_{g(i-1)} \le 0, \; i = 2, 3, 4, \\
& g_{33}(\mathbf{x}) = \frac{(N_{p1} + 2) c_1}{N_{p1} + N_{g1}} - y_{p1} \le 0, \quad
g_{34\text{–}36}(\mathbf{x}) = \frac{(N_{pi} + 2) c_i}{N_{pi} + N_{gi}} - y_{g(i-1)} \le 0, \; i = 2, 3, 4, \\
& g_{37\text{–}40}(\mathbf{x}) = -L_{\max} + \frac{(N_{pi} + 2) c_i}{N_{pi} + N_{gi}} + x_{gi} \le 0, \quad
g_{41\text{–}44}(\mathbf{x}) = -x_{gi} + \frac{(N_{pi} + 2) c_i}{N_{pi} + N_{gi}} \le 0, \; i = 1, \dots, 4, \\
& g_{45\text{–}48}(\mathbf{x}) = y_{gi} + \frac{(N_{pi} + 2) c_i}{N_{pi} + N_{gi}} - L_{\max} \le 0, \quad
g_{49\text{–}52}(\mathbf{x}) = -y_{gi} + \frac{(N_{pi} + 2) c_i}{N_{pi} + N_{gi}} \le 0, \; i = 1, \dots, 4,
\end{aligned}
\]
the pitch (blank-thickness) feasibility constraints
\[
\begin{aligned}
& g_{53\text{–}56}(\mathbf{x}) = -(b_i - 8.255)(b_i - 5.715)(b_i - 12.70)\left( 0.945\, c_i - N_{pi} - N_{gi} \right) \le 0, \\
& g_{57\text{–}60}(\mathbf{x}) = (b_i - 8.255)(b_i - 3.175)(b_i - 12.70)\left( 0.646\, c_i - N_{pi} - N_{gi} \right) \le 0, \\
& g_{61\text{–}64}(\mathbf{x}) = (b_i - 5.715)(b_i - 3.175)(b_i - 12.70)\left( 0.504\, c_i - N_{pi} - N_{gi} \right) \le 0, \\
& g_{65\text{–}68}(\mathbf{x}) = (b_i - 5.715)(b_i - 3.175)(b_i - 8.255)\left( -N_{pi} - N_{gi} \right) \le 0, \\
& g_{69\text{–}72}(\mathbf{x}) = -(b_i - 8.255)(b_i - 5.715)(b_i - 12.70)\left( N_{pi} + N_{gi} - 1.812\, c_i \right) \le 0, \\
& g_{73\text{–}76}(\mathbf{x}) = (b_i - 8.255)(b_i - 3.175)(b_i - 12.70)\left( N_{pi} + N_{gi} - 0.945\, c_i \right) \le 0, \\
& g_{77\text{–}80}(\mathbf{x}) = -(b_i - 5.715)(b_i - 3.175)(b_i - 12.70)\left( N_{pi} + N_{gi} - 0.646\, c_i \right) \le 0, \\
& g_{81\text{–}84}(\mathbf{x}) = (b_i - 5.715)(b_i - 3.175)(b_i - 8.255)\left( N_{pi} + N_{gi} - 0.504\, c_i \right) \le 0, \quad i = 1, \dots, 4,
\end{aligned}
\]
and the output-speed constraints
\[
g_{85}(\mathbf{x}) = w_{\min} - w_1 \frac{N_{p1} N_{p2} N_{p3} N_{p4}}{N_{g1} N_{g2} N_{g3} N_{g4}} \le 0, \qquad
g_{86}(\mathbf{x}) = -w_{\max} + w_1 \frac{N_{p1} N_{p2} N_{p3} N_{p4}}{N_{g1} N_{g2} N_{g3} N_{g4}} \le 0,
\]
with
\[
\begin{aligned}
& c_i = \sqrt{ \left( y_{gi} - y_{p1} \right)^2 + \left( x_{gi} - x_{p1} \right)^2 }, \\
& K_o = 1.5, \; d_{\min} = 25, \; J_R = 0.2, \; \varphi = 120^{\circ}, \; W = 55.9, \; K_m = 1.6, \; CR_{\min} = 1.4, \; L_{\max} = 127, \\
& C_p = 464, \; \sigma_H = 3290, \; \sigma_N = 2090, \; w_1 = 5000, \; w_{\min} = 245, \; w_{\max} = 255, \\
& b_i \in \{3.175,\, 5.715,\, 8.255,\, 12.7\}, \quad 7 \le N_{gi}, N_{pi} \le 76, \\
& x_{p1}, y_{p1}, x_{gi}, y_{gi} \in \{12.7,\, 25.4,\, 38.1,\, 50.8,\, 63.5,\, 76.2,\, 88.9,\, 101.6,\, 114.3\}.
\end{aligned}
\]
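Because of the 86 constraints, a full evaluator is lengthy; the fragment below sketches only the objective of the four-stage gearbox model, with our own packing of the 22 variables following the order of the decision vector (constraints would typically be added as a penalty term, as in the pulley example above):

```python
import math

def gearbox_weight(x):
    """Objective of the four-stage gearbox model.
    x packs [Np1..Np4, Ng1..Ng4, b1..b4, xp1, xg1..xg4, yp1, yg1..yg4]."""
    Np, Ng, b = x[0:4], x[4:8], x[8:12]
    xp1, xg = x[12], x[13:17]
    yp1, yg = x[17], x[18:22]
    # Centre distance of each stage, measured from the (xp1, yp1) reference.
    c = [math.hypot(yg[i] - yp1, xg[i] - xp1) for i in range(4)]
    return (math.pi / 1000) * sum(
        b[i] * c[i] ** 2 * (Np[i] ** 2 + Ng[i] ** 2) / (Np[i] + Ng[i]) ** 2
        for i in range(4))
```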
Table 12 lists the solving results of all algorithms for the four-stage gearbox design problem. It can be noticed that the performance of the different algorithms varies greatly owing to the increased number of variables and the introduction of many constraints. For example, although the SBOA achieves the best single value over 10 runs, its average value is poor compared with that of the MISBOA. Compared with the other algorithms, the advantage of MISBOA on the average value is further expanded. Thus, the MISBOA is a suitable tool for this complex problem. Table 13 shows the best variables corresponding to the best fitness value of each algorithm.

4.1.5. Traveling Salesman Problem

The traveling salesman problem (TSP) is a well-known combinatorial optimization problem. Each salesman must find a path through a certain number of cities that minimizes the total path length, subject to the requirement that the path passes through each city once and only once and finally returns to the starting point. The TSP is widely applied in practice, for example in logistics and transportation. Because the candidate solutions are the permutations of all cities, the difficulty increases dramatically with the number of cities.
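The fitness evaluation for this case is straightforward: sum the lengths of the two closed tours. A minimal sketch with our own helper functions, assuming the two salesmen serve disjoint city subsets:

```python
import math

def tour_length(cities, route):
    """Length of a closed tour: route is an ordering of city indices,
    and the salesman returns to the starting city."""
    total = 0.0
    for i in range(len(route)):
        x1, y1 = cities[route[i]]
        x2, y2 = cities[route[(i + 1) % len(route)]]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

def two_salesman_cost(cities, route1, route2):
    """Fitness for the 2-salesman case: both tours are closed and the
    routes partition the cities."""
    return tour_length(cities, route1) + tour_length(cities, route2)
```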
In this case, there are 80 randomly generated cities and 2 traveling salesmen. Table 14 lists the results obtained by MISBOA and the others for this TSP instance. On the best value, average value, and standard deviation, the MISBOA ranks first. That is, for complex problems with many variables, the MISBOA can search the feasible space reasonably with suitable strategies in the different stages. For the original SBOA, the lack of more effective searching strategies leads to worse overall performance than the MISBOA. Figure 20 plots the best routes obtained by the different algorithms, together with their convergence curves and box plots. Comparing Figure 20a,b, the numbers of cities visited by the two salesmen are not evenly balanced. Compared with the other solutions, the routes found by MISBOA are more reasonable and avoid additional travel, whereas the routes based on the other algorithms wind around each other, leading to longer total distances.

4.2. Shape Optimization Problems of Combined Curves

In the field of practical engineering applications, it is of great significance to generate a smooth curve that can pass through all given data points by the interpolation approach [48]. In this section, the energy of the combined quartic generalized Ball interpolation (CQGBI) curve is studied and optimized.
The CQGBI curve is constructed by combining several quartic curves. For the given data points \(Q_i \in \mathbb{R}^2\) \((i = 0, 1, \dots, n)\), a CQGBI curve \(R(t; \lambda, \mu)\) can be obtained, as shown in Equation (28) [49].
\[ R(t; \lambda, \mu) = \left\{ R_1(t; \lambda_1, \mu_1), \dots, R_j(t; \lambda_j, \mu_j), \dots, R_n(t; \lambda_n, \mu_n) \right\}, \tag{28} \]
where
\[
R_j(t; \lambda_j, \mu_j) = (1 + 2t)(1 - t)^2 P_{j-1} + (3 - 2t) t^2 P_j + \frac{(2 + \lambda_j - \lambda_j t)\, t (1 - t)^2}{2 + \lambda_j} P'_{j-1} - \frac{(2 + \mu_j t)\, t^2 (1 - t)}{2 + \mu_j} P'_j, \tag{29}
\]
in which \(\lambda_j, \mu_j \in (-2, 4]\) \((j = 1, 2, \dots, n)\) are the shape parameters and can be adjusted.
Hence, it is necessary to select suitable parameters to maintain the smoothness of the obtained combined curve. Here, the curvature variation energy is used to measure the smoothness of the CQGBI curve [50]:
\[ E_j(\lambda_j, \mu_j) = \int_0^1 \left\| R'''_j(t; \lambda_j, \mu_j) \right\|^2 \mathrm{d}t, \tag{30} \]
where \(R'''_j(t; \lambda_j, \mu_j)\) is the third derivative of the \(j\)-th segment of the CQGBI curve.
After further calculation, the third derivative in Equation (30) can be expressed as
\[
R'''_j(t; \lambda_j, \mu_j) = 12 P_{j-1} - 12 P_j + \frac{18\lambda_j - 24\lambda_j t + 12}{2 + \lambda_j} P'_{j-1} + \frac{24\mu_j t - 6\mu_j + 12}{2 + \mu_j} P'_j. \tag{31}
\]
Combining Equations (30) and (31), we obtain
\[
\begin{aligned}
E_j(\lambda_j, \mu_j) ={}& \int_0^1 \left\| 12 P_{j-1} - 12 P_j + \frac{18\lambda_j - 24\lambda_j t + 12}{2 + \lambda_j} P'_{j-1} + \frac{24\mu_j t - 6\mu_j + 12}{2 + \mu_j} P'_j \right\|^2 \mathrm{d}t \\
={}& 144 \left\| P_{j-1} \right\|^2 + 144 \left\| P_j \right\|^2 - 288\, P_{j-1} \cdot P_j + k_1 \left\| P'_{j-1} \right\|^2 + k_2 \left\| P'_j \right\|^2 \\
& + k_3\, P_{j-1} \cdot P'_{j-1} + k_4\, P_{j-1} \cdot P'_j - k_3\, P_j \cdot P'_{j-1} - k_4\, P_j \cdot P'_j + k_5\, P'_{j-1} \cdot P'_j, \tag{32}
\end{aligned}
\]
where
\[
k_1 = \frac{84\lambda_j^2 + 144\lambda_j + 144}{(2 + \lambda_j)^2}, \quad
k_2 = \frac{84\mu_j^2 + 144\mu_j + 144}{(2 + \mu_j)^2}, \quad
k_3 = \frac{144\lambda_j + 288}{2 + \lambda_j}, \quad
k_4 = \frac{144\mu_j + 288}{2 + \mu_j}, \quad
k_5 = \frac{144\lambda_j + 144\mu_j - 24\lambda_j\mu_j + 288}{(2 + \lambda_j)(2 + \mu_j)}.
\]
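The closed form of Equation (32) can be verified numerically. The sketch below (our own check, using scalar control data for brevity) computes E_j both from the coefficients k1–k5 and by Simpson integration of the squared third derivative from Equation (31); since the integrand is quadratic in t, Simpson's rule is exact up to round-off:

```python
def segment_energy(lam, mu, p0, p1, d0, d1):
    """Closed-form curvature-variation energy of one CQGBI segment,
    with scalar endpoints p0, p1 and tangents d0, d1."""
    k1 = (84 * lam ** 2 + 144 * lam + 144) / (2 + lam) ** 2
    k2 = (84 * mu ** 2 + 144 * mu + 144) / (2 + mu) ** 2
    k3 = (144 * lam + 288) / (2 + lam)
    k4 = (144 * mu + 288) / (2 + mu)
    k5 = (144 * lam + 144 * mu - 24 * lam * mu + 288) / ((2 + lam) * (2 + mu))
    return (144 * p0 ** 2 + 144 * p1 ** 2 - 288 * p0 * p1
            + k1 * d0 ** 2 + k2 * d1 ** 2
            + k3 * p0 * d0 + k4 * p0 * d1
            - k3 * p1 * d0 - k4 * p1 * d1
            + k5 * d0 * d1)

def third_derivative(t, lam, mu, p0, p1, d0, d1):
    """R_j'''(t) of one segment (scalar form)."""
    return (12 * p0 - 12 * p1
            + (18 * lam - 24 * lam * t + 12) / (2 + lam) * d0
            + (24 * mu * t - 6 * mu + 12) / (2 + mu) * d1)

def numeric_energy(lam, mu, p0, p1, d0, d1, n=4):
    """Composite Simpson integration of (R''')^2 over [0, 1]; exact
    here because the integrand is a quadratic polynomial in t."""
    h = 1.0 / (2 * n)
    total = 0.0
    for i in range(2 * n + 1):
        w = 1 if i in (0, 2 * n) else (4 if i % 2 else 2)
        total += w * third_derivative(i * h, lam, mu, p0, p1, d0, d1) ** 2
    return total * h / 3
```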
Then, taking the total energy of the whole CQGBI curve as the objective, the optimization model can be established as Equation (33):
\[ \mathbf{x} = [x_1, x_2, \dots, x_{2n}] = [\lambda_1, \lambda_2, \dots, \lambda_n, \mu_1, \mu_2, \dots, \mu_n], \]
\[ f(\mathbf{x}) = \sum_{j=1}^{n} E_j(\lambda_j, \mu_j), \qquad \text{s.t.} \quad \lambda_j, \mu_j \in (-2, 4] \quad (j = 1, 2, \dots, n), \tag{33} \]
with \(E_j(\lambda_j, \mu_j)\) given by Equation (32).
That is, for a CQGBI curve with n + 1 data points, 2n parameters need to be optimized. Here, the shape optimization of wind-driven generator blades is studied. As shown in Figure 21, the blade is a core component of a wind power generation system, and its shape significantly affects the efficiency of power generation. Thus, the MISBOA and the other algorithms are used to select the most suitable parameters to reduce the energy of the CQGBI curve that describes the shape of a wind-driven generator blade.
After 10 runs, the results of the various algorithms for the shape optimization problem of wind-driven generator blades are listed in Table 15. From the summarized data, the MISBOA ranks first thanks to its excellent performance on all four measurement indexes. Especially for the standard deviation, the MISBOA is significantly superior to the other algorithms; it achieves similarly accurate solutions on every run, meaning better results can be obtained in fewer executions. The performance of the original SBOA is severely affected, ranking only fifth. The QIO and HHO obtain the same best value but have poor comprehensive performance due to their larger average values. Based on the best solutions, the best designs for wind-driven generator blades are plotted in Figure 22, with local enlargements of some key sites. For the solutions with larger fitness values, the curves at corners are not smooth and will affect power generation efficiency, as in Figure 22d,i,l. For the MISBOA, the CQGBI curve is smoother, reducing unnecessary losses.

5. Conclusions and Future Work

Aiming at the defects of the original SBOA, this paper proposed an improved version by introducing four specific strategies. Firstly, a feedback regulation mechanism based on incremental PID control is applied; it maintains high search effectiveness by updating the whole population according to the output value. In the hunting stage, to enhance the success rate of capture, a golden sinusoidal guidance strategy is employed. Meanwhile, to keep the population diverse, a cooperative camouflage strategy and an update strategy based on cosine similarity are introduced into the escaping stage.
According to the results on the CEC2022 test suite, when the dimension is 10, the MISBOA ranks first on 6 functions, while the SBOA and QIO rank first on only 1 and 4 functions, respectively. Meanwhile, the comprehensive performance of each algorithm on the different functions is compared by radar maps. Although the final rank of QIO is second, it ranks fourth on 4 functions and only eighth on CEC12, meaning poorer stability than MISBOA, whose worst rank is third, on a single function. When the dimension is increased to 20, the advantage of MISBOA is more obvious: it ranks first on 10 of the 12 test functions, accounting for 83.33% of the total, while the original SBOA and QIO algorithms rank first on only 1 test function each. This difference illustrates that the introduced improvement strategies effectively enhance the searching accuracy and stability of MISBOA on various problems. From the convergence curves, owing to the feedback regulation mechanism and the golden sinusoidal guidance strategy, the MISBOA quickly approaches better solutions; in the later searching process it stays ahead because the other two strategies help it escape local optima. For the five real-world optimization problems, the MISBOA also has the best performance on the fitness values, indicating a stronger searching ability with higher accuracy and stability. Finally, when it is used to solve the shape optimization problem of the CQGBI curve, the obtained parameters yield a smoother shape that improves power generation efficiency.
In the future, the practical application of the MISBOA algorithm will be further explored. Meanwhile, whether the improved strategy adopted in this paper will have a positive impact on other original algorithms is also a problem that needs to be studied.

Author Contributions

Conceptualization, S.Q., J.L. and G.H.; Methodology, S.Q., J.L., X.B. and G.H.; Software, S.Q. and X.B.; Validation, J.L. and X.B.; Formal analysis, J.L.; Investigation, S.Q., J.L., X.B. and G.H.; Resources, J.L. and G.H.; Data curation, S.Q. and X.B.; Writing—original draft, S.Q., J.L., X.B. and G.H.; Writing—review & editing, S.Q., J.L., X.B. and G.H.; Visualization, S.Q. and X.B.; Supervision, G.H.; Project administration, S.Q., J.L. and G.H.; Funding acquisition, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by Shaanxi Key Research and Development General Project (2024GX-YBXM-529); Shaanxi Province Art and Science Planning Project (2023HZ1758); Shaanxi Provincial Social Science Fund Project (2023J005); Research Project on Experimental Teaching and Teaching Laboratory Construction of the Ministry of Education (SYJX2024-023).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during the study are included in this published article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, W.; Wang, G.-G.; Gandomi, A.H. A survey of learning-based intelligent optimization algorithms. Arch. Comput. Methods Eng. 2021, 28, 3781–3799. [Google Scholar] [CrossRef]
  2. Tsai, S.C.; Yeh, Y.; Wang, H.; Chou, T.C. Efficient optimization in stochastic production planning problems with product substitution. Comput. Oper. Res. 2024, 164, 106544. [Google Scholar] [CrossRef]
  3. Li, Q.; Guo, L. Nature-inspired metaheuristic optimization algorithms for urban transit routing problem. Eng. Res. Express 2023, 5, 015040. [Google Scholar] [CrossRef]
  4. Hu, G.; Gong, C.; Li, X.; Xu, Z. CGKOA: An enhanced Kepler optimization algorithm for multi-domain optimization problems. Comput. Methods Appl. Mech. Eng. 2024, 425, 116964. [Google Scholar] [CrossRef]
  5. Truong, D.-N.; Chou, J.-S. Fuzzy adaptive forensic-based investigation algorithm for optimizing frequency-constrained structural dome design. Math. Comput. Simul. 2023, 210, 473–531. [Google Scholar] [CrossRef]
  6. Hu, G.; Huang, F.; Seyyedabbasi, A.; Wei, G. Enhanced multi-strategy bottlenose dolphin optimizer for UAVs path planning. Appl. Math. Model. 2024, 130, 243–271. [Google Scholar] [CrossRef]
  7. Younis, H.B.; Moosavi, S.K.; Zafar, M.H.; Hadi, S.F.; Mansoor, M. Feature selection based on dataset variance optimization using Whale Optimization Algorithm (WOA). In Handbook of Whale Optimization Algorithm; Academic Press: Cambridge, MA, USA, 2024; pp. 547–565. [Google Scholar]
  8. Hu, G.; Guo, Y.; Wei, G.; Abualigah, L. Genghis Khan shark optimizer: A novel nature-inspired algorithm for engineering optimization. Adv. Eng. Inform. 2023, 58, 102210. [Google Scholar] [CrossRef]
  9. Hu, G.; Zhong, J.; Wei, G. SaCHBA_PDN: Modified honey badger algorithm with multi-strategy for UAV path planning. Expert Syst. Appl. 2023, 223, 119941. [Google Scholar] [CrossRef]
  10. Hu, G.; Zheng, Y.; Houssein, E.H.; Wei, G. DRPSO:A multi-strategy fusion particle swarm optimization algorithm with a replacement mechanisms for colon cancer pathology image segmentation. Comput. Biol. Med. 2024, 108780. [Google Scholar] [CrossRef]
  11. Raza Moosavi, S.K.; Zafar, M.H.; Mirjalili, S.; Sanfilippo, F. Improved barnacles movement optimizer (ibmo) algorithm for engineering design problems. In International Conference on Artificial Intelligence and Soft Computing; Springer Nature: Cham, Switzerland, 2023; pp. 427–438. [Google Scholar] [CrossRef]
  12. Song, B.Y.; Wang, Z.D.; Zou, L. An improved PSO algorithm for smooth path planning of mobile robots using continuous high-degree Bezier curve. Appl. Soft Comput. 2021, 100, 106960. [Google Scholar] [CrossRef]
  13. Morin, M.; Abi-Zeid, I.; Quimper, C.G. Ant colony optimization for path planning in search and rescue operations. Eur. J. Oper. Res. 2023, 305, 53–63. [Google Scholar] [CrossRef]
  14. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  15. Ghasemi, M.; Rahimnejad, A.; Hemmati, R.; Akbari, E.; Gadsden, S.A. Wild Geese Algorithm: A novel algorithm for large scale optimization based on the natural life and death of wild geese. Array 2021, 11, 100074. [Google Scholar] [CrossRef]
  16. Slowik, A.; Kwasnicka, H. Evolutionary algorithms and their applications to engineering problems. Neural Comput. Appl. 2020, 32, 12363–12379. [Google Scholar] [CrossRef]
  17. Bhattacharjee, A.K.; Mukhopadhyay, A. An improved genetic algorithm with local refinement for solving hierarchical single-allocation hub median facility location problem. Soft Comput. 2023, 27, 1493–1509. [Google Scholar] [CrossRef]
  18. Li, J.; Soradi-Zeid, S.; Yousefpour, A.; Pan, D. Improved differential evolution algorithm based convolutional neural network for emotional analysis of music data. Appl. Soft Comput. 2024, 153, 111262. [Google Scholar] [CrossRef]
  19. Lian, J.; Hui, G. Human Evolutionary Optimization Algorithm. Expert Syst. Appl. 2024, 241, 122638. [Google Scholar] [CrossRef]
Figure 1. The classification of MHAs.
Figure 2. The motivation of this paper.
Figure 3. The modeling process of searching for prey.
Figure 4. The modeling process of consuming prey.
Figure 5. The modeling process of attacking prey.
Figure 6. The modeling process of camouflage based on environment.
Figure 7. The modeling process of running mode.
Figure 8. The flowchart of the SBOA algorithm.
Figure 9. Schematic diagram of the incremental PID control.
Figure 10. The flowchart of the MISBOA algorithm.
Figure 11. The radar comparison maps between MISBOA and other algorithms on CEC2022 with 10 dimensions.
Figure 12. The average iterative curves of different algorithms on CEC2022 with 10 dimensions.
Figure 13. The radar comparison maps between MISBOA and other algorithms on CEC2022 with 20 dimensions.
Figure 14. The average iterative curves of different algorithms on CEC2022 with 20 dimensions.
Figure 15. The statistical results of ranking on CEC2022 with 10 and 20 dimensions.
Figure 16. The construction of step-cone pulley design.
Figure 17. The construction of the planetary gear train design.
Figure 18. The construction of the robot gripper design.
Figure 19. The construction of the four-stage gearbox design problem.
Figure 20. The best results for TSP with 80 cities and 2 traveling salesmen.
Figure 21. The shape of wind-driven generator blades.
Figure 22. The best design for wind-driven generator blades according to the best solutions of various algorithms.
Table 1. Parameters of the selected algorithms.

Algorithms | Proposed Time | Parameters
BKA | 2024 | Parameter P = 0.9.
GJO | 2022 | The constant c1 = 1.5.
PSO | 1995 | Cognitive factor c1 = 2, social factor c2 = 2.5, acceleration weight w = 2.
PSA | 2024 | Proportion factor Kp = 1, integral factor Ki = 0.5, differential factor Kd = 1.2, Levy flight factor β = 1.5.
QIO | 2023 | None.
NRBO | 2024 | Deciding factor for trap avoidance operator DF = 0.6.
SCSO | 2022 | The maximum sensitivity S = 2.
SBOA | 2024 | Levy flight factor β = 1.5.
MISBOA | 2024 | Levy flight factor β = 1.5, proportion factor Kp = 1, integral factor Ki = 0.5, differential factor Kd = 1.2.
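The Kp, Ki, and Kd entries in Table 1 are the gains of the incremental PID feedback regulation used by PSA and MISBOA. For reference, a minimal sketch of a standard incremental (velocity-form) PID step with those gains; how MISBOA maps this output onto the population update is a modeling choice of the paper and is not reproduced here:

```python
# Gains mirroring Table 1 (Kp = 1, Ki = 0.5, Kd = 1.2).
KP, KI, KD = 1.0, 0.5, 1.2

def pid_increment(e_k: float, e_k1: float, e_k2: float) -> float:
    """Incremental PID output from the last three error samples e(k), e(k-1), e(k-2):
    delta_u = Kp*(e(k)-e(k-1)) + Ki*e(k) + Kd*(e(k)-2*e(k-1)+e(k-2))."""
    return KP * (e_k - e_k1) + KI * e_k + KD * (e_k - 2.0 * e_k1 + e_k2)

# Example: the tracking error shrinking over three consecutive steps.
delta_u = pid_increment(0.2, 0.5, 1.0)
u = 0.0 + delta_u  # u(k) = u(k-1) + delta_u: only the increment is recomputed
```

The velocity form needs no stored error integral, which is why it is popular for update rules that are re-evaluated every iteration.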
Table 2. Results of different algorithms on CEC2022 functions with 10 dimensions.

Index | Metric | BKA | GJO | PSO | PSA | QIO | NRBO | SCSO | SBOA | MISBOA
CEC01 | Ave | 320.52 | 1437.41 | 322.74 | 300.00 | 300.00 | 424.69 | 629.84 | 300.00 | 300.00
CEC01 | Std | 9.79E+01 | 1.26E+03 | 2.05E+01 | 4.09E-14 | 4.78E-08 | 1.31E+02 | 7.70E+02 | 4.48E-14 | 2.59E-14
CEC01 | Rank | 5 | 9 | 6 | 2 | 4 | 7 | 8 | 3 | 1
CEC01 | Time | 0.99 | 0.79 | 0.36 | 0.45 | 3.09 | 2.75 | 4.16 | 1.03 | 2.75
CEC02 | Ave | 411.45 | 438.50 | 405.57 | 410.41 | 406.65 | 424.36 | 431.70 | 404.92 | 402.98
CEC02 | Std | 20.67 | 24.18 | 5.16 | 20.73 | 17.63 | 25.31 | 31.50 | 3.54 | 3.49
CEC02 | Rank | 6 | 9 | 3 | 5 | 4 | 7 | 8 | 2 | 1
CEC02 | Time | 1.21 | 0.93 | 0.40 | 0.58 | 3.54 | 2.64 | 4.84 | 1.03 | 3.72
CEC03 | Ave | 615.80 | 603.44 | 600.01 | 600.00 | 600.00 | 617.38 | 611.65 | 600.00 | 600.00
CEC03 | Std | 8.88E+00 | 2.55E+00 | 8.79E-03 | 1.84E-05 | 4.61E-05 | 6.18E+00 | 7.01E+00 | 3.45E-06 | 0.00E+00
CEC03 | Rank | 8 | 6 | 5 | 3 | 4 | 9 | 7 | 2 | 1
CEC03 | Time | 1.50 | 1.04 | 0.60 | 0.77 | 3.37 | 2.28 | 4.01 | 1.33 | 3.42
CEC04 | Ave | 813.63 | 825.17 | 814.03 | 822.84 | 810.61 | 820.18 | 826.31 | 809.59 | 807.33
CEC04 | Std | 4.95 | 7.31 | 5.95 | 13.24 | 4.00 | 7.80 | 5.86 | 4.55 | 3.26
CEC04 | Rank | 4 | 8 | 5 | 7 | 3 | 6 | 9 | 2 | 1
CEC04 | Time | 1.50 | 1.04 | 0.60 | 0.77 | 3.37 | 2.28 | 4.01 | 1.33 | 3.42
CEC05 | Ave | 995.50 | 936.73 | 900.19 | 915.27 | 900.18 | 968.53 | 1009.84 | 900.00 | 900.00
CEC05 | Std | 6.85E+01 | 3.62E+01 | 2.33E-01 | 2.80E+01 | 4.52E-01 | 7.16E+01 | 1.35E+02 | 2.70E-13 | 0.00E+00
CEC05 | Rank | 8 | 6 | 4 | 5 | 3 | 7 | 9 | 2 | 1
CEC05 | Time | 1.06 | 0.85 | 0.41 | 0.59 | 3.50 | 3.64 | 4.74 | 1.05 | 2.71
CEC06 | Ave | 2086.69 | 7083.45 | 3478.84 | 3362.98 | 1801.11 | 1991.39 | 4725.88 | 3282.25 | 1825.88
CEC06 | Std | 1118.41 | 1627.54 | 2632.62 | 1543.53 | 0.76 | 328.28 | 1984.53 | 1436.34 | 16.18
CEC06 | Rank | 4 | 9 | 7 | 6 | 1 | 3 | 8 | 5 | 2
CEC06 | Time | 1.16 | 1.02 | 0.46 | 0.65 | 3.51 | 5.10 | 5.95 | 1.01 | 3.09
CEC07 | Ave | 2035.31 | 2037.73 | 2013.24 | 2018.92 | 2002.84 | 2044.48 | 2037.13 | 2006.41 | 2003.56
CEC07 | Std | 10.63 | 14.08 | 9.75 | 5.38 | 3.79 | 15.56 | 12.63 | 8.91 | 6.71
CEC07 | Rank | 6 | 8 | 4 | 5 | 1 | 9 | 7 | 3 | 2
CEC07 | Time | 2.35 | 1.66 | 1.07 | 1.18 | 4.91 | 10.88 | 7.22 | 2.02 | 5.90
CEC08 | Ave | 2221.96 | 2225.92 | 2214.48 | 2222.14 | 2211.81 | 2229.52 | 2226.38 | 2211.36 | 2206.67
CEC08 | Std | 8.03 | 4.32 | 9.79 | 20.97 | 10.54 | 4.98 | 4.35 | 10.26 | 9.24
CEC08 | Rank | 5 | 7 | 4 | 6 | 3 | 9 | 8 | 2 | 1
CEC08 | Time | 1.86 | 1.20 | 0.80 | 1.01 | 3.38 | 3.51 | 4.18 | 1.84 | 4.63
CEC09 | Ave | 2543.42 | 2558.71 | 2533.31 | 2529.28 | 2529.28 | 2531.31 | 2549.56 | 2529.28 | 2529.28
CEC09 | Std | 43.08 | 22.24 | 1.06 | 0.00 | 0.00 | 3.15 | 23.20 | 0.00 | 0.00
CEC09 | Rank | 7 | 9 | 6 | 1 | 4 | 5 | 8 | 2 | 3
CEC09 | Time | 1.57 | 1.05 | 0.63 | 0.81 | 3.41 | 3.25 | 4.18 | 1.47 | 3.55
CEC10 | Ave | 2543.71 | 2553.59 | 2542.78 | 2528.25 | 2500.35 | 2556.39 | 2533.86 | 2525.53 | 2507.38
CEC10 | Std | 57.93 | 62.08 | 56.88 | 51.23 | 0.07 | 64.70 | 56.12 | 46.63 | 27.46
CEC10 | Rank | 7 | 8 | 6 | 4 | 1 | 9 | 5 | 3 | 2
CEC10 | Time | 1.48 | 1.00 | 0.58 | 0.76 | 3.25 | 3.39 | 4.09 | 1.34 | 3.68
CEC11 | Ave | 2659.40 | 2822.80 | 2713.43 | 2727.86 | 2600.00 | 2816.16 | 2731.96 | 2638.35 | 2623.33
CEC11 | Std | 124.56 | 179.32 | 160.75 | 173.53 | 0.00 | 159.55 | 158.19 | 104.80 | 89.76
CEC11 | Rank | 4 | 9 | 5 | 6 | 1 | 8 | 7 | 3 | 2
CEC11 | Time | 2.27 | 1.47 | 1.04 | 1.17 | 4.47 | 3.97 | 5.15 | 2.20 | 5.52
CEC12 | Ave | 2864.74 | 2865.07 | 2868.81 | 2865.27 | 2866.15 | 2865.21 | 2865.38 | 2860.36 | 2862.03
CEC12 | Std | 1.80 | 2.36 | 1.12 | 1.96 | 1.83 | 1.28 | 1.92 | 1.60 | 1.67
CEC12 | Rank | 3 | 4 | 9 | 6 | 8 | 5 | 7 | 1 | 2
CEC12 | Time | 3.16 | 1.99 | 1.34 | 1.69 | 6.36 | 12.41 | 7.26 | 3.43 | 7.26
Average rank | | 5.58 | 7.67 | 5.33 | 4.67 | 3.08 | 7.00 | 7.58 | 2.50 | 1.58
Final rank | | 6 | 9 | 5 | 4 | 3 | 7 | 8 | 2 | 1
Average time | | 1.68 | 1.17 | 0.69 | 0.87 | 3.85 | 4.68 | 4.98 | 1.59 | 4.14
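The "Rank", "Average rank", and "Final rank" rows above follow the usual bookkeeping of ranking the mean errors per function and then averaging the ranks per algorithm. A minimal sketch of that computation (illustrative values only, not the paper's full result matrix):

```python
import numpy as np

def ordinal_ranks(values: np.ndarray) -> np.ndarray:
    """Rank ascending: 1 = best (lowest mean error); ties broken by position."""
    return values.argsort().argsort() + 1

# Hypothetical mean-error matrix: rows = test functions, columns = algorithms.
means = np.array([
    [320.52, 1437.41, 300.00],  # e.g., CEC01 for three algorithms (illustrative)
    [411.45, 438.50, 402.98],   # e.g., CEC02
])

# Rank the algorithms independently on each function.
per_function = np.vstack([ordinal_ranks(row) for row in means])
average_rank = per_function.mean(axis=0)  # the "Average rank" row
final_rank = ordinal_ranks(average_rank)  # the "Final rank" row
```

Note that real benchmark tables often use a tie-aware ranking (e.g., breaking mean-value ties by standard deviation); the ordinal variant here is the simplest stand-in.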
Table 3. The p-values of WRST between the MISBOA and other algorithms on CEC2022 with 10 dimensions.

Index | BKA | GJO | PSO | PSA | QIO | NRBO | SCSO | SBOA
CEC01 | 6.32E-12/+ | 6.32E-12/+ | 6.32E-12/+ | 8.42E-09/+ | 6.32E-12/+ | 6.32E-12/+ | 6.32E-12/+ | 1.43E-02/+
CEC02 | 2.82E-04/+ | 3.10E-10/+ | 1.67E-03/+ | 1.64E-03/+ | 5.18E-02/= | 1.41E-08/+ | 1.37E-06/+ | 3.66E-04/+
CEC03 | 3.15E-12/+ | 3.15E-12/+ | 3.15E-12/+ | 9.18E-08/+ | 3.15E-12/+ | 3.15E-12/+ | 3.15E-12/+ | 1.58E-02/+
CEC04 | 6.88E-07/+ | 4.78E-11/+ | 4.04E-06/+ | 2.28E-08/+ | 4.17E-04/+ | 1.36E-09/+ | 3.21E-11/+ | 4.24E-02/+
CEC05 | 1.21E-12/+ | 1.21E-12/+ | 1.21E-12/+ | 1.21E-12/+ | 1.21E-12/+ | 1.21E-12/+ | 1.21E-12/+ | 9.04E-13/+
CEC06 | 3.82E-09/+ | 3.02E-11/+ | 7.39E-11/+ | 5.07E-10/+ | 3.02E-11/− | 1.78E-10/+ | 3.02E-11/+ | 3.34E-11/+
CEC07 | 5.49E-11/+ | 3.02E-11/+ | 1.58E-04/+ | 2.19E-08/+ | 5.87E-04/− | 4.50E-11/+ | 4.50E-11/+ | 2.71E-02/+
CEC08 | 2.67E-09/+ | 7.39E-11/+ | 1.43E-05/+ | 5.09E-06/+ | 5.83E-03/+ | 3.02E-11/+ | 7.39E-11/+ | 5.94E-02/+
CEC09 | 5.73E-11/+ | 1.21E-12/+ | 1.21E-12/+ | 2.78E-05/− | 1.21E-12/+ | 1.21E-12/+ | 1.21E-12/+ | 3.34E-01/=
CEC10 | 3.50E-09/+ | 1.07E-09/+ | 2.67E-09/+ | 2.67E-09/+ | 1.43E-08/− | 8.10E-10/+ | 2.03E-09/+ | 2.00E-06/=
CEC11 | 1.16E-09/+ | 4.85E-10/+ | 5.35E-10/+ | 1.29E-08/+ | 1.88E-09/− | 3.98E-10/+ | 8.71E-10/+ | 8.32E-03/+
CEC12 | 4.62E-08/+ | 4.62E-08/+ | 2.96E-11/+ | 1.14E-07/+ | 2.11E-10/+ | 1.26E-09/+ | 4.92E-09/+ | 2.25E-03/−
+/=/− | 12/0/0 | 12/0/0 | 12/0/0 | 11/0/1 | 7/1/4 | 12/0/0 | 12/0/0 | 9/2/1
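Each cell above pairs a Wilcoxon rank-sum test (WRST) p-value with a significance mark, where "+"/"−" indicate MISBOA is significantly better/worse and "=" indicates no significant difference at the usual 0.05 level. A hedged sketch of how one such cell could be produced with SciPy's `ranksums` (synthetic run data; the paper's actual 30-run samples are not reproduced here):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Synthetic stand-ins for 30 independent runs of MISBOA and one competitor.
misboa_runs = rng.normal(loc=300.0, scale=1e-3, size=30)
rival_runs = rng.normal(loc=320.0, scale=10.0, size=30)

# Two-sided Wilcoxon rank-sum test between the two samples.
stat, p = ranksums(misboa_runs, rival_runs)
if p >= 0.05:
    mark = "="  # difference not statistically significant
else:
    # Sign of the difference decides better (+) versus worse (-) for MISBOA.
    mark = "+" if misboa_runs.mean() < rival_runs.mean() else "-"
cell = f"{p:.2E}/{mark}"
```

Being rank-based, the test makes no normality assumption about the run distributions, which is why it is the standard choice for metaheuristic comparisons.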
Table 4. Results of different algorithms on CEC2022 functions with 20 dimensions.

Index | Metric | BKA | GJO | PSO | PSA | QIO | NRBO | SCSO | SBOA | MISBOA
CEC01 | Ave | 791.30 | 11,801.49 | 4525.72 | 300.00 | 473.69 | 6355.33 | 7879.45 | 333.17 | 300.00
CEC01 | Std | 1562.25 | 3673.44 | 1114.88 | 0.00 | 178.77 | 1754.62 | 3915.38 | 33.94 | 0.00
CEC01 | Rank | 5 | 9 | 6 | 2 | 4 | 7 | 8 | 3 | 1
CEC01 | Time | 1.01 | 1.00 | 0.43 | 0.69 | 4.89 | 4.55 | 6.98 | 1.05 | 3.29
CEC02 | Ave | 462.82 | 557.79 | 480.78 | 449.38 | 451.37 | 588.88 | 518.37 | 452.92 | 440.93
CEC02 | Std | 11.70 | 88.91 | 12.19 | 19.88 | 19.27 | 60.80 | 40.44 | 8.68 | 18.84
CEC02 | Rank | 5 | 8 | 6 | 2 | 3 | 9 | 7 | 4 | 1
CEC02 | Time | 1.09 | 0.99 | 0.42 | 0.68 | 4.99 | 3.87 | 7.14 | 1.12 | 2.93
CEC03 | Ave | 641.17 | 620.69 | 602.05 | 602.12 | 600.05 | 647.93 | 641.33 | 600.01 | 600.00
CEC03 | Std | 10.43 | 10.60 | 0.73 | 2.74 | 0.15 | 9.47 | 11.23 | 0.04 | 0.00
CEC03 | Rank | 7 | 6 | 4 | 5 | 3 | 9 | 8 | 2 | 1
CEC03 | Time | 2.59 | 1.91 | 1.25 | 1.54 | 6.88 | 6.27 | 10.11 | 2.53 | 6.26
CEC04 | Ave | 859.76 | 887.60 | 902.92 | 855.63 | 836.68 | 900.91 | 884.70 | 827.14 | 831.58
CEC04 | Std | 12.65 | 26.30 | 11.33 | 17.21 | 9.00 | 19.41 | 14.25 | 8.67 | 12.25
CEC04 | Rank | 5 | 7 | 9 | 4 | 3 | 8 | 6 | 1 | 2
CEC04 | Time | 1.69 | 1.33 | 0.74 | 1.07 | 6.43 | 6.94 | 21.32 | 1.79 | 4.55
CEC05 | Ave | 1885.84 | 1623.36 | 1006.72 | 1513.93 | 913.97 | 2078.83 | 2109.87 | 900.03 | 900.02
CEC05 | Std | 392.09 | 321.74 | 98.38 | 393.05 | 11.00 | 437.03 | 359.83 | 0.06 | 0.08
CEC05 | Rank | 7 | 6 | 4 | 5 | 3 | 8 | 9 | 2 | 1
CEC05 | Time | 1.62 | 1.36 | 0.73 | 1.10 | 6.09 | 8.25 | 23.38 | 1.41 | 3.90
CEC06 | Ave | 4933.37 | 4,852,799.47 | 75,196.95 | 4601.02 | 4301.75 | 10,967.32 | 125,429.45 | 7540.34 | 2350.78
CEC06 | Std | 4780.87 | 11,904,051.13 | 200,057.18 | 3400.84 | 2503.06 | 18,029.84 | 386,929.93 | 7325.22 | 881.48
CEC06 | Rank | 4 | 9 | 7 | 3 | 2 | 6 | 8 | 5 | 1
CEC06 | Time | 2.44 | 2.06 | 0.99 | 1.44 | 11.43 | 8.47 | 27.42 | 2.48 | 6.87
CEC07 | Ave | 2094.99 | 2096.21 | 2045.75 | 2074.80 | 2034.29 | 2130.26 | 2118.64 | 2031.69 | 2026.00
CEC07 | Std | 20.09 | 37.95 | 11.40 | 51.21 | 13.09 | 39.39 | 28.80 | 9.82 | 5.86
CEC07 | Rank | 6 | 7 | 4 | 5 | 3 | 9 | 8 | 2 | 1
CEC07 | Time | 3.08 | 1.94 | 1.35 | 1.68 | 6.81 | 12.40 | 26.72 | 2.65 | 6.89
CEC08 | Ave | 2249.56 | 2247.94 | 2246.59 | 2254.94 | 2222.84 | 2292.85 | 2247.05 | 2222.79 | 2221.58
CEC08 | Std | 45.64 | 41.35 | 36.40 | 52.01 | 4.84 | 58.32 | 34.71 | 1.65 | 0.92
CEC08 | Rank | 7 | 6 | 4 | 8 | 3 | 9 | 5 | 2 | 1
CEC08 | Time | 3.39 | 2.15 | 1.63 | 1.99 | 7.02 | 13.45 | 27.18 | 3.04 | 7.67
CEC09 | Ave | 2506.20 | 2551.08 | 2488.33 | 2480.78 | 2480.78 | 2552.55 | 2527.15 | 2480.78 | 2480.78
CEC09 | Std | 63.53 | 39.97 | 1.86 | 0.00 | 0.00 | 39.51 | 37.19 | 0.00 | 0.00
CEC09 | Rank | 6 | 8 | 5 | 3 | 4 | 9 | 7 | 2 | 1
CEC09 | Time | 2.84 | 1.94 | 1.36 | 1.78 | 6.86 | 11.80 | 26.20 | 2.59 | 6.31
CEC10 | Ave | 3521.79 | 3610.37 | 2944.78 | 2669.55 | 2507.00 | 3380.06 | 3176.59 | 2530.12 | 2522.89
CEC10 | Std | 1013.07 | 1469.48 | 465.33 | 183.17 | 34.71 | 1308.25 | 1051.78 | 54.78 | 51.38
CEC10 | Rank | 8 | 9 | 5 | 4 | 1 | 7 | 6 | 3 | 2
CEC10 | Time | 2.42 | 1.68 | 1.10 | 1.45 | 6.45 | 10.67 | 22.19 | 2.11 | 5.53
CEC11 | Ave | 3380.09 | 4052.40 | 2982.61 | 2916.67 | 2935.35 | 3940.42 | 3483.37 | 2952.02 | 2906.67
CEC11 | Std | 724.80 | 390.72 | 126.19 | 74.66 | 110.57 | 290.98 | 386.07 | 111.55 | 94.44
CEC11 | Rank | 6 | 9 | 5 | 2 | 3 | 8 | 7 | 4 | 1
CEC11 | Time | 3.48 | 2.35 | 1.78 | 2.22 | 7.20 | 13.48 | 23.89 | 7.53 | 3.18
CEC12 | Ave | 3017.43 | 2993.92 | 3016.04 | 2969.80 | 2978.36 | 3009.53 | 3019.32 | 2939.76 | 2900.00
CEC12 | Std | 58.80 | 37.88 | 18.18 | 27.28 | 20.94 | 47.79 | 61.23 | 4.98 | 0.00
CEC12 | Rank | 8 | 5 | 7 | 3 | 4 | 6 | 9 | 2 | 1
CEC12 | Time | 3.95 | 2.61 | 1.97 | 2.46 | 8.31 | 14.44 | 23.61 | 3.51 | 8.39
Average rank | | 6.17 | 7.42 | 5.50 | 3.83 | 3.00 | 7.92 | 7.33 | 2.67 | 1.17
Final rank | | 6 | 8 | 5 | 4 | 3 | 9 | 7 | 2 | 1
Average time | | 2.47 | 1.78 | 1.15 | 1.51 | 6.95 | 9.55 | 20.51 | 2.65 | 5.48
Table 5. The p-values of WRST between the MISBOA and other algorithms on CEC2022 with 20 dimensions.

Index | BKA | GJO | PSO | PSA | QIO | NRBO | SCSO | SBOA
CEC01 | 3.02E-11/+ | 3.02E-11/+ | 3.02E-11/+ | 1.60E-07/+ | 3.02E-11/+ | 3.02E-11/+ | 3.02E-11/+ | 3.02E-11/+
CEC02 | 1.84E-07/+ | 3.24E-11/+ | 3.96E-11/+ | 8.96E-04/+ | 1.51E-05/+ | 2.93E-11/+ | 3.96E-11/+ | 1.53E-08/+
CEC03 | 2.27E-11/+ | 2.27E-11/+ | 2.27E-11/+ | 2.27E-11/+ | 2.27E-11/+ | 2.27E-11/+ | 2.27E-11/+ | 7.47E-09/+
CEC04 | 7.12E-09/+ | 1.21E-10/+ | 3.02E-11/+ | 2.67E-07/+ | 3.39E-02/+ | 3.02E-11/+ | 3.69E-11/+ | 1.76E-01/=
CEC05 | 2.66E-11/+ | 2.66E-11/+ | 2.66E-11/+ | 2.66E-11/+ | 3.60E-11/+ | 2.66E-11/+ | 2.66E-11/+ | 2.19E-08/+
CEC06 | 4.42E-06/+ | 3.69E-11/+ | 4.57E-09/+ | 6.38E-03/+ | 5.87E-04/+ | 7.74E-06/+ | 1.20E-08/+ | 1.49E-06/+
CEC07 | 3.02E-11/+ | 3.34E-11/+ | 1.07E-09/+ | 2.15E-10/+ | 2.84E-04/+ | 3.02E-11/+ | 3.02E-11/+ | 1.33E-02/+
CEC08 | 3.34E-11/+ | 3.02E-11/+ | 3.34E-11/+ | 8.88E-06/+ | 2.28E-01/= | 3.02E-11/+ | 3.02E-11/+ | 5.26E-04/+
CEC09 | 8.87E-12/+ | 8.87E-12/+ | 8.87E-12/+ | 8.87E-12/+ | 8.87E-12/+ | 8.87E-12/+ | 8.87E-12/+ | 8.86E-12/+
CEC10 | 8.48E-09/+ | 1.31E-08/+ | 1.01E-08/+ | 3.08E-08/+ | 6.74E-06/− | 7.09E-08/+ | 2.38E-07/+ | 8.56E-04/+
CEC11 | 2.36E-04/+ | 2.02E-11/+ | 9.63E-02/= | 1.46E-01/= | 5.92E-02/= | 2.02E-11/+ | 2.67E-10/+ | 2.13E-01/=
CEC12 | 3.02E-11/+ | 3.02E-11/+ | 3.02E-11/+ | 3.02E-11/+ | 3.02E-11/+ | 3.02E-11/+ | 3.02E-11/+ | 3.02E-11/+
+/=/− | 12/0/0 | 12/0/0 | 11/1/0 | 11/1/0 | 9/2/1 | 12/0/0 | 12/0/0 | 10/2/0
Table 6. Results of various algorithms for the step-cone pulley design problem.

Algorithms | The Best | The Average | The Worst | The Std | Rank
MISBOA | 9.80015 | 9.80015 | 9.80015 | 8.64E-11 | 1
SBOA | 9.80015 | 9.80016 | 9.80019 | 1.48E-05 | 2
HHO | 9.82369 | 9.98223 | 10.30760 | 1.50E-01 | 8
PSO | 10.05302 | 10.32013 | 10.59591 | 1.82E-01 | 10
SCSO | 10.38607 | 12.60736 | 15.24964 | 1.69E+00 | 13
PSA | 9.80095 | 9.90024 | 10.02584 | 9.08E-02 | 6
QIO | 9.80028 | 9.80087 | 9.80183 | 5.40E-04 | 3
BKA | 9.80709 | 10.15584 | 11.20607 | 5.61E-01 | 9
GJO | 9.85004 | 9.90851 | 9.99557 | 4.78E-02 | 7
WO | 9.80062 | 9.80902 | 9.83530 | 1.09E-02 | 4
NRBO | 9.93519 | 10.35455 | 10.83489 | 2.89E-01 | 11
GWO | 9.81930 | 9.83887 | 9.87625 | 2.37E-02 | 5
WOA | 10.43362 | 11.85415 | 14.78914 | 1.50E+00 | 12
Table 7. The best variables of various algorithms for the step-cone pulley design problem.

Algorithms | x1 (d1) | x2 (d2) | x3 (d3) | x4 (d4) | x5 (w)
MISBOA | 20.5427 | 28.2575 | 50.7967 | 84.4957 | 90.0000
SBOA | 20.5427 | 28.2575 | 50.7967 | 84.4957 | 90.0000
HHO | 20.5671 | 28.4052 | 50.8662 | 84.5135 | 89.9810
PSO | 20.0943 | 29.7522 | 51.2044 | 85.2100 | 89.3350
SCSO | 18.6324 | 30.8902 | 52.2056 | 86.8985 | 87.5718
PSA | 20.5424 | 28.2597 | 50.8006 | 84.5021 | 89.9932
QIO | 20.5428 | 28.2577 | 50.7974 | 84.4962 | 89.9998
BKA | 20.5745 | 28.2576 | 50.8071 | 84.5254 | 90.0000
GJO | 20.7429 | 28.3120 | 50.9868 | 84.6088 | 90.0000
WO | 20.5445 | 28.2606 | 50.7974 | 84.4975 | 89.9988
NRBO | 22.3892 | 28.2576 | 50.7968 | 84.5012 | 89.9998
GWO | 20.5880 | 28.3594 | 50.8323 | 84.5390 | 90.0000
WOA | 24.8581 | 29.2136 | 53.0311 | 85.2885 | 89.1611
Table 8. Results of various algorithms for the planetary gear train design problem.

Algorithms | The Best | The Average | The Worst | The Std | Rank
MISBOA | 0.5269 | 0.5348 | 0.5438 | 0.0070 | 1
SBOA | 0.5332 | 0.5367 | 0.5371 | 0.0012 | 3
HHO | 0.5264 | 0.5363 | 0.5590 | 0.0091 | 2
PSO | 0.5300 | 0.5410 | 0.5567 | 0.0084 | 5
SCSO | 0.5371 | 0.9698 | 1.6232 | 0.3639 | 13
PSA | 0.5273 | 0.5639 | 0.7066 | 0.0533 | 9
QIO | 0.5296 | 0.5378 | 0.5565 | 0.0082 | 4
BKA | 0.5258 | 0.5841 | 0.9832 | 0.1415 | 10
GJO | 0.5263 | 0.5412 | 0.5546 | 0.0083 | 6
WO | 0.5371 | 0.5531 | 0.6004 | 0.0210 | 8
NRBO | 0.5371 | 0.5991 | 0.8486 | 0.1109 | 12
GWO | 0.5305 | 0.5428 | 0.5555 | 0.0086 | 7
WOA | 0.5258 | 0.5988 | 1.1248 | 0.1851 | 11
Table 9. The best variables of various algorithms for the planetary gear train design problem.

Algorithms | x1 (N1) | x2 (N2) | x3 (N3) | x4 (N4) | x5 (N5) | x6 (N6) | x7 (p) | x8 (m1) | x9 (m2)
MISBOA | 37.97 | 26.22 | 22.80 | 24.27 | 23.54 | 87.44 | 3 | 2.00 | 2.00
SBOA | 29.65 | 14.44 | 15.14 | 23.42 | 17.38 | 83.43 | 3 | 3.00 | 2.00
HHO | 45.30 | 36.27 | 36.79 | 32.61 | 38.78 | 120.10 | 3 | 1.75 | 1.75
PSO | 34.79 | 21.30 | 21.17 | 25.22 | 21.31 | 91.40 | 3 | 2.25 | 1.75
SCSO | 29.13 | 16.91 | 14.33 | 16.51 | 19.09 | 61.85 | 3 | 1.75 | 1.75
PSA | 41.53 | 30.21 | 23.72 | 23.51 | 13.51 | 86.87 | 3 | 2.00 | 2.25
QIO | 32.24 | 20.60 | 22.03 | 23.53 | 14.39 | 86.52 | 3 | 2.50 | 2.00
BKA | 34.82 | 25.65 | 25.32 | 23.51 | 21.77 | 87.33 | 3 | 1.75 | 1.75
GJO | 36.84 | 22.21 | 20.38 | 23.60 | 15.43 | 86.79 | 3 | 2.00 | 1.75
WO | 28.55 | 17.30 | 14.05 | 16.51 | 13.51 | 61.64 | 5 | 1.75 | 1.75
NRBO | 28.02 | 19.14 | 15.93 | 16.51 | 16.92 | 62.24 | 3 | 1.75 | 1.75
GWO | 52.65 | 22.10 | 14.28 | 23.91 | 19.59 | 86.97 | 3 | 1.75 | 1.75
WOA | 34.52 | 25.60 | 24.57 | 24.28 | 19.87 | 87.06 | 3 | 1.75 | 1.75
Table 10. Results of various algorithms for the robot gripper design problem.

Algorithms | The Best | The Average | The Worst | The Std | Rank
MISBOA | 2.5596 | 2.9459 | 3.3890 | 0.2267 | 1
SBOA | 2.6154 | 3.0286 | 3.4375 | 0.3525 | 2
HHO | 3.7227 | 14.7582 | 79.5867 | 23.7816 | 12
PSO | 3.3858 | 4.4163 | 5.2854 | 0.6725 | 8
SCSO | 6.8012 | 254.0620 | 1417.0476 | 462.3691 | 13
PSA | 2.8181 | 3.7524 | 5.5934 | 0.8160 | 6
QIO | 3.2198 | 3.7341 | 4.6100 | 0.4506 | 5
BKA | 2.6099 | 3.3315 | 4.3028 | 0.5159 | 3
GJO | 2.8372 | 3.8936 | 4.9890 | 0.6637 | 7
WO | 3.0550 | 4.5285 | 5.9751 | 0.9301 | 9
NRBO | 2.5948 | 5.8424 | 15.3426 | 3.6253 | 10
GWO | 3.4269 | 3.7197 | 4.0471 | 0.2179 | 4
WOA | 3.0753 | 6.1437 | 9.8143 | 2.0333 | 11
Table 11. The best variables of various algorithms for the robot gripper design problem.

Algorithms | x1 (a) | x2 (b) | x3 (c) | x4 (e) | x5 (f) | x6 (l) | x7 (d)
MISBOA | 149.78 | 149.48 | 200.00 | 0.17 | 147.82 | 101.50 | 2.32
SBOA | 150.00 | 149.84 | 200.00 | 0.00 | 10.10 | 105.01 | 1.60
HHO | 149.99 | 149.17 | 160.93 | 0.04 | 27.52 | 129.18 | 1.77
PSO | 136.95 | 130.70 | 180.44 | 5.97 | 134.17 | 107.89 | 2.42
SCSO | 120.55 | 108.47 | 109.46 | 11.36 | 10.00 | 113.75 | 1.68
PSA | 146.24 | 137.89 | 200.00 | 8.09 | 126.20 | 108.51 | 2.29
QIO | 134.48 | 132.96 | 197.78 | 1.03 | 147.67 | 115.51 | 2.44
BKA | 149.97 | 149.83 | 198.44 | 0.00 | 148.88 | 103.50 | 2.39
GJO | 150.00 | 149.81 | 195.32 | 0.00 | 46.06 | 108.24 | 1.80
WO | 150.00 | 148.54 | 200.00 | 0.50 | 92.26 | 132.93 | 2.12
NRBO | 150.00 | 149.76 | 199.99 | 0.09 | 11.80 | 103.79 | 1.60
GWO | 148.03 | 145.10 | 197.54 | 0.49 | 150.00 | 151.07 | 2.53
WOA | 145.91 | 145.51 | 187.74 | 0.00 | 10.73 | 116.18 | 1.66
Table 12. Results of various algorithms for the four-stage gearbox design problem.

Algorithms | The Best | The Average | The Worst | The Std | Rank
MISBOA | 49.80 | 35,629.56 | 137,738.51 | 42,104.67 | 1
SBOA | 43.50 | 53,388.20 | 250,878.33 | 100,100.08 | 2
HHO | 168,279.45 | 350,597.52 | 464,185.35 | 93,312.90 | 7
PSO | 15,548.11 | 106,639.29 | 399,268.03 | 118,788.06 | 4
SCSO | 567,902.76 | 1,171,784.84 | 2,793,481.23 | 623,190.16 | 13
PSA | 12,274.68 | 162,340.57 | 717,276.47 | 213,572.86 | 5
QIO | 46.52 | 60,161.92 | 256,594.48 | 98,071.12 | 3
BKA | 9839.23 | 374,880.35 | 607,695.76 | 231,737.41 | 9
GJO | 68,095.24 | 364,278.70 | 644,579.00 | 192,960.40 | 8
WO | 154,445.43 | 403,588.62 | 581,976.98 | 156,748.27 | 10
NRBO | 117,747.19 | 884,329.98 | 2,232,288.75 | 662,328.94 | 11
GWO | 51.09 | 213,894.36 | 415,612.84 | 140,816.71 | 6
WOA | 714,462.55 | 1,112,066.60 | 1,640,079.66 | 322,925.94 | 12
Table 13. The best variables of various algorithms for the four-stage gearbox design problem.

Algorithms | x1 (Np1) | x2 (Np2) | x3 (Np3) | x4 (Np4) | x5 (Ng1) | x6 (Ng2) | x7 (Ng3) | x8 (Ng4)
MISBOA | 21 | 15 | 23 | 25 | 51 | 36 | 54 | 37
SBOA | 20 | 15 | 20 | 22 | 43 | 35 | 40 | 43
HHO | 10 | 14 | 40 | 14 | 43 | 23 | 41 | 38
PSO | 25 | 21 | 23 | 40 | 56 | 59 | 50 | 58
SCSO | 13 | 7 | 7 | 17 | 27 | 17 | 21 | 22
PSA | 21 | 39 | 20 | 34 | 73 | 51 | 64 | 46
QIO | 21 | 23 | 18 | 23 | 53 | 49 | 36 | 42
BKA | 18 | 18 | 8 | 16 | 25 | 40 | 22 | 37
GJO | 17 | 16 | 22 | 11 | 40 | 52 | 57 | 11
WO | 7 | 46 | 12 | 13 | 25 | 49 | 30 | 27
NRBO | 25 | 17 | 12 | 13 | 47 | 48 | 12 | 48
GWO | 16 | 25 | 21 | 17 | 35 | 54 | 38 | 39
WOA | 12 | 15 | 7 | 7 | 12 | 18 | 37 | 22

Algorithms | x9 (b1) | x10 (b2) | x11 (b3) | x12 (b4) | x13 (xp1) | x14 (xg1) | x15 (xg2) | x16 (xg3)
MISBOA | 3.175 | 3.175 | 3.175 | 3.175 | 88.9 | 38.1 | 50.8 | 38.1
SBOA | 3.175 | 3.175 | 3.175 | 3.175 | 38.1 | 50.8 | 63.5 | 38.1
HHO | 3.175 | 3.175 | 3.175 | 3.175 | 25.4 | 50.8 | 50.8 | 38.1
PSO | 3.175 | 3.175 | 3.175 | 3.175 | 25.4 | 76.2 | 76.2 | 50.8
SCSO | 3.175 | 3.175 | 3.175 | 3.175 | 38.1 | 12.7 | 63.5 | 25.4
PSA | 3.175 | 3.175 | 3.175 | 3.175 | 50.8 | 76.2 | 88.9 | 50.8
QIO | 3.175 | 3.175 | 3.175 | 3.175 | 76.2 | 50.8 | 50.8 | 38.1
BKA | 3.175 | 3.175 | 5.715 | 3.175 | 63.5 | 25.4 | 38.1 | 38.1
GJO | 3.175 | 3.175 | 3.175 | 5.715 | 25.4 | 38.1 | 63.5 | 50.8
WO | 5.715 | 3.175 | 3.175 | 3.175 | 50.8 | 88.9 | 50.8 | 25.4
NRBO | 3.175 | 3.175 | 8.255 | 3.175 | 38.1 | 50.8 | 76.2 | 76.2
GWO | 3.175 | 3.175 | 3.175 | 3.175 | 50.8 | 50.8 | 50.8 | 25.4
WOA | 5.715 | 3.175 | 3.175 | 3.175 | 38.1 | 12.7 | 38.1 | 12.7

Algorithms | x17 (xg4) | x18 (yp1) | x19 (yg1) | x20 (yg2) | x21 (yg3) | x22 (yg4)
MISBOA | 50.8 | 50.8 | 50.8 | 76.2 | 50.8 | 50.8
SBOA | 38.1 | 101.6 | 63.5 | 63.5 | 63.5 | 50.8
HHO | 38.1 | 76.2 | 38.1 | 50.8 | 25.4 | 38.1
PSO | 88.9 | 88.9 | 50.8 | 76.2 | 50.8 | 76.2
SCSO | 12.7 | 12.7 | 12.7 | 12.7 | 38.1 | 25.4
PSA | 63.5 | 12.7 | 63.5 | 50.8 | 76.2 | 63.5
QIO | 50.8 | 25.4 | 63.5 | 76.2 | 38.1 | 50.8
BKA | 38.1 | 25.4 | 38.1 | 63.5 | 63.5 | 63.5
GJO | 50.8 | 12.7 | 50.8 | 50.8 | 50.8 | 12.7
WO | 25.4 | 101.6 | 76.2 | 38.1 | 76.2 | 76.2
NRBO | 76.2 | 63.5 | 25.4 | 38.1 | 38.1 | 76.2
GWO | 50.8 | 25.4 | 76.2 | 76.2 | 50.8 | 76.2
WOA | 63.5 | 38.1 | 38.1 | 12.7 | 12.7 | 25.4
Table 14. The results of various algorithms for TSP.

Algorithms | The Best | The Average | The Worst | The Std | Rank
MISBOA | 7.3535 | 7.4945 | 7.6473 | 0.0829 | 1
SBOA | 7.3659 | 7.5073 | 7.6431 | 0.0941 | 2
HHO | 7.7392 | 8.2628 | 8.9568 | 0.3604 | 11
PSO | 7.6862 | 7.9092 | 8.1677 | 0.1372 | 8
SCSO | 8.1210 | 8.5124 | 8.9007 | 0.2366 | 13
PSA | 7.5284 | 7.8043 | 8.2786 | 0.2163 | 7
QIO | 7.5307 | 7.6551 | 7.7299 | 0.0653 | 3
BKA | 7.5619 | 7.7585 | 8.0241 | 0.1609 | 5
GJO | 7.8943 | 8.2646 | 8.5822 | 0.2020 | 12
WO | 7.5249 | 7.7156 | 7.8352 | 0.1005 | 4
NRBO | 7.6937 | 7.9233 | 8.2097 | 0.1570 | 9
GWO | 7.6019 | 8.0283 | 8.6042 | 0.3257 | 10
WOA | 7.5243 | 7.7743 | 7.9464 | 0.1348 | 6
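The fitness values in Table 14 measure tour length for the 80-city instance split between 2 salesmen. A generic sketch of evaluating such a solution, assuming the objective is the summed closed-loop Euclidean length of the salesmen's routes (hypothetical 4-city data; the paper's instance is not reproduced here):

```python
import math

def tour_length(cities: list[tuple[float, float]], route: list[int]) -> float:
    """Closed-loop Euclidean length of one salesman's route over city indices."""
    total = 0.0
    for i in range(len(route)):
        x1, y1 = cities[route[i]]
        x2, y2 = cities[route[(i + 1) % len(route)]]  # wrap back to the start
        total += math.hypot(x2 - x1, y2 - y1)
    return total

# Hypothetical 4-city instance partitioned between 2 salesmen.
cities = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
routes = [[0, 1], [2, 3]]
fitness = sum(tour_length(cities, r) for r in routes)
```

A metaheuristic such as MISBOA would search over the city-to-salesman partition and the visiting order within each route, minimizing this fitness.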
Table 15. The results of various algorithms for shape optimization problems of combined curves.

Algorithms | The Best | The Average | The Worst | The Std | Rank
MISBOA | 25,999.74440 | 25,999.74440 | 25,999.74442 | 5.3021E-06 | 1
SBOA | 25,999.76483 | 26,000.06963 | 26,000.85249 | 3.6657E-01 | 5
HHO | 25,999.74440 | 25,999.74494 | 25,999.74760 | 1.0275E-03 | 2
PSO | 26,025.83962 | 26,044.24748 | 26,074.01958 | 1.2059E+01 | 13
SCSO | 25,999.74463 | 25,999.78421 | 25,999.94426 | 6.3569E-02 | 4
PSA | 26,018.82949 | 26,028.23579 | 26,040.48786 | 7.5703E+00 | 12
QIO | 25,999.74440 | 26,000.12618 | 26,001.97965 | 6.8187E-01 | 7
BKA | 25,999.74534 | 26,001.61971 | 26,004.55399 | 1.9536E+00 | 8
GJO | 26,003.03461 | 26,009.58403 | 26,016.95296 | 5.5396E+00 | 10
WO | 25,999.74441 | 26,000.11604 | 26,001.53414 | 7.1334E-01 | 6
NRBO | 25,999.74441 | 25,999.76224 | 25,999.87262 | 4.0222E-02 | 3
GWO | 26,016.19124 | 26,026.47146 | 26,061.95977 | 1.3641E+01 | 11
WOA | 25,999.76399 | 26,006.30961 | 26,018.65573 | 6.5048E+00 | 9
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Qin, S.; Liu, J.; Bai, X.; Hu, G. A Multi-Strategy Improvement Secretary Bird Optimization Algorithm for Engineering Optimization Problems. Biomimetics 2024, 9, 478. https://doi.org/10.3390/biomimetics9080478