Article

A Multi-Strategy Improved Honey Badger Algorithm for Engineering Design Problems

School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan 232001, China
*
Author to whom correspondence should be addressed.
Algorithms 2024, 17(12), 573; https://doi.org/10.3390/a17120573
Submission received: 12 November 2024 / Revised: 5 December 2024 / Accepted: 11 December 2024 / Published: 13 December 2024

Abstract

A multi-strategy improved honey badger algorithm (MIHBA) is proposed to address the tendency of the honey badger algorithm to fall into local optima and converge prematurely when dealing with complex optimization problems. Introducing Halton sequences to initialize the population enhances population diversity and effectively avoids premature convergence. A water wave dynamic density factor is added to improve the search efficiency of the algorithm in the solution space. Lens opposition-based learning, built on the principle of lens imaging, is also introduced to enhance the ability of the algorithm to escape local optima. MIHBA achieves the best ranking on 23 test functions and 4 engineering design problems. The proposed improvements increase the convergence speed and accuracy of the algorithm, enhance its adaptability and solving ability for complex functions, and provide new ideas for solving complex engineering design problems.

1. Introduction

Optimization problems are ubiquitous in the field of engineering design, from mechanical design to structural engineering to electronic system design, where engineers often need to find optimal solutions under multiple design variables and constraints [1]. Traditional optimization methods, such as gradient descent [2] and Newton's method [3], often struggle to cope with complex, multimodal, high-dimensional engineering design problems. In recent years, biologically inspired optimization algorithms, such as the parrot optimizer (PO) [4], the coati optimization algorithm (COA) [5], and the snow ablation optimizer (SAO) [6], have attracted extensive attention for their efficiency and robustness in dealing with complex problems.
The hippopotamus optimization algorithm (HO) proposed by Mohammad Hussein Amiri et al. mimics the hippopotamus's defense and predator-evasion behaviors, demonstrates strong global search and local exploitation capabilities, and effectively solves four different engineering design challenges [7]. Jun Wang et al. proposed the black-winged kite algorithm (BKA), which is inspired by the predatory and migratory behaviors of the black-winged kite, has significant evolutionary and searching capabilities, and shows great advantages in engineering design problems [8]. The multi-strategies improved beluga whale optimization algorithm (MSBWO), proposed by Zhaoyong Fan et al., improves the algorithm's global search capability and convergence speed by introducing improvements such as chaos mapping initialization, an elite pooling strategy, and adaptive Lévy flight [9]. Mahdi Azizi et al. proposed the energy valley optimizer (EVO), inspired by the principles of stability and particle decay in physics; EVO is effective and superior in solving complex optimization problems [10]. The gold rush optimizer (GRO) proposed by Kamran Zolfi is inspired by the behavior of prospectors exploring for gold; GRO searches for an optimal solution by simulating three key processes: migration, collaboration, and panning for gold. Its application to several benchmark test functions and real-world engineering problems has shown that it has significant advantages in global search and engineering optimization [11]. In addition, Peng Wang et al. proposed a quantum intelligent optimization algorithm framework for global optimization problems, as well as a quantum multi-objective optimization learning model and deep neural network architecture search and parameter optimization techniques for quantum intelligent optimization [12]. The zebra optimization algorithm (ZOA), proposed by Eva Trojovská et al., is inspired by the foraging and defensive behaviors of zebras and is applied to a wide range of optimization problems with different exploration and exploitation requirements [13]. Mohamed Abdel-Basset et al. proposed the nutcracker optimization algorithm (NOA), which mimics the behavior of the nutcracker, for solving global optimization and engineering design problems [14]. Laith Abualigah et al. proposed the reptile search algorithm (RSA), inspired by crocodile hunting behavior; they introduced two mathematical models for updating the location of candidate solutions and implemented four metrics to qualitatively study the proposed RSA [15]. Amin Abdollahi Dehkordi et al. proposed nonlinear-based chaotic Harris hawks optimization (NCHHO), which uses chaotic and nonlinear control parameters to improve the optimization performance of HHO and solve the Internet of Vehicles routing problem [16]. Bibekananda Jena et al. proposed a new hybrid differential squirrel search optimization algorithm (DSSE) for solving global optimization problems by combining the search methodology of the squirrel search algorithm with the differential evolutionary optimization process [17]. Afshin Faramarzi et al. developed a novel optimization algorithm called the equilibrium optimizer (EO), inspired by the mass balance model, whose design includes highly exploratory and exploitative search mechanisms that stochastically change the solution [18].
The multipopulation ensemble particle swarm optimizer (MPEPSO), proposed by Ziang Liu et al., enhances the efficiency and accuracy of the algorithm by combining different PSO search strategies and dynamically adjusting the allocation of reward subpopulations, which better solves engineering design problems [19]. Weiguo Zhao et al. modeled three unique foraging strategies of manta rays and developed manta ray foraging optimization (MRFO), an algorithm with few adjustable parameters that is easy to implement [20]. These algorithms show great potential and application in the field of engineering design. The honey badger algorithm (HBA), as an emerging optimization algorithm, provides new ideas for solving engineering design problems with its unique search mechanism and powerful global search capability [21].
Ajay Kumar Bansal et al. optimized a hybrid nanogrid system (HNGS) for a future residential area of Mahendragarh, Haryana, India, using the honey badger algorithm (HBA) and obtained higher-quality solutions [22]. Rajendran Arul Jose et al. combined the HBA with a gradient boosting decision tree (GBDT) to improve the prediction accuracy of the control parameters and to realize real-time adjustment of the parameters to meet changes in grid demand [23]. Allan J Wilson et al. proposed an optimization framework based on a binarized spiking neural network (BzSpNN) and the honey badger algorithm for cluster head selection in WSNs to minimize energy consumption and extend the network life cycle [24]. Meng Jiang et al. proposed a combined method (CGH-GTO), based on the improved grey wolf optimizer (IGWO), the honey badger algorithm (HBA), and the gorilla troops optimizer (GTO), for the identification of PV model parameters [25]. Xinyu Ren et al. constructed a two-layer planning model for combined cooling, heating, and power (CCHP) and developed the multi-objective honey badger algorithm (MOHBA), which improves the economic, energy, and environmental performance of the CCHP system [26]. Boxiong Wang et al. proposed an improved binary honey badger algorithm based on Lévy flights for IoT traffic datasets, which improves the performance of HBA for IoT device identification in the feature selection problem [27]. Siwen Zhang et al. proposed α4CycρHBA, an algorithm that significantly improves global search capability and convergence speed by introducing density factors based on primitive functions and mathematical spirals in polar coordinate systems [28]. Peixin Huang et al. proposed an improved honey badger algorithm called ODEHBA, which excels in solving complex numerical problems, engineering design problems, and routing problems for vehicular networking [29]. Jinrui Zhang et al. proposed a back propagation neural network (BPNN) model based on HBA optimization for predicting the migration time of toxic fumes induced by blasting in underground mines, to further guide and improve ventilation design [30]. The honey badger algorithm excels in solving optimization problems with its simplicity and efficiency and is especially suitable for complex, high-dimensional search spaces. Although the HBA shows some advantages in optimization problems with its simple structure and clear ideas, it also has some drawbacks. The HBA is weak in exploration, which makes it difficult for the algorithm to explore the solution space effectively in the early stage of the search and thus limits its global search ability. The algorithm also does not balance exploration and exploitation well, which degrades its convergence performance and the quality of the final solution. The HBA additionally tends to fall into local optimal solutions and requires more iterations to achieve good optimization results, which is a challenge for computational resources. In order to better solve engineering design problems, this paper improves the original HBA.
In this paper, we propose a multi-strategy improved honey badger algorithm (MIHBA) to address these shortcomings. Introducing a Halton sequence to initialize the population generates point sets that are uniformly distributed in multidimensional space, which improves the diversity of the algorithm and avoids premature convergence. The water wave dynamic density factor performs an efficient search in the solution space by modeling the propagation, refraction, and breaking-wave motion mechanisms of water waves; introducing it into HBA enhances both the global and local search abilities of the algorithm, thus improving the convergence speed and the quality of the solutions. Lens opposition-based learning is introduced as a new mechanism for escaping local optima: it helps the algorithm jump out when it falls into a local optimum and continue the search toward the global optimum. The main contributions of this paper are as follows:
  • The introduction of Halton sequence to initialize the population improves the initial diversity of the population and helps to avoid early convergence of the algorithm to local optimal solutions;
  • The combination of the dynamic density factor of the water waves allows the algorithm to explore a wider search range and improves the adaptability and solving ability for complex functions;
  • The learning strategy based on the lens imaging principle improves the ability of the algorithm to escape from local optimal solutions and enhances the global search capability;
  • The proposed algorithm is tested for performance on 23 benchmark test functions and applied to four engineering design problems, where the algorithm shows great advantages.

2. Algorithm Design

2.1. Honey Badger Algorithm

The honey badger algorithm (HBA) is a meta-heuristic algorithm that solves optimization problems by simulating the dynamic search behavior of the honey badger foraging. The main steps of the HBA include initialization, defining and updating the density factor, the mining phase, and the honey harvesting phase.
In the initialization phase, the algorithm initializes a population of $N$ honey badgers within the set boundaries, as shown in Equation (1).
$x_i = lb_i + r_1 \times (ub_i - lb_i), \quad r_1 \in (0, 1)$,
where $x_i$ is the location of the $i$-th of the $N$ candidate individuals, and the upper and lower bounds of the search space are denoted by $ub$ and $lb$, respectively.
The strength of the honey badger's sense of smell is related to the prey's concentration and the distance to the prey. Let $I$ be the odor intensity of the prey: the stronger the smell, the faster the honey badger moves, as shown in Equation (2).
$I_i = r_2 \times \dfrac{S}{4\pi d_i^2}, \quad S = (x_i - x_{i+1})^2, \quad d_i = x_{prey} - x_i, \quad r_2 \in (0, 1)$,
where $S$ represents the source strength, $d_i$ represents the distance between the current honey badger and the target prey, and $r_2$ is a random number in $(0, 1)$.
The density factor controls the time-varying randomization and ensures the balance between the exploration phase and the exploitation phase; it is defined in Equation (3).
$\alpha = C \times \exp(-t / t_{\max})$,
where $t$ is the current iteration, $t_{\max}$ is the maximum number of iterations, and $C \ge 1$ is a constant with a default value of 2.
During the digging phase, honey badgers use the strength of their sense of smell to locate and dig for prey. In this phase, the honey badger's movement resembles a cardioid (heart-shaped) curve, and the motion is modeled in Equation (4).
$x_{new} = x_{prey} + F \times \beta \times I \times x_{prey} + F \times r_3 \times \alpha \times d_i \times \left| \cos(2\pi r_4) \times \left[ 1 - \cos(2\pi r_5) \right] \right|$,
where $x_{new}$ represents the updated position and $x_{prey}$ is the global optimal position in the current state. $\beta$ represents the hunting ability of the honey badger, with a default value of 6, and $r_3$, $r_4$, $r_5$ are three random numbers in $(0, 1)$. In addition, $F$ denotes the flag that controls the search direction, as determined by Equation (5):
$F = \begin{cases} 1, & \text{if } r_6 \le 0.5 \\ -1, & \text{otherwise} \end{cases}$
In the honey phase, the honey badger follows the honeyguide bird to find the hive; this phase is similar to the local search of the algorithm and is modeled in Equation (6).
$x_{new} = x_{prey} + F \times r_7 \times \alpha \times d_i, \quad r_7 \in (0, 1)$.
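For concreteness, the following Python sketch translates the HBA update rules above into code. The tie between Equations (2), (4), and (6) follows the description in this section, while treating $d_i$ as a vector toward the prey and the choice of random generator are implementation assumptions, not details taken from the paper.

```python
import numpy as np

def hba_step(X, x_prey, alpha, beta=6.0, rng=None):
    """One iteration of the original HBA position update, sketching
    Equations (2) and (4)-(6)."""
    rng = rng or np.random.default_rng()
    N, dim = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        # Eq. (2): source strength from consecutive individuals, distance to prey
        S = np.sum((X[i] - X[(i + 1) % N]) ** 2)
        d = x_prey - X[i]
        I = rng.random() * S / (4.0 * np.pi * np.sum(d ** 2) + 1e-30)
        F = 1.0 if rng.random() <= 0.5 else -1.0  # direction flag, Eq. (5)
        r3, r4, r5 = rng.random(3)
        if rng.random() < 0.5:
            # Digging phase, Eq. (4): cardioid-like motion around the prey
            X_new[i] = (x_prey + F * beta * I * x_prey + F * r3 * alpha * d
                        * abs(np.cos(2 * np.pi * r4) * (1 - np.cos(2 * np.pi * r5))))
        else:
            # Honey phase, Eq. (6): local search guided by the prey position
            X_new[i] = x_prey + F * rng.random() * alpha * d
    return X_new
```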

2.2. Proposed Algorithm

2.2.1. Halton Sequence Initializes the Population

In the original honey badger algorithm, a highly random population is obtained by initializing the population with the rand function. However, this randomization does not necessarily ensure that individuals are uniformly distributed throughout the solution space, which may slow the population search and reduce the diversity of the algorithm. To solve this problem, this paper introduces the Halton sequence, which generates quasi-random numbers, to initialize the population [31]. The low-discrepancy nature of the Halton sequence ensures that individuals are more evenly distributed throughout the solution space, thus increasing diversity in the initialization phase of the algorithm. This helps individuals quickly discover the locations of high-quality solutions, speeds up convergence, and improves the convergence accuracy of the algorithm.
A two-dimensional Halton sequence is realized by selecting two primes as base quantities and forming a series of uniformly distributed, non-repeating points by repeatedly subdividing the unit interval with respect to these two bases. This process is described by the mathematical model in Equations (7)–(9).
$n = \sum_{i=0}^{m} b_i \times p^i = b_m \times p^m + \cdots + b_1 \times p^1 + b_0$,
$\theta(n) = b_0 p^{-1} + b_1 p^{-2} + \cdots + b_m p^{-m-1}$,
$H(n) = [q_1(n), q_2(n)]$,
where $n \in [1, N]$ is any integer, $p$ is a prime number greater than or equal to 2 that serves as the base of the Halton sequence, $b_i \in \{0, 1, 2, \ldots, p-1\}$ is a constant, $\theta(n)$ is the defined sequence function, and $H(n)$ is the resulting Halton sequence. This sequence replaces Equation (1) of the original algorithm for population initialization. As shown in Figure 1, Figure 1a shows the random distribution map, Figure 1b shows the Halton distribution map, and Figure 1c,d show the histograms of particle distributions corresponding to the two initializations, respectively. It can be clearly seen from Figure 1 that Halton initialization yields a more uniform particle distribution than random initialization.
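A minimal Python sketch of this initialization is given below; the radical-inverse routine corresponds to Equations (7)–(9), and using the first dim primes as bases is a standard generalization (an assumption here) of the two-dimensional case described in the text.

```python
import numpy as np

def halton_init(N, dim, lb, ub):
    """Population initialization with a Halton sequence (Equations (7)-(9))."""
    def first_primes(k):
        primes, c = [], 2
        while len(primes) < k:
            if all(c % p for p in primes):
                primes.append(c)
            c += 1
        return primes

    def radical_inverse(n, base):
        # theta(n): reflect the base-p digits of n about the radix point
        inv, f = 0.0, 1.0 / base
        while n > 0:
            inv += f * (n % base)
            n //= base
            f /= base
        return inv

    bases = first_primes(dim)
    H = np.array([[radical_inverse(n, bases[j]) for j in range(dim)]
                  for n in range(1, N + 1)])
    return lb + H * (ub - lb)  # map the unit hypercube onto [lb, ub]

# Example: 30 individuals in a 2-D search space over [-100, 100]^2
X0 = halton_init(30, 2, np.array([-100.0, -100.0]), np.array([100.0, 100.0]))
```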

2.2.2. Water Wave Dynamic Density Factor

In HBA, the value of the density factor controls the balance of the algorithm between the exploration and exploitation phases. The original decreasing mechanism often leads the algorithm into a local optimum, making it difficult to effectively cover and explore the entire search space and thus limiting the accuracy of the algorithm in the exploitation stage, especially when dealing with complex, high-dimensional problems. To solve this problem, researchers have proposed a variety of improved convergence factors: reference [32] proposes a convergence factor based on sinusoidal functions, which can help the algorithm search for the optimal value in the global range, as shown in Equation (10). Reference [33] presents a power-of-five nonlinear convergence factor, which can enhance the search ability of the algorithm, as shown in Equation (11). Reference [34] proposes a convergence factor that utilizes the uncertainty property of water wave dynamics, which improves the algorithm's adaptability and solving ability for complex functions, as shown in Equation (12). Figure 2 illustrates these four convergence factors, where (a) is the original convergence factor of Equation (3), (b) is the sinusoidal convergence factor of Equation (10), (c) is the power-of-five convergence factor of Equation (11), and (d) is the water wave dynamic density factor of Equation (12).
$\alpha = 1 + \sin(\pi/2 + \pi \times t / t_{\max})$,
$\alpha = 2 - 2 \times (t / t_{\max})^5$,
$\alpha = 2 \times rand \times S \times \exp(-t / t_{\max})^3$,
where $t_{\max}$ is the maximum number of iterations, $t$ is the current iteration, and $S$ is a random value with $S \in [0, 1]$.
HBA often lacks accuracy when dealing with complex multimodal functions. To solve this problem, the water wave dynamic density factor is introduced. The uncertain fluctuation characteristics of water wave dynamics dynamically adjust the density factor, realizing adaptive switching of the algorithm between the exploration and exploitation phases. This self-adjustment mechanism helps to improve the overall search efficiency of the algorithm and avoid falling into local optima. The multi-wave interference characteristic of water wave dynamics closely resembles the structure of complex multimodal optimization problems; using this characteristic to design the density factor helps the algorithm cope better with complex multimodal problems and reduces the risk of falling into local optima. Compared with traditional methods such as the sinusoidal and power-of-five convergence factors, the water wave dynamic density factor better captures the uncertainty of complex environments, achieves an adaptive balance between exploration and exploitation, and improves the adaptability of the algorithm in complex environments.
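The four schedules can be compared directly in code. The sketch below assumes the reconstruction of Equation (12) given above; in particular, the sign inside the exponential is inferred from the decreasing behavior of the other factors, so this is illustrative rather than definitive.

```python
import numpy as np

def alpha_original(t, t_max, C=2.0):
    return C * np.exp(-t / t_max)                       # Eq. (3)

def alpha_sine(t, t_max):
    return 1.0 + np.sin(np.pi / 2 + np.pi * t / t_max)  # Eq. (10)

def alpha_power5(t, t_max):
    return 2.0 - 2.0 * (t / t_max) ** 5                 # Eq. (11)

def alpha_water_wave(t, t_max, rng=None):
    # Eq. (12): S in [0, 1] makes the factor fluctuate under a decaying
    # envelope; the sign in the exponential follows the reconstruction above
    rng = rng or np.random.default_rng()
    S = rng.random()
    return 2.0 * rng.random() * S * np.exp(-t / t_max) ** 3
```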

2.2.3. Lens Opposition-Based Learning

Lens opposition-based learning (LOBL) [35] combines the concept of opposition-based learning (OBL) [36] with the scientific principles of lens imaging, aiming to improve the search capability of the algorithm. The LOBL strategy draws on the duality of opposition and the principle of focal imaging in optical systems, and it exhibits more efficient performance than OBL in finding optimal or near-optimal solutions.
In optical imaging, if an object is placed beyond twice the focal length of a lens, an inverted, reduced real image is formed on the opposite side of the lens, between one and two focal lengths. As shown in Figure 3, the midpoint of the interval $[lb, ub]$ is $O$. Consider the $y$-axis as a convex lens. An object of height $h$ is located at point $x$, at twice the focal length of the lens. When this object is imaged through the lens, the coordinates of the vertex of its image become $(x^*, h^*)$. The relationship is given in Equation (13).
$\dfrac{(lb + ub)/2 - x}{x^* - (lb + ub)/2} = \dfrac{h}{h^*}$.
Setting $k = h / h^*$, Equation (13) can be simplified to Equation (14):
$x^* = \dfrac{lb + ub}{2} + \dfrac{lb + ub}{2k} - \dfrac{x}{k}$,
where $k$ is a dynamically varying coefficient that increases with the iteration number $t$. Its growth rate is controlled by $(t / t_{\max})^{1/2}$ and raised to the 10th power to simulate the focusing effect of the lens, as shown in Equation (15); the new position $x_{new}(t+1)$ is given in Equation (16).
$k = \left( 1 + (t / t_{\max})^{1/2} \right)^{10}$,
$x_{new}(t+1) = \dfrac{lb + ub}{2} + \dfrac{lb + ub}{2k} - \dfrac{x_{prey}(t)}{k}$,
where $t_{\max}$ is the maximum number of iterations, $t$ is the current iteration, $x_{prey}$ is the position of the current best individual, $x_{new}$ is the updated position, and $ub$ and $lb$ are the upper and lower bounds, respectively.
The benefit of introducing LOBL is that it exploits the geometric relationship between the current solution and the optimal solution instead of relying on a completely random search, which helps to improve the global exploration capability of the algorithm. By dynamically adjusting the parameter $k$, a balance between exploration and exploitation can be achieved, improving the convergence speed and stability of the algorithm. The new solution-update mechanism makes better use of the geometric properties of the problem itself and improves solution accuracy. Moreover, the adaptive water wave dynamic density factor can adjust the algorithm's behavior according to the characteristics of the problem; the two update strategies are complementary and together improve the overall optimization performance. The overall flowchart of the algorithm is shown in Figure 4.
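As a sketch, the LOBL step of Equations (15) and (16) can be written as follows; applying it to the global best position matches Algorithm 1 below, while clipping the opposite point back into $[lb, ub]$ is an implementation assumption.

```python
import numpy as np

def lens_opposition(x_prey, lb, ub, t, t_max):
    """Lens opposition-based learning, a sketch of Equations (15)-(16)."""
    k = (1.0 + np.sqrt(t / t_max)) ** 10          # lens scaling factor, Eq. (15)
    x_op = (lb + ub) / 2.0 + (lb + ub) / (2.0 * k) - x_prey / k  # Eq. (16)
    return np.clip(x_op, lb, ub)                  # keep the point inside the bounds
```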
The pseudocode of the algorithm is shown below (Algorithm 1).
Algorithm 1. Pseudocode of MIHBA
Set initial parameters t_max, N, β, C.
Initialize the population positions using the Halton sequence, as shown in Equations (7)–(9).
Evaluate the fitness of each honey badger position x_i using the objective function and assign it to f_i, i ∈ [1, 2, ..., N].
Save the best position x_prey and assign its fitness to f_prey.
while t ≤ t_max do
  Update the density factor α using Equation (12).
  for i = 1 to N do
    Calculate the intensity I_i using Equation (2).
    if r < 0.5 then
      Update the position x_new using Equation (4).
    else
      Update the position x_new using Equation (6).
    end if
    Update the global best position using lens opposition-based learning, as described in Equation (16).
    if f_new ≤ f_i then
      Set x_i = x_new and f_i = f_new.
    end if
    if f_new ≤ f_prey then
      Set x_prey = x_new and f_prey = f_new.
    end if
  end for
end while
Return x_prey
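Putting the pieces together, a compact Python driver for Algorithm 1 might look as follows. It assumes the halton_init, alpha_water_wave, hba_step, and lens_opposition functions sketched earlier; the greedy acceptance details and boundary clipping are implementation assumptions rather than prescriptions from the paper.

```python
import numpy as np

def mihba(f, lb, ub, N=30, t_max=500, beta=6.0, seed=0):
    """Compact MIHBA driver following Algorithm 1."""
    rng = np.random.default_rng(seed)
    X = halton_init(N, len(lb), lb, ub)             # Halton initialization
    fit = np.apply_along_axis(f, 1, X)
    best = int(np.argmin(fit))
    x_prey, f_prey = X[best].copy(), fit[best]
    for t in range(1, t_max + 1):
        alpha = alpha_water_wave(t, t_max, rng)     # density factor, Eq. (12)
        X_new = np.clip(hba_step(X, x_prey, alpha, beta, rng), lb, ub)
        fit_new = np.apply_along_axis(f, 1, X_new)
        better = fit_new <= fit                     # greedy per-individual update
        X[better], fit[better] = X_new[better], fit_new[better]
        if fit.min() <= f_prey:                     # refresh the global best
            x_prey, f_prey = X[fit.argmin()].copy(), fit.min()
        x_op = lens_opposition(x_prey, lb, ub, t, t_max)  # LOBL step, Eq. (16)
        f_op = f(x_op)
        if f_op <= f_prey:
            x_prey, f_prey = x_op, f_op
    return x_prey, f_prey

# Example: sphere function in 30 dimensions
x_best, f_best = mihba(lambda x: np.sum(x ** 2),
                       -100.0 * np.ones(30), 100.0 * np.ones(30))
```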

2.3. Complexity Analysis

2.3.1. Time Complexity

The time complexity of the algorithm mainly depends on the population size $N$ and the problem dimension $dim$; the time complexity of the Halton initialization is $O(N \times dim)$. In each iteration, the position update of each individual requires computing a new density factor $\alpha$ and a new position $x_{new}$. The cost of this part depends on the population size $N$, the dimension $dim$, and the cost of computing $\alpha$ and $x_{new}$. Computing $\alpha$ takes $O(1)$ time because it involves a constant number of simple operations, while updating $x_{new}$ takes $O(dim)$ time because every dimension must be computed. Therefore, the total time complexity of each iteration is $O(N \times dim)$. The lens opposition-based learning strategy updates the global optimal position at the end of each iteration, and its time complexity is $O(dim)$.
Therefore, the time complexity of MIHBA is $O(t_{\max} \times N \times dim)$, which is equal to the time complexity of the HBA.

2.3.2. Space Complexity

Space complexity is a key measure of the storage required for an algorithm to run. Storing the honey badger population positions $X$ requires $N \times dim$ storage units. MIHBA also needs to store variables such as the global optimal position $x_{prey}$, the fitness values, and the current iteration number $t$, which require $O(dim)$ space, while the density factor $\alpha$ and other temporary variables require $O(1)$ space. Thus, the overall space complexity of the algorithm is $O(N \times dim)$, equal to that of the original HBA. As a result, MIHBA adds no additional storage requirements.

3. Experiments

In order to examine the performance of the improved HBA, this study evaluated MIHBA on 23 standard test functions. These test functions are grouped into three categories: unimodal functions, multimodal functions, and fixed-dimension functions. Finally, MIHBA was subjected to ablation experiments and compared with five meta-heuristics.

3.1. Experimental Setup and Evaluation Criteria

To ensure the accuracy and reproducibility of the experiments, all tests were conducted in the same environment: a Windows 10 operating system, an Intel® Core™ i5-12490F 3.0 GHz processor, and 16 GB of RAM. All algorithms were implemented and tested in MATLAB R2020b. The benchmark functions are listed in Table 1, including the dimension $dim$, the value domain (the lower and upper bounds of the search space), and the known global optimum $F_{\min}$ of each function. The performance of the optimization algorithms is evaluated using the mean, the standard deviation, and the rank.
Mean: used to summarize the central tendency of all values within a data set. The overall performance and reliability of an algorithm can be assessed by repeatedly running it in a series of independent experiments and averaging the results. The mean is calculated as follows:
$\text{Mean} = \frac{1}{S} \sum_{i=1}^{S} C_i$,
where $S$ is the number of runs and $C_i$ is the result of the $i$-th independent experiment.
Standard deviation (SD) reflects the degree to which individual values in the data set deviate from the mean. In the context of algorithm optimization and machine learning, the standard deviation is a key metric for evaluating the consistency of the results of multiple runs of an algorithm, reflecting its stability and reliability in different test environments. It is calculated as follows:
$SD = \sqrt{ \frac{1}{S} \sum_{i=1}^{S} \left( C_i - \frac{1}{S} \sum_{i=1}^{S} C_i \right)^2 }$,
where $S$ is the number of runs and $C_i$ is the result of the $i$-th independent experiment.
Rank is based on the Friedman test [37] and is used to rank all algorithms. The ranking is determined based on the average ranking of the algorithms in each test; if multiple algorithms perform the same in a particular test, they will share the average ranking for that test. In this way, the performance of the algorithms on all the test functions can be considered together and the overall ranking of each algorithm can be derived accordingly. Specifically, “rank-count” represents the cumulative sum of the ranks, “ave-rank” represents the average of the ranks, and “overall-rank” is the final ranking of the algorithm across all comparisons.
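The rank bookkeeping can be reproduced with a few lines of Python; the results matrix below is a hypothetical placeholder, and scipy's default average-rank tie handling matches the shared-rank rule described above.

```python
import numpy as np
from scipy.stats import rankdata

# results[i, j]: mean error of algorithm j on test function i (hypothetical data)
results = np.array([[1e-30, 2e-05, 3e-02],
                    [0.0,   0.0,   4e-01],
                    [1e-09, 1e-03, 1e-03]])

ranks = np.vstack([rankdata(row) for row in results])  # ties share the average rank
rank_count = ranks.sum(axis=0)     # cumulative sum of ranks per algorithm
ave_rank = ranks.mean(axis=0)      # average rank; lower is better
overall_rank = rankdata(ave_rank, method='min')        # final ordering
```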

3.2. Test Functions

In this study, we chose 23 benchmark test functions for the experiments. These test functions are categorized as unimodal, multimodal, and fixed-dimension, as shown in Table 1, where $f_1$–$f_7$ are unimodal test functions, $f_8$–$f_{13}$ are multimodal test functions, and $f_{14}$–$f_{23}$ are fixed-dimension test functions. A unimodal function has only one global optimal solution and no misleading local optima. A multimodal function, on the other hand, contains multiple local optima alongside a unique global optimum, testing the efficiency of the algorithm in exploring the diversity of the solution space and in the exploitation phase. Fixed-dimension test functions have their dimensions preset in the experiments and do not allow adjustment, ensuring consistency and comparability of the tests. Together, these functions range from simple to complex and help to comprehensively assess the generalization ability of the algorithms.

3.3. Sensitivity Analysis of P and t

In this study, we explore how the performance of the metaheuristic is affected by the split of the fixed budget of fitness evaluations ($FES$). The total number of fitness evaluations was fixed at 15,000. Even with the same total, different combinations of population size and iteration count ($P/t$) may make a significant difference to performance. For this purpose, three combinations were tested: 15/1000 (Set-1), 30/500 (Set-2), and 60/250 (Set-3), to assess their specific impact on MIHBA performance.
As shown in Table 2, we compare all the functions, including 7 unimodal functions, 6 multimodal functions, and 10 fixed-dimension functions. By computing the rank-count and ave-rank, we determined the overall-rank of each $P/t$ group. For a $P/t$ value of 15/1000, the rank-count is 50; for 30/500, it is 43; and for 60/250, it is 45. Overall, Set-2 ranks best among the three groups. Therefore, with the $FES$ fixed, this paper chooses Set-2 for the experiments.

3.4. Results of Comparative Experiments

Evolutionary algorithms and swarm intelligence are the main sources of the comparison algorithms in this paper. The genetic algorithm (GA) is the most widely used evolutionary algorithm, inspired by Darwin's theory of evolution [38]. Yuan Lyu et al. proposed a multi-unit optimal configuration strategy based on overlap analysis and an improved GA for optimizing chiller system configuration, which improves the annual energy-efficiency ratio by maximizing the overlap index to obtain the optimal configuration directly from the candidate chiller units [39]. The non-monopoly search (NO) optimization algorithm is an intelligent optimization algorithm based on natural selection and population evolution, which searches for optimal solutions by simulating the competition and collaboration of species in nature [40]. Among swarm intelligence optimization algorithms, the most classical is the PSO algorithm developed by Kennedy and Eberhart, which traces its search path by jointly considering each particle's own best solution and the best solution found during the iterative process [41]. Qin Hu et al. first proposed an asymmetric frame damage identification method based on an improved particle swarm algorithm for updating a Bayesian model, which overcame the premature convergence of the original particle swarm algorithm in the damage identification and condition assessment of real engineering structures, but it is not suited to the problem addressed in this paper [42]. The dung beetle optimization (DBO) algorithm is a relatively new swarm intelligence algorithm that simulates the behavior of dung beetle populations in nature and constructs a search framework based on the rolling-laying-feeding-stealing model for solving global optimization problems [43]. In this paper, two widely used meta-heuristic algorithms, GA, inspired by the theory of biological evolution, and PSO, which simulates swarm intelligence, are selected for comparison, together with two differently inspired algorithms proposed in recent years, NO and DBO. Such a combination reflects the diversity of metaheuristic algorithms and makes the experimental results more representative and the conclusions more convincing. Table 3 lists the parameter settings of the comparison algorithms.
In the experiments comparing the performance of the HBA with other optimization algorithms, we used a standardized experimental framework to ensure fairness. The population size was uniformly set to 30 individuals, and the maximum number of iterations was limited to 500. To ensure the stability and credibility of the results, 50 independent runs were executed for each algorithm. In this way, we were able to collect and analyze the mean and standard deviation of each objective function to comprehensively assess the performance and efficiency of the different algorithms. Table 4 shows the results of MIHBA and the other algorithms over 50 independent runs on the 23 test functions.
Based on the numerical results in Table 4, MIHBA exhibits better means and standard deviations than the other compared algorithms on $f_1$–$f_4$, $f_7$–$f_{11}$, and $f_{17}$. MIHBA has the best mean of the six compared algorithms on $f_{18}$ and $f_{21}$–$f_{23}$, and on $f_{20}$ its standard deviation is the best. For $f_{15}$ and $f_{19}$, the improvements to HBA, while effective, are not optimal among all the compared algorithms; on $f_5$, $f_6$, $f_{12}$–$f_{14}$, and $f_{16}$, the improvements do not achieve better results. In terms of rankings, MIHBA performs well on the majority of the test functions, ranking first overall among the six compared algorithms. Overall, the numerical results on the 23 test functions show that MIHBA has better stability and a stronger ability to jump out of local optima.

3.5. Results of Ablation Experiments

Experiments with MIHBA and variants of HBA were performed on the 23 test functions described above; the nomenclature of the ablation algorithms is given in Table 5. HBA1 is the HBA with Halton sequence initialization; HBA2 replaces the original density factor with the water wave dynamic density factor; HBA3 incorporates lens opposition-based learning; HBA12 adopts the Halton sequence and the water wave dynamic density factor; HBA13 adopts the Halton sequence and lens opposition-based learning; HBA23 adopts the water wave dynamic density factor and lens opposition-based learning; and MIHBA is the algorithm improved in this paper.
In evaluating the performance of the HBA variants, we used a uniform experimental setup to ensure fairness: the population size was set to 30, the maximum number of iterations was limited to 500, and each algorithm was run independently 50 times to obtain stable and reliable results. This allowed the mean and standard deviation of each objective function to be collected for a comprehensive comparison of the efficiency and effectiveness of the different algorithms. The results of MIHBA and the ablation algorithms over 50 independent runs on the 23 test functions are given in Table 6.
Based on the numerical results in Table 6, MIHBA performs well on most of the benchmark functions, especially on $f_1$–$f_4$, $f_8$–$f_{11}$, $f_{15}$, and $f_{17}$. After adding Halton sequence initialization, HBA1 improves the convergence accuracy only on some test functions, such as $f_3$, $f_5$–$f_8$, $f_{10}$, $f_{12}$, $f_{15}$–$f_{16}$, $f_{18}$, and $f_{20}$–$f_{22}$. With the water wave dynamic density factor, HBA2 achieves higher convergence accuracy on the test functions except $f_5$–$f_6$, $f_8$, $f_{12}$–$f_{16}$, $f_{18}$, and $f_{22}$–$f_{23}$, and significantly improves the convergence speed. HBA3 integrates lens opposition-based learning, which obtains better accuracy on all test functions other than $f_{13}$ and enhances the global search capability. Each of the three strategies has its own strengths, and fusing them gives MIHBA the best overall-rank among all the HBA variants. Overall, MIHBA shows better stability and a stronger ability to jump out of local optima than its ablation counterparts.

3.6. Friedman Test

In an in-depth analysis of the comprehensive performance of the 12 algorithms, the Friedman test is a powerful statistical tool to rank the performance of these algorithms [44]. In this way, we can not only obtain a relative ranking of the algorithms’ performance but also identify the algorithms that excel in a particular benchmark test. With Friedman’s test, we can observe the performance of the MIHBA against the other 11 algorithms on 23 benchmark functions, and these detailed ranking data can be found in Table 7.
Based on the data in Table 7, we can clearly observe that the MIHBA excels among all the algorithms involved in the comparison. Among them, MIHBA has a rank-count of 41.5 and an ave-rank of 2.7667. This result indicates that MIHBA tops the list in terms of comprehensive performance among the 23 benchmarking functions involved. The results of the statistical analysis of Friedman’s test further corroborate the superiority of the MIHBA over other algorithms in optimization problems.

3.7. Wilcoxon Signed-Rank Test

In this study, we used the Wilcoxon signed-rank test [45], a non-parametric statistical method, to carefully compare the performance differences between MIHBA and the other five algorithms. A significance level $P$ of 0.05 was set. MIHBA is considered significantly different from a comparison algorithm when the $p$-value is less than 0.05, i.e., the significance symbol $h$ is 1. When the $p$-value is greater than 0.05, i.e., $h$ is 0, the two are considered not significantly different in performance. When the $p$-value is NaN, the algorithms are considered to perform similarly. After the Wilcoxon test on the 23 benchmark functions, the detailed comparison results of MIHBA with the other algorithms are displayed in Table 8.
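The $p$/$h$ bookkeeping corresponds to the following sketch; the run data here are hypothetical stand-ins, and with scipy's default settings a comparison in which all paired differences are zero raises an error, which we map to the NaN case reported in the table.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
runs_a = rng.normal(0.0, 1e-3, 50)   # stand-in for 50 MIHBA results
runs_b = rng.normal(0.5, 1e-1, 50)   # stand-in for 50 results of a competitor

try:
    stat, p = wilcoxon(runs_a, runs_b)
    h = int(p < 0.05)                # 1: significant difference at the 0.05 level
except ValueError:                   # all paired differences are zero
    p, h = float('nan'), 0           # reported as NaN: similar performance
```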
Table 8 shows the results of the Wilcoxon signed-rank test. In the comparison with GA, all $p$-values are less than 0.05 and all $h$-values equal 1, indicating a significant performance difference between the two algorithms. In the comparisons with DBO and PSO, $h$-values of 0 occur only rarely, confirming the significant difference between MIHBA and these two algorithms. In the comparisons with NO and the original HBA, the $p$-values are NaN and the $h$-values are 0 on three test functions, on which the algorithms perform similarly; on the remaining functions, MIHBA differs significantly from both. In summary, MIHBA is significantly different from the five algorithms with which it was compared.

3.8. Convergence Analysis

Convergence analysis is a key metric when evaluating algorithm performance. During the iteration process, the convergence analysis can determine whether the algorithm can effectively approximate the optimal solution or reach a steady state. In this paper, we set an upper limit of 500 iterations, a population size of 30 individuals, and specify that all algorithms are evaluated for fitness no more than 15,000 times. In order to visualize the convergence behavior of the algorithms, we plotted Figure 5, Figure 6 and Figure 7, which show the average convergence curves of the MIHBA and other comparative algorithms on different types of test functions, respectively. Specifically, Figure 5 covers 7 single-peak test functions, Figure 6 contains 6 multimodal test functions, while Figure 7 shows 10 fixed-dimension test functions. These plots provide us with a visual comparison of the convergence performance of the algorithms.
Figure 5 shows the average convergence curves of MIHBA and the comparison algorithms on $f_1$ to $f_7$. Except on $f_7$, the MIHBA and HBA23 curves lie at the bottom left of each graph, indicating that MIHBA has very fast convergence and high accuracy. On $f_7$, MIHBA ranks 4th among the 12 compared algorithms, but it converges to a good value by the 5th iteration, again indicating fast convergence.
Figure 6 shows the average convergence curves of MIHBA and the comparison algorithms on $f_8$–$f_{13}$, with MIHBA drawn as the red line with the pentagram marker. On the $f_9$, $f_{12}$, and $f_{13}$ test functions, MIHBA lies at the bottom of the graph, showing that it reaches more accurate values faster on these three functions. On the other three test functions, MIHBA ranks in the top 4 of the 12 algorithms.
Figure 7 presents the average convergence curves of MIHBA and the other comparison algorithms on the $f_{14}$–$f_{23}$ test functions. As the figure shows, MIHBA performs excellently on $f_{14}$, $f_{15}$, $f_{17}$–$f_{19}$, $f_{21}$, and $f_{22}$, with fast convergence and high accuracy, and it makes up for the performance shortfall of HBA23. On $f_{20}$, although MIHBA does not converge fastest, the inset shows that all algorithms except PSO perform similarly. The improvements to HBA did not work well on $f_{16}$ and $f_{23}$. In summary, MIHBA has higher convergence efficiency than the other algorithms, indicating that the improvements to HBA are effective and enhance its convergence performance.

3.9. Stability Analysis

This section examines the stability of the different algorithms through box plots. Each algorithm was tested over 50 independent runs. The experiments include three unimodal, three multimodal, and three fixed-dimension test functions for a comprehensive comparison. As shown in Figure 8, the box plots of MIHBA are displayed side by side with those of the other compared algorithms to visually reveal the performance differences between them.
Figure 8 shows the box plots of MIHBA and the other 11 metaheuristics over 50 independent runs. The results of MIHBA are optimal except on $f_{12}$ and $f_{18}$, and especially good on $f_8$. On $f_{12}$ and $f_{18}$, MIHBA is stable, but there are individual outliers. MIHBA showed better stability in the experiments, suggesting that the improvements to HBA are effective.

4. Application

To demonstrate the global optimization capability of the improved algorithm MIHBA, four engineering design problems are tested in this section: the gear train design problem, the pressure vessel design problem, the three-bar truss design problem, and the reducer design problem. The constraints are handled by the penalty function method. We choose the top four algorithms in the Friedman test ranking, MIHBA, HBA, DBO, and PSO, for comparison to evaluate the performance of MIHBA. Each comparison uses a population size of 30 and 500 iterations and is repeated over 50 independent runs.

4.1. Gear Train Design Problems

A gear train consists of multiple gears that transmit motion and power while changing speed, direction, and torque. Gear trains are widely used in automotive transmissions, differentials, and similar systems for acceleration, deceleration, and steering. The design must consider the geometric parameters and meshing conditions to ensure that the transmission requirements are met; since the transmission ratio determines the deceleration or speed-increase effect, the design must balance transmission ratio, efficiency, and cost, as shown in Figure 9. The mathematical model is given in Equation (19).
Consider $x = [x_1, x_2, x_3, x_4] = [N_1, N_2, N_3, N_4]$,
Minimize $f(x) = \left( \dfrac{1}{6.931} - \dfrac{x_3 x_2}{x_1 x_4} \right)^2$,
Variable range $x_1, x_2, x_3, x_4 \in \{12, 13, \ldots, 60\}$.
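As a brief illustration, the objective of Equation (19) can be coded as below; rounding a continuous candidate to integer tooth counts in {12, ..., 60} before evaluation is a common device for this problem and an assumption here, not a detail stated in the paper.

```python
import numpy as np

def gear_train_cost(x):
    """Objective of Equation (19) over the four integer tooth counts N1..N4."""
    n1, n2, n3, n4 = np.clip(np.round(x), 12, 60).astype(int)
    return (1.0 / 6.931 - (n3 * n2) / (n1 * n4)) ** 2
```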
MIHBA is used to optimize the gear train design problem, and the results are compared with those of the other top-ranked algorithms from the Friedman test: HBA, DBO, and PSO. As shown by the experimental results in Table 9, the optimal values obtained by MIHBA are smaller than those of the other three algorithms, indicating that MIHBA obtains better values and performs better on the gear train design problem. On this problem, the optimal cost of MIHBA is reduced by two orders of magnitude compared to HBA.

4.2. Pressure Vessel Design Problems

The pressure vessel design problem is a classical optimization problem that involves minimizing the cost of a pressure vessel while satisfying specific constraints. The mathematical model for the four parameters of this problem is given in Equation (20), and a diagram of the pressure vessel design problem is given in Figure 10.
Consider $x = [x_1, x_2, x_3, x_4] = [T_s, T_h, R, L]$
Minimize $f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3$
Subject to
$g_1(x) = -x_1 + 0.0193 x_3 \le 0$
$g_2(x) = -x_2 + 0.00954 x_3 \le 0$
$g_3(x) = -\pi x_3^2 x_4 - \frac{4}{3} \pi x_3^3 + 1296000 \le 0$
$g_4(x) = x_4 - 240 \le 0$
Variable range $x_1, x_2 \in [0, 99]$; $x_3, x_4 \in [10, 200]$
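A sketch of Equation (20) with penalty handling is given below; the quadratic penalty form and its weight are assumptions (the paper states only that a penalty-function method is used), so they should be tuned for serious use.

```python
import numpy as np

def pressure_vessel_cost(x, penalty=1e6):
    """Objective of Equation (20) with a static penalty for constraint violations."""
    x1, x2, x3, x4 = x
    f = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
         + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)
    g = [-x1 + 0.0193 * x3,                      # g1
         -x2 + 0.00954 * x3,                     # g2
         -np.pi * x3 ** 2 * x4
         - (4.0 / 3.0) * np.pi * x3 ** 3 + 1296000.0,  # g3
         x4 - 240.0]                             # g4
    # Each violated constraint (g_i > 0) adds a quadratic penalty to the cost
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```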
As shown by the experimental results in Table 10, the optimal values obtained by MIHBA are smaller than those of the other three algorithms, indicating that MIHBA obtains better values and performs better on the pressure vessel design problem. On this problem, the optimal cost of MIHBA is reduced by 11.45% compared to HBA.

4.3. Three-Bar Truss Design Problems

The three-bar truss design problem is a classical structural optimization problem involving mechanics and optimization theory. The objective is to design a truss consisting of three bars that minimizes the weight of the truss under given loads while satisfying the strength, stiffness, and stability constraints, thereby reducing the design cost. The mathematical model is given in Equation (21), and a diagram of the three-bar truss design problem is given in Figure 11.
Consider $x = [x_1, x_2] = [A_1, A_2]$
Minimize $f(x) = (2\sqrt{2} x_1 + x_2) \times L$
Subject to
$g_1(x) = \dfrac{\sqrt{2} x_1 + x_2}{\sqrt{2} x_1^2 + 2 x_1 x_2} P - \delta \le 0$
$g_2(x) = \dfrac{x_2}{\sqrt{2} x_1^2 + 2 x_1 x_2} P - \delta \le 0$
$g_3(x) = \dfrac{1}{\sqrt{2} x_2 + x_1} P - \delta \le 0$
Variable range $x_1, x_2 \in [0, 1]$
where $L = 100$, $P = 2$, $\delta = 2$
As shown by the experimental results in Table 11, the optimal values obtained by MIHBA are smaller than those of the other three algorithms, indicating that MIHBA obtains better values and performs better on the three-bar truss design problem.

4.4. Reducer Design Problems

In the field of engineering design, the reducer (gearbox) design problem is a typical optimization challenge that aims to minimize weight and design cost while satisfying a set of key performance constraints. The problem involves seven design variables: the tooth face width $b$, the gear module $m$, the number of pinion teeth $p$, the lengths of the two shafts $l_1$ and $l_2$, and the diameters of the two shafts $d_1$ and $d_2$. Together, these variables determine the size, performance, and weight of the gearbox. To ensure safety and reliability, the design must adhere to 11 stringent constraints, including bending stresses on the gear teeth, surface pressures, transverse shaft deflections, and stresses in the shafts. The gearbox design problem is shown in Figure 12, and its mathematical model is given in Equation (22).
Consider $x = [x_1, x_2, x_3, x_4, x_5, x_6, x_7] = [b, m, p, l_1, l_2, d_1, d_2]$
Minimize $f(x) = 0.7854 x_1 x_2^2 (3.3333 x_3^2 + 14.9334 x_3 - 43.0934) - 1.508 x_1 (x_6^2 + x_7^2) + 7.4777 (x_6^3 + x_7^3) + 0.7854 (x_4 x_6^2 + x_5 x_7^2)$
Subject to
$g_1(x) = \dfrac{27}{x_1 x_2^2 x_3} - 1 \le 0$
$g_2(x) = \dfrac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0$
$g_3(x) = \dfrac{1.93 x_4^3}{x_2 x_6^4 x_3} - 1 \le 0$
$g_4(x) = \dfrac{1.93 x_5^3}{x_2 x_7^4 x_3} - 1 \le 0$
$g_5(x) = \dfrac{\sqrt{(745 x_4 / (x_2 x_3))^2 + 16.9 \times 10^6}}{110 x_6^3} - 1 \le 0$
$g_6(x) = \dfrac{\sqrt{(745 x_5 / (x_2 x_3))^2 + 157.5 \times 10^6}}{85 x_7^3} - 1 \le 0$
$g_7(x) = \dfrac{x_2 x_3}{40} - 1 \le 0$
$g_8(x) = \dfrac{5 x_2}{x_1} - 1 \le 0$
$g_9(x) = \dfrac{x_1}{12 x_2} - 1 \le 0$
$g_{10}(x) = \dfrac{1.5 x_6 + 1.9}{x_4} - 1 \le 0$
$g_{11}(x) = \dfrac{1.1 x_7 + 1.9}{x_5} - 1 \le 0$
Variable range $x_1 \in [2.6, 3.6]$, $x_2 \in [0.7, 0.8]$, $x_3 \in [17, 28]$, $x_4 \in [7.3, 8.3]$, $x_5 \in [7.8, 8.3]$, $x_6 \in [2.9, 3.9]$, $x_7 \in [5.0, 5.5]$
As shown by the experimental results in Table 12, the optimal values obtained by MIHBA are smaller than those of the other three algorithms, indicating that MIHBA obtains better values and performs better on the reducer design problem. On this problem, the optimal cost of MIHBA is 6.69% lower than that of HBA.

5. Conclusions

The honey badger algorithm has important applications in the engineering field: it can efficiently find the global optimal solution in complex search spaces, maintain a balance between exploration and exploitation, converge quickly, and apply to a wide range of engineering optimization problems. However, in some cases it may converge slowly and fall into local optima. In this paper, the honey badger algorithm is analyzed in depth and several key improvements are proposed: Halton sequence population initialization, the water wave dynamic density factor, and a learning method based on the principle of lens imaging. These strategies significantly improve the algorithm's search power and solution quality, making it more effective on complex optimization problems.
Initializing the population by a Halton sequence increases population diversity and thus prevents the algorithm from converging to a local optimum too early in the search. This approach generates a uniformly distributed point set in multidimensional space, providing a more balanced starting point for the global search. The water wave dynamic density factor, which simulates propagation, refraction, and breaking-wave motion in the solution space, improves both the global and local search capabilities and allows the algorithm to explore the vast search space more efficiently. In addition, the learning method based on the lens imaging principle further improves the algorithm's ability to escape local optima; by introducing new dynamics into the search process, it promotes communicative learning among group members and helps to maintain the diversity of the population.
Finally, experiments on 23 benchmark test functions and 4 engineering design problems validate the effectiveness of the improvements. The experiments demonstrate the excellent performance of MIHBA in solving complex optimization problems, especially nonlinear and multi-objective problems that are difficult to solve with traditional mathematical methods. These advances make MIHBA a powerful tool for future work, with applications spanning engineering, artificial intelligence, and economic management. Whether in model training for machine learning, structural optimization in engineering design, portfolio management in economics, or path planning in logistics, MIHBA can provide efficient solutions. Its flexibility and efficiency bode well for driving innovation and improving efficiency in industry while closely aligning academic discoveries with industry needs. In short, MIHBA not only has theoretical advantages but also shows wide applicability and strong problem-solving ability in practical applications, and it will play a greater role in future work to meet the needs of more industries.
Although we have made significant progress, there are still many opportunities for further improvement. The fast evolutionary programming (FEP) algorithm proposed in the literature [46] can significantly improve computational efficiency through parallelization strategies, especially on modern computing resources such as GPUs and cloud computing. Reference [47] proposes a method that automatically reconstructs robot models and runs motion simulations in a physical simulation environment by combining physical modeling in Simscape Multibody with multi-objective optimization by genetic algorithms, in order to search for the dimensions and control parameters of a robot in motion simulation. The asymptotic time complexity of the MIHBA proposed in this paper is no worse than that of HBA, but its actual runtime is higher. In the future, we will reduce the runtime of the algorithm and apply it to solving the inverse kinematics of quadruped robots.

Author Contributions

Writing—review and editing, software, conceptualization, T.H.; writing—review and editing, writing—original draft, software, methodology, T.L.; visualization, supervision, resources, funding acquisition, Q.L.; writing—review and editing, funding acquisition, methodology, conceptualization, Y.H.; supervision, resources, validation, H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Anhui Provincial Colleges and Universities Collaborative Innovation Project (GXXT-2023-068), and Anhui University of Science and Technology Graduate Innovation Fund Project (2023CX2086). Among them, the funder of GXXT-2023-068 project is Yourui Huang, and the funder of 2023CX2086 project is Quanzeng Liu.

Data Availability Statement

The data generated from the analysis in this study can be found in this article. This study does not report the original code, which is available for academic purposes from the lead contact. Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.

Acknowledgments

We would like to thank the School of Electrical and Information Engineering at Anhui University of Science and Technology for providing the laboratory.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Xu, Y.; Cao, H.; Shi, J.; Pei, S.; Zhang, B.; She, K. A comprehensive multi-parameter optimization method of squeeze film damper-rotor system using hunter-prey optimization algorithm. Tribol. Int. 2024, 194, 109538. [Google Scholar] [CrossRef]
  2. Jing, R.; Song, B.; Gao, R.; Yang, C.; Hao, X. A variable gradient descent shape optimization method for guide tee resistance reduction. J. Build. Eng. 2024, 95, 110161. [Google Scholar] [CrossRef]
  3. Zhao, B.; Liu, X. A transient formulation of entropy and heat transfer coefficients of Newton’s cooling law with the unifying entropy potential difference in compressible flows. Int. J. Therm. Sci. 2024, 205, 109253. [Google Scholar] [CrossRef]
  4. Lian, J.; Hui, G.; Ma, L.; Zhu, T.; Wu, X.; Heidari, A.; Chen, Y.; Chen, H. Parrot optimizer: Algorithm and applications to medical problems. Comput. Biol. Med. 2024, 172, 108064. [Google Scholar] [CrossRef] [PubMed]
  5. Dehghani, M.; Montazeri, Z.; Trojovská, E. Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl.-Based Syst. 2023, 259, 110011. [Google Scholar] [CrossRef]
  6. Deng, L.; Liu, S. Snow ablation optimizer: A novel metaheuristic technique for numerical optimization and engineering design. Expert Syst. Appl. 2023, 225, 120069. [Google Scholar] [CrossRef]
  7. Amiri, M.; Hashjin, N.; Montazeri, M. Hippopotamus optimization algorithm: A novel nature-inspired optimization algorithm. Sci. Rep. 2024, 14, 5032. [Google Scholar] [CrossRef]
  8. Wang, J.; Wang, W.; Hu, X. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024, 57, 98. [Google Scholar] [CrossRef]
  9. Fan, Z.; Xiao, Z.; Li, X.; Huang, Z.; Zhang, C. MSBWO: A Multi-Strategies Improved Beluga Whale Optimization Algorithm for Feature Selection. Biomimetics 2024, 9, 572. [Google Scholar] [CrossRef] [PubMed]
  10. Azizi, M.; Aickelin, U.; Khorshidi, H.A.; Baghalzadeh Shishehgarkhaneh, M. Energy valley optimizer: A novel metaheuristic algorithm for global and engineering optimization. Sci. Rep. 2023, 13, 226. [Google Scholar] [CrossRef]
  11. Zolfi, K. Gold rush optimizer: A new population-based metaheuristic algorithm. Oper. Res. Decis. 2023, 33, 113–150. Available online: https://api.semanticscholar.org/CorpusID:258203735 (accessed on 16 April 2023).
  12. Wang, P.; Xin, G. Quantum theory of intelligent optimization algorithms. Acta Autom. Sin. 2023, 49, 2396–2408. [Google Scholar] [CrossRef]
  13. Trojovská, E.; Dehghani, M.; Trojovský, P. Zebra Optimization Algorithm: A New Bio-Inspired Optimization Algorithm for Solving Optimization Algorithm. IEEE Access 2022, 10, 49445–49473. [Google Scholar] [CrossRef]
  14. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Nutcracker optimizer: A novel nature-inspired metaheuristic algorithm for global optimization and engineering design problems. Knowl.-Based Syst. 2022, 262, 110248. [Google Scholar] [CrossRef]
  15. Abualigah, L.; Elaziz, M.; Sumari, P.; Geem, Z.; Gandomi, A. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  16. Dehkordi, A.; Sadiq, A.; Mirjalili, S.; Ghafoor, K. Nonlinear-based Chaotic Harris Hawks Optimizer: Algorithm and Internet of Vehicles application. Appl. Soft Comput. 2021, 109, 107574. [Google Scholar] [CrossRef]
  17. Jena, B.; Naik, M.K.; Wunnava, A.; Panda, R. A Differential Squirrel Search Algorithm. Adv. Intell. Comput. Commun. 2021, 202, 143–152. [Google Scholar] [CrossRef]
  18. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  19. Zhao, W.; Zhang, Z.; Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 2020, 87, 103300. [Google Scholar] [CrossRef]
  20. Liu, Z.; Nishi, T. Multipopulation Ensemble Particle Swarm Optimizer for Engineering Design Problems. Math. Probl. Eng. 2020, 2020, 1450985. [Google Scholar] [CrossRef]
  21. Hashim, F.; Houssein, E.; Hussain, K.; Mabrouk, M.; Al-Atabany, W. Honey Badger Algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 192, 84–110. [Google Scholar] [CrossRef]
  22. Bansal, A.; Sangtani, V.; Bhukya, M. Optimal configuration and sensitivity analysis of hybrid nanogrid for futuristic residential application using honey badger algorithm. Energy Convers. Manag. 2024, 315, 118784. [Google Scholar] [CrossRef]
  23. Jose, R.; Paulraj, E.; Rajesh, P. Enhancing Steady-State power flow optimization in smart grids with a hybrid converter using GBDT-HBA technique. Expert Syst. Appl. 2024, 258, 125047. [Google Scholar] [CrossRef]
  24. Wilson, A.; Kiran, W.; Radhamani, A.; Bharathi, A. Optimizing energy-efficient cluster head selection in wireless sensor networks using a binarized spiking neural network and honey badger algorithm. Knowl.-Based Syst. 2024, 299, 112039. [Google Scholar] [CrossRef]
  25. Jiang, M.; Ding, K.; Chen, X.; Cui, L.; Zhang, J.; Cang, Y. Cgh-gto method for model parameter identification based on improved grey wolf optimizer, honey badger algorithm, and gorilla troops optimizer. Energy 2024, 296, 131163. [Google Scholar] [CrossRef]
  26. Ren, X.; Li, L.; Ji, B.; Liu, J. Design and analysis of solar hybrid combined cooling, heating and power system: A bi-level optimization model. Energy 2024, 292, 130362. [Google Scholar] [CrossRef]
  27. Wang, B.; Kang, H.; Sun, G.; Li, J. Efficient traffic-based IoT device identification using a feature selection approach with Lévy flight-based sine chaotic sub-swarm binary honey badger algorithm. Appl. Soft Comput. 2024, 155, 111455. [Google Scholar] [CrossRef]
  28. Zhang, S.; Wang, J.; Li, Y.; Zhang, S.; Wang, Y.; Wang, X. Improved honey badger algorithm based on elementary function density factors and mathematical spirals in polar coordinate systems. Artif. Intell. Rev. 2024, 57, 55. [Google Scholar] [CrossRef]
  29. Huang, P.; Zhou, Y.; Deng, W.; Zhao, H.; Luo, Q.; Wei, Y. Orthogonal opposition-based learning honey badger algorithm with differential evolution for global optimization and engineering design problems. Alex. Eng. J. 2024, 91, 348–367. [Google Scholar] [CrossRef]
  30. Zhang, J.; Zhang, T.; Li, C. Migration time prediction and assessment of toxic fumes under forced ventilation in underground mines. Undergr. Space 2024, 18, 273–294. [Google Scholar] [CrossRef]
  31. Zhong, H.; Cong, M.; Wang, M.; Du, Y.; Liu, D. HB-RRT: A path planning algorithm for mobile robots using Halton sequence-based rapidly-exploring random tree. Eng. Appl. Artif. Intell. 2024, 133, 108362. [Google Scholar] [CrossRef]
  32. Duan, Y.; Yu, X. A collaboration-based hybrid GWO-SCA optimizer for engineering optimization problems. Eng. Appl. Artif. Intell. 2023, 2013, 119017. [Google Scholar] [CrossRef]
  33. Huang, Y.; Liu, Q.; Song, H.; Han, T.; Li, T. CMGWO: Grey wolf optimizer for fusion cell-like P systems. Heliyon 2024, 10, e34496. [Google Scholar] [CrossRef] [PubMed]
  34. Zheng, Y. Water wave optimization: A new nature-inspired metaheuristic. Comput. Oper. Res. 2015, 55, 1–11. [Google Scholar] [CrossRef]
  35. Yu, F.; Guan, J.; Wu, H.; Chen, Y.; Xia, X. Lens imaging opposition-based learning for differential evolution with cauchy perturbation. Appl. Soft Comput. 2024, 152, 111211. [Google Scholar] [CrossRef]
  36. Choi, T.; Pachauri, N. Adaptive search space for stochastic opposition-based learning in differential evolution. Knowl.-Based Syst. 2024, 300, 112172. [Google Scholar] [CrossRef]
  37. Morales-Castañeda, B.; Zaldívar, D.; Cuevas, E.; Fausto, F.; Rodríguez, A. A better balance in metaheuristic algorithms: Does it exist? Swarm Evol. Comput. 2020, 54, 100671. [Google Scholar] [CrossRef]
  38. Abdi, G.; Sheikhani, A.; Kordrostami, S.; Zarei, B.; Rad, M. Identifying communities in complex networks using learning-based genetic algorithm. Ain Shams Eng. J. 2024, in press. [Google Scholar] [CrossRef]
  39. Lyu, Y.; Jin, X.; Xue, Q.; Jia, Z.; Du, Z. An optimal configuration strategy of multi-chiller system based on overlap analysis and improved GA. Build. Environ. 2024, 266, 112117. [Google Scholar] [CrossRef]
  40. Abualigah, L.; Al-qaness, M.; Abd, E. The non-monopolize search (NO): A novel single-based local search optimization algorithm. Neural Comput. Appl. 2024, 36, 5305–5332. [Google Scholar] [CrossRef]
  41. Zhu, D.; Shen, J.; Zhang, Y.; Li, W.; Zhu, X.; Zhou, C.; Cheng, S.; Yao, Y. Multi-strategy particle swarm optimization with adaptive forgetting for base station layout. Swarm Evol. Comput. 2024, 91, 101737. [Google Scholar] [CrossRef]
  42. Hu, Q.; Zhou, N.; Chen, H.; Weng, S. Bayesian damage identification of an unsymmetrical frame structure with an improved PSO algorithm. Structures 2023, 57, 105119. [Google Scholar] [CrossRef]
  43. Lu, Q.; Chen, Y.; Zhang, X. Grinding process optimization considering carbon emissions, cost and time based on an improved dung beetle algorithm. Comput. Ind. Eng. 2024, 197, 110600. [Google Scholar] [CrossRef]
  44. Röhmel, J. The permutation distribution of the Friedman test. Comput. Stat. Data Anal. 1997, 26, 83–99. [Google Scholar] [CrossRef]
  45. Dewan, I.; Rao, B. Wilcoxon-signed rank test for associated sequences. Stat. Probab. Lett. 2005, 71, 131–142. [Google Scholar] [CrossRef]
  46. Zhuo, Y.; Zhang, T.; Du, F.; Liu, R. A parallel particle swarm optimization algorithm based on GPU/CUDA. Appl. Soft Comput. 2023, 144, 110499. [Google Scholar] [CrossRef]
  47. Xiao, Y.; Yin, K.; Chen, X.; Chen, Z.; Gao, F. Multi-objective optimization design method for the dimensions and control parameters of curling hexapod robot based on application performance. Mech. Mach. Theory 2024, 204, 105831. [Google Scholar] [CrossRef]
Figure 1. Population initialization. (a) Random distribution map, (b) Halton distribution map, (c) Random initialized particle swarm distribution histogram, (d) Halton initialized particle swarm distribution histogram.
Figure 2. Four different convergence factors. (a) The original convergence factor shown in Equation (3), (b) the sinusoidal convergence factor shown in Equation (10), (c) the power-of-five convergence factor shown in Equation (11), and (d) the water wave dynamic density factor shown in Equation (12).
Figure 3. Schematic of lens opposition-based learning (LOBL).
Figure 4. Overall flowchart of the improved algorithm.
Figure 5. Average convergence curves of MIHBA and the comparison algorithms on the unimodal test functions.
Figure 6. Average convergence curves of MIHBA and the comparison algorithms on the multimodal test functions.
Figure 7. Average convergence curves of MIHBA and the comparison algorithms on the fixed-dimension test functions.
Figure 8. Box plots of MIHBA and the comparison algorithms.
Figure 9. Diagram of the gear train design problem.
Figure 10. Diagram of the pressure vessel design problem.
Figure 11. Diagram of the three-bar truss design problem.
Figure 12. Diagram of the speed reducer design problem.
Table 1. Benchmark functions.

Function | Dimension | Domain | Theoretical Optimum
$f_1(x)=\sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0
$f_2(x)=\sum_{i=1}^{n}|x_i|+\prod_{i=1}^{n}|x_i|$ | 30 | [−10, 10] | 0
$f_3(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_j\right)^2$ | 30 | [−100, 100] | 0
$f_4(x)=\max_i\{|x_i|,\ 1\le i\le n\}$ | 30 | [−100, 100] | 0
$f_5(x)=\sum_{i=1}^{n-1}[100(x_{i+1}-x_i^2)^2+(x_i-1)^2]$ | 30 | [−30, 30] | 0
$f_6(x)=\sum_{i=1}^{n}(\lfloor x_i+0.5\rfloor)^2$ | 30 | [−100, 100] | 0
$f_7(x)=\sum_{i=1}^{n}ix_i^4+\mathrm{random}[0,1)$ | 30 | [−1.28, 1.28] | 0
$f_8(x)=-\sum_{i=1}^{n}x_i\sin(\sqrt{|x_i|})$ | 30 | [−500, 500] | −12,569.4
$f_9(x)=\sum_{i=1}^{n}[x_i^2-10\cos(2\pi x_i)+10]$ | 30 | [−5.12, 5.12] | 0
$f_{10}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | 30 | [−32, 32] | 0
$f_{11}(x)=\frac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ | 30 | [−600, 600] | 0
$f_{12}(x)=\frac{\pi}{n}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2[1+10\sin^2(\pi y_{i+1})]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\frac{x_i+1}{4}$ and $u(x_i,a,k,m)=k(x_i-a)^m$ if $x_i>a$; $0$ if $-a\le x_i\le a$; $k(-x_i-a)^m$ if $x_i<-a$ | 30 | [−50, 50] | 0
$f_{13}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n}(x_i-1)^2[1+\sin^2(3\pi x_i+1)]+(x_n-1)^2[1+\sin^2(2\pi x_n)]\right\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30 | [−50, 50] | 0
$f_{14}(x)=\left(\frac{1}{500}+\sum_{j=1}^{25}\frac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | [−65, 65] | 1
$f_{15}(x)=\sum_{i=1}^{11}\left[a_i-\frac{x_1(b_i^2+b_ix_2)}{b_i^2+b_ix_3+x_4}\right]^2$ | 4 | [−5, 5] | 0.0003075
$f_{16}(x)=4x_1^2-2.1x_1^4+\frac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316285
$f_{17}(x)=\left(x_2-\frac{5.1}{4\pi^2}x_1^2+\frac{5}{\pi}x_1-6\right)^2+10\left(1-\frac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 5] | 0.398
$f_{18}(x)=[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)]\times[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)]$ | 2 | [−2, 2] | 3
$f_{19}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\right)$ | 3 | [0, 1] | −3.86
$f_{20}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\right)$ | 6 | [0, 1] | −3.32
$f_{21}(x)=-\sum_{i=1}^{5}[(x-a_i)(x-a_i)^T+c_i]^{-1}$ | 4 | [0, 10] | −10
$f_{22}(x)=-\sum_{i=1}^{7}[(x-a_i)(x-a_i)^T+c_i]^{-1}$ | 4 | [0, 10] | −10
$f_{23}(x)=-\sum_{i=1}^{10}[(x-a_i)(x-a_i)^T+c_i]^{-1}$ | 4 | [0, 10] | −10
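For concreteness, a short Python sketch of three entries of Table 1, implemented from the standard definitions reconstructed above (our own code, not the authors'):

import numpy as np

def f1_sphere(x):
    return np.sum(x ** 2)

def f9_rastrigin(x):
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def f10_ackley(x):
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

x0 = np.zeros(30)
assert f1_sphere(x0) == 0 and f9_rastrigin(x0) == 0
assert abs(f10_ackley(x0)) < 1e-12  # theoretical optimum 0 at the origin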
Table 2. Sensitivity analysis of the population size P and the maximum number of iterations t; each setting keeps the same evaluation budget P × t = 15,000.

Function | Criterion | P/t = 15/1000 | P/t = 30/500 | P/t = 60/250
f1 | Mean | 0.0000 × 10^0 | 0.0000 × 10^0 | 1.2449 × 10^−213
f1 | SD | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
f1 | Rank | 1.5 | 1.5 | 3
f2 | Mean | 0.0000 × 10^0 | 2.1815 × 10^−211 | 4.4099 × 10^−110
f2 | SD | 0.0000 × 10^0 | 0.0000 × 10^0 | 1.6352 × 10^−109
f2 | Rank | 1 | 2 | 3
f3 | Mean | 0.0000 × 10^0 | 0.0000 × 10^0 | 3.0434 × 10^−201
f3 | SD | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
f3 | Rank | 1.5 | 1.5 | 3
f4 | Mean | 0.0000 × 10^0 | 8.8038 × 10^−202 | 7.7190 × 10^−105
f4 | SD | 0.0000 × 10^0 | 0.0000 × 10^0 | 3.6422 × 10^−104
f4 | Rank | 1 | 2 | 3
f5 | Mean | 2.8811 × 10^1 | 2.8576 × 10^1 | 2.8695 × 10^1
f5 | SD | 1.5692 × 10^−1 | 3.2195 × 10^−1 | 2.8470 × 10^−1
f5 | Rank | 2 | 1 | 3
f6 | Mean | 3.2683 × 10^0 | 3.1940 × 10^0 | 2.7047 × 10^0
f6 | SD | 1.3931 × 10^0 | 1.0974 × 10^0 | 1.0262 × 10^0
f6 | Rank | 3 | 2 | 1
f7 | Mean | 3.8137 × 10^−4 | 2.5721 × 10^−4 | 2.5611 × 10^−4
f7 | SD | 3.2160 × 10^−4 | 2.6331 × 10^−4 | 2.2611 × 10^−4
f7 | Rank | 3 | 2 | 1
f8 | Mean | −1.2293 × 10^4 | −1.2455 × 10^4 | −1.2441 × 10^4
f8 | SD | 2.9838 × 10^2 | 1.6206 × 10^2 | 1.9268 × 10^2
f8 | Rank | 3 | 1 | 2
f9 | Mean | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
f9 | SD | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
f9 | Rank | 2 | 2 | 2
f10 | Mean | 8.8818 × 10^−16 | 8.8818 × 10^−16 | 8.8818 × 10^−16
f10 | SD | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
f10 | Rank | 2 | 2 | 2
f11 | Mean | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
f11 | SD | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
f11 | Rank | 2 | 2 | 2
f12 | Mean | 2.5736 × 10^−1 | 2.4112 × 10^−1 | 1.7331 × 10^−1
f12 | SD | 2.7247 × 10^−1 | 1.6118 × 10^−1 | 1.7408 × 10^−1
f12 | Rank | 3 | 2 | 1
f13 | Mean | 1.9765 × 10^0 | 1.7823 × 10^0 | 1.5983 × 10^0
f13 | SD | 7.7008 × 10^−1 | 6.7126 × 10^−1 | 6.5243 × 10^−1
f13 | Rank | 3 | 2 | 1
f14 | Mean | 2.9411 × 10^0 | 3.4290 × 10^0 | 2.3035 × 10^0
f14 | SD | 8.9217 × 10^−1 | 1.9069 × 10^0 | 1.6986 × 10^0
f14 | Rank | 1 | 3 | 2
f15 | Mean | 2.1147 × 10^−3 | 1.6665 × 10^−3 | 1.3744 × 10^−3
f15 | SD | 1.0278 × 10^−3 | 4.3485 × 10^−4 | 3.4597 × 10^−4
f15 | Rank | 3 | 2 | 1
f16 | Mean | −1.0130 × 10^0 | −8.8472 × 10^−1 | −9.6634 × 10^−1
f16 | SD | 1.1539 × 10^−1 | 3.1674 × 10^−1 | 2.2367 × 10^−1
f16 | Rank | 1 | 3 | 2
f17 | Mean | 3.9789 × 10^−1 | 3.9789 × 10^−1 | 3.9789 × 10^−1
f17 | SD | 1.1839 × 10^−14 | 0.0000 × 10^0 | 0.0000 × 10^0
f17 | Rank | 3 | 1.5 | 1.5
f18 | Mean | 4.1660 × 10^0 | 3.0000 × 10^0 | 3.0000 × 10^0
f18 | SD | 5.3452 × 10^0 | 7.4694 × 10^−15 | 3.8756 × 10^−15
f18 | Rank | 3 | 1.5 | 1.5
f19 | Mean | −3.6990 × 10^0 | −3.8626 × 10^0 | −3.8628 × 10^0
f19 | SD | 2.9147 × 10^−1 | 1.1146 × 10^−3 | 1.4827 × 10^−13
f19 | Rank | 3 | 2 | 1
f20 | Mean | −3.2541 × 10^0 | −3.2004 × 10^0 | −3.1947 × 10^0
f20 | SD | 6.5043 × 10^−2 | 1.4440 × 10^−2 | 2.8326 × 10^−2
f20 | Rank | 1 | 2 | 3
f21 | Mean | −1.0150 × 10^1 | −1.0153 × 10^1 | −1.0153 × 10^1
f21 | SD | 1.4456 × 10^−2 | 3.1420 × 10^−8 | 2.7968 × 10^−7
f21 | Rank | 3 | 1 | 2
f22 | Mean | −1.0403 × 10^1 | −1.0403 × 10^1 | −1.0403 × 10^1
f22 | SD | 3.0841 × 10^−4 | 8.5220 × 10^−8 | 4.2074 × 10^−9
f22 | Rank | 3 | 2 | 1
f23 | Mean | −1.0367 × 10^1 | −8.9298 × 10^0 | −3.3961 × 10^0
f23 | SD | 1.1467 × 10^0 | 3.2466 × 10^0 | 2.6635 × 10^0
f23 | Rank | 1 | 2 | 3
Rank-Count | | 50 | 43 | 45
Ave-Rank | | 2.174 | 1.869 | 1.957
Overall-Rank | | 3 | 1 | 2
Table 3. Comparison of algorithm-specific parameter settings.

Algorithm | Parameter | Value
NO | No | 1
PSO | c1 | 2
PSO | c2 | 2
PSO | w_start | 0.9
PSO | w_end | 0.6
GA | pc | 0.9
GA | pm | 0.2
DBO | P_percent | 0.2
HBA series | β | 6
HBA series | C | 2
HBA series | vec_flag | [−1, 1]
Table 4. Experimental results of MIHBA and the comparison algorithms over 50 independent runs on the 23 test functions.

Function | Criterion | NO | PSO | GA | DBO | HBA | MIHBA
f1 | Mean | 7.3003 × 10^−185 | 2.4640 × 10^0 | 9.4741 × 10^3 | 2.2951 × 10^−114 | 4.7283 × 10^−135 | 0.0000 × 10^0
f1 | SD | 0.0000 × 10^0 | 1.1232 × 10^0 | 5.8262 × 10^3 | 1.1000 × 10^−113 | 3.1823 × 10^−134 | 0.0000 × 10^0
f1 | Rank | 2 | 5 | 6 | 4 | 3 | 1
f2 | Mean | 1.2667 × 10^−95 | 4.6171 × 10^0 | 4.0060 × 10^1 | 4.7724 × 10^−57 | 1.8214 × 10^−72 | 2.1815 × 10^−211
f2 | SD | 5.0335 × 10^−95 | 1.1213 × 10^0 | 9.8511 × 10^0 | 3.3579 × 10^−56 | 4.3810 × 10^−72 | 0.0000 × 10^0
f2 | Rank | 2 | 5 | 6 | 4 | 3 | 1
f3 | Mean | 2.2392 × 10^−158 | 1.8283 × 10^2 | 4.0361 × 10^4 | 1.5087 × 10^−43 | 3.3967 × 10^−96 | 0.0000 × 10^0
f3 | SD | 1.5833 × 10^−157 | 5.4112 × 10^1 | 1.3818 × 10^4 | 1.0668 × 10^−42 | 1.8422 × 10^−95 | 0.0000 × 10^0
f3 | Rank | 2 | 5 | 6 | 4 | 3 | 1
f4 | Mean | 2.8770 × 10^−92 | 2.0464 × 10^0 | 6.7351 × 10^1 | 4.6272 × 10^−54 | 3.5493 × 10^−57 | 8.8038 × 10^−202
f4 | SD | 1.8518 × 10^−91 | 2.3739 × 10^−1 | 8.4169 × 10^0 | 2.7100 × 10^−53 | 1.8516 × 10^−56 | 0.0000 × 10^0
f4 | Rank | 2 | 5 | 6 | 4 | 3 | 1
f5 | Mean | 8.2510 × 10^−30 | 1.1298 × 10^3 | 1.5814 × 10^6 | 2.5750 × 10^1 | 2.4107 × 10^1 | 2.8576 × 10^1
f5 | SD | 1.0323 × 10^−29 | 7.1313 × 10^2 | 2.0583 × 10^6 | 1.9964 × 10^−1 | 9.4081 × 10^−1 | 3.2195 × 10^−1
f5 | Rank | 1 | 5 | 6 | 3 | 2 | 4
f6 | Mean | 6.8816 × 10^0 | 2.4789 × 10^0 | 9.2284 × 10^3 | 1.5846 × 10^−2 | 3.0077 × 10^−2 | 3.1940 × 10^0
f6 | SD | 8.2198 × 10^−1 | 1.2106 × 10^0 | 5.0355 × 10^3 | 5.9135 × 10^−2 | 8.1659 × 10^−2 | 1.0974 × 10^0
f6 | Rank | 5 | 3 | 6 | 1 | 2 | 4
f7 | Mean | 3.9573 × 10^−3 | 1.7060 × 10^1 | 9.0373 × 10^−1 | 1.2122 × 10^−3 | 4.5094 × 10^−4 | 2.5721 × 10^−4
f7 | SD | 3.4920 × 10^−3 | 1.2841 × 10^1 | 7.8983 × 10^−1 | 1.0887 × 10^−3 | 4.3386 × 10^−4 | 2.6331 × 10^−4
f7 | Rank | 4 | 6 | 5 | 3 | 2 | 1
f8 | Mean | −9.2645 × 10^2 | −6.2006 × 10^3 | −2.1082 × 10^3 | −8.7402 × 10^3 | −8.1146 × 10^3 | −1.2455 × 10^4
f8 | SD | 6.0439 × 10^2 | 1.3820 × 10^3 | 4.7463 × 10^2 | 1.7062 × 10^3 | 1.3014 × 10^3 | 1.6206 × 10^2
f8 | Rank | 6 | 4 | 5 | 2 | 3 | 1
f9 | Mean | 0.0000 × 10^0 | 1.6733 × 10^2 | 2.2556 × 10^2 | 3.7809 × 10^−1 | 0.0000 × 10^0 | 0.0000 × 10^0
f9 | SD | 0.0000 × 10^0 | 3.2186 × 10^1 | 3.5442 × 10^1 | 1.5686 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
f9 | Rank | 2 | 5 | 6 | 4 | 2 | 2
f10 | Mean | 8.8818 × 10^−16 | 2.6233 × 10^0 | 1.9645 × 10^1 | 8.8818 × 10^−16 | 1.5945 × 10^0 | 8.8818 × 10^−16
f10 | SD | 0.0000 × 10^0 | 4.8205 × 10^−1 | 4.3168 × 10^−1 | 0.0000 × 10^0 | 5.4623 × 10^0 | 0.0000 × 10^0
f10 | Rank | 2 | 4 | 6 | 2 | 5 | 2
f11 | Mean | 0.0000 × 10^0 | 1.2469 × 10^−1 | 8.3818 × 10^1 | 1.5363 × 10^−3 | 0.0000 × 10^0 | 0.0000 × 10^0
f11 | SD | 0.0000 × 10^0 | 5.2338 × 10^−2 | 5.7330 × 10^1 | 1.0863 × 10^−2 | 0.0000 × 10^0 | 0.0000 × 10^0
f11 | Rank | 2 | 5 | 6 | 4 | 2 | 2
f12 | Mean | 1.4334 × 10^0 | 5.6601 × 10^−2 | 5.9015 × 10^3 | 7.4402 × 10^−4 | 3.1293 × 10^−3 | 2.4112 × 10^−1
f12 | SD | 2.8723 × 10^−1 | 5.7002 × 10^−2 | 3.0280 × 10^4 | 2.5328 × 10^−3 | 1.4709 × 10^−2 | 1.6118 × 10^−1
f12 | Rank | 5 | 3 | 6 | 1 | 2 | 4
f13 | Mean | 1.8699 × 10^−32 | 5.5405 × 10^−1 | 5.2614 × 10^5 | 6.1309 × 10^−1 | 4.1696 × 10^−1 | 1.7823 × 10^0
f13 | SD | 9.4408 × 10^−33 | 2.2252 × 10^−1 | 1.2456 × 10^6 | 5.0675 × 10^−1 | 3.4127 × 10^−1 | 6.7126 × 10^−1
f13 | Rank | 1 | 3 | 6 | 4 | 2 | 5
f14 | Mean | 1.2277 × 10^1 | 3.1880 × 10^0 | 9.9800 × 10^−1 | 1.3150 × 10^0 | 1.5304 × 10^0 | 3.4290 × 10^0
f14 | SD | 1.3645 × 10^0 | 2.3948 × 10^0 | 4.8340 × 10^−10 | 8.8063 × 10^−1 | 1.5614 × 10^0 | 1.9069 × 10^0
f14 | Rank | 6 | 5 | 1 | 2 | 3 | 4
f15 | Mean | 7.1715 × 10^−2 | 8.8753 × 10^−4 | 1.0915 × 10^−2 | 8.0347 × 10^−4 | 5.9713 × 10^−3 | 1.6665 × 10^−3
f15 | SD | 6.3231 × 10^−2 | 1.3918 × 10^−4 | 1.3053 × 10^−2 | 3.9753 × 10^−4 | 9.2543 × 10^−3 | 4.3485 × 10^−4
f15 | Rank | 6 | 1 | 5 | 2 | 4 | 3
f16 | Mean | −3.4420 × 10^−1 | −1.0316 × 10^0 | −9.5284 × 10^−1 | −1.0316 × 10^0 | −1.0316 × 10^0 | −8.8472 × 10^−1
f16 | SD | 3.7077 × 10^−1 | 4.3145 × 10^−16 | 9.9834 × 10^−2 | 1.6764 × 10^−7 | 3.4164 × 10^−16 | 3.1674 × 10^−1
f16 | Rank | 6 | 1 | 4 | 3 | 2 | 5
f17 | Mean | 5.9065 × 10^0 | 3.9789 × 10^−1 | 7.1239 × 10^1 | 3.9789 × 10^−1 | 3.9789 × 10^−1 | 3.9789 × 10^−1
f17 | SD | 8.1139 × 10^0 | 0.0000 × 10^0 | 7.6469 × 10^0 | 2.5202 × 10^−16 | 0.0000 × 10^0 | 0.0000 × 10^0
f17 | Rank | 5 | 2.5 | 6 | 2.5 | 2.5 | 2.5
f18 | Mean | 3.7968 × 10^2 | 3.0000 × 10^0 | 4.9999 × 10^0 | 3.0000 × 10^0 | 5.1600 × 10^0 | 3.0000 × 10^0
f18 | SD | 2.4405 × 10^2 | 6.6417 × 10^−15 | 1.1593 × 10^1 | 2.6119 × 10^−15 | 1.2001 × 10^1 | 7.4694 × 10^−15
f18 | Rank | 6 | 1.5 | 4 | 1.5 | 5 | 1.5
f19 | Mean | −1.4201 × 10^0 | −3.8628 × 10^0 | −3.3076 × 10^0 | −3.8611 × 10^0 | −3.8615 × 10^0 | −3.8626 × 10^0
f19 | SD | 1.0736 × 10^0 | 9.5794 × 10^−16 | 3.6983 × 10^−1 | 3.2588 × 10^−3 | 2.9188 × 10^−3 | 1.1146 × 10^−3
f19 | Rank | 6 | 1 | 5 | 4 | 3 | 2
f20 | Mean | −1.1995 × 10^0 | −3.2625 × 10^0 | −1.4049 × 10^0 | −3.2381 × 10^0 | −3.2469 × 10^0 | −3.2004 × 10^0
f20 | SD | 7.2506 × 10^−1 | 6.0050 × 10^−2 | 4.8656 × 10^−1 | 1.0375 × 10^−1 | 7.0311 × 10^−2 | 1.4440 × 10^−2
f20 | Rank | 6 | 1 | 5 | 4 | 3 | 2
f21 | Mean | −5.0552 × 10^0 | −7.1528 × 10^0 | −2.1951 × 10^0 | −6.6436 × 10^0 | −8.7291 × 10^0 | −1.0153 × 10^1
f21 | SD | 0.0000 × 10^0 | 3.1403 × 10^0 | 9.6407 × 10^−1 | 2.5711 × 10^0 | 3.0861 × 10^0 | 3.1420 × 10^−8
f21 | Rank | 3 | 5 | 6 | 4 | 2 | 1
f22 | Mean | −5.0877 × 10^0 | −9.3099 × 10^0 | −2.0067 × 10^0 | −8.0030 × 10^0 | −9.0115 × 10^0 | −1.0403 × 10^1
f22 | SD | 0.0000 × 10^0 | 2.4042 × 10^0 | 8.4662 × 10^−1 | 2.9063 × 10^0 | 3.0265 × 10^0 | 8.5220 × 10^−8
f22 | Rank | 5 | 2 | 6 | 4 | 3 | 1
f23 | Mean | −5.1285 × 10^0 | −9.4642 × 10^0 | −1.7977 × 10^0 | −8.9748 × 10^0 | −8.4113 × 10^0 | −8.9298 × 10^0
f23 | SD | 8.9720 × 10^−16 | 2.3292 × 10^0 | 6.5073 × 10^−1 | 2.5559 × 10^0 | 3.3086 × 10^0 | 3.2466 × 10^0
f23 | Rank | 4 | 1 | 6 | 2 | 5 | 3
Rank-Count | | 89 | 79 | 124 | 69 | 66.5 | 54
Ave-Rank | | 3.8696 | 3.4348 | 5.3913 | 3.0000 | 2.8913 | 2.3478
Overall-Rank | | 5 | 4 | 6 | 3 | 2 | 1
Table 5. Ablation algorithm nomenclature.

Algorithm | Halton | Water Wave Dynamic Density Factor | Lens Opposition-Based Learning
HBA | – | – | –
HBA1 | ✓ | – | –
HBA2 | – | ✓ | –
HBA3 | – | – | ✓
HBA12 | ✓ | ✓ | –
HBA13 | ✓ | – | ✓
HBA23 | – | ✓ | ✓
MIHBA | ✓ | ✓ | ✓
The symbol ✓ indicates that the algorithm incorporates the corresponding improvement; the digits in each variant's name index the strategies in column order.
Table 6. Experimental results of MIHBA and the ablation algorithms over 50 independent runs on the 23 test functions.

Function | Criterion | HBA | HBA1 | HBA2 | HBA3 | HBA12 | HBA13 | HBA23 | MIHBA
f1 | Mean | 4.7283 × 10^−135 | 1.7723 × 10^−134 | 3.7719 × 10^−240 | 9.5105 × 10^−256 | 3.8876 × 10^−238 | 2.0207 × 10^−258 | 0.0000 × 10^0 | 0.0000 × 10^0
f1 | SD | 3.1823 × 10^−134 | 1.0698 × 10^−133 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
f1 | Rank | 7 | 8 | 5 | 4 | 6 | 3 | 1.5 | 1.5
f2 | Mean | 1.8214 × 10^−72 | 1.5383 × 10^−71 | 1.9881 × 10^−124 | 4.6686 × 10^−135 | 6.6283 × 10^−124 | 3.8114 × 10^−133 | 7.4191 × 10^−210 | 2.1815 × 10^−211
f2 | SD | 4.3810 × 10^−72 | 4.4108 × 10^−71 | 1.3312 × 10^−123 | 3.0048 × 10^−134 | 4.4163 × 10^−123 | 2.6888 × 10^−132 | 0.0000 × 10^0 | 0.0000 × 10^0
f2 | Rank | 7 | 8 | 5 | 3 | 6 | 4 | 2 | 1
f3 | Mean | 3.3967 × 10^−96 | 1.8412 × 10^−97 | 2.2601 × 10^−226 | 2.2645 × 10^−215 | 2.0049 × 10^−220 | 2.0154 × 10^−221 | 0.0000 × 10^0 | 0.0000 × 10^0
f3 | SD | 1.8422 × 10^−95 | 1.0975 × 10^−96 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
f3 | Rank | 8 | 7 | 3 | 6 | 5 | 4 | 1.5 | 1.5
f4 | Mean | 3.5493 × 10^−57 | 6.7229 × 10^−44 | 1.4246 × 10^−118 | 2.2832 × 10^−119 | 2.9398 × 10^−119 | 1.2983 × 10^−110 | 3.6497 × 10^−198 | 8.8038 × 10^−202
f4 | SD | 1.8516 × 10^−56 | 2.4640 × 10^−43 | 8.8713 × 10^−118 | 1.1773 × 10^−118 | 1.2212 × 10^−118 | 6.8046 × 10^−110 | 0.0000 × 10^0 | 0.0000 × 10^0
f4 | Rank | 7 | 8 | 5 | 3 | 4 | 6 | 2 | 1
f5 | Mean | 2.4107 × 10^1 | 2.3986 × 10^1 | 2.8699 × 10^1 | 2.3815 × 10^1 | 2.8608 × 10^1 | 2.3783 × 10^1 | 2.8675 × 10^1 | 2.8576 × 10^1
f5 | SD | 9.4081 × 10^−1 | 8.5190 × 10^−1 | 2.8744 × 10^−1 | 5.6154 × 10^−1 | 3.0152 × 10^−1 | 8.0837 × 10^−1 | 3.2828 × 10^−1 | 3.2195 × 10^−1
f5 | Rank | 4 | 3 | 7 | 1 | 6 | 2 | 8 | 5
f6 | Mean | 3.0077 × 10^−2 | 1.5175 × 10^−2 | 4.4300 × 10^0 | 2.0162 × 10^−2 | 3.6728 × 10^0 | 1.1255 × 10^−2 | 4.3189 × 10^0 | 3.1940 × 10^0
f6 | SD | 8.1659 × 10^−2 | 5.9595 × 10^−2 | 7.8025 × 10^−1 | 6.8143 × 10^−2 | 1.0744 × 10^0 | 4.9958 × 10^−2 | 7.4332 × 10^−1 | 1.0974 × 10^0
f6 | Rank | 4 | 2 | 8 | 3 | 6 | 1 | 7 | 5
f7 | Mean | 4.5094 × 10^−4 | 4.1968 × 10^−4 | 2.8189 × 10^−4 | 3.5129 × 10^−4 | 3.8020 × 10^−4 | 3.7453 × 10^−4 | 3.7404 × 10^−4 | 2.5721 × 10^−4
f7 | SD | 4.3386 × 10^−4 | 3.7204 × 10^−4 | 3.7439 × 10^−4 | 3.0190 × 10^−4 | 3.6380 × 10^−4 | 2.5372 × 10^−4 | 3.6974 × 10^−4 | 2.6331 × 10^−4
f7 | Rank | 8 | 7 | 4 | 3 | 6 | 2 | 5 | 1
f8 | Mean | −8.1146 × 10^3 | −1.1362 × 10^4 | −6.6279 × 10^3 | −8.6794 × 10^3 | −1.2342 × 10^4 | −1.1789 × 10^4 | −6.4823 × 10^3 | −1.2455 × 10^4
f8 | SD | 1.3014 × 10^3 | 1.1693 × 10^3 | 7.0940 × 10^2 | 1.3760 × 10^3 | 3.0113 × 10^2 | 1.1924 × 10^3 | 9.4440 × 10^2 | 1.6206 × 10^2
f8 | Rank | 6 | 4 | 7 | 5 | 2 | 3 | 8 | 1
f9 | Mean | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
f9 | SD | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
f9 | Rank | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5
f10 | Mean | 1.5945 × 10^0 | 8.8818 × 10^−16 | 8.8818 × 10^−16 | 8.8818 × 10^−16 | 8.8818 × 10^−16 | 8.8818 × 10^−16 | 8.8818 × 10^−16 | 8.8818 × 10^−16
f10 | SD | 5.4623 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
f10 | Rank | 8 | 4 | 4 | 4 | 4 | 4 | 4 | 4
f11 | Mean | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
f11 | SD | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
f11 | Rank | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5 | 4.5
f12 | Mean | 3.1293 × 10^−3 | 3.9397 × 10^−4 | 4.5782 × 10^−1 | 8.4754 × 10^−6 | 3.1053 × 10^−1 | 6.5165 × 10^−4 | 4.4263 × 10^−1 | 2.4112 × 10^−1
f12 | SD | 1.4709 × 10^−2 | 1.5503 × 10^−3 | 1.9787 × 10^−1 | 1.3619 × 10^−5 | 2.4469 × 10^−1 | 1.9603 × 10^−3 | 1.6113 × 10^−1 | 1.6118 × 10^−1
f12 | Rank | 4 | 2 | 8 | 1 | 6 | 3 | 7 | 5
f13 | Mean | 4.1696 × 10^−1 | 4.2182 × 10^−1 | 2.4608 × 10^0 | 1.0587 × 10^0 | 2.0050 × 10^0 | 4.8950 × 10^−1 | 2.4024 × 10^0 | 1.7823 × 10^0
f13 | SD | 3.4127 × 10^−1 | 3.6345 × 10^−1 | 3.3591 × 10^−1 | 4.9331 × 10^−1 | 5.4848 × 10^−1 | 4.6493 × 10^−1 | 2.9277 × 10^−1 | 6.7126 × 10^−1
f13 | Rank | 1 | 2 | 8 | 4 | 6 | 3 | 7 | 5
f14 | Mean | 1.5304 × 10^0 | 1.7284 × 10^0 | 4.8083 × 10^0 | 1.5299 × 10^0 | 3.4278 × 10^0 | 1.6693 × 10^0 | 3.2449 × 10^0 | 3.4290 × 10^0
f14 | SD | 1.5614 × 10^0 | 1.6678 × 10^0 | 3.5082 × 10^0 | 1.6356 × 10^0 | 2.0498 × 10^0 | 1.6030 × 10^0 | 2.6254 × 10^0 | 1.9069 × 10^0
f14 | Rank | 1 | 4 | 8 | 2 | 6 | 3 | 7 | 5
f15 | Mean | 5.9713 × 10^−3 | 1.7166 × 10^−3 | 7.9111 × 10^−3 | 5.5807 × 10^−3 | 1.7706 × 10^−3 | 1.8942 × 10^−3 | 7.9738 × 10^−3 | 1.6665 × 10^−3
f15 | SD | 9.2543 × 10^−3 | 5.7402 × 10^−4 | 1.1928 × 10^−2 | 8.8396 × 10^−3 | 4.4023 × 10^−4 | 5.2223 × 10^−4 | 1.3768 × 10^−2 | 4.3485 × 10^−4
f15 | Rank | 6 | 3 | 7 | 5 | 2 | 4 | 8 | 1
f16 | Mean | −1.0316 × 10^0 | −1.0316 × 10^0 | −1.0153 × 10^0 | −1.0316 × 10^0 | −8.5207 × 10^−1 | −1.0316 × 10^0 | −9.8266 × 10^−1 | −8.8472 × 10^−1
f16 | SD | 3.4164 × 10^−16 | 3.3269 × 10^−16 | 1.1542 × 10^−1 | 3.0917 × 10^−16 | 3.4153 × 10^−1 | 3.1879 × 10^−16 | 1.9580 × 10^−1 | 3.1674 × 10^−1
f16 | Rank | 4 | 1 | 5 | 2 | 8 | 3 | 6 | 7
f17 | Mean | 3.9789 × 10^−1 | 3.9789 × 10^−1 | 3.9789 × 10^−1 | 3.9789 × 10^−1 | 3.9789 × 10^−1 | 3.9789 × 10^−1 | 3.9789 × 10^−1 | 3.9789 × 10^−1
f17 | SD | 0.0000 × 10^0 | 0.0000 × 10^0 | 8.2352 × 10^−16 | 0.0000 × 10^0 | 3.5198 × 10^−16 | 0.0000 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
f17 | Rank | 3.5 | 3.5 | 8 | 3.5 | 7 | 3.5 | 3.5 | 3.5
f18 | Mean | 5.1600 × 10^0 | 3.0000 × 10^0 | 1.2180 × 10^1 | 3.5400 × 10^0 | 3.0000 × 10^0 | 3.0000 × 10^0 | 7.8600 × 10^0 | 3.0000 × 10^0
f18 | SD | 1.2001 × 10^1 | 1.7843 × 10^−15 | 2.0850 × 10^1 | 3.8184 × 10^0 | 1.0560 × 10^−14 | 1.9106 × 10^−15 | 1.4109 × 10^1 | 7.4694 × 10^−15
f18 | Rank | 6 | 1 | 8 | 5 | 4 | 2 | 7 | 3
f19 | Mean | −3.8615 × 10^0 | −3.8615 × 10^0 | −3.8317 × 10^0 | −3.8468 × 10^0 | −3.8628 × 10^0 | −3.8625 × 10^0 | −3.8162 × 10^0 | −3.8626 × 10^0
f19 | SD | 2.9188 × 10^−3 | 2.9188 × 10^−3 | 1.5299 × 10^−1 | 1.0927 × 10^−1 | 2.8931 × 10^−11 | 1.5601 × 10^−3 | 1.8540 × 10^−1 | 1.1146 × 10^−3
f19 | Rank | 4 | 5 | 7 | 6 | 1 | 3 | 8 | 2
f20 | Mean | −3.2469 × 10^0 | −3.1984 × 10^0 | −3.2729 × 10^0 | −3.2742 × 10^0 | −3.2028 × 10^0 | −3.2026 × 10^0 | −3.2552 × 10^0 | −3.2004 × 10^0
f20 | SD | 7.0311 × 10^−2 | 1.6450 × 10^−2 | 6.9087 × 10^−2 | 5.9130 × 10^−2 | 3.8330 × 10^−4 | 1.5673 × 10^−3 | 7.1978 × 10^−2 | 1.4440 × 10^−2
f20 | Rank | 8 | 7 | 2 | 1 | 3 | 4 | 6 | 5
f21 | Mean | −8.7291 × 10^0 | −1.0153 × 10^1 | −9.9987 × 10^0 | −9.8523 × 10^0 | −1.0153 × 10^1 | −1.0153 × 10^1 | −9.5514 × 10^0 | −1.0153 × 10^1
f21 | SD | 3.0861 × 10^0 | 2.7213 × 10^−15 | 1.0637 × 10^0 | 1.4891 × 10^0 | 3.1098 × 10^−3 | 3.1183 × 10^−15 | 2.0616 × 10^0 | 3.1420 × 10^−8
f21 | Rank | 8 | 2 | 5 | 6 | 4 | 2 | 7 | 2
f22 | Mean | −9.0115 × 10^0 | −1.0403 × 10^1 | −7.8663 × 10^0 | −9.5057 × 10^0 | −1.0403 × 10^1 | −1.0269 × 10^1 | −8.3035 × 10^0 | −1.0403 × 10^1
f22 | SD | 3.0265 × 10^0 | 1.3899 × 10^−15 | 3.5968 × 10^0 | 2.4577 × 10^0 | 2.9170 × 10^−6 | 9.4450 × 10^−1 | 3.4108 × 10^0 | 8.5220 × 10^−8
f22 | Rank | 6 | 1 | 8 | 5 | 3 | 4 | 7 | 2
f23 | Mean | −8.4113 × 10^0 | −5.6190 × 10^0 | −7.5025 × 10^0 | −8.5479 × 10^0 | −3.4246 × 10^0 | −5.9353 × 10^0 | −8.2796 × 10^0 | −8.9298 × 10^0
f23 | SD | 3.3086 × 10^0 | 3.9071 × 10^0 | 3.7837 × 10^0 | 3.4047 × 10^0 | 2.6604 × 10^0 | 3.9568 × 10^0 | 3.5025 × 10^0 | 3.2466 × 10^0
f23 | Rank | 3 | 7 | 5 | 2 | 8 | 6 | 4 | 1
Rank-Count | | 122.5 | 98.5 | 136 | 83.5 | 112 | 78.5 | 125.5 | 71.5
Ave-Rank | | 5.3261 | 4.2826 | 5.9130 | 3.6304 | 4.8696 | 3.4130 | 5.4565 | 3.1087
Overall-Rank | | 6 | 4 | 8 | 3 | 5 | 2 | 7 | 1
Table 7. Friedman test results for the 12 algorithms (the 12 algorithms are split across two column blocks).

Function | NO | PSO | GA | DBO | HBA | HBA1
f1 | 7 | 11 | 12 | 10 | 8 | 9
f2 | 7 | 11 | 12 | 10 | 8 | 9
f3 | 7 | 11 | 12 | 10 | 9 | 8
f4 | 7 | 11 | 12 | 9 | 8 | 10
f7 | 10 | 12 | 11 | 9 | 8 | 7
f8 | 12 | 10 | 11 | 5 | 7 | 4
f9 | 5 | 11 | 12 | 10 | 5 | 5
f10 | 5 | 11 | 12 | 5 | 10 | 5
f11 | 5 | 11 | 12 | 10 | 5 | 5
f15 | 12 | 2 | 11 | 1 | 8 | 4
f17 | 11 | 5.5 | 12 | 5.5 | 5.5 | 5.5
f18 | 12 | 3.5 | 8 | 3.5 | 9 | 3.5
f19 | 12 | 1.5 | 11 | 7 | 5 | 6
f21 | 11 | 9 | 12 | 10 | 8 | 2.5
f22 | 11 | 6 | 12 | 9 | 7 | 2
Rank-Count | 134 | 126.5 | 172 | 114 | 110.5 | 85.5
Ave-Rank | 8.9333 | 8.4333 | 11.4667 | 7.6000 | 7.3667 | 5.7000
Overall-Rank | 11 | 10 | 12 | 9 | 8 | 5.5

Function | HBA2 | HBA3 | HBA12 | HBA13 | HBA23 | MIHBA
f1 | 5 | 4 | 6 | 3 | 1.5 | 1.5
f2 | 5 | 3 | 6 | 4 | 2 | 1
f3 | 3 | 6 | 5 | 4 | 1.5 | 1.5
f4 | 5 | 3 | 4 | 6 | 2 | 1
f7 | 2 | 3 | 6 | 5 | 4 | 1
f8 | 8 | 6 | 2 | 3 | 9 | 1
f9 | 5 | 5 | 5 | 5 | 5 | 5
f10 | 5 | 5 | 5 | 5 | 5 | 5
f11 | 5 | 5 | 5 | 5 | 5 | 5
f15 | 9 | 7 | 5 | 6 | 10 | 3
f17 | 5.5 | 5.5 | 5.5 | 5.5 | 5.5 | 5.5
f18 | 11 | 7 | 3.5 | 3.5 | 10 | 3.5
f19 | 9 | 8 | 1.5 | 4 | 10 | 3
f21 | 5 | 6 | 2.5 | 2.5 | 7 | 2.5
f22 | 10 | 5 | 2 | 4 | 8 | 2
Rank-Count | 92.5 | 78.5 | 64 | 65.5 | 85.5 | 41.5
Ave-Rank | 6.1667 | 5.2333 | 4.2667 | 4.3667 | 5.7000 | 2.7667
Overall-Rank | 7 | 4 | 2 | 3 | 5.5 | 1
Table 8. Results of the Wilcoxon signed-rank test (p-value / significance flag h for each pairwise comparison).

Function | NO vs. MIHBA (p / h) | PSO vs. MIHBA (p / h) | GA vs. MIHBA (p / h) | DBO vs. MIHBA (p / h) | HBA vs. MIHBA (p / h)
f1 | 3.3111 × 10^−20 / 1 | 3.3111 × 10^−20 / 1 | 3.3111 × 10^−20 / 1 | 3.3111 × 10^−20 / 1 | 3.3111 × 10^−20 / 1
f2 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1
f3 | 3.3111 × 10^−20 / 1 | 3.3111 × 10^−20 / 1 | 3.3111 × 10^−20 / 1 | 3.3111 × 10^−20 / 1 | 3.3111 × 10^−20 / 1
f4 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1
f5 | 4.8145 × 10^−18 / 1 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1
f6 | 1.5267 × 10^−17 / 1 | 9.4778 × 10^−4 / 1 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1
f7 | 1.5537 × 10^−12 / 1 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1 | 8.4857 × 10^−14 / 1 | 1.3969 × 10^−3 / 1
f8 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1 | 1.3657 × 10^−17 / 1 | 7.0661 × 10^−18 / 1
f9 | NaN / 0 | 3.3111 × 10^−20 / 1 | 3.3111 × 10^−20 / 1 | 8.2226 × 10^−2 / 0 | NaN / 0
f10 | NaN / 0 | 3.3111 × 10^−20 / 1 | 3.3111 × 10^−20 / 1 | NaN / 0 | 4.3349 × 10^−2 / 1
f11 | NaN / 0 | 3.3111 × 10^−20 / 1 | 3.3111 × 10^−20 / 1 | 3.2709 × 10^−1 / 0 | NaN / 0
f12 | 7.8197 × 10^−18 / 1 | 3.0946 × 10^−13 / 1 | 7.0661 × 10^−18 / 1 | 7.0661 × 10^−18 / 1 | 1.1417 × 10^−17 / 1
f13 | 5.8250 × 10^−18 / 1 | 5.2389 × 10^−15 / 1 | 7.0661 × 10^−18 / 1 | 1.7158 × 10^−12 / 1 | 2.8599 × 10^−15 / 1
f14 | 1.0931 × 10^−17 / 1 | 1.7752 × 10^−2 / 1 | 5.4090 × 10^−12 / 1 | 9.2297 × 10^−14 / 1 | 2.4401 × 10^−12 / 1
f15 | 1.8228 × 10^−12 / 1 | 7.0121 × 10^−18 / 1 | 1.1158 × 10^−8 / 1 | 8.3451 × 10^−12 / 1 | 3.8154 × 10^−3 / 1
f16 | 2.8961 × 10^−13 / 1 | 2.5068 × 10^−1 / 0 | 2.1714 × 10^−8 / 1 | 2.6399 × 10^−4 / 1 | 6.6781 × 10^−6 / 1
f17 | 3.3072 × 10^−20 / 1 | NaN / 0 | 3.3111 × 10^−20 / 1 | 3.2709 × 10^−1 / 0 | NaN / 0
f18 | 4.5418 × 10^−18 / 1 | 8.0551 × 10^−4 / 1 | 6.8062 × 10^−18 / 1 | 1.8974 × 10^−5 / 1 | 7.7869 × 10^−4 / 1
f19 | 7.2628 × 10^−18 / 1 | 2.2866 × 10^−7 / 1 | 6.8385 × 10^−18 / 1 | 2.0178 × 10^−5 / 1 | 3.7619 × 10^−8 / 1
f20 | 7.0661 × 10^−18 / 1 | 1.8392 × 10^−17 / 1 | 7.0661 × 10^−18 / 1 | 2.2274 × 10^−2 / 1 | 1.1090 × 10^−7 / 1
f21 | 3.3111 × 10^−20 / 1 | 1.7289 × 10^−3 / 1 | 7.0661 × 10^−18 / 1 | 4.3822 × 10^−9 / 1 | 2.7414 × 10^−8 / 1
f22 | 3.3111 × 10^−20 / 1 | 4.1793 × 10^−1 / 0 | 7.0661 × 10^−18 / 1 | 2.5307 × 10^−1 / 0 | 2.5381 × 10^−8 / 1
f23 | 3.3222 × 10^−8 / 1 | 9.8724 × 10^−2 / 0 | 3.3827 × 10^−16 / 1 | 1.5238 × 10^−1 / 0 | 2.6397 × 10^−5 / 1
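In Table 8, h = 1 denotes a significant difference at the 5% level (p < 0.05) and h = 0 denotes no significant difference; NaN typically appears when the paired results are identical in every run, leaving no nonzero differences to rank. A minimal SciPy sketch of how such values can be computed, on placeholder data rather than the paper's actual 50-run records:

import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
runs_a = rng.normal(1.0e-3, 1.0e-4, 50)  # placeholder 50-run results, algorithm A
runs_b = rng.normal(5.0e-4, 1.0e-4, 50)  # placeholder 50-run results, algorithm B
runs_c = rng.normal(2.0e-3, 1.0e-4, 50)  # placeholder 50-run results, algorithm C

stat_w, p_w = wilcoxon(runs_a, runs_b)   # paired two-sample test, as in Table 8
h = int(p_w < 0.05)                      # significance flag at the 5% level

stat_f, p_f = friedmanchisquare(runs_a, runs_b, runs_c)  # joint rank test, as in Table 7
print(p_w, h, p_f)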
Table 9. Comparative experimental results for the gear train design problem.

Algorithm | N1 | N2 | N3 | N4 | Optimal Cost
PSO | 46 | 26 | 12 | 47 | 9.9216 × 10^−10
DBO | 60 | 12 | 13 | 18 | 2.7265 × 10^−8
HBA | 54 | 12 | 37 | 57 | 8.8876 × 10^−10
MIHBA | 43 | 19 | 16 | 49 | 2.7009 × 10^−12
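The gear train problem minimizes the squared deviation of the realized transmission ratio from 1/6.931. A quick check of the MIHBA row in Table 9 with this standard objective (our own sketch; the column ordering N1 to N4 is taken from the table):

def gear_train_cost(n1, n2, n3, n4):
    # Squared deviation of the gear ratio (n2 * n3) / (n1 * n4) from 1/6.931
    return (1.0 / 6.931 - (n2 * n3) / (n1 * n4)) ** 2

print(gear_train_cost(43, 19, 16, 49))  # ~2.7e-12, matching the MIHBA cost above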
Table 10. Results of comparative experiments on the pressure vessel design problem.

Algorithm | Ts | Th | R | L | Optimal Cost
PSO | 8.8652 × 10^−1 | 4.3823 × 10^−1 | 4.5933 × 10^1 | 1.3428 × 10^2 | 6.0978 × 10^3
DBO | 9.8781 × 10^−1 | 4.8827 × 10^−1 | 5.1182 × 10^1 | 8.9237 × 10^1 | 6.3489 × 10^3
HBA | 1.0888 × 10^0 | 5.4029 × 10^−1 | 5.6377 × 10^1 | 5.4621 × 10^1 | 6.6714 × 10^3
MIHBA | 7.9079 × 10^−1 | 3.9089 × 10^−1 | 4.0974 × 10^1 | 1.9109 × 10^2 | 5.9073 × 10^3
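The MIHBA row of Table 10 can be checked against the widely used pressure vessel cost function, assumed here to be the formulation used in the paper:

def pressure_vessel_cost(ts, th, r, l):
    # Combined material, forming, and welding cost of the vessel
    return (0.6224 * ts * r * l + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)

print(pressure_vessel_cost(0.79079, 0.39089, 40.974, 191.09))  # ~5.907e3, as above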
Table 11. Comparative experimental results for the three-bar truss design problem.

Algorithm | N1 | N2 | Optimal Cost
PSO | 7.8489 × 10^−1 | 4.1908 × 10^−1 | 263.9078
DBO | 7.9007 × 10^−1 | 4.0433 × 10^−1 | 263.8983
HBA | 7.9240 × 10^−1 | 3.9780 × 10^−1 | 263.9059
MIHBA | 7.8862 × 10^−1 | 4.0840 × 10^−1 | 263.8958
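In the standard three-bar truss benchmark, the objective is the structural weight (2√2·N1 + N2)·l with bar length l = 100 cm; under that assumption, the MIHBA row of Table 11 reproduces directly (our own sketch):

import math

def truss_weight(n1, n2, length=100.0):
    # Truss weight: two diagonal bars of section n1, one vertical bar of section n2
    return (2 * math.sqrt(2) * n1 + n2) * length

print(truss_weight(0.78862, 0.40840))  # ~263.896, matching the MIHBA cost above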
Table 12. Comparative experimental results for the speed reducer design problem.

Algorithm | b | m | z | l1 | l2 | d1 | d2 | Optimal Cost
PSO | 3.6 | 0.7 | 17 | 8.3 | 8.3000 × 10^0 | 3.3522 × 10^0 | 5.5000 × 10^0 | 3.1977 × 10^3
DBO | 3.6 | 0.7 | 17 | 8.3 | 8.3000 × 10^0 | 3.3522 × 10^0 | 5.2869 × 10^0 | 3.0560 × 10^3
HBA | 3.6 | 0.7 | 17 | 8.3 | 7.7154 × 10^0 | 3.9000 × 10^0 | 5.2867 × 10^0 | 3.2093 × 10^3
MIHBA | 3.6 | 0.7 | 17 | 7.3 | 7.7153 × 10^0 | 3.3502 × 10^0 | 5.2867 × 10^0 | 2.9945 × 10^3
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
