Article

Differential Mutation Incorporated Quantum Honey Badger Algorithm with Dynamic Opposite Learning and Laplace Crossover for Fuzzy Front-End Product Design

School of Economics and Management, Xi’an University of Technology, Xi’an 710054, China
* Author to whom correspondence should be addressed.
Biomimetics 2024, 9(1), 21; https://doi.org/10.3390/biomimetics9010021
Submission received: 30 October 2023 / Revised: 12 December 2023 / Accepted: 25 December 2023 / Published: 2 January 2024
(This article belongs to the Special Issue Nature-Inspired Metaheuristic Optimization Algorithms)

Abstract:
In this paper, a multi-strategy enhanced Honey Badger Algorithm (EHBA) is proposed to address the Honey Badger Algorithm's (HBA) tendency to converge to local optima and its difficulty in achieving fast convergence. A dynamic opposite learning strategy broadens the search area of the population, enhances global search ability, and improves population diversity. In the honey harvesting stage of the honey badger (exploitation), differential mutation strategies are combined with either a local quantum search strategy, which strengthens local search and improves optimization accuracy, or a dynamic Laplace crossover operator, which improves convergence speed, thereby reducing the odds of the HBA sinking into local optima. Comparative experiments with other algorithms on the CEC2017, CEC2020, and CEC2022 test sets and on three engineering examples verify that EHBA has good solving performance. The comparative analysis of convergence graphs, box plots, and algorithm performance tests shows that, compared with the other eight algorithms, EHBA achieves better results, significantly improving optimization ability and convergence speed, and has good application prospects in the field of optimization problems.

1. Introduction

With the continuous innovation and development of technology in recent years, engineering problems in fields ranging from social life to scientific research have generated many complex optimization needs [1,2,3,4], characterized by dynamic changes, nonlinearity, uncertainty, and high dimensionality. Traditional methods, including the gradient descent method, the conjugate gradient method, the variational method, and Newton's method, find it difficult to obtain optimal solutions to these problems within a reasonable time or accuracy, and can no longer meet practical needs. In addition, their efficiency is relatively low when solving real-world engineering problems with large, nonlinear search spaces. By contrast, meta-heuristics (MHs) are stochastic optimization methods that do not require gradients. Owing to their self-organizing, adaptive, and self-learning characteristics, they have demonstrated their ability to solve real-world engineering design problems in different fields. Therefore, with the continuous advancement of society and the development of artificial intelligence, optimization methods based on MHs algorithms have flourished.
MHs methods solve optimization problems by simulating biological behaviors, physical facts, and chemical phenomena. They fall into four categories: Swarm Intelligence (SI) algorithms, Evolutionary Algorithms (EA) [5], Physics-based Algorithms (PhA) [6,7,8], and human-based algorithms [9,10,11]. Recently, these behaviors have been widely modeled in various optimization techniques, summarized in Table 1. Among them, SI is a class of meta-heuristic algorithms that explore the search space by mimicking the collective behavior patterns of biological and non-living systems in nature [12]. SI has advantages such as good parallelism, autonomous exploration, easy implementation, strong flexibility, and few parameters. In general, the structure of good SI optimization algorithms is simple: their theories and mathematical models originate from nature, and they solve practical problems by simulating it, which also makes it easy to incorporate variant methods in line with state-of-the-art algorithms. Moreover, these optimization algorithms can be treated as black boxes that map a set of input values to output values. A further important characteristic of SI algorithms is their randomness, which lets them explore the entire variable space and effectively escape local optima. They seek the optimal result through probabilistic search, without requiring much prior knowledge or analysis of the internal laws and correlations of the data; they only need to learn from the data itself, self-organizing and adapting to solve optimization problems, which makes them very suitable for NP-complete problems [13].
In the last few years, SI, regarded as a branch of artificial intelligence, has been widely used in areas such as path planning, mechanical control, engineering scheduling, feature extraction, image processing, and MLP training [14,15,16,17,18,19,20,21], and has achieved significant development. The No Free Lunch (NFL) theorem proposed by Wolpert et al. [22] logically proves that no single algorithm can solve all optimization problems. Therefore, research in the field of SI algorithms remains very active, with many experts and scholars improving current algorithms and proposing new ones. Typical examples include Particle Swarm Optimization (PSO) [23] and Ant Colony Optimization (ACO) [24], which were inspired by the cooperative foraging behavior of bird flocks and ant colonies, respectively. Over the past few years, a number of researchers have contributed to the development of SI, proposing various algorithms that simulate the habits of natural organisms. Yang et al. [25] presented the Bat Algorithm (BA) to simulate bats' use of sonar for detection and localization. Gandomi et al. [26] developed the Cuckoo Search algorithm (CS) according to the reproductive characteristics of cuckoo birds; the optimal solutions obtained by CS are much better than those of existing methods because CS uses unique search features. References [27,28,29] proposed the Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), and Salp Swarm Algorithm (SSA) by simulating the hunting behavior of grey wolves, humpback whales, and salps, respectively. Compared with well-known meta-heuristic algorithms, GWO provides highly competitive results, and results on classical engineering design problems and practical applications show that it suits challenging problems with unknown search spaces. WOA is strongly competitive with existing meta-heuristic algorithms and traditional methods. SSA can effectively improve the initial random solutions and converge toward the optimum, and case studies demonstrate its advantages in solving real-world problems with difficult, unknown search spaces.
Mirjalili et al. [30] proposed the Sea-horse Optimizer (SHO) based on the movement, predation, and reproduction of sea horses; these three intelligent behaviors are modeled mathematically to balance SHO's local exploitation and global exploration. The experimental results indicate that SHO is a high-performance optimizer that adapts well to constrained problems. Abualigah et al. [31] introduced the Reptile Search Algorithm (RSA), derived from the hunting activity of crocodiles; its search method is unique and achieves good results. The Sine Cosine Algorithm (SCA) [32], based on mathematical models of the sine and cosine functions, can effectively explore different regions of the search space, avoid local optima, converge to the global optimum, and exploit promising regions during optimization. SCA has produced smooth airfoil shapes with very low drag, indicating its effectiveness in solving practical problems with constraints and unknown search spaces. Subsequently, the Tunicate Swarm Algorithm (TSA) [33], Wild Horse Optimizer (WHO) [34], Arithmetic Optimization Algorithm (AOA) [35], and Moth Flame Optimization (MFO) [36] were proposed.
Table 1. A brief review of meta-heuristic algorithms.
| Algorithm | Abbrev. | Inspiration |
|---|---|---|
| Particle Swarm Optimization | PSO [23] | The predation behavior of birds |
| Genetic Algorithm | GA [5] | Darwin's theory |
| Gravitational Search Algorithm | GSA [6] | The law of gravitational interaction |
| Teaching Learning-Based Optimization | TLBO [8] | The influence of a teacher on learners |
| Ant Colony Optimization | ACO [24] | The foraging behavior of ants |
| Bat Algorithm | BA [25] | The echolocation behavior of bats |
| Cuckoo Search algorithm | CS [26] | The reproductive characteristics of cuckoo birds |
| Grey Wolf Optimizer | GWO [27] | The leadership hierarchy and hunting mechanism of grey wolves |
| Whale Optimization Algorithm | WOA [28] | The bubble-net hunting behavior of humpback whales |
| Salp Swarm Algorithm | SSA [29] | The swarming behaviour of salps when navigating and foraging in oceans |
| Sea-horse Optimizer | SHO [30] | The movement, predation, and breeding behaviors of sea horses |
| Reptile Search Algorithm | RSA [31] | The hunting behavior of crocodiles |
| Tunicate Swarm Algorithm | TSA [33] | The group behavior of tunicates in the ocean |
| Sine Cosine Algorithm | SCA [32] | Mathematical models of sine and cosine functions |
| Wild Horse Optimizer | WHO [34] | The decency behaviour of horses |
| Arithmetic Optimization Algorithm | AOA [35] | The main arithmetic operators in mathematics |
| Moth Flame Optimization | MFO [36] | The navigation method of moths |
| Honey Badger Algorithm | HBA [37] | The intelligent foraging behavior of honey badgers |
The basic framework of the MHs algorithms mentioned above is built on two stages, exploration and exploitation. An MHs algorithm needs to strike a good balance between these stages to be efficient and robust, thereby ensuring the best results in one or more specific applications. Exploration searches distant regions of the feasible space to obtain better candidate solutions. After the exploration phase, exploiting the search space is crucial: the algorithm converges toward a promising solution and is expected to reach the best one through a local convergence strategy [19].
A good balance between exploration and exploitation, together with the prevention of falling into local solutions, is the key requirement for MHs algorithms to solve engineering optimization problems; it ensures a wide search of the space and the acquisition of the global optimum. The literature shows that researchers mainly (1) hybridize two or more strategies, where the improved meta-heuristic weighs the advantages and disadvantages of each component algorithm and adopts the corresponding strategies in a targeted manner to improve optimization efficiency, or (2) propose new heuristic optimization algorithms that are better adapted to complex engineering optimization problems. However, a newly proposed algorithm must be mature and generalizable before migrating to a new project.
The Honey Badger Algorithm (HBA) [37] was developed by Fatma et al. What distinguishes HBA from other meta-heuristic algorithms is its use of two mechanisms to update individual positions: the foraging behavior of honey badgers in the digging mode and in the honey-picking mode, which gives it strong searching ability on complex practical problems. The dynamic search behavior of the honey badger, with its digging and honey-finding approaches, is formulated into the exploration and exploitation phases of HBA.
Secondly, compared with algorithms such as PSO, WOA, and GWO, HBA has attracted wide attention and been used in various fields because of its high flexibility, simple structure, high convergence accuracy, and operability. HBA has successfully solved the speed reducer design problem, the tension/compression spring design problem, and other constrained engineering problems.
Therefore, experts have recently made various improvements to HBA to better adapt it to different problems. For example, Akdag et al. proposed a developed honey badger algorithm (DHBA) to solve the optimal power flow problem [38]. Han et al. proposed an improved chaotic honey badger algorithm for the efficient modeling and optimization of proton exchange membrane fuel cells [39]. The literature [40] proposes an enhanced HBA (LHBA) based on the Levy flight strategy and applies it to the optimization of robot grippers; the results show that LHBA can minimize the difference between the minimum and maximum gripping forces and successfully solve this optimization problem. To improve the overall optimization performance of the basic HBA, the literature [41] proposes an improved HBA named SaCHBA_PDN, based on the Bernoulli shift map, piecewise optimal decreasing neighborhood, and self-adaptive horizontal crossover, and applies it to the path planning of unmanned aerial vehicles (UAVs); test results show that SaCHBA_PDN outperforms other optimization algorithms, and simulations show that it can obtain more feasible and efficient paths in different obstacle environments. However, HBA still tends to fall into local optima and suffers limited accuracy when facing problems with multiple local solutions [37], and the experiments in this paper also show room for improvement in its optimization accuracy and stability. Therefore, this paper attempts to remedy some limitations of the HBA.
To further improve the performance of the original HBA, it was enhanced by combining four different strategies: dynamic opposite learning, differential mutation operations, local quantum search, and dynamic Laplacian crossover operators, forming the enhanced Honey Badger Algorithm (EHBA) to be studied in this paper. What is more, the EHBA has been successfully introduced to a number of typical practical engineering problems. To summarize, the main contributions made include:
(a)
A dynamic opposite learning strategy is adopted for HBA initialization to enhance population diversity and the quality of candidate solutions, improving the performance of the original HBA and increasing its convergence speed.
(b)
Differential mutation operations are combined to increase population diversity, enhance the HBA's capability to jump out of local optima, and, to some extent, increase the precision of HBA.
(c)
Local quantum search and dynamic Laplace crossover operators are selectively used in the digging and honey harvesting stages to balance the exploitation and exploration of the algorithm.
(d)
Performance tests and analyses of EHBA were conducted on the CEC2017, CEC2020, and CEC2022 test sets. The feasibility, stability, and high accuracy of the proposed method were verified on these test sets. The improved EHBA was then used to design and solve three typical practical engineering cases, further verifying its practicality.
The rest of this article is organized as follows: Section 2 outlines the basic theory of the original honey badger algorithm; Section 3 combines multiple strategies to establish the enhanced honey badger algorithm (EHBA) and provides the specific steps of the improved algorithm; Section 4 presents computational and statistical analyses on the CEC2017, CEC2020, and CEC2022 test sets to assess the effectiveness of the developed EHBA; Section 5 provides three specific engineering examples and analyses to validate the engineering utility of EHBA; finally, conclusions and future research are presented in Section 6.

2. Theoretical Basis of Honey Badger Algorithm

The HBA simulates the foraging behavior of honey badgers in the digging and honey modes. In the first mode, the honey badger uses its olfactory capabilities to approach the prey's position; once close, it moves around the prey to select a suitable place to excavate and capture it. In the second mode, the honey badger directly tracks the honeycomb under the guidance of the honey guide bird. In theory, HBA has both exploration and exploitation stages, so it can be called a global optimization algorithm. The feeding activity of the honey badger exhibits strong optimization capacity and a rapid convergence rate.

2.1. Population Initialization Stage

As with all meta-heuristics, HBA starts the optimization process by generating a uniformly distributed randomized population within a set boundary range. According to Equation (1), initialize the population and individual position of honey badgers.
$$P_i = r_1 \times (Upb_i - Lob_i) + Lob_i,$$
where $r_1$ is a random value within [0, 1], and $Lob_i$ and $Upb_i$ represent the lower and upper bounds of the problem to be solved, respectively.
The initial population matrix P is formed as shown in Equation (2):
$$P = \begin{bmatrix} P_1 \\ P_2 \\ \vdots \\ P_N \end{bmatrix} = \begin{bmatrix} p_{1,1} & \cdots & p_{1,j} & \cdots & p_{1,Dim} \\ \vdots & \ddots & \vdots & & \vdots \\ p_{i,1} & \cdots & p_{i,j} & \cdots & p_{i,Dim} \\ \vdots & & \vdots & \ddots & \vdots \\ p_{N,1} & \cdots & p_{N,j} & \cdots & p_{N,Dim} \end{bmatrix},$$
where $P_i$ (i = 1, 2, …, N) represents the position vector of the i-th candidate solution, and $p_{i,j}$ represents the position of the i-th candidate honey badger in the j-th dimension.
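As an illustration, the following minimal Python sketch implements Equations (1) and (2); the function and variable names are ours, not those of the paper's implementation (which was written in MATLAB):

```python
import numpy as np

def initialize_population(n, dim, lob, upb, rng=None):
    """Random initialization per Equation (1): P = r1 * (Upb - Lob) + Lob.

    lob and upb may be scalars or length-dim arrays of per-dimension bounds.
    Returns the (N x Dim) population matrix P of Equation (2).
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random((n, dim))  # r1 ~ U(0, 1), drawn per individual and per dimension
    return r1 * (np.asarray(upb) - np.asarray(lob)) + np.asarray(lob)
```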
As mentioned earlier, there are two parts to update the position process in the HBA, namely the “digging stage” and the “honey stage”.

2.2. Digging Stage (Exploration)

Some honey badgers approach their prey through their sense of smell, and this unique foraging behavior defines the digging stage. In addition to the position update formula, the digging stage also involves three related concepts: the smell intensity, the density factor, and the search-direction flag.

2.2.1. Definition of the Intensity I

The intensity I is related to the concentration strength of the prey and to the distance between the prey and the honey badger: it follows the inverse square law [42], as given in Equation (4).
$$S = (P_i - P_{i+1})^2,$$
$$I_i = r_2 \times \frac{S}{4\pi d_i^2},$$
$$d_i = P_{prey} - P_i,$$
where $I_i$ in Equation (4) is the intensity of the prey's odor: if the odor is strong, the movement will be rapid, and vice versa. S denotes the source or concentration intensity (location of prey), $d_i$ denotes the distance between the current honey badger candidate and the prey, and $r_2$ is a random number between 0 and 1.

2.2.2. Update Density Factor α

The density factor (α) governs the time-varying stochasticity to guarantee a steady shift from exploration to exploitation. The diminishing factor α is refreshed to lower the stochasticity over time using Equation (6), which reduces with iterations.
$$\alpha = C \times \exp\left(-\frac{t}{T}\right),$$
where T and t are the maximum and current numbers of iterations, respectively, and C is a constant (default 2).

2.2.3. Definition of the Search Orientation F

The next few steps are all used to help jump out of local optimum zones. Here, the HBA uses a flag F that switches the search direction, giving individuals ample opportunity to scan the search space thoroughly.
$$F = \begin{cases} 1, & \text{if } r_3 \le 0.5 \\ -1, & \text{otherwise}, \end{cases}$$
where $r_3$ is a random number in the range [0, 1].

2.2.4. Update Location of Digging Stage

During the digging stage, honey badgers depend strongly on the scent intensity, the distance to the prey, and the density factor α. A badger may encounter interference during excavation, which can hinder its search for better prey sites; the honey badger therefore executes movements resembling the shape of a cardioid, which Equation (8) mimics.
$$P_{New} = P_{prey} + F \times \beta \times I_i \times P_{prey} + F \times r_4 \times \alpha \times d_i \times \left| \cos(2\pi r_5) \times [1 - \cos(2\pi r_6)] \right|,$$
where $P_{prey}$ is the globally optimal prey location found so far; β ≥ 1 (default 6) is the honey badger's capacity to forage; $d_i$ is given by Equation (5); $r_4$, $r_5$, and $r_6$ are three different random numbers between 0 and 1; and F is the flag that changes the search direction, defined in Equation (7).

2.3. Honey Harvesting Stage (Exploitation)

The situation where the honey badger follows the honey guide bird to the hive is illustrated in Equation (9).
$$P_{New} = P_{prey} + F \times r_7 \times \alpha \times d_i,$$
where $P_{New}$ and $P_{prey}$ represent the new individual location and the location of the prey, respectively, and $r_7$ is a random number in the range [0, 1]. F and α are defined by Equations (7) and (6), respectively. From Equation (9), it can be observed that the honey badger searches near $P_{prey}$, with the search scale governed by the distance $d_i$.
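To make the two update rules concrete, the sketch below strings Equations (3)–(9) together in Python. It is a schematic reading of the formulas under our own naming (for example, the small epsilon guarding division by zero is our addition), not the authors' code:

```python
import numpy as np

def hba_update(P, P_prey, t, T, beta=6.0, C=2.0, rng=None):
    """One HBA position-update pass over the population (Equations (3)-(9))."""
    rng = rng or np.random.default_rng()
    N, dim = P.shape
    alpha = C * np.exp(-t / T)                # density factor, Eq. (6)
    P_new = np.empty_like(P)
    for i in range(N):
        S = (P[i] - P[(i + 1) % N]) ** 2      # source strength, Eq. (3)
        d = P_prey - P[i]                     # distance to prey, Eq. (5)
        # Smell intensity, Eq. (4); 1e-12 guards division by zero (our addition)
        I = rng.random(dim) * S / (4 * np.pi * d**2 + 1e-12)
        F = 1.0 if rng.random() <= 0.5 else -1.0   # search-direction flag, Eq. (7)
        if rng.random() < 0.5:                # digging stage, Eq. (8)
            r4, r5, r6 = rng.random(3)
            P_new[i] = (P_prey + F * beta * I * P_prey
                        + F * r4 * alpha * d
                        * np.abs(np.cos(2 * np.pi * r5) * (1 - np.cos(2 * np.pi * r6))))
        else:                                 # honey stage, Eq. (9)
            P_new[i] = P_prey + F * rng.random() * alpha * d
    return P_new
```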

3. An Enhanced Honey Badger Algorithm Combining Multiple Strategies

The original HBA suffers from inadequate solution precision, insufficient global exploration capability, and difficulty in jumping out of local extrema. To address these issues, an enhanced honey badger search algorithm is proposed that combines a dynamic opposite learning strategy, a differential mutation strategy, a dynamic Laplace crossover, and a quantum local search strategy. First, in population initialization, the dynamic opposite learning strategy enriches the initial population. The differential mutation operation increases the diversity of the population and prevents HBA from falling into local optima. Meanwhile, introducing quantum local search or the Laplace operator for dynamic crossover in the local exploitation stage lets the optimal honey badger individual adopt different crossover strategies at different stages of exploitation: a larger step to escape local extrema in the early stage and a finer search in the later stage.

3.1. Dynamic Opposite Learning Strategy

Opposite learning is considered a new technique in intelligent computing that aims to find the opposite of the current solution and then, through fitness evaluation, select and keep the better of the two. Initializing through opposite learning strategies can effectively improve population diversity and help the population escape local optima.
In meta-heuristic optimization algorithms, the initial population is usually generated randomly, which only ensures the spread of the population and cannot guarantee the quality of the initial solutions. Nevertheless, studies have shown that initialization quality significantly affects the convergence rate and precision of HBA. To address this, scholars have introduced various strategies into the initialization step, commonly chaotic initialization, opposite learning, and Cauchy random generation. This section introduces a dynamic opposite learning strategy to strengthen the quality of the initial solutions [43], using Equation (10):
$$P_{Dobl} = P_{Init} + r_8 \times \left( r_9 \times (Upb + Lob - P_{Init}) - P_{Init} \right),$$
where $P_{Init}$ is the randomly created population and $P_{Dobl}$ is the dynamic opposite initial population; $r_8$ and $r_9$ are different random numbers within (0, 1). First, $P_{Init}$ and $P_{Dobl}$ are produced, then they are merged into a new population $P_{New} = \{P_{Dobl} \cup P_{Init}\}$. The objective function of $P_{New}$ is calculated, and a greedy strategy lets the candidates compete fully within the population, selecting the best N honey badgers as the initial population. This brings the population closer to the optimal solution from the start, thereby accelerating the convergence of the HBA.
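A compact Python sketch of this initialization might read as follows; `fitness` is any objective function handle, and clipping out-of-range opposite points back to the bounds is our assumption about the boundary check:

```python
import numpy as np

def dol_initialize(n, dim, lob, upb, fitness, rng=None):
    """Dynamic opposite learning initialization (Equation (10)), a sketch:
    generate a random population and its dynamic-opposite counterpart,
    then greedily keep the best N of the 2N candidates."""
    rng = rng or np.random.default_rng()
    P_init = rng.random((n, dim)) * (upb - lob) + lob            # Eq. (1)
    r8, r9 = rng.random((n, dim)), rng.random((n, dim))
    P_dobl = P_init + r8 * (r9 * (upb + lob - P_init) - P_init)  # Eq. (10)
    P_dobl = np.clip(P_dobl, lob, upb)                           # boundary check (assumed)
    merged = np.vstack([P_init, P_dobl])                         # P_New = {P_Dobl U P_Init}
    order = np.argsort([fitness(p) for p in merged])
    return merged[order[:n]]                                     # best N candidates
```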

3.2. Differential Mutation Operation

The differential evolution (DE) algorithm is a parallel evolutionary algorithm consisting of three operations: mutation, crossover, and selection [44,45,46]. Through successive iterations, DE keeps the best individuals, eliminates the worst, and steers the search toward the global optimal solution. The three operations proceed as follows:
  • Mutation. Mutation calculates the weighted position difference between two individuals in the population and adds it to the position of a third, randomly chosen individual to generate a mutated individual, as in Equation (11):
$$v_i^{t+1} = p_{\bar{r}_1}^t + F_s \times \left( p_{\bar{r}_2}^t - p_{\bar{r}_3}^t \right),$$
where $p_{\bar{r}_1}^t$, $p_{\bar{r}_2}^t$, and $p_{\bar{r}_3}^t$ are mutually distinct individuals in the t-th iteration, and $F_s$ is the adaptive mutation scaling factor.
  • Crossover. By exchanging some components of the current population with the corresponding components of the mutant population according to certain rules, a crossover population $m_i^t$ is created that enriches the diversity of the population:
$$m_i^{t+1} = \begin{cases} v_i^{t+1}, & \text{if } r_j < CR \text{ or } j = j_{rand} \\ p_i^t, & \text{otherwise}, \end{cases}$$
where CR ∈ [0, 1] is the crossover probability, $r_j$ is a random number in [0, 1], and $j_{rand}$ is a random integer in $[1, Dim]$.
  • Selection. If the fitness value of the crossover vector $m_i^t$ is not inferior to that of the target individual $p_i^t$, the crossover vector replaces the target individual in the next generation:
$$p_i^{t+1} = \begin{cases} m_i^t, & \text{if } f(m_i^t) < f(p_i^t) \\ p_i^t, & \text{otherwise}. \end{cases}$$
The DE algorithm perturbs individuals using the variation information between them, thereby increasing the diversity of individuals while searching for the optimum. It has the merits of simple processing, stable search, and easy implementation. In this contribution, the new individual obtained by the DE operations replaces the optimal individual of the original HBA and then drives the evolution of the population, which enhances precision and exploration while preserving the convergence rate of the HBA.
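The three DE operations of Equations (11)–(13) can be sketched in a few lines of Python; the control-parameter values Fs = 0.5 and CR = 0.9 are common illustrative defaults, not values taken from this paper:

```python
import numpy as np

def de_step(P, fitness, Fs=0.5, CR=0.9, rng=None):
    """One round of DE mutation/crossover/selection (Equations (11)-(13)).
    Requires a population of at least four individuals."""
    rng = rng or np.random.default_rng()
    N, dim = P.shape
    P_next = P.copy()
    for i in range(N):
        # Mutation, Eq. (11): three mutually distinct individuals, all != i
        r1, r2, r3 = rng.choice([k for k in range(N) if k != i], 3, replace=False)
        v = P[r1] + Fs * (P[r2] - P[r3])
        # Crossover, Eq. (12): inherit at least one dimension from the mutant
        j_rand = rng.integers(dim)
        mask = rng.random(dim) < CR
        mask[j_rand] = True
        m = np.where(mask, v, P[i])
        # Selection, Eq. (13): greedy replacement
        if fitness(m) < fitness(P[i]):
            P_next[i] = m
    return P_next
```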

3.3. Quantum Local Search

First, calculate the adaptive expansion coefficient of the current generation [47]:
$$\beta(t) = (\beta_{max} - \beta_{min}) \times \frac{T - t}{T} + \beta_{min},$$
where βmax = 1 and βmin = 0.5 are the maximum and minimum values of the preset adaptive expansion coefficient. Generate an attraction point Qi based on individual historical average optimal position and group historical optimal position:
$$Q_i = \varphi \bar{P} + (1 - \varphi) P_b.$$
Assuming that the position vector of each honey badger has quantum behavior, the state of the position vector is described using the wave function φ(x, t). The position equation of the new position vector obtained through Monte Carlo random simulation can be seen in Equation (16).
$$P(t+1) = Q_i \pm \beta \left| \bar{P} - P \right| \ln(1/u).$$
In Equations (15) and (16), φ and u represent D-dimensional random vectors uniformly distributed on (0, 1). The introduced quantum local search strategy generates an attraction point according to Equation (15), and the honey badger population moves in a one-dimensional potential well centered on this attraction point, expanding the variety of new individuals and giving the population a better distribution. Finally, the position is updated according to Equation (16), decreasing the possibility of HBA entering local optima and giving the HBA better exploration performance, which helps balance exploration and exploitation.
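The following Python sketch renders Equations (14)–(16); resolving the "±" in Equation (16) with a random sign per dimension is our assumption, in line with common QPSO-style implementations:

```python
import numpy as np

def quantum_local_search(P, P_mean, P_best, t, T,
                         beta_max=1.0, beta_min=0.5, rng=None):
    """Quantum local search sketch (Equations (14)-(16)); P_mean is the
    individuals' historical mean best position and P_best the global best."""
    rng = rng or np.random.default_rng()
    beta = (beta_max - beta_min) * (T - t) / T + beta_min   # expansion coefficient, Eq. (14)
    N, dim = P.shape
    P_new = np.empty_like(P)
    for i in range(N):
        phi = rng.random(dim)
        u = rng.random(dim)
        Q = phi * P_mean + (1 - phi) * P_best               # attraction point, Eq. (15)
        sign = np.where(rng.random(dim) < 0.5, 1.0, -1.0)   # random "+/-" (our assumption)
        P_new[i] = Q + sign * beta * np.abs(P_mean - P[i]) * np.log(1.0 / u)  # Eq. (16)
    return P_new
```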

3.4. Dynamic Laplace Crossover

The Laplace crossover operator was proposed by Kusum et al. [48,49]. The density function of the Laplace operator distribution can be described in Equation (17).
$$D(p) = \frac{1}{2b} \exp\left( -\frac{|p - a|}{b} \right).$$
In Equation (17), a ∈ R is the location parameter, usually taken as 0, and b > 0 is the scale parameter. First, a random number ε uniformly distributed on the interval [0, 1] is generated, and the random number λ is obtained from Equation (18):
$$\lambda = \begin{cases} a - b \log_e(\varepsilon), & \varepsilon \le \frac{1}{2} \\ a + b \log_e(\varepsilon), & \varepsilon > \frac{1}{2}, \end{cases}$$
In Laplace crossover, two offspring $o^{(1)} = (o_1^{(1)}, o_2^{(1)}, \ldots, o_n^{(1)})$ and $o^{(2)} = (o_1^{(2)}, o_2^{(2)}, \ldots, o_n^{(2)})$ are generated from a pair of parents $p^{(1)} = (p_1^{(1)}, p_2^{(1)}, \ldots, p_n^{(1)})$ and $p^{(2)} = (p_1^{(2)}, p_2^{(2)}, \ldots, p_n^{(2)})$ by Equations (19) and (20):
$$o_i^{(1)} = p_i^{(1)} + \lambda \left| p_i^{(1)} - p_i^{(2)} \right|,$$
$$o_i^{(2)} = p_i^{(2)} + \lambda \left| p_i^{(1)} - p_i^{(2)} \right|.$$
To match the iterative law of the algorithm, this article adopts a dynamic Laplace crossover strategy for the crossover mutation operations. Figure 1 shows the Laplace crossover density function curves for different values of b, where the solid line corresponds to b = 1 and the dashed line to b = 0.5. Vertically, the peak of the dashed line near the central value is higher than that of the solid line, while its values at both ends are smaller; horizontally, the solid line decays more slowly toward both ends of the horizontal axis, while the dashed line falls off more quickly.
To keep the algorithm easy to operate and universal, this article dynamically introduces the Laplace crossover operator in the local exploitation stage to help the honey badger population break free from local extrema and avoid premature convergence. Because b = 1 is more likely than b = 0.5 to generate random numbers far from the origin and has a wider distribution range, b = 1 is selected for Laplace crossover in the early stage of local exploitation, letting honey badger individuals search the solution range with a larger step size and better escape local extrema, via Equation (21):
$$\lambda = \begin{cases} a - \log_e(\varepsilon), & \varepsilon \le \frac{1}{2} \\ a + \log_e(\varepsilon), & \varepsilon > \frac{1}{2}, \end{cases} \quad r \le 1 - \frac{t}{T}.$$
Because b = 0.5 has a high probability of generating random numbers near the central value, b = 0.5 is chosen for Laplace crossover in the later stage of local exploitation, letting honey badger individuals walk around the optimal solution with a smaller step length, finely search that region, and improve the probability of finding the global optimum. The specific expression is defined in Equation (22):
$$\lambda = \begin{cases} a - \frac{1}{2} \log_e(\varepsilon), & \varepsilon \le \frac{1}{2} \\ a + \frac{1}{2} \log_e(\varepsilon), & \varepsilon > \frac{1}{2}, \end{cases} \quad r > 1 - \frac{t}{T}.$$
In Equations (21) and (22), 1 − t/T is a function that decreases monotonically from 1 toward 0. In the early stages of exploitation, t is small, so 1 − t/T is large; when 1 − t/T is greater than r, the algorithm randomly selects a honey badger and performs a crossover with the current optimal individual according to Equation (21). In the later stage, t is large, so 1 − t/T is small; when it is less than r, the crossover with the current optimal individual follows Equation (22). The dynamic Laplace crossover thus generates early offspring that explore the search space with a larger step size, increasing the probability of jumping out of local extrema and avoiding premature convergence, and later offspring that stay closer to their parents, finely searching the space near the optimal solution with a smaller step size, increasing the probability of finding the global optimum, and helping individuals converge to the global optimum faster. Overall, introducing the Laplace crossover operator in the exploitation stage enables the honey badger population to perform adaptive dynamic crossover according to the iterative process, improving the convergence rate and solving ability.
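A minimal sketch of this dynamic λ generation and the resulting crossover (Equations (18)–(22)) might look as follows; packaging the b-switch into one helper is our own arrangement:

```python
import numpy as np

def laplace_lambda(t, T, r, a=0.0, rng=None):
    """Dynamic Laplace random number (Equations (21) and (22)): b = 1 while
    r <= 1 - t/T (early stage, large steps), b = 0.5 otherwise (late, fine steps)."""
    rng = rng or np.random.default_rng()
    eps = rng.random()                       # epsilon ~ U(0, 1), Eq. (18)
    b = 1.0 if r <= 1.0 - t / T else 0.5
    return a - b * np.log(eps) if eps <= 0.5 else a + b * np.log(eps)

def laplace_crossover(p1, p2, lam):
    """Laplace crossover (Equations (19) and (20)): offspring placed around
    the parents at a distance lam * |p1 - p2| in each dimension."""
    gap = np.abs(p1 - p2)
    return p1 + lam * gap, p2 + lam * gap
```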

3.5. The Specific Steps of the Enhanced Honey Badger Algorithm

The population is initialized using dynamic opposite learning, replacing the random method of HBA and reducing the uncertainty of the algorithm. This initialization strategy generates high-quality populations with good diversity, laying the foundation for subsequent iterations. Introducing the Laplace crossover strategy or the local quantum search strategy in the local exploitation stage makes the honey badger group adaptively select update strategies during the iteration, helping it converge to the global optimum faster and decreasing the probability of premature convergence of the HBA.
Step One. Initialize the population by Equation (1), perform the dynamic opposite learning population with Equation (10), and retain the optimal individuals according to the greedy strategy to enter the main program iteration.
Step Two. Calculate the objective function value of each honey badger, and record the optimal objective function value FBest and the optimal individual position PBest based on the results.
Step Three. Define intensity I with Equation (4) and density factor by Equation (6).
Step Four. Perform the differential mutation operation with Equations (11)–(13).
Step Five. If r < 0.5 , update the values through the digging stage with Equation (8).
Step Six. If the random number $r_1 > 1 - t/T$, update the individual position of the population based on the quantum local search, Equation (16); otherwise, if the random number $r_2 > 1 - t/T$, update the individual position based on the dynamic Laplace crossover, Equation (22); if $r_2 \le 1 - t/T$, update the individual position following Equation (21).
Step Seven. After updating, judge whether it exceeds the upper and lower bounds of the position. If a certain dimension of the individual exceeds the upper bound, replace its value with the upper bound Upb. If a certain dimension of the individual exceeds the lower bound, replace its value with the lower bound Lob.
Step Eight. Evaluate the fitness value and judge whether it is better than FBest; if so, record the fitness value of this candidate solution as the new FBest and update the optimal individual position as PBest.
Step Nine. As the number of iteration increases, if t < T , return to Step Three; otherwise, output the optimal value FBest and the corresponding position PBest.
For the sake of expressing the EHBA more clearly, the pseudo-code of EHBA is listed in Algorithm 1 and Figure 2 gives the flowchart of the EHBA.
Algorithm 1: The Proposed EHBA
Input: The parameters of HBA such as β, C, N, Dim, and the maximum number of iterations T.
Output: Optimal fitness value FBest and its position PBest.
// Dynamic opposite learning initialization
Generate the random population PInit by Equation (1).
For i = 1 to N do
    r8 = rand(0, 1), r9 = rand(0, 1)
    For j = 1 to Dim do
        P(i,j)Dobl = P(i,j)Init + r8 × (r9 × (Upbj + Lobj − P(i,j)Init) − P(i,j)Init)
    End For
End For
Check the boundaries.
Use a greedy strategy to select the best N individuals from the 2N candidates as the initial population.
Evaluate all fitness values F(Pi), i = 1, 2, …, N; save the best position PBest and fitness FBest.
While (t < T) do
    Update the decreasing density factor α by Equation (6).
    For i = 1 to N do
        Calculate the intensity Ii by Equation (4).
    End For
    // Differential mutation operation, Equations (11)–(13)
    For i = 1 to N do
        Perform mutation by Equation (11).
        For j = 1 to Dim do
            Perform crossover by Equation (12).
        End For
        Perform selection by Equation (13).
    End For
    For i = 1 to N do
        If r < 0.5 then
            // Digging stage
            Update the location PNew by Equation (8).
        Else if r1 > 1 − t/T then
            // Quantum local search
            Update the location PNew by Equations (14)–(16).
        Else
            // Dynamic Laplace crossover
            If r2 ≤ 1 − t/T then
                Update the location PNew by Equation (21).
            Else
                Update the location PNew by Equation (22).
            End If
        End If
        Verify the honey badger's boundaries.
        Evaluate the new position fitness Fnew.
        If Fnew ≤ F(Pi) then
            Set Pi = PNew and Fi = Fnew.
        End If
        If Fnew ≤ FBest then
            Set PBest = PNew and FBest = Fnew.
        End If
    End For
    t = t + 1
End While

3.6. The Complexity Analysis

The computational complexity of EHBA is determined mainly by three operations: dynamic opposite learning initialization, fitness evaluation, and population position update. The initialization stage is executed only once at the beginning, while the other two steps are performed in every iteration cycle. The complexity is calculated with population size N, iteration count T, and dimension D.
The computational complexity of the HBA is O(TND). The dynamic opposite learning initialization costs O(2N). Quantum local search and Laplace crossover replace the honey harvesting stage of the original HBA, which only multiplies the per-iteration work by a constant factor c, giving O(cTND); constants have little effect in big-O terms. In summary, the total computational complexity of EHBA is O(cTND + 2N), of the same order as the original HBA.

4. Numerical Experiment and Analysis Results

In this section, we test performance on the CEC2017, CEC2020, and CEC2022 test sets to demonstrate the effectiveness of EHBA. CEC2017 contains 30 single-objective optimization functions and is a classic suite widely used to evaluate the optimization ability of intelligent algorithms; its F2 function was later removed, as officially declared. The CEC2020 test kit includes 10 benchmark problems, which are a combination of functions selected from CEC2014 and CEC2017. CEC2022 was likewise drawn from the 2017 and 2014 function sets and is the latest test set for algorithm performance evaluation. These three test sets contain different uni-modal, multi-modal, hybrid, and composition functions, which can comprehensively measure the performance of the new EHBA method.
In this study, the experiments were run in a Windows 10 (64-bit) environment on an Intel(R) Core(TM) i5-6500 processor at 3.2 GHz with 8 GB of main memory. EHBA was implemented in MATLAB R2019a to ensure a fair comparison.
The results were compared with the algorithms mentioned in the introduction, including SHO [30], AOA [35], WOA [28], MFO [36], HBA [37], TSA [33], SCA [32], and GWO [27], to verify the efficiency of the EHBA on the test problems. These comparators include both early classical algorithms and algorithms with good applicability and performance from recent years, which better reflects the superiority of the EHBA proposed in this paper.
Parameter settings for all comparison algorithms are consistent with those in the original literature; see Table 2. The maximum number of iterations T and population size N are 1000 and 30, respectively, for all methods.
The randomness of meta-heuristics makes results from a single run unreliable, so to ensure a fair comparison all procedures were performed 30 times independently. The average value (Mean), standard deviation (Std), best value (Min), worst value (Max), Rank, and Wilcoxon's rank sum test are selected as the evaluation criteria, which highlight the effectiveness and feasibility of the algorithms. Here, "+" denotes that another method is superior to EHBA on a function, "−" that it is inferior to EHBA, and "=" that there is no significant difference between it and EHBA. The minimum values obtained across the compared algorithms are shown in bold. Rank represents the ranking of the average values of the different algorithms; the lower the rank, the better the algorithm's precision and stability.
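As a sketch of how such a comparison can be computed, the Wilcoxon rank-sum test is available in SciPy; the arrays below are random placeholders standing in for the 30 run results of two algorithms on one function, not the paper's data:

```python
import numpy as np
from scipy.stats import ranksums

# 30 independent run results of EHBA and one competitor on a single test function
rng = np.random.default_rng(0)
ehba_runs = rng.normal(loc=100.0, scale=1.0, size=30)    # placeholder values
other_runs = rng.normal(loc=105.0, scale=5.0, size=30)   # placeholder values

stat, p_value = ranksums(ehba_runs, other_runs)
# A p-value below 0.05 indicates a statistically significant difference;
# the sign of the mean difference then decides the "+" / "-" / "=" label.
print(f"p = {p_value:.3e}, significant: {p_value < 0.05}")
```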

4.1. Experiment and Analysis on the CEC2017 Test Set

The test function CEC2017 contains 29 functions that are often used to test the effectiveness of algorithms, with at least half being challenging mixed and combined functions [50]. Table 3 presents the test results between EHBA and the other eight methods on the CEC2017. The bold data in the table represent the optimal average data among all the comparison algorithms. In addition, Table 4 shows the Wilcoxon rank sum test results of eight comparative algorithms at a significance level of 0.05.
From Table 3, we can observe that the average rank of EHBA is 1.2069, placing it first overall, and its overall solution quality is better. Observing the bold data, EHBA achieved the smallest values on 86% of the test functions, distributed across all function types (uni-modal, multi-modal, hybrid, and composition), whereas HBA and GWO achieved the smallest values on only 3 functions and 1 function, respectively. EHBA thus obtained far more best values than the other comparison algorithms. It is evident that the local quantum search and dynamic Laplace crossover strategies have improved the HBA's ability to search effectively for the best solution, and EHBA finds superior solutions with a higher convergence speed than the original algorithm.
In the p-value results presented in Table 4, since many values coincide, "—" represents the common value 6.79562 × 10−8 for ease of reading. Combined with Table 3, the final results show that the numbers of functions on which each comparison method is superior/similar/inferior to EHBA are 3/13/13, 0/12/17, 0/0/29, 0/0/29, 0/0/29, 0/0/29, 1/4/24, and 0/1/28, respectively. The best-performing competitor, HBA, outperforms EHBA on 3 test functions and is inferior to EHBA on 13; the second best, GWO, outperforms EHBA only on function F10 and is inferior to EHBA on 24 test functions. Overall, on the CEC2017 test functions the performance of EHBA is superior to that of the other eight comparative algorithms, showing that the proposed algorithm can effectively solve the CEC2017 suite.
To display the optimization performance of the EHBA more intuitively, including its convergence rate and capability to escape local optima, Figure 3 and Figure 4 show convergence curves and box plots for some CEC2017 test functions, with the iteration count on the horizontal axis and log10(F) on the vertical axis. The figures show that EHBA steadily converges toward the optimal solution within 1000 iterations, indicating strong exploration and exploitation capability. The convergence curves indicate that EHBA significantly improves convergence accuracy and speed compared with the other algorithms: its iteration curve overcomes local solutions in the early stages and approaches the near-optimal solution, then closes in on the optimum in the subsequent exploitation period. Specifically, EHBA shows particularly pronounced convergence on F1, F6, F7, F9, F16, F18, F20, F22, and F30, mainly because it can jump out of local solutions and locate the optimal position quickly.
Box plot analysis shows the distribution of the data based on quartiles and helps in understanding the spread of the results. Figure 4 shows the box plots of EHBA and the other recent optimization algorithms. The red median lines of EHBA and HBA are the lowest, with EHBA more pronounced. The narrow interquartile range of EHBA's results indicates that its solutions are more tightly clustered than those of the other algorithms, with few outliers, further demonstrating the stability brought by the incorporated improvement strategies. Overall, EHBA is a competitive algorithm that deserves further exploration in practical engineering applications.

4.2. Experiment and Analysis on the CEC2020 Test Set

4.2.1. The Ablation Experiments of EHBA

To verify the effectiveness of the different strategies in EHBA, EHBA is compared with six incomplete variants and the original HBA. The incomplete variants apply, respectively, only the dynamic opposite learning strategy (EHBA1), only the differential mutation strategy (EHBA2), only the quantum local search/dynamic Laplace crossover (EHBA3), and the pairwise combinations EHBA4 (dynamic opposite learning and differential mutation), EHBA5 (dynamic opposite learning and quantum local search/dynamic Laplace crossover), and EHBA6 (differential mutation and quantum local search/dynamic Laplace crossover), to evaluate each strategy's impact on convergence speed and accuracy. Owing to space constraints, this paper only gives the convergence curves of some CEC2020 test functions in Figure 5.
On F4 of CEC2020, all comparison algorithms have almost the same convergence accuracy and speed and their curves nearly coincide, so it is not shown here. Figure 5a,d indicate that on F1 and F5, EHBA converges more slowly than EHBA2 and EHBA4 but reaches better accuracy. Figure 5c,f–i indicate that on F7–F10, EHBA's convergence speed is only slightly higher than that of the other variants, while its convergence accuracy is significantly better. On F2, EHBA converges more slowly than EHBA1 and EHBA5 in the early iterations, but in the later iterations its convergence speed and accuracy are significantly better than those of the other variants. In general, every improvement strategy of EHBA is effective, and the incomplete variants all improve HBA to different degrees in both exploration and exploitation. Applying all strategies together yields a better convergence effect on HBA than any ablated variant, further proving the effectiveness of the added strategies.
The experimental results show that the four strategies all help improve the performance of HBA, especially the introduced quantum local search and dynamic Laplace crossover.

4.2.2. Comparison Experiment between Other HBA Variant Algorithms and EHBA

The proposed EHBA is also compared with other HBA variants to verify its performance. Two recently improved variants were selected, LHBA [40] and SaCHBA_PDN [41]. To save space, only the convergence curves on CEC2020 are presented in Figure 6.
Figure 6 shows that on function F2, although EHBA's convergence speed in the early iterations is not as fast as that of the two variants LHBA and SaCHBA_PDN, its speed and accuracy are better in the later stage. For the other test functions in Figure 6, the convergence accuracy and speed of the improved algorithm are clearly better than those of the other variants, further indicating that the strategies introduced in this paper substantially improve the original algorithm and yield high convergence efficiency. This also demonstrates the effectiveness and strong convergence of the proposed EHBA.

4.2.3. Comparison Experiments of EHBA and Other Intelligent Algorithms

Similarly, Table 5 presents the comparison results between EHBA and the other eight methods on the CEC2020 test set [51], which contains 10 functions. Again, the bold data in the table represent the best average values among all compared algorithms. In addition, Table 6 lists the Wilcoxon rank sum test results of the eight comparative methods at a significance level of 0.05.
From Table 5, we can observe that the average rank of EHBA is 1.4, placing it first overall, and its overall solution quality is better. Observing the bold data, EHBA achieved the best values on 80% of the test functions, distributed across the various function types, whereas HBA and GWO each achieved the smallest value on only one function. EHBA thus obtained far more best values than the other comparison algorithms. The local quantum search and dynamic Laplace crossover strategies have visibly improved the HBA's ability to search for the best solution, and EHBA finds superior solutions with a higher convergence speed than the original algorithm.
In the p-value results presented in Table 6, since many values coincide, "—" represents the common value 6.7956 × 10−8 for ease of reading. Combined with Table 5, the final results show that the numbers of functions on which each comparison method is superior/similar/inferior to EHBA are 1/5/4, 0/2/8, 0/0/10, 0/0/10, 0/1/9, 0/1/9, 3/3/4, and 0/1/9, respectively. HBA outperforms EHBA on F5 and is inferior to EHBA on four test functions; the second best comparator, GWO, outperforms EHBA on three test functions and is inferior to EHBA on four. Overall, across the ten test functions the performance of EHBA is superior to that of the other eight comparative algorithms, showing that the proposed algorithm can effectively solve the CEC2020 suite.
As with CEC2017, the convergence curves and box plots are provided in Figure 7 and Figure 8. The convergence curves indicate that EHBA significantly improves convergence accuracy and speed compared with the other algorithms: its iterative curve slips away from local solutions in the early stages of iteration, approaches the near-optimal solution, and closes in on the optimum in the later exploitation phase. Specifically, the convergence of EHBA is most pronounced on F1, F6, and F7. The proposed strategies therefore mainly improve the algorithm's convergence speed on the CEC2020 test functions, avoiding local stagnation during optimization while preserving exploration and exploitation capabilities. Overall, EHBA obtains competitive convergence results, and its overall convergence is better than that of the other comparative algorithms.
Figure 8 demonstrates that the box plots of the test functions are in line with Table 5. The red median lines of EHBA and HBA are the lowest, with EHBA more pronounced. The narrow interquartile range of EHBA's results indicates that its solutions are more concentrated than those of the other methods, with few outliers, further demonstrating the stability gained from the incorporated improvement strategies. Overall, EHBA is a competitive algorithm that deserves further exploration in practical engineering applications.

4.3. Experiment and Analysis on the CEC2022 Test Set

In the same way, Table 7 presents the results of EHBA and the other algorithms on the CEC2022 test set [52], which contains 12 functions. Again, the bold data in the table represent the best average values among all compared algorithms. Table 8 shows the Wilcoxon rank sum test results of the eight comparative algorithms at a significance level of 0.05.
From Table 7, the average rank of EHBA is 1.25, the top ranking, and its overall solution quality is better. Observing the bold data, EHBA achieved the smallest values on 83% of the test functions, distributed across the various function types, whereas HBA and GWO achieved the smallest values only on F11 and F4, respectively. EHBA thus obtained far more best values than the other comparison algorithms. The local quantum search and dynamic Laplace crossover strategies have clearly improved the HBA's ability to search effectively for the best solution, and EHBA finds superior solutions with a higher convergence speed than the original algorithm.
In the p-value results presented in Table 8, since many values coincide, "—" represents the common value 6.79562 × 10−8 for ease of reading. Combined with Table 7, the final results show that the numbers of functions on which each comparison method is superior/similar/inferior to EHBA are 1/4/7, 0/0/12, 0/0/12, 0/0/12, 0/0/12, 1/1/10, 0/2/10, and 0/1/11, respectively. HBA outperforms EHBA on one test function and is inferior to EHBA on seven; the second best comparator, MFO, outperforms EHBA on one test function and is inferior to EHBA on ten. Overall, across the 12 test functions the performance of EHBA is superior to that of the other 8 comparative algorithms, showing that the proposed algorithm can effectively solve the CEC2022 suite.
As with CEC2020, Figure 9 provides the convergence curves of EHBA compared with the other meta-heuristics. The curves indicate that EHBA significantly improves convergence accuracy and speed compared with the other algorithms: its iterative curve avoids local solutions in the early stages of iteration, converges to the approximate optimal solution, and closes in on the optimum in the subsequent exploitation phase. Specifically, the convergence of EHBA is most pronounced on F3, F5, and F10. The proposed strategies therefore mainly improve the algorithm's convergence speed on the CEC2022 test functions, avoiding local stagnation during optimization while preserving exploration and exploitation capabilities.
The compact box plots of the test functions in Figure 10 indicate strong data consistency. The red median lines of EHBA and HBA are the lowest, with EHBA more pronounced. The narrow interquartile range of EHBA's results indicates that its solutions are more tightly distributed, with few outliers, which again evidences the stability of the EHBA. EHBA's boxes are thinner than those of the other algorithms, reflecting the performance improvement of the HBA brought by the incorporated strategies. Overall, EHBA is a competitive algorithm that deserves further exploration in practical engineering applications.
In summary, we can see that the EHBA has good convergence, stability, and effectiveness, which provides a solid foundation for solving practical problems.

5. The Application of EHBA in Engineering Design Issues

To verify its ability to solve practical problems, the EHBA was used to solve three practical engineering design problems. Before applying each algorithm to these problems, a penalty function [53] was used to transform each constrained problem into an unconstrained one.
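The paper does not spell out the exact penalty form of [53]; as one common static-penalty sketch, each constraint written in the form s(x) ≤ 0 can be folded into the objective like this:

```python
def penalized(objective, constraints, penalty=1e6):
    """Static penalty transform (a common choice, assumed here; the paper
    cites [53] without giving its exact form). Each constraint is expected
    as a callable s with feasibility meaning s(x) <= 0; squared violations
    are added to the objective, scaled by a large penalty coefficient."""
    def wrapped(x):
        violation = sum(max(0.0, s(x)) ** 2 for s in constraints)
        return objective(x) + penalty * violation
    return wrapped
```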

5.1. Welded Beam Design Issues

Designing welded beams with the lowest manufacturing cost is an effective way to achieve green manufacturing [26]; a schematic view is given in Figure 11. The weld thickness (h), length (l), height (t), and thickness (b) of the bar are the four optimization variables, and a load imposed on top of the bar gives rise to seven constraints, detailed in Equation (24). Let $\gamma = [h, l, t, b]^T = [\gamma_1, \gamma_2, \gamma_3, \gamma_4]^T$; the formula expression of the mathematical model is given in Equation (23). The meanings of the relevant variables can be found in reference [26].
$$\text{Min } F(\gamma) = 1.10471\gamma_1^2\gamma_2 + 0.04811\gamma_3\gamma_4(14.0 + \gamma_2).$$
The constraint conditions are listed in Equation (24).
$$\begin{cases}
s_1(\gamma) = \tau(\gamma) - \tau_{max} \le 0, \\
s_2(\gamma) = \sigma(\gamma) - \sigma_{max} \le 0, \\
s_3(\gamma) = \gamma_1 - \gamma_4 \le 0, \\
s_4(\gamma) = 0.10471\gamma_1^2 + 0.04811\gamma_3\gamma_4(14 + \gamma_2) - 5 \le 0, \\
s_5(\gamma) = 0.125 - \gamma_1 \le 0, \\
s_6(\gamma) = \delta(\gamma) - 0.25 \le 0, \\
s_7(\gamma) = P - P_c(\gamma) \le 0, \\
M = P\left(L + \gamma_2/2\right), \quad R = \sqrt{\gamma_2^2/4 + \left((\gamma_1 + \gamma_3)/2\right)^2}, \\
\delta(\gamma) = 6PL^3 / (E\gamma_3^2\gamma_4), \quad J = 2\sqrt{2}\gamma_1\gamma_2\left(\gamma_2^2/4 + \left((\gamma_1 + \gamma_3)/2\right)^2\right), \\
\sigma(\gamma) = 6PL / (\gamma_4\gamma_3^2), \\
P_c(\gamma) = \dfrac{4.013E\sqrt{\gamma_3^2\gamma_4^6/36}}{L^2}\left(1 - \dfrac{\gamma_3}{2L}\sqrt{\dfrac{E}{4G}}\right), \\
\tau(\gamma) = \sqrt{(\tau')^2 + 2\tau'\tau''(\gamma_2/2R) + (\tau'')^2}, \quad \tau' = \dfrac{P}{\sqrt{2}\gamma_1\gamma_2}, \quad \tau'' = \dfrac{MR}{J}.
\end{cases}$$
The range of variable values is given below, in Equation (25).
$$0.1 \le \gamma_1, \gamma_4 \le 2, \quad 0.1 \le \gamma_2, \gamma_3 \le 10.$$
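As an illustration, Equations (23)–(25) can be coded as below. This is our sketch: the material constants are not listed in the paper (they are deferred to reference [26]), so the standard values from the welded beam literature are assumed here (P = 6000 lb, L = 14 in, E = 30 × 10^6 psi, G = 12 × 10^6 psi, τmax = 13,600 psi, σmax = 30,000 psi). Combined with the `penalized` wrapper sketched above, any of the compared metaheuristics can minimize this model within the bounds of Equation (25).

```python
import numpy as np

# assumed standard constants for the welded beam model (see [26])
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIG_MAX = 13600.0, 30000.0

def welded_beam_objective(g):
    h, l, t, b = g                         # gamma_1..gamma_4 of Eq. (23)
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def welded_beam_constraints(g):
    """Values of s_1..s_7 from Eq. (24); feasible iff all are <= 0."""
    h, l, t, b = g
    M = P * (L + l / 2.0)
    half = ((h + l) / 2.0) ** 2            # ((gamma_1 + gamma_2)/2)^2
    R = np.sqrt(l**2 / 4.0 + half)
    J = 2.0 * np.sqrt(2.0) * h * l * (l**2 / 4.0 + half)
    tau_p = P / (np.sqrt(2.0) * h * l)     # tau'
    tau_pp = M * R / J                     # tau''
    tau = np.sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * l / (2.0 * R) + tau_pp**2)
    sigma = 6.0 * P * L / (b * t**2)
    delta = 6.0 * P * L**3 / (E * t**2 * b)
    Pc = (4.013 * E * np.sqrt(t**2 * b**6 / 36.0) / L**2
          * (1.0 - t / (2.0 * L) * np.sqrt(E / (4.0 * G))))
    return np.array([
        tau - TAU_MAX,                                        # s1
        sigma - SIG_MAX,                                      # s2
        h - b,                                                # s3
        0.10471 * h**2 + 0.04811 * t * b * (14.0 + l) - 5.0,  # s4
        0.125 - h,                                            # s5
        delta - 0.25,                                         # s6
        P - Pc,                                               # s7
    ])
```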
EHBA, together with HBA, SHO, SCA, TSA, WOA, MFO, GWO, and AOA, was used to solve the welded beam design problem. Table 9 shows the mean, standard deviation, worst, and best values over 20 independent runs, and Table 10 summarizes the best solutions generated by each algorithm. The convergence curves are provided in Figure 12, where the vertical axis is the logarithm of the objective value; they illustrate the efficiency of the EHBA developed in this paper.
From the data in the tables, the objective fitness values obtained by MFO and HBA are identical and small, indicating high solving precision. Observing the bold data, EHBA performs well on the four indicators, with small best, worst, and mean values and a small standard deviation. Overall, EHBA solves this problem with high accuracy and relatively stable results.

5.2. Vehicle Side Impact Design Issues

The goal of the car side impact design problem is to minimize the weight of the car. According to the mathematical model of car side impact established in reference [54], the problem has 11 design variables γ = [γ1, γ2, …, γ10, γ11]; the objective function is given in Equation (26).
$$\mathrm{Min}\ F(\gamma) = 1.98 + 4.90\gamma_1 + 6.67\gamma_2 + 6.98\gamma_3 + 4.01\gamma_4 + 1.78\gamma_5 + 2.73\gamma_7.$$
The constraint conditions that the objective function needs to meet are shown in Equation (27).
$$
\left\{
\begin{array}{l}
s_1(\gamma) = 1.16 - 0.3717\gamma_2\gamma_4 - 0.00931\gamma_2\gamma_{10} - 0.484\gamma_3\gamma_9 + 0.01343\gamma_6\gamma_{10} - 1 \le 0, \\
s_2(\gamma) = 0.261 - 0.0159\gamma_1\gamma_2 - 0.188\gamma_1\gamma_8 - 0.019\gamma_2\gamma_7 + 0.0144\gamma_3\gamma_5 + 0.0008757\gamma_5\gamma_{10} \\
\qquad\quad +\, 0.08045\gamma_6\gamma_9 + 0.00139\gamma_8\gamma_{11} + 0.00001575\gamma_{10}\gamma_{11} - 0.32 \le 0, \\
s_3(\gamma) = 0.214 + 0.00817\gamma_5 - 0.131\gamma_1\gamma_8 - 0.0704\gamma_1\gamma_9 + 0.03099\gamma_2\gamma_6 - 0.018\gamma_2\gamma_7 + 0.0208\gamma_3\gamma_8 \\
\qquad\quad +\, 0.121\gamma_3\gamma_9 - 0.00364\gamma_5\gamma_6 + 0.0007715\gamma_5\gamma_{10} - 0.0005354\gamma_6\gamma_{10} + 0.00121\gamma_8\gamma_{11} - 0.32 \le 0, \\
s_4(\gamma) = 0.74 - 0.061\gamma_2 - 0.163\gamma_3\gamma_8 + 0.001232\gamma_3\gamma_{10} - 0.166\gamma_7\gamma_9 + 0.227\gamma_2^2 - 0.32 \le 0, \\
s_5(\gamma) = 28.98 + 3.818\gamma_3 - 4.2\gamma_1\gamma_2 + 0.0207\gamma_5\gamma_{10} + 6.63\gamma_6\gamma_9 - 7.7\gamma_7\gamma_8 + 0.32\gamma_9\gamma_{10} - 32 \le 0, \\
s_6(\gamma) = 33.86 + 2.95\gamma_3 + 0.1792\gamma_{10} - 5.057\gamma_1\gamma_2 - 11\gamma_2\gamma_8 - 0.0215\gamma_5\gamma_{10} - 9.98\gamma_7\gamma_8 + 22\gamma_8\gamma_9 - 32 \le 0, \\
s_7(\gamma) = 46.36 - 9.9\gamma_2 - 12.9\gamma_1\gamma_8 + 0.1107\gamma_3\gamma_{10} - 32 \le 0, \\
s_8(\gamma) = 4.72 - 0.5\gamma_4 - 0.19\gamma_2\gamma_3 - 0.0122\gamma_4\gamma_{10} + 0.009325\gamma_6\gamma_{10} + 0.000191\gamma_{11}^2 - 4 \le 0.
\end{array}
\right.
$$
The range of variable values is as follows in Equation (28).
$$0.5 \le \gamma_1, \gamma_2, \gamma_3, \gamma_4, \gamma_5, \gamma_6, \gamma_7 \le 1.5, \quad 0.192 \le \gamma_8, \gamma_9 \le 0.345, \quad -30 \le \gamma_{10}, \gamma_{11} \le 30.$$
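A minimal sketch of the objective in Equation (26) together with the box constraints of Equation (28) is given below (our illustration; the constraint functions s1–s8 of Equation (27) follow the same pattern as in the welded beam sketch and are omitted for brevity). Clamping out-of-range candidates to the bounds, as assumed here, is one common repair choice:

```python
import numpy as np

def side_impact_weight(g):
    """Vehicle weight of Equation (26); g holds gamma_1..gamma_11."""
    return (1.98 + 4.90 * g[0] + 6.67 * g[1] + 6.98 * g[2]
            + 4.01 * g[3] + 1.78 * g[4] + 2.73 * g[6])

# bounds of Equation (28), one entry per design variable
LOWER = np.array([0.5] * 7 + [0.192] * 2 + [-30.0] * 2)
UPPER = np.array([1.5] * 7 + [0.345] * 2 + [30.0] * 2)

def repair(g):
    """Clamp a candidate back into the feasible box."""
    return np.clip(g, LOWER, UPPER)
```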
EHBA was applied to this case, and its results were compared with those of HBA, SHO, SCA, TSA, WOA, MFO, GWO, and AOA. Table 11 shows the best values and the corresponding variable values for the car side impact design problem.
The objective fitness values obtained by MFO, HBA, and EHBA are identical and the smallest, indicating high solving accuracy. In addition, Table 12 presents the statistical results over 20 runs, with the bold data marking the best value among all algorithms for each indicator. Observing the bold data, EHBA achieves good results on all four indicators, with small best, worst, and mean values and a small standard deviation; it therefore solves this case accurately and stably. The convergence curves of the above methods are provided in Figure 13, where the vertical axis is the logarithm of the objective value, which further demonstrates the efficiency of the EHBA developed in this paper.

5.3. Parameter Estimation of Frequency Modulated (FM) Sound Waves

Finding the optimal combination of the six parameters of a frequency modulation synthesizer is the central issue in the FM sound wave problem [55], which is multi-modal. The minimum sum of squared errors between the estimated sound wave and the target sound wave is defined as the objective. Let γ = (γ1, γ2, γ3, γ4, γ5, γ6) = (a1, ω1, a2, ω2, a3, ω3); the mathematical description of the problem is given in Equation (29).
$$\mathrm{Min}\ F(\gamma) = \sum_{t=0}^{100} \left[ o(t) - o_0(t) \right]^2,$$
where
$$\left\{\begin{array}{l}
o(t) = \gamma_1 \sin\!\big(\gamma_2 t\theta + \gamma_3 \sin(\gamma_4 t\theta + \gamma_5 \sin(\gamma_6 t\theta))\big), \\
o_0(t) = \sin\!\big(5t\theta - 1.5\sin(4.8t\theta + 2.0\sin(4.9t\theta))\big).
\end{array}\right.$$
In Equation (30), θ = 2π/100, and o(t) and o0(t) are the estimated sound wave and the target sound wave, respectively.
The range of variable values is defined with Equation (31).
$$-6.4 \le a_1, \omega_1, a_2, \omega_2, a_3, \omega_3 \le 6.35.$$
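Equations (29)–(31) vectorize directly; the sketch below is our illustration. As a sanity check, the true parameter vector (1, 5, −1.5, 4.8, 2, 4.9) reproduces the target wave exactly and yields zero error:

```python
import numpy as np

THETA = 2.0 * np.pi / 100.0
T = np.arange(101)                     # t = 0, 1, ..., 100

def fm_wave(a1, w1, a2, w2, a3, w3):
    """Nested FM wave o(t) of Equation (30)."""
    return a1 * np.sin(w1 * T * THETA
                       + a2 * np.sin(w2 * T * THETA
                                     + a3 * np.sin(w3 * T * THETA)))

TARGET = fm_wave(1.0, 5.0, -1.5, 4.8, 2.0, 4.9)   # o0(t)

def fm_objective(gamma):
    """Sum of squared errors between estimated and target waves."""
    return np.sum((fm_wave(*gamma) - TARGET) ** 2)

print(fm_objective([1.0, 5.0, -1.5, 4.8, 2.0, 4.9]))   # -> 0.0
```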
EHBA was applied to this parameter estimation problem, and its results were compared with those of the original HBA, SHO, SCA, TSA, WOA, MFO, GWO, and AOA. Table 13 lists the best results obtained by all comparison methods; the value obtained by EHBA is by far the smallest, indicating high solving accuracy. In addition, Table 14 presents the statistical results of all methods over 20 runs, with bold data marking the best value under each indicator (best, worst, mean, standard deviation). Observing the bold data, although EHBA has a relatively large standard deviation, it obtains smaller mean, best, and worst values, so its overall solution quality is good. The convergence curves of the above methods are provided in Figure 14, where the vertical axis is the logarithm of the objective value, further demonstrating the efficiency of the EHBA developed in this paper.

6. Conclusions and Future Research

A multi-strategy fusion enhanced Honey Badger algorithm (EHBA) is proposed, combining dynamic opposite learning, differential mutation, and the selective use of either a local quantum search or a dynamic Laplacian crossover operator, to address the local optima and slow convergence of the HBA. The dynamic opposite learning strategy broadens the search area of the population, enhances global search ability, and improves population diversity and the quality of solutions. The differential mutation operation enhances precision and exploration while securing the convergence rate of the HBA. Introducing the local quantum search strategy in the honey harvesting (exploitation) stage strengthens local search and improves population optimization precision; alternatively, the dynamic Laplacian crossover operator improves convergence speed and reduces the probability of EHBA sinking into local optima. Comparative experiments on the CEC2017, CEC2020, and CEC2022 test sets, along with three engineering examples, verify that EHBA performs well against other intelligent algorithms and other variant algorithms. The convergence curves, box plots, and comparative analysis of algorithm performance tests show that, compared with the eight competing methods, EHBA significantly improves optimization ability and convergence speed, and is expected to prove useful in optimization problems.
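For concreteness, the sketch below shows the standard Laplace crossover from the literature (cf. [48]); it is our illustration with an assumed fixed location a and scale b, whereas the dynamic variant used in EHBA adapts the operator over the iterations:

```python
import numpy as np

def laplace_crossover(x1, x2, a=0.0, b=0.5, rng=None):
    """Standard Laplace crossover; returns two offspring vectors.

    beta is Laplace(a, b)-distributed; offspring lie on either side
    of the parents, displaced in proportion to their separation.
    """
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-12, 1.0, size=x1.shape)   # avoid log(0)
    r = rng.random(x1.shape)
    beta = np.where(r <= 0.5, a - b * np.log(u), a + b * np.log(u))
    gap = np.abs(x1 - x2)
    return x1 + beta * gap, x2 + beta * gap
```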
Given its superiority, EHBA may in the future be extended to multi-objective problems in further scientific research areas, such as robot movement, missile trajectory, image segmentation, predictive modeling, feature selection [56], geometry optimization [57], and engineering design [58,59]. In addition, effective improvements to the original HBA can incorporate further well-chosen strategies or hybridize it with other high-performing algorithms.

Author Contributions

Conceptualization, J.H.; methodology, J.H. and H.H.; software, J.H.; validation, J.H. and H.H.; formal analysis, H.H.; investigation, J.H. and H.H.; resources, H.H.; data curation, J.H.; writing—original draft, J.H. and H.H.; writing—review and editing, J.H. and H.H.; visualization, J.H.; supervision, H.H.; project administration, H.H.; funding acquisition, H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received financial support from the National Natural Science Foundation of China (72072144, 71672144, 71372173, 70972053); Shaanxi Soft Science Research Plan (2019KRZ007); Science and Technology Research and Development Program of Shaanxi Province (2021KRM183, 2017KRM059, 2017KRM057, 2014KRM282); Soft Science Research Program of Xi’an Science and Technology Bureau (21RKYJ0009); Fundamental Research Funds for the Central Universities, CHD (300102413501); Key R&D Program Project in Shaanxi Province (2021SF-458).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during the study are included in this published article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jia, H.M.; Li, Y.; Sun, K.J. Simultaneous feature selection optimization based on hybrid sooty tern optimization algorithm and genetic algorithm. Acta Autom. Sin. 2022, 48, 15. [Google Scholar] [CrossRef]
  2. Jia, H.M.; Jiang, Z.C.; Li, Y. Simultaneous feature selection optimization based on improved bald eagle search algorithm. Control Decis. 2022, 37, 3. [Google Scholar] [CrossRef]
  3. Jia, H.M.; Jiang, Z.C.; Peng, X.X. Multi-threshold color image segmentation based on improved spotted hyena optimizer. Comput. Appl. Soft. 2020, 37, 261–267. [Google Scholar]
  4. Zhang, F.Z.; He, Y.Z.; Liu, X.J.; Wang, Z.K. A novel discrete differential evolution algorithm for solving D{0-1} KP problem. J. Front. Comput. Sci. Technol. 2022, 16, 12. [Google Scholar]
  5. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  6. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  7. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  8. Faramarzi, A.; Heidarinejad, M.; Stephens, B.E.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  9. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-learning-based optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 183, 1–15. [Google Scholar] [CrossRef]
  10. Moghdani, R.; Salimifard, K. Volleyball premier league algorithm. Appl. Soft Comput. 2018, 64, 161–185. [Google Scholar] [CrossRef]
  11. Abualigah, L.; Yousri, D.; Elaziz, M.A.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  12. Lin, S.J.; Dong, C.; Chen, M.Z.; Zhang, F.; Chen, J.H. Summary of new group intelligent optimization algorithms. Comput. Eng. Appl. 2018, 54, 1–9. [Google Scholar] [CrossRef]
  13. Feng, W.T.; Song, K.K. An Enhanced Whale Optimization Algorithm. Comput. Simul. 2020, 37, 275–279, 357. [Google Scholar] [CrossRef]
  14. Chen, Y.; Chen, S. Research on Application of Dynamic Weighted Bat Algorithm in Image Segmentation. Comput. Eng. Appl. 2020, 56, 207–215. [Google Scholar] [CrossRef]
  15. Xue, B.; Zhang, M.; Browne, W.N.; Yao, X. A survey on evolutionary computation approaches to feature selection. IEEE Trans. Evol. Comput. 2016, 20, 606–626. [Google Scholar] [CrossRef]
  16. Gong, G.; Chiong, R.; Deng, Q.; Gong, X. A hybrid artificial bee colony algorithm for flexible job shop scheduling with worker flexibility. Int. J. Prod. Res. 2019, 58, 4406–4420. [Google Scholar] [CrossRef]
  17. Tharwat, A.; Elhoseny, M.; Hassanien, A.E.; Gabel, T.; Kumar, A. Intelligent Bézier curve-based path planning model using Chaotic Particle Swarm Optimization algorithm. Clust. Comput. 2019, 22 (Suppl. S2), 4745–4766. [Google Scholar] [CrossRef]
  18. Askarzadeh, A.; Rezazadeh, A. Artificial neural network training using a new efficient optimization algorithm. Appl. Soft Comput. 2013, 13, 1206–1213. [Google Scholar] [CrossRef]
  19. Irmak, B.; Karakoyun, M.; Gülcü, Ş. An improved butterfly optimization algorithm for training the feed-forward artificial neural networks. Soft Comput. 2023, 27, 3887–3905. [Google Scholar] [CrossRef]
  20. Ang, K.M.; Chow, C.E.; El-Kenawy, E.-S.M.; Abdelhamid, A.A.; Ibrahim, A.; Karim, F.K.; Khafaga, D.S.; Tiang, S.S.; Lim, W.H. A Modified Particle Swarm Optimization Algorithm for Optimizing Artificial Neural Network in Classification Tasks. Processes 2022, 10, 2579. [Google Scholar] [CrossRef]
  21. Ang, K.M.; Lim, W.H.; Tiang, S.S.; Ang, C.K.; Natarajan, E.; Ahamed Khan, M.K.A. Optimal Training of Feedforward Neural Networks Using Teaching-Learning-Based Optimization with Modified Learning Phases. In Proceedings of the 12th National Technical Seminar on Unmanned System Technology 2020. Lecture Notes in Electrical Engineering; Springer: Singapore, 2022; Volume 770. [Google Scholar] [CrossRef]
  22. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  23. Eberhart, R.; Kennedy, J. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  24. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. B 1996, 26, 29–41. [Google Scholar] [CrossRef] [PubMed]
  25. Yang, X.S.; Gandomi, A.H. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef]
  26. Gandomi, A.H.; Yang, X.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35. [Google Scholar] [CrossRef]
  27. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  28. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  29. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  30. Zhao, S.; Zhang, T.; Ma, S.; Wang, M. Sea-horse optimizer: A novel nature-inspired meta-heuristic for global optimization problems. Appl. Intell. 2022, 53, 11833–11860. [Google Scholar] [CrossRef]
  31. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  32. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2015, 96, 120–133. [Google Scholar] [CrossRef]
  33. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate swarm algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intel. 2020, 90, 103541. [Google Scholar] [CrossRef]
  34. Naruei, I.; Keynia, F. Wild horse optimizer: A new meta-heuristic algorithm for solving engineering optimization problems. Eng. Comput. 2021, 38, 3025–3056. [Google Scholar] [CrossRef]
  35. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  36. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  37. Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey badger algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simulat. 2021, 192, 84–110. [Google Scholar] [CrossRef]
  38. Akdağ, O. A Developed Honey Badger Optimization Algorithm for Tackling Optimal Power Flow Problem. Electr. Power Compon. Syst. 2022, 50, 331–348. [Google Scholar] [CrossRef]
  39. Han, E.; Ghadimi, N. Model identification of proton-exchange membrane fuel cells based on a hybrid convolutional neural network and extreme learning machine optimized by improved honey badger algorithm. Sustain. Energy Technol. Assess. 2022, 52, 102005. [Google Scholar] [CrossRef]
  40. Zhong, J.Y.; Yuan, X.G.; Du, B.; Hu, G.; Zhao, C.Y. An Lévy Flight Based Honey Badger Algorithm for Robot Gripper Problem. In Proceedings of the 7th International Conference on Image, Vision and Computing (ICIVC), Xi’an, China, 26–28 July 2022; pp. 901–905. [Google Scholar] [CrossRef]
  41. Hu, G.; Zhong, J.Y.; Wei, G. SaCHBA_PDN: Modified honey badger algorithm with multi-strategy for UAV path planning. Expert Syst. Appl. 2023, 223, 119941. [Google Scholar] [CrossRef]
  42. Kapner, D.; Cook, T.; Adelberger, E.; Gundlach, J.; Heckel, B.R.; Hoyle, C.; Swanson, H. Tests of the gravitational inverse-square law below the dark-energy length scale. Phys. Rev. Lett. 2007, 98, 021101. [Google Scholar] [CrossRef]
  43. Jia, H.M.; Liu, Q.G.; Liu, Y.X.; Wang, S.; Wu, D. Hybrid Aquila and Harris hawks optimization algorithm with dynamic opposition-based learning. CAAI Trans. Intell. Syst. 2023, 18, 104–116. [Google Scholar] [CrossRef]
  44. Hua, Y.; Sui, X.; Zhou, S.; Chen, Q.; Gu, G.; Bai, H.; Li, W. A novel method of global optimization for wavefront shaping based on the differential evolution algorithm. Opt. Commun. 2021, 481, 126541. [Google Scholar] [CrossRef]
  45. Li, X.; Wang, L.; Jiang, Q.; Li, N. Differential evolution algorithm with multi-population cooperation and multi-strategy integration. Neurocomputing 2021, 421, 285–302. [Google Scholar] [CrossRef]
  46. Cheng, J.; Pan, Z.; Liang, H.; Gao, Z.; Gao, J. Differential evolution algorithm with fitness and diversity ranking-based mutation operator. Swarm Evol. Comput. 2021, 61, 100816. [Google Scholar] [CrossRef]
  47. Xu, C.H.; Luo, Z.H.; Wu, G.H.; Liu, B. Grey wolf optimization algorithm based on sine factor and quantum local search. Comput. Eng. Appl. 2021, 57, 83–89. [Google Scholar] [CrossRef]
  48. Deep, K.; Bansal, J.C. Optimization of directional over current relay times using Laplace Crossover Particle Swarm Optimization (LXPSO). In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 288–293. [Google Scholar] [CrossRef]
  49. Wan, Y.G.; Li, X.; Guan, L.Z. Improved Whale Optimization Algorithm for Solving High-dimensional Optimization Problems. J. Front. Comput. Sci. Technol. 2021, 112, 107854. [Google Scholar]
  50. Awad, N.H.; Ali, M.Z.; Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC2017 Special Session and Competition on Single Objective Bound Constrained Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2016. [Google Scholar]
  51. Yue, C.T.; Price, K.V.; Suganthan, P.N.; Liang, J.J.; Ali, M.Z.; Qu, B.Y.; Awad, N.H.; Biswas, P.P. Problem Definitions and Evaluation Criteria for the CEC2020 Special Session and Competition on Single Objective Bound Constrained Numerical Optimization; Technical Report; Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou China and Technical Report; Nanyang Technological University: Singapore; Glasgow, UK, 2020. [Google Scholar]
  52. Yazdani, D.; Branke, J.; Omidvar, M.N.; Li, X.; Li, C.; Mavrovouniotis, M.; Nguyen, T.T.; Yang, S.; Yao, X. IEEE CEC 2022 Competition on Dynamic Optimization Problems Generated by Generalized Moving Peaks Benchmark. arXiv 2021, arXiv:2106.06174. [Google Scholar] [CrossRef]
  53. Wu, L.H.; Wang, Y.N.; Zhou, S.W.; Yuan, X.F. Differential evolution for nonlinear constrained optimization using non-stationary multi-stage assignment penalty function. Syst. Eng. Theory Pract. 2007, 27, 128–133. [Google Scholar] [CrossRef]
  54. Youn, B.D.; Choi, K.K.; Yang, R.J.; Gu, L. Reliability-based design optimization for crash worthiness of vehicle side impact. Struct. Multidiscip. Optim. 2004, 26, 272–283. [Google Scholar] [CrossRef]
  55. Gothania, B.; Mathur, G.; Yadav, R.P. Accelerated artificial bee colony algorithm for parameter estimation of frequency-modulated sound waves. Int. J. Electron. Commun. Eng. 2014, 7, 63–74. [Google Scholar]
  56. Hu, G.; Du, B.; Wang, X.F.; Wei, G. An enhanced black widow optimization algorithm for feature selection. Knowl.-Based Syst. 2022, 235, 107638. [Google Scholar] [CrossRef]
  57. Zheng, J.; Ji, X.; Ma, Z.; Hu, G. Construction of Local-Shape-Controlled Quartic Generalized Said-Ball Model. Mathematics 2023, 11, 2369. [Google Scholar] [CrossRef]
  58. Hu, G.; Guo, Y.X.; Wei, G.; Abualigah, L. Genghis Khan shark optimizer: A novel nature-inspired algorithm for engineering optimization. Adv. Eng. Inform. 2023, 58, 102210. [Google Scholar] [CrossRef]
  59. Hu, G.; Zheng, Y.X.; Abualigah, L.; Hussien, A.G. DETDO: An adaptive hybrid dandelion optimizer for engineering optimization. Adv. Eng. Inform. 2023, 57, 102004. [Google Scholar] [CrossRef]
Figure 1. Laplace density function curve.
Figure 2. Flowchart for the proposed EHBA optimization algorithm.
Figure 3. Convergence curves of EHBA and other algorithms on CEC2017 partial test functions.
Figure 4. Box plots of EHBA and other algorithms on CEC2017 partial test functions.
Figure 5. Convergence curves of incomplete algorithms on CEC2020.
Figure 6. Convergence curves of other HBA variant algorithms and EHBA on CEC2020.
Figure 7. Convergence curves of EHBA and other algorithms on CEC2020 partial test functions.
Figure 8. Box plots of EHBA and other algorithms on CEC2020 partial test functions.
Figure 9. Convergence curves of EHBA and other algorithms on CEC2022 partial test functions.
Figure 10. Box plots of EHBA and other algorithms on CEC2022 partial test functions.
Figure 11. Schematic view of the welded beam problem.
Figure 12. The convergence curve diagram (welded beam design problem).
Figure 13. The convergence curve diagram (vehicle side impact design).
Figure 14. The convergence curve diagram (parameter estimation of frequency-modulated sound waves).
Table 2. Parameter settings.

Algorithm | Parameters | Setting value
HBA, EHBA | coefficient of the logarithmic spiral shape β (the ability of a honey badger to get food); constant C | β = 6; C = 2
SHO | logarithmic helix constants u, v; constant parameter l | u = 0.05, v = 0.05; l = 0.05
AOA | constant parameters c1, c2, c3, c4 | c1 = 2, c2 = 6, c3 = 1, c4 = 2
WOA | control parameter a; constant parameter b | a decreases linearly from 2 to 0; b = 1
MFO | shape constant of the logarithmic spiral b | b = 1
TSA | initial interaction velocity constants Pmin, Pmax | Pmin = 1, Pmax = 4
SCA | constant parameter a | a = 2
GWO | control parameter a | a decreases linearly from 2 to 0
Table 3. Comparison results of EHBA and other methods on CEC2017.

F | Index | EHBA | HBA | SHO | SCA | TSA | WOA | MFO | GWO | AOA
F1Ave6.940 × 1032.116 × 1042.036 × 10101.776 × 10102.043 × 10101.875 × 1091.044 × 10102.469 × 1095.208 × 1010
Std6.075 × 1035.366 × 1046.035 × 1092.884 × 1095.480 × 1091.367 × 1098.340 × 1091.791 × 1097.679 × 109
Best4.300 × 1029.124 × 1026.912 × 1091.246 × 10106.148 × 1096.860 × 1081.824 × 1097.948 × 1084.107 × 1010
Rank127683549
F3Ave2.908 × 1041.885 × 1046.359 × 1046.253 × 1044.720 × 1042.536 × 1051.388 × 1055.102 × 1045.539 × 104
Std6.627 × 1035.071 × 1039.618 × 1031.100 × 1049.274 × 1037.388 × 1046.077 × 1041.381 × 1049.598 × 103
Best1.723 × 1041.077 × 1044.048 × 1044.732 × 1042.599 × 1041.688 × 1054.195 × 1041.461 × 1043.291 × 104
Rank217639845
F4Ave5.049 × 1025.151 × 1023.487 × 1032.466 × 1033.914 × 1038.298 × 1021.263 × 1036.419 × 1021.096 × 104
Std3.796 × 1012.728 × 1011.636 × 1038.131 × 1022.526 × 1031.009 × 1029.391 × 1021.665 × 1022.162 × 103
Best4.046 × 1024.733 × 1021.118 × 1031.545 × 1038.272 × 1025.865 × 1025.877 × 1025.272 × 1027.314 × 103
Rank127684539
F5Ave6.103 × 1026.216 × 1027.268 × 1028.185 × 1028.335 × 1028.309 × 1027.045 × 1026.204 × 1028.680 × 102
Std1.664 × 1012.609 × 1012.710 × 1012.758 × 1014.902 × 1015.581 × 1013.487 × 1014.249 × 1012.702 × 101
Best5.853 × 1025.657 × 1026.912 × 1027.852 × 1027.145 × 1026.699 × 1026.440 × 1025.659 × 1028.008 × 102
Rank135687429
F6Ave6.002 × 1026.141 × 1026.485 × 1026.581 × 1026.833 × 1026.801 × 1026.428 × 1026.104 × 1026.768 × 102
Std2.683 × 10−16.140 × 1006.153 × 1006.172 × 1001.378 × 1011.205 × 1011.368 × 1014.147 × 1004.811 × 100
Best6.000 × 1026.066 × 1026.352 × 1026.455 × 1026.597 × 1026.537 × 1026.194 × 1026.018 × 1026.682 × 102
Rank135698427
F7Ave8.715 × 1029.302 × 1021.131 × 1031.202 × 1031.258 × 1031.256 × 1031.052 × 1039.302 × 1021.362 × 103
Std3.822 × 1015.782 × 1015.359 × 1016.706 × 1011.000 × 1021.216 × 1021.466 × 1025.838 × 1015.943 × 101
Best8.130 × 1028.640 × 1021.036 × 1031.090 × 1031.132 × 1031.048 × 1038.339 × 1028.459 × 1021.243 × 103
Rank135687429
F8Ave8.985 × 1029.036 × 1029.781 × 1021.081 × 1031.111 × 1031.059 × 1031.016 × 1038.991 × 1021.100 × 103
Std2.029 × 1012.444 × 1012.901 × 1012.194 × 1015.049 × 1015.459 × 1015.290 × 1012.235 × 1012.551 × 101
Best8.701 × 1028.557 × 1029.206 × 1021.040 × 1031.031 × 1039.860 × 1029.466 × 1028.705 × 1021.049 × 103
Rank134796528
F9Ave1.235 × 1023.043 × 1035.260 × 1037.498 × 1031.105 × 1041.007 × 1046.909 × 1032.599 × 1039.035 × 103
Std4.100 × 1029.862 × 1028.810 × 1021.349 × 1033.400 × 1033.877 × 1031.678 × 1031.874 × 1039.305 × 102
Best9.054 × 1021.623 × 1033.284 × 1035.246 × 1035.192 × 1036.380 × 1033.775 × 1031.175 × 1036.816 × 103
Rank134698527
F10Ave4.970 × 1035.513 × 1035.693 × 1038.736 × 1037.183 × 1037.257 × 1035.657 × 1034.575 × 1038.405 × 103
Std6.279 × 1021.498 × 1035.170 × 1022.679 × 1026.318 × 1028.110 × 1026.857 × 1029.121 × 1023.467 × 102
Best2.817 × 1033.856 × 1034.769 × 1038.208 × 1035.945 × 1035.750 × 1034.122 × 1033.673 × 1037.486 × 103
Rank235967418
F11Ave1.201 × 1031.274 × 1033.282 × 1033.080 × 1035.717 × 1036.606 × 1034.853 × 1032.233 × 1037.360 × 103
Std3.024 × 1015.679 × 1011.494 × 1037.678 × 1022.695 × 1033.322 × 1038.710 × 1039.032 × 1021.705 × 103
Best1.132 × 1031.187 × 1031.620 × 1031.977 × 1031.772 × 1032.088 × 1031.375 × 1031.275 × 1033.978 × 103
Rank125478639
F12Ave1.169 × 1071.096 × 1071.983 × 10101.762 × 10102.455 × 10101.623 × 1097.032 × 1091.186 × 1096.502 × 1010
Std6.141 × 1069.818 × 1067.132 × 1093.949 × 1091.384 × 10105.255 × 1085.918 × 1091.111 × 1099.469 × 109
Best3.678 × 1063.699 × 1068.820 × 1091.406 × 10108.337 × 1096.302 × 1081.132 × 1091.201 × 1085.259 × 1010
Rank217684539
F13Ave2.862 × 1044.082× 1044.596 × 1088.223 × 1083.208 × 1092.150 × 1067.182 × 1071.436 × 1075.248 × 109
Std6.761 × 1043.591× 1041.181 × 1092.120 × 1084.547 × 1092.336 × 1063.031 × 1083.725 × 1071.977 × 109
Best2.462 × 1037.882 × 1038.005 × 1055.302 × 1084.076 × 1072.296 × 1052.883 × 1042.902 × 1041.371 × 109
Rank126783549
F14Ave4.670 × 1059.018 × 1057.040 × 1067.230 × 1062.527 × 1074.218 × 1064.315 × 1061.368 × 1061.196 × 108
Std2.301 × 1052.423 × 1065.367 × 1065.031 × 1064.207 × 1072.794 × 1064.907 × 1061.205 × 1065.024 × 107
Best1.379 × 1054.415 × 1041.256 × 1061.709 × 1066.223 × 1052.136 × 1051.560 × 1051.082 × 1053.903 × 107
Rank126784539
F15Ave1.124 × 1041.65 × 1042.910 × 1053.238 × 1071.373 × 1083.773 × 1064.010 × 1043.399 × 1062.103 × 108
Std1.343 × 1041.406 × 1045.303 × 1052.574 × 1072.274 × 1086.591 × 1062.966 × 1041.466 × 1072.340 × 108
Best1.836 × 1033.024 × 1031.386 × 1061.754 × 1069.574 × 1041.417 × 1056.000 × 1032.200 × 1046.277 × 106
Rank124786359
F16Ave2.375 × 1032.860 × 1033.054 × 1033.955 × 1033.646 × 1034.286 × 1033.255 × 1032.740 × 1035.506 × 103
Std2.289 × 1024.421 × 1023.268 × 1022.684 × 1026.790 × 1027.032 × 1024.167 × 1024.395 × 1028.078 × 102
Best1.986 × 1032.026 × 1032.277 × 1033.519 × 1032.767 × 1033.139 × 1032.631 × 1032.047 × 1034.204 × 103
Rank134768529
F17Ave1.944 × 1032.214 × 1032.408 × 1032.672 × 1032.595 × 1032.808 × 1032.602 × 1032.018 × 1033.334 × 103
Std1.504 × 1022.166 × 1022.663 × 1021.522 × 1023.417 × 1022.243 × 1023.607 × 1021.093 × 1024.563 × 102
Best1.759 × 1031.859 × 1031.886 × 1032.432 × 1032.043 × 1032.439 × 1031.957 × 1031.832 × 1032.615 × 103
Rank134758629
F18Ave1.572 × 1061.701 × 1061.457 × 1073.276 × 1072.070 × 1074.441 × 1071.377 × 1078.983 × 1061.157 × 108
Std1.083 × 1061.540 × 1061.733 × 1071.154 × 1072.021 × 1072.813 × 1071.061 × 1071.183 × 1073.164 × 107
Best2.608 × 1052.711 × 1054.623 × 1061.346 × 1071.374 × 1061.894 × 1071.344 × 1067.288 × 1055.250 × 107
Rank125768439
F19Ave6.171 × 1031.205 × 1041.056 × 1075.981 × 1073.992 × 1081.648 × 1076.896 × 1071.677 × 1065.337 × 108
Std5.556 × 1031.457 × 1043.343 × 1073.563 × 1079.671 × 1081.085 × 1072.969 × 1082.021 × 1063.838 × 108
Best1.936 × 1032.162 × 1031.021 × 1041.918 × 1071.582 × 1055.604 × 1051.109 × 1048.336 × 1031.025 × 107
Rank124685739
F20Ave2.240 × 1032.547 × 1032.638 × 1032.865 × 1032.808 × 1032.853 × 1032.750 × 1032.421 × 1032.868 × 103
Std1.131 × 1022.447 × 1022.019 × 1021.332 × 1022.107 × 1022.034 × 1022.176 × 1021.364 × 1021.370 × 102
Best2.149 × 1032.222 × 1032.305 × 1032.601 × 1032.460 × 1032.451 × 1032.320 × 1032.171 × 1032.568 × 103
Rank134867529
F21Ave2.401 × 1032.406 × 1032.503 × 1032.591 × 1032.650 × 1032.619 × 1032.498 × 1032.415 × 1032.628 × 103
Std2.010 × 1013.290 × 1012.864 × 1012.077 × 1014.877 × 1016.677 × 1015.385 × 1014.413 × 1012.389 × 101
Best2.365 × 1032.347 × 1032.455 × 1032.554 × 1032.556 × 1032.538 × 1032.391 × 1032.360 × 1032.583 × 103
Rank125697438
F22Ave2.502 × 1034.053 × 1036.678 × 1039.760 × 1038.438 × 1037.757 × 1036.669 × 1034.752 × 1039.449 × 103
Std9.030 × 1022.669 × 1031.420 × 1031.367 × 1031.624 × 1031.927 × 1037.371 × 1021.762 × 1038.479 × 102
Best2.300 × 1032.300 × 1033.574 × 1034.487 × 1033.731 × 1032.585 × 1035.660 × 1032.396 × 1037.570 × 103
Rank125976438
F23Ave2.754 × 1032.799 × 1032.987 × 1033.073 × 1033.218 × 1033.121 × 1032.829 × 1032.792 × 1033.556 × 103
Std2.395 × 1015.438 × 1014.057 × 1013.885 × 1011.546 × 1029.451 × 1013.803 × 1014.361 × 1011.205 × 102
Best2.700 × 1032.724 × 1032.918 × 1033.005 × 1033.028 × 1032.989 × 1032.779 × 1032.741 × 1033.297 × 103
Rank135687429
F24Ave2.960 × 1033.086 × 1033.300 × 1033.227 × 1033.365 × 1033.229 × 1032.991 × 1032.923 × 1033.772 × 103
Std3.035 × 1011.867 × 1027.077 × 1014.405 × 1011.219 × 1021.025 × 1023.539 × 1013.966 × 1011.888 × 102
Best2.902 × 1032.851 × 1033.195 × 1033.156 × 1033.158 × 1033.060 × 1032.940 × 1032.867 × 1033.467 × 103
Rank247586319
F25Ave2.901 × 1032.905 × 1033.456 × 1033.345 × 1033.593 × 1033.098 × 1033.232 × 1033.015 × 1034.624 × 103
Std1.828 × 1011.957 × 1012.252 × 1021.121 × 1023.408 × 1025.812 × 1014.286 × 1028.485 × 1014.476 × 102
Best2.884 × 1032.884 × 1033.102 × 1033.196 × 1033.203 × 1032.991 × 1032.888 × 1032.940 × 1033.695 × 103
Rank127684539
F26Ave4.555 × 1034.504 × 1037.303 × 1037.551 × 1038.400 × 1038.546 × 1036.068 × 1034.891 × 1031.029 × 104
Std6.324 × 1021.120 × 1038.007 × 1023.042 × 1021.726 × 1039.515 × 1025.199 × 1024.851 × 1026.964 × 102
Best2.800 × 1032.811 × 1035.649 × 1037.043 × 1033.796 × 1037.109 × 1035.081 × 1034.087 × 1039.013 × 103
Rank215678439
F27Ave3.231 × 1033.408 × 1033.540 × 1033.514 × 1033.670 × 1033.560 × 1033.261 × 1033.257 × 1033.654 × 103
Std2.172 × 1012.140 × 1021.374 × 1028.689 × 1012.326 × 1022.318 × 1022.638 × 1013.699 × 1016.057 × 102
Best3.204 × 1033.203 × 1033.368 × 1033.382 × 1033.366 × 1033.293 × 1033.231 × 1033.203 × 1033.200 × 103
Rank146597328
F28Ave3.253 × 1033.233 × 1034.160 × 1034.262 × 1034.721 × 1033.544 × 1033.783 × 1033.451 × 1035.352 × 103
Std5.239 × 1012.465 × 1014.107 × 1022.602 × 1025.381 × 1021.141 × 1023.947 × 1021.029 × 1021.571 × 103
Best3.206 × 1033.203 × 1033.590 × 1033.942 × 1033.898 × 1033.403 × 1033.304 × 1033.310 × 1033.300 × 103
Rank216784539
F29Ave3.621 × 1034.379 × 1034.350 × 1035.116 × 1034.857 × 1035.207 × 1034.299 × 1033.905 × 1036.297 × 103
Std1.801 × 1028.364 × 1022.871 × 1023.841 × 1024.141 × 1024.619 × 1023.061 × 1022.067 × 1027.947 × 102
Best3.393 × 1033.643 × 1033.758 × 1034.436 × 1034.251 × 1034.427 × 1033.709 × 1033.655 × 1034.940 × 103
Rank154768329
F30Ave2.703 × 1045.087 × 1048.922 × 1061.603 × 1082.463 × 1073.301 × 1071.178 × 1066.930 × 1061.191 × 109
Std1.575 × 1046.900 × 1049.825 × 1068.697 × 1071.589 × 1072.176 × 1072.038 × 1065.327 × 1065.720 × 107
Best8.203 × 1031.294 × 1041.040 × 1065.523 × 1075.505 × 1065.552 × 1061.917× 1042.344 × 1062.332 × 107
Rank125867349
Mean Rank | 1.2069 | 2.4483 | 5.2759 | 6.5172 | 7.3793 | 6.3448 | 4.5862 | 2.6897 | 8.5517
Result | 1 | 2 | 5 | 7 | 8 | 6 | 4 | 3 | 9
The bold data represent the optimal average data among all the comparison algorithms.
Table 4. Wilcoxon rank sum test values of each comparison algorithm (30-dimensional CEC2017 test set).

Result | HBA | SHO | SCA | TSA | WOA | MFO | GWO | AOA
F14.09356 × 10−1-------
F32.30247 × 10−57.89803 × 10−8-2.06160 × 10−6-7.89803 × 10−81.10447 × 10−51.65708 × 10−7
F43.50702 × 10−1---7.89803 × 10−87.89803 × 10−87.94795 × 10−7-
F59.09074 × 10−2----7.89803 × 10−88.18149 × 10−1-
F6--------
F75.62904 × 10−4----4.16576 × 10−51.95335 × 10−3-
F84.56951 × 10−11.06457 × 10−7----9.67635 × 10−1-
F92.95975 × 10−7-----1.80745 × 10−5-
F105.79218 × 10−14.15502 × 10−4---3.63883 × 10−35.56046 × 10−3-
F111.59972 × 10−5-------
F124.72676 × 10−11.82672 × 10−41.82672 × 10−41.82672 × 10−41.82672 × 10−41.82672 × 10−41.82672 × 10−41.82672 × 10−4
F135.11526 × 10−3---7.89803 × 10−81.20089 × 10−66.91658 × 10−7-
F141.13297 × 10−21.82672 × 10−41.82672 × 10−44.39639 × 10−41.70625 × 10−33.76353 × 10−22.11339 × 10−21.82672 × 10−4
F153.60483 × 10−21.44383 × 10−4---1.44383 × 10−41.20089 × 10−6-
F161.79364 × 10−41.20089 × 10−6-9.17277 × 10−8-1.65708 × 10−73.05663 × 10−3-
F172.22203 × 10−41.57567 × 10−6-1.20089 × 10−6-1.04727 × 10−66.01106 × 10−2-
F189.69850 × 10−11.82672 × 10−41.82672 × 10−45.82840 × 10−41.82672 × 10−41.31494 × 10−35.79536 × 10−31.82672 × 10−4
F192.73285 × 10−12.56295 × 10−7---1.91771 × 10−71.65708 × 10−7-
F202.04071 × 10−54.53897 × 10−7-9.17277 × 10−89.17277 × 10−82.95975 × 10−72.22203 × 10−4-
F216.55361 × 10−1----2.06160 × 10−66.35945 × 10−1-
F223.49946 × 10−61.65708 × 10−77.89803 × 10−89.17277 × 10−89.17277 × 10−81.91771 × 10−77.94795 × 10−7-
F238.35717 × 10−4----5.22689 × 10−71.78238 × 10−3-
F249.78649 × 10−3----9.04540 × 10−38.35717 × 10−4-
F253.23482 × 10−1----3.49946 × 10−67.89803 × 10−8-
F261.19856 × 10−1--1.20089 × 10−6-9.17277 × 10−81.55570 × 10−1-
F271.29405 × 10−4---7.89803 × 10−84.68040 × 10−54.32018 × 10−32.85305 × 10−1
F289.09074 × 10−2---1.23464 × 10−71.43085 × 10−73.41558 × 10−71.91771 × 10−7
F291.80297 × 10−62.21776 × 10−7---2.56295 × 10−74.68040 × 10−5-
F303.23482 × 10−1----6.67365 × 10−6--
+/=/− | 3/13/13 | 0/12/17 | 0/0/29 | 0/0/29 | 0/0/29 | 0/0/29 | 1/4/24 | 0/1/28
The bold data represent p values with a significance level greater than 0.05.
Table 5. Comparison results of EHBA and other methods on CEC2020 test sets.

F | Index | EHBA | HBA | SHO | SCA | TSA | WOA | MFO | GWO | AOA
F1Ave5.146 × 1039.416 × 1032.123 × 10101.776 × 10102.198 × 10101.568 × 1098.572 × 1092.396 × 1094.962 × 1010
Std5.482 × 1031.051 × 1047.070 × 1092.384 × 1091.096 × 10107.708 × 1085.976 × 1092.097 × 1097.739 × 109
Best1.275 × 1023.130 × 1025.350 × 1091.228 × 10101.057 × 10108.104 × 1082.115 × 1043.908 × 1073.301 × 1010
Rank127683549
F2Ave5.005 × 1035.007 × 1035.549 × 1038.775 × 1037.437 × 1037.134 × 1035.347 × 1034.694 × 1038.418 × 103
Std3.957 × 1027.068 × 1024.511 × 1022.669 × 1024.619 × 1021.104 × 1033.973 × 1021.321 × 1035.213 × 102
Best4.235 × 1034.055 × 1034.440 × 1038.167 × 1036.495 × 1035.392 × 1034.772 × 1033.285 × 1037.589 × 103
Rank235976418
F3Ave8.701 × 1029.070 × 1021.121 × 1031.213 × 1031.210 × 1031.293 × 1031.148 × 1039.049 × 1021.375 × 103
Std2.882 × 1016.115 × 1016.218 × 1016.894 × 1019.149 × 1016.968 × 1011.793 × 1025.734 × 1016.906 × 101
Best8.255 × 1028.244 × 1021.026 × 1031.117 × 1031.018 × 1031.187 × 1038.803 × 1028.309 × 1021.232 × 103
Rank134768529
F4Ave1.900 × 1031.900 × 1031.900 × 1031.912 × 1031.919 × 1031.900 × 1034.454 × 1041.900 × 1031.900 × 103
Std0.000 × 1000.000 × 1000.000 × 1008.133 × 1005.474 × 1030.000 × 1005.325 × 1042.049 × 10−10.000 × 100
Best1.900 × 1031.900 × 1031.900 × 1031.900 × 1031.908 × 1031.900 × 1031.907 × 1031.900 × 1031.900 × 103
Rank111781911
F5Ave2.524 × 1063.533 × 1059.756 × 1061.139 × 1071.433 × 1071.085 × 1075.594 × 1062.112 × 1068.184 × 107
Std1.985 × 1062.935 × 1058.155 × 1063.445 × 1061.906 × 1077.497 × 1066.985 × 1063.032 × 1063.949 × 107
Best1.437 × 1054.945 × 1043.006 × 1065.768 × 1062.692 × 1051.752 × 1062.402 × 1051.549 × 1052.854 × 107
Rank315786429
F6Ave1.966 × 1032.285 × 1032.295 × 1033.832 × 1033.073 × 1033.638 × 1032.606 × 1032.083 × 1034.109 × 103
Std1.016 × 1023.105 × 1022.523 × 1022.371 × 1027.328 × 1026.569 × 1024.053 × 1021.803 × 1027.046 × 102
Best1.752 × 1031.745 × 1031.957 × 1033.480 × 1032.006 × 1032.529 × 1031.909 × 1031.783 × 1032.997 × 103
Rank134867529
F7Ave4.674 × 1051.039 × 1061.575 × 1064.186 × 1063.523 × 1061.018 × 1071.409 × 1062.254 × 1062.817 × 107
Std3.712 × 1054.159 × 1063.022 × 1063.272 × 1064.506 × 1068.030 × 1061.402 × 1063.431 × 1061.977 × 107
Best1.085 × 1051.666 × 1049.209 × 1046.239 × 1058.674 × 1047.705 × 1058.947 × 1041.207 × 1055.958 × 106
Rank124768359
F8Ave2.760 × 1034.184 × 1036.149 × 1039.928 × 1038.306 × 1037.083 × 1036.621 × 1035.419 × 1039.160 × 103
Std1.418 × 1032.438 × 1031.242 × 1031.196 × 1031.604 × 1031.933 × 1031.109 × 1032.042 × 1031.592 × 103
Best2.300 × 1032.300 × 1033.994 × 1035.288 × 1033.975 × 1032.793 × 1033.695 × 1032.461 × 1035.740 × 103
Rank124976538
F9Ave2.952 × 1033.015 × 1033.283 × 1033.228 × 1033.396 × 1033.263 × 1032.999 × 1032.942 × 1033.864 × 103
Std3.252 × 1011.594 × 1026.334 × 1013.797 × 1011.185 × 1029.144 × 1013.562 × 1016.694 × 1011.991 × 102
Best2.914 × 1032.894 × 1033.165 × 1033.155 × 1033.228 × 1033.073 × 1032.925 × 1032.867 × 1033.431 × 103
Rank247586319
F10Ave2.899 × 1032.904 × 1033.520 × 1033.451 × 1033.602 × 1033.135 × 1033.398 × 1033.013 × 1034.774 × 103
Std1.665 × 1011.847 × 1012.674 × 1021.503 × 1024.233 × 1024.906 × 1014.504 × 1026.055 × 1015.229 × 102
Best2.884 × 1032.884 × 1033.117 × 1033.186 × 1033.086 × 1033.054 × 1032.896 × 1032.933 × 1033.875 × 103
Rank127684539
Mean Rank | 1.4 | 2.3 | 4.8 | 7.1 | 7.2 | 5.5 | 4.8 | 2.4 | 8
Result | 1 | 2 | 4 | 6 | 7 | 6 | 4 | 3 | 9
The bold data represent the optimal average data among all the comparison algorithms.
Table 6. Wilcoxon rank sum test values of each comparison algorithm on CEC2020 test set.

Result | HBA | SHO | SCA | TSA | WOA | MFO | GWO | AOA
F14.1124 × 10−2-------
F26.5536 × 10−14.1550 × 10−4--2.2178 × 10−72.9441 × 10−21.4810 × 10−3-
F37.2045 × 10−2----1.9177 × 10−72.3903 × 10−2-
F4NaNNaN8.0065 × 10−98.0065 × 10−9NaN8.0065 × 10−93.2162 × 10−6NaN
F55.8736 × 10−61.4149 × 10−51.0646 × 10−74.3202 × 10−31.1045 × 10−51.4042 × 10−11.1986 × 10−1-
F69.2091 × 10−47.5774 × 10−6-1.6571 × 10−7-1.0473 × 10−63.1517 × 10−2-
F78.2924 × 10−56.7868 × 10−22.9598 × 10−73.0566 × 10−31.2346 × 10−71.1433 × 10−25.0751 × 10−1-
F83.7499 × 10−42.6898 × 10−69.1728 × 10−81.2346 × 10−73.9388 × 10−71.5757 × 10−65.8736 × 10−61.6571 × 10−7
F92.2869 × 10−1----6.6104 × 10−56.3892 × 10−2-
F103.6484 × 10−1----3.4156 × 10−71.0646 × 10−7-
+/=/− | 1/5/4 | 0/2/8 | 0/0/10 | 0/0/10 | 0/1/9 | 0/1/9 | 3/3/4 | 0/1/9
The bold data represent p values with a significance level greater than 0.05.
Table 7. Comparison results of EHBA and other methods on CEC2022 test set.

F | Index | EHBA | HBA | SHO | SCA | TSA | WOA | MFO | GWO | AOA
F1Ave5.364 × 1031.104 × 1041.617 × 1041.497 × 1041.579 × 1042.766 × 1043.116 × 1041.430 × 1041.954 × 104
Std2.919 × 1034.507 × 1044.914 × 1033.327 × 1035.756 × 1039.329 × 1032.064 × 1044.016 × 1033.527 × 103
Best1.098 × 1033.200 × 1039.519 × 1038.203 × 1039.326 × 1031.648 × 1045.139 × 1036.364 × 1031.039 × 104
Rank126458937
F2Ave4.511 × 1024.589 × 1027.055 × 1027.453 × 1027.319 × 1025.662 × 1025.508 × 1025.013 × 1021.945 × 103
Std8.416 × 1001.438 × 1011.190 × 1028.521 × 1011.745 × 1025.987 × 1011.236 × 1024.262 × 1015.160 × 102
Best4.449 × 1024.290 × 1025.753 × 1026.206 × 1024.925 × 1014.594 × 1024.449 × 1024.540 × 1021.290 × 103
Rank126875439
F3Ave6.001 × 1026.053 × 1026.383 × 1026.462 × 1026.630 × 1026.666 × 1026.221 × 1026.055 × 1026.646 × 102
Std3.968 × 10−13.002 × 1005.963 × 1004.799 × 1001.587 × 1011.215 × 1011.055 × 1013.531 × 1006.684 × 100
Best6.000 × 1026.013 × 1026.247 × 1026.367 × 1026.289 × 1026.340 × 1026.092 × 1026.008 × 1026.528 × 102
Rank125679438
F4Ave8.537 × 1028.541 × 1028.965 × 1029.462 × 1029.548 × 1029.337 × 1028.908 × 1028.535 × 1029.408 × 102
Std1.346 × 1011.574 × 1011.796 × 1011.282 × 1012.473 × 1013.202 × 1012.209 × 1012.206 × 1011.269 × 101
Best8.301 × 1028.239 × 1028.707 × 1029.193 × 1029.157 × 1028.791 × 1028.413 × 1028.231 × 1029.141 × 102
Rank235896417
F5Ave1.053 × 1031.401 × 1032.403 × 1032.488 × 1034.692 × 1033.664 × 1032.987 × 1031.193 × 1032.843 × 103
Std2.894 × 1023.605 × 1021.908 × 1024.189 × 1021.698 × 1031.372 × 1031.105 × 1032.092 × 1024.380 × 102
Best9.008 × 1029.116 × 1022.047 × 1031.663 × 1032.126 × 1031.901 × 1031.241 × 1039.158 × 1022.082 × 103
Rank134598726
F6Ave7.741 × 1039.352 × 1034.906 × 1069.297 × 1072.896 × 1081.105 × 1061.035 × 1087.845 × 1061.024 × 109
Std6.441 × 1038.701 × 1039.685 × 1066.017 × 1067.746 × 1081.472 × 1064.192 × 1081.549 × 1076.891 × 108
Best2.355 × 1031.957 × 1031.695 × 1041.740 × 1073.064 × 1052.296 × 1042.521 × 1032.558 × 1039.256 × 107
Rank124683759
F7Ave2.046 × 1032.062 × 1032.120 × 1032.149 × 1032.267 × 1032.230 × 1032.123 × 1032.074 × 1032.174 × 103
Std3.150 × 1011.698 × 1012.485 × 1012.150 × 1011.252 × 1027.125 × 1015.616 × 1013.165 × 1012.548 × 101
Best2.024 × 1032.039 × 1032.065 × 1032.110 × 1032.127 × 1032.098 × 1032.03 × 1032.034 × 1032.128 × 103
Rank124698537
F8Ave2.224 × 1032.273 × 1032.261 × 1032.275 × 1032.409 × 1032.289 × 1032.264 × 1032.267 × 1032.275 × 103
Std1.569 × 1006.082 × 1014.657 × 1012.134 × 1014.029 × 1026.970 × 1014.653 × 1015.435 × 1018.732 × 101
Best2.223 × 1032.223 × 1032.227 × 1032.243 × 1032.236 × 1032.232 × 1032.223 × 1032.226 × 1032.232 × 103
Rank152698347
F9Ave2.481 × 1032.481 × 1032.595 × 1032.599 × 1032.674 × 1032.579 × 1032.513 × 1032.522 × 1033.173 × 103
Std2.371 × 10−45.208 × 10−24.287 × 1012.776 × 1018.627 × 1014.222 × 1013.371 × 1013.303 × 1012.213 × 102
Best2.481 × 1032.481 × 1032.514 × 1032.551 × 1032.580 × 1032.524 × 1032.481 × 1032.481 × 1032.826 × 103
Rank126785349
F10Ave2.471 × 1033.731 × 1033.202 × 1033.104 × 1035.402 × 1034.747 × 1033.970 × 1033.323 × 1034.838 × 103
Std4.709 × 1011.164 × 1036.864 × 1021.293 × 1038.277 × 1021.097 × 1031.087 × 1038.470 × 1021.671 × 103
Best2.404 × 1032.501 × 1032.521 × 1032.526 × 1032.809 × 1032.501 × 1032.501 × 1032.500 × 1032.624 × 103
Rank153297648
F11Ave2.900 × 1032.900 × 1035.207 × 1034.542 × 1035.785 × 1033.788 × 1033.756 × 1033.430 × 1037.983 × 103
Std1.124 × 1027.947 × 1015.532 × 1025.781 × 1021.228 × 1039.650 × 1026.090 × 1022.009 × 1025.790 × 102
Best2.600 × 1032.600 × 1034.054 × 1033.742 × 1033.961 × 1032.866 × 1032.900 × 1033.131 × 1037.016 × 103
Rank217685439
F12Ave2.970 × 1033.091 × 1033.174 × 1033.072 × 1033.301 × 1033.042 × 1032.960 × 1032.972 × 1033.455 × 103
Std4.170 × 1011.030 × 1029.535 × 1013.083 × 1011.789 × 1026.701 × 1011.533 × 1012.737 × 1014.983 × 102
Best2.944 × 1032.960 × 1033.059 × 1033.026 × 1032.987 × 1032.960 × 1032.942 × 1032.946 × 1032.900 × 103
Rank267584139
Mean Rank | 1.2500 | 2.9167 | 4.9167 | 5.7500 | 8.0000 | 6.3333 | 4.7500 | 3.1667 | 7.9167
Result | 1 | 2 | 5 | 6 | 8 | 7 | 4 | 3 | 9
The bold data represent the optimal average data among all the comparison algorithms.
Table 8. Wilcoxon rank sum test values of each comparison algorithm on CEC2022 test set.

Result | HBA | SHO | SCA | TSA | WOA | MFO | GWO | AOA
F17.5774 × 10−61.2346 × 10−71.6571 × 10−71.9177 × 10−7-1.8030 × 10−65.2269 × 10−77.8980 × 10−8
F24.0936 × 10−1---9.1728 × 10−81.2941 × 10−43.4156 × 10−7-
F39.1728 × 10−8-----1.0646 × 10−7-
F48.6043 × 10−11.2346 × 10−7---3.9874 × 10−66.1677 × 10−1-
F52.4706 × 10−41.0646 × 10−71.2346 × 10−77.8980 × 10−87.8980 × 10−81.9177 × 10−71.9533 × 10−37.8980 × 10−8
F68.3923 × 10−11.6571 × 10−7--7.8980 × 10−89.7865 × 10−37.4064 × 10−5-
F79.2780 × 10−51.2009 × 10−66.9166 × 10−71.9177 × 10−71.4309 × 10−71.1045 × 10−54.1658 × 10−53.9388 × 10−7
F87.4064 × 10−51.0646 × 10−7---1.8030 × 10−66.0148 × 10−7-
F92.4706 × 10−4-------
F101.2505 × 10−59.1728 × 10−82.5629 × 10−7-2.5629 × 10−72.5629 × 10−71.1590 × 10−4-
F117.6431 × 10−2---9.1266 × 10−71.5997 × 10−5--
F123.0691 × 10−62.2178 × 10−71.2009 × 10−62.2178 × 10−78.5974 × 10−64.9033 × 10−12.1841 × 10−11.1355 × 10−1
+/=/− | 1/4/7 | 0/0/12 | 0/0/12 | 0/0/12 | 0/0/12 | 1/1/10 | 0/2/10 | 0/1/11
The bold data represent p values with a significance level greater than 0.05.
Table 9. Statistical results of the welded beam problem.

Methods | Optimum | Mean | Worst | Std
EHBA | 1.4337819 | 1.4349374 | 1.4364151 | 0.0007941
HBA | 1.4338074 | 1.4338090 | 1.4338362 | 0.0000064
SHO | 1.4446079 | 1.5038656 | 1.5661039 | 0.0349623
SCA | 1.4831125 | 1.5398212 | 1.5972414 | 0.0277449
TSA | 1.4410437 | 1.4476576 | 1.4557406 | 0.0037096
WOA | 1.4666907 | 1.9728059 | 3.4551200 | 0.5119476
MFO | 1.4338074 | 1.4907124 | 1.8869767 | 0.1384304
GWO | 1.4345662 | 1.4379413 | 1.4472849 | 0.0036150
AOA | 1.6202084 | 1.9650833 | 2.3842840 | 0.2300621
Table 10. Optimal results of the welded beam problem.

Methods | γ1 | γ2 | γ3 | γ4 | Optimum
EHBA | 0.2053846 | 1.3360529 | 9.0363311 | 0.2057430 | 1.4337819
HBA | 0.2057298 | 1.3335605 | 9.0366239 | 0.2057296 | 1.4338074
SHO | 0.1897981 | 1.4872939 | 9.0352177 | 0.2057937 | 1.4446079
SCA | 0.1764307 | 1.5879257 | 9.1993613 | 0.2070625 | 1.4831125
TSA | 0.2016789 | 1.3854980 | 9.0578041 | 0.2056496 | 1.4410437
WOA | 0.1711888 | 1.7247850 | 9.0084039 | 0.2070206 | 1.4666907
MFO | 0.2057298 | 1.3335604 | 9.0366239 | 0.2057296 | 1.4338074
GWO | 0.2054924 | 1.3395471 | 9.0364999 | 0.2057456 | 1.4345662
AOA | 0.1810423 | 1.5299606 | 10.0000000 | 0.2094384 | 1.6202084
Table 11. Optimal results of side impact design problems for cars.

Methods | γ1 | γ2 | γ3 | γ4 | γ5 | γ6 | γ7 | γ8 | γ9 | γ10 | γ11 | Optimum
EHBA | 0.5000 | 1.0525 | 0.5000 | 0.5000 | 0.5000 | 1.5000 | 0.5000 | 0.3450 | 0.3450 | −30.0000 | 0.0000 | 22.2383119
HBA | 0.5000 | 1.0525 | 0.5000 | 0.5000 | 0.5000 | 1.5000 | 0.5000 | 0.3450 | 0.3450 | −30.0000 | 0.0000 | 22.2383119
SHO | 0.5167 | 1.0452 | 0.5000 | 0.5000 | 0.5000 | 1.4948 | 0.5000 | 0.3437 | 0.3309 | −29.9994 | −0.0346 | 22.2725325
SCA | 0.5000 | 1.0589 | 0.5000 | 0.5000 | 0.5000 | 1.4546 | 0.5000 | 0.3450 | 0.3450 | −30.0000 | −0.1623 | 22.3236875
TSA | 0.5000 | 1.0532 | 0.5000 | 0.5000 | 0.5000 | 1.5000 | 0.5000 | 0.3450 | 0.3450 | −30.0000 | −1.0450 | 22.2398005
WOA | 0.5000 | 1.0525 | 0.5000 | 0.5000 | 0.5000 | 1.5000 | 0.5000 | 0.3450 | 0.3450 | −30.0000 | 6.4878 | 22.2383130
MFO | 0.5000 | 1.0525 | 0.5000 | 0.5000 | 0.5000 | 1.5000 | 0.5000 | 0.3450 | 0.3450 | −30.0000 | 0.0000 | 22.2383119
GWO | 0.5315 | 1.0386 | 0.5000 | 0.5000 | 0.5000 | 1.5000 | 0.5000 | 0.3450 | 0.3450 | −29.9964 | 0.0384 | 22.2390089
AOA | 0.5000 | 1.0964 | 0.5000 | 0.5000 | 0.5000 | 1.5000 | 0.5000 | 0.3450 | 0.1920 | −26.8686 | −0.0529 | 22.4954648
Table 12. Statistical results of vehicle side impact design issues.

Methods | Optimum | Mean | Worst | Std
EHBA | 22.2383119 | 22.2383145 | 22.2383439 | 0.0000075
HBA | 22.2383119 | 22.7318745 | 25.1456712 | 0.7577413
SHO | 22.2725325 | 22.4135001 | 22.5908204 | 0.0807070
SCA | 22.3236875 | 22.8925219 | 23.5196336 | 0.3130891
TSA | 22.2398005 | 22.4264527 | 25.4375037 | 0.7089918
WOA | 22.2383130 | 22.9663887 | 24.5497539 | 0.7826121
MFO | 22.2383119 | 22.2832051 | 22.9923773 | 0.1680703
GWO | 22.2390089 | 22.2508953 | 22.2878793 | 0.0159526
AOA | 22.4954648 | 23.6330412 | 25.9671223 | 0.9988731
Table 13. Optimal results for parameter estimation of FM sound waves.

Methods | γ1 | γ2 | γ3 | γ4 | γ5 | γ6 | Optimum
EHBA | 0.9992900 | 5.0002498 | −1.5010887 | 4.7998468 | 2.0002121 | 4.9000398 | 0.0000370
HBA | 0.6268062 | −0.0273601 | 4.3856049 | −4.8936163 | −0.1260477 | −5.1736205 | 10.9422767
SHO | 1.0962431 | 0.0355469 | −0.6068540 | −0.0416925 | 4.2924703 | 4.8843543 | 9.6438934
SCA | −0.5006558 | −0.0466897 | 4.4856557 | 4.8840717 | −0.0002421 | 0.8250744 | 12.6418622
TSA | 0.6205581 | 0.0240913 | 4.3344360 | −4.7443413 | 4.0024033 | −0.0372704 | 11.6162542
WOA | 0.7647151 | 0.1268153 | −1.1065172 | −0.1419302 | 4.1776909 | 4.9035396 | 9.0496958
MFO | 0.8563265 | 4.9215368 | −1.1521163 | 2.4954631 | 4.9330869 | 2.4246832 | 11.2071984
GWO | 0.8486533 | 5.0087885 | 1.4857537 | −4.7910967 | 1.9845038 | −4.9010655 | 0.7131581
AOA | 0.7489891 | 0.0912872 | 0.9725311 | 0.0878281 | 4.4087630 | −4.8947747 | 9.2791912
Table 14. Statistical results of FM sound wave parameter estimation problem.

Methods | Optimum | Mean | Worst | Std
EHBA | 0.0000370 | 9.4855667 | 20.1670529 | 6.7437197
HBA | 10.9422767 | 17.6689836 | 23.1827480 | 3.4680614
SHO | 9.6438934 | 19.3259034 | 25.1632318 | 6.4388645
SCA | 12.6418622 | 21.7408845 | 24.9466231 | 2.6678648
TSA | 11.6162542 | 19.4359242 | 25.2052649 | 4.2413626
WOA | 9.0496958 | 19.3233360 | 25.0808811 | 4.7363808
MFO | 11.2071984 | 19.7214875 | 27.4896812 | 6.7926567
GWO | 0.7131581 | 16.0919444 | 25.0430495 | 6.4865698
AOA | 9.2791912 | 25.6683935 | 29.8550028 | 5.6292789
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
