Article

A Novel Ensemble of Arithmetic Optimization Algorithm and Harris Hawks Optimization for Solving Industrial Engineering Optimization Problems

1 Key Laboratory of CNC Equipment Reliability, Ministry of Education, School of Mechanical and Aerospace Engineering, Jilin University, Changchun 130022, China
2 Changchun Vocational Institute of Technology, Changchun 130033, China
* Author to whom correspondence should be addressed.
Machines 2022, 10(8), 602; https://doi.org/10.3390/machines10080602
Submission received: 10 July 2022 / Revised: 20 July 2022 / Accepted: 21 July 2022 / Published: 24 July 2022
(This article belongs to the Section Machine Design and Theory)

Abstract

Recently, numerous new meta-heuristic algorithms have been proposed for solving optimization problems. According to the No Free Lunch theorem, no single algorithm can solve all optimization problems. To solve industrial engineering design problems more efficiently, we take inspiration from the algorithmic frameworks of the Arithmetic Optimization Algorithm (AOA) and Harris Hawks Optimization (HHO) and propose a novel hybrid algorithm based on these two algorithms, named EAOAHHO. Pinhole imaging opposition-based learning is introduced into the proposed algorithm to increase the initial population diversity and the capability to escape from local optima. Furthermore, a composite mutation strategy enhances the exploitation and exploration of EAOAHHO to obtain better convergence accuracy. The performance of EAOAHHO is verified on 23 benchmark functions and the IEEE CEC2017 test suite. Finally, we verify the superiority of the proposed EAOAHHO over other advanced meta-heuristic algorithms in solving four industrial engineering design problems.

1. Introduction

Optimization is the process of finding the best solution among all possible options for a particular problem [1,2]. Optimization problems have received increasing attention from different disciplines and engineering fields in recent years, and are divided into three main categories: constrained optimization, unconstrained optimization, and constrained engineering optimization problems [3]. For the majority of engineering applications, optimization is required to find the optimal results in terms of minimizing cost and energy consumption or maximizing profit, efficiency, and yield under certain constraints. With the increasing complexity of engineering optimization problems, it is extremely difficult for traditional optimization methods to cope with the multi-constraint, high-dimensional, multi-objective, and other characteristics of current optimization problems. Therefore, constructing an effective optimization strategy is an essential research direction. Compared with traditional methods, meta-heuristic algorithms, which are characterized by simple structure, flexibility, gradient-free operation, easy implementation, and a strong capability of escaping local optima, can approach the global optimum. Currently, numerous meta-heuristic algorithms have been proposed to solve complex optimization problems, such as engineering optimization [4,5], path planning [6,7], feature selection [8], and image segmentation [9].
Meta-heuristic algorithms can be classified into three categories based on their design inspiration: evolutionary, physics-based, and swarm-based [10]. Evolutionary algorithms simulate Darwinian biological evolution. Such algorithms initialize a random population and then evolve it during the optimization process using one or more operators such as crossover, mutation, and selection, without taking the previous population execution into account [11]. The main common evolutionary algorithms include the Genetic Algorithm (GA) [12], Differential Evolution (DE) [13], etc. The second class of algorithms is inspired by the principles of physics and physical phenomena in the universe, such as Simulated Annealing (SA) [14], the Big Bang-Big Crunch (BB-BC) algorithm [15], the Gravity Search Algorithm (GSA) [16], the Artificial Chemical Reaction Optimization Algorithm (ACROA) [17], the Galaxy-Based Search Algorithm (GBSA) [18], the Multi-Verse Optimizer (MVO) [19], and Atom Search Optimization (ASO) [20]. The last category generally originates from the collective behavior of social creatures; the natural world is the key inspiration for most population-based stochastic optimization strategies [21]. Particle Swarm Optimization (PSO), derived from the study of bird predation behavior, is a well-known swarm intelligence algorithm [22,23]. PSO has the advantages of a simple structure and few parameter settings, but it is susceptible to being trapped in local optima [1]. In recent years, a large number of scholars have studied the group behavior of creatures, and better-performing swarm intelligence algorithms have emerged, such as the Ant Lion Optimizer (ALO) [24], Salp Swarm Algorithm (SSA) [25], Grey Wolf Optimizer (GWO) [26], Harris Hawks Optimization (HHO) [27], Aquila Optimizer (AO) [28], Spotted Hyena Optimizer (SHO) [29], Lion Optimization Algorithm (LOA) [30], Whale Optimization Algorithm (WOA) [31], and so on.
However, most general meta-heuristic algorithms suffer from easily falling into local optima, slow convergence speed, and poor convergence accuracy when solving complex optimization problems. As proved by the No Free Lunch (NFL) theorem, no single algorithm can address all optimization problems [32]. Therefore, how to reasonably improve algorithms and apply them to specific problems has become a hot research issue. For example, Fan et al. [33] proposed a refracted Salp Swarm Algorithm (RSSA), which improves the basic SSA with a refracted opposition-based learning strategy, a multi-leader mechanism, and an adaptive conversion parameter strategy; the improved RSSA is significantly better than the basic SSA in applications in the field of structural parameter identification. Zhang et al. [1] proposed an improved Harris hawks optimization based on adaptive cooperative foraging and dispersed foraging strategies to remedy the shortcomings of HHO in the exploration phase, where population diversity is low and the algorithm easily falls into local optima. In addition, some scholars enhance performance by hybridizing two or more algorithms. For example, in order to accelerate the global search phase of the existing Harris hawks optimization and escape from the local search space, Kamboj et al. [34] utilized the sine cosine algorithm to develop a hybrid variant of the Harris hawks optimizer named the hybrid Harris Hawks-Sine Cosine Algorithm (hHHO-SCA).
In recent years, Harris Hawks Optimization (HHO) [27], a popular swarm intelligence algorithm, has been modified by a large number of scholars and extensively applied to various optimization problems, such as engineering design [34], path planning [35], image segmentation [36], and fault diagnosis [37]. Compared with HHO, the Arithmetic Optimization Algorithm (AOA) [38], proposed in 2021, is a newer meta-heuristic algorithm with more scope for performance enhancement. P. Arun Mozhi Devan et al. [39] proposed an Arithmetic-Trigonometric Optimization Algorithm that combines the traditional sine cosine algorithm with the arithmetic optimization algorithm to enhance the convergence speed and the search area in the exploration and exploitation phases. To achieve better convergence during the exploration and exploitation phases, Namrata Panga et al. [40] proposed an improved arithmetic optimization algorithm that incorporates additional functions such as square, cube, sine, and cosine into the algorithm's stochastic scaling coefficient. In this paper, we propose an ensemble of the Arithmetic Optimization Algorithm and Harris Hawks Optimization, named EAOAHHO, which outperforms the basic AOA and HHO owing to the integration mechanism of the two algorithms and the introduction of pinhole imaging opposition-based learning (PIOBL) and a composite mutation strategy (CMS). The main contributions of this paper are summarized as follows:
  • Inspired by the algorithm architecture of the AOA and HHO, we propose a novel hybrid algorithm based on these two algorithms, named EAOAHHO.
  • PIOBL helps the proposed algorithm to increase the original population diversity and the capability to escape from local optima.
  • CMS can enhance the proposed algorithm exploitation and exploration to obtain better convergence accuracy.
  • The performance of EAOAHHO is verified on 23 benchmark functions, the IEEE CEC2017 test suite, and four industrial engineering design problems. The experimental results demonstrate the superiority of EAOAHHO over the basic AOA, HHO, and other advanced meta-heuristic algorithms in handling the above problem.
The remainder of this article is arranged as follows: In Section 2, we introduce the background knowledge of the basic AOA and HHO, as well as pinhole imaging opposition-based learning and ensemble/composite mutation strategies. A detailed description of the proposed hybrid EAOAHHO algorithm is provided in Section 3. In Section 4, we conduct a series of simulation experiments on 23 classical benchmark functions and the IEEE CEC2017 test suite to evaluate the performance of EAOAHHO, and then the results obtained are discussed. Based on these results, the proposed algorithm is applied to solve four industrial engineering design problems in Section 5. Finally, the conclusion of this paper and future research direction are shown in Section 6.

2. Background

2.1. Arithmetic Optimization Algorithm (AOA)

Arithmetic optimization algorithm (AOA) is a recently developed meta-heuristic algorithm, which mimics the distribution behavior of four common arithmetic operators in mathematics [38], i.e., Multiplication (M), Division (D), Addition (A), and Subtraction (S). The search phase of AOA is illustrated in Figure 1, which shows how the search process of AOA is arranged and how the search phases are divided.

2.1.1. Initialization Phase

In the optimization process of AOA, a set of candidate solutions ( X ) is randomly generated as shown in the matrix of Equation (1). The best candidate solution in each iteration is considered to be the best solution or near-optimal solution obtained so far.
$$X = \begin{bmatrix} x_{1,1} & \cdots & x_{1,j} & \cdots & x_{1,n-1} & x_{1,n} \\ x_{2,1} & \cdots & x_{2,j} & \cdots & \cdots & x_{2,n} \\ \vdots & & \vdots & & & \vdots \\ x_{N-1,1} & \cdots & x_{N-1,j} & \cdots & \cdots & x_{N-1,n} \\ x_{N,1} & \cdots & x_{N,j} & \cdots & x_{N,n-1} & x_{N,n} \end{bmatrix} \tag{1}$$
Before the algorithm begins working, exploration or exploitation is selected by a factor calculated by the Mathematical Optimizer Acceleration (MOA) function, given in Equation (2):

$$MOA(t) = Min + t \times \left( \frac{Max - Min}{T} \right) \tag{2}$$

where $MOA(t)$ is the function value at the $t$-th iteration, $t$ stands for the current iteration, and $T$ is the maximum number of iterations. $Min$ and $Max$ denote the minimum and maximum values of the accelerated function, respectively.

2.1.2. Exploration Phase

This section describes the exploration phase of AOA. Among the arithmetic operators, mathematical calculations using the Division (D) operator or the Multiplication (M) operator produce highly dispersed values, which makes them suitable for the exploration process. However, unlike the S and A operators, they cannot easily approach the target because of this high dispersion. The AOA exploration operators therefore search the space randomly over many regions and seek a better solution using the two key search techniques, M and D, as shown in Equation (3):
$$X_i(t+1) = \begin{cases} X_{best}(t) \div (MOP + eps) \times ((ub - lb) \times \mu + lb), & r_2 < 0.5 \\ X_{best}(t) \times MOP \times ((ub - lb) \times \mu + lb), & \text{otherwise} \end{cases} \tag{3}$$
where $X_i(t+1)$ denotes the $i$-th solution of the next iteration, $X_{best}(t)$ is the position of the best solution obtained so far, and $ub$ and $lb$ represent the upper and lower bound values, respectively. $eps$ is a small positive number that prevents division by zero. $\mu$ is a control parameter to adjust the search process; in this paper, $\mu$ is set to 0.5.
$$MOP(t) = 1 - \frac{t^{1/\alpha}}{T^{1/\alpha}} \tag{4}$$
where the Math Optimizer Probability ($MOP$) is a coefficient, $MOP(t)$ is its value at the $t$-th iteration, and the sensitive parameter $\alpha$ is set to 5.

2.1.3. Exploitation Phase

The AOA exploitation operators search the space deeply over many regions and seek a better solution using the two key search techniques, A and S, as shown in Equation (5):
$$X_i(t+1) = \begin{cases} X_{best}(t) - MOP \times ((ub - lb) \times \mu + lb), & r_3 < 0.5 \\ X_{best}(t) + MOP \times ((ub - lb) \times \mu + lb), & \text{otherwise} \end{cases} \tag{5}$$
The pseudo-code of the basic AOA is presented in Algorithm 1.
Algorithm 1 Pseudo-code of the basic AOA
1.  Initialize the population size N and the maximum iterations T
2.  Initialize the positions of each search agent X_i (i = 1, 2, ..., N)
3.  While t ≤ T
4.      Check if the position goes beyond the search space boundary and then adjust it
5.      Evaluate the fitness values of all search agents
6.      Set X_best as the position of the current best solution
7.      Calculate the MOA value using Equation (2)
8.      Calculate the MOP value using Equation (4)
9.      For i = 1 to N
10.         If r_1 > MOA then    // Exploration phase
11.             Update the search agent's position using Equation (3)
12.         Else                 // Exploitation phase
13.             Update the search agent's position using Equation (5)
14.         End If
15.     End For
16.     t = t + 1
17. End While
18. Return X_best
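To make the workflow of Algorithm 1 concrete, the following minimal Python sketch implements one AOA iteration from Equations (2)-(5). It is an illustration rather than the authors' code; the Min/Max values of 0.2 and 1.0 follow the original AOA paper, and all other names are ours.

```python
import numpy as np

def aoa_step(X, X_best, t, T, lb, ub, mu=0.5, alpha=5, mn=0.2, mx=1.0, eps=1e-12):
    """One AOA iteration following Equations (2)-(5).

    X: (N, D) population, X_best: (D,) best solution so far,
    mn/mx: Min/Max accelerated-function values (0.2 and 1.0 in the AOA paper).
    """
    moa = mn + t * (mx - mn) / T                       # Equation (2)
    mop = 1.0 - t ** (1 / alpha) / T ** (1 / alpha)    # Equation (4)
    scale = (ub - lb) * mu + lb
    X_new = X.copy()
    for i in range(len(X)):
        if np.random.rand() > moa:                     # exploration, Equation (3)
            if np.random.rand() < 0.5:                 # Division operator
                X_new[i] = X_best / (mop + eps) * scale
            else:                                      # Multiplication operator
                X_new[i] = X_best * mop * scale
        else:                                          # exploitation, Equation (5)
            if np.random.rand() < 0.5:                 # Subtraction operator
                X_new[i] = X_best - mop * scale
            else:                                      # Addition operator
                X_new[i] = X_best + mop * scale
    return np.clip(X_new, lb, ub)                      # boundary handling
```

Note that the original AOA draws the random choices per dimension; the per-solution form above matches the notation of Equations (3) and (5).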

2.2. Harris Hawks Optimization (HHO)

Harris hawks optimization (HHO) is a novel bionic, gradient-free, population-based optimization algorithm proposed by Heidari et al. [27] in 2019. The inspiration for HHO derives from the foraging behavior of the Harris hawk, a famous bird of prey found in the southern half of Arizona, USA. Harris hawks commonly adopt a strategy known as the "surprise pounce" to capture prey. In this strategy, several hawks attack the potential prey (a rabbit, etc.) outside its shelter simultaneously from different directions by means of good teamwork. Depending on the dynamic nature of the environment and the escaping energy of the prey, Harris hawks can exhibit a variety of chasing styles. These collaborative activities help perplex the escaping prey and allow the hawks to chase the detected prey to exhaustion, thus increasing its vulnerability. In this algorithm, the Harris hawks are the candidate solutions and the best solution in each iteration is defined as the intended prey. Figure 2 illustrates the three main search phases of HHO: the exploration phase, the transition from exploration to exploitation, and the exploitation phase, which are briefly described as follows.

2.2.1. Exploration Phase

Considering the habits of Harris hawks, they randomly perch on tall trees or perch according to the locations of other family members to search for the prey. To simulate these two perching mechanisms, the control parameter q is introduced in this phase to choose between them with an equal probability of 0.5. The mathematical model is as follows:
$$X_i(t+1) = \begin{cases} X_r(t) - r_4 \left| X_r(t) - 2 r_5 X_i(t) \right|, & q \geq 0.5 \\ \left( X_{best}(t) - X_m(t) \right) - r_6 \left( lb + r_7 (ub - lb) \right), & q < 0.5 \end{cases} \tag{6}$$
where $X_i(t+1)$ denotes the position vector of the $i$-th hawk in the next iteration $t+1$, $X_i(t)$ denotes the current position of the $i$-th hawk, $X_r(t)$ represents a random hawk selected from the population, $X_{best}$ denotes the position of the prey (the best solution obtained so far), $q, r_4, r_5, r_6, r_7$ are random numbers between 0 and 1, and $ub$ and $lb$ describe the upper and lower boundaries of the search range, respectively. In addition, the average position $X_m(t)$ of all hawks in the current population can be calculated using Equation (7):
$$X_m(t) = \frac{1}{N} \sum_{i=1}^{N} X_i(t) \tag{7}$$
where N refers to the total number of hawks (population size).

2.2.2. Transition from Exploration to Exploitation

At this stage, the algorithm transitions from the global exploration phase to the local exploitation phase according to the escaping energy of the prey, and then carries out different exploitative behaviors. As the intended prey tries to escape from the pursuit of the Harris hawks, its remaining energy decreases significantly, which can be modeled by Equation (8):
$$E = 2 E_0 \left( 1 - \frac{t}{T} \right) \tag{8}$$
where $E$ denotes the escaping energy of the prey, $T$ denotes the maximum number of iterations, and $E_0$ represents the initial energy of the prey, which is a random number ranging from −1 to 1.
The variable $E$ establishes a good basis for the smooth transition from exploration to exploitation. In the case of $|E| \geq 1$, the hawks search the target area as widely as possible to explore the location of the prey, which is the exploration phase. On the other hand, when $|E| < 1$, the hawks attack the prey found in the earlier phase, implying that HHO switches to the exploitation phase.

2.2.3. Exploitation Phase

In the exploitation phase, four possible mechanisms are proposed to simulate the attacking process of Harris hawks. Suppose $r$ is a random number in the range $[0, 1]$, which symbolizes the chance of the prey successfully escaping from the dangerous situation ($r < 0.5$) or not ($r \geq 0.5$) before the attack. Regardless of what the prey does, the hawks will perform a soft or hard besiege to capture it based on the escaping energy $E$: a soft besiege is conducted when $|E| \geq 0.5$; otherwise, a hard besiege occurs.
  • Soft besiege
When r 0.5 and | E | 0.5 , the prey still has enough energy to escape, so Harris hawks softly surround it to make the prey more exhausted before the surprise pounce is implemented. In this situation, the current position of each hawk is updated as follows:
$$X_i(t+1) = \Delta X_i(t) - E \left| J X_{best}(t) - X_i(t) \right| \tag{9}$$
$$\Delta X_i(t) = X_{best}(t) - X_i(t) \tag{10}$$
$$J = 2 (1 - r_8) \tag{11}$$
where Δ X i ( t ) is the difference between the location of the prey and the present position of i -th hawk in iteration t , r 8 denotes a random number between 0 and 1, J denotes the random jump strength of the prey.
  • Hard besiege
If $r \geq 0.5$ and $|E| < 0.5$, the prey has too little energy to escape, so the Harris hawks tightly encircle the intended prey and then attack it. This behavior is modeled as follows:
$$X_i(t+1) = X_{best}(t) - E \left| \Delta X_i(t) \right| \tag{12}$$
  • Soft besiege with progressive rapid dives
When $r < 0.5$ and $|E| \geq 0.5$, the prey still has enough energy to escape successfully, so the hawks construct a soft besiege with progressive rapid dives before the surprise pounce. In this case, each hawk first evaluates its next move $Y$ according to Equation (13) and then dives following a levy flight pattern according to Equation (14):
$$Y = X_{best}(t) - E \left| J X_{best}(t) - X_i(t) \right| \tag{13}$$
$$Z = Y + S \times \mathrm{Levy}(D) \tag{14}$$
where $D$ is the dimension of the problem and $S$ is a random vector of size $1 \times D$, while $\mathrm{Levy}(\cdot)$ indicates the levy flight function, which is expressed as follows:
$$\mathrm{Levy}(x) = 0.01 \times \frac{u \times \sigma}{|v|^{1/\beta}}, \quad \sigma = \left( \frac{\Gamma(1+\beta) \times \sin(\pi \beta / 2)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{(\beta-1)/2}} \right)^{1/\beta} \tag{15}$$
where $u$ and $v$ are random numbers within the interval $[0, 1]$, $\Gamma(\cdot)$ denotes the standard gamma function, and $\beta$ is a constant fixed to 1.5 (a code sketch of this levy step is given at the end of this subsection).
Hence, the final position-updating rule for the soft besiege with progressive rapid dives is given by Equation (16):

$$X_i(t+1) = \begin{cases} Y, & \text{if } F(Y) < F(X_i(t)) \\ Z, & \text{if } F(Z) < F(X_i(t)) \end{cases} \tag{16}$$
  • Hard besiege with progressive rapid dives
In the case of r < 0.5 and | E | < 0.5 , the prey is too exhausted to escape, so the hawks attempt to shrink the distance from their average position to the prey before constructing a hard besiege to attack and kill the prey. The mathematical model of this behavior can be described as follows:
$$Y = X_{best}(t) - E \left| J X_{best}(t) - X_m(t) \right| \tag{17}$$
$$Z = Y + S \times \mathrm{Levy}(D) \tag{18}$$
$$X_i(t+1) = \begin{cases} Y, & \text{if } F(Y) < F(X_i(t)) \\ Z, & \text{if } F(Z) < F(X_i(t)) \end{cases} \tag{19}$$
where X m ( t ) is also calculated using Equation (7). It is worth noting that in each step, only the better position Y or Z will be considered as the next location.
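The levy step of Equation (15) can be sampled as in the short Python sketch below. Following the text, u and v are drawn uniformly from [0, 1], although many public HHO implementations draw them from a normal distribution instead; the function name is ours.

```python
import numpy as np
from math import gamma, sin, pi

def levy_flight(D, beta=1.5):
    """Levy flight step of Equation (15) for a D-dimensional problem."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.rand(D)   # random numbers in [0, 1], as in the text
    v = np.random.rand(D)
    return 0.01 * u * sigma / np.abs(v) ** (1 / beta)
```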
The pseudo-code of the basic HHO is presented in Algorithm 2.
Algorithm 2 Pseudo-code of the basic HHO
1.  Initialize the population size N and the maximum iterations T
2.  Generate the random position of each hawk X_i (i = 1, 2, ..., N)
3.  While t ≤ T
4.      Check if any hawk position goes beyond the search space boundary and then adjust it
5.      Evaluate the fitness values of all hawks
6.      Set X_best as the position of the current best solution
7.      For i = 1 to N
8.          Calculate the prey energy E using Equation (8)
9.          If |E| ≥ 1 then    // Exploration phase
10.             Update the hawk's position using Equation (6)
11.         End If
12.         If |E| < 1 then    // Exploitation phase
13.             If r ≥ 0.5 and |E| ≥ 0.5 then
14.                 Update the hawk's position using Equation (9)
15.             Else if r ≥ 0.5 and |E| < 0.5 then
16.                 Update the hawk's position using Equation (12)
17.             Else if r < 0.5 and |E| ≥ 0.5 then
18.                 Update the hawk's position using Equation (16)
19.             Else if r < 0.5 and |E| < 0.5 then
20.                 Update the hawk's position using Equation (19)
21.             End If
22.         End If
23.     End For
24.     t = t + 1
25. End While
26. Return X_best
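The four besiege mechanisms of the exploitation phase can be dispatched as in the following Python sketch, which assumes the levy_flight helper above; hho_exploit and fitness are illustrative names, not the authors' implementation.

```python
import numpy as np

def hho_exploit(X_i, X_best, X_m, E, r, S, fitness):
    """Select among the four HHO besiege mechanisms (Equations (9)-(19)).

    X_m is the population mean of Equation (7), S a random vector,
    and fitness() the objective function F."""
    J = 2 * (1 - np.random.rand())                    # jump strength, Eq. (11)
    if r >= 0.5 and abs(E) >= 0.5:                    # soft besiege, Eq. (9)
        return (X_best - X_i) - E * np.abs(J * X_best - X_i)
    if r >= 0.5 and abs(E) < 0.5:                     # hard besiege, Eq. (12)
        return X_best - E * np.abs(X_best - X_i)
    if abs(E) >= 0.5:                                 # soft besiege + dives
        Y = X_best - E * np.abs(J * X_best - X_i)     # Equation (13)
    else:                                             # hard besiege + dives
        Y = X_best - E * np.abs(J * X_best - X_m)     # Equation (17)
    Z = Y + S * levy_flight(len(X_i))                 # Equations (14)/(18)
    if fitness(Y) < fitness(X_i):                     # greedy selection,
        return Y                                      # Equations (16)/(19)
    if fitness(Z) < fitness(X_i):
        return Z
    return X_i
```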

2.3. Pinhole Imaging Opposition-Based Learning

Pinhole imaging opposition-based learning (PIOBL), which integrates conventional opposition-based learning (OBL) [41] with the pinhole imaging principle, is a powerful technique in machine intelligence and has been successfully used to improve various basic meta-heuristic algorithms [42,43,44]. The main idea of PIOBL is to simultaneously evaluate the fitness of the current position and of its opposite position, and then retain the fitter one for the subsequent iterative calculation. The optimization procedure usually starts from a stochastic initial position. If this initial position is near the global optimum, the algorithm converges quickly; conversely, if the initial position is far from the optimum or in the opposite direction, convergence may take a very long time or even stagnate. Therefore, PIOBL can effectively increase the search efficiency and the probability of finding a better candidate position. The mathematical definition of PIOBL is expressed as follows.
Pinhole imaging is a common optical phenomenon: when light from a source passes through a small hole in a plate, an inverted image is formed on the receiving screen on the other side. As illustrated in Figure 3, the cardinal point $O$ denotes the midpoint of the search range $[lb, ub]$ and the $y$-axis is considered the pinhole plate. A light source $p$ with height $h$ is placed at the point $X_i$ ($X_i$ is the current location of the $i$-th search agent in the population). Through pinhole imaging, the corresponding image $p^*$ with height $h^*$ is obtained, and its projection on the coordinate axis is $\tilde{X}_i$ ($\tilde{X}_i$ is the opposite position of $X_i$). The geometrical relationship in this figure is formulated as follows:
$$\frac{(lb + ub)/2 - X_i}{\tilde{X}_i - (lb + ub)/2} = \frac{h}{h^*} \tag{20}$$
Let the distance coefficient $k = h / h^*$; the opposite position $\tilde{X}_i$ based on the theory of pinhole imaging is obtained by rearranging Equation (20):
$$\tilde{X}_i = \frac{lb + ub}{2} + \frac{lb + ub}{2k} - \frac{X_i}{k} \tag{21}$$
Interestingly, if $k = 1$, Equation (21) simplifies to the standard formula of the OBL strategy:
$$\tilde{X}_i = (lb + ub) - X_i \tag{22}$$
Thus, OBL is just a special case of PIOBL. Compared with the former, the coefficient $k$ of PIOBL can be adjusted to attain dynamic opposite positions and a broader search range, thereby further enhancing the capability of the algorithm to avoid local optima.
Generally, most optimization problems are multi-dimensional, so Equation (21) can also be extended into $D$-dimensional space as follows:
$$\tilde{X}_{i,j} = \frac{lb_j + ub_j}{2} + \frac{lb_j + ub_j}{2k} - \frac{X_{i,j}}{k}, \quad j = 1, 2, \ldots, D \tag{23}$$
where X i , j and X i , j ˜ are the   j -dimensional components of X i and X i ˜ , respectively, l b j and u b j are the lower and upper boundaries in the j -th dimension.
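A minimal Python sketch of the PIOBL step follows, applying Equation (23) dimension-wise and keeping whichever of the original and opposite positions is fitter; the function names and the choice to clip opposite positions back into the bounds are our assumptions.

```python
import numpy as np

def piobl_opposite(X, lb, ub, k):
    """Pinhole-imaging opposite positions of Equation (23).

    X: (N, D) population; lb, ub: per-dimension bound arrays; k: distance
    coefficient (k = 1 reduces this to classical OBL, Equation (22))."""
    return (lb + ub) / 2 + (lb + ub) / (2 * k) - X / k

def piobl_update(X, fit, lb, ub, k, objective):
    """Keep the fitter of each agent and its opposite position (minimization)."""
    X_opp = np.clip(piobl_opposite(X, lb, ub, k), lb, ub)
    fit_opp = np.apply_along_axis(objective, 1, X_opp)
    better = fit_opp < fit
    X[better], fit[better] = X_opp[better], fit_opp[better]
    return X, fit
```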

2.4. Ensemble/Composite Mutation Strategy (CMS)

The ensemble/composite mutation is a novel search strategy used to facilitate the exploration and exploitation capability of an algorithm and provide additional population diversity during the optimization [45], which was developed by Zhang et al. [46] based on the rand local mutation scheme of composite DE [47]. In this strategy, three different mutation operators: DE/rand/1/bin, DE/rand/2/bin, and DE/current-to-rand/2/bin, are executed in parallel to generate candidate positions for each of the search agents. The mathematical formula of these operators is shown as follows:
$$V_i^1 = \begin{cases} X_{R1} + F_1 \times (X_{R2} - X_{R3}), & r_9 < C_1 \\ X_i, & r_9 \geq C_1 \end{cases} \tag{24}$$
$$V_i^2 = \begin{cases} X_{R4} + F_2 \times (X_{R5} - X_{R6}) + F_2 \times (X_{R7} - X_{R8}), & r_{10} < C_2 \\ X_i, & r_{10} \geq C_2 \end{cases} \tag{25}$$
$$V_i^3 = \begin{cases} X_i + F_3 \times (X_{R9} - X_i) + F_3 \times (X_{R10} - X_{R11}), & r_{11} < C_3 \\ X_i, & r_{11} \geq C_3 \end{cases} \tag{26}$$
where V i 1 , V i 2 and V i 3 are the new mutant positions of the i -th search agent in the population, R 1 ~ R 11 are different integer indices in the interval [ 1 ,   N ] , r 9 ~ r 11 are random numbers between 0 and 1. F 1 , F 2 and F 3 denote the scale factors, which are equal to 1.0, 0.8, and 1.0, respectively. Moreover, C 1 , C 2 and C 3 express the crossover rates, which are set as 0.1, 0.2, and 0.9, respectively [46].
After the mutant candidate positions $V_i^1$, $V_i^2$, and $V_i^3$ have been generated, the best of them, i.e., the one with the lowest fitness, denoted $V_i$, is selected to update the original position of the $i$-th search agent using Equation (27):
$$X_i \leftarrow \begin{cases} V_i, & \text{if } F(V_i) < F(X_i) \\ X_i, & \text{otherwise} \end{cases} \tag{27}$$
where $F(\cdot)$ is the objective function for a given problem. If the fitness value of $V_i$ is better than that of $X_i$, then $X_i$ is replaced by $V_i$; otherwise it remains unchanged.
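The CMS step of Equations (24)-(27) can be sketched in Python as follows; the scale factors and crossover rates are those given above, while the function name and index sampling are our own, and the population must contain at least 12 agents so that the 11 indices are distinct.

```python
import numpy as np

def cms(X, i, objective, F=(1.0, 0.8, 1.0), C=(0.1, 0.2, 0.9)):
    """Composite mutation of Equations (24)-(27) for the i-th agent.

    X: (N, D) population with N >= 12; objective: fitness function F(.)."""
    N, _ = X.shape
    idx = np.random.choice([j for j in range(N) if j != i], 11, replace=False)
    R = X[idx]                                   # X_R1 ... X_R11
    trials = [X[i].copy(), X[i].copy(), X[i].copy()]
    if np.random.rand() < C[0]:                  # DE/rand/1/bin, Eq. (24)
        trials[0] = R[0] + F[0] * (R[1] - R[2])
    if np.random.rand() < C[1]:                  # DE/rand/2/bin, Eq. (25)
        trials[1] = R[3] + F[1] * (R[4] - R[5]) + F[1] * (R[6] - R[7])
    if np.random.rand() < C[2]:                  # DE/current-to-rand, Eq. (26)
        trials[2] = X[i] + F[2] * (R[8] - X[i]) + F[2] * (R[9] - R[10])
    V = min(trials, key=objective)               # best mutant V_i
    return V if objective(V) < objective(X[i]) else X[i]   # Equation (27)
```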

3. The Proposed EAOAHHO Algorithm

3.1. The Detailed Design of EAOAHHO

In this section, we give a detailed introduction to the process of EAOAHHO. The proposed algorithm is based on an ensemble of AOA and HHO and introduces the PIOBL strategy and CMS to enhance performance. During each iteration, we first calculate $MOA$ and $MOP$ using Equations (2) and (4) of AOA. The PIOBL strategy is then performed to generate opposite positions and retain the fitter individuals. Next, according to the value of $rand$, either the AOA or the HHO update rule is selected to update $X_i$. Finally, we generate the mutant position $V_i$ with the CMS strategy. When the current number of iterations reaches the maximum number of iterations, the global optimal solution is output. The flow chart and pseudo-code of EAOAHHO are shown in Figure 4 and Algorithm 3, respectively.
Algorithm 3 Pseudo-code of the proposed EAOAHHO
1.  Initialize the population size N and the maximum iterations T
2.  Initialize the positions of each search agent X_i (i = 1, 2, ..., N)
3.  While t ≤ T
4.      Check if the position goes beyond the search space boundary and then adjust it
5.      Evaluate the fitness values of all search agents
6.      Set X_best as the position of the current best solution
7.      Calculate the MOA value using Equation (2)
8.      Calculate the MOP value using Equation (4)
9.      Generate the opposite positions of all search agents using Equation (21) and save the ones with better fitness    // PIOBL
10.     For i = 1 to N
11.         If rand < 0.5 then    // AOA branch (rand is a random number between 0 and 1)
12.             If r_1 > MOA then
13.                 Update the position using Equation (3)
14.             Else
15.                 Update the position using Equation (5)
16.             End If
17.         Else                  // HHO branch
18.             Calculate the prey energy E using Equation (8)
19.             If |E| ≥ 1 then
20.                 Update the position using Equation (6)
21.             End If
22.             If |E| < 1 then
23.                 If r ≥ 0.5 and |E| ≥ 0.5 then
24.                     Update the position using Equation (9)
25.                 Else if r ≥ 0.5 and |E| < 0.5 then
26.                     Update the position using Equation (12)
27.                 Else if r < 0.5 and |E| ≥ 0.5 then
28.                     Update the position using Equation (16)
29.                 Else if r < 0.5 and |E| < 0.5 then
30.                     Update the position using Equation (19)
31.                 End If
32.             End If
33.             Generate the new mutant positions V_i^1, V_i^2, V_i^3 using Equations (24)-(26)    // CMS
34.             Set V_i as the best trial vector with the lowest fitness among V_i^1, V_i^2, V_i^3
35.             If F(V_i) < F(X_i) then
36.                 X_i = V_i
37.             End If
38.         End If
39.     End For
40.     t = t + 1
41. End While
42. Return X_best
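Algorithm 3 can be condensed into the following Python sketch, which assumes the piobl_update, hho_exploit, levy_flight, and cms helpers sketched in Section 2 are in scope. The PIOBL coefficient k is illustrative, since its value is not fixed in this description, and only the q ≥ 0.5 branch of Equation (6) is shown for brevity.

```python
import numpy as np

def eaoahho(objective, lb, ub, D, N=30, T=500, k=1000.0):
    """Condensed sketch of Algorithm 3 (minimization)."""
    X = lb + np.random.rand(N, D) * (ub - lb)
    for t in range(1, T + 1):
        X = np.clip(X, lb, ub)                              # boundary check
        fit = np.apply_along_axis(objective, 1, X)
        X_best = X[np.argmin(fit)].copy()
        moa = 0.2 + t * (1.0 - 0.2) / T                     # Equation (2)
        mop = 1.0 - t ** 0.2 / T ** 0.2                     # Equation (4), alpha = 5
        X, fit = piobl_update(X, fit, lb, ub, k, objective) # PIOBL
        scale = (ub - lb) * 0.5 + lb                        # mu = 0.5
        for i in range(N):
            if np.random.rand() < 0.5:                      # AOA branch
                if np.random.rand() > moa:                  # Equation (3)
                    X[i] = (X_best / (mop + 1e-12) if np.random.rand() < 0.5
                            else X_best * mop) * scale
                else:                                       # Equation (5)
                    sign = -1.0 if np.random.rand() < 0.5 else 1.0
                    X[i] = X_best + sign * mop * scale
            else:                                           # HHO branch
                E = 2 * np.random.uniform(-1, 1) * (1 - t / T)   # Equation (8)
                if abs(E) >= 1:                             # exploration, Eq. (6)
                    Xr = X[np.random.randint(N)]
                    X[i] = Xr - np.random.rand() * np.abs(
                        Xr - 2 * np.random.rand() * X[i])
                else:                                       # exploitation
                    X[i] = hho_exploit(X[i], X_best, X.mean(axis=0), E,
                                       np.random.rand(), np.random.rand(D),
                                       objective)
                X[i] = cms(X, i, objective)                 # CMS, Eqs. (24)-(27)
    X = np.clip(X, lb, ub)
    fit = np.apply_along_axis(objective, 1, X)
    return X[np.argmin(fit)], fit.min()
```

As in Algorithm 3, the CMS step is applied only within the HHO branch of each iteration.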

3.2. Computational Complexity Analysis

Computational complexity is a vital metric to measure the time consumption of an algorithm. The computational complexity of the proposed EAOAHHO is mainly associated with three components: initialization, fitness evaluation, and position updating. In the initialization phase, the positions of all search agents are generated randomly in the search domain, which requires $O(N)$, where $N$ is the population size. Then, in the iteration procedure, the algorithm evaluates the fitness value of each individual and updates the population positions sequentially, so the computational complexity is $O(T \times N + T \times N \times D)$, where $T$ denotes the maximum number of iterations and $D$ denotes the dimension of the specific problem. In addition, PIOBL and CMS are introduced in the proposed algorithm to further update the position vector of each individual; the computational complexity of the two strategies is $O(T \times N \times D)$ and $O(3 \times T \times N \times D)$, respectively. Thus, the total computational complexity of EAOAHHO is $O(N \times (1 + T + 5TD))$. As per the literature [27,38], the computational complexity of both AOA and HHO is $O(N \times (1 + T + TD))$, and the complexity of all the other algorithms used in this paper (AO [28], WOA [31], MFO [48], TSA [49], and ChOA [50]) is also $O(N \times (1 + T + TD))$. Compared with the basic algorithms, the computational complexity of EAOAHHO increases to some extent. However, this extra time cost significantly improves the search performance of the algorithm and yields better solutions, which is acceptable in light of the NFL theorem.

4. Experimental Results and Discussions

In this section, we verify the performance of the proposed EAOAHHO on 23 classical benchmark functions and 29 IEEE CEC2017 benchmark functions. These two sets of functions verify the effectiveness of the proposed algorithm in handling simple and complex numerical problems, respectively. To verify the superiority of the proposed algorithm, we compare EAOAHHO with the original AOA [38], HHO [27], and several other advanced meta-heuristics, namely the Aquila optimizer (AO) [28], whale optimization algorithm (WOA) [31], moth-flame optimization algorithm (MFO) [48], salp swarm algorithm (SSA) [25], tunicate swarm algorithm (TSA) [49], and chimp optimization algorithm (ChOA) [50]. For all experiments, the maximum number of iterations was set to 500, the population size was 30, and each algorithm was run 30 times independently. The parameter settings for each algorithm are shown in Table 1. Furthermore, we chose the average fitness (Avg) and standard deviation (Std) as the evaluation metrics for algorithm performance. All experiments were implemented in MATLAB R2017a (version 9.2.0) on Microsoft Windows 10, and the hardware platform was configured with an Intel® Core™ i5-10300H CPU @ 2.50 GHz and 16 GB RAM.

4.1. Experiment 1: Classical Benchmark Functions

In this subsection, a total of 23 classical benchmark functions selected from the literature [27] are utilized to evaluate the performance of EAOAHHO. The 23 benchmark functions are classified into three categories on the basis of their features: unimodal, multimodal, and fix-dimension multimodal. The unimodal benchmark functions (F1~F7) have only one global optimum and are usually applied to check the exploitation competence and convergence rate of an algorithm. On the other hand, the multimodal benchmark functions (F8~F13) are characterized by multiple local minima; these functions are designed to examine the exploration capability and local optima avoidance of an algorithm. The fix-dimension multimodal benchmark functions (F14~F23) can be regarded as a combination of the first two categories but with a lower dimension, and they are used to study the stability of an algorithm between exploration and exploitation. The expression, spatial dimension (Dim), search range, and theoretical minimum of each function are outlined in Table 2, Table 3 and Table 4. Figure 5 visualizes the search space of some representative benchmark functions.

4.1.1. Impacts of Components

In the previous chapters, we introduced the basic flow of the proposed EAOAHHO and the two strategies used to modify the original algorithm, namely PIOBL and CMS. In order to verify the effectiveness of the introduced strategies, we designed three additional algorithm variants in this subsection: EAOAHHO-1 is the plain hybrid of the basic AOA and HHO, EAOAHHO-2 denotes EAOAHHO-1 with only PIOBL, and EAOAHHO-3 denotes EAOAHHO-1 with only CMS. To verify the impact of each component, we tested the performance of EAOAHHO-1, EAOAHHO-2, EAOAHHO-3, and EAOAHHO on the 23 classical benchmark functions with the same experimental parameters. The results of the experiment are shown in Table 5. In this table, we can see that EAOAHHO achieves optimal values on all 23 benchmark functions and is much better than the remaining three variants. In addition, EAOAHHO-2 and EAOAHHO-3 achieve optimal values, or solutions converging to the optimal values of EAOAHHO, on most of the benchmark functions. Moreover, both EAOAHHO-2 and EAOAHHO-3 achieve optimal solutions more often than the unimproved EAOAHHO-1, which indicates that the two introduced strategies can improve the performance of the original algorithm to some extent.

4.1.2. Comparison of EAOAHHO with Other Meta-Heuristic Algorithms

In this subsection, the basic AOA, HHO, and six well-known meta-heuristic algorithms, namely AO, WOA, MFO, SSA, TSA, and ChOA, are compared with EAOAHHO on the 23 benchmark functions based on numerical analysis, boxplots, convergence curves, and the Wilcoxon test. The parameter settings for each algorithm were shown in Table 1 above. After 30 independent runs on each benchmark function, the comparison results attained by EAOAHHO and the other algorithms are recorded in Table 6. According to the data in the table, EAOAHHO achieves the best Avg and Std on all 23 benchmark functions and even reaches the theoretical optimum on some functions, such as F1~F4, F9, and F11. A unimodal benchmark function has only one global optimum, so functions F1~F7 can be used to measure the exploitation capability of an algorithm. The proposed EAOAHHO has superiority over the other eight meta-heuristic algorithms in solving such problems; only on function F2 does AOA obtain the same optimal value as EAOAHHO. These results demonstrate that the proposed algorithm has great exploitation capability. The multimodal benchmark functions contain a large number of local optimal solutions, so functions F8~F13 can be used to analyze the capability of an algorithm to escape local optima. In most cases, EAOAHHO obtained optimal solutions that outperformed the other algorithms; on functions such as F9 and F10, EAOAHHO achieved the same optimal value as some of the other algorithms, and in the remaining cases its optimal solutions are better than those of the other eight meta-heuristic algorithms. The fix-dimension multimodal benchmark functions can be utilized to examine the switching capability of an algorithm between exploration and exploitation. According to the experimental results, EAOAHHO achieves the optimal Avg and Std on functions F14~F23; for functions F16~F19, other advanced meta-heuristic algorithms also achieve the same optimal values as EAOAHHO. Combining the above results, the algorithm proposed in this paper is superior to the other eight meta-heuristics with regard to exploitation capability, avoidance of local extremes, and the ability to switch between exploration and exploitation.
Figure 6 shows the boxplots of EAOAHHO and the other algorithms on some benchmark functions. It can be noticed that EAOAHHO has a relatively narrower boxplot than the other algorithms in most cases, which indicates great consistency in terms of median, maximum, and minimum values. In addition, Figure 7 illustrates the convergence curves of EAOAHHO and the other algorithms on some benchmark functions. We can observe that the convergence accuracy and convergence speed of EAOAHHO outperform those of the other algorithms in most cases, especially for functions F1~F4 and F11. For functions F7 and F13, EAOAHHO demonstrates a great ability to escape from local optima, owing to the introduction of PIOBL.

4.1.3. Scalability Test

Scalability is an essential metric for evaluating a newly proposed algorithm. By adjusting the dimension of the benchmark functions, we can effectively judge the impact of dimension expansion on the execution efficiency of the algorithm. In this section, we extend the dimensions of functions F1~F13 to 100, 300, and 500, and perform scalability tests on EAOAHHO and the eight advanced algorithms used in the previous section. The results for 100, 300, and 500 dimensions are shown in Table 7, Table 8 and Table 9, respectively. According to the data comparison in these tables, as the function dimension increases, the performance of EAOAHHO and the other eight meta-heuristic algorithms gradually decreases. The main reason is that increasing the dimensionality of the function leads to a more complex search space and an increase in the number of factors that need to be optimized. In addition, the optimization results of EAOAHHO are better than those of the other eight algorithms in most cases, and the gap becomes more obvious as the dimension increases. Therefore, EAOAHHO has more favorable scalability compared with the original AOA, HHO, and the other six meta-heuristic algorithms.

4.1.4. Computational Time Analysis

In this section, we focus on the computation time of EAOAHHO and the other meta-heuristic algorithms; the statistical results are listed in Table 10. We can observe that WOA outperforms the other eight algorithms in computation time in most cases. In addition, MFO, TSA, and AOA also perform relatively well. EAOAHHO takes the longest time to solve these benchmark functions. One main reason for this phenomenon is that HHO itself takes longer to compute than WOA, MFO, and TSA. Secondly, we introduced the PIOBL and CMS strategies into EAOAHHO, which enhance the performance of the algorithm but increase the computational cost.

4.1.5. Statistical Test

In the previous subsections, we evaluated the algorithms using the Std and Avg criteria. However, meta-heuristic algorithms obtain results stochastically, so we introduce the mean absolute error (MAE) and the Wilcoxon rank-sum test to further assess the performance of the different algorithms.
MAE is a metric that indicates the distance between the estimated value and the true value. It is calculated by Equation (28):

$$MAE = \frac{1}{N_F} \sum_{i=1}^{N_F} \left| f_i - f^* \right| \tag{28}$$

where $N_F$ represents the number of test functions, $f_i$ is the optimization result obtained on the $i$-th function, and $f^*$ is its global optimum.
Table 11 shows the MAE values of EAOAHHO and the other algorithms on the 23 benchmark functions. The results show that EAOAHHO ranks first, with a much smaller MAE value than the other algorithms.
In addition, we introduce the Wilcoxon rank-sum test in this subsection to calculate and count the differences between the different algorithms. In this study, we set the significance level at 0.05. The obtained p-values and statistical results of EAOAHHO under the Wilcoxon rank-sum test are listed in Table 12. In this table, the "+" sign denotes that EAOAHHO performs better than the comparison algorithm, the "−" sign denotes that EAOAHHO performs worse, and "=" indicates that EAOAHHO is similar to the comparison algorithm. The last three rows of this table indicate the number of times EAOAHHO obtained "+", "=", and "−" against each algorithm. Among the 23 benchmark functions, EAOAHHO outperformed SSA, TSA, and ChOA 23 times; outperformed WOA 21 times; and outperformed AO, MFO, HHO, and AOA 20 times. Given the above statistics, it is evident that the EAOAHHO proposed in this paper is significantly enhanced compared with the original AOA and HHO and is the best optimizer among all the compared algorithms.
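As a concrete illustration of the two criteria, the short Python sketch below computes the MAE of Equation (28) and a Wilcoxon rank-sum test with SciPy; all numeric values are placeholders, not results from this paper.

```python
import numpy as np
from scipy.stats import ranksums

# MAE over a set of test functions (Equation (28)); placeholder values.
f_i = np.array([1.2e-30, 3.4e-15, 2.1e-2])      # best results per function
f_star = np.array([0.0, 0.0, 0.0])              # known global optima
mae = np.mean(np.abs(f_i - f_star))

# Wilcoxon rank-sum test over 30 independent runs of two algorithms;
# p < 0.05 marks a statistically significant difference.
runs_a = np.random.rand(30) * 1e-8              # stand-in for EAOAHHO
runs_b = np.random.rand(30) * 1e-3              # stand-in for a competitor
stat, p = ranksums(runs_a, runs_b)
print(f"MAE = {mae:.3e}, Wilcoxon p-value = {p:.3e}")
```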

4.2. Experiment 2: IEEE CEC2017 Test Suite

In Experiment 1, we verified the excellent performance of the proposed EAOAHHO in solving simple problems using 23 classical benchmark functions. To further verify whether EAOAHHO remains superior to the original AOA, HHO, and other advanced meta-heuristics when dealing with complex problems, we choose the IEEE CEC2017 test suite [51], a set of more challenging test functions listed in Table 13, for Experiment 2. The experimental method is the same as in the previous section. The comparison results of EAOAHHO and the other algorithms on the IEEE CEC2017 test suite are shown in Table 14. In addition, we counted the experimental results and plotted the ranking diagram of the nine algorithms shown in Figure 8. Combining the results in Table 14 and Figure 8, we find that EAOAHHO ranks first in the vast majority of cases. For functions CEC-5, CEC-7, CEC-8, CEC-9, CEC-16, CEC-20, and CEC-23, the proposed algorithm ranks fourth; it ranks third for functions CEC-10 and CEC-26, and second for functions CEC-17 and CEC-24.

5. EAOAHHO for Solving Industrial Engineering Design Problems

Industrial engineering design problems are a class of nonlinear optimization problems accompanied by complex geometry and many constraints [52]. To further highlight the performance of the proposed EAOAHHO in real constrained optimization, four common industrial engineering design problems from the structural field are solved in this section, namely the three-bar truss design problem, the tension/compression spring design problem, the welded beam design problem, and the rolling element bearing design problem. For the sake of convenience, the death penalty function [53] is introduced here to handle infeasible candidate solutions that violate the different equality and inequality constraints. The proposed EAOAHHO runs independently 30 times on each problem, with the maximum iterations and population size set to 500 and 30, respectively. The results obtained are compared with various state-of-the-art optimizers reported in previous studies.
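A minimal sketch of the death-penalty scheme used here, assuming each constraint is expressed in the g(z) ≤ 0 form; the wrapper name and the penalty magnitude are illustrative choices, not taken from the paper.

```python
def death_penalty(objective, constraints, penalty=1e20):
    """Wrap an objective so that any solution violating a constraint
    g(z) <= 0 receives a huge fitness and is effectively discarded."""
    def penalized(z):
        if any(g(z) > 0 for g in constraints):
            return penalty
        return objective(z)
    return penalized
```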

5.1. Three-Bar Truss Design Problem

As its name suggests, the main objective of this optimization problem is to minimize the total weight of a three-bar truss. From the shape of the truss and its associated forces in Figure 9, two structural parameters need to be considered in this design: the cross-sectional area of component 1 ($A_1$) and the cross-sectional area of component 2 ($A_2$). Stress, deflection, and buckling are the three dominant constraints. The mathematical formulation of this problem is described as follows:
Consider

$$z = [z_1, z_2] = [A_1, A_2]$$

Minimize

$$f(z) = \left( 2\sqrt{2} z_1 + z_2 \right) \times l$$

Subject to

$$g_1(z) = \frac{\sqrt{2} z_1 + z_2}{\sqrt{2} z_1^2 + 2 z_1 z_2} P - \sigma \leq 0$$
$$g_2(z) = \frac{z_2}{\sqrt{2} z_1^2 + 2 z_1 z_2} P - \sigma \leq 0$$
$$g_3(z) = \frac{1}{\sqrt{2} z_2 + z_1} P - \sigma \leq 0$$

Variable range

$$0 \leq z_1, z_2 \leq 1$$

where

$$l = 100\ \text{cm}, \quad P = 2\ \text{kN/cm}^2, \quad \sigma = 2\ \text{kN/cm}^2$$
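For illustration, the objective and constraints above translate directly into Python and can be wrapped with the death_penalty helper sketched at the start of this section; this is a sketch of the problem definition, not the authors' implementation.

```python
import numpy as np

l, P, sigma = 100.0, 2.0, 2.0                     # constants from the text

def truss_weight(z):
    """Three-bar truss weight f(z)."""
    return (2 * np.sqrt(2) * z[0] + z[1]) * l

truss_constraints = [                              # g_1 ... g_3, each <= 0
    lambda z: (np.sqrt(2) * z[0] + z[1])
              / (np.sqrt(2) * z[0] ** 2 + 2 * z[0] * z[1]) * P - sigma,
    lambda z: z[1] / (np.sqrt(2) * z[0] ** 2 + 2 * z[0] * z[1]) * P - sigma,
    lambda z: 1.0 / (np.sqrt(2) * z[1] + z[0]) * P - sigma,
]

objective = death_penalty(truss_weight, truss_constraints)
z_star = np.array([0.78859304, 0.40825052])       # solution reported in Table 15
print(truss_weight(z_star))                       # ~263.87
```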
Table 15 records the detailed results obtained by the proposed EAOAHHO and other meta-heuristics such as MFO, SSA, HHO, AOA, MVO, GEO, GOA, and AHA on this application. Based on the data in the table, it can be noticed that EAOAHHO outperforms all comparison algorithms and attains the minimum weight $f_{min}(z) = 263.87285$ at the optimal solution $z = [0.78859304, 0.40825052]$. Therefore, it is reasonable to believe that EAOAHHO has promising potential to solve such problems with a very confined search space.

5.2. Tension/Compression Spring Design Problem

The final purpose of this engineering case is to decrease the weight of a tension/compression spring as much as possible. As illustrated in Figure 10, this problem has three decision variables that need to be optimized, namely the wire diameter ( d ), the mean coil diameter ( D ), and the number of active coils ( N ). Furthermore, constraints on shear stress, floating frequency, and limited floating deflection should not be violated during the minimization process. The problem is expressed mathematically as follows:
Consider

$$z = [z_1, z_2, z_3] = [d, D, N]$$

Minimize

$$f(z) = (z_3 + 2) z_2 z_1^2$$

Subject to

$$g_1(z) = 1 - \frac{z_2^3 z_3}{71785 z_1^4} \leq 0$$
$$g_2(z) = \frac{4 z_2^2 - z_1 z_2}{12566 \left( z_2 z_1^3 - z_1^4 \right)} + \frac{1}{5108 z_1^2} - 1 \leq 0$$
$$g_3(z) = 1 - \frac{140.45 z_1}{z_2^2 z_3} \leq 0$$
$$g_4(z) = \frac{z_1 + z_2}{1.5} - 1 \leq 0$$

Variable range

$$0.05 \leq z_1 \leq 2, \quad 0.25 \leq z_2 \leq 1.30, \quad 2.00 \leq z_3 \leq 15.00$$
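As with the truss, the spring formulation above can be coded directly and combined with the same death_penalty wrapper; again, a sketch under our naming, not the authors' code.

```python
import numpy as np

def spring_weight(z):
    """Tension/compression spring weight f(z), z = [d, D, N]."""
    return (z[2] + 2) * z[1] * z[0] ** 2

spring_constraints = [                             # g_1 ... g_4, each <= 0
    lambda z: 1 - z[1] ** 3 * z[2] / (71785 * z[0] ** 4),
    lambda z: (4 * z[1] ** 2 - z[0] * z[1])
              / (12566 * (z[1] * z[0] ** 3 - z[0] ** 4))
              + 1 / (5108 * z[0] ** 2) - 1,
    lambda z: 1 - 140.45 * z[0] / (z[1] ** 2 * z[2]),
    lambda z: (z[0] + z[1]) / 1.5 - 1,
]

print(spring_weight(np.array([0.052291, 0.360263, 10.179344])))  # ~0.0120
```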
The experimental results of EAOAHHO for this problem are compared with those of WOA, MFO, SSA, HHO, AOA, AHA, GWO, and INFO, as presented in Table 16. It can be observed from this table that the proposed EAOAHHO effectively achieves the optimal solution at $z = [0.052291, 0.360263, 10.179344]$ with the minimum weight $f_{min}(z) = 0.01199749$, which proves the merits of EAOAHHO in addressing the tension/compression spring design problem.

5.3. Welded Beam Design Problem

The welded beam design problem is one of the most well-known case studies used to evaluate the performance of algorithms. It was first proposed by Coello [53] and aims to minimize the overall manufacturing cost of a welded beam with four decision variables, i.e., the weld thickness ( h ), the length of the joint beam ( l ), the height of the beam ( t ), and the thickness of the beam ( b ), as illustrated in Figure 11. The mathematical formulation and seven constraint functions of this problem are given as follows:
Consider

$$z = [z_1, z_2, z_3, z_4] = [h, l, t, b]$$

Minimize

$$f(z) = 1.10471 z_1^2 z_2 + 0.04811 z_3 z_4 (14 + z_2)$$

Subject to

$$g_1(z) = \tau(z) - \tau_{max} \leq 0$$
$$g_2(z) = \sigma(z) - \sigma_{max} \leq 0$$
$$g_3(z) = \delta(z) - \delta_{max} \leq 0$$
$$g_4(z) = z_1 - z_4 \leq 0$$
$$g_5(z) = P - P_C(z) \leq 0$$
$$g_6(z) = 0.125 - z_1 \leq 0$$
$$g_7(z) = 1.10471 z_1^2 + 0.04811 z_3 z_4 (14 + z_2) - 5 \leq 0$$

Variable range

$$0.1 \leq z_1, z_4 \leq 2, \quad 0.1 \leq z_2, z_3 \leq 10$$

where

$$\tau(z) = \sqrt{(\tau')^2 + 2 \tau' \tau'' \frac{z_2}{2R} + (\tau'')^2}, \quad \tau' = \frac{P}{\sqrt{2} z_1 z_2}, \quad \tau'' = \frac{M R}{J}, \quad M = P \left( L + \frac{z_2}{2} \right)$$
$$R = \sqrt{\frac{z_2^2}{4} + \left( \frac{z_1 + z_3}{2} \right)^2}, \quad J = 2 \left\{ \sqrt{2} z_1 z_2 \left[ \frac{z_2^2}{4} + \left( \frac{z_1 + z_3}{2} \right)^2 \right] \right\}, \quad \sigma(z) = \frac{6 P L}{z_4 z_3^2}$$
$$\delta(z) = \frac{6 P L^3}{E z_3^2 z_4}, \quad P_C(z) = \frac{4.013 E \sqrt{z_3^2 z_4^6 / 36}}{L^2} \left( 1 - \frac{z_3}{2L} \sqrt{\frac{E}{4G}} \right)$$
$$P = 6000\ \text{lb}, \quad L = 14\ \text{in}, \quad E = 30 \times 10^6\ \text{psi}, \quad G = 12 \times 10^6\ \text{psi}$$
$$\delta_{max} = 0.25\ \text{in}, \quad \tau_{max} = 13600\ \text{psi}, \quad \sigma_{max} = 30000\ \text{psi}$$
This problem has been solved by the proposed EAOAHHO and eight other methods, namely WOA, AOA, MVO, GWO, ROA, HGS, AVOA, and IMFO. The optimal solutions are summarized in Table 17. It can be seen that when the four variables $h$, $l$, $t$, and $b$ are set to 0.195539, 3.354588, 9.036630, and 0.205729, respectively, the minimum manufacturing cost obtained by EAOAHHO is 1.693914. In this comparison, the results of EAOAHHO are evidently better than those of all the other methods, which shows that EAOAHHO is highly competitive in dealing with the welded beam design problem.

5.4. Rolling Element Bearing Design Problem

Unlike the three test cases mentioned above, the primary goal of this problem is to maximize the dynamic load-carrying capacity of a rolling element bearing. In this optimum design, a total of ten geometric variables need to be taken into account: the pitch diameter ($D_m$), the ball diameter ($D_b$), the number of balls ($Z$), the inner and outer raceway curvature radius coefficients ($f_i$ and $f_o$), $K_{dmin}$, $K_{dmax}$, $\delta$, $e$, and $\zeta$ (see Figure 12). The problem has nine constraints and its mathematical model is described as follows.
Maximize

$$C_d = \begin{cases} f_c Z^{2/3} D_b^{1.8}, & \text{if } D_b \leq 25.4\ \text{mm} \\ 3.647 f_c Z^{2/3} D_b^{1.4}, & \text{otherwise} \end{cases}$$

Subject to

$$g_1(z) = \frac{\phi_0}{2 \sin^{-1}(D_b / D_m)} - Z + 1 \geq 0$$
$$g_2(z) = 2 D_b - K_{dmin} (D - d) > 0$$
$$g_3(z) = K_{dmax} (D - d) - 2 D_b \geq 0$$
$$g_4(z) = \zeta B_w - D_b \leq 0$$
$$g_5(z) = D_m - 0.5 (D + d) \geq 0$$
$$g_6(z) = (0.5 + e)(D + d) - D_m \geq 0$$
$$g_7(z) = 0.5 (D - D_m - D_b) - \delta D_b \geq 0$$
$$g_8(z) = f_i \geq 0.515$$
$$g_9(z) = f_o \geq 0.515$$

where

$$f_c = 37.91 \left[ 1 + \left\{ 1.04 \left( \frac{1 - \gamma}{1 + \gamma} \right)^{1.72} \left( \frac{f_i (2 f_o - 1)}{f_o (2 f_i - 1)} \right)^{0.41} \right\}^{10/3} \right]^{-0.3} \times \left[ \frac{\gamma^{0.3} (1 - \gamma)^{1.39}}{(1 + \gamma)^{1/3}} \right] \left[ \frac{2 f_i}{2 f_i - 1} \right]^{0.41}$$
$$x = \left[ \left\{ (D - d)/2 - 3(T/4) \right\}^2 + \left\{ D/2 - T/4 - D_b \right\}^2 - \left\{ d/2 + T/4 \right\}^2 \right]$$
$$y = 2 \left\{ (D - d)/2 - 3(T/4) \right\} \left\{ D/2 - T/4 - D_b \right\}$$
$$\phi_0 = 2\pi - 2 \cos^{-1} \left( \frac{x}{y} \right), \quad \gamma = \frac{D_b}{D_m}, \quad f_i = \frac{r_i}{D_b}, \quad f_o = \frac{r_o}{D_b}, \quad T = D - d - 2 D_b$$
$$D = 160, \quad d = 90, \quad B_w = 30, \quad r_i = r_o = 11.033$$
$$0.5 (D + d) \leq D_m \leq 0.6 (D + d), \quad 0.15 (D - d) \leq D_b \leq 0.45 (D - d), \quad 4 \leq Z \leq 50$$
$$0.515 \leq f_i, f_o \leq 0.6, \quad 0.4 \leq K_{dmin} \leq 0.5, \quad 0.6 \leq K_{dmax} \leq 0.7$$
$$0.3 \leq \delta \leq 0.4, \quad 0.02 \leq e \leq 0.1, \quad 0.6 \leq \zeta \leq 0.85$$
The comparison results of different optimization techniques for the rolling element bearing design problem are summarized in Table 18. Compared with HHO, TLBO, RUN, RSA, and COOT, the proposed EAOAHHO discovers the best cost value, 85,539.193, with a significant improvement. This example once again proves the effectiveness of EAOAHHO at the practical application level.
Taken together, the results of this section strongly demonstrate the superiority of the proposed EAOAHHO in different characteristics and real-world optimization tasks. Benefiting from the hybrid operation, pinhole imaging opposition-based learning, and composite mutation strategy, EAOAHHO possesses more robust exploration and exploitation capabilities, which can be regarded as a reliable alternative to the basic AOA, HHO, and some existing algorithms. In light of its excellent performance, EAOAHHO may be used to tackle a wider range of real-world problems.
Table 18. Comparison results of different algorithms for solving the rolling element bearing design problem.
| Algorithm | EAOAHHO | HHO [27] | TLBO [62] | RUN [63] | RSA [64] | COOT [65] |
|---|---|---|---|---|---|---|
| D_m | 125.7227 | 125 | 125.7191 | 125.2142 | 125.1722 | 125 |
| D_b | 21.42330 | 21.00000 | 21.42559 | 21.59796 | 21.29734 | 21.87500 |
| Z | 11.00116 | 11.09207 | 11.00000 | 11.40240 | 10.88521 | 10.77700 |
| f_i | 0.51500 | 0.51500 | 0.51500 | 0.51500 | 0.515253 | 0.51500 |
| f_o | 0.51500 | 0.51500 | 0.51500 | 0.51500 | 0.517764 | 0.51500 |
| K_dmin | 0.50000 | 0.40000 | 0.424266 | 0.40059 | 0.41245 | 0.43190 |
| K_dmax | 0.70000 | 0.60000 | 0.633948 | 0.61467 | 0.632338 | 0.65290 |
| δ | 0.30000 | 0.30000 | 0.30000 | 0.30530 | 0.301911 | 0.30000 |
| e | 0.02000 | 0.05047 | 0.068858 | 0.02000 | 0.024395 | 0.02000 |
| ζ | 0.600240 | 0.60000 | 0.799498 | 0.63665 | 0.6024 | 0.60000 |
| Maximum cost | 85,539.193 | 83,011.883 | 81,859.74 | 83,680.47 | 83,486.64 | 83,918.492 |
The best result obtained is highlighted in bold.

6. Conclusions and Future Work

In this study, we propose an ensemble of the Arithmetic Optimization Algorithm and Harris Hawks Optimization, called EAOAHHO, which combines the basic AOA and HHO and introduces the PIOBL and CMS strategies. PIOBL is introduced to increase the initial population diversity and the capability to escape from local optima, while CMS enhances the exploitation and exploration of EAOAHHO to obtain better convergence accuracy.
To evaluate the performance of the proposed algorithm, EAOAHHO is compared with the basic AOA, HHO, and six other advanced meta-heuristic algorithms on 23 classical benchmark functions and the IEEE CEC2017 test suite. The experimental results indicate that the EAOAHHO proposed in this study has better convergence accuracy and a stronger capability of avoiding local optima than the other algorithms, and that it maintains a good balance between exploration and exploitation. At the end of the paper, we further verify the superiority of EAOAHHO by solving four industrial engineering design problems, where its results are also competitive with those of other meta-heuristic algorithms.
However, the performance of this algorithm on the IEEE CEC2017 test suite remains to be enhanced. We could further improve the proposed algorithm by introducing strategies such as chaos-based initialization.
In the future, we expect to further enhance the computational efficiency of the proposed EAOAHHO and apply it to more practical engineering applications, such as path planning for autonomous intelligent unmanned systems, fault diagnosis for aero engines, and optimizing Support Vector Machine (SVM).

Author Contributions

Conceptualization, J.Y. and Y.S.; methodology, J.Y.; software, J.Y.; validation, J.Y., Y.S., X.Z. and Y.C.; formal analysis, Y.S.; investigation, X.Z.; data curation, J.Y.; writing—original draft preparation, J.Y.; writing—review and editing, J.Y.; funding acquisition, Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by China Geological Survey, grant number 12120113017600; and Program for JLU Science and Technology Innovative Research Team, grant number 2017TD-13.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Acknowledgments

The authors are grateful to the editor and anonymous reviewers for their constructive comments and suggestions, which have improved this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, X.C.; Zhao, K.; Niu, Y. Improved harris hawks optimization based on adaptive cooperative foraging and dispersed foraging strategies. IEEE Access 2020, 8, 160297–160314. [Google Scholar] [CrossRef]
  2. Xiao, Y.N.; Sun, X.; Guo, Y.L.; Li, S.P.; Zhang, Y.P.; Wang, Y.W. An Improved Gorilla Troops Optimizer Based on Lens Opposition-Based Learning and Adaptive β-Hill Climbing for Global Optimization. Cmes-Comput. Model. Eng. Sci. 2022, 131, 815–850. [Google Scholar] [CrossRef]
  3. Ewees, A.A.; Abd Elaziz, M.; Houssein, E.H. Improved grasshopper optimization algorithm using opposition-based learning. Expert Syst. Appl. 2018, 112, 156–172. [Google Scholar] [CrossRef]
  4. Hayyolalam, V.; Kazem, A.A.P. Black widow optimization algorithm: A novel meta-heuristic approach for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 87, 103249. [Google Scholar] [CrossRef]
  5. Xiao, Y.; Sun, X.; Guo, Y.; Cui, H.; Wang, Y.; Li, J.; Li, S. An enhanced honey badger algorithm based on Lévy flight and refraction opposition-based learning for engineering design problems. J. Intell. Fuzzy Syst. 2022, 1–24. [Google Scholar] [CrossRef]
  6. Dewangan, R.K.; Shukla, A.; Godfrey, W.W. Three dimensional path planning using grey wolf optimizer for UAVs. Appl. Intell. 2019, 49, 2201–2217. [Google Scholar] [CrossRef]
  7. Yao, J.; Sha, Y.; Chen, Y.; Zhang, G.; Hu, X.; Bai, G.; Liu, J. IHSSAO: An Improved Hybrid Salp Swarm Algorithm and Aquila Optimizer for UAV Path Planning in Complex Terrain. Appl. Sci. 2022, 12, 5634. [Google Scholar] [CrossRef]
  8. Jia, H.; Zhang, W.; Zheng, R.; Wang, S.; Leng, X.; Cao, N. Ensemble mutation slime mould algorithm with restart mechanism for feature selection. Int. J. Intell. Syst. 2022, 37, 2335–2370. [Google Scholar] [CrossRef]
  9. Tongbram, S.; Shimray, B.A.; Singh, L.S.; Dhanachandra, N. A novel image segmentation approach using FCM and whale optimization algorithm. J. Ambient. Intell. Humaniz. Comput. 2021, 1–15. [Google Scholar] [CrossRef]
  10. Yin, S.H.; Luo, Q.F.; Du, Y.L.; Zhou, Y.Q. pDTSMA: Dominant swarm with adaptive T-distribution mutation-based slime mould algorithm. Math. Biosci. Eng. 2022, 19, 2240–2285. [Google Scholar] [CrossRef]
  11. Tang, C.M.; Zhou, Y.Q.; Tang, Z.H.; Luo, Q.F. Teaching-learning-based pathfinder algorithm for function and engineering optimization problems. Appl. Intell. 2021, 51, 5040–5066. [Google Scholar] [CrossRef]
  12. Kuncheva, L.I.; Jain, L.C. Designing classifier fusion systems by genetic algorithms. IEEE Trans. Evol. Comput. 2000, 4, 327–336. [Google Scholar]
  13. Qian, W.W.; Chai, J.R.; Xu, Z.G.; Zhang, Z.Y. Differential evolution algorithm with multiple mutation strategies based on roulette wheel selection. Appl. Intell. 2018, 48, 3612–3629. [Google Scholar] [CrossRef]
  14. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  15. Yilmaz, S.; Gokasan, M. Optimal trajectory planning by big bang-big crunch algorithm. In Proceedings of the 2nd International Conference on Control, Decision and Information Technologies (CoDIT), Ecole Natl Ingenieurs Metz, Metz, France, 3–5 November 2014; pp. 557–561. [Google Scholar]
  16. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  17. Alatas, B. ACROA: Artificial chemical reaction optimization algorithm for global optimization. Expert Syst. Appl. 2011, 38, 13170–13180. [Google Scholar] [CrossRef]
  18. Recioui, A. Application of a galaxy-based search algorithm to mimo system capacity optimization. Arab. J. Sci. Eng. 2016, 41, 3407–3414. [Google Scholar] [CrossRef]
  19. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  20. Hekimoglu, B. Optimal tuning of fractional order PID controller for DC motor speed control via chaotic atom search optimization algorithm. IEEE Access 2019, 7, 38100–38114. [Google Scholar] [CrossRef]
  21. Krishna, A.B.; Saxena, S.; Kamboj, V.K. hSMA-PS: A novel memetic approach for numerical and engineering design challenges. Eng. Comput. 2021, 1–35. [Google Scholar] [CrossRef]
  22. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  23. Jiang, J.J.; Wei, W.X.; Shao, W.L.; Liang, Y.F.; Qu, Y.Y. Research on Large-scale bi-level particle swarm optimization algorithm. IEEE Access 2021, 9, 56364–56375. [Google Scholar] [CrossRef]
  24. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  25. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  26. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  27. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comp. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  28. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  29. Dhiman, G.; Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 2017, 114, 48–70. [Google Scholar] [CrossRef]
  30. Selvi, M.; Ramakrishnan, B. Lion optimization algorithm (LOA)-based reliable emergency message broadcasting system in VANET. Soft Comput. 2020, 24, 10415–10432. [Google Scholar] [CrossRef]
  31. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  32. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  33. Fan, Q.; Chen, Z.J.; Li, Z.; Xia, Z.H.; Lin, Y.Q. An efficient refracted salp swarm algorithm and its application in structural parameter identification. Eng. Comput. 2022, 38, 175–189. [Google Scholar] [CrossRef]
  34. Kamboj, V.K.; Nandi, A.; Bhadoria, A.; Sehgal, S. An intensify harris hawks optimizer for numerical and engineering optimization problems. Appl. Soft Comput. 2020, 89, 106018. [Google Scholar] [CrossRef]
  35. Belge, E.; Altan, A.; Hacioglu, R. Metaheuristic Optimization-Based Path Planning and Tracking of Quadcopter for Payload Hold-Release Mission. Electronics 2022, 11, 1208. [Google Scholar] [CrossRef]
  36. Abd Elaziz, M.; Heidari, A.A.; Fujita, H.; Moayedi, H. A competitive chain-based Harris Hawks Optimizer for global optimization and multi-level image thresholding problems. Appl. Soft Comput. 2020, 95, 106347. [Google Scholar] [CrossRef]
  37. Shao, K.X.; Fu, W.L.; Tan, J.W.; Wang, K. Coordinated approach fusing time-shift multiscale dispersion entropy and vibrational Harris hawks optimization-based SVM for fault diagnosis of rolling bearing. Measurement 2021, 173, 108580. [Google Scholar] [CrossRef]
  38. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Meth. Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  39. Devan, P.A.M.; Hussin, F.A.; Ibrahim, R.B.; Bingi, K.; Nagarajapandian, M.; Assaad, M. An Arithmetic-Trigonometric Optimization Algorithm with Application for Control of Real-Time Pressure Process Plant. Sensors 2022, 22, 617. [Google Scholar] [CrossRef]
  40. Panga, N.; Sivaramakrishnan, U.; Abishek, R.; Bingi, K.; Chaudhary, J. An Improved Arithmetic Optimization Algorithm. In Proceedings of the 2021 IEEE Madras Section Conference (MASCON), Chennai, India, 27–28 August 2021; pp. 1–6. [Google Scholar]
  41. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce, Vienna, Austria, 28–30 November 2005; pp. 695–701. [Google Scholar]
  42. Xiao, Y.; Sun, X.; Zhang, Y.; Guo, Y.; Wang, Y.; Li, J. An improved slime mould algorithm based on tent chaotic mapping and nonlinear inertia weight. Int. J. Innov. Comp. Inf. Control 2021, 17, 2151–2176. [Google Scholar] [CrossRef]
  43. Long, W.; Jiao, J.; Liang, X.; Wu, T.; Xu, M.; Cai, S. Pinhole-imaging-based learning butterfly optimization algorithm for global optimization and feature selection. Appl. Soft Comput. 2021, 103, 107146. [Google Scholar] [CrossRef]
  44. Li, M.; Xu, G.; Fu, B.; Zhao, X. Whale optimization algorithm based on dynamic pinhole imaging and adaptive strategy. J. Supercomput. 2021, 78, 6090–6120. [Google Scholar] [CrossRef]
  45. Abualigah, L.; Diabat, A. Improved multi-core arithmetic optimization algorithm-based ensemble mutation for multidisciplinary applications. J. Intell. Manuf. 2022, 1–42. [Google Scholar] [CrossRef]
  46. Zhang, H.; Wang, Z.; Chen, W.; Heidari, A.A.; Wang, M.; Zhao, X.; Liang, G.; Chen, H.; Zhang, X. Ensemble mutation-driven salp swarm algorithm with restart mechanism: Framework and fundamental analysis. Expert Syst. Appl. 2021, 165, 113897. [Google Scholar] [CrossRef]
  47. Wang, Y.; Cai, Z.; Zhang, Q. Differential evolution with composite trial vector generation strategies and control parameters. IEEE Trans. Evol. Comput. 2011, 15, 55–66. [Google Scholar] [CrossRef]
  48. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  49. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate swarm algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  50. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338. [Google Scholar] [CrossRef]
  51. Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving CEC2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia, Spain, 5–8 June 2017; pp. 372–379. [Google Scholar]
  52. Tang, A.; Zhou, H.; Han, T.; Xie, L. A modified manta ray foraging optimization for global optimization problems. IEEE Access 2021, 9, 128702–128721. [Google Scholar] [CrossRef]
  53. Coello Coello, C.A. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Comput. Meth. Appl. Mech. Eng. 2002, 191, 1245–1287. [Google Scholar] [CrossRef]
  54. Mohammadi-Balani, A.; Dehghan Nayeri, M.; Azar, A.; Taghizadeh-Yazdi, M. Golden eagle optimizer: A nature-inspired metaheuristic algorithm. Comput. Ind. Eng. 2021, 152, 107050. [Google Scholar] [CrossRef]
  55. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper optimisation algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef] [Green Version]
  56. Zhao, W.; Wang, L.; Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Meth. Appl. Mech. Eng. 2022, 388, 114194. [Google Scholar] [CrossRef]
  57. Ahmadianfar, I.; Heidari, A.A.; Noshadian, S.; Chen, H.; Gandomi, A.H. INFO: An efficient optimization algorithm based on weighted mean of vectors. Expert Syst. Appl. 2022, 195, 116516. [Google Scholar] [CrossRef]
  58. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665. [Google Scholar] [CrossRef]
  59. Yang, Y.; Chen, H.; Heidari, A.A.; Gandomi, A.H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 2021, 177, 114864. [Google Scholar] [CrossRef]
  60. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  61. Li, Y.; Zhu, X.; Liu, J. An improved moth-flame optimization algorithm for engineering problems. Symmetry 2020, 12, 1234. [Google Scholar] [CrossRef]
  62. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  63. Ahmadianfar, I.; Heidari, A.A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  64. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile search algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  65. Naruei, I.; Keynia, F. A new optimization method based on COOT bird natural life model. Expert Syst. Appl. 2021, 183, 115352. [Google Scholar] [CrossRef]
Figure 1. Different search phases of AOA.
Figure 2. Different search phases of HHO.
Figure 3. Pinhole imaging opposition-based learning.
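For reference, the pinhole-imaging opposite point illustrated in Figure 3 can be sketched as follows. This is a minimal example following the relation used in the pinhole-imaging literature [43,44]; k is the scale factor listed in Table 1, and the boundary clipping is an assumption.

```python
import numpy as np

def pinhole_imaging_obl(x, lb, ub, k=12_000):
    """Opposite point of candidate x via pinhole imaging; k is the scale factor
    (ratio of image size to object size), set to 12,000 in Table 1."""
    x_opp = (lb + ub) / 2.0 + (lb + ub) / (2.0 * k) - x / k
    return np.clip(x_opp, lb, ub)  # keep the opposite point inside the search bounds

x = np.random.uniform(-100, 100, 30)
x_star = pinhole_imaging_obl(x, np.full(30, -100.0), np.full(30, 100.0))
```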
Figure 4. Flow chart of the proposed EAOAHHO algorithm.
Figure 5. 3D view of the search space for some representative benchmark functions.
Figure 6. Boxplot of EAOAHHO and other algorithms on some benchmark functions.
Figure 7. Convergence curves of EAOAHHO and other algorithms on some benchmark functions.
Figure 8. Ranking of different algorithms on IEEE CEC2017 test suite.
Figure 9. Schematic view of three-bar truss design problem.
Figure 10. Schematic view of tension/compression spring design problem.
Figure 11. Schematic view of welded beam design problem.
Figure 12. Schematic view of rolling element bearing design problem.
Table 1. Parameter settings of different algorithms.
Algorithm | Parameter Setting
AO [28] | U = 0.00565; r1 = 10; ω = 0.005; α = 0.1; δ = 0.1; G1 ∈ [−1, 1]; G2 ∈ [2, 0]
WOA [31] | b = 1; a1 ∈ [2, 0]; a2 ∈ [−2, −1]
MFO [48] | b = 1; t ∈ [−1, 1]; a ∈ [−1, −2]
SSA [25] | c1 ∈ [1, 0]; c2, c3 ∈ [0, 1]
TSA [49] | Pmin = 1; Pmax = 4
HHO [27] | E0 ∈ [−1, 1]
AOA [38] | α = 5; μ = 0.5; Min = 0.1; Max = 1
ChOA [50] | f ∈ [2.5, 0]; m = Gauss chaotic
EAOAHHO | α = 5; μ = 0.5; Min = 0.1; Max = 1; E0 ∈ [−1, 1]; k = 12,000; F1 = 1.0; F2 = 0.8; F3 = 1.0; C1 = 0.1; C2 = 0.2; C3 = 0.9
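For illustration, the composite mutation strategy parameterized by the (F, C) pairs in Table 1 could be sketched as follows. This is a minimal example in the spirit of the CoDE operators [47]; the exact operator pool and crossover handling of EAOAHHO may differ.

```python
import numpy as np

def composite_mutation(pop, i, F=(1.0, 0.8, 1.0), C=(0.1, 0.2, 0.9), rng=None):
    """Three trial vectors for individual i using the (F, C) pairs of Table 1
    (illustrative; in the spirit of the CoDE operators [47])."""
    rng = rng or np.random.default_rng()
    n, dim = pop.shape
    r = rng.choice([j for j in range(n) if j != i], size=5, replace=False)
    mutants = [
        pop[r[0]] + F[0] * (pop[r[1]] - pop[r[2]]),                                  # DE/rand/1
        pop[r[0]] + F[1] * (pop[r[1]] - pop[r[2]]) + F[1] * (pop[r[3]] - pop[r[4]]),  # DE/rand/2
        pop[i] + F[2] * (pop[r[0]] - pop[i]) + F[2] * (pop[r[1]] - pop[r[2]]),        # DE/current-to-rand/1
    ]
    trials = []
    for v, cr in zip(mutants, C):
        mask = rng.random(dim) < cr
        mask[rng.integers(dim)] = True     # guarantee at least one mutated component
        trials.append(np.where(mask, v, pop[i]))
    return trials  # the caller evaluates all three and keeps the best one
```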
Table 2. Unimodal benchmark functions.
Function | Dim | Range | Fmin
$F_1(x)=\sum_{i=1}^{D}x_i^2$ | 30 | [−100, 100] | 0
$F_2(x)=\sum_{i=1}^{D}|x_i|+\prod_{i=1}^{D}|x_i|$ | 30 | [−10, 10] | 0
$F_3(x)=\sum_{i=1}^{D}\left(\sum_{j=1}^{i}x_j\right)^2$ | 30 | [−100, 100] | 0
$F_4(x)=\max_i\{|x_i|,\,1\le i\le D\}$ | 30 | [−100, 100] | 0
$F_5(x)=\sum_{i=1}^{D-1}\left[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$ | 30 | [−30, 30] | 0
$F_6(x)=\sum_{i=1}^{D}(|x_i+0.5|)^2$ | 30 | [−100, 100] | 0
$F_7(x)=\sum_{i=1}^{D}ix_i^4+\mathrm{random}[0,1)$ | 30 | [−1.28, 1.28] | 0
Table 3. Multimodal benchmark functions.
Function | Dim | Range | Fmin
$F_8(x)=\sum_{i=1}^{D}-x_i\sin\left(\sqrt{|x_i|}\right)$ | 30 | [−500, 500] | −418.9829 × D
$F_9(x)=\sum_{i=1}^{D}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 30 | [−5.12, 5.12] | 0
$F_{10}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D}x_i^2}\right)-\exp\left(\frac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i)\right)+20+e$ | 30 | [−32, 32] | 0
$F_{11}(x)=\frac{1}{4000}\sum_{i=1}^{D}x_i^2-\prod_{i=1}^{D}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ | 30 | [−600, 600] | 0
$F_{12}(x)=\frac{\pi}{D}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{D-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_D-1)^2\right\}+\sum_{i=1}^{D}u(x_i,10,100,4)$, where $y_i=1+\frac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m, & x_i>a\\0, & -a\le x_i\le a\\k(-x_i-a)^m, & x_i<-a\end{cases}$ | 30 | [−50, 50] | 0
$F_{13}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{D-1}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+(x_D-1)^2\left[1+\sin^2(2\pi x_D)\right]\right\}+\sum_{i=1}^{D}u(x_i,5,100,4)$ | 30 | [−50, 50] | 0
Table 4. Fix-dimension multimodal benchmark functions.
Function | Dim | Range | Fmin
$F_{14}(x)=\left(\frac{1}{500}+\sum_{j=1}^{25}\frac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | [−65, 65] | 0.998
$F_{15}(x)=\sum_{i=1}^{11}\left[a_i-\frac{x_1(b_i^2+b_ix_2)}{b_i^2+b_ix_3+x_4}\right]^2$ | 4 | [−5, 5] | 0.00030
$F_{16}(x)=4x_1^2-2.1x_1^4+\frac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316
$F_{17}(x)=\left(x_2-\frac{5.1}{4\pi^2}x_1^2+\frac{5}{\pi}x_1-6\right)^2+10\left(1-\frac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 5] | 0.398
$F_{18}(x)=\left[1+(x_1+x_2+1)^2\left(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2\right)\right]\times\left[30+(2x_1-3x_2)^2\left(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2\right)\right]$ | 2 | [−2, 2] | 3
$F_{19}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\right)$ | 3 | [−1, 2] | −3.8628
$F_{20}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\right)$ | 6 | [0, 1] | −3.32
$F_{21}(x)=-\sum_{i=1}^{5}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.1532
$F_{22}(x)=-\sum_{i=1}^{7}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.4028
$F_{23}(x)=-\sum_{i=1}^{10}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.5363
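As a usage illustration, two representative entries of Tables 2 and 3 can be implemented directly from their definitions. This is a minimal sketch; function names are ours.

```python
import numpy as np

def f1_sphere(x):
    # F1 of Table 2: sum of squares, global minimum 0 at the origin.
    return np.sum(x**2)

def f10_ackley(x):
    # F10 of Table 3: Ackley function, global minimum 0 at the origin.
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)

print(f1_sphere(np.random.uniform(-100, 100, 30)))  # D = 30, range [-100, 100]
print(f10_ackley(np.random.uniform(-32, 32, 30)))   # D = 30, range [-32, 32]
```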
Table 5. Comparison results of EAOAHHO-1, EAOAHHO-2, EAOAHHO-3, and EAOAHHO.
Fn | Criteria | EAOAHHO-1 | EAOAHHO-2 | EAOAHHO-3 | EAOAHHO
F1 | Avg | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
   | Std | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F2 | Avg | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
   | Std | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F3 | Avg | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
   | Std | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F4 | Avg | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
   | Std | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F5 | Avg | 1.92 × 10^−3 | 1.04 × 10^−3 | 1.38 × 10^−4 | 7.13 × 10^−5
   | Std | 6.74 × 10^−3 | 1.84 × 10^−3 | 6.88 × 10^−4 | 1.00 × 10^−4
F6 | Avg | 1.83 × 10^−3 | 3.39 × 10^−4 | 1.41 × 10^−5 | 2.09 × 10^−6
   | Std | 3.18 × 10^−3 | 5.99 × 10^−4 | 3.89 × 10^−5 | 2.62 × 10^−6
F7 | Avg | 3.58 × 10^−5 | 2.78 × 10^−5 | 3.29 × 10^−5 | 1.98 × 10^−5
   | Std | 4.29 × 10^−5 | 2.22 × 10^−5 | 2.65 × 10^−5 | 2.20 × 10^−5
F8 | Avg | −12,561.135 | −12,564.854 | −12,569.327 | −12,569.451
   | Std | 2.02 × 10^1 | 1.34 × 10^1 | 6.02 × 10^−1 | 1.16 × 10^−1
F9 | Avg | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
   | Std | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F10 | Avg | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16
   | Std | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F11 | Avg | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
   | Std | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F12 | Avg | 2.94 × 10^−5 | 3.01 × 10^−6 | 1.35 × 10^−6 | 2.21 × 10^−8
   | Std | 8.65 × 10^−5 | 4.30 × 10^−6 | 2.98 × 10^−6 | 3.77 × 10^−8
F13 | Avg | 3.01 × 10^−4 | 1.57 × 10^−5 | 5.67 × 10^−6 | 1.86 × 10^−7
   | Std | 6.55 × 10^−4 | 2.95 × 10^−5 | 8.39 × 10^−6 | 4.48 × 10^−7
F14 | Avg | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1
   | Std | 1.67 × 10^−10 | 1.12 × 10^−10 | 0.00 × 10^0 | 0.00 × 10^0
F15 | Avg | 3.73 × 10^−4 | 3.50 × 10^−4 | 3.39 × 10^−4 | 3.07 × 10^−4
   | Std | 9.91 × 10^−5 | 4.34 × 10^−5 | 1.67 × 10^−4 | 1.34 × 10^−19
F16 | Avg | −1.0316 | −1.0316 | −1.0316 | −1.0316
   | Std | 8.23 × 10^−16 | 8.63 × 10^−16 | 6.92 × 10^−16 | 6.78 × 10^−16
F17 | Avg | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1
   | Std | 1.34 × 10^−5 | 7.07 × 10^−6 | 0.00 × 10^0 | 0.00 × 10^0
F18 | Avg | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0
   | Std | 4.01 × 10^−13 | 1.17 × 10^−14 | 8.08 × 10^−16 | 5.34 × 10^−16
F19 | Avg | −3.8628 | −3.8628 | −3.8628 | −3.8628
   | Std | 1.55 × 10^−5 | 1.04 × 10^−5 | 5.82 × 10^−15 | 2.71 × 10^−15
F20 | Avg | −3.2363 | −3.2416 | −3.2586 | −3.2863
   | Std | 8.46 × 10^−2 | 8.04 × 10^−2 | 6.03 × 10^−2 | 5.54 × 10^−2
F21 | Avg | −9.7978 | −10.1531 | −10.1532 | −10.1532
   | Std | 1.29 × 10^0 | 3.17 × 10^−4 | 6.90 × 10^−15 | 6.79 × 10^−15
F22 | Avg | −9.6291 | −10.4028 | −10.4029 | −10.4029
   | Std | 2.33 × 10^0 | 2.32 × 10^−4 | 8.73 × 10^−16 | 7.38 × 10^−16
F23 | Avg | −9.1795 | −10.5354 | −10.3130 | −10.5364
   | Std | 2.53 × 10^0 | 2.43 × 10^−3 | 1.22 × 10^0 | 2.29 × 10^−15
The best result obtained is highlighted in bold.
Table 6. Comparison results of EAOAHHO and other algorithms on 23 benchmark functions.
Fn | Criteria | AO | WOA | MFO | SSA | TSA | HHO | AOA | ChOA | EAOAHHO
F1Avg2.49 × 10−1123.58 × 10−742.01 × 1031.83 × 10−71.42 × 10−214.69 × 10−925.76 × 10−359.89 × 10−60.00 × 100
Std1.19 × 10−1111.25 × 10−734.07 × 1033.60 × 10−72.78 × 10−212.57 × 10−913.16 × 10−342.89 × 10−50.00 × 100
F2Avg1.89 × 10−566.84 × 10−513.38 × 1011.75 × 1001.00 × 10−131.00 × 10−500.00 × 1004.69 × 10−50.00 × 100
Std1.03 × 10−553.35 × 10−502.05 × 1011.30 × 1001.43 × 10−134.20 × 10−500.00 × 1006.61 × 10−50.00 × 100
F3Avg2.35 × 10−1014.69 × 1041.83 × 1041.50 × 1037.58 × 10−44.66 × 10−752.81 × 10−31.99 × 1020.00 × 100
Std1.29 × 10−1001.56 × 1041.29 × 1048.73 × 1023.53 × 10−32.43 × 10−747.15 × 10−34.14 × 1020.00 × 100
F4Avg1.92 × 10−524.85 × 1016.64 × 1011.20 × 1013.01 × 10−18.85 × 10−482.68 × 10−23.32 × 10−10.00 × 100
Std1.05 × 10−513.19 × 1017.66 × 1004.32 × 1004.17 × 10−14.78 × 10−472.04 × 10−23.37 × 10−10.00 × 100
F5Avg3.88 × 10−32.81 × 1011.39 × 1042.26 × 1022.83 × 1011.41 × 10−22.84 × 1012.89 × 1011.64 × 10−4
Std5.10 × 10−34.39 × 10−13.09 × 1042.98 × 1028.48 × 10−11.46 × 10−22.68 × 10−11.68 × 10−13.02 × 10−4
F6Avg9.27 × 10−54.43 × 10−13.01 × 1035.83 × 10−73.76 × 1001.58 × 10−43.14 × 1003.66 × 1002.26 × 10−7
Std1.49 × 10−42.24 × 10−15.35 × 1036.27 × 10−76.28 × 10−12.13 × 10−43.10 × 10−13.60 × 10−12.34 × 10−7
F7Avg1.04 × 10−44.01 × 10−33.89 × 1001.82 × 10−11.02 × 10−21.39 × 10−45.25 × 10−52.44 × 10−32.61 × 10−5
Std1.18 × 10−45.17 × 10−37.50 × 1009.10 × 10−24.09 × 10−31.34 × 10−44.60 × 10−52.39 × 10−32.58 × 10−5
F8Avg−7849.674−10,295.569−8411.505−7585.108−6021.436−12,568.997−5275.950−5725.692−12,569.445
Std3.91 × 1031.99 × 1038.90 × 1027.78 × 1025.06 × 1027.71 × 10−14.70 × 1026.32 × 1011.58 × 10−1
F9Avg0.00 × 1000.00 × 1001.60 × 1025.37 × 1011.83 × 1020.00 × 1000.00 × 1007.50 × 1000.00 × 100
Std0.00 × 1000.00 × 1003.18 × 1012.06 × 1013.63 × 1010.00 × 1000.00 × 1007.60 × 1000.00 × 100
F10Avg8.88 × 10−163.73 × 10−151.40 × 1012.42 × 1001.73 × 1008.88 × 10−168.88 × 10−162.00 × 1018.88 × 10−16
Std0.00 × 1002.36 × 10−157.87 × 1001.01 × 1001.56 × 1000.00 × 1000.00 × 1009.97 × 10−40.00 × 100
F11Avg0.00 × 1000.00 × 1002.46 × 1011.61 × 10−21.25 × 10−20.00 × 1001.68 × 10−11.94 × 10−20.00 × 100
Std0.00 × 1000.00 × 1004.63 × 1011.20 × 10−21.75 × 10−20.00 × 1001.55 × 10−13.16 × 10−20.00 × 100
F12Avg1.02 × 10−62.62 × 10−22.42 × 1017.02 × 1007.44 × 1008.34 × 10−65.01 × 10−14.75 × 10−14.37 × 10−8
Std1.34 × 10−62.92 × 10−28.38 × 1013.59 × 1004.58 × 1007.05 × 10−64.96 × 10−21.98 × 10−19.03 × 10−8
F13Avg3.36 × 10−55.70 × 10−11.37 × 1071.46 × 1012.95 × 1009.44 × 10−52.83 × 1002.76 × 1001.73 × 10−7
Std4.72 × 10−52.99 × 10−17.49 × 1071.45 × 1016.26 × 10−12.29 × 10−41.20 × 10−11.09 × 10−12.56 × 10−7
F14Avg2.89 × 1003.84 × 1001.89 × 1001.10 × 1008.56 × 1001.33 × 1001.02 × 1011.13 × 1009.98 × 10−1
Std3.50 × 1003.97 × 1001.31 × 1004.00 × 10−15.08 × 1009.47 × 10−13.51 × 1005.03 × 10−10.00 × 100
F15Avg5.21 × 10−48.37 × 10−41.01 × 10−31.59 × 10−33.94 × 10−33.91 × 10−41.73 × 10−21.31 × 10−33.07 × 10−4
Std1.34 × 10−45.31 × 10−43.66 × 10−43.55 × 10−37.48 × 10−32.22 × 10−42.88 × 10−24.50 × 10−51.27 × 10−19
F16Avg−1.0312−1.0316−1.0316−1.0316−1.0284−1.0316−1.0316−1.0316−1.0316
Std3.77 × 10−47.51 × 10−106.78 × 10−162.42 × 10−149.65 × 10−36.71 × 10−101.05 × 10−71.25 × 10−56.78 × 10−16
F17Avg3.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−14.11 × 10−13.99 × 10−13.98 × 10−1
Std3.22 × 10−44.54 × 10−60.00 × 1003.65 × 10−144.34 × 10−51.60 × 10−51.43 × 10−29.51 × 10−40.00 × 100
F18Avg3.03 × 1003.00 × 1003.00 × 1003.00 × 1001.77 × 1013.00 × 1008.40 × 1003.00 × 1003.00 × 100
Std3.90 × 10−21.85 × 10−41.53 × 10−152.05 × 10−132.88 × 1011.06 × 10−61.10 × 1012.37 × 10−41.42 × 10−15
F19Avg−3.8571−3.8541−3.8628−3.8628−3.8626−3.8578−3.8529−3.8548−3.8628
Std4.19 × 10−31.31 × 10−21.65 × 10−104.56 × 10−93.41 × 10−41.04 × 10−23.52 × 10−32.40 × 10−32.71 × 10−15
F20Avg−3.1913−3.1980−3.2479−3.2310−3.2729−3.0784−3.0334−2.5754−3.2863
Std6.99 × 10−21.68 × 10−16.27 × 10−26.11 × 10−28.50 × 10−21.12 × 10−11.06 × 10−14.67 × 10−15.54 × 10−2
F21Avg−10.1395−7.8528−6.7116−7.8105−6.1621−5.3826−3.7548−2.8730−10.1532
Std3.27 × 10−22.69 × 1003.19 × 1003.22 × 1003.44 × 1001.26 × 1001.37 × 1002.13 × 1007.01 × 10−15
F22Avg−10.3873−6.9247−7.1985−8.1209−6.6915−5.0843−3.6058−4.1673−10.4029
Std2.14 × 10−22.95 × 1003.54 × 1003.34 × 1003.60 × 1004.52 × 10−31.49 × 1001.66 × 1001.23 × 10−15
F23Avg−10.5298−7.1441−7.3743−8.5583−6.0191−5.2153−3.5062−4.3408−10.5364
Std7.63 × 10−33.40 × 1003.72 × 1003.14 × 1003.80 × 1001.07 × 1001.77 × 1001.54 × 1002.06 × 10−15
The best result obtained is highlighted in bold.
Table 7. Comparison results of EAOAHHO and other algorithms on 23 benchmark functions (D = 100).
Fn | Criteria | AO | WOA | MFO | SSA | TSA | HHO | AOA | ChOA | EAOAHHO
F1Avg1.91 × 10−1064.91 × 10−706.26 × 1041.42 × 1035.30 × 10−101.90 × 10−922.42 × 10−27.46 × 10−10.00 × 100
Std1.05 × 10−1052.68 × 10−691.61 × 1044.17 × 1029.45 × 10−107.70 × 10−929.72 × 10−38.66 × 10−10.00 × 100
F2Avg3.74 × 10−695.79 × 10−512.36 × 1024.79 × 1013.29 × 10−72.82 × 10−503.62 × 10−429.12 × 10−20.00 × 100
Std2.03 × 10−681.43 × 10−503.77 × 1015.63 × 1003.62 × 10−78.96 × 10−501.98 × 10−415.87 × 10−20.00 × 100
F3Avg5.70 × 10−1061.04 × 1062.19 × 1055.32 × 1041.18 × 1042.22 × 10−581.02 × 1006.81 × 1040.00 × 100
Std3.12 × 10−1052.40 × 1054.89 × 1042.81 × 1046.78 × 1031.22 × 10−579.30 × 10−13.11 × 1040.00 × 100
F4Avg4.32 × 10−537.98 × 1019.31 × 1012.79 × 1015.54 × 1016.78 × 10−509.59 × 10−27.40 × 1010.00 × 100
Std1.65 × 10−521.96 × 1012.02 × 1003.01 × 1001.55 × 1012.27 × 10−499.35 × 10−31.65 × 1010.00 × 100
F5Avg2.37 × 10−29.81 × 1011.58 × 1081.58 × 1059.85 × 1015.04 × 10−29.88 × 1013.33 × 1022.07 × 10−3
Std4.26 × 10−22.36 × 10−16.63 × 1078.01 × 1042.41 × 10−16.49 × 10−21.75 × 10−15.47 × 1024.26 × 10−3
F6Avg6.93 × 10−44.25 × 1005.96 × 1041.35 × 1031.44 × 1015.07 × 10−41.82 × 1012.27 × 1014.87 × 10−4
Std1.51 × 10−31.18 × 1001.58 × 1045.09 × 1021.22 × 1006.79 × 10−47.37 × 10−12.76 × 1004.83 × 10−4
F7Avg9.14 × 10−54.04 × 10−33.00 × 1022.57 × 1005.51 × 10−21.39 × 10−46.79 × 10−52.32 × 10−22.59 × 10−5
Std8.92 × 10−54.57 × 10−31.25 × 1025.17 × 10−12.01 × 10−21.78 × 10−45.69 × 10−51.59 × 10−21.82 × 10−5
F8Avg−1.07 × 104−3.48 × 104−2.27 × 104−2.15 × 104−1.34 × 104−4.19 × 104−9.88 × 103−1.81 × 104−4.19 × 104
Std2.12 × 1035.78 × 1031.97 × 1032.07 × 1031.19 × 1031.83 × 1007.35 × 1021.11 × 1026.87 × 100
F9Avg0.00 × 1007.58 × 10−158.58 × 1022.47 × 1029.64 × 1020.00 × 1000.00 × 1003.66 × 1010.00 × 100
Std0.00 × 1004.15 × 10−147.98 × 1014.76 × 1011.48 × 1020.00 × 1000.00 × 1001.91 × 1010.00 × 100
F10Avg4.44 × 10−163.76 × 10−151.98 × 1011.03 × 1011.83 × 10−14.44 × 10−163.58 × 10−42.00 × 1014.44 × 10−16
Std0.00 × 1002.46 × 10−151.90 × 10−11.56 × 1006.97 × 10−10.00 × 1009.32 × 10−44.29 × 10−30.00 × 100
F11Avg0.00 × 1002.11 × 10−25.49 × 1021.50 × 1011.24 × 10−20.00 × 1005.62 × 1023.89 × 10−10.00 × 100
Std0.00 × 1008.12 × 10−21.50 × 1023.47 × 1001.58 × 10−20.00 × 1001.62 × 1022.45 × 10−10.00 × 100
F12Avg1.12 × 10−64.49 × 10−22.74 × 1083.34 × 1011.22 × 1011.60 × 10−69.03 × 10−11.20 × 1006.28 × 10−7
Std2.38 × 10−62.01 × 10−21.54 × 1081.08 × 1015.04 × 1002.10 × 10−62.28 × 10−23.14 × 10−19.68 × 10−7
F13Avg5.30 × 10−52.87 × 1006.03 × 1084.37 × 1031.81 × 1011.26 × 10−49.96 × 1001.05 × 1016.87 × 10−6
Std1.17 × 10−49.34 × 10−13.54 × 1086.77 × 1032.71 × 1011.68 × 10−47.28 × 10−21.61 × 1001.03 × 10−5
The best result obtained is highlighted in bold.
Table 8. Comparison results of EAOAHHO and other algorithms on 23 benchmark functions (D = 300).
Fn | Criteria | AO | WOA | MFO | SSA | TSA | HHO | AOA | ChOA | EAOAHHO
F1Avg3.97 × 10−981.32 × 10−695.59 × 1054.17 × 1042.61 × 10−45.17 × 10−942.81 × 10−11.31 × 1020.00 × 100
Std1.80 × 10−976.55 × 10−692.29 × 1044.07 × 1031.72 × 10−42.81 × 10−932.50 × 10−26.40 × 1010.00 × 100
F2Avg1.95 × 10−575.17 × 10−483.86 × 10242.81 × 1026.11 × 10−41.89 × 10−482.27 × 10−93.85 × 1000.00 × 100
Std1.07 × 10−562.34 × 10−472.12 × 10251.34 × 1013.54 × 10−47.02 × 10−481.23 × 10−81.44 × 1000.00 × 100
F3Avg6.27 × 10−1011.09 × 1071.93 × 1065.06 × 1054.54 × 1051.14 × 10−491.31 × 1011.02 × 1060.00 × 100
Std3.43 × 10−1003.49 × 1062.99 × 1052.05 × 1058.49 × 1044.93 × 10−498.36 × 1002.50 × 1050.00 × 100
F4Avg1.47 × 10−517.74 × 1019.82 × 1013.70 × 1019.82 × 1012.57 × 10−481.50 × 10−19.61 × 1010.00 × 100
Std7.97 × 10−512.25 × 1015.68 × 10−13.46 × 1001.31 × 1001.11 × 10−471.59 × 10−21.72 × 1000.00 × 100
F5Avg7.66 × 10−22.97 × 1022.24 × 1091.22 × 1073.47 × 1021.11 × 10−12.99 × 1024.10 × 1042.24 × 10−2
Std1.58 × 10−12.37 × 10−11.94 × 1082.72 × 1065.92 × 1011.37 × 10−15.36 × 10−24.67 × 1044.04 × 10−2
F6Avg1.22 × 10−31.72 × 1015.65 × 1054.03 × 1045.54 × 1011.11 × 10−36.62 × 1012.27 × 1022.98 × 10−3
Std1.98 × 10−34.99 × 1002.97 × 1043.85 × 1031.88 × 1001.37 × 10−39.33 × 10−17.57 × 1013.80 × 10−3
F7Avg6.49 × 10−53.13 × 10−31.04 × 1046.16 × 1014.30 × 10−11.75 × 10−48.75 × 10−55.39 × 10−13.77 × 10−5
Std5.08 × 10−54.07 × 10−31.09 × 1039.95 × 1001.81 × 10−13.44 × 10−49.67 × 10−54.00 × 10−14.04 × 10−5
F8Avg−2.38 × 104−1.06 × 105−4.77 × 104−4.52 × 104−2.40 × 104−1.26 × 105−1.77 × 104−5.19 × 104−1.25 × 105
Std6.18 × 1031.88 × 1043.90 × 1033.45 × 1031.75 × 1038.02 × 1001.10 × 1034.17 × 1023.36 × 103
F9Avg0.00 × 1000.00 × 1003.77 × 1031.57 × 1033.47 × 1030.00 × 1001.34 × 10−71.39 × 1020.00 × 100
Std0.00 × 1000.00 × 1001.09 × 1029.60 × 1013.47 × 1020.00 × 1007.34 × 10−74.18 × 1010.00 × 100
F10Avg4.44 × 10−164.12 × 10−152.01 × 1011.37 × 1011.96 × 10−34.44 × 10−166.56 × 10−32.00 × 1014.44 × 10−16
Std0.00 × 1002.55 × 10−151.11 × 10−13.95 × 10−11.23 × 10−30.00 × 1004.81 × 10−41.58 × 10−20.00 × 100
F11Avg0.00 × 1000.00 × 1005.05 × 1033.64 × 1022.68 × 10−20.00 × 1004.19 × 1032.18 × 1000.00 × 100
Std0.00 × 1000.00 × 1002.27 × 1023.33 × 1014.53 × 10−20.00 × 1006.26 × 1026.03 × 10−10.00 × 100
F12Avg1.45 × 10−68.63 × 10−25.11 × 1091.05 × 1052.87 × 1042.86 × 10−61.05 × 1002.73 × 1031.32 × 10−6
Std4.39 × 10−63.07 × 10−25.14 × 1081.08 × 1057.43 × 1043.17 × 10−61.73 × 10−21.39 × 1042.21 × 10−6
F13Avg1.26 × 10−41.01 × 1011.03 × 10109.04 × 1065.57 × 1034.87 × 10−43.01 × 1012.96 × 1042.99 × 10−5
Std1.78 × 10−42.78 × 1007.97 × 1083.41 × 1066.81 × 1037.16 × 10−41.92 × 10−21.42 × 1054.18 × 10−5
The best result obtained is highlighted in bold.
Table 9. Comparison results of EAOAHHO and other algorithms on 23 benchmark functions (D = 500).
Fn | Criteria | AO | WOA | MFO | SSA | TSA | HHO | AOA | ChOA | EAOAHHO
F1Avg1.70 × 10−998.31 × 10−711.14 × 1069.27 × 1042.42 × 10−27.09 × 10−956.49 × 10−17.28 × 1020.00 × 100
Std9.28 × 10−992.87 × 10−704.05 × 1044.96 × 1031.62 × 10−22.72 × 10−943.72 × 10−23.21 × 1020.00 × 100
F2Avg1.29 × 10−584.90 × 10−475.80 × 101145.36 × 1027.69 × 10−38.35 × 10−498.38 × 10−41.23 × 1010.00 × 100
Std6.61 × 10−582.21 × 10−463.08 × 101151.42 × 1013.63 × 10−33.42 × 10−481.12 × 10−32.71 × 1000.00 × 100
F3Avg3.05 × 10−943.14 × 1074.74 × 1061.31 × 1061.32 × 1061.16 × 10−253.57 × 1014.00 × 1060.00 × 100
Std1.67 × 10−939.95 × 1068.05 × 1055.78 × 1052.39 × 1056.35 × 10−251.56 × 1011.43 × 1060.00 × 100
F4Avg3.90 × 10−527.55 × 1019.89 × 1013.99 × 1019.92 × 1011.09 × 10−471.83 × 10−19.70 × 1010.00 × 100
Std2.11 × 10−512.89 × 1014.34 × 10−13.52 × 1002.49 × 10−15.94 × 10−471.96 × 10−21.28 × 1000.00 × 100
F5Avg8.41 × 10−24.96 × 1025.01 × 1093.63 × 1071.08 × 1052.00 × 10−14.99 × 1024.61 × 1054.66 × 10−2
Std1.19 × 10−14.67 × 10−12.44 × 1084.50 × 1069.22 × 1042.87 × 10−16.49 × 10−24.69 × 1051.20 × 10−1
F6Avg8.53 × 10−43.07 × 1011.16 × 1069.36 × 1041.03 × 1022.48 × 10−31.16 × 1028.75 × 1026.58 × 10−3
Std1.28 × 10−37.50 × 1002.77 × 1046.33 × 1031.70 × 1003.68 × 10−31.29 × 1003.74 × 1025.05 × 10−3
F7Avg1.01 × 10−44.32 × 10−33.89 × 1042.81 × 1022.83 × 1002.01 × 10−48.69 × 10−54.52 × 1002.52 × 10−5
Std1.04 × 10−44.28 × 10−31.96 × 1033.47 × 1011.24 × 1002.34 × 10−47.01 × 10−53.93 × 1003.02 × 10−5
F8Avg−4.25 × 104−1.78 × 105−6.13 × 104−6.09 × 104−3.16 × 104−2.09 × 105−2.27 × 104−8.51 × 104−2.09 × 105
Std1.23 × 1043.38 × 1044.89 × 1035.37 × 1032.48 × 1033.10 × 1011.56 × 1035.54 × 1022.28 × 102
F9Avg3.03 × 10−140.00 × 1006.97 × 1033.11 × 1035.68 × 1030.00 × 1007.28 × 10−63.23 × 1020.00 × 100
Std1.66 × 10−130.00 × 100 1.47 × 1021.48 × 1027.77 × 1020.00 × 1007.47 × 10−65.27 × 1010.00 × 100
F10Avg4.44 × 10−164.47 × 10−152.03 × 1011.43 × 1011.07 × 10−24.44 × 10−167.98 × 10−32.01 × 1014.44 × 10−16
Std0.00 × 1002.76 × 10−151.52 × 10−12.19 × 10−15.07 × 10−30.00 × 1003.85 × 10−47.95 × 10−30.00 × 100
F11Avg0.00 × 1000.00 × 1001.04 × 1048.42 × 1024.55 × 10−20.00 × 1009.50 × 1037.85 × 1000.00 × 100
Std0.00 × 1000.00 × 1003.03 × 1025.57 × 1018.45 × 10−20.00 × 1002.90 × 1032.87 × 1000.00 × 100
F12Avg1.46 × 10−69.02 × 10−21.21 × 10101.50 × 1063.71 × 1062.34 × 10−61.08 × 1001.29 × 1051.23 × 10−6
Std2.89 × 10−63.98 × 10−25.87 × 1087.13 × 1053.58 × 1062.99 × 10−61.07 × 10−23.52 × 1051.75 × 10−6
F13Avg5.14 × 10−41.66 × 1012.25 × 10103.33 × 1071.87 × 1065.90 × 10−45.02 × 1012.18 × 1056.32 × 10−5
Std9.17 × 10−45.31 × 1001.20 × 1097.38 × 1061.90 × 1066.37 × 10−42.92 × 10−24.57 × 1051.08 × 10−4
The best result obtained is highlighted in bold.
Table 10. Average computational time of EAOAHHO and other algorithms on 23 benchmark functions (unit: s).
Fn | AO | WOA | MFO | SSA | TSA | HHO | AOA | ChOA | EAOAHHO
F11.91 × 10−16.27 × 10−21.22 × 10−11.20 × 10−11.58 × 10−11.13 × 10−11.03 × 10−11.53 × 1004.22 × 10−1
F21.65 × 10−16.90 × 10−21.08 × 10−11.06 × 10−11.39 × 10−18.98 × 10−29.28 × 10−21.48 × 1004.18 × 10−1
F33.83 × 10−11.70 × 10−12.09 × 10−12.17 × 10−12.63 × 10−13.56 × 10−12.04 × 10−11.53 × 1001.44 × 100
F41.72 × 10−16.07 × 10−21.07 × 10−11.11 × 10−11.51 × 10−11.08 × 10−19.71 × 10−21.54 × 1003.90 × 10−1
F52.03 × 10−17.75 × 10−21.17 × 10−11.19 × 10−11.64 × 10−11.80 × 10−11.17 × 10−11.49 × 1005.48 × 10−1
F61.58 × 10−16.00 × 10−29.90 × 10−21.15 × 10−11.48 × 10−11.33 × 10−19.13 × 10−21.52 × 1004.31 × 10−1
F72.87 × 10−11.32 × 10−11.58 × 10−11.78 × 10−12.02 × 10−12.47 × 10−11.52 × 10−11.48 × 1001.11 × 100
F82.07 × 10−17.37 × 10−21.06 × 10−11.41 × 10−11.63 × 10−12.06 × 10−11.24 × 10−11.49 × 1005.70 × 10−1
F91.77 × 10−17.91 × 10−21.23 × 10−11.08 × 10−11.73 × 10−11.64 × 10−19.99 × 10−21.53 × 1003.90 × 10−1
F101.86 × 10−17.26 × 10−21.15 × 10−11.23 × 10−11.71 × 10−11.57 × 10−11.05 × 10−11.52 × 1004.24 × 10−1
F111.97 × 10−17.68 × 10−21.54 × 10−11.35 × 10−11.75 × 10−11.75 × 10−11.13 × 10−11.53 × 1007.94 × 10−1
F125.00 × 10−12.10 × 10−12.52 × 10−12.71 × 10−12.99 × 10−15.27 × 10−12.46 × 10−11.59 × 1001.39 × 100
F135.10 × 10−12.15 × 10−12.65 × 10−12.94 × 10−13.03 × 10−15.16 × 10−12.64 × 10−11.58 × 1001.45 × 100
F147.77 × 10−13.74 × 10−13.45 × 10−13.98 × 10−13.46 × 10−18.94 × 10−13.77 × 10−14.50 × 10−12.63 × 100
F151.23 × 10−14.17 × 10−25.01 × 10−27.04 × 10−24.55 × 10−21.19 × 10−14.64 × 10−22.84 × 10−14.18 × 10−1
F161.19 × 10−14.16 × 10−24.11 × 10−26.78 × 10−24.03 × 10−21.11 × 10−13.76 × 10−21.55 × 10−13.93 × 10−1
F171.18 × 10−13.29 × 10−23.94 × 10−23.30 × 10−23.15 × 10−21.11 × 10−14.16 × 10−21.56 × 10−13.80 × 10−1
F181.06 × 10−13.22 × 10−24.30 × 10−26.91 × 10−23.16 × 10−29.89 × 10−23.70 × 10−21.43 × 10−13.51 × 10−1
F191.40 × 10−14.08 × 10−25.88 × 10−27.07 × 10−24.52 × 10−21.38 × 10−14.82 × 10−22.06 × 10−14.33 × 10−1
F201.52 × 10−15.13 × 10−25.83 × 10−28.53 × 10−26.71 × 10−21.48 × 10−15.89 × 10−23.86 × 10−14.49 × 10−1
F211.67 × 10−14.73 × 10−25.97 × 10−29.26 × 10−26.01 × 10−21.61 × 10−16.11 × 10−22.91 × 10−15.04 × 10−1
F221.90 × 10−16.34 × 10−26.14 × 10−29.65 × 10−26.88 × 10−21.91 × 10−15.81 × 10−22.84 × 10−15.63 × 10−1
F232.21 × 10−17.60 × 10−28.47 × 10−21.05 × 10−18.09 × 10−21.93 × 10−18.66 × 10−23.08 × 10−16.70 × 10−1
The best result obtained is highlighted in bold.
Table 11. Mean absolute error of EAOAHHO and other algorithms on 23 benchmark functions.
Algorithm | MAE | Rank
AO | 205.3004 | 3
WOA | 2141.9002 | 8
MFO | 597,465.9 | 9
SSA | 296.0375 | 4
TSA | 296.0909 | 5
HHO | 0.7171 | 2
ChOA | 309.8714 | 6
AOA | 320.1635 | 7
EAOAHHO | 0.0033 | 1
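Assuming the MAE in Table 11 is the mean absolute deviation of each algorithm's average result from the theoretical optimum over the 23 functions, it can be computed as follows (a sketch; the function name is ours).

```python
import numpy as np

def mean_absolute_error(avg_results, true_optima):
    # avg_results: an algorithm's average best fitness on F1-F23 (length 23)
    # true_optima: the corresponding Fmin column of Tables 2-4
    return float(np.mean(np.abs(np.asarray(avg_results) - np.asarray(true_optima))))
```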
Table 12. p-values of the Wilcoxon rank-sum test between EAOAHHO and other algorithms.
Fn | EAOAHHO vs. AO | EAOAHHO vs. WOA | EAOAHHO vs. MFO | EAOAHHO vs. SSA | EAOAHHO vs. TSA | EAOAHHO vs. HHO | EAOAHHO vs. AOA | EAOAHHO vs. ChOA
F1 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12
F2 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | NaN | 1.21 × 10^−12
F3 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12
F4 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12
F5 | 5.46 × 10^−6 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.43 × 10^−8 | 3.02 × 10^−11 | 3.02 × 10^−11
F6 | 1.69 × 10^−9 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.82 × 10^−10 | 3.02 × 10^−11 | 4.94 × 10^−5 | 3.02 × 10^−11 | 3.02 × 10^−11
F7 | 5.56 × 10^−4 | 4.50 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 2.68 × 10^−6 | 1.44 × 10^−2 | 4.50 × 10^−11
F8 | 3.00 × 10^−11 | 3.00 × 10^−11 | 3.00 × 10^−11 | 3.00 × 10^−11 | 3.00 × 10^−11 | 3.07 × 10^−6 | 3.00 × 10^−11 | 3.00 × 10^−11
F9 | NaN | NaN | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | NaN | NaN | 1.21 × 10^−12
F10 | NaN | 8.73 × 10^−8 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | NaN | NaN | 1.21 × 10^−12
F11 | NaN | NaN | 1.21 × 10^−12 | 1.21 × 10^−12 | 8.87 × 10^−7 | NaN | 1.21 × 10^−12 | 1.21 × 10^−12
F12 | 1.10 × 10^−8 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 4.98 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
F13 | 6.70 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 4.18 × 10^−9 | 3.02 × 10^−11 | 3.02 × 10^−11
F14 | 1.21 × 10^−12 | 1.21 × 10^−12 | 6.58 × 10^−5 | 7.56 × 10^−13 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12
F15 | 2.82 × 10^−11 | 2.82 × 10^−11 | 2.81 × 10^−11 | 2.82 × 10^−11 | 2.82 × 10^−11 | 2.82 × 10^−11 | 2.82 × 10^−11 | 2.82 × 10^−11
F16 | 1.21 × 10^−12 | 1.21 × 10^−12 | NaN | 1.21 × 10^−12 | 1.21 × 10^−12 | 4.57 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12
F17 | 1.21 × 10^−12 | 1.21 × 10^−12 | NaN | 1.93 × 10^−10 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12
F18 | 1.26 × 10^−11 | 1.26 × 10^−11 | 4.90 × 10^−5 | 1.26 × 10^−11 | 1.26 × 10^−11 | 1.26 × 10^−11 | 1.26 × 10^−11 | 1.26 × 10^−11
F19 | 1.21 × 10^−12 | 1.21 × 10^−12 | NaN | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12
F20 | 1.93 × 10^−6 | 9.73 × 10^−7 | 1.06 × 10^−2 | 2.43 × 10^−8 | 1.41 × 10^−4 | 1.60 × 10^−10 | 1.01 × 10^−11 | 2.59 × 10^−11
F21 | 4.08 × 10^−12 | 4.08 × 10^−12 | 6.56 × 10^−7 | 4.08 × 10^−12 | 4.08 × 10^−12 | 4.08 × 10^−12 | 4.08 × 10^−12 | 4.08 × 10^−12
F22 | 1.45 × 10^−11 | 1.45 × 10^−11 | 1.55 × 10^−6 | 1.45 × 10^−11 | 1.45 × 10^−11 | 1.45 × 10^−11 | 1.45 × 10^−11 | 1.45 × 10^−11
F23 | 3.15 × 10^−12 | 3.15 × 10^−12 | 2.12 × 10^−10 | 3.15 × 10^−12 | 3.15 × 10^−12 | 3.15 × 10^−12 | 3.15 × 10^−12 | 3.15 × 10^−12
+ | 20 | 21 | 20 | 23 | 23 | 20 | 20 | 23
= | 3 | 2 | 3 | 0 | 0 | 3 | 3 | 0
− | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
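For reference, each cell of Table 12 can be reproduced from two 30-run samples with a rank-sum test of this kind. This is a sketch with illustrative data; the NaN entries in the table arise where both algorithms return identical results on every run, making the test uninformative.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1e-8, 30)  # illustrative 30-run results of EAOAHHO
b = rng.normal(1.0, 1e-2, 30)  # illustrative 30-run results of a competitor

stat, p = ranksums(a, b)
print(p, p < 0.05)  # p < 0.05 counts as a significant difference in the +/=/- rows
```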
Table 13. Descriptions of IEEE CEC2017 test suite.
Function | Name | Dim | Range | Fmin
Unimodal Functions
CEC-01 | Shifted and Rotated Bent Cigar Function | 10 | [−100, 100] | 100
CEC-03 | Shifted and Rotated Zakharov Function | 10 | [−100, 100] | 300
Simple Multimodal Functions
CEC-04 | Shifted and Rotated Rosenbrock's Function | 10 | [−100, 100] | 400
CEC-05 | Shifted and Rotated Rastrigin's Function | 10 | [−100, 100] | 500
CEC-06 | Shifted and Rotated Expanded Scaffer's F6 Function | 10 | [−100, 100] | 600
CEC-07 | Shifted and Rotated Lunacek Bi-Rastrigin Function | 10 | [−100, 100] | 700
CEC-08 | Shifted and Rotated Non-Continuous Rastrigin's Function | 10 | [−100, 100] | 800
CEC-09 | Shifted and Rotated Levy Function | 10 | [−100, 100] | 900
CEC-10 | Shifted and Rotated Schwefel's Function | 10 | [−100, 100] | 1000
Hybrid Functions
CEC-11 | Hybrid Function 1 (N = 3) | 10 | [−100, 100] | 1100
CEC-12 | Hybrid Function 2 (N = 3) | 10 | [−100, 100] | 1200
CEC-13 | Hybrid Function 3 (N = 3) | 10 | [−100, 100] | 1300
CEC-14 | Hybrid Function 4 (N = 4) | 10 | [−100, 100] | 1400
CEC-15 | Hybrid Function 5 (N = 4) | 10 | [−100, 100] | 1500
CEC-16 | Hybrid Function 6 (N = 4) | 10 | [−100, 100] | 1600
CEC-17 | Hybrid Function 7 (N = 5) | 10 | [−100, 100] | 1700
CEC-18 | Hybrid Function 8 (N = 5) | 10 | [−100, 100] | 1800
CEC-19 | Hybrid Function 9 (N = 5) | 10 | [−100, 100] | 1900
CEC-20 | Hybrid Function 10 (N = 6) | 10 | [−100, 100] | 2000
Composition Functions
CEC-21 | Composition Function 1 (N = 3) | 10 | [−100, 100] | 2100
CEC-22 | Composition Function 2 (N = 3) | 10 | [−100, 100] | 2200
CEC-23 | Composition Function 3 (N = 4) | 10 | [−100, 100] | 2300
CEC-24 | Composition Function 4 (N = 4) | 10 | [−100, 100] | 2400
CEC-25 | Composition Function 5 (N = 5) | 10 | [−100, 100] | 2500
CEC-26 | Composition Function 6 (N = 5) | 10 | [−100, 100] | 2600
CEC-27 | Composition Function 7 (N = 6) | 10 | [−100, 100] | 2700
CEC-28 | Composition Function 8 (N = 6) | 10 | [−100, 100] | 2800
CEC-29 | Composition Function 9 (N = 3) | 10 | [−100, 100] | 2900
CEC-30 | Composition Function 10 (N = 3) | 10 | [−100, 100] | 3000
Table 14. Comparison results of EAOAHHO and other algorithms on IEEE CEC2017 test suite.
Fn | Metric | AO | WOA | MFO | SSA | TSA | HHO | AOA | ChOA | EAOAHHO
CEC-1Avg3.78 × 1075.79 × 1071.41 × 1084.01 × 1033.19 × 1099.59 × 1061.07 × 10102.32 × 1093.30 × 103
Std9.04 × 1076.42 × 1074.37 × 1083.49 × 1033.75 × 1094.71 × 1074.38 × 1091.70 × 1093.18 × 103
CEC-3Avg2.31 × 1037.56 × 1038.82 × 1033.25 × 1029.41 × 1036.46 × 1021.29 × 1045.01 × 1033.00 × 102
Std7.94 × 1025.10 × 1037.52 × 1031.77 × 1005.44 × 1032.71 × 1023.42 × 1032.33 × 1034.29 × 10−2
CEC-4Avg4.23 × 1024.88 × 1024.28 × 1024.10 × 1026.31 × 1024.34 × 1021.20 × 1036.52 × 1024.06 × 102
Std2.30 × 1016.83 × 1013.91 × 1011.77 × 1013.02 × 1023.48 × 1015.45 × 1021.88 × 1021.20 × 101
CEC-5Avg5.34 × 1025.64 × 1025.30 × 1025.24 × 1025.55 × 1025.57 × 1025.60 × 1025.65 × 1025.36 × 102
Std1.58 × 1012.12 × 1011.13 × 1019.97 × 1001.81 × 1011.94 × 1011.65 × 1011.61 × 1011.63 × 101
CEC-6Avg6.20 × 1026.36 × 1026.37 × 1026.13 × 1026.38 × 1026.40 × 1026.38 × 1026.34 × 1026.04 × 102
Std5.79 × 1001.31 × 1011.29 × 1018.53 × 1001.32 × 1011.38 × 1015.93 × 1001.26 × 1014.57 × 100
CEC-7Avg7.58 × 1027.97 × 1027.39 × 1027.47 × 1028.00 × 1027.97 × 1028.04 × 1028.06 × 1027.78 × 102
Std1.48 × 1012.52 × 1011.10 × 1011.47 × 1012.85 × 1012.05 × 1011.82 × 1011.48 × 1011.40 × 101
CEC-8Avg8.28 × 1028.47 × 1028.34 × 1028.30 × 1028.47 × 1028.32 × 1028.36 × 1028.45 × 1028.34 × 102
Std8.67 × 1002.14 × 1011.28 × 1011.38 × 1011.34 × 1011.04 × 1019.46 × 1008.12 × 1009.29 × 100
CEC-9Avg1.04 × 1031.61 × 1031.07 × 1031.10 × 1031.56 × 1031.53 × 1031.41 × 1031.63 × 1031.38 × 103
Std6.65 × 1013.90 × 1022.25 × 1023.35 × 1025.70 × 1022.71 × 1021.97 × 1022.59 × 1021.92 × 102
CEC-10Avg1.97 × 1032.25 × 1032.03 × 1031.93 × 1032.29 × 1032.07 × 1032.28 × 1032.94 × 1032.01 × 103
Std2.24 × 1023.91 × 1024.25 × 1021.41 × 1022.44 × 1022.90 × 1022.94 × 1023.33 × 1022.76 × 102
CEC-11Avg1.28 × 1031.24 × 1031.26 × 1031.20 × 1034.41 × 1031.19 × 1032.91 × 1031.40 × 1031.15 × 103
Std1.60 × 1028.00 × 1013.57 × 1027.48 × 1013.44 × 1037.67 × 1011.91 × 1031.19 × 1026.10 × 101
CEC-12Avg4.49 × 1066.43 × 1061.81 × 1063.85 × 1062.11 × 1074.03 × 1063.17 × 1084.12 × 1072.10 × 103
Std5.36 × 1066.73 × 1064.49 × 1063.97 × 1061.05 × 1084.25 × 1063.96 × 1081.03 × 1081.38 × 103
CEC-13Avg1.74 × 1041.80 × 1041.15 × 1041.53 × 1045.23 × 1061.89 × 1041.20 × 1044.61 × 1041.33 × 103
Std1.25 × 1041.17 × 1041.10 × 1041.45 × 1042.26 × 1071.19 × 1048.05 × 1032.28 × 1049.55 × 101
CEC-14Avg2.63 × 1034.08 × 1035.11 × 1032.87 × 1033.61 × 1032.40 × 1038.14 × 1036.62 × 1031.45 × 103
Std1.05 × 1032.15 × 1035.97 × 1033.01 × 1032.15 × 1031.27 × 1037.87 × 1031.42 × 1032.90 × 101
CEC-15Avg6.50 × 1031.05 × 1041.22 × 1048.09 × 1037.19 × 1036.75 × 1031.90 × 1041.95 × 1041.53 × 103
Std3.22 × 1037.49 × 1031.94 × 1047.25 × 1035.46 × 1033.70 × 1037.59 × 1038.50 × 1033.56 × 101
CEC-16Avg1.88 × 1031.92 × 1031.81 × 1031.79 × 1031.93 × 1031.91 × 1032.08 × 1031.94 × 1031.89 × 103
Std1.28 × 1021.55 × 1021.60 × 1021.56 × 1021.62 × 1021.65 × 1021.25 × 1021.03 × 1021.44 × 102
CEC-17Avg1.78 × 1031.82 × 1031.81 × 1031.85 × 1031.88 × 1031.84 × 1031.91 × 1031.80 × 1031.79 × 103
Std2.29 × 1016.76 × 1016.67 × 1014.95 × 1011.08 × 1021.01 × 1021.17 × 1022.68 × 1013.43 × 101
CEC-18Avg2.54 × 1041.95 × 1042.06 × 1041.99 × 1042.91 × 1041.67 × 1042.70 × 1061.02 × 1051.83 × 103
Std1.46 × 1041.32 × 1041.35 × 1049.38 × 1031.61 × 1041.25 × 1041.22 × 1076.72 × 1042.25 × 101
CEC-19Avg2.80 × 1041.09 × 1051.68 × 1046.91 × 1031.15 × 1052.17 × 1048.40 × 1042.48 × 1041.90 × 103
Std4.44 × 1041.50 × 1052.34 × 1044.92 × 1033.71 × 1052.72 × 1048.60 × 1049.90 × 1038.92 × 100
CEC-20Avg2.13 × 1032.18 × 1032.10 × 1032.14 × 1032.19 × 1032.19 × 1032.15 × 1032.26 × 1032.15 × 103
Std4.91 × 1017.99 × 1016.35 × 1015.70 × 1018.55 × 1017.53 × 1017.32 × 1017.61 × 1016.13 × 101
CEC-21Avg2.31 × 1032.34 × 1032.32 × 1032.30 × 1032.34 × 1032.34 × 1032.34 × 1032.31 × 1032.29 × 103
Std4.76 × 1013.99 × 1014.62 × 1015.18 × 1015.20 × 1014.59 × 1017.82 × 1016.20 × 1013.48 × 101
CEC-22Avg2.31 × 1032.42 × 1032.31 × 1032.30 × 1032.60 × 1032.42 × 1033.03 × 1033.63 × 1032.31 × 103
Std5.11 × 1004.15 × 1024.35 × 1012.09 × 1013.28 × 1024.53 × 1022.58 × 1027.99 × 1022.21 × 101
CEC-23Avg2.64 × 1032.65 × 1032.62 × 1032.63 × 1032.71 × 1032.69 × 1032.75 × 1032.66 × 1032.65 × 103
Std1.51 × 1012.35 × 1011.00 × 1011.08 × 1014.42 × 1013.89 × 1014.02 × 1019.06 × 1001.38 × 101
CEC-24Avg2.76 × 1032.77 × 1032.75 × 1032.74 × 1032.81 × 1032.82 × 1032.84 × 1032.80 × 1032.75 × 103
Std4.90 × 1016.68 × 1014.92 × 1015.34 × 1017.58 × 1014.15 × 1018.22 × 1011.63 × 1014.69 × 101
CEC-25Avg2.95 × 1032.96 × 1032.94 × 1032.93 × 1033.03 × 1032.93 × 1033.36 × 1033.05 × 1032.93 × 103
Std3.20 × 1015.60 × 1012.49 × 1012.89 × 1019.88 × 1012.90 × 1012.31 × 1028.86 × 1012.48 × 101
CEC-26Avg3.07 × 1033.45 × 1033.09 × 1032.92 × 1033.92 × 1033.79 × 1033.97 × 1034.01 × 1033.07 × 103
Std1.79 × 1025.34 × 1022.88 × 1022.06 × 1024.47 × 1026.09 × 1023.09 × 1023.33 × 1023.02 × 102
CEC-27Avg3.11 × 1033.15 × 1033.09 × 1033.10 × 1033.17 × 1033.17 × 1033.26 × 1033.14 × 1033.09 × 103
Std8.58 × 1004.84 × 1011.99 × 1011.72 × 1013.96 × 1015.41 × 1015.93 × 1013.34 × 1013.74 × 100
CEC-28Avg3.46 × 1033.47 × 1033.38 × 1033.32 × 1033.52 × 1033.42 × 1033.79 × 1033.25 × 1033.30 × 103
Std7.54 × 1011.33 × 1029.19 × 1011.15 × 1021.84 × 1021.68 × 1021.73 × 1022.95 × 1011.18 × 102
CEC-29Avg3.25 × 1033.40 × 1033.25 × 1033.31 × 1033.34 × 1033.38 × 1033.44 × 1033.39 × 1033.21 × 103
Std4.39 × 1011.35 × 1029.96 × 1015.83 × 1011.06 × 1021.02 × 1021.40 × 1021.04 × 1026.58 × 101
CEC-30Avg1.00 × 1061.73 × 1066.35 × 1051.02 × 1062.31 × 1061.25 × 1063.09 × 1075.74 × 1064.70 × 105
Std1.22 × 1061.61 × 1065.46 × 1057.18 × 1056.59 × 1061.87 × 1062.82 × 1074.21 × 1061.15 × 106
The best result obtained is highlighted in bold.
Table 15. Comparison results of different algorithms for solving three-bar truss design problem.
Algorithm | A1 (z1) | A2 (z2) | Minimum Weight
MFO [48] | 0.78824477 | 0.40946691 | 263.89598
SSA [25] | 0.78866541 | 0.40827578 | 263.89584
HHO [27] | 0.78866280 | 0.40828313 | 263.89584
AOA [38] | 0.79369000 | 0.39426000 | 263.91540
MVO [19] | 0.78860276 | 0.40845307 | 263.89585
GEO [54] | 0.78867110 | 0.40825970 | 263.89584
GOA [55] | 0.78889756 | 0.40761957 | 263.89588
AHA [56] | 0.78868300 | 0.40822460 | 263.89584
EAOAHHO | 0.78859304 | 0.40825052 | 263.87285
The best result obtained is highlighted in bold.
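For illustration, the three-bar truss problem behind Table 15 can be evaluated with its standard penalized formulation. This is a sketch; the static penalty coefficient is an assumption (one common constraint-handling choice, cf. the survey in [53]).

```python
import numpy as np

L, P, SIGMA = 100.0, 2.0, 2.0  # bar length (cm), load and stress limit (kN/cm^2)

def three_bar_truss(z, penalty=1e6):
    z1, z2 = z  # cross-sectional areas: z1 = A1 = A3, z2 = A2, both in [0, 1]
    weight = (2.0 * np.sqrt(2.0) * z1 + z2) * L
    g = [
        (np.sqrt(2.0) * z1 + z2) / (np.sqrt(2.0) * z1**2 + 2.0 * z1 * z2) * P - SIGMA,
        z2 / (np.sqrt(2.0) * z1**2 + 2.0 * z1 * z2) * P - SIGMA,
        1.0 / (np.sqrt(2.0) * z2 + z1) * P - SIGMA,
    ]
    return weight + penalty * sum(max(0.0, gi)**2 for gi in g)  # static penalty

print(three_bar_truss([0.78866541, 0.40827578]))  # ~263.896, cf. the SSA row of Table 15
```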
Table 16. Comparison results of different algorithms for solving tension/compression spring design problem.
Algorithm | d (z1) | D (z2) | N (z3) | Minimum Weight
WOA [31] | 0.051207 | 0.345215 | 12.004032 | 0.0126763
MFO [48] | 0.051994 | 0.364109 | 10.868422 | 0.0126669
SSA [25] | 0.051207 | 0.345215 | 12.004032 | 0.0126763
HHO [27] | 0.051796 | 0.359305 | 11.138859 | 0.0126654
AOA [38] | 0.050000 | 0.349809 | 11.863700 | 0.0121240
AHA [56] | 0.051897 | 0.361748 | 10.689283 | 0.0126660
GWO [26] | 0.051690 | 0.356737 | 11.288850 | 0.0126660
INFO [57] | 0.051555 | 0.353499 | 11.480340 | 0.0126660
EAOAHHO | 0.052291 | 0.360263 | 10.179344 | 0.01199749
The best result obtained is highlighted in bold.
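Similarly, the tension/compression spring objective and constraints of Table 16 follow the standard formulation, sketched here under the same penalty approach (the penalty coefficient is again an assumption).

```python
def spring_weight(z, penalty=1e6):
    d, D, N = z  # wire diameter, mean coil diameter, number of active coils
    f = (N + 2.0) * D * d**2
    g = [
        1.0 - (D**3 * N) / (71785.0 * d**4),
        (4.0 * D**2 - d * D) / (12566.0 * (D**3 * d - d**4)) + 1.0 / (5108.0 * d**2) - 1.0,
        1.0 - 140.45 * d / (D**2 * N),
        (d + D) / 1.5 - 1.0,
    ]
    return f + penalty * sum(max(0.0, gi)**2 for gi in g)  # static penalty

print(spring_weight([0.051690, 0.356737, 11.288850]))  # ~0.01267, cf. the GWO row of Table 16
```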
Table 17. Comparison results of different algorithms for solving welded beam design problem.
Algorithm | h (z1) | l (z2) | t (z3) | b (z4) | Minimum Cost
WOA [31] | 0.205396 | 3.484293 | 9.037426 | 0.206276 | 1.730499
AOA [38] | 0.194475 | 2.570920 | 10.000000 | 0.201827 | 1.716400
MVO [19] | 0.205463 | 3.473193 | 9.044502 | 0.205695 | 1.726450
GWO [26] | 0.205676 | 3.478377 | 9.036810 | 0.205778 | 1.726240
ROA [58] | 0.200077 | 3.365754 | 9.011182 | 0.206893 | 1.706447
HGS [59] | 0.207739 | 3.230642 | 8.988778 | 0.207926 | 1.703355
AVOA [60] | 0.205730 | 3.470474 | 9.036621 | 0.205730 | 1.724852
IMFO [61] | 0.205730 | 3.470200 | 9.037500 | 0.205730 | 1.724900
EAOAHHO | 0.195539 | 3.354588 | 9.036630 | 0.205729 | 1.693914
The best result obtained is highlighted in bold.