An Improved Hybrid Aquila Optimizer and Harris Hawks Algorithm for Solving Industrial Engineering Optimization Problems

Abstract: Aquila Optimizer (AO) and Harris Hawks Optimizer (HHO) are recently proposed meta-heuristic optimization algorithms. AO possesses strong global exploration capability but insufficient local exploitation ability, whereas HHO has a strong exploitation phase but unsatisfactory exploration capability. Considering the complementary characteristics of the two algorithms, this paper proposes an improved hybrid of AO and HHO, named IHAOHHO, which combines a nonlinear escaping energy parameter with a random opposition-based learning strategy to improve search performance. Firstly, combining the salient features of AO and HHO retains valuable exploration and exploitation capabilities. Secondly, random opposition-based learning (ROBL) is added in the exploitation phase to improve local optima avoidance. Finally, the nonlinear escaping energy parameter is utilized to better balance the exploration and exploitation phases of IHAOHHO. Together, these strategies effectively enhance the exploration and exploitation of the proposed algorithm. To verify its optimization performance, IHAOHHO is comprehensively analyzed on 23 standard benchmark functions, and its practicability is further demonstrated on four industrial engineering design problems. Compared with the original AO and HHO and five state-of-the-art algorithms, the results show that IHAOHHO has superior performance and promising prospects.


Introduction
Meta-heuristic optimization algorithms inspired by nature are becoming more and more popular in real-world applications [1]. Meta-heuristics usually mimic biological or physical phenomena and consider only inputs and outputs, making them flexible and straightforward. Furthermore, meta-heuristics are stochastic optimization techniques, a property that helps them effectively avoid the local optima that frequently occur in real problems. Because of these advantages of simplicity, flexibility, and local optima avoidance, meta-heuristic optimization algorithms outperform heuristic optimization algorithms in solving the various complex and tricky optimization problems that arise in the real world [2].
Meta-heuristic optimization algorithms fall into three dominant categories: evolutionary, physics-based, and swarm intelligence techniques. Evolutionary algorithms are inspired by the laws of evolution in nature: a randomly generated population evolves over subsequent generations, and each generation is formed by combining the best individuals of the previous one, so that the population improves as the number of iterations increases. Among recent HHO variants, modified control strategies have been used to govern the main position vectors of HHO to achieve a better balance between the exploration and exploitation phases, and chaotic maps have been adopted to update the control energy parameter to avoid premature convergence. Kaveh et al. [54] proposed an effective algorithm called ICHHO by hybridizing HHO with the Imperialist Competitive Algorithm (ICA); combining the exploration strategy of ICA with the exploitation technique of HHO yields a better overall search strategy. These improved and hybrid algorithms have proven that HHO is a valuable algorithm. Aquila Optimizer (AO) [55] is a recent swarm intelligence algorithm, proposed in 2021, which simulates the different hunting methods Aquila uses for different kinds of prey. The hunting methods for fast-moving prey reflect the global exploration ability of the algorithm, and those for slow-moving prey reflect its local exploitation ability. AO possesses strong global exploration ability, high search efficiency, and fast convergence speed, but its local exploitation ability is insufficient, so it easily falls into local optima. Because the algorithm was proposed only recently, there is no research on improving AO yet.
Therefore, we explore a hybridization to improve the performance of HHO and AO. To the best of our knowledge, this kind of hybridization of HHO with AO has not been used before. We propose a new improved hybrid Aquila Optimizer and Harris Hawks Optimization (IHAOHHO) by combining the salient features of AO and HHO. Specifically, we integrate the exploitation strategy of HHO into the AO algorithm, with random opposition-based learning (ROBL) added in the exploitation phase to avoid stagnation in local optima. At the same time, a nonlinear escaping energy parameter balances the exploration and exploitation phases of the algorithm. The 23 standard benchmark functions and four engineering design problems were applied to test the exploration and exploitation capabilities of IHAOHHO. The proposed algorithm is compared with the original AO and HHO and several well-known algorithms, including SMA, SSA, WOA, GWO, and PSO. The experimental results show that the proposed IHAOHHO algorithm outperforms these state-of-the-art meta-heuristic algorithms.
The rest of this paper is organized as follows (Figure 1): Section 2 provides a brief overview of the related work: the original Harris Hawks Optimization algorithm and Aquila Optimizer, as well as the two improvement strategies. Section 3 describes the proposed hybrid algorithm in detail. Section 4 presents the simulation experiments and analyzes the results. Finally, Section 5 concludes the paper.

Aquila Optimizer (AO)
AO is a novel swarm intelligence algorithm proposed by Abualigah et al. [55] in 2021. Aquila exhibits four hunting behaviors for different kinds of prey; it can switch hunting strategies flexibly for different prey and then use its fast speed, together with sturdy feet and claws, to attack. A brief description of the mathematical model is as follows.
Step 1: Expanded exploration (X_1): high soar with a vertical stoop. In this method, the Aquila flies high above the ground and explores the search space widely, then takes a vertical dive once it determines the area of the prey. The mathematical representation of this behaviour is written as Equation (1), where X_best(t) represents the best position obtained so far, and X_M(t) denotes the average position of all Aquilas in the current iteration; t and T are the current iteration and the maximum number of iterations, respectively; N is the population size; and r_1 is a random number between 0 and 1.
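For reference, the update rule for this step from the original AO paper [55], written with the random number r_1 named above, is:

```latex
X_1(t+1) = X_{best}(t) \times \left(1 - \frac{t}{T}\right)
         + \left(X_M(t) - X_{best}(t) \times r_1\right),
\qquad
X_M(t) = \frac{1}{N} \sum_{i=1}^{N} X_i(t)
```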
Step 2: Narrowed exploration (X_2): contour flight with short glide attack. This is the hunting method the Aquila uses most often: it descends within the selected area, flies around the prey, and attacks with a short glide. The position update formula is written as follows, where X_R(t) represents the position of a randomly selected Aquila, D is the dimension size, and r_2 is a random number within (0, 1). LF(D) is the Levy flight function, in which s and β are constant values equal to 0.01 and 1.5, respectively, and u and v are random numbers between 0 and 1. The variables y and x describe the spiral shape of the search, where r_3 denotes the number of search cycles between 1 and 20, D_1 consists of integer numbers from 1 to the dimension size D, and ω is equal to 0.005.
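For reference, the corresponding formulas from the original AO paper [55] are given below; the constant U = 0.00565 comes from the original AO formulation and is not named in the text above:

```latex
X_2(t+1) = X_{best}(t) \times LF(D) + X_R(t) + (y - x) \times r_2,
\qquad
LF(D) = s \times \frac{u \times \sigma}{|v|^{1/\beta}},
\quad
\sigma = \left( \frac{\Gamma(1+\beta) \times \sin(\pi\beta/2)}
{\Gamma\!\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{(\beta-1)/2}} \right)^{1/\beta},

y = r \times \cos(\theta), \qquad
x = r \times \sin(\theta), \qquad
r = r_3 + U \times D_1, \qquad
\theta = -\omega \times D_1 + \frac{3\pi}{2}
```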
Step 3: Expanded exploitation (X_3): low flight with a slow descent attack. In the third method, when the area of the prey is roughly determined, the Aquila descends vertically to make a preliminary attack, exploiting the selected area to get close to the prey. This behaviour is presented as follows, where X_best(t) denotes the best position obtained so far, and X_M(t) is the average of the current positions; α and δ are exploitation adjustment parameters fixed to 0.1; UB and LB are the upper and lower bounds of the problem; and r_4 and r_5 are random numbers within (0, 1).
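For reference, the update rule for this step from the original AO paper [55], written with the random numbers r_4 and r_5 named above, is:

```latex
X_3(t+1) = \left(X_{best}(t) - X_M(t)\right) \times \alpha - r_4
         + \left((UB - LB) \times r_5 + LB\right) \times \delta
```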
Step 4: Narrowed exploitation (X_4): walking and grabbing prey. In this method, the Aquila chases the prey along its escape trajectory and then attacks it on the ground. The mathematical representation of this behaviour is as follows, where X(t) is the current position, and QF(t) is the quality function value used to balance the search strategy; G_1 denotes the movement parameter of the Aquila during the chase, a random number in [-1, 1]; G_2 denotes the flight slope when chasing prey, which decreases linearly from 2 to 0; and r_6, r_7, and r_8 are random numbers between 0 and 1.
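For reference, the update rule for this step from the original AO paper [55] is shown below; the assignment of r_6, r_7, and r_8 to the individual random draws is our assumption, since the original paper simply writes rand for each:

```latex
X_4(t+1) = QF(t) \times X_{best}(t) - \left(G_1 \times X(t) \times r_6\right)
         - G_2 \times LF(D) + r_7 \times G_1,

QF(t) = t^{\frac{2 \times r_8 - 1}{(1 - T)^2}}, \qquad
G_1 = 2 \times rand - 1, \qquad
G_2 = 2 \times \left(1 - \frac{t}{T}\right)
```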

Harris's Hawks Optimizer (HHO)
HHO is a meta-heuristic optimization algorithm proposed by Heidari et al. in 2019, inspired by the unique cooperative foraging activities of Harris's hawks. Harris's hawks can show a variety of chasing patterns according to the dynamic nature of the environment and the escaping patterns of the prey. These switching activities help confuse the running prey, and the cooperative strategies allow the hawks to chase the detected prey to exhaustion, which increases its vulnerability. A brief description of the mathematical model is as follows.

Exploration Phase
The Harris's hawks usually perch at random locations, where they wait and monitor the desert to detect prey. There are two perching strategies, based either on the positions of other family members and the prey or on random tall trees; the strategy is selected according to the random value q.
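For reference, the perching rules from the original HHO paper, with q and r_1 through r_4 as defined below, are:

```latex
X(t+1) =
\begin{cases}
X_r(t) - r_1 \left| X_r(t) - 2 r_2 X(t) \right|, & q \ge 0.5 \\[4pt]
\left( X_{prey}(t) - X_m(t) \right) - r_3 \left( LB + r_4 (UB - LB) \right), & q < 0.5
\end{cases}
\qquad
X_m(t) = \frac{1}{N} \sum_{i=1}^{N} X_i(t)
```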
where X_r(t) is the position of a random hawk, X_prey(t) represents the position of the prey (i.e., the best position obtained so far), and X_m(t) denotes the average position of the current population; N is the total number of hawks; UB and LB are the upper and lower bounds of the problem; and q, r_1, r_2, r_3, and r_4 are random numbers between 0 and 1.

Transition from Exploration to Exploitation Phase
The HHO algorithm switches from the exploration to the exploitation phase based on the escaping energy of the prey and then selects among different exploitative behaviors. The energy of the prey, which decreases during the escaping behaviour, is modelled as follows.
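For reference, the energy model from the original HHO paper is:

```latex
E = 2 E_0 \left( 1 - \frac{t}{T} \right), \qquad E_0 \in (-1, 1)
```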
where E represents the escaping energy of the prey, E_0 is the initial state of the energy, and t and T are the current and maximum number of iterations, respectively. When |E| ≥ 1, the algorithm performs the exploration phase; when |E| < 1, it performs the exploitation phase.

Exploitation Phase
In this phase, four different chasing and attacking strategies are used, based on the escaping energy of the prey and the chasing styles of the Harris's hawks. Besides the escaping energy, a parameter r is also utilized to choose the chasing strategy; it indicates the chance of the prey successfully escaping (r < 0.5) or not (r ≥ 0.5) before the attack.

Soft besiege
When r ≥ 0.5 and |E| ≥ 0.5, the prey still has enough energy and tries to escape, so the Harris's hawks encircle it softly to exhaust it further and then attack. This behaviour is modeled as follows, where ∆X(t) indicates the difference between the position of the prey and the current position, J represents the random jump strength of the prey, X_prey(t) represents the position of the prey, X(t) is the current position, and r_5 is a random number within (0, 1).
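For reference, the soft besiege rule from the original HHO paper is:

```latex
X(t+1) = \Delta X(t) - E \left| J \, X_{prey}(t) - X(t) \right|,
\qquad
\Delta X(t) = X_{prey}(t) - X(t),
\qquad
J = 2 \left( 1 - r_5 \right)
```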

Hard besiege
When r ≥ 0.5 and |E| < 0.5, the prey has low escaping energy, so the Harris's hawks encircle it readily and finally attack. In this situation, the positions are updated as follows:
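For reference, the hard besiege rule from the original HHO paper is:

```latex
X(t+1) = X_{prey}(t) - E \left| \Delta X(t) \right|
```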

Soft besiege with progressive rapid dives
When |E| ≥ 0.5 and r < 0.5, the prey has enough energy to escape, so the Harris's hawks perform a soft besiege with several rapid dives around the prey, progressively correcting their position and direction. This behaviour is modeled as follows, where D is the dimension size of the problem and S is a random vector; LF is the Levy flight function, utilized to mimic the deceptive motions of the prey, in which u and v are random values between 0 and 1 and β is a constant equal to 1.5. Note that only the better of the positions Y and Z is selected as the next position.
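For reference, the dive rules for this case from the original HHO paper, with F denoting the fitness function, are:

```latex
Y = X_{prey}(t) - E \left| J \, X_{prey}(t) - X(t) \right|,
\qquad
Z = Y + S \times LF(D),

X(t+1) =
\begin{cases}
Y, & F(Y) < F(X(t)) \\
Z, & F(Z) < F(X(t))
\end{cases}
```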

Hard besiege with progressive rapid dives
When |E| < 0.5 and r < 0.5, the prey does not have enough energy to escape, so the hawks perform a hard besiege, decreasing the distance between their average position and the prey before finally attacking and killing it. The mathematical representation of this behaviour is as follows; note that only the better of the positions Y and Z becomes the next position for the new iteration.
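For reference, the dive rules for this case from the original HHO paper, which differ from the previous case only in using the average position X_m(t), are:

```latex
Y = X_{prey}(t) - E \left| J \, X_{prey}(t) - X_m(t) \right|,
\qquad
Z = Y + S \times LF(D),

X(t+1) =
\begin{cases}
Y, & F(Y) < F(X(t)) \\
Z, & F(Z) < F(X(t))
\end{cases}
```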

Nonlinear Escaping Energy Parameter
In the original HHO algorithm, the escaping energy E controls the transition from the exploration to the exploitation phase. The factor governing E decreases linearly from 2 to 0, so only local search is performed in the second half of the iterations, which makes it easy to fall into local optima. To overcome this shortcoming, another way to update the escaping energy E is utilized [56], where t and T are the current and maximum number of iterations, respectively. The resulting nonlinear decay of the escaping energy over the iterations is illustrated in Figure 2.
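Since the exact form of the nonlinear update (Equation (24)) is given in [56], the following sketch only illustrates the idea with one commonly used nonlinear schedule; the specific exponent is our assumption, not necessarily the paper's formula:

```python
def linear_energy_factor(t, T):
    """Original HHO schedule: decreases linearly from 2 to 0 over the run."""
    return 2 * (1 - t / T)

def nonlinear_energy_factor(t, T):
    """Illustrative nonlinear schedule (an assumption, not necessarily the
    paper's Equation (24)): it stays above 1 past the midpoint of the run,
    so the exploration condition |E| >= 1 can still fire in later iterations."""
    return 2 * (1 - (t / T) ** 2)
```

With E = E_0 × factor(t, T) and E_0 in (−1, 1), the linear factor rules out |E| ≥ 1 for t > T/2, while a schedule like the nonlinear one above keeps exploration possible noticeably later in the run.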

Random Opposition-Based Learning (ROBL)
Opposition-based learning (OBL) is a powerful optimization tool proposed by Tizhoosh [57]. The main idea of OBL is to simultaneously consider the fitness of an estimate and its corresponding opposite estimate in order to obtain a better candidate solution. The OBL concept has successfully been used in a variety of meta-heuristic algorithms [58][59][60][61][62] to improve convergence speed. Different from the original OBL, this paper utilizes an improved OBL strategy, called random opposition-based learning (ROBL) [63], defined by Equation (26): x̃_j = l_j + u_j − rand × x_j, where x̃_j represents the jth dimension of the opposite solution, l_j and u_j are the lower and upper bounds of the problem in the jth dimension, and rand is a random number within (0, 1). The opposite solution described by Equation (26) is more random than that of the original OBL and can effectively help the population jump out of local optima.
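A minimal sketch of the ROBL step in code (the function name is ours):

```python
import random

def robl(x, lb, ub, rng=random):
    """Random opposition-based learning (Equation (26)): for each dimension j,
    the opposite value is l_j + u_j - rand * x_j with rand ~ U(0, 1)."""
    return [l + u - rng.random() * xj for xj, l, u in zip(x, lb, ub)]
```

In IHAOHHO, the opposite solution only replaces the current one when its fitness is better, so the step can never make the population worse.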

The Detail Design of IHAOHHO
The exploration phase of AO mimics hunting behaviour for fast-moving prey over a wide flying area, giving AO strong global search ability and fast convergence speed. However, the selected search space is not exhaustively searched during the exploitation phase, and the effect of Levy flight is relatively weak, leading to premature convergence. In short, the AO algorithm possesses strong randomness and fast convergence in the global exploration phase but easily falls into local optima in the local exploitation stage. For the HHO algorithm, the transition from global to local search is driven by the energy attenuation of the prey. In the early iterations, which constitute the exploration phase, the diversity of the population is insufficient and convergence is slow. As the number of iterations increases, the energy of the prey decreases and the algorithm enters the local exploitation stage, where four different hunting strategies are adopted according to the energy and escape probability of the prey. A Levy flight term is included in the exploitation phase, and whether a Levy-flight update is kept is decided by fitness values, so the algorithm can jump out of local optima to a certain extent.
Therefore, we combine the global exploration phase of AO with the local exploitation phase of HHO to exploit the advantages of both algorithms: the global search capability, the fast convergence speed, and the ability to escape local optima are all retained. Meanwhile, a nonlinear escaping energy mechanism controls the transition from the exploration to the exploitation phase, which preserves the possibility of global search in the later iterations, and the ROBL strategy is added to the exploitation phase to further enhance the ability to jump out of local optima. Together, these strategies improve the convergence speed and accuracy of the hybrid algorithm and effectively enhance its overall optimization performance. This improved hybrid Aquila Optimizer and Harris Hawks Optimization algorithm is named IHAOHHO. The different phases of IHAOHHO are shown in Figure 3, the pseudo-code is given in Algorithm 1, and the summarized flowchart is illustrated in Figure 4.
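To make the control flow concrete, the hybrid loop can be sketched as below. This is a deliberately simplified, self-contained stand-in, not the paper's exact method: the position updates abbreviate Equations (1)-(26), the parameter choices are ours, and the linear energy schedule is used in place of Equation (24) for brevity.

```python
import random

def sphere(x):
    """Simple test objective: f(x) = sum of squares, global optimum at 0."""
    return sum(v * v for v in x)

def ihaohho_sketch(fobj, dim, lb, ub, n_agents=20, max_iter=200, seed=1):
    """Simplified IHAOHHO control flow: AO-style exploration while |E| >= 1,
    HHO-style exploitation with a ROBL step otherwise, greedy selection."""
    rng = random.Random(seed)
    clip = lambda v: max(lb, min(ub, v))
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_agents)]
    fits = [fobj(x) for x in pop]
    b = min(range(n_agents), key=fits.__getitem__)
    x_best, f_best = pop[b][:], fits[b]
    for t in range(max_iter):
        x_mean = [sum(x[j] for x in pop) / n_agents for j in range(dim)]
        E = 2 * rng.uniform(-1, 1) * (1 - t / max_iter)  # escaping energy
        for i in range(n_agents):
            if abs(E) >= 1:  # AO-style exploration
                if rng.random() < 0.5:  # expanded exploration (Eq. (1)-like)
                    r = rng.random()
                    cand = [clip(x_best[j] * (1 - t / max_iter)
                                 + (x_mean[j] - x_best[j]) * r)
                            for j in range(dim)]
                else:  # narrowed exploration around a random agent (simplified)
                    xr = pop[rng.randrange(n_agents)]
                    cand = [clip(xr[j] + (x_best[j] - xr[j]) * rng.random())
                            for j in range(dim)]
                f_new = fobj(cand)
            else:  # HHO-style besiege (soft/hard collapsed into one stand-in)
                J = 2 * (1 - rng.random())  # random jump strength of the prey
                cand = [clip(x_best[j] - E * abs(J * x_best[j] - pop[i][j]))
                        for j in range(dim)]
                f_new = fobj(cand)
                # ROBL step (Eq. (26)): keep the opposite solution if better
                opp = [clip(lb + ub - rng.random() * v) for v in cand]
                f_opp = fobj(opp)
                if f_opp < f_new:
                    cand, f_new = opp, f_opp
            if f_new < fits[i]:  # greedy selection
                pop[i], fits[i] = cand, f_new
                if f_new < f_best:
                    x_best, f_best = cand[:], f_new
    return x_best, f_best
```

Even with these simplified stand-ins, the interplay of the two phases plus ROBL drives a sphere test function toward its optimum, which is the structural point the full algorithm builds on.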

Computational Complexity of IHAOHHO
Computational complexity is a key metric for evaluating an algorithm's time consumption during operation. The computational complexity of the IHAOHHO algorithm depends on three processes: initialization, evaluation of fitness, and updating of the hawks.

Algorithm 1 IHAOHHO (excerpt)
4:  For i = 1 to N
5:      Check if the position goes out of the search space boundary, and bring it back
6:      Calculate the fitness of X_i
7:      Update X_best
8:  End for
9:  Update x, y, QF, G_1, G_2, E_1
10: For i = 1 to N
11:     Update E using Equation (24)   % Nonlinear escaping energy parameter
12:     If |E| ≥ 1   % Exploration part of AO
13:         If rand < 0.5
14:             Update the position of Xnew_i using Equation (1)
...
        If r ≥ 0.5 and |E| ≥ 0.5
26:         Update the position of X_i using Equation (12)
27:     End if
28:     If r ≥ 0.5 and |E| < 0.5
29:         Update the position of X_i using Equation (15)
30:     End if
31:     If r < 0.5 and |E| ≥ 0.5
32:         Update the position of Xnew_i using Equation (16)
...

Results and Discussion
In this section, two main experiments were carried out to evaluate the performance of the IHAOHHO algorithm. The first is a set of benchmark function experiments, which evaluate the performance of IHAOHHO in solving 23 numerical optimization problems. The second consists of industrial engineering design problems, which evaluate the performance of IHAOHHO in solving real-world problems. All experiments were implemented in MATLAB R2016a on a PC with an Intel(R) Core(TM) i5-9500 CPU @ 3.00 GHz and 2 GB RAM, running Windows 10.

Benchmark Function Experiments
To investigate the performance of the IHAOHHO algorithm, 23 standard benchmark functions of three different types were utilized for testing [64]. The main characteristic of the first type, the unimodal benchmark functions, is that there is only one global optimum and no local optima; these functions can be used to evaluate the exploitation capability and convergence rate of an algorithm. The second type, the multimodal benchmark functions, has a global optimum and multiple local optima, and includes general and fixed-dimension multimodal test functions. This type of function was utilized to evaluate the exploration and local optima avoidance capability of the algorithm. The benchmark function details, including dimensions, ranges, and optima, are listed in Tables 1-3.
Table 1. Unimodal benchmark functions.

For verification of the results, the IHAOHHO algorithm was compared with the original AO and HHO; SMA as one of the most recent algorithms; SSA, WOA, and GWO as several classical meta-heuristic algorithms; and PSO as the most well-known swarm intelligence algorithm. For all these algorithms, we set the population size N = 30, dimension size D = 30, and maximum number of iterations T = 500, and ran them 30 times independently. The parameter settings of each algorithm are shown in Table 4. The average and standard deviation results on these test functions are exhibited in Tables 5 and 6. Figure 5 shows the convergence curves on the 23 test functions, and partial search history, trajectory, and average fitness maps are represented in Figure 6. Wilcoxon signed-rank test results are also listed in Table 6. The detailed data analysis is given in the following subsections.

Evaluation of Exploitation Capability (Functions F1-F7)
Functions F1-F7 are used to investigate the exploitation capability of the algorithm since they have only one global optimum and no local optima. It can be seen from Table 5 that IHAOHHO achieves much better results than the other meta-heuristic algorithms on all of these functions except F6. For F1 and F3, IHAOHHO finds the theoretical optimum. For all unimodal functions except F6, IHAOHHO obtains the smallest average values and standard deviations, indicating the best accuracy and stability. Hence, the exploitation capability of the proposed IHAOHHO algorithm is excellent.

Evaluation of Exploration Capability (Functions F8-F23)
Multimodal functions F8-F23 contain plentiful local optima whose number increases exponentially with the dimension size of the problem. This kind of function is very useful for evaluating the exploration ability of an algorithm. From the results shown in Table 5, IHAOHHO outperforms the other algorithms on most of the multimodal and fixed-dimension multimodal functions. For the multimodal functions F8-F13, IHAOHHO obtains almost all of the best average values and standard deviations. Among the ten fixed-dimension multimodal functions F14-F23, IHAOHHO achieves the best accuracy on eight functions and the best stability on four. These results indicate that IHAOHHO also provides robust exploration capability.

Analysis of Convergence Behavior
In light of the mathematical formulation of the IHAOHHO algorithm, search agents tend first to investigate promising regions of the search space widely and then to exploit them in detail. Search agents change drastically in the early iterations and then converge gradually as the number of iterations increases. Convergence curves of the proposed IHAOHHO and of AO, HHO, SMA, SSA, WOA, GWO, and PSO on the 23 benchmark functions are provided in Figure 5, which shows the convergence rates of the algorithms. IHAOHHO shows great superiority over the other state-of-the-art algorithms and presents three different convergence behaviors during the optimization processes. Firstly, for F1-F4, IHAOHHO gradually converges to the optimal values faster than the other algorithms, and on three of these functions its final value is the best. The second behaviour is extremely fast convergence, observed on F6, F8-F11, F14-F19, and F21-F23: for these functions, IHAOHHO finds the optimum within 20 iterations, and its approximation of the global optimum is almost always the best. The last behaviour, observed on F5, F7, F12, F13, and F20, shows the local optimum avoidance capability of IHAOHHO: the proposed algorithm jumps out of local optima after several periods of stagnation, probably due to the effect of the nonlinear escaping energy parameter. Overall, IHAOHHO efficiently achieves great solutions on all 23 standard benchmark functions.
In addition, the search history, trajectory, and average fitness figures for several functions are given in Figure 6. Search history figures show how the algorithm explores and exploits the search space while solving optimization problems, and trajectory figures reveal the order in which it does so. Average fitness figures show whether exploration and exploitation improve on the first random population and whether an accurate approximation of the global optimum is found in the end. Inspecting Figure 6, the IHAOHHO algorithm samples the most promising areas, as observed from the search histories; because of the fast convergence, the vast majority of search agents are concentrated near the global optimum. From the trajectory and average fitness maps, it can be noticed that exploration extends throughout almost the entire iterative process, with the last 50 iterations focused on exploitation, and that the average fitness decreases abruptly and then levels off accordingly. The average fitness figures also show the great improvement over the first random population and the final acquisition of an accurate approximation of the global optimum.

The Wilcoxon Test
Furthermore, the Wilcoxon signed-rank test results, used to evaluate the statistical performance differences between the proposed IHAOHHO algorithm and the other algorithms, are listed in Table 6. A p-value less than 0.05 means that there is a significant difference between the two compared algorithms. By this criterion, IHAOHHO outperforms all the other algorithms to varying degrees. The superiority is statistically significant on the unimodal functions F1-F7, which indicates that IHAOHHO benefits from high exploitation. IHAOHHO also shows better results on the multimodal functions F8-F23, from which we may conclude that it has a high capability of exploring the most promising regions of the search space. To sum up, the IHAOHHO algorithm provides better results on almost all benchmark functions than the other comparative algorithms.
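For readers reproducing Table 6, a p-value of this kind can be computed as sketched below. This is a stdlib implementation of the normal-approximation form of the test (the function name is ours; in practice one would use a statistics package routine):

```python
import math

def wilcoxon_signed_rank_p(a, b):
    """Two-sided Wilcoxon signed-rank p-value via the normal approximation
    (adequate for n = 30 runs). Zero differences are dropped; tied absolute
    differences receive average ranks. Assumes at least one nonzero difference."""
    d = [x - y for x, y in zip(a, b) if x != y]
    n = len(d)
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:  # assign average ranks within each tie group
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank for positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)  # sum of positive ranks
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    # two-sided p-value from the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

Feeding in the 30 independent run results of two algorithms on one function yields the per-function p-value reported in Table 6.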

Computation Time
The computation time is useful for assessing the efficiency of an algorithm in solving optimization problems. From the computation time results of all algorithms shown in Table 7, IHAOHHO spent more time solving these benchmark functions than the other comparative algorithms, especially the earlier classic methods SSA, WOA, GWO, and PSO. The computation time of IHAOHHO is also slightly longer than that of the basic AO and HHO, which may be ascribed to the ROBL strategy: ROBL produces one more candidate solution in each iteration, increasing the computation time. However, IHAOHHO took less time than SMA on most test functions. In view of the superior search performance of IHAOHHO and the rapid development of computing hardware, this is an acceptable price for the improved optimization performance.

Experiments on Industrial Engineering Design Problems
Most real-world optimization problems have constraints, so considering equality and inequality constraints during optimization is a necessary process. In this subsection, four well-known constrained industrial engineering design problems, namely the pressure vessel, speed reducer, tension/compression spring, and three-bar truss design problems, were solved to further verify the performance of the proposed IHAOHHO algorithm. The results of IHAOHHO were compared with those of various classical optimizers from previous studies. The parameter settings were the same as in the previous experiments.

Pressure Vessel Design Problem
The objective of this problem is to minimize the fabrication cost of a cylindrical pressure vessel that meets the pressure requirements. As shown in Figure 7, four structural parameters need to be optimized: the thickness of the shell (T_s), the thickness of the head (T_h), the inner radius (R), and the length of the cylindrical section without the head (L). The formulation, with its four optimization constraints, can be described as follows:
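The standard literature form of this benchmark, with x = [x_1, x_2, x_3, x_4] = [T_s, T_h, R, L], is:

```latex
\min f(\vec{x}) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2
                + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3

\text{s.t.} \quad
g_1 = -x_1 + 0.0193 x_3 \le 0, \qquad
g_2 = -x_2 + 0.00954 x_3 \le 0,

g_3 = -\pi x_3^2 x_4 - \tfrac{4}{3} \pi x_3^3 + 1{,}296{,}000 \le 0, \qquad
g_4 = x_4 - 240 \le 0
```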

From the results in Table 8, it is obvious that IHAOHHO obtains better optimal values than AO, HHO, SMA, WOA, GWO, MVO, GA, ES, and CPSO [65].

Speed Reducer Design Problem
This problem aims to optimize seven variables to minimize the speed reducer's total weight: the face width (x_1), module of teeth (x_2), a discrete design variable representing the number of teeth in the pinion (x_3), length of the first shaft between bearings (x_4), length of the second shaft between bearings (x_5), diameter of the first shaft (x_6), and diameter of the second shaft (x_7). Four constraints, covering stress, bending stress of the gear teeth, stresses in the shafts, and transverse deflections of the shafts, as shown in Figure 8, must be satisfied; the first of them is g_1(x) = 27/(x_1 x_2^2 x_3) − 1 ≤ 0. Compared with AO, PSO, AOA, MFO [66], GA, SCA, HS [67], FA [68], and MDA [69], IHAOHHO clearly achieves better results on the speed reducer design problem, as shown in Table 9.

Tension/Compression Spring Design Problem
In this case, the intention is to minimize the weight of the tension/compression spring shown in Figure 9. Constraints on surge frequency, shear stress, and deflection must be satisfied during the optimum design. Three parameters need to be optimized: the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). The mathematical form of this problem can be written as follows:
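The standard literature form of the spring problem can be evaluated as sketched below; the function names and the static-penalty approach are ours, while the objective and constraint constants follow the usual benchmark formulation:

```python
def spring_weight(x):
    """Tension/compression spring objective: x = (d, D, N)."""
    d, D, N = x
    return (N + 2) * D * d ** 2

def spring_constraints(x):
    """Standard constraint set; g_i(x) <= 0 means the design is feasible."""
    d, D, N = x
    g1 = 1 - (D ** 3 * N) / (71785 * d ** 4)                      # deflection
    g2 = ((4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))  # shear stress
          + 1 / (5108 * d ** 2) - 1)
    g3 = 1 - 140.45 * d / (D ** 2 * N)                            # surge frequency
    g4 = (D + d) / 1.5 - 1                                        # outer diameter
    return [g1, g2, g3, g4]

def penalized(x, big=1e6):
    """Static-penalty fitness so an unconstrained optimizer can be applied."""
    return spring_weight(x) + big * sum(max(0.0, g) for g in spring_constraints(x))
```

An optimizer such as IHAOHHO would then minimize `penalized` over the usual variable ranges; for a feasible design the penalty term vanishes and the fitness equals the spring weight.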

Three-Bar Truss Design Problem
The three-bar truss design problem is a classical optimization application in the civil engineering field. The main intention in this case is to minimize the weight of a truss with three bars by considering two structural parameters, as illustrated in Figure 10. Deflection, stress, and buckling are the three main constraints. The mathematical formulation of this problem is given below.
Figure 10. Three-bar truss design problem.

Consider x = [x_1, x_2]:
Minimize f(x) = (2√2 x_1 + x_2) × l,
Subject to
g_1(x) = ((√2 x_1 + x_2) / (√2 x_1² + 2 x_1 x_2)) P − σ ≤ 0,
g_2(x) = (x_2 / (√2 x_1² + 2 x_1 x_2)) P − σ ≤ 0,
g_3(x) = (1 / (√2 x_2 + x_1)) P − σ ≤ 0,
Variable range: 0 ≤ x_1, x_2 ≤ 1, where l = 100 cm, P = 2 kN/cm², and σ = 2 kN/cm².
The results of IHAOHHO for solving the three-bar truss design problem are listed in Table 11 and compared with AO, HHO, SSA, AOA, MVO, MFO, and GOA [70]. It can be observed that IHAOHHO outperforms the other optimization algorithms published in the literature.
In summary, this section demonstrates the superiority of the proposed IHAOHHO algorithm across case studies with different characteristics. IHAOHHO outperforms the original AO and HHO and other well-known algorithms with very competitive results, which derive from its robust exploration and exploitation capabilities. Its excellent performance in solving industrial engineering design problems indicates that IHAOHHO can be widely applied to real-world optimization problems.

Conclusions
This study proposed an improved hybrid Aquila Optimizer and Harris Hawks Optimization algorithm that combines the exploration part of AO with the exploitation part of HHO, together with a nonlinear escaping energy parameter and a random opposition-based learning (ROBL) strategy. The proposed method integrates these search mechanisms to tackle the weaknesses of the original algorithms. IHAOHHO was tested on 23 mathematical benchmark functions to analyze its exploration, exploitation, and local optima avoidance capabilities as well as its convergence behavior, and it proved competitive with other state-of-the-art meta-heuristic algorithms. To further verify its superiority, four industrial engineering design problems were solved; there, too, the results are competitive with other meta-heuristic algorithms.
As future perspectives, binary and multi-objective versions of IHAOHHO will be considered. Applying the algorithm in other fields is also valuable future work, including text clustering, scheduling problems, appliance management, parameter estimation, multi-objective engineering problems, feature selection, text classification, image segmentation, network applications, sentiment analysis, etc.