An Enhanced Northern Goshawk Optimization Algorithm and Its Application in Practical Optimization Problems

Abstract: As an effective tool for solving complex optimization problems, intelligent optimization algorithms have attracted increasing attention for their easy implementation and wide applicability. This paper proposes an enhanced northern goshawk optimization algorithm to further improve the ability to solve challenging tasks. Firstly, by applying the polynomial interpolation strategy to the whole population, the quality of the solutions is enhanced, keeping a fast convergence toward better individuals. Then, to avoid becoming trapped in the many local optimums, especially late in the search, different kinds of opposite learning methods, including opposite learning, quasi-opposite learning and quasi-reflected learning, are used to help the algorithm search the space more fully and maintain the diversity of the population; this combination is denoted the multi-strategy opposite learning method in this paper. Following the construction of the enhanced algorithm, its performance is analyzed on the CEC2017 test suite and five practical optimization problems. Results show that the enhanced algorithm ranks first on 23 of the 29 test functions, accounting for 79.31%, and keeps a faster convergence speed and better stability on most functions, compared with the original northern goshawk optimization algorithm and other popular algorithms. For practical problems, the enhanced algorithm remains effective. When the complexity of the TSP is increased, the performance of the improved algorithm is much better than that of the others on all measured indexes. Thus, the enhanced algorithm keeps the balance between exploitation and exploration and obtains better solutions faster for problems of high complexity.


Introduction
Optimization theory is an important branch of mathematics, which studies how to find the best solution among many candidate solutions [1]. It has been applied to solve challenging optimization problems in agriculture, industry, national defense, transportation and other fields, especially for optimization models with strong constraints, multivariable systems or multiple objectives [2]. Practice shows that, under the same conditions, optimization technology can improve the efficiency of a system, allocate resources rationally and reduce energy consumption [3]. This effect becomes more significant as the complexity of the optimization problem increases.
According to the characteristics of the approaches to solving optimization problems, the massive body of optimization techniques can be classified into two categories: deterministic optimization techniques and intelligent optimization algorithms [4]. The former uses the analytic properties of the problem to generate a definite finite or infinite sequence of points that converges to the global optimal solution [5], including the gradient descent method [6], the Newton method [7], the conjugate gradient method [8], and so on. The gradient descent method can search for the global optimum and is easy to implement. The Newton method approximates the solution of equations in the real and complex domains and is characterized by a fast convergence speed. The conjugate gradient method lies between the gradient descent method and the Newton method: it overcomes the slow convergence of gradient descent and avoids the calculation of the Hessian matrix required by the Newton method, because it only uses first-derivative information [9].
These approaches are computationally efficient because they make full use of the analytic properties of the target problem. However, the deterministic optimization techniques may face challenges in the solution accuracy for problems with a high complexity, strong nonlinearity and many variables [10]. Thus, intelligent optimization algorithms (or meta-heuristic algorithms) were proposed.
Intelligent optimization algorithms are random search methods. Compared with traditional optimization methods, this kind of tool does not have any special requirements (differentiable or convex optimization) on the objective function, and is not limited to specific problems [11]. Therefore, its application scope is more extensive, and it has become an effective approach to solving the optimization problem.
There are several ways to classify intelligent algorithms [12]. Here, they are summarized into three types: evolutionary-based algorithms, physics- or mathematics-based algorithms and swarm intelligence algorithms [13]. Figure 1 shows the classification of intelligent optimization algorithms and some representative algorithms of each class. The evolutionary approach takes natural law as its core idea. The most classical algorithm of this kind is the genetic algorithm (GA), which simulates the evolutionary process of Darwinian genetic selection and natural elimination. It is an effective method to search for the optimal solution [14]. However, the GA involves encoding and decoding processes, which make it time-consuming. The differential evolution algorithm (DE) is also popular; it consists of five stages: population initialization, progeny generation, progeny mutation, progeny selection and fitness evaluation. Because the encoding and decoding processes are abandoned, the search efficiency is improved [15]. The imperialist competitive algorithm (ICA) is an evolutionary global optimization algorithm inspired by the competition, occupation and annexation of colonies among imperialist countries during the colonial stage of human political and social history [16].
Other algorithms of this kind are the memetic algorithm (MA) [17], the bio-geography optimization algorithm (BBO) [18] and so on.
Physics- or mathematics-based algorithms are inspired by the physical laws or mathematical theorems found in nature. For example, the simulated annealing algorithm (SA) was proposed based on the similarity between the annealing process of solid matter in physics and the general combinatorial optimization problem. Starting from a high initial temperature, the global optimal solution of the objective function can be found in the solution space as the temperature parameter constantly decreases; meanwhile, the algorithm can jump out of local optimums thanks to its probabilistic jump property [19]. According to the law of universal gravitation and Newton's second law, the gravitational search algorithm (GSA) was constructed. This algorithm searches for the optimal solution by continuously moving the particles of the population through the search space under the universal gravitation between them; when a particle moves to the optimal position, the optimal solution is found [20]. The sine cosine algorithm (SCA) was proposed based on the mathematical models of sine and cosine, to search the space region and find the optimal solution [21]. Other such algorithms are the billiards-inspired optimization algorithm (BOA) [22], the gradient-based optimizer (GBO) [23] and so on.
The swarm intelligence algorithms were proposed based on the habits of social animals, such as birds, ants, wolves or bees. The particle swarm optimization algorithm (PSO) is one of the most popular algorithms of this kind, mimicking the behavior of birds: each individual in the group changes its search pattern by learning from its own experience and the experience of other members [24]. Due to its simple construction and special learning method, the PSO algorithm performs well in convergence speed and accuracy, and it has been applied to solve practical optimization problems. Moreover, many novel algorithms of this kind have emerged in recent years by simulating other animals. For example, the whale optimization algorithm (WOA) was proposed in 2016 by mimicking the hunting behavior of humpback whales [25]. The grey wolf optimizer (GWO) mimics the hierarchical system of the wolf population, which is unique in that a small number of elite grey wolves lead the pack toward their prey [26]. In 2021, the Aquila optimizer (AO) was proposed; it is constructed in four stages based on the hunting process of the Aquila in the wild, during which the Aquila flies in different ways, such as a short glide attack, swooping or high soaring, to make sure its prey does not escape [27]. In 2022, inspired by the sand cat swarm's ability to detect low frequencies and dig for prey, the sand cat swarm optimization algorithm (SCSO) was proposed [28].
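As an illustration of the learning mechanism described above, the textbook PSO update for a single particle can be sketched as follows. This is a generic sketch, not the exact variant used in [24]; the inertia weight `w` and acceleration coefficients `c1`, `c2` are assumed illustrative values.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update for a single PSO particle.

    Each dimension blends inertia (w * v), the particle's own best
    experience (pbest) and the swarm's best experience (gbest), each
    scaled by a fresh random factor in [0, 1).
    """
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)
             + c2 * random.random() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

With `c1 = c2 = 0` and `w = 1` the particle simply drifts along its current velocity, which makes the role of the two learning terms easy to see.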
Though the above algorithms have been used to solve test functions or engineering optimization problems, there are still possibilities to further improve the capacity in the solution accuracy or convergence speed [29]. Thus, some improved strategies are introduced into the original algorithm. Here, these strategies are divided into two categories.
The first category comprises strategies to enhance the exploitation ability of the algorithm. For example, in [30], a linearly dynamic random heuristic crossover strategy was added to the BBO algorithm to obtain a stronger local search ability. Noticing the deficiency of the exploitation ability for the surface conversion model, Hu et al. introduced a golden sine learning strategy, based on the relationship between the sine function and the unit circle, to help individuals approach better solutions, which is an effective way to improve the quality of the whole population [31]. Similarly, to enhance the exploitation phase of the colliding bodies optimization algorithm, quadratic interpolation of the historically best solution was employed, showing outstanding performance in solving 24 mathematical optimization problems with 30 design variables [32]. Aiming at the original algorithm's low exploitation capability, Yousri et al. [33] adopted fractional calculus, using the Caputo fractional differ-sum operator, to enhance the manta rays' movement in the exploitation phase, utilizing the history dependency of fractional calculus to share past knowledge over the optimization process and boost the exploitation of optimal solutions.
Moreover, some strategies are used to enhance the exploration capacity. In [34], the classical PSO algorithm was improved based on a Lévy flight strategy, which is a random walk method. Following the application of this strategy to the original population, the individuals move by leaps and bounds in the search space, which is an effective approach to avoiding local optimums. Different mutation methods are also effective in improving the exploration capacity, such as the wavelet mutation and the Cauchy mutation. In Jiang et al. [35], each individual has the possibility to experience the wavelet mutation, governed by a mutation rate. Due to the volatility of the wavelet function, individuals have randomness in updating their positions, which enhances the population diversity. Miao et al. [36] introduced the mutation ability of the Cauchy distribution to help the original algorithm explore a wider solution space and avoid falling into a local optimum prematurely. Compared with state-of-the-art algorithms, the novel version of the improved algorithm performs best on the feature selection of sleep staging. The opposite learning (OL) strategy is also a common approach to assist the algorithm in exploring the space quickly. In [37], the dynamic OL strategy was embedded into the population initialization stage and the generation jumping stage of the original dragonfly algorithm to solve the flexible job shop scheduling problem. The results showed that the improved algorithm can efficiently achieve a better solution on 15 examples, compared to the comparison algorithms.
Based on the above knowledge, this paper proposes a novel version of the northern goshawk optimization algorithm (ENGO), by enhancing the abilities of exploitation and exploration with the corresponding strategies. The northern goshawk optimization (NGO) algorithm was proposed by Dehghani et al. by simulating the behavior of the northern goshawk during the hunting process, which includes the two stages of prey identification and the tail-and-chase process [38]. Tested on 68 different objective functions and four engineering design problems, it was found to keep a balance between the exploration capacity and the exploitation capacity, and to have an outstanding performance on optimization problems compared with other similar algorithms. To improve its capacity in solving problems with higher dimensions and stronger constraint conditions, this paper proposes an enhanced NGO algorithm, noted as ENGO, and verifies it on the CEC2017 test suite and engineering optimization problems. The main contributions of this paper are as follows. Firstly, an enhanced NGO algorithm is proposed by introducing the polynomial interpolation strategy and the multi-strategy opposite learning method:

•	To improve the exploitation capacity, this paper applies the quadratic interpolation function to each individual to find a better solution.
•	Different opposite learning methods are employed to help the algorithm search the space more fully, including opposite learning, quasi-opposite learning and quasi-reflected learning.
Secondly, the ENGO algorithm is used to solve the test functions of the CEC2017 test suite, four engineering optimization problems and the traveling salesman problem (TSP).
The structure of this paper is as follows: Section 2 introduces the original NGO algorithm and how to apply the two strategies to it to obtain the ENGO algorithm. Section 3 shows the results of the ENGO algorithm and other comparison algorithms in solving the CEC2017 test suite. Section 4 analyzes the performance of the ENGO algorithm in solving engineering problems and TSP. Section 5 is the conclusion of this paper.

The Original Northern Goshawk Optimization
The search mechanism of the NGO algorithm stems from the northern goshawk's efficient search for and capture of prey. Thus, the algorithm consists of three stages: population initialization, prey identification and prey capture. Firstly, the initial population of northern goshawks can be represented by the matrix X, shown as follows.
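The initialization stage can be sketched as follows. This is a generic sketch: each of the N individuals is placed uniformly at random inside the box [lb, ub] in every one of the M dimensions; the exact notation of the population matrix X follows Equation (2) of the NGO paper [38], which is not reproduced here.

```python
import random

def init_population(n, m, lb, ub):
    """Random initialization of the northern goshawk population X (N x M).

    lb and ub are per-dimension lower and upper bounds; each entry of the
    population matrix is drawn uniformly from [lb[j], ub[j]].
    """
    return [[lb[j] + random.random() * (ub[j] - lb[j]) for j in range(m)]
            for _ in range(n)]
```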

The Improved Northern Goshawk Optimization
Although the NGO algorithm has the advantages of easy implementation, a simple structure and high precision in solving test functions and some engineering optimization problems, there are still opportunities to further enhance its exploration and exploitation capacities. For example, the model of chasing prey in the original NGO algorithm is too simple, which leads to poor-quality individuals and a slow convergence speed to the optimal solution. Thus, this section proposes an enhanced northern goshawk optimization algorithm, noted as ENGO, by introducing the polynomial interpolation strategy and the multi-strategy opposite learning method.


Polynomial Interpolation Strategy
Polynomial interpolation is a kind of search method, which seeks the minimal value of an interpolation polynomial ϕ(t) constructed from some discrete data [39]. Firstly, the discrete points are selected to construct the interpolation polynomial ϕ(t). If the constructed polynomial is quadratic, the method is called quadratic interpolation; if it is cubic, it is the cubic interpolation method. Then, the minimal value of the function ϕ(t) can be found by solving ϕ′(t) = 0.
Here, the northern goshawk population is regarded as the discrete data in the feasible space. Firstly, three individuals Xi, Xi+1 and Xi+2 of the northern goshawk population are selected to construct the quadratic interpolation function ϕ(X), as shown in Equation (8).
Substituting the discrete individuals into Equation (8) yields Equation (9), from which the three coefficients a0, a1 and a2 can be obtained by Equation (10). Then, setting ϕ′(X) = 0 gives the stationary point X*, as shown in Equation (11).
Finally, by comparing the obtained X* with the original Xi, the individual is updated, as shown in Equation (12).
Following the performance of this operation on the whole population, the quality of the northern goshawk population will be improved, which helps the population approach the optimal solution more closely.
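A minimal numerical sketch of this strategy follows. The closed-form vertex of the quadratic through three points is the solution of ϕ′(t) = 0; since Equations (8)–(12) are not reproduced in this excerpt, the dimension-wise application with scalar fitness values is an assumption of this sketch, not the paper's exact scheme.

```python
def quadratic_vertex(x0, f0, x1, f1, x2, f2):
    """Abscissa of the vertex of the parabola through (x0,f0), (x1,f1), (x2,f2).

    This is the closed-form stationary point of the quadratic interpolant,
    i.e. the solution of phi'(t) = 0.
    """
    num = (x1 ** 2 - x2 ** 2) * f0 + (x2 ** 2 - x0 ** 2) * f1 + (x0 ** 2 - x1 ** 2) * f2
    den = 2.0 * ((x1 - x2) * f0 + (x2 - x0) * f1 + (x0 - x1) * f2)
    if den == 0:  # degenerate case: the three points are collinear
        return x1
    return num / den

def interpolation_update(xi, xj, xk, fit, objective):
    """Greedy update in the spirit of Equation (12): keep the interpolated
    point X* only if it improves on the current individual Xi.
    fit = [f(Xi), f(Xj), f(Xk)] are the scalar fitness values."""
    star = [quadratic_vertex(a, fit[0], b, fit[1], c, fit[2])
            for a, b, c in zip(xi, xj, xk)]
    return star if objective(star) < objective(xi) else xi
```

For a quadratic objective the vertex lands exactly on the minimizer, which is why this strategy accelerates convergence near good solutions.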

Multi-Strategy Opposite Learning Method
Moreover, easily falling into local optimal solutions is also a dilemma for optimization algorithms, especially on optimization problems with a large number of local optimums. Thus, different kinds of opposite learning methods are introduced to enhance the NGO algorithm's capacity to escape local optimums. To make the learning modes of the individuals in the population more diverse, this paper adopts three learning mechanisms with their own characteristics: opposite learning [40], quasi-opposite learning [41] and quasi-reflected learning [42].
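The three mechanisms can be sketched per dimension as follows. These are the standard definitions from the cited literature; the paper's Equations (13)–(15) are assumed to correspond to them, since they are not reproduced in this excerpt.

```python
import random

def opposite(x, lb, ub):
    """Opposite learning [40]: mirror x about the centre of [lb, ub]."""
    return lb + ub - x

def quasi_opposite(x, lb, ub):
    """Quasi-opposite learning [41]: a random point between the interval
    centre and the opposite point (standard definition)."""
    c, xo = (lb + ub) / 2.0, lb + ub - x
    lo, hi = min(c, xo), max(c, xo)
    return lo + random.random() * (hi - lo)

def quasi_reflected(x, lb, ub):
    """Quasi-reflected learning [42]: a random point between x and the
    interval centre."""
    c = (lb + ub) / 2.0
    lo, hi = min(c, x), max(c, x)
    return lo + random.random() * (hi - lo)
```

The three variants therefore sample progressively closer to the current position: the opposite point is deterministic, while the quasi variants inject randomness on either side of the interval centre.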
For the i-th individual of the northern goshawk population, the newly generated individual, based on the different opposite learning mechanisms, can be calculated by Equations (13)-(15).
Figure 3 illustrates the characteristics of the different opposite learning methods in the two-dimensional space. The red point is the i-th individual in the northern goshawk population. Following the application of the different learning methods to Xi in different dimensions, new individuals will be generated in the constructed areas A and B.
To widen the search range as much as possible, the above methods are applied to the whole northern goshawk population. Suppose the size of the population X is N; then the population is divided into three groups randomly. Following the application of a different method to each group, a new population based on the opposite learning strategies is generated, noted as Xoppo. The two populations X and Xoppo are mixed and sorted according to the fitness values, and the N individuals with better fitness values are retained as the final population. Figure 4 shows the whole process of the learning methods for the population.
Figure 5 shows the flowchart of the ENGO algorithm. The specific steps of the ENGO algorithm are as follows:
Step 1. Initialize the parameters, such as the size of the northern goshawk population N, the dimension of the problem M, and the maximum iteration time T.
Step 2. Create the initial northern goshawk population by Equation (2).
Step 3. While t < T, calculate the fitness value of each individual in the population. Following the selection of the according prey of the i-th individual Xi by Equation (3), the updated solution can be obtained by Equations (4) and (5).
Step 4. Divide the population into three equal groups.
Step 5. Based on Equations (13)-(15), apply a different opposite learning method to each group, and obtain new solutions.
Step 6. Mix the new solutions with the population, and select the N better solutions as the new population.
Step 7. By simulating the behavior of chasing the prey, calculate the new solution by Equation (6). Then, update the solution by Equation (7).
Step 8. Apply the polynomial interpolation strategy to each individual and update the individual by Equations (11) and (12).
Step 9. If t < T, return to Step 3. Otherwise, output the best individual and its fitness value.


Test Functions and Parameters Setting
To test the performance of the improved algorithm, the CEC2017 test suite is used, which consists of 29 test functions (F2 is excluded), including unimodal functions, multimodal functions, hybrid functions and composition functions [43]. These functions are constructed by rotating, combining or scaling benchmark functions, and have been used to test the ability of optimization algorithms to search a complex space. Then, the original NGO algorithm and six other well-known algorithms are selected for comparison: the particle swarm optimization algorithm (PSO), the Archimedes optimization algorithm (AOA), the African vultures optimization algorithm (AVOA), the dandelion optimizer (DO), the sand cat swarm optimization algorithm (SCSO) and the sparrow search algorithm (SSA). Table 1 shows the proposal time and parameter settings of these comparison algorithms. In the comparison experiments, the population size of all algorithms is set to 100, the maximum iteration number is 500, and the dimension of the test functions is set to 30. Meanwhile, to compare the different algorithms fairly, each algorithm is run 20 times independently to avoid random distractions. Once completed, the average value (Ave) and the standard deviation (Std) are recorded to show the accuracy and stability of each algorithm. All of the experiments were implemented on Windows 10 (64-bit) with an Intel(R) Core(TM) i7-1165G7 CPU @ 2.80 GHz (Santa Clara, CA, USA), 8 GB of RAM and MATLAB 2016a (Natick, MA, USA).
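The evaluation protocol above can be sketched as a small harness. The callable `algorithm` is a placeholder assumption standing in for one run of any optimizer that returns its best fitness value.

```python
import statistics

def evaluate(algorithm, runs=20):
    """Run an optimizer `runs` times independently and report the average
    value (Ave) and standard deviation (Std) of the best fitness found,
    matching the 20-run protocol described in the text."""
    best = [algorithm() for _ in range(runs)]
    return statistics.mean(best), statistics.stdev(best)
```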

Results Analysis and Discussion
The results of the different algorithms in solving the functions of the CEC2017 test suite are displayed in Table 2. The data in bold represent the best results among all algorithms. The results show that the enhanced NGO algorithm ranks first on 23 test functions, accounting for 79.31% of all functions, while the original NGO algorithm ranks first on only four functions, accounting for 13.79%. Obviously, after introducing the polynomial interpolation strategy and the multi-strategy opposite learning method into the original NGO algorithm, the solution accuracy is greatly improved. Compared with the other algorithms, the average rank of the ENGO algorithm is 1.3448, which is lower than the others. According to the final rank, the performance of these algorithms is ordered as ENGO > NGO > DO > SSA > SCSO > PSO > AOA. The average convergence curves of the different algorithms over 20 runs are shown in Figure 6. The accuracy and convergence speed of an algorithm can be measured by the descending rate and the final position of its curve. Compared with the original NGO algorithm, the ENGO algorithm converges to the optimum faster, especially in the early stages of the search. This is because the polynomial interpolation strategy improves the quality of the northern goshawk population by selecting better solutions around the three randomly selected individuals. Meanwhile, among all algorithms, the ENGO algorithm obtains solutions with better accuracy when the search ends. Figure 7 displays the boxplots of the different algorithms, which are used to measure the stability and robustness of each algorithm.
Comparing the lengths of the boxes, that of the ENGO is smaller than those of the original NGO algorithm and the others on F6, F7, F11, F15, F17, F20, F22, F23, F24, F25, F27, F28 and F30, which illustrates that the distribution of the results of the ENGO algorithm over 20 runs is more centralized on these functions. For the remaining functions, the median of the data also needs to be considered. On functions F1, F5, F8, F9, F12, F14, F16, F18, F19, F21 and F26, the median of the ENGO is smaller than the others. Thus, considering the length and median of the boxes, the improved algorithm has an advantage in stability and robustness, except on F3, F4, F10, F13 and F29. This is because the introduction of the two improvement strategies keeps a better balance between the exploitation capacity and the exploration capacity.

Figure 8 includes the radar maps of the eight algorithms in solving the CEC2017 test suite. They are constructed from the ranks of the different algorithms on each test function, to reflect the comprehensive performance of each algorithm more intuitively. The radar graph of the ENGO algorithm has the minimum area, which also shows its outstanding performance on most functions. The area of the NGO algorithm is obviously bigger than that of the ENGO algorithm. The AOA algorithm has the biggest area, indicating that there are still opportunities for improvement. Table 3 lists the p-values of the Wilcoxon rank sum test between the ENGO and the other algorithms. Usually, when the p-value is smaller than 0.05, there is a significant difference between the two groups of test data.
Thus, combined with the data in Table 2, '+' indicates that the ENGO is significantly better than the comparison algorithm, while '−' indicates that it is significantly worse. The '=' indicates that there is no significant difference between the two algorithms. From Table 3, the improved algorithm outperforms the original algorithm on more than half of the test functions with statistical support.
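Under the stated convention (two-sided test, significance level 0.05), the comparison behind Table 3 can be sketched with a self-contained normal-approximation rank sum test. This is a sketch: the exact test implementation used by the authors is not stated, and the normal approximation is adequate only for sample sizes like the 20 runs used here.

```python
import math

def ranksum_p(a, b):
    """Two-sided p-value of the Wilcoxon rank sum test via the normal
    approximation; tied values receive their average rank."""
    vals = [(v, 0) for v in a] + [(v, 1) for v in b]
    vals.sort(key=lambda t: t[0])
    ranks = [0.0] * len(vals)
    i = 0
    while i < len(vals):
        j = i
        while j + 1 < len(vals) and vals[j + 1][0] == vals[i][0]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # average rank of the tied block
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    w = sum(r for r, (_, g) in zip(ranks, vals) if g == 0)  # rank sum of a
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def mark(p, mean_engo, mean_other, alpha=0.05):
    """The '+' / '-' / '=' convention of Table 3 (minimization)."""
    if p >= alpha:
        return '='
    return '+' if mean_engo < mean_other else '-'
```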

Practical Optimization Problems
Besides the test functions, practical optimization problems with high dimensions and strong constraints can also reflect the performance of an algorithm. In this section, the ENGO algorithm, the original NGO algorithm and eight other algorithms are used to solve five problems. These algorithms are the AOA, the AO [23], the Harris hawks optimization (HHO) [49], the WOA, the PSO, the GWO [50], the manta ray foraging optimization (MRFO) [51] and the multi-verse optimizer (MVO) [52]. Each algorithm is run 20 times independently; the size of the population is 30 and the maximum iteration number is 100. The results and discussion of these examples are as follows.

Gear Train Design Problem
The gear train design problem is used to obtain the most suitable numbers of teeth to minimize the cost of the gear ratio. This problem includes four variables, denoted as TA, TB, TC and TD, respectively [53]. The structure of the four gears is shown in Figure 9. Equation (16) lists its mathematical model.
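In the widely used formulation of this benchmark, the objective is the squared deviation of the gear ratio from the required ratio 1/6.931, with every tooth count an integer in [12, 60]. This sketch is assumed to match Equation (16), which is not reproduced in this excerpt.

```python
def gear_train_cost(ta, tb, tc, td):
    """Gear train design objective: squared deviation of the gear ratio
    (TB * TD) / (TA * TC) from the required ratio 1/6.931. Tooth counts
    are integers in [12, 60] in the standard benchmark."""
    return ((1.0 / 6.931) - (tb * td) / (ta * tc)) ** 2
```

Because the variables are integers, the search space is finite but highly multimodal, which is why it is used to compare metaheuristics.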

The gear train design problem is used to obtain the most suitable number of teeth to minimize the cost of the gear ratio. This problem includes four variables, denoted as TA, TB, TC and TD, respectively [53]. The structure of the four gears is shown in Figure 9. Equation (16) lists its mathematical model.   Table 4 provides the final results of the ENGO algorithm and other algorithms in solving the gear train design problem. Among these algorithms, the ENGO algorithm has the best average value after it was run 20 times. That is, the ENGO algorithm is effective in solving this problem. Meanwhile, the smaller standard deviation illustrates that the improved algorithm is more stable, while the original NGO algorithm is largely fluctuating. Table 5 includes the best variables of the gear train obtained by each algorithm. Table 4. Results of the different algorithms in solving the gear train design problem.


Pressure Vessel Design Problem
The pressure vessel design problem aims to minimize the total cost of a cylindrical vessel by adjusting the parameter values of the material, its formation and the welding. The schematic diagram of the pressure vessel is shown in Figure 10. Four parameters need to be optimized: Ts represents the thickness of the shell, Th is the thickness of the head, and the inner radius and the length of the cylindrical section (without considering the head) are denoted as R and L, respectively. The mathematical model of this problem is given as follows [54,55].

Table 6 shows the results of solving the pressure vessel design problem. The ENGO algorithm again has the best accuracy and stability, and the original NGO algorithm ranks second. Though the GWO algorithm also performs well on the best value, it is less stable. Table 7 lists the best parameters of the pressure vessel design problem.

Table 6. Results of the different algorithms in solving the pressure vessel design problem.
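As a hedged illustration of the model referenced above, the sketch below uses the widely cited benchmark cost function and its four inequality constraints, folded into a single fitness value with a static penalty. The penalty weight 1e6 is an arbitrary choice for this sketch, not a value from the paper.

```python
import math

def pressure_vessel_cost(Ts, Th, R, L, penalty=1e6):
    """Vessel cost plus a static penalty for any violated constraint,
    following the widely used benchmark formulation."""
    cost = (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)
    g = [
        -Ts + 0.0193 * R,     # shell thickness vs. radius
        -Th + 0.00954 * R,    # head thickness vs. radius
        -math.pi * R ** 2 * L - (4.0 / 3.0) * math.pi * R ** 3 + 1296000.0,  # volume
        L - 240.0,            # length limit
    ]
    # each g_i <= 0 when feasible; positive amounts are penalized
    return cost + penalty * sum(max(0.0, gi) for gi in g)
```

A feasible design is charged only its cost; an infeasible one (e.g. a too-thin shell) is dominated by the penalty term, steering population-based searches back toward the feasible region.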

Four-Stage Gearbox Design Problem
Figure 11 shows the schematic diagram of the four-stage gearbox design problem, which aims to reduce the weight of the gearbox. There are twenty-two parameters to be determined, which can be divided into four categories: the positions of the gears, the positions of the pinions, the thickness of the blanks, and the numbers of teeth. Moreover, eighty-six constraints need to be satisfied, covering the contact ratio, pitch, strength of the gears, assembly of the gears, kinematics, and the size of the gears. Thus, the four-stage gearbox design problem is challenging for most algorithms, which must search for the optimum in a small feasible region with many local solutions. The design vector can be written as follows [56]:

x = [Np1, . . . , Np4, Ng1, . . . , Ng4, b1, . . . , b4, xp1, xg1, . . . , xg4, yp1, yg1, . . . , yg4].

Figure 11. The construction of the four-stage gearbox design problem.

Tables 8 and 9 report the corresponding results of all algorithms in solving the four-stage gearbox design problem. In Table 8, the ENGO algorithm and the MRFO algorithm both perform well on this problem. Though the MRFO has a smaller average than the ENGO, the ENGO is more stable over repeated runs, which is also significant in engineering applications. The original NGO algorithm ranks sixth among all algorithms. Thus, by introducing the polynomial interpolation strategy and the multi-strategy opposite learning models into the original NGO algorithm, its comprehensive performance has been significantly improved.

Table 8. Results of the different algorithms in solving the four-stage gearbox design problem.

72-Bar Truss Design Problem
Figure 12 shows the construction of the 72-bar truss design problem, including its node and element numbering schemes [57]. As shown in Table 10, the structural members of the 72-bar space truss are organized symmetrically into 16 groups. The unit weight of the material is 0.1 lb/in.3 and the modulus of elasticity is 10^7 psi. The structure has the following constraints: a maximum displacement of ±0.25 in. at the uppermost joints in the x, y or z directions, a maximum allowable stress of ±25 ksi in any element, and cross-sectional areas ranging from 0.1 in.2 to 3.0 in.2 [58].
Once each algorithm is run 20 times, all results are summarized in Tables 11 and 12. As shown in Table 11, the ENGO algorithm keeps an outstanding performance on all four measure indexes, while the original NGO algorithm only ranks fourth. Thus, for engineering optimization problems of high dimension, the difference between the enhanced algorithm and the original algorithm is clearer. Finally, the optimal solutions obtained by the different algorithms are listed in Table 12 for reference.

Traveling Salesman Problem
The traveling salesman problem (TSP) is a classical combinatorial optimization problem [59]. In the TSP, a commodity salesman needs to sell his goods in several designated cities, visiting every city before returning to the starting city. The question is how to determine the route that minimizes the total travel distance. Since a feasible solution to this problem is a full permutation of all vertices, its difficulty increases dramatically with the number of cities, and it is an NP-hard problem [60]. Here, the ENGO algorithm is employed to solve the TSP. Two examples are then given to analyze the performance of the ENGO algorithm and the comparison algorithms.
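Whatever search strategy is used, every compared algorithm must evaluate the same TSP fitness: the closed-tour Euclidean length of a candidate permutation. A minimal sketch:

```python
import math

def tour_length(cities, order):
    """Total closed-tour distance for one salesman visiting `cities`
    (list of (x, y) points) in the permutation `order`."""
    total = 0.0
    for i in range(len(order)):
        x1, y1 = cities[order[i]]
        x2, y2 = cities[order[(i + 1) % len(order)]]  # wrap back to the start
        total += math.hypot(x2 - x1, y2 - y1)
    return total
```

On the four corners of a unit square, the perimeter order gives length 4, while a crossing order is strictly longer, which is exactly what a minimizer must detect.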

Two Traveling Salesmen and 80 Cities
The first example includes 80 randomly generated cities and two traveling salesmen. The results are summarized in Table 13. Over 20 independent runs, the average travel distance obtained by the ENGO algorithm is the smallest. The MRFO algorithm performs better than the ENGO algorithm on the best value, which suggests that the ENGO could be further improved by borrowing from the structure of the MRFO algorithm. Figure 13 includes the best routes of the traveling salesmen, the convergence curves and the box plots of the different algorithms. From Figure 13a-h, the routes obtained by the AOA or GWO are more convoluted, while the route obtained by the ENGO algorithm is reasonable. Furthermore, in Figure 13k, the red line of the ENGO algorithm shows the fastest convergence speed during the whole search process, which illustrates that the improvement strategies help to enhance the quality of the population so that the algorithm approaches the optimum faster.

Three Traveling Salesmen and 80 Cities
Then, to increase the difficulty of the TSP, the number of traveling salesmen is set to three. Table 14 lists the results obtained by all algorithms. Clearly, the ENGO algorithm has the best performance on all four measure indexes. Meanwhile, from Figure 14k,l, the ENGO algorithm also has the fastest convergence speed and is the most stable. Therefore, when the difficulty of the TSP is increased, the ENGO algorithm shows a stronger capacity to obtain the optimal route.
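One common way to map a single-permutation solution onto the multiple-salesman TSP is to cut the permutation at breakpoints and have every resulting route start and end at a shared depot. This splitting encoding and the depot assumption are illustrative; the paper does not detail its own mTSP encoding here.

```python
import math

def mtsp_length(depot, cities, order, breaks):
    """Total travel for several salesmen: the permutation `order` is cut at
    the indices in `breaks`; each sub-route starts and ends at `depot`.
    A sketch of one common mTSP encoding, not the paper's exact scheme."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    total = 0.0
    bounds = [0] + list(breaks) + [len(order)]
    for s in range(len(bounds) - 1):
        route = [depot] + [cities[i] for i in order[bounds[s]:bounds[s + 1]]] + [depot]
        total += sum(dist(route[k], route[k + 1]) for k in range(len(route) - 1))
    return total
```

Adding a salesman adds one breakpoint, so the search space grows with both the permutation and the cut positions, which matches the observation that the three-salesman case is harder than the two-salesman one.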


Conclusions and Future Work
This paper proposed a novel version of the NGO algorithm, called the ENGO algorithm, by introducing the polynomial interpolation strategy and the multi-strategy opposite learning method. To test the ability of the improved algorithm, the ENGO algorithm was employed to solve 29 benchmark functions of the CEC2017 test suite, four engineering examples and the TSP, and was compared with other well-known optimization algorithms.
On the one hand, the results show that the ENGO algorithm ranks first on 23 test functions, accounting for 79.31% of the 29 functions, while the original NGO algorithm ranks first on only four functions, accounting for 13.79%. Combined with the box plots, the fluctuation of the results obtained by the ENGO algorithm is significantly smaller. These gaps suggest that the improvement strategies keep a better balance between exploration and exploitation, which enhances the search ability and yields results of higher precision. The average convergence curves also reflect the effect of the different strategies: compared with the other algorithms, the curve of the ENGO decreases rapidly, because the quality of the whole population is enhanced by the polynomial interpolation strategy, which leads the individuals to determine an appropriate search direction and move toward the optimal solution quickly. At the end of the search process, the convergence curves show that the ENGO algorithm avoids search stagnation, reflecting that the multi-strategy opposite learning method helps the algorithm to expand the search range and escape local optimums.
On the other hand, the results of the four engineering examples and the TSP show that the ENGO algorithm remains competitive on optimization problems with high dimensions and strong constraints.
When facing a more complex search space, such as the 72-bar truss design problem, the ENGO algorithm keeps an outstanding performance on all measure indexes. For the two high-dimensional TSP examples, the differences between the ENGO algorithm and the original NGO algorithm become more obvious, which illustrates that the two improvement strategies maintain the efficient search ability of the algorithm as the complexity of the problem increases.
In the future, we can introduce the polynomial interpolation strategy and the multi-strategy opposite learning method into other intelligent optimization algorithms and examine whether these strategies remain effective and still deliver an outstanding performance. Moreover, the ENGO algorithm can be employed to solve other challenging problems, such as the parameter optimization of complex models, feature selection, degree reduction, shape optimization, production scheduling problems, flight optimization problems, and so on.

Data Availability Statement: All data generated or analyzed during this study are included in this published article.

Acknowledgments:
The authors are very grateful to the reviewers for their insightful suggestions and comments, which helped us to improve the presentation and content of the paper.

Conflicts of Interest:
The authors declare no conflict of interest.

Notations Explanation
NGO: Northern goshawk optimization algorithm
ENGO: Enhanced northern goshawk optimization algorithm proposed in this paper
N: The size of the northern goshawk population
M: The dimension of the optimization problem
t: The current iteration number of the algorithm
T: The maximum number of iterations
UB: The upper bound of the optimization problem
LB: The lower bound of the optimization problem
Xi: The i-th individual in the population
Xi,j: The j-th element of the i-th individual
preyi: The selected prey in the population
r: A random number in the goshawk optimization algorithm
I: A vector consisting of 1s and 2s
a0, a1, a2: The three coefficients of the quadratic polynomial function
X̄i: New solution obtained by opposite learning
X̃i: New solution obtained by quasi-opposite learning
X̂i: New solution obtained by quasi-reflected learning
Xoppo: New population including the solutions after the different opposite learning methods
TA, TB, TC, TD: Numbers of teeth in gears TA, TB, TC, TD
Ts: The thickness of the shell section
Th: The thickness of the head
R: The inner radius
L: The length of the cylindrical section
Np1, . . . , Np4: The positions of the pinions
Ng1, . . . , Ng4: The positions of the gears
b1, . . . , b4: The thickness of the blanks
xp1, xg1, . . . , xg4, yp1, yg1, . . . , yg4: The coordinates of the pinions and gears
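The three opposite learning variants named in the notation can be sketched with their common textbook definitions: the opposite point mirrors a solution across the bounds, the quasi-opposite point is drawn between the interval centre and the opposite point, and the quasi-reflected point is drawn between the solution and the centre. These formulas may differ in detail from the paper's equations, and scalar bounds are assumed per dimension for simplicity.

```python
import random

def opposite(x, lb, ub):
    """Opposite learning: mirror each coordinate across the search bounds."""
    return [lb + ub - v for v in x]

def quasi_opposite(x, lb, ub):
    """Quasi-opposite learning: a random point between the interval centre
    and the opposite point (one common definition; formulations vary)."""
    c = (lb + ub) / 2
    return [random.uniform(min(c, lb + ub - v), max(c, lb + ub - v)) for v in x]

def quasi_reflected(x, lb, ub):
    """Quasi-reflected learning: a random point between the solution itself
    and the interval centre."""
    c = (lb + ub) / 2
    return [random.uniform(min(c, v), max(c, v)) for v in x]
```

Evaluating these candidates alongside the originals and keeping the fitter ones is what preserves diversity late in the search, when the base population has largely converged.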