Article

An Enhanced Northern Goshawk Optimization Algorithm and Its Application in Practical Optimization Problems

1 School of Technology, Xi’an Siyuan University, Xi’an 710038, China
2 Division of Informationize Management, Xi’an University of Technology, Xi’an 710048, China
3 Department of Applied Mathematics, Xi’an University of Technology, Xi’an 710054, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(22), 4383; https://doi.org/10.3390/math10224383
Submission received: 8 October 2022 / Revised: 8 November 2022 / Accepted: 15 November 2022 / Published: 21 November 2022

Abstract

As effective tools for solving complex optimization problems, intelligent optimization algorithms have attracted increasing attention for being easy to implement and widely applicable. This paper proposes an enhanced northern goshawk optimization algorithm to further improve its ability to solve challenging tasks. Firstly, by applying a polynomial interpolation strategy to the whole population, the quality of the solutions is enhanced, maintaining fast convergence toward better individuals. Then, to avoid falling into the many local optima, especially late in the search, different kinds of opposite learning methods, including opposite learning, quasi-opposite learning and quasi-reflected learning, are used to help the algorithm search the space more fully and preserve population diversity; this combination is referred to as the multi-strategy opposite learning method in this paper. Following the construction of the enhanced algorithm, its performance is analyzed on the CEC2017 test suite and five practical optimization problems. The results show that the enhanced algorithm ranks first on 23 of the 29 test functions, accounting for 79.31%, and achieves faster convergence and better stability on most functions, compared with the original northern goshawk optimization algorithm and other popular algorithms. For practical problems, the enhanced algorithm remains effective. When the complexity of the TSP is increased, the performance of the improved algorithm is much better than that of the others on all measure indexes. Thus, the enhanced algorithm can keep the balance between exploitation and exploration and obtain better solutions faster for problems of high complexity.

1. Introduction

Optimization theory is an important branch of mathematics that studies how to find the best solution among many candidate solutions [1]. It has been applied to challenging optimization problems in agriculture, industry, national defense, transportation and other fields, especially to optimization models with strong constraints, multiple variables or multiple objectives [2]. Practice shows that, under the same conditions, optimization technology can improve the efficiency of a system, allocate resources rationally and reduce energy consumption [3]. This effect becomes more significant as the complexity of the optimization problem increases.
According to the characteristics of the approaches used to solve optimization problems, the many available optimization techniques can be classified into two categories: deterministic optimization techniques and intelligent optimization algorithms [4]. The former use the analytic properties of the problem to generate a deterministic finite or infinite sequence of points that converges to the global optimal solution [5], and include the gradient descent method [6], the Newton method [7], the conjugate gradient method [8], and so on. The gradient descent method searches along the negative gradient direction and is easy to implement. The Newton method approximates the solutions of equations in the real and complex domains and is characterized by a fast convergence speed. The conjugate gradient method lies between the gradient descent method and the Newton method: because it only uses first-derivative information, it overcomes the slow convergence of gradient descent and avoids the computation of the Hessian matrix required by the Newton method [9].
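As a concrete illustration of the difference in convergence behavior (our example, not from the paper), consider minimizing f(x) = (x − 3)²: Newton's method lands on the minimizer of a quadratic in one step, while gradient descent approaches it gradually.

```python
# Minimal sketch (illustrative, not from the paper): minimize f(x) = (x - 3)^2,
# with f'(x) = 2(x - 3) and f''(x) = 2.
def gradient_descent_step(x, lr=0.1):
    return x - lr * 2 * (x - 3)          # x_{k+1} = x_k - lr * f'(x_k)

def newton_step(x):
    return x - 2 * (x - 3) / 2           # x_{k+1} = x_k - f'(x_k) / f''(x_k)

print(newton_step(0.0))                  # 3.0: exact in one step for a quadratic
x = 0.0
for _ in range(50):
    x = gradient_descent_step(x)
print(round(x, 6))                       # close to 3.0 after many small steps
```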
These approaches are computationally efficient because they make full use of the analytic properties of the target problem. However, deterministic optimization techniques may struggle to reach accurate solutions for problems with high complexity, strong nonlinearity and many variables [10]. Thus, intelligent optimization algorithms (or meta-heuristic algorithms) were proposed.
Intelligent optimization algorithms are random search methods. Compared with traditional optimization methods, this kind of tool places no special requirements (such as differentiability or convexity) on the objective function and is not limited to specific problems [11]. Therefore, its application scope is more extensive, and it has become an effective approach to solving optimization problems.
There are several methods to classify intelligent algorithms [12]. Here, they are summarized into three types, including evolutionary-based algorithms, physics or mathematics-based algorithms and swarm intelligence algorithms [13]. Figure 1 shows the classification of intelligent optimization algorithms and some representative algorithms of these classifications.
The evolutionary approach takes natural law as its core idea. The most classical algorithm of this kind is the genetic algorithm (GA), which simulates the evolutionary process of Darwinian genetic selection and natural elimination and is an effective method to search for the optimal solution [14]. However, the GA involves encoding and decoding processes, which make it time-consuming. The differential evolution algorithm (DE) is also popular; it consists of five stages: population initialization, progeny generation, progeny mutation, progeny selection and fitness evaluation. Because the encoding and decoding processes are abandoned, its search efficiency is improved [15]. The imperialist competitive algorithm (ICA) is a global optimization algorithm of the evolutionary type, inspired by the competition, occupation and annexation among imperialist and colonial countries during the colonial period of human history [16]. Other algorithms of this kind are the memetic algorithm (MA) [17], the bio-geography optimization algorithm (BBO) [18] and so on.
Physics- or mathematics-based algorithms are inspired by physical laws or mathematical theorems found in nature. For example, the simulated annealing algorithm (SA) was proposed based on the similarity between the annealing process of solid matter in physics and general combinatorial optimization problems. Starting from a high initial temperature, the global optimal solution of the objective function can be found in the solution space as the temperature parameter constantly decreases; meanwhile, the probabilistic jump property allows the search to escape local optima [19]. According to the law of universal gravitation and Newton's second law, the gravitational search algorithm (GSA) was constructed. This algorithm searches for the optimal solution by continuously moving the particles of the population through the search space under the universal gravitation between them; when a particle moves to the optimal position, the optimal solution is found [20]. The sine cosine algorithm (SCA) was proposed based on the mathematical models of the sine and cosine functions to search the space and find the optimal solution [21]. Other such algorithms are the billiards-inspired optimization algorithm (BOA) [22], the gradient-based optimizer (GBO) [23] and so on.
The swarm intelligence algorithms were proposed based on the habits of social animals, such as birds, ants, wolves or bees. The particle swarm optimization algorithm (PSO), one of the most popular of these, mimics the behavior of birds: each individual in the group changes its search pattern by learning from its own experience and the experience of other members [24]. Due to its simple construction and special learning method, the PSO algorithm performs well in convergence speed and accuracy, and it has been applied to some practical optimization problems. Moreover, many novel algorithms of this kind have emerged in recent years by simulating other animals. For example, the whale optimization algorithm (WOA) was proposed in 2016 by mimicking the hunting behavior of humpback whales [25]. The grey wolf optimizer (GWO) mimics the hierarchical system of the wolf pack, whose unique feature is that a small number of elite grey wolves lead the pack toward its prey [26]. In 2021, the Aquila optimizer (AO) was proposed; it is constructed in four stages based on the hunting process of the Aquila in the wild, which flies in different ways, such as a short glide attack, swooping or high soaring, to ensure its prey cannot escape [27]. In 2022, inspired by the sand cat swarm's abilities to detect low frequencies and dig for prey, the sand cat swarm optimization algorithm (SCSO) was proposed [28].
Though the above algorithms have been used to solve test functions and engineering optimization problems, there is still room to further improve their solution accuracy and convergence speed [29]. Thus, improvement strategies have been introduced into the original algorithms. Here, these strategies are divided into two categories.
The first category comprises strategies that enhance the exploitation ability of the algorithm. For example, in [30], a linearly dynamic random heuristic crossover strategy was added to the BBO algorithm to obtain a stronger local search ability. Noticing the deficient exploitation ability for the surface conversion model, Hu et al. introduced a golden sine learning strategy, based on the relationship between the sine function and the unit circle, to help individuals approach better solutions, which is an effective way to improve the quality of the whole population [31]. Similarly, to enhance the exploitation phase of the colliding bodies optimization algorithm, quadratic interpolation of the historically best solution was employed and showed outstanding performance in solving 24 mathematical optimization problems with 30 design variables [32]. Addressing the original algorithm's low exploitation capability, Yousri et al. [33] adopted fractional calculus, using the Caputo fractional differ-sum operator, to enhance the manta rays' movement in the exploitation phase, exploiting the history dependency of fractional calculus to share past knowledge over the optimization process.
Moreover, some strategies are used to enhance the exploration capacity. In [34], the classical PSO algorithm was improved based on a Lévy flight strategy, which is a random walk method. Following the application of this strategy to the original population, the individuals move by leaps and bounds in the search space, which is an effective approach to avoiding local optima. Different mutation methods are also effective in improving the exploration capacity, such as wavelet mutation and Cauchy mutation. In Jiang et al. [35], each individual may experience the wavelet mutation according to a mutation rate; due to the volatility of the wavelet function, individuals update their positions with randomness, enhancing the population diversity. Miao et al. [36] introduced the mutation ability of the Cauchy distribution to help the original algorithm explore a wider solution space and avoid falling into a local optimum prematurely; compared with state-of-the-art algorithms, the improved algorithm performs best on the feature selection of sleep staging. The opposite learning (OL) strategy is also a common approach to helping an algorithm explore the space quickly. In [37], a dynamic OL strategy was embedded into the population initialization stage and the generation jumping stage of the original dragonfly algorithm to solve the flexible job shop scheduling problem; the results showed that the improved algorithm efficiently achieves better solutions on 15 examples, compared with the other algorithms.
Based on the above knowledge, this paper proposes a novel version of the northern goshawk optimization algorithm (ENGO), enhancing its exploitation and exploration abilities with corresponding strategies. The northern goshawk optimization (NGO) algorithm was proposed by Dehghani et al. by simulating the behavior of the northern goshawk during hunting, which includes two stages: prey identification and the tail-and-chase process [38]. Tested on 68 different objective functions and four engineering design problems, it was found to keep a balance between exploration and exploitation and to perform outstandingly on optimization problems compared with similar algorithms. To improve its capacity for solving problems with higher dimensions and stronger constraints, this paper proposes an enhanced NGO algorithm, denoted ENGO, and verifies it on the CEC2017 test suite and engineering optimization problems. The main contributions of this paper are as follows:
Firstly, an enhanced NGO algorithm is proposed by introducing the polynomial interpolation strategy and the multi-strategy opposite learning method.
  • To improve the capacity of exploitation, this paper applies the quadratic interpolation function to each individual, to find a better solution.
  • Different opposite learning methods are employed to help the algorithm search the space more fully, including opposite learning, quasi-opposite learning and quasi-reflected learning.
Secondly, the ENGO algorithm is used to solve the test functions of the CEC2017 test suite, four engineering optimization problems and the traveling salesman problem (TSP).
The structure of this paper is as follows: Section 2 introduces the original NGO algorithm and how to apply the two strategies to it to obtain the ENGO algorithm. Section 3 shows the results of the ENGO algorithm and other comparison algorithms in solving the CEC2017 test suite. Section 4 analyzes the performance of the ENGO algorithm in solving engineering problems and TSP. Section 5 is the conclusion of this paper.

2. The Enhanced Northern Goshawk Optimization Algorithm

2.1. The Original Northern Goshawk Optimization

The search mechanism of the NGO algorithm stems from the northern goshawk's efficient search for and capture of prey. Thus, the algorithm consists of three stages: population initialization, prey identification and prey capture.

2.1.1. Initialization

Firstly, the initial population of the northern goshawk can be represented by the matrix X, shown as follows.

$$X = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_N \end{bmatrix} = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,M} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,M} \\ \vdots & \vdots & \ddots & \vdots \\ x_{N,1} & x_{N,2} & \cdots & x_{N,M} \end{bmatrix}, \tag{1}$$

where $X_i$, $1 \le i \le N$, represents the i-th individual of the whole population. N and M are the size of the population and the dimension of the objective function, respectively. For a single-objective optimization problem with upper bound UB and lower bound LB, the elements of $X_i$ can be calculated by

$$x_{i,j} = LB + rand \cdot (UB - LB), \quad 1 \le i \le N;\ 1 \le j \le M. \tag{2}$$
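A minimal Python sketch of Equation (2) (our illustration; LB and UB may be scalars or length-M arrays):

```python
import numpy as np

def init_population(N, M, LB, UB, rng=None):
    """Equation (2): x_ij = LB + rand * (UB - LB), one row per individual."""
    rng = rng or np.random.default_rng()
    return LB + rng.random((N, M)) * (UB - LB)

X = init_population(N=100, M=30, LB=-100.0, UB=100.0)  # CEC2017-style bounds
```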

2.1.2. Prey Identification

In the first stage, the northern goshawk selects the prey and tries to attack it. Because the prey is selected randomly, this behavior reflects the global exploration ability of the algorithm in the feasible space. Suppose preyi, shown in Equation (3), is the target selected by the individual Xi; then Equation (4) mimics the process of the northern goshawk attacking its prey.

$$prey_i = X_p, \quad i = 1, 2, \ldots, N;\ p = 1, 2, \ldots, i-1, i+1, \ldots, N. \tag{3}$$

$$X_i^{new} = \begin{cases} X_i + r\,(prey_i - I \cdot X_i), & Fit(prey_i) < Fit(X_i), \\ X_i + r\,(X_i - prey_i), & Fit(prey_i) \ge Fit(X_i), \end{cases} \tag{4}$$

where r is a random vector with entries in [0, 1], and I is a vector whose entries are 1 or 2. r and I are used to enhance the randomness of the algorithm so that it searches the space more fully. Then, the individual Xi is updated by Equation (5).

$$X_i = \begin{cases} X_i^{new}, & Fit(X_i^{new}) < Fit(X_i), \\ X_i, & Fit(X_i^{new}) \ge Fit(X_i). \end{cases} \tag{5}$$

2.1.3. Prey Capture

When the northern goshawk locks on to its prey and starts attacking it, the prey will be spooked and start escaping. In this stage, the northern goshawk needs to keep chasing its prey. Due to its high pursuit speed, the northern goshawk can chase and eventually capture prey in almost any situation. Regarding a circle with radius r as the range of the chasing behavior, this stage can be simulated by Equation (6).

$$X_i^{new} = X_i + R\,(2r - 1)\,X_i, \tag{6}$$

where $R = 0.02\,(1 - t/T)$, T is the maximum number of iterations and t is the current iteration. Then, the individual Xi is updated by Equation (7).

$$X_i = \begin{cases} X_i^{new}, & Fit(X_i^{new}) < Fit(X_i), \\ X_i, & Fit(X_i^{new}) \ge Fit(X_i). \end{cases} \tag{7}$$
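The two phases above can be sketched as follows (our Python illustration; clipping out-of-bound solutions back into [LB, UB] is our assumption, since the paper does not state how bound violations are handled):

```python
import numpy as np

def prey_identification(X, F, fit, LB, UB, rng):
    """Exploration phase, Equations (3)-(5)."""
    N, M = X.shape
    for i in range(N):
        p = rng.choice([k for k in range(N) if k != i])   # Eq. (3): random prey
        r = rng.random(M)
        I = rng.integers(1, 3, M)                         # entries in {1, 2}
        if F[p] < F[i]:
            X_new = X[i] + r * (X[p] - I * X[i])          # Eq. (4), prey is better
        else:
            X_new = X[i] + r * (X[i] - X[p])              # Eq. (4), prey is worse
        X_new = np.clip(X_new, LB, UB)                    # bound handling (assumption)
        f_new = fit(X_new)
        if f_new < F[i]:                                  # Eq. (5): greedy acceptance
            X[i], F[i] = X_new, f_new
    return X, F

def prey_capture(X, F, fit, t, T, LB, UB, rng):
    """Exploitation phase, Equations (6)-(7)."""
    N, M = X.shape
    R = 0.02 * (1 - t / T)                                # shrinking chase radius
    for i in range(N):
        X_new = np.clip(X[i] + R * (2 * rng.random(M) - 1) * X[i], LB, UB)  # Eq. (6)
        f_new = fit(X_new)
        if f_new < F[i]:                                  # Eq. (7): greedy acceptance
            X[i], F[i] = X_new, f_new
    return X, F
```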
Figure 2 provides the flow chart of the original NGO algorithm.

2.2. The Improved Northern Goshawk Optimization

Although the NGO algorithm has the advantages of easy implementation, a simple structure and high precision in solving test functions and some engineering optimization problems, there is still room to further enhance its exploration and exploitation capacities. For example, the model of chasing prey in the original NGO algorithm is too simple, which leads to poor-quality individuals and a slow convergence speed toward the optimal solution. Thus, this section proposes an enhanced northern goshawk optimization algorithm, denoted ENGO, by introducing the polynomial interpolation strategy and the multi-strategy opposite learning method.

2.2.1. Polynomial Interpolation Strategy

Polynomial interpolation is a kind of search method that seeks the minimum of an interpolation polynomial φ(t) constructed from discrete data [39]. Firstly, discrete points are selected to construct the interpolation polynomial φ(t). If the constructed polynomial is quadratic, the method is called quadratic interpolation; if it is cubic, it is the cubic interpolation method. Then, the minimum of the function φ(t) can be found by setting φ′(t) = 0.
Here, the northern goshawk population is regarded as the discrete data in the feasible space. Firstly, three individuals Xi, Xi+1 and Xi+2 of the northern goshawk population are selected to construct the quadratic interpolation function φ(X), shown in Equation (8).

$$\varphi(X) = a_0 + a_1 X + a_2 X^2. \tag{8}$$

Substituting the discrete individuals into Equation (8), we obtain

$$\begin{cases} \varphi(X_i) = a_0 + a_1 X_i + a_2 X_i^2, \\ \varphi(X_{i+1}) = a_0 + a_1 X_{i+1} + a_2 X_{i+1}^2, \\ \varphi(X_{i+2}) = a_0 + a_1 X_{i+2} + a_2 X_{i+2}^2. \end{cases} \tag{9}$$

According to Equation (9), the three coefficients a0, a1 and a2 can be obtained by Equation (10).

$$\begin{cases} a_0 = -\dfrac{X_{i+1}X_{i+2}(X_{i+1}-X_{i+2})\,\varphi(X_i) + X_{i+2}X_i(X_{i+2}-X_i)\,\varphi(X_{i+1}) + X_iX_{i+1}(X_i-X_{i+1})\,\varphi(X_{i+2})}{(X_i-X_{i+1})(X_{i+1}-X_{i+2})(X_{i+2}-X_i)}, \\[2ex] a_1 = \dfrac{(X_{i+1}^2-X_{i+2}^2)\,\varphi(X_i) + (X_{i+2}^2-X_i^2)\,\varphi(X_{i+1}) + (X_i^2-X_{i+1}^2)\,\varphi(X_{i+2})}{(X_i-X_{i+1})(X_{i+1}-X_{i+2})(X_{i+2}-X_i)}, \\[2ex] a_2 = -\dfrac{(X_{i+1}-X_{i+2})\,\varphi(X_i) + (X_{i+2}-X_i)\,\varphi(X_{i+1}) + (X_i-X_{i+1})\,\varphi(X_{i+2})}{(X_i-X_{i+1})(X_{i+1}-X_{i+2})(X_{i+2}-X_i)}. \end{cases} \tag{10}$$

To obtain the minimum of the quadratic curve φ(X), let φ′(X) = a₁ + 2a₂X = 0. The quadratic curve φ(X) attains its minimum at $X^* = -\frac{a_1}{2a_2}$. Combined with Equation (10), X* can be written as follows.

$$X^* = \frac{1}{2} \cdot \frac{(X_{i+1}^2 - X_{i+2}^2)\,Fit(X_i) + (X_{i+2}^2 - X_i^2)\,Fit(X_{i+1}) + (X_i^2 - X_{i+1}^2)\,Fit(X_{i+2})}{(X_{i+1} - X_{i+2})\,Fit(X_i) + (X_{i+2} - X_i)\,Fit(X_{i+1}) + (X_i - X_{i+1})\,Fit(X_{i+2})}. \tag{11}$$

Finally, comparing the obtained X* with the original Xi, the individual is updated as shown in Equation (12).

$$X_i = \begin{cases} X^*, & Fit(X^*) < Fit(X_i), \\ X_i, & Fit(X^*) \ge Fit(X_i). \end{cases} \tag{12}$$
After this operation is performed on the whole population, the quality of the northern goshawk population is improved, which helps the algorithm approach the optimal solution more closely.
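A sketch of Equations (11) and (12) in Python (our illustration; the wrap-around choice of the triple at the end of the population and the guard against a vanishing denominator are our assumptions, as the paper does not specify them):

```python
import numpy as np

def polynomial_interpolation_step(X, F, fit, eps=1e-12):
    """Move each X_i to the vertex X* of the parabola through
    (X_i, Fit_i), (X_{i+1}, Fit_{i+1}), (X_{i+2}, Fit_{i+2}) if it improves."""
    N = X.shape[0]
    for i in range(N):
        j, k = (i + 1) % N, (i + 2) % N                   # wrap around (assumption)
        x1, x2, x3 = X[i], X[j], X[k]
        f1, f2, f3 = F[i], F[j], F[k]
        num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
        den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
        den = np.where(np.abs(den) < eps, eps, den)       # avoid division by zero
        x_star = 0.5 * num / den                          # Eq. (11), component-wise
        f_star = fit(x_star)
        if f_star < F[i]:                                 # Eq. (12)
            X[i], F[i] = x_star, f_star
    return X, F
```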

2.2.2. Multi-Strategy Opposite Learning Method

Moreover, easily falling into local optima is a dilemma for optimization algorithms, especially on optimization problems with a large number of local optima. Thus, different kinds of opposite learning methods are introduced to help the NGO algorithm escape local optima. To make the learning modes of the individuals in the population more diverse, this paper adopts three learning mechanisms, each with its own characteristics: opposite learning [40], quasi-opposite learning [41] and quasi-reflected learning [42].
For the i-th individual of the northern goshawk population, the newly generated individual, based on the different opposite learning mechanisms, can be calculated by the following equations.
$$\tilde{X}_i = LB + UB - X_i. \tag{13}$$

$$\bar{X}_i = rand\left[\frac{LB + UB}{2},\ LB + UB - X_i\right]. \tag{14}$$

$$\hat{X}_i = rand\left[\frac{LB + UB}{2},\ X_i\right]. \tag{15}$$
Figure 3 illustrates the characteristics of the different opposite learning methods in the two-dimensional space. The red point is the i-th individual in the northern goshawk population. Following the application of different learning methods to the Xi in different dimensions, new individuals will be generated in the constructed areas A and B.
To widen the search range as much as possible, the above methods will be applied to the northern goshawk population. Suppose the size of the population X is N, then the population will be divided into three groups randomly. Following the application of different methods to each group, a new population, based on opposite learning strategies, will be generated, noted as Xoppo. The two populations X and Xoppo are mixed and sorted, according to the fitness values, and N individuals with better fitness values will be retained as the final population. Figure 4 shows the whole process of the learning methods for the population.
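A sketch of this procedure (our illustration; rand[a, b] in Equations (14) and (15) is read as a uniform draw between the two points, so the bounds are ordered per component with min/max):

```python
import numpy as np

def multi_strategy_opposite_learning(X, F, fit, LB, UB, rng):
    """Eqs. (13)-(15) on three random groups, then keep the N fittest of X and X_oppo."""
    N, M = X.shape
    g1, g2, g3 = np.array_split(rng.permutation(N), 3)    # random three-way split
    center = (LB + UB) / 2
    opposite = LB + UB - X                                # Eq. (13) for every row
    X_oppo = np.empty_like(X)
    X_oppo[g1] = opposite[g1]                             # opposite learning
    lo = np.minimum(center, opposite[g2])                 # Eq. (14): quasi-opposite
    hi = np.maximum(center, opposite[g2])
    X_oppo[g2] = lo + rng.random(lo.shape) * (hi - lo)
    lo = np.minimum(center, X[g3])                        # Eq. (15): quasi-reflected
    hi = np.maximum(center, X[g3])
    X_oppo[g3] = lo + rng.random(lo.shape) * (hi - lo)
    F_oppo = np.array([fit(x) for x in X_oppo])
    X_all = np.vstack([X, X_oppo])
    F_all = np.concatenate([F, F_oppo])
    keep = np.argsort(F_all)[:N]                          # greedy elitist selection
    return X_all[keep], F_all[keep]
```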
Figure 5 shows the flowchart of the ENGO algorithm. The specific steps of the ENGO algorithm are as follows:
  • Step 1. Initialize the parameters, such as the size of the northern goshawk population N, the dimension of the problem M, and the maximum iteration time T.
  • Step 2. Create the initial northern goshawk population by Equation (2).
  • Step 3. While t < T, calculate the fitness value of each individual in the population. After selecting the corresponding prey of the i-th individual Xi by Equation (3), the updated solution can be obtained by Equations (4) and (5).
  • Step 4. Divide the population into three equal groups.
  • Step 5. Based on Equations (13)–(15), apply different opposite learning methods to the according group, and obtain new solutions.
  • Step 6. Mix the new solutions with the population, and N better solutions are selected as the new population.
  • Step 7. By simulating the behavior of chasing the prey, calculate the new solution by Equation (6). Then, update the solution by Equation (7).
  • Step 8. Apply the polynomial interpolation strategy to each individual and update the individual by Equations (11) and (12).
  • Step 9. If t < T, return to Step 3. Otherwise, output the best individual and its fitness value.
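Putting the steps together, a compact main loop reads as follows (our sketch, reusing the helper functions defined in the earlier snippets; the helper names are ours, not from the paper, and fit is the objective to be minimized):

```python
import numpy as np

def engo(fit, N, M, LB, UB, T, seed=0):
    rng = np.random.default_rng(seed)
    X = init_population(N, M, LB, UB, rng)                           # Steps 1-2
    F = np.array([fit(x) for x in X])
    for t in range(T):                                               # Step 3 onward
        X, F = prey_identification(X, F, fit, LB, UB, rng)           # Eqs. (3)-(5)
        X, F = multi_strategy_opposite_learning(X, F, fit, LB, UB, rng)  # Steps 4-6
        X, F = prey_capture(X, F, fit, t, T, LB, UB, rng)            # Step 7
        X, F = polynomial_interpolation_step(X, F, fit)              # Step 8
    best = np.argmin(F)                                              # Step 9
    return X[best], F[best]

x_best, f_best = engo(lambda x: np.sum(x**2), N=30, M=10, LB=-100.0, UB=100.0, T=100)
```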

3. Numerical Experiment on the Test Functions

3.1. Test Functions and Parameters Setting

To test the performance of the improved algorithm, the CEC2017 test suite is used; it consists of 29 test functions (F2 is excluded), including unimodal functions, multimodal functions, hybrid functions and composition functions [43]. These functions are constructed by rotating, combining or scaling benchmark functions, and they have been used to test the ability of optimization algorithms to search a complex space. The original NGO algorithm and six other well-known algorithms are selected for comparison: the particle swarm optimization algorithm (PSO), the Archimedes optimization algorithm (AOA), the African vultures optimization algorithm (AVOA), the dandelion optimizer (DO), the sand cat swarm optimization algorithm (SCSO) and the sparrow search algorithm (SSA). Table 1 shows the proposal time and parameters of these compared algorithms. In the comparison experiments, the population size of all algorithms is set to 100, the maximum number of iterations is 500, and the dimension of the test functions is set to 30. Meanwhile, to compare the different algorithms fairly, all algorithms are run 20 times independently to avoid random distortions. Once completed, the average value (Ave) and the standard deviation (Std) are recorded to show the accuracy and stability of each algorithm. All of the experiments were implemented on Windows 10 (64-bit) with an Intel(R) Core(TM) i7-1165G7 CPU @ 2.80 GHz (Intel, Santa Clara, CA, USA), 8 GB of RAM and MATLAB 2016a (MathWorks, Natick, MA, USA).

3.2. Results Analysis and Discussion

The results of the different algorithms in solving the functions of the CEC2017 test suite are displayed in Table 2, where the data in bold represent the best results among all algorithms. The results show that the enhanced NGO algorithm ranks first on 23 test functions, accounting for 79.31% of all functions, whereas the original NGO algorithm ranks first on only four functions, accounting for 13.79%. Obviously, after introducing the polynomial interpolation strategy and the multi-strategy opposite learning method into the original NGO algorithm, the solution accuracy is greatly improved. Compared with the other algorithms, the average rank of the ENGO algorithm is 1.3448, which is lower than the others. According to the final rank, the performance of these algorithms is ordered as ENGO > NGO > DO > SSA > SCSO > PSO > AOA.
The average convergence curves of the different algorithms over 20 runs are shown in Figure 6. The accuracy and convergence speed of an algorithm can be measured by the descending rate and the final position of its curve. Compared with the original NGO algorithm, the ENGO algorithm converges to the optimum faster, especially in the early stages of the search, because the polynomial interpolation strategy improves the quality of the northern goshawk population by selecting better solutions around the three randomly selected individuals. Meanwhile, among all algorithms, the ENGO algorithm obtains solutions with better accuracy when the search ends.
Figure 7 displays the boxplots of the different algorithms, which are used to measure the stability and robustness of each algorithm. Comparing the lengths of the boxes, the ENGO box is smaller than those of the original NGO algorithm and the other algorithms on F6, F7, F11, F15, F17, F20, F22, F23, F24, F25, F27, F28 and F30, which illustrates that the distribution of the results of the ENGO algorithm over 20 runs is more concentrated on these functions. For the remaining functions, the median of the data also needs to be considered: on functions F1, F5, F8, F9, F12, F14, F16, F18, F19, F21 and F26, the median of the ENGO is smaller than that of the others. Thus, considering both the length and the median of the boxes, the improved algorithm has an advantage in stability and robustness, except on F3, F4, F10, F13 and F29. This is because the introduction of the two improvement strategies keeps a better balance between the exploitation capacity and the exploration capacity.
Figure 8 shows the radar maps of the eight algorithms in solving the CEC2017 test suite. They are constructed from the ranks of the different algorithms on each test function and reflect the comprehensive performance of each algorithm more intuitively. The radar graph of the ENGO algorithm has the minimum area, which again shows its outstanding performance on most functions. The area of the NGO algorithm is obviously bigger than that of the ENGO algorithm, and the AOA algorithm has the biggest area, indicating that there is still room for improvement. Table 3 lists the p-values of the Wilcoxon rank-sum test between the ENGO and the other algorithms. Usually, when the p-value is smaller than 0.05, there is a significant difference between the two groups of test data. Thus, combined with the data in Table 2, '+' indicates that the ENGO is significantly better than the comparison algorithm, '−' indicates that it is significantly worse, and '=' indicates no significant difference between the two algorithms. From Table 3, the improved algorithm outperforms the original algorithm on more than half of the test functions with statistical support.
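For reference, the rank-sum comparison can be reproduced with SciPy (our sketch with stand-in data; the real inputs are the 20 final objective values per algorithm and function):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
engo_runs = rng.normal(1.0, 0.1, 20)    # stand-in for 20 ENGO results on one function
other_runs = rng.normal(1.2, 0.1, 20)   # stand-in for a comparison algorithm
stat, p = ranksums(engo_runs, other_runs)
if p >= 0.05:
    mark = '='                          # no significant difference
else:
    mark = '+' if np.mean(engo_runs) < np.mean(other_runs) else '-'
print(f"p = {p:.3g}, mark = {mark}")
```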

4. Practical Optimization Problems

Besides the test functions, practical optimization problems with high dimensions and strong constraints can also reflect the performance of an algorithm. In this section, the ENGO algorithm, the original NGO algorithm and eight other algorithms solve five problems to show their performance. These algorithms are the AOA, the AO [27], the Harris hawks optimization (HHO) [49], the WOA, the PSO, the GWO [50], the manta ray foraging optimization (MRFO) [51] and the multi-verse optimizer (MVO) [52]. All algorithms are run 20 times independently, with a population size of 30 and a maximum of 100 iterations. The results and discussion of these examples are as follows.

4.1. Gear Train Design Problem

The gear train design problem is used to obtain the most suitable number of teeth to minimize the cost of the gear ratio. This problem includes four variables, denoted as TA, TB, TC and TD, respectively [53]. The structure of the four gears is shown in Figure 9. Equation (16) lists its mathematical model.
$$\begin{aligned} & x = [x_1, x_2, x_3, x_4] = [T_A, T_B, T_C, T_D], \\ & f(x) = \left(\frac{1}{6.931} - \frac{x_1 x_2}{x_3 x_4}\right)^2, \\ & 12 \le x_1, x_2, x_3, x_4 \le 60. \end{aligned} \tag{16}$$
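The objective of Equation (16) is cheap to evaluate directly (our sketch; which two teeth counts sit in the numerator depends on the gear arrangement, so the sample point below is one well-known near-optimal assignment up to that ordering):

```python
def gear_train(x):
    """Equation (16): squared deviation of the gear ratio from 1/6.931."""
    x1, x2, x3, x4 = x
    return (1 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2

# Teeth counts are integers in [12, 60]; (19, 16, 49, 43) is a classic near-optimum.
print(gear_train([19, 16, 49, 43]))     # on the order of 1e-12
```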
Table 4 provides the final results of the ENGO algorithm and the other algorithms in solving the gear train design problem. Among these algorithms, the ENGO algorithm has the best average value over 20 runs; that is, the ENGO algorithm is effective in solving this problem. Meanwhile, its smaller standard deviation illustrates that the improved algorithm is more stable, while the original NGO algorithm fluctuates considerably. Table 5 lists the best gear train variables obtained by each algorithm.

4.2. Pressure Vessel Design Problem

The pressure vessel design problem aims to minimize the total cost of a cylindrical vessel by adjusting the parameters of its material, forming and welding. The schematic diagram of the pressure vessel is shown in Figure 10. Four parameters need to be optimized: Ts represents the thickness of the shell, Th is the thickness of the head, and the inner radius and the length of the cylindrical section (without considering the head) are denoted R and L, respectively. The mathematical model of this problem is as follows [54,55]:
$$x = [x_1, x_2, x_3, x_4] = [T_s, T_h, R, L],$$

$$f(x) = 0.6224\, x_1 x_3 x_4 + 1.7781\, x_2 x_3^2 + 3.1661\, x_1^2 x_4 + 19.84\, x_1^2 x_3,$$

$$\begin{cases} g_1(x) = -x_1 + 0.0193\, x_3 \le 0, \\ g_2(x) = -x_2 + 0.00954\, x_3 \le 0, \\ g_3(x) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1296000 \le 0, \\ g_4(x) = x_4 - 240 \le 0, \end{cases}$$

$$0 \le x_1 \le 99, \quad 0 \le x_2 \le 99, \quad 0 \le x_3 \le 200, \quad 0 \le x_4 \le 200.$$
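For a population method, such constraints are typically folded into the objective; a minimal static-penalty sketch follows (the penalty scheme is our choice, since the paper does not state its constraint handling):

```python
import numpy as np

def pressure_vessel(x, penalty=1e6):
    """Objective plus a penalty proportional to the total constraint violation."""
    x1, x2, x3, x4 = x
    f = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
         + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g = [
        -x1 + 0.0193 * x3,                                       # g1
        -x2 + 0.00954 * x3,                                      # g2
        -np.pi * x3**2 * x4 - (4 / 3) * np.pi * x3**3 + 1296000, # g3
        x4 - 240,                                                # g4
    ]
    return f + penalty * sum(max(0.0, gi) for gi in g)
```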
Table 6 shows the results of solving the pressure vessel design problem. The ENGO algorithm again performs best in terms of accuracy and stability, and the original NGO algorithm ranks second. Though the GWO algorithm also performs well on the best-value index, it still has defects in stability. Table 7 lists the best parameters of the pressure vessel design problem.

4.3. Four-Stage Gearbox Design Problem

Figure 11 is the schematic diagram of the four-stage gearbox design problem, which is posed to reduce the weight of the gearbox. Twenty-two parameters must be determined, which can be divided into four categories: the positions of the gears, the positions of the pinions, the thickness of the blanks, and the numbers of teeth. Moreover, eighty-six constraints must be satisfied, covering the contact ratio, pitch, strength of the gears, assembly of the gears, kinematics, and the size of the gears. Thus, the four-stage gearbox design problem is challenging for most algorithms, which must search for the optimum in a small feasible region with many local solutions. The model can be represented as follows [56].
$$x = [x_1, x_2, x_3, \ldots, x_{22}] = [N_{p1}, \ldots, N_{p4}, N_{g1}, \ldots, N_{g4}, b_1, \ldots, b_4, x_{p1}, x_{g1}, \ldots, x_{g4}, y_{p1}, y_{g1}, \ldots, y_{g4}],$$

$$f(x) = \frac{\pi}{1000} \sum_{i=1}^{4} \frac{b_i c_i^2 \left(N_{pi}^2 + N_{gi}^2\right)}{\left(N_{pi} + N_{gi}\right)^2},$$

where each constraint family $g_a(x)$–$g_b(x)$ below ranges over the stages i = 1, …, 4, unless i = 2, 3, 4 is indicated:

$$g_1(x) = \left(\frac{366000}{\pi w_1} + \frac{2 c_1 N_{p1}}{N_{p1} + N_{g1}}\right) \frac{(N_{p1} + N_{g1})^2}{4 b_1 c_1^2 N_{p1}} - \frac{\sigma_N J_R}{0.0167\, W K_o K_m} \le 0,$$

$$g_2(x) = \left(\frac{366000\, N_{g1}}{\pi w_1 N_{p1}} + \frac{2 c_2 N_{p2}}{N_{p2} + N_{g2}}\right) \frac{(N_{p2} + N_{g2})^2}{4 b_2 c_2^2 N_{p2}} - \frac{\sigma_N J_R}{0.0167\, W K_o K_m} \le 0,$$

$$g_3(x) = \left(\frac{366000\, N_{g1} N_{g2}}{\pi w_1 N_{p1} N_{p2}} + \frac{2 c_3 N_{p3}}{N_{p3} + N_{g3}}\right) \frac{(N_{p3} + N_{g3})^2}{4 b_3 c_3^2 N_{p3}} - \frac{\sigma_N J_R}{0.0167\, W K_o K_m} \le 0,$$

$$g_4(x) = \left(\frac{366000\, N_{g1} N_{g2} N_{g3}}{\pi w_1 N_{p1} N_{p2} N_{p3}} + \frac{2 c_4 N_{p4}}{N_{p4} + N_{g4}}\right) \frac{(N_{p4} + N_{g4})^2}{4 b_4 c_4^2 N_{p4}} - \frac{\sigma_N J_R}{0.0167\, W K_o K_m} \le 0,$$

$$g_5(x) = \left(\frac{366000}{\pi w_1} + \frac{2 c_1 N_{p1}}{N_{p1} + N_{g1}}\right) \frac{(N_{p1} + N_{g1})^3}{4 b_1 c_1^2 N_{g1} N_{p1}^2} - \frac{\sigma_H}{C_p} \cdot \frac{\sin(\varphi)\cos(\varphi)}{0.0334\, W K_o K_m} \le 0,$$

$$g_6(x) = \left(\frac{366000\, N_{g1}}{\pi w_1 N_{p1}} + \frac{2 c_2 N_{p2}}{N_{p2} + N_{g2}}\right) \frac{(N_{p2} + N_{g2})^3}{4 b_2 c_2^2 N_{g2} N_{p2}^2} - \frac{\sigma_H}{C_p} \cdot \frac{\sin(\varphi)\cos(\varphi)}{0.0334\, W K_o K_m} \le 0,$$

$$g_7(x) = \left(\frac{366000\, N_{g1} N_{g2}}{\pi w_1 N_{p1} N_{p2}} + \frac{2 c_3 N_{p3}}{N_{p3} + N_{g3}}\right) \frac{(N_{p3} + N_{g3})^3}{4 b_3 c_3^2 N_{g3} N_{p3}^2} - \frac{\sigma_H}{C_p} \cdot \frac{\sin(\varphi)\cos(\varphi)}{0.0334\, W K_o K_m} \le 0,$$

$$g_8(x) = \left(\frac{366000\, N_{g1} N_{g2} N_{g3}}{\pi w_1 N_{p1} N_{p2} N_{p3}} + \frac{2 c_4 N_{p4}}{N_{p4} + N_{g4}}\right) \frac{(N_{p4} + N_{g4})^3}{4 b_4 c_4^2 N_{g4} N_{p4}^2} - \frac{\sigma_H}{C_p} \cdot \frac{\sin(\varphi)\cos(\varphi)}{0.0334\, W K_o K_m} \le 0,$$

$$g_9(x)\text{–}g_{12}(x) = -N_{pi}\sqrt{\frac{\sin^2(\varphi)}{4} + \frac{1}{N_{pi}} + \frac{1}{N_{pi}^2}} - N_{gi}\sqrt{\frac{\sin^2(\varphi)}{4} + \frac{1}{N_{gi}} + \frac{1}{N_{gi}^2}} + \frac{\sin(\varphi)\,(N_{pi} + N_{gi})}{2} + CR_{\min}\,\pi\cos(\varphi) \le 0,$$

$$g_{13}(x)\text{–}g_{16}(x) = d_{\min} - \frac{2 c_i N_{pi}}{N_{pi} + N_{gi}} \le 0, \qquad g_{17}(x)\text{–}g_{20}(x) = d_{\min} - \frac{2 c_i N_{gi}}{N_{pi} + N_{gi}} \le 0,$$

$$g_{21}(x) = x_{p1} + \frac{(N_{p1} + 2)\, c_1}{N_{p1} + N_{g1}} - L_{\max} \le 0,$$

$$g_{22}(x)\text{–}g_{24}(x) = -L_{\max} + \left.\frac{(N_{pi} + 2)\, c_i}{N_{pi} + N_{gi}}\right|_{i=2,3,4} + x_{g(i-1)} \le 0,$$

$$g_{25}(x) = -x_{p1} + \frac{(N_{p1} + 2)\, c_1}{N_{p1} + N_{g1}} \le 0, \qquad g_{26}(x)\text{–}g_{28}(x) = \frac{(N_{pi} + 2)\, c_i}{N_{pi} + N_{gi}} - x_{g(i-1)} \le 0,$$

$$g_{29}(x) = y_{p1} + \frac{(N_{p1} + 2)\, c_1}{N_{p1} + N_{g1}} - L_{\max} \le 0,$$

$$g_{30}(x)\text{–}g_{32}(x) = -L_{\max} + \left.\frac{(N_{pi} + 2)\, c_i}{N_{pi} + N_{gi}}\right|_{i=2,3,4} + y_{g(i-1)} \le 0,$$

$$g_{33}(x) = \frac{(N_{p1} + 2)\, c_1}{N_{p1} + N_{g1}} - y_{p1} \le 0, \qquad g_{34}(x)\text{–}g_{36}(x) = \left.\frac{(N_{pi} + 2)\, c_i}{N_{pi} + N_{gi}}\right|_{i=2,3,4} - y_{g(i-1)} \le 0,$$

$$g_{37}(x)\text{–}g_{40}(x) = -L_{\max} + \frac{(N_{pi} + 2)\, c_i}{N_{pi} + N_{gi}} + x_{gi} \le 0, \qquad g_{41}(x)\text{–}g_{44}(x) = -x_{gi} + \frac{(N_{pi} + 2)\, c_i}{N_{pi} + N_{gi}} \le 0,$$

$$g_{45}(x)\text{–}g_{48}(x) = y_{gi} + \frac{(N_{pi} + 2)\, c_i}{N_{pi} + N_{gi}} - L_{\max} \le 0, \qquad g_{49}(x)\text{–}g_{52}(x) = -y_{gi} + \frac{(N_{pi} + 2)\, c_i}{N_{pi} + N_{gi}} \le 0,$$

$$g_{53}(x)\text{–}g_{56}(x) = (b_i - 8.255)(b_i - 5.715)(b_i - 12.70)\left(-N_{pi} + 0.945\, c_i - N_{gi}\right)(-1) \le 0,$$

$$g_{57}(x)\text{–}g_{60}(x) = (b_i - 8.255)(b_i - 3.175)(b_i - 12.70)\left(-N_{pi} + 0.646\, c_i - N_{gi}\right) \le 0,$$

$$g_{61}(x)\text{–}g_{64}(x) = (b_i - 5.715)(b_i - 3.175)(b_i - 12.70)\left(-N_{pi} + 0.504\, c_i - N_{gi}\right) \le 0,$$

$$g_{65}(x)\text{–}g_{68}(x) = (b_i - 5.715)(b_i - 3.175)(b_i - 8.255)\left(-N_{pi} - N_{gi}\right) \le 0,$$

$$g_{69}(x)\text{–}g_{72}(x) = (b_i - 8.255)(b_i - 5.715)(b_i - 12.70)\left(N_{pi} + N_{gi} - 1.812\, c_i\right)(-1) \le 0,$$

$$g_{73}(x)\text{–}g_{76}(x) = (b_i - 8.255)(b_i - 3.175)(b_i - 12.70)\left(N_{pi} + N_{gi} - 0.945\, c_i\right) \le 0,$$

$$g_{77}(x)\text{–}g_{80}(x) = (b_i - 5.715)(b_i - 3.175)(b_i - 12.70)\left(N_{pi} + N_{gi} - 0.646\, c_i\right)(-1) \le 0,$$

$$g_{81}(x)\text{–}g_{84}(x) = (b_i - 5.715)(b_i - 3.175)(b_i - 8.255)\left(N_{pi} + N_{gi} - 0.504\, c_i\right) \le 0,$$

$$g_{85}(x) = w_{\min} - \frac{w_1 \left(N_{p1} N_{p2} N_{p3} N_{p4}\right)}{N_{g1} N_{g2} N_{g3} N_{g4}} \le 0, \qquad g_{86}(x) = -w_{\max} + \frac{w_1 \left(N_{p1} N_{p2} N_{p3} N_{p4}\right)}{N_{g1} N_{g2} N_{g3} N_{g4}} \le 0,$$

with

$$c_i = \sqrt{\left(y_{gi} - y_{p1}\right)^2 + \left(x_{gi} - x_{p1}\right)^2}, \quad K_o = 1.5, \quad d_{\min} = 25, \quad J_R = 0.2,$$

$$\varphi = 120, \quad W = 55.9, \quad K_m = 1.6, \quad CR_{\min} = 1.4, \quad L_{\max} = 127, \quad C_p = 464,$$

$$\sigma_H = 3290, \quad w_{\max} = 255, \quad w_1 = 5000, \quad \sigma_N = 2090, \quad w_{\min} = 245,$$

$$b_i \in \{3.175, 12.7, 8.255, 5.715\}, \qquad 7 \le N_{gi}, N_{pi} \le 76,$$

$$y_{pi}, x_{pi}, y_{gi}, x_{gi} \in \{12.7, 38.1, 25.4, 50.8, 76.2, 63.5, 88.9, 114.3, 101.6\}.$$
Table 8 and Table 9 report the corresponding results of all algorithms in solving the four-stage gearbox design problem. In Table 8, the ENGO algorithm and the MRFO algorithm both perform well on this problem. Though the MRFO has a smaller average than the ENGO, the ENGO is more stable than the MRFO over many runs, which is also significant in engineering applications. The original NGO algorithm ranks sixth among all algorithms. Thus, by introducing the polynomial interpolation strategy and the multi-strategy opposite learning method into the original NGO algorithm, its comprehensive performance is significantly improved.

4.4. 72-Bar Truss Design Problem

Figure 12 shows the construction of the 72-bar truss design problem, including its node and element numbering schemes [57]. As Table 10 shows, the structural members of the 72-bar space truss are organized symmetrically into 16 groups. The unit weight of the material is 0.1 lb/in³, and the modulus of elasticity is 10⁷ psi. The structure has the following constraints: a maximum displacement of ±0.25 in. at the uppermost joints in the x, y or z directions; a maximum allowable stress of ±25 ksi in any element; and acceptable cross-sectional areas ranging from 0.1 in² to 3.0 in² [58].
After each algorithm is run 20 times, all results are summarized in Table 11 and Table 12. As shown in Table 11, the ENGO algorithm keeps an outstanding performance on all four measure indexes, while the original NGO algorithm only ranks fourth. Thus, for engineering optimization problems of high dimension, the difference between the enhanced algorithm and the original algorithm is clearer. Finally, the optimal solutions obtained by the different algorithms are listed in Table 12 for reference.

4.5. Traveling Salesman Problem

The traveling salesman problem (TSP) is a classical combinatorial optimization problem [59]. In the TSP, a commodity salesman needs to sell his goods in several designated cities, passing through all of the cities before returning to the original city. The question is how to determine the most suitable route to minimize the total travel distance. Since a feasible solution to this problem is a full permutation of all vertices, the difficulty increases dramatically with the number of cities, and the problem is regarded as NP-hard [60]. Here, the ENGO algorithm is employed to solve the TSP, and two examples are given to analyze the performance of the ENGO algorithm and the other comparison algorithms.
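Since the ENGO's decision variables are continuous while a tour is a permutation, some encoding is needed; a common random-key decoding is sketched below (an illustrative assumption on our part, as the paper does not describe its encoding):

```python
import numpy as np

def decode_tour(keys):
    """Random-key decoding: visiting order = ranks of the continuous keys."""
    return np.argsort(keys)

def tour_length(tour, dist):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

rng = np.random.default_rng(1)
cities = rng.random((80, 2))                            # 80 random cities
dist = np.linalg.norm(cities[:, None] - cities[None, :], axis=-1)
keys = rng.random(80)                                   # one candidate in [0, 1]^80
print(tour_length(decode_tour(keys), dist))             # objective for that candidate
```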

4.5.1. Two Traveling Salesmen and 80 Cities

The first example includes 80 randomly generated cities and two traveling salesmen. The results are summarized in Table 13. Over 20 runs, the average travel distance obtained by the ENGO algorithm is the smallest. The MRFO algorithm performs better than the ENGO algorithm on the best value, which suggests that the ENGO could be further improved by borrowing from the structure of the MRFO algorithm. Figure 13 includes the best routes for the traveling salesmen, the convergence curves and the box plots of the different algorithms. From Figure 13a–h, the routes obtained by the AOA or the GWO are more complicated, while the route obtained by the ENGO algorithm is reasonable. Furthermore, in Figure 13k, the red line of the ENGO algorithm shows the fastest convergence speed during the whole search process, which illustrates that the improvement strategies help enhance the quality of the population and make the algorithm approach the optimum faster.

4.5.2. Three Traveling Salesmen and 80 Cities

Then, to increase the difficulty of the TSP, the number of traveling salesmen is set to three. Table 14 lists the results obtained by all algorithms. Obviously, the ENGO algorithm has the best performance on all four measure indexes. Meanwhile, from Figure 14k,l, the ENGO algorithm also has the fastest convergence speed and is the most stable. Therefore, when the difficulty of the TSP is increased, the ENGO algorithm shows a stronger capacity to obtain the optimal route.

5. Conclusions and Future Work

This paper proposed a novel version of the NGO algorithm, called the ENGO algorithm, by introducing the polynomial interpolation strategy and the multi-strategy opposite learning method. To test the ability of the improved algorithm, the ENGO algorithm is employed to solve 29 benchmark functions of the CEC2017 test suite, four engineering examples and the TSP, and is compared with other well-known optimization algorithms. On the one hand, the results show that the ENGO algorithm ranks first on 23 test functions, accounting for 79.31% of the 29 functions, while the original NGO algorithm ranks first on only four functions, accounting for 13.79%. Combined with the box plots, the fluctuation of the results obtained by the ENGO algorithm is significantly smaller. These gaps suggest that the improvement strategies keep a better balance between exploration and exploitation, which helps the algorithm enhance its search ability and obtain results with higher precision. The average convergence curves also reflect the effect of the different strategies: compared with the other algorithms, the curve of the ENGO decreases rapidly, because the quality of the whole population is enhanced by the polynomial interpolation strategy, which leads the individuals to determine appropriate search directions and move toward the optimal solution quickly. At the end of the search process, the convergence curves show that the ENGO algorithm avoids search stagnation, reflecting that the multi-strategy opposite learning method helps the algorithm expand the search range and escape local optima. On the other hand, the results of the four engineering examples and the TSP show that the ENGO algorithm is also competitive on optimization problems with high dimensions and strong constraints. When facing a more complex search space, the ENGO algorithm keeps an outstanding performance on all measure indexes, as in the 72-bar truss design problem. For the two high-dimensional TSP examples, the differences between the ENGO algorithm and the original NGO algorithm become more obvious, which illustrates that the two improvement strategies maintain the efficient search ability of the algorithm as the complexity of the problem increases.
In the future, we can introduce the polynomial interpolation strategy and the multi-strategy opposite learning method into other intelligent optimization algorithms and examine whether these strategies remain effective and produce outstanding performance. Moreover, the ENGO algorithm can be employed to solve other challenging problems, such as parameter optimization of complex models, feature selection, degree reduction, shape optimization, production scheduling problems, flight optimization problems, and so on.

Author Contributions

Conceptualization, G.H.; Data curation, Y.L., X.H., G.H. and W.D.; Formal analysis, Y.L. and W.D.; Funding acquisition, X.H. and G.H.; Investigation, Y.L., X.H. and W.D.; Methodology, Y.L., X.H., G.H. and W.D.; Project administration, X.H. and G.H.; Resources, Y.L. and G.H.; Software, Y.L. and W.D.; Supervision, X.H. and G.H.; Validation, Y.L. and G.H.; Writing—original draft, Y.L., X.H., G.H. and W.D.; Writing—review & editing, Y.L., X.H., G.H. and W.D. All authors have read and agreed to the published version of the manuscript.

Funding

Foundation items: Joint Fund of National Natural Science Foundation of China (U21A20485).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during this study were included in this published article.

Acknowledgments

The authors are very grateful to the reviewers for their insightful suggestions and comments, which helped us to improve the presentation and content of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

Notation: Explanation
NGO: Northern goshawk optimization algorithm
ENGO: Enhanced northern goshawk optimization algorithm proposed in this paper
N: The size of the northern goshawk population
M: The dimension of the optimization problem
t: The number of the current iteration of the algorithm
T: The maximum number of iterations
UB: The upper bound of the optimization problem
LB: The lower bound of the optimization problem
Xi: The i-th individual in the population
xi,j: The j-th element of the i-th individual
preyi: The selected prey in the population
r: A random number in the northern goshawk optimization algorithm
I: A vector consisting of 1 or 2
a0, a1, a2: The three coefficients of the quadratic polynomial function
X̃i: New solution obtained by opposite learning
X̄i: New solution obtained by quasi-opposite learning
X̂i: New solution obtained by quasi-reflected learning
Xoppo: New population consisting of the solutions generated by the different opposite learning methods
TA, TB, TC, TD: Numbers of teeth of gears A, B, C, D
Ts: The thickness of the shell
Th: The thickness of the head
R: The inner radius
L: The length of the cylindrical section
Np1, …, Np4: The numbers of teeth of the pinions
Ng1, …, Ng4: The numbers of teeth of the gears
b1, …, b4: The thickness of the blanks
xp1, xg1, …, xg4, yp1, yg1, …, yg4: The positions of the pinion and the gears
TSP: Traveling salesman problem
GA: Genetic algorithm
DE: Differential evolution algorithm
ICA: Imperialist competitive algorithm
MA: Memetic algorithm
BBO: Bio-geography optimization algorithm
SA: Simulated annealing algorithm
GSA: Gravitational search algorithm
SCA: Sine cosine algorithm
BOA: Billiards-inspired optimization algorithm
GBO: Gradient-based optimizer
PSO: Particle swarm optimization algorithm
WOA: Whale optimization algorithm
GWO: Grey wolf optimizer
AO: Aquila optimizer
SCSO: Sand cat swarm optimization algorithm
AOA: Archimedes optimization algorithm
AVOA: African vultures optimization algorithm
DO: Dandelion optimizer
SSA: Sparrow search algorithm
HHO: Harris hawks optimization algorithm
MRFO: Manta ray foraging optimization algorithm

References

  1. Hu, G.; Yang, R.; Qin, X.Q.; Wei, G. MCSA: Multi-strategy boosted chameleon-inspired optimization algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 2023, 403, 115676. [Google Scholar] [CrossRef]
  2. Sotelo, D.; Favela-Contreras, A.; Avila, A.; Pinto, A.; Beltran-Carbajal, F.; Sotelo, C. A New Software-Based Optimization Technique for Embedded Latency Improvement of a Constrained MIMO MPC. Mathematics 2022, 10, 2571. [Google Scholar] [CrossRef]
  3. Hu, G.; Du, B.; Wang, X.; Wei, G. An enhanced black widow optimization algorithm for feature selection. Knowl.-Based Syst. 2022, 235, 107638. [Google Scholar] [CrossRef]
  4. Kvasov, D.E.; Mukhametzhanov, M.S. Metaheuristic vs. deterministic global optimization algorithms: The univariate case. Appl. Math. Comput. 2018, 318, 245–259. [Google Scholar] [CrossRef]
  5. Sotelo, D.; Favela-Contreras, A.; Kalashnikov, V.V.; Sotelo, C. Model Predictive Control with a Relaxed Cost Function for Constrained Linear Systems. Math. Probl. Eng. 2020, 2020, 7485865. [Google Scholar] [CrossRef]
  6. Yan, L.; Zou, X. Gradient-free Stein variational gradient descent with kernel approximation. Appl. Math. Lett. 2021, 121, 107465. [Google Scholar] [CrossRef]
  7. Rubio, J.d.J.; Islas, M.A.; Ochoa, G.; Cruz, D.R.; Garcia, E.; Pacheco, J. Convergent newton method and neural network for the electric energy usage prediction. Inf. Sci. 2022, 585, 89–112. [Google Scholar] [CrossRef]
  8. Gonçalves, M.L.N.; Lima, F.S.; Prudente, L.F. A study of Liu-Storey conjugate gradient methods for vector optimization. Appl. Math. Comput. 2022, 425, 127099. [Google Scholar] [CrossRef]
  9. Guan, G.; Yang, Q.; Gu, W.; Jiang, W.; Lin, Y. A new method for parametric design and optimization of ship inner shell based on the improved particle swarm optimization algorithm. Ocean Eng. 2018, 169, 551–566. [Google Scholar] [CrossRef]
  10. Rahman, I.; Mohamad-Saleh, J. Hybrid bio-Inspired computational intelligence techniques for solving power system optimization problems: A comprehensive survey. Appl. Soft. Comput. 2018, 69, 72–130. [Google Scholar] [CrossRef]
  11. Kiran, M.S.; Hakli, H. A tree–seed algorithm based on intelligent search mechanisms for continuous optimization. Appl. Soft. Comput. 2021, 98, 106938. [Google Scholar] [CrossRef]
  12. Montoya, O.D.; Molina-Cabrera, A.; Gil-Gonzalez, W. A Possible Classification for Metaheuristic Optimization Algorithms in Engineering and Science. Ingeniería 2022, 27, e19815. [Google Scholar] [CrossRef]
  13. Hu, G.; Zhong, J.; Du, B.; Wei, G. An enhanced hybrid arithmetic optimization algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 394, 114901. [Google Scholar] [CrossRef]
  14. Zhou, J.; Hua, Z. A correlation guided genetic algorithm and its application to feature selection. Appl. Soft. Comput. 2022, 123, 108964. [Google Scholar] [CrossRef]
  15. Gonçalves, E.N.; Belo, M.A.R.; Batista, A.P. Self-adaptive multi-objective differential evolution algorithm with first front elitism for optimizing network usage in networked control systems. Appl. Soft. Comput. 2022, 114, 108112. [Google Scholar] [CrossRef]
  16. Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 25–28. [Google Scholar]
  17. Seo, W.; Park, M.; Kim, D.-W.; Lee, J. Effective memetic algorithm for multilabel feature selection using hybridization-based communication. Expert Syst. Appl. 2022, 201, 117064. [Google Scholar] [CrossRef]
  18. Chen, X.; Tianfield, H.; Du, W.; Liu, G. Biogeography-based optimization with covariance matrix based migration. Appl. Soft. Comput. 2016, 45, 71–85. [Google Scholar] [CrossRef]
  19. Lee, J.; Perkins, D. A simulated annealing algorithm with a dual perturbation method for clustering. Pattern Recogn. 2021, 112, 107713. [Google Scholar] [CrossRef]
  20. Wang, Y.; Gao, S.; Yu, Y.; Cai, Z.; Wang, Z. A gravitational search algorithm with hierarchy and distributed framework. Knowl.-Based Syst. 2021, 218, 106877. [Google Scholar] [CrossRef]
  21. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  22. Kaveh, A.; Khanzadi, M.; Rastegar Moghaddam, M. Billiards-inspired optimization algorithm; a new meta-heuristic method. Structures 2020, 27, 1722–1739. [Google Scholar] [CrossRef]
  23. Ahmadianfar, I.; Bozorg-Haddad, O.; Chu, X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Inf. Sci. 2020, 540, 131–159. [Google Scholar] [CrossRef]
  24. Zhang, Y.; Hu, X.; Cao, X.; Wu, C. An efficient hybrid integer and categorical particle swarm optimization algorithm for the multi-mode multi-project inverse scheduling problem in turbine assembly workshop. Comput. Ind. Eng. 2022, 169, 108148. [Google Scholar] [CrossRef]
  25. Sun, Y.; Yang, T.; Liu, Z. A whale optimization algorithm based on quadratic interpolation for high-dimensional global optimization problems. Appl. Soft. Comput. 2019, 85, 105744. [Google Scholar] [CrossRef]
  26. Yu, X.; Wu, X. Ensemble grey wolf Optimizer and its application for image segmentation. Expert Syst. Appl. 2022, 209, 118267. [Google Scholar] [CrossRef]
  27. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  28. Seyyedabbasi, A.; Kiani, F. Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Eng. Comput. 2022, 1–25. [Google Scholar] [CrossRef]
  29. Zhang, H.; Zhang, K.; Zhou, Y.; Ma, L.; Zhang, Z. An immune algorithm for solving the optimization problem of locating the battery swapping stations. Knowl.-Based Syst. 2022, 248, 108883. [Google Scholar] [CrossRef]
  30. Reihanian, A.; Feizi-Derakhshi, M.-R.; Aghdasi, H.S. NBBO: A new variant of biogeography-based optimization with a novel framework and a two-phase migration operator. Inf. Sci. 2019, 504, 178–201. [Google Scholar] [CrossRef]
  31. Hu, G.; Du, B.; Wang, X. An improved black widow optimization algorithm for surfaces conversion. Appl. Intell. 2022, 1–42. [Google Scholar] [CrossRef]
  32. Kaveh, A.; Ghazaan, M.I.; Saadatmand, F. Colliding bodies optimization with Morlet wavelet mutation and quadratic interpolation for global optimization problems. Eng. Comput. 2022, 38, 2743–2767. [Google Scholar] [CrossRef]
  33. Yousri, D.; AbdelAty, A.M.; Al-qaness, M.A.A.; Ewees, A.A.; Radwan, A.G.; Abd Elaziz, M. Discrete fractional-order Caputo method to overcome trapping in local optima: Manta Ray Foraging Optimizer as a case study. Expert Syst. Appl. 2022, 192, 116355. [Google Scholar] [CrossRef]
  34. Lu, X.-l.; He, G. QPSO algorithm based on Lévy flight and its application in fuzzy portfolio. Appl. Soft. Comput. 2021, 99, 106894. [Google Scholar] [CrossRef]
  35. Jiang, F.; Xia, H.; Anh Tran, Q.; Minh Ha, Q.; Quang Tran, N.; Hu, J. A new binary hybrid particle swarm optimization with wavelet mutation. Knowl.-Based Syst. 2017, 130, 90–101. [Google Scholar] [CrossRef]
  36. Miao, F.; Yao, L.; Zhao, X. Symbiotic organisms search algorithm using random walk and adaptive Cauchy mutation on the feature selection of sleep staging. Expert Syst. Appl. 2021, 176, 114887. [Google Scholar] [CrossRef]
  37. Yang, D.; Wu, M.; Li, D.; Xu, Y.; Zhou, X.; Yang, Z. Dynamic opposite learning enhanced dragonfly algorithm for solving large-scale flexible job shop scheduling problem. Knowl.-Based Syst. 2022, 238, 107815. [Google Scholar] [CrossRef]
  38. Dehghani, M.; Hubálovský, H.; Trojovský, A. Northern Goshawk Optimization: A New Swarm-Based Algorithm for Solving Optimization Problems. IEEE Access 2021, 9, 162059–162080. [Google Scholar] [CrossRef]
  39. Hu, G.; Du, B.; Li, H.; Wang, X. Quadratic interpolation boosted black widow spider-inspired optimization algorithm with wavelet mutation. Math. Comput. Simul. 2022, 200, 428–467. [Google Scholar] [CrossRef]
  40. Yang, Y.; Gao, Y.; Tan, S.; Zhao, S.; Wu, J.; Gao, S.; Wang, Y.-G. An opposition learning and spiral modelling based arithmetic optimization algorithm for global continuous optimization problems. Eng. Appl. Artif. Intell. 2022, 113, 104981. [Google Scholar] [CrossRef]
  41. Li, B.; Li, Z.; Yang, P.; Xu, J.; Wang, H. Modeling and optimization of the thermal-hydraulic performance of direct contact heat exchanger using quasi-opposite Jaya algorithm. Int. J. Therm. Sci. 2022, 173, 107421. [Google Scholar] [CrossRef]
  42. Guo, W.; Xu, P.; Dai, F.; Zhao, F.; Wu, M. Improved Harris hawks optimization algorithm based on random unscented sigma point mutation strategy. Appl. Soft. Comput. 2021, 113, 108012. [Google Scholar] [CrossRef]
  43. Hu, G.; Zhu, X.; Wang, X.; Wei, G. Multi-strategy boosted marine predators algorithm for optimizing approximate developable surface. Knowl.-Based Syst. 2022, 254, 109615. [Google Scholar] [CrossRef]
  44. Houssein, E.H.; Gad, A.G.; Hussain, K.; Suganthan, P.N. Major Advances in Particle Swarm Optimization: Theory, Analysis, and Application. Swarm Evol. Comput. 2021, 63, 100868. [Google Scholar] [CrossRef]
  45. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  46. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  47. Zhao, S.; Zhang, T.; Ma, S.; Chen, M. Dandelion Optimizer: A nature-inspired metaheuristic algorithm for engineering applications. Eng. Appl. Artif. Intell. 2022, 114, 105075. [Google Scholar] [CrossRef]
  48. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  49. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  50. Zheng, J.; Hu, G.; Ji, X.; Qin, X. Quintic generalized Hermite interpolation curves: Construction and shape optimization using an improved GWO algorithm. Comput. Appl. Math. 2022, 41, 115. [Google Scholar] [CrossRef]
  51. Zhao, W.; Zhang, Z.; Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intel. 2020, 87, 103300. [Google Scholar] [CrossRef]
  52. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  53. Hu, G.; Li, M.; Wang, X.; Wei, G.; Chang, C.-T. An enhanced manta ray foraging optimization algorithm for shape optimization of complex CCG-Ball curves. Knowl.-Based Syst. 2022, 240, 108071. [Google Scholar] [CrossRef]
  54. Naruei, I.; Keynia, F. A new optimization method based on COOT bird natural life model. Expert Syst. Appl. 2021, 183, 115352. [Google Scholar] [CrossRef]
  55. Hu, G.; Zhu, X.; Wei, G.; Chang, C.-T. An improved marine predators algorithm for shape optimization of developable Ball surfaces. Eng. Appl. Artif. Intell. 2021, 105, 104417. [Google Scholar] [CrossRef]
56. Gurrola-Ramos, J.; Hernández-Aguirre, A.; Dalmau-Cedeño, O. COLSHADE for Real-World Single-Objective Constrained Optimization Problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar]
  57. Camp, C.V.; Farshchin, M. Design of space trusses using modified teaching-learning based optimization. Eng. Struct. 2014, 62–63, 87–97. [Google Scholar] [CrossRef]
  58. Hu, G.; Dou, W.; Wang, X.; Abbas, M. An enhanced chimp optimization algorithm for optimal degree reduction of Said-Ball curves. Math. Comput. Simul. 2022, 197, 207–252. [Google Scholar] [CrossRef]
  59. Uğur, A.; Aydin, D. An interactive simulation and analysis software for solving TSP using Ant Colony Optimization algorithms. Adv. Eng. Softw. 2009, 40, 341–349. [Google Scholar] [CrossRef]
  60. Skinderowicz, R. Improving Ant Colony Optimization efficiency for solving large TSP instances. Appl. Soft. Comput. 2022, 120, 108653. [Google Scholar] [CrossRef]
Figure 1. The classification of intelligent optimization algorithms.
Figure 2. The flowchart of the NGO algorithm.
Figure 3. Schematic diagram of different learning methods in 2-D.
Figure 4. The process of opposite learning and selection.
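For readers who wish to reproduce the multi-strategy opposite learning step sketched in Figures 3 and 4, the following Python fragment illustrates the three operators on a single solution vector. It follows the standard definitions of the opposite, quasi-opposite, and quasi-reflected points for variables bounded on [lb, ub]; the function and variable names are ours for illustration, not the authors' code.

# Python sketch (illustrative): the three opposite-learning operators
# applied per dimension to a solution x in the box [lb, ub].
import numpy as np

def opposite(x, lb, ub):
    # opposite point: mirror image of x about the centre of [lb, ub]
    return lb + ub - x

def quasi_opposite(x, lb, ub, rng):
    # random point between the interval centre and the opposite point
    c = (lb + ub) / 2.0
    return c + rng.random(x.shape) * (opposite(x, lb, ub) - c)

def quasi_reflected(x, lb, ub, rng):
    # random point between the interval centre and x itself
    c = (lb + ub) / 2.0
    return c + rng.random(x.shape) * (x - c)

rng = np.random.default_rng(0)
lb, ub = np.zeros(5), np.ones(5)
x = rng.random(5)
candidates = [opposite(x, lb, ub),
              quasi_opposite(x, lb, ub, rng),
              quasi_reflected(x, lb, ub, rng)]
# Greedy selection as in Figure 4, here against a toy sphere fitness:
fitness = lambda v: float(np.sum(v ** 2))
x = min([x] + candidates, key=fitness)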
Figure 5. The flowchart of the ENGO algorithm.
Figure 6. The average convergence curves of the different algorithms on CEC2017.
Figure 7. The boxplots of the different algorithms on CEC2017.
Figure 8. The radar maps of the different algorithms on CEC2017.
Figure 9. The construction of the gear train design. Here, A, B, C, and D represent the serial numbers of the gears.
Figure 10. The construction of the pressure vessel design problem.
Figure 11. The construction of the four-stage gearbox design problem.
Figure 12. The construction of the 72-bar truss design problem. (a) The front view; (b) the structure diagram in three dimensions.
Figure 13. The best results, convergence curves, and box plots of all algorithms for the TSP with 80 cities and two traveling salesmen. (a) NGO; (b) ENGO; (c) AOA; (d) AO; (e) HHO; (f) WOA; (g) PSO; (h) GWO; (i) MRFO; (j) MVO; (k) the convergence curve; (l) the box plot.
Figure 14. The best results, convergence curves, and box plots of all algorithms for the TSP with 80 cities and three traveling salesmen. (a) NGO; (b) ENGO; (c) AOA; (d) AO; (e) HHO; (f) WOA; (g) PSO; (h) GWO; (i) MRFO; (j) MVO; (k) the convergence curve; (l) the box plot.
Table 1. Parameters of the algorithms.
Algorithms | Proposed Time | Parameters
PSO [44] | 1995 | Learning factors c1 = 2, c2 = 2.5; inertia factor w = 2.
AOA [45] | 2021 | Control parameter µ = 0.499; sensitive parameter a = 5.
AVOA [46] | 2021 | Hunger degree z is a random value in [−1, 1].
DO [47] | 2022 | Parameters β = 1.5, s = 0.01.
SCSO [28] | 2022 | The value of the hearing characteristic SM = 2.
SSA [48] | 2017 | The update probability of the leader c3 = 0.5.
Table 2. Results of the different algorithms on the CEC2017 functions.
Function | Index | PSO | AOA | AVOA | DO | SCSO | SSA | NGO | ENGO
F1 | Ave | 5.1802E+09 | 4.0462E+10 | 1.1221E+05 | 1.8277E+05 | 3.3426E+09 | 5.5010E+03 | 2.0690E+05 | 2.3466E+03
F1 | Std | 1.3161E+09 | 3.5061E+09 | 4.8240E+05 | 1.3250E+05 | 2.2407E+09 | 3.6044E+03 | 1.3089E+05 | 1.8986E+03
F1 | Rank | 7 | 8 | 3 | 4 | 6 | 2 | 5 | 1
F3 | Ave | 7.1287E+04 | 3.9949E+04 | 3.0323E+04 | 2.1871E+03 | 4.1388E+04 | 4.7198E+04 | 5.8724E+04 | 4.5869E+04
F3 | Std | 1.3573E+04 | 7.5639E+03 | 9.2550E+03 | 1.5687E+03 | 8.6106E+03 | 5.6481E+03 | 5.3424E+03 | 5.4097E+03
F3 | Rank | 8 | 3 | 2 | 1 | 4 | 6 | 7 | 5
F4 | Ave | 9.1671E+02 | 1.0676E+04 | 5.2458E+02 | 5.0657E+02 | 6.6922E+02 | 5.0147E+02 | 5.0824E+02 | 4.9688E+02
F4 | Std | 1.3168E+02 | 1.2735E+03 | 3.0283E+01 | 2.7533E+01 | 1.0356E+02 | 3.2163E+01 | 1.6062E+01 | 2.8701E+01
F4 | Rank | 7 | 8 | 5 | 3 | 6 | 2 | 4 | 1
F5 | Ave | 7.3096E+02 | 8.3088E+02 | 6.9320E+02 | 6.7696E+02 | 7.3262E+02 | 7.4290E+02 | 6.5192E+02 | 6.1849E+02
F5 | Std | 2.3716E+01 | 2.0645E+01 | 3.8796E+01 | 3.0970E+01 | 4.2318E+01 | 5.1374E+01 | 2.6417E+01 | 2.1285E+01
F5 | Rank | 5 | 8 | 4 | 3 | 6 | 7 | 2 | 1
F6 | Ave | 6.2781E+02 | 6.7466E+02 | 6.4592E+02 | 6.4002E+02 | 6.5735E+02 | 6.4691E+02 | 6.0692E+02 | 6.0030E+02
F6 | Std | 4.0829E+00 | 4.6522E+00 | 5.7921E+00 | 1.2329E+01 | 1.0318E+01 | 8.0761E+00 | 5.0220E+00 | 4.5294E−01
F6 | Rank | 3 | 8 | 5 | 4 | 7 | 6 | 2 | 1
F7 | Ave | 1.1906E+03 | 1.2735E+03 | 1.1822E+03 | 1.0234E+03 | 1.1217E+03 | 1.2322E+03 | 9.1927E+02 | 8.9333E+02
F7 | Std | 7.9530E+01 | 4.7253E+01 | 8.5067E+01 | 7.0898E+01 | 9.2206E+01 | 7.6023E+01 | 3.4636E+01 | 1.9863E+01
F7 | Rank | 6 | 8 | 5 | 3 | 4 | 7 | 2 | 1
F8 | Ave | 1.0399E+03 | 1.0744E+03 | 9.4314E+02 | 9.3114E+02 | 9.9586E+02 | 9.7208E+02 | 9.3873E+02 | 9.1502E+02
F8 | Std | 2.0781E+01 | 1.3098E+01 | 2.3945E+01 | 3.4521E+01 | 3.5091E+01 | 3.1781E+01 | 1.4437E+01 | 1.9785E+01
F8 | Rank | 7 | 8 | 4 | 2 | 6 | 5 | 3 | 1
F9 | Ave | 3.9823E+03 | 7.4560E+03 | 4.8178E+03 | 4.9962E+03 | 5.7009E+03 | 5.3918E+03 | 2.9437E+03 | 1.9618E+03
F9 | Std | 8.1851E+02 | 7.3024E+02 | 8.2016E+02 | 1.9850E+03 | 9.5008E+02 | 2.2118E+02 | 4.5211E+02 | 5.7069E+02
F9 | Rank | 3 | 8 | 4 | 5 | 7 | 6 | 2 | 1
F10 | Ave | 8.2352E+03 | 8.0569E+03 | 5.7230E+03 | 4.9592E+03 | 5.6354E+03 | 5.3943E+03 | 5.4102E+03 | 5.3437E+03
F10 | Std | 3.2222E+02 | 4.0120E+02 | 8.9928E+02 | 4.9182E+02 | 8.5290E+02 | 4.9178E+02 | 2.5354E+02 | 3.8350E+02
F10 | Rank | 8 | 7 | 6 | 1 | 5 | 3 | 4 | 2
F11 | Ave | 2.0359E+03 | 4.0694E+03 | 1.2502E+03 | 1.2517E+03 | 1.8404E+03 | 1.2747E+03 | 1.2135E+03 | 1.1957E+03
F11 | Std | 2.8771E+02 | 8.8072E+02 | 5.1866E+01 | 4.8974E+01 | 6.1253E+02 | 4.3691E+01 | 3.4790E+01 | 2.8866E+01
F11 | Rank | 7 | 8 | 3 | 4 | 6 | 5 | 2 | 1
F12 | Ave | 2.5414E+08 | 9.4131E+09 | 6.7289E+06 | 6.6982E+06 | 3.2798E+07 | 1.5966E+06 | 7.3235E+05 | 6.3577E+05
F12 | Std | 3.5963E+07 | 9.5712E+08 | 6.2126E+06 | 2.1447E+06 | 2.4691E+07 | 1.1237E+06 | 2.1315E+05 | 2.3050E+05
F12 | Rank | 7 | 8 | 5 | 4 | 6 | 3 | 2 | 1
F13 | Ave | 3.1682E+07 | 7.2400E+09 | 1.2733E+05 | 1.2456E+05 | 1.3647E+05 | 2.1858E+04 | 1.0714E+04 | 2.5724E+04
F13 | Std | 1.9843E+07 | 3.5318E+09 | 5.4956E+04 | 1.0581E+05 | 1.9868E+04 | 1.2436E+04 | 8.5012E+03 | 1.6919E+04
F13 | Rank | 7 | 8 | 5 | 4 | 6 | 2 | 1 | 3
F14 | Ave | 1.1780E+05 | 6.2017E+05 | 1.8606E+05 | 5.7101E+04 | 3.0132E+05 | 5.1864E+04 | 8.2371E+03 | 4.6760E+03
F14 | Std | 8.0871E+04 | 4.6962E+05 | 2.2180E+05 | 2.6573E+04 | 4.7135E+05 | 4.0322E+04 | 4.6052E+03 | 2.5158E+03
F14 | Rank | 5 | 8 | 6 | 4 | 7 | 3 | 2 | 1
F15 | Ave | 1.8054E+06 | 8.5030E+07 | 3.4814E+04 | 3.3985E+04 | 1.7336E+05 | 1.1045E+04 | 4.6995E+03 | 2.4524E+03
F15 | Std | 2.3121E+06 | 6.6009E+07 | 1.8818E+04 | 2.2745E+04 | 4.3157E+05 | 1.2212E+04 | 2.5192E+03 | 6.9164E+02
F15 | Rank | 7 | 8 | 5 | 4 | 6 | 3 | 2 | 1
F16 | Ave | 3.3186E+03 | 4.9326E+03 | 3.1043E+03 | 2.6516E+03 | 3.1768E+03 | 2.8377E+03 | 2.5559E+03 | 2.4806E+03
F16 | Std | 2.7582E+02 | 4.0993E+02 | 3.0041E+02 | 2.5849E+02 | 3.3518E+02 | 4.1478E+02 | 1.9722E+02 | 1.5468E+02
F16 | Rank | 7 | 8 | 5 | 3 | 6 | 4 | 2 | 1
F17 | Ave | 2.3003E+03 | 3.1673E+03 | 2.5071E+03 | 2.2951E+03 | 2.3644E+03 | 2.4993E+03 | 1.9907E+03 | 1.9748E+03
F17 | Std | 2.1638E+02 | 1.8004E+02 | 2.0256E+02 | 2.0889E+02 | 1.9855E+02 | 2.1525E+02 | 7.6050E+01 | 6.6039E+01
F17 | Rank | 4 | 8 | 7 | 3 | 5 | 6 | 2 | 1
F18 | Ave | 2.6966E+06 | 5.6400E+06 | 8.7903E+05 | 5.9533E+05 | 1.7970E+06 | 5.9365E+05 | 9.6472E+04 | 9.4033E+04
F18 | Std | 1.6107E+06 | 6.7615E+06 | 9.5620E+05 | 3.3879E+05 | 2.1560E+06 | 5.1722E+05 | 4.6634E+04 | 5.4539E+04
F18 | Rank | 7 | 8 | 5 | 4 | 6 | 3 | 2 | 1
F19 | Ave | 2.1838E+06 | 1.6601E+08 | 3.3801E+04 | 5.9278E+04 | 2.0900E+06 | 1.0631E+04 | 5.3849E+03 | 5.0245E+03
F19 | Std | 2.2968E+06 | 1.1869E+08 | 1.9333E+04 | 4.8092E+04 | 2.5655E+06 | 9.8222E+03 | 1.3140E+03 | 1.4903E+03
F19 | Rank | 7 | 8 | 4 | 5 | 6 | 3 | 2 | 1
F20 | Ave | 2.6074E+03 | 2.7081E+03 | 2.6796E+03 | 2.5232E+03 | 2.6236E+03 | 2.7266E+03 | 2.3560E+03 | 2.3617E+03
F20 | Std | 1.9460E+02 | 1.0941E+02 | 1.9055E+02 | 1.9280E+02 | 2.1442E+02 | 2.1721E+02 | 7.4857E+01 | 4.2260E+01
F20 | Rank | 4 | 7 | 6 | 3 | 5 | 8 | 1 | 2
F21 | Ave | 2.5259E+03 | 2.6337E+03 | 2.4945E+03 | 2.4552E+03 | 2.4992E+03 | 2.4968E+03 | 2.4270E+03 | 2.4084E+03
F21 | Std | 1.8736E+01 | 2.0020E+01 | 5.4731E+01 | 3.3110E+01 | 4.3535E+01 | 5.5648E+01 | 1.2745E+01 | 1.3815E+01
F21 | Rank | 7 | 8 | 4 | 3 | 6 | 5 | 2 | 1
F22 | Ave | 6.4159E+03 | 6.4679E+03 | 4.8378E+03 | 5.5543E+03 | 3.4848E+03 | 5.7974E+03 | 2.3034E+03 | 2.3005E+03
F22 | Std | 3.2348E+03 | 7.8528E+02 | 2.3835E+03 | 2.0029E+03 | 1.4360E+03 | 2.1210E+03 | 1.9491E+00 | 1.2498E+00
F22 | Rank | 7 | 8 | 4 | 5 | 3 | 6 | 2 | 1
F23 | Ave | 2.8792E+03 | 3.4845E+03 | 2.9717E+03 | 2.8672E+03 | 2.9288E+03 | 2.9679E+03 | 2.7722E+03 | 2.7413E+03
F23 | Std | 2.1797E+01 | 7.5564E+01 | 8.0333E+01 | 4.7990E+01 | 5.9829E+01 | 7.6021E+01 | 2.1701E+01 | 1.6470E+01
F23 | Rank | 4 | 8 | 7 | 3 | 5 | 6 | 2 | 1
F24 | Ave | 3.0411E+03 | 3.8398E+03 | 3.1300E+03 | 3.0604E+03 | 3.0968E+03 | 3.0891E+03 | 2.9197E+03 | 2.8797E+03
F24 | Std | 9.3852E+00 | 9.8943E+01 | 8.3199E+01 | 6.9980E+01 | 7.1139E+01 | 6.9832E+01 | 1.4223E+01 | 1.2927E+01
F24 | Rank | 3 | 8 | 7 | 4 | 6 | 5 | 2 | 1
F25 | Ave | 3.2031E+03 | 4.3262E+03 | 2.9137E+03 | 2.8939E+03 | 3.0406E+03 | 2.8956E+03 | 2.9170E+03 | 2.8886E+03
F25 | Std | 6.4161E+01 | 2.5324E+02 | 2.4682E+01 | 1.3634E+01 | 5.1759E+01 | 1.3847E+01 | 1.7329E+01 | 6.2293E+00
F25 | Rank | 7 | 8 | 4 | 2 | 6 | 3 | 5 | 1
F26 | Ave | 6.1568E+03 | 9.9657E+03 | 6.6772E+03 | 5.7903E+03 | 6.1648E+03 | 6.3669E+03 | 3.2578E+03 | 3.5280E+03
F26 | Std | 4.9151E+02 | 3.4324E+02 | 1.0058E+03 | 4.4628E+02 | 1.0908E+03 | 1.5443E+03 | 7.6200E+02 | 1.2912E+03
F26 | Rank | 4 | 8 | 7 | 3 | 5 | 6 | 1 | 2
F27 | Ave | 3.3027E+03 | 4.4396E+03 | 3.2770E+03 | 3.2687E+03 | 3.3635E+03 | 3.2875E+03 | 3.2230E+03 | 3.2184E+03
F27 | Std | 1.7512E+01 | 2.0391E+02 | 3.0678E+01 | 4.2722E+01 | 8.8901E+01 | 4.2029E+01 | 9.6877E+00 | 6.4926E+00
F27 | Rank | 6 | 8 | 4 | 3 | 7 | 5 | 2 | 1
F28 | Ave | 3.5240E+03 | 6.3214E+03 | 3.2690E+03 | 3.2365E+03 | 3.4941E+03 | 3.2461E+03 | 3.2798E+03 | 3.2054E+03
F28 | Std | 7.3227E+01 | 3.5836E+02 | 2.2702E+01 | 2.6406E+01 | 1.1313E+02 | 2.2172E+01 | 1.4775E+01 | 6.6437E+00
F28 | Rank | 7 | 8 | 4 | 2 | 6 | 3 | 5 | 1
F29 | Ave | 4.3235E+03 | 6.2964E+03 | 4.2739E+03 | 3.9694E+03 | 4.4828E+03 | 4.2342E+03 | 3.8593E+03 | 3.8632E+03
F29 | Std | 1.9360E+02 | 5.6231E+02 | 2.8515E+02 | 1.9885E+02 | 3.4699E+02 | 3.4541E+02 | 9.4104E+01 | 1.5308E+02
F29 | Rank | 6 | 8 | 5 | 3 | 7 | 4 | 1 | 2
F30 | Ave | 7.0571E+06 | 8.7344E+08 | 5.0128E+05 | 6.2123E+05 | 7.0001E+06 | 2.0995E+04 | 1.7041E+04 | 6.8796E+03
F30 | Std | 5.5024E+06 | 2.6904E+08 | 3.0621E+05 | 4.2992E+05 | 4.9031E+06 | 7.5318E+03 | 1.4933E+04 | 9.3090E+02
F30 | Rank | 7 | 8 | 4 | 5 | 6 | 3 | 2 | 1
Average rank | | 6.0000 | 7.7586 | 4.7931 | 3.3448 | 5.7586 | 4.4828 | 2.5172 | 1.3448
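The "Average rank" row of Table 2 is the mean of the per-function ranks that each algorithm earns from its Ave value, with rank 1 assigned to the smallest mean error. A minimal sketch of this bookkeeping, assuming a matrix of Ave values of shape (number of functions, number of algorithms):

# Python sketch: per-function ranks and the average rank used in Table 2.
import numpy as np
from scipy.stats import rankdata

def average_ranks(ave):
    # rank each row independently; rank 1 = best (smallest) Ave value
    ranks = np.apply_along_axis(rankdata, 1, ave)
    return ranks.mean(axis=0)

# Toy data: three algorithms over two functions.
ave = np.array([[5.1802e9, 4.0462e10, 2.3466e3],
                [7.1287e4, 3.9949e4, 4.5869e4]])
print(average_ranks(ave))  # -> [2.5 2.  1.5]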
Table 3. The p-values of the Wilcoxon rank sum test between ENGO and the other algorithms.
Function | PSO | AOA | AVOA | DO | SCSO | SSA | NGO
F1 | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 5.255 × 10−5/+ | 2.960 × 10−7/+ | 6.796 × 10−8/+ | 4.540 × 10−6/+ | 6.796 × 10−8/+
F3 | 6.796 × 10−8/+ | 6.796 × 10−8/− | 6.796 × 10−8/− | 5.979 × 10−1/− | 6.796 × 10−8/− | 6.796 × 10−8/+ | 6.796 × 10−8/+
F4 | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 5.979 × 10−1/= | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 6.796 × 10−8/+
F5 | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 1.481 × 10−3/+ | 2.184 × 10−1/= | 6.796 × 10−8/+ | 8.817 × 10−1/= | 7.712 × 10−3/+
F6 | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 1.657 × 10−7/+ | 1.431 × 10−7/+ | 7.898 × 10−8/+ | 1.235 × 10−7/+ | 1.803 × 10−6/+
F7 | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 8.585 × 10−2/=
F8 | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 9.173 × 10−8/+
F9 | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 5.091 × 10−4/+ | 2.073 × 10−2/+ | 6.015 × 10−7/+ | 3.987 × 10−6/+ | 4.155 × 10−4/+
F10 | 1.953 × 10−3/+ | 7.898 × 10−8/+ | 2.041 × 10−5/+ | 3.382 × 10−4/− | 1.376 × 10−6/+ | 1.201 × 10−6/+ | 3.942 × 10−1/=
F11 | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 8.597 × 10−6/+ | 1.116 × 10−3/+ | 9.748 × 10−6/+ | 3.069 × 10−6/+ | 1.047 × 10−6/+
F12 | 7.937 × 10−8/+ | 7.937 × 10−3/+ | 6.905 × 10−1/= | 8.413 × 10−1/= | 7.937 × 10−3/+ | 1.000 × 10−0/= | 5.476 × 10−1/=
F13 | 7.937 × 10−8/+ | 7.937 × 10−3/+ | 2.222 × 10−1/= | 3.175 × 10−2/+ | 7.937 × 10−3/+ | 3.095 × 10−1/= | 9.524 × 10−2/−
F14 | 7.937 × 10−8/+ | 7.937 × 10−3/+ | 7.937 × 10−3/+ | 7.937 × 10−3/+ | 7.937 × 10−3/+ | 8.413 × 10−1/= | 4.206 × 10−1/=
F15 | 4.903 × 10−1/= | 5.166 × 10−6/+ | 8.392 × 10−1/= | 1.058 × 10−2/+ | 9.246 × 10−1/= | 4.703 × 10−3/+ | 1.065 × 10−7/+
F16 | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 1.143 × 10−2/+ | 6.389 × 10−2/= | 2.062 × 10−6/+ | 7.113 × 10−3/+ | 4.540 × 10−6/+
F17 | 6.917 × 10−7/+ | 6.796 × 10−8/+ | 1.600 × 10−5/+ | 2.616 × 10−1/= | 3.705 × 10−5/+ | 5.310 × 10−2/= | 7.353 × 10−1/=
F18 | 6.168 × 10−1/= | 6.796 × 10−8/+ | 6.220 × 10−4/+ | 5.428 × 10−1/= | 7.205 × 10−2/= | 6.868 × 10−4/+ | 1.014 × 10−3/+
F19 | 6.220 × 10−4/+ | 3.069 × 10−6/+ | 1.719 × 10−1/= | 5.652 × 10−2/= | 5.428 × 10−1/= | 2.073 × 10−2/+ | 7.898 × 10−8/+
F20 | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 4.735 × 10−1/= | 1.929 × 10−2/+ | 6.796 × 10−8/+ | 2.341 × 10−3/+ | 1.294 × 10−4/−
F21 | 6.040 × 10−3/+ | 1.415 × 10−5/+ | 4.155 × 10−4/+ | 1.333 × 10−1/= | 8.355 × 10−3/+ | 1.610 × 10−4/+ | 2.393 × 10−1/=
F22 | 3.416 × 10−7/+ | 6.796 × 10−8/+ | 1.159 × 10−4/+ | 7.712 × 10−3/+ | 1.807 × 10−5/+ | 1.794 × 10−4/+ | 2.977 × 10−1/=
F23 | 5.979 × 10−1/= | 3.152 × 10−2/+ | 8.604 × 10−1/= | 2.503 × 10−1/= | 6.610 × 10−5/+ | 6.389 × 10−2/= | 6.796 × 10−8/+
F24 | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 1.657 × 10−7/+ | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 7.205 × 10−2/=
F25 | 9.127 × 10−7/+ | 6.796 × 10−8/+ | 1.235 × 10−7/+ | 4.539 × 10−7/+ | 3.939 × 10−7/+ | 1.918 × 10−7/+ | 1.436 × 10−2/+
F26 | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 5.255 × 10−5/+ | 1.556 × 10−1/= | 6.796 × 10−8/+ | 6.787 × 10−2/= | 6.796 × 10−8/−
F27 | 1.047 × 10−6/+ | 6.796 × 10−8/+ | 2.356 × 10−6/+ | 6.917 × 10−7/+ | 1.997 × 10−4/+ | 1.600 × 10−5/+ | 8.597 × 10−6/+
F28 | 1.065 × 10−7/+ | 6.796 × 10−8/+ | 2.302 × 10−5/+ | 5.091 × 10−4/+ | 1.431 × 10−7/+ | 6.674 × 10−6/+ | 4.249 × 10−1/=
F29 | 6.796 × 10−8/+ | 6.796 × 10−8/+ | 9.620 × 10−2/= | 2.748 × 10−2/+ | 1.065 × 10−7/+ | 1.333 × 10−1/= | 2.561 × 10−3/−
F30 | 1.047 × 10−6/+ | 6.796 × 10−8/+ | 2.041 × 10−5/+ | 2.748 × 10−2/+ | 9.127 × 10−7/+ | 1.794 × 10−4/+ | 4.249 × 10−1/=
+/=/− | 26/3/0 | 28/0/1 | 21/7/1 | 16/11/2 | 25/3/1 | 21/8/0 | 15/10/4
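Each cell of Table 3 has the form p/mark, where the p-value comes from a two-sided Wilcoxon rank-sum test on the run results of ENGO against one competitor, and the mark is "+" (ENGO significantly better), "=" (no significant difference at the 0.05 level), or "−" (significantly worse). A minimal sketch of one such comparison, assuming smaller objective values are better; this is our reconstruction using SciPy, not the authors' code:

# Python sketch: Wilcoxon rank-sum test and the +/=/− marks of Table 3.
import numpy as np
from scipy.stats import ranksums

def compare(engo_runs, other_runs, alpha=0.05):
    p = ranksums(engo_runs, other_runs).pvalue  # two-sided p-value
    if p >= alpha:
        return p, "="  # statistically indistinguishable
    better = np.median(engo_runs) < np.median(other_runs)
    return p, ("+" if better else "−")

rng = np.random.default_rng(1)
p, mark = compare(rng.normal(1.0, 0.1, 30), rng.normal(1.2, 0.1, 30))
print(f"{p:.3e}/{mark}")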
Table 4. Results of the different algorithms in solving the gear train design problem.
Algorithms | The Best | The Average | The Worst | The Std | Rank
NGO | 6059.99 | 6074.13 | 6127.80 | 30.01 | 2
ENGO | 6059.73 | 6066.12 | 6090.62 | 13.70 | 1
AOA | 6071.98 | 6545.79 | 7222.18 | 444.38 | 8
AO | 6113.00 | 6199.99 | 6410.86 | 120.66 | 4
HHO | 6118.61 | 6795.99 | 7306.63 | 449.83 | 9
WOA | 6570.10 | 7922.69 | 10,623.26 | 1569.47 | 10
PSO | 6059.72 | 6276.06 | 6439.73 | 197.57 | 5
GWO | 6069.89 | 6076.51 | 6091.00 | 9.24 | 3
MRFO | 6060.57 | 6276.27 | 6410.09 | 183.54 | 6
MVO | 6327.37 | 6518.43 | 6821.12 | 213.74 | 7
Table 5. The best solution of the gear train design problem obtained by the different algorithms.
Algorithms | x1 | x2 | x3 | x4
NGO | 0.7782 | 0.3847 | 40.3197 | 200.0000
ENGO | 0.7782 | 0.3847 | 40.3201 | 200.0000
AOA | 0.7788 | 0.3851 | 40.3452 | 199.6439
AO | 0.7910 | 0.4106 | 40.8942 | 193.8638
HHO | 0.8351 | 0.4132 | 43.2690 | 162.6520
WOA | 0.8270 | 0.4523 | 42.2240 | 175.0869
PSO | 0.7988 | 0.3948 | 41.3871 | 185.6578
GWO | 0.7785 | 0.3850 | 40.3279 | 199.9139
MRFO | 0.7799 | 0.3855 | 40.4039 | 198.8352
MVO | 0.7850 | 0.3884 | 40.6637 | 195.3838
Table 6. Results of the different algorithms in solving the pressure vessel design problem.
Algorithms | The Best | The Average | The Worst | The Std | Rank
NGO | 5885.40 | 5891.23 | 5912.01 | 7.63 | 2
ENGO | 5885.53 | 5889.23 | 5903.66 | 4.43 | 1
AOA | 5888.21 | 6177.71 | 8486.53 | 579.23 | 4
AO | 6015.94 | 6827.33 | 7996.55 | 649.54 | 9
HHO | 5991.42 | 6529.04 | 7347.30 | 389.76 | 8
WOA | 6191.17 | 8298.09 | 13,900.72 | 1763.29 | 10
PSO | 5921.55 | 6192.62 | 6694.52 | 248.26 | 5
GWO | 5885.32 | 6415.77 | 7318.93 | 543.48 | 7
MRFO | 5889.04 | 5899.86 | 5927.37 | 11.37 | 3
MVO | 5902.31 | 6376.45 | 7181.97 | 368.37 | 6
Table 7. The best solution of the pressure vessel design problem obtained by the different algorithms.
Algorithms | x1 | x2 | x3 | x4
NGO | 13.1101 | 6.7661 | 42.1004 | 176.6150
ENGO | 13.3703 | 7.0200 | 42.0984 | 176.6376
AOA | 12.0123 | 5.9525 | 40.3278 | 199.9302
AO | 12.2204 | 6.1872 | 40.5586 | 196.7161
HHO | 12.7969 | 5.5343 | 41.3533 | 186.0939
WOA | 13.8206 | 6.6719 | 41.5674 | 183.3307
PSO | 13.3587 | 6.7702 | 42.0985 | 176.6364
GWO | 11.7017 | 6.0078 | 40.3203 | 200.0000
MRFO | 13.1352 | 6.8732 | 42.0915 | 176.7231
MVO | 14.9302 | 7.0317 | 48.6240 | 109.7093
Table 8. Results of the different algorithms in solving the four-stage gearbox design problem.
Algorithms | The Best | The Average | The Worst | The Std | Rank
NGO | 18,398.72 | 166,899.45 | 355,937.63 | 98,874.36 | 6
ENGO | 40.29 | 31,057.83 | 118,898.03 | 38,591.38 | 2
AOA | 236,593.12 | 593,321.31 | 1,367,729.52 | 318,308.48 | 9
AO | 75,789.09 | 334,881.90 | 1,053,952.90 | 240,302.34 | 7
HHO | 16,877.30 | 335,825.38 | 826,083.75 | 241,230.91 | 8
WOA | 148,853.31 | 639,888.43 | 2,175,565.64 | 412,516.64 | 10
PSO | 72.27 | 110,875.33 | 293,548.53 | 91,901.27 | 4
GWO | 43.52 | 147,549.85 | 489,288.82 | 133,316.04 | 5
MRFO | 40.09 | 29,320.84 | 126,516.25 | 42,610.57 | 1
MVO | 61.07 | 103,131.19 | 363,070.87 | 128,562.16 | 3
Table 9. The best solution of the four-stage gearbox design problem obtained by the different algorithms.
Algorithms | x1 | x2 | x3 | x4 | x5 | x6 | x7 | x8
NGO | 16.3597 | 43.4036 | 15.6042 | 20.7061 | 15.9103 | 45.3901 | 18.1319 | 37.3685
ENGO | 16.3498 | 36.2778 | 18.0688 | 32.4697 | 13.9053 | 36.2515 | 15.7787 | 31.2109
AOA | 7.3482 | 16.5814 | 9.7969 | 18.9440 | 8.9957 | 11.7763 | 6.7875 | 23.3493
AO | 6.7509 | 26.2480 | 15.9342 | 28.5684 | 18.4118 | 42.2120 | 11.9579 | 14.9953
HHO | 19.5464 | 53.4613 | 24.5021 | 44.3106 | 20.7301 | 44.8527 | 26.8726 | 52.5285
WOA | 11.5902 | 27.4520 | 12.1870 | 32.3820 | 11.1318 | 20.5502 | 13.5990 | 23.5141
PSO | 23.4903 | 54.2967 | 24.7237 | 51.7276 | 32.4498 | 54.2332 | 18.4690 | 43.6656
GWO | 21.7829 | 59.0988 | 22.9404 | 33.5723 | 20.0176 | 47.8837 | 20.2588 | 42.3957
MRFO | 22.1205 | 43.6337 | 17.6126 | 38.1229 | 20.9525 | 41.6324 | 21.0797 | 48.9074
MVO | 32.4530 | 57.1252 | 22.4880 | 47.3220 | 22.7186 | 67.3639 | 13.1444 | 22.6758
Algorithms | x9 | x10 | x11 | x12 | x13 | x14 | x15 | x16
NGO | 0.7977 | 1.9562 | 1.2978 | 0.7257 | 1.5711 | 3.5749 | 5.1603 | 4.5826
ENGO | 1.1626 | 1.0218 | 1.1244 | 1.2402 | 1.8459 | 3.5264 | 4.2182 | 4.8197
AOA | 2.6529 | 1.2972 | 1.6242 | 1.9436 | 1.6410 | 2.7802 | 4.2547 | 3.5011
AO | 2.4254 | 0.5351 | 1.0247 | 1.6666 | 1.7472 | 3.5758 | 2.1305 | 3.9710
HHO | 0.7225 | 0.9587 | 1.2449 | 1.1576 | 6.2971 | 2.8094 | 4.5773 | 3.2258
WOA | 0.8123 | 1.1016 | 1.4739 | 1.4853 | 0.7152 | 3.7964 | 3.5428 | 3.1702
PSO | 1.2110 | 1.4161 | 1.1434 | 1.1996 | 1.6512 | 6.2798 | 2.9257 | 5.6438
GWO | 0.6744 | 0.5895 | 0.6186 | 0.8372 | 8.0862 | 4.0926 | 5.5300 | 5.1202
MRFO | 1.1778 | 1.1063 | 1.0636 | 1.0945 | 2.3573 | 5.2736 | 5.1814 | 4.8497
MVO | 1.4973 | 1.0831 | 0.7566 | 1.9664 | 7.2887 | 5.9129 | 5.7797 | 5.5464
Algorithms | x17 | x18 | x19 | x20 | x21 | x22
NGO | 2.9657 | 1.1195 | 4.3970 | 2.4098 | 3.5683 | 3.6964
ENGO | 4.8350 | 1.8115 | 5.3561 | 3.8104 | 3.7590 | 2.7212
AOA | 4.5457 | 2.9304 | 5.6055 | 2.3556 | 1.6346 | 1.6271
AO | 2.7660 | 6.2850 | 3.4033 | 2.7208 | 3.2026 | 2.8172
HHO | 2.9383 | 4.7721 | 2.9564 | 2.4774 | 3.4950 | 2.9967
WOA | 2.5494 | 1.0331 | 1.7917 | 2.8701 | 1.7390 | 2.7638
PSO | 6.0429 | 7.3123 | 5.0504 | 3.1684 | 6.1872 | 3.9215
GWO | 5.1050 | 6.3408 | 6.4331 | 3.6878 | 4.2577 | 5.0673
MRFO | 6.2644 | 4.2654 | 4.1077 | 2.8940 | 3.6149 | 4.3536
MVO | 3.6218 | 1.9314 | 5.9677 | 5.2745 | 6.1850 | 3.9999
Table 10. Multiple loading conditions for the 72-bar truss.
Case | Node | Px (kips) | Py (kips) | Pz (kips)
1 | 17 | 0.0 | 0.0 | −5.0
1 | 18 | 0.0 | 0.0 | −5.0
1 | 19 | 0.0 | 0.0 | −5.0
1 | 20 | 0.0 | 0.0 | −5.0
2 | 17 | 5.0 | 5.0 | −5.0
Table 11. Results of the different algorithms in solving the 72-bar truss design problem.
Algorithms | The Best | The Average | The Worst | The Std | Rank
NGO | 392.18 | 411.71 | 444.66 | 15.70 | 4
ENGO | 364.94 | 369.08 | 374.79 | 2.37 | 1
AOA | 521.87 | 633.25 | 797.57 | 79.64 | 8
AO | 450.14 | 519.94 | 643.44 | 49.42 | 5
HHO | 469.52 | 535.59 | 653.30 | 46.82 | 6
WOA | 657.74 | 1276.02 | 2551.19 | 500.03 | 10
PSO | 561.18 | 656.26 | 817.32 | 58.92 | 9
GWO | 366.77 | 385.89 | 544.00 | 41.89 | 2
MRFO | 374.05 | 387.49 | 398.93 | 6.14 | 3
MVO | 403.42 | 631.44 | 818.15 | 130.68 | 7
Table 12. The best solutions of the 72-bar truss design problem obtained by the different algorithms.
Variables | Elements | NGO | ENGO | AOA | AO | HHO | WOA | PSO | GWO | MRFO | MVO
A1 (cm2) | 1–4 | 2.0920 | 1.8590 | 1.5676 | 1.1585 | 1.8381 | 1.1766 | 1.7266 | 1.9659 | 2.1464 | 2.0780
A2 (cm2) | 5–12 | 0.4767 | 0.5651 | 0.5686 | 0.9007 | 0.5715 | 1.3878 | 0.5236 | 0.5256 | 0.4721 | 0.6499
A3 (cm2) | 13–16 | 0.0947 | 0.0010 | 0.1243 | 0.0010 | 0.5927 | 1.0805 | 1.0676 | 0.0076 | 0.0282 | 0.4172
A4 (cm2) | 17–18 | 0.1207 | 0.0036 | 0.5637 | 0.7380 | 0.0010 | 0.5005 | 0.4696 | 0.0116 | 0.0133 | 0.0010
A5 (cm2) | 19–22 | 0.9872 | 1.1815 | 2.6426 | 0.8930 | 0.8664 | 1.0783 | 0.9505 | 1.3216 | 1.1074 | 1.3433
A6 (cm2) | 23–30 | 0.5752 | 0.5100 | 0.7967 | 0.5979 | 0.3892 | 0.4719 | 1.0107 | 0.4859 | 0.5052 | 0.5199
A7 (cm2) | 31–34 | 0.0220 | 0.0087 | 0.0677 | 0.0371 | 0.0010 | 0.3239 | 0.3293 | 0.0239 | 0.0035 | 0.0010
A8 (cm2) | 35–36 | 0.0156 | 0.0027 | 0.6353 | 0.0010 | 0.9025 | 1.0858 | 0.2507 | 0.0553 | 0.0259 | 0.0066
A9 (cm2) | 37–40 | 0.6838 | 0.5605 | 0.9351 | 0.5663 | 0.7672 | 0.3525 | 1.1277 | 0.5764 | 0.6552 | 0.8319
A10 (cm2) | 41–48 | 0.5559 | 0.5261 | 0.3389 | 0.4346 | 0.6098 | 0.8691 | 0.6233 | 0.5201 | 0.5437 | 0.4748
A11 (cm2) | 49–52 | 0.0664 | 0.0010 | 0.3367 | 0.0010 | 0.5841 | 0.2747 | 0.1287 | 0.0254 | 0.0472 | 0.0010
A12 (cm2) | 53–54 | 0.0944 | 0.1566 | 0.6734 | 0.4884 | 0.0737 | 0.9883 | 0.4048 | 0.0536 | 0.0857 | 0.3795
A13 (cm2) | 55–58 | 0.2149 | 0.1639 | 0.9447 | 0.1893 | 0.1440 | 0.5191 | 0.3632 | 0.1694 | 0.2025 | 0.1548
A14 (cm2) | 59–66 | 0.5023 | 0.5753 | 0.5190 | 0.7363 | 0.5593 | 0.3495 | 0.3299 | 0.5391 | 0.5462 | 0.5078
A15 (cm2) | 67–70 | 0.4853 | 0.3682 | 0.6180 | 0.5549 | 0.9750 | 1.0178 | 0.7666 | 0.4751 | 0.4264 | 0.2240
A16 (cm2) | 71–72 | 0.8930 | 0.5086 | 0.5435 | 0.7684 | 0.5203 | 1.0450 | 1.3540 | 0.5289 | 0.7368 | 0.6657
Table 13. The results of the different algorithms for the TSP with 80 cities and two traveling salesmen.
Algorithms | The Best | The Average | The Worst | The Std | Rank
NGO | 7.4210 | 7.5834 | 7.7711 | 0.1558 | 6
ENGO | 7.3777 | 7.4481 | 7.5576 | 0.0727 | 1
AOA | 7.3134 | 7.5703 | 7.8840 | 0.2039 | 5
AO | 7.3397 | 7.4836 | 7.6206 | 0.1105 | 3
HHO | 7.4713 | 7.9101 | 8.3391 | 0.3088 | 10
WOA | 7.4717 | 7.5239 | 7.5539 | 0.0346 | 4
PSO | 7.6181 | 7.8428 | 8.0443 | 0.1876 | 9
GWO | 7.6058 | 7.7255 | 7.8716 | 0.1072 | 7
MRFO | 7.2231 | 7.4833 | 7.7176 | 0.1832 | 2
MVO | 7.5609 | 7.7854 | 7.9852 | 0.1874 | 8
Table 14. The results of the different algorithms for the TSP with 80 cities and three traveling salesmen.
Algorithms | The Best | The Average | The Worst | The Std | Rank
NGO | 7.4785 | 7.6461 | 7.8232 | 0.1497 | 4
ENGO | 7.4153 | 7.5262 | 7.6154 | 0.0719 | 1
AOA | 7.5364 | 7.7051 | 7.8416 | 0.1225 | 5
AO | 7.5042 | 7.6262 | 7.6752 | 0.0695 | 3
HHO | 7.6903 | 7.9476 | 8.1682 | 0.1786 | 7
WOA | 7.6291 | 7.7467 | 7.8408 | 0.0973 | 6
PSO | 7.8003 | 7.9549 | 8.1152 | 0.1396 | 8
GWO | 7.7822 | 8.0106 | 8.3583 | 0.2167 | 9
MRFO | 7.4747 | 7.6107 | 7.8013 | 0.1459 | 2
MVO | 7.7944 | 8.0646 | 8.6528 | 0.3388 | 10
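The objective reported in Tables 13 and 14 is the total tour length summed over all salesmen. The sketch below shows one common way to evaluate such a multiple-TSP solution, assuming every salesman starts and ends at a shared depot city; this encoding (one list of city indices per salesman) is our illustrative choice, not necessarily the one used in the experiments:

# Python sketch: total length of a multiple-TSP solution.
import numpy as np

def tour_length(route, cities, depot):
    # each salesman's tour starts and ends at the depot
    path = [depot] + list(route) + [depot]
    pts = cities[path]
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

def mtsp_cost(routes, cities, depot=0):
    # objective of Tables 13 and 14: sum of all salesmen's tour lengths
    return sum(tour_length(r, cities, depot) for r in routes)

rng = np.random.default_rng(2)
cities = rng.random((80, 2))  # 80 random cities, as in the test instances
routes = np.array_split(rng.permutation(np.arange(1, 80)), 2)  # two salesmen
print(mtsp_cost(routes, cities))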