Using the Grey Wolf Aquila Synergistic Algorithm for Design Problems in Structural Engineering

The Aquila Optimizer (AO) is a metaheuristic algorithm inspired by the hunting behavior of the Aquila bird. The AO approach has been proven to perform effectively on a range of benchmark optimization problems. However, the AO algorithm may suffer from limited exploration ability in specific situations. To increase the exploration ability of the AO algorithm, this work offers a hybrid approach that employs the alpha position of the Grey Wolf Optimizer (GWO) to drive the search process of the AO algorithm. At the same time, we applied the quasi-opposition-based learning (QOBL) strategy in each phase of the Aquila Optimizer. This strategy develops quasi-oppositional solutions to the current solutions, which are then utilized to direct the search phase of the AO algorithm. The GWO method is also notable for its resistance to noise, meaning that it can perform effectively even when the objective function is noisy; the AO algorithm, on the other hand, may be sensitive to noise. By integrating the GWO approach into the AO algorithm, we can strengthen its robustness to noise and hence improve its performance on real-world problems. To evaluate the effectiveness of the technique, the algorithm was benchmarked on 23 well-known test functions and the CEC2017 test functions and compared with other popular metaheuristic algorithms. The findings demonstrate that the proposed method has excellent efficacy. Finally, it was applied to five practical engineering problems, and the results showed that the technique is suitable for tough problems with uncertain search spaces.


Introduction
In order to maximize profit, productivity, and efficiency, optimization is carried out; it is basically the process of identifying the best possible solution among all viable options for a particular situation [1][2][3]. In recent decades, as human culture and contemporary science have advanced, the complexity of real-world optimization problems has been rising significantly, placing greater demands on the reliability and efficacy of optimization strategies [4]. In general, existing optimization technologies fall into two categories: deterministic algorithms and metaheuristic algorithms (MAs). Given the same starting parameters, a deterministic method produces solutions through mechanical convergence to the global optimum, without regard to the analytical features of the problem or anything random [5]. The conjugate gradient and the Newton-Raphson technique are two typical deterministic techniques. While this kind of method can solve some nonlinear problems satisfactorily, it often falls into local optima when faced with multimodal, large-scale, and constrained search spaces, and it also requires the problem's derivative information. Lately, MAs have been gaining popularity among academics worldwide as a great substitute for deterministic algorithms because of their straightforward designs, minimal processing overheads, ability to avoid gradient information, and strong capability to avoid local optima [6].
Metaheuristic algorithms [7] are influenced by nature [8]. Based on their sources of inspiration, these algorithms can be grouped into four categories [9,10]: Evolution-based Algorithms (EAs) [11], Swarm-based Intelligence (SI) [12], Physics-based Techniques (PTs) [13], and Human-based Behaviors (HBs) [14], as indicated in Table 1. They usually model physical or biological phenomena in nature and create mathematical frameworks to solve optimization problems [15,16]. These algorithms offer the features of self-organization, self-adaptation, and self-learning, and they have been widely applied in various domains, such as biology [17,18], feature selection [19], optimization computing [20], image classification [21], and artificial intelligence [22,23]. A number of AO variants have also been proposed, including a map and crisscross operator [51], a random learning mechanism with Nelder-Mead simplex search [52], wavelet mutation [53], a weighted adaptive searching technique [54], a binary AO [55], etc. A fine literature examination of the AO algorithm and its applications is offered in reference [56]. From these sources, it may be inferred that the AO has a propensity to converge too soon and undergo local stagnation.
The other method concerned in this paper, the GWO, was created in 2014 [30]. The Grey Wolf Optimizer (GWO) was developed in response to the social hunting behavior of grey wolves and has inspired other academics to tackle practical optimization problems with it. Grey wolves live in packs and have a rigid social structure with an alpha wolf at the top. The alpha wolf is in charge of steering the pack and making choices, and the other wolves adhere to its instructions [57]. The alpha position of the GWO is thus identified with the best solution found so far. Therefore, we may force the AO to search through fresh regions of the search space and prevent local stagnation by adding the alpha position of the GWO into the AO.
Opposition-based learning (OBL) [58] is a novel area of study that has generated noteworthy attention within the past 10 years. By making use of the OBL concept, numerous soft computing methods have been improved. To boost the solution quality, the quasi-opposition-based learning (QOBL) technique [59] is applied. The QOBL indicates that, when solving an optimization problem, it is more effective to employ a quasi-opposite number than an opposite number. The fittest population in QOBL is selected from the current candidate population together with its opposite population and quasi-opposite population. The QOBL strategy can be applied in integrating NIOAs, in the optimal design of PI/PD dual-mode controllers [60], and in the parameter identification of permanent magnet synchronous motors [61].
In light of the above discussion, to increase the exploration ability of the AO algorithm, this work offers a hybrid approach that employs the alpha position of the GWO to drive the search process of the AO algorithm. At the same time, we applied the QOBL strategy in each phase of the Aquila Optimizer. This strategy develops quasi-oppositional solutions to the current solutions, which are then utilized to direct the search phase of the AO algorithm. We call this enhanced algorithm the GAOA. By comparing the results of 10 swarm intelligence algorithms on 23 classical benchmark functions [29,30] and 29 CEC2017 benchmark functions [62], it is shown that the approach suggested in this research can speed up convergence, enhance convergence accuracy, and identify the global optimum instead of a local optimum. The application of five engineering challenges also indicates that the suggested approach has considerable advantages in solving real-world problems.
The important contributions of this study are summarized as follows:
• Based on the Grey Wolf alpha position, the Aquila Optimizer is improved so that its exploration ability is increased.
• The quasi-oppositional-based learning strategy is then used in each phase of the Aquila Optimizer to direct the search process of the AO algorithm.
• The performance of our method on 23 classical functions and 29 CEC2017 functions is examined and compared with that of 10 other algorithms while considering different dimensions.
• Five engineering design challenges are utilized to evaluate the effectiveness of the proposed method in solving practical situations.
The remainder of this article is organized as follows: Section 2 introduces the background of the Aquila Optimizer, the Grey Wolf Optimizer, and the opposition-based learning strategy. Section 3 introduces the developed algorithm. In Section 4, we carry out the corresponding experiments. Five traditional engineering problems are discussed in Section 5. Finally, Section 6 provides a summary and future prospects.

Background

Aquila Optimizer
The metaheuristic optimization method known as the Aquila Optimizer (AO) [29] was motivated by the Aquila bird's hunting style. The AO imitates the four primary prey-hunting techniques of the Aquila bird: first, using its height advantage, the bird swoops down vertically to take down flying prey; second, hovering in a contour-like pattern close to the ground, it performs swift glide-like attacks to pursue seabirds; third, it uses a low, slow descent attack to hunt foxes and other prey that move slowly; fourth, it directly captures prey while walking on the ground.

Expanded Exploration
The Aquila Optimizer algorithm's expanded exploration step, u_1, reflects the hunting method observed in Aquila birds in which the bird achieves great heights and then descends rapidly. This tactic involves the bird soaring at great heights, allowing it to thoroughly examine the search area, spot prospective prey, and choose the best hunting location. Ref. [29] contains a mathematical representation of this tactic, as shown in Equation (1).
The maximum number of iterations in the method is symbolized as H, whereas h stands for the present iteration. The first search strategy in the candidate solution population (u_1) yields the solution for the following iteration, denoted as u_1^(h+1).

The expression u_best^(h) denotes the best result found up to the h-th iteration. The term (1 − h/H) is used to adjust the depth of the search space according to the iteration count. Furthermore, N denotes the population size, D the dimension size, and the average value of the positions of the current solutions at the h-th iteration, denoted u_M^(h), is calculated using Equation (2).
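As a concrete illustration, the expanded-exploration update can be sketched in Python. The formula follows the standard AO Equation (1) from [29]; the function name and the `rng` argument are our own conventions, not part of the original paper.

```python
import numpy as np

def expanded_exploration(u_best, u_mean, h, H, rng=np.random.default_rng()):
    """Expanded-exploration step (Eq. (1) of the AO [29]):
    u_1^(h+1) = u_best^(h) * (1 - h/H) + (u_M^(h) - u_best^(h) * rand),
    where the factor (1 - h/H) shrinks the search depth as iterations pass."""
    return u_best * (1 - h / H) + (u_mean - u_best * rng.random())
```

Here `u_best` and `u_mean` stand for the best position u_best^(h) and the average position u_M^(h) of Equation (2), while `h` and `H` are the current and maximum iteration counts.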

Narrowed Exploration
The Aquila Optimizer algorithm's narrowed exploration method (u_2) is in line with how Aquila birds hunt: to pursue prey, the tactic entails flying in a contour-like pattern with quick gliding attacks in a condensed investigation area. The main objective of this approach, which is mathematically described in Equation (3), is to find a solution u_2^(h+1) for the following iteration, h + 1.
The Levy flight distribution for the dimension space, D, is referred to as Levy(D) in the Aquila optimization method. The random solution u_R^(h) is drawn from the range [1, N] at the h-th iteration, where N denotes the population size. Typically set to 0.01, the fixed constant value s is used to determine the Levy flight distribution along with randomly chosen parameters u and v that range from 0 to 1. Equation (4) provides the mathematical expression for this calculation.
Equation (5) determines the value s, which is derived using a certain constant parameter a set at 1.5.
The spiral shapes within the search range, designated by y and x, respectively, are represented by Equations (6) and (7). These spiral shapes appear in Equation (3).
Over a predetermined number of search iterations, the variable r_1 takes values in the range (1, 20). w and U have fixed values of 0.005 and 0.00565, respectively. D_1 is an integer ranging from 1 to the dimension of the search space, D.
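A sketch of the narrowed-exploration step, including the Levy flight of Equations (4) and (5) and the spiral terms of Equations (6) and (7), is given below. It follows the standard AO formulation in [29] (where the spiral coordinates are usually written y and x); the helper names are our own.

```python
import math
import numpy as np

def levy_flight(D, beta=1.5, s=0.01, rng=np.random.default_rng()):
    """Levy(D) with s = 0.01 and beta = 1.5 (Eqs. (4)-(5)); u and v are
    random numbers in [0, 1)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = rng.random(D), rng.random(D)
    return s * u * sigma / np.abs(v) ** (1 / beta)

def narrowed_exploration(u_best, u_rand, D, w=0.005, U=0.00565,
                         rng=np.random.default_rng()):
    """Contour-flight step (Eq. (3)):
    u_2^(h+1) = u_best^(h) * Levy(D) + u_R^(h) + (y - x) * rand,
    with spiral terms built from r_1 in (1, 20) and D_1 = 1..D (Eqs. (6)-(7))."""
    D1 = np.arange(1, D + 1)
    r = rng.uniform(1, 20) + U * D1          # r_1 plus the per-dimension offset
    theta = -w * D1 + 3 * math.pi / 2        # spiral angle
    y, x = r * np.cos(theta), r * np.sin(theta)
    return u_best * levy_flight(D, rng=rng) + u_rand + (y - x) * rng.random()
```

Here `u_rand` is the randomly selected solution u_R^(h) from the current population.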

Expanded Exploitation
The Aquila bird targets its prey with a low, slow-moving descent attack while carefully inspecting the prey's location during this phase. Equation (8) is a mathematical representation of this method, also known as expanded exploitation, u_3.
Equation (8) results in u_3^(h+1), which denotes the outcome for the following iteration. In the h-th iteration, u_M^(h) stands for the average value of the current solutions determined by Equation (2), and u_best^(h) represents the best solution found so far. The tuning parameters θ and ρ are normally given a value of 0.1 each, whereas the variable 'rand' is a random number in the (0, 1) range. The upper bound is denoted ub, and the lower bound lb.
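The expanded-exploitation step of Equation (8) can be sketched as follows, with θ and ρ defaulting to 0.1 as stated above; the function name is an illustrative convention of ours.

```python
import numpy as np

def expanded_exploitation(u_best, u_mean, lb, ub, theta=0.1, rho=0.1,
                          rng=np.random.default_rng()):
    """Low slow-descent attack (Eq. (8)):
    u_3^(h+1) = (u_best^(h) - u_M^(h)) * theta - rand
                + ((ub - lb) * rand + lb) * rho."""
    return (u_best - u_mean) * theta - rng.random() \
        + ((ub - lb) * rng.random() + lb) * rho
```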

Narrowed Exploitation
Aquila birds use a hunting strategy in which they directly capture their targets by exploiting the prey's erratic movement patterns on the ground. Equation (9), which generates the next solution at the h-th iteration, denoted u_4^(h+1), uses this hunting approach as the foundation for the design of the narrowed exploitation technique, u_4. A quality function, J, stated in Equation (10), was proposed to guarantee a balanced search strategy.
Equations (11) and (12) are utilized to calculate the trajectory of an attack during a getaway, from the initial location to the terminal location (P_2), and the motion pattern of the Aquila bird's prey tracking (P_1). The calculations are performed using the current iteration number (h) and the maximum number of iterations (H).
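The narrowed-exploitation step of Equations (9)-(12) can be sketched as below. The quality function J, the prey-motion term P_1, and the decaying term P_2 follow the standard AO formulation in [29]; the Levy helper is repeated so the snippet is self-contained, and h is assumed to start at 1.

```python
import math
import numpy as np

def _levy(D, beta=1.5, s=0.01, rng=np.random.default_rng()):
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return s * rng.random(D) * sigma / np.abs(rng.random(D)) ** (1 / beta)

def narrowed_exploitation(u, u_best, h, H, D, rng=np.random.default_rng()):
    """Walk-and-grab attack (Eq. (9)):
    u_4^(h+1) = J * u_best^(h) - (P1 * u^(h) * rand) - P2 * Levy(D) + rand * P1,
    with the quality function J = h**((2*rand - 1) / (1 - H)**2) (Eq. (10)),
    P1 = 2*rand - 1 (Eq. (11)), and P2 = 2 * (1 - h/H) (Eq. (12))."""
    J = h ** ((2 * rng.random() - 1) / (1 - H) ** 2)
    P1 = 2 * rng.random() - 1
    P2 = 2 * (1 - h / H)
    return J * u_best - P1 * u * rng.random() - P2 * _levy(D, rng=rng) + rng.random() * P1
```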
The pseudocode in Algorithm 1 provides a summary of the Aquila Optimization procedure.

Algorithm 1 Aquila Optimizer
Set the initial values of the parameters (nPop, nVar, α, β, max_iter, etc.), where nPop refers to the population size and max_iter to the maximum number of iterations.
Generate the starting positions at random.
While (iteration < max_iter) do
    Evaluate the fitness of the current positions.
    Identify the individual with the best fitness value as u_best^(h).
    For (i = 1 : nPop)
        Update the variables u, v, P…

Grey Wolf Optimizer
The GWO is a population-based metaheuristic algorithm [30] that replicates the social hierarchy of grey wolves, apex predators at the top of the food chain [57]:
• The alpha wolf is regarded as the dominant wolf in the pack, and his/her orders are followed by the pack members.
• Beta wolves are subordinate wolves that support the alpha wolf in decision making, and they are considered the best prospects to become the alpha.
• Delta wolves have to surrender to the alpha and beta, but they rule the omega.
• Omega wolves are regarded as the scapegoats in the pack; they are the least important individuals and are only allowed to feed last.

Encircling the Prey
When the prey's site is identified, the grey wolves perform the encircling of the prey. In the process of encircling, each grey wolf first assesses the distance between itself and the prey according to Equation (13) and then updates its position through Equation (14), where h denotes the current iteration, A and C are specified as coefficient vectors, u_p represents the position vector of the best solution (the prey) detected so far, and u indicates the position vector of a grey wolf. A and C are coefficients determined by Equations (15) and (16). The components of the vector a decrease linearly from 2 to 0 over the course of the iterations and can be stated as Equation (17), where h signifies the current iteration and H denotes the maximum number of iterations.
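A minimal sketch of the encircling equations, assuming the common GWO conventions (A and C built from uniform random numbers r1 and r2, and a decreasing linearly from 2 to 0):

```python
import numpy as np

def encircle(u, u_p, h, H, rng=np.random.default_rng()):
    """Encircling step of the GWO:
    a = 2 * (1 - h/H)            (Eq. (17), linear decrease from 2 to 0)
    A = 2*a*r1 - a, C = 2*r2     (Eqs. (15)-(16))
    D = |C * u_p - u|            (Eq. (13))
    u^(h+1) = u_p - A * D        (Eq. (14))."""
    a = 2 * (1 - h / H)
    A = 2 * a * rng.random(u.shape) - a
    C = 2 * rng.random(u.shape)
    return u_p - A * np.abs(C * u_p - u)
```

At the final iteration a = 0, so A vanishes and the wolf lands exactly on the prey position `u_p`.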

Hunting the Prey
In the GWO, while the global optimum of an optimization problem is unknown, the first three grey wolves, the alpha, beta, and delta, are always assumed to be the solutions closest to the optimal value. In the hunting strategy, the position of each search agent (wolf) is altered based on the three best positions of the alpha, beta, and delta. The following equations are used to replicate the hunting process and to locate a better optimum in the search space. The remaining wolves are required to update their positions following the leading wolves, which may be computed by Equations (18)-(20),
where u_α, u_β, and u_δ are the three best positions of the alpha, beta, and delta wolves.
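Equations (18)-(20) can be sketched as the average of three encircling moves, one toward each leader; `a` is the linearly decreasing coefficient of Equation (17), and the helper names are our own.

```python
import numpy as np

def hunt(u, u_alpha, u_beta, u_delta, a, rng=np.random.default_rng()):
    """Hunting step (Eqs. (18)-(20)): the wolf's new position is the mean of
    three candidate positions estimated from the alpha, beta, and delta."""
    def toward(leader):
        A = 2 * a * rng.random(u.shape) - a
        C = 2 * rng.random(u.shape)
        return leader - A * np.abs(C * leader - u)
    return (toward(u_alpha) + toward(u_beta) + toward(u_delta)) / 3
```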

Attacking the Prey (Exploitation Phase)
Grey wolves separate from each other to look for prey and converge to attack it. They only attack the prey when it is no longer moving. This phase is responsible for exploitation and is handled by a linear decrement in a; the linear reduction in this parameter enables the grey wolves to attack the prey when it stops moving.

Searching the Prey (Exploration Phase)
After the prey stops moving, the wolves kill it and thereby complete the hunting process. Grey wolves primarily search according to the positions of α, β, and δ [63].
The process of the GWO is exhibited in detail in the pseudo-code of Algorithm 2. The best solution discovered so far in the search space is represented by the alpha wolf. It increases the speed and efficiency of convergence by guiding the other wolves, or candidate solutions, toward areas of potential interest. By doing this, early convergence to possibly sub-optimal local optima is avoided, and an equilibrium between exploration and exploitation is promoted. Compared with certain other metaheuristics, the algorithm is relatively simple to implement and comprehend due to the simplicity of the alpha-position concept. Therefore, for better performance, the alpha position can be combined with other optimization strategies to create flexible hybrid algorithms [64].

Opposition-Based Learning and Quasi-Opposition-Based Learning
Tizhoosh first proposed opposition-based learning (OBL) in 2005 [58]. By contrasting the current solution with its opposition-based solution, OBL's primary goal is to select the better solution for the following iteration. Numerous metaheuristic algorithms have effectively employed the OBL approach to increase their ability to overcome stagnation in local optima [65,66]. Its mathematical formulation is given below. A better OBL variant is quasi-opposition-based learning (QOBL) [48], which uses quasi-opposite points rather than opposite points; quasi-opposite points are more likely to be closer to an unknown optimal solution than opposite points. The QOBL mathematical formulation is as follows. Below is the pseudo-code for implementing QOBL in population initialization, denoted as Algorithm 3.

Algorithm 3 Quasi-Oppositional-Based Learning
Set the initial values of the parameters (nPop, nVar, initial population u, lb, ub)
For i = 1 : nPop
    For j = 1 : nVar
        u_o(i,j) = lb_j + ub_j − u(i,j)   % inverting the current population
        If u(i,j) < D(i,j)                % creating the quasi-opposite of the population
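Algorithm 3 can be sketched as follows, assuming the usual QOBL definitions: the opposite point is u_o = lb + ub − u, and the quasi-opposite point is drawn uniformly between the interval centre m = (lb + ub)/2 and u_o. The greedy selection against the original population is our addition for illustration.

```python
import numpy as np

def quasi_opposite(u, lb, ub, rng=np.random.default_rng()):
    """Quasi-opposite of each component: uniform between the interval centre
    m = (lb + ub) / 2 and the opposite point u_o = lb + ub - u."""
    u_o = lb + ub - u
    m = (lb + ub) / 2
    lo, hi = np.minimum(m, u_o), np.maximum(m, u_o)
    return lo + rng.random(np.shape(u)) * (hi - lo)

def qobl_select(pop, lb, ub, fitness, rng=np.random.default_rng()):
    """Keep the fitter of each solution and its quasi-opposite (minimization)."""
    q = quasi_opposite(pop, lb, ub, rng)
    f_pop = np.apply_along_axis(fitness, 1, pop)
    f_q = np.apply_along_axis(fitness, 1, q)
    return np.where((f_q < f_pop)[:, None], q, pop)
```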

Proposed Framework
In this section, the general framework of the developed algorithm is described in Algorithm 4.
Using the GWO method to populate the AO algorithm's initial population boosts its exploratory capabilities. To carry this out, a population of solutions is first generated using the GWO algorithm.
In Algorithm 4, once the population of the AO algorithm has been initialized, the AO algorithm can be utilized to optimize the problem. The alpha position of the GWO population can be utilized to steer the search process of the AO algorithm. This is achieved by updating the positions of the solutions in the AO population based on the alpha position.
To create an improved balance between diversification and intensification and to make sure that the optimal result is found, the QOBL is applied in each phase of the AO. This process is repeated until the termination criteria are satisfied. We named this hybrid algorithm the Grey Wolf Aquila Synergistic Algorithm (GAOA). The performance of the AO algorithm can be enhanced by initializing its population with the GWO algorithm and directing the search process using the alpha position of the GWO population. This increases the likelihood of obtaining the global optimum, expands the search space under consideration, and decreases the likelihood of early convergence. Additionally, the balance between diversification and intensification can be enhanced by including the quasi-oppositional-based learning (QOBL) technique in each stage of the AO algorithm, which may help the AO algorithm to perform even better. Its flowchart is given in Figure 1 for a clear visualization of the process.
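To make the overall flow concrete, here is a heavily simplified, self-contained sketch of the GAOA loop: the alpha (best-so-far) position guides a GWO-style update, and a quasi-oppositional candidate is tried for every new position. It is a stand-in for Algorithm 4, not the full four-phase AO update; all names and the greedy acceptance rule are our own simplifications.

```python
import numpy as np

def gaoa_sketch(fitness, lb, ub, dim, n_pop=30, max_iter=100, seed=0):
    """Simplified GAOA loop: alpha-guided update + QOBL at every step."""
    rng = np.random.default_rng(seed)
    pop = lb + rng.random((n_pop, dim)) * (ub - lb)
    fit = np.array([fitness(x) for x in pop])
    for h in range(max_iter):
        alpha = pop[np.argmin(fit)].copy()      # GWO alpha = best so far
        a = 2 * (1 - h / max_iter)              # linearly decreasing coefficient
        for i in range(n_pop):
            A = 2 * a * rng.random(dim) - a
            cand = alpha - A * np.abs(2 * rng.random(dim) * alpha - pop[i])
            # QOBL: also try the quasi-opposite of the candidate
            m, opp = (lb + ub) / 2, lb + ub - cand
            quasi = np.minimum(m, opp) + rng.random(dim) * np.abs(opp - m)
            for c in (np.clip(cand, lb, ub), np.clip(quasi, lb, ub)):
                fc = fitness(c)
                if fc < fit[i]:                 # greedy acceptance
                    pop[i], fit[i] = c, fc
    return pop[np.argmin(fit)], float(fit.min())
```

On a simple sphere function this sketch already converges close to the origin within a few dozen iterations, illustrating how the alpha guidance and the quasi-oppositional candidates cooperate.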
Overall, the suggested method, the Grey Wolf Aquila Synergistic Algorithm (GAOA), is a promising strategy to enhance the AO algorithm's performance in a variety of optimization tasks. It has been demonstrated to be successful at enhancing the performance of the AO method on a number of benchmark optimization tasks whilst being very easy to implement.
The general computational complexity of the GAOA is also given in this section. Three rules are usually used to determine the computational complexity of the GAOA: initializing the solutions, calculating the fitness functions, and updating the solutions. Let N be the number of solutions and let O(N) be the computational complexity of the solution initialization procedure. The updating processes for the solutions have a computational complexity of O(N × G × D), where G is the total number of iterations and D is the problem's dimension size; these processes involve searching for the best positions and updating each solution's position. As a result, the overall computational complexity of the suggested GAOA (Grey Wolf Aquila Synergistic Algorithm) is O(N × (G × D + 1)).

Experimental Settings
The performance of the suggested approach is examined in this work by utilizing benchmark functions from the IEEE Congress on Evolutionary Computation 2017 (CEC2017) and 23 classical benchmark functions. The test suite for the IEEE CEC2017 has 30 functions, although F2 is excluded due to instability. There are two unimodal functions (F1 and F3), seven basic multimodal functions (F4-F10), ten hybrid functions (F11-F20), and ten composition functions (F21-F30) among the twenty-nine benchmark functions.
The population size (N) was fixed at 100 in each experiment. The [−100, 100] range was chosen for the search. On each function, each algorithm was executed 51 times. In the tables that follow, the best results across all compared algorithms are highlighted in bold. All algorithms were implemented in MATLAB R2021b on a computer with an Intel(R) Core(TM) i7-9750H processor running at 2.60 GHz and 16 GB of RAM.
The following four factors are used to assess the GAOA's (Grey Wolf Aquila Synergistic Algorithm) performance:
• The average and standard deviation of the optimization errors between the obtained and known real optimal values are used. All objective functions are minimization problems; hence, the best values, i.e., the minimum mean values, are denoted in bold.
• Non-parametric statistical tests, such as the Wilcoxon rank-sum test, are used to compare the p-value against the significance level (0.05) between the suggested algorithm and each compared method [67,68]. There is a substantial difference between two algorithms when the p-value is less than 0.05. W/T/L denotes the number of wins, ties, and losses of the given algorithm in comparison to its rival.
• By exhibiting the pairwise differences in the ranks of each method at each dimension, the Bonferroni-Dunn diagram demonstrates the discrepancies between the rankings achieved by each algorithm at dimensions of 30, 50, and 100. The pairwise differences in the rankings are determined by subtracting the rank of one algorithm from the rank of another. Each bar in the Bonferroni-Dunn diagram represents the average pairwise difference in ranks for a particular algorithm at a particular dimension. Usually, the bars are color-coded to represent the various algorithms.
• Convergence graphs provide a simple visual representation of the algorithm's accuracy and speed of convergence. They indicate whether the enhanced algorithm escapes from local solutions.
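As a concrete illustration of the W/T/L bookkeeping, the rank-sum comparison can be sketched with a normal approximation (adequate for 51 runs per algorithm); the helper names and the tie handling via average ranks are our own implementation choices, not taken from the paper.

```python
import math
import numpy as np

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    data = np.asarray(np.concatenate([a, b]), dtype=float)
    ranks = np.empty(data.size)
    ranks[data.argsort()] = np.arange(1, data.size + 1)
    for v in np.unique(data):                  # average ranks over ties
        ranks[data == v] = ranks[data == v].mean()
    n1, n2 = len(a), len(b)
    W = ranks[:n1].sum()                       # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (W - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))    # two-sided tail probability

def wtl(errors_a, errors_b, alpha=0.05):
    """'T' if no significant difference at level alpha, else 'W'/'L' by mean error."""
    if ranksum_p(errors_a, errors_b) >= alpha:
        return "T"
    return "W" if np.mean(errors_a) < np.mean(errors_b) else "L"
```

With 51 optimization errors per algorithm and function, tallying the returned letters yields the W/T/L counts reported in the tables.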
Table 2 displays the parameter settings [71] for the various methods. Tables 3-6 show the findings of the comparative experiments.
To test the exploration, exploitation, and stagnation-avoidance abilities of the GAOA, a set of 23 benchmark test functions is employed. As seen in Table 6, the GAOA outperforms the SMA, SSA, and other metaheuristic algorithms by a significant margin. With the exception of F6, the GAOA routinely beats the other algorithms. Notably, for all unimodal functions other than F5, the GAOA has the smallest mean values and standard deviations and achieves the theoretical optimum for F1-F4. These results show good precision and stability, emphasizing the excellent applicability of the suggested GAOA algorithm. The results for functions F8-F23 shown in Table 6 indicate that the GAOA also performs exceptionally well in exploration. The theoretical optimum is notably achieved by the GAOA in F8, F10, F14-F17, and F19-F23, highlighting its outstanding exploration capacity. These results demonstrate the strength of the GAOA in navigating the search space and locating the best answers. The statistical findings for each of the nine functions (F1, F3, F5, F6, F7, F9, F11, F12, and F13) are shown in Table 8 for each parameter setting. It is evident from these findings that, of all the functions evaluated, the parameters α = 0.1 and δ = 0.1 generally perform best in different circumstances, followed by α = 0.1 and δ = 0.5; α = 0.1 and δ = 0.9; α = 0.5 and δ = 0.1; and α = 0.9 and δ = 0.9, which were assigned ranks 2, 3, 5, and 4, respectively. However, the AO performs similarly in each of these settings on F1, F3, F9, and F11, where all parameter combinations reach a mean and standard deviation of 0.00 × 10^0 (Table 8).
The convergence graphs of the average optimization results produced by eight algorithms on the IEEE CEC2017 functions with 30, 50, and 100 dimensions are shown in Figures 2-4. The log value of the average optimizations is represented by the vertical axis, and the log value of iterations is represented by the horizontal axis. It is evident from Figures 2-4 that the GAOA curves are the lowest and that the convergence speed is quick. Compared with the original AO, the GAOA can identify a better solution, escape local optima, prevent premature convergence, enhance the quality of the solution, and has a high optimization efficiency. This clearly proves the efficacy of the revised methodology presented in this research and the improvement in population diversity. Unimodal functions do not entirely reflect the benefits of the GAOA; the GAOA can search for smaller values and converge quickly on increasingly complicated multimodal, hybrid, and composition functions, demonstrating excellent competitiveness.

Complexity of the Algorithm
The proposed algorithm's usability and functionality are confirmed by analyzing its complexity. Due to their high computing cost, algorithms with high computational complexity are rarely adopted. Thus, for an algorithm to be effective, it must have strong optimization capabilities, quick convergence, and a minimal computational cost. In this section, we present the CPU running time used by all algorithms evaluated on the IEEE CEC2017 functions with 30, 50, and 100 dimensions. We also address how the enhanced technique presented in this research affects the algorithmic complexity of the AO.
For each algorithm, the maximum number of function evaluations is fixed to be the same. Table 9 displays the findings for the CPU running time. The table shows that the WOA takes the least computation time. The GAOA and MAO require very little computation time, and they have similar processing times. On the other hand, the RSA is the most complex and time-consuming algorithm. In Tables 10-12, the Wilcoxon rank-sum test is performed on the results of Tables 3-5, respectively, to compare the p-values, and the results show that the GAOA performs very well. The performance of the metaheuristic algorithms is also compared using the Bonferroni-Dunn bar chart. It is a trustworthy and dependable test that may be used to determine which algorithm performs best on a specific set of benchmark functions. It is evident from Figure 5 that the GAOA outperforms the other metaheuristic methods. The eight algorithms are represented by the horizontal axis, while the rank is represented by the vertical axis. Future high-dimensional and engineering problems can benefit from the GAOA's low computational complexity and low computational cost.

GAOA for Engineering Design Problems
The performance of the GAOA on the following five design problems is presented in this section to assess its effectiveness: pressure vessels, tension springs, three-bar trusses, speed reducers, and cantilever beams. A population size of 30 individuals and a maximum of 500 iterations were used to solve these problems. The results of the GAOA were then contrasted with those of other state-of-the-art algorithms reported in the literature. The parameter settings used in the evaluation are consistent with those used in the prior computational experiments.

Pressure Vessel Design Problem
The pressure vessel design challenge [29,30] seeks to reduce the overall cost of a cylindrical pressure vessel while meeting the desired form and pressure criteria depicted in Figure 6. As shown in Figure 6, the solution to this problem entails optimizing four parameters: the thickness of the shell (t), the thickness of the head (h), the cylindrical section's inner radius (r), and the length of the cylindrical section without the head (l). The problem's constraints and the corresponding equations are given below.
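For reference, the standard pressure-vessel formulation from the literature can be written down directly; the variable order x = [t_s, t_h, r, l] and the function names are our conventions, and the coefficients are the ones commonly used for this benchmark.

```python
import math

def pressure_vessel_cost(x):
    """Total cost (material, forming, welding) for x = [t_s, t_h, r, l]."""
    ts, th, r, l = x
    return (0.6224 * ts * r * l + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)

def pressure_vessel_constraints(x):
    """g_i(x) <= 0: minimum wall thicknesses, a minimum enclosed volume of
    1,296,000 in^3, and a maximum length of 240 in."""
    ts, th, r, l = x
    return [
        -ts + 0.0193 * r,
        -th + 0.00954 * r,
        -math.pi * r ** 2 * l - (4.0 / 3.0) * math.pi * r ** 3 + 1296000.0,
        l - 240.0,
    ]
```

A design close to the best-known solution, e.g. x ≈ [0.8125, 0.4375, 42.0984, 176.6366], costs about 6059.7.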

Table 13's results show that, when the GAOA is compared with the COA, AO, GWO, ROA, RSA, WOA, and SCA, the GAOA obtains better optimal values.

Tension Spring Design Problem
The three variables that needed to be tuned in order to optimize the design were the number of active coils (N), the mean coil diameter (D), and the wire diameter (d) [29,30]. Figure 7 shows the structural layout of the tension spring. The mathematical formulation of this problem is presented below. Table 14 displays the outcomes of applying the GAOA to the tension spring design problem. The outcomes are then contrasted with those attained by a variety of other techniques, such as the COA, AO, GSA, DE, RSA, SMA, and EROA. It is evident that the GAOA produced outcomes that were superior to those of the other algorithms.
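The classical formulation of this problem from the literature (often attributed to Arora) can be sketched in Python as follows; the constraint expressions below are the ones commonly reported for this benchmark, and the function names and penalty scheme are illustrative, not from the paper:

```python
def spring_weight(x):
    """Weight of the spring; x = (d, D, N): wire diameter, coil diameter, active coils."""
    d, D, N = x
    return (N + 2.0) * D * d ** 2

def spring_constraints(x):
    """Inequality constraints g_i(x) <= 0: deflection, shear stress,
    surge frequency, and outer-diameter limit."""
    d, D, N = x
    return [
        1.0 - (D ** 3 * N) / (71785.0 * d ** 4),
        (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
            + 1.0 / (5108.0 * d ** 2) - 1.0,
        1.0 - 140.45 * d / (D ** 2 * N),
        (D + d) / 1.5 - 1.0,
    ]

def penalized_weight(x, penalty=1e6):
    """Static-penalty objective for an unconstrained metaheuristic."""
    violation = sum(max(0.0, g) for g in spring_constraints(x))
    return spring_weight(x) + penalty * violation
```

For example, the design d = 0.05, D = 0.4, N = 10 has a low raw weight but violates the shear-stress constraint, so its penalized objective is strictly larger than its raw weight.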

Table 14 (excerpt): reported design variables and optimal weights for the tension spring design problem.
Algorithm    d        D        N         Optimal weight
[29]         0.0502   0.3562   10.5425   0.0112
GSA [36]     0.0502   0.3236   13.5254   0.0127
DE [25]      0.0516   0.3547   11.4108   0.0126
RSA [32]     0.0525   0.4100   7.853     0.0124
SMA [74]     0.0584   0.5418   5.2613    0.0134
EROA [75]    0.0537   0.4695   5.811     0.0106
Note: Bold is used to indicate the best results.

Three-Bar Truss Design Problem
Designing a three-bar truss is a difficult problem in structural engineering [29,30]. The goal of this problem is to find a truss design that minimizes the weight while meeting the design constraints. Figure 8 shows the structural layout of the three-bar truss. The mathematical formulation of this problem is presented below.
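The standard literature formulation of this benchmark minimizes a weight-proportional objective over the two bar cross-sectional areas under three stress constraints. The following sketch uses the parameter values commonly quoted for this problem (load P = 2, allowable stress sigma = 2, bar length 100); the names are illustrative, not from the paper:

```python
import math

P, SIGMA, BAR_LEN = 2.0, 2.0, 100.0  # common settings for this benchmark

def truss_weight(x):
    """Weight-proportional objective; x = (a1, a2), the bar cross-sectional areas."""
    a1, a2 = x
    return (2.0 * math.sqrt(2.0) * a1 + a2) * BAR_LEN

def truss_constraints(x):
    """Stress constraints g_i(x) <= 0 on the three bars."""
    a1, a2 = x
    denom = math.sqrt(2.0) * a1 ** 2 + 2.0 * a1 * a2
    return [
        (math.sqrt(2.0) * a1 + a2) / denom * P - SIGMA,
        a2 / denom * P - SIGMA,
        1.0 / (a1 + math.sqrt(2.0) * a2) * P - SIGMA,
    ]
```

A point near the best-known design, a1 = 0.79 and a2 = 0.41, satisfies all three stress constraints with a weight of roughly 264.4.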
Table 16 displays the comparison findings and demonstrates the benefit of using the GAOA to achieve the smallest overall weight for this problem.

Cantilever Beam Design
The determination of the least overall weight of cantilever beams is a specific problem in concrete engineering. The thickness of the walls of the hollow square cross-section, as well as the dimensions of the square, both affect the weight.
The objective of the optimization problem is to find the values of these parameters that minimize the overall weight of the beam while still ensuring that the beam is strong enough to withstand the applied load [29]. The design configuration associated with this problem is depicted visually in Figure 10, and it can be represented mathematically using the formulation given below. The results in Table 17 show that the GAOA achieves the minimized overall weight faster and with better performance than all the other algorithms. In conclusion, this section emphasizes how well the proposed GAOA performs in comparison to other algorithms on real-world case studies. With extremely competitive results, the GAOA displays its capacity to outperform the COA and ROA algorithms as well as other well-known algorithms. These successes are a result of the GAOA's strong exploration and exploitation capabilities. Its outstanding success in resolving industrial engineering design problems further highlights its potential for widespread use in practical optimization.

→D is the difference vector that determines the movement of the wolf either toward the neighborhood of the prey or away from it. Both →A and →C are modified over the iterations as follows:

→A = 2→a · →r1 − →a,  →C = 2 · →r2,

where →a decreases linearly from 2 to 0 over the iterations and →r1 and →r2 are random vectors drawn from [0, 1]. →Dα, →Dβ, and →Dδ are the distances of the search agents from the three best solutions (the alpha, beta, and delta wolves).
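The standard GWO position update driven by the alpha, beta, and delta wolves can be sketched in NumPy as follows. This is an illustrative sketch of the well-known equations, not the authors' implementation; the function and variable names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def gwo_update(x, alpha, beta, delta, t, max_iter):
    """One GWO position update for a single search agent x,
    guided by the three best solutions found so far."""
    a = 2.0 * (1.0 - t / max_iter)        # a decreases linearly from 2 to 0
    candidates = []
    for leader in (alpha, beta, delta):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        A = 2.0 * a * r1 - a              # controls exploration vs. exploitation
        C = 2.0 * r2                      # random emphasis on the leader
        D = np.abs(C * leader - x)        # distance to this leader
        candidates.append(leader - A * D) # candidate position w.r.t. this leader
    return np.mean(candidates, axis=0)    # average of X1, X2, X3
```

Note that at the final iteration a = 0, so A vanishes and the update collapses onto the average of the three leaders, which is the pure-exploitation limit of the algorithm.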

Figure 5 .
Figure 5. Bonferroni-Dunn bar chart for (a) D = 30, (b) D = 50, and (c) D = 100. Each bar represents the rank of the corresponding algorithm, and the horizontal cut lines show the significance levels (the dashed line shows the significance level at 0.1, and the solid line shows the significance level at 0.05).

Table 1 .
Classification of algorithms.

Table 2 .
Parameter settings for GAOA and other algorithms.

Table 3 .
GAOA and seven competing algorithms' experimental and statistical data on the benchmark functions with 30 dimensions from IEEE CEC2017.

Table 4 .
GAOA and seven competing algorithms' experimental and statistical data on the benchmark functions with 50 dimensions from IEEE CEC2017.

Table 5 .
GAOA and seven competing algorithms' experimental and statistical data on the benchmark functions with 100 dimensions from IEEE CEC2017.

Table 6 .
GAOA and six competing algorithms' experimental and statistical data on the benchmark functions with 30 dimensions from classical benchmark functions.

Table 7's Friedman test findings further demonstrate the GAOA's better performance. The table shows that the GAOA performs best on the IEEE CEC2017 functions with 30, 50, and 100 dimensions.

Table 7 .
Friedman ranks of GAOA and seven competitive algorithms from IEEE CEC2017.

Table 8 .
The influence of the parameters (α and δ) tested on various classical test functions using the GAOA algorithm.

Table 9 .
CPU running times of all algorithms tested on CEC2017 functions with 30, 50, and 100 dimensions.

Table 10 .
Results of Wilcoxon rank-sum test obtained for Table 3.

Table 11 .
Results of Wilcoxon rank-sum test obtained for Table 4.

Table 12 .
Results of Wilcoxon rank-sum test obtained for Table 5.

Table 13 .
Performance comparison of GAOA and other algorithms for pressure vessel design problem.
Note: Bold is used to indicate the best results.

Table 14 .
Performance comparison of GAOA and other algorithms for tension spring design problem.
Note: Bold is used to indicate the best results.

Table 16 .
Performance comparison of GAOA and other algorithms for speed reducer design problem.
Note: Bold is used to indicate the best results.

Table 17 .
Performance comparison of GAOA and other algorithms for cantilever beam design problem.
Note: Bold is used to indicate the best results.