Dynamic Random Walk and Dynamic Opposition Learning for Improving Aquila Optimizer: Solving Constrained Engineering Design Problems

One of the most important tasks in handling real-world global optimization problems is to achieve a balance between exploration and exploitation in any nature-inspired optimization method, so that the search agents of an algorithm keep investigating the unexplored regions of the search space. The Aquila Optimizer (AO) is a recent addition to the field of metaheuristics that solves an optimization problem by mimicking the hunting behavior of the Aquila. However, in some cases, AO skips over true solutions and becomes trapped at sub-optimal ones. This leads to premature convergence (stagnation), which is harmful when determining the global optimum. Therefore, the present study aims to establish a comparatively better synergy between exploration and exploitation in AO and to help it escape local stagnation. To this end, firstly, the exploration ability of AO is improved by integrating a Dynamic Random Walk (DRW), and, secondly, the balance between exploration and exploitation is maintained through Dynamic Opposition Learning (DOL). Owing to its dynamic search space and low complexity, the combined DOL and DRW technique is computationally efficient and has a high exploration potential for converging to the best optimum, which further improves the algorithm and prevents premature convergence. The proposed algorithm is named DAO. The well-known CEC2017 and CEC2019 benchmark function sets as well as three engineering design problems are used for the performance evaluation. The superior ability of the proposed DAO is demonstrated by examining the numerical results produced and comparing them with existing metaheuristic algorithms.


Introduction
Global optimization is a term used to characterize several scientific and engineering problems that can be resolved using different optimization techniques. These days, the preferred methods for global optimization are metaheuristic algorithms (MAs), since their stochastic and dynamic nature protects them against entrapment in local optima [1]. Genetic Evolution [2], Differential Evolution (DE) [3], Particle Swarm Optimization (PSO) [4], the Reptile Search Algorithm (RSA) [5], the Whale Optimization Algorithm (WOA) [6], Brain Storm Optimization (BSO) [7], and Teaching-Learning-Based Optimization (TLBO) [8], among others, are MAs that have emerged over the past 20 years. One of the better algorithms is the AO method, which Abualigah proposed in 2021 [9], because it is simple to implement, performs consistently, and has few configurable parameters. Its strong optimization capabilities have helped with a variety of global optimization problems, including feature selection [10], vehicle route planning [11], and machine scheduling [12].
Biomimetics 2024, 9, 215

The No Free Lunch (NFL) theorem [13] was a significant advancement in the field of nature-inspired algorithms. According to the NFL theorem, it is impossible to develop a single optimization algorithm that solves every optimization problem. To put it simply, even if optimization method "A" is ideally suited for a particular set of problems, there is always a subset of problems on which it performs poorly. As a result, the NFL theorem keeps the area of nature-inspired algorithms alive and enables academics either to propose new algorithms or to enhance existing ones. An effective approach for improving existing algorithms is hybridization: combining the best aspects of multiple algorithms into a single hybridized algorithm. The present study aims to combine the benefits of better exploration with the efficiency of maintaining a balance between exploration and exploitation by augmenting AO with the DOL and DRW techniques.
The new nature-inspired optimization algorithm (NIOA) called the Aquila Optimizer uses the Aquila bird's hunting strategy to discover the best solution to an optimization problem, and it is capable of handling a broad range of optimization problems [14]. Its first drawback is premature convergence, which happens when the algorithm stagnates and is unable to explore the whole search space during the process; this yields a poor solution and prevents the algorithm from covering the entire search space. Its second drawback is low computational efficiency: the Aquila Optimizer takes longer to converge to the ideal solution than other existing metaheuristic algorithms. Therefore, in the current study, the Aquila Optimizer is enhanced so that it can explore the more promising areas that are left in the population's memory. By combining AO with DRW and DOL, suitable harmony between the exploration and exploitation processes is formed. The DOL [15] method, with its asymmetric and dynamic search space, exhibits a great deal of promise. Meanwhile, the dynamic opposite number, a random candidate, can be computed quickly and easily, which may enhance the algorithm's capacity for exploitation and increase the rate of convergence. The DRW [16] approach focuses on iteratively improving a solution by exploring its close neighborhood, because balancing the search for new promising areas with refining solutions within existing areas is the key to metaheuristics. The paper's contributions are as follows:

1. A new DRW technique is put forth to increase the AO algorithm's computational effectiveness and its capacity for avoiding local optima.

2. To enhance the algorithm's performance and the balance between exploration and exploitation, the DOL approach is incorporated into AO for the very first time.

3. The performance of DAO is examined on twenty-nine benchmark functions of CEC 2017, ten benchmark functions of CEC 2019, and then on three engineering design problems, and the results are compared with those of various algorithms.
The remainder of the paper is structured as follows: the fundamental ideas of AO, DOL, and DRW are presented in Section 2. The previous work on AO is reviewed in Section 3. In Section 4, the proposed DAO algorithm is explained. Section 5 presents the experiments and their findings. Section 6 shows the engineering applications. The study's conclusion is finally presented in Section 7.

Aquila Optimizer
The Aquila bird's hunting strategy served as the inspiration for the Aquila Optimizer (AO) metaheuristic optimization technique [9]. AO mimics four main prey-hunting strategies, explained as follows:

Expanded Exploration
The expanded exploration x_1 of the Aquila Optimizer mimics the high-soar, steep-descent hunting strategy observed in Aquila birds. With this strategy, the bird soars to great heights, giving it the opportunity to inspect the whole search area, identify potential prey, and select the ideal hunting place. Equation (1) in [9] provides a mathematical illustration of this strategy.
In Equation (1), the maximum number of iterations is represented as H, where h denotes the current iteration. The solution for the subsequent iteration, indicated as x_1^(h+1), is found by the first search method in the candidate solution population (x_1). x_best^(h) represents the best outcome achieved so far at the h-th iteration. The iteration count enters through the factor (1 − h/H), which modifies the search space's depth. Additionally, Equation (2), where N represents the population size and D is the dimension size, determines the average value x_M^(h) of the locations of the existing solutions at the h-th iteration.
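The expanded-exploration move can be sketched in a few lines of NumPy. The function and variable names below are illustrative, and the update follows the standard statement of Equations (1) and (2) in the AO paper [9], not the authors' own code:

```python
import numpy as np

def expanded_exploration(pop, x_best, h, H, rng):
    x_mean = pop.mean(axis=0)              # x_M^(h): mean of current solutions, Eq. (2)
    shrink = 1.0 - h / H                   # iteration-dependent depth factor
    r = rng.random(pop.shape[1])           # rand in [0, 1)
    # Eq. (1): move around the best solution, pulled toward the population mean
    return x_best * shrink + (x_mean - x_best) * r

rng = np.random.default_rng(0)
pop = rng.uniform(-100.0, 100.0, size=(50, 10))   # N = 50 agents, D = 10
x_new = expanded_exploration(pop, pop[0], h=10, H=500, rng=rng)
```

Early in the run the shrink factor is close to 1, so the new point stays near x_best but still drifts toward the population mean; as h approaches H the factor decays to 0 and the move is dominated by the mean-versus-best difference.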

Narrowed Exploration
In this approach, the Aquila bird hunts by flying in a contour-like pattern and executing swift gliding strikes inside a small search region to track its prey. The primary aim of this methodology, x_2^(h+1), as expressed mathematically in Equation (3), is to identify a solution for the subsequent iteration.
Here, Levy(D) is the Levy flight distribution for dimension space D, and a random solution is drawn at the h-th iteration from the population of size N. The Levy flight distribution is calculated using a fixed constant value of s = 0.01 and two randomly selected parameters, u and v, which have values between 0 and 1. The mathematical expression for this computation is provided by Equation (4).
Equation (5) gives the value σ, which is obtained using the constant parameter a = 1.5.
Equations (6) and (7) depict the spiral form inside the search range, denoted by y and x, respectively. Equation (3) uses this spiral form.
Variable r_1 takes values between 1 and 20 over a predefined number of search iterations. The constants ω and U are fixed at 0.005 and 0.00565, respectively. D_1 ∈ Z ranges from 1 to the dimension D of the search space.
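This extract omits the closed forms of Equations (4)-(7), so the sketch below uses the forms from the original AO paper: u is drawn from a normal distribution scaled by σ (the text's "u and v between 0 and 1" likely refers to the underlying uniform draws), r = r_1 + U·D_1, and θ = −ω·D_1 + 3π/2. Treat the details as assumptions:

```python
import math
import numpy as np

def levy_flight(D, rng, s=0.01, beta=1.5):
    # sigma of Eq. (5); beta = 1.5 is the stability parameter (called "a" in the text)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, D)
    v = rng.normal(0.0, 1.0, D)
    return s * u / np.abs(v) ** (1 / beta)   # Eq. (4): heavy-tailed step, s = 0.01

def spiral(D, rng, U=0.00565, omega=0.005):
    r1 = rng.uniform(1.0, 20.0)              # r_1 in [1, 20]
    D1 = np.arange(1, D + 1)                 # D_1 = 1, ..., D
    r = r1 + U * D1
    theta = -omega * D1 + 3 * math.pi / 2
    return r * np.sin(theta), r * np.cos(theta)   # x and y of Eqs. (6)-(7)

rng = np.random.default_rng(0)
step = levy_flight(10, rng)
x_sp, y_sp = spiral(10, rng)
```

The heavy tail of the Levy step lets occasional long jumps interrupt many short local moves, which is what makes the narrowed-exploration phase effective at escaping small basins.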

Expanded Exploitation
During this phase, the Aquila bird meticulously examines the prey area before attacking with a low, slow descent. This strategy, sometimes referred to as expanded exploitation x_3, is represented mathematically in Equation (8).
x_3^(h+1), the result of Equation (8), represents the solution for the subsequent iteration.
In the h-th iteration, x_best^(h) denotes the current best solution obtained, and x_M^(h) denotes the average value of the current solutions as determined by Equation (2). Variable "rand" is a random number within the range (0, 1), while the tuning parameters θ and ρ are typically assigned values of 0.1 each. Symbols ub and lb represent the upper and lower bounds, respectively.
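Equation (8) is not reproduced in this extract; the sketch below follows the standard AO form of the expanded-exploitation move, with the tuning parameters named θ and ρ as in the text (both 0.1). It is an illustration, not the authors' implementation:

```python
import numpy as np

def expanded_exploitation(x_best, x_mean, lb, ub, rng, theta=0.1, rho=0.1):
    r = rng.random(x_best.shape)
    # descend slowly around the prey area: exploit near x_best, with a small
    # bounded random component ((ub - lb) * rand + lb) * rho
    return (x_best - x_mean) * theta - r + ((ub - lb) * r + lb) * rho

rng = np.random.default_rng(0)
x_best = rng.uniform(-100.0, 100.0, 10)
x_mean = rng.uniform(-100.0, 100.0, 10)
x3 = expanded_exploitation(x_best, x_mean, lb=-100.0, ub=100.0, rng=rng)
```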

Narrowed Exploitation
Aquila birds also hunt by taking advantage of their prey's unpredictable ground movement patterns to grab the prey directly. This hunting strategy serves as the basis for the narrowed exploitation technique x_4, which is produced by Equation (9); it yields the solution at the (h+1)-th iteration, denoted as x_4^(h+1). Equation (10) expresses the quality function J, which was put forward to provide a well-balanced search approach.
Equations (11) and (12) are used to determine the mobility pattern of the Aquila's prey tracking (P_1) and the trajectory of an attack during an escape, from the beginning to the terminal point (P_2). Both the maximum number of iterations H and the current iteration number h are used in these computations.
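The quality function J and the motion terms P_1 and P_2 of Equations (10)-(12) can be sketched as follows, using the forms from the original AO paper (J = h^((2·rand−1)/(1−H)²), P_1 = 2·rand − 1, P_2 = 2·(1 − h/H)); the names are assumptions where this extract omits them:

```python
import numpy as np

def quality_function(h, H, rng):
    # Eq. (10): J balances the search over the course of the run
    return h ** ((2.0 * rng.random() - 1.0) / (1.0 - H) ** 2)

def motion_terms(h, H, rng):
    P1 = 2.0 * rng.random() - 1.0   # Eq. (11): prey-tracking movement in [-1, 1]
    P2 = 2.0 * (1.0 - h / H)        # Eq. (12): escape trajectory, decays from 2 to 0
    return P1, P2

rng = np.random.default_rng(0)
J = quality_function(h=10, H=500, rng=rng)
P1, P2 = motion_terms(h=10, H=500, rng=rng)
```

Note that P_2 shrinks linearly with the iteration count, so the attack component of Equation (9) weakens as the run progresses and the search becomes increasingly local.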

Concept of Dynamic Oppositional Learning (DOL)
The objectives of optimization algorithms are to produce solutions, improve approximate solutions, and look for additional solutions inside the domain. When the current solutions cannot meet the needs of a complex problem, a variety of learning techniques are developed to improve the optimization algorithms' performance. Owing to its higher convergence capacity, the opposition-based learning (OBL) technique is the most frequently acknowledged among these learning schemes. The definition of OBL [15] is introduced as follows: for a real number x ∈ R in the interval [a, b], the opposite number is produced as x_OBL = a + b − x.
Regarding a situation with several dimensions, the definition is extended as follows: x = (x_1, x_2, . . ., x_D) is a point in D-dimensional coordinates with each x_i in the interval [a_i, b_i]. As the iterations proceed, the associated low and high bounds of the population are denoted by a_i and b_i, respectively. The multidimensional opposite point is then defined componentwise as x_i^OBL = a_i + b_i − x_i. Even though the OBL method enhances the algorithm's searching capabilities, it still has certain drawbacks, such as premature convergence. Various variants of OBL have been proposed to enhance its performance. For example, to expand the domain of the original notion, quasi-opposite-based learning (QOBL) employs a quasi-opposite number [17]. Meanwhile, a quasi-reflection number is introduced in the interval between the present location and the center position in order to implement a quasi-reflection-based learning (QRBL) method [18].
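In code, the opposite point of OBL is a one-liner, and the quasi-opposite and quasi-reflected variants only change where in the interval the new point is drawn. The helper names below are illustrative:

```python
import random

def opposite(x, a, b):
    return a + b - x                      # OBL: x_OBL = a + b - x

def quasi_opposite(x, a, b):
    # QOBL: a random point between the interval centre and the opposite point
    c = (a + b) / 2.0
    x_obl = a + b - x
    return random.uniform(min(c, x_obl), max(c, x_obl))

def quasi_reflected(x, a, b):
    # QRBL: a random point between the current position and the centre
    c = (a + b) / 2.0
    return random.uniform(min(c, x), max(c, x))

x_obl = opposite(2.0, 0.0, 10.0)          # opposite of 2 in [0, 10] is 8
```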
Phase of Dynamic Opposite Learning: In addition to the OBL variants mentioned above, a novel learning approach, the dynamic opposite learning (DOL) operator, is used in this work. Xu et al. originally proposed the DOL method in [15] to enhance the TLBO algorithm's performance. When dealing with complex problems, DOL is included to prevent the algorithm from converging prematurely [19]. Furthermore, with its asymmetric and dynamic search environment, the DOL learning technique is a new variant of the opposition-based learning (OBL) strategy that aids the population in learning from opposite points [20,21].

Dynamic Population Initialization
In the initialization step, x ∈ [a, b] is defined as the initial population, and x_OBL is produced in the opposing domain. A random opposite point x_RO = rand · x_OBL, rand ∈ [0, 1], is introduced to replace x_OBL in order to expand the searching space and convert the previously symmetric searching space into a dynamic asymmetric domain. The optimizer is then able to prevent prematurity by expanding the searching space. Furthermore, to enhance the capacity to overcome local optima, a weighting factor w_d is incorporated, giving the model x_DOL = x + w_d · r_2 · (x_RO − x), where r_2 ∈ [0, 1] is a random parameter. When faced with a multidimensional goal, it takes the form x_DOL_ij = x_ij + w_d · r_2 · (r_1 · (a_j + b_j − x_ij) − x_ij), where i = 1, 2, . . ., N indexes the population of size N, j = 1, 2, . . ., D indexes the dimensions of an individual, and r_1 and r_2 denote random numbers in [0, 1].

Dynamic Population Jumping Process
In DOL, a jumping rate (J_r) is used to update the population, and a positive weighting factor (w_d) is employed to balance the exploration and exploitation capabilities. The DOL operation is applied whenever the selection probability is less than J_r.
Here, a random value x_ij is produced as the starting population; N is the population size; i indexes the i-th solution; x_DOL_ij is the population created by the DOL technique; j indexes the j-th dimension; r_1 and r_2 are two random parameters in [0, 1]; the weighting factor w_d is set to 3; and the jumping rate J_r is set to 1, based on the sensitivity analysis in Table 1.
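Putting the two DOL phases together gives the following hedged sketch. The dynamic bounds, the clipping strategy, and the omission of the greedy fitness-based selection (in the full DOL procedure, the jumped population is merged with the original and the fittest N solutions are kept) are simplifications, not the authors' exact procedure:

```python
import numpy as np

def dol_jump(pop, w_d=3.0, J_r=1.0, rng=None):
    # With probability J_r, replace a solution x by the dynamic-opposite candidate
    # x_DOL = x + w_d * r2 * (r1 * (a + b - x) - x), using the population's
    # current per-dimension bounds a and b as the dynamic interval.
    rng = rng or np.random.default_rng()
    a = pop.min(axis=0)                      # dynamic lower bound
    b = pop.max(axis=0)                      # dynamic upper bound
    r1 = rng.random(pop.shape)
    r2 = rng.random(pop.shape)
    x_dol = pop + w_d * r2 * (r1 * (a + b - pop) - pop)
    jump = rng.random(pop.shape[0]) < J_r    # which solutions perform the jump
    out = pop.copy()
    out[jump] = np.clip(x_dol[jump], a, b)   # keep jumped solutions inside bounds
    return out

rng = np.random.default_rng(0)
pop = rng.uniform(-100.0, 100.0, size=(50, 10))
new_pop = dol_jump(pop, w_d=3.0, J_r=1.0, rng=rng)
```

With J_r = 1, as selected by the sensitivity analysis, every solution is jumped each time the operator is applied.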
Table 1. The sensitivity analysis of w_d and J_r.

Concept of Dynamic Random Walk (DRW)
Dynamic Random Walk (DRW) can be applied in the expanded exploration phase of the Aquila Optimizer to improve its exploration ability and help it escape local optima. In the DRW update, the random walk vector rwv is given by rwv = r(1, D) − 0.5, and r_3 and r_4 are two random parameters in [0, 1]. DRW is incorporated into AO to improve its exploration ability: in the early stages of the optimization process, it allows the search agents to explore a large search space.
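This extract names the ingredients of the DRW update (the vector rwv = r(1, D) − 0.5, the randoms r_3 and r_4, and a weight w) but not the update equation itself. The sketch below is therefore an illustrative guess at a weighted random-walk perturbation biased toward the best solution, not the authors' formula:

```python
import numpy as np

def drw_step(x, x_best, w=0.5, rng=None):
    rng = rng or np.random.default_rng()
    D = x.shape[0]
    rwv = rng.random(D) - 0.5            # random walk vector rwv = r(1, D) - 0.5
    r3, r4 = rng.random(), rng.random()  # two random parameters in [0, 1]
    # hypothetical update: perturb x along rwv, pulled toward the best solution
    return x + w * rwv * (r3 * x_best - r4 * x)

rng = np.random.default_rng(0)
x = rng.uniform(-100.0, 100.0, 10)
x_new = drw_step(x, x_best=np.zeros(10), rng=rng)
```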

Previous Work on AO and DOL
Since the NFL theorem rules out the existence of an algorithm that is best suited for all optimization tasks, there is always room to enhance an algorithm by increasing and balancing its exploitation and exploration operators. Plenty of work has been done in the literature to improve the search efficiency of AO. These improvements include adjusting the algorithm's parameters, introducing new movement strategies, and merging the algorithm with other optimization methods. The improved versions of AO can handle a large range of difficult real-world optimization problems better than the standard AO. The strategies used in AO are hybridization with NIOAs [22,23], oppositional-based learning [24], chaotic sequences [25], a Levy flight-based strategy [26], the Gauss map and crisscross operator [27], Niche Thought with Dispersed Chaotic Swarm [28], a random learning mechanism with Nelder-Mead Simplex Search [29], wavelet mutation [30], a Weighted Adaptive Searching Technique [31], Binary AO [32], and multi-objective AO [33].
DOL strategies are also used in many NIOAs to enhance their performance. They were first introduced with Teaching-Learning-Based Optimization [15], followed by the Grey Wolf Optimizer [34], the Whale Optimization Algorithm [35], the Antlion Optimizer [16], Bald Eagle Search Optimization [36], a hybrid version of the Aquila Optimizer, and the Artificial Rabbits Optimization Algorithm [37]; a comprehensive survey covering other algorithms can be found in the literature [14].

The Proposed DAO Algorithm
The proposed DAO (Dynamic Random Walk and Dynamic Opposition Learning for Improving Aquila Optimizer) algorithm adds two new features, DOL and DRW, to the original AO. DOL population generation aims to provide diverse solutions to escape from stagnation, while DOL generation jumping helps the exploitation ability of the algorithm and accelerates its speed. On the other hand, DRW helps the algorithm to improve its exploration ability. Together, these mechanisms provide a proper balance between exploration and exploitation and help the algorithm escape from local optima. Let us examine how this improvement works in more detail.

Benefits of using DOL population initialization
Compared to random initialization, the use of a dynamic opposition population initialization technique in the Aquila Optimizer (AO) has various benefits that result in a more diverse solution pool: (a) Limitations of random initialization: particularly for complex problems, random initialization might produce a population localized in a particular area of the search space, which restricts exploration and raises the possibility of becoming trapped in local optima. (b) Initialization based on dynamic opposition: for every randomly selected initial point, this method produces an "opposite" solution. With respect to a predetermined reference point (often the centre or the bounds), the opposite solution is located on the other side of the search area. This forces investigation of many regions and produces a wider initial dispersion of solutions.
The starting population is more diversified when opposition-based generation and random selection are combined. Because of this diversity, AO is able to investigate various regions of the search field right away. To prevent becoming overly biased in favor of the opposite alternatives, the strategy nevertheless maintains a healthy balance by retaining some randomly generated solutions. Overall, introducing DOL population initialization can also help AO with fine-tuning: by investigating neighboring regions in the opposite direction, the jumps may discover somewhat better solutions even after AO has converged to a suitable one.

Benefits of using DRW in place of Aquila's expanded exploration phase:
(a) Reduced complexity: by doing away with the necessity to design and carry out a specific extended exploration phase, DRW simplifies the algorithm as a whole. (b) Effective exploration: because of its intrinsic unpredictability, DRW can efficiently explore the search space and produce outcomes comparable to those of Aquila's exploration stage.
In Algorithm 1, DOL Population Initialization and DOL Generation Jumping are used, and DRW replaces the expanded exploration of AO. Algorithm 1 illustrates the phases of this algorithm. Throughout the rest of the paper, the parameter values are taken at their best settings for α and β of AO, the weight w_d and jumping rate J_r of DOL, and the weight w of DRW.
This section also presents DAO's overall computational complexity. The initialization of the solutions, the computation of the fitness functions, and the updating of the solutions are the three steps usually considered when determining the computational complexity. Let N represent the total number of solutions; the computational complexity of the solutions' initialization process is O(N). The computational complexity of the updating process is O(N × G × D), where G is the total number of iterations and D is the size of the problem's dimensions; this covers updating the positions of all solutions and searching for the best ones. Consequently, the overall computational complexity of the proposed DAO (Dynamic Opposition Learning and Dynamic Random Walk for Improving Aquila Optimizer) is O(N × (G × D + 1)).

Experimental Settings

Algorithm 1 DAO Algorithm
Initialize the values of the parameters (nPop, nVar, α, β, w, w_d, J_r, Max_iter, etc.). Generate a random starting population. Set the iteration counter t = 1.
The following five factors are used to assess the performance of DAO (Dynamic Opposition Learning and Dynamic Random Walk for Improving Aquila Optimizer):

1. The optimization errors between the obtained and known true optimal values, together with their average and standard deviation. Since all objective functions are minimized, the best values, i.e., the lowest mean values, are indicated in bold.

2. Non-parametric statistical tests, such as the Wilcoxon rank sum test [40], to compare the p-value against the significance level α = 0.05 between each compared technique and the proposed algorithm. There is a significant difference between two techniques when the p-value is less than 0.05. W/T/L indicates how many wins, ties, and losses the algorithm in question records against its opponent.

3. The Friedman test, another non-parametric statistical test [41,42]. The average optimization error values are used as test data. A method performs better when its Friedman rank value is lower; the minimal value is shown in bold.

4. The Bonferroni-Dunn diagram, which shows the differences in the rankings obtained for each algorithm at dimension 10 by displaying the pairwise variances in ranks for each approach. Pairwise disparities in rankings are calculated by subtracting the rank of one algorithm from the rank of another. In the Bonferroni-Dunn graphic, each bar denotes the average pairwise difference in ranks for a certain algorithm at a given dimension, with different algorithms typically represented by color-coded bars.

5. Convergence graphs, which offer a clear visual depiction of the algorithm's accuracy and convergence rate and show whether the improved algorithm escapes local solutions.
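Criteria 2 and 3 are straightforward to reproduce. Below is a didactic pure-NumPy sketch of both the Wilcoxon rank-sum test (normal approximation, no tie correction; in practice scipy.stats.ranksums would be used) and the average Friedman ranks, applied to synthetic error data rather than the paper's results:

```python
import math
import numpy as np

def rank_sum_test(a, b):
    # Wilcoxon rank-sum via the normal approximation (ties broken arbitrarily)
    n1, n2 = len(a), len(b)
    ranks = np.concatenate([a, b]).argsort().argsort() + 1
    W = ranks[:n1].sum()                      # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2.0))    # two-sided p-value
    return z, p

def friedman_ranks(errors):
    # average rank of each algorithm over an (n_functions, k_algorithms)
    # matrix of mean optimization errors; a lower rank is better
    ranks = errors.argsort(axis=1).argsort(axis=1) + 1
    return ranks.mean(axis=0)

rng = np.random.default_rng(1)
z, p = rank_sum_test(rng.normal(0.0, 1.0, 30), rng.normal(2.0, 1.0, 30))
significant = p < 0.05                        # alpha = 0.05, as in the text

avg_rank = friedman_ranks(np.array([[1.0, 2.0, 3.0],
                                    [0.5, 1.5, 2.5],
                                    [2.0, 1.0, 3.0]]))
```

With 30 runs per algorithm, as used in the experiments, the two samples above (means 0 and 2) are clearly distinguishable, so the test reports a significant difference.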

Competitive Algorithms Comparison on CEC2017 Benchmark Functions
Five competing algorithms are compared to gauge DAO's efficiency and search performance: MAO (Modified Aquila Optimizer), AO (Aquila Optimizer), RSA (Reptile Search Algorithm), WOA (Whale Optimization Algorithm), and BSO (Brain Storm Optimization). The comparison is made on 29 benchmark functions from IEEE CEC2017 taken from the literature [43]. The population size (N) was fixed at 50 in each experiment, the maximum number of iterations at 500, and the dimension at 10. The search range was chosen as [−100, 100]. Each algorithm was executed 30 times on each function.
Parameter Settings: The algorithm's performance depends on the parameter settings, particularly for DAO. Accordingly, this part presents a sensitivity analysis of the DOL parameters. Table 1 contains a detailed explanation of each parameter setting; the mean values are used to compare the results.
The weighting factor w_d and the jumping rate J_r are varied over 1-10 and 0.1-1 in the DAO algorithm, respectively. In Table 1, only J_r = 0.3 and 1 are reported because the values at the other settings are not favorable.
Test functions for this analysis have been chosen from the literature [43]: F3 and F6 are multimodal functions, F18 is a hybrid function, and F23 is a composition function. To assess performance, the means of the outcomes obtained by DAO are also shown in Table 1. On F3, F6, and F23, DAO performs better than under the other settings when w_d = 3 and J_r = 1; hence, w_d = 3 and J_r = 1 are the best parameter settings, and the DRW weight w = 0.5 is taken from the literature [16]. Table 2 contains the parameter settings of the optimization algorithms used for comparison.


Analysis of Unimodal and Multimodal Test Functions
The mean and standard deviation of the algorithms on the twenty-nine unimodal, multimodal, hybrid, and composition functions are displayed in Table 3. Function F1 is unimodal, and the results show that DAO outperforms the other algorithms on it. Moreover, it may be said that the DOL approach, which expands the search space, raises the chance of reaching the global optimum through its exploitation capacity. Multimodal functions F3-F9 are used to confirm DAO's exploration capabilities. The results in Table 3 demonstrate how well DAO performs in comparison to the other algorithms, particularly on the F4, F5, F6, and F9 test functions.

Analysis of Hybrid and Composition Test Functions
Hybrid functions, which combine unimodal and multimodal functions to mimic real-world challenges, are used to evaluate the algorithms. Such mixed tasks can lead to subpar performance, so balancing exploitation and exploration capability is important for dealing with them. Table 3 clearly illustrates the benefits of DAO on F12-F17, F20-F24, F26-F28, and F30, and the composition functions indicate that DAO is still able to solve the problem to the same degree as the other algorithms. Thus, in many real-world scenarios, DAO can effectively balance the rate of convergence and the quality of the optimization solution.
The last lines of Table 3 show W/L/T (Win/Loss/Tie), the Friedman rank, and the CPU runtime. The W/L/T metric shows that DAO performs well on the functions with 10 dimensions, outperforming AO, MAO, RSA, WOA, and BSO on 24, 29, 28, 28, and 27 functions, respectively. The Friedman rank of DAO is comparatively lower than that of the other MAs, and the CPU runtimes of DAO, AO, MAO, RSA, WOA, and BSO are shown in the third-to-last line of Table 3. The results show that WOA takes much less time than the other MAs.

Analysis of Convergence Graph
Figure 2 displays the convergence graphs of four functions, F4, F9, F13, and F20, showing the mean optimization errors generated by the six algorithms on the IEEE CEC2017 functions with 10 dimensions. The vertical axis represents the log value of the mean optimization error, while the horizontal axis represents the number of iterations. Figure 2 makes it clear that the convergence speed is fast and that the DAO curves are the lowest. Compared with the original AO in the convergence graphs, DAO can find a better solution, exit local optima, avoid premature convergence, improve the quality of the solution, and achieve high optimization efficiency.
Table 4 represents the Wilcoxon rank sum test results. The totals of ranks for positive and negative differences are represented by ∑R+ and ∑R−, respectively. Compared to the other algorithms, DAO has a greater positive rank sum. Additionally, the corresponding z and p values are provided in the table; the significance threshold is α = 0.05. This table shows that the performance of DAO is better than that of the original AO and the other metaheuristic algorithms.

The Bonferroni-Dunn test [45] is used for the DAO algorithm to identify significant differences, and the results are shown in the last line of Table 4. Among all the algorithms, DAO was found to have the lowest mean rank. The Bonferroni-Dunn graphic in Figure 3 shows the variation in ranks for each method at D = 10. In this figure, a horizontal cut line is drawn, which represents the threshold relative to the best-performing algorithm, the one with the lowest ranking bar; the height of this cut line is determined by adding the critical difference (CD) to that algorithm's ranking. The Bonferroni-Dunn technique computed the equivalent CD for α = 0.05 and α = 0.1. Algorithms with a rank bar higher than this line are deemed to perform worse than the control algorithm. As a result, it is evident from the Bonferroni-Dunn technique that only AO and WOA remain reasonably competitive when compared with DAO.


Competitive Algorithms Comparison on CEC2019 Benchmark Functions
In Table 5, the list of the ten CEC2019 benchmark functions with their dimensions and search ranges is taken from the literature [46]. DAO has been implemented on the 10 CEC2019 benchmark functions with 500 iterations and a population size of 50 over 30 independent runs. Its results are compared with those of AO, MAO, WOA, SSA, and GOA. The comparison has been performed through the mean and STD (standard deviation) values achieved by the considered algorithms across the functions, as reported in Table 6. Moreover, the Friedman mean rank values and W/L/T are included in the table's last lines (see Table 6). The results confirm the proposed DAO's superiority in dealing with these challenging testbed functions, as it is classified as the best algorithm for half of these functions.
Meanwhile, AO succeeds on three functions, and MAO, WOA, SSA, and GOA on only one function each out of this set. Among the compared algorithms, DAO is thus positioned first overall. The CPU runtime is given in the last line of Table 6, which shows that WOA takes much less time than the other algorithms. The convergence curves in Figure 4 show the efficiency of DAO in converging to highly qualified solutions with significant convergence speed, as exhibited on F2, F6, F7, and F9. Figure 4 shows the convergence capacity of the six algorithms on the test functions, with the average fitness value displayed as the "Mean". Because of its exceptional exploration capabilities, DAO converges quickly under iterative computation, as illustrated in the figures. The gradually convergent trend is due to the exploitation capacity of the DOL technique.
Table 7 presents the Wilcoxon rank sum test results. The totals of ranks for positive and negative differences are represented by ∑R+ and ∑R−, respectively. When compared to the other algorithms, DAO has a greater positive rank sum in most cases. Additionally, the corresponding z and p values are provided in the table. The significance threshold of difference is α = 0.05. The table shows that the performance of DAO is acceptable when compared with the other metaheuristic algorithms.
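The pairwise comparison described above can be sketched as follows. This is a minimal illustration, not the paper's experimental pipeline: the per-run fitness values below are synthetic placeholders, and ∑R+/∑R− are recomputed by hand alongside SciPy's signed-rank test so the quantities in the table are visible in code.

```python
import numpy as np
from scipy.stats import wilcoxon, rankdata

# Hypothetical best-fitness values from 30 independent runs of two
# algorithms on the same benchmark function (paired samples).
rng = np.random.default_rng(seed=1)
dao_runs = rng.normal(loc=1.0, scale=0.05, size=30)
rival_runs = dao_runs + rng.uniform(0.1, 0.5, size=30)  # consistently worse

# Paired two-sided Wilcoxon signed-rank test at alpha = 0.05.
stat, p = wilcoxon(dao_runs, rival_runs)
significant = p < 0.05

# Sums of ranks of positive and negative differences (the table's
# sum R+ and sum R-); sign conventions may differ between papers.
diffs = dao_runs - rival_runs
ranks = rankdata(np.abs(diffs))    # ranks of |differences|
r_plus = ranks[diffs > 0].sum()
r_minus = ranks[diffs < 0].sum()
```

Since the synthetic rival is worse on every run, all differences share one sign and the test flags the gap as significant.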
The Bonferroni-Dunn test is applied to the DAO algorithm to identify significant differences, and the results are shown in the last line of Table 7. Among all the algorithms, DAO has the lowest mean rank. The Bonferroni-Dunn chart in Figure 5 shows the variation in ranks for each method at D = 10. In this figure, the smallest bar indicates the best-performing algorithm, i.e., the one with the lowest rank. Algorithms with a higher rank bar are deemed to perform worse than the control algorithm. Hence, the Bonferroni-Dunn analysis confirms that DAO also performs well compared with the other metaheuristic algorithms, while the worst performance comes from the SSA algorithm.
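The Friedman ranking and the Bonferroni-Dunn comparison against a control can be sketched as below. The error matrix is hypothetical, and the critical-difference formula is the Demšar-style one commonly used with Bonferroni-Dunn (Bonferroni-adjusted normal quantile); the paper's exact tabulated critical values may differ slightly.

```python
import numpy as np
from scipy.stats import friedmanchisquare, norm, rankdata

# Hypothetical mean errors of k = 3 algorithms on N = 6 benchmark
# functions (rows: functions, columns: algorithms). Algorithm 0 is
# deliberately best on every function.
errors = np.array([
    [0.1, 0.5, 0.9],
    [0.2, 0.6, 0.8],
    [0.1, 0.4, 0.7],
    [0.3, 0.5, 0.6],
    [0.2, 0.7, 0.9],
    [0.1, 0.3, 0.8],
])
N, k = errors.shape

# Friedman test: do the algorithms' rank distributions differ?
stat, p = friedmanchisquare(*errors.T)

# Friedman mean ranks per algorithm (lower = better), as reported
# in the last lines of the result tables.
mean_ranks = np.array([rankdata(row) for row in errors]).mean(axis=0)

# Bonferroni-Dunn critical difference for comparisons against a
# control: two algorithms differ significantly if their mean ranks
# differ by more than CD.
alpha = 0.05
z = norm.ppf(1 - alpha / (2 * (k - 1)))   # Bonferroni-adjusted quantile
cd = z * np.sqrt(k * (k + 1) / (6.0 * N))
```

In the Bonferroni-Dunn bar chart, CD corresponds to the horizontal cut line drawn above the control algorithm's rank bar.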

DAO for Engineering Design Problems
Three relevant engineering benchmarks are used in this section to confirm that DAO performs better when tackling real-world problems: the cantilever beam design (CBD) problem, the welded beam design (WBD) problem, and the pressure vessel design (PVD) problem. Thirty independent runs of each problem were carried out in order to examine the statistical features of the outcomes, and all parameters are set to their best values.

CBD Problem
The goal of the CBD problem is to minimize a cantilever beam's weight while accounting for the vertical displacement constraint. There are five hollow square blocks, and each of the five side lengths z1, z2, z3, z4, z5 needs to be optimized [47]; the mathematical model follows [47]. Table 8 displays the results of the CBD problem compared with six different MAs: COA, AO, GWO, ROA, WOA, and SCA. The results indicate that the proposed DAO is able to provide better results than the other state-of-the-art algorithms; thus, DAO is the optimal method for addressing the CBD problem. The CPU runtime of the compared algorithms is also reported. Overall, the outcomes of the three classic engineering challenges presented in this section demonstrate how well and consistently DAO performs when handling real-world issues; in particular, DAO performs noticeably better than the AO algorithm.
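The CBD objective and constraint can be written down directly. The formulation below is the one commonly stated in the literature (see [47] for the exact model used here): minimize the weight f(z) = 0.0624(z1 + z2 + z3 + z4 + z5) subject to a single vertical-displacement constraint, with 0.01 ≤ zi ≤ 100. The test point is a near-optimal design frequently reported for this benchmark.

```python
import numpy as np

def cbd_weight(z):
    # Beam weight to be minimized
    return 0.0624 * np.sum(z)

def cbd_constraint(z):
    # Vertical-displacement constraint; g(z) <= 0 must hold
    # for a feasible design
    z1, z2, z3, z4, z5 = z
    return 61/z1**3 + 37/z2**3 + 19/z3**3 + 7/z4**3 + 1/z5**3 - 1

# Near-optimal design commonly reported in the literature:
z_best = np.array([6.016, 5.309, 4.494, 3.502, 2.153])
weight = cbd_weight(z_best)                 # about 1.34
feasible = cbd_constraint(z_best) <= 1e-3   # constraint active at optimum
```

At the optimum the displacement constraint is active (g(z) ≈ 0), which is why the reported best weights cluster tightly around 1.34 across algorithms.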

Conclusions
To enhance the exploration ability of AO, this study has proposed a low-complexity DRW method that strikes a fair balance between exploitation and exploration. The aim of this technique is to increase computational efficiency and to avoid stagnation. Moreover, to achieve a balance between exploration and exploitation, the DOL technique is introduced. The CPU runtime clearly shows the computational efficiency of the proposed algorithm. The results obtained on the CEC 2017 and CEC 2019 benchmark functions demonstrate its superiority, and the convergence graphs, the Wilcoxon rank sum tests, the Friedman test, and the Bonferroni-Dunn test confirm its reliability. DAO is also applied to real-world structural engineering design problems, where it provides better results than AO. All these results show that the DRW and DOL approaches are valuable additions to AO: DAO performs far better than AO as well as most of the other MAs.

2. Benefits of a well-distributed initial population:
(a) Increased exploration: AO can find promising regions throughout the whole search space by distributing the initial solutions more widely.
(b) Decreased chance of local optima: because it starts from a variety of sites, AO is less likely to become stuck in solutions that are only effective in a small area.
(c) Faster convergence: when multiple regions are investigated concurrently, a well-distributed population can converge more quickly to the global optimum.
Benefits of using DOL generation jumping:
(a) Improved exploration: reintroducing exploration in later phases may lead to the identification of more effective solutions.
(b) Escape from local optima: jumping in opposition to underperforming individuals nudges AO away from regions that would not lead to the global optimum.
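The DOL generation-jumping step above can be sketched as follows. This is a generic illustration of one common formulation of dynamic opposite learning (X_dol = X + w·r1·(r2·(a + b − X) − X) over the population's current, dynamic bounds); the weight w, the jumping rate, and the exact update used in DAO may differ, and the sphere objective is only a stand-in test function.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sphere(x):
    # Simple stand-in objective (minimization)
    return np.sum(x**2, axis=-1)

def dol_generation_jumping(pop, fitness, lb, ub, w=3.0):
    """Dynamic opposite learning jumping with greedy selection.
    a, b are the current per-dimension bounds of the population,
    so the opposite points track a shrinking, dynamic search space."""
    a = pop.min(axis=0)                 # dynamic lower bound
    b = pop.max(axis=0)                 # dynamic upper bound
    r1 = rng.random(pop.shape)
    r2 = rng.random(pop.shape)
    opposite = a + b - pop                          # opposite points
    jumped = pop + w * r1 * (r2 * opposite - pop)   # dynamic opposites
    jumped = np.clip(jumped, lb, ub)                # keep inside the box
    jumped_fit = sphere(jumped)
    # Greedy selection: keep the better of each original/jumped pair,
    # so the best fitness can never get worse after a jump.
    better = jumped_fit < fitness
    pop[better] = jumped[better]
    fitness[better] = jumped_fit[better]
    return pop, fitness

lb, ub = -10.0, 10.0
pop = rng.uniform(lb, ub, size=(20, 5))
fitness = sphere(pop)
best_before = fitness.min()
pop, fitness = dol_generation_jumping(pop, fitness, lb, ub)
best_after = fitness.min()   # never worse than best_before
```

The greedy selection at the end is what makes generation jumping safe to apply repeatedly: it can only improve or preserve the incumbent best.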

Figure 1. Flowchart of the proposed DAO algorithm.

Figure 2. Convergence graphs of the F4, F9, F13, and F20 CEC 2017 benchmark functions. Table 4 presents the Wilcoxon rank sum test results. The totals of ranks for positive and negative differences are represented by ∑R+ and ∑R−, respectively. When compared to the other algorithms, DAO has a greater positive rank sum. Additionally, the corresponding z and p values are provided in the table. The significance threshold of difference is α = 0.05.

Figure 3. Bonferroni-Dunn bar chart for D = 10. The bars represent the ranks of the corresponding algorithms, and the horizontal cut lines show the significance levels α = 0.1 and α = 0.05.

Figure 5. Bonferroni-Dunn bar chart for D = 10. The bars represent the ranks of the corresponding algorithms.

Table 2. Parameter settings of optimization algorithms.

Table 3. Mean and standard deviation (STD) obtained from the objective function by the standard AO, the proposed DAO, and other metaheuristic algorithms on the 10-dimensional CEC 2017 benchmark functions. Note: bold indicates the best result.

Table 4. Summary of non-parametric statistical results by the Wilcoxon test and Bonferroni-Dunn test.

Table 5. List of 10 benchmark functions of CEC2019 with dimensions and search ranges.

Table 6. Mean and standard deviation (STD) obtained from the objective function by the standard AO, the proposed DAO, and other metaheuristic algorithms on the 10-dimensional CEC 2019 benchmark functions.

Table 7. Summary of non-parametric statistical results obtained from the Wilcoxon test and Bonferroni-Dunn test.

Table 10. Comparison of DAO and other algorithms for the PVD problem. Note: bold indicates the better result.