1. Introduction
Global optimization is a term used to characterize several scientific and engineering problems that can be resolved using different optimization techniques. These days, the preferred methods for global optimization are metaheuristic algorithms (MAs), since their stochastic and dynamic nature protects them from becoming trapped in local optima [1]. Genetic Algorithms (GA) [2], Differential Evolution (DE) [3], Particle Swarm Optimization (PSO) [4], the Reptile Search Algorithm (RSA) [5], the Whale Optimization Algorithm (WOA) [6], Brain Storm Optimization (BSO) [7], and Teaching–Learning-Based Optimization (TLBO) [8] are among the many MAs that have emerged over the past 20 years. One of the better algorithms is the Aquila Optimizer (AO), which Abualigah et al. proposed in 2021 [9], because it is simple to implement, performs consistently, and has few configurable parameters. Its strong optimization capabilities have helped with a variety of global optimization problems, including feature selection [10], vehicle route planning [11], and machine scheduling [12].
The No Free Lunch (NFL) theorem [13] was a significant advancement in the field of nature-inspired algorithms. According to the NFL theorem, it is impossible to develop a single optimization algorithm that solves every optimization problem. To put it simply, even if optimization method "A" is ideally suited for a particular set of problems, there is always a subset of problems on which it performs poorly. As a result, the NFL theorem keeps the field of nature-inspired algorithms alive and enables academics to either propose new algorithms or enhance existing ones. An effective approach for enhancing existing algorithms is hybridization: combining the best aspects of multiple algorithms to create a hybridized algorithm. The present study aims to combine the benefits of better exploration with the efficiency of maintaining a balance between exploration and exploitation by enhancing AO with the DOL and DRW techniques.
The Aquila Optimizer is a nature-inspired optimization algorithm (NIOA) that uses the Aquila bird's hunting strategy in an attempt to discover the best solution to an optimization problem, and it is capable of handling a broad range of optimization problems [14]. The first drawback of this algorithm is premature convergence, which happens when the algorithm stagnates and is unable to explore the whole search space during the process. The second drawback is its low computational efficiency. This yields a poor final solution and also prevents the algorithm from searching the whole search space: the Aquila Optimizer takes longer to converge and to reach the ideal solution than other existing metaheuristic algorithms. Therefore, in the current study, the Aquila Optimizer is enhanced so that it can explore the more promising areas that are left in the population's memory. By combining AO with DRW and DOL, suitable harmony between the exploration and exploitation processes is formed. The DOL [15] method, with its asymmetric and dynamic search space, exhibits a great deal of promise. Meanwhile, the dynamic opposite number, a random candidate, can be computed quickly and easily; this may enhance the algorithm's capacity for exploitation and increase the rate of convergence. The DRW [16] approach focuses on iteratively improving a solution by exploring its closer neighborhood, because balancing the search for new promising areas with refining solutions within existing areas is the key to metaheuristics. The following are the paper's contributions:
To increase the AO algorithm's computational efficiency and its capacity for local optima avoidance, a new DRW technique is put forth.
To enhance the algorithm’s performance and balance between exploration and exploitation, the DOL approach is incorporated into AO for the very first time.
The performance of DAO is examined on twenty-nine benchmark functions of CEC 2017, ten benchmark functions of CEC 2019, and then on three engineering design problems, and the results are compared with various algorithms.
The remainder of the paper is structured as follows: the fundamental ideas of AO, DOL, and DRW are presented in Section 2. Previous work on AO is reviewed in Section 3. In Section 4, the proposed DAO algorithm is explained. Section 5 presents the experiments and their findings. Section 6 shows the engineering applications. Finally, the study's conclusion is presented in Section 7.
2. Algorithm Preliminaries
2.1. Aquila Optimizer
The Aquila bird's hunting strategy served as the inspiration for the Aquila Optimizer (AO) metaheuristic optimization technique [9]. AO mimics the four main prey-hunting strategies, explained as follows:
2.1.1. Expanded Exploration
The expanded exploration phase of the Aquila Optimizer mimics the high soar and steep descent hunting strategy observed in Aquila birds. With this strategy, the bird soars to great heights, giving it the opportunity to survey the whole search area, identify potential prey, and select the ideal hunting position. Equation (1) in [9] provides a mathematical illustration of this strategy.
In Equation (1), the maximum number of iterations is represented as $T$ and $t$ denotes the current iteration. The solution for the subsequent iteration, indicated as $X_1(t+1)$, is found by the first search method over the candidate solution population:

$$X_1(t+1) = X_{best}(t) \times \left(1 - \frac{t}{T}\right) + \left(X_M(t) - X_{best}(t) \times rand\right) \tag{1}$$

The expression $X_{best}(t)$ represents the best outcome achieved so far in the $t$-th iteration, and the iteration count is employed through the term $(1 - t/T)$ to modify the depth of the search space. Additionally, using Equation (2), where $N$ represents the population size and $Dim$ is the dimension size, the average value of the locations of the connected existing solutions at the $t$-th iteration is determined, represented as $X_M(t)$:

$$X_M(t) = \frac{1}{N}\sum_{i=1}^{N} X_i(t), \quad \forall j = 1, 2, \ldots, Dim \tag{2}$$
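To make the update concrete, here is a minimal NumPy sketch of Equations (1) and (2); the function name and array layout (`X` as an N × D population matrix) are our own conventions, not taken from the authors' MATLAB implementation.

```python
import numpy as np

def expanded_exploration(X, X_best, t, T, rng=np.random.default_rng()):
    """Sketch of AO's expanded exploration step, Eqs. (1)-(2).

    X      : (N, D) candidate solution population
    X_best : (D,)   best solution found so far
    t, T   : current / maximum iteration counts
    """
    X_mean = X.mean(axis=0)        # Eq. (2): average location of the population
    rand = rng.random(X.shape)     # element-wise random numbers in (0, 1)
    # Eq. (1): soar high over the search space, descend toward the best area
    return X_best * (1 - t / T) + (X_mean - X_best * rand)

rng = np.random.default_rng(0)
X = rng.uniform(-100, 100, size=(50, 10))
X1 = expanded_exploration(X, X[0], t=1, T=500, rng=rng)
```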
2.1.2. Narrowed Exploration
In this approach, the Aquila bird flies in a contour-like pattern and executes swift gliding strikes inside a narrow search region to track its prey. The primary aim of this methodology, as expressed mathematically in Equation (3), is to identify a solution for the subsequent iteration:

$$X_2(t+1) = X_{best}(t) \times Levy(D) + X_R(t) + (y - x) \times rand \tag{3}$$

Here, $Levy(D)$ is the Levy flight distribution for the dimension space $D$. At the $t$-th iteration, the random solution $X_R(t)$ is taken from the range $[1, N]$, where $N$ is the population size. The Levy flight distribution is calculated using a fixed constant value $s = 0.01$ and two randomly selected parameters, $u$ and $v$, which have values between 0 and 1. The mathematical expression for this computation is provided by Equation (4):

$$Levy(D) = s \times \frac{u \times \sigma}{|v|^{1/\beta}} \tag{4}$$

Equation (5) finds the value $\sigma$, which is obtained using the constant parameter $\beta = 1.5$:

$$\sigma = \frac{\Gamma(1+\beta) \times \sin(\pi\beta/2)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{(\beta-1)/2}} \tag{5}$$

Equations (6) and (7) depict the spiral form inside the search range, denoted by $y$ and $x$, respectively, which is used in Equation (3):

$$y = r \times \cos(\theta) \tag{6}$$

$$x = r \times \sin(\theta) \tag{7}$$

where $r = r_1 + U \times D_1$ and $\theta = -\omega \times D_1 + 3\pi/2$. The variable $r_1$, over a predefined number of search iterations, takes values between 1 and 20; the constants $\omega$ and $U$ are 0.005 and 0.00565, respectively; and $D_1$ ranges from 1 to the dimension of the search space.
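The Levy step of Equations (4) and (5) and the spiral of Equations (6) and (7) can be sketched as follows; the sampling of `u` and `v` follows the standard Mantegna procedure, which is an assumption on our part about how the published formulas are realized.

```python
import numpy as np
from math import gamma, pi, sin

def levy(D, beta=1.5, s=0.01, rng=np.random.default_rng()):
    """Levy flight step for a D-dimensional space, Eqs. (4)-(5)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, D)          # numerator random variable
    v = rng.normal(0.0, 1.0, D)            # denominator random variable
    return s * u / np.abs(v) ** (1 / beta)

def spiral(D, omega=0.005, U=0.00565, rng=np.random.default_rng()):
    """Spiral coordinates y and x of Eqs. (6)-(7)."""
    D1 = np.arange(1, D + 1)               # 1 .. Dim
    r1 = rng.integers(1, 21)               # r1 takes values in [1, 20]
    r = r1 + U * D1
    theta = -omega * D1 + 3 * pi / 2
    return r * np.cos(theta), r * np.sin(theta)
```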
2.1.3. Expanded Exploitation
During this phase, the Aquila bird meticulously examines the prey area before attacking with a low, slow descent. This strategy, sometimes referred to as expanded exploitation, is represented mathematically in Equation (8):

$$X_3(t+1) = (X_{best}(t) - X_M(t)) \times \alpha - rand + ((UB - LB) \times rand + LB) \times \delta \tag{8}$$

$X_3(t+1)$, the result of Equation (8), represents the solution for the subsequent iteration. In the $t$-th iteration, $X_{best}(t)$ denotes the current best solution obtained, and $X_M(t)$ denotes the average value of the current solutions as determined by Equation (2). The variable $rand$ is assigned a random number within the range (0, 1), while the tuning parameters $\alpha$ and $\delta$ are typically assigned values of 0.1 each. The symbols $UB$ and $LB$ represent the upper and lower bounds of the search space, respectively.
2.1.4. Narrowed Exploitation
Aquila birds hunt by taking advantage of their prey's unpredictable ground movement patterns to grab their prey directly. This hunting strategy serves as the basis for the narrowed exploitation technique, which is produced by Equation (9) and yields the solution of the subsequent iteration, denoted as $X_4(t+1)$:

$$X_4(t+1) = QF \times X_{best}(t) - (G_1 \times X(t) \times rand) - G_2 \times Levy(D) + rand \times G_1 \tag{9}$$

Equation (10), which expresses the quality function $QF$, was put forth to provide a well-balanced search approach:

$$QF(t) = t^{\frac{2 \times rand - 1}{(1-T)^2}} \tag{10}$$

Equations (11) and (12) are used to determine the mobility pattern of the Aquila's prey tracking, $G_1$, and the trajectory of the attack during an escape, from the beginning to the terminal point, $G_2$. Both the maximum number of iterations $T$ and the current iteration number $t$ are used in the computations:

$$G_1 = 2 \times rand - 1 \tag{11}$$

$$G_2 = 2 \times \left(1 - \frac{t}{T}\right) \tag{12}$$
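A compact sketch of both exploitation updates, Equations (8) through (12), is given below; variable names mirror the symbols above, and the Levy step is passed in from the helper of the previous subsection.

```python
import numpy as np

def expanded_exploitation(X_best, X_mean, lb, ub, alpha=0.1, delta=0.1,
                          rng=np.random.default_rng()):
    """Eq. (8): low, slow descent around the examined prey area."""
    rand = rng.random(X_best.shape)
    return (X_best - X_mean) * alpha - rand + ((ub - lb) * rand + lb) * delta

def narrowed_exploitation(X_best, X_i, levy_step, t, T,
                          rng=np.random.default_rng()):
    """Eqs. (9)-(12): direct attack along the prey's escape trajectory."""
    rand = rng.random(X_i.shape)
    QF = t ** ((2 * rng.random() - 1) / (1 - T) ** 2)  # Eq. (10): quality function
    G1 = 2 * rng.random() - 1                          # Eq. (11): prey-tracking motion
    G2 = 2 * (1 - t / T)                               # Eq. (12): decreasing flight slope
    return QF * X_best - (G1 * X_i * rand) - G2 * levy_step + rand * G1
```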
2.2. Concept of Dynamic Oppositional Learning (DOL)
The objectives of optimization algorithms are to produce solutions, improve approximated solutions, and look for additional solutions inside the domain. Existing solutions alone often cannot meet the needs of tackling a complex problem, so a variety of learning techniques have been developed to improve optimization algorithms' performance. Owing to its higher convergence capacity, the opposition-based learning (OBL) technique is the most widely acknowledged among these learning schemes. The following is an introduction to the definition of OBL [15]:
OBL considers a real number $x$ in the interval $[lb, ub]$. Its opposite number, $\check{x}$, is produced as

$$\check{x} = lb + ub - x \tag{13}$$

Regarding a situation with several dimensions, the definition is demonstrated as follows: $X = (x_1, x_2, \ldots, x_D)$ is a point in $D$-dimensional coordinates if and only if $x_j$ lies in the interval $[lb_j, ub_j]$ for $j = 1, 2, \ldots, D$. As the iteration changes, the associated low and high bounds of the population are denoted by $lb_j$ and $ub_j$, respectively. In the meantime, the definition of the multidimensional opposite point is

$$\check{x}_j = lb_j + ub_j - x_j, \quad j = 1, 2, \ldots, D \tag{14}$$
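Computing an opposite point is a one-line operation, as the sketch below shows; with static bounds it realizes Equations (13) and (14), and passing the population's current per-dimension minima and maxima instead yields the dynamic bounds used later by DOL.

```python
import numpy as np

def opposite_point(X, lb, ub):
    """OBL opposite of a point or population, Eqs. (13)-(14)."""
    return lb + ub - X

X = np.array([[0.2, -3.0], [0.9, 4.0]])
print(opposite_point(X, lb=np.array([0.0, -5.0]), ub=np.array([1.0, 5.0])))
# -> [[ 0.8  3. ]
#     [ 0.1 -4. ]]
```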
Even though the OBL method enhances the algorithm's searching capability, it still has certain drawbacks, such as premature convergence. Various variations of OBL have been proposed to enhance its performance. For example, quasi-opposite-based learning (QOBL) employs a quasi-opposite number to expand the domain of the original notion [17]. Meanwhile, quasi-reflection-based learning (QRBL) introduces a quasi-reflection number in the interval between the present position and the centre position [18].
Phase of Dynamic Opposite Learning: In addition to the OBL variations mentioned above, a novel learning approach called the dynamic opposite learning (DOL) operator is used in this work. Xu et al. originally suggested the DOL method in [15] to enhance the TLBO algorithm's performance. When dealing with complex problems, DOL is included to prevent the algorithm from converging prematurely [19]. Furthermore, the DOL learning technique is a new variation of the opposition-based learning (OBL) strategy that, in an asymmetric and dynamic search environment, aids the population in learning from the opposite points [20,21].
2.2.1. Dynamic Population Initialization
$X$ was defined as the initial population in the initialization step. Additionally, its opposite $\check{X}$ is produced in the opposing domain. The dynamic opposite population $X^{DO}$ is introduced to replace $\check{X}$ in order to expand the searching space and convert the previously symmetric searching space into a dynamic asymmetric domain. The optimizer is then able to prevent prematurity by expanding the searching space. Therefore, in order to enhance the capacity to overcome local optima, a weighting factor $w$ is incorporated. This is how the mathematical model is displayed:

$$X^{DO} = X + w \times r_1 \times (r_2 \times \check{X} - X) \tag{15}$$

where $r_1$ is a random parameter. When faced with a multidimensional goal, it manifests as follows:

$$X^{DO}_{i,j} = X_{i,j} + w \times r_1 \times (r_2 \times \check{X}_{i,j} - X_{i,j}), \quad i = 1, \ldots, N; \; j = 1, \ldots, D \tag{16}$$

where $N$ is the population size, $D$ is the dimension of an individual, and $r_1$ and $r_2$ denote random numbers in $(0, 1)$.
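A minimal sketch of the DOL initialization of Equations (15) and (16) follows; clipping the dynamic opposite population back into the feasible box is our own safeguard, since the asymmetric jump can leave the bounds.

```python
import numpy as np

def dol_initialize(N, D, lb, ub, w=3.0, rng=np.random.default_rng()):
    """DOL population initialization, Eqs. (15)-(16)."""
    X = rng.uniform(lb, ub, size=(N, D))     # random initial population
    X_opp = lb + ub - X                      # classical opposite points
    r1 = rng.random((N, D))
    r2 = rng.random((N, D))
    X_do = X + w * r1 * (r2 * X_opp - X)     # dynamic opposite population
    return X, np.clip(X_do, lb, ub)          # keep candidates inside the box
```

In DOL-style schemes, the fitter individual of each pair would then typically be retained, which is what makes the starting pool both diverse and competitive.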
2.2.2. Dynamic Population Jumping Process
In DOL, a jumping rate $Jr$ is used to update the population, and a positive weighting factor $w$ is employed to balance the capabilities of exploration and exploitation. The DOL operation procedure is implemented as follows, provided that the selection probability is less than $Jr$:

$$X^{DO}_{i,j} = X_{i,j} + w \times r_3 \times (r_4 \times \check{X}_{i,j} - X_{i,j}) \tag{17}$$

where a random value $X$ is produced as the starting populace; $N$ is the population size; $X_i$ is the $i$-th solution; $X^{DO}$ is the population created by the DOL technique; $j$ indexes the dimensions of $X_i$; $r_3$ and $r_4$ are two random parameters in $(0, 1)$; the weighting factor $w$ is set to 3; and the jumping rate $Jr$ is set to 1 by conducting the sensitivity analysis reported in Table 1.
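The generation jumping of Equation (17) differs from initialization mainly in using the population's current (dynamic) bounds for the opposite points; below is a sketch, with the per-dimension min/max as our reading of the dynamic domain.

```python
import numpy as np

def dol_jumping(X, w=3.0, Jr=1.0, rng=np.random.default_rng()):
    """DOL generation jumping, Eq. (17)."""
    if rng.random() >= Jr:                 # jump only when selection prob. < Jr
        return X
    a = X.min(axis=0)                      # dynamic lower bound per dimension
    b = X.max(axis=0)                      # dynamic upper bound per dimension
    X_opp = a + b - X                      # opposite points in the dynamic domain
    r3 = rng.random(X.shape)
    r4 = rng.random(X.shape)
    return X + w * r3 * (r4 * X_opp - X)
```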
2.3. Concept of Dynamic Random Walk (DRW)
Dynamic Random Walk (DRW) is applied to the expanded exploration phase of the Aquila Optimizer metaheuristic algorithm to improve its exploration ability and help it escape local optima through Equation (18), where the random walk vector $RW$ is generated cumulatively as $RW = [0, \mathrm{cumsum}(2r(t_1)-1), \ldots, \mathrm{cumsum}(2r(t_T)-1)]$, and $r_1$ and $r_2$ are two random parameters in $(0, 1)$. In the early stages of the optimization process, DRW allows the search agents to explore a large search space.
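Below is a plausible sketch of the random walk vector and a DRW-style move; the exact update of Equation (18) follows [16], so the weighting used here is a simplification of ours rather than the published formula.

```python
import numpy as np

def random_walk_vector(T, rng=np.random.default_rng()):
    """Cumulative random walk RW = [0, cumsum(2 r(t) - 1), ...] over T steps."""
    steps = 2.0 * (rng.random(T) > 0.5) - 1.0    # each step is +1 or -1
    return np.concatenate(([0.0], np.cumsum(steps)))

def drw_move(X_i, rw_t, rng=np.random.default_rng()):
    """One DRW-style move of agent X_i toward a random-walk target rw_t."""
    r1 = rng.random(X_i.shape)
    r2 = rng.random(X_i.shape)
    return X_i + r1 * (r2 * rw_t - X_i)
```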
3. Previous Work on AO and DOL
There is always room to enhance an algorithm by increasing and balancing the operators' exploitation and exploration, since the NFL theorem opposes the existence of an algorithm that is best suited for all optimization tasks. Plenty of work has been completed in the literature to improve the search efficiency of AO. These improvements include adjusting the algorithm's parameters, incorporating new movement strategies, and merging the algorithm with other optimization methods. The improved versions of AO can handle a large range of difficult real-world optimization problems better than the standard AO. The strategies used in AO are hybridization with NIOAs [22,23], oppositional-based learning [24], chaotic sequences [25], a Levy flight-based strategy [26], the Gauss map and crisscross operator [27], niche thought with a dispersed chaotic swarm [28], a random learning mechanism with Nelder–Mead simplex search [29], wavelet mutation [30], a weighted adaptive searching technique [31], binary AO [32], and multi-objective AO [33].
DOL strategies have also been used in many NIOAs to enhance their performance. DOL was first introduced with Teaching–Learning-Based Optimization [15], followed by the Grey Wolf Optimizer [34], the Whale Optimization Algorithm [35], the Antlion Optimizer [16], Bald Eagle Search Optimization [36], and a hybrid of the Aquila Optimizer and the Artificial Rabbits Optimization Algorithm [37]; a comprehensive survey covering other algorithms can be found in the literature [14].
4. The Proposed DAO Algorithm
Two new features, DOL and DRW, are added to the original AO by the proposed DAO (Dynamic Opposition Learning and Dynamic Random Walk for Improving Aquila Optimizer) algorithm. The aim of DOL population generation is to provide diverse solutions to escape from stagnation, while DOL generation jumping helps the exploitation ability of the algorithm and accelerates its speed. On the other hand, DRW helps the algorithm to improve its exploration ability. This overall approach provides a proper balance between exploration and exploitation and helps the algorithm to escape from local optima. Let us examine how these improvements work in more detail.
Compared to random initialization, the use of a dynamic opposition population initialization technique in Aquila Optimizer (AO) has various benefits that result in a more diverse solution pool:
- (a) Limitations of random initialization: Particularly for complex problems, random initialization might produce a population localized in a particular area of the search space, which restricts exploration and raises the possibility of becoming trapped in local optima.
- (b) Initialization based on dynamic opposition: For every randomly selected initial point, this method produces an "opposite" solution. With respect to a predetermined reference point (often the centre or the limits), the opposing solution is located on the other side of the search area. This forces investigation of many places and produces a wider initial dispersion of solutions.
The starting population is more diversified when opposition-based generation and random selection are combined. Because of this diversity, AO is able to investigate various regions of the search field right away. To prevent becoming overly biased in favor of the opposing alternatives, the strategy, nevertheless, maintains a healthy balance by retaining some randomly generated solutions. Overall, we can say that introducing DOL population initialization can help AO in the following ways:
- (a) Increased exploration: AO can find promising regions throughout the whole search space by distributing the initial solutions more widely.
- (b) Decreased chance of local optima: AO is less likely to become stuck in solutions that are only effective in a small area because it starts from a variety of sites.
- (c) Faster convergence: When multiple regions are investigated concurrently, a well-distributed population can converge more quickly to the global optimum.
- 2. Benefits of using DOL generation jumping:
- (a) Improved exploration: Reintroducing exploration in later phases may result in the identification of more effective solutions.
- (b) Escape from local optima: AO is nudged away from regions that would not lead to the global optimum by jumping in opposition to underperforming individuals.
- (c) Fine-tuning: By investigating neighboring regions in the opposite direction, the jumps may discover somewhat better choices even after AO has converged to a suitable solution.
- 3. Benefits of using DRW in place of Aquila's expanded exploration phase:
- (a) Reduced complexity: By removing the need to design and carry out a specific expanded exploration phase, DRW simplifies the algorithm as a whole.
- (b) Effective exploration: Because of its intrinsic randomness, DRW can efficiently explore the search space and produce outcomes that are comparable to those of Aquila's exploration stage.
In Algorithm 1, DOL population initialization and DOL generation jumping are used, and DRW is used to swap out the expanded exploration of AO. Algorithm 1 illustrates the phases of this algorithm. Here, the parameter values are taken at their best settings for the rest of the paper: the parameters of AO as in [9], the weight $w = 3$ and jumping rate $Jr = 1$ of DOL, and the weight of DRW from [16].
Figure 1 also displays the algorithm DAO flowchart visualization.
Algorithm 1 DAO Algorithm
Initialize the parameters (population size $N$, Max_iter, etc.)
Establish a random starting population.
While (t < Max_iter), do
Conduct DOL population initialization using Equation (16)
Assess the initial positions' fitness.
Verify boundaries
For (i = 1: nPop) do
Update the mean value of the existing solutions
Update the variables $QF$, $G_1$, $G_2$, $x$, and $y$
If $t \le (2/3) \times$ Max_iter
If rand $\le 0.5$, apply DRW using Equation (18)
Else apply Narrowed Exploration by Equation (3)
End If
Else
If rand $\le 0.5$, apply Expanded Exploitation by Equation (8)
Else apply Narrowed Exploitation by Equation (9)
End If
End If
Conduct the DOL population jumping process using Equation (17)
Assess the fitness function.
Verify boundaries
End for
t = t + 1
End while
Record best solution
This section also presents DAO's overall computational complexity. The initialization of the solutions, the computation of the fitness functions, and the updating of the solutions are the three steps that are typically considered to ascertain the computational complexity of DAO. Let $N$ represent the total number of solutions; then $O(N)$ is the computational complexity of the solutions' initialization process. The computational complexity of the updating processes for the solutions is $O(T \times N) + O(T \times N \times D)$, where $T$ is the total number of iterations and $D$ is the size of the problem's dimensions. These procedures entail updating the positions of all solutions and searching for the best ones. Consequently, the overall computational complexity of the proposed DAO is $O(N \times (T \times D + 1))$.
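For orientation, the control flow of Algorithm 1 can be condensed into a runnable skeleton; the four position updates are replaced by simplified surrogates (the exact formulas are Equations (1)–(18)), and the greedy replacement rule is our own choice, so this is a structural sketch rather than the authors' implementation.

```python
import numpy as np

def dao_skeleton(f, lb, ub, N=50, D=10, T=500, w=3.0, Jr=1.0, seed=0):
    """Structural sketch of DAO (Algorithm 1) on a box-constrained problem."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (N, D))
    best = min(X, key=f).copy()
    for t in range(1, T + 1):
        if rng.random() < Jr:                        # DOL jumping, Eq. (17)
            a, b = X.min(0), X.max(0)
            X = np.clip(X + w * rng.random((N, D)) *
                        (rng.random((N, D)) * (a + b - X) - X), lb, ub)
        X_mean = X.mean(0)
        for i in range(N):
            r = rng.random(D)
            if t <= 2 * T / 3:                       # exploration phase
                if rng.random() <= 0.5:              # DRW surrogate, Eq. (18)
                    X_new = X[i] + r * (rng.standard_normal(D) - X[i])
                else:                                # narrowed exploration surrogate
                    X_new = best + (X[rng.integers(N)] - X[i]) * r
            else:                                    # exploitation phase
                if rng.random() <= 0.5:              # expanded exploitation, Eq. (8)
                    X_new = (best - X_mean) * 0.1 - r + ((ub - lb) * r + lb) * 0.1
                else:                                # narrowed exploitation surrogate
                    X_new = best * (2 * rng.random() - 1) - 2 * (1 - t / T) * r
            X_new = np.clip(X_new, lb, ub)
            if f(X_new) < f(X[i]):                   # greedy replacement
                X[i] = X_new
                if f(X_new) < f(best):
                    best = X_new.copy()
    return best, f(best)

sphere = lambda x: float(np.sum(x ** 2))
print(dao_skeleton(sphere, -100.0, 100.0, N=20, D=5, T=100))
```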
5. Experimental Settings
The algorithms used in the numerical trials include the Aquila Optimizer (AO), Modified Aquila Optimizer (MAO) [38], Whale Optimization Algorithm (WOA) [6], Grasshopper Optimization Algorithm (GOA) [39], Reptile Search Algorithm (RSA) [5], and Brain Storm Optimization (BSO) [7]. On a computer with an Intel(R) Core (TM) i7-9750H processor running at 2.60 GHz and 16 GB of RAM, all algorithms were implemented in MATLAB R2021b.
The following five factors are used to assess DAO's performance:
The optimization errors between the obtained and known real optimal values, reported as averages and standard deviations. Since all objective functions are minimization problems, the best values, that is, the lowest mean values, are indicated in bold.
Non-parametric statistical tests, such as the Wilcoxon rank sum test [40], to compare the p-value at the significance level α = 0.05 between the compared technique and the suggested algorithm. There is a significant difference between two techniques when the p-value is less than 0.05. W/T/L indicates how many wins, ties, and losses the algorithm in question records against its opponent.
The Friedman test is another non-parametric statistical test that is used [41,42]. The average optimization error values are used as test data. A method operates more efficiently when it has a lower Friedman rank value; to make the minimal value stand out, it is bolded. (A usage sketch of these tests appears after this list.)
Bonferroni–Dunn's diagram shows the differences in the rankings obtained for each algorithm at dimension 10 by showing the pairwise variances in ranks for each approach. Pairwise disparities in rankings are calculated by subtracting the rank of one algorithm from the rank of another algorithm. In the Bonferroni–Dunn diagram, each bar denotes the average rank of a certain algorithm at a given dimension; typically, different algorithms are represented by color-coded bars, and two algorithms differ significantly when their ranks differ by more than the critical difference (CD).
A clear visual depiction of the algorithm's accuracy and convergence rate is offered via convergence graphs, which also illustrate whether the improved algorithm escapes local solutions.
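As an illustration of factors 2 through 4, the sketch below runs the Wilcoxon rank sum and Friedman tests with SciPy on hypothetical error data, and computes the Bonferroni–Dunn critical difference; the CD values it prints for six algorithms on 29 functions match the ones reported in Table 4.

```python
import numpy as np
from math import sqrt
from scipy import stats
from scipy.stats import norm

rng = np.random.default_rng(1)
err_dao = rng.gamma(2.0, 1.0, 30)   # hypothetical errors of DAO over 30 runs
err_ao = rng.gamma(3.0, 1.0, 30)    # hypothetical errors of a competitor

# Wilcoxon rank sum test between two algorithms on one function
z, p = stats.ranksums(err_dao, err_ao)
print(f"z = {z:.3f}, p = {p:.4f}, significant: {p < 0.05}")

# Friedman test across algorithms, using mean errors per function as test data
means = rng.random((29, 3))         # rows: functions, columns: algorithms
stat, p = stats.friedmanchisquare(means[:, 0], means[:, 1], means[:, 2])
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")

def bonferroni_dunn_cd(k, n, alpha=0.05):
    """CD = q_alpha * sqrt(k (k + 1) / (6 n)) for k algorithms on n problems,
    with q_alpha the normal quantile adjusted for k - 1 comparisons."""
    q = norm.ppf(1 - alpha / (2 * (k - 1)))
    return q * sqrt(k * (k + 1) / (6 * n))

print(round(bonferroni_dunn_cd(6, 29, 0.05), 4))   # ~1.2656, as in Table 4
print(round(bonferroni_dunn_cd(6, 29, 0.10), 4))   # ~1.1428, as in Table 4
```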
5.1. Competitive Algorithms Comparison on CEC2017 Benchmark Functions
Five competing algorithms are compared to gauge DAO's efficiency and search performance: MAO (Modified Aquila Optimizer), AO (Aquila Optimizer), RSA (Reptile Search Algorithm), WOA (Whale Optimization Algorithm), and BSO (Brain Storm Optimization). The comparison is made on 29 benchmark functions from IEEE CEC2017 taken from the literature [43]. The population size (N) was fixed at 50 in each experiment, the maximum number of iterations is 500, and the dimension is 10. The [−100, 100] range was chosen for the search. On each function, each algorithm was executed 30 times.
Parameter settings: The algorithm's performance depends on the parameter settings, particularly for DAO. Accordingly, this part implements the sensitivity analysis of the parameters of DOL. Table 1 contains a detailed explanation of each parameter setting; the mean values are used to compare the results.
The weighting factor $w$ and the jumping rate $Jr$ are varied over 1–10 and 0.1–1 in the DAO algorithm, respectively. In Table 1, only $Jr = 0.1$ and $Jr = 1$ are reported because, at the other settings, the values are not favorable. Test functions have been chosen for analysis from the literature [43], where F3 and F6 are multimodal functions, F18 is a hybrid function, and F23 is a composition function; in order to assess performance, the means of the outcomes obtained by DAO are shown in Table 1. On F3, F6, and F23, DAO performs better than at the other settings when $w = 3$ and $Jr = 1$. Hence, $w = 3$ and $Jr = 1$ are the best parameter settings, and the DRW weight is taken from the literature [16].
Table 2 contains the parameter settings of the optimization algorithms used for comparison.
Analysis of IEEE CEC’17 Test Functions
The mean and standard deviation of the algorithms on the twenty-nine unimodal, multimodal, hybrid, and composition functions are displayed in Table 3. The function F1 is unimodal, and the results show that, on this unimodal function, DAO outperforms the other algorithms. Moreover, it may be said that the DOL approach, which expands the search space, gives DAO a higher chance of reaching the global optimum owing to its exploitation capacity.
Multimodal functions like F3–F9 are used to confirm DAO's exploring capability. The results in Table 3 demonstrate how well DAO performs in comparison with the other algorithms, particularly on the F4, F5, F6, and F9 test functions.
Hybrid functions combine unimodal and multimodal functions in order to mimic real-world challenges and are used to evaluate the algorithms. Such mixed tasks may lead to subpar performance; hence, balancing exploitation and exploration capability is important for dealing with them. Table 3 clearly illustrates the benefits of DAO on F12–F17, F20–F24, F26–F28, and F30, and the composition functions indicate that DAO is still able to solve the problem to the same degree as the other algorithms. Thus, in many real-world scenarios, DAO can effectively balance the rate of convergence and the quality of the optimization solution.
The last lines of Table 3 show W/L/T (Win/Loss/Tie), the Friedman rank, and the CPU runtime. The W/L/T metric shows that DAO performs well on the functions with 10 dimensions, outperforming AO, MAO, RSA, WOA, and BSO on 24, 29, 28, 28, and 27 functions, respectively. The Friedman rank of DAO is lower than those of the other MAs, and the CPU runtimes of DAO, AO, MAO, RSA, WOA, and BSO are also reported there; the results show that WOA takes much less time than the other MAs.
Figure 2 displays the convergence graphs of the four functions, F4, F9, F13, and F20, where the mean optimizations generated by six algorithms on the IEEE CEC2017 functions with 10 dimensions are displayed. The vertical axis represents the log value of the mean optimizations, while the horizontal axis represents the number of iterations.
Figure 2 makes it clear that the convergence speed is fast and that the DAO curves are the lowest. When compared to the original AO in the convergence graphs, DAO can find a better solution, exit local optimization, avoid premature convergence, improve the quality of the solution, and have high optimization efficiency.
Table 4 presents the Wilcoxon rank sum test results. The sums of ranks for positive and negative differences are represented by $\Sigma R^+$ and $\Sigma R^-$, respectively. When compared to the other algorithms, DAO has a greater positive rank sum. Additionally, the corresponding z and p values are provided in the table, with the significance threshold set at α = 0.05. This table shows that the performance of DAO is better than that of the original AO and the other metaheuristic algorithms.
The Bonferroni–Dunn test [45] is used for the DAO algorithm to identify significant differences, and the results are shown in the last line of Table 4. Among all the algorithms, DAO was found to have the lowest mean rank. The Bonferroni–Dunn graphic in Figure 3 shows the variation in ranks for each method at D = 10. In this figure, a horizontal cut line is drawn, which represents the threshold relative to the best-performing algorithm, the one with the lowest ranking bar; the height of this cut line is determined by adding the critical difference (CD) to the ranking of that algorithm, and the Bonferroni–Dunn technique computed the equivalent CD for α = 0.05 and α = 0.1. Algorithms whose rank bar rises above this line are deemed to perform significantly worse than the control algorithm. As a result, it is evident from the use of the Bonferroni–Dunn technique that only AO and WOA remain statistically comparable with DAO.
5.2. Competitive Algorithms Comparison on CEC2019 Benchmark Functions
In Table 5, the list of the ten CEC2019 benchmark functions with their dimensions and search ranges is taken from the literature [46].
Analysis of IEEE CEC’19 Test Functions
DAO has been implemented on the 10 CEC 2019 benchmark functions with 500 iterations, a population size of 50, and 30 independent runs. Its results are compared with AO, MAO, WOA, SSA, and GOA. The comparison is performed through the mean and STD (standard deviation) values obtained by the considered algorithms, as reported in Table 6. Moreover, the Friedman mean rank values and W/L/T are included in the table's last lines (see Table 6). The results confirm the proposed DAO's superiority in dealing with these challenging testbed functions, as it is classified as the best algorithm for half of them. Meanwhile, AO succeeded on three functions, and MAO, WOA, SSA, and GOA on only one function each out of this set. According to the Friedman mean rank, DAO is positioned first among its counterparts. The CPU runtime is mentioned in the last line of Table 6, which shows WOA taking much less time than the other algorithms. The convergence curves of Figure 4 show the efficiency of DAO in converging to highly qualified solutions with significant convergence speed, as exhibited for F2, F6, F7, and F9.
Figure 4 shows the convergence capacity of the six algorithms on the test functions, where the average fitness value is displayed as the "Mean". Because of its exceptional exploration capability, DAO converges quickly under iterative computation, as illustrated in the figures; the gradually convergent trend is due to the exploitation capacity of the DOL technique.
Table 7 presents the Wilcoxon rank sum test results. The sums of ranks for positive and negative differences are represented by $\Sigma R^+$ and $\Sigma R^-$, respectively. When compared to the other algorithms, DAO has a greater positive rank sum in most of the cases. Additionally, the corresponding z and p values are provided in the table, with the significance threshold set at α = 0.05. This table shows that the performance of DAO is equivalently acceptable when compared to the other metaheuristic algorithms.
The Bonferroni–Dunn test is used for the DAO algorithm to identify significant differences, and the results are shown in the last line of Table 7. Among all the algorithms, DAO was found to have the lowest mean rank. The Bonferroni–Dunn graphic in Figure 5 shows the variation in ranks for each method at D = 10; in this figure, the smallest bar indicates the best-performing algorithm. Algorithms with a higher rank bar are deemed to perform worse than the control algorithm. As a result, it is evident from the use of the Bonferroni–Dunn technique that DAO also performs well when compared with the other metaheuristic algorithms, while the worst performance comes from the SSA algorithm.
6. DAO for Engineering Design Problems
Three relevant engineering benchmarks are used in this section to confirm that DAO improves when tackling real-world problems: the cantilever beam design (CBD) problem, the welded beam design (WBD) problem, and the pressure vessel design (PVD) problem. Thirty independent runs of each problem were carried out in order to examine the statistical features of the outcomes, and all parameters are taken at their best settings.
6.1. CBD Problem
The goal of the CBD problem is to minimize a cantilever beam's weight while accounting for the vertical displacement constraint. There are five hollow square blocks, and each of the five side-length values $z_1, \ldots, z_5$ needs to be optimized [47]. The mathematical model is explained as follows:

$$\min f(z) = 0.0624\,(z_1 + z_2 + z_3 + z_4 + z_5)$$

$$\text{subject to } g(z) = \frac{61}{z_1^3} + \frac{37}{z_2^3} + \frac{19}{z_3^3} + \frac{7}{z_4^3} + \frac{1}{z_5^3} - 1 \le 0, \quad 0.01 \le z_i \le 100$$
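Evaluating this model is straightforward, as the sketch below shows; plugging in DAO's design vector from Table 8 reproduces the reported optimum weight of 1.3302.

```python
import numpy as np

def cbd_weight(z):
    """Cantilever beam weight, the CBD objective above."""
    return 0.0624 * float(np.sum(z))

def cbd_displacement(z):
    """Left-hand side of the vertical displacement constraint g(z) <= 0."""
    c = np.array([61.0, 37.0, 19.0, 7.0, 1.0])
    return float(np.sum(c / np.asarray(z) ** 3) - 1.0)

z_dao = np.array([6.0112, 5.1211, 4.8221, 3.2114, 2.1510])  # DAO row, Table 8
print(round(cbd_weight(z_dao), 4))                          # -> 1.3302
```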
Table 8 displays the results of the CBD problem compared with six different MAs, namely COA, AO, GWO, ROA, WOA, and SCA. The results indicate that the proposed algorithm DAO provides better results than the other state-of-the-art algorithms; thus, DAO is the optimal method for addressing the CBD problem. The CPU runtime of the given set of algorithms is also reported, which shows that WOA takes very little time to compute the CBD problem.
6.2. WBD Problem
The goal of the WBD challenge is to reduce the cost of manufacturing a welded beam [9]. The optimization parameters include the weld thickness ($h$), the length of the clamped bar ($l$), the height of the bar ($t$), and the thickness of the bar ($b$), and seven constraints must be taken into account. The optimization model can be stated as follows:

$$\min f(h, l, t, b) = 1.10471\,h^2 l + 0.04811\,t b\,(14.0 + l)$$

subject to constraints on the shear stress $\tau$, the bending stress $\sigma$, the buckling load $P_c$, the end deflection $\delta$, and the geometric bounds of the design variables.
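The objective alone is easy to check numerically, a sketch of which follows; with DAO's design variables from Table 9 it returns 1.6966, agreeing with the reported 1.6960 up to the rounding of the printed variables.

```python
def wbd_cost(h, l, t, b):
    """Welded beam fabrication cost, the WBD objective above."""
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

# DAO row of Table 9
print(round(wbd_cost(0.2138, 3.2154, 9.0275, 0.2052), 4))   # -> 1.6966
```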
Table 9 reports the outcomes of the WBD problem. It is clear that DAO does not provide the best solution here: AO obtains a lower cost, while DAO's value is very close to the optimal result and better than those of the remaining algorithms. This suggests that DAO is a stable and effective solution to the WBD problem. The CPU runtime of the given set of algorithms is also reported, which shows that WOA takes very little time to compute the WBD problem.
6.3. PVD Problem
The PVD problem, a classical and representative optimization problem in engineering, is typically employed to verify the efficacy of optimization techniques. Its goal is to minimize the total cost of a cylindrical pressure vessel. The design parameters are the thickness of the shell $T_s$, the thickness of the head $T_h$, the inner radius $R$, and the length of the cylindrical shell $L$. The mathematical formulation is expressed as follows [47]:

$$\min f(T_s, T_h, R, L) = 0.6224\,T_s R L + 1.7781\,T_h R^2 + 3.1661\,T_s^2 L + 19.84\,T_s^2 R$$

subject to

$$g_1 = -T_s + 0.0193R \le 0, \quad g_2 = -T_h + 0.00954R \le 0,$$

$$g_3 = -\pi R^2 L - \tfrac{4}{3}\pi R^3 + 1{,}296{,}000 \le 0, \quad g_4 = L - 240 \le 0$$
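Again, the cost function can be checked directly; evaluating it at DAO's design vector from Table 10 gives about 5877.06, matching the reported 5877.10 to within the rounding of the printed variables.

```python
def pvd_cost(Ts, Th, R, L):
    """Pressure vessel total cost, the PVD objective above."""
    return (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)

# DAO row of Table 10
print(round(pvd_cost(0.7885, 0.3254, 42.3275, 189.892), 2))  # -> ~5877.06
```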
Table 10's results for the PVD problem demonstrate that ROA is the optimal method for solving it, followed by COA and DAO, but we can say that DAO is a competitive and stable solution. The CPU runtime of the given set of algorithms is also reported, which shows that WOA takes very little time to compute the PVD problem.
The outcomes of three classic engineering challenges are shown in this section, demonstrating how well and consistently DAO performs when handling real-world issues. In particular, DAO performs noticeably better than the AO algorithm.
7. Conclusions
In order to replace the expanded exploration phase of AO, this study has proposed a low-complexity DRW method that strikes a fair balance between exploitation and exploration; this technique aims to increase computational efficiency and avoid stagnation. Moreover, to achieve a balance between exploration and exploitation, the DOL technique is introduced. The CPU runtimes document the computational efficiency of the enhanced optimizer, and the results obtained on the benchmark functions of CEC 2017 and CEC 2019 demonstrate its superiority. Furthermore, the convergence graphs, the Wilcoxon rank sum tests, the Friedman test, and the Bonferroni–Dunn test confirm its reliability. DAO was also applied to real-world structural engineering design problems, where it provides better results than AO. All these results show that the DRW and DOL approaches are valuable additions to AO: DAO performs far better than AO, as well as than most of the other MAs.
8. Future Scope
Given its strong performance, DAO could be applied in additional real-world applications. Moreover, other optimization tasks, including image processing, cloud and fog computing, and others, could use the DAO optimization method.
Author Contributions
Conceptualization, M.V. and P.K.; methodology, M.V. and P.K.; software, M.V. and P.K.; validation, M.A. and Y.G.; formal analysis, M.V. and P.K.; investigation, M.A. and Y.G.; resources, P.K., M.A. and Y.G.; data curation, M.V. and P.K.; writing—original draft preparation, M.V. and P.K.; writing—review and editing, M.A. and Y.G.; visualization, M.V. and P.K.; supervision, M.A., P.K. and Y.G.; project administration, M.A. and P.K.; funding acquisition, M.A. and Y.G. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the Deanship of Scientific Research, the Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia (Grant A016).
Institutional Review Board Statement
Not applicable.
Data Availability Statement
Since no datasets were created or examined in the current investigation, data sharing is not relevant to this topic.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Abdel-Basset, M.; Abdel-Fatah, L.; Sangaiah, A. Metaheuristic Algorithms: A Comprehensive Review. In Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications; Academic Press: Cambridge, MA, USA, 2018; pp. 185–231.
- Goldberg, D.E. Genetic Algorithms; Pearson Education: Bangalore, India, 2006.
- Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
- Kennedy, J.; Eberhart, R. Particle Swarm Optimization. Proc. IEEE Int. Conf. Neural Netw. 1995, 4, 1942–1948.
- Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A Nature-Inspired Meta-Heuristic Optimizer. Expert Syst. Appl. 2022, 191, 116158.
- Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
- Shi, Y. Brain Storm Optimization Algorithm. In International Conference in Swarm Intelligence; Springer: Berlin/Heidelberg, Germany, 2011; pp. 303–309.
- Rao, R.; Savsani, V.; Vakharia, D. Teaching–Learning-Based Optimization: A Novel Method for Constrained Mechanical Design Optimization Problems. Comput. Aided Des. 2011, 43, 303–315.
- Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A.H. Aquila Optimizer: A Novel Meta-Heuristic Optimization Algorithm. Comput. Ind. Eng. 2021, 157, 107250.
- Li, L.; Pan, J.; Zhuang, Z.; Chu, S. A Novel Feature Selection Algorithm Based on Aquila Optimizer for COVID-19 Classification. In International Conference on Intelligent Information Processing; Springer International Publishing: Cham, Switzerland, 2022; pp. 30–41.
- Chaudhari, S.V.; Dhipa, M.; Ayoub, S.; Gayathri, B.; Siva, M.; Banupriya, V. Modified Aquila Optimization Based Route Planning Model for Unmanned Aerial Vehicles Networks. In Proceedings of the 2022 International Conference on Automation, Computing and Renewable Systems (ICACRS), Pudukkottai, India, 13–15 December 2022; pp. 370–375.
- Abualigah, L.; Elaziz, M.A.; Khodadadi, N.; Forestiero, A.; Jia, H.; Gandomi, A.H. Aquila Optimizer Based PSO Swarm Intelligence for IoT Task Scheduling Application in Cloud Computing. In Studies in Computational Intelligence; Springer International Publishing: Cham, Switzerland, 2022; Volume 1038, pp. 481–497.
- Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
- Sasmal, B.; Hussien, A.G.; Das, A.; Dhal, K.G. A Comprehensive Survey on Aquila Optimizer. Arch. Comput. Methods Eng. 2023, 30, 4449–4476.
- Xu, Y.; Yang, Z.; Li, X.; Kang, H.; Yang, X. Dynamic Opposite Learning Enhanced Teaching–Learning-Based Optimization. Knowl. Based Syst. 2020, 188, 104966.
- Dong, H.; Xu, Y.; Li, X.; Yang, Z.; Zou, C. An Improved Antlion Optimizer with Dynamic Random Walk and Dynamic Opposite Learning. Knowl. Based Syst. 2021, 216, 106752.
- Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Quasi-Oppositional Differential Evolution. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 2229–2236.
- Ergezer, M.; Simon, D.; Du, D. Oppositional Biogeography-Based Optimization. In Proceedings of the 2009 IEEE International Conference on Systems, Man and Cybernetics, San Antonio, TX, USA, 11–14 October 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 1009–1014.
- Zhou, J.; Zhang, Y.; Guo, Y.; Feng, W.; Menhas, M.; Zhang, Y. Parameters Identification of Battery Model Using a Novel Differential Evolution Algorithm Variant. Front. Energy Res. 2022, 10, 794732.
- Liu, Z.H.; Wei, H.L.; Li, X.H.; Liu, K.; Zhong, Q.C. Global Identification of Electrical and Mechanical Parameters in PMSM Drive Based on Dynamic Self-Learning PSO. IEEE Trans. Power Electron. 2018, 33, 10858–10871.
- Tizhoosh, H.R. Opposition-Based Learning: A New Scheme for Machine Intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06), Vienna, Austria, 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 1, pp. 695–701.
- Mohamed, A.; Abualigah, L.; Alburaikan, A.; Khalifa, H.A.E.-W. AOEHO: A New Hybrid Data Replication Method in Fog Computing for IoT Application. Sensors 2023, 23, 2189.
- Nirmalapriya, G.; Agalya, V.; Regunathan, R.; Belsam Jeba Ananth, M. Fractional Aquila Spider Monkey Optimization Based Deep Learning Network for Classification of Brain Tumor. Biomed. Signal Process. Control 2023, 79, 104017.
- Perumalla, S.; Chatterjee, S.; Kumar, A.P.S. Modelling of Oppositional Aquila Optimizer with Machine Learning Enabled Secure Access Control in Internet of Drones Environment. Theor. Comput. Sci. 2023, 941, 39–54.
- Duan, J.; Zuo, H.; Bai, Y.; Chang, M.; Chen, X.; Wang, W.; Ma, L.; Chen, B. A Multistep Short-Term Solar Radiation Forecasting Model Using Fully Convolutional Neural Networks and Chaotic Aquila Optimization Combining WRF-Solar Model Results. Energy 2023, 271, 126980.
- Ramamoorthy, R.; Ranganathan, R.; Ramu, S. An Improved Aquila Optimization with Fuzzy Model Based Energy Efficient Cluster Routing Protocol for Wireless Sensor Networks. Yanbu J. Eng. Sci. 2022, 19, 51–61.
- Huang, C.; Huang, J.; Jia, Y.; Xu, J. A Hybrid Aquila Optimizer and Its K-Means Clustering Optimization. Trans. Inst. Meas. Control 2023, 45, 557–572.
- Zhang, Y.; Xu, X.; Zhang, N.; Zhang, K.; Dong, W.; Li, X. Adaptive Aquila Optimizer Combining Niche Thought with Dispersed Chaotic Swarm. Sensors 2023, 23, 755.
- Ekinci, S.; Izci, D.; Abualigah, L. A Novel Balanced Aquila Optimizer Using Random Learning and Nelder–Mead Simplex Search Mechanisms for Air–Fuel Ratio System Control. J. Braz. Soc. Mech. Sci. Eng. 2023, 45, 68.
- Alangari, S.; Obayya, M.; Gaddah, A.; Yafoz, A.; Alsini, R.; Alghushairy, O.; Ashour, A.; Motwakel, A. Wavelet Mutation with Aquila Optimization-Based Routing Protocol for Energy-Aware Wireless Communication. Sensors 2022, 22, 8508.
- Das, T.; Roy, R.; Mandal, K.K. A Novel Weighted Adaptive Aquila Optimizer Technique for Solving the Optimal Reactive Power Dispatch Problem. Research Square 2022, preprint.
- Bas, E. Binary Aquila Optimizer for 0–1 Knapsack Problems. Eng. Appl. Artif. Intell. 2023, 118, 105592.
- Long, H.; Liu, S.; Chen, T.; Tan, H.; Wei, J.; Zhang, C.; Chen, W. Optimal Reactive Power Dispatch Based on Multi-Strategy Improved Aquila Optimization Algorithm. IAENG Int. J. Comput. Sci. 2022, 49, 4.
- Wang, Y.; Jin, C.; Li, Q.; Hu, T.; Xu, Y.; Chen, C.; Zhang, Y.; Yang, Z. A Dynamic Opposite Learning-Assisted Grey Wolf Optimizer. Symmetry 2022, 14, 1871.
- Cao, D.; Xu, Y.; Yang, Z.; Dong, H.; Li, X. An Enhanced Whale Optimization Algorithm with Improved Dynamic Opposite Learning and Adaptive Inertia Weight Strategy. Complex Intell. Syst. 2023, 9, 767–795.
- Sharma, S.; Kaur, M.; Singh, B. A Self-Adaptive Bald Eagle Search Optimization Algorithm with Dynamic Opposition-Based Learning for Global Optimization Problems. Expert Syst. 2023, 40, e13170.
- Wang, Y.; Xiao, Y.; Guo, Y.; Li, J. Dynamic Chaotic Opposition-Based Learning-Driven Hybrid Aquila Optimizer and Artificial Rabbits Optimization Algorithm: Framework and Applications. Processes 2022, 10, 2703.
- Ali, M.H.; Salawudeen, A.T.; Kamel, S.; Salau, H.B.; Habil, M.; Shouran, M. Single- and Multi-Objective Modified Aquila Optimizer for Optimal Multiple Renewable Energy Resources in Distribution Network. Mathematics 2022, 10, 2129.
- Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper Optimisation Algorithm: Theory and Application. Adv. Eng. Softw. 2017, 105, 30–47.
- García, S.; Molina, D.; Lozano, M.; Herrera, F. A Study on the Use of Non-Parametric Tests for Analyzing the Evolutionary Algorithms' Behaviour: A Case Study on the CEC'2005 Special Session on Real Parameter Optimization. J. Heuristics 2009, 15, 617–644.
- García, S.; Fernández, A.; Luengo, J.; Herrera, F. Advanced Nonparametric Tests for Multiple Comparisons in the Design of Experiments in Computational Intelligence and Data Mining: Experimental Analysis of Power. Inf. Sci. 2010, 180, 2044–2064.
- Luengo, J.; García, S.; Herrera, F. A Study on the Use of Statistical Tests for Experimentation with Neural Networks: Analysis of Parametric Test Conditions and Non-Parametric Tests. Expert Syst. Appl. 2009, 36, 7798–7808.
- Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition and Special Session on Constrained Single Objective Real-Parameter Optimization; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Republic of Korea; Nanyang Technological University: Singapore, 2016.
- Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A Bio-Inspired Optimizer for Engineering Design Problems. Adv. Eng. Softw. 2017, 114, 163–191.
- Ahmadianfar, I.; Asghar Heidari, A.; Noshadian, S.; Chen, H.; Gandomi, A.H. INFO: An Efficient Optimization Algorithm Based on Weighted Mean of Vectors. Expert Syst. Appl. 2022, 195, 116516.
- Liang, J.; Qu, B.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2014 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report; Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2014.
- Varshney, M.; Kumar, P.; Ali, M.; Gulzar, Y. Using the Grey Wolf Aquila Synergistic Algorithm for Design Problems in Structural Engineering. Biomimetics 2024, 9, 54.
- Jia, H.; Rao, H.; Wen, C.; Mirjalili, S. Crayfish Optimization Algorithm. Artif. Intell. Rev. 2023, 56, 1919–1979.
- Jia, H.; Peng, X.; Lang, C. Remora Optimization Algorithm. Expert Syst. Appl. 2021, 185, 115665.
- Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
- Mirjalili, S. SCA: A Sine Cosine Algorithm for Solving Optimization Problems. Knowl. Based Syst. 2016, 96, 120–133.
Figure 1. Flowchart of the proposed DAO algorithm.
Figure 2. Convergence graphs of the F4, F9, F13, and F20 CEC 2017 benchmark functions.
Figure 3. Bonferroni–Dunn bar chart for D = 10. Each bar represents the rank of the corresponding algorithm, and horizontal cut lines show the significance levels (the dashed line ----- shows the significance level at 0.1, and the solid line shows the significance level at 0.05).
Figure 4. Convergence graphs of the F1, F2, F4, F6, F9, and F10 CEC 2019 benchmark functions.
Figure 5. Bonferroni–Dunn bar chart for D = 10. Each bar represents the rank of the corresponding algorithm.
Table 1. The sensitivity analysis of the weighting factor $w$ and the jumping rate $Jr$ (mean values; for each function, the two columns correspond to $Jr$ = 0.1 and $Jr$ = 1).

$w$ | F3, $Jr$ = 0.1 | F3, $Jr$ = 1 | F6, $Jr$ = 0.1 | F6, $Jr$ = 1 | F18, $Jr$ = 0.1 | F18, $Jr$ = 1 | F23, $Jr$ = 0.1 | F23, $Jr$ = 1 |
---|---|---|---|---|---|---|---|---|
1 | 6.582 × 103 | 6.859 × 103 | 4.251 × 101 | 4.839 × 101 | 2.387 × 105 | 1.457 × 105 | 4.256 × 102 | 4.340 × 102 |
2 | 6.502 × 103 | 5.187 × 103 | 4.216 × 101 | 3.776 × 101 | 3.041 × 105 | 1.567 × 105 | 4.101 × 102 | 3.865 × 102 |
3 | 6.750 × 103 | 4.022 × 103 | 4.469 × 101 | 3.675 × 101 | 2.224 × 106 | 6.025 × 105 | 3.982 × 102 | 3.818 × 102 |
4 | 8.326 × 103 | 5.279 × 103 | 4.476 × 101 | 3.885 × 101 | 1.045 × 106 | 6.649 × 105 | 4.082 × 102 | 3.868 × 102 |
5 | 8.613 × 103 | 4.975 × 103 | 4.468 × 101 | 3.908 × 101 | 4.604 × 106 | 1.106 × 106 | 4.109 × 102 | 3.888 × 102 |
6 | 9.230 × 103 | 5.749 × 103 | 4.886 × 101 | 4.031 × 101 | 3.238 × 106 | 2.014 × 106 | 4.166 × 102 | 4.025 × 102 |
7 | 9.451 × 103 | 7.962 × 103 | 5.108 × 101 | 4.931 × 101 | 5.852 × 106 | 5.806 × 106 | 4.192 × 102 | 4.218 × 102 |
8 | 1.031 × 104 | 8.989 × 103 | 5.228 × 101 | 4.922 × 101 | 3.966 × 106 | 1.155 × 107 | 4.223 × 102 | 4.357 × 102 |
9 | 9.738 × 103 | 1.003 × 104 | 5.263 × 101 | 5.246 × 101 | 1.043 × 107 | 1.255 × 107 | 4.181 × 102 | 4.472 × 102 |
10 | 1.087 × 104 | 1.108 × 104 | 5.446 × 101 | 5.205 × 101 | 2.989 × 107 | 3.067 × 107 | 4.311 × 102 | 4.574 × 102 |
Table 2. Parameter settings of the optimization algorithms.

Algorithm | Parameters |
---|---|
DAO | DOL weight $w$ = 3, jumping rate $Jr$ = 1; remaining parameters as in AO [9] |
AO [9] | $\alpha$ = 0.1, $\delta$ = 0.1, $\beta$ = 1.5 |
MAO [38] | |
SSA [44] | |
WOA [6] | |
RSA [5] | |
GOA [39] | |
BSO [7] | |
Table 3. Mean and standard deviation (STD) obtained from the objective function by the standard AO, the proposed algorithm DAO, and other metaheuristic algorithms for the 10-dimensional CEC 2017 benchmark functions.
Function | DAO | AO | MAO | RSA | WOA | BSO |
---|
F1 Mean STD | 8.388 × 108 4.709 × 108 | 9.239 × 108 6.512 × 106 | 2.159 × 1010 4.981 × 109 | 5.38 × 1010 9.29 × 109 | 9.784 × 108 7.462 × 106 | 9.410 × 109 2.501 × 103 |
F3 Mean STD | 4.291 × 103 1.381 × 103 | 8.809 × 102 5.485 × 102 | 2.363 × 105 4.971 × 104 | 7.42 × 104 5.50 × 103 | 3.663 × 103 3.232 × 103 | 3.001 × 101 1.710 × 102 |
F4 Mean STD | 7.619 × 101 4.001 × 101 | 2.085 × 102 2.512 × 102 | 2.527 × 103 1.304 × 103 | 1.45 × 104 4.56 × 103 | 9.451 × 101 1.923 × 101 | 9.495 × 102 2.001 × 101 |
F5 Mean STD | 6.359 × 101 1.584 × 101 | 7.125 × 101 1.066 × 101 | 1.508 × 102 2.758 × 101 | 3.89 × 102 3.30 × 101 | 8.408 × 101 2.096 × 101 | 2.038 × 102 4.101 × 101 |
F6 Mean STD | 3.523 × 101 8.702 × 100 | 7.745 × 101 6.053 × 100 | 9.271 × 101 1.745 × 101 | 8.63 × 101 7.46 × 100 | 3.627 × 101 1.012 × 101 | 5.316 × 101 6.414 × 100 |
F7 Mean STD | 8.615 × 101 2.031 × 101 | 5.545 × 101 1.931 × 101 | 4.585 × 102 9.379 × 101 | 6.72 × 102 6.73 × 101 | 7.470 × 101 2.151 × 101 | 5.110 × 102 1.011 × 102 |
F8 Mean STD | 3.211 × 101 6.033 × 100 | 2.408 × 101 6.884 × 100 | 1.369 × 102 1.922 × 101 | 3.11 × 102 2.80 × 101 | 4.291 × 101 1.767 × 101 | 1.451 × 102 3.211 × 101 |
F9 Mean STD | 2.773 × 102 1.571 × 102 | 3.135 × 102 6.321 × 101 | 4.114 × 103 1.082 × 103 | 8.53 × 103 1.19 × 103 | 5.919 × 102 3.820 × 102 | 3.411 × 103 6.754 × 102 |
F10Mean STD | 1.451 × 103 3.124 × 102 | 9.451 × 102 2.686 × 102 | 2.726 × 103 2.296 × 102 | 7.02 × 103 3.59 × 102 | 1.181 × 103 2.751 × 102 | 4.211 × 103 6.081 × 102 |
F11Mean STD | 4.201 × 102 4.743 × 102 | 1.078 × 102 5.818 × 101 | 2.604 × 104 2.681 × 104 | 7.77 × 103 2.80 × 103 | 1.417 × 102 8.465 × 101 | 1.378 × 102 4.511 × 101 |
F12Mean STD | 5.697 × 106 5.285 × 106 | 7.862 × 106 3.363 × 106 | 2.784 × 109 1.640 × 109 | 1.70 × 1010 4.36 × 109 | 7.279 × 106 5.117 × 106 | 9.614 × 107 8.094 × 105 |
F13Mean STD | 2.549 × 105 6.824 × 105 | 2.465 × 105 1.528 × 104 | 3.020 × 108 3.011 × 108 | 1.18 × 1010 4.90 × 109 | 1.437 × 106 1.177 × 104 | 5.216 × 107 2.340 × 104 |
F14Mean STD | 5.424 × 103 8.248 × 103 | 6.334 × 104 8.016 × 102 | 7.503 × 106 1.063 × 107 | 3.07 × 106 3.58 × 106 | 7.307 × 103 1.500 × 103 | 4.170 × 105 3.152 × 103 |
F15Mean STD | 6.293 × 103 3.375 × 103 | 9.332 × 103 2.839 × 103 | 2.148 × 107 2.908 × 107 | 6.73 × 108 5.74 × 108 | 6.416 × 103 5.063 × 103 | 3.112 × 104 2.122 × 104 |
F16Mean STD | 3.248 × 102 1.027 × 102 | 9.535 × 102 1.114 × 102 | 1.178 × 103 2.349 × 102 | 3.89 × 103 6.86 × 102 | 3.329 × 102 1.440 × 102 | 1.504 × 103 3.314 × 102 |
F17Mean STD | 8.911 × 101 2.286 × 101 | 9.589 × 101 1.871 × 101 | 6.631 × 102 1.916 × 102 | 5.30 × 103 6.86 × 103 | 1.033 × 102 5.087 × 101 | 8.120 × 102 2.401 × 102 |
F18Mean STD | 2.407 × 105 3.197 × 105 | 2.153 × 104 1.184 × 104 | 6.274 × 108 6.430 × 108 | 3.27 × 107 3.07 × 107 | 1.946 × 104 1.111 × 104 | 1.120 × 105 1.001 × 105 |
F19Mean STD | 3.203 × 104 4.889 × 104 | 1.436 × 104 2.225 × 104 | 6.471 × 107 8.754 × 107 | 1.32 × 104 1.69 × 109 | 6.597 × 104 9.665 × 104 | 1.301 × 105 5.361 × 104 |
F20Mean STD | 1.701 × 102 5.579 × 101 | 2.153 × 102 4.716 × 101 | 5.419 × 102 1.330 × 102 | 8.63 × 102 1.42 × 102 | 1.854 × 102 7.896 × 101 | 7.219 × 102 2.015 × 102 |
F21Mean STD | 2.299 × 102 5.293 × 101 | 2.967 × 102 4.681 × 101 | 3.375 × 102 3.148 × 101 | 6.43 × 102 4.26 × 101 | 2.310 × 102 5.171 × 101 | 4.004 × 102 4.051 × 101 |
F22Mean STD | 1.758 × 102 5.337 × 101 | 2.091 × 102 1.524 × 101 | 1.798 × 103 5.866 × 102 | 5.25 × 103 1.01 × 103 | 1.831 × 102 2.703 × 102 | 4.001 × 103 1.701 × 103 |
F23Mean STD | 3.843 × 102 2.354 × 101 | 5.412 × 102 1.313 × 101 | 5.423 × 102 6.781 × 101 | 1.04 × 103 1.08 × 102 | 3.976 × 102 2.060 × 101 | 9.991 × 102 1.013 × 102 |
F24Mean STD | 3.145 × 102 1.416 × 101 | 3.437 × 102 8.266 × 101 | 5.939 × 102 7.436 × 101 | 1.17 × 103 2.45 × 102 | 3.870 × 102 2.521 × 101 | 1.004 × 103 9.711 × 101 |
F25Mean STD | 4.776 × 102 4.645 × 101 | 7.949 × 102 3.036 × 101 | 1.988 × 103 7.365 × 102 | 2.22 × 103 8.61 × 102 | 5.651 × 102 3.538 × 101 | 4.101 × 102 9.110 × 100 |
F26Mean STD | 6.408 × 102 3.023 × 102 | 9.175 × 102 1.623 × 102 | 2.348 × 103 3.639 × 102 | 7.93 × 103 1.12 × 103 | 9.465 × 102 6.068 × 102 | 5.832 × 103 1.112 × 103 |
F27Mean STD | 4.467 × 102 4.845 × 101 | 6.041 × 102 8.332 × 100 | 7.303 × 102 1.222 × 102 | 9.41 × 102 2.31 × 102 | 5.379 × 102 3.300 × 101 | 1.204 × 103 2.510 × 102 |
F28Mean STD | 4.913 × 102 6.521 × 100 | 5.965 × 102 9.938 × 101 | 1.323 × 103 2.043 × 102 | 3.98 × 103 8.85 × 102 | 6.153 × 102 1.794 × 102 | 5.854 × 102 5.120 × 101 |
F29Mean STD | 4.026 × 102 6.510 × 101 | 3.429 × 102 5.123 × 101 | 1.070 × 103 2.163 × 102 | 4.14 × 103 1.61 × 103 | 4.614 × 102 8.636 × 101 | 1.520 × 103 3.701 × 102 |
F30Mean STD | 3.891 × 104 8.456 × 104 | 6.647 × 105 7.482 × 104 | 1.451 × 108 1.052 × 108 | 2.24 × 108 9.25 × 107 | 7.597 × 106 9.042 × 105 | 5.371 × 105 3.104 × 105 |
(W/L/T) Rank CPU Runtime | 20/9/0 1.62 3.25 × 104 | 5/24/0 2.41 2.10 × 104 | 0/29/0 4.72 1.29 × 104 | 1/28/0 5.62 5.11 × 104 | 1/28/0 2.55 4.10 × 103 | 2/27/0 4.07 1.29 × 104 |
Table 4. Summary of non-parametric statistical results by the Wilcoxon test and Bonferroni–Dunn test.

Algorithms | ΣR+ | ΣR− | z-Value | p-Value | Sign |
---|---|---|---|---|---|
DAO vs. | AO | 21 | 8 | 2.022 | 0.043 | = |
MAO | 29 | 0 | 4.703 | 0.000 | + |
RSA | 28 | 1 | 4.249 | 0.000 | + |
WOA | 24 | 5 | 2.757 | 0.006 | = |
BSO | 25 | 4 | 3.557 | 0.000 | + |
CD (Bonferroni–Dunn) | α = 0.1: 1.1428 | | | α = 0.05: 1.2656 |
Table 5. List of the 10 benchmark functions of CEC2019 with dimensions and search ranges.
Func. No. | Functions | Dim | Search Range |
---|
F1 | Storn’s Chebyshev Polynomial Fitting Problem | 9 | [−8192, 8192] |
F2 | Inverse Hilbert Matrix Problem | 16 | [−16,384, 16,384] |
F3 | Lennard-Jones Minimum Energy Cluster | 18 | [−4, 4] |
F4 | Rastrigin’s Function | 10 | [−100, 100] |
F5 | Griewangk’s Function | 10 | [−100, 100] |
F6 | Weierstrass Function | 10 | [−100, 100] |
F7 | Modified Schwefel’s Function | 10 | [−100, 100] |
F8 | Expanded Schaffer’s F6 Function | 10 | [−100, 100] |
F9 | Happy Cat Function | 10 | [−100, 100] |
F10 | Ackley Function | 10 | [−100, 100] |
Table 6. Mean and standard deviation (STD) obtained from the objective function by the standard AO, the proposed algorithm DAO, and other metaheuristic algorithms for the CEC 2019 benchmark functions.
Function | DAO | AO | MAO | WOA | SSA | GOA |
---|
F1 Mean STD | 9.900 × 101 0.000 × 100 | 9.900 × 101 2.053 × 10−8 | 1.235 × 109 7.355 × 108 | 6.784 × 106 7.462 × 106 | 7.324 × 109 3.483 × 109 | 1.320 × 1010 1.541 × 1010 |
F2 Mean STD | 1.950 × 102 0.000 × 100 | 1.950 × 102 0.000 × 100 | 2.825 × 104 7.301 × 103 | 7.663 × 102 8.7317 × 102 | 2.001 × 102 2.079 × 10−2 | 1.739 × 103 4.084 × 102 |
F3 Mean STD | 2.948 × 102 1.321 × 100 | 2.937 × 102 1.863 × 100 | 2.862 × 102 4.426 × 10−1 | 2.951 × 102 1.923 × 100 | 2.970 × 102 1.776 × 10−15 | 2.270 × 102 8.188 × 10−12 |
F4 Mean STD | 3.441 × 102 1.376 × 101 | 3.683 × 102 9.697 × 100 | 2.445 × 102 2.501 × 101 | 3.498 × 102 2.496 × 101 | 3.423 × 101 1.077 × 101 | 3.286 × 102 1.971 × 101 |
F5 Mean STD | 4.929 × 102 4.413 × 100 | 4.981 × 102 1.826 × 10−1 | 3.059 × 102 4.894 × 101 | 4.977 × 102 4.591 × 10−1 | 5.486 × 102 8.533 × 10−1 | 8.484 × 102 8.763 × 10−1 |
F6 Mean STD | 5.918 × 102 1.787 × 100 | 5.944 × 102 1.440 × 100 | 5.999 × 102 9.224 × 10−1 | 5.919 × 102 1.751 × 100 | 5.986 × 102 8.533 × 10−1 | 8.484 × 102 8.763 × 101 |
F7 Mean STD | 7.152 × 102 2.553 × 102 | 3.011 × 102 2.936 × 102 | 2.217 × 103 2.924 × 102 | 7.640 × 102 3.001 × 102 | 4.728 × 102 9.776 × 10−1 | 5.007 × 102 2.191 × 102 |
F8 Mean STD | 7.953 × 102 1.998 × 10−1 | 8.957 × 102 3.015 × 10−1 | 7.644 × 102 2.387 × 10−1 | 5.953 × 102 3.216 × 10−1 | 9.088 × 102 6.135 × 10−1 | 8.587 × 102 4.300 × 10−1 |
F9 Mean STD | 8.985 × 102 1.639 × 10−1 | 9.365 × 102 1.427 × 10−1 | 8.993 × 103 8.679 × 10−1 | 8.985 × 103 2.006 × 10−1 | 2.416 × 103 5.956 × 10−1 | 9.664 × 102 1.827 × 10−1 |
F10 Mean STD | 9.785 × 102 7.688 × 10−1 | 9.996 × 102 4.637 × 100 | 9.852 × 102 1.350 × 10−1 | 9.953 × 102 1.330 × 10−1 | 2.101 × 103 3.562 × 101 | 9.923 × 102 3.718 × 10−4 |
(W/L/T) Rank CPU Runtime | 5/5/2 2.65 3.11 × 104 | 3/7/2 3.25 3.02 × 104 | 1/9/0 3.15 2.16 × 104 | 1/9/0 3.65 5.02 × 104 | 1/9/0 4.30 4.22 × 103 | 1/9/0 4.00 1.26 × 104 |
Table 7. Summary of non-parametric statistical results obtained from the Wilcoxon test and Bonferroni–Dunn test.
Algorithms | ΣR+ | ΣR− | z-Value | p-Value | Sign |
---|
DAO vs. | AO | 6 | 2 | 1.260 | 0.208 | = |
MAO | 5 | 5 | 0.866 | 0.386 | = |
WOA | 6 | 3 | 1.599 | 0.110 | = |
MPA | 8 | 2 | 1.478 | 0.139 | = |
GOA | 7 | 3 | 1.478 | 0.139 | = |
Table 8. Comparison of DAO and other algorithms for the CBD problem.

Algorithms | z1 | z2 | z3 | z4 | z5 | Optimum Weight | CPU Runtime (s) |
---|---|---|---|---|---|---|---|
DAO | 6.0112 | 5.1211 | 4.8221 | 3.2114 | 2.1510 | 1.3302 | 1.986 |
COA [48] | 6.0172 | 5.3071 | 4.4912 | 3.5081 | 2.1499 | 1.3999 | 2.001 |
AO [9] | 5.8492 | 5.5413 | 4.3778 | 3.5978 | 2.1026 | 1.3596 | 1.926 |
ROA [49] | 6.0156 | 5.1001 | 4.303 | 3.7365 | 2.3183 | 1.3456 | 1.256 |
GWO [50] | 5.9956 | 5.4121 | 4.5986 | 3.5689 | 2.3548 | 1.3586 | 1.112 |
WOA [6] | 5.8393 | 5.1582 | 4.9917 | 3.693 | 2.2275 | 1.3467 | 0.606 |
SCA [51] | 5.9264 | 5.9285 | 4.5223 | 3.3267 | 1.9923 | 1.3581 | 1.111 |
Table 9. Comparison of DAO and other algorithms for the WBD problem.

Algorithms | h | l | t | b | Optimum Cost | CPU Runtime (s) |
---|---|---|---|---|---|---|
DAO | 0.2138 | 3.2154 | 9.0275 | 0.2052 | 1.6960 | 2.410 |
COA [48] | 0.2456 | 3.2563 | 9.0403 | 0.2057 | 1.6963 | 2.031 |
AO [9] | 0.1631 | 3.3652 | 9.0202 | 0.2067 | 1.6566 | 2.399 |
SSA [44] | 0.2057 | 3.4714 | 9.0366 | 0.2057 | 1.7249 | 2.121 |
WOA [6] | 0.2054 | 3.4843 | 9.0374 | 0.2062 | 1.7305 | 1.037 |
Table 10. Comparison of DAO and other algorithms for the PVD problem.

Algorithms | Ts | Th | R | L | Optimum Cost | CPU Runtime (s) |
---|---|---|---|---|---|---|
DAO | 0.7885 | 0.3254 | 42.3275 | 189.892 | 5877.1000 | 2.432 |
COA [48] | 0.7437 | 0.3705 | 40.3238 | 199.9414 | 5735.2488 | 2.356 |
AO [9] | 1.0540 | 0.1828 | 59.6219 | 38.8050 | 5949.2258 | 2.222 |
GWO [50] | 0.8125 | 0.4345 | 42.0891 | 176.7587 | 6051.5639 | 1.345 |
ROA [49] | 0.7295 | 0.2226 | 40.4323 | 198.5537 | 5311.9175 | 2.252 |
RSA [5] | 0.8071 | 0.4426 | 43.6335 | 142.5359 | 6213.8317 | 1.125 |
WOA [6] | 0.8125 | 0.4375 | 42.0982 | 76.6389 | 6059.7410 | 0.872 |