1. Introduction
Optimization problems, characterized by high nonlinearity, multimodality, and large-scale search spaces, have long posed significant challenges across scientific and engineering domains due to the proliferation of local optima. In practical applications, increasing complexity and dynamism demand algorithms with enhanced performance and adaptability. Consequently, developing efficient and highly adaptable optimization algorithms has become key to improving system efficiency and decision-making quality.
In recent years, metaheuristic algorithms have demonstrated unique advantages in handling complex large-scale problems by combining global exploration with local exploitation. They have been successfully applied in various domains, including computational intelligence and data mining [
1,
2,
3,
4,
5,
6], transportation [
7], task planning [
8], resource management [
9,
10], and UAV path planning [
11,
12]. (1) In computational intelligence and data mining, Manoharan et al. [
1] introduced an enhanced weighted K-means Grey Wolf Optimizer to enhance clustering performance. Tang et al. [
2] developed EDECO, an Enhanced Educational Competition Optimizer [
13] that uses distribution-based replacement and dynamic distance balancing for efficient numerical optimization. Jia et al. [
3] improved the Slime Mould Algorithm (SMA) [
14] with a Compound Mutation Strategy (CMS) and Restart Strategy (RS) for feature selection. Zouache et al. [
4] drew inspiration from quantum computing to combine the Firefly Algorithm (FA) [
15] and Particle Swarm Optimization (PSO) [
16] for feature selection tasks. (2) In transportation, Fang et al. [
7] proposed the Discrete Wild Horse Optimizer (DWHO) to solve the Capacitated Vehicle Routing Problem (CVRP) [
17]. (3) In task planning, Zhong et al. [
8] proposed a novel Independent Success History Adaptation Competitive Differential Evolution (ISHACDE) algorithm to address functional optimization problems and Space Mission Trajectory Optimization (SMTO). (4) In resource management, Abd et al. [
9] presented an improved White Shark Optimizer [
18] to optimize hybrid photovoltaic, wind turbine, biomass and hydrogen storage systems. Zhang et al. [
10] proposed MFSMA, a Multi-Strategy Slime Mould Algorithm, for optimal microgrid scheduling. (5) In UAV path planning, Xu et al. [
11] introduced DBO-AWOA, an Adaptive Whale Optimization Algorithm that combines chaotic mapping, nonlinear convergence factors, adaptive inertia mechanisms, and dung beetle-inspired reproduction behaviors for 3D route planning. Liang et al. [
12] developed the Improved Spider-Bee Optimizer (ISWO) for UAV 3D path planning.
In general, metaheuristic algorithms offer a rich toolkit to solve complex optimization challenges by emulating natural phenomena, animal behaviors, physical laws, and social processes. These algorithms fall into evolution-based, physics-based, swarm-based, and human-based categories [
19].
Evolution-based algorithms draw inspiration from Darwin’s theory of natural selection and genetic inheritance. Prominent examples include the Genetic Algorithm (GA) [
20], Differential Evolution (DE) [
21], Genetic Programming (GP) [
22], and Evolution Strategies (ES) [
23]. Among these, the GA and DE are particularly popular due to their strong adaptability and straightforward implementation.
Physics-based algorithms are derived from various phenomena and principles in physics. Simulated Annealing (SA) [
24] mimics the annealing process in solids to achieve global search, the Sine–Cosine Algorithm (SCA) [
25] uses the periodic oscillations of trigonometric functions to balance exploration and exploitation, Young’s Double-Slit Experiment (YDSE) Optimizer [
26] is inspired by the interference patterns observed in Young’s double-slit experiment, and the Sinh Cosh Optimizer (SCHO) [
27] leverages the mathematical properties of hyperbolic sine and hyperbolic cosine functions to drive its exploration and exploitation mechanisms.
Human-behavior-based algorithms simulate educational or social activities to drive optimization. Teaching–Learning-Based Optimization (TLBO) [
28] models the influence mechanism between teachers and learners, the Attraction–Repulsion Optimization Algorithm (AROA) [
29] emulates the balance between attractive and repulsive forces observed in nature, using this mechanism as the basis for its search process, Differentiated Creative Search (DCS) [
30] balances divergent and convergent thinking within a team-based framework, fostering a continuously learning and adaptive optimization environment, and the Football Team Training Algorithm (FTTA) [
31] draws on the training routines and tactical drills of football teams to formulate its optimization strategy.
Swarm-based algorithms are inspired by collective foraging and collaboration behaviors in nature. Particle Swarm Optimization (PSO) [
16] emulates bird flocking and foraging, the Whale Optimization Algorithm (WOA) [
32] mimics the bubble net feeding strategy of humpback whales, the Grey Wolf Optimizer (GWO) [
33] draws on the hierarchical pack hunting of grey wolves, and the Griffon Vulture Optimization Algorithm (GVOA) [
34] models the collective intelligent foraging behavior of griffon and vulture species in nature.
Abualigah et al. [
35] were the first to formalize the Aquila's "high-altitude cruise and dive hunting" strategy into a novel metaheuristic optimization algorithm, known as the Aquila Optimizer (AO). By alternating between global cruising and precise diving behaviors, the AO achieves rapid convergence while maintaining a simple, easy-to-implement structure. Its outstanding performance in numerous benchmark tests has made the AO a research focal point and led to its widespread adoption for solving complex optimization problems. However, the AO exhibits several limitations: it tends to converge prematurely, becoming trapped in local optima and failing to explore the global search space sufficiently, which can result in suboptimal solutions; its convergence in complex high-dimensional problems can be slow, requiring more iterations to reach the best results; and its scalability is limited, making it challenging to address large-scale optimization tasks with many variables and constraints [
36]. To address these issues, researchers have proposed various improved AO variants across optimization domains, including hybrids with other techniques and novel movement strategies. Compared to the original AO, these enhanced algorithms demonstrate superior adaptability and performance on a broader range of complex problems. Ekinci et al. [
37] developed an enhanced AO algorithm (enAO) by innovatively integrating an improved Opposition-Based Learning (OBL) mechanism [
38] with the Nelder–Mead (NM) simplex search method [
39]. Kujur et al. [
40] introduced a Chaotic Aquila Optimizer tailored for demand response scheduling in grid-connected residential microgrid (GCRMG) systems. Pashaei et al. [
41] proposed a Mutated Binary Aquila Optimizer (MBAO) that incorporates a Time-Varying Mirrored S-shaped (TVMS) transfer function. Designed as a new wrapper-based gene selection method, MBAO aims to identify the optimal subset of informative genes. Inspired by the superior performance of the Arithmetic Optimization Algorithm (AOA) [
42] and the Aquila Optimizer (AO), Zhang et al. [
43] proposed a hybrid algorithm that integrates the strengths of both, denoted AOAAO.
Although various AO variants have addressed some of the shortcomings of the original AO, there are still issues that need to be resolved. Xiao et al. [
44] introduced an enhanced hybrid metaheuristic, IHAOAVOA, which combines the strengths of the AO and the African Vultures Optimization Algorithm (AVOA) [
45]. Although IHAOAVOA demonstrates marked improvements over both the AO and AVOA on various benchmark functions, its increased computational cost remains a potential drawback, and there is still room for performance gains on certain benchmark functions. To accelerate convergence, Wirma et al. [
46] integrated chaotic mapping into the AO and employed a single-stage evolutionary strategy to balance exploration and exploitation. However, compared to other AO variants, this approach incurs additional computational overhead and memory usage for chaotic variables, resulting in higher CPU time and resource demands. Zhao et al. [
47] developed a multiple-update mechanism employing heterogeneous strategies to accelerate the late-stage convergence of the AO and enhance its overall performance. However, this approach exhibits suboptimal results when applied to real-world engineering problems. Zhao et al. [
48] simplified the Aquila Optimizer to enhance its convergence speed, yielding the simplified Aquila Optimizer (IAO). On benchmark functions, the IAO outperforms the original AO and other comparison algorithms; however, its performance remains suboptimal when applied to real-world engineering problems. This suggests that, although the AO and its variants have undergone continual refinement and perform excellently on benchmark functions, they still exhibit limitations: they often underperform on real-world engineering problems, and their performance on certain test functions leaves room for further improvement. Furthermore, we observe that many AO variants still exhibit performance bottlenecks when tackling high-dimensional optimization problems.
To address the shortcomings of the AO and its variants, this paper proposes the MSAO, a multi-strategy enhanced variant designed to overcome premature convergence to local optima and degraded performance in high-dimensional search spaces. In summary, the MSAO introduces three key innovations: a random sub-dimension update mechanism to effectively decompose and explore high-dimensional search spaces; the integration of the memory strategy and dream-sharing mechanism of the DOA, allowing individuals to retain historical elite information and achieving a coordinated balance between global exploration and local exploitation; and the adoption of adaptive parameter control and dynamic opposition-based learning to refine the original AO's update rules, thus accelerating convergence and improving solution precision. The main contributions of this work can be summarized as follows:
A random sub-dimension update mechanism is incorporated to effectively decompose high-dimensional search spaces. The memory strategy and dream-sharing strategy from the DOA are fused to enable individual memory retention and inter-population information sharing, achieving a coordinated balance between global exploration and local exploitation.
Adaptive parameter control and dynamic opposition-based learning are integrated to further refine the original AO update rules within its multi-strategy framework.
On the CEC2017 benchmark suite, the MSAO significantly outperforms the original AO and seven other state-of-the-art algorithms. Ablation studies confirm the independent contributions of each enhancement, and outstanding results in five real-world engineering optimization cases underscore its practical applicability.
3. The Proposed MSAO
This section presents the Multi-Strategy Aquila Optimizer (MSAO), developed to address the original AO’s limitations in convergence accuracy and its propensity to become trapped in local optima when solving high-dimensional optimization problems. To achieve this, we integrate several critical enhancements within the AO framework to markedly strengthen its global exploration and local exploitation capabilities. Each of these improvements will be described in detail in the following sections.
In all mathematical expressions of this algorithm, vector variables are shown in boldface. The symbol “×” denotes multiplication between scalars or between a scalar and a vector and the symbol “∗” denotes element-wise multiplication between vectors.
3.1. Exploration Phase
During the exploration phase, the MSAO implements a random sub-dimension update mechanism by partitioning the population into five subpopulations. First, the positions in each subpopulation are reinitialized to that subpopulation's best individual position from the previous iteration using the memory strategy. Then, for each subpopulation q (q = 1, …, 5), a number of dimensions is randomly selected from the set {1, …, D}. For each individual in the subpopulation, only the components in these selected dimensions are updated according to the enhanced exploration rule, while all other dimensions remain unchanged. This procedure is applied sequentially across individuals, ensuring that all subpopulations collaboratively explore their respective subspaces within a single iteration, thereby increasing global search diversity and accelerating convergence.
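As a concrete illustration of this mechanism, the NumPy sketch below (all names are ours, not the paper's) updates only k randomly chosen components of a position from a candidate vector and leaves the remaining dimensions untouched:

```python
import numpy as np

def update_sub_dimensions(x, candidate, k, rng):
    """Copy only k randomly chosen components of `candidate` into `x`.

    x, candidate : 1-D arrays of equal length D
    k            : number of sub-dimensions to update (1 <= k <= D)
    rng          : numpy random Generator
    """
    dims = rng.choice(x.size, size=k, replace=False)  # random sub-dimension indices
    out = x.copy()
    out[dims] = candidate[dims]  # update only the selected dimensions
    return out, dims

rng = np.random.default_rng(0)
x = np.zeros(10)
cand = np.ones(10)
new_x, dims = update_sub_dimensions(x, cand, 3, rng)
# exactly 3 components of new_x change; the other 7 stay at their old values
```

In the actual algorithm, `candidate` would be the position produced by the phase-specific update rule, so the rule's effect is confined to the selected subspace.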
3.1.1. Memory Strategy
The memory strategy, originating from the DOA [49], is defined mathematically in Equation (12). For any individual in subpopulation q, its position at the beginning of iteration t + 1 is reset to the best solution found by that subpopulation in iteration t:

$\mathbf{X}_i^{t+1} = \mathbf{X}_{best,q}^{t}$, (12)

where $\mathbf{X}_i^{t+1}$ denotes the position of the ith individual in generation t + 1 and $\mathbf{X}_{best,q}^{t}$ represents the best individual in the qth subpopulation during generation t.
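A minimal sketch of this reset, assuming positions are stored one individual per row (our convention, not the paper's):

```python
import numpy as np

def memory_reset(subpop, best_q):
    """Memory strategy: every individual in the subpopulation restarts
    the iteration from the subpopulation's best-known position."""
    return np.tile(best_q, (subpop.shape[0], 1))

pop = np.arange(12, dtype=float).reshape(4, 3)   # 4 individuals, D = 3
best = np.array([9.0, 9.0, 9.0])
reset = memory_reset(pop, best)
# every row of `reset` now equals `best`
```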
3.1.2. The Improved Expanded Exploration
To accelerate convergence and enhance solution accuracy, an adaptive convergence parameter
is introduced during the expanded exploration phase. This parameter enables gradual decay in the early stages of iteration, facilitating broad global exploration [
50]. As the iteration progresses, the decay rate increases, promoting faster convergence. In contrast, the original AO’s convergence parameter may lead to premature convergence. Based on this nonlinear factor, the improved update formula for the expanded exploration is presented in Equation (
14).
where
denotes the value of the
ith individual in the
jth dimension at iteration
.
represents the value of the best individual in group
q in the
jth dimension at iteration
t.
is the mean position of the population in the
jth dimension at iteration
t.
3.1.3. The Improved Narrowed Exploration
To better align the narrowed exploration with the multi-strategy framework, this study replaces the original random perturbation mechanism with a dynamic opposition-based learning strategy [51], which more effectively guides the algorithm's exploration and exploitation of the search space. The control factor for this strategy is defined in Equation (15), the dynamic opposition-based learning formula is given in Equation (16), and the resulting improved narrowed exploration update rule is presented in Equation (17), where $\tilde{x}_{j}^{t}$ denotes the position in the jth dimension at iteration t obtained by the dynamic opposition-based learning strategy.
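Equations (15)–(17) are not reproduced in this excerpt; the sketch below illustrates the general idea of dynamic opposition-based learning under an assumed control factor. Classic OBL reflects a point through the centre of the search interval, and a time-dependent random weight makes the reflection "dynamic":

```python
import numpy as np

def dynamic_obl(x, lb, ub, t, T, rng):
    """Sketch of dynamic opposition-based learning.

    The control factor w below is an assumption, not the paper's Eq. (15):
    it blends the current point with its classic opposite, with the blend
    strength growing as the run progresses.
    """
    opposite = lb + ub - x          # classic opposite point
    w = rng.random() * (t / T)      # assumed dynamic control factor in [0, t/T)
    return x + w * (opposite - x)

rng = np.random.default_rng(1)
x = np.array([20.0, -50.0])
x_op = dynamic_obl(x, -100.0, 100.0, 500, 1000, rng)
# each component lies between the original point and its opposite
```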
3.1.4. Dream-Sharing Strategy
The dream-sharing strategy, derived from the DOA [
49], enables each individual to randomly "borrow" positional information from other members of its subpopulation along selected dimensions, thereby enhancing the algorithm's ability to escape local optima. This strategy is executed in parallel with the improved exploration update rules. Its mathematical formulation is given in Equation (18).
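A sketch of the borrowing step described above (the number of shared dimensions, half of D here, is our assumption, not the paper's Equation (18)):

```python
import numpy as np

def dream_sharing(pop, i, rng):
    """Individual i 'borrows' the values of a few randomly chosen
    dimensions from a random peer in its subpopulation."""
    n, d = pop.shape
    peers = [p for p in range(n) if p != i]
    donor = peers[rng.integers(len(peers))]          # random donor != i
    dims = rng.choice(d, size=max(1, d // 2), replace=False)
    out = pop[i].copy()
    out[dims] = pop[donor][dims]                     # borrow along those dimensions
    return out

rng = np.random.default_rng(2)
pop = np.vstack([np.full(6, k, dtype=float) for k in range(5)])
shared = dream_sharing(pop, 0, rng)
# individual 0 (all zeros) now carries some components of a single donor
```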
3.2. Exploitation Phase
During the exploitation phase, the MSAO continues to employ the random sub-dimension update mechanism, but without subpopulation partitioning. Specifically, the algorithm first reinitializes all individual positions to that of the best global solution from the previous iteration using the memory strategy. It then updates the selected sub-dimensions of each individual according to the enhanced exploitation update rules, while keeping the other dimensions unchanged, thereby preserving global alignment while enabling localized fine-grained exploitation.
3.2.1. Memory Strategy
During the exploitation phase, each individual has its position at the beginning of iteration t + 1 reset to the best global solution from iteration t. Its mathematical formulation is as follows [49]:

$\mathbf{X}_i^{t+1} = \mathbf{X}_{best}^{t}$, (19)

where $\mathbf{X}_{best}^{t}$ represents the best global individual during generation t.
3.2.2. The Improved Expanded Exploitation
To accelerate iterations in high-dimensional settings, we streamline the expanded exploitation update strategy and introduce an adaptive parameter. This enhancement maintains a minimalist parameter design while improving numerical stability and high-dimensional handling [50]. The definition of this parameter is provided in Equation (20), and the refined expanded exploitation update rule is given in Equation (21).
3.2.3. The Improved Narrowed Exploitation
The update strategy for the narrowed exploitation phase remains largely unchanged, apart from the addition of the random sub-dimension update mechanism to enhance high-dimensional performance. Its mathematical formulation is given in Equation (22).
3.3. Parameters Setting
The remaining key parameter settings for the MSAO are as follows. The threshold for switching between the exploration and exploitation phases is critical to overall performance, and the default value of the original AO is unsuitable; therefore, in subsequent experiments, we conduct a sensitivity analysis to identify the optimal threshold, making the most of prior empirical knowledge. In this study, the phase-switching parameter is set to 0.6.
The numbers of randomly selected sub-dimensions in the exploration and exploitation phases are computed using Equations (23) and (24), respectively.
The parameter u balances the enhanced exploration update rules and the dream-sharing strategy during the exploration phase: if rand < u, the algorithm applies the update rules; otherwise, it executes the dream-sharing strategy. Following the empirical setting in the DOA, u is set to 0.9.
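The dispatch between the two behaviours can be sketched as follows (the strategy names are placeholders; only the threshold u = 0.9 comes from the text):

```python
import numpy as np

def choose_strategy(rng, u=0.9):
    # rand < u -> regular update rule; otherwise dream-sharing (u from the DOA)
    return "update_rule" if rng.random() < u else "dream_sharing"

rng = np.random.default_rng(3)
picks = [choose_strategy(rng) for _ in range(10_000)]
frac = picks.count("update_rule") / len(picks)  # should sit close to 0.9
```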
3.4. The Detail of MSAO
The pseudo-code for the proposed MSAO algorithm is presented in Algorithm 2, and its flowchart is illustrated in
Figure 2.
Algorithm 2: Pseudo-code of MSAO
The algorithm begins by randomly generating N individuals within the search space and evenly dividing them into five subpopulations to ensure initial diversity and enable parallel local searches. The iteration counter t is initialized to 1. The fitness of all individuals is then evaluated, and the best global solution $\mathbf{X}_{best}$ is recorded to guide the subsequent memory strategy and positional updates. During the first 60% of the iterations (the exploration phase), the algorithm extracts the best solution $\mathbf{X}_{best,q}$ of each subpopulation and applies the memory strategy to reset all individuals in that subpopulation to this best solution, thereby rapidly exploiting promising regions. It then randomly switches between the expanded exploration update strategy (Equation (14)) and the narrowed exploration update strategy (Equation (17)) to balance global and local search, and activates the dream-sharing mechanism (Equation (18)) to enhance diversity through information exchange between subpopulations. In the remaining 40% of the iterations (the exploitation phase), the algorithm focuses on the best global solution $\mathbf{X}_{best}$, using the memory strategy (Equation (19)) to pull all individuals back into its neighborhood and fully utilize global information. It selects between the expanded exploitation update strategy (Equation (21)) and the narrowed exploitation update strategy (Equation (22)) based on a random probability, flexibly adjusting step sizes for a finer-grained local search. After each position update, boundary handling ensures feasibility. If the termination criterion is met, the final best global solution $\mathbf{X}_{best}$ is returned; otherwise, t is incremented and the process repeats.
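The workflow above can be condensed into a runnable skeleton. This is not the MSAO itself: the phase-specific update rules (Equations (14), (17), (21), and (22)) are replaced by simple Gaussian perturbations, and all scale constants are our own assumptions; only the phase structure, memory resets, dream-sharing stand-in, and boundary handling follow the description:

```python
import numpy as np

def msao_sketch(f, lb, ub, n=30, dim=10, T=200, switch=0.6, u=0.9, seed=0):
    """Skeleton of the described workflow, with placeholder update rules."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (n, dim))
    fit = np.apply_along_axis(f, 1, pop)
    g_best, g_fit = pop[np.argmin(fit)].copy(), float(fit.min())
    groups = np.array_split(np.arange(n), 5)           # five subpopulations
    for t in range(1, T + 1):
        explore = t <= switch * T                      # first 60%: exploration
        for g in (groups if explore else [np.arange(n)]):
            # memory strategy: anchor on the (sub)population best
            anchor = pop[g[np.argmin(fit[g])]].copy() if explore else g_best
            scale = (0.1 if explore else 0.02) * (1 - t / T) * (ub - lb)
            for i in g:
                x = anchor.copy()
                if explore and rng.random() >= u:      # dream-sharing stand-in
                    donor = g[rng.integers(g.size)]
                    d = rng.integers(dim)
                    x[d] = pop[donor][d]
                else:                                  # stand-in update rule
                    x += scale * rng.normal(0.0, 1.0, dim)
                pop[i] = np.clip(x, lb, ub)            # boundary handling
        fit = np.apply_along_axis(f, 1, pop)
        if fit.min() < g_fit:                          # track the global best
            g_best, g_fit = pop[np.argmin(fit)].copy(), float(fit.min())
    return g_best, g_fit

# quick sanity check on a 5-D sphere function
best, val = msao_sketch(lambda v: float((v ** 2).sum()), -100.0, 100.0, dim=5)
```

Even with placeholder update rules, the two-phase structure (broad subpopulation search, then shrinking perturbations around the global best) drives the sphere fitness far below its random-initialization level.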
Figure 1 and
Figure 2 illustrate that both the DOA and MSAO divide the algorithmic workflow into exploration and exploitation phases, with the MSAO adopting the DOA’s memory strategy and dream-sharing strategy to enhance performance. However, they differ markedly in their primary update procedures. The DOA relies primarily on the forgetting and supplementation strategy in both phases. In contrast, the MSAO adopts two distinct update rules in each phase. During the exploration phase, it utilizes Expanded Exploration and Narrowed Exploration, while in the exploitation phase, it applies Expanded Exploitation and Narrowed Exploitation. This richer set of phase-specific update mechanisms further enhances search efficiency and solution accuracy.
3.5. Complexity Analysis of MSAO
The time complexity of the MSAO is determined by the population size N, the number of iterations T, the problem dimensionality D, and the costs of initialization, fitness evaluations, and individual updates. Initializing the population incurs $O(N \times D)$ complexity, whereas the fitness evaluation complexity depends on the specific problem and is not detailed here. Individual updates involve $O(T \times N \times D)$ for position updates and $O(T \times N)$ for the other per-generation operations. Hence, the overall time complexity of the MSAO can be expressed as $O(N \times D + T \times N \times D) = O(T \times N \times D)$.
4. Experimental Results and Analysis
In this experiment, the MSAO is evaluated on 29 benchmark functions from the CEC2017 suite, excluding the unstable F2 function to ensure consistency of the results [
52], detailed in
Table 1. All benchmark functions are defined on the search space [−100, 100], and the value reported in the table denotes the theoretical minimum of each test function. To assess adaptability across problem scales, we conduct experiments in 10, 30, 50, and 100 dimensions, with a fixed population size of 30 and a maximum of 1000 iterations. Each configuration is run independently 30 times to evaluate reliability. The MSAO is compared with five AO variants and six state-of-the-art metaheuristic algorithms: the original AO [
35] and its enhanced variants LOBLAO [
53], TEAO [
54], SGAO [
55], MMSIAO [
56], and IAO [
57]; the classic WOA [
32] and HHO [
58]; the novel AOO [
59] and DOA [
49]; and the CEC-winning LSHADE [
60]. The algorithm parameters are set according to original sources or widely accepted values, detailed in
Table 2, and all methods use the same population size and iteration count to ensure a fair and rigorous comparison.
4.1. Sensitivity Analysis
This section presents a systematic sensitivity and performance evaluation of the phase-switching threshold between exploration and exploitation. This threshold critically determines the allocation of computational effort between early global search and later local refinement, affecting the convergence speed and final precision [
61]. To investigate its impact on the MSAO’s performance, we test threshold values of 0.5, 0.6, 0.7, 0.8, and 0.9, corresponding to 50%, 60%, 70%, 80%, and 90% of the total iterations. The results are summarized in
Table 3, with the best-performing thresholds for each dimension highlighted in bold. Here, “w/t/l” stands for win/tie/loss; for example, the “2/6/21” in the first row and second column indicates that, with the phase-switch threshold set to 0.5 in the 10D CEC2017 tests, the algorithm outperformed all competitors on 2 of the 29 benchmark functions, tied on 6, and underperformed on 21.
The data reveal that increasing the threshold from 0.5 to 0.6 yields a marked performance improvement, whereas further increases lead to degradation. Although both 0.6 and 0.7 perform well, 0.6 offers the most stable and overall superior performance across dimensions. Consequently, we set the phase switching threshold to 0.6 to achieve an optimal balance between the two phases.
4.2. Ablation Experiments
To validate the individual contributions of each enhancement in the MSAO, we conduct ablation experiments. The MSAO comprises three key components: the memory strategy combined with the dream-sharing strategy, the random sub-dimension update mechanism, and the tailored update rules that leverage these mechanisms. We create multiple variants of the algorithm by selectively including or excluding each component to assess their respective impacts on performance and to examine their synergistic effects. The results show that omitting any single strategy degrades performance, highlighting that the three components work cooperatively to achieve the MSAO’s optimal performance.
Table 4 summarizes the combinations of strategies for each variant, where “∘” indicates inclusion and “×” indicates exclusion of a given component.
Figure 3,
Figure 4,
Figure 5 and
Figure 6 present heatmaps of Friedman test rankings for these variants. In these heatmaps, darker cells correspond to better average rankings and, thus, superior performance. In
Figure 3 and
Figure 4, the original AO shows the lightest cells, while each single-strategy variant exhibits darker cells, indicating performance improvements over the AO. Two-strategy combinations yield even darker cells, whereas the full three-strategy integration, the MSAO, shows the darkest cells, denoting the best performance. This shows that the three strategies synergize in more than an additive fashion, mutually reinforcing each other under the complex characteristics of different test functions to substantially enhance global exploration and local exploitation. Similar trends are observed when the dimensionality increases to 50 and 100, confirming that the strategy combination remains highly effective in high-dimensional settings. In summary, analysis of cell darkness (average Friedman rankings) leads to three conclusions: (1) Any single strategy surpasses the original AO, (2) multi-strategy fusion further elevates performance, and (3) the fully integrated three-strategy MSAO achieves the top results, validating the effectiveness of the multi-strategy cooperative optimization framework.
4.3. Qualitative Analysis
To further validate the MSAO, we perform qualitative analyses on unimodal, simple multimodal, hybrid, and composition functions from the CEC2017. The results are shown in
Figure 7.
Four visualization metrics are used to intuitively assess performance: the search history, the average fitness curve, the first-dimension trajectory, and the convergence curve of the best candidate solution. In the search history plots, red dots denote global optima and blue dots indicate the best solution found at each iteration. These plots show that the MSAO explores effectively, using its memory strategy for rapid convergence and demonstrating strong global exploration and local exploitation capabilities. The average fitness curve exhibits a pronounced steep decline in the early phase, indicating that the population, leveraging the memory strategy, dream-sharing strategy, and enhanced position-update mechanisms, rapidly escapes local optima and converges efficiently. In the mid-to-late stages, the curve levels off, demonstrating that the population has locked onto a global or near-global optimum region and, aided by the random sub-dimension update mechanism and refined update rules, performs fine-grained local searches to produce high-quality solutions. The first-dimension trajectory illustrates position oscillations: the initial rapid oscillations reflect the random sub-dimension update mechanism swiftly identifying the global optimum region, and the subsequently reduced oscillations indicate the fine-tuned search. The convergence curves confirm that the MSAO quickly reduces fitness values early on and stabilizes at high-quality solutions, further validating its efficiency.
4.4. Performance Analysis of MSAO and AO’s Variants in CEC2017
In this section, we compare the MSAO against five AO variants on the CEC2017 to verify the MSAO’s performance.
Table 5,
Table 6,
Table 7 and
Table 8 summarize the results for 10D, 30D, 50D, and 100D.
Across the four dimensions (10D, 30D, 50D, and 100D), the MSAO achieves the lowest average fitness on most of the benchmark functions: 25 (86%) in 10D, 22 (76%) in 30D, 23 (79%) in 50D, and 28 (97%) in 100D. Furthermore, the MSAO consistently ranks first in average ranking, at 1.34, 1.28, 1.34, and 1.03 in these dimensions, followed by the LOBLAO (second, at 2.34, 2.41, 2.48, and 2.17) and the TEAO (third, at 3.41, 3.72, 3.48, and 3.48). Given their strong performance and status as recent AO variants, the LOBLAO and TEAO will serve as representative AO variants for comparative evaluation in subsequent experiments. Furthermore, the MSAO exhibits lower standard deviations in all dimensions, indicating more stable convergence. In summary, the MSAO not only achieves superior average fitness across most test functions but also demonstrates higher stability, outperforming current state-of-the-art AO variants.
4.5. Performance Analysis of MSAO in CEC2017
In this section, we provide a comprehensive analysis of the MSAO’s performance compared to eight leading optimization algorithms in the CEC2017 benchmark suite.
Table 9,
Table 10,
Table 11 and
Table 12 summarize results for 10D, 30D, 50D, and 100D problems.
Figure 8 shows the convergence curves and
Figure 9 shows the box plots [
62].
In 10D and 30D (
Table 9 and
Table 10), the MSAO achieves the best average fitness in 16 (55%) and 20 (69%) of the 29 benchmark functions, respectively. Specifically, for the unimodal functions (F1–F3), the MSAO ranks first on F1 in 10D and on F3 in 30D. For the simple multimodal functions (F4–F10), it dominates with a win rate of 86% in 10D, which decreases slightly to 57% in 30D. For the hybrid functions (F11–F20), its win rate increases from 30% in 10D to 90% in 30D. For the composition functions (F21–F30), the MSAO achieves 60% wins in 10D and 70% in 30D. For more than half of the low-dimensional problems, the MSAO not only finds lower objective values than all competing algorithms but also maintains consistency with smaller standard deviations, demonstrating its exceptional ability to quickly and reliably locate high-quality solutions in low-dimensional search spaces. In the higher dimensions of 50D and 100D (
Table 11 and
Table 12), the MSAO achieves the best average fitness in 20 (69%) and 21 (72%) of the 29 benchmark functions, respectively. In 50D, the MSAO ranks first on F5, F6, F7, F8, and F10 (71%) among the simple multimodal functions. For the hybrid functions (F11–F20), it takes the top spot on F12, F13, F15, F16, F17, F18, F19, and F20 (80%). Among the composition functions (F21–F30), the MSAO wins on F21, F22, F23, F24, F26, F29, and F30 (70%). Although it does not claim first place on the unimodal functions F1 and F3, it secures second on F1. In 100D, the win rates remain 71% for the simple multimodal and 70% for the composition functions, while the hybrid-function win rate rises to 90%. This outstanding result in high dimensions, where the MSAO wins more than two-thirds of all test functions, demonstrates that the integration of the three strategies significantly enhances global search capability in high dimensions. Moreover, the MSAO shows lower standard deviations than the original AO and the other competitors in these experiments, confirming its exceptional stability and robustness in high-dimensional optimization problems.
As shown in
Figure 8, the MSAO’s convergence curves in 10D, 30D, 50D, and 100D exhibit markedly steeper declines, faster convergence speeds, and lower fitness values than its counterparts. In the initial phase, the MSAO’s rapid descent is based on its memory strategy. By retaining and leveraging elite solutions from previous iterations, the algorithm quickly targets high-potential regions and avoids aimless exploration. Concurrently, the random sub-dimension update mechanism breaks free from full-dimensional search constraints, maintaining robust global exploration in high-dimensional landscapes and enabling rapid escape from local optima. In the mid-to-late stages, the MSAO’s curve levels off with significantly reduced oscillations compared to other methods, indicating that near convergence, its refined update rules, such as adaptive parameter control and dynamic opposition learning, effectively shrink step sizes for precise local exploitation, thus securing superior solutions and preventing premature convergence. In
Figure 9, the MSAO’s boxes are generally lower, narrower, and more concentrated, further confirming its superiority in the consistency and stability of the results.
In summary, the MSAO not only significantly outperforms the original AO and other algorithms on low-dimensional optimization problems but also demonstrates exceptional performance and robustness in high-dimensional optimization problems. The experiments confirm that the memory strategy combined with dream-sharing strategy, the random sub-dimension update mechanism, and the tailored update rules that leverage these mechanisms effectively address the limitations of the original AO in high-dimensional search, offering an efficient and reliable solution for complex, large-scale optimization challenges.
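The random sub-dimension update mechanism discussed in this section can be sketched as follows. This is a hedged illustration only: the population layout, the step rule toward the best-so-far solution, the sub-dimension fraction, and the perturbation scale are assumptions for the sketch, not the MSAO’s exact update.

```python
import numpy as np

def subdim_update(pop, best, frac=0.3, step=0.1, seed=None):
    """Move each individual toward the best solution along a random
    subset of dimensions only, leaving the remaining coordinates fixed.

    pop  : (n, dim) array of candidate solutions
    best : (dim,) best-so-far solution
    frac : fraction of dimensions updated per individual (assumption)
    """
    rng = np.random.default_rng(seed)
    n, dim = pop.shape
    k = max(1, int(frac * dim))          # size of the random sub-dimension set
    out = pop.copy()
    for i in range(n):
        # Pick k distinct coordinates; only these are modified this iteration.
        dims = rng.choice(dim, size=k, replace=False)
        out[i, dims] += step * (best[dims] - pop[i, dims]) \
            + 0.01 * rng.standard_normal(k)   # small random perturbation
    return out
```

Because each individual touches only k of the dim coordinates per iteration, the search effectively decomposes the high-dimensional space into lower-dimensional sub-problems, which is the intuition behind the mechanism’s strong high-dimensional performance.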
4.6. Nonparametric Test Analysis
In this section, we perform nonparametric statistical analyses [63] (the Wilcoxon rank-sum test and the Friedman test) to compare the performance of the MSAO against the other algorithms.
Table 13, Table 14, Table 15 and Table 16 present the results of the Wilcoxon test in various dimensional settings, showing statistically significant differences compared to the original AO and other competing algorithms (p < 0.05). The “wins/ties/losses” (w/t/l) statistics overwhelmingly favor the MSAO, which generally achieves 27 to 29 wins (93% to 100%) against the eight competing algorithms on the 29 benchmark functions; even its lowest win count is 16 (55%), with only 0 to 7 ties and at most 3 losses, further corroborating its superior performance on most benchmark functions.
Table 17, Table 18, Table 19 and Table 20 report the Friedman test results, showing that the MSAO’s average ranking mainly ranges from 1.4 to 2.4, while that of the runner-up, LSHADE, mainly ranges from 2.7 to 4.4. These nonparametric analyses confirm that the MSAO delivers outstanding optimization capability under the selected test conditions.
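The two tests used above can be reproduced with SciPy. The snippet below uses synthetic final-error samples for three hypothetical algorithms (the data are illustrative, not the paper’s results); `ranksums` performs a pairwise Wilcoxon rank-sum test, `friedmanchisquare` the Friedman test, and the last step computes the average ranks that Friedman-test tables report.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic final-error samples over 30 independent runs (illustrative data).
alg_a = rng.normal(1.0, 0.1, 30)   # stand-in for the best algorithm
alg_b = rng.normal(1.5, 0.3, 30)
alg_c = rng.normal(2.0, 0.5, 30)

# Wilcoxon rank-sum test: pairwise comparison of two independent samples.
stat, p = stats.ranksums(alg_a, alg_b)
print(f"rank-sum p-value (A vs B): {p:.3e}")   # p < 0.05 => significant

# Friedman test: joint comparison of all algorithms across the same runs.
chi2, p_f = stats.friedmanchisquare(alg_a, alg_b, alg_c)
print(f"Friedman p-value: {p_f:.3e}")

# Average ranks (1 = best), the quantity reported in Friedman-test tables.
ranks = stats.rankdata(np.vstack([alg_a, alg_b, alg_c]).T, axis=1)
print("average ranks:", ranks.mean(axis=0))
```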
4.7. Engineering Optimization Experiments
To further assess the practical applicability of the MSAO, we compare it with other algorithms on five representative engineering design problems: the tension/compression spring design problem [64], the pressure vessel design problem [65], the three-bar truss design problem [64], the welded beam design problem [66], and the speed reducer design problem [67]. These engineering design problems have been widely adopted as benchmarks for evaluating optimization algorithms [68,69,70]. Each problem comprises an objective function to be minimized (typically weight or cost), subject to several nonlinear constraints; a smaller objective value indicates a better design and, consequently, a better rank. Comprehensive testing on these real-world engineering cases validates the MSAO’s exceptional performance in engineering optimization. The results are presented in
Table 21, Table 22, Table 23, Table 24 and Table 25.
4.7.1. Tension/Compression Spring Design Problem
The tension/compression spring design problem aims to minimize the weight of the spring. It involves four nonlinear constraints and three design variables: the wire diameter d, the mean coil diameter D, and the number of active coils N. The mathematical model is defined as follows:
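A commonly cited formulation of this benchmark, with x1 = d, x2 = D, and x3 = N (bound values vary slightly across studies), is:

```latex
\begin{aligned}
\min_{\mathbf{x}} \quad & f(\mathbf{x}) = (x_3 + 2)\, x_2 x_1^{2} \\
\text{s.t.} \quad & g_1(\mathbf{x}) = 1 - \frac{x_2^{3} x_3}{71785\, x_1^{4}} \le 0, \\
& g_2(\mathbf{x}) = \frac{4 x_2^{2} - x_1 x_2}{12566\,\bigl(x_2^{3} x_1 - x_1^{4}\bigr)} + \frac{1}{5108\, x_1^{2}} - 1 \le 0, \\
& g_3(\mathbf{x}) = 1 - \frac{140.45\, x_1}{x_2^{2} x_3} \le 0, \\
& g_4(\mathbf{x}) = \frac{x_1 + x_2}{1.5} - 1 \le 0, \\
& 0.05 \le x_1 \le 2.00, \quad 0.25 \le x_2 \le 1.30, \quad 2 \le x_3 \le 15.
\end{aligned}
```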
Table 21 presents the optimization results for the tension/compression spring design problem. The MSAO achieves the best optimal value of 0.012671, ranking first. It is closely followed by LSHADE (0.012681, second place) and the AOO (0.012721, third place), with all three algorithms showing strong search capabilities for this problem. In contrast, the original AO (0.015784, eighth place), LOBLAO (0.015593, seventh place), and TEAO (0.015830, ninth place) perform poorly, primarily because they struggle to balance the three design variables while satisfying the four nonlinear strength and deformation constraints. Although the DOA (0.014085, sixth place) and HHO (0.013924, fifth place) improve on them, they still fall short of the MSAO’s performance.
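For concreteness, the weight objective and the four constraints of this benchmark can be evaluated with a simple static-penalty scheme, as is common in metaheuristic studies. The penalty coefficient and the reference design below are illustrative assumptions, not values from the paper.

```python
def spring_eval(x):
    """Weight and total constraint violation for the spring benchmark.

    x = (d, D, N): wire diameter, mean coil diameter, number of active coils.
    Returns (weight, violation), where violation sums positive parts of g_i.
    """
    d, D, N = x
    weight = (N + 2.0) * D * d ** 2
    g = [
        1.0 - (D ** 3 * N) / (71785.0 * d ** 4),
        (4.0 * D ** 2 - d * D) / (12566.0 * (D ** 3 * d - d ** 4))
        + 1.0 / (5108.0 * d ** 2) - 1.0,
        1.0 - 140.45 * d / (D ** 2 * N),
        (d + D) / 1.5 - 1.0,
    ]
    return weight, sum(max(0.0, gi) for gi in g)

def spring_fitness(x, penalty=1e6):
    """Static-penalty fitness; the penalty size is an assumption."""
    w, v = spring_eval(x)
    return w + penalty * v

# A commonly reported near-optimal design for this benchmark:
w, v = spring_eval((0.0516891, 0.3567177, 11.288966))
print(f"weight={w:.6f}, violation={v:.2e}")   # weight ≈ 0.012665, violation ≈ 0
```

The static penalty keeps infeasible candidates comparable to feasible ones during the search while steering the population back into the feasible region.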
4.7.2. Pressure Vessel Design Problem
The pressure vessel design problem aims to minimize the total manufacturing cost, which comprises welding, material, and forming expenses. The problem involves four constraints and four design variables: the shell thickness Ts, the head thickness Th, the inner radius R, and the vessel length excluding heads L. The mathematical model is defined as follows:
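A commonly cited formulation, with x1 = Ts, x2 = Th, x3 = R, and x4 = L (bounds differ across studies, and some variants additionally restrict the thicknesses to multiples of 0.0625 in), is:

```latex
\begin{aligned}
\min_{\mathbf{x}} \quad & f(\mathbf{x}) = 0.6224\, x_1 x_3 x_4 + 1.7781\, x_2 x_3^{2} + 3.1661\, x_1^{2} x_4 + 19.84\, x_1^{2} x_3 \\
\text{s.t.} \quad & g_1(\mathbf{x}) = -x_1 + 0.0193\, x_3 \le 0, \\
& g_2(\mathbf{x}) = -x_2 + 0.00954\, x_3 \le 0, \\
& g_3(\mathbf{x}) = -\pi x_3^{2} x_4 - \tfrac{4}{3}\pi x_3^{3} + 1296000 \le 0, \\
& g_4(\mathbf{x}) = x_4 - 240 \le 0, \\
& 0 \le x_1, x_2 \le 99, \quad 10 \le x_3, x_4 \le 200.
\end{aligned}
```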
Table 22 presents the optimization results for the pressure vessel design problem, in which the MSAO achieves the best optimal value of 5821.851, ranking first. It is followed by LSHADE (5848.727, second place) and the AOO (5892.490, third place). In contrast, the WOA (5950.671, fourth place), LOBLAO (6282.705, fifth place), and TEAO (6307.177, sixth place) demonstrate reasonable search capability, but still exhibit shortcomings and underperform overall. The HHO (6755.636, seventh place) and DOA (7209.205, eighth place) adopt conservative boundary handling, leading to suboptimal design variables. The original AO performs the worst (7644.994, ninth place), struggling to finely optimize material thickness and dimensional requirements within the feasible region.
4.7.3. Three-Bar Truss Design Problem
The three-bar truss design problem, originating in civil engineering, features a complex feasible region defined by stress constraints. The primary objective is to minimize the total weight of the truss members, subject to three nonlinear inequality constraints based on member stresses; the objective function itself is linear. The mathematical formulation is as follows:
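A commonly cited formulation, with cross-sectional areas x1 = A1 = A3 and x2 = A2, bar length l = 100 cm, load P = 2 kN/cm², and allowable stress σ = 2 kN/cm², is:

```latex
\begin{aligned}
\min_{\mathbf{x}} \quad & f(\mathbf{x}) = \bigl(2\sqrt{2}\, x_1 + x_2\bigr)\, l \\
\text{s.t.} \quad & g_1(\mathbf{x}) = \frac{\sqrt{2}\, x_1 + x_2}{\sqrt{2}\, x_1^{2} + 2 x_1 x_2}\, P - \sigma \le 0, \\
& g_2(\mathbf{x}) = \frac{x_2}{\sqrt{2}\, x_1^{2} + 2 x_1 x_2}\, P - \sigma \le 0, \\
& g_3(\mathbf{x}) = \frac{1}{x_1 + \sqrt{2}\, x_2}\, P - \sigma \le 0, \\
& 0 \le x_1, x_2 \le 1.
\end{aligned}
```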
Table 23 presents the optimization results for the three-bar truss design problem, in which the MSAO achieves the best optimal value of 263.8958, ranking first. LSHADE (263.8972, second place) and the AOO (263.8972, third place) follow closely. Mid-level performers include the WOA (263.8973, fourth place), DOA (263.8998, fifth place), and TEAO (263.9626, sixth place), which yield similar results but explore the constraint boundaries with slightly less precision than the top three. The HHO (263.9721, seventh place), AO (264.0865, eighth place), and LOBLAO (264.2021, ninth place) yield higher optimal values due to less refined adjustments near the stress-limit constraints.
4.7.4. Welded Beam Design Problem
The welded beam design problem aims to minimize the manufacturing cost of a welded beam. It features five constraints and four design variables that define the weld and beam geometry: the weld thickness h, the beam height t, the beam length l, and the beam width b. The mathematical formulation is as follows:
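A commonly cited form of the objective and constraint set is given below; the full expressions for the weld shear stress τ, beam bending stress σ, end deflection δ, and buckling load Pc follow the standard statement of this benchmark (with P = 6000 lb, τmax = 13,600 psi, σmax = 30,000 psi, and δmax = 0.25 in) and are omitted here for brevity:

```latex
\begin{aligned}
\min \quad & f(h, l, t, b) = 1.10471\, h^{2} l + 0.04811\, t b\, (14.0 + l) \\
\text{s.t.} \quad & \tau(\mathbf{x}) \le \tau_{\max}, \qquad \sigma(\mathbf{x}) \le \sigma_{\max}, \qquad h - b \le 0, \\
& \delta(\mathbf{x}) \le \delta_{\max}, \qquad P \le P_c(\mathbf{x}).
\end{aligned}
```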
Table 24 presents the optimization results for the welded beam design problem, in which the MSAO achieves the lowest manufacturing cost of 1.692794, ranking first. It is followed by LSHADE (1.695725, second place) and the WOA (1.697121, third place). The AOO (1.826056, fourth place) demonstrates reasonable search capability but underperforms overall. The HHO (2.231482, fifth place) and LOBLAO (2.324208, sixth place) exhibit a similar problem, resulting in higher costs. The DOA (2.340397, seventh place) and AO (2.414442, eighth place) handle constraint boundaries conservatively, leading to suboptimal cost solutions. The TEAO performs worst (2.827475, ninth place), as its fixed parameter update rules struggle with the precise adjustments required in a complex, multi-constraint environment.
4.7.5. Speed Reducer Design Problem
The speed reducer design problem involves the structural optimization of a gearbox for a small aircraft engine. Its mathematical formulation is given by
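A commonly cited statement of this benchmark uses seven design variables, the gear face width x1, the teeth module x2, the number of pinion teeth x3 (integer), the shaft lengths x4 and x5, and the shaft diameters x6 and x7, subject to eleven constraints on bending stress, surface stress, transverse shaft deflection, and shaft stress. The objective in that formulation is:

```latex
\begin{aligned}
\min \quad f(\mathbf{x}) ={} & 0.7854\, x_1 x_2^{2} \bigl(3.3333\, x_3^{2} + 14.9334\, x_3 - 43.0934\bigr) \\
& - 1.508\, x_1 \bigl(x_6^{2} + x_7^{2}\bigr) + 7.4777 \bigl(x_6^{3} + x_7^{3}\bigr) + 0.7854 \bigl(x_4 x_6^{2} + x_5 x_7^{2}\bigr),
\end{aligned}
```

with 2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4, x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9, and 5.0 ≤ x7 ≤ 5.5.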
Table 25 presents the optimization results for the speed reducer design problem, in which the MSAO achieves the best objective value of 2500.976, ranking first. It is followed by LSHADE (2592.060, second place) and the DOA (2995.152, third place), all demonstrating strong optimization capabilities. The WOA (2995.287, fourth place), AOO (3000.988, fifth place), HHO (3025.564, sixth place), and TEAO (3047.838, seventh place) deliver similar performances, showcasing solid global search abilities but slightly lacking in fine-tuning under multiple geometric and stress constraints. The AO (3102.403, eighth place) and LOBLAO (3180.600, ninth place) perform comparatively worse.
5. Conclusions
This paper introduces a Multi-Strategy Aquila Optimizer (MSAO) that integrates a memory strategy and a dream-sharing strategy, a random sub-dimension update mechanism, and enhanced position update rules to bolster performance on high-dimensional, complex optimization problems. First, the random sub-dimension update mechanism effectively decomposes the high-dimensional search space, significantly strengthening the algorithm’s ability to handle large-scale problems. Second, by combining the memory strategy and dream-sharing strategy from the DOA, individuals can both leverage past elite solutions and flexibly share information within each subpopulation, achieving an organic balance between global exploration and local exploitation. Finally, adaptive parameter control and dynamic opposition-based learning are employed to deeply refine the AO’s update rules, markedly accelerating convergence and improving solution precision. Compared with other AO variants, the MSAO achieves average rankings of 1.31 in 10D, 1.34 in 30D, 1.38 in 50D, and 1.45 in 100D, ranking first in every case and clearly demonstrating its superiority over existing AO variants. Compared with other state-of-the-art algorithms, the MSAO delivered 16 (55%) and 20 (69%) best solutions out of 29 benchmark functions in the CEC2017 10D and 30D tests, respectively. When the dimensionality increases to 50D and 100D, the number of best solutions rises to 20 (69%) and 21 (72%). These results demonstrate that the MSAO’s three-strategy integration significantly enhances global search capability, effectively addressing the AO’s shortcomings in high-dimensional optimization. Ablation studies demonstrate the independent contribution of each module, thereby validating the effectiveness of each strategy.
Across five real-world engineering design problems, the MSAO consistently achieved the top rank, demonstrating its superiority in engineering optimization and effectively addressing the shortcomings of existing AO variants in practical applications.
However, the performance on hybrid functions indicates that there is room for improvement. This limitation probably stems from the rigid division of the MSAO’s exploration and exploitation into two separate phases under a fixed threshold. In hybrid functions, subregions with diverse characteristics frequently overlap and cannot be addressed by inflexible phase transitions. As a result, if the algorithm becomes trapped in a local optimum in one region, it may shift phases too early or place undue emphasis on that area, thereby overlooking other promising regions of the search space. Future work will investigate a novel mechanism to coordinate the exploration and exploitation phases more flexibly, thereby overcoming this limitation. We will also develop large-scale and multi-objective versions and apply the MSAO to path planning, image segmentation, data clustering, hyperparameter tuning, and wireless sensor network coverage to fully evaluate its generality and engineering value.