1. Introduction
Global optimization problems are ubiquitous in modern complex systems and engineering design [1,2]. They are typically characterized by high non-linearity, multimodality, multiple constraints and high dimensionality, posing significant challenges to traditional optimization techniques [3]. Classical deterministic approaches such as gradient-based methods and dynamic programming perform well on simple tasks like convex optimization, but often fail to provide satisfactory solutions when the problem is non-convex or non-differentiable [4]. In contrast, meta-heuristic methods, which operate without gradient information, can effectively explore the search space and have been widely applied to complex global optimization and engineering design tasks [5]. Although these algorithms do not guarantee global optimality, they are highly valuable because they consistently deliver high-quality solutions for large-scale and intricate problems [6]. Numerous studies have demonstrated their superior performance in practical engineering challenges ranging from mechanical structure design and parameter estimation to mission planning [7,8,9,10,11]. However, as problem size and complexity grow, the performance of meta-heuristics may degrade markedly, making the continuous development and refinement of new algorithms an urgent necessity [12].
The central idea of meta-heuristic algorithms is to design adaptive and robust search strategies by emulating natural phenomena, biological behaviors, or physical laws. Representative categories and algorithms are briefly exemplified below [13].
Evolutionary-based algorithms realize optimization by replicating the “selection–crossover–mutation” cycle observed in biological evolution. For example, the canonical genetic algorithm (GA) [14] iteratively refines a population through encoded individuals, fitness evaluation, and the genetic operators of selection, crossover, and mutation. Differential evolution (DE) [15] creates candidate solutions by adding scaled difference vectors between existing individuals, a mechanism that is particularly effective in continuous search spaces. Other representatives of this family include Evolution Strategies (ES) [16] and the recently proposed Alpha Evolution (AE) [17], all of which follow the same evolutionary paradigm while introducing distinct variation and selection schemes.
Physics-based algorithms abstract optimization rules from the laws of physics or chemical reactions observed in nature. Inspired by the annealing of metals, Simulated Annealing (SA) [18] employs a temperature schedule to probabilistically accept inferior solutions, thereby escaping local optima. The Gravitational Search Algorithm (GSA) [19] models candidate solutions as masses that interact through Newtonian gravity; heavier masses (better solutions) attract others, guiding the population toward high-fitness regions. Other representatives of this category include Polar Lights Optimization (PLO) [20], Fick’s Law Algorithm (FLA) [21], Light Spectrum Optimizer (LSO) [22] and Fata Morgana Algorithm (FMA) [23].
Human-based algorithms distill optimization logic from human activities and social collaboration. Teaching–Learning-Based Optimization (TLBO) [24], for instance, mimics the knowledge-transfer process between an instructor and learners in a classroom, whereas Social Group Optimization (SGO) [25] exploits the cooperative mechanisms of information sharing and collective decision-making observed in human social networks. Other algorithms that belong to this category include the Political Optimizer (PO) [26], Football Team Training Algorithm (FTTA) [27], Escape Optimization Algorithm (EOA) [28], Catch Fish Optimization Algorithm (CFOA) [29], and Student Psychology Based Optimization (STBO) [30].
Swarm-based algorithms mimic the cooperative behavior of biological swarms and achieve global optimization through information sharing among simple agents. Particle Swarm Optimization (PSO) [31], for instance, emulates the foraging of bird flocks: each particle adjusts its position and velocity at every iteration by simultaneously learning from its personal best experience and the swarm’s global best, rapidly converging toward promising regions. Ant Colony Optimization (ACO) [32] replicates the pheromone trail communication of foraging ants; artificial ants probabilistically construct paths and reinforce shorter ones through pheromone updates, eventually converging to minimal routes. The Grey Wolf Optimizer (GWO) [33] abstracts the strict social hierarchy and collective hunting tactics of grey wolves—tracking, encircling, and attacking prey—to balance exploration and exploitation in multimodal landscapes. Beyond these mainstream methods, a growing family of swarm intelligence techniques—such as Greylag Goose Optimization (GGO) [34], Tuna Swarm Optimization (TSO) [35], Sled Dog Optimizer (SDO) [36], and Crayfish Optimization Algorithm (COA) [37]—continues to emerge, offering alternative neighborhood topologies and communication rules for diverse optimization scenarios.
For meta-heuristic algorithms, exploration and exploitation constitute the two core phases that directly govern global search capacity and local refinement efficiency; balancing them is therefore pivotal to overall performance [38]. Exploration refers to the extensive sampling of the search space to identify promising regions, thereby reducing the risk of premature entrapment in local optima and safeguarding the discovery of the global solution. Exploitation, in contrast, concentrates the search within these promising regions to refine solutions, improving accuracy and accelerating convergence. Unfortunately, the foundational variants of most meta-heuristics suffer from inherent bottlenecks—premature convergence, local-optimum stagnation, and slow convergence—when confronted with complex problems. To alleviate these drawbacks, researchers have proposed novel operators or refined existing ones to strengthen both global exploration and local exploitation capabilities. In parallel, hybrid algorithms that synergize operators drawn from different meta-heuristics have been developed to achieve an adaptive equilibrium between the two search modes. Reinforcing this trend, the No-Free-Lunch theorem [39] asserts that no single algorithm can outperform all others across every optimization problem, continuously motivating the emergence of new intelligent optimization paradigms.
The Secretary Bird Optimization Algorithm (SBOA) is a novel swarm-based meta-heuristic that emulates the secretary bird’s hunting and escape tactics. The optimization process is explicitly divided into an exploration phase and an exploitation phase, each governed by distinct search behaviors executed in sequence. Owing to these diversified strategies, SBOA delivers competitive performance on a variety of benchmark functions and engineering cases. In the original study by Fu et al., SBOA outperformed basic algorithms such as WOA, GWO, COA and DBO, and achieved results comparable with the advanced LSHADE-SPACMA [40]. Although the original SBOA has shown certain merits, follow-up studies reveal that it still suffers from several drawbacks when confronted with complex optimization scenarios: local-optimum stagnation—on high-dimensional, multimodal or irregular landscapes the population often stalls around sub-optimal regions and is unable to jump out [41]; limited global exploration—in the early search stage the diversity of the swarm is frequently insufficient, so the algorithm cannot thoroughly cover the solution space [42]; and premature convergence—blind attraction toward the current global best easily misguides the whole population into a local optimum, after which further improvement becomes difficult [43]. Consequently, enhancing the global exploration ability and the exploitation accuracy—while simultaneously avoiding local optima—has become the focus of recent research. To mitigate these drawbacks, several SBOA variants have recently been proposed. Xu et al. enriched population diversity through an adaptive learning strategy and balanced exploration and exploitation with a multi-population evolution scheme [44]. Zhu et al. embedded a Student’s t-distribution mutation derived from quantum computing to help the algorithm escape local optima [45]. Song et al. accelerated exploitation by introducing a golden-sine guidance operator while preserving diversity with a cooperative camouflage mechanism [46]. Meng et al. reduced the risk of stagnation via a differential cooperative search and sped up convergence with an information-retention control strategy [47]. Although these variants improve performance, most retain the original SBOA framework in which exploration and exploitation are both executed within a single iteration. This synchronous update can still lead to incomplete search and premature convergence. Moreover, the extra operators often raise algorithmic complexity and computational cost, creating new challenges that remain to be addressed.
To mitigate the inherent limitations of the original SBOA, this paper proposes an enhanced variant, MESBOA, which incorporates a multi-population management strategy and an experience-trend guidance strategy. The main contributions are summarized as follows:
- (1) Dual-mechanism bottleneck relief. A new multi-population management protocol restructures the SBOA framework so that exploration and exploitation are executed in separate subpopulations, eliminating mutual interference and guaranteeing a balanced search. An experience-trend guidance operator dynamically extracts historical information from elite groups and uses it to steer the entire swarm along the most promising evolutionary direction. Acting synergistically, these two mechanisms enlarge global exploration breadth, refine local exploitation accuracy, and reinforce convergence robustness;
- (2) Comprehensive benchmark validation. MESBOA is systematically compared with several state-of-the-art basic and improved algorithms on both the 10/20-D CEC-2022 and the 50/100-D CEC-2017 test suites. Statistical analyses—including Friedman, Wilcoxon rank-sum and Nemenyi tests—consistently rank MESBOA among the top performers;
- (3) Superior performance on real-world engineering tasks. When applied to a variety of constrained mechanical-design problems, MESBOA reliably obtains better feasible solutions than its competitors while exhibiting markedly lower variance, offering practitioners an efficient and stable optimization tool, especially for high-dimensional complex systems.
This paper is organized as follows: Section 2 presents preliminary knowledge of the Secretary Bird Optimization Algorithm. The proposed MESBOA method with its exploration and exploitation strategies is explained in Section 3. Experimental studies and the results are presented in Section 4 and Section 5. Finally, the conclusion is given in Section 6.
3. The Proposed MESBOA
This section presents MESBOA and devises two enhancement strategies to overcome the original SBOA’s imbalance between exploitation and exploration and its tendency to fall into local optima.
3.1. Multi-Population Management Strategy
For meta-heuristics, exploration features wide search coverage, high diversity and strong randomness, and is therefore preferred in early iterations, multimodal landscapes or dynamic environments. Exploitation, by contrast, is characterized by narrow search range, increased determinism and rapid convergence, so it is better suited to later iterations, unimodal problems or static environments.
The basic SBOA, however, executes the exploration and exploitation behaviors in sequence within every single iteration: each bird first performs an exploratory move and then immediately applies an exploitative update to the same individual. This updating pattern has two defects. Excessive exploitation in early stages wastes evaluations on unpromising regions and restricts the search breadth. Excessive exploration in later stages prevents the swarm from focusing on the most promising areas and slows convergence. The multi-population (or sub-population) strategy is a well-established and widely adopted enhancement technique in the field of meta-heuristics. By dividing the entire search workforce into several mutually interacting sub-groups and equipping each sub-group with its own search operator, the algorithm is able to maintain higher diversity, explore different regions of the search space in parallel, and thus achieve a better balance between exploration and exploitation. This idea has been successfully integrated into numerous optimization algorithms—ranging from differential evolution and particle swarm optimization to artificial bee colony and genetic algorithms—yielding consistent performance improvements across a broad set of benchmark and real-world problems [48,49,50]. To overcome these drawbacks, we propose a multi-population management strategy (MMS) that restructures the SBOA search framework and coordinates exploration and exploitation instead of running them back-to-back. Specifically, MMS splits the population into three sub-groups—dominant, balanced and inferior—according to fitness.
The dominant group act as leaders. In early stages they enlarge the search scope to guarantee strong global exploration, while in later stages they retain the ability to jump out of local optima and thus avoid premature convergence.
The balanced group are responsible for a smooth transition between exploration and exploitation, ensuring the algorithm keeps refining the solution without oscillation.
Regarding the inferior group, although they are far from the optimum, they are not discarded. Opposition-based learning suggests that the opposite of a poor solution may be promising; hence these individuals continue to broaden the search and, in later stages, can accelerate convergence by learning from the best.
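The opposition-based learning idea invoked for the inferior group can be sketched in a few lines. The helper below is an illustrative Python sketch, not code from the paper; the bounds and the sample individual are hypothetical:

```python
import numpy as np

def opposite_solution(x, lb, ub):
    """Opposition-based learning: reflect a candidate across the
    centre of the search bounds, so the opposite of a poor solution
    may land in a promising region."""
    return lb + ub - x

# Hypothetical 3-D individual with bounds [-5, 5] in every dimension.
lb, ub = np.full(3, -5.0), np.full(3, 5.0)
x = np.array([4.0, -1.0, 2.5])
x_opp = opposite_solution(x, lb, ub)  # [-4.0, 1.0, -2.5]
```

Evaluating both `x` and `x_opp` and keeping the fitter one is the usual way such a reflection is exploited.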
Based on the above analysis, MMS reallocates the original SBOA operators so that exploration and exploitation are executed by different sub-populations at the same time, eliminating mutual interference and achieving a cooperative balance.
MMS assigns Equations (2) and (5) to the dominant group. When the corresponding switching condition is satisfied, Equation (5) is executed for position updating; otherwise, Equation (2) is applied. This switching guarantees intensive global exploration at the beginning while retaining the ability to jump out of local optima, and shifts the emphasis to deep exploitation in later stages while still preserving diversity. The balanced group is updated by the secretary bird’s escape and camouflage behaviors: if its switching condition holds, Equation (10) is used; otherwise, Equation (9) is performed. This mechanism keeps the balanced individuals moving toward the current best while periodically enlarging their search radius, thereby sustaining an equilibrium between exploration and exploitation. For the inferior group, the prey-search and prey-exhaust operators are adopted: when its switching condition is met, Equation (2) is executed; otherwise, Equation (4) is employed. Consequently, the inferior individuals continuously broaden the search scope and, by learning from the best in the final phase, accelerate convergence without misleading the rest of the swarm. The pseudo-code of MMS is given in Algorithm 1.
| Algorithm 1: Pseudocode of multi-population management strategy (MMS) |
| 1: Input: lb, ub, D, N, T |
| 2: Initialize population randomly according to Equation (1) |
| 3: While (t < T) do |
| 4: Calculate the fitness of each secretary bird individual |
| 5: For i = 1: N |
| 6: If Xi belongs to dominant group |
| 7: If the switching condition is satisfied |
| 8: Update the position of secretary bird individual using Equation (5) |
| 9: Else |
| 10: Update the position of secretary bird individual using Equation (2) |
| 11: End if |
| 12: Else if Xi belongs to balanced group |
| 13: If the switching condition is satisfied |
| 14: Update the position of secretary bird individual using Equation (10) |
| 15: Else |
| 16: Update the position of secretary bird individual using Equation (9) |
| 17: End if |
| 18: Else |
| 19: If the switching condition is satisfied |
| 20: Update the position of secretary bird individual using Equation (2) |
| 21: Else |
| 22: Update the position of secretary bird individual using Equation (4) |
| 23: End if |
| 24: End if |
| 25: End for |
| 26: t = t + 1 |
| 27: End while |
| 28: Output: The best solution Xb |
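The fitness-based partition at the core of Algorithm 1 can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation; the group fractions a = 0.4 and b = 0.8 are taken from the sensitivity study in Section 4.3, and the toy fitness vector is made up:

```python
import numpy as np

def split_groups(fitness, a=0.4, b=0.8):
    """Rank the swarm by fitness (minimization) and split it:
    the top a*N birds form the dominant group, ranks between a*N
    and b*N the balanced group, and the remainder the inferior
    group. Each sub-group is then updated only by its own pair of
    SBOA operators, so exploration and exploitation run in parallel."""
    order = np.argsort(fitness)  # ascending: best individuals first
    n = len(fitness)
    dominant = order[: int(a * n)]
    balanced = order[int(a * n): int(b * n)]
    inferior = order[int(b * n):]
    return dominant, balanced, inferior

# Toy example with N = 5 birds.
fitness = np.array([3.0, 0.5, 2.0, 9.0, 1.0])
dominant, balanced, inferior = split_groups(fitness)
```

Because the split is recomputed from fitness every iteration, individuals migrate between groups as the search progresses.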
3.2. Experience-Trend Guidance Strategy
SBOA neglects inter-individual information exchange; consequently, population diversity collapses in the later search period. Moreover, its limited global exploration capacity and insufficient local refinement ability restrict overall search performance. The algorithm also discards the historical data accumulated during iterations, so it cannot fully capture latent search dynamics or trends. The guided learning strategy is an experience-driven enhancement technique that evaluates the instantaneous exploration–exploitation demand from the collective search history of all individuals and then adaptively selects the most suitable search mode; however, because the decision is based on the averaged past experience of the whole swarm, it may mistake a temporary local aggregation for a genuine convergence signal and consequently mistrigger excessive exploitation, leading to erroneous guidance and degraded global exploration capability [51]. To remedy these weaknesses, we propose an experience-trend guidance strategy (EGS) that exploits the historical information of dominant individuals to steer the optimization process.
Specifically, EGS computes the standard deviation of recent positions to measure population dispersion and infers the type of guidance currently required. When the algorithm is biased toward exploration, EGS switches the search to exploitation; otherwise, it drives the swarm back toward exploration. Meanwhile, by analyzing the positional records of dominant individuals over previous iterations, EGS extracts the evolutionary trend and locates regions that are likely to contain the global optimum. In this way, EGS dynamically alternates between guiding exploitation and exploration according to the instantaneous search state, achieving an effective exploration–exploitation balance. The mathematical model of EGS is given below.
First, every individual of each iteration is stored in a historical memory pool P whose maximum capacity is Pmax. When the number of stored individuals exceeds Pmax, the standard deviation of these archived individuals is computed with Equation (11), where std(·) denotes the standard-deviation function, and the resulting value is normalized by the bound-dependent factor obtained from Equation (12) to eliminate sensitivity to variable-bound changes. After obtaining the normalized dispersion value, EGS updates individuals through two adaptive schemes. In the exploration-oriented update, if the dispersion value indicates low population diversity (i.e., the swarm is highly clustered), EGS relocates individuals toward under-explored regions via Equation (14) to prevent premature convergence and sustain exploration momentum. In the exploitation-oriented update, if the dispersion value exceeds the threshold (i.e., the swarm is overly dispersed), EGS triggers an exploitation operator via Equation (13) that intensifies the search around promising areas through the combined influence of the elite group, the global best agent and a randomly chosen agent to accelerate convergence. The switch between these two modes is self-adaptive, guaranteeing a dynamic balance between exploration and exploitation according to the evolutionary state inferred from the dispersion measure.
In Equations (13) and (14), the first guiding agent is a secretary bird individual randomly selected from the current population; the second is the weighted average position of the dominant individuals stored in the historical memory pool P; the third is a randomly chosen individual among the top three fittest birds in the current swarm. The model further involves the covariance matrix of the dominant population, the dominant individuals preserved in P, and the number of such dominant individuals. As illustrated in Figure 1, the dominant group drives the population toward promising regions, the top three randomly chosen individuals supply alternative directions while accelerating convergence, and the random agent enlarges the set of possible search orientations; consequently, Equation (14) markedly improves individual quality and strengthens global exploration, whereas Equation (13) speeds up convergence yet still preserves the possibility of correcting the search direction.
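Since Equations (11)–(14) are not reproduced in this excerpt, the Python sketch below only illustrates the general EGS idea described above: measure the normalized dispersion of archived positions, then switch between exploration- and exploitation-oriented updates. The threshold value and the synthetic swarms are placeholders, not values from the paper:

```python
import numpy as np

def normalized_dispersion(archive, lb, ub):
    """In the spirit of Equations (11)-(12): per-dimension standard
    deviation of the archived positions, divided by the bound range
    so the measure is insensitive to variable-bound changes."""
    return float(np.mean(np.std(archive, axis=0) / (ub - lb)))

def egs_mode(dispersion, threshold=0.1):
    """Self-adaptive switch: an overly dispersed swarm triggers the
    exploitation-oriented update (Equation (13)); a tightly clustered
    swarm triggers the exploration-oriented update (Equation (14)).
    The threshold here is a placeholder, not the paper's value."""
    return "exploit" if dispersion > threshold else "explore"

rng = np.random.default_rng(0)
lb, ub = np.full(2, -100.0), np.full(2, 100.0)
clustered = rng.normal(0.0, 0.5, size=(30, 2))       # tightly grouped swarm
dispersed = rng.uniform(-100.0, 100.0, size=(30, 2))  # widely spread swarm
mode_a = egs_mode(normalized_dispersion(clustered, lb, ub))
mode_b = egs_mode(normalized_dispersion(dispersed, lb, ub))
```

In MESBOA the same decision is made per iteration from the memory pool P, with the guidance direction then built from the dominant individuals as described above.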
3.3. Implementation Steps of MESBOA
In summary, the proposed MESBOA first generates an initial set of solutions within the search bounds via Equation (1). During each iteration, every individual is updated by the search operator designated by MMS; the renewed individuals are then saved in pool P. Once the EGS activation condition is met, each dimension of every individual is revised by either Equation (13) or Equation (14) according to the current demand. This loop repeats until the stopping criterion is satisfied. The pseudo-code of MESBOA is given in Algorithm 2 and its flowchart is illustrated in Figure 2.
| Algorithm 2: Pseudocode of MESBOA |
| 1: Input: lb, ub, D, N, T |
| 2: Initialize population randomly according to Equation (1) |
| 3: While (t < T) do |
| 4: Calculate the fitness of each secretary bird individual |
| 5: For i = 1: N |
| 6: If Xi belongs to dominant group |
| 7: If the switching condition is satisfied |
| 8: Update the position of secretary bird individual using Equation (5) |
| 9: Else |
| 10: Update the position of secretary bird individual using Equation (2) |
| 11: End if |
| 12: Else if Xi belongs to balanced group |
| 13: If the switching condition is satisfied |
| 14: Update the position of secretary bird individual using Equation (10) |
| 15: Else |
| 16: Update the position of secretary bird individual using Equation (9) |
| 17: End if |
| 18: Else |
| 19: If the switching condition is satisfied |
| 20: Update the position of secretary bird individual using Equation (2) |
| 21: Else |
| 22: Update the position of secretary bird individual using Equation (4) |
| 23: End if |
| 24: End if |
| 25: Save the updated individual into pool P |
| 26: If the EGS activation condition is met |
| 27: If exploitation guidance is required |
| 28: Update the position of secretary bird individual using Equation (13) |
| 29: Else |
| 30: Update the position of secretary bird individual using Equation (14) |
| 31: End if |
| 32: End if |
| 33: End for |
| 34: t = t + 1 |
| 35: End while |
| 36: Output: The best solution Xb |
3.4. The Computational Complexity of MESBOA
Time complexity is a critical metric for evaluating the performance of optimization algorithms. The computational cost of both SBOA and MESBOA is dominated by population initialization and iterative updating, with the key factors being the maximum number of iterations (T), problem dimension (D), and population size (N). According to the original SBOA paper, its time complexity is O(2 × N × D × T), since every bird is updated by both the exploration and the exploitation operator in each iteration. The time complexity of MESBOA is derived as follows. Initializing the secretary bird population costs O(N × D). In the position-update stage, MMS only reassigns the existing search operators; each individual is moved once per iteration, so the time complexity of MMS is O(N × D × T). As an extra position-update module, EGS adds its own cost to SBOA: if EGS is invoked T1 times, its time complexity is O(N × D × T1). Therefore, the overall time complexity of MESBOA is O(N × D × (T + T1)). Since the EGS procedure is triggered only after the archive has reached its maximum capacity Pmax, and the pool receives N individuals per iteration, EGS is executed T1 = T − Pmax/N times; because Pmax/N > 0, it follows immediately that T1 < T. Thus, the overall time complexity of MESBOA is slightly lower than that of the original SBOA.
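The bookkeeping above can be checked numerically. The sketch below follows the assumptions stated in this section (SBOA moves each bird twice per iteration, MESBOA once per iteration via MMS plus once per EGS-active iteration, EGS activating after the archive of capacity Pmax fills); the function name and the sample sizes are illustrative:

```python
def update_costs(N, D, T, Pmax):
    """Compare position-update costs (in unit operations) of SBOA
    and MESBOA. T1 counts the EGS-active iterations, which begin
    once the archive of capacity Pmax is full, i.e. after Pmax / N
    iterations (the pool receives N individuals per iteration)."""
    T1 = T - Pmax // N
    sboa = 2 * N * D * T          # two updates per bird per iteration
    mesboa = N * D * (T + T1)     # one MMS update, plus EGS when active
    return sboa, mesboa, T1

# With Pmax = 10N, EGS is active for all but the first 10 iterations.
sboa_cost, mesboa_cost, T1 = update_costs(N=30, D=10, T=500, Pmax=10 * 30)
```

Since T1 < T by construction, the MESBOA total stays below the SBOA total for any positive Pmax.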
4. Benchmark Test Results and Analysis
To comprehensively evaluate the performance of the proposed MESBOA, a total of 41 functions from the CEC2017 and CEC2022 test suites are employed. In this section, we conduct parameter sensitivity analysis, ablation experiments, convergence analysis, robustness analysis and statistical tests to demonstrate the superiority of MESBOA, and compare its results with various baseline and improved algorithms of different types. The section is organized into six parts, presented in sequence: benchmark test functions, experimental setup and competitor algorithms, parameter sensitivity analysis, an ablation study, low-dimensional experiments, and high-dimensional experiments.
4.1. Review the Benchmark Test Suites
Two benchmark suites are employed in the experiments. The CEC 2017 set consists of unimodal (UM—F1, F3; F2 was officially removed), multimodal (MM—F4–F10), hybrid (H—F11–F20), and composite (C—F21–F30) functions. The CEC 2022 suite contains unimodal (UM—F1), multimodal (MM—F2–F5), hybrid (H—F6–F8), and composite (C—F9–F12) functions. While CEC 2017 can be evaluated at 10 D, 30 D, 50 D and 100 D, and CEC 2022 at 10 D and 20 D, we selected the 50 D and 100 D cases from CEC 2017 together with the 10 D and 20 D cases from CEC 2022 to examine MESBOA across a broad dimensional range. Detailed specifications for both suites are summarized in Table A1 and Table A2 of Appendix A.
4.2. Experimental Environment and Configuration
The experiments were conducted in MATLAB R2021b under Windows 11, on a platform with 32 GB RAM and an AMD Ryzen 9 7945HX processor. To guarantee fairness and persuasiveness, all algorithms compared in the performance tests were run on identical data sets, the maximum number of function evaluations was fixed at 1000 D, and every algorithm was executed over 30 independent trials; to reduce randomness, the minimum value (Min), average value (Avg), and standard deviation (Std) of each metric across the 30 runs were used for statistical analysis. Statistical significance was examined via the Wilcoxon rank-sum test, Friedman test and Nemenyi test.
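The per-function statistics used throughout the result tables can be computed as follows; the run values here are synthetic, purely to show the bookkeeping:

```python
import numpy as np

# Illustrative only: 30 final best-fitness values of one algorithm
# on one benchmark function (synthetic numbers, not real results).
rng = np.random.default_rng(42)
runs = rng.lognormal(mean=2.0, sigma=0.3, size=30)

summary = {
    "Min": runs.min(),         # best result achieved over the 30 runs
    "Avg": runs.mean(),        # average accuracy
    "Std": runs.std(ddof=1),   # run-to-run stability (sample std)
}
```

The same three numbers are reported for every algorithm-function pair before the rank-based tests are applied.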
To highlight the superiority of the proposed MESBOA, eight basic and enhanced algorithms of distinct categories are selected for comparison. These include evolution-based AE [17] and LSHADE-SPACMA [52], physics-based EO [53] and GLS-RIME [51], human-based CFOA [29] and ISGTOA [54], and swarm-based RBMO [55] and ESLPSO [56]. The AE, CFOA and RBMO are recently published high-performance basic algorithms, EO is a widely cited method, LSHADE-SPACMA is an advanced differential-evolution variant, ESLPSO represents the latest upgrade of the classical PSO, while GLS-RIME and ISGTOA have demonstrated strong efficacy in their respective studies. Overall, this diverse set of competitors provides a solid basis for verifying the exceptional performance of MESBOA. Parameter values for all contenders are taken from their original papers; only the common termination criterion (maximum function evaluations) is imposed to ensure a fair comparison. Table 1 outlines the specific parameter settings.
4.3. Parameter-Sensitivity Analysis
For meta-heuristic algorithms, appropriate parameter values are essential to fully exploit their potential. The proposed MESBOA integrates MMS and EGS, each of which introduces its own tunable parameters; therefore, this subsection is devoted to identifying the best settings for both modules.
The MMS partitions the population into three groups to boost SBOA, so fixing the sizes of these groups is critical. A grid search is therefore adopted in which the dominant group is set to contain the top a × N individuals, the inferior group the individuals ranked below b × N, and the total population remains N; a is swept from 0.1 to 0.4 in steps of 0.1, b from 0.6 to 0.9 in steps of 0.1, and N from 5D to 30D in steps of 5D, yielding 96 combinations (Figure 3). The version of MESBOA that employs only MMS is run 30 times on selected low- and high-dimensional functions for every parameter triple, and the results are analyzed with the Friedman test to identify the best configuration.
Figure 4 visually presents the Friedman mean ranks of MESBOA under different parameter settings for the four dimensional settings of the two test suites. Examining Figure 4a–f reveals that the algorithm achieves the top rank when a is 0.3 or 0.4 and b is 0.7 or 0.8, and that these settings are insensitive to the overall population size. A small a (dominant group too small) degrades performance, whereas b (which bounds the inferior group) must be neither too large nor too small, confirming that group's constructive role. Figure 4g further shows that the best overall rank occurs at N = 15D with a = 0.4 and b = 0.8, so this parameter set is adopted in all subsequent experiments.
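Friedman mean ranks of the kind reported in this sensitivity study can be computed as follows; the error matrix below is hypothetical, and ties receive the average rank as usual:

```python
import numpy as np
from scipy.stats import rankdata

def friedman_mean_ranks(errors):
    """errors: (n_functions, n_settings) matrix of average errors,
    lower is better. Rank the settings within each function (ties
    share the average rank), then average the ranks column-wise;
    the setting with the lowest mean rank performs best overall."""
    ranks = np.vstack([rankdata(row) for row in errors])
    return ranks.mean(axis=0)

# Hypothetical mean errors of three parameter settings on four functions.
errors = np.array([
    [1.0, 2.0, 3.0],
    [0.5, 0.7, 0.9],
    [2.0, 1.0, 3.0],
    [1.0, 1.0, 2.0],
])
mean_ranks = friedman_mean_ranks(errors)  # first setting ranks best
```

The associated Friedman p-value (e.g., via `scipy.stats.friedmanchisquare`) then tells whether the rank differences are significant.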
The EGS exerts its strength only when enough historical experience is stored and the correct exploration–exploitation bias is chosen; too few data fail to reveal the evolutionary trend, whereas too many may mislead and slow the search. Because the algorithm must decide which behavior is currently required to balance exploration and exploitation, the best values of Pmax (history capacity) and c (bias control) are investigated. A grid search is again employed: Pmax is swept from 10N to 90N in steps of 20N and c from 10 to 90 in steps of 20.
Figure 5 reports the Friedman ranks of MESBOA with different parameter pairs across all tested dimensions, where a lower average rank (Ar) indicates better overall performance. In
Figure 5, the boxed numbers indicate the Friedman ranks obtained under each parameter configuration; the 10-D and 20-D results refer to the CEC 2022 suite, whereas the 50-D and 100-D results correspond to the CEC 2017 suite. Taking c = 10 as an example, the cell aggregates the ranks (and their average) across the five Pmax settings for every dimensionality: specifically, when c = 10 and Pmax = 10 N, the algorithm achieves a rank of 13.67 on the 10-D CEC 2022 functions.
At any fixed value of c, increasing Pmax consistently worsens the rank, implying that excessive historical information obscures rather than reveals the population’s evolutionary trend, and thereby misguides the search. Conversely, for a fixed Pmax, a larger c yields better ranks, showing that the original SBOA is exploration-deficient and benefits from stronger exploration bias introduced by EGS. The best compromise is obtained with c = 70 and Pmax = 10 N; this combination achieves the lowest Ar on both low- and high-dimensional functions, and is therefore adopted in all subsequent experiments.
4.4. Strategy Effectiveness Analysis
In this section, we examine the individual contributions of the proposed enhancement strategies to the performance gains observed in MESBOA. Two reduced variants, each incorporating only one of the two mechanisms, are evaluated—MSBOA, obtained by removing the EGS module from MESBOA, and ESBOA, derived by excluding the MMS component. The experimental results for MESBOA, SBOA, MSBOA and ESBOA on both test suites are compiled in Table A1, Table A2, Table A3 and Table A4 of Appendix A, and Friedman together with Wilcoxon rank-sum tests are employed to analyze the outcomes.
Table 2 summarizes the Friedman results for MESBOA and its two derived variants; the obtained
p-value < 0.05 confirms significant differences among the four configurations. MESBOA, equipped with both enhancement modules, ranks first under all four dimensional settings, whereas the original SBOA always places last. MSBOA consistently outperforms ESBOA, indicating that the MMS component contributes more to the overall improvement than the EGS component, yet each single strategy still yields a clear gain over the baseline SBOA.
Table 3 quantifies the win/tie/loss counts of MESBOA and its partial variants against SBOA: all three improved versions achieve significant superiority on more than half of the functions, again underlining the effectiveness of the proposed modifications. The number of wins for MSBOA exceeds that for ESBOA, further evidencing that MMS provides a larger performance boost than EGS. Overall, both proposed enhancement strategies are statistically validated as distinctly beneficial.
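A win/tie/loss tally of this kind can be produced per function with the Wilcoxon rank-sum test. The sketch below uses synthetic run data and a conventional 0.05 significance level; it is an illustration of the procedure, not the paper's code:

```python
import numpy as np
from scipy.stats import ranksums

def win_tie_loss(variant_runs, baseline_runs, alpha=0.05):
    """Wilcoxon rank-sum verdict for one function (minimization):
    'win' if the variant is significantly better than the baseline,
    'loss' if significantly worse, 'tie' if the difference is not
    significant at level alpha."""
    _, p = ranksums(variant_runs, baseline_runs)
    if p >= alpha:
        return "tie"
    return "win" if np.mean(variant_runs) < np.mean(baseline_runs) else "loss"

# Synthetic 30-run error samples: the variant is clearly better here.
rng = np.random.default_rng(1)
variant = rng.normal(1.0, 0.1, 30)
baseline = rng.normal(5.0, 0.1, 30)
verdict = win_tie_loss(variant, baseline)  # "win"
```

Summing the verdicts over all benchmark functions yields the win/tie/loss triple reported for each pair of algorithms.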
4.5. Low-Dimensional Function Experiments
To comprehensively evaluate the performance of the proposed MESBOA on low-dimensional problems, the 12 functions of the CEC2022 test suite are adopted as the benchmark.
Figure 6 shows the average rankings of MESBOA, SBOA, AE, LSHADE-SPACMA, EO, GLS-RIME, CFOA, ISGTOA, RBMO and ESLPSO when solving the 10-D/20-D functions. The complete best, mean and standard-deviation values are provided in Table A1 and Table A2 of Appendix A. In the radar chart of rankings, each algorithm connects its ranks on the different functions into one surface; the smaller the enclosed surface, the better the overall performance. As can be seen, the surface produced by the proposed MESBOA is the smallest, demonstrating its superior overall performance. The surfaces of ESLPSO and ISGTOA are similar in size but less smooth, indicating relatively weak stability and search efficiency. Although SBOA shows only small fluctuations, its area is still large, confirming its poor overall performance.
Figure 7 presents boxplots of MESBOA, AE, LSHADE-SPACMA, EO, GLS-RIME, CFOA, ISGTOA, RBMO and ESLPSO on the CEC2022 functions to assess robustness. It is evident that MESBOA exhibits the smallest box ranges on the majority of problems—specifically F1, F3–F4 and F7–F8 for 10-D, and F1, F4, F7–F8 and F11 for 20-D, ten functions in total—while also showing fewer outliers (“+”), indicating high stability. The plots further reveal that MESBOA consistently attains the lowest median on almost half of the functions, underscoring its superior accuracy. Overall, the narrow and low-lying boxes demonstrate that MESBOA delivers stable distributions and strong robustness across the test suite.
Based on the convergence curves collected on the CEC2022 test set, we further examined MESBOA’s convergence behavior on low-dimensional tasks. As Figure 8 shows, MESBOA delivers excellent convergence.
On the unimodal F1, all algorithms continue to converge, yet MESBOA exhibits the fastest speed and the highest accuracy, whereas the original SBOA descends much more slowly and attains a poorer final value. This superior performance on unimodal landscapes is attributed to MMS, which reconstructs the search framework so that dominant birds keep exploiting promising regions, and to EGS, which intensifies exploitation exactly when the population needs it; the synergy of the two mechanisms markedly strengthens MESBOA’s local search capability.
On the multimodal functions F2–F5, MESBOA is not always the most accurate on F2–F3 and F5, yet it converges faster and consistently yields higher-quality solutions than the basic SBOA. On F4 its convergence is slower, but the algorithm keeps escaping local optima and continues to discover better points. Compared with the original SBOA, MESBOA maintains a steady convergence curve, rarely stalls, and even accelerates in the later search phase. These improvements are credited to the supplementary role of the inferior group defined by MMS: by enlarging the search scope it helps the swarm jump out of local traps. In addition, EGS preserves population diversity through multiple guiding points, which further protects MESBOA from premature convergence.
On the more challenging F6–F12, MESBOA attains the highest convergence accuracy on 10-D F7–F8 and on 20-D F7–F8 and F11. For the remaining functions except F10, it may not deliver the best final value, yet it always exhibits a rapid convergence rate; on F10 its speed and accuracy are not the best, but they still surpass those of the basic SBOA. These results confirm that MESBOA possesses a well-balanced exploration–exploitation capability, owing to the fact that EGS dynamically detects the swarm’s current demand and drives each dimension to search accordingly.
Overall, MESBOA demonstrates the best convergence behavior among all contenders on low-dimensional optimization problems.
In addition to convergence and robustness analyses, several statistical tests are employed to examine the differences between MESBOA and the compared algorithms.
Table 4 summarizes the Wilcoxon rank-sum test results between MESBOA and AE, LSHADE-SPACMA, EO, GLS-RIME, CFOA, ISGTOA, RBMO, and ESLPSO, where the symbols “+/=/−” indicate that the proposed MESBOA is superior, similar, or inferior to the compared algorithm, respectively. The Wilcoxon rank-sum test is a non-parametric pairwise comparison method that checks whether two algorithms exhibit significant differences across different functions. It should be noted that all subsequent statistical tests are conducted at a significance level of 0.05.
Table 4 shows that, against every competitor, MESBOA obtains more “+” than “−”; for most algorithms the count of “+” even exceeds the sum of “−” and “=”, evidencing a clear superiority. Versus the basic algorithms the advantage is larger in 20-D than in 10-D, whereas versus the enhanced variants the advantage is larger in 10-D than in 20-D. These patterns not only confirm the overall competitiveness of MESBOA but also reaffirm the No Free Lunch (NFL) theorem that no single algorithm is best for all problems.
Beyond pairwise comparisons, an overall analysis is conducted using the Friedman test, and the results are reported in Table 5. The obtained p-values confirm a significant global performance difference between MESBOA and all contenders. Specifically, MESBOA ranks first on both 10-D and 20-D problems, achieving mean ranks of 2.500 and 2.333, respectively, followed immediately by LSHADE-SPACMA and AE, while the original SBOA places second-to-last, outperforming only CFOA. Notably, although MESBOA’s superiority margin over the other enhanced variants is smaller in 20-D than in 10-D, its absolute rank is better at 20-D, because the ranks of nearly all improved algorithms improve with dimension while those of the baseline methods deteriorate. Nevertheless, MESBOA unquestionably delivers the best overall performance among all algorithms examined.
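The Friedman ranking behind Table 5 can be reproduced in outline: each function ranks all algorithms by mean error, the ranks are averaged per algorithm, and the chi-square statistic tests whether those mean ranks differ significantly. A stdlib-only sketch for the no-tie case, run on synthetic data rather than the paper’s results:

```python
def friedman(results):
    """results[f][j] = mean error of algorithm j on function f (lower is better).
    Returns (mean rank per algorithm, Friedman chi-square statistic); no-tie case."""
    n, k = len(results), len(results[0])
    rank_sum = [0.0] * k
    for row in results:
        # rank 1 goes to the algorithm with the lowest error on this function
        for rank, j in enumerate(sorted(range(k), key=lambda j: row[j]), start=1):
            rank_sum[j] += rank
    mean_ranks = [s / n for s in rank_sum]
    chi2 = 12 * n / (k * (k + 1)) * sum(r * r for r in mean_ranks) - 3 * n * (k + 1)
    return mean_ranks, chi2

# toy: algorithm 0 always best and algorithm 2 always worst, over four functions
ranks, chi2 = friedman([[0.1, 0.2, 0.3]] * 4)
print(ranks, chi2)  # -> [1.0, 2.0, 3.0] 8.0 (> 5.99, the 0.05 chi-square cut-off for df = 2)
```

With the paper’s data the same computation yields the mean ranks and p-values quoted above.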
The Friedman test only indicates whether significant differences exist among all algorithms; it does not quantify the magnitude of the gap between MESBOA and any specific competitor. Therefore, the Nemenyi post-hoc test is applied to obtain a finer-grained analysis. Based on the Friedman rankings, Nemenyi’s procedure computes a critical difference value (CDV) with which algorithm pairs can be judged equivalent; if the difference between the mean ranks of MESBOA and another algorithm is smaller than CDV, the two methods are deemed statistically indistinguishable. CDV is calculated with Equation (18).
$$\mathrm{CDV} = q_{\alpha}\sqrt{\frac{k(k+1)}{6N}} \qquad (18)$$
where $k$ represents the number of algorithms, $N$ represents the number of functions tested, and $q_{\alpha}$, obtained from the critical-value table, equals 3.1640 in this paper.
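The CDV is cheap to evaluate directly. The snippet below sketches the computation; the inputs k = 10 algorithms and N = 12 CEC2022 functions (one dimensionality at a time) are our reading of the setup here, with q_α = 3.1640 as stated above:

```python
import math

def nemenyi_cdv(q_alpha, k, n):
    """Critical difference value of the Nemenyi post-hoc test:
    CDV = q_alpha * sqrt(k * (k + 1) / (6 * N))."""
    return q_alpha * math.sqrt(k * (k + 1) / (6 * n))

# assumed setting: k = 10 algorithms, N = 12 CEC2022 functions, q_0.05 = 3.1640
cdv = nemenyi_cdv(3.1640, 10, 12)
print(round(cdv, 3))  # mean-rank gaps smaller than this are not significant
```

Two algorithms whose Friedman mean ranks differ by less than this value are treated as statistically indistinguishable, which is how the groupings in Figure 9 are read.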
Figure 9 illustrates the Nemenyi post-hoc test results for MESBOA and the competing algorithms. It can be observed that MESBOA exhibits significant differences with CFOA, SBOA, and ISGTOA on 10-D functions, while no significant differences are found with the remaining algorithms. On 20-D functions, MESBOA shows significant differences with CFOA, SBOA, RBMO, ISGTOA, and EO, but not with the other algorithms. Therefore, it can be concluded that MESBOA is a competitive algorithm on the CEC2022 test suite.
4.6. High-Dimensional Function Experiments
Although Section 4.5 has verified the effectiveness of MESBOA on low-dimensional tasks, modern applications increasingly involve high-dimensional and complex landscapes; hence the performance of MESBOA under such conditions must also be assessed. The 50-D and 100-D instances of the CEC2017 test suite are therefore employed.
Figure 10 gives a first overview of the comparative behavior through a ranking radar chart, where the area enclosed by each algorithm indicates its overall standing on the CEC2017 benchmark. MESBOA never drops out of the top four ranks on any function, demonstrating consistently superior and stable performance, while SBOA and CFOA remain firmly in the bottom two positions across the entire benchmark.
Table 6 summarizes the Wilcoxon rank-sum results between MESBOA and the competitors on high-dimensional functions, with a visual depiction in Figure 11. MESBOA registers significant superiority on at least 16 functions against every rival. Specifically, the counts of superior/(inferior) functions are 58(0) vs. SBOA, 49(6) vs. AE, 33(22) vs. LSHADE-SPACMA, 55(0) vs. EO, 58(0) vs. GLS-RIME, 57(1) vs. CFOA, 55(1) vs. ISGTOA, 58(0) vs. RBMO, and 41(8) vs. ESLPSO. Overall, MESBOA exhibits clear advantages on the majority of the 50-D and 100-D functions.
The Friedman test results for MESBOA and the competing algorithms on the 50-D/100-D functions of the CEC2017 suite are reported in Table 7 and visualized in Figure 12; the p-values confirm a significant overall difference, which is further quantified by the Nemenyi post-hoc analysis shown in Figure 13. MESBOA ranks first on both dimensionalities with mean ranks of 1.828 (50-D) and 1.931 (100-D), whereas the original SBOA places second-to-last at 8.103 and 8.069, respectively, and LSHADE-SPACMA and AE occupy the second and third positions. The Nemenyi post-hoc test reveals that, on both the 50-D and 100-D CEC2017 functions, MESBOA is not statistically distinguishable from LSHADE-SPACMA, yet it exhibits significant differences from all other contenders. This outcome contrasts with the CEC2022 low-dimensional results and indicates that the proposed MESBOA possesses a stronger edge when tackling high-dimensional problems. Moreover, that MESBOA matches the advanced differential evolution variant LSHADE-SPACMA further underlines the strength of the proposed approach.