Article

A Multi-Strategy Improved Red-Billed Blue Magpie Optimizer for Global Optimization

1 School of Information Science and Technology, Yunnan Normal University, Kunming 650000, China
2 Southwest United Graduate School, Kunming 650000, China
3 School of Information Science and Engineering, Yunnan University, Kunming 650000, China
4 Télécom SudParis, Institut Polytechnique de Paris, 91120 Palaiseau, France
5 Department of Computer Science and Technology, Kean University, Union, NJ 07083, USA
6 School of Electrical and Electronic Information, Xihua University, Chengdu 610039, China
* Authors to whom correspondence should be addressed.
Biomimetics 2025, 10(9), 557; https://doi.org/10.3390/biomimetics10090557
Submission received: 22 April 2025 / Revised: 4 July 2025 / Accepted: 19 July 2025 / Published: 22 August 2025
(This article belongs to the Special Issue Advances in Biological and Bio-Inspired Algorithms)

Abstract

To enhance the convergence efficiency and solution precision of the Red-billed Blue Magpie Optimizer (RBMO), this study proposes a Multi-Strategy Enhanced Red-billed Blue Magpie Optimizer (MRBMO). The principal contributions are threefold: (1) Development of a novel dynamic boundary constraint handling mechanism that strengthens algorithmic exploration capabilities through adaptive regression strategy adjustment for boundary-transgressing particles; (2) Incorporation of an elite guidance strategy during the predation phase, establishing a guided search framework that integrates historical individual optimal information while employing a Lévy Flight strategy to modulate search step sizes, thereby achieving an effective balance between global exploration and local exploitation; (3) Comprehensive experimental evaluations on the CEC2017 and CEC2022 benchmark test suites, demonstrating that MRBMO significantly outperforms classical enhanced algorithms and is competitive with state-of-the-art optimizers across 41 standardized test functions. The practical efficacy of the algorithm is further validated through successful applications to four classical engineering design problems, confirming its robust problem-solving capabilities.

1. Introduction

Optimization problems are widespread [1], defined as the process of identifying the optimal solution from feasible solutions to minimize or maximize a given objective [2]. Optimization problems are generally characterized by large scale, numerous constraints, complex parameter control, and high computational cost [3]. In many optimization problems, it is necessary to find an optimal solution under highly complex constraints within a reasonable time [4]. Effective methods are required to solve optimization problems involving numerous decision variables, complex nonlinear constraints, and objective functions [5]. Traditional optimization methods, represented by gradient descent and Lagrange multiplier methods, derive their efficacy strictly from mathematical properties, such as the differentiability and convexity of the objective function, along with the explicit formulation of constraint conditions [6]. However, optimization problems commonly encountered in real-world production and daily life are typically characterized by large-scale, high-dimensional, and nonlinear features [7]; thus, conventional methods often struggle to obtain acceptable solutions within a feasible time frame [8]. Specifically, such problems present the following challenges:
  • Multimodal: Many optimization problems exhibit multiple local optima, causing algorithms to easily become trapped in local optima and to fail to obtain satisfactory solutions.
  • High-dimensional: As the number of decision variables increases, the problem’s dimensionality grows accordingly. The search space expands exponentially with increasing dimensions, leading to significantly higher problem complexity.
  • Nonlinear: The objective functions of many problems are frequently nonlinear and may even be non-differentiable, rendering certain algorithms inapplicable due to their strict requirements on objective function properties.
  • Multi-objective: Some problems require simultaneous optimization of multiple objectives that often exhibit conflicting relationships, making it difficult or impossible to find solutions that satisfy all objectives concurrently.
In contrast to conventional algorithms, heuristic algorithms [9] can identify a feasible suboptimal solution for optimization problems within a reasonable timeframe or under constrained computational resources. This is achieved through a systematic trade-off between solution precision and computational efficiency [10]. However, such approaches frequently converge to local optima because of their relatively simplistic search patterns, which restrict their exploration capacity in complex solution spaces. Metaheuristic algorithms further combine stochastic search with the local search of traditional heuristics to strengthen the ability to escape local optima; by incorporating mechanisms drawn from natural and biological laws, they generally produce better solutions than plain heuristics [11].
Swarm Intelligence (SI) [12] is a significant branch of Artificial Intelligence (AI) that is grounded in the intelligent collective behavior observed in social groups in nature [13]. Researchers draw inspiration from various species and natural phenomena, leading to the development of various metaheuristic optimization algorithms based on SI [14]. The Particle Swarm Optimization Algorithm (PSO) [15] conceptualizes feasible solutions to optimization problems as particles within a search space. Each particle possesses a specific velocity and position, updating these attributes based on its historical optimal position and the best position identified within the group, thereby facilitating the search for superior solutions. In this context, each particle represents a candidate solution within the solution space. Ant Colony Optimization (ACO) [16] is another metaheuristic optimization algorithm inspired by the foraging behavior of natural ant colonies. Its fundamental principle involves addressing combinatorial optimization problems by simulating the collaborative behavior of ants as they release and perceive pheromones during food-searching activities. This bio-inspired algorithm, governed by simple rules that emulate biological swarm intelligence, provides an effective means of tackling complex optimization challenges. The Firefly Algorithm (FA) [17] is a metaheuristic driven by swarm intelligence inspired by the bioluminescent attraction mechanisms of fireflies. It utilizes brightness-mediated interactions to guide individuals toward optimal solutions, achieving a balance between global exploration and local exploitation through adaptive attraction dynamics and stochastic movement. Grey Wolf Optimization (GWO) [18] establishes an optimization search framework based on biological group intelligence by simulating the hierarchy, collaborative predation strategies, and group decision-making mechanisms of grey wolf populations in nature. 
The algorithm categorizes individual grey wolves into a four-level hierarchical structure with a strict social division of labor: α wolves serve as population leaders responsible for global decision-making; β wolves act as secondary coordinators assisting in decision execution; δ wolves form the base population unit and engage in local searches; and ω wolves function as followers, completing comprehensive explorations of the solution space. The Whale Optimization Algorithm (WOA) [19] mimics the feeding behavior of whales, particularly humpback whales, in the ocean, addressing complex optimization problems through strategies of distributed search, autonomous decision-making, and adaptive adjustment. The Sparrow Search Algorithm [20] simulates the role differentiation of sparrows during foraging, distinguishing between the leader (the sparrow that locates food) and the follower (the sparrow that trails the leader). The leader primarily focuses on finding food, while the follower remains within a certain range of the leader to assist in locating food. Additionally, sparrows make positional adjustments to evade predators, and these behavioral characteristics are abstracted into key steps within the algorithm to optimize the objective function.
However, as asserted by the “No Free Lunch” (NFL) theorem [21], every algorithm has inherent limitations, and no single algorithm can solve all optimization problems. Consequently, many researchers are dedicated to proposing new algorithms or enhancing existing ones. For instance, Zhu Fang et al. [22] introduced a good-point-set strategy during the initialization phase of the dung beetle optimization algorithm to increase population diversity. They also proposed a new nonlinear convergence factor to balance exploration and exploitation, as well as a strategy for dynamically balancing the numbers of spawning and foraging dung beetles. In addition, they introduced perturbations based on quantum computing and the t-distribution to enhance the algorithm’s optimization-search capability. Ya Shen et al. [23] proposed an improved whale optimization algorithm based on multi-population evolution to address the whale optimization algorithm’s slow convergence and tendency to fall into local optima. The population is divided into three sub-populations according to fitness value (an exploratory population, an exploitative population, and a fitness population), each of which performs a different update strategy; experiments verified that this multi-swarm co-evolutionary strategy effectively enhances the algorithm’s search capability. Yiying Zhang et al. [24] proposed the EJaya algorithm, whose local exploitation strategy enhances local search by defining upper- and lower-bound local attraction points (combining the current best, the current worst, and the population mean), while its global exploration strategy expands the search scope through random perturbations of historical population information to avoid stagnation in the later stages. Dhargupta et al. [25] proposed a Selective Opposition-based Grey Wolf Optimizer (SOGWO) that applies dimension-selective opposition learning to improve exploration efficiency. By identifying ω-wolves through Spearman’s correlation analysis, the algorithm targets low-ranked individuals for opposition learning, reducing redundant search effort and improving convergence speed while maintaining the exploration–exploitation balance. Shuang Liang et al. [26] proposed an Enhanced Sparrow Search Swarm Optimizer (ESSSO). First, ESSSO introduces an adaptive sinusoidal walking strategy (SLM) based on the von Mises distribution, which lets individuals dynamically adjust their learning rate to improve evolutionary efficiency. Second, a learning strategy with roulette-wheel selection (LSR) maintains population diversity and prevents premature convergence. Furthermore, a two-stage evolutionary strategy (TSE) is designed, which includes a mutation mechanism (AMM) to strengthen local search and a selection mechanism (SMS) to accelerate convergence. These studies show that appropriately improved intelligent algorithms offer better adaptability and effectiveness in complex applications and hold certain advantages over other heuristic algorithms.
The Red-billed Blue Magpie Optimizer (RBMO) [27] is a novel group intelligence-based metaheuristic algorithm inspired by the hunting behavior of red-billed blue magpies, which rely on community cooperation for activities such as searching for food, attacking prey, and food storage. The original paper applied the RBMO algorithm to various domains, including numerical optimization, engineering design problems, and UAV path planning. While the RBMO algorithm boasts a simple structure, minimal parameter requirements, and effective optimization performance, it still faces challenges in achieving optimal results for complex optimization problems. To address these challenges, we propose an improved version of the algorithm, termed the Multi-Strategy Enhanced Red-billed Blue Magpie Optimizer (MRBMO). This enhancement involves the design of a new boundary constraint and the establishment of an innovative prey attack model, which collectively improve the algorithm’s exploration and exploitation balance as well as its overall performance. The primary contributions of this paper are outlined as follows:
  • A new boundary constraint handling method is designed and verified to accelerate convergence and improve the exploitation ability of the algorithm. The method can also be applied to other algorithms that require boundary handling.
  • Inspired by particle swarm optimization (PSO), the update rule in the exploitation (attack) stage of the RBMO algorithm is redesigned, introducing each individual’s historical best information to guide a deeper exploration of the solution space. The step size is controlled by Lévy Flight to keep the algorithm from falling into local optima.
  • The proposed MRBMO is evaluated on 41 benchmark functions, and its optimization performance is compared with eight state-of-the-art, high-performance algorithms. MRBMO is also successfully applied to four classical engineering design problems.
This paper is organized as follows. Section 2 introduces the foundational principles of the Red-billed Blue Magpie Optimization (RBMO) algorithm. In Section 3, we propose a multi-strategy enhanced version of the algorithm (MRBMO) to address the limitations of the original RBMO. Section 4 presents a comprehensive experimental comparison between MRBMO and eight other state-of-the-art optimization algorithms. To further validate the practical utility of the proposed improvements, Section 5 demonstrates the application of MRBMO in real-world engineering design problems. Finally, Section 6 summarizes the key contributions and findings of this study.

2. Red-Billed Blue Magpie Optimizer

In metaheuristic algorithms, exploration (diversification) and exploitation (intensification) work in tandem to determine the performance and efficiency of the algorithm [28]. On one hand, the algorithm must identify more promising regions by searching a broader space; on the other hand, it must also focus its resources on the in-depth development of these promising areas. Striking a balance between exploration and exploitation is often one of the primary challenges in the design of intelligent optimization algorithms [29]. An excessive inclination towards exploration can result in slow convergence and high computational costs, while an overemphasis on exploitation may cause the algorithm to prematurely converge to local optima, preventing it from discovering a globally optimal solution. In the Red-billed Blue Magpie Optimizer (RBMO), exploration and exploitation correspond to searching for prey and attacking prey, respectively.

2.1. Searching for Prey

When searching for prey, red-billed blue magpies usually operate in small groups (2–5 individuals) or collectively (more than 10 individuals) to exchange information. Therefore, two updating models were designed, with small groups and collective actions corresponding to Equations (1) and (2), respectively.
$$X_i(t+1) = X_i(t) + \left( \frac{1}{p} \sum_{m=1}^{p} X_m(t) - X_{rs}(t) \right) \times Rand_1 \tag{1}$$

$$X_i(t+1) = X_i(t) + \left( \frac{1}{q} \sum_{m=1}^{q} X_m(t) - X_{rs}(t) \right) \times Rand_2 \tag{2}$$

where $t$ denotes the current iteration number, $X_i(t+1)$ denotes the new position of the $i$-th search agent, $p$ and $q$ denote the numbers of individuals randomly selected from the population ($2 \le p \le 5$ and $10 \le q \le n$, where $n$ is the population size), $X_m(t)$ denotes the $m$-th randomly selected individual, $X_i(t)$ denotes the $i$-th individual, $X_{rs}(t)$ denotes a randomly selected search agent in the current iteration, and $Rand_1$ and $Rand_2$ denote two independent random variables uniformly distributed over $[0, 1)$.
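As a sketch, the small-group/collective prey-search step of Equations (1) and (2) can be written in a few lines of NumPy. The function and variable names below are illustrative, not taken from the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def search_for_prey(X, i, group_size):
    """One RBMO prey-search step for agent i (Eqs. (1)-(2)).

    X: (n, D) population matrix; group_size plays the role of p
    (small group, 2-5) or q (collective, 10-n).
    """
    n, D = X.shape
    members = rng.choice(n, size=group_size, replace=False)  # m = 1..p (or q)
    X_rs = X[rng.integers(n)]                                # random search agent
    # Move relative to the group mean minus a random agent, scaled by Rand ~ U[0, 1)
    return X[i] + (X[members].mean(axis=0) - X_rs) * rng.random(D)

pop = rng.uniform(-100.0, 100.0, size=(30, 10))
new_pos = search_for_prey(pop, 0, group_size=3)
```

Note that the same function covers both Equations (1) and (2); only the number of sampled group members differs.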

2.2. Attacking the Prey

In small group operations, the main targets are small prey or plants. The corresponding mathematical model is shown in Equation (3). When acting in a collective manner, red-billed blue magpies are able to collectively target larger prey, such as large insects or small vertebrates. The mathematical representation of this behavior is shown in Equation (4).
$$X_i(t+1) = X_{food}(t) + CF \times \left( \frac{1}{p} \sum_{m=1}^{p} X_m(t) - X_i(t) \right) \times Randn_1 \tag{3}$$

$$X_i(t+1) = X_{food}(t) + CF \times \left( \frac{1}{q} \sum_{m=1}^{q} X_m(t) - X_i(t) \right) \times Randn_2 \tag{4}$$

where $X_{food}(t)$ represents the position of the food, i.e., the solution with the minimum fitness value in the current iteration (the current best solution). $CF = \left(1 - \frac{t}{T}\right)^{\left(2 \times \frac{t}{T}\right)}$ is a coefficient that decreases nonlinearly with the number of iterations to adaptively control the search step, $T$ denotes the maximum number of iterations, and $Randn_1$ and $Randn_2$ represent random numbers drawn from a standard normal distribution (mean 0, standard deviation 1).
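A minimal sketch of the attack step of Equations (3) and (4), including the nonlinearly decaying coefficient $CF$, might look as follows (names are illustrative, not from the original code):

```python
import numpy as np

rng = np.random.default_rng(1)

def attack_prey(X, X_food, i, group_size, t, T):
    """RBMO attack step (Eqs. (3)-(4)) guided by the current best solution."""
    n, D = X.shape
    CF = (1 - t / T) ** (2 * t / T)   # decays nonlinearly from 1 (t=0) to 0 (t=T)
    members = rng.choice(n, size=group_size, replace=False)
    # Randn ~ N(0, 1) perturbs the move around the food (current best) position
    return X_food + CF * (X[members].mean(axis=0) - X[i]) * rng.standard_normal(D)

pop = rng.uniform(-100.0, 100.0, size=(30, 10))
best = pop[0]
pos = attack_prey(pop, best, 5, group_size=4, t=100, T=500)
```

Because $CF$ shrinks with $t$, the perturbation around $X_{food}$ contracts over the run, shifting the algorithm from exploration toward exploitation.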

2.3. Food Storage

In the original RBMO, the food storage behavior is defined as a greedy rule: a new position is retained only when its fitness value improves on the previous one, as in Equation (5).

$$X_i(t+1) = \begin{cases} X_i(t+1), & \text{if } fitness_{new}^{i} < fitness_{old}^{i} \\ X_i(t), & \text{otherwise} \end{cases} \tag{5}$$

where $fitness_{old}^{i}$ and $fitness_{new}^{i}$ represent the fitness values of the $i$-th red-billed blue magpie before and after the position update, respectively.
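The greedy rule of Equation (5) can be vectorized over the whole population; the sketch below assumes minimization and uses an illustrative sphere objective:

```python
import numpy as np

def food_storage(X_old, X_new, fitness):
    """Greedy selection (Eq. (5)): keep a new position only if it improves fitness."""
    f_old = np.apply_along_axis(fitness, 1, X_old)
    f_new = np.apply_along_axis(fitness, 1, X_new)
    improved = f_new < f_old                    # minimization: smaller is better
    return np.where(improved[:, None], X_new, X_old)

sphere = lambda x: float(np.sum(x ** 2))        # toy objective for illustration
X_old = np.array([[1.0, 1.0], [0.1, 0.1]])
X_new = np.array([[0.5, 0.5], [2.0, 2.0]])
kept = food_storage(X_old, X_new, sphere)       # row 0 improves, row 1 does not
```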

2.4. Detailed Process of the RBMO

A control parameter ϵ = 0.5 has been incorporated into the RBMO algorithm to enable adaptive selection between different update strategies. Specifically, the overall framework of the RBMO is shown as Algorithm 1.
Algorithm 1: The framework of the RBMO algorithm

3. The Multi-Strategy Red-Billed Blue Magpie Optimizer

3.1. A New Way to Handle Boundary Constraints

In the original algorithm, when a dimension of a solution vector oversteps its bounds during iteration, the violating dimension is simply clamped to the boundary as shown in Equation (6); this clamping approach is also the one used by most intelligent algorithms to handle boundary violations [30]. However, it cannot make effective use of historical search information and easily leads to a premature loss of population diversity, which hinders convergence.

$$X(i,d) = \begin{cases} ub(d), & X(i,d) > ub(d) \\ lb(d), & X(i,d) < lb(d) \end{cases} \tag{6}$$
This study introduces a dynamic boundary correction strategy incorporating elite-guided dimensional adaptation. As formalized in Equation (7), the mechanism dynamically replaces out-of-bound dimension values with those from the global optimum’s corresponding dimensions. By leveraging dimensional information from elite individuals, this approach achieves three key advantages: (1) It establishes directional guidance through preferential dimension inheritance; (2) it fosters dimensional collaboration via cross-dimensional information exchange; and (3) it enhances exploitation capability through adaptive recombination of advantageous dimensional traits. The strategy effectively transforms isolated boundary handling into a coordinated optimization process, enabling more efficient exploitation of promising search regions while maintaining exploration diversity.
$$X(i,d) = \begin{cases} X_{food}(d), & X(i,d) > ub(d) \text{ or } X(i,d) < lb(d) \\ X(i,d), & \text{otherwise} \end{cases} \tag{7}$$

Here, $d$ represents the dimension index ($d = 1, 2, \ldots, D$), where $D$ is the total number of dimensions in the search space. $ub(d)$ and $lb(d)$ denote the upper and lower bounds of the $d$-th dimension, respectively, ensuring the solution remains within the feasible search region.
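The contrast between the classical clamping of Equation (6) and the proposed elite-guided repair of Equation (7) can be sketched as follows (function names are illustrative):

```python
import numpy as np

def clip_to_bounds(X, lb, ub):
    """Classical handling (Eq. (6)): snap each violating dimension to its bound."""
    return np.clip(X, lb, ub)

def elite_repair(X, X_food, lb, ub):
    """Proposed handling (Eq. (7)): replace out-of-bound dimensions with the
    corresponding dimensions of the current best solution X_food."""
    out = (X < lb) | (X > ub)
    return np.where(out, X_food, X)

lb, ub = -10.0, 10.0
best = np.array([1.0, -2.0, 3.0])       # current best solution (in bounds)
agent = np.array([12.0, 0.5, -11.0])    # dimensions 0 and 2 violate the bounds
repaired = elite_repair(agent, best, lb, ub)
```

Unlike clamping, the repaired agent inherits elite values on violating dimensions instead of piling up on the boundary, which preserves diversity near the feasible region's edge.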

3.2. Individual Optimal Value Guidance

In the classical Particle Swarm Optimization (PSO) framework, the individual location updating mechanism integrates a dual guidance strategy that utilizes both the individual historical optimal solution (pbest) and the group historical optimal solution (gbest). This memory-driven approach effectively balances global exploration with local exploitation, providing a theoretical guarantee for algorithm convergence. However, in the original prey attack model of RBMO, the location update process only considered the one-way guidance of the global optimal solution. This limitation resulted in two significant drawbacks: (1) The neglect of individual cognitive experience led to a premature attenuation of population diversity; (2) The unipolar guidance mode is susceptible to causing search stagnation.
To address the aforementioned challenges, this study introduces an individual optimal value guidance strategy based on Lévy Flight [31] during the attack phase on prey. This strategy has dual optimization characteristics: first, it tracks the best solution found by each individual by maintaining a memory bank of individual optimal solutions; second, it employs the Lévy Flight distribution to generate the search step size. The power-law step size distribution, characterized by its heavy-tail property, effectively accommodates both local fine search and global mutation exploration. The combination of short-range, high-frequency jumps and long-range, low-frequency jumps produced by Lévy Flight significantly increases the probability of escaping local optima while ensuring population diversity. In this study, the Mantegna algorithm was used to achieve efficient Lévy Flight simulation, and the step calculation process is shown in Equation (8).
$$u \sim N(0, \sigma_u^2), \quad v \sim N(0, \sigma_v^2), \quad \sigma_v = 1, \quad \sigma_u = \left[ \frac{\Gamma(1+\beta)\,\sin(\pi\beta/2)}{\Gamma\!\left(\frac{1+\beta}{2}\right)\,\beta \cdot 2^{(\beta-1)/2}} \right]^{1/\beta}, \quad s = \frac{u}{|v|^{1/\beta}} \tag{8}$$

where $\beta$ denotes the shape parameter (usually $0.3 \le \beta \le 1.99$), $\Gamma$ represents the gamma function, and $s$ specifies the step size. Figure 1 illustrates the motion trajectory of Lévy Flight, whose random walk mechanism, alternating between long and short step lengths, can enhance the algorithm’s global search efficiency and its local optima avoidance capabilities [32]. The updated attack prey position is subsequently determined by Equations (9) and (10).
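Mantegna's step-size generation in Equation (8) translates directly into code; the sketch below draws a batch of Lévy-distributed steps (parameter and function names are illustrative):

```python
import math
import numpy as np

def levy_step(beta, size, rng):
    """Mantegna's algorithm (Eq. (8)) for Levy-distributed step sizes."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)    # u ~ N(0, sigma_u^2)
    v = rng.normal(0.0, 1.0, size)        # v ~ N(0, 1), i.e., sigma_v = 1
    return u / np.abs(v) ** (1 / beta)    # heavy-tailed: mostly short, rarely long

rng = np.random.default_rng(42)
steps = levy_step(beta=0.5, size=1000, rng=rng)  # beta = 0.5 per Section 4.1
```

The heavy tail is visible in practice: most steps are small, but occasional very large jumps occur, which is exactly what helps the search escape local optima.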
$$X_i(t+1) = X_{food}(t) + CF \times \left( \frac{1}{p} \sum_{m=1}^{p} X_m(t) - X_i(t) \right) \times Randn_1 + \left( pBest_i(t) - X_i(t) \right) \times s \tag{9}$$

$$X_i(t+1) = X_{food}(t) + CF \times \left( \frac{1}{q} \sum_{m=1}^{q} X_m(t) - X_i(t) \right) \times Randn_2 + \left( pBest_i(t) - X_i(t) \right) \times s \tag{10}$$

Here, $pBest_i(t)$ denotes the historical best position of the $i$-th individual at the $t$-th iteration, and its update rule is defined in Equation (11), where $f(\cdot)$ represents the objective function.

$$pBest_i(t) = \begin{cases} X_i(t), & \text{if } f(X_i(t)) < f(pBest_i(t-1)) \\ pBest_i(t-1), & \text{otherwise} \end{cases} \tag{11}$$
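The personal-best memory of Equation (11) is the same bookkeeping used in PSO and can be sketched in vectorized form (names illustrative, minimization assumed):

```python
import numpy as np

def update_pbest(pbest, pbest_f, X, f):
    """Eq. (11): refresh each agent's personal-best memory (minimization)."""
    better = f < pbest_f          # agents whose new position improves their record
    pbest[better] = X[better]
    pbest_f[better] = f[better]
    return pbest, pbest_f

X = np.array([[0.0, 0.0], [3.0, 3.0]])     # new positions
f = np.array([0.0, 18.0])                  # their fitness values
pbest = np.array([[1.0, 1.0], [2.0, 2.0]]) # stored personal bests
pbest_f = np.array([2.0, 8.0])
pbest, pbest_f = update_pbest(pbest, pbest_f, X, f)  # agent 0 improves, agent 1 keeps its memory
```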

3.3. Detailed Process of the MRBMO

The overall framework of the Multi-strategy Red-billed Blue Magpie Optimizer (MRBMO) algorithm is presented in Algorithm 2. Crucially, this enhanced implementation introduces no additional computational overhead compared to the baseline, thus preserving equivalent time complexity.
Algorithm 2: The framework of the MRBMO algorithm

4. Numerical Experiment

The CEC2017 [33] test set is a set of benchmark functions for evaluating and comparing the performance of intelligent optimization algorithms. The test suite consists of 29 single-objective functions, covering a variety of complexities and diversity, designed to simulate real-world optimization problems and provide a fair evaluation platform for algorithmic researchers. This suite incorporates unimodal functions (F1–F2), which are characterized by a single global optimum devoid of local optima. These functions are primarily employed to evaluate the convergence velocity and exploration efficiency of optimization algorithms. Simple multimodal functions (F3–F9), containing multiple local optima, are used to evaluate the ability of the algorithm to avoid falling into local optima. Hybrid functions (F10–F19), which are composed of multiple basic functions, have a complex search space and test the performance of the algorithm when dealing with nonlinear and discontinuous problems. Composition functions (F20–F29), which are generated by multiple basic functions through rotation, migration and other operations, have higher complexity and are used to evaluate the performance of algorithms in high-dimensional and complex problems. The CEC2022 [34] benchmark function set is similar to CEC2017, with a total of 12 test functions: Unimodal functions, Basic functions, Hybrid functions, and Composition functions. CEC2022 is further optimized in terms of function complexity, diversity, and real-world problem simulation, enabling a more comprehensive evaluation of algorithm performance in different types of optimization problems. CEC2017 and CEC2022 test functions are detailed in Table A1 and Table A2.

4.1. Parameters Sensitivity Testing

In the stage of attacking prey, Lévy Flight is introduced to generate the step size. The parameter β of Lévy Flight is a constant whose value significantly influences the performance of the algorithm. To select β together with the parameter ϵ of the original algorithm, this paper designs a set of experiments, considering ϵ ∈ {0.25, 0.5, 0.75} and β ∈ {0.5, 1, 1.5}. The CEC2022 benchmark suite was used, with the dimension of the vector X set to 20, a maximum of 500 iterations, a population size of 30, and 30 independent repetitions. The experimental results are summarized in Table 1, which documents the average performance metrics and the corresponding rankings for each parameter configuration across the 30 independent trials. To statistically evaluate overall algorithmic performance, the Friedman rank-sum test was applied to the ranked results, with lower average ranks indicating superior performance. In particular, the combination ϵ = 0.75 and β = 0.5 consistently achieved optimal performance on multiple benchmark functions. Based on these findings, this configuration is adopted as the recommended parameter setting for the proposed algorithm.
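The Friedman ranking procedure used here is available in SciPy. The sketch below reproduces the mechanics on synthetic numbers (the values are placeholders, not results from Table 1):

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Mean errors of three parameter settings on five benchmark functions (synthetic)
cfg1 = [1.0, 2.0, 1.5, 3.0, 2.5]
cfg2 = [1.1, 2.2, 1.7, 3.3, 2.8]
cfg3 = [2.0, 3.0, 2.5, 4.0, 3.5]

stat, p = friedmanchisquare(cfg1, cfg2, cfg3)

# Per-function ranks (1 = best) and the average rank per configuration
ranks = np.argsort(np.argsort(np.array([cfg1, cfg2, cfg3]), axis=0), axis=0) + 1
mean_ranks = ranks.mean(axis=1)   # lower mean rank = better overall
```

A small p-value indicates the configurations differ systematically; the configuration with the lowest mean rank is then preferred, which is how the ϵ = 0.75, β = 0.5 setting was selected.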

4.2. Ablation Experiment

To ascertain the impact of the enhanced strategy introduced in this paper on the performance of the algorithm, an ablation experiment is meticulously designed in this section. Building upon the foundational RBMO algorithm, the first modification involves refining the out-of-bounds processing method, hereafter referred to as RBMO1. Subsequently, the hunting behavior within the original RBMO algorithm is revised, and a novel update mechanism, designated as RBMO2, is conceptualized. These two enhancements are then integrated to form MRBMO, which represents the ultimate refined algorithm presented in this study. A comparative analysis of RBMO1, RBMO2, and MRBMO against the original RBMO algorithm is conducted to comprehensively assess the magnitude of the improvements achieved.
The CEC2022 benchmark suite was utilized to evaluate algorithm performance, with metrics including minimum and average values, standard deviations, and Friedman rankings calculated for each test function (Table 2). For the unimodal function F1, both RBMO1 and RBMO2 achieved superior values compared to RBMO, indicating that both strategies enhanced the performance of RBMO to varying degrees. A closer examination reveals that RBMO2 attained not only smaller values but also exhibited greater stability than RBMO1, suggesting that the second strategy contributed more significantly to the improvement of RBMO. For the hybrid function F6, analysis of average values reveals that implementing either strategy individually compromised the algorithm’s ability to escape local optima. However, their synergistic combination demonstrated superior performance, yielding solutions closest to the theoretical optimum. On composite functions F9–F12, the combination of the two strategies enhanced the convergence performance of RBMO. However, there remains a discrepancy from the theoretical optimum. Observing the final mean rank, both strategies enhance the overall performance of RBMO to varying degrees, with their synergistic combination yielding the most favorable outcomes.

4.3. Comparison of MRBMO with State-of-the-Art and Well-Established Algorithms

The comparison algorithms selected for this study encompass a diverse range of optimization techniques. These include the original Red-billed Blue Magpie Optimizer (RBMO) and two advanced algorithms: the Selective Opposition-based Grey Wolf Optimizer (SOGWO) [25] and the Transient Trigonometric Harris Hawks Optimizer (TTHHO) [35]. Additionally, the classical Particle Swarm Optimization (PSO) [15] is included, along with more recently developed methods such as the Sparrow Search Algorithm (SSA) [20] and the RIME Optimization Algorithm (RIME) [36]. A top-performing algorithm from the CEC competitions, LSHADE_SPACMA [37], is also incorporated. The parameter settings for each algorithm were strictly configured according to the recommendations provided in their respective original publications.
To further verify the effectiveness of the improved algorithm, the CEC2017 test function suite was used to fully verify the algorithm performance. First of all, the mean value, standard deviation, and ranking of the MRBMO and the comparison algorithm running 30 times with dimension 30, 50, and 100 were counted, respectively, as shown in Table 3, Table 4 and Table 5.
  • When the dimension is 30, MRBMO achieved first-ranked average values on the unimodal functions F1 and F2, demonstrating superior exploration capability. For the simple multimodal functions F3–F9, MRBMO achieved first-place average scores, outperforming all other algorithms and indicating strong local-optimum avoidance; however, on some of these functions the standard deviations reveal noticeable fluctuations in MRBMO’s results. On the hybrid function F17, MRBMO (2.56E+03) is slightly inferior to RBMO (2.42E+03) but significantly superior to the other algorithms. Examining the standard deviations overall, MRBMO attains the lowest standard deviation on most functions and the second- or third-best on a few, showing that its performance is highly stable across different problems, with little fluctuation, strong adaptability to problem characteristics, and good robustness.
  • Furthermore, we investigate the impact of dimensionality changes on the performance of MRBMO. Notably, in experiments with 50 and 100 dimensions, MRBMO consistently maintains a stable first-place average ranking across most test functions. This indicates that as the dimensionality increases, its search performance does not deteriorate. Collectively, MRBMO remains highly competitive in addressing complex high-dimensional optimization challenges.
A statistical analysis was conducted on the rankings of each algorithm across 29 benchmark functions in 30 dimensions. As illustrated in Figure 2, the MRBMO algorithm demonstrates superior performance, securing first place on 26 test functions while attaining second place on three functions (F17, F21 and F28). This dominance can be attributed to its multi-strategy framework, which effectively balances exploration and exploitation through dynamic boundary constraint handling and elite-guided Lévy Flight mechanisms. In contrast, other algorithms, such as SSA and PSO, exhibit significant performance fluctuations, demonstrating competitive results on specific functions while showing limited effectiveness on others.
Furthermore, to rigorously evaluate the comprehensive performance of all algorithms, the Friedman rank-sum test was employed. A bar chart depicting the mean ranks for all algorithms is presented in Figure 3, where lower numerical rankings indicate superior performance. The results demonstrate that across all tested dimensionalities, the MRBMO algorithm consistently attained the lowest average rank. This indicates the excellent robustness and insensitivity to dimension changes of MRBMO.
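The mean ranks underlying Figure 3 can be reproduced with a short script. The sketch below uses SciPy's `friedmanchisquare` on synthetic per-function results; the data here are illustrative assumptions, not the paper's measurements, which would come from Tables 3–5.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
# Hypothetical mean errors of 3 algorithms on 10 benchmark functions
# (rows = functions, columns = algorithms); column 0 tends to perform best.
errors = rng.random((10, 3)) + np.array([0.0, 0.5, 1.0])

stat, p = friedmanchisquare(errors[:, 0], errors[:, 1], errors[:, 2])

# Per-function ranks (1 = best) and the mean rank per algorithm, as in Figure 3
ranks = errors.argsort(axis=1).argsort(axis=1) + 1
mean_ranks = ranks.mean(axis=0)
```

A lower mean rank indicates better overall performance, which is how the bar chart in Figure 3 should be read.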
In this study, the Wilcoxon rank sum test [38], a nonparametric statistical method, was used to evaluate whether the performance differences between the improved method and the comparative algorithms are statistically significant. The test was conducted at a significance level of 0.05: a p-value below 0.05 indicates a significant difference between the two algorithms being compared, whereas a p-value of 0.05 or higher suggests that their performance is not significantly different and can be considered comparable. The test results are presented in Table A3, Table A4 and Table A5, with p-values greater than 0.05 highlighted in bold font. The experimental results are summarized in Table 6, where R+ denotes the number of comparisons showing statistically significant differences and R− the number of comparisons without, with the final row presenting the aggregate totals. The proposed method differs significantly from the competing algorithms on most test functions, and its superiority is substantiated.
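A p-value such as those in Tables A3–A5 is obtained by applying the rank sum test to the 30 final fitness values of each algorithm on one function; a minimal sketch with illustrative (not the paper's) data:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
# Hypothetical final fitness values from 30 independent runs of two algorithms
# on one benchmark function (illustrative data only).
mrbmo_runs = rng.normal(loc=100.5, scale=0.2, size=30)
rival_runs = rng.normal(loc=105.0, scale=1.0, size=30)

stat, p = ranksums(mrbmo_runs, rival_runs)
significant = p < 0.05  # the significance level used in the paper
```

When two 30-run samples are completely separated (every run of one algorithm beats every run of the other), the normal-approximation rank sum test saturates at its smallest attainable p-value, which plausibly explains why 3.020E-11 recurs so often throughout Tables A3–A5.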

4.4. Assessing Convergence Performance

To assess both the accuracy and the convergence speed of the algorithms, convergence curves were plotted for MRBMO and the other algorithms at dimension 30, as illustrated in Figure 4 and Figure 5. It is worth noting that in each subplot, the horizontal axis represents the number of iterations, while the vertical axis represents the average convergence curve over 30 runs.
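The averaged curves of Figures 4 and 5 are obtained by recording the best-so-far fitness per iteration for each run and then averaging across the 30 runs; a minimal sketch with synthetic histories:

```python
import numpy as np

rng = np.random.default_rng(2)
n_runs, n_iters = 30, 500

# Synthetic raw fitness samples; a running minimum turns each row into a
# non-increasing best-so-far convergence history (minimization problem).
raw = rng.random((n_runs, n_iters)) * 1e3
histories = np.minimum.accumulate(raw, axis=1)

# The curve plotted in each subplot: the mean best-so-far fitness over 30 runs
mean_curve = histories.mean(axis=0)
```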
  • During the optimization process of the unimodal test function F1, MRBMO exhibits faster convergence speed compared to other comparative algorithms. Notably, the MRBMO algorithm benefits from its enhanced global search capability, enabling it to obtain higher-quality feasible solutions. This advantageous characteristic is further validated in the optimization process of another unimodal function F2.
  • During the optimization of the simple multimodal functions F3 and F8, MRBMO did not demonstrate a significant competitive advantage. On F4, F5, F6, and F7, however, MRBMO exhibited distinctive behavior: while the other comparative algorithms stagnated in local optima, it not only avoided convergence stagnation but also reached superior solutions with markedly accelerated convergence, validating the effectiveness of its local-optimum avoidance mechanism. Notably, on F9, although MRBMO converged more slowly than SSA, RIME, and PSO during the initial phase (fewer than 300 iterations), those algorithms were subsequently constrained by premature convergence, unable to escape local extrema. In stark contrast, MRBMO sustained stronger optimization capability in the later stages (beyond 300 iterations), gradually approaching the global optimum. This phased behavior highlights the algorithm's superior long-term convergence and validates its design for maintaining an exploration–exploitation balance, enabling enhanced global convergence on complex optimization problems.
  • In the hybrid benchmark function tests, the MRBMO algorithm demonstrated significant performance advantages. Specifically, for the F11, F12, F14, F15, and F19 functions, this algorithm exhibited marked superiority over all reference algorithms in both key metrics: convergence rate and solution accuracy. Regarding the optimization of the F10 function, MRBMO showed the fastest convergence characteristics during the initial iteration phase (<100 iterations), with its solution quality rapidly approaching the theoretical optimum. During optimization processes for the F13, F17, and F18 functions, MRBMO achieved comparable convergence performance with the RBMO and LSHADE_SPACMA algorithms, with all three outperforming other comparative algorithms. Regarding the F16 function, in the early iteration stage, MRBMO merely maintained convergence levels comparable to those of other algorithms, without showcasing any remarkable superiority. In the later iteration stage, however, through continuous exploration, MRBMO managed to discover higher-quality solutions.
  • The proposed MRBMO algorithm exhibits superior convergence rates for multiple composition functions. This is particularly evident in the test cases of F20, F21, F22, F23, F25, and F29. By contrast, for functions F24, F26, F27, and F28, the convergence performance of MRBMO is comparable to that of other comparative algorithms.

4.5. Population Diversity Analysis

Maintaining adequate population diversity throughout the optimization process is a critical factor influencing the performance of metaheuristic optimization algorithms, as highlighted in recent studies. Population diversity refers to the distribution of individuals within the search space and plays a pivotal role in balancing exploration and exploitation. When diversity is relatively high, individuals are more broadly distributed across the search space. This enhances the algorithm’s ability to explore various regions, thereby reducing the likelihood of becoming trapped in local optima. Conversely, lower levels of diversity may result in premature convergence, where the algorithm stagnates around suboptimal solutions due to insufficient exploration.
This section employs Equation (12) [39] to quantify population diversity. Here, I_C(t) measures the dispersion of the population about its center of mass at iteration t, x_i^d(t) denotes the value of the d-th dimension of the i-th individual at iteration t, n is the population size, and D is the total number of dimensions of the search space. By accumulating the squared distance of each individual from the center of mass c(t), this metric provides a numerical measure of how dispersed the population is at any given point in the optimization process.
$$I_C(t) = \sum_{i=1}^{n} \sum_{d=1}^{D} \left( x_i^d(t) - c^d(t) \right)^2, \qquad c^d(t) = \frac{1}{n} \sum_{i=1}^{n} x_i^d(t)$$
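Under this moment-of-inertia reading of Equation (12), the metric can be computed in a few lines; the populations below are synthetic placeholders:

```python
import numpy as np

def population_diversity(X: np.ndarray) -> float:
    """I_C(t) of Equation (12): squared dispersion of the population
    X (n individuals x D dimensions) about its center of mass."""
    centroid = X.mean(axis=0)          # c^d(t): per-dimension center of mass
    return float(((X - centroid) ** 2).sum())

rng = np.random.default_rng(3)
spread = rng.uniform(-100.0, 100.0, size=(30, 10))  # widely dispersed population
clustered = 0.01 * spread                            # tightly clustered copy
```

A widely dispersed population scores much higher than a tightly clustered one, matching the declining trends shown in Figure 6.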
Figure 6 compares the population diversity dynamics of RBMO and MRBMO on CEC2017 benchmark functions on 30 dimensions. The figure reveals that population diversity generally exhibits a declining trend throughout the optimization process. This decline is expected, as the optimization progresses and the algorithm increasingly focuses on refining solutions near the optimal region. However, a notable difference emerges between RBMO and MRBMO: MRBMO demonstrates consistently lower diversity compared to RBMO. This phenomenon can be attributed to the boundary constraint strategy implemented in MRBMO. This strategy is designed to guide individuals toward feasible regions of the search space, potentially accelerating convergence toward the current optimal solution. While this approach enhances exploitation by focusing the search around promising areas, it simultaneously reduces the breadth of exploration.

4.6. Exploration and Exploitation Evaluation

During evolutionary optimization, metaheuristic algorithms employ distinct coordination mechanisms to balance exploration and exploitation among population members. This section adopts the methodology of [40] to quantify algorithmic search behavior by tracking dynamic variations in the dimension-wise components of individuals, enabling a quantitative assessment of exploration–exploitation tendencies throughout the evolutionary process [41]. The balance is evaluated through the formulation defined in Equation (13), where X denotes the population matrix at the t-th iteration (an n × D matrix), Div(t) quantifies the dimensional diversity, median(X^d(t)) is the median value of the d-th dimension across all individuals at iteration t, and X_i^d(t) is the d-th dimensional value of the i-th individual. The exploration percentage (ERP) and exploitation percentage (ETP) are derived from these measurements. The experimental findings are illustrated in Figure 7. As the iterations proceed, the exploitation curve progressively ascends toward 100% while the exploration curve correspondingly diminishes to zero, indicating that the algorithm predominantly explores during the initial stages and shifts its emphasis to exploitation in subsequent phases.
$$\mathrm{Div}(t) = \frac{1}{D} \sum_{d=1}^{D} \frac{1}{n} \sum_{i=1}^{n} \left| \operatorname{median}\left(X^d(t)\right) - X_i^d(t) \right|, \qquad \mathrm{ERP} = \frac{\mathrm{Div}(t)}{\max(\mathrm{Div})} \times 100\%, \qquad \mathrm{ETP} = \frac{\max(\mathrm{Div}) - \mathrm{Div}(t)}{\max(\mathrm{Div})} \times 100\%$$
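Equation (13) translates directly into code; the decaying diversity trace below is synthetic, standing in for a real run's history:

```python
import numpy as np

def dimension_diversity(X: np.ndarray) -> float:
    """Div(t) of Equation (13): mean absolute deviation from the per-dimension
    median, averaged over the D dimensions and n individuals."""
    med = np.median(X, axis=0)
    return float(np.abs(med - X).mean())

def exploration_exploitation(div_history: np.ndarray):
    """Percentage-based ERP (exploration) and ETP (exploitation) curves."""
    div_max = div_history.max()
    erp = div_history / div_max * 100.0
    etp = (div_max - div_history) / div_max * 100.0
    return erp, etp

# Illustrative diversity trace that decays over the iterations, as in Figure 7
div_hist = np.linspace(50.0, 0.0, 100)
erp, etp = exploration_exploitation(div_hist)
```

By construction ERP and ETP sum to 100% at every iteration, so the two curves in Figure 7 are mirror images of each other.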

5. Engineering Design Application

The engineering design optimization problem seeks to achieve optimal system performance through mathematical modeling and algorithmic solutions while satisfying physical constraints and performance metrics. To validate the efficacy of the proposed methodology, four engineering design problems were adopted in this section. For experimental rigor, identical parameters were configured: a population size of 30, a maximum iteration count of 100, and 30 independent trials to mitigate stochastic interference. Statistical metrics including the best value, mean, median, worst value, and standard deviation were systematically documented for each algorithm. Furthermore, convergence curves and box plots were generated to visualize the solution distributions and search efficiency across algorithmic approaches.
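The five statistics documented for each algorithm can be computed from the 30 trial results in a few lines; the trial values below are placeholders, not measured data:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical best objective values from 30 independent trials of one
# algorithm on one design problem (population 30, 100 iterations each).
trial_results = rng.normal(loc=1.27, scale=0.01, size=30)

stats = {
    "best":   float(np.min(trial_results)),
    "mean":   float(np.mean(trial_results)),
    "median": float(np.median(trial_results)),
    "worst":  float(np.max(trial_results)),
    "std":    float(np.std(trial_results, ddof=1)),  # sample standard deviation
}
```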

5.1. Extension/Compression Spring Design (TCSD)

The extension/compression spring design problem [42], illustrated in Figure 8, seeks to minimize the spring weight by optimizing parameters such as the wire diameter (d), average coil diameter (D), and the number of active coils (N). This problem endeavors to identify the optimal parameter combination to achieve the desired performance while simultaneously minimizing the spring weight, thereby facilitating efficient and lightweight spring design for diverse applications. The mathematical model can be found in Appendix B.1.
The statistical results of 30 independent runs are presented in Table 7. Compared with RBMO, the method proposed in this paper improves the average value, the worst value, and the standard deviation. Nevertheless, gaps remain relative to LSHADE_SPACMA and LSHADE on several indicators. As shown in Figure 8, all algorithms start from relatively high fitness values during initialization and converge rapidly as the iterations advance. A further analysis of the boxplots indicates that MRBMO, RBMO, LSHADE, and LSHADE_SPACMA display a compact interquartile range (IQR), confirming the strong stability of their optimization processes. Notably, MRBMO produces the fewest outliers, underscoring its superiority in solution-quality control.

5.2. Reducer Design Problem (RDP)

The schematic diagram of the speed reducer design problem [43] is depicted in Figure 9. The problem involves seven design variables: end face width ( x 1 ), number of tooth modules ( x 2 ), number of teeth in the pinion ( x 3 ), length of the first shaft between the bearings ( x 4 ), length of the second shaft between the bearings ( x 5 ), diameter of the first shaft ( x 6 ), and diameter of the second shaft ( x 7 ). The objective of the problem is to minimize the total weight of the gearbox by optimizing seven variables. The mathematical model can be found in Appendix B.2.
Table 8 presents the statistical results, with the optimal outcomes highlighted in bold. MRBMO achieves the most favorable best, mean, and median values, ranking second only to LSHADE in the worst value and standard deviation. The convergence curves and box plots of the algorithms are illustrated in Figure 10. The convergence curves reveal that, with the exception of TTHHO, which converges significantly more slowly, all the compared algorithms exhibit comparable convergence speeds. The box plots summarize the distribution of fitness values obtained from 30 independent runs for each algorithm, including the median, quartiles, and outliers. TTHHO shows a substantially larger box accompanied by numerous outliers, indicating significant performance variability and many runs that deviate markedly from the central values. Conversely, the other algorithms exhibit more compact boxes, reflecting a higher degree of stability in their performance.

5.3. Welded Beam Design Problem (WBD)

The objective of the welded beam design problem [44] is to minimize the cost of the welded beam. As shown in Figure 11, the problem involves four design variables: weld thickness (h), length of the connected portion of the bar (l), height of the bar (t), and thickness of the reinforcement bar (b). The mathematical model can be found in Appendix B.3.
The statistical results of 30 independent operations are shown in Table 9. It can be observed that MRBMO achieves optimal values in all statistical metrics, demonstrating excellent stability. Figure 12 presents the convergence curves and box plots of all algorithms. The analysis reveals that the fitness values of the cost function are relatively large during initialization, but most algorithms subsequently converge rapidly to superior values. In the box plots, MRBMO and LSHADE_SPACMA exhibit relatively shorter box lengths, indicating that the distribution of fitness values obtained from 30 independent runs is more concentrated, suggesting potentially higher algorithmic stability. Conversely, RIME, SSA and TTHHO display noticeable outliers (denoted by the ’+’ symbols), implying these algorithms may occasionally yield exceptionally superior or inferior fitness values under specific conditions.

5.4. Gear Train Design Problem (GTD)

The primary objective of the gear train design problem [45] is to minimize the specific cost of the transmission. As illustrated in Figure 13, the design variables for this problem consist of four gear quantities: T a , T b , T c , and T d . The mathematical model is presented in Appendix B.4.
The statistical results of the algorithms are presented in Table 10. It is evident that all algorithms achieved comparable optimal solutions across the thirty independent trials. In terms of the mean, median, and standard deviation, MRBMO demonstrates enhanced stability in locating superior solutions. Notably, the worst-case values indicate that MRBMO and RBMO are of the same order of magnitude, significantly outperforming other algorithms and further validating their stability in GTD problems. Figure 14 illustrates the convergence curves and box plots, where MRBMO maintains a faster convergence rate throughout the entire process while exhibiting the fewest outliers, which substantiates its superior stability.

6. Conclusions

This study proposes a Multi-Strategy Enhanced Red-billed Blue Magpie Optimizer (MRBMO) to address the limitations of convergence efficiency and solution precision inherent in the original RBMO algorithm. The primary innovations of MRBMO include the design of a novel dynamic boundary constraint processing mechanism, the introduction of an elite guidance strategy during the predation stage, and the implementation of a Lévy Flight strategy to adjust the search step size. These synergistic modifications effectively reconcile the exploration–exploitation trade-off in metaheuristic optimization.
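The Lévy Flight step-size modulation summarized above is commonly generated with Mantegna's algorithm; the sketch below is a generic version of that scheme, where the stability index beta = 1.5 is an assumed, typical setting rather than necessarily the paper's:

```python
import math
import numpy as np

def levy_step(dim: int, beta: float = 1.5, rng=None) -> np.ndarray:
    """Levy-distributed step vector via Mantegna's algorithm."""
    rng = rng or np.random.default_rng()
    # Scale of the numerator Gaussian, derived from the stability index beta
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size=dim)
    v = rng.normal(0.0, 1.0, size=dim)
    return u / np.abs(v) ** (1 / beta)   # heavy-tailed: mostly small, occasionally large

step = levy_step(30, rng=np.random.default_rng(5))
```

The heavy-tailed steps yield many small local moves punctuated by occasional long jumps, which is what lets a Lévy-guided search balance exploitation with escapes from local optima.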
Comprehensive experimental evaluations using the standardized CEC2017 and CEC2022 benchmark suites demonstrate that MRBMO outperforms classical optimizers (SOGWO, TTHHO, PSO) and achieves competitive results against a state-of-the-art algorithm (LSHADE_SPACMA). Nonparametric statistical analyses (the Friedman rank-sum test and the Wilcoxon rank-sum test) validate MRBMO's superiority across multiple dimensional configurations (30D, 50D, 100D), confirming its algorithmic robustness. To systematically evaluate the contribution of each methodological enhancement, a comprehensive ablation study was conducted to quantify the individual effect of each proposed algorithmic component on overall performance. Furthermore, an experiment was carried out to assess the method's capacity for balancing exploration and exploitation, and a thorough analysis of the algorithm's population diversity characteristics was performed to evaluate its evolutionary dynamics.
To validate the performance of the proposed method in practical applications, MRBMO was applied to four engineering design optimization problems: extension/compression spring design, reducer design, welded beam design, and gear train design. The experimental results demonstrate that MRBMO not only achieves superior convergence accuracy and speed but also exhibits enhanced stability and robustness.

Author Contributions

Conceptualization, B.H. and X.W.; methodology, Z.G.; software, L.W.; validation, M.Y., X.W. and L.W.; formal analysis, M.Y.; investigation, Z.G.; resources, B.H.; data curation, M.Y.; writing—original draft preparation, M.Y.; writing—review and editing, L.W.; visualization, Z.G.; supervision, B.H.; project administration, X.W.; funding acquisition, B.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Test Function Suite Details and Results of the Rank Sum Test

Table A1. CEC2017 Test function details.

| Type | No. | Functions | Minimum Value |
| Unimodal functions | 1 | Shifted and Rotated Bent Cigar Function | 100 |
| | 2 | Shifted and Rotated Zakharov Function | 200 |
| Simple multimodal functions | 3 | Shifted and Rotated Rosenbrock's Function | 300 |
| | 4 | Shifted and Rotated Rastrigin's Function | 400 |
| | 5 | Shifted and Rotated Expanded Scaffer's F6 Function | 500 |
| | 6 | Shifted and Rotated Lunacek bi-Rastrigin Function | 600 |
| | 7 | Shifted and Rotated Non-Continuous Rastrigin's Function | 700 |
| | 8 | Shifted and Rotated Levy Function | 800 |
| | 9 | Shifted and Rotated Schwefel's Function | 900 |
| Hybrid functions | 10 | Hybrid Function 1 (N = 3) | 1000 |
| | 11 | Hybrid Function 2 (N = 3) | 1100 |
| | 12 | Hybrid Function 3 (N = 3) | 1200 |
| | 13 | Hybrid Function 4 (N = 4) | 1300 |
| | 14 | Hybrid Function 5 (N = 4) | 1400 |
| | 15 | Hybrid Function 6 (N = 4) | 1500 |
| | 16 | Hybrid Function 7 (N = 5) | 1600 |
| | 17 | Hybrid Function 8 (N = 5) | 1700 |
| | 18 | Hybrid Function 9 (N = 5) | 1800 |
| | 19 | Hybrid Function 10 (N = 6) | 1900 |
| Composition functions | 20 | Composition Function 1 (N = 3) | 2000 |
| | 21 | Composition Function 2 (N = 3) | 2100 |
| | 22 | Composition Function 3 (N = 4) | 2200 |
| | 23 | Composition Function 4 (N = 4) | 2300 |
| | 24 | Composition Function 5 (N = 5) | 2400 |
| | 25 | Composition Function 6 (N = 5) | 2500 |
| | 26 | Composition Function 7 (N = 6) | 2600 |
| | 27 | Composition Function 8 (N = 6) | 2700 |
| | 28 | Composition Function 9 (N = 3) | 2800 |
| | 29 | Composition Function 10 (N = 3) | 2900 |
Table A2. CEC2022 Test function details.

| Type | No. | Functions | Minimum Value |
| Unimodal functions | 1 | Shifted and full Rotated Zakharov Function | 300 |
| Basic functions | 2 | Shifted and full Rotated Rosenbrock's Function | 400 |
| | 3 | Shifted and full Rotated Expanded Schaffer's f6 Function | 600 |
| | 4 | Shifted and full Rotated Non-Continuous Rastrigin's Function | 800 |
| | 5 | Shifted and full Rotated Levy Function | 900 |
| Hybrid functions | 6 | Hybrid Function 1 (N = 3) | 1800 |
| | 7 | Hybrid Function 2 (N = 6) | 2000 |
| | 8 | Hybrid Function 3 (N = 5) | 2200 |
| Composition functions | 9 | Composition Function 1 (N = 5) | 2300 |
| | 10 | Composition Function 2 (N = 4) | 2400 |
| | 11 | Composition Function 3 (N = 5) | 2600 |
| | 12 | Composition Function 4 (N = 6) | 2700 |
Table A3. The results of Wilcoxon rank sum test p-value on CEC2017 when D = 30.

| Function | RBMO | SOGWO | RIME | SSA | LSHADE_SPACMA | PSO | TTHHO |
| F1 | 9.211E-05 | 3.020E-11 | 3.020E-11 | 5.692E-01 | 2.195E-08 | 3.020E-11 | 3.020E-11 |
| F2 | 2.499E-03 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 |
| F3 | 5.264E-04 | 6.722E-10 | 3.010E-07 | 3.042E-01 | 7.245E-02 | 1.430E-05 | 3.020E-11 |
| F4 | 2.838E-01 | 5.092E-08 | 3.474E-10 | 3.020E-11 | 8.564E-04 | 1.429E-08 | 3.020E-11 |
| F5 | 8.663E-05 | 3.338E-11 | 3.690E-11 | 3.020E-11 | 1.850E-08 | 3.020E-11 | 3.020E-11 |
| F6 | 2.416E-02 | 4.077E-11 | 6.066E-11 | 3.020E-11 | 3.965E-08 | 3.690E-11 | 3.020E-11 |
| F7 | 2.510E-02 | 1.254E-07 | 2.371E-10 | 3.020E-11 | 1.106E-04 | 6.722E-10 | 3.020E-11 |
| F8 | 1.873E-07 | 4.504E-11 | 3.020E-11 | 3.020E-11 | 2.371E-10 | 8.993E-11 | 3.020E-11 |
| F9 | 2.388E-04 | 6.669E-03 | 2.236E-02 | 7.119E-09 | 3.501E-03 | 3.778E-02 | 3.338E-11 |
| F10 | 1.174E-03 | 3.020E-11 | 8.993E-11 | 8.993E-11 | 2.879E-06 | 9.260E-09 | 3.020E-11 |
| F11 | 9.049E-02 | 3.020E-11 | 3.020E-11 | 7.389E-11 | 6.526E-07 | 3.020E-11 | 3.020E-11 |
| F12 | 7.483E-02 | 3.020E-11 | 3.020E-11 | 3.592E-05 | 8.315E-03 | 2.959E-05 | 3.020E-11 |
| F13 | 1.606E-06 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 4.077E-11 | 3.020E-11 | 3.020E-11 |
| F14 | 9.514E-06 | 3.020E-11 | 5.573E-10 | 5.494E-11 | 5.092E-08 | 5.072E-10 | 3.020E-11 |
| F15 | 7.288E-03 | 4.118E-06 | 3.352E-08 | 6.518E-09 | 4.744E-06 | 4.943E-05 | 3.690E-11 |
| F16 | 1.175E-04 | 3.368E-05 | 7.119E-09 | 1.957E-10 | 9.883E-03 | 6.010E-08 | 3.020E-11 |
| F17 | 3.403E-01 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.338E-11 | 3.020E-11 | 3.020E-11 |
| F18 | 4.639E-05 | 3.020E-11 | 3.020E-11 | 3.690E-11 | 4.444E-07 | 2.371E-10 | 3.020E-11 |
| F19 | 3.339E-03 | 1.529E-05 | 3.352E-08 | 2.669E-09 | 2.783E-07 | 1.988E-02 | 2.371E-10 |
| F20 | 2.772E-01 | 4.118E-06 | 3.825E-09 | 3.020E-11 | 2.052E-03 | 1.102E-08 | 5.573E-10 |
| F21 | 7.695E-08 | 7.380E-10 | 2.196E-07 | 2.154E-10 | 6.528E-08 | 1.174E-09 | 4.504E-11 |
| F22 | 4.353E-05 | 2.317E-06 | 3.820E-10 | 3.338E-11 | 1.297E-01 | 6.066E-11 | 3.020E-11 |
| F23 | 3.965E-08 | 2.572E-07 | 4.975E-11 | 3.020E-11 | 5.971E-05 | 3.020E-11 | 3.020E-11 |
| F24 | 1.174E-03 | 1.777E-10 | 3.197E-09 | 9.941E-01 | 1.302E-03 | 1.028E-06 | 3.020E-11 |
| F25 | 5.264E-04 | 6.526E-07 | 2.154E-10 | 5.092E-08 | 6.787E-02 | 5.561E-04 | 3.820E-10 |
| F26 | 1.765E-02 | 1.547E-09 | 9.533E-07 | 4.573E-09 | 4.639E-05 | 1.857E-09 | 3.020E-11 |
| F27 | 1.857E-09 | 3.020E-11 | 9.919E-11 | 3.034E-03 | 7.043E-07 | 9.756E-10 | 3.020E-11 |
| F28 | 3.835E-06 | 2.154E-06 | 3.820E-10 | 6.066E-11 | 6.520E-01 | 1.493E-04 | 3.020E-11 |
| F29 | 1.680E-03 | 3.020E-11 | 3.020E-11 | 9.334E-02 | 3.848E-03 | 7.599E-07 | 3.020E-11 |
Bold indicates p-values greater than 0.05 (no significant difference).
Table A4. The results of Wilcoxon rank sum test p-value on CEC2017 when D = 50.

| Function | RBMO | SOGWO | RIME | SSA | LSHADE_SPACMA | PSO | TTHHO |
| F1 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 |
| F2 | 9.049E-02 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 4.504E-11 | 3.020E-11 | 3.020E-11 |
| F3 | 1.698E-08 | 3.020E-11 | 4.975E-11 | 4.676E-02 | 2.610E-10 | 7.389E-11 | 3.020E-11 |
| F4 | 5.092E-08 | 5.494E-11 | 3.690E-11 | 3.020E-11 | 2.439E-09 | 6.066E-11 | 3.020E-11 |
| F5 | 1.407E-04 | 3.338E-11 | 3.020E-11 | 3.020E-11 | 3.324E-06 | 3.020E-11 | 3.020E-11 |
| F6 | 1.311E-08 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 5.494E-11 | 3.020E-11 | 3.020E-11 |
| F7 | 2.377E-07 | 2.610E-10 | 1.411E-09 | 3.020E-11 | 6.010E-08 | 3.690E-11 | 3.020E-11 |
| F8 | 2.133E-05 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 1.957E-10 | 3.020E-11 | 3.020E-11 |
| F9 | 9.833E-08 | 9.926E-02 | 2.236E-02 | 2.380E-03 | 7.695E-08 | 1.154E-01 | 3.690E-11 |
| F10 | 2.371E-10 | 3.020E-11 | 3.020E-11 | 7.119E-09 | 3.020E-11 | 3.020E-11 | 3.020E-11 |
| F11 | 4.573E-09 | 3.020E-11 | 3.020E-11 | 6.722E-10 | 1.287E-09 | 3.020E-11 | 3.020E-11 |
| F12 | 5.462E-06 | 3.020E-11 | 3.020E-11 | 3.497E-09 | 3.256E-07 | 1.777E-10 | 3.020E-11 |
| F13 | 2.126E-04 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 |
| F14 | 4.515E-02 | 3.020E-11 | 3.020E-11 | 7.739E-06 | 1.120E-01 | 7.483E-02 | 3.020E-11 |
| F15 | 6.203E-04 | 2.062E-01 | 7.043E-07 | 4.183E-09 | 1.175E-04 | 1.441E-02 | 3.020E-11 |
| F16 | 3.183E-03 | 1.809E-01 | 7.695E-08 | 1.102E-08 | 6.913E-04 | 6.377E-03 | 5.494E-11 |
| F17 | 6.414E-01 | 3.020E-11 | 3.020E-11 | 5.494E-11 | 1.337E-05 | 7.389E-11 | 3.020E-11 |
| F18 | 4.825E-01 | 3.020E-11 | 3.020E-11 | 3.953E-01 | 6.952E-01 | 1.988E-02 | 3.020E-11 |
| F19 | 2.499E-03 | 3.917E-02 | 6.765E-05 | 9.756E-10 | 3.159E-10 | 8.120E-04 | 2.227E-09 |
| F20 | 1.606E-06 | 3.820E-10 | 1.464E-10 | 3.020E-11 | 2.015E-08 | 3.690E-11 | 3.020E-11 |
| F21 | 3.825E-09 | 2.755E-03 | 4.676E-02 | 4.714E-04 | 1.254E-07 | 5.369E-02 | 3.020E-11 |
| F22 | 1.957E-10 | 2.371E-10 | 1.206E-10 | 3.020E-11 | 3.825E-09 | 3.020E-11 | 3.020E-11 |
| F23 | 1.070E-09 | 2.439E-09 | 2.154E-10 | 3.020E-11 | 7.773E-09 | 3.020E-11 | 3.020E-11 |
| F24 | 1.102E-08 | 3.020E-11 | 1.850E-08 | 4.427E-03 | 1.287E-09 | 8.891E-10 | 3.020E-11 |
| F25 | 1.529E-05 | 3.474E-10 | 5.494E-11 | 1.442E-03 | 6.518E-09 | 8.663E-05 | 3.020E-11 |
| F26 | 2.491E-06 | 4.077E-11 | 5.072E-10 | 2.371E-10 | 1.102E-08 | 2.154E-10 | 3.020E-11 |
| F27 | 8.993E-11 | 3.020E-11 | 2.610E-10 | 8.883E-06 | 1.957E-10 | 1.957E-10 | 3.020E-11 |
| F28 | 7.599E-07 | 3.159E-10 | 4.077E-11 | 5.494E-11 | 1.171E-02 | 2.669E-09 | 3.020E-11 |
| F29 | 6.528E-08 | 3.020E-11 | 3.020E-11 | 3.848E-03 | 1.777E-10 | 5.967E-09 | 3.020E-11 |
Bold indicates p-values greater than 0.05 (no significant difference).
Table A5. The results of Wilcoxon rank sum test p-value on CEC2017 when D = 100.

| Function | RBMO | SOGWO | RIME | SSA | LSHADE_SPACMA | PSO | TTHHO |
| F1 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 |
| F2 | 4.119E-01 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 9.833E-08 | 3.020E-11 | 6.696E-11 |
| F3 | 3.020E-11 | 3.020E-11 | 4.077E-11 | 1.857E-09 | 3.020E-11 | 3.020E-11 | 3.020E-11 |
| F4 | 1.202E-08 | 8.993E-11 | 8.993E-11 | 3.020E-11 | 1.464E-10 | 3.020E-11 | 3.020E-11 |
| F5 | 3.520E-07 | 6.696E-11 | 3.338E-11 | 3.020E-11 | 1.892E-04 | 3.020E-11 | 3.020E-11 |
| F6 | 3.197E-09 | 3.020E-11 | 3.690E-11 | 3.020E-11 | 3.338E-11 | 3.020E-11 | 3.020E-11 |
| F7 | 4.504E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 7.389E-11 | 3.020E-11 | 3.020E-11 |
| F8 | 3.256E-07 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 7.389E-11 | 3.020E-11 | 3.020E-11 |
| F9 | 1.957E-10 | 1.624E-01 | 1.383E-02 | 3.835E-06 | 7.389E-11 | 2.624E-03 | 3.020E-11 |
| F10 | 1.329E-10 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 |
| F11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 |
| F12 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 |
| F13 | 9.587E-01 | 3.020E-11 | 3.020E-11 | 2.610E-10 | 9.533E-07 | 4.077E-11 | 3.020E-11 |
| F14 | 3.831E-05 | 3.020E-11 | 3.020E-11 | 2.922E-09 | 4.200E-10 | 4.504E-11 | 3.020E-11 |
| F15 | 9.533E-07 | 8.841E-07 | 1.613E-10 | 8.663E-05 | 7.088E-08 | 8.663E-05 | 3.020E-11 |
| F16 | 1.492E-06 | 5.570E-03 | 1.311E-08 | 3.825E-09 | 3.835E-06 | 5.573E-10 | 3.020E-11 |
| F17 | 4.637E-03 | 4.077E-11 | 3.020E-11 | 3.010E-07 | 2.068E-02 | 3.020E-11 | 3.020E-11 |
| F18 | 5.555E-02 | 3.020E-11 | 3.020E-11 | 3.770E-04 | 1.206E-10 | 3.020E-11 | 3.020E-11 |
| F19 | 4.207E-02 | 1.988E-02 | 5.607E-05 | 2.390E-08 | 4.504E-11 | 5.607E-05 | 7.389E-11 |
| F20 | 3.338E-11 | 3.690E-11 | 3.338E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 |
| F21 | 3.646E-08 | 9.470E-01 | 3.147E-02 | 3.671E-03 | 4.077E-11 | 1.635E-05 | 3.020E-11 |
| F22 | 4.077E-11 | 1.957E-10 | 7.389E-11 | 3.020E-11 | 2.610E-10 | 3.020E-11 | 3.020E-11 |
| F23 | 3.020E-11 | 3.020E-11 | 3.690E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 |
| F24 | 3.020E-11 | 3.020E-11 | 4.504E-11 | 3.825E-09 | 3.020E-11 | 3.020E-11 | 3.020E-11 |
| F25 | 9.063E-08 | 1.411E-09 | 5.462E-09 | 9.833E-08 | 9.756E-10 | 2.371E-10 | 3.020E-11 |
| F26 | 2.254E-04 | 3.020E-11 | 3.338E-11 | 3.835E-06 | 4.975E-11 | 4.077E-11 | 3.020E-11 |
| F27 | 3.020E-11 | 3.020E-11 | 4.077E-11 | 1.429E-08 | 3.020E-11 | 4.077E-11 | 3.020E-11 |
| F28 | 5.600E-07 | 1.613E-10 | 3.020E-11 | 2.597E-05 | 7.599E-07 | 3.010E-07 | 3.020E-11 |
| F29 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 | 3.020E-11 |
Bold indicates p-values greater than 0.05 (no significant difference).

Appendix B. Engineering Application Design Issues

Appendix B.1. Extension/Compression Spring Problem

Minimize:
$$f(x) = (x_3 + 2)\, x_2 x_1^2$$
Subject to:
$$\begin{aligned}
g_1(x) &= 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0,\\
g_2(x) &= \frac{4 x_2^2 - x_1 x_2}{12566\,(x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0,\\
g_3(x) &= 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0,\\
g_4(x) &= \frac{x_1 + x_2}{1.5} - 1 \le 0,
\end{aligned}$$
with bounds $0.05 \le x_1 \le 2$, $0.25 \le x_2 \le 1.3$, $2 \le x_3 \le 15$.
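The model above can be evaluated programmatically. The sketch below pairs it with a static quadratic penalty, which is an assumed constraint-handling choice rather than the paper's documented one, and uses a near-optimal design frequently reported in the TCSD literature as a sanity check:

```python
import numpy as np

def tcsd_objective(x):
    """Spring weight f(x) = (x3 + 2) * x2 * x1^2 from Appendix B.1."""
    d, D, N = x          # wire diameter, mean coil diameter, active coils
    return (N + 2.0) * D * d ** 2

def tcsd_constraints(x):
    """The four g_i(x) <= 0 constraints of Appendix B.1."""
    d, D, N = x
    return np.array([
        1.0 - D ** 3 * N / (71785.0 * d ** 4),
        (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
        + 1.0 / (5108.0 * d ** 2) - 1.0,
        1.0 - 140.45 * d / (D ** 2 * N),
        (d + D) / 1.5 - 1.0,
    ])

def penalized_fitness(x, rho=1e6):
    """Objective plus a quadratic penalty on constraint violations."""
    g = tcsd_constraints(x)
    return tcsd_objective(x) + rho * float(np.sum(np.maximum(g, 0.0) ** 2))

# A near-optimal design frequently reported for this problem (approximate)
x_good = np.array([0.0517, 0.3567, 11.289])
```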

Appendix B.2. Reducer Design Problem

Minimize:
$$f(x) = 0.7854 x_1 x_2^2 \left(3.3333 x_3^2 + 14.9334 x_3 - 43.0934\right) - 1.508 x_1 \left(x_6^2 + x_7^2\right) + 7.4777 \left(x_6^3 + x_7^3\right)$$
Subject to:
$$\begin{aligned}
g_1(x) &= \frac{27}{x_1 x_2^2 x_3} - 1 \le 0, & g_2(x) &= \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0,\\
g_3(x) &= \frac{1.93 x_4^3}{x_2 x_3 x_6^4} - 1 \le 0, & g_4(x) &= \frac{1.93 x_5^3}{x_2 x_3 x_7^4} - 1 \le 0,\\
g_5(x) &= \frac{\sqrt{\left(745 x_4 / (x_2 x_3)\right)^2 + 16.9 \times 10^6}}{110.0\, x_6^3} - 1 \le 0, & g_6(x) &= \frac{\sqrt{\left(745 x_5 / (x_2 x_3)\right)^2 + 157.5 \times 10^6}}{85.0\, x_7^3} - 1 \le 0,\\
g_7(x) &= \frac{x_2 x_3}{40} - 1 \le 0, & g_8(x) &= \frac{5 x_2}{x_1} - 1 \le 0,\\
g_9(x) &= \frac{x_1}{12 x_2} - 1 \le 0, & g_{10}(x) &= \frac{1.5 x_6 + 1.9}{x_4} - 1 \le 0,\\
g_{11}(x) &= \frac{1.1 x_7 + 1.9}{x_5} - 1 \le 0,
\end{aligned}$$
with bounds $2.6 \le x_1 \le 3.6$, $0.7 \le x_2 \le 0.8$, $17 \le x_3 \le 28$, $7.3 \le x_4 \le 8.3$, $7.8 \le x_5 \le 8.3$, $2.9 \le x_6 \le 3.9$, $5.0 \le x_7 \le 5.5$.

Appendix B.3. Welded Beam Problem

Minimize:
$$f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14.0 + x_2)$$
Subject to:
$$\begin{aligned}
g_1(x) &= \tau(x) - \tau_{\max} \le 0, & g_2(x) &= \sigma(x) - \sigma_{\max} \le 0,\\
g_3(x) &= \delta(x) - \delta_{\max} \le 0, & g_4(x) &= x_1 - x_4 \le 0,\\
g_5(x) &= P - P_c(x) \le 0, & g_6(x) &= 0.125 - x_1 \le 0,\\
g_7(x) &= 1.10471 x_1^2 + 0.04811 x_3 x_4 (14.0 + x_2) - 5.0 \le 0,
\end{aligned}$$
with $0.1 \le x_1, x_4 \le 2$ and $0.1 \le x_2, x_3 \le 10$.
where:
$$\begin{aligned}
\tau(x) &= \sqrt{(\tau')^2 + 2 \tau' \tau'' \frac{x_2}{2R} + (\tau'')^2}, & \tau' &= \frac{P}{\sqrt{2}\, x_1 x_2}, & \tau'' &= \frac{M R}{J},\\
M &= P \left(L + \frac{x_2}{2}\right), & R &= \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2},\\
J &= 2 \left\{ \sqrt{2}\, x_1 x_2 \left[ \frac{x_2^2}{12} + \left(\frac{x_1 + x_3}{2}\right)^2 \right] \right\}, & \sigma(x) &= \frac{6 P L}{x_4 x_3^2}, & \delta(x) &= \frac{4 P L^3}{E x_3^3 x_4},\\
P_c(x) &= \frac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2} \left(1 - \frac{x_3}{2L} \sqrt{\frac{E}{4G}}\right),
\end{aligned}$$
with $P = 6000\ \mathrm{lb}$, $L = 14\ \mathrm{in}$, $\delta_{\max} = 0.25\ \mathrm{in}$, $E = 30 \times 10^6\ \mathrm{psi}$, $G = 12 \times 10^6\ \mathrm{psi}$, $\tau_{\max} = 13600\ \mathrm{psi}$, $\sigma_{\max} = 30000\ \mathrm{psi}$.

Appendix B.4. Gear Train Design Problem

Minimize:
$$f(x) = \left(\frac{1}{6.931} - \frac{x_2 x_3}{x_1 x_4}\right)^2$$
Subject to: $12 \le x_i \le 60,\ i = 1, 2, 3, 4$.
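Because all four variables are integer tooth counts between 12 and 60, the model is trivial to evaluate directly. The sketch below checks a best-known solution commonly reported in the gear train literature; the particular ordering (49, 16, 19, 43) is an assumption, since the symmetric permutations achieve the same error:

```python
def gear_ratio_error(x):
    """f(x) = (1/6.931 - x2*x3/(x1*x4))^2 from Appendix B.4."""
    x1, x2, x3, x4 = x
    return (1.0 / 6.931 - (x2 * x3) / (x1 * x4)) ** 2

# A best-known solution widely reported in the literature (assumed ordering)
best_known = (49, 16, 19, 43)
error = gear_ratio_error(best_known)   # on the order of 1e-12
```

With only 49^4 (about 5.76 million) integer combinations, the problem can even be solved exhaustively, which makes it a convenient sanity check for any metaheuristic.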

References

  1. Tang, J.; Liu, G.; Pan, Q. A review on representative swarm intelligence algorithms for solving optimization problems: Applications and trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643.
  2. Nadimi-Shahraki, M.H.; Taghian, S.; Javaheri, D.; Sadiq, A.S.; Khodadadi, N.; Mirjalili, S. MTV-SCA: Multi-trial vector-based sine cosine algorithm. Clust. Comput. 2024, 27, 13471–13515.
  3. Ming, F.; Gong, W.; Li, D.; Wang, L.; Gao, L. A competitive and cooperative swarm optimizer for constrained multiobjective optimization problems. IEEE Trans. Evol. Comput. 2022, 27, 1313–1326.
  4. Gharehchopogh, F.S.; Gholizadeh, H. A comprehensive survey: Whale Optimization Algorithm and its applications. Swarm Evol. Comput. 2019, 48, 1–24.
  5. Chakraborty, S.; Saha, A.K.; Sharma, S.; Mirjalili, S.; Chakraborty, R. A novel enhanced whale optimization algorithm for global optimization. Comput. Ind. Eng. 2021, 153, 107086.
  6. Wang, J.; Lin, Y.; Liu, R.; Fu, J. Odor source localization of multi-robots with swarm intelligence algorithms: A review. Front. Neurorobot. 2022, 16, 949888.
  7. Jin, Y.; Wang, H.; Chugh, T.; Guo, D.; Miettinen, K. Data-driven evolutionary optimization: An overview and case studies. IEEE Trans. Evol. Comput. 2018, 23, 442–458.
  8. Ji, X.F.; Zhang, Y.; He, C.L.; Cheng, J.X.; Gong, D.W.; Gao, X.Z.; Guo, Y.N. Surrogate and autoencoder-assisted multitask particle swarm optimization for high-dimensional expensive multimodal problems. IEEE Trans. Evol. Comput. 2023, 28, 1009–1023.
  9. Bouaouda, A.; Sayouti, Y. Hybrid meta-heuristic algorithms for optimal sizing of hybrid renewable energy system: A review of the state-of-the-art. Arch. Comput. Methods Eng. 2022, 29, 4049–4083.
  10. Vinod Chandra, S.S.; Anand, H.S. Nature inspired meta heuristic algorithms for optimization problems. Computing 2022, 104, 251–269.
  11. Dokeroglu, T.; Sevinc, E.; Kucukyilmaz, T.; Cosar, A. A survey on new generation metaheuristic algorithms. Comput. Ind. Eng. 2019, 137, 106040.
  12. Ma, J.; Hao, Z.; Sun, W. Enhancing sparrow search algorithm via multi-strategies for continuous optimization problems. Inf. Process. Manag. 2022, 59, 102854.
  13. Gad, A.G. Particle swarm optimization algorithm and its applications: A systematic review. Arch. Comput. Methods Eng. 2022, 29, 2531–2561.
  14. Slowik, A.; Kwasnicka, H. Nature inspired methods and their industry applications—Swarm intelligence algorithms. IEEE Trans. Ind. Inform. 2017, 14, 1004–1015.
  15. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
  16. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2007, 1, 28–39.
  17. Yang, X.S. Nature-Inspired Metaheuristic Algorithms; Luniver Press: Frome, UK, 2010.
  18. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  19. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  20. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34.
  21. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
  22. Zhu, F.; Li, G.; Tang, H.; Li, Y.; Lv, X.; Wang, X. Dung beetle optimization algorithm based on quantum computing and multi-strategy fusion for solving engineering problems. Expert Syst. Appl. 2024, 236, 121219.
  23. Shen, Y.; Zhang, C.; Gharehchopogh, F.S.; Mirjalili, S. An improved whale optimization algorithm based on multi-population evolution for global optimization and engineering design problems. Expert Syst. Appl. 2023, 215, 119269. [Google Scholar] [CrossRef]
  24. Zhang, Y.; Chi, A.; Mirjalili, S. Enhanced Jaya algorithm: A simple but efficient optimization method for constrained engineering design problems. Knowl.-Based Syst. 2021, 233, 107555. [Google Scholar] [CrossRef]
  25. Dhargupta, S.; Ghosh, M.; Mirjalili, S.; Sarkar, R. Selective opposition based grey wolf optimization. Expert Syst. Appl. 2020, 151, 113389. [Google Scholar] [CrossRef]
  26. Liang, S.; Yin, M.; Sun, G.; Li, J.; Li, H.; Lang, Q. An enhanced sparrow search swarm optimizer via multi-strategies for high-dimensional optimization problems. Swarm Evol. Comput. 2024, 88, 101603. [Google Scholar] [CrossRef]
  27. Fu, S.; Li, K.; Huang, H.; Ma, C.; Fan, Q.; Zhu, Y. Red-billed blue magpie optimizer: A novel metaheuristic algorithm for 2D/3D UAV path planning and engineering design problems. Artif. Intell. Rev. 2024, 57, 1–89. [Google Scholar] [CrossRef]
  28. Črepinšek, M.; Liu, S.H.; Mernik, M. Exploration and exploitation in evolutionary algorithms: A survey. ACM Comput. Surv. (CSUR) 2013, 45, 1–33. [Google Scholar] [CrossRef]
  29. Jia, H.; Lu, C. Guided learning strategy: A novel update mechanism for metaheuristic algorithms design and improvement. Knowl.-Based Syst. 2024, 286, 111402. [Google Scholar] [CrossRef]
  30. Gandomi, A.H.; Yang, X.S. Evolutionary boundary constraint handling scheme. Neural Comput. Appl. 2012, 21, 1449–1462. [Google Scholar] [CrossRef]
  31. Seyyedabbasi, A. WOASCALF: A new hybrid whale optimization algorithm based on sine cosine algorithm and levy flight to solve global optimization problems. Adv. Eng. Softw. 2022, 173, 103272. [Google Scholar] [CrossRef]
  32. Wu, L.; Wu, J.; Wang, T. Enhancing grasshopper optimization algorithm (GOA) with levy flight for engineering applications. Sci. Rep. 2023, 13, 124. [Google Scholar] [CrossRef]
  33. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; Technical Report; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Republic of Korea; Nanyang Technological University: Singapore, 2017. [Google Scholar]
  34. Yazdani, D.; Branke, J.; Omidvar, M.N.; Li, X.; Li, C.; Mavrovouniotis, M.; Nguyen, T.T.; Yang, S.; Yao, X. IEEE CEC 2022 competition on dynamic optimization problems generated by generalized moving peaks benchmark. arXiv 2021, arXiv:2106.06174. [Google Scholar]
  35. Abdulrab, H.; Hussin, F.A.; Ismail, I.; Assad, M.; Awang, A.; Shutari, H.; Arun, D. Energy efficient optimal deployment of industrial wireless mesh networks using transient trigonometric Harris Hawks optimizer. Heliyon 2024, 10, e28719. [Google Scholar] [CrossRef] [PubMed]
  36. Su, H.; Zhao, D.; Heidari, A.A.; Liu, L.; Zhang, X.; Mafarja, M.; Chen, H. RIME: A physics-based optimization. Neurocomputing 2023, 532, 183–214. [Google Scholar] [CrossRef]
  37. Mohamed, A.W.; Hadi, A.A.; Fattouh, A.M.; Jambi, K.M. LSHADE with semi-parameter adaptation hybrid with CMA-ES for solving CEC 2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia, Spain, 5–8 June 2017; pp. 145–152. [Google Scholar]
  38. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  39. Nadimi-Shahraki, M.H.; Zamani, H. DMDE: Diversity-maintained multi-trial vector differential evolution algorithm for non-decomposition large-scale global optimization. Expert Syst. Appl. 2022, 198, 116895. [Google Scholar] [CrossRef]
  40. Hussain, K.; Salleh, M.N.M.; Cheng, S.; Shi, Y. On the exploration and exploitation in popular swarm-based metaheuristic algorithms. Neural Comput. Appl. 2019, 31, 7665–7683. [Google Scholar] [CrossRef]
  41. Hu, G.; Huang, F.; Seyyedabbasi, A.; Wei, G. Enhanced multi-strategy bottlenose dolphin optimizer for UAVs path planning. Appl. Math. Model. 2024, 130, 243–271. [Google Scholar] [CrossRef]
  42. Ahmadianfar, I.; Bozorg-Haddad, O.; Chu, X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Inf. Sci. 2020, 540, 131–159. [Google Scholar] [CrossRef]
  43. Zhou, L.; Liu, X.; Tian, R.; Wang, W.; Jin, G. A multi-strategy enhanced reptile search algorithm for global optimization and engineering optimization design problems. Clust. Comput. 2025, 28, 1–41. [Google Scholar] [CrossRef]
  44. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  45. Chen, P.; Zhou, S.; Zhang, Q.; Kasabov, N. A meta-inspired termite queen algorithm for global optimization and engineering design problems. Eng. Appl. Artif. Intell. 2022, 111, 104805. [Google Scholar] [CrossRef]
Figure 1. Lévy Flight Random Walk.
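Figure 1 depicts the Lévy flight random walk that MRBMO uses to modulate search step sizes during the predation phase. A minimal sketch of drawing Lévy-distributed steps via Mantegna's algorithm, a common construction for such steps (the function name and default stability index β = 1.5 are illustrative assumptions, not taken from the paper):

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Lévy-distributed step via Mantegna's algorithm (illustrative).

    beta is the stability index (1 < beta <= 2); sigma_u scales the
    numerator Gaussian so that u / |v|^(1/beta) follows a Lévy law.
    """
    sigma_u = (
        math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
        / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
    ) ** (1 / beta)
    u = rng.gauss(0, sigma_u)  # heavy-tailed numerator sample
    v = rng.gauss(0, 1)        # denominator sample
    return u / abs(v) ** (1 / beta)
```

The heavy tail yields occasional long jumps that aid global exploration, while the bulk of small steps supports local exploitation, which is the balance the random walk in Figure 1 illustrates.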
Figure 2. Radar charts of all algorithms.
Figure 3. Results of the Friedman rank-sum test on CEC2017.
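The Friedman test summarized in Figure 3 aggregates per-function average ranks of the competing algorithms (cf. the "Mean Rank" rows of Tables 1 and 2). A hand-rolled sketch of that ranking step, not the paper's implementation (`mean_ranks` is a hypothetical helper name):

```python
def mean_ranks(scores):
    """Average rank of each algorithm across functions (lower is better).

    scores[f][a] is algorithm a's result on function f (minimization);
    ties receive the average of the tied rank positions, as in the
    Friedman test.
    """
    n_funcs, n_algs = len(scores), len(scores[0])
    totals = [0.0] * n_algs
    for row in scores:
        order = sorted(range(n_algs), key=lambda a: row[a])
        ranks = [0.0] * n_algs
        i = 0
        while i < n_algs:
            j = i
            # extend j over a run of tied values
            while j + 1 < n_algs and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average of tied positions, 1-based
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        for a in range(n_algs):
            totals[a] += ranks[a]
    return [t / n_funcs for t in totals]
```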
Figure 4. Convergence curves of F1–F15.
Figure 5. Convergence curves of F16–F29.
Figure 6. Population diversity analysis.
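Figure 6 tracks population diversity over iterations. One common diversity measure for swarm algorithms is the mean Euclidean distance of individuals to the population centroid; the sketch below assumes that definition, which may differ from the one the paper uses:

```python
import math

def diversity(population):
    """Mean Euclidean distance from each individual to the centroid.

    population is a list of equal-length coordinate lists; a value of 0
    means the swarm has collapsed to a single point.
    """
    n, dim = len(population), len(population[0])
    centroid = [sum(ind[d] for ind in population) / n for d in range(dim)]
    return sum(
        math.sqrt(sum((ind[d] - centroid[d]) ** 2 for d in range(dim)))
        for ind in population
    ) / n
```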
Figure 7. Exploration and exploitation evaluation.
Figure 8. Convergence curves and box plots for TCSD.
Figure 9. Structure diagram of RDP.
Figure 10. Convergence curves and box plots for RDP.
Figure 11. Structure diagram of WBD.
Figure 12. Convergence curves and box plots for WBD.
Figure 13. Structure diagram of GTD.
Figure 14. Convergence curves and box plots for GTD.
Table 1. Influence of MRBMO's parameters (ϵ and β) on CEC2022.

Function | Index | Scenario1 | Scenario2 | Scenario3 | Scenario4 | Scenario5 | Scenario6 | Scenario7 | Scenario8 | Scenario9
Parameter | ϵ | 0.25 | 0.25 | 0.25 | 0.5 | 0.5 | 0.5 | 0.75 | 0.75 | 0.75
 | β | 0.5 | 1 | 1.5 | 0.5 | 1 | 1.5 | 0.5 | 1 | 1.5
F1 | Avg | 300.40 | 300.34 | 300.38 | 300.23 | 300.18 | 300.19 | 300.12 | 300.22 | 300.19
 | Rank | 9 | 7 | 8 | 6 | 2 | 4 | 1 | 5 | 3
F2 | Avg | 445.55 | 446.79 | 444.69 | 450.30 | 446.58 | 451.88 | 451.47 | 444.68 | 442.75
 | Rank | 4 | 6 | 3 | 7 | 5 | 9 | 8 | 2 | 1
F3 | Avg | 600.11 | 600.05 | 600.11 | 600.07 | 600.15 | 600.15 | 600.04 | 600.08 | 600.09
 | Rank | 6 | 2 | 7 | 3 | 8 | 9 | 1 | 4 | 5
F4 | Avg | 831.93 | 831.00 | 827.69 | 833.18 | 832.41 | 832.13 | 829.29 | 830.20 | 830.56
 | Rank | 6 | 5 | 1 | 9 | 8 | 7 | 2 | 3 | 4
F5 | Avg | 903.62 | 905.88 | 902.87 | 903.31 | 903.23 | 902.43 | 902.19 | 903.56 | 901.78
 | Rank | 8 | 9 | 4 | 6 | 5 | 3 | 2 | 7 | 1
F6 | Avg | 2792.79 | 2563.47 | 3744.63 | 2305.86 | 2121.38 | 2210.58 | 2376.10 | 2329.08 | 2172.18
 | Rank | 8 | 7 | 9 | 4 | 1 | 3 | 6 | 5 | 2
F7 | Avg | 2048.98 | 2041.51 | 2048.92 | 2047.49 | 2048.30 | 2047.05 | 2043.88 | 2043.35 | 2041.57
 | Rank | 9 | 1 | 8 | 6 | 7 | 5 | 4 | 3 | 2
F8 | Avg | 2250.17 | 2250.40 | 2238.80 | 2233.92 | 2240.76 | 2229.68 | 2247.15 | 2233.63 | 2237.16
 | Rank | 8 | 9 | 5 | 3 | 6 | 1 | 7 | 2 | 4
F9 | Avg | 2480.78 | 2480.78 | 2480.78 | 2480.78 | 2480.78 | 2480.78 | 2480.78 | 2480.78 | 2480.78
 | Rank | 3 | 4 | 1 | 7 | 8 | 6 | 2 | 5 | 9
F10 | Avg | 2843.70 | 2903.50 | 2660.81 | 2905.06 | 2911.99 | 2848.77 | 2894.23 | 2898.44 | 2937.29
 | Rank | 2 | 6 | 1 | 7 | 8 | 3 | 4 | 5 | 9
F11 | Avg | 2900.00 | 2900.00 | 2900.00 | 2900.00 | 2900.00 | 2900.00 | 2900.00 | 2900.00 | 2900.00
 | Rank | 5 | 4 | 8 | 2 | 1 | 7 | 3 | 6 | 9
F12 | Avg | 2956.75 | 2957.06 | 2963.30 | 2953.34 | 2956.41 | 2952.00 | 2952.82 | 2955.63 | 2956.05
 | Rank | 7 | 8 | 9 | 3 | 6 | 1 | 2 | 4 | 5
Mean Rank | | 6.3 | 5.7 | 5.3 | 5.3 | 5.4 | 4.8 | 3.5 | 4.3 | 4.5
Table 2. Ablation experiments on CEC2022.

Function | Index | RBMO | RBMO1 | RBMO2 | MRBMO
F1 | Min | 3.0099E+02 | 3.0004E+02 | 3.0000E+02 | 3.0000E+02
 | Avg | 3.0562E+02 | 3.0204E+02 | 3.0030E+02 | 3.0014E+02
 | Std | 3.6600E+00 | 4.4658E+00 | 4.4298E-01 | 2.8318E-01
 | Rank | 4 | 3 | 2 | 1
F2 | Min | 4.3081E+02 | 4.0027E+02 | 4.4490E+02 | 4.0584E+02
 | Avg | 4.5500E+02 | 4.4119E+02 | 4.5190E+02 | 4.5086E+02
 | Std | 1.2485E+01 | 2.0785E+01 | 8.2075E+00 | 1.1773E+01
 | Rank | 4 | 1 | 3 | 2
F3 | Min | 6.0001E+02 | 6.0000E+02 | 6.0000E+02 | 6.0000E+02
 | Avg | 6.0031E+02 | 6.0066E+02 | 6.0009E+02 | 6.0003E+02
 | Std | 6.0572E-01 | 1.7808E+00 | 1.6496E-01 | 2.4421E-02
 | Rank | 3 | 4 | 2 | 1
F4 | Min | 8.1238E+02 | 8.2190E+02 | 8.1592E+02 | 8.1691E+02
 | Avg | 8.3101E+02 | 8.3488E+02 | 8.2683E+02 | 8.3161E+02
 | Std | 9.3417E+00 | 1.0593E+01 | 9.1196E+00 | 8.9787E+00
 | Rank | 2 | 4 | 1 | 3
F5 | Min | 9.0054E+02 | 9.0054E+02 | 9.0000E+02 | 9.0000E+02
 | Avg | 9.0578E+02 | 9.0401E+02 | 9.0316E+02 | 9.0330E+02
 | Std | 5.5217E+00 | 4.9538E+00 | 5.7774E+00 | 3.6224E+00
 | Rank | 4 | 3 | 1 | 2
F6 | Min | 1.8573E+03 | 1.8534E+03 | 1.8243E+03 | 1.8465E+03
 | Avg | 2.4147E+03 | 3.4106E+03 | 2.4491E+03 | 2.1821E+03
 | Std | 2.2683E+03 | 3.2690E+03 | 2.1614E+03 | 9.7115E+02
 | Rank | 2 | 4 | 3 | 1
F7 | Min | 2.0263E+03 | 2.0260E+03 | 2.0241E+03 | 2.0251E+03
 | Avg | 2.0523E+03 | 2.0499E+03 | 2.0428E+03 | 2.0398E+03
 | Std | 1.7996E+01 | 2.2585E+01 | 1.5680E+01 | 1.1262E+01
 | Rank | 4 | 3 | 2 | 1
F8 | Min | 2.2241E+03 | 2.2246E+03 | 2.2214E+03 | 2.2217E+03
 | Avg | 2.2462E+03 | 2.2376E+03 | 2.2297E+03 | 2.2300E+03
 | Std | 4.0668E+01 | 3.0494E+01 | 1.8648E+01 | 2.2493E+01
 | Rank | 4 | 3 | 1 | 2
F9 | Min | 2.4808E+03 | 2.4808E+03 | 2.4808E+03 | 2.4808E+03
 | Avg | 2.4808E+03 | 2.4808E+03 | 2.4808E+03 | 2.4808E+03
 | Std | 2.0010E-04 | 1.6344E-08 | 2.4676E-10 | 2.6702E-09
 | Rank | 4 | 3 | 1 | 2
F10 | Min | 2.5004E+03 | 2.5003E+03 | 2.5004E+03 | 2.5004E+03
 | Avg | 3.4839E+03 | 3.0681E+03 | 3.1715E+03 | 2.9806E+03
 | Std | 8.7540E+02 | 4.9379E+02 | 6.9863E+02 | 4.8419E+02
 | Rank | 4 | 2 | 3 | 1
F11 | Min | 2.9000E+03 | 2.9000E+03 | 2.9000E+03 | 2.9000E+03
 | Avg | 2.9001E+03 | 2.9000E+03 | 2.9000E+03 | 2.9000E+03
 | Std | 4.9301E-02 | 4.9872E-08 | 1.3116E-09 | 6.1406E-10
 | Rank | 4 | 3 | 2 | 1
F12 | Min | 2.9397E+03 | 2.9346E+03 | 2.9357E+03 | 2.9328E+03
 | Avg | 2.9648E+03 | 2.9786E+03 | 2.9550E+03 | 2.9547E+03
 | Std | 2.3877E+01 | 5.0664E+01 | 1.4986E+01 | 1.5355E+01
 | Rank | 3 | 4 | 2 | 1
Mean rank | | 3.50 | 3.08 | 1.92 | 1.50
Table 3. Statistical results for 30D. Entries are reported as value(rank).

Function | Index | MRBMO | RBMO | SOGWO | RIME | SSA | LSHADE_SPACMA | PSO | TTHHO
F1Avg(Rank)3.85E+03(1)1.76E+04(3)5.76E+08(6)3.81E+06(5)5.56E+03(2)9.24E+04(4)5.78E+08(7)2.04E+09(8)
Std(Rank)4.55E+03(1)2.65E+04(3)7.32E+08(6)1.62E+06(5)6.98E+03(2)1.82E+05(4)9.89E+08(8)9.46E+08(7)
F2Avg(Rank)1.12E+03(1)1.66E+03(2)6.09E+04(5)4.79E+04(4)4.56E+04(3)7.51E+04(8)6.77E+04(7)6.51E+04(6)
Std(Rank)5.35E+02(1)7.69E+02(2)1.18E+04(5)1.46E+04(6)7.25E+03(4)6.05E+04(8)2.62E+04(7)6.74E+03(3)
F3Avg(Rank)4.91E+02(1)5.13E+02(4)5.64E+02(6)5.40E+02(5)5.01E+02(3)4.99E+02(2)6.75E+02(7)1.04E+03(8)
Std(Rank)2.63E+01(2)2.72E+01(3)3.45E+01(5)5.88E+01(6)3.05E+01(4)2.36E+01(1)3.17E+02(8)2.93E+02(7)
F4Avg(Rank)5.62E+02(1)5.66E+02(2)6.05E+02(4)6.07E+02(5)7.30E+02(7)5.80E+02(3)6.21E+02(6)7.80E+02(8)
Std(Rank)1.88E+01(2)1.62E+01(1)3.33E+01(6)2.58E+01(4)4.66E+01(8)1.95E+01(3)3.64E+01(7)3.22E+01(5)
F5Avg(Rank)6.01E+02(1)6.02E+02(2)6.08E+02(4)6.15E+02(6)6.48E+02(7)6.02E+02(3)6.12E+02(5)6.70E+02(8)
Std(Rank)4.61E-01(1)2.08E+00(3)3.62E+00(4)5.60E+00(5)1.09E+01(8)1.23E+00(2)6.21E+00(7)6.21E+00(6)
F6Avg(Rank)7.90E+02(1)8.02E+02(2)8.79E+02(5)8.78E+02(4)1.23E+03(7)8.32E+02(3)8.91E+02(6)1.32E+03(8)
Std(Rank)1.63E+01(1)2.21E+01(2)5.62E+01(6)3.52E+01(4)1.03E+02(8)3.05E+01(3)4.66E+01(5)6.62E+01(7)
F7Avg(Rank)8.59E+02(1)8.70E+02(2)9.05E+02(4)9.18E+02(6)9.73E+02(7)8.78E+02(3)9.12E+02(5)9.94E+02(8)
Std(Rank)1.58E+01(1)2.15E+01(3)4.63E+01(8)2.41E+01(5)3.62E+01(7)1.99E+01(2)2.65E+01(6)2.38E+01(4)
F8Avg(Rank)9.32E+02(1)1.01E+03(2)2.20E+03(4)3.23E+03(6)5.26E+03(7)1.23E+03(3)2.71E+03(5)8.60E+03(8)
Std(Rank)3.56E+01(1)1.17E+02(2)1.47E+03(6)1.81E+03(7)4.22E+02(4)2.61E+02(3)1.81E+03(8)1.34E+03(5)
F9Avg(Rank)4.25E+03(1)4.91E+03(5)5.00E+03(6)4.79E+03(3)5.39E+03(7)4.85E+03(4)4.65E+03(2)7.29E+03(8)
Std(Rank)5.95E+02(3)5.61E+02(2)1.15E+03(8)6.95E+02(4)5.35E+02(1)8.18E+02(6)7.00E+02(5)9.63E+02(7)
F10Avg(Rank)1.18E+03(1)1.22E+03(2)1.89E+03(7)1.34E+03(6)1.32E+03(5)1.26E+03(3)1.28E+03(4)2.26E+03(8)
Std(Rank)3.73E+01(1)3.76E+01(2)7.98E+02(8)6.46E+01(6)6.09E+01(5)5.57E+01(4)5.38E+01(3)3.71E+02(7)
F11Avg(Rank)6.83E+04(1)9.78E+04(2)7.01E+07(6)1.85E+07(5)1.51E+06(4)3.66E+05(3)1.04E+08(7)2.42E+08(8)
Std(Rank)6.41E+04(1)9.22E+04(2)9.97E+07(6)2.12E+07(5)1.19E+06(4)4.02E+05(3)2.77E+08(8)2.14E+08(7)
F12Avg(Rank)1.11E+04(1)1.84E+04(3)7.00E+06(7)2.71E+05(5)3.36E+04(4)1.44E+04(2)7.35E+06(8)2.07E+06(6)
Std(Rank)1.34E+04(2)2.05E+04(3)1.52E+07(7)4.25E+05(5)2.50E+04(4)9.27E+03(1)2.18E+07(8)1.16E+06(6)
F13Avg(Rank)1.48E+03(1)1.51E+03(2)5.36E+05(7)1.07E+05(6)5.42E+04(4)1.09E+04(3)5.46E+04(5)1.09E+06(8)
Std(Rank)1.64E+01(1)2.12E+01(2)8.33E+05(8)8.33E+04(6)5.24E+04(5)5.04E+04(3)5.23E+04(4)6.94E+05(7)
F14Avg(Rank)1.88E+03(1)2.23E+03(2)2.23E+05(8)1.72E+04(6)9.09E+03(4)2.55E+03(3)1.02E+04(5)1.86E+05(7)
Std(Rank)3.06E+02(1)5.09E+02(2)5.09E+05(8)1.05E+04(5)8.67E+03(4)6.67E+02(3)1.17E+04(6)1.02E+05(7)
F15Avg(Rank)2.26E+03(1)2.48E+03(2)2.69E+03(5)2.86E+03(6)2.91E+03(7)2.63E+03(3)2.64E+03(4)3.95E+03(8)
Std(Rank)2.45E+02(1)3.34E+02(5)3.58E+02(7)2.67E+02(2)3.11E+02(4)2.76E+02(3)3.57E+02(6)5.56E+02(8)
F16Avg(Rank)1.90E+03(1)2.04E+03(3)2.07E+03(4)2.20E+03(6)2.50E+03(7)1.99E+03(2)2.17E+03(5)2.78E+03(8)
Std(Rank)9.31E+01(1)1.53E+02(3)1.73E+02(4)2.31E+02(6)2.77E+02(7)1.46E+02(2)1.88E+02(5)2.82E+02(8)
F17Avg(Rank)2.56E+03(2)2.42E+03(1)1.67E+06(6)1.30E+06(5)5.13E+05(4)3.64E+04(3)1.69E+06(7)1.17E+07(8)
Std(Rank)1.27E+03(2)3.93E+02(1)2.01E+06(7)1.51E+06(5)7.25E+05(4)1.96E+04(3)1.94E+06(6)1.59E+07(8)
F18Avg(Rank)2.00E+03(1)4.23E+03(3)2.12E+06(7)1.76E+04(5)9.91E+03(4)2.14E+03(2)9.27E+04(6)3.79E+06(8)
Std(Rank)5.85E+01(1)8.74E+03(3)7.87E+06(8)1.30E+04(4)1.37E+04(5)1.46E+02(2)2.55E+05(6)2.98E+06(7)
F19Avg(Rank)2.29E+03(1)2.38E+03(2)2.48E+03(4)2.51E+03(6)2.77E+03(7)2.50E+03(5)2.44E+03(3)2.90E+03(8)
Std(Rank)1.42E+02(3)1.27E+02(2)1.46E+02(4)1.94E+02(5)2.31E+02(7)1.02E+02(1)2.42E+02(8)2.19E+02(6)
F20Avg(Rank)2.36E+03(1)2.37E+03(2)2.40E+03(4)2.43E+03(6)2.52E+03(7)2.38E+03(3)2.41E+03(5)2.60E+03(8)
Std(Rank)2.10E+01(2)1.88E+01(1)3.03E+01(5)3.19E+01(6)5.15E+01(7)2.80E+01(4)2.66E+01(3)7.38E+01(8)
F21Avg(Rank)2.55E+03(2)4.45E+03(3)5.65E+03(6)5.03E+03(5)6.47E+03(7)2.31E+03(1)4.64E+03(4)7.93E+03(8)
Std(Rank)9.30E+02(2)2.11E+03(7)2.19E+03(8)1.86E+03(5)1.53E+03(4)2.01E+01(1)1.90E+03(6)1.31E+03(3)
F22Avg(Rank)2.72E+03(1)2.75E+03(3)2.76E+03(4)2.78E+03(5)2.93E+03(7)2.73E+03(2)2.90E+03(6)3.30E+03(8)
Std(Rank)2.67E+01(2)3.01E+01(4)3.10E+01(5)2.82E+01(3)7.70E+01(6)2.11E+01(1)8.99E+01(7)1.03E+02(8)
F23Avg(Rank)2.88E+03(1)2.93E+03(3)2.95E+03(5)2.94E+03(4)3.10E+03(6)2.91E+03(2)3.10E+03(7)3.55E+03(8)
Std(Rank)2.00E+01(1)2.94E+01(3)6.73E+01(5)3.85E+01(4)8.52E+01(7)2.57E+01(2)8.25E+01(6)1.73E+02(8)
F24Avg(Rank)2.89E+03(1)2.89E+03(2)2.96E+03(7)2.92E+03(6)2.89E+03(3)2.90E+03(4)2.91E+03(5)3.06E+03(8)
Std(Rank)1.40E+01(3)6.10E+00(1)4.34E+01(6)2.99E+01(5)1.30E+01(2)1.51E+01(4)4.69E+01(8)4.45E+01(7)
F25Avg(Rank)4.22E+03(1)4.75E+03(3)4.87E+03(4)5.09E+03(6)6.05E+03(7)4.37E+03(2)4.97E+03(5)8.18E+03(8)
Std(Rank)5.83E+02(2)8.01E+02(5)4.02E+02(1)6.23E+02(4)1.28E+03(8)5.91E+02(3)9.83E+02(6)1.25E+03(7)
F26Avg(Rank)3.22E+03(1)3.23E+03(2)3.26E+03(5)3.24E+03(4)3.27E+03(6)3.24E+03(3)3.29E+03(7)3.70E+03(8)
Std(Rank)1.43E+01(1)2.29E+01(5)1.71E+01(4)1.63E+01(3)3.91E+01(6)1.45E+01(2)4.62E+01(7)2.21E+02(8)
F27Avg(Rank)3.21E+03(1)3.28E+03(4)3.38E+03(7)3.28E+03(5)3.23E+03(2)3.25E+03(3)3.33E+03(6)3.65E+03(8)
Std(Rank)2.30E+01(2)4.63E+01(5)6.97E+01(6)3.41E+01(3)2.19E+01(1)3.65E+01(4)1.19E+02(7)1.51E+02(8)
F28Avg(Rank)3.63E+03(2)3.82E+03(3)3.86E+03(5)4.04E+03(6)4.26E+03(7)3.62E+03(1)3.85E+03(4)5.45E+03(8)
Std(Rank)1.43E+02(3)1.33E+02(2)1.54E+02(4)2.35E+02(6)2.53E+02(7)1.06E+02(1)2.22E+02(5)7.66E+02(8)
F29Avg(Rank)1.44E+04(1)5.21E+04(4)1.11E+07(7)6.76E+05(6)1.88E+04(2)1.97E+04(3)2.60E+05(5)8.09E+07(8)
Std(Rank)7.76E+03(1)1.31E+05(4)9.27E+06(7)5.52E+05(5)1.21E+04(3)8.23E+03(2)7.43E+05(6)1.29E+08(8)
Table 4. Statistical results for 50D. Entries are reported as value(rank).

Function | Index | MRBMO | RBMO | SOGWO | RIME | SSA | LSHADE_SPACMA | PSO | TTHHO
F1Avg(Rank)6.40E+03(1)4.74E+07(4)6.52E+09(7)4.33E+07(3)9.66E+05(2)9.62E+07(5)4.25E+09(6)1.14E+10(8)
Std(Rank)6.58E+03(1)7.26E+07(4)3.89E+09(7)1.75E+07(3)5.66E+05(2)1.01E+08(5)5.00E+09(8)3.06E+09(6)
F2Avg(Rank)3.13E+04(1)3.74E+04(2)1.76E+05(4)2.09E+05(7)2.59E+05(8)1.42E+05(3)1.98E+05(6)1.88E+05(5)
Std(Rank)1.36E+04(1)1.62E+04(2)2.72E+04(4)4.76E+04(5)7.33E+04(7)8.05E+04(8)4.87E+04(6)2.04E+04(3)
F3Avg(Rank)5.49E+02(1)6.41E+02(3)1.08E+03(7)6.99E+02(5)5.70E+02(2)6.64E+02(4)9.63E+02(6)3.07E+03(8)
Std(Rank)3.80E+01(1)6.13E+01(4)3.18E+02(6)6.55E+01(5)4.84E+01(3)4.74E+01(2)4.98E+02(7)8.03E+02(8)
F4Avg(Rank)6.37E+02(1)6.95E+02(2)7.76E+02(5)7.55E+02(4)8.66E+02(7)7.22E+02(3)7.77E+02(6)9.66E+02(8)
Std(Rank)2.68E+01(1)3.85E+01(5)9.17E+01(8)4.72E+01(6)3.02E+01(2)3.82E+01(4)5.18E+01(7)3.57E+01(3)
F5Avg(Rank)6.05E+02(1)6.08E+02(2)6.20E+02(4)6.32E+02(5)6.63E+02(7)6.09E+02(3)6.34E+02(6)6.82E+02(8)
Std(Rank)2.09E+00(1)3.13E+00(3)5.31E+00(4)8.16E+00(7)5.66E+00(5)2.97E+00(2)1.28E+01(8)5.92E+00(6)
F6Avg(Rank)9.11E+02(1)9.92E+02(2)1.13E+03(5)1.12E+03(4)1.73E+03(7)1.09E+03(3)1.18E+03(6)1.91E+03(8)
Std(Rank)2.91E+01(1)5.31E+01(3)9.74E+01(8)7.39E+01(5)5.17E+01(2)7.31E+01(4)7.73E+01(7)7.72E+01(6)
F7Avg(Rank)9.46E+02(1)1.01E+03(2)1.05E+03(4)1.06E+03(5)1.18E+03(7)1.01E+03(3)1.09E+03(6)1.26E+03(8)
Std(Rank)2.77E+01(1)4.47E+01(5)6.11E+01(6)6.58E+01(8)4.16E+01(3)4.41E+01(4)6.19E+01(7)3.34E+01(2)
F8Avg(Rank)1.66E+03(1)2.52E+03(2)9.40E+03(4)1.24E+04(5)1.32E+04(6)4.99E+03(3)2.07E+04(7)3.38E+04(8)
Std(Rank)5.54E+02(1)8.07E+02(3)4.56E+03(6)6.91E+03(8)7.89E+02(2)2.40E+03(4)6.69E+03(7)2.91E+03(5)
F9Avg(Rank)7.57E+03(1)9.51E+03(7)8.69E+03(5)8.06E+03(3)8.37E+03(4)9.14E+03(6)8.03E+03(2)1.08E+04(8)
Std(Rank)8.42E+02(2)1.19E+03(6)2.74E+03(8)7.21E+02(1)9.07E+02(3)1.37E+03(7)1.15E+03(5)1.05E+03(4)
F10Avg(Rank)1.28E+03(1)1.44E+03(3)5.27E+03(8)1.72E+03(4)1.41E+03(2)2.71E+03(6)1.72E+03(5)4.58E+03(7)
Std(Rank)4.30E+01(1)8.29E+01(3)1.86E+03(7)1.13E+02(4)7.12E+01(2)3.48E+03(8)1.99E+02(5)8.86E+02(6)
F11Avg(Rank)2.31E+06(1)1.11E+07(2)6.77E+08(6)1.62E+08(5)1.32E+07(3)1.36E+07(4)2.91E+09(7)3.14E+09(8)
Std(Rank)1.82E+06(1)8.74E+06(3)9.61E+08(6)1.01E+08(5)8.51E+06(2)1.26E+07(4)2.72E+09(8)2.02E+09(7)
F12Avg(Rank)7.71E+03(1)2.13E+04(3)9.67E+07(6)4.46E+05(5)3.50E+04(4)2.06E+04(2)4.03E+08(8)3.15E+08(7)
Std(Rank)8.52E+03(2)1.47E+04(3)1.06E+08(6)3.54E+05(5)1.98E+04(4)8.36E+03(1)9.36E+08(8)5.22E+08(7)
F13Avg(Rank)1.68E+03(1)1.77E+03(2)1.58E+06(7)6.25E+05(5)3.96E+05(4)1.67E+04(3)7.05E+05(6)1.45E+07(8)
Std(Rank)7.83E+01(1)8.24E+01(2)1.15E+06(7)4.23E+05(5)2.41E+05(4)2.25E+04(3)6.54E+05(6)1.70E+07(8)
F14Avg(Rank)7.94E+03(1)1.14E+04(3)1.06E+07(7)1.12E+05(6)1.95E+04(5)1.00E+04(2)1.39E+04(4)2.81E+07(8)
Std(Rank)5.07E+03(1)6.64E+03(3)1.86E+07(7)5.30E+04(6)1.11E+04(4)5.83E+03(2)1.25E+04(5)6.15E+07(8)
F15Avg(Rank)3.01E+03(1)3.41E+03(4)3.22E+03(2)3.68E+03(6)3.92E+03(7)3.45E+03(5)3.30E+03(3)5.72E+03(8)
Std(Rank)4.14E+02(2)4.80E+02(6)6.22E+02(7)4.40E+02(3)4.60E+02(5)3.33E+02(1)4.54E+02(4)7.15E+02(8)
F16Avg(Rank)2.86E+03(1)3.14E+03(3)3.02E+03(2)3.49E+03(6)3.58E+03(7)3.15E+03(4)3.17E+03(5)3.85E+03(8)
Std(Rank)2.81E+02(1)3.27E+02(2)3.91E+02(5)4.01E+02(6)4.33E+02(7)3.34E+02(3)4.40E+02(8)3.79E+02(4)
F17Avg(Rank)9.35E+04(2)6.19E+04(1)5.63E+06(7)4.70E+06(6)2.95E+06(4)5.24E+05(3)3.34E+06(5)2.14E+07(8)
Std(Rank)1.32E+05(2)3.54E+04(1)5.16E+06(7)3.10E+06(5)1.76E+06(4)1.49E+06(3)3.48E+06(6)1.66E+07(8)
F18Avg(Rank)1.48E+04(2)1.27E+04(1)3.17E+06(8)5.63E+05(5)1.85E+04(4)1.50E+04(3)7.17E+05(6)2.98E+06(7)
Std(Rank)1.10E+04(2)1.13E+04(3)3.59E+06(8)6.40E+05(5)1.49E+04(4)9.79E+03(1)2.44E+06(6)2.59E+06(7)
F19Avg(Rank)2.91E+03(1)3.12E+03(3)3.10E+03(2)3.22E+03(5)3.60E+03(7)3.54E+03(6)3.18E+03(4)3.60E+03(8)
Std(Rank)2.12E+02(1)2.89E+02(3)4.04E+02(8)2.96E+02(4)3.01E+02(5)2.41E+02(2)3.23E+02(6)3.59E+02(7)
F20Avg(Rank)2.45E+03(1)2.50E+03(2)2.56E+03(4)2.57E+03(5)2.73E+03(7)2.52E+03(3)2.60E+03(6)2.99E+03(8)
Std(Rank)3.17E+01(1)4.25E+01(3)8.97E+01(6)5.15E+01(4)1.06E+02(8)3.77E+01(2)6.62E+01(5)9.27E+01(7)
F21Avg(Rank)8.89E+03(1)1.13E+04(7)1.03E+04(5)9.88E+03(2)1.02E+04(4)1.13E+04(6)9.95E+03(3)1.31E+04(8)
Std(Rank)2.01E+03(7)9.95E+02(3)1.48E+03(6)9.95E+02(2)9.24E+02(1)2.13E+03(8)1.09E+03(4)1.22E+03(5)
F22Avg(Rank)2.89E+03(1)3.06E+03(5)3.00E+03(3)3.06E+03(4)3.35E+03(7)3.00E+03(2)3.33E+03(6)4.10E+03(8)
Std(Rank)3.65E+01(1)8.41E+01(4)4.44E+01(2)9.00E+01(5)1.74E+02(7)5.76E+01(3)1.48E+02(6)2.27E+02(8)
F23Avg(Rank)3.05E+03(1)3.22E+03(5)3.21E+03(4)3.19E+03(3)3.49E+03(6)3.16E+03(2)3.54E+03(7)4.52E+03(8)
Std(Rank)4.40E+01(1)8.58E+01(4)1.16E+02(5)6.53E+01(3)1.53E+02(6)5.66E+01(2)1.68E+02(7)2.38E+02(8)
F24Avg(Rank)3.07E+03(1)3.16E+03(5)3.60E+03(7)3.14E+03(3)3.10E+03(2)3.15E+03(4)3.30E+03(6)4.09E+03(8)
Std(Rank)3.11E+01(1)4.90E+01(5)4.71E+02(7)4.89E+01(4)3.14E+01(2)3.91E+01(3)4.79E+02(8)2.80E+02(6)
F25Avg(Rank)5.10E+03(1)6.28E+03(2)6.76E+03(5)7.26E+03(6)8.10E+03(7)6.48E+03(3)6.71E+03(4)1.21E+04(8)
Std(Rank)9.68E+02(5)1.13E+03(6)5.56E+02(1)6.18E+02(3)3.50E+03(8)5.57E+02(2)1.82E+03(7)8.37E+02(4)
F26Avg(Rank)3.35E+03(1)3.50E+03(2)3.60E+03(4)3.60E+03(5)3.69E+03(6)3.54E+03(3)3.77E+03(7)5.03E+03(8)
Std(Rank)6.69E+01(1)1.21E+02(5)8.60E+01(2)1.10E+02(4)2.33E+02(6)1.05E+02(3)2.50E+02(7)5.80E+02(8)
F27Avg(Rank)3.34E+03(1)3.78E+03(5)4.25E+03(7)3.45E+03(3)3.38E+03(2)3.48E+03(4)4.01E+03(6)5.51E+03(8)
Std(Rank)3.10E+01(1)6.37E+02(7)3.70E+02(5)5.65E+01(3)3.25E+01(2)8.69E+01(4)9.33E+02(8)4.94E+02(6)
F28Avg(Rank)4.02E+03(1)4.63E+03(3)4.80E+03(4)5.35E+03(7)5.12E+03(6)4.24E+03(2)4.82E+03(5)8.66E+03(8)
Std(Rank)3.05E+02(2)4.75E+02(7)3.51E+02(3)4.44E+02(6)3.89E+02(5)2.98E+02(1)3.84E+02(4)1.30E+03(8)
F29Avg(Rank)1.25E+06(1)3.30E+06(3)1.38E+08(7)6.30E+07(6)1.78E+06(2)3.46E+06(4)4.09E+06(5)1.71E+08(8)
Std(Rank)3.84E+05(1)1.70E+06(4)3.95E+07(7)2.16E+07(6)8.37E+05(2)1.55E+06(3)3.12E+06(5)7.36E+07(8)
Table 5. Statistical results for 100D. Entries are reported as value(rank).

Function | Index | MRBMO | RBMO | SOGWO | RIME | SSA | LSHADE_SPACMA | PSO | TTHHO
F1Avg(Rank)5.56E+07(1)6.49E+09(4)4.37E+10(7)9.59E+08(3)3.62E+08(2)8.35E+09(5)1.92E+10(6)7.06E+10(8)
Std(Rank)3.31E+07(1)2.15E+09(4)8.69E+09(7)2.23E+08(3)1.41E+08(2)3.34E+09(5)1.03E+10(8)7.24E+09(6)
F2Avg(Rank)2.42E+05(2)2.32E+05(1)5.56E+05(5)7.04E+05(7)7.69E+05(8)4.53E+05(4)5.70E+05(6)3.98E+05(3)
Std(Rank)3.56E+04(2)3.32E+04(1)6.98E+04(3)9.83E+04(4)1.37E+05(6)1.57E+05(8)1.14E+05(5)1.53E+05(7)
F3Avg(Rank)8.56E+02(1)1.73E+03(4)4.62E+03(7)1.15E+03(3)1.02E+03(2)1.80E+03(5)3.29E+03(6)1.46E+04(8)
Std(Rank)7.22E+01(1)3.59E+02(5)1.55E+03(6)1.32E+02(3)7.48E+01(2)3.54E+02(4)1.73E+03(7)2.79E+03(8)
F4Avg(Rank)9.81E+02(1)1.16E+03(2)1.21E+03(3)1.32E+03(5)1.37E+03(6)1.27E+03(4)1.47E+03(7)1.72E+03(8)
Std(Rank)8.12E+01(4)8.87E+01(5)6.25E+01(3)1.15E+02(8)4.10E+01(1)9.56E+01(6)1.14E+02(7)5.67E+01(2)
F5Avg(Rank)6.23E+02(1)6.32E+02(3)6.45E+02(4)6.56E+02(5)6.66E+02(7)6.28E+02(2)6.64E+02(6)6.94E+02(8)
Std(Rank)5.64E+00(5)5.92E+00(6)5.23E+00(4)8.77E+00(7)2.70E+00(1)4.40E+00(2)9.65E+00(8)4.54E+00(3)
F6Avg(Rank)1.51E+03(1)1.86E+03(2)2.17E+03(3)2.21E+03(4)3.26E+03(7)2.26E+03(6)2.22E+03(5)3.81E+03(8)
Std(Rank)1.36E+02(2)1.53E+02(4)1.62E+02(5)2.14E+02(8)7.33E+01(1)2.05E+02(7)1.79E+02(6)1.41E+02(3)
F7Avg(Rank)1.27E+03(1)1.47E+03(2)1.58E+03(3)1.63E+03(5)1.83E+03(7)1.61E+03(4)1.77E+03(6)2.19E+03(8)
Std(Rank)6.13E+01(2)8.37E+01(4)1.66E+02(8)1.36E+02(7)5.02E+01(1)1.07E+02(5)1.24E+02(6)6.42E+01(3)
F8Avg(Rank)1.22E+04(1)1.94E+04(2)4.59E+04(5)4.92E+04(6)2.52E+04(3)3.37E+04(4)7.03E+04(7)7.25E+04(8)
Std(Rank)3.92E+03(2)4.76E+03(3)1.25E+04(7)1.20E+04(6)6.32E+02(1)1.03E+04(5)1.76E+04(8)5.14E+03(4)
F9Avg(Rank)1.93E+04(2)2.34E+04(6)2.05E+04(4)2.00E+04(3)1.75E+04(1)2.44E+04(7)2.11E+04(5)2.63E+04(8)
Std(Rank)1.14E+03(1)1.80E+03(4)5.51E+03(8)1.31E+03(2)1.41E+03(3)3.41E+03(7)2.86E+03(6)2.63E+03(5)
F10Avg(Rank)5.44E+03(1)1.23E+04(2)8.62E+04(7)4.47E+04(3)7.80E+04(6)6.11E+04(4)7.58E+04(5)1.98E+05(8)
Std(Rank)1.58E+03(1)3.61E+03(2)1.97E+04(4)1.14E+04(3)2.29E+04(5)3.83E+04(7)2.88E+04(6)5.42E+04(8)
F11Avg(Rank)2.90E+07(1)3.58E+08(3)9.37E+09(7)1.25E+09(5)1.76E+08(2)9.27E+08(4)8.90E+09(6)2.22E+10(8)
Std(Rank)1.33E+07(1)2.01E+08(3)4.60E+09(6)4.58E+08(5)8.34E+07(2)4.13E+08(4)7.29E+09(8)6.26E+09(7)
F12Avg(Rank)8.46E+03(1)6.90E+05(3)1.25E+09(7)2.18E+06(5)1.25E+05(2)1.53E+06(4)9.82E+08(6)1.56E+09(8)
Std(Rank)5.26E+03(1)3.46E+06(4)8.14E+08(6)1.42E+06(3)3.41E+05(2)4.83E+06(5)1.30E+09(8)8.36E+08(7)
F13Avg(Rank)3.21E+05(2)3.07E+05(1)7.19E+06(6)8.00E+06(7)1.99E+06(4)8.82E+05(3)5.61E+06(5)1.53E+07(8)
Std(Rank)3.80E+05(2)2.97E+05(1)3.99E+06(6)4.15E+06(7)9.09E+05(4)5.02E+05(3)2.87E+06(5)5.29E+06(8)
F14Avg(Rank)5.76E+03(1)4.42E+05(5)2.17E+08(7)4.24E+05(4)2.72E+04(2)3.03E+04(3)2.22E+08(8)1.06E+08(6)
Std(Rank)5.28E+03(1)1.99E+06(5)2.67E+08(7)2.41E+05(4)1.66E+04(2)1.78E+04(3)4.63E+08(8)8.09E+07(6)
F15Avg(Rank)5.49E+03(1)6.72E+03(5)6.59E+03(4)7.33E+03(7)6.11E+03(2)7.08E+03(6)6.36E+03(3)1.21E+04(8)
Std(Rank)6.05E+02(2)9.36E+02(6)8.14E+02(4)6.62E+02(3)5.17E+02(1)9.56E+02(7)8.90E+02(5)1.79E+03(8)
F16Avg(Rank)4.72E+03(1)5.50E+03(3)5.45E+03(2)5.91E+03(5)5.99E+03(6)5.79E+03(4)6.48E+03(7)1.12E+04(8)
Std(Rank)5.06E+02(2)4.64E+02(1)1.89E+03(7)6.07E+02(3)6.16E+02(4)8.31E+02(5)1.84E+03(6)3.47E+03(8)
F17Avg(Rank)1.06E+06(2)6.20E+05(1)7.97E+06(5)1.12E+07(7)3.02E+06(4)1.45E+06(3)8.78E+06(6)1.30E+07(8)
Std(Rank)6.77E+05(2)3.52E+05(1)4.49E+06(6)5.26E+06(7)1.83E+06(4)6.89E+05(3)4.05E+06(5)7.64E+06(8)
F18Avg(Rank)6.31E+03(1)1.36E+04(2)4.04E+08(8)2.62E+07(5)1.98E+04(3)1.26E+05(4)2.76E+08(7)1.39E+08(6)
Std(Rank)4.65E+03(1)1.65E+04(2)5.69E+08(7)1.60E+07(5)2.52E+04(3)1.22E+05(4)7.06E+08(8)1.63E+08(6)
F19Avg(Rank)4.89E+03(1)5.23E+03(2)5.60E+03(4)5.57E+03(3)6.08E+03(6)6.63E+03(8)5.65E+03(5)6.58E+03(7)
Std(Rank)5.40E+02(3)5.64E+02(4)1.16E+03(8)6.12E+02(5)6.77E+02(6)4.33E+02(1)6.84E+02(7)5.35E+02(2)
F20Avg(Rank)2.80E+03(1)3.11E+03(3)3.09E+03(2)3.22E+03(5)3.68E+03(7)3.13E+03(4)3.50E+03(6)4.50E+03(8)
Std(Rank)7.40E+01(2)1.02E+02(4)8.59E+01(3)1.30E+02(5)2.73E+02(8)6.77E+01(1)1.42E+02(6)1.93E+02(7)
F21Avg(Rank)2.12E+04(2)2.46E+04(6)2.27E+04(4)2.23E+04(3)2.00E+04(1)2.62E+04(7)2.37E+04(5)2.91E+04(8)
Std(Rank)1.56E+03(4)1.96E+03(5)5.15E+03(8)1.48E+03(2)1.39E+03(1)2.62E+03(7)2.58E+03(6)1.49E+03(3)
F22Avg(Rank)3.36E+03(1)3.91E+03(5)3.65E+03(3)3.74E+03(4)4.32E+03(6)3.63E+03(2)4.73E+03(7)6.26E+03(8)
Std(Rank)1.15E+02(4)1.70E+02(5)1.09E+02(3)1.00E+02(2)1.88E+02(6)9.11E+01(1)2.82E+02(7)5.63E+02(8)
F23Avg(Rank)3.84E+03(1)4.64E+03(5)4.37E+03(3)4.24E+03(2)5.25E+03(6)4.43E+03(4)5.73E+03(7)9.30E+03(8)
Std(Rank)9.54E+01(1)2.05E+02(5)1.04E+02(2)1.57E+02(4)4.60E+02(7)1.55E+02(3)4.50E+02(6)1.20E+03(8)
F24Avg(Rank)3.55E+03(1)4.39E+03(4)6.91E+03(7)3.91E+03(3)3.70E+03(2)4.41E+03(5)4.59E+03(6)8.47E+03(8)
Std(Rank)6.57E+01(1)2.47E+02(4)9.75E+02(8)1.33E+02(3)7.66E+01(2)3.13E+02(5)7.21E+02(7)7.14E+02(6)
F25Avg(Rank)1.15E+04(1)1.58E+04(3)1.69E+04(4)1.57E+04(2)2.30E+04(7)1.75E+04(5)1.95E+04(6)3.41E+04(8)
Std(Rank)2.42E+03(4)2.60E+03(5)1.27E+03(1)1.45E+03(2)6.53E+03(8)1.82E+03(3)2.92E+03(7)2.65E+03(6)
F26Avg(Rank)3.60E+03(1)3.72E+03(2)4.19E+03(6)4.09E+03(5)3.81E+03(3)4.04E+03(4)4.20E+03(7)8.18E+03(8)
Std(Rank)7.68E+01(1)1.18E+02(2)1.74E+02(4)1.57E+02(3)1.87E+02(5)1.99E+02(6)3.93E+02(7)1.73E+03(8)
F27Avg(Rank)3.67E+03(1)6.67E+03(5)8.80E+03(7)4.31E+03(3)3.83E+03(2)5.48E+03(4)6.89E+03(6)1.15E+04(8)
Std(Rank)7.66E+01(1)2.37E+03(8)1.40E+03(6)4.07E+02(3)8.36E+01(2)7.14E+02(4)1.74E+03(7)8.72E+02(5)
F28Avg(Rank)7.03E+03(1)8.08E+03(5)8.95E+03(6)9.66E+03(7)7.77E+03(2)7.99E+03(3)8.01E+03(4)1.45E+04(8)
Std(Rank)5.90E+02(1)6.80E+02(5)7.40E+02(7)7.36E+02(6)5.90E+02(2)6.31E+02(4)5.95E+02(3)1.96E+03(8)
F29Avg(Rank)2.60E+04(1)1.08E+06(3)1.33E+09(7)1.79E+08(5)8.09E+05(2)9.38E+06(4)8.44E+08(6)1.83E+09(8)
Std(Rank)1.22E+04(1)9.20E+05(3)1.09E+09(8)8.03E+07(5)4.23E+05(2)6.80E+06(4)9.14E+08(6)1.01E+09(7)
Table 6. Overall results of the Wilcoxon rank-sum test on CEC2017.

Dim | Index | RBMO | SOGWO | RIME | SSA | LSHADE_SPACMA | PSO | TTHHO
30 | R+ | 24 | 29 | 29 | 25 | 25 | 29 | 29
 | R− | 5 | 0 | 0 | 4 | 4 | 0 | 0
50 | R+ | 27 | 26 | 29 | 28 | 27 | 26 | 29
 | R− | 2 | 3 | 0 | 1 | 2 | 3 | 0
100 | R+ | 29 | 27 | 29 | 29 | 29 | 29 | 29
 | R− | 0 | 2 | 0 | 0 | 0 | 0 | 0
Total | R+/R− | 80/7 | 82/5 | 87/0 | 82/5 | 81/6 | 84/3 | 87/0
Table 7. Statistical results of the tension/compression spring design (TCSD) problem.

Algorithm | Best | Mean | Median | Worst | Std
MRBMO | 0.012666 | 0.012739 | 0.012714 | 0.012984 | 0.000070
RBMO | 0.012665 | 0.012759 | 0.012714 | 0.013272 | 0.000147
SOGWO | 0.012731 | 0.013288 | 0.012879 | 0.015831 | 0.000797
LSHADE | 0.012669 | 0.012735 | 0.012707 | 0.012994 | 0.000070
RIME | 0.012948 | 0.017408 | 0.017848 | 0.020430 | 0.002111
SSA | 0.012668 | 0.013936 | 0.013201 | 0.017780 | 0.001582
LSHADE_SPACMA | 0.012665 | 0.012783 | 0.012698 | 0.013870 | 0.000276
PSO | 0.012675 | 0.013682 | 0.013234 | 0.017455 | 0.001195
TTHHO | 0.012691 | 0.013689 | 0.013458 | 0.016006 | 0.000863
Bold indicates the minimum value.
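The TCSD problem minimizes spring weight over the wire diameter d, mean coil diameter D, and number of active coils N. A sketch of the standard benchmark formulation (the widely used textbook statement, which the paper is assumed to follow), evaluated at a commonly cited near-optimal design rather than at any solution reported in the tables:

```python
def tcsd(d, D, N):
    """Spring weight f and constraints g (feasible when every g_i <= 0)."""
    f = (N + 2) * D * d ** 2
    g = [
        1 - D ** 3 * N / (71785 * d ** 4),                # minimum deflection
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,                        # shear stress
        1 - 140.45 * d / (D ** 2 * N),                    # surge frequency
        (d + D) / 1.5 - 1,                                # outside diameter
    ]
    return f, g

# widely cited near-optimal design (illustrative, not from the paper)
weight, cons = tcsd(0.051689, 0.356718, 11.288966)
```

The resulting weight of about 0.012665 matches the best values in Table 7, with the deflection and shear-stress constraints essentially active at the optimum.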
Table 8. Statistical results of the speed reducer design problem (RDP).

Algorithm | Best | Mean | Median | Worst | Std
MRBMO | 2996.34817 | 2996.34838 | 2996.34821 | 2996.34930 | 0.00033
RBMO | 2996.34848 | 2997.47319 | 2996.35614 | 3007.42608 | 3.18454
SOGWO | 3019.54252 | 3037.59170 | 3038.00575 | 3055.49922 | 9.83048
LSHADE | 2996.34817 | 2996.35970 | 2996.34823 | 2996.34836 | 0.00005
RIME | 2998.04641 | 3020.04269 | 3016.54626 | 3061.78099 | 17.06851
SSA | 2996.34839 | 2996.67139 | 2996.35137 | 3005.66973 | 1.69989
LSHADE_SPACMA | 2996.39215 | 2998.02535 | 2996.82317 | 3006.35579 | 2.80615
PSO | 2996.43294 | 3036.91614 | 3035.63702 | 3166.08284 | 29.64018
TTHHO | 3050.14531 | 4222.52479 | 4173.65210 | 5562.22921 | 1011.74585
Bold indicates the minimum value.
Table 9. Statistical results of the welded beam design (WBD) problem.

Algorithm | Best | Mean | Median | Worst | Std
MRBMO | 0.012666 | 0.012739 | 0.012714 | 0.012984 | 0.000070
RBMO | 0.012665 | 0.012759 | 0.012714 | 0.013272 | 0.000147
SOGWO | 0.012731 | 0.013288 | 0.012879 | 0.015831 | 0.000797
LSHADE | 0.012669 | 0.012735 | 0.012707 | 0.012994 | 0.000070
RIME | 0.012948 | 0.017408 | 0.017848 | 0.020430 | 0.002111
SSA | 0.012668 | 0.013936 | 0.013201 | 0.017780 | 0.001582
LSHADE_SPACMA | 0.012665 | 0.012783 | 0.012698 | 0.013870 | 0.000276
PSO | 0.012675 | 0.013682 | 0.013234 | 0.017455 | 0.001195
TTHHO | 0.012691 | 0.013689 | 0.013458 | 0.016006 | 0.000863
Bold indicates the minimum value.
Table 10. Statistical results of the gear train design (GTD) problem.

Algorithm | Best | Mean | Median | Worst | Std
MRBMO | 2.70086E-12 | 2.89217E-10 | 1.16612E-10 | 2.35764E-09 | 5.20441E-10
RBMO | 2.70086E-12 | 4.96898E-10 | 1.16612E-10 | 2.05753E-09 | 5.79916E-10
SOGWO | 2.70086E-12 | 1.99867E-09 | 1.36165E-09 | 2.61397E-08 | 4.63342E-09
LSHADE | 2.70086E-12 | 8.06901E-10 | 8.88761E-10 | 2.35764E-09 | 7.21118E-10
RIME | 2.30782E-11 | 3.98814E-09 | 2.35764E-09 | 2.72645E-08 | 5.85797E-09
SSA | 2.30782E-11 | 1.23977E-08 | 2.35764E-09 | 1.76128E-07 | 3.21443E-08
LSHADE_SPACMA | 2.70086E-12 | 1.56380E-09 | 1.16612E-10 | 2.72645E-08 | 4.92322E-09
PSO | 2.70086E-12 | 5.96709E-09 | 2.35764E-09 | 2.72645E-08 | 8.15863E-09
TTHHO | 2.70086E-12 | 7.91875E-09 | 2.90473E-09 | 3.63493E-08 | 1.00949E-08
Bold indicates the minimum value.
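The Best/Mean/Median/Worst/Std columns of Tables 7–10 summarize repeated independent runs of each algorithm. A minimal sketch of how such columns can be computed (assuming the sample standard deviation; the paper may use the population form instead):

```python
import statistics

def run_stats(results):
    """Best / Mean / Median / Worst / Std over independent runs of a
    minimization problem, matching the columns of Tables 7-10."""
    return {
        "Best": min(results),
        "Mean": statistics.mean(results),
        "Median": statistics.median(results),
        "Worst": max(results),
        "Std": statistics.stdev(results),  # sample standard deviation
    }
```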
Share and Cite

Ye, M.; Wang, X.; Guo, Z.; Hu, B.; Wang, L. A Multi-Strategy Improved Red-Billed Blue Magpie Optimizer for Global Optimization. Biomimetics 2025, 10, 557. https://doi.org/10.3390/biomimetics10090557