Article

MRBMO: An Enhanced Red-Billed Blue Magpie Optimization Algorithm for Solving Numerical Optimization Challenges

by Baili Lu 1, Zhanxi Xie 2, Junhao Wei 3, Yanzhao Gu 3, Yuzheng Yan 3, Zikun Li 4, Shirou Pan 1, Ngai Cheong 3,*, Ying Chen 5 and Ruishen Zhou 3
1 College of Animal Science and Technology, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China
2 Faculty of Humanities and Social Sciences, Macao Polytechnic University, Macao 999078, China
3 Faculty of Applied Sciences, Macao Polytechnic University, Macao 999078, China
4 School of Economics and Management, South China Normal University, Guangzhou 510006, China
5 School of Health Economics and Management, Nanjing University of Chinese Medicine, Nanjing 210023, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(8), 1295; https://doi.org/10.3390/sym17081295
Submission received: 17 March 2025 / Revised: 16 April 2025 / Accepted: 25 April 2025 / Published: 11 August 2025
(This article belongs to the Section Engineering and Materials)

Abstract

To address the limitations of the Red-billed Blue Magpie Optimization algorithm (RBMO), such as its tendency to get trapped in local optima and its slow convergence rate, an enhanced version called MRBMO was proposed. MRBMO was improved by integrating Good Nodes Set Initialization, an Enhanced Search-for-food Strategy, a newly designed Siege-style Attacking-prey Strategy, and Lens-Imaging Opposition-Based Learning (LIOBL). The experimental results showed that MRBMO demonstrated strong competitiveness on the CEC2005 benchmark. Among a series of advanced metaheuristic algorithms, MRBMO exhibited significant advantages in terms of convergence speed and solution accuracy. On benchmark functions with 30, 50, and 100 dimensions, the average Friedman values of MRBMO were 1.6029, 1.6601, and 1.8775, respectively, significantly outperforming other algorithms. The overall effectiveness of MRBMO on benchmark functions with 30, 50, and 100 dimensions was 95.65%, which confirmed the effectiveness of MRBMO in handling problems of different dimensions. This paper designed two types of simulation experiments to test the practicability of MRBMO. First, MRBMO was used along with other heuristic algorithms to solve four engineering design optimization problems, aiming to verify the applicability of MRBMO in engineering design optimization. Then, to overcome the shortcomings of metaheuristic algorithms in antenna S-parameter optimization problems—such as time-consuming verification processes, cumbersome operations, and complex modes—this paper adopted a test suite specifically designed for antenna S-parameter optimization, with the goal of efficiently validating the effectiveness of metaheuristic algorithms in this domain. The results demonstrated that MRBMO had significant advantages in both engineering design optimization and antenna S-parameter optimization.

Graphical Abstract

1. Introduction

In the field of optimization, metaheuristic algorithms have received widespread attention due to their effectiveness and applicability in solving complex multimodal problems. Metaheuristic algorithms are an improvement of heuristic algorithms and are a combination of random algorithms and local search algorithms. Metaheuristic algorithms are employed to solve complex optimization problems by conducting both global searches and local exploration to find optimal or near-optimal solutions. The key concepts in these algorithms are exploration and exploitation. Exploration involves extensively searching the entire problem space, as the optimal solution could be anywhere within it. Exploitation focuses on maximizing the use of available information, often by identifying patterns or correlations in the solutions and refining the algorithm to improve its results. The main benefit of metaheuristic algorithms is their ability to address complex, nonlinear problems without needing assumptions about the problem’s specific model. While they cannot guarantee a global optimal solution, these algorithms are efficient at finding or approximating the best solution within a given time frame.
Table 1 lists some classical and novel metaheuristic algorithms. Due to their excellent optimization ability and versatility, metaheuristic algorithms have been widely applied in various fields such as robot path planning, job shop scheduling, neural network parameter optimization, and feature selection. However, many traditional metaheuristic algorithms, such as PSO, GA, and ACO, often face issues such as getting trapped in local optima and slow convergence when dealing with complex problems. To address these challenges, researchers have attempted to integrate various improvement strategies into basic metaheuristic algorithms, including hybrid algorithms, enhanced exploration–exploitation balance methods, and the introduction of new biological behavior models.
In 2018, Guojiang Xiong et al. proposed the improved whale optimization algorithm (IWOA) with a novel search strategy for solving solar photovoltaic model parameter extraction problems [10]. In 2023, Ya Shen et al. proposed an improved whale optimization algorithm based on multi-population evolution (MEWOA). The algorithm divides the population into three sub-populations based on individual fitness and assigns different search strategies to each sub-population. This multi-population cooperative evolution strategy effectively enhances the algorithm’s search capability [11]. In 2024, Ying Li et al. proposed the Improved Sand Cat Swarm Optimization algorithm (VF-ISCSO) based on virtual forces and a nonlinear convergence strategy. VF-ISCSO demonstrated significant advantages in enhancing the coverage range of wireless sensor networks [12]. In 2024, Gu Y et al. proposed an IPSO that incorporated adaptive t-distribution and Levy flight for UAV three-dimensional path planning [13]. In 2025, Wei J et al. proposed LSEWOA, which integrates Spiral flight and Tangent flight, to address the slow convergence speed and susceptibility to local optima of WOA [14]. LSEWOA significantly improved the convergence speed and accuracy of WOA. These outstanding algorithms, by integrating various novel improvement strategies, offer new insights into the enhancement of metaheuristic algorithms.
The Red-Billed Blue Magpie Optimization (RBMO) algorithm is a novel metaheuristic algorithm inspired by the foraging behavior and social cooperation characteristics of the red-billed blue magpie [15]. RBMO demonstrates significant advantages in global search, population diversity, and ease of implementation. However, it still exhibits limitations in solution accuracy and convergence speed, particularly when dealing with complex multimodal problems, where it struggles to quickly approach the optimal solution. To address these challenges, this paper proposed an enhanced RBMO algorithm (MRBMO). MRBMO was improved by integrating Good Nodes Set Initialization, an Enhanced Search-for-food Strategy, a newly designed Siege-style Attacking-prey Strategy and Lens-Imaging Opposition-Based Learning (LIOBL).

2. Current Research on Antenna Design

Adegboye et al. introduced the Honey Badger Algorithm (HBA) for antenna design optimization, demonstrating the algorithm’s effectiveness in enhancing antenna performance through specific cases [16]. The improvements were evident in key metrics such as gain and bandwidth. They compared the performance of various algorithms in antenna design, highlighting the advantages of the new algorithm in addressing particular design challenges. Park et al. proposed a method for optimizing antenna placement in single-cell and dual-cell distributed antenna systems (DAS) to maximize the lower bounds of expected signal-to-noise ratio (SNR) and expected signal-to-leakage ratio (SLR) [17]. The results indicated that the DAS using the proposed gradient ascent-based algorithm outperforms traditional centralized antenna systems (CASs) in terms of capacity, especially in dual-cell environments, effectively reducing interference and improving system performance. Jiang et al. designed a multi-band pixel antenna using genetic algorithms and N-port characteristic modal analysis, which operates effectively across the 900 MHz, 1800 MHz, and 2600 MHz frequency bands [18]. The effectiveness of the Genetic Algorithm in antenna design optimization was validated by monitoring changes in the objective function. Yang Zhao et al. proposed an optimization design method for dual-band tag antennas based on a multi-population Genetic Algorithm, overcoming the inefficiencies of traditional simulation and parameter tuning experiments in determining optimal size parameters [19]. The optimized UHF antenna achieved near-ideal input impedance at 915 MHz, resulting in good impedance matching with the chip. The size of the optimized dual-band tag antenna was significantly reduced compared to existing designs. Cai Jiaqi et al. presented a self-optimization method for base station antenna azimuth and downtilt angles based on the Artificial Bee Colony algorithm [20]. Experimental results provided the optimal azimuth and downtilt angles for base station antennas, improving coverage effectiveness for user devices, particularly in weak coverage areas. Fengling Peng et al. developed an antenna optimization framework based on Differential Evolution (DE), customized decision trees, and Deep Q-Networks (DQNs) [21]. Experimental results showed that this hybrid strategy-based framework achieves superior antenna design solutions with fewer simulation iterations. Metaheuristic algorithms have played an important role in antenna design. While the aforementioned studies primarily focus on antenna design, they consistently demonstrate the effectiveness of metaheuristic algorithms in addressing complex, high-dimensional, and multi-constrained optimization problems. These characteristics are not unique to antenna systems but are prevalent across a wide range of engineering design challenges. Accordingly, the demonstrated success of metaheuristic algorithms in antenna optimization has motivated their broader application in general engineering design problems. The subsequent section provides a systematic review of recent advancements in engineering design optimization, emphasizing the expanding role and applicability of metaheuristic algorithms as robust and efficient tools for solving intricate engineering tasks.

3. Current Research on Engineering Design Optimization

With the rapid development of metaheuristic algorithms, they are now widely applied in engineering design optimization [22]. Since the early 21st century, metaheuristic algorithms have been introduced to various engineering optimization problems. Before this, engineering design relied heavily on engineers’ experience and intuition. Although some numerical optimization methods were introduced, they were often limited by problem complexity and struggled to find global optima. With the rapid development of computer technology and the maturation of metaheuristic algorithms, engineering design optimization entered a new era. For instance, in pressure vessel design, conventional designs relied on experience and experimentation [23]. By introducing metaheuristic algorithms, multiple parameters such as vessel size and materials can be optimized, significantly reducing material costs while ensuring safety. In rolling bearing design [24], metaheuristic algorithms optimize parameters such as geometric dimensions and contact angles to achieve longer bearing life and higher load capacity.
Compared to conventional engineering design methods, metaheuristic algorithms offer several advantages. First, they can efficiently handle complex optimization problems, such as high-dimensional, multi-constraint, and nonlinear problems, without relying on specific mathematical models. Second, metaheuristic algorithms possess strong global search capabilities, allowing them to escape local optima and find global solutions. Additionally, these algorithms exhibit good robustness, maintaining high optimization performance across different application scenarios. These advantages have made metaheuristic algorithms important tools in modern engineering design and effective solutions for complex optimization problems [25]. In 2021, MH Nadimi-Shahraki et al. proposed I-GWO to solve problems such as pressure vessel design, welded beam design, and optimal power flow problems [26]. In 2023, JO Agushaka et al. introduced a new metaheuristic algorithm, the Greater Cane Rat Algorithm (GCRA), to solve issues in engineering design, including Three-bar truss, Gear train, and Welded beam problems, providing a new metaheuristic approach for engineering design optimization [27]. In 2025, Wei J et al. proposed LSEWOA, which was applied to solve engineering design optimization problems such as Three-bar Truss, Multi-disc Clutch Brake, and Industrial Refrigeration System, offering new insights for the application of WOA in engineering design optimization [14].
This paper will explore the effectiveness and applicability of MRBMO in engineering design optimization, aiming to provide a new optimizer for engineering design optimization.

4. Arrangement of the Rest of the Paper

Section 5 outlines the key contributions of this study. Section 6 details the principles of the RBMO algorithm, highlighting its strengths and limitations. Section 7 introduces the proposed MRBMO algorithm. Section 8 evaluates the performance of MRBMO through various experiments. Section 9 presents simulations and compares MRBMO with other metaheuristic algorithms on different engineering design optimization problems, as well as an antenna S-parameter optimization test suite, demonstrating the effectiveness of MRBMO.

5. Contributions of This Study

By integrating Good Nodes Set Initialization, an Enhanced Search-for-Food Strategy, a newly designed Siege-style Attacking-Prey Strategy, and Lens-Imaging Opposition-Based Learning (LIOBL), we proposed a novel optimizer, MRBMO, for solving real-world challenges. Through an ablation study, we evaluated the effectiveness of each strategy. By comparing MRBMO with other state-of-the-art metaheuristic algorithms on classical benchmark functions, we validated the outstanding performance of MRBMO. In a subsequent series of simulation experiments, MRBMO demonstrated excellent optimization ability and good convergence, proving that it can be used in real-world applications to solve various numerical optimization problems.

6. Red-Billed Blue Magpie Optimization Algorithm

The red-billed blue magpie, native to Asia, is commonly found in China, India, and Myanmar. This bird is notable for its large size, vibrant blue feathers, and distinct red beak. Its diet mainly consists of insects, small vertebrates, and plants, demonstrating active hunting behavior. When foraging, red-billed blue magpies use a mix of hopping, walking on the ground, and searching for food on branches.
These magpies are most active in the early morning and evening, often forming small groups of 2–5 individuals, but sometimes gathering in larger groups of over 10. They exhibit cooperative hunting behaviors, such as when one magpie finds food like fruit or insects and then invites others to share. This group effort allows them to capture larger prey, and their collective actions help them overcome the prey’s defense mechanisms. Additionally, magpies store food for later, hiding it in tree hollows, branches, and rock crevices to protect it from other animals.
Overall, red-billed blue magpies are flexible predators that acquire and store food through diverse strategies, while also exhibiting social and cooperative hunting behaviors. Inspired by this, Shengwei Fu et al. proposed a novel metaheuristic algorithm in 2024, called the Red-billed Blue Magpie Optimization algorithm (RBMO) [15]. When solving complex problems, RBMO defines an objective function specific to the problem, and the solution space refers to the set of all possible candidate solutions. The goal of RBMO is to efficiently search for the global or near-global optimum within this space. In each iteration, RBMO randomly generates N individuals (called search agents) within the solution space. These agents simulate the behaviors of red-billed blue magpies, such as searching for food, attacking prey, and storing food. During the optimization process, each agent updates its position, which represents a candidate solution. The fitness of that solution is then evaluated using the objective function. After multiple iterations, the algorithm converges to the best or a near-best solution according to the fitness values.

6.1. Search for Food

In red-billed blue magpies’ search-for-food stage, they use a variety of methods such as hopping on the ground, walking or searching for food resources in trees. The whole flock will be divided into small groups of 2–5 individuals or in clusters of 10 or more to search for food.
RBMO imitates their search-for-food behavior in small groups as follows:
$$X_i(t+1) = X_i(t) + \left( \frac{1}{p} \sum_{m=1}^{p} X_m(t) - X_{rs}(t) \right) \cdot Rand_1 \tag{1}$$
where $t$ represents the current iteration number; $p$ is a random integer between 2 and 5, representing the number of red-billed blue magpies in a group of 2 to 5 randomly selected from all search agents; $X_m$ represents the $m$th randomly selected individual; $X_i$ represents the $i$th individual; $X_{rs}$ represents a randomly selected search agent in the current iteration; $Rand_1$ is a random number in the range [0, 1].
Also, RBMO imitates their search-for-food behavior in clusters as follows:
$$X_i(t+1) = X_i(t) + \left( \frac{1}{q} \sum_{m=1}^{q} X_m(t) - X_{rs}(t) \right) \cdot Rand_2 \tag{2}$$
where $q$ is a random integer between 10 and $n$, representing the number of red-billed blue magpies in a cluster of 10 to $n$ randomly selected from all search agents; $n$ is the population size; $Rand_2$ is a random number in the range [0, 1].
The whole Search-for-food Strategy is modeled below.
$$X_i(t+1) = X_i(t) + \left( \frac{1}{p} \sum_{m=1}^{p} X_m(t) - X_{rs}(t) \right) \cdot Rand_1, \quad rand < \varepsilon \tag{3}$$
$$X_i(t+1) = X_i(t) + \left( \frac{1}{q} \sum_{m=1}^{q} X_m(t) - X_{rs}(t) \right) \cdot Rand_2, \quad rand \ge \varepsilon \tag{4}$$
where $rand$ is a random number in the range [0, 1]; the balance coefficient $\varepsilon$ is usually set to 0.5.
Although the mathematical forms of Equations (3) and (4) appear similar, they represent different swarm behaviors triggered under distinct probabilistic regimes, and are thus presented separately for clarity and fidelity to the original biological metaphor [15].
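To make the Search-for-food update concrete, the following Python sketch implements Equations (1)–(4). It assumes the population is stored as an (n, D) NumPy array and that $Rand_1$/$Rand_2$ are drawn as scalars, as described above; the function and variable names are illustrative and do not come from the authors' code.

```python
import numpy as np

def search_for_food(X, i, eps=0.5):
    """One Search-for-food update for agent i (Equations (1)-(4)).

    Minimal sketch: X is an (n, D) array of agent positions (n >= 10 assumed)."""
    n, _ = X.shape
    if np.random.rand() < eps:                       # small-group search, Eqs. (1)/(3)
        size = np.random.randint(2, 6)               # p in [2, 5]
    else:                                            # cluster search, Eqs. (2)/(4)
        size = np.random.randint(10, n + 1)          # q in [10, n]
    idx = np.random.choice(n, size, replace=False)   # randomly selected magpies
    x_rs = X[np.random.randint(n)]                   # randomly selected search agent
    return X[i] + (X[idx].mean(axis=0) - x_rs) * np.random.rand()
```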

6.2. Attacking Prey

When attacking prey, red-billed blue magpies demonstrate remarkable hunting proficiency and cooperative behavior. They employ diverse strategies such as rapid pecking, leaping to capture ground prey, and flying to intercept insects. To improve predation efficiency, they typically operate in flexible formations, either in small coordinated groups of 2–5 individuals or in larger clusters of 10 or more. This adaptive grouping behavior provides a natural model of collaborative predation, which is reflected in the RBMO algorithm’s exploitation phase to enhance convergence and intensify the search around promising solutions.
RBMO imitates red-billed blue magpies’ attacking-prey behavior in small groups as follows:
$$X_i(t+1) = X_{food}(t) + CF \cdot \left( \frac{1}{p} \sum_{m=1}^{p} X_m(t) - X_i(t) \right) \cdot Randn_1 \tag{5}$$
where $X_i(t+1)$ represents the new position of the $i$th search agent; $X_{food}$ represents the position of the food, which indicates the current optimal solution; $p$ is a random integer between 2 and 5, representing the number of red-billed blue magpies in a group of 2 to 5 randomly selected from all search agents; $Randn_1$ denotes a random number drawn from the standard normal distribution (mean 0, standard deviation 1); $CF$ is the step control factor, calculated as in Equation (6).
$$CF = \left( 1 - \frac{t}{T} \right)^{\left( 2 \cdot \frac{t}{T} \right)} \tag{6}$$
where $t$ represents the current number of iterations; $T$ represents the maximum number of iterations.
RBMO imitates red-billed blue magpies’ attacking-prey behavior in clusters as follows:
$$X_i(t+1) = X_{food}(t) + CF \cdot \left( \frac{1}{q} \sum_{m=1}^{q} X_m(t) - X_i(t) \right) \cdot Randn_2 \tag{7}$$
where $q$ is a random integer between 10 and $n$, representing the number of red-billed blue magpies in a cluster of 10 to $n$ randomly selected from all search agents; $Randn_2$ denotes a random number drawn from the standard normal distribution (mean 0, standard deviation 1).
The whole Attacking-prey strategy is modeled below.
$$X_i(t+1) = X_{food}(t) + CF \cdot \left( \frac{1}{p} \sum_{m=1}^{p} X_m(t) - X_i(t) \right) \cdot Randn_1, \quad rand < \varepsilon \tag{8}$$
$$X_i(t+1) = X_{food}(t) + CF \cdot \left( \frac{1}{q} \sum_{m=1}^{q} X_m(t) - X_i(t) \right) \cdot Randn_2, \quad rand \ge \varepsilon \tag{9}$$
where $rand$ is a random number in the range [0, 1]; the balance coefficient $\varepsilon$ is usually set to 0.5.
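A corresponding sketch of the Attacking-prey update (Equations (5)–(9)), under the same array-layout assumptions as the previous snippet:

```python
import numpy as np

def attack_prey(X, i, x_food, t, T, eps=0.5):
    """One Attacking-prey update for agent i (Equations (5)-(9)).

    Sketch: x_food holds the best solution found so far; Randn1/Randn2 are
    drawn as scalars from the standard normal distribution."""
    n, _ = X.shape
    CF = (1 - t / T) ** (2 * t / T)                  # step control factor, Eq. (6)
    if np.random.rand() < eps:                       # small group, Eqs. (5)/(8)
        size = np.random.randint(2, 6)
    else:                                            # cluster, Eqs. (7)/(9)
        size = np.random.randint(10, n + 1)
    idx = np.random.choice(n, size, replace=False)
    return x_food + CF * (X[idx].mean(axis=0) - X[i]) * np.random.randn()
```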

6.3. Food Storage

As well as searching for food and attacking prey, red-billed blue magpies store excess food in tree holes or other hidden places for future consumption, ensuring a steady supply of food in times of shortage.
RBMO imitates the storing-food behavior of red-billed blue magpies, and the formula for storing food is shown in Equation (10).
$$X_i(t+1) = \begin{cases} X_i(t+1), & \text{if } fitness_{old}^{\,i} > fitness_{new}^{\,i} \\ X_i(t), & \text{otherwise} \end{cases} \tag{10}$$
where $fitness_{old}^{\,i}$ and $fitness_{new}^{\,i}$ denote the fitness values of the $i$th red-billed blue magpie before and after the position update, respectively; the position update is accepted only when it improves the fitness.

6.4. Initialization

Like many metaheuristic algorithms, RBMO employs pseudo-random numbers for initializing the population. Although this method is straightforward, it frequently leads to limited diversity and an uneven distribution of solutions, potentially reducing search efficiency. Figure 1 illustrates a population initialized using the pseudo-random approach.
$$X_i = (ub - lb) \cdot Rand + lb \tag{11}$$
where $X_i$ represents the randomly generated population, $ub$ and $lb$ denote the upper and lower bounds of the problem, and $Rand$ is a random value between 0 and 1.
To better illustrate the randomness and distribution pattern of the initialization process, Figure 1 uses a normalized domain of [0, 1] along each axis, independent of the specific problem’s actual bounds. The coordinates shown in the figure correspond to the unscaled values of $Rand$, not the final scaled positions $X_i$. This figure is intended solely as a visualization of the randomness and spatial diversity of the initial population in 2D and 3D spaces.

6.5. Workflow of RBMO and Its Analysis

The workflow of RBMO is provided in Figure 2.
As a novel biologically inspired metaheuristic algorithm, RBMO had significant advantages in global search, population diversity, and simplicity of implementation. The update mechanism of RBMO increased the breadth and diversity of the search by randomly selecting the mean values of multiple individuals for updating, which enabled it to cover a larger search space, effectively avoiding falling into a local optimum.
Secondly, the algorithm structure of RBMO was simple and easy to implement, making it suitable for the rapid solution of different problems across various fields. In addition, by randomly selecting individuals and mean updating strategies, RBMO could adapt to different types and sizes of optimization problems, demonstrating good stability and adaptability.
However, RBMO also had some limitations, particularly in its local search ability and convergence speed. As the attacking-prey strategy of RBMO was relatively monotonous, it led to insufficient local search ability when facing complex and multi-peak problems, making it difficult to approach the optimal solution quickly. Additionally, the convergence speed of RBMO was relatively slow, requiring more iterations to find a better solution in the optimization process. These shortcomings limited the effectiveness and efficiency of RBMO’s application to some extent. To address these issues, enhance the local search capability, and accelerate the convergence speed, we proposed an enhanced RBMO, called MRBMO.

7. MRBMO

7.1. Good Nodes Set Initialization

The original RBMO uses the pseudo-random number method to initialize the population; this method is simple, direct and random, but it has some drawbacks. The randomly generated population, as shown in Figure 1, is not uniformly distributed throughout the solution space; it is densely clustered in some areas and sparse in others, which leads to poor coverage of the whole search space and low diversity of the population. Therefore, some researchers proposed using chaotic mapping, random walks, Gaussian distributions and other methods to initialize the population. Later, some scholars proposed using Good Nodes Set initialization [28].
The theory of Good Nodes Set was first proposed by the famous Chinese mathematician Loo-keng Hua. Good Nodes Set is a method used to cover a multidimensional space uniformly, aiming to improve the quality of initialized populations. Compared with the traditional initialization method, Good Nodes Set initialization, as shown in Figure 3, can better distribute the nodes and improve the diversity of the population, thus providing better initial conditions for the optimization algorithm. This method is also effective in high-dimensional spaces.
Let $U^D$ denote the unit cube in $D$-dimensional Euclidean space, and let $r$ be a given parameter. The canonical node set $P_r(M)$ is defined as in Equation (12):
$$P_r(M) = \left\{ p(k) = \left( \{ k r \}, \{ k r^2 \}, \ldots, \{ k r^D \} \right) \;\middle|\; k = 1, 2, \ldots, M \right\} \tag{12}$$
where $\{x\}$ represents the fractional part of $x$; $M$ is the number of points; $r$ is a deviation parameter greater than zero.
This set is referred to as the Good Nodes Set, with each element $p(k)$ termed a Good Node. Given the lower and upper bounds $x_{min}^i$ and $x_{max}^i$ of the $i$th dimension in the search space, the mapping from the Good Nodes Set to the actual search space is expressed as follows:
$$x_k^i = x_{min}^i + p^i(k) \cdot \left( x_{max}^i - x_{min}^i \right) \tag{13}$$
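The following Python sketch illustrates Good Nodes Set initialization as given by Equations (12) and (13). The paper only requires the deviation parameter $r$ to be greater than zero, so the default generator used below is an assumed choice, not necessarily the one used by the authors.

```python
import numpy as np

def good_nodes_set_init(M, D, lb, ub, r=None):
    """Good Nodes Set initialization (Equations (12)-(13)); illustrative sketch."""
    if r is None:
        r = 2.0 * np.cos(2.0 * np.pi / (2 * D + 3))  # assumed deviation parameter (r > 0)
    k = np.arange(1, M + 1).reshape(-1, 1)           # k = 1, ..., M
    d = np.arange(1, D + 1).reshape(1, -1)           # dimension index 1, ..., D
    p = np.mod(k * np.power(r, d), 1.0)              # fractional parts {k * r^d}, Eq. (12)
    return lb + p * (ub - lb)                        # map good nodes to the bounds, Eq. (13)
```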

7.2. Enhanced Search-for-Food Strategy

In the original RBMO, the Search-for-food phase relied on random numbers and lacked dynamic adjustment, which resulted in large random fluctuations between the population individuals. Particularly in the later stages of iteration, individuals may still explore with large step sizes, leading to a decrease in search efficiency and affecting convergence accuracy. The whole Search-for-food Strategy is modeled below.
$$X_i(t+1) = X_i(t) + \left( \frac{1}{p} \sum_{m=1}^{p} X_m(t) - X_{rs}(t) \right) \cdot Rand_1, \quad rand < \varepsilon \tag{14}$$
$$X_i(t+1) = X_i(t) + \left( \frac{1}{q} \sum_{m=1}^{q} X_m(t) - X_{rs}(t) \right) \cdot Rand_2, \quad rand \ge \varepsilon \tag{15}$$
where $rand$ is a random number in the range [0, 1]; the balance coefficient $\varepsilon$ is usually set to 0.5.
Therefore, this paper introduced a nonlinear factor k, which enabled more thorough exploration in the early stages and more refined development in the later stages. The variation process of k is shown in Figure 4. And the calculation of the nonlinear factor k is as follows:
$$k = 1 - \left( \frac{t}{T} \right)^2 \tag{16}$$
where $t$ represents the current number of iterations; $T$ represents the maximum number of iterations.
In the early iterations, the value of k is close to 1, which enhanced the large step movements between the population individuals, thus improving global exploration ability. In the later iterations, the value of k approached 0, limiting the movement range of individuals and gradually transitioning towards local exploitation. By introducing the nonlinear factor k, the search intensity of RBMO dynamically decayed, naturally balancing exploration and exploitation, thereby improving convergence stability. The Enhanced Search-for-food Strategy addressed the issue of excessive randomness in the search phase of the original algorithm. The Enhanced Search-for-food Strategy is modeled below.
$$X_i(t+1) = X_i(t) + k \cdot \left( \frac{1}{p} \sum_{m=1}^{p} X_m(t) - X_{rs}(t) \right), \quad rand < \varepsilon \tag{17}$$
$$X_i(t+1) = X_i(t) + k \cdot \left( \frac{1}{q} \sum_{m=1}^{q} X_m(t) - X_{rs}(t) \right), \quad rand \ge \varepsilon \tag{18}$$
where $k$ is the proposed nonlinear factor; $rand$ is a random number in the range [0, 1]; the balance coefficient $\varepsilon$ is usually set to 0.5.
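A minimal sketch of the Enhanced Search-for-food update (Equations (16)–(18)), again assuming an (n, D) NumPy population array:

```python
import numpy as np

def enhanced_search_for_food(X, i, t, T, eps=0.5):
    """Enhanced Search-for-food update (Equations (16)-(18)).

    Sketch: the random step of the original strategy is replaced by the
    nonlinear decay factor k proposed in this section."""
    n, _ = X.shape
    k = 1.0 - (t / T) ** 2                           # nonlinear factor, Eq. (16)
    if np.random.rand() < eps:
        size = np.random.randint(2, 6)               # small group, Eq. (17)
    else:
        size = np.random.randint(10, n + 1)          # cluster, Eq. (18)
    idx = np.random.choice(n, size, replace=False)
    x_rs = X[np.random.randint(n)]                   # randomly selected search agent
    return X[i] + k * (X[idx].mean(axis=0) - x_rs)
```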

7.3. Siege-Style Attacking-Prey Strategy

7.3.1. Inspiration of HHO

The original RBMO algorithm was prone to getting stuck in local optima because the average position induced a contraction effect on the dynamic range of the population, limiting further exploration of the search space. Furthermore, in the original RBMO algorithm, the strategy for attacking prey relied on the average position of the red-billed blue magpie individuals. This updating mechanism may lead to a decrease in population diversity, resulting in slower convergence and reduced accuracy and efficiency of the search, thus hindering further optimization. Therefore, inspired by the Harris Hawk Optimization (HHO) algorithm, this paper introduced the concept of HHO into the prey attack phase of RBMO, proposing the Siege-style Attacking-prey Strategy. The Harris Hawk Optimization (HHO), introduced by Ali Asghar Heidari et al. in 2019, is a novel bio-inspired optimization algorithm [7]. The HHO algorithm simulated the diverse hunting strategies of Harris hawks, allowing HHO to perform efficient global search in a larger solution space while reducing the likelihood of falling into local optima. At the development stage, the HHO algorithm fine-tuned the position of prey to perform local search, thereby finding better solutions in the local regions of the solution space. We drew inspiration from the following position updating strategy of HHO.
$$X_i(t+1) = \Delta X_i(t) - E \cdot \left| J \cdot X_{rabbit}(t) - X_i(t) \right| \tag{19}$$
$$\Delta X_i(t) = X_{rabbit}(t) - X_i(t) \tag{20}$$
where $X_i$ is the position of the $i$th Harris hawk search agent; $E$ is the escaping energy of the prey; $J$ is the prey’s random jump strength while escaping; and $X_{rabbit}$ represents the position of the prey, which indicates the current optimal solution.
The Siege-style Attacking-prey Strategy integrated the ideas of HHO, introducing the absolute difference between the prey’s position X f o o d and the red-billed blue magpie individual’s current position X i ( t ) , combined with the step size C F to directly adjust the individual’s position update step size. This mechanism helped to guide the red-billed blue magpie individuals more rapidly toward better solutions, refining the local development capability of RBMO in later stages, thereby improving solution accuracy and accelerating convergence speed. Additionally, through the combination of a random factor and nonlinear scaling, the Siege-style Attacking-prey Strategy maintained population diversity during the development phase. With the dynamically adjusted step size C F , the attacking behavior of the red-billed blue magpie individuals was able to adapt to the different search demands at various stages of iteration, enhancing exploration in the early stages and reinforcing exploitation in the later stages. This avoided premature convergence to a single solution and improved the robustness of the algorithm.

7.3.2. Levy Flight

The concept of Levy flight originates from the work of mathematician Paul Levy in the 1920s. Inspired by the foraging behavior observed in nature and the jump phenomena in complex systems, Levy flight combines frequent short-range movements with occasional long-range jumps, resembling the foraging trajectories of predators such as sharks, birds, and insects. It is a stochastic walk model based on the Levy distribution, with a defining characteristic of alternating between small local steps and rare, large-distance jumps. This search pattern enables individuals to avoid being trapped in local optima while preserving the capability for global exploration. The long jumps allow the algorithm to escape from local regions, significantly alleviating the problem of premature convergence in complex optimization problems. Meanwhile, the predominance of short steps enables fine-tuned search within promising local regions [13]. Figure 5 presents a two-dimensional simulation of Levy flight generated using the Mantegna method. In this visualization, each jump is rendered in a different color to intuitively convey the heterogeneity of step sizes. This coloring approach allows readers to clearly distinguish between the frequent, short-distance movements and the occasional, long-distance jumps that characterize Levy flight behavior, thereby highlighting its dual exploration–exploitation capability. The step length L ( s ) in Levy flight follows the Levy distribution, and is computed as follows:
$$L(s) = \frac{u}{|v|^{\frac{1}{\beta}}} \tag{21}$$
where $u$ and $v$ are normally distributed and $\beta = 1.5$:
$$u \sim N(0, \sigma_u^2) \tag{22}$$
$$v \sim N(0, 1) \tag{23}$$
The calculation of $\sigma_u$ is given by:
$$\sigma_u = \left( \frac{\Gamma(1+\beta) \cdot \sin\left( \frac{\pi \beta}{2} \right)}{\Gamma\left( \frac{1+\beta}{2} \right) \cdot \beta \cdot 2^{\frac{\beta - 1}{2}}} \right)^{\frac{1}{\beta}} \tag{24}$$
One of the Siege-style Attacking-prey Strategies is modeled in Equation (25).
$$X_i(t+1) = \Delta X(t) - CF \cdot \left| r_1 \cdot X_{food}(t) - X_i(t) \right| \cdot L(s) \tag{25}$$
where $X_i$ represents the $i$th individual; $X_{food}$ represents the position of the food, which indicates the current optimal solution; $CF$ is the step control factor, calculated as in Equation (6); $r_1$ is a random number in the range [0, 1]; $\Delta X(t)$ is calculated as in Equation (20); $L(s)$ is the step size of the Levy flight, calculated as in Equation (21).
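The Levy step used in Equation (25) can be generated with the Mantegna method exactly as in Equations (21)–(24); a short Python sketch follows.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(D, beta=1.5):
    """Levy flight step via the Mantegna method (Equations (21)-(24))."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, D)            # u ~ N(0, sigma_u^2), Eq. (22)
    v = np.random.normal(0.0, 1.0, D)                # v ~ N(0, 1), Eq. (23)
    return u / np.abs(v) ** (1.0 / beta)             # L(s), Eq. (21)
```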

7.3.3. Prey-Position-Based Enhanced Guidance

In the original attacking-prey phase, the movement of the red-billed blue magpie individuals relied on both the position of the prey and the average position of the randomly selected red-billed blue magpies. This movement strategy introduced some randomness and bias, which caused individuals to become trapped near suboptimal solutions and prevented them from fully utilizing information about the global optimum, hindering the local exploitation of the RBMO. Therefore, we proposed Prey-position-based Enhanced Guidance. Due to the success of the prey-position-based guidance strategy in HHO and GWO, in this approach, we replaced the average position of the randomly selected red-billed blue magpies with the position of the prey, directly guiding the red-billed blue magpie individuals towards the prey. This helped to reduce the gap between the individuals and the optimal solution. Prey-position-based Enhanced Guidance strengthened the dependency on the optimal position, mitigated the degradation of solution quality due to randomness, and enhanced the concentration of local exploitation. Therefore, Prey-position-based Enhanced Guidance, one of the Siege-style Attacking-prey Strategies, is modeled in Equation (26).
$$X_i(t+1) = X_{food}(t) + CF \cdot \left( X_{food}(t) - X_i(t) \right) \cdot r_2 \tag{26}$$
where $X_i$ represents the $i$th individual; $X_{food}$ represents the position of the food, which indicates the current optimal solution; $CF$ is the step control factor, calculated as in Equation (6); $r_2$ is a random number in the range [0, 1].
The entire Siege-style Attacking-prey Strategy is modeled below:
$$X_i(t+1) = \Delta X(t) - CF \cdot \left| r_1 \cdot X_{food}(t) - X_i(t) \right| \cdot L(s), \quad rand < \varepsilon \tag{27}$$
$$X_i(t+1) = X_{food}(t) + CF \cdot \left( X_{food}(t) - X_i(t) \right) \cdot r_2, \quad rand \ge \varepsilon \tag{28}$$
where $rand$ is a random number in the range [0, 1]; the balance coefficient $\varepsilon$ is set to 0.5.
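Combining the two branches, the complete Siege-style Attacking-prey update (Equations (25)–(28)) can be sketched as follows; it reuses levy_step from the previous snippet, and x_i and x_food are one-dimensional position vectors.

```python
import numpy as np

def siege_attack(x_i, x_food, t, T, eps=0.5):
    """Siege-style Attacking-prey update (Equations (25)-(28)); illustrative sketch."""
    D = x_i.size
    CF = (1 - t / T) ** (2 * t / T)                  # step control factor, Eq. (6)
    if np.random.rand() < eps:                       # Levy-guided siege, Eqs. (25)/(27)
        r1 = np.random.rand()
        delta = x_food - x_i                         # Delta X(t), Eq. (20)
        return delta - CF * np.abs(r1 * x_food - x_i) * levy_step(D)
    r2 = np.random.rand()                            # prey-position-based guidance, Eqs. (26)/(28)
    return x_food + CF * (x_food - x_i) * r2
```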

7.4. Lens-Imaging Opposition-Based Learning

Opposition-Based Learning (OBL) is a concept proposed by Tizhoosh in 2005, which expands the search space by generating the ‘opposite solution’ of the current solution, thus facilitating global search. The fundamental idea of OBL is to map the current solution X to an opposite solution X , which lies in the opposite region of the solution space (the relative opposite direction of the solution space). Lens Imaging Opposition-Based Learning (LIOBL) is an extension of OBL, further enhancing the effect of opposition-based learning by incorporating the concept of ‘lens imaging’, as shown in Figure 6 [30]. The figure illustrates how rays originating from the object are refracted through the lens to form an image. Compared to OBL, LIOBL introduces lens imaging during the generation of opposite solutions, which results in individuals with greater diversity, enhancing the coverage of the solution space and preventing the population from getting trapped in local optima.
In the RBMO, the Food Storage phase is relatively simple; it passively replaces the current solution with a previous one when the solution is poor. In practice, this leads to early convergence of the population, making it difficult to escape local optima. Furthermore, this strategy lacks the ability for comprehensive exploration of the solution space, hindering the discovery of better solutions. In contrast to the original Food Storage strategy, LIOBL is more forward-looking [30]. Therefore, we introduced the concept of LIOBL into the Food Storage phase. The improved Food Storage strategy consisted of two steps. Firstly, we calculated the opposite solutions by Equation (29).
$$X_i' = \frac{ub + lb}{2} + \frac{ub + lb}{2\eta} - \frac{X_i}{\eta} \tag{29}$$
where $X_i$ is the given solution and $X_i'$ is its opposite solution; $ub$ and $lb$ are the upper and lower bounds of the domain of definition, respectively; $\eta$ is the scaling factor of lens imaging, which is set to 0.5.
Then, the better individuals were retained for the next generation of the population through greedy selection, increasing the proportion of elite individuals in the population, as shown in Equation (30).
$$X_i(t+1) = \begin{cases} X_i', & \text{if } fitness(X_i) > fitness(X_i') \\ X_i, & \text{if } fitness(X_i) \le fitness(X_i') \end{cases} \tag{30}$$
where $fitness(X_i)$ indicates the fitness value of $X_i$, and $fitness(X_i')$ indicates the fitness value of the opposite solution $X_i'$.
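The improved Food Storage phase, i.e., lens-imaging opposition followed by greedy selection (Equations (29) and (30)), can be sketched as below; the row-wise fitness evaluator and the clipping of opposite solutions to the bounds are assumptions of this illustration.

```python
import numpy as np

def liobl_update(X, pop_fitness, lb, ub, eta=0.5):
    """Lens-Imaging Opposition-Based Learning step (Equations (29)-(30)).

    Sketch: pop_fitness evaluates an (n, D) population row-wise and returns
    an (n,) array of fitness values."""
    X_opp = (ub + lb) / 2 + (ub + lb) / (2 * eta) - X / eta   # opposite solutions, Eq. (29)
    X_opp = np.clip(X_opp, lb, ub)                            # keep opposites inside the domain (assumed)
    f_cur, f_opp = pop_fitness(X), pop_fitness(X_opp)
    keep_opp = f_opp < f_cur                                  # greedy selection, Eq. (30)
    return np.where(keep_opp[:, None], X_opp, X)
```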

7.5. Time Complexity Analysis

For RBMO, the initialization has a time complexity of $O(N \cdot D)$. In each iteration, both food storage and position updates require $O(N \cdot D)$ operations, resulting in an overall per-iteration complexity of $O(N \cdot D)$. After $T$ iterations, the total computational cost can be expressed as:
$$\text{Total time complexity of RBMO} = O(N \cdot D) + T \cdot O(N \cdot D) = O(T \cdot N \cdot D)$$
Similarly, for MRBMO, the initialization phase also has a complexity of $O(N \cdot D)$. Within each iteration, food storage, LIOBL processing, and position updates each contribute $O(N \cdot D)$, leading to the same per-iteration complexity of $O(N \cdot D)$. After $T$ iterations, the total computational cost is given by:
$$\text{Total time complexity of MRBMO} = O(N \cdot D) + T \cdot O(N \cdot D) = O(T \cdot N \cdot D)$$
In conclusion, both RBMO and MRBMO share the same overall time complexity of $O(T \cdot N \cdot D)$.

7.6. Workflow of MRBMO

The workflow of MRBMO is provided in Figure 7.
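To show how the components fit together, the following simplified Python outline mirrors the workflow in Figure 7 by wiring up the sketches given in the previous subsections (good_nodes_set_init, enhanced_search_for_food, siege_attack, and liobl_update). It is an illustrative outline rather than the authors' implementation; in particular, the greedy acceptance applied after each phase in the full algorithm is omitted here for brevity.

```python
import numpy as np

def mrbmo(obj, lb, ub, D, N=30, T=500, eps=0.5):
    """Simplified MRBMO main loop (an outline of Figure 7, not the authors' code)."""
    def pop_fitness(P):
        return np.apply_along_axis(obj, 1, P)             # row-wise objective evaluation
    X = good_nodes_set_init(N, D, lb, ub)                  # Good Nodes Set initialization
    fit = pop_fitness(X)
    best, best_fit = X[fit.argmin()].copy(), fit.min()
    for t in range(1, T + 1):
        for i in range(N):
            X[i] = np.clip(enhanced_search_for_food(X, i, t, T, eps), lb, ub)
            X[i] = np.clip(siege_attack(X[i], best, t, T, eps), lb, ub)
        X = liobl_update(X, pop_fitness, lb, ub)           # LIOBL replaces Food Storage
        fit = pop_fitness(X)
        if fit.min() < best_fit:                           # track the food (best) position
            best, best_fit = X[fit.argmin()].copy(), fit.min()
    return best, best_fit
```

For example, under these assumptions, calling mrbmo(lambda x: np.sum(x**2), lb=-100, ub=100, D=30) minimizes the sphere function F1.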

8. Performance Test

The experiments were conducted on Windows 11 (64-bit) with an Intel(R) Core(TM) i5-8300H CPU @ 2.30 GHz and 8 GB of RAM; the simulation platform was MATLAB R2023a.
To validate the performance and effectiveness of MRBMO, the following four experiments were designed to test the algorithms on 23 classical benchmark functions; the simulation experiments for engineering design optimization and antenna S-parameter optimization are presented in the next section:
  • Each of the four improvement strategies was removed from MRBMO in turn, and an ablation study was performed on the 23 classical benchmark functions of the CEC2005 test suite listed in Table 2 [31].
  • A qualitative analysis experiment was conducted to assess the performance, robustness, and exploration–exploitation balance of MRBMO across various benchmark functions. This evaluation focused on examining convergence behavior, population diversity, and the algorithm’s ability to balance exploration and exploitation in different problem types.
  • MRBMO, traditional RBMO and other outstanding metaheuristic algorithms were examined on the classical benchmark functions with the dimension D = 30.
  • MRBMO, traditional RBMO and other outstanding metaheuristic algorithms were examined on the classical benchmark functions with the higher dimensions D = 50 and D = 100.

8.1. Ablation Study

This paper designed an ablation study to evaluate the effectiveness of the various improvement strategies applied to RBMO. We defined the following variants: MRBMO1 is MRBMO with the Good Nodes Set Initialization removed; MRBMO2 is MRBMO with the Enhanced Search-for-food Strategy removed; MRBMO3 is MRBMO with the Siege-style Attacking-prey Strategy removed; and MRBMO4 is MRBMO with LIOBL removed. To fairly compare the effectiveness of each strategy, we tested these variants on the 23 benchmark functions. We set the maximum number of iterations to T = 500 and the population size to N = 30. Each algorithm was run independently 30 times on each of the 23 functions, and the results are shown in Figure 8.
The experimental results show that each improvement strategy significantly enhances RBMO’s performance. The Good Nodes Set Initialization distributed the population evenly in the solution space, improving population quality and aiding in solving high-dimensional multi-modal functions such as F5, F8, F12 and F13. As shown in F6, F7 and F13, the Enhanced Search-for-food Strategy traded a slight reduction in convergence speed for improved optimization ability on complex multi-modal functions. As shown in F1–F4 and F9–F13, the Siege-style Attacking-prey Strategy strengthened the exploitation phase, enhancing local search capability and convergence speed. Replacing the Food Storage mechanism with LIOBL allowed population updates after the exploitation phase, increasing diversity while preserving elite individuals, which boosted exploration and reduced entrapment in local optima.

8.2. Qualitative Analysis Experiment

In the qualitative analysis experiment, we applied MRBMO to the benchmark functions and recorded the search history of the red-billed blue magpie individuals, the exploration–exploitation percentages of MRBMO during the iterations, and the population diversity of MRBMO, so that we could comprehensively evaluate the performance, robustness and exploration–exploitation balance of MRBMO on different types of problems.
In this experiment, the maximum number of iterations was set to T = 500 and the population size was N = 30. The search history of the red-billed blue magpie individuals, the proportions of exploration and exploitation, population diversity, and iteration curves were recorded and are presented in Figure 9, Figure 10, Figure 11 and Figure 12. From the figures, it is evident that the red-billed blue magpie individuals in MRBMO demonstrate a well-distributed search within the solution space, indicating the effectiveness of the Good Nodes Set Initialization. For F8, the global optimal solution was located in the upper-right corner of the solution space, posing significant challenges for the algorithm’s ability to escape local optima. The Siege-style Attacking-prey Strategy facilitated detailed exploration around the region with the potential solution, ultimately leading to the identification of the optimal solution for F8. Additionally, the introduction of Levy flight allowed MRBMO to consistently escape local optima and maintain high population diversity, even when addressing complex combinatorial problems such as F15–F23. For uni-modal functions, the results showed that the exploitation proportion of MRBMO increased rapidly during the iterative process, demonstrating strong exploitation capabilities. For complex functions like F7, F8 and F15, the exploration proportion decreased gradually in the early iterations, reflecting MRBMO’s robust global exploration ability. In the later stages of the iterations, the exploitation proportion increased significantly, indicating strong local exploitation capabilities.

8.3. Superiority Comparative Test with Dimension D = 30

To further verify the superiority of MRBMO, we selected the Attraction-Repulsion Optimization Algorithm (AROA) [32], Grey Wolf Optimizer (GWO) [33], Rime Optimization Algorithm (RIME) [34], Whale Optimization Algorithm (WOA) [6], Remora Optimization Algorithm (ROA) [35], Harris Hawks Optimization (HHO) [7], MSWOA [36], MWOA [37], and RBMO for superiority comparative experiments. The parameter configurations for all algorithms are listed in Table 3. The population size was set to N = 30, and the number of iterations to T = 500. Each algorithm was executed independently 30 times. For performance evaluation, the average fitness (Ave), standard deviation (Std), p-values from the Wilcoxon rank-sum test, and Friedman rankings across the 30 runs were recorded. The corresponding results are displayed in Figure 13 and Table 4 and Table 5.
From the experiment results, it could be seen that MRBMO converged to the optimal value on most functions, with a standard deviation of zero or close to zero, demonstrating strong stability, robustness and optimization capabilities. For problems like F5–F8 and F20–F23, which were prone to local optima, the Good Nodes Set initialization allowed the population of MRBMO to be evenly distributed in the solution space, significantly improving the population quality. As a result, MRBMO could escape local optima and achieve better solutions for these types of problems. The incorporation of Enhanced Search-for-food Strategy and Siege-style Attacking-prey Strategy contributed to higher accuracy in solving complex problem like F5–F6 and F12–F13. Siege-style Attacking-prey Strategy helped MRBMO achieve higher convergence speed and accuracy, enabling MRBMO to find the optimal solutions for F1–F4 and F9–F11 within a limited number of iterations.
In the non-parametric tests, the statistical results shown in Table 5 indicated that most p-values from the Wilcoxon rank-sum tests were less than 0.05, suggesting significant differences between the optimization results of MRBMO and the nine comparison algorithms. On F1 and F3, there was no significant difference between MRBMO and MWOA. On F9–F11, there was no significant difference between MRBMO and HHO, MSWOA and MWOA; they all found the optimal solutions within a limited number of iterations when solving these functions. There was no significant difference between MRBMO and RBMO on F17 and F19, because MRBMO not only found the optimal solutions for the complex functions F16–F19 every time but also had a standard deviation equal to or slightly smaller than that of RBMO. This indicated that MRBMO, while maintaining RBMO’s ability to solve complex functions, also achieved faster convergence speed and higher accuracy. There was also a significant difference between MRBMO and the remaining algorithms (AROA, GWO, RIME, WOA and ROA). These results further corroborated the reliability of the superiority test. According to these results, MRBMO ranked first in terms of the average Friedman value among the ten algorithms, indicating its superior performance. This consistent performance across multiple functions highlighted the effectiveness and robustness of MRBMO.

8.4. Superiority Comparative Test with High Dimensions of 50 and 100

Among the 23 classical benchmark functions, F1–F13 are scalable in dimensionality, whereas F14–F23 are defined with fixed dimensions. To assess MRBMO’s capability in solving problems of varying dimensionality and complexity, the dimensions of the scalable functions (F1–F13) were extended to 50 and 100, while the fixed-dimension functions (F14–F23) remained unchanged. MRBMO was evaluated against AROA, GWO, RIME, WOA, ROA, HHO, MSWOA, MWOA, and RBMO on these benchmarks under two high-dimensional settings: D = 50 and D = 100 . All algorithm parameters are summarized in Table 3, with the population size and iteration count set to N = 30 and T = 500 , respectively. Each algorithm was independently executed 30 times on each function. Performance was analyzed using the Wilcoxon rank-sum test (p-values) and Friedman rankings, and the results are presented in Table 6.
The findings demonstrate that MRBMO exhibits strong performance in high-dimensional optimization tasks, outperforming several competing algorithms in comparative evaluations. As shown in Table 6, in the experiments with dimensions of 50 and 100, MRBMO achieved first place in the Friedman ranking. Moreover, in the Wilcoxon rank-sum test, there was a significant difference between MRBMO and other metaheuristic algorithms at the dimensions of 50 and 100. This provided sufficient evidence to demonstrate that MRBMO still possessed strong optimization capability when handling optimization problems of different dimensions, and it showed a strong competitive edge compared to other excellent basic metaheuristic algorithms.
Table 7 presents a comprehensive comparison of MRBMO against competing algorithms using the metric known as overall effectiveness (OE). In this table, w, t, and l, respectively, represent the number of wins, ties, and losses. The OE for each method is determined using Equation (31) [38].
$$OE = \frac{N - L}{N} \times 100\% \tag{31}$$
where N denotes the total number of evaluations, and L indicates the number of times an algorithm underperformed (i.e., losses).
Achieving an OE score of 95.65%, MRBMO emerged as the most effective among the evaluated approaches. Moreover, MRBMO demonstrated strong competitiveness when compared with state-of-the-art (SOTA) methods across benchmark problems of varying dimensions. These findings confirm the robustness and adaptability of MRBMO in addressing optimization challenges across different problem scales.

9. Simulation Experiments

To validate the ability of MRBMO to solve real-world problems, we used four engineering design optimization problems to test the performance of MRBMO, in order to verify the effectiveness and applicability of MRBMO in engineering design optimization. We also used an antenna S-parameter optimization test suite to test the performance of MRBMO, in order to quickly validate the effectiveness and applicability of MRBMO in antenna S-parameter optimization.

9.1. Engineering Design Optimization

To manage constraints in the engineering design optimization process, we adopted a transformation strategy based on the Penalty Function approach. This technique reformulates the original problem by embedding constraint violations directly into the objective function through additional penalty terms. As a result, the constrained problem is converted into an unconstrained one, facilitating its resolution. Whenever a decision variable x i breaches a given constraint, a high cost is incurred in the modified objective, effectively steering the optimization algorithm toward feasible regions of the search space.
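As a brief illustration of this transformation, the sketch below penalizes squared violations of constraints of the form g_i(x) ≤ 0; the callable interface and the penalty factor shown here are assumptions of this illustration.

```python
def penalized_objective(f, constraints, x, factor=1e3):
    """Penalty-function transformation of a constrained problem (sketch).

    Each element of `constraints` is a callable returning g_i(x), with
    feasibility meaning g_i(x) <= 0."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + factor * violation
```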

9.1.1. Pressure Vessel Design

The structural configuration of the pressure vessel is illustrated in Figure 14, featuring sealed ends enclosed by caps [14], with one cap being hemispherical in shape. In this design, $x_1$ and $x_2$ denote the thicknesses of the cylindrical body and the end cap, respectively. The variable $x_3$ corresponds to the inner diameter of the cylindrical section, while $x_4$ specifies its length, excluding the hemispherical head. These four parameters, $x_1$, $x_2$, $x_3$, and $x_4$, serve as the decision variables in the optimization of the vessel. The objective function, along with the associated four design constraints, is formulated as follows:
  • Variable:
    $x = [x_1, x_2, x_3, x_4]$
  • Minimize:
    $f(x) = 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^2 + 3.1661\,x_1^2 x_4 + 19.84\,x_1^2 x_3 + punishment$
  • Subject to:
    $g_1(x) = -x_1 + 0.0193\,x_3 \le 0$;
    $g_2(x) = -x_2 + 0.00954\,x_3 \le 0$;
    $g_3(x) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1296000 \le 0$;
    $g_4(x) = x_4 - 240 \le 0$;
  • Variable range:
    $1 \le x_1 \le 99$; $1 \le x_2 \le 99$; $10 \le x_3 \le 90$; $10 \le x_4 \le 200$;
    where:
    $punishment = 10^3 \sum_{i=1}^{4} \max(0, g_i(x))^2$
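As a concrete example, the penalized pressure vessel objective formulated above can be encoded as follows (a sketch; the constraint signs follow the standard benchmark formulation). The piston lever and robot gripper objectives in the following subsections can be encoded analogously.

```python
import numpy as np

def pressure_vessel(x, factor=1e3):
    """Penalized pressure-vessel objective as formulated above (illustrative sketch)."""
    x1, x2, x3, x4 = x
    cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)
    g = [
        -x1 + 0.0193 * x3,                                              # g1
        -x2 + 0.00954 * x3,                                             # g2
        -np.pi * x3 ** 2 * x4 - (4.0 / 3.0) * np.pi * x3 ** 3 + 1296000.0,  # g3
        x4 - 240.0,                                                     # g4
    ]
    punishment = factor * sum(max(0.0, gi) ** 2 for gi in g)
    return cost + punishment
```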
In this work, we benchmarked the performance of MRBMO against several established metaheuristic algorithms, including the Attraction-Repulsion Optimization Algorithm (AROA) [32], Grey Wolf Optimizer (GWO) [33], Rime Optimization Algorithm (RIME) [34], Whale Optimization Algorithm (WOA) [6], Remora Optimization Algorithm [35], Harris Hawks Optimization (HHO) [7], as well as enhanced variants such as MSWOA [36], MWOA [37], and RBMO. A consistent experimental setup was maintained, with all algorithms configured as outlined in Table 3. The number of iterations was fixed at T = 500, and the population size was set to N = 30. To ensure statistical robustness, each method was independently executed 30 times on the pressure vessel design task. The performance metrics, including average fitness (Ave) and standard deviation (Std), were recorded for evaluation. Experimental outcomes are presented in Figure 15 and Table 8.
As can be observed from Table 8, MRBMO consistently outperformed other methods in terms of both accuracy and stability on the pressure vessel design task. These results underscore MRBMO’s effectiveness in solving such engineering optimization problems.

9.1.2. Piston Lever Design

The piston lever, illustrated in Figure 16, represents a representative case in classical engineering optimization problems [14]. Its design process involves tuning several geometric and mechanical variables to reduce material usage or structural mass, while adhering to constraints such as mechanical strength and stability. The goal is to strike an optimal trade-off between cost-effectiveness and structural integrity, making this problem highly relevant in fields like mechanical system design, automotive engineering, and other industrial applications focused on lightweight, high-performance components.
In this context, the optimization objective is to minimize the material consumption of the piston lever, subject to constraints that ensure sufficient load-bearing capacity and functional reliability. The structure is characterized by a set of design variables that define critical geometric relationships and determine its overall performance. In this optimization task, the piston lever comprises several interconnected structural components with the following key attributes: one end is anchored, while the opposite end experiences an external load. Its mechanical behavior is significantly affected by geometric parameters such as radius and length, with each of these features being regulated by specific decision variables. According to the geometric configuration, the variables $x_1$ to $x_4$ are defined as follows. $x_1$ and $x_2$ represent the primary dimensions, length and width, of the structure, which shape the overall lever arm. $x_3$ denotes the radius of the cross-section at the force application point, influencing the distribution of stress. $x_4$ corresponds to a dimensional parameter associated with the support location.
The optimization goal in the Piston Lever design task is formulated as follows:
  • Variable:
    $x = [x_1, x_2, x_3, x_4]$
  • Minimize:
    $f(x) = 0.25\,\pi x_3^2 (L_2 - L_1) + punishment$
  • Subject to:
    $g_1(x) = Q L \cos\theta - R F \le 0$;
    $g_2(x) = Q (L - x_4) - M_{max} \le 0$;
    $g_3(x) = 1.2 (L_2 - L_1) - L_1 \le 0$;
    $g_4(x) = \frac{x_3}{2} - x_2 \le 0$;
  • Variable range:
    $0.05 \le x_1 \le 500$; $0.05 \le x_2 \le 500$; $0.05 \le x_3 \le 120$; $0.05 \le x_4 \le 500$;
    where:
    $Q = 10000$; $P = 1500$; $L = 240$; $M_{max} = 1.8 \times 10^6$;
    $L_1 = \sqrt{(x_4 - x_2)^2 + x_1^2}$; $L_2 = \sqrt{(x_4 \sin\theta + x_1)^2 + (x_2 - x_4 \cos\theta)^2}$;
    $R = \dfrac{\left| -x_4 (x_4 \sin\theta + x_1) + x_1 (x_2 - x_4 \cos\theta) \right|}{\sqrt{(x_4 - x_2)^2 + x_1^2}}$;
    $F = 0.25\,\pi P x_3^2$;
    $punishment = 10^3 \sum_{i=1}^{4} \max(0, g_i(x))^2$
This study compared MRBMO with Attraction-Repulsion Optimization Algorithm (AROA) [32], Grey Wolf Optimizer (GWO) [33], Rime Optimization Algorithm (RIME) [34], Whale Optimization Algorithm (WOA) [6], Remora Optimization Algorithm [35], Harris Hawks Optimization (HHO) [7], MSWOA [36], MWOA [37] and RBMO. The configuration of each algorithm is listed in Table 3. To ensure fairness, the iteration count was consistently set to T = 500, and the population size was fixed at N = 30. All algorithms were executed 30 times independently on the piston lever optimization task. For performance evaluation, the average fitness (Ave) and standard deviation (Std) were collected. The corresponding results are depicted in Figure 17 and detailed in Table 8. As evidenced by Table 8, MRBMO outperformed the other methods in terms of both optimization precision and result stability, showcasing its strong effectiveness in addressing such engineering design challenges.

9.1.3. Robot Gripper Design

The robot gripper design problem is a classic engineering optimization problem, widely applied in industrial automation, medical robotics, and logistics. The goal is to maximize gripping performance or minimize material usage under constraints on the gripping force range, structural requirements, and geometric stability, thereby optimizing the structural efficiency and cost-effectiveness of the robot gripper. Figure 18 shows the structure of a robot gripper [14].
The robot gripper involves several critical parameters related to geometry, mechanics, and motion: x_1, x_2, x_3, and x_4 are geometric parameters of the gripper; x_5 is the force applied to the gripper; x_6 is the length of the gripper; and x_7 is the angular offset of the gripper.
The objective function of the Robot Gripper design problem can be described as:
  • Variable:
    x = [x_1, x_2, x_3, x_4, x_5, x_6, x_7]
  • Minimize:
    f(x) = \max_{z \in [0, Z_{max}]} F_1(x, z, 2) + \min_{z \in [0, Z_{max}]} F_1(x, z, 2) + punishment
  • Subject to:
    g_1(x) = -Y_{min} + F_1(x, Z_{max}, 1) \le 0;
    g_2(x) = -F_1(x, Z_{max}, 1) \le 0;
    g_3(x) = Y_{max} - F_1(x, 0, 1) \le 0;
    g_4(x) = F_1(x, 0, 1) - Y_G \le 0;
    g_5(x) = x_6^2 + x_4^2 - (x_1 + x_2)^2 \le 0;
    g_6(x) = x_2^2 - (x_1 - x_4)^2 - (x_6 - Z_{max})^2 \le 0;
    g_7(x) = Z_{max} - x_6 \le 0
  • Variable range:
    10 \le x_1 \le 150; \quad 10 \le x_2 \le 150; \quad 100 \le x_3 \le 200; \quad 0 \le x_4 \le 50;
    10 \le x_5 \le 150; \quad 100 \le x_6 \le 300; \quad 1 \le x_7 \le 3.14
    where:
    P = 100; \quad Z_{max} = 100;
    Y_{min} = 50; \quad Y_{max} = 100; \quad Y_G = 150;
    F_1(x, z, flag): calculates the grabbing force (flag = 1) or the applied force (flag = 2).
When flag = 1, the grabbing force is calculated as
F_1(x, z, 1) = 2 x_5 + x_4 + x_3 \sin(\beta + x_7);
When flag = 2, the applied force is calculated as
F_1(x, z, 2) = \frac{P x_2 \sin(\alpha + \beta)}{2 x_3};
\alpha = \arccos\left( \frac{x_1^2 + g^2 - x_2^2}{2 x_1 g} \right) + \phi_o;
\beta = \arccos\left( \frac{x_2^2 + g^2 - x_1^2}{2 x_2 g} \right) - \phi_o;
g = \sqrt{x_4^2 + (z - x_6)^2};
\phi_o = \arctan\left( \frac{x_4}{x_6 - z} \right);
punishment = 10^3 \sum_{i=1}^{7} \max(0, g_i(x))^2
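A short Python sketch of the helper F_1(x, z, flag) and the resulting objective is given below. It mirrors the formulas as reconstructed above; the clipping inside arccos and the use of arctan2 are numerical safeguards added for illustration, and the penalty term is omitted for brevity, so this should be read as a sketch rather than the authors' code.

```python
import numpy as np

P, Z_MAX = 100.0, 100.0

def F1(x, z, flag):
    """Grabbing force (flag = 1) or applied force (flag = 2), per the formulas above."""
    x1, x2, x3, x4, x5, x6, x7 = x
    g = np.sqrt(x4 ** 2 + (z - x6) ** 2)
    phi = np.arctan2(x4, x6 - z)  # arctan(x4 / (x6 - z)), well defined even at z = x6
    cos_a = np.clip((x1 ** 2 + g ** 2 - x2 ** 2) / (2 * x1 * g), -1.0, 1.0)  # safeguard
    cos_b = np.clip((x2 ** 2 + g ** 2 - x1 ** 2) / (2 * x2 * g), -1.0, 1.0)
    alpha = np.arccos(cos_a) + phi
    beta = np.arccos(cos_b) - phi
    if flag == 1:
        return 2 * x5 + x4 + x3 * np.sin(beta + x7)
    return P * x2 * np.sin(alpha + beta) / (2 * x3)

def gripper_objective(x, n_samples=50):
    """Objective from the formulation above, with the penalty term omitted for brevity."""
    forces = np.array([F1(x, z, 2) for z in np.linspace(0.0, Z_MAX, n_samples)])
    return forces.max() + forces.min()

# Example: evaluate one candidate at the upper variable bounds.
print(gripper_objective([150.0, 150.0, 200.0, 0.0, 150.0, 300.0, 3.14]))
```

The applied force is sampled over a grid of z values in [0, Z_max]; a finer grid gives a more accurate estimate of the maximum and minimum at higher computational cost.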
This study compared MRBMO with the Attraction-Repulsion Optimization Algorithm (AROA) [32], Grey Wolf Optimizer (GWO) [33], Rime Optimization Algorithm (RIME) [34], Whale Optimization Algorithm (WOA) [6], Remora Optimization Algorithm (ROA) [35], Harris Hawks Optimization (HHO) [7], MSWOA [36], MWOA [37], and RBMO. The parameter configurations for all algorithms are summarized in Table 3. For consistency across experiments, the number of iterations was set to T = 500, and the population size was maintained at N = 30. Each algorithm was independently run 30 times on the robot gripper design task, with the average fitness (Ave) and standard deviation (Std) recorded for comparative analysis. The corresponding results are displayed in Figure 19 and Table 8.
As shown in Table 8, MRBMO exhibited notably better performance than the other methods in terms of both solution accuracy and robustness in the Robot Gripper design scenario, highlighting its strong capability in solving such engineering optimization problems.

9.1.4. Industrial Refrigeration System Design

The industrial refrigeration system optimization problem aims to reduce both energy usage and operational cost while maintaining effective cooling capability, as depicted in Figure 20 [14]. The task involves determining the optimal configuration of system elements, such as compressors, condensers, and evaporators, to ensure minimal cost and maximum thermal efficiency. This problem encompasses fourteen decision variables: compressor powers (x_1, x_2), refrigerant flow and mass rates (x_3 to x_6), design parameters of the condenser and evaporator (x_7, x_8), compression characteristics (x_9, x_10), temperature-related parameters (x_11, x_12), and flow control variables (x_13, x_14). In detail, x_1 and x_2 define the cooling output via compressor power; x_3 to x_6 characterize the movement of refrigerant through key system units; x_7 and x_8 correspond to size attributes of the condenser and evaporator; x_9 and x_10 describe compression level and efficiency; x_11 and x_12 adjust the heat exchange temperature gradient; and x_13 and x_14 regulate coolant or refrigerant flow rates, which critically influence overall system effectiveness. The formal mathematical formulation of this problem is provided below.
  • Variable:
    x = [x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9, x_{10}, x_{11}, x_{12}, x_{13}, x_{14}]
  • Minimize:
    y = f(x) + punishment
  • Subject to:
    g_1(x) = \frac{1.524}{x_7} - 1 \le 0;
    g_2(x) = \frac{1.524}{x_8} - 1 \le 0;
    g_3(x) = 0.07789 x_1 - \frac{2 x_9}{x_7} - 1 \le 0;
    g_4(x) = \frac{7.05305 x_1^2 x_{10}}{x_9 x_8 x_2 x_{14}} - 1 \le 0;
    g_5(x) = \frac{0.0833 x_{14}}{x_{13}} - 1 \le 0;
    g_6(x) = \frac{47.136 x_2^{0.333} x_{12}}{x_{10}} - 1.333 x_8 x_{13}^{2.1195} + \frac{62.08 x_{13}^{2.1195} x_8^{0.2}}{x_{12} x_{10}} - 1 \le 0;
    g_7(x) = 0.04771 x_{10} x_8^{1.8812} x_{12}^{0.3424} - 1 \le 0;
    g_8(x) = 0.0488 x_9 x_7^{1.893} x_{11}^{0.316} - 1 \le 0;
    g_9(x) = \frac{0.0099 x_1}{x_3} - 1 \le 0;
    g_{10}(x) = \frac{0.0193 x_2}{x_4} - 1 \le 0;
    g_{11}(x) = \frac{0.0298 x_1}{x_5} - 1 \le 0;
    g_{12}(x) = \frac{0.056 x_2}{x_6} - 1 \le 0;
    g_{13}(x) = \frac{2}{x_9} - 1 \le 0;
    g_{14}(x) = \frac{2}{x_{10}} - 1 \le 0;
    g_{15}(x) = \frac{x_{12}}{x_{11}} - 1 \le 0;
    where:
    f(x) = 63098.88 x_2 x_4 x_{12} + 5441.5 x_2^2 x_{12} + 115055.5 x_2^{1.664} x_6 + 6172.27 x_2^2 x_6 + 63098.88 x_1 x_3 x_{11} + 5441.5 x_1^2 x_{11} + 115055.5 x_1^{1.664} x_5 + 6172.27 x_1^2 x_5 + 140.53 x_1 x_{11} + 281.29 x_3 x_{11} + 70.26 x_1^2 + 281.29 x_1 x_3 + 281.29 x_3^2 + \frac{14437 x_8^{1.8812} x_{12}^{0.3424} x_{10} x_1^2 x_7}{x_{14} x_9} + 20470.2 x_7^{2.893} x_{11}^{0.316} x_{12};
    punishment = 10^3 \sum_{i=1}^{15} \max(0, g_i(x))^2
  • Variable range:
    0.001 < x_i < 5, \quad i = 1, 2, \ldots, 14
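The penalized objective can be evaluated directly from the formulas as reconstructed above, as in the following Python sketch (illustrative only, not the authors' code).

```python
import numpy as np

def refrigeration_cost(x):
    """Penalized industrial-refrigeration objective, following the formulation above."""
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14 = x
    f = (63098.88*x2*x4*x12 + 5441.5*x2**2*x12 + 115055.5*x2**1.664*x6
         + 6172.27*x2**2*x6 + 63098.88*x1*x3*x11 + 5441.5*x1**2*x11
         + 115055.5*x1**1.664*x5 + 6172.27*x1**2*x5 + 140.53*x1*x11
         + 281.29*x3*x11 + 70.26*x1**2 + 281.29*x1*x3 + 281.29*x3**2
         + 14437*x8**1.8812*x12**0.3424*x10*x1**2*x7/(x14*x9)
         + 20470.2*x7**2.893*x11**0.316*x12)
    g = [
        1.524/x7 - 1,
        1.524/x8 - 1,
        0.07789*x1 - 2*x9/x7 - 1,
        7.05305*x1**2*x10/(x9*x8*x2*x14) - 1,
        0.0833*x14/x13 - 1,
        (47.136*x2**0.333*x12/x10 - 1.333*x8*x13**2.1195
         + 62.08*x13**2.1195*x8**0.2/(x12*x10) - 1),
        0.04771*x10*x8**1.8812*x12**0.3424 - 1,
        0.0488*x9*x7**1.893*x11**0.316 - 1,
        0.0099*x1/x3 - 1,
        0.0193*x2/x4 - 1,
        0.0298*x1/x5 - 1,
        0.056*x2/x6 - 1,
        2/x9 - 1,
        2/x10 - 1,
        x12/x11 - 1,
    ]
    punishment = 1e3 * sum(max(0.0, float(gi)) ** 2 for gi in g)
    return f + punishment

# Example: evaluate one candidate inside the open interval (0.001, 5) for every variable.
print(refrigeration_cost(np.full(14, 0.5)))
```

Because every decision variable is strictly positive within its bounds, the ratio-type constraints above are always well defined for feasible candidates.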
A comparative test was also conducted between MRBMO and the Attraction-Repulsion Optimization Algorithm (AROA) [32], Grey Wolf Optimizer (GWO) [33], Rime Optimization Algorithm (RIME) [34], Whale Optimization Algorithm (WOA) [6], Remora Optimization Algorithm (ROA) [35], Harris Hawks Optimization (HHO) [7], MSWOA [36], MWOA [37], and RBMO. The parameter configurations for all algorithms are detailed in Table 3. Each algorithm was executed for 30 independent trials, with a maximum of T = 500 iterations and a population size of N = 30. The results, shown in Figure 21 and Table 8, reveal that MRBMO consistently avoided local optima and continued to improve the solution even when other algorithms stagnated in sub-optimal regions. Compared with the other methods, MRBMO demonstrated superior accuracy and stability in its search for optimal solutions. Thus, MRBMO stands out as a robust and effective optimization approach for complex design problems.

9.2. Antenna S-Parameter Optimization

The optimization of antenna S-parameters (scattering parameters) is a critical aspect of the design of wireless communication systems, radar, and other electronic devices, serving as a key factor in ensuring the efficient operation of wireless systems. S-parameters describe the reflection and transmission characteristics of antennas, primarily including S_11 (the reflection coefficient) and S_21 (the transmission coefficient). Optimizing these parameters can enhance antenna performance, reduce signal loss, and improve radiation efficiency. Metaheuristic algorithms are capable of finding optimal or near-optimal solutions within complex design spaces. When integrated with electromagnetic simulation software, they create an iterative optimization workflow: by selecting appropriate algorithms, configuring suitable parameters, and utilizing relevant objective functions, designers can effectively optimize the S-parameters of antennas and thereby enhance their performance and reliability. However, validating the suitability of algorithms for antenna S-parameter optimization through electromagnetic simulation is time-consuming and resource-intensive. Therefore, Zhen Zhang et al. developed a benchmark test suite for antenna S-parameter optimization [39] that allows the performance of metaheuristic algorithms in antenna design to be assessed quickly and intuitively. The suite mimics the characteristics of electromagnetic simulations and covers common antenna problems, ranging from single antennas to multiple antennas, thereby addressing the structural design challenges of both types. The authors demonstrated that the proposed test suite is equivalent to electromagnetic simulation of antenna S-parameters in the sense that an algorithm that performs well on the test suite is also suitable for antenna S-parameter optimization. The benchmark functions of the test suite are detailed below, and Figure 22 shows their landscapes. Further details of the benchmark functions are listed in Table A1.
F_1 = 20 \log \left( 2 \left( \sum_{i=1}^{n} \left| \sin^2\left( \frac{x_i}{8} \right) \right| + \sum_{i=1}^{n} \left| \sin\left( \frac{x_i}{8} \right) \right| \right) + 1 \right)
where F 1 is a uni-modal function characterized by a rose-shaped valley, with a minimum value of 0 and a dimension of 8. It is continuous, differentiable, and non-separable for single antenna optimization.
F_2 = 20 \log \left( 10 \left( \sum_{i=1}^{n} x_i^2 \right)^2 + 1 \right)
where F 2 is a uni-modal function with a steep narrow valley, having a minimum value of 0 and a dimension of 8. It is continuous, differentiable, separable, and scalable, for multi-antenna design optimization.
F_3 = 20 \log \left( 10 \left( \sum_{i=1}^{n} 0.01 i^5 x_i^2 \right)^2 + 1 \right)
where F 3 is a uni-modal function featuring a long narrow valley, with a minimum value of 0 and a dimension of 8. It is continuous, differentiable, separable, and scalable, for both single and multiple antenna optimization.
F_4 = 20 \log \left( \sum_{i=1}^{n-1} \left( 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( x_i - 1 \right)^2 \right) + 1 \right)
where F 4 is a uni-modal function characterized by steep and banana-shaped curved valleys, with a minimum value of 0 and a dimension of 8. It is continuous, differentiable, non-separable, and scalable, for multi-antenna design optimization.
F_5 = 100 \left| x_2 + 1 - 0.01 (x_1 - 10)^2 \right| + 0.01 \left| x_1 \right|
where F 5 is a multi-modal function with long narrow valleys, having a minimum value of 0 and a dimension of 2. It is continuous, non-differentiable, non-separable, and non-scalable, for multi-antenna design optimization.
F_6 = 20 \log \left( 0.01 \left( \sum_{i=1}^{n} |x_i| \right)^2 \left( \sin(0.8 x_1) + 2 \right)^2 + 1 \right)
where F 6 is a multi-modal function with long narrow valleys that intersect, featuring a minimum value of 0 and a dimension of 8. It is continuous, scalable, non-differentiable, and non-separable, for multi-antenna optimization.
F_7 = 20 \log \left( \sum_{i=1}^{n-1} \left( 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( x_i - 1 \right)^2 \right) + 1 \right) + 20 \log \left( 0.01 \left( \sum_{i=1}^{n} |x_i| \right)^2 \left( \sin(0.8 x_1) + 2 \right)^2 + 1 \right)
where F 7 is a multi-modal compositional function with long narrow and intersecting valleys, with a minimum value of 0 and a dimension of 8. It is continuous, non-differentiable, non-separable, and scalable, for multi-antenna design optimization.
F_8 = 100 \left| x_2 + 1 - 0.01 (x_1 - 10)^2 \right| + 0.01 \left| x_1 \right| + 20 \log \left( 0.01 \left( \sum_{i=1}^{n} |x_i| \right)^2 \left( \sin(0.8 x_1) + 2 \right)^2 + 1 \right)
where F 8 is a multi-modal function characterized by long narrow and intersecting valleys, with a minimum value of 0 and a dimension of 8. It is continuous, non-differentiable, non-separable, and scalable, for multi-antenna optimization.
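As a concrete illustration, the following Python sketch implements two of the suite functions as reconstructed above, F_2 and F_5. A base-10 logarithm is assumed, in keeping with the dB-style 20·log scaling; the snippet is only meant to show how the suite can be reproduced for quick testing.

```python
import numpy as np

def f2(x):
    """Steep narrow valley: 20*log10(10*(sum of x_i^2)^2 + 1); minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return 20 * np.log10(10 * np.sum(x ** 2) ** 2 + 1)

def f5(x):
    """Two-dimensional long-narrow-valley function, as reconstructed above."""
    x1, x2 = x
    return 100 * abs(x2 + 1 - 0.01 * (x1 - 10) ** 2) + 0.01 * abs(x1)

# Both functions evaluate to 0 at their reported optima.
print(f2(np.zeros(8)), f5([0.0, 0.0]))
```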
We employed a set of algorithms, including the Attraction-Repulsion Optimization Algorithm (AROA) [32], Grey Wolf Optimizer (GWO) [33], Rime Optimization Algorithm (RIME) [34], Whale Optimization Algorithm (WOA) [6], Remora Optimization Algorithm (ROA) [35], Harris Hawks Optimization (HHO) [7], MSWOA [36], MWOA [37], RBMO, and MRBMO, to assess their effectiveness in antenna S-parameter optimization tasks using this benchmark suite. For consistency, all tests were conducted with a fixed iteration count of T = 500 and a population size of N = 30. Each method was independently run 30 times on the eight benchmark functions, and metrics such as the average fitness (Ave), standard deviation (Std), Wilcoxon rank-sum test p-values, and Friedman test results were collected to evaluate performance.
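The evaluation protocol described above can be scripted along the following lines. The optimizer call signature is a placeholder (the MRBMO implementation itself is not shown in this section), and SciPy's ranksums and friedmanchisquare are used here as one way to compute the two statistical tests; this is a sketch of the protocol, not the authors' exact pipeline.

```python
import numpy as np
from scipy.stats import ranksums, friedmanchisquare

def run_trials(optimizer, objective, dim, bounds, runs=30, iters=500, pop=30):
    """Run an optimizer `runs` times independently and collect the best fitness of each run.
    `optimizer(objective, dim, bounds, iters, pop)` is a placeholder signature, not the paper's API."""
    return np.array([optimizer(objective, dim, bounds, iters, pop) for _ in range(runs)])

def summarize(results):
    """Ave and Std over the independent runs, as reported in the result tables."""
    return float(np.mean(results)), float(np.std(results))

# Hypothetical usage, assuming res_mrbmo, res_rbmo, res_gwo are arrays returned by run_trials:
# ave, std = summarize(res_mrbmo)
# _, p_value = ranksums(res_mrbmo, res_rbmo)                       # pairwise Wilcoxon rank-sum test
# _, p_friedman = friedmanchisquare(res_mrbmo, res_rbmo, res_gwo)  # needs three or more algorithms
```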
According to the outcomes presented in Figure 23, Table 9, and Table 10, MRBMO demonstrated outstanding performance in the antenna S-parameter optimization task, achieving significantly better results than the other state-of-the-art algorithms. MRBMO exhibited rapid convergence and high accuracy across most functions and showcased robust optimization capability on the benchmark functions. On F2, F3, and F6, MRBMO, like most of the other SOTA algorithms, converged to the optimal solution relatively quickly. Notably, on F1, F5, and F8, MRBMO converged to the optimal solution within a limited number of iterations, whereas the other SOTA algorithms did not. On F4 and F7, when other algorithms became stuck in local optima, MRBMO effectively escaped them and continued to find better solutions. Additionally, the Wilcoxon rank-sum test and Friedman test confirmed that MRBMO's performance was significantly superior to that of the other algorithms, highlighting its overall excellence. With an overall effectiveness of 100%, MRBMO was the most effective algorithm. These results demonstrate the ability of MRBMO to solve antenna S-parameter optimization problems (Table 11).

10. Discussion

We conducted a series of experiments using benchmark functions, including unimodal, multimodal, and composite functions, to assess the performance of MRBMO. These experiments validated the efficacy of MRBMO across various problem types. In the ablation study, we evaluated the effectiveness of each improvement strategy embedded in MRBMO. Qualitative analyses were also carried out to examine its search dynamics, including the balance between exploitation and exploration, as well as the maintenance of population diversity. The experimental results demonstrated that MRBMO effectively explored the solution space and was capable of locating global or near-global optima. Exploitation–exploration ratio curves showed that MRBMO maintained a balanced search process, while population diversity analyses confirmed its ability to avoid premature convergence.
When compared with several state-of-the-art (SOTA) metaheuristic algorithms, MRBMO exhibited superior performance in terms of convergence speed and solution accuracy. It also demonstrated strong robustness and adaptability across different problem dimensions (30, 50, and 100). Furthermore, in engineering simulations, MRBMO achieved the best results on four real-world engineering design problems and the antenna S-parameter optimization task, underscoring its effectiveness and practical potential. In the engineering design optimization simulations, MRBMO achieved the lowest average fitness and standard deviation compared to other advanced metaheuristic algorithms. This indicated that MRBMO was capable of effectively identifying optimal or near-optimal solutions within the given number of iterations. In the antenna S-parameter optimization tasks, MRBMO continued to demonstrate significant superiority. In most cases, MRBMO was able to locate the global optimum even when other competitive metaheuristic algorithms were trapped in local optima. These results confirmed that MRBMO remained a highly effective and recommended optimizer in the domain of antenna S-parameter optimization.
However, MRBMO is not without limitations. The algorithm involved multiple parameters whose improper configuration could affect performance, indicating a certain degree of parameter sensitivity. Additionally, although MRBMO exhibited strong global search ability, it might still face challenges when dealing with extremely high-dimensional or dynamically changing environments. In terms of computational cost, while MRBMO maintained reasonable runtime in benchmark tests, its complexity increased with problem size due to the multi-phase structure.
In future research, we aim to further explore the parameter sensitivity of MRBMO and develop adaptive parameter control strategies to enhance robustness. Moreover, computational efficiency can be improved through parallelization or lightweight strategies. We also plan to evaluate MRBMO using real-world prototypes of mechanical components and antenna systems, integrating more practical constraints into the optimization process. Potential application domains include combinatorial optimization (e.g., TSP), financial modeling, and neural network hyperparameter tuning. Ultimately, MRBMO is expected to serve as a reliable and effective tool for engineering optimization, simulation, and design.

Author Contributions

Conceptualization, J.W., B.L. and Z.X.; methodology, J.W.; software, J.W.; validation, J.W.; formal analysis, Y.Y.; investigation, Z.L.; resources, N.C.; data curation, R.Z., B.L. and S.P.; writing—original draft preparation, J.W.; writing—review and editing, Y.C., J.W. and N.C.; visualization, J.W. and Y.G.; supervision, N.C. and Y.C.; project administration, B.L.; funding acquisition, N.C. All authors have read and agreed to the published version of the manuscript.

Funding

Our research, including the article processing charges (APC), was supported by Macao Polytechnic University under Grant no: RP/FCA-06/2022 and the Macao Science and Technology Development Fund through Grant no: 0044/2023/ITP2.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

We gratefully acknowledge the financial support from Macao Polytechnic University (MPU Grant no: RP/FCA-06/2022) and the Macao Science and Technology Development Fund (FDCT Grant no: 0044/2023/ITP2). These contributions were instrumental in facilitating various aspects of the study, including data acquisition, result analysis, and resource procurement. The generous funding from MPU and FDCT played a vital role in enhancing the overall quality and depth of our research.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Ave: Average fitness
Std: Standard deviation
OE: Overall effectiveness

Appendix A. Table

Appendix A.1. Details of the Benchmark Functions

The benchmark function models have been made available on Figshare. The specific modeling details for the Standard Benchmark Functions with D = 30 can be accessed via the following link: https://doi.org/10.6084/m9.figshare.28440863, intended solely for readers’ reference and in-depth examination.

Appendix A.2. Details of the Antenna S-Parameter Optimization Test Suite

Table A1. Benchmark functions in the Antenna S-parameter Optimization test suite.
Function | Type | Dimension | F_min | Boundaries
F1 | Uni-modal | 8 | 0 | [−100, 100]
F2 | Uni-modal | 8 | 0 | [−50, 50]
F3 | Uni-modal | 8 | 0 | [−30, 30]
F4 | Uni-modal | 8 | 0 | [−10, 10]
F5 | Multi-modal | 2 | 0 | [−5, 5]
F6 | Multi-modal | 8 | 0 | [−5, 5]
F7 | Compositional | 8 | 0 | [−20, 20]
F8 | Compositional | 8 | 0 | [−50, 50]

References

  1. Laarhoven, P.J.M.V.; Aarts, E.H.L. Simulated Annealing; Springer: Dordrecht, The Netherlands, 1987. [Google Scholar]
  2. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  3. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  4. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  5. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2010, 15, 4–31. [Google Scholar] [CrossRef]
  6. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  7. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  8. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  9. Jia, H.; Rao, H.; Wen, C.; Mirjalili, S. Crayfish optimization algorithm. Artif. Intell. Rev. 2023, 56 (Suppl. S2), 1919–1979. [Google Scholar] [CrossRef]
  10. Xiong, G.; Zhang, J.; Shi, D.; He, Y. Parameter extraction of solar photovoltaic models using an improved whale optimization algorithm. Energy Convers. Manag. 2018, 174, 388–405. [Google Scholar] [CrossRef]
  11. Shen, Y.; Zhang, C.; Gharehchopogh, F.S.; Mirjalili, S. An improved whale optimization algorithm based on multi-population evolution for global optimization and engineering design problems. Expert Syst. Appl. 2023, 215, 119269. [Google Scholar] [CrossRef]
  12. Li, Y.; Zhao, L.; Wang, Y.; Wen, Q. Improved sand cat swarm optimization algorithm for enhancing coverage of wireless sensor networks. Measurement 2024, 233, 114649. [Google Scholar] [CrossRef]
  13. Wei, J.; Gu, Y.; Law, K.L.E.; Cheong, N. Adaptive Position Updating Particle Swarm Optimization for UAV Path Planning. In Proceedings of the 2024 22nd International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt), Seoul, Republic of Korea, 21–24 October 2024; pp. 124–131. [Google Scholar]
  14. Wei, J.; Gu, Y.; Yan, Y.; Li, Z.; Lu, B.; Pan, S.; Cheong, N. LSEWOA: An Enhanced Whale Optimization Algorithm with Multi-Strategy for Numerical and Engineering Design Optimization Problems. Sensors 2025, 25, 2054. [Google Scholar] [CrossRef] [PubMed]
  15. Fu, S.; Li, K.; Huang, H.; Ma, C.; Fan, Q.; Zhu, Y. Red-billed blue magpie optimizer: A novel metaheuristic algorithm for 2D/3D UAV path planning and engineering design problems. Artif. Intell. Rev. 2024, 57, 134. [Google Scholar] [CrossRef]
  16. Adegboye, O.R.; Feda, A.K.; Ishaya, M.M.; Agyekum, E.B.; Kim, K.C.; Mbasso, W.F.; Kamel, S. Antenna S-parameter optimization based on golden sine mechanism based honey badger algorithm with tent chaos. Heliyon 2023, 9, e21596. [Google Scholar] [CrossRef]
  17. Park, E.; Lee, S.R.; Lee, I. Antenna placement optimization for distributed antenna systems. IEEE Trans. Wirel. Commun. 2012, 11, 2468–2477. [Google Scholar] [CrossRef]
  18. Jiang, F.; Chiu, C.Y.; Shen, S.; Cheng, Q.S.; Murch, R. Pixel antenna optimization using NN-port characteristic mode analysis. IEEE Trans. Antennas Propag. 2020, 68, 3336–3347. [Google Scholar] [CrossRef]
  19. Karthika, K.; Anusha, K.; Kavitha, K.; Geetha, D.M. Optimization algorithms for reconfigurable antenna design: A review. In Advances in Microwave Engineering; CRC Press: Boca Raton, FL, USA, 2024; pp. 85–103. [Google Scholar]
  20. Cai, J.; Wan, H.; Sun, Y. Artificial bee colony algorithm-based self-optimization of base station antenna azimuth and down-tilt angle. Telecommun. Sci. 2021, 1, 69–75. [Google Scholar]
  21. Peng, F.; Chen, X. An antenna optimization framework based on deep reinforcement learning. IEEE Trans. Antennas Propag. 2024, 72, 7594–7605. [Google Scholar] [CrossRef]
  22. Martins, J.R.R.A.; Ning, A. Engineering Design Optimization; Cambridge University Press: Cambridge, UK, 2021. [Google Scholar]
  23. Salih, S.Q.; Alsewari, A.R.A.; Yaseen, Z.M. Pressure vessel design simulation: Implementing of multi-swarm particle swarm optimization. In Proceedings of the 2019 8th International Conference on Software and Computer Applications, Penang, Malaysia, 19–21 February 2019; pp. 120–124. [Google Scholar]
  24. Dandagwhal, R.D.; Kalyankar, V.D. Design optimization of rolling element bearings using advanced optimization technique. Arab. J. Sci. Eng. 2019, 44, 7407–7422. [Google Scholar] [CrossRef]
  25. Wei, J.; Gu, Y.; Lu, B.; Cheong, N. RWOA: A novel enhanced whale optimization algorithm with multi-strategy for numerical optimization and engineering design problems. PLoS ONE 2025, 20, e0320913. [Google Scholar] [CrossRef]
  26. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
  27. Agushaka, J.O.; Ezugwu, A.E.; Saha, A.K.; Pal, J.; Abualigah, L.; Mirjalili, S. Greater cane rat algorithm (GCRA): A nature-inspired metaheuristic for optimization problems. Heliyon 2024, 10, e31629. [Google Scholar] [CrossRef] [PubMed]
  28. Xiao, C.; Cai, Z.; Wang, Y. A good nodes set evolution strategy for constrained optimization. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 943–950. [Google Scholar]
  29. Wei, J.; Gu, Y.; Xie, Z.; Yan, Y.; Lu, B.; Li, Z.; Cheong, N. TSWOA: An enhanced whale optimization algorithm with Levy flight and Spiral flight for numerical and engineering design optimization problems. PLoS ONE 2025, 20, e0322058. [Google Scholar] [CrossRef] [PubMed]
  30. Yu, F.; Guan, J.; Wu, H.; Chen, Y.; Xia, X. Lens imaging opposition-based learning for differential evolution with cauchy perturbation. Appl. Soft Comput. 2024, 152, 111211. [Google Scholar] [CrossRef]
  31. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.-P.; Auger, A.; Tiwari, S. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Rep. 2005, 2005, 2005005. [Google Scholar]
  32. Cymerys, K.; Oszust, M. Attraction-Repulsion Optimization Algorithm for Global Optimization Problems. Swarm Evol. Comput. 2024, 84, 101459. [Google Scholar] [CrossRef]
  33. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  34. Su, H.; Zhao, D.; Heidari, A.A.; Liu, L.; Zhang, X.; Mafarja, M.; Chen, H. RIME: A physics-based optimization. Neurocomputing 2023, 532, 183–214. [Google Scholar] [CrossRef]
  35. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665. [Google Scholar] [CrossRef]
  36. Wenbiao, Y.; Kewen, X.; Shurui, F.; Li, W.; Tiejun, L.; Jiangnan, Z.; Yu, F. A Multi-Strategy Whale Optimization Algorithm and Its Application. Eng. Appl. Artif. Intell. 2022, 108, 104558. [Google Scholar] [CrossRef]
  37. Anitha, J.; Pandian, S.I.A.; Agnes, S.A. An efficient multilevel color image thresholding based on modified whale optimization algorithm. Expert Syst. Appl. 2021, 178, 115003. [Google Scholar] [CrossRef]
  38. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Faris, H. MTDE: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems. Appl. Soft Comput. 2020, 97, 106761. [Google Scholar] [CrossRef]
  39. Zhang, Z.; Chen, H.; Jiang, F.; Yu, Y.; Cheng, Q.S. A benchmark test suite for antenna S-parameter optimization. IEEE Trans. Antennas Propag. 2021, 69, 6635–6650. [Google Scholar] [CrossRef]
Figure 1. The population initialized by pseudo-random number method (N = 150). The figure on the left is the 2D view of the population distribution, and the figure on the right is the 3D view of the population distribution. Axes are normalized in [0, 1] for visualization purposes.
Figure 2. Workflow of RBMO. After executing the Search-for-food strategy, RBMO will execute the Food storage strategy to preserve the better solution. Then, after executing the Attacking prey strategy, the Food storage strategy will be executed again to preserve the better solution.
Figure 3. The population initialized by Good Nodes Set method (N = 150). The figure on the left is the 2D view of the population distribution, and the figure on the right is the 3D view of the population distribution. Axes are normalized in [0, 1] for visualization purposes.
Figure 4. The variation process of k.
Figure 5. Two-dimensional simulation of Levy flight using the Mantegna method [29]. The trajectory is generated by repeatedly taking steps whose lengths follow a Levy distribution, with random directions in 2D space.
Figure 6. The concept of lens imaging. In this schematic, s and s′ represent the distances from the object and the image to the lens, respectively, while f denotes the focal length of the lens. y is the height of the object, and y′ is the height of the corresponding image.
Figure 7. Workflow of MRBMO. After executing the Enhanced Search-for-food strategy, MRBMO will implement the Enhanced Food Storage with LIOBL strategy to generate new solutions and retain better ones. Then, after executing the Siege-style Attacking Prey strategy, the original Food Storage strategy will be applied to retain better solutions.
Figure 8. Iteration curves for MRBMOs in ablation study.
Figure 9. Results of MRBMO in qualitative analysis experiment (F1–F6).
Figure 10. Results of MRBMO in qualitative analysis experiment (F7–F12).
Figure 11. Results of MRBMO in qualitative analysis experiment (F13–F18).
Figure 12. Results of MRBMO in qualitative analysis experiment (F19–F23).
Figure 13. Iteration curves for each algorithm in superiority comparative test.
Figure 14. The structure of a pressure vessel.
Figure 15. Iteration curves of the algorithms in pressure vessel design.
Figure 16. The structure of a piston lever.
Figure 17. Iteration curves of the algorithms in piston lever design problem.
Figure 18. The structure of a robot gripper.
Figure 19. Iteration curves of the algorithms in Robot Gripper design problem.
Figure 20. The structure of an industrial refrigeration system.
Figure 21. Iteration curves of the algorithms in Industrial Refrigeration System design problem.
Figure 22. Landscapes of eight benchmark functions in the Antenna S-parameter Optimization test suite.
Figure 23. Iteration curves of the algorithms in antenna S-parameter optimization.
Table 1. Details of the metaheuristic algorithms.
Algorithm | Year | Author | Source of Inspiration
Simulated annealing (SA) [1] | 1987 | Laarhoven et al. | The annealing process.
Genetic Algorithm (GA) [2] | 1992 | Holland et al. | Darwin's theory of evolution and Mendel's genetics.
Ant Colony Optimization (ACO) [3] | 2006 | Dorigo et al. | Foraging behavior of ants.
Particle Swarm Optimization (PSO) [4] | 1995 | Kennedy et al. | Foraging behavior of birds.
Differential Evolution (DE) [5] | 2010 | Das et al. | Mutation, crossover and selection.
Whale Optimization Algorithm (WOA) [6] | 2016 | Seyedali Mirjalili et al. | Hunting and search-for-food behaviors of humpback whales.
Harris Hawks Optimization (HHO) [7] | 2019 | Heidari et al. | Hunting behavior of Harris hawks.
Beluga Whale Optimization (BWO) [8] | 2022 | C. Zhong et al. | Swimming, foraging and whale fall phenomena of beluga whales.
Crayfish Optimization (COA) [9] | 2023 | H. Jia et al. | Foraging, cooling and competitive behaviors of crayfish.
Table 2. Classical benchmark functions [31].
Function | Function's Name | Type | Dimension (D) | F_min | Boundaries
F1 | Sphere | Uni-modal/Scalable | 30/50/100 | 0 | [−100, 100]
F2 | Schwefel's Problem 2.22 | Uni-modal/Scalable | 30/50/100 | 0 | [−10, 10]
F3 | Schwefel's Problem 1.2 | Uni-modal/Scalable | 30/50/100 | 0 | [−100, 100]
F4 | Schwefel's Problem 2.21 | Uni-modal/Scalable | 30/50/100 | 0 | [−100, 100]
F5 | Generalized Rosenbrock's Function | Uni-modal/Scalable | 30/50/100 | 0 | [−30, 30]
F6 | Step Function | Uni-modal/Scalable | 30/50/100 | 0 | [−100, 100]
F7 | Quartic Function | Uni-modal/Scalable | 30/50/100 | 0 | [−1.28, 1.28]
F8 | Generalized Schwefel's Function | Multi-modal/Scalable | 30/50/100 | −418.98·D | [−500, 500]
F9 | Generalized Rastrigin's Function | Multi-modal/Scalable | 30/50/100 | 0 | [−5.12, 5.12]
F10 | Ackley's Function | Multi-modal/Scalable | 30/50/100 | 0 | [−32, 32]
F11 | Generalized Griewank's Function | Multi-modal/Scalable | 30/50/100 | 0 | [−600, 600]
F12 | Generalized Penalized Function 1 | Multi-modal/Scalable | 30/50/100 | 0 | [−50, 50]
F13 | Generalized Penalized Function 2 | Multi-modal/Scalable | 30/50/100 | 0 | [−50, 50]
F14 | Shekel's Foxholes Function | Multi-modal/Unscalable | 2 | 0.998 | [−65.536, 65.536]
F15 | Kowalik's Function | Multi-modal/Unscalable | 4 | 0.0003075 | [−5, 5]
F16 | Six-Hump Camel-Back Function | Compositional/Unscalable | 2 | −1.0316 | [−5, 5]
F17 | Branin Function | Compositional/Unscalable | 2 | 0.398 | [−5, 10] & [0, 15]
F18 | Goldstein-Price Function | Compositional/Unscalable | 2 | 3 | [−2, 2]
F19 | Hartman's Function 1 | Compositional/Unscalable | 3 | −3.8628 | [0, 1]
F20 | Hartman's Function 2 | Compositional/Unscalable | 6 | −3.32 | [0, 1]
F21 | Shekel's Function 1 | Compositional/Unscalable | 4 | −10.1532 | [0, 10]
F22 | Shekel's Function 2 | Compositional/Unscalable | 4 | −10.4029 | [0, 10]
F23 | Shekel's Function 3 | Compositional/Unscalable | 4 | −10.5364 | [0, 10]
Table 3. Parameter configurations for different algorithms.
Algorithm | Parameters | Value
AROA [32]Attraction factor c0.95
Local search scaling factor 10.15
Local search scaling factor 20.6
Attraction probability 10.2
Local search probability0.8
Expansion factor0.4
Local search threshold 10.9
Local search threshold 20.85
Local search threshold 30.9
GWO [33]Convergence factor a2 decreasing to 0
RIME [34] ω 5
WOA [6]Spiral factor b1
Convergence factor a2 decreasing to 0
ROA [35]c0.1
HHO [7]Threshold0.5
MSWOA [36]b1
a2 decreasing to 0
MWOA [37]b1
a2 decreasing to 0
RBMO [15]Balance coefficient ε 0.5
MRBMOBalance coefficient ε 0.5
Nonlinear factor k1 decreasing to 0
Table 4. Parametric results (Ave and Std) of each algorithm in superiority test with D = 30.
Function | Metrics | AROA | GWO | RIME | WOA | ROA | HHO | MSWOA | MWOA | RBMO | MRBMO
F1Ave3.9647 × 1002.2650 × 10 27 2.1397 × 1006.0157 × 10 74 1.1650 × 10 14 7.1665 × 10 71 2.7361 × 10 149 0.0000 × 10 0 1.9502 × 10 3 0.0000 × 10 0
Std2.5119 × 10 0 3.2697 × 10 27 7.0650 × 10 1 2.2859 × 10 73 4.5799 × 10 14 3.9185 × 10 70 5.7668 × 10 149 0.0000 × 10 0 2.6488 × 10 3 0.0000 × 10 0
F2Ave6.8940 × 10 1 1.1359 × 10 16 1.5336 × 10 0 5.8384 × 10 52 1.1029 × 10 8 1.1909 × 10 35 2.6047 × 10 81 5.8676 × 10 230 1.8157 × 10 2 0.0000 × 10 0
Std3.2481 × 10 1 1.0110 × 10 16 1.0147 × 10 0 1.3577 × 10 51 3.7150 × 10 8 6.1876 × 10 35 4.6120 × 10 81 5.9750 × 10 230 2.4320 × 10 2 0.0000 × 10 0
F3Ave1.7767 × 10 2 9.4016 × 10 6 1.5305 × 10 3 3.9649 × 10 4 8.0265 × 10 12 5.1400 × 10 67 1.3776 × 10 137 0.0000 × 10 0 1.8304 × 10 2 0.0000 × 10 0
Std3.1751 × 10 2 3.1649 × 10 5 4.6600 × 10 2 1.5223 × 10 4 3.7115 × 10 11 2.7983 × 10 66 5.4327 × 10 137 0.0000 × 10 0 1.0061 × 10 2 0.0000 × 10 0
F4Ave1.7527 × 10 0 5.5395 × 10 7 6.3762 × 10 0 5.2312 × 10 1 3.0431 × 10 8 1.7491 × 10 33 1.3822 × 10 70 1.2545 × 10 200 2.5805 × 10 0 0.0000 × 10 0
Std1.0490 × 10 0 4.7259 × 10 7 2.3386 × 10 0 2.4835 × 10 1 9.6925 × 10 8 9.5620 × 10 33 1.2047 × 10 70 1.4312 × 10 200 1.1422 × 10 0 0.0000 × 10 0
F5Ave9.5376 × 10 1 2.6761 × 10 1 6.6249 × 10 2 2.7893 × 10 1 1.4279 × 10 1 1.4728 × 10 1 4.8616 × 10 0 2.8652 × 10 1 9.5634 × 10 1 6.1565 × 10 4
Std7.4615 × 10 1 7.3068 × 10 1 7.8023 × 10 2 4.5739 × 10 1 2.8546 × 10 1 1.4294 × 10 1 1.0780 × 10 1 1.4513 × 10 1 8.1235 × 10 1 8.8135 × 10 4
F6Ave1.0190 × 10 1 8.3156 × 10 1 2.0273 × 10 0 4.6561 × 10 1 5.5960 × 10 3 9.5914 × 10 2 1.6378 × 10 3 1.3576 × 10 0 1.9037 × 10 3 2.5333 × 10 7
Std2.9668 × 10 0 4.2338 × 10 1 7.2057 × 10 1 2.3248 × 10 1 9.9996 × 10 3 2.0569 × 10 1 1.7461 × 10 3 3.7342 × 10 1 3.8004 × 10 3 3.9361 × 10 7
F7Ave3.0705 × 10 2 1.9773 × 10 3 3.8782 × 10 2 3.6904 × 10 3 1.6753 × 10 4 1.4159 × 10 4 1.6721 × 10 4 1.6721 × 10 4 2.1883 × 10 2 6.9412 × 10 5
Std2.8548 × 10 2 1.0758 × 10 3 1.2389 × 10 2 3.9641 × 10 3 1.8595 × 10 4 1.2573 × 10 4 1.3641 × 10 4 1.6895 × 10 4 9.7618 × 10 3 5.3951 × 10 5
F8Ave−4.5163 × 10 3 −6.1067 × 10 3 −9.9296 × 10 3 −1.0361 × 10 4 −1.2569 × 10 4 −1.2565 × 10 4 −9.8995 × 10 3 −4.5228 × 10 3 −8.7197 × 10 3 −1.2569 × 10 4
Std7.6476 × 10 2 7.7155 × 10 2 5.1712 × 10 2 1.8684 × 10 3 6.6652 × 10 3 2.2197 × 10 1 1.7758 × 10 3 2.0106 × 10 3 8.7678 × 10 2 1.5549 × 10 3
F9Ave5.7303 × 10 1 2.07527.1275 × 10 1 2.71541.9137 × 10 13 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 4.7238 × 10 1 0.0000 × 10 0
Std6.8504 × 10 1 3.5045 × 10 0 1.5290 × 10 1 1.4873 × 10 1 9.4465 × 10 13 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 1.6920 × 10 1 0.0000 × 10 0
F10Ave8.0335 × 10 1 9.6131 × 10 14 2.3205 × 10 0 4.2337 × 10 15 6.8461 × 10 9 4.4409 × 10 16 4.4409 × 10 16 4.4409 × 10 16 7.9180 × 10 1 4.4409 × 10 16
Std3.6028 × 10 1 1.7750 × 10 14 3.9117 × 10 1 2.9405 × 10 15 2.1223 × 10 8 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 7.7005 × 10 1 0.0000 × 10 0
F11Ave1.0033 × 10 0 3.5149 × 10 3 9.7117 × 10 1 8.0699 × 10 3 1.2452 × 10 12 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 1.7652 × 10 2 0.0000 × 10 0
Std8.2278 × 10 2 7.6237 × 10 3 7.8814 × 10 2 3.4426 × 10 2 6.4144 × 10 12 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 1.8647 × 10 2 0.0000 × 10 0
F12Ave1.2171 × 10 0 4.7080 × 10 2 2.9708 × 10 0 2.5362 × 10 2 6.7374 × 10 5 1.4334 × 10 3 1.7333 × 10 4 9.1259 × 10 2 7.1672 × 10 1 1.3557 × 10 9
Std2.2700 × 10 1 2.5012 × 10 2 1.8256 × 10 0 2.9413 × 10 2 1.4443 × 10 4 4.0055 × 10 3 2.5033 × 10 4 6.4656 × 10 2 1.0528 × 10 0 3.3947 × 10 9
F13Ave3.9506 × 10 0 6.2653 × 10 1 2.6948 × 10 1 5.5005 × 10 1 4.5995 × 10 3 5.1951 × 10 2 1.1315 × 10 2 7.3393 × 10 1 1.4255 × 10 1 1.8805 × 10 8
Std4.4205 × 10 1 2.9013 × 10 1 1.4472 × 10 1 2.4206 × 10 1 6.5868 × 10 3 9.1574 × 10 2 2.1366 × 10 2 1.7378 × 10 1 3.6407 × 10 1 5.9007 × 10 8
F14Ave6.2903 × 10 0 3.9354 × 10 0 9.9800 × 10 1 5.0190 × 10 0 9.9800 × 10 1 1.2958 × 10 0 1.6492 × 10 0 6.0085 × 10 0 9.9800 × 10 1 9.9800 × 10 1
Std3.9721 × 10 0 4.2304 × 10 0 1.7707 × 10 12 4.0147 × 10 0 4.2712 × 10 10 6.6981 × 10 1 1.1147 × 10 0 4.1744 × 10 0 1.2820 × 10 16 0.0000 × 10 0
F15Ave4.6840 × 10 3 4.4128 × 10 3 3.4476 × 10 3 6.1407 × 10 4 6.2009 × 10 4 3.9665 × 10 4 1.1637 × 10 3 5.1890 × 10 4 3.1647 × 10 3 3.0848 × 10 4
Std6.9509 × 10 3 8.1133 × 10 3 6.7549 × 10 3 3.3452 × 10 4 4.6076 × 10 4 1.1960 × 10 4 7.7257 × 10 4 1.4584 × 10 4 6.8709 × 10 3 4.8372 × 10 6
F16Ave−1.0315 × 10 0 −1.0316 × 10 0 −1.0316 × 10 0 −1.0316 × 10 0 −1.0316 × 10 0 −1.0316 × 10 0 −1.0316 × 10 0 −1.0018 × 10 0 −1.0316 × 10 0 −1.0316 × 10 0
Std2.8439 × 10 4 2.0391 × 10 8 1.8958 × 10 7 2.4381 × 10 9 6.4995 × 10 5 1.7798 × 10 6 3.7658 × 10 5 2.3834 × 10 2 5.7578 × 10 16 5.7578 × 10 16
F17Ave3.9808 × 10 1 3.9789 × 10 1 3.9789 × 10 1 3.9789 × 10 1 3.9789 × 10 1 3.9804 × 10 1 3.9807 × 10 1 4.1593 × 10 1 3.9789 × 10 1 3.9789 × 10 1
Std5.4892 × 10 4 2.7341 × 10 6 9.6785 × 10 7 1.0341 × 10 5 8.5110 × 10 4 3.5704 × 10 4 2.1308 × 10 4 2.1183 × 10 2 3.2434 × 10 16 0.0000 × 10 0
F18Ave3.0003 × 10 0 5.7000 × 10 0 8.4000 × 10 0 3.0002 × 10 0 3.0019 × 10 0 3.0003 × 10 0 3.1446 × 10 0 1.1428 × 10 1 3.0000 × 10 0 3.0000 × 10 0
Std6.3725 × 10 4 1.4789 × 10 1 2.0550 × 10 1 5.9163 × 10 4 4.1560 × 10 3 8.2802 × 10 4 7.0853 × 10 1 1.0741 × 10 1 1.3272 × 10 15 5.9467 × 10 16
F19Ave−3.8603 × 10 0 −3.8613 × 10 0 −3.8628 × 10 0 −3.8525 × 10 0 −3.8307 × 10 0 −3.7743 × 10 0 −3.8607 × 10 0 −3.7579 × 10 0 −3.8628 × 10 0 −3.8628 × 10 0
Std3.9101 × 10 3 2.5477 × 10 3 4.1968 × 10 7 3.4631 × 10 2 6.5684 × 10 2 1.4535 × 10 1 1.7167 × 10 3 7.5301 × 10 2 2.7101 × 10 15 2.7101 × 10 15
F20Ave−3.1878 × 10 0 −3.2758 × 10 0 −3.2625 × 10 0 −3.2236 × 10 0 −2.9703 × 10 0 −2.5715 × 10 0 −3.1315 × 10 0 −2.8719 × 10 0 −3.2705 × 10 0 −3.3141 × 10 0
Std9.2909 × 10 2 7.6746 × 10 2 6.0462 × 10 2 1.2117 × 10 1 2.3240 × 10 1 4.9586 × 10 1 4.2713 × 10 2 2.6838 × 10 1 5.9923 × 10 2 3.0164 × 10 2
F21Ave−5.5628 × 10 0 −9.4764 × 10 0 −7.6305 × 10 0 −8.7810 × 10 0 −1.0148 × 10 1 −3.6511 × 10 0 −8.5297 × 10 0 −4.4882 × 10 0 −8.9856 × 10 0 −1.0153 × 10 1
Std3.1905 × 10 0 1.7508 × 10 0 3.0294 × 10 0 2.5378 × 10 0 8.1531 × 10 3 1.2066 × 10 0 2.3167 × 10 0 6.6797 × 10 1 2.6823 × 10 0 7.0670 × 10 15
F22Ave−5.7081 × 10 0 −1.0224 × 10 1 −8.5674 × 10 0 −7.3291 × 10 0 −1.0397 × 10 1 −3.2970 × 10 0 −7.2977 × 10 0 −4.7820 × 10 0 −9.7495 × 10 0 −1.0403 × 10 1
Std3.2512 × 10 0 9.7011 × 10 1 2.9025 × 10 0 2.9211 × 10 0 1.2500 × 10 2 1.3156 × 10 0 3.6395 × 10 0 3.6395 × 10 0 2.0184 × 10 0 1.2775 × 10 15
F23Ave−6.0706 × 10 0 −1.0176 × 10 1 −8.9421 × 10 0 −6.6613 × 10 0 −1.0528 × 10 1 −3.0284 × 10 0 −7.7995 × 10 0 −4.7853 × 10 0 −1.0313 × 10 1 −1.0536 × 10 1
Std3.4178 × 10 0 1.3657 × 10 0 2.9793 × 10 0 3.0672 × 10 0 1.7572 × 10 2 1.1707 × 10 0 3.6450 × 10 0 1.4233 × 10 0 1.2234 × 10 0 1.9515 × 10 15
Table 5. Ranking of non-parametric tests of different algorithms. ‘Rank’ refers to the ranking of the Average Friedman Value for ten metaheuristic algorithms. ‘+/=/−’ refers to the result of the Wilcoxon rank-sum test.
Algorithm | Average Friedman Value | Rank | +/=/−
AROA | 8.2333 | 10 | 23/0/0
GWO | 5.6884 | 6 | 23/0/0
RIME | 6.9145 | 9 | 23/0/0
WOA | 5.8659 | 7 | 23/0/0
ROA | 4.9370 | 3 | 23/0/0
HHO | 5.3283 | 4 | 20/3/0
MSWOA | 4.7377 | 2 | 20/3/0
MWOA | 6.3674 | 8 | 18/5/0
RBMO | 5.3246 | 5 | 21/2/0
MRBMO | 1.6029 | 1 | -
Table 6. Results of non-parametric tests of different algorithms in higher dimensions. ‘Rank’ refers to the ranking of the Average Friedman Value for ten metaheuristic algorithms. ‘+/=/−’ refers to the result of the Wilcoxon rank-sum test.
Dimension (D) | Algorithm | Rank | Average Friedman Value | +/=/−
D = 50 | AROA | 10 | 8.0159 | 23/0/0
D = 50 | GWO | 4 | 5.5442 | 23/0/0
D = 50 | RIME | 9 | 6.9246 | 23/0/0
D = 50 | WOA | 5 | 5.5072 | 21/0/2
D = 50 | ROA | 3 | 5.0094 | 23/0/0
D = 50 | HHO | 7 | 5.9044 | 20/0/3
D = 50 | MSWOA | 2 | 4.7413 | 20/0/3
D = 50 | MWOA | 8 | 6.0087 | 18/0/5
D = 50 | RBMO | 6 | 5.6841 | 19/1/3
D = 50 | MRBMO | 1 | 1.6601 | -
D = 100 | AROA | 10 | 7.7188 | 23/0/0
D = 100 | GWO | 7 | 6.0044 | 23/0/0
D = 100 | RIME | 9 | 7.1058 | 23/0/0
D = 100 | WOA | 5 | 5.6304 | 22/0/1
D = 100 | ROA | 3 | 4.7275 | 22/1/0
D = 100 | HHO | 4 | 5.1783 | 20/0/3
D = 100 | MSWOA | 2 | 4.4790 | 20/1/2
D = 100 | MWOA | 6 | 5.9073 | 18/0/5
D = 100 | RBMO | 7 | 6.3710 | 19/1/3
D = 100 | MRBMO | 1 | 1.8775 | -
Table 7. OE of MRBMO and other SOTA algorithms.
Metrics (w/t/l) | AROA | GWO | RIME | WOA | ROA | HHO | MSWOA | MWOA | RBMO | MRBMO
D = 30 | 0/0/23 | 0/0/23 | 0/0/23 | 0/0/23 | 0/0/23 | 0/3/20 | 0/3/20 | 0/5/18 | 0/2/21 | 16/7/0
D = 50 | 0/0/23 | 0/0/23 | 0/0/23 | 0/2/21 | 0/0/23 | 0/3/20 | 0/3/20 | 0/5/18 | 1/3/19 | 14/8/1
D = 100 | 0/0/23 | 0/0/23 | 0/0/23 | 0/1/22 | 0/0/23 | 0/3/20 | 1/2/20 | 0/5/18 | 1/3/19 | 13/8/2
Total | 0/0/69 | 0/0/69 | 0/0/69 | 0/3/66 | 0/0/69 | 0/9/60 | 1/8/60 | 0/15/54 | 2/8/59 | 43/23/3
OE | 0.00% | 0.00% | 0.00% | 4.35% | 0.00% | 13.04% | 13.04% | 21.74% | 14.49% | 95.65%
Table 8. Results of different algorithms on various engineering design optimization problems.
Problems | Algorithm | Ave | Std | x_1 | x_2 | x_3 | x_4 | x_5 | x_6 | x_7 | x_8 | x_9 | x_10 | x_11 | x_12 | x_13 | x_14
Pressure
Vessel
AROA1370.488626295.1841211.0000001.00000040.000000200.000000----------
GWO1141.810949141.7184141.0000001.00000040.000000200.000000----------
RIME1282.653763182.8254821.0000001.00000041.000000197.000000----------
WOA1219.422206268.3741991.0000001.00000040.000000200.000000----------
ROA1763.647185293.4064181.0000001.00000041.000000195.000000----------
HHO1495.258505320.4968321.0000001.00000040.000000200.000000----------
MSWOA1147.519333169.2771681.0000001.00000040.000000200.000000----------
MWOA5942.2662453824.5483901.0000001.00000044.000000190.000000----------
RBMO1141.781875141.7086671.0000001.00000040.000000200.000000----------
MRBMO1115.9095300.0000001.0000001.00000040.000000200.000000----------
Piston
Lever
AROA319.936826227.2533180.0500001.4764422.931026378.235743----------
GWO34.41078767.8397260.0500071.0079492.016278500.000000----------
RIME77.701420213.6440690.0500001.0098992.017956500.000000----------
WOA57.774868110.2424260.0857201.0068352.016056500.000000----------
ROA1153.4228771682.3570614.89361311.7726621.972531500.000000----------
HHO249.909678218.8341250.0500001.0622062.097799461.730830----------
MSWOA1.1077270.0689460.0500001.0157862.017575499.632547----------
MWOA220.524760163.0561977.0551696.6435772.169984500.000000----------
RBMO23.24591657.5374960.0500001.0076462.016228500.000000----------
MRBMO1.0571750.0000000.0500001.0076462.016228500.000000----------
Robot
Gripper
AROA13.62844717.218374149.702921111.143182182.68253034.658555127.482976160.4716862.836951-------
GWO3.7945960.454172145.658521137.187924198.8324496.603866144.600354142.8770622.508651-------
RIME4.0474370.576625149.213537129.144305200.00000018.106185131.840515142.9988902.528185-------
WOA4.6993880.573319149.897652118.974762199.98498926.757698145.241962157.2653312.737721-------
ROA38.65958630.576667150.00000093.873820132.7221839.652063125.389579209.2733913.140000-------
HHO257,131.862654812,946.547661148.434356139.577035191.7599655.075438149.604104159.9896912.656773-------
MSWOA6.5832204.819690146.889687114.032475177.14928931.674680144.382918129.3904402.876919-------
MWOA5538.72280017,421.60772598.65339778.718941100.00000019.40360577.005510105.4727712.515893-------
RBMO2.9647490.155839149.906017140.719282200.0000009.016330145.854080104.4313642.430569-------
MRBMO2.6546240.101681149.767107147.509018198.2951262.14729530.987108100.0000001.701362-------
Industrial
Refrigeration
System
AROA43,678.95442473,616.0354160.0108100.0010000.0719960.2938510.2330230.1418700.8440540.9940753.2396124.6130640.2593220.0655980.0743481.325086
GWO646.5346183498.3184890.0010050.0010230.0029030.1436120.0031020.0099001.5081701.5519054.8764302.1350140.0010000.0010000.0063510.070231
RIME1523.2690135797.9338180.0010000.0010000.0010000.0010000.0010000.0010001.5054921.5246664.9537132.8380800.0010000.0010000.0085670.101852
WOA83.810189118.2624540.0010000.0010000.0010000.2739650.0259210.0010001.5073841.5566512.0202852.7487800.0010000.0010000.0072750.033436
ROA754,251.693909718,095.6635400.0308790.0048090.2557840.8154895.0000000.4407940.4024780.5813641.1273195.0000000.8312931.5404640.4407695.000000
HHO807.0425241495.0723850.0010000.0010000.0010000.0010002.9654260.0148791.5078981.5208583.7863952.5766160.0010000.0010000.0019430.020105
MSWOA24.92914837.8843490.0017880.0019820.0019840.9696760.0052470.0015241.4980621.5229953.2009663.7553120.0010000.0010000.0070920.067323
MWOA61,268.82951681,677.6398620.0010000.0143320.2650393.2062601.5854390.0010001.1893821.5946085.0000001.5734790.8358540.0010000.0010000.001000
RBMO3180.3418557215.4986690.0010000.0010000.0010000.0010000.0010000.0010001.5076581.5239695.0000001.9999890.0010000.0010000.0072930.087557
MRBMO7.8139010.2563600.0010000.0010000.0010000.0010000.0010000.0010001.5076581.5239695.0000001.9999890.0010000.0010000.0072930.087557
Table 9. Parametric results (Ave and Std) of each algorithm in antenna S-parameter optimization.
Function | Metrics | AROA | GWO | RIME | WOA | ROA | HHO | MSWOA | MWOA | RBMO | MRBMO
F1Ave1.5868 × 10 0 8.3149 × 10 1 9.4475 × 10 2 4.9843 × 10 2 2.1584 × 10 6 3.0128 × 10 7 7.5848 × 10 4 4.2347 × 10 1 5.0025 × 10 1 0.0000 × 10 0
Std1.3555 × 10 0 9.6380 × 10 1 5.8396 × 10 2 1.4451 × 10 1 4.5519 × 10 6 1.1466 × 10 6 2.2885 × 10 3 1.2317 × 10 0 5.2254 × 10 2 0.0000 × 10 0
F2Ave3.4783 × 10 2 0.0000 × 10 0 5.4425 × 10 4 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0
Std7.5769 × 10 2 0.0000 × 10 0 1.4625 × 10 3 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0
F3Ave1.7331 × 10 0 0.0000 × 10 0 1.4123 × 10 2 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0
Std3.6256 × 10 0 0.0000 × 10 0 1.6904 × 10 2 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0
F4Ave1.8067 × 10 1 1.4665 × 10 1 1.0956 × 10 1 1.4556 × 10 1 8.7451 × 10 1 6.0083 × 10 0 1.5250 × 10 1 1.7621 × 10 1 2.2786 × 10 0 1.0729 × 10 7
Std2.4265 × 10 1 9.4171 × 10 1 5.0148 × 10 0 2.3540 × 10 0 3.2941 × 10 0 8.5264 × 10 0 4.2306 × 10 0 3.1070 × 10 1 4.8407 × 10 0 5.8579 × 10 7
F5Ave1.9159 × 10 1 1.4283 × 10 1 1.7725 × 10 1 3.3105 × 10 2 2.1978 × 10 1 2.6291 × 10 2 2.9688 × 10 1 3.8916 × 10 3 2.4216 × 10 2 0.0000 × 10 0
Std1.4883 × 10 1 1.1732 × 10 1 9.8241 × 10 2 1.7225 × 10 2 4.1198 × 10 1 5.6502 × 10 2 1.6194 × 10 1 7.6360 × 10 3 1.7012 × 10 2 0.0000 × 10 0
F6Ave6.9601 × 10 3 0.0000 × 10 0 8.6394 × 10 5 0.0000 × 10 0 2.5715 × 10 16 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0
Std9.1197 × 10 3 0.0000 × 10 0 7.9839 × 10 5 0.0000 × 10 0 1.1019 × 10 15 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0 0.0000 × 10 0
F7Ave3.9895 × 10 1 2.6438 × 10 1 1.1680 × 10 1 2.1158 × 10 1 3.8193 × 10 1 1.2388 × 10 1 2.2010 × 10 1 3.8804 × 10 1 1.4856 × 10 1 2.5500 × 10 6
Std3.3136 × 10 0 7.3601 × 10 0 8.8572 × 10 0 1.1953 × 10 1 5.7164 × 10 1 1.7567 × 10 1 1.2744 × 10 1 1.8945 × 10 1 1.6176 × 10 5 8.0639 × 10 6
F8Ave3.9628 × 10 1 12.96221.9946 × 10 1 54.01964.0846 × 10 2 8.5873 × 10 4 7.6902 × 10 4 8.8006 × 10 5 2.9829 × 10 1 0.0000 × 10 0
Std1.2821 × 10 1 1.5151 × 10 1 1.0542 × 10 1 1.8167 × 10 1 6.0217 × 10 2 2.6630 × 10 3 2.2268 × 10 3 4.8107 × 10 4 1.3939 × 10 1 0.0000 × 10 0
Table 10. Results of non-parametric tests of different algorithms in antenna S-parameter optimization. ‘Rank’ refers to the ranking of the Average Friedman Value for ten metaheuristic algorithms. ‘+/=/−’ refers to the result of the Wilcoxon rank-sum test.
Algorithm | Average Friedman Value | Rank | +/=/−
AROA | 9.4167 | 10 | 8/0/0
GWO | 6.1084 | 8 | 5/3/0
RIME | 7.3958 | 9 | 8/0/0
WOA | 5.7437 | 7 | 5/3/0
ROA | 4.2812 | 3 | 6/2/0
HHO | 4.1146 | 2 | 5/3/0
MSWOA | 5.2896 | 6 | 5/3/0
MWOA | 5.1396 | 5 | 5/3/0
RBMO | 5.2021 | 5 | 5/3/0
MRBMO | 2.3083 | 1 | -
Table 11. Effectiveness of MRBMO and other SOTA algorithms in antenna S-parameter optimization.
Metrics (w/t/l) | AROA | GWO | RIME | WOA | ROA | HHO | MSWOA | MWOA | RBMO | MRBMO
Total | 0/0/8 | 0/3/5 | 0/0/8 | 0/3/5 | 0/2/6 | 0/3/5 | 0/3/5 | 0/3/5 | 0/3/5 | 5/3/0
OE | 0.00% | 37.50% | 0.00% | 25.00% | 37.50% | 37.50% | 37.50% | 37.50% | 37.50% | 100.00%