Article

Red-Crowned Crane Optimization: A Novel Biomimetic Metaheuristic Algorithm for Engineering Applications

1 School of Mechanical and Electrical Engineering, Sanjiang University, Nanjing 210012, China
2 Nanjing Agricultural Robotics and Equipment Engineering Research Center, Nanjing 210012, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(9), 565; https://doi.org/10.3390/biomimetics10090565
Submission received: 30 June 2025 / Revised: 18 August 2025 / Accepted: 21 August 2025 / Published: 24 August 2025
(This article belongs to the Special Issue Nature-Inspired Metaheuristic Optimization Algorithms 2025)

Abstract

This paper proposes a novel bio-inspired metaheuristic algorithm called the Red-crowned Crane Optimization (RCO) algorithm. This algorithm is developed by mathematically modeling four habits of red-crowned cranes: dispersing for foraging, gathering for roosting, dancing, and escaping from danger. The foraging strategy is used to search unknown areas to ensure the exploration ability, and the roosting behavior prompts cranes to approach better positions, thereby enhancing the exploitation performance. The crane dancing strategy further balances the local and global search capabilities of the algorithm. Additionally, the introduction of the escaping mechanism effectively reduces the possibility of the algorithm falling into local optima. The RCO algorithm is compared with eight popular optimization algorithms on a large number of benchmark functions. The results show that the RCO algorithm can find better solutions for 74% of the CEC-2005 test functions and 50% of the CEC-2022 test functions. This algorithm has a fast convergence speed and high search accuracy on most functions, and it can handle high-dimensional problems. The Wilcoxon signed-rank test results demonstrate the significant superiority of the RCO algorithm over other algorithms. In addition, applications to eight practical engineering problems further demonstrate its ability to find near-optimal solutions.

1. Introduction

Optimization algorithms are a class of computational methods used to find optimal or near-optimal solutions in complex problem spaces. Problems in many fields can be transformed into mathematical models that “maximize or minimize a certain objective function under specific constraints”, and optimization algorithms are the core tools for solving these problems. For example, in the field of engineering design, optimization algorithms can help explore design schemes with better performance, lower costs, or higher reliability [1,2]. In path planning problems, optimization can provide the path with the shortest distance or the least time to reach the destination [3,4]. For control systems, optimization methods can be used to determine controller parameters that enable the system to achieve better control performance [5,6]. In view of the demand for such technologies in real-world problems, many optimization methods have been proposed to solve these problems.
Optimization methods can generally be summarized as deterministic optimization methods and stochastic optimization methods. For some linear, low-dimensional, and continuous simple problems, traditional deterministic methods are sufficient to solve them effectively. However, most real-world problems exhibit complex characteristics such as nonlinearity, discreteness, and high dimensionality, which make it difficult for traditional methods to give satisfactory results. To address the challenges that are difficult to solve by deterministic optimization techniques, stochastic optimization techniques have emerged. Stochastic methods do not need to extract analytical information (such as gradients) from the objective function; instead, they exhibit random search behavior to provide suitable solutions to the problems. It should be noted that since these algorithms rely on the principle of random search, they often require multiple attempts before they can give a reliable solution.
Metaheuristic algorithms are a typical class of stochastic optimization methods for solving complex optimization problems [7,8]. They approach the optimal solution of a problem by utilizing candidate solutions in the search space and appropriate search strategies. The main characteristics of these algorithms include flexibility, robustness, and the ability to handle large-scale problems. Over the past few decades, metaheuristic algorithms have become one of the main methods for solving complex optimization problems and have been widely applied in many fields such as engineering optimization, energy management, power, and computer vision.
Metaheuristic algorithms are usually proposed by designers inspired by biological sciences, physical and chemical phenomena, and animal, plant, and human behaviors, which are used to solve real-world optimization problems. They can be roughly classified into four categories: evolution-based algorithms, swarm-based algorithms, human-based algorithms, and physics- and chemistry-based algorithms. The Genetic Algorithm (GA) [9] is the earliest evolutionary algorithm and was proposed in the 1970s. It searches for optimal solutions by simulating the processes of inheritance, crossover, and mutation in nature. Based on the idea of the GA, another representative Differential Evolution (DE) [10] was introduced in the 1990s. Swarm intelligence optimization algorithms acquire a near-optimal solution based on the wisdom of biological populations. These algorithms work through the synergistic behavior between individuals to complete tasks that individuals cannot complete independently. Particle Swarm Optimization (PSO) [11] is a very popular swarm intelligence algorithm that originated from research on the foraging behavior of bird or fish populations. PSO has attracted widespread attention due to its advantages such as ease of implementation and few adjustment parameters. Since then, many new swarm intelligence algorithms have been proposed. Some examples are given below: the Artificial Bee Colony (ABC) [12], inspired by the cooperative foraging of bee colonies, the Fruit Fly Optimization Algorithm (FOA) [13], proposed based on the fact that fruit flies use their keen senses of smell and vision for predation, the Grey Wolf Optimizer (GWO) [14], which is inspired by the hierarchy system and hunting behavior of grey wolves, the Butterfly Algorithm (BA) [15], which imitates the flight behavior of butterflies attracted by fragrance, the Whale Optimization Algorithm (WOA) [16], which simulates the hunting process of humpback whales, and the Dragonfly Algorithm (DA) [17], developed based on the foraging and migration behaviors of dragonflies. Human-based optimization algorithms are mainly inspired by daily human behaviors, such as management, cooperation, learning, and social behaviors. Several human-based algorithms are briefly described below. Rao et al. proposed Teaching–Learning-Based Optimization (TLBO) [18], which was inspired by the impact of teachers on student learning and interactive learning among students. The algorithm has been widely improved and applied due to its efficient optimization performance. Das et al. introduced Student Psychology-Based Optimization (SPBO) [19] by simulating the psychology of students expecting to improve their academic performance. In order to solve complex global optimization problems, Feng et al. proposed the Cooperation Search Algorithm (CSA) [20] inspired by the collaborative behavior of enterprise groups. Trojovská et al. developed the Chef-Based Optimization Algorithm (CBOA) [21] and Driving Training-Based Optimization (DTBO) [22] by simulating the process of chefs learning cooking skills and drivers learning driving skills, respectively. Physics- and chemistry-based algorithms mimic some of the physical and chemical phenomena in nature for the purpose of optimization search.
Similarly, several examples are presented below, including the Gravitational Search Algorithm (GSA) [23] based on Newton’s law of gravity, the Multi-Verse Optimizer (MVO) [24] based on the theory of multi-verse evolution, the Material Generation Algorithm (MGA) [25] based on the chemical reactions of materials, and Fick’s Law Algorithm (FLA) [26] based on Fick’s first law of diffusion. In addition to the above four common classifications, there are other metaheuristic algorithms that are not included, such as Sine Cosine Algorithm (SCA) [27], Arithmetic Optimization Algorithm (AOA) [28], Hunger Games Search (HGS) [29], and Football-Game-Based Optimization (FGBO) [30]. Table 1 shows more metaheuristic algorithms and summarizes their inspirations. Note that since new optimization algorithms are constantly being proposed, it is impossible to cite all of them.
The core concepts of metaheuristic algorithms are exploration and exploitation [39]. In general, the design goal of all algorithms is to balance the exploration and exploitation phases so that the search agents efficiently converge to the near-optimal solutions [40]. Although various algorithms have been proposed, many of them are unsatisfactory when dealing with high-dimensional and multimodal problems. This is because their inappropriate search mechanisms make them susceptible to local optima. For example, the GWO [14], SCA [27], and Golden Jackal Optimization (GJO) algorithms [41] all require search individuals to move towards the optimal position found in the current iteration. This mechanism leads to a gradual decay in population diversity during the iteration process, which weakens the exploration capability of the algorithms, resulting in limited optimization performance in high-dimensional or multi-modal problems. For the HHO algorithm [33], it achieves the transition from global exploration to local exploitation through an energy factor (which decreases linearly with the number of iterations). However, this mode is highly dependent on the effectiveness of the exploration phase. If the algorithm fails to find high-quality candidate solutions during the exploration phase, it will easily fall into local optima during the exploitation phase. In addition, we find that some algorithms, such as the Slime Mould Algorithm (SMA) [42], lack effective strategies to jump out of local optima, thus performing poorly on multi-modal functions.
By studying the habits of red-crowned cranes, we have found that their behaviors such as foraging, roosting, and escaping are similar to the processes of exploration, exploitation, and jumping out of local optima in metaheuristic algorithms. Therefore, this study aims to develop an algorithm model capable of dealing with certain complex multimodal and high-dimensional problems based on the behaviors of red-crowned cranes. The main contributions of this work are as follows:
  • A biomimetic RCO algorithm is proposed, which simulates the four behaviors of red-crowned cranes in nature: dispersing for foraging, escaping from danger, gathering for roosting, and crane dance. The foraging strategy is used to search unknown areas to ensure the exploration ability, and the roosting behavior prompts cranes to approach better positions, thereby enhancing the exploitation performance. The crane dancing strategy further balances the local and global search capabilities of the algorithm. The introduction of the escaping mechanism effectively reduces the possibility of the algorithm falling into local optima.
  • The RCO algorithm is tested on CEC-2005 and CEC-2022 benchmark functions and is compared with eight popular algorithms from multiple perspectives, including optimization accuracy, convergence speed, rank-sum test, and scalability.
  • The RCO algorithm is used to optimize eight constrained application problems, and the ability of the RCO algorithm to deal with engineering design problems is compared with fifteen other optimization algorithms.
The framework of the rest of this paper is as follows: Section 2 describes the design inspiration of RCO and establishes its mathematical model. Section 3 gives the results and discussion of RCO on two types of test function sets. Section 4 tests the performance of RCO on engineering design problems. This work is summarized and the outlook for future research is indicated in Section 5.

2. Red-Crowned Crane Optimization (RCO)

2.1. Inspiration Source

Red-crowned cranes are a type of large wading bird in the Gruidae family of the order Gruiformes, named after the red crowns on their heads. In general, they have a body height of 1.5 to 1.6 m, a body length of 1 to 1.5 m, a wingspan of 2.2 to 2.5 m, and a weight of 5 to 10.5 kg [43]. Male and female cranes are very similar in appearance, with their bodies almost pure white. As shown in Figure 1a, their heads are bare, featherless, and vermilion in color. The cheeks, throat, and neck appear mostly dark brown. The primary flight feathers and the entire body feathers of the red-crowned cranes are all white, which is especially noticeable when they are flying [44]. The secondary and tertiary flight feathers appear black, and the tertiary flight feathers are long and curved, covering the tails. Therefore, the black feathers on their tails when they are standing are actually the tertiary flight feathers.
Red-crowned cranes symbolize happiness, longevity, and loyalty. Unfortunately, red-crowned cranes have become one of the rarest cranes in the world. At present, red-crowned cranes have been included in the IUCN Red List of Threatened Species and have also been included in the list of key protected wild animals in China [45,46]. In the world, red-crowned cranes are mainly distributed in China, Japan, Korea, Mongolia, and Russia. Red-crowned cranes in Japan usually do not migrate, while those in other countries migrate according to the seasons. Generally, red-crowned cranes leave their wintering grounds in February or March every year and go to areas such as the Russian Far East and Heilongjiang in China for breeding. In September or October, they migrate from their breeding grounds to Korea and east-central China for the winter [47].
During migration, several family groups usually gather into a larger group (up to 40–50 individuals, or even more than 100 individuals). However, after completing the migration, they still live in families or small groups. In addition, it has been observed that red-crowned cranes often forage in a certain area in pairs or alone, and sometimes in small groups. They eat a wide variety of foods, including fish, shrimp, tadpoles, clams, and the stems, leaves, and fruits of aquatic plants. Figure 1b shows red-crowned cranes foraging in shallow ponds. At night, red-crowned cranes roost in families in shoals or reed ponds surrounded by water. Whether they are foraging or resting, some adult red-crowned cranes are always particularly alert and constantly look up and around. Once they sense danger approaching, they take to the air and chirp loudly.
In late March or early April every year, red-crowned cranes begin their courtship behavior. During this period, the male crane turns its beak upward, raises its head and neck, looks up at the sky, spreads its wings, and sings loudly. The female crane responds loudly, and then they sing, jump, and dance with each other [48]. Their dance is very graceful, either stretching their necks and raising their heads, bending their knees and stooping, stepping in place, or jumping in the air, as shown in Figure 1c.
In this work, the foraging, escaping, roosting, and dancing behaviors of red-crowned cranes provide design inspiration and ideas for the RCO algorithm. These behaviors are analyzed and modeled in detail in the following section.

2.2. Population Initialization

The RCO algorithm proposed in this work is a new swarm-based metaheuristic algorithm that simulates the living habits of the red-crowned crane population. In the RCO algorithm, the positions of red-crowned cranes represent candidate solutions to the problem to be solved. To find the near-optimal solution to this problem, it is necessary to continuously adjust the positions of the red-crowned cranes so that the candidate solutions are as close to the optimal solution as possible. Note that a sufficient number of candidate solutions (search agents) and adjustment steps (iterations) can increase the likelihood of finding a better solution. Therefore, like other swarm intelligence algorithms, the RCO algorithm initially needs to generate a random population Xcranes in the search space, as follows:
$$X_{cranes}=\begin{bmatrix}X_1\\ \vdots\\ X_i\\ \vdots\\ X_n\end{bmatrix}=\begin{bmatrix}x_{1,1}&\cdots&x_{1,j}&\cdots&x_{1,d}\\ \vdots& &\vdots& &\vdots\\ x_{i,1}&\cdots&x_{i,j}&\cdots&x_{i,d}\\ \vdots& &\vdots& &\vdots\\ x_{n,1}&\cdots&x_{n,j}&\cdots&x_{n,d}\end{bmatrix},\tag{1}$$
$$x_{i,j}=lb_j+r_0\cdot\left(ub_j-lb_j\right),\quad i=1,2,\ldots,n,\; j=1,2,\ldots,d,\tag{2}$$
where Xi is the position of the i-th red-crowned crane, xi,j is the value of the j-th dimension (decision variable) of the i-th red-crowned crane, n is the number of red-crowned cranes, d is the number of decision variables, r0 is a random value in the range of 0 to 1, and ubj and lbj are the upper and lower bounds of the j-th decision variable, respectively.
In order to evaluate the quality of the candidate solutions, the fitness values of these candidate solutions for the objective function F(X) are calculated using Equation (3), and the solution corresponding to the best fitness value is recorded. As the candidate solutions are updated during iterations, the optimal solution obtained so far is also updated.
$$F_{cranes}=\begin{bmatrix}F(X_1)\\ \vdots\\ F(X_i)\\ \vdots\\ F(X_n)\end{bmatrix}=\begin{bmatrix}F(x_{1,1},\ldots,x_{1,j},\ldots,x_{1,d})\\ \vdots\\ F(x_{i,1},\ldots,x_{i,j},\ldots,x_{i,d})\\ \vdots\\ F(x_{n,1},\ldots,x_{n,j},\ldots,x_{n,d})\end{bmatrix},\tag{3}$$
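For concreteness, the initialization and evaluation steps of Equations (1)–(3) can be written as a short Python sketch (a minimal illustration rather than the authors' reference implementation; the sphere function is only a stand-in objective):

```python
import numpy as np

def initialize_cranes(n, d, lb, ub, rng):
    """Equations (1)-(2): scatter n cranes uniformly inside [lb, ub]^d."""
    lb = np.broadcast_to(lb, d)
    ub = np.broadcast_to(ub, d)
    return lb + rng.random((n, d)) * (ub - lb)

def evaluate(F, X):
    """Equation (3): fitness value of every candidate solution (row of X)."""
    return np.array([F(x) for x in X])

# Example usage with the sphere function as a stand-in objective
rng = np.random.default_rng(0)
X = initialize_cranes(n=50, d=30, lb=-100.0, ub=100.0, rng=rng)
fitness = evaluate(lambda x: float(np.sum(x**2)), X)
X_home = X[np.argmin(fitness)]  # best position so far: the ideal habitat
```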

2.3. Mathematical Model of RCO

In the RCO algorithm, the position update formulas of red-crowned cranes are established based on their behaviors of dispersing for foraging, avoiding danger, gathering for roosting, and dancing. Specifically, the following four rules are formulated:
  • Dispersing for foraging: First, the best position discovered by the red-crowned cranes in the current iteration is considered an ideal habitat. When daylight comes, the red-crowned cranes disperse from this habitat in search of food. They can be divided into two categories. Some red-crowned cranes forage randomly around the habitat and are known as random foragers. Others have the courage to fly away from the habitat to explore richer food sources; these red-crowned cranes are called long-distance foragers.
  • Avoiding danger: For the long-distance foragers, they usually live on the edge of the population and are more likely to be exposed to danger. Therefore, these red-crowned cranes are very alert when foraging for food. As soon as danger is imminent, they emit a ‘ko-lo-lo-’ call and take to the air to escape from the danger.
  • Gathering for roosting: When the red-crowned cranes forage during the day, they also consider choosing a better habitat. If one red-crowned crane reaches a better position, this position will become a new habitat. At night, with the guidance of this red-crowned crane, other red-crowned cranes gather towards the new habitat.
  • Crane dance: With a certain probability, a male red-crowned crane and a female red-crowned crane can successfully pair up and express their love for each other through singing, jumping, and dancing. During this time, they sing and make loud sounds. As a result, other red-crowned cranes stop near them to enjoy their dance. In this case, the two red-crowned cranes with the best and second-best fitness values are considered this pair of red-crowned cranes.

2.3.1. Strategies Based on Foraging and Roosting Behaviors

First, according to the fitness values, the red-crowned cranes are divided into long-distance foragers and random foragers in a certain proportion. Specifically, the fitness values of the red-crowned cranes are sorted. The top-ranked red-crowned cranes are classified as random foragers, and the bottom-ranked red-crowned cranes are classified as long-distance foragers. Here, since red-crowned cranes with poor fitness values are often in danger, they are always alert, which gives them the courage to fly far away for food. The positions of the random foragers Xrf and the long-distance foragers Xlf are as follows:
$$X_{rf}=\begin{bmatrix}X_{F1}\\ \vdots\\ X_{Fk}\end{bmatrix}=\begin{bmatrix}x_{F1,1}&\cdots&x_{F1,j}&\cdots&x_{F1,d}\\ \vdots& &\vdots& &\vdots\\ x_{Fk,1}&\cdots&x_{Fk,j}&\cdots&x_{Fk,d}\end{bmatrix},\tag{4}$$
$$X_{lf}=\begin{bmatrix}X_{F(k+1)}\\ \vdots\\ X_{Fn}\end{bmatrix}=\begin{bmatrix}x_{F(k+1),1}&\cdots&x_{F(k+1),j}&\cdots&x_{F(k+1),d}\\ \vdots& &\vdots& &\vdots\\ x_{Fn,1}&\cdots&x_{Fn,j}&\cdots&x_{Fn,d}\end{bmatrix},\tag{5}$$
where XF1, …, XFk are the positions of the first k red-crowned cranes after sorting the fitness values of all red-crowned cranes, and XF(k+1), …, XFn are the positions of the last (nk) red-crowned cranes after sorting the fitness values of all red-crowned cranes. The new positions reached by random and long-distance foragers dispersing from the habitat to forage are shown in Equations (6) and (7), respectively.
$$X_i^0(t+1)=X_i(t)+c_1\cdot R\cdot\left(X_{home}(t)-X_i(t)\right),\quad i=F1,\ldots,Fk,\tag{6}$$
$$X_i^0(t+1)=X_i(t)+c_2\cdot\left(X_{home}(t)-X_i(t)\right),\quad i=F(k+1),\ldots,Fn,\tag{7}$$
where Xi(t) is the position reached by the i-th red-crowned crane after t iterations, and Xhome(t) is the position of the ideal habitat obtained after t iterations, that is, the best position reached by the entire population of red-crowned cranes after t iterations. R is a 1 × d matrix, each element of which is a random number in the range of 0 to 1. c1 and c2 are the step adjustment coefficients. The step size c1 and parameter R determine the movement of random foragers. Since random foragers only search for food near the habitat, c1 is set to a constant value. However, if a large constant value of c2 were also used to force long-distance foragers to move, they would never be able to approach the optimal solution in the later stages of iteration. Therefore, we let c2 decrease linearly during the iteration process. Through trial and error, c1 is set to 2, and c2 is set to 5 − 4·t/tmax, where tmax is the maximum number of iterations.
Figure 2 shows the schematic diagram (2D) of the foraging behavior of the red-crowned cranes, in which (a) and (b) are possible results in the early and later iterations, respectively. It can be seen from Figure 2 that each random forager searches randomly within a rectangular range centered on the habitat, and the size of this range is determined by its distance from the habitat. Long-distance foragers fly along the direction of the habitat towards farther foraging grounds in the early iterations, but they still move within a certain area (the search space). In the later phases, they move in smaller steps, either foraging or resting. As a result, in the early iteration phases, the red-crowned crane population has rich diversity, with some search agents surrounding the current optimal position and others performing global searches. Obviously, the cooperation between random and long-distance foragers endows the algorithm with a certain local search capability during the exploration process, thereby accelerating convergence. As the number of iterations increases, the long-distance foragers gradually converge to the current optimal position. In addition, danger avoidance is an important strategy for long-distance foragers. Whether they are foraging or resting, they always keep looking up. When they realize that danger is coming, they quickly fly away from the dangerous position. It is assumed here that the probability of danger appearing increases with the number of iterations. In other words, the long-distance foragers have a high probability of changing foraging grounds in the later iterations. This strategy can effectively prevent the algorithm from falling into a local optimum, as shown in Figure 2b. Specifically, each long-distance forager has a risk coefficient cr, defined as a random value in the range of 0 to 1. When cr < (t/tmax)^{1/2}, the long-distance forager flies to a new position to replace the current foraging area, as follows:
$$X_i^0(t+1)=X_i^0(t+1)+r_1\cdot\left(X_{rand}-X_i^0(t+1)\right)+r_2\cdot\left(X_{ipbest}-X_i^0(t+1)\right),\quad i=F(k+1),\ldots,Fn,\tag{8}$$
where Xrand is a newly generated random position in the search space, Xipbest is the individual optimal position discovered by long-distance foragers in the past foraging process, which is different from the individual optimal position used in the PSO algorithm [11], and r1 and r2 are random numbers belonging to the range of 1 to 2.
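A minimal Python sketch of the foraging step (Equations (6) and (7)) and the escape jump (Equation (8)) is given below. It assumes minimization, an illustrative 1:1 split between random and long-distance foragers, and search bounds passed in by the caller; none of this is the authors' reference code.

```python
import numpy as np

def foraging_step(X, fitness, X_home, X_pbest, t, t_max, rng,
                  k=None, lb=-100.0, ub=100.0):
    """One foraging step: Eqs. (6)-(7), plus the escape jump of Eq. (8).
    X: (n, d) positions; X_home: (d,) current habitat;
    X_pbest: (n, d) per-crane historical best positions."""
    n, d = X.shape
    k = n // 2 if k is None else k              # illustrative 1:1 split
    order = np.argsort(fitness)                 # best fitness first (minimization)
    Xr, Xl = order[:k], order[k:]               # random / long-distance foragers

    c1 = 2.0                                    # constant step, random foragers
    c2 = 5.0 - 4.0 * t / t_max                  # linearly decreasing step
    X_new = X.copy()

    # Eq. (6): random foragers search a box centred on the habitat
    R = rng.random((len(Xr), d))
    X_new[Xr] = X[Xr] + c1 * R * (X_home - X[Xr])

    # Eq. (7): long-distance foragers overshoot the habitat early on
    X_new[Xl] = X[Xl] + c2 * (X_home - X[Xl])

    # Eq. (8): escape when the risk coefficient cr falls below sqrt(t/t_max)
    for i in Xl:
        if rng.random() < np.sqrt(t / t_max):
            X_rand = lb + rng.random(d) * (ub - lb)       # new random position
            r1, r2 = 1 + rng.random(), 1 + rng.random()   # r1, r2 in [1, 2]
            X_new[i] += r1 * (X_rand - X_new[i]) + r2 * (X_pbest[i] - X_new[i])
    return X_new
```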
When night falls, the red-crowned cranes fly back to their habitat one after another. Note that their habitat is not fixed every day. While foraging during the day, they also try to look for a better habitat. Therefore, the red-crowned crane that finds a better position after foraging guides other companions (including all random and long-distance foragers) to gather towards it. If no better position is found, all red-crowned cranes still return to their original habitat. The behavior of gathering for roosting can be modeled as follows:
$$X_i(t+1)=X_i^0(t+1)+c_3\cdot r_3\cdot\left(X_{home}^0(t+1)-X_i^0(t+1)\right),\tag{9}$$
where X0home(t + 1) is the habitat tonight, which may be updated or unchanged. c3 is a time-varying step adjustment coefficient, equal to 2 − t/tmax, and r3 is a random value in the range of 0 to 1. The design of c3 is also to ensure a balance between exploration and exploitation.
Figure 3 shows the schematic diagram (2D) of the gathering behavior of red-crowned cranes. As shown in Figure 3, a new habitat X0home(t + 1) is discovered, and the other red-crowned cranes gradually move there. They fly towards the habitat in a straight direction and then randomly select a position to roost, which may be close to the habitat or far away. This is because red-crowned cranes not only roost in habitats but may also choose to roost in foraging areas where food is abundant. The step coefficient c3 linearly decreases from 2 to 1 as the iterations increase, which reduces the probability of the red-crowned cranes moving away from the habitat and thus enables them to gradually converge to the habitat.
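The roosting step of Equation (9) then pulls every crane towards tonight's habitat; a sketch under the same assumptions as before (whether r3 is drawn per crane or per dimension is our assumption):

```python
import numpy as np

def roosting_step(X_forage, X_home_new, t, t_max, rng):
    """Eq. (9): all cranes gather towards tonight's habitat X_home_new.
    X_forage: (n, d) post-foraging positions; X_home_new: (d,) habitat."""
    n, _ = X_forage.shape
    c3 = 2.0 - t / t_max                 # decreases linearly from 2 to 1
    r3 = rng.random((n, 1))              # one random step factor per crane (assumed)
    return X_forage + c3 * r3 * (X_home_new - X_forage)
```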

2.3.2. Strategy Based on Crane Dance

This section establishes an update strategy based on the dancing behavior of the red-crowned cranes. In this case, a male crane and a female crane successfully pair up and engage in a series of interactions. As a result, other red-crowned cranes gather near them to watch their dance. Assume that the search agents with the best and second-best fitness values are considered the pair of red-crowned cranes. Then, the position update formulas of the red-crowned cranes can be modeled as follows:
$$X_i'(t+1)=X_i(t)+u\cdot r_4\cdot\left(X_{first}(t)-X_i(t)\right),\tag{10}$$
$$X_i''(t+1)=X_i(t)+u\cdot r_4\cdot\left(X_{second}(t)-X_i(t)\right),\tag{11}$$
$$X_i(t+1)=\frac{X_i'(t+1)+X_i''(t+1)}{2},\tag{12}$$
where r4 is the deviation coefficient, a random number in the range of 0 to 0.1. Xfirst(t) and Xsecond(t) correspond to the global best position and the global second-best position after t iterations, respectively. u is a random number that obeys a Gaussian distribution, as follows:
$$u\sim N\left(1,\sigma_u^2\right),\quad \sigma_u=1-\frac{t}{t_{max}},\tag{13}$$
Under this strategy, the other red-crowned cranes update their positions based on the positions of the pair of dancing red-crowned cranes. The deviation coefficient r4 is designed to prevent the algorithm from over-exploiting during iterations. From Equation (13), the dispersion of the Gaussian distribution is adaptively adjusted as σu gradually decreases from 1 to 0. This gives the algorithm a certain global search capability in the early iterations, while in the later iterations u is almost equal to 1, giving the algorithm a good exploitation capability. Figure 4 shows the variation in the random number u during iterations.
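A corresponding sketch of the crane dance update (Equations (10)–(13)); sharing the same u and r4 draws between Equations (10) and (11) is our assumption:

```python
import numpy as np

def crane_dance_step(X, X_first, X_second, t, t_max, rng):
    """Eqs. (10)-(12): move towards the dancing pair and average both moves."""
    n, _ = X.shape
    sigma_u = 1.0 - t / t_max                   # Eq. (13): shrinking dispersion
    u = rng.normal(1.0, sigma_u, size=(n, 1))   # u ~ N(1, sigma_u^2)
    r4 = 0.1 * rng.random((n, 1))               # deviation coefficient in [0, 0.1]
    Xa = X + u * r4 * (X_first - X)             # Eq. (10): towards the best crane
    Xb = X + u * r4 * (X_second - X)            # Eq. (11): towards the second best
    return (Xa + Xb) / 2.0                      # Eq. (12)
```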

2.4. Implementation of RCO

In each iteration of the RCO algorithm, either the position update strategy based on foraging and roosting behaviors or the position update strategy based on the crane dance is selected with a certain probability. Specifically, a probability coefficient pc is defined, which is a constant value in the range of 0.1 to 0.9. When a random number r5 in the range of 0 to 1 is less than pc, the position update strategy in Section 2.3.1 is selected in this iteration; otherwise, the position update strategy in Section 2.3.2 is selected. The parameter pc is analyzed in Section 3.2.5. Therefore, the internal parameters that need to be set for the RCO algorithm are the probability coefficient pc and the ratio of random foragers to long-distance foragers. The run of the algorithm is stopped based on the number of iterations t or the number of function evaluations FEs; that is, the external parameter of the maximum number of iterations tmax or the maximum number of function evaluations FEmax also needs to be given. In addition, the size n of the red-crowned crane population is also an external parameter. Algorithm 1 shows the pseudo-code of the proposed RCO algorithm.
Algorithm 1: Pseudo-code of RCO
Input: The maximum number of iterations tmax, the maximum number of function evaluations FEmax, the population size n, the probability coefficient pc, and the ratio of random foragers to long-distance foragers k:(n-k);
Output: The best solution Xbest and its fitness value F(Xbest).
1:   Initialize the red-crowned cranes Xcranes using Equations (1) and (2)
2:   t = 0 and FEs = 0
3:   while (t < tmax or FEs < FEmax)
4:    Calculate the fitness values of all red-crowned cranes using Equation (3)
5:    Record the best and second-best individuals so far
6:    if r5 < pc
7:     Take the position corresponding to the first fitness value as Xhome
8:     Sort the red-crowned cranes according to their fitness values
9:     for i = F1:Fk  /Foraging behavior of random foragers/
10:      Update the positions of the random foragers using Equation (6)
11:      Calculate the fitness values of random foragers
12:      end for
13:      for i = F(k+1):Fn  /Foraging behavior of long-distance foragers/
14:       Update the positions of the long-distance foragers using Equation (7)
15:       if cr < (t/tmax)^(1/2)  /Escaping behavior of long-distance foragers/
16:        Generate Xrand and record Xipbest of long-distance foragers
17:        Further update their positions using Equation (8)
18:       end if
19:       Calculate the fitness values of long-distance foragers
20:      end for
21:      Determine Xhome by comparing the fitness values of all red-crowned cranes after foraging with the fitness values of Xhome
22:      for i = 1:n  /Roosting behavior of red-crowned cranes/
23:       Update the positions of all red-crowned cranes using Equation (9)
24:      end for
25:      FEs = FEs + 2n
26:     else
27:      for i = 1:n  /Crane dance of red-crowned cranes/
28:       Update the positions of red-crowned cranes using Equation (12)
29:      end for
30:      FEs = FEs + n
31:     end if
32:     t = t + 1
33: end while
34: Return Xbest and F(Xbest)
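For readers who prefer executable code, the sketch below wires the helper functions sketched in Section 2.3 into the iteration scheme of Algorithm 1. It is a simplified illustration, not the reference implementation: bound clipping, the habitat update, and the per-crane best tracking are condensed, and the FEs bookkeeping follows lines 25 and 30 of the pseudo-code.

```python
import numpy as np

def rco(F, n=50, d=30, lb=-100.0, ub=100.0, fe_max=50_000, t_max=500,
        pc=0.9, rng=None):
    """Simplified RCO driver (illustrative sketch following Algorithm 1)."""
    rng = np.random.default_rng() if rng is None else rng
    X = initialize_cranes(n, d, lb, ub, rng)
    X_pbest = X.copy()
    fitness = evaluate(F, X)
    fes, t = n, 0
    while t < t_max and fes < fe_max:
        best, second = np.argsort(fitness)[:2]
        if rng.random() < pc:                  # foraging + roosting branch
            X = foraging_step(X, fitness, X[best], X_pbest, t, t_max, rng,
                              lb=lb, ub=ub)
            X = np.clip(X, lb, ub)
            f_new = evaluate(F, X)
            fes += n
            improved = f_new < fitness         # simplified per-crane best update
            X_pbest[improved] = X[improved]
            home = X[np.argmin(f_new)]         # simplified habitat update
            X = roosting_step(X, home, t, t_max, rng)
        else:                                  # crane dance branch
            X = crane_dance_step(X, X[best], X[second], t, t_max, rng)
        X = np.clip(X, lb, ub)
        fitness = evaluate(F, X)
        fes += n
        t += 1
    i = np.argmin(fitness)
    return X[i], fitness[i]

# Example: minimize the sphere function in 30 dimensions
# x_best, f_best = rco(lambda x: float(np.sum(x**2)))
```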

2.5. Computational Complexity of RCO

The computational complexity of RCO is mainly affected by the population initialization, the fitness evaluation, and the position update. During the initialization process, the computational complexity is O(n) due to the use of n search agents. When the foraging and roosting strategies are used to update the positions, a quick sort on the fitness values is required, and its computational complexity is O(nlog2n). In addition, all search agents perform the foraging and roosting phases successively, with a computational complexity of O(2nd + 2n). Then, the computational complexity of executing this strategy once is O(nlog2n + 2nd + 2n). The computational complexity of using the crane dance strategy is O(nd + n). In the iteration, the probability of using the former strategy is pc, and the probability of using the latter strategy is (1 − pc). Finally, the total computational complexity is
$$O\left(n+p_c t_{max}\left(n\log_2 n+2nd+2n\right)+(1-p_c)t_{max}\left(nd+n\right)\right)=O\left(n+p_c t_{max} n\log_2 n+(1+p_c)t_{max} n(d+1)\right).\tag{14}$$
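As a quick sanity check of this accounting (a worked instance, using the illustrative values n = 50 and pc = 0.9 from the engineering experiments), the expected number of function evaluations consumed per iteration follows directly from the two branches of Algorithm 1:

$$E[\text{FEs per iteration}]=p_c\cdot 2n+(1-p_c)\cdot n=(1+p_c)\,n=(1+0.9)\times 50=95,$$

so a budget of FEmax = 50,000 evaluations corresponds to roughly 50,000/95 ≈ 526 iterations.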

3. Experimental Results and Discussion

3.1. Experimental Setup

To demonstrate the performance of the proposed algorithm, this section tests RCO and eight other algorithms on two types of test function sets (CEC-2005 functions [49] and CEC-2022 functions [50]). The eight algorithms for comparison are as follows: Rüppell’s fox optimizer (RFO) [51], Eel and Grouper Optimizer (EGO) [52], Coati Optimization Algorithm (COA) [53], Harris Hawks Optimizer (HHO) [33], Slime Mould Algorithm (SMA) [42], Runge–Kutta Optimizer (RUN) [54], Golden Jackal Optimization (GJO) [41], and Dung Beetle Optimizer (DBO) [34]. Table 2 presents the internal parameter settings of the RCO algorithm and the other comparative algorithms, with parameters set according to their original papers. For the CEC-2005 functions, all algorithms are run with 50,000 function evaluations and 50 search agents. Similarly, for the CEC-2022 functions, FEs and n are set to 100,000 and 50, respectively. To ensure more reliable results, all algorithms are run 30 times. Experiments for all algorithms are conducted using MATLAB 2021a on an Intel Core i7-10750H processor with 2.60 GHz and 16.0 GB of main memory.

3.2. Tests on CEC-2005 Benchmark Functions

3.2.1. Exploitation and Exploration Analysis

The CEC-2005 test functions contain seven variable-dimensional unimodal functions (F1–F7), six variable-dimensional multimodal functions (F8–F13), and ten fixed-dimensional multimodal functions (F14–F23) [49]. Here, the dimensions of F1–F13 are fixed to 30. The unimodal functions can be used to test the exploitation capacity of the algorithm since they have only one global optimal solution, while the multimodal functions with local optimal solutions are used to examine the exploration capacity of the algorithm. Compared with the multimodal functions F8–F13, functions F14–F23 are easier to solve due to their lower dimensionality and fewer local optimal solutions. The performance of all optimizers on these functions is presented through the mean, standard deviation, minimum, and maximum values of the results of 30 independent runs, and the ranking (algorithms with the same performance share the sum of their rankings equally), as shown in Table A1 of Appendix A.
For the unimodal functions F1 and F3, the RCO, SMA, and EGO algorithms successfully obtain the optimal value (i.e., zero) in every run, which is significantly better than the other algorithms. For functions F2, F4, and F7, the RCO algorithm also provides better mean and standard deviation results than all other algorithms. For F6, RCO ranks third in terms of mean and standard deviation, after DBO and RUN. Overall, the RCO algorithm ranks first in functions F1, F2, F3, F4, and F7, and the test results of these unimodal functions demonstrate the superior exploitation capability of RCO.
The optimization process for the multimodal functions F9, F10, and F11 seems to be easy, and RCO, DBO, RUN, SMA, HHO, COA, and EGO can make them converge to the optimum in all runs. For function F12, the RCO optimizer gives satisfactory results and is slightly inferior to the RUN optimizer in terms of the mean and standard deviation values, ranking second. Observing the performance of RCO on functions F14–F23, it can be found that although some competing optimizers also provide the same mean values as RCO, they give clearly worse standard deviation results. In addition, it can be judged from the index of the maximum value that DBO, GJO, HHO, COA, and EGO have relatively weak exploration capabilities and are prone to falling into local optima. In contrast, RUN and SMA have better exploration performance. There is no doubt that the RCO optimizer shows significant exploration capability and strong competitiveness for multimodal functions.
From the ranking point of view, the RCO algorithm ranks first (or tied for first) in five of the seven unimodal functions, and ranks first (or tied for first) in twelve of the sixteen multimodal functions. Therefore, the proposed RCO algorithm exhibits better optimization accuracy on 74% of the CEC-2005 benchmark functions. This indicates that the RCO algorithm outperforms the other compared algorithms in optimizing the benchmark functions and ranks first overall.

3.2.2. Convergence Analysis

In order to intuitively analyze the convergence speed of the RCO algorithm, the convergence curves of the RCO algorithm and other algorithms on the benchmark test functions are drawn, as shown in Figure 5. Under the same population size and number of function evaluations, RCO is able to achieve better results more efficiently than other optimizers for the unimodal functions F1, F2, F3, F4, and F7, showing fast convergence speed and high convergence accuracy. For the multimodal test functions F9–F11 and F16–F23, RCO can effectively avoid falling into local optima by virtue of its promising exploration capability, thereby allowing these functions to quickly converge to the global optimum. Although some competing algorithms can also make these functions converge to similar values, their convergence speed is clearly slower than that of the RCO algorithm. In general, the convergence speed of the RCO algorithm is satisfactory.

3.2.3. Non-Parametric Statistical Analysis

To further evaluate the performance of the RCO algorithm, the Wilcoxon signed-rank test at α = 0.05 level is used to determine whether there is a significant difference in performance of RCO compared to its competitors. Table 3 gives the statistical results of RCO compared to its competitors one by one. Note that “+” means that there is a significant difference between the RCO algorithm and competitors at the 0.05 level, and RCO outperforms its competitors. “−” likewise means that there is a significant difference between RCO and competitors, but RCO is not as good as its competitors. “=” indicates that there is no significant difference between the two.
RCO significantly outperforms GJO on 20 functions, outperforms HHO, COA, and EGO on 16 functions, and outperforms RUN on 15 functions. Furthermore, it outperforms DBO and SMA on 13 functions. However, it is significantly inferior to other algorithms on at most four functions. This indicates that compared with other algorithms, the RCO algorithm achieves significant improvements in optimization performance. From the statistical results on unimodal functions alone, RCO outperforms GJO on all functions, outperforms DBO and RFO on six functions, and outperforms RUN, HHO, and COA on five functions. However, the comparison between RCO and SMA shows four “=”, indicating that there is no significant difference between them. In summary, the RCO algorithm exhibits significantly better exploitation ability compared with other algorithms except SMA. Finally, on multimodal functions, the RCO algorithm obtains at least 10 “+” and at most 3 “−” compared with all other algorithms except DBO and RFO, which means that it also has significant advantages in exploration ability. It is worth mentioning that there is no significant difference between RCO and RFO on most (10) multimodal functions. Overall, it can be concluded that the proposed algorithm has a significantly superior comprehensive optimization ability.

3.2.4. Scalability Analysis

Scalability analysis is used to show whether the algorithms perform similarly for low-dimensional and high-dimensional tasks. In this test, the dimensions of the variable-dimensional functions F1–F13 vary from 50 to 500 with a step size of 50. Figure 6 shows the results of the scalability test for all optimization algorithms. It can be observed that the performance of RCO, HHO, and COA does not decrease significantly as the variable dimension increases, and they basically show consistent search capabilities for all functions. The optimization capabilities of DBO, GJO, RUN, SMA, EGO, and RFO for some functions decrease with the increase in dimensions. It should be emphasized that the function F8 exhibits a downward trend since its optimal value decreases linearly as the dimension increases. To sum up, the variable dimensions have less impact on the solution quality of RCO, HHO, and COA than on that of the other optimization algorithms. Table 4 gives the evaluation results of all algorithms optimizing functions F1–F13 in 500 dimensions. It is found that RCO provides the best results for functions F1–F4, F6–F7, and F9–F12 (the ones tied for first place are also taken into account), and the differences between these results and the results of F1–F13 in 30 dimensions are small. This reveals that RCO has good scalability due to its ability to provide satisfactory results for high-dimensional problems.

3.2.5. Parameter Analysis

This section focuses on exploring the effect of the parameter pc on the performance of the RCO algorithm. Table 5 shows the mean and standard deviation values of 30 results when the probability coefficient pc of RCO is 0.1, 0.3, 0.5, 0.7, and 0.9. From Table 5, it is found that for unimodal functions except F6, the smaller the parameter pc is, the better the results are, which is particularly obvious for functions F1–F4. For multimodal functions F8, F12, and F13, as the probability coefficient pc increases, RCO provides better results. Moreover, for multimodal functions F14–F23, it is observed that the standard deviation value decreases with the increase in the probability coefficient pc, which means that the obtained results are more stable. There is no doubt that the probability coefficient significantly affects the exploitation and exploration performance of the algorithm. Specifically, a smaller pc value can enhance the exploitation capability of the RCO algorithm, and a larger pc value endows RCO with a superior exploration capability. Figure 7 shows the convergence curves of RCO for functions F1–F13 under different pc values. It can be found that, except for function F8, RCO has the fastest convergence speed when pc is 0.1 and the slowest convergence speed when pc is 0.9. Therefore, the RCO convergence speed can be improved by reducing the value of pc.

3.2.6. Running Time Comparison

Table A2 in Appendix A presents the average running time of each algorithm on the CEC-2005 benchmark functions. Among all the algorithms, EGO has the shortest computational time, while RUN and SMA have relatively longer computational times. All other algorithms, including RCO, have similar running times. Therefore, the time metrics indicate that the proposed RCO algorithm improves optimization capability without significantly increasing the computational time.

3.3. Tests on CEC-2022 Functions

This section uses the newer CEC-2022 test function set to further verify the search performance of RCO. It contains a total of twelve functions in four categories: unimodal, multimodal, hybrid, and composite. During the test, the dimensions of all functions are set to 10. Table A3 of Appendix A gives the mean, standard deviation, minimum, maximum, and ranking results of all algorithms for these functions. For function F24, RCO, RUN, and SMA provide superior results. Further comparison shows that the standard deviation value of RCO is smaller than that of RUN and SMA. For the multimodal functions F25–F28, RCO ranks first for two functions and ranks third after SMA and RFO for the other two. For two of the three hybrid functions F29–F31, the performance of RCO only reaches an intermediate level. In addition, RCO is also highly competitive on the composite functions F32–F35. These results show that the RCO algorithm is feasible and exhibits overall better performance than the compared algorithms on the CEC-2022 test set.
In addition, the Wilcoxon signed-rank test is also used for further comparison, and the statistical results are shown in Table 6. Obviously, RCO is superior to GJO, COA, EGO, and RFO on most functions. Compared with DBO and HHO, RCO outperforms them on half of the functions and shows no significant differences on the others. There are significant differences between RCO and RUN for six functions, but RCO is surpassed by RUN in two functions. RCO is better than SMA for five functions and worse than SMA for five functions. Overall, from a statistical significance perspective, RCO is rarely surpassed by other algorithms, as “−” appears only 9 times out of 96 (8 × 12) comparisons. In addition, “+” appears 54 times, and “=” appears 33 times. This further demonstrates that RCO can effectively optimize the CEC-2022 test set and indicates that it has certain advantages.

4. Application of RCO in Engineering Design Problems

To verify the ability of the proposed RCO algorithm to solve actual engineering problems, RCO and fifteen other optimizers (Rüppell’s fox optimizer (RFO) [51], Eel and Grouper Optimizer (EGO) [52], Coati Optimization Algorithm (COA) [53], Artificial Bee Colony (ABC) [12], Moth–Flame Optimization (MFO) [32], Ant Lion Optimizer (ALO) [55], Multi-Verse Optimizer (MVO) [24], Dragonfly Algorithm (DA) [17], Grasshopper Optimisation Algorithm (GOA) [56], Seagull Optimization Algorithm (SOA) [57], Harris Hawks Optimizer (HHO) [33], Slime Mould Algorithm (SMA) [42], Runge–Kutta Optimizer (RUN) [54], Golden Jackal Optimization (GJO) [41], and Dung Beetle Optimizer (DBO) [34]) are used to optimize eight reported classic engineering design problems. These optimization problems usually have some constraints, and the values of their decision variables are not always continuous, which poses challenges for the optimizer. Note that each optimizer is run 30 times on each problem and uses 50 individuals (n) and 50,000 function evaluations (FEs). The parameters for RCO are as follows: pc = 0.9, k:(n − k) = 1:1. The parameters for the other optimizers follow the settings in their original papers. The performance of RCO and the other optimizers is evaluated based on four statistical indicators (mean, standard deviation, and best and worst values) and convergence curves. These problems include the three-bar truss design [58], cantilever beam problem [59], corrugated bulkhead problem [60], speed reducer problem [61], Himmelblau’s nonlinear problem [62], I-beam problem [63], tension/compression spring problem [64], and reinforced concrete beam problem [65]. For the sake of conciseness in the main text, the results and analyses of the latter four problems are provided in Appendix B.1, Appendix B.2, Appendix B.3 and Appendix B.4.

4.1. Constraint Handling Method

In optimization problems with constraints, penalty function methods are an effective means to handle constraints. There are many penalty methods, and some common methods have been summarized in the literature [66]. The death penalty and static penalty are the two most popular methods. This paper chooses the latter to handle constraints in engineering design problems, which is achieved by constructing the following static penalty function:
$$F'(X)=F(X)+\sum_{i=1}^{m} l_i\cdot\left[\max\left(0,\;g_i(X)\right)\right]^2,\tag{15}$$
where F′(X) is the modified objective function, li (i = 1, 2, …, m, where m is the number of constraints of the problem to be solved) are the penalty factors, and gi(X) are the values of the constraint functions. In this paper, li are assigned large values.
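A minimal Python sketch of this static penalty scheme follows; using a single shared penalty factor for all constraints is our simplification of the per-constraint factors li:

```python
def penalized(F, constraints, penalty=1e9):
    """Equation (15): static penalty wrapper.
    constraints: list of functions g_i following the convention g_i(x) <= 0."""
    def F_prime(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return F(x) + penalty * violation
    return F_prime
```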

4.2. Three-Bar Truss Design Problem

In the three-bar truss design problem, it is necessary to consider reducing the volume as much as possible while satisfying the stress constraints on each side of the member. The two variables of this problem are the cross-sectional areas A1 and A2 of the bars, as shown in Figure 8. Specifically, this problem can be expressed as follows:
Consider variable $X=[x_1,x_2]=[A_1,A_2]$;
minimize $F(X)=\left(2\sqrt{2}\,x_1+x_2\right)\cdot l$;
subject to
$$g_1(X)=\frac{\sqrt{2}\,x_1+x_2}{\sqrt{2}\,x_1^2+2x_1x_2}\,P-\sigma\le 0,$$
$$g_2(X)=\frac{x_2}{\sqrt{2}\,x_1^2+2x_1x_2}\,P-\sigma\le 0,$$
$$g_3(X)=\frac{1}{x_1+\sqrt{2}\,x_2}\,P-\sigma\le 0;$$
where $l=100~\text{cm}$, $P=2~\text{kN/cm}^2$, $\sigma=2~\text{kN/cm}^2$;
variable range: $0\le x_1,x_2\le 1$.
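Encoded against the penalty wrapper above, the three-bar truss problem becomes an ordinary two-variable objective. This is a sketch; the rco driver call in the final comment assumes the simplified implementation from Section 2.4:

```python
import numpy as np

l, P, sigma = 100.0, 2.0, 2.0   # constants of the three-bar truss problem

def truss_volume(x):
    """Objective: structural volume (2*sqrt(2)*x1 + x2) * l."""
    return (2.0 * np.sqrt(2.0) * x[0] + x[1]) * l

truss_constraints = [
    lambda x: (np.sqrt(2.0) * x[0] + x[1])
              / (np.sqrt(2.0) * x[0]**2 + 2.0 * x[0] * x[1]) * P - sigma,
    lambda x: x[1] / (np.sqrt(2.0) * x[0]**2 + 2.0 * x[0] * x[1]) * P - sigma,
    lambda x: 1.0 / (x[0] + np.sqrt(2.0) * x[1]) * P - sigma,
]

# x_best, f_best = rco(penalized(truss_volume, truss_constraints),
#                      n=50, d=2, lb=0.0, ub=1.0, fe_max=50_000)
```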
Table 7 shows the results for the three-bar truss problem using sixteen optimizers. In terms of best value and mean value, DBO ranks first. It is worth mentioning that RCO finds the best value very close to that found by DBO and exceeds other algorithms in terms of mean value. The best solution obtained by RCO is X = (0.788633343920, 0.408366505177). Figure 9 shows the convergence curves for all methods. Obviously, RCO has a very satisfactory convergence performance in dealing with this problem.

4.3. Cantilever Beam Design Problem

Figure 10 shows a cantilever beam composed of five hollow square tubes with constant thickness. The width and height of each hollow tube are equal. The widths (heights) of the five hollow tubes are considered as the five variables in the design of the cantilever beam. In the cantilever beam problem, the goal is to minimize the weight of the entire cantilever beam. This problem can be expressed by the following formula:
Consider variable $X=[x_1,x_2,x_3,x_4,x_5]=[a_1,a_2,a_3,a_4,a_5]$;
minimize $F(X)=0.0624\,(x_1+x_2+x_3+x_4+x_5)$;
subject to
$$g_1(X)=\frac{61}{x_1^3}+\frac{37}{x_2^3}+\frac{19}{x_3^3}+\frac{7}{x_4^3}+\frac{1}{x_5^3}-1\le 0;$$
variable range: $0.01\le x_1,x_2,x_3,x_4,x_5\le 100$.
Table 8 shows the detailed results of various indicators when optimizing the cantilever beam problem. It is observed that DBO gives the first ranked mean, standard deviation, and best values, followed by RUN. RCO ranks third, but the difference between the best value returned by RCO and that returned by DBO or RUN is very small. The solution corresponding to the minimum weight of the cantilever beam achieved by RCO is X = (6.016442523051, 5.308074580329, 4.491372442055, 3.500315808517, 2.157480922246). In addition, it is found from Figure 11 that RCO converges to a good result at the earliest.

4.4. Corrugated Bulkhead Design Problem

In the corrugated bulkhead design problem, the weight of the bulkhead is required to be minimized. As shown in Figure 12, the bulkhead has four variable parameters, which are width b, depth h, length l, and thickness t. Additionally, the problem is subject to six inequalities. The mathematical model of this minimization problem is as follows:
Consider variable $X=[x_1,x_2,x_3,x_4]=[b,h,l,t]$;
minimize $F(X)=\dfrac{5.885\,x_4\,(x_1+x_3)}{x_1+\sqrt{x_3^2-x_2^2}}$;
subject to
$$g_1(X)=-x_2x_4\left(\frac{2x_1}{5}+\frac{x_3}{6}\right)+8.94\left(x_1+\sqrt{x_3^2-x_2^2}\right)\le 0,$$
$$g_2(X)=-x_2^2x_4\left(\frac{x_1}{5}+\frac{x_3}{12}\right)+2.2\left[8.94\left(x_1+\sqrt{x_3^2-x_2^2}\right)\right]^{4/3}\le 0,$$
$$g_3(X)=0.0156x_1-x_4+0.15\le 0,$$
$$g_4(X)=0.0156x_3-x_4+0.15\le 0,$$
$$g_5(X)=-x_4+1.05\le 0,$$
$$g_6(X)=x_2-x_3\le 0;$$
variable range: $0\le x_1,x_2,x_3\le 100$, $0\le x_4\le 5$.
From the statistical results shown in Table 9, it is found that RCO, DBO, MFO, and RFO provide the same best value, and this value is the optimal objective function value. The optimal solution obtained is X = (57.692307672839, 34.147620293494, 57.692307345992, 1.050000000008). Among these optimization algorithms, MFO ranks first due to its lower mean and standard deviation values, RFO ranks second only to MFO, and RCO ranks third. Finally, Figure 13 shows that RCO outperforms other algorithms in terms of convergence performance.

4.5. Speed Reducer Design Problem

Speed reducers are important transmission components in the industrial field. In the speed reducer optimization problem, it is required to minimize the weight of the speed reducer while satisfying four linear and seven nonlinear constraints. The problem has seven variables: the face width b, the module of teeth m, the number of teeth in the pinion z, the length of the first shaft between bearings l1, the length of the second shaft between bearings l2, the diameter of the first shaft d1, and the diameter of the second shaft d2. Note that the variable z can only be set to an integer; that is, it is a discrete variable. Figure 14 shows the schematic diagram of a speed reducer. The problem is clearly expressed as follows:
Consider variable $X=[x_1,x_2,x_3,x_4,x_5,x_6,x_7]=[b,m,z,l_1,l_2,d_1,d_2]$;
minimize
$$F(X)=0.7854\,x_1x_2^2\left(3.3333x_3^2+14.9334x_3-43.0934\right)-1.508\,x_1\left(x_6^2+x_7^2\right)+7.4777\left(x_6^3+x_7^3\right)+0.7854\left(x_4x_6^2+x_5x_7^2\right);$$
subject to
$$g_1(X)=\frac{27}{x_1x_2^2x_3}-1\le 0,$$
$$g_2(X)=\frac{397.5}{x_1x_2^2x_3^2}-1\le 0,$$
$$g_3(X)=\frac{1.93\,x_4^3}{x_2x_3x_6^4}-1\le 0,$$
$$g_4(X)=\frac{1.93\,x_5^3}{x_2x_3x_7^4}-1\le 0,$$
$$g_5(X)=\frac{\sqrt{\left(745\,x_4/(x_2x_3)\right)^2+16.9\times 10^6}}{110\,x_6^3}-1\le 0,$$
$$g_6(X)=\frac{\sqrt{\left(745\,x_5/(x_2x_3)\right)^2+157.5\times 10^6}}{85\,x_7^3}-1\le 0,$$
$$g_7(X)=\frac{x_2x_3}{40}-1\le 0,$$
$$g_8(X)=\frac{5x_2}{x_1}-1\le 0,$$
$$g_9(X)=\frac{x_1}{12x_2}-1\le 0,$$
$$g_{10}(X)=\frac{1.5x_6+1.9}{x_4}-1\le 0,$$
$$g_{11}(X)=\frac{1.1x_7+1.9}{x_5}-1\le 0;$$
variable range: $2.6\le x_1\le 3.6$, $0.7\le x_2\le 0.8$, $x_3\in\{17,18,\ldots,28\}$, $7.3\le x_4\le 8.3$, $7.8\le x_5\le 8.3$, $2.9\le x_6\le 3.9$, $5\le x_7\le 5.5$.
Judging from the statistical results for all optimization algorithms shown in Table 10, RCO, DBO, and MFO together give the best value F(X) = 2996.34816496 in repeated runs, which is the optimal value of this problem. The corresponding variable solution is X = (3.499999999997, 0.7, 17, 7.3, 7.8, 3.350214666096, 5.286683229756). Further, the mean value returned by RCO is not as good as that of MFO, but its standard deviation is better than that of MFO. Figure 15 shows the convergence curves for optimizing the speed reducer problem. It can be found that the RCO algorithm can obtain good results with only a small number of function evaluations.

5. Conclusions

This paper proposes a new bio-inspired algorithm called Red-crowned Crane Optimization (RCO) algorithm. This algorithm simulates the foraging, roosting, escaping from danger, and crane dance behaviors of the red-crowned crane. The RCO algorithm is tested on the classic CEC-2005 test functions and the newer twelve CEC-2022 test functions. The evaluation results and the signed-rank test results for the benchmark test function set highlight that RCO is highly competitive compared to its competitors. The convergence results show that RCO has a faster convergence speed, and the scalability test results show that RCO can cope with complex high-dimensional tasks. In addition, the sensitivity analysis results reveal that RCO has greater sensitivity to the probability coefficient. The smaller the probability coefficient, the better the exploitation performance of RCO, and the larger the probability coefficient, the better the exploration performance of RCO. The statistical and signed-rank test results for the CEC-2022 function set also demonstrate that RCO is effective and has certain advantages. The RCO algorithm is also used to solve eight reported classic engineering problems to verify its ability to cope with problems with constraints and discrete variables. The optimization results show that RCO is able to provide highly competitive solutions to these problems. Therefore, the RCO algorithm can be used as an alternative optimization algorithm.
In this study, the algorithm uses a certain number of iterations or function evaluations as the termination condition. In future research, to save computational costs, we will attempt other criteria, such as changes in the objective function and the accuracy of optimization results. More importantly, we plan to apply this algorithm to solve controller parameter optimization and robot path planning problems involved in our field.

Author Contributions

Conceptualization, J.K.; methodology, J.K.; software, Z.M.; validation, Z.M.; formal analysis, Z.M.; investigation, Z.M.; resources, J.K.; data curation, Z.M.; writing—original draft preparation, Z.M.; writing—review and editing, J.K.; visualization, Z.M.; supervision, J.K.; project administration, J.K.; funding acquisition, J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Nanjing Agricultural Robotics and Equipment Engineering Research Center.

Data Availability Statement

The data presented in this study may be available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Evaluation results for CEC-2005 functions optimized by RCO and other optimizers.
Index | RCO | DBO | GJO | RUN | SMA | HHO | COA | EGO | RFO
F1Mean05.2809 × 10−2303.9820 × 10−1282.2935 × 10−24301.0477 × 10−1882.7846 × 10−30001.9440 × 10−123
Std02.8925 × 10−2291.0427 × 10−1277.3031 × 10−24305.7385 × 10−1881.5252 × 10−29908.6274 × 10−123
Min001.9033 × 10−1321.3587 × 10−26401.1141 × 10−2184.4465 × 10−32309.2003 × 10−147
Max01.5843 × 10−2284.1799 × 10−1272.8536 × 10−24203.1431 × 10−1878.3539 × 10−29904.7129 × 10−122
Rank168517419
F2Mean1.9434 × 10−2382.3225 × 10−1275.2171 × 10−741.3440 × 10−1321.4401 × 10−2163.6619 × 10−1027.2913 × 10−1536.5602 × 10−2263.0528 × 10−64
Std1.0645 × 10−2371.2649 × 10−1261.3483 × 10−735.4701 × 10−1327.8877 × 10−2161.1524 × 10−1012.4693 × 10−1522.0974 × 10−2251.0288 × 10−63
Min3.5173 × 10−2891.8746 × 10−1559.6739 × 10−761.1056 × 10−14401.7402 × 10−1151.7869 × 10−1612.6197 × 10−2353.0543 × 10−78
Max5.8303 × 10−2376.9297 × 10−1265.5034 × 10−732.9423 × 10−1314.3203 × 10−2156.0460 × 10−1011.1194 × 10−1519.1667 × 10−2255.0084 × 10−63
Rank168537429
F3Mean01.2452 × 10−1081.1524 × 10−462.9380 × 10−20301.4027 × 10−1681.4690 × 10−30607.8981 × 10−33
Std06.8205 × 10−1085.9207 × 10−461.6092 × 10−20207.6814 × 10−1685.5447 × 10−30604.3252 × 10−32
Min08.7993 × 10−2961.2082 × 10−555.9121 × 10−23101.5541 × 10−1915.4347 × 10−32206.7212 × 10−52
Max03.7357 × 10−1073.2464 × 10−458.8141 × 10−20204.2073 × 10−1672.5106 × 10−30502.3690 × 10−31
Rank178516419
F4Mean1.3051 × 10−2261.2965 × 10−1081.4151 × 10−382.2911 × 10−1072.4594 × 10−2095.4802 × 10−983.4686 × 10−1531.2982 × 10−2101.2699 × 10−30
Std7.1478 × 10−2267.1009 × 10−1083.9775 × 10−381.2446 × 10−1061.3471 × 10−2081.5566 × 10−978.2139 × 10−1536.0192 × 10−2106.7899 × 10−30
Min1.8275 × 10−2798.3784 × 10−1574.6046 × 10−411.9124 × 10−12406.2512 × 10−1101.7214 × 10−1611.3882 × 10−2247.4726 × 10−49
Max3.9150 × 10−2253.8893 × 10−1072.1602 × 10−376.8188 × 10−1067.3783 × 10−2088.2062 × 10−972.8000 × 10−1523.2877 × 10−2093.7211 × 10−29
Rank158637429
F5Mean2.3255 × 1012.4222 × 1012.7537 × 1012.3002 × 1011.6940 × 10−11.4904 × 10−31.4705 × 10−12.7439 × 1011.9529 × 101
Std1.3112 × 10−12.0856 × 10−18.1310 × 10−11.2877 × 1001.3180 × 10−12.8720 × 10−32.4898 × 10−16.2131 × 10−13.0118 × 100
Min2.2954 × 1012.3818 × 1012.6218 × 1012.0991 × 1013.3788 × 10−31.9144 × 10−63.3091 × 10−32.6492 × 1014.1507 × 100
Max2.3553 × 1012.4549 × 1012.8830 × 1012.5741 × 1014.7196 × 10−11.5115 × 10−21.3118 × 1002.8745 × 1012.3250 × 101
Rank679531284
F6Mean8.2703 × 10−82.2893 × 10−132.2185 × 1001.5362 × 10−96.9817 × 10−41.1121 × 10−51.2516 × 10−24.6390 × 1003.5866 × 10−6
Std8.8799 × 10−87.0206 × 10−134.9214 × 10−16.2534 × 10−103.0287 × 10−41.5140 × 10−51.3605 × 10−24.0602 × 10−12.2533 × 10−6
Min3.5588 × 10−91.0557 × 10−151.2516 × 1006.0582 × 10−103.0562 × 10−42.2074 × 10−81.4359 × 10−43.8596 × 1008.2509 × 10−7
Max3.2621 × 10−72.9547 × 10−123.5005 × 1003.7636 × 10−91.3765 × 10−36.7073 × 10−54.4397 × 10−25.3294 × 1008.4035 × 10−6
Rank318265794
F7Mean3.8235 × 10−54.9274 × 10−41.0833 × 10−41.3639 × 10−49.4520 × 10−56.1586 × 10−53.9921 × 10−53.9288 × 10−51.9593 × 10−3
Std5.4596 × 10−54.1904 × 10−49.5409 × 10−58.4876 × 10−51.1986 × 10−48.2083 × 10−52.7439 × 10−52.7069 × 10−52.1825 × 10−3
Min5.4448 × 10−74.2673 × 10−58.5343 × 10−62.2017 × 10−54.8300 × 10−61.6027 × 10−67.4315 × 10−73.5073 × 10−61.1118 × 10−4
Max2.8157 × 10−41.5603 × 10−33.7173 × 10−43.8663 × 10−46.3406 × 10−43.6845 × 10−41.0214 × 10−49.0637 × 10−59.0398 × 10−3
Rank186754329
F8Mean−7.8058 × 103−1.0527 × 104−4.7758 × 103−8.3556 × 103−1.2569 × 104−1.2530 × 104−1.2569 × 104−7.0586 × 103−7.4069 × 103
Std1.1148 × 1031.9844 × 1031.0318 × 1035.9000 × 1023.0944 × 10−22.1715 × 1022.8011 × 10−29.2974 × 1029.1907 × 102
Min−1.1403 × 104−1.2562 × 104−6.5500 × 103−9.4898 × 103−1.2569 × 104−1.2569 × 104−1.2569 × 104−9.3321 × 103−9.5175 × 103
Max−6.5455 × 103−6.8480 × 103−3.0756 × 103−6.9719 × 103−1.2569 × 104−1.1380 × 104−1.2569 × 104−5.8874 × 103−5.8374 × 103
Rank649523187
F9Mean000000002.4215 × 100
Std000000009.2071 × 100
Min000000000
Max000000004.5768 × 101
Rank111111119
F10Mean8.8818 × 10−168.8818 × 10−164.6777 × 10−158.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−16
Std009.0135 × 10−16000000
Min8.8818 × 10−168.8818 × 10−164.4409 × 10−158.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−16
Max8.8818 × 10−168.8818 × 10−167.9936 × 10−158.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−16
Rank119111111
F11Mean000000002.0490 × 10−3
Std000000009.1755 × 10−3
Min000000000
Max000000004.9149 × 10−2
Rank111111119
F12Mean2.1603 × 10−73.4556 × 10−32.3100 × 10−16.7256 × 10−103.9651 × 10−46.6063 × 10−79.6963 × 10−54.5220 × 10−12.6797 × 10−7
Std4.7274 × 10−71.8927 × 10−21.7503 × 10−11.9651 × 10−105.9390 × 10−48.9000 × 10−71.2439 × 10−47.9334 × 10−21.7045 × 10−7
Min1.2207 × 10−81.5935 × 10−179.9289 × 10−24.0781 × 10−101.2079 × 10−61.4431 × 10−81.7254 × 10−72.9122 × 10−17.2406 × 10−8
Max2.5373 × 10−61.0367 × 10−17.9104 × 10−11.2067 × 10−92.9710 × 10−34.0629 × 10−64.9112 × 10−46.3744 × 10−17.9917 × 10−7
Rank278164593
F13Mean9.7940 × 10−11.1682 × 10−11.4702 × 1005.0640 × 10−38.8519 × 10−49.0349 × 10−61.4980 × 10−37.9877 × 10−19.8447 × 10−3
Std7.1097 × 10−11.4208 × 10−12.5182 × 10−16.7515 × 10−32.0406 × 10−31.6272 × 10−52.4410 × 10−33.5452 × 10−13.0028 × 10−2
Min2.4881 × 10−54.6345 × 10−158.4171 × 10−12.2125 × 10−94.2916 × 10−55.8846 × 10−84.2391 × 10−63.1981 × 10−19.5356 × 10−7
Max2.7692 × 1006.0266 × 10−11.9765 × 1002.1024 × 10−21.1598 × 10−28.0253 × 10−51.2578 × 10−21.7298 × 1009.8924 × 10−2
Rank869421375
F14Mean9.9800 × 10−11.4887 × 1004.9781 × 1002.3436 × 1009.9800 × 10−11.0311 × 1009.9800 × 10−11.0118 × 1009.9800 × 10−1
Std2.3142 × 10−161.8652 × 1004.3820 × 1002.4550 × 1002.0785 × 10−141.8148 × 10−14.9859 × 10−115.1198 × 10−24.1233 × 10−16
Min9.9800 × 10−19.9800 × 10−19.9800 × 10−19.9800 × 10−19.9800 × 10−19.9800 × 10−19.9800 × 10−19.9800 × 10−19.9800 × 10−1
Max9.9800 × 10−11.0763 × 1011.2671 × 1011.0763 × 1019.9800 × 10−11.9920 × 1009.9800 × 10−11.2798 × 1009.9800 × 10−1
Rank179836452
F15Mean3.2153 × 10−45.8790 × 10−43.4290 × 10−47.6533 × 10−44.3310 × 10−43.5002 × 10−43.9725 × 10−43.2385 × 10−43.1037 × 10−3
Std3.5851 × 10−52.8511 × 10−41.6828 × 10−44.6567 × 10−42.0813 × 10−41.6951 × 10−41.0307 × 10−42.0846 × 10−56.8926 × 10−3
Min3.0749 × 10−43.0749 × 10−43.0749 × 10−43.0749 × 10−43.0762 × 10−43.0776 × 10−43.0995 × 10−43.0878 × 10−43.0749 × 10−4
Max4.3029 × 10−41.2239 × 10−31.2232 × 10−31.2232 × 10−31.2233 × 10−31.2437 × 10−36.7254 × 10−44.1770 × 10−42.0363 × 10−2
Rank173864529
F16Mean−1.0316 × 100−1.0316 × 100−1.0316 × 100−1.0316 × 100−1.0316 × 100−1.0316 × 100−1.0316 × 100−1.0310 × 100−1.0316 × 100
Std5.3761 × 10−166.6486 × 10−161.8306 × 10−83.1600 × 10−131.2302 × 10−111.7724 × 10−121.4653 × 10−46.2720 × 10−46.7752 × 10−16
Min−1.0316 × 100−1.0316 × 100−1.0316 × 100−1.0316 × 100−1.0316 × 100−1.0316 × 100−1.0316 × 100−1.0316 × 100−1.0316 × 100
Max−1.0316 × 100−1.0316 × 100−1.0316 × 100−1.0316 × 100−1.0316 × 100−1.0316 × 100−1.0308 × 100−1.0293 × 100−1.0316 × 100
Rank127465893
F17Mean3.9789 × 10−13.9789 × 10−13.9789 × 10−13.9789 × 10−13.9789 × 10−13.9789 × 10−13.9829 × 10−13.9909 × 10−13.9789 × 10−1
Std006.5154 × 10−61.1965 × 10−112.9134 × 10−99.3218 × 10−81.3177 × 10−35.3145 × 10−30
Min3.9789 × 10−13.9789 × 10−13.9789 × 10−13.9789 × 10−13.9789 × 10−13.9789 × 10−13.9789 × 10−13.9789 × 10−13.9789 × 10−1
Max3.9789 × 10−13.9789 × 10−13.9791 × 10−13.9789 × 10−13.9789 × 10−13.9789 × 10−14.0455 × 10−14.2717 × 10−13.9789 × 10−1
Rank117456891
F18Mean3.0000 × 1003.0000 × 1003.0000 × 1003.0000 × 1003.0000 × 1003.0000 × 1003.0417 × 1003.0002 × 1003.0000 × 100
Std2.0534 × 10−154.6868 × 10−157.8820 × 10−72.5021 × 10−131.3151 × 10−116.4462 × 10−91.3874 × 10−14.7576 × 10−41.2934 × 10−15
Min3.0000 × 1003.0000 × 1003.0000 × 1003.0000 × 1003.0000 × 1003.0000 × 1003.0000 × 1003.0000 × 1003.0000 × 100
Max3.0000 × 1003.0000 × 1003.0000 × 1003.0000 × 1003.0000 × 1003.0000 × 1003.7212 × 1003.0021 × 1003.0000 × 100
Rank237456981
F19Mean−3.8628 × 100−3.8615 × 100−3.8612 × 100−3.8628 × 100−3.8628 × 100−3.8624 × 100−3.8430 × 100−3.8591 × 100−3.8628 × 100
Std2.2494 × 10−152.9649 × 10−33.1976 × 10−33.1487 × 10−101.0144 × 10−86.4592 × 10−42.5786 × 10−22.9782 × 10−32.7101 × 10−15
Min−3.8628 × 100−3.8628 × 100−3.8628 × 100−3.8628 × 100−3.8628 × 100−3.8628 × 100−3.8628 × 100−3.8625 × 100−3.8628 × 100
Max−3.8628 × 100−3.8549 × 100−3.8549 × 100−3.8628 × 100−3.8628 × 100−3.8602 × 100−3.7757 × 100−3.8544 × 100−3.8628 × 100
Rank168345972
F20Mean−3.2863 × 100−3.2362 × 100−3.1541 × 100−3.2507 × 100−3.2189 × 100−3.2101 × 100−2.8638 × 100−3.2469 × 100−3.2508 × 100
Std5.5415 × 10−28.4116 × 10−21.3116 × 10−15.9241 × 10−24.1112 × 10−26.5791 × 10−22.6506 × 10−18.9701 × 10−25.9397 × 10−2
Min−3.3220 × 100−3.3220 × 100−3.3220 × 100−3.3220 × 100−3.3220 × 100−3.3172 × 100−3.2038 × 100−3.3133 × 100−3.3224 × 100
Max−3.2031 × 100−3.0839 × 100−2.6381 × 100−3.2031 × 100−3.2027 × 100−3.0883 × 100−2.1416 × 100−3.0449 × 100−3.2032 × 100
Rank158367942
F21Mean−1.0153 × 101−7.6162 × 100−9.0867 × 100−1.0153 × 101−1.0153 × 101−5.3899 × 100−1.0153 × 101−5.3010 × 100−8.0521 × 100
Std6.1269 × 10−152.5802 × 1002.1780 × 1005.1533 × 10−106.3068 × 10−51.2815 × 1001.1031 × 10−41.0508 × 1002.8888 × 100
Min−1.0153 × 101−1.0153 × 101−1.0153 × 101−1.0153 × 101−1.0153 × 101−1.0135 × 101−1.0153 × 101−9.8387 × 100−1.0153 × 101
Max−1.0153 × 101−5.0551 × 100−3.5916 × 100−1.0153 × 101−1.0153 × 101−5.0476 × 100−1.0153 × 101−4.9663 × 100−2.6305 × 100
Rank175238496
F22Mean−1.0403 × 101−7.6775 × 100−9.9734 × 100−1.0403 × 101−1.0403 × 101−5.2615 × 100−1.0403 × 101−5.2316 × 100−8.9356 × 100
Std2.4240 × 10−152.8037 × 1001.6514 × 1004.0533 × 10−104.1346 × 10−59.6677 × 10−19.3489 × 10−59.3835 × 10−12.7659 × 100
Min−1.0403 × 101−1.0403 × 101−1.0403 × 101−1.0403 × 101−1.0403 × 101−1.0380 × 101−1.0403 × 101−1.0199 × 101−1.0403 × 101
Max−1.0403 × 101−2.7659 × 100−2.8676 × 100−1.0403 × 101−1.0403 × 101−5.0687 × 100−1.0402 × 101−5.0158 × 100−2.7519 × 100
Rank175238496
F23Mean−1.0536 × 101−8.9233 × 100−9.3781 × 100−1.0536 × 101−1.0536 × 101−5.3034 × 100−1.0536 × 101−5.2654 × 100−8.2315 × 100
Std4.6181 × 10−152.5060 × 1002.6826 × 1002.9870 × 10−103.5879 × 10−59.7000 × 10−17.3804 × 10−58.5743 × 10−13.3811 × 100
Min−1.0536 × 101−1.0536 × 101−1.0536 × 101−1.0536 × 101−1.0536 × 101−1.0439 × 101−1.0536 × 101−9.8046 × 100−1.0536 × 101
Max−1.0536 × 101−5.1285 × 100−2.4216 × 100−1.0536 × 101−1.0536 × 101−5.1086 × 100−1.0536 × 101−5.0699 × 100−2.4217 × 100
Rank165238497
Mean rank | 2.5000 | 5.3261 | 7.0870 | 4.2826 | 3.9783 | 5.2826 | 4.9783 | 5.8913 | 5.6739
Total rank | 1 | 6 | 9 | 3 | 2 | 5 | 4 | 8 | 7
Table A2. Average running time (s) for CEC-2005 functions optimized by RCO and other optimizers.
Index | RCO | DBO | GJO | RUN | SMA | HHO | COA | EGO | RFO
F1 | 0.2193 | 0.2014 | 0.2639 | 1.1369 | 0.8407 | 0.2436 | 0.1278 | 0.1708 | 0.3725
F2 | 0.2248 | 0.2036 | 0.2619 | 1.0974 | 0.7864 | 0.2292 | 0.1346 | 0.1895 | 0.3849
F3 | 0.5951 | 0.5736 | 0.6648 | 1.7278 | 1.1698 | 1.1522 | 0.6970 | 0.5200 | 0.7107
F4 | 0.2137 | 0.1979 | 0.2523 | 1.0689 | 0.7894 | 0.2804 | 0.1250 | 0.1648 | 0.3703
F5 | 0.2560 | 0.2415 | 0.2989 | 1.1596 | 0.8604 | 0.4365 | 0.1901 | 0.2065 | 0.4073
F6 | 0.2137 | 0.2070 | 0.2526 | 1.0572 | 0.8014 | 0.3303 | 0.1240 | 0.1776 | 0.3688
F7 | 0.3947 | 0.3755 | 0.4437 | 1.3862 | 0.9758 | 0.6812 | 0.4043 | 0.3431 | 0.5414
F8 | 0.2598 | 0.2659 | 0.3086 | 1.1745 | 0.8498 | 0.4571 | 0.1896 | 0.2086 | 0.3960
F9 | 0.2386 | 0.2120 | 0.2674 | 1.1250 | 0.8085 | 0.3817 | 0.1579 | 0.1823 | 0.3984
F10 | 0.2437 | 0.2220 | 0.2707 | 1.1269 | 0.8099 | 0.3928 | 0.1690 | 0.1983 | 0.4053
F11 | 0.2707 | 0.2672 | 0.3163 | 1.1852 | 0.8394 | 0.4590 | 0.2190 | 0.2228 | 0.4342
F12 | 0.7002 | 0.6956 | 0.8083 | 1.9595 | 1.3313 | 1.5442 | 0.9279 | 0.6701 | 0.8538
F13 | 0.7121 | 0.7053 | 0.8029 | 1.9861 | 1.2753 | 1.5663 | 0.9374 | 0.6804 | 1.1196
F14 | 1.0432 | 1.1472 | 1.0631 | 2.6387 | 1.3272 | 2.6602 | 1.5988 | 0.9993 | 1.2211
F15 | 0.1188 | 0.1918 | 0.1538 | 0.9414 | 0.4131 | 0.2722 | 0.1372 | 0.0949 | 0.3190
F16 | 0.1046 | 0.1737 | 0.1422 | 0.9128 | 0.3749 | 0.2630 | 0.1259 | 0.0855 | 0.3233
F17 | 0.0901 | 0.1721 | 0.1323 | 0.8944 | 0.3612 | 0.2314 | 0.1006 | 0.0906 | 0.3102
F18 | 0.0875 | 0.1585 | 0.1239 | 0.9066 | 0.3632 | 0.2248 | 0.0997 | 0.0705 | 0.3057
F19 | 0.1249 | 0.2003 | 0.1657 | 0.9690 | 0.4027 | 0.3106 | 0.1541 | 0.1048 | 0.3318
F20 | 0.1352 | 0.2021 | 0.1857 | 0.9752 | 0.4621 | 0.3217 | 0.1562 | 0.1138 | 0.3544
F21 | 0.1321 | 0.2016 | 0.1694 | 1.0126 | 0.4204 | 0.3049 | 0.1503 | 0.1067 | 0.3338
F22 | 0.1360 | 0.2137 | 0.1808 | 1.0024 | 0.4532 | 0.3254 | 0.1689 | 0.1192 | 0.3528
F23 | 0.1473 | 0.2197 | 0.1942 | 1.0146 | 0.4502 | 0.3686 | 0.1908 | 0.1292 | 0.3616
Table A3. Evaluation results for CEC-2022 test functions optimized by RCO and other optimizers.
Index | RCO | DBO | GJO | RUN | SMA | HHO | COA | EGO | RFO
F24Mean3.0000 × 1023.0000 × 1022.5036 × 1033.0000 × 1023.0000 × 1023.0066 × 1024.6737 × 1026.8079 × 1023.1808 × 102
Std9.5520 × 10−76.5870 × 10−32.1924 × 1031.1582 × 10−41.8839 × 10−42.6587 × 10−16.7693 × 1011.0234 × 1024.9090 × 101
Min3.0000 × 1023.0000 × 1024.3993 × 1023.0000 × 1023.0000 × 1023.0025 × 1023.2378 × 1024.8892 × 1023.0000 × 102
Max3.0000 × 1023.0004 × 1028.5756 × 1033.0000 × 1023.0000 × 1023.0144 × 1026.5187 × 1028.7563 × 1025.6092 × 102
Rank149235786
F25Mean4.0845 × 1024.2628 × 1024.4023 × 1024.0961 × 1024.0963 × 1024.1502 × 1024.3332 × 1024.2556 × 1024.0938 × 102
Std1.3978 × 1013.1933 × 1012.8100 × 1011.7157 × 1011.2213 × 1012.1922 × 1013.0829 × 1012.2737 × 1014.4093 × 100
Min4.0000 × 1024.0012 × 1024.0644 × 1024.0000 × 1024.0564 × 1024.0004 × 1024.0020 × 1024.0062 × 1024.0014 × 102
Max4.6894 × 1024.9270 × 1025.2571 × 1024.7078 × 1024.7393 × 1024.7104 × 1024.8553 × 1024.7090 × 1024.1946 × 102
Rank179345862
F26Mean6.1579 × 1026.2004 × 1026.3574 × 1026.1663 × 1026.0007 × 1026.2819 × 1026.1631 × 1026.1688 × 1026.0528 × 102
Std9.8711 × 1009.7215 × 1007.5118 × 1008.6491 × 1001.4000 × 10−11.2081 × 1017.7566 × 1004.0105 × 1003.2735 × 100
Min6.0163 × 1026.0342 × 1026.2369 × 1026.0322 × 1026.0002 × 1026.0546 × 1026.0192 × 1026.1177 × 1026.0035 × 102
Max6.3772 × 1026.3747 × 1026.5486 × 1026.3448 × 1026.0081 × 1026.5478 × 1026.3764 × 1026.2980 × 1026.1104 × 102
Rank379518462
F27Mean8.2262 × 1028.2519 × 1028.2548 × 1028.2315 × 1028.2322 × 1028.2463 × 1028.4184 × 1028.2900 × 1028.2810 × 102
Std9.1560 × 1009.2451 × 1007.7796 × 1006.3624 × 1001.0028 × 1016.6644 × 1001.5499 × 1011.1551 × 1011.2351 × 101
Min8.0796 × 1028.0791 × 1028.1225 × 1028.1094 × 1028.0697 × 1028.1300 × 1028.1892 × 1028.1094 × 1028.0895 × 102
Max8.3582 × 1028.3811 × 1028.4503 × 1028.3383 × 1028.4378 × 1028.4099 × 1028.7466 × 1028.5373 × 1028.5115 × 102
Rank156234987
F28Mean9.9915 × 1021.0588 × 1031.1603 × 1031.0233 × 1039.0018 × 1021.3088 × 1031.0078 × 1031.0060 × 1039.2283 × 102
Std1.3394 × 1021.2952 × 1021.5648 × 1027.4703 × 1012.8177 × 10−11.7534 × 1025.7384 × 1012.3982 × 1024.2240 × 101
Min9.0000 × 1029.0018 × 1029.8482 × 1029.4077 × 1029.0000 × 1021.0097 × 1039.0539 × 1029.0000 × 1029.0010 × 102
Max1.3612 × 1031.4457 × 1031.5323 × 1031.2464 × 1039.0091 × 1021.6358 × 1031.1268 × 1031.9679 × 1031.0671 × 103
Rank378619542
F29Mean3.3257 × 1034.8438 × 1037.1368 × 1033.3307 × 1035.9396 × 1033.7954 × 1033.7087 × 1034.9859 × 1035.0525 × 103
Std1.4167 × 1032.1609 × 1031.8322 × 1031.3872 × 1032.0529 × 1032.3647 × 1031.5260 × 1032.2015 × 1032.2488 × 103
Min1.8825 × 1031.9222 × 1032.6176 × 1031.8847 × 1031.9655 × 1031.9369 × 1031.9336 × 1031.8568 × 1031.8341 × 103
Max6.4416 × 1038.2446 × 1038.9817 × 1037.2376 × 1038.1397 × 1038.2625 × 1038.0003 × 1038.1304 × 1038.2965 × 103
Rank159284367
F30Mean2.0399 × 1032.0463 × 1032.0444 × 1032.0392 × 1032.0188 × 1032.0506 × 1032.0410 × 1032.0511 × 1032.0170 × 103
Std2.0802 × 1011.9596 × 1011.9758 × 1011.2284 × 1015.9280 × 1002.5132 × 1011.4021 × 1018.0569 × 1009.2755 × 100
Min2.0139 × 1032.0230 × 1032.0140 × 1032.0103 × 1032.0000 × 1032.0247 × 1032.0200 × 1032.0362 × 1032.0010 × 103
Max2.0923 × 1032.0995 × 1032.1126 × 1032.0653 × 1032.0226 × 1032.1135 × 1032.0732 × 1032.0693 × 1032.0300 × 103
Rank476328591
F31Mean2.2273 × 1032.2276 × 1032.2264 × 1032.2226 × 1032.2207 × 1032.2295 × 1032.2312 × 1032.2266 × 1032.2307 × 103
Std5.2714 × 1006.8296 × 1003.5636 × 1003.8180 × 1004.9956 × 10−11.2234 × 1012.9311 × 1005.6304 × 1006.8252 × 100
Min2.2058 × 1032.2124 × 1032.2206 × 1032.2041 × 1032.2200 × 1032.2075 × 1032.2227 × 1032.2126 × 1032.2121 × 103
Max2.2340 × 1032.2479 × 1032.2332 × 1032.2254 × 1032.2216 × 1032.2638 × 1032.2389 × 1032.2319 × 1032.2505 × 103
Rank563217948
F32Mean2.5293 × 1032.5308 × 1032.5826 × 1032.5293 × 1032.5293 × 1032.5346 × 1032.5595 × 1032.5542 × 1032.5360 × 103
Std2.6704 × 10−134.6347 × 1003.3278 × 1012.5820 × 10−49.2424 × 10−52.6781 × 1011.5811 × 1011.3486 × 1011.7542 × 101
Min2.5293 × 1032.5293 × 1032.5306 × 1032.5293 × 1032.5293 × 1032.5293 × 1032.5385 × 1032.5352 × 1032.5293 × 103
Max2.5293 × 1032.5483 × 1032.6734 × 1032.5293 × 1032.5293 × 1032.6762 × 1032.5980 × 1032.5905 × 1032.6183 × 103
Rank149325876
F33Mean2.5006 × 1032.5179 × 1032.5667 × 1032.5469 × 1032.5079 × 1032.5444 × 1032.5113 × 1032.5062 × 1032.5434 × 103
Std1.6487 × 10−14.4319 × 1016.2210 × 1015.7549 × 1012.8585 × 1016.3047 × 1013.5215 × 1012.5066 × 1015.7762 × 101
Min2.5003 × 1032.5003 × 1032.5003 × 1032.5004 × 1032.5003 × 1032.5004 × 1032.5009 × 1032.5008 × 1032.5001 × 103
Max2.5010 × 1032.6313 × 1032.6347 × 1032.6246 × 1032.6138 × 1032.6482 × 1032.6511 × 1032.6388 × 1032.6311 × 103
Rank159837426
F34Mean2.7510 × 1032.7715 × 1032.9569 × 1032.7450 × 1032.7549 × 1032.8005 × 1032.8759 × 1032.8192 × 1032.9846 × 103
Std1.5730 × 1022.5274 × 1022.2904 × 1021.4993 × 1021.9730 × 1021.5457 × 1021.6812 × 1021.0886 × 1021.8435 × 102
Min2.6000 × 1032.6000 × 1032.6040 × 1032.6000 × 1032.6000 × 1032.6033 × 1032.7700 × 1032.7614 × 1032.6038 × 103
Max3.0000 × 1033.9036 × 1033.4254 × 1032.9002 × 1033.2127 × 1033.2133 × 1033.3418 × 1033.2675 × 1033.4250 × 103
Rank248135769
F35Mean2.8656 × 1032.8678 × 1032.8701 × 1032.8637 × 1032.8619 × 1032.9049 × 1032.9244 × 1032.8933 × 1032.8737 × 103
Std1.6380 × 1007.3106 × 1001.2778 × 1011.7089 × 1001.4949 × 1004.4703 × 1012.5242 × 1011.0017 × 1011.0000 × 101
Min2.8626 × 1032.8597 × 1032.8586 × 1032.8586 × 1032.8586 × 1032.8649 × 1032.8810 × 1032.8721 × 1032.8635 × 103
Max2.8682 × 1032.8994 × 1032.9158 × 1032.8665 × 1032.8639 × 1033.0521 × 1032.9836 × 1032.9000 × 1032.9012 × 103
Rank345218976
Mean rank | 2.1667 | 5.4167 | 7.5000 | 3.2500 | 2.6667 | 6.2500 | 6.5000 | 6.0833 | 5.1667
Total rank | 1 | 5 | 9 | 3 | 2 | 7 | 8 | 6 | 4

Appendix B

Appendix B.1. Himmelblau’s Nonlinear Problem

Himmelblau’s nonlinear problem is a classic constrained optimization problem, so we attempt to solve it using the proposed RCO algorithm. The problem contains five decision variables, and the goal is to minimize the objective function subject to six inequality constraints. It can be described as follows:
Consider variable $X = (x_1, x_2, x_3, x_4, x_5)$;
minimize $F(X) = 5.3578547 x_3^2 + 0.8356891 x_1 x_5 + 37.293239 x_1 - 40792.141$;
subject to
$g_1(X) = -85.334407 - 0.0056858 x_2 x_5 - 0.0006262 x_1 x_4 + 0.0022053 x_3 x_5 \le 0$,
$g_2(X) = 85.334407 + 0.0056858 x_2 x_5 + 0.0006262 x_1 x_4 - 0.0022053 x_3 x_5 - 92 \le 0$,
$g_3(X) = -80.51249 - 0.0071317 x_2 x_5 - 0.0029955 x_1 x_2 - 0.0021813 x_3^2 + 90 \le 0$,
$g_4(X) = 80.51249 + 0.0071317 x_2 x_5 + 0.0029955 x_1 x_2 + 0.0021813 x_3^2 - 110 \le 0$,
$g_5(X) = -9.300961 - 0.0047026 x_3 x_5 - 0.0012547 x_1 x_3 - 0.0019085 x_3 x_4 + 20 \le 0$,
$g_6(X) = 9.300961 + 0.0047026 x_3 x_5 + 0.0012547 x_1 x_3 + 0.0019085 x_3 x_4 - 25 \le 0$;
variable range: $78 \le x_1 \le 102$, $33 \le x_2 \le 45$, $27 \le x_3, x_4, x_5 \le 45$.
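To make the model concrete, the following minimal Python sketch evaluates the objective, the six constraints, and a static-penalty fitness that a population-based optimizer could minimize directly. The penalty form and the coefficient rho are illustrative assumptions; the paper does not state that RCO uses this particular constraint-handling scheme.

```python
import numpy as np

def himmelblau_f(x):
    """Objective F(X) as defined above."""
    x1, _, x3, _, x5 = x
    return 5.3578547 * x3**2 + 0.8356891 * x1 * x5 + 37.293239 * x1 - 40792.141

def himmelblau_g(x):
    """Constraints g1..g6; a solution is feasible when all values are <= 0."""
    x1, x2, x3, x4, x5 = x
    e1 = 85.334407 + 0.0056858 * x2 * x5 + 0.0006262 * x1 * x4 - 0.0022053 * x3 * x5
    e2 = 80.51249 + 0.0071317 * x2 * x5 + 0.0029955 * x1 * x2 + 0.0021813 * x3**2
    e3 = 9.300961 + 0.0047026 * x3 * x5 + 0.0012547 * x1 * x3 + 0.0019085 * x3 * x4
    return np.array([-e1, e1 - 92.0, 90.0 - e2, e2 - 110.0, 20.0 - e3, e3 - 25.0])

def penalized_fitness(x, rho=1e6):
    """Static penalty: objective plus rho times the sum of squared violations."""
    g = himmelblau_g(x)
    return himmelblau_f(x) + rho * np.sum(np.maximum(g, 0.0) ** 2)

# The reported best solution evaluates to about -30665.5387, with the
# largest constraint value close to zero (an active constraint).
x_best = np.array([78.0, 33.0, 29.995256025680, 45.0, 36.775812905789])
print(himmelblau_f(x_best), penalized_fitness(x_best), himmelblau_g(x_best).max())
```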
Table A4 shows the evaluation results of RCO and the other algorithms on this problem. RCO, DBO, MFO, and RFO reach the minimum value of the objective function over 30 repeated runs. The optimal value is F(X) = −30665.53867178, and the corresponding optimal solution is X = (78, 33, 29.995256025680, 45, 36.775812905789). Further comparison reveals that RCO outperforms DBO, MFO, and RFO in terms of the mean and standard deviation, so RCO ranks first on this problem. In addition, RCO converges faster than most of the algorithms mentioned above, as shown in Figure A1.
Table A4. Evaluation results for Himmelblau’s nonlinear problem optimized by RCO and other optimizers.
Optimizer | Mean | Std | Best | Worst
RCO | −30,665.32349854 | 7.1895 × 10−1 | −30,665.53867178 | −30,661.79892780
DBO | −30,635.71460849 | 1.6335 × 102 | −30,665.53867178 | −29,770.81677325
GJO | −30,657.37893505 | 4.1842 × 100 | −30,663.55315586 | −30,646.38130073
RUN | −30,660.13539140 | 1.9969 × 101 | −30,665.53586489 | −30,561.69882047
SMA | −30,665.53734918 | 1.5540 × 10−3 | −30,665.53865029 | −30,665.53262256
HHO | −30,532.30441778 | 1.5263 × 102 | −30,663.48977472 | −30,201.43851773
SOA | −30,641.70338442 | 1.7341 × 101 | −30,657.74185848 | −30,566.54192773
GOA | −30,496.99193113 | 2.2198 × 102 | −30,665.43888010 | −29,837.56764245
DA | −30,625.26683639 | 9.1543 × 101 | −30,665.53866631 | −30,342.54049981
MVO | −30,575.23585370 | 7.9803 × 101 | −30,662.44936380 | −30,386.62854427
EGO | −30,354.73643722 | 1.5849 × 102 | −30,634.70568304 | −30,085.99212698
COA | −30,221.79295008 | 2.7718 × 102 | −30,660.04569407 | −29,657.05292693
ALO | −30,624.67887473 | 1.0054 × 102 | −30,665.53858867 | −30,218.93339012
MFO | −30,665.31055628 | 1.2452 × 100 | −30,665.53867178 | −30,658.71750920
RFO | −30,658.81428956 | 1.8304 × 101 | −30,665.53867178 | −30,572.05049500
ABC | −30,607.67379908 | 1.1696 × 101 | −30,630.43422662 | −30,584.93003770
Figure A1. Convergence curves for Himmelblau’s nonlinear problem optimized by RCO and other optimizers.

Appendix B.2. I-Beam Design Problem

I-beams are widely used in the construction industry. The I-beam design problem is to minimize the vertical deflection of the beam while meeting the stress and cross-sectional area constraints under the required loads. As shown in Figure A2, it contains four decision variables: the beam width b, the beam height h, the web thickness tw, and the flange thickness tf. The complete description of this problem is as follows:
Figure A2. Structure diagram of I-beam.
Consider variable $X = (x_1, x_2, x_3, x_4) = (b, h, t_w, t_f)$;
minimize $F(X) = \dfrac{5000}{x_3 (x_2 - 2 x_4)^3 / 12 + x_1 x_4^3 / 6 + 2 x_1 x_4 \left( (x_2 - x_4)/2 \right)^2}$;
subject to
$g_1(X) = 2 x_1 x_4 + x_3 (x_2 - 2 x_4) - 300 \le 0$,
$g_2(X) = \dfrac{180000 x_2}{x_3 (x_2 - 2 x_4)^3 + 2 x_1 x_4 \left[ 4 x_4^2 + 3 x_2 (x_2 - 2 x_4) \right]} + \dfrac{15000 x_1}{x_3^3 (x_2 - 2 x_4) + 2 x_1^3 x_4} - 16 \le 0$;
variable range: $10 \le x_1 \le 50$, $10 \le x_2 \le 80$, $0.9 \le x_3, x_4 \le 5$.
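As a worked check of the formulas above, the short Python sketch below evaluates the deflection objective and both constraints; plugging in the best design reported in Table A5 reproduces F(X) ≈ 0.013074, with the area constraint essentially active. The function names are illustrative.

```python
def ibeam_deflection(x):
    """Vertical deflection F(X) = 5000 / I, with I from the formula above."""
    b, h, tw, tf = x
    inertia = tw * (h - 2 * tf) ** 3 / 12 + b * tf ** 3 / 6 \
        + 2 * b * tf * ((h - tf) / 2) ** 2
    return 5000.0 / inertia

def ibeam_g(x):
    """Cross-sectional area (g1) and stress (g2) constraints, both <= 0."""
    b, h, tw, tf = x
    g1 = 2 * b * tf + tw * (h - 2 * tf) - 300.0
    g2 = (180000.0 * h / (tw * (h - 2 * tf) ** 3
                          + 2 * b * tf * (4 * tf ** 2 + 3 * h * (h - 2 * tf)))
          + 15000.0 * b / (tw ** 3 * (h - 2 * tf) + 2 * b ** 3 * tf) - 16.0)
    return [g1, g2]

x_best = [50.0, 80.0, 0.9, 2.321792260692]
print(ibeam_deflection(x_best), ibeam_g(x_best))  # ~0.013074, [~0, negative]
```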
According to the evaluation results in Table A5, RCO, DBO, HHO, GOA, DA, MFO, and RFO all solve this problem successfully. The optimal solution obtained is X = (50, 80, 0.9, 2.321792260692), which achieves the minimum vertical deflection F(X) = 0.013074118905. Furthermore, the proposed RCO returns better mean and standard deviation values than the other six algorithms, demonstrating its advantage and stability. Figure A3 shows that RCO achieves both high accuracy and a fast convergence rate on the I-beam problem.
Table A5. Evaluation results for I-beam design problem optimized by RCO and other optimizers.
Optimizer | Mean | Std | Best | Worst
RCO | 0.013074120217 | 7.0324 × 10−9 | 0.013074118905 | 0.013074157447
DBO | 0.013129724700 | 2.1161 × 10−4 | 0.013074118905 | 0.013908205841
GJO | 0.013074931995 | 6.2377 × 10−7 | 0.013074156740 | 0.013076087308
RUN | 0.013074121447 | 2.4426 × 10−9 | 0.013074118987 | 0.013074128823
SMA | 0.013074122152 | 1.3191 × 10−8 | 0.013074118913 | 0.013074191503
HHO | 0.013075343023 | 5.1039 × 10−6 | 0.013074118905 | 0.013102198502
SOA | 0.013076269212 | 2.0988 × 10−6 | 0.013074133499 | 0.013081247447
GOA | 0.013074159564 | 2.1984 × 10−7 | 0.013074118905 | 0.013075323503
DA | 0.013114850040 | 1.6511 × 10−4 | 0.013074118905 | 0.013908205863
MVO | 0.013075030628 | 9.5513 × 10−7 | 0.013074134449 | 0.013078209725
EGO | 0.013186062373 | 1.8456 × 10−4 | 0.013075003778 | 0.013818841015
COA | 0.013156084492 | 2.3038 × 10−4 | 0.013074288457 | 0.013915045976
ALO | 0.013080104310 | 1.7143 × 10−5 | 0.013074118913 | 0.013151854904
MFO | 0.013083109110 | 4.8065 × 10−5 | 0.013074118905 | 0.013337561384
RFO | 0.013074170655 | 8.5477 × 10−8 | 0.013074118905 | 0.013075136549
ABC | 0.013247028789 | 3.3982 × 10−5 | 0.013139820756 | 0.013264707253
Figure A3. Convergence curves for I-beam design problem optimized by RCO and other optimizers.

Appendix B.3. Tension/Compression Spring Design Problem

The tension/compression spring design problem has been optimized by many researchers and is therefore also considered here. The schematic diagram of a tension/compression spring is shown in Figure A4. The task is to minimize the weight of the spring while satisfying four inequality constraints. The problem includes three variables: the wire diameter d, the mean coil diameter D, and the number of active coils N. The design problem is modeled as follows:
Figure A4. Structure diagram of tension/compression spring.
Consider variable $X = (x_1, x_2, x_3) = (d, D, N)$;
minimize $F(X) = x_1^2 x_2 (x_3 + 2)$;
subject to
$g_1(X) = 1 - \dfrac{x_2^3 x_3}{71785 x_1^4} \le 0$,
$g_2(X) = \dfrac{4 x_2^2 - x_1 x_2}{12566 (x_1^3 x_2 - x_1^4)} + \dfrac{1}{5108 x_1^2} - 1 \le 0$,
$g_3(X) = 1 - \dfrac{140.45 x_1}{x_2^2 x_3} \le 0$,
$g_4(X) = \dfrac{x_1 + x_2}{1.5} - 1 \le 0$;
variable range: $0.05 \le x_1 \le 2$, $0.25 \le x_2 \le 1.3$, $2 \le x_3 \le 15$.
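The spring model translates directly into a few lines of Python. The sketch below is illustrative; evaluating it at the best RCO solution quoted below reproduces the reported minimum weight of about 0.012665.

```python
def spring_weight(x):
    """Spring weight F(X) = x1^2 * x2 * (x3 + 2)."""
    d, D, N = x
    return d ** 2 * D * (N + 2)

def spring_g(x):
    """Constraints g1..g4; the design is feasible when all are <= 0."""
    d, D, N = x
    return [
        1.0 - D ** 3 * N / (71785.0 * d ** 4),
        (4.0 * D ** 2 - d * D) / (12566.0 * (d ** 3 * D - d ** 4))
        + 1.0 / (5108.0 * d ** 2) - 1.0,
        1.0 - 140.45 * d / (D ** 2 * N),
        (d + D) / 1.5 - 1.0,
    ]

x_best = [0.051696624950, 0.356899733826, 11.278303978922]
print(spring_weight(x_best), max(spring_g(x_best)))  # ~0.012665, ~0 or below
```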
The evaluation results of all optimizers on the spring design problem are presented in Table A6. In terms of the best value, RCO, DBO, and RFO give very close results, which are better than those of the other algorithms. More precisely, the best solution obtained by RCO is slightly worse than that of RFO. This solution is X = (0.051696624950, 0.356899733826, 11.278303978922), achieving a minimum weight of F(X) = 0.012665233831. However, RCO is mediocre in terms of the mean value. The convergence curves of RCO and the other optimizers are shown in Figure A5. Overall, the best solution found by RCO over multiple runs and its convergence behavior are satisfactory.
Figure A5. Convergence curves for spring design problem optimized by RCO and other optimizers.
Table A6. Evaluation results for spring design problem optimized by RCO and other optimizers.
Optimizer | Mean | Std | Best | Worst
RCO | 0.013053427666 | 7.8521 × 10−4 | 0.012665233831 | 0.015754390208
DBO | 0.013772440263 | 1.8547 × 10−3 | 0.012665304859 | 0.017773158078
GJO | 0.012736806942 | 4.4089 × 10−5 | 0.012684425803 | 0.012958545486
RUN | 0.013197665660 | 1.0827 × 10−3 | 0.012666334447 | 0.017773164310
SMA | 0.013213244427 | 7.3444 × 10−4 | 0.012669726461 | 0.015376704673
HHO | 0.013597048937 | 9.5390 × 10−4 | 0.012666237221 | 0.017774812796
SOA | 0.012752131110 | 2.2074 × 10−5 | 0.012704757407 | 0.012811498589
GOA | 0.015198086272 | 1.9339 × 10−3 | 0.012668078008 | 0.017867196026
DA | 0.012991820530 | 4.2921 × 10−4 | 0.012689820508 | 0.014901052692
MVO | 0.017090902859 | 1.6960 × 10−3 | 0.012761974498 | 0.018383176280
EGO | 0.013772491910 | 9.6876 × 10−4 | 0.012727047892 | 0.016798046250
COA | 0.013258265593 | 8.7897 × 10−4 | 0.012685526191 | 0.017299772980
ALO | 0.013724936736 | 1.5357 × 10−3 | 0.012672418461 | 0.017773186025
MFO | 0.013188626844 | 1.0021 × 10−3 | 0.012667085044 | 0.017772992685
RFO | 0.012665232789 | 5.6062 × 10−13 | 0.012665232788 | 0.012665232791
ABC | 0.013349426786 | 2.4523 × 10−4 | 0.012865635902 | 0.013783878098

Appendix B.4. Reinforced Concrete Beam Design Problem

The decision variables of the engineering problems discussed above are continuous. In contrast, discrete variables appear in the reinforced concrete beam design problem, which aims to minimize the total cost of reinforcement and concrete. Figure A6 shows the schematic diagram of a reinforced concrete beam; the problem includes two discrete variables (the reinforcement area As and the beam depth df) and one continuous variable (the beam width b). The optimization problem can be formulated as follows:
Figure A6. Structure diagram of reinforced concrete beam.
Consider variable $X = (x_1, x_2, x_3) = (A_s, d_f, b)$;
minimize $F(X) = 29.4 x_1 + 0.6 x_2 x_3$;
subject to
$g_1(X) = \dfrac{x_2}{x_3} - 4 \le 0$,
$g_2(X) = 180 + \dfrac{7.375 x_1^2}{x_3} - x_1 x_2 \le 0$;
variable range: $x_1 \in \{6, 6.16, 6.32, 6.6, 7, 7.11, 7.2, 7.8, 7.9, 8, 8.4\}$, $x_2 \in \{28, 29, \ldots, 40\}$, $5 \le x_3 \le 10$.
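Because x1 and x2 are discrete, a candidate produced by a continuous search step has to be mapped back onto the allowed sets. The following Python sketch shows a nearest-value repair followed by the cost evaluation; this repair strategy is an assumption for illustration, since the paper does not detail its discrete-variable handling.

```python
import numpy as np

# Allowed discrete values from the problem statement above.
AS_SET = np.array([6.0, 6.16, 6.32, 6.6, 7.0, 7.11, 7.2, 7.8, 7.9, 8.0, 8.4])
DF_SET = np.arange(28, 41)  # {28, 29, ..., 40}

def repair(x):
    """Snap As and df to their nearest allowed values; clip b to [5, 10]."""
    As = AS_SET[np.argmin(np.abs(AS_SET - x[0]))]
    df = DF_SET[np.argmin(np.abs(DF_SET - x[1]))]
    b = float(np.clip(x[2], 5.0, 10.0))
    return As, df, b

def cost(x):
    """Total cost F(X) = 29.4*x1 + 0.6*x2*x3."""
    As, df, b = x
    return 29.4 * As + 0.6 * df * b

x = repair((6.3, 34.2, 8.5))
print(x, cost(x))  # -> (6.32, 34, 8.5) and 359.208, the optimum in Table A7
```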
Table A7 reports four statistical indicators for RCO and the other optimizers on this problem. RCO surpasses all competing algorithms on these indicators. Notably, RCO and RFO find the cost-minimizing solution X = (6.32, 34, 8.499999999999) in every run. In addition, DBO, HHO, GOA, DA, and MFO can also find the optimal solution over repeated attempts. As Figure A7 shows, the convergence of RCO proceeds in a stepwise fashion and finally settles at the near-optimal solution of this problem.
Table A7. Evaluation results for reinforced concrete beam design problem optimized by RCO and other optimizers.
Optimizer | Mean | Std | Best | Worst
RCO | 359.20799999 | 5.7815 × 10−14 | 359.20799999 | 359.20799999
DBO | 360.01920035 | 1.3682 × 100 | 359.20799999 | 362.24999999
GJO | 359.21173823 | 3.6435 × 10−3 | 359.20801045 | 359.22462089
RUN | 359.30940274 | 5.5539 × 10−1 | 359.20800001 | 362.25001392
SMA | 359.32220003 | 6.2549 × 10−1 | 359.20800000 | 362.63400001
HHO | 359.43832098 | 8.6872 × 10−1 | 359.20799999 | 362.63399999
SOA | 359.23787686 | 2.6046 × 10−2 | 359.20928087 | 359.29587381
GOA | 363.64311954 | 5.2455 × 100 | 359.20799999 | 376.80000000
DA | 359.61359999 | 1.0517 × 100 | 359.20799999 | 362.24999999
MVO | 359.21213089 | 4.5808 × 10−3 | 359.20805144 | 359.22277961
EGO | 362.92917939 | 1.6793 × 100 | 359.49427687 | 366.56462811
COA | 362.16349552 | 2.9032 × 100 | 359.20800093 | 373.47135412
ALO | 359.65200005 | 1.1529 × 100 | 359.20800000 | 362.63400012
MFO | 360.70439999 | 1.6324 × 100 | 359.20799999 | 362.63399999
RFO | 359.20799999 | 6.5919 × 10−14 | 359.20799999 | 359.20799999
ABC | 359.20855745 | 9.3011 × 10−4 | 359.20800023 | 359.21200306
Figure A7. Convergence curves for reinforced concrete beam design problem optimized by RCO and other optimizers.

References

1. Rao, S.S. Engineering Optimization: Theory and Practice, 4th ed.; John Wiley and Sons: Hoboken, NJ, USA, 1997.
2. Houssein, E.H.; Elaziz, M.A.; Oliva, D.; Abualigah, L. Integrating Meta-Heuristics and Machine Learning for Real-World Optimization Problems; Springer Nature: Dordrecht, The Netherlands, 2022.
3. Wu, L.; Huang, X.; Cui, J.; Liu, C.; Xiao, W. Modified adaptive ant colony optimization algorithm and its application for solving path planning of mobile robot. Expert Syst. Appl. 2023, 215, 119410.
4. Dian, S.; Zhong, J.; Guo, B.; Liu, J.; Guo, R. A smooth path planning method for mobile robot using a BES-incorporated modified QPSO algorithm. Expert Syst. Appl. 2022, 208, 118256.
5. Zeng, W.; Zhu, W.; Hui, T.; Chen, L.; Xie, J.; Yu, T. An IMC-PID controller with particle swarm optimization algorithm for MSBR core power control. Nucl. Eng. Des. 2020, 360, 110513.
6. Qi, Z.; Shi, Q.; Zhang, H. Tuning of digital PID controllers using particle swarm optimization algorithm for a CAN-based DC motor subject to stochastic delays. IEEE Trans. Ind. Electron. 2020, 67, 5637–5646.
7. Kumari, M. 4—A review on metaheuristic algorithms: Recent and future trends. In Metaheuristics-Based Materials Optimization; Elsevier: Amsterdam, The Netherlands, 2025; pp. 103–128.
8. Rajwar, K.; Deep, K.; Das, S. An exhaustive review of the metaheuristic algorithms for search and optimization: Taxonomy, applications, and open challenges. Artif. Intell. Rev. 2023, 56, 13187–13257.
9. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73.
10. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
11. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
12. Karaboga, D.; Basturk, B. Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems. In Proceedings of the 12th International Fuzzy Systems Association World Congress, Cancun, Mexico, 18–21 June 2007; pp. 789–798.
13. Pan, W. A new fruit fly optimization algorithm: Taking the financial distress model as an example. Knowl.-Based Syst. 2012, 26, 69–74.
14. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
15. Arora, S.; Singh, S. Butterfly algorithm with Lèvy flights for global optimization. In Proceedings of the 2015 International Conference on Signal Processing, Computing and Control, Solan, India, 24–26 September 2015; pp. 220–224.
16. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
17. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073.
18. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315.
19. Das, B.; Mukherjee, V.; Das, D. Student psychology based optimization algorithm: A new population based optimization algorithm for solving optimization problems. Adv. Eng. Softw. 2020, 146, 102804.
20. Feng, Z.; Niu, W.; Liu, S. Cooperation search algorithm: A novel metaheuristic evolutionary intelligence algorithm for numerical optimization and engineering optimization problems. Appl. Soft Comput. 2021, 98, 106734.
21. Trojovská, E.; Dehghani, M. A new human-based metahurestic optimization method based on mimicking cooking training. Sci. Rep. 2022, 12, 14861.
22. Dehghani, M.; Trojovská, E.; Trojovský, P. A new human-based metaheuristic algorithm for solving optimization problems on the base of simulation of driving training process. Sci. Rep. 2022, 12, 9924.
23. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248.
24. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513.
25. Talatahari, S.; Azizi, M.; Gandomi, A.H. Material generation algorithm: A novel metaheuristic algorithm for optimization of engineering problems. Processes 2021, 9, 859.
26. Hashim, F.A.; Mostafa, R.R.; Hussien, A.G.; Mirjalili, S.; Sallam, K.M. Fick’s law algorithm: A physical law-based algorithm for numerical optimization. Knowl.-Based Syst. 2023, 260, 110146.
27. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
28. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, M.A.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
29. Yang, Y.; Chen, H.; Heidari, A.A.; Gandomi, A.H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 2021, 177, 114864.
30. Dehghani, M.; Mardaneh, M.; Guerrero, J.M.; Malik, O.P.; Kumar, V. Football game based optimization: An application to solve energy commitment problem. Int. J. Intell. Eng. Syst. 2020, 13, 514–523.
31. Koza, J.R. Genetic Programming: On the Programming of Computers by Means of Natural Selection; MIT Press: Cambridge, MA, USA, 1992.
32. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249.
33. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
34. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336.
35. Abdel-Basset, M.; Mohamed, R.; Zidan, M.; Jameel, M.; Abouhawwash, M. Mantis search algorithm: A novel bio-inspired algorithm for global optimization and engineering design problems. Comput. Methods Appl. Mech. Eng. 2023, 415, 116200.
36. Talatahari, S.; Bayzidi, H.; Saraee, M. Social network search for global optimization. IEEE Access 2021, 9, 92815–92863.
37. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680.
38. Ahmadianfar, I.; Heidari, A.A.; Noshadian, S.; Chen, H.; Gandomi, A.H. INFO: An efficient optimization algorithm based on weighted mean of vectors. Expert Syst. Appl. 2022, 195, 116516.
39. Lynn, N.; Suganthan, P.N. Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm Evol. Comput. 2015, 24, 11–24.
40. Han, B.; Li, B.; Qin, C. A novel hybrid particle swarm optimization with marine predators. Swarm Evol. Comput. 2023, 83, 101375.
41. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924.
42. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323.
43. Inoue, M.; Shimura, R.; Uebayashi, A.; Ikoma, S.; Iima, H.; Sumiyoshi, T.; Teraoka, H.; Makita, K.; Hiraga, T.; Momose, K.; et al. Physical body parameters of red-crowned cranes Grus japonensis by sex and life stage in eastern Hokkaido, Japan. J. Vet. Med. Sci. 2013, 75, 1055–1060.
44. Wikipedia Contributors. Red-Crowned Crane. Wikipedia, The Free Encyclopedia. Available online: https://en.wikipedia.org/w/index.php?title=Red-crowned_crane&oldid=1180027726 (accessed on 15 May 2025).
45. Su, L.; Zou, H. Status, threats and conservation needs for the continental population of the red-crowned crane. Chin. Birds 2012, 3, 147–164.
46. Xu, P.; Zhang, X.; Zhang, F.; Bempah, G.; Lu, C.; Lv, S.; Zhang, W.; Cui, P. Use of aquaculture ponds by globally endangered red-crowned crane (Grus japonensis) during the wintering period in the Yancheng National Nature Reserve, a Ramsar wetland. Glob. Ecol. Conserv. 2020, 23, e01123.
47. Liu, L.; Liao, J.; Wu, Y.; Zhang, Y. Breeding range shift of the red-crowned crane (Grus japonensis) under climate change. PLoS ONE 2020, 15, e0229984.
48. Takeda, K.F.; Kutsukake, N. Complexity of mutual communication in animals exemplified by paired dances in the red-crowned crane. Jpn. J. Anim. Psychol. 2018, 68, 25–37.
49. Nenavath, H.; Jatoth, R.K.; Das, S. A synergy of the sine-cosine algorithm and particle swarm optimizer for improved global optimization and object tracking. Swarm Evol. Comput. 2018, 43, 1–30.
50. Su, H.; Zhao, D.; Heidari, A.A.; Liu, L.; Zhang, X.; Mafarja, M.; Chen, H. RIME: A physics-based optimization. Neurocomputing 2023, 532, 183–214.
51. Braik, M.; Al-Hiary, H. Rüppell’s fox optimizer: A novel meta-heuristic approach for solving global optimization problems. Clust. Comput. 2025, 28, 292.
52. Mohammadzadeh, A.; Mirjalili, S. Eel and grouper optimizer: A nature-inspired optimization algorithm. Clust. Comput. 2024, 27, 12745–12786.
53. Dehghani, M.; Montazeri, Z.; Trojovská, E.; Trojovský, P. Coati optimization algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl.-Based Syst. 2023, 259, 110011.
54. Ahmadianfar, I.; Heidari, A.A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 2021, 181, 115079.
55. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98.
56. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper optimisation algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47.
57. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196.
58. Ray, T.; Saini, P. Engineering design optimization using a swarm with an intelligent information sharing among individuals. Eng. Optim. 2001, 33, 735–748.
59. Chickermane, H.; Gea, H.C. Structural optimization using a new local approximation method. Int. J. Numer. Methods Eng. 1996, 39, 829–846.
60. Ravindran, A.; Ragsdell, K.M.; Reklaitis, G.V. Engineering Optimization: Methods and Applications; Wiley: Hoboken, NJ, USA, 2006.
61. Akhtar, S.; Tai, K.; Ray, T. A socio-behavioural simulation model for engineering design optimization. Eng. Optim. 2002, 34, 341–354.
62. Zavala, A.E.M.; Aguirre, A.H.; Diharce, E.R.V. Particle evolutionary swarm optimization with linearly decreasing ε-tolerance. In Proceedings of the MICAI 2005: Advances in Artificial Intelligence, Monterrey, Mexico, 14–18 November 2005; pp. 641–651.
63. Osyczka, A. Multicriteria optimization for engineering design. In Design Optimization; Academic Press: Cambridge, MA, USA, 1985; pp. 193–227.
64. Coello, C.A.C. Use of a self-adaptive penalty approach for engineering optimization problems. Comput. Ind. 2000, 41, 113–127.
65. Amir, H.M.; Hasegawa, T. Nonlinear mixed-discrete structural optimization. J. Struct. Eng. 1989, 115, 626–646.
66. Coello, C.A.C. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Comput. Methods Appl. Mech. Eng. 2002, 191, 1245–1287.
Figure 1. (a) Appearance of red-crowned cranes; (b) foraging behavior of red-crowned cranes; (c) dancing behavior of paired red-crowned cranes.
Figure 2. Schematic diagram of foraging and escaping behaviors of red-crowned cranes: (a) possible results in early iterations; (b) possible results in later iterations.
Figure 3. Schematic diagram of gathering behavior of red-crowned cranes: (a) possible results in early iterations; (b) possible results in later iterations.
Figure 4. Possible values of the random number u during iterations.
Figure 5. Convergence curves for CEC-2005 functions optimized by RCO and other optimizers.
Figure 6. Scalability results for scalable functions (F1–F13) optimized by RCO and other optimizers in the case of different dimensions.
Figure 7. Convergence curves for benchmark test functions optimized by RCO in the case of different probability coefficients.
Figure 8. Structure diagram of three-bar truss.
Figure 9. Convergence curves for three-bar truss design problem optimized by RCO and other optimizers.
Figure 10. Structure diagram of cantilever beam.
Figure 11. Convergence curves for cantilever beam design problem optimized by RCO and other optimizers.
Figure 12. Structure diagram of corrugated bulkhead.
Figure 13. Convergence curves for corrugated bulkhead design problem optimized by RCO and other optimizers.
Figure 14. Structure diagram of speed reducer.
Figure 15. Convergence curves for speed reducer design problem optimized by RCO and other optimizers.
Table 1. A summary of metaheuristic algorithms.
Type | Algorithm | Inspiration
Evolution-based | Genetic Algorithm (GA) [9] | Mutation, crossover, and natural selection strategies
Evolution-based | Genetic Programming (GP) [31] | Inherits the basic idea of GA
Evolution-based | Differential Evolution (DE) [10] | Inherits the basic idea of GA
Swarm-based | Particle Swarm Optimization (PSO) [11] | Predation behavior of birds
Swarm-based | Grey Wolf Optimizer (GWO) [14] | Hierarchy and hunting behavior of gray wolves
Swarm-based | Moth–Flame Optimization (MFO) [32] | Navigation method of moths
Swarm-based | Harris Hawks Optimizer (HHO) [33] | Cooperative and chasing behaviors of Harris’ hawks
Swarm-based | Dung Beetle Optimizer (DBO) [34] | Five behaviors of dung beetles
Swarm-based | Mantis Search Algorithm (MSA) [35] | Hunting and sexual cannibalism of praying mantises
Human-based | Teaching–Learning-Based Optimization (TLBO) [18] | Impact of teachers on student learning
Human-based | Student Psychology-Based Optimization (SPBO) [19] | Psychology of students striving for progress
Human-based | Social Network Search (SNS) [36] | Interactive behavior among users in social networks
Physics- and chemistry-based | Simulated Annealing (SA) [37] | Annealing process in physics
Physics- and chemistry-based | Gravitational Search Algorithm (GSA) [23] | Newton’s law of universal gravitation
Physics- and chemistry-based | Multi-Verse Optimizer (MVO) [24] | Concepts of white hole, black hole, and wormhole
Others | Sine–Cosine Algorithm (SCA) [27] | Mathematical model of sine and cosine functions
Others | Arithmetic Optimization Algorithm (AOA) [28] | Main arithmetic operators in mathematics
Others | Weighted Mean of Vectors (INFO) [38] | Idea of weighted mean
Table 2. Parameter settings for all algorithms. The parameters of RCO are set to balance exploration and exploitation, and the parameter settings of the other algorithms follow the recommendations in their original papers.
Algorithm | Parameter Settings
RCO | pc = 0.7, k:(n−k) = 1:1
DBO | k = λ = 0.1, b = 0.3, S = 0.5
GJO | c1 = 1.5
RUN | a = 20, b = 12
SMA | vc = 1 − t/tmax, z = 0.03
HHO | E0 randomly changes in (−1, 1)
COA | I randomly changes in {1, 2}
EGO | a = 2 − 2t/tmax
RFO | β = 0.000001, e0 = 1, e1 = 3, c0 = 2, c1 = 2, a0 = 2, a1 = 3
Table 3. Wilcoxon signed-rank test results for CEC-2005 functions.
Index | RCO vs. DBO | RCO vs. GJO | RCO vs. RUN | RCO vs. SMA | RCO vs. HHO | RCO vs. COA | RCO vs. EGO | RCO vs. RFO
F1p-value2.5631 × 10−61.7344 × 10−61.7344 × 10−611.7344 × 10−61.7344 × 10−611.7344 × 10−6
R+00000000
R−43546546504654650465
+/=/−+++=++=+
F2p-value1.7344 × 10−61.7344 × 10−61.7344 × 10−62.1336 × 10−11.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−6
R+0001720000
R−465465465293465465465465
+/=/−+++=++++
F3p-value1.7344 × 10−61.7344 × 10−61.7344 × 10−611.7344 × 10−61.7344 × 10−611.7344 × 10−6
R+00000000
R−46546546504654650465
+/=/−+++=++=+
F4p-value1.7344 × 10−61.7344 × 10−61.7344 × 10−61.2544 × 10−11.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−6
R+0003070000
R−465465465158465465465465
+/=/−+++=++++
F5p-value1.7344 × 10−61.7344 × 10−63.1849 × 10−11.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.9209 × 10−6
R+002814654654650464
R−4654651840004651
+/=/−++=+
F6p-value1.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−63.8822 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−6
R+465046508000
R−04650465457465465465
+/=/−++++++
F7p-value1.7344 × 10−61.9646 × 10−31.3601 × 10−51.8326 × 10−32.2102 × 10−15.8571 × 10−13.2857 × 10−11.7344 × 10−6
R+08221811732592800
R−465383444384292206185465
+/=/−++++===+
F8p-value1.7988 × 10−51.7344 × 10−69.2710 × 10−31.7344 × 10−61.7344 × 10−61.7344 × 10−68.9443 × 10−44.4463 × 10−2
R+441035946546546571139
R−24465106000394326
+/=/−+++
F9p-value11111113.1250 × 10−2
R+00000000
R−000000021
+/=/−=======+
F10p-value11.0135 × 10−7111111
R+00000000
R−0465000000
+/=/−=+======
F11p-value11111111.2500 × 10−1
R+00000000
R−000000010
+/=/−========
F12p-value3.1123 × 10−51.7344 × 10−61.7344 × 10−61.7344 × 10−62.7653 × 10−31.7344 × 10−61.7344 × 10−61.2544 × 10−1
R+435046508700158
R−304650465378465465307
+/=/−+++++=
F13p-value4.2857 × 10−61.7088 × 10−32.1266 × 10−61.9209 × 10−61.7344 × 10−61.7344 × 10−66.7328 × 10−11.7344 × 10−6
R+45680463464465465253465
R−938521002120
+/=/−+=
F14p-value1.0881 × 10−11.6678 × 10−64.7045 × 10−42.4730 × 10−62.5631 × 10−65.6061 × 10−61.7344 × 10−61
R+00000000
R−64651204354353784650
+/=/−=++++++=
F15p-value2.0515 × 10−41.1093 × 10−11.9569 × 10−21.8326 × 10−34.9498 × 10−21.3601 × 10−54.1955 × 10−46.7328 × 10−1
R+52310119811372161253
R−413155346384328444404212
+/=/−+=+++++=
F16p-value11.7344 × 10−63.8710 × 10−51.7279 × 10−67.6227 × 10−42.5631 × 10−61.7344 × 10−61
R+00000000
R−04652534651054354650
+/=/−=++++++=
F17p-value11.7344 × 10−62.6414 × 10−51.7344 × 10−68.2981 × 10−61.7344 × 10−61.7344 × 10−61
R+00000000
R−04652764653514654650
+/=/−=++++++=
F18p-value2.8557 × 10−51.7344 × 10−61.8072 × 10−51.7257 × 10−62.5596 × 10−61.7344 × 10−61.7344 × 10−61
R+00000000
R−1904653004654354654650
+/=/−+++++++=
F19p-value4.0479 × 10−21.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61
R+10000000
R−204654654654654654650
+/=/−+++++++=
F20p-value3.2082 × 10−23.1123 × 10−52.1053 × 10−31.3601 × 10−51.6046 × 10−41.7344 × 10−64.0702 × 10−27.2488 × 10−1
R+36.5308321490133149.5
R−134.5435382444416465332175.5
+/=/−+++++++=
F21p-value1.8965 × 10−41.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−69.7656 × 10−4
R+00000000
R−17146546546546546546566
+/=/−++++++++
F22p-value6.2096 × 10−41.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.5625 × 10−2
R+00000000
R−12046546546546546546528
+/=/−++++++++
F23p-value3.1915 × 10−31.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.7344 × 10−61.9531 × 10−3
R+00000000
R−6646546546546546546555
+/=/−++++++++
Unimodal (+/=/−)6/0/17/0/05/1/12/4/15/1/15/1/14/3/06/0/1
Multimodal (+/=/−)7/6/313/3/010/3/311/3/211/3/211/3/212/4/05/10/1
Total (+/=/−)13/6/420/3/015/4/413/7/316/4/316/4/316/7/011/10/2
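For readers who want to reproduce entries of Table 3, the sketch below (assuming NumPy and SciPy are available) computes R+, R−, and the two-sided p-value from two paired samples of 30 run results; the helper name and the synthetic data are illustrative. Note that with 30 nonzero differences, R+ + R− = 1 + 2 + ... + 30 = 465, which matches the totals in the table.

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

def signed_rank_summary(a, b):
    """R+, R-, and two-sided p-value for paired samples a and b.
    Zero differences are dropped, following the usual Wilcoxon convention."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    d = d[d != 0.0]
    if d.size == 0:  # identical results in every run -> no evidence either way
        return 0.0, 0.0, 1.0
    ranks = rankdata(np.abs(d))        # average ranks of |differences|
    r_plus = float(ranks[d > 0].sum())
    r_minus = float(ranks[d < 0].sum())
    return r_plus, r_minus, wilcoxon(d).pvalue

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1e-3, 30)   # e.g., 30 runs of RCO on one function
b = rng.normal(5e-4, 1e-3, 30)  # 30 runs of a competitor
print(signed_rank_summary(a, b))
```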
Table 4. Evaluation results for scalable functions (F1F13) in 500 dimensions optimized by RCO and other optimizers.
Index | RCO | DBO | GJO | RUN | SMA | HHO | COA | EGO | RFO
F1Mean03.2549 × 10−2344.9056 × 10−345.1469 × 10−20601.0271 × 10−1871.0905 × 10−30001.2147 × 10−116
Std01.7826 × 10−2335.4342 × 10−342.8173 × 10−20505.6213 × 10−1875.9725 × 10−30006.6469 × 10−116
Min03.7701 × 10−2973.1716 × 10−352.6183 × 10−22903.0699 × 10−213003.8455 × 10−146
Max09.7636 × 10−2332.1880 × 10−331.5431 × 10−20403.0790 × 10−1863.2713 × 10−29903.6408 × 10−115
Rank159617418
F2Mean1.6764 × 10−2318.2119 × 10−1251.1235 × 10−213.2985 × 10−1171.2861 × 10−12.8828 × 10−991.2626 × 10−1519.3039 × 10−2241.9514 × 10−60
Std9.0402 × 10−2313.6042 × 10−1247.4570 × 10−221.7917 × 10−1164.5867 × 10−19.0091 × 10−995.6229 × 10−1515.0513 × 10−2231.0687 × 10−59
Min1.8057 × 10−2691.9735 × 10−1513.3215 × 10−222.9994 × 10−1263.2039 × 10−629.6944 × 10−1104.0545 × 10−1631.0810 × 10−2332.0457 × 10−76
Max4.9536 × 10−2301.9349 × 10−1234.4806 × 10−219.8163 × 10−1162.3614 × 1004.0818 × 10−983.0789 × 10−1502.7674 × 10−2225.8533 × 10−59
Rank148596327
F3Mean05.5813 × 10−591.3174 × 1038.8613 × 10−1702.9198 × 10−2042.2178 × 10−991.7681 × 10−30003.5998 × 10−33
Std03.0570 × 10−593.9531 × 1034.8535 × 10−1691.5993 × 10−2031.2147 × 10−989.6727 × 10−30001.7931 × 10−32
Min08.8207 × 10−2562.6831 × 10−27.1550 × 10−19401.7423 × 10−1638.0533 × 10−32207.6499 × 10−62
Max01.6744 × 10−581.9815 × 1042.6584 × 10−1688.7595 × 10−2036.6534 × 10−985.2982 × 10−29909.8324 × 10−32
Rank179546318
F4Mean1.2991 × 10−2213.1022 × 10−867.3632 × 1014.0938 × 10−981.7504 × 10−1518.2143 × 10−963.9586 × 10−1539.9172 × 10−1971.1167 × 10−32
Std7.1135 × 10−2211.3699 × 10−855.4243 × 1001.8918 × 10−979.5872 × 10−1513.3751 × 10−951.1159 × 10−1523.7978 × 10−1963.5729 × 10−32
Min1.1104 × 10−2636.2548 × 10−1456.3334 × 1011.5590 × 10−11103.1642 × 10−1071.0136 × 10−1617.2829 × 10−2041.1963 × 10−43
Max3.8963 × 10−2207.3011 × 10−858.4766 × 1011.0228 × 10−965.2511 × 10−1501.8161 × 10−944.8629 × 10−1522.0742 × 10−1951.8240 × 10−31
Rank179546328
F5Mean4.9166 × 1024.9692 × 1024.9815 × 1024.9274 × 1023.8907 × 1012.5573 × 10−22.5121 × 1004.9713 × 1024.9694 × 102
Std4.6246 × 10−13.8883 × 10−14.2371 × 10−11.5641 × 1007.6452 × 1012.8135 × 10−24.2257 × 1003.1220 × 10−14.4343 × 10−1
Min4.9054 × 1024.9609 × 1024.9676 × 1024.8966 × 1024.5526 × 10−22.6476 × 10−55.0683 × 10−34.9664 × 1024.9566 × 102
Max4.9250 × 1024.9782 × 1024.9844 × 1024.9476 × 1023.6236 × 1021.0743 × 10−11.6977 × 1014.9774 × 1024.9746 × 102
Rank469531287
F6Mean7.5351 × 10−87.0061 × 1011.1072 × 1027.8535 × 10−15.3724 × 1002.1118 × 10−41.7517 × 10−11.1613 × 1024.8723 × 101
Std9.9886 × 10−83.4698 × 1001.4933 × 1002.2126 × 10−17.2437 × 1002.2250 × 10−42.9082 × 10−11.6564 × 1004.6534 × 100
Min3.5668 × 10−96.4904 × 1011.0658 × 1023.9644 × 10−14.2435 × 10−54.3670 × 10−81.7219 × 10−61.1171 × 1023.8296 × 101
Max3.8987 × 10−77.7117 × 1011.1384 × 1021.1747 × 1003.0000 × 1019.0233 × 10−41.5263 × 1001.1892 × 1025.7172 × 101
Rank178452396
F7Mean3.4765 × 10−55.0432 × 10−48.2868 × 10−41.6166 × 10−43.7354 × 10−47.6822 × 10−58.6069 × 10−52.9107 × 10−36.7328 × 10−3
Std3.6087 × 10−54.4814 × 10−45.6353 × 10−41.2099 × 10−44.3033 × 10−48.3173 × 10−57.6387 × 10−52.8227 × 10−35.9478 × 10−3
Min1.3065 × 10−66.8894 × 10−52.5234 × 10−41.7797 × 10−51.3430 × 10−55.1595 × 10−74.7560 × 10−61.2498 × 10−46.0707 × 10−4
Max1.4808 × 10−42.0633 × 10−32.4861 × 10−34.8836 × 10−41.7325 × 10−34.2642 × 10−42.9340 × 10−41.0039 × 10−22.7323 × 10−2
Rank167452389
F8Mean−9.9019 × 104−1.8160 × 105−3.3607 × 104−9.4850 × 104−2.0948 × 105−2.0949 × 105−2.0949 × 105−1.0313 × 105−9.6555 × 104
Std1.0804 × 1041.1723 × 1041.7736 × 1041.6511 × 1042.1871 × 1011.5231 × 1006.0066 × 10−11.2180 × 1036.9129 × 103
Min−1.2597 × 105−1.9880 × 105−7.1206 × 104−1.2633 × 105−2.0949 × 105−2.0949 × 105−2.0949 × 105−1.0654 × 105−1.1127 × 105
Max−8.1295 × 104−1.5141 × 105−1.1479 × 104−6.2180 × 104−2.0937 × 105−2.0949 × 105−2.0949 × 105−1.0169 × 105−8.3802 × 104
Rank649832157
F9Mean000000003.1686 × 10−1
Std000000001.7355 × 100
Min000000000
Max000000009.5059 × 100
Rank111111119
F10Mean8.8818 × 10−161.0066 × 10−153.4521 × 10−148.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−16
Std06.4863 × 10−164.1445 × 10−15000000
Min8.8818 × 10−168.8818 × 10−162.9310 × 10−148.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−16
Max8.8818 × 10−164.4409 × 10−153.9968 × 10−148.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−16
Rank189111111
F11Mean008.1416 × 10−17000005.1196 × 10−4
Std004.9935 × 10−17000002.8041 × 10−3
Min000000000
Max001.1102 × 10−16000001.5359 × 10−2
Rank118111119
F12Mean5.1880 × 10−73.8002 × 10−19.3776 × 10−19.9025 × 10−43.2460 × 10−46.6731 × 10−74.8266 × 10−58.8587 × 10−11.8152 × 10−1
Std1.4180 × 10−73.4013 × 10−22.8596 × 10−21.2189 × 10−33.9289 × 10−46.9197 × 10−77.5522 × 10−51.0236 × 10−13.8451 × 10−2
Min7.7506 × 10−102.9454 × 10−18.8196 × 10−15.4244 × 10−41.6356 × 10−72.5876 × 10−93.5303 × 10−75.2520 × 10−11.2009 × 10−1
Max7.3749 × 10−74.3693 × 10−19.9715 × 10−17.3945 × 10−31.2682 × 10−38.4475 × 10−73.3038 × 10−41.0450 × 1002.5459 × 10−1
Rank179542386
F13Mean8.3503 × 10−14.8909 × 1014.8170 × 1012.1474 × 1001.9215 × 10−14.8130 × 10−54.4956 × 10−34.5826 × 1014.7719 × 101
Std5.9713 × 10−12.6817 × 10−13.9525 × 10−16.8741 × 10−13.8994 × 10−14.9497 × 10−57.2180 × 10−34.5337 × 1002.0191 × 100
Min2.9012 × 10−74.8491 × 1014.7390 × 1011.2182 × 1005.1525 × 10−53.7797 × 10−74.3281 × 10−53.3137 × 1014.2382 × 101
Max1.7359 × 1004.9563 × 1014.8971 × 1013.8496 × 1001.7122 × 1001.5601 × 10−43.4253 × 10−24.9384 × 1014.9637 × 101
Rank498531267
Mean rank2.69236.03858.19234.96154.19233.65383.03854.92317.3077
Total rank179643258
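The statistics above follow the usual benchmarking protocol: each optimizer is restarted for a fixed number of independent runs on each function, the best objective value of every run is recorded, and the Mean/Std/Min/Max of those final values are tabulated. A minimal Python sketch of this protocol is given below; the sphere function stands in for the scalable F1, the random-search placeholder stands in for RCO and its competitors (whose implementations are not reproduced here), and the run count and evaluation budget are illustrative assumptions.

```python
import numpy as np

def sphere(x):
    """Scalable F1 (sphere): f(x) = sum(x_i^2), global minimum 0 at the origin."""
    return float(np.sum(x * x))

def random_search(obj, dim, bounds, n_evals, rng):
    """Placeholder optimizer; in the study each metaheuristic would be called here."""
    lo, hi = bounds
    best = np.inf
    for _ in range(n_evals):
        best = min(best, obj(rng.uniform(lo, hi, dim)))
    return best

def run_statistics(optimizer, obj, dim=500, bounds=(-100.0, 100.0),
                   n_runs=30, n_evals=10_000, seed=0):
    """Collect the Mean/Std/Min/Max statistics reported for one function."""
    rng = np.random.default_rng(seed)
    finals = np.array([optimizer(obj, dim, bounds, n_evals, rng)
                       for _ in range(n_runs)])
    return finals.mean(), finals.std(), finals.min(), finals.max()

print(run_statistics(random_search, sphere))
```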
Table 5. Evaluation results for benchmark test functions optimized by RCO in the case of different probability coefficients.

| Index | Metric | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 |
|---|---|---|---|---|---|---|
| F1 | Mean | 0 | 0 | 0 | 0 | 5.5895e-146 |
|  | Std | 0 | 0 | 0 | 0 | 3.0615e-145 |
| F2 | Mean | 0 | 0 | 0 | 1.9434e-238 | 1.0826e-78 |
|  | Std | 0 | 0 | 0 | 1.0645e-237 | 5.7653e-78 |
| F3 | Mean | 0 | 0 | 0 | 0 | 9.0527e-135 |
|  | Std | 0 | 0 | 0 | 0 | 4.9584e-134 |
| F4 | Mean | 0 | 0 | 0 | 1.3051e-226 | 2.4421e-73 |
|  | Std | 0 | 0 | 0 | 7.1478e-226 | 9.8518e-73 |
| F5 | Mean | 2.3727e1 | 2.3416e1 | 2.3906e1 | 2.3255e1 | 2.4518e1 |
|  | Std | 5.5787e-1 | 2.4010e-1 | 4.0693e-1 | 1.3112e-1 | 2.6634e-1 |
| F6 | Mean | 9.7394e-1 | 7.5580e-3 | 1.9383e-5 | 8.2703e-8 | 7.6147e-9 |
|  | Std | 4.4785e-1 | 1.6890e-2 | 2.4897e-5 | 8.8799e-8 | 1.5297e-8 |
| F7 | Mean | 2.2990e-5 | 1.7420e-5 | 4.2269e-5 | 3.8235e-5 | 1.8210e-4 |
|  | Std | 1.9088e-5 | 2.0308e-5 | 3.8226e-5 | 5.4596e-5 | 2.6879e-4 |
| F8 | Mean | -7.7980e3 | -7.8777e3 | -7.9839e3 | -7.8058e3 | -8.0328e3 |
|  | Std | 1.2775e3 | 1.3528e3 | 1.0981e3 | 1.1148e3 | 1.0466e3 |
| F9 | Mean | 0 | 0 | 0 | 0 | 0 |
|  | Std | 0 | 0 | 0 | 0 | 0 |
| F10 | Mean | 8.8818e-16 | 8.8818e-16 | 8.8818e-16 | 8.8818e-16 | 8.8818e-16 |
|  | Std | 0 | 0 | 0 | 0 | 0 |
| F11 | Mean | 0 | 0 | 0 | 0 | 0 |
|  | Std | 0 | 0 | 0 | 0 | 0 |
| F12 | Mean | 8.2255e-2 | 3.0265e-2 | 9.3620e-4 | 2.1603e-7 | 8.1993e-8 |
|  | Std | 3.4341e-2 | 2.5054e-2 | 1.8180e-3 | 4.7274e-7 | 2.3867e-7 |
| F13 | Mean | 2.1561e0 | 1.4643e0 | 1.1598e0 | 9.7940e-1 | 9.2470e-1 |
|  | Std | 5.5240e-1 | 4.8535e-1 | 6.4637e-1 | 7.1097e-1 | 7.4855e-1 |
| F14 | Mean | 1.6924e0 | 1.0458e0 | 9.9800e-1 | 9.9800e-1 | 9.9800e-1 |
|  | Std | 9.7957e-1 | 2.6199e-1 | 4.6963e-13 | 2.3142e-16 | 2.2204e-16 |
| F15 | Mean | 3.7399e-4 | 3.2535e-4 | 3.1835e-4 | 3.2153e-4 | 3.9879e-4 |
|  | Std | 1.6935e-4 | 6.5432e-5 | 5.4844e-5 | 3.5851e-5 | 1.9682e-4 |
| F16 | Mean | -1.0316e0 | -1.0316e0 | -1.0316e0 | -1.0316e0 | -1.0316e0 |
|  | Std | 1.9433e-7 | 5.1881e-11 | 4.3300e-15 | 5.3761e-16 | 5.2156e-16 |
| F17 | Mean | 3.9789e-1 | 3.9789e-1 | 3.9789e-1 | 3.9789e-1 | 3.9789e-1 |
|  | Std | 2.5819e-6 | 4.1858e-10 | 1.0725e-13 | 0 | 0 |
| F18 | Mean | 3.0001e0 | 3.0000e0 | 3.0000e0 | 3.0000e0 | 3.0000e0 |
|  | Std | 2.2073e-4 | 2.0626e-8 | 6.2552e-12 | 2.0534e-15 | 2.6453e-15 |
| F19 | Mean | -3.8619e0 | -3.8628e0 | -3.8628e0 | -3.8628e0 | -3.8628e0 |
|  | Std | 1.5473e-3 | 9.0922e-7 | 9.9934e-13 | 2.2494e-15 | 2.4057e-15 |
| F20 | Mean | -3.2561e0 | -3.2630e0 | -3.2784e0 | -3.2863e0 | -3.2943e0 |
|  | Std | 7.4010e-2 | 6.5231e-2 | 5.8281e-2 | 5.5415e-2 | 5.1146e-2 |
| F21 | Mean | -1.0151e1 | -1.0153e1 | -1.0153e1 | -1.0153e1 | -1.0153e1 |
|  | Std | 2.4943e-3 | 3.3099e-6 | 1.3447e-9 | 6.1269e-15 | 5.2051e-15 |
| F22 | Mean | -1.0400e1 | -1.0403e1 | -1.0403e1 | -1.0403e1 | -1.0403e1 |
|  | Std | 3.6378e-3 | 1.8140e-5 | 2.0135e-9 | 2.4240e-15 | 1.3995e-15 |
| F23 | Mean | -1.0533e1 | -1.0536e1 | -1.0536e1 | -1.0536e1 | -1.0536e1 |
|  | Std | 3.7565e-3 | 1.2283e-5 | 1.2123e-9 | 4.6181e-15 | 1.7455e-15 |
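Table 5 shows that RCO responds to the probability coefficient in opposite directions on different landscapes: coefficients of 0.1–0.5 drive F1–F4 to exact zeros, whereas the larger settings clearly improve F6, F12, F13, and F20. A sensitivity sweep of this kind can be sketched as follows; `run_once`, which is assumed to perform one independent RCO run with coefficient `p` and return its best objective value, is an illustrative interface rather than the authors' code.

```python
import numpy as np

def sweep_coefficient(run_once, coeffs=(0.1, 0.3, 0.5, 0.7, 0.9),
                      n_runs=30, seed=0):
    """Tabulate Mean/Std of the final objective value for each coefficient."""
    rng = np.random.default_rng(seed)
    table = {}
    for p in coeffs:
        finals = np.array([run_once(p, rng) for _ in range(n_runs)])
        table[p] = (finals.mean(), finals.std())
    return table

# Dummy stand-in whose quality improves with larger p, for demonstration only.
for p, (m, s) in sweep_coefficient(lambda p, rng: rng.random() / (1 + 10 * p)).items():
    print(f"p = {p}: mean = {m:.3e}, std = {s:.3e}")
```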
Table 6. Wilcoxon signed-rank test results for CEC-2022 functions.

| Index | Metric | RCO vs. DBO | RCO vs. GJO | RCO vs. RUN | RCO vs. SMA | RCO vs. HHO | RCO vs. COA | RCO vs. EGO | RCO vs. RFO |
|---|---|---|---|---|---|---|---|---|---|
| F24 | p-value | 7.0356e-1 | 1.7344e-6 | 1.7344e-6 | 1.7344e-6 | 1.7344e-6 | 1.7344e-6 | 1.7344e-6 | 1.7344e-6 |
|  | R+ | 251 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
|  | R− | 214 | 465 | 465 | 465 | 465 | 465 | 465 | 465 |
|  | +/=/− | = | + | + | + | + | + | + | + |
| F25 | p-value | 1.3238e-3 | 2.1630e-5 | 8.9364e-1 | 8.7297e-3 | 1.3591e-1 | 4.6818e-3 | 2.0515e-4 | 2.2985e-1 |
|  | R+ | 62 | 26 | 226 | 105 | 160 | 95 | 52 | 108 |
|  | R− | 344 | 439 | 239 | 360 | 305 | 370 | 413 | 192 |
|  | +/=/− | + | + | = | + | = | + | + | = |
| F26 | p-value | 4.4919e-2 | 2.3534e-6 | 8.9364e-1 | 1.7344e-6 | 6.3198e-5 | 9.4261e-1 | 3.8203e-1 | 8.4661e-6 |
|  | R+ | 135 | 3 | 226 | 465 | 38 | 229 | 190 | 449 |
|  | R− | 330 | 462 | 239 | 0 | 427 | 236 | 275 | 16 |
|  | +/=/− | + | + | = | − | + | = | = | − |
| F27 | p-value | 2.6229e-1 | 3.1849e-1 | 9.2626e-1 | 8.9364e-1 | 3.4935e-1 | 2.8786e-6 | 9.1694e-3 | 9.1662e-2 |
|  | R+ | 178 | 184 | 228 | 226 | 187 | 5 | 97 | 150.5 |
|  | R− | 287 | 281 | 237 | 239 | 278 | 460 | 338 | 314.5 |
|  | +/=/− | = | = | = | = | = | + | + | = |
| F28 | p-value | 7.8647e-2 | 6.1564e-4 | 1.1561e-1 | 1.9209e-6 | 6.3391e-6 | 2.7029e-2 | 6.8836e-1 | 6.9575e-2 |
|  | R+ | 147 | 66 | 156 | 464 | 13 | 125 | 252 | 277 |
|  | R− | 318 | 399 | 309 | 1 | 452 | 340 | 213 | 188 |
|  | +/=/− | = | + | + | − | + | + | = | = |
| F29 | p-value | 6.8359e-3 | 3.8822e-6 | 8.6121e-1 | 1.4773e-4 | 7.3433e-1 | 2.1336e-1 | 8.8203e-3 | 9.6266e-4 |
|  | R+ | 101 | 8 | 241 | 48 | 216 | 172 | 90 | 72 |
|  | R− | 364 | 457 | 224 | 417 | 249 | 293 | 375 | 393 |
|  | +/=/− | + | + | = | + | = | = | + | + |
| F30 | p-value | 1.5886e-1 | 3.3886e-1 | 6.4352e-1 | 4.7292e-6 | 1.0639e-1 | 9.9179e-1 | 7.7309e-3 | 6.3391e-6 |
|  | R+ | 164 | 186 | 210 | 455 | 154 | 232 | 103 | 452 |
|  | R− | 301 | 279 | 255 | 10 | 311 | 233 | 362 | 13 |
|  | +/=/− | = | = | = | − | = | = | + | − |
| F31 | p-value | 5.8571e-1 | 6.5641e-2 | 1.4773e-4 | 3.4053e-5 | 9.4261e-1 | 2.8308e-4 | 8.1302e-1 | 5.3197e-3 |
|  | R+ | 259 | 322 | 417 | 434 | 236 | 56 | 244 | 97 |
|  | R− | 206 | 143 | 48 | 31 | 229 | 409 | 221 | 368 |
|  | +/=/− | = | = | − | − | = | + | = | + |
| F32 | p-value | 8.8561e-4 | 1.7344e-6 | 1.7344e-6 | 1.7344e-6 | 1.7344e-6 | 1.7344e-6 | 1.7344e-6 | 1.7344e-6 |
|  | R+ | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
|  | R− | 105 | 465 | 465 | 465 | 465 | 465 | 465 | 465 |
|  | +/=/− | + | + | + | + | + | + | + | + |
| F33 | p-value | 1.3595e-4 | 1.8910e-4 | 1.4936e-5 | 4.5336e-4 | 6.6392e-4 | 1.7344e-6 | 1.7344e-6 | 1.7344e-6 |
|  | R+ | 47 | 51 | 22 | 62 | 67 | 0 | 0 | 0 |
|  | R− | 418 | 414 | 443 | 403 | 398 | 465 | 465 | 465 |
|  | +/=/− | + | + | + | + | + | + | + | + |
| F34 | p-value | 6.5833e-1 | 9.6266e-4 | 8.2901e-1 | 4.9080e-1 | 2.0589e-1 | 6.4242e-3 | 2.4308e-2 | 5.7517e-6 |
|  | R+ | 254 | 72 | 222 | 199 | 171 | 100 | 123 | 12 |
|  | R− | 211 | 393 | 243 | 266 | 294 | 365 | 342 | 453 |
|  | +/=/− | = | + | = | = | = | + | + | + |
| F35 | p-value | 3.3269e-2 | 4.0483e-1 | 2.5967e-5 | 1.9209e-6 | 7.6909e-6 | 1.7344e-6 | 1.9209e-6 | 1.1265e-5 |
|  | R+ | 129 | 192 | 437 | 464 | 15 | 0 | 1 | 19 |
|  | R− | 336 | 273 | 28 | 1 | 450 | 465 | 464 | 446 |
|  | +/=/− | + | = | − | − | + | + | + | + |
| Total | +/=/− | 6/6/0 | 8/4/0 | 4/6/2 | 5/2/5 | 6/6/0 | 9/3/0 | 9/3/0 | 7/3/2 |
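In Table 6, each function contributes a two-sided p-value over the paired runs together with the rank sums R+ and R−; at the 0.05 level, "+" marks functions where RCO is significantly better than the competitor, "−" where it is significantly worse, and "=" where no significant difference is detected. The sketch below shows how one such entry can be computed; the orientation of R+ and R− with respect to the sign of the difference is an assumption inferred from the reported values.

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

def signed_rank_entry(rco_runs, other_runs, alpha=0.05):
    """One cell of Table 6: (p-value, R+, R-, verdict) for paired run results."""
    d = np.asarray(rco_runs, float) - np.asarray(other_runs, float)
    d = d[d != 0.0]                      # zero differences are discarded
    ranks = rankdata(np.abs(d))          # tied |d| values receive average ranks
    r_plus = float(ranks[d > 0].sum())   # rank sum where RCO's result is larger (worse)
    r_minus = float(ranks[d < 0].sum())  # rank sum where RCO's result is smaller (better)
    p = wilcoxon(rco_runs, other_runs).pvalue
    verdict = "=" if p >= alpha else ("+" if r_minus > r_plus else "-")
    return p, r_plus, r_minus, verdict

rng = np.random.default_rng(1)
rco = rng.random(30)
print(signed_rank_entry(rco, rco + 0.3))   # RCO uniformly better -> "+"
```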
Table 7. Evaluation results for three-bar truss design problem optimized by RCO and other optimizers.

| Optimizer | Mean | Std | Best | Worst |
|---|---|---|---|---|
| RCO | 263.89756856 | 2.8392e-3 | 263.89584466 | 263.90810098 |
| DBO | 263.89599408 | 1.6833e-4 | 263.89584349 | 263.89636188 |
| GJO | 263.90281430 | 4.7808e-3 | 263.89653293 | 263.91402442 |
| RUN | 263.90804643 | 3.6576e-2 | 263.89585913 | 264.09314182 |
| SMA | 269.04243746 | 2.3510e0 | 264.27175200 | 272.76121135 |
| HHO | 263.93731983 | 6.0537e-2 | 263.89585959 | 264.15000833 |
| SOA | 265.18797575 | 4.7991e0 | 263.89778678 | 282.84271247 |
| GOA | 263.94396581 | 8.8377e-2 | 263.89585342 | 264.20396976 |
| DA | 263.90640435 | 1.4283e-2 | 263.89597194 | 263.95468591 |
| MVO | 263.90726865 | 1.5644e-3 | 263.89586605 | 263.91319598 |
| EGO | 264.08615778 | 1.6593e-1 | 263.92084673 | 264.72395877 |
| COA | 264.00727860 | 1.4136e-1 | 263.89594398 | 264.47160208 |
| ALO | 263.90617001 | 4.1912e-4 | 263.89586378 | 263.90800028 |
| MFO | 263.91766697 | 2.7763e-2 | 263.89589229 | 263.99424724 |
| RFO | 263.91886530 | 3.4853e-2 | 263.89584948 | 264.06851817 |
| ABC | 263.90089934 | 3.3974e-3 | 263.89660081 | 263.90982347 |
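The best values of about 263.8958 in Table 7 agree with the widely used formulation of the three-bar truss problem, in which the two cross-sectional areas x1 and x2 are chosen to minimize the structure's weight under three stress constraints with L = 100 cm, P = 2 kN/cm², and σ = 2 kN/cm². The sketch below evaluates a candidate design under this standard formulation; the static-penalty constraint handling is an illustrative assumption, not necessarily the scheme used in the experiments.

```python
import math

def three_bar_truss(x, penalty=1e6):
    """Truss weight plus a quadratic penalty for any violated stress constraint."""
    x1, x2 = x
    L, P, sigma = 100.0, 2.0, 2.0
    s2 = math.sqrt(2.0)
    f = (2.0 * s2 * x1 + x2) * L
    g = [
        (s2 * x1 + x2) / (s2 * x1**2 + 2.0 * x1 * x2) * P - sigma,
        x2 / (s2 * x1**2 + 2.0 * x1 * x2) * P - sigma,
        1.0 / (x1 + s2 * x2) * P - sigma,
    ]
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)

# Near-optimal design reported in the literature, f ~ 263.8958:
print(three_bar_truss([0.7887, 0.4082]))
```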
Table 8. Evaluation results for cantilever beam design problem optimized by RCO and other optimizers.

| Optimizer | Mean | Std | Best | Worst |
|---|---|---|---|---|
| RCO | 1.34000633 | 2.2786e-5 | 1.33995802 | 1.34009808 |
| DBO | 1.33995946 | 2.7255e-6 | 1.33995667 | 1.33996851 |
| GJO | 1.34008587 | 7.1985e-5 | 1.33998254 | 1.34029105 |
| RUN | 1.33995971 | 2.7643e-6 | 1.33995681 | 1.33996632 |
| SMA | 1.34009064 | 1.0247e-4 | 1.33997213 | 1.34038478 |
| HHO | 1.34198908 | 1.2892e-3 | 1.34028452 | 1.34471428 |
| SOA | 1.34060442 | 3.6542e-4 | 1.34006455 | 1.34150095 |
| GOA | 1.34180335 | 4.0402e-3 | 1.33997592 | 1.36152295 |
| DA | 1.34939886 | 6.3933e-3 | 1.34041507 | 1.36246818 |
| MVO | 1.34044801 | 2.9033e-4 | 1.34004248 | 1.34140214 |
| EGO | 1.35278100 | 5.8881e-3 | 1.34397013 | 1.36625739 |
| COA | 1.44514835 | 4.6579e-2 | 1.36208825 | 1.52706561 |
| ALO | 1.34001504 | 5.6375e-5 | 1.33996297 | 1.34025313 |
| MFO | 1.34026388 | 2.5869e-4 | 1.33996126 | 1.34089458 |
| RFO | 1.34106020 | 2.6982e-3 | 1.33995882 | 1.35122774 |
| ABC | 1.34021861 | 9.4754e-5 | 1.34005992 | 1.34050604 |
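Similarly, the best values of about 1.3400 in Table 8 match the common five-variable statement of the cantilever beam problem: minimize f(x) = 0.0624(x1 + x2 + x3 + x4 + x5) subject to 61/x1³ + 37/x2³ + 19/x3³ + 7/x4³ + 1/x5³ ≤ 1 with 0.01 ≤ xi ≤ 100. A minimal evaluation sketch, again with an assumed static penalty, follows.

```python
def cantilever_beam(x, penalty=1e6):
    """Beam weight plus a quadratic penalty for the single deflection constraint."""
    f = 0.0624 * sum(x)
    g = (61.0 / x[0]**3 + 37.0 / x[1]**3 + 19.0 / x[2]**3
         + 7.0 / x[3]**3 + 1.0 / x[4]**3) - 1.0
    return f + penalty * max(0.0, g) ** 2

# Near-optimal design from the literature, f ~ 1.3400:
print(cantilever_beam([6.016, 5.309, 4.494, 3.502, 2.153]))
```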
Table 9. Evaluation results for corrugated bulkhead design problem optimized by RCO and other optimizers.

| Optimizer | Mean | Std | Best | Worst |
|---|---|---|---|---|
| RCO | 6.84634282 | 7.9957e-3 | 6.84295801 | 6.88137318 |
| DBO | 6.85371900 | 5.8940e-2 | 6.84295801 | 7.16578788 |
| GJO | 6.95430310 | 3.5572e-1 | 6.84580979 | 8.26480456 |
| RUN | 6.86424400 | 1.1607e-1 | 6.84297576 | 7.47881321 |
| SMA | 11.56046091 | 1.2513e0 | 8.18353269 | 12.45148765 |
| HHO | 7.04656180 | 1.8495e-1 | 6.85776424 | 7.53023436 |
| SOA | 7.59063457 | 6.9362e-1 | 6.85965671 | 8.28438401 |
| GOA | 8.07162333 | 4.7083e-1 | 7.03459838 | 8.80221181 |
| DA | 7.01936369 | 6.2798e-1 | 6.84420883 | 10.31687564 |
| MVO | 6.85330642 | 7.1921e-3 | 6.84378706 | 6.86921410 |
| EGO | 7.14424015 | 3.1709e-1 | 6.95086094 | 8.54195460 |
| COA | 7.44862057 | 2.6788e-1 | 7.00682703 | 8.25761889 |
| ALO | 6.91410825 | 9.1423e-2 | 6.84296760 | 7.28100558 |
| MFO | 6.84295811 | 8.7974e-7 | 6.84295801 | 6.84296066 |
| RFO | 6.84341532 | 1.7596e-3 | 6.84295801 | 6.85264001 |
| ABC | 6.84323956 | 1.6049e-4 | 6.84298332 | 6.84363911 |
Table 10. Evaluation results for speed reducer design problem optimized by RCO and other optimizers.

| Optimizer | Mean | Std | Best | Worst |
|---|---|---|---|---|
| RCO | 3004.77871743 | 6.3584e0 | 2996.34816496 | 3016.62134716 |
| DBO | 3019.69646856 | 3.6985e1 | 2996.34816496 | 3188.32450766 |
| GJO | 3008.29523532 | 4.5195e0 | 3002.65647414 | 3018.77341248 |
| RUN | 2996.36244665 | 1.2103e-2 | 2996.35052416 | 2996.40167482 |
| SMA | 2996.34842012 | 3.1748e-4 | 2996.34819234 | 2996.34996928 |
| HHO | 3022.70732094 | 1.1816e1 | 3004.68594421 | 3055.73175253 |
| SOA | 3020.70262569 | 1.0990e1 | 3006.32501492 | 3050.35832119 |
| GOA | 3021.58587579 | 1.5643e1 | 3010.78997040 | 3039.52619342 |
| DA | 3015.08718301 | 1.7095e1 | 2998.16375095 | 3056.04561087 |
| MVO | 3036.79926778 | 1.8310e1 | 3011.09568733 | 3086.79861945 |
| EGO | 3084.75407037 | 4.5309e1 | 3039.50902406 | 3197.67885658 |
| COA | 3009.29775834 | 2.8429e0 | 2998.06942756 | 3030.25654118 |
| ALO | 3003.41214141 | 5.0234e0 | 2996.35200867 | 3014.25227204 |
| MFO | 2997.96853997 | 7.3133e0 | 2996.34816496 | 3035.62557865 |
| RFO | 2996.75478571 | 1.9620e0 | 2996.34816501 | 3007.10936746 |
| ABC | 2996.36592281 | 7.7803e-3 | 2996.35437282 | 2996.38186476 |