Article

An Improved Greater Cane Rat Algorithm with Adaptive and Global-Guided Mechanisms for Solving Real-World Engineering Problems

1 School of Computer Science, Hubei University of Technology, Wuhan 430068, China
2 Network Information Center, Shijiazhuang University, Shijiazhuang 050035, China
3 School of Journalism & Communication, Shijiazhuang University, Shijiazhuang 050035, China
* Authors to whom correspondence should be addressed.
Biomimetics 2025, 10(9), 612; https://doi.org/10.3390/biomimetics10090612
Submission received: 20 July 2025 / Revised: 30 August 2025 / Accepted: 9 September 2025 / Published: 10 September 2025

Abstract

This study presents an improved variant of the greater cane rat algorithm (GCRA), called the adaptive and global-guided greater cane rat algorithm (AGG-GCRA), which aims to alleviate key limitations of the original GCRA regarding convergence speed, solution precision, and stability. GCRA simulates the foraging behavior of the greater cane rat during both mating and non-mating seasons, demonstrating intelligent exploration capabilities. However, the original algorithm still faces challenges such as premature convergence and inadequate local exploitation when applied to complex optimization problems. To address these issues, this paper introduces four key improvements to the GCRA: (1) a global optimum guidance term to enhance convergence directionality; (2) a flexible parameter adjustment system designed to maintain a dynamic balance between exploration and exploitation; (3) a mechanism for retaining top-quality solutions to ensure the preservation of optimal results; and (4) a local perturbation mechanism to help escape local optima. To comprehensively evaluate the optimization performance of AGG-GCRA, 20 separate experiments were carried out across 26 standard benchmark functions and six real-world engineering optimization problems, with comparisons made against 11 advanced metaheuristic optimization methods. The findings indicate that AGG-GCRA surpasses the competing algorithms in terms of convergence rate, solution precision, and robustness. In the stability analysis, AGG-GCRA consistently obtained the global optimal solution in multiple runs for five engineering cases, achieving an average rank of first place and a standard deviation close to zero, highlighting its exceptional global search capabilities and excellent repeatability. Statistical tests, including the Friedman ranking and Wilcoxon signed-rank tests, provide additional validation for the effectiveness and significance of the proposed algorithm. In conclusion, AGG-GCRA provides an efficient and stable intelligent optimization tool for solving various optimization problems.

1. Introduction

Recently, influenced by the collective behaviors observed in biological populations, swarm intelligence-driven metaheuristic optimization algorithms have shown remarkable adaptability and versatility in addressing intricate optimization challenges [1,2,3,4,5]. By simulating behaviors such as co-evolution [6], foraging strategies [7,8,9], or social interactions of biological populations [10,11,12], these algorithms can efficiently perform global searches and local development in complex search spaces, including high-dimensional [13,14,15,16], multimodal, and nonlinear ones [17,18,19,20,21,22]. The core principles of biomimetics—drawing on the self-organization, self-adaptation, and collaboration mechanisms of natural organisms—provide a rich theoretical foundation and practical guidance for the design of optimization algorithms [23]. This has led to their widespread application and in-depth research in fields such as engineering optimization, machine learning parameter tuning, structural design, and robotic control [24,25,26,27,28,29]. For example, the particle swarm optimization (PSO) algorithm is influenced by the collective foraging behaviors of bird and fish groups [30]; the ant colony optimization (ACO) algorithm relies on the pheromone-based navigation of ants [31]; the butterfly optimization algorithm (BOA) is inspired by butterflies’ innate foraging and mating patterns [32]; the whale optimization algorithm (WOA) emulates the bubble-net hunting technique of humpback whales [33]; and the goose optimization algorithm (GOOSE) mirrors the V-shaped formation and coordinated navigation seen during goose migration [34]. These algorithms, by simulating collective behaviors in animal populations, have enabled the efficient exploration and exploitation of complex search spaces, thereby advancing fields such as engineering optimization.
However, these bio-inspired algorithms often face challenges, including slow convergence rates, suboptimal solution accuracy, and limited stability, frequently resulting in premature convergence to local optima. To address these issues, various improvements have been proposed by researchers. For example, Cao et al. [35] developed a global-best guided phase based optimization (GPBO) algorithm, incorporating a global optimum guidance term to enhance convergence directionality in large-scale optimization problems. Yang et al. [36] introduced a spatial information sampling strategy for adaptive parameter control in meta-heuristic algorithms, dynamically adjusting parameters based on the spatial distribution of solutions. Wu et al. [37] enhanced PSO with an elite retention mechanism to improve convergence performance and avoid premature local optima in solving the flexible job shop scheduling problem. Öztaş and Tuş [38] developed a hybrid iterated local search algorithm that incorporates a local perturbation mechanism to explore diverse regions of the search space and improve solutions for the vehicle routing problem with simultaneous pickup and delivery.
The greater cane rat algorithm (GCRA), a recently introduced biologically inspired method, emulates the foraging behavior of the greater cane rat during both mating and non-mating periods, integrating global search and local development features [39]. In the GCRA framework, the greater cane rat’s behavior is modeled in two distinct phases: the non-mating foraging phase and the mating period. During the non-mating phase, the individuals forage independently while sharing indirect information about promising food sources, which, in the algorithm, corresponds to global exploration of the search space. In the mating phase, individuals tend to cluster and compete for mates, representing intensified local search around high-quality areas. These biological strategies are mathematically encoded in position-updating equations, where movement decisions are guided by both random exploration and attraction toward elite individuals, thus translating observed animal behaviors into optimization processes. However, despite its good performance in certain optimization problems, GCRA still faces challenges in high-dimensional, multimodal, and constrained optimization problems, including slow convergence, limited solution precision, and a tendency to become trapped in local optima, restricting its practical application and value.
In order to improve the performance of the GCRA, this paper proposes the adaptive and global-guided greater cane rat algorithm (AGG-GCRA). This algorithm integrates four key improvements into the original GCRA framework, systematically enhancing its convergence, accuracy, and stability. First, AGG-GCRA introduces a global guidance term that incorporates historical best solution information into the individual position update process [40], effectively improving the population’s convergence direction toward the optimal solution region and enhancing its global search ability. Second, an adaptive parameter adjustment mechanism is designed, enabling key parameters to dynamically change during the iteration process, thus balancing exploration and exploitation. Third, an elite retention mechanism periodically retains the current best individual to prevent the loss of high-quality solutions due to random operations, thereby improving the stability and repeatability of the results. Finally, a local perturbation mechanism is introduced to apply minor disturbances to some high-quality individuals [41], enhancing population diversity and assisting the algorithm in avoiding local optima, thereby improving global optimization effectiveness.
To comprehensively evaluate the optimization performance of AGG-GCRA, comprehensive comparative experiments were performed using 26 standard benchmark functions and six practical engineering optimization problems. The results were evaluated against 11 other leading metaheuristic algorithms, such as PSO, WOA, and GOOSE. The experimental results show that AGG-GCRA outperforms the comparison algorithms in terms of convergence speed, solution accuracy, and robustness, particularly in demonstrating lower standard deviations in engineering cases, reflecting its excellent stability and repeatability. Additionally, the statistical significance of the proposed algorithm’s advantages was confirmed using Friedman ranking [42,43] and Wilcoxon signed-rank tests [44,45].
In conclusion, AGG-GCRA addresses the shortcomings of the original GCRA, exhibiting robust global search, local exploitation, and stable convergence, thus establishing it as an effective and dependable optimization tool for a range of complex problems. The primary contributions of this paper are as follows:
  • A new greater cane rat optimization algorithm (AGG-GCRA) is introduced, incorporating adaptive mechanisms and global guidance techniques to enhance the algorithm’s convergence rate, solution precision, and stability.
  • A comprehensive comparison of AGG-GCRA with 11 mainstream metaheuristic algorithms on 26 benchmark functions shows that AGG-GCRA surpasses the other algorithms in terms of optimization precision, convergence efficiency, and robustness, with its significance confirmed through statistical tests.
  • AGG-GCRA was applied to six practical engineering optimization problems, with the algorithm consistently producing high-quality solutions in multiple independent runs, demonstrating its practical value and reliability in engineering applications.
The structure of the paper is organized as follows: Section 2 introduces the basic principles and mechanisms of the original GCRA; Section 3 provides a detailed description of the proposed AGG-GCRA; Section 4 presents experimental comparisons and analyses of AGG-GCRA and 11 comparison algorithms on standard test functions and practical engineering optimization problems; Section 5 concludes the paper and outlines directions for future research.

2. Greater Cane Rat Algorithm

The GCRA is a novel metaheuristic optimization technique drawn from the foraging and social behaviors of the African greater cane rat (scientific name: Thryonomys swinderianus). This section elucidates the biological inspiration, mathematical model, and algorithm structure of the GCRA.

2.1. Biological Inspiration and Conceptual Model

The African greater cane rat, native to sub-Saharan Africa, has a distinct social structure and foraging behavior, which inspired the design of the GCRA. In its social system, a dominant male leads a group consisting of several females and juveniles. Greater cane rats forage at night, with the males memorizing paths to food and water sources and guiding the group members to them. During the mating season, males and females separate into groups and focus their activities around abundant food areas, thus enhancing foraging efficiency.
Figure 1 illustrates the natural habitat of the African greater cane rat. These rats are excellent swimmers, often using water to avoid danger, and are quick and agile on land. While primarily nocturnal, they may also be active during the day. They reside in a matriarchal society, governed by a dominant male, and typically nest in dense vegetation or occasionally in burrows vacated by other animals or termites. When threatened, they either produce a grunting sound or swiftly escape into the water for protection. The shaded region in the figure indicates the water source, with tall reeds growing nearby. The white regions and trails denote the routes to known food sources.
These biological traits are transformed into the exploration, exploitation, and information-sharing mechanisms within the algorithm, which simulate the global and local search processes within the search space.

2.2. Population Initialization

The optimization process of the GCRA begins with the random generation of a cane rat population $P$, which consists of $M$ individuals, each representing a candidate solution located at a specific position in a $D$-dimensional search space. The population initialization formula is shown as Equation (1):
$$P = \begin{bmatrix} p_{1,1} & p_{1,2} & \cdots & p_{1,D} \\ p_{2,1} & p_{2,2} & \cdots & p_{2,D} \\ \vdots & \vdots & \ddots & \vdots \\ p_{M,1} & p_{M,2} & \cdots & p_{M,D} \end{bmatrix}$$
where the position of the $m$-th individual in the $k$-th dimension is determined by Equation (2):
$$p_{m,k} = \lambda \times (U_k - L_k) + L_k$$
where $\lambda \in [0,1]$ is a uniformly distributed random number, and $U_k$ and $L_k$ represent the upper and lower bounds of the $k$-th dimension, respectively. This initialization guarantees that the population is evenly spread across the search space, which in turn enhances the algorithm's diversity and exploration capacity.
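To make this step concrete, the following minimal MATLAB sketch (MATLAB being the implementation environment used in Section 4) vectorizes Equation (2) for the whole population at once; the bound values and population sizes are illustrative placeholders, not values prescribed by the algorithm.
    % Minimal sketch of Equations (1) and (2); bounds are illustrative.
    M = 30; D = 30;                      % population size and dimensionality
    Lb = -100 * ones(1, D);              % lower bounds L_k (example values)
    Ub =  100 * ones(1, D);              % upper bounds U_k (example values)
    P  = rand(M, D) .* (Ub - Lb) + Lb;   % Equation (2): one row per cane rat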
Figure 2 presents the conceptual framework of the GCRA. Assuming the target food resource is located at coordinates $(X, Y)$, the path to this location is initially identified by the dominant male individual within the group, which then disseminates the corresponding path information to the other members, enabling them to adjust their positions accordingly. Positioned at $(X^{\prime}, Y^{\prime})$, the dominant individual possesses knowledge of the target location $(X, Y)$ and explores several accessible neighboring positions, as guided by Equations (4) and (5). In subsequent iterations, the dominant individual, located at the relative position $(X - X^{\prime}, Y - Y^{\prime})$, continues to execute similar path-update procedures. During the breeding season, information regarding food resource locations is shared across the group, prompting the population to segregate into male and female subsets. Each subset migrates toward food-rich regions and establishes provisional campsites in those areas.

2.3. Behavioral Modeling and Position Update

The GCRA alternates between two main phases based on the control parameter α (representing the environmental shifts between the rainy and dry seasons): the exploration phase and the exploitation phase.
In the exploration phase (path construction period), cane rat individuals either explore unknown areas or follow the paths remembered by the dominant male. The fundamental position update rule is shown as Equation (3):
$$q_{m,k}^{new} = 0.7 \times \frac{q_{m,k} + q_{r,k}}{2}$$
where $q_{m,k}$ denotes the position of the $m$-th individual in the $k$-th dimension, and $q_{r,k}$ is the position of another random individual in the same dimension. To enhance the update strategy, three additional coefficients are introduced in Equations (4)–(6):
$$r = F_{q_r} - t \times \frac{F_{q_r}}{T_{max}},$$
$$\alpha = 2r \times \xi - r,$$
$$\beta = 2r \times \eta - r,$$
where $F_{q_r}$ is the fitness value of individual $r$; $t$ and $T_{max}$ are the current iteration and the maximum iteration count, respectively; and $\xi$ and $\eta$ are random variables uniformly distributed in the range $[0, 1]$.
In the exploitation phase (mating season foraging period), males and females search independently, focusing on approaching the position of a randomly selected female to accelerate the local search. The position update formula is shown as Equation (7):
$$q_{m,k}^{new} = q_{m,k} + C \times (q_{r,k} - \mu \times q_{f,k})$$
where $q_{f,k}$ represents the position of the randomly selected female individual, $\mu$ simulates the offspring count of each female rat, taking values in the set $\{1, 2, 3, 4\}$, and $C$ is the weight parameter controlling the step size.
In addition, the original formulation of the GCRA also defines an alternative exploitation variant, expressed as Equation (8):
$$q_{m,k}^{new} = q_{m,k} + C \times (q_{k,k} - \mu \times q_{m,k})$$
After generating a candidate position $q_{m,k}^{new}$, the update is only accepted if the new position yields a better fitness; otherwise, the previous position is retained, as shown in Equation (9):
$$\text{if } F(q_{m,k}^{new}) < F(q_{m,k}), \quad q_{m,k} \leftarrow q_{m,k}^{new}; \quad \text{else, it remains unchanged.}$$
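Continuing the initialization sketch above, the greedy acceptance rule of Equation (9) can be written in a few lines; the objective F and the candidate q_new are illustrative stand-ins for whichever update (Equation (3) or (7)) produced the candidate.
    % Sketch of the greedy acceptance in Equation (9), for minimization.
    F = @(x) sum(x.^2);                  % example objective (Sphere, F1)
    m = 1;                               % index of the individual being updated
    q_new = P(m, :) + 0.1 * randn(1, D); % illustrative candidate from Eq. (3)/(7)
    if F(q_new) < F(P(m, :))             % accept only improving moves
        P(m, :) = q_new;                 % otherwise the old position is kept
    end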

2.4. Dominant Male Selection and Fitness Evaluation

In the GCRA, the dominant male is the individual with the best fitness in the current population, representing the optimal solution found so far. Other individuals adjust their strategies based on the position information of the dominant male, thereby optimizing their search direction. The dominant male is updated whenever a better solution emerges in the population, which can be formulated as Equation (10):
$$k = \arg\min_i F(q_i)$$
where $k$ denotes the index of the newly selected dominant male, $\arg\min_i$ represents the operation of finding the index $i$ that yields the minimum value of $F(q_i)$ among all individuals, and $F(q_i)$ refers to the fitness value of individual $i$.
In each iteration, if a better solution is found, the dominant male is updated to ensure the algorithm progresses toward the global optimum. If no better solution is found, the position is updated or adjusted according to the current phase rules to avoid convergence to local optima.
The importance of the GCRA lies not merely in its novelty but in its unique balance between exploration and exploitation inspired by the greater cane rat’s real-world adaptive foraging strategies. Unlike many generic swarm-based metaheuristics, the GCRA integrates memory-based path following and seasonal behavior switching, which naturally encode both long-range search and localized intensification. These mechanisms address the frequent imbalance between global and local search observed in conventional metaheuristics. Furthermore, the separation of male and female subgroups during exploitation phases introduces a structured diversity preservation mechanism, reducing the risk of premature convergence and enhancing performance in complex multimodal optimization landscapes. This combination of behavioral realism and search efficiency underpins its relevance and value for both theoretical study and practical applications.
The algorithm flow of the GCRA is shown in Figure 3.

3. Adaptive and Global-Guided Greater Cane Rat Algorithm

To improve the global search capability, local exploitation ability, and convergence stability of the GCRA in addressing complex optimization challenges, this paper introduces an enhanced algorithm, AGG-GCRA, which integrates adaptive global guidance and elite mechanisms. By introducing a global optimum guidance term, an adaptive parameter adjustment strategy, an elite retention mechanism, and a local perturbation mechanism, the AGG-GCRA effectively addresses the issues in the GCRA, including the tendency to become trapped in local optima and slow convergence.
This section highlights the main enhancements of the AGG-GCRA and offers an in-depth description of the four fundamental mechanisms.

3.1. Global Optimum Guidance Term

To enhance the convergence precision and search efficiency of the GCRA, a global optimum guidance term is introduced by modifying the original position update formula (Equation (3)) with a guidance component from the global best position $q_{best}$. In each iteration, the individual position update is influenced not only by a randomly selected individual but also by the current global optimum solution $q_{best}$, as shown in Equation (11):
$$q_{m,k}^{new} = q_{m,k} + \lambda (q_{best,k} - q_{m,k})$$
where λ is the global guidance learning rate parameter, which dynamically adjusts the influence of the global best solution on individual position updates based on the iteration number and population state. This adjustment improves the convergence directionality while effectively preventing premature convergence. The dynamic mechanism helps maintain population diversity and avoids stagnation.
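A minimal sketch of Equation (11) follows, reusing the names from the earlier snippets; since the paper states only that λ is adjusted dynamically, the linear decay used here is an assumption for illustration.
    % Sketch of the global-guidance update in Equation (11).
    t = 100; T_max = 500;                           % current / maximum iteration
    fitness = sum(P.^2, 2);                         % example objective values
    [~, ib] = min(fitness); q_best = P(ib, :);      % current global best position
    lambda = 0.5 * (1 - t / T_max);                 % assumed decay schedule for lambda
    q_new = P(m, :) + lambda * (q_best - P(m, :));  % pull toward the global best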

3.2. Adaptive Parameter Adjustment

In order to maintain a dynamic balance between exploration and exploitation, an adaptive parameter adjustment strategy is proposed. The adaptive factor $\sigma(t)$ is defined in Equation (12):
$$\sigma(t) = \alpha_0 \times \left(1 - \frac{t}{T_{max}}\right) \times (1 - \delta)$$
where $\alpha_0$ is the initial balance factor, $t$ is the current iteration, $T_{max}$ is the maximum number of iterations, and $\delta$ is the relative ratio of the current best solution's fitness fluctuation to the historical best.
Based on the value of σ ( t ) , the algorithm dynamically decides the proportion of exploration and exploitation formulas to be used: when σ ( t ) > 0.5 , exploration operations are prioritized; otherwise, exploitation operations are prioritized. This strategy enhances the algorithm’s search adaptability across different stages.
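The phase decision can be sketched as follows, continuing the earlier snippets; the exact computation of δ is not fully specified in the text, so the ratio below is an assumption consistent with its description.
    % Sketch of Equation (12) and the resulting phase switch.
    alpha0 = 1.0;                                % initial balance factor (illustrative)
    f_hist = 1.00; f_now = 1.02;                 % historical / current best fitness (example)
    delta = abs(f_now - f_hist) / max(abs(f_hist), eps);  % assumed form of delta
    sigma_t = alpha0 * (1 - t / T_max) * (1 - delta);     % Equation (12)
    if sigma_t > 0.5
        % exploration branch: global-guidance update, Equation (11)
    else
        % exploitation branch: elite-mean update, Equation (13)
    end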

3.3. Elite Preservation Mechanism

To ensure that high-quality solutions in the population are not disrupted by random updates, an elite preservation mechanism is embedded as an additional position update step extending Equation (7). The top 20% of individuals are selected to form an elite set $E$, and the average solution $\bar{q}_E$ of these elite individuals is used to guide the position updates of regular individuals, as in Equation (13):
$$q_{m,k}^{new} = q_{m,k} + \gamma (\bar{q}_{E,k} - q_{m,k})$$
where the adaptive learning rate $\gamma$ is defined in Equation (14):
$$\gamma = \gamma_0 \left(1 - \frac{t}{T_{max}}\right)$$
The learning rate γ decreases linearly as the iterations progress, enabling the algorithm to gradually shift from exploration to exploitation. Moreover, using the average position of the elite subset rather than solely relying on the global best individual enhances the algorithm’s robustness by preventing overdependence on a single solution and better maintaining population diversity. This mechanism helps individuals converge toward high-quality regions, improving the accuracy of later-stage searches.
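In code, the elite-mean update of Equations (13) and (14) amounts to a few lines, sketched below with the same illustrative names as before (γ0 is a placeholder value):
    % Sketch of the elite preservation step, Equations (13) and (14).
    gamma0 = 0.8;                                % initial learning rate (illustrative)
    [~, order] = sort(fitness);                  % ascending: best individuals first
    E = P(order(1:ceil(0.2 * M)), :);            % top 20% form the elite set
    q_bar_E = mean(E, 1);                        % elite mean position
    gamma_t = gamma0 * (1 - t / T_max);          % Equation (14): linear decay
    q_new = P(m, :) + gamma_t * (q_bar_E - P(m, :));  % Equation (13)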

3.4. Local Perturbation Strategy

To enhance the algorithm’s capability to avoid becoming trapped in local optima, a local perturbation strategy is introduced. The updated position of an individual incorporates a perturbation term $\Delta$, expressed as Equation (15):
$$q_{m,k}^{new} = q_{m,k}^{new} + \omega \Delta$$
where $\Delta$ follows a mixed distribution, as in Equation (16):
$$\Delta = \theta_1 N(0, \sigma_1^2) + \theta_2 \, \mathrm{Lévy}(\beta)$$
where $N(0, \sigma_1^2)$ denotes a normally distributed perturbation, $\mathrm{Lévy}(\beta)$ denotes a Lévy flight step, and $\theta_1 + \theta_2 = 1$ are the perturbation weights.
The Lévy flight component introduces a probability-driven mechanism for generating occasional long-distance jumps, following a heavy-tailed distribution. This allows the algorithm to escape from local optima by occasionally exploring far regions of the search space that are inaccessible to purely Gaussian perturbations. Combining Gaussian perturbations for fine-tuning with Lévy flights for large-scale jumps strikes a balance between local exploitation and global exploration: the perturbation acts as a stochastic refinement after the main update steps (modifying Equations (3) and (7)), enhancing the local search capability while preserving population diversity and global exploration.
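The mixed perturbation of Equations (15) and (16) requires a Lévy-step generator; the sketch below uses Mantegna's algorithm, a common choice that the paper does not explicitly name, with all weights set to illustrative values.
    % Sketch of Equations (15) and (16) with a Mantegna Lévy-step generator.
    beta = 1.5; sigma1 = 0.1;                    % Lévy exponent and Gaussian scale
    theta1 = 0.7; theta2 = 0.3; omega = 0.05;    % weights (theta1 + theta2 = 1)
    num = gamma(1 + beta) * sin(pi * beta / 2);  % gamma() is the Gamma function
    den = gamma((1 + beta) / 2) * beta * 2^((beta - 1) / 2);
    sigma_u = (num / den)^(1 / beta);            % Mantegna scale parameter
    u = randn(1, D) * sigma_u; v = randn(1, D);
    levy_step = u ./ abs(v).^(1 / beta);         % heavy-tailed jump lengths
    Delta = theta1 * sigma1 * randn(1, D) + theta2 * levy_step;  % Equation (16)
    q_new = q_new + omega * Delta;               % Equation (15): perturb the candidate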

3.5. Complexity Analysis

The AGG-GCRA consists mainly of two stages: population initialization and iterative updates. During initialization, the algorithm generates random initial positions for each search agent and dimension, resulting in a time complexity of $O(\text{Agents} \times \text{dimension})$. In the main loop, the algorithm updates the positions and evaluates the fitness of all agents across the dimensions in each iteration. The total number of iterations is $T_{max}$, and the overall time complexity is mainly determined by this part. Elite replacement and local perturbation operations are comparatively cheap and can be ignored. Thus, the time complexity of the algorithm is shown in Equation (17):
$$O(T_{max} \times \text{Agents} \times \text{dimension})$$
where $T_{max}$ is the maximum number of iterations, Agents is the population size, and dimension is the dimensionality of the problem. If the objective function is computationally intensive, the time complexity rises accordingly. More specifically, the position update stage involves vector operations with a computational cost of $O(\text{dimension})$ per agent, while the fitness evaluation stage typically incurs the highest cost at $O(f_{eval})$ per agent, where $O(f_{eval})$ denotes the time complexity of the objective function. Consequently, the full computational complexity can be expressed as $O(T_{max} \times \text{Agents} \times (\text{dimension} + f_{eval}))$. In cases where $f_{eval}$ dominates, this term becomes the primary contributor to runtime, whereas, for simpler objective functions, the algorithm's complexity is effectively linear in both the population size and the problem dimension.

3.6. Summary of the AGG-GCRA

This section systematically presents the core improvement strategies of the AGG-GCRA, covering the four aspects of global optimum guidance, adaptive parameter adjustment, elite retention, and local perturbation. These strategies maintain a well-balanced approach between global exploration and local refinement throughout the search process. By dynamically regulating the transition between exploration and exploitation phases, the algorithm’s adaptability and robustness are improved. The elite set effectively prevents premature convergence, while the local perturbation mechanism improves population diversity, ensuring the algorithm’s capability to avoid local optima in complex search spaces. Overall, the AGG-GCRA achieves multi-level optimization over the original GCRA, accelerating convergence speed and improving the stability and accuracy of the final solution.
To facilitate understanding and practical implementation, Algorithm 1 provides a detailed pseudocode for the AGG-GCRA, clearly illustrating the execution logic of each key step. Additionally, the algorithm flowchart in Figure 4 offers a visual representation of the operational framework. The algorithm is well structured and scientifically designed, providing a solid foundation for performance verification and application in subsequent sections.
Algorithm 1. AGG-GCRA
Input: Population size M, maximum iterations T_max, parameters α0, γ0, λ, σ1², β, θ1, θ2, ω
Output: Optimal solution q_best
1. Initialize population positions qₘ (m = 1, 2, …, M)
2. Evaluate fitness f(qₘ) for each individual
3. Set q_best as the global best solution
4. For t = 1 to T_max do
  4.1 Adaptive Parameter Adjustment — (Equation (12))
    Compute σ(t) = α0 × (1 - t / T_max) × (1 − δ) (12)
  4.2 For m = 1 to M do
    If σ(t) > 0.5 then // Exploration Phase
      Global Optimum Guidance Term — (Equation (11))
      qₘ,ₖⁿᵉʷ = qₘ,ₖ + λ × (q_best,ₖ − qₘ,ₖ) (11)
    Else // Exploitation Phase
      Elite Preservation Mechanism — (Equations (13) and (14))
      Find top 20% individuals → elite set E
      Compute elite mean position q̄_E
      Adaptive learning rate: γ = γ0 × (1 − t / T_max) (14)
      Update position: qₘ,ₖⁿᵉʷ = qₘ,ₖ + γ × (q̄_E,ₖ − qₘ,ₖ) (13)
    End If
    Local Perturbation Strategy — (Equations (15) and (16))
      Generate perturbation: Δ = θ1 × N(0, σ1²) + θ2 × Lévy(β) (16)
      Apply perturbation: qₘ,ₖⁿᵉʷ = qₘ,ₖⁿᵉʷ + ω × Δ (15)
    Evaluate fitness of qₘⁿᵉʷ and update qₘ if better
  End For
  Update elite set E and global best q_best
End For

4. Results and Analytical Evaluation of the Experiment

To validate the performance and effectiveness of the proposed AGG-GCRA, this section performs a comparative analysis from two perspectives: standard benchmark functions and practical engineering problems. Specifically, it consists of (1) a comparison with 11 mainstream optimization algorithms on 26 classic benchmark functions, including the GCRA, BOA, WOA, GOOSE, AOA [46], PSO, DE [47], ACO, NSM-BO [48], PSOBOA [49], and FDB-AGSK [50]. The sources of the algorithms are provided in Appendix A. The parameter configurations for each algorithm can be found in Table 1. For fairness, parameters commonly used across algorithms—such as maximum iterations, population size, and termination conditions—were set according to widely accepted values reported in previous studies, while algorithm-specific parameters were further refined through experimental tuning. This tuning process involved evaluating multiple parameter combinations on representative benchmark functions and selecting those that consistently produced high average performance. Such adjustment of literature-based values ensures that each algorithm performs reasonably and competitively. In Table 1, the term “Agents” denotes the population size, meaning the number of candidate solutions maintained by the algorithm at each iteration; (2) an application verification on six typical engineering optimization problems, followed by a performance analysis compared to the aforementioned algorithms.
Experimental setup: The proposed AGG-GCRA, along with other metaheuristic methods, was implemented in MATLAB 2023a. All tests were carried out on a Windows 10 platform with an Intel(R) Core(TM) i9-14900KF processor (3.10 GHz) and 32 GB of RAM.

4.1. Tests on 26 Benchmark Functions

To thoroughly assess the performance of the proposed AGG-GCRA, this paper selects 26 classic test functions as the benchmark set [51,52,53,54,55,56]. These test functions are widely used in metaheuristic optimization, because they effectively reflect the characteristics and difficulty of various optimization problems, making them a crucial tool for validating algorithm performance.
In accordance with the benchmarking guidelines outlined by Beiranvand et al. [57], and aligned with the principles emphasized by Piotrowski et al. [58,59], all experiments were conducted under uniform experimental settings to ensure fair and reproducible comparisons. Each algorithm was executed on all 26 benchmark functions with a fixed dimensionality of 30, a maximum of 500 iterations, and a population size of 30. To guarantee statistical reliability, each algorithm was independently run 30 times on each function. Key performance metrics—including the best, mean, and standard deviation of the obtained objective values, as well as computational time—were recorded. Furthermore, the convergence behavior of each algorithm across all test functions was documented. Statistical analyses, including Friedman ranking scores and Wilcoxon signed-rank tests, were performed to rigorously assess and compare the relative performance of the algorithms. These procedures collectively ensure that the experimental evaluation is thorough, systematic, and consistent with established best practices in the literature.
To ensure that the performance evaluation of the proposed algorithm is both comprehensive and representative, this study follows the problem landscape characteristic coverage principle in selecting the benchmark test functions. Specifically, the test set is designed to cover various typical characteristics of optimization problems, thereby enabling a thorough assessment of the algorithm’s capabilities under different difficulty levels and structural conditions. The selected functions fall into the following categories:
  • Unimodal Functions—These functions have a single global optimum and are primarily used to evaluate the convergence speed and local exploitation ability of the algorithm.
    Examples from the selected set: F1 (Sphere), F2 (Schwefel 2.22), F3 (Schwefel 1.2), F4 (Schwefel 2.21), F5 (Step), F6 (Quartic), F7 (Exponential), F8 (Sum Power), F9 (Sum Square), F10 (Rosenbrock), F11 (Zakharov), F12 (Trid), F13 (Elliptic), and F14 (Cigar).
  • Multimodal Functions—These functions have multiple local optima, allowing for the assessment of the algorithm’s global search capability and its ability to escape from local optima.
    Examples from the selected set: F15 (Rastrigin), F16 (NCRastrigin), F17 (Ackley), F18 (Griewank), F19 (Alpine), F20 (Penalized 1), F21 (Penalized 2), F23 (Lévy), F24 (Weierstrass), F25 (Solomon), and F26 (Bohachevsky).
  • Separable and Non-separable Functions—These functions are used to test the algorithm’s adaptability in handling problems with either independent variables or strong inter-variable coupling.
    Examples from the selected set:
    ·
    Separable: F1, F2, F3, F5, F8, F9, and F16.
    ·
    Non-separable: F10, F12, F17, F18, F20, F23, and F24.
  • Ill-conditioned and Anisotropic Functions—These functions exhibit large variations in gradient magnitude across different search directions, testing the algorithm’s stability in highly non-uniform search spaces.
    Examples from the selected set: F10 (Rosenbrock), F13 (Elliptic), and F14 (Cigar).
  • Non-differentiable/Discontinuous Functions—These functions are used to evaluate the robustness of the algorithm under conditions where gradient information is unavailable or the function is non-smooth.
    Examples from the selected set: F5 (Step) and F16 (NCRastrigin).
  • Scalable Functions—These functions allow the dimensionality to be adjusted, enabling the analysis of the algorithm’s computational efficiency and performance trends in high-dimensional spaces.
  • Examples from the selected set: F1, F2, F3, F4, F7, F8, F9, F10, F11, F12, F13, F15, F16, F17, F18, F20, and F24.
By adopting such a balanced and diverse test set design, the proposed algorithm can be thoroughly evaluated across a variety of search scenarios. This ensures a comprehensive examination of its global search capability, local exploitation ability, convergence speed, and stability, thereby enhancing the generality and persuasiveness of the conclusions.
The selection of these 26 test functions enables a comprehensive and systematic validation of the proposed algorithm’s adaptability and superiority across various types of optimization problems. The detailed information about the test functions can be found in Table 2.

4.1.1. Performance Indicators

To thoroughly assess the performance of the proposed algorithm, 30 independent repetitions of experiments were conducted for the AGG-GCRA and 11 comparison algorithms. The results were analyzed using three metrics: best, mean, and standard deviation (Std). The best value represents the optimal solution achieved across multiple runs, the mean value reflects the overall search performance, and the standard deviation measures the stability of the results. These three metrics enable a thorough evaluation of the algorithm’s convergence accuracy, stability, and global search capacity.
The formulas for calculating the best value, mean, and standard deviation are given in Equations (18)–(20), respectively:
$$Best = \min \{f_1, f_2, \ldots, f_N\},$$
$$Mean = \frac{1}{N} \sum_{i=1}^{N} f_i,$$
$$Std = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} (f_i - Mean)^2},$$
where $f_i$ represents the fitness score obtained from the $i$-th independent run, and $N$ is the total number of runs, which is 30 in this case.
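These three statistics map directly onto built-in MATLAB functions, as the short sketch below shows; run_once is a hypothetical wrapper returning the final best fitness of a single AGG-GCRA run.
    % Sketch of Equations (18)-(20) over N = 30 independent runs.
    N = 30; f_runs = zeros(1, N);
    for i = 1:N
        f_runs(i) = run_once();   % hypothetical wrapper for one independent run
    end
    Best = min(f_runs);           % Equation (18)
    Mean = mean(f_runs);          % Equation (19)
    Std  = std(f_runs);           % Equation (20): std uses the N-1 divisor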

4.1.2. Numerical Results Analysis

In this study, we conducted a comprehensive evaluation of the proposed AGG-GCRA and 11 comparison algorithms on 26 benchmark functions. Each test was assessed based on five performance metrics: best fitness, average fitness, standard deviation, average computational time, and average ranking based on average fitness. The experimental results are shown in Table 3. Additionally, we included two benchmark algorithms for comparison: Latin Hypercube Sampling combined with parallel local search (LHS-PLS), which generates uniformly distributed initial samples and performs independent local searches from each to enhance global coverage, and pattern search, a classical derivative-free method that explores the neighborhood of the current point along predefined directions and adapts the step size to converge to local optima. Both are typical standard methods for derivative-free optimization. Given that double-precision floating-point calculations typically provide about 16 significant digits of precision, and due to rounding errors, differences below 1 × 10−8 are considered insignificant in the analysis. Therefore, all values below this threshold were rounded to zero before analysis.
The numerical results indicate that the AGG-GCRA demonstrates superior optimization performance on most test functions. Specifically, it achieved the best fitness value on 25 out of the 26 benchmark functions, ranking first among all 14 comparison algorithms. The only exception was function F13, where it ranked 12th. The average ranking across all benchmark functions was 1.42, with an overall ranking of first. This suggests that the algorithm exhibits strong and stable performance across a variety of optimization problems.
Furthermore, the standard deviation results confirm the algorithm’s excellent stability: For many functions, the standard deviation was zero, indicating that the AGG-GCRA consistently converged to the same optimal value across all 30 independent runs. The computational time remained within a reasonable range for all test functions, highlighting the practicality of the AGG-GCRA in optimization scenarios.

4.1.3. Convergence Curve Analysis

To further assess the effectiveness of the proposed AGG-GCRA on standard test functions, this paper examines its convergence curves across 26 test functions in comparison with 11 other algorithms. The experimental results show that the AGG-GCRA is able to approach the optimal solution in fewer iterations on 23 of the test functions (F1–F6, F8–F12, F14, and F16–F26) and demonstrates outstanding stability, showing faster convergence and higher accuracy. This result clearly demonstrates that the AGG-GCRA has a significant advantage in balancing global search and local exploitation capabilities, effectively avoiding local optima and enhancing both optimization efficiency and solution accuracy. The relevant convergence curves are shown in Figure 5.

4.1.4. Friedman Ranking Scores and Wilcoxon Signed-Rank Analysis Results

Based on the results from 26 benchmark test functions, Table 4 presents the Friedman ranking scores and corresponding rankings for the proposed AGG-GCRA; 11 competing algorithms (GCRA, BOA, WOA, GOOSE, AOA, PSO, DE, ACO, NSM-BO, PSOBOA, and FDB-AGSK); and 2 baseline algorithms (LHS-PLS and pattern search).
The AGG-GCRA achieves the lowest Friedman score of 1.7308, ranking first among all 14 algorithms, which demonstrates its superior overall optimization performance. The next best performers are the GCRA (3.3077) and FDB-AGSK (4.9231), indicating that the improvements in the AGG-GCRA provide a noticeable performance gain over its base version. In contrast, the ACO algorithm records the highest score (13.0769) and ranks last, reflecting its relatively weaker performance on the test set. Algorithms such as the BOA, PSO, and GOOSE are positioned in the middle-to-lower ranking range and do not outperform the top-ranked methods. Overall, the Friedman ranking analysis confirms that the AGG-GCRA consistently delivers better performance and stability compared to all other tested algorithms.
At a significance level of α = 0.05, the Wilcoxon signed-rank test was performed to evaluate pairwise performance differences between the AGG-GCRA and each of the other 13 algorithms across the 26 functions. As shown in Table 5, all p-values are less than 0.05, indicating statistically significant performance differences in every comparison. Notably, the AGG-GCRA achieves extremely small p-values against DE (9.3386 × 10−6), ACO (9.3386 × 10−6), PSO (7.0443 × 10−5), BOA (9.6755 × 10−5), and GOOSE (7.0443 × 10−5), highlighting its substantial advantage. Even in comparisons with strong competitors such as the GCRA (1.3183 × 10−4) and FDB-AGSK (2.3556 × 10−3), the AGG-GCRA still shows a statistically significant improvement.
These results, validated by both the Friedman ranking and the Wilcoxon signed-rank test, demonstrate that the AGG-GCRA not only outperforms twelve state-of-the-art algorithms but also surpasses the two baseline optimization strategies in terms of convergence stability, search capability, and robustness.

4.2. Application on Six Practical Engineering Problems

To verify the effectiveness of the proposed AGG-GCRA in real-world engineering optimization problems, this paper selects six classic engineering design problems as benchmark test cases and compares the results with 11 mainstream metaheuristic algorithms. All comparison algorithms are executed under identical experimental conditions, with a maximum of 500 iterations and a population size of 30. To ensure the reliability and statistical significance of the results, each set of experiments is independently repeated 20 times.
To comprehensively evaluate the performance of each algorithm on engineering optimization problems, this paper employs several evaluation metrics, including the best value, mean value, standard deviation, median, and average computational time (ACT). The best value represents the minimum objective function value obtained from 20 independent runs, reflecting the algorithm’s maximum optimization capability in a single run. The mean value is calculated by averaging the objective function values across 20 experiments, reflecting the algorithm’s overall performance. The standard deviation measures the fluctuation range of the 20 results, reflecting the stability and robustness of the algorithm. The median is the middle value of the 20 results, further revealing the distribution characteristics and minimizing the impact of outliers. The ACT indicates the average time (in seconds) the algorithm takes to finish each optimization task over several runs, serving as a measure of its computational efficiency.

4.2.1. Weight Minimization of a Speed Reducer

Figure 6 illustrates the schematic diagram of the weight minimization of a speed reducer problem [60]. This problem focuses on minimizing the weight of the speed reducer structure while satisfying engineering constraints related to gear meshing, bearing strength, bending stress, and dimensions. It involves complex nonlinear relationships and multiple constraints, making it a classic example of mechanical structural optimization, ideal for evaluating the performance of optimization algorithms when applied to real-world engineering problems.
The problem involves optimizing seven parameters: the shaft 1 diameter $d_1$, the shaft 2 diameter $d_2$, the length of shaft 1 between bearings $l_1$, the length of shaft 2 between bearings $l_2$, the number of pinion teeth $z$, the teeth module $m$, and the face width $b$. The formula for calculating the minimum weight is shown in Equation (21):
$$\min f(x) = 0.7854 x_1 x_2^2 (3.3333 x_3^2 + 14.9334 x_3 - 43.0934) - 1.508 x_1 (x_6^2 + x_7^2) + 7.4777 (x_6^3 + x_7^3) + 0.7854 (x_4 x_6^2 + x_5 x_7^2)$$
where $x_1 = b$, $x_2 = m$, $x_3 = z$, $x_4 = l_1$, $x_5 = l_2$, $x_6 = d_1$, and $x_7 = d_2$, subject to:
$$g_1(x) = \frac{27}{x_1 x_2^2 x_3} - 1 \le 0,$$
$$g_2(x) = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0,$$
$$g_3(x) = \frac{1.93 x_4^3}{x_2 x_3 x_6^4} - 1 \le 0,$$
$$g_4(x) = \frac{1.93 x_5^3}{x_2 x_3 x_7^4} - 1 \le 0,$$
$$g_5(x) = \frac{\sqrt{(745 x_4 / (x_2 x_3))^2 + 16.9 \times 10^6}}{110 x_6^3} - 1 \le 0,$$
$$g_6(x) = \frac{\sqrt{(745 x_5 / (x_2 x_3))^2 + 157.5 \times 10^6}}{85 x_7^3} - 1 \le 0,$$
$$g_7(x) = \frac{x_2 x_3}{40} - 1 \le 0,$$
$$g_8(x) = \frac{5 x_2}{x_1} - 1 \le 0,$$
$$g_9(x) = \frac{x_1}{12 x_2} - 1 \le 0,$$
$$g_{10}(x) = \frac{1.5 x_6 + 1.9}{x_4} - 1 \le 0,$$
$$g_{11}(x) = \frac{1.1 x_7 + 1.9}{x_5} - 1 \le 0,$$
$$2.6 \le x_1 \le 3.6, \quad 0.7 \le x_2 \le 0.8, \quad 17 \le x_3 \le 28, \quad 7.3 \le x_4 \le 8.3, \quad 7.8 \le x_5 \le 8.3, \quad 2.9 \le x_6 \le 3.9, \quad 5 \le x_7 \le 5.5$$
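The paper does not state how the metaheuristics handle these constraints internally; a common and simple choice is a static penalty added to Equation (21), sketched below. The function name, the penalty weight, and the elision of g3 to g11 are all illustrative, and the same pattern applies to the other constrained problems in this section.
    function f = speed_reducer_penalized(x)
    % Illustrative penalized objective for the speed reducer problem.
        g = zeros(11, 1);
        g(1) = 27 / (x(1) * x(2)^2 * x(3)) - 1;          % constraint g1
        g(2) = 397.5 / (x(1) * x(2)^2 * x(3)^2) - 1;     % constraint g2
        % g(3) .. g(11): encode the remaining constraints listed above
        f0 = 0.7854 * x(1) * x(2)^2 * (3.3333 * x(3)^2 + 14.9334 * x(3) - 43.0934) ...
             - 1.508 * x(1) * (x(6)^2 + x(7)^2) + 7.4777 * (x(6)^3 + x(7)^3) ...
             + 0.7854 * (x(4) * x(6)^2 + x(5) * x(7)^2); % Equation (21)
        f = f0 + 1e6 * sum(max(g, 0).^2);  % quadratic static penalty (weight assumed)
    end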
In this problem, we present the optimal values and corresponding parameter configurations obtained by each algorithm during the runs (Table 6), along with statistical metrics such as best value, mean value, median, and standard deviation (Table 7). The algorithms are ranked based on the mean value. Based on the experimental results, it is clear that the AGG-GCRA achieved the optimal value of 2994.2343 and the ACTs remain reasonable. Overall, the AGG-GCRA significantly outperforms most of the comparison algorithms.

4.2.2. Tension/Compression Spring Design Problem

Figure 7 illustrates the schematic diagram of the tension/compression spring design problem [61]. The objective of this problem is to minimize the spring weight while meeting constraints related to shear stress, deflection, natural frequency, and geometric dimensions. With its solid engineering background and clear constraint system, this problem is widely used to evaluate the adaptability and convergence performance of optimization algorithms in structural optimization.
The design problem includes three parameters to be optimized: the wire diameter ($d$), the mean coil diameter ($D$), and the number of active coils ($N$). The formula for calculating the minimum weight is shown in Equation (22):
$$\min f(x) = (x_3 + 2) x_2 x_1^2$$
where $x_1 = d$, $x_2 = D$, and $x_3 = N$, subject to:
$$g_1(x) = 1 - \frac{x_2^3 x_3}{71{,}785 x_1^4} \le 0,$$
$$g_2(x) = \frac{4 x_2^2 - x_1 x_2}{12{,}566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0,$$
$$g_3(x) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0,$$
$$g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0,$$
$$0.5 \le x_1, x_2 \le 2, \quad 2 \le x_3 \le 15.$$
In this problem, we present the optimal values and corresponding parameter configurations obtained by each algorithm during the runs (Table 8), along with statistical metrics such as best value, mean value, median, and standard deviation (Table 9). The algorithms are ranked based on their mean values. Based on the experimental results, it is clear that the AGG-GCRA achieved the optimal value of 0.0127, with a success rate of 100% across 20 independent runs, and the ACTs remain reasonable. Moreover, its mean value ranks first, indicating the algorithm’s excellent optimization ability and stability. Overall, the AGG-GCRA significantly outperforms most of the comparison algorithms.

4.2.3. Welded Beam Design Problem

Figure 8 illustrates the schematic diagram of the welded beam design problem [62]. The goal is to minimize the manufacturing cost of the welded beam structure while considering engineering constraints related to strength, deflection, and stability. This problem involves balancing material costs and mechanical performance, with nonlinear and multi-constraint characteristics, making it ideal for evaluating the ability of optimization algorithms to handle multi-objective trade-offs and constraint management.
The welded beam design problem involves four design parameters: the weld thickness ($h$), the welding rod length ($l$), the rod height ($t$), and the rod thickness ($b$). The formula for minimizing the manufacturing cost is shown in Equation (23):
$$\min f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14 + x_2)$$
where $x_1 = h$, $x_2 = l$, $x_3 = t$, and $x_4 = b$, subject to:
$$g_1(x) = \tau(x) - \tau_{max} \le 0,$$
$$g_2(x) = \sigma(x) - \sigma_{max} \le 0,$$
$$g_3(x) = \delta(x) - \delta_{max} \le 0,$$
$$g_4(x) = x_1 - x_4 \le 0,$$
$$g_5(x) = P - P_c(x) \le 0,$$
$$g_6(x) = 0.125 - x_1 \le 0,$$
$$g_7(x) = 0.10471 x_1^2 + 0.04811 x_3 x_4 (14 + x_2) - 5 \le 0,$$
$$0.1 \le x_1, x_4 \le 2.0, \quad 0.1 \le x_2, x_3 \le 10.0,$$
where
$$\tau(x) = \sqrt{(\tau')^2 + 2 \tau' \tau'' \frac{x_2}{2R} + (\tau'')^2}, \quad \tau' = \frac{P}{\sqrt{2} x_1 x_2}, \quad \tau'' = \frac{MR}{J},$$
$$M = P \left(L + \frac{x_2}{2}\right), \quad R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2}, \quad J = 2 \left\{ \sqrt{2} x_1 x_2 \left[ \frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2 \right] \right\},$$
$$\sigma(x) = \frac{6PL}{x_4 x_3^2}, \quad \delta(x) = \frac{4PL^3}{E x_4 x_3^3}, \quad P_c(x) = \frac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2} \left(1 - \frac{x_3}{2L} \sqrt{\frac{E}{4G}}\right),$$
$$P = 6000 \text{ lb}, \quad L = 14 \text{ in}, \quad E = 30 \times 10^6 \text{ psi}, \quad G = 12 \times 10^6 \text{ psi}, \quad \tau_{max} = 13{,}600 \text{ psi}, \quad \sigma_{max} = 30{,}000 \text{ psi}, \quad \delta_{max} = 0.25 \text{ in}.$$
In this problem, we present the optimal values and corresponding parameter configurations obtained by each algorithm during the runs (Table 10), along with statistical metrics such as best value, mean value, median, and standard deviation (Table 11). The algorithms are ranked according to their mean value. Based on the experimental results, it is clear that the AGG-GCRA achieved the optimal value of 1.6702, with a success rate of 90% across 20 independent runs, and the ACTs remain reasonable. Moreover, its mean value ranks first, indicating the algorithm’s excellent optimization ability and stability. Overall, the AGG-GCRA significantly outperforms most of the comparison algorithms.

4.2.4. Gas Transmission Compressor Design (GTCD) Problem

Figure 9 illustrates the schematic diagram of the GTCD problem [63]. This problem focuses on minimizing the cost of the natural gas compressor system, with constraints related to fluid dynamics principles and engineering safety requirements. This problem demonstrates notable nonlinear characteristics and practical engineering significance, making it a standard test case for assessing the performance of optimization algorithms in the design of energy and process systems.
In this problem, the objective is to determine the optimal values of three parameters, $L$, $r$, and $D$, to minimize the cost value $C_1$. Specifically, $L$ represents the distance between two compressor stations; $r$ denotes the compression ratio at the compressor inlet, calculated as $P_1$ divided by $P_2$, where $P_1$ is the outlet pressure and $P_2$ is the inlet pressure (both measured in psi); and $D$ refers to the pipeline's internal diameter (in inches). The formula for calculating the minimum cost is shown in Equation (24):
$$\min f(x) = 8.61 \times 10^5 \, x_1^{1/2} x_2 x_3^{-2/3} (x_2^2 - 1)^{-1/2} - 7.6543 \times 10^8 \, x_1^{-1} + 3.69 \times 10^4 \, x_3 + 7.72 \times 10^8 \, x_1^{-1} x_2^{0.219},$$
where $x_1 = L$, $x_2 = r$, and $x_3 = D$, subject to:
$$10 \le x_1 \le 55, \quad 1.1 \le x_2 \le 2, \quad 10 \le x_3 \le 40.$$
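Because the GTCD problem carries only bound constraints, Equation (24) can be encoded directly as an anonymous function; purely for illustration, the sketch below also feeds it to MATLAB's fmincon as a local baseline (this is not the paper's method).
    % Sketch of the GTCD objective, Equation (24), with its bound constraints.
    gtcd = @(x) 8.61e5 * sqrt(x(1)) * x(2) * x(3)^(-2/3) / sqrt(x(2)^2 - 1) ...
           - 7.6543e8 / x(1) + 3.69e4 * x(3) + 7.72e8 * x(2)^0.219 / x(1);
    lb = [10, 1.1, 10]; ub = [55, 2, 40];    % bounds from the problem statement
    x0 = [30, 1.5, 25];                      % illustrative starting point
    x_loc = fmincon(gtcd, x0, [], [], [], [], lb, ub);   % local baseline only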
In this problem, we report the optimal values and corresponding parameter configurations achieved by each algorithm during the runs (Table 12), as well as statistical metrics such as best value, mean value, median, and standard deviation (Table 13). The algorithms are ranked based on the mean value. From the experimental results, it is evident that the AGG-GCRA achieved the optimal value of 1,677,759.276 and the ACTs remain reasonable. Moreover, its mean value ranks first, with a standard deviation of 0, indicating the algorithm’s excellent optimization ability and stability. Overall, the AGG-GCRA significantly outperforms most of the comparison algorithms.

4.2.5. Three-Bar Truss Design Problem

Figure 10 illustrates the schematic diagram of the three-bar truss design problem [64]. The goal is to minimize the total mass of the truss, with design variables generally consisting of the cross-sectional area or dimensions of each bar. These parameters must be properly adjusted within the stress and geometric constraints to optimize the structure. Although the problem is of modest size, its clear structure and well-defined constraints make it an ideal standard test case for evaluating and comparing the basic performance of optimization algorithms.
The formula for calculating the minimum weight is shown in Equation (25):
$$\min f(x) = (2\sqrt{2} x_1 + x_2) \times H$$
where $x_1 = A_1$ and $x_2 = A_2$, subject to:
$$g_1(x) = \frac{\sqrt{2} x_1 + x_2}{\sqrt{2} x_1^2 + 2 x_1 x_2} P - \sigma \le 0,$$
$$g_2(x) = \frac{x_2}{\sqrt{2} x_1^2 + 2 x_1 x_2} P - \sigma \le 0,$$
$$g_3(x) = \frac{1}{x_1 + \sqrt{2} x_2} P - \sigma \le 0,$$
$$0 \le x_1, x_2 \le 1,$$
where $H = 1000$ mm, $P = 2$ kN/cm², and $\sigma = 2$ kN/cm².
In this problem, we report the optimal values and corresponding parameter configurations achieved by each algorithm during the runs (Table 14), as well as statistical metrics such as best value, mean value, median, and standard deviation (Table 15). The algorithms are ranked based on the mean value. From the experimental results, it is evident that the AGG-GCRA achieved the optimal value of 263.8523 and the ACTs remain reasonable. Moreover, its mean value ranks first, with a standard deviation of 0, indicating the algorithm’s excellent optimization ability and stability. Overall, the AGG-GCRA significantly outperforms most of the comparison algorithms.

4.2.6. Multiple-Disk Clutch Brake Design Problem

Figure 11 presents the schematic diagram of the multiple-disk clutch brake design problem [65]. The objective of this problem is to reduce the weight of the clutch/brake system while adhering to constraints such as friction torque, contact pressure, and rotational speed limits. The problem features a practical engineering context and a complex system of physical constraints, making it an important reference case for evaluating the performance of optimization algorithms in high-complexity, multi-constraint environments.
In this problem, five parameters are considered as decision variables: the number of friction surfaces Z and the driving force F , as well as three dimensional parameters—the disk thickness t , outer radius r o , and inner radius r i —all measured in millimeters. The formula for calculating the minimum weight is shown in Equation (26):
$$\min f(x) = \pi (x_2^2 - x_1^2) x_3 (x_5 + 1) \rho$$
where $x_1 = r_i$, $x_2 = r_o$, $x_3 = t$, $x_4 = F$, and $x_5 = Z$, subject to:
$$g_1(x) = x_2 - x_1 - \Delta R \ge 0,$$
$$g_2(x) = L_{max} - (x_5 + 1)(x_3 + \delta) \ge 0,$$
$$g_3(x) = p_{max} - p_{rz} \ge 0,$$
$$g_4(x) = p_{max} V_{sr,max} - p_{rz} V_{sr} \ge 0,$$
$$g_5(x) = V_{sr,max} - V_{sr} \ge 0,$$
$$g_6(x) = M_h - s M_s \ge 0,$$
$$g_7(x) = T \ge 0,$$
$$g_8(x) = T_{max} - T \ge 0,$$
$$60 \le x_1 \le 80 \text{ mm}, \quad 90 \le x_2 \le 110 \text{ mm}, \quad 1.5 \le x_3 \le 3 \text{ mm}, \quad 0 \le x_4 \le 1000 \text{ N}, \quad 2 \le x_5 \le 9$$
where:
$$\rho = 0.0000078 \text{ kg/mm}^3, \quad p_{max} = 1 \text{ MPa}, \quad \mu = 0.5, \quad V_{sr,max} = 10 \text{ m/s}, \quad s = 1.5, \quad T_{max} = 15 \text{ s},$$
$$n = 250 \text{ rpm}, \quad M_f = 3 \text{ N·m}, \quad I_z = 55 \text{ kg·m}^2, \quad \delta = 0.5 \text{ mm}, \quad \Delta R = 20 \text{ mm}, \quad L_{max} = 30 \text{ mm},$$
$$M_h = \frac{2}{3} \mu x_4 x_5 \frac{x_2^3 - x_1^3}{x_2^2 - x_1^2} \text{ N·mm}, \quad \omega = \frac{\pi n}{30} \text{ rad/s}, \quad R_{sr} = \frac{2}{3} \frac{x_2^3 - x_1^3}{x_2^2 - x_1^2} \text{ mm}, \quad A = \pi (x_2^2 - x_1^2) \text{ mm}^2,$$
$$M_s = 40 \text{ N·m}, \quad p_{rz} = \frac{x_4}{A} \text{ N/mm}^2, \quad V_{sr} = \frac{\pi R_{sr} n}{30} \text{ mm/s}.$$
In this problem, we report the optimal values and corresponding parameter configurations achieved by each algorithm during the runs (Table 16), as well as statistical metrics such as best value, mean value, median, and standard deviation (Table 17). The algorithms are ranked based on the mean value. From the experimental results, it is evident that the AGG-GCRA achieved the optimal value of 0.2352 and the ACTs remain reasonable. Moreover, its mean value ranks first, with a standard deviation of 0, indicating the algorithm’s excellent optimization ability and stability. Overall, the AGG-GCRA significantly outperforms most of the comparison algorithms.

5. Discussion

The superiority of the proposed AGG-GCRA stems from several critical improvements. The introduction of a global optimum guidance term effectively steers search agents toward promising areas, reducing the risk of premature convergence. Meanwhile, adaptive parameter adjustment dynamically balances exploration and exploitation, enhancing both search efficiency and convergence speed. Moreover, the elite preservation mechanism retains high-quality solutions throughout the iterations, improving robustness. Coupled with a local perturbation strategy that blends Gaussian noise and Lévy flights, the algorithm maintains population diversity and performs both fine local searches and occasional large jumps to escape local optima, which boosts performance on complex multimodal problems.
From simulation results, the animal-inspired strategies in the GCRA and AGG-GCRA reveal valuable insights from real-world behavior. The greater cane rat’s alternating foraging strategy naturally balances exploration and exploitation. Our experiments show that mimicking this adaptive switching improves escaping local optima and convergence to global solutions. This dual-phase behavior, common in various species, may be evolutionarily advantageous for resource acquisition in uncertain environments, and their computational analogs are effective for optimization.
Compared to popular algorithms like PSO, WOA, and the original GCRA, the AGG-GCRA achieves better solution quality, stability, and consistency. The biological inspiration from the cane rat’s strategy offers a novel framework to switch intelligently between search phases, resulting in superior adaptive behavior. The key novelty lies in integrating adaptive global guidance, elite retention, and local perturbation, balancing exploration and exploitation to tackle a broad spectrum of challenging optimization problems.

6. Conclusions

This paper presents the AGG-GCRA that improves upon the original GCRA in convergence speed, solution accuracy, and stability by incorporating global optimum guidance, adaptive parameter tuning, elite preservation, and local perturbations. Extensive tests on 26 benchmark functions and six engineering problems show that the AGG-GCRA outperforms most competitors across multiple metrics, demonstrating excellent convergence and robustness.
Notably, the AGG-GCRA achieved zero standard deviation and a first-place mean ranking on five of the engineering cases, validating its strong global optimization ability and reproducibility. Furthermore, the AGG-GCRA maintains reasonable computational time, showing promise for practical applications. Overall, the AGG-GCRA is an efficient, stable, and broadly applicable intelligent optimization tool. However, its added mechanisms increase computational overhead, especially on high-dimensional problems or expensive objective functions. Adaptive parameter tuning also complicates configuration, requiring experience or trial runs to set well. Despite the improved local perturbation, premature convergence may still occur in very complex landscapes. Finally, the current design targets single-objective static problems and lacks extensions to dynamic or multi-objective cases, limiting its applicability in some scenarios.
In summary, the AGG-GCRA strikes a balance between enhanced optimization and computational cost, and users should weigh solution quality against resource constraints. Future work will focus on extending the AGG-GCRA to multi-objective and dynamic optimization problems; exploring large-scale and real-world engineering applications; and integrating recent advances in deep learning and meta-learning to further enhance its performance, adaptability, and broader applicability.

Author Contributions

Conceptualization, Y.C. and F.Z.; methodology, Z.T. and K.Z.; software, Y.C. and K.Z.; validation, Z.T. and Y.C.; formal analysis, Z.T.; investigation, Y.C. and K.Z.; data curation, Z.T.; writing—original draft preparation, Y.C.; writing—review and editing, A.Z. and F.Z.; supervision, F.Z. and A.Z.; project administration, F.Z. and A.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Human Resources and Social Security research project of Hebei Province (JRSHZ-2025-01126).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data used and/or analyzed during this research are openly and freely available; they can also be requested from the corresponding author.

Acknowledgments

We sincerely thank the anonymous reviewers for their very comprehensive and constructive comments.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A

Source and Implementation of the Compared Algorithms

Algorithms | Source/Implementation
GCRA | https://www.mathworks.com/matlabcentral/fileexchange/165241-greater-cane-rat-algorithm-gcra (Accessed on 10 July 2025)
BOA | https://ieeexplore.ieee.org/abstract/document/9641410 (Accessed on 10 July 2025)
WOA | https://www.mathworks.com/matlabcentral/fileexchange/55667-the-whale-optimization-algorithm (Accessed on 10 July 2025)
GOOSE | https://www.mathworks.com/matlabcentral/fileexchange/163071-goose-algorithm (Accessed on 10 July 2025)
AOA | https://www.researchgate.net/publication/348653867_The_Arithmetic_Optimization_Algorithm_AOA_-_Matlab_Code (Accessed on 10 July 2025)
PSO | https://www.mathworks.com/matlabcentral/fileexchange/52857-particle-swarm-optimization-pso (Accessed on 10 July 2025)
DE | https://www.mathworks.com/matlabcentral/fileexchange/18593-differential-evolution?s_tid=srchtitle (Accessed on 10 July 2025)
ACO | https://www.mathworks.com/matlabcentral/fileexchange/52859-ant-colony-optimization-aco (Accessed on 10 July 2025)
NSM-BO | https://www.mathworks.com/matlabcentral/fileexchange/180062-nsm-bo-algorithm?s_tid=srchtitle (Accessed on 10 July 2025)
PSOBOA | https://www.mathworks.com/matlabcentral/fileexchange/18593-differential-evolution?s_tid=srchtitle (Accessed on 10 July 2025)
FDB-AGSK | https://www.mathworks.com/matlabcentral/fileexchange/129154-fdb-agsk?s_tid=srchtitle (Accessed on 10 July 2025)
LHS-PLS | lhsdesign, from the MATLAB Statistics and Machine Learning Toolbox, generates the Latin hypercube sample points: https://www.mathworks.com/help/stats/lhsdesign.html (Accessed on 10 July 2025); fminsearch, MATLAB's built-in Nelder–Mead simplex method, performs the local search: https://www.mathworks.com/help/matlab/ref/fminsearch.html (Accessed on 10 July 2025)
Patternsearch | patternsearch, from the MATLAB Global Optimization Toolbox, implements the MADS (mesh adaptive direct search) algorithm: https://www.mathworks.com/help/gads/patternsearch.html (Accessed on 10 July 2025)
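For reference, a rough Python analogue of the LHS-PLS baseline described above can be built from SciPy's Latin hypercube sampler and Nelder–Mead minimizer; the sample size and seed here are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

def lhs_pls(f, lb, ub, n_samples=30, seed=0):
    """Latin hypercube sampling, then Nelder-Mead polishing of the best point."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    sample = qmc.LatinHypercube(d=lb.size, seed=seed).random(n_samples)
    pts = qmc.scale(sample, lb, ub)          # map [0, 1)^d onto the search box
    vals = np.apply_along_axis(f, 1, pts)
    res = minimize(f, pts[vals.argmin()], method="Nelder-Mead")
    return res.x, res.fun

# Example: the sphere function on [-100, 100]^5
print(lhs_pls(lambda x: float(np.sum(x ** 2)), [-100] * 5, [100] * 5))
```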

References

  1. Rajwar, K.; Deep, K.; Das, S. An exhaustive review of the metaheuristic algorithms for search and optimization: Taxonomy, applications, and open challenges. Artif. Intell. Rev. 2023, 56, 13187–13257. [Google Scholar] [CrossRef]
  2. Ikram, R.M.A.; Dehrashid, A.A.; Zhang, B.; Chen, Z.; Le, B.N.; Moayedi, H. A novel swarm intelligence: Cuckoo optimization algorithm (COA) and SailFish optimizer (SFO) in landslide susceptibility assessment. Stoch. Environ. Res. Risk Assess. 2023, 37, 1717–1743. [Google Scholar] [CrossRef]
  3. Jia, H.; Rao, H.; Wen, C.; Mirjalili, S. Crayfish optimization algorithm. Artif. Intell. Rev. 2023, 56 (Suppl. S2), 1919–1979. [Google Scholar] [CrossRef]
  4. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665. [Google Scholar] [CrossRef]
  5. Jia, H.; Sun, K.; Zhang, W.; Leng, X. An enhanced chimp optimization algorithm for continuous optimization domains. Complex Intell. Syst. 2022, 8, 65–82. [Google Scholar] [CrossRef]
  6. Li, H.; Wang, D.; Zhou, M.C.; Fan, Y.; Xia, Y. Multi-swarm co-evolution based hybrid intelligent optimization for bi-objective multi-workflow scheduling in the cloud. IEEE Trans. Parallel Distrib. Syst. 2021, 33, 2183–2197. [Google Scholar] [CrossRef]
  7. Shen, B.; Khishe, M.; Mirjalili, S. Evolving Marine Predators Algorithm by dynamic foraging strategy for real-world engineering optimization problems. Eng. Appl. Artif. Intell. 2023, 123, 106207. [Google Scholar] [CrossRef]
  8. Tao, S.; Liu, S.; Zhou, H.; Mao, X. Research on inventory sustainable development strategy for maximizing cost-effectiveness in supply chain. Sustainability 2024, 16, 4442. [Google Scholar] [CrossRef]
  9. Hao, H.; Yao, E.; Pan, L.; Chen, R.; Wang, Y.; Xiao, H. Exploring heterogeneous drivers and barriers in MaaS bundle subscriptions based on the willingness to shift to MaaS in one-trip scenarios. Transp. Res. Part A Policy Pract. 2025, 199, 104525. [Google Scholar] [CrossRef]
  10. Trojovský, P.; Dehghani, M. A new bio-inspired metaheuristic algorithm for solving optimization problems based on walruses behavior. Sci. Rep. 2023, 13, 8775. [Google Scholar] [CrossRef] [PubMed]
  11. Liu, S.; Jin, Z.; Lin, H.; Lu, H. An improve crested porcupine algorithm for UAV delivery path planning in challenging environments. Sci. Rep. 2024, 14, 20445. [Google Scholar] [CrossRef]
  12. Mei, M.; Zhang, S.; Ye, Z.; Wang, M.; Zhou, W.; Yang, J.; Zhang, J.; Yan, L.; Shen, J. A cooperative hybrid breeding swarm intelligence algorithm for feature selection. Pattern Recognit. 2026, 169, 111901. [Google Scholar] [CrossRef]
  13. Wang, Z.; Feng, P.; Lin, Y.; Cai, S.; Bian, Z.; Yan, J.; Zhu, X. Crowdvlm-r1: Expanding r1 ability to vision language model for crowd counting using fuzzy group relative policy reward. arXiv 2025, arXiv:2504.03724. [Google Scholar]
  14. Zhou, Y.; Xia, H.; Yu, D.; Cheng, J.; Li, J. Outlier detection method based on high-density iteration. Inf. Sci. 2024, 662, 120286. [Google Scholar] [CrossRef]
  15. Lu, W.; Wang, J.; Wang, T.; Zhang, K.; Jiang, X.; Zhao, H. Visual style prompt learning using diffusion models for blind face restoration. Pattern Recognit. 2025, 161, 111312. [Google Scholar] [CrossRef]
  16. Feng, P.; Peng, X. A note on Monge–Kantorovich problem. Stat. Probab. Lett. 2014, 84, 204–211. [Google Scholar] [CrossRef]
  17. Cai, H.; Wu, W.; Chai, B.; Zhang, Y. Relation-Fused Attention in Knowledge Graphs for Recommendation. In Proceedings of the International Conference on Neural Information Processing, Auckland, New Zealand, 2–6 December 2024; Springer Nature: Singapore, 2024; pp. 285–299. [Google Scholar]
  18. Beşkirli, A. An efficient binary Harris hawks optimization based on logical operators for wind turbine layout according to various wind scenarios. Eng. Sci. Technol. Int. J. 2025, 66, 102057. [Google Scholar] [CrossRef]
  19. Sun, L.; Shi, W.; Tian, X.; Li, J.; Zhao, B.; Wang, S.; Tan, J. A plane stress measurement method for CFRP material based on array LCR waves. NDT E Int. 2025, 151, 103318. [Google Scholar] [CrossRef]
  20. Ismail, W.N.; PP, F.R.; Ali, M.A.S. A meta-heuristic multi-objective optimization method for Alzheimer’s disease detection based on multi-modal data. Mathematics 2023, 11, 957. [Google Scholar] [CrossRef]
  21. Yuan, F.; Zuo, Z.; Jiang, Y.; Shu, W.; Tian, Z.; Ye, C.; Yang, J.; Mao, Z.; Huang, X.; Gu, S.; et al. AI-Driven Optimization of Blockchain Scalability, Security, and Privacy Protection. Algorithms 2025, 18, 263. [Google Scholar] [CrossRef]
  22. Beşkirli, A. Improved Chef-Based Optimization Algorithm with Chaos-Based Fitness Distance Balance for Frequency-Constrained Truss Structures. Gazi Univ. J. Sci. Part A Eng. Innov. 2025, 12, 377–402. [Google Scholar] [CrossRef]
  23. Yuan, F.; Lin, Z.; Tian, Z.; Chen, B.; Zhou, Q.; Yuan, C.; Sun, H.; Huang, Z. Bio-inspired hybrid path planning for efficient and smooth robotic navigation: F. Yuan et al. Int. J. Intell. Robot. Appl. 2025, 1–31. [Google Scholar] [CrossRef]
  24. Beşkirli, A.; Dağ, İ.; Kiran, M.S. A tree seed algorithm with multi-strategy for parameter estimation of solar photovoltaic models. Appl. Soft Comput. 2024, 167, 112220. [Google Scholar] [CrossRef]
  25. Bolotbekova, A.; Hakli, H.; Beskirli, A. Trip route optimization based on bus transit using genetic algorithm with different crossover techniques: A case study in Konya/Türkiye. Sci. Rep. 2025, 15, 2491. [Google Scholar] [CrossRef]
  26. Wu, D.; Rao, H.; Wen, C.; Jia, H.; Liu, Q.; Abualigah, L. Modified sand cat swarm optimization algorithm for solving constrained engineering optimization problems. Mathematics 2022, 10, 4350. [Google Scholar] [CrossRef]
  27. Beşkirli, A.; Dağ, İ. Parameter extraction for photovoltaic models with tree seed algorithm. Energy Rep. 2023, 9, 174–185. [Google Scholar] [CrossRef]
  28. Yao, Z.; Zhu, Q.; Zhang, Y.; Huang, H.; Luo, M. Minimizing Long-Term Energy Consumption in RIS-Assisted UAV-Enabled MEC Network. IEEE Internet Things J. 2025, 12, 20942–20958. [Google Scholar] [CrossRef]
  29. Beşkirli, A.; Özdemir, D.; Temurtaş, H. A comparison of modified tree–seed algorithm for high-dimensional numerical functions. Neural Comput. Appl. 2020, 32, 6877–6911. [Google Scholar] [CrossRef]
  30. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  31. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2007, 1, 28–39. [Google Scholar] [CrossRef]
  32. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  33. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  34. Hamad, R.K.; Rashid, T.A. GOOSE algorithm: A powerful optimization tool for real-world engineering challenges and beyond. Evol. Syst. 2024, 15, 1249–1274. [Google Scholar] [CrossRef]
  35. Cao, Z.; Wang, L.; Hei, X. A global-best guided phase based optimization algorithm for scalable optimization problems and its application. J. Comput. Sci. 2018, 25, 38–49. [Google Scholar] [CrossRef]
  36. Yang, H.; Tao, S.; Zhang, Z.; Cai, Z.; Gao, S. Spatial information sampling: Another feedback mechanism of realising adaptive parameter control in meta-heuristic algorithms. Int. J. Bio-Inspired Comput. 2022, 19, 48–58. [Google Scholar] [CrossRef]
  37. Wu, M.; Yang, D.; Liu, T. An improved particle swarm algorithm with the elite retain strategy for solving flexible jobshop scheduling problem. J. Phys. Conf. Ser. 2022, 2173, 012082. [Google Scholar] [CrossRef]
  38. Öztaş, T.; Tuş, A. A hybrid metaheuristic algorithm based on iterated local search for vehicle routing problem with simultaneous pickup and delivery. Expert Syst. Appl. 2022, 202, 117401. [Google Scholar] [CrossRef]
  39. Agushaka, J.O.; Ezugwu, A.E.; Saha, A.K.; Pal, J.; Abualigah, L.; Mirjalili, S. Greater cane rat algorithm (GCRA): A nature-inspired metaheuristic for optimization problems. Heliyon 2024, 10, e31629. [Google Scholar] [CrossRef]
  40. Vediramana Krishnan, H.G.; Chen, Y.T.; Shoham, S.; Gurfinkel, A. Global guidance for local generalization in model checking. Form. Methods Syst. Des. 2024, 63, 81–109. [Google Scholar] [CrossRef]
  41. Zhou, X.; Ma, H.; Gu, J.; Chen, H.; Deng, W. Parameter adaptation-based ant colony optimization with dynamic hybrid mechanism. Eng. Appl. Artif. Intell. 2022, 114, 105139. [Google Scholar] [CrossRef]
  42. Xie, L.; Han, T.; Zhou, H.; Zhang, Z.-R.; Han, B.; Tang, A. Tuna swarm optimization: A novel swarm-based metaheuristic algorithm for global optimization. Comput. Intell. Neurosci. 2021, 2021, 9210050. [Google Scholar] [CrossRef] [PubMed]
  43. Liu, J.; Xu, Y. T-Friedman test: A new statistical test for multiple comparison with an adjustable conservativeness measure. Int. J. Comput. Intell. Syst. 2022, 15, 29. [Google Scholar] [CrossRef]
  44. Kitani, M.; Murakami, H. One-sample location test based on the sign and Wilcoxon signed-rank tests. J. Stat. Comput. Simul. 2022, 92, 610–622. [Google Scholar] [CrossRef]
  45. Li, X.; Wu, Y.; Wei, M.; Guo, Y.; Yu, Z.; Wang, H.; Li, Z.; Fan, H. A novel index of functional connectivity: Phase lag based on Wilcoxon signed rank test. Cogn. Neurodynamics 2021, 15, 621–636. [Google Scholar] [CrossRef] [PubMed]
  46. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, M.A.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  47. Karaboğa, D.; Ökdem, S. A simple and global optimization algorithm for engineering problems: Differential evolution algorithm. Turk. J. Electr. Eng. Comput. Sci. 2004, 12, 53–60. [Google Scholar]
  48. Öztürk, H.T.; Kahraman, H.T. Metaheuristic search algorithms in frequency constrained truss problems: Four improved evolutionary algorithms, optimal solutions and stability analysis. Appl. Soft Comput. 2025, 171, 112854. [Google Scholar] [CrossRef]
  49. Khosla, T.; Verma, O.P. An adaptive hybrid particle swarm optimizer for constrained optimization problem. In Proceedings of the 2021 International Conference in Advances in Power, Signal, and Information Technology (APSIT), Bhubaneswar, India, 8–10 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–7. [Google Scholar]
  50. Bakır, H.; Duman, S.; Guvenc, U.; Kahraman, H.T. Improved adaptive gaining-sharing knowledge algorithm with FDB-based guiding mechanism for optimization of optimal reactive power flow problem. Electr. Eng. 2023, 105, 3121–3160. [Google Scholar] [CrossRef]
  51. Jamil, M.; Yang, X.S. A literature survey of benchmark functions for global optimisation problems. Int. J. Math. Model. Numer. Optim. 2013, 4, 150–194. [Google Scholar] [CrossRef]
  52. Zhang, K.; Yuan, F.; Jiang, Y.; Mao, Z.; Zuo, Z.; Peng, Y. A Particle Swarm Optimization-Guided Ivy Algorithm for Global Optimization Problems. Biomimetics 2025, 10, 342. [Google Scholar] [CrossRef]
  53. Cai, T.; Zhang, S.; Ye, Z.; Zhou, W.; Wang, M.; He, Q.; Chen, Z.; Bai, W. Cooperative metaheuristic algorithm for global optimization and engineering problems inspired by heterosis theory. Sci. Rep. 2024, 14, 28876. [Google Scholar] [CrossRef]
  54. Zhan, Z.H.; Shi, L.; Tan, K.C.; Zhang, J. A survey on evolutionary computation for complex continuous optimization. Artif. Intell. Rev. 2022, 55, 59–110. [Google Scholar] [CrossRef]
  55. Zhang, K.; Li, X.; Zhang, S.; Zhang, S. A Bio-Inspired Adaptive Probability IVYPSO Algorithm with Adaptive Strategy for Backpropagation Neural Network Optimization in Predicting High-Performance Concrete Strength. Biomimetics 2025, 10, 515. [Google Scholar] [CrossRef]
  56. Kwakye, B.D.; Li, Y.; Mohamed, H.H.; Baidoo, E.; Asenso, T.Q. Particle guided metaheuristic algorithm for global optimization and feature selection problems. Expert Syst. Appl. 2024, 248, 123362. [Google Scholar] [CrossRef]
  57. Beiranvand, V.; Hare, W.; Lucet, Y. Best practices for comparing optimization algorithms. Optim. Eng. 2017, 18, 815–848. [Google Scholar] [CrossRef]
  58. Piotrowski, A.P.; Napiorkowski, J.J.; Piotrowska, A.E. Metaheuristics should be tested on large benchmark set with various numbers of function evaluations. Swarm Evol. Comput. 2025, 92, 101807. [Google Scholar] [CrossRef]
  59. Piotrowski, A.P.; Napiorkowski, J.J.; Piotrowska, A.E. Choice of benchmark optimization problems does matter. Swarm Evol. Comput. 2023, 83, 101378. [Google Scholar] [CrossRef]
  60. Tilahun, S.L.; Matadi, M.B. Weight minimization of a speed reducer using prey predator algorithm. Int. J. Manuf. Mater. Mech. Eng. 2018, 8, 19–32. [Google Scholar] [CrossRef]
  61. Tzanetos, A.; Blondin, M. A qualitative systematic review of metaheuristics applied to tension/compression spring design problem: Current situation, recommendations, and research direction. Eng. Appl. Artif. Intell. 2023, 118, 105521. [Google Scholar] [CrossRef]
  62. Kamil, A.T.; Saleh, H.M.; Abd-Alla, I.H. A multi-swarm structure for particle swarm optimization: Solving the welded beam design problem. J. Phys. Conf. Ser. 2021, 1804, 012012. [Google Scholar] [CrossRef]
  63. Dai, L.; Zhang, L.; Chen, Z. GrS Algorithm for Solving Gas Transmission Compressor Design Problem. In Proceedings of the 2022 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), Niagara Falls, ON, Canada, 17–20 November 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 842–846. [Google Scholar]
  64. Yildirim, A.E.; Karci, A. Application of three bar truss problem among engineering design optimization problems using artificial atom algorithm. In Proceedings of the 2018 International Conference on Artificial Intelligence and Data Processing (IDAP), Malatya, Turkey, 28–30 September 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–5. [Google Scholar]
  65. Multiple-Disk Clutch Brake Design Problem. Available online: https://desdeo-problem.readthedocs.io/en/latest/problems/multiple_clutch_brakes.html (accessed on 12 July 2025).
Figure 1. Natural habitat of the greater cane rat.
Figure 2. 2D possible position vectors.
Figure 3. Flowchart of the GCRA.
Figure 4. Flowchart of the AGG-GCRA.
Figure 5. Convergence curves of 12 algorithms across 26 test functions.
Figure 6. Schematic diagram of weight minimization of a speed reducer problem.
Figure 7. Schematic diagram of the spring design problem.
Figure 8. Schematic diagram of the welded beam design problem.
Figure 9. Schematic diagram of the GTCD problem.
Figure 10. Schematic diagram of the three-bar truss design.
Figure 11. Schematic diagram of the multiple-disk clutch brake design problem.
Table 1. Parameter settings of 12 algorithms.
Algorithm | Parameters
ALL | Max iteration = 500, Agents = 30, Runs = 30, dim = 30
AGG-GCRA | $l=1$, $x=1$, $y=4$, $GR\_c=rand$, $GR\_c=0.8-0.6(l/T_{max})$
GCRA | $l=1$, $x=1$, $y=4$, $GR\_c=rand$
BOA | $p=0.6$, $power\_exponent=0.1$, $sensory\_modality=0.01$
WOA | $a$ = linear decrease from 2 to 0, $a_2$ = linear decrease from −1 to −2, $C$ = random [0, 2]
GOOSE | $SW_{min}=5$, $SW_{max}=25$, $coe\_min=0.17$
AOA | $C_1=2$, $C_2=6$, $C_3=1$, $C_4=2$, $u=0.9$, $l=0.1$
PSO | $w_{max}=0.9$, $w_{min}=0.2$, $c_1=2$, $c_2=2$
DE | $p=0.5$, $CR=0.9$
ACO | $tau=1$, $eta=1$, $alpha=1$, $beta=0.1$, $rho=0.2$
NSM-BO | $pxgm\_initial=0.03$, $scab=1.25$, $scsb=1.3$, $rcpp=0.0035$, $tsgs\_factor\_max=0.05$
PSOBOA | $p=0.6$, $power\_exponent=0.1$, $sensory\_modality=0.01$
FDB-AGSK | $KF\_pool=[0.1, 1.0, 0.5, 1.0]$, $KR\_pool=[0.2, 0.1, 0.9, 0.9]$
Table 2. Details of the 26 test functions.
s/n | Category | Function Name | Formula | $f^{*}_{min}$ | Range
F1 | Unimodal | Sphere | $f_1(x)=\sum_{i=1}^{dim} x_i^2$ | 0 | [−100, 100]
F2 | Unimodal | Schwefel 2.22 | $f_2(x)=\sum_{i=1}^{dim}|x_i|+\prod_{i=1}^{dim}|x_i|$ | 0 | [−10, 10]
F3 | Unimodal | Schwefel 1.2 | $f_3(x)=\sum_{i=1}^{dim}\big(\sum_{j=1}^{i}x_j\big)^2$ | 0 | [−100, 100]
F4 | Unimodal | Schwefel 2.21 | $f_4(x)=\max_i\{|x_i|\},\ 1\le i\le dim$ | 0 | [−100, 100]
F5 | Unimodal | Step | $f_5(x)=\sum_{i=1}^{dim}(x_i+0.5)^2$ | 0 | [−100, 100]
F6 | Unimodal | Quartic | $f_6(x)=\sum_{i=1}^{dim} i\,x_i^4+rand$ | 0 | [−1.28, 1.28]
F7 | Unimodal | Exponential | $f_7(x)=\sum_{i=1}^{dim}\big(e^{x_i}-x_i\big)$ | 0 | [−10, 10]
F8 | Unimodal | Sum power | $f_8(x)=\sum_{i=1}^{dim}|x_i|^{\,i+1}$ | 0 | [−1, 1]
F9 | Unimodal | Sum square | $f_9(x)=\sum_{i=1}^{dim} i\,x_i^2$ | 0 | [−10, 10]
F10 | Unimodal | Rosenbrock | $f_{10}(x)=\sum_{i=1}^{dim-1}\big[(x_i-1)^2+100(x_{i+1}-x_i^2)^2\big]$ | 0 | [−5, 10]
F11 | Unimodal | Zakharov | $f_{11}(x)=\sum_{i=1}^{dim}x_i^2+\big(\sum_{i=1}^{dim}0.5\,i\,x_i\big)^2+\big(\sum_{i=1}^{dim}0.5\,i\,x_i\big)^4$ | 0 | [−5, 10]
F12 | Unimodal | Trid | $f_{12}(x)=\sum_{i=1}^{dim}(x_i-1)^2-\sum_{i=2}^{dim}x_i x_{i-1}$ | 0 | [−5, 10]
F13 | Unimodal | Elliptic | $f_{13}(x)=\sum_{i=1}^{dim}(10^6)^{(i-1)/(dim-1)}x_i^2$ | 0 | [−100, 100]
F14 | Unimodal | Cigar | $f_{14}(x)=x_1^2+10^6\sum_{i=2}^{dim}x_i^2$ | 0 | [−100, 100]
F15 | Fixed | Rastrigin | $f_{15}(x)=\sum_{i=1}^{dim}\big[x_i^2-10\cos(2\pi x_i)+10\big]$ | 0 | [−5.12, 5.12]
F16 | Multimodal | NCRastrigin | $f_{16}(x)=\sum_{i=1}^{dim}\big[y_i^2-10\cos(2\pi y_i)+10\big]$, with $y_i=x_i$ if $|x_i|<0.5$ and $y_i=\mathrm{round}(2x_i)/2$ otherwise | 0 | [−5.12, 5.12]
F17 | Multimodal | Ackley | $f_{17}(x)=-20\exp\big(-0.2\sqrt{\tfrac{1}{dim}\sum_{i=1}^{dim}x_i^2}\big)-\exp\big(\tfrac{1}{dim}\sum_{i=1}^{dim}\cos(2\pi x_i)\big)+20+e$ | 0 | [−50, 50]
F18 | Multimodal | Griewank | $f_{18}(x)=\tfrac{1}{4000}\sum_{i=1}^{dim}x_i^2-\prod_{i=1}^{dim}\cos\big(\tfrac{x_i}{\sqrt{i}}\big)+1$ | 0 | [−600, 600]
F19 | Fixed | Alpine | $f_{19}(x)=\sum_{i=1}^{dim}|x_i\sin(x_i)+0.1x_i|$ | 0 | [−10, 10]
F20 | Multimodal | Penalized 1 | $f_{20}(x)=\tfrac{\pi}{dim}\big\{10\sin^2(\pi y_1)+\sum_{i=1}^{dim-1}(y_i-1)^2[1+10\sin^2(\pi y_{i+1})]+(y_{dim}-1)^2\big\}+\sum_{i=1}^{dim}u(x_i,10,100,4)$, where $y_i=1+\tfrac{x_i+1}{4}$ and $u(x_i,a,k,m)=k(x_i-a)^m$ if $x_i>a$; $0$ if $-a\le x_i\le a$; $k(-x_i-a)^m$ if $x_i<-a$ | 0 | [−100, 100]
F21 | Multimodal | Penalized 2 | $f_{21}(x)=0.1\big\{\sin^2(3\pi x_1)+\sum_{i=1}^{dim-1}(x_i-1)^2[1+\sin^2(3\pi x_{i+1})]+(x_{dim}-1)^2[1+\sin^2(2\pi x_{dim})]\big\}+\sum_{i=1}^{dim}u(x_i,5,100,4)$ | 0 | [−100, 100]
F22 | Fixed | Schwefel | $f_{22}(x)=-\sum_{i=1}^{dim}x_i\sin(\sqrt{|x_i|})$ | 0 | [−100, 100]
F23 | Multimodal | Lévy | $f_{23}(x)=\sin^2(3\pi x_1)+\sum_{i=1}^{dim-1}(x_i-1)^2[1+\sin^2(3\pi x_{i+1})]+(x_{dim}-1)^2[1+\sin^2(2\pi x_{dim})]$ | 0 | [−10, 10]
F24 | Multimodal | Weierstrass | $f_{24}(x)=\sum_{i=1}^{dim}\sum_{k=0}^{k_{max}}a^k\cos\big(2\pi b^k(x_i+0.5)\big)-dim\sum_{k=0}^{k_{max}}a^k\cos(\pi b^k)$, $a=0.5$, $b=3$, $k_{max}=20$ | 0 | [−0.5, 0.5]
F25 | Fixed | Salomon | $f_{25}(x)=1-\cos\big(2\pi\sqrt{\sum_{i=1}^{dim}x_i^2}\big)+0.1\sqrt{\sum_{i=1}^{dim}x_i^2}$ | 0 | [−100, 100]
F26 | Fixed | Bohachevsky | $f_{26}(x)=\sum_{i=1}^{dim}\big[3x_i^2-0.3\cos(3\pi x_i)\big]$ | 0 | [−10, 10]
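As a concrete reading of the notation in Table 2, a few of the benchmarks translate directly into code (a minimal sketch; dim is the length of the input vector):

```python
import numpy as np

def sphere(x):       # F1
    return float(np.sum(x ** 2))

def rastrigin(x):    # F15
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

def ackley(x):       # F17
    d = x.size
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)

# All three attain their minimum f* = 0 at the origin:
x0 = np.zeros(30)
print(sphere(x0), rastrigin(x0), ackley(x0))  # -> 0.0 0.0 0.0
```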
Table 3. The best fitness, average fitness, standard deviation, average computational time, and average ranking based on average fitness for the 14 algorithms across the 26 test functions.
Function/Metric | AGG-GCRA | GCRA | FDB-AGSK | PSOBOA | WOA | AOA | BOA | NSM-BO | PSO | Pattern Search | GOOSE | LHS-PLS | DE | ACO
F1 Best00000001.9399 × 10−61.090314229.30215.4495 × 10−320,167.226323,191.336536,233.1474
F1 Mean00000001.16852.211319416.825520.302826,421.768735,676.620642,226.9578
F1 Std00000004.01271.03582973.295984.16054064.55785066.42942908.8408
F1 Time(s)0.02470.01745.3131 × 10−31.6310 × 10−20.00961.9274 × 10−21.4206 × 10−20.22480.01470.29681.9789 × 10−24.21880.00180.5168
F1 Rank1111111891110121314
F2 Best00000001.5375 × 10−42.414638.38290.5331637.249978.101890.4445
F2 Mean0000002.1331 × 10−83.2326 × 10−34.417642.78172,788,028.52285,787,9731,302,772.221,468,718.161
F2 Std00000003.7095 × 10−31.26884.711815,269,285.3603,192,7746,926,041.502,437,197.031
F2 Time(s)2.6266 × 10−21.8048 × 10−25.9280 × 10−31.7815 × 10−20.01092.0043 × 10−21.4920 × 10−20.22560.01510.14872.0100 × 10−20.04411.2786 × 10−30.5427
F2 Rank1111117891013141112
F3 Best00141.6436069.6426000.383448.6593386.60740.7112349.8473436.6124349.3299
F3 Mean00426.21770445.8555004.176483.3512525.2252.3813416.7211802.3345491.1673
F3 Std00123.86020151.7218003.317321.6652150.53650.965658.5605158.774564.7969
F3 Time(s)0.36120.31235.3404 × 10−20.11235.7816 × 10−26.6523 × 10−20.11030.28046.1761 × 10−20.22646.8623 × 10−20.03772.9051 × 10−30.6103
F3 Rank1110111117813691412
F4 Best00000.252302.0495 × 10−81.41931.51952.2582 × 10−40.12366.08897.85016.181
F4 Mean003.944105.05302.6646 × 10−82.57331.95595.3745 × 10−40.27636.28948.53537.095
F4 Std003.834603.1573000.57150.22752.5594 × 10−40.2050.20280.35070.3177
F4 Time(s)2.66 × 10−20.01835.3560 × 10−31.6477 × 10−21.0274 × 10−21.9173 × 10−21.4509 × 10−20.22121.4931 × 10−20.14151.9799 × 10−20.0341.1742 × 10−30.5586
F4 Rank1110111159867121413
F5 Best005.3031 × 10−24.74442.0897 × 10−23.92694.52026.5048 × 10−80.88843.1374 × 10−73.7242 × 10−3280.2907248.8288294.9828
F5 Mean000.49566.21090.10355.30975.35591.1256 × 10−22.73734.7814 × 10−60.0101305.505343.4264409.294
F5 Std000.31050.40497.3053 × 10−20.55380.47323.7013 × 10−21.67666.5110 × 10−60.003925.880151.675341.9994
F5 Time(s)2.3279 × 10−20.01820.00521.5982 × 10−21.0183 × 10−21.8761 × 10−21.3310 × 10−20.22231.4530 × 10−20.140.01960.02431.1875 × 10−30.5453
F5 Rank1171169105834121314
F6 Best04.3524 × 10−73.9866 × 10−61.1820 × 10−54.0721 × 10−52.6791 × 10−56.1079 × 10−44.4489 × 10−24.06490.18790.052115.889320.508726.1925
F6 Mean04.1875 × 10−68.2589 × 10−42.2505 × 10−42.1290 × 10−36.8253 × 10−42.0296 × 10−30.114915.80920.30040.127822.086540.424547.8157
F6 Std04.8947 × 10−62.7612 × 10−31.5498 × 10−42.4085 × 10−35.8479 × 10−41.0059 × 10−33.8165 × 10−210.48080.13410.04124.158912.75448.1893
F6 Time(s)0.25420.17744.1547 × 10−20.08820.04555.4340 × 10−28.6195 × 10−20.25480.04980.13325.6226 × 10−20.02822.4504 × 10−30.594
F6 Rank1253746811109121314
F7 Best00000000000000
F7 Mean02.0136 × 10−7000000000000
F7 Std01.0679 × 10−6000000000000
F7 Time(s)3.6708 × 10−20.02675.1364 × 10−31.5876 × 10−29.7843 × 10−30.01850.01320.21711.4304 × 10−20.01341.9878 × 10−20.03380.00120.5516
F7 Rank114111111111111
F8 Best000000001.5873 × 10−202.8639 × 10−68.1776 × 10−38.5020 × 10−24.2330 × 10−2
F8 Mean000000000.228601.2779 × 10−51.5291 × 10−20.29750.1015
F8 Std000000000.223307.5423 × 10−65.2097 × 10−30.13653.3335 × 10−2
F8 Time(s)0.0330.02123.1039 × 10−20.07643.8073 × 10−24.4416 × 10−27.6715 × 10−20.25640.0460.11670.04940.03332.2327 × 10−30.5852
F8 Rank1111111113110111412
F9 Best00000002.9706 × 10−68.92971538.87940.21153276.2813128.33953870.0176
F9 Mean00000000.106827.3533650.77220.95613507.64654541.36185647.2092
F9 Std00000000.403411.53622527.02020.934186.6183648.8771509.1589
F9 Time(s)2.7421 × 10−21.8986 × 10−24.9784 × 10−31.5623 × 10−29.6762 × 10−31.8831 × 10−20.01310.22010.01430.13961.9342 × 10−20.03391.2213 × 10−30.5506
F9 Rank1111111810129111314
F10 Best0028.481728.915227.150928.678228.832117.4889312.08427.832926.2799443,156.311366,499.537776,195.6054
F10 Mean0028.71228.961727.738728.854528.9063125.1043824.28232922.737584.9944564,518.1141,112,877.961,294,452.428
F10 Std006.8177 × 10−22.1486 × 10−20.44358.5553 × 10−23.4005 × 10−288.9557268.27895332.335788.684148,774.305385,312.034186,334.3659
F10 Time(s)1.85671.78191.0809 × 10−22.7269 × 10−21.5192 × 10−22.3847 × 10−22.4665 × 10−20.23370.01970.14952.5653 × 10−20.02631.4139 × 10−30.5583
F10 Rank1147356910118121314
F11 Best00000004.1308 × 10−639.54781894.57967.0951 × 10−25708.83486581.64368724.161
F11 Mean00000003.1196 × 10−2120.51954949.6180.16767220.633213817.196515909.7282
F11 Std00000007.3861 × 10−263.2092266.46494.9575 × 10−21337.42394371.89782421.5444
F11 Time(s)0.09828.3352 × 10−20.03990.08590.0440.05360.08410.2614.8942 × 10−20.18935.4641 × 10−20.03162.4947 × 10−30.5855
F11 Rank1111111810119121314
F12 Best000.66950.98980.66670.67560.94260.674469.48980.34270.849212,142.084611,518.795934,151.5626
F12 Mean01.9982 × 10−40.7670.99590.66690.85050.97253.1722231.09782.4831.796922,449.902843,256.956144,770.0338
F12 Std02.6125 × 10−40.11993.1627 × 10−31.3003 × 10−40.10759.3781 × 10−31.8195139.32652.57331.15757977.630213097.00835430.0965
F12 Time(s)0.03170.02070.00521.5725 × 10−29.9140 × 10−31.8636 × 10−21.3556 × 10−20.22360.01460.13780.01970.02921.1668 × 10−30.5524
F12 Rank1247356101198121314
F13 Best25.21111078.4966000000008.3514 × 10−76.9049 × 10−40.00614.0105 × 10−2
F13 Mean137.762414,589,152.5000000001.2301 × 10−30.1082519.259593.7392
F13 Std109.355332,824,190.3000000002.9873 × 10−30.1624745.2069141.3684
F13 Time(s)3.8810 × 10−22.2246 × 10−23.5922 × 10−27.7428 × 10−23.9963 × 10−20.04960.07560.26824.5103 × 10−20.03195.0539 × 10−20.03592.6594 × 10−30.5848
F13 Rank1214111111119101311
F14 Best00000000000.38615.23955.72170.783
F14 Mean00000000001947.830427.07391077.7608705.1322
F14 Std00000000002008.264614.8971412.7496708.58
F14 Time(s)0.02610.01985.0980 × 10−31.5748 × 10−20.00970.01871.3206 × 10−20.23020.01470.03571.9516 × 10−20.02930.00120.5495
F14 Rank111111111114111312
F15 Best0000000004.5099 × 10−73.2750 × 10−71.8192 × 10−24.7326 × 10−30.6389
F15 Mean0000000008.5054 × 10−66.3075 × 10−30.146714.656629.034
F15 Std0000000007.6659 × 10−61.0058 × 10−20.130228.866643.1852
F15 Time(s)2.9519 × 10−20.023.6948 × 10−20.07974.1501 × 10−20.05077.7772 × 10−20.25964.5894 × 10−20.03585.1558 × 10−20.02972.3669 × 10−30.6228
F15 Rank1111111111011121314
F16 Best00000003.983391.109970.3276109.4377251.0689297.7145297.6154
F16 Mean0000021.447744.86489.0016169.7599122.5299161.6984259.8313337.4773344.6979
F16 Std0000054.814882.79023.251132.525932.944941.94446.557917.817217.3044
F16 Time(s)3.6488 × 10−22.1836 × 10−26.5039 × 10−32.4299 × 10−20.01182.1001 × 10−20.02560.23312.1004 × 10−20.14212.5895 × 10−20.02451.4741 × 10−30.5664
F16 Rank1111178611910121314
F17 Best00000004.028770.474421155.0034258.3868257.7023248.8714
F17 Mean0001.0507 × 10−811.102849.2463115.9559.4965176.793542.1495206.8601281.0974311.2447305.4808
F17 Std0003.6072 × 10−843.310360.897477.72552.876838.484918.07930.80615.632526.680218.2432
F17 Time(s)0.03010.02216.8767 × 10−33.2395 × 10−21.3646 × 10−22.3441 × 10−23.1329 × 10−20.23072.3152 × 10−20.13180.02760.03520.00160.5648
F17 Rank1114689510711121413
F18 Best0000001.9024 × 10−81.9620 × 10−31.2117.74825.2772 × 10−217.338315.037716.5686
F18 Mean0000002.7516 × 10−80.7562.432812.56226.395517.59216.228117.2635
F18 Std00000000.71390.48352.84887.44170.21210.66020.2797
F18 Time(s)3.9665 × 10−22.4574 × 10−26.8717 × 10−32.0754 × 10−20.01222.0866 × 10−21.9857 × 10−20.23990.02160.15090.02620.0261.6078 × 10−30.5749
F18 Rank1111117891110141213
F19 Best00000001.7373 × 10−25.5459 × 10−244.49949.7781 × 10−4251.4396217.33276.6861
F19 Mean006.1988 × 10−204.6423 × 10−35.3737 × 10−300.25730.1391163.32255.7776277.7318316.6874371.6827
F19 Std000.23602.5427 × 10−22.9433 × 10−200.21655.3178 × 10−273.5493216.776222.108441.353434.9386
F19 Time(s)0.16230.1150.01162.7403 × 10−21.6550 × 10−20.0272.7819 × 10−20.23992.6147 × 10−20.15362.9002 × 10−20.02670.00170.5777
F19 Rank1171561981011121314
F20 Best00000004.4821 × 10−52.07521.21474.731416.858535.617635.7455
F20 Mean04.1074 × 10−7000.6393003.2476 × 10−35.65276.35397.231421.274246.273844.1548
F20 Std02.1893 × 10−6003.501501.2303 × 10−86.5164 × 10−32.29165.28922.03693.35034.02182.973
F20 Time(s)0.02310.01986.4092 × 10−31.8716 × 10−21.1262 × 10−22.0097 × 10−22.3484 × 10−20.23251.9482 × 10−20.14952.4172 × 10−20.02551.4180 × 10−30.5768
F20 Rank1611815791011121413
F21 Best002.1148 × 10−30.50793.0826 × 10−30.35870.231908.6050 × 10−301.75446.56478.275910.3408
F21 Mean02.8567 × 10−72.9429 × 10−20.94241.1607 × 10−20.60780.47241.7299 × 10−24.2368 × 10−21.3318 × 10−63.94868.807213.458212.6119
F21 Std04.3539 × 10−73.0725 × 10−20.17098.9169 × 10−30.19620.13096.1369 × 10−23.1413 × 10−22.1931 × 10−61.17651.42822.11391.1492
F21 Time(s)1.94231.75160.09650.19950.10.10840.19710.33180.10640.29210.1110.04790.00450.6564
F21 Rank1261049857311121413
F22 Best003.5834 × 10−32.15660.0481.66182.0051.3087 × 10−80.20140.0112.6443 × 10−314.32539.059110.2405
F22 Mean06.6016 × 10−70.10332.84890.13792.68762.75116.2585 × 10−30.57281.0989 × 10−21.2149 × 10−217.638213.67915.704
F22 Std01.2625 × 10−60.10660.19577.5121 × 10−20.39060.31661.2020 × 10−20.20492.0861 × 10−61.4515 × 10−22.48041.86251.5966
F22 Time(s)1.79351.58980.09890.19890.10180.11090.20190.33740.10860.2940.11330.03710.00460.6562
F22 Rank1261179103845141213
F23 Best006.2351 × 10−46.11771.2115 × 10−26.05416.47790.01240.69151.2395 × 10−20.1582.8478.160712.1094
F23 Mean01.9215 × 10−40.365615.99040.367610.212812.63040.05445.66362.3468 × 10−20.69444.137514.71514.6409
F23 Std03.3954 × 10−40.46084.00570.44471.89232.27267.6478 × 10−24.10011.7110 × 10−20.57121.24982.5991.6099
F23 Time(s)1.35451.27041.2930 × 10−20.03120.01822.6490 × 10−20.03360.23770.02430.15430.02950.02711.6134 × 10−30.5618
F23 Rank1251461011493781312
F24 Best000000002.3174 × 10−41.2069034.761814.306739.3755
F24 Mean0007.8268 × 10−505.76241.65884.9605 × 10−44.85831.7797.817240.018318.40842.6382
F24 Std0001.7532 × 10−408.33533.08442.6863 × 10−33.86530.66166.43643.33852.28421.0708
F24 Time(s)1.55641.2561.0732.08721.07721.07342.11381.36391.1061.70331.08810.19893.8840 × 10−21.5972
F24 Rank1115110769811131214
F25 Best0009.9496 × 10−209.9496 × 10−20.39812.48740.895536.49580.895597.7456108.255135.0328
F25 Mean000.15269.9497 × 10−20.16589.9496 × 10−20.84595.41631.71657.76531.5654129.9475145.4765170.0775
F25 Std000.14252.3512 × 10−60.18357.3556 × 10−70.15131.7520.361929.14430.407222.3522.6312.2997
F25 Time(s)3.5189 × 10−22.0889 × 10−26.0106 × 10−31.8021 × 10−21.0965 × 10−21.9399 × 10−21.5407 × 10−20.22931.5514 × 10−20.14232.0651 × 10−20.02711.2769 × 10−30.5573
F25 Rank1154637109118121314
F26 Best00000001.5132 × 10−616.414963.89091.4907135.6216186.4904253.5763
F26 Mean00000000.535524.066397.26974.6319234.5143276.4403323.3349
F26 Std00000001.21135.020234.18462.056956.605138.856525.8319
F26 Time(s)2.4409 × 10−21.9670 × 10−27.4357 × 10−32.0204 × 10−21.3020 × 10−20.02142.1382 × 10−20.23122.6075 × 10−20.14672.9143 × 10−20.0331.7073 × 10−30.5669
F26 Rank1111111810119121314
Paired rank (+/=/−) | n/a | 8/18/0 | 15/10/1 | 11/14/1 | 10/15/1 | 12/13/1 | 13/12/1 | 21/4/1 | 22/3/1 | 24/1/1 | 24/1/1 | 24/1/1 | 23/2/1 | 25/1/0
Avg. rank | 1.42 | 2.38 | 3.23 | 3.54 | 3.69 | 3.81 | 4.73 | 6.00 | 8.08 | 8.38 | 8.92 | 11.38 | 12.58 | 12.77
Overall rank | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14
Table 4. Friedman ranking scores of 14 algorithms.
Algorithms | Friedman Scores | Rank
AGG-GCRA | 1.7308 | 1
GCRA | 3.3077 | 2
BOA | 7.7692 | 8
WOA | 5.1154 | 4
GOOSE | 8.8846 | 11
AOA | 5.2308 | 5
PSO | 8.8462 | 10
DE | 12.8462 | 13
ACO | 13.0769 | 14
NSM-BO | 6.6538 | 7
PSOBOA | 6.5385 | 6
FDB-AGSK | 4.9231 | 3
LHS-PLS | 11.6154 | 12
Patternsearch | 8.4615 | 9
Table 5. Wilcoxon signed-rank test results for the AGG-GCRA compared to the other 13 algorithms.
Algorithms | Wilcoxon Test p-Value | Significant
AGG-GCRA vs. GCRA | 1.3183 × 10−4 | Yes
AGG-GCRA vs. BOA | 9.6755 × 10−5 | Yes
AGG-GCRA vs. WOA | 1.5487 × 10−3 | Yes
AGG-GCRA vs. GOOSE | 7.0443 × 10−5 | Yes
AGG-GCRA vs. AOA | 2.9531 × 10−4 | Yes
AGG-GCRA vs. PSO | 7.0443 × 10−5 | Yes
AGG-GCRA vs. DE | 9.3386 × 10−6 | Yes
AGG-GCRA vs. ACO | 9.3386 × 10−6 | Yes
AGG-GCRA vs. NSM-BO | 4.6925 × 10−4 | Yes
AGG-GCRA vs. PSOBOA | 3.7291 × 10−4 | Yes
AGG-GCRA vs. FDB-AGSK | 2.3556 × 10−3 | Yes
AGG-GCRA vs. LHS-PLS | 1.8726 × 10−5 | Yes
AGG-GCRA vs. Patternsearch | 4.1000 × 10−5 | Yes
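Such tests are straightforward to reproduce: given one row of per-function results per algorithm, SciPy's friedmanchisquare and wilcoxon implement the two tests used here. The data below are random placeholders, not the paper's measured results.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Placeholder data: one score per benchmark function for each algorithm
# (26 benchmarks, as in Table 3); replace with the measured fitness values.
rng = np.random.default_rng(1)
scores = {name: rng.random(26) for name in ("AGG-GCRA", "GCRA", "WOA")}

stat, p = friedmanchisquare(*scores.values())         # omnibus ranking test
print(f"Friedman: chi2 = {stat:.3f}, p = {p:.4f}")

w, p = wilcoxon(scores["AGG-GCRA"], scores["GCRA"])   # pairwise follow-up
print(f"Wilcoxon AGG-GCRA vs. GCRA: W = {w:.1f}, p = {p:.4f}")
```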
Table 6. Optimal values, corresponding design variables, and ACTs of 12 algorithms on the weight minimization of a speed reducer problem.
Algorithm | Optimal Value | X1 | X2 | X3 | X4 | X5 | X6 | X7 | ACTs
AGG-GCRA | 2994.2343 | 3.499 | 0.7 | 17 | 7.3 | 7.7152 | 3.3505 | 5.2867 | 0.2735
GCRA | 2994.2343 | 3.499 | 0.7 | 17 | 7.3 | 7.7152 | 3.3505 | 5.2867 | 0.2276
BOA | 3129.3275 | 3.6 | 0.7 | 17 | 8.0137 | 8.056 | 3.3529 | 5.4114 | 0.3277
WOA | 3008.3606 | 3.4992 | 0.7 | 17 | 7.3 | 8.067 | 3.3752 | 5.2868 | 0.166
GOOSE | 2998.1153 | 3.5007 | 0.7 | 17 | 7.4365 | 7.8026 | 3.3516 | 5.2867 | 0.1863
AOA | 3007.0453 | 3.4921 | 0.7 | 17 | 7.3 | 7.7122 | 3.3583 | 5.2867 | 0.1636
PSO | 2994.2343 | 3.499 | 0.7 | 17 | 7.3 | 7.7152 | 3.3505 | 5.2867 | 0.1949
DE | 3028.0132 | 3.506 | 0.7 | 17 | 8.3 | 8.2652 | 3.3825 | 5.2897 | 0.0066
ACO | 3148.4498 | 3.56 | 0.7119 | 17.0492 | 8.1819 | 7.9502 | 3.5407 | 5.2887 | 0.2844
NSM-BO | 2994.2343 | 3.499 | 0.7 | 17 | 7.3 | 7.7152 | 3.3505 | 5.2867 | 0.4035
PSOBOA | 3067.0392 | 4.5055 | 1.7 | 18 | 8.9677 | 9.0821 | 4.3938 | 6.3556 | 0.3273
FDB-AGSK | 2994.2364 | 3.499 | 0.7 | 17 | 7.3 | 7.7152 | 3.3505 | 5.2867 | 0.1024
Table 7. Optimal values, mean, median, standard deviation, and ranking of 12 algorithms on the weight minimization of a speed reducer problem.
Algorithm | Best | Mean | Worst | Median | Std | Rank
AGG-GCRA | 2994.2343 | 2994.2343 | 2994.2343 | 2994.2343 | 0 | 1
GCRA | 2994.2343 | 3081.6584 | 3187.6037 | 3068.9192 | 47.2691 | 6
BOA | 3129.3275 | 5144.0961 | 24,197.8576 | 3796.02 | 4608.6811 | 11
WOA | 3008.3606 | 3330.8251 | 5331.2281 | 3128.3262 | 527.3161 | 9
GOOSE | 2998.1153 | 3033.1401 | 3500.4813 | 3007.9772 | 110.0764 | 5
AOA | 3007.0453 | 5720.7911 | 13,313.9628 | 4494.0398 | 2918.6664 | 12
PSO | 2994.2343 | 3001.0945 | 3033.7004 | 2994.2343 | 14.3376 | 4
DE | 3028.0132 | 3112.3207 | 3399.8135 | 3085.2303 | 87.0902 | 7
ACO | 3148.4498 | 3281.7124 | 3493.0759 | 3270.6093 | 92.2613 | 8
NSM-BO | 2994.2343 | 2994.2343 | 2994.2343 | 2994.2343 | 0 | 1
PSOBOA | 3067.0392 | 5036.9274 | 18,460.9648 | 3296.2795 | 4299.0119 | 10
FDB-AGSK | 2994.2364 | 2994.5136 | 2996.9713 | 2994.3043 | 0.6662 | 3
Table 8. Optimal values, corresponding design variables, and ACTs of 12 algorithms on the spring design problem.
Algorithm | Optimal Value | d | D | N | ACTs
AGG-GCRA | 0.0127 | 0.0521 | 0.3671 | 10.7045 | 0.239
GCRA | 0.0128 | 0.05 | 0.3172 | 14.1083 | 0.2164
BOA | 0.0134 | 0.0521 | 0.3671 | 10.7045 | 0.2876
WOA | 0.0127 | 0.0521 | 0.3671 | 10.7045 | 0.1451
GOOSE | 0.0127 | 0.0521 | 0.3671 | 10.7045 | 0.152
AOA | 0.0127 | 0.0521 | 0.3671 | 10.7045 | 0.141
PSO | 0.0127 | 0.0521 | 0.3671 | 10.7045 | 0.1734
DE | 0.0132 | 0.05 | 0.3105 | 15 | 0.0058
ACO | 0.0138 | 0.0543 | 0.4041 | 9.528 | 0.1868
NSM-BO | 0.0127 | 0.0521 | 0.3671 | 10.7045 | 0.3903
PSOBOA | 0.0168 | 0.1259 | 1.3967 | 10.7521 | 0.2882
FDB-AGSK | 0.0127 | 0.0521 | 0.3671 | 10.7045 | 0.0886
Table 9. Optimal values, mean, median, standard deviation, and ranking of 12 algorithms on the spring design problem.
Algorithm | Best | Mean | Worst | Median | Std | Rank
AGG-GCRA | 0.0127 | 0.0127 | 0.0127 | 0.0127 | 0 | 1
GCRA | 0.0128 | 0.0133 | 0.0156 | 0.0132 | 0.0006 | 6
BOA | 0.0134 | 212.8719 | 2251.7441 | 0.0225 | 573.477 | 10
WOA | 0.0127 | 0.0132 | 0.0145 | 0.0132 | 0.0005 | 5
GOOSE | 0.0127 | 0.013 | 0.0141 | 0.0128 | 0.0004 | 4
AOA | 0.0127 | 1387.6809 | 27,743.633 | 0.0135 | 6203.5478 | 11
PSO | 0.0127 | 0.0134 | 0.0156 | 0.0131 | 0.0009 | 7
DE | 0.0132 | 0.0185 | 0.0344 | 0.0174 | 0.005 | 9
ACO | 0.0138 | 0.018 | 0.0237 | 0.0175 | 0.0029 | 8
NSM-BO | 0.0127 | 0.0127 | 0.0127 | 0.0127 | 0 | 1
PSOBOA | 0.0168 | 26,132.2492 | 82,511.1867 | 17,171.2997 | 30,481.7 | 12
FDB-AGSK | 0.0127 | 0.0127 | 0.0128 | 0.0127 | 0 | 1
Table 10. Optimal values, corresponding design variables, and ACTs of 12 algorithms on the welded beam design problem.
Algorithm | Optimal Value | h | l | t | b | ACTs
AGG-GCRA | 1.6702 | 0.1988 | 3.3373 | 9.192 | 0.1988 | 0.3286
GCRA | 1.6702 | 0.1988 | 3.3373 | 9.192 | 0.1988 | 0.2971
BOA | 2.2363 | 0.125 | 6.0015 | 8.3417 | 0.2657 | 0.3357
WOA | 1.788 | 0.2012 | 3.4429 | 8.6622 | 0.2248 | 0.1704
GOOSE | 1.7436 | 0.2079 | 3.2831 | 8.8037 | 0.2168 | 0.1784
AOA | 1.7853 | 0.1356 | 5.1079 | 9.184 | 0.1992 | 0.1663
PSO | 1.6702 | 0.1988 | 3.3373 | 9.192 | 0.1988 | 0.2009
DE | 2.0497 | 0.2083 | 4.836 | 8.6484 | 0.2319 | 0.0067
ACO | 1.8356 | 0.1757 | 3.768 | 9.8449 | 0.2028 | 0.2329
NSM-BO | 1.6702 | 0.1988 | 3.3373 | 9.192 | 0.1988 | 0.4244
PSOBOA | 2.2385 | 0.875 | 5.7481 | 7.3978 | 1.1241 | 0.3362
FDB-AGSK | 1.6703 | 0.1989 | 3.3368 | 9.192 | 0.1988 | 0.0987
Table 11. Optimal values, mean, median, standard deviation, and ranking of 12 algorithms on the welded beam design problem.
Algorithm | Best | Mean | Worst | Median | Std | Rank
AGG-GCRA | 1.6702 | 1.6718 | 1.7012 | 1.6702 | 0.0511 | 1
GCRA | 1.6702 | 1.7347 | 1.782 | 1.7345 | 0.0216 | 5
BOA | 2.2363 | 2.8197 | 3.4998 | 2.7493 | 0.3204 | 11
WOA | 1.788 | 2.5983 | 4.7071 | 2.3527 | 0.7257 | 8
GOOSE | 1.7436 | 2.028 | 2.6252 | 2.0319 | 0.2338 | 6
AOA | 1.7853 | 3.0729 | 3.6827 | 3.1841 | 0.5627 | 12
PSO | 1.6702 | 1.7068 | 1.9185 | 1.6723 | 0.0681 | 4
DE | 2.0497 | 2.6444 | 3.9293 | 2.5199 | 0.4794 | 10
ACO | 1.8356 | 2.3162 | 2.8637 | 2.311 | 0.256 | 7
NSM-BO | 1.6702 | 1.6849 | 1.8167 | 1.6702 | 0.0451 | 3
PSOBOA | 2.2385 | 2.6202 | 3.2595 | 2.6091 | 0.2433 | 9
FDB-AGSK | 1.6703 | 1.6723 | 1.6802 | 1.6714 | 0.0025 | 2
Table 12. Optimal values, corresponding design variables, and ACTs of 12 algorithms on the GTCD problem.
Algorithm | Optimal Value | L | r | D | ACTs
AGG-GCRA | 1,677,759.276 | 24.469 | 1.1587 | 20 | 0.1985
GCRA | 1,677,759.276 | 24.469 | 1.1587 | 20 | 0.1644
BOA | 1,677,905.674 | 23.9545 | 1.1382 | 20 | 0.169
WOA | 1,677,759.276 | 24.4691 | 1.1587 | 20 | 0.085
GOOSE | 1,677,759.276 | 24.469 | 1.1587 | 20 | 0.0939
AOA | 1,677,762.951 | 24.3611 | 1.1576 | 20 | 0.0828
PSO | 1,677,759.276 | 24.469 | 1.1587 | 20 | 0.115
DE | 1,678,411.336 | 23.5748 | 1.1109 | 20 | 0.0038
ACO | 1,911,487.417 | 26.3278 | 1.2053 | 22.5931 | 0.1507
NSM-BO | 1,677,759.276 | 24.469 | 1.1587 | 20 | 0.3045
PSOBOA | 1,685,732.677 | 21 | 1.1129 | 21 | 0.1701
FDB-AGSK | 1,677,759.276 | 24.469 | 1.1587 | 20 | 0.0641
Table 13. Optimal values, mean, median, standard deviation, and ranking of 12 algorithms on the GTCD problem.
Algorithm | Best | Mean | Worst | Median | Std | Rank
AGG-GCRA | 1,677,759.276 | 1,677,759.276 | 1,677,759.276 | 1,677,759.276 | 0 | 1
GCRA | 1,677,759.276 | 1,677,759.276 | 1,677,759.276 | 1,677,759.276 | 0 | 1
BOA | 1,677,905.674 | 1,684,415.954 | 1,685,732.772 | 1,685,732.68 | 2746.038 | 9
WOA | 1,677,759.276 | 1,677,759.276 | 1,677,759.28 | 1,677,759.276 | 0.001 | 1
GOOSE | 1,677,759.276 | 1,717,117.313 | 2,342,300.295 | 1,681,853.006 | 147,350.6514 | 1
AOA | 1,677,762.951 | 1,705,262.898 | 1,900,901.635 | 1,685,744.13 | 57,901.2131 | 8
PSO | 1,677,759.276 | 1,728,464.905 | 2,675,925.065 | 1,677,759.276 | 223,022.4058 | 1
DE | 1,678,411.336 | 1,844,361.336 | 2,771,701.929 | 1,738,970.151 | 273,878.2691 | 10
ACO | 1,911,487.417 | 2,109,740.559 | 2,392,295.085 | 2,082,094.482 | 136,205.472 | 12
NSM-BO | 1,677,759.276 | 1,677,759.276 | 1,677,759.276 | 1,677,759.276 | 0 | 1
PSOBOA | 1,685,732.677 | 1,685,733.183 | 1,685,736.165 | 1,685,732.859 | 0.8409 | 11
FDB-AGSK | 1,677,759.276 | 1,677,759.276 | 1,677,759.276 | 1,677,759.276 | 0 | 1
Table 14. Optimal values, corresponding design variables, and ACTs of 12 algorithms on the three-bar truss design problem.
Algorithm | Optimal Value | X1 | X2 | ACTs
AGG-GCRA | 263.8523 | 0.7884 | 0.4081 | 0.2554
GCRA | 263.8537 | 0.7884 | 0.4081 | 0.218
BOA | 265.0689 | 0.7843 | 0.4202 | 0.2835
WOA | 265.8775 | 0.789 | 0.4064 | 0.1926
GOOSE | 263.8524 | 0.7884 | 0.4081 | 0.4691
AOA | 264.4814 | 0.7882 | 0.4087 | 0.1346
PSO | 263.8525 | 0.7884 | 0.4081 | 0.1695
DE | 264.965 | 0.788 | 0.4096 | 0.0056
ACO | 264.2652 | 0.7858 | 0.4153 | 0.1618
NSM-BO | 263.8523 | 0.7884 | 0.4081 | 0.3629
PSOBOA | 267.1286 | 1.7813 | 1.1281 | 0.282
FDB-AGSK | 263.8523 | 0.7884 | 0.4081 | 0.0849
Table 15. Optimal values, mean, median, standard deviation, and ranking of 12 algorithms on the three-bar truss design problem.
Algorithm | Best | Mean | Worst | Median | Std | Rank
AGG-GCRA | 263.8523 | 263.8523 | 263.8523 | 263.8523 | 0 | 1
GCRA | 263.8523 | 263.8523 | 263.8537 | 263.8523 | 0 | 1
BOA | 263.876 | 264.258 | 265.0689 | 264.1948 | 0.3369 | 10
WOA | 263.8526 | 264.422 | 265.8775 | 264.0941 | 0.6677 | 11
GOOSE | 263.8523 | 263.8524 | 263.8524 | 263.8523 | 0 | 1
AOA | 263.8524 | 263.987 | 264.4814 | 263.8798 | 0.2106 | 12
PSO | 263.8523 | 263.8524 | 263.8525 | 263.8523 | 0 | 1
DE | 263.8605 | 264.0924 | 264.965 | 264.045 | 0.2505 | 9
ACO | 263.8651 | 263.9623 | 264.2652 | 263.9431 | 0.0979 | 8
NSM-BO | 263.8523 | 263.8523 | 263.8523 | 263.8523 | 0 | 1
PSOBOA | 263.9719 | 264.8713 | 267.1286 | 264.5127 | 0.8816 | 7
FDB-AGSK | 263.8523 | 263.8523 | 263.8523 | 263.8523 | 0 | 1
Table 16. Optimal values, corresponding design variables, and ACTs of 12 algorithms on the multiple-disk clutch brake design problem.
Algorithm | Optimal Value | X1 | X2 | X3 | X4 | X5 | ACTs
AGG-GCRA | 0.2352 | 70 | 90 | 1 | 1000 | 2 | 0.3157
GCRA | 0.2352 | 70 | 90 | 1 | 1000 | 2 | 0.2874
BOA | 0.2523 | 68.3183 | 90 | 1 | 247.3377 | 2 | 0.7765
WOA | 0.2352 | 70 | 90 | 1 | 1000 | 2 | 0.1887
GOOSE | 0.2352 | 70 | 90 | 1 | 1000 | 2 | 0.1963
AOA | 0.2356 | 69.969 | 90 | 1 | 469.3174 | 2 | 0.1883
PSO | 0.2352 | 70 | 90 | 1 | 1000 | 2 | 0.2167
DE | 0.2363 | 69.8969 | 90 | 1 | 748.143 | 2 | 0.0074
ACO | 0.2444 | 70.2503 | 90.3606 | 1.0113 | 4.3049 | 2.053 | 0.2714
NSM-BO | 0.2352 | 70 | 90 | 1 | 1000 | 2 | 0.4129
PSOBOA | 0.2368 | 68.8458 | 91 | 0 | 48.7086 | 1 | 0.3751
FDB-AGSK | 0.2352 | 70 | 90 | 1 | 1000 | 2 | 0.1074
Table 17. Optimal values, mean, median, standard deviation, and ranking of 12 algorithms on the multiple-disk clutch brake design problem.
Algorithm | Best | Mean | Worst | Median | Std | Rank
AGG-GCRA | 0.2352 | 0.2352 | 0.2352 | 0.2352 | 0 | 1
GCRA | 0.2352 | 0.2401 | 0.2617 | 0.2352 | 0.0024 | 7
BOA | 0.2523 | 0.3163 | 0.3308 | 0.3308 | 0.0231 | 11
WOA | 0.2352 | 0.2352 | 0.2352 | 0.2352 | 0 | 1
GOOSE | 0.2352 | 0.2399 | 0.2543 | 0.2352 | 0.0068 | 6
AOA | 0.2356 | 0.2562 | 0.3024 | 0.2522 | 0.0209 | 9
PSO | 0.2352 | 0.2352 | 0.2352 | 0.2352 | 0 | 1
DE | 0.2363 | 0.2433 | 0.2609 | 0.2413 | 0.0064 | 8
ACO | 0.2444 | 0.2772 | 0.3 | 0.2772 | 0.0133 | 10
NSM-BO | 0.2352 | 0.2352 | 0.2352 | 0.2352 | 0 | 1
PSOBOA | 0.2368 | 0.3255 | 0.3308 | 0.3308 | 0.021 | 12
FDB-AGSK | 0.2352 | 0.2352 | 0.2352 | 0.2352 | 0 | 1