Article

Novel Greylag Goose Optimization Algorithm with Evolutionary Game Theory (EGGO)

1 School of Mechanical and Electrical Engineering, Changchun University of Science and Technology, Changchun 130022, China
2 School of Mechatronic Engineering and Automation, Foshan University, Foshan 528225, China
3 Jilin Provincial Institute of Product Quality Supervision and Inspection, Changchun 130103, China
4 Automotive Parts Intelligent Manufacturing Assembly Inspection Technology and Equipment University—Enterprise Joint Innovation Laboratory, Changchun University of Science and Technology, Changchun 130022, China
5 College of Biological and Agricultural Engineering, Jilin University, Changchun 130022, China
* Authors to whom correspondence should be addressed.
Biomimetics 2025, 10(8), 545; https://doi.org/10.3390/biomimetics10080545
Submission received: 9 July 2025 / Revised: 6 August 2025 / Accepted: 15 August 2025 / Published: 19 August 2025

Abstract

In this paper, an Enhanced Greylag Goose Optimization Algorithm (EGGO) based on evolutionary game theory is presented to address the limitations of the traditional Greylag Goose Optimization Algorithm (GGO) in global search ability and convergence speed. By incorporating dynamic strategy adjustment from evolutionary game theory, EGGO improves global search efficiency and convergence speed. Furthermore, EGGO employs dynamic grouping, random mutation, and local search enhancement to boost efficiency and robustness. Experimental comparisons on standard test functions and the CEC 2022 benchmark suite show that EGGO outperforms other classic algorithms and variants in convergence precision and speed. Its effectiveness in practical optimization problems is also demonstrated through applications in engineering design, such as the design of tension/compression springs, gear trains, and three-bar trusses. EGGO offers a novel solution for optimization problems and provides a new theoretical foundation and research framework for swarm intelligence algorithms.

1. Introduction

Meta-heuristic algorithms are highly effective for solving non-deterministic problems. They find approximately optimal solutions by imitating biological or physical phenomena, using simple structures and modest computational resources [1]. Their strong adaptability and robustness make them a key focus of computational intelligence research. Scholars have proposed numerous meta-heuristic algorithms [2], such as Particle Swarm Optimization (PSO) [3] and the Bat Algorithm (BA) [4]; Valdez and Castillo noted that around 150 algorithms are available for optimization problems [5]. As a crucial subset of meta-heuristic algorithms, swarm intelligence (SI) algorithms are known for their adaptability, robustness, and flexibility [6]. Notable examples include the Grey Wolf Optimizer (GWO) [7], Ant Colony Optimization (ACO) [8], the Whale Optimization Algorithm (WOA) [9], and Greylag Goose Optimization (GGO) [10]. Owing to these strengths, SI algorithms are widely used in engineering, biomedicine, and other fields [11]. However, they also have limitations, such as premature convergence [12], a poor exploration-exploitation balance [13], and susceptibility to local optima [14].
In 2023, Kenawy and Nima Khodadadi proposed the GGO algorithm. This novel meta-heuristic, inspired by the collective behavior and social structure of greylag geese during migration, falls under swarm intelligence [10]. Its dynamic grouping and exploration strategies enable it to escape local optima and enhance the likelihood of finding the global optimum. However, similar to other swarm intelligence algorithms, GGO has limitations such as premature convergence in high-dimensional problems and high computational complexity due to maintaining two groups. To address these issues, Hossein Najafi Khosrowshahi et al. proposed a modified GGO with adaptive mechanisms like dynamic balanced partitioning and stasis detection, improving its robustness and convergence speed [15]. Amal H et al. introduced a Parallel GGO with Restricted Boltzmann Machines (PGGO-RBM), boosting the model’s dynamic balance and accuracy [16]. Ahmed El-Sayed Saqr et al. combined GGO with a Multi-Layer Perceptron (MLP) to enhance accuracy [17]. Nikunj Mashru et al. integrated an elite non-dominated sorting method and archiving mechanism into a single-objective GGO-based algorithm, maintaining its advantages while improving convergence and diversity [18]. Dildar Gürses et al. improved GGO by adding a Lévy flight mechanism and artificial neural network (ANN) strategy, balancing exploration and exploitation, and validated its effectiveness in engineering problems like heat exchanger design, automotive side-impact design, and spring design optimization [19]. These modifications demonstrate GGO’s effectiveness in solving uncertain problems and its great potential. However, challenges such as local optima, premature convergence, and issues with diversity and balance remain.
This paper integrates evolutionary game theory (EGT) into the Greylag Goose Optimization (GGO) algorithm to address these limitations. By imitating competition and cooperation among individuals, EGT guides population evolution. This integration allows strategy frequencies to be adjusted dynamically, enhancing search efficiency and solution quality.
Firstly, the dynamic population structure adjustment mechanism from EGT is introduced to maintain population diversity and exploration ability. A dynamic grouping mechanism adjusts the size of exploration and exploitation groups based on fitness distribution, balancing global and local searching.
Secondly, random mutation and partial re-initialization of individuals are applied to preserve population diversity and avoid premature convergence. The local search scope and intensity are also dynamically adjusted according to fitness distribution.
Finally, the local search capability from EGT is incorporated to strengthen the algorithm’s exploitation. Individuals with high fitness are selected for local searching to further optimize solution quality.

2. Materials and Methods

2.1. Overview of the Greylag Goose Optimization Algorithm

The GGO algorithm is a meta-heuristic algorithm inspired by the migratory behavior of greylag geese. During migration, these geese fly in a V-formation to reduce air resistance and enhance efficiency, demonstrating remarkable collaboration. In the GGO algorithm, the population is divided into an exploration group and an exploitation group. The exploration group focuses on the global search, while the exploitation group handles local optimization. This structure helps to balance exploration and exploitation capabilities.
Within a greylag goose gaggle, individuals enhance survival efficiency through division of labor. Some members serve as sentries monitoring environmental risks, while others focus on foraging (Figure 1A). Integrating this dynamic grouping mechanism (Figure 1B) with natural migratory behavior (Figure 1C) creates an efficient search framework for algorithms.

2.2. Improvement of Greylag Goose Optimization Based on Evolutionary Game Theory (EGGO)

Evolutionary game theory (EGT) is a framework for studying the evolution of biological populations and their adaptive behaviors [20]. By simulating competition and cooperation among individuals, it offers guidance for population evolution. Integrating EGT into the GGO algorithm enables dynamic adjustment of strategy usage. This, in turn, enhances the algorithm’s search efficiency and solution quality.

2.2.1. Strategy Selection and Update

In the standard GGO, the patrol group, guard group, and foraging group are three types of auxiliary search agents that expand the exploration scope. Within the EGT framework, these agents in different tasks can be regarded as distinct “strategies”. The strategy selection mechanism in EGT dynamically adjusts the usage frequency of these strategies. By following the leaders in different auxiliary search agent groups, a wider search scope and more optimal solutions can be obtained.
The dynamic population structure adjustment mechanism of EGT is incorporated into the GGO to maintain population diversity and exploration ability. A dynamic grouping mechanism adjusts the size of exploration and exploitation groups based on fitness distribution, balancing global and local searching. Random mutation or partial individual re-initialization is introduced to preserve diversity and prevent premature convergence. Meanwhile, the local search capability of EGT is integrated into the GGO to enhance its exploitation. Individuals with high fitness are selected for local searching to optimize solution quality, with the local search range and intensity dynamically adjusted according to fitness distribution.
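The grouping and diversity mechanisms described above can be sketched in a few lines. The following Python fragment is illustrative only (the paper's implementation is in MATLAB); the group-size floor, mutation fraction, and search bounds are assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def regroup(n_explore, n_exploit, stagnated):
    """Resize the exploration/exploitation groups: grow exploration on stagnation."""
    if stagnated and n_exploit > 2:      # keep at least 2 exploiters (assumed floor)
        return n_explore + 1, n_exploit - 1
    return n_explore, n_exploit

def mutate_worst(pop, fitness, frac=0.2, bounds=(-10.0, 10.0)):
    """Re-initialize the worst `frac` of the population to preserve diversity.

    For minimization, the largest fitness values are the worst individuals.
    """
    k = max(1, int(frac * len(pop)))
    worst = np.argsort(fitness)[-k:]     # indices of the k worst individuals
    lo, hi = bounds
    pop[worst] = rng.uniform(lo, hi, size=pop[worst].shape)
    return pop
```

In the full algorithm, `regroup` corresponds to the stagnation check near the end of Algorithm 1 (growing the exploration group when the best solution repeats), while `mutate_worst` realizes the random re-initialization that preserves diversity.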

2.2.2. Fitness Assessment and Evolutionary Stable Strategy (ESS)

In EGT, fitness assesses individuals’ quality within a population. In the GGO, an individual’s fitness value reflects its position in the search space. Comparing these values identifies superior individuals and guides population evolution.
In evolutionary game theory, an Evolutionary Stable Strategy (ESS) is a key concept that describes a strategy prevailing in a population. Within the GGO framework, an ESS guides population evolution, ensuring convergence to a stable strategy distribution. To clarify the proposed method’s workings, a detailed schematic of the Evolutionary Game Theory Greylag Goose Optimization (EGGO) is presented in Figure 2.
In the mixed game method, the following settings can be made:
(1)
Each greylag goose is mapped to a player in the evolutionary game.
(2)
The three operators are regarded as three available strategies $S_1$, $S_2$, and $S_3$, with the state space $K = \left\{ y_m : \sum_{m=1}^{3} y_m = 1,\ y_m > 0 \right\}$, where $y_m \in \mathbb{R}$ represents the proportion of strategy $m$ in the population.
(3)
The average behavior obtained by following a specific strategy constitutes the payoff matrix H.
Subsequently, the game proceeds based on the fundamental evolutionary dynamics mechanism proposed by Taylor and Jonker [20], which is written as:
$\dot{y}_m = y_m \left( H_m y - y^{T} H y \right)$
Here, $H_m$ represents the $m$-th row of the payoff matrix $H$, and $H \in \mathbb{R}^{3 \times 3}$ stores the fitness information of the population, i.e., the combined result of each individual using a single strategy. Therefore, $H$ can be defined as:
$H = \begin{bmatrix} h_{s1} & \dfrac{h_{s1}+h_{s2}}{2} & \dfrac{h_{s1}+h_{s2}+h_{s3}}{3} \\ \dfrac{h_{s1}+h_{s2}}{2} & h_{s2} & \dfrac{h_{s2}+h_{s3}}{2} \\ \dfrac{h_{s1}+h_{s2}+h_{s3}}{3} & \dfrac{h_{s2}+h_{s3}}{2} & h_{s3} \end{bmatrix}$
Here, $h_{sm}$, $m \in \{1,2,3\}$, represents the benefit that a goose obtains by using strategy $m$. Equation (1), a first-order ordinary differential equation describing the difference between a strategy's fitness and the average fitness of the group, depicts the evolution of the strategy frequency $y_m$. Once (2) is established, (1) is executed to generate the associated Evolutionary Stable Strategy (ESS), which is the output of the game [21]. We thus obtain the ESS candidate $P^{t} = \{ Y_m^{t} \}$, $m \in \{1,2,3\}$, at iteration $t$, where $Y_m^{t}$ is the proportion of strategy $m$ in the $t$-th iteration. The initial strategy proportions $Y$ are set to a uniform distribution, and during updates $Y$ is normalized so that $\sum_m y_m = 1$. If $y_m < \delta$ with $\delta = 10^{-6}$, then $y_m$ is reset to $\delta$ and the vector is re-normalized. This process effectively prevents boundary absorption.
Taking into account the iterative process, the benefit of the strategy is presented as:
$h_{sm} = \dfrac{1}{t} \sum_{j=1}^{t} Y_m^{j} f(X_j)$
Here, $1/t$ averages over the iterations, and $f(X_j)$ is the cost of goose $X$ in the $j$-th iteration. Equation (3) thus reflects the average payoff obtained through a specific strategy. While solving the dynamic equation, better results continuously replace previous solutions, and the game eventually converges to an ESS that no mutant strategy can invade [21]. The replicator dynamics are discretized via the forward Euler method, $y_m^{t+1} = y_m^{t} + \Delta t \cdot y_m^{t} \left( H_m y^{t} - (y^{t})^{T} H y^{t} \right)$, with step size $\Delta t = 1$ per algorithm iteration. This discretization preserves the evolutionary direction while maintaining computational efficiency, and empirical results confirm its efficacy in driving convergence toward an ESS.
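As a concrete illustration, the payoff-matrix construction in Equation (2) and the discretized replicator update, including the $\delta$ floor and renormalization, can be sketched as follows. This is an illustrative Python fragment (the study's implementation is in MATLAB), and the step size used below is only an example:

```python
import numpy as np

DELTA = 1e-6  # floor delta preventing boundary absorption

def payoff_matrix(h):
    """Build the symmetric 3x3 payoff matrix H from per-strategy payoffs h_s1..h_s3."""
    h1, h2, h3 = h
    return np.array([
        [h1,                  (h1 + h2) / 2,  (h1 + h2 + h3) / 3],
        [(h1 + h2) / 2,       h2,             (h2 + h3) / 2],
        [(h1 + h2 + h3) / 3,  (h2 + h3) / 2,  h3],
    ])

def replicator_step(y, H, dt=1.0):
    """One forward-Euler step of the replicator dynamics, then floor and renormalize."""
    avg = y @ H @ y                   # population-average payoff y^T H y
    y = y + dt * y * (H @ y - avg)    # y_m += dt * y_m * ((H y)_m - avg)
    y = np.maximum(y, DELTA)          # reset proportions below delta
    return y / y.sum()                # re-normalize onto the simplex
```

Strategies with above-average payoff gain proportion, those below lose it, and the floor keeps every strategy minimally represented so none is absorbed at the boundary.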
Through the optimization of strategy selection, population structure, and local searching using evolutionary game theory, EGGO significantly improves the convergence speed and global optimization ability while maintaining the biological behavior simulation of the greylag goose algorithm. Experimental results show that this improved algorithm has higher solution accuracy and stability in complex optimization problems.

2.3. Lyapunov Stability Theory

To verify the convergence of the EGGO algorithm, we constructed a Lyapunov function to prove the stability of the system through dynamic adjustment of the strategy.

2.3.1. Dynamic System Modeling

In EGGO, the evolution of strategy proportions follows a first-order ordinary differential equation, as expressed in Equation (4). Here, $y = (y_1, y_2, y_3)^{T}$ represents the vector of strategy proportions, $h_m(y)$ is the payoff function of strategy $m$, and $\bar{h}(y) = \sum_{m=1}^{3} y_m h_m(y)$ is the average payoff.
$\dot{y}_m = y_m \left( h_m(y) - \bar{h}(y) \right), \quad m \in \{1, 2, 3\}$
Assumption 1: The payoff function $h_m(y)$ is continuously differentiable within the strategy space, and it satisfies $\partial h_m / \partial y_n \le 0$ (indicating a competitive relationship among the strategies).
The assumption $\partial h_m / \partial y_n \le 0$ stems from resource competition in optimization:
  • Strategy similarity: an increased $y_n$ reduces $h_{sm}$ if strategies $n$ and $m$ exploit overlapping regions.
  • Resource dilution: a fixed population size implies that a higher $y_n$ diminishes the resources available to strategy $m$.
We emphasize that ESS convergence guarantees only local stability of the strategy distribution, not global optimality. EGGO mitigates local optima through dynamic group adjustment (increasing the exploration ratio upon solution stagnation), random mutation (perturbing agents to escape locally optimal solutions), and persistent multi-strategy coexistence (sustaining exploratory strategies to preserve global search potential).

2.3.2. Lyapunov Stability Proof

The candidate Lyapunov function is defined as the negative entropy of the strategy distribution, as given by Equation (5). This function is positive definite within the strategy simplex $K = \left\{ y \in \mathbb{R}^{3} \mid \sum_m y_m = 1,\ y_m > 0 \right\}$ and attains its minimum value at the equilibrium point $y^{*}$ (ESS).
$V(y) = \sum_{m=1}^{3} y_m \ln y_m$
The time derivative of V y along the system trajectory is calculated as follows:
$\dot{V} = \sum_{m=1}^{3} \left( \dot{y}_m \ln y_m + y_m \dfrac{\dot{y}_m}{y_m} \right) = \sum_{m=1}^{3} \dot{y}_m \left( \ln y_m + 1 \right) = \sum_{m=1}^{3} y_m \left( h_m - \bar{h} \right) \left( \ln y_m + 1 \right) = \sum_{m=1}^{3} y_m \left( h_m - \bar{h} \right) \ln y_m$, where the last step uses $\sum_{m=1}^{3} y_m \left( h_m - \bar{h} \right) = \bar{h} - \bar{h} = 0$.
According to Assumption 1, when strategy $m$ earns an above-average payoff ($h_m > \bar{h}$), its proportion $y_m$ increases ($\dot{y}_m > 0$); since $y_m < 1$ implies $\ln y_m < 0$, the corresponding term $y_m (h_m - \bar{h}) \ln y_m \le 0$. Conversely, the same reasoning holds. Therefore, $\dot{V} \le 0$, and $\dot{V} = 0$ only when $y = y^{*}$ (ESS).
According to Lyapunov’s stability theorem:
(1)
$V(y)$ is positive definite within the strategy space;
(2)
$\dot{V} \le 0$, and $\dot{V} = 0$ only when $y = y^{*}$ (ESS).
Therefore, the system converges globally and asymptotically to an ESS. This demonstrates that the EGGO algorithm, enhanced by evolutionary game theory, possesses a rigorous guarantee of convergence.
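To make the stability argument concrete, the replicator dynamics of Equation (4) can be integrated numerically and checked against the predicted interior rest point, where all payoffs equalize and $\dot{y} = 0$. The payoff function below, $h_m(y) = a_m - y_m$, is an illustrative congestion-style choice satisfying Assumption 1's competition condition; it is not a payoff from the paper's benchmarks:

```python
import numpy as np

def V(y):
    """Candidate Lyapunov function: negative entropy of the strategy distribution."""
    return float(np.sum(y * np.log(y)))

def replicator(y, payoff, dt=0.05, steps=4000):
    """Integrate y_dot_m = y_m * (h_m(y) - h_bar(y)) by forward Euler."""
    for _ in range(steps):
        h = payoff(y)
        y = y + dt * y * (h - y @ h)   # h_bar(y) = y . h
        y = np.maximum(y, 1e-12)       # numerical guard on the simplex boundary
        y = y / y.sum()
    return y

# Competitive payoffs: dh_m/dy_m <= 0 (illustrative assumption, see Assumption 1).
payoff = lambda y: np.array([1.0, 1.1, 1.2]) - y

y_star = replicator(np.full(3, 1.0 / 3.0), payoff)
h_star = payoff(y_star)   # at the rest point, all strategy payoffs coincide
```

At the rest point the payoffs equalize ($h_1 = h_2 = h_3$), so the replicator flow vanishes, which is the stationarity that the Lyapunov argument above certifies locally.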

2.4. EGGO Algorithm Model and Analysis

This section updates the GGO’s mathematical model based on the improvements outlined in Section 2.2. It includes the EGGO mathematical model, algorithmic complexity analysis, pseudocode for EGGO, and a visualization of the algorithmic process.

2.4.1. Mathematical Model

The exploration group ($n_1$) in the gaggle searches for promising new locations near its current position by repeatedly comparing numerous nearby candidates and keeping the best according to fitness. During the iterations, the EGGO algorithm updates the vectors $A$ and $C$ as $A = 2 a r_1 - a$ and $C = 2 r_2$, where parameter $a$ decreases linearly from 2 to 0 and $r_1 = c_1 t / t_{\max}$.
$X(t+1) = X^{*}(t) - A \cdot \left| C \cdot X^{*}(t) - X(t) \right|$
Here, $X(t)$ represents the agent at iteration $t$, $X^{*}(t)$ denotes the position of the best solution (the leader), and $X(t+1)$ is the updated position of the agent. The values of $r_1$ and $r_2$ vary randomly within the range $[0, 1]$.
Three random search agents, named $X_{paddle1}$, $X_{paddle2}$, and $X_{paddle3}$, are selected to prevent the agents from being dominated by the leader position, thereby achieving greater exploration. This stage is improved through evolutionary game theory. The positions of the improved search agents are updated as follows, where $|A| \ge 1$.
$X(t+1) = \omega_1 \, t_r \, Y_1 \, X_{paddle1} + z \, \omega_2 \, t_r \, Y_2 \left( X_{paddle2} - X_{paddle3} \right) + (1 - z) \, \omega_3 \, t_r \, Y_3 \left( X - X_{paddle1} \right)$
Here, the values of $\omega_1$, $\omega_2$, and $\omega_3$ are updated within $[0, 2]$; $Y_1$, $Y_2$, and $Y_3$ satisfy $Y_1 + Y_2 + Y_3 = 1$; and $t_r$ is the transition factor, a random number within $[0, 1]$. Parameter $z$ decreases over the iterations and is calculated as follows.
$z = 1 - \left( t / t_{\max} \right)^{2}$
During the second update process, $r_3 \ge 0.5$, and the values of $a$ and $A$ have decreased; the position is updated as follows:
$X(t+1) = \omega_4 \left| X^{*}(t) - X(t) \right| e^{bl} \cos(2 \pi l) + 2 \omega_1 \left( r_4 + r_5 \right) X^{*}(t)$
In this formula, parameter $b$ is a constant, $l$ is a random value in $[-1, 1]$, $\omega_4$ is updated in $[0, 2]$, and $r_4$ and $r_5$ are updated in $[0, 1]$.
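A minimal sketch of these exploration-phase pieces follows (Python, illustrative; the paper's code is MATLAB). The linear 2-to-0 schedule for $a$ and the treatment of $r_1$, $r_2$ as uniform random draws are assumptions consistent with the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def coefficients(t, t_max):
    """Coefficient vectors A = 2*a*r1 - a and C = 2*r2, with a decreasing 2 -> 0."""
    a = 2.0 * (1.0 - t / t_max)        # assumed linear schedule from the text
    A = 2.0 * a * rng.random() - a     # A in [-a, a]
    C = 2.0 * rng.random()             # C in [0, 2]
    return A, C

def encircle(x, x_best, A, C):
    """Leader-encircling update: X(t+1) = X* - A * |C * X* - X|."""
    return x_best - A * np.abs(C * x_best - x)

def z_factor(t, t_max):
    """Decreasing factor z = 1 - (t / t_max)^2."""
    return 1.0 - (t / t_max) ** 2
```

Early in the run ($z$ near 1) the paddle-difference terms dominate and the search is wide; late in the run ($z$ near 0, $|A|$ small) agents contract toward the leader.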
The exploitation group ( n 2 ) focuses on refining existing solutions. At the end of each cycle, individuals with the highest fitness are identified and rewarded. Three sentries (1, 2, and 3) guide other individuals X N o n s e n t r y to adjust their positions toward the estimated prey location. The following equation illustrates this position-updating process.
$X_1 = X_{Sentry1} - A_1 \cdot \left| C_1 \cdot X_{Sentry1} - X \right|$; $X_2 = X_{Sentry2} - A_2 \cdot \left| C_2 \cdot X_{Sentry2} - X \right|$; $X_3 = X_{Sentry3} - A_3 \cdot \left| C_3 \cdot X_{Sentry3} - X \right|$
Here, $A_1$, $A_2$, and $A_3$ are calculated as $A = 2 a r_1 - a$, while $C_1$, $C_2$, and $C_3$ are calculated as $C = 2 r_2$. The updated position $X(t+1)$ is the average of the three solutions $X_1$, $X_2$, and $X_3$, as shown below.
$X(t+1) = \dfrac{1}{3} \sum_{i=1}^{3} X_i$
The most promising option is to stay close to the leader during flight; this prompts some greylag geese to investigate and approach the area with the most desirable response in search of a better solution, denoted "$X_{Flock1}$". EGGO implements this process using the following equation.
$X(t+1) = X(t) + D \left( 1 + z \right) \omega \left( X - X_{Flock1} \right)$
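The sentry-guided exploitation step, with its three candidate positions and their average, can be sketched as follows. This is an illustrative Python fragment, not the authors' code:

```python
import numpy as np

def sentry_update(x, sentries, A, C):
    """Exploitation update: X_i = Sentry_i - A_i * |C_i * Sentry_i - X| for i = 1..3,
    then X(t+1) is the mean of the three candidates."""
    candidates = [s - a * np.abs(c * s - x) for s, a, c in zip(sentries, A, C)]
    return np.mean(candidates, axis=0)
```

Averaging the three sentry-guided candidates pulls the non-sentry agent toward the region the best individuals currently occupy, which is the local refinement role of the exploitation group.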

2.4.2. Analysis of Algorithm Complexity

To evaluate the computational efficiency of the EGGO algorithm, an analysis is conducted from both temporal and spatial complexity perspectives to quantify the asymptotic behavior of resource consumption during the iterative process.
(1) Temporal Complexity: Compared with the standard GGO, the payoff-matrix update adds a cost of $O(3N)$ per iteration; solving the ordinary differential equation in Equation (4) costs $O(3)$; and performing mutation operations on $k$ individuals costs $O(kD)$. Although EGGO therefore carries a larger constant factor than GGO, its total complexity remains $O(N \cdot T \cdot D)$, in line with mainstream algorithms, so it can still meet the demands of rapid optimization.
(2) Spatial Complexity: Compared with the standard GGO, storing the strategy proportion vector $y$, the payoff matrix $H$, and the historical payoff records requires $O(3^2 + 3T)$ additional space, and retaining intermediate solutions for neighborhood searching requires $O(N \cdot D)$. In summary, the overall spatial complexity of EGGO is $O(N \cdot D)$, indicating that memory consumption is dominated by population size and problem dimensionality, which demonstrates good scalability.

2.4.3. Algorithm Pseudocode and Flowchart

Based on the previous content, we compiled the pseudo-code of EGGO, as shown in Algorithm 1, which demonstrates the operation logic of EGGO. On this basis, we visualized the algorithm flow to further explain the operation logic of EGGO and the collaborative relationships among multiple processes in the mathematical model. The visualized algorithm flow is shown in Figure 3.
Algorithm 1 Pseudocode of EGGO
1. Initialize EGGO population X_i, size n, iterations t_max, and objective function F_n
2. Initialize EGGO parameters, t = 1
3. Calculate objective function F_n for each agent X_i
4. Set P = best agent position
5. Update solutions in exploration group (n_1) and exploitation group (n_2)
6. while t ≤ t_max do
7.   Initialize the transition factor t_r and strategy proportion Y
8.   Divide the exploration group members into three parts
9.   Calculate the average fitness values of each part, h_s1, h_s2, and h_s3
10.   Initialize the payoff matrix H
11.   Update the strategy proportion Y based on Equation (1)
12.   for (i = 1 : i < n_1 + 1) do
13.    if (t % 2 == 0) then
14.     if (r_3 < 0.5) then
15.      if (|A| < 1) then
16.       Update position of current search agent as Equation (7)
17.      else
18.       Select three random search agents X_paddle1, X_paddle2, and X_paddle3
19.       Update z by Equation (9)
20.       Update position of current search agent as Equation (8)
21.      end if
22.     else
23.      Update position of current search agent as Equation (10)
24.     end if
25.    else
26.     Update position of current search agent as Equation (13)
27.    end if
28.   end for
29.   for (i = 1 : i < n_2 + 1) do
30.    if (t % 2 == 0) then
31.     Calculate X_1, X_2, and X_3 by Equation (11)
32.     Update individual position as Equation (12)
33.    else
34.     Update position of current search agent as Equation (13)
35.    end if
36.   end for
37.   Calculate objective function F_n for each agent X_i
38.   Update parameters
39.   Set t = t + 1
40.   Adjust solutions beyond the search space
41.   if (best is the same as in the previous two iterations) then
42.    Increase solutions of exploration group (n_1)
43.    Decrease solutions of exploitation group (n_2)
44.   end if
45. end while
46. Return best agent P

3. Results

3.1. Comparison and Analysis of Test Functions

This section compares EGGO with seven classic algorithms (standard GGO [10], GWO [7], MFO [22], SSA [23], WOA [9], HHO [24], and PSO [3]) using benchmark functions and the CEC 2022 test suite. Simulations are run on an Intel(R) Core(TM) i7-14650HX processor (2.2 GHz) under Windows 11 with 32 GB RAM. The algorithms are implemented in MATLAB R2024b, with parameters set as shown in Table 1.

3.1.1. Benchmark Test Functions

Among the 23 benchmark test functions, we selected 8 (F2, F5, F8, F11, F14, F17, F20, and F23) according to an arithmetic sequence as the comparison test functions for this stage. The 3D images of the test functions and the convergence curves of the algorithms are shown in Figure 4.
From the results shown in Figure 4, it can be observed that EGGO significantly outperforms the other algorithms on the unimodal (F2, F5) and multimodal (F8, F11) functions, while its advantage on the hybrid (F14, F17) and composition (F20, F23) functions is smaller. Notably, across the 30 repetitions per benchmark function, SSA is not stable; for example, Figure 4a shows it ceasing to converge at around iteration 211. EGGO ranks first in more than 28 of the 30 repetitions for every benchmark function, demonstrating strong performance.

3.1.2. CEC 2022 Test Suite

In this section, the above eight algorithms will be compared and tested using the CEC 2022 test suite. The function information of the test suite is shown in Table 2 below.
The test suite dimension is set to 10 dimensions (10D), and the experimental results are shown in Table 3. According to Table 3, in the 10D tests of 12 functions, EGGO achieves the following rankings:
Average Value (Ave.): first place in eight functions (F2, F3, F4, F6, F7, F9, F11, F12); second place in two functions (F8, F10); third place in one function (F5); sixth place in one function (F1).
Standard Deviation (Std.): first place in ten functions (F1, F3, F4, F5, F6, F7, F8, F9, F11, F12); second place in two functions (F2, F10).
Execution Time (Time): first place in two functions (F7, F11); fifth place in three functions (F8, F9, F12); sixth place in one function (F10); seventh place in three functions (F3, F4, F5); eighth place in three functions (F1, F2, F6).
Best Value (Best): first place in nine functions (F3, F4, F5, F6, F7, F8, F9, F11, F12); third place in two functions (F1, F2); seventh place in one function (F10).
Based on the rankings in Table 3, Figure 5 visualizes the performance of all eight algorithms across four metrics: Ave., Std., Time, and Best.
The test suite dimension is set to 20 dimensions (20D), and the experimental results are shown in Table 4. According to Table 4, in the 20D tests of 12 functions, EGGO achieves the following rankings:
Average Value (Ave.): first place in eleven functions (F1, F2, F3, F4, F5, F6, F7, F8, F10, F11, F12); second place in one function (F9).
Standard Deviation (Std.): first place in seven functions (F2, F3, F6, F7, F8, F10, F12); second place in three functions (F1, F5, F9); third place in two functions (F4, F11).
Execution Time (Time): first place in four functions (F1, F3, F5, F8); second place in one function (F2); fifth place in three functions (F9, F10, F11); sixth place in two functions (F7, F12); seventh place in two functions (F4, F6).
Best Value (Best): first place in eight functions (F2, F4, F5, F6, F8, F9, F10, F12); second place in three functions (F3, F7, F11); third place in one function (F1).
Based on the rankings in Table 4, Figure 6 visualizes the performance of all eight algorithms across four metrics: Ave., Std., Time, and Best.
Comprehensive evaluation of algorithm performance across the CEC 2022 benchmark suite indicates that EGGO significantly outperforms the other algorithms, particularly in terms of convergence speed and precision. When the dimensionality of the test suite increases from 10D to 20D, EGGO shows more remarkable advantages. Its rankings for mean value, variance, time consumption, and best score in the 20D tests are significantly better than those in the 10D tests. This highlights EGGO’s superior performance in handling high-dimensional problems compared to the other algorithms. The visualization results of this analysis are presented in Figure 7.

3.2. Engineering Applications of EGGO

To further evaluate the EGGO algorithm's applicability to engineering design problems, this paper assesses its performance on three such problems: gear train design, tension/compression spring design, and three-bar truss design. In the experiments, the design variables of each problem serve as individual information in the algorithm, with the design model acting as the objective function for optimization. The results of the EGGO algorithm are compared with those of other algorithms to demonstrate its superiority in solving engineering problems. These algorithms originate from various sources in the literature and include PSO [3], GWO [7], SSA [23], WOA [9], MFO [22], HHO [24], GMO [25], KABC [26], ALO [27], GOA [28], CS [29], MBA [30], GSA [31], IAPSO [32], and DMMFO [33].

3.2.1. Tension/Compression Spring Design Problem

As shown in Figure 8, the tension/compression spring design (TCSD) problem is a constrained optimization task aimed at minimizing spring volume under constant tension/compression loads, as described by Belegundu and Arora [34]. It involves three design variables: the number of coils ($L$), the coil diameter ($d$), and the wire diameter ($\omega$).
The calculation model of the tension/compression spring is as follows:
$\mathrm{Minimize} \quad f_1(\omega, d, L) = (L + 2)\, \omega^{2} d$
It is subject to the following constraints:
$g_1 = 1 - \dfrac{d^{3} L}{71785\, \omega^{4}} \le 0$; $g_2 = \dfrac{4 d^{2} - \omega d}{12566 \left( d \omega^{3} - \omega^{4} \right)} + \dfrac{1}{5108\, \omega^{2}} - 1 \le 0$; $g_3 = 1 - \dfrac{140.45\, \omega}{d^{2} L} \le 0$; $g_4 = \dfrac{\omega + d}{1.5} - 1 \le 0$.
The value ranges of the three variables are as follows:
$0.05 \le \omega \le 2.0$, $0.25 \le d \le 1.3$, $2.0 \le L \le 15$.
For constrained optimization, EGGO employs a static penalty function method. The fitness function is reformulated as $F(X) = f(X) + \lambda \sum_{j=1}^{n} \max \left( 0, g_j(X) \right)^{2}$, where $f(X)$ is the primary objective, $g_j(X)$ denotes the $n$ design constraints, and the penalty coefficient $\lambda = 10^{6}$ ensures strong rejection of infeasible solutions. This mechanism transforms constrained problems into unconstrained ones via objective-space mapping.
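A sketch of this static-penalty evaluation for the TCSD problem is given below (Python, illustrative; constraint coefficients follow the standard textbook formulation of the problem, which should be treated as an assumption about the exact constants used in the paper):

```python
import numpy as np

LAMBDA = 1e6  # static penalty coefficient

def tcsd_objective(x):
    """Spring volume proxy: f = (L + 2) * w^2 * d  (w = wire dia., d = coil dia., L = coils)."""
    w, d, L = x
    return (L + 2.0) * w**2 * d

def tcsd_constraints(x):
    """Standard TCSD constraints g_j(x) <= 0 (textbook coefficients)."""
    w, d, L = x
    return np.array([
        1.0 - d**3 * L / (71785.0 * w**4),
        (4.0*d**2 - w*d) / (12566.0 * (d*w**3 - w**4)) + 1.0 / (5108.0 * w**2) - 1.0,
        1.0 - 140.45 * w / (d**2 * L),
        (w + d) / 1.5 - 1.0,
    ])

def penalized_fitness(x):
    """Static penalty: F(X) = f(X) + lambda * sum(max(0, g_j(X))^2)."""
    g = tcsd_constraints(x)
    return tcsd_objective(x) + LAMBDA * np.sum(np.maximum(0.0, g) ** 2)
```

For a feasible design the penalty term vanishes and $F(X) = f(X)$; any constraint violation is amplified by $\lambda$, so the optimizer is driven back into the feasible region.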
As indicated in Table 5, the computational outcomes of the EGGO algorithm for the tension/compression spring design problem are compared with those from seven other algorithms. All variables in the results satisfy the constraints. The EGGO algorithm demonstrates significant superiority in performance.

3.2.2. Gear Train Design Problem

Gear train design (GTD) is a classic engineering design problem in mechanical transmission [35,36]. Its objective is to determine the number of teeth on each gear in the transmission system based on a reasonable gear ratio. The gear train structure is shown in Figure 9. The design variables for GTD consist of the number of teeth on four gears, denoted as x1, x2, x3, and x4.
The specific mathematical model is as follows:
$\mathrm{Consider} \quad x = [x_1, x_2, x_3, x_4] = [A, B, C, D]$
$\mathrm{Minimize} \quad f_2(x) = \left( \dfrac{1}{6.931} - \dfrac{x_3 x_2}{x_1 x_4} \right)^{2}$
It is subject to $12 \le x_1, x_2, x_3, x_4 \le 60$.
The solution to the gear train design problem is unique and must be an integer. As shown in Table 6, EGGO yields the same optimal results as GMO, ALO, IAPSO, and MBA. This demonstrates that EGGO’s solution is both optimal and feasible.
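For illustration, the GTD objective can be evaluated directly at the integer design commonly reported as optimal in the gear train literature, $x = (43, 16, 19, 49)$; this reference point is an assumption drawn from prior GTD studies, not read from Table 6:

```python
def gear_ratio_error(x1, x2, x3, x4):
    """GTD objective: squared deviation of the gear ratio x3*x2/(x1*x4) from 1/6.931."""
    return (1.0 / 6.931 - (x3 * x2) / (x1 * x4)) ** 2

# Literature-reported integer optimum (assumed reference point).
best = gear_ratio_error(43, 16, 19, 49)
```

The squared error at this design is on the order of 1e-12, which is why several algorithms in Table 6 report identical optima: the integer constraint leaves a single best tooth combination.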

3.2.3. Three-Bar Truss Design Problem

A schematic diagram of the three-bar truss design (T-BTD) problem is shown in Figure 10; T-BTD is a mechanical optimization problem [26]. The objective is to minimize the weight of the three-bar truss structure while meeting the constraints on stress and loading force. The optimization variables are the cross-sectional areas of the connecting rods ($x_1$, $x_2$).
The specific optimization objective function is:
$\mathrm{Minimize} \quad f_3(x) = \left( 2 \sqrt{2}\, x_1 + x_2 \right) \cdot l$
In the formula, $l$ represents the distance between the connecting rods, with $l = 100$ cm and $x_1, x_2 \in [0, 1]$.
During the T-BTD optimization process, the design variables are constrained from three aspects: structural stress, material deflection, and buckling. The three constraint formulas are as follows:
$g_1(x) = \dfrac{\sqrt{2}\, x_1 + x_2}{\sqrt{2}\, x_1^{2} + 2 x_1 x_2} P - \sigma \le 0$; $g_2(x) = \dfrac{x_2}{\sqrt{2}\, x_1^{2} + 2 x_1 x_2} P - \sigma \le 0$; $g_3(x) = \dfrac{1}{x_1 + \sqrt{2}\, x_2} P - \sigma \le 0$.
Here, $P = 2$ kN/cm² and $\sigma = 2$ kN/cm².
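The T-BTD objective and constraints are simple enough to evaluate directly. The following sketch (Python, illustrative) checks a near-optimal design commonly reported in the literature, $x_1 \approx 0.7887$, $x_2 \approx 0.4083$, which is an assumed reference point rather than a value taken from Table 7:

```python
import math

P = 2.0          # load, kN/cm^2
SIGMA = 2.0      # allowable stress, kN/cm^2
L_TRUSS = 100.0  # rod spacing, cm

def tbtd_objective(x1, x2):
    """Truss weight proxy: f = (2*sqrt(2)*x1 + x2) * l."""
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L_TRUSS

def tbtd_constraints(x1, x2):
    """Stress constraints g_j <= 0 on the three bars."""
    r2 = math.sqrt(2.0)
    denom = r2 * x1**2 + 2.0 * x1 * x2
    return (
        (r2 * x1 + x2) / denom * P - SIGMA,
        x2 / denom * P - SIGMA,
        1.0 / (x1 + r2 * x2) * P - SIGMA,
    )
```

At the reference design, the first stress constraint is essentially active (near zero) while the other two are slack, consistent with the known structure of the T-BTD optimum.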
Based on Equations (18) and (19), the EGGO algorithm was applied to solve the three-bar truss design problem, with results presented in Table 7. When compared to the other algorithms, EGGO achieved the same optimal fitness as GMO, ALO, and GSA. Additionally, the solution values x1 and x2 satisfy the constraints. These experimental results demonstrate the EGGO algorithm’s capability to effectively solve the three-bar truss design problem.

4. Conclusions

In response to the shortcomings of the traditional GGO, namely its tendency to become trapped in local optima and its slow convergence, an EGGO algorithm based on evolutionary game theory is proposed. EGGO significantly enhances global search ability and convergence speed by introducing a dynamic strategy adjustment mechanism drawn from evolutionary game theory, and it further improves efficiency and robustness through a dynamic grouping mechanism, random mutation, and local search enhancement. The dynamic grouping mechanism adjusts the sizes of the exploration and exploitation groups according to the fitness distribution of the population, balancing global search and local exploitation; random mutation or re-initialization of some individuals maintains population diversity and prevents premature convergence; and the local search operator strengthens the algorithm’s local exploitation ability.
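EGGO’s exact update rules are given earlier in the paper; purely as an illustration of the evolutionary-game idea, a discrete replicator update grows each strategy’s selection probability in proportion to how far its payoff exceeds the population average. The two strategies and the payoff values below are hypothetical:

```python
# Replicator-style strategy adjustment (a sketch, not EGGO's exact rule):
# a strategy's share grows when its payoff (e.g., average fitness
# improvement it produced) exceeds the population-weighted mean payoff.
def replicator_update(probs, payoffs, dt=0.1):
    mean_payoff = sum(p * u for p, u in zip(probs, payoffs))
    new = [p + dt * p * (u - mean_payoff) for p, u in zip(probs, payoffs)]
    total = sum(new)
    return [p / total for p in new]  # renormalize to a probability vector

probs = [0.5, 0.5]    # e.g., exploration vs. exploitation
payoffs = [0.2, 0.8]  # hypothetical average payoffs of the two strategies
for _ in range(20):
    probs = replicator_update(probs, payoffs)
print(probs)  # the higher-payoff strategy's share grows over iterations
```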
The EGGO algorithm was compared with seven classic algorithms: the standard GGO, GWO, MFO, SSA, WOA, HHO, and PSO. The results indicate that EGGO surpasses the others on multiple standard test functions and on the CEC 2022 benchmark suite, showing excellent convergence accuracy and speed. EGGO is significantly better on unimodal and multimodal functions; although its advantage narrows on hybrid and composition functions, it still ranks first in more than 28 of the 30 runs in each group, confirming its strong performance. In the CEC 2022 tests at 10D and 20D, EGGO achieved excellent results: across the 12 functions of the 10D test it ranked first in mean value, standard deviation, execution time, and best value, and it also performed outstandingly on these metrics in the 20D test. This indicates that EGGO outperforms the other algorithms on high-dimensional problems.
In engineering applications, EGGO was applied to the engineering design problems of tension/compression spring design, gear train design, and three-bar truss design. The experimental results indicate that EGGO can efficiently solve these problems while satisfying constraints, with results either comparable to or better than other advanced algorithms. This demonstrates the feasibility and effectiveness of EGGO in practical engineering optimization.
The main contributions of this paper are as follows:
1. An EGGO algorithm is proposed. By incorporating dynamic strategy adjustment from evolutionary game theory, it improves the algorithm’s adaptability and strategy selection ability. This significantly boosts global search efficiency and convergence speed.
2. EGGO uses a dynamic grouping mechanism. Based on population fitness distribution, it adjusts exploration and exploitation groups to balance global and local searches. This avoids premature convergence and ensures excellent performance in high-dimensional problems.
3. Random mutation and local search enhancement strategies are adopted. These maintain population diversity, prevent early convergence, and improve local exploitation ability.
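A minimal sketch of how these three mechanisms can fit together in one iteration, applied to the sphere function; the fixed half-and-half group split, the 0.1 mutation rate, and the 0.05 local-search step are illustrative assumptions, not EGGO’s tuned operators:

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def step(pop, lb=-5.0, ub=5.0, mut_rate=0.1, local_step=0.05):
    pop = sorted(pop, key=sphere)
    best = pop[0]
    n_exploit = len(pop) // 2   # dynamic in EGGO; a fixed split here
    new_pop = []
    for i, ind in enumerate(pop):
        if i < n_exploit:  # exploitation: move halfway toward the best
            cand = [v + 0.5 * (b - v) for v, b in zip(ind, best)]
        else:              # exploration: re-sample the search region
            cand = [random.uniform(lb, ub) for _ in ind]
        if random.random() < mut_rate:  # random mutation keeps diversity
            cand = [random.uniform(lb, ub) for _ in ind]
        new_pop.append(cand)
    # Local search enhancement: perturb the best, keep only improvements,
    # so the best fitness never worsens (elitist slot 0).
    trial = [v + random.uniform(-local_step, local_step) for v in best]
    new_pop[0] = trial if sphere(trial) < sphere(best) else best
    return new_pop

random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(5)] for _ in range(20)]
init_best = min(sphere(ind) for ind in pop)
for _ in range(100):
    pop = step(pop)
best_f = min(sphere(ind) for ind in pop)
print(init_best, best_f)
```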
Despite the remarkable performance improvements achieved by EGGO, there remains scope for further enhancement. Future work will focus on fine-tuning the algorithm’s parameters to accommodate diverse optimization scenarios and integrating additional adaptive strategies to bolster the algorithm’s robustness and applicability. Furthermore, the potential of EGGO will be explored in a broader range of practical applications, such as engineering design, resource allocation, and scheduling problems. In summary, the EGGO algorithm proposed in this paper has significantly advanced the development of GGO and laid a solid foundation for future innovation in the field of computational intelligence and optimization.

Author Contributions

Conceptualization, L.W. and Z.Y.; Methodology, L.W. and X.Z.; Software, Y.Y.; Validation, Y.Y., Z.Y. and Z.Z.; Formal analysis, L.W. and Y.Z.; Investigation, Y.Y. (Yuanting Yang) and Z.Y.; Resources, Y.Z. and Y.Y. (Yuanting Yang); Data curation, L.W., Y.Y. (Yuqi Yao) and Z.Z.; Writing—original draft preparation, L.W.; Writing—review and editing, L.W. and X.Z.; Visualization, Y.Y. (Yuqi Yao) and Z.Y.; Supervision, Y.Z.; Project administration, Y.Z. and X.Z.; Funding acquisition, X.Z. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Jilin Province and Changchun City Major Science and Technology Special Project: 20240301008ZD.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author, upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Qin, S.; Liu, J.; Bai, X.; Hu, G. A Multi-Strategy Improvement Secretary Bird Optimization Algorithm for Engineering Optimization Problems. Biomimetics 2024, 9, 478. [Google Scholar] [CrossRef]
  2. Zhang, C.; Song, Z.; Yang, Y.; Zhang, C.; Guo, Y. A Decomposition-Based Multi-Objective Flying Foxes Optimization Algorithm and Its Applications. Biomimetics 2024, 9, 417. [Google Scholar] [CrossRef]
  3. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  4. Casella, G.; Murino, T.; Bottani, E. A modified binary bat algorithm for machine loading in flexible manufacturing systems: A case study. Int. J. Syst. Sci. Oper. Logist. 2024, 11, 2381828. [Google Scholar] [CrossRef]
  5. Fevrier, V.; Oscar, C.; Patricia, M. Bio-Inspired Algorithms and Its Applications for Optimization in Fuzzy Clustering. Algorithms 2021, 14, 122. [Google Scholar] [CrossRef]
  6. Mohammadi, A.; Sheikholeslam, F.; Mirjalili, S. Nature-Inspired Metaheuristic Search Algorithms for Optimizing Benchmark Problems: Inclined Planes System Optimization to State-of-the-Art Methods. Arch. Comput. Methods Eng. 2023, 30, 331–389. [Google Scholar] [CrossRef]
  7. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  8. Dorigo, M.; Di Caro, G.; Gambardella, L.M. Ant Algorithms for Discrete Optimization. Artif. Life 1999, 5, 137–172. [Google Scholar] [CrossRef] [PubMed]
  9. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  10. El-Kenawy, E.S.M.; Khodadadi, N.; Mirjalili, S.; Abdelhamid, A.A.; Eid, M.M.; Ibrahim, A. Greylag Goose Optimization: Nature-inspired optimization algorithm. Expert Syst. Appl. 2024, 238, 122147. [Google Scholar] [CrossRef]
  11. Xu, M.; Cao, L.; Lu, D.; Hu, Z.; Yue, Y. Application of Swarm Intelligence Optimization Algorithms in Image Processing: A Comprehensive Review of Analysis, Synthesis, and Optimization. Biomimetics 2023, 8, 235. [Google Scholar] [CrossRef]
  12. Liu, J.; Shi, J.; Hao, F.; Dai, M. A novel enhanced global exploration whale optimization algorithm based on Lévy flights and judgment mechanism for global continuous optimization problems. Eng. Comput. 2023, 39, 2433–2461. [Google Scholar] [CrossRef]
  13. Chu, J.; Yu, X.; Yang, S.; Qiu, J.; Wang, Q. Architecture entropy sampling-based evolutionary neural architecture search and its application in osteoporosis diagnosis. Complex Intell. Syst. 2023, 9, 213–231. [Google Scholar] [CrossRef]
  14. Sun, Y.; Yang, T.; Liu, Z. A whale optimization algorithm based on quadratic interpolation for high-dimensional global optimization problems. Appl. Soft Comput. 2019, 85, 105744. [Google Scholar] [CrossRef]
  15. Khosrowshahi, H.N.; Aghdasi, H.S.; Salehpour, P. A refined Greylag Goose optimization method for effective IoT service allocation in edge computing systems. Sci. Rep. 2025, 15, 15729. [Google Scholar] [CrossRef]
  16. Alharbi, A.H.; Khafaga, D.S.; El-Kenawy, E.-S.M.; Eid, M.M.; Ibrahim, A.; Abualigah, L.; Khodadadi, N.; Abdelhamid, A.A. Optimizing electric vehicle paths to charging stations using parallel greylag goose algorithm and Restricted Boltzmann Machines. Front. Energy Res. 2024, 12, 1401330. [Google Scholar] [CrossRef]
  17. Saqr, A.E.S.; Saraya, M.S.; El-Kenawy, E.S.M. Enhancing CO2 emissions prediction for electric vehicles using Greylag Goose Optimization and machine learning. Sci. Rep. 2025, 15, 16612. [Google Scholar] [CrossRef]
  18. Mashru, N.; Tejani, G.G.; Patel, P. Reliability-based multi-objective optimization of trusses with greylag goose algorithm. Evol. Intell. 2025, 18, 25. [Google Scholar] [CrossRef]
  19. Gürses, D.; Mehta, P.; Sait, S.M.; Yildiz, A.R. Enhanced Greylag Goose optimizer for solving constrained engineering design problems. Mater. Test. 2025, 67, 900–909. [Google Scholar] [CrossRef]
  20. Cheng, L.; Huang, P.; Zhang, M.; Yang, R.; Wang, Y. Optimizing Electricity Markets Through Game-Theoretical Methods: Strategic and Policy Implications for Power Purchasing and Generation Enterprises. Mathematics 2025, 13, 373. [Google Scholar] [CrossRef]
  21. Taylor, P.D.; Jonker, L.B. Evolutionary stable strategies and game dynamics. Math. Biosci. 1978, 40, 145–156. [Google Scholar] [CrossRef]
  22. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  23. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control. Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  24. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  25. Wu, H.; Zhang, X.; Song, L.; Zhang, Y.; Gu, L.; Zhao, X. Wild Geese Migration Optimization Algorithm: A New Meta-Heuristic Algorithm for Solving Inverse Kinematics of Robot. Comput. Intell. Neurosci. 2022, 2022, 5191758. [Google Scholar] [CrossRef]
  26. El-Sherbiny, A.; Elhosseini, M.A.; Haikal, A.Y. A new ABC variant for solving inverse kinematics problem in 5 DOF robot arm. Appl. Soft Comput. J. 2018, 73, 24–38. [Google Scholar] [CrossRef]
  27. Mirjalili, S. The Ant Lion Optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  28. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper Optimisation Algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef]
  29. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35, Erratum in Eng. Comput. 2013, 29, 245. [Google Scholar] [CrossRef]
  30. Sadollah, A.; Bahreininejad, A.; Eskandar, H.; Hamdi, M. Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. Appl. Soft Comput. J. 2013, 13, 2592–2612. [Google Scholar] [CrossRef]
  31. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  32. Guedria, N.B. Improved accelerated PSO algorithm for mechanical engineering optimization problems. Appl. Soft Comput. 2016, 40, 455–467. [Google Scholar] [CrossRef]
  33. Ma, L.; Wang, C.; Xie, N.G.; Shi, M.; Ye, Y.; Wang, L. Moth-flame optimization algorithm based on diversity and mutation strategy. Appl. Intell. 2021, 51, 1–37. [Google Scholar] [CrossRef]
  34. Belegundu, A.D.; Arora, J.S. A study of mathematical programming methods for structural optimization. Part I: Theory. Int. J. Numer. Methods Eng. 1985, 21, 1583–1599. [Google Scholar] [CrossRef]
  35. Moosavi, S.H.S.; Bardsiri, V.K. Poor and rich optimization algorithm: A new human-based and multi populations algorithm. Eng. Appl. Artif. Intell. 2019, 86, 165–181. [Google Scholar] [CrossRef]
  36. Xu, X.; Hu, Z.; Su, Q.; Li, Y.; Dai, J. Multivariable grey prediction evolution algorithm: A new metaheuristic. Appl. Soft Comput. J. 2020, 89, 106086. [Google Scholar] [CrossRef]
Figure 1. Greylag Goose Optimization exploration, exploitation, and dynamic groups. (A) Exploration group; (B) Exploitation group; (C) Dynamic group in real life.
Figure 2. Schematic diagram of EGGO in detail.
Figure 3. Flowchart of EGGO.
Figure 4. Test results on the benchmark test functions.
Figure 5. Test result (CEC 2022 10D) ranking: (A) Ave.; (B) Std.; (C) Time; (D) Best.
Figure 6. Test result (CEC 2022 20D) ranking: (A) Ave.; (B) Std.; (C) Time; (D) Best.
Figure 7. Test result: (A) average rank in 10D test; (B) average rank in 20D test; (C) total average rank in 10D and 20D test.
Figure 8. Schematic of the tension/compression spring.
Figure 9. Transmission diagram of the gear train.
Figure 10. Schematic of three-bar truss mechanism.
Table 1. Settings of algorithm parameters under 500 iterations and a population size of 100.

Algorithm | Parameter(s) | Value(s)
GGO | r1, r2, r3, r4, r5 | [0, 1]
GGO | ω1, ω2, ω3, ω4 | [0, 2]
GWO | a | 2 to 0
MFO | a | −1 to −2
SSA | PD | 0.7
SSA | SD | 0.2
SSA | ST | 0.8
SSA | CD | 0.3
HHO | E1 | 2 to 0
HHO | E0 | [−1, 1]
PSO | Wmax, Wmin | 0.9, 0.6
PSO | C1, C2 | 2, 2
EGGO | r1, r2, r3, r4, r5 | [0, 1]
EGGO | ω1, ω2, ω3, ω4 | [0, 2]
Table 2. Details of the CEC 2022 benchmark suite.

Type | No. | Function | Dimension | Fmin
Unimodal | 1 | Shifted and fully rotated Zakharov’s function | 10 and 20 | 300
Multimodal | 2 | Shifted and fully rotated Rosenbrock’s function | 10 and 20 | 400
Multimodal | 3 | Shifted and fully rotated expanded Schaffer’s F6 function | 10 and 20 | 600
Multimodal | 4 | Shifted and fully rotated non-continuous Rastrigin’s function | 10 and 20 | 800
Multimodal | 5 | Shifted and fully rotated Levy’s function | 10 and 20 | 900
Hybrid | 6 | Hybrid function 1 (N = 3) | 10 and 20 | 1800
Hybrid | 7 | Hybrid function 2 (N = 6) | 10 and 20 | 2000
Hybrid | 8 | Hybrid function 3 (N = 5) | 10 and 20 | 2200
Composition | 9 | Composition function 1 (N = 5) | 10 and 20 | 2300
Composition | 10 | Composition function 2 (N = 4) | 10 and 20 | 2400
Composition | 11 | Composition function 3 (N = 5) | 10 and 20 | 2600
Composition | 12 | Composition function 4 (N = 6) | 10 and 20 | 2700
Table 3. Results of the CEC 2022 (10D).

Function | GGO Ave. | GGO Std. | GGO Time | GGO Best | GWO Ave. | GWO Std. | GWO Time | GWO Best
F1 | 8.9718 × 10^3 | 1.5981 × 10^4 | 5.5247 × 10^−3 | 2.7181 × 10^3 | 1.3111 × 10^4 | 4.2103 × 10^3 | 4.5857 × 10^−3 | 5.2255 × 10^3
F2 | 5.7769 × 10^2 | 1.0435 × 10^2 | 5.0988 × 10^−3 | 4.3759 × 10^2 | 6.7885 × 10^2 | 1.8679 × 10^2 | 4.6382 × 10^−3 | 4.4534 × 10^2
F3 | 6.4261 × 10^2 | 1.1480 × 10^1 | 1.1234 × 10^−2 | 6.2053 × 10^2 | 6.3172 × 10^2 | 9.4214 × 10^0 | 1.0332 × 10^−2 | 6.1625 × 10^2
F4 | 8.4191 × 10^2 | 1.0476 × 10^1 | 7.5605 × 10^−3 | 8.2221 × 10^2 | 8.4370 × 10^2 | 1.1005 × 10^1 | 6.5427 × 10^−3 | 8.2070 × 10^2
F5 | 1.4666 × 10^3 | 2.1263 × 10^2 | 7.8058 × 10^−3 | 1.0897 × 10^3 | 1.2911 × 10^3 | 2.3521 × 10^2 | 6.6609 × 10^−3 | 9.5376 × 10^2
F6 | 8.8523 × 10^5 | 1.7495 × 10^6 | 5.9593 × 10^−3 | 3.3302 × 10^3 | 3.1509 × 10^6 | 7.1051 × 10^6 | 5.6340 × 10^−3 | 2.9283 × 10^3
F7 | 2.0917 × 10^3 | 2.8859 × 10^1 | 6.0866 × 10^−2 | 2.0438 × 10^3 | 2.0877 × 10^3 | 3.2519 × 10^1 | 1.1995 × 10^−2 | 2.0457 × 10^3
F8 | 2.2397 × 10^3 | 1.2090 × 10^1 | 2.0055 × 10^−2 | 2.2204 × 10^3 | 2.2390 × 10^3 | 2.3285 × 10^1 | 1.5010 × 10^−2 | 2.2247 × 10^3
F9 | 2.6888 × 10^3 | 4.8706 × 10^1 | 1.3135 × 10^−2 | 2.5514 × 10^3 | 2.7271 × 10^3 | 5.9966 × 10^1 | 1.1441 × 10^−2 | 2.5818 × 10^3
F10 | 2.5896 × 10^3 | 8.5976 × 10^1 | 1.2604 × 10^−2 | 2.5007 × 10^3 | 2.7194 × 10^3 | 3.6536 × 10^2 | 1.1074 × 10^−2 | 2.5037 × 10^3
F11 | 3.2998 × 10^3 | 4.1118 × 10^2 | 2.0037 × 10^−2 | 2.8391 × 10^3 | 4.0570 × 10^3 | 3.8145 × 10^2 | 1.6217 × 10^−2 | 3.1253 × 10^3
F12 | 2.8958 × 10^3 | 3.6440 × 10^1 | 6.6573 × 10^−2 | 2.8694 × 10^3 | 2.9202 × 10^3 | 3.4289 × 10^1 | 1.6469 × 10^−2 | 2.8729 × 10^3

Function | MFO Ave. | MFO Std. | MFO Time | MFO Best | SSA Ave. | SSA Std. | SSA Time | SSA Best
F1 | 1.1746 × 10^4 | 6.4984 × 10^3 | 7.9990 × 10^−3 | 2.2914 × 10^3 | 6.8049 × 10^3 | 2.3104 × 10^3 | 4.7821 × 10^−3 | 1.0998 × 10^3
F2 | 4.1524 × 10^2 | 1.7718 × 10^1 | 7.9137 × 10^−3 | 4.0764 × 10^2 | 5.0932 × 10^2 | 6.4575 × 10^1 | 4.7256 × 10^−3 | 4.3617 × 10^2
F3 | 6.0385 × 10^2 | 4.1744 × 10^0 | 1.3584 × 10^−2 | 6.0074 × 10^2 | 6.3476 × 10^2 | 8.7952 × 10^0 | 9.9959 × 10^−3 | 6.2033 × 10^2
F4 | 8.3243 × 10^2 | 1.4684 × 10^1 | 9.8440 × 10^−3 | 8.2561 × 10^2 | 8.4346 × 10^2 | 9.8422 × 10^0 | 6.5236 × 10^−3 | 8.3317 × 10^2
F5 | 9.9985 × 10^2 | 1.3416 × 10^2 | 1.0137 × 10^−2 | 9.0080 × 10^2 | 1.3829 × 10^3 | 2.4189 × 10^2 | 6.7142 × 10^−3 | 1.0154 × 10^3
F6 | 4.4705 × 10^3 | 2.0481 × 10^3 | 1.0395 × 10^−2 | 1.9312 × 10^3 | 9.6276 × 10^6 | 7.6058 × 10^6 | 5.8453 × 10^−3 | 2.0961 × 10^5
F7 | 2.0280 × 10^3 | 1.0856 × 10^1 | 1.5446 × 10^−2 | 2.0212 × 10^3 | 2.0812 × 10^3 | 1.8711 × 10^1 | 1.1743 × 10^−2 | 2.0442 × 10^3
F8 | 2.2253 × 10^3 | 5.0467 × 10^0 | 1.7968 × 10^−2 | 2.2044 × 10^3 | 2.2432 × 10^3 | 7.5604 × 10^0 | 1.4752 × 10^−2 | 2.2302 × 10^3
F9 | 2.5359 × 10^3 | 1.9028 × 10^1 | 1.5121 × 10^−2 | 2.5293 × 10^3 | 2.6328 × 10^3 | 4.2446 × 10^1 | 1.1410 × 10^−2 | 2.5370 × 10^3
F10 | 2.5092 × 10^3 | 3.1289 × 10^1 | 1.4872 × 10^−2 | 2.5004 × 10^3 | 2.5710 × 10^3 | 7.7876 × 10^1 | 1.0658 × 10^−2 | 2.5016 × 10^3
F11 | 3.2059 × 10^3 | 3.0766 × 10^2 | 1.9802 × 10^−2 | 2.7544 × 10^3 | 3.0473 × 10^3 | 2.8749 × 10^2 | 1.5646 × 10^−2 | 2.7994 × 10^3
F12 | 2.9037 × 10^3 | 3.8110 × 10^1 | 2.9144 × 10^−2 | 2.8715 × 10^3 | 2.8801 × 10^3 | 1.6583 × 10^1 | 1.6196 × 10^−2 | 2.8676 × 10^3

Function | WOA Ave. | WOA Std. | WOA Time | WOA Best | HHO Ave. | HHO Std. | HHO Time | HHO Best
F1 | 3.3560 × 10^4 | 1.4314 × 10^4 | 5.1796 × 10^−3 | 6.4874 × 10^3 | 6.2245 × 10^3 | 1.4228 × 10^3 | 1.4660 × 10^−2 | 2.1536 × 10^3
F2 | 5.2604 × 10^2 | 1.1783 × 10^2 | 5.1717 × 10^−3 | 4.1115 × 10^2 | 5.3526 × 10^2 | 1.0068 × 10^2 | 1.3786 × 10^−2 | 4.1830 × 10^2
F3 | 6.4022 × 10^2 | 1.3011 × 10^1 | 1.0706 × 10^−2 | 6.1425 × 10^2 | 6.4120 × 10^2 | 1.1798 × 10^1 | 2.7284 × 10^−2 | 6.1749 × 10^2
F4 | 8.4883 × 10^2 | 1.3810 × 10^1 | 6.9401 × 10^−3 | 8.2021 × 10^2 | 8.2818 × 10^2 | 8.4948 × 10^0 | 1.9141 × 10^−2 | 8.1322 × 10^2
F5 | 1.5849 × 10^3 | 3.5923 × 10^2 | 7.1641 × 10^−3 | 1.0663 × 10^3 | 1.4879 × 10^3 | 1.9571 × 10^2 | 1.9885 × 10^−2 | 1.0407 × 10^3
F6 | 7.5709 × 10^4 | 3.0266 × 10^5 | 5.7107 × 10^−3 | 2.5663 × 10^3 | 1.6960 × 10^4 | 1.5749 × 10^4 | 1.6687 × 10^−2 | 2.4769 × 10^3
F7 | 2.0915 × 10^3 | 3.3568 × 10^1 | 1.2637 × 10^−2 | 2.0425 × 10^3 | 2.0799 × 10^3 | 3.7449 × 10^1 | 3.2018 × 10^−2 | 2.0288 × 10^3
F8 | 2.2453 × 10^3 | 2.2034 × 10^1 | 1.5100 × 10^−2 | 2.2271 × 10^3 | 2.2365 × 10^3 | 1.1173 × 10^1 | 3.8780 × 10^−2 | 2.2220 × 10^3
F9 | 2.6465 × 10^3 | 5.0501 × 10^1 | 1.1429 × 10^−2 | 2.5445 × 10^3 | 2.6576 × 10^3 | 4.7668 × 10^1 | 2.8653 × 10^−2 | 2.5412 × 10^3
F10 | 2.6658 × 10^3 | 3.0651 × 10^2 | 1.1056 × 10^−2 | 2.5004 × 10^3 | 2.6260 × 10^3 | 1.7383 × 10^2 | 2.7131 × 10^−2 | 2.5010 × 10^3
F11 | 3.3174 × 10^3 | 4.1017 × 10^2 | 1.6037 × 10^−2 | 2.7653 × 10^3 | 3.1280 × 10^3 | 3.2571 × 10^2 | 3.6550 × 10^−2 | 2.7317 × 10^3
F12 | 2.9193 × 10^3 | 4.3466 × 10^1 | 1.6161 × 10^−2 | 2.8682 × 10^3 | 2.9453 × 10^3 | 6.4707 × 10^1 | 4.0554 × 10^−2 | 2.8705 × 10^3

Function | PSO Ave. | PSO Std. | PSO Time | PSO Best | EGGO Ave. | EGGO Std. | EGGO Time | EGGO Best
F1 | 9.7990 × 10^3 | 5.9490 × 10^3 | 4.5264 × 10^−3 | 2.6880 × 10^3 | 9.0246 × 10^3 | 2.1287 × 10^3 | 1.5709 × 10^−2 | 2.1574 × 10^3
F2 | 4.4942 × 10^2 | 6.2600 × 10^1 | 4.5352 × 10^−3 | 4.0791 × 10^2 | 4.1372 × 10^2 | 1.5217 × 10^1 | 1.6228 × 10^−2 | 4.0991 × 10^2
F3 | 6.0719 × 10^2 | 5.6420 × 10^0 | 1.0123 × 10^−2 | 6.3648 × 10^2 | 6.0295 × 10^2 | 3.8314 × 10^0 | 2.1082 × 10^−2 | 6.0067 × 10^2
F4 | 8.3708 × 10^2 | 1.2480 × 10^1 | 6.2257 × 10^−3 | 8.1230 × 10^2 | 8.2084 × 10^2 | 7.9549 × 10^0 | 1.7677 × 10^−2 | 8.0714 × 10^2
F5 | 9.9364 × 10^2 | 2.0986 × 10^2 | 6.4482 × 10^−3 | 9.0027 × 10^2 | 1.1621 × 10^3 | 1.1416 × 10^2 | 1.7785 × 10^−2 | 9.0007 × 10^2
F6 | 6.1313 × 10^3 | 2.2224 × 10^3 | 5.1812 × 10^−3 | 1.8941 × 10^3 | 4.4205 × 10^3 | 1.8767 × 10^3 | 1.7495 × 10^−2 | 1.8541 × 10^3
F7 | 2.0381 × 10^3 | 3.3524 × 10^1 | 1.1919 × 10^−2 | 2.0213 × 10^3 | 2.0265 × 10^3 | 8.6546 × 10^0 | 1.1643 × 10^−2 | 2.0182 × 10^3
F8 | 2.2308 × 10^3 | 1.8176 × 10^1 | 1.4598 × 10^−2 | 2.2068 × 10^3 | 2.2253 × 10^3 | 4.0303 × 10^0 | 1.6902 × 10^−2 | 2.2043 × 10^3
F9 | 2.5650 × 10^3 | 6.2866 × 10^1 | 1.0617 × 10^−2 | 2.5293 × 10^3 | 2.5227 × 10^3 | 1.1393 × 10^1 | 1.1867 × 10^−2 | 2.5172 × 10^3
F10 | 2.6067 × 10^3 | 1.8315 × 10^2 | 1.0071 × 10^−2 | 2.5008 × 10^3 | 2.5336 × 10^3 | 5.1820 × 10^1 | 1.2894 × 10^−2 | 2.5021 × 10^3
F11 | 3.4855 × 10^3 | 4.5981 × 10^2 | 3.0300 × 10^−2 | 2.8209 × 10^3 | 2.9557 × 10^3 | 2.7451 × 10^2 | 1.5464 × 10^−2 | 2.6972 × 10^3
F12 | 2.8724 × 10^3 | 1.2959 × 10^1 | 1.5298 × 10^−2 | 2.8624 × 10^3 | 2.8634 × 10^3 | 1.3006 × 10^0 | 1.9950 × 10^−2 | 2.8597 × 10^3
Table 4. Results of the CEC 2022 (20D).

Function | GGO Ave. | GGO Std. | GGO Time | GGO Best | GWO Ave. | GWO Std. | GWO Time | GWO Best
F1 | 5.9176 × 10^4 | 3.3688 × 10^4 | 6.4054 × 10^−3 | 1.7561 × 10^4 | 5.2474 × 10^4 | 1.7347 × 10^4 | 5.8229 × 10^−3 | 2.1648 × 10^4
F2 | 1.6682 × 10^3 | 5.1504 × 10^2 | 6.2956 × 10^−3 | 8.6866 × 10^2 | 1.6142 × 10^3 | 6.6266 × 10^2 | 5.5554 × 10^−3 | 8.3519 × 10^2
F3 | 6.7649 × 10^2 | 1.1363 × 10^1 | 1.7646 × 10^−2 | 6.4770 × 10^2 | 6.6671 × 10^2 | 1.2263 × 10^1 | 1.6815 × 10^−2 | 6.3532 × 10^2
F4 | 9.5411 × 10^2 | 1.9946 × 10^1 | 1.0233 × 10^−2 | 9.1132 × 10^2 | 9.5686 × 10^2 | 1.9856 × 10^1 | 9.2864 × 10^−3 | 9.1853 × 10^2
F5 | 3.4490 × 10^3 | 4.3306 × 10^2 | 1.0047 × 10^−2 | 2.2112 × 10^3 | 3.3415 × 10^3 | 5.7951 × 10^2 | 9.2455 × 10^−3 | 1.8887 × 10^3
F6 | 6.4271 × 10^8 | 5.5643 × 10^8 | 7.0105 × 10^−3 | 2.0092 × 10^7 | 6.5769 × 10^8 | 6.1897 × 10^8 | 6.2666 × 10^−3 | 4.9632 × 10^7
F7 | 2.2067 × 10^3 | 5.7969 × 10^1 | 2.0988 × 10^−2 | 2.1064 × 10^3 | 2.2292 × 10^3 | 5.8759 × 10^1 | 1.9661 × 10^−2 | 2.1208 × 10^3
F8 | 2.3474 × 10^3 | 1.2507 × 10^2 | 2.4538 × 10^−2 | 2.2328 × 10^3 | 2.3625 × 10^3 | 1.1567 × 10^2 | 2.2600 × 10^−2 | 2.2329 × 10^3
F9 | 2.9256 × 10^3 | 1.6498 × 10^2 | 2.3938 × 10^−2 | 2.6696 × 10^3 | 2.8730 × 10^3 | 1.2788 × 10^2 | 2.1279 × 10^−2 | 2.6940 × 10^3
F10 | 5.4755 × 10^3 | 1.8551 × 10^3 | 1.8726 × 10^−2 | 2.5362 × 10^3 | 6.2215 × 10^3 | 1.2073 × 10^3 | 1.6678 × 10^−2 | 2.6349 × 10^3
F11 | 7.8689 × 10^3 | 8.6044 × 10^2 | 3.2313 × 10^−2 | 5.6132 × 10^3 | 8.7527 × 10^3 | 7.8209 × 10^2 | 2.7213 × 10^−2 | 6.6834 × 10^3
F12 | 3.1950 × 10^3 | 1.4512 × 10^2 | 3.4304 × 10^−2 | 2.9730 × 10^3 | 3.3076 × 10^3 | 1.8595 × 10^2 | 3.0777 × 10^−2 | 3.0239 × 10^3

Function | MFO Ave. | MFO Std. | MFO Time | MFO Best | SSA Ave. | SSA Std. | SSA Time | SSA Best
F1 | 6.1231 × 10^4 | 1.1673 × 10^4 | 1.2524 × 10^−2 | 3.3050 × 10^4 | 7.9260 × 10^4 | 3.1556 × 10^4 | 1.7645 × 10^−2 | 3.5907 × 10^4
F2 | 5.4734 × 10^2 | 6.0904 × 10^1 | 1.1413 × 10^−2 | 4.6990 × 10^2 | 1.0300 × 10^3 | 2.1787 × 10^2 | 5.7990 × 10^−3 | 5.9555 × 10^2
F3 | 6.2558 × 10^2 | 7.3784 × 10^0 | 2.2844 × 10^−2 | 6.1151 × 10^2 | 6.6635 × 10^2 | 1.1044 × 10^1 | 1.6953 × 10^−2 | 6.3920 × 10^2
F4 | 9.0440 × 10^2 | 2.1010 × 10^1 | 1.5567 × 10^−2 | 8.6080 × 10^2 | 9.6325 × 10^2 | 1.4271 × 10^1 | 9.3094 × 10^−3 | 9.3560 × 10^2
F5 | 3.0670 × 10^3 | 1.0176 × 10^3 | 1.5547 × 10^−2 | 1.5266 × 10^3 | 3.6631 × 10^3 | 4.6332 × 10^2 | 9.6323 × 10^−3 | 2.2913 × 10^3
F6 | 8.9806 × 10^6 | 3.2434 × 10^7 | 1.2335 × 10^−2 | 3.7945 × 10^4 | 1.4716 × 10^8 | 1.0144 × 10^8 | 6.6011 × 10^−3 | 2.7310 × 10^7
F7 | 2.1225 × 10^3 | 5.0527 × 10^1 | 2.6023 × 10^−2 | 2.0362 × 10^3 | 2.2096 × 10^3 | 6.4204 × 10^1 | 2.0051 × 10^−2 | 2.0982 × 10^3
F8 | 2.2556 × 10^3 | 3.7816 × 10^1 | 2.9029 × 10^−2 | 2.2275 × 10^3 | 2.3669 × 10^3 | 7.0686 × 10^1 | 2.2952 × 10^−2 | 2.2473 × 10^3
F9 | 2.4995 × 10^3 | 1.5796 × 10^1 | 2.7036 × 10^−2 | 2.4817 × 10^3 | 2.6849 × 10^3 | 5.8066 × 10^1 | 2.1603 × 10^−2 | 2.5745 × 10^3
F10 | 3.6510 × 10^3 | 1.1102 × 10^3 | 2.2880 × 10^−2 | 2.5030 × 10^3 | 5.3046 × 10^3 | 2.0359 × 10^3 | 1.7008 × 10^−2 | 2.5315 × 10^3
F11 | 1.6579 × 10^4 | 8.2847 × 10^3 | 3.3437 × 10^−2 | 8.6969 × 10^3 | 6.9695 × 10^3 | 4.7637 × 10^2 | 2.7133 × 10^−2 | 5.6653 × 10^3
F12 | 3.0669 × 10^3 | 1.3933 × 10^1 | 4.4198 × 10^−2 | 2.9818 × 10^3 | 3.1248 × 10^3 | 9.3803 × 10^1 | 3.0523 × 10^−2 | 2.9939 × 10^3

Function | WOA Ave. | WOA Std. | WOA Time | WOA Best | HHO Ave. | HHO Std. | HHO Time | HHO Best
F1 | 5.4916 × 10^4 | 1.8629 × 10^4 | 6.4459 × 10^−3 | 1.6798 × 10^4 | 5.0829 × 10^4 | 1.5932 × 10^4 | 1.8694 × 10^−2 | 2.1948 × 10^4
F2 | 8.5476 × 10^2 | 1.6965 × 10^2 | 6.3828 × 10^−3 | 5.8846 × 10^2 | 8.7254 × 10^2 | 1.5400 × 10^2 | 1.6184 × 10^−2 | 6.3827 × 10^2
F3 | 6.7819 × 10^2 | 1.2832 × 10^1 | 1.7020 × 10^−2 | 6.5280 × 10^2 | 6.6708 × 10^2 | 1.0961 × 10^1 | 4.4547 × 10^−2 | 6.3088 × 10^2
F4 | 9.5966 × 10^2 | 2.6966 × 10^1 | 9.8217 × 10^−3 | 8.9240 × 10^2 | 8.9961 × 10^2 | 1.4888 × 10^1 | 2.5623 × 10^−2 | 8.6119 × 10^2
F5 | 4.5437 × 10^3 | 1.3793 × 10^3 | 1.0117 × 10^−2 | 2.2502 × 10^3 | 3.1558 × 10^3 | 3.7781 × 10^2 | 2.8117 × 10^−2 | 2.1695 × 10^3
F6 | 7.1896 × 10^7 | 8.2773 × 10^7 | 7.3713 × 10^−3 | 3.3799 × 10^5 | 1.7839 × 10^7 | 3.7642 × 10^7 | 1.9376 × 10^−2 | 3.5391 × 10^5
F7 | 2.2575 × 10^3 | 8.9587 × 10^1 | 2.0146 × 10^−2 | 2.1066 × 10^3 | 2.2271 × 10^3 | 7.2493 × 10^1 | 5.3109 × 10^−2 | 2.1327 × 10^3
F8 | 2.3488 × 10^3 | 1.2004 × 10^2 | 2.3399 × 10^−2 | 2.2304 × 10^3 | 2.3335 × 10^3 | 1.0893 × 10^2 | 5.8203 × 10^−2 | 2.2318 × 10^3
F9 | 2.6565 × 10^3 | 6.8495 × 10^1 | 2.1379 × 10^−2 | 2.5218 × 10^3 | 2.7010 × 10^3 | 7.2822 × 10^1 | 5.0356 × 10^−2 | 2.5664 × 10^3
F10 | 5.5397 × 10^3 | 1.3492 × 10^3 | 1.7146 × 10^−2 | 2.5106 × 10^3 | 4.8006 × 10^3 | 1.7619 × 10^3 | 4.3705 × 10^−2 | 2.5284 × 10^3
F11 | 5.6536 × 10^3 | 5.8157 × 10^2 | 2.7628 × 10^−2 | 4.3616 × 10^3 | 6.2412 × 10^3 | 9.0108 × 10^2 | 6.2003 × 10^−2 | 4.1782 × 10^3
F12 | 3.1741 × 10^3 | 1.3144 × 10^2 | 3.0692 × 10^−2 | 3.0187 × 10^3 | 3.3290 × 10^3 | 2.0010 × 10^2 | 7.5187 × 10^−2 | 3.0188 × 10^3

Function | PSO Ave. | PSO Std. | PSO Time | PSO Best | EGGO Ave. | EGGO Std. | EGGO Time | EGGO Best
F1 | 6.8451 × 10^4 | 2.0562 × 10^4 | 5.8537 × 10^−3 | 2.9053 × 10^4 | 3.7418 × 10^4 | 1.1689 × 10^4 | 5.8202 × 10^−3 | 1.7921 × 10^4
F2 | 7.1217 × 10^2 | 2.2996 × 10^2 | 1.6870 × 10^−2 | 4.4896 × 10^2 | 5.1170 × 10^2 | 4.5991 × 10^1 | 5.7311 × 10^−3 | 4.3560 × 10^2
F3 | 6.2776 × 10^2 | 8.8523 × 10^0 | 1.6637 × 10^−2 | 6.1328 × 10^2 | 6.2367 × 10^2 | 6.7724 × 10^0 | 1.5967 × 10^−2 | 6.1260 × 10^2
F4 | 9.4195 × 10^2 | 2.6359 × 10^1 | 9.0505 × 10^−3 | 8.9326 × 10^2 | 8.7182 × 10^2 | 1.7680 × 10^1 | 1.6379 × 10^−2 | 8.4077 × 10^2
F5 | 3.7861 × 10^3 | 1.5398 × 10^3 | 9.2301 × 10^−3 | 1.6896 × 10^3 | 2.7219 × 10^3 | 3.9913 × 10^2 | 9.1869 × 10^−3 | 1.5199 × 10^3
F6 | 9.6455 × 10^6 | 1.6462 × 10^7 | 6.5599 × 10^−3 | 4.0381 × 10^3 | 1.3426 × 10^6 | 8.2665 × 10^5 | 1.7603 × 10^−2 | 2.1938 × 10^3
F7 | 2.1349 × 10^3 | 4.5206 × 10^1 | 1.9406 × 10^−2 | 2.0655 × 10^3 | 2.1184 × 10^3 | 3.6514 × 10^1 | 2.1295 × 10^−2 | 2.0420 × 10^3
F8 | 2.3038 × 10^3 | 7.3715 × 10^1 | 2.2781 × 10^−2 | 2.2305 × 10^3 | 2.2486 × 10^3 | 2.4609 × 10^1 | 1.9803 × 10^−2 | 2.2183 × 10^3
F9 | 2.5862 × 10^3 | 9.3564 × 10^1 | 2.0743 × 10^−2 | 2.4819 × 10^3 | 2.5060 × 10^3 | 5.1559 × 10^1 | 2.3087 × 10^−2 | 2.4649 × 10^3
F10 | 4.3177 × 10^3 | 1.1727 × 10^3 | 1.6412 × 10^−2 | 2.5212 × 10^3 | 3.5931 × 10^3 | 1.0167 × 10^3 | 1.7544 × 10^−2 | 2.5028 × 10^3
F11 | 1.8980 × 10^4 | 1.9458 × 10^4 | 2.7436 × 10^−2 | 4.4370 × 10^3 | 5.5911 × 10^3 | 5.8405 × 10^2 | 3.1106 × 10^−2 | 4.2239 × 10^3
F12 | 3.0249 × 10^3 | 4.5139 × 10^1 | 2.9727 × 10^−2 | 2.9527 × 10^3 | 2.9533 × 10^3 | 7.9448 × 10^0 | 3.7432 × 10^−2 | 2.9411 × 10^3
Table 5. Comparison of the best solution for the tension/compression spring design problem.

Algorithm | w | d | L | g1 | g2 | g3 | g4 | f
GGO | 0.05178 | 0.35885 | 11.171 | −3.52 × 10^−4 | −1.33 × 10^−4 | −4.0555 | −0.7262 | 0.0126724
PSO | 0.0527 | 0.3809 | 10.03011 | −0.0011 | −0.0013 | −4.0863 | −0.7109 | 0.0127263
GWO | 0.05173 | 0.35749 | 11.2594 | −6.99 × 10^−4 | −4.80 × 10^−4 | −4.0492 | −0.7272 | 0.0126845
SSA | 0.0507 | 0.3319 | 12.98 | −5.31 × 10^−4 | −0.0035 | −3.9801 | −0.7449 | 0.0127801
WOA | 0.05173 | 0.35764 | 11.24 | −2.32 × 10^−4 | −1.43 × 10^−4 | −4.0537 | −0.7271 | 0.0126712
MFO | 0.05172 | 0.35746 | 11.31 | −0.0057 | −5.64 × 10^−6 | −4.0265 | −0.7272 | 0.0127269
HHO | 0.05173 | 0.3576 | 11.242 | −7.48 × 10^−5 | −2.33 × 10^−4 | −4.0539 | −0.7271 | 0.0126717
EGGO | 0.05178 | 0.35884 | 11.1704 | −2.06 × 10^−4 | −1.56 × 10^−4 | −4.0561 | −0.7263 | 0.0126703
Table 6. Comparison of the best solution for the gear train design problem.

Algorithm | x1 | x2 | x3 | x4 | Optimum Cost
EGGO | 43 | 19 | 16 | 49 | 2.700857 × 10^−12
GMO | 43 | 19 | 16 | 49 | 2.700857 × 10^−12
KABC | 50.4259 | 22.3987 | 16.7082 | 51.4394 | 0
IAPSO | 43 | 19 | 16 | 49 | 2.700857 × 10^−12
MBA | 43 | 19 | 16 | 49 | 2.700857 × 10^−12
ALO | 43 | 19 | 16 | 49 | 2.7009 × 10^−12
Table 7. Comparison of the best solution for the three-bar truss design problem.

Algorithm | x1 | x2 | Optimum Cost
EGGO | 0.78868624 | 0.40823425 | 263.8958434
GMO | 0.7886775 | 0.4082415 | 263.8958434
KABC | 0.7886 | 0.4084 | 263.8959
DMMFO | 0.788687421 | 0.408213541 | 263.8958435
GOA | 0.7888976 | 0.4076196 | 263.895881
ALO | 0.788662816000317 | 0.408283133832901 | 263.8958434
CS | 0.78867 | 0.40902 | 263.9716
GSA | 0.7886751284 | 0.4082483080 | 263.8958434
MBA | 0.7885650 | 0.4085597 | 263.8958522

