Article

An Enhanced Snow Geese Optimizer Integrating Multiple Strategies for Numerical Optimization

1 Institute of Artificial Intelligence Application, Ningbo Polytechnic, Ningbo 315800, China
2 School of Artificial Intelligence, Ningbo Polytechnic, Ningbo 315800, China
3 Taizhou Institute of Zhejiang University, Taizhou 318000, China
4 Solar Energy Research Institute of Singapore, National University of Singapore, Singapore 117574, Singapore
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(6), 388; https://doi.org/10.3390/biomimetics10060388
Submission received: 19 April 2025 / Revised: 21 May 2025 / Accepted: 9 June 2025 / Published: 11 June 2025

Abstract

An enhanced snow geese algorithm (ESGA) is proposed to address the weakened population diversity and unbalanced search tendencies encountered by the snow geese algorithm (SGA) during the search process. First, an adaptive switching strategy dynamically selects the search strategy to balance the exploitation and exploration capabilities. Second, a dominant group guidance strategy is introduced to improve population quality. Finally, a dominant stochastic difference search strategy is designed to enrich population diversity and help the algorithm escape from local optima through co-directed effects in multiple directions. Ablation experiments were performed on the CEC2017 test suite to illustrate the mechanism of each improvement strategy and the degree to which the strategies are compatible with one another. The proposed ESGA is then compared with highly cited basic algorithms and powerful improved algorithms on the CEC2022 test suite, and the experimental results confirm that the ESGA outperforms the compared algorithms. Finally, the ability of the ESGA to solve complex problems is further highlighted by solving the robot path planning problem.

1. Introduction

1.1. Research Background

The effectiveness of deterministic algorithms has become increasingly limited in recent years because of the growing intricacy of optimization problems in real-world situations. As stochastic methods, meta-heuristic (MH) algorithms have instead demonstrated remarkable efficacy in addressing these problems. When solving an optimization problem, an MH algorithm first generates a number of random solutions in the problem space, then adjusts the locations of these solutions according to a search rule, evaluates the quality of each individual, and uses these evaluations to steer the search toward the final optimal solution [1]. MH algorithms have been extensively utilized in diverse domains such as path planning [2,3], feature selection [4,5], and image segmentation [6,7]. According to the no free lunch (NFL) theorem [8], no single algorithm can consistently outperform all others across every optimization problem, so it remains essential to continue exploring new MH algorithms.

1.2. Contribution of the Work

This work proposes an enhanced snow geese algorithm (ESGA), which improves the basic SGA through an adaptive switching strategy, a dominant group guidance strategy, and a dominant stochastic difference search strategy. The performance and potential of the proposed ESGA are comprehensively examined on the CEC2017 test suite, the CEC2022 test suite, and the robot path planning problem. The results show that the ESGA is a promising SGA variant.

1.3. Section Arrangement

A literature review is conducted in Section 2. Section 3 provides an overview of the SGA. Section 4 details the proposed ESGA. Section 5 presents the benchmark test function experiments and robot path planning trials. Finally, the conclusions are drawn in Section 6.

2. Literature Review

Among the many meta-heuristic algorithms, the sources of inspiration are varied and unique; most draw on the behaviors of creatures in nature. Among them, the Genetic Algorithm (GA) [9], Particle Swarm Optimization (PSO) [10], and Simulated Annealing (SA) [11] are the three most well-known. GA is an evolution-based algorithm inspired by the principles of Darwinian evolution and Mendelian genetics; it progressively optimizes the problem solution by simulating natural selection and genetic mechanisms. PSO is a swarm-based meta-heuristic algorithm inspired by the foraging behavior of bird flocks; its basic idea is to optimize the solution step by step by simulating the information sharing and collaboration between individuals of a flock during foraging. SA is a physics-based algorithm derived from the annealing process of solid materials, which finds the global optimal solution by gradually decreasing the temperature of the system. With the development of meta-heuristic algorithms, researchers continue to present more powerful and novel algorithms. Tuna Swarm Optimization (TSO) [12] originated from the spiral and parabolic foraging behaviors of tuna. The Secretary Bird Optimization Algorithm (SBOA) originated from the behavioral patterns of secretary birds constantly searching for prey and evading predators [13]. The Spider Wasp Optimizer (SWO) replicates the foraging, nest construction, and reproductive activities of female spider wasps [14]. Dwarf Mongoose Optimization (DMO) draws inspiration from the foraging behaviors of dwarf mongooses, which exert a profound influence on the social dynamics and ecological adaptability of the species [15]. The Sea Horse Optimizer (SHO) draws inspiration from the natural actions of sea horses, including their movement patterns, foraging methods, and reproductive habits [16]. Inspired by the natural behavior of crayfish, the Crayfish Optimization Algorithm (COA) [17] solves complex optimization problems by simulating their mechanisms of temperature adaptation, burrow selection, and foraging competition. The Polar Lights Optimizer (PLO) [18] is a meta-heuristic algorithm based on the aurora phenomenon that efficiently solves various complex optimization problems by simulating the rotational motion of charged particles in the Earth's magnetic field, their walks in the auroral ellipse region, and inter-particle collisions. Henry Gas Solubility Optimization (HGSO) [19] is a physics-based meta-heuristic algorithm inspired by Henry's law, which states that the solubility of a gas in a liquid is proportional to the partial pressure of that gas; HGSO exploits and explores the search space in a balanced manner by modeling the aggregation behavior of gases.
The snow geese algorithm (SGA) is a swarm-based meta-heuristic algorithm proposed by Tian et al. in 2024 [20]. The SGA is inspired by the migratory behavior of snow geese, in particular the unique "herringbone" and "straight line" flight patterns that they form during migration; the algorithm achieves efficient search and optimization in the solution space by simulating this flight behavior. Wu et al. developed a surrogate-assisted multi-objective snow geese algorithm based on Gaussian processes and applied it to a cold chain logistics problem [21]. Chandrashekhar et al. applied the SGA to optimize the performance of on-board chargers for electric vehicles [22]. Although the SGA achieves good performance by virtue of its unique search method, it still suffers from insufficient population diversity when dealing with complex situations, and at the same time it struggles to balance exploitation and exploration, causing it to fall into local optima. Bian et al. comprehensively strengthened the search capability of the SGA by proposing a lead-geese rotation mechanism, a horn-oriented mechanism, and an outlier boundary strategy [23]. To better exploit the potential of the SGA, this paper proposes an ESGA that integrates three improvement strategies. First, an adaptive switching strategy is used to dynamically select the search strategy to balance the exploitation and exploration capabilities. Second, a dominant group guidance strategy is introduced to improve population quality. Finally, a dominant stochastic difference search strategy is designed to enrich population diversity and help the algorithm escape from local optima through co-directed effects in multiple directions.

3. The Basic SGA

3.1. Inspiration Source

The snow geese algorithm, proposed by Tian et al. in 2024 [20], is a swarm-based meta-heuristic algorithm. The algorithm was conceived based on the unique formation structure of snow geese during migration and the corresponding change patterns of this formation structure under different circumstances. The SGA consists of three phases: the initialization phase, exploration phase, and exploitation phase. The critical operations of this algorithm, along with their corresponding mathematical models, are presented below.

3.2. Initialization Phase

In the SGA, each member of the population represents a solution. Each solution consists of $D$ elements that satisfy the boundary constraints, and these solutions collectively form the population. Like other meta-heuristic algorithms, the SGA first generates an initial population. Assuming that the search range of the problem space is $[lb, ub]$, the position of the $i$th solution, $P_i$, is given by Equation (1):
$P_i = lb + rand(1, D) \times (ub - lb), \quad i = 1, 2, \ldots, N$  (1)
In Equation (1), $rand(1, D)$ is a $D$-dimensional random vector whose elements are random numbers in $[0, 1]$, and $N$ is the number of SGA population members. After obtaining the initial population, the fitness of each individual is evaluated according to Equation (2).
$fit_i = F(P_i)$  (2)
where $F$ denotes the objective function and $fit_i$ represents the fitness of the $i$th member.
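As a minimal illustration of Equations (1) and (2), the following Python sketch (our own illustration, not the authors' released code) initializes a population and evaluates it; the sphere objective and all function names here are placeholders of our choosing.

```python
import numpy as np

def initialize_population(N, D, lb, ub, rng):
    # Equation (1): P_i = lb + rand(1, D) * (ub - lb)
    return lb + rng.random((N, D)) * (ub - lb)

def evaluate(pop, F):
    # Equation (2): fit_i = F(P_i)
    return np.array([F(p) for p in pop])

rng = np.random.default_rng(0)
sphere = lambda x: float(np.sum(x ** 2))   # placeholder objective, not from the paper
pop = initialize_population(30, 10, -100.0, 100.0, rng)
fit = evaluate(pop, sphere)
print(fit.min())
```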

3.3. Exploration Phase

The SGA achieves extensive searching during the exploration phase by modeling the migration of the herringbone formation of the snow geese population. In this phase, the SGA divides the population into three parts according to fitness, each with its own updating method. For individuals ranked in the top twenty percent by fitness, the SGA updates their positions using Equation (3).
$P_i^{t+1} = P_i^t + a \times (P_{best}^t - P_i^t) + V_i^t$  (3)
$V_i^{t+1} = 4 V_i^t \times \frac{t}{T} \times e^{-4t/T} + (P_{best}^t - P_i^t) - 0.005 \times 1.29 \times (V_i^t)^2 \times \sin\theta$  (4)
where $P_i^t$ represents the position of the $i$th agent at iteration $t$, $P_{best}^t$ is the position of the best agent (with the best fitness value) at iteration $t$, and $T$ denotes the total number of allowable iterations. The constants in Equation (4) are taken from the original literature. For individuals ranked in the middle of the fitness scale, the SGA updates their positions using Equation (5).
$P_i^{t+1} = P_i^t + a \times (P_{best}^t - P_i^t) + b \times (P_c^t - P_i^t) - d \times (P_{worst}^t - P_i^t) + V_i^t$  (5)
$P_c^t = \frac{\sum_{i=1}^{N} P_i^t \times fit_i}{N \times \sum_{i=1}^{N} fit_i}$  (6)
For the remaining individuals (fitness ranked in the bottom twenty percent), the SGA applies Equation (7) to update their positions.
$P_i^{t+1} = P_i^t + a \times (P_{best}^t - P_i^t) - b \times (P_c^t - P_i^t) + V_i^t$  (7)
where $P_{worst}^t$ represents the position of the worst agent at iteration $t$. The coefficients $a$, $b$, and $d$ appearing in Equations (3), (5), and (7) are empirical values obtained experimentally as follows:
$a = 4 \times rand - 2$  (8)
$b = 3 \times rand - 1.5$  (9)
$d = 2 \times rand - 1$  (10)
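The exploration-phase updates can be sketched as follows. This is a hedged reading of Equations (3)-(10) as reconstructed above: in particular, the negative exponent and the subtracted drag term in Equation (4) are our inference from the surrounding text, and the helper names (`coefficients`, `explore_step`) are our own.

```python
import numpy as np

def coefficients(rng):
    # Equations (8)-(10): random coefficients on symmetric ranges
    a = 4 * rng.random() - 2      # a in [-2, 2]
    b = 3 * rng.random() - 1.5    # b in [-1.5, 1.5]
    d = 2 * rng.random() - 1      # d in [-1, 1]
    return a, b, d

def explore_step(pop, fit, V, t, T, theta, rng):
    N = len(pop)
    order = np.argsort(fit)                     # ascending: best first (minimization)
    P_best, P_worst = pop[order[0]], pop[order[-1]]
    P_c = (pop * fit[:, None]).sum(axis=0) / (N * fit.sum())   # Equation (6)
    k = max(1, N // 5)
    top, bottom = set(order[:k]), set(order[-k:])
    new_pop = pop.copy()
    for i in range(N):
        a, b, d = coefficients(rng)
        if i in top:            # Equation (3): top 20% by fitness
            new_pop[i] = pop[i] + a * (P_best - pop[i]) + V[i]
        elif i in bottom:       # Equation (7): bottom 20% by fitness
            new_pop[i] = pop[i] + a * (P_best - pop[i]) - b * (P_c - pop[i]) + V[i]
        else:                   # Equation (5): middle group
            new_pop[i] = (pop[i] + a * (P_best - pop[i]) + b * (P_c - pop[i])
                          - d * (P_worst - pop[i]) + V[i])
        # Equation (4): velocity decay plus a drag-like term (our reading of the signs)
        V[i] = (4 * V[i] * (t / T) * np.exp(-4 * t / T)
                + (P_best - pop[i]) - 0.005 * 1.29 * V[i] ** 2 * np.sin(theta))
    return new_pop, V
```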

3.4. Exploitation Phase

The SGA switches between exploration and exploitation behaviors via the angle $\theta$. When $\theta \geq \pi$, the SGA uses a linear flight mode that contains the following two mechanisms:
$P_i^{t+1} = P_i^t - (P_{best}^t - P_i^t) \times rand, \quad rand > 0.5$  (11)
$P_i^{t+1} = P_i^t - (P_{best}^t - P_i^t) \times rand \times Brownian(D), \quad rand \leq 0.5$  (12)
$\theta = \frac{2\pi t}{T}$  (13)
where $Brownian(D)$ is a vector of standard Brownian motion values drawn from a normal distribution with mean 0 and standard deviation 1, and $rand$ is a random number in $[0, 1]$.
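A corresponding sketch of the linear-flight exploitation mechanisms, again our own reconstruction of Equations (11)-(13) rather than reference code:

```python
import numpy as np

def exploit_step(pop, P_best, rng):
    # Equations (11)-(12): linear flight relative to the best agent
    new_pop = pop.copy()
    for i in range(len(pop)):
        if rng.random() > 0.5:   # Equation (11)
            new_pop[i] = pop[i] - (P_best - pop[i]) * rng.random()
        else:                    # Equation (12): Brownian (standard normal) step
            brownian = rng.standard_normal(pop.shape[1])
            new_pop[i] = pop[i] - (P_best - pop[i]) * rng.random() * brownian
    return new_pop

def theta_sga(t, T):
    # Equation (13): fixed switch; exploration while theta < pi (first half)
    return 2 * np.pi * t / T
```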

4. The Proposed ESGA

The performance of the SGA decreases significantly when facing complex optimization problems. This is because the SGA neglects the guiding role of the dominant group, which makes it difficult to improve population quality, and because performing only exploitation or only exploration at any given stage cannot maintain a coordinated search. Moreover, using only the information of the best individual in the exploitation stage easily leads the SGA into local optima. Therefore, this paper proposes three improvement strategies to address these drawbacks: the adaptive switching strategy, the dominant group guidance strategy, and the dominant stochastic difference search strategy.

4.1. Adaptive Switching Strategy (ASS)

In general, meta-heuristic algorithms should search more unknown regions early on and select promising regions for deeper exploitation at a later stage. The switching mechanism of the SGA does exactly that, but it is a fixed switch: only global exploration in the early stage and only local exploitation in the later stage. A search mechanism that performs only exploration or only exploitation in any period limits the performance of the algorithm. In this paper, we propose an adaptive switching strategy to overcome this shortcoming. As shown in Figure 1, the strategy performs a mostly global search in the early stage while retaining some local exploitation capability, which improves the convergence speed of the algorithm. In the later stage, the strategy favors the exploitation strategy while preserving some global exploration capability, which enriches population diversity and helps avoid local optima. It is calculated that during the first half of the iterations, exploration and exploitation account for approximately 75% and 25% of updates, respectively, and during the second half these percentages are reversed. The modified $\theta$ is updated as follows:
$\theta = 2\pi \times \sin\left(\frac{\pi t}{T}\right) \times rand$  (14)
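Under the reconstructed form of Equation (14), the first-half exploration share can be checked numerically; the short sketch below (our own check, not from the paper) estimates the fraction of iterations with θ < π, i.e., exploration steps, in the first half of a run:

```python
import numpy as np

def adaptive_theta(t, T, rng):
    # Equation (14): the random factor lets exploration and exploitation
    # co-exist at every stage of the run
    return 2 * np.pi * np.sin(np.pi * t / T) * rng.random()

rng = np.random.default_rng(0)
T = 1000
explore = [adaptive_theta(t, T, rng) < np.pi for t in range(1, T // 2)]
print(sum(explore) / len(explore))   # ~0.75 exploration in the first half
```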

4.2. Dominant Group Guidance Strategy (DGS)

During the global exploration phase, the largest subgroup of the SGA population updates its position through Equation (5). This updating method is guided by the positions of the best individual, the worst individual, and the average position of the entire population. Although it can expand the search to some extent, this approach may lead to search stagnation, because the guidance of the best and worst individuals may be mutually constrained, while the average position of the whole population cannot point toward promising directions. Based on the above analysis, this work proposes a dominant group guidance strategy (DGS). The DGS fits the distribution of the dominant group with a Gaussian probability model, which in turn yields the evolutionary trend of the population. This method uses the guidance of the dominant group to enhance population quality and promote the search. In addition, we use the position of each individual to modify the evolutionary trend, which enriches population diversity. The DGS can be expressed by the following equations:
$P_i^{t+1} = P_{wmean}^t + P_i^t + g, \quad g \sim N(0, Cov)$  (15)
$P_{wmean}^t = \sum_{i=1}^{0.5N} \frac{\ln(0.5N + 1) - \ln i}{\sum_{i=1}^{0.5N} \left( \ln(0.5N + 1) - \ln i \right)} \times P_i^t$  (16)
$Cov = \frac{1}{0.5N} \times \sum_{i=1}^{0.5N} \left( P_i^t - P_{wmean}^t \right) \left( P_i^t - P_{wmean}^t \right)^{T}$  (17)
where $P_{wmean}^t$ is the weighted average position of the top half of the population ranked by fitness. $P_{wmean}^t$ is used instead of $P_c^t$ because it better reflects the search direction of the whole population. At the same time, different positions have different degrees of influence on the search direction: the better the individual, the greater its influence on $P_{wmean}^t$.
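A sketch of the DGS under our reading of Equations (15)-(17): the top half of the population is combined with log-rank weights, the covariance of that dominant group is estimated, and a Gaussian trend term is added. The function name and the minimization convention are our assumptions.

```python
import numpy as np

def dgs_update(pop, fit, rng):
    # Dominant group guidance, Equations (15)-(17) (our reading of the formulas)
    N, D = pop.shape
    mu = N // 2
    elite = pop[np.argsort(fit)[:mu]]           # top half by fitness (minimization)
    # Equation (16): log-rank weights, better-ranked individuals weigh more
    w = np.log(mu + 1) - np.log(np.arange(1, mu + 1))
    w /= w.sum()
    p_wmean = w @ elite
    # Equation (17): covariance of the dominant group around the weighted mean
    diff = elite - p_wmean
    cov = diff.T @ diff / mu
    # Equation (15): evolutionary trend plus a per-individual term
    g = rng.multivariate_normal(np.zeros(D), cov, size=N)
    return p_wmean + pop + g
```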

4.3. Dominant Stochastic Difference Search Strategy (DSS)

During the exploitation phase, the SGA performs an in-depth search of promising regions in two ways. The approach of Equation (11) allows each individual to move closer to the optimal solution, which accelerates convergence but diminishes population diversity and risks falling into a local optimum. In general, when the best individual falls into a local optimum, the remaining individuals approaching it are also prone to becoming trapped. To address this problem, a dominant stochastic difference search strategy is proposed in this paper, which introduces guidance from three sources: the dominant group, random individuals, and historical individuals, as shown in Equation (18).
$P_i^{t+1} = P_i^t + (P_{pbest}^t - P_i^t) \times rand + (P_{r1}^t - P_A^t) \times rand, \quad rand > 0.5$  (18)
where $P_{pbest}^t$ is an individual randomly selected from the top ten percent of the population by fitness, $P_{r1}^t$ is an individual randomly selected from the entire population, and $P_A^t$ is an individual randomly selected from the set $A$. The set $A$ stores historical information about each individual together with current population information, which enhances population diversity.
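A sketch of Equation (18) under the stated definitions; the archive standing in for the set A and the handling of the rand > 0.5 gate are our assumptions:

```python
import numpy as np

def dss_update(pop, fit, archive, rng):
    # Dominant stochastic difference search, Equation (18); the rand > 0.5 gate
    # of Equation (18) is assumed to be applied by the framework (Algorithm 1)
    N = len(pop)
    order = np.argsort(fit)                        # ascending fitness (minimization)
    top10 = order[: max(1, N // 10)]
    new_pop = pop.copy()
    for i in range(N):
        p_pbest = pop[rng.choice(top10)]           # random top-10% individual
        p_r1 = pop[rng.integers(N)]                # random population member
        p_a = archive[rng.integers(len(archive))]  # random member of the set A
        new_pop[i] = (pop[i] + (p_pbest - pop[i]) * rng.random()
                      + (p_r1 - p_a) * rng.random())
    return new_pop
```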

4.4. The Framework of the ESGA

This subsection introduces the framework of the ESGA. The pseudo-code of the proposed ESGA is exhibited in Algorithm 1. The flowchart of the ESGA combining the adaptive switching strategy, dominant group guidance strategy, and dominant stochastic difference search strategy is shown in Figure 2.
Algorithm 1: Pseudo-Code of ESGA
Input: N, T, lb, ub, D.
Initialize the population using Equation (1).
FOR t = 1: T
     Calculate the fitness value of the search agent using Equation (2).
     Calculate the θ using Equation (14). //ASS
     IF θ < π
          Calculate $P_{wmean}^t$ and $Cov$ using Equations (16) and (17). //DGS
          Update the individual’s position using Equations (3), (7), and (15).
     ELSE
          IF rand < 0.5
               Update the individual’s position using Equation (18). //DSS
          ELSE
               Update the individual’s position using Equation (12).
          END IF
     END IF
END FOR
Output: The best position and the best fitness.
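For concreteness, a compact driver mirroring Algorithm 1 is sketched below. It is illustrative only: it reuses the hedged sketch functions from the previous subsections, applies the DGS update (Equation (15)) to the whole exploration group instead of reserving Equations (3) and (7) for the top and bottom twenty percent, and keeps a simple rolling archive for the set A.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, T, lb, ub = 30, 10, 500, -100.0, 100.0
pop = initialize_population(N, D, lb, ub, rng)
archive = pop.copy()                               # the set A: history + current
for t in range(1, T + 1):
    fit = evaluate(pop, sphere)
    theta = adaptive_theta(t, T, rng)              # ASS, Equation (14)
    if theta < np.pi:                              # exploration branch (DGS)
        pop = dgs_update(pop, fit, rng)
    elif rng.random() < 0.5:                       # exploitation with DSS, Eq. (18)
        pop = dss_update(pop, fit, archive, rng)
    else:                                          # exploitation, Equation (12)
        best = pop[np.argmin(fit)]
        pop = pop - (best - pop) * rng.random() * rng.standard_normal((N, D))
    pop = np.clip(pop, lb, ub)                     # keep solutions inside bounds
    archive = np.vstack([archive, pop])[-10 * N:]  # rolling history for the set A
print(evaluate(pop, sphere).min())
```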

4.5. Complexity Analysis of the ESGA

Time complexity is essential for assessing an algorithm’s performance efficiency. In swarm-based optimization algorithms, time complexity mainly depends on factors such as population size, dimensionality, the number of iterations, and the computational cost of the fitness function. The time complexities of the SGA and ESGA are as follows.
In the SGA, the initialization function generates $N$ individuals, each with dimension $D$, so the time complexity of this step is $O(N \times D)$. Each round of fitness evaluation has complexity $O(N)$. In the position update operation, the outer loop runs $N$ times and the inner loop iterates $D$ times, so the time complexity of this part is $O(N \times D)$. Given that the algorithm runs for $T$ iterations, the overall time complexity of the SGA is $O(N \times D \times T)$.
In the ESGA, the time complexities of the initialization function and the fitness evaluation are unchanged, remaining $O(N \times D)$ and $O(N)$, respectively. In the position update operation, the outer loop runs $N$ times and the inner loop iterates $D$ times, so the time complexity of this part is $O(N \times D)$. The ASS replaces the original update of $\theta$ without increasing the time complexity, and the DGS and DSS replace the original search strategies without increasing it either. Given that the algorithm runs for $T$ iterations, the time complexity of the ESGA is $O(N \times D \times T)$. Hence, the time complexity of the ESGA is of the same order of magnitude as that of the SGA, and its performance improvements are not achieved through increased complexity.

5. Experimental Results and Analysis

The performance of the proposed ESGA was evaluated using the CEC2017 and CEC2022 benchmark functions. The results were compared with eight other meta-heuristic algorithms: MRFO [24], DBO [25], EO [26], RIME [27], GLS-MPA [28], EOSMA [29], AFDB-ARO [30], and LSHADE [31]. The first four are widely used basic algorithms; the latter four are improved algorithms whose excellent performance has been demonstrated in the respective literature. The convergence curves of each algorithm on each benchmark function were analyzed, illustrating the convergence speed, stability, and precision of the optimization process. Additionally, the mean value, standard deviation, and optimal value of each algorithm on each benchmark function were examined, providing a comprehensive view of overall convergence performance, consistency, reliability under challenging conditions, and typical performance that is less sensitive to outliers. The significance of the differences between the ESGA and the competing algorithms was verified using the nonparametric Wilcoxon rank sum test and the Friedman test.
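For readers reproducing the statistical analysis, both tests are available in SciPy; a minimal sketch with placeholder result arrays (not the paper's data) might look as follows:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder data standing in for 30 runs per algorithm on one function
esga_runs = rng.normal(300.0, 1.0, size=30)
sga_runs = rng.normal(320.0, 5.0, size=30)

# Wilcoxon rank sum test between two independent samples
_, p_rs = stats.ranksums(esga_runs, sga_runs)
print(f"rank sum p-value: {p_rs:.3g}")

# Friedman test across three (placeholder) algorithms over 12 functions
a, b, c = (rng.normal(m, 1.0, size=12) for m in (1.0, 1.2, 1.5))
_, p_f = stats.friedmanchisquare(a, b, c)
print(f"Friedman p-value: {p_f:.3g}")
```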
The simulations and analyses were conducted using MATLAB R2023b on a Core i9-13900H processor with 32 GB of RAM running at 2.6 GHz. Table 1 provides the simulation parameters for each algorithm.

5.1. Ablation Experiments Using the CEC2017 Test Set

To examine the effectiveness of each improvement strategy, we tested SGA variants integrating a single improvement strategy or two improvement strategies on the CEC2017 test suite. The details of the SGA variants are shown in Table 2, where "Y" denotes that the strategy is integrated and "N" indicates that it is not. Specifically, these include SGA-AS (the SGA with the adaptive switching strategy), SGA-DG (the SGA with the dominant group guidance strategy), SGA-DS (the SGA with the dominant stochastic difference search strategy), SGA-ASDG (the SGA with the adaptive switching and dominant group guidance strategies), SGA-ASDS (the SGA with the adaptive switching and dominant stochastic difference search strategies), and SGA-DGDS (the SGA with the dominant group guidance and dominant stochastic difference search strategies). The benchmark functions of CEC2017 are divided into three categories: unimodal functions F1–F2, multimodal functions F3–F9, and hybrid and composite functions F10–F29. To ensure an accurate comparison between these SGA variants, 30 independent runs are conducted for each algorithm. In each run, the population size is initialized to 30, the dimensionality is set to 10/30/50/100, and the maximum number of function evaluations is set to 30,000, with the lower and upper limits set to −100 and 100, respectively.
Table 3 summarizes the results of the Friedman test based on the performance of the ESGA, SGA, SGA-AS, SGA-DG, SGA-DS, SGA-ASDG, SGA-ASDS, and SGA-DGDS on the CEC2017 test suite, and the Friedman ranking is depicted in Figure 3. The best-performing results are shown in bold. Based on the p-values in the last column of Table 3, we can first conclude that there are significant differences between these algorithms. In terms of the overall ranking, the ESGA, containing all the improvement strategies, shows the best performance across the different problem dimensions, achieving an average ranking of 1.190. By comparing the SGA with the three SGA variants that each incorporate a single improvement strategy, we can see that all three improvement strategies improve the performance of the SGA, in descending order of performance gain: DGS, DSS, and ASS. By comparing the SGA variants incorporating two improvement strategies with those incorporating a single strategy, we can see that the three strategies mutually reinforce one another to further improve SGA performance.
Figure 4 presents the results of the Wilcoxon rank sum test between each SGA variant and the SGA. In Figure 4, "better" indicates that the SGA variant significantly outperforms the original SGA, "similar" indicates that the two have similar performance, and "worst" indicates that the SGA variant is significantly inferior to the original SGA. According to Figure 4, the ESGA significantly outperforms the original SGA on all functions in all four cases, which again illustrates the strength of the ESGA. By comparing the SGA variants combining one or two improvement strategies with the original SGA, we can conclude that none of the improvement strategies significantly weakens the search capability of the original SGA, even on functions where they do not enhance its performance. This demonstrates the effectiveness of the three improvement strategies proposed in this paper.

5.2. Comparison Test Using the CEC2022 Test Suite

In this subsection, we further evaluate the performance of the ESGA by comparing it with MRFO, DBO, EO, RIME, GLS-MPA, EOSMA, AFDB-ARO, and LSHADE on the CEC2022 test set. The benchmark functions of CEC2022 are divided into three categories: the unimodal function F1, basic functions F2–F5, and hybrid and composite functions F6–F12. For fairness, we set the population size to 300, the dimensionality to 10/20, and the maximum number of function evaluations to 30,000, with the lower and upper limits set to −100 and 100. Each algorithm is run 30 times, and the best value (Min), average value (Avg), and standard deviation (Std) are calculated for each algorithm. These three indicators reflect each algorithm's ability to find the optimum; to show clearly which algorithm has an advantage, the best average value for each function is shown in bold. The experimental results are shown in Table 4 and Table 5, and the convergence curves and boxplots are shown in Figure 5 and Figure 6.
For the unimodal function F1, the ESGA achieves excellent performance on both 10D and 20D, obtaining the best average value, which indicates that the improvement strategies have enhanced the local search capability of the SGA so that it locates the optimal solution more accurately. Combined with the standard deviation, the ESGA also shows good stability on the unimodal function. For the basic functions F2–F5, the ESGA performs best on F2, F4, and F5 in 10D and provides the best solutions on F2 and F4 in 20D. Although the ESGA is not the best on F3, it still outperforms the original SGA; EO outperforms the other algorithms on F3 in both 10D and 20D, and EOSMA ranks first on F5 in 20D. The hybrid and composite functions, featuring multiple peaks and valleys, multiple local optima, and a single global optimum, are used to evaluate the global search capability of an algorithm and its ability to escape local optima. On the 10D functions, the ESGA provides the best averages on F6 and F8–F12, and is weaker than only GLS-MPA and AFDB-ARO on F7, still outperforming the original SGA. On the 20D hybrid and composite functions, the ESGA outperforms the other algorithms in terms of averages on F6–F9 and F11. However, the ESGA does not perform as well on F10 and F12, ranking only fifth and sixth, respectively. Overall, the ESGA achieves the best average value on more functions than any other algorithm, reflecting its robustness and adaptability in complex environments. As for the minimum values, most of the ESGA's values are smaller than those of the SGA and the other comparison algorithms, indicating that the local search ability of the ESGA is enhanced and its convergence accuracy approaches the optimal solution.
The convergence curves of the ESGA and the comparison algorithms on the CEC2022 test set are given in Figure 5. The ESGA shows a strong convergence speed on the unimodal function F1, achieving the best convergence accuracy. On the basic functions, the ESGA demonstrates excellent search capability, achieving better fitness values with fewer iterations. On the hybrid and composite functions, the ESGA achieves faster convergence and excellent solution quality on most functions. Overall, the ESGA effectively avoids stagnation at local optima and premature convergence, and shows better convergence speed and accuracy, outperforming the original SGA and the other comparison algorithms. These results further confirm the effectiveness of the proposed algorithm.
Additionally, we performed a boxplot analysis to investigate the distribution characteristics of solutions obtained using the ESGA compared to other competing algorithms on the challenging CEC2022 benchmark suite (as depicted in Figure 6). This analysis clearly demonstrates that our proposed method surpasses its competitors across most functions in terms of performance.
Table 6 presents the results of the Wilcoxon rank sum test on the 12 CEC2022 test functions. Two independent samples (ESGA vs. SGA, MRFO, DBO, EO, RIME, GLS-MPA, EOSMA, AFDB-ARO, and LSHADE) are compared to determine whether the proposed algorithm is statistically different from the others. In the table, the symbol "+" indicates that the proposed algorithm outperforms the compared algorithm, the symbol "−" indicates a worse performance, and the symbol "=" indicates that both perform similarly. Table 6 shows that the proposed ESGA obtains mostly "+" results: 24/0/0, 23/1/0, 19/5/0, 19/5/0, 20/4/0, 18/5/1, 20/3/1, 19/3/2, and 21/2/1. Specifically, the ESGA dominates across the board when compared with the highly cited basic algorithms and is not weaker on any function; when compared with the improved algorithms, the ESGA is weaker only on individual functions. This indicates that the ESGA exhibits a statistically significant performance advantage over the comparison algorithms on most of the benchmark functions.
Table 7 summarizes the Friedman test results of the ESGA and comparison algorithms on the CEC2022 test suite, with the rankings shown in Figure 7. The significance level of the Friedman test is 0.05. According to Table 7, the p-values for both 10D and 20D are less than 0.05, indicating significant differences between these comparison algorithms and the ESGA. The ESGA achieves Friedman scores of 1.333 and 2.167 for 10D and 20D, respectively, compared to the original SGA’s scores of 9.250 and 9.250, respectively, which underscores the large performance gap between the ESGA and the basic SGA. In addition, as shown in Figure 7, the ESGA has the least fluctuation in the Friedman rankings, suggesting that it is insensitive to dimensionality changes and therefore has better scalability.
Based on the Friedman rankings obtained by the ESGA and the comparison algorithms, the Nemenyi test was used to quantify the performance differences between these algorithms, as shown in Figure 8. In the Nemenyi test, the critical difference value (CDV) is first calculated from the number of algorithms and the number of functions involved in the comparison. The difference between the average rankings of two algorithms is then compared with the CDV: if the ranking difference is greater than the CDV, the two algorithms differ significantly; otherwise, they do not. For 10D, the ESGA shows no significant difference from EOSMA, GLS-MPA, or EO and significantly outperforms LSHADE, RIME, AFDB-ARO, DBO, MRFO, and the SGA. For 20D, the performance of the ESGA is similar to that of EOSMA, RIME, EO, AFDB-ARO, and GLS-MPA, and significantly better than that of LSHADE, DBO, MRFO, and the SGA. In conclusion, the proposed ESGA has a clear advantage over these meta-heuristic algorithms and is a promising improved algorithm.
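The CDV follows the standard Nemenyi formula $CD = q_{\alpha}\sqrt{k(k+1)/(6n)}$; a small sketch for this paper's setting (10 algorithms, 12 functions), with the $q_{0.05}$ value taken from standard tables:

```python
import math

def nemenyi_cd(k, n, q_alpha):
    # CD = q_alpha * sqrt(k(k+1) / (6n)); two algorithms differ significantly
    # when their average Friedman ranks differ by more than CD
    return q_alpha * math.sqrt(k * (k + 1) / (6 * n))

# 10 algorithms over the 12 CEC2022 functions; q_0.05 for k = 10
print(nemenyi_cd(k=10, n=12, q_alpha=3.164))   # ~3.91
```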

5.3. Comparison Test Using Robot Path Planning

In this subsection, we evaluate the performance of the ESGA on the robot path planning problem and compare it with the SGA and with GLS-MPA and EOSMA, the top-ranked improved algorithms from Section 5.2. For fairness, we set the population size to 30 and the maximum number of function evaluations to 3000. Each algorithm is run 30 times, and the best value (Best), average (Avg), and standard deviation (Std) are calculated for each algorithm.
The optimal paths planned by the ESGA and the comparison algorithms on a 15 × 15 raster map are given in Figure 9. Green represents the starting point and red the ending point; black cells indicate obstacles and white cells are passable. All four algorithms plan a feasible path, but the ESGA gives a markedly smoother solution with a shorter path length. Table 8 shows that the ESGA proposed in this paper is the best among all the algorithms in the two metrics of optimal path length and average path length, while the SGA attains the smallest standard deviation but only average performance. The average length convergence curves of the four algorithms are plotted in Figure 10a, which shows that the ESGA has better convergence accuracy and a faster convergence speed. The boxplots obtained from the solutions of the four algorithms are shown in Figure 10b; the distribution of the ESGA's solutions is denser and lower. This is because the introduction of multiple strategies makes the search more comprehensive, which greatly improves the search capability of the ESGA and enables it to plan routes with lower costs.
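The paper does not spell out its path-planning objective; as a hedged illustration only, a common raster-grid formulation sums Euclidean segment lengths over waypoints and penalizes obstacle cells. The row-by-row waypoint encoding below is a hypothetical choice, not necessarily the authors':

```python
import numpy as np

def path_length(cols, grid, penalty=1e3):
    # Fitness for a path encoded as one column index per grid row:
    # sum of Euclidean segment lengths plus a heavy penalty per obstacle hit.
    rows = np.arange(len(cols))
    c = np.clip(np.round(cols).astype(int), 0, grid.shape[1] - 1)
    length = np.sum(np.hypot(np.diff(rows), np.diff(c)))
    length += penalty * np.sum(grid[rows, c])
    return float(length)

grid = np.zeros((15, 15))
grid[5:10, 7] = 1                                  # toy wall on a 15 x 15 map
print(path_length(np.linspace(0, 14, 15), grid))   # diagonal path, hits the wall once
```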

6. Conclusions

Building upon multiple improvement strategies, this paper introduces the enhanced snow geese algorithm (ESGA). The performance of the ESGA was evaluated using two mathematical benchmark suites, the CEC2017 and CEC2022 test functions, which analyze the algorithm's capabilities in global search and local contraction. The results demonstrate that the ESGA performs strongly in terms of convergence speed, solution accuracy, and stability, outperforming the other competitive algorithms. To further verify the practical application value of the ESGA, it was applied to the robot path planning problem, and the optimization results highlight the superiority of the ESGA over the other optimization algorithms.
However, the research on improving the ESGA in this paper is still at a relatively early stage. In line with the "no free lunch" theorem, the ESGA does not achieve the best results on all problems. At the same time, some strategies perform excellently when applied alone to certain functions, yet the optimization accuracy of the improved algorithm can decrease after multi-strategy fusion. How to select improvement strategies effectively, increase the mutual gain of multiple strategies within one algorithm, reduce conflicts between strategies, and use parallel and distributed computing to optimize the computational efficiency of the algorithm still needs further testing and exploration. These problems will be addressed in follow-up research.
For future work, the improvement strategies of the ESGA can serve as a reference for enhancing other algorithms, and the ESGA can be extended to a multi-objective algorithm for solving various multi-objective optimization problems, such as multi-UAV path planning. Additionally, the ESGA can be applied to optimization problems in different fields and various real-world engineering applications, such as neural network hyper-parameter optimization.

Author Contributions

Conceptualization, B.Z. and Y.F.; Data curation, B.Z.; Formal analysis, B.Z.; Funding acquisition, Y.F.; Investigation, B.Z. and Y.F.; Methodology, B.Z. and Y.F.; Project administration, B.Z. and Y.F.; Resources, B.Z.; Software, B.Z. and Y.F.; Supervision, Y.F.; Validation, B.Z.; Visualization, B.Z. and T.C.; Writing—original draft, B.Z.; Writing—review and editing, B.Z., Y.F. and T.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Key Project of Ningbo Polytechnic No. NZ23Z01, Ningbo Natural Science Foundation No. 2023J242, and Zhejiang Province Teaching Research Project Achievement No. 522500402209.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data is provided within the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Slowik, A.; Kwasnicka, H. Nature Inspired Methods and Their Industry Applications-Swarm Intelligence Algorithms. IEEE Trans. Ind. Inform. 2018, 14, 1004–1015. [Google Scholar] [CrossRef]
  2. Tang, A.D.; Han, T.; Zhou, H.; Xie, L. An Improved Equilibrium Optimizer with Application in Unmanned Aerial Vehicle Path Planning. Sensors 2021, 21, 1814. [Google Scholar] [CrossRef]
  3. Hu, G.; Huang, F.Y.; Shu, B.; Wei, G. MAHACO: Multi-Algorithm Hybrid Ant Colony Optimizer for 3D Path Planning of a Group of UAVs. Inf. Sci. 2025, 694, 121714. [Google Scholar] [CrossRef]
  4. Seyyedabbasi, A.; Hu, G.; Shehadeh, H.A.; Wang, X.P.; Canatalay, P.J. V-Shaped and S-Shaped Binary Artificial Protozoa Optimizer (APO) Algorithm for Wrapper Feature Selection on Biological Data. Clust. Comput. J. Netw. Softw. Tools Appl. 2025, 28, 163. [Google Scholar] [CrossRef]
  5. Jia, H.; Sun, K.; Li, Y.; Cao, N. Improved Marine Predators Algorithm for Feature Selection and SVM Optimization. KSII Trans. Internet Inf. Syst. 2022, 16, 1128–1145. [Google Scholar] [CrossRef]
  6. Abualigah, L.; Habash, M.; Hanandeh, E.S.; Hussein, A.M.A.; Al Shinwan, M.; Zitar, R.A.; Jia, H. Improved Reptile Search Algorithm by Salp Swarm Algorithm for Medical Image Segmentation. J. Bionic Eng. 2023, 20, 1766–1790. [Google Scholar] [CrossRef]
  7. Hu, G.; Zheng, Y.X.; Houssein, E.H.; Wei, G. GSRPSO: A Multi-Strategy Integrated Particle Swarm Algorithm for Multi-Threshold Segmentation of Real Cervical Cancer Images. Swarm Evol. Comput. 2024, 91, 101766. [Google Scholar] [CrossRef]
  8. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  9. Holland, J.H. Genetic Algorithms. Sci. Am. 1992, 267, 66–72. [Google Scholar] [CrossRef]
  10. Huang, W.; Xu, J. Particle Swarm Optimization. In Springer Tracts in Civil Engineering; Springer: Berlin/Heidelberg, Germany, 2023. [Google Scholar]
  11. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  12. Xie, L.; Han, T.; Zhou, H.; Zhang, Z.-R.; Han, B.; Tang, A. Tuna Swarm Optimization: A Novel Swarm-Based Metaheuristic Algorithm for Global Optimization. Comput. Intell. Neurosci. 2021, 2021, 9210050. [Google Scholar] [CrossRef] [PubMed]
  13. Fu, Y.F.; Liu, D.; Chen, J.D.; He, L. Secretary Bird Optimization Algorithm: A New Metaheuristic for Solving Global Optimization Problems. Artif. Intell. Rev. 2024, 57, 123. [Google Scholar] [CrossRef]
  14. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Spider Wasp Optimizer: A Novel Meta-Heuristic Optimization Algorithm. Artif. Intell. Rev. 2023, 56, 11675–11738. [Google Scholar] [CrossRef]
  15. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Dwarf Mongoose Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2022, 391, 114570. [Google Scholar] [CrossRef]
  16. Aribowo, W. A Novel Improved Sea-Horse Optimizer for Tuning Parameter Power System Stabilizer. J. Robot. Control 2023, 4, 12–22. [Google Scholar] [CrossRef]
  17. Jia, H.; Rao, H.; Wen, C.; Mirjalili, S. Crayfish Optimization Algorithm. Artif. Intell. Rev. 2023, 56, 1919–1979. [Google Scholar] [CrossRef]
  18. Yuan, C.; Zhao, D.; Heidari, A.A.; Liu, L.; Chen, Y.; Chen, H. Polar Lights Optimizer: Algorithm and Applications in Image Segmentation and Feature Selection. Neurocomputing 2024, 607, 128427. [Google Scholar] [CrossRef]
  19. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry Gas Solubility Optimization: A Novel Physics-Based Algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  20. Tian, A.Q.; Liu, F.F.; Lv, H.X. Snow Geese Algorithm: A Novel Migration-Inspired Meta-Heuristic Algorithm for Constrained Engineering Optimization Problems. Appl. Math. Model. 2024, 126, 327–347. [Google Scholar] [CrossRef]
  21. Wu, J.; Yang, Z.-J.; Du, Z.-G.; Tian, A.-Q. Surrogate-Assisted Multi-Objective Snow Goose Algorithm with Gaussian Process for Cold Chain Logistics. IEEE Access 2025, 13, 67350–67365. [Google Scholar] [CrossRef]
  22. Chandrashekhar, A.; Mallikarjuna, B.; Thiyagesan, M.; Umasankar, L.; Ezhil, G.; Devi, R. Dual-Stage Interleaved On-Board Charger for Electric Vehicle Using Snow Geese Algorithm. In Proceedings of the 2025 4th International Conference on Sentiment Analysis and Deep Learning (ICSADL), Birendranagar, Nepal, 18–20 February 2025; pp. 628–633. [Google Scholar]
  23. Bian, H.; Li, C.; Liu, Y.; Tong, Y.; Bing, S.; Chen, J.; Ren, Q.; Zhang, Z. Improved Snow Geese Algorithm for Engineering Applications and Clustering Optimization. Sci. Rep. 2025, 15, 4506. [Google Scholar] [CrossRef] [PubMed]
  24. Zhao, W.; Zhang, Z.; Wang, L. Manta Ray Foraging Optimization: An Effective Bio-Inspired Optimizer for Engineering Applications. Eng. Appl. Artif. Intell. 2020, 87, 103300. [Google Scholar] [CrossRef]
  25. Xue, J.; Shen, B. Dung Beetle Optimizer: A New Meta-Heuristic Algorithm for Global Optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  26. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium Optimizer: A Novel Optimization Algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  27. Su, H.; Zhao, D.; Heidari, A.A.; Liu, L.; Zhang, X.; Mafarja, M.; Chen, H. RIME: A Physics-Based Optimization. Neurocomputing 2023, 532, 183–214. [Google Scholar] [CrossRef]
  28. Jia, H.; Lu, C. Guided Learning Strategy: A Novel Update Mechanism for Metaheuristic Algorithms Design and Improvement. Knowl.-Based Syst. 2024, 286, 111402. [Google Scholar] [CrossRef]
  29. Yin, S.; Luo, Q.; Zhou, Y. EOSMA: An Equilibrium Optimizer Slime Mould Algorithm for Engineering Design Problems. Arab. J. Sci. Eng. 2022, 47, 10115–10146. [Google Scholar] [CrossRef]
  30. Ozkaya, B.; Duman, S.; Kahraman, H.T.; Guvenc, U. Optimal Solution of the Combined Heat and Power Economic Dispatch Problem by Adaptive Fitness-Distance Balance Based Artificial Rabbits Optimization Algorithm. Expert Syst. Appl. 2024, 238, 122272. [Google Scholar] [CrossRef]
  31. Tanabe, R.; Fukunaga, A.S. Improving the Search Performance of SHADE Using Linear Population Size Reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation, CEC 2014, Beijing, China, 6–11 July 2014. [Google Scholar]
Figure 1. The values of the original θ and the new θ.
Figure 2. The flowchart of the ESGA.
Figure 3. The Friedman ranking of SGA variants with different strategies on the CEC2017 test suite.
Figure 4. The Wilcoxon rank sum test results of SGA variants with different strategies on the CEC2017 test suite.
Figure 5. The convergence curves of the ESGA and competitors on the CEC2022 test suite.
Figure 6. The boxplots of the ESGA and competitors on the CEC2022 test suite.
Figure 7. The Friedman ranking of the ESGA and competitors on the CEC2022 test suite.
Figure 8. Visualization of the Nemenyi test results of the ESGA and competitors on the CEC2022 test suite.
Figure 9. Path diagrams for the ESGA and competitors.
Figure 10. The convergence curves and boxplots of the ESGA and competitors on robot path planning.
Table 1. Parameter settings of the ESGA and other competing algorithms.

| Algorithm | Parameter Settings |
|---|---|
| ESGA | No parameters |
| SGA | No parameters |
| MRFO | S = 2 |
| DBO | L = 0.8, P = 0.11, arc = 1.5, s = 0.5, Nm = 4, M = 5 |
| EO | a1 = 2, a2 = 1, GP = 0.5 |
| RIME | W = 5 |
| GLS-MPA | F = 0.2, P = 0.5, A = 30, Cm = 2500 |
| EOSMA | GP = 0.5, z = 0.6, q = 0.2 |
| AFDB-ARO | F = 0.2, P = 0.5, A = 30, Cm = 2500 |
| LSHADE | F = 0.5, Cr = 0.5, p = 0.11, Nmin = 4 |
Table 2. Descriptions of SGA variants with different strategies.

| Strategy | SGA | SGA-AS | SGA-DG | SGA-DS | SGA-ASDG | SGA-ASDS | SGA-DGDS | ESGA |
|---|---|---|---|---|---|---|---|---|
| ASS | N | Y | N | N | Y | Y | N | Y |
| DGS | N | N | Y | N | Y | N | Y | Y |
| DSS | N | N | N | Y | N | Y | Y | Y |
Table 3. Friedman test results of SGA variants with different strategies on the CEC2017 test suite.

| Dimension | SGA | SGA-AS | SGA-DG | SGA-DS | SGA-ASDG | SGA-ASDS | SGA-DGDS | ESGA | p-Value |
|---|---|---|---|---|---|---|---|---|---|
| 10D | 7.897 | 6.724 | 4.000 | 5.517 | 2.793 | 5.103 | 2.483 | 1.483 | 2.35×10^-32 |
| 30D | 7.931 | 6.724 | 4.103 | 5.690 | 3.069 | 4.862 | 2.483 | 1.138 | 5.20×10^-34 |
| 50D | 7.828 | 6.552 | 4.517 | 5.828 | 3.034 | 4.828 | 2.276 | 1.138 | 1.03×10^-33 |
| 100D | 7.793 | 6.966 | 4.552 | 5.552 | 3.034 | 4.483 | 2.621 | 1.000 | 3.93×10^-34 |
| Average ranking | 7.862 | 6.741 | 4.293 | 5.647 | 2.983 | 4.819 | 2.466 | 1.190 | |
Table 4. Qualitative results of the ESGA and competitors on the CEC2022 test suite (10D).

| Function | Indicator | ESGA | SGA | MRFO | DBO | EO | RIME | GLS-MPA | EOSMA | AFDB-ARO | LSHADE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Min | 3.000×10^2 | 3.070×10^2 | 5.305×10^2 | 8.887×10^2 | 5.782×10^2 | 3.451×10^2 | 3.200×10^2 | 4.084×10^2 | 1.871×10^3 | 3.082×10^2 |
| | Avg | 3.000×10^2 | 3.392×10^3 | 1.386×10^3 | 3.389×10^3 | 1.911×10^3 | 5.214×10^2 | 6.268×10^2 | 1.006×10^3 | 5.840×10^3 | 3.375×10^2 |
| | Std | 7.029×10^-4 | 2.138×10^3 | 6.118×10^2 | 1.749×10^3 | 1.204×10^3 | 1.705×10^2 | 2.678×10^2 | 3.938×10^2 | 1.664×10^3 | 2.625×10^1 |
| F2 | Min | 4.000×10^2 | 4.071×10^2 | 4.007×10^2 | 4.006×10^2 | 4.049×10^2 | 4.006×10^2 | 4.005×10^2 | 4.012×10^2 | 4.071×10^2 | 4.055×10^2 |
| | Avg | 4.063×10^2 | 4.776×10^2 | 4.167×10^2 | 4.192×10^2 | 4.106×10^2 | 4.179×10^2 | 4.105×10^2 | 4.091×10^2 | 4.344×10^2 | 4.075×10^2 |
| | Std | 3.074 | 1.074×10^2 | 2.188×10^1 | 2.443×10^1 | 5.795 | 2.443×10^1 | 1.498×10^1 | 7.239 | 2.319×10^1 | 1.850 |
| F3 | Min | 6.002×10^2 | 6.052×10^2 | 6.009×10^2 | 6.000×10^2 | 6.005×10^2 | 6.003×10^2 | 6.004×10^2 | 6.006×10^2 | 6.001×10^2 | 6.012×10^2 |
| | Avg | 6.017×10^2 | 6.223×10^2 | 6.030×10^2 | 6.022×10^2 | 6.014×10^2 | 6.018×10^2 | 6.047×10^2 | 6.015×10^2 | 6.018×10^2 | 6.030×10^2 |
| | Std | 1.559 | 1.118×10^1 | 1.133 | 3.149 | 6.195×10^-1 | 1.180 | 3.299 | 5.912×10^-1 | 1.643 | 1.012 |
| F4 | Min | 8.040×10^2 | 8.087×10^2 | 8.119×10^2 | 8.110×10^2 | 8.071×10^2 | 8.112×10^2 | 8.096×10^2 | 8.101×10^2 | 8.173×10^2 | 8.164×10^2 |
| | Avg | 8.133×10^2 | 8.286×10^2 | 8.204×10^2 | 8.325×10^2 | 8.151×10^2 | 8.211×10^2 | 8.176×10^2 | 8.191×10^2 | 8.298×10^2 | 8.392×10^2 |
| | Std | 4.648 | 1.318×10^1 | 7.453 | 1.266×10^1 | 4.274 | 7.373 | 4.921 | 5.345 | 6.382 | 8.848 |
| F5 | Min | 9.000×10^2 | 9.321×10^2 | 9.004×10^2 | 9.001×10^2 | 9.002×10^2 | 9.004×10^2 | 9.031×10^2 | 9.003×10^2 | 9.053×10^2 | 9.017×10^2 |
| | Avg | 9.014×10^2 | 1.150×10^3 | 9.192×10^2 | 9.411×10^2 | 9.026×10^2 | 9.065×10^2 | 9.289×10^2 | 9.017×10^2 | 9.332×10^2 | 9.052×10^2 |
| | Std | 1.963 | 1.836×10^2 | 3.745×10^1 | 7.505×10^1 | 2.545 | 7.311 | 1.850×10^1 | 1.296 | 1.561×10^1 | 3.665 |
| F6 | Min | 1.823×10^3 | 1.918×10^3 | 3.286×10^3 | 1.854×10^3 | 2.467×10^3 | 1.961×10^3 | 1.927×10^3 | 4.052×10^3 | 1.938×10^3 | 1.967×10^3 |
| | Avg | 2.310×10^3 | 4.313×10^3 | 1.625×10^4 | 4.727×10^3 | 5.655×10^3 | 6.930×10^3 | 4.112×10^3 | 1.415×10^4 | 2.723×10^3 | 2.780×10^3 |
| | Std | 1.523×10^3 | 2.136×10^3 | 1.116×10^4 | 2.430×10^3 | 3.213×10^3 | 5.722×10^3 | 1.823×10^3 | 1.026×10^4 | 7.638×10^2 | 8.892×10^2 |
| F7 | Min | 2.005×10^3 | 2.022×10^3 | 2.019×10^3 | 2.020×10^3 | 2.019×10^3 | 2.020×10^3 | 2.015×10^3 | 2.014×10^3 | 2.013×10^3 | 2.033×10^3 |
| | Avg | 2.024×10^3 | 2.053×10^3 | 2.035×10^3 | 2.025×10^3 | 2.028×10^3 | 2.025×10^3 | 2.024×10^3 | 2.029×10^3 | 2.021×10^3 | 2.042×10^3 |
| | Std | 8.270 | 2.089×10^1 | 7.063 | 6.855 | 4.085 | 2.417 | 4.570 | 5.459 | 2.260 | 4.909 |
| F8 | Min | 2.201×10^3 | 2.223×10^3 | 2.213×10^3 | 2.217×10^3 | 2.211×10^3 | 2.208×10^3 | 2.208×10^3 | 2.213×10^3 | 2.208×10^3 | 2.219×10^3 |
| | Avg | 2.221×10^3 | 2.240×10^3 | 2.229×10^3 | 2.226×10^3 | 2.227×10^3 | 2.223×10^3 | 2.222×10^3 | 2.227×10^3 | 2.221×10^3 | 2.229×10^3 |
| | Std | 7.912 | 3.216×10^1 | 3.878 | 6.005 | 4.715 | 4.884 | 4.170 | 4.311 | 3.269 | 3.454 |
| F9 | Min | 2.529×10^3 | 2.531×10^3 | 2.535×10^3 | 2.529×10^3 | 2.529×10^3 | 2.529×10^3 | 2.529×10^3 | 2.530×10^3 | 2.531×10^3 | 2.529×10^3 |
| | Avg | 2.529×10^3 | 2.609×10^3 | 2.543×10^3 | 2.536×10^3 | 2.533×10^3 | 2.530×10^3 | 2.530×10^3 | 2.532×10^3 | 2.537×10^3 | 2.530×10^3 |
| | Std | 4.266×10^-6 | 4.869×10^1 | 1.075×10^1 | 2.070×10^1 | 3.982 | 7.675×10^-1 | 2.613×10^-1 | 1.723 | 1.206×10^1 | 3.058×10^-1 |
| F10 | Min | 2.500×10^3 | 2.500×10^3 | 2.500×10^3 | 2.500×10^3 | 2.500×10^3 | 2.500×10^3 | 2.500×10^3 | 2.500×10^3 | 2.501×10^3 | 2.500×10^3 |
| | Avg | 2.500×10^3 | 2.519×10^3 | 2.505×10^3 | 2.505×10^3 | 2.504×10^3 | 2.516×10^3 | 2.501×10^3 | 2.500×10^3 | 2.501×10^3 | 2.501×10^3 |
| | Std | 9.063×10^-2 | 4.564×10^1 | 2.156×10^1 | 2.046×10^1 | 2.070×10^1 | 4.055×10^1 | 1.060×10^-1 | 9.232×10^-2 | 3.713×10^-1 | 1.455×10^-1 |
| F11 | Min | 2.600×10^3 | 2.747×10^3 | 2.639×10^3 | 2.615×10^3 | 2.685×10^3 | 2.671×10^3 | 2.719×10^3 | 2.718×10^3 | 2.756×10^3 | 2.754×10^3 |
| | Avg | 2.657×10^3 | 3.111×10^3 | 2.780×10^3 | 2.765×10^3 | 2.915×10^3 | 3.043×10^3 | 2.771×10^3 | 2.863×10^3 | 2.816×10^3 | 2.902×10^3 |
| | Std | 1.113×10^2 | 2.998×10^2 | 1.394×10^2 | 1.296×10^2 | 1.597×10^2 | 1.728×10^2 | 1.916×10^1 | 1.393×10^2 | 4.159×10^1 | 1.422×10^2 |
| F12 | Min | 2.860×10^3 | 2.864×10^3 | 2.866×10^3 | 2.862×10^3 | 2.862×10^3 | 2.862×10^3 | 2.861×10^3 | 2.863×10^3 | 2.868×10^3 | 2.861×10^3 |
| | Avg | 2.863×10^3 | 2.878×10^3 | 2.873×10^3 | 2.865×10^3 | 2.864×10^3 | 2.865×10^3 | 2.864×10^3 | 2.865×10^3 | 2.874×10^3 | 2.864×10^3 |
| | Std | 1.405 | 1.806×10^1 | 8.931 | 1.999 | 1.150 | 2.214 | 1.319 | 5.491×10^-1 | 2.982 | 9.318×10^-1 |
Table 5. Qualitative results of the ESGA and competitors on the CEC2022 test suite (20D).

| Function | Indicator | ESGA | SGA | MRFO | DBO | EO | RIME | GLS-MPA | EOSMA | AFDB-ARO | LSHADE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Min | 3.000×10^2 | 8.507×10^3 | 8.784×10^3 | 2.124×10^4 | 1.236×10^4 | 4.966×10^3 | 3.331×10^3 | 5.645×10^3 | 1.956×10^4 | 9.058×10^3 |
| | Avg | 3.018×10^2 | 2.848×10^4 | 1.980×10^4 | 3.545×10^4 | 2.471×10^4 | 1.426×10^4 | 9.735×10^3 | 1.433×10^4 | 2.959×10^4 | 1.615×10^4 |
| | Std | 3.173 | 2.217×10^4 | 7.099×10^3 | 8.568×10^3 | 8.166×10^3 | 5.125×10^3 | 3.136×10^3 | 4.159×10^3 | 5.669×10^3 | 5.017×10^3 |
| F2 | Min | 4.339×10^2 | 4.827×10^2 | 4.688×10^2 | 4.503×10^2 | 4.639×10^2 | 4.472×10^2 | 4.857×10^2 | 4.702×10^2 | 4.779×10^2 | 4.858×10^2 |
| | Avg | 4.591×10^2 | 6.565×10^2 | 5.324×10^2 | 5.256×10^2 | 5.092×10^2 | 4.847×10^2 | 5.525×10^2 | 5.175×10^2 | 5.681×10^2 | 5.090×10^2 |
| | Std | 2.314×10^1 | 1.281×10^2 | 4.543×10^1 | 6.388×10^1 | 2.756×10^1 | 3.958×10^1 | 5.234×10^1 | 2.601×10^1 | 5.195×10^1 | 1.481×10^1 |
| F3 | Min | 6.043×10^2 | 6.283×10^2 | 6.110×10^2 | 6.111×10^2 | 6.044×10^2 | 6.045×10^2 | 6.120×10^2 | 6.053×10^2 | 6.014×10^2 | 6.157×10^2 |
| | Avg | 6.097×10^2 | 6.477×10^2 | 6.201×10^2 | 6.231×10^2 | 6.088×10^2 | 6.095×10^2 | 6.221×10^2 | 6.092×10^2 | 6.092×10^2 | 6.216×10^2 |
| | Std | 3.426 | 1.186×10^1 | 6.920 | 8.136 | 1.845 | 3.255 | 5.900 | 2.558 | 4.739 | 2.532 |
| F4 | Min | 8.189×10^2 | 8.445×10^2 | 8.650×10^2 | 8.743×10^2 | 8.497×10^2 | 8.453×10^2 | 8.379×10^2 | 8.479×10^2 | 8.715×10^2 | 9.178×10^2 |
| | Avg | 8.415×10^2 | 9.023×10^2 | 8.961×10^2 | 9.244×10^2 | 8.801×10^2 | 8.759×10^2 | 8.827×10^2 | 8.844×10^2 | 9.027×10^2 | 9.402×10^2 |
| | Std | 1.471×10^1 | 2.520×10^1 | 2.191×10^1 | 2.625×10^1 | 1.227×10^1 | 1.900×10^1 | 2.326×10^1 | 1.394×10^1 | 1.714×10^1 | 9.359 |
| F5 | Min | 9.276×10^2 | 1.339×10^3 | 1.010×10^3 | 1.517×10^3 | 9.275×10^2 | 9.626×10^2 | 1.104×10^3 | 9.278×10^2 | 1.184×10^3 | 1.182×10^3 |
| | Avg | 1.009×10^3 | 2.314×10^3 | 1.722×10^3 | 2.340×10^3 | 1.060×10^3 | 1.466×10^3 | 1.431×10^3 | 1.009×10^3 | 1.559×10^3 | 1.409×10^3 |
| | Std | 9.871×10^1 | 5.110×10^2 | 6.378×10^2 | 6.309×10^2 | 9.279×10^1 | 6.769×10^2 | 2.206×10^2 | 4.949×10^1 | 1.928×10^2 | 1.356×10^2 |
| F6 | Min | 2.030×10^3 | 4.193×10^3 | 1.590×10^5 | 2.311×10^3 | 2.177×10^4 | 1.950×10^4 | 3.438×10^3 | 6.569×10^4 | 2.128×10^4 | 1.145×10^7 |
| | Avg | 8.505×10^3 | 1.870×10^7 | 7.970×10^5 | 1.136×10^6 | 3.418×10^5 | 8.791×10^5 | 9.867×10^5 | 5.427×10^5 | 8.254×10^4 | 1.998×10^7 |
| | Std | 6.488×10^3 | 9.124×10^7 | 4.826×10^5 | 5.347×10^6 | 4.088×10^5 | 8.182×10^5 | 1.484×10^6 | 5.766×10^5 | 5.492×10^4 | 7.041×10^6 |
| F7 | Min | 2.042×10^3 | 2.071×10^3 | 2.066×10^3 | 2.043×10^3 | 2.053×10^3 | 2.046×10^3 | 2.037×10^3 | 2.062×10^3 | 2.042×10^3 | 2.115×10^3 |
| | Avg | 2.075×10^3 | 2.155×10^3 | 2.115×10^3 | 2.107×10^3 | 2.092×10^3 | 2.091×10^3 | 2.081×10^3 | 2.098×10^3 | 2.088×10^3 | 2.163×10^3 |
| | Std | 1.853×10^1 | 4.455×10^1 | 3.639×10^1 | 3.628×10^1 | 2.231×10^1 | 2.633×10^1 | 2.515×10^1 | 2.221×10^1 | 3.155×10^1 | 2.599×10^1 |
| F8 | Min | 2.222×10^3 | 2.231×10^3 | 2.232×10^3 | 2.228×10^3 | 2.229×10^3 | 2.227×10^3 | 2.226×10^3 | 2.233×10^3 | 2.224×10^3 | 2.244×10^3 |
| | Avg | 2.230×10^3 | 2.326×10^3 | 2.250×10^3 | 2.273×10^3 | 2.241×10^3 | 2.240×10^3 | 2.231×10^3 | 2.240×10^3 | 2.235×10^3 | 2.264×10^3 |
| | Std | 6.356 | 1.016×10^2 | 2.997×10^1 | 5.373×10^1 | 2.324×10^1 | 2.338×10^1 | 4.586 | 3.342 | 7.858 | 1.002×10^1 |
| F9 | Min | 2.481×10^3 | 2.522×10^3 | 2.500×10^3 | 2.481×10^3 | 2.486×10^3 | 2.484×10^3 | 2.487×10^3 | 2.488×10^3 | 2.484×10^3 | 2.491×10^3 |
| | Avg | 2.482×10^3 | 2.611×10^3 | 2.514×10^3 | 2.512×10^3 | 2.501×10^3 | 2.496×10^3 | 2.513×10^3 | 2.500×10^3 | 2.489×10^3 | 2.501×10^3 |
| | Std | 3.217 | 5.949×10^1 | 1.131×10^1 | 2.430×10^1 | 1.250×10^1 | 1.672×10^1 | 1.944×10^1 | 6.073 | 4.283 | 5.025 |
| F10 | Min | 2.500×10^3 | 2.501×10^3 | 2.501×10^3 | 2.501×10^3 | 2.501×10^3 | 2.501×10^3 | 2.501×10^3 | 2.501×10^3 | 2.436×10^3 | 2.501×10^3 |
| | Avg | 2.519×10^3 | 4.043×10^3 | 2.616×10^3 | 2.798×10^3 | 2.927×10^3 | 2.883×10^3 | 2.510×10^3 | 2.508×10^3 | 2.514×10^3 | 2.518×10^3 |
| | Std | 5.475×10^1 | 1.588×10^3 | 4.473×10^2 | 7.239×10^2 | 1.071×10^3 | 5.248×10^2 | 3.359×10^1 | 3.610×10^1 | 3.864×10^1 | 5.612×10^1 |
| F11 | Min | 2.902×10^3 | 3.740×10^3 | 3.279×10^3 | 3.447×10^3 | 3.474×10^3 | 4.036×10^3 | 3.847×10^3 | 3.740×10^3 | 2.949×10^3 | 5.395×10^3 |
| | Avg | 3.010×10^3 | 5.386×10^3 | 3.587×10^3 | 4.556×10^3 | 4.226×10^3 | 4.570×10^3 | 5.130×10^3 | 4.213×10^3 | 3.299×10^3 | 7.868×10^3 |
| | Std | 1.406×10^2 | 8.346×10^2 | 1.952×10^2 | 9.359×10^2 | 3.311×10^2 | 3.443×10^2 | 6.091×10^2 | 2.878×10^2 | 1.082×10^2 | 1.017×10^3 |
| F12 | Min | 2.943×10^3 | 2.980×10^3 | 3.006×10^3 | 2.944×10^3 | 2.950×10^3 | 2.944×10^3 | 2.945×10^3 | 2.968×10^3 | 2.988×10^3 | 2.970×10^3 |
| | Avg | 2.983×10^3 | 3.102×10^3 | 3.076×10^3 | 2.969×10^3 | 2.967×10^3 | 2.971×10^3 | 2.979×10^3 | 2.996×10^3 | 3.056×10^3 | 2.981×10^3 |
| | Std | 3.126×10^1 | 1.109×10^2 | 3.611×10^1 | 2.934×10^1 | 1.156×10^1 | 1.660×10^1 | 2.264×10^1 | 1.516×10^1 | 4.014×10^1 | 7.135 |
Table 6. Wilcoxon rank sum test results of the ESGA and competitors on the CEC2022 test suite.

| ESGA vs. (+/=/−) | SGA | MRFO | DBO | EO | RIME | GLS-MPA | EOSMA | AFDB-ARO | LSHADE |
|---|---|---|---|---|---|---|---|---|---|
| 10D | 12/0/0 | 11/1/0 | 9/3/0 | 9/3/0 | 10/2/0 | 9/3/0 | 11/1/0 | 10/1/1 | 11/1/0 |
| 20D | 12/0/0 | 12/0/0 | 10/2/0 | 10/2/0 | 10/2/0 | 9/2/1 | 9/2/1 | 9/2/1 | 10/1/1 |
| Total | 24/0/0 | 23/1/0 | 19/5/0 | 19/5/0 | 20/4/0 | 18/5/1 | 20/3/1 | 19/3/2 | 21/2/1 |
Table 7. Friedman test results of the ESGA and competitors on the CEC2022 test suite.

| Algorithm | 10D | 20D | Average Ranking |
|---|---|---|---|
| ESGA | 1.333 | 2.167 | 1.750 |
| SGA | 9.250 | 9.250 | 9.250 |
| MRFO | 7.167 | 6.667 | 6.917 |
| DBO | 6.417 | 7.500 | 6.958 |
| EO | 4.917 | 4.333 | 4.625 |
| RIME | 5.833 | 4.417 | 5.125 |
| GLS-MPA | 4.000 | 5.000 | 4.500 |
| EOSMA | 4.917 | 4.000 | 4.458 |
| AFDB-ARO | 5.917 | 4.833 | 5.375 |
| LSHADE | 5.250 | 6.833 | 6.042 |
| p-value | 1.00×10^-7 | 1.35×10^-7 | |
Table 8. Qualitative results of the ESGA and competitors on robot path planning.

| Index | ESGA | SGA | EOSMA | GLS-MPA |
|---|---|---|---|---|
| Best | 21.56 | 25.90 | 24.14 | 23.56 |
| Avg | 31.78 | 32.26 | 33.70 | 49.18 |
| Std | 15.12 | 7.65 | 15.03 | 22.59 |
