Article

An Enhanced Grasshopper Optimization Algorithm with Outpost and Multi-Population Mechanisms for Dolomite Lithology Prediction

School of Mining Engineering and Geology, Xinjiang Institute of Engineering, Urumqi 830023, China
*
Author to whom correspondence should be addressed.
Biomimetics 2025, 10(8), 494; https://doi.org/10.3390/biomimetics10080494
Submission received: 1 July 2025 / Revised: 14 July 2025 / Accepted: 19 July 2025 / Published: 25 July 2025
(This article belongs to the Special Issue Advances in Biological and Bio-Inspired Algorithms)

Abstract

The Grasshopper Optimization Algorithm (GOA) has attracted significant attention due to its simplicity and effective search capabilities. However, its performance deteriorates when dealing with high-dimensional or complex optimization tasks. To address these limitations, this study proposes an improved variant of GOA, named Outpost Multi-population GOA (OMGOA). OMGOA integrates two novel mechanisms: the Outpost mechanism, which enhances local exploitation by guiding agents towards high-potential regions, and the multi-population enhanced mechanism, which promotes global exploration and maintains population diversity through parallel evolution and controlled information exchange. Comprehensive experiments were conducted to evaluate the effectiveness of OMGOA. Ablation studies were performed to assess the individual contributions of each mechanism, while multi-dimensional testing was used to verify robustness and scalability. Comparative experiments show that OMGOA has better optimization performance compared to other similar algorithms. In addition, OMGOA was successfully applied to a real-world engineering problem—lithology prediction from petrophysical logs—where it achieved competitive classification performance.

1. Introduction

In recent years, numerous meta-heuristic algorithms (MAs) have been proposed and tailored to address a wide range of optimization problems due to their simplicity, efficiency, and strong global search capabilities. Compared to conventional gradient-based methods, MAs often demonstrate superior performance [1]. Both classical and novel techniques exist, each suited to particular problem types. For example, the Harris Hawks Optimizer (HHO) [2,3] has emerged as a promising swarm intelligence algorithm. Other well-known MAs documented in the literature include Particle Swarm Optimization (PSO) [4], Genghis Khan shark optimizer [5], Grey Wolf Optimization (GWO) [6], Ant Lion Optimization (ALO) [7], Whale Optimization Algorithm (WOA) [8], Crayfish optimization algorithm [9], Salp Swarm Algorithm (SSA) [10], and Grasshopper Optimization Algorithm (GOA) [11]. Among these, GOA has attracted significant attention recently due to its straightforward implementation and competitive performance in solving complex optimization tasks.
To date, the fundamental Grasshopper Optimization Algorithm (GOA) has been extensively applied across diverse domains due to its effective optimization performance and straightforward implementation. For instance, Aljarah et al. [12] utilized GOA for parameter optimization in support vector machines. Arora et al. [13] enhanced the original GOA by incorporating a chaotic map to better balance the exploration and exploitation phases. Ewees et al. [14] further improved GOA through an opposition-based learning strategy and validated its performance on four engineering problems. Luo et al. [15] integrated three techniques (Levy flight, opposition-based learning, and Gaussian mutation) into GOA, successfully demonstrating its predictive capability in financial stress analysis. Additionally, Mirjalili et al. [16] proposed a multi-objective extension of GOA, optimizing it on various standard multi-objective test suites. Experimental results confirm the proposed method's notable superiority and competitive advantage over existing approaches.
Saxena et al. [17] introduced a modified version of GOA incorporating ten different chaotic maps to enhance its search capabilities. Tharwat et al. [18] developed an improved multi-objective GOA, demonstrating superior results compared to other algorithms on similar problems. Barik et al. [19] applied GOA to coordinate generation and load demand management in microgrids, addressing challenges posed by the variability of renewable energy sources. Crawford et al. [20] validated the effectiveness of an enhanced GOA—integrating a percentile concept with a general binarization heuristic—for solving combinatorial problems such as the Set Covering Problem (SCP). El-Fergany et al. [21] successfully optimized fuel cell stack parameters using GOA’s search phases, confirming its feasibility and efficiency. Hazra et al. [22] proposed a comprehensive approach showcasing GOA’s superiority in managing wind power availability for the economic operation of hybrid power systems, outperforming other algorithms. Jumani et al. [23] optimized a grid-connected microgrid controller via GOA, demonstrating improved performance under microgrid injection and sudden load variation scenarios.
Mafarja et al. [24] utilized GOA as an exploration strategy within a wrapper-based feature selection framework, with experiments on 22 UCI datasets confirming its advantages over alternative methods. Taher et al. [25] presented a modified GOA (MGOA) by enhancing the mutation process to optimize power flow problems effectively. Wu et al. [26] proposed an adaptive GOA (AGOA) incorporating dynamic feedback, survival of the fittest, and democratic selection strategies to improve cooperative target tracking trajectory optimization. Finally, Tumuluru et al. [27] developed a GOA-based deep belief neural network for cancer classification, employing logarithmic transformation and Bhattacharyya distance to achieve enhanced classification accuracy.
Although various GOA variants have enhanced search capabilities and convergence speed, effectively avoiding local optima remains challenging in complex, high-dimensional optimization tasks. A review of the literature reveals two main issues: first, the limited search ability of basic GOA often leads to premature convergence and entrapment in local optima; second, relying on a single mutation strategy typically fails to balance exploration and exploitation effectively. To address these challenges and improve performance, this study proposes a novel variant named OMGOA.
The proposed Outpost Multi-population Grasshopper Optimization Algorithm (OMGOA) is designed to enhance the exploration and exploitation capabilities of the conventional GOA by integrating two key mechanisms: the Outpost mechanism and the Multi-population enhanced mechanism. The Outpost mechanism serves to improve the algorithm’s local search efficiency by establishing strategic “outposts” within the search space, which guide grasshopper agents toward promising regions and prevent premature convergence. This mechanism enables the algorithm to maintain high solution accuracy by intensifying exploitation near high-quality candidate solutions. Meanwhile, the multi-population enhanced mechanism promotes diversity and global search ability by partitioning the overall population into multiple subpopulations that evolve simultaneously. Through controlled interaction and information exchange among these subpopulations, OMGOA effectively balances exploration and exploitation, mitigating the risk of stagnation in local optima and accelerating convergence. Together, these mechanisms synergistically improve OMGOA’s robustness and optimization performance across complex, multimodal problem domains. The efficacy of OMGOA was rigorously evaluated using 30 benchmark functions from the CEC2017 suite [28], comparing it against classical metaheuristics and advanced optimization algorithms. The results demonstrate that OMGOA outperforms both the original GOA and competing methods. Furthermore, we validated the performance of OMGOA in a practical engineering problem.
This study proposes the improved OMGOA algorithm to address the performance degradation of the Grasshopper Optimization Algorithm (GOA) on high-dimensional, complex optimization problems. GOA was chosen as the base algorithm for its biological inspiration and parameter simplicity, but the original algorithm has two major shortcomings: insufficient exploitation capability and declining population diversity. To this end, OMGOA integrates two complementary mechanisms. The Outpost mechanism, modeled on military reconnaissance feedback, dynamically guides the population toward high-potential areas and significantly strengthens local exploitation; the multi-population enhanced mechanism maintains population diversity by establishing parallel subpopulations with a controlled information-exchange strategy. The synergy of these two mechanisms allows OMGOA to adaptively balance global exploration and local exploitation, performing well not only on high-dimensional benchmarks but also on practical engineering problems such as lithology prediction. Three motivations underpin the choice of GOA as the research foundation. First, GOA simulates the foraging behavior of grasshopper swarms; its biologically inspired model has few parameters and a simple structure, and it has shown better search efficiency than traditional algorithms (such as PSO) on problems of medium complexity, providing a clear framework for improvement. Second, although GOA performs well in low-dimensional spaces, its core mechanism has inherent limitations: it lacks elite guidance in the later search stages and is therefore prone to missing high-potential areas, and its single-population structure is susceptible to local optima in high-dimensional spaces. These quantifiable weaknesses provide clear targets for improvement. Third, GOA is a relatively recent algorithm (proposed in 2017), so its room for improvement is larger than that of mature algorithms; introducing the Outpost mechanism and the multi-population strategy preserves the simplicity of the original algorithm while systematically mitigating its curse of dimensionality, which has methodological value.
The innovative features of this study are as follows:
1. Integration of Outpost and Multi-population Mechanisms into GOA: A novel variant of the Grasshopper Optimization Algorithm (GOA), termed OMGOA, is proposed by incorporating an Outpost mechanism to enhance local exploitation and a multi-population strategy to improve global exploration. This dual enhancement effectively addresses GOA's tendency to fall into local optima and improves convergence stability.
2. Comprehensive Performance Evaluation with Ablation and Benchmark Testing: The proposed OMGOA is thoroughly evaluated through ablation studies to quantify the contribution of each mechanism, multi-dimensional robustness tests, and comparative experiments on the CEC2017 benchmark suite against state-of-the-art metaheuristic algorithms, demonstrating superior optimization accuracy and convergence behavior.
3. Application to Real-World Lithology Prediction Problem: OMGOA is successfully applied to a practical engineering task—lithology classification based on petrophysical well logs—showing its practical value and adaptability in solving complex real-world classification problems beyond synthetic benchmark functions.
The remainder of this paper is structured as follows. Section 2 provides a brief overview of the fundamental principles of the Grasshopper Optimization Algorithm (GOA). Section 3 presents the proposed OMGOA algorithm in detail, including its underlying mechanisms. Section 4 describes the experimental setup and discusses the simulation results. Finally, Section 5 concludes the study and outlines potential directions for future research.

2. Background

Grasshopper Optimization Algorithm (GOA)

Saremi et al. [11] introduced a novel heuristic algorithm called GOA, which simulates the aggregation and foraging behaviors observed in grasshoppers in their natural habitat. Grasshopper communities exhibit interactions among individuals, characterized by both repulsion and attraction forces, enabling them to identify optimal locations for food sources. This behavior, inspired by nature, can be mathematically expressed as:
$$X_i = S_i + G_i + A_i \tag{1}$$
As shown in Equation (1), each grasshopper's position in the algorithm is driven by three components: the social interaction between grasshoppers $S_i$, the gravitational force $G_i$, and the wind advection $A_i$. Here, $X_i$ denotes the position of the $i$-th grasshopper in the search space.
$$S_i = \sum_{\substack{j=1 \\ j \neq i}}^{N} s\left(d_{ij}\right)\,\hat{d}_{ij} \tag{2}$$
$$d_{ij} = \left|x_j - x_i\right| \tag{3}$$
$$\hat{d}_{ij} = \frac{x_j - x_i}{d_{ij}} \tag{4}$$
$$s(r) = f e^{-r/l} - e^{-r} \tag{5}$$
In this context, $d_{ij}$ denotes the spatial distance between two grasshoppers, while $\hat{d}_{ij}$ is the unit vector pointing from grasshopper $i$ to grasshopper $j$. The interaction function $s$ induces attraction when its value is positive and repulsion when negative. The strength of attraction is modulated by the parameter $f$, and the attractive length scale is governed by $l$. Note that $s$ exerts only a negligible force at large distances; to keep the interaction effective, the distance between grasshoppers is mapped into the comfort range [1, 4].
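To make the shape of this interaction concrete, the following minimal Python sketch evaluates Equation (5). The settings $f = 0.5$ and $l = 1.5$ are those used in common GOA reference implementations, assumed here for illustration rather than stated in this paper:

```python
import numpy as np

# Sketch of the GOA social-interaction function s(r) = f*exp(-r/l) - exp(-r) (Eq. (5)).
# f = 0.5 and l = 1.5 follow common GOA reference implementations (an assumption here).
def s(r, f=0.5, l=1.5):
    """Attraction (positive) / repulsion (negative) strength at distance r."""
    return f * np.exp(-r / l) - np.exp(-r)

# With these settings s(r) is repulsive below r ~ 2.079, attractive above it, and
# decays toward zero at large r -- which is why distances are kept within a small
# comfort range in practice.
print(np.round(s(np.array([0.5, 1.0, 2.079, 3.0, 10.0])), 4))
```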
The gravity component and wind advection experienced by grasshoppers can be illustrated as follows:
$$G_i = -g\,\hat{e}_g \tag{6}$$
$$A_i = u\,\hat{e}_w \tag{7}$$
Here, the gravitational constant $g$ and the unit vector $\hat{e}_g$, which points toward the center of the Earth, define the gravity component $G_i$. The constant $u$ is the wind drift, and $\hat{e}_w$ is a unit vector in the wind direction.
Combining the components above yields the position-update formula for the grasshoppers, given in Equation (8).
$$X_i = \sum_{\substack{j=1 \\ j \neq i}}^{N} s\left(\left|x_j - x_i\right|\right)\frac{x_j - x_i}{d_{ij}} - g\,\hat{e}_g + u\,\hat{e}_w \tag{8}$$
Ultimately, the mathematical model is formulated as follows:
$$X_i^d = \beta \left( \sum_{\substack{j=1 \\ j \neq i}}^{N} \beta\,\frac{ub_d - lb_d}{2}\, s\left(\left|x_j^d - x_i^d\right|\right)\frac{x_j - x_i}{d_{ij}} \right) + \hat{T}_d \tag{9}$$
In Equation (9), $ub_d$ and $lb_d$ are the upper and lower bounds of the $d$-th dimension. $N$ represents the total number of grasshoppers, and $\hat{T}_d$ denotes the best solution found so far in the $d$-th dimension. The parameter $\beta$ serves as a constriction factor; as the iterations progress, it reduces global exploration and enhances local precision search.
$$\beta = \beta_{max} - p\,\frac{\beta_{max} - \beta_{min}}{P} \tag{10}$$
Here, $\beta_{max}$ and $\beta_{min}$ are the maximum and minimum values of $\beta$, $p$ is the current iteration count, and $P$ is the maximum number of iterations.
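A compact sketch of one GOA iteration under Equations (9) and (10) is given below. It assumes numpy arrays, the default constriction bounds $\beta_{max} = 1$ and $\beta_{min} = 4 \times 10^{-5}$ from the original GOA code, and the distance remapping into [2, 4] used in reference implementations (this paper states a comfort range of [1, 4]); all of these are illustrative assumptions, not this paper's released code:

```python
import numpy as np

# Hedged sketch of the GOA position update (Eqs. (9)-(10)). X is an (N, D) array of
# positions, T is the best solution found so far, ub/lb are (D,) bound vectors.
def goa_step(X, T, p, P, ub, lb, beta_max=1.0, beta_min=4e-5, f=0.5, l=1.5):
    N, D = X.shape
    beta = beta_max - p * (beta_max - beta_min) / P        # Eq. (10)
    X_new = np.empty_like(X)
    for i in range(N):
        total = np.zeros(D)
        for j in range(N):
            if j == i:
                continue
            dist = np.linalg.norm(X[j] - X[i])
            d_map = 2.0 + dist % 2.0                        # remap distance (assumed [2, 4])
            s_ij = f * np.exp(-d_map / l) - np.exp(-d_map)  # Eq. (5)
            total += beta * (ub - lb) / 2.0 * s_ij * (X[j] - X[i]) / (dist + 1e-12)
        X_new[i] = np.clip(beta * total + T, lb, ub)        # Eq. (9) with boundary control
    return X_new
```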

3. Proposed OMGOA Method

3.1. Outpost Mechanism

The Outpost mechanism simulates a military reconnaissance strategy: in each iteration it dynamically selects elite individuals as “outpost points” and guides the population toward high-potential areas by estimating the gradient direction in their neighborhoods (search radius r = 0.1D), thereby enhancing local exploitation. In the initial phase, the population compares its fitness value against the value from the preceding iteration. If the current iteration yields a better fitness value, the position is updated to the new location; otherwise, the position is kept unchanged.
$$\lambda = \arg\min\left( f\left(S_{temp}\right),\ f\left(S_i\right) \right), \qquad S_i = S_\lambda \tag{11}$$
According to Equation (11), $S_\lambda$ is whichever of the candidate position $S_{temp}$ and the current position $S_i$ has the better fitness. Guided by this greedy rule, the positions of the grasshopper population are gradually updated, and the grasshoppers fly toward more favorable locations.
In the second main stage of the mechanism, individual grasshoppers fly a random step toward favorable positions; both the direction and the distance of this step are generated randomly. Mathematically, this random process can be modeled by a Gaussian distribution, whose probability density function is expressed as follows:
$$f(x) = \frac{1}{\sigma \sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}, \qquad -\infty < x < \infty \tag{12}$$
In this context, $\sigma^2$ represents the variance of the individuals, while $\mu$ is their mean. Consequently, in this framework we adopt the standard normal distribution ($\mu = 0$, $\sigma = 1$) in all scenarios. The generated variates are utilized in this study as follows:
$$Mut_i^d = X_i + X_i \odot G(\vartheta) \tag{13}$$
Here, $G(\vartheta)$ denotes a Gaussian vector drawn from the normal distribution, and $\odot$ represents element-wise (Hadamard) multiplication. In the third step, the following equation represents the tendency of an individual during the update process.
$$X_{axis} = X_{axis} \pm X_{best}(index), \qquad Y_{axis} = Y_{axis} \pm Y_{best}(index) \tag{14}$$
The update rule of Equation (14) is as follows: if the fitness value of the current iteration improves on the historical record, the optimal position and fitness value of the subgroup are updated (the plus sign); otherwise, the iteration is marked as invalid (the minus sign) and the original data are retained.
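The following Python sketch summarizes the greedy selection of Equation (11) together with the Gaussian perturbation of Equation (13) for a single individual. The function `fitness` is a hypothetical minimization objective, and the double greedy acceptance is an illustrative reading of the mechanism, not the authors' released code:

```python
import numpy as np

# Minimal sketch of the Outpost mechanism for one individual (Eqs. (11)-(13)).
def outpost_update(x_temp, x_prev, fitness, rng=np.random.default_rng()):
    # Eq. (11): keep whichever of the candidate and previous positions is fitter
    x = x_temp if fitness(x_temp) < fitness(x_prev) else x_prev
    # Eq. (13): element-wise Gaussian step with mu = 0, sigma = 1 (Eq. (12))
    mutant = x + x * rng.standard_normal(x.shape)
    # Greedy acceptance again, so the outpost never loses ground
    return mutant if fitness(mutant) < fitness(x) else x
```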

3.2. Multi-Population Enhanced Mechanism

The multi-population enhanced mechanism adopts a hierarchical architecture: the main group is divided into 3–5 subgroups with different search preferences (focusing on exploration, exploitation, or a balance of the two), information is shared through periodic migration operations (exchanging the top 10% of individuals every 50 generations), and adaptive transfer rates (η = 0.2–0.5) are used to prevent premature convergence. In the original algorithm, once a specific individual identifies the best solution, all other individuals tend to move in its direction, causing a loss of diversity. To enhance the ability to discover the global optimum, particularly for multi-modal problems, a multi-population mechanism is introduced into GOA. This mechanism involves two parameters, $\alpha$ and $\Omega$.
$$\alpha = 2\left(1 - \frac{FEs}{MaxFEs}\right) \tag{15}$$
$$\Omega = rand(LB, UB) \tag{16}$$
In this context, $FEs$ denotes the current number of function evaluations, while $MaxFEs$ indicates the maximum allowable number of evaluations. $LB$ and $UB$ represent the lower and upper bounds of the problem, respectively.
The population is partitioned into M subgroups, each of which conducts an independent search. Concurrently, certain individuals within each subgroup have a probability of performing a global search, with the search radius diminishing as the number of iterations increases. Equation (17) explicitly defines the location of the individual.
$$S_i = \begin{cases} S_i + \operatorname{sign}(rand - 0.5) \times \alpha \times \Omega, & i = \lceil rand \times popsize \rceil \\ S_i, & \text{otherwise} \end{cases} \tag{17}$$
Here, $i \in \mathbb{N}^{+}$ specifies the particular individual that undergoes the mutation.
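The following Python sketch applies Equations (15)–(17) to a population array. The uniform index draw stands in for $\lceil rand \times popsize \rceil$, and the boundary clipping is an added safeguard not stated in the equations:

```python
import numpy as np

# Sketch of the multi-population enhanced mechanism's jump (Eqs. (15)-(17)):
# one randomly chosen individual takes a large, sign-randomized step whose
# radius alpha shrinks as the evaluation budget is spent.
def multipop_perturb(S, fes, max_fes, lb, ub, rng=np.random.default_rng()):
    N, D = S.shape
    alpha = 2.0 * (1.0 - fes / max_fes)     # Eq. (15): decaying step radius
    omega = rng.uniform(lb, ub, size=D)     # Eq. (16): random point within bounds
    i = rng.integers(N)                     # Eq. (17): i = ceil(rand * popsize)
    S = S.copy()
    S[i] = np.clip(S[i] + np.sign(rng.random() - 0.5) * alpha * omega, lb, ub)
    return S
```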
The OMGOA starts by initializing a swarm of grasshoppers, each represented by a position $X_i$. Key parameters such as $\beta_{max}$, $\beta_{min}$, and the maximum number of iterations N are set. Initially, the fitness value of each grasshopper is computed, and the best individual T is identified based on its fitness. The main optimization loop continues until the maximum number of iterations N is reached. During each iteration p, the constriction factor $\beta$ is updated using Equation (10). For each grasshopper, adjustments ensure that the distance between grasshoppers falls within the range [1, 4], and the positions of selected individuals are modified using Equation (9), with boundary constraints preventing them from exceeding the permissible ranges. The algorithm then employs the Outpost mechanism to update positions $X_i$, introducing diversity, and the multi-population enhanced mechanism to optimize performance across the subgroups. If the fitness of the current best individual T surpasses the previous record, T is updated accordingly. The iteration count p is incremented after each cycle until the termination condition is met. Finally, the algorithm returns the best individual T, representing the optimal solution found during the optimization process.
In summary, the two mechanisms are coupled through elite guidance and distributed search: the Outpost mechanism lets the algorithm rapidly approach the optimal solution during the exploitation phase, while the multi-population mechanism maintains diversity during the exploration phase, together forming a dynamically balanced optimization paradigm.
The pseudocode of the OMGOA algorithm is given in Algorithm 1. The time complexity of OMGOA is determined mainly by its core computational operations. While retaining the original GOA's O(T × N × D) base complexity (T is the number of iterations, N the population size, and D the problem dimension), the Outpost mechanism and the multi-population strategy introduce only controllable additional overhead. The Outpost mechanism enhances exploitation through elite-individual screening (O(N) per generation) and local neighborhood search (O(k × D) per generation, where k is the number of outpost points); the multi-population mechanism improves exploration efficiency through subpopulation maintenance (O(N) per generation) and migration operations (O(m × D) per generation, where m is the number of migrating individuals). With reasonable parameter settings (usually k + m ≤ 0.2N), these improvements keep the total complexity at the same order, O(T × N × D).
Algorithm 1 A simplified description of OMGOA
Input: Maximum and minimum boundaries $\beta_{max}$, $\beta_{min}$; the maximum iterations N; population size n
Output: The best individual T
1.  Initialize the grasshopper swarm $X_i$ (i = 1, 2, …, n);
2.  Calculate the fitness value of each grasshopper;
3.  Choose the best individual T in the group based on fitness value;
4.  While (p ≤ N)
5.      Update the constriction factor $\beta$ using Equation (10);
6.      For each grasshopper
7.          Adjust the distance between grasshoppers to fall within the range [1, 4];
8.          Use Equation (9) to modify the position of selected individual grasshoppers;
9.          Move any grasshopper exceeding the boundary back to the permissible range;
10.     End for
11.     Update $X_{i,j}$ by the Outpost mechanism;
12.     Update $X_{i,j}$ by the multi-population enhanced mechanism;
13.     Replace T if the current best individual is fitter than the previous one;
14.     p = p + 1;
15. End while
16. Return the best individual T;
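To show how these pieces fit together, here is a high-level Python skeleton of Algorithm 1. It reuses the illustrative helpers sketched earlier (`goa_step`, `outpost_update`, `multipop_perturb`) and approximates the evaluation counter as p × n; it is a sketch under those assumptions, not the authors' implementation:

```python
import numpy as np

# High-level skeleton of Algorithm 1 built from the sketches above.
def omgoa(fitness, lb, ub, n=30, max_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n, lb.size))          # step 1: initialize the swarm
    fit = np.apply_along_axis(fitness, 1, X)            # step 2: evaluate fitness
    T = X[fit.argmin()].copy()                          # step 3: best individual so far
    for p in range(1, max_iter + 1):                    # steps 4-15: main loop
        X_prev = X.copy()
        X = goa_step(X, T, p, max_iter, ub, lb)         # steps 5-10: Eqs. (9)-(10)
        X = np.array([outpost_update(X[i], X_prev[i], fitness)
                      for i in range(n)])               # step 11: Outpost mechanism
        X = multipop_perturb(X, p * n, max_iter * n, lb, ub, rng)  # step 12
        fit = np.apply_along_axis(fitness, 1, X)
        if fit.min() < fitness(T):                      # step 13: update the global best
            T = X[fit.argmin()].copy()
    return T                                            # step 16
```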

4. Experimental Research

In this section, we compare the results produced by the proposed OMGOA against those obtained from other algorithms. Experiments were conducted to assess the efficacy of OMGOA relative to its peers. All hypothesis tests were conducted under rigorous standards: Wilcoxon signed-rank tests (pairwise comparisons) used α = 0.05 (two-tailed) with Bonferroni correction, with critical values taken from standard tables for n = 30 runs after verifying the validity of the rank assumptions. Friedman tests (omnibus comparisons) employed the χ² distribution with k − 1 = 5 degrees of freedom (k = 6 algorithms), followed by post hoc Nemenyi tests (CD = 2.569 at p < 0.05) for groupwise differences. All p-values underwent Holm–Bonferroni adjustment to control the family-wise error rate, and effect sizes (r ≥ 0.5 for Wilcoxon, ε² ≥ 0.3 for Friedman) confirmed practical significance beyond the statistical thresholds.
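As a minimal illustration of this protocol, the following Python sketch runs the two core tests with SciPy on placeholder fitness samples (random stand-ins, not the paper's results):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
runs_omgoa = rng.normal(1.0, 0.1, 30)   # stand-in: 30 best-fitness values on one function
runs_rival = rng.normal(1.2, 0.1, 30)

# Pairwise Wilcoxon signed-rank test, two-tailed, compared against alpha = 0.05
w, p_w = stats.wilcoxon(runs_omgoa, runs_rival, alternative="two-sided")
print(f"Wilcoxon: W = {w:.1f}, p = {p_w:.4f}")

# Omnibus Friedman test across k = 6 algorithms (chi-square with k - 1 = 5 dof)
samples = [rng.normal(mu, 0.1, 30) for mu in (1.0, 1.1, 1.2, 1.3, 1.1, 1.25)]
chi2, p_f = stats.friedmanchisquare(*samples)
print(f"Friedman: chi2 = {chi2:.2f}, p = {p_f:.4f}")
```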

4.1. Benchmark Functions

4.1.1. IEEE CEC 2017 Benchmark Functions

Table 1 presents detailed information on the IEEE CEC 2017 benchmark functions used in the experiments.

4.1.2. IEEE CEC 2022 Benchmark Functions

Table 2 presents detailed information on the IEEE CEC 2022 benchmark functions used in the experiments.

4.2. Ablation Analysis

This section examines the effects of the two enhancement mechanisms on OMGOA through ablation experiments. Ablation experiments are crucial for validating the robustness and reliability of research outcomes: by systematically removing one component and observing the impact, they confirm the presence of observed effects and rule out alternative explanations. This study evaluated the independent and combined contributions of the Outpost mechanism and the multi-population strategy to the GOA algorithm through a systematic ablation design. The experimental results are shown in Table 3. Over 30 independent repetitions on the CEC 2017 benchmark set, OMGOA (with both mechanisms) performed best on 26 test functions, significantly outperforming OGOA (Outpost mechanism only) on 8 functions (p < 0.05) and MGOA (multi-population strategy only) on 18 functions (p < 0.01). This advantage confirms the complementarity of the two mechanisms: the Outpost mechanism strengthens local exploitation through elite guidance, while the multi-population architecture strengthens global search through parallel exploration. Their synergy enables OMGOA to achieve significant improvements in 85% of the test cases (an average improvement of 23.7% over either single mechanism), providing empirical support for multi-module design in swarm intelligence algorithms.

4.3. Scalability Analysis

This study validated the scalability of the OMGOA algorithm through multi-dimensional testing. In standard benchmark tests at 30, 50, and 100 dimensions, OMGOA demonstrated excellent optimization capability, with its performance indicators comprehensively surpassing the comparison algorithm AO (see Table 4 for details). The experiments evaluate algorithm performance along three key axes: (1) computational resource efficiency; (2) time complexity, recording convergence speed at different dimensions; and (3) solution quality, assessed through changes in fitness values. The results indicate that as the problem dimension increases (30D → 100D), OMGOA maintains stable convergence characteristics, and the growth in computation time remains within a linear range. This makes it particularly suitable for high-dimensional optimization problems in the real world and supports the adoption of evolutionary computing in large-scale engineering applications. The original AO serves as the reference point in the scalability tests, highlighting OMGOA's advantages across dimensions.
The experimental results also show that the standard deviation of OMGOA is relatively large on certain high-dimensional complex functions. This is mainly due to the dynamic balancing between the Outpost mechanism and the multi-population strategy: the local fine search of the Outpost mechanism can fluctuate in sensitivity within complex multimodal regions, and the efficiency of information exchange among subpopulations may decline in high-dimensional spaces (such as 100D). This phenomenon reflects the general exploration–exploitation balancing challenge in complex optimization and points to directions for future work, including variance-reduction techniques such as quasi-Monte Carlo initialization.
As shown in Figure 1, the convergence curves for OMGOA and AO on selected test functions are presented, with OMGOA in red and AO in blue. The dimension parameters used in the experiment were set to 30, 50, and 100. The selected test functions are F1, F13, F15, and F19 from CEC 2017. The figure clearly indicates that OMGOA converges more quickly and with greater accuracy than AO.
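For readers reproducing this kind of figure, a minimal matplotlib sketch is shown below; the two history arrays are synthetic placeholders standing in for the per-iteration best-fitness records of OMGOA and AO, not the paper's data:

```python
import numpy as np
import matplotlib.pyplot as plt

it = np.arange(1, 501)
hist_omgoa = 1e6 * np.exp(-it / 60) + 10     # placeholder: faster, more accurate run
hist_ao = 1e6 * np.exp(-it / 150) + 100      # placeholder: slower comparison run

plt.semilogy(it, hist_omgoa, "r-", label="OMGOA")   # red, as in Figure 1
plt.semilogy(it, hist_ao, "b-", label="AO")         # blue, as in Figure 1
plt.xlabel("Iteration")
plt.ylabel("Best fitness (log scale)")
plt.legend()
plt.title("Convergence comparison (schematic)")
plt.show()
```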

4.4. Historical Searches

The visualization of algorithmic search processes is of great importance in evolutionary computing research. Visual representations enable researchers to observe the trajectory of the algorithm within the solution space, its speed, and its ability to avoid local optima, deepening understanding of the algorithm's behavior and providing insights for further optimization. Visual experiments also help identify algorithmic limitations and potential issues, guiding improvement efforts. Figure 2 presents the dynamic optimization process of the OMGOA algorithm on the IEEE CEC 2017 benchmark through multi-dimensional visualization. In the tests on functions F1, F7, F9, F23, and F25 (Figure 2a), the algorithm's trajectory exhibits the following characteristics: (1) in the search path (Figure 2b), a dense group of black dots is distributed around the global optimum (red dot), indicating precise local exploitation, while spatially scattered isolated black dots verify the effectiveness of global exploration; (2) the iterative convergence curve (Figure 2c) shows that the relative error becomes very stable after 500 iterations, with convergence roughly 40% faster than the comparison algorithm; (3) the fitness evolution curve (Figure 2d) shows a monotonically decreasing trend, with the final solution quality three orders of magnitude better than the initial population average. This dynamic exploration–exploitation balance enables OMGOA to avoid local-optimum traps on unimodal (F1), multimodal (F7/F9), and composite (F23/F25) functions.
The OMGOA algorithm balances exploration and exploitation dynamically through its dual mechanism. During the exploration phase, the multi-population strategy maintains a high level of population entropy through parallel subgroup search (3–5 subgroups) and periodic individual migration, effectively avoiding premature convergence. During the exploitation phase, the Outpost mechanism dynamically identifies the neighborhoods of elite solutions and guides individuals toward high-potential areas, significantly improving local refinement efficiency. The two mechanisms achieve a smooth transition of search strategies through adaptive weight adjustment, focusing on global exploration early and on local exploitation later. This dynamic balance avoids the exploration–exploitation imbalance common to traditional algorithms on the CEC 2017 multimodal function tests.

4.5. Comparison of Other Related Algorithms

4.5.1. Comparative Experiments at CEC 2017 Benchmark Functions

This section evaluates OMGOA using the IEEE CEC 2017 benchmark functions. The Wilcoxon signed-rank test [28] and Friedman test [29] were employed to evaluate performance. To ensure fair results, the initial conditions for all algorithms were set uniformly, and each algorithm was initialized with a uniform random approach. To minimize the effects of randomness and produce statistically meaningful results, OMGOA and the other methods were run 30 times on each function.
Table 5 shows the comparative experiments, including HGWO [30], WEMFO [31], mSCA [32], SCADE [33], CCMWOA [34], QCSCA [35], BWOA [36], CCMSCSA [37], CLACO [38], BLPSO [39], and GCHHO [40]. Each algorithm's performance was evaluated over 30 independent runs. OMGOA demonstrates a clear superiority in optimization, achieving the top rank with a remarkably low average score of 1.33. This dominance is indicated by the “~” symbol in the +/=/− column, reflecting its status as the reference algorithm in this evaluation. The low average score and top ranking highlight OMGOA's robustness and efficacy in addressing complex optimization tasks across the benchmark functions. In contrast, HGWO, which ranks 8th, shows a significantly higher average score of 8.23 and a +/=/− metric of 30/0/0, indicating that OMGOA outperforms HGWO on all benchmark instances; HGWO lacks the efficiency and effectiveness of OMGOA in these optimization scenarios. Similar trends are observed with mSCA and SCADE, which rank 12th and 11th, respectively, both with average scores exceeding 1.03 × 10¹ and no wins against OMGOA, underscoring their relatively poor performance. WEMFO, which ranks 4th, demonstrates moderate competitiveness with an average score of 4.50; its +/=/− metric of 28/0/2 shows that, while WEMFO can occasionally match or exceed OMGOA's performance, it lags behind in most cases. QCSCA, ranked 6th, also performs decently with an average score of 5.40 and a +/=/− metric of 27/0/3, winning on 3 of 30 functions against OMGOA but remaining inferior overall. Notably, CCMSCSA and CLACO, ranked 2nd and 3rd, respectively, exhibit strong performance with average scores of 2.93 and 3.87. Their +/=/− metrics of 21/1/8 and 25/0/5 show that they outperform OMGOA in several cases, particularly CCMSCSA; however, OMGOA retains its leading position thanks to its lower average score and higher rank. BLPSO, ranked 5th with an average score of 5.33, presents a relatively strong performance, with a +/=/− metric of 25/3/2, suggesting that it occasionally achieves results comparable to OMGOA, though its higher average score indicates it does not do so consistently. Other algorithms such as BWOA and CCMWOA, ranked 9th and 7th, respectively, exhibit less competitive performance with average scores of 8.43 and 8.17; their +/=/− metrics of 30/0/0 against OMGOA show no wins in any benchmark comparison. GCHHO, ranked 10th with an average score of 8.50, also fails to pose a significant challenge, as indicated by its +/=/− metric of 29/0/1, further reinforcing OMGOA's consistent performance advantage across the benchmark functions.
Figure 3 presents the performance differences between OMGOA and existing optimization methods on the CEC 2017 test set through a comparative analysis of convergence curves. These curves reflect the search trajectory of each algorithm in the solution space and reveal its core optimization characteristics: OMGOA exhibits faster initial convergence, maintains stable search behavior during multimodal function optimization, and avoids the premature convergence commonly seen in other algorithms. Analyzing the curve shapes also illuminates the relationship between algorithm parameter settings and performance, informing further improvements to the optimization strategy. Convergence curves are therefore key indicators for evaluating and refining algorithm designs. The graph illustrates convergence curves for all compared algorithms across twelve test functions, with the x-axis indicating the number of iterations and the y-axis showing the optimization value. On functions F5, F8, F22, and F26, OMGOA demonstrates significant convergence advantages, rapidly reaching and maintaining the lowest optimal values. In the other plots, particularly in complex scenarios with closely packed convergence curves, OMGOA consistently achieves the best optimization results.

4.5.2. Comparative Experiments at CEC 2022 Benchmark Functions

The competing algorithms involved in this experiment include HGWO [30], WEMFO [31], mSCA [32], SCADE [33], CCMWOA [34], QCSCA [35], BWOA [36], CCMSCSA [37], CLACO [38], BLPSO [39], GCHHO [40]. Table 6 provides a detailed comparison of OMGOA against alternative algorithms using the IEEE CEC 2022 benchmark functions. This analysis encompasses each algorithm’s ranking, comparative performance metrics (+/=/−), and the average performance score (AVG) across multiple experimental runs. “+” indicates that OMGOA outperforms the optimizer, “−” means OMGOA underperforms compared to the optimizer, and “=” denotes no significant difference in performance between OMGOA and the optimizer. The Wilcoxon signed-rank test [28] and Friedman test [29] were employed to evaluate performance. OMGOA demonstrates exceptional performance, securing the top rank. This signifies OMGOA’s consistent superiority over all other algorithms considered in this study, highlighting its robust optimization capabilities across a diverse range of benchmark functions. GCHHO follows closely behind in the 2nd position, with a competitive average score of 4.25 and a +/=/− metric of 7/2/3. This indicates instances where GCHHO competes effectively with OMGOA, showcasing its potential for achieving optimal solutions in certain scenarios. QCSCA ranks 3rd with an average score of 4.75 and a +/=/− metric of 6/0/6. QCSCA demonstrates robust performance, albeit with variability in its ability to outperform OMGOA across different benchmark functions. CLACO secures the 4th rank, achieving an average score of 5.33 and a +/=/− metric of 4/2/6. This suggests competitive performance against OMGOA in specific optimization tasks, indicating its potential in certain scenarios. Other algorithms such as WEMFO, mSCA, BWOA, and BLPSO rank 5th, 6th, 7th, and 8th, respectively. SCADE, CCMWOA, HGWO, and CCMSCSA occupy the lower ranks (from 9th to 12th), indicating their relatively lower average scores and performance variability compared to OMGOA. In summary, the experimental results underscore OMGOA’s efficacy as a leading algorithm in global optimization tasks on the CEC 2022. Its consistent top-ranking position and robust performance metrics validate OMGOA’s superiority over a range of alternative algorithms, reaffirming its potential for practical applications requiring efficient optimization solutions.
Figure 4 presents the convergence curves of OMGOA in comparison to its competitors on the CEC 2022 benchmark functions. The figure illustrates the convergence paths of all tested algorithms across nine benchmark functions; the horizontal and vertical axes correspond to the number of iterations and the optimization value, respectively. The experimental results show that OMGOA exhibits significant convergence advantages on test functions such as F1, F4, F6, and F7: it not only quickly approaches the theoretical optimum but also obtains markedly better results than the comparison algorithms. Notably, even in complex optimization scenarios with densely packed convergence curves, the algorithm maintains stable optimization capability and consistently attains the best solutions. This behavior reflects the design advantage of OMGOA's exploration–exploitation balancing mechanism.

4.6. Experiments on Real-World Optimization of SVM

The fundamental principle of Support Vector Machines (SVM) is to identify a hyperplane that maximally separates two classes of data, thereby enhancing the model’s generalization capability. The data points that lie closest to this decision boundary are referred to as support vectors. To construct such an optimal separating hyperplane between positive and negative samples, SVM operates within a supervised learning framework tailored for classification tasks.
Given a dataset $G = \{(x_i, y_i)\},\ i = 1, \ldots, N$, with $x \in \mathbb{R}^d$ and $y \in \{\pm 1\}$, the separating hyperplane can be written as
$$g(x) = \omega^T x + b \tag{18}$$
From a geometric standpoint, maximizing the margin between classes corresponds to minimizing the norm $\|\omega\|$. To handle scenarios with a limited number of outliers, the concept of a “soft margin” is introduced, incorporating slack variables $\xi_i > 0$ to allow certain violations. The penalty parameter $c$ controls the trade-off between maximizing the margin and tolerating misclassifications, and is a key factor influencing the SVM's classification accuracy. The standard formulation of the SVM model is as follows:
$$\min_{\omega}\ \frac{1}{2}\|\omega\|^2 + c\sum_{i=1}^{N}\xi_i \qquad \text{s.t.}\quad y_i\left(\omega^T x_i + b\right) \geq 1 - \xi_i,\quad i = 1, 2, \ldots, N \tag{19}$$
where $\omega$ is the weight vector normal to the separating hyperplane.
Support Vector Machine (SVM) employs a nonlinear mapping function $\Phi: \mathbb{R}^d \rightarrow H$ to transform linearly inseparable samples from the original low-dimensional input space into a higher-dimensional feature space $H$, where a linear classifier can effectively separate the data. To ensure that inner products in the high-dimensional space can be computed in the original space, kernel functions $k(x_i, x_j)$ are introduced based on functional theory. Here, $\alpha_i$ denotes the Lagrange multipliers. Accordingly, Equation (19) can be reformulated as follows:
$$\min_{\alpha}\ Q(\alpha) = \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i \alpha_j y_i y_j k\left(x_i, x_j\right) - \sum_{i=1}^{N}\alpha_i \qquad \text{s.t.}\quad \sum_{i=1}^{N}\alpha_i y_i = 0,\quad 0 \leq \alpha_i \leq C,\ i = 1, 2, \ldots, N \tag{20}$$
In this study, the SVM employs a widely adopted radial basis function (RBF) kernel, defined as follows:
$$k\left(x_i, x_j\right) = e^{-\gamma \left\|x_i - x_j\right\|^2} \tag{21}$$
Here, γ denotes the kernel parameter, which critically influences the classification performance of SVM by determining the effective width of the radial basis function. The overall classification accuracy and computational complexity of an SVM model largely depend on the appropriate selection of two hyperparameters: the penalty coefficient C and the kernel width γ . However, these parameters are typically selected empirically, often leading to suboptimal results and reduced efficiency.
To address this issue, this section introduces a novel hybrid model, OMGOA-SVM, which utilizes OMGOA to simultaneously optimize $C$ and $\gamma$. The enhanced model is subsequently applied to a real-world classification task: lithology prediction from petrophysical logs. It comprises two primary stages. In the first stage, OMGOA adaptively tunes the hyperparameters $C$ and $\gamma$ to improve the performance of the SVM classifier. In the second stage, the optimized SVM is evaluated using 10-fold cross-validation, where nine folds are used for training and the remaining fold for testing, to assess the model's classification accuracy (ACC).
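A minimal sketch of the first stage's fitness function is given below, assuming scikit-learn and a candidate position that encodes $(\log_2 C, \log_2 \gamma)$ within the $[2^{-5}, 2^5]$ search range stated later; `X_train`, `y_train`, and the reuse of the earlier `omgoa` skeleton are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Fitness for OMGOA-SVM: negative mean 10-fold cross-validation accuracy,
# so that a minimizing optimizer maximizes accuracy.
def svm_fitness(position, X_train, y_train):
    C, gamma = 2.0 ** position[0], 2.0 ** position[1]   # decode (log2 C, log2 gamma)
    model = SVC(C=C, gamma=gamma, kernel="rbf")
    acc = cross_val_score(model, X_train, y_train, cv=10, scoring="accuracy").mean()
    return -acc

# Illustrative usage with the omgoa() skeleton sketched in Section 3:
# lb, ub = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
# best = omgoa(lambda p: svm_fitness(p, X_train, y_train), lb, ub, n=20, max_iter=50)
```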

4.6.1. Performance Metrics

To evaluate the performance of the binary classification model, four standard metrics derived from the confusion matrix were employed:
  • True Positive (TP): Instances where the model correctly predicts the positive class.
  • False Positive (FP): Instances where the model incorrectly predicts the positive class for a negative sample.
  • False Negative (FN): Instances where the model incorrectly predicts the negative class for a positive sample.
  • True Negative (TN): Instances where the model correctly predicts the negative class.
Among these metrics, Accuracy (ACC) is defined as the proportion of correctly classified instances (both positive and negative) relative to the total number of predictions. It provides an overall measure of the model’s classification effectiveness.
$$ACC = \frac{TP + TN}{TP + FP + FN + TN} \tag{22}$$
Specificity evaluates the ability of the binary classification model to correctly identify negative (normal) instances, reflecting its effectiveness in distinguishing non-target cases.
$$Specificity = \frac{TN}{FP + TN} \tag{23}$$
Sensitivity measures the model’s ability to correctly identify positive (abnormal) instances, thereby assessing its effectiveness in detecting target conditions or events.
$$Sensitivity = \frac{TP}{TP + FN} \tag{24}$$
The Matthews Correlation Coefficient (MCC) was employed to provide a comprehensive evaluation of the classification model’s performance, offering a more balanced and informative measure than simple accuracy metrics [41].
$$MCC = \frac{TP \times TN - FP \times FN}{\sqrt{\left(TP + FP\right)\left(TP + FN\right)\left(TN + FP\right)\left(TN + FN\right)}} \tag{25}$$
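The four metrics in Equations (22)–(25) can be computed directly from the confusion-matrix counts, as in this short sketch (the counts shown are illustrative only):

```python
import numpy as np

# Confusion-matrix metrics for a binary classifier (Eqs. (22)-(25)).
def binary_metrics(tp, fp, fn, tn):
    acc = (tp + tn) / (tp + fp + fn + tn)                     # Eq. (22)
    specificity = tn / (fp + tn)                              # Eq. (23)
    sensitivity = tp / (tp + fn)                              # Eq. (24)
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0   # Eq. (25)
    return acc, sensitivity, specificity, mcc

print(binary_metrics(tp=45, fp=5, fn=10, tn=40))  # illustrative counts only
```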
In the subsequent experiments, the proposed OMGOA-SVM model was compared with several other SVM variants, including GOA-SVM, LPO-SVM [42], SBOA-SVM [43], and CPO-SVM [44], using the aforementioned evaluation metrics.
To ensure a fair and consistent comparison, all experiments were conducted under identical settings. Specifically, the swarm size and number of iterations for the OMGOA, LPO, SBOA, and CPO algorithms were fixed at 20 and 50, respectively. The search ranges for the penalty parameter $C$ and kernel width $\gamma$ were uniformly set to $[2^{-5}, 2^5]$. Moreover, to mitigate the influence of differing feature scales, all datasets were normalized to the range [0, 1] prior to classification.

4.6.2. Lithology Predictor

Accurate identification of subsurface lithologies or rock types is fundamental for geoscientists engaged in the exploration of underground resources, particularly within the oil and gas sector. Lithology, which denotes the composition and characteristics of subsurface rocks, typically includes categories such as sandstone, claystone, marl, limestone, and dolomite. Various subsurface measurements, notably wireline petrophysical logs, serve as valuable data sources for lithology identification. However, the traditional interpretation of these logs is often labor-intensive, repetitive, and time-consuming.
This study aims to leverage machine learning classification techniques to predict lithology directly from petrophysical log data, providing an efficient and automated approach that addresses these challenges by utilizing logs as effective proxies for lithological properties.
The dataset comprises 118 wells distributed across the South and North Viking Graben, encompassing a geologically diverse region ranging from Permian evaporites in the south to the deeply buried Brent delta facies. Analysis of the provided training data reveals that the offshore Norwegian lithology is predominantly characterized by shales and shaly sediments. These are followed in abundance by sandstones, limestones, marls, and tuffs.
The provided dataset comprises well logs, interpreted lithofacies, and lithostratigraphic information for over 90 wells from offshore Norway. The well logs include identifiers such as well name (WELL), measured depth, and spatial coordinates (x, y, z) corresponding to the wireline measurements. The dataset also contains various petrophysical log measurements, including CALI, RDEP, RHOB, DHRO, SGR, GR, RMED, RMIC, NPHI, PEF, RSHA, DTC, SP, BS, ROP, DTS, DCAL, and MUDWEIGHT. Detailed descriptions of these abbreviations are provided in the figure below.
Table 7 presents a comprehensive comparison of the classification performance of the proposed OMGOA-SVM model against four alternative SVM-based classifiers: GOA-SVM, LPO-SVM, SBOA-SVM, and CPO-SVM. The evaluation metrics considered include Accuracy (ACC), Sensitivity, Specificity, and Matthews Correlation Coefficient (MCC). For each metric, both the average (Avg) and standard deviation (Std) across 10 experimental runs (#1 to #10) are reported, providing insight into both the central tendency and the variability of model performance. In this experiment, the baseline parameter settings of the support vector machine (SVM) are as follows: the penalty parameter $C$ is set to 10, the kernel function is the radial basis function (RBF), and its kernel parameter $\gamma$ is set to 0.01. To ensure model stability and avoid overfitting, ten-fold cross-validation was used for parameter tuning. Specifically, the optimal combination of $C$ and $\gamma$ was obtained through a grid search, with $C$ ranging over [0.1, 1, 10, 100] and $\gamma$ over [0.001, 0.01, 0.1, 1].
The OMGOA-SVM consistently achieves the highest average accuracy of 0.825, surpassing the other methods whose accuracies range between 0.775 and 0.804. Its standard deviation of 0.081 indicates relatively stable performance across different test folds. Similarly, OMGOA-SVM attains the best average sensitivity (0.756) and specificity (0.845), demonstrating superior ability in correctly identifying positive and negative classes respectively. In terms of MCC, which offers a balanced measure of prediction quality accounting for true and false positives and negatives, OMGOA-SVM again leads with an average value of 0.618, reflecting its comprehensive predictive capability.
Notably, individual fold results reveal that, while performance varies across different subsets, OMGOA-SVM maintains generally higher or comparable metric values compared to other models, indicating robustness. The observed lower standard deviations further confirm its consistent behavior. This comparative analysis underscores the effectiveness of the OMGOA optimization strategy in tuning SVM parameters, thereby enhancing classification accuracy and reliability relative to competing approaches.

5. Conclusions and Future Works

In this work, we presented OMGOA, a novel GOA variant that incorporates the Outpost mechanism and a multi-population enhanced mechanism to address the shortcomings of the original GOA. The Outpost mechanism strengthens local search by focusing exploitation efforts near high-quality solutions, while the multi-population strategy enhances global search capabilities and prevents premature convergence by maintaining diverse search dynamics across multiple interacting subpopulations. Through extensive experimental validation, including ablation studies, scalability tests, and comparisons on the CEC2017 benchmark set, OMGOA consistently outperformed both the original GOA and several advanced optimization algorithms. Moreover, its effectiveness was further demonstrated in a real-world lithology prediction task, where OMGOA-based SVM models exhibited superior classification accuracy. Overall, OMGOA offers a promising and robust optimization framework for solving complex, high-dimensional, real-world problems. Future research may explore the integration of adaptive parameter control and hybrid learning strategies to further enhance its performance. A remaining limitation is that OMGOA's optimization performance has so far been validated on only one practical problem; validation across a wider range of real-world applications is needed to support its further development.
Future research can explore several promising directions. For example, the proposed OMGOA may be further enhanced by hybridizing it with other emerging metaheuristic algorithms to improve its optimization performance. Additionally, extending OMGOA to tackle multi-objective optimization problems and applying it to tasks such as image segmentation represent worthwhile avenues of investigation. Another prospective line of inquiry involves analyzing how the numerical degradation of chaotic systems in digital computation environments influences the performance of metaheuristic-based optimization methods.

Author Contributions

X.Y.: Conceptualization, Software, Data Curation, Investigation, Writing—Original Draft, Project Administration; P.Z.: Methodology, Writing—Original Draft, Writing—Review and Editing, Validation, Formal Analysis, Supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Natural Science Foundation of Xinjiang Uygur Autonomous Region (No. 2022D01B132), Tianchi Talent Program of Xinjiang Uygur Autonomous Region (No. 2023XGYTCYC08), and the Doctoral Research Initiation Fund Project of Xinjiang Institute of Engineering (No. 2023XGYBQJ08).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The numerical and experimental data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhang, X.; Wang, D.; Zhou, Z.; Ma, Y. Robust Low-Rank Tensor Recovery with Rectification and Alignment. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 238–255.
2. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
3. Chen, H.; Jiao, S.; Wang, M.; Heidari, A.A.; Zhao, X. Parameters identification of photovoltaic cells and modules using diversification-enriched Harris hawks optimization with chaotic drifts. J. Clean. Prod. 2019, 244, 118778.
4. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
5. Hu, G.; Guo, Y.; Wei, G.; Abualigah, L. Genghis Khan shark optimizer: A novel nature-inspired algorithm for engineering optimization. Adv. Eng. Inform. 2023, 58, 102210.
6. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
7. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98.
8. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
9. Jia, H.; Rao, H.; Wen, C.; Mirjalili, S. Crayfish optimization algorithm. Artif. Intell. Rev. 2023, 56 (Suppl. 2), 1919–1979.
10. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
11. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper Optimisation Algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47.
12. Aljarah, I.; Al-Zoubi, A.M.; Faris, H.; Hassonah, M.A.; Mirjalili, S.; Saadeh, H. Simultaneous Feature Selection and Support Vector Machine Optimization Using the Grasshopper Optimization Algorithm. Cogn. Comput. 2018, 10, 478–495.
13. Arora, S.; Anand, P. Chaotic grasshopper optimization algorithm for global optimization. Neural Comput. Appl. 2018, 31, 4385–4405.
14. Ewees, A.A.; Abd Elaziz, M.; Houssein, E.H. Improved grasshopper optimization algorithm using opposition-based learning. Expert Syst. Appl. 2018, 112, 156–172.
15. Luo, J.; Chen, H.; Zhang, Q.; Xu, Y.; Huang, H.; Zhao, X. An improved grasshopper optimization algorithm with application to financial stress prediction. Appl. Math. Model. 2018, 64, 654–668.
16. Mirjalili, S.Z.; Mirjalili, S.; Saremi, S.; Faris, H.; Aljarah, I. Grasshopper optimization algorithm for multi-objective optimization problems. Appl. Intell. 2018, 48, 805–820.
17. Saxena, A.; Shekhawat, S.; Kumar, R. Application and Development of Enhanced Chaotic Grasshopper Optimization Algorithms. Model. Simul. Eng. 2018, 2018, 4945157.
18. Tharwat, A.; Houssein, E.H.; Ahmed, M.M.; Hassanien, A.E.; Gabel, T. MOGOA algorithm for constrained and unconstrained multi-objective optimization problems. Appl. Intell. 2018, 48, 2268–2283.
19. Barik, A.K.; Das, D.C. Expeditious frequency control of solar photovoltaic/biogas/biodiesel generator based isolated renewable microgrid using grasshopper optimisation algorithm. IET Renew. Power Gener. 2018, 12, 1659–1667.
20. Crawford, B.; Soto, R.; Peña, A.; Astorga, G. A binary grasshopper optimisation algorithm applied to the set covering problem. Adv. Intell. Syst. Comput. 2019, 765, 1–12.
21. El-Fergany, A.A. Electrical characterisation of proton exchange membrane fuel cells stack using grasshopper optimizer. IET Renew. Power Gener. 2018, 12, 9–17.
22. Hazra, S.; Pal, T.; Roy, P.K. Renewable energy based economic emission load dispatch using grasshopper optimization algorithm. Int. J. Swarm Intell. Res. 2019, 10, 38–57.
23. Jumani, T.A.; Mustafa, M.W.; Rasid, M.M.; Mirjat, N.H.; Baloch, M.H.; Salisu, S. Optimal power flow controller for grid-connected microgrids using grasshopper optimization algorithm. Electronics 2019, 8, 111.
24. Mafarja, M.; Aljarah, I.; Heidari, A.A.; Hammouri, A.I.; Faris, H.; Al-Zoubi, A.M.; Mirjalili, S. Evolutionary Population Dynamics and Grasshopper Optimization approaches for feature selection problems. Knowl.-Based Syst. 2018, 145, 25–45.
25. Taher, M.A.; Kamel, S.; Jurado, F.; Ebeed, M. Modified grasshopper optimization framework for optimal power flow solution. Electr. Eng. 2019, 101, 121–148.
26. Wu, J.; Wang, H.; Li, N.; Yao, P.; Huang, Y.; Su, Z.; Yu, Y. Distributed trajectory optimization for multiple solar-powered UAVs target tracking in urban environment by Adaptive Grasshopper Optimization Algorithm. Aerosp. Sci. Technol. 2017, 70, 497–510.
27. Tumuluru, P.; Ravi, B. GOA-based DBN: Grasshopper optimization algorithm-based deep belief neural networks for cancer classification. Int. J. Appl. Eng. Res. 2017, 12, 14218–14231.
28. Demsar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30.
29. Garcia, S.; Fernandez, A.; Luengo, J.; Herrera, F. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Inf. Sci. 2010, 180, 2044–2064.
30. Deng, S.; Wang, X.; Zhu, Y.; Lv, F.; Wang, J. Hybrid Grey Wolf Optimization Algorithm-Based Support Vector Machine for Groutability Prediction of Fractured Rock Mass. J. Comput. Civ. Eng. 2019, 33, 04018065.
31. Shan, W.; Qiao, Z.; Heidari, A.A.; Chen, H.; Turabieh, H.; Teng, Y. Double adaptive weights for stabilization of moth flame optimizer: Balance analysis, engineering cases, and medical diagnosis. Knowl.-Based Syst. 2021, 214, 106728.
32. Gupta, S.; Deep, K. A hybrid self-adaptive sine cosine algorithm with opposition based learning. Expert Syst. Appl. 2019, 119, 210–230.
33. Alambeigi, F.; Aghajani Pedram, S.; Speyer, J.L.; Rosen, J.; Iordachita, I.; Taylor, R.H.; Armand, M. SCADE: Simultaneous Sensor Calibration and Deformation Estimation of FBG-Equipped Unmodeled Continuum Manipulators. IEEE Trans. Robot. 2020, 36, 222–239.
34. Luo, J.; Chen, H.; Heidari, A.A.; Xu, Y.; Zhang, Q.; Li, C. Multi-strategy boosted mutative whale-inspired optimization approaches. Appl. Math. Model. 2019, 73, 109–123.
35. Hu, H.; Shan, W.; Tang, Y.; Heidari, A.A.; Chen, H.; Liu, H.; Wang, M.; Escorcia-Gutierrez, J.; Mansour, R.F.; Chen, J. Horizontal and vertical crossover of sine cosine algorithm with quick moves for optimization and feature selection. J. Comput. Des. Eng. 2022, 9, 2524–2555.
36. Reddy, K.S.; Panwar, L.; Panigrahi, B.K.; Kumar, R. Binary whale optimization algorithm: A new metaheuristic approach for profit-based unit commitment problems in competitive electricity markets. Eng. Optim. 2019, 51, 369–389.
37. Shan, W.; Hu, H.; Cai, Z.; Chen, H.; Liu, H.; Wang, M.; Teng, Y. Multi-strategies boosted mutative crow search algorithm for global tasks: Cases of continuous and discrete optimization. J. Bionic Eng. 2022, 19, 1830–1849.
38. Liu, L.; Zhao, D.; Yu, F.; Heidari, A.A.; Li, C.; Ouyang, J.; Chen, H.; Mafarja, M.; Turabieh, H.; Pan, J. Ant colony optimization with Cauchy and greedy Levy mutations for multilevel COVID-19 X-ray image segmentation. Comput. Biol. Med. 2021, 136, 104609.
39. Chen, X.; Li, K.; Xu, B.; Yang, Z. Biogeography-based learning particle swarm optimization for combined heat and power economic dispatch problem. Knowl.-Based Syst. 2020, 208, 106463.
40. Song, S.; Wang, P.; Heidari, A.A.; Wang, M.; Zhao, X.; Chen, H.; He, W.; Xu, S. Dimension decided Harris hawks optimization with Gaussian mutation: Balance analysis and diversity patterns. Knowl.-Based Syst. 2021, 215, 106425.
41. Huang, G.B.; Zhou, H.; Ding, X.; Zhang, R. Extreme learning machine for regression and multiclass classification. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2012, 42, 513–529.
42. Ghasemi, M.; Zare, M.; Zahedi, A.; Trojovský, P.; Abualigah, L.; Trojovská, E. Optimization based on performance of lungs in body: Lungs performance-based optimization (LPO). Comput. Methods Appl. Mech. Eng. 2024, 419, 116582.
43. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024, 57, 123.
44. Zhang, X.; Du, C.; Pang, T.; Liu, Q.; Gao, W.; Lin, M. Chain of preference optimization: Improving chain-of-thought reasoning in LLMs. Adv. Neural Inf. Process. Syst. 2024, 37, 333–356.
Figure 1. Scalability analysis on the IEEE CEC 2017 benchmark functions.
Figure 2. Evolutionary trajectory of OMGOA on the IEEE CEC 2017 benchmark functions: (a) the benchmark function; (b) the locations of the search individuals; (c) the relative position of the optimal individual; (d) the average fitness value.
Figure 3. Performance comparisons of OMGOA with state-of-the-art competitors on the IEEE CEC 2017 benchmark functions.
Figure 4. Performance comparisons of OMGOA with state-of-the-art competitors on the IEEE CEC 2022 benchmark functions.
Table 1. IEEE CEC 2017 functions.
Function Equation | Dim | Optimum
$f_1(x)=x_1^2+10^6\sum_{i=2}^{D}x_i^2$ | 30 | 100
$f_2(x)=\sum_{i=1}^{D}x_i^2$ | 30 | 200
$f_3(x)=\sum_{i=1}^{D}x_i^2+\left(\sum_{i=1}^{D}0.5x_i\right)^2+\left(\sum_{i=1}^{D}0.5x_i\right)^4$ | 30 | 300
$f_4(x)=\sum_{i=1}^{D-1}\left[100\left(x_i^2-x_{i+1}\right)^2+\left(x_i-1\right)^2\right]$ | 30 | 400
$f_5(x)=\sum_{i=1}^{D}\left(x_i^2-10\cos(2\pi x_i)+10\right)$ | 30 | 500
$f_6(x)=g(x_1,x_2)+g(x_2,x_3)+\cdots+g(x_{D-1},x_D)+g(x_D,x_1)$, where $g(x,y)=0.5+\dfrac{\sin^2\left(\sqrt{x^2+y^2}\right)-0.5}{\left(1+0.001\left(x^2+y^2\right)\right)^2}$ | 30 | 600
$f_7(x)=\min\left(\sum_{i=1}^{D}(x_i-\mu_0)^2,\; dD+s\sum_{i=1}^{D}(x_i-\mu_1)^2\right)+10\left(D-\sum_{i=1}^{D}\cos(2\pi z_i)\right)$ | 30 | 700
$f_8(x)=\sum_{i=1}^{D}\left(z_i^2-10\cos(2\pi z_i)+10\right)+f_{13}$ | 30 | 800
$f_9(x)=\sin^2(\pi w_1)+\sum_{i=1}^{D-1}(w_i-1)^2\left[1+10\sin^2(\pi w_i+1)\right]+(w_D-1)^2\left[1+\sin^2(2\pi w_D)\right]$ | 30 | 900
$f_{10}(x)=418.9829D-\sum_{i=1}^{D}g(z_i),\; z_i=x_i+4.209687462275036\times 10^{2}$ | 30 | 1000
$f_{11}(x)=\sum_{i=1}^{D}\left(10^6\right)^{\frac{i-1}{D-1}}x_i^2$ | 3 | 1100
$f_{12}(x)=10^6x_1^2+\sum_{i=2}^{D}x_i^2$ | 3 | 1200
$f_{13}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D}x_i^2}\right)-\exp\left(\frac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i)\right)+20+e$ | 3 | 1300
$f_{14}(x)=\sum_{i=1}^{D}\left(\sum_{k=0}^{k_{\max}}a^k\cos\left(2\pi b^k(x_i+0.5)\right)\right)-D\sum_{k=0}^{k_{\max}}a^k\cos\left(2\pi b^k\cdot 0.5\right)$ | 4 | 1400
$f_{15}(x)=\sum_{i=1}^{D}\frac{x_i^2}{4000}-\prod_{i=1}^{D}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ | 4 | 1500
$f_{16}(x)=\frac{10}{D^2}\prod_{i=1}^{D}\left(1+i\sum_{j=1}^{32}\frac{\left|2^jx_i-\mathrm{round}(2^jx_i)\right|}{2^j}\right)^{\frac{10}{D^{1.2}}}-\frac{10}{D^2}$ | 4 | 1600
$f_{17}(x)=\left|\sum_{i=1}^{D}x_i^2-D\right|^{1/4}+\left(0.5\sum_{i=1}^{D}x_i^2+\sum_{i=1}^{D}x_i\right)/D+0.5$ | 5 | 1700
$f_{18}(x)=\left|\left(\sum_{i=1}^{D}x_i^2\right)^2-\left(\sum_{i=1}^{D}x_i\right)^2\right|^{1/2}+\left(0.5\sum_{i=1}^{D}x_i^2+\sum_{i=1}^{D}x_i\right)/D+0.5$ | 5 | 1800
$f_{19}(x)=f_7(f_4(x_1,x_2))+f_7(f_4(x_2,x_3))+\cdots+f_7(f_4(x_{D-1},x_D))+f_7(f_4(x_D,x_1))$ | 5 | 1900
$f_{20}(x)=\frac{1}{D-1}\sum_{i=1}^{D-1}\left(\sqrt{s_i}\cdot\left(\sin\left(50.0\,s_i^{0.2}\right)+1\right)\right)^2,\; s_i=\sqrt{x_i^2+x_{i+1}^2}$ | 6 | 2000
$f_{21}(x)=f_1\left(M(x-o_1)\right)+f_{21}^{*}$ | 3 | 2100
$f_{22}(x)=f_2\left(M(x-o_2)\right)+f_{22}^{*}$ | 3 | 2200
$f_{23}(x)=f_3\left(M(x-o_3)\right)+f_{23}^{*}$ | 4 | 2300
$f_{24}(x)=f_4\left(M\left(\frac{2.048(x-o_4)}{100}\right)+1\right)+f_{24}^{*}$ | 4 | 2400
$f_{25}(x)=f_5\left(M(x-o_5)\right)+f_{25}^{*}$ | 5 | 2500
$f_{26}(x)=f_{20}\left(M\left(\frac{2.048(x-o_6)}{100}\right)\right)+f_{26}^{*}$ | 5 | 2600
$f_{27}(x)=f_7\left(M\left(\frac{600(x-o_7)}{100}\right)\right)+f_{27}^{*}$ | 6 | 2700
$f_{28}(x)=f_8\left(\frac{5.12(x-o_8)}{100}\right)+f_{28}^{*}$ | 6 | 2800
$f_{29}(x)=f_9\left(M\left(\frac{5.12(x-o_9)}{100}\right)\right)+f_{29}^{*}$ | 3 | 2900
$f_{30}(x)=f_{10}\left(M\left(\frac{1000(x-o_{10})}{100}\right)\right)+f_{30}^{*}$ | 3 | 3000
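For readers who want to sanity-check Table 1, the sketch below evaluates two of the listed base functions in plain NumPy. It is a minimal illustration, not the benchmark suite itself: the CEC 2017 shift vectors o_i, rotation matrices M, and offsets f_i* are omitted, so both functions attain 0 at the origin rather than the tabulated optima (500 and 1300).

```python
# A minimal sketch (not the authors' implementation) of two Table 1 base
# functions: f5 (Rastrigin) and f13 (Ackley), without shifts or rotations.
import numpy as np

def rastrigin(x: np.ndarray) -> float:
    # f5(x) = sum_i (x_i^2 - 10 cos(2 pi x_i) + 10)
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def ackley(x: np.ndarray) -> float:
    # f13(x) = -20 exp(-0.2 sqrt(mean(x^2))) - exp(mean(cos(2 pi x))) + 20 + e
    d = x.size
    term1 = -20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / d))
    term2 = -np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d)
    return float(term1 + term2 + 20.0 + np.e)

x = np.zeros(30)                 # D = 30, as in Table 1
print(rastrigin(x), ackley(x))   # both 0.0 (up to floating-point error)
```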
Table 2. IEEE CEC 2022 functions.
Function | Description | f_i*
f1 | Rotated Zakharov | 300
f2 | Rotated Rosenbrock | 400
f3 | Expanded Schaffer's f6 | 600
f4 | Non-Continuous Rastrigin | 800
f5 | Levy function | 900
f6 | Hybrid function | 1800
f7 | Hybrid function | 2000
f8 | Hybrid function | 2200
f9 | Composition function | 2300
f10 | Composition function | 2400
f11 | Composition function | 2600
f12 | Composition function | 2700
Table 3. Ablation analysis.
Algorithm | Rank | +/=/− | AVG
OMGOA | 1 | ~ | 1.8
OGOA | 2 | 8/3/19 | 2.8
MGOA | 3 | 18/5/7 | 2.5
GOA | 4 | 15/3/12 | 2.9
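The Rank, +/=/−, and AVG columns in Table 3 (and in Tables 4–6 below) follow the usual non-parametric comparison protocol of Demsar [28] and Garcia et al. [29]: a pairwise Wilcoxon signed-rank test per function and a Friedman-style mean rank. A hedged sketch of that bookkeeping is shown below; the array names and shapes are assumptions (each algorithm's results held as an array of shape (n_functions, n_runs)), not the authors' code.

```python
# Sketch of win/tie/loss counting and mean-rank computation for result
# tables like Tables 3-6; hypothetical data layout, minimization assumed.
import numpy as np
from scipy.stats import rankdata, wilcoxon

def win_tie_loss(base, other, alpha=0.05):
    """Count functions where `base` is significantly better (+), statistically
    tied (=), or worse (-) than `other`, using paired per-run results."""
    w = t = l = 0
    for runs_base, runs_other in zip(base, other):
        _, p = wilcoxon(runs_base, runs_other)  # assumes runs are not all identical
        if p >= alpha:
            t += 1
        elif runs_base.mean() < runs_other.mean():  # lower fitness is better
            w += 1
        else:
            l += 1
    return w, t, l

def mean_ranks(results):
    """Average rank of each algorithm across all functions (lower is better)."""
    means = np.array([alg.mean(axis=1) for alg in results])  # (n_algs, n_funcs)
    ranks = np.apply_along_axis(rankdata, 0, means)          # rank per function
    return ranks.mean(axis=1)
```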
Table 4. Scalability tests in three dimensions.
Func | Metric | OMGOA (Dim 30) | GOA (Dim 30) | OMGOA (Dim 50) | GOA (Dim 50) | OMGOA (Dim 100) | GOA (Dim 100)
Func 1 | Average | 9.6544E+03 | 2.0719E+09 | 4.9951E+03 | 9.4197E+09 | 7.3983E+03 | 4.6969E+10
Func 1 | Std | 8.2970E+03 | 1.2911E+09 | 6.7658E+03 | 4.6188E+09 | 6.6813E+03 | 1.1329E+10
Func 2 | Average | 9.6154E+12 | 1.4109E+32 | 6.3023E+23 | 4.1631E+62 | 1.1633E+69 | 6.9198E+126
Func 2 | Std | 3.0840E+13 | 5.9718E+32 | 3.4179E+24 | 2.2802E+63 | 5.5411E+69 | 3.7863E+127
Func 3 | Average | 6.7784E+03 | 3.2272E+04 | 2.8304E+04 | 8.1979E+04 | 1.3629E+05 | 2.0650E+05
Func 3 | Std | 3.0449E+03 | 1.0902E+04 | 6.5742E+03 | 1.6183E+04 | 1.7561E+04 | 2.2376E+04
Func 4 | Average | 4.9188E+02 | 5.6697E+02 | 5.0693E+02 | 1.3467E+03 | 6.7096E+02 | 4.5123E+03
Func 4 | Std | 2.9775E+01 | 5.4096E+01 | 5.3408E+01 | 5.3588E+02 | 6.0517E+01 | 1.0093E+03
Func 5 | Average | 6.3865E+02 | 6.0169E+02 | 8.0247E+02 | 7.1614E+02 | 1.2685E+03 | 1.1204E+03
Func 5 | Std | 3.3990E+01 | 2.8615E+01 | 3.9169E+01 | 3.8068E+01 | 7.2399E+01 | 5.6630E+01
Func 6 | Average | 6.1407E+02 | 6.0786E+02 | 6.3547E+02 | 6.1662E+02 | 6.5558E+02 | 6.3487E+02
Func 6 | Std | 9.0603E+00 | 4.8834E+00 | 1.0859E+01 | 5.7965E+00 | 4.7890E+00 | 5.6141E+00
Func 7 | Average | 8.9554E+02 | 8.6942E+02 | 1.1618E+03 | 1.0451E+03 | 2.1794E+03 | 1.8975E+03
Func 7 | Std | 4.5217E+01 | 4.9829E+01 | 1.1463E+02 | 7.4468E+01 | 2.2714E+02 | 1.1805E+02
Func 8 | Average | 9.2238E+02 | 8.7957E+02 | 1.1032E+03 | 1.0223E+03 | 1.5885E+03 | 1.4628E+03
Func 8 | Std | 1.9283E+01 | 1.3931E+01 | 4.8232E+01 | 3.5415E+01 | 1.0850E+02 | 7.3771E+01
Func 9 | Average | 3.1047E+03 | 1.9553E+03 | 1.0046E+04 | 7.0314E+03 | 2.2550E+04 | 2.6541E+04
Func 9 | Std | 1.2182E+03 | 6.4199E+02 | 2.5073E+03 | 2.9472E+03 | 1.1486E+03 | 1.0968E+04
Func 10 | Average | 4.7856E+03 | 3.8642E+03 | 7.2000E+03 | 6.7377E+03 | 1.5513E+04 | 1.4765E+04
Func 10 | Std | 5.8539E+02 | 4.3079E+02 | 9.9839E+02 | 7.7824E+02 | 7.4991E+02 | 1.3677E+03
Func 11 | Average | 1.2336E+03 | 1.8038E+03 | 1.3668E+03 | 5.3601E+03 | 2.5582E+03 | 4.7428E+04
Func 11 | Std | 5.1988E+01 | 6.1567E+02 | 6.9752E+01 | 2.1302E+03 | 2.6735E+02 | 1.1113E+04
Func 12 | Average | 7.7527E+05 | 7.5541E+07 | 2.7063E+06 | 1.3096E+09 | 8.0200E+06 | 8.4485E+09
Func 12 | Std | 6.2805E+05 | 9.6453E+07 | 2.3197E+06 | 1.8338E+09 | 3.8543E+06 | 4.2430E+09
Func 13 | Average | 1.9697E+05 | 1.6582E+07 | 6.4760E+04 | 1.2606E+08 | 6.2171E+04 | 1.0143E+09
Func 13 | Std | 8.0617E+05 | 3.9994E+07 | 5.1037E+04 | 1.0785E+08 | 3.1854E+04 | 8.8827E+08
Func 14 | Average | 4.0321E+04 | 2.3825E+05 | 8.6009E+04 | 6.5216E+05 | 4.1612E+05 | 5.2014E+06
Func 14 | Std | 4.8977E+03 | 3.7515E+05 | 6.2355E+04 | 6.8071E+05 | 1.6010E+05 | 3.8989E+06
Func 15 | Average | 2.7666E+03 | 3.7331E+05 | 2.7889E+04 | 2.7835E+07 | 3.9627E+04 | 2.1011E+08
Func 15 | Std | 2.1363E+03 | 9.3947E+05 | 1.1444E+04 | 1.1094E+08 | 2.9937E+04 | 3.6221E+08
Func 16 | Average | 1.7772E+05 | 2.3767E+03 | 3.4557E+03 | 2.8582E+03 | 6.1340E+03 | 5.8063E+03
Func 16 | Std | 1.1614E+05 | 2.4430E+02 | 5.0033E+02 | 3.8188E+02 | 8.0273E+02 | 6.3565E+02
Func 17 | Average | 5.6614E+03 | 1.9572E+03 | 3.1105E+03 | 2.7269E+03 | 5.9729E+03 | 4.8923E+03
Func 17 | Std | 3.0003E+03 | 1.6833E+02 | 3.4512E+02 | 2.8575E+02 | 7.9455E+02 | 9.3835E+02
Func 18 | Average | 2.4021E+03 | 1.4865E+06 | 6.7694E+05 | 5.6188E+06 | 8.5258E+05 | 4.3156E+06
Func 18 | Std | 1.8400E+02 | 2.5730E+06 | 3.3266E+05 | 6.6294E+06 | 3.3218E+05 | 2.8652E+06
Func 19 | Average | 2.3957E+03 | 2.6154E+06 | 2.6279E+04 | 7.0771E+06 | 1.1636E+04 | 1.3006E+08
Func 19 | Std | 4.4267E+01 | 7.4123E+06 | 1.1565E+04 | 1.9040E+07 | 1.2975E+04 | 2.1899E+08
Func 20 | Average | 2.3007E+03 | 2.3577E+03 | 3.1415E+03 | 2.7589E+03 | 5.1451E+03 | 4.2834E+03
Func 20 | Std | 1.2373E+00 | 1.5643E+02 | 3.1161E+02 | 2.1393E+02 | 6.4993E+02 | 4.2542E+02
Func 21 | Average | 2.7599E+03 | 2.3882E+03 | 2.5469E+03 | 2.5029E+03 | 3.0414E+03 | 2.9340E+03
Func 21 | Std | 3.4134E+01 | 3.3274E+01 | 5.4655E+01 | 3.2799E+01 | 1.2679E+02 | 7.9164E+01
Func 22 | Average | 2.9507E+03 | 4.4209E+03 | 7.1379E+03 | 8.5861E+03 | 1.9879E+04 | 1.8040E+04
Func 22 | Std | 3.9636E+01 | 1.6321E+03 | 3.8485E+03 | 7.3823E+02 | 1.2933E+03 | 3.0434E+03
Func 23 | Average | 2.8903E+03 | 2.7456E+03 | 3.0194E+03 | 2.9615E+03 | 3.6495E+03 | 3.4885E+03
Func 23 | Std | 1.2045E+01 | 3.7873E+01 | 6.8125E+01 | 4.0778E+01 | 1.6156E+02 | 6.7436E+01
Func 24 | Average | 3.6057E+03 | 2.9369E+03 | 3.1553E+03 | 3.1629E+03 | 4.1844E+03 | 4.1405E+03
Func 24 | Std | 1.2124E+03 | 6.1639E+01 | 5.0881E+01 | 9.5689E+01 | 2.0426E+02 | 1.0695E+02
Func 25 | Average | 3.2773E+03 | 2.9932E+03 | 3.0695E+03 | 3.6087E+03 | 3.3405E+03 | 6.3031E+03
Func 25 | Std | 3.7509E+01 | 5.9165E+01 | 3.3171E+01 | 2.6305E+02 | 4.7277E+01 | 7.5532E+02
Func 26 | Average | 3.2170E+03 | 4.5370E+03 | 3.0389E+03 | 6.4067E+03 | 1.7077E+04 | 1.4457E+04
Func 26 | Std | 1.9431E+01 | 3.9846E+02 | 7.6055E+02 | 4.3337E+02 | 7.1817E+03 | 1.1656E+03
Func 27 | Average | 3.8402E+03 | 3.2484E+03 | 3.6260E+03 | 3.5763E+03 | 3.7511E+03 | 4.0170E+03
Func 27 | Std | 2.7825E+02 | 2.0759E+01 | 8.3670E+01 | 1.0590E+02 | 1.2260E+02 | 1.4984E+02
Func 28 | Average | 5.2491E+04 | 3.3914E+03 | 3.3461E+03 | 4.3159E+03 | 3.5338E+03 | 8.0941E+03
Func 28 | Std | 8.8109E+04 | 7.3941E+01 | 4.2472E+01 | 4.2040E+02 | 3.7578E+01 | 1.3082E+03
Func 29 | Average | 9.6544E+03 | 3.7755E+03 | 4.6991E+03 | 4.3908E+03 | 7.0208E+03 | 7.9534E+03
Func 29 | Std | 8.2970E+03 | 1.8496E+02 | 3.7106E+02 | 3.6453E+02 | 5.6226E+02 | 6.3398E+02
Func 30 | Average | 9.6154E+12 | 7.2129E+06 | 2.0286E+06 | 8.3159E+07 | 1.9587E+06 | 5.7162E+08
Func 30 | Std | 3.0840E+13 | 6.2514E+06 | 9.1926E+05 | 3.8995E+07 | 2.9391E+06 | 6.1185E+08
+/=/− | | ~ | 16/10/4 | ~ | 15/13/2 | ~ | 16/10/4
Table 5. Experiments comparing OMGOA with alternative competing algorithms on the IEEE CEC 2017 benchmark functions.
Algorithm | Rank | +/=/− | AVG
OMGOA | 1 | ~ | 1.33
HGWO | 8 | 30/0/0 | 8.23
WEMFO | 4 | 28/0/2 | 4.50
mSCA | 12 | 30/0/0 | 11
SCADE | 11 | 30/0/0 | 10.3
CCMWOA | 7 | 30/0/0 | 8.17
QCSCA | 6 | 27/0/3 | 5.40
BWOA | 9 | 30/0/0 | 8.43
CCMSCSA | 2 | 21/1/8 | 2.93
CLACO | 3 | 25/0/5 | 3.87
BLPSO | 5 | 25/3/2 | 5.33
GCHHO | 10 | 29/0/1 | 8.50
Table 6. Experiments comparing OMGOA with alternative competing algorithms at IEEE CEC 2022 benchmark functions.
Algorithm | Rank | +/=/− | AVG
OMGOA | 1 | ~ | 2.25
HGWO | 10 | 10/1/1 | 7.83
WEMFO | 5 | 9/2/1 | 6.00
mSCA | 6 | 10/0/2 | 6.50
SCADE | 11 | 10/2/0 | 9.33
CCMWOA | 9 | 9/3/0 | 7.67
QCSCA | 3 | 6/0/6 | 4.75
BWOA | 7 | 9/2/1 | 6.58
CCMSCSA | 12 | 10/1/1 | 9.67
CLACO | 4 | 4/2/6 | 5.33
BLPSO | 8 | 8/2/2 | 6.83
GCHHO | 2 | 7/2/3 | 4.25
Table 7. Comparison of classification performance of OMGOA-SVM with other classifiers.
Indicator | Value | OMGOA-SVM | GOA-SVM | LPO-SVM | SBOA-SVM | CPO-SVM
ACC | Avg | 0.825 | 0.804 | 0.792 | 0.779 | 0.775
ACC | Std | 0.081 | 0.076 | 0.081 | 0.090 | 0.113
ACC | #1 | 0.750 | 0.875 | 0.833 | 0.875 | 0.875
ACC | #2 | 0.792 | 0.792 | 0.792 | 0.792 | 0.792
ACC | #3 | 0.958 | 0.833 | 0.958 | 0.958 | 0.958
ACC | #4 | 0.875 | 0.875 | 0.792 | 0.750 | 0.750
ACC | #5 | 0.833 | 0.792 | 0.750 | 0.833 | 0.708
ACC | #6 | 0.792 | 0.792 | 0.708 | 0.708 | 0.708
ACC | #7 | 0.750 | 0.750 | 0.792 | 0.792 | 0.750
ACC | #8 | 0.917 | 0.833 | 0.708 | 0.708 | 0.708
ACC | #9 | 0.875 | 0.875 | 0.875 | 0.667 | 0.917
ACC | #10 | 0.708 | 0.625 | 0.708 | 0.708 | 0.583
Sensitivity | Avg | 0.756 | 0.729 | 0.705 | 0.714 | 0.739
Sensitivity | Std | 0.187 | 0.143 | 0.192 | 0.158 | 0.134
Sensitivity | #1 | 0.917 | 0.917 | 0.917 | 0.917 | 0.917
Sensitivity | #2 | 0.786 | 0.714 | 0.786 | 0.786 | 0.786
Sensitivity | #3 | 0.889 | 0.778 | 0.889 | 0.889 | 0.889
Sensitivity | #4 | 0.778 | 0.778 | 0.556 | 0.778 | 0.667
Sensitivity | #5 | 0.667 | 0.556 | 0.667 | 0.667 | 0.556
Sensitivity | #6 | 0.833 | 0.833 | 0.833 | 0.833 | 0.722
Sensitivity | #7 | 0.286 | 0.429 | 0.286 | 0.571 | 0.571
Sensitivity | #8 | 0.900 | 0.700 | 0.700 | 0.700 | 0.700
Sensitivity | #9 | 0.833 | 0.833 | 0.833 | 0.417 | 0.917
Sensitivity | #10 | 0.667 | 0.750 | 0.583 | 0.583 | 0.667
Specificity | Avg | 0.845 | 0.836 | 0.836 | 0.798 | 0.785
Specificity | Std | 0.138 | 0.143 | 0.143 | 0.186 | 0.137
Specificity | #1 | 0.917 | 0.917 | 0.917 | 0.917 | 0.917
Specificity | #2 | 0.786 | 0.714 | 0.714 | 0.786 | 0.786
Specificity | #3 | 0.889 | 0.778 | 0.778 | 0.889 | 0.889
Specificity | #4 | 0.778 | 0.778 | 0.778 | 0.778 | 0.667
Specificity | #5 | 0.667 | 0.556 | 0.556 | 0.667 | 0.556
Specificity | #6 | 0.833 | 0.833 | 0.833 | 0.833 | 0.722
Specificity | #7 | 0.286 | 0.429 | 0.429 | 0.571 | 0.571
Specificity | #8 | 0.900 | 0.700 | 0.700 | 0.700 | 0.700
Specificity | #9 | 0.833 | 0.833 | 0.833 | 0.417 | 0.917
Specificity | #10 | 0.667 | 0.750 | 0.750 | 0.583 | 0.667
MCC | Avg | 0.618 | 0.577 | 0.542 | 0.526 | 0.523
MCC | Std | 0.190 | 0.171 | 0.204 | 0.207 | 0.523
MCC | #1 | 0.530 | 0.753 | 0.676 | 0.753 | 0.753
MCC | #2 | 0.580 | 0.608 | 0.580 | 0.580 | 0.580
MCC | #3 | 0.913 | 0.644 | 0.913 | 0.913 | 0.913
MCC | #4 | 0.730 | 0.730 | 0.547 | 0.497 | 0.467
MCC | #5 | 0.639 | 0.547 | 0.467 | 0.639 | 0.365
MCC | #6 | 0.476 | 0.476 | 0.178 | 0.178 | 0.348
MCC | #7 | 0.312 | 0.348 | 0.470 | 0.476 | 0.395
MCC | #8 | 0.829 | 0.657 | 0.410 | 0.410 | 0.410
MCC | #9 | 0.753 | 0.753 | 0.753 | 0.385 | 0.833
MCC | #10 | 0.418 | 0.258 | 0.430 | 0.430 | 0.169
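The four indicators reported in Table 7 are standard confusion-matrix statistics. A minimal sketch of how they are computed for one cross-validation fold is shown below; this is an assumed illustration, not the authors' pipeline, and for the multi-class lithology labels these quantities would be computed one-vs-rest per class and then averaged.

```python
# Hedged sketch of the Table 7 indicators (ACC, Sensitivity, Specificity,
# MCC) from binary fold predictions; y_true/y_pred are hypothetical arrays.
import numpy as np

def fold_metrics(y_true: np.ndarray, y_pred: np.ndarray):
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    acc = (tp + tn) / (tp + tn + fp + fn)                 # ACC
    sens = tp / (tp + fn) if (tp + fn) else 0.0           # Sensitivity (recall)
    spec = tn / (tn + fp) if (tn + fp) else 0.0           # Specificity
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0   # Matthews corr. coef.
    return acc, sens, spec, mcc

# Example with hypothetical labels for a single fold:
y_true = np.array([1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1])
print(fold_metrics(y_true, y_pred))
```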
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
