Article

Crisscross Moss Growth Optimization: An Enhanced Bio-Inspired Algorithm for Global Production and Optimization

School of Geosciences, Yangtze University, Wuhan 430100, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(1), 32; https://doi.org/10.3390/biomimetics10010032
Submission received: 6 December 2024 / Revised: 1 January 2025 / Accepted: 2 January 2025 / Published: 7 January 2025

Abstract

Global optimization problems, prevalent across scientific and engineering disciplines, necessitate efficient algorithms for navigating complex, high-dimensional search spaces. Drawing inspiration from the resilient and adaptive growth strategies of moss colonies, the moss growth optimization (MGO) algorithm presents a promising biomimetic approach to these challenges. However, the original MGO can experience premature convergence and limited exploration capabilities. This paper introduces an enhanced bio-inspired algorithm, termed crisscross moss growth optimization (CCMGO), which incorporates a crisscross (CC) strategy and a dynamic grouping parameter, further emulating the biological mechanisms of spore dispersal and resource allocation in moss. By mimicking the interwoven growth patterns of moss, the CC strategy facilitates improved information exchange among population members, thereby enhancing offspring diversity and accelerating convergence. The dynamic grouping parameter, analogous to the adaptive resource allocation strategies of moss in response to environmental changes, balances exploration and exploitation for a more efficient search. Key findings from rigorous experimental evaluations using the CEC2017 benchmark suite demonstrate that CCMGO consistently outperforms nine established metaheuristic algorithms across diverse benchmark functions. Furthermore, in a real-world application to a three-channel reservoir production optimization problem, CCMGO achieves a significantly higher net present value (NPV) compared to benchmark algorithms. This successful application highlights CCMGO’s potential as a robust and adaptable tool for addressing complex, real-world optimization challenges, particularly those found in resource management and other nature-inspired domains.

1. Introduction

Optimization is a crucial step in diverse fields such as engineering and economics, aiming to identify the optimal solution within a vast decision space [1]. This process is essential in practical applications to enhance efficiency and minimize costs. Many real-world tasks, including structural optimization in engineering, experimental parameter tuning in scientific research, and resource scheduling in industrial operations, often translate into complex optimization challenges [2]. These problems are frequently characterized by high-dimensional, nonlinear, and multimodal landscapes, often compounded by intricate constraints and dynamic environments [3].
Traditional optimization methods, such as gradient-based algorithms, simplex methods [4], and dynamic programming [5], have been extensively utilized. Gradient-based algorithms, including steepest descent [6] and conjugate gradient methods [7], are effective for convex and differentiable problems but struggle with nonconvexity, discontinuity, or the presence of multiple local optima. Simplex methods and dynamic programming perform well for specific problem types but typically suffer from the curse of dimensionality when addressing large-scale or high-dimensional problems [8]. Consequently, traditional optimization methods often fall short in addressing the complexities of many practical scenarios [9].
Metaheuristic algorithms have emerged as powerful alternatives, gaining significant attention for their adaptability and global search capabilities [10]. Unlike traditional methods, metaheuristic algorithms are not constrained by problem-specific properties such as convexity or differentiability. They employ stochastic mechanisms and principles inspired by natural phenomena to explore solution spaces effectively, making them particularly well suited for tackling complex, multimodal, and high-dimensional optimization problems [11].
In the literature, metaheuristic algorithms are generally classified into two main categories: evolutionary algorithms (EAs) and swarm intelligence (SI) algorithms [12]. Both EAs and SI share a similar framework structure: they initialize a set of solutions, iteratively update this population through specific operators, and return the current optimal solution when a termination condition is met. EAs are inspired by the principles of natural selection and biological evolution [13]. These algorithms iteratively refine a population of candidate solutions through selection, crossover, and mutation operators. Representative EAs include genetic algorithms (GAs) [14], differential evolution algorithms (DEs) [15], and spherical evolutionary algorithms (SEs) [16]. Swarm intelligence (SI) algorithms, conversely, draw inspiration from the collective behavior of social organisms such as birds, ants, and bees. These algorithms emphasize cooperation and information sharing among individuals in the population, leading to an efficient exploration of the search space and rapid convergence. Prominent SI algorithms include particle swarm optimization (PSO) [17], which models the social foraging behavior of bird flocks; ant colony optimization (ACO) [18], a method widely used for combinatorial problems such as routing and scheduling; and artificial bee colony (ABC) [19], which mimics the foraging behavior of honeybees.
Despite the success of metaheuristics in addressing complex optimization problems, these algorithms face fundamental limitations, as highlighted by the no free lunch (NFL) theorem [20]. This theorem fundamentally states that no single optimization algorithm is universally superior across all possible optimization problems. Instead, it reveals that algorithm performance is inherently problem-dependent. A strategy that excels at solving one type of problem may perform poorly when applied to a different type. This means there are always trade-offs in algorithm design. Consequently, understanding specific problem characteristics and tailoring algorithm design or improvements to those characteristics becomes crucial for achieving superior practical performance.
One prominent application domain where metaheuristic algorithms have demonstrated significant potential is reservoir production optimization [21]. In oil reservoir development, establishing an effective production scheme is essential for efficient hydrocarbon recovery and sustained production [22]. Optimizing injection and production processes in oil reservoirs involves considering numerous dynamic factors, such as reservoir heterogeneity, fluid properties, and operational constraints [23]. Achieving an optimal balance in injection rates and production strategies is vital for maximizing hydrocarbon recovery and minimizing operational costs. The inherent complexities of this domain, characterized by nonlinearity, uncertainty, and dynamic behavior, present significant challenges for traditional optimization methods. These methods often struggle with the high dimensionality and multifaceted nature of reservoir management problems. Consequently, there is a growing interest in applying metaheuristic algorithms, which offer adaptability and global search capabilities, to address these challenges and provide robust solutions for reservoir production optimization.
A variety of methodologies have been employed to optimize petroleum injection and production; Table 1 reviews pertinent recent research. Although much of this work has concentrated on developing surrogate models, frequently adopting differential evolution (DE) or particle swarm optimization (PSO) as the optimization framework, it is essential that the chosen optimizer be matched to the particular traits of the problem. Such a tailored approach allows closer alignment with the unique demands of petroleum injection and production optimization, thus improving the efficiency and performance achievable in this domain.
The moss growth optimizer (MGO), a swarm intelligence optimization algorithm introduced by Zheng et al. [30], draws inspiration from the growth patterns of moss in natural environments. MGO determines the evolutionary direction of the population through a “definite wind direction” mechanism. It then employs spore dispersal search and dual reproduction search for exploration and exploitation of the search space, respectively. Finally, a cryptobiosis mechanism modifies the conventional approach of directly altering individual solutions commonly used in metaheuristic algorithms. While experimental results have demonstrated the effectiveness of MGO, several limitations remain. Firstly, the algorithm lacks inter-population information exchange, leading to a suboptimal utilization of population knowledge. Secondly, the absence of dynamic adaptive strategies hinders the algorithm’s ability to optimize effectively across the different stages of the search process.
In this paper, we propose an improved MGO method, named CCMGO. By incorporating a crisscross strategy, CCMGO enhances inter-population information exchange, promotes offspring diversity, and accelerates convergence towards the optimal solution. Furthermore, the proposed dynamic grouping parameters balance the algorithm’s exploration and exploitation capabilities, enabling its effective adaptation across different optimization stages. The key contributions of this paper are threefold:
  • An improved MGO algorithm is proposed, which integrates a crisscross (CC) strategy and dynamic grouping parameters to enhance search efficiency and solution quality.
  • CCMGO’s performance is rigorously evaluated against nine classic metaheuristic algorithms using the CEC2017 benchmark suite [31]. Statistical validation of the experimental results is conducted using the Wilcoxon and Friedman tests.
  • CCMGO is applied to the practical problem of reservoir production optimization, utilizing a three-phase numerical simulation model. The net present value (NPV) achieved by CCMGO is compared to that of other optimization algorithms to demonstrate its efficacy in a real-world scenario.
This paper is structured into six sections. Section 1 provides an introduction, outlining the research background, motivation, and rationale for improving the MGO algorithm. Section 2 presents a detailed review of the original MGO algorithm, discussing its core principles and identifying areas for improvement. Section 3 introduces the proposed CCMGO algorithm, elaborating on the crisscross strategy and dynamic grouping parameters designed to enhance its performance. Section 4 details the experimental setup, presents the results of the comparative analysis with nine classic metaheuristic algorithms, and discusses the statistical validation of these results. Section 5 demonstrates the application of CCMGO to oil reservoir production optimization, showcasing its performance in comparison to other algorithms. Finally, Section 6 summarizes the key findings, discusses the advantages and limitations of CCMGO, and suggests potential avenues for future research.

2. The Original MGO

MGO is a metaheuristic algorithm proposed by Zheng et al. [30] in 2024, inspired by the process of moss growth in natural environments. The MGO algorithm begins by establishing the evolutionary trajectory of the population through a mechanism known as the determination of wind direction, which utilizes a method of partitioning the population. Inspired by the processes of asexual, sexual, and vegetative reproduction in moss, the algorithm introduces two innovative search strategies: spore dispersal search for exploration and dual propagation search for exploitation. Lastly, the cryptobiosis mechanism is employed to modify the traditional metaheuristic approach of directly altering individuals’ solutions, thereby preventing the algorithm from becoming ensnared in local optima. By emulating the biological behavior of moss, the primary mathematical model of the MGO algorithm is structured as follows:
1. Determination of wind direction: MGO has introduced an innovative mechanism known as “Determining Wind Direction”, which establishes the evolutionary trajectory for all members of a population based on the spatial relationship between the majority of individuals and the optimal individual.
First, a grouping operation is performed on each dimension of the population $X$. Let $M_{best}$ denote the optimal individual in the current population, with its $j$th component $M_{best,j}$ serving as the threshold: individuals whose $X_{i,j}$ exceeds $M_{best,j}$ are assigned to the set $DX_j^1$, while those whose $X_{i,j}$ is below $M_{best,j}$ are assigned to $DX_j^2$. The retained set $divX_j$ is then selected according to Equation (1).
$$divX_j = \begin{cases} DX_j^1, & \operatorname{count}(DX_j^1) \ge \operatorname{count}(DX_j^2) \\ DX_j^2, & \operatorname{count}(DX_j^1) < \operatorname{count}(DX_j^2) \end{cases} \tag{1}$$
where $\operatorname{count}(\cdot)$ denotes the number of elements in a given set. This division is performed several times, and the resulting set is as follows:
$$divX = \left\{ M_i = (M_{i,1}, M_{i,2}, \ldots, M_{i,dim}) \;\middle|\; M_i \in \bigcap_{j=1}^{dn} divX_{p_j},\; M_i \in X \right\} \tag{2}$$
where $dn$ denotes the number of divisions performed and $p_j$ denotes a randomly selected dimension. In other words, $dn$ dimensions are first chosen at random; the first chosen dimension is used for the first division, the resulting set is divided again using the second chosen dimension, and so on, until all $dn$ divisions are exhausted and $divX$ is obtained. Based on this set, the wind direction vector is computed as
$$D_{wind} = M_{best} - \operatorname{mean}(divX) \tag{3}$$
where $D_{wind}$ is obtained by subtracting the dimension-wise mean of $divX$ from $M_{best}$;
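To make the grouping concrete, the following Python sketch reproduces the division and wind-direction steps under our reading of Equations (1)–(3); the function name, the tie-breaking rule at the threshold, and the use of a NumPy random generator are our own choices, not the authors' implementation.

```python
import numpy as np

def wind_direction(X, best, dn, rng):
    """Sketch of MGO's 'determination of wind direction' (Equations (1)-(3)).

    X    : (N, dim) population matrix
    best : (dim,) current best individual M_best
    dn   : number of successive divisions
    rng  : numpy random Generator
    """
    div_X = X.copy()
    # Pick dn distinct random dimensions; divide the set once per dimension.
    for j in rng.choice(X.shape[1], size=dn, replace=False):
        upper = div_X[div_X[:, j] > best[j]]   # DX_j^1
        lower = div_X[div_X[:, j] <= best[j]]  # DX_j^2 (ties kept here)
        # Keep the side with the larger count (Equation (1)).
        div_X = upper if len(upper) >= len(lower) else lower
    # Wind direction: M_best minus the dimension-wise mean of divX (Equation (3)).
    return best - div_X.mean(axis=0)
```

Because each division keeps the more populous side, the retained set shrinks toward the region containing the majority of individuals, and the wind direction points from that region toward the best individual.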
2. Spore dispersal search: The exploration stage of MGO simulates the dissemination process of spores. In the presence of strong winds, the spores can travel longer distances, whereas in turbulent conditions, the dispersion distance of the spores is shorter. The modeling of this process is shown in Equation (4).
$$M_i^{new} = \begin{cases} M_i + step_1 \cdot D_{wind}, & r_1 > 0.2 \\ M_i + step_2 \cdot D_{wind}, & r_1 \le 0.2 \end{cases} \tag{4}$$
where $M_i^{new}$ denotes the new individual generated from the $i$th individual, and $r_1$ is a random number between 0 and 1. $step_1$ and $step_2$ represent the update steps for the two modes, respectively, as shown in Equations (5) and (6).
$$step_1 = w \cdot (r_2 - 0.5) \cdot E \tag{5}$$
where $w$ is a constant equal to two; $r_2$ is a random vector between zero and one, and $E$ denotes the strength of the wind, as shown in Equation (7).
$$step_2 = 0.1 \cdot w \cdot (r_3 - 0.5) \cdot E \cdot \left[ 1 + \frac{E}{2} \left( 1 + \tanh(\beta) \right) \right] \tag{6}$$
where $r_3$ is a random vector between 0 and 1, and $\beta$ is defined in Equation (8).
$$E = 1 - \frac{FEs}{MaxFEs} \tag{7}$$
where $FEs$ denotes the current number of evaluations, and $MaxFEs$ denotes the maximum number of evaluations.
$$\beta = \frac{\operatorname{count}(divX)}{\operatorname{count}(X)} \tag{8}$$
where $\beta$ denotes the ratio of the size of $divX$ to that of $X$;
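The spore dispersal update can be sketched as follows; this is an illustrative reading of Equations (4)–(8), not the authors' implementation, and the bracketed damping term in the short-distance step follows our reconstruction of Equation (6).

```python
import numpy as np

def spore_dispersal(M_i, D_wind, FEs, MaxFEs, beta, rng, w=2.0):
    """Sketch of the spore dispersal search (Equations (4)-(8)).

    M_i    : (dim,) current individual
    D_wind : (dim,) wind direction vector
    beta   : count(divX) / count(X), the ratio from Equation (8)
    """
    dim = M_i.size
    E = 1.0 - FEs / MaxFEs                      # wind strength, Equation (7)
    if rng.random() > 0.2:                      # strong wind: long-distance step
        step = w * (rng.random(dim) - 0.5) * E                      # Equation (5)
    else:                                       # turbulence: short-distance step
        step = (0.1 * w * (rng.random(dim) - 0.5) * E
                * (1.0 + E / 2.0 * (1.0 + np.tanh(beta))))          # Equation (6)
    return M_i + step * D_wind                  # Equation (4)
```

Note how $E$ shrinks as evaluations accumulate, so both step sizes decay from wide exploration toward fine-grained moves late in the run.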
3. Dual propagation search: The dual propagation search strategy simulates the sexual and asexual reproduction of moss. It is important to note that MGO applies the dual propagation search with a probability of 80%, which is mathematically modeled as shown below:
$$M_i^{new} = \begin{cases} (1 - act) \cdot M_i + act \cdot M_{best}, & r_4 > 0.5 \\ M_{i,j}^{new} = M_{best,j} + step_3 \cdot D_{wind,j}, & r_4 \le 0.5 \end{cases} \tag{9}$$
In Equation (9), the branch with $r_4 > 0.5$ updates the $i$th individual as a whole, while the branch with $r_4 \le 0.5$ updates the $j$th dimension of the $i$th individual. Here, $act$ is the parameter controlling whether $M_{best}$ is utilized, as shown in Equation (10), and $step_3$ is the update step, as shown in Equation (11).
$$act = \begin{cases} 1, & 1 - 1.5^{-10 \cdot r_5} \ge 0.5 \\ 0, & 1 - 1.5^{-10 \cdot r_5} < 0.5 \end{cases} \tag{10}$$
where $r_5$ is a random vector between zero and one.
$$step_3 = 0.1 \cdot (r_6 - 0.5) \cdot E \tag{11}$$
where $r_6$ is a random vector between zero and one, and $E$ denotes the strength of the wind, as in Equation (7);
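A minimal sketch of the dual propagation step, under our reading of Equations (9)–(11), is given below; for readability, `act` is treated here as a scalar gate rather than a vector, and the helper names are our own.

```python
import numpy as np

def dual_propagation(M_i, M_best, D_wind, FEs, MaxFEs, rng):
    """Sketch of the dual propagation search (Equations (9)-(11)).
    `act` is simplified to a scalar gate in this illustration."""
    E = 1.0 - FEs / MaxFEs                      # wind strength, Equation (7)
    if rng.random() > 0.5:
        # Whole-individual update toward M_best, gated by `act` (Equation (10)).
        act = 1.0 if 1.0 - 1.5 ** (-10.0 * rng.random()) >= 0.5 else 0.0
        return (1.0 - act) * M_i + act * M_best
    # Single-dimension update around M_best (Equation (9), second branch).
    M_new = M_i.copy()
    j = rng.integers(M_i.size)                  # random dimension j
    step3 = 0.1 * (rng.random() - 0.5) * E      # Equation (11)
    M_new[j] = M_best[j] + step3 * D_wind[j]
    return M_new
```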
4. Cryptobiosis mechanism: The cryptobiosis mechanism, employed to replace the greedy selection component of traditional metaheuristic algorithms, draws inspiration from the cryptobiosis observed in mosses, an extraordinary ability enabling these organisms to recover and thrive following periods of dormancy or desiccation. Unlike conventional greedy selection, this mechanism uses an archive to systematically store updated individuals across generations. When specific conditions, such as reaching a maximum record count, are satisfied, the selection process is triggered to replace the current individual with the best performer from the archive. This strategy allows each individual to restart its search from the same location multiple times, thereby significantly enhancing the optimization performance of MGO. Interested readers are referred to the detailed exposition in [30].
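The archive-and-restore behavior described above can be sketched as a small bookkeeping class; the class name, the `limit` parameter, and the per-individual archive layout are our own assumptions, not the authors' data structures.

```python
class Cryptobiosis:
    """Sketch of the cryptobiosis mechanism for one individual: updated
    positions are archived, and once the record limit is reached the
    individual is restored to the best archived position."""

    def __init__(self, limit=5):
        self.limit = limit    # maximum record count (assumed parameter)
        self.positions = []   # archived solutions
        self.costs = []       # matching fitness values (lower is better)

    def record(self, position, cost):
        """Archive an updated solution instead of overwriting the individual."""
        self.positions.append(list(position))
        self.costs.append(cost)

    def maybe_restore(self):
        """When the archive is full, return (best position, best cost) and reset;
        otherwise return None."""
        if len(self.costs) < self.limit:
            return None
        best = min(range(len(self.costs)), key=self.costs.__getitem__)
        best_pos, best_cost = self.positions[best], self.costs[best]
        self.positions.clear()
        self.costs.clear()
        return best_pos, best_cost
```

In the full algorithm, one such archive would be kept per individual, so each moss can repeatedly restart its search from its best recorded state.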
In summary, MGO starts by randomly generating a set of initial individuals, and during each iteration, it first determines the evolutionary direction of the population based on the wind direction, after which it performs spore dispersal search and dual propagation search sequentially based on the given probability and updates each individual through the cryptobiosis mechanism. This process is repeated until the maximum number of evaluations is reached and the optimal individual of the current population is returned. The flowchart of MGO is shown in Figure 1.

3. Proposed CCMGO

3.1. Crisscross Strategy

The CC concept, originating from the crisscross optimization (CSO) algorithm introduced in 2014 [32], comprises two key components: horizontal crossover search (HCS) and vertical crossover search (VCS). CC embodies the principle of “The Doctrine of the Mean”, dynamically adjusting the search trajectory within the solution space. HCS facilitates information exchange among individuals, while VCS enables intersection across different dimensions of the same individual. This unique intersection strategy, coupled with a competitive selection mechanism, enhances the algorithm’s global search capabilities and accelerates convergence.
In this study, we integrate the CC strategy into the original MGO algorithm to enhance inter-population information dissemination, thereby facilitating a more effective evasion of local optima. The HCS and VCS strategies, as incorporated within CC, are detailed below.

3.1.1. Horizontal Crossover Search

The HCS operation involves randomly selecting particles from the population and pairing them to perform a crossover operation. This process leverages population information to enhance the algorithm’s exploratory capabilities. The mathematical formulation of HCS is defined in Equations (12) and (13).
$$H_{i,j} = r_1 \times x_{i,j} + (1 - r_1) \times x_{k,j} + c_1 \times (x_{i,j} - x_{k,j}) \tag{12}$$
$$H_{k,j} = r_2 \times x_{k,j} + (1 - r_2) \times x_{i,j} + c_2 \times (x_{k,j} - x_{i,j}) \tag{13}$$
where $r_1$ and $r_2$ denote random numbers within the interval [0, 1]; $c_1$ and $c_2$ denote random numbers within the interval [−1, 1]; $x_{i,j}$ denotes the value of the $j$th dimension of the $i$th particle, and $x_{k,j}$ is the value of the $j$th dimension of the $k$th particle. $H_{i,j}$ and $H_{k,j}$ are the new offspring generated by the HCS from the two particles. HCS then proceeds with a greedy selection to preserve the fitter of each offspring and its parent.
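The HCS step of Equations (12) and (13), including the greedy selection, can be sketched as follows; the pairing scheme (a random permutation taken two at a time) and all names are illustrative assumptions.

```python
import numpy as np

def horizontal_crossover(X, fitness, rng):
    """Sketch of HCS (Equations (12)-(13)) with greedy selection.
    `fitness` maps a 1-D array to a scalar; lower is better."""
    N, dim = X.shape
    X = X.copy()
    pairs = rng.permutation(N)                  # random pairing of particles
    for a in range(0, N - 1, 2):
        i, k = pairs[a], pairs[a + 1]
        r1, r2 = rng.random(dim), rng.random(dim)
        c1 = rng.uniform(-1.0, 1.0, dim)
        c2 = rng.uniform(-1.0, 1.0, dim)
        H_i = r1 * X[i] + (1 - r1) * X[k] + c1 * (X[i] - X[k])  # Equation (12)
        H_k = r2 * X[k] + (1 - r2) * X[i] + c2 * (X[k] - X[i])  # Equation (13)
        # Greedy selection: keep the fitter of offspring and parent.
        if fitness(H_i) < fitness(X[i]):
            X[i] = H_i
        if fitness(H_k) < fitness(X[k]):
            X[k] = H_k
    return X
```

Because of the greedy selection, no individual's fitness can worsen in this step; the crossover terms only inject new candidate material.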

3.1.2. Vertical Crossover Search

The VCS operation entails randomly selecting two dimensions within each particle and pairing them for a crossover operation. This mechanism utilizes individual information to enhance the algorithm’s exploitative capabilities. The specific formulation of the VCS operation is detailed in Equation (14).
$$V_{i,j} = r_3 \times x_{i,j_1} + (1 - r_3) \times x_{i,j_2} \tag{14}$$
where $r_3$ denotes a random number within the interval [0, 1]; $x_{i,j_1}$ and $x_{i,j_2}$ denote the values of the two dimensions randomly selected within the $i$th individual, and $V_{i,j}$ denotes the offspring generated by the VCS operation. Analogous to HCS, VCS employs a greedy selection strategy, retaining the fitter of offspring and parent.
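A corresponding sketch of the VCS step of Equation (14), again with greedy selection and with all names as illustrative assumptions:

```python
import numpy as np

def vertical_crossover(X, fitness, rng):
    """Sketch of VCS (Equation (14)) with greedy selection; lower is better."""
    N, dim = X.shape
    X = X.copy()
    for i in range(N):
        # Pick two distinct dimensions j1, j2 of individual i.
        j1, j2 = rng.choice(dim, size=2, replace=False)
        r3 = rng.random()
        V = X[i].copy()
        V[j1] = r3 * X[i, j1] + (1 - r3) * X[i, j2]   # Equation (14)
        if fitness(V) < fitness(X[i]):                # greedy selection
            X[i] = V
    return X
```

Blending one dimension with another gives stagnant dimensions a chance to escape local optima without disturbing the rest of the individual.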

3.2. Dynamic Population Divisions Parameter

The direction determination phase in the original MGO algorithm necessitates specifying the number of population divisions, which subsequently determines the final direction vectors as defined in Equations (2) and (3). In the original MGO, this number is fixed at $dim/4$, a setting that may not adequately address the algorithm’s requirements across various stages of the optimization process.
Intuitively, during the initial exploration phase, a larger proportion of the population should contribute to determining the update direction, implying fewer population divisions. Conversely, in the later exploitation phase, it is preferable for a smaller subset of individuals to influence the update direction, necessitating a larger number of population divisions. To address this, we propose a dynamic population division parameter that adjusts the number of divisions in each iteration. This ensures that the algorithm’s update direction is influenced by an appropriate number of individuals at each stage, thereby enhancing optimization efficiency. The formulation of this parameter is as follows:
$$divide\_num = \left( \frac{FEs}{MaxFEs} + 1 \right) \cdot \frac{dim}{4} \tag{15}$$
where $FEs$ is the current number of evaluations; $MaxFEs$ is the maximum number of evaluations, and $dim$ is the dimension of the problem.
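Equation (15) is a one-liner in code; rounding the result to an integer division count (and keeping it at least 1) is our own assumption, since the text does not specify the rounding rule.

```python
def divide_num(FEs, MaxFEs, dim):
    """Dynamic division count of Equation (15); rounding to an integer
    count and flooring at 1 are our own assumptions."""
    return max(1, round((FEs / MaxFEs + 1.0) * dim / 4.0))
```

For a 30-dimensional problem the count grows from about $dim/4$ at the start of the run to about $dim/2$ at the end, so progressively fewer individuals survive the divisions and influence the update direction.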

3.3. The Proposed CCMGO

This section introduces the proposed CCMGO algorithm. First, CCMGO initializes the MGO parameters and generates the initial population. Then, the algorithm proceeds with population updating using the original MGO framework. In each iteration, the number of divisions for determining the wind direction vector is dynamically adjusted. Before concluding each iteration, the CC strategy is applied to generate a new population. This iterative process continues until the predefined maximum number of iterations is reached. The algorithm’s workflow is illustrated in Figure 2.
Algorithm 1 provides the pseudo-code for the CCMGO.
Algorithm 1 Pseudo-code of the CCMGO
Set parameters: the maximum evaluation number MaxFEs, the problem dimension dim, and the population size N
Initialize population M
FEs = 0
For i = 1 : N
  Evaluate the fitness value of M_i
  Find the global best M_best and its fitness b_cost
End For
While (FEs < MaxFEs)
   Calculate divide_num by Equation (15)        /* dynamic divisions parameter */
   Calculate the wind direction D_wind by Equation (3)
   For i = 1 : N
     Create the new search agent M_i^new as a copy of M_i
     Update M_i^new by Equation (4)
     If rand < 0.8
       Update M_i^new by Equation (9)
     End If
     If Fitness(M_i^new) < Fitness(M_best)
       M_best = M_i^new
       b_cost = Fitness(M_i^new)
     End If
   End For
   For i = 1 : N
     Update M_i using the cryptobiosis mechanism
   End For
   FEs = FEs + N
   For i = 1 : N                                /* crisscross strategy */
     Perform horizontal crossover search to update M_i
     Perform vertical crossover search to update M_i
     Update M_best
   End For
   FEs = FEs + N
End While
Return M_best
End
Four primary factors contribute to the computational complexity of CCMGO: population initialization, fitness function evaluation, particle position updates, and the crisscross (CC) strategy. Letting T denote the number of iterations, N the population size, and D the problem dimension, the overall complexity can be expressed as O(CCMGO) ≈ O(T × N) + O(T × N) + O(T × N × D) + O(T × N × D), which simplifies to O(T × N × D).
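To illustrate this O(T × N × D) loop structure end to end, the following heavily simplified driver combines an MGO-style directional move with a horizontal-crossover-style mixing step on a toy objective; it omits the dynamic divisions, dual propagation, and cryptobiosis archive, and is a schematic sketch under those simplifications, not the authors' implementation.

```python
import numpy as np

def ccmgo_sketch(fitness, dim=5, N=20, MaxFEs=4000, seed=0):
    """Greatly simplified CCMGO-style loop: per iteration, each of the N
    individuals performs an O(D) directional move plus an O(D) crossover
    mix, giving the O(T * N * D) cost discussed above."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, size=(N, dim))
    costs = np.array([fitness(x) for x in X])
    FEs = N
    while FEs < MaxFEs:
        best = X[np.argmin(costs)]
        D_wind = best - X.mean(axis=0)          # crude wind-direction proxy
        E = 1.0 - FEs / MaxFEs                  # decaying step scale
        for i in range(N):
            # MGO-style move along the wind direction.
            trial = X[i] + 2.0 * (rng.random(dim) - 0.5) * E * D_wind
            # Horizontal-crossover-style mixing with a random partner.
            k = rng.integers(N)
            r = rng.random(dim)
            c = rng.uniform(-1.0, 1.0, dim)
            trial = r * trial + (1 - r) * X[k] + c * (trial - X[k])
            cost = fitness(trial)
            FEs += 1
            if cost < costs[i]:                 # greedy selection
                X[i] = trial
                costs[i] = cost
    return X[np.argmin(costs)], float(costs.min())
```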

4. Experimental Results and Analysis

This section evaluates the performance of the proposed CCMGO algorithm using the 29 benchmark functions from the CEC2017 test suite. Experiments are conducted under fair conditions using these industry-standard benchmarks. The experimental setup consists of an Intel i5-13600KF (Intel Corporation, Santa Clara, CA, USA) processor with 32 GB of RAM, running the Windows 11 operating system. The algorithms are implemented in MATLAB 2024a. For the comparative analysis, all algorithms utilize a population size of 30, a problem dimension of 30, and a maximum of 300,000 function evaluations. Each function is executed 30 times independently, and the average and standard deviation of the resulting objective function values are reported.

4.1. Benchmark Functions Overview

This subsection describes the 29 benchmark functions from the 2017 IEEE Congress on Evolutionary Computation (CEC2017) test suite [31]. These functions are categorized into four groups: unimodal, multimodal, hybrid, and composition functions. This diverse set of functions allows for a comprehensive evaluation of the algorithm’s performance across various function landscapes. Table 2 provides a summary of the CEC2017 benchmark functions.

4.2. Performance Comparison with Other Algorithms

This section presents a comparative performance evaluation of the proposed CCMGO algorithm against nine widely used metaheuristic optimization algorithms using the CEC2017 benchmark functions. The comparative algorithms include MGO [30], WOA [33], GWO [34], MFO [35], SCA [36], PSO [37], SMA [38], BA [39], and FA [40]. All algorithms were implemented following the descriptions provided in their respective original papers to maintain methodological consistency. To ensure a fair comparison, the hyperparameter settings for each comparative algorithm were configured according to the recommendations in their original studies (as summarized in Table 3) and were not adjusted further. This means that we used the most commonly reported parameter values from their original papers and other relevant literature, providing an impartial setting for each method.
Table 4 provides a comprehensive performance comparison of CCMGO and the other nine algorithms on the CEC2017 benchmark functions. The table reports the average (Avg) and standard deviation (Std) of the fitness values for each algorithm on each benchmark function. The average value provides a measure of the overall performance of each method on each problem, and the standard deviation shows the stability of the approach when repeated. Using these two metrics provides a robust method for measuring performance. Furthermore, the Friedman test ranks (Rank) of each algorithm are presented, along with the average rank (Avg) across all functions. The Friedman test was chosen as it is a non-parametric statistical test suitable for comparing the performance of multiple algorithms across a number of problems. The “+/−/=” column denotes instances where CCMGO outperformed (+), matched (=), or underperformed (−) compared to the other algorithms. This pairwise comparison allows us to specifically see where our proposed approach shows an improvement over other standard benchmarks. By including all of these different metrics, we are able to make a rigorous and detailed assessment of the performance of the proposed method.
The results in Table 4 demonstrate that CCMGO achieves the best overall performance, attaining the lowest average rank of 1.6207 among all algorithms on the CEC2017 benchmark suite. Specifically, CCMGO outperforms MGO, the second-best performing algorithm with an average rank of 2.6552, on 18 out of 29 functions. This superior performance and lower average rank highlight CCMGO’s enhanced optimization reliability and effectiveness in tackling complex optimization problems. The specific performance of CCMGO in comparison to the other benchmark algorithms, combined with the lower average rank of our approach, demonstrates the validity and improved performance of CCMGO compared to other similar methods. This suggests that our approach can be generalized to other problem domains and may provide improvements over other methods.
Table 5 further substantiates the superior performance of CCMGO on the CEC2017 benchmark suite. The Wilcoxon signed-rank test results, presented in Table 5, reveal statistically significant differences (p < 0.05) between CCMGO and the other algorithms for most of the benchmark functions. These findings strongly support the conclusion that CCMGO offers significantly better optimization performance compared to the other algorithms, reinforcing its reliability and effectiveness for challenging optimization tasks.
Figure 3 illustrates the convergence curves of CCMGO compared to the other nine algorithms on the benchmark functions. The horizontal axis represents the number of function evaluations, while the vertical axis represents the best fitness value achieved.
The convergence curves clearly demonstrate that CCMGO consistently achieves lower fitness values compared to the other algorithms across the benchmark functions. This indicates that CCMGO converges to better solutions more efficiently, requiring fewer function evaluations to reach comparable or superior fitness levels. This observation highlights CCMGO’s effective exploration and exploitation of the search space and its ability to avoid premature convergence to local optima. In summary, the incorporation of the crisscross strategy significantly enhances the search performance of MGO, enabling it to outperform competing algorithms on the benchmark functions.

5. Application to Production Optimization

The objective of reservoir production optimization is to determine the optimal control settings for each well to maximize NPV. However, the large number of wells and production cycles leads to a combinatorial explosion of possible solutions, resulting in a high-dimensional, NP-hard optimization problem. This complexity makes evolutionary algorithms well suited for tackling such challenges. In this study, we use the Eclipse reservoir simulator to apply CCMGO to a three-channel reservoir model and compare its performance against several widely used metaheuristic algorithms. For this study, nonlinear constraints associated with oilfield production are disregarded, and the NPV, as defined in Equation (16), serves as the objective function.
$$NPV(x, z) = \sum_{t=1}^{n} \frac{\Delta t \cdot \left( Q_{o,t} \cdot r_o - Q_{w,t} \cdot r_w - Q_{i,t} \cdot r_i \right)}{(1 + b)^{p_t}} \tag{16}$$
where $x$ denotes the set of variables to be optimized; $z$ denotes the state parameters of the model; $n$ denotes the total simulation time; $Q_{o,t}$, $Q_{w,t}$, and $Q_{i,t}$ are the rates of oil production, water production, and water injection, respectively; $r_o$ denotes the oil revenue; $r_w$ and $r_i$ denote the costs of treating and injecting water, respectively; $b$ is the annual interest rate, and $p_t$ is the number of years elapsed.
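Equation (16) can be sketched directly in code; in the paper the rates come from the reservoir simulator, whereas here they are plain inputs, and the default prices, step length, and function name are illustrative choices drawn from Section 5.1.

```python
def npv(rates, r_o=80.0, r_w=5.0, r_i=5.0, b=0.0, dt=360.0):
    """Sketch of the NPV objective in Equation (16).

    rates : list of (Q_o, Q_w, Q_i) tuples, one per control step (STB/day)
    r_o   : oil revenue (USD/STB); r_w, r_i: water treatment/injection costs
    b     : annual interest rate; dt: control-step length in days
    """
    total = 0.0
    for t, (q_o, q_w, q_i) in enumerate(rates, start=1):
        cash = dt * (q_o * r_o - q_w * r_w - q_i * r_i)
        years = t * dt / 360.0              # p_t, elapsed years at step t
        total += cash / (1.0 + b) ** years  # discounted cash flow
    return total
```

With b = 0, as assumed in Section 5.1, the discount factor is 1 and NPV reduces to the undiscounted sum of per-step cash flows.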

5.1. Three-Channel Model

The three-channel reservoir model is a typical heterogeneous two-dimensional reservoir with four injection wells and nine production wells arranged in a five-point pattern. The model is discretized into 625 grid blocks; each block is 100 ft on a side and 20 ft thick, and all blocks have a porosity of 0.2, as shown in Table 6. These are standard values used in other studies and can be considered typical for this kind of problem. To ensure our results are comparable with the literature, we adopted the same values as in reference [24].
The optimization variables in this production optimization problem are the water injection rate of each injection well and the fluid production rate of each production well. The injection rate ranges from 0 to 500 STB/day, and the production rate ranges from 0 to 200 STB/day. The reservoir is produced for 1800 days with a control step of 360 days; with 13 wells and 5 control steps, the decision vector therefore has 65 dimensions.
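With 4 injectors, 9 producers, and five 360-day control steps, the 65-dimensional decision vector and its bounds can be laid out as in the following sketch (the flat ordering by step and well is an assumption for illustration, as are the names):

```python
import numpy as np

N_INJ, N_PROD, N_STEPS = 4, 9, 5
DIM = (N_INJ + N_PROD) * N_STEPS  # 13 wells x 5 control steps = 65 variables

# Per-step bounds: injection 0-500 STB/day, production 0-200 STB/day
lb = np.zeros(DIM)
ub = np.tile(
    np.concatenate([np.full(N_INJ, 500.0), np.full(N_PROD, 200.0)]),
    N_STEPS,
)

def split_controls(x):
    """Reshape a flat decision vector into (step, well) control schedules."""
    x = np.asarray(x).reshape(N_STEPS, N_INJ + N_PROD)
    return x[:, :N_INJ], x[:, N_INJ:]  # injection rates, production rates
```

An optimizer such as CCMGO would then sample candidate vectors within `[lb, ub]` and pass the per-step schedules to the reservoir simulator to obtain the NPV.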
NPV serves as the fitness function for this optimization problem and is computed with the following economic parameters: an oil price of 80.0 USD/STB, a water injection cost of 5.0 USD/STB, and a water treatment cost of 5.0 USD/STB. To simplify the model, the annual interest rate is assumed to be 0%.

5.2. Analysis and Discussion of the Experimental Results

This subsection provides a detailed analysis of the experimental results for optimizing the three-channel oil reservoir production model. The performance of CCMGO is compared with six other optimization algorithms: MGO, MFO, GWO, PSO, SMA, and WOA. Each algorithm was run five times independently, and the mean, standard deviation (Std), and best and worst objective values are reported and discussed.
Table 7 summarizes the experimental results for each algorithm. CCMGO achieves the highest mean value (9.4969 × 10⁷) among all algorithms, highlighting its superior ability to consistently deliver high-quality solutions. Moreover, its standard deviation (1.7636 × 10⁶) is relatively low compared to MFO (3.2705 × 10⁶) and GWO (3.8653 × 10⁶), indicating greater stability and reliability. These results demonstrate CCMGO's effectiveness in exploring and exploiting the search space and its robustness in maintaining consistent performance under varying conditions.
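For reference, the statistics reported in Table 7 are the standard summary over repeated independent runs; a minimal sketch (the function name is ours):

```python
import statistics

def summarize(run_npvs):
    """Mean, sample standard deviation, best and worst NPV over
    repeated independent runs (maximization problem)."""
    return {
        "mean": statistics.mean(run_npvs),
        "std": statistics.stdev(run_npvs),   # sample std over the runs
        "best": max(run_npvs),               # best = highest NPV
        "worst": min(run_npvs),
    }
```

Because only five runs are available per algorithm, the sample standard deviation is a fairly coarse stability indicator; the best/worst spread gives a complementary view.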
Figure 4 shows the NPV progression of CCMGO and six other algorithms over 100 iterations, with the horizontal axis representing the iteration count and the vertical axis showing the corresponding NPV values.
CCMGO exhibits the fastest convergence and consistently achieves the highest NPV among all algorithms. It rapidly approaches its peak performance within the first 30 iterations and maintains a clear advantage throughout the optimization process. In contrast, MGO converges more slowly and achieves slightly lower NPV values than CCMGO by the end of the iterations. The other algorithms, including MFO, GWO, SMA, PSO, and WOA, show varying degrees of suboptimal performance. MFO, GWO, and SMA achieve moderate results, while PSO and WOA converge significantly slower and yield lower final NPV values.
In summary, the results demonstrate the effectiveness of CCMGO in optimizing the oil reservoir production model. CCMGO achieves the highest NPV and converges faster than the other algorithms, confirming its superior performance.

6. Conclusions

This study introduced CCMGO, an enhanced MGO algorithm incorporating a crisscross (CC) strategy and a dynamic population-division parameter. The CC strategy promotes information exchange among individuals within the population, thereby increasing offspring diversity and accelerating convergence toward optimal solutions, while the dynamic population-division parameter balances exploration and exploitation, leading to a more efficient search process.
The performance of CCMGO was rigorously evaluated on the CEC2017 benchmark suite against nine prominent metaheuristic optimization algorithms. CCMGO consistently outperformed the other algorithms across a variety of benchmark functions, achieving an average rank of 1.6207 compared to MGO's 2.6552, and beating MGO outright on 18 of the 29 functions. CCMGO was also applied to a real-world reservoir production optimization problem, where it achieved the highest mean NPV of 9.4969 × 10⁷ among all compared algorithms while maintaining a relatively low standard deviation of 1.7636 × 10⁶, demonstrating both superior performance and stability. This practical application illustrates CCMGO's potential for solving complex, real-world optimization challenges.
Future research will focus on enhancing the scalability of CCMGO for high-dimensional problems and extending its capabilities to address multi-objective optimization tasks. We also plan to explore the application of CCMGO in a wider range of real-world optimization problems to further demonstrate its adaptability and effectiveness.

Author Contributions

T.Y.: Conceptualization, Software, Data Curation, Investigation, Writing—Original Draft, Project Administration; T.L.: Methodology, Writing—Original Draft, Writing—Review and Editing, Validation, Formal Analysis, Supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gao, K.; Cao, Z.; Zhang, L.; Chen, Z.; Han, Y.; Pan, Q. A Review on Swarm Intelligence and Evolutionary Algorithms for Solving Flexible Job Shop Scheduling Problems. IEEE/CAA J. Autom. Sin. 2019, 6, 904–916. [Google Scholar] [CrossRef]
  2. Shan, W.; Hu, H.; Cai, Z.; Chen, H.; Liu, H.; Wang, M.; Teng, Y. Multi-Strategies Boosted Mutative Crow Search Algorithm for Global Tasks: Cases of Continuous and Discrete Optimization. J. Bionic Eng. 2022, 19, 1830–1849. [Google Scholar] [CrossRef]
  3. Huang, X.; Hu, H.; Wang, J.; Yuan, B.; Dai, C.; Ablameyk, S.V. Dynamic Strongly Convex Sparse Operator with Learning Mechanism for Sparse Large-Scale Multi-Objective Optimization. In Proceedings of the 2024 6th International Conference on Data-driven Optimization of Complex Systems (DOCS), Hangzhou, China, 16–18 August 2024; pp. 121–127. [Google Scholar]
  4. Nelder, J.A.; Mead, R. A Simplex Method for Function Minimization. Comput. J. 1965, 7, 308–313. [Google Scholar] [CrossRef]
  5. Bellman, R. Dynamic Programming. Science 1966, 153, 34–37. [Google Scholar] [CrossRef]
  6. Meza, J.C. Steepest Descent. WIREs Comput. Stat. 2010, 2, 719–722. [Google Scholar] [CrossRef]
  7. Polyak, B.T. The Conjugate Gradient Method in Extremal Problems. USSR Comput. Math. Math. Phys. 1969, 9, 94–112. [Google Scholar] [CrossRef]
  8. Bottou, L.; Curtis, F.E.; Nocedal, J. Optimization Methods for Large-Scale Machine Learning. SIAM Rev. 2018, 60, 223–311. [Google Scholar] [CrossRef]
  9. Garg, T.; Kaur, G.; Rana, P.S.; Cheng, X. Enhancing Road Traffic Prediction Using Data Preprocessing Optimization. J. Circuits Syst. Comput. 2024, 2550045. [Google Scholar] [CrossRef]
  10. Hu, H.; Shan, W.; Tang, Y.; Heidari, A.A.; Chen, H.; Liu, H.; Wang, M.; Escorcia-Gutierrez, J.; Mansour, R.F.; Chen, J. Horizontal and Vertical Crossover of Sine Cosine Algorithm with Quick Moves for Optimization and Feature Selection. J. Comput. Des. Eng. 2022, 9, 2524–2555. [Google Scholar] [CrossRef]
  11. Dréo, J.; Pétrowski, A.; Siarry, P.; Taillard, E. Metaheuristics for Hard Optimization: Methods and Case Studies; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006; ISBN 978-3-540-30966-6. [Google Scholar]
  12. Janga Reddy, M.; Nagesh Kumar, D. Evolutionary Algorithms, Swarm Intelligence Methods, and Their Applications in Water Resources Engineering: A State-of-the-Art Review. H2Open J. 2021, 3, 135–188. [Google Scholar] [CrossRef]
  13. Hu, H.; Shan, W.; Chen, J.; Xing, L.; Heidari, A.A.; Chen, H.; He, X.; Wang, M. Dynamic Individual Selection and Crossover Boosted Forensic-Based Investigation Algorithm for Global Optimization and Feature Selection. J. Bionic Eng. 2023, 20, 2416–2442. [Google Scholar] [CrossRef]
  14. Holland, J.H. Genetic Algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  15. Price, K.; Storn, R.M.; Lampinen, J.A. Differential Evolution: A Practical Approach to Global Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  16. Tang, D. Spherical Evolution for Solving Continuous Optimization Problems. Appl. Soft Comput. 2019, 81, 105499. [Google Scholar] [CrossRef]
  17. Poli, R.; Kennedy, J.; Blackwell, T. Particle Swarm Optimization. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  18. Dorigo, M.; Birattari, M.; Stutzle, T. Ant Colony Optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  19. Akay, B.; Karaboga, D. Artificial Bee Colony Algorithm for Large-Scale Problems and Engineering Design Optimization. J. Intell. Manuf. 2012, 23, 1001–1014. [Google Scholar] [CrossRef]
  20. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  21. Hu, H.; Wang, J.; Huang, X.; Ablameyko, S.V. An Integrated Online-Offline Hybrid Particle Swarm Optimization Framework for Medium Scale Expensive Problems*. In Proceedings of the 2024 6th International Conference on Data-driven Optimization of Complex Systems (DOCS), Hangzhou, China, 16–18 August 2024; pp. 25–32. [Google Scholar]
  22. Foroud, T.; Baradaran, A.; Seifi, A. A Comparative Evaluation of Global Search Algorithms in Black Box Optimization of Oil Production: A Case Study on Brugge Field. J. Petrol. Sci. Eng. 2018, 167, 131–151. [Google Scholar] [CrossRef]
  23. Yu, J.; Jafarpour, B. Active Learning for Well Control Optimization with Surrogate Models. SPE J. 2022, 27, 2668–2688. [Google Scholar] [CrossRef]
  24. Chen, G.; Zhang, K.; Xue, X.; Zhang, L.; Yao, J.; Sun, H.; Fan, L.; Yang, Y. Surrogate-Assisted Evolutionary Algorithm with Dimensionality Reduction Method for Water Flooding Production Optimization. J. Petrol. Sci. Eng. 2020, 185, 106633. [Google Scholar] [CrossRef]
  25. Chen, G.; Zhang, K.; Zhang, L.; Xue, X.; Ji, D.; Yao, C.; Yao, J.; Yang, Y. Global and Local Surrogate-Model-Assisted Differential Evolution for Waterflooding Production Optimization. SPE J. 2020, 25, 105–118. [Google Scholar] [CrossRef]
  26. Cuadros Bohorquez, J.F.; Tovar, L.P.; Wolf Maciel, M.R.; Melo, D.C.; Maciel Filho, R. Surrogate-Model-Based, Particle Swarm Optimization, and Genetic Algorithm Techniques Applied to the Multiobjective Operational Problem of the Fluid Catalytic Cracking Process. Chem. Eng. Commun. 2020, 207, 612–631. [Google Scholar] [CrossRef]
  27. de Oliveira, L.C.; Afonso, S.M.B.; Horowitz, B. Global/Local Optimization Strategies Combined for Waterflooding Problems. J. Braz. Soc. Mech. Sci. Eng. 2016, 38, 2051–2062. [Google Scholar] [CrossRef]
  28. Zhang, K.; Zhao, X.; Chen, G.; Zhao, M.; Wang, J.; Yao, C.; Sun, H.; Yao, J.; Wang, W.; Zhang, G. A Double-Model Differential Evolution for Constrained Waterflooding Production Optimization. J. Pet. Sci. Eng. 2021, 207, 109059. [Google Scholar] [CrossRef]
  29. Zhao, Z.; Luo, S. A Crisscross-Strategy-Boosted Water Flow Optimizer for Global Optimization and Oil Reservoir Production. Biomimetics 2024, 9, 20. [Google Scholar] [CrossRef] [PubMed]
  30. Zheng, B.; Chen, Y.; Wang, C.; Heidari, A.A.; Liu, L.; Chen, H. The Moss Growth Optimization (MGO): Concepts and Performance. J. Comput. Des. Eng. 2024, 11, 184–221. [Google Scholar] [CrossRef]
  31. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; Technical Report; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Republic of Korea; Nanyang Technological University: Singapore, 2017. [Google Scholar]
  32. Meng, A.; Chen, Y.; Yin, H.; Chen, S. Crisscross Optimization Algorithm and Its Application. Knowl.-Based Syst. 2014, 67, 218–229. [Google Scholar] [CrossRef]
  33. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  34. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  35. Mirjalili, S. Moth-Flame Optimization Algorithm: A Novel Nature-Inspired Heuristic Paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  36. Mirjalili, S. SCA: A Sine Cosine Algorithm for Solving Optimization Problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  37. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  38. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime Mould Algorithm: A New Method for Stochastic Optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  39. Yang, X.-S.; He, X. Bat Algorithm: Literature Review and Applications. Int. J. Bio-Inspired Comput. 2013, 5, 141–149. [Google Scholar] [CrossRef]
  40. Yang, X.-S.; He, X. Firefly Algorithm: Recent Advances and Applications. Int. J. Swarm Intell. 2013, 1, 36–50. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the MGO.
Figure 2. Flowchart of the CCMGO.
Figure 3. Convergence curves of the CCMGO on benchmarks with other algorithms.
Figure 4. Convergence of NPV values for the different algorithms over iterations.
Table 1. Recent years’ advances in optimization methodologies for petroleum production.
SADE-Sammon [24]: Chen et al. proposed a new framework for oil reservoir production optimization, combining surrogate-assisted evolutionary algorithms and Sammon mapping for dimensionality reduction to improve the efficiency of expected NPV maximization. It outperformed classical evolutionary algorithms and other dimensionality reduction methods.
GLSADE [25]: Chen et al. proposed the global and local surrogate-model-assisted differential evolution (GLSADE) for waterflooding production optimization. It refines the surrogate model to focus on promising regions, showing better NPV and faster convergence than traditional methods on benchmarks and real-world applications.
PSO, GA [26]: Bohorquez et al. used stochastic methods (GA and PSO) with surrogate models for FCCU multi-objective optimization. PSO outperformed GA in naphtha yield (3% increase) with fewer evaluations. Stochastic optimization was better than deterministic optimization for FCCU design and planning, aiding refinery profit and compliance.
GA-SAO [27]: Oliveira et al. proposed a hybrid optimization strategy for dynamic waterflooding management, maximizing NPV by optimizing well allocation rates and switching times. This approach synergistically combined GA for global search with sequential approximation optimization (SAO) for local refinement, effectively identifying optimal well management strategies and demonstrating that increased operational flexibility enhances NPV. The incorporation of cycle duration variables reduced the dimensionality of the design space while maintaining recovery efficiency.
CSDE [28]: Zhang et al. proposed CSDE for waterflooding optimization, handling nonlinear inequality constraints. A two-stage method with an SVM for feasible solution identification and an RBF surrogate model for objective function approximation was used. CSDE improved efficiency and NPV, surpassing traditional and single-model evolutionary algorithms.
CCWFO [29]: Zhao et al. developed an enhanced water flow optimizer (CCWFO) by incorporating a cross-search strategy to accelerate convergence and improve the accuracy of the original water flow optimizer (WFO). Evaluated against CEC2017 benchmarks, CCWFO demonstrated superior global optimization capabilities compared to other metaheuristic algorithms. The application of CCWFO to a three-channel reservoir model yielded a higher NPV within equivalent evaluation limits, establishing it as a robust alternative to classical evolutionary algorithms for reservoir production optimization.
Table 2. CEC2017 benchmark functions.
Function | Function Name | Class | Optimum
F1 | Shifted and Rotated Bent Cigar Function | Unimodal | 100
F2 | Shifted and Rotated Zakharov Function | Unimodal | 300
F3 | Shifted and Rotated Rosenbrock’s Function | Multimodal | 400
F4 | Shifted and Rotated Rastrigin’s Function | Multimodal | 500
F5 | Shifted and Rotated Expanded Schaffer’s F6 Function | Multimodal | 600
F6 | Shifted and Rotated Lunacek Bi-Rastrigin Function | Multimodal | 700
F7 | Shifted and Rotated Non-Continuous Rastrigin’s Function | Multimodal | 800
F8 | Shifted and Rotated Lévy Function | Multimodal | 900
F9 | Shifted and Rotated Schwefel’s Function | Multimodal | 1000
F10 | Hybrid Function 1 (N = 3) | Hybrid | 1100
F11 | Hybrid Function 2 (N = 3) | Hybrid | 1200
F12 | Hybrid Function 3 (N = 3) | Hybrid | 1300
F13 | Hybrid Function 4 (N = 4) | Hybrid | 1400
F14 | Hybrid Function 5 (N = 4) | Hybrid | 1500
F15 | Hybrid Function 6 (N = 4) | Hybrid | 1600
F16 | Hybrid Function 6 (N = 5) | Hybrid | 1700
F17 | Hybrid Function 6 (N = 5) | Hybrid | 1800
F18 | Hybrid Function 6 (N = 5) | Hybrid | 1900
F19 | Hybrid Function 6 (N = 6) | Hybrid | 2000
F20 | Composition Function 1 (N = 3) | Composition | 2100
F21 | Composition Function 2 (N = 3) | Composition | 2200
F22 | Composition Function 3 (N = 4) | Composition | 2300
F23 | Composition Function 4 (N = 4) | Composition | 2400
F24 | Composition Function 5 (N = 5) | Composition | 2500
F25 | Composition Function 6 (N = 5) | Composition | 2600
F26 | Composition Function 7 (N = 6) | Composition | 2700
F27 | Composition Function 8 (N = 6) | Composition | 2800
F28 | Composition Function 9 (N = 3) | Composition | 2900
F29 | Composition Function 10 (N = 3) | Composition | 3000
Table 3. Hyperparameter settings of comparative algorithms.
Name | Parameters
CCMGO | w = 2; rec_num = 10; divide_num = [dim/4, dim]; d1 = 0.2
MGO | w = 2; rec_num = 10; divide_num = dim/4; d1 = 0.2
WOA | a1 = [2, 0]; a2 = [1, 2]; b = 1
GWO | a = [2, 0]
MFO | b = 1; t = [−1, 1]; a = [−1, −2]
SCA | a = 2
PSO | Vmax = 6; Wmax = 0.9; Wmin = 0.2; C1 = 2; C2 = 2
SMA | a = [2, 0]; vb = [−2, 2]; E = [0, 2]
BA | Qmin = 0; Qmax = 2
FA | alpha = 0.5; betamin = 0.2; gamma = 1
Table 4. Results of the CCMGO and the other algorithms on CEC2017.

Algorithm | F1 Avg | F1 Std | F2 Avg | F2 Std | F3 Avg | F3 Std
CCMGO | 9.3875 × 10⁴ | 1.2429 × 10⁵ | 6.3469 × 10³ | 1.8785 × 10³ | 4.9078 × 10² | 1.4145 × 10¹
MGO | 1.6156 × 10⁵ | 2.8369 × 10⁵ | 4.7837 × 10⁴ | 9.1011 × 10³ | 4.9353 × 10² | 1.1080 × 10¹
WOA | 3.1479 × 10⁶ | 2.1521 × 10⁶ | 1.5459 × 10⁵ | 6.5052 × 10⁴ | 5.5521 × 10² | 4.3391 × 10¹
GWO | 2.0741 × 10⁹ | 1.1617 × 10⁹ | 3.1738 × 10⁴ | 1.1031 × 10⁴ | 5.9004 × 10² | 1.0153 × 10²
MFO | 9.3736 × 10⁹ | 7.0162 × 10⁹ | 8.0164 × 10⁴ | 5.6794 × 10⁴ | 1.3305 × 10³ | 8.6241 × 10²
SCA | 1.2688 × 10¹⁰ | 2.4698 × 10⁹ | 3.6850 × 10⁴ | 5.6118 × 10³ | 1.4518 × 10³ | 2.4771 × 10²
PSO | 3.1410 × 10³ | 4.1096 × 10³ | 3.0000 × 10² | 2.1517 × 10⁻³ | 4.6319 × 10² | 2.4383 × 10¹
SMA | 2.6843 × 10⁹ | 9.4623 × 10⁸ | 3.6783 × 10⁴ | 8.9412 × 10³ | 6.2258 × 10² | 6.1424 × 10¹
BA | 5.4272 × 10⁵ | 2.8167 × 10⁵ | 3.0011 × 10² | 1.1756 × 10⁻¹ | 4.7692 × 10² | 2.6001 × 10¹
FA | 1.4463 × 10¹⁰ | 1.7921 × 10⁹ | 5.9894 × 10⁴ | 6.6363 × 10³ | 1.3073 × 10³ | 1.4159 × 10²

Algorithm | F4 Avg | F4 Std | F5 Avg | F5 Std | F6 Avg | F6 Std
CCMGO | 5.4730 × 10² | 8.9643 × 10⁰ | 6.0000 × 10² | 4.3867 × 10⁻³ | 7.8150 × 10² | 9.7607 × 10⁰
MGO | 5.6413 × 10² | 8.5239 × 10⁰ | 6.0000 × 10² | 1.2025 × 10⁻⁴ | 8.0114 × 10² | 1.2546 × 10¹
WOA | 7.8507 × 10² | 6.2600 × 10¹ | 6.6882 × 10² | 1.3300 × 10¹ | 1.2357 × 10³ | 1.0049 × 10²
GWO | 5.9923 × 10² | 2.8902 × 10¹ | 6.0988 × 10² | 4.3030 × 10⁰ | 8.5780 × 10² | 3.5598 × 10¹
MFO | 7.1971 × 10² | 5.1124 × 10¹ | 6.4053 × 10² | 1.2079 × 10¹ | 1.1219 × 10³ | 2.2063 × 10²
SCA | 7.7487 × 10² | 2.0651 × 10¹ | 6.4852 × 10² | 4.6381 × 10⁰ | 1.1262 × 10³ | 4.2682 × 10¹
PSO | 6.9375 × 10² | 4.0546 × 10¹ | 6.4544 × 10² | 7.4126 × 10⁰ | 9.9383 × 10² | 5.1109 × 10¹
SMA | 7.1472 × 10² | 3.6475 × 10¹ | 6.4161 × 10² | 7.6309 × 10⁰ | 1.0714 × 10³ | 5.9583 × 10¹
BA | 8.4918 × 10² | 5.4933 × 10¹ | 6.7349 × 10² | 9.6446 × 10⁰ | 1.6510 × 10³ | 1.8809 × 10²
FA | 7.5784 × 10² | 1.0742 × 10¹ | 6.4407 × 10² | 2.9626 × 10⁰ | 1.3864 × 10³ | 4.1624 × 10¹

Algorithm | F7 Avg | F7 Std | F8 Avg | F8 Std | F9 Avg | F9 Std
CCMGO | 8.5046 × 10² | 7.1227 × 10⁰ | 9.0542 × 10² | 5.6323 × 10⁰ | 3.8173 × 10³ | 3.8526 × 10²
MGO | 8.6955 × 10² | 1.3256 × 10¹ | 9.3020 × 10² | 2.2531 × 10¹ | 4.6229 × 10³ | 3.8254 × 10²
WOA | 1.0102 × 10³ | 5.4531 × 10¹ | 7.7603 × 10³ | 2.2539 × 10³ | 6.2311 × 10³ | 6.9373 × 10²
GWO | 8.9081 × 10² | 2.3697 × 10¹ | 2.0043 × 10³ | 6.3529 × 10² | 4.4260 × 10³ | 1.3461 × 10³
MFO | 1.0072 × 10³ | 4.3452 × 10¹ | 7.4238 × 10³ | 2.0970 × 10³ | 5.5716 × 10³ | 8.0899 × 10²
SCA | 1.0524 × 10³ | 1.6918 × 10¹ | 5.3439 × 10³ | 7.4913 × 10² | 8.0736 × 10³ | 3.8373 × 10²
PSO | 9.4496 × 10² | 2.5764 × 10¹ | 4.3403 × 10³ | 9.9910 × 10² | 5.1451 × 10³ | 6.8974 × 10²
SMA | 9.6968 × 10² | 2.8388 × 10¹ | 5.7536 × 10³ | 9.0981 × 10² | 5.6918 × 10³ | 6.1943 × 10²
BA | 1.0441 × 10³ | 4.9251 × 10¹ | 1.2690 × 10⁴ | 4.6140 × 10³ | 5.5111 × 10³ | 8.1083 × 10²
FA | 1.0523 × 10³ | 1.1106 × 10¹ | 5.3093 × 10³ | 5.7708 × 10² | 8.1169 × 10³ | 3.0811 × 10²

Algorithm | F10 Avg | F10 Std | F11 Avg | F11 Std | F12 Avg | F12 Std
CCMGO | 1.1777 × 10³ | 2.7442 × 10¹ | 7.6021 × 10⁵ | 6.7582 × 10⁵ | 2.3250 × 10⁴ | 1.3882 × 10⁴
MGO | 1.1888 × 10³ | 2.4376 × 10¹ | 7.8080 × 10⁵ | 5.0623 × 10⁵ | 3.0908 × 10⁴ | 2.6628 × 10⁴
WOA | 1.4723 × 10³ | 7.1039 × 10¹ | 4.3190 × 10⁷ | 2.9598 × 10⁷ | 1.3858 × 10⁵ | 7.4986 × 10⁴
GWO | 1.9076 × 10³ | 9.3785 × 10² | 7.4988 × 10⁷ | 1.0179 × 10⁸ | 4.8309 × 10⁶ | 2.5806 × 10⁷
MFO | 5.0242 × 10³ | 5.8710 × 10³ | 5.5507 × 10⁸ | 6.3529 × 10⁸ | 1.1620 × 10⁸ | 3.2070 × 10⁸
SCA | 2.1162 × 10³ | 3.1725 × 10² | 1.1021 × 10⁹ | 2.6941 × 10⁸ | 3.8766 × 10⁸ | 1.7146 × 10⁸
PSO | 1.1985 × 10³ | 2.3743 × 10¹ | 3.3605 × 10⁴ | 1.8907 × 10⁴ | 1.6430 × 10⁴ | 1.5195 × 10⁴
SMA | 1.5378 × 10³ | 9.6734 × 10¹ | 9.1920 × 10⁷ | 4.5310 × 10⁷ | 2.1682 × 10⁶ | 2.3372 × 10⁶
BA | 1.2941 × 10³ | 5.9040 × 10¹ | 2.5194 × 10⁶ | 1.9557 × 10⁶ | 3.1477 × 10⁵ | 1.2367 × 10⁵
FA | 3.4537 × 10³ | 4.6384 × 10² | 1.5031 × 10⁹ | 2.7479 × 10⁸ | 5.5605 × 10⁸ | 1.5926 × 10⁸

Algorithm | F13 Avg | F13 Std | F14 Avg | F14 Std | F15 Avg | F15 Std
CCMGO | 6.6078 × 10³ | 4.2189 × 10³ | 1.1788 × 10⁴ | 6.3326 × 10³ | 2.1765 × 10³ | 1.3227 × 10²
MGO | 1.5084 × 10⁴ | 9.8346 × 10³ | 2.2119 × 10⁴ | 1.8794 × 10⁴ | 2.1519 × 10³ | 1.6588 × 10²
WOA | 8.7090 × 10⁵ | 9.4998 × 10⁵ | 6.6626 × 10⁴ | 3.6297 × 10⁴ | 3.4919 × 10³ | 4.3085 × 10²
GWO | 2.2354 × 10⁵ | 4.4007 × 10⁵ | 2.0655 × 10⁵ | 5.1690 × 10⁵ | 2.3664 × 10³ | 2.7197 × 10²
MFO | 4.3377 × 10⁵ | 1.2584 × 10⁶ | 3.0167 × 10⁷ | 1.6486 × 10⁸ | 3.1544 × 10³ | 3.3005 × 10²
SCA | 1.4238 × 10⁵ | 7.9330 × 10⁴ | 1.2004 × 10⁷ | 1.0574 × 10⁷ | 3.6654 × 10³ | 1.8768 × 10²
PSO | 7.4037 × 10³ | 4.8823 × 10³ | 6.3968 × 10³ | 6.1663 × 10³ | 2.8344 × 10³ | 3.6717 × 10²
SMA | 1.6322 × 10⁵ | 9.4226 × 10⁴ | 1.7638 × 10⁴ | 6.0717 × 10³ | 2.8870 × 10³ | 3.1743 × 10²
BA | 6.3676 × 10³ | 3.0735 × 10³ | 1.2503 × 10⁵ | 9.6707 × 10⁴ | 3.3392 × 10³ | 4.9046 × 10²
FA | 2.3415 × 10⁵ | 1.0609 × 10⁵ | 6.1143 × 10⁷ | 2.9214 × 10⁷ | 3.4202 × 10³ | 1.7189 × 10²

Algorithm | F16 Avg | F16 Std | F17 Avg | F17 Std | F18 Avg | F18 Std
CCMGO | 1.8418 × 10³ | 4.7839 × 10¹ | 1.8777 × 10⁵ | 1.0793 × 10⁵ | 8.0208 × 10³ | 5.0355 × 10³
MGO | 1.8872 × 10³ | 5.3940 × 10¹ | 3.0783 × 10⁵ | 2.1066 × 10⁵ | 1.5241 × 10⁴ | 1.1343 × 10⁴
WOA | 2.5695 × 10³ | 2.6375 × 10² | 2.9573 × 10⁶ | 3.3986 × 10⁶ | 3.4772 × 10⁶ | 2.2369 × 10⁶
GWO | 1.9990 × 10³ | 1.5469 × 10² | 6.8374 × 10⁵ | 8.2071 × 10⁵ | 4.8725 × 10⁵ | 6.5325 × 10⁵
MFO | 2.5272 × 10³ | 3.1294 × 10² | 2.3283 × 10⁶ | 4.6075 × 10⁶ | 1.6749 × 10⁷ | 3.8286 × 10⁷
SCA | 2.4031 × 10³ | 1.5192 × 10² | 2.8800 × 10⁶ | 1.6831 × 10⁶ | 2.3285 × 10⁷ | 1.1446 × 10⁷
PSO | 2.4411 × 10³ | 3.0918 × 10² | 1.2966 × 10⁵ | 8.1880 × 10⁴ | 1.0706 × 10⁴ | 8.8187 × 10³
SMA | 2.2685 × 10³ | 2.0445 × 10² | 6.7114 × 10⁵ | 6.1655 × 10⁵ | 4.2415 × 10⁵ | 5.4044 × 10⁵
BA | 2.8397 × 10³ | 2.6454 × 10² | 1.7296 × 10⁵ | 1.4578 × 10⁵ | 6.3761 × 10⁵ | 2.8491 × 10⁵
FA | 2.5193 × 10³ | 1.3317 × 10² | 4.0789 × 10⁶ | 1.7465 × 10⁶ | 9.8337 × 10⁷ | 3.0392 × 10⁷

Algorithm | F19 Avg | F19 Std | F20 Avg | F20 Std | F21 Avg | F21 Std
CCMGO | 2.2133 × 10³ | 7.2694 × 10¹ | 2.3522 × 10³ | 1.0086 × 10¹ | 2.8794 × 10³ | 1.1681 × 10³
MGO | 2.2436 × 10³ | 8.0062 × 10¹ | 2.3582 × 10³ | 3.2713 × 10¹ | 2.9752 × 10³ | 1.3490 × 10³
WOA | 2.7923 × 10³ | 1.7679 × 10² | 2.5710 × 10³ | 6.1411 × 10¹ | 6.2553 × 10³ | 1.9373 × 10³
GWO | 2.3771 × 10³ | 1.5767 × 10² | 2.3791 × 10³ | 1.9638 × 10¹ | 4.6370 × 10³ | 1.6141 × 10³
MFO | 2.6655 × 10³ | 2.1973 × 10² | 2.4983 × 10³ | 4.3194 × 10¹ | 6.3968 × 10³ | 1.5134 × 10³
SCA | 2.5862 × 10³ | 1.1281 × 10² | 2.5561 × 10³ | 2.0667 × 10¹ | 8.9634 × 10³ | 1.6339 × 10³
PSO | 2.6441 × 10³ | 2.1825 × 10² | 2.4721 × 10³ | 4.4254 × 10¹ | 4.9468 × 10³ | 2.2506 × 10³
SMA | 2.4495 × 10³ | 1.4614 × 10² | 2.4708 × 10³ | 2.4979 × 10¹ | 4.0565 × 10³ | 2.1651 × 10³
BA | 3.0151 × 10³ | 2.8197 × 10² | 2.6002 × 10³ | 4.7096 × 10¹ | 6.8688 × 10³ | 1.7185 × 10³
FA | 2.6088 × 10³ | 8.7682 × 10¹ | 2.5384 × 10³ | 1.4015 × 10¹ | 3.8333 × 10³ | 1.2889 × 10²

Algorithm | F22 Avg | F22 Std | F23 Avg | F23 Std | F24 Avg | F24 Std
CCMGO | 2.7098 × 10³ | 2.2201 × 10¹ | 2.8826 × 10³ | 1.1959 × 10¹ | 2.8869 × 10³ | 9.6118 × 10⁻¹
MGO | 2.7200 × 10³ | 1.2828 × 10¹ | 2.8934 × 10³ | 1.3821 × 10¹ | 2.8873 × 10³ | 6.4718 × 10⁻¹
WOA | 3.0563 × 10³ | 8.1959 × 10¹ | 3.1756 × 10³ | 8.7993 × 10¹ | 2.9499 × 10³ | 3.8644 × 10¹
GWO | 2.7476 × 10³ | 3.3271 × 10¹ | 2.9316 × 10³ | 4.7673 × 10¹ | 2.9811 × 10³ | 3.6181 × 10¹
MFO | 2.8323 × 10³ | 3.9181 × 10¹ | 2.9938 × 10³ | 3.4765 × 10¹ | 3.3149 × 10³ | 4.5989 × 10²
SCA | 2.9828 × 10³ | 2.9394 × 10¹ | 3.1635 × 10³ | 2.4587 × 10¹ | 3.2066 × 10³ | 5.3344 × 10¹
PSO | 3.3244 × 10³ | 1.2112 × 10² | 3.3639 × 10³ | 1.8302 × 10² | 2.8813 × 10³ | 1.2226 × 10¹
SMA | 2.8518 × 10³ | 3.4349 × 10¹ | 3.0024 × 10³ | 2.1021 × 10¹ | 3.0105 × 10³ | 4.2694 × 10¹
BA | 3.3377 × 10³ | 1.4683 × 10² | 3.3551 × 10³ | 1.3067 × 10² | 2.9097 × 10³ | 2.2350 × 10¹
FA | 2.9113 × 10³ | 1.0986 × 10¹ | 3.0672 × 10³ | 1.2063 × 10¹ | 3.5529 × 10³ | 1.0610 × 10²

Algorithm | F25 Avg | F25 Std | F26 Avg | F26 Std | F27 Avg | F27 Std
CCMGO | 4.0360 × 10³ | 4.4910 × 10² | 3.2084 × 10³ | 5.6067 × 10⁰ | 3.2191 × 10³ | 1.1333 × 10¹
MGO | 4.1057 × 10³ | 4.4247 × 10² | 3.2110 × 10³ | 5.7906 × 10⁰ | 3.2263 × 10³ | 1.2693 × 10¹
WOA | 7.1806 × 10³ | 9.9826 × 10² | 3.3544 × 10³ | 9.4240 × 10¹ | 3.2966 × 10³ | 3.3966 × 10¹
GWO | 4.5703 × 10³ | 3.1382 × 10² | 3.2494 × 10³ | 2.6475 × 10¹ | 3.4278 × 10³ | 7.1809 × 10¹
MFO | 6.1219 × 10³ | 4.4450 × 10² | 3.2484 × 10³ | 2.2024 × 10¹ | 4.2467 × 10³ | 7.9835 × 10²
SCA | 6.9624 × 10³ | 3.2108 × 10² | 3.4048 × 10³ | 4.8784 × 10¹ | 3.8418 × 10³ | 1.3263 × 10²
PSO | 6.8713 × 10³ | 2.1881 × 10³ | 3.3191 × 10³ | 3.3374 × 10² | 3.1604 × 10³ | 5.7726 × 10¹
SMA | 5.1885 × 10³ | 6.1139 × 10² | 3.2600 × 10³ | 3.0310 × 10¹ | 3.4233 × 10³ | 4.0169 × 10¹
BA | 9.0997 × 10³ | 2.1628 × 10³ | 3.4647 × 10³ | 1.4833 × 10² | 3.1460 × 10³ | 5.8797 × 10¹
FA | 6.5233 × 10³ | 1.4455 × 10² | 3.3370 × 10³ | 1.4475 × 10¹ | 3.8857 × 10³ | 8.5131 × 10¹

Algorithm | F28 Avg | F28 Std | F29 Avg | F29 Std
CCMGO | 3.6150 × 10³ | 8.3397 × 10¹ | 4.3106 × 10⁴ | 2.7155 × 10⁴
MGO | 3.6077 × 10³ | 6.6509 × 10¹ | 7.3720 × 10⁴ | 4.3047 × 10⁴
WOA | 4.6753 × 10³ | 3.6858 × 10² | 1.1157 × 10⁷ | 7.7763 × 10⁶
GWO | 3.7680 × 10³ | 1.7323 × 10² | 5.9954 × 10⁶ | 8.1820 × 10⁶
MFO | 4.1237 × 10³ | 2.7095 × 10² | 1.3106 × 10⁶ | 3.7924 × 10⁶
SCA | 4.5999 × 10³ | 1.9125 × 10² | 7.0834 × 10⁷ | 2.8824 × 10⁷
PSO | 4.0102 × 10³ | 3.2291 × 10² | 5.2893 × 10³ | 2.9526 × 10³
SMA | 4.0647 × 10³ | 2.2025 × 10² | 6.6360 × 10⁶ | 4.7522 × 10⁶
BA | 5.0038 × 10³ | 4.1198 × 10² | 9.7148 × 10⁵ | 6.7934 × 10⁵
FA | 4.6803 × 10³ | 1.5299 × 10² | 1.0007 × 10⁸ | 2.3892 × 10⁷

Overall Rank
Algorithm | Rank | +/=/− | Avg
CCMGO | 1 | ~ | 1.6207
MGO | 2 | 18/10/1 | 2.6552
WOA | 8 | 29/0/0 | 7.4828
GWO | 4 | 29/0/0 | 4.4828
MFO | 7 | 29/0/0 | 7.0000
SCA | 9 | 29/0/0 | 7.9310
PSO | 3 | 16/4/9 | 3.7931
SMA | 5 | 29/0/0 | 5.3448
BA | 6 | 24/2/3 | 6.6552
FA | 10 | 29/0/0 | 8.0345
Table 5. The p-values of the CCMGO versus the other algorithms on CEC2017.

Function | vs. MGO | vs. WOA | vs. GWO | vs. MFO
F1 | 1.20 × 10⁻¹ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F2 | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 4.73 × 10⁻⁶
F3 | 3.09 × 10⁻¹ | 1.73 × 10⁻⁶ | 1.92 × 10⁻⁶ | 1.92 × 10⁻⁶
F4 | 2.35 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F5 | 8.31 × 10⁻⁴ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F6 | 4.29 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F7 | 1.49 × 10⁻⁵ | 1.73 × 10⁻⁶ | 2.35 × 10⁻⁶ | 1.73 × 10⁻⁶
F8 | 2.60 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F9 | 5.22 × 10⁻⁶ | 1.73 × 10⁻⁶ | 2.56 × 10⁻² | 1.92 × 10⁻⁶
F10 | 3.87 × 10⁻² | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F11 | 7.81 × 10⁻¹ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F12 | 3.60 × 10⁻¹ | 1.73 × 10⁻⁶ | 3.88 × 10⁻⁶ | 3.52 × 10⁻⁶
F13 | 4.90 × 10⁻⁴ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F14 | 1.71 × 10⁻³ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 2.35 × 10⁻⁶
F15 | 7.34 × 10⁻¹ | 1.92 × 10⁻⁶ | 2.11 × 10⁻³ | 1.73 × 10⁻⁶
F16 | 6.84 × 10⁻³ | 1.73 × 10⁻⁶ | 1.64 × 10⁻⁵ | 1.73 × 10⁻⁶
F17 | 2.56 × 10⁻² | 3.18 × 10⁻⁶ | 4.20 × 10⁻⁴ | 7.71 × 10⁻⁴
F18 | 7.27 × 10⁻³ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 2.60 × 10⁻⁶
F19 | 1.47 × 10⁻¹ | 1.73 × 10⁻⁶ | 1.48 × 10⁻⁴ | 1.92 × 10⁻⁶
F20 | 2.18 × 10⁻² | 1.73 × 10⁻⁶ | 4.45 × 10⁻⁵ | 1.73 × 10⁻⁶
F21 | 4.95 × 10⁻² | 4.73 × 10⁻⁶ | 3.72 × 10⁻⁵ | 6.34 × 10⁻⁶
F22 | 1.57 × 10⁻² | 1.73 × 10⁻⁶ | 2.60 × 10⁻⁵ | 1.92 × 10⁻⁶
F23 | 5.67 × 10⁻³ | 1.73 × 10⁻⁶ | 5.22 × 10⁻⁶ | 1.73 × 10⁻⁶
F24 | 8.59 × 10⁻² | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.92 × 10⁻⁶
F25 | 6.29 × 10⁻¹ | 1.92 × 10⁻⁶ | 1.24 × 10⁻⁵ | 1.73 × 10⁻⁶
F26 | 1.31 × 10⁻¹ | 1.73 × 10⁻⁶ | 1.92 × 10⁻⁶ | 1.73 × 10⁻⁶
F27 | 3.00 × 10⁻² | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F28 | 9.59 × 10⁻¹ | 1.73 × 10⁻⁶ | 2.22 × 10⁻⁴ | 2.35 × 10⁻⁶
F29 | 1.38 × 10⁻³ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.80 × 10⁻⁵

Function | vs. SCA | vs. PSO | vs. SMA | vs. BA | vs. FA
F1 | 1.73 × 10⁻⁶ | 4.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 5.22 × 10⁻⁶ | 1.73 × 10⁻⁶
F2 | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F3 | 1.73 × 10⁻⁶ | 3.18 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.48 × 10⁻² | 1.73 × 10⁻⁶
F4 | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F5 | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F6 | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F7 | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F8 | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F9 | 1.73 × 10⁻⁶ | 4.29 × 10⁻⁶ | 1.92 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F10 | 1.73 × 10⁻⁶ | 1.40 × 10⁻² | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F11 | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 7.69 × 10⁻⁶ | 1.73 × 10⁻⁶
F12 | 1.73 × 10⁻⁶ | 9.37 × 10⁻² | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F13 | 1.73 × 10⁻⁶ | 6.00 × 10⁻¹ | 1.73 × 10⁻⁶ | 6.58 × 10⁻¹ | 1.73 × 10⁻⁶
F14 | 1.73 × 10⁻⁶ | 5.32 × 10⁻³ | 1.11 × 10⁻³ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F15 | 1.73 × 10⁻⁶ | 2.60 × 10⁻⁶ | 1.92 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F16 | 1.73 × 10⁻⁶ | 1.92 × 10⁻⁶ | 2.35 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F17 | 1.73 × 10⁻⁶ | 1.96 × 10⁻² | 2.22 × 10⁻⁴ | 4.17 × 10⁻¹ | 1.73 × 10⁻⁶
F18 | 1.73 × 10⁻⁶ | 5.04 × 10⁻¹ | 3.52 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F19 | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 2.35 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F20 | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F21 | 1.73 × 10⁻⁶ | 2.58 × 10⁻³ | 5.71 × 10⁻⁴ | 3.88 × 10⁻⁶ | 1.48 × 10⁻⁴
F22 | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F23 | 1.73 × 10⁻⁶ | 2.60 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F24 | 1.73 × 10⁻⁶ | 8.94 × 10⁻⁴ | 1.73 × 10⁻⁶ | 1.02 × 10⁻⁵ | 1.73 × 10⁻⁶
F25 | 1.73 × 10⁻⁶ | 1.36 × 10⁻⁵ | 4.29 × 10⁻⁶ | 2.35 × 10⁻⁶ | 1.73 × 10⁻⁶
F26 | 1.73 × 10⁻⁶ | 5.71 × 10⁻² | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F27 | 1.73 × 10⁻⁶ | 6.32 × 10⁻⁵ | 1.73 × 10⁻⁶ | 2.84 × 10⁻⁵ | 1.73 × 10⁻⁶
F28 | 1.73 × 10⁻⁶ | 3.88 × 10⁻⁶ | 1.92 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
F29 | 1.73 × 10⁻⁶ | 1.92 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶ | 1.73 × 10⁻⁶
Table 6. Properties of the three-channel model.
Property | Value
Grid Size | 25 × 25 × 1
Depth | 4800 ft
Initial Pressure | 4000 psi
Porosity | 0.2
Compressibility | 6.9 × 10⁻⁵ psi⁻¹
Initial Water Saturation | 0.2
Viscosity | 2.2 cP
Table 7. The results of CCMGO and the other algorithms on the oil reservoir production optimization.
Algorithm | Mean NPV (USD) | Std | Best | Worst
CCMGO | 9.4969 × 10⁷ | 1.7636 × 10⁶ | 9.7758 × 10⁷ | 9.2169 × 10⁷
MGO | 9.3972 × 10⁷ | 2.2134 × 10⁶ | 9.8885 × 10⁷ | 9.0026 × 10⁷
MFO | 8.9384 × 10⁷ | 3.2705 × 10⁶ | 9.5890 × 10⁷ | 8.4333 × 10⁷
GWO | 9.0767 × 10⁷ | 3.8653 × 10⁶ | 9.8620 × 10⁷ | 8.3161 × 10⁷
PSO | 8.2722 × 10⁷ | 1.9501 × 10⁶ | 8.6192 × 10⁷ | 7.7980 × 10⁷
SMA | 9.0067 × 10⁷ | 2.9355 × 10⁶ | 9.6942 × 10⁷ | 8.4874 × 10⁷
WOA | 8.6832 × 10⁷ | 2.8288 × 10⁶ | 9.2802 × 10⁷ | 8.0419 × 10⁷
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
