Article

CCESC: A Crisscross-Enhanced Escape Algorithm for Global and Reservoir Production Optimization

Faculty of Land Resource Engineering, Kunming University of Science and Technology, Kunming 650500, China
*
Author to whom correspondence should be addressed.
Biomimetics 2025, 10(8), 529; https://doi.org/10.3390/biomimetics10080529
Submission received: 2 July 2025 / Revised: 2 August 2025 / Accepted: 8 August 2025 / Published: 12 August 2025

Abstract

Global optimization problems, ubiquitous in scientific research and engineering applications, necessitate sophisticated algorithms adept at navigating intricate, high-dimensional search landscapes. The Escape (ESC) algorithm, inspired by the complex dynamics of crowd evacuation behavior—where individuals exhibit calm, herding, or panic responses—offers a compelling nature-inspired paradigm for addressing these challenges. While ESC demonstrates a strong intrinsic balance between exploration and exploitation, opportunities exist to enhance its inter-agent communication and search trajectory diversification. This paper introduces an advanced bio-inspired algorithm, termed the Crisscross Escape Algorithm (CCESC), which strategically incorporates a Crisscross (CC) information exchange mechanism. This CC strategy, by promoting multi-directional interaction and information sharing among individuals irrespective of their behavioral group (calm, herding, panic), fosters a richer exploration of the solution space, helps to circumvent local optima, and accelerates convergence towards superior solutions. The CCESC’s performance is extensively validated on the demanding CEC2017 benchmark suite, alongside several standard engineering design problems, and compared against a comprehensive set of prominent metaheuristic algorithms. Experimental results consistently reveal CCESC’s superior or highly competitive performance across a wide array of benchmark functions. Furthermore, CCESC is effectively applied to a complex reservoir production optimization problem, demonstrating its capacity to achieve significantly improved Net Present Value (NPV) over other established methods. This successful application underscores CCESC’s robustness and efficacy as a powerful optimization tool for tackling multifaceted real-world problems, particularly in reservoir production optimization within complex sedimentary environments.

1. Introduction

Optimization problems are fundamental to progress in a wide range of scientific disciplines and industrial sectors [1,2,3]. They address the ubiquitous challenge of identifying the most effective or efficient solution from a vast set of possibilities. The core objective is typically to enhance performance, minimize costs, or maximize a desired outcome [4]. Such problems are pervasive, with applications ranging from designing resilient engineering structures [5] and fine-tuning complex manufacturing processes to formulating profitable financial strategies [6] and optimizing resource allocation in environmental management [7,8]. However, translating real-world problems into tractable mathematical models often presents significant complexities. These challenges typically include high-dimensional search spaces, non-linear relationships, numerous local optima (multimodality), and intricate, often conflicting, constraints [9]. The complexity can be further compounded by dynamic or uncertain operating environments.
Historically, various traditional mathematical optimization methods have been developed to address these challenges [10]. Prominent among these are gradient-based algorithms, such as the steepest descent [11] and conjugate gradient methods [12], which leverage derivative information to iteratively approach an optimum. While highly effective for convex and continuously differentiable problems, their performance degrades significantly when encountering non-convex landscapes, discontinuities, or numerous local optima, where they are prone to converging to suboptimal solutions [13]. Other classical approaches include linear programming, which excels in well-defined linear systems [14], and dynamic programming, which can optimally solve certain sequential decision-making problems [15]. However, a critical limitation of many traditional techniques is their reliance on specific problem characteristics, such as differentiability, linearity, or convexity. Furthermore, they often struggle with high-dimensionality [16]. As the number of decision variables increases—a common feature of contemporary optimization tasks—these methods can suffer from the “curse of dimensionality” [17]. This phenomenon leads to an exponential increase in computational cost, which severely diminishes their ability to explore the vast search space and renders them impractical for many complex, large-scale scenarios [18].
In response to these limitations, researchers have increasingly turned to biomimetics: the interdisciplinary field of science and engineering that learns from and mimics strategies found in nature to solve complex human problems. Within the computational domain, a powerful application of this principle is the development of metaheuristic algorithms, which have emerged as versatile alternatives for solving complex optimization problems [19]. Unlike traditional methods, metaheuristics are generally not constrained by a problem’s mathematical properties, such as convexity or differentiability [20]. Instead, they often draw inspiration from natural phenomena—such as evolutionary processes, swarm behaviors, or physical annealing—to implement stochastic search strategies. This inherent flexibility and their derivative-free nature enable them to effectively navigate rugged, multimodal search landscapes [21]. Consequently, they are particularly well-suited for high-dimensional, non-linear, and black-box optimization problems, where the analytical form of the objective function is either unknown or too complex to be used [22].
Metaheuristic algorithms are primarily categorized by their source of inspiration, with the most prominent classes being evolutionary, physics-based, and bio-behavioral algorithms [21]. Despite their diverse origins, these methods typically share a common operational framework: they initialize a population of candidate solutions and iteratively refine this population using specialized operators. These operators are designed to balance exploration of the search space with exploitation of promising regions, ultimately returning an optimal or near-optimal solution when a termination criterion is met. Evolutionary algorithms are inspired by Darwinian principles of natural selection and genetic inheritance [23]. Representative examples include the Genetic Algorithm (GA) [21] and Differential Evolution (DE) [24]. In contrast, physics-based algorithms draw analogies from physical laws. Notable examples include Simulated Annealing (SA) [25], which mimics the metallurgical annealing process, and the Gravitational Search Algorithm (GSA) [26]. Finally, bio-behavioral algorithms emulate the collective or individual behaviors of organisms. This category is dominated by swarm intelligence (SI) algorithms, which model decentralized problem-solving. Well-known SI methods include Particle Swarm Optimization (PSO) [27], Ant Colony Optimization (ACO) [28], and the Artificial Bee Colony (ABC) algorithm [29].
Despite the proliferation and success of diverse metaheuristic algorithms, the development of novel and more efficient optimization techniques remains an active area of research. This ongoing pursuit is fundamentally motivated by the “No Free Lunch” (NFL) theorem [30]. The NFL theorem posits that no single algorithm can universally outperform all others across the full spectrum of optimization problems [31]. In other words, an algorithm that excels at solving one class of problems (e.g., continuous, unimodal functions) may perform poorly on another (e.g., discrete, highly multimodal functions), and vice versa. This theorem highlights an inherent trade-off in algorithm design: the strategies that confer an advantage for certain problem structures inevitably result in disadvantages on others. Consequently, the NFL theorem underscores the necessity of developing new algorithms or enhancing existing ones. Such efforts are crucial for targeting the specific characteristics of particular problem domains or for addressing the identified weaknesses of current methods to achieve superior practical performance [26].
Reservoir production optimization is a prominent and highly complex application domain where the limitations of traditional methods are apparent and metaheuristics have shown significant potential [32]. In petroleum engineering, the strategic management of oil and gas reservoirs is paramount to maximizing hydrocarbon recovery, ensuring economic viability, and minimizing environmental impact [33]. This optimization process involves complex decisions regarding well placement, drilling trajectories, injection strategies (e.g., water or gas flooding rates and locations), and individual well production rates over extended periods [34]. The primary objectives typically include maximizing the project’s Net Present Value (NPV), enhancing the Ultimate Oil Recovery (UOR), or maintaining specific production plateaus. Achieving these objectives, however, is exceptionally challenging. Reservoir systems are inherently characterized by significant geological heterogeneity, complex multiphase fluid dynamics, considerable subsurface uncertainty, and numerous operational constraints [35]. These factors result in highly non-linear, high-dimensional, and computationally expensive simulation models. Consequently, exhaustive search and traditional gradient-based approaches are often rendered impractical or ineffective for this class of problems.
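At its core, the NPV objective described above is a discounted cash-flow sum over the production schedule. The following minimal sketch illustrates the idea; the function name, prices, and costs are illustrative assumptions, not figures from the paper:

```python
def npv(oil_rates, water_prod_rates, water_inj_rates, dt_years,
        oil_price=80.0, water_prod_cost=5.0, water_inj_cost=2.0,
        discount_rate=0.1):
    """Simplified NPV of a production schedule (illustrative only).

    Rates are in barrels/day per control step; dt_years gives each
    step's duration in years. Cash flow per step is revenue from oil
    minus water handling and injection costs, discounted to time zero.
    """
    total, t = 0.0, 0.0
    for qo, qwp, qwi, dt in zip(oil_rates, water_prod_rates,
                                water_inj_rates, dt_years):
        t += dt  # discount at the end of each control step
        cash = (oil_price * qo - water_prod_cost * qwp
                - water_inj_cost * qwi) * dt * 365.0
        total += cash / (1.0 + discount_rate) ** t
    return total
```

In a production optimization loop, the decision vector (e.g., per-well rates over control steps) is mapped through a reservoir simulator to these rate profiles, and the optimizer maximizes the resulting NPV.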
Metaheuristic algorithms offer distinct advantages in navigating such complex problems. Their ability to perform a global search without relying on gradient information, coupled with their capacity to handle non-linearities and constraints (often through penalty functions or specialized mechanisms), makes them robust in the face of noisy or uncertain objective functions. These features render metaheuristics particularly well-suited for reservoir production optimization [36]. By effectively exploring the vast decision space, these algorithms can identify superior and often non-intuitive operational strategies. Ultimately, the development of robust and efficient optimization schemes for reservoir production yields substantial economic and operational benefits [37].
Scholarly efforts in reservoir production optimization have largely focused on two complementary avenues: enhancing computational efficiency and improving the core optimization algorithms. One stream of research aims to mitigate the high computational cost of full-physics simulations. An et al. [38] demonstrated this by integrating reservoir engineering principles into Particle Swarm Optimization (PSO), finding that using engineering knowledge to constrain the search space most effectively accelerated convergence, reducing iterations by over 24%. Others have bypassed full simulators entirely by using fast proxy models. For example, Gu et al. [39] employed an Extreme Gradient Boosting model to predict water cut, while Chen et al. [40] developed the Global and Local Surrogate-Model-Assisted Differential Evolution (GLSADE) algorithm, which iteratively refines a surrogate to focus the search. Both studies successfully coupled these surrogates with a DE optimizer to achieve improved outcomes with greater efficiency. A parallel research thrust focuses on enhancing the internal mechanisms of metaheuristic algorithms. Luo et al. [41] addressed premature convergence in the Water Flow Optimizer (WFO) by integrating a cross-search strategy, creating CCWFO, which yielded a higher NPV on a three-channel reservoir model. Similarly, Gao et al. [42] developed MGFOA by augmenting the Fruit Fly Optimization Algorithm (FOA) with multi-swarm and greedy selection mechanisms, leading to significantly better performance in oil and gas production optimization. Highlighting this trend, Li et al. [43] enhanced the bio-inspired Moss Growth Optimization (MGO) by incorporating a Crisscross (CC) strategy and dynamic grouping. Their resulting CCMGO algorithm not only excelled on CEC2017 benchmarks but also achieved a superior NPV in a reservoir application, explicitly demonstrating the value of improving an algorithm’s internal information exchange and adaptive balancing.
While extensive research has focused on surrogate model development or heuristic integration, often defaulting to established optimizers like DE and PSO, the aforementioned studies reveal a critical insight: significant performance gains are consistently achieved by enhancing the core search behavior of the optimizer itself. The repeated success of hybridization, particularly with strategies like Crisscross search, underscores the pivotal role of the optimization engine’s architecture. This motivates the present work, which focuses not on modifying the simulation framework, but on advancing the state of the art by improving a novel, bio-inspired algorithm—the ESC algorithm—through a similar, powerful hybridization strategy.
The ESC algorithm, introduced by Chen et al. in 2025 [44], is a metaheuristic inspired by the dynamic behaviors observed during emergency crowd evacuations. Drawing from real-world crisis scenarios where individuals exhibit diverse responses, ESC translates these human behavioral dynamics into a computational optimization framework. At its core, particularly during the exploration phase, ESC partitions its solution population into three principal behavioral archetypes: the calm crowd, which promotes methodical search and provides guidance; the herding crowd, which facilitates convergence by following perceived group movement toward promising regions; and the panic crowd, which introduces erratic movements to enhance search diversity and escape local optima.
From an evolutionary computing perspective, the motivation for enhancing ESC stems from its unique, albeit flawed, approach to population diversity and information sharing. Many metaheuristics struggle to maintain a healthy balance between convergence (intensification) and diversity (diversification) throughout the search. ESC attempts to solve this by structurally segregating these functions into distinct subgroups (calm, herding, panic). This explicit partitioning is a notable strength, as it provides a clear framework for managing different search behaviors. However, this same structure introduces a significant limitation: the information flow between these subgroups is highly constrained and largely unidirectional (e.g., herding follows calm/panic). This segregation can lead to “information silos,” where good solutions discovered by one subgroup are not effectively propagated throughout the entire population, thereby limiting the generation of high-quality, diverse offspring and increasing the risk of premature convergence within a specific behavioral mode. The primary motivation for this work, therefore, is to address this structural weakness. We hypothesize that by introducing a mechanism that facilitates comprehensive information exchange across these behavioral subgroups, we can significantly enhance the algorithm’s overall performance.
In this paper, we introduce an enhanced bio-inspired metaheuristic known as the Crisscross Escape Algorithm (CCESC). The proposed CCESC method innovatively integrates the core framework of the ESC algorithm with a Crisscross (CC) strategy. The CC strategy, characterized by its capacity to facilitate multi-directional information exchange, generates offspring through interactions among individuals from different dimensions or population subgroups. This mechanism directly addresses the challenge of limited information flow within the original ESC algorithm. By promoting wider communication and more diverse solution generation across the entire population, regardless of an individual’s assigned behavioral role (i.e., calm, herding, or panic), the CC strategy enhances the ESC’s ability to explore uncharted areas, prevent premature convergence, and expedite the discovery of high-quality solutions.
The main contributions of this paper are summarized as follows:
  • A novel CCESC algorithm is proposed, which synergistically integrates the crowd evacuation dynamics of ESC with a CC strategy to enhance search efficiency, solution diversity, and overall optimization performance.
  • CCESC’s performance is rigorously validated on the CEC2017 benchmarks against numerous established metaheuristics, with statistical significance confirmed by Wilcoxon signed-rank and Friedman tests.
  • The efficacy of CCESC in solving real-world problems is demonstrated through its application to reservoir production optimization, where it achieves a superior NPV and showcases its practical robustness.
The remainder of this paper is organized as follows. Section 2 briefly reviews the original ESC algorithm. Section 3 details the proposed CCESC algorithm and the integration of the Crisscross strategy. Section 4 presents the experimental setup, benchmark results, and statistical analyses. Section 5 demonstrates the application of CCESC to reservoir production optimization. Finally, Section 6 concludes the paper with a summary of the findings and suggestions for future research.

2. The Original ESC

The Escape algorithm (ESC), proposed in 2025 by Chen et al. [44], is a metaheuristic algorithm inspired by the complex dynamics of crowd behavior during emergency evacuations. ESC models the distinct responses observed in such scenarios by primarily dividing its population into three behavioral groups during its exploration phase: the calm crowd, representing rational search and guidance; the herding crowd, facilitating convergence by following collective movement; and the panic crowd, introducing randomness for broad exploration and escaping local optima. As the search progresses, the algorithm conceptually transitions towards an exploitation focus, where the population converges towards optimal solutions, analogous to a crowd finding the safest exit points. By simulating these diverse behavioral strategies, the core mathematical model of the ESC algorithm is structured as follows:
1. Panic index: To simulate the decreasing chaos as an evacuation progresses, a “Panic Index” P ( t ) is calculated at each iteration t . This index starts high and decreases over time, modulating the intensity of exploratory movements, particularly those influenced by panic or random adjustments. The P ( t ) can be calculated as follows:
P(t) = cos(πt / (6T))
where t is the current iteration and T is the maximum number of iterations.
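The panic index decays smoothly from 1 at the start of the search to cos(π/6) ≈ 0.866 at the final iteration, gently damping exploratory step sizes. A minimal sketch (function name is ours):

```python
import math

def panic_index(t, T):
    """Panic index P(t) = cos(pi * t / (6 * T)), Equation (1).

    Monotonically decreasing on [0, T]: P(0) = 1,
    P(T) = cos(pi/6) ~= 0.866, modeling chaos subsiding
    as the evacuation (search) progresses.
    """
    return math.cos(math.pi * t / (6 * T))
```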
2. Calm crowd behavior: Simulates rational individuals moving towards a collective decision point ( C j , the center of the calm group) while making minor adjustments ( v c , j ). This promotes methodical search and guidance. A binary variable m1 introduces stochasticity, simulating potential “congestion” where some dimensions might not update. The movement is scaled by the P ( t ) , and this behavior can be modeled as follows:
x_i,j^new = x_i,j + m_1 × (w_1 × (C_j − x_i,j) + v_c,j) × P(t)
where C_j is the center (mean position) of the calm group in dimension j, x_i,j^new is the updated position, x_i,j is the current position, m_1 is a binary variable (0 or 1 with equal probability, Bernoulli distribution), w_1 is the adaptive Levy weight for dimension j, and v_c,j is mathematically modeled as shown below:
v_c,j = R_c,j − x_i,j + ε_j
where R_c,j = r_min,j^c + r_i,j × (r_max,j^c − r_min,j^c) is a randomly generated position within the calm group’s bounds, r_max,j^c and r_min,j^c respectively represent the upper and lower bounds of the j-th dimension for the calm group, and ε_j = z_j / 50 (z_j ~ N(0, 1)) represents a slight adjustment to the individual’s movement.
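A minimal sketch of one calm-crowd update follows. The function name is ours; for illustration the adaptive Levy weight w_1 is simplified to a uniform random number, and the calm group's per-dimension bounds are taken as the group's min/max positions (the paper does not pin these details down here):

```python
import numpy as np

def calm_update(x, calm_group, lb, ub, P, rng):
    """One calm-crowd position update (Eqs. (2)-(3)), simplified.

    x: current position (1-D array); calm_group: positions of the
    calm subgroup (2-D array); lb/ub: search-space bounds; P: panic
    index; rng: numpy Generator.
    """
    C = calm_group.mean(axis=0)                    # calm-group center C_j
    r_min = calm_group.min(axis=0)                 # per-dimension group bounds
    r_max = calm_group.max(axis=0)
    R_c = r_min + rng.random(x.size) * (r_max - r_min)
    eps = rng.standard_normal(x.size) / 50.0       # slight adjustment eps_j
    v_c = R_c - x + eps
    m1 = rng.integers(0, 2, size=x.size)           # Bernoulli "congestion" mask
    w1 = rng.random(x.size)                        # simplified weight (not Levy)
    x_new = x + m1 * (w1 * (C - x) + v_c) * P
    return np.clip(x_new, lb, ub)
```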
3. Herding crowd behavior: The herding group’s individuals follow both the calm group’s direction ( C j ) and the potentially disruptive influence of a random panic individual ( x p j ). This balances convergence towards promising areas with the possibility of being pulled towards exploratory directions. This behavior can be modeled as follows:
x_i,j^new = x_i,j + m_1 × (w_1 × (C_j − x_i,j) + m_2 × w_2 × (x_p,j − x_i,j) + v_h,j × P(t))
where x_p,j represents the j-th dimension of a randomly selected individual from the panic group, m_1 and m_2 are binary variables (0 or 1 with equal probability following a Bernoulli distribution), w_1 and w_2 denote the adaptive Levy weights for the j-th dimension, and v_h,j is the adjustment vector component specific to the herding group’s bounds, defined as above.
4. Panic crowd behavior: Simulates erratic individuals influenced by both potential exits and random directions. This mechanism drives exploration and diversification, helping to escape local optima. This behavior can be modeled as follows:
x_i,j^new = x_i,j + m_1 × (w_1 × (E_j − x_i,j) + m_2 × w_2 × (x_rand,j − x_i,j) + v_p,j × P(t))
where E_j is the j-th dimension of a randomly selected individual from the Elite Pool, x_rand,j represents the j-th dimension of a randomly selected individual from the population, and v_p,j is the adjustment vector component specific to the panic group’s bounds, defined as above.
5. Exploitation Phase: In the second half of the iterations, the focus shifts entirely to exploitation. All individuals are treated as ‘calm’ and refine their positions by moving towards both elite solutions and random solutions. This simulates convergence towards identified optimal exits while maintaining some diversity to avoid stagnation. This phase can be modeled as follows:
x_i,j^new = x_i,j + m_1 × w_1 × (E_j − x_i,j) + m_2 × w_2 × (x_rand,j − x_i,j)
In summary, the ESC algorithm commences by initializing its parameters and randomly generating a population of N individuals within the defined search space. The fitness of each individual is then evaluated, the population is sorted, and the top existing individuals are stored in an Elite Pool E. The main iterative process then begins. Within each iteration t (from 1 to T), a Panic Index P(t) is computed (Equation (1)). If the algorithm is in the exploration phase (t/T ≤ 0.5), the population is sorted by fitness and divided into calm, herding (Conforming), and panic groups according to predefined proportions (c, h, p). Individuals in the calm group update their positions using Equation (2), herding group individuals use Equation (4), and panic group individuals use Equation (5). If the algorithm is in the exploitation phase (t/T > 0.5), all individuals update their positions using Equation (6). After these position updates, the fitness of each individual is re-evaluated, and a greedy selection is applied to retain the better solution between the old and new positions. Finally, the Elite Pool E is updated with the best solutions found in the current population. This iterative loop continues until the maximum number of iterations T is reached. The algorithm then returns the best solution found, typically the best individual residing in the Elite Pool. The flowchart of ESC is presented in Figure 1 of the original paper.

3. Proposed CCESC

3.1. Crisscross Strategy

To improve the ESC algorithm’s ability to avoid local optima and enhance its search capabilities, we incorporate the CC strategy, originally introduced in the Crisscross Optimization (CSO) algorithm [41]. The CC strategy facilitates superior information exchange within the population through two core mechanisms: Horizontal Crossover Search (HCS), which enables interaction between different individual solutions, and Vertical Crossover Search (VCS), which performs crossover across different dimensions of a single solution. Philosophically, this methodology is inspired by the “Doctrine of the Mean,” as it dynamically adjusts the search trajectory. The synergy between its unique crossover and selection methods enhances the algorithm’s global exploration and accelerates its convergence rate. To ensure a standard and robust implementation, the control parameters governing the CC strategy are adopted based on recommendations and common practices from the original CSO literature. The HCS and VCS mechanisms are detailed below.
Horizontal Crossover Search: The HCS operator enhances the algorithm’s exploratory performance by leveraging information from the entire population. The procedure involves randomly selecting pairs of particles and performing a crossover operation to generate new candidate solutions. The mathematical formulation for HCS is given by Equations (7) and (8).
HCS_i,j = r_1 × x_i,j + (1 − r_1) × x_k,j + c_1 × (x_i,j − x_k,j)
HCS_k,j = r_2 × x_k,j + (1 − r_2) × x_i,j + c_2 × (x_k,j − x_i,j)
where x_i,j and x_k,j are the j-th dimensions of two distinct parent particles, r_1 and r_2 are random numbers uniformly distributed in [0, 1], c_1 and c_2 are random numbers uniformly distributed in [−1, 1], and HCS_i,j and HCS_k,j are the j-th dimensions of the new offspring generated by the HCS from the two parents.
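The HCS operator can be sketched as follows (function name is ours). Note that when the two parents coincide, the expansion term c × (x_i − x_k) vanishes and both offspring reduce to the common parent:

```python
import numpy as np

def horizontal_crossover(x_i, x_k, rng):
    """Generate two HCS offspring from parents x_i, x_k (Eqs. (7)-(8)).

    r1, r2 ~ U[0, 1] blend the parents; c1, c2 ~ U[-1, 1] scale an
    expansion term along the parent difference, allowing offspring to
    land slightly outside the segment between the parents.
    """
    r1 = rng.random(x_i.size)
    r2 = rng.random(x_i.size)
    c1 = rng.uniform(-1.0, 1.0, x_i.size)
    c2 = rng.uniform(-1.0, 1.0, x_i.size)
    o_i = r1 * x_i + (1 - r1) * x_k + c1 * (x_i - x_k)
    o_k = r2 * x_k + (1 - r2) * x_i + c2 * (x_k - x_i)
    return o_i, o_k
```

In the full algorithm, each offspring then competes with its parent under greedy selection, so HCS never degrades the population's best-so-far fitness.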
Vertical Crossover Search: The VCS operator enhances the algorithm’s exploitation capabilities by leveraging information within a single individual. The procedure involves randomly selecting two distinct dimensions within each particle and performing a crossover between them. The mathematical formulation for VCS is as follows:
VCS_i,j = r_3 × x_i,d1 + (1 − r_3) × x_i,d2
where r_3 is a random number uniformly distributed in [0, 1], x_i,d1 and x_i,d2 are the values of two randomly selected dimensions of the i-th individual, and VCS_i,j denotes the newly generated value for the target dimension j. Similar to HCS, VCS employs a greedy selection strategy to determine whether the updated particle (with the new dimension value) replaces the original.
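A minimal VCS sketch (function name is ours; the target dimension is taken to be d1, one plausible reading of the operator): since the new value is a convex combination of the two chosen dimensions, it always lies between them, which helps revive dimensions stuck at a local optimum using information from other dimensions of the same individual.

```python
import numpy as np

def vertical_crossover(x_i, d1, d2, rng):
    """VCS (Eq. (9)): blend dimensions d1 and d2 of one individual.

    Writes the convex combination r3*x[d1] + (1-r3)*x[d2] into
    dimension d1 of a copy; all other dimensions are untouched.
    """
    r3 = rng.random()
    child = x_i.copy()
    child[d1] = r3 * x_i[d1] + (1 - r3) * x_i[d2]
    return child
```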

3.2. The Proposed CCESC

This section details the framework of the proposed CCESC. The algorithm begins by initializing its parameters and generating an initial population, following the standard procedure of the original ESC algorithm. Subsequently, the main iterative loop commences, where population updates are performed based on the ESC framework. This includes partitioning the population into calm, herding, and panic groups during the exploration phase and transitioning to a unified exploitation phase as the search progresses.
The key innovation of CCESC lies in the strategic integration of the CC strategy. By applying the CC strategy after the standard ESC-based behavioral updates, the algorithm facilitates multi-directional information exchange across the entire population, regardless of an individual’s assigned behavioral role (i.e., calm, herding, or panic). This mechanism enhances the ESC’s ability to explore uncharted areas, prevent premature convergence, and expedite the discovery of high-quality solutions. As will be verified by the experimental results in Section 4.2, this synergistic process leads to tangible improvements in overall optimization performance.
Within each iteration, after the primary ESC-based behavioral updates are completed, the CC strategy is applied to the current population. By facilitating enhanced information exchange—both between individuals from different behavioral groups (Horizontal Crossover) and within individuals across different dimensions (Vertical Crossover)—the CC mechanism generates a new set of candidate solutions. This synergistic process, combining ESC’s behavioral mechanics with the CC strategy’s diversification and intensification capabilities, continues until a predefined termination criterion (e.g., the maximum number of function evaluations) is met. The overall workflow of CCESC is illustrated in Figure 2.
Algorithm 1 provides the pseudo-code for the CCESC.
Algorithm 1 Pseudo-code of the CCESC
Set parameters: MaxFEs, dim, population size N
Initialize population X
FEs = 0
For i = 1 : N
    Evaluate the fitness value of x_i
    Find the global minimum x_best and its fitness
End For
Sort population by fitness in ascending order
Store the top individuals in the Elite Pool E
While (FEs < MaxFEs)
    If FEs / MaxFEs < 0.5
        Generate Panic Index by Equation (1)
        Divide population into: Calm group (proportion c), Herding group (proportion h), and Panic group (proportion p)
        Update Calm group by Equation (2)
        Update Herding group by Equation (4)
        Update Panic group by Equation (5)
    Else
        Update population by Equation (6)
    End If
    FEs = FEs + N
    For i = 1 : N                          /* CC strategy */
        Perform HCS to update x_i
        Perform VCS to update x_i
        Update x_best
    End For
    FEs = FEs + N
End While
Return x_best
End
The computational complexity of the CCESC algorithm is derived from four principal operations: population initialization, fitness function evaluation, position updates, and the CC strategy. The complexities associated with initialization and fitness evaluation are O (T × N), whereas position updates and the CC strategy exhibit a complexity of O (T × N × D). Consequently, the total computational burden is dominated by the higher-order term, yielding an overall complexity of O (T × N × D).

4. Experimental Results and Analysis

The performance of the proposed CCESC was evaluated using the 29 benchmark functions from the CEC2017 test suite. All experiments were conducted in MATLAB R2024a on a workstation with an Intel Xeon Gold 6258R CPU and 128 GB of RAM. For a fair and direct comparison, the experimental parameters were set to align with recent literature [41]: a population size (N) of 30, a problem dimension (D) of 30, and a maximum of 300,000 function evaluations (MaxFEs) as the termination criterion. The focus on 30 dimensions was intentionally chosen to provide a relevant benchmark for this study’s core application—a 40-dimensional oil reservoir problem—thus keeping the analysis clear and concise. To account for the stochastic nature of metaheuristics, 30 independent runs were performed for each function, with the mean and standard deviation (Std) of the final objective values recorded for analysis.

4.1. Benchmark Functions Overview

The benchmark problems employed in this study are drawn from the CEC2017 test suite and are categorized into four distinct types: unimodal, simple multimodal, hybrid, and composition functions. This suite, comprising 29 functions in total, is designed to provide a comprehensive evaluation of an algorithm’s performance across a diverse range of optimization landscapes. A detailed overview of these benchmark functions is provided in Table 1.

4.2. Performance Comparison with Other Algorithms

This section presents a comparative performance evaluation of the proposed CCESC against the original ESC and eight other well-established metaheuristics on the 29 benchmark functions from the CEC2017 suite. The selected peer algorithms include DE [24], GWO [45], MFO [46], SCA [47], PSO [27], the Parrot Optimizer (PO) [48], BA [49], and FA [50]. All algorithms were implemented according to their original publications, and their control parameters were configured based on common recommendations in the literature to ensure a fair comparison.
Table 2 provides a comprehensive comparison of these ten algorithms on the CEC2017 benchmark suite. For each algorithm and function, the table reports the average fitness (Avg) and standard deviation (Std) obtained over 30 independent runs. To facilitate an overall performance assessment, the table also includes the final rank of each algorithm as determined by the Friedman test. Additionally, a pairwise statistical summary (+/=/−) is provided, indicating whether CCESC performed significantly better than (+), statistically similar to (=), or worse than (−) each competing algorithm on each function. This +/=/− comparison is based on the Wilcoxon signed-rank test conducted at a significance level of α = 0.05. All best values have been highlighted in bold.
The results demonstrate that CCESC achieves the best overall performance, attaining the lowest average Friedman rank of 2.1034, ahead of ESC, which secured the second-best rank of 2.4828. The pairwise comparison reveals that CCESC significantly outperformed ESC on 11 functions, was statistically equivalent on 9, and was outperformed on the remaining 9. More broadly, the statistical tests confirm that CCESC achieves significant improvements (p < 0.05) over most competitors, including GWO, MFO, SCA, PO, BA, and FA, on the vast majority of the 29 functions. It also secures statistically significant victories over other strong performers such as DE and PSO on multiple problems. These findings confirm that the architectural modifications introduced in CCESC lead to tangible and verifiable enhancements. Furthermore, the standard deviation values for CCESC were consistently competitive, indicating robust and stable performance.
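The average Friedman ranks reported in Table 2 are obtained by ranking the algorithms on every function and averaging those ranks. A minimal sketch of that computation, using the mean fitness of CCESC, ESC, and DE on five functions (F1, F4, F7, F22, and F28) taken from Table 2, and assuming no ties:

```python
import numpy as np

# Mean fitness of CCESC, ESC, and DE (columns) on F1, F4, F7, F22, F28 (rows),
# a small excerpt from Table 2 used purely to illustrate the ranking step.
means = np.array([
    [3525.5, 3387.2, 2764.7],   # F1
    [550.37, 528.16, 613.06],   # F4
    [850.47, 826.53, 910.27],   # F7
    [2698.6, 2688.0, 2754.5],   # F22
    [3436.4, 3501.2, 3525.0],   # F28
])

# Friedman average rank: rank the algorithms on every function (1 = best,
# i.e., lowest mean fitness), then average the ranks down each column.
ranks = means.argsort(axis=1).argsort(axis=1) + 1
avg_rank = ranks.mean(axis=0)   # one average rank per algorithm
```

On this excerpt ESC obtains the lowest average rank; the full suite of 29 functions and ten algorithms, as in Table 2, reverses that picture in favor of CCESC.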
Figure 3 displays the convergence curves of all ten algorithms on nine representative benchmark functions (F4, F5, F10, F14, F15, F16, F21, F25, and F28), with CCESC highlighted by a red line. The horizontal axis represents function evaluations (FEs) up to 3 × 105, and the vertical axis shows the best fitness achieved. The variation in initial fitness values across algorithms is an expected outcome of their random initialization processes and distinct internal strategies. The random seed was intentionally not fixed during experiments to ensure the observed performance is robust and generalizable, rather than being tied to a specific starting condition. A logarithmic scale is used for the y-axis on F10, F14, F16, and F28.
The convergence plots demonstrate CCESC’s superior performance. On most functions (e.g., F4, F5, F15, F21, and F25), CCESC converges faster and to significantly better solutions than both ESC and other competitors. This is evidenced by its rapid initial fitness decline, confirming effective early exploration, and a sustained lead throughout the search process. Even on the logarithmically scaled functions, CCESC either finds the best solution or remains highly competitive (e.g., F10, F14). In nearly all cases, CCESC’s curve lies below that of the original ESC, with the performance gap being particularly pronounced on F15 and F21. These results visually confirm that the integrated CC strategy significantly enhances both the convergence speed and solution quality of the algorithm.
In summary, the convergence analysis visually confirms that incorporating the CC strategy enhances the search performance of the original ESC algorithm. This enhancement enables CCESC to locate higher-quality solutions more efficiently and robustly across a diverse set of benchmark problems.

5. Application to Production Optimization

The goal of reservoir production optimization is to identify the best control parameters for each well so as to maximize the NPV. However, the large number of wells and production cycles leads to a combinatorial explosion of candidate solutions, yielding a high-dimensional, NP-hard optimization problem. Owing to this complexity, evolutionary algorithms are particularly well suited to such problems. In this research, we use the MRST reservoir simulator to apply CCESC to a reservoir model and compare its performance with several commonly used metaheuristic algorithms. Non-linear constraints related to oilfield production are not considered, and the NPV, as specified in Equation (16), is used as the objective function.
$$
\mathrm{NPV}(x, z) = \sum_{t=1}^{n} \frac{\Delta t \left( Q_{o,t}\, r_{o} - Q_{w,t}\, r_{w} - Q_{i,t}\, r_{i} \right)}{(1 + b)^{p_{t}}}
\tag{16}
$$
where $x$ is the solution to be optimized, $z$ denotes the model state parameters, $n$ is the total number of simulation time steps, $\Delta t$ is the duration of each time step, $Q_{o,t}$, $Q_{w,t}$, and $Q_{i,t}$ are the oil production, water production, and water injection rates at step $t$, respectively, $r_{o}$, $r_{w}$, and $r_{i}$ denote the oil revenue and the per-unit costs of treating and injecting water, respectively, $b$ is the annual discount (interest) rate, and $p_{t}$ is the cumulative time in years up to step $t$.
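A minimal sketch of Equation (16) in code, assuming consistent units between rates and prices; the rates and step durations below are illustrative values, not results from this study.

```python
import numpy as np

def npv(qo, qw, qi, dt, ro=80.0, rw=5.0, ri=5.0, b=0.0):
    """NPV per Equation (16): sum of discounted per-step cash flows.

    qo, qw, qi -- per-step oil production, water production, injection rates
    dt         -- per-step durations in days
    b          -- annual discount rate (0 in this study)
    """
    qo, qw, qi, dt = map(np.asarray, (qo, qw, qi, dt))
    p = np.cumsum(dt) / 365.0                       # cumulative time in years
    cash = dt * (qo * ro - qw * rw - qi * ri)       # undiscounted step cash flow
    return float(np.sum(cash / (1.0 + b) ** p))

# 10 control steps of 200 days with constant (illustrative) rates
dt = np.full(10, 200.0)
value = npv(qo=np.full(10, 100.0), qw=np.full(10, 20.0),
            qi=np.full(10, 50.0), dt=dt)            # 15.3 million USD
```

In the actual workflow the three rate vectors come from the reservoir simulator for a given injection schedule, so each fitness evaluation costs one full simulation run.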

5.1. Reservoir Model Description

The reservoir model used in this study is a two-dimensional, heterogeneous synthetic system designed to represent a clastic depositional environment, such as a fluvial or deltaic channel system. The well configuration consists of four injection wells (INJ1–INJ4) and one central production well (PRO1), arranged in a modified five-spot pattern typical for such geological settings. The model domain is discretized into a 25 × 25 × 1 Cartesian grid, resulting in 625 active grid blocks. Each block has planar dimensions of 20 m × 20 m and a uniform thickness of 30 m. A constant porosity of 0.1 is assumed for all grid blocks, representing a moderately sorted sandstone reservoir.
The reservoir’s heterogeneity is principally defined by its permeability field, which was stochastically generated using the Karhunen–Loève (KL) expansion method. The natural logarithm of permeability, ln(K), was modeled with a mean of 3.5 and a standard deviation of 1.0. The resulting spatial distribution of log-permeability, depicted in Figure 4, reveals distinct high- and low-permeability channels that govern the fluid flow paths. For this study, a single-phase fluid (water) with a viscosity of 0.9 cP and a density of 1000 kg/m3 is considered.
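A hedged sketch of how such a field can be sampled is given below. The paper does not state the covariance kernel or truncation level of the KL expansion, so the Gaussian kernel, the correlation length, and the 50-term truncation here are assumptions for illustration only.

```python
import numpy as np

def kl_log_perm(nx=25, ny=25, mean=3.5, std=1.0, corr_len=5.0, n_terms=50, seed=1):
    """Sample a log-permeability field via a truncated Karhunen-Loeve expansion.

    Assumes a Gaussian covariance kernel (not stated in the paper) and keeps
    only the n_terms leading eigenpairs of the covariance matrix.
    """
    X, Y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    pts = np.column_stack([X.ravel(), Y.ravel()]).astype(float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    C = std**2 * np.exp(-d2 / corr_len**2)      # 625 x 625 covariance matrix
    w, v = np.linalg.eigh(C)                    # eigenpairs, ascending order
    w, v = w[::-1][:n_terms], v[:, ::-1][:, :n_terms]
    xi = np.random.default_rng(seed).standard_normal(n_terms)
    lnK = mean + v @ (np.sqrt(np.clip(w, 0.0, None)) * xi)
    return lnK.reshape(nx, ny)

field = kl_log_perm()
```

The truncation keeps the smooth, large-scale modes that produce channel-like correlated structure; the permeability itself is recovered as exp(lnK).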
The objective of this production optimization problem is to maximize the Net Present Value (NPV). The decision variables are the water injection rates of the four injection wells, which are held constant within each control step and optimized over a total simulation period of 2000 days. This period is divided into 10 discrete control steps, each lasting 200 days. The allowable injection rate for each well is constrained to the range [0, 1000] m³/day. The single production well (PRO1) is operated at a constant bottom-hole pressure (BHP) of 200 barsa. Consequently, the optimization problem involves determining the injection rate of each of the 4 wells at each of the 10 control steps, resulting in a 40-dimensional search space (4 wells × 10 steps).
The NPV serves as the fitness function and is calculated based on the following economic parameters: an oil price of 80.0 USD/STB, a water injection cost of 5.0 USD/STB, and a water treatment cost of 5.0 USD/STB. For simplicity, a discount rate of 0% per annum is assumed in this study.
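The 40-dimensional decision vector can be mapped to a well-by-step injection schedule as sketched below. The `decode` helper and its clipping-based bound handling are illustrative assumptions, not the paper's exact constraint treatment.

```python
import numpy as np

LOW, HIGH = 0.0, 1000.0      # injection-rate bounds, m^3/day
N_WELLS, N_STEPS = 4, 10     # 4 injectors x 10 control steps -> 40-D vector

def decode(x):
    """Map a flat 40-D decision vector to a (wells x steps) injection
    schedule, clipping each rate into the allowable range."""
    x = np.clip(np.asarray(x, dtype=float), LOW, HIGH)
    return x.reshape(N_WELLS, N_STEPS)

# illustrative candidate with out-of-range entries at both ends
schedule = decode(np.linspace(-100.0, 1100.0, 40))
```

Each row of the schedule is then handed to the simulator as one injector's piecewise-constant rate profile before the NPV of the candidate is evaluated.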

5.2. Analysis and Discussion of Experimental Results

This section presents a detailed performance analysis of the proposed CCESC and six other metaheuristics—ESC, DE, GWO, MFO, SCA, and PSO—on the reservoir production optimization problem. To ensure a robust comparison for this computationally intensive application, each algorithm was executed 10 times independently. The optimization was performed for 100 iterations, and key statistical metrics, including the mean, standard deviation (Std Dev), best, and worst Net Present Value (NPV) achieved, are reported in Table 3.
The results summarized in Table 3 clearly demonstrate the superior performance of CCESC. It achieved the highest mean NPV of 9.457 × 10^8 USD, underscoring its enhanced capability to consistently identify high-value production strategies for this complex reservoir model. Furthermore, CCESC exhibited a relatively low standard deviation of 1.532 × 10^7 USD, indicating greater stability and reliability than most of its peers, including DE (3.121 × 10^7), GWO (2.784 × 10^7), MFO (4.542 × 10^7), SCA (3.487 × 10^7), and PSO (3.939 × 10^7). While the original ESC also showed competitive stability (Std of 2.510 × 10^7), its mean NPV was lower. These numerical results highlight CCESC's dual strengths: it effectively explores the high-dimensional decision space of the reservoir model while efficiently exploiting promising regions to maximize economic returns with high consistency.
Figure 5 illustrates the convergence history of the NPV for CCESC and the six other algorithms over 100 iterations. The horizontal axis represents the iteration number, while the vertical axis shows the corresponding best NPV achieved, scaled by 10^8 USD.
CCESC, represented by the red curve, exhibits the most rapid convergence and consistently attains the highest NPV. It demonstrates a steep initial improvement, surpassing an NPV of 9.0 × 10^8 USD in fewer than 40 iterations, and continues to refine its solution toward the optimal value. In contrast, the original ESC shows good initial progress but quickly plateaus at a suboptimal level. While GWO and DE also display strong initial convergence, outperforming ESC in the early and middle stages, they ultimately yield lower final NPVs than CCESC. MFO and SCA converge more gradually, reaching moderate NPVs, whereas PSO exhibits the slowest convergence and achieves the lowest final NPV among all tested algorithms.
In summary, the reservoir production optimization case study highlights the practical advantages of the proposed CCESC. The algorithm not only achieves the highest average NPV but also converges faster and with greater stability than the original ESC and the other competing metaheuristics. These results confirm CCESC’s superior performance and its suitability for solving complex, real-world engineering optimization problems such as reservoir management.

6. Conclusions

This paper introduced the CCESC algorithm, a novel metaheuristic that enhances the original ESC algorithm by integrating a CC strategy. The ESC algorithm, inspired by crowd evacuation dynamics, balances exploration and exploitation by partitioning its population into calm, herding, and panic groups. The CC strategy augments this framework by promoting comprehensive information exchange across the entire population, thereby improving solution diversity and accelerating convergence, particularly on complex optimization landscapes.
The performance of CCESC was rigorously evaluated on the CEC2017 benchmark suite against the original ESC and eight other well-established metaheuristics. The experimental results demonstrated that CCESC achieved superior or highly competitive performance. Statistical analyses, including the Friedman and Wilcoxon signed-rank tests, confirmed that CCESC significantly outperformed the original ESC and many peer algorithms in terms of both solution accuracy and convergence speed. Furthermore, the practical efficacy of CCESC was validated on a challenging heterogeneous reservoir production optimization problem. In this real-world application, CCESC obtained a higher Net Present Value (NPV) and exhibited more robust convergence than the other tested algorithms, underscoring its potential for solving complex engineering tasks.
Future research will pursue several promising directions. First, we will investigate adaptive mechanisms for the control parameters of the CC strategy to further enhance performance. Second, extending CCESC to solve multi-objective optimization problems represents a key area for future development. Finally, we plan to apply CCESC to other complex, real-world challenges to further validate its versatility. These applications include optimizing robot parameters for impedance learning in human-guided interaction with unknown environments, Unmanned Aerial Vehicle (UAV) path planning, and hyperparameter tuning for deep learning models.

Author Contributions

Y.Z.: Conceptualization, Software, Data Curation, Investigation, Writing—Original Draft, Project Administration; X.L.: Methodology, Writing—Original Draft, Writing—Review and Editing, Validation, Formal Analysis, Supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 41272119.

Data Availability Statement

The numerical and experimental data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lin, C.; Wang, P.; Heidari, A.A.; Zhao, X.; Chen, H. A Boosted Communicational Salp Swarm Algorithm: Performance Optimization and Comprehensive Analysis. J. Bionic Eng. 2023, 20, 1296–1332. [Google Scholar] [CrossRef]
  2. Cheng, M.-M.; Zhang, J.; Wang, D.-G.; Tan, W.; Yang, J. A Localization Algorithm Based on Improved Water Flow Optimizer and Max-Similarity Path for 3-D Heterogeneous Wireless Sensor Networks. IEEE Sens. J. 2023, 23, 13774–13788. [Google Scholar] [CrossRef]
  3. Garg, T.; Kaur, G.; Rana, P.S.; Cheng, X. Enhancing Road Traffic Prediction Using Data Preprocessing Optimization. J. Circuits Syst. Comput. 2024, 34, 2550045. [Google Scholar] [CrossRef]
  4. Kumar, A.; Das, S.; Mallipeddi, R. An Efficient Differential Grouping Algorithm for Large-Scale Global Optimization. IEEE Trans. Evol. Comput. 2024, 28, 32–46. [Google Scholar] [CrossRef]
  5. Shan, W.; Hu, H.; Cai, Z.; Chen, H.; Liu, H.; Wang, M.; Teng, Y. Multi-Strategies Boosted Mutative Crow Search Algorithm for Global Tasks: Cases of Continuous and Discrete Optimization. J. Bionic Eng. 2022, 19, 1830–1849. [Google Scholar] [CrossRef]
  6. Li, X.; Khishe, M.; Qian, L. Evolving Deep Gated Recurrent Unit Using Improved Marine Predator Algorithm for Profit Prediction Based on Financial Accounting Information System. Complex Intell. Syst. 2024, 10, 595–611. [Google Scholar] [CrossRef]
  7. Chakraborty, A.; Ray, S. Economic and Environmental Factors Based Multi-Objective Approach for Optimizing Energy Management in a Microgrid. Renew. Energy 2024, 222, 119920. [Google Scholar] [CrossRef]
  8. Hu, H.; Wang, J.; Huang, X.; Ablameyko, S.V. An Integrated Online-Offline Hybrid Particle Swarm Optimization Framework for Medium Scale Expensive Problems. In Proceedings of the 2024 6th International Conference on Data-driven Optimization of Complex Systems (DOCS), Hangzhou, China, 16–18 August 2024; pp. 25–32. [Google Scholar]
  9. Liang, J.; Lin, H.; Yue, C.; Ban, X.; Yu, K. Evolutionary Constrained Multi-Objective Optimization: A Review. Vicinagearth 2024, 1, 5. [Google Scholar] [CrossRef]
  10. Biegler, L.T.; Grossmann, I.E. Retrospective on Optimization. Comput. Chem. Eng. 2004, 28, 1169–1192. [Google Scholar] [CrossRef]
  11. Meza, J.C. Steepest Descent. WIREs Comput. Stat. 2010, 2, 719–722. [Google Scholar] [CrossRef]
  12. Polyak, B.T. The Conjugate Gradient Method in Extremal Problems. USSR Comput. Math. Math. Phys. 1969, 9, 94–112. [Google Scholar] [CrossRef]
  13. Hu, H.; Shan, W.; Tang, Y.; Heidari, A.A.; Chen, H.; Liu, H.; Wang, M.; Escorcia-Gutierrez, J.; Mansour, R.F.; Chen, J. Horizontal and Vertical Crossover of Sine Cosine Algorithm with Quick Moves for Optimization and Feature Selection. J. Comput. Des. Eng. 2022, 9, 2524–2555. [Google Scholar] [CrossRef]
  14. Dantzig, G.B. Linear Programming. Oper. Res. 2002, 50, 42–47. [Google Scholar] [CrossRef]
  15. Bellman, R. Dynamic Programming. Science 1966, 153, 34–37. [Google Scholar] [CrossRef]
  16. Hu, H.; Shan, W.; Chen, J.; Xing, L.; Heidari, A.A.; Chen, H.; He, X.; Wang, M. Dynamic Individual Selection and Crossover Boosted Forensic-Based Investigation Algorithm for Global Optimization and Feature Selection. J. Bionic Eng. 2023, 20, 2416–2442. [Google Scholar] [CrossRef]
  17. Bottou, L.; Curtis, F.E.; Nocedal, J. Optimization Methods for Large-Scale Machine Learning. SIAM Rev. 2018, 60, 223–311. [Google Scholar] [CrossRef]
  18. Huang, X.; Hu, H.; Wang, J.; Yuan, B.; Dai, C.; Ablameyk, S.V. Dynamic Strongly Convex Sparse Operator with Learning Mechanism for Sparse Large-Scale Multi-Objective Optimization. In Proceedings of the 2024 6th International Conference on Data-driven Optimization of Complex Systems (DOCS), Hangzhou, China, 16–18 August 2024; pp. 121–127. [Google Scholar]
  19. Dokeroglu, T.; Sevinc, E.; Kucukyilmaz, T.; Cosar, A. A Survey on New Generation Metaheuristic Algorithms. Comput. Ind. Eng. 2019, 137, 106040. [Google Scholar] [CrossRef]
  20. Abdel-Basset, M.; Abdel-Fatah, L.; Sangaiah, A.K. Metaheuristic Algorithms: A Comprehensive Review. In Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications; Academic Press: Cambridge, MA, USA, 2018; pp. 185–231. [Google Scholar]
  21. Katoch, S.; Chauhan, S.S.; Kumar, V. A Review on Genetic Algorithm: Past, Present, and Future. Multimed. Tools Appl. 2021, 80, 8091–8126. [Google Scholar] [CrossRef] [PubMed]
  22. Askarzadeh, A. A Novel Metaheuristic Method for Solving Constrained Engineering Optimization Problems: Crow Search Algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  23. Rajpurohit, J.; Sharma, T.K.; Abraham, A.; Vaishali. Glossary of Metaheuristic Algorithms. Int. J. Comput. Inf. Syst. Ind. Manag. Appl. 2017, 9, 25. [Google Scholar]
  24. Price, K.; Storn, R.M.; Lampinen, J.A. Differential Evolution: A Practical Approach to Global Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  25. Bertsimas, D.; Tsitsiklis, J. Simulated Annealing. Stat. Sci. 1993, 8, 10–15. [Google Scholar] [CrossRef]
  26. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  27. Poli, R.; Kennedy, J.; Blackwell, T. Particle Swarm Optimization. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  28. Dorigo, M.; Birattari, M.; Stutzle, T. Ant Colony Optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  29. Karaboga, D.; Basturk, B. Artificial Bee Colony (ABC) Optimization Algorithm for Solving Constrained Optimization Problems. In Foundations of Fuzzy Logic and Soft Computing; Melin, P., Castillo, O., Aguilar, L.T., Kacprzyk, J., Pedrycz, W., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4529, pp. 789–798. ISBN 978-3-540-72917-4. [Google Scholar]
  30. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  31. Ho, Y.C.; Pepyne, D.L. Simple Explanation of the No-Free-Lunch Theorem and Its Implications. J. Optim. Theory Appl. 2002, 115, 549–570. [Google Scholar] [CrossRef]
  32. Desbordes, J.K.; Zhang, K.; Xue, X.; Ma, X.; Luo, Q.; Huang, Z.; Hai, S.; Jun, Y. Dynamic Production Optimization Based on Transfer Learning Algorithms. J. Petrol. Sci. Eng. 2022, 208, 109278. [Google Scholar] [CrossRef]
  33. Suwartadi, E.; Krogstad, S.; Foss, B. Adjoint-Based Surrogate Optimization of Oil Reservoir Water Flooding. Optim. Eng. 2015, 16, 441–481. [Google Scholar] [CrossRef]
  34. Rasouli, H.; Rashidi, F.; Karimi, B.; Khamehchi, E. A Surrogate Integrated Production Modeling Approach to Long-Term Gas-Lift Allocation Optimization. Chem. Eng. Commun. 2015, 202, 647–654. [Google Scholar] [CrossRef]
  35. Yan, M.; Huang, C.; Bienstman, P.; Tino, P.; Lin, W.; Sun, J. Emerging Opportunities and Challenges for the Future of Reservoir Computing. Nat. Commun. 2024, 15, 2056. [Google Scholar] [CrossRef]
  36. Verma, S.; Prasad, A.D.; Verma, M.K. Optimal Operation of the Multi-Reservoir System: A Comparative Study of Robust Metaheuristic Algorithms. Int. J. Hydrol. Sci. Technol. 2024, 17, 239–266. [Google Scholar] [CrossRef]
  37. Wang, L.; Yao, Y.; Zhao, G.; Adenutsi, C.D.; Wang, W.; Lai, F. A Hybrid Surrogate-Assisted Integrated Optimization of Horizontal Well Spacing and Hydraulic Fracture Stage Placement in Naturally Fractured Shale Gas Reservoir. J. Petrol. Sci. Eng. 2022, 216, 110842. [Google Scholar] [CrossRef]
  38. An, Z.; Zhou, K.; Hou, J.; Wu, D.; Pan, Y. Accelerating Reservoir Production Optimization by Combining Reservoir Engineering Method with Particle Swarm Optimization Algorithm. J. Pet. Sci. Eng. 2022, 208, 109692. [Google Scholar] [CrossRef]
  39. Gu, J.; Liu, W.; Zhang, K.; Zhai, L.; Zhang, Y.; Chen, F. Reservoir Production Optimization Based on Surrogate Model and Differential Evolution Algorithm. J. Pet. Sci. Eng. 2021, 205, 108879. [Google Scholar] [CrossRef]
  40. Chen, G.; Zhang, K.; Zhang, L.; Xue, X.; Ji, D.; Yao, C.; Yao, J.; Yang, Y. Global and Local Surrogate-Model-Assisted Differential Evolution for Waterflooding Production Optimization. SPE J. 2020, 25, 105–118. [Google Scholar] [CrossRef]
  41. Zhao, Z.; Luo, S. A Crisscross-Strategy-Boosted Water Flow Optimizer for Global Optimization and Oil Reservoir Production. Biomimetics 2024, 9, 20. [Google Scholar] [CrossRef]
  42. Gao, Y.; Cheng, L. A Multi-Swarm Greedy Selection Enhanced Fruit Fly Optimization Algorithm for Global Optimization in Oil and Gas Production. PLoS ONE 2025, 20, e0322111. [Google Scholar] [CrossRef]
  43. Yue, T.; Li, T. Crisscross Moss Growth Optimization: An Enhanced Bio-Inspired Algorithm for Global Production and Optimization. Biomimetics 2025, 10, 32. [Google Scholar] [CrossRef]
  44. Ouyang, K.; Fu, S.; Chen, Y.; Cai, Q.; Heidari, A.A.; Chen, H. Escape: An Optimization Method Based on Crowd Evacuation Behaviors. Artif. Intell. Rev. 2024, 58, 19. [Google Scholar] [CrossRef]
  45. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  46. Mirjalili, S. Moth-Flame Optimization Algorithm: A Novel Nature-Inspired Heuristic Paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  47. Mirjalili, S. SCA: A Sine Cosine Algorithm for Solving Optimization Problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  48. Lian, J.; Hui, G.; Ma, L.; Zhu, T.; Wu, X.; Heidari, A.A.; Chen, Y.; Chen, H. Parrot Optimizer: Algorithm and Applications to Medical Problems. Comput. Biol. Med. 2024, 172, 108064. [Google Scholar] [CrossRef]
  49. Yang, X.-S.; Gandomi, A.H. Bat Algorithm: A Novel Approach for Global Engineering Optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef]
  50. Yang, X.-S. Firefly Algorithm, Stochastic Test Functions and Design Optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the ESC.
Figure 2. Flowchart of the CCESC.
Figure 3. Convergence curves of the CCESC on benchmarks with other algorithms.
Figure 4. Distribution of log-permeability in the three-channel model.
Figure 5. Convergence of NPV values for different algorithms over iterations.
Table 1. CEC2017 benchmark functions.
| Function | Function Name | Class | Optimum |
|---|---|---|---|
| F1 | Shifted and Rotated Bent Cigar Function | Unimodal | 100 |
| F2 | Shifted and Rotated Zakharov Function | Unimodal | 300 |
| F3 | Shifted and Rotated Rosenbrock's Function | Multimodal | 400 |
| F4 | Shifted and Rotated Rastrigin's Function | Multimodal | 500 |
| F5 | Shifted and Rotated Expanded Schaffer's F6 Function | Multimodal | 600 |
| F6 | Shifted and Rotated Lunacek Bi-Rastrigin Function | Multimodal | 700 |
| F7 | Shifted and Rotated Non-Continuous Rastrigin's Function | Multimodal | 800 |
| F8 | Shifted and Rotated Lévy Function | Multimodal | 900 |
| F9 | Shifted and Rotated Schwefel's Function | Multimodal | 1000 |
| F10 | Hybrid Function 1 (N = 3) | Hybrid | 1100 |
| F11 | Hybrid Function 2 (N = 3) | Hybrid | 1200 |
| F12 | Hybrid Function 3 (N = 3) | Hybrid | 1300 |
| F13 | Hybrid Function 4 (N = 4) | Hybrid | 1400 |
| F14 | Hybrid Function 5 (N = 4) | Hybrid | 1500 |
| F15 | Hybrid Function 6 (N = 4) | Hybrid | 1600 |
| F16 | Hybrid Function 7 (N = 5) | Hybrid | 1700 |
| F17 | Hybrid Function 8 (N = 5) | Hybrid | 1800 |
| F18 | Hybrid Function 9 (N = 5) | Hybrid | 1900 |
| F19 | Hybrid Function 10 (N = 6) | Hybrid | 2000 |
| F20 | Composition Function 1 (N = 3) | Composition | 2100 |
| F21 | Composition Function 2 (N = 3) | Composition | 2200 |
| F22 | Composition Function 3 (N = 4) | Composition | 2300 |
| F23 | Composition Function 4 (N = 4) | Composition | 2400 |
| F24 | Composition Function 5 (N = 5) | Composition | 2500 |
| F25 | Composition Function 6 (N = 5) | Composition | 2600 |
| F26 | Composition Function 7 (N = 6) | Composition | 2700 |
| F27 | Composition Function 8 (N = 6) | Composition | 2800 |
| F28 | Composition Function 9 (N = 3) | Composition | 2900 |
| F29 | Composition Function 10 (N = 3) | Composition | 3000 |
Table 2. Results of the CCESC and other algorithms on CEC2017.
| Algorithm | F1 Avg | F1 Std | F2 Avg | F2 Std | F3 Avg | F3 Std |
|---|---|---|---|---|---|---|
| CCESC | 3.5255 × 10^3 | 3.8954 × 10^3 | 1.0835 × 10^3 | 5.3427 × 10^2 | 4.7483 × 10^2 (=) | 2.9002 × 10^1 |
| ESC | 3.3872 × 10^3 (=) | 3.9748 × 10^3 | 6.3777 × 10^2 | 4.5533 × 10^2 | 4.6240 × 10^2 (+) | 3.0354 × 10^1 |
| DE | 2.7647 × 10^3 (=) | 4.4090 × 10^3 | 1.9190 × 10^4 (+) | 5.0403 × 10^3 | 4.8943 × 10^2 (+) | 8.7689 × 10^0 |
| GWO | 1.4755 × 10^9 (+) | 1.1091 × 10^9 | 3.1543 × 10^4 (+) | 1.0232 × 10^4 | 6.1372 × 10^2 (+) | 1.0357 × 10^2 |
| MFO | 1.0161 × 10^10 (+) | 7.5321 × 10^9 | 9.2600 × 10^4 (+) | 6.6262 × 10^4 | 1.3525 × 10^3 (+) | 7.9515 × 10^2 |
| SCA | 1.2752 × 10^10 (+) | 2.3869 × 10^9 | 3.8462 × 10^4 (+) | 7.0213 × 10^3 | 1.4624 × 10^3 (+) | 3.2765 × 10^2 |
| PSO | 3.1524 × 10^3 (=) | 3.0479 × 10^3 | 3.0000 × 10^2 | 4.8342 × 10^−3 | 4.6206 × 10^2 (−) | 2.5274 × 10^1 |
| PO | 4.2669 × 10^7 (+) | 5.6668 × 10^7 | 5.6686 × 10^3 (+) | 3.8437 × 10^3 | 5.2065 × 10^2 (+) | 2.3061 × 10^1 |
| BA | 5.5491 × 10^5 (+) | 2.4723 × 10^5 | 3.0010 × 10^2 (−) | 7.6706 × 10^−2 | 4.7381 × 10^2 (=) | 3.5388 × 10^1 |
| FA | 1.4343 × 10^10 (+) | 1.4273 × 10^9 | 6.1478 × 10^4 (+) | 1.0700 × 10^4 | 1.3581 × 10^3 (+) | 1.2564 × 10^2 |
| Algorithm | F4 Avg | F4 Std | F5 Avg | F5 Std | F6 Avg | F6 Std |
|---|---|---|---|---|---|---|
| CCESC | 5.5037 × 10^2 | 7.8167 × 10^0 | 6.0000 × 10^2 | 6.5666 × 10^−6 | 7.9386 × 10^2 | 9.0650 × 10^0 |
| ESC | 5.2816 × 10^2 (−) | 7.8871 × 10^0 | 6.0010 × 10^2 (+) | 1.1095 × 10^−1 | 7.6211 × 10^2 (−) | 9.3598 × 10^0 |
| DE | 6.1306 × 10^2 (+) | 8.8681 × 10^0 | 6.0000 × 10^2 (=) | 0.0000 × 10^0 | 8.4353 × 10^2 (+) | 7.2832 × 10^0 |
| GWO | 6.0056 × 10^2 (+) | 2.4028 × 10^1 | 6.0595 × 10^2 (+) | 3.0534 × 10^0 | 8.5675 × 10^2 (+) | 3.3805 × 10^1 |
| MFO | 7.0553 × 10^2 (+) | 4.2114 × 10^1 | 6.3645 × 10^2 (+) | 1.1182 × 10^1 | 1.2015 × 10^3 (+) | 2.3191 × 10^2 |
| SCA | 7.7670 × 10^2 (+) | 1.7574 × 10^1 | 6.4961 × 10^2 (+) | 4.5914 × 10^0 | 1.1169 × 10^3 (+) | 3.3466 × 10^1 |
| PSO | 6.9392 × 10^2 (+) | 4.1274 × 10^1 | 6.4342 × 10^2 (+) | 6.5115 × 10^0 | 1.0271 × 10^3 (+) | 6.1418 × 10^1 |
| PO | 7.3187 × 10^2 (+) | 4.6764 × 10^1 | 6.5719 × 10^2 (+) | 7.5367 × 10^0 | 1.1326 × 10^3 (+) | 7.4220 × 10^1 |
| BA | 8.4090 × 10^2 (+) | 7.4849 × 10^1 | 6.7434 × 10^2 (+) | 7.0809 × 10^0 | 1.6100 × 10^3 (+) | 1.8629 × 10^2 |
| FA | 7.5838 × 10^2 (+) | 1.1129 × 10^1 | 6.4328 × 10^2 (+) | 3.3706 × 10^0 | 1.3901 × 10^3 (+) | 3.4637 × 10^1 |
| Algorithm | F7 Avg | F7 Std | F8 Avg | F8 Std | F9 Avg | F9 Std |
|---|---|---|---|---|---|---|
| CCESC | 8.5047 × 10^2 | 6.8607 × 10^0 | 9.0126 × 10^2 | 1.2168 × 10^0 | 3.5882 × 10^3 | 2.4471 × 10^2 |
| ESC | 8.2653 × 10^2 (−) | 6.7751 × 10^0 | 9.2665 × 10^2 (+) | 2.1323 × 10^1 | 3.2137 × 10^3 (−) | 7.9416 × 10^2 |
| DE | 9.1027 × 10^2 (+) | 6.2547 × 10^0 | 9.0000 × 10^2 (−) | 9.6743 × 10^−14 | 5.8136 × 10^3 (+) | 2.8900 × 10^2 |
| GWO | 8.8931 × 10^2 (+) | 2.1397 × 10^1 | 1.7638 × 10^3 (+) | 6.3256 × 10^2 | 4.0596 × 10^3 (+) | 6.3352 × 10^2 |
| MFO | 1.0012 × 10^3 (+) | 4.9928 × 10^1 | 7.6299 × 10^3 (+) | 2.2116 × 10^3 | 5.5813 × 10^3 (+) | 9.2648 × 10^2 |
| SCA | 1.0519 × 10^3 (+) | 1.7228 × 10^1 | 5.2489 × 10^3 (+) | 1.0536 × 10^3 | 8.1075 × 10^3 (+) | 3.3474 × 10^2 |
| PSO | 9.4191 × 10^2 (+) | 2.4900 × 10^1 | 4.0330 × 10^3 (+) | 5.9325 × 10^2 | 4.8212 × 10^3 (+) | 5.3623 × 10^2 |
| PO | 9.8016 × 10^2 (+) | 3.0531 × 10^1 | 5.0865 × 10^3 (+) | 7.6591 × 10^2 | 5.7686 × 10^3 (+) | 7.8686 × 10^2 |
| BA | 1.0472 × 10^3 (+) | 5.6682 × 10^1 | 1.2756 × 10^4 (+) | 4.4659 × 10^3 | 5.4802 × 10^3 (+) | 6.2913 × 10^2 |
| FA | 1.0552 × 10^3 (+) | 1.3119 × 10^1 | 5.2547 × 10^3 (+) | 5.3521 × 10^2 | 8.0185 × 10^3 (+) | 2.9787 × 10^2 |
| Algorithm | F10 Avg | F10 Std | F11 Avg | F11 Std | F12 Avg | F12 Std |
|---|---|---|---|---|---|---|
| CCESC | 1.1399 × 10^3 | 2.6975 × 10^1 | 3.0823 × 10^5 | 2.0745 × 10^5 | 1.3200 × 10^4 | 1.5007 × 10^4 |
| ESC | 1.1684 × 10^3 (+) | 3.6141 × 10^1 | 3.9234 × 10^5 (=) | 2.4493 × 10^5 | 1.5499 × 10^4 (=) | 1.0559 × 10^4 |
| DE | 1.1568 × 10^3 (+) | 2.4300 × 10^1 | 1.6569 × 10^6 (+) | 7.9478 × 10^5 | 3.0594 × 10^4 (+) | 2.0254 × 10^4 |
| GWO | 1.8756 × 10^3 (+) | 7.1996 × 10^2 | 6.9142 × 10^7 (+) | 8.6664 × 10^7 | 2.2559 × 10^7 (+) | 5.8838 × 10^7 |
| MFO | 5.7541 × 10^3 (+) | 5.0815 × 10^3 | 3.7185 × 10^8 (+) | 5.0196 × 10^8 | 9.3223 × 10^7 (+) | 3.0688 × 10^8 |
| SCA | 2.0663 × 10^3 (+) | 2.1876 × 10^2 | 1.1999 × 10^9 (+) | 3.7840 × 10^8 | 3.7641 × 10^8 (+) | 1.7320 × 10^8 |
| PSO | 1.2075 × 10^3 (+) | 3.2471 × 10^1 | 3.3489 × 10^4 | 2.0523 × 10^4 | 1.2396 × 10^4 (=) | 1.5268 × 10^4 |
| PO | 1.3055 × 10^3 (+) | 6.5945 × 10^1 | 3.3512 × 10^7 (+) | 3.5698 × 10^7 | 1.4785 × 10^5 (+) | 1.1272 × 10^5 |
| BA | 1.3242 × 10^3 (+) | 8.0755 × 10^1 | 2.2536 × 10^6 (+) | 1.7036 × 10^6 | 3.0314 × 10^5 (+) | 1.1221 × 10^5 |
| FA | 3.5574 × 10^3 (+) | 4.3103 × 10^2 | 1.4741 × 10^9 (+) | 3.8706 × 10^8 | 6.0224 × 10^8 (+) | 1.6634 × 10^8 |
| Algorithm | F13 Avg | F13 Std | F14 Avg | F14 Std | F15 Avg | F15 Std |
|---|---|---|---|---|---|---|
| CCESC | 2.4961 × 10^4 | 1.9010 × 10^4 | 5.2832 × 10^3 | 5.9862 × 10^3 | 1.9362 × 10^3 | 1.8960 × 10^2 |
| ESC | 1.4643 × 10^4 (−) | 1.3725 × 10^4 | 7.2748 × 10^3 (=) | 7.5329 × 10^3 | 2.0617 × 10^3 (+) | 2.1360 × 10^2 |
| DE | 6.4083 × 10^4 (+) | 4.3831 × 10^4 | 6.7605 × 10^3 (+) | 3.7226 × 10^3 | 2.0987 × 10^3 (+) | 1.3508 × 10^2 |
| GWO | 2.1191 × 10^5 (+) | 3.0916 × 10^5 | 3.4786 × 10^5 (+) | 7.0974 × 10^5 | 2.3503 × 10^3 (+) | 2.6201 × 10^2 |
| MFO | 1.6461 × 10^5 (+) | 3.2360 × 10^5 | 3.0154 × 10^7 (+) | 1.6484 × 10^8 | 3.2261 × 10^3 (+) | 3.8883 × 10^2 |
| SCA | 1.2213 × 10^5 (+) | 6.0385 × 10^4 | 1.3110 × 10^7 (+) | 1.1163 × 10^7 | 3.5654 × 10^3 (+) | 2.4475 × 10^2 |
| PSO | 6.2203 × 10^3 | 3.9012 × 10^3 | 7.7850 × 10^3 (+) | 6.5680 × 10^3 | 2.9417 × 10^3 (+) | 3.9223 × 10^2 |
| PO | 4.2016 × 10^4 (+) | 2.9247 × 10^4 | 3.8824 × 10^4 (+) | 2.5295 × 10^4 | 3.0436 × 10^3 (+) | 3.1588 × 10^2 |
| BA | 6.5404 × 10^3 | 4.0829 × 10^3 | 9.1369 × 10^4 (+) | 4.9670 × 10^4 | 3.2934 × 10^3 (+) | 4.4017 × 10^2 |
| FA | 1.9705 × 10^5 (+) | 8.7921 × 10^4 | 6.6901 × 10^7 (+) | 3.0210 × 10^7 | 3.4141 × 10^3 (+) | 1.5962 × 10^2 |
| Algorithm | F16 Avg | F16 Std | F17 Avg | F17 Std | F18 Avg | F18 Std |
|---|---|---|---|---|---|---|
| CCESC | 1.7798 × 10^3 | 5.8233 × 10^1 | 1.9710 × 10^5 | 1.7757 × 10^5 | 8.0167 × 10^3 | 8.9592 × 10^3 |
| ESC | 1.8868 × 10^3 (+) | 1.2433 × 10^2 | 2.8857 × 10^5 (=) | 3.3036 × 10^5 | 7.3902 × 10^3 (=) | 5.8539 × 10^3 |
| DE | 1.8644 × 10^3 (+) | 5.9255 × 10^1 | 3.2283 × 10^5 (+) | 1.6230 × 10^5 | 7.2213 × 10^3 (=) | 4.1013 × 10^3 |
| GWO | 1.9867 × 10^3 (+) | 1.4623 × 10^2 | 4.5734 × 10^5 (+) | 4.0013 × 10^5 | 4.0608 × 10^5 (+) | 4.8087 × 10^5 |
| MFO | 2.5587 × 10^3 (+) | 2.0382 × 10^2 | 3.1724 × 10^6 (+) | 6.9905 × 10^6 | 1.3544 × 10^7 (+) | 3.6473 × 10^7 |
| SCA | 2.3984 × 10^3 (+) | 1.2507 × 10^2 | 3.0253 × 10^6 (+) | 1.4421 × 10^6 | 2.6512 × 10^7 (+) | 1.5133 × 10^7 |
| PSO | 2.4593 × 10^3 (+) | 2.9558 × 10^2 | 1.4883 × 10^5 (=) | 8.6525 × 10^4 | 1.0108 × 10^4 (=) | 1.0643 × 10^4 |
| PO | 2.3168 × 10^3 (+) | 2.2105 × 10^2 | 5.6943 × 10^5 (+) | 3.8407 × 10^5 | 6.4285 × 10^5 (+) | 6.9384 × 10^5 |
| BA | 2.8405 × 10^3 (+) | 3.2629 × 10^2 | 2.2019 × 10^5 (=) | 1.3705 × 10^5 | 6.4719 × 10^5 (+) | 2.9608 × 10^5 |
| FA | 2.5289 × 10^3 (+) | 1.0249 × 10^2 | 3.6599 × 10^6 (+) | 1.6108 × 10^6 | 9.3336 × 10^7 (+) | 4.5101 × 10^7 |
| Algorithm | F19 Avg | F19 Std | F20 Avg | F20 Std | F21 Avg | F21 Std |
|---|---|---|---|---|---|---|
| CCESC | 2.1155 × 10^3 | 6.9545 × 10^1 | 2.3516 × 10^3 | 6.5957 × 10^0 | 2.4905 × 10^3 | 7.2399 × 10^2 |
| ESC | 2.2513 × 10^3 (+) | 1.1968 × 10^2 | 2.3285 × 10^3 (−) | 8.4475 × 10^0 | 3.0059 × 10^3 (+) | 1.1838 × 10^3 |
| DE | 2.1315 × 10^3 (=) | 5.9738 × 10^1 | 2.4108 × 10^3 (+) | 7.5239 × 10^0 | 3.7545 × 10^3 (+) | 1.9345 × 10^3 |
| GWO | 2.3525 × 10^3 (+) | 1.2895 × 10^2 | 2.3855 × 10^3 (+) | 2.5234 × 10^1 | 4.5912 × 10^3 (+) | 1.3127 × 10^3 |
| MFO | 2.6958 × 10^3 (+) | 2.0705 × 10^2 | 2.5007 × 10^3 (+) | 3.7422 × 10^1 | 6.4010 × 10^3 (+) | 1.5213 × 10^3 |
| SCA | 2.5807 × 10^3 (+) | 1.4258 × 10^2 | 2.5478 × 10^3 (+) | 1.9573 × 10^1 | 8.8320 × 10^3 (+) | 1.9163 × 10^3 |
| PSO | 2.6068 × 10^3 (+) | 1.9247 × 10^2 | 2.4698 × 10^3 (+) | 3.5134 × 10^1 | 5.1165 × 10^3 (+) | 2.2689 × 10^3 |
| PO | 2.4862 × 10^3 (+) | 1.6903 × 10^2 | 2.5023 × 10^3 (+) | 4.4799 × 10^1 | 2.7944 × 10^3 (+) | 1.0815 × 10^3 |
| BA | 2.9724 × 10^3 (+) | 2.2133 × 10^2 | 2.6438 × 10^3 (+) | 6.9904 × 10^1 | 6.7510 × 10^3 (+) | 1.7252 × 10^3 |
| FA | 2.5833 × 10^3 (+) | 1.0234 × 10^2 | 2.5417 × 10^3 (+) | 9.7253 × 10^0 | 3.8372 × 10^3 (+) | 1.2854 × 10^2 |
| Algorithm | F22 Avg | F22 Std | F23 Avg | F23 Std | F24 Avg | F24 Std |
|---|---|---|---|---|---|---|
| CCESC | 2.6986 × 10^3 | 9.0945 × 10^0 | 2.8647 × 10^3 | 8.6917 × 10^0 | 2.8898 × 10^3 | 7.5304 × 10^0 |
| ESC | 2.6880 × 10^3 (−) | 1.1336 × 10^1 | 2.8517 × 10^3 (−) | 1.0709 × 10^1 | 2.8976 × 10^3 (+) | 1.5369 × 10^1 |
| DE | 2.7545 × 10^3 (+) | 8.0664 × 10^0 | 2.9596 × 10^3 (+) | 1.1611 × 10^1 | 2.8874 × 10^3 | 2.5098 × 10^−1 |
| GWO | 2.7491 × 10^3 (+) | 3.8744 × 10^1 | 2.9293 × 10^3 (+) | 5.6197 × 10^1 | 3.0034 × 10^3 (+) | 6.0756 × 10^1 |
| MFO | 2.8435 × 10^3 (+) | 3.8511 × 10^1 | 2.9993 × 10^3 (+) | 4.2330 × 10^1 | 3.3859 × 10^3 (+) | 5.9192 × 10^2 |
| SCA | 2.9857 × 10^3 (+) | 3.3269 × 10^1 | 3.1556 × 10^3 (+) | 3.0030 × 10^1 | 3.2041 × 10^3 (+) | 5.2181 × 10^1 |
| PSO | 3.2924 × 10^3 (+) | 1.9323 × 10^2 | 3.3705 × 10^3 (+) | 1.0796 × 10^2 | 2.8805 × 10^3 | 6.3784 × 10^0 |
| PO | 2.9715 × 10^3 (+) | 6.9490 × 10^1 | 3.1048 × 10^3 (+) | 6.8115 × 10^1 | 2.9227 × 10^3 (+) | 2.7850 × 10^1 |
| BA | 3.3136 × 10^3 (+) | 1.5781 × 10^2 | 3.3232 × 10^3 (+) | 1.2567 × 10^2 | 2.9127 × 10^3 (+) | 2.4890 × 10^1 |
| FA | 2.9154 × 10^3 (+) | 9.4232 × 10^0 | 3.0645 × 10^3 (+) | 1.1776 × 10^1 | 3.5518 × 10^3 (+) | 1.2014 × 10^2 |
| Algorithm | F25 Avg | F25 Std | F26 Avg | F26 Std | F27 Avg | F27 Std |
|---|---|---|---|---|---|---|
| CCESC | 4.0531 × 10^3 | 1.1532 × 10^2 | 3.2141 × 10^3 | 9.3902 × 10^0 | 3.1715 × 10^3 | 6.4540 × 10^1 |
| ESC | 4.0841 × 10^3 (=) | 1.8511 × 10^2 | 3.2302 × 10^3 (+) | 1.7731 × 10^1 | 3.1789 × 10^3 (=) | 5.3878 × 10^1 |
| DE | 4.6572 × 10^3 (+) | 1.2058 × 10^2 | 3.2068 × 10^3 (−) | 3.3581 × 10^0 | 3.1779 × 10^3 (=) | 5.7495 × 10^1 |
| GWO | 4.7819 × 10^3 (+) | 3.5736 × 10^2 | 3.2393 × 10^3 (+) | 1.7523 × 10^1 | 3.4680 × 10^3 (+) | 2.3047 × 10^2 |
| MFO | 5.9247 × 10^3 (+) | 6.3639 × 10^2 | 3.2603 × 10^3 (+) | 2.9104 × 10^1 | 4.6385 × 10^3 (+) | 1.0065 × 10^3 |
| SCA | 6.9798 × 10^3 (+) | 3.3407 × 10^2 | 3.3936 × 10^3 (+) | 4.5056 × 10^1 | 3.8195 × 10^3 (+) | 9.7138 × 10^1 |
| PSO | 6.3088 × 10^3 (+) | 2.3791 × 10^3 | 3.3135 × 10^3 (=) | 2.8523 × 10^2 | 3.1467 × 10^3 (=) | 6.2260 × 10^1 |
| PO | 6.2389 × 10^3 (+) | 1.5609 × 10^3 | 3.3153 × 10^3 (+) | 4.9880 × 10^1 | 3.3089 × 10^3 (+) | 5.5567 × 10^1 |
| BA | 8.8007 × 10^3 (+) | 2.6331 × 10^3 | 3.4038 × 10^3 (+) | 9.5290 × 10^1 | 3.1534 × 10^3 (=) | 6.4405 × 10^1 |
| FA | 6.4862 × 10^3 (+) | 1.8155 × 10^2 | 3.3339 × 10^3 (+) | 1.6953 × 10^1 | 3.8787 × 10^3 (+) | 9.0911 × 10^1 |
| Algorithm | F28 Avg | F28 Std | F29 Avg | F29 Std |
|---|---|---|---|---|
| CCESC | 3.4364 × 10³ | 5.4729 × 10¹ | 7.3284 × 10³ | 1.8753 × 10³ |
| ESC | 3.5012 × 10³ (+) | 1.2967 × 10² | 1.0768 × 10⁴ (+) | 2.2758 × 10³ |
| DE | 3.5250 × 10³ (+) | 6.9103 × 10¹ | 1.1751 × 10⁴ (+) | 3.3232 × 10³ |
| GWO | 3.7476 × 10³ (+) | 1.7946 × 10² | 5.8495 × 10⁶ (+) | 5.5302 × 10⁶ |
| MFO | 4.1130 × 10³ (+) | 2.4497 × 10² | 7.5618 × 10⁵ (+) | 1.0771 × 10⁶ |
| SCA | 4.5913 × 10³ (+) | 2.9433 × 10² | 7.0828 × 10⁷ (+) | 3.0432 × 10⁷ |
| PSO | 3.9840 × 10³ (+) | 3.2357 × 10² | 5.0228 × 10³ (−) | 1.7910 × 10³ |
| PO | 4.3708 × 10³ (+) | 3.4611 × 10² | 6.2783 × 10⁶ (+) | 4.9233 × 10⁶ |
| BA | 4.9522 × 10³ (+) | 5.0689 × 10² | 1.2934 × 10⁶ (+) | 7.2635 × 10⁵ |
| FA | 4.7103 × 10³ (+) | 1.5665 × 10² | 9.7735 × 10⁷ (+) | 2.3550 × 10⁷ |
Overall Rank. The +/=/− column counts the benchmark functions on which each algorithm performed significantly worse than (+), statistically equivalent to (=), or significantly better than (−) CCESC.

| Algorithm | Rank | +/=/− | Avg Rank |
|---|---|---|---|
| CCESC | 1 | ~ | 2.1034 |
| ESC | 2 | 11/9/9 | 2.4828 |
| DE | 3 | 21/4/4 | 3.3448 |
| GWO | 5 | 29/0/0 | 5.2069 |
| MFO | 8 | 29/0/0 | 7.4483 |
| SCA | 9 | 29/0/0 | 8.3448 |
| PSO | 4 | 17/6/6 | 4.2414 |
| PO | 6 | 29/0/0 | 6.1034 |
| BA | 7 | 24/3/2 | 7.1379 |
| FA | 10 | 29/0/0 | 8.5862 |
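The "Avg Rank" column above follows the usual Friedman-style post-processing: on each benchmark function the algorithms are ranked by mean result (1 = best, ties share the average rank position), and the ranks are averaged over all functions. A minimal pure-Python sketch of that step, on toy data (the algorithm names match the tables, but the values are illustrative, not the paper's full 29-function results):

```python
# Sketch: Friedman-style average ranks from per-function mean results.
# Input format and toy values are illustrative assumptions.

def average_ranks(results):
    """results: {algorithm: [mean_on_f1, mean_on_f2, ...]} -> {algorithm: avg rank}.

    Lower objective value is better (rank 1 is best); tied values share
    the average of their rank positions, as in the Friedman test.
    """
    algs = list(results)
    n_funcs = len(next(iter(results.values())))
    rank_sum = {a: 0.0 for a in algs}
    for f in range(n_funcs):
        vals = sorted((results[a][f], a) for a in algs)
        i = 0
        while i < len(vals):
            # Find the run of tied values [i, j].
            j = i
            while j + 1 < len(vals) and vals[j + 1][0] == vals[i][0]:
                j += 1
            shared = (i + j) / 2 + 1  # average rank position of the tie group
            for k in range(i, j + 1):
                rank_sum[vals[k][1]] += shared
            i = j + 1
    return {a: rank_sum[a] / n_funcs for a in algs}

# Toy data: three algorithms on four functions (means only).
res = {
    "CCESC": [2115.5, 2351.6, 2490.5, 2698.6],
    "ESC":   [2251.3, 2328.5, 3005.9, 2688.0],
    "DE":    [2131.5, 2410.8, 3754.5, 2754.5],
}
print(average_ranks(res))  # CCESC gets the lowest (best) average rank
```

The per-function +/=/− marks in the tables are typically produced by a pairwise significance test (e.g. a Wilcoxon rank-sum test at the 0.05 level) between each algorithm's runs and CCESC's runs on that function; that test is omitted here for brevity.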
Table 3. The results of CCESC and other algorithms on the oil reservoir production optimization.
| Algorithm | Mean NPV (USD) | Std | Best | Worst |
|---|---|---|---|---|
| CCESC | 9.457 × 10⁸ | 1.532 × 10⁷ | 9.782 × 10⁸ | 9.146 × 10⁸ |
| ESC | 8.859 × 10⁸ | 2.510 × 10⁷ | 9.403 × 10⁸ | 8.308 × 10⁸ |
| DE | 8.986 × 10⁸ | 3.121 × 10⁷ | 9.553 × 10⁸ | 8.391 × 10⁸ |
| GWO | 9.053 × 10⁸ | 2.784 × 10⁷ | 9.648 × 10⁸ | 8.557 × 10⁸ |
| MFO | 8.247 × 10⁸ | 4.542 × 10⁷ | 8.905 × 10⁸ | 7.692 × 10⁸ |
| SCA | 8.704 × 10⁸ | 3.487 × 10⁷ | 9.251 × 10⁸ | 8.107 × 10⁸ |
| PSO | 8.552 × 10⁸ | 3.939 × 10⁷ | 9.106 × 10⁸ | 7.895 × 10⁸ |
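The NPV objective reported in Table 3 has, in its standard form, the structure of discounted oil revenue minus water-production and water-injection costs summed over the control steps of the production schedule. A minimal sketch of that objective under assumed economic parameters (prices, costs, discount rate, and the schedule below are all illustrative; the paper's simulator and economic settings are not reproduced here):

```python
# Sketch of a standard reservoir-production NPV objective.
# All economic parameters are illustrative assumptions, not the paper's.

def npv(rates, oil_price=80.0, water_cost=8.0, inj_cost=5.0,
        discount=0.10, dt_years=1.0):
    """Net Present Value (USD) of a production schedule.

    rates: list of (oil_rate, water_rate, injection_rate) tuples, one per
    control step, in barrels/day; prices and costs in USD per barrel.
    Each step's cash flow is discounted back to present value.
    """
    total = 0.0
    for t, (qo, qw, qi) in enumerate(rates, start=1):
        # Cash flow over the step: revenue minus handling/injection costs.
        cash = (oil_price * qo - water_cost * qw - inj_cost * qi) \
               * 365.0 * dt_years
        total += cash / (1.0 + discount) ** (t * dt_years)
    return total

# Toy two-step schedule (bbl/day); an optimizer such as CCESC would
# search over these well-control variables to maximize npv().
schedule = [(5000.0, 1000.0, 6000.0), (4500.0, 1500.0, 6000.0)]
print(f"NPV = {npv(schedule):.3e} USD")
```

In the optimization loop, each candidate solution encodes the well controls, a reservoir simulator supplies the resulting rates, and this discounted sum is the fitness being maximized.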

Zhao, Y.; Li, X. CCESC: A Crisscross-Enhanced Escape Algorithm for Global and Reservoir Production Optimization. Biomimetics 2025, 10, 529. https://doi.org/10.3390/biomimetics10080529

