Article

Cosmic Evolution Optimization: A Novel Metaheuristic Algorithm for Numerical Optimization and Engineering Design

School of Electronics and Information Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(15), 2499; https://doi.org/10.3390/math13152499
Submission received: 7 June 2025 / Revised: 13 July 2025 / Accepted: 30 July 2025 / Published: 3 August 2025

Abstract

This study proposes a novel metaheuristic algorithm, Cosmic Evolution Optimization (CEO), for numerical optimization and engineering design. Inspired by the cosmic evolution process, CEO simulates physical phenomena including cosmic expansion, universal gravitation, stellar system interactions, and celestial orbital resonance. The algorithm introduces a multi-stellar framework that assigns search agents to distinct subsystems performing simultaneous exploration or exploitation, thereby enhancing diversity and parallel exploration capability. The CEO algorithm was compared against ten state-of-the-art metaheuristic algorithms on 29 typical unconstrained benchmark problems from CEC2017 across different dimensions and on 13 constrained real-world optimization problems from CEC2020. Statistical validation through the Friedman test, the Wilcoxon rank-sum test, and other statistical methods confirmed the competitiveness and effectiveness of the CEO algorithm. Notably, it achieved an overall Friedman rank of 1.28 among the 11 algorithms, and its winning rate in the Wilcoxon rank-sum tests exceeded 80% on CEC2017. Furthermore, CEO demonstrated outstanding performance in practical engineering applications such as robot path planning and photovoltaic system parameter extraction, further verifying its efficiency and broad application potential for real-world engineering challenges.

1. Introduction

With the rapid development of science and technology, optimization problems in various fields have become increasingly complex, characterized by nonlinearity, large scale, high dimensionality, and numerous constraints. Traditional exact optimization algorithms, such as the simplex method [1] and the branch-and-bound method [2], can obtain more accurate solutions than metaheuristic algorithms by searching the entire space of small-scale problems, but they are often inadequate for large-scale, high-dimensional complex problems [3]. Compared with metaheuristic algorithms, traditional optimization algorithms are not only time-consuming but also prone to falling into local optima. They depend heavily on the mathematical properties of the problem and struggle to handle complex optimization problems involving discontinuities, non-differentiability, and noisy disturbances.
To overcome the limitations of traditional exact optimization algorithms, researchers have drawn inspiration from natural phenomena to develop numerous global optimization algorithms based on simulations and nature-inspired principles, also known as metaheuristic algorithms [4]. By simulating biological behaviors, physical phenomena, or social mechanisms in nature, metaheuristic algorithms design a series of efficient search strategies to conduct global exploration in complex solution spaces, thereby effectively avoiding local optima [5,6]. Notably, these algorithms demonstrate exceptional adaptability in scenarios involving NP-hard problems and contexts where derivative computation is prohibitively expensive. This adaptability is further validated by their success in real-world engineering challenges, such as robot path planning and photovoltaic system parameter extraction, where traditional methods often struggle with high-dimensionality and non-differentiable constraints. Additionally, metaheuristic algorithms do not require complex processes such as differentiation, pay no attention to the differences between problems, and only need to focus on input and output data, exhibiting strong flexibility for many different problems [7].
Based on their search structures and inspiration sources, metaheuristic algorithms can be systematically categorized into two classes. Under the first classification scheme, metaheuristics are divided into population-based iterative optimization algorithms and single-candidate-solution-based algorithms [8]. The former explores the solution space through collaboration among multiple solutions, guiding the search direction via information interaction between individuals; a typical example is Particle Swarm Optimization (PSO) [9]. Single-solution-based algorithms generally iterate optimization around a single solution, improving solution quality through neighborhood search or perturbation strategies. For instance, Tabu Search (TS) [10] generates candidate solutions through neighborhood search around a single solution, utilizing a tabu list to record recent moves and avoid redundant searches, thereby overcoming tabu restrictions via an aspiration criterion to achieve directed optimization of the single solution. Such algorithms do not rely on population interaction and thus have advantages in computational cost. Population-based algorithms, leveraging parallel search of multiple solutions and information-sharing mechanisms, can cover the solution space more efficiently when handling high-dimensional complex problems. Their diversity advantage significantly enhances the search efficiency for global optimal solutions, while a drawback is that they require a longer running time and more function evaluations.
In another classification system, metaheuristic algorithms can be divided into four categories according to different sources of inspiration: evolutionary algorithms (EAs) based on biological evolution in nature, physical algorithms (PhAs) based on physical laws, human-based algorithms (HBAs) inspired by the emergence and development of human society, and swarm intelligence (SI) algorithms based on the social behaviors of biological groups [11]. The genetic algorithm (GA) [12] is the most famous evolutionary algorithm, first proposed by John Holland in the 1960s and systematically expounded by him in 1975, formally marking the birth of evolutionary algorithms. Inspired by Darwin’s theory of natural selection and Mendel’s principles of genetics, the GA simulates the processes of natural selection, crossover, and mutation in biological evolution, replacing poorer solutions with better ones. The GA can effectively address multi-objective, nonlinear, and other high-dimensional problems. Other evolutionary algorithms, such as differential evolution (DE) [13], guide the search direction by utilizing the difference information between individuals in the population to expand the global search scope, while the core ideas of evolutionary algorithms like genetic programming (GP) [14], evolutionary programming (EP) [15], and the evolution strategy (ES) [16] also involve processes such as crossover, mutation, and recombination.
Human-based algorithms are a class of metaheuristic optimization algorithms inspired by human behaviors, social structures, or cultural evolution, treating optimization problems as evolutionary processes of social systems by simulating behaviors such as collaboration, competition, and learning in human society. One of the most famous human-based algorithms is the teaching–learning-based optimization (TLBO) algorithm [17], which was proposed by Rao et al. in 2011 and inspired by the process of teaching and learning in classrooms. The algorithm divides the population into teachers and learners: the teacher phase transmits knowledge through the optimal solution, and the learning phase updates positions through communication among students, thus rapidly achieving or approaching the optimal solution. TLBO offers advantages such as a straightforward algorithmic structure and minimal parameter settings. Other human-based algorithms include social group optimization (SGO) [18] based on collaboration and competition in human social groups, social cognitive optimization (SCO) [19] based on individual learning through observing and imitating others, the memetic algorithm (MA) [20] based on cultural gene transmission and evolution, social emotional optimization (SEO) [21] based on emotional changes in social interactions, political optimization (PO) [22] based on political party competition and voter voting, and the war strategy algorithm (WSA) [23] based on offensive, defensive, and alliance strategies in ancient warfare.
Swarm intelligence algorithms are the most widely studied and applied algorithms to date. They treat optimization problems as dynamic evolutionary processes of groups in the solution space by simulating the collaborative behaviors of biological populations in nature. One of the most famous algorithms is Particle Swarm Optimization (PSO) [9], which simulates the behavior of bird flocks or fish schools seeking food or habitats through information sharing among individuals. Each particle updates its velocity and position by tracking its own historical best and the global best, continuously adjusting until convergence to the optimal solution at the end of iterations. Other classic swarm intelligence algorithms include ant colony optimization (ACO) [24], which mimics pheromone sharing in ant colonies to find the shortest path; the artificial bee colony (ABC) [25], which simulates the division of labor in honeybees (employed bees, onlooker bees, and scout bees) during foraging; the gray wolf optimizer (GWO) [26], which models the social hierarchy and hunting behavior of gray wolves, assuming three leader wolves (α, β, and δ) that guide position updates and track prey; the bat algorithm (BA) [27], which imitates bats’ echolocation for predation by adjusting the frequency, loudness, and pulse emission rate to update positions; cuckoo search (CS) [28], which simulates cuckoos’ brood parasitism and leverages Lévy flight for optimization; and the whale optimization algorithm (WOA) [29], which emulates humpback whales’ hunting behaviors (encircling prey, random search, and bubble-net attack) to gradually converge to optimal solutions.
The last category of physics-based algorithms treats optimization problems as deductions in physical systems by simulating processes such as energy minimization, force equilibrium, and particle motion, often with intuitive physical backgrounds. Simulated annealing (SA) [30], a classic physics-inspired algorithm proposed by Kirkpatrick et al. in 1983, draws inspiration from the thermodynamics of solid annealing: at high temperatures, it favors global exploration by accepting worse solutions to escape local optima; as the temperature decreases, it shifts to local exploitation, eventually converging to a global optimum or a high-quality approximate solution. SA demonstrates strong adaptability in high-dimensional problem-solving, featuring a simple structure, few parameters, and broad applicability in engineering. The Gravitational Search Algorithm (GSA) [31] is another classic physical algorithm inspired by Newton’s law of universal gravitation. It models individuals in the solution space as mass-bearing objects, updating their positions through gravitational forces and accelerations to converge to optimal solutions. Other physics-inspired algorithms include the black hole algorithm (BHA) [32] based on astrophysical black hole phenomena; the artificial electric field algorithm (AEFA) [33] inspired by charge interactions in electric fields; the atom search optimization (ASO) algorithm [34] based on interatomic forces; the heat transfer search (HTS) algorithm [35] mimicking heat exchange in thermodynamics; ray optimization (RO) [36] inspired by light propagation and reflection; quantum particle swarm optimization (QPSO) [37] based on quantum mechanical particle behavior; and the arcing search algorithm (ASA) [38] simulating lightning phenomena.
In recent years, many novel physics-inspired metaheuristic algorithms have emerged, such as the energy valley optimizer (EVO) [39], which microscopically simulates particle stabilization and decay; the propagation search algorithm (PSA) [40], inspired by wave propagation of current and voltage in long transmission lines; the attraction–repulsion optimization algorithm (AROA) [41], inspired by the simultaneous attraction and repulsion forces acting on all objects in nature; and the rime optimization algorithm (RIME) [42], which simulates the growth processes of soft and hard rime (frost ice). These new algorithms far surpass traditional metaheuristic algorithms in exploitation and exploration capabilities when applied to specific engineering design optimization problems. For example, the PSA exhibits much faster convergence than algorithms such as AEFA, CS, and SSA in solving problems requiring a rapid response. The EVO provides competitive and excellent results in handling complex benchmark and real-world problems. The RIME’s excellent balance between exploration and exploitation during iterations indicates its potential for a wide range of problems.
However, as the No Free Lunch (NFL) theorem [43] suggests, although metaheuristic algorithms have been widely applied and performed well in various fields, no single algorithm can solve all optimization problems. This perspective has motivated researchers to continuously propose new algorithms or attempt optimization measures to enhance algorithmic exploration or exploitation capabilities, achieving a balance between them [44] to facilitate broader dissemination and application.
To address these challenges, this work presents a novel physics-inspired metaheuristic algorithm, CEO. Inspired by the process of cosmic evolution, the algorithm simulates physical phenomena such as cosmic expansion [45], universal gravitation, stellar system interactions [46], and celestial orbital resonance [47]. By introducing the concept of cosmic space, CEO partitions the search space into multiple stellar systems, each containing several celestial bodies. The algorithm treats current solutions as celestial bodies in the universe, updating their positions by calculating the mutual gravitational forces between each celestial body and the surrounding stars. Additionally, it models the interactive behaviors between stellar systems and the influence of cosmic expansion on celestial bodies. This mechanism enables the algorithm to achieve a robust balance between global exploration and local exploitation.
Unlike other metaheuristic algorithms inspired by cosmic phenomena, CEO introduces a multi-stellar system framework wherein search agents are hierarchically organized into distinct subsystems overseen by high-performing “stellar” leaders. This hierarchical architecture enables multi-centered gravitational steering, which bolsters population diversity and facilitates parallel exploration. In contrast, algorithms such as the GSA [31] and BHA [32] lack such hierarchical differentiation, relying instead on a single-centered interaction paradigm. Furthermore, CEO’s celestial orbital resonance mechanism emulates micro-scale orbital modulations inspired by celestial mechanics, transcending conventional gravitational analogies. CEO achieves both search breadth and exploitation precision relative to the GSA and BHA, thereby attaining a superior exploration–exploitation balance and robust performance across diverse scenarios.
The main contributions of this work are summarized as follows:
  • A novel algorithm based on cosmic evolution phenomena, the Cosmic Evolution Optimization (CEO) algorithm, is proposed, which simulates the formation process of stellar celestial systems in the universe.
  • The capabilities of the CEO algorithm were tested on 29 CEC2017 benchmark functions with 30-dimensional, 50-dimensional, and 100-dimensional spaces and compared with ten other advanced metaheuristic algorithms. CEO demonstrated superior performance in benchmark function tests.
  • CEO was tested on 13 CEC2020 real-world constrained problems and compared with other metaheuristic algorithms. The results show that CEO outperforms other algorithms in optimizing most real-world problems.
  • The CEO algorithm has shown promising application values in two typical engineering problems: robot path planning and photovoltaic system parameter extraction.
  • Statistical analysis methods such as the Friedman test and the Wilcoxon rank-sum test were used to explore the performance of the CEO algorithm, and the results indicate that CEO performs outstandingly in many cases.

2. Cosmic Evolution Algorithm

Natural phenomena have long served as a wellspring of inspiration for designing metaheuristic algorithms. The complex systems in the universe offer unique insights into balancing exploration and exploitation in optimization problems, which directly inspired the proposal of the CEO algorithm. This section first introduces the inspiration sources of CEO. Subsequently, the mathematical model of CEO is presented, encompassing four key components: the exploration phase, the exploitation phase, and two distinctive algorithmic mechanisms, namely the stellar system collision strategy and the celestial orbital resonance strategy.

2.1. Inspiration

The CEO algorithm draws inspiration from the evolutionary processes of the universe, particularly phenomena such as cosmic expansion, interactions between stellar systems, and gravitational interactions between celestial bodies. In this algorithm, the universe is modeled as a complex dynamic system, where each solution (celestial body) is part of this system, seeking optimal solutions through mutual interactions and evolution. As the universe evolves, celestial bodies gradually aggregate under the gravitational pull of stars to form stable stellar systems, while the cosmic expansion process enables celestial bodies to expand globally. During optimization, these cosmic physical phenomena are abstracted into objective functions, allowing the CEO algorithm to be widely applied to various optimization problems.
The cosmic evolution mechanisms regulate the motion and distribution of celestial bodies holistically, and the algorithm simulates four phenomena of cosmic evolution: cosmic expansion, universal gravitational interactions between celestial bodies, stellar system interactions, and celestial orbital resonance. The cosmic expansion process [45] acts as a global exploration mechanism, enabling celestial bodies to conduct large-scale searches across the vast cosmic space and avoid being trapped in local optima, while the universal gravitation process mimics the gravitational effects between celestial bodies, causing them to deviate in specific directions and explore unknown regions along fixed trajectories for fine-grained local exploitation. Stellar system interactions are of two types: first, all celestial bodies are attracted by the most massive star and migrate toward it; second, violent system collisions allow celestial bodies to migrate to other positions with a certain probability, enhancing the algorithm’s global search capability [46]. The celestial orbital resonance mechanism simulates the phenomenon whereby the orbital path of a celestial body resonates with surrounding bodies, potentially altering or restricting its current trajectory [47]. These mechanisms serve as the core strategies of the CEO algorithm.

2.2. Initialization Phase of CEO

Inspired by the evolutionary process of the universe, the initialization process of the CEO algorithm simulates the birth and development of the universe. Initialization is divided into two stages: the first stage represents the generation of celestial bodies, and the second stage represents the formation of stellar systems.
The first-stage initialization of the celestial body population can be represented by an N × D dimensional matrix, as shown in Equation (1):
$$
X = \begin{bmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,j} & \cdots & x_{1,dim} \\
x_{2,1} & x_{2,2} & \cdots & x_{2,j} & \cdots & x_{2,dim} \\
\vdots & \vdots & & \vdots & & \vdots \\
x_{i,1} & x_{i,2} & \cdots & x_{i,j} & \cdots & x_{i,dim} \\
\vdots & \vdots & & \vdots & & \vdots \\
x_{N,1} & x_{N,2} & \cdots & x_{N,j} & \cdots & x_{N,dim}
\end{bmatrix} \tag{1}
$$
where $X$ is the set of random celestial body positions obtained using Equation (2), $x_{i,j}$ denotes the $j$-th dimensional coordinate of the $i$-th celestial body, $N$ represents the total number of celestial bodies, and $dim$ represents the dimension of the problem.
$$
X = \mathrm{rand}(N, dim) \times (ub - lb) + lb \tag{2}
$$
where $\mathrm{rand}()$ generates random numbers in the interval $[0, 1]$, and $ub$ and $lb$ denote the upper and lower bounds of the variables, respectively.
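The two-stage initialization can be sketched in a few lines of NumPy. This is an illustrative reading of Equations (1) and (2), not the authors' reference code; the function name and seeding argument are assumptions:

```python
import numpy as np

def initialize_population(N, dim, lb, ub, seed=None):
    """Stage 1: generate N celestial bodies uniformly inside [lb, ub]^dim (Eq. 2).

    Each row of the returned matrix is one celestial body, matching Eq. (1).
    """
    rng = np.random.default_rng(seed)
    return rng.random((N, dim)) * (ub - lb) + lb

# Example: 30 bodies in a 10-dimensional space bounded by [-100, 100]
X = initialize_population(30, 10, -100.0, 100.0, seed=42)
```

Stage 2 (assigning bodies to stellar systems by fitness) then operates on the rows of this matrix.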
The second-stage formation of stellar systems involves partitioning the search space into multiple stellar systems, each containing multiple celestial bodies. In algorithms such as the GSA (Gravitational Search Algorithm), the fitness of solutions is abstracted as the mass of particles or objects. The CEO algorithm borrows this concept, treating the fitness of solutions in the search space as the mass of celestial bodies. Thus, the solution with the optimal fitness has the largest mass and is regarded as the star in the celestial system, while the remaining solutions are treated as planets orbiting the star. The system radius is jointly determined by multiple planets orbiting the edge of the star.

2.3. Exploration Phase

Metaheuristic algorithms emphasize balancing the relationship between exploration and exploitation during the optimization process. These algorithms can update the next position through iteration based on the current position and the location of the best solution. Therefore, in this study, the exploration of the next position is still related to the algorithm’s own current position and the currently known optimal solution.
As shown in Figure 1, during the optimization process of the CEO algorithm, the search space is treated as cosmic space and divided into multiple expanding stellar system regions. This allows search agents to continuously explore broader areas, significantly enhancing the scope of global search.
However, in the evolutionary process, the cosmic expansion coefficient does not remain constant. We adopt an adaptive parameter dynamic adjustment mechanism, where the expansion coefficient gradually weakens over time. The cosmic expansion velocity V e p t can be described by the following model:
$$
V_{ep}(t) = W_1 \times \left(1 - 0.001 \times \frac{t}{T}\right) \times e^{-\frac{4t}{T}} \tag{3}
$$
where $t$ is the iteration step, $T$ is the maximum number of iterations, and $W_1$ is the expansion decay factor, which controls the precision of the exploration phase; its value was fixed at 0.1 through experiments. The trend of Equation (3) is shown in Figure 2.
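Equation (3) is straightforward to reproduce; the sketch below (with a hypothetical function name) shows how the expansion velocity decays from $W_1$ toward zero over the run:

```python
import numpy as np

def expansion_velocity(t, T, W1=0.1):
    """Cosmic expansion velocity of Eq. (3): near W1 at t = 0,
    decaying exponentially toward 0 as t approaches T."""
    return W1 * (1 - 0.001 * t / T) * np.exp(-4 * t / T)

# The velocity shrinks monotonically over the iterations
v_start = expansion_velocity(0, 100)    # = W1 = 0.1
v_mid = expansion_velocity(50, 100)
v_end = expansion_velocity(100, 100)
```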
Therefore, the update equation for celestial body motion is as follows:
$$
X_i(t+1) = X_i(t) + V_{ep}(t) \cdot \mathrm{randn}(\cdot) \times (ub - lb) \tag{4}
$$
In the exploration phase, the random number $\mathrm{randn}(\cdot)$ is employed to simulate the uncertainty in the direction of cosmic expansion, ensuring that celestial bodies can explore the search space as extensively as possible and expand into more feasible regions deep within the universe. In the early stage of algorithm iteration, the exponential term approaches 1, and $V_{ep}(t)$ is close to the initial value $W_1$. At this time, the cosmic expansion effect is significant, driving celestial bodies to perform large-scale random searches in the search space. In the later stage of iteration, the exponential term approaches 0, and $V_{ep}(t)$ tends to 0. The cosmic expansion effect weakens, and the algorithm gradually shrinks the exploration range, shifting to local fine-grained search, and the search space decreases step by step. Additionally, the influence of cosmic expansion on the motion of celestial bodies is also related to the dimension: the higher the dimension, the greater the impact of cosmic expansion on celestial bodies.
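The exploration move of Equation (4) perturbs every body by a normally distributed displacement scaled by the current expansion velocity. A minimal NumPy sketch (names are illustrative):

```python
import numpy as np

def expansion_step(X, Vep, lb, ub, seed=None):
    """Exploration move of Eq. (4): each body drifts by a random,
    dimension-wise displacement scaled by the expansion velocity Vep."""
    rng = np.random.default_rng(seed)
    return X + Vep * rng.standard_normal(X.shape) * (ub - lb)

# Five 3-dimensional bodies take one expansion step
X = np.zeros((5, 3))
X_next = expansion_step(X, Vep=0.1, lb=-10.0, ub=10.0, seed=0)
```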
Moreover, considering that the motion of celestial bodies is influenced not only by cosmic expansion but also by the gravitational attraction of stars and other celestial bodies within each stellar system, we simulate the universal gravitation phenomenon by which stellar systems attract celestial bodies to move, as shown in Figure 3.
First, the central coordinate and central fitness of each system are determined by the celestial body currently regarded as the star. The size of each stellar system is calculated by the following formula:
$$
R_c = \frac{1}{k_c} \sum_{j=1}^{k_c} d_j^{(c)}, \qquad d_j^{(c)} = \left\| X_{i_j} - O_c \right\|, \qquad k_c = \max\left(2, \mathrm{round}\left(\frac{N}{C}\right)\right) \tag{5}
$$
where $C$ represents the number of initialized stellar systems and $N$ denotes the total number of celestial bodies, so that $k_c$ is the number of celestial bodies used to calculate the average radius of the $c$-th stellar system. $X_{i_j}$ represents the coordinate of the $j$-th celestial body among the top $k_c$ candidate solutions after sorting by fitness, $O_c$ denotes the central coordinate of the $c$-th stellar system before the update, and $d_j^{(c)}$ represents the distance from the $i_j$-th celestial body to the center of the $c$-th stellar system.
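Equation (5) can be read as "average distance of the $k_c$ best bodies to the system center." The following sketch makes that concrete under the assumption that the candidates arrive pre-sorted by fitness (all names are illustrative):

```python
import numpy as np

def system_radius(X_sorted, O_c, N, C):
    """Average radius of a stellar system (Eq. 5).

    X_sorted: candidate bodies of the system, sorted by fitness (best first).
    O_c:      current system center coordinate.
    """
    k_c = max(2, round(N / C))                 # number of bodies averaged
    top = X_sorted[:k_c]                       # the k_c best candidates
    d = np.linalg.norm(top - O_c, axis=1)      # distances d_j^(c)
    return d.mean()

# Two best bodies sit at unit distance from the origin-centered system
X_sorted = np.array([[1.0, 0.0], [0.0, 1.0], [3.0, 4.0]])
R = system_radius(X_sorted, O_c=np.zeros(2), N=6, C=3)   # k_c = 2
```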
For the gravitational force exerted by the c-th stellar system on X i t , the calculation formula is as follows:
$$
F_c = U_c \cdot e^{-\frac{\left\| O_c - X_i \right\|}{R_c}}, \qquad U_c = \frac{O_c - X_i}{\left\| O_c - X_i \right\|} \tag{6}
$$
In the gravitational force formula above, $U_c$ directs the search trend of the solution $X_i(t)$ toward the center of each stellar system, forming multi-target guided exploration paths that avoid the limitations of a single search direction. Meanwhile, the gravitational strength $F_c$ is dynamically adjusted by an exponential decay function: when a celestial body is far from the system center ($\| O_c - X_i \| \gg R_c$), the gravitational strength decays exponentially, reducing the search weight in unpromising regions; when the celestial body is close to the center ($\| O_c - X_i \| \ll R_c$), the gravitational force increases significantly, strengthening local refined search. Since multiple stellar systems exist in the search space, celestial bodies are influenced by the gravitational forces of multiple stars. The algorithm therefore introduces weight coefficients to aggregate the gravitational force of every stellar system on the current celestial body. For the $c$-th system, the weight coefficient $w_c$ is calculated as
$$
w_c = e^{-\frac{\left| f_i - f_c \right|}{f_{best} + \epsilon}} \tag{7}
$$
where $f_i$ represents the mass of the current celestial body (i.e., its fitness), $f_c$ denotes the mass of the $c$-th star, and $f_{best}$ represents the mass of the largest star (the fitness of the current optimal solution). A very small positive number $\epsilon$ is introduced to avoid an unbounded quotient when $f_{best} = 0$.
The total weight is obtained by summing the weights of all stars, where
$$
W_{total} = \epsilon + \sum_{c=1}^{C} w_c \tag{8}
$$
The weight calculation model in Equation (7) enhances the local exploitation capability of the search space. If the current solution is inferior to the stellar system, the weight w c exponentially decays with the increase in difference, reducing the focus on this system and guiding the population to migrate to potential regions.
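Equations (6) and (7) can be sketched as two small functions. This is an illustrative interpretation of the text (the absolute difference in Eq. (7) and the $\epsilon$-guard placement follow the description above, and all function names are assumptions):

```python
import numpy as np

def system_force(X_i, O_c, R_c):
    """Per-system gravitational pull of Eq. (6): a unit vector toward the
    system center O_c, scaled by exponential decay in normalized distance."""
    diff = O_c - X_i
    dist = np.linalg.norm(diff)
    U_c = diff / (dist + 1e-12)            # guard the zero-distance case
    return U_c * np.exp(-dist / R_c)

def system_weight(f_i, f_c, f_best, eps=1e-12):
    """System weight of Eq. (7): decays as the fitness gap |f_i - f_c| grows;
    eps guards the f_best = 0 case."""
    return np.exp(-abs(f_i - f_c) / (f_best + eps))

# A body at unit distance from a unit-radius system feels force of magnitude e^-1
F_1 = system_force(np.array([1.0, 0.0]), np.zeros(2), R_c=1.0)
w_1 = system_weight(f_i=2.0, f_c=2.0, f_best=1.0)   # zero gap -> weight 1
```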
Based on Equations (6)–(8) above, the comprehensive gravitational force formula of the stellar system on the celestial body X i is derived as
$$
F(t) = \frac{\sum_{c=1}^{C} w_c \cdot F_c}{W_{total}} \tag{9}
$$
Thus, the updated equation for celestial body movement modified by universal gravitation can be summarized as follows:
$$
X_i(t+1) = X_i(t) + V_{ep}(t) \cdot \mathrm{randn}(\cdot) \times (ub - lb) + F(t) \tag{10}
$$
This formula describes the motion of the current celestial body, which is simultaneously influenced by cosmic expansion and the gravitational forces of multiple stellar systems.
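The weighted aggregation of Equations (8) and (9) reduces to a normalized weighted sum of the per-system forces. A minimal sketch (illustrative names):

```python
import numpy as np

def total_force(forces, weights, eps=1e-12):
    """Comprehensive gravitational force of Eq. (9): per-system forces F_c
    averaged with weights w_c, normalized by W_total of Eq. (8)."""
    W_total = eps + float(np.sum(weights))                       # Eq. (8)
    return np.sum([w * F for w, F in zip(weights, forces)], axis=0) / W_total

# Two systems pulling along orthogonal axes with equal weights average out
forces = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
F_t = total_force(forces, weights=[1.0, 1.0])
```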

2.4. Exploitation Phase

In the exploitation phase of the CEO algorithm, to simulate the influence of interactions between stellar systems on celestial bodies within their systems, we introduce a mechanism where all celestial bodies gradually adjust their positions under the gravitational attraction of the current largest star, namely the celestial alignment effect. To model the gravitational direction of the largest star on the current celestial body, we use the following reference direction:
$$
s_i(t) = X_{best}(t) - X_i(t) \tag{11}
$$
Due to the highly dispersed positions of celestial body populations, a normally distributed random number r a n d n is used to simulate the multi-directional movement of planets under the gravitational force of stars. This avoids all solutions approaching the optimal solution with the same amplitude and direction, thereby maintaining population diversity throughout the iteration process. Specifically, the influence of the alignment effect A t can be expressed as
$$
A(t) = \alpha(t) \times s_i(t) \times \mathrm{randn}(1, dim) \tag{12}
$$
where α denotes the alignment coefficient, whose value is not fixed but adaptively adjusted with the number of iterations. In this work, the initial value of α is set to 0.7. As the number of iterations t increases, the algorithm gradually enhances the weight of the alignment effect, where
$$
\alpha(t) = \alpha \times \left(1 + 0.0005 \cdot \frac{t}{T}\right) \tag{13}
$$
As can be seen from Equation (13), the alignment coefficient $\alpha(t)$ adopts an adaptive adjustment strategy, which gives the algorithm a degree of global exploration capability in the early iterations. In the later iterations, as $\alpha(t)$ increases, the update direction of solutions concentrates toward the current optimal solution, accelerating local fine-grained exploration and exploitation. The variation trend of $\alpha(t)$ is shown in Figure 4.
Thus, the trajectory of celestial body movement can be updated using the following formula:
$$
X_i(t+1) = X_i(t) + V_{ep}(t) \cdot \mathrm{randn}(\cdot) \times (ub - lb) + F(t) + A(t) \tag{14}
$$
Figure 5 shows the overall updated trajectory of celestial body movement in a two-dimensional parameter space. It can be observed that the cosmic expansion effect enables the algorithm to conduct large-scale searches in the early iteration stage, covering the entire search space. Notably, as the number of iterations increases, the influence of cosmic expansion on celestial bodies gradually weakens, while the gravitational forces from stellar systems and the largest mass celestial body gradually strengthen, guiding the population to explore more optimal regions.
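Equations (11)–(13) combine into the alignment term that Equation (14) adds to the motion update. A hedged NumPy sketch (function name and seeding are assumptions):

```python
import numpy as np

def alignment_effect(X_i, X_best, t, T, alpha0=0.7, seed=None):
    """Celestial alignment term of Eqs. (11)-(13): a randomized pull toward
    the current best solution with an adaptively growing coefficient."""
    rng = np.random.default_rng(seed)
    alpha_t = alpha0 * (1 + 0.0005 * t / T)                  # Eq. (13)
    s_i = X_best - X_i                                       # Eq. (11)
    return alpha_t * s_i * rng.standard_normal(X_i.shape)    # Eq. (12)

# Pull a body at the origin toward a best solution at (1, 1, 1, 1)
A = alignment_effect(np.zeros(4), np.ones(4), t=50, T=100, seed=1)
```

The per-dimension normal factor keeps bodies from all approaching the best solution with identical amplitude and direction, as the text notes.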

2.5. Stellar System Collision Strategy

In the early stages of cosmic evolution, stellar systems are generally internally fragile and unstable, with frequent collisions and close encounters between galaxies [46]. Therefore, when two planetary systems collide or pass close to each other, gravitational perturbations may cause significant deviations in the trajectories of some celestial bodies, leading them to enter new orbital paths for revolution.
To expand the global search scope and enable solutions to continuously explore unknown regions in the search space, the algorithm simulates the above evolutionary phenomenon by introducing a global exploration strategy applicable to early iterations, also known as the stellar system collision perturbation strategy. To fully explore and utilize the potential search space, all solutions have a probability of executing the system collision strategy during the optimization process.
The following formula represents the probability P global of the current solution executing the system collision strategy:
$$
P_{global} = P_{base} \times \left(1 - 0.0005 \times \frac{t}{T}\right) \times \left(1 - \frac{t}{T}\right) \tag{15}
$$
where the initial collision probability $P_{base}$ is fixed at 0.2. If $\mathrm{rand} < P_{global}$, a trajectory deviation is generated for the current celestial body $X_i$:
$$
X_{new} = X_i + \eta \cdot \bar{R} \tag{16}
$$
where $\eta \sim N(0, 1)$ denotes a standard normal random vector and $\bar{R}$ denotes the average radius of the stellar system.
During each iteration, this strategy performs omni-directional searches across the entire solution space through dynamically controlled probability and random collision mechanisms. In the initial stage, with a higher $P_{global}$ value, the algorithm favors large-scale global searches. In later stages, as $P_{global}$ decreases, the algorithm gradually shifts to local refined searches. The adaptive adjustment mechanism of $P_{global}$ achieves a smooth transition from broad exploration to local exploitation. Meanwhile, this jumping strategy uses the average stellar system radius $\bar{R}$ to calculate the jump step size, enabling dynamic changes in the exploration region of the current solution and ensuring matching between the search scope and the problem scale.
To ensure continuous algorithm convergence, the collision strategy introduces an elite retention mechanism: after jumping, the new solution X n e w is compared with the original solution X i ; if X n e w is better, it is accepted; otherwise, X i is retained. This mechanism not only preserves the elite characteristics of the population but also synchronously updates the global optimal solution X b e s t .
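The collision probability schedule and the elite-retaining jump described above can be sketched in Python as follows (an illustrative re-implementation, not the authors' MATLAB code; the sphere objective, the scalar mean radius, and the two-dimensional toy candidate are assumptions for demonstration):

```python
import numpy as np

def p_global(t, T, p_base=0.2):
    """Collision probability: decays from p_base toward 0 over the run."""
    return p_base * (1 - 0.0005 * t / T) * (1 - t / T)

def collision_jump(x, fitness, mean_radius, rng):
    """Normal perturbation scaled by the mean system radius, with elite
    retention: the new point is kept only if it improves the fitness."""
    x_new = x + rng.standard_normal(x.shape) * mean_radius
    return x_new if fitness(x_new) < fitness(x) else x

# toy usage on the sphere function with one 2-D candidate
rng = np.random.default_rng(0)
sphere = lambda v: float(np.sum(v ** 2))
x = np.array([2.0, -1.5])
if rng.random() < p_global(t=10, T=1000):
    x = collision_jump(x, sphere, mean_radius=0.5, rng=rng)
```

Because of the greedy acceptance rule, a candidate can never be made worse by this strategy, which matches the elite retention mechanism described above.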

2.6. Celestial Orbital Resonance Strategy

During the optimization process, to break through the gravitational potential well of local optimal solutions, this work proposes a cross-scale search mechanism guided by celestial orbit resonance, enabling the algorithm to conduct fine exploration toward the optimal solution.
In the early iteration stage, the universe is still in a chaotic period, and effective orbital spacing cannot be formed between celestial bodies within the system, making resonance phenomena extremely unstable. If a stable proportional relationship between orbits has not been established, gravitational forces will force changes in the current celestial orbits until stability is achieved [47]. Conversely, if the current stellar system enters a stable period, integer-ratio resonance phase-locking states are often formed between celestial orbits, resulting in minimal changes in the current celestial orbits.
Based on the above celestial mechanics phenomena, we propose a search mechanism for celestial orbit resonance applied to the algorithm, and its specific mathematical model is as follows:
$$P_{reson} = 0.1 \cdot e^{-3t/T}$$
$$\Delta X = 0.01 \times randn(1, dim) \times (ub - lb)$$
$$X_{reson}^{t+1} = \Delta X + X_i^{t+1}$$
where $P_{reson}$ represents the trigger probability of orbital resonance, which weakens progressively with iterations to simulate the phenomenon that celestial orbits within stellar systems tend to stabilize in the late stage of cosmic evolution. $\Delta X$ denotes the amplitude of the resonance perturbation, which uses $randn()$ to generate tiny offsets, indirectly simulating the uncertainty of the direction deviation of a celestial orbit after resonance occurs.
If the fitness of the current position is f local < f i after the resonance perturbation of a celestial body, it indicates that the current celestial orbit is more stable, and the trajectory equation of the celestial body’s movement is updated as follows:
$$X_i^{t+1} = X_{reson}^{t+1}$$
This strategy introduces the theory of celestial orbital resonance into the optimization search framework, significantly improving issues such as the algorithm being trapped in local optimal solutions. Combined with Equation (16), it forms a multi-scale search architecture. The stellar system collision perturbation strategy achieves cross-potential-well migration at the scale of stellar systems from a macro perspective, while the celestial orbital resonance mechanism performs intra-system orbital fine-tuning, keeping the algorithm highly active in exploration during iterations and enabling algorithm convergence and updates.
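A minimal Python sketch of the resonance step, under the same greedy acceptance rule (the sphere objective, bounds, and candidate point are illustrative assumptions):

```python
import numpy as np

def p_reson(t, T):
    """Resonance trigger probability, decaying exponentially with iterations."""
    return 0.1 * np.exp(-3 * t / T)

def orbital_resonance(x, fitness, lb, ub, rng):
    """Tiny random offset scaled by the search range; the perturbed point is
    accepted only if its fitness improves (greedy update)."""
    x_reson = x + 0.01 * rng.standard_normal(x.shape) * (ub - lb)
    return x_reson if fitness(x_reson) < fitness(x) else x

# toy usage
rng = np.random.default_rng(1)
sphere = lambda v: float(np.sum(v ** 2))
lb, ub = np.full(2, -10.0), np.full(2, 10.0)
x = np.array([0.3, -0.4])
if rng.random() < p_reson(t=800, T=1000):
    x = orbital_resonance(x, sphere, lb, ub, rng)
```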

2.7. Computational Complexity of CEO

The computational complexity of the CEO algorithm primarily depends on four factors: population initialization, fitness function evaluation and sorting, solution updates, and cosmic circle updates. Assuming $N$ is the number of solutions, $dim$ is the dimension of the solution space, and $T$ is the maximum number of iterations, the computational complexity of initializing the population is $O(N \times dim) + O(N)$, where $O(N)$ represents the initial fitness evaluation. During the iteration process, the complexity of evaluating and sorting the fitness functions is $O(T \times N \log N)$, the complexity of updating the solutions is $O(T \times N \times dim)$, and the complexity of updating the cosmic circles is $O(T \times N \times dim)$. Therefore, the total computational complexity of CEO is $O(N \times dim) + O(N) + O(T \times N \log N) + 2 \cdot O(T \times N \times dim)$, which simplifies to $O(T \cdot N(\log N + dim))$.

2.8. Pseudo-Code and Flowchart of CEO

The workflow of the CEO algorithm is illustrated in Figure 6, and its pseudo-code is presented as Algorithm 1. During the initialization process, all initial candidate solutions are sorted according to fitness in sequence to generate three stellar celestial bodies. The stellar celestial bodies attract surrounding planetary celestial bodies to form a stellar celestial body system, which enters the early iteration stage.
At the beginning of each iteration, the stellar celestial bodies will change according to the fitness-based ranking, and the size of the stellar celestial body system will also change accordingly. At this time, phenomena such as collisions of stellar systems may occur, and this process will expand the scope of the global search of the algorithm in the solution space. If this strategy is not executed, the position coordinates of each celestial body for the next iteration will continue to be updated.
After each iteration, the algorithm may provide a strategy for solutions that have not been improved during the iteration to escape from local optima, namely the celestial orbit resonance strategy. If there is still no improvement at this point, the algorithm proceeds to the next round of iteration until the overall iteration is completed.
Algorithm 1: Cosmic evolution optimizer (CEO)
   Initialize population positions X i within search space bounds [ l b , u b ]
   Evaluate fitness values f i t i for each solution candidate
   Initialize Stellar systems parameters: numbers & centers & radius
   while Iteration < Maximum iterations T
      Sort solutions by fitness
      Update Stellar system
      for  i = 1 : C
         Set center to C i best solution
         Calculate radius as mean distance to nearest solutions by using Equation (5)
      end
      for  i = 1 : N
         if  rand < P g l o b a l
            Generate parallel position X n e w with cosmic noise by using Equation (16)
            if  f n e w < f b e s t
               Update best fitness
            end
         end
         Calculate celestial forces from all circles F c by using Equation (6)
         Calculate combined force F ( t ) by using Equation (9)
         Calculate Cosmic expansion V e p ( t ) by using Equation (3)
         Calculate alignment A ( t ) by using Equation (12)
         Update position X i ( t + 1 ) by using Equation (14)
         if  f i ( t + 1 ) < f i ( t )
            Update solution
         else
            if  rand < P r e s o n
               Generate local position X r e s o n by using Equation (17)
               Update solution if improved
            end
         end
      end
   end while
   Return the best solution X b e s t and its fitness f b e s t
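The loop structure of Algorithm 1 can be sketched in Python as below. This is a structural illustration only: the per-agent movement operator is a placeholder (a random pull toward the nearest stellar center), not the paper's Equations (3)-(14), and the radius is a simple population-spread proxy rather than Equation (5).

```python
import numpy as np

def ceo_sketch(fitness, lb, ub, n=30, dim=2, T=100, n_systems=3, seed=0):
    """Structural sketch of Algorithm 1 with a placeholder movement operator."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((n, dim)) * (ub - lb)
    fit = np.array([fitness(x) for x in X])
    best, f_best = X[np.argmin(fit)].copy(), float(fit.min())
    for t in range(T):
        centers = X[np.argsort(fit)[:n_systems]]        # stellar bodies
        radius = np.mean(np.abs(X - X.mean(axis=0)))    # mean system spread
        p_glob = 0.2 * (1 - 0.0005 * t / T) * (1 - t / T)
        p_res = 0.1 * np.exp(-3 * t / T)
        for i in range(n):
            if rng.random() < p_glob:                   # system collision
                x_c = np.clip(X[i] + rng.standard_normal(dim) * radius, lb, ub)
                f_c = fitness(x_c)
                if f_c < f_best:
                    best, f_best = x_c.copy(), f_c
            # placeholder move: pull toward the nearest stellar center
            c = centers[np.argmin(np.linalg.norm(centers - X[i], axis=1))]
            x_new = np.clip(X[i] + rng.random(dim) * (c - X[i]), lb, ub)
            f_new = fitness(x_new)
            if f_new < fit[i]:
                X[i], fit[i] = x_new, f_new
            elif rng.random() < p_res:                  # orbital resonance
                x_r = np.clip(X[i] + 0.01 * rng.standard_normal(dim) * (ub - lb),
                              lb, ub)
                f_r = fitness(x_r)
                if f_r < fit[i]:
                    X[i], fit[i] = x_r, f_r
            if fit[i] < f_best:
                best, f_best = X[i].copy(), float(fit[i])
    return best, f_best

best, f_best = ceo_sketch(lambda v: float(np.sum(v ** 2)), -5.0, 5.0)
```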

3. Experimental Results and Analysis

In this section, the performance of the cosmic evolution optimizer (CEO) algorithm is validated via the CEC2017 test suite, CEC2020 real-world constrained optimization problems, and multiple typical engineering problems. The experimental investigation comprises five components: (1) benchmark functions and experimental configurations; (2) experiments on the CEC2017 test suite; (3) an analysis of the exploration–exploitation balance in CEO; (4) CEC2020 real-world constrained problems; and (5) two engineering application cases.
All experiments and simulation studies in this section were implemented in MATLAB R2024a. The code was executed on a computer equipped with a 12th-generation Intel Core i5-12450H processor (2.0 GHz), 16.0 GB of RAM, and the Windows 11 operating system.

3.1. Benchmark Functions and Experimental Settings

This subsection employs the 30-dimensional, 50-dimensional, and 100-dimensional variants of the 29 benchmark functions from the CEC2017 test suite for experimental investigations. Table 1 classifies these 29 unconstrained CEC2017 benchmark problems according to their characteristics; for classification criteria and detailed specifications, refer to the corresponding literature [48].
As shown in Table 1, unimodal problems (F1 and F3) are often used to verify the exploitation capability of algorithms due to their single global optimum. In contrast, multimodal problems (F4–F10) contain numerous local optima. Premature convergence to any local optimum indicates insufficient exploration capability, making them suitable for validating an algorithm’s exploratory power. Hybrid problems (F11–F20) and composite problems (F21–F30) typically exhibit multimodality and nonlinearity, better simulating real-world search spaces. Their evaluation metrics generally reflect an algorithm’s comprehensive performance.
The CEC2017 test functions consist of 20 basic functions. Unimodal and simple multimodal functions are formed by translating and rotating nine basic functions, while hybrid and composite functions are constructed from multiple basic functions. Since exact mathematical expressions for these functions are difficult to provide, Table 2 lists the basic information and theoretical optimal values of the CEC2017 test functions. For detailed specifications, refer to Reference [48].
These typical benchmark functions comprehensively reflect an algorithm’s potential performance. Thus, this subsection focuses on a comprehensive evaluation of CEO on CEC2017, including its testing and performance analysis.

3.2. Experiments on the CEC2017 Test Suite

3.2.1. Convergence Behavior of CEO

In this subsection, qualitative experimental analyses of the CEO algorithm are conducted using several benchmark functions from CEC2017, focusing on search history, convergence plots, and population diversity. The experiments set the CEO population size to 30 and use a total of 1000 iterations as the termination criterion. The results are presented in Figure 7.
The first column depicts the parameter spaces of the benchmark problems, visually illustrating their structural characteristics. F1 and F3 exhibit smooth landscapes, classified as unimodal problems, while F11–F30 simulate realistic solution spaces with complex multimodality.
The second column shows search history plots, intuitively reflecting the positional changes of all individuals during iterations. The plots reveal that a small number of historical best solutions are dispersed in the space, whereas most converge near the global optimum.
The third column records the best fitness value of the CEO algorithm after each iteration. It can be seen from the figure that, for most test functions, the CEO algorithm gradually improves the quality of its solutions as the number of iterations increases. Thanks to the synergistic effect of macroscopic system interaction behaviors and the celestial orbit resonance mechanism, the CEO algorithm can escape from a local-optimum trap after a period of stagnation and then resume searching for the global optimum.
The last column records the change in population diversity during the optimization process. It can be seen that, as the iterations proceed, the population diversity slowly decreases, indicating that the CEO algorithm can quickly locate the general region of the global optimum, with the population gathering near the optimal solution and conducting in-depth exploitation around it.
In summary, CEO demonstrates the following characteristics: (1) Early-stage Parallel Exploration: Multiple search agents independently explore distinct celestial systems, enabling rapid approximation of the global optimum. (2) Adaptive Search Dynamics: Random position adjustments throughout optimization maintain global exploration capabilities and prevent entrapment in local optima. (3) Dual-strategy Balance: The algorithm’s dual perturbation strategies ensure continuous exploration of unexplored regions in early iterations and prevent premature convergence in late stages, achieving a robust balance between exploration and exploitation.

3.2.2. Comparison and Analysis with State-of-the-Art Algorithms

To further validate the optimization performance of CEO and demonstrate its novelty and superiority, this subsection compares CEO with ten state-of-the-art algorithms using the 30-dimensional, 50-dimensional, and 100-dimensional CEC2017 test suites. The compared algorithms include NRBO [49], ETO [50], EVO, DBO [51], RIME, GJO [52], SOA [53], WOA, SCA [54], and MFO [55]. Detailed information on the specific names of the compared algorithms and their key parameters is provided in Table 3.
All algorithms were configured with identical fundamental parameters: a search agent population size of N = 50, solution space dimensions of D = 30 , 50 , 100 , and a maximum of 1000 iterations. Each algorithm was independently executed 30 times to ensure the reliability of results. The mean (average) and standard deviation were employed to evaluate optimization performance and are defined as follows:
$$mean = \frac{1}{N} \sum_{i=1}^{N} f_i^{*}, \qquad std = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} \left( f_i^{*} - mean \right)^2}$$
where the mean value reflects the algorithm's convergence accuracy, while the standard deviation indicates its stability. Here, $i$ denotes the run index, $N$ is the total number of runs, and $f_i^{*}$ represents the best solution obtained in the $i$-th run.
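These two statistics can be computed from the per-run best fitness values as follows (a minimal sketch; the three sample values are arbitrary):

```python
import numpy as np

def run_statistics(best_per_run):
    """Mean and sample standard deviation (1/(N-1)) of the per-run best fitness."""
    f = np.asarray(best_per_run, dtype=float)
    mean = f.mean()
    std = np.sqrt(((f - mean) ** 2).sum() / (f.size - 1))
    return mean, std

mean, std = run_statistics([3.0, 1.0, 2.0])
```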
The simulation results of the 11 algorithms are summarized in Table 4, Table 5 and Table 6. Based on the mean and standard deviation, the following conclusions were drawn:
For the unimodal problems C2017-1 and C2017-3, when the problem dimension was 30, the CEO algorithm outperformed all competing algorithms except RIME, and the quality of its optimal solution was significantly improved compared with the other algorithms, while algorithms such as SCA and NRBO could not directly find the global optimum. In the 50- and 100-dimensional cases, CEO outperformed all algorithms on the F1 function with a clear lead. Although its performance on the F3 function was similar to that of the other algorithms, this shows that the performance of the CEO algorithm did not deteriorate as the dimension increased but remained stable.
For the simple multimodal problems C2017-4 to C2017-10, in the tested 30-, 50-, and 100-dimensional problems, CEO surpassed all other compared algorithms on the F4–F9 functions and obtained the best average solution. In the 30- and 50-dimensional cases, CEO's performance on the F10 function was second only to RIME, while in the 100-dimensional case, CEO exceeded RIME and led all algorithms. Notably, in the high-dimensional cases, the standard deviation of CEO on the F10 function also improved significantly.
For the hybrid problems C2017-11 to C2017-20, in the 30-dimensional case, CEO surpassed WOA, SOA, ETO, RIME, SCA, MFO, NRBO, GJO, EVO, and DBO on 10, 10, 10, 7, 10, 10, 10, 10, 10, and 10 problems, respectively. In the 50-dimensional case, CEO surpassed the same algorithms on 10, 10, 10, 7, 10, 10, 10, 10, 10, and 10 problems, and in the 100-dimensional case on 10, 10, 10, 8, 10, 10, 10, 10, 10, and 10 problems, respectively. CEO handled high-dimensional hybrid problems even better than low-dimensional ones, demonstrating adaptability to hybrid problems across all dimensions.
For the composite problems C2017-21 to C2017-30, in the 30-, 50-, and 100-dimensional cases, CEO surpassed WOA, SOA, ETO, RIME, SCA, MFO, NRBO, GJO, EVO, and DBO on 10, 10, 10, 9, 10, 10, 10, 10, 10, and 10 problems, respectively.
Based on the above data, we performed the Friedman test [56] on these eleven algorithms, and the specific ranking results are shown in Table 7. It can be seen from the table that the proposed CEO algorithm performed exceptionally well, achieving a comprehensive rank of 1.2333 on 30-dimensional functions, 1.24318 on 50-dimensional functions, and 1.36667 on 100-dimensional functions. Notably, as the dimension of the solution space increased, CEO demonstrated superior optimization performance, making it more suitable for solving high-dimensional problems.
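The average ranks reported in the ranking tables can be computed as follows (an illustrative sketch with invented toy scores; it computes per-problem ranks averaged over problems, not the Friedman chi-square statistic, and assumes no ties):

```python
import numpy as np

def friedman_average_ranks(scores):
    """scores[p][a]: mean error of algorithm a on problem p (lower is better).
    Returns each algorithm's rank averaged over all problems."""
    scores = np.asarray(scores, dtype=float)
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1   # 1 = best per problem
    return ranks.mean(axis=0)

# toy example: 3 problems, 3 algorithms; algorithm 0 always wins
avg_rank = friedman_average_ranks([[1.0, 2.0, 3.0],
                                   [0.5, 1.5, 1.0],
                                   [2.0, 9.0, 4.0]])
```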
To comprehensively evaluate the performance of the CEO algorithm from a statistical perspective, we employed the non-parametric Wilcoxon rank-sum test [56] with a significance level of 0.05, conducting validation experiments on the CEC2017 benchmark functions. This test measures the performance differences between CEO and the other comparative algorithms across different dimensions. For intuitive presentation, we use a “+/=/-” notation system: “+” indicates that CEO significantly outperforms the compared algorithm (p-value < 0.05); “=” denotes no significant difference (p-value ≥ 0.05); and “-” signifies that CEO performs significantly worse than the compared algorithm. The specific results are shown in Table 8.
Statistical results reveal that CEO achieved 257 significant wins and only 33 non-significant outcomes in 30-dimensional problems, yielding a win rate of 88.6%. In more challenging 50D and 100D tests, the significant win rates remained high at 85.1% (247/290) and 81.7% (237/290), demonstrating strong dimensional adaptability and algorithmic robustness. Notably, CEO excelled in comparisons with mainstream algorithms such as EVO, GJO, RIME, and WOA, achieving over 25 significant wins (with a superiority rate exceeding 86%) in both 30D and 50D. This highlights the synergistic optimization of its global search capability and convergence speed. When facing algorithms with more complex structures or dynamic mechanisms, such as NRBO, SOA, and SCA, CEO still secured over 20 statistical wins, verifying its strong adaptability and competitiveness.
In summary, the Wilcoxon test statistically confirms the stability and superiority of the CEO algorithm in solving CEC2017 multi-dimensional complex optimization problems, providing robust support for its practical application promotion.
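A pairwise comparison of this kind can be sketched as follows, mapping the rank-sum test onto the “+/=/-” notation (the two normal samples are hypothetical per-run best-fitness values, not the paper's data; lower fitness is taken as better):

```python
import numpy as np
from scipy.stats import ranksums

def compare(ceo_runs, other_runs, alpha=0.05):
    """Return '+', '=', or '-' for CEO versus one compared algorithm."""
    _, p = ranksums(ceo_runs, other_runs)
    if p >= alpha:
        return '='
    return '+' if np.median(ceo_runs) < np.median(other_runs) else '-'

# hypothetical per-run best-fitness samples for two algorithms
rng = np.random.default_rng(2)
symbol = compare(rng.normal(0.0, 0.1, 30), rng.normal(5.0, 0.1, 30))
```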
Additionally, Figure 8 and Figure 9 illustrate the convergence curves of CEO and the other algorithms on the 100-dimensional CEC2017 benchmark functions. From the figures, it can be observed that CEO's convergence speed far exceeds that of the other algorithms on most functions, rapidly converging to the optimal value. The plots also reveal that CEO exhibits strong global search capability in the early stages of iteration, with the convergence curve dropping sharply. In the mid-to-late stages, CEO demonstrates excellent local exploitation capability, with the convergence curve declining gradually, thereby avoiding premature convergence.

3.3. Analysis of Exploration and Exploitation in CEO

In this section, a visual analysis of exploration and exploitation in CEO is conducted using the CEC2017 unconstrained benchmark tests (100 dimensions). Although the experiments in Section 3.2 evaluated CEO using metrics such as the standard deviation and mean, they did not clarify the trends in exploration and exploitation performance during optimization. Therefore, this subsection elaborates on the relationship between exploration and exploitation in the CEO algorithm to explain the reasons behind its excellent performance.
Hussain et al. (2018) proposed a method [57] to measure and analyze the exploration and exploitation behavior of metaheuristic algorithms, which uses mathematical representations of dimensional diversity. The formulas are as follows:
$$Div_j = \frac{1}{N} \sum_{i=1}^{N} \left| median(z^j) - z_i^j \right|, \qquad Div = \frac{1}{Dim} \sum_{j=1}^{Dim} Div_j$$
where $median(z^j)$ denotes the median value of the $j$-th dimension across the population and $z_i^j$ is the $j$-th dimension of the $i$-th individual.
The formulas for calculating the exploration percentage and exploitation percentage are summarized as
$$Epl\% = \frac{Div}{Div_{max}} \times 100, \qquad Ept\% = \frac{\left| Div - Div_{max} \right|}{Div_{max}} \times 100$$
where $Div_{max}$ represents the maximum dimensional diversity, while $Epl\%$ and $Ept\%$ denote the exploration percentage and exploitation percentage, respectively.
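These diversity measures can be computed per iteration and then normalized over the run, as in the following sketch (the toy population and diversity history are invented for illustration):

```python
import numpy as np

def diversity(X):
    """Div: mean absolute deviation from the per-dimension median."""
    med = np.median(X, axis=0)               # median(z^j) for each dimension j
    return float(np.mean(np.abs(med - X)))   # average over i and j

def exploration_exploitation(div_history):
    """Epl% and Ept% relative to the maximum diversity observed in the run."""
    div = np.asarray(div_history, dtype=float)
    d_max = div.max()
    return div / d_max * 100, np.abs(div - d_max) / d_max * 100

# toy population of 3 individuals in 2 dimensions
X = np.array([[0.0, 0.0], [2.0, 2.0], [4.0, 4.0]])
div_t = diversity(X)                         # diversity at one iteration
epl, ept = exploration_exploitation([div_t, 1.0, 0.5])
```

Note that $Epl\%$ and $Ept\%$ sum to 100 at every iteration, which is why the two curves in plots of this metric mirror each other.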
As clearly illustrated in the curves of CEC2017-F3, F9, F11, and F25 in Figure 10, the CEO algorithm demonstrates differentiated dynamic switching patterns between exploration and exploitation across iterative processes for distinct functions.
For simple unimodal and multimodal problems, such as F3 and F9, CEO initially employs exploration to rapidly cover the solution space, identifying regions with exploitable potential. Exploitation then dominantly drives the convergence process. In contrast, for hybrid and composite problems with abundant local optima, such as F11 and F25, CEO maintains a high exploration rate in the early stage to evade local trapping. Following mid-term iterations, exploitation gradually assumes dominance, forming a hierarchical mechanism comprising wide-area searching, local focusing, and fine-grained tuning. This mechanism enables exploration and exploitation to synergistically advance the optimization process, demonstrating that a high exploration rate does not impede exploitation.
The trade-off between exploration and exploitation in CEO essentially represents a dynamic strategy adapted to function complexity. Empirical results further validate that CEO outperforms the majority of comparative algorithms in solving CEC2017 benchmark functions across varying dimensions.
In summary, CEO achieves a balance between exploration and exploitation to a certain extent, enabling superior performance in solving both low-dimensional and high-dimensional problems.

3.4. Experiments on CEC2020 Real-World Constrained Optimization Problems

In this subsection, 13 CEC2020 real-world constrained optimization problems from diverse engineering domains are employed to validate the capabilities of CEO against other comparative algorithms. Unlike the unconstrained benchmark problems in CEC2017, CEC2020 real-world constrained optimization problems exhibit high non-convexity and complexity, involving multiple equality and inequality constraints.
Consequently, when addressing the constrained optimization problems in CEC2020, the penalty function method serves as a critical approach for transforming constrained problems into unconstrained ones by introducing terms that penalize constraint violations. This methodology is primarily categorized into static and dynamic penalty functions. The static penalty function employs a fixed penalty coefficient throughout the optimization process. For instance, for the constraint $g(x) \le 0$, a typical construction adds the penalty term $k \cdot \max(0, g(x))^2$ to the objective, where $k$ denotes a pre-specified constant. This approach is characterized by its straightforward implementation and is well-suited for CEC2020 problems with stable constraint structures. In contrast, the dynamic penalty function adaptively adjusts the penalty coefficient during the optimization process, often defined by the form $k(t) = k_0 \cdot e^{a t}$, where $k_0$ represents the initial coefficient and $a$ signifies the growth factor. This strategy imposes lenient penalties in the early optimization phase to facilitate extensive exploration of the solution space while progressively increasing the penalty intensity in later stages to ensure strict compliance with the constraints.
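Both penalty variants can be sketched as follows (a minimal illustration with an invented one-dimensional problem; the coefficient values are assumptions, not CEC2020 settings):

```python
import numpy as np

def static_penalty(x, objective, constraints, k=1e6):
    """f(x) + k * sum(max(0, g(x))^2) with a fixed coefficient k."""
    viol = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return objective(x) + k * viol

def dynamic_penalty(x, objective, constraints, t, k0=1.0, a=0.01):
    """Same penalty shape, but with k(t) = k0 * exp(a * t) growing over time."""
    viol = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return objective(x) + k0 * np.exp(a * t) * viol

# toy: minimize x^2 subject to x >= 1, written as g(x) = 1 - x <= 0
obj = lambda x: x ** 2
g = [lambda x: 1.0 - x]
feasible = static_penalty(1.5, obj, g)     # no violation: plain objective
infeasible = static_penalty(0.5, obj, g)   # heavily penalized
```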
CEC2020-RW covers several problem classes, including industrial chemical processes, process synthesis and design, mechanical engineering, power systems, and livestock feed formulation optimization. Detailed descriptions of these problems can be found in Reference [58]. Considering that different optimization algorithms may not be compatible with all problems, 13 typical problems from CEC2020-RW are selected to validate CEO and the ten other algorithms. The problem names, primary parameters, and theoretical optimal values are listed in Table 9. In the experiments, each algorithm was independently run 30 times with a population size of 50 and a termination criterion of 1000 total iterations.
Table 10 presents the results of eleven algorithms on 13 practical engineering applications in CEC2020-RW, with the best results bolded. Based on the data in the table, the following conclusions can be drawn:
For industrial chemical process problems R1 and R2, CEO achieved the best performance, obtaining optimal values in both problems. This indicates that CEO has excellent potential in solving industrial chemical process problems.
For process synthesis and design problems R3–R7, CEO also performed remarkably well. Except for ranking third in problem R7, CEO achieved the best results in all other optimization problems, demonstrating its non-negligible potential in process synthesis and design.
For mechanical engineering problems R8–R13, CEO outperformed WOA, SOA, ETO, RIME, SCA, MFO, NRBO, GJO, EVO, and DBO in six, five, six, five, five, five, six, three, six, and six problems, respectively, highlighting its superior performance in handling mechanical engineering issues.
Additionally, a Friedman test was conducted on the eleven algorithms over the 13 practical engineering problems. Table 11 shows the performance rankings of the algorithms on CEC2020-RW, where CEO ranked first with an average rank of 1.85. This confirms that CEO meets the requirements of practical engineering problems and holds promise for future engineering applications.

3.5. CEO for Robot Path Planning Problem

In this subsection, eleven algorithms are employed to solve the path planning problem for a robot, aiming to find the shortest path from the start point to the end point while avoiding obstacles. Based on this engineering context, a 10 × 10 two-dimensional configuration space X is constructed, where the start coordinate is (0, 0) and the end coordinate is (10, 10). The coordinates and radii of the nine obstacles are listed in Table 12, and the entire 2D space is shown in Figure 11.
The 2D path planning problem in this study can be formulated as the following optimization model:
$$\min_{p_i} J = L(p)\left(1 + \lambda_1 \cdot Viol\right)$$
$$L(p) = \sum_{i=1}^{M-1} \sqrt{(X_{i+1}-X_i)^2 + (Y_{i+1}-Y_i)^2}$$
$$Viol = \sum_{j=1}^{O} \frac{1}{M} \sum_{k=1}^{M} \max\left(1 - \frac{d_{jk}}{r_j},\ 0\right)$$
$$d_{jk} = \sqrt{(X_k - x_j)^2 + (Y_k - y_j)^2}, \qquad \lambda_1 = 10^5$$
where p i denotes the interpolation control points on the path (excluding the start and end points), L ( p ) is the total path length, V i o l represents the collision penalty term between the path and all obstacles, d j k is the distance between the k-th point on the path and the center of the j-th obstacle, r j is the radius of the j-th obstacle, O is the total number of obstacles, M is the number of path interpolation points, and λ 1 is the penalty coefficient. The goal of path planning is to minimize the path length from start to end while ensuring obstacle-avoidance constraints.
The penalty function $Viol$ specifically measures the proximity of path interpolation points to obstacles. If a path point lies inside an obstacle ($d_{jk} < r_j$), it penalizes the overall path through the penalty term; if it lies outside, it does not influence path planning. The robot's path is thus obtained by optimizing the path points through the algorithm, with the total number of interpolation points M set to 101.
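The objective above can be sketched as follows (an illustrative implementation; the straight-line path and the single obstacle are assumptions, not the Table 12 configuration):

```python
import numpy as np

def path_cost(points, obstacles, lam=1e5):
    """J = L(p) * (1 + lam * Viol) for a polyline of M interpolation points;
    obstacles is a list of (x_j, y_j, r_j) tuples."""
    pts = np.asarray(points, dtype=float)          # shape (M, 2)
    seg = np.diff(pts, axis=0)
    length = float(np.sum(np.hypot(seg[:, 0], seg[:, 1])))
    viol = 0.0
    for xj, yj, rj in obstacles:
        d = np.hypot(pts[:, 0] - xj, pts[:, 1] - yj)
        viol += float(np.mean(np.maximum(1.0 - d / rj, 0.0)))
    return length * (1.0 + lam * viol)

# straight line from (0, 0) to (10, 10) past a single off-path obstacle
line = np.linspace([0.0, 0.0], [10.0, 10.0], 101)
cost = path_cost(line, obstacles=[(8.0, 2.0, 1.0)])
```

For a collision-free path the cost reduces to the plain path length, while any interpolation point inside an obstacle inflates the cost by roughly five orders of magnitude, which is what steers the optimizer toward feasible paths.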
In the experiments, a total of 1000 iterations were set, with each algorithm independently run 30 times on a population size of 100. Figure 12 shows the optimal paths of various algorithms, while Figure 13 displays the average convergence curves of different optimization algorithms. Table 13 presents the mean and standard deviation of the optimized paths for each algorithm, with the best results in bold.
As shown in Figure 12, most algorithms attempt to navigate through the middle of obstacles to reach the endpoint, whereas SOA chooses to bypass obstacles from the outside. According to Table 13, CEO outperformed most algorithms, ranking second only to RIME in mean value, while SCA performed the worst. CEO’s standard deviation also stood out, leading most algorithms, further confirming that CEO not only achieves optimality but also ensures stability in path planning. From the average convergence curve (Figure 13), CEO demonstrated remarkable capability in quickly finding the optimal value, approaching the optimal solution at around 120 iterations—far superior to other algorithms—indicating stronger local exploitation ability. By the 1000th iteration, CEO’s final convergence result was significantly better than all algorithms except RIME.
Given that RIME employs mechanisms of hard rime puncture and positive greedy selection, it enables rapid local refinement of path inflection points. With more sensitive parameter adaptability, RIME is well-suited for specialized tasks like path planning, thus outperforming CEO to a certain extent.
In summary, CEO performed outstandingly in the robot path planning problem, further validating its capability to solve complex real-world problems and warranting further research.

3.6. Application of CEO in Photovoltaic System Parameter Extraction

In the field of new energy, photovoltaic (PV) systems are powerful tools for harnessing solar energy and converting it directly into electrical energy. Therefore, extracting parameters based on measured current–voltage data to design accurate and efficient models for PV systems holds significant engineering significance.
In this subsection, three classical photovoltaic models [59] are selected to validate the effectiveness and stability of CEO against ten other algorithms in this engineering problem: the single-diode model (SDM), the double-diode model (DDM), and the photovoltaic module model (PVMM). The equivalent circuit diagrams of these three photovoltaic models are shown in Figure 14.
In the SDM, five core parameters need to be extracted: the photo-generated current ($I_{ph}$), the reverse saturation current ($I_{sd}$), the series resistance ($R_s$), the shunt resistance ($R_{sh}$), and the diode ideality factor ($n$), as shown in Figure 14a. The mathematical models of these core parameters are summarized as follows:
$$I_L = I_{ph} - I_{sh} - I_d$$
$$I_d = I_{sd} \cdot \left[ \exp\left( \frac{(I_L \cdot R_s + V_L) \cdot q}{T \cdot n \cdot k} \right) - 1 \right]$$
$$I_{sh} = \frac{I_L \cdot R_s + V_L}{R_{sh}}$$
where $I_L$ denotes the output current, $I_d$ the diode current, $I_{sh}$ the shunt resistance current, $V_L$ the output voltage, $T$ the temperature measured in Kelvin, $q$ the electron charge ($1.60217646 \times 10^{-19}$ C), and $k$ the Boltzmann constant ($1.3806503 \times 10^{-23}$ J/K). Therefore, the calculation formula for $I_L$ in the SDM is as follows:
$$I_L = I_{ph} - \frac{I_L \cdot R_s + V_L}{R_{sh}} - I_{sd} \cdot \left[ \exp\left( \frac{(I_L \cdot R_s + V_L) \cdot q}{T \cdot n \cdot k} \right) - 1 \right]$$
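Note that this equation is implicit in $I_L$, since the current appears inside the exponential. One common way to evaluate it numerically is Newton's method, sketched below (the parameter values are illustrative only, not the paper's extracted parameters):

```python
import numpy as np

Q = 1.60217646e-19   # electron charge (C)
K = 1.3806503e-23    # Boltzmann constant (J/K)

def sdm_current(v_l, i_ph, i_sd, r_s, r_sh, n, t_kelvin, iters=50):
    """Solve the implicit SDM equation for I_L with Newton's method."""
    vt = n * K * t_kelvin / Q                  # n*k*T/q, the thermal voltage
    i_l = i_ph                                 # initial guess
    for _ in range(iters):
        e = np.exp((i_l * r_s + v_l) / vt)
        f = i_ph - (i_l * r_s + v_l) / r_sh - i_sd * (e - 1.0) - i_l
        df = -r_s / r_sh - i_sd * e * r_s / vt - 1.0
        i_l -= f / df
    return i_l

# illustrative parameter values only, NOT the paper's extracted parameters
params = dict(i_ph=0.7608, i_sd=3.23e-7, r_s=0.0364,
              r_sh=53.72, n=1.48, t_kelvin=306.15)
i_l = sdm_current(0.5, **params)
```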
In the DDM, seven core parameters need to be extracted: the photo-generated current ($I_{ph}$), the diffusion current ($I_{sd1}$), the saturation current ($I_{sd2}$), the series resistance ($R_s$), the shunt resistance ($R_{sh}$), and the diode ideality factors ($n_1$ and $n_2$). The circuit model is shown in Figure 14b. The calculation formula for $I_L$ in the DDM is as follows:
$$I_L = I_{ph} - \frac{I_L \cdot R_s + V_L}{R_{sh}} - I_{sd1} \cdot \left[ \exp\left( \frac{(I_L \cdot R_s + V_L) \cdot q}{T \cdot n_1 \cdot k} \right) - 1 \right] - I_{sd2} \cdot \left[ \exp\left( \frac{(I_L \cdot R_s + V_L) \cdot q}{T \cdot n_2 \cdot k} \right) - 1 \right]$$
The PVMM is composed of multiple identical photovoltaic cells connected in parallel or series, and its specific circuit model is shown in Figure 14c. The calculation formula for I L in the PVMM is as follows:
$$I_L = I_{ph} \cdot N_p - I_{sd} \cdot N_p \cdot \left[ \exp\left( \frac{\left(I_L \cdot \frac{N_s}{N_p} R_s + V_L\right) \cdot q}{T \cdot n \cdot k \cdot N_s} \right) - 1 \right] - \frac{I_L \cdot \frac{N_s}{N_p} R_s + V_L}{\frac{N_s}{N_p} R_{sh}}$$
where N p and N s denote the number of photovoltaic cells in parallel and series, respectively. In this model, five core parameters ( I p h , I s d , R s , R s h , and n) need to be extracted. The specific ranges of different parameters in the three photovoltaic models are listed in Table 14.
In the parameter extraction problem of photovoltaic systems, the primary objective is to minimize the discrepancy between the experimental data estimated by the algorithm and the actual measured data. As a complex nonlinear optimization problem, the root mean square error (RMSE) amplifies the impact of larger errors through squaring operations and is more sensitive to errors than other error metrics. Therefore, the RMSE is selected as the objective function for this problem, with its mathematical formula defined as follows:
\min\ \mathrm{RMSE}(X) = \sqrt{\frac{1}{N} \sum_{i=1}^{N} f_i(X, I_L, V_L)^2}
where N represents the number of actual measured data points and X denotes the solution vector of unknown core parameters to be extracted in the photovoltaic model. The specific forms of f_i ( X , I_L , V_L ) in the three models are as follows:
For SDM,
f_i(X, I_L, V_L) = I_{ph} - \frac{I_L R_s + V_L}{R_{sh}} - I_{sd}\left[\exp\!\left(\frac{q\,(I_L R_s + V_L)}{n k T}\right) - 1\right] - I_L, \qquad X = \{I_{ph}, I_{sd}, R_s, R_{sh}, n\}
For DDM,
f_i(X, I_L, V_L) = I_{ph} - I_{sd1}\left[\exp\!\left(\frac{q\,(I_L R_s + V_L)}{n_1 k T}\right) - 1\right] - I_{sd2}\left[\exp\!\left(\frac{q\,(I_L R_s + V_L)}{n_2 k T}\right) - 1\right] - \frac{I_L R_s + V_L}{R_{sh}} - I_L, \qquad X = \{I_{ph}, I_{sd1}, I_{sd2}, R_s, R_{sh}, n_1, n_2\}
For PVMM,
f_i(X, I_L, V_L) = N_p I_{ph} - N_p I_{sd}\left[\exp\!\left(\frac{q\,(I_L R_s N_s / N_p + V_L)}{n k T N_s}\right) - 1\right] - \frac{I_L R_s N_s / N_p + V_L}{R_{sh} N_s / N_p} - I_L, \qquad X = \{I_{ph}, I_{sd}, R_s, R_{sh}, n\}
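As an illustration, the SDM residual f_i and the RMSE objective can be sketched in a few lines of pure Python. The parameter vector and the few current–voltage pairs below are illustrative stand-ins (loosely in the spirit of the RTC France cell data cited later), not values produced or reported in this work.

```python
import math

q = 1.60217646e-19  # electron charge (C)
k = 1.3806503e-23   # Boltzmann constant (J/K)
T = 306.15          # 33 degrees C expressed in Kelvin

def sdm_residual(x, I_L, V_L):
    """f_i(X, I_L, V_L) for the SDM: model current minus measured current."""
    I_ph, I_sd, R_s, R_sh, n = x
    V_d = I_L * R_s + V_L
    return (I_ph - V_d / R_sh
            - I_sd * (math.exp(q * V_d / (n * k * T)) - 1.0) - I_L)

def rmse(x, data):
    """Objective: square root of the mean squared residual over N points."""
    errs = [sdm_residual(x, I, V) ** 2 for I, V in data]
    return math.sqrt(sum(errs) / len(errs))

# Illustrative measured (current, voltage) pairs and a candidate solution x
data = [(0.7640, -0.2057), (0.7605, 0.0057), (0.7280, 0.4137), (0.6755, 0.4590)]
x = (0.7608, 3.2e-7, 0.036, 53.7, 1.48)
score = rmse(x, data)
```

In the actual experiments, `rmse` is the fitness function an optimizer such as CEO minimizes over the bounded parameter space.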
In this experiment, the benchmark measured current–voltage data were taken from Easwarakhanthan et al. [60], who used a French RTC commercial silicon solar cell with a diameter of 57 mm (under conditions of 1000 W/m² and 33 °C). The maximum iteration count was set to 1000, the population size was 50, and each algorithm was run independently 30 times. Based on these experimental settings, the root mean square error (RMSE) statistics of the eleven algorithms on the SDM, DDM, and PVMM are presented in Table 15.
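The evaluation protocol above (population 50, 1000 iterations, 30 independent runs, then mean/std of the best RMSE per run) can be sketched as follows. A plain random search stands in for CEO and the other metaheuristics purely to illustrate the bookkeeping, and the toy objective in the usage line is an assumption.

```python
import math
import random
import statistics

def run_trials(objective, bounds, runs=30, pop=50, iters=1000, seed=0):
    """Repeat independent runs of a population-based optimizer and report
    the mean and standard deviation of the best objective value per run.
    A plain random search is used as a stand-in optimizer here."""
    rng = random.Random(seed)
    best_per_run = []
    for _ in range(runs):
        best = math.inf
        for _ in range(iters):
            for _ in range(pop):
                cand = [rng.uniform(lo, hi) for lo, hi in bounds]
                best = min(best, objective(cand))
        best_per_run.append(best)
    return statistics.mean(best_per_run), statistics.stdev(best_per_run)

# Toy RMSE-like objective on a 3-dimensional box, with reduced budget
mean_rmse, std_rmse = run_trials(
    lambda v: math.sqrt(sum(x * x for x in v) / len(v)),
    [(-1.0, 1.0)] * 3, runs=5, pop=20, iters=50)
```

Swapping the stand-in search loop for a real optimizer and the toy objective for the photovoltaic RMSE reproduces the structure of the reported experiment.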
As shown in Table 15, CEO outperformed the vast majority of the compared optimization algorithms, achieving the best mean values on both the SDM and the DDM. Although CEO's local fine-tuning precision on the PVMM is slightly weaker, its results still surpass more than half of the compared algorithms. Additionally, a Friedman test was conducted on the eleven algorithms over the three photovoltaic models.
As indicated in Table 16, the CEO algorithm ranked first among the eleven algorithms, further demonstrating its efficiency in extracting photovoltaic system parameters and confirming that its exploitation and exploration capabilities outperform those of most current optimization algorithms.
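For reference, the average Friedman rank reported in such tables can be computed with a short routine like the following; the score matrix here is hypothetical (lower is better), and tied results receive the mean of the tied ranks, as is standard for the Friedman test.

```python
def average_ranks(scores):
    """Friedman-style average ranks. scores[p][a] is algorithm a's result
    on problem p (lower is better); ties share the mean of their ranks."""
    n_alg = len(scores[0])
    totals = [0.0] * n_alg
    for row in scores:
        order = sorted(range(n_alg), key=lambda a: row[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            j = i
            # extend j over a run of tied values
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1
            mean_rank = (i + j) / 2 + 1  # average of tied 1-based positions
            for p in range(i, j + 1):
                ranks[order[p]] = mean_rank
            i = j + 1
        for a in range(n_alg):
            totals[a] += ranks[a]
    return [t / len(scores) for t in totals]

# Three problems, three algorithms (illustrative RMSE-like values)
ranks = average_ranks([[0.1, 0.3, 0.2], [0.5, 0.5, 0.9], [0.2, 0.4, 0.3]])
```

The overall winner is the algorithm with the smallest average rank, as in Table 16.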
Furthermore, Figure 15, Figure 16 and Figure 17, respectively, compare the CEO test results on the SDM, DDM, and PVMM with the actual measured values. As these figures show, the experimental data obtained by the CEO algorithm fit the measured data almost perfectly.

4. Conclusions

This work proposed CEO, a novel metaheuristic optimization algorithm inspired by cosmic evolution that simulates the formation of stellar systems in the universe. CEO introduces exploration strategies based on cosmic expansion and multiple gravitational forces to drive its search steps, mimicking the early-stage evolution of the universe. By simulating the attraction of celestial bodies toward the largest star, a gravitational alignment mechanism is proposed for local exploitation. Finally, improvement strategies such as multi-stellar system interactions and celestial orbital resonance are introduced to enhance the algorithm's early-stage exploration and avoid local optima.
The superior performance of CEO was validated on 29 CEC2017 unconstrained benchmark tests and 13 CEC2020 real-world constrained optimization problems; in particular, on high-dimensional, non-convex, complex problems, CEO demonstrated fast convergence and high accuracy. The comparison algorithms included WOA, SOA, ETO, RIME, SCA, MFO, NRBO, GJO, EVO, and DBO. The experimental results show that CEO performed excellently on most test functions and in practical applications.
Furthermore, this work verified the engineering application value of CEO through two practical problems: robot path planning and photovoltaic system parameter extraction. In the robot path planning problem, the CEO algorithm not only quickly found the optimal obstacle-avoiding path but also demonstrated superior stability. In the photovoltaic parameter extraction problem, CEO successfully optimized the model parameters and improved the accuracy of the extracted photovoltaic models, confirming the algorithm's practical value. These application cases further demonstrate that CEO has broad potential in real-world engineering. In future research, CEO will be applied to image segmentation and celestial trajectory prediction to explore its potential in more fields.

Author Contributions

Conceptualization, Z.J.; methodology, Z.J. and R.W.; software, Z.J. and R.W.; validation, R.W.; formal analysis, R.W.; resources, G.D.; data curation, R.W.; writing—original draft preparation, R.W. and Z.J.; writing—review and editing, R.W. and G.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors are very grateful to the anonymous reviewers, whose valuable comments and suggestions improved the quality of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Nelder, J.A.; Mead, R. A simplex method for function minimization. Comput. J. 1965, 7, 308–313. [Google Scholar] [CrossRef]
  2. Ibaraki, T.; Ishii, H.; Mine, H. An Assignment Problem On A Network. J. Oper. Res. Soc. Jpn. 1976, 19, 70–90. [Google Scholar] [CrossRef]
  3. Shih, P.C.; Zhang, Y.; Zhou, X. Monitor system and Gaussian perturbation teaching–learning-based optimization algorithm for continuous optimization problems. J. Ambient. Intell. Humaniz. Comput. 2022, 13, 705–720. [Google Scholar] [CrossRef]
  4. Almufti, S.M. Historical survey on metaheuristics algorithms. Int. J. Sci. World 2019, 7, 1. [Google Scholar] [CrossRef]
  5. Li, H.; Li, J.; Wu, P.; You, Y.; Zeng, N. A ranking-system-based switching particle swarm optimizer with dynamic learning strategies. Neurocomputing 2022, 494, 356–367. [Google Scholar] [CrossRef]
  6. Chen, H.; Li, C.; Mafarja, M.; Heidari, A.A.; Chen, Y.; Cai, Z. Slime mould algorithm: A comprehensive review of recent variants and applications. Int. J. Syst. Sci. 2023, 54, 204–235. [Google Scholar] [CrossRef]
  7. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338. [Google Scholar] [CrossRef]
  8. Talbi, E.G. Metaheuristics: From Design to Implementation; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  9. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  10. Glover, F. Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 1986, 13, 533–549. [Google Scholar] [CrossRef]
  11. Rajabi Moshtaghi, H.; Toloie Eshlaghy, A.; Motadel, M.R. A comprehensive review on meta-heuristic algorithms and their classification with novel approach. J. Appl. Res. Ind. Eng. 2021, 8, 63–89. [Google Scholar]
  12. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  13. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  14. Poli, R.; Langdon, W.B.; McPhee, N.F. A Field Guide to Genetic Programming; Lulu.com; Springer: Cham, Switzerland, 2008; 250p, ISBN 978-1-4092-0073-4. [Google Scholar]
  15. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar] [CrossRef]
  16. Robbiano, A.; Tripodi, A.; Conte, F.; Ramis, G.; Rossetti, I. Evolutionary optimization strategies for Liquid-liquid interaction parameters. Fluid Phase Equilibria 2023, 564, 113599. [Google Scholar] [CrossRef]
  17. Rao, R.V.; Savsani, V.; Balic, J. Teaching–learning-based optimization algorithm for unconstrained and constrained real-parameter optimization problems. Eng. Optim. 2012, 44, 1447–1462. [Google Scholar] [CrossRef]
  18. Satapathy, S.; Naik, A. Social group optimization (SGO): A new population evolutionary optimization technique. Complex Intell. Syst. 2016, 2, 173–203. [Google Scholar] [CrossRef]
  19. Xie, X.F.; Zhang, W.J. Solving engineering design problems by social cognitive optimization. In Proceedings of the Genetic and Evolutionary Computation—GECCO 2004: Genetic and Evolutionary Computation Conference, Seattle, WA, USA, 26–30 June 2004; Proceedings Part I. Springer: Cham, Switzerland, 2004; pp. 261–262. [Google Scholar]
  20. Moscato, P.; Cotta, C. A gentle introduction to memetic algorithms. In Handbook of Metaheuristics; Springer: Cham, Switzerland, 2003; pp. 105–144. [Google Scholar]
  21. Cui, Z.; Xu, Y. Social emotional optimisation algorithm with Levy distribution. Int. J. Wirel. Mob. Comput. 2012, 5, 394–400. [Google Scholar] [CrossRef]
  22. Askari, Q.; Younas, I.; Saeed, M. Political Optimizer: A novel socio-inspired meta-heuristic for global optimization. Knowl.-Based Syst. 2020, 195, 105709. [Google Scholar] [CrossRef]
  23. Ayyarao, T.S.; Ramakrishna, N.; Elavarasan, R.M.; Polumahanthi, N.; Rambabu, M.; Saini, G.; Khan, B.; Alatas, B. War strategy optimization algorithm: A new effective metaheuristic algorithm for global optimization. IEEE Access 2022, 10, 25073–25105. [Google Scholar] [CrossRef]
  24. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 1996, 26, 29–41. [Google Scholar] [CrossRef]
  25. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Erciyes University: Kayseri, Türkiye, 2005. [Google Scholar]
  26. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  27. Yang, X.S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Cham, Switzerland, 2010; pp. 65–74. [Google Scholar]
  28. Yang, X.S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 210–214. [Google Scholar]
  29. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  30. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  31. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  32. Farahmandian, M.; Hatamlou, A. Solving optimization problems using black hole algorithm. J. Adv. Comput. Sci. Technol. 2015, 4, 68–74. [Google Scholar] [CrossRef]
  33. Petwal, H.; Rani, R. An improved artificial electric field algorithm for multi-objective optimization. Processes 2020, 8, 584. [Google Scholar] [CrossRef]
  34. Zhao, W.; Wang, L.; Zhang, Z. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowl.-Based Syst. 2019, 163, 283–304. [Google Scholar] [CrossRef]
  35. Patel, V.K.; Savsani, V.J. Heat transfer search (HTS): A novel optimization algorithm. Inf. Sci. 2015, 324, 217–246. [Google Scholar] [CrossRef]
  36. Kaveh, A.; Khayatazad, M. A new meta-heuristic method: Ray optimization. Comput. Struct. 2012, 112, 283–294. [Google Scholar] [CrossRef]
  37. Sun, J.; Feng, B.; Xu, W. Particle swarm optimization with particles having quantum behavior. In Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), Portland, OR, USA, 19–23 June 2004; IEEE: Piscataway, NJ, USA, 2004; Volume 1, pp. 325–331. [Google Scholar]
  38. Shareef, H.; Ibrahim, A.A.; Mutlag, A.H. Lightning search algorithm. Appl. Soft Comput. 2015, 36, 315–333. [Google Scholar] [CrossRef]
  39. Azizi, M.; Aickelin, U.; Khorshidi, H.A.; Baghalzadeh Shishehgarkhaneh, M. Energy valley optimizer: A novel metaheuristic algorithm for global and engineering optimization. Sci. Rep. 2023, 13, 226. [Google Scholar] [CrossRef]
  40. Qais, M.H.; Hasanien, H.M.; Alghuwainem, S.; Loo, K.H. Propagation search algorithm: A physics-based optimizer for engineering applications. Mathematics 2023, 11, 4224. [Google Scholar] [CrossRef]
  41. Cymerys, K.; Oszust, M. Attraction–repulsion optimization algorithm for global optimization problems. Swarm Evol. Comput. 2024, 84, 101459. [Google Scholar] [CrossRef]
  42. Su, H.; Zhao, D.; Heidari, A.A.; Liu, L.; Zhang, X.; Mafarja, M.; Chen, H. RIME: A physics-based optimization. Neurocomputing 2023, 532, 183–214. [Google Scholar] [CrossRef]
  43. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  44. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  45. Linder, E.V. Mapping the cosmological expansion. Rep. Prog. Phys. 2008, 71, 056901. [Google Scholar] [CrossRef]
  46. Schweizer, F. Colliding and merging galaxies. Science 1986, 231, 227–234. [Google Scholar] [CrossRef] [PubMed]
  47. Peale, S. Orbital resonances in the solar system. In Annual Review of Astronomy and Astrophysics; (A76-46826 24-90); Annual Reviews, Inc.: Palo Alto, CA, USA, 1976; Volume 14, pp. 215–246. [Google Scholar]
  48. Wu, G.; Mallipeddi, R.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition and Special Session on Constrained Single Objective Real-Parameter Optimization; Technical Report; Nanyang Technology University: Singapore, 2016; pp. 1–18. [Google Scholar]
  49. Sowmya, R.; Premkumar, M.; Jangir, P. Newton-Raphson-based optimizer: A new population-based metaheuristic algorithm for continuous optimization problems. Eng. Appl. Artif. Intell. 2024, 128, 107532. [Google Scholar] [CrossRef]
  50. Luan, T.M.; Khatir, S.; Tran, M.T.; De Baets, B.; Cuong-Le, T. Exponential-trigonometric optimization algorithm for solving complicated engineering problems. Comput. Methods Appl. Mech. Eng. 2024, 432, 117411. [Google Scholar] [CrossRef]
  51. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  52. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  53. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 2019, 165, 169–196. [Google Scholar] [CrossRef]
  54. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  55. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  56. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  57. Hussain, K.; Mohd Salleh, M.N.; Cheng, S.; Shi, Y. Metaheuristic research: A comprehensive survey. Artif. Intell. Rev. 2019, 52, 2191–2233. [Google Scholar] [CrossRef]
  58. Kumar, A.; Wu, G.; Ali, M.Z.; Mallipeddi, R.; Suganthan, P.N.; Das, S. A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput. 2020, 56, 100693. [Google Scholar] [CrossRef]
  59. Liu, C.; Zhang, D.L.W. Crown Growth Optimizer: An Efficient Bionic Meta-Heuristic Optimizer and Engineering Applications. Mathematics 2024, 12, 2343. [Google Scholar] [CrossRef]
  60. Easwarakhanthan, T.; Bottin, J.; Bouhouch, I.; Boutrit, C. Nonlinear minimization algorithm for determining the solar cell parameters with microcomputers. Int. J. Sol. Energy 1986, 4, 1–12. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of cosmic expansion.
Figure 2. Cosmic expansion velocity V_ep(t) when T = 1000.
Figure 3. Schematic diagram of universal gravitation.
Figure 4. Alignment coefficient a(t) when T = 1000.
Figure 5. Schematic diagram of celestial body movement.
Figure 6. Flowchart of the CEO algorithm.
Figure 7. Results of CEO on seven CEC2017 benchmark tests.
Figure 8. Convergence plots of CEO and other algorithms on CEC2017 (F1–F19).
Figure 9. Convergence plots of CEO and other algorithms on CEC2017 (F20–F30).
Figure 10. Exploitation and exploration percentages of CEO on four CEC2017 unconstrained benchmark tests.
Figure 11. The 2D space for robot path planning.
Figure 12. Optimal paths of CEO and other algorithms.
Figure 13. Convergence curves of CEO and other algorithms in robot path planning.
Figure 14. Equivalent circuit diagrams for photovoltaic cells.
Figure 15. Comparisons between measured data and experimental data attained by CEO for the SDM.
Figure 16. Comparisons between measured data and experimental data attained by CEO for the DDM.
Figure 17. Comparisons between measured data and experimental data attained by CEO for the PVMM.
Table 1. Classification of CEC2017 benchmark functions.
Number | Type
F1, F3 | Unimodal problem
F4–F10 | Simple multimodal problem
F11–F20 | Mixed problem
F21–F30 | Combination problem
Table 2. Details of the IEEE CEC2017 test suite.
Type | Function | Name | f_min
Unimodal Functions | F1 | Shifted and Rotated Bent Cigar Function | 100
 | F2 | Shifted and Rotated Sum of Different Power Function | 200
 | F3 | Shifted and Rotated Zakharov Function | 300
Multimodal Functions | F4 | Shifted and Rotated Rosenbrock's Function | 400
 | F5 | Shifted and Rotated Rastrigin Function | 500
 | F6 | Shifted and Rotated Expanded Schaffer's F6 Function | 600
 | F7 | Shifted and Rotated Lunacek Bi-Rastrigin Function | 700
 | F8 | Shifted and Rotated Non-Continuous Rastrigin Function | 800
 | F9 | Shifted and Rotated Levy Function | 900
 | F10 | Shifted and Rotated Schwefel's Function | 1000
Hybrid Functions | F11 | Hybrid Function 1 (N = 3) | 1100
 | F12 | Hybrid Function 2 (N = 3) | 1200
 | F13 | Hybrid Function 3 (N = 3) | 1300
 | F14 | Hybrid Function 4 (N = 4) | 1400
 | F15 | Hybrid Function 5 (N = 4) | 1500
 | F16 | Hybrid Function 6 (N = 4) | 1600
 | F17 | Hybrid Function 7 (N = 5) | 1700
 | F18 | Hybrid Function 8 (N = 5) | 1800
 | F19 | Hybrid Function 9 (N = 5) | 1900
 | F20 | Hybrid Function 10 (N = 6) | 2000
Composition Functions | F21 | Composition Function 1 (N = 3) | 2100
 | F22 | Composition Function 2 (N = 3) | 2200
 | F23 | Composition Function 3 (N = 4) | 2300
 | F24 | Composition Function 4 (N = 4) | 2400
 | F25 | Composition Function 5 (N = 5) | 2500
 | F26 | Composition Function 6 (N = 5) | 2600
 | F27 | Composition Function 7 (N = 6) | 2700
 | F28 | Composition Function 8 (N = 6) | 2800
 | F29 | Composition Function 9 (N = 3) | 2900
 | F30 | Composition Function 10 (N = 3) | 3000
Table 3. Parameter settings of compared algorithms.
Algorithm | Abbr. | Key Parameters | Reference | Year
Cosmic Evolution Optimization | CEO | W_1 = 0.1, P_base = 0.2, a = 0.7 | - | -
Newton–Raphson-based optimizer | NRBO | DF = 0.6 | [49] | 2024
Exponential–trigonometric optimization | ETO | a = 1.55 | [50] | 2024
Energy valley optimizer | EVO | a = 3, b = 2 | [39] | 2023
Dung beetle optimizer | DBO | P = 0.2 | [51] | 2023
RIME | RIME | DDF = [0.35, 0.6] | [42] | 2023
Golden jackal optimization | GJO | a = 0.1, b = 0.05 | [52] | 2022
Seagull optimization algorithm | SOA | a = 2, b = 1 | [53] | 2019
Whale optimization algorithm | WOA | a = [2, 0], b = 1 | [29] | 2016
Sine cosine algorithm | SCA | a = 2 | [54] | 2016
Moth–flame optimization algorithm | MFO | a = [−2, −1], b = 1 | [55] | 2015
Table 4. Results of CEO and compared algorithms on the CEC2017 30-dimensional benchmark functions.
Function | Metric | WOA | SOA | ETO | RIME | SCA | MFO | NRBO | GJO | EVO | DBO | CEO
F1 | mean | 2.95E+09 | 3.75E+10 | 2.42E+10 | 4.41E+06 | 5.48E+10 | 3.22E+10 | 4.48E+10 | 3.27E+10 | 3.45E+08 | 1.71E+09 | 3.51E+06
F1 | std | 1.08E+09 | 6.77E+09 | 4.73E+09 | 1.52E+06 | 8.71E+09 | 1.35E+10 | 6.87E+09 | 7.14E+09 | 5.64E+08 | 7.04E+09 | 4.01E+05
F3 | mean | 2.23E+05 | 1.24E+05 | 1.25E+05 | 8.43E+04 | 1.58E+05 | 3.17E+05 | 1.25E+05 | 1.13E+05 | 2.68E+05 | 2.12E+05 | 1.02E+05
F3 | std | 7.18E+04 | 1.50E+04 | 2.20E+04 | 2.03E+04 | 1.78E+04 | 7.16E+04 | 1.79E+04 | 1.56E+04 | 7.94E+04 | 3.24E+04 | 1.81E+04
F4 | mean | 1.54E+03 | 4.33E+03 | 3.80E+03 | 6.22E+02 | 9.94E+03 | 3.10E+03 | 6.24E+03 | 5.09E+03 | 8.24E+02 | 8.40E+02 | 5.56E+02
F4 | std | 3.15E+02 | 1.34E+03 | 1.50E+03 | 4.55E+01 | 1.91E+03 | 1.55E+03 | 1.93E+03 | 1.72E+03 | 1.40E+02 | 1.51E+02 | 4.30E+01
F5 | mean | 1.01E+03 | 9.30E+02 | 9.98E+02 | 6.77E+02 | 1.10E+03 | 9.56E+02 | 1.09E+03 | 9.00E+02 | 9.15E+02 | 9.66E+02 | 6.03E+02
F5 | std | 8.59E+01 | 4.42E+01 | 4.25E+01 | 3.73E+01 | 3.20E+01 | 6.19E+01 | 4.04E+01 | 5.41E+01 | 1.13E+02 | 8.79E+01 | 1.66E+01
F6 | mean | 6.89E+02 | 6.64E+02 | 6.71E+02 | 6.13E+02 | 6.77E+02 | 6.54E+02 | 6.88E+02 | 6.52E+02 | 6.65E+02 | 6.61E+02 | 6.02E+02
F6 | std | 1.37E+01 | 6.95E+00 | 9.05E+00 | 5.18E+00 | 6.63E+00 | 1.04E+01 | 8.00E+00 | 1.29E+01 | 1.05E+01 | 9.25E+00 | 9.77E−01
F7 | mean | 1.76E+03 | 1.57E+03 | 1.49E+03 | 9.91E+02 | 1.77E+03 | 1.76E+03 | 1.74E+03 | 1.39E+03 | 1.58E+03 | 1.27E+03 | 8.57E+02
F7 | std | 1.13E+02 | 1.08E+02 | 8.35E+01 | 4.12E+01 | 1.03E+02 | 4.53E+02 | 1.23E+02 | 7.86E+01 | 1.41E+02 | 1.42E+02 | 1.97E+01
F8 | mean | 1.34E+03 | 1.24E+03 | 1.29E+03 | 9.89E+02 | 1.41E+03 | 1.26E+03 | 1.42E+03 | 1.23E+03 | 1.24E+03 | 1.26E+03 | 9.00E+02
F8 | std | 9.33E+01 | 5.16E+01 | 4.11E+01 | 4.00E+01 | 3.19E+01 | 7.68E+01 | 4.24E+01 | 6.41E+01 | 1.02E+02 | 9.47E+01 | 2.15E+01
F9 | mean | 2.95E+04 | 2.05E+04 | 3.01E+04 | 5.31E+03 | 2.64E+04 | 1.82E+04 | 2.45E+04 | 2.03E+04 | 2.74E+04 | 1.54E+04 | 1.10E+03
F9 | std | 8.55E+03 | 4.99E+03 | 6.57E+03 | 2.98E+03 | 6.38E+03 | 5.24E+03 | 4.09E+03 | 5.08E+03 | 8.53E+03 | 4.03E+03 | 5.60E+02
F10 | mean | 1.20E+04 | 1.24E+04 | 1.33E+04 | 7.39E+03 | 1.50E+04 | 8.64E+03 | 1.31E+04 | 1.17E+04 | 1.19E+04 | 9.52E+03 | 7.39E+03
F10 | std | 1.39E+03 | 1.13E+03 | 1.03E+03 | 7.84E+02 | 4.58E+02 | 1.09E+03 | 8.74E+02 | 2.48E+03 | 2.20E+03 | 1.22E+03 | 1.26E+03
F11 | mean | 3.33E+03 | 7.01E+03 | 6.16E+03 | 1.51E+03 | 9.13E+03 | 1.60E+04 | 7.91E+03 | 8.41E+03 | 7.17E+03 | 2.14E+03 | 1.42E+03
F11 | std | 7.02E+02 | 2.09E+03 | 2.36E+03 | 7.77E+01 | 1.78E+03 | 1.22E+04 | 2.24E+03 | 2.48E+03 | 9.13E+03 | 6.72E+02 | 6.79E+01
F12 | mean | 8.21E+08 | 6.41E+09 | 5.78E+09 | 9.27E+07 | 1.79E+10 | 3.13E+09 | 1.14E+10 | 7.03E+09 | 4.80E+08 | 4.24E+08 | 2.73E+07
F12 | std | 4.19E+08 | 3.87E+09 | 3.86E+09 | 4.71E+07 | 4.16E+09 | 2.00E+09 | 3.82E+09 | 3.87E+09 | 2.56E+08 | 4.34E+08 | 1.30E+07
F13 | mean | 2.51E+07 | 1.57E+09 | 1.09E+09 | 2.63E+05 | 3.98E+09 | 5.35E+08 | 2.44E+09 | 1.71E+09 | 1.57E+07 | 1.92E+07 | 5.13E+05
F13 | std | 2.05E+07 | 1.69E+09 | 2.20E+09 | 2.02E+05 | 1.58E+09 | 1.00E+09 | 1.10E+09 | 2.75E+09 | 3.61E+07 | 2.19E+07 | 1.39E+05
F14 | mean | 3.13E+06 | 1.43E+06 | 1.64E+06 | 2.73E+05 | 4.52E+06 | 2.38E+06 | 1.59E+06 | 1.52E+06 | 3.37E+06 | 1.61E+06 | 7.02E+04
F14 | std | 2.21E+06 | 9.92E+05 | 1.29E+06 | 1.42E+05 | 3.05E+06 | 3.46E+06 | 1.52E+06 | 1.50E+06 | 2.77E+06 | 2.15E+06 | 4.58E+04
F15 | mean | 4.04E+06 | 8.88E+07 | 3.51E+07 | 6.43E+04 | 7.42E+08 | 2.11E+07 | 1.51E+08 | 3.29E+08 | 2.34E+06 | 1.23E+07 | 1.64E+05
F15 | std | 5.29E+06 | 1.87E+08 | 3.11E+07 | 4.45E+04 | 2.95E+08 | 7.81E+07 | 9.51E+07 | 4.91E+08 | 6.48E+06 | 4.25E+07 | 4.82E+04
F16 | mean | 3.97E+03 | 3.06E+03 | 3.08E+03 | 2.51E+03 | 3.88E+03 | 3.14E+03 | 3.62E+03 | 2.93E+03 | 3.35E+03 | 3.08E+03 | 2.36E+03
F16 | std | 5.36E+02 | 2.64E+02 | 3.90E+02 | 2.82E+02 | 2.04E+02 | 3.68E+02 | 3.98E+02 | 4.11E+02 | 4.84E+02 | 4.89E+02 | 2.04E+02
F17 | mean | 2.65E+03 | 2.29E+03 | 2.29E+03 | 2.07E+03 | 2.62E+03 | 2.39E+03 | 2.55E+03 | 2.15E+03 | 2.32E+03 | 2.42E+03 | 1.90E+03
F17 | std | 2.23E+02 | 2.41E+02 | 1.83E+02 | 1.70E+02 | 1.78E+02 | 2.46E+02 | 2.27E+02 | 1.78E+02 | 2.77E+02 | 2.75E+02 | 8.37E+01
F18 | mean | 6.71E+06 | 3.28E+06 | 1.91E+06 | 5.42E+05 | 8.35E+06 | 2.79E+06 | 9.72E+05 | 1.51E+06 | 2.82E+06 | 1.69E+06 | 2.64E+05
F18 | std | 6.20E+06 | 2.87E+06 | 1.62E+06 | 4.76E+05 | 5.67E+06 | 3.45E+06 | 1.49E+06 | 1.80E+06 | 2.88E+06 | 2.41E+06 | 1.84E+05
F19 | mean | 9.39E+06 | 7.06E+06 | 3.58E+06 | 1.69E+04 | 5.81E+07 | 6.57E+05 | 4.45E+06 | 2.13E+07 | 3.08E+06 | 1.89E+06 | 4.72E+05
F19 | std | 8.46E+06 | 9.08E+06 | 8.10E+06 | 1.46E+04 | 2.65E+07 | 1.44E+06 | 5.71E+06 | 6.73E+07 | 3.45E+06 | 2.81E+06 | 4.59E+05
F20 | mean | 2.83E+03 | 2.64E+03 | 2.66E+03 | 2.39E+03 | 2.86E+03 | 2.68E+03 | 2.70E+03 | 2.49E+03 | 2.65E+03 | 2.62E+03 | 2.30E+03
F20 | std | 2.29E+02 | 1.90E+02 | 2.01E+02 | 1.76E+02 | 1.46E+02 | 2.24E+02 | 1.73E+02 | 1.60E+02 | 1.95E+02 | 2.40E+02 | 8.48E+01
F21 | mean | 2.57E+03 | 2.49E+03 | 2.53E+03 | 2.39E+03 | 2.58E+03 | 2.48E+03 | 2.57E+03 | 2.46E+03 | 2.48E+03 | 2.52E+03 | 2.35E+03
F21 | std | 5.40E+01 | 3.65E+01 | 3.27E+01 | 2.18E+01 | 2.56E+01 | 4.37E+01 | 4.60E+01 | 3.58E+01 | 6.01E+01 | 4.58E+01 | 1.69E+01
F22 | mean | 7.51E+03 | 8.19E+03 | 8.10E+03 | 4.05E+03 | 8.36E+03 | 6.55E+03 | 4.93E+03 | 5.68E+03 | 3.65E+03 | 5.69E+03 | 3.88E+03
F22 | std | 1.69E+03 | 1.05E+03 | 2.14E+03 | 1.81E+03 | 2.41E+03 | 1.38E+03 | 2.02E+03 | 2.39E+03 | 2.60E+03 | 1.85E+03 | 2.12E+03
F23 | mean | 3.10E+03 | 2.84E+03 | 2.97E+03 | 2.75E+03 | 3.03E+03 | 2.82E+03 | 3.03E+03 | 2.89E+03 | 2.91E+03 | 2.91E+03 | 2.70E+03
F23 | std | 9.80E+01 | 4.24E+01 | 4.93E+01 | 2.56E+01 | 3.63E+01 | 3.33E+01 | 6.97E+01 | 5.52E+01 | 7.41E+01 | 7.07E+01 | 1.42E+01
F24 | mean | 3.22E+03 | 3.00E+03 | 3.18E+03 | 2.91E+03 | 3.21E+03 | 2.97E+03 | 3.20E+03 | 3.04E+03 | 3.06E+03 | 3.06E+03 | 2.87E+03
F24 | std | 9.52E+01 | 3.83E+01 | 5.37E+01 | 2.89E+01 | 3.92E+01 | 2.82E+01 | 6.61E+01 | 6.99E+01 | 8.19E+01 | 6.16E+01 | 2.04E+01
F25 | mean | 3.04E+03 | 3.17E+03 | 3.12E+03 | 2.90E+03 | 3.38E+03 | 3.30E+03 | 3.25E+03 | 3.16E+03 | 3.00E+03 | 2.95E+03 | 2.89E+03
F25 | std | 5.03E+01 | 1.06E+02 | 7.67E+01 | 9.04E+00 | 1.92E+02 | 4.02E+02 | 9.85E+01 | 2.06E+02 | 8.70E+01 | 4.89E+01 | 2.47E+00
F26 | mean | 7.48E+03 | 5.67E+03 | 6.48E+03 | 4.59E+03 | 7.41E+03 | 5.71E+03 | 7.13E+03 | 5.72E+03 | 5.61E+03 | 6.58E+03 | 3.86E+03
F26 | std | 1.04E+03 | 3.11E+02 | 6.36E+02 | 3.81E+02 | 6.01E+02 | 4.46E+02 | 1.17E+03 | 4.93E+02 | 1.75E+03 | 8.20E+02 | 5.32E+02
F27 | mean | 3.43E+03 | 3.29E+03 | 3.46E+03 | 3.23E+03 | 3.48E+03 | 3.25E+03 | 3.39E+03 | 3.34E+03 | 3.35E+03 | 3.28E+03 | 3.22E+03
F27 | std | 1.22E+02 | 4.21E+01 | 9.82E+01 | 1.22E+01 | 6.83E+01 | 2.41E+01 | 5.49E+01 | 4.71E+01 | 6.20E+01 | 3.56E+01 | 1.14E+01
F28 | mean | 3.45E+03 | 5.38E+03 | 3.66E+03 | 3.28E+03 | 4.12E+03 | 4.09E+03 | 3.91E+03 | 3.77E+03 | 3.38E+03 | 3.39E+03 | 3.22E+03
F28 | std | 8.09E+01 | 1.37E+03 | 2.39E+02 | 4.06E+01 | 2.52E+02 | 7.83E+02 | 2.84E+02 | 2.95E+02 | 5.48E+01 | 1.75E+02 | 2.46E+01
F29 | mean | 5.13E+03 | 4.41E+03 | 4.29E+03 | 3.84E+03 | 4.92E+03 | 4.17E+03 | 4.93E+03 | 4.17E+03 | 4.66E+03 | 4.17E+03 | 3.64E+03
F29 | std | 4.53E+02 | 2.55E+02 | 2.78E+02 | 1.97E+02 | 2.69E+02 | 2.37E+02 | 4.07E+02 | 2.52E+02 | 4.89E+02 | 3.80E+02 | 1.28E+02
F30 | mean | 3.72E+07 | 2.72E+07 | 1.20E+07 | 1.72E+05 | 1.46E+08 | 6.25E+05 | 3.62E+07 | 2.77E+07 | 1.33E+07 | 3.10E+06 | 2.03E+06
F30 | std | 2.77E+07 | 2.31E+07 | 1.10E+07 | 1.22E+05 | 5.00E+07 | 1.07E+06 | 2.52E+07 | 1.95E+07 | 8.72E+06 | 6.30E+06 | 1.20E+06
Bold values indicate the best results in each row.
Table 5. Results of CEO and compared algorithms on the CEC2017 50-dimensional benchmark functions.
Function | Metric | WOA | SOA | ETO | RIME | SCA | MFO | NRBO | GJO | EVO | DBO | CEO
F1 | mean | 2.95E+09 | 3.75E+10 | 2.42E+10 | 4.41E+06 | 5.48E+10 | 3.22E+10 | 4.48E+10 | 3.27E+10 | 3.45E+08 | 1.71E+09 | 3.51E+06
F1 | std | 1.08E+09 | 6.77E+09 | 4.73E+09 | 1.52E+06 | 8.71E+09 | 1.35E+10 | 6.87E+09 | 7.14E+09 | 5.64E+08 | 7.04E+09 | 4.01E+05
F3 | mean | 2.23E+05 | 1.24E+05 | 1.25E+05 | 8.43E+04 | 1.58E+05 | 3.17E+05 | 1.25E+05 | 1.13E+05 | 2.68E+05 | 2.12E+05 | 1.02E+05
F3 | std | 7.18E+04 | 1.50E+04 | 2.20E+04 | 2.03E+04 | 1.78E+04 | 7.16E+04 | 1.79E+04 | 1.56E+04 | 7.94E+04 | 3.24E+04 | 1.81E+04
F4 | mean | 1.54E+03 | 4.33E+03 | 3.80E+03 | 6.22E+02 | 9.94E+03 | 3.10E+03 | 6.24E+03 | 5.09E+03 | 8.24E+02 | 8.40E+02 | 5.56E+02
F4 | std | 3.15E+02 | 1.34E+03 | 1.50E+03 | 4.55E+01 | 1.91E+03 | 1.55E+03 | 1.93E+03 | 1.72E+03 | 1.40E+02 | 1.51E+02 | 4.30E+01
F5 | mean | 1.01E+03 | 9.30E+02 | 9.98E+02 | 6.77E+02 | 1.10E+03 | 9.56E+02 | 1.09E+03 | 9.00E+02 | 9.15E+02 | 9.66E+02 | 6.03E+02
F5 | std | 8.59E+01 | 4.42E+01 | 4.25E+01 | 3.73E+01 | 3.20E+01 | 6.19E+01 | 4.04E+01 | 5.41E+01 | 1.13E+02 | 8.79E+01 | 1.66E+01
F6 | mean | 6.89E+02 | 6.64E+02 | 6.71E+02 | 6.13E+02 | 6.77E+02 | 6.54E+02 | 6.88E+02 | 6.52E+02 | 6.65E+02 | 6.61E+02 | 6.02E+02
F6 | std | 1.37E+01 | 6.95E+00 | 9.05E+00 | 5.18E+00 | 6.63E+00 | 1.04E+01 | 8.00E+00 | 1.29E+01 | 1.05E+01 | 9.25E+00 | 9.77E−01
F7 | mean | 1.76E+03 | 1.57E+03 | 1.49E+03 | 9.91E+02 | 1.77E+03 | 1.76E+03 | 1.74E+03 | 1.39E+03 | 1.58E+03 | 1.27E+03 | 8.57E+02
F7 | std | 1.13E+02 | 1.08E+02 | 8.35E+01 | 4.12E+01 | 1.03E+02 | 4.53E+02 | 1.23E+02 | 7.86E+01 | 1.41E+02 | 1.42E+02 | 1.97E+01
F8 | mean | 1.34E+03 | 1.24E+03 | 1.29E+03 | 9.89E+02 | 1.41E+03 | 1.26E+03 | 1.42E+03 | 1.23E+03 | 1.24E+03 | 1.26E+03 | 9.00E+02
F8 | std | 9.33E+01 | 5.16E+01 | 4.11E+01 | 4.00E+01 | 3.19E+01 | 7.68E+01 | 4.24E+01 | 6.41E+01 | 1.02E+02 | 9.47E+01 | 2.15E+01
F9 | mean | 2.95E+04 | 2.05E+04 | 3.01E+04 | 5.31E+03 | 2.64E+04 | 1.82E+04 | 2.45E+04 | 2.03E+04 | 2.74E+04 | 1.54E+04 | 1.10E+03
F9 | std | 8.55E+03 | 4.99E+03 | 6.57E+03 | 2.98E+03 | 6.38E+03 | 5.24E+03 | 4.09E+03 | 5.08E+03 | 8.53E+03 | 4.03E+03 | 5.60E+02
F10 | mean | 1.20E+04 | 1.24E+04 | 1.33E+04 | 7.39E+03 | 1.50E+04 | 8.64E+03 | 1.31E+04 | 1.17E+04 | 1.19E+04 | 9.52E+03 | 7.65E+03
F10 | std | 1.39E+03 | 1.13E+03 | 1.03E+03 | 7.84E+02 | 4.58E+02 | 1.09E+03 | 8.74E+02 | 2.48E+03 | 2.20E+03 | 1.22E+03 | 1.26E+03
F11 | mean | 3.33E+03 | 7.01E+03 | 6.16E+03 | 1.51E+03 | 9.13E+03 | 1.60E+04 | 7.91E+03 | 8.41E+03 | 7.17E+03 | 2.14E+03 | 1.42E+03
F11 | std | 7.02E+02 | 2.09E+03 | 2.36E+03 | 7.77E+01 | 1.78E+03 | 1.22E+04 | 2.24E+03 | 2.48E+03 | 9.13E+03 | 6.72E+02 | 6.79E+01
F12 | mean | 8.21E+08 | 6.41E+09 | 5.78E+09 | 9.27E+07 | 1.79E+10 | 3.13E+09 | 1.14E+10 | 7.03E+09 | 4.80E+08 | 4.24E+08 | 2.73E+07
F12 | std | 4.19E+08 | 3.87E+09 | 3.86E+09 | 4.71E+07 | 4.16E+09 | 2.00E+09 | 3.82E+09 | 3.87E+09 | 2.56E+08 | 4.34E+08 | 1.30E+07
F13 | mean | 2.51E+07 | 1.57E+09 | 1.09E+09 | 2.63E+05 | 3.98E+09 | 5.35E+08 | 2.44E+09 | 1.71E+09 | 1.57E+07 | 1.92E+07 | 5.13E+05
F13 | std | 2.05E+07 | 1.69E+09 | 2.20E+09 | 2.02E+05 | 1.58E+09 | 1.00E+09 | 1.10E+09 | 2.75E+09 | 3.61E+07 | 2.19E+07 | 1.39E+05
F14 | mean | 3.13E+06 | 1.43E+06 | 1.64E+06 | 2.73E+05 | 4.52E+06 | 2.38E+06 | 1.59E+06 | 1.52E+06 | 3.37E+06 | 1.61E+06 | 7.02E+04
F14 | std | 2.21E+06 | 9.92E+05 | 1.29E+06 | 1.42E+05 | 3.05E+06 | 3.46E+06 | 1.52E+06 | 1.50E+06 | 2.77E+06 | 2.15E+06 | 4.58E+04
F15 | mean | 4.04E+06 | 8.88E+07 | 3.51E+07 | 6.43E+04 | 7.42E+08 | 2.11E+07 | 1.51E+08 | 3.29E+08 | 2.34E+06 | 1.23E+07 | 1.64E+05
F15 | std | 5.29E+06 | 1.87E+08 | 3.11E+07 | 4.45E+04 | 2.95E+08 | 7.81E+07 | 9.51E+07 | 4.91E+08 | 6.48E+06 | 4.25E+07 | 4.82E+04
F16 | mean | 5.76E+03 | 4.10E+03 | 4.43E+03 | 3.36E+03 | 6.00E+03 | 4.36E+03 | 5.51E+03 | 3.87E+03 | 4.31E+03 | 4.52E+03 | 2.79E+03
F16 | std | 1.07E+03 | 4.25E+02 | 5.10E+02 | 4.86E+02 | 3.37E+02 | 6.13E+02 | 3.86E+02 | 6.23E+02 | 5.36E+02 | 5.34E+02 | 3.07E+02
F17 | mean | 4.26E+03 | 3.76E+03 | 3.62E+03 | 3.20E+03 | 4.70E+03 | 3.88E+03 | 4.28E+03 | 3.38E+03 | 3.74E+03 | 4.10E+03 | 2.71E+03
F17 | std | 5.38E+02 | 4.40E+02 | 3.66E+02 | 3.03E+02 | 2.15E+02 | 3.76E+02 | 4.05E+02 | 3.84E+02 | 3.75E+02 | 4.08E+02 | 2.08E+02
F18 | mean | 3.22E+07 | 8.24E+06 | 9.34E+06 | 3.23E+06 | 3.82E+07 | 1.11E+07 | 9.31E+06 | 1.48E+07 | 1.60E+07 | 5.04E+06 | 1.13E+06
F18 | std | 2.47E+07 | 7.45E+06 | 1.13E+07 | 2.03E+06 | 2.30E+07 | 1.68E+07 | 6.51E+06 | 2.01E+07 | 1.67E+07 | 5.25E+06 | 7.38E+05
F19 | mean | 7.19E+06 | 4.65E+07 | 1.81E+07 | 4.90E+04 | 4.21E+08 | 1.50E+07 | 9.40E+07 | 2.28E+08 | 5.60E+06 | 3.45E+06 | 1.84E+06
F19 | std | 7.10E+06 | 8.03E+07 | 1.01E+07 | 4.22E+04 | 1.63E+08 | 4.50E+07 | 5.79E+07 | 3.05E+08 | 5.89E+06 | 3.22E+06 | 1.10E+06
F20 | mean | 3.76E+03 | 3.66E+03 | 3.64E+03 | 3.14E+03 | 4.16E+03 | 3.47E+03 | 3.77E+03 | 3.39E+03 | 3.68E+03 | 3.57E+03 | 2.91E+03
F20 | std | 3.01E+02 | 3.94E+02 | 3.02E+02 | 2.09E+02 | 1.63E+02 | 4.38E+02 | 3.10E+02 | 4.43E+02 | 4.22E+02 | 3.19E+02 | 3.19E+02
F21 | mean | 2.96E+03 | 2.72E+03 | 2.84E+03 | 2.49E+03 | 2.92E+03 | 2.71E+03 | 2.93E+03 | 2.69E+03 | 2.68E+03 | 2.82E+03 | 2.40E+03
F21 | std | 1.06E+02 | 6.55E+01 | 6.45E+01 | 3.55E+01 | 4.10E+01 | 8.47E+01 | 5.99E+01 | 6.70E+01 | 1.05E+02 | 8.30E+01 | 2.33E+01
F22 | mean | 1.31E+04 | 1.40E+04 | 1.53E+04 | 9.52E+03 | 1.67E+04 | 1.05E+04 | 1.49E+04 | 1.29E+04 | 1.33E+04 | 1.09E+04 | 9.48E+03
F22 | std | 1.51E+03 | 9.83E+02 | 1.08E+03 | 8.99E+02 | 4.01E+02 | 1.09E+03 | 1.24E+03 | 2.83E+03 | 3.80E+03 | 1.43E+03 | 8.99E+02
F23 | mean | 3.74E+03 | 3.21E+03 | 3.47E+03 | 2.97E+03 | 3.60E+03 | 3.14E+03 | 3.55E+03 | 3.24E+03 | 3.35E+03 | 3.36E+03 | 2.84E+03
F23 | std | 2.00E+02 | 6.44E+01 | 7.41E+01 | 5.11E+01 | 4.82E+01 | 6.18E+01 | 1.30E+02 | 8.61E+01 | 1.35E+02 | 1.16E+02 | 3.56E+01
F24 | mean | 3.80E+03 | 3.29E+03 | 3.73E+03 | 3.10E+03 | 3.80E+03 | 3.22E+03 | 3.68E+03 | 3.45E+03 | 3.50E+03 | 3.46E+03 | 3.01E+03
F24 | std | 1.60E+02 | 5.53E+01 | 7.60E+01 | 5.26E+01 | 7.56E+01 | 6.25E+01 | 1.25E+02 | 9.91E+01 | 1.50E+02 | 9.80E+01 | 2.52E+01
F25 | mean | 3.78E+03 | 5.64E+03 | 5.10E+03 | 3.10E+03 | 7.69E+03 | 5.23E+03 | 6.33E+03 | 5.36E+03 | 3.33E+03 | 3.23E+03 | 3.04E+03
F25 | std | 2.31E+02 | 7.98E+02 | 7.72E+02 | 3.55E+01 | 8.90E+02 | 2.19E+03 | 8.54E+02 | 7.44E+02 | 3.40E+02 | 1.77E+02 | 2.38E+01
F26 | mean | 1.40E+04 | 8.40E+03 | 1.06E+04 | 6.02E+03 | 1.28E+04 | 8.66E+03 | 1.22E+04 | 9.16E+03 | 1.07E+04 | 1.00E+04 | 4.69E+03
F26 | std | 1.69E+03 | 6.61E+02 | 6.96E+02 | 4.67E+02 | 7.90E+02 | 7.41E+02 | 1.87E+03 | 9.39E+02 | 2.50E+03 | 1.54E+03 | 4.96E+02
F27 | mean | 4.56E+03 | 3.87E+03 | 4.35E+03 | 3.50E+03 | 4.69E+03 | 3.58E+03 | 4.26E+03 | 4.01E+03 | 4.07E+03 | 3.82E+03 | 3.33E+03
F27 | std | 4.23E+02 | 1.57E+02 | 1.87E+02 | 1.03E+02 | 1.48E+02 | 1.10E+02 | 2.63E+02 | 1.52E+02 | 2.63E+02 | 1.84E+02 | 4.21E+01
F28 | mean | 4.55E+03 | 8.18E+03 | 5.35E+03 | 3.35E+03 | 7.76E+03 | 7.76E+03 | 6.61E+03 | 5.73E+03 | 3.68E+03 | 5.02E+03 | 3.28E+03
F28 | std | 3.40E+02 | 1.64E+03 | 5.50E+02 | 3.85E+01 | 7.13E+02 | 1.50E+03 | 6.57E+02 | 4.84E+02 | 2.56E+02 | 2.11E+03 | 2.19E+01
F29 | mean | 8.40E+03 | 6.48E+03 | 6.43E+03 | 4.72E+03 | 8.25E+03 | 5.37E+03 | 7.61E+03 | 6.03E+03 | 6.73E+03 | 5.83E+03 | 4.13E+03
F29 | std | 1.45E+03 | 8.39E+02 | 5.90E+02 | 4.29E+02 | 9.85E+02 | 5.65E+02 | 8.53E+02 | 5.30E+02 | 9.61E+02 | 6.27E+02 | 2.41E+02
F30 | mean | 2.32E+08 | 2.56E+08 | 2.36E+08 | 2.35E+07 | 8.90E+08 | 7.24E+07 | 4.88E+08 | 4.06E+08 | 2.28E+08 | 2.45E+07 | 5.05E+07
F30 | std | 9.15E+07 | 1.00E+08 | 1.18E+08 | 6.43E+06 | 3.44E+08 | 1.41E+08 | 2.06E+08 | 2.14E+08 | 8.61E+07 | 2.11E+07 | 6.43E+06
Bold values indicate the best results in each row.
Table 6. Results of CEO and compared algorithms on the CEC2017 100-dimensional benchmark functions.
Function  Metric  WOA  SOA  ETO  RIME  SCA  MFO  NRBO  GJO  EVO  DBO  CEO
F1  mean  3.65E+10  1.45E+11  1.06E+11  1.01E+08  1.92E+11  1.29E+11  1.53E+11  1.29E+11  4.49E+09  7.37E+10  1.35E+07
F1  std  6.08E+09  1.24E+10  1.00E+10  2.17E+07  1.05E+10  4.08E+10  1.06E+10  1.07E+10  1.16E+10  7.40E+10  1.14E+06
F3  mean  9.23E+05  3.48E+05  3.34E+05  4.94E+05  4.67E+05  8.88E+05  3.18E+05  2.98E+05  6.05E+05  5.37E+05  3.82E+05
F3  std  1.59E+05  2.87E+04  2.86E+04  4.24E+04  4.39E+04  1.33E+05  3.61E+04  1.69E+04  2.00E+05  2.03E+05  6.06E+04
F4  mean  7.02E+03  1.96E+04  1.49E+04  9.33E+02  4.12E+04  2.27E+04  2.39E+04  1.59E+04  1.90E+03  4.84E+03  6.86E+02
F4  std  1.40E+03  3.85E+03  3.58E+03  6.21E+01  6.73E+03  1.38E+04  4.80E+03  3.19E+03  1.63E+03  5.99E+03  3.64E+01
F5  mean  1.80E+03  1.62E+03  1.77E+03  1.06E+03  1.98E+03  1.81E+03  1.95E+03  1.53E+03  1.61E+03  1.69E+03  7.79E+02
F5  std  1.26E+02  6.82E+01  7.94E+01  7.89E+01  4.95E+01  1.40E+02  6.97E+01  7.91E+01  1.60E+02  2.06E+02  4.92E+01
F6  mean  7.02E+02  6.83E+02  6.86E+02  6.37E+02  6.99E+02  6.76E+02  7.01E+02  6.69E+02  6.87E+02  6.77E+02  6.12E+02
F6  std  1.09E+01  5.43E+00  9.78E+00  5.27E+00  5.87E+00  6.67E+00  4.93E+00  5.26E+00  8.84E+00  9.10E+00  7.77E+00
F7  mean  3.60E+03  3.32E+03  3.04E+03  1.70E+03  3.85E+03  4.68E+03  3.60E+03  2.88E+03  3.31E+03  2.64E+03  1.16E+03
F7  std  1.33E+02  1.38E+02  1.56E+02  1.38E+02  2.35E+02  7.32E+02  2.11E+02  1.51E+02  3.60E+02  4.68E+02  4.89E+01
F8  mean  2.23E+03  1.96E+03  2.13E+03  1.38E+03  2.34E+03  2.13E+03  2.40E+03  1.88E+03  2.00E+03  2.03E+03  1.09E+03
F8  std  1.12E+02  7.16E+01  1.00E+02  7.88E+01  5.98E+01  1.43E+02  7.40E+01  1.37E+02  2.44E+02  1.90E+02  6.12E+01
F9  mean  6.71E+04  5.85E+04  7.83E+04  2.99E+04  8.34E+04  4.83E+04  6.69E+04  5.37E+04  6.89E+04  5.12E+04  7.99E+03
F9  std  1.79E+04  8.36E+03  5.45E+03  1.31E+04  8.10E+03  4.09E+03  5.81E+03  1.16E+04  1.43E+04  2.14E+04  6.56E+03
F10  mean  2.71E+04  2.83E+04  3.01E+04  1.72E+04  3.24E+04  1.76E+04  3.02E+04  2.47E+04  2.74E+04  2.02E+04  1.65E+04
F10  std  1.72E+03  2.57E+03  1.31E+03  1.37E+03  5.45E+02  1.98E+03  1.06E+03  4.67E+03  4.62E+03  4.39E+03  3.20E+03
F11  mean  1.73E+05  8.64E+04  8.43E+04  6.59E+03  1.20E+05  1.71E+05  9.58E+04  9.29E+04  2.52E+05  1.32E+05  1.48E+04
F11  std  6.43E+04  1.68E+04  1.43E+04  1.17E+03  1.33E+04  6.84E+04  1.67E+04  1.55E+04  9.84E+04  4.15E+04  4.66E+03
F12  mean  5.98E+09  4.08E+10  3.61E+10  6.95E+08  8.38E+10  3.07E+10  5.41E+10  3.96E+10  2.35E+09  2.40E+09  1.97E+08
F12  std  1.31E+09  9.87E+09  1.12E+10  3.32E+08  8.90E+09  1.46E+10  1.07E+10  9.69E+09  2.60E+09  1.01E+09  7.85E+07
F13  mean  1.86E+08  6.07E+09  5.57E+09  1.01E+06  1.45E+10  4.55E+09  1.06E+10  7.12E+09  2.32E+07  1.03E+08  9.35E+05
F13  std  1.09E+08  2.72E+09  2.56E+09  2.72E+06  3.20E+09  3.38E+09  3.42E+09  3.10E+09  3.51E+07  1.39E+08  9.85E+04
F14  mean  1.20E+07  1.02E+07  1.19E+07  4.34E+06  4.37E+07  1.15E+07  1.68E+07  1.29E+07  1.20E+07  7.95E+06  1.45E+06
F14  std  5.82E+06  5.11E+06  5.71E+06  1.69E+06  1.90E+07  1.59E+07  6.45E+06  8.89E+06  5.05E+06  8.10E+06  6.04E+05
F15  mean  2.89E+07  1.99E+09  1.20E+09  2.17E+05  4.23E+09  1.09E+09  2.83E+09  2.16E+09  3.81E+06  1.97E+07  4.16E+05
F15  std  2.75E+07  1.05E+09  1.37E+09  4.12E+05  9.93E+08  1.25E+09  1.05E+09  1.65E+09  5.78E+06  4.55E+07  7.03E+04
F16  mean  1.45E+04  8.95E+03  1.12E+04  6.79E+03  1.40E+04  8.15E+03  1.39E+04  9.07E+03  9.48E+03  8.65E+03  5.06E+03
F16  std  1.95E+03  9.04E+02  1.21E+03  7.48E+02  9.25E+02  1.00E+03  1.54E+03  1.36E+03  1.79E+03  9.94E+02  7.68E+02
F17  mean  1.02E+04  8.27E+03  1.00E+04  5.47E+03  3.35E+04  9.74E+03  3.34E+04  1.78E+04  7.24E+03  8.32E+03  4.59E+03
F17  std  2.01E+03  1.99E+03  4.07E+03  6.10E+02  3.22E+04  5.11E+03  5.04E+04  3.04E+04  9.77E+02  1.25E+03  4.87E+02
F18  mean  9.42E+06  8.53E+06  1.46E+07  5.71E+06  8.28E+07  1.45E+07  2.26E+07  1.37E+07  1.33E+07  1.23E+07  2.32E+06
F18  std  5.50E+06  4.02E+06  5.71E+06  3.17E+06  3.29E+07  1.43E+07  1.17E+07  9.71E+06  7.91E+06  9.29E+06  1.20E+06
F19  mean  6.49E+07  1.50E+09  1.15E+09  7.40E+06  3.88E+09  1.01E+09  2.65E+09  1.59E+09  3.61E+07  2.07E+07  7.13E+06
F19  std  4.23E+07  1.04E+09  1.32E+09  3.74E+06  8.72E+08  1.10E+09  1.48E+09  1.20E+09  2.96E+07  1.65E+07  3.55E+06
F20  mean  6.60E+03  6.23E+03  6.91E+03  5.39E+03  7.74E+03  5.52E+03  6.91E+03  6.28E+03  6.61E+03  6.23E+03  4.89E+03
F20  std  7.03E+02  9.29E+02  6.08E+02  5.24E+02  2.61E+02  6.09E+02  4.88E+02  9.66E+02  1.00E+03  8.26E+02  4.36E+02
F21  mean  4.31E+03  3.56E+03  3.88E+03  2.90E+03  4.06E+03  3.62E+03  4.03E+03  3.43E+03  3.60E+03  3.85E+03  2.63E+03
F21  std  2.62E+02  1.05E+02  1.14E+02  8.44E+01  1.30E+02  1.23E+02  1.59E+02  9.27E+01  2.62E+02  1.45E+02  5.82E+01
F22  mean  2.95E+04  3.08E+04  3.35E+04  1.97E+04  3.47E+04  2.08E+04  3.26E+04  2.75E+04  2.98E+04  2.16E+04  2.13E+04
F22  std  1.74E+03  2.07E+03  1.03E+03  1.46E+03  5.59E+02  1.59E+03  1.06E+03  4.47E+03  3.95E+03  2.47E+03  1.03E+03
F23  mean  5.14E+03  4.07E+03  4.80E+03  3.43E+03  5.10E+03  3.78E+03  4.86E+03  4.31E+03  4.48E+03  4.43E+03  3.18E+03
F23  std  2.53E+02  1.11E+02  2.00E+02  8.50E+01  1.27E+02  9.78E+01  2.23E+02  1.50E+02  2.45E+02  1.82E+02  7.02E+01
F24  mean  6.46E+03  4.80E+03  6.25E+03  4.00E+03  7.07E+03  4.39E+03  5.99E+03  5.59E+03  5.46E+03  5.44E+03  3.54E+03
F24  std  4.30E+02  1.67E+02  2.37E+02  1.13E+02  2.87E+02  1.42E+02  3.24E+02  2.40E+02  4.45E+02  2.68E+02  4.49E+01
F25  mean  6.47E+03  1.28E+04  1.04E+04  3.65E+03  1.90E+04  1.41E+04  1.35E+04  1.14E+04  4.26E+03  7.51E+03  3.35E+03
F25  std  4.36E+02  1.84E+03  1.29E+03  7.73E+01  1.94E+03  4.45E+03  1.19E+03  1.80E+03  2.81E+02  5.66E+03  5.68E+01
F26  mean  3.66E+04  2.07E+04  2.87E+04  1.31E+04  3.80E+04  1.81E+04  3.52E+04  2.58E+04  2.82E+04  2.47E+04  8.37E+03
F26  std  4.67E+03  1.54E+03  1.95E+03  1.18E+03  1.97E+03  1.59E+03  2.57E+03  2.07E+03  4.31E+03  3.29E+03  5.60E+02
F27  mean  5.73E+03  4.62E+03  5.88E+03  3.84E+03  8.02E+03  3.96E+03  6.12E+03  5.39E+03  5.09E+03  4.34E+03  3.49E+03
F27  std  1.07E+03  2.71E+02  3.70E+02  1.23E+02  5.44E+02  2.11E+02  4.32E+02  3.60E+02  4.10E+02  3.99E+02  3.71E+01
F28  mean  8.80E+03  2.38E+04  1.23E+04  3.70E+03  2.37E+04  1.88E+04  1.80E+04  1.46E+04  5.69E+03  1.69E+04  3.43E+03
F28  std  9.35E+02  4.48E+03  1.77E+03  5.96E+01  2.54E+03  1.58E+03  1.90E+03  1.58E+03  5.66E+03  7.50E+03  4.45E+01
F29  mean  1.70E+04  1.44E+04  1.44E+04  8.48E+03  2.32E+04  1.16E+04  2.19E+04  1.26E+04  1.37E+04  1.10E+04  6.66E+03
F29  std  2.16E+03  2.20E+03  5.35E+03  5.21E+02  5.10E+03  5.93E+03  8.18E+03  1.18E+03  1.52E+03  1.49E+03  5.08E+02
F30  mean  8.75E+08  3.32E+09  3.47E+09  7.72E+07  9.62E+09  2.05E+09  8.50E+09  5.64E+09  6.35E+08  7.83E+07  4.26E+07
F30  std  4.26E+08  1.82E+09  1.84E+09  3.72E+07  1.84E+09  1.53E+09  2.85E+09  2.87E+09  3.26E+08  7.05E+07  1.85E+07
Bold values indicate the best results in each row.
Table 7. Friedman test ranks of CEO and other algorithms on CEC2017.
Algorithm  30-Dim  50-Dim  100-Dim
WOA  7.96667  7.93103  7.30000
SOA  6.31667  6.32759  6.03333
ETO  7.01667  7.15517  6.10000
RIME  1.80000  1.79310  2.10000
SCA  10.30000  10.37931  9.00000
MFO  5.95000  5.81034  6.83333
NRBO  8.75000  8.81034  7.83333
GJO  5.83333  5.86207  6.23333
EVO  6.01667  5.84483  6.36667
DBO  4.81667  4.84483  5.83333
CEO  1.23333  1.24138  1.36667
Bold text denotes the current best rank.
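The average ranks above follow the standard Friedman procedure: on each benchmark function the eleven algorithms are ranked by their mean error (1 = best, ties sharing the average rank), and each algorithm's ranks are then averaged over all functions. A minimal standard-library sketch of that computation, using hypothetical error values rather than data from the paper:

```python
def friedman_ranks(results):
    """results[p][a] = mean error of algorithm a on problem p.
    Returns the Friedman average rank of each algorithm (1 = best)."""
    n_probs, n_algs = len(results), len(results[0])
    totals = [0.0] * n_algs
    for row in results:
        order = sorted(range(n_algs), key=lambda a: row[a])
        ranks = [0.0] * n_algs
        i = 0
        while i < n_algs:                      # tied values share the average rank
            j = i
            while j + 1 < n_algs and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        for a in range(n_algs):
            totals[a] += ranks[a]
    return [t / n_probs for t in totals]

# Hypothetical mean errors for 3 algorithms on 3 problems.
errors = [[3.2, 1.1, 2.5],
          [4.0, 0.9, 4.0],
          [2.8, 1.5, 2.0]]
print(friedman_ranks(errors))  # a0 ≈ 2.83, a1 = 1.0, a2 ≈ 2.17
```

With this convention, a rank near 1 (such as CEO's 1.23 in 30 dimensions) means the algorithm was best or nearly best on almost every function.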
Table 8. Wilcoxon rank-sum test results.
CEO vs.  30-Dim  50-Dim  100-Dim
DBO  26/0/3  25/0/4  24/0/5
ETO  25/0/4  24/0/5  23/0/6
EVO  27/0/2  26/0/3  25/0/4
GJO  27/0/2  26/0/3  25/0/4
MFO  25/0/4  24/0/5  23/0/6
NRBO  24/0/5  23/0/6  22/0/7
RIME  27/0/2  26/0/3  25/0/4
SCA  25/0/4  24/0/5  23/0/6
SOA  25/0/4  24/0/5  23/0/6
WOA  26/0/3  25/0/4  24/0/5
Overall (+/=/-)  257/0/33  247/0/43  237/0/53
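Each entry above is a win/tie/loss tally over the 29 benchmark functions: per function, CEO's independent runs are compared with a rival's runs by a two-sided Wilcoxon rank-sum test at a significance level (α = 0.05 assumed here); a non-significant result counts as a tie, and otherwise the better mean decides win or loss. A standard-library sketch using the normal approximation to the rank-sum statistic (the paper's exact test settings are not reproduced here):

```python
import math

def ranksum_p(x, y):
    """Two-sided p-value of the Wilcoxon rank-sum test via the normal
    approximation, without tie correction (adequate for ~30 runs)."""
    n, m = len(x), len(y)
    pooled = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    ranks = [0.0] * (n + m)
    i = 0
    while i < n + m:                       # tied values share the average rank
        j = i
        while j + 1 < n + m and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        for k in range(i, j + 1):
            ranks[k] = (i + j) / 2 + 1
        i = j + 1
    w = sum(r for r, (_, g) in zip(ranks, pooled) if g == 0)  # rank sum of x
    mu = n * (n + m + 1) / 2
    sigma = math.sqrt(n * m * (n + m + 1) / 12)
    z = (w - mu) / sigma if sigma > 0 else 0.0
    return math.erfc(abs(z) / math.sqrt(2.0))

def tally(a_runs, b_runs, alpha=0.05):
    """(wins, ties, losses) of algorithm A vs B across a list of
    per-function run results, assuming minimization."""
    win = tie = loss = 0
    for a, b in zip(a_runs, b_runs):
        if ranksum_p(a, b) >= alpha:
            tie += 1
        elif sum(a) / len(a) < sum(b) / len(b):
            win += 1
        else:
            loss += 1
    return win, tie, loss
```

For example, `tally([[1, 2, 3, 4, 5]], [[10, 11, 12, 13, 14]])` yields one win, matching how a row like "26/0/3" aggregates 29 such per-function comparisons.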
Table 9. Names and parameters of 13 typical problems in CEC2020-RW.
Prob  Name  D  g  h  f
(D: number of decision variables; g: inequality constraints; h: equality constraints; f: best known feasible objective value.)
Industrial Chemical Processes
RW01  Blending–pooling–separation problem  38  0  32  1.86E+00
RW02  Propane, isobutane, and n-butane nonsharp separation  48  0  38  2.12E+00
Process Synthesis and Design Problems
RW03  Process synthesis problem  2  2  0  2.00E+00
RW04  Process synthesis and design problem  3  1  1  2.56E+00
RW05  Process flow sheeting problem  3  3  0  1.08E+00
RW06  Process synthesis problem  7  9  0  2.92E+00
RW07  Process design problem  10  10  0  5.36E+04
Mechanical Engineering Problems
RW08  Weight minimization of a speed reducer  7  11  0  2.99E+03
RW09  Tension/compression spring design (case 1)  3  3  0  1.27E−02
RW10  Welded beam design  4  5  0  1.67E+00
RW11  Step-cone pulley problem  5  8  3  1.61E+01
RW12  Robot gripper problem  7  7  0  2.53E+00
RW13  Tension/compression spring design (case 2)  3  8  0  2.61E+00
Table 10. Results of CEO and competing algorithms on CEC2020-RW problems.
Problem  Metric  WOA  SOA  ETO  RIME  SCA  MFO  NRBO  GJO  EVO  DBO  CEO
RW01  mean  1.28E+01  8.76E+00  4.29E+00  2.47E+00  6.64E+00  2.66E+00  1.41E+01  4.05E+00  8.43E+00  5.45E+00  1.72E+00
RW01  std  1.24E+01  6.47E+00  2.86E+00  2.97E−01  6.24E+00  1.26E+00  1.90E+01  3.85E+00  7.66E+00  5.06E+00  1.84E−01
RW02  mean  3.73E+00  3.81E+00  4.36E+00  2.13E+00  3.03E+00  2.89E+00  6.39E+00  2.87E+00  5.63E+00  3.95E+00  1.51E+00
RW02  std  2.06E+00  2.19E+00  6.58E−01  3.34E−01  3.33E−01  7.87E−01  2.81E+00  2.64E−01  1.77E+00  1.60E+00  2.74E−01
RW03  mean  2.01E+00  2.00E+00  2.01E+00  2.02E+00  2.00E+00  2.00E+00  2.00E+00  2.00E+00  2.06E+00  2.00E+00  2.00E+00
RW03  std  4.31E−02  1.27E−04  4.92E−02  5.99E−02  3.24E−04  4.14E−07  4.36E−07  1.81E−05  9.40E−02  3.82E−07  1.88E−05
RW04  mean  2.78E+00  2.92E+00  2.60E+00  2.78E+00  2.68E+00  2.77E+00  2.63E+00  2.62E+00  2.87E+00  2.86E+00  2.56E+00
RW04  std  3.18E−01  3.72E−01  5.04E−02  2.24E−01  6.57E−02  2.07E−01  2.18E−01  1.60E−01  6.12E−01  3.79E−01  3.61E−03
RW05  mean  1.23E+00  1.20E+00  1.16E+00  1.14E+00  1.23E+00  1.16E+00  1.24E+00  1.21E+00  1.22E+00  1.22E+00  1.09E+00
RW05  std  5.09E−02  7.48E−02  8.62E−02  8.46E−02  4.52E−02  8.80E−02  4.40E−02  7.13E−02  4.84E−02  6.50E−02  3.17E−02
RW06  mean  7.12E+00  1.29E+01  5.43E+00  3.72E+00  1.09E+01  5.35E+00  3.90E+00  5.81E+00  1.56E+14  7.29E+00  2.96E+00
RW06  std  3.31E+00  1.72E+00  2.38E+00  7.53E−01  3.09E+00  2.81E+00  7.61E−01  2.20E+00  8.03E+14  3.32E+00  2.56E−02
RW07  mean  8.84E+04  5.48E+04  6.35E+04  5.93E+04  6.82E+04  6.07E+04  6.96E+04  5.39E+04  9.31E+04  7.83E+04  5.86E+04
RW07  std  2.01E+04  1.39E+03  1.88E+03  3.41E+03  4.56E+03  7.65E+03  1.15E+04  1.23E+02  1.88E+04  2.15E+04  9.65E+02
RW08  mean  3.14E+03  3.03E+03  3.02E+03  3.00E+03  3.12E+03  3.00E+03  3.03E+03  3.02E+03  5.55E+03  3.04E+03  3.00E+03
RW08  std  1.81E+02  1.32E+01  8.57E+00  9.94E−01  4.63E+01  1.49E+01  3.71E+01  6.28E+00  7.27E+03  4.77E+01  4.39E+00
RW09  mean  1.39E−02  1.28E−02  1.33E−02  1.65E−02  1.30E−02  1.37E−02  1.29E−02  1.28E−02  1.27E+02  1.36E−02  1.28E−02
RW09  std  1.08E−03  8.08E−05  5.42E−04  1.91E−03  1.52E−04  1.74E−03  3.66E−04  7.73E−05  6.94E+02  1.69E−03  1.29E−04
RW10  mean  2.43E+00  1.70E+00  1.69E+00  2.29E+00  1.81E+00  1.74E+00  1.74E+00  1.68E+00  2.86E+00  1.72E+00  1.68E+00
RW10  std  7.91E−01  2.56E−02  9.94E−03  5.70E−01  3.66E−02  6.98E−02  4.61E−02  5.31E−03  6.23E−01  4.84E−02  3.07E−03
RW11  mean  2.66E+01  1.75E+01  2.11E+01  1.69E+01  1.71E+02  2.03E+01  2.24E+01  1.69E+01  1.12E+02  2.06E+01  1.64E+01
RW11  std  3.06E+01  5.09E−01  3.17E+00  3.67E−01  9.75E+01  1.65E+01  1.92E+01  3.45E−01  2.96E+02  1.62E+01  2.07E−01
RW12  mean  5.46E+00  3.48E+00  5.40E+00  3.88E+00  4.27E+00  5.46E+00  5.50E+00  3.26E+00  9.48E+07  4.36E+00  3.67E+00
RW12  std  1.31E+00  5.64E−01  1.52E+00  8.84E−01  3.76E−01  3.32E+00  1.45E+00  5.59E−01  5.19E+08  1.11E+00  3.33E−01
RW13  mean  8.61E+03  2.84E+00  2.75E+00  2.99E+00  2.89E+00  2.82E+00  2.88E+00  2.69E+00  3.99E+00  2.86E+00  2.66E+00
RW13  std  4.71E+04  2.53E−01  1.16E−01  2.32E−01  1.41E−01  2.19E−01  2.74E−01  8.13E−02  2.02E+00  2.77E−01  1.42E−02
Bold values indicate the best results in each row.
Table 11. Friedman test ranks of CEO and other algorithms on CEC2020-RW problems.
Algorithm  Ranking
CEO  1.85
RIME  3.31
GJO  4.62
SOA  5.31
MFO  5.62
DBO  6.38
ETO  6.00
SCA  7.38
NRBO  7.62
WOA  8.62
EVO  10.62
Table 12. Center coordinates and radii of the obstacles.
Center  Radius  Center  Radius
(1.5, 4.5)  1.5  (5.0, 6.0)  0.8
(4.0, 3.0)  1.0  (7.0, 1.0)  0.8
(1.2, 1.5)  0.8  (6.0, 2.5)  0.5
(7.0, 4.0)  0.8  (7.0, 8.0)  0.8
(8.0, 6.0)  0.8
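Given the circular obstacles in Table 12, a candidate path is typically scored by its length and rejected (or penalized) when any segment comes closer to an obstacle center than that obstacle's radius. The piecewise-linear waypoint model, the start/goal choice, and the hard feasibility test below are illustrative assumptions, not the paper's exact formulation:

```python
import math

# Obstacles from Table 12: (center_x, center_y, radius).
OBSTACLES = [(1.5, 4.5, 1.5), (4.0, 3.0, 1.0), (1.2, 1.5, 0.8),
             (7.0, 4.0, 0.8), (8.0, 6.0, 0.8), (5.0, 6.0, 0.8),
             (7.0, 1.0, 0.8), (6.0, 2.5, 0.5), (7.0, 8.0, 0.8)]

def seg_point_dist(p, q, c):
    """Euclidean distance from point c to the segment p-q."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    l2 = dx * dx + dy * dy
    t = 0.0 if l2 == 0 else max(0.0, min(1.0, ((c[0] - p[0]) * dx + (c[1] - p[1]) * dy) / l2))
    return math.hypot(c[0] - (p[0] + t * dx), c[1] - (p[1] + t * dy))

def path_length(pts):
    """Total length of the piecewise-linear path through the waypoints."""
    return sum(math.hypot(b[0] - a[0], b[1] - a[1]) for a, b in zip(pts, pts[1:]))

def feasible(pts, obstacles=OBSTACLES):
    """True if no segment of the waypoint path enters any obstacle circle."""
    return all(seg_point_dist(a, b, (cx, cy)) >= r
               for a, b in zip(pts, pts[1:])
               for cx, cy, r in obstacles)
```

A straight line from (0, 0) to (10, 10) clips several obstacles, while a detour along the boundary is collision-free but longer; an optimizer such as CEO searches for the shortest feasible waypoint sequence, which is what Table 13 reports.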
Table 13. Results of CEO and other algorithms in robot path planning.
Algorithm  Best  Mean  Worst  Std
CEO  14.7149  15.6812  16.9869  0.7658
WOA  14.6089  16.4561  19.9585  1.5090
SOA  17.0188  18.2810  19.9589  0.8134
ETO  14.5366  15.9483  18.6655  1.2663
RIME  14.4282  15.1271  16.4175  0.7655
SCA  15.3953  17.0291  18.3013  1.0611
MFO  14.8754  16.0589  17.7252  0.6020
NRBO  14.6438  15.7781  16.8590  0.7527
GJO  14.6415  15.9263  18.1647  0.8583
EVO  14.6200  16.4000  25.7469  2.4049
DBO  14.6535  15.7248  17.0185  0.5990
Bold values indicate the best results in each column.
Table 14. Bounds of different parameters in three PV models.
Parameter  SDM & DDM (Lb, Ub)  PVMM (Lb, Ub)
I_ph (A)  0, 1  0, 2
I_sd1, I_sd2, I_sd (μA)  0, 1  0, 50
n_1, n_2, n  1, 2  1, 50
R_sh (Ω)  0, 100  0, 2000
R_s (Ω)  0, 0.2  0, 2
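Parameter extraction minimizes the root-mean-square error (RMSE) between measured and model currents. For the single diode model (SDM), the output current is defined implicitly by I = I_ph − I_sd(exp((V + I·R_s)/(n·V_t)) − 1) − (V + I·R_s)/R_sh, so each evaluation requires a numerical solve. The sketch below uses Newton's method; the cell temperature, the sample parameter vector (near commonly reported RTC France values), and the synthetic data are illustrative assumptions:

```python
import math

# Assumed cell conditions (the RTC France cell is measured at 33 °C).
Q, K, T = 1.602176634e-19, 1.380649e-23, 306.15
VT = K * T / Q  # thermal voltage, ~0.0264 V

def sdm_current(V, Iph, Isd, n, Rs, Rsh, iters=50):
    """Solve the implicit SDM equation
    I = Iph - Isd*(exp((V + I*Rs)/(n*VT)) - 1) - (V + I*Rs)/Rsh
    for I by Newton's method."""
    I = Iph  # short-circuit current is a reasonable starting guess
    for _ in range(iters):
        e = math.exp((V + I * Rs) / (n * VT))
        f = Iph - Isd * (e - 1.0) - (V + I * Rs) / Rsh - I
        df = -Isd * e * Rs / (n * VT) - Rs / Rsh - 1.0
        I -= f / df
    return I

def rmse(params, data):
    """Objective reported in Table 15: RMSE between measured and modeled currents."""
    Iph, Isd, n, Rs, Rsh = params
    se = sum((sdm_current(V, Iph, Isd, n, Rs, Rsh) - Im) ** 2 for V, Im in data)
    return math.sqrt(se / len(data))

# Hypothetical parameter vector and self-consistent synthetic I-V data.
params = (0.76, 3.2e-07, 1.48, 0.036, 53.7)
data = [(v, sdm_current(v, *params)) for v in (-0.2, 0.0, 0.2, 0.4, 0.5)]
print(rmse(params, data))  # ~0 for the generating parameters
```

An optimizer treats `params` as the decision vector within the bounds of Table 14 and minimizes `rmse`; the DDM and PVMM objectives follow the same pattern with their extra diode and module terms.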
Table 15. Statistical outcomes of the RMSE attained by eleven algorithms on three PV models.
Model  Algorithm  Min  Max  Mean  Std
SDM  CEO  1.0319E−03  2.2184E−03  1.4448E−03  3.3058E−04
SDM  RIME  9.8634E−04  3.3665E−03  1.8378E−03  6.6589E−04
SDM  EVO  2.0478E−03  6.4768E−02  1.4151E−02  1.5046E−02
SDM  ETO  2.3422E−03  3.8576E−02  2.1392E−02  1.7378E−02
SDM  DBO  9.8634E−04  3.8151E−02  8.9382E−03  1.4132E−02
SDM  SCA  8.0708E−03  4.7950E−02  3.9309E−02  1.0215E−02
SDM  SOA  5.6515E−03  2.2286E−01  5.4330E−02  4.6857E−02
SDM  GJO  1.4643E−03  4.5481E−02  1.7576E−02  1.7853E−02
SDM  NRBO  1.0117E−03  7.8237E−03  2.0972E−03  1.3274E−03
SDM  WOA  1.0109E−03  4.3822E−02  9.4499E−03  1.4810E−02
SDM  MFO  9.8910E−04  3.0630E−03  1.8689E−03  5.8066E−04
DDM  CEO  1.0722E−03  3.2600E−03  1.5364E−03  4.9123E−04
DDM  RIME  1.0338E−03  3.5719E−03  2.1756E−03  6.5960E−04
DDM  EVO  2.4682E−03  5.8062E−02  1.6849E−02  1.4443E−02
DDM  ETO  1.5734E−03  3.8674E−02  1.6088E−02  1.6306E−02
DDM  DBO  9.8863E−04  3.3392E−02  5.1234E−03  9.6871E−03
DDM  SCA  1.5103E−02  4.7714E−02  3.8462E−02  1.0088E−02
DDM  SOA  4.1689E−03  2.2286E−01  5.7105E−02  5.7555E−02
DDM  GJO  1.5062E−03  4.6017E−02  2.1818E−02  1.8878E−02
DDM  NRBO  9.9823E−04  7.2489E−03  2.1104E−03  1.3607E−03
DDM  WOA  1.0655E−03  4.5614E−02  8.9629E−03  1.3502E−02
DDM  MFO  1.2926E−03  3.8281E−03  2.3532E−03  5.7485E−04
PVMM  CEO  1.0692E−02  1.8661E−02  1.5073E−02  2.3536E−03
PVMM  RIME  2.4552E−03  1.8764E−02  1.4679E−02  4.4654E−03
PVMM  EVO  9.1956E−03  5.4525E−02  2.6622E−02  1.0156E−02
PVMM  ETO  9.2516E−03  3.1964E−02  1.8596E−02  7.3193E−03
PVMM  DBO  2.4552E−03  3.2882E−02  1.0074E−02  7.9653E−03
PVMM  SCA  2.7051E−03  2.7428E−01  6.1194E−02  7.2333E−02
PVMM  SOA  3.1736E−02  2.7425E−01  1.8664E−01  1.1716E−01
PVMM  GJO  1.0927E−02  3.2864E−02  2.6289E−02  7.4960E−03
PVMM  NRBO  2.5562E−02  2.0813E−01  1.2225E−01  5.7100E−02
PVMM  WOA  1.5042E−03  2.7428E−01  2.9773E−02  4.6943E−03
PVMM  MFO  1.3952E−02  1.9212E−02  1.7971E−02  1.5342E−03
Table 16. Ranks of eleven algorithms on three PV models based on Friedman’s test.
Algorithm  Ranking
CEO  1.67
MFO  2.33
RIME  3.00
NRBO  4.33
WOA  5.67
DBO  6.00
EVO  6.33
ETO  6.67
GJO  7.00
SCA  9.00
SOA  10.00
Bold text denotes the current best rank.

Wang, R.; Jiang, Z.; Ding, G. Cosmic Evolution Optimization: A Novel Metaheuristic Algorithm for Numerical Optimization and Engineering Design. Mathematics 2025, 13, 2499. https://doi.org/10.3390/math13152499
