Article

Electrical Storm Optimization (ESO) Algorithm: Theoretical Foundations, Analysis, and Application to Engineering Problems

by Manuel Soto Calvo 1,* and Han Soo Lee 1,2,3,*
1 Coastal Hazards and Energy Sciences Laboratory, Transdisciplinary Science and Engineering Program, Graduate School of Advanced Science and Engineering, Hiroshima University, Hiroshima 739-8529, Japan
2 Center for Planetary Health and Innovation Science (PHIS), The IDEC Institute, Hiroshima University, Hiroshima 739-8529, Japan
3 Smart Energy School of Innovation and Practice for Smart Society, Hiroshima University, Hiroshima 739-8529, Japan
* Authors to whom correspondence should be addressed.
Mach. Learn. Knowl. Extr. 2025, 7(1), 24; https://doi.org/10.3390/make7010024
Submission received: 17 February 2025 / Revised: 27 February 2025 / Accepted: 3 March 2025 / Published: 6 March 2025

Abstract:
The electrical storm optimization (ESO) algorithm, inspired by the dynamic nature of electrical storms, is a novel population-based metaheuristic that employs three dynamically adjusted parameters: field resistance, field intensity, and field conductivity. Field resistance assesses the spread of solutions within the search space, reflecting strategy diversity. The field intensity balances the exploration of new territories and the exploitation of promising areas. The field conductivity adjusts the adaptability of the search process, enhancing the algorithm’s ability to escape local optima and converge on global solutions. These adjustments enable the ESO to adapt in real-time to various optimization scenarios, steering the search toward potential optima. ESO’s performance was rigorously tested on 60 benchmark problems, including the IEEE CEC SOBC 2022 suite, against 20 well-known metaheuristics. The results demonstrate the superior performance of ESO, particularly in tasks requiring a nuanced balance between exploration and exploitation. Its efficacy is further validated through successful applications in four engineering domains, highlighting its precision, stability, flexibility, and efficiency. Additionally, the algorithm’s computational costs were evaluated in terms of the number of function evaluations and computational overhead, reinforcing its status as a standout choice in the metaheuristic field.

1. Introduction

Optimization constitutes a fundamental aspect of scientific research and engineering, aiming at identifying the most effective solution among a plethora of feasible options under given constraints [1,2,3,4]. It plays a pivotal role in decision-making processes across various domains, including logistics, manufacturing, finance, and artificial intelligence [5]. Most of the problems presented in different scientific and engineering areas can be formulated as optimization problems [6].
Various optimization problems are classified on the basis of their objective function nature, constraints, and variables involved [7,8,9]. Within the classification of optimization techniques, exact methods, which are designed to find the optimal solution with precision guarantees for certain problem types, represent a critical category [2]. Exact methods are particularly powerful in contexts where the requirements for accuracy and certainty outweigh the need for computational efficiency. However, these methods are not generally effective in solving complex optimization problems because of their dependency on preconditions such as continuity, convexity, and differentiability of the objective function, which are not usually met in real-world optimization problems [7,8,10,11].
The quest for a balance between computational efficiency and optimal solution finding has spurred significant interest in heuristic and metaheuristic methods [7,8,12]. These algorithms have proven effective at navigating complex search spaces to efficiently deliver near-optimal solutions [9,13,14,15]. In recent years, significant advancements in metaheuristic algorithms have been reported. Notable developments include the marine predator algorithm (MPA) [16], which models hunting strategies in marine ecosystems, and the equilibrium optimizer (EO) [17], which is based on mass balance principles. Henry gas solubility optimization (HGSO) [18] draws inspiration from gas solubility dynamics, whereas the modified artificial gorilla troops optimizer (MAGTO) [19] enhances primate behavior modeling. Tuna swarm optimization (TSO) [20] mimics tuna hunting patterns, and the rain optimization algorithm (ROA) [21] simulates precipitation processes. The modified Harris hawk optimization (MHHO) [22] improves upon cooperative hunting strategies. The Brown-Bear Optimization Algorithm (BBOA) [23] simulates the hunting and foraging behavior of brown bears, whereas the Energy Valley Optimizer (EVO) [24] draws inspiration from energy landscapes in physical systems. Fick’s law algorithm (FLA) [25] leverages principles of molecular diffusion, and the Honey Badger Algorithm (HBA) [26] models the aggressive foraging strategies of honey badgers. The Hunger games search (HGS) [27] implements competitive survival dynamics. Other significant contributions include the arithmetic optimization algorithm (AOA) [28], the modified whale optimization algorithm (MWOA) [29], and the enhanced leader particle swarm optimization (ELPSO). This proliferation of algorithms highlights both the field’s dynamism and the ongoing challenge of developing truly novel optimization mechanisms.
Despite the success of metaheuristics in addressing a broad spectrum of optimization challenges, the demand for more efficient, robust, and versatile algorithms persists, driven by increasingly complex optimization problems. As the field evolved through the late 20th and early 21st centuries, the number of metaheuristic algorithms grew to over 500 by the end of 2023, each designed to address specific complex issues [30]. Research suggests that many of the newly developed algorithms are enhancements or variations of existing methods [7,31,32]. This trend highlights the iterative process of science but has also drawn several criticisms. A significant concern is the lack of novelty, with many new algorithms being minor modifications rather than genuine innovations [33]. Additionally, the introduction of biases, such as center bias, skews the performance results, favoring solutions near the center of the search space [34]. Another issue is the inconsistency between the proposed metaphors, mathematical models, and algorithm implementations. Often, the metaphors do not translate into the algorithm’s framework, leading to a disconnect between theory and practice [35]. Furthermore, the lack of consistent and transparent benchmarking complicates the assessment of algorithm efficacy [36]. Selective datasets and metrics can result in biased conclusions that fail in broader applications.
In this context, we introduce the electrical storm optimization (ESO) algorithm, a novel mathematical approach inspired by the dynamic behavior of electrical storms. This algorithm draws from the observation that lightning strikes along the path of least resistance to ground, analogous to finding the optimal path through a complex search space. The ESO algorithm uses three dynamically adjusted parameters (field resistance, conductivity, and intensity) to adapt the search process in real time. This allows the algorithm to seamlessly transition between the exploration and exploitation phases on the basis of the immediate needs of the optimization landscape. Unlike the reliance on interagent communication or fixed update mechanisms seen in other algorithms, the ESO approach is autonomously driven by environmental responses, increasing its adaptability across both discrete and continuous problem spaces. This study delineates the conceptual foundation of the ESO, its algorithmic structure, and potential implications for advancing the field of optimization.
The main objective of this study is to develop a robust and efficient optimization algorithm that can effectively handle both unimodal and multimodal optimization problems while maintaining consistent performance across varying dimensionalities. This work offers several key scientific contributions: (1) the introduction of a novel dynamic adaptation mechanism that automatically adjusts search parameters on the basis of real-time feedback from the optimization landscape, enabling seamless transitions between exploration and exploitation phases; (2) a self-regulating framework that reduces the need for parameter tuning across different problem types, enhancing the algorithm’s practical applicability; (3) a mathematically rigorous approach to distance-based resistance calculation that provides more accurate guidance for the search process; and (4) a computationally efficient implementation that scales well with problem dimensionality while maintaining competitive performance. These contributions address current limitations in the field of optimization algorithms, particularly in terms of adaptation capabilities and computational efficiency.

2. Materials and Methods

Thunderstorms are among the most potent phenomena in nature and are characterized by lightning occurring within cumulonimbus clouds. These clouds experience strong updrafts and downdrafts that move water particles across different altitudes. Collisions between droplets and ice particles, combined with temperature and pressure differences, lead to charge separation: positive charges accumulate at cloud tops and negative charges at the bases. This results in a strong electric field within the clouds and between the clouds and the surface of the Earth [37,38]. When the potential difference exceeds the dielectric strength of air, electrical discharge, known as lightning, occurs. This discharge, influenced by wind, humidity, and particulates, can occur within a cloud, between clouds, or from clouds to the ground and is subject to environmental obstacles such as terrain or structures, which can alter the path of lightning [39].
The ESO algorithm draws inspiration from these dynamics, particularly the way in which lightning seeks the path of least resistance. This natural process is applied to solve optimization problems, where the algorithm seeks optimal solutions by navigating through problem spaces in a manner analogous to lightning navigating atmospheric obstacles.
Field resistance (R): In thunderstorms, electrical resistance determines the ease with which lightning can travel through different regions of the atmosphere [40,41]. In the ESO algorithm, this parameter measures the dispersion of solutions within the search space, guiding the algorithm’s adjustments on the basis of the problem’s topology.
Field intensity (I): The intensity of the electric field in a thunderstorm governs the strength and frequency of lightning discharges [42]. Analogously, this parameter controls the algorithmic phases, facilitating strategic shifts between exploration and exploitation.
Field conductivity (k_e): Electrical conductivity measures the ability of a material or system to conduct electric current and is the reciprocal of resistance. Reflecting the adaptability required by lightning to navigate varying atmospheric conditions, this parameter dynamically modulates the algorithm’s response to changes in field resistance, enhancing its ability to fine-tune the search process.
Storm power (P): Storm power is inspired by the cumulative and finite potential energy present in a thunderstorm. In the ESO algorithm, P measures the remaining energy and potential for exploration during the search process. This ensures that the algorithm’s efforts are proportional to the current state of variability and intensity in the search space, maintaining an adaptive balance between exploration and exploitation.
Ionized areas (α): In thunderstorms, ionized areas are regions where the air has become electrically conductive due to ionization, allowing lightning to travel through these paths of least resistance [40,42]. In the ESO algorithm, ionized areas represent regions in the search space where the probability of finding optimal solutions is higher. By identifying and targeting these ionized areas, the algorithm can navigate the search space more effectively, enhancing its efficiency in locating optimal solutions.
By leveraging nature-inspired dynamics, the ESO offers a structured and adaptive approach to complex optimization challenges, effectively translating these natural phenomena into computational strategies. Figure 1 illustrates how these dynamics are operationalized within the algorithm.
During the first iteration (j = 1) of the optimization process, the lightning (agents) are positioned randomly within the defined lower (b_low) and upper (b_high) bounds of the search space with n dimensions. This is expressed as ρ_{i,j}^n, where each agent i is represented as a vector with both an origin and a direction that can explore the search space. Once these initial positions are established, the fitness of each lightning agent’s position is calculated by evaluating the objective function at the agent’s position. This initial assessment is crucial, as it sets the baseline for the algorithm’s iterative process of refining positions to find optimal solutions.

2.1. Field Resistance

The concept of field resistance (R) in the ESO algorithm is crucial for effectively managing the optimization process. It serves as a real-time feedback mechanism that reflects the landscape of the optimization problem, enabling the algorithm to dynamically control its operational phases. The field resistance is calculated as the ratio between the standard deviation (σ) and the peak-to-peak difference (Δp) across the positions (ρ_{i,j}^n) in each iteration (j) over the total number of lightning agents (N), which is an argument of the algorithm. The standard deviation is computed on the basis of the deviation of each position from the mean position (p̄), where p̄ is the average of all the positions. The mathematical representation of this calculation is provided in Equations (1)–(3). A higher R indicates a greater need for exploration, suggesting that the current solutions are widely spread and possibly distant from any optimum. Conversely, a lower R indicates a denser clustering of solutions, suggesting proximity to potential optima and thus a greater need for exploitation to fine-tune the solutions.
\sigma_j = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} \left( \rho_{i,j}^{n} - \bar{p} \right)^{2}} \quad (1)
\Delta p_j = \max_i \left( \rho_{i,j}^{n} \right) - \min_i \left( \rho_{i,j}^{n} \right) \quad (2)
R_j = \frac{\sigma_j}{\max \left( \Delta p_j \right)} \quad (3)
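Equations (1)–(3) can be sketched in a few lines of NumPy. The function name and the array layout (one row per lightning agent) are illustrative assumptions, not part of the published implementation:

```python
import numpy as np

def field_resistance(positions):
    """Field resistance R_j (Eqs. 1-3): the sample standard deviation of the
    population divided by the largest per-dimension peak-to-peak spread.
    `positions` is an (N, n) array, one row per lightning agent (assumed layout).
    """
    sigma = np.std(positions, ddof=1)                        # Eq. (1); ddof=1 gives 1/(N-1)
    delta_p = positions.max(axis=0) - positions.min(axis=0)  # Eq. (2), per dimension
    return sigma / delta_p.max()                             # Eq. (3)
```

A widely spread population yields a large R (signaling a need for exploration), while a tight cluster yields a small R (favoring exploitation).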

2.2. Ionized Areas

The ionized areas ( α ) are identified as the zones within the search space where the solutions are superior, indicating lower resistance. This involves selecting a subset of the best solutions on the basis of their fitness. First, the percentile of fitness from all the solutions is calculated via Equation (4). Then, the solutions whose fitness values are better (lower for minimization, higher for maximization) are selected, as expressed in Equation (5). This process ensures that the algorithm can identify and focus on the most promising areas of the search space, facilitating a more directed and potentially more efficient search.
\text{percentile} = \left( R_j / 2 \right) \times 100 \quad (4)
\alpha = \left\{ \text{solution} \;\middle|\; \text{fitness} < \text{percentile}(\text{fitness}) \right\} \quad (5)
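In code, the ionized-area selection of Equations (4) and (5) is a percentile filter over the population fitness. The `minimize` flag is an illustrative addition covering the two directions mentioned in the text:

```python
import numpy as np

def ionized_areas(positions, fitness, R, minimize=True):
    """Eq. (4): the cut-off percentile is (R/2) * 100.
    Eq. (5): keep the solutions whose fitness beats that percentile
    (lower for minimization, higher for maximization)."""
    pct = (R / 2.0) * 100.0
    threshold = np.percentile(fitness, pct)
    better = fitness < threshold if minimize else fitness > threshold
    return positions[better]
```

Note that when R is small (a tightly clustered population), the cut-off percentile shrinks and α may be empty, in which case the initialization rule of Section 2.6 falls back to uniform sampling.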

2.3. Field Conductivity

The field conductivity (k_e) dynamically adjusts the search strategy in response to changes in R in each iteration (j), allowing the ESO algorithm to modulate exploration and exploitation on the basis of the current environmental resistance. This parameter indicates how easily lightning can progress toward the optimum. It is computed via a logistic function (β), as expressed in Equations (6) and (7). The nonlinear nature of the logistic function makes k_e highly reactive to small changes in R, allowing quick and effective adjustments to the search strategy. This reactivity is essential for handling the variability and complexity of search spaces; a less responsive adjustment would result in slower convergence. High field resistance (R) results in low k_e, promoting exploration. Conversely, a low R increases k_e, favoring exploitation.
\beta_j = \frac{1}{1 + e^{-\left( e^{R_j} - R_j \right) \times R_j \log_e \left( 1 - R_j \right)}} \quad (6)
k_{e_j} = \left( e^{R_j} + e^{1 - R_j} \right) \times \log_e \left( R_j \right) \times \beta_j \quad (7)
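The extracted forms of Equations (6) and (7) are ambiguous, so the sketch below is one possible reading: β is a logistic whose argument combines the slope e^R − R with R·ln(1 − R), and the absolute value of ln R is taken so that k_e stays positive and inversely related to R, as the surrounding text requires. Both choices are assumptions, not the published formulas:

```python
import numpy as np

def field_conductivity(R):
    """Field conductivity k_e for 0 < R < 1 (one reading of Eqs. 6-7).
    |ln R| is an assumption: it makes k_e large when R is small (favoring
    exploitation) and near zero when R is large (favoring exploration)."""
    slope = np.exp(R) - R                                           # sigmoid slope
    beta = 1.0 / (1.0 + np.exp(-slope * R * np.log(1.0 - R)))       # Eq. (6)
    return (np.exp(R) + np.exp(1.0 - R)) * abs(np.log(R)) * beta    # Eq. (7)
```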

2.4. Field Intensity

The field intensity (I) dynamically adjusts the storm stages during the optimization process through a logistic function (γ). This allows a nonlinear response to variations in field resistance (R), with an imposed decreasing trend influenced by the current iteration (j) relative to the total number of iterations (iter), allowing the algorithm to adapt its behavior over time, as in Equation (8). This adjustment models three distinct stages of storm development: an initial exploration phase, a transitional phase, and a final exploitation phase. The field intensity is then calculated as the product of γ and the field conductivity (k_e), as in Equation (9).
Figure 2 shows the conceptual behavior of I. The slope of the curve (m = e^{R_j} − R_j) and the sigmoid’s midpoint (x_0 = R_j log_e(1 − j/iter)) are adjusted on the basis of the value of R. Consequently, when R is high, I is increased to promote greater exploration of the search space. Conversely, when R is low, I is decreased to encourage the exploitation of promising areas near the current best solution. This dynamic adjustment of I ensures a balance between exploration and exploitation, maintaining the effectiveness and efficiency of the search process throughout the optimization.
\gamma_j = \frac{1}{1 + e^{-\left( e^{R_j} - R_j \right) \times R_j \log_e \left( 1 - \frac{j}{iter} \right)}} \quad (8)
I_j = k_{e_j} \times \gamma_j \quad (9)
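Equations (8) and (9) follow the same logistic pattern, with the midpoint term tied to the iteration counter so that intensity decays over the run; the exact grouping of terms is again an assumption from the extracted formula:

```python
import numpy as np

def field_intensity(R, ke, j, n_iter):
    """Field intensity I_j = k_e * gamma (Eqs. 8-9). gamma is a logistic in R
    whose midpoint term R * ln(1 - j/n_iter) imposes a decreasing trend as the
    iteration j approaches n_iter (valid for j < n_iter)."""
    slope = np.exp(R) - R
    midpoint = R * np.log(1.0 - j / n_iter)
    gamma = 1.0 / (1.0 + np.exp(-slope * midpoint))  # Eq. (8)
    return ke * gamma                                # Eq. (9)
```

With fixed R and k_e, the intensity early in the run exceeds the intensity late in the run, reproducing the exploration-to-exploitation schedule described above.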

2.5. Storm Power

Storm power (P) plays a crucial role in quantifying the cumulative potential of the search mechanism at any given point during the optimization process. It serves as an adjustment to the intensity of the perturbation introduced to the lightning positions and to the step size for the local search. It is calculated from the dynamically varying factors R, I, and k_e, as in Equation (10). This definition is inspired by the physical concept of electrical loss, where power loss occurs due to resistance within a system. In the context of the ESO algorithm, this concept is adapted to calculate the remaining power of the storm during the exploration process. This adaptive mechanism ensures that the algorithm remains responsive to the topology of the search space, enabling it to dynamically balance exploration and exploitation on the basis of real-time feedback from the optimization environment.
P_j = \frac{R_j \times I_j}{k_{e_j}} \quad (10)

2.6. Lightning Initialization

The initialization process involves strategically positioning new agents either within previously identified promising regions (ionized areas) or randomly within the search space. If ionized areas (α) are present (j > 1), the position of a new lightning event (ρ_{i,j}^n) is determined by selecting a position from α and applying a perturbation equal to the storm power (P) value. If no α is present, the position ρ_{i,j}^n is given by a uniform random distribution within the lower (b_low) and upper (b_high) bounds of each dimension, as in Equation (11).
\rho_{i,j}^{n} = \begin{cases} U\left( b_{low}, b_{high} \right), & \text{for } j = 1 \\ \alpha_j + P_j, & \text{for } j > 1 \end{cases} \quad (11)
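A sketch of the initialization rule in Equation (11); drawing one ionized solution per agent is an illustrative interpretation of the selection step:

```python
import numpy as np

rng = np.random.default_rng(42)

def init_lightning(N, b_low, b_high, alpha=None, P=0.0):
    """Eq. (11): on the first iteration (no ionized areas yet), scatter the N
    agents uniformly within the bounds; afterwards, re-seed each agent from a
    randomly chosen ionized solution shifted by the storm power P."""
    if alpha is None or len(alpha) == 0:
        return rng.uniform(b_low, b_high, size=(N, len(b_low)))
    picks = alpha[rng.integers(0, len(alpha), size=N)]
    return picks + P
```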

2.7. Branching and Propagation

The ESO algorithm simulates the branching and propagation of lightning via interactions with ionized areas within the search space. The new position of active lightning (ρ_{new,j}^n) is determined on the basis of whether the position of the lightning (ρ_{i,j}^n) is within an ionized area (α) or not. For lightning in α, the new position is adjusted by the storm power (P), aiming to explore nearby promising areas. For lightning outside of α, the new position is influenced by the average position of α, scaled by a random uniform perturbation U(−k_e, k_e) of size ±k_e and the storm power (P), facilitating the exploration of areas with known high-quality solutions, as shown in Equation (12).
\rho_{new,j}^{n} = \begin{cases} \rho_{i,j}^{n} \times P_j, & \text{if } \rho_{i,j}^{n} \in \alpha_j \\ \frac{1}{N_\alpha} \sum_{i=1}^{N_\alpha} \rho_{i,j}^{n} + U\left( -k_{e_j}, k_{e_j} \right) \times P_j \times e^{k_{e_j}}, & \text{otherwise} \end{cases} \quad (12)
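The branching rule of Equation (12) can be sketched as follows; the row-wise membership test and the per-agent noise draw are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)

def propagate(pos, alpha, P, ke):
    """Eq. (12): lightning inside an ionized area is rescaled by the storm
    power P; lightning outside moves to the mean ionized position plus a
    uniform perturbation U(-ke, ke) scaled by P and e^ke."""
    if any(np.array_equal(pos, a) for a in alpha):  # membership in alpha
        return pos * P
    center = alpha.mean(axis=0)                     # average ionized position
    noise = rng.uniform(-ke, ke, size=pos.shape)
    return center + noise * P * np.exp(ke)
```

Agents outside α are thus pulled toward the centroid of the best solutions, with the perturbation radius bounded by k_e · P · e^{k_e}.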

2.8. Selection and Evaluation of New Positions

The optimization process seeks to find the optimal solution within complex and vast search spaces. Therefore, evaluating improvements is a critical component of the algorithm, which focuses on the iterative refinement of agent (lightning) positions within the search space. The primary objective of this process is to evaluate new potential positions for each agent and determine whether moving to these positions would improve the fitness of the objective function. This decision-making process is central to the algorithm’s ability to converge on optimal or near-optimal solutions. By systematically comparing the fitness of the function at the new position f(ρ_{new,j}^n) with the best fitness found thus far (f(ρ_best)), the algorithm decides whether adopting the new path would bring the search closer to the optimization goal, as in Equations (13) and (14). The fitness is obtained by evaluating the objective function at the current coordinates of each lightning strike. Algorithm 1 presents the pseudocode of the main processes in the algorithm.
\text{update if } \begin{cases} f\left( \rho_{new,j}^{n} \right) < f\left( \rho_{best} \right), & \text{for min} \\ f\left( \rho_{new,j}^{n} \right) > f\left( \rho_{best} \right), & \text{for max} \end{cases} \quad (13)
\rho_{best} \leftarrow \rho_{new,j}^{n}, \quad f\left( \rho_{best} \right) \leftarrow f\left( \rho_{new,j}^{n} \right) \quad (14)
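The greedy update of Equations (13) and (14) reduces to a single comparison; the helper below is a sketch, with `minimize` as an illustrative flag for the two cases:

```python
def update_best(best_pos, best_fit, new_pos, new_fit, minimize=True):
    """Eqs. (13)-(14): adopt the new position only when it improves the
    incumbent best fitness (smaller for minimization, larger for maximization)."""
    improved = new_fit < best_fit if minimize else new_fit > best_fit
    return (new_pos, new_fit) if improved else (best_pos, best_fit)
```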
Algorithm 1. Pseudocode of the main processes of the electrical storm optimization (ESO) algorithm

3. Results

The effectiveness of the ESO algorithm was assessed through a series of diverse optimization challenges. The evaluation began with 25 primitive unimodal and multimodal problems, testing the efficiency of ESO in escaping local optima and converging to the global optimum. It then proceeded to 20 shifted and rotated problems to explore its ability to navigate multiple local optima in complicated landscapes. Additionally, ESO faced the IEEE Congress on Evolutionary Computation (CEC) 2022 Single Objective Bound Constrained (SOBC) benchmark set, which includes rotated, shifted, hybrid, and composite functions, showcasing its adaptability to intricate and computationally intensive tasks [43]. The real-world applicability of the algorithm was further validated by its performance on four engineering problems.
The ESO algorithm was benchmarked against 15 of the most cited and utilized metaheuristic algorithms and 5 recently published algorithms, as shown in Table 1 [2,7,10,12,32,44,45,46]. Moreover, the comparison set includes the LSHADE algorithm, which was the winner of numerous editions of the IEEE CEC competition [47,48,49,50]. The implementation of the algorithms was retrieved from the Mealpy v3.0.1 Python library, which is known for its robust and standardized implementations of optimization algorithms, providing a consistent platform for comparative evaluations [8]. To ensure fairness, standard hyperparameters were used, with a population size of 50 and a maximum of 50,000 objective function evaluations. Fifty independent runs were conducted per problem, with no parameter tuning between problems. The default parameters for each algorithm were retrieved from their corresponding papers, as shown in Table 1. However, the comparison focused on the results of the top ten performing algorithms, with the results for the ten lower-performing algorithms detailed in Appendix A Table A1, Table A2 and Table A3.

3.1. Primitive Benchmark Functions

Unimodal benchmark functions are crucial for evaluating the performance of metaheuristic algorithms, particularly their exploitation capabilities within optimization landscapes. With a single global optimum and no local optimum, they effectively assess an algorithm’s precision in converging to the global solution. The primary challenge lies in exploiting the search space while navigating environments where gradient information is absent or misleading, limiting direct feedback about the optimum location. This lack of gradients requires algorithms to rely on indirect methods to infer direction and distance to the global optimum. Consequently, algorithms need efficient strategies that transition smoothly from exploration to exploitation, even without explicit gradient information.
Multimodal benchmark functions are essential for assessing the performance of metaheuristic algorithms designed for complex optimization challenges. These functions, characterized by numerous local optima, require sophisticated navigation to distinguish between local and global optima. Functions such as Damavandi (F16), Cross Leg Table (F14), and Modified Rosenbrock (F25) have intricate landscapes that mislead optimization efforts with deceptive gradients and rugged terrains, often leading algorithms to converge on local optima instead of the global optimum. These problems evaluate the robustness, adaptability, and intelligence of algorithms, challenging the ability of algorithms to dynamically adjust search strategies in response to the changing topography of the problem space. Success depends on balancing search intensity and diversity, diving deeply into promising regions while maintaining broad sweeps to avoid pitfalls. Table 2 shows the test set of primitive benchmark problems, comprising unimodal and multimodal problems with a wide range of characteristics and complexities.
Table 3 presents the performances of the algorithms across 25 primitive problems. ESO and LSHADE consistently achieved optimal or near-optimal means and low standard deviations, reaching success rates of 92% and 86%, respectively, at a tolerance of 10^−8. The DE and GSKA algorithms also yielded strong results, although they did not match the performance of the ESO and LSHADE algorithms. Their performance was particularly notable in unimodal functions, although they showed some variability in more complex multimodal scenarios. MFO and QSA demonstrated comparable competence, especially for functions F1, F2, and F5, where they achieved near-optimal solutions. Conversely, the FLA and FPA showed the poorest performance among the tested algorithms, exhibiting greater variability in their solutions and larger distances from the optimum. This was particularly evident in functions F3 and F16, where they showed significant deviations from the optimal solutions. PSO also demonstrated inconsistent performance, struggling notably with functions F3 and F4, where it showed some of the largest deviations from the optimal solutions in the test set.
Figure 3 presents the convergence curves of the algorithms across primitive problems. The ESO algorithm demonstrates effective convergence in most of the problems and maintains the lowest function fitness, highlighting its efficiency in exploring and exploiting the solution space. GSKA, LSHADE, and QSA also show strong performance. However, MFO and DE exhibit significant performance variability across different functions.
Benchmark functions in their basic form can lead to misleading performance assessments, particularly due to center bias, which is a common issue where algorithms tend to concentrate their search near the coordinate origin. Table 4 presents transformed benchmark functions, incorporating rotations and shifts to address this limitation. These transformations move optima away from the center and introduce variable interdependencies through orthogonal matrix operations, creating more challenging and realistic test scenarios that better reflect real-world optimization problems. Several studies have shown that algorithms that perform well on basic functions often struggle with these transformed variants, revealing hidden weaknesses in their search mechanisms and potential center bias issues that could impact their practical applications.
The results in Table 5 reveal differential performance patterns across the shifted and rotated functions (F26–F45). ESO and LSHADE maintain their optimization capabilities, with ESO achieving exceptional precision in functions such as F26, F35, F31, F39, and F40. However, several algorithms exhibit significant performance degradation when confronted with these transformed variants. The FLA and PSO struggled considerably, as evidenced by substantial deviations in F31 (1.1 × 10^3) and F40 (1.8 × 10^2), respectively. DE and ALO showed intermediate performance, but lacked consistency, particularly in challenging cases such as F44, where DE reached 2.2 × 10^2. GSKA emerged as a notably resilient algorithm, maintaining strong performance across several transformed functions, especially F30 and F39 (distances of 3.2 × 10^−65 and 1.8 × 10^−21). These results underscore the significant impact of rotational and shift transformations on optimization algorithm performance, highlighting the importance of developing algorithms with invariance to such transformations. This pattern suggests that the introduction of rotation and shift transformations substantially increases the complexity of the optimization landscape, challenging algorithms that rely heavily on coordinate-wise optimization strategies or that may carry the design biases reported previously [18].
Similarly, Figure 4 shows the convergence curves of the algorithms for shifted and rotated problems. ESO and LSHADE consistently demonstrated rapid convergence to the lowest function fitness, indicating efficiency in finding optimal solutions. Conversely, the algorithms DE, MFO, and PSO displayed significant performance variability across different functions, showing generally slower or poorer convergence.

Internal Parameter Behavior

Analysis across benchmark functions (F5, F9, F16, F26, F31, and F42) reveals distinct trends in the behaviors of the three main internal parameters of the ESO algorithm, field resistance (R), field conductivity (k_e), and field intensity (I), as shown in Figure 5. The behavior of R changes with the evolution of the problem’s landscape, describing the diversity of solutions at each iteration. Conversely, k_e is complementary and reactive to R, controlling the transition from exploration to exploitation. As R decreases, k_e adjusts to favor exploitation, focusing on the promising regions of the search space. The field intensity generally follows a decreasing pattern over iterations, but can also increase when sudden changes in R and k_e occur. This keeps the search agents responsive to the landscape, allowing for both steady convergence and rapid adjustment to new promising areas. The progression in the global best solution is a direct result of the interplay between R, k_e, and I. This adaptability enables the ESO algorithm to effectively navigate complex search spaces, achieving consistent improvements toward the global optimum.

3.2. CEC 2022 SOBC Benchmark Problems

The CEC SOBC 2022 suite, detailed in Table 6, is designed to rigorously test optimization algorithms in demanding scenarios using transformations such as rotations and shifts in the optimum [27]. These elements are critical for examining an algorithm’s robustness and adaptability in nonlinear high-dimensional search spaces. Additionally, the suite features hybrid and composite functions that amalgamate various problem types into a single test, increasing computational demands and complexity. This thorough evaluation not only tests the algorithm’s ability to manage intricate optimization tasks, but also provides deep insights into its applicability and limitations under challenging conditions.
In Table 7, the results for optimizing the CEC SOBC 2022 competition problems reveal distinct performance patterns across different dimensionalities. For D = 10, ESO and LSHADE demonstrated superior performance, with the ESO achieving optimal or near-optimal results in several functions, particularly F46, F48, F54, and F56 (distances of 0.0 × 10^0).
For D = 20, while most algorithms experienced performance degradation, ESO and LSHADE maintained relative stability. ESO particularly excelled in F46, F48, and F56 (distance of 0.0 × 10^0), although it showed increased sensitivity in F51, where the distance rose from 2.5 × 10^4 at D = 10 to 6.7 × 10^4 at D = 20. LSHADE demonstrated robust scalability, maintaining consistent performance across dimensions, notably for F46, F48, and F56 (distances of 0.0 × 10^0).
The remaining algorithms showed significant performance deterioration with increased dimensionality. This was particularly evident in the results of GSKA for F51, where the distance increased dramatically from 4.5 × 10^4 at D = 10 to 2.7 × 10^7 at D = 20. Similar degradation patterns were observed in FLA and PSO, indicating their limited ability to handle higher-dimensional search spaces effectively. Notably, specific functions such as F49 and F55 proved to be challenging across all algorithms and dimensions, suggesting inherent complexity in these problem landscapes that warrants further investigation for algorithm improvement.

3.3. Real-World Bound Constrained Optimization Problems

While artificial benchmark functions are commonly employed to assess the performance and accuracy of metaheuristic algorithms, such synthetic problems often embed unrealistic properties that are not representative of real-world scenarios. As a result, evaluations can overestimate or underestimate the true potential of an algorithm when it is applied outside controlled test conditions. To address this gap and provide a more precise assessment, we subjected the ESO algorithm and its comparative metaheuristics to four diverse and complex problems from real-world engineering and industrial domains.

3.3.1. Lennard–Jones Potential Problem

The Lennard–Jones potential problem is a fundamental challenge in molecular physics, which focuses on determining the optimal spatial configuration of atoms that minimizes the total potential energy of the system. For a 15-molecule system, this represents a 45-dimensional optimization problem (3 spatial coordinates per molecule), with a theoretical minimum potential that corresponds to the most stable molecular configuration.
The objective function sums the pairwise interaction potentials between all molecules as $V(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]$, where $r$ is the distance between molecules, $\varepsilon$ is the depth of the potential well, and $\sigma$ is the distance at which the potential becomes zero. In reduced units, the optimization aims to minimize the total potential energy $V_N = \sum_{i=1}^{N}\sum_{j<i}\left(r_{ij}^{-12} - 2\,r_{ij}^{-6}\right)$, subject to geometric constraints. The problem's complexity stems from its highly multimodal nature, featuring numerous local minima corresponding to different stable configurations. The global minimum represents the most energetically favorable molecular arrangement, making this a valuable benchmark for testing the ability of optimization algorithms to navigate complex energy landscapes. The complete implementation of the problem can be found in [69].
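A minimal sketch of the reduced-unit objective described above: each pair contributes $r_{ij}^{-12} - 2r_{ij}^{-6}$, whose minimum is −1 at $r_{ij} = 1$. The function name and the flat-vector encoding are illustrative conventions, not part of the original benchmark code.

```python
import itertools
import numpy as np

def lj_total_potential(coords):
    """Reduced-unit Lennard-Jones energy for a cluster.

    coords: flat vector of length 3*N (x, y, z per molecule);
    for the 15-molecule instance this is the 45-dimensional decision vector.
    """
    pts = np.asarray(coords, dtype=float).reshape(-1, 3)
    energy = 0.0
    # Sum the pair potential over each unordered pair exactly once
    for i, j in itertools.combinations(range(len(pts)), 2):
        r = np.linalg.norm(pts[i] - pts[j])
        energy += r**-12 - 2.0 * r**-6
    return energy
```

For example, two molecules at unit separation sit exactly at the pairwise minimum and return an energy of −1.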
In Table 8, the results for the Lennard–Jones potential optimization problem with 15 particles reveal distinct performance patterns among the tested algorithms. ESO, HS, MFO, PSO, and the QSA achieved perfect convergence, reaching the exact known optimal value of −9.103852. This consistent achievement across multiple algorithms validates the robustness of these methods for this molecular configuration problem.
FPA, FLA, and LSHADE demonstrated near-optimal performance, achieving values of −9.091091, −9.094731, and −9.098313, respectively, with deviations of less than 0.15% from the global optimum. However, other algorithms struggled significantly, with ALO showing the poorest performance (−5.837872) followed by GSKA (−6.671860) and DE (−8.219488), suggesting that these methods may require modifications to better handle this specific type of molecular optimization problem.

3.3.2. Tersoff Potential Problem—Si(B) Model

The Tersoff Potential Problem for silicon, specifically the Si(B) parameterization, focuses on determining the minimum energy configuration of silicon atoms in a molecular cluster. This problem models covalent bonding interactions with a sophisticated three-body potential function that accounts for bond order and angular dependencies.
The optimization objective represents the total potential energy $f(X_1, \dots, X_N) = E_1(X_1, \dots, X_N) + \dots + E_N(X_1, \dots, X_N)$, where each atomic contribution $E_i$ includes the following:
  • Two-body repulsive and attractive terms.
  • Bond-order term incorporating angular effects.
  • Cutoff function for limiting the interaction range.
The total potential is the sum of the individual atomic contributions $E_i = \frac{1}{2}\sum_{j \neq i} f_c(r_{ij})\left[V_R(r_{ij}) - B_{ij}\,V_A(r_{ij})\right]$, where $r_{ij}$ is the distance between atoms $i$ and $j$, $V_R$ is the repulsive pair term, $V_A$ is the attractive pair term, $f_c(r_{ij})$ is a switching (cutoff) function, and $B_{ij}$ is a many-body bond-order term that depends on the positions of atoms $i$ and $j$. The complete implementation of the problem can be found in [69].
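The structure of the energy sum above can be sketched in heavily simplified form. This is not the Si(B) parameterization: the pair terms use generic exponential forms, all parameter values are placeholder magnitudes, and the many-body bond-order term $B_{ij}$ is frozen to a constant, so the sketch only illustrates the cutoff-times-(repulsive minus attractive) skeleton.

```python
import math

# Placeholder parameters -- illustrative magnitudes only, NOT Tersoff Si(B) values
A, B_COEF, LAM1, LAM2 = 1.8e3, 4.7e2, 2.4, 1.7
R_CUT, D_CUT = 3.0, 0.2   # cutoff centre and half-width
B_IJ = 1.0                # bond-order term frozen to a constant for this sketch

def fc(r):
    """Smooth cutoff: 1 inside, 0 outside, cosine ramp in between."""
    if r < R_CUT - D_CUT:
        return 1.0
    if r > R_CUT + D_CUT:
        return 0.0
    return 0.5 - 0.5 * math.sin(0.5 * math.pi * (r - R_CUT) / D_CUT)

def pair_energy(r):
    v_rep = A * math.exp(-LAM1 * r)       # two-body repulsive term V_R
    v_att = B_COEF * math.exp(-LAM2 * r)  # two-body attractive term V_A
    return fc(r) * (v_rep - B_IJ * v_att)

def total_energy(distances):
    """distances: pairwise separations r_ij, each unordered pair listed once.

    Listing each pair once absorbs the 1/2 double-counting factor in E_i.
    """
    return sum(pair_energy(r) for r in distances)
```

The real objective additionally makes $B_{ij}$ depend on bond angles around atom $i$, which is what turns the landscape into the difficult multimodal problem reported in Table 9.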
In Table 9, the results for the Tersoff potential Si(B) model optimization reveal that MFO achieved the best performance, with a minimum value of −44.074326, followed closely by PSO (−43.129095) and the FPA (−42.888543). LSHADE, ESO, and the QSA also showed competitive performance, reaching values between −42.698791 and −42.192469, demonstrating strong capabilities in handling this complex molecular potential optimization, whereas FLA and HS produced slightly higher values of −41.701543 and −40.802206, respectively. In contrast, some algorithms struggled significantly with this problem, particularly DE and GSKA, which only reached values of −30.258289 and −32.093009, respectively, indicating substantial difficulty in navigating this specific potential energy landscape. Overall, nearly 14 energy units separate the best and worst results.

3.3.3. Spread Spectrum Radar Polyphase Code Design Problem

The Spread Spectrum Radar Polyphase Code Design Problem represents a critical optimization challenge in radar signal processing. The fundamental objective is to design optimal polyphase codes that enable effective radar pulse compression while minimizing undesirable side lobes in the compressed signal output. The problem is structured as a global minimization task of a maximum function. The objective function seeks to minimize the maximum side-lobe level across all phase combinations as $f(x) = \max\left(\varphi_1(x), \dots, \varphi_m(x)\right)$, where the even-indexed terms take the form $\varphi_{2i}(x) = 0.5 + \sum_{j=i+1}^{n} \cos\left(\sum_{k=|2i-j|+1}^{j} x_k\right)$. Here, $x$ represents the vector of phase variables $(x_1, x_2, \dots, x_n)$, $k$ is the phase accumulation index, and $\varphi_i$ is the normalized autocorrelation. The optimization space consists of 20 phase variables, each constrained within the interval [0, 2π]. This creates a continuous search space with complex periodic behaviors. The objective function exhibits multiple local minima, making it particularly challenging for optimization algorithms to locate the global optimum. The complete implementation of the problem can be found in [69].
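A sketch of this objective, assuming the widely used CEC 2011 real-world-problem construction with $m = 2n - 1$ autocorrelation terms plus their negatives ($\varphi_{m+i} = -\varphi_i$); the function name and index handling are our own conventions.

```python
import math

def radar_polyphase_cost(x):
    """Min-max polyphase code objective (CEC 2011-style construction assumed)."""
    n = len(x)
    phi = []
    # Odd-indexed terms phi_{2i-1}, i = 1..n
    for i in range(1, n + 1):
        s = sum(math.cos(sum(x[k - 1] for k in range(abs(2 * i - j - 1) + 1, j + 1)))
                for j in range(i, n + 1))
        phi.append(s)
    # Even-indexed terms phi_{2i}, i = 1..n-1 (these carry the 0.5 offset)
    for i in range(1, n):
        s = 0.5 + sum(math.cos(sum(x[k - 1] for k in range(abs(2 * i - j) + 1, j + 1)))
                      for j in range(i + 1, n + 1))
        phi.append(s)
    # Mirror terms: phi_{m+i} = -phi_i
    phi += [-p for p in list(phi)]
    return max(phi)
```

At the all-zero phase vector every cosine equals 1, so the largest term is $\varphi_1 = n$; for the 20-variable instance the cost is 20.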
In Table 10, the results for the Spread Spectrum Radar Polyphase Code Design Problem demonstrate that the ESO achieves the best performance, with a minimum cost value of 1.980246, followed closely by LSHADE (2.433395) and ALO (2.448449). These results indicate superior capabilities in optimizing the phase sequences for radar signal design.
The FLA and FPA showed similar performance levels, reaching cost values of 2.609190 and 2.623724, respectively, demonstrating moderate effectiveness in this specialized optimization task. However, the GSKA encountered significant challenges, producing the highest cost value of 3.562859, followed by DE with 3.398476 and HS with 3.279291, suggesting that these algorithms struggle with the specific constraints and complexity of radar code design.

3.3.4. Circular Antenna Array Optimization Problem

The circular antenna array optimization problem addresses the design of antenna arrays arranged in a circular configuration to achieve optimal radiation patterns. This problem has significant applications in radar systems, mobile communications, and satellite technology. The array factor ($AF$) for a circular antenna array with $N$ elements is expressed as $AF(\varphi) = \sum_{n=1}^{N} I_n \exp\left(j\left[kr\left(\cos(\varphi - \varphi_n) - \cos(\varphi_0 - \varphi_n)\right) + \beta_n\right]\right)$, where $\varphi$ represents the azimuth angle in the x–y plane, $I_n$ denotes the amplitude excitation of the n-th element, $k$ is the wavenumber, $r$ defines the radius of the circle, $\varphi_0$ indicates the direction of maximum radiation, $\beta_n$ is the phase excitation of the n-th element, and $N$ is the total number of array elements. The optimization aims to minimize a composite objective $f = \frac{\left|AF(\varphi_{\mathrm{sll}}, I, \beta)\right|}{\left|AF(\varphi_{\max}, I, \beta)\right|} + \frac{1}{DIR(\varphi_0, I, \beta)} + \left|\varphi_0 - \varphi_{\mathrm{des}}\right| + \sum_{k=1}^{K} \left|AF(\varphi_k, I, \beta)\right|$, where $\varphi_{\mathrm{sll}}$ represents the angle at which the maximum side lobe occurs, $\varphi_{\max}$ is the angle of the pattern maximum, $DIR$ denotes the directivity of the array pattern, $\varphi_{\mathrm{des}}$ is the desired maximum radiation direction, and $\varphi_k$ specifies the k-th null control direction. The optimization is constrained to current amplitudes ($I_n$) in [0.2, 1.0] and phase excitations ($\beta_n$) in [−180°, 180°]. The complete implementation of the problem can be found in [69].
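The array factor above can be sketched directly. Uniformly spaced element angles $\varphi_n$ on the circle are an assumption for this demo, as are the function name and argument order.

```python
import numpy as np

def array_factor(phi, I, beta, kr, phi0):
    """Circular array factor AF(phi).

    phi: observation azimuth (rad); I, beta: per-element amplitude and phase
    excitation arrays; kr: wavenumber times circle radius; phi0: steering
    direction (rad). Element angles phi_n are assumed uniformly spaced.
    """
    N = len(I)
    phi_n = 2.0 * np.pi * np.arange(N) / N
    # Phase of each element: kr * (cos(phi - phi_n) - cos(phi0 - phi_n)) + beta_n
    phase = kr * (np.cos(phi - phi_n) - np.cos(phi0 - phi_n)) + beta
    return np.sum(I * np.exp(1j * phase))
```

At $\varphi = \varphi_0$ with zero phase excitation the element phases cancel, so $|AF|$ equals $\sum_n I_n$ (the main-beam maximum); side-lobe and null terms in the composite objective evaluate this same function at other angles.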
In Table 11, the results for the circular antenna array optimization problem demonstrate that the ESO achieved significantly superior performance, with a cost value of −11.479936, substantially outperforming all the other algorithms. HS emerged as the second-best performer, with a cost of −7.211993, followed by LSHADE at −6.009528.
DE and the GSKA achieved moderate performance levels, with costs of −5.199967 and −4.743778, respectively, whereas the remaining algorithms clustered around similar performance levels between −4.3 and −4.4. The FPA and PSO showed the least effective performance, both reaching a cost of −3.974214, indicating difficulty in optimizing the antenna array parameters.

4. Discussion

4.1. Statistical Performance Evaluation

Two statistical tests were employed to evaluate the optimization results. A Bayesian signed-rank test was used to identify differences in mean fitness between populations, providing a more robust and comprehensive analysis than traditional nonparametric tests [70]. In addition, a critical difference (CD) analysis via Friedman's test with the Nemenyi post hoc test was conducted to compute the individual rankings and the critical differences [71]. The statistical analysis was conducted on 11 populations comprising 40,700 paired samples divided into 3 groups of problems (Figure 6A–C) and the total set of problems (Figure 6D). The familywise significance level for the tests was set to α = 0.05. The null hypothesis that the populations were normally distributed was rejected (p < 0.001) for every population and set of problems, so the median (MD) and median absolute deviation (MAD) were reported for each population. The populations were assessed pairwise to determine whether they were smaller, equal, or larger. A decision was made in favor of one of these outcomes if the estimated posterior probability was at least α = 0.05. The effect size was used to define the region of practical equivalence (ROPE), dynamically calculated around the median as $0.100\,\Gamma$, where $\Gamma$ is the effect size (Akinshin's gamma) [71].
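The Friedman ranking and Nemenyi critical difference can be sketched as follows. The q-values are the standard α = 0.05 Nemenyi critical values (studentized range divided by √2, as tabulated in the CD-analysis literature); only a few algorithm counts k are listed, and the function name is our own.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Nemenyi critical values q_{0.05} for k compared algorithms (partial table)
Q_05 = {2: 1.960, 3: 2.343, 4: 2.569, 5: 2.728, 6: 2.850, 7: 2.949,
        8: 3.031, 9: 3.102, 10: 3.164, 11: 3.219}

def friedman_cd(scores):
    """scores: (n_problems, k_algorithms) matrix, lower is better."""
    n, k = scores.shape
    ranks = np.apply_along_axis(rankdata, 1, scores)  # per-problem ranks, ties averaged
    avg_ranks = ranks.mean(axis=0)
    stat, p_value = friedmanchisquare(*scores.T)      # omnibus Friedman test
    cd = Q_05[k] * np.sqrt(k * (k + 1) / (6.0 * n))   # Nemenyi critical difference
    return avg_ranks, p_value, cd
```

Two algorithms are declared significantly different when their average ranks differ by more than the CD, which is how the groupings in Figure 6A1–D1 are read.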
Based on the results presented, the following conclusions can be drawn about the performance of the ESO algorithm compared to that of the other algorithms:
(a)
For the primitive problems (Figure 6A,A1), ESO and LSHADE exhibited the best performances, with minimal differences between them. The pairwise comparison in Figure 6A shows that the ESO has the superior performance.
(b)
For the shifted and rotated problems (Figure 6B,B1), the ESO and LSHADE algorithms again achieved the best performances, with a more noticeable difference between their results and those of the other algorithms. Critical differences were found between the performances of the PSO, FLA, DE, and MFO algorithms and those of the other methods. The pairwise comparison in Figure 6B also reveals that the ESO has superior performance.
(c)
In the CEC SOBC 2022 problems (Figure 6C,C1), LSHADE led, followed by ESO. The pairwise comparison revealed that LSHADE had slightly better performance.
(d)
Among the three sets of problems, the ESO (4.06) ranks first, followed by LSHADE (4.33) and GSKA (5.75) (Figure 6D,D1).
These outcomes emphasize the ability of the ESO to adapt and excel across a broad range of optimization challenges, demonstrating its potential as a versatile and effective tool for solving complex problems under various conditions.

4.2. Computational Overhead

The computational cost of the ESO algorithm can be analyzed via 'Big O' notation, a well-established tool in computer science, to provide a theoretical estimate of its complexity. This analysis helps in understanding how the algorithm scales with respect to various parameters under typical usage scenarios. The asymptotic complexity of the ESO algorithm is heavily influenced by the number of iterations, the number of repetitions, and the complexity of the objective function. The dimensionality of the problem also plays a significant role, especially in how vector and matrix calculations are handled within the algorithm. Thus, the computational load of the algorithm presented herein can be theoretically estimated as follows:
(a)
Lightning initialization: Initializing each lightning event involves at least one evaluation of the objective function and operations over a vector of size dim. The complexity of this part is O(lightnings × T(f)), where T(f) represents the complexity of evaluating the objective function.
(b)
Main loop:
  • Identifying ionized areas: Sorting lightning events based on their fitness has a complexity of O(lightnings × log(lightnings)).
  • Field intensity adjustment and resistance calculation: These calculations have a complexity of O(lightnings × dim) since they involve computing the standard deviation and the peak-to-peak difference across the lightning positions.
  • Lightning propagation: Each lightning event may adjust its position based on the ionized areas, potentially requiring another evaluation of the objective function, O(lightnings × T(f)).
(c)
Total function evaluations: Each lightning strike evaluates the objective function at least once per iteration, O(max_iter × lightnings × T(f)).
Combining all these components, the overall computational complexity of the ESO algorithm can be estimated as in Equation (15):
O(max_iter × lightnings × (T(f) + log(lightnings) + dim))
Additionally, to evaluate the computational complexity of the algorithm in terms of execution time, we applied the CEC SOBC 2022 evaluation procedure [43]. The test program in Equation (16) executes a loop with 200,000 iterations, where, in each step, the value of the variable x is modified through a series of mathematical operations. T0 represents the time taken to run the test program in Equation (16). T1 represents the time required to execute 200,000 independent evaluations of the benchmark function F51 in D dimensions. T̂2, on the other hand, refers to the average time taken to execute the same number of evaluations of F51 within the framework of the algorithms over five runs, also in D dimensions. The analysis of the computational complexity across the optimization algorithms shown in Table 12 reveals significant differences in performance between algorithms and dimensions. With increasing dimensions, all algorithms show heightened execution times, underscoring the added complexity of higher dimensions. Notably, the ESO demonstrates lower overheads, suggesting greater efficiency in handling higher-dimensional tasks. All the experiments were carried out in the following computational environment:
  • OS: Ubuntu Linux 24.04 LTS;
  • CPU: Intel® Xeon® Gold 6148 CPU@2.40 GHz;
  • RAM: 64 GB;
  • Language: Python 3.12.3.
x = 0.55;  x = x + x;  x = x/2;  x = x · x;  x = √x;  x = ln x;  x = eˣ;  x = x/(x + 2)
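The T0 baseline can be sketched as below. Following the official CEC timing code, x is re-seeded with 0.55 + i on each pass (this also avoids numerical underflow in Python); T1 and T̂2 follow the same timing pattern with F51 substituted for the arithmetic kernel.

```python
import math
import time

def timing_t0(n_iter=200_000):
    """Time the Equation (16) arithmetic kernel with a monotonic clock."""
    start = time.perf_counter()
    for i in range(1, n_iter + 1):
        x = 0.55 + i            # re-seeded each pass, as in the CEC timing code
        x = x + x
        x = x / 2.0
        x = x * x
        x = math.sqrt(x)
        x = math.log(x)
        x = math.exp(x)
        x = x / (x + 2.0)
    return time.perf_counter() - start
```

`time.perf_counter()` is used rather than wall-clock time because it is monotonic and has the highest available resolution for interval measurement.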

4.3. Comparison with Gradient-Based Methods

While gradient-based optimization methods remain fundamental in mathematical programming, particularly for problems with well-defined structures, the ESO algorithm offers distinct characteristics that make it suitable for specific optimization scenarios. A careful examination of both approaches reveals their complementary nature and specific domains of applicability.
Gradient-based methods excel in problems with smooth, differentiable objective functions, offering rapid convergence and mathematical guarantees of optimality when applied to convex optimization problems. These methods are particularly efficient in low- to medium-dimensional problems where gradient information is analytically available. Furthermore, for problems with clear mathematical formulations, such as linear programming or convex quadratic programming, gradient-based methods provide robust theoretical convergence properties.
In contrast, the ESO demonstrates advantages in several challenging scenarios where gradient-based methods may struggle. First, the field resistance mechanism of the ESO enables the effective navigation of multimodal landscapes, allowing it to escape from local optima through its dynamic exploration–exploitation balance. Second, since the ESO operates solely on function evaluations, it can handle nondifferentiable objectives and discontinuous functions without modification. Third, the algorithm’s field intensity and conductivity parameters provide adaptive search capabilities that maintain optimization momentum in regions where gradient information becomes unreliable or vanishes, such as plateau regions or near-flat landscapes.
This advantage becomes particularly evident in high-dimensional problems where gradient computation through numerical differentiation becomes computationally expensive or impractical. While gradient-based methods need to evaluate the objective function multiple times to approximate each partial derivative, the ESO maintains consistent computational complexity regardless of dimensionality, making it potentially more efficient for high-dimensional optimization tasks.
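The evaluation-count argument can be made concrete with a small sketch (the counter, function names, and the sphere test function are illustrative):

```python
def central_diff_gradient(f, x, h=1e-6):
    """Approximate the gradient of f at x by central differences:
    2*d objective evaluations for a d-dimensional point."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2.0 * h))
    return grad

calls = {"n": 0}

def sphere(v):
    calls["n"] += 1            # count objective evaluations
    return sum(t * t for t in v)

g = central_diff_gradient(sphere, [1.0] * 50)   # d = 50 -> 100 evaluations
```

After this single gradient estimate, `calls["n"]` is 100, and the cost grows linearly with d; a population-based method such as the ESO instead spends one evaluation per agent per iteration, independent of d.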
These considerations suggest that the choice between the ESO and gradient-based methods should be guided by problem characteristics rather than treating either approach as universally superior. The ESO is particularly suited for complex optimization problems with limited mathematical structure, multimodal landscapes, high dimensionality, or where gradient information is unreliable or computationally expensive to obtain. Conversely, gradient-based methods remain the preferred choice for well-structured low-dimensional convex optimization problems where analytical derivatives are available.

5. Conclusions

In this research, the electrical storm optimization (ESO) algorithm is presented as a novel population-based metaheuristic inspired by the dynamic behavior observed in electrical storms. The algorithm operates primarily through three parameters that are dynamically adjusted according to the optimization landscape: field resistance, field conductivity, and field intensity. These parameters are crucial because they allow the algorithm to transition between exploration and exploitation phases effectively. Field resistance measures the dispersion of solutions within the search space, providing insights that guide the algorithm’s adjustments based on the problem’s topology. Field conductivity dynamically modulates the algorithm’s adaptability to the optimization landscape, enabling it to respond sensitively to changes in field resistance and facilitating the fine-tuning of the search process. Field intensity controls the phases of the algorithmic process, facilitating strategic shifts between exploration and exploitation.
The performance of the ESO algorithm was thoroughly evaluated across various sets of problems. It was particularly effective in addressing 25 primitive problems characterized by complex landscapes, where it demonstrated high precision and stability. The algorithm also displayed strong capabilities in resolving 20 shifted and rotated problems. To challenge it further, the algorithm was evaluated against 12 advanced problems from the CEC 2022 suite. Statistical analysis, including the Bayesian signed-rank test and critical difference (CD) analysis via Friedman’s test with the Nemenyi post hoc test, indicated that the performance of the ESO was significantly better than that of several established methods, ranking first overall, closely followed by LSHADE. However, the precision of the ESO diminished slightly in monotonic optimization landscapes, indicating a potential area for further refinement.
In practical applications, the ESO demonstrated effectiveness in solving four complex engineering problems involving molecular processes and antenna design problems, particularly those characterized by nonconvex search spaces and multiple local optima. While classical optimization methods such as gradient-based approaches remain superior for problems with clear mathematical formulations and convex landscapes, the performance of the ESO in these four case studies suggests its utility for specific engineering scenarios where traditional methods may face limitations, such as problems with nondifferentiable objectives or deceptive local optima. These real-world implementations highlight the potential of the ESO as a complementary tool in the optimization toolbox, which is particularly suitable for complex problems where the objective function landscape is not well understood or where classical optimization methods may struggle to find global solutions.

Limitations and Future Works

The current implementation of the ESO presents several limitations that should be acknowledged. The basic constraint-handling mechanism using penalty functions may not be optimal for problems with complex feasibility landscapes. The algorithm’s formulation assumes continuous search spaces, which limits its effectiveness in discrete optimization problems. The experimental validation, while comprehensive for static optimization, does not include dynamic optimization problems in which the optimum changes over time, leaving this capability unexplored. The algorithm’s applicability to multiobjective problems currently relies on aggregating multiple objectives into a single function, which may limit the accuracy of the Pareto front approximation. Additionally, the dynamic parameter adaptation strategy might require more iterations to achieve convergence in extremely deceptive or monotonic landscapes.
Future research directions could address these limitations through several key developments. The advancement of constraint-handling mechanisms based on adaptive penalty functions or feasibility rules would enhance the algorithm’s ability to handle complex constraints. The algorithm could be extended to discrete optimization through appropriate modifications of the field parameters and branching mechanisms, while adaptation for dynamic optimization problems could be achieved by incorporating time-varying field properties. The framework could be reformulated for true multiobjective optimization without requiring objective aggregation. Additionally, enhancing the parameter adaptation mechanism would enable faster convergence in deceptive landscapes. The integration of local search strategies would improve performance in monotonic search spaces, and the development of hybrid versions would combine the ESO’s global exploration capabilities with problem-specific local search methods. These developments would significantly expand the algorithm’s applicability while addressing its current limitations.

Author Contributions

Conceptualization, M.S.C. and H.S.L.; methodology, M.S.C.; software, M.S.C.; validation, M.S.C. and H.S.L.; investigation, M.S.C. and H.S.L.; data curation, M.S.C.; writing—original draft preparation, M.S.C.; writing—review and editing, H.S.L.; visualization, M.S.C. and H.S.L.; supervision, H.S.L.; funding acquisition, H.S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by JSPS Kakenhi Grant No. 24K00991.

Data Availability Statement

All the data used in this study are available from the authors upon request.

Acknowledgments

The first author is supported by the program “SDS’s Global Leaders Scholarship” from the Japan International Cooperation Agency (JICA) with grant number D-2203278.

Conflicts of Interest

The authors declare that they have no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

Table A1. Results of the algorithms with lower performance for the unimodal problems.

| Function | Metric | ABC | ACO | ASO | BBO | ABBO | EVO | GA | HBA | SA | TS |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Mean | −2.0 × 10^2 | −2.0 × 10^2 | −2.0 × 10^2 | −2.0 × 10^2 | −2.0 × 10^2 | −2.0 × 10^2 | −2.0 × 10^2 | −2.0 × 10^2 | −2.0 × 10^2 | −1.8 × 10^2 |
| | std | 0.0 × 10^0 | 0.0 × 10^0 | 7.6 × 10^−1 | 0.0 × 10^0 | 2.2 × 10^−1 | 1.7 × 10^0 | 8.8 × 10^−2 | 0.0 × 10^0 | 1.3 × 10^−2 | 6.1 × 10^0 |
| | Dist. | 0.0 × 10^0 | 0.0 × 10^0 | 9.9 × 10^−1 | 0.0 × 10^0 | 3.3 × 10^−1 | 9.0 × 10^−1 | 9.5 × 10^−2 | 0.0 × 10^0 | 2.3 × 10^−2 | 1.7 × 10^1 |
| F2 | Mean | 0.0 × 10^0 | 2.7 × 10^−5 | 2.3 × 10^−2 | 0.0 × 10^0 | 3.8 × 10^−2 | 2.1 × 10^−1 | 4.5 × 10^−3 | 0.0 × 10^0 | 4.0 × 10^−4 | 7.3 × 10^0 |
| | std | 0.0 × 10^0 | 1.2 × 10^−4 | 2.0 × 10^−2 | 0.0 × 10^0 | 4.2 × 10^−2 | 4.5 × 10^−1 | 6.8 × 10^−3 | 0.0 × 10^0 | 3.5 × 10^−4 | 6.4 × 10^0 |
| | Dist. | 0.0 × 10^0 | 2.7 × 10^−5 | 2.3 × 10^−2 | 0.0 × 10^0 | 3.8 × 10^−2 | 2.1 × 10^−1 | 4.5 × 10^−3 | 0.0 × 10^0 | 4.0 × 10^−4 | 7.3 × 10^0 |
| F3 | Mean | 2.0 × 10^10 | 5.7 × 10^10 | 1.7 × 10^10 | 1.8 × 10^−118 | 1.1 × 10^4 | 3.3 × 10^8 | 8.9 × 10^8 | 0.0 × 10^0 | 5.2 × 10^10 | 7.1 × 10^10 |
| | std | 2.5 × 10^9 | 3.8 × 10^9 | 9.9 × 10^9 | 2.2 × 10^−118 | 5.0 × 10^3 | 1.5 × 10^8 | 1.4 × 10^8 | 0.0 × 10^0 | 5.6 × 10^9 | 6.7 × 10^9 |
| | Dist. | 2.0 × 10^10 | 5.7 × 10^10 | 1.7 × 10^10 | 1.8 × 10^−118 | 1.1 × 10^4 | 3.3 × 10^8 | 8.9 × 10^8 | 0.0 × 10^0 | 5.2 × 10^10 | 7.1 × 10^10 |
| F4 | Mean | 1.7 × 10^0 | 1.7 × 10^0 | 3.1 × 10^4 | 1.7 × 10^0 | 1.3 × 10^4 | 3.7 × 10^6 | 1.2 × 10^2 | 1.7 × 10^0 | 2.7 × 10^8 | 5.5 × 10^9 |
| | std | 0.0 × 10^0 | 1.1 × 10^−2 | 8.9 × 10^4 | 3.1 × 10^−16 | 4.1 × 10^4 | 8.6 × 10^6 | 2.8 × 10^2 | 7.6 × 10^−16 | 7.7 × 10^8 | 9.4 × 10^9 |
| | Dist. | 8.0 × 10^−5 | 2.6 × 10^−3 | 3.1 × 10^4 | 8.0 × 10^−5 | 1.3 × 10^4 | 3.7 × 10^6 | 1.2 × 10^2 | 8.0 × 10^−5 | 2.7 × 10^8 | 5.5 × 10^9 |
| F5 | Mean | 2.7 × 10^−6 | 5.0 × 10^−2 | 5.9 × 10^−4 | 0.0 × 10^0 | 8.5 × 10^−2 | 2.1 × 10^−1 | 5.7 × 10^−2 | 0.0 × 10^0 | 2.2 × 10^−3 | 7.0 × 10^−1 |
| | std | 3.6 × 10^−6 | 3.1 × 10^−2 | 7.9 × 10^−4 | 0.0 × 10^0 | 1.1 × 10^−1 | 2.1 × 10^−1 | 7.9 × 10^−2 | 0.0 × 10^0 | 2.6 × 10^−3 | 4.7 × 10^−1 |
| | Dist. | 2.7 × 10^−6 | 5.0 × 10^−2 | 5.9 × 10^−4 | 0.0 × 10^0 | 8.5 × 10^−2 | 2.1 × 10^−1 | 5.7 × 10^−2 | 0.0 × 10^0 | 2.2 × 10^−3 | 7.0 × 10^−1 |
| F6 | Mean | 1.0 × 10^3 | 1.1 × 10^3 | 1.1 × 10^3 | 9.8 × 10^1 | 1.9 × 10^2 | 1.1 × 10^2 | 4.7 × 10^2 | 9.7 × 10^1 | 9.9 × 10^2 | 1.6 × 10^3 |
| | std | 4.2 × 10^1 | 2.0 × 10^2 | 8.3 × 10^1 | 1.5 × 10^−2 | 4.5 × 10^1 | 1.1 × 10^1 | 2.2 × 10^1 | 1.9 × 10^0 | 6.8 × 10^1 | 9.0 × 10^1 |
| | Dist. | 1.0 × 10^3 | 1.1 × 10^3 | 1.1 × 10^3 | 9.8 × 10^1 | 1.9 × 10^2 | 1.1 × 10^2 | 4.7 × 10^2 | 9.7 × 10^1 | 9.9 × 10^2 | 1.6 × 10^3 |
| F7 | Mean | −2.2 × 10^0 | −2.1 × 10^0 | −2.1 × 10^0 | −2.1 × 10^0 | −2.1 × 10^0 | −2.1 × 10^0 | −2.1 × 10^0 | −2.2 × 10^0 | −1.8 × 10^0 | −1.5 × 10^0 |
| | std | 2.7 × 10^−16 | 1.5 × 10^−2 | 4.8 × 10^−2 | 4.6 × 10^−2 | 1.2 × 10^−2 | 9.0 × 10^−2 | 5.0 × 10^−3 | 4.6 × 10^−16 | 2.3 × 10^−1 | 2.5 × 10^−1 |
| | Dist. | 4.7 × 10^−2 | 5.0 × 10^−2 | 1.2 × 10^−1 | 6.1 × 10^−2 | 7.2 × 10^−2 | 1.2 × 10^−1 | 5.5 × 10^−2 | 4.7 × 10^−2 | 4.1 × 10^−1 | 7.0 × 10^−1 |
| F8 | Mean | 3.1 × 10^−15 | 1.6 × 10^−1 | 3.6 × 10^−2 | 2.5 × 10^−31 | 2.1 × 10^−1 | 9.8 × 10^0 | 1.2 × 10^−1 | 1.9 × 10^−30 | 7.4 × 10^−2 | 3.3 × 10^1 |
| | std | 1.5 × 10^−14 | 7.9 × 10^−1 | 3.5 × 10^−2 | 3.7 × 10^−31 | 1.9 × 10^−1 | 1.2 × 10^1 | 1.2 × 10^−1 | 4.4 × 10^−30 | 1.6 × 10^−1 | 3.4 × 10^1 |
| | Dist. | 3.1 × 10^−15 | 1.6 × 10^−1 | 3.6 × 10^−2 | 2.5 × 10^−31 | 2.1 × 10^−1 | 9.8 × 10^0 | 1.2 × 10^−1 | 1.9 × 10^−30 | 7.4 × 10^−2 | 3.3 × 10^1 |
| F9 | Mean | 6.2 × 10^−13 | 9.2 × 10^−9 | 9.0 × 10^3 | 0.0 × 10^0 | 4.2 × 10^2 | 1.2 × 10^5 | 1.7 × 10^0 | 0.0 × 10^0 | 4.8 × 10^8 | 2.1 × 10^9 |
| | std | 3.0 × 10^−12 | 3.8 × 10^−8 | 1.6 × 10^4 | 0.0 × 10^0 | 1.2 × 10^3 | 2.8 × 10^5 | 4.4 × 10^0 | 0.0 × 10^0 | 1.2 × 10^9 | 3.7 × 10^9 |
| | Dist. | 6.2 × 10^−13 | 9.2 × 10^−9 | 9.0 × 10^3 | 0.0 × 10^0 | 4.2 × 10^2 | 1.2 × 10^5 | 1.7 × 10^0 | 0.0 × 10^0 | 4.8 × 10^8 | 2.1 × 10^9 |
| F10 | Mean | −3.5 × 10^−1 | −3.5 × 10^−1 | −3.5 × 10^−1 | −3.5 × 10^−1 | −3.5 × 10^−1 | −2.7 × 10^−1 | −3.5 × 10^−1 | −3.5 × 10^−1 | −3.5 × 10^−1 | 7.5 × 10^−1 |
| | std | 5.6 × 10^−17 | 5.6 × 10^−17 | 7.9 × 10^−3 | 5.6 × 10^−17 | 1.9 × 10^−3 | 8.5 × 10^−2 | 1.3 × 10^−4 | 5.6 × 10^−17 | 9.4 × 10^−4 | 1.5 × 10^0 |
| | Dist. | 8.6 × 10^−5 | 8.6 × 10^−5 | 4.4 × 10^−3 | 8.6 × 10^−5 | 1.3 × 10^−3 | 8.5 × 10^−2 | 3.3 × 10^−5 | 8.6 × 10^−5 | 4.9 × 10^−4 | 1.1 × 10^0 |
| F11 | Mean | −2.0 × 10^0 | −2.0 × 10^0 | −2.0 × 10^0 | −2.0 × 10^0 | −2.0 × 10^0 | −2.0 × 10^0 | −2.0 × 10^0 | −2.0 × 10^0 | −1.5 × 10^1 | −1.8 × 10^0 |
| | std | 0.0 × 10^0 | 0.0 × 10^0 | 2.1 × 10^−3 | 0.0 × 10^0 | 7.3 × 10^−3 | 7.1 × 10^−4 | 1.6 × 10^−3 | 0.0 × 10^0 | 3.5 × 10^0 | 1.7 × 10^−1 |
| | Dist. | 6.8 × 10^−6 | 6.8 × 10^−6 | 2.7 × 10^−3 | 6.8 × 10^−6 | 5.8 × 10^−3 | 2.7 × 10^−4 | 1.3 × 10^−3 | 6.8 × 10^−6 | 1.3 × 10^1 | 2.2 × 10^−1 |
| F12 | Mean | 4.0 × 10^−1 | 4.0 × 10^−1 | 4.0 × 10^−1 | 4.0 × 10^−1 | 4.0 × 10^−1 | 4.9 × 10^−1 | 4.0 × 10^−1 | 4.0 × 10^−1 | 4.0 × 10^−1 | 1.5 × 10^0 |
| | std | 0.0 × 10^0 | 0.0 × 10^0 | 6.8 × 10^−3 | 0.0 × 10^0 | 3.0 × 10^−3 | 1.6 × 10^−1 | 7.7 × 10^−4 | 0.0 × 10^0 | 2.6 × 10^−4 | 10.0 × 10^−1 |
| | Dist. | 7.4 × 10^−6 | 7.4 × 10^−6 | 5.0 × 10^−3 | 7.4 × 10^−6 | 2.7 × 10^−3 | 9.5 × 10^−2 | 4.7 × 10^−4 | 7.4 × 10^−6 | 2.3 × 10^−4 | 1.1 × 10^0 |
| F13 | Mean | 2.7 × 10^−2 | 4.9 × 10^−1 | 4.5 × 10^−1 | 2.7 × 10^−3 | 4.3 × 10^−1 | 2.0 × 10^−1 | 3.3 × 10^−1 | 8.0 × 10^−4 | 5.0 × 10^−1 | 9.2 × 10^−1 |
| | std | 4.8 × 10^−2 | 5.8 × 10^−2 | 7.2 × 10^−2 | 6.7 × 10^−3 | 6.1 × 10^−2 | 1.5 × 10^−1 | 5.0 × 10^−2 | 5.3 × 10^−4 | 9.7 × 10^−2 | 1.3 × 10^−1 |
| | Dist. | 2.7 × 10^−2 | 4.9 × 10^−1 | 4.5 × 10^−1 | 2.6 × 10^−3 | 4.3 × 10^−1 | 2.0 × 10^−1 | 3.3 × 10^−1 | 7.0 × 10^−4 | 5.0 × 10^−1 | 9.2 × 10^−1 |
| F14 | Mean | −7.2 × 10^−4 | −2.1 × 10^−4 | −2.3 × 10^−4 | −2.0 × 10^−1 | −2.5 × 10^−4 | −8.9 × 10^−4 | −3.0 × 10^−4 | −2.7 × 10^−1 | −1.6 × 10^−4 | −1.1 × 10^−4 |
| | std | 3.1 × 10^−4 | 4.1 × 10^−5 | 2.8 × 10^−5 | 3.5 × 10^−1 | 7.2 × 10^−5 | 1.5 × 10^−3 | 4.1 × 10^−5 | 3.7 × 10^−1 | 2.4 × 10^−5 | 1.5 × 10^−5 |
| | Dist. | 10.0 × 10^−1 | 10.0 × 10^−1 | 10.0 × 10^−1 | 8.0 × 10^−1 | 10.0 × 10^−1 | 10.0 × 10^−1 | 10.0 × 10^−1 | 7.3 × 10^−1 | 10.0 × 10^−1 | 10.0 × 10^−1 |
| F15 | Mean | −2.1 × 10^0 | −2.1 × 10^0 | −2.1 × 10^0 | −2.1 × 10^0 | −2.1 × 10^0 | −2.1 × 10^0 | −2.1 × 10^0 | −2.1 × 10^0 | −2.0 × 10^0 | −2.0 × 10^0 |
| | std | 0.0 × 10^0 | 1.6 × 10^−8 | 2.8 × 10^−4 | 0.0 × 10^0 | 4.5 × 10^−5 | 4.4 × 10^−4 | 2.0 × 10^−6 | 0.0 × 10^0 | 5.0 × 10^−2 | 7.0 × 10^−2 |
| | Dist. | 1.2 × 10^−5 | 1.2 × 10^−5 | 1.5 × 10^−4 | 1.2 × 10^−5 | 2.1 × 10^−5 | 3.4 × 10^−4 | 1.0 × 10^−5 | 1.2 × 10^−5 | 2.0 × 10^−2 | 6.6 × 10^−2 |
| F16 | Mean | 2.0 × 10^0 | 2.0 × 10^0 | 1.9 × 10^0 | 1.9 × 10^0 | 2.0 × 10^0 | 2.0 × 10^0 | 2.0 × 10^0 | −8.9 × 10^−14 | 2.0 × 10^0 | 4.0 × 10^0 |
| | std | 2.9 × 10^−2 | 0.0 × 10^0 | 3.6 × 10^−1 | 3.9 × 10^−1 | 8.1 × 10^−4 | 2.9 × 10^−2 | 6.2 × 10^−5 | 1.7 × 10^−14 | 1.7 × 10^−4 | 1.9 × 10^0 |
| | Dist. | 2.0 × 10^0 | 2.0 × 10^0 | 1.9 × 10^0 | 1.9 × 10^0 | 2.0 × 10^0 | 2.0 × 10^0 | 2.0 × 10^0 | 8.9 × 10^−14 | 2.0 × 10^0 | 4.0 × 10^0 |
| F17 | Mean | 2.9 × 10^−3 | 5.9 × 10^−3 | 2.0 × 10^−2 | 8.9 × 10^−6 | 6.6 × 10^−3 | 2.2 × 10^−2 | 3.1 × 10^−3 | 5.3 × 10^−4 | 6.0 × 10^−3 | 1.4 × 10^1 |
| | std | 2.4 × 10^−3 | 4.9 × 10^−3 | 2.8 × 10^−2 | 2.3 × 10^−5 | 6.9 × 10^−3 | 9.0 × 10^−2 | 3.0 × 10^−3 | 1.2 × 10^−3 | 7.8 × 10^−3 | 1.4 × 10^1 |
| | Dist. | 2.9 × 10^−3 | 5.9 × 10^−3 | 2.0 × 10^−2 | 8.9 × 10^−6 | 6.6 × 10^−3 | 2.2 × 10^−2 | 3.1 × 10^−3 | 5.3 × 10^−4 | 6.0 × 10^−3 | 1.4 × 10^1 |
| F18 | Mean | −1.0 × 10^0 | −1.0 × 10^0 | −5.0 × 10^−1 | −1.0 × 10^0 | −3.7 × 10^−1 | −6.5 × 10^−1 | −5.6 × 10^−1 | −1.0 × 10^0 | −4.0 × 10^−2 | −7.0 × 10^−7 |
| | std | 0.0 × 10^0 | 0.0 × 10^0 | 3.6 × 10^−1 | 0.0 × 10^0 | 4.3 × 10^−1 | 4.7 × 10^−1 | 4.8 × 10^−1 | 0.0 × 10^0 | 2.0 × 10^−1 | 3.4 × 10^−6 |
| | Dist. | 0.0 × 10^0 | 0.0 × 10^0 | 5.0 × 10^−1 | 0.0 × 10^0 | 6.3 × 10^−1 | 3.5 × 10^−1 | 4.4 × 10^−1 | 0.0 × 10^0 | 9.6 × 10^−1 | 10.0 × 10^−1 |
| F19 | Mean | 3.0 × 10^0 | 3.0 × 10^0 | 3.1 × 10^0 | 3.0 × 10^0 | 7.1 × 10^0 | 9.4 × 10^0 | 3.0 × 10^0 | 1.1 × 10^1 | 1.5 × 10^1 | 3.6 × 10^1 |
| | std | 1.1 × 10^−15 | 1.6 × 10^−2 | 9.5 × 10^−2 | 6.6 × 10^−16 | 9.9 × 10^0 | 9.1 × 10^0 | 1.6 × 10^−3 | 2.2 × 10^1 | 1.9 × 10^1 | 3.5 × 10^1 |
| | Dist. | 7.8 × 10^−14 | 3.2 × 10^−3 | 8.5 × 10^−2 | 7.9 × 10^−14 | 4.1 × 10^0 | 6.4 × 10^0 | 1.4 × 10^−3 | 7.6 × 10^0 | 1.2 × 10^1 | 3.3 × 10^1 |
| F20 | Mean | −1.9 × 10^1 | −1.9 × 10^1 | −1.9 × 10^1 | −1.9 × 10^1 | −1.9 × 10^1 | −1.9 × 10^1 | −1.9 × 10^1 | −1.9 × 10^1 | −1.7 × 10^1 | −1.4 × 10^1 |
| | std | 0.0 × 10^0 | 7.4 × 10^−3 | 1.4 × 10^−2 | 3.0 × 10^−15 | 2.0 × 10^−3 | 9.3 × 10^−2 | 3.3 × 10^−4 | 3.7 × 10^−15 | 3.9 × 10^0 | 3.6 × 10^0 |
| | Dist. | 5.0 × 10^−4 | 1.8 × 10^−3 | 1.2 × 10^−2 | 5.0 × 10^−4 | 1.6 × 10^−3 | 4.2 × 10^−2 | 3.6 × 10^−4 | 5.0 × 10^−4 | 2.4 × 10^0 | 5.1 × 10^0 |
| F21 | Mean | −2.5 × 10^0 | −2.3 × 10^0 | −2.1 × 10^0 | −3.0 × 10^0 | −2.8 × 10^0 | −2.8 × 10^0 | −3.0 × 10^0 | −3.0 × 10^0 | −1.2 × 10^0 | −6.5 × 10^−1 |
| | std | 2.2 × 10^−1 | 4.8 × 10^−1 | 4.5 × 10^−1 | 0.0 × 10^0 | 3.7 × 10^−1 | 3.8 × 10^−1 | 1.8 × 10^−1 | 3.4 × 10^−6 | 6.8 × 10^−1 | 2.8 × 10^−1 |
| | Dist. | 4.6 × 10^−1 | 7.1 × 10^−1 | 9.0 × 10^−1 | 0.0 × 10^0 | 1.9 × 10^−1 | 2.5 × 10^−1 | 3.7 × 10^−2 | 1.1 × 10^−6 | 1.8 × 10^0 | 2.4 × 10^0 |
| F22 | Mean | 2.0 × 10^0 | 2.0 × 10^0 | 5.7 × 10^0 | 3.4 × 10^0 | 2.0 × 10^0 | 7.0 × 10^0 | 2.0 × 10^0 | 2.7 × 10^0 | 1.0 × 10^0 | 4.3 × 10^2 |
| | std | 0.0 × 10^0 | 0.0 × 10^0 | 1.8 × 10^0 | 4.2 × 10^0 | 1.4 × 10^−2 | 3.5 × 10^0 | 4.8 × 10^−3 | 1.9 × 10^0 | 2.3 × 10^−6 | 3.3 × 10^2 |
| | Dist. | 0.0 × 10^0 | 0.0 × 10^0 | 3.7 × 10^0 | 1.4 × 10^0 | 4.4 × 10^−2 | 5.0 × 10^0 | 1.5 × 10^−2 | 7.2 × 10^−1 | 10.0 × 10^−1 | 4.2 × 10^2 |
| F23 | Mean | −3.3 × 10^−66 | −4.9 × 10^−71 | −2.6 × 10^−55 | −4.6 × 10^−3 | −2.3 × 10^−84 | −8.7 × 10^−3 | −4.3 × 10^−60 | −8.5 × 10^−3 | −1.5 × 10^−105 | −9.4 × 10^−176 |
| | std | 1.6 × 10^−65 | 2.4 × 10^−70 | 1.3 × 10^−54 | 4.6 × 10^−3 | 1.1 × 10^−83 | 4.0 × 10^−2 | 2.1 × 10^−59 | 7.3 × 10^−3 | 7.4 × 10^−105 | 0.0 × 10^0 |
| | Dist. | 1.0 × 10^0 | 1.0 × 10^0 | 1.0 × 10^0 | 1.0 × 10^0 | 1.0 × 10^0 | 10.0 × 10^−1 | 1.0 × 10^0 | 10.0 × 10^−1 | 1.0 × 10^0 | 1.0 × 10^0 |
| F24 | Mean | 10.0 × 10^−1 | 1.0 × 10^0 | 10.0 × 10^−1 | 1.0 × 10^0 | 1.0 × 10^0 | 1.0 × 10^0 | 1.0 × 10^0 | 9.6 × 10^−1 | 1.0 × 10^0 | 1.1 × 10^0 |
| | std | 3.4 × 10^−3 | 2.8 × 10^−4 | 1.3 × 10^−2 | 4.5 × 10^−5 | 2.3 × 10^−5 | 1.3 × 10^−3 | 1.3 × 10^−6 | 4.9 × 10^−2 | 3.3 × 10^−4 | 7.6 × 10^−2 |
| | Dist. | 9.9 × 10^−2 | 1.0 × 10^−1 | 9.7 × 10^−2 | 1.0 × 10^−1 | 1.0 × 10^−1 | 1.0 × 10^−1 | 1.0 × 10^−1 | 6.0 × 10^−2 | 1.0 × 10^−1 | 1.6 × 10^−1 |
| F25 | Mean | 3.9 × 10^1 | 6.5 × 10^1 | 3.5 × 10^1 | 5.3 × 10^1 | 6.3 × 10^1 | 6.8 × 10^1 | 5.5 × 10^1 | 3.4 × 10^1 | 7.1 × 10^1 | 7.5 × 10^1 |
| | std | 7.0 × 10^0 | 1.2 × 10^1 | 2.4 × 10^0 | 2.0 × 10^1 | 1.8 × 10^1 | 1.3 × 10^1 | 2.0 × 10^1 | 2.6 × 10^−14 | 1.1 × 10^1 | 6.1 × 10^0 |
| | Dist. | 4.1 × 10^0 | 3.0 × 10^1 | 1.1 × 10^0 | 1.9 × 10^1 | 2.9 × 10^1 | 3.3 × 10^1 | 2.1 × 10^1 | 3.3 × 10^−1 | 3.6 × 10^1 | 4.0 × 10^1 |
Table A2. Results of the algorithms with lower performance for multimodal problems.
ABC | ACO | ASO | BBOA | BBO | EVO | GA | HBA | SA | TS
F26 Mean 7.7 × 1001.3 × 1011.9 × 1018.6 × 1005.8 × 1005.7 × 1004.0 × 1007.5 × 1002.0 × 1012.1 × 101
std1.2 × 1006.9 × 10−12.8 × 10−13.4 × 10−18.5 × 10−19.3 × 10−12.9 × 10−14.8 × 10−12.8 × 10−13.2 × 10−1
Dist.7.7 × 1001.3 × 1011.9 × 1018.6 × 1005.8 × 1005.7 × 1004.0 × 1007.5 × 1002.0 × 1012.1 × 101
F27Mean3.6 × 1014.1 × 1014.1 × 1011.4 × 1019.8 × 1001.3 × 1015.3 × 1009.3 × 1001.5 × 1016.3 × 101
std1.6 × 1002.9 × 1004.1 × 1001.4 × 1002.1 × 1002.7 × 1001.4 × 1002.4 × 1002.9 × 1006.5 × 100
Dist.3.6 × 1014.1 × 1014.1 × 1011.4 × 1019.8 × 1001.3 × 1015.3 × 1009.3 × 1001.5 × 1016.3 × 101
F28Mean1.9 × 1025.5 × 1021.3 × 1038.9 × 1022.1 × 1002.5 × 1028.2 × 1007.3 × 1013.8 × 1013.3 × 103
std3.1 × 1019.1 × 1012.0 × 1021.4 × 1024.6 × 10−17.7 × 1012.7 × 1004.3 × 1011.4 × 1015.1 × 102
Dist.1.9 × 1025.5 × 1021.3 × 1038.9 × 1022.1 × 1002.5 × 1028.2 × 1007.3 × 1013.8 × 1013.3 × 103
F29Mean9.4 × 10−29.7 × 10−12.7 × 1023.2 × 1004.8 × 10−37.8 × 1019.8 × 10−27.3 × 10−11.3 × 1008.5 × 105
std5.2 × 10−21.2 × 10−18.0 × 1015.0 × 10−11.1 × 10−33.5 × 1012.7 × 10−22.3 × 10−13.2 × 10−11.2 × 106
Dist.9.4 × 10−29.7 × 10−12.7 × 1023.2 × 1004.8 × 10−37.8 × 1019.8 × 10−27.3 × 10−11.3 × 1008.5 × 105
F30Mean3.8 × 10−27.9 × 10−14.7 × 1017.7 × 10−31.5 × 10−74.6 × 10−23.1 × 10−68.8 × 10−51.5 × 10−13.3 × 102
std3.5 × 10−23.0 × 10−12.6 × 1013.5 × 10−31.9 × 10−72.6 × 10−21.4 × 10−67.7 × 10−54.1 × 10−29.6 × 101
Dist.3.8 × 10−27.9 × 10−14.7 × 1017.7 × 10−31.5 × 10−74.6 × 10−23.1 × 10−68.8 × 10−51.5 × 10−13.3 × 102
F31Mean1.3 × 1031.1 × 1079.7 × 1085.9 × 1063.3 × 1011.6 × 1074.4 × 1035.3 × 1052.4 × 1095.3 × 109
std10.0 × 1023.8 × 1063.3 × 1083.0 × 1063.4 × 1011.1 × 1071.5 × 1035.2 × 1055.1 × 1086.7 × 108
Dist.1.3 × 1031.1 × 1079.7 × 1085.9 × 1063.3 × 1011.6 × 1074.4 × 1035.3 × 1052.4 × 1095.3 × 109
F32Mean1.2 × 1031.2 × 1045.3 × 1053.3 × 1031.3 × 1016.2 × 1032.9 × 1018.9 × 1011.5 × 1022.0 × 106
std1.2 × 1035.9 × 1032.0 × 1051.4 × 1031.6 × 1014.3 × 1031.1 × 1015.3 × 1011.4 × 1023.5 × 105
Dist.1.2 × 1031.2 × 1045.3 × 1053.3 × 1031.3 × 1016.2 × 1032.9 × 1018.9 × 1011.5 × 1022.0 × 106
F33Mean1.5 × 10−22.9 × 10−22.2 × 10−23.1 × 10−44.2 × 10−24.1 × 10−26.8 × 10−37.0 × 10−23.7 × 1001.5 × 101
std8.2 × 10−32.3 × 10−21.9 × 10−29.1 × 10−43.4 × 10−23.0 × 10−24.4 × 10−32.0 × 10−17.0 × 1008.2 × 100
Dist.1.5 × 10−22.9 × 10−22.2 × 10−23.1 × 10−44.2 × 10−24.1 × 10−26.8 × 10−37.0 × 10−23.7 × 1001.5 × 101
F34Mean2.4 × 1061.1 × 1091.0 × 1094.2 × 1052.2 × 10−25.5 × 1061.1 × 10−25.9 × 1008.1 × 1002.2 × 1014
std3.0 × 1061.0 × 1091.4 × 1091.2 × 1069.2 × 10−31.5 × 1076.8 × 10−31.0 × 1017.9 × 1004.2 × 1014
Dist.2.4 × 1061.1 × 1091.0 × 1094.2 × 1052.2 × 10−25.5 × 1061.1 × 10−25.9 × 1008.1 × 1002.2 × 1014
F35Mean8.1 × 1043.7 × 1062.6 × 1088.2 × 1041.9 × 1017.3 × 1057.3 × 1008.9 × 1012.1 × 1081.3 × 109
std8.7 × 1041.7 × 1069.2 × 1078.6 × 1042.0 × 1015.9 × 1054.8 × 1004.0 × 1018.0 × 1072.7 × 108
Dist.8.1 × 1043.7 × 1062.6 × 1088.2 × 1041.9 × 1017.3 × 1057.3 × 1008.9 × 1012.1 × 1081.3 × 109
F36Mean3.1 × 10−11.0 × 1004.1 × 1012.4 × 1001.2 × 10−11.2 × 1006.3 × 10−24.7 × 10−18.9 × 10−11.7 × 102
std6.9 × 10−22.9 × 10−11.3 × 1011.2 × 1006.6 × 10−26.9 × 10−11.7 × 10−22.0 × 10−13.7 × 10−13.5 × 101
Dist.3.1 × 10−11.0 × 1004.1 × 1012.4 × 1001.2 × 10−11.2 × 1006.3 × 10−24.7 × 10−18.9 × 10−11.7 × 102
F37Mean−2.1 × 100−1.9 × 100−1.5 × 100−1.5 × 100−1.4 × 100−1.5 × 100−1.2 × 100−2.0 × 100−1.5 × 100−1.1 × 100
std3.4 × 10−23.7 × 10−13.3 × 10−14.5 × 10−13.9 × 10−14.6 × 10−12.0 × 10−121.6 × 10−13.0 × 10−11.0 × 10−1
Dist.2.1 × 1001.9 × 1001.5 × 1001.5 × 1001.4 × 1001.5 × 1001.2 × 1002.0 × 1001.5 × 1001.1 × 100
F38Mean4.5 × 1014.4 × 1016.9 × 1013.8 × 1012.0 × 1012.9 × 1019.9 × 1002.9 × 1019.3 × 1011.0 × 102
std3.2 × 1004.8 × 1006.1 × 1002.9 × 1006.7 × 1006.2 × 1003.4 × 1004.5 × 1009.0 × 1005.4 × 100
Dist.4.7 × 1014.6 × 1017.1 × 1014.0 × 1012.2 × 1013.1 × 1011.2 × 1013.2 × 1019.6 × 1011.0 × 102
F39Mean3.9 × 1013.1 × 1033.4 × 1042.2 × 1033.9 × 1003.0 × 1036.1 × 1015.4 × 1024.5 × 1046.5 × 104
std2.7 × 1015.8 × 1024.7 × 1032.1 × 1021.6 × 1009.8 × 1021.2 × 1012.3 × 1028.0 × 1039.7 × 103
Dist.3.9 × 1013.1 × 1033.4 × 1042.2 × 1033.9 × 1003.0 × 1036.1 × 1015.4 × 1024.5 × 1046.5 × 104
F40Mean6.7 × 1013.5 × 1033.1 × 1043.9 × 1032.4 × 1013.1 × 1038.4 × 1011.6 × 1036.3 × 1046.6 × 104
std3.3 × 1016.8 × 1026.0 × 1038.0 × 1027.1 × 1008.3 × 1022.1 × 1014.9 × 1025.7 × 1039.7 × 103
Dist.6.7 × 1013.5 × 1033.1 × 1043.9 × 1032.4 × 1013.1 × 1038.4 × 1011.6 × 1036.3 × 1046.6 × 104
F41Mean6.0 × 1007.4 × 1001.9 × 1015.3 × 1001.8 × 1005.2 × 1002.4 × 1003.9 × 1002.5 × 1012.7 × 101
std4.2 × 10−16.6 × 10−11.4 × 1005.1 × 10−12.1 × 10−15.9 × 10−12.8 × 10−14.6 × 10−12.0 × 1001.7 × 100
Dist.6.0 × 1007.4 × 1001.9 × 1015.3 × 1001.8 × 1005.2 × 1002.4 × 1003.9 × 1002.5 × 1012.7 × 101
F42Mean5.0 × 10−15.0 × 10−15.0 × 10−14.6 × 10−11.4 × 10−12.7 × 10−12.9 × 10−13.3 × 10−15.0 × 10−15.0 × 10−1
std1.5 × 10−43.5 × 10−49.1 × 10−51.1 × 10−24.3 × 10−21.8 × 10−17.2 × 10−27.4 × 10−22.0 × 10−51.4 × 10−5
Dist.5.0 × 10−15.0 × 10−15.0 × 10−14.6 × 10−11.4 × 10−12.7 × 10−12.9 × 10−13.3 × 10−15.0 × 10−15.0 × 10−1
F43Mean3.0 × 10−65.8 × 10−61.1 × 10−62.1 × 10−89.0 × 10−102.8 × 10−54.0 × 10−107.9 × 10−91.7 × 10−53.4 × 10−3
std2.0 × 10−64.3 × 10−65.1 × 10−71.8 × 10−81.1 × 10−94.9 × 10−53.2 × 10−101.9 × 10−81.0 × 10−53.2 × 10−3
Dist.3.0 × 10−65.8 × 10−61.1 × 10−62.1 × 10−89.0 × 10−102.8 × 10−54.0 × 10−107.9 × 10−91.7 × 10−53.4 × 10−3
F44Mean6.9 × 1028.9 × 1024.9 × 1028.2 × 1013.1 × 1025.6 × 1025.0 × 1021.6 × 1016.6 × 1025.8 × 103
std6.1 × 1018.3 × 1017.5 × 1019.0 × 1001.1 × 1021.7 × 1029.5 × 1017.1 × 1001.5 × 1027.2 × 103
Dist.6.9 × 1028.9 × 1024.9 × 1028.2 × 1013.1 × 1025.6 × 1025.0 × 1021.6 × 1016.6 × 1025.8 × 103
F45Mean2.8 × 1005.0 × 1003.9 × 1000.0 × 1002.4 × 1002.5 × 1002.5 × 1001.9 × 1003.5 × 1007.5 × 101
std8.9 × 10−11.7 × 1001.5 × 1000.0 × 1006.2 × 10−11.6 × 1009.4 × 10−14.6 × 10−11.3 × 1003.3 × 101
Dist.2.8 × 1005.0 × 1003.9 × 1000.0 × 1002.4 × 1002.5 × 1002.5 × 1001.9 × 1003.5 × 1007.5 × 101
Table A3. Results of the algorithms with lower performance for the CEC 2022 problems.
ABC | ACO | ASO | BBOA | BBO | EVO | GA | HBA | SA | TS
D = 10
F46 Mean 3.0 × 1023.0 × 1025.5 × 1033.2 × 1034.0 × 1025.9 × 1033.0 × 1023.0 × 1021.5 × 1043.1 × 104
std3.0 × 10−102.4 × 10−31.8 × 1031.8 × 1031.5 × 1023.2 × 1031.1 × 1001.4 × 10−26.6 × 1039.8 × 103
Dist.1.1 × 10−101.4 × 10−35.2 × 1032.9 × 1031.0 × 1025.6 × 1039.3 × 10−11.5 × 10−21.5 × 1043.1 × 104
F47Mean4.1 × 1024.1 × 1025.9 × 1026.4 × 1024.1 × 1025.3 × 1024.1 × 1024.1 × 1021.3 × 1033.1 × 103
std2.7 × 10−16.6 × 10−15.4 × 1011.7 × 1021.7 × 1019.0 × 1012.1 × 1012.2 × 1015.7 × 1021.0 × 103
Dist.8.9 × 1008.6 × 1001.9 × 1022.4 × 1021.1 × 1011.3 × 1021.4 × 1011.4 × 1019.3 × 1022.7 × 103
F48Mean6.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 102
std0.0 × 1000.0 × 1004.4 × 10−27.4 × 10−23.0 × 10−52.8 × 10−23.7 × 10−66.6 × 10−72.2 × 10−12.5 × 10−1
Dist.0.0 × 1000.0 × 1001.5 × 10−11.6 × 10−14.3 × 10−55.0 × 10−25.3 × 10−61.1 × 10−68.6 × 10−18.8 × 10−1
F49Mean8.5 × 1028.5 × 1028.8 × 1028.3 × 1028.5 × 1028.6 × 1028.5 × 1028.4 × 1029.6 × 1021.0 × 103
std8.2 × 1001.2 × 1011.4 × 1018.6 × 1002.5 × 1012.7 × 1012.2 × 1011.6 × 1014.6 × 1013.7 × 101
Dist.4.6 × 1015.2 × 1017.9 × 1012.7 × 1015.5 × 1016.3 × 1014.7 × 1014.0 × 1011.6 × 1022.2 × 102
F50Mean9.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.1 × 1029.1 × 102
std1.9 × 10−86.7 × 10−97.2 × 10−15.2 × 10−12.5 × 10−14.1 × 10−13.0 × 10−15.6 × 10−12.3 × 1003.3 × 100
Dist.8.0 × 10−95.5 × 10−91.8 × 1008.3 × 10−12.9 × 10−16.7 × 10−13.0 × 10−16.4 × 10−17.5 × 10010.0 × 100
F51Mean5.0 × 1046.9 × 1041.4 × 1072.5 × 1041.1 × 1056.9 × 1062.9 × 1041.8 × 1043.1 × 1071.3 × 109
std1.1 × 1044.9 × 1041.7 × 1071.8 × 1042.0 × 1051.2 × 1071.8 × 1041.2 × 1049.8 × 1079.0 × 108
Dist.4.9 × 1046.8 × 1041.4 × 1072.4 × 1041.1 × 1056.9 × 1062.7 × 1041.6 × 1043.1 × 1071.3 × 109
F52Mean2.0 × 1032.1 × 1032.2 × 1032.0 × 1032.0 × 1032.1 × 1032.0 × 1032.0 × 1033.6 × 1034.6 × 103
std6.3 × 1001.0 × 1019.2 × 1011.6 × 1016.8 × 1008.6 × 1017.1 × 1001.1 × 1016.1 × 1021.2 × 103
Dist.4.5 × 1016.2 × 1012.2 × 1023.9 × 1012.6 × 1011.1 × 1022.0 × 1013.2 × 1011.6 × 1032.6 × 103
F53Mean2.2 × 1032.3 × 1038.0 × 1032.2 × 1032.6 × 1033.4 × 1072.6 × 1032.2 × 1034.5 × 1036.7 × 1013
std4.7 × 1001.4 × 1022.0 × 1042.0 × 1015.6 × 1021.5 × 1085.5 × 1021.9 × 1011.4 × 1032.6 × 1014
Dist.3.5 × 1011.3 × 1025.8 × 1033.4 × 1014.2 × 1023.4 × 1073.8 × 1023.4 × 1012.3 × 1036.7 × 1013
F54Mean2.6 × 1032.7 × 1032.8 × 1032.7 × 1032.6 × 1032.7 × 1032.5 × 1032.6 × 1033.3 × 1033.8 × 103
std9.6 × 1011.0 × 1001.2 × 1022.2 × 1021.7 × 1021.6 × 1021.8 × 1021.6 × 1022.7 × 1024.5 × 102
Dist.2.7 × 1023.7 × 1024.7 × 1023.7 × 1022.8 × 1024.3 × 1022.4 × 1022.6 × 1029.6 × 1021.5 × 103
F55Mean1.7 × 1032.3 × 1032.1 × 1032.0 × 1031.4 × 1032.1 × 1037.8 × 1026.5 × 1022.3 × 1033.4 × 103
std3.0 × 1023.0 × 1022.8 × 1026.7 × 1026.8 × 1027.0 × 1025.9 × 1025.8 × 1025.0 × 1024.8 × 102
Dist.6.8 × 1021.4 × 1023.4 × 1023.8 × 1021.0 × 1032.9 × 1021.6 × 1031.7 × 1031.1 × 1021.0 × 103
F56Mean2.6 × 1033.2 × 1032.7 × 1032.7 × 1032.6 × 1032.8 × 1032.6 × 1032.6 × 1033.2 × 1035.0 × 103
std1.5 × 10−62.6 × 1024.9 × 1017.7 × 1011.7 × 1022.5 × 1027.8 × 1007.7 × 1026.3 × 1021.1 × 103
Dist.4.9 × 10−75.6 × 1029.1 × 1019.1 × 1013.9 × 1011.6 × 1024.1 × 1003.0 × 1016.1 × 1022.4 × 103
F57Mean2.9 × 1032.9 × 1032.9 × 1032.9 × 1032.7 × 1032.9 × 1032.4 × 1032.8 × 1033.1 × 1033.4 × 103
std2.1 × 1001.3 × 1005.2 × 1013.5 × 1012.0 × 1022.3 × 1013.3 × 1025.9 × 1019.8 × 1012.1 × 102
Dist.1.6 × 1021.6 × 1021.9 × 1021.9 × 1022.7 × 1011.9 × 1023.0 × 1021.5 × 1023.8 × 1026.8 × 102
D = 20
F46 Mean 1.1 × 1046.9 × 1032.5 × 1042.1 × 1041.3 × 1032.4 × 1045.1 × 1026.9 × 1026.0 × 1048.7 × 104
std1.8 × 1031.5 × 1035.8 × 1035.1 × 1039.0 × 1021.0 × 1042.6 × 1027.5 × 1021.5 × 1044.3 × 104
Dist.1.0 × 1046.6 × 1032.5 × 1042.1 × 1041.0 × 1032.4 × 1042.1 × 1023.9 × 1025.9 × 1048.6 × 104
F47Mean4.5 × 1024.5 × 1021.9 × 1031.6 × 1034.7 × 1027.8 × 1024.6 × 1025.1 × 1024.5 × 1038.5 × 103
std9.3 × 10−22.9 × 10−13.9 × 1024.9 × 1023.2 × 1011.3 × 1021.4 × 1016.5 × 1011.3 × 1032.1 × 103
Dist.4.9 × 1015.0 × 1011.5 × 1031.2 × 1036.5 × 1013.8 × 1025.9 × 1011.1 × 1024.1 × 1038.1 × 103
F48Mean6.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 102
std3.8 × 10−103.7 × 10−52.9 × 10−12.9 × 10−15.8 × 10−51.2 × 10−12.0 × 10−55.6 × 10−24.3 × 10−14.6 × 10−1
Dist.6.0 × 10−101.4 × 10−41.1 × 1001.5 × 1001.3 × 10−42.9 × 10−15.0 × 10−53.4 × 10−22.8 × 1002.9 × 100
F49Mean1.0 × 1031.0 × 1031.1 × 1039.6 × 1029.3 × 1021.0 × 1038.8 × 1029.6 × 1021.2 × 1031.4 × 103
std2.9 × 1012.6 × 1013.6 × 1012.6 × 1013.5 × 1016.6 × 1012.1 × 1014.4 × 1016.4 × 1016.7 × 101
Dist.2.4 × 1022.3 × 1023.2 × 1021.6 × 1021.3 × 1022.1 × 1028.5 × 1011.6 × 1024.3 × 1025.8 × 102
F50Mean9.0 × 1029.0 × 1029.1 × 1029.1 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.2 × 1029.4 × 102
std2.1 × 10−17.1 × 10−22.5 × 1001.6 × 1009.4 × 10−11.4 × 1006.4 × 10−11.6 × 1004.6 × 1008.7 × 100
Dist.3.7 × 10−15.0 × 10−11.0 × 1015.4 × 1001.3 × 1003.6 × 1001.3 × 1003.3 × 1002.4 × 1013.6 × 101
F51Mean9.4 × 1071.9 × 1081.4 × 1099.3 × 1082.7 × 1057.9 × 1071.4 × 1059.5 × 1043.3 × 1098.8 × 109
std2.6 × 1079.8 × 1075.9 × 1088.8 × 1081.7 × 1051.1 × 1085.5 × 1043.4 × 1041.7 × 1093.0 × 109
Dist.9.4 × 1071.9 × 1081.4 × 1099.3 × 1082.7 × 1057.9 × 1071.4 × 1059.4 × 1043.3 × 1098.8 × 109
F52Mean2.3 × 1032.5 × 1033.9 × 1033.0 × 1032.0 × 1032.6 × 1032.0 × 1032.1 × 1038.3 × 1031.4 × 104
std1.0 × 1022.8 × 1026.9 × 1026.0 × 1028.0 × 1013.7 × 1028.4 × 1012.0 × 1022.7 × 1034.5 × 103
Dist.3.3 × 1024.8 × 1021.9 × 1039.9 × 1021.5 × 1015.6 × 1022.8 × 1011.4 × 1026.3 × 1031.2 × 104
F53Mean1.2 × 1053.4 × 1073.4 × 1081.0 × 1044.0 × 1034.3 × 1082.8 × 1032.3 × 1033.0 × 10127.3 × 1014
std1.8 × 1056.6 × 1076.3 × 1083.0 × 1041.3 × 1031.6 × 1097.5 × 1022.1 × 1021.4 × 10132.1 × 1015
Dist.1.2 × 1053.4 × 1073.4 × 1087.9 × 1031.8 × 1034.3 × 1086.4 × 1021.4 × 1023.0 × 10127.3 × 1014
F54Mean2.7 × 1032.6 × 1033.8 × 1034.1 × 1032.7 × 1033.1 × 1032.7 × 1032.6 × 1036.8 × 1039.7 × 103
std8.0 × 1006.1 × 1003.8 × 1028.0 × 1021.4 × 1011.8 × 1028.0 × 1004.8 × 1001.7 × 1032.0 × 103
Dist.3.7 × 1023.5 × 1021.5 × 1031.8 × 1033.6 × 1027.9 × 1023.6 × 1023.4 × 1024.5 × 1037.4 × 103
F55Mean2.9 × 1032.7 × 1033.2 × 1032.3 × 103−1.8 × 1021.5 × 103−7.5 × 1021.4 × 1031.6 × 1036.5 × 103
std1.8 × 1024.6 × 1023.5 × 1025.9 × 1026.8 × 1021.1 × 1038.2 × 1029.0 × 1021.2 × 1038.3 × 102
Dist.5.4 × 1023.1 × 1028.2 × 1028.9 × 1012.6 × 1038.6 × 1023.1 × 1031.0 × 1038.2 × 1024.1 × 103
F56Mean2.6 × 1032.6 × 1034.1 × 1036.2 × 1032.6 × 1033.1 × 1032.6 × 1032.9 × 1031.2 × 1042.5 × 104
std2.4 × 10−26.6 × 1001.1 × 1032.1 × 1039.9 × 1005.2 × 1021.0 × 1014.9 × 1023.8 × 1031.2 × 104
Dist.3.6 × 10−21.2 × 1011.5 × 1033.6 × 1037.6 × 1005.1 × 1028.5 × 1003.0 × 1029.0 × 1032.2 × 104
F57Mean2.9 × 1032.9 × 1033.4 × 1033.1 × 1032.9 × 1033.1 × 1032.9 × 1033.0 × 1033.6 × 1034.5 × 103
std4.7 × 1007.3 × 1001.1 × 1028.8 × 1011.6 × 1019.4 × 1012.4 × 1016.8 × 1012.2 × 1023.4 × 102
Dist.2.4 × 1022.3 × 1027.1 × 1024.3 × 1022.4 × 1023.8 × 1022.3 × 1023.5 × 1029.0 × 1021.8 × 103

References

  1. Agushaka, J.O.; Ezugwu, A.E. Initialisation Approaches for Population-Based Metaheuristic Algorithms: A Comprehensive Review. Appl. Sci. 2022, 12, 896. [Google Scholar] [CrossRef]
  2. Toaza, B.; Esztergár-Kiss, D. A Review of Metaheuristic Algorithms for Solving TSP-Based Scheduling Optimization Problems. Appl. Soft Comput. J. 2023, 148, 110908. [Google Scholar] [CrossRef]
  3. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  4. Jamil, M.; Yang, X.-S. A Literature Survey of Benchmark Functions For Global Optimization Problems. J. Math. Model. Numer. Optim. 2013, 4, 150–194. [Google Scholar] [CrossRef]
  5. Sallam, K.M.; Abdel-Basset, M.; El-Abd, M.; Wagdy, A. IMODEII: An Improved IMODE Algorithm Based on the Reinforcement Learning. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–8. [Google Scholar]
  6. Stanovov, V.; Akhmedova, S.; Semenkin, E. NL-SHADE-LBC Algorithm with Linear Parameter Adaptation Bias Change for CEC 2022 Numerical Optimization. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–8. [Google Scholar]
  7. Ezugwu, A.E.; Shukla, A.K.; Nath, R.; Akinyelu, A.A.; Agushaka, J.O.; Chiroma, H.; Muhuri, P.K. Metaheuristics: A Comprehensive Overview and Classification along with Bibliometric Analysis. Artif. Intell. Rev. 2021, 54, 4237–4316. [Google Scholar] [CrossRef]
  8. Van Thieu, N.; Mirjalili, S. MEALPY: An Open-Source Library for Latest Meta-Heuristic Algorithms in Python. J. Syst. Archit. 2023, 139, 102871. [Google Scholar] [CrossRef]
  9. Rahimi, I.; Gandomi, A.H.; Chen, F.; Mezura-Montes, E. A Review on Constraint Handling Techniques for Population-Based Algorithms: From Single-Objective to Multi-Objective Optimization. Arch. Comput. Methods Eng. 2023, 30, 2181–2209. [Google Scholar] [CrossRef]
  10. Humberto Valencia-Rivera, G.; Torcoroma Benavides-Robles, M.; Vela Morales, A.; Amaya, I.; Cruz-Duarte, J.M.; Carlos Ortiz-Bayliss, J.; Gabriel Avina-Cervantes, J. A Systematic Review of Metaheuristic Algorithms in Electric Power Systems Optimization. Appl. Soft Comput. J. 2024, 150, 111047. [Google Scholar] [CrossRef]
  11. Liu, J.; Sarker, R.; Elsayed, S.; Essam, D.; Siswanto, N. Large-Scale Evolutionary Optimization: A Review and Comparative Study. Swarm Evol. Comput. 2024, 85, 101466. [Google Scholar] [CrossRef]
  12. Mohammadi, A.; Sheikholeslam, F. Intelligent Optimization: Literature Review and State-of-the-Art Algorithms (1965–2022). Eng. Appl. Artif. Intell. 2023, 126, 106959. [Google Scholar] [CrossRef]
  13. Chattopadhyay, S.; Marik, A.; Pramanik, R. A Brief Overview of Physics-Inspired Metaheuristic Optimization Techniques. arXiv 2022, arXiv:2201.12810. [Google Scholar]
  14. Beiranvand, V.; Hare, W.; Lucet, Y. Best Practices for Comparing Optimization Algorithms. Optim. Eng. 2017, 18, 815–848. [Google Scholar] [CrossRef]
  15. Abualigah, L.; Gandomi, A.H.; Elaziz, M.A.; Hamad, H.A.; Omari, M.; Alshinwan, M.; Khasawneh, A.M. Advances in Meta-Heuristic Optimization Algorithms in Big Data Text Clustering. Electronics 2021, 10, 101. [Google Scholar] [CrossRef]
  16. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A Nature-Inspired Metaheuristic. Expert. Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  17. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium Optimizer: A Novel Optimization Algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  18. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry Gas Solubility Optimization: A Novel Physics-Based Algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  19. Xie, L.; Han, T.; Zhou, H.; Zhang, Z.-R.; Han, B.; Tang, A. Tuna Swarm Optimization: A Novel Swarm-Based Metaheuristic Algorithm for Global Optimization. Comput. Intell. Neurosci. 2021, 2021, 9210050. [Google Scholar] [CrossRef]
  20. Abdollahzadeh, B.; Soleimanian Gharehchopogh, F.; Mirjalili, S. Artificial Gorilla Troops Optimizer: A New Nature-inspired Metaheuristic Algorithm for Global Optimization Problems. Int. J. Intell. Syst. 2021, 36, 5887–5958. [Google Scholar] [CrossRef]
  21. Moazzeni, A.R.; Khamehchi, E. Rain Optimization Algorithm (ROA): A New Metaheuristic Method for Drilling Optimization Solutions. J. Pet. Sci. Eng. 2020, 195, 107512. [Google Scholar] [CrossRef]
  22. Cai, C.-C.; Fu, M.-S.; Meng, X.-M.; Wang, Q.-J.; Wang, Y.-Q. Modified Harris Hawks Optimization Algorithm with Multi-Strategy for Global Optimization Problem. J. Comput. 2023, 34, 91–105. [Google Scholar] [CrossRef]
  23. Prakash, T.; Singh, P.P.; Singh, V.P.; Singh, S.N. A Novel Brown-Bear Optimization Algorithm for Solving Economic Dispatch Problem. In Advanced Control & Optimization Paradigms for Energy System Operation and Management; River Publishers: New York, NY, USA, 2023; pp. 137–164. [Google Scholar]
  24. Azizi, M.; Aickelin, U.; Khorshidi, H.A.; Baghalzadeh Shishehgarkhaneh, M. Energy Valley Optimizer: A Novel Metaheuristic Algorithm for Global and Engineering Optimization. Sci. Rep. 2023, 13, 226. [Google Scholar] [CrossRef]
  25. Hashim, F.A.; Mostafa, R.R.; Hussien, A.G.; Mirjalili, S.; Sallam, K.M. Fick’s Law Algorithm: A Physical Law-Based Algorithm for Numerical Optimization. Knowl.-Based Syst. 2023, 260, 110146. [Google Scholar] [CrossRef]
  26. Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey Badger Algorithm: New Metaheuristic Algorithm for Solving Optimization Problems. Math. Comput. Simul. 2022, 192, 84–110. [Google Scholar] [CrossRef]
  27. Yang, Y.; Chen, H.; Heidari, A.A.; Gandomi, A.H. Hunger Games Search: Visions, Conception, Implementation, Deep Analysis, Perspectives, and towards Performance Shifts. Expert. Syst. Appl. 2021, 177, 114864. [Google Scholar] [CrossRef]
  28. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  29. Lu, K.; Ma, Z. A Modified Whale Optimization Algorithm for Parameter Estimation of Software Reliability Growth Models. J. Algorithm Comput. Technol. 2021, 15. [Google Scholar] [CrossRef]
  30. Rajwar, K.; Deep, K. Uncovering Structural Bias in Population-Based Optimization Algorithms: A Theoretical and Simulation-Based Analysis of the Generalized Signature Test. Expert. Syst. Appl. 2024, 240, 122332. [Google Scholar] [CrossRef]
  31. Hu, Z.; Zhang, Q.; Wang, Y.; Su, Q.; Xiong, Z. Research Orientation and Novelty Discriminant for New Metaheuristic Algorithms. Appl. Soft Comput. J. 2024, 157. [Google Scholar] [CrossRef]
  32. Rajwar, K.; Deep, K.; Das, S. An Exhaustive Review of the Metaheuristic Algorithms for Search and Optimization: Taxonomy, Applications, and Open Challenges. Artif. Intell. Rev. 2023, 56, 13187–13257. [Google Scholar] [CrossRef]
  33. Velasco, L.; Guerrero, H.; Hospitaler, A. A Literature Review and Critical Analysis of Metaheuristics Recently Developed. Arch. Comput. Methods Eng. 2024, 31, 125–146. [Google Scholar] [CrossRef]
  34. Kudela, J. A Critical Problem in Benchmarking and Analysis of Evolutionary Computation Methods. Nat. Mach. Intell. 2022, 4, 1238–1245. [Google Scholar] [CrossRef]
  35. Aranha, C.; Camacho Villalón, C.L.; Campelo, F.; Dorigo, M.; Ruiz, R.; Sevaux, M.; Sörensen, K.; Stützle, T. Metaphor-Based Metaheuristics, a Call for Action: The Elephant in the Room. Swarm Intell. 2022, 16, 1–6. [Google Scholar] [CrossRef]
  36. Tzanetos, A.; Blondin, M. A Qualitative Systematic Review of Metaheuristics Applied to Tension/Compression Spring Design Problem: Current Situation, Recommendations, and Research Direction. Eng. Appl. Artif. Intell. 2023, 118, 105521. [Google Scholar] [CrossRef]
  37. Ren, Y.; Xu, W.; Fu, J. Characteristics of Intracloud Lightning to Cloud-to-Ground Lightning Ratio in Thunderstorms over Eastern and Southern China. Atmos. Res. 2024, 300, 107231. [Google Scholar] [CrossRef]
  38. Stucke, I.; Morgenstern, D.; Zeileis, A.; Mayr, G.J.; Simon, T.; Diendorfer, G.; Schulz, W.; Pichler, H. Electric Power Systems Research Diagnosing Upward Lightning from Tall Objects from Meteorological Thunderstorm Environments. Electr. Power Syst. Res. 2024, 229, 110199. [Google Scholar] [CrossRef]
  39. Liu, Y.X.; Hong, H.P. Analyzing, Modelling, and Simulating Nonstationary Thunderstorm Winds in Two Horizontal Orthogonal Directions at a Point in Space. J. Wind. Eng. Ind. Aerodyn. 2023, 237, 105412. [Google Scholar] [CrossRef]
  40. Hoole, P.R.P.; Fisher, J.; Hoole, S.R.H. Thunderstorms and Pre-Lightning Electrostatics. In Lightning Engineering: Physics, Computer-Based Test-Bed, Protection of Ground and Airborne Systems; Springer International Publishing: Cham, Switzerland, 2022; pp. 51–83. [Google Scholar]
  41. Dementyeva, S.; Shatalina, M.; Popykina, A.; Sarafanov, F.; Kulikov, M.; Mareev, E. Trends and Features of Thunderstorms and Lightning Activity in the Upper Volga Region. Atmosphere 2023, 14, 674. [Google Scholar] [CrossRef]
  42. Holle, R.L.; Zhang, D. The Scientific Basics of Lightning. In Flashes of Brilliance; Springer International Publishing: Cham, Switzerland, 2023; pp. 1–29. [Google Scholar]
  43. Abhishek, K.; Kennet, V.P.; Ali, W.M.; Anas, A.H.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2022 Special Session and Competition on Single Objective Bound Constrained Numerical Optimization. 2021. Available online: https://github.com/P-N-Suganthan/2022-SO-BO/blob/main/CEC2022%20TR.pdf (accessed on 21 April 2024).
  44. Hendy, H.; Irawan, M.I.; Mukhlash, I.; Setumin, S. A Bibliometric Analysis of Metaheuristic Research and Its Applications. Regist. J. Ilm. Teknol. Sist. Inf. 2023, 9, 1–17. [Google Scholar] [CrossRef]
  45. Halim, A.H. Performance Assessment of the Metaheuristic Optimization Algorithms: An Exhaustive Review. Artif. Intell. Rev. 2023, 54, 2323–2409. [Google Scholar] [CrossRef]
  46. Stanovov, V.; Akhmedova, S.; Semenkin, E. LSHADE Algorithm with Rank-Based Selective Pressure Strategy for Solving CEC 2017 Benchmark Problems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–8. [Google Scholar]
  47. Hadi, A.A.; Mohamed, A.W.; Jambi, K.M. Single-Objective Real-Parameter Optimization: Enhanced LSHADE-SPACMA Algorithm; Springer: Berlin/Heidelberg, Germany, 2021; pp. 103–121. [Google Scholar]
  48. Cuong, L.V.; Bao, N.N.; Binh, H.T.T. A Multi-Start Local Search Algorithm with L-SHADE for Single Objective Bound Constrained Optimization; Technical Report. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Krakow, Poland, 28 June–1 July 2021. [Google Scholar]
  49. Polakova, R. L-SHADE with Competing Strategies Applied to Constrained Optimization. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia, Spain, 5–8 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1683–1689. [Google Scholar]
  50. Karaboga, D.; Basturk, B. A Powerful and Efficient Algorithm for Numerical Function Optimization: Artificial Bee Colony (ABC) Algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  51. Dorigo, M.; Stützle, T. The Ant Colony Optimization Metaheuristic: Algorithms, Applications, and Advances. In Handbook of Metaheuristics; Springer: Boston, MA, USA, 2003; pp. 250–285. [Google Scholar]
  52. Mirjalili, S. The Ant Lion Optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  53. Azizi, M. Atomic Orbital Search: A Novel Metaheuristic Algorithm. Appl. Math. Model. 2021, 93, 657–683. [Google Scholar] [CrossRef]
  54. Simon, D. Biogeography-Based Optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  55. Storn, R.; Price, K. Differential Evolution-A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  56. Yang, X.-S. Flower Pollination Algorithm for Global Optimisation. In Unconventional Computation and Natural Computation; Lect. Notes Comput. Sci. 2012, 7445, 240–249. [Google Scholar]
  57. Holland, J.H. Genetic Algorithms. Sci. Am. 1992, 267, 66–72. [Google Scholar] [CrossRef]
  58. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K. Gaining-Sharing Knowledge Based Algorithm for Solving Optimization Problems: A Novel Nature-Inspired Algorithm. Int. J. Mach. Learn. Cybern. 2020, 11, 1501–1529. [Google Scholar] [CrossRef]
  59. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A New Heuristic Optimization Algorithm: Harmony Search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  60. Tanabe, R.; Fukunaga, A.S. Improving the Search Performance of SHADE Using Linear Population Size Reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1658–1665. [Google Scholar]
  61. Mirjalili, S. Moth-Flame Optimization Algorithm: A Novel Nature-Inspired Heuristic Paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  62. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  63. Zhang, J.; Xiao, M.; Gao, L.; Pan, Q. Queuing Search Algorithm: A Novel Metaheuristic Algorithm for Solving Engineering Optimization Problems. Appl. Math. Model. 2018, 63, 464–490. [Google Scholar] [CrossRef]
  64. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  65. Glover, F. Tabu Search—Part I. ORSA J. Comput. 1989, 1, 190–206. [Google Scholar] [CrossRef]
  66. Sharma, P.; Raju, S. Metaheuristic Optimization Algorithms: A Comprehensive Overview and Classification of Benchmark Test Functions. Soft Comput 2023, 28, 3123–3186. [Google Scholar] [CrossRef]
  67. Soto Calvo, M.; Lee, H.S. Benchmark Functions Repository. Available online: https://github.com/msotocalvo/ESO/tree/main (accessed on 4 January 2025).
  68. Wilcoxon, F. Individual Comparisons by Ranking Methods. Biom. Bull. 1945, 1, 80. [Google Scholar] [CrossRef]
  69. Das, S.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems. Available online: https://github.com/P-N-Suganthan/CEC-2011--Real_World_Problems/blob/master/Tech-Rep.pdf (accessed on 24 January 2025).
  70. Herbold, S. Autorank: A Python Package for Automated Ranking of Classifiers. J. Open Source Softw. 2020, 5, 2173. [Google Scholar] [CrossRef]
  71. Kruschke, J.K.; Liddell, T.M. The Bayesian New Statistics: Hypothesis Testing, Estimation, Meta-Analysis, and Power Analysis from a Bayesian Perspective. Psychon. Bull. Rev. 2018, 25, 178–206. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the electrical storm optimization (ESO) algorithm illustrating the initialization of agents, the iterative adjustments of environmental parameters, and the continuous selection and refinement of solutions toward identifying the optimum.
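The flow in Figure 1 can be sketched in a few lines of Python: initialize a population of agents, recompute the three field parameters every iteration, and keep refining the incumbent best. The concrete update rules below (how resistance, intensity, and conductivity enter the step) are simplified illustrative stand-ins, not the published ESO equations:

```python
import random

def eso_sketch(objective, bounds, n_agents=20, iters=200, dim=2, seed=1):
    """Illustrative ESO-style loop with stand-in update rules."""
    rng = random.Random(seed)
    lo, hi = bounds
    agents = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    best = min(agents, key=objective)
    for t in range(iters):
        # Field resistance: spread of the population around its centroid
        # (a proxy for solution diversity).
        centroid = [sum(a[d] for a in agents) / n_agents for d in range(dim)]
        resistance = sum(sum((a[d] - centroid[d]) ** 2 for d in range(dim))
                         for a in agents) / n_agents
        # Field intensity: rises over time, shifting the search from
        # exploration toward exploitation of the best-known region.
        intensity = t / iters
        # Field conductivity: shrinks as resistance drops, damping step
        # sizes while the population concentrates.
        conductivity = min(1.0, resistance + 0.05)
        step = (1.0 - intensity) * conductivity
        for i, agent in enumerate(agents):
            cand = [min(hi, max(lo, x + (b - x) * intensity + rng.gauss(0.0, step)))
                    for x, b in zip(agent, best)]
            if objective(cand) < objective(agent):  # greedy replacement
                agents[i] = cand
        best = min(agents + [best], key=objective)
    return best

sphere = lambda v: sum(x * x for x in v)  # simple convex test function
best = eso_sketch(sphere, (-5.0, 5.0))
```

Even with these toy rules, the loop reproduces the qualitative behavior the flowchart describes: large, diverse moves while resistance is high and intensity low, collapsing to fine local refinement near the end of the run.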
Figure 2. Conceptual behavior of field intensity curves under different scenarios, showing the dynamic modulation of the transition between the exploration and exploitation stages.
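Intensity curves like those in Figure 2 can be mimicked with a parameterized sigmoid; this functional form (and the `steepness`/`midpoint` parameters) is an assumption made for illustration, not the exact schedule used by ESO:

```python
import math

def field_intensity(t, t_max, steepness=10.0, midpoint=0.5):
    """Sigmoid schedule: low intensity early (exploration), rising toward
    1 late (exploitation). Midpoint sets when the transition happens,
    steepness how sharp it is."""
    x = t / t_max
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

# Three scenarios: early, balanced, and late transition to exploitation.
curves = {m: [field_intensity(t, 100, midpoint=m) for t in range(101)]
          for m in (0.25, 0.5, 0.75)}
```

Shifting the midpoint earlier spends more of the budget exploiting; shifting it later preserves exploration longer, which is the trade-off the figure's scenarios depict.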
Figure 3. Convergence curves of the algorithms for unimodal problems. The ESO (red line) shows consistent convergence, whereas the MFO (pink line) shows the highest performance variability.
Figure 4. Convergence curves of the algorithms for multimodal problems. The ESO (red line) shows consistent convergence, whereas the MFO (pink line) and PSO (orange line) show greater performance variability.
Make 07 00024 g004
Figure 5. Normalized behavior of field resistance, field conductivity, field intensity, storm power, and progression toward the global best solution over 1000 iterations for benchmark functions F6, F20, F24, F33, F46, and F50.
Figure 6. Statistical results for the three groups of functions. The critical difference diagrams (A–D) illustrate the relative rankings of the algorithms, highlighting the groups of algorithms that are not significantly different from each other. The heatmaps (A1–D1) show the Bayesian probability that one algorithm outperforms another.
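The pairwise win probabilities in the Figure 6 heatmaps come from a Bayesian comparison in the spirit of [70,71]. A minimal frequentist stand-in — estimating P(A beats B) from paired benchmark scores by bootstrapping the mean difference — conveys the same idea; it is not the signed-rank model used in the paper.

```python
import numpy as np

def win_probability(scores_a, scores_b, n_boot=10_000, seed=0):
    """Bootstrap estimate of P(A beats B) from paired scores, lower = better.
    A simple stand-in for the Bayesian comparison behind the heatmaps."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    # Resample the paired differences and check how often A wins on average.
    idx = rng.integers(0, len(diff), size=(n_boot, len(diff)))
    return float(np.mean(diff[idx].mean(axis=1) < 0.0))
```

When one algorithm dominates on every paired run, the estimate saturates at 1.0, matching the near-certain cells visible in the heatmaps.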
Table 1. Overview of 20 metaheuristic algorithms considered for comparison and the default parameters used.
| No. | Algorithm (Abbreviation) | Parameters | Reference |
| 1 | Artificial Bee Colony (ABC) | n_limits = 50 | [51] |
| 2 | Ant Colony Optimization (ACO) | sample_count = 25, intent_factor = 0.5, zeta = 1.0 | [52] |
| 3 | Ant Lion Optimizer (ALO) | - | [53] |
| 4 | Atom Search Optimization (ASO) | alpha = 10, beta = 0.2 | [54] |
| 5 | Biogeography-Based Optimization (BBO) | p_m = 0.01, n_elites = 2 | [55] |
| 6 | Brown Bear Optimization Algorithm (BBOA) | - | [23] |
| 7 | Differential Evolution (DE) | wf = 0.7, cr = 0.9 | [56] |
| 8 | Energy Valley Optimizer (EVO) | - | [24] |
| 9 | Fick’s Law Algorithm (FLA) | C1 = 0.5, C2 = 2.0, C3 = 0.1, C4 = 0.2, C5 = 2.0, DD = 0.01 | [27] |
| 10 | Flower Pollination Algorithm (FPA) | p_s = 0.8, levy_mult = 0.2 | [57] |
| 11 | Genetic Algorithm (GA) | pc = 0.9, pm = 0.05, k_way = 0.4 | [58] |
| 12 | Gaining Sharing Knowledge-based Algorithm (GSKA) | pb = 0.1, kr = 0.7 | [59] |
| 13 | Harmony Search (HS) | c_r = 0.95, pa_r = 0.05 | [60] |
| 14 | Honey Badger Algorithm (HBA) | - | [26] |
| 15 | Linear population reduction SHADE (LSHADE) | miu_f = 0.5, miu_cr = 0.5 | [61] |
| 16 | Moth–Flame Optimization (MFO) | - | [62] |
| 17 | Particle Swarm Optimization (PSO) | c1 = 2.05, c2 = 2.05, w = 0.4 | [63] |
| 18 | Queuing Search Algorithm (QSA) | - | [64] |
| 19 | Simulated Annealing (SA) | temp_init = 100, step_size = 0.1 | [65] |
| 20 | Tabu Search (TS) | - | [66] |
Table 2. Selected primitive benchmark problems, including their bounds, dimensionality (N), optimal solution fitness (f_min(x)), characteristics, and sources. Linear (L); nonlinear (NL); convex (C); nonconvex (NC); separable (S); nonseparable (NS); scalable (SC); nonscalable (NSC).
| No. | Problem Name | Bounds | N | f_min(x) | Characteristics | Source |
Unimodal primitive benchmark functions
| F1 | Ackley 02 | −32 ≤ x_i ≤ 32 | 2 | −200 | NL, C, NS, NSC | [4,67] |
| F2 | Booth | −10 ≤ x_i ≤ 10 | 2 | 0 | NL, C, NS, NSC | [4,67] |
| F3 | Chung Reynolds | −100 ≤ x_i ≤ 100 | 100 | 0 | NL, C, NS, SC | [4,67] |
| F4 | El-Attar–Vidyasagar–Dutta | −500 ≤ x_i ≤ 500 | 2 | 1.7127 | NL, C, NS, NSC | [4,67] |
| F5 | Leon | −1.2 ≤ x_i ≤ 1.2 | 2 | 0 | NL, C, NS, NSC | [4,67] |
| F6 | Rosenbrock | −30 ≤ x_i ≤ 30 | 100 | 0 | NL, C, NS, SC | [4,67] |
| F7 | Ripple 01 | 0 ≤ x_i ≤ 1 | 100 | −2.2 | NL, NC, NS, NSC | [4,67] |
| F8 | Wayburn Seader 01 | −5 ≤ x_i ≤ 5 | 2 | 0 | NL, C, NS, SC | [4,67] |
| F9 | Wayburn Seader 02 | −50 ≤ x_i ≤ 50 | 2 | 0 | NL, C, NS, SC | [4,67] |
| F10 | Zirilli | −10 ≤ x_i ≤ 10 | 2 | −0.3523 | NL, C, S, NSC | [4,67] |
Multimodal primitive benchmark functions
| F11 | Adjiman | −1 ≤ x_1 ≤ 2; −1 ≤ x_2 ≤ 1 | 2 | −2.0218 | NL, C, NS, NSC | [4,67] |
| F12 | Branin 01 | −5 ≤ x_1 ≤ 10; 0 ≤ x_2 ≤ 15 | 2 | 0.39788 | NL, C, NS, NSC | [4,67] |
| F13 | Crowned Cross | −10 ≤ x_i ≤ 10 | 2 | 0.0001 | NL, NC, NS, NSC | [4,67] |
| F14 | Cross Leg Table | −10 ≤ x_i ≤ 10 | 2 | −1 | NL, NC, NS, NSC | [4,67] |
| F15 | Cross in Tray | −10 ≤ x_i ≤ 10 | 2 | −2.0626 | NL, NC, NS, NSC | [4,67] |
| F16 | Damavandi | 0 ≤ x_i ≤ 14 | 2 | 0 | NL, C, NS, NSC | [4,67] |
| F17 | Dolan | −100 ≤ x_i ≤ 100 | 5 | 0 | NL, C, NS, NSC | [4,67] |
| F18 | Easom | −100 ≤ x_i ≤ 100 | 2 | −1 | NL, C, S, NSC | [4,67] |
| F19 | Goldstein Price | −2 ≤ x_i ≤ 2 | 2 | 3 | NL, C, NS, NSC | [4,67] |
| F20 | Holder Table | −10 ≤ x_i ≤ 10 | 2 | −19.208 | NL, C, S, NSC | [4,67] |
| F21 | Lennard Jones | −4 ≤ x_i ≤ 4 | 10 | −3 | NL, NC, NS, SC | [4,67] |
| F22 | Mishra 01 | 0 ≤ x_i ≤ 1 | 100 | 2 | NL, C, NS, SC | [4,67] |
| F23 | Odd Square | −5π ≤ x_i ≤ 5π | 20 | −1.0084 | NL, NC, NS, NSC | [4] |
| F24 | Price 02 | −500 ≤ x_i ≤ 500 | 2 | 0.9 | NL, C, NS, NSC | [4,67] |
| F25 | Rosenbrock Modified | −2 ≤ x_i ≤ 2 | 2 | 34.37 | NL, C, NS, NSC | [4,67] |
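Several entries in Table 2 are standard closed-form test functions. As a quick sanity check of the listed optima, minimal implementations of F1 (Ackley 02), F2 (Booth), and F6 (Rosenbrock), following their usual definitions in the benchmark literature [4], are:

```python
import numpy as np

def ackley02(x):
    # F1: global minimum −200 at the origin (2-D).
    return -200.0 * np.exp(-0.02 * np.sqrt(x[0]**2 + x[1]**2))

def booth(x):
    # F2: global minimum 0 at (1, 3).
    return (x[0] + 2.0*x[1] - 7.0)**2 + (2.0*x[0] + x[1] - 5.0)**2

def rosenbrock(x):
    # F6: global minimum 0 at (1, ..., 1); scalable to any dimension.
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0*(x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2))
```

Evaluating each at its known optimum reproduces the f_min(x) values in the table.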
Table 3. Comparison of performance metrics across algorithms for primitive optimization problems.
| Function | Statistics | ESO | ALO | DE | FPA | FLA | GSKA | HS | LSHADE | MFO | PSO | QSA |
F1 Mean −2.0 × 102−2.0 × 102−2.0 × 102−2.0 × 102−2.0 × 102−2.0 × 102−2.0 × 102−2.0 × 102−2.0 × 102−2.0 × 102−2.0 × 102
std0.0 × 1001.1 × 10−60.0 × 1003.8 × 10−119.0 × 10−40.0 × 1005.7 × 10−40.0 × 1000.0 × 1000.0 × 1000.0 × 100
Dist.0.0 × 1001.1 × 10−60.0 × 1004.8 × 10−111.1 × 10−30.0 × 1001.1 × 10−30.0 × 1000.0 × 1000.0 × 1000.0 × 100
F2Mean0.0 × 1005.9 × 10−140.0 × 1008.5 × 10−223.7 × 10−50.0 × 1005.6 × 10−80.0 × 1000.0 × 1000.0 × 1000.0 × 100
std0.0 × 1008.2 × 10−140.0 × 1001.2 × 10−215.5 × 10−50.0 × 1005.8 × 10−80.0 × 1000.0 × 1000.0 × 1000.0 × 100
Dist.0.0 × 1005.9 × 10−140.0 × 1008.5 × 10−223.7 × 10−50.0 × 1005.6 × 10−80.0 × 1000.0 × 1000.0 × 1000.0 × 100
F3Mean0.0 × 1001.3 × 10−23.7 × 1078.1 × 1061.2 × 1021.1 × 1021.3 × 1073.9 × 10−113.2 × 1086.0 × 1074.6 × 106
std0.0 × 1001.9 × 10−21.6 × 1079.6 × 1054.8 × 1014.6 × 1023.0 × 1061.2 × 10−105.3 × 1088.3 × 1072.1 × 107
Dist.0.0 × 1001.3 × 10−23.7 × 1078.1 × 1061.2 × 1021.1 × 1021.3 × 1073.9 × 10−113.2 × 1086.0 × 1074.6 × 106
F4Mean1.7 × 1001.7 × 1001.7 × 1001.7 × 1001.7 × 1001.7 × 1005.1 × 1001.7 × 1001.7 × 1007.0 × 1001.7 × 100
std0.0 × 1002.6 × 10−94.6 × 10−165.4 × 10−123.4 × 10−23.1 × 10−165.8 × 1001.8 × 10−163.4 × 10−161.5 × 1012.8 × 10−16
Dist.8.0 × 10−58.0 × 10−58.0 × 10−58.0 × 10−51.3 × 10−28.0 × 10−53.3 × 1008.0 × 10−58.0 × 10−55.3 × 1008.0 × 10−5
F5Mean6.0 × 10−171.9 × 10−150.0 × 1003.2 × 10−195.0 × 10−62.2 × 10−303.7 × 10−20.0 × 1000.0 × 1000.0 × 1001.3 × 10−29
std2.9 × 10−162.1 × 10−150.0 × 1007.5 × 10−195.5 × 10−66.1 × 10−306.1 × 10−20.0 × 1000.0 × 1000.0 × 1003.4 × 10−29
Dist.6.0 × 10−171.9 × 10−150.0 × 1003.2 × 10−195.0 × 10−62.2 × 10−303.7 × 10−20.0 × 1000.0 × 1000.0 × 1001.3 × 10−29
F6Mean0.0 × 1009.8 × 1012.1 × 1024.4 × 1020.0 × 1001.4 × 1023.2 × 1021.3 × 1022.2 × 1024.2 × 1029.9 × 101
std0.0 × 1001.1 × 10−16.1 × 1011.9 × 1010.0 × 1004.0 × 1015.5 × 1013.3 × 1015.6 × 1014.7 × 1012.9 × 10−1
Dist.0.0 × 1009.8 × 1012.1 × 1024.4 × 1020.0 × 1001.4 × 1023.2 × 1021.3 × 1022.2 × 1024.2 × 1029.9 × 101
F7Mean−2.2 × 100−2.1 × 100−2.2 × 100−2.2 × 100−2.1 × 100−2.2 × 100−2.2 × 100−2.2 × 100−2.2 × 100−2.1 × 100−2.2 × 100
std3.9 × 10−43.4 × 10−27.3 × 10−162.8 × 10−43.4 × 10−23.8 × 10−162.6 × 10−37.5 × 10−164.6 × 10−163.3 × 10−27.0 × 10−16
Dist.4.7 × 10−25.4 × 10−24.7 × 10−24.7 × 10−25.7 × 10−24.7 × 10−24.9 × 10−24.7 × 10−24.7 × 10−25.5 × 10−24.7 × 10−2
F8Mean2.8 × 10−198.0 × 10−133.2 × 10−319.6 × 10−114.0 × 10−46.1 × 10−85.8 × 10−22.5 × 10−313.7 × 10−193.8 × 10−311.6 × 10−31
std1.2 × 10−181.1 × 10−123.9 × 10−312.3 × 10−107.0 × 10−42.2 × 10−71.1 × 10−13.7 × 10−311.8 × 10−183.9 × 10−313.2 × 10−31
Dist.2.8 × 10−198.0 × 10−133.2 × 10−319.6 × 10−114.0 × 10−46.1 × 10−85.8 × 10−22.5 × 10−313.7 × 10−193.8 × 10−311.6 × 10−31
F9Mean2.1 × 10−205.1 × 10−110.0 × 1003.8 × 10−121.5 × 10−59.6 × 10−198.2 × 10−20.0 × 1003.8 × 10−310.0 × 1005.9 × 10−33
std1.0 × 10−196.4 × 10−110.0 × 1005.5 × 10−121.5 × 10−54.0 × 10−189.2 × 10−20.0 × 1001.7 × 10−300.0 × 1002.1 × 10−32
Dist.2.1 × 10−205.1 × 10−110.0 × 1003.8 × 10−121.5 × 10−59.6 × 10−198.2 × 10−20.0 × 1003.8 × 10−310.0 × 1005.9 × 10−33
F10Mean−3.5 × 10−1−3.5 × 10−1−3.5 × 10−1−3.5 × 10−1−3.5 × 10−1−3.5 × 10−1−3.5 × 10−1−3.5 × 10−1−3.5 × 10−1−3.5 × 10−1−3.5 × 10−1
std5.6 × 10−172.0 × 10−145.6 × 10−175.6 × 10−177.9 × 10−75.6 × 10−175.2 × 10−105.6 × 10−175.6 × 10−175.6 × 10−175.6 × 10−17
Dist.8.6 × 10−58.6 × 10−58.6 × 10−58.6 × 10−58.5 × 10−58.6 × 10−58.6 × 10−58.6 × 10−58.6 × 10−58.6 × 10−58.6 × 10−5
F11Mean−2.0 × 100−2.0 × 100−2.0 × 100−2.0 × 100−2.0 × 100−2.0 × 100−2.0 × 100−2.0 × 100−2.0 × 100−2.0 × 100−2.0 × 100
std0.0 × 1000.0 × 1000.0 × 1003.9 × 10−141.2 × 10−90.0 × 1009.9 × 10−140.0 × 1000.0 × 1000.0 × 1000.0 × 100
Dist.6.8 × 10−66.8 × 10−66.8 × 10−66.8 × 10−66.8 × 10−66.8 × 10−66.8 × 10−66.8 × 10−66.8 × 10−66.8 × 10−66.8 × 10−6
F12Mean4.0 × 10−14.0 × 10−14.0 × 10−14.0 × 10−14.0 × 10−14.0 × 10−14.0 × 10−14.0 × 10−14.0 × 10−14.0 × 10−14.0 × 10−1
std0.0 × 1002.7 × 10−140.0 × 1004.1 × 10−137.8 × 10−50.0 × 1006.5 × 10−90.0 × 1000.0 × 1000.0 × 1000.0 × 100
Dist.7.4 × 10−67.4 × 10−67.4 × 10−67.4 × 10−68.7 × 10−57.4 × 10−67.4 × 10−67.4 × 10−67.4 × 10−67.4 × 10−67.4 × 10−6
F13Mean1.0 × 10−41.7 × 10−24.5 × 10−37.6 × 10−22.2 × 10−11.0 × 10−41.5 × 10−11.0 × 10−31.7 × 10−31.0 × 10−31.2 × 10−3
std0.0 × 1002.6 × 10−27.1 × 10−34.2 × 10−24.3 × 10−20.0 × 1003.0 × 10−24.0 × 10−41.0 × 10−34.1 × 10−49.0 × 10−6
Dist.0.0 × 1001.7 × 10−24.4 × 10−37.6 × 10−22.2 × 10−10.0 × 1001.5 × 10−19.1 × 10−41.6 × 10−39.3 × 10−41.1 × 10−3
F14Mean−1.0 × 100−6.4 × 10−1−1.2 × 10−1−1.8 × 10−3−4.7 × 10−4−1.0 × 100−7.1 × 10−4−2.7 × 10−1−7.3 × 10−2−2.3 × 10−1−8.4 × 10−2
std0.0 × 1004.8 × 10−11.8 × 10−11.8 × 10−39.3 × 10−50.0 × 1001.4 × 10−43.7 × 10−12.5 × 10−23.4 × 10−17.8 × 10−4
Dist.0.0 × 1003.6 × 10−18.8 × 10−110.0 × 10−110.0 × 10−10.0 × 10010.0 × 10−17.3 × 10−19.3 × 10−17.7 × 10−19.2 × 10−1
F15Mean−2.1 × 100−2.1 × 100−2.1 × 100−2.1 × 100−2.1 × 100−2.1 × 100−2.1 × 100−2.1 × 100−2.1 × 100−2.1 × 100−2.1 × 100
std0.0 × 1004.0 × 10−150.0 × 1001.0 × 10−143.4 × 10−70.0 × 1005.4 × 10−110.0 × 1000.0 × 1000.0 × 1000.0 × 100
Dist.1.2 × 10−51.2 × 10−51.2 × 10−51.2 × 10−51.2 × 10−51.2 × 10−51.2 × 10−51.2 × 10−51.2 × 10−51.2 × 10−51.2 × 10−5
F16Mean−9.2 × 10−141.4 × 1001.9 × 1001.6 × 1001.8 × 1002.0 × 1002.0 × 1002.0 × 1002.0 × 1002.0 × 1001.4 × 100
std2.3 × 10−149.0 × 10−13.9 × 10−18.0 × 10−16.5 × 10−11.3 × 10−26.2 × 10−100.0 × 1000.0 × 1000.0 × 1008.8 × 10−1
Dist.9.2 × 10−141.4 × 1001.9 × 1001.6 × 1001.8 × 1002.0 × 1002.0 × 1002.0 × 1002.0 × 1002.0 × 1001.4 × 100
F17Mean0.0 × 1001.8 × 10−65.0 × 10−42.7 × 10−41.6 × 10−32.2 × 10−31.9 × 10−42.3 × 10−41.9 × 10−42.1 × 10−69.5 × 10−4
std0.0 × 1002.6 × 10−61.4 × 10−33.6 × 10−42.0 × 10−32.0 × 10−32.7 × 10−42.0 × 10−44.7 × 10−46.0 × 10−61.3 × 10−3
Dist.0.0 × 1001.8 × 10−65.0 × 10−42.7 × 10−41.6 × 10−32.2 × 10−31.9 × 10−42.3 × 10−41.9 × 10−42.1 × 10−69.5 × 10−4
F18Mean−1.0 × 100−10.0 × 10−1−1.0 × 100−10.0 × 10−1−10.0 × 10−1−1.0 × 100−9.2 × 10−1−1.0 × 100−8.4 × 10−1−1.0 × 100−1.0 × 100
std0.0 × 1004.7 × 10−120.0 × 1003.7 × 10−125.8 × 10−40.0 × 1002.6 × 10−10.0 × 1003.7 × 10−10.0 × 1000.0 × 100
Dist.0.0 × 1003.7 × 10−120.0 × 1002.0 × 10−122.3 × 10−40.0 × 1007.6 × 10−20.0 × 1001.6 × 10−10.0 × 1000.0 × 100
F19Mean3.0 × 1003.0 × 1003.0 × 1003.0 × 1003.0 × 1003.0 × 1003.0 × 1003.0 × 1003.0 × 1003.0 × 1003.0 × 100
std2.4 × 10−153.3 × 10−135.7 × 10−162.5 × 10−151.0 × 10−45.1 × 10−161.4 × 10−91.3 × 10−154.4 × 10−161.7 × 10−154.4 × 10−16
Dist.7.7 × 10−141.9 × 10−137.9 × 10−147.5 × 10−146.3 × 10−57.9 × 10−148.6 × 10−107.9 × 10−147.9 × 10−147.7 × 10−147.9 × 10−14
F20Mean−1.9 × 101−1.9 × 101−1.9 × 101−1.9 × 101−1.9 × 101−1.9 × 101−1.9 × 101−1.9 × 101−1.9 × 101−1.9 × 101−1.9 × 101
std3.0 × 10−152.4 × 10−132.3 × 10−16.9 × 10−125.8 × 10−52.1 × 10−157.3 × 10−94.8 × 10−154.8 × 10−154.5 × 10−153.7 × 10−15
Dist.5.0 × 10−45.0 × 10−44.7 × 10−25.0 × 10−44.4 × 10−45.0 × 10−45.0 × 10−45.0 × 10−45.0 × 10−45.0 × 10−45.0 × 10−4
F21Mean−3.0 × 100−3.0 × 100−3.0 × 100−3.0 × 100−3.0 × 100−2.9 × 100−3.0 × 100−3.0 × 100−3.0 × 100−3.0 × 100−3.0 × 100
std0.0 × 1009.7 × 10−132.9 × 10−37.5 × 10−52.7 × 10−41.2 × 10−11.4 × 10−91.6 × 10−51.3 × 10−80.0 × 1001.3 × 10−15
Dist.0.0 × 1001.1 × 10−127.5 × 10−48.5 × 10−52.9 × 10−41.2 × 10−17.0 × 10−101.3 × 10−52.9 × 10−90.0 × 1000.0 × 100
F22Mean2.0 × 1002.0 × 1002.0 × 1002.0 × 1002.0 × 1002.0 × 1002.0 × 1002.0 × 1002.0 × 1002.1 × 1002.0 × 100
std0.0 × 1000.0 × 1000.0 × 1003.0 × 10−30.0 × 1000.0 × 1000.0 × 1000.0 × 1000.0 × 1001.3 × 10−10.0 × 100
Dist.0.0 × 1000.0 × 1000.0 × 1001.6 × 10−20.0 × 1000.0 × 1000.0 × 1000.0 × 1000.0 × 1001.5 × 10−10.0 × 100
F23Mean−5.3 × 10−1−3.2 × 10−2−4.4 × 10−3−1.6 × 10−12−1.2 × 10−1−7.3 × 10−1−7.6 × 10−20−3.2 × 10−1−1.2 × 10−1−3.8 × 10−14−1.4 × 10−3
std2.3 × 10−17.2 × 10−21.6 × 10−24.3 × 10−129.7 × 10−23.9 × 10−23.7 × 10−191.4 × 10−17.5 × 10−21.9 × 10−136.9 × 10−3
Dist.4.7 × 10−19.8 × 10−11.0 × 1001.0 × 1008.9 × 10−12.8 × 10−11.0 × 1006.9 × 10−18.9 × 10−11.0 × 1001.0 × 100
F24Mean9.0 × 10−11.0 × 10010.0 × 10−11.0 × 1009.9 × 10−19.7 × 10−11.0 × 1009.9 × 10−11.0 × 10010.0 × 10−110.0 × 10−1
std0.0 × 1009.5 × 10−115.5 × 10−33.9 × 10−52.6 × 10−24.7 × 10−26.8 × 10−72.9 × 10−23.9 × 10−51.6 × 10−21.9 × 10−2
Dist.0.0 × 1001.0 × 10−19.9 × 10−21.0 × 10−19.2 × 10−26.8 × 10−21.0 × 10−18.9 × 10−21.0 × 10−19.7 × 10−29.6 × 10−2
F25Mean3.4 × 1013.4 × 1014.7 × 1013.7 × 1013.4 × 1016.7 × 1015.6 × 1015.2 × 1015.2 × 1016.4 × 1013.4 × 101
std0.0 × 1002.6 × 10−121.9 × 1011.1 × 1018.5 × 10−41.0 × 1012.0 × 1012.0 × 1012.0 × 1011.7 × 1010.0 × 100
Dist.3.3 × 10−13.3 × 10−11.2 × 1012.9 × 1003.3 × 10−13.2 × 1012.2 × 1011.7 × 1011.7 × 1013.0 × 1013.3 × 10−1
Table 4. Selected shifted and rotated benchmark problems, including their transformation, bounds, dimensionality (N), characteristics, and optimal solution fitness (f_min(x)). Linear (L); nonlinear (NL); convex (C); nonconvex (NC); scalable (SC); nonscalable (NSC). The Python implementation of the transformed functions can be found in [67].
| No. | Problem Name | Transformation | Bounds | N | Characteristics | f_min(x) |
| F26 | Ackley 01 | Shifted, Rotated | −32.7 ≤ x_i ≤ 32.7 | 30 | NL, NC, SC | 0 |
| F27 | Alpine 01 | Shifted, Rotated | −10 ≤ x_i ≤ 10 | 30 | NL, NC, SC | 0 |
| F28 | Booth | Shifted, Rotated | −10 ≤ x_i ≤ 10 | 2 | NL, C, NSC | 0 |
| F29 | Brown | Shifted, Rotated | −1 ≤ x_i ≤ 4 | 30 | NL, NC, SC | 0 |
| F30 | Csendes | Shifted, Rotated | −2 ≤ x_i ≤ 2 | 30 | NL, NC, SC | 0 |
| F31 | Chung Reynolds | Shifted, Rotated | −100 ≤ x_i ≤ 100 | 30 | NL, C, SC | 0 |
| F32 | Dixon Price | Shifted, Rotated | −10 ≤ x_i ≤ 10 | 30 | NL, C, SC | 0 |
| F33 | Gulf | Shifted, Rotated | 0 ≤ x_i ≤ 100 | 3 | NL, NC, NSC | 0 |
| F34 | Griewank | Shifted, Rotated | −600 ≤ x_i ≤ 600 | 30 | NL, NC, SC | 0 |
| F35 | Powell Sum | Shifted, Rotated | −5 ≤ x_i ≤ 5 | 30 | NL, C, SC | 0 |
| F36 | Penalized 02 | Shifted, Rotated | −50 ≤ x_i ≤ 50 | 30 | NL, NC, SC | 0 |
| F37 | Quartic with Noise | Shifted, Rotated | −1.28 ≤ x_i ≤ 1.28 | 30 | NL, C, SC | 0 |
| F38 | Ripple 01 | Shifted, Rotated | 0 ≤ x_i ≤ 1 | 2 | NL, NC, NSC | −2.2 |
| F39 | Schwefel 2.21 | Shifted, Rotated | −100 ≤ x_i ≤ 100 | 30 | NL, C, SC | 0 |
| F40 | Sphere | Shifted, Rotated | −100 ≤ x_i ≤ 100 | 30 | NL, C, SC | 0 |
| F41 | Step 01 | Shifted, Rotated | −100 ≤ x_i ≤ 100 | 30 | NL, NC, SC | 0 |
| F42 | Salomon | Shifted, Rotated | −100 ≤ x_i ≤ 100 | 30 | NL, NC, SC | 0 |
| F43 | Schaffer 02 | Shifted, Rotated | −100 ≤ x_i ≤ 100 | 30 | NL, NC, SC | 0 |
| F44 | Xin She Yang 02 | Shifted, Rotated | −2π ≤ x_i ≤ 2π | 30 | NL, NC, SC | 0 |
| F45 | Zakharov | Shifted, Rotated | −10 ≤ x_i ≤ 10 | 30 | NL, C, SC | 0 |
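The transformed problems in Table 4 follow the usual construction f'(x) = f(R(x − o)): shifting moves the optimum away from the origin, and rotating by an orthogonal matrix R breaks separability. A minimal sketch of this wrapper (the specific shift vectors and rotation matrices used in the paper are not reproduced here) is:

```python
import numpy as np

def shift_rotate(f, shift, rotation):
    """Wrap a primitive benchmark f as x -> f(R (x - o)): the optimum moves
    to `shift` and the rotation couples the variables."""
    return lambda x: f(rotation @ (np.asarray(x, dtype=float) - shift))

def random_rotation(dim, seed=0):
    # Random orthogonal matrix from the QR decomposition of a Gaussian matrix,
    # with column signs fixed so the factorization is unique.
    q, r = np.linalg.qr(np.random.default_rng(seed).normal(size=(dim, dim)))
    return q * np.sign(np.diag(r))
```

Because R is orthogonal and the shift cancels at x = o, the transformed function attains the same optimal fitness at the new optimum location.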
Table 5. Comparison of performance metrics across algorithms for shifted and rotated optimization problems.
| Function | Statistics | ESO | ALO | DE | FPA | FLA | GSKA | HS | LSHADE | MFO | PSO | QSA |
F26 Mean 6.8 × 10−112.5 × 1001.6 × 1007.6 × 1003.5 × 1008.0 × 10−12.2 × 1007.1 × 10−151.0 × 1001.0 × 1018.0 × 10−2
std4.7 × 10−114.0 × 1005.4 × 10−14.5 × 10−12.1 × 10−16.7 × 10−16.8 × 10−10.0 × 1001.1 × 1003.1 × 1004.5 × 10−2
Dist.6.8 × 10−112.5 × 1001.6 × 1007.6 × 1003.5 × 1008.0 × 10−12.2 × 1007.1 × 10−151.0 × 1001.0 × 1018.0 × 10−2
F27Mean1.1 × 10−12.7 × 1001.8 × 1011.2 × 1011.3 × 1012.2 × 10−29.0 × 10−17.8 × 10−39.7 × 1006.1 × 1001.1 × 101
std9.4 × 10−22.7 × 1003.1 × 1009.1 × 10−15.1 × 1001.5 × 10−24.9 × 10−11.0 × 10−24.4 × 1002.5 × 1004.2 × 100
Dist.1.1 × 10−12.7 × 1001.8 × 1011.2 × 1011.3 × 1012.2 × 10−29.0 × 10−17.8 × 10−39.7 × 1006.1 × 1001.1 × 101
F28Mean1.5 × 10−95.9 × 10−52.5 × 1008.3 × 10−13.8 × 1002.9 × 10−101.4 × 1002.0 × 10−155.5 × 1011.1 × 1012.9 × 101
std2.5 × 10−93.7 × 10−56.9 × 10−110.0 × 10−29.1 × 10−17.8 × 10−103.5 × 10−14.4 × 10−151.1 × 1021.3 × 1016.2 × 101
Dist.1.5 × 10−95.9 × 10−52.5 × 1008.3 × 10−13.8 × 1002.9 × 10−101.4 × 1002.0 × 10−155.5 × 1011.1 × 1012.9 × 101
F29Mean5.6 × 10−215.0 × 10−111.6 × 10−32.4 × 10−27.4 × 10−21.6 × 10−218.6 × 10−53.6 × 10−331.2 × 10−18.2 × 10−31.9 × 100
std1.4 × 10−201.2 × 10−115.9 × 10−44.2 × 10−33.2 × 10−24.7 × 10−211.4 × 10−44.3 × 10−333.5 × 10−11.3 × 10−22.1 × 100
Dist.5.6 × 10−215.0 × 10−111.6 × 10−32.4 × 10−27.4 × 10−21.6 × 10−218.6 × 10−53.6 × 10−331.2 × 10−18.2 × 10−31.9 × 100
F30Mean2.1 × 10−491.7 × 10−102.8 × 10−82.9 × 10−74.4 × 10−73.2 × 10−653.7 × 10−71.4 × 10−756.6 × 10−169.0 × 10−77.7 × 10−13
std3.9 × 10−494.4 × 10−101.9 × 10−81.3 × 10−72.4 × 10−79.7 × 10−655.3 × 10−74.0 × 10−752.0 × 10−159.1 × 10−78.9 × 10−13
Dist.2.1 × 10−491.7 × 10−102.8 × 10−82.9 × 10−74.4 × 10−73.2 × 10−653.7 × 10−71.4 × 10−756.6 × 10−169.0 × 10−77.7 × 10−13
F31Mean1.7 × 10−458.9 × 10−131.2 × 1015.9 × 1001.1 × 1035.1 × 10−425.5 × 10−20.0 × 1002.2 × 10−277.2 × 1002.3 × 10−5
std2.6 × 10−451.9 × 10−121.6 × 1012.7 × 1004.2 × 1021.5 × 10−414.1 × 10−20.0 × 1003.8 × 10−271.2 × 1012.2 × 10−5
Dist.1.7 × 10−458.9 × 10−131.2 × 1015.9 × 1001.1 × 1035.1 × 10−425.5 × 10−20.0 × 1002.2 × 10−277.2 × 1002.3 × 10−5
F32Mean6.9 × 10−11.6 × 1006.0 × 1003.4 × 1001.8 × 1016.7 × 10−15.4 × 1006.7 × 10−17.1 × 10−18.2 × 1008.3 × 10−1
std3.5 × 10−21.6 × 1003.2 × 1005.2 × 10−18.9 × 1002.2 × 10−68.8 × 1001.3 × 10−149.1 × 10−28.6 × 1002.4 × 10−1
Dist.6.9 × 10−11.6 × 1006.0 × 1003.4 × 1001.8 × 1016.7 × 10−15.4 × 1006.7 × 10−17.1 × 10−18.2 × 1008.3 × 10−1
F33Mean1.7 × 10−32.1 × 10−32.2 × 10−32.3 × 10−51.3 × 10−28.9 × 10−35.8 × 10−36.4 × 10−54.8 × 10−32.6 × 10−31.4 × 10−5
std1.1 × 10−32.5 × 10−31.3 × 10−31.7 × 10−52.2 × 10−26.8 × 10−33.3 × 10−31.9 × 10−42.9 × 10−33.9 × 10−33.4 × 10−5
Dist.1.7 × 10−32.1 × 10−32.2 × 10−32.3 × 10−51.3 × 10−28.9 × 10−35.8 × 10−36.4 × 10−54.8 × 10−32.6 × 10−31.4 × 10−5
F34Mean4.9 × 10−49.3 × 10−42.2 × 1015.1 × 10−22.2 × 10−36.9 × 10−44.4 × 10−33.6 × 10−61.1 × 1018.8 × 1005.3 × 100
std4.3 × 10−43.2 × 10−43.8 × 1014.7 × 10−21.4 × 10−32.9 × 10−42.8 × 10−35.2 × 10−62.4 × 1012.5 × 1011.6 × 101
Dist.4.9 × 10−49.3 × 10−42.2 × 1015.1 × 10−22.2 × 10−36.9 × 10−44.4 × 10−33.6 × 10−61.1 × 1018.8 × 1005.3 × 100
F35Mean1.1 × 10−212.6 × 10−71.6 × 1002.4 × 1012.2 × 1004.4 × 10−231.6 × 10−21.3 × 10−301.1 × 10−123.4 × 1014.5 × 10−3
std2.1 × 10−212.3 × 10−71.4 × 1003.4 × 1004.4 × 10−11.2 × 10−222.9 × 10−21.2 × 10−301.4 × 10−121.7 × 1013.8 × 10−3
Dist.1.1 × 10−212.6 × 10−71.6 × 1002.4 × 1012.2 × 1004.4 × 10−231.6 × 10−21.3 × 10−301.1 × 10−123.4 × 1014.5 × 10−3
F36Mean5.4 × 10−29.2 × 10−26.7 × 10−22.5 × 10−12.4 × 10−26.7 × 10−32.2 × 10−25.0 × 10−32.2 × 10−21.5 × 10−13.3 × 10−2
std6.6 × 10−23.1 × 10−22.1 × 10−21.9 × 10−28.6 × 10−32.1 × 10−37.2 × 10−31.5 × 10−31.1 × 10−29.9 × 10−29.3 × 10−3
Dist.5.4 × 10−29.2 × 10−26.7 × 10−22.5 × 10−12.4 × 10−26.7 × 10−32.2 × 10−25.0 × 10−32.2 × 10−21.5 × 10−13.3 × 10−2
F37Mean−2.2 × 100−2.2 × 100−2.2 × 100−1.7 × 100−2.1 × 100−2.2 × 100−1.5 × 100−2.0 × 100−2.2 × 100−1.3 × 100−2.2 × 100
std9.0 × 10−41.5 × 10−118.4 × 10−44.7 × 10−11.1 × 10−12.4 × 10−34.6 × 10−13.9 × 10−17.9 × 10−42.9 × 10−14.2 × 10−4
Dist.2.2 × 1002.2 × 1002.2 × 1001.7 × 1002.1 × 1002.2 × 1001.5 × 1002.0 × 1002.2 × 1001.3 × 1002.2 × 100
F38Mean8.1 × 10−22.0 × 1016.9 × 1002.3 × 1013.7 × 1001.0 × 10−63.4 × 1002.2 × 10−31.3 × 1011.5 × 1015.0 × 100
std1.1 × 10−13.6 × 1001.1 × 1003.0 × 1006.4 × 10−17.1 × 10−71.6 × 1002.2 × 10−31.1 × 1017.2 × 1003.1 × 100
Dist.2.3 × 1002.2 × 1019.1 × 1002.5 × 1015.9 × 1002.2 × 1005.6 × 1002.2 × 1001.5 × 1011.7 × 1017.2 × 100
F39Mean1.2 × 10−173.7 × 10−72.9 × 1002.2 × 1002.8 × 1011.8 × 10−211.7 × 10−10.0 × 1003.0 × 10−144.3 × 1005.5 × 10−3
std7.6 × 10−182.7 × 10−71.6 × 1005.7 × 10−17.8 × 1005.2 × 10−214.3 × 10−20.0 × 1006.0 × 10−148.0 × 1003.1 × 10−3
Dist.1.2 × 10−173.7 × 10−72.9 × 1002.2 × 1002.8 × 1011.8 × 10−211.7 × 10−10.0 × 1003.0 × 10−144.3 × 1005.5 × 10−3
F40Mean0.0 × 1001.1 × 1016.5 × 1004.4 × 1015.0 × 1010.0 × 1001.3 × 1000.0 × 1003.5 × 1001.8 × 1020.0 × 100
std0.0 × 1002.1 × 1002.4 × 1008.5 × 1001.2 × 1010.0 × 1001.1 × 1000.0 × 1002.3 × 1002.2 × 1020.0 × 100
Dist.0.0 × 1001.1 × 1016.5 × 1004.4 × 1015.0 × 1010.0 × 1001.3 × 1000.0 × 1003.5 × 1001.8 × 1020.0 × 100
F41Mean3.6 × 10−16.0 × 10−11.6 × 1004.5 × 1001.2 × 1002.0 × 10−15.6 × 10−12.6 × 10−16.9 × 10−11.8 × 1008.3 × 10−1
std9.2 × 10−21.0 × 10−13.1 × 10−12.0 × 10−11.1 × 10−11.5 × 10−81.2 × 10−14.7 × 10−21.4 × 10−19.3 × 10−11.1 × 10−1
Dist.3.6 × 10−16.0 × 10−11.6 × 1004.5 × 1001.2 × 1002.0 × 10−15.6 × 10−12.6 × 10−16.9 × 10−11.8 × 1008.3 × 10−1
F42Mean1.8 × 10−163.7 × 10−24.6 × 10−14.9 × 10−11.0 × 10−15.3 × 10−162.4 × 10−22.2 × 10−166.3 × 10−44.9 × 10−22.6 × 10−1
std8.9 × 10−176.6 × 10−21.0 × 10−21.5 × 10−31.8 × 10−21.1 × 10−162.3 × 10−20.0 × 1001.5 × 10−36.9 × 10−25.8 × 10−2
Dist.1.8 × 10−163.7 × 10−24.6 × 10−14.9 × 10−11.0 × 10−15.3 × 10−162.4 × 10−22.2 × 10−166.3 × 10−44.9 × 10−22.6 × 10−1
F43Mean2.6 × 10−112.5 × 10−72.9 × 10−61.1 × 10−97.7 × 10−91.2 × 10−61.1 × 10−61.9 × 10−101.5 × 10−68.3 × 10−83.9 × 10−7
std1.9 × 10−117.6 × 10−71.6 × 10−64.4 × 10−101.2 × 10−86.3 × 10−78.6 × 10−71.1 × 10−102.3 × 10−61.1 × 10−75.0 × 10−7
Dist.2.6 × 10−112.5 × 10−72.9 × 10−61.1 × 10−97.7 × 10−91.2 × 10−61.1 × 10−61.9 × 10−101.5 × 10−68.3 × 10−83.9 × 10−7
F44Mean2.9 × 10−68.5 × 10−12.2 × 1028.6 × 1013.1 × 1006.3 × 1016.2 × 1012.6 × 10−24.7 × 1016.5 × 1011.5 × 101
std7.5 × 10−69.1 × 10−12.4 × 1011.2 × 1016.8 × 10−11.7 × 1011.7 × 1013.9 × 10−22.1 × 1014.0 × 1014.1 × 100
Dist.2.9 × 10−68.5 × 10−12.2 × 1028.6 × 1013.1 × 1006.3 × 1016.2 × 1012.6 × 10−24.7 × 1016.5 × 1011.5 × 101
F45Mean0.0 × 1001.0 × 1003.2 × 1001.1 × 1002.0 × 1003.1 × 1003.0 × 1002.2 × 10−12.1 × 1001.5 × 1001.0 × 100
std0.0 × 1004.8 × 10−38.4 × 10−13.2 × 10−23.4 × 10−19.9 × 10−11.4 × 1004.5 × 10−19.2 × 10−14.7 × 10−15.6 × 10−2
Dist.0.0 × 1001.0 × 1003.2 × 1001.1 × 1002.0 × 1003.1 × 1003.0 × 1002.2 × 10−12.1 × 1001.5 × 1001.0 × 100
Table 6. CEC 2022 bound-constrained single-objective optimization problem characteristics.
| Function | Type | Problem | f_min(x) |
| F46 | Shifted–rotated primitive functions | Shifted and fully rotated Zakharov | 300 |
| F47 | | Shifted and fully rotated Rosenbrock | 400 |
| F48 | | Shifted and fully rotated expanded Schaffer 06 | 600 |
| F49 | | Shifted and fully rotated noncontinuous Rastrigin | 800 |
| F50 | | Shifted and fully rotated Levy | 900 |
| F51 | Hybrid functions | Hybrid Function 1 (N = 3) | 1800 |
| F52 | | Hybrid Function 2 (N = 6) | 2000 |
| F53 | | Hybrid Function 3 (N = 5) | 2200 |
| F54 | Composition functions | Composition Function 1 (N = 5) | 2300 |
| F55 | | Composition Function 2 (N = 4) | 2400 |
| F56 | | Composition Function 3 (N = 5) | 2600 |
| F57 | | Composition Function 4 (N = 6) | 2700 |
Bounds: −100 ≤ x_i ≤ 100; Dimension = [10, 20]
Table 7. Comparison of performance metrics across algorithms for the CEC SOBC 2022 problems (D = 10 and D = 20).
| Function | Statistics | ESO | ALO | DE | FPA | FLA | GSKA | HS | LSHADE | MFO | PSO | QSA |
D = 10
F46 Mean 3.0 × 1023.0 × 1023.0 × 1023.0 × 1023.0 × 1023.0 × 1023.0 × 1023.0 × 1023.0 × 1023.0 × 1023.0 × 102
std0.0 × 1001.6 × 10−90.0 × 1002.7 × 10−41.1 × 1002.3 × 10−144.8 × 10−40.0 × 1006.0 × 10−145.7 × 10−147.2 × 10−11
Dist.0.0 × 1003.4 × 10−90.0 × 1008.4 × 10−41.7 × 1000.0 × 1007.0 × 10−40.0 × 1000.0 × 1005.7 × 10−143.9 × 10−11
F47Mean4.0 × 1024.1 × 1024.1 × 1024.0 × 1024.1 × 1024.1 × 1024.1 × 1024.1 × 1024.1 × 1024.1 × 1024.1 × 102
std8.1 × 10−21.4 × 1012.4 × 1009.0 × 10−31.9 × 1004.1 × 1001.0 × 1012.9 × 1001.5 × 1002.2 × 1011.0 × 101
Dist.2.1 × 10−19.5 × 1006.7 × 1008.3 × 10−37.9 × 1005.7 × 1007.1 × 1007.0 × 1007.5 × 1001.3 × 1018.1 × 100
F48Mean6.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 102
std0.0 × 1007.9 × 10−140.0 × 1006.1 × 10−109.7 × 10−62.3 × 10−148.7 × 10−90.0 × 1006.0 × 10−141.0 × 10−130.0 × 100
Dist.0.0 × 1001.1 × 10−130.0 × 1001.3 × 10−91.9 × 10−50.0 × 1001.3 × 10−80.0 × 1000.0 × 1000.0 × 1000.0 × 100
F49Mean8.3 × 1028.5 × 1028.5 × 1028.3 × 1028.3 × 1028.3 × 1028.2 × 1028.1 × 1028.4 × 1028.5 × 1028.3 × 102
std1.1 × 1012.4 × 1018.6 × 1007.1 × 1001.5 × 1016.9 × 1007.5 × 1002.9 × 1001.3 × 1011.7 × 1019.0 × 100
Dist.3.3 × 1014.9 × 1014.6 × 1013.2 × 1013.4 × 1013.1 × 1012.2 × 1011.3 × 1013.6 × 1014.8 × 1013.0 × 101
F50Mean9.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 102
std3.2 × 10−13.0 × 10−10.0 × 1008.8 × 10−22.0 × 10−37.5 × 10−146.3 × 10−20.0 × 1001.4 × 10−136.7 × 10−11.6 × 10−1
Dist.1.5 × 10−13.3 × 10−10.0 × 1006.9 × 10−23.0 × 10−30.0 × 1006.8 × 10−20.0 × 1000.0 × 1006.8 × 10−17.2 × 10−2
F51Mean2.7 × 1043.1 × 1041.8 × 1031.8 × 1033.5 × 1044.7 × 1041.4 × 1041.8 × 1034.0 × 1048.2 × 1031.3 × 104
std1.2 × 1042.0 × 1043.7 × 10−19.5 × 1001.4 × 1041.8 × 1047.6 × 1032.9 × 10−16.9 × 1035.0 × 1031.8 × 104
Dist.2.5 × 1042.9 × 1047.0 × 10−13.3 × 1013.4 × 1044.5 × 1041.2 × 1044.3 × 10−13.8 × 1046.4 × 1031.1 × 104
F52Mean2.0 × 1032.0 × 1032.0 × 1032.0 × 1032.0 × 1032.0 × 1032.0 × 1032.0 × 1032.0 × 1032.0 × 1032.0 × 103
std8.7 × 1001.4 × 1019.1 × 1003.0 × 1007.5 × 1002.5 × 1008.8 × 1005.7 × 1001.3 × 1018.3 × 1009.2 × 100
Dist.2.7 × 1013.4 × 1012.5 × 1013.3 × 1012.3 × 1013.4 × 1011.5 × 1014.6 × 1003.7 × 1012.9 × 1012.8 × 101
F53Mean2.2 × 1032.3 × 1032.2 × 1032.2 × 1032.2 × 1032.2 × 1032.2 × 1032.2 × 1032.2 × 1032.2 × 1032.3 × 103
std6.5 × 1001.9 × 1024.7 × 1003.7 × 1004.5 × 1004.7 × 1004.6 × 1016.3 × 1007.8 × 1004.6 × 1005.1 × 102
Dist.2.6 × 1011.0 × 1021.7 × 1011.9 × 1012.3 × 1012.7 × 1014.8 × 1016.6 × 1002.0 × 1012.2 × 1011.3 × 102
F54Mean2.3 × 1032.6 × 1032.6 × 1032.3 × 1032.6 × 1032.5 × 1032.6 × 1032.4 × 1032.7 × 1032.4 × 1032.7 × 103
std0.0 × 1001.7 × 1021.5 × 1022.8 × 10−21.4 × 1021.7 × 1021.6 × 1021.4 × 1021.9 × 10−21.5 × 1029.0 × 101
Dist.0.0 × 1002.7 × 1022.7 × 1029.7 × 10−22.9 × 1021.5 × 1022.7 × 1027.2 × 1013.6 × 1028.5 × 1013.6 × 102
F55Mean2.6 × 1031.7 × 1038.2 × 1028.6 × 1023.2 × 1021.6 × 1031.6 × 1039.1 × 1023.0 × 1021.3 × 1036.8 × 102
std1.6 × 1021.0 × 1036.7 × 1021.9 × 1026.7 × 1026.9 × 1027.8 × 1025.9 × 1028.1 × 1029.9 × 1028.9 × 102
Dist.1.5 × 1027.1 × 1021.6 × 1031.5 × 1032.1 × 1038.2 × 1028.5 × 1021.5 × 1032.1 × 1031.1 × 1031.7 × 103
F56Mean2.6 × 1032.7 × 1032.6 × 1032.5 × 1032.6 × 1032.6 × 1032.6 × 1032.6 × 1032.7 × 1032.6 × 1032.6 × 103
std2.6 × 10−132.4 × 1022.3 × 10−103.1 × 1021.2 × 10−11.7 × 1021.7 × 1020.0 × 1002.8 × 1027.5 × 1001.4 × 101
Dist.0.0 × 1008.2 × 1014.7 × 10−106.3 × 1013.5 × 10−13.4 × 1014.0 × 1010.0 × 1001.1 × 1022.7 × 1009.9 × 100
F57Mean2.9 × 1032.9 × 1032.8 × 1032.7 × 1032.8 × 1032.8 × 1032.5 × 1032.8 × 1032.8 × 1032.8 × 1032.8 × 103
std1.7 × 1015.5 × 1001.6 × 1011.5 × 1026.6 × 1011.4 × 1013.2 × 1021.7 × 1011.2 × 1022.1 × 1026.0 × 101
Dist.1.8 × 1021.6 × 1021.4 × 1021.6 × 10110.0 × 1011.4 × 1021.7 × 1021.4 × 1021.0 × 1028.7 × 1011.3 × 102
D = 20
F46Mean3.0 × 1023.0 × 1023.0 × 1023.1 × 1023.2 × 1023.0 × 1023.0 × 1023.0 × 1023.5 × 1023.0 × 1023.0 × 102
std5.7 × 10−146.9 × 10−93.6 × 10−12.6 × 1006.9 × 1000.0 × 1004.7 × 10−20.0 × 1001.6 × 1022.2 × 10−55.5 × 10−2
Dist.5.7 × 10−142.4 × 10−84.4 × 10−19.6 × 1002.4 × 1011.1 × 10−135.2 × 10−20.0 × 1004.7 × 1014.9 × 10−62.0 × 10−2
F47Mean4.4 × 1024.6 × 1024.5 × 1024.5 × 1024.6 × 1024.7 × 1024.5 × 1024.4 × 1024.5 × 1024.5 × 1024.5 × 102
std2.0 × 1011.9 × 1012.1 × 1001.1 × 1012.1 × 1018.6 × 1009.0 × 1001.6 × 1012.0 × 1002.3 × 1011.6 × 101
Dist.4.0 × 1016.1 × 1014.7 × 1014.7 × 1016.0 × 1016.6 × 1015.4 × 1014.3 × 1014.8 × 1014.7 × 1014.6 × 101
F48Mean6.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 1026.0 × 102
std6.4 × 10−142.5 × 10−132.5 × 10−81.6 × 10−61.5 × 10−42.3 × 10−141.4 × 10−70.0 × 1005.6 × 10−142.4 × 10−78.6 × 10−11
Dist.0.0 × 1001.0 × 10−124.3 × 10−86.7 × 10−63.3 × 10−41.1 × 10−133.1 × 10−70.0 × 1001.1 × 10−134.9 × 10−81.3 × 10−10
F49Mean9.0 × 1029.6 × 1021.0 × 1039.3 × 1029.4 × 1029.7 × 1029.6 × 1028.5 × 1029.4 × 1029.7 × 1029.4 × 102
std3.1 × 1015.5 × 1012.5 × 1011.9 × 1013.5 × 1012.3 × 1011.9 × 1019.8 × 1004.7 × 1014.9 × 1013.9 × 101
Dist.1.0 × 1021.6 × 1022.3 × 1021.3 × 1021.4 × 1021.7 × 1021.6 × 1024.6 × 1011.4 × 1021.7 × 1021.4 × 102
F50Mean9.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 1029.0 × 102
std7.7 × 10−11.3 × 1005.8 × 10−46.6 × 10−12.3 × 1002.7 × 10−19.2 × 10−21.8 × 10−28.5 × 10−12.2 × 1001.1 × 100
Dist.9.5 × 10−12.7 × 1004.2 × 10−41.8 × 1002.4 × 1002.2 × 10−11.1 × 10−13.6 × 10−35.2 × 10−14.2 × 1001.3 × 100
F51Mean6.9 × 1045.8 × 1046.9 × 1055.6 × 1043.6 × 1052.7 × 1074.4 × 1041.5 × 1041.4 × 1055.8 × 1041.2 × 106
std1.9 × 1041.9 × 1045.1 × 1051.3 × 1041.7 × 1051.4 × 1071.4 × 1045.8 × 1033.9 × 1042.2 × 1045.7 × 106
Dist.6.7 × 1045.6 × 1046.8 × 1055.4 × 1043.6 × 1052.7 × 1074.2 × 1041.3 × 1041.4 × 1055.6 × 1041.2 × 106
F52Mean2.0 × 1032.1 × 1031.9 × 1032.0 × 1032.0 × 1032.0 × 1032.0 × 1031.9 × 1031.9 × 1032.0 × 1032.0 × 103
std7.9 × 1012.2 × 1023.0 × 1013.1 × 1011.3 × 1029.1 × 1017.8 × 1017.9 × 1015.5 × 1018.5 × 1018.3 × 101
Dist.1.8 × 1011.0 × 1026.0 × 1013.9 × 1015.4 × 1007.6 × 1003.5 × 1017.0 × 1011.2 × 1021.6 × 1012.5 × 101
F53Mean2.5 × 1032.7 × 1032.1 × 1032.1 × 1032.8 × 1031.9 × 1042.9 × 1032.2 × 1034.0 × 1032.7 × 1032.5 × 103
std2.3 × 1023.8 × 1022.0 × 1022.6 × 1014.3 × 1022.0 × 1046.7 × 1027.4 × 1011.5 × 1036.9 × 1026.1 × 102
Dist.3.3 × 1025.3 × 1026.5 × 1011.0 × 1025.7 × 1021.6 × 1047.2 × 1022.2 × 1011.8 × 1035.3 × 1022.9 × 102
F54Mean2.3 × 1032.6 × 1032.6 × 1032.6 × 1032.6 × 1032.6 × 1032.6 × 1032.6 × 1032.6 × 1032.7 × 1032.7 × 103
std0.0 × 1006.9 × 1016.4 × 10−31.0 × 1024.4 × 1003.2 × 10−52.5 × 1004.5 × 10−132.5 × 1002.9 × 1015.4 × 101
Dist.0.0 × 1003.4 × 1023.4 × 1022.8 × 1023.4 × 1023.4 × 1023.4 × 1023.4 × 1023.4 × 1023.8 × 1023.6 × 102
F55Mean2.2 × 1031.0 × 1032.6 × 1034.7 × 1021.2 × 1022.7 × 1038.6 × 1025.0 × 1025.3 × 1027.2 × 1021.5 × 103
std9.1 × 1021.2 × 1034.1 × 1023.3 × 1029.6 × 1022.4 × 1021.4 × 1031.7 × 1031.1 × 1031.6 × 1031.1 × 103
Dist.2.1 × 1021.4 × 1032.1 × 1021.9 × 1032.3 × 1032.6 × 1021.5 × 1031.9 × 1031.9 × 1031.7 × 1038.9 × 102
F56Mean2.6 × 1032.8 × 1032.6 × 1032.6 × 1032.6 × 1032.6 × 1032.6 × 1032.6 × 1032.6 × 1032.6 × 1033.1 × 103
std4.5 × 10−134.4 × 1021.1 × 10−28.8 × 10−25.4 × 1004.5 × 10−131.2 × 1010.0 × 1001.4 × 1011.4 × 1016.9 × 102
Dist.4.5 × 10−131.8 × 1022.3 × 10−24.8 × 10−13.2 × 1001.4 × 10−129.1 × 1000.0 × 1001.8 × 1011.1 × 1014.9 × 102
F57Mean3.0 × 1033.0 × 1032.9 × 1033.0 × 1032.9 × 1032.9 × 1032.9 × 1032.9 × 1032.9 × 1033.0 × 1033.0 × 103
std4.3 × 1013.2 × 1019.2 × 1002.9 × 1011.5 × 1011.2 × 1011.2 × 1018.1 × 1008.7 × 1008.5 × 1012.2 × 101
Dist.3.1 × 1022.8 × 1022.3 × 1022.7 × 1022.4 × 1022.3 × 1022.3 × 1022.2 × 1022.3 × 1023.1 × 1022.6 × 102
Table 8. Performance comparison of different algorithms on the Lennard–Jones potential problem. The values show position coordinates (ranging from −2 to +2) for 15 particles and the final objective function value (Cost).
| Variable | ESO | ALO | DE | FPA | FLA | GSKA | HS | LSHADE | MFO | PSO | QSA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| x1 | 1.434865 | 1.359019 | −1.970468 | −0.014599 | 0.365808 | −0.470226 | 0.586455 | 1.208653 | −1.138859 | −1.009291 | 1.712633 |
| x2 | 1.191828 | 0.336952 | −1.490921 | −0.736735 | −1.005121 | 0.803350 | −0.394619 | −0.783929 | −1.999048 | −1.275606 | 1.182789 |
| x3 | 0.699767 | 0.577357 | 1.142062 | 0.199979 | −0.965342 | −0.307276 | −0.421156 | 0.659644 | 1.624503 | 0.222959 | 1.504628 |
| x4 | 1.999970 | 0.896645 | −1.167551 | −0.398293 | 1.244353 | 0.352321 | −0.155720 | 0.242997 | −0.436711 | −0.501022 | 2.000000 |
| x5 | 1.115344 | −0.140981 | −2.000000 | 0.015786 | −1.388556 | 0.548935 | 0.259772 | −1.044061 | −1.475131 | −0.715828 | 2.000000 |
| x6 | 1.518684 | −0.140981 | 1.456385 | −0.331072 | −1.235420 | −0.863945 | −0.550639 | 0.764147 | 1.146678 | 0.879632 | 1.002105 |
| x7 | 1.175422 | 0.606547 | −2.000000 | 0.437363 | 0.532327 | −0.183326 | −0.208390 | −0.334759 | −1.984282 | −0.121675 | 1.230942 |
| x8 | 1.683618 | 0.817785 | −2.000000 | 0.156283 | −1.227329 | −0.196021 | −0.640118 | −0.302973 | −1.973537 | −0.893590 | 1.455159 |
| x9 | 1.528411 | −0.106009 | 2.000000 | 0.196516 | −1.914800 | −0.256384 | −0.978713 | 0.436202 | 1.094949 | −0.026080 | 0.674214 |
| x10 | 2.000000 | −0.140981 | −2.000000 | −0.247618 | −0.362616 | 0.317509 | −0.301950 | 0.532111 | −1.105097 | −1.492482 | 2.000000 |
| x11 | 1.989568 | −0.140981 | −0.989352 | 0.872546 | −1.420013 | −0.399401 | −0.549758 | −0.504754 | −2.000000 | −0.627814 | 2.000000 |
| x12 | 1.999880 | −0.140981 | 2.000000 | 0.179714 | −1.517291 | −1.055094 | 0.014254 | −0.017795 | 0.623619 | 0.808372 | 2.000000 |
| x13 | 1.976906 | −0.140981 | −2.000000 | −0.448235 | 0.430119 | 0.716317 | 0.206446 | 0.524449 | −1.387533 | −0.910924 | 1.133734 |
| x14 | 1.973133 | −0.140981 | −2.000000 | 0.029733 | −1.960166 | 0.721455 | −1.316111 | −0.095083 | −1.173953 | −0.290672 | 1.999975 |
| x15 | 1.002376 | −0.140981 | 2.000000 | 0.660970 | −1.248826 | 0.033407 | −0.373104 | 0.895740 | 1.114318 | 0.070847 | 1.504609 |
| Cost | −9.103852 | −5.837872 | −8.219488 | −9.091091 | −9.094731 | −6.671860 | −9.103852 | −9.098313 | −9.103852 | −9.103852 | −9.103852 |
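For reference, the objective minimized in Table 8 can be sketched in code. This is a minimal sketch assuming the standard reduced Lennard–Jones form (ε = σ = 1), with the flat decision vector read as consecutive 3-D coordinates; under that reading, the 15 variables describe a 5-atom cluster, and the best cost reported in the table (−9.103852) matches the known global minimum energy of the 5-atom Lennard–Jones cluster in these units.

```python
import math

def lennard_jones(x):
    """Reduced Lennard-Jones cluster energy (epsilon = sigma = 1).

    The flat vector x is read as consecutive (x, y, z) triples, so
    15 variables describe 5 atoms. The energy is the sum over all
    atom pairs of 4 * (r**-12 - r**-6).
    """
    pts = [tuple(x[i:i + 3]) for i in range(0, len(x), 3)]
    energy = 0.0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            r = math.dist(pts[i], pts[j])
            energy += 4.0 * (r**-12 - r**-6)
    return energy

# Sanity check: a single pair at the equilibrium separation 2**(1/6)
# sits at the bottom of the reduced potential well.
print(lennard_jones([0, 0, 0, 2**(1/6), 0, 0]))  # close to -1.0
```

The pair check recovers the well depth of −1 in reduced units, which is a quick way to validate any reimplementation before running an optimizer on it.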
Table 9. Performance comparison for the Tersoff Potential Problem—Si(B) Model. The values show the atomic position coordinates (bounded between −2 and +2) for 30 particles and the final objective function value (Cost).
| Variable | ESO | ALO | DE | FPA | FLA | GSKA | HS | LSHADE | MFO | PSO | QSA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| x1 | 1.911343 | −0.422796 | 1.944408 | 0.286240 | 1.416869 | −0.274498 | 1.875122 | 2.000000 | 0.094775 | −0.372801 | −0.252995 |
| x2 | 1.837886 | 0.216275 | 0.367359 | 0.287359 | 0.253849 | 2.000000 | −0.029182 | −0.080788 | 1.827431 | −0.001320 | 1.062548 |
| x3 | 1.830802 | −2.000000 | −0.456927 | 1.682189 | 2.000000 | 0.727327 | −1.737446 | −2.000000 | 0.372470 | −1.999627 | −1.999950 |
| x4 | −1.576741 | −1.380137 | 0.516711 | 0.423848 | 0.442175 | 0.003527 | −1.756877 | −0.375639 | 0.912800 | −0.052452 | −1.231063 |
| x5 | −0.132260 | 0.001101 | −0.229355 | −1.485867 | −0.488212 | 2.000000 | −0.179581 | −0.102737 | −1.500830 | 2.000000 | 0.273089 |
| x6 | 1.020404 | 1.130858 | 0.800000 | 0.059508 | −0.073051 | −2.000000 | 1.811431 | −2.000000 | 0.272097 | −0.029351 | 0.980519 |
| x7 | 0.665033 | 1.491677 | −1.421290 | −0.635832 | −1.739788 | 1.044658 | 1.917820 | 0.114731 | 2.000000 | −1.277256 | 0.773974 |
| x8 | −0.169312 | 1.670233 | 0.764938 | 1.824677 | 0.484076 | −2.000000 | −1.774767 | −1.847730 | 0.474414 | 1.450233 | −1.999004 |
| x9 | 2.000000 | −2.000000 | 1.871599 | 0.309387 | −0.093171 | −0.996529 | −0.028274 | −0.308888 | −0.402388 | 1.826011 | 1.903878 |
| x10 | 0.613636 | 1.776506 | −2.000000 | −1.662366 | −1.031304 | 2.000000 | 1.839513 | 0.971574 | −2.000000 | −0.605600 | 1.238730 |
| x11 | 2.000000 | 1.016080 | −0.134894 | 0.201058 | 1.977085 | 1.751415 | −0.064260 | −1.652146 | 0.818365 | −1.815764 | 0.014669 |
| x12 | −0.823520 | 1.970005 | −0.800000 | −1.429618 | 1.561238 | −0.658564 | 1.675033 | 2.000000 | 1.158593 | −0.545546 | 0.791099 |
| x13 | −0.140903 | 1.938587 | −2.000000 | 1.561017 | −0.882716 | −2.000000 | −0.126871 | 0.672269 | 0.590976 | −1.945784 | 0.233157 |
| x14 | −1.997927 | −0.706526 | 2.000000 | −0.605732 | −0.300260 | 0.360569 | 1.555238 | 2.000000 | 1.366157 | −0.458269 | 2.000000 |
| x15 | 0.672988 | 0.358969 | −1.412000 | −1.815761 | 1.941022 | 0.484315 | 1.840499 | 0.733992 | −1.999789 | 0.718484 | 0.083112 |
| x16 | −0.173769 | −1.262689 | 0.250558 | −1.880201 | 2.000000 | −0.543066 | −1.695519 | 1.632626 | −1.408305 | −2.000000 | −2.000000 |
| x17 | −0.333620 | −1.572003 | −0.160011 | −1.658721 | −1.872017 | −0.320085 | −0.229380 | −0.160240 | −1.209102 | 1.269511 | −0.325520 |
| x18 | −1.095456 | −0.655218 | −2.000000 | 0.096784 | 1.145205 | −1.203856 | −1.714797 | 0.453970 | 0.221774 | −0.904993 | −1.252132 |
| x19 | −0.417366 | 0.895856 | −0.655298 | −0.726259 | 1.020043 | −2.000000 | −0.094898 | −0.246328 | 0.916364 | 1.995465 | 1.999995 |
| x20 | 1.999937 | −1.640321 | −2.000000 | −1.707307 | 2.000000 | −2.000000 | −0.365644 | 0.308474 | −0.919850 | 1.114406 | −1.999782 |
| x21 | 1.373192 | −1.519072 | 2.000000 | −1.956821 | 0.542941 | 2.000000 | 0.074306 | 2.000000 | −1.999914 | −0.015754 | −0.148793 |
| x22 | −1.657211 | 0.424965 | 0.087166 | 1.722122 | −0.004664 | 2.000000 | −1.809800 | −1.661183 | −1.332842 | 0.830348 | −0.377947 |
| x23 | 1.532557 | 1.106551 | 2.000000 | 1.282008 | 1.482053 | 2.000000 | 1.395652 | −0.365374 | −0.067126 | −1.279863 | −1.739322 |
| x24 | −0.647209 | 0.004665 | 2.000000 | 0.200628 | −1.401937 | 2.000000 | 0.092705 | 0.235537 | −1.809520 | 1.183456 | −0.201882 |
| x25 | 2.000000 | 0.100960 | 2.000000 | −1.982998 | 2.000000 | −1.166635 | 1.487776 | 0.980895 | −0.050179 | 1.558506 | 1.012128 |
| x26 | 0.289188 | −1.810179 | −1.512233 | 0.204361 | −1.969780 | −2.000000 | 1.489119 | 2.000000 | −0.067582 | −0.927549 | −0.949578 |
| x27 | 0.026378 | 1.227828 | −2.000000 | 1.452503 | −1.145471 | 0.042998 | 0.059791 | −1.854988 | 1.931138 | −0.998212 | −2.000000 |
| x28 | 1.668289 | −0.361096 | −0.427233 | 0.473964 | 1.870840 | −2.000000 | 0.235857 | −1.054432 | −1.999995 | 0.924628 | −1.993408 |
| x29 | −1.901752 | 1.979633 | −1.700175 | 1.455288 | 0.215631 | 2.000000 | −1.763749 | 1.750140 | 1.998915 | 0.847429 | 1.908550 |
| x30 | −0.784580 | 2.000000 | −0.422944 | −1.822801 | −1.927111 | −1.535681 | −1.841567 | −0.732077 | −0.856910 | 1.999620 | −0.585655 |
| Cost | −42.175192 | −40.514917 | −30.258289 | −42.888543 | −41.701543 | −32.093009 | −40.802206 | −42.698791 | −44.074326 | −43.129095 | −42.192469 |
Table 10. Optimization results for the Spread Spectrum Radar Polyphase Code Design Problem. The values show phase angles (bounded between −π and π radians) for a 30-element code sequence and the final objective function value (Cost).
| Variable | ESO | ALO | DE | FPA | FLA | GSKA | HS | LSHADE | MFO | PSO | QSA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| x1 | −3.100815 | −1.520351 | −0.965774 | 0.544349 | −3.036591 | 3.141593 | −1.518217 | 2.557269 | −1.345045 | 3.112985 | −3.141593 |
| x2 | 0.332067 | −2.121173 | 0.637765 | 1.360767 | −3.041196 | −1.932973 | 2.954669 | 0.671064 | 0.334946 | −2.546519 | −2.760456 |
| x3 | −2.574641 | −1.105060 | −3.141593 | 2.333700 | 1.201947 | −3.141593 | −2.664397 | −2.753454 | 0.942050 | −0.316298 | −2.923732 |
| x4 | −0.545815 | −1.319132 | −1.432463 | 1.964902 | 2.012501 | 3.141593 | −2.423320 | −2.291713 | 3.132594 | −1.697346 | −0.288332 |
| x5 | 1.141980 | 1.665574 | 1.851032 | 2.377952 | 0.072295 | −1.106539 | −1.237725 | −2.419544 | −2.445811 | −1.277584 | 1.268597 |
| x6 | 1.120970 | −1.762508 | −3.141593 | −2.236733 | 0.204019 | 2.149439 | −1.862466 | 3.141593 | −0.967030 | −2.688296 | −0.333390 |
| x7 | 1.282702 | −0.653277 | 1.246695 | −0.342986 | −3.141593 | 1.607691 | −2.845321 | 2.979361 | 1.669502 | 1.401700 | −0.084430 |
| x8 | −2.321864 | −3.139784 | 0.860463 | 2.044107 | 2.254097 | 0.685412 | −1.893046 | −1.116027 | 2.811512 | −0.283606 | −3.141593 |
| x9 | −0.183235 | −3.107299 | 3.141593 | 1.221318 | −1.062995 | −1.809399 | −1.246318 | −0.356383 | −1.212574 | 0.126748 | 0.299340 |
| x10 | 2.491207 | −2.991052 | −3.141593 | −0.072572 | −0.387584 | −2.191098 | −1.361084 | 0.456348 | 0.751910 | −0.365913 | −1.517103 |
| x11 | −3.066385 | −0.290439 | 1.752981 | −1.263895 | 0.887252 | 2.958228 | −0.014845 | 0.173017 | 1.037721 | 2.976465 | −1.732491 |
| x12 | −1.942425 | −1.862323 | −3.141593 | 2.917213 | 2.652048 | −3.141593 | −2.002796 | −1.309828 | −0.124584 | −0.898133 | 2.381386 |
| x13 | −0.710979 | −1.120679 | −3.141593 | 2.243351 | −2.446711 | −2.013341 | 2.791427 | −2.526366 | −2.461666 | 1.220693 | 2.236986 |
| x14 | −2.541145 | 0.477636 | 0.026919 | 2.164634 | 1.835622 | −2.413783 | −1.128471 | −0.417597 | −0.149518 | 1.184910 | −1.610186 |
| x15 | −1.767841 | −1.530916 | −1.391671 | 2.723396 | −0.090144 | −0.066451 | −2.489393 | 0.711040 | 2.971660 | 2.559379 | −2.850798 |
| x16 | −1.499577 | −0.237383 | 0.148266 | 1.770362 | −3.004787 | −1.423347 | −0.926059 | 1.126406 | 1.931100 | −1.806416 | −3.141593 |
| x17 | 2.902179 | 0.670926 | 3.141593 | −2.887093 | −2.917292 | 1.758365 | −0.616175 | 1.283235 | −1.421458 | −0.026777 | 0.185552 |
| x18 | −2.249648 | −2.129232 | −3.141593 | −1.281230 | −2.006409 | 3.141593 | −0.995523 | 2.264549 | −0.473386 | −2.703633 | −3.141593 |
| x19 | −2.897327 | −3.120522 | 0.948639 | 3.045817 | 2.781638 | 0.743333 | −0.512369 | 2.941368 | −2.866881 | −2.300161 | −3.141593 |
| x20 | 2.369769 | −3.080855 | 3.141593 | −0.563177 | 1.723770 | −3.141593 | 2.826144 | −2.623419 | −1.295967 | 1.011977 | −3.038621 |
| x21 | −1.680448 | −3.131099 | 0.036346 | 0.163516 | 0.194469 | 3.141593 | 0.349376 | 0.186491 | 1.764819 | −2.003619 | −1.858572 |
| x22 | 2.949012 | −3.138728 | 0.855628 | −0.409861 | −1.376177 | −0.370307 | 0.422910 | −2.990428 | 1.048934 | 2.780416 | −0.333768 |
| x23 | 3.106054 | −1.667111 | 3.141593 | −2.546677 | −0.550574 | 0.248312 | −1.610595 | −0.084111 | −0.558064 | 1.409432 | −2.219474 |
| x24 | −1.043635 | −0.199998 | 1.507456 | 2.452170 | 2.278041 | −2.564327 | 1.207318 | −2.160056 | −2.243206 | −1.218152 | −2.642824 |
| x25 | 1.896537 | −3.132711 | −3.141593 | −1.742223 | −1.661561 | 0.481999 | 1.645104 | −0.438920 | 2.066923 | −0.299825 | −3.019206 |
| x26 | −0.634534 | −0.068603 | 2.048128 | 0.769361 | −3.141593 | −1.216155 | −0.600581 | 3.140928 | −0.074927 | 0.222078 | −3.141593 |
| x27 | 2.399096 | −2.768813 | 2.278858 | −2.627048 | −1.005585 | −3.141593 | 0.357423 | −1.637072 | 1.600227 | −1.520878 | 0.504020 |
| x28 | −0.094720 | −2.861763 | 3.141593 | 1.685769 | −3.095756 | −1.279020 | 0.397469 | −0.403978 | −1.002799 | −1.686028 | 1.779107 |
| x29 | −0.975141 | −2.000241 | −3.141593 | −2.672582 | 0.441302 | −1.405712 | 2.816245 | −0.729825 | −2.085518 | −1.118786 | 0.142481 |
| x30 | −1.407476 | −2.189864 | 3.069431 | 1.838584 | −0.707854 | −0.985931 | −2.426521 | 2.111333 | 0.913540 | 0.898270 | −3.134029 |
| Cost | 1.980246 | 2.448449 | 3.398476 | 2.623724 | 2.609190 | 3.562859 | 3.279291 | 2.433395 | 2.762012 | 2.680859 | 2.943761 |
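The objective behind Table 10 can also be sketched. The sketch below assumes the Dukic–Dobrosavljevic min–max formulation of polyphase radar code design used in the CEC 2011 real-world suite, where the cost is the largest of a family of autocorrelation-like cosine sums over the phase angles; the experiments here may use a slightly different variant, so treat this as an illustrative assumption rather than the benchmark's definitive code.

```python
import math

def radar_polyphase(x):
    """Min-max polyphase code objective (Dukic-Dobrosavljevic form,
    as used in the CEC 2011 suite; assumed, not taken from the paper).

    For n phase angles, builds m = 2n - 1 cosine sums phi_1..phi_m,
    appends their negations, and returns the maximum.
    """
    n = len(x)
    m = 2 * n - 1
    phi = []
    for i in range(1, n + 1):          # phi_{2i-1}, i = 1..n
        s = 0.0
        for j in range(i, n + 1):
            s += math.cos(sum(x[k - 1] for k in range(abs(2 * i - j - 1) + 1, j + 1)))
        phi.append(s)
    for i in range(1, n):              # phi_{2i}, i = 1..n-1
        s = 0.5
        for j in range(i + 1, n + 1):
            s += math.cos(sum(x[k - 1] for k in range(abs(2 * i - j) + 1, j + 1)))
        phi.append(s)
    phi += [-p for p in phi[:m]]       # phi_{m+i} = -phi_i
    return max(phi)

# At the all-zero code every cosine equals 1, so the cost is simply n.
print(radar_polyphase([0.0] * 30))  # -> 30.0
```

The all-zero check gives an easy fixed point for validating the index bookkeeping, which is the error-prone part of this objective.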
Table 11. Performance comparison of different algorithms in the circular antenna array optimization problem. The values show the current excitation amplitudes (x1–x8, bounded between 0.2 and 1.0) and phase angles (x9–x16, bounded between 0° and 360°) for an eight-element array.
| Variable | ESO | ALO | DE | FPA | FLA | GSKA | HS | LSHADE | MFO | PSO | QSA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| x1 | 0.826762 | 0.655702 | 0.372293 | 0.500000 | 0.453294 | 0.562055 | 0.665949 | 0.993508 | 0.300000 | 0.500000 | 0.655702 |
| x2 | 0.580678 | 122.373515 | 88.521081 | 196.971756 | 148.888920 | 48.534188 | 91.641661 | 111.606724 | 134.172623 | 196.971756 | 122.373515 |
| x3 | 0.787151 | 0.550832 | 0.536435 | 0.500000 | 0.386820 | 0.488968 | 0.650500 | 0.972003 | 0.298481 | 0.500000 | 0.550832 |
| x4 | 0.174480 | 243.992739 | 168.318334 | 301.023384 | 278.675256 | 65.965652 | 170.193613 | 236.185596 | 247.334374 | 301.023384 | 243.992739 |
| x5 | 0.003385 | 0.293068 | 0.252116 | 0.500421 | 0.546207 | 0.058893 | 0.247977 | 0.706625 | 0.412725 | 0.500421 | 0.293068 |
| x6 | 0.205614 | 258.297786 | 163.448644 | 282.596030 | 277.002108 | 94.839997 | 162.211812 | 228.595121 | 204.884861 | 282.596030 | 258.297786 |
| x7 | 0.802547 | 0.423961 | 0.485756 | 0.500000 | 0.533680 | 0.617781 | 0.673747 | 0.917669 | 0.299854 | 0.500000 | 0.423961 |
| x8 | 0.643657 | 123.843239 | 93.118729 | 206.797952 | 143.240209 | 64.137762 | 84.180194 | 108.903852 | 101.663546 | 206.797952 | 123.843239 |
| x9 | 0.012882 | 0.566155 | 0.446070 | 0.598747 | 0.503816 | 0.338813 | 0.089852 | 0.100289 | 0.388761 | 0.598747 | 0.566155 |
| x10 | 206.062613 | 251.590723 | 155.823200 | 197.431070 | 179.665472 | 114.471955 | 228.478716 | 249.013508 | 225.031970 | 197.431070 | 251.590723 |
| x11 | 0.991110 | 0.469193 | 0.428587 | 0.774321 | 0.493968 | 0.083279 | 0.899603 | 0.899999 | 0.516478 | 0.774321 | 0.469193 |
| x12 | 186.934931 | 282.100443 | 149.944308 | 194.385970 | 240.317196 | 88.092418 | 263.197623 | 278.136398 | 189.887400 | 194.385970 | 282.100443 |
| x13 | 0.496933 | 0.443013 | 0.404401 | 0.590189 | 0.615920 | 0.010942 | 0.774288 | 0.647830 | 0.411243 | 0.590189 | 0.443013 |
| x14 | 181.735020 | 42.562513 | 166.138240 | 240.690145 | 119.455936 | 71.325032 | 142.867142 | 124.350970 | 143.955945 | 240.690145 | 42.562513 |
| x15 | 0.007159 | 0.447668 | 0.444143 | 0.708113 | 0.609304 | 0.000000 | 0.387422 | 0.200441 | 0.472467 | 0.708113 | 0.447668 |
| x16 | 160.088913 | 120.490818 | 156.254987 | 199.814598 | 213.820034 | 42.269511 | 167.539353 | 124.217988 | 194.639459 | 199.814598 | 120.490818 |
| Cost | −11.479936 | −4.361428 | −5.199967 | −3.974214 | −4.366539 | −4.743778 | −7.211993 | −6.009528 | −4.336477 | −3.974214 | −4.361428 |
Table 12. Computational complexity analysis of the optimization algorithms using the CEC 2022 SOBC procedure. T0 is the base time, T1 is the environment time to solve F51, and T̂2 is the mean time of each algorithm to solve F51. The overhead is computed as (T̂2 − T1)/T0.
The environment time T1 does not depend on the algorithm: T1 = 0.6551 s for D = 10 and T1 = 0.6722 s for D = 20.

| Algorithm | T0 | T̂2 (D = 10) | T̂2 (D = 20) | (T̂2 − T1)/T0 (D = 10) | (T̂2 − T1)/T0 (D = 20) |
|---|---|---|---|---|---|
| ALO | 0.0923 | 8.27 | 10.43 | 82.508227 | 105.7498929 |
| DE | 0.0900 | 8.06 | 10.17 | 82.326846 | 105.5637776 |
| FPA | 0.0910 | 8.15 | 10.29 | 82.406834 | 105.6458533 |
| FLA | 0.0856 | 7.67 | 9.68 | 81.952697 | 105.1798627 |
| HS | 0.0890 | 7.97 | 10.06 | 82.245061 | 105.4798575 |
| GSKA | 0.1756 | 16.32 | 22.51 | 89.207859 | 124.3610478 |
| LSHADE | 0.1320 | 11.83 | 14.92 | 84.642856 | 107.9402422 |
| MFO | 0.1070 | 9.59 | 12.09 | 83.483305 | 106.7504234 |
| PSO | 0.1022 | 9.16 | 11.55 | 83.195754 | 106.455367 |
| QSA | 0.1125 | 10.08 | 12.72 | 83.782624 | 107.0575553 |
| ESO | 0.0350 | 3.14 | 3.66 | 70.885714 | 85.25142857 |
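As a quick cross-check of Table 12, the overhead column can be recomputed from the other entries. A minimal sketch, assuming the environment times recovered from the table (T1 = 0.6551 s for D = 10 and 0.6722 s for D = 20) are shared by all algorithms; small deviations from the printed overheads stem from T̂2 being rounded to two decimals here.

```python
def overhead(t2_hat, t1, t0):
    """CEC 2022 complexity metric: (T2_hat - T1) / T0."""
    return (t2_hat - t1) / t0

# ALO, D = 10: table reports 82.508227
print(round(overhead(8.27, 0.6551, 0.0923), 2))  # -> 82.5
# ESO, D = 20: table reports 85.251429
print(round(overhead(3.66, 0.6722, 0.0350), 2))  # -> 85.37
```

The recomputed values agree with the printed overheads to within the rounding of the time columns, confirming that ESO's low overhead follows directly from its small T0 and T̂2.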