Article

The SIOA Algorithm: A Bio-Inspired Approach for Efficient Optimization

by Vasileios Charilogis 1, Ioannis G. Tsoulos 1,*, Dimitrios Tsalikakis 2 and Anna Maria Gianni 1

1 Department of Informatics and Telecommunications, University of Ioannina, 47150 Kostaki Artas, Greece
2 Department of Engineering Informatics and Telecommunications, University of Western Macedonia, 50100 Kozani, Greece
* Author to whom correspondence should be addressed.
AppliedMath 2025, 5(4), 135; https://doi.org/10.3390/appliedmath5040135
Submission received: 11 August 2025 / Revised: 8 September 2025 / Accepted: 23 September 2025 / Published: 4 October 2025

Abstract

The Sporulation-Inspired Optimization Algorithm (SIOA) is an innovative metaheuristic optimization method inspired by the biological mechanisms of microbial sporulation and dispersal. SIOA operates on a dynamic population of solutions (“microorganisms”) and alternates between two main phases: sporulation, where new “spores” are generated through adaptive random perturbations combined with guided search towards the global best, and germination, in which these spores are evaluated and may replace the most similar and less effective individuals in the population. A distinctive feature of SIOA is its fully self-adaptive parameter control, where the dispersal radius and the probabilities of sporulation and germination are dynamically adjusted according to the progress of the search (e.g., convergence trends of the average fitness). The algorithm also integrates a special “zero-reset” mechanism, enhancing its ability to detect global optima located near the origin. SIOA further incorporates a stochastic local search phase to refine solutions and accelerate convergence. Experimental results demonstrate that SIOA achieves high-quality solutions with a reduced number of function evaluations, especially in complex, multimodal, or high-dimensional problems. Overall, SIOA provides a robust and flexible optimization framework, suitable for a wide range of challenging optimization tasks.

1. Introduction

Global optimization constitutes one of the most important areas of computational science, with applications spanning from engineering and physics to economics and artificial intelligence. Its primary objective is to identify the best possible solution in problems characterized by high complexity, nonlinearity, multiple local minima, and high dimensionality. In contrast to local optimization methods, which often become trapped in suboptimal solutions, global optimization techniques aim to systematically explore the entire search space in pursuit of the true global optimum.
To achieve this objective, numerous algorithms have been proposed, ranging from classical derivative-based approaches to modern metaheuristic techniques inspired by natural and biological processes. Within this context, it is essential to present a formal mathematical formulation of the global optimization problem, which provides the theoretical foundation for the development and evaluation of novel algorithmic approaches.
Mathematical formulation of the global optimization problem:
Let $f : S \to \mathbb{R}$ be a real-valued function of n variables, where $S \subset \mathbb{R}^n$ is a compact subset. The global optimization problem is defined as the problem of finding

$x^{*} = \arg\min_{x \in S} f(x),$

where the feasible set S is given by the Cartesian product

$S = \prod_{i=1}^{n} [a_i, b_i] \subset \mathbb{R}^n,$

with the following conditions:
  • $f \in C(S)$, i.e., f is a continuous function on S ($C(S)$ denotes the space of continuous functions on S),
  • $[a_i, b_i] \subset \mathbb{R}$ are closed and bounded intervals for $i = 1, \ldots, n$,
  • S is a compact and convex subset of the Euclidean space $\mathbb{R}^n$,
  • $x^{*} \in S$ is the global minimizer of f over S.
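To make the formulation concrete, the following is a minimal C++ sketch of a box-bounded problem; the objective shown is the two-dimensional Rastrigin variant used later in the benchmarks (Table 2), and all names (Problem, rastrigin2, clamp) are illustrative, not part of the OPTIMUS codebase.

```cpp
// Minimal sketch: encoding min f(x) over S = prod_i [a_i, b_i].
#include <cmath>
#include <vector>

struct Problem {
    std::vector<double> lower, upper;                  // [a_i, b_i] per dimension
    double (*objective)(const std::vector<double>&);   // f : S -> R
};

// Two-dimensional Rastrigin variant from Table 2.
double rastrigin2(const std::vector<double>& x) {
    return x[0]*x[0] + x[1]*x[1] - std::cos(18.0*x[0]) - std::cos(18.0*x[1]);
}

// Keep a candidate inside the feasible box S.
double clamp(double v, double lo, double hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}
```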
Optimization represents a fundamental discipline in computational mathematics with widespread applications across scientific and industrial domains. Optimization techniques can be broadly categorized into several families. Classical gradient-based methods, such as steepest descent [1,2] and Newton’s method [3], remain effective for smooth and convex problems. Stochastic approaches, including Monte Carlo sampling [4] and simulated annealing [5], provide robustness against multimodality.
Among the most influential are the population-based metaheuristics, which have become standard in global optimization. Genetic Algorithms (GA) [6], Differential Evolution (DE) [7,8,9], and Particle Swarm Optimization (PSO) [10] are widely recognized benchmarks. Over the years, several important variants have been developed to improve performance and adaptability, such as self-adaptive Differential Evolution (SaDE) [11]; jDE [12], a self-adaptive DE in which each individual carries and co-evolves its own control parameters F and CR; and Comprehensive Learning Particle Swarm Optimization (CLPSO) [13]. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [14] is also regarded as one of the most powerful evolutionary optimizers.
Derivative-free techniques, such as the Nelder–Mead simplex [15], are often applied when gradient information is unavailable. Finally, modern nature-inspired and hybrid strategies (e.g., ant colony optimization [16], artificial bee colony [17]) represent current research directions that integrate concepts from biology, physics, and social systems.
Within the diverse landscape of metaheuristic optimization algorithms, the Sporulation-Inspired Optimization Algorithm (SIOA) introduces an innovative, biologically motivated approach inspired by microbial sporulation and dispersal. SIOA operates on a dynamic population of solutions, conceptualized as “microorganisms” that undergo processes of sporulation and germination. In this framework, each solution can generate “spores” via adaptive random perturbations, guided by the current best solution, with the intensity of dispersal dynamically regulated through self-adaptive parameters.
A key distinguishing feature of SIOA is its two-phase search mechanism: the sporulation phase generates spores, while the germination phase evaluates them and selectively integrates new solutions into the population through a similarity-based (crowding) replacement scheme, in which each new spore replaces its most similar population member only if it achieves superior fitness. The algorithm also incorporates a “zero-reset” mechanism that occasionally forces solution components to zero, accelerating convergence towards global optima near the origin, and an optional stochastic local search phase that promotes the exploitation of promising regions of the solution space. One of SIOA’s main strengths lies in its fully self-adaptive parameter control: not only the dispersal radius but also the probabilities of sporulation and germination are automatically adjusted according to the algorithm’s search progress, enabling an effective balance between exploration and exploitation across a wide range of complex optimization tasks.
Multimodal and high-dimensional optimization problems are particularly challenging because they contain a vast number of local optima and search spaces that grow exponentially with dimensionality. Such conditions often drive conventional algorithms to premature convergence, trapping the search in suboptimal regions. To address these difficulties, SIOA integrates diversity-preserving mechanisms, namely similarity-based replacement and adaptive parameter control, which together sustain population heterogeneity and guide the search toward unexplored regions, even in very rugged or large-scale landscapes. Experimental results confirm that SIOA exhibits high efficiency and stability across a broad suite of benchmark functions, often outperforming established metaheuristics in challenging optimization landscapes. The biological inspiration underpinning SIOA offers natural mechanisms for diversity maintenance and premature convergence avoidance, making it especially suitable for demanding applications where a balance between global exploration and focused exploitation is critical.
This work examines the theoretical underpinnings of SIOA, including its convergence properties, parameter sensitivity, and practical implementation aspects, and explores extensions and adaptations of the algorithm, including constrained, multi-objective, and large-scale optimization scenarios. Overall, SIOA emerges as a powerful, modern, and flexible contribution to computational optimization methodology, with significant prospects for both research and real-world applications.
The remainder of this paper is structured as follows. Section 2 introduces the proposed SIOA and its biological motivation. Section 3 describes the experimental setup and presents the benchmark results. Specifically, Section 3.1 reports experiments with traditional methods on classical benchmark problems, while Section 3.2 extends the analysis to advanced methods and real-world applications. The balance between exploration and exploitation is investigated in Section 3.3, followed by a parameter sensitivity study in Section 3.4. Section 3.5 provides an analysis of the computational cost and complexity of the SIOA algorithm. Finally, Section 4 summarizes the main findings and outlines directions for future work.
Overall, this structure ensures a coherent presentation of both the methodological contributions and the experimental validation. The next section introduces the SIOA method in detail, highlighting its biological inspiration and algorithmic design principles.

2. The SIOA Method

The following is the pseudocode of SIOA and the related analysis.
To formalize the sporulation process, each spore $s_i$ is generated from its parent solution $x_i$ according to

$s_i = x_i + R \cdot U(-1, 1) \cdot c_1 + \beta \cdot \left( x_{best} - x_i + U(-R, R) \right),$

where R denotes the adaptive dispersal radius, $c_1, c_2$ are control coefficients, and $\beta \equiv c_2$ acts as the attraction factor toward the global best solution $x_{best}$. With probability $p_{zero}$, selected dimensions of the spore are reset to zero, which enhances the ability of the algorithm to detect global optima near the origin. The generated spore is subsequently bounded to the feasible domain by applying the clamp operator. This explicit formulation improves the mathematical clarity of the sporulation step and highlights the role of adaptive perturbations combined with global guidance.
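As an illustration, the sporulation step can be sketched in C++ as follows, reusing the hypothetical Problem and clamp helpers from the earlier sketch; the 0.1 probability and the (−3, 3) window for the zero-reset rule follow Algorithm 1 below, but this is a simplified reading of the equation, not the authors' implementation.

```cpp
// Sketch of generating one spore from parent xi (see the equation above).
#include <random>
#include <vector>

std::vector<double> sporulate(const std::vector<double>& xi,
                              const std::vector<double>& xbest,
                              const Problem& p, double R,
                              double c1, double c2, double fbest,
                              std::mt19937& rng) {
    std::uniform_real_distribution<double> U01(0.0, 1.0);
    std::uniform_real_distribution<double> UR(-R, R);   // U(-R, R) draws
    std::vector<double> s(xi.size());
    for (size_t d = 0; d < xi.size(); ++d) {
        // adaptive random perturbation + attraction toward the global best
        s[d] = xi[d] + UR(rng) * c1 + (xbest[d] - xi[d] + UR(rng)) * c2;
        // zero-reset rule, applied only when the best fitness is near zero
        if (U01(rng) < 0.1 && fbest > -3.0 && fbest < 3.0) s[d] = 0.0;
        // keep the spore inside the feasible box
        s[d] = clamp(s[d], p.lower[d], p.upper[d]);
    }
    return s;
}
```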
The SIOA algorithm (Algorithm 1) begins with an initialization phase, in which an initial population of solutions (samples) of size NP is randomly generated within the specified bounds. For each solution, the fitness value is evaluated and stored in the fitness array, while the best solution (x_best) and its corresponding fitness (f_best) are also tracked. An empty list is initialized to collect the spores generated in each iteration.
Algorithm 1 Pseudocode of SIOA.
Input:
- NP: population size
- Iter_max: maximum iterations
- p_loc: local search rate
- bounds: search space bounds
- c_1, c_2: search coefficients
(Self-adaptive within the loop, initialized with default values:)
- R_min, R_max: min/max dispersal radius
- p_spor: initial sporulation probability
- p_germ: initial germination probability
Output:
- x_best: best solution found
- f_best: corresponding fitness value
Initialization:
01: dim ← problem dimension
02: Initialize population X = { x_i | x_i ~ U(bounds), i = 1, ..., NP }
03: Evaluate initial fitness F = { f_i = f(x_i) | i = 1, ..., NP }
04: (x_best, f_best) ← argmin over (x_i, f_i) of f_i
// Set adaptive parameters:
05: R ← R_max
06: p_s_ad ← p_spor
07: p_g_ad ← p_germ
08: meanPfitness ← +∞
Main Optimization Loop:
09: for iter = 1 to Iter_max do
// Parameter self-adaptation
10:     t ← iter / Iter_max
11:     R ← R_max − t · (R_max − R_min)
12:     mean_fitness ← mean(F)
13:     prog ← (best_prev − f_best) / (|best_prev| + ε), ε = 1 × 10^−10
14:     if prog ≥ 0.001 then
15:         p_s_ad ← clamp(p_s_ad · 0.98, 0.1, 1.0)
16:         p_g_ad ← clamp(p_g_ad · 1.02, 0.1, 1.0)
17:     else
18:         p_s_ad ← clamp(p_s_ad · 1.02, 0.1, 1.0)
19:         p_g_ad ← clamp(p_g_ad · 0.98, 0.1, 1.0)
20:     end if
21:     meanPfitness ← mean_fitness
        // Sporulation phase
22:     S ← ∅
23:     for each x_i in X with U(0,1) < p_s_ad do
24:         Create vector spore = [spore_1, spore_2, ..., spore_dim]
25:         for d = 1 to dim do
26:             spore_d ← X_{i,d} + U(−R, R) · c_1 + (x_{best,d} − X_{i,d} + U(−R, R)) · c_2
                // Equivalent mathematical form:
                // s_i ← x_i + R · U(−1, 1) · c_1 + β · (x_best − x_i + U(−R, R)), with β ≡ c_2
                // Special “reset to zero” rule
27:             if U(0,1) < 0.1 and f_best ∈ (−3, 3) then
28:                 spore_d ← 0
29:             end if
30:             spore_d ← clamp(spore_d, b_lower_d, b_upper_d)
31:         end for
32:         S ← S ∪ {spore}
33:     end for
        // Germination phase
34:     for each spore in S do
35:         if U(0,1) < p_g_ad then
36:             f_spore ← f(spore)
37:             idx ← index of sample in X most similar to spore (Euclidean distance)
38:             if f_spore < f_idx then
39:                 x_idx ← spore
40:                 f_idx ← f_spore
41:             end if
42:             if f_spore < f_best then
43:                 x_best ← spore
44:                 f_best ← f_spore
45:             end if
46:         end if
47:     end for
        // Local search (optional)
48:     for each x_i in X do
49:         if U(0,1) < p_loc then
50:             (x_ref, f_ref) ← localSearch(x_i) [18]
51:             if f_ref < f_i then
52:                 x_i ← x_ref
53:                 f_i ← f_ref
54:                 if f_ref < f_best then
55:                     x_best ← x_ref
56:                     f_best ← f_ref
57:                 end if
58:             end if
59:         end if
60:     end for
61:     if termination criterion met then break (similarity of the best value, δ_sim(iter) = f_sim,min(iter) − f_sim,min(iter−1) [19,20], or Iter_max reached, or the budget of function evaluations (FEs) exhausted)
62: end for
63: return (x_best, f_best)
During the main iteration loop, the algorithm executes three core operations in every cycle:
In the first phase (sporulation), each solution in the population has a probability (p_spor, which is self-adaptive) of generating a spore. The new spore is created by applying a combination of adaptive random perturbations and attraction towards the global best solution, with the strength of the perturbation determined by the current value of the adaptive dispersal radius (R). Additionally, with a certain probability, individual dimensions of the spore may be forcibly set to zero, especially when the best fitness value is near zero, enhancing the algorithm’s ability to locate optima at or near the origin. All generated spores are ensured to remain within the problem boundaries.
In the second phase (germination), each spore has a probability (p_germ, also self-adaptive) to germinate. If so, its fitness is evaluated. The algorithm then uses a crowding (similarity-based) replacement strategy: the spore is compared against the most similar solution in the population (measured by Euclidean distance), and it replaces that solution only if its fitness is superior. If the spore achieves a new best fitness, x_best and f_best are updated.
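A minimal sketch of this crowding replacement, under the same assumptions as the earlier snippets, might look as follows; the nearest-neighbor search is a plain O(NP·dim) scan over the population.

```cpp
// Sketch of germination with similarity-based (crowding) replacement.
#include <limits>
#include <vector>

void germinate(std::vector<std::vector<double>>& X, std::vector<double>& F,
               const std::vector<double>& spore, const Problem& p,
               std::vector<double>& xbest, double& fbest) {
    double fs = p.objective(spore);                 // evaluate the spore
    size_t idx = 0;
    double best_d = std::numeric_limits<double>::max();
    for (size_t i = 0; i < X.size(); ++i) {         // nearest member in X
        double d2 = 0.0;
        for (size_t k = 0; k < spore.size(); ++k) {
            double diff = X[i][k] - spore[k];
            d2 += diff * diff;                      // squared Euclidean distance
        }
        if (d2 < best_d) { best_d = d2; idx = i; }
    }
    if (fs < F[idx]) { X[idx] = spore; F[idx] = fs; }  // replace only if better
    if (fs < fbest)  { xbest = spore; fbest = fs; }    // track the global best
}
```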
The third phase is optional local search, where each solution in the population has a probability (p_loc) of undergoing a specialized local search procedure. If the refined solution is better, it replaces the current one and updates the global best if necessary.
Throughout the process, all critical parameters, including the dispersal radius and the probabilities of sporulation and germination, are dynamically self-adapted based on the search progress, specifically on improvements in the mean fitness of the population. This mechanism ensures that SIOA can automatically balance exploration and exploitation according to the evolving state of the search.
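The self-adaptation rules of Algorithm 1 (lines 10–21) can be condensed into a small helper, as sketched below; the threshold (0.001), the multipliers (0.98/1.02), and the clamping bounds (0.1–1.0) are taken from the pseudocode, while the function itself is illustrative only.

```cpp
// Sketch of SIOA's parameter self-adaptation (Algorithm 1, lines 10-21).
#include <algorithm>
#include <cmath>

void adapt(double t /* = iter / Iter_max, in [0,1] */, double best_prev,
           double fbest, double Rmin, double Rmax,
           double& R, double& ps, double& pg) {
    const double eps = 1e-10;
    R = Rmax - t * (Rmax - Rmin);          // radius shrinks linearly over time
    double prog = (best_prev - fbest) / (std::fabs(best_prev) + eps);
    if (prog >= 0.001) {                   // improving: favor exploitation
        ps = std::clamp(ps * 0.98, 0.1, 1.0);
        pg = std::clamp(pg * 1.02, 0.1, 1.0);
    } else {                               // stagnating: favor exploration
        ps = std::clamp(ps * 1.02, 0.1, 1.0);
        pg = std::clamp(pg * 0.98, 0.1, 1.0);
    }
}
```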
The use of similarity-based (crowding) replacement preserves population diversity and helps prevent premature convergence, while the special zero-reset rule increases the chance of discovering global optima at zero. In practice, the zero-reset mechanism is applied selectively to a fraction of the population when convergence is detected near the coordinate axes or the origin. This targeted reinitialization is particularly effective for benchmark problems with optima located at or close to the origin (e.g., Ackley and Discus), since it enhances the probability of sampling in the true optimum’s neighborhood. By introducing controlled diversity only under these conditions, zero-reset improves exploration without disrupting convergence dynamics, thereby increasing accuracy for this important class of optimization problems. The stochastic local search phase further enhances exploitation capability. Overall, the combination of these mechanisms creates a dynamic, self-adjusting system in which the algorithm continuously tunes its parameters and replacement strategies based on intermediate solution quality, thus maximizing its ability to efficiently explore complex, multimodal, and high-dimensional search spaces.

3. Experimental Setup and Benchmark Results

The experimental framework is structured as follows: first, the benchmark functions used for performance evaluation are introduced; then, a thorough examination of the experimental results is provided. A systematic parameter sensitivity analysis is conducted to validate the algorithm’s robustness and optimization capabilities under different conditions. All experimental configurations are specified in Table 1.
The computational experiments were conducted on a Debian Linux machine with 128 GB of RAM. The testing framework involved 30 independent runs for each benchmark function, each initialized with fresh random seeds, ensuring robust statistical analysis. The experiments utilized a custom-developed tool implemented in ANSI C++ within the OPTIMUS [21] platform, an open-source optimization library available at https://github.com/itsoulos/GLOBALOPTIMUS (accessed on 28 July 2025). The algorithm’s parameters, as detailed in Table 1, were carefully selected to balance exploration and exploitation effectively.

3.1. Experiments with Traditional Methods and Classical Benchmark Problems

The evaluation of SIOA was first conducted on established benchmark function sets [22,23,24], in direct comparison with widely used traditional optimization methods, in order to assess its computational efficiency, convergence capability, and result stability under standard testing scenarios (Table 2).
The results presented in Table 3 were obtained using the parameter settings described in Table 1. An important observation is the consistency of the best solution over 12 consecutive iterations, as required by the similarity-based stopping rule (N_s in Table 1), which demonstrates a high degree of stability and robustness in the optimization process. This stability was achieved with minimal reliance on local optimization, as the local search procedure was applied in only 0.5% of the cases. Such performance indicates that the algorithm’s global search capabilities are sufficient to consistently identify optimal or near-optimal solutions without heavy dependence on local refinement methods.
Figure 1, Figure 2 and Figure 3 provide a complementary visualization of the numerical data presented in Table 3. These figures highlight both the distribution of function evaluations and the relative execution times across methods, offering a clearer view of convergence dynamics and performance consistency. In this way, the graphical results reinforce the tabular evidence, allowing for a more intuitive comparison of SIOA against GA, DE, PSO, and ACO.
The comparative analysis of the results of Table 3 shows that the proposed SIOA method outperforms traditional GA, DE, PSO, and ACO methods across a wide range of benchmark functions, both in terms of the number of objective function evaluations and the success rate. In the vast majority of cases, SIOA achieves the minimum or one of the lowest evaluation counts, indicating high computational efficiency and faster convergence. The differences are particularly evident in multidimensional and multimodal problems, where traditional methods such as GA and DE require significantly more evaluations, often more than double or triple those of SIOA.
The success rate, which is 100% when not shown in parentheses, also presents a positive picture for SIOA. Its overall value reaches 94.9%, surpassing the corresponding rates of GA (90.1%) and PSO (92.8%) and coming very close to the best performances of DE (96%) and ACO (92%), but with considerably lower computational cost. In several challenging cases, such as the GRIEWANK and POTENTIAL functions, SIOA combines low evaluation requirements with competitive or even maximum success rates, demonstrating an ability to maintain a balance between exploration and exploitation.
The overall picture, as reflected in the last row of the table, confirms SIOA’s general superiority, as it achieves the lowest total number of evaluations (66,953) compared to other methods, which range from about 72,478 (ACO) to 160,423 (DE). This high efficiency, combined with the stability of the results, suggests that the biologically inspired strategy of sporulation and germination, together with mechanisms for self-adaptation and diversity preservation, offers a clear advantage over classic evolutionary and swarm-based methods across a wide spectrum of optimization problems.
To enhance the numerical accuracy of the final solutions without compromising the validity of cross-method comparisons, we enabled the same lightweight local search routine for all algorithms reported in Table 3 (SIOA, GA, DE, PSO, ACO). Concretely, at the end of each iteration, each candidate solution had an independent probability P_loc = 0.005 (0.5%) of invoking the local search procedure (see Algorithm 1, lines 48–60, and the parameters summarized in Table 1). This very small activation rate yields occasional refinements near convergence while keeping the computational overhead and any potential bias negligible; importantly, no method received bespoke local search settings or a larger budget. In practice, the routine was triggered only rarely, thus improving accuracy without altering the comparative performance trends observed in Table 3.
The analysis of the results (Friedman test [28]) presented in Figure 4 shows the performance comparison of the proposed SIOA optimization method against other established techniques. The p-values, which indicate the levels of statistical significance, reveal that SIOA demonstrates a highly significant superiority over GA, DE, and PSO, with p-values lower than 0.0001. In contrast, the comparison between SIOA and ACO did not show a statistically significant difference, as the p-value is greater than 0.05, indicating that the two methods exhibit a similar level of performance according to this statistical evaluation.
Figure 5 visualizes, on a per-problem basis, the comparative performance of all algorithms as derived from the function-evaluation counts and success rates reported in Table 3. Across most benchmarks, SIOA attains the lowest or among the lowest number of evaluations, with particularly clear margins on multimodal or higher-dimensional families (e.g., GRIEWANK*, POTENTIAL*), while ACO is occasionally comparable; these patterns are consistent with the tabulated trends. The aggregate line of Table 3 is reflected here as well, confirming the overall evaluation load: SIOA (66,953) vs. GA (93,941), PSO (106,484), DE (160,423), and ACO (72,478), underscoring SIOA’s efficiency advantage without altering the success-rate profile.

3.2. Experiments with Advanced Methods and Real-World Problems

Subsequently, SIOA was tested against more sophisticated algorithms on complex, large-scale problems derived from realistic application domains, aiming to evaluate its performance under increased complexity, constraint handling, and uncertainty. These problems are presented in Table 4.
The results shown in Table 5 were obtained using the parameter settings defined in Table 1. Also, a detailed ranking for the algorithms is presented in Table 6. The termination criterion was set to 150,000 function evaluations, ensuring a uniform computational budget across all test cases. No local optimization procedures were applied during the runs, meaning that the reported outcomes reflect solely the global search capabilities of the algorithm without any refinement from local search techniques. This setup allows for an unbiased assessment of the method’s performance under purely global exploration conditions.
The comparative analysis of the optimization methods, based on both best and mean performance after 150,000 function evaluations, reveals clear distinctions in their overall effectiveness. CMA-ES achieved the highest ranking, excelling in both peak and consistent performance, followed by EO and CLPSO, which demonstrated strong competitiveness. SIOA ranked closely behind these top methods, showing notable strengths in complex, high-dimensional, and multimodal problems, where its adaptive sporulation and germination mechanisms effectively balanced exploration and exploitation. In certain cases, such as the Tersoff Potential and Static Economic Load Dispatch problems, SIOA’s results approached those of CMA-ES, highlighting its capacity to rival advanced evolutionary strategies. However, its slightly higher variance in some problem instances, particularly in less multimodal landscapes, reduced its mean performance score, preventing it from achieving the top overall rank. Despite this, SIOA emerges as a modern and competitive algorithm with strong potential for further improvement, especially through integration with specialized local search schemes aimed at enhancing stability and precision.
Also, a comparison of all algorithms and a final ranking is presented in Table 7.

3.3. Exploration and Exploitation

In this study, the trade-off between exploration and exploitation is assessed using a specific set of quantitative indicators: Initial Population Diversity (IPD), Final Population Diversity (FPD), Average Exploration Ratio (AER), Median Exploration Ratio (MER), and Average Balance Index (ABI). These metrics, although fundamentally grounded in population diversity measurements, are designed to capture both the temporal evolution of exploration by monitoring diversity changes over the course of the optimization and the degree of exploitation through the level of convergence in the final population. While these indicators provide a structured way to examine algorithmic behavior, further investigation employing more direct analysis tools, such as attraction basin mapping or tracking the clustering of solutions around local or global optima, could yield deeper insights into the search dynamics. Such approaches are considered a promising avenue for extending the current work.
The metrics reported in Table 8 quantify and track the interplay between exploration and exploitation throughout the execution of the SIOA algorithm. Their computation relies on diversity measurements at different stages of the optimization process and on how these values evolve over iterations.
The IPD quantifies the diversity present at the very start of the optimization and is obtained by computing the mean Euclidean distance between all pairs of individuals in the initial population:

$IPD = \frac{2}{NP(NP-1)} \sum_{i=1}^{NP-1} \sum_{j=i+1}^{NP} d(x_i, x_j),$

where $d(x_i, x_j)$ is the Euclidean distance between solutions $x_i$ and $x_j$, and NP denotes the population size.
The FPD is computed using the same formulation but applied to the final set of solutions after the algorithm has completed.
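Both IPD and FPD reduce to the same computation, the mean pairwise Euclidean distance of a population, which can be sketched as follows (a simple O(NP²·dim) implementation, names illustrative).

```cpp
// Sketch: mean pairwise Euclidean distance of a population; applied to
// the initial population it yields IPD, to the final population FPD.
#include <cmath>
#include <vector>

double diversity(const std::vector<std::vector<double>>& X) {
    size_t NP = X.size();
    double sum = 0.0;
    for (size_t i = 0; i + 1 < NP; ++i)
        for (size_t j = i + 1; j < NP; ++j) {
            double d2 = 0.0;
            for (size_t k = 0; k < X[i].size(); ++k) {
                double diff = X[i][k] - X[j][k];
                d2 += diff * diff;
            }
            sum += std::sqrt(d2);                     // d(x_i, x_j)
        }
    return 2.0 * sum / (NP * (NP - 1.0));             // average over all pairs
}
```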
The AER reflects the average level of exploration across all iterations and is defined as

$AER = \frac{1}{G} \sum_{g=1}^{G} \frac{IPD_g}{IPD_1},$

where $G = iter_{max}$ is the total number of iterations, $IPD_g$ represents the diversity at iteration g, and $IPD_1$ is the initial diversity value.
The MER is the median value of the exploration ratios recorded over all generations:

$MER = \operatorname{median}\left\{ \frac{IPD_g}{IPD_1} : g = 1, \ldots, G \right\}.$
The ABI serves as a composite measure of the exploration–exploitation balance. It is typically calculated as a weighted function of AER and FPD (or other exploitation-related indicators):

$ABI = \frac{AER}{AER + \epsilon} \cdot \left( 1 - \frac{FPD}{IPD} \right),$

where $\epsilon$ is a small constant introduced to avoid division by zero. An ABI value close to 0.5 generally indicates a well-balanced interplay between exploration and exploitation.
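Assuming a recorded per-iteration diversity trace, the three ratios can be computed as sketched below; note that the ABI weighting follows the reconstruction above and has not been verified against the authors' code.

```cpp
// Sketch: AER, MER, and ABI from a diversity trace divs[0..G-1],
// where divs[0] is the initial diversity (IPD) and divs.back() the final (FPD).
#include <algorithm>
#include <vector>

void balance_metrics(std::vector<double> divs,
                     double& AER, double& MER, double& ABI) {
    const double eps = 1e-12;
    double IPD = divs.front(), FPD = divs.back();
    std::vector<double> ratios;
    double sum = 0.0;
    for (double d : divs) { ratios.push_back(d / IPD); sum += d / IPD; }
    AER = sum / ratios.size();                         // mean exploration ratio
    std::sort(ratios.begin(), ratios.end());
    size_t m = ratios.size() / 2;                      // median exploration ratio
    MER = ratios.size() % 2 ? ratios[m] : 0.5 * (ratios[m - 1] + ratios[m]);
    ABI = AER / (AER + eps) * (1.0 - FPD / IPD);       // as reconstructed above
}
```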

3.4. Parameter Sensitivity

By adopting the parameter sensitivity examination framework proposed by Lee et al. [29], this study provides a solid foundation for understanding how optimization algorithms react to changes in their configuration and sustain their reliability across varying conditions.
In the Potential problem (Table 9 and Figure 6), the mean best value improves as c_1 decreases: the mean best moves from −13.93 (c_1 = 0.9) toward −16.36 (c_1 = 0.1), with a main effect range of 2.43. This indicates that for this high-dimensional, strongly multimodal potential, excessive stochastic dispersion (high c_1) “blurs” exploitation of promising areas, whereas mild dispersion supports steady improvement. The impact of c_2 is stronger (range 3.04) and non-monotonic: moderate values around 0.3 yield the best mean performance (−16.10), while very low or very high values degrade results. Therefore, in Potential, a clear preference emerges for a “moderate” pull toward the best solution (c_2 ≈ 0.3) combined with a low stochastic perturbation (small c_1).
In the Rastrigin problem (Table 10 and Figure 7), the behavior differs: c_1 has a relatively small main effect (0.88), and the best mean value occurs around c_1 = 0.7 (mean best ≈ 1.25), with similar performance at c_1 = 0.3. In contrast, c_2 is more decisive (range 2.44), with the optimal zone around 0.5 (mean best ≈ 0.82). The Rastrigin function, with its pronounced symmetric multimodality, benefits from a stronger attraction mechanism toward the best (moderate c_2), which helps “lock in” low-value basins, while a moderate c_1 maintains enough exploration without destabilizing convergence. It is notable that the minima are often 0.00, indicating that all combinations can reach the global minimum, but the mean values differentiate reliability and stability.
In the Test2n problem (Table 11 and Figure 8), the picture is even clearer in favor of low c_2: the main effect of c_2 is very high (8.94), and the best mean performance appears at c_2 = 0.1 (mean best ≈ −152.41). Increasing c_2 toward 0.7–0.9 significantly worsens mean performance, although the minima remain near −156.664 for all settings. This shows that excessive attraction toward the best induces premature convergence into local basins and increases performance variability. c_1 has a moderate impact (2.82), with a trend suggesting that larger values (e.g., 0.9) may slightly improve mean performance, likely by helping to escape narrow polynomial valleys. Overall, in Test2n4, the guidance is clear: keep c_2 low and allow c_1 to be medium-to-high to maintain consistent solution quality.
In the Rosenbrock4 problem (Table 12 and Figure 9), c_1 has the largest overall effect across all cases (range 30.29), with a dramatic improvement in mean performance as it increases from 0.1 to 0.9 (mean best from about 35.11 to about 4.82). The Rosenbrock function’s narrow curved valley and anisotropy explain why stronger stochastic perturbation helps maintain mobility along the valley and avoid “dead zones” in step progression. c_2 shows a U-shaped trend: the best mean performance occurs at 0.3 (mean best ≈ 4.43), while very low or very high c_2 increases the risk of large outliers, as seen in maximum values that can spike dramatically. Thus, in Rosenbrock4, a high c_1 is recommended to keep search activity within the valley, and a moderate c_2 ≈ 0.3 helps avoid both over-pulling, which can distort the valley geometry, and overly loose guidance, which delays convergence.
Synthesizing these findings, a consistent tuning pattern emerges: in highly multimodal landscapes with many symmetric basins, such as Rastrigin, a moderate c_2 around 0.5 and a moderate c_1 around 0.3–0.7 minimize mean values and stabilize convergence. In “parabolic” or polynomial landscapes like Test2n, a low c_2 and a medium-to-high c_1 improve stability and mean performance, preventing premature convergence. In narrow-valley problems like Rosenbrock, a strong c_1 and a moderate c_2 ≈ 0.3 appear to be the most robust choice. Finally, for dense multimodal potentials like Potential, the optimal zone tends toward low c_1 and moderate c_2 ≈ 0.3, balancing small, targeted jumps with steady, controlled attraction toward the best.
In practical terms, the ranges that reappear as “safe defaults” are c_2 in the moderate range of 0.3–0.5, and c_1 adapted to landscape morphology: low for Potential-type landscapes, moderate for Rastrigin, high for Rosenbrock, and medium-to-high for polynomial Test2n landscapes. The min/max values per setting highlight the tendency for extreme deviations when c_2 is too high or too low, especially in Rosenbrock, reinforcing that the “high c_1, moderate c_2” combination is often the most resilient operating point when the goal is high mean performance rather than isolated best cases.

3.5. Analysis of Computational Cost and Complexity of the SIOA Algorithm

Figure 10 illustrates the complexity of the proposed method, showing the number of objective function calls and the execution time (in seconds) for problem dimensions ranging from 20 to 260. The experimental settings follow the parameter values specified in Table 1, with the termination criterion based on the homogeneity of the best value. In addition, a limited local optimization procedure is applied at a rate of only 0.5%, enhancing the exploitation of promising regions in the search space without significantly affecting the overall global exploration strategy.
More specifically, in the ELLIPSOIDAL problem, the execution time increases gradually from 0.111 s at dimension 20 to 41.714 s at dimension 260, while the corresponding objective function calls range from 1398 to 6457. Similarly, for the ROSENBROCK problem, the execution time rises from 0.144 s at dimension 20 to 40.873 s at dimension 260, with the number of calls increasing from 2830 to 9200. The results indicate that both execution time and the number of calls grow as the problem dimensionality increases, with ROSENBROCK generally requiring greater computational effort in higher dimensions than ELLIPSOIDAL. This observation highlights the sensitivity of the method’s complexity to the nature of the problem, while also confirming its ability to scale efficiently across a wide range of search space sizes.

4. Conclusions

Based on the experiments conducted, SIOA proves to be a mature, competitive, and efficient metaheuristic. In classical benchmark problems, it consistently outperforms GA, DE, PSO, and ACO in terms of required objective function calls while maintaining a high success rate; the overall evaluation footprint is significantly lower than that of traditional methods, translating into faster convergence for a given computational budget. This performance profile supports the view that the biologically inspired “sporulation–germination” mechanism, combined with self-adaptive parameter control and similarity-based replacement, provides a tangible advantage across a wide range of problem types.
The method also demonstrates notable stability: with the parameter settings of Table 1, the best result was reproduced uniformly over 12 consecutive iterations (the similarity-count stopping rule), while local optimization was used minimally (only 0.5%), indicating that SIOA’s global search is sufficient to locate optimal or near-optimal solutions without relying heavily on exploitation. The algorithm’s core components, stochastic perturbation around an adaptive radius, attraction toward the global best, the “zero-reset” rule when the optimum lies near the origin, and replacement through crowding, collectively explain both the maintenance of diversity and the ability to avoid premature convergence.
In more demanding, realistic scenarios with a uniform budget of 150,000 function evaluations and no local optimization, SIOA remains highly competitive against advanced techniques. Although CMA-ES achieved the top overall rank, SIOA came very close, with results in certain cases (e.g., Tersoff Potential and Static Economic Load Dispatch) approaching the best of the leading competitors. A slightly higher variance in some less multimodal landscapes limited the mean performance, highlighting a margin for improvement in stability without undermining the overall strength of the method.
The scalability analysis shows that both runtime and function evaluations increase with problem dimension and landscape ruggedness, with problems such as Rosenbrock generally requiring more computational effort than smoother ellipsoidal forms—an observation consistent with the expected behavior of metaheuristics in difficult, poorly scaled valleys. In all cases, SIOA maintains an economical evaluation profile compared to competing approaches, a feature of direct value in costly simulations.
Overall, the method is realistically ready for application: fast in terms of evaluations, stable without relying on intensive local search, and sufficiently flexible to dynamically adapt critical parameters as the search progresses. At the same time, clear opportunities for further improvement remain. Realistic next steps include integrating more specialized, problem-sensitive local optimizers to reduce variance and improve final accuracy, as well as extending SIOA to constrained, multi-objective, and large-scale problems, where the combination of self-adaptation, crowding, and “zero-reset” may yield even greater benefits. Equally promising are explorations of hybrid versions augmented with surrogate modeling for expensive problems, further parallelization and GPU/multi-threaded implementations, the use of restart strategies and dynamic similarity thresholds, and the development of fully parameter-free versions with stronger theoretical convergence guarantees. The indicated extensions to constrained, multi-objective, and large-scale applications, along with reinforcement via dedicated local search schemes, underscore SIOA’s realistic potential as a modern foundation for further research and practical deployment.

Author Contributions

Conceptualization, I.G.T. and V.C.; methodology, V.C.; software, V.C.; validation, I.G.T., D.T. and A.M.G.; formal analysis, D.T.; investigation, I.G.T.; resources, D.T.; data curation, A.M.G.; writing—original draft preparation, V.C.; writing—review and editing, I.G.T.; visualization, V.C.; supervision, I.G.T.; project administration, I.G.T.; funding acquisition, I.G.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been financed by the European Union: Next Generation EU through the Program Greece 2.0 National Recovery and Resilience Plan, under the call RESEARCH–CREATE–INNOVATE, project name “iCREW: Intelligent small craft simulator for advanced crew training using Virtual Reality techniques” (project code: TAEDK-06195).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cauchy, A.-L. Méthode générale pour la résolution des systèmes d’équations simultanées. C. R. Hebd. Séances Acad. Sci. 1847, 25, 536–538.
  2. Nocedal, J.; Wright, S.J. Numerical Optimization, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2006.
  3. Newton, I. Method of Fluxions; Colson, J., Translator; Henry Woodfall: London, UK, 1736; original work written in 1671.
  4. Metropolis, N.; Ulam, S. The Monte Carlo method. J. Am. Stat. Assoc. 1949, 44, 335–341.
  5. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680.
  6. Holland, J.H. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975.
  7. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
  8. Charilogis, V.; Tsoulos, I.G.; Tzallas, A.; Karvounis, E. Modifications for the Differential Evolution Algorithm. Symmetry 2022, 14, 447.
  9. Charilogis, V.; Tsoulos, I.G. A Parallel Implementation of the Differential Evolution Method. Analytics 2023, 2, 17–30.
  10. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; IEEE: New York, NY, USA, 1995; Volume 4, pp. 1942–1948.
  11. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans. Evol. Comput. 2009, 13, 398–417.
  12. Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Trans. Evol. Comput. 2006, 10, 646–657.
  13. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295.
  14. Hansen, N.; Ostermeier, A. Completely derandomized self-adaptation in evolution strategies. Evol. Comput. 2001, 9, 159–195.
  15. Nelder, J.A.; Mead, R. A simplex method for function minimization. Comput. J. 1965, 7, 308–313.
  16. Dorigo, M. Optimization, Learning and Natural Algorithms. Ph.D. Thesis, Politecnico di Milano, Milano, Italy, 1992.
  17. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report TR06; Erciyes University, Engineering Faculty, Computer Engineering Department: Kayseri, Turkey, 2005.
  18. Lam, A. BFGS in a Nutshell: An Introduction to Quasi-Newton Methods—Demystifying the Inner Workings of BFGS Optimization; Towards Data Science: San Francisco, CA, USA, 2020.
  19. Charilogis, V.; Tsoulos, I.G. Toward an Ideal Particle Swarm Optimizer for Multidimensional Functions. Information 2022, 13, 217.
  20. Gianni, A.M.; Tsoulos, I.G.; Charilogis, V.; Kyrou, G. Enhancing Differential Evolution: A Dual Mutation Strategy with Majority Dimension Voting and New Stopping Criteria. Symmetry 2025, 17, 844.
  21. Tsoulos, I.G.; Charilogis, V.; Kyrou, G.; Stavrou, V.N.; Tzallas, A. OPTIMUS: A Multidimensional Global Optimization Package. J. Open Source Softw. 2025, 10, 7584.
  22. Siarry, P.; Berthiau, G.; Durdin, F.; Haussy, J. Enhanced simulated annealing for globally minimizing functions of many-continuous variables. ACM Trans. Math. Softw. 1997, 23, 209–228.
  23. Koyuncu, H.; Ceylan, R. A PSO based approach: Scout particle swarm algorithm for continuous global optimization problems. J. Comput. Des. Eng. 2019, 6, 129–142.
  24. LaTorre, A.; Molina, D.; Osaba, E.; Poyatos, J.; Del Ser, J.; Herrera, F. A prescription of methodological guidelines for comparing bio-inspired optimization algorithms. Swarm Evol. Comput. 2021, 67, 100973.
  25. Gaviano, M.; Kvasov, D.E.; Lera, D.; Sergeyev, Y.D. Algorithm 829: Software for generation of classes of test functions with known local and global minima for global optimization. ACM Trans. Math. Softw. 2003, 29, 469–480.
  26. Jones, J.E. On the determination of molecular fields.—II. From the equation of state of a gas. Proc. R. Soc. Lond. A 1924, 106, 463–477.
  27. Zabinsky, Z.B.; Graesser, D.L.; Tuttle, M.E.; Kim, G.I. Global optimization of composite laminates using improving hit and run. In Recent Advances in Global Optimization; Princeton University Press: Princeton, NJ, USA, 1992; pp. 343–368.
  28. Friedman, M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 1937, 32, 675–701.
  29. Lee, Y.; Filliben, J.; Micheals, R.; Phillips, J. Sensitivity Analysis for Biometric Systems: A Methodology Based on Orthogonal Experiment Designs; NISTIR; National Institute of Standards and Technology: Gaithersburg, MD 20899, USA, 2012.
Figure 1. Performance of SIOA and reference methods on benchmark problems: distribution of function evaluations across runs.
Figure 2. Comparative analysis of SIOA versus GA, DE, PSO, and ACO: average execution times for all test functions.
Figure 3. Overall convergence dynamics of SIOA and competing algorithms: stability and robustness across problems.
Figure 4. Statistical comparison of SIOA against other methods.
Figure 5. Performance of all methods on each problem.
Figure 6. Graphical representation of c_1 and c_2 for the Potential problem.
Figure 7. Graphical representation of c_1 and c_2 for the Rastrigin problem.
Figure 8. Graphical representation of c_1 and c_2 for the Test2n problem.
Figure 9. Graphical representation of c_1 and c_2 for the Rosenbrock problem.
Figure 10. Computational performance (calls and time) of the proposed method on ELLIPSOIDAL and ROSENBROCK across dimensions 20–260.
Table 1. Parameters and settings.

Parameter | Value | Explanation
NP | 100 | Population size for all methods
p_spor | adaptive in [0, 1], initial 0.6 | Sporulation probability for SIOA
p_germ | adaptive in [0, 1], initial 0.9 | Germination probability for SIOA
R_min | adaptive in [0, 1], initial 0.01 | Smaller sporulation radius for SIOA
R_max | adaptive in [0, 1], initial 0.5 | Larger sporulation radius for SIOA
c_1 | 0.6 | Stochastic perturbation coefficient
c_2 | 0.4 | Attraction toward the global best
iter_max | 500 | Maximum number of iterations for all methods
SR | Similarity of best fitness [19,20], or iter_max, or FEs | Stopping rule
N_s | 12 | Similarity count_max for the stopping rule
P_loc | 0.005 (0.5%) | Local search rate for all methods (optional)
C_rate | 0.1 (10%) (classic value) | Crossover rate for GA
M_rate | 0.05 (5%) (classic value) | Mutation rate for GA
cf_1, cf_2 | 1.193 | Cognitive and social coefficients for PSO
w | 0.721 | Inertia for PSO
coef_1, coef_2 | 1.494 | Cognitive and social coefficients for CLPSO
w | 0.729 | Inertia for CLPSO
F | 0.8 | Initial scaling factor for DE and SaDE
CR | 0.9 | Initial crossover rate for DE and SaDE
w | random in [0.5, 1] | Inertia for PSO
NP_C | NP = 4 + 3·log(dimension) | Population for CMA-ES
Table 2. The benchmark functions used in the conducted experiments.

Name | Formula | Dimension
ACKLEY | $f(x) = -a \exp\left(-b \sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(c x_i)\right) + a + \exp(1)$, $a = 20.0$ | 4
BF1 | $f(x) = x_1^2 + 2x_2^2 - \frac{3}{10}\cos(3\pi x_1) - \frac{4}{10}\cos(4\pi x_2) + \frac{7}{10}$ | 2
BF2 | $f(x) = x_1^2 + 2x_2^2 - \frac{3}{10}\cos(3\pi x_1)\cos(4\pi x_2) + \frac{3}{10}$ | 2
BF3 | $f(x) = x_1^2 + 2x_2^2 - \frac{3}{10}\cos(3\pi x_1 + 4\pi x_2) + \frac{3}{10}$ | 2
BRANIN | $f(x) = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos(x_1) + 10$, $-5 \le x_1 \le 10$, $0 \le x_2 \le 15$ | 2
CAMEL | $f(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$, $x \in [-5, 5]^2$ | 2
DIFFERENT POWERS | $f(x) = \sum_{i=1}^{n} |x_i|^{2 + 4\frac{i-1}{n-1}}$ | 10
DIFFPOWER | $f(x) = \sum_{i=1}^{n} |x_i - y_i|^p$, $p = 2, 5, 10$ | 2, 5, 10
DISCUS | $f(x) = 10^6 x_1^2 + \sum_{i=2}^{n} x_i^2$ | 10
EASOM | $f(x) = -\cos(x_1)\cos(x_2)\exp\left(-(x_2 - \pi)^2 - (x_1 - \pi)^2\right)$ | 2
ELP | $f(x) = \sum_{i=1}^{n} \left(10^6\right)^{\frac{i-1}{n-1}} x_i^2$ | 10
EQUAL MAXIMA | $f(x) = \sin^6(5\pi x) \cdot e^{-2\log(2)\left(\frac{x - 0.1}{0.8}\right)^2}$ | 10
EXP | $f(x) = -\exp\left(-0.5\sum_{i=1}^{n} x_i^2\right)$, $-1 \le x_i \le 1$ | 10
GKLS [25] | $f(x) = \mathrm{Gkls}(x, n, w)$, $w = 50, 100$ | n = 2, 3
GOLDSTEIN | $f(x) = \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2\right)\right] \cdot \left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2\right)\right]$ | 2
GRIEWANK ROSENBROCK | $f(x) = \underbrace{\left[\frac{\|x\|^2}{4000} - \prod_{i=1}^{n}\cos\frac{x_i}{\sqrt{i}} + 1\right]}_{\text{Griewank}} \cdot \frac{1}{10}\underbrace{\sum_{i=1}^{n-1}\left[100(x_{i+1} - x_i^2)^2 + (1 - x_i)^2\right]}_{\text{Rosenbrock}}$ | 10
GRIEWANK2 | $f(x) = 1 + \frac{1}{200}\sum_{i=1}^{2} x_i^2 - \prod_{i=1}^{2}\frac{\cos(x_i)}{\sqrt{i}}$ | 2
GRIEWANK10 | $f(x) = 1 + \frac{1}{200}\sum_{i=1}^{10} x_i^2 - \prod_{i=1}^{10}\frac{\cos(x_i)}{\sqrt{i}}$ | 10
HANSEN | $f(x) = \sum_{i=1}^{5} i\cos\left((i-1)x_1 + i\right) \sum_{j=1}^{5} j\cos\left((j+1)x_2 + j\right)$ | 2
HARTMAN3 | $f(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{3} a_{ij}(x_j - p_{ij})^2\right)$ | 3
HARTMAN6 | $f(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{6} a_{ij}(x_j - p_{ij})^2\right)$ | 6
POTENTIAL [26] | $V_{LJ}(r) = 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]$ | 9, 15, 30
RASTRIGIN2 | $f(x) = x_1^2 + x_2^2 - \cos(18x_1) - \cos(18x_2)$ | 2
ROSENBROCK | $f(x) = \sum_{i=1}^{n-1}\left[100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\right]$, $-30 \le x_i \le 30$ | 4, 8, 16
ROTATED ROSENBROCK | $f(x) = \sum_{i=1}^{n-1}\left[100(z_{i+1} - z_i^2)^2 + (z_i - 1)^2\right]$, $z = Rx$ | 10
SHEKEL5 | $f(x) = -\sum_{i=1}^{5}\frac{1}{(x - a_i)(x - a_i)^T + c_i}$ | 4
SHEKEL7 | $f(x) = -\sum_{i=1}^{7}\frac{1}{(x - a_i)(x - a_i)^T + c_i}$ | 4
SHEKEL10 | $f(x) = -\sum_{i=1}^{10}\frac{1}{(x - a_i)(x - a_i)^T + c_i}$ | 4
SINUSOIDAL [27] | $f(x) = -\left[2.5\prod_{i=1}^{n}\sin(x_i - z) + \prod_{i=1}^{n}\sin(5(x_i - z))\right]$, $0 \le x_i \le \pi$ | 4, 8, 16
STEP ELLIPSOIDAL | $f(x) = \sum_{i=1}^{n}\lfloor x_i + 0.5\rfloor^2 + \alpha\sum_{i=1}^{n} 10^{6\cdot\frac{i-1}{n-1}} x_i^2$, $\alpha = 1$ | 4
TEST2N | $f(x) = \frac{1}{2}\sum_{i=1}^{n}\left(x_i^4 - 16x_i^2 + 5x_i\right)$ | 4, 5
TEST30N | $f(x) = \frac{1}{10}\left[\sin^2(3\pi x_1) + \sum_{i=2}^{n-1}(x_i - 1)^2\left(1 + \sin^2(3\pi x_{i+1})\right) + (x_n - 1)^2\left(1 + \sin^2(2\pi x_n)\right)\right]$ | 4, 5
Table 3. Comparison of function calls of the SIOA method with others. Each cell reports calls/time (time in seconds); success rates below 1.0 are shown in parentheses.

Function | SIOA Calls/Time | GA Calls/Time | DE Calls/Time | PSO Calls/Time | ACO Calls/Time
ACKLEY | 3028 / 0.072 | 3441 / 0.089 | 10694 / 0.157 | 5684 (0.86) / 0.102 | 3449 / 0.103
BF1 | 1204 / 0.028 | 2346 / 0.056 | 4963 / 0.065 | 2562 / 0.034 | 1558 (0.4) / 0.033
BF2 | 1177 / 0.028 | 2116 / 0.051 | 5139 / 0.070 | 2332 / 0.032 | 1523 (0.96) / 0.040
BF3 | 1144 / 0.029 | 2163 / 0.053 | 4730 / 0.063 | 2093 / 0.028 | 1410 / 0.044
BRANIN | 950 / 0.027 | 1668 / 0.045 | 2022 / 0.034 | 1686 / 0.025 | 1054 / 0.031
CAMEL | 1154 / 0.029 | 1835 / 0.048 | 3161 / 0.051 | 2029 / 0.028 | 1227 / 0.029
DIFFERENT POWERS10 | 2123 / 0.086 | 2507 / 0.100 | 3897 / 0.108 | 2608 / 0.079 | 2003 / 0.088
DIFFPOWER2 | 1590 / 0.033 | 1886 / 0.053 | 3239 / 0.049 | 2694 / 0.035 | 1740 / 0.036
DIFFPOWER5 | 3471 / 0.143 | 3770 / 0.134 | 5620 / 0.169 | 4472 / 0.133 | 3789 / 0.140
DIFFPOWER10 | 4407 / 0.461 | 3909 / 0.352 | 6546 / 0.616 | 5091 / 0.479 | 4582 / 0.464
DISCUS10 | 931 / 0.045 | 1640 / 0.066 | 2433 / 0.049 | 1658 / 0.037 | 1010 / 0.037
EASOM | 776 / 0.025 | 1618 / 0.043 | 1784 / 0.032 | 1576 / 0.026 | 977 / 0.030
ELP10 | 1126 / 0.059 | 1771 / 0.079 | 2613 / 0.070 | 1867 / 0.051 | 1224 / 0.490
EQUAL MAXIMA10 | 2649 / 0.139 | 2212 / 0.110 | 4341 / 0.174 | 3401 / 0.142 | 2384 / 0.135
EXP10 | 1096 / 0.042 | 1764 / 0.064 | 2625 / 0.050 | 1795 / 0.038 | 1175 / 0.033
GKLS250 | 1202 / 0.037 | 1862 / 0.057 | 3427 / 0.065 | 1996 / 0.037 | 1245 / 0.038
GKLS350 | 1207 / 0.043 | 2038 (0.86) / 0.067 | 3637 / 0.074 | 2361 / 0.047 | 1550 (0.86) / 0.049
GOLDSTEIN | 1161 / 0.030 | 1925 / 0.052 | 2621 / 0.039 | 1955 / 0.026 | 1249 / 0.030
GRIEWANK ROSENBROCK10 | 1684 / 0.151 | 2136 / 0.126 | 3743 / 0.212 | 2437 / 0.143 | 1843 / 0.155
GRIEWANK2 | 1061 / 0.028 | 2956 (0.26) / 0.061 | 4765 (0.46) / 0.070 | 1589 (0.23) / 0.052 | 839 / 0.066
GRIEWANK10 | 1899 (0.6) / 0.117 | 2936 (0.2) / 0.101 | 4582 (0.5) / 0.124 | 2209 (0.36) / 0.118 | 2444 (0.33) / 0.140
HANSEN | 1486 / 0.041 | 2143 (0.86) / 0.068 | 3078 / 0.062 | 2964 / 0.061 | 1424 (0.86) / 0.000
HARTMAN3 | 1067 / 0.030 | 1744 / 0.050 | 2376 / 0.040 | 1760 / 0.028 | 1099 / 0.033
HARTMAN6 | 1129 / 0.040 | 1733 (0.73) / 0.055 | 2558 / 0.050 | 1917 (0.7) / 0.037 | 1222 (0.93) / 0.037
POTENTIAL3 | 1156 / 0.051 | 1754 / 0.068 | 2694 / 0.064 | 1875 / 0.047 | 1270 / 0.055
POTENTIAL5 | 1639 / 0.108 | 2106 / 0.116 | 3320 / 0.140 | 2424 / 0.114 | 1749 / 0.133
POTENTIAL10 | 3104 (0.6) / 0.637 | 3566 (0.43) / 0.558 | 5583 (0.66) / 0.818 | 4581 (0.5) / 0.763 | 3182 (0.43) / 0.801
RASTRIGIN2 | 933 / 0.027 | 2411 (0.93) / 0.057 | 4412 / 0.060 | 3017 (0.96) / 0.040 | 1661 / 0.039
ROSENBROCK4 | 1422 / 0.035 | 1783 / 0.057 | 2860 / 0.046 | 2069 / 0.032 | 1496 / 0.039
ROSENBROCK8 | 1558 / 0.052 | 2072 / 0.068 | 3962 / 0.076 | 2501 / 0.050 | 1751 / 0.054
ROSENBROCK16 | 1833 / 0.113 | 2506 / 0.114 | 4157 / 0.145 | 2781 / 0.104 | 2151 / 0.100
ROTATED ROSENBROCK10 | 1785 / 0.081 | 2237 / 0.089 | 3663 / 0.095 | 2675 / 0.074 | 1918 (0.96) / 0.067
SHEKEL5 | 1220 / 0.036 | 1770 (0.66) / 0.052 | 2884 / 0.050 | 1990 (0.76) / 0.034 | 1298 (0.76) / 0.038
SHEKEL7 | 1286 / 0.039 | 1812 (0.83) / 0.054 | 2890 (0.96) / 0.052 | 2080 (0.83) / 0.037 | 1351 (0.83) / 0.040
SHEKEL10 | 1345 (0.9) / 0.043 | 1867 (0.66) / 0.058 | 3625 / 0.067 | 2091 (0.83) / 0.041 | 1335 / 0.040
SINUSOIDAL4 | 1358 / 0.042 | 1938 / 0.061 | 3263 / 0.064 | 2213 / 0.044 | 1278 / 0.040
SINUSOIDAL8 | 1541 (0.96) / 0.075 | 1957 / 0.080 | 3241 / 0.096 | 2014 / 0.066 | 1459 / 0.062
SINUSOIDAL16 | 1814 (0.53) / 0.239 | 2319 (0.76) / 0.157 | 4209 (0.7) / 0.283 | 2680 / 0.209 | 1979 (0.86) / 0.199
STEP ELLIPSOIDAL4 | 994 / 0.033 | 1714 (0.96) / 0.052 | 2102 / 0.042 | 1960 / 0.036 | 1259 / 0.038
TEST2N4 | 1502 (0.73) / 0.040 | 2270 (0.96) / 0.070 | 3619 / 0.067 | 2153 / 0.038 | 1437 (0.9) / 0.042
TEST2N5 | 1338 (0.5) / 0.044 | 2185 (0.66) / 0.072 | 4556 / 0.083 | 2376 (0.86) / 0.046 | 1601 (0.63) / 0.042
TEST30N3 | 1142 / 0.036 | 1730 / 0.055 | 2381 / 0.043 | 1998 / 0.033 | 1116 / 0.029
TEST30N4 | 1261 / 0.041 | 1825 / 0.062 | 2408 / 0.046 | 2270 / 0.040 | 1167 / 0.059
SUM calls/time | 66,953 / 3.535 | 93,941 / 3.880 | 160,423 / 4.830 | 106,484 / 3.666 | 72,478 / 4.198
AVG calls | 3043.32 | 4270.05 | 7291.95 | 4840.18 | 3294.45
AVG time | 0.080 | 0.088 | 0.110 | 0.083 | 0.095
AVG success rate (%) | 94.930 | 90.140 | 96.000 | 92.767 | 92.023
Table 4. Real world problems CEC2011.
Table 4. Real world problems CEC2011.
Problem: Parameter Estimation for Frequency-Modulated Sound Waves
Formula: $\min_{x \in [-6.4, 6.35]^6} f(x) = \frac{1}{N}\sum_{n=1}^{N} \left( y(n;x) - y_{\text{target}}(n) \right)^2$, with $y(n;x) = x_0 \sin\left( x_1 n + x_2 \sin( x_3 n + x_4 \sin( x_5 n)) \right)$ (a Python sketch of this objective appears after the table).
Dim: 6. Bounds: $x_i \in [-6.4, 6.35]$.

Problem: Lennard-Jones Potential
Formula: $\min_{x \in \mathbb{R}^{3N-6}} f(x) = 4 \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \left( r_{ij}^{-12} - r_{ij}^{-6} \right)$
Dim: 30. Bounds: $x_0 = (0,0,0)$, $x_1, x_2 \in [0,4]$, $x_3 \in [0,\pi]$; for each further atom $k$, the coordinates $x_{3k-3}, x_{3k-2}, x_{3k-1} \in [-b_k, +b_k]$.

Problem: Bifunctional Catalyst Blend Optimal Control
Formula: $\frac{dx_1}{dt} = -k_1 x_1$, $\frac{dx_2}{dt} = k_1 x_1 - (k_2 + k_3) x_2 + k_4 x_5$, $\frac{dx_3}{dt} = k_2 x_2$, $\frac{dx_4}{dt} = -k_6 x_4 + k_5 x_5$, $\frac{dx_5}{dt} = k_3 x_2 + k_6 x_4 - (k_4 + k_5 + k_8 + k_9) x_5 + k_7 x_6 + k_{10} x_7$, $\frac{dx_6}{dt} = k_8 x_5 - k_7 x_6$, $\frac{dx_7}{dt} = k_9 x_5 - k_{10} x_7$, where $k_i(u) = c_{i1} + c_{i2} u + c_{i3} u^2 + c_{i4} u^3$.
Dim: 1. Bounds: $u \in [0.6, 0.9]$.

Problem: Optimal Control of a Non-Linear Stirred Tank Reactor
Formula: $J(u) = \int_0^{0.72} \left( x_1^2(t) + x_2^2(t) + 0.1 u^2 \right) dt$, with $\frac{dx_1}{dt} = -2 x_1 + x_2 + 1.25 u + 0.5 \exp\left( \frac{x_1}{x_1 + 2} \right)$, $\frac{dx_2}{dt} = -x_2 + 0.5 \exp\left( \frac{x_1}{x_1 + 2} \right)$, $x_1(0) = 0.9$, $x_2(0) = 0.09$, $t \in [0, 0.72]$.
Dim: 1. Bounds: $u \in [0, 5]$.

Problem: Tersoff Potential for model Si (B)
Formula: $\min_{x \in \Omega} f(x) = \sum_{i=1}^{N} E(x_i)$, $E(x_i) = \frac{1}{2} \sum_{j \neq i} f_c(r_{ij}) \left( V_R(r_{ij}) - B_{ij} V_A(r_{ij}) \right)$, where $r_{ij} = \lVert x_i - x_j \rVert$, $V_R(r) = A \exp(-\lambda_1 r)$, $V_A(r) = B \exp(-\lambda_2 r)$, $f_c(r)$ is the cutoff function and $B_{ij}$ carries the angle-dependent term.
Dim: 30. Bounds: $x_1 \in [0,4]$, $x_2 \in [0,4]$, $x_3 \in [0,\pi]$, $x_i \in \left[ -4 - \frac{i-3}{4},\ 4 \right]$ for $i \geq 4$.

Problem: Tersoff Potential for model Si (C)
Formula: $\min_x V(x) = \sum_{i=1}^{N} \sum_{j>i}^{N} f_C(r_{ij}) \left( a_{ij} f_R(r_{ij}) + b_{ij} f_A(r_{ij}) \right)$, with
$f_C(r) = \begin{cases} 1, & r < R - D \\ \frac{1}{2} + \frac{1}{2} \cos\left( \frac{\pi (r - R + D)}{2D} \right), & |r - R| \leq D \\ 0, & r > R + D \end{cases}$
$f_R(r) = A \exp(-\lambda_1 r)$, $f_A(r) = -B \exp(-\lambda_2 r)$, $b_{ij} = \left( 1 + \beta^n \zeta_{ij}^n \right)^{-1/(2n)}$, $\zeta_{ij} = \sum_{k \neq i,j} f_C(r_{ik})\, g(\theta_{ijk}) \exp\left( \lambda_3^3 (r_{ij} - r_{ik})^3 \right)$.
Dim: 30. Bounds: $x_1 \in [0,4]$, $x_2 \in [0,4]$, $x_3 \in [0,\pi]$, $x_i \in \left[ -4 - \frac{i-3}{4},\ 4 \right]$ for $i \geq 4$.

Problem: Spread Spectrum Radar Polyphase Code Design
Formula: $\min_{x \in X} f(x) = \max\left\{ |\varphi_1(x)|, |\varphi_2(x)|, \ldots, |\varphi_m(x)| \right\}$, where $X = \{ x \in \mathbb{R}^n \mid 0 \leq x_j \leq 2\pi,\ j = 1, \ldots, n \}$, $m = 2n - 1$, $\varphi_j(x) = \sum_{k=1}^{n-j} \cos( x_k - x_{k+j} )$ for $j = 1, \ldots, n-1$, $\varphi_n(x) = n$, and $\varphi_{n+\ell}(x) = -\varphi_{n-\ell}(x)$ for $\ell = 1, \ldots, n-1$.
Dim: 20. Bounds: $x_j \in [0, 2\pi]$.

Problem: Transmission Network Expansion Planning
Formula: $\min \sum_{l \in \Omega} c_l n_l + W_1 \sum_{l \in OL} | f_l - \bar{f}_l | + W_2 \sum_{l \in \Omega} \max(0, n_l - \bar{n}_l)$, subject to $S f = g - d$, $f_l = \gamma_l n_l \Delta\theta_l$ for $l \in \Omega$, $|f_l| \leq \bar{f}_l n_l$ for $l \in \Omega$, and $0 \leq n_l \leq \bar{n}_l$, $n_l \in \mathbb{Z}$.
Dim: 7. Bounds: $0 \leq n_i \leq \bar{n}_i$, $n_i \in \mathbb{Z}$.

Problem: Electricity Transmission Pricing
Formula: $\min_x f(x) = \sum_{i=1}^{N_g} C_i^{gen} \left( P_i^{gen} - R_i^{gen} \right)^2 + \sum_{j=1}^{N_d} C_j^{load} \left( P_j^{load} - R_j^{load} \right)^2$, subject to $\sum_j GD_{i,j} + \sum_j BT_{i,j} = P_i^{gen}$ for all $i$, $\sum_i GD_{i,j} + \sum_i BT_{i,j} = P_j^{load}$ for all $j$, and $GD_{i,j}^{max} = \min\left( P_i^{gen} - BT_{i,j},\ P_j^{load} - BT_{i,j} \right)$.
Dim: 126. Bounds: $GD_{i,j} \in [0, GD_{i,j}^{max}]$.

Problem: Circular Antenna Array Design
Formula: $\min_{r_1, \ldots, r_6, \varphi_1, \ldots, \varphi_6} f(x) = \max_{\theta \in \Omega} \left| AF(x, \theta) \right|$, with $AF(x, \theta) = \sum_{k=1}^{6} \exp\left( j \left[ 2\pi r_k \cos(\theta - \theta_k) + \frac{\varphi_k \pi}{180} \right] \right)$.
Dim: 12. Bounds: $r_k \in [0.2, 1]$, $\varphi_k \in [-180, 180]$.

Problem: Dynamic Economic Dispatch 1
Formula: $\min_P f(P) = \sum_{t=1}^{24} \sum_{i=1}^{5} \left( a_i P_{i,t}^2 + b_i P_{i,t} + c_i \right)$, subject to $P_i^{min} \leq P_{i,t} \leq P_i^{max}$ for $i = 1, \ldots, 5$ and $\sum_{i=1}^{5} P_{i,t} = D_t$ for $t = 1, \ldots, 24$, with $P^{min} = [10, 20, 30, 40, 50]$ and $P^{max} = [75, 125, 175, 250, 300]$.
Dim: 120. Bounds: $P_i^{min} \leq P_{i,t} \leq P_i^{max}$.

Problem: Dynamic Economic Dispatch 2
Formula: $\min_P f(P) = \sum_{t=1}^{24} \sum_{i=1}^{9} \left( a_i P_{i,t}^2 + b_i P_{i,t} + c_i \right)$, subject to $P_i^{min} \leq P_{i,t} \leq P_i^{max}$ for $i = 1, \ldots, 9$ and $\sum_{i=1}^{9} P_{i,t} = D_t$ for $t = 1, \ldots, 24$, with $P^{min} = [150, 135, 73, 60, 73, 57, 20, 47, 20]$ and $P^{max} = [470, 460, 340, 300, 243, 160, 130, 120, 80]$.
Dim: 216. Bounds: $P_i^{min} \leq P_{i,t} \leq P_i^{max}$.

Problem: Static Economic Load Dispatch (instances 1-5)
Formula: $\min_{P_1, \ldots, P_{N_G}} F = \sum_{i=1}^{N_G} f_i(P_i)$ with $f_i(P_i) = a_i P_i^2 + b_i P_i + c_i$, or, with valve-point effects, $f_i(P_i) = a_i P_i^2 + b_i P_i + c_i + \left| e_i \sin\left( f_i \left( P_i^{min} - P_i \right) \right) \right|$, subject to $P_i^{min} \leq P_i \leq P_i^{max}$, the power balance $\sum_{i=1}^{N_G} P_i = P_D + P_L$ with losses $P_L = \sum_{i=1}^{N_G} \sum_{j=1}^{N_G} P_i B_{ij} P_j + \sum_{i=1}^{N_G} B_{0i} P_i + B_{00}$, and the ramp limits $P_i - P_i^0 \leq UR_i$, $P_i^0 - P_i \leq DR_i$.
Dim: 6, 13, 15, 40 and 140 for instances 1-5, respectively. Coefficient data: see the CEC2011 Technical Report.
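To make these formulations concrete, the following minimal Python sketch (assuming NumPy) evaluates the first two objectives above. The sampling index $n = 1, \ldots, N$ in fm_objective and all helper names are our own illustrative choices, not the CEC2011 reference implementation:

```python
import numpy as np

def fm_objective(x, y_target, N=100):
    """FM sound-wave matching: x = (x0, ..., x5) in [-6.4, 6.35]^6.
    Mean squared error between the synthesized and target waves."""
    n = np.arange(1, N + 1)
    y = x[0] * np.sin(x[1] * n + x[2] * np.sin(x[3] * n + x[4] * np.sin(x[5] * n)))
    return float(np.mean((y - y_target) ** 2))

def lennard_jones(x):
    """Lennard-Jones potential: x holds 3 coordinates per atom;
    f = 4 * sum_{i<j} (r_ij^-12 - r_ij^-6)."""
    pts = np.asarray(x).reshape(-1, 3)
    total = 0.0
    for i in range(len(pts) - 1):
        for j in range(i + 1, len(pts)):
            r = np.linalg.norm(pts[i] - pts[j])
            total += 4.0 * (r ** -12 - r ** -6)
    return total

# Illustrative usage with random inputs:
rng = np.random.default_rng(0)
y_t = rng.uniform(-1.0, 1.0, 100)
print(fm_objective(rng.uniform(-6.4, 6.35, 6), y_t))
print(lennard_jones(rng.uniform(0.5, 4.0, 9)))  # 3 atoms
```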
Table 5. Algorithm comparison based on best and mean values after 1.5 × 10⁵ FEs (each cell: Best, Mean ± SD).
Problem | CLPSO | SaDE | jDE | CMA-ES | SIOA
Parameter Estimation for Frequency-Modulated Sound Waves | 0.1314, 0.2124 ± 0.0302 | 0.1899, 0.2025 ± 0.0092 | 0.1161, 0.1460 ± 0.0350 | 0.1816, 0.2568 ± 0.0447 | 0.2061, 0.2599 ± 0.0230
Lennard-Jones Potential | -13.4364, -10.2507 ± 1.0290 | -24.8687, -22.6693 ± 1.1272 | -29.9812, -27.4925 ± 1.2350 | -28.4225, -25.7878 ± 2.2711 | -28.5113, -24.1461 ± 2.4893
Bifunctional Catalyst Blend Optimal Control | -0.0002, -0.0002 ± 1.1577 × 10⁻¹⁶ | -0.0002, -0.0002 ± 5.5136 × 10⁻²⁰ | -0.0002, -0.0002 ± 5.5136 × 10⁻²⁰ | -0.0002, -0.0002 ± 5.5136 × 10⁻²⁰ | -0.0002, -0.0002 ± 9.1776 × 10⁻¹¹
Optimal Control of a Non-Linear Stirred Tank Reactor | 0.3903, 0.3903767228 ± 0 | 0.3903, 0.3903 ± 0 | 0.3903, 0.3903 ± 0 | 0.3903, 0.3903 ± 0 | 0.3903, 0.3903 ± 0
Tersoff Potential for model Si (B) | -28.2354, -26.1883 ± 1.0565 | -3.1077, 25.4711 ± 16.7202 | -13.5115, -3.9836 ± 6.6660 | -29.2624, -27.5889 ± 1.0406 | -28.6359, -27.1151 ± 1.0847
Tersoff Potential for model Si (C) | -30.8520, -28.8734 ± 0.9880 | -11.6071, 22.0896 ± 18.5809 | -18.7621, -8.5060 ± 5.5431 | -33.1969, -31.7927 ± 0.8281 | -33.5041, -31.0138 ± 1.4206
Spread Spectrum Radar Polyphase Code Design | 1.0853, 1.34395 ± 0.1487 | 1.5365, 2.1508 ± 0.1986 | 1.5258, 1.8120 ± 0.1712 | 0.0148, 0.1719 ± 0.1378 | 0.6071, 1.0234 ± 0.2286
Transmission Network Expansion Planning | 250, 250 ± 0 | 250, 250 ± 0 | 250, 250 ± 0 | 250, 250 ± 0 | 250, 250 ± 0
Electricity Transmission Pricing | 13,775, 13,775,395 ± 222.9723 | 23,481, 30,034,934 ± 3,264,767 | 13,774,627, 14,020,953 ± 276,142 | 13,775,841, 13,787,550 ± 6136 | 13,774,551, 13,775,341 ± 372
Circular Antenna Array Design | 0.0069, 0.0518 ± 0.0706 | 0.0214, 0.0389 ± 0.0081 | 0.0068, 0.01765 ± 0.0223 | 0.0072, 0.0086 ± 0.0009 | 0.0074, 0.0249 ± 0.0443
Dynamic Economic Dispatch 1 | 428,607,927, 435,250,914 ± 2,973,190 | 968,042,312, 1,034,679,775 ± 25,667,290 | 968,042,312, 1,034,393,036 ± 25,445,935 | 88,285, 102,776 ± 6688 | 921,434,356, 984,699,299 ± 23,606,727
Dynamic Economic Dispatch 2 | 33,031,590, 53,906,147 ± 8,492,239 | 845,287,898, 913,715,793 ± 30,667,287 | 340,091,475, 397,471,715 ± 37,259,947 | 502,699, 477,720 ± 193,951 | 768,167,675, 768,167,675 ± 768,167,675
Static Economic Load Dispatch 1 | 6554, 7668 ± 1245 | 16,877, 101,588 ± 81,105 | 6163, 6778 ± 3004 | 6657, 415,917 ± 688,544 | 6538, 877,097 ± 847,631
Static Economic Load Dispatch 2 | 19,030, 20,699 ± 2922 | 2,600,565, 9,329,466 ± 4,019,053 | 1,161,578, 3,671,587 ± 1,542,286 | 763,001, 1,425,815 ± 377,126 | 24,026, 1,478,534 ± 1,063,608
Static Economic Load Dispatch 3 | 470,192,288, 470,294,703 ± 57,822 | 478,069,615, 541,898,763 ± 20,128,777 | 471,058,115, 471,963,142 ± 529,633 | 470,023,232, 470,023,232 ± 1.8487 × 10⁻⁷ | 470,825,156, 472,256,736 ± 608,886
Static Economic Load Dispatch 4 | 884,980, 1,423,887 ± 285,794 | 14,170,362, 106,749,078 ± 73,147,979 | 6,482,592, 17,527,314 ± 5,306,489 | 476,053, 2,925,852 ± 1,268,161 | 70,686, 580,122 ± 340,707
Static Economic Load Dispatch 5 | 8,105,947,615, 8,110,924,071 ± 4,422,895 | 1.3127 × 10¹⁰, 13,543,754,650 ± 213,865,059 | 8,453,090, 8,459,337,082 ± 2,874,979 | 8,072,077, 8,084,017,791 ± 4,623,617 | 8,002,077,963, 8,048,300 ± 4,365,204
Table 6. Detailed ranking of algorithms based on best and mean values after 1.5 × 10⁵ FEs.
Problem | CLPSO Best | CLPSO Mean | SaDE Best | SaDE Mean | jDE Best | jDE Mean | CMA-ES Best | CMA-ES Mean | SIOA Best | SIOA Mean
Parameter Estimation for Frequency-Modulated Sound Waves | 2 | 3 | 4 | 2 | 1 | 1 | 3 | 4 | 5 | 5
Lennard-Jones Potential | 5 | 5 | 4 | 4 | 1 | 1 | 3 | 2 | 2 | 3
Bifunctional Catalyst Blend Optimal Control | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Optimal Control of a Non-Linear Stirred Tank Reactor | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Tersoff Potential for model Si (B) | 3 | 3 | 5 | 5 | 4 | 4 | 1 | 1 | 2 | 2
Tersoff Potential for model Si (C) | 3 | 3 | 5 | 5 | 4 | 4 | 2 | 1 | 1 | 2
Spread Spectrum Radar Polyphase Code Design | 3 | 3 | 5 | 5 | 4 | 4 | 1 | 1 | 2 | 2
Transmission Network Expansion Planning | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Electricity Transmission Pricing | 3 | 1 | 5 | 5 | 2 | 4 | 4 | 3 | 1 | 2
Circular Antenna Array Design | 2 | 5 | 5 | 4 | 1 | 2 | 3 | 1 | 4 | 3
Dynamic Economic Dispatch 1 | 2 | 2 | 5 | 5 | 4 | 4 | 1 | 1 | 3 | 3
Dynamic Economic Dispatch 2 | 2 | 2 | 5 | 5 | 3 | 3 | 1 | 1 | 4 | 4
Static Economic Load Dispatch 1 | 3 | 2 | 5 | 5 | 1 | 1 | 4 | 3 | 2 | 4
Static Economic Load Dispatch 2 | 1 | 1 | 5 | 5 | 4 | 4 | 3 | 2 | 2 | 3
Static Economic Load Dispatch 3 | 2 | 2 | 5 | 5 | 4 | 3 | 1 | 1 | 3 | 4
Static Economic Load Dispatch 4 | 3 | 2 | 5 | 5 | 4 | 4 | 2 | 3 | 1 | 1
Static Economic Load Dispatch 5 | 3 | 3 | 5 | 5 | 4 | 4 | 2 | 2 | 1 | 1
TOTAL | 40 | 40 | 71 | 68 | 44 | 46 | 34 | 29 | 36 | 42
(The per-problem ranks can be reproduced from Table 5 as sketched below.)
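A minimal sketch of the ranking step, assuming dense ranking in which tied methods share the same rank (consistent with the all-ones rows for the problems solved identically by every method):

```python
def dense_ranks(values, smaller_is_better=True):
    # Rank 1 = best; tied values share a rank, so an all-tie row
    # becomes [1, 1, 1, 1, 1] as in Table 6.
    order = sorted(set(values), reverse=not smaller_is_better)
    return [order.index(v) + 1 for v in values]

# Best values on the Circular Antenna Array problem from Table 5,
# in the order CLPSO, SaDE, jDE, CMA-ES, SIOA:
best = [0.0069, 0.0214, 0.0068, 0.0072, 0.0074]
print(dense_ranks(best))  # -> [2, 5, 1, 3, 4], matching Table 6
```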
Table 7. Comparison of algorithms and final ranking.
Method | Best | Mean | Overall | Average | Rank
CMA-ES | 34 | 29 | 63 | 1.75 | 1
SIOA | 36 | 42 | 78 | 2.16 | 2
CLPSO | 40 | 40 | 80 | 2.22 | 3
jDE | 44 | 46 | 90 | 2.5 | 4
SaDE | 71 | 68 | 139 | 3.86 | 5
Table 8. Balance between exploration and exploitation of the SIOA method in each benchmark function after 1.5 × 10⁵ FEs.
Problem | Best | Mean | SD | IPD | FDP | AER | MER | ABI
Parameter Estimation for Frequency-Modulated Sound Waves | 0.20618586 | 0.259930863 | 0.023021357 | 8.5901 | 4.16015 | 4.16015 | 0 | 0.49979
Lennard-Jones Potential | -28.51132554 | -24.14612379 | 2.489334694 | 13.9182 | 34.466 | 0.00021 | 0 | 0.49967
Bifunctional Catalyst Blend Optimal Control | -0.000286591 | -0.000286591 | 9.177681044 × 10⁻¹¹ | 0.0743 | 0.08161 | 0.00007 | 0 | 0.50005
Optimal Control of a Non-Linear Stirred Tank Reactor | 0.390376723 | 0.390376723 | 0 | 49,184,124.11 | 0.00019 | 17,134,746.47 | 0 | 0.49745
Tersoff Potential for model Si (B) | -28.63594613 | -27.11517851 | 1.084722973 | 5.52126 | 1.71041 | 0.0002 | 0 | 0.49968
Tersoff Potential for model Si (C) | -33.50417851 | -31.0138182 | 1.420690601 | 5.52126 | 2.74916 | 0.00017 | 0 | 0.49971
Spread Spectrum Radar Polyphase Code Design | 0.607180067 | 1.023498006 | 0.228610721 | 8.06994 | 5.64058 | 0.00008 | 0 | 0.49988
Transmission Network Expansion Planning | 250.00 | 250.00 | 0 | 0.96619 | 0.92498 | 0.00001 | 0 | 0.5
Electricity Transmission Pricing | 13,774,551.1 | 13,775,341.62 | 372.243354 | 86.50993 | 0.06077 | 0.00845 | 0.00078 | 0.49851
Circular Antenna Array Design | 0.007425975 | 0.024989563 | 0.044360116 | 245.62332 | 26.21386 | 0.00052 | 0 | 0.49938
Dynamic Economic Dispatch 1 | 921,434,356.7 | 984,699,299.8 | 23,606,727.9 | 2530.86265 | 0.06496 | 0.54586 | 0.0025 | 0.49827
Dynamic Economic Dispatch 2 | 768,167,675.2 | 768,167,675.2 | 768,167,675.2 | 890.76948 | 0.08559 | 0.68792 | 0.00216 | 0.49823
Static Economic Load Dispatch 1 | 6538.455462 | 877,097.0217 | 847,631.4535 | 141.01729 | 149.57653 | 0.00001 | 0 | 0.50002
Static Economic Load Dispatch 2 | 24,026.88184 | 1,478,534.024 | 1,063,608.234 | 238.24613 | 207.89492 | 0.00008 | 0 | 0.49982
Static Economic Load Dispatch 3 | 470,825,156.9 | 472,256,736.5 | 608,886.2622 | 18.59546 | 25.75625 | 0.00082 | 0 | 0.49941
Static Economic Load Dispatch 4 | 70,686.26733 | 580,122.834 | 340,707.3484 | 410.06721 | 3.95079 | 0.01195 | 0 | 0.49942
Static Economic Load Dispatch 5 | 1.241487588 × 10¹⁰ | 1.284582323 × 10¹⁰ | 197,599,745.4 | 750.05361 | 0.06971 | 0.71341 | 0.00243 | 0.49825
Table 9. Sensitivity analysis of the method parameters for the Potential problem (dimension 10).
Parameter | Value | Mean | Min | Max | Iters | Mean Range
c1 | 0.1 | -16.36153 | -23.37956 | -11.01324 | 150 | 2.42989
c1 | 0.3 | -15.99563 | -20.23252 | -10.85775 | 150 |
c1 | 0.5 | -15.31793 | -21.19271 | -10.81508 | 150 |
c1 | 0.7 | -14.64094 | -20.05168 | -10.78772 | 150 |
c1 | 0.9 | -13.93164 | -19.82466 | -10.51742 | 150 |
c2 | 0.1 | -15.64469 | -18.92493 | -12.61004 | 150 | 3.03774
c2 | 0.3 | -16.10363 | -23.37956 | -10.99769 | 150 |
c2 | 0.5 | -15.43998 | -20.58774 | -11.0502 | 150 |
c2 | 0.7 | -14.64094 | -16.71041 | -10.51742 | 150 |
c2 | 0.9 | -13.93164 | -20.23252 | -11.37413 | 150 |
Table 10. Sensitivity analysis of the method parameters for the Rastrigin problem (dimension 4).
Parameter | Value | Mean | Min | Max | Iters | Mean Range
c1 | 0.1 | 2.13634 | 0 | 11.10018 | 150 | 0.88378
c1 | 0.3 | 1.32079 | 0 | 8.30785 | 150 |
c1 | 0.5 | 1.52523 | 0 | 8.1495 | 150 |
c1 | 0.7 | 1.25256 | 0 | 7.10786 | 150 |
c1 | 0.9 | 1.74395 | 0 | 6.95643 | 150 |
c2 | 0.1 | 3.25941 | 0 | 11.10018 | 150 | 2.44349
c2 | 0.3 | 1.51291 | 0 | 6.55699 | 150 |
c2 | 0.5 | 0.81592 | 0 | 5.19549 | 150 |
c2 | 0.7 | 1.34782 | 0 | 5.19957 | 150 |
c2 | 0.9 | 1.04281 | 0 | 6.60519 | 150 |
Table 11. Sensitivity analysis of the method parameters for the Test2n problem (dimension 4).
Parameter | Value | Mean | Min | Max | Iters | Mean Range
c1 | 0.1 | -146.95432 | -156.66451 | -128.37343 | 150 | 2.81625
c1 | 0.3 | -146.19977 | -156.66454 | -128.38355 | 150 |
c1 | 0.5 | -146.38681 | -156.66442 | -114.25247 | 150 |
c1 | 0.7 | -147.60809 | -156.6641 | -114.25223 | 150 |
c1 | 0.9 | -149.01602 | -156.66437 | -114.24072 | 150 |
c2 | 0.1 | -152.40955 | -156.66454 | -128.39005 | 150 | 8.94298
c2 | 0.3 | -149.87331 | -156.66447 | -128.38459 | 150 |
c2 | 0.5 | -146.48201 | -156.66451 | -114.25223 | 150 |
c2 | 0.7 | -143.46657 | -156.66437 | -128.37376 | 150 |
c2 | 0.9 | -143.93359 | -156.664 | -114.24072 | 150 |
Table 12. Sensitivity analysis of the method parameters for the Rosenbrock problem (dimension 4).
Parameter | Value | Mean | Min | Max | Iters | Mean Range
c1 | 0.1 | 35.1061 | 0 | 1354.34838 | 150 | 30.29039
c1 | 0.3 | 14.66004 | 0 | 1038.60285 | 150 |
c1 | 0.5 | 13.3723 | 0 | 593.67884 | 150 |
c1 | 0.7 | 9.95311 | 0 | 524.37725 | 150 |
c1 | 0.9 | 4.81572 | 0 | 314.03933 | 150 |
c2 | 0.1 | 18.00132 | 0 | 1354.34838 | 150 | 20.91708
c2 | 0.3 | 4.43389 | 0 | 235.09807 | 150 |
c2 | 0.5 | 11.78364 | 0 | 593.67884 | 150 |
c2 | 0.7 | 18.33745 | 0 | 400.1598 | 150 |
c2 | 0.9 | 25.35097 | 0 | 1244.51484 | 150 |
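The sweeps in Tables 9-12 share a common protocol: fix one parameter, vary the other over {0.1, 0.3, 0.5, 0.7, 0.9}, repeat the run, and record the mean, minimum and maximum of the best values, together with the range of the means ("Mean Range"). A hypothetical harness illustrating this protocol; run_sioa is a placeholder stand-in, not the paper's implementation:

```python
import random
import statistics

def run_sioa(objective, dim, c1=0.5, c2=0.5, iters=150, seed=0):
    # Placeholder for the SIOA optimizer with c1/c2 exposed as arguments;
    # returns a fake "best value" so the harness is runnable.
    random.seed(seed)
    return random.uniform(-20.0, -10.0)

def sweep(objective, dim, param_name, values, repeats=30):
    rows = []
    for v in values:
        best = [run_sioa(objective, dim, **{param_name: v}, seed=s)
                for s in range(repeats)]
        rows.append((v, statistics.mean(best), min(best), max(best)))
    means = [r[1] for r in rows]
    return rows, max(means) - min(means)  # the "Mean Range" column

rows, rng = sweep(None, 10, "c1", [0.1, 0.3, 0.5, 0.7, 0.9])
for v, m, lo, hi in rows:
    print(f"c1={v}: mean={m:.5f} min={lo:.5f} max={hi:.5f}")
print(f"Mean range: {rng:.5f}")
```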