3.2. Experimental Results
A series of experiments was conducted for the aforementioned functions, executed on a Linux machine with 128 GB of RAM. Each experiment was repeated 30 times with a different random seed each time, and the averages were recorded. The software used in the experiments was coded in ANSI C++ using the freely available GLOBALOPTIMUS optimization environment, which can be downloaded from
https://github.com/itsoulos/GlobalOptimus (accessed on 10 April 2025). The values of the experimental parameters for the proposed method are presented in
Table 1.
In the following tables of experimental results, the numbers in the cells represent the average number of objective function calls, measured over 30 independent runs. The numbers in parentheses indicate the percentage of runs in which the method identified the global minimum; if this number is absent, the method located the global minimum in every run (100% success rate).
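To make these conventions concrete, the following is a minimal sketch (not the GLOBALOPTIMUS code) of how each table cell can be computed from 30 run records; the tolerance used to declare a run successful is an assumption, since the acceptance criterion is not stated here.

```cpp
#include <cstdio>
#include <cmath>
#include <vector>

struct RunResult {
    long functionCalls;   // objective-function evaluations used in this run
    double bestValue;     // best objective value found in this run
};

int main() {
    const double globalMin = 0.0;   // known minimum of the benchmark (assumption)
    const double tolerance = 1e-4;  // acceptance tolerance (assumption)
    std::vector<RunResult> runs = { {9080, 0.0}, {9120, 0.0}, {8950, 1.2} }; // toy data

    long totalCalls = 0;
    int successes = 0;
    for (const RunResult &r : runs) {
        totalCalls += r.functionCalls;
        if (std::fabs(r.bestValue - globalMin) <= tolerance) ++successes;
    }
    double meanCalls = static_cast<double>(totalCalls) / runs.size();
    double successRate = 100.0 * successes / runs.size();
    std::printf("mean calls = %.0f, success rate = %.0f%%\n", meanCalls, successRate);
    return 0;
}
```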
Table 3 presents the experimental results of the SAO method, evaluating its performance across various functions with different numbers of subpopulations while keeping the total population size constant. The measurements are expressed in terms of objective function calls, where lower values indicate reduced computational cost and, consequently, higher efficiency. The values in parentheses denote the success rate of the method in finding the global minimum. The experimental measurements were carried out according to the parameter values listed in
Table 1, but without allowing propagation between subpopulations. For the ACKLEY function, significant improvement is observed as the number of subpopulations increases, with function calls decreasing from 9080 for one subpopulation to 7863 for 20 subpopulations. This improvement is most notable between 10 and 20 subpopulations, suggesting that the method benefits from parallelization. The BF1 function, on the other hand, shows minimal change, with function calls remaining nearly constant, from 7936 to 7779, indicating that this function does not benefit significantly from parallelization. Similarly, the BF2 function demonstrates a smaller but noticeable decrease from 7411 to 7237 calls. The CAMEL function shows a steady downward trend, with calls decreasing from 5554 to 5276, while the CM function exhibits a reduction from 4141 to 4061, with the most significant drop observed between one and two subpopulations. For the DIFFPOWER functions, which are analyzed in versions of varying complexity, there is a systematic reduction in calls as the number of subpopulations increases. In DIFFPOWER2, calls decrease from 11,928 to 11,239, while in DIFFPOWER10, a reduction from 40,094 to 38,284 is noted, demonstrating that the method performs effectively even in complex search landscapes. Conversely, for the BRANIN function, only a slight reduction is observed, from 5077 to 5003 calls, indicating limited improvement. The GRIEWANK functions, however, show more pronounced improvements. GRIEWANK2 decreases from 8511 to 6394 calls, highlighting significant adaptability to parallelization, whereas GRIEWANK10 shows a more modest reduction. The RASTRIGIN function exhibits exceptional performance, with calls decreasing from 4505 to 3653 while maintaining a 97% success rate. This behavior underscores the method’s strength in handling highly challenging search landscapes. Similar behavior is observed for the SINUSOIDAL16 function, where calls drop from 8529 to 7235, with the success rate remaining consistently high. Conversely, for functions like BRANIN and HANSEN, the improvements are more limited, suggesting that the method may not be ideally suited for these problems. Overall, the total number of calls decreases from 398,004 for one subpopulation to 371,910 for 20 subpopulations. This overall reduction is significant, while the success rate remains consistently high, ranging between 95.3% and 95.9%. This indicates that parallelization in the SAO method not only reduces computational cost but also maintains its reliability in finding optimal solutions. However, performance varies across functions, highlighting that the effectiveness of parallelization is influenced by the nature of each function.
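Since Table 3 concerns subpopulations evolving with no exchange of information, the setup can be summarized by the following hedged sketch: the total population of m agents is split evenly across k independent subpopulations that run in parallel. The saoIteration() function is a placeholder for one step of the SAO optimizer, not the paper's implementation; compile with -fopenmp to enable the parallel loop.

```cpp
#include <cstdio>
#include <vector>

// Placeholder for one SAO iteration on a subpopulation; returns the
// best objective value found so far (hypothetical interface).
double saoIteration(std::vector<double> &subpop) {
    // ...sampling, adaptation, and local steps would go here...
    return subpop.empty() ? 0.0 : subpop[0];
}

int main() {
    const int m = 500;       // total population size (value from the parameter list below)
    const int k = 10;        // number of subpopulations
    const int maxIters = 200;

    std::vector<std::vector<double>> subpops(k, std::vector<double>(m / k));

    for (int it = 0; it < maxIters; ++it) {
        // Each subpopulation evolves independently in this experiment:
        // no propagation means no synchronization between clusters.
        #pragma omp parallel for
        for (int c = 0; c < k; ++c)
            saoIteration(subpops[c]);
    }
    std::printf("ran %d iterations on %d subpopulations of %d agents\n",
                maxIters, k, m / k);
    return 0;
}
```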
Some test functions, such as F13 and GKLS350, show markedly lower success rates (3% and 67%, respectively). The F13 function is known for having numerous local minima and narrow convergence regions, making the discovery of the global minimum particularly challenging even for advanced optimization methods. Similarly, GKLS350 belongs to a family of functions deliberately designed with “traps” that test an algorithm’s ability to avoid premature convergence to suboptimal solutions. The fact that parallel SAO achieves even partial success on these functions (e.g., 67% for GKLS350) is significant, given that many other methods fail completely on similar tests. The focus is therefore placed on the overall improvement in the algorithm’s performance rather than on isolated failures, which can be attributed to the inherent difficulty of these specific benchmark problems.
In Figure 3, the statistical comparison across different numbers of subpopulations, based on the critical parameter p (the level of statistical significance), indicates significant differences between the groups. Specifically, the comparison between one subpopulation (1 Cluster) and two subpopulations (2 Clusters) yielded a p-value of 0.037, below the conventional significance threshold of 0.05, demonstrating a statistically significant difference. The differences intensify as the number of subpopulations increases: for 5 subpopulations the p-value is 0.00023, for 10 subpopulations p = 0.0024, and for 20 subpopulations p = 2.3. This progressive reduction in p-values indicates that increasing the number of subpopulations correlates with increasing statistical significance, suggesting stronger differences between configurations as more subpopulations are involved. These findings support the idea that using multiple subpopulations significantly impacts the results, with the effect becoming more pronounced as their number grows. This could indicate that parallel processing, or the management of more subpopulations, enhances performance or substantially alters the behavior of the method under study.
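The section does not state which statistical test produced these p-values. A common choice for paired comparisons of call counts across a benchmark set is the Wilcoxon signed-rank test; the sketch below implements it with the usual normal approximation, purely as an illustration of how such figures can be obtained (the toy data are a few of the one- vs. twenty-subpopulation counts quoted above). The actual test used in the paper may differ.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Two-sided Wilcoxon signed-rank test with a normal approximation.
double wilcoxonP(const std::vector<double> &x, const std::vector<double> &y) {
    std::vector<double> d;
    for (size_t i = 0; i < x.size() && i < y.size(); ++i)
        if (x[i] != y[i]) d.push_back(x[i] - y[i]);   // zero differences are dropped
    const size_t n = d.size();
    if (n == 0) return 1.0;

    // Sort indices by |difference| and assign average ranks to ties.
    std::vector<size_t> idx(n);
    for (size_t i = 0; i < n; ++i) idx[i] = i;
    std::sort(idx.begin(), idx.end(),
              [&](size_t a, size_t b) { return std::fabs(d[a]) < std::fabs(d[b]); });
    std::vector<double> rank(n);
    for (size_t i = 0; i < n;) {
        size_t j = i;
        while (j + 1 < n && std::fabs(d[idx[j + 1]]) == std::fabs(d[idx[i]])) ++j;
        const double avg = 0.5 * (i + j) + 1.0;        // mean of ranks i+1 .. j+1
        for (size_t k = i; k <= j; ++k) rank[idx[k]] = avg;
        i = j + 1;
    }

    double wPlus = 0.0;                                 // rank sum of positive differences
    for (size_t i = 0; i < n; ++i)
        if (d[i] > 0.0) wPlus += rank[i];

    const double mean = n * (n + 1) / 4.0;
    const double sd = std::sqrt(n * (n + 1) * (2.0 * n + 1.0) / 24.0);
    const double z = (wPlus - mean) / sd;
    return std::erfc(std::fabs(z) / std::sqrt(2.0));    // two-sided p-value
}

int main() {
    // Toy data: function calls for five benchmarks under two settings.
    std::vector<double> calls1  = {9080, 7936, 7411, 5554, 4141};
    std::vector<double> calls20 = {7863, 7779, 7237, 5276, 4061};
    std::printf("p = %.4f\n", wilcoxonP(calls1, calls20));
    return 0;
}
```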
Table 4 presents the experimental results of the SAO method, evaluating its performance across different agent propagation strategies: 1to1, 1toN, Nto1, and NtoN, applied to a range of benchmark functions. The reported values represent the number of objective function evaluations required, with lower values indicating better computational efficiency. Values in parentheses denote the success rate in finding the global minimum. The experimental measurements were carried out according to the parameter values listed in
Table 1, with propagation between subpopulations enabled. For the ACKLEY function, the Nto1 strategy demonstrates the best performance, requiring 8495 evaluations, slightly outperforming the 1to1 strategy (8517) and significantly outperforming the NtoN strategy (11,242). The 1toN approach exhibits intermediate performance with 9106 evaluations. This indicates that the computational efficiency of the method varies significantly depending on the propagation strategy. The BF1 function shows an inverse trend, with the NtoN strategy requiring the fewest evaluations (7361), representing a substantial improvement compared to the other strategies, which hover around 7900 evaluations. Similarly, for the BF2 function, the Nto1 strategy shows superior performance with only 5477 evaluations, followed by NtoN at 6733, while 1to1 and 1toN remain close to 7300 evaluations. The DIFFPOWER functions reveal significant differences among strategies. For DIFFPOWER2, the NtoN strategy dramatically reduces the number of evaluations to 7633, compared to values exceeding 11,600 for the other methods. For more complex cases, such as DIFFPOWER10, the NtoN strategy exhibits even greater efficiency, reducing evaluations to 19,706 compared to over 42,000 for the 1to1 strategy. This underscores the robustness of the NtoN strategy for high-complexity problems. The ELP functions further highlight the advantage of the NtoN strategy, especially in higher dimensions. For ELP10, evaluations decrease from approximately 5800 to 5080 with NtoN. The performance gap widens with increased complexity, as ELP30 requires only 7198 evaluations under the NtoN strategy, compared to over 11,400 for the 1to1 and 1toN strategies. The RASTRIGIN function shows a slight advantage for the Nto1 strategy, with 3737 evaluations and a 97% success rate; the NtoN strategy requires more evaluations (5224) and achieves a lower but still high success rate (83%), making Nto1 the marginally more efficient and reliable choice for this function. The results for the ROSENBROCK functions are particularly impressive, with NtoN outperforming all other strategies by a wide margin. For ROSENBROCK8, evaluations decrease to 7922 with NtoN, compared to over 11,800 for other approaches. Similarly, for ROSENBROCK16, the reduction is even more pronounced, with evaluations dropping to 9300 for NtoN, highlighting its superiority. For the HARTMAN and SHEKEL functions, the NtoN strategy consistently requires fewer evaluations than the alternatives. For HARTMAN6, evaluations drop to 3144 with NtoN, compared to over 4200 for the other strategies. Similar trends are observed for the SHEKEL functions; SHEKEL10, for instance, shows a reduction to 5088 evaluations for NtoN, compared to over 6200 for the 1to1 strategy. In contrast, the GKLS250 and GKLS350 functions show mixed results, with marginal differences among strategies. Nonetheless, NtoN remains competitive, especially on GKLS350, where success rates remain relatively stable. The final row of the table provides an aggregate view of the results, confirming the superiority of the NtoN strategy in terms of computational efficiency. The total number of evaluations for NtoN is 295,761, a significant reduction compared to 387,465 for 1to1, 382,127 for 1toN, and 376,633 for Nto1. Success rates remain competitive, with the NtoN strategy achieving strong performance across the various functions.
Overall, the analysis highlights the effectiveness of the NtoN strategy in reducing computational cost while maintaining high success rates across a wide range of benchmark functions. While some functions exhibit marginal differences among strategies, NtoN consistently proves to be the most reliable and efficient approach within the parallel SAO framework.
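To make the four schemes concrete, the sketch below gives one plausible reading of them as migrations of best agents between subpopulations. This interpretation of the scheme names is an assumption, not the paper's implementation, and for brevity a single agent is migrated per event, whereas the experiments reported later propagate five.

```cpp
#include <random>
#include <string>
#include <vector>

struct Subpop {
    std::vector<std::vector<double>> agents; // candidate solutions
    std::vector<double> fitness;             // objective value per agent
};

// Overwrite the worst agent of dst with the best agent of src.
void migrateBest(const Subpop &src, Subpop &dst) {
    size_t best = 0, worst = 0;
    for (size_t i = 1; i < src.fitness.size(); ++i)
        if (src.fitness[i] < src.fitness[best]) best = i;
    for (size_t i = 1; i < dst.fitness.size(); ++i)
        if (dst.fitness[i] > dst.fitness[worst]) worst = i;
    dst.agents[worst] = src.agents[best];
    dst.fitness[worst] = src.fitness[best];
}

// One propagation event among the subpopulations in pops.
void propagate(std::vector<Subpop> &pops, const std::string &scheme,
               std::mt19937 &rng) {
    std::uniform_int_distribution<size_t> pick(0, pops.size() - 1);
    if (scheme == "1to1") {            // one random sender, one random receiver
        size_t s = pick(rng), r = pick(rng);
        if (s != r) migrateBest(pops[s], pops[r]);
    } else if (scheme == "1toN") {     // one random sender feeds all others
        size_t s = pick(rng);
        for (size_t r = 0; r < pops.size(); ++r)
            if (r != s) migrateBest(pops[s], pops[r]);
    } else if (scheme == "Nto1") {     // every sender feeds one random receiver
        size_t r = pick(rng);
        for (size_t s = 0; s < pops.size(); ++s)
            if (s != r) migrateBest(pops[s], pops[r]);
    } else if (scheme == "NtoN") {     // all-to-all exchange
        for (size_t s = 0; s < pops.size(); ++s)
            for (size_t r = 0; r < pops.size(); ++r)
                if (s != r) migrateBest(pops[s], pops[r]);
    }
}

int main() {
    std::mt19937 rng(42);
    std::vector<Subpop> pops(10, Subpop{{{0.0}, {1.0}}, {3.0, 5.0}});
    propagate(pops, "NtoN", rng);
    return 0;
}
```

Under this reading, NtoN performs k(k-1) migrations per event for k subpopulations, which would explain both its stronger mixing of information and the coordination cost discussed later for the runtime tables.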
In Figure 4, the statistical comparison of information-propagation strategies among subpopulations, based on the critical parameter p (the level of statistical significance), reveals significant differences in the effectiveness of the methods. The comparison between the 1to1 and 1toN strategies yielded p = 0.36, a value exceeding the conventional significance threshold of 0.05, suggesting that these two approaches do not differ statistically significantly. The remaining comparisons, however, show marked differences. The comparisons of 1to1 vs. Nto1 (p = 0.00086) and 1to1 vs. NtoN (p = 7.3) demonstrate very high statistical significance, meaning that the Nto1 and NtoN strategies differ dramatically from 1to1. The comparisons of 1toN vs. Nto1 (p = 7.3) and 1toN vs. NtoN (p = 2.6) confirm that strategies involving multiple subpopulations (Nto1, NtoN) are statistically superior to 1toN. Finally, the comparison of Nto1 vs. NtoN (p = 2.1) indicates that even between these two strategies there is a significant difference, with NtoN standing out as the most distinct method.
The experimental results presented in
Table 5 illustrate the performance of the SAO method in parallel optimization scenarios, focusing on execution times (in seconds) for four propagation strategies: 1to1, 1toN, Nto1, and NtoN. The results reflect the average times taken per iteration of the algorithm across all subpopulations. The first row of the table categorizes different propagation strategies and provides their respective execution times. The last row aggregates these times to provide totals and averages, enabling a comprehensive comparison of the computational efficiency of each method. The experimental measurements were carried out according to the parameter values listed in
Table 1, with propagation between subpopulations enabled. Analyzing the table, it is evident that the NtoN strategy achieves the lowest execution times for the majority of test functions. For the ACKLEY function, the NtoN time of 0.26 s is slightly higher than that of the 1to1 and Nto1 strategies (0.2 s each) but still competitive; as the complexity of the functions increases, however, the advantage of the NtoN strategy becomes more pronounced. For the DIFFPOWER10 function, the NtoN strategy records an execution time of 3.48 s, significantly lower than the 7.33 s required by the 1to1 strategy and the 6.7 s for the 1toN approach. For simpler functions such as CAMEL and BRANIN, all strategies perform similarly, with execution times stabilizing around 0.17 s. However, for more computationally demanding functions like ELP30, the NtoN strategy outperforms the others, requiring only 1.67 s compared to over 2.4 s for the other approaches. Similarly, for POTENTIAL10, the NtoN strategy achieves a time of 4.02 s, while the 1to1 and 1toN strategies exceed 6.5 s. Another noteworthy observation is the performance of the NtoN strategy on high-dimensional or complex functions. For example, in ROSENBROCK16, the execution time of the NtoN strategy is 0.96 s, significantly lower than the approximately 1.4 s required by the other methods. In SINUSOIDAL16, the NtoN strategy records 1.5 s, far outperforming the next best approach, Nto1, which requires 2.17 s. Similar trends are observed in SHEKEL10, where the NtoN strategy achieves 0.34 s compared to 0.41 s for the 1to1 and Nto1 strategies. The NtoN strategy’s effectiveness is further emphasized in the total times reported in the last row of the table. The total execution time for the NtoN strategy is 29.13 s, significantly lower than the totals for the other strategies: 40.29 s for 1to1, 39.51 s for 1toN, and 39.99 s for Nto1. This substantial reduction in overall execution time underscores the computational efficiency and scalability of the NtoN strategy in parallel optimization contexts. In conclusion, the analysis demonstrates that the NtoN strategy consistently outperforms the other propagation methods in terms of execution time, particularly for complex and high-dimensional functions. While the differences among strategies are negligible for simpler functions, the NtoN approach exhibits clear superiority in reducing computational overhead for more demanding scenarios. This makes it the most efficient and reliable strategy for parallel SAO.
In Figure 5, the statistical comparison of propagation strategies among subpopulations, based on the critical parameter p, reveals variations in the significance of the differences. The comparison between 1to1 and 1toN yielded p = 0.53, above the 0.05 significance threshold, suggesting no statistically significant difference between these strategies. Similarly, the comparisons of 1to1 vs. Nto1 (p = 0.097) and 1toN vs. Nto1 (p = 0.16) yield values above 0.05, close to the threshold but without confirming statistical significance. The comparison of 1to1 vs. NtoN, however, resulted in p = 0.04, below 0.05, indicating a statistically significant difference between these strategies. The comparison of Nto1 vs. NtoN (p = 0.05) lies exactly at the significance threshold, which could be interpreted as suggestive of a difference, while the comparison of 1toN vs. NtoN (p = 0.066) approaches the threshold without crossing it. These findings suggest that the NtoN (all-to-all) strategy differs significantly from 1to1, while differences between the other strategies are less pronounced or uncertain. This may imply that NtoN offers unique advantages over the other approaches, though further investigation is needed to confirm its behavior across diverse scenarios. The presence of values near the significance threshold (0.05–0.066) highlights the need for larger sample sizes or more sensitive analyses to clarify these borderline results.
Table 5 records the best function values achieved by each subpopulation under the different propagation strategies (1to1, 1toN, Nto1, and NtoN), with propagation occurring at every iteration of the algorithm. In addition, Table 6 presents the average number of function calls with 10 subpopulations, with propagation at every iteration and 5 agents propagated per event. The analysis begins with the 1to1 strategy, whose calls total 383,838 with a success rate of 95.7%. In many functions this strategy exhibits strong performance, such as in the ACKLEY function, where the value of 8667 is accompanied by high stability, and in the GRIEWANK10 function, with a value of 12,224, indicating the reliability of this strategy for more complex functions. However, for higher-dimensional functions like POTENTIAL10 and ROSENBROCK16, the values of 15,959 and 15,679, respectively, while acceptable, fall short compared to the other strategies. The 1toN strategy shows a slightly lower total than 1to1, with 366,274 calls and a success rate of 95%. In functions such as GKLS350, this strategy excels, achieving a value of 2623 and a success rate of 87%. However, on more demanding functions like DIFFPOWER10, the 1toN strategy records high values (35,729), indicating that its effectiveness diminishes in more complex environments. The Nto1 strategy demonstrates similar overall performance to 1toN, with a total of 366,348 calls and a success rate of 95.4%. This strategy shows remarkable results on certain functions, such as TEST2N5, where the value of 4764 is accompanied by a high success rate of 97%. Meanwhile, on the HARTMAN3 function the strategy achieves a lower value (3430), highlighting its adaptability to various search landscapes. The NtoN strategy shows the best overall performance, with a total of 259,119 calls and a success rate of 86.5%. Despite the lower success rate, this strategy stands out for its significantly reduced call counts on many functions. For instance, on the DIFFPOWER10 function the NtoN strategy achieves a value of 9662, notably lower than the other strategies. Similarly, on the POTENTIAL5 function the value of 2795 is particularly noteworthy. A similar trend is observed on the SINUSOIDAL16 function, where NtoN records 3365 calls with a success rate of 90%, a strong result given that this is the lowest call count among the strategies. The statistical overview of the overall results indicates that the NtoN strategy is the most effective at reducing function calls, particularly on complex or high-dimensional functions. The 1to1, 1toN, and Nto1 strategies exhibit slightly higher success rates but incur higher overall cost due to elevated call counts on most functions. The NtoN strategy proves ideal for scenarios where speed and optimization cost are critical, while maintaining competitive accuracy in finding the global minimum.
In Figure 6, the statistical comparison of subpopulation propagation strategies, based on the parameter p, reveals significant differences. The NtoN (all-to-all) strategy stands out as the most effective, with extremely low p-values in all comparisons (e.g., 5 against 1toN), suggesting statistical superiority. The 1to1 strategy differs significantly from Nto1 (p = 7.6) and NtoN (p = 5.2), while 1toN and Nto1 show no significant difference between them (p = 0.75). The NtoN strategy also outperforms Nto1 (p = 4.1).
Table 7 presents the execution times (in seconds) required to achieve convergence under the various propagation strategies between subpopulations (1to1, 1toN, Nto1, and NtoN), with propagation occurring at every iteration of the optimization algorithm. The 1to1 strategy exhibits a total runtime of 39.61 s, the highest among the evaluated strategies. Despite its slower performance, it maintains consistency across most functions. For example, the ACKLEY function has a runtime of 0.21 s, demonstrating efficiency in relatively simple landscapes. Similarly, in the GRIEWANK10 function, the runtime is 1.00 s, highlighting the strategy’s ability to manage complexity. However, for more demanding functions such as DIFFPOWER10 and POTENTIAL10, the strategy records significant runtimes of 6.85 and 6.56 s, respectively, indicating a higher computational cost for high-dimensional problems. The 1toN strategy achieves a total runtime of 36.70 s, slightly lower than that of the 1to1 strategy. This improvement is particularly evident in high-dimensional functions such as DIFFPOWER10, where the runtime decreases to 6.15 s, and POTENTIAL10, with a runtime of 5.67 s. However, in simpler functions like ELP10 and ELP20, the runtimes of 0.54 and 1.22 s, respectively, suggest only marginal gains. The 1toN strategy also exhibits strong performance on certain moderate-dimensional problems, such as ROSENBROCK16, where the runtime of 1.22 s reflects its efficiency. The Nto1 strategy records a total runtime of 37.63 s, falling between the 1to1 and 1toN strategies. Its runtime distribution is consistent across the various functions, with notable improvements in specific cases. For instance, in the SINUSOIDAL16 function, the runtime decreases to 1.94 s compared to 2.21 s for 1to1. Similarly, for the HARTMAN3 function, a runtime of 0.23 s highlights the strategy’s adaptability. However, on more computationally intensive functions such as DIFFPOWER10 and POTENTIAL10, the runtimes remain relatively high at 6.37 and 5.81 s, respectively, indicating room for optimization. The NtoN strategy stands out as the most time-efficient, with a total runtime of 23.75 s, significantly lower than all other strategies. This advantage is evident across nearly all functions, particularly in high-dimensional and complex scenarios. For example, in the DIFFPOWER10 function, the runtime is reduced to 1.76 s, representing a substantial improvement. Similarly, in the POTENTIAL10 function, the runtime of 2.28 s underscores the strategy’s superiority. The NtoN strategy also performs well in simpler cases, such as the ROSENBROCK8 function, where the runtime of 0.42 s demonstrates its ability to handle diverse landscapes effectively. In summary, the NtoN strategy emerges as the most efficient in terms of runtime, especially for high-dimensional or computationally intensive problems. While the 1to1, 1toN, and Nto1 strategies offer competitive runtimes for simpler functions, their overall performance lags behind the NtoN approach. The significant reduction in runtime achieved by the NtoN strategy highlights its suitability for scenarios requiring rapid convergence and effective optimization across diverse problem landscapes.
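As a minimal illustration of how wall-clock runtimes such as those in Table 7 can be collected, consider the sketch below; run() stands in for one full optimization under the given propagation scheme and is not the paper's actual interface. The measured times would then be averaged over the 30 independent repetitions.

```cpp
#include <chrono>
#include <cstdio>

void run(const char *scheme) {
    // ...optimize until the stopping rule fires...
}

int main() {
    const char *schemes[] = {"1to1", "1toN", "Nto1", "NtoN"};
    for (const char *s : schemes) {
        auto t0 = std::chrono::steady_clock::now();
        run(s);
        auto t1 = std::chrono::steady_clock::now();
        double seconds = std::chrono::duration<double>(t1 - t0).count();
        std::printf("%s: %.2f s\n", s, seconds);
    }
    return 0;
}
```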
In Figure 7, the statistical results indicate only one statistically significant difference: the 1to1 vs. NtoN comparison (p = 0.00041). All other comparisons (1to1 vs. 1toN, p = 0.12; 1to1 vs. Nto1, p = 0.98; 1toN vs. Nto1, p = 0.45; 1toN vs. NtoN, p = 0.34; Nto1 vs. NtoN, p = 0.43) show p-values above 0.05, indicating no statistically significant differences between these strategies.
In Figure 8 and Figure 9, the optimization results of the parallel SAO method are presented for three test functions: ELP100 (dimension 100), ROSENBROCK100 (dimension 100), and DIFFPOWER30 (dimension 30). Figure 8 shows the number of function calls, while Figure 9 shows the corresponding execution times in seconds. For all functions, a significant reduction in both function calls and execution times is observed as the number of clusters increases. For example, ELP100 requires 33,375 calls and 43.165 s with one cluster, whereas with 20 clusters the calls drop to 7537 and the time to 5.502 s. A similar trend is observed for the other functions: ROSENBROCK100 reduces calls from 49,697 (one cluster) to 9841 (20 clusters) and time from 50.845 to 8.298 s, while DIFFPOWER30 decreases from 42,614 calls and 40.602 s to 8351 calls and 6.873 s, respectively. The improvement is more pronounced when moving from fewer clusters (e.g., 1 to 5), with the reductions slowing at higher cluster counts (e.g., 10 to 20). This suggests that parallel processing via the SAO method offers significant performance advantages, particularly in computationally complex scenarios. However, the variation in time differences between functions (e.g., ROSENBROCK100 vs. DIFFPOWER30) highlights the impact of problem nature and dimensionality on overall performance.
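Using the standard definitions of speedup and parallel efficiency, the reported ELP100 times give a rough sense of the scaling (taking the cluster count in place of a processor count, which is an approximation):

\[
S_{20} = \frac{T_1}{T_{20}} = \frac{43.165}{5.502} \approx 7.85,
\qquad
E_{20} = \frac{S_{20}}{20} \approx 0.39,
\]

so moving from one to twenty clusters yields a near-eightfold wall-clock improvement, though at well under perfect efficiency, consistent with the diminishing returns noted above.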
Figure 10 presents the results of the parallel SAO method for the same functions with varying numbers of propagated agents. For all functions, a general decrease in function calls is observed as the number of agents increases, though the improvement is nonlinear. For instance, in ELP100, calls drop from 14,925 (one agent) to 7803 (10 agents), with the sharpest decline occurring between 1 and 5 agents (14,925 to 8473). Similarly, ROSENBROCK100 reduces calls from 22,030 (one agent) to 10,473 (10 agents), albeit with fluctuations (e.g., an increase to 11,000 for 8 agents). DIFFPOWER30 shows the greatest relative improvement, with calls decreasing from 21,323 (one agent) to 7746 (10 agents), particularly sharply between 1 and 7 agents. The differences in behavior across functions suggest that optimization efficiency depends on the problem’s inherent characteristics. ELP100 benefits more significantly from additional agents, while ROSENBROCK100 exhibits greater instability, likely due to its higher complexity. Additionally, beyond 5 agents the reductions in calls slow down, possibly indicating limits in parallel processing efficiency or increased coordination costs among agents.
The results of the comparison in Table 8 between the pSAOP (parallel SAO with propagation) method and the parallel algorithms pSAO (parallel SAO without propagation), pAOA, pAQUILA, and pDE reveal significant performance differences. Overall, the pSAOP method demonstrates the best total performance, with a TOTAL value of 259,119, substantially lower than the values of the other methods (AQUILA: 554,293; AOA: 515,111; DE: 778,440; SAO: 387,927). This difference highlights the overall superiority of the pSAOP method over existing approaches, particularly on complex or high-dimensional functions. On specific functions, the pSAOP method shows remarkable improvements. For instance, on the DIFFPOWER10 function, the pSAOP value (9662) is more than six times lower than that of DE (60,024) and significantly better than the rest. A similar difference is observed on SCHWEFEL222, where the pSAOP method (4062) dramatically outperforms DE (87,237). Furthermore, on functions like POTENTIAL5 (2795) and POTENTIAL10 (4899), the pSAOP method demonstrates notable performance advantages over all other algorithms. However, there are cases where other methods are competitive. For example, on the ACKLEY function, SAO (8307) performs slightly better than the pSAOP method (10,521). Similarly, on GRIEWANK2, SAO (6736) outperforms the pSAOP method (9109). These exceptions suggest that, while generally superior, the pSAOP method may not always be the optimal choice for certain types of problems. Additionally, the pSAOP method maintains consistent performance across a wide range of functions, as evidenced by its low call counts on SINUSOIDAL16 (3365), TEST2N7 (3707), and TEST30N4 (4522), where it consistently outperforms the other algorithms. Its ability to minimize the number of calls across such diverse problems indicates flexibility and resilience. Overall, the findings support that the pSAOP method offers significant advantages, especially on complex and high-dimensional optimization problems. The improvements on specific functions, combined with its consistent performance, make it a favorable choice over existing methods. However, in some cases, combining it with other approaches may be required for optimal results.
The experiments shown in Table 8 and Figure 11 were performed according to the following parameterization (a consolidated sketch of these settings is given after the list):
For PROPOSED (parallel SAO with propagation): propagation parameter values of 1, 5, and 1 (propagation at every iteration, with 5 agents propagated per event) and 10 subpopulations.
For parallel SAO without propagation: 10 subpopulations.
For parallel AOA (parallel Adaptive Optimization Algorithm): 10 subpopulations.
For parallel AQUILA (parallel Aquila Optimizer): coefficient values of 0 and 2, and 10 subpopulations.
For parallel DE (Parallel Differential Evolution [66]): Crossover Probability = 0.9, Differential Weight = 0.8, and 10 subpopulations.
For DE (Differential Evolution): Crossover Probability = 0.9 and Differential Weight = 0.8.
For PSO (Particle Swarm Optimization): the common settings listed for all methods below.
For all methods: m = 500, N = 10, an iteration limit of 200, and a similarity-based stopping rule with parameter values of 8 and 0.02 (2%).
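For reference, the settings above can be gathered in one place. The structure below is purely illustrative: the field names are hypothetical, not GLOBALOPTIMUS identifiers, and the reading of the unlabeled values in the list (200 as the iteration limit, 8 and 0.02 as similarity stopping parameters) is an assumption.

```cpp
// Illustrative consolidation of the experimental settings; all names are
// hypothetical and the meanings of the unlabeled list values are assumed.
struct ExperimentConfig {
    int    populationSize  = 500;  // m
    int    subpopulations  = 10;   // N
    int    iterationLimit  = 200;  // assumed meaning of the bare "200"
    int    propagationRate = 1;    // propagate every iteration (PROPOSED)
    int    agentsPerEvent  = 5;    // agents copied per propagation event
    int    similarityCount = 8;    // similarity stopping rule (assumed)
    double similarityTol   = 0.02; // 2% tolerance (assumed)
    double deCrossoverProb = 0.9;  // DE / parallel DE
    double deWeight        = 0.8;  // DE / parallel DE
};

int main() { ExperimentConfig cfg; (void)cfg; return 0; }
```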
Table 8 presents the experimental results for a large number of benchmark functions, comparing the performance of various optimization algorithms. The analysis includes traditional algorithms such as DE and PSO, along with their parallel variants, such as Parallel DE. It also includes newer metaheuristic algorithms such as AQUILA and AOA. The SAO algorithm is presented in both its sequential and parallel forms, and the set is completed by the proposed method, which integrates stochastic exploration, local exploitation, and cooperative information-propagation strategies. The proposed method consistently requires fewer objective function evaluations than the other algorithms across most benchmark functions. Notably, it demonstrates significant superiority on difficult and high-dimensional functions such as EXP16, EXP32, DIFFPOWER10, GRIEWANK10, ROSENBROCK16, and POTENTIAL10, where the reduction in required evaluations exceeds 50% compared to the other methods. In most cases, the proposed method either outperforms or is highly competitive with the best alternatives, indicating its high effectiveness. Particular attention should be given to the last row of the table, which reports the total number of function evaluations across all benchmark problems. The proposed method achieves a total cost of 256,521 evaluations, significantly lower than all other algorithms, including SAO (398,004), Parallel SAO (387,927), PSO (407,911), Parallel DE (507,525), and DE (1,347,623). The difference from the second-best algorithm (Parallel SAO) exceeds 130,000 evaluations, corresponding to a cost reduction of approximately 33%. This gap further supports the hypothesis that the combination of parallelization, improved propagation, and local exploitation proposed in this work contributes substantially to the efficiency of global optimization. Overall, the results clearly demonstrate the superiority of the proposed method over the compared algorithms, confirming its effectiveness in solving both low- and high-complexity optimization problems.
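The headline reduction can be checked directly from the reported totals:

\[
\frac{387{,}927 - 256{,}521}{387{,}927} = \frac{131{,}406}{387{,}927} \approx 0.339,
\]

i.e., roughly the 33% cut in function evaluations relative to the second-best method quoted above.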
Figure 11 expands on this analysis with box plots illustrating the statistical distribution of function calls. The pSAOP method exhibits the most compact box plot, with a lower median and narrower range than the other methods. This indicates that pSAOP not only reduces the mean number of calls but also minimizes result variance, ensuring more predictable performance. Despite the presence of some outliers, the overall trend confirms that pSAOP is the most efficient choice for parallel optimization, especially in large-scale problems. The introduction of dynamic information-exchange mechanisms between subpopulations (e.g., NtoN) appears to be the key factor driving this improvement, balancing exploration and exploitation within the search space.