This section presents a detailed performance evaluation of the proposed mAPO using two nonlinear benchmark systems: the chaotic Rössler system [14] and the chaotic permanent magnet synchronous motor [40]. These systems were selected because their nonlinear and potentially chaotic behavior makes parameter identification particularly challenging, offering an appropriate setting for assessing the accuracy, stability, and convergence characteristics of mAPO. For comparison, the algorithm was tested alongside the artificial protozoa optimizer (APO) [24], the Schrödinger optimizer (SRA) [41], holistic swarm optimization (HSO) [42], the elk herd optimizer (EHO) [43], and the Young’s double-slit experiment optimizer (YDSE) [44] under identical conditions. Each method was executed for 20 independent runs with a population size of 30 and a maximum of 400 iterations. This experimental setup ensures consistent benchmarking and enables a fair statistical comparison across all optimization methods.
The parameter settings adopted in this study were selected to ensure fairness, stability, and reproducibility, while preserving the intrinsic characteristics of the optimization algorithms under comparison. For the APO and the proposed modified version mAPO, the original control parameters governing foraging, dormancy, reproduction, and behavior-switching probabilities were retained exactly as defined in the canonical APO formulation. This choice was made deliberately to maintain the inherent balance between exploration and exploitation embedded in the original algorithm and to ensure that the performance improvements observed in mAPO arise solely from the introduced random learning strategy and the embedded Nelder–Mead simplex refinement, rather than from extensive parameter retuning. The additional parameters associated with the proposed enhancements were determined based on well-established practices in the metaheuristic optimization literature. In particular, the coefficients used in the Nelder–Mead simplex operations (reflection, expansion, and contraction) were selected according to standard recommendations for derivative-free local search, ensuring stable and efficient local refinement. Similarly, the activation probability of the random learning mechanism was chosen to introduce occasional diversity-enhancing perturbations without disrupting the dominant search dynamics of the population. These settings were fixed across all experiments to avoid problem-specific tuning and to preserve generality.
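The Nelder–Mead coefficients mentioned above follow the canonical recommendation for derivative-free local search (reflection α = 1, expansion γ = 2, contraction ρ = 0.5, shrink σ = 0.5). As a minimal sketch of what one such refinement iteration does, assuming these standard coefficients and an illustrative sphere objective (the function names below are ours, not the paper's):

```python
# One Nelder-Mead iteration with the canonical coefficients referenced in the
# text (reflection=1, expansion=2, contraction=0.5, shrink=0.5). Illustrative
# sketch only; the paper embeds this refinement inside mAPO.
import numpy as np

ALPHA, GAMMA, RHO, SIGMA = 1.0, 2.0, 0.5, 0.5  # standard NM settings

def nelder_mead_step(simplex, f):
    """Apply one reflection/expansion/contraction/shrink decision to a (n+1, n) simplex."""
    simplex = simplex[np.argsort([f(v) for v in simplex])]  # best vertex first
    centroid = simplex[:-1].mean(axis=0)                    # centroid excluding worst
    worst = simplex[-1]
    xr = centroid + ALPHA * (centroid - worst)              # reflection
    if f(xr) < f(simplex[0]):                               # better than best: try expansion
        xe = centroid + GAMMA * (xr - centroid)
        simplex[-1] = xe if f(xe) < f(xr) else xr
    elif f(xr) < f(simplex[-2]):                            # better than second worst: accept
        simplex[-1] = xr
    else:                                                   # contraction toward centroid
        xc = centroid + RHO * (worst - centroid)
        if f(xc) < f(worst):
            simplex[-1] = xc
        else:                                               # shrink all toward best vertex
            simplex[1:] = simplex[0] + SIGMA * (simplex[1:] - simplex[0])
    return simplex

sphere = lambda v: float(np.sum(v ** 2))
s = np.array([[2.0, 2.0], [2.5, 2.0], [2.0, 2.5]])
for _ in range(100):
    s = nelder_mead_step(s, sphere)
print(sphere(s[0]))  # best-vertex objective shrinks toward zero
```

Repeated application drives the best vertex of the simplex toward the local minimum, which is why the embedded refinement sharpens candidate solutions found by the global search.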
5.1. Results on CEC2017 Benchmark Functions
To further assess the generalization capability of the proposed modified artificial protozoa optimizer, additional experiments were conducted using the CEC2017 benchmark function suite, which is widely recognized as a rigorous testbed for evaluating optimization algorithms in high-dimensional and complex search spaces [27]. The benchmark set includes shifted and rotated functions, hybrid functions, and composition functions, each designed to capture different optimization challenges such as nonseparability, variable interaction, and multimodality [45]. All experiments were carried out using a 30-dimensional decision space, a population size of 50, a maximum of 1000 iterations, and 20 independent runs for each function. The statistical performance metrics reported include the best, worst, mean, and standard deviation (SD) of the objective function values.
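The four reported metrics can be computed directly from the final objective values of the independent runs. A minimal sketch, using synthetic placeholder values rather than results from the paper:

```python
# Computing the reported run statistics (best, worst, mean, SD) from the final
# objective values of independent runs. The values below are synthetic
# placeholders for illustration, not data from the paper.
import statistics

final_values = [3.2e-8, 1.1e-7, 5.6e-8, 4.0e-8, 9.9e-8]  # one entry per run

best  = min(final_values)
worst = max(final_values)
mean  = statistics.mean(final_values)
sd    = statistics.stdev(final_values)  # sample SD; some studies report population SD instead

print(best, worst, mean, sd)
```

Note that the choice between sample and population standard deviation is a convention that varies across papers; either is defensible as long as it is applied uniformly to all compared algorithms.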
The results for the shifted and rotated functions, summarized in Table 1, indicate that mAPO consistently attains lower mean objective values than APO across almost all test cases. In addition, reduced standard deviation values are observed for mAPO on several functions, suggesting improved stability and repeatability. Notably, on some of these functions mAPO reaches the optimal value with near-zero variability, whereas APO exhibits small but non-negligible dispersion. Figure 5 provides a visual representation of the data presented in Table 1.
The hybrid function results, reported in Table 2, further highlight the advantages of the proposed modifications. Hybrid functions are particularly challenging because they combine different landscape properties within a single objective. Across this group, mAPO achieves lower mean and worst-case values than APO on most functions, with substantially reduced dispersion on several of them, including F15. These outcomes indicate that the proposed random learning and local refinement mechanisms enhance the optimizer’s ability to navigate heterogeneous search spaces. Figure 6 provides a visual representation of the data presented in Table 2.
For the composition functions, which represent some of the most difficult problems in the CEC2017 suite, the results in Table 3 demonstrate that mAPO maintains a consistent performance advantage over APO. Although both algorithms converge to similar regions of the search space, mAPO generally produces lower mean objective values and smaller standard deviations, particularly for functions such as F28. This behavior suggests that the modified framework provides an improved balance between exploration and exploitation in highly irregular landscapes. Figure 7 provides a visual representation of the data presented in Table 3.
Overall, the CEC2017 results confirm that the proposed mAPO algorithm exhibits strong generalization capability in high-dimensional optimization problems. The consistent improvements observed across shifted, hybrid, and composition functions indicate that the proposed modifications are not problem specific, but rather enhance the underlying search dynamics of the algorithm. These findings complement the nonlinear system identification results presented earlier and demonstrate that mAPO is a robust and scalable optimization framework suitable for a wide range of complex optimization tasks.
5.4. Discussion
The characterization of the obtained solutions as promising is grounded in the quantitative evidence reported throughout this study. From a general optimization perspective, the evaluation on the CEC2017 benchmark suite demonstrates that the proposed mAPO consistently yields lower mean objective values and reduced standard deviations across shifted, hybrid, and composition functions when compared with the original APO. This behavior is particularly evident on the more challenging benchmark functions, where mAPO exhibits both improved solution quality and enhanced statistical reliability across independent runs. These numerical indicators confirm that the observed performance improvements are systematic rather than incidental.
The same conclusion is reinforced by the nonlinear system identification studies. For the chaotic Rössler system, objective function values on the order of 10⁻³² are obtained using mAPO, with corresponding parameter estimation errors reaching 10⁻¹⁴ or vanishing entirely. In contrast, competing algorithms converge to significantly higher residual error levels. For the PMSM system, exact reconstruction is achieved, with zero objective value, zero standard deviation, and zero parameter error across all runs. These outcomes demonstrate that the identified parameter sets reproduce the underlying system dynamics with full numerical precision. Accordingly, the term promising solutions is used here to denote solutions that are supported by measurable reductions in modeling error, improved convergence stability, and consistent statistical performance, rather than by qualitative observation alone.
A more detailed examination of the Rössler benchmark further clarifies the behavior of the proposed optimizer. The statistical results reported in Table 4 show that the best, worst, and mean values of the objective function obtained by mAPO are several orders of magnitude smaller than those achieved by the competing optimizers, while the associated standard deviation remains minimal. This indicates not only high accuracy but also strong repeatability across independent runs. The convergence curves in Figure 8 corroborate this observation, showing a rapid and monotonic decrease toward very small objective values for mAPO, whereas alternative methods either stagnate or exhibit irregular convergence patterns. In addition, the parameter evolution plots in Figure 9, Figure 10 and Figure 11 reveal that the estimates of the three system parameters converge rapidly to their true values and remain stable thereafter. These trends are quantified in Table 5 and Table 6, where mAPO attains near-exact numerical agreement with the actual parameters, with error rates down to the order of 10⁻¹⁴ or zero.
When the Rössler system is considered in relation to previously reported methods, the advantages of the modified scheme become more evident. As shown in Table 7, all compared approaches converge to the correct parameter triplet. However, the objective value achieved by mAPO is several orders of magnitude smaller than those reported for FPA, IFFO, and QPSO. This implies that, although competing algorithms can reproduce the qualitative structure of the chaotic attractor, a small but persistent mismatch remains in their reconstructed trajectories. In contrast, mAPO effectively eliminates this discrepancy, indicating that the integration of random learning and Nelder–Mead simplex refinement enables more efficient exploitation of promising regions and more precise solution refinement.
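The objective values being compared here are trajectory-matching errors: the candidate parameters are used to simulate the model, and the simulated trajectory is compared against the reference data. The following is a minimal sketch of such an objective for the Rössler system, using the widely cited nominal parameters a = b = 0.2, c = 5.7 and an illustrative step size and horizon; these settings are our assumptions, not necessarily the paper's exact configuration.

```python
# Trajectory-matching objective for Rossler parameter identification (sketch).
# The standard Rossler equations are: x' = -y - z, y' = x + a*y, z' = b + z*(x - c).
# Step size, horizon, and nominal parameters are illustrative assumptions.
import numpy as np

def rossler_rhs(state, a, b, c):
    x, y, z = state
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def simulate(params, x0, dt=0.01, steps=2000):
    """Fixed-step RK4 integration of the Rossler system."""
    a, b, c = params
    traj = np.empty((steps, 3))
    s = np.asarray(x0, dtype=float)
    for k in range(steps):
        k1 = rossler_rhs(s, a, b, c)
        k2 = rossler_rhs(s + 0.5 * dt * k1, a, b, c)
        k3 = rossler_rhs(s + 0.5 * dt * k2, a, b, c)
        k4 = rossler_rhs(s + dt * k3, a, b, c)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[k] = s
    return traj

def objective(candidate, reference, x0):
    """Mean squared error between candidate and reference trajectories."""
    return float(np.mean((simulate(candidate, x0) - reference) ** 2))

x0 = (1.0, 1.0, 1.0)
reference = simulate((0.2, 0.2, 5.7), x0)  # "measured" data from the true parameters

print(objective((0.2, 0.2, 5.7), reference, x0))   # exact parameters: zero error
print(objective((0.25, 0.2, 5.7), reference, x0))  # mismatched a: positive error
```

Because chaotic trajectories diverge exponentially under parameter mismatch, even small deviations inflate this objective, which is why order-of-magnitude differences in final objective values translate into meaningfully more precise parameter estimates.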
The online identification results for the Rössler system further highlight the adaptability of the proposed framework. By introducing piecewise changes in the system parameters, the system is forced to transition between distinct operating regimes. The time histories presented in Figure 12, Figure 13 and Figure 14 show that mAPO tracks each abrupt parameter change with short transients and without noticeable overshoot, before settling rapidly at the new values. The agreement between actual and estimated parameters reported in Table 8 confirms the absence of steady-state bias. These findings indicate that the combination of global exploration, random learning, and local simplex-based refinement provides sufficient flexibility for the optimizer to escape a previously converged region and re-identify the parameters when the system dynamics change.
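The online tracking idea can be illustrated with a deliberately simplified toy problem: the identifier repeatedly minimizes a windowed error over recent data, so when the true parameter jumps, the windowed objective changes and the estimate re-converges. Everything below (the scalar measurement model, the window length, and the brute-force grid search standing in for mAPO) is an assumption made for illustration only.

```python
# Sketch of windowed online re-identification. A 1-D toy model y = theta*sin(t)
# and a grid search stand in for the actual system and for mAPO; the point is
# that re-optimizing over a sliding window tracks an abrupt parameter jump.
import numpy as np

def window_objective(theta, times, ys):
    """MSE of the toy model y = theta*sin(t) over one data window."""
    return float(np.mean((theta * np.sin(times) - ys) ** 2))

def identify(times, ys, grid=np.linspace(0.0, 5.0, 501)):
    """Stand-in optimizer: pick the grid point minimising the window MSE."""
    return float(grid[np.argmin([window_objective(g, times, ys) for g in grid])])

ts = np.arange(0.0, 20.0, 0.05)
theta_true = np.where(ts < 10.0, 1.5, 3.0)  # abrupt parameter jump at t = 10
ys = theta_true * np.sin(ts)                # noiseless "measurements"

win = 40  # window length in samples
estimates = [identify(ts[k - win:k], ys[k - win:k]) for k in range(win, len(ts), win)]
print(estimates[0], estimates[-1])  # tracks ~1.5 before the jump and ~3.0 after
```

In the paper's setting the re-optimization is performed by mAPO itself, whose random learning step provides the diversity needed to escape the previously converged region, rather than by an exhaustive grid evaluation.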
A similar but more demanding scenario is observed for the PMSM system, which involves nonlinear electromechanical coupling and potential chaotic behavior. In this case, mAPO achieves perfect statistical performance, with the best, worst, mean, and standard deviation of the objective function all equal to zero, as reported in Table 9. While APO produces very small but nonzero errors, SRA, HSO, EHO, and YDSE exhibit noticeably larger objective values and higher variability. The convergence curves in Figure 15 and the parameter trajectories in Figure 16 and Figure 17 are consistent with these statistics, showing that mAPO reaches the true parameters within a few iterations and remains stable thereafter. These observations are confirmed by Table 10 and Table 11, where mAPO reconstructs the PMSM parameters with zero error, whereas alternative methods yield nonzero estimation errors ranging from very small to clearly significant.
Additional insight is provided by the comparison with PMSM identification approaches reported in the literature. As shown in Table 12, ILCOA and DE/ABC are capable of achieving very small objective values and matching the true parameter values; however, their residual errors remain above zero. The GA-based solution exhibits both a relatively large objective value and small parameter deviations. In contrast, mAPO attains a zero objective value while exactly recovering the true parameters, implying that the reconstructed PMSM trajectory is indistinguishable from the reference data within numerical precision. This outcome suggests that the hybrid search structure adopted in mAPO achieves a more favorable balance between exploration and exploitation than those employed in the comparative methods.
The online PMSM identification experiment reinforces this conclusion. When σ and the second identified parameter are forced to change across three consecutive intervals, the responses shown in Figure 18 and Figure 19 demonstrate that mAPO responds promptly to each parameter variation and converges to the new values with minimal transient distortion. As confirmed by Table 13, the final estimates coincide exactly with the actual parameters in all segments. From a practical perspective, this property is particularly relevant for applications such as adaptive control, fault diagnosis, and health monitoring, where system parameters may vary due to ageing, saturation effects, or external disturbances.
Taken together, the results highlight several key characteristics of the proposed mAPO framework. First, the incorporation of the random learning mechanism enhances global exploration and reduces the risk of premature convergence, as reflected by consistently lower objective values and smoother convergence behavior. Second, the embedded Nelder–Mead simplex refinement enables efficient local exploitation around high-quality candidate solutions, explaining the extremely small final errors and the zero-error statistics observed for the PMSM system. Third, the algorithm demonstrates strong robustness and adaptability, maintaining high identification accuracy under both static and time-varying conditions in chaotic environments.
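To make the role of the random learning mechanism concrete, the sketch below shows one common form of such a step in population-based optimizers: with small probability, an individual moves toward a randomly chosen peer. This is a generic illustration of the idea only, not the exact operator defined in the paper's method section; the probability value and update rule are our assumptions.

```python
# Generic random-learning perturbation (illustrative, NOT the exact mAPO
# operator): with probability p, an individual moves a random fraction of the
# way toward a randomly chosen peer, injecting diversity into the population.
import numpy as np

rng = np.random.default_rng(0)

def random_learning(pop, p=0.1):
    """Return a copy of `pop` where each row is perturbed toward a random peer with prob. p."""
    new_pop = pop.copy()
    n = len(pop)
    for i in range(n):
        if rng.random() < p:
            j = int(rng.integers(n))
            while j == i:                     # learn from a *different* individual
                j = int(rng.integers(n))
            r = rng.random()                  # random learning step length in [0, 1)
            new_pop[i] = pop[i] + r * (pop[j] - pop[i])
    return new_pop

pop = rng.normal(size=(30, 5))                # population of 30 candidates in 5-D
perturbed = random_learning(pop)
changed = int(np.any(perturbed != pop, axis=1).sum())
print(changed, "of", len(pop), "individuals were perturbed")
```

Because only a small fraction of individuals is perturbed per iteration, the mechanism adds exploratory diversity without overriding the dominant search dynamics, which is consistent with the behavior described above.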
At the same time, certain aspects warrant further investigation. Although excellent performance has been demonstrated for two benchmark systems using fixed population sizes and iteration limits, the sensitivity of mAPO to its internal control parameters has not yet been systematically examined. In addition, the computational overhead introduced by the random learning and simplex refinement stages has not been explicitly quantified. For large-scale or real-time applications, a more detailed analysis of computational cost and execution time would therefore be beneficial. Future work may also consider extensions to higher-dimensional systems and experimentally measured data, where measurement noise, unmodeled dynamics, and real-time constraints are present.
Finally, it is emphasized that the present work is confined to optimization-based identification of unknown parameters in nonlinear dynamic systems. The use of more than one benchmark model serves solely to evaluate the generality and robustness of the proposed optimizer and does not imply any form of system coupling, fusion, or combined modeling. Each system is treated independently, with its own mathematical formulation, parameter set, and objective function. Accordingly, the proposed framework should be interpreted strictly as an optimization process applied to parameter identification problems, and the scope of the study remains limited to simulation-based evaluation under controlled modeling conditions.