Article

Overcoming Stagnation in Metaheuristic Algorithms with MsMA’s Adaptive Meta-Level Partitioning

by Matej Črepinšek *, Marjan Mernik, Miloš Beković, Matej Pintarič, Matej Moravec and Miha Ravber
Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, 2000 Maribor, Slovenia
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(11), 1803; https://doi.org/10.3390/math13111803
Submission received: 23 April 2025 / Revised: 23 May 2025 / Accepted: 26 May 2025 / Published: 28 May 2025

Abstract: Stagnation remains a persistent challenge in optimization with metaheuristic algorithms (MAs), often leading to premature convergence and inefficient use of the remaining evaluation budget. This study introduces MsMA, a novel meta-level strategy that externally monitors MAs to detect stagnation and adaptively partitions computational resources. When stagnation occurs, MsMA divides the optimization run into partitions, restarting the MA for each partition with function evaluations guided by solution history, enhancing efficiency without modifying the MA’s internal logic, unlike algorithm-specific stagnation controls. The experimental results on the CEC’24 benchmark suite, which includes 29 diverse test functions, and on a real-world Load Flow Analysis (LFA) optimization problem demonstrate that MsMA consistently enhances the performance of all tested algorithms. In particular, Self-Adapting Differential Evolution (jDE), Manta Ray Foraging Optimization (MRFO), and the Coral Reefs Optimization Algorithm (CRO) showed significant improvements when paired with MsMA. Although MRFO originally performed poorly on the CEC’24 suite, it achieved the best performance on the LFA problem when used with MsMA. Additionally, the combination of MsMA with Long-Term Memory Assistance (LTMA), a lookup-based approach that eliminates redundant evaluations, resulted in further performance gains and highlighted the potential of layered meta-strategies. This meta-level strategy pairing provides a versatile foundation for the development of stagnation-aware optimization techniques.

1. Introduction

The ability to adapt to the environment is crucial for the survival and success of living beings [1]. Humans surpass other living creatures in many ways, one of which is their capacity to optimize and utilize various tools to achieve optimization goals. Finding and implementing optimal solutions is a key factor behind humanity’s rapid and successful development [2]. Numerous optimization processes occur today without our awareness, spanning fields such as communications (where radio towers and message exchanges are modeled adaptively), logistics (where routes for goods and people are optimized), planning, production, drug manufacturing, and more. In short, optimization permeates nearly every aspect of our lives [3,4,5].
The advent of computers has enabled new approaches to solving real-world optimization problems. The first step is to model the problem in a way that lets us simulate its behavior using its key parameters. This representation, often termed a digital twin [6], aims to provide an expected or simulated state of the problem, given specific input parameters. The quality of this simulated state can then be evaluated against defined criteria. In optimization with evolutionary algorithms, this quality assessment is referred to as fitness. When developing a problem model, it is critical to define the level of detail required for the simulation to ensure the results are useful to the user. Excessive precision often yields no additional benefits, while demanding significant computational power and leading to time-consuming, costly software development [5].
A successful digital representation enables effective leveraging of modern computers’ computational power. Optimization involves identifying the best configuration of input parameters for the problem. While randomly testing parameters may occasionally yield improvements, this inefficient approach does not guarantee even a locally optimal solution within a limited timeframe. Conversely, examining all possible parameter combinations systematically is impractical due to the vast number of possibilities. This is where optimization algorithms, including MAs, become essential [7]. The behavior of MAs is a well-researched area, encompassing topics such as the influence of control parameters, and the mechanisms of exploration and exploitation during the search in the solution space [8,9,10].
Many MAs suffer from stagnation, where they fail to improve solutions over extended periods, often indicating entrapment in a local optimum [11]. This leads to wasteful function evaluations as MAs repeatedly explore the same search space regions without progress. Existing stagnation controls are typically algorithm-specific [12], lacking universal applicability across diverse MAs. These limitations highlight a research gap in flexible, meta-level stagnation management that can enhance computational efficiency for any MA.
To address these gaps, we propose MsMA, a meta-approach that wraps any MA to enable self-adaptive search partitioning based on stagnation detection. MsMA activates at the meta-level only when stagnation occurs, reallocating resources to escape local optima efficiently, otherwise preserving the MA’s core behavior. Its universal applicability, synergy with LTMA, and evaluation on CEC’24 and LFA problems demonstrate its effectiveness.
The main contributions of this work are:
  • A novel meta-approach, MsMA, for self-adaptive search partitioning based on stagnation detection. It wraps any MA, handling stagnation at the meta-level to enhance efficiency. MsMA activates only when stagnation is detected, otherwise allowing the MA to operate unchanged, ensuring broad applicability.
  • Demonstration of meta-approach effectiveness by synergizing LTMA with MsMA. This strategy enhances exploration and exploitation across MAs without modifying their core mechanisms. Applying LTMA to MsMA showcases improved performance and supports versatile meta-strategy integration.
  • Robust evaluation of the proposed approach using the CEC’24 benchmark and the LFA problem. Results show consistent performance improvements over baseline MAs, with novel insights into ABC and CRO behaviors.
This paper explores metaheuristic optimization systematically with a focus on mitigating stagnation in MAs. We begin by reviewing the background and related work on metaheuristic optimization and stagnation in Section 2, establishing the context for our contributions. Next, we present a novel meta-level strategy to address stagnation in Section 3, detailing its design and implementation. The proposed approach is evaluated empirically in Section 4, where its performance is assessed across diverse benchmark problems. We then analyze the findings and their implications in Section 5, providing a deeper understanding of the strategy’s impact. Finally, the paper concludes in Section 6, summarizing the key outcomes, and suggesting avenues for future research.

2. Related Work

Parallel to the development of computing, significant advancements have occurred in computational applications for solving various optimization problems. For instance, as early as 1947, George Dantzig implemented the Simplex Algorithm to address the Diet Problem [13]. By 1950, techniques like Linear Programming were being applied to optimize logistical challenges, such as the Transportation Problem [14,15]. These efforts were followed by the development of optimization techniques, including Dynamic Programming, Integer Programming, Genetic Algorithms, and Simulated Annealing, applied to a wide range of optimization problems [16,17,18,19]. The period between 1980 and 1990 marked the beginning of a flourishing era for various optimization heuristics and metaheuristics. Techniques such as Tabu Search, Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Differential Evolution (DE) emerged during this time [20,21,22,23,24].
Today, we are in an era characterized by an explosion of metaheuristic optimization algorithms and hybrid approaches [25,26,27,28]. Among the more popular approaches, metaheuristics can be categorized into evolutionary algorithms (EAs) and swarm intelligence (SI)-based methods. EAs include techniques such as Genetic Algorithms (GAs), Genetic Programming (GP), and DE. SI methods include techniques such as PSO, ACO, ABC, the Firefly Algorithm (FA), and Cuckoo Search (CS).
Optimization incurs a computational cost, which can be managed through stopping criteria [29]. Various approaches exist to define these criteria, including time limits, iteration counts, energy thresholds, function evaluation limits, and algorithm convergence. Among these, the most commonly used stopping criterion is the maximum number of function evaluations (MaxFEs) [30,31], as function evaluations often represent the most computationally expensive aspect of optimization in real-world applications. In our experiments, we adopted MaxFEs as the stopping criterion. For this paper, the optimization process for a problem P is defined as the search for an optimal solution x using an algorithm constrained by MaxFEs, as shown in Equation (1).
x = MA(P, MaxFEs)    (1)

Stagnation

Stagnation is a well-known phenomenon in optimization algorithms [32,33,34,35,36,37,38,39]. It occurs when an algorithm fails to improve the current best solution over an extended period of computation or time. Improvement can be quantified using the δ-stagnation radius, where δ represents the minimum required improvement for a solution to be considered enhanced [40], and the period before entering stagnation is defined by the threshold Min_s. This period can be measured by the number of iterations without improvement, elapsed time, or the number of function evaluations.
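As an illustration, the following sketch shows how such a stagnation check could be implemented; the class and variable names are ours and assume a minimization problem, not a specific framework API.

```java
// Minimal stagnation detector: an evaluation counts as an improvement only if it is
// better than the current best by more than the delta-stagnation radius. Stagnation is
// declared after minSFEs consecutive evaluations without such an improvement.
class StagnationDetector {
    private final double delta;      // delta-stagnation radius (minimum required improvement)
    private final int minSFEs;       // evaluations without improvement before stagnation
    private double bestFitness = Double.POSITIVE_INFINITY;  // assumes minimization
    private int evalsSinceImprovement = 0;

    StagnationDetector(double delta, int minSFEs) {
        this.delta = delta;
        this.minSFEs = minSFEs;
    }

    // Call once per fitness evaluation; returns true once the run is considered stagnant.
    boolean update(double fitness) {
        if (bestFitness - fitness > delta) {
            bestFitness = fitness;
            evalsSinceImprovement = 0;
        } else {
            evalsSinceImprovement++;
        }
        return evalsSinceImprovement >= minSFEs;
    }
}
```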
Researchers have proposed various strategies to overcome stagnation, ranging from simple restart mechanisms to more sophisticated adaptive strategies. In simple restart mechanisms, stagnation serves as an optimization stopping criterion, either to conserve computational resources or to extend optimization as long as improvements occur, thus avoiding premature termination due to fixed iteration limits or function evaluation budgets [29,41]. Alternatively, stagnation can be used as an internal mechanism to balance exploration and exploitation. In this context, stagnation triggers adaptive strategies—such as increasing diversity [42,43,44], restarting parts of the population [45,46], or modifying parameters [12,47,48]—to help the algorithm escape the local optima and enhance global search capabilities. For example, in [40], the authors proposed a hybrid PSO algorithm with a self-adaptive strategy to mitigate stagnation by adjusting the inertia weight and learning factors based on the stagnation period. Similarly, adapting the population size based on stagnation parameters is suggested in [49,50]. In Ant Colony System optimization, the authors of ref. [51] introduce an additional parameter, the distance function, to address stagnation. For PSO algorithms, ref. [52] incorporated a stagnation coefficient and Fuzzy Logic, while [53] proposed a diversity-guided convergence acceleration and stagnation avoidance strategy. A self-adjusting algorithm with stagnation detection based on a randomized local search was presented in [54]. More recent work adapts Differential Evolution (DE) with an adaptation scheme based on the stagnation ratio, using it as an indicator to adjust control parameters throughout the optimization process [55]. The authors of the MFO–SFR algorithm introduced the Stagnation Finding and Replacing (SFR) strategy, which detects and addresses stagnation using a distance-based method to identify stagnant solutions [56]. The CIR-DE method tackles DE stagnation by classifying stagnated solutions into global and local groups, employing chaotic regeneration techniques to guide exploration away from these individuals [57]. Self-adjusting mechanisms, including stagnation detection and adaptation strategies for binary search spaces, have been explored in [54,58].
The reviewed MAs often rely on internal stagnation controls, limiting their flexibility and efficiency across diverse algorithms. To address this, we propose M s M A , a meta-level approach that wraps any MA, detecting stagnation and adaptively partitioning computational resources.

3. Meta-Level Approach to Stagnation

This section introduces a novel meta-level strategy designed to address the stagnation problem in MAs. By leveraging stagnation detection, we propose a self-adaptive search partitioning mechanism that enhances the efficiency of computational resource utilization. The approach builds on the concept of MAs and introduces two key components: the MsMA strategy, which partitions the optimization process based on stagnation criteria, and its synergy with the LTMA approach, which uses memory to avoid redundant evaluations. These methods aim to improve the performance of MAs by mitigating the effects of stagnation, particularly in complex optimization landscapes.

3.1. Leveraging Stagnation for Self-Adapting Search Partitioning

Since we use MaxFEs as the stopping criterion, we define stagnation based on the number of function evaluations without improvement in the current best solution. This threshold, denoted MinSFEs, determines when the algorithm enters a stagnation cycle.
The simplest approach to handling stagnation is to execute the algorithm multiple times and select the best solution from each run using multi-start mechanisms. Due to the stochastic nature of these algorithms, multiple restarts in a single run enable the exploration of different regions of the search space. However, computational budgets impose constraints that dictate our stopping conditions. In this case, we adopted the established stopping criterion of MaxFEs. To apply multiple restarts while adhering to the overall stopping criterion MaxFEs, a basic run partitioning mechanism can be interpreted as distributing function evaluations across individual restarts. The simplest method is to allocate evaluations uniformly across restarts. For example, if we perform r_max restarts, the stopping condition for each restart r is defined by Equation (2) (Figure 1).
MaxFEs_r = MaxFEs / r_{max}    (2)
This uniform distribution of computational resources raises the question of whether a single, longer run or multiple shorter runs is more effective. In most scenarios, the answer depends on stagnation. For instance, if stagnation occurs, it often makes sense to distribute computational resources across multiple restarts. However, a challenge arises in defining stagnation precisely: after how many evaluations (MinSFEs) can we consider stagnation to have occurred? Naturally, algorithms do not improve solutions with every evaluation, and the frequency of improvements typically decreases as the algorithm progresses. The optimal MinSFEs value depends on the problem type and the exploration mechanisms of the MA.
In the proposed meta-optimization approach, which incorporates internal adaptive search partitioning based on a stagnation mechanism (MsMA), we maintain MaxFEs as the primary stopping criterion, while MinSFEs serves as an internal stopping/reset condition within the algorithm. The parameter MinSFEs partitions each optimization run according to the algorithm’s stagnation criteria, where:
MaxFEs = \sum_{i=1}^{r} MaxFEs_i    (3)
Here, r represents the number of stagnation phases the algorithm has encountered, excluding the current run. If the algorithm does not enter a stagnation phase, then MaxFEs = MaxFEs_1 with r = 1.
In the worst-case scenario, such as when the first evaluation yields the best overall result, the maximum number of search partitions r is constrained by MinSFEs and MaxFEs, as expressed in Equation (4):
r \leq MaxFEs / MinSFEs    (4)
The best solution found in each partition is stored, and the overall best solution x_S is returned as the final result (Equation (5), Figure 2):
x_S = MsMA( MA_i(P, MinSFEs, MaxFEs_i) \mid i = 1, \ldots, r )    (5)
The key research question is whether this approach can enhance the performance of the selected MAs. In formal notation (Equations (6) and (7)), we aim to achieve:
E[x_S] \succeq_{opt} E[x]    (6)
where:
  • E[·] denotes the expectation (average performance) over multiple optimization runs due to the stochastic nature of the algorithm.
  • \succeq_{opt} is a problem-dependent comparison operator, defined as:
a \succeq_{opt} b = \begin{cases} a \geq b, & \text{if the objective is maximization,} \\ a \leq b, & \text{if the objective is minimization.} \end{cases}    (7)
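The same problem-dependent comparison appears in the pseudocode below as isFirstBetter; a minimal sketch of how it could be realized is given here (illustrative only, not the EARS implementation).

```java
// Problem-dependent comparison corresponding to the comparison operator in Equation (7):
// "first is better" means larger fitness for maximization and smaller fitness for minimization.
class ComparisonSketch {
    static boolean isFirstBetter(double fitnessA, double fitnessB, boolean maximize) {
        return maximize ? fitnessA > fitnessB : fitnessA < fitnessB;
    }
}
```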

MsMA Strategy: Implementation Details

To implement a meta-strategy for self-adaptive search partitioning at the meta-level, we propose using an algorithm wrapper that overrides execution. This strategy introduces an additional parameter, MinSFEs, used as an internal stopping criterion (Algorithm 1).
Algorithm 1. MsMA: A Meta-Level Strategy for Overcoming Stagnation
Require: A base metaheuristic MA, a task with stopping criterion MaxFEs, and stagnation threshold MinSFEs
Ensure: Best solution found
 1: bestSolution ← null
 2: stagnationTask ← new TaskStagnation(task.MaxFEs, task.problem, MinSFEs)
 3: while !stagnationTask.isMaxFEsCriterionReached() do
 4:     tmp ← MA.execute(stagnationTask)
 5:     if task.problem.isFirstBetter(tmp, bestSolution) then
 6:         bestSolution ← tmp
 7:     end if
 8:     stagnationTask.resetStagnationCounter()
 9:     MA.resetToDefaults()
10: end while
11: return bestSolution
The idea behind this approach is that the algorithm consumes as many function evaluations as needed until stagnation is detected. Upon stagnation, a new search is initiated with a reduced evaluation budget. The number of evaluations already used is subtracted from MaxFEs, effectively decreasing the budget for subsequent runs. The optimization ceases once all evaluations are exhausted.
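The budget bookkeeping described above can be summarized in the following sketch; all types and method names are placeholders chosen for illustration (they are not the EARS API), and the partitions are unequal by construction because each restart receives whatever budget remains.

```java
// Sketch of MsMA's budget partitioning across stagnation-triggered restarts.
interface Problem { boolean isFirstBetter(double[] a, double[] b); }
interface RestartableMA {
    double[] runUntilStagnationOrBudget(Problem p, int budgetFEs, int minSFEs);
    int evaluationsUsed();     // evaluations consumed by the last partition
    void resetToDefaults();    // restore default internal state before the next partition
}

class MsMAPartitioning {
    static double[] optimize(RestartableMA ma, Problem problem, int maxFEs, int minSFEs) {
        double[] best = null;
        int remainingFEs = maxFEs;
        while (remainingFEs > 0) {
            double[] candidate = ma.runUntilStagnationOrBudget(problem, remainingFEs, minSFEs);
            if (best == null || problem.isFirstBetter(candidate, best)) {
                best = candidate;
            }
            int used = ma.evaluationsUsed();
            if (used <= 0) break;          // safety guard for the sketch
            remainingFEs -= used;          // the next partition gets only what is left
            ma.resetToDefaults();
        }
        return best;
    }
}
```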

3.2. Synergizing MsMA and LTMA for Improved Performance

LTMA is a meta-level approach that enhances performance by leveraging memory to avoid re-evaluating previously generated solutions, commonly known as duplicates [11]. When an MA generates a new solution, it is stored in the memory. If the same solution is encountered again, the algorithm skips its evaluation and reuses the previously computed fitness value, leaving the fitness evaluation counter unchanged. Since memory lookup is significantly faster than fitness evaluation—especially in real-world optimization problems—this approach improves the computational performance substantially. LTMA is particularly beneficial when an algorithm repeatedly generates already-evaluated solutions, such as during stagnation phases, where duplicates are a common issue. Stagnation may not occur at a single point but can involve wandering within a region [36].
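Conceptually, LTMA sits between the MA and the fitness function as a memo table; the sketch below illustrates this idea with a plain HashMap and a string key, which are our simplifications rather than the published LTMA implementation (which, for instance, matches solutions at a fixed decimal precision).

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.function.ToDoubleFunction;

// Memoized evaluation in the spirit of LTMA: duplicates are served from memory
// and do not consume the function evaluation budget.
class MemoizedEvaluator {
    private final Map<String, Double> memory = new HashMap<>();
    private int evaluations = 0;

    double evaluate(double[] x, ToDoubleFunction<double[]> realFitness) {
        String key = Arrays.toString(x);   // simplified key; LTMA uses a fixed precision when matching
        Double cached = memory.get(key);
        if (cached != null) {
            return cached;                 // duplicate hit: reuse stored fitness, counter unchanged
        }
        double fitness = realFitness.applyAsDouble(x);
        evaluations++;                     // only genuinely new solutions count against MaxFEs
        memory.put(key, fitness);
        return fitness;
    }

    int evaluationsUsed() { return evaluations; }
}
```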
As a meta-level strategy, MsMA can be combined with another meta-level approach, LTMA, to leverage synergistic effects. LTMA prevents redundant evaluations of previously encountered solutions, enhancing performance and accelerating convergence. The LTMA approach can be applied in two ways: either to each MA partition run (Equation (8)) or to the entire MsMA approach (Equation (9)).
results = MsMA( LTMA( MA_i(P, MinSFEs, MaxFEs_i) ) \mid i = 1, \ldots, r )    (8)
results = LTMA( MsMA( MA_i(P, MinSFEs, MaxFEs_i) \mid i = 1, \ldots, r ) )    (9)
Figure 3 illustrates the application of the LTMA strategy to MsMA, as detailed in Equations (8) and (9), showing MaxFEs partitioning for the top and bottom configurations.
In the first approach (Equation (8)), LTMA is applied to each MA partition run, with a memory reset after each run, using the same or less memory compared to the second approach. In contrast, the second approach (Equation (9)) applies LTMA across all the partition runs MA_i, sharing information about previously explored areas between runs. This increases the likelihood of duplicate memory hits, as previously generated solutions remain in the LTMA memory.
The implementation of the LTMA(MsMA) approach is presented in Algorithm 2. This algorithm resembles the MsMA approach closely (Algorithm 1), with the addition of the LTMA wrapper, which enhances performance by leveraging memory to avoid re-evaluating previously generated solutions.
Algorithm 2. LTMA(MsMA): Implementation Variant of MsMA Using LTMA
Require: A base metaheuristic MA, a task with stopping criterion MaxFEs, and stagnation threshold MinSFEs
Ensure: Best solution found
 1: bestSolution ← null
 2: memStaTask ← new LTMA(TaskStagnation(task.MaxFEs, task.problem, MinSFEs))
 3: while !memStaTask.isMaxFEsCriterionReached() do
 4:     tmp ← MA.execute(memStaTask)
 5:     if task.problem.isFirstBetter(tmp, bestSolution) then
 6:         bestSolution ← tmp
 7:     end if
 8:     memStaTask.resetStagnationCounter()
 9:     MA.resetToDefaults()
10: end while
11: return bestSolution
As demonstrated in [11], modern computers have sufficient memory capacity for most optimization problems, storing only solutions and their fitness values. Therefore, we will evaluate the variant from Equation (9) in our experiments.

3.3. Time Complexity Analysis

Each meta-operator adds some overhead to the optimization process. The MsMA strategy checks for stagnation and resets the algorithm’s internal state, which can be done in constant time. The time complexity of MsMA is O(1) per function evaluation, as it performs only a few extra operations. Detecting stagnation adds minimal overhead, requiring a single if statement and a counter to check whether a new solution improves the current best.
The LTMA strategy has a higher cost. Its time complexity is O(n), where n is the number of function evaluations. Each evaluation involves checking for duplicates and updating memory, operations that take constant time on average. As shown in the original LTMA study [11], this overhead is small and generally negligible. There are edge cases, however. One occurs when algorithms, such as RS, rarely generate duplicate solutions. Another arises when fitness evaluations are extremely fast, comparable to a memory lookup in LTMA. In such cases, using LTMA can be slower than running the algorithm without it.

4. Experiments

The primary objective of this research is to promote meta-approaches by investigating the phenomenon of stagnation and exploring a meta-approach to overcome it. To achieve this, we selected several MAs for single-objective continuous optimization, specifically from evolutionary algorithms (EA) and swarm intelligence (SI). The parameters of the selected algorithms are kept at their default settings rather than being optimized. It is crucial to emphasize that the focus of this research is not to identify the best-performing MA but to analyze the effects of stagnation and evaluate the proposed method, which does not modify the core functioning of the optimization algorithm directly.
The selected SI algorithms are: Artificial Bee Colony (ABC) [59] with limit = pop_size · n / 2 (where n denotes the problem dimensionality) and Particle Swarm Optimization (PSO) [23]. For EA, we selected: Self-Adapting Differential Evolution (jDE) [47], the less-known Manta Ray Foraging Optimization (MRFO) [60], and the Coral Reefs Optimization Algorithm (CRO) [61]. The source codes of all the algorithms used are included in the open-source EARS framework [62], specifically in the algorithms.so package. We also included a simple random search (RS) as a baseline algorithm in all the experiments (Equation (10)).
MAs = {ABC, PSO, jDE, MRFO, CRO, RS}    (10)

4.1. Statistical Analysis

Comparing stochastic algorithms requires complex statistical analysis involving multiple independent runs, average results, and Standard Deviations to determine significant differences. In practice, we aim for a simple representation of an algorithm’s performance, which can be achieved, for example, by assigning a rating to each algorithm. This approach is well-established and validated in fields such as chess ranking and video game matchmaking, where the goal is to pair opponents of similar strength [63,64,65,66]. Our experiments utilized the EARS framework with the default Chess Rating System for Evolutionary Algorithms (CRS4EA) [30,31]. CRS4EA integrates the Glicko-2 rating system, a widely used method in chess for ranking players. It assigns a rating to each algorithm based on its performance in benchmark tests, comparing the results against other algorithms in pairwise statistical evaluations—analogous to how chess players are ranked. CRS4EA operates by assessing algorithms based on their wins, draws, and losses in direct comparisons. These results determine each algorithm’s rating and confidence intervals. Every algorithm starts with a rating deviation (RD) of 350 and a rating of 1500, which is updated after each tournament. Over time, as the algorithm’s performance stabilizes, its RD is expected to decrease.
A study by [67] shows that the CRS4EA method performs on par with common statistical tests, like the Friedman test and the Nemenyi test. It also gives stable results, even with few independent runs. Another study by [68] finds CRS4EA comparable to the pDSC method. Together, these studies support CRS4EA as a reliable tool for comparing evolutionary algorithms.
To ensure reliable rating updates, it is recommended to set the minimum RD (RD_min) based on the number of matches and players in the tournament. In classical game scenarios, where players compete less frequently, an RD_min of 50 is typically used. However, in our experiments, each MA competes against every other MA on 29 problems across 30 tournaments (requiring 30 independent runs for each problem), resulting in a high frequency of matches. Consequently, we set RD_min to the minimum recommended value of 20. Reducing RD_min below 20 is not advisable, as it may cause the ratings to converge too slowly [69,70]. We derived confidence intervals to compare algorithm performance using the computed ratings and RD values. The statistical significance of differences between algorithms is determined by the overlap (or lack thereof) of confidence intervals, calculated as ±2·RD around each algorithm’s rating. Non-overlapping confidence intervals indicate a statistically significant difference between algorithms with 95% confidence [69].
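For instance, two algorithms rated 1540 and 1580, each with RD = 20, have intervals [1500, 1580] and [1540, 1620]; the intervals overlap, so the difference is not significant at this level. A minimal helper for this check could look as follows (illustrative only, not part of the CRS4EA implementation):

```java
// Significance check used with CRS4EA-style ratings: intervals of rating ± 2·RD that do not
// overlap indicate a statistically significant difference with 95% confidence.
class RatingComparison {
    static boolean significantlyDifferent(double ratingA, double rdA, double ratingB, double rdB) {
        double lowA = ratingA - 2 * rdA, highA = ratingA + 2 * rdA;
        double lowB = ratingB - 2 * rdB, highB = ratingB + 2 * rdB;
        return highA < lowB || highB < lowA;   // no overlap => significant
    }
}
```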

4.2. Benchmark Problems

Evaluating and comparing the performance of optimization algorithms requires a well-defined set of benchmark problems. Selecting a representative set that captures diverse optimization challenges ensures a fair and meaningful comparison. CEC benchmarks are used widely in the optimization community due to their extensive documentation and established credibility, making them suitable for our experiments. To prevent algorithms from exploiting specific problem characteristics, the benchmark problems are shifted and rotated, ensuring a more robust evaluation [71,72].
For our experiments, we utilized the latest CEC’24 benchmark suite [73], specifically the Single Bound Constrained Real-Parameter Numerical Optimization benchmark. This suite comprises 29 problems, including 2 unimodal, 7 multimodal, 10 hybrid, and 10 composition functions, designed to simulate various optimization challenges. The CEC’24 benchmark provides a comprehensive evaluation of optimization algorithms across a diverse range of problems.

4.3. MsMA: Meta-Level Strategy Experiment

For the evaluation of MsMA, we utilized the CEC’24 benchmark suite for the selected MAs, employing the CRS4EA rating system for ranking. In our experimental setup, we set the dimensionality of the problems to n = 30, and defined the maximum number of function evaluations (MaxFEs) according to the benchmark specifications as 300,000.
To evaluate the meta-approach, we addressed the experimental question: How does the introduction of the MsMA strategy influence the performance of MAs? This raises an additional question with the introduction of the new parameter MinSFEs: How does the MinSFEs parameter influence algorithm performance?
To investigate this, we tested MinSFEs values at 2%, 4%, and 10% of MaxFEs, corresponding to 6000, 12,000, and 30,000 evaluations, respectively. We labeled each algorithm using the MsMA strategy with suffixes _6, _12, and _30, reflecting the number of MinSFEs evaluations in thousands. For example, when configured with MinSFEs = 6000, the ABC algorithm is renamed ABC_6. Thus, an algorithm tested with different MinSFEs values is treated as a distinct algorithm within the CRS4EA rating system. As a result, the ABC algorithm appears in the tournament four times: once with its default settings, and three times with different MinSFEs values.
The experimental results are presented in a leaderboard table, displaying the ratings of MAs on the CEC’24 benchmark based on 30 tournament runs (requiring 30 independent runs for each problem). This setup is justified by previous findings showing that the CRS4EA rating system provides more reliable comparisons, and does not require a larger number of independent runs to achieve statistically robust results [67]. For each MA, we report its rank based on the overall rating, the rating deviation interval, and the number of statistically significant positive (S+) and negative (S−) differences at the 95% confidence level (Table 1).
The MsMA strategy proved highly successful, as all the algorithms incorporating MsMA achieved higher ratings and ranks than their base variations (Table 1). However, not all the MsMA variations resulted in statistically significant differences. For example, while jDE_6 and jDE_30 showed significant improvements over the core jDE, jDE_12 did not. For PSO, none of the MsMA variations (PSO_6, PSO_12, PSO_30) exhibited a significant difference compared to the core PSO. In the case of MRFO, the variations were significantly different from the core MRFO, with MRFO_6 and MRFO_12 outperforming all the ABC variations. For ABC, no significant differences were observed among its variations, possibly due to its inherent limit parameter that restarts the search. For CRO, CRO_6 was significantly better than the core CRO (Table 1).
Regarding the MinSFEs parameter, the results from Table 1 do not indicate a universally optimal value across all the algorithms. For instance, jDE, PSO, and ABC performed best with MinSFEs set to 10% of MaxFEs, whereas MRFO and CRO achieved the best results with 2% of MaxFEs (Table 1). The varying impact of the MsMA strategy across the algorithms was expected, as each handles exploration and exploitation differently to avoid stagnation.
Further insights were gained by analyzing each algorithm’s wins, draws, and losses in pairwise comparisons. Since wins are more relevant for lower-performing algorithms, losses are crucial for top-performing ones, and draws provide less information, we focused on a detailed analysis of losses. Tables containing the results for wins and draws are provided in the Appendix A (Table A1 and Table A2).
Across 870 games, all the MAs lost at least some matches against every opponent (the first row in Table 2), except against the control RS algorithm (the last column in Table 2). A key observation is that the MsMA strategy improved jDE’s performance against ABC (highlighted in blue); however, ABC_30 still found the best solution overall in 16 games. Additionally, most of jDE’s losses came from its improved variations; for example, jDE lost to jDE_30 a total of 760 times (Table 2). Interestingly, when comparing the jDE row with its top-performing variations (jDE_30, jDE_6, jDE_12), jDE often had fewer losses than its superior counterparts in most columns. In this case, partitioning the optimization process did not outperform full runs. This aligns with findings suggesting that extended search durations can help overcome stagnation and yield better solutions, indicating that longer runs may be advantageous when computational resources are less constrained [41].
To explore performance on individual problems, we investigated whether specific problems exist where the MsMA strategy is particularly effective. Table 3 and Table 4 present the losses of each MA on problems F01 to F29.
Problems F03 and F04 were the most challenging for the overall best-performing jDE and its variations (Figure A1 and Figure A2). The greatest improvement from MsMA for jDE occurred on problem F05, where the losses decreased from 106 to 6 for jDE_12 and to 0 for jDE_6 and jDE_30 (Table 3).
Regarding ABC’s success against certain jDE runs (Table 2), Table 3 shows that ABC performed best on F05 with no losses (Figure A3), but it was the worst on F10, except for RS. Interestingly, the MsMA strategy slightly worsened MRFO’s performance on problem F03 (highlighted in blue in Table 3). Compared to MRFO, CRO, and ABC, PSO had the fewest losses on problem F21 (Table 4). Tables with wins and draws for individual problems are provided in the Appendix A (Table A3 and Table A5).

4.4. LTMA(MsMA): Performance Experiment

To determine whether LTMA(MsMA) complements and enhances performance, we conducted a similar experiment using the CEC’24 benchmark, following the same procedure as for MsMA in Section 4.3. Here, LTMA was applied to all MsMA variations, and for comparison, we included the best-performing MsMA variations of each MA from the previous experiment. Variations incorporating the LTMA strategy are labeled with the suffix _LTMA, along with the corresponding MsMA number label.
The experimental results are presented in a leaderboard table, displaying the ratings of MAs and their variations on the CEC’24 benchmark over 30 tournaments. For each MA, we report its rank based on overall rating, the rating deviation interval, and the number of statistically significant positive (S+) and negative (S−) differences at the 95% confidence level (Table 5).
Applying the LTMA strategy to all MsMA variants yielded performance improvements across most MA variants, except for jDE_30, the top performer from the prior experiment (Table 1 and Table 5). The minor rating difference between jDE_30 and jDE_LTMA_30 is statistically insignificant and may stem from the stochastic nature of the MAs. Alternatively, LTMA may be less effective when an MA consistently reaches the global optimum, as additional evaluations from duplicates fail to enhance solutions. Another possibility is the link between duplicate evaluations and stagnation, where mitigating one issue partially alleviates the other.
For PSO, there was no significant difference between PSO and its LTMA-enhanced variations (PSO_LTMA_6, PSO_LTMA_12, PSO_LTMA_30). The most substantial improvement was observed in CRO with LTMA(MsMA), where the rating increased from 1370.09 to 1420.87, surpassing all MRFO and ABC variations (Table 5). As expected, self-adaptive MAs like jDE and ABC showed limited gains with LTMA(MsMA), as they rarely enter stagnation cycles. The experiment also revealed a possible negative impact of LTMA(MsMA) on ABC, where ABC_LTMA_6 performed worse than the base ABC (Table 5), similar to ABC_6’s near-identical rating to the core ABC in the previous experiment (Table 1). This may be due to LTMA’s precision, set to nine decimal places in the search space, where small changes could lead to larger fitness variations, resulting in more draws and fewer wins (draws were determined with a threshold of 1 × 10⁻⁶).
Further understanding was gained by analyzing the losses in MA vs. MA comparisons, which provide detailed insights into relative performance. The results are presented in Table 6.
The data from Table 6 reveal that jDE_LTMA_30 lost to jDE_30; however, jDE_LTMA_30 had fewer losses against jDE_30 (349) than jDE_30 had against jDE_LTMA_30 (404, highlighted in red in Table 6). The most significant rating differences for jDE_LTMA_30 stemmed from losses against jDE_LTMA_6 and jDE_LTMA_12 (highlighted in blue in Table 6). This suggests that LTMA utilized better solutions on average but with less precision, leading to more losses against higher-precision solutions. For deeper analysis, the results for wins and draws are provided in the Appendix A (Table A7 and Table A8).
To gain additional insights into performance on individual problems, we present the losses of each MA for specific problems in Table 7 and Table 8.
The results from Table 7 show that jDE_LTMA_6 would be the clear winner if problem F04 were excluded, where it performed poorly with 524 losses. Notably, RS achieved 75 wins on F02, corresponding to 675 losses (Table 7 and Table A9), and ABC and its variations remained unbeaten on F05 (Figure A3).
Analysis of the top five MAs showed consistent performance across problems F16 to F29 (Table 8). In contrast, the bottom-performing MAs like ABC, MRFO_LTMA_6, and ABC_LTMA_6 exhibited relatively large deviations despite fewer losses relative to their overall rank (Table 8). Additional Tables for wins and draws on individual problems are provided in the Appendix A (Table A9, Table A10, Table A11 and Table A12).
For a comparison involving more recent MAs, see Appendix A.4.

4.5. Experiment: Real-World Optimization Problem

The selected real-world problem addresses the optimization of load flow analysis (LFA) in unbalanced power distribution networks with incomplete data. The example shown in Figure 4 illustrates a typical residential power distribution network with multiple consumers, each connected to three-phase supply lines. The network is characterized by its unbalanced nature, where the power consumption of each consumer may vary across the three phases. Blue nodes represent consumers with partial measurements, green nodes represent consumers without measurements, and the red node is the transformer with complete measurements [74].
Due to limitations in the measurement infrastructure, comprehensive per-phase power consumption data for all consumers is often unavailable, resulting in sparse datasets. The optimization problem focuses on estimating the unmeasured per-phase active and reactive power consumption for consumers, given partial measurements of per-phase voltages and power.
The objective of the proposed optimization algorithm is to minimize the discrepancy between the calculated and measured per-phase voltages at a subset of monitored nodes (n). Specifically, the algorithm estimates: (1) per-phase active (P_1, P_2, P_3) and reactive power (Q_1, Q_2, Q_3) for unmeasured consumers (n ∈ [1, m]), and (2) the distribution of measured (n ∈ [m+1, j]) three-phase active power across individual phases for consumers with partial measurements (P_max). The optimization does not constrain the solution to match measured aggregate power values at the network’s supply point, allowing for the estimation of network losses. The LFA approach used in the paper was Backward Forward Sweep (BFS) [75].
Network losses, which may account for up to 5% of power in larger systems, are assumed negligible in this problem to simplify the optimization model by excluding the loss parameters. The optimization was subject to the following constraints:
1.
Maximum three-phase active power consumption per consumer (P_max), reflecting realistic load limits (Equation (11)).
P_{n_{err}} = \sum_{n=1}^{j} \sum_{i=1}^{3} \max(P_{n,i} - P_{max_n}, 0)    (11)
2.
Maximum per-phase current, constrained by fuse (I_fuse) ratings to ensure safe operation (Equation (12)).
I_{err} = \sum_{n=1}^{j} \sum_{i=1}^{3} \max(I_{n,i} - I_{fuse_n}, 0)    (12)
3.
Sum of three-phase active (P) and reactive power (Q) consumption, reflecting the transformer-measured values (P_sum, Q_sum) (Equations (13) and (14)).
P_{err} = \sum_{i=1}^{3} \left| P_{sum,i} - \sum_{n=1}^{j} P_{n,i} \right|    (13)
Q_{err} = \sum_{i=1}^{3} \left| Q_{sum,i} - \sum_{n=1}^{j} Q_{n,i} \right|    (14)
4.
Inductive-only reactive power consumption, preventing unintended reactive power exchange between consumers.
The optimization goal is to minimize the difference (U_dif) between the calculated (U_calc) and measured (U_meas) per-phase voltages at the monitored nodes, skipping the first node (the transformer) (Equation (15)). The BFS algorithm computes the voltage at each node based on the estimated power consumption.
U_{dif} = \sum_{n=2}^{j} \sum_{i=1}^{3} \left| U_{meas_{n,i}} - U_{calc_{n,i}} \right|    (15)
Based on the optimization goal and constraints, the fitness function is defined by Equation (16).
F_{fitness} = U_{dif} + P_{n_{err}} + Q_{err} + P_{err} + I_{err}    (16)
These constraints ensure that the estimated power consumption remains physically plausible and adheres to the operational limits of the distribution network. The problem is formulated to handle the stochastic and nonlinear nature of load flow analysis, leveraging sparse data to achieve accurate voltage estimation.
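To make the construction of Equation (16) concrete, the sketch below sums the penalty terms of Equations (11)–(15); the array layout, 0-based indexing, and all names are our assumptions, and the per-phase currents and calculated voltages are assumed to be provided by the BFS load-flow computation.

```java
// Sketch of the LFA fitness from Equation (16): F = U_dif + P_n_err + Q_err + P_err + I_err.
// P, Q, I, Umeas, Ucalc are [node][phase] arrays; node 0 is the transformer.
class LfaFitnessSketch {
    static double fitness(double[][] P, double[][] Q, double[][] I,
                          double[][] Umeas, double[][] Ucalc,
                          double[] Pmax, double[] Ifuse, double[] Psum, double[] Qsum) {
        int j = P.length;
        double Pnerr = 0, Ierr = 0, Perr = 0, Qerr = 0, Udif = 0;
        for (int i = 0; i < 3; i++) {                       // three phases
            double pPhase = 0, qPhase = 0;
            for (int n = 0; n < j; n++) {
                Pnerr += Math.max(P[n][i] - Pmax[n], 0);    // Eq. (11): consumer power limit
                Ierr  += Math.max(I[n][i] - Ifuse[n], 0);   // Eq. (12): fuse current limit
                pPhase += P[n][i];
                qPhase += Q[n][i];
                if (n >= 1) {                               // Eq. (15): skip the transformer node
                    Udif += Math.abs(Umeas[n][i] - Ucalc[n][i]);
                }
            }
            Perr += Math.abs(Psum[i] - pPhase);             // Eq. (13): match measured phase P
            Qerr += Math.abs(Qsum[i] - qPhase);             // Eq. (14): match measured phase Q
        }
        return Udif + Pnerr + Qerr + Perr + Ierr;           // Eq. (16)
    }
}
```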
The experimental setup and statistical analysis follow the configuration detailed in Section 4.1. For the LFA problem, the maximum number of function evaluations was set to MaxFEs = 100,000. To demonstrate the concept and keep the number of parameters manageable, we limited ourselves to the “leftmost” feeder with five nodes beside the transformer (Figure 4): one transformer node where all the data are known, one measured node with known per-phase voltages U and total P, and three unmeasured nodes. For the unmeasured nodes, the optimization estimates per-phase P and Q. Additionally, for the measured node, the total three-phase active power is distributed across the three phases.
In total, the optimization involves 24 parameters: 9 parameters for the unmeasured nodes (3 nodes × 3 phases for P), 12 parameters for reactive power (4 nodes × 3 phases for Q), and 3 parameters for the per-phase distribution of active power at the measured node.
The performance of various metaheuristic algorithms on the LFA problem is summarized in Table 9.
The results demonstrate that the MRFO algorithm, enhanced with the LTMA and MsMA strategies, significantly outperformed the other algorithms, achieving the highest rating value of 1843, which is significantly better than the other 32 algorithm variations. The jDE algorithm, also enhanced with the LTMA and MsMA strategies, ranks second with a rating value of 1770.45, significantly outperforming 23 other algorithms. Furthermore, all the tested LTMA configurations combined with MsMA surpassed the performance of their respective baseline algorithms (marked with the red color in Table 5). Notably, MRFO and CRO, despite being among the lowest-performing algorithms on the CEC’24 benchmark (Table 5), achieved top or near-top performances with these strategies.
For a comparison involving more recent MAs, see Appendix A.4.

5. Discussion

The stagnation problem is as old as MAs themselves. It is well recognized that stagnation occurs in MAs frequently, often when the algorithm struggles to transition from the exploitation phase back to effective exploration. Researchers have tackled this issue through various strategies, among which self-adaptive algorithms have demonstrated notable success. Self-adaptation can be applied to different aspects of MAs, ranging from adjusting the internal parameters dynamically (e.g., mutation rates or learning factors), to introducing individual-level control, or modifying the population size based on stagnation detection [48,76]. Another common practice is to use stagnation as a stopping criterion. This approach serves a dual purpose: it prevents the waste of computational resources when progress stalls, while also avoiding premature termination when other criteria (e.g., MaxFEs or iteration limits) might otherwise halt optimization too early [29].
In this study, we proposed a meta-level strategy, MsMA, that partitions the search process based on stagnation detection. A key advantage of meta-level strategies like MsMA is their generality: they can be applied across different core MAs without requiring modifications to the underlying algorithm. This facilitates fairer comparisons and reusable optimization logic across MA variants, eliminating the need to define and publish a new variant for every core MA. For instance, one might claim to have developed five novel MAs when, in reality, only a self-adaptive stagnation mechanism has been applied to the existing core MAs. Such an approach offers little benefit to the community, as it obscures the new MA’s behavior and its performance relative to the established methods. Additionally, because meta-strategies operate by wrapping core algorithms, they can be nested or chained with other meta-level mechanisms. This was demonstrated through the integration of LTMA with MsMA, illustrating how different meta-strategies can synergize to enhance performance and convergence.
However, not all strategies are suitable for meta-level abstraction. Generally, strategies with broader generalization capabilities are better suited for meta-level applications. A balance must be struck between what can be externalized at the meta-level and what must remain embedded within the core algorithm. For example, global parameters such as MaxFEs (utilized by MsMA) and solution evaluation control (leveraged by LTMA) are well suited for meta-level adaptation. In contrast, tightly coupled internal components, such as parent selection or recombination operators for generating new solution candidates, are challenging to expose or modify externally due to their interdependencies at each optimization step.
During the experimentation and development, we encountered several practical challenges. Notably, the proposed method employs a reset mechanism based on a stopping condition termed MinSFEs, which partitions MaxFEs into unequal partitions depending on the algorithm’s performance. Consequently, the final partition, MaxFEs_last, can sometimes be extremely small, occasionally even smaller than the population size. This can cause errors in algorithms that require a minimum number of evaluations to function correctly. To address this, we propose that the algorithms implement a getMinimalEvaluation method, which returns the minimum number of evaluations needed for safe operation. If the remaining evaluation budget falls below this threshold, the algorithm can skip execution, avoiding runtime errors. To utilize otherwise unused evaluations effectively, we applied a fallback random search strategy, ensuring that even small remaining budgets contribute to the optimization process.
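A small guard in this spirit might look as follows; getMinimalEvaluation is the method proposed in the text, while the remaining types and the random-search fallback are illustrative placeholders.

```java
import java.util.function.IntFunction;

// Guard for the final, possibly tiny partition: if the remaining budget is smaller than the
// MA's minimal safe budget, spend the leftover evaluations on a fallback random search.
class FinalPartitionGuard {
    interface GuardedMA {
        int getMinimalEvaluation();                  // smallest budget the MA can run with safely
        double[] execute(int budgetFEs, int minSFEs);
    }

    static double[] runPartition(GuardedMA ma, int remainingFEs, int minSFEs,
                                 IntFunction<double[]> randomSearch) {
        if (remainingFEs < ma.getMinimalEvaluation()) {
            return randomSearch.apply(remainingFEs); // fallback consumes the otherwise unused budget
        }
        return ma.execute(remainingFEs, minSFEs);
    }
}
```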

6. Conclusions

This study tackled the persistent challenge of stagnation in MAs by introducing a general meta-level strategy, MsMA, designed to enhance optimization performance. The proposed strategy improved performance across all tested MAs on the 29 CEC’24 benchmark problems, boosting the rankings of jDE, MRFO, and CRO significantly. Notably, applying MsMA altered the relative rankings of some algorithms; for instance, MRFO surpassed ABC in overall performance (Table 1).
Beyond its general applicability, the study demonstrated the feasibility of nesting meta-strategies through the integration of LTMA with MsMA, highlighting their synergistic potential. Leveraging LTMA further enhanced the performance of most MsMA variations, with CRO showing the greatest improvement; specifically, CRO_LTMA_12 overtook MRFO_LTMA_6 in the rankings (Table 5).
For the LFA optimization problem, the experimental results demonstrate that nearly all algorithms enhanced with MsMA and LTMA strategies outperformed their baseline counterparts significantly, with the exception of the ABC algorithm. Notably, the MRFO algorithm, despite being among the lowest-performing algorithms on the CEC’24 benchmark (Table 5), achieved the highest performance on the LFA problem, with a rating value of 1843 (Table 9). This suggests that MRFO’s search mechanisms or parameter configurations are particularly well-suited to the characteristics of the LFA problem, highlighting its potential for real-world power distribution optimization.
Future work should focus on developing more advanced meta-strategies tailored to the specific characteristics of both the core algorithm and the problem domain. Additionally, the MsMA framework could be extended to other optimization contexts, such as multi-objective, dynamic, and constrained optimization.
This study encourages further research into meta-level strategies and their integration, opening new avenues for a better understanding of the general strategies and characteristics of modern MAs.

Author Contributions

Conceptualization, M.Č., M.M. (Marjan Mernik) and M.R.; investigation, M.Č., M.M. (Marjan Mernik) and M.R.; methodology, M.Č.; software, M.Č., M.B., M.P., M.M. (Matej Moravec) and M.R.; validation, M.Č., M.M. (Marjan Mernik), M.P., M.B., M.M. (Matej Moravec) and M.R.; writing, original draft, M.Č., M.M. (Marjan Mernik), M.B. and M.R.; writing, review and editing, M.Č., M.M. (Marjan Mernik), M.P., M.B., M.M. (Matej Moravec) and M.R. All authors have read and agreed to the published version of this manuscript.

Funding

This research was funded by the Slovenian Research Agency Grant Number P2-0041 (B), P2-0114, and P2-0115.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: https://github.com/UM-LPM/EARS, accessed on 22 May 2025.

Acknowledgments

AI-assisted tools have been used to improve the English language.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Experiment Results

Appendix A.1. Selected CEC’24 Benchmark Problems

This subsection presents the CEC’24 benchmark problems selected for the experiments. The accompanying Figures depict these benchmark functions in their original form, without applying rotation (rotation matrix M) or shifting (shift vector o). For enhanced visualization, the functions are illustrated in two dimensions, although they are defined in n dimensions, where n represents the problem’s dimensionality. For further details on the CEC’24 benchmark suite, refer to the competition documentation [71,72].
F_{03}(x) = \sum_{i=1}^{n-1} \left[ 100\,(z_{i+1} - z_i^2)^2 + (z_i - 1)^2 \right], where z = M(x - o), subject to x ∈ [-100, 100]^n
F_{04}(x) = 10D + \sum_{i=1}^{D} \left[ z_i^2 - 10\cos(2\pi z_i) \right], where z = M(x - o), subject to x ∈ [-100, 100]^D
F_{05}(x) = \sum_{i=1}^{n-1} \left( 0.5 + \frac{\sin^2\!\left(\sqrt{z_i^2 + z_{i+1}^2}\right) - 0.5}{\left(1 + 0.001\,(z_i^2 + z_{i+1}^2)\right)^2} \right), where z = M(x - o), subject to x ∈ [-100, 100]^n
Figure A1. F03: Rosenbrock’s Function.
Figure A2. F04: Rastrigin’s Function (not rotated or shifted).
Figure A3. F05: Schaffer’s Function.

Appendix A.2. The MsMA Experiment

The following Tables present the results of the MsMA experiment on the CEC’24 benchmark. They report the number of wins and draws for each algorithm, based on pairwise comparisons and their performance on individual benchmark problems from F01 to F29. Additional Tables showing the number of losses are provided in Appendix A.
Table A1. MsMA Draw Outcomes in MA vs. MA Comparisons on the CEC’24 Benchmark.
jDE_30jDE_6jDE_12jDEPSO_30PSO_6PSO_12PSOMRFO_6MRFO_12ABC_30ABC_12ABC_6ABCMRFO_30CRO_6CRO_12MRFOCRO_30CRORS
jDE_30069818881010032292929292001000
jDE_6698019283010022303030303001000
jDE_121881920106000022292929293002000
jDE81831060000004151515152002000
PSO_300000029292924200000190011000
PSO_61100290303226200000200011000
PSO_120000293003025210000180011000
PSO0000293230024200000180011000
MRFO_63220242625240240000210016000
MRFO_122224202021202400000170010000
ABC_302930291500000003030300000000
ABC_122930291500000030030300000000
ABC_62930291500000030300300000000
ABC2930291500000030303000000000
MRFO_30233219201818211700000007000
CRO_6000000000000000000000
CRO_12000000000000000000000
MRFO112211111111161000007000000
CRO_30000000000000000000000
CRO000000000000000000000
RS000000000000000000000
Table A2. MsMA Win Outcomes in MA vs. MA Comparisons on the CEC’24 Benchmark.
jDE_30jDE_6jDE_12jDEPSO_30PSO_6PSO_12PSOMRFO_6MRFO_12ABC_30ABC_12ABC_6ABCMRFO_30CRO_6CRO_12MRFOCRO_30CRORS
jDE_300138639760867866867867863866825828825827863867868865868869870
jDE_6340613747847848846848857856823823821824859853858855860858870
jDE_1243650729861862865867859864823824822826860864868859866868870
jDE2940350863864865864864865824825824825862869868865869869870
PSO_30323970426421420550569533532533532583611633642668672870
PSO_6321864150403419554563525511527534602596626635670694868
PSO_12324554204370443549561524514513533597583618629677674870
PSO322364214193970559553522517513524581601616622655674870
MRFO_6411962962902962870436481477486494477477502540544589869
MRFO_12212412812872882974100477466464471454452481513545550868
ABC_30161718313373453463483893930446437426438434480484517533855
ABC_12131717303383593563533934043940423400434436476493526550857
ABC_6161919313373433573573844064034170412434456471476513545856
ABC141615303383363373463763994144404280440437468490511549848
MRFO_3058762682482552713723994324364364300440460506528549869
CRO_6317612592742872693934184364344144334300457512522511868
CRO_12212222372442522543683893903943994024104130475478516863
MRFO414932172242302373143473863773943803573583950454463869
CRO_30210412022001932153263253533443573593423483924160451862
CRO112211981761961962813203373203253213213593544074190856
RS00000200121513142212718140
Table A3. The Win Outcomes of MAs on Individual Problems in the CEC’24 Benchmark (F01–F15).
F01F02F03F04F05F06F07F08F09F10F11F12F13F14F15
jDE_30510510460567405569569569569569569569569569569
jDE_6510510437352407566566566566566566566566566566
jDE_12510510424531406541541541541541541541541541541
jDE510510448561390510510510510510510510510510510
PSO_30294267256406237295378404248408372359370388318
PSO_6276146228373165247363404263412374391396426356
PSO_12273179228410207253397428272406358392387362291
PSO284325211385207274361403260408358381384386298
MRFO_633022429023683211289262119311362275355266261
MRFO_1230431125426880197295220135292388277349274275
ABC_30366754641324072951198237780157136146165287
ABC_12363654391824072931096736988136102151136302
ABC_635977468137407287115963749116278160104332
ABC3813943312540730013310336360165167130172313
MRFO_302204623351967224126726669261398169356188226
CRO_6112267181366263323304315345245117325141323202
CRO_12112346150294298308261288254247131316131299179
MRFO14146430820068153230233125270387165379160168
CRO_30124384112293332206244269261258106296116243129
CRO1413919128632819721524024624310925198188143
RS05800000000003100
Table A4. The Win Outcomes of MAs on Individual Problems in the CEC’24 Benchmark (F16–F29).
F16F17F18F19F20F21F22F23F24F25F26F27F28F29
jDE_30569569569569569569569569569569569569569569
jDE_6566566566566566566566566566566566566566566
jDE_12541541541541541541541541541541541541541541
jDE510510510510510510510510510510510510510510
PSO_30321398331231269315300300287308271240276390
PSO_6308390406258297318314299272282312222303379
PSO_12331377374276301318291289272307292270257381
PSO290396362245275318327280260284236247264369
MRFO_6210244329289273264215229331224230343163353
MRFO_12217226304286206230165259370215175284184283
ABC_30322220100313339132358260347389390338375119
ABC_1233018612031436313738433933239338934035578
ABC_63172127133732214731740432940540132136161
ABC31120111932932213829827535837041036635985
MRFO_30187271322217220223152197292231156270173288
CRO_6237197298284216232237248128178195108286271
CRO_12254190304222172201205233150164176133203281
MRFO171276206238180193106137157100104285157271
CRO_3014013419913817419821317785103187167202212
CRO13416023596151181198154110119156120162259
RS02070400000000
Table A5. The Draw Outcomes of MAs on Individual Problems in the CEC’24 Benchmark (F01–F15).
F01 | F02 | F03 | F04 | F05 | F06 | F07 | F08 | F09 | F10 | F11 | F12 | F13 | F14 | F15
jDE_30909027018929292929292929292929
jDE_6909033019329292929292929292929
jDE_1290902701889999999999
jDE90902801061111111111
PSO_30000000000000000
PSO_6003000000000000
PSO_12002000000000000
PSO000000000000000
MRFO_60012000000000000
MRFO_120015000000000000
ABC_3000001930000000000
ABC_1200001930000000000
ABC_600001930000000000
ABC00001930000000000
MRFO_300011000000000000
CRO_6000000000000000
CRO_12000000000000000
MRFO008000000000000
CRO_30000000000000000
CRO000000000000000
RS000000000000000
Table A6. The Draw Outcomes of MAs on Individual Problems in the CEC’24 Benchmark (F16–F29).
F16 | F17 | F18 | F19 | F20 | F21 | F22 | F23 | F24 | F25 | F26 | F27 | F28 | F29
jDE_302929292929292929292929292929
jDE_62929292929292929292929292929
jDE_1299999999999999
jDE11111111111111
PSO_300000015900020000
PSO_60000016200050000
PSO_120000016200000000
PSO0000016200020000
MRFO_600000136000201700
MRFO_1200000114000001300
ABC_3000000000000000
ABC_1200000000000000
ABC_600000000000000
ABC00000000000000
MRFO_300000010500050900
CRO_600000000000000
CRO_1200000000000000
MRFO0000062000001300
CRO_3000000000000000
CRO00000000000000
RS00000000000000

Appendix A.3. The LTMA(MsMA) Experiment

The following tables present the results of the LTMA(MsMA) experiment on the CEC’24 benchmark. They report the number of wins and draws for each algorithm based on pairwise comparisons, as well as their performance on the individual benchmark problems F01 to F29. Additional tables showing the number of losses are provided in Appendix A.
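To make the layout of these outcome tables concrete, the sketch below shows one plausible way such pairwise win/draw/loss counts can be tallied from best-of-run results; it is illustrative only, and the function name, the draw tolerance eps, and the data layout are assumptions rather than the EARS implementation.
```python
# Illustrative sketch only -- not the EARS implementation. It shows one
# plausible way pairwise win/draw/loss counts, as reported in the tables
# below, can be tallied from best-of-run fitness values (minimization).
from itertools import combinations

def tally_pairwise(results, eps=1e-8):
    """results: dict mapping an MA name to a list of best fitness values,
    aligned so that index i refers to the same problem and run for every MA."""
    algs = list(results)
    wins = {a: 0 for a in algs}
    draws = {a: 0 for a in algs}
    losses = {a: 0 for a in algs}
    for a, b in combinations(algs, 2):
        for fa, fb in zip(results[a], results[b]):
            if abs(fa - fb) <= eps:      # tie within tolerance counts as a draw for both
                draws[a] += 1
                draws[b] += 1
            elif fa < fb:                # lower fitness wins (minimization)
                wins[a] += 1
                losses[b] += 1
            else:
                wins[b] += 1
                losses[a] += 1
    return wins, draws, losses

# Toy usage with two MAs and three aligned results each:
w, d, l = tally_pairwise({"A": [1.0, 2.0, 3.0], "B": [1.0, 2.5, 2.0]})
```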
Table A7. LTMA(MsMA) Win Outcomes in MA vs. MA Comparisons on the CEC’24 Benchmark.
jDE_30 | jDE_LTMA_30 | jDE_LTMA_6 | jDE_LTMA_12 | jDE | PSO_LTMA_6 | PSO_LTMA_12 | PSO_LTMA_30 | PSO_30 | PSO | CRO_LTMA_12 | MRFO_LTMA_6 | CRO_LTMA_6 | MRFO_6 | ABC_LTMA_12 | ABC_LTMA_30 | MRFO_LTMA_30 | ABC_30 | ABC | MRFO_LTMA_12 | ABC_LTMA_6 | CRO_LTMA_30 | CRO_6 | MRFO | CRO | RS
jDE_30034955402760867867865867867866864867863825828863825827861827868867865869870
jDE_LTMA_304040434422750863866860863863863862862863822825863822823860825865864861867870
jDE_LTMA_6363480372746838840841841842840841840846816818846814823844815841841850849870
jDE_LTMA_1239354740747851853853851854851855850858822825859822823859823858857859860870
jDE293651390865868864863864866864867864825825862824825862825869869865869870
PSO_LTMA_637321950442462455485600598607598583576624575574603581635650653721869
PSO_LTMA_1234301713970437447472575563605585550562592559555597565620636653710869
PSO_LTMA_3059291763784030410420574558578554529528560539538574544595611630679870
PSO_3037291973863944300420567551574550518540563533532556545590611642672870
PSO37281663553684184210572560551559519520557522524578540590601622674870
CRO_LTMA_1247301942702952963032980423501460468484488491488466476473481538584870
MRFO_LTMA_657271352462802862932834470438422506501447489514433503471493536570870
CRO_LTMA_638302032632652922963193694320445496502479509505465500454463534580868
MRFO_64522962482602912962874104154250477486452481494436480471477540589869
ABC_LTMA_1216192418302873203413523514023643743930408394418399414454461454478560853
ABC_LTMA_3013162215302943083423303503863693683844320407430427412467457435487544853
MRFO_LTMA_304522832272592902882943823973913934764630476469419482448456504550868
ABC_3016192618312953113313373483793813613894224103940426413460457434484533855
ABC14181717302963153323383463823563653764414134014140413465452437490549848
MRFO_LTMA_125522852432492732922694044114054074564584344574570460440454505545870
ABC_LTMA_614162517302893053263253303943673703903863733883803754100449434468528859
CRO_LTMA_3025291212352502752802803973994163994094134224134184304210452501547863
CRO_636291312202342592592693893774073934164354144364334164364180512511868
MRFO45191032062062282172373323173363143923833553863803534023693580463869
CRO13211011491601911981962863002902813103263203373213253423233594070856
RS00000110000021171721522011721140
Table A8. LTMA(MsMA) Draw Outcomes in MA vs. MA Comparisons on the CEC’24 Benchmark.
Table A8. LTMA(MsMA) Draw Outcomes in MA vs. MA Comparisons on the CEC’24 Benchmark.
jDE_30 | jDE_LTMA_30 | jDE_LTMA_6 | jDE_LTMA_12 | jDE | PSO_LTMA_6 | PSO_LTMA_12 | PSO_LTMA_30 | PSO_30 | PSO | CRO_LTMA_12 | MRFO_LTMA_6 | CRO_LTMA_6 | MRFO_6 | ABC_LTMA_12 | ABC_LTMA_30 | MRFO_LTMA_30 | ABC_30 | ABC | MRFO_LTMA_12 | ABC_LTMA_6 | CRO_LTMA_30 | CRO_6 | MRFO | CRO | RS
jDE_3001177794298100000010329293292942900100
jDE_LTMA_30117088948400100010229292292952900400
jDE_LTMA_67798804247300000020230302303043000100
jDE_LTMA_124299442408400000020330303303033000100
jDE81847384001000010015155151531500200
PSO_LTMA_600000031302930026024001900240001100
PSO_LTMA_1200001310302930027025001900240001100
PSO_LTMA_3001000303003032026025002000230001200
PSO_3000000292930029026024001900220001100
PSO00000303032290027024001900230001100
CRO_LTMA_1200000000000000000000000000
MRFO_LTMA_611221262726262700033002600260001700
CRO_LTMA_600000000000000000000000000
MRFO_632230242525242403300002500270001600
ABC_LTMA_1229293030150000000000300303003000000
ABC_LTMA_3029293030150000000003000303003000000
MRFO_LTMA_3032235191920191902602500000170001100
ABC_3029293030150000000003030003003000000
ABC29293030150000000003030030003000000
MRFO_LTMA_1245433242423222302602700170000001200
ABC_LTMA_629293030150000000003030030300000000
CRO_LTMA_3000000000000000000000000000
CRO_600000000000000000000000000
MRFO14112111112111101701600110012000000
CRO00000000000000000000000000
RS00000000000000000000000000
Table A9. LTMA(MsMA) Win Outcomes of MAs on Individual Problems in the CEC’24 Benchmark (F01–F15).
F01 | F02 | F03 | F04 | F05 | F06 | F07 | F08 | F09 | F10 | F11 | F12 | F13 | F14 | F15
jDE_30630639574704495688688688688688688688688688688
jDE_LTMA_30630639537648496708708708708708708708708708708
jDE_LTMA_6630630555226497690690690690690690690690690690
jDE_LTMA_12630639535502497676676676676676676676676676676
jDE630639553707481630630630630630630630630630630
PSO_LTMA_6377194278534216292473476309509408529479526485
PSO_LTMA_12381225300474278312433527303514433474466508442
PSO_LTMA_30337342313492301298404509314504474458476466386
PSO_30367403306483328330433501282517469440450475339
PSO355485273461293301432504298517451452472464328
CRO_LTMA_1260290185538300477432383442311189446237387348
MRFO_LTMA_635834036734080313311284114374466299434350353
CRO_LTMA_630200126559228415494411512264203342292346431
MRFO_639933935527298236345303137390456308427310295
ABC_LTMA_124238756117549733411779417117191103195158324
ABC_LTMA_3046464527182497332144106461111172151144189349
MRFO_LTMA_3034358233926010925432023197298489315446224224
ABC_304439859115349732712010442993167145164172313
ABC4535755913549734513912843278183184149184346
MRFO_LTMA_1232345538128093274316295148337503297441317305
ABC_LTMA_6434985651674973211369341911115799172110298
CRO_LTMA_30184438146444390391400430381312139393109346193
CRO_6158407206438363377342377401312139378152374200
MRFO20558738124185159265270131329484187471186185
CRO20653410033443821224328927330212730095208156
RS07500000000002900
Table A10. LTMA(MsMA) Win Outcomes of MAs on Individual Problems in the CEC’24 Benchmark (F16–F29).
F16 | F17 | F18 | F19 | F20 | F21 | F22 | F23 | F24 | F25 | F26 | F27 | F28 | F29
jDE_30688688688688688688688688688688688688688688
jDE_LTMA_30708708708708708708708708708708708708708708
jDE_LTMA_6690690690690690690690690690690690690690690
jDE_LTMA_12676676676676676676676676676676676676676676
jDE630630630630630630630630630630630630630630
PSO_LTMA_6465470555418365378368376430445429322352499
PSO_LTMA_12411458499387364378363352319403390350341519
PSO_LTMA_30319489484323351378296330339324349293337452
PSO_30389492395252306374335330354342312315313477
PSO333483447259319378375303322318270321304463
CRO_LTMA_12368299367328320253371356163239267137361363
MRFO_LTMA_6294336326330279346251281368302251362215361
CRO_LTMA_64053073704273621724253757725935882376252
MRFO_6233282390319309312224234416257260423182419
ABC_LTMA_123782379634239615545347238947544646042186
ABC_LTMA_3036121314937228513436541742149449243744699
MRFO_LTMA_30216343350316269276218229461273190382188332
ABC_30365251111348419140406322425463462435428139
ABC351209137366397149338334445430514461422103
MRFO_LTMA_12179305330243278314188221353227173353238367
ABC_LTMA_63852297636543316844051535545740024143176
CRO_LTMA_30263166375213252299309279171182252201286325
CRO_6260227338324239305243262152184213138318327
MRFO18932823727120722211414819697104369166330
CRO13617526893150222218164144112168160175311
RS01040400000000
Table A11. LTMA(MsMA) Draw Outcomes of MAs on Individual Problems in the CEC’24 Benchmark (F01–F15).
F01 | F02 | F03 | F04 | F05 | F06 | F07 | F08 | F09 | F10 | F11 | F12 | F13 | F14 | F15
jDE_3012011129024744444444444444444444
jDE_LTMA_301201114302451111111111
jDE_LTMA_61208436025343434343434343434343
jDE_LTMA_1212011137025328282828282828282828
jDE1201114301350000000000
PSO_LTMA_6000000000000000
PSO_LTMA_12002000000000000
PSO_LTMA_30002000000000000
PSO_30000000000000000
PSO000100000000000
CRO_LTMA_12000000000000000
MRFO_LTMA_60012100100000000
CRO_LTMA_6000000000000000
MRFO_60015000000000000
ABC_LTMA_1200002530000000000
ABC_LTMA_3000002530000000000
MRFO_LTMA_300019000000000000
ABC_3000002530000000000
ABC00002530000000000
MRFO_LTMA_120023000100000000
ABC_LTMA_600002530000000000
CRO_LTMA_30000000000000000
CRO_6000000000000000
MRFO0013000000000000
CRO000000000000000
RS000000000000000
Table A12. LTMA(MsMA) Draw Outcomes of MAs on Individual Problems in the CEC’24 Benchmark (F16–F29).
F16 | F17 | F18 | F19 | F20 | F21 | F22 | F23 | F24 | F25 | F26 | F27 | F28 | F29
jDE_304444444444444444444444444444
jDE_LTMA_3011111111111111
jDE_LTMA_64343434343434343434343434343
jDE_LTMA_122828282828282828282828282828
jDE00000000000000
PSO_LTMA_60000022200020000
PSO_LTMA_120000022200030000
PSO_LTMA_300000022200050000
PSO_300000021600030000
PSO0000022200020000
CRO_LTMA_1200000000000000
MRFO_LTMA_600000196000502600
CRO_LTMA_600000000000000
MRFO_600000184000403000
ABC_LTMA_1200000000000000
ABC_LTMA_3000000000000000
MRFO_LTMA_3000000143000602200
ABC_3000000000000000
ABC00000000000000
MRFO_LTMA_1200000174000201700
ABC_LTMA_600000000000000
CRO_LTMA_3000000000000000
CRO_600000000000000
MRFO0000085000202100
CRO00000000000000
RS00000000000000

Appendix A.4. Evaluation of Recent MAs

The experiments in this section evaluate the meta-approaches MsMA and LTMA(MsMA) applied to recent metaheuristic algorithms (MAs). The selected algorithms retain their default parameter settings, without tuning, to ensure a consistent baseline. The primary focus is not to identify the best-performing MA, but to analyze stagnation effects and to assess the proposed method, which operates without directly altering the core optimization algorithm.
For recent MAs, we selected the following algorithms from the EARS framework [62]: Artificial Hummingbird Algorithm (AHA) [77], Balanced Teaching-Learning-Based Optimization (BTLBO) [78], Electron Radar Search Algorithm (ERSA) [79], Sand Cat Swarm Optimization (SCSO) [80], Gazelle Optimization Algorithm (GAOA) [81], and Linear Population Size Reduction Success-History Based Adaptive Differential Evolution (LSHADE) [48].
All selected algorithms, except LSHADE, were introduced in peer-reviewed publications after 2020. LSHADE is included due to its demonstrated effectiveness and the strong performance of its variants in international optimization competitions, such as those organized by the IEEE Congress on Evolutionary Computation (CEC) [44,82,83,84] (see Equation (A4)).
$MAs_2 = \{AHA, BTLBO, ERSA, GAOA, LSHADE, SCSO\}$ (A4)
All experimental configurations are consistent with those detailed in Section 4.
Applying the MsMA or LTMA(MsMA) strategy resulted in performance improvements across all MA variants (Table A13). For all MAs except SCSO, applying MsMA yielded at least one statistically significant improvement. The top-performing algorithm was LSHADE_LTMA_12, with a rating of 1811.66, followed closely by LSHADE_12 with a rating of 1807.21. These algorithms outperformed other MAs by at least 20 rating points, demonstrating significantly superior performance.
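For readers unfamiliar with the leaderboard columns, the sketch below illustrates one plausible reading of the S+ and S− counts: an algorithm is counted as significantly better (worse) than an opponent when the two rating intervals (Rating ± 2RD) do not overlap. The function and the toy ratings are illustrative assumptions and do not reproduce the exact EARS/Glicko-2 procedure used in the experiments.
```python
# A minimal sketch, assuming S+ / S- are obtained by counting opponents whose
# rating interval (Rating +/- 2*RD) does not overlap with the given MA's interval.
def significance_counts(leaderboard):
    """leaderboard: dict mapping an MA name to a (rating, rd) pair."""
    s_plus, s_minus = {}, {}
    for a, (ra, rda) in leaderboard.items():
        s_plus[a] = sum(1 for b, (rb, rdb) in leaderboard.items()
                        if b != a and ra - 2 * rda > rb + 2 * rdb)
        s_minus[a] = sum(1 for b, (rb, rdb) in leaderboard.items()
                         if b != a and ra + 2 * rda < rb - 2 * rdb)
    return s_plus, s_minus

# Toy usage with two entries whose intervals clearly do not overlap:
print(significance_counts({"LSHADE_LTMA_12": (1811.66, 20.0), "SCSO": (1158.63, 20.0)}))
```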
Table A13. The LTMA(MsMA) Leaderboard on the CEC’24 Benchmark Using Recent MAs.
Rank | MA | Rating | −2RD | +2RD | S+ (95% CI) | S− (95% CI)
1. | LSHADE_LTMA_12 | 1811.66 | 1771.66 | 1851.66 | +21 |
2. | LSHADE_12 | 1807.21 | 1767.21 | 1847.21 | +21 |
3. | LSHADE_30 | 1800.10 | 1760.10 | 1840.10 | +20 |
4. | LSHADE_LTMA_30 | 1797.73 | 1757.73 | 1837.73 | +20 |
5. | LSHADE | 1771.68 | 1731.68 | 1811.68 | +20 |
6. | BTLBO_12 | 1757.11 | 1717.11 | 1797.11 | +20 |
7. | BTLBO_30 | 1756.24 | 1716.24 | 1796.24 | +20 |
8. | BTLBO_LTMA_30 | 1748.19 | 1708.19 | 1788.19 | +20 |
9. | BTLBO_LTMA_12 | 1732.21 | 1692.21 | 1772.21 | +20 |
10. | BTLBO | 1721.50 | 1681.50 | 1761.50 | +20 | −2
11. | GAOA_LTMA_30 | 1604.09 | 1564.09 | 1644.09 | +15 | −10
12. | GAOA_12 | 1602.66 | 1562.66 | 1642.66 | +15 | −10
13. | GAOA_30 | 1598.46 | 1558.46 | 1638.46 | +15 | −10
14. | GAOA_LTMA_12 | 1566.90 | 1526.90 | 1606.90 | +14 | −10
15. | GAOA | 1563.20 | 1523.20 | 1603.20 | +13 | −10
16. | AHA_LTMA_12 | 1491.68 | 1451.68 | 1531.68 | +10 | −13
17. | AHA_LTMA_30 | 1485.55 | 1445.55 | 1525.55 | +10 | −14
18. | AHA_12 | 1477.35 | 1437.35 | 1517.35 | +10 | −15
19. | AHA_30 | 1468.12 | 1428.12 | 1508.12 | +10 | −15
20. | AHA | 1456.86 | 1416.86 | 1496.86 | +10 | −15
21. | ERSA_30 | 1305.92 | 1265.92 | 1345.92 | +8 | −20
22. | ERSA_12 | 1287.30 | 1247.30 | 1327.30 | +8 | −20
23. | ERSA_LTMA_30 | 1186.04 | 1146.04 | 1226.04 | | −22
24. | ERSA_LTMA_12 | 1186.04 | 1146.04 | 1226.04 | | −22
25. | ERSA | 1184.91 | 1144.91 | 1224.91 | | −22
26. | SCSO_LTMA_12 | 1171.82 | 1131.82 | 1211.82 | | −22
27. | SCSO_30 | 1169.79 | 1129.79 | 1209.79 | | −22
28. | SCSO_12 | 1169.50 | 1129.50 | 1209.50 | | −22
29. | SCSO_LTMA_30 | 1161.55 | 1121.55 | 1201.55 | | −22
30. | SCSO | 1158.63 | 1118.63 | 1198.63 | | −22
In the experiment addressing the LFA problem, the MsMA and LTMA(MsMA) strategies yielded significant performance improvements across all MA variants, except for SCSO, where SCSO_LTMA_30 exhibited a slight performance degradation of 4.77 rating points (Table A14). Potential reasons for this reduced performance include precision loss when applying LTMA, the algorithm’s inherent stagnation-mitigation strategy, or the stochastic nature of optimization algorithms, particularly in cases of minor performance differences.
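To illustrate where such a precision loss could originate, the sketch below shows a generic LTMA-style evaluation cache that returns stored fitness values for already-seen solutions. The function names and the decimals parameter are illustrative assumptions, not part of the EARS API; the rounding used to build the lookup key is only one possible design.
```python
# A minimal sketch of an LTMA-style evaluation cache, assuming solutions are
# keyed by their rounded design vectors. A coarser `decimals` value yields more
# cache hits but can merge nearly identical solutions, which is one possible
# source of the precision loss mentioned above.
def make_cached_evaluator(objective, decimals=12):
    cache = {}
    def evaluate(x):
        key = tuple(round(v, decimals) for v in x)  # rounded key for lookup
        if key not in cache:
            cache[key] = objective(x)               # real evaluation only on a cache miss
        return cache[key]
    return evaluate, cache

# Toy usage: the second call with an identical vector is answered from the cache.
sphere = lambda x: sum(v * v for v in x)
evaluate, cache = make_cached_evaluator(sphere, decimals=6)
evaluate([0.1234567, 2.0])
evaluate([0.1234567, 2.0])
```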
Table A14. The LTMA(MsMA) Leaderboard on the LFA Problem Using Recent MAs.
Rank | MA | Rating | −2RD | +2RD | S+ (95% CI) | S− (95% CI)
1. | LSHADE_LTMA_12 | 1883.42 | 1843.42 | 1923.42 | +22 |
2. | LSHADE_LTMA_30 | 1860.79 | 1820.79 | 1900.79 | +22 |
3. | GAOA_LTMA_12 | 1859.60 | 1819.60 | 1899.60 | +22 |
4. | GAOA_LTMA_30 | 1850.08 | 1810.08 | 1890.08 | +22 |
5. | GAOA_12 | 1841.74 | 1801.74 | 1881.74 | +21 |
6. | LSHADE_12 | 1826.26 | 1786.26 | 1866.26 | +20 |
7. | GAOA_30 | 1813.16 | 1773.16 | 1853.16 | +20 |
8. | GAOA | 1808.40 | 1768.40 | 1848.40 | +20 |
9. | LSHADE_30 | 1766.73 | 1726.73 | 1806.73 | +20 | −4
10. | LSHADE | 1759.58 | 1719.58 | 1799.58 | +20 | −5
11. | BTLBO_12 | 1598.83 | 1558.83 | 1638.83 | +14 | −10
12. | BTLBO_LTMA_12 | 1591.69 | 1551.69 | 1631.69 | +12 | −10
13. | BTLBO_30 | 1576.21 | 1536.21 | 1616.21 | +12 | −10
14. | AHA_LTMA_12 | 1552.39 | 1512.39 | 1592.39 | +11 | −10
15. | BTLBO_LTMA_30 | 1525.01 | 1485.01 | 1565.01 | +11 | −10
16. | BTLBO | 1523.81 | 1483.81 | 1563.81 | +11 | −10
17. | AHA_LTMA_30 | 1517.86 | 1477.86 | 1557.86 | +11 | −11
18. | AHA_12 | 1514.29 | 1474.29 | 1554.29 | +11 | −11
19. | AHA_30 | 1483.33 | 1443.33 | 1523.33 | +10 | −13
20. | AHA | 1419.03 | 1379.03 | 1459.03 | +10 | −18
21. | SCSO_LTMA_12 | 1228.51 | 1188.51 | 1268.51 | +4 | −20
22. | SCSO_12 | 1222.56 | 1182.56 | 1262.56 | +4 | −20
23. | SCSO_30 | 1208.27 | 1168.27 | 1248.27 | +4 | −20
24. | SCSO | 1204.70 | 1164.70 | 1244.70 | +4 | −20
25. | SCSO_LTMA_30 | 1199.93 | 1159.93 | 1239.93 | +4 | −20
26. | ERSA_30 | 1177.31 | 1137.31 | 1217.31 | +3 | −20
27. | ERSA_12 | 1105.86 | 1065.86 | 1145.86 | +1 | −25
28. | ERSA_LTMA_12 | 1030.85 | 990.85 | 1070.85 | | −26
29. | ERSA_LTMA_30 | 1026.08 | 986.08 | 1066.08 | | −26
30. | ERSA | 1023.70 | 983.70 | 1063.70 | | −27

References

  1. Darwin, C. On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life; John Murray: London, UK, 1859. [Google Scholar]
  2. Toth, N.; Schick, K. The Oldowan: Case Studies into the Earliest Stone Age; Interdisciplinary Contributions to Archaeology: New York, NY, USA, 2002. [Google Scholar]
  3. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004; Volume 1, pp. 1–102. [Google Scholar] [CrossRef]
  4. Nocedal, J.; Wright, S.J. Numerical Optimization, 2nd ed.; Springer: New York, NY, USA, 2006. [Google Scholar] [CrossRef]
  5. Jensen, A.; Tavares, J.M.R.; Silva, A.R.S.; Bastos, F.B. Applications of Optimization in Engineering and Economics: A Survey. Comput. Oper. Res. 2007, 34, 3485–3497. [Google Scholar] [CrossRef]
  6. Dalibor, M.; Heithoff, M.; Michael, J.; Netz, L.; Pfeiffer, J.; Rumpe, B.; Varga, S.; Wortmann, A. Generating customized low-code development platforms for digital twins. J. Comput. Lang. 2022, 70, 101117. [Google Scholar] [CrossRef]
  7. Dokeroglu, T.; Sevinc, E.; Kucukyilmaz, T.; Cosar, A. A survey on new generation metaheuristic algorithms. Comput. Ind. Eng. 2019, 137, 106040. [Google Scholar] [CrossRef]
  8. Črepinšek, M.; Liu, S.H.; Mernik, M. Exploration and Exploitation in Evolutionary Algorithms: A Survey. ACM Comput. Surv. 2013, 45, 35:1–35:33. [Google Scholar] [CrossRef]
  9. Beyer, H.; Schwefel, H. Evolutionary Computation: A Review. Nat. Comput. 2001, 1, 3–52. [Google Scholar] [CrossRef]
  10. Honda, Y.; Ueno, K.; Yairi, T. A Survey on Hyperparameter Optimization for Evolutionary Algorithms. Artif. Intell. Rev. 2018, 49, 49–70. [Google Scholar]
  11. Črepinšek, M.; Liu, S.H.; Mernik, M.; Ravber, M. Long Term Memory Assistance for Evolutionary Algorithms. Mathematics 2019, 7, 1129. [Google Scholar] [CrossRef]
  12. Eiben, A.E.; Hinterding, R.; Michalewicz, Z. Parameter control in evolutionary algorithms. IEEE Trans. Evol. Comput. 1999, 3, 124–141. [Google Scholar] [CrossRef]
  13. Dantzig, G.B. Maximization of a Linear Function of Variables Subject to Linear Inequalities. Coop. Res. Proj. 1947, 1, 1–10. [Google Scholar]
  14. Hitchcock, F.L. The Distribution of a Product from Several Sources to Numerous Localities. J. Math. Phys. 1941, 20, 224–230. [Google Scholar] [CrossRef]
  15. Dantzig, G.B. Application of the Simplex Method to the Transportation Problem. IBM J. Res. Dev. 1951, 1, 26–31. [Google Scholar]
  16. Bellman, R. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1957. [Google Scholar]
  17. Gomory, R.E. Outline of an algorithm for integer solutions to linear programs. Bull. Am. Math. Soc. 1958, 64, 107–112. [Google Scholar] [CrossRef]
  18. Holland, J.H. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975; Reprinted by MIT Press, 1992. [Google Scholar]
  19. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  20. Glover, F. Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 1986, 13, 533–549. [Google Scholar] [CrossRef]
  21. Dorigo, M. Ant System: A Cooperative Approach to the Traveling Salesman Problem. In Proceedings of the European Conference on Artificial Life, Paris, France, 11–13 December 1991; MIT Press: Cambridge, MA, USA, 1991; pp. 134–142. [Google Scholar]
  22. Dorigo, M. Optimization, Learning and Natural Algorithms. Ph.D. Thesis, Politecnico di Milano, Milan, Italy, 1992. [Google Scholar]
  23. Kennedy, J.; Eberhart, R.C. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  24. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. In Proceedings of the IEEE International Conference on Evolutionary Computation, Perth, WA, Australia, 29 November–1 December 1995; pp. 519–523. [Google Scholar]
  25. Talbi, E. Metaheuristics: From Design to Implementation; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  26. Yu, X.; Gen, M. Introduction to Evolutionary Algorithms; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  27. Luke, S. Essentials of Metaheuristics, 2nd ed.; Lulu Press: Morrisville, NC, USA, 2013; Available online: https://cs.gmu.edu/~sean/book/metaheuristics/ (accessed on 22 April 2025).
  28. Gendreau, M.; Potvin, J. (Eds.) Handbook of Metaheuristics, 3rd ed.; International Series in Operations Research & Management Science; Springer: Berlin/Heidelberg, Germany, 2019; Volume 272. [Google Scholar]
  29. Ravber, M.; Liu, S.H.; Mernik, M.; Črepinšek, M. Maximum number of generations as a stopping criterion considered harmful. Appl. Soft Comput. 2022, 128, 109478. [Google Scholar] [CrossRef]
  30. Veček, N.; Mernik, M.; Črepinšek, M. A chess rating system for evolutionary algorithms: A new method for the comparison and ranking of evolutionary algorithms. Inf. Sci. 2014, 277, 656–679. [Google Scholar] [CrossRef]
  31. Veček, N.; Mernik, M.; Filipič, B.; Črepinšek, M. Parameter tuning with Chess Rating System (CRS-Tuning) for meta-heuristic algorithms. Inf. Sci. 2016, 372, 446–469. [Google Scholar] [CrossRef]
  32. Matthysen, W.; Engelbrecht, A.; Malan, K. Analysis of stagnation behavior of vector evaluated particle swarm optimization. In Proceedings of the 2013 IEEE Symposium on Swarm Intelligence (SIS), Singapore, 16–19 April 2013; pp. 155–163. [Google Scholar] [CrossRef]
  33. Bonyadi, M.R.; Michalewicz, Z. Stability Analysis of the Particle Swarm Optimization Without Stagnation Assumption. IEEE Trans. Evol. Comput. 2016, 20, 814–819. [Google Scholar] [CrossRef]
  34. Chen, J.; Fan, X. Stagnation analysis of swarm intelligent algorithms. In Proceedings of the 2010 International Conference on Computer Application and System Modeling (ICCASM 2010), Taiyuan, China, 22–24 October 2010; Volume 12, pp. V12-24–V12-28. [Google Scholar] [CrossRef]
  35. Kazikova, A.; Pluhacek, M.; Senkerik, R. How Does the Number of Objective Function Evaluations Impact Our Understanding of Metaheuristics Behavior? IEEE Access 2021, 9, 44032–44048. [Google Scholar] [CrossRef]
  36. Lu, H.; Tseng, H. The Analyze of Stagnation Phenomenon and Its’ Root Causes in Particle Swarm Optimization. In Proceedings of the 2020 IEEE International Conference on Consumer Electronics—Taiwan (ICCE-Taiwan), Taoyuan, Taiwan, 28–30 September 2020; pp. 1–2. [Google Scholar] [CrossRef]
  37. Li, C.; Deng, L.; Qiao, L.; Zhang, L. An efficient differential evolution algorithm based on orthogonal learning and elites local search mechanisms for numerical optimization. Knowl.-Based Syst. 2022, 235, 107636. [Google Scholar] [CrossRef]
  38. Safi-Esfahani, F.; Mohammadhoseini, L.; Larian, H.; Mirjalili, S. LEVYEFO-WTMTOA: The hybrid of the multi-tracker optimization algorithm and the electromagnetic field optimization. J. Supercomput. 2025, 81, 432. [Google Scholar] [CrossRef]
  39. Doerr, B.; Rajabi, A. Stagnation Detection Meets Fast Mutation. In Evolutionary Computation in Combinatorial Optimization, EvoCOP 2022, Proceedings of the 22nd European Conference on Evolutionary Computation in Combinatorial Optimisation (EvoCOP), Held as Part of the EvoStar Conference, Madrid, Spain, 20–22 April 2022; Lecture Notes in Computer Science; Caceres, L., Verel, S., Eds.; Springer: Cham, Switzerland, 2022; Volume 13222, pp. 191–207. [Google Scholar] [CrossRef]
  40. Shanhe, J.; Zhicheng, J. An improved HPSO-GSA with adaptive evolution stagnation cycle. In Proceedings of the 33rd Chinese Control Conference, Nanjing, China, 28–30 July 2014; pp. 8601–8606. [Google Scholar] [CrossRef]
  41. Piotrowski, A.P.; Napiorkowski, J.J.; Piotrowska, A.E. To what extent evolutionary algorithms can benefit from a longer search? Inf. Sci. 2024, 654, 119766. [Google Scholar] [CrossRef]
  42. Ursem, R.K. Diversity-guided evolutionary algorithms. In Proceedings of the Parallel Problem Solving from Nature—PPSN VII, Granada, Spain, 7–11 September 2002; Volume 2439, pp. 462–471. [Google Scholar] [CrossRef]
  43. Alam, M.S.; Islam, M.M.; Yao, X.; Murase, K. Diversity Guided Evolutionary Programming: A novel approach for continuous optimization. Appl. Soft Comput. 2012, 12, 1693–1707. [Google Scholar] [CrossRef]
  44. Chauhan, D.; Trivedi, A.; Shivani. A Multi-operator Ensemble LSHADE with Restart and Local Search Mechanisms for Single-objective Optimization. arXiv 2024, arXiv:2409.15994. [Google Scholar]
  45. Auger, A.; Hansen, N. A restart CMA evolution strategy with increasing population size. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; Volume 2, pp. 1769–1776. [Google Scholar] [CrossRef]
  46. Loshchilov, I.; Schoenauer, M.; Sebag, M. Black-box optimization benchmarking of IPOP-saACM-ES and BIPOP-saACM-ES on the BBOB-2012 noiseless testbed. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO Companion), Philadelphia, PA, USA, 7–11 July 2012; pp. 175–182. [Google Scholar] [CrossRef]
  47. Brest, J.; Greiner, S.; Bošković, B.; Mernik, M.; Žumer, V. Self-Adapting Control Parameters in Differential Evolution: A Comparative Study on Numerical Benchmark Problems. IEEE Trans. Evol. Comput. 2006, 10, 646–657. [Google Scholar] [CrossRef]
  48. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1658–1665. [Google Scholar]
  49. Morales-Castaneda, B.; Maciel-Castillo, O.; Navarro, M.A.; Aranguren, I.; Valdivia, A.; Ramos-Michel, A.; Oliva, D.; Hinojosa, S. Handling stagnation through diversity analysis: A new set of operators for evolutionary algorithms. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022; pp. 1–7. [Google Scholar] [CrossRef]
  50. Raß, A.; Schmitt, M.; Wanka, R. Explanation of Stagnation at Points that are not Local Optima in Particle Swarm Optimization by Potential Analysis. In Proceedings of the Companion Publication of the 2015 Annual Conference on Genetic and Evolutionary Computation, New York, NY, USA, 11 July 2015; GECCO Companion ’15; pp. 1463–1464. [Google Scholar] [CrossRef]
  51. Laalaoui, Y.; Drias, H.; Bouridah, A.; Ahmed, R. Ant colony system with stagnation avoidance for the scheduling of real-time tasks. In Proceedings of the 2009 IEEE Symposium on Computational Intelligence in Scheduling, Nashville, TN, USA, 30 March–2 April 2009; pp. 1–6. [Google Scholar] [CrossRef]
  52. Morales-Castañeda, B.; Oliva, D.; Navarro, M.A.; Ramos-Michel, A.; Valdivia, A.; Casas-Ordaz, A.; Rodríguez-Esparza, E.; Mousavirad, S.J. Improving the Convergence of the PSO Algorithm with a Stagnation Variable and Fuzzy Logic. In Proceedings of the 2023 IEEE Congress on Evolutionary Computation (CEC), Chicago, IL, USA, 1–5 July 2023; pp. 1–8. [Google Scholar] [CrossRef]
  53. Worasucheep, C. A Particle Swarm Optimization with diversity-guided convergence acceleration and stagnation avoidance. In Proceedings of the 2012 8th International Conference on Natural Computation, Chongqing, China, 29–31 May 2012; pp. 733–738. [Google Scholar] [CrossRef]
  54. Rajabi, A.; Witt, C. Stagnation Detection with Randomized Local Search. Evol. Comput. 2023, 31, 1–29. [Google Scholar] [CrossRef]
  55. Liu, Y.; Zheng, L.; Cai, B. Adaptive Differential Evolution with the Stagnation Termination Mechanism. Mathematics 2024, 12, 3168. [Google Scholar] [CrossRef]
  56. Nadimi-Shahraki, M.H.; Zamani, H.; Fatahi, A.; Mirjalili, S. MFO-SFR: An Enhanced Moth-Flame Optimization Algorithm Using an Effective Stagnation Finding and Replacing Strategy. Mathematics 2023, 11, 862. [Google Scholar] [CrossRef]
  57. Qin, Y.; Deng, L.; Li, C.; Zhang, L. CIR-DE: A chaotic individual regeneration mechanism for solving the stagnation problem in differential evolution. Swarm Evol. Comput. 2024, 91, 101718. [Google Scholar] [CrossRef]
  58. Rajabi, A.; Witt, C. Self-Adjusting Evolutionary Algorithms for Multimodal Optimization. Algorithmica 2022, 84, 1694–1723. [Google Scholar] [CrossRef]
  59. Karaboga, D.; Basturk, B. On the performance of artificial bee colony (ABC) algorithm. Appl. Soft Comput. 2008, 8, 687–697. [Google Scholar] [CrossRef]
  60. Zhao, W.; Zhang, Z.; Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 2020, 87, 103300. [Google Scholar] [CrossRef]
  61. Salcedo-Sanz, S.; Ser, J.D.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J.A. The Coral Reefs Optimization Algorithm: A Novel Metaheuristic for Efficiently Solving Optimization Problems. Sci. World J. 2014, 2014, 1–15. [Google Scholar] [CrossRef]
  62. EARS—Evolutionary Algorithms Rating System (Github). Available online: https://github.com/UM-LPM/EARS (accessed on 22 May 2025).
  63. Elo, A. The Rating of Chessplayers, Past and Present; Batsford: London, UK, 1978. [Google Scholar]
  64. Glickman, M.E. Parameter Estimation in Large Dynamic Paired Comparison Experiments. J. R. Stat. Soc. Ser. C (Appl. Stat.) 1999, 48, 377–394. [Google Scholar] [CrossRef]
  65. Herbrich, R.; Minka, T.; Graepel, T. TrueSkill™: A Bayesian Skill Rating System. In Advances in Neural Information Processing Systems 19 (NIPS 2006); MIT Press: Cambridge, MA, USA, 2007; pp. 569–576. [Google Scholar]
  66. Bober-Irizar, M.; Dua, N.; McGuinness, M. Skill Issues: An Analysis of CS:GO Skill Rating Systems. arXiv 2024, arXiv:2410.02831. [Google Scholar]
  67. Veček, N.; Črepinšek, M.; Mernik, M. On the influence of the number of algorithms, problems, and independent runs in the comparison of evolutionary algorithms. Appl. Soft Comput. 2017, 54, 23–45. [Google Scholar] [CrossRef]
  68. Eftimov, T.; Korošec, P. Identifying practical significance through statistical comparison of meta-heuristic stochastic optimization algorithms. Appl. Soft Comput. 2019, 85, 105862. [Google Scholar] [CrossRef]
  69. Glickman, M.E. The Glicko-2 Rating System. 2022. Available online: https://www.glicko.net/glicko/glicko2.pdf (accessed on 19 March 2025).
  70. Cardoso, L.F.F.; Santos, V.C.A.; Francês, R.S.K.; Prudêncio, R.B.C.; Alves, R.C.O. Data vs classifiers, who wins? arXiv 2021, arXiv:2107.07451. [Google Scholar]
  71. Awad, N.H.; Ali, M.Z.; Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Bound Constrained Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2016. [Google Scholar]
  72. Price, K.V.; Kumar, A.; Suganthan, P.N. Trial-based dominance for comparing both the speed and accuracy of stochastic optimizers with standard non-parametric tests. Swarm Evol. Comput. 2023, 78, 101287. [Google Scholar] [CrossRef]
  73. Qiao, K.; Ban, X.; Chen, P.; Price, K.V.; Suganthan, P.N.; Wen, X.; Liang, J.; Wu, G.; Yue, C. Performance Comparison of CEC 2024 Competition Entries on Numerical Optimization Considering Accuracy and Speed; Technical Report; Zhengzhou University: Zhengzhou, China; Qatar University: Doha, Qatar, 2024. [Google Scholar]
  74. Huppertz, P.; Schallenburger, M.; Zeise, R.; Kizilcay, M. Nodal load approximation in sparsely measured 4-wire electrical low-voltage grids by stochastic optimization. In Proceedings of the 2015 IEEE Eindhoven PowerTech, Eindhoven, The Netherlands, 29 June–2 July 2015; pp. 1–6. [Google Scholar] [CrossRef]
  75. Chang, G.W.; Chu, S.Y.; Wang, H.L. An Improved Backward/Forward Sweep Load Flow Algorithm for Radial Distribution Systems. IEEE Trans. Power Syst. 2007, 22, 882–884. [Google Scholar] [CrossRef]
  76. Brest, J.; Maučec, M.S. Self-adaptive differential evolution algorithm using population size reduction and three strategies. Soft Comput. 2011, 15, 2157–2174. [Google Scholar] [CrossRef]
  77. Zhao, W.; Wang, L.; Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 388, 114194. [Google Scholar] [CrossRef]
  78. Taheri, A.; RahimiZadeh, K.; Rao, R.V. An efficient Balanced Teaching-Learning-Based optimization algorithm with Individual restarting strategy for solving global optimization problems. Inf. Sci. 2021, 576, 68–104. [Google Scholar] [CrossRef]
  79. Rahmanzadeh, S.; Pishvaee, M.S. Electron radar search algorithm: A novel developed meta-heuristic algorithm. Soft Comput. 2020, 24, 8443–8465. [Google Scholar] [CrossRef]
  80. Seyyedabbasi, A.; Kiani, F. Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Eng. Comput. 2023, 39, 2627–2651. [Google Scholar] [CrossRef]
  81. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Gazelle optimization algorithm: A novel nature-inspired metaheuristic optimizer. Neural Comput. Appl. 2023, 35, 4099–4131. [Google Scholar] [CrossRef]
  82. Hadi, A.A.; Mohamed, A.W.; Jambi, K.M. LSHADE with Semi-Parameter Adaptation Hybrid with CMA-ES for Solving CEC 2017 Benchmark Problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia, Spain, 5–8 June 2017; pp. 1318–1325. [Google Scholar] [CrossRef]
  83. Stanovov, V.; Akhmedova, S.; Semenkin, E. LSHADE Algorithm with Rank-Based Selective Pressure Strategy for Solving CEC 2017 Benchmark Problems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar] [CrossRef]
  84. Bujok, P.; Michalak, K. Improving LSHADE by Means of a Pre-Screening Mechanism. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Boston, MA, USA, 9–13 July 2022; pp. 884–892. [Google Scholar] [CrossRef]
Figure 1. Uniform partitioning as defined by Equation (2).
Figure 2. Partitioning based on stagnation as defined by Equation (5).
Figure 3. Applying the LTMA strategy to MsMA, as defined by Equations (8) (top) and (9) (bottom).
Figure 4. Power Distribution Network Topology.
Table 1. The MsMA Leaderboard on the CEC’24 Benchmark.
Rank | MA | Rating | −2RD | +2RD | S+ (95% CI) | S− (95% CI)
1. | jDE_30 | 1974.69 | 1934.69 | 2014.69 | +18 |
2. | jDE_6 | 1956.61 | 1916.61 | 1996.61 | +18 |
3. | jDE_12 | 1916.00 | 1876.00 | 1956.00 | +17 |
4. | jDE | 1865.72 | 1825.72 | 1905.72 | +17 | −2
5. | PSO_30 | 1536.85 | 1496.85 | 1576.85 | +13 | −4
6. | PSO_6 | 1533.72 | 1493.72 | 1573.72 | +13 | −4
7. | PSO_12 | 1533.48 | 1493.48 | 1573.48 | +13 | −4
8. | PSO | 1527.45 | 1487.45 | 1567.45 | +13 | −4
9. | MRFO_6 | 1437.60 | 1397.60 | 1477.60 | +4 | −8
10. | MRFO_12 | 1422.06 | 1382.06 | 1462.06 | +3 | −8
11. | ABC_30 | 1421.61 | 1381.61 | 1461.61 | +3 | −8
12. | ABC_12 | 1420.36 | 1380.36 | 1460.36 | +3 | −8
13. | ABC_6 | 1419.34 | 1379.34 | 1459.34 | +3 | −8
14. | ABC | 1418.15 | 1378.15 | 1458.15 | +3 | −8
15. | MRFO_30 | 1397.95 | 1357.95 | 1437.95 | +2 | −8
16. | CRO_6 | 1395.20 | 1355.20 | 1435.20 | +2 | −8
17. | CRO_12 | 1368.82 | 1328.82 | 1408.82 | +1 | −8
18. | MRFO | 1343.25 | 1303.25 | 1383.25 | +1 | −9
19. | CRO_30 | 1321.08 | 1281.08 | 1361.08 | +1 | −14
20. | CRO | 1303.18 | 1263.18 | 1343.18 | +1 | −16
21. | RS | 986.87 | 946.87 | 1026.87 | | −20
Note: The red font color highlights deviations discussed explicitly in the main text.
Table 2. The Loss Outcomes for MAs vs. MsMA on the CEC’24 Benchmark.
jDE_30 | jDE_6 | jDE_12 | jDE | PSO_30 | PSO_6 | PSO_12 | PSO | MRFO_6 | MRFO_12 | ABC_30 | ABC_12 | ABC_6 | ABC | MRFO_30 | CRO_6 | CRO_12 | MRFO | CRO_30 | CRO | RS
jDE_300344329333342161316145324210
jDE_61380654023212422111217171916817121410120
jDE_12639613035985394181719157629420
jDE7607477290765661313031306123110
PSO_3086784786186304154204212962813373383373382682592372172021980
PSO_686684886286442604374192902873453593433362482742442242001762
PSO_1286784686586542140303972962883463563573372552872522301931960
PSO86784886786442041944302872973483533573462712692542372151960
MRFO_686385785986455055454955904103893933843763723933683143262811
MRFO_1286685686486556956356155343603934044063993994183893473253202
ABC_30825823823824533525524522481477039440341443243639038635333715
ABC_12828823824825532511514517477466446041744043643439437734432013
ABC_6825821822824533527513513486464437423042843641439939435732514
ABC827824826825532534533524494471426400412043043340238035932122
MRFO_3086385986086258360259758147745443843443444004304103573423211
CRO_686785386486961159658360147745243443645643744004133583483592
CRO_1286885886886863362661861650248148047647146846045703953923547
MRFO86585585986564263562962254051348449347649050651247504164071
CRO_3086886086686966867067765554454551752651351152852247845404198
CRO869858868869672694674674589550533550545549549511516463451014
RS8708708708708708688708708698688558578568488698688638698628560
Note: Red and blue font colors highlight deviations discussed explicitly in the main text.
Table 3. The Loss Outcomes of MAs on Individual Problems in the CEC’24 Benchmark (F01–F15).
F01 | F02 | F03 | F04 | F05 | F06 | F07 | F08 | F09 | F10 | F11 | F12 | F13 | F14 | F15
jDE_30001133362222222222
jDE_60013024805555555555
jDE_120014969650505050505050505050
jDE001243910489898989898989898989
PSO_30306333344194363305222196352192228241230212282
PSO_6324454369227435353237196337188226209204174244
PSO_12327421370190393347203172328194242208213238309
PSO316275389215393326239197340192242219216214302
MRFO_6270376298364517389311338481289238325245334339
MRFO_12296289331332520403305380465308212323251326325
ABC_302345251364680305481518223520443464454435313
ABC_122375351614180307491533231512464498449464298
ABC_62415231324630313485504226509438522440496268
ABC2195611674750300467497237540435433470428287
MRFO_30380138254404528359333334531339202431244412374
CRO_6488333419234337277296285255355483275459277398
CRO_12488254450306302292339312346353469284469301421
MRFO459136284400532447370367475330213435221440432
CRO_30476216488307268394356331339342494304484357471
CRO459209509314272403385360354357491349502412457
RS600542600600600600600600600600600600569600600
Note: Red and blue font colors highlight deviations discussed explicitly in the main text.
Table 4. The Loss Outcomes of MAs on Individual Problems in the CEC’24 Benchmark (F16–F29).
F16 | F17 | F18 | F19 | F20 | F21 | F22 | F23 | F24 | F25 | F26 | F27 | F28 | F29
jDE_3022222222222222
jDE_655555555555555
jDE_125050505050505050505050505050
jDE8989898989898989898989898989
PSO_30279202269369331126300300313290329360324210
PSO_6292210194342303120286301328313288378297221
PSO_12269223226324299120309311328293308330343219
PSO310204238355325120273320340314364353336231
MRFO_6390356271311327200385371269374370240437247
MRFO_12383374296314394256435341230385425303416317
ABC_30278380500287261468242340253211210262225481
ABC_12270414480286237463216261268207211260245522
ABC_6283388529263278453283196271195199279239539
ABC289399481271278462302325242230190234241515
MRFO_30413329278383380272448403308364444321427312
CRO_6363403302316384368363352472422405492314329
CRO_12346410296378428399395367450436424467397319
MRFO429324394362420345494463443500496302443329
CRO_30460466401462426402387423515497413433398388
CRO466440365504449419402446490481444480438341
RS600598600593600596600600600600600600600600
Note: The red font color highlights deviations discussed explicitly in the main text.
Table 5. The LTMA(MsMA) Leaderboard on the CEC’24 Benchmark.
Rank | MA | Rating | −2RD | +2RD | S+ (95% CI) | S− (95% CI)
1. | jDE_30 | 1951.69 | 1911.69 | 1991.69 | +22 |
2. | jDE_LTMA_30 | 1945.87 | 1905.87 | 1985.87 | +22 |
3. | jDE_LTMA_6 | 1929.01 | 1889.01 | 1969.01 | +21 |
4. | jDE_LTMA_12 | 1917.70 | 1877.70 | 1957.70 | +21 |
5. | jDE | 1856.16 | 1816.16 | 1896.16 | +21 | −2
6. | PSO_LTMA_6 | 1557.01 | 1517.01 | 1597.01 | +16 | −5
7. | PSO_LTMA_12 | 1540.23 | 1500.23 | 1580.23 | +16 | −5
8. | PSO_LTMA_30 | 1518.02 | 1478.02 | 1558.02 | +16 | −5
9. | PSO_30 | 1516.40 | 1476.40 | 1556.40 | +16 | −5
10. | PSO | 1510.43 | 1470.43 | 1550.43 | +16 | −5
11. | CRO_LTMA_12 | 1420.84 | 1380.84 | 1460.84 | +3 | −10
12. | MRFO_LTMA_6 | 1420.29 | 1380.29 | 1460.29 | +3 | −10
13. | CRO_LTMA_6 | 1415.25 | 1375.25 | 1455.25 | +3 | −10
14. | MRFO_6 | 1412.70 | 1372.70 | 1452.70 | +3 | −10
15. | ABC_LTMA_12 | 1396.66 | 1356.66 | 1436.66 | +2 | −10
16. | ABC_LTMA_30 | 1396.37 | 1356.37 | 1436.37 | +2 | −10
17. | MRFO_LTMA_30 | 1394.67 | 1354.67 | 1434.67 | +2 | −10
18. | ABC_30 | 1394.08 | 1354.08 | 1434.08 | +2 | −10
19. | ABC | 1393.84 | 1353.84 | 1433.84 | +2 | −10
20. | MRFO_LTMA_12 | 1393.41 | 1353.41 | 1433.41 | +2 | −10
21. | ABC_LTMA_6 | 1380.61 | 1340.61 | 1420.61 | +2 | −10
22. | CRO_LTMA_30 | 1375.58 | 1335.58 | 1415.58 | +2 | −10
23. | CRO_6 | 1370.09 | 1330.09 | 1410.09 | +2 | −10
24. | MRFO | 1324.75 | 1284.75 | 1364.75 | +1 | −14
25. | CRO | 1282.19 | 1242.19 | 1322.19 | +1 | −23
26. | RS | 986.17 | 946.17 | 1026.17 | | −25
Note: Red and blue font colors highlight deviations discussed explicitly in the main text.
Table 6. The LTMA(MsMA) Loss Outcomes for MAs vs. MsMA on the CEC’24 Benchmark.
jDE_30 | jDE_LTMA_30 | jDE_LTMA_6 | jDE_LTMA_12 | jDE | PSO_LTMA_6 | PSO_LTMA_12 | PSO_LTMA_30 | PSO_30 | PSO | CRO_LTMA_12 | MRFO_LTMA_6 | CRO_LTMA_6 | MRFO_6 | ABC_LTMA_12 | ABC_LTMA_30 | MRFO_LTMA_30 | ABC_30 | ABC | MRFO_LTMA_12 | ABC_LTMA_6 | CRO_LTMA_30 | CRO_6 | MRFO | CRO | RS
jDE_30040436392933533453416134161451423410
jDE_LTMA_3034903483543674977778519165191851656530
jDE_LTMA_6554340745132302929283027302224222226172225292919210
jDE_LTMA_1240242237203919171719161913209181581817817121310100
jDE760750746747051676453630303313053011310
PSO_LTMA_686786383885186503973783863552702462632482872942272952962432892352202061491
PSO_LTMA_1286786684085386844204033943682952802652603203082593113152493052502342061601
PSO_LTMA_3086586084185386446243704304182962862922913413422903313322733262752592281910
PSO_3086786384185186345544741004213032932962963523302883373382923252802592171980
PSO86786384285486448547242042002982833192873513502943483462693302802692371960
CRO_LTMA_1286686384085186660057557456757204473694104023863823793824043943973893322860
MRFO_LTMA_686486284185586459856355855156042304324153643693973813564113673993773173000
CRO_LTMA_686786284085086760760557857455150143804253743683913613654053704164073362902
MRFO_686386384685886459858555455055946042244503933843933893764073903993933142811
ABC_LTMA_12825822816822825583550529518519468506496477043247642244145638640941639231017
ABC_LTMA_30828825818825825576562528540520484501502486408046341041345837341343538332617
MRFO_LTMA_3086386384685986262459256056355748844747945239440703944014343884224143553202
ABC_30825822814822824575559539533522491489509481418430476041445738041343638633715
ABC827823823823825574555538532524488514505494399427469426045737541843338032122
MRFO_LTMA_1286186084485986260359757455657846643346543641441241941341304104304163533250
ABC_LTMA_6827825815823825581565544545540476503500480454467482460465460042143640234211
CRO_LTMA_3086886584185886963562059559059047347145447146145744845745244044904183693237
CRO_686786484185786965063661161160148149346347745443545643443745443445203583592
MRFO86586185085986565365363064262253853653454047848750448449050546850151204071
CRO869867849860869721710679672674584570580589560544550533549545528547511463014
RS8708708708708708698698708708708708708688698538538688558488708598638688698560
Note: Red and blue font colors highlight deviations discussed explicitly in the main text.
Table 7. The Loss Outcomes of MAs with LTMA on Individual Problems in the CEC’24 Benchmark (F01–F15).
F01 | F02 | F03 | F04 | F05 | F06 | F07 | F08 | F09 | F10 | F11 | F12 | F13 | F14 | F15
jDE_300014746818181818181818181818
jDE_LTMA_3000170102941414141414141414141
jDE_LTMA_6036159524017171717171717171717
jDE_LTMA_1200178248046464646464646464646
jDE0015443134120120120120120120120120120120
PSO_LTMA_6373556472216534458277274441241342221271224265
PSO_LTMA_12369525448276472438317223447236317276284242308
PSO_LTMA_30413408435258449452346241436246276292274284364
PSO_30383347444267422420317249468233281310300275411
PSO395265477288457449318246452233299298278286422
CRO_LTMA_12690460565212450273318367308439561304513363402
MRFO_LTMA_6392410371409670437438466636376284451316400397
CRO_LTMA_6720550624191522335256339238486547408458404319
MRFO_6351411380478652514405447613360294442323440455
ABC_LTMA_123276631895750416633671333633559647555592426
ABC_LTMA_302866862235680418606644289639578599606561401
MRFO_LTMA_30407168392490641496430519653452261435304526526
ABC_303076521595970423630646321657583605586578437
ABC2976931916150405611622318672567566601566404
MRFO_LTMA_12427295346470657476433455602413247453309433445
ABC_LTMA_63166521855830429614657331639593651578640452
CRO_LTMA_30566312604306360359350320369438611357641404557
CRO_6592343544312387373408373349438611372598376550
MRFO545163356509665591485480619421266563279564565
CRO544216650416312538507461477448623450655542594
RS750675750750750750750750750750750750721750750
Note: The red font color highlights deviations discussed explicitly in the main text.
Table 8. The Loss Outcomes of MAs with LTMA on Individual Problems in the CEC’24 Benchmark (F16–F29).
F16 | F17 | F18 | F19 | F20 | F21 | F22 | F23 | F24 | F25 | F26 | F27 | F28 | F29
jDE_301818181818181818181818181818
jDE_LTMA_304141414141414141414141414141
jDE_LTMA_61717171717171717171717171717
jDE_LTMA_124646464646464646464646464646
jDE120120120120120120120120120120120120120120
PSO_LTMA_6285280195332385150382374320303321428398251
PSO_LTMA_12339292251363386150387398431344360400409231
PSO_LTMA_30431261266427399150454420411421401457413298
PSO_30361258355498444160415420396405438435437273
PSO417267303491431150375447428430480429446287
CRO_LTMA_12382451383422430497379394587511483613389387
MRFO_LTMA_6456414424420471208499469382443499362535389
CRO_LTMA_6345443380323388578325375673491392668374498
MRFO_6517468360431441254526516334489490297568331
ABC_LTMA_12372513654408354595297278361275304290329664
ABC_LTMA_30389537601378465616385333329256258313304651
MRFO_LTMA_30534407400434481331532521289471560346562418
ABC_30385499639402331610344428325287288315322611
ABC399541613384353601412416305320236289328647
MRFO_LTMA_12571445420507472262562529397521577380512383
ABC_LTMA_6365521674385317582310235395293350509319674
CRO_LTMA_30487584375537498451441471579568498549464425
CRO_6490523412426511445507488598566537612432423
MRFO561422513479543443636602554651646360584420
CRO614575482657600528532586606638582590575439
RS750749750746750746750750750750750750750750
Note: The red font color highlights deviations discussed explicitly in the main text.
Table 9. The MAs Leaderboard on The LFA Problem.
Rank | MA | Rating | −2RD | +2RD | S+ (95% CI) | S− (95% CI)
1. | MRFO_LTMA_12 | 1843.00 | 1803.00 | 1883.00 | +32 |
2. | jDE_LTMA_12 | 1770.45 | 1730.45 | 1810.45 | +23 |
3. | MRFO_LTMA_30 | 1764.53 | 1724.53 | 1804.53 | +22 |
4. | jDE_LTMA_30 | 1763.05 | 1723.05 | 1803.05 | +22 |
5. | MRFO_6 | 1761.57 | 1721.57 | 1801.57 | +21 | −1
6. | jDE_LTMA_6 | 1756.14 | 1716.14 | 1796.14 | +21 | −1
7. | CRO_LTMA_30 | 1751.70 | 1711.70 | 1791.70 | +20 | −1
8. | jDE_12 | 1741.83 | 1701.83 | 1781.83 | +18 | −1
9. | jDE | 1735.41 | 1695.41 | 1775.41 | +17 | −1
10. | MRFO_LTMA_6 | 1725.54 | 1685.54 | 1765.54 | +16 | −1
11. | jDE_30 | 1717.64 | 1677.64 | 1757.64 | +16 | −1
12. | jDE_6 | 1709.25 | 1669.25 | 1749.25 | +16 | −1
13. | CRO_6 | 1698.40 | 1658.40 | 1738.40 | +15 | −1
14. | CRO_LTMA_12 | 1686.06 | 1646.06 | 1726.06 | +15 | −2
15. | MRFO_12 | 1682.60 | 1642.60 | 1722.60 | +15 | −4
16. | CRO_30 | 1672.73 | 1632.73 | 1712.73 | +15 | −6
17. | CRO_12 | 1666.81 | 1626.81 | 1706.81 | +15 | −7
18. | MRFO | 1662.37 | 1622.37 | 1702.37 | +15 | −7
19. | MRFO_30 | 1659.90 | 1619.90 | 1699.90 | +15 | −8
20. | CRO_LTMA_6 | 1652.99 | 1612.99 | 1692.99 | +15 | −9
21. | CRO | 1627.82 | 1587.82 | 1667.82 | +15 | −12
22. | ABC_LTMA_12 | 1320.36 | 1280.36 | 1360.36 | +8 | −21
23. | ABC_30 | 1308.51 | 1268.51 | 1348.51 | +8 | −21
24. | ABC | 1301.60 | 1261.60 | 1341.60 | +8 | −21
25. | ABC_LTMA_30 | 1292.72 | 1252.72 | 1332.72 | +8 | −21
26. | ABC_12 | 1290.75 | 1250.75 | 1330.75 | +8 | −21
27. | ABC_6 | 1273.97 | 1233.97 | 1313.97 | +8 | −21
28. | ABC_LTMA_6 | 1256.20 | 1216.20 | 1296.20 | +7 | −21
29. | PSO_LTMA_6 | 1187.11 | 1147.11 | 1227.11 | +3 | −27
30. | PSO_LTMA_12 | 1168.35 | 1128.35 | 1208.35 | +2 | −28
31. | PSO_6 | 1152.56 | 1112.56 | 1192.56 | +1 | −28
32. | PSO_LTMA_30 | 1128.87 | 1088.87 | 1168.87 | +1 | −28
33. | PSO_12 | 1119.99 | 1079.99 | 1159.99 | +1 | −28
34. | PSO_30 | 1092.85 | 1052.85 | 1132.85 | +1 | −29
35. | PSO | 1074.59 | 1034.59 | 1114.59 | +1 | −30
36. | RS | 981.80 | 941.80 | 1021.80 | | −35
Note: The red font color highlights deviations discussed explicitly in the main text.