Article

Enhancing Differential Evolution: A Dual Mutation Strategy with Majority Dimension Voting and New Stopping Criteria

by
Anna Maria Gianni
,
Ioannis G. Tsoulos
*,
Vasileios Charilogis
and
Glykeria Kyrou
Department of Informatics and Telecommunications, University of Ioannina, 47150 Kostaki Artas, Greece
*
Author to whom correspondence should be addressed.
Symmetry 2025, 17(6), 844; https://doi.org/10.3390/sym17060844
Submission received: 3 May 2025 / Revised: 21 May 2025 / Accepted: 26 May 2025 / Published: 28 May 2025
(This article belongs to the Section Computer)

Abstract

This paper presents an innovative optimization algorithm based on differential evolution that combines advanced mutation techniques with intelligent termination mechanisms. The proposed algorithm is designed to address the main limitations of classical differential evolution, offering improved performance for symmetric or non-symmetric optimization problems. The core scientific contribution of this research focuses on three key aspects. First, we develop a hybrid dual-strategy mutation system where the first strategy emphasizes exploration of the solution space through monitoring of the optimal solution, while the second strategy focuses on exploitation of promising regions using dynamically weighted differential terms. This dual mechanism ensures a balanced approach between discovering new solutions and improving existing ones. Second, the algorithm incorporates a novel majority dimension mechanism that evaluates candidate solutions through dimension-wise comparison with elite references (best sample and worst sample). This mechanism dynamically guides the search process by determining whether to intensify local exploitation or initiate global exploration based on majority voting across all the dimensions. Third, the work presents numerous new termination rules based on the quantitative evaluation of metric value homogeneity. These rules extend beyond traditional convergence checks by incorporating multidimensional criteria that consider both the solution distribution and evolutionary dynamics. This system enables more sophisticated and adaptive decision-making regarding the optimal stopping point of the optimization process. The methodology is validated through extensive experimental procedures covering a wide range of optimization problems. The results demonstrate significant improvements in both solution quality and computational efficiency, particularly for high-dimensional problems with numerous local optima. 
The research findings highlight the proposed algorithm’s potential as a high-performance tool for solving complex optimization challenges in contemporary scientific and technological contexts.

1. Introduction

Global optimization is concerned with finding the global minimum of a continuous objective function $f : S \to \mathbb{R}$, where $S \subset \mathbb{R}^n$ is a compact set. Formally, the problem is defined as:
$$x^* = \arg\min_{x \in S} f(x),$$
where $x^*$ is the global minimizer. Typically, $S$ is an n-dimensional hyperrectangle:
$$S = [a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_n, b_n],$$
with $a_i$ and $b_i$ representing the lower and upper bounds of each variable $x_i$.
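To make the problem statement concrete, the following minimal Python sketch (an illustrative assumption, not code from the paper) defines a box-constrained objective over a hyperrectangle $S$ and approximates its global minimizer by naive random sampling:

```python
import random

def f(x):
    # Sphere function: global minimum f(0, ..., 0) = 0 (illustrative choice).
    return sum(xi * xi for xi in x)

# S = [a_1, b_1] x ... x [a_n, b_n], here the cube [-5, 5]^3.
bounds = [(-5.0, 5.0)] * 3

def random_search(f, bounds, n_samples=20000, seed=0):
    """Naive baseline: sample uniformly in S and keep the best point seen."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_samples):
        x = [rng.uniform(a, b) for a, b in bounds]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

x_star, f_star = random_search(f, bounds)
```

Such a baseline illustrates why structured stochastic methods like DE are needed: uniform sampling wastes evaluations in unpromising regions, especially as the dimension grows.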
Optimization methods constitute a fundamental tool in multifaceted scientific and technological applications, such as engineering design, automated hyperparameter tuning in machine learning models, economic modeling, and biological system analysis. These methods can be broadly categorized into two main groups: deterministic and stochastic approaches. Deterministic methods are based on mathematical algorithms that follow strict rules and guarantee reproducibility. They include gradient-based methods such as the Gradient Descent [1] and Newton–Raphson algorithms [2], linear and nonlinear programming [3], interior-point methods [4], and quasi-Newton methods like BFGS [5]. However, these methods often struggle with complex, non-symmetric, or multimodal problems due to their dependence on gradient information and their tendency to become trapped in local minima. On the other hand, stochastic methods employ elements of randomness to explore the search space, making them ideal for non-differentiable or hard-to-analyze problems. Notable approaches include nature-inspired heuristic methods such as genetic algorithms (GAs) [6], differential evolution (DE) [7,8,9,10], Particle Swarm Optimization (PSO) [11,12], the Aquila Optimizer (AO) [13,14], and the Arithmetic Optimization Algorithm (AOA) [15,16,17]. There are also physics-inspired techniques like Simulated Annealing (SA) [18,19], Central Force Optimization (CFO) [20], and Ring Toss Optimization (RTO) [21], as well as behavioral algorithms such as Smell Agent Optimization (SAO) [22,23,24,25], Whale Optimization Algorithm (WOA) [26,27], and Artificial Fish Swarm Algorithm (AFSA) [28,29]. DE has proven to be particularly effective for many high-dimensional problems. However, its classical form presents certain limitations, including a tendency for premature convergence and difficulty in balancing exploration and exploitation. 
To address these issues, this work introduces an enhanced version of DE that incorporates a dual mutation strategy. The first strategy focuses on exploring the solution space using the distance from the current optimal solution, while the second strategy emphasizes exploiting promising regions through dynamically weighted differential terms. Additionally, we propose a new mechanism called the majority dimension mechanism (MDM), which evaluates solutions dimension-wise relative to both an optimal and a poor sample. This mechanism enables dynamic search adaptation based on the dimensional distribution of solutions, guiding the algorithm toward either further exploration or intensive search in promising regions. The work also presents five new termination rules based on multidimensional homogeneity metrics. These include the Worst Solution Similarity (WSS) rule that checks the stability of the worst solution, the Top Solution Similarity (TSS) rule that monitors the collective behavior of the top solutions, the Bottom Solution Similarity (BOSS) rule focusing on the evolution of the worst solutions, the Solution Range Similarity (SRS) rule measuring the shrinking of solution diversity, and the Improvement Rate Similarity (IRS) rule evaluating the rate of improvement. Furthermore, we introduce the doublebox rule based on the statistical criteria of the search space coverage. These rules provide a comprehensive view of convergence, enabling smarter termination decisions that balance computational efficiency with solution quality.
The rest of the paper is organized as follows: Section 2 presents the classic DE algorithm, the proposed modifications, the majority dimension mechanism, and the termination rules, while Section 3 describes the experimental setup and benchmark results.

2. Materials and Methods

2.1. The Classic DE

The DE algorithm (as presented in Algorithm 1) is a population-based stochastic optimization technique that operates through a series of well-defined steps. The algorithm begins with an initialization phase where key parameters are set, including the population size, crossover probability ( C R ), differential weight, and maximum number of iterations. The population is randomly initialized within specified bounds, and each individual’s fitness is evaluated to establish the initial global best solution and its corresponding fitness value. The core optimization process consists of a main loop that runs for the specified maximum iterations. For each individual in the population, the algorithm performs parent selection by randomly choosing three distinct individuals from the population (excluding the current one). These selected individuals are used to create a trial vector through a mutation and crossover process. The mutation operation combines the differences between selected individuals, scaled by the differential weight, while the crossover operation determines which dimensions of the trial vector come from the mutated vector versus the original individual, controlled by the CR parameter. Following the creation of the trial vector, the algorithm evaluates its fitness and compares it with the original individual’s fitness. If the trial vector demonstrates better or equal performance, it replaces the original individual in the population. The global best solution is updated whenever an improvement is found. An optional local search phase can be incorporated, where individuals undergo refinement with a certain probability, potentially leading to further improvements in solution quality. The algorithm terminates when either the maximum number of iterations is reached or when specified termination criteria are met. Upon termination, the best solution found during the optimization process is returned as the output. 
This classic DE framework provides a robust approach to optimization problems, balancing exploration of the search space with exploitation of promising regions through its differential mutation and selection mechanisms. The algorithm’s effectiveness stems from its simplicity, adaptability, and ability to handle nonlinear multimodal optimization problems without requiring gradient information.
Algorithm 1 Pseudocode of classic DE.
  • Initialization:
    (a) Set parameters:
      • Population size NP
      • Crossover probability CR ∈ [0, 1]
      • Differential weight F ∈ [0, 2], here F = 0.9
      • Maximum number of iterations iter_max
    (b) Randomly initialize population samples[i], i = 1..NP, within bounds
    (c) Evaluate initial fitness fitnessArray[i] for each individual
    (d) Initialize global best solution bSample and its fitness bF2x
  • Main Optimization Loop (for iteration = 1 to iter_max):
    For each individual sample[i] in the population (i = 1 to NP):
    (a) Parent Selection:
      • Randomly select 3 distinct indices indexA, indexB, indexC ≠ i
    (b) Trial Vector Creation:
      • Select a random dimension R
      • For each dimension d (1 to dimension):
        • r = random number in [0, 1]
        • If r ≤ CR or d = R:
          trialsample[d] = sample[indexA][d] + F × (sample[indexB][d] − sample[indexC][d])
          Apply boundary correction if needed
        • Else:
          trialsample[d] = sample[i][d]
    (c) Selection:
      • Evaluate trial vector: minValue = f(trialsample)
      • If minValue ≤ fitnessArray[i]:
        • Update sample[i] = trialsample
        • Update fitnessArray[i] = minValue
        • Update global best bSample if improved
  • Local Search Phase LS (optional):
    For each individual i:
    (a) r = random number in [0, 1]
    (b) If r < localSearchRate:
      refinedFitness = localSearch(sample[i])
      If refinedFitness < fitnessArray[i]:
        Update sample[i] and fitnessArray[i]
        Update global best bSample if improved
  • Termination:
    (a) Stop when the maximum number of iterations is reached or any of the termination rules of Section 2.4 is satisfied
    (b) Return best solution bSample
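The steps of Algorithm 1 can be sketched in Python as follows. This is a minimal, hedged DE/rand/1/bin implementation: parameter names (NP, CR, F) mirror the pseudocode, the optional local search phase is omitted, and the objective and bounds are supplied by the caller:

```python
import random

def classic_de(f, bounds, NP=30, CR=0.9, F=0.9, iter_max=200, seed=1):
    """Minimal sketch of classic DE (Algorithm 1), without local search."""
    rng = random.Random(seed)
    dim = len(bounds)
    # (b) Randomly initialize population within bounds.
    samples = [[rng.uniform(a, b) for a, b in bounds] for _ in range(NP)]
    # (c) Evaluate initial fitness; (d) record the global best.
    fitness = [f(x) for x in samples]
    best_i = min(range(NP), key=lambda i: fitness[i])
    b_sample, bF2x = samples[best_i][:], fitness[best_i]

    for _ in range(iter_max):
        for i in range(NP):
            # (a) Parent selection: three distinct indices different from i.
            ia, ib, ic = rng.sample([j for j in range(NP) if j != i], 3)
            # (b) Trial vector creation with forced crossover dimension R.
            R = rng.randrange(dim)
            trial = samples[i][:]
            for d in range(dim):
                if rng.random() <= CR or d == R:
                    v = samples[ia][d] + F * (samples[ib][d] - samples[ic][d])
                    lo, hi = bounds[d]
                    trial[d] = min(max(v, lo), hi)  # boundary correction
            # (c) Selection: keep the trial vector if it is at least as good.
            fv = f(trial)
            if fv <= fitness[i]:
                samples[i], fitness[i] = trial, fv
                if fv < bF2x:
                    b_sample, bF2x = trial[:], fv
    return b_sample, bF2x
```

On a smooth unimodal test function (e.g., the sphere) this sketch converges quickly; the paper's full method additionally applies the optional local search phase and the termination rules of Section 2.4 instead of a fixed iteration budget.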

2.2. Detailed Formulation of the Algorithm

The modified differential evolution algorithm (NEWDE) represents an enhanced version of the classic DE algorithm, incorporating innovative strategies to improve performance. The algorithm begins with an initialization phase where the population size is determined and parameters such as crossover probability ( C R ), local search probability ( L S ), and problem boundaries are initialized. The initial population is randomly generated, each individual’s fitness is evaluated, and the global best solution along with its corresponding fitness value is identified. In the main optimization loop, which runs until the maximum number of iterations is reached, the algorithm implements a novel strategy selection approach. For each individual in the population, a random number (randStrategy) is generated to determine which strategy to follow: Strategy 1 is selected with 20% probability and Strategy 2 with 80% probability. Strategy 1 focuses on exploring the solution space. For each dimension, the distance from the current best solution is calculated and a new trial vector is created that tends to move away from the best solution, using a random coefficient. This ensures broad coverage of the search space. Strategy 2 concentrates on exploiting promising regions. Random individuals are selected from the population to create a trial vector with a dynamically adjusted differential weight, which combines information from multiple individuals. This coefficient ranges between 0.5 and 2.5, providing greater flexibility in the search process. After creating the trial vector, its fitness is evaluated and compared with that of the original individual. If a better solution is found, the population and global best solution are updated. Optionally, a local search phase can be applied to selected individuals to further improve the solutions. The algorithm terminates when either the maximum number of iterations is reached or when specific termination criteria are satisfied.
The final output is the best solution found during the optimization process. This modified algorithm effectively combines exploration and exploitation of the search space, delivering enhanced performance for complex optimization problems.
The study acknowledges certain limitations of the proposed algorithm, particularly regarding its performance in multidimensional search spaces and the handling of dynamically changing problems. Specifically, in cases of high dimensionality, a significant degradation of efficiency is observed due to increased computational complexity and the difficulty in maintaining population diversity. A central factor contributing to this degradation is the relationship between the problem’s dimensionality and population size. When dimensionality exceeds a critical percentage of the population size, the algorithm’s ability to effectively generate differential terms diminishes as the number of possible combinations becomes insufficient for thorough space exploration. To address this issue, we propose an adaptive strategy that dynamically adjusts the number of agents participating in trial vector creation. The core principle of this strategy involves limiting the number of agents to a percentage of the total population when dimensionality surpasses a defined threshold. This threshold is set at 30% of the population size, a value that balances the need for sufficient diversity with computational load management. The implementation of this strategy yields two primary effects. First, it reduces computational burden in high-dimensional problems without compromising solution quality. Second, it preserves the algorithm’s effectiveness in lower-dimensional problems, where the full number of dimensions can be utilized without constraints. Overall, this approach enhances the algorithm’s scalability and provides a feasible solution to challenges encountered in multidimensional problems. However, its effectiveness depends to some degree on proper configuration of other parameters, such as the limiting percentage and population size, underscoring the need for further research in this area. The proposed modified version of the differential evolution method is outlined in Algorithm 2.
Algorithm 2 Pseudocode of modified differential evolution with dual mutation strategies.
  • Initialization Phase:
    (a) Set population size NP
    (b) Initialize parameters:
      • Crossover probability CR ∈ [0, 1]
      • Local search probability localSearchRate
      • Problem boundaries lmargin, rmargin
    (c) Randomly initialize population samples[i], i = 1..NP
    (d) Evaluate initial fitness fitnessArray[i] for all individuals
    (e) Identify global best solution bSample and its fitness bF2x
  • Main Optimization Loop (until max iterations reached):
    For each individual samples[i] in the population (i = 1 to NP):
    (a) Strategy Selection:
      • Generate random number randStrategy ∈ [0, 1]
      • If randStrategy < 0.2: Use Strategy 1
      • Else: Use Strategy 2
    (b) Mutation Phase:
    Strategy 1 (Exploration):
      • For each dimension d:
        Generate random number r ∈ [0, 1]
        distance = samples[i][d] − bSample[d]
        trialsample[d] = bSample[d] − r × distance
        Apply boundary repair if needed
    Strategy 2 (Exploitation):
      • Select dimension + 1 distinct random indices from the population (≠ i)
      • Choose a random dimension R
      • For each dimension d:
        • r2 = random number in [0, 1]
        • If r2 > CR or d = R:
          Generate random number r3 ∈ [0.0, 1.0]
          F = 0.5 + 2 × r3 [9]
          trialsample[d] = samples[index1][d] + F × (samples[index2][d] − samples[otherIndices[i]][d])
          Apply boundary repair
        • Else:
          trialsample[d] = samples[i][d]
    (c) Selection:
      • Evaluate trial vector: minValue = f(trialsample)
      • If minValue < fitnessArray[i]:
        samples[i] = trialsample
        fitnessArray[i] = minValue
        If minValue < bF2x:
          Update global best bSample = trialsample
          bF2x = minValue
  • Local Search Phase LS (optional):
    For each individual i:
    (a) r = random number in [0, 1]
    (b) If r < localSearchRate:
      refinedFitness = localSearch(samples[i])
      If refinedFitness < fitnessArray[i]:
        Update samples[i] and fitnessArray[i]
        Update global best if improved
  • Termination:
    (a) Stop when the maximum number of iterations is reached or any of the termination rules of Section 2.4 is satisfied
    (b) Return global best solution bSample
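The dual mutation step of Algorithm 2 can be sketched in Python as follows. This is a hedged simplification: for brevity, Strategy 2 uses three random agents rather than the dimension + 1 agents of the pseudocode, and only the mutation/crossover of a single individual is shown; names otherwise follow the paper's notation:

```python
import random

def dual_mutation(samples, i, b_sample, bounds, CR=0.9, rng=random):
    """Build one trial vector using the dual mutation strategy (sketch)."""
    dim = len(bounds)
    trial = samples[i][:]
    if rng.random() < 0.2:
        # Strategy 1 (exploration): step away from the best solution
        # by a random fraction of the distance to it.
        for d in range(dim):
            r = rng.random()
            distance = samples[i][d] - b_sample[d]
            v = b_sample[d] - r * distance
            lo, hi = bounds[d]
            trial[d] = min(max(v, lo), hi)  # boundary repair
    else:
        # Strategy 2 (exploitation): dynamically weighted differential term.
        idx = rng.sample([j for j in range(len(samples)) if j != i], 3)
        R = rng.randrange(dim)
        for d in range(dim):
            if rng.random() > CR or d == R:
                F = 0.5 + 2.0 * rng.random()  # dynamic weight F in [0.5, 2.5]
                v = samples[idx[0]][d] + F * (samples[idx[1]][d] - samples[idx[2]][d])
                lo, hi = bounds[d]
                trial[d] = min(max(v, lo), hi)  # boundary repair
    return trial
```

The returned trial vector would then go through the same greedy selection step as in classic DE, replacing samples[i] only if its fitness improves.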

2.3. Majority Dimension Mechanism

The majority dimension mechanism plays a significant role in optimization algorithms, where it serves as a guiding tool for the search process. It operates by comparing a current sample s against two critical reference points: an optimal sample b representing a known good solution or minimum and a worse sample w representing less desirable solutions. The mechanism’s operation begins with a dimension-by-dimension analysis of the sample. For each vector element, it calculates the absolute differences between the sample and the corresponding elements of both reference vectors. The mechanism then determines which reference vector the current sample aligns with more closely in each individual dimension. When the function returns true, indicating the sample is closer to the optimal sample in most dimensions, this signifies the current position is near a previously discovered minimum. In this case, the optimization algorithm may make decisions such as changing direction, increasing the search step size, or applying randomized variations, aiming to avoid local minima traps and explore new regions of the solution space. Conversely, when the function returns false, indicating the sample is closer to the worse sample, this suggests the current position warrants further exploration. Here, the algorithm may focus its search on the current region, hoping for further solution improvement. In a Euclidean parameter space, this mechanism enables dynamic search adaptation. When leading to true, it encourages the algorithm to make larger jumps toward new directions or employ local minima escape methods. When leading to false, it concentrates the search around the current area for more detailed exploration. 
The key advantages of this mechanism include its ability to prevent stagnation at suboptimal solutions, its operational flexibility by focusing on individual dimensions rather than aggregate measures, and its broad applicability to various optimization algorithms from gradient-based to evolutionary approaches. In summary, the majority dimension mechanism functions as an intelligent director in optimal solution searches. It offers a balanced approach between exploring new regions and exploiting known good solutions, enhancing both the efficiency and reliability of the optimization process without being constrained by local optima.
The formula (function) of the majority dimension mechanism:
$$\text{checkSample}(s, b, w) = \begin{cases} \text{true}, & \text{if } \sum_{d=1}^{dimension} \mathbb{1}\left[\,|s_d - b_d| < |s_d - w_d|\,\right] > \sum_{d=1}^{dimension} \mathbb{1}\left[\,|s_d - w_d| < |s_d - b_d|\,\right] \\ \text{false}, & \text{otherwise} \end{cases}$$
where $\mathbb{1}[\cdot]$ denotes the indicator function and
$s = [s_1, s_2, \ldots, s_d]$: the sample being evaluated;
$b = [b_1, b_2, \ldots, b_d]$: the best sample;
$w = [w_1, w_2, \ldots, w_d]$: the worst sample.
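The formula translates directly into a short Python function: count the dimensions in which the sample lies strictly closer to the best sample than to the worst, and vice versa, then compare the two counts.

```python
def check_sample(s, b, w):
    """Majority dimension mechanism: True if s is closer to the best
    sample b than to the worst sample w in a majority of dimensions.
    Dimensions with equal distances count toward neither side, so a
    tie yields False (the 'else' branch of the formula)."""
    closer_to_best = sum(
        1 for sd, bd, wd in zip(s, b, w) if abs(sd - bd) < abs(sd - wd)
    )
    closer_to_worst = sum(
        1 for sd, bd, wd in zip(s, b, w) if abs(sd - wd) < abs(sd - bd)
    )
    return closer_to_best > closer_to_worst
```

A True result signals proximity to an already discovered minimum (triggering exploration moves), while False keeps the search concentrated in the current region.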
Several similar approaches to MDM exist that utilize dimensional information to enhance algorithm performance. For instance, algorithms such as SaDE [30] and jDE [31] implement dimension-wise parameter adaptation but without explicitly comparing dimensions against better or worse samples. Similarly, methods like CMA-ES [32] analyze relationships between dimensions through covariance matrices, yet they do not incorporate a voting system. Other techniques, such as Opposition-Based Learning [33], generate opposing solutions for exploration purposes but do not evaluate each dimension separately. The novelty of MDM lies in its majority voting system that compares dimensions based on their distance from superior and inferior solutions. This approach has not been reported in previous studies as existing methods either employ dimensional adaptation without voting mechanisms or use collective criteria without considering per-dimension performance. Thus, MDM introduces a new strategy for dynamically balancing exploration and exploitation that has not been investigated in similar forms within other optimization algorithms.

2.4. The Termination Rules

The algorithm’s termination mechanism evaluates similarity between specific values in the fitness array across consecutive iterations. Each rule focuses on different aspects of population evolution by monitoring changes in critical metrics from the fitness array. For the best solution, the stability of its value is checked across iterations. If the difference remains negligible for a specified number of iterations, similarity is considered achieved. Similarly, an equivalent check is applied to the population’s worst solution, monitoring the stability of its value. Furthermore, the mechanism evaluates similarity among solution groups. For top solutions, it examines the sum of a percentage of the best fitness values per iteration, while for worst solutions it performs an analogous check on the sum of corresponding values. An additional rule measures similarity in solution range by comparing the difference between best and worst values across iterations. Finally, a rule checks similarity in improvement rate by comparing how much the best and worst solutions have improved between iterations. All these rules aim to identify moments when fitness array values show significant similarity for sufficient duration, indicating the population has reached stability. This logic ensures the algorithm terminates only when adequate result stabilization occurs, optimizing runtime without compromising solution quality. Using multiple similarity rules provides a comprehensive convergence picture, covering various aspects of population evolution.
General parameters:
  • $e$ controls the precision requirement (typically set to $10^{-6}$);
  • $sim$ determines the required stability duration (commonly 5–10 iterations).

2.4.1. Best Solution Similarity

The termination mechanism is based on a simple criterion that evaluates the similarity of values in the fitness array. Specifically, at each iteration $iter$, the difference
$$\delta_{sim}(iter) = \left| f_{\min}(iter) - f_{\min}(iter-1) \right|$$
is calculated, where $f_{\min}(iter)$ represents the best fitness value found up to iteration $iter$. If the difference $\delta_{sim}(iter)$ remains less than or equal to a predefined accuracy threshold $e$ for at least $sim$ consecutive iterations, then the population is considered to have converged and the process terminates. This criterion ensures that the algorithm will stop only when minimal progress occurs in the best solution for a sufficient time period, thereby optimizing computational time.
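The rule is naturally stateful (it must remember the previous best value and a stability counter), so a small class is a convenient sketch; the parameter defaults follow the typical values given above:

```python
class BestSolutionSimilarity:
    """Best-solution-similarity termination rule (sketch): stop after the
    best fitness has changed by at most e for sim consecutive iterations."""

    def __init__(self, e=1e-6, sim=8):
        self.e, self.sim = e, sim
        self.prev = None   # best fitness from the previous iteration
        self.count = 0     # consecutive near-identical iterations seen

    def should_stop(self, f_min):
        if self.prev is not None and abs(f_min - self.prev) <= self.e:
            self.count += 1
        else:
            self.count = 0  # progress was made; reset the stability counter
        self.prev = f_min
        return self.count >= self.sim
```

Calling should_stop once per generation with the current best fitness returns True only after the required stability duration is observed.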

2.4.2. Worst Solution Similarity

The termination mechanism is based on the difference
$$\delta_{sim}(iter) = \left| f_{\max}(iter) - f_{\max}(iter-1) \right|,$$
where $f_{\max}(iter)$ represents the worst fitness value at iteration $iter$. If $\delta_{sim}(iter) \le e$ for $sim$ consecutive iterations, the algorithm terminates, indicating the population has stabilized. This guarantees termination only occurs when the worst solution stops improving significantly. This logic provides an efficient way to monitor population stability, optimizing computational time without compromising solution quality. The mechanism is particularly useful in applications where fitness value variance across the population is a critical factor for determining convergence.

2.4.3. Top Solution Similarity

The termination mechanism evaluates the difference
$$\delta_{best} = \left| \sum_{i=NP-K}^{NP} f_{sorted}^{(iter)}[i] - \sum_{i=NP-K}^{NP} f_{sorted}^{(iter-1)}[i] \right|,$$
with $K = \lfloor NP \times sumRate \rfloor$, where $f_{sorted}^{(iter)}[i]$ represents the $i$-th best fitness value in the sorted population at iteration $iter$, $NP$ is the population size, and $K$ is an integer smaller than $NP$. The process terminates when the difference $\delta_{best}$ remains below a specified threshold $e$ for a predetermined number of iterations, indicating stabilization of the overall performance of the population’s top solutions. This criterion focuses on monitoring the collective trend of the $K$ best solutions, providing a robust and reliable convergence measure. It proves particularly valuable in optimization problems where the stability of the population’s elite solutions serves as a critical indicator of the algorithm’s final convergence, ensuring the process completes only when sufficient stability is achieved among the top-performing solutions.
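A minimal sketch of this rule follows; sum_rate is the fraction of the population included in the elite sum, and the stateful counter mirrors the other similarity rules:

```python
def top_sum(fitness, sum_rate=0.2):
    """Sum of the K best (smallest) fitness values, K = floor(NP * sum_rate)."""
    K = max(1, int(len(fitness) * sum_rate))
    return sum(sorted(fitness)[:K])

class TopSolutionSimilarity:
    """Top-solution-similarity rule (sketch): stop once the sum of the K
    best fitness values is stable for sim consecutive iterations."""

    def __init__(self, e=1e-6, sim=8, sum_rate=0.2):
        self.e, self.sim, self.sum_rate = e, sim, sum_rate
        self.prev, self.count = None, 0

    def should_stop(self, fitness):
        s = top_sum(fitness, self.sum_rate)
        if self.prev is not None and abs(s - self.prev) <= self.e:
            self.count += 1
        else:
            self.count = 0
        self.prev = s
        return self.count >= self.sim
```

The bottom-solution-similarity rule of the next subsection is the mirror image: replace sorted(fitness)[:K] with the K largest values.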

2.4.4. Bottom Solution Similarity

The termination mechanism evaluates the difference
$$\delta_{worst} = \left| \sum_{i=NP-K}^{NP} f_{sorted}^{(iter)}[i] - \sum_{i=NP-K}^{NP} f_{sorted}^{(iter-1)}[i] \right|,$$
with $K = \lfloor NP \times sumRate \rfloor$, where $f_{sorted}^{(iter)}[i]$ represents the $i$-th worst fitness value in the sorted population at iteration $iter$, with $NP$ denoting the total population size and $K$ being an integer smaller than $NP$. The process terminates when the difference $\delta_{worst}$ remains below a predefined accuracy threshold $e$ for a specified number of consecutive iterations, indicating that the population’s worst solutions have stabilized. This approach focuses on monitoring the collective behavior of the $K$ worst solutions, providing an additional convergence metric that complements classical termination criteria. It proves particularly valuable in optimization problems where the evolution of low-quality solutions serves as a critical factor for determining the algorithm’s overall convergence. The mechanism ensures the optimization process completes only when stability is achieved in both the best and worst solutions of the population, offering a more comprehensive convergence assessment. The $\delta_{worst}$ metric serves as an effective early-warning system for population stagnation, particularly useful in multimodal optimization where poor solutions may indicate unexplored regions of the search space. By incorporating both solution quality extremes in termination decisions, the algorithm achieves more robust performance across diverse problem landscapes.

2.4.5. Solution Range Similarity

The termination mechanism calculates the range difference
$$\delta_{range} = \left| \left( f_{worst}(iter) - f_{best}(iter) \right) - \left( f_{worst}(iter-1) - f_{best}(iter-1) \right) \right|,$$
where $f_{worst}(iter)$ and $f_{best}(iter)$ represent the worst and best fitness values, respectively, in the current iteration. The process terminates when $\delta_{range}$ remains below a specified threshold $e$ for $sim$ consecutive iterations, indicating stabilization of the solution range within the population. This criterion measures the variation in fitness value range between consecutive iterations, providing a comprehensive perspective on convergence. It proves particularly effective in problems where the contraction of solution diversity serves as a critical convergence indicator. The mechanism enhances termination reliability by simultaneously monitoring both optimal and worst solutions, ensuring the algorithm completes only when the entire population reaches equilibrium. The $\delta_{range}$ metric offers unique advantages in multimodal optimization by capturing global population dynamics rather than just elite solution behavior. When maintained below threshold $e$, it indicates the population has sufficiently explored the search space and is concentrating around promising regions. This criterion works synergistically with other termination rules to provide robust convergence detection across various problem types and population sizes.

2.4.6. Improvement Rate Similarity

This rule compares how much the worst and the best solutions have improved between consecutive iterations:
$$\delta_{imp} = \left| \left( f_{worst}(iter-1) - f_{worst}(iter) \right) - \left( f_{best}(iter-1) - f_{best}(iter) \right) \right|,$$
and the algorithm terminates when $\delta_{imp} \le e$ holds for $sim$ consecutive iterations, indicating that the best and worst solutions are improving at essentially the same rate and further progress is unlikely.
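The quantity $\delta_{imp}$ itself is a one-line computation; a sketch with the same argument order as the formula:

```python
def delta_imp(f_best_prev, f_best, f_worst_prev, f_worst):
    """Improvement-rate similarity metric: absolute difference between the
    improvement of the worst solution and that of the best solution."""
    return abs((f_worst_prev - f_worst) - (f_best_prev - f_best))
```

As with the other rules, termination would require delta_imp to stay at or below e for sim consecutive iterations; when both extremes stagnate equally, the metric drops to zero.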

2.4.7. Termination Rule: Doublebox

The parameter $NP$ represents the number of agents participating in the algorithm. The termination criterion is the so-called doublebox rule, which was first proposed in the work of Tsoulos [34]. According to this criterion, the search process terminates when sufficient coverage of the search space has been achieved. The coverage estimation is based on the asymptotic approximation of the relative proportion of points leading to local minima. Since the exact coverage cannot be directly calculated, sampling is performed over a wider region. The search is interrupted when the variance of the sample distribution falls below a predefined threshold, which is adjusted based on the most recent discovery of a new local minimum. Specifically, the algorithm terminates when the relative variance $\sigma(iter)$ falls below half of the variance $\sigma(k_{last})$ from the last iteration $k_{last}$ at which a new optimal function value was found, that is, when
$$\sigma(iter) \le \frac{\sigma(k_{last})}{2}.$$
This logic ensures the algorithm will terminate either when the maximum allowed number of iterations is exhausted or the variance of results indicates that sufficient exploration of the solution space has been achieved. The combination of these criteria provides a balanced termination system that considers both computational cost and solution quality.
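The condition can be sketched as follows. This is a hedged illustration, not the exact implementation of [34]: the running variance is computed over the per-iteration best values, and the recorded variance is refreshed whenever a new best value is found.

```python
class DoubleBox:
    """Sketch of the doublebox rule: stop once the running variance of the
    sampled best values drops to half the variance recorded at the last
    iteration where a new best value was discovered (an assumed, simplified
    bookkeeping of the original criterion)."""

    def __init__(self):
        self.n = 0
        self.sum = 0.0
        self.sum_sq = 0.0
        self.best = float("inf")
        self.sigma_last = None  # variance at the last new-best iteration

    def should_stop(self, f_iter_best):
        # Update running mean/variance of the per-iteration best values.
        self.n += 1
        self.sum += f_iter_best
        self.sum_sq += f_iter_best ** 2
        sigma = self.sum_sq / self.n - (self.sum / self.n) ** 2
        if f_iter_best < self.best:
            # New best found: remember the variance at this iteration.
            self.best = f_iter_best
            self.sigma_last = sigma
        return (self.sigma_last is not None
                and self.sigma_last > 0
                and sigma <= self.sigma_last / 2.0)
```

Because sigma_last is refreshed at every improvement, long stretches without a new best are exactly what drive the running variance below the threshold.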

2.4.8. Termination Rule: All

The work of Charilogis et al. [35] introduces an innovative combination of termination criteria, which is incorporated and expanded in the present article through the “All” criterion. This criterion triggers the optimization termination when any of the individual criteria are satisfied, ensuring an optimal balance between convergence speed and solution accuracy. Its dynamic adaptability allows optimal adjustment of the process according to the specific characteristics of each optimization problem. The “All” criterion operates as a logical disjunction (OR) of all basic termination rules, offering exceptional flexibility for both serial and parallel applications. In environments where different algorithms or computing units may converge at heterogeneous rates, this criterion ensures immediate termination once satisfactory convergence is achieved, avoiding unnecessary computations without compromising solution quality. The main advantages of the “All” criterion include enhanced efficiency with significant reduction in objective function evaluations, broad adaptability to various problem types (from smooth to highly multimodal functions), and easy integration with existing optimization methods. This integration ease makes it an ideal choice for both academic research and practical applications. In conclusion, the “All” criterion represents an optimal synthesis of multiple termination mechanisms, offering an intelligent solution for managing the trade-off between accuracy and computational cost. The success of this approach highlights the importance of hybrid methodologies in modern optimization research, paving the way for further refinements and applications to complex real-world problems.
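Since the “All” criterion is a logical disjunction of the individual rules, it reduces to a one-liner once each rule is expressed as a predicate over the current iteration state; the state keys below are illustrative assumptions:

```python
def all_criterion(rules, state):
    """'All' termination rule (sketch): fire as soon as ANY individual
    rule is satisfied for the current iteration state."""
    return any(rule(state) for rule in rules)

# Illustrative predicates; real rules would wrap the stateful checkers
# (e.g., BSS, WSS, doublebox) defined for each criterion.
rules = [
    lambda s: s["iter"] >= 100,       # iteration budget exhausted
    lambda s: s["delta"] <= 1e-6,     # best-solution similarity reached
]
```

Because any() short-circuits, cheap rules can be listed first so that expensive checks run only when the cheap ones have not yet fired.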

3. Experimental Setup and Benchmark Results

For the experimental comparisons, we selected the optimization methods GA, WOA, PSO, ACO, and SaDE because they represent different categories of metaheuristic algorithms and are widely established in the literature. These methods are based on diverse principles (biological/physical mimicry and population-based techniques) and do not require gradient information, making them particularly suitable for the non-differentiable multimodal problems examined in our study. We specifically included SaDE as an adaptive DE variant to demonstrate that our proposed NEWDE + MDM outperforms even advanced adaptive variants while retaining a simpler design. The multi-objective methods NSGA-II and NSGA-III were excluded because our study focuses on single-objective optimization problems, whereas NSGA-II/III are designed for problems with multiple objectives. Furthermore, the termination rules we propose specifically address convergence in single-objective optimization and are not directly compatible with multi-objective optimization requirements.
The new termination rules (BSS, WSS, TSS, BOSS, SRS, IRS, and doublebox) were carefully selected as they collectively cover different aspects of the convergence process: BSS monitors best solution stability, WSS tracks worst solution evolution, TSS and BOSS assess population-wide behavior, SRS measures value range contraction, IRS evaluates improvement rate, and doublebox analyzes solution distribution. This comprehensive set of rules enables holistic convergence assessment beyond simply tracking the optimal solution. Potential additional termination rules for future research could include metrics like diversity difference, gradient change, and entropy level. However, these would incur additional computational costs and may not be universally applicable across all optimization problem types. Therefore, our current study concentrates on the aforementioned seven rules, which together provide balanced coverage of various convergence process aspects while maintaining computational efficiency.
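As an example of how such rules are typically realized, a BSS-style check can be written as a small stateful object. This is a sketch: the similarity count of 8 mirrors the Ns setting in Table 1, while the numerical tolerance is an assumption.

```python
class BestSolutionStopping:
    """BSS-style rule sketch: terminate when the best objective value has
    remained (numerically) unchanged for ns consecutive generations."""

    def __init__(self, ns=8, eps=1e-12):
        self.ns = ns          # similarity max count (Ns in Table 1)
        self.eps = eps        # tolerance for "unchanged" (assumed)
        self.prev_best = None
        self.count = 0

    def update(self, best_value):
        """Feed the best value of the current generation; returns True
        when the rule says the search should stop."""
        if self.prev_best is not None and abs(best_value - self.prev_best) <= self.eps:
            self.count += 1   # best value stagnated for another generation
        else:
            self.count = 0    # progress was made; reset the counter
        self.prev_best = best_value
        return self.count >= self.ns
```

The other rules (WSS, TSS, and so on) follow the same pattern but track a different quantity per generation.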
The following is Table 1, with the relevant parameter settings of the methods.
Table 1. Parameters and settings.
Parameter | Value | Explanation
NP | 500 | Population size for all methods
iter_max | 200 | Maximum number of iterations for all methods
CR | 0.9 for classic DE and NEWDE; 0.1 for SaDE | Crossover probability
F | 0.8 for DE; random F ∈ [0.5, 1.5] for NEWDE; 0.1 for SaDE | Differential weight
SR | BSS for Table 2 and Table 3 | Stopping rule
Ns | 8 | Similarity max count for all stopping rules
LS | 0.02 (2%) | Local search rate for all methods
Ts | 8 | Tournament size for GA selection
C_rate | 0.1 (10%) | Crossover rate for GA
M_rate | 0.05 (5%) | Mutation rate for GA
CT | z_child,i = a_i z_i + (1 − a_i) w_i, w_child,i = a_i w_i + (1 − a_i) z_i, for every pair (z, w) of parents | Crossover type for GA
c1 | 0.5 | Cognitive coefficient for PSO
c2 | 0.5 | Social coefficient for PSO
w | w ∈ [0.4, 0.9] (decreasing) | Inertia for PSO
p | p ∈ [0.1, 0.5] (increasing) | Evaporation rate for ACO
μ_F | 0.5 initially | Mean of the differential weight for SaDE
μ_CR | 0.5 initially | Mean of the crossover probability for SaDE
T_L | 50 | Learning period for SaDE
T_U | 50 | Parameter update period for SaDE
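The GA crossover in Table 1 pairs parents z and w and blends them component-wise. A short sketch follows; it assumes, as is standard for this operator, that each coefficient a_i is drawn uniformly from [0, 1], which Table 1 does not state explicitly.

```python
import random

def arithmetic_crossover(z, w, rng=random):
    """Component-wise blend from Table 1:
       z_child,i = a_i * z_i + (1 - a_i) * w_i
       w_child,i = a_i * w_i + (1 - a_i) * z_i
    with a fresh coefficient a_i for every dimension i."""
    z_child, w_child = [], []
    for z_i, w_i in zip(z, w):
        a_i = rng.random()  # assumed uniform in [0, 1]
        z_child.append(a_i * z_i + (1 - a_i) * w_i)
        w_child.append(a_i * w_i + (1 - a_i) * z_i)
    return z_child, w_child
```

Note that z_child,i + w_child,i = z_i + w_i in every dimension, so each pair of offspring stays on the segment spanned by its parents.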
Table 2. Comparison of classic DE with new DE and new DE + MDM.
Function | DE | NEWDE | NEWDE + MDM
ACKLEY | 19,688 | 15,089 | 7249
BF1 | 10,683 | 8537 | 5204
BF2 | 10,419 | 8416 | 4775
BF3 | 10,013 | 7572 | 4285
BRANIN | 5670 | 5852 | 2762
CAMEL | 8116 | 9206 | 5358
DIFFPOWER2 | 14,547 | 11,793 | 10,734
DIFFPOWER5 | 31,332 | 33,168 | 25,736
DIFFPOWER10 | 39,317 | 38,228 | 40,297
EASOM | 4662 | 4589 | 3664
ELP10 | 7834 | 7471 | 6053
ELP20 | 10,648 | 10,180 | 8809
ELP30 | 13,003 | 12,573 | 10,957
EXP4 | 6986 | 7134 | 3824
EXP8 | 7326 | 7141 | 4329
GKLS250 | 7238 | 9351 | 3181
GKLS350 | 8107 | 8314 | 1811 (0.96)
GOLDSTEIN | 9013 | 7751 | 5026
GRIEWANK2 | 12,785 | 10,309 | 4397 (0.66)
GRIEWANK10 | 16,527 | 16,607 | 9553
HANSEN | 7748 | 7768 | 6432
HARTMAN3 | 6434 | 6450 | 3179
HARTMAN6 | 7181 | 6827 | 4790
POTENTIAL3 | 7871 | 7677 | 5994
POTENTIAL5 | 11,961 | 11,698 | 10,653
POTENTIAL6 | 15,460 (0.56) | 15,449 (0.7) | 12,775 (0.86)
POTENTIAL10 | 22,602 | 21,695 | 20,237
RASTRIGIN | 10,597 | 9556 | 4639 (0.93)
ROSENBROCK4 | 10,336 | 10,513 | 8729
ROSENBROCK8 | 12,909 | 12,799 | 11,259
ROSENBROCK16 | 16,527 | 16,920 | 15,377
SHEKEL5 | 7772 | 7533 | 5306
SHEKEL7 | 8013 | 7586 | 5052
SHEKEL10 | 8306 | 7527 | 5120
SINU4 | 8904 | 7515 | 6723
SINU8 | 8968 | 7411 | 6795
SINU16 | 12,241 | 10,472 | 8577
TEST2N4 | 7874 | 8107 | 4292
TEST2N5 | 9347 | 9109 | 3907 (0.96)
TEST2N7 | 12,006 | 11,153 (0.9) | 4565 (0.7)
TEST30N3 | 7296 | 7572 | 4558
TEST30N4 | 9103 | 8804 | 5958
Total | 483,370 (0.98) | 459,422 (0.99) | 332,921 (0.97)
Table 3. Comparison of new DE with majority dimension mechanism method versus others.
Function | GA | WOA | PSO | ACO | SaDE | NEWDE + MDM
ACKLEY | 10,436 | 33,824 | 9279 | 9091 | 11,396 | 7249
BF1 | 6265 | 13,475 | 6311 | 7047 (0.66) | 9280 | 5204
BF2 | 5995 | 13,693 | 5884 | 6908 (0.76) | 8668 | 4775
BF3 | 5559 | 21,432 | 5509 | 6450 (0.83) | 7436 | 4285
BRANIN | 4536 | 8619 | 4667 | 5944 | 5424 | 2762
CAMEL | 5017 | 8902 | 5050 | 4757 | 6558 | 5358
DIFFPOWER2 | 8704 | 13,806 | 10,988 | 11,556 | 13,321 | 10,734
DIFFPOWER5 | 23,774 | 41,028 | 27,029 | 55,183 | 32,831 | 25,736
DIFFPOWER10 | 25,511 | 58,060 | 34,319 | 84,710 | 38,694 | 40,297
EASOM | 4080 | 5437 | 4134 | 4203 | 4611 | 3664
ELP10 | 5663 | 22,804 | 6588 | 4637 | 7493 | 6053
ELP20 | 8800 | 36,649 | 8953 | 5065 | 11,227 | 8809
ELP30 | 12,757 | 42,506 | 11,075 | 5416 | 15,027 | 10,957
EXP4 | 5163 | 8397 | 5163 | 5935 | 6612 | 3824
EXP8 | 5318 | 11,478 | 5440 | 6197 | 6935 | 4329
GKLS250 | 4575 | 7612 | 4628 | 4459 | 5963 | 3181
GKLS350 | 5184 | 10,088 | 4769 | 4614 | 7826 | 1811 (0.96)
GOLDSTEIN | 5932 | 11,804 | 6051 | 7248 | 6918 | 5026
GRIEWANK2 | 7485 | 11,135 (0.83) | 5317 | 6430 (0.43) | 12,110 | 4397 (0.63)
GRIEWANK10 | 9393 | 51,041 | 10,239 | 7848 | 14,334 | 9553
HANSEN | 6025 | 15,091 | 5033 | 5091 (0.66) | 6967 | 6432
HARTMAN3 | 4936 | 10,911 | 5102 | 5408 | 6098 | 3179
HARTMAN6 | 5419 | 21,302 | 5825 | 6154 (0.7) | 7141 | 4790
POTENTIAL3 | 6455 | 12,705 | 6998 | 6854 | 8026 | 5994
POTENTIAL5 | 9878 | 66,605 | 12,339 | 9702 | 12,204 | 10,653
POTENTIAL6 | 13,891 (0.76) | 10,648 (0.93) | 13,945 (0.46) | 11,166 (0.06) | 15,368 (0.76) | 12,775 (0.86)
POTENTIAL10 | 16,834 | 197,044 | 17,948 | 12,540 (0.3) | 28,241 | 20,237
RASTRIGIN | 6868 | 10,530 | 5756 | 5346 (0.4) | 9198 | 4639 (0.93)
ROSENBROCK4 | 6414 | 18,576 | 7611 | 4848 | 9668 | 8729
ROSENBROCK8 | 8128 | 25,777 | 10,198 | 5374 | 12,075 | 11,259
ROSENBROCK16 | 11,678 | 37,759 | 13,529 | 5987 | 17,178 | 15,377
SHEKEL5 | 5705 | 22,886 | 5915 | 6815 (0.56) | 7374 | 5306
SHEKEL7 | 5741 | 26,964 | 5938 | 6777 (0.63) | 7469 | 5052
SHEKEL10 | 5829 | 20,334 | 5915 | 6670 (0.46) | 7471 | 5120
SINU4 | 5334 | 13,266 | 5355 | 5687 (0.73) | 7324 | 6723
SINU8 | 5839 | 21,358 | 6188 | 6472 (0.9) | 8606 | 6795
SINU16 | 7278 | 47,713 | 6745 | 8465 (0.73) | 12,963 | 8577
TEST2N4 | 5813 | 16,104 | 5339 | 5752 (0.56) | 7425 | 4292
TEST2N5 | 6516 | 18,131 | 5644 | 5893 (0.36) | 8810 | 3907 (0.76)
TEST2N7 | 8205 (0.96) | 23,489 (0.63) | 6057 (0.93) | 6199 (0.03) | 10,655 (0.9) | 4565 (0.7)
TEST30N3 | 5635 | 11,307 | 5634 | 6591 | 6663 | 4558
TEST30N4 | 6594 | 18,229 | 6464 | 8813 | 7837 | 5958
Total | 335,162 (0.99) | 1,098,519 (0.98) | 350,871 (0.98) | 406,302 (0.8) | 457,425 (0.99) | 332,921 (0.97)

3.1. Test Functions

The experiments were conducted on a wide range of test functions [19,36,37], as shown in Table 4.

3.2. Experimental Results

For the aforementioned functions, a series of tests were conducted on a computer equipped with an AMD Ryzen 5950X processor and 128 GB of RAM, running Debian Linux. Each test was repeated 30 times with new random values in each repetition, and the average results were recorded. The tool used was developed in ANSI C++ using the GLOBALOPTIMUS [41] platform, which is open-source and available at https://github.com/itsoulos/GLOBALOPTIMUS. The parameter settings of the method are shown in Table 1.
In the following experimental results, the values in the cells correspond to the average number of function calls over 30 repetitions. The numbers in parentheses indicate the percentage of cases where the method successfully found the global minimum. If no parentheses are present, it means the method was 100% successful in all the tests.
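This reporting convention can be sketched as follows. The tolerance used to decide that a run reached the global minimum is an assumption; the paper does not state it in this section.

```python
def summarize(runs, global_min=0.0, tol=1e-4):
    """Average the function-call counts over independent runs and report
    the fraction of runs that located the known global minimum.
    Each run is a dict with the call count and the best value found."""
    mean_calls = sum(r["calls"] for r in runs) / len(runs)
    hits = sum(1 for r in runs if abs(r["fmin"] - global_min) <= tol)
    return mean_calls, hits / len(runs)
```

A cell such as “1811 (0.96)” then corresponds to mean_calls = 1811 with a success fraction of 0.96, and a cell without parentheses to a fraction of 1.0.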
Table 2 presents comparative results between three versions of the differential evolution algorithm: the classic DE, the modified NEWDE, and NEWDE + MDM. The data concern performance on a series of standard test functions, with measurements including the number of objective function evaluations and the success rate of finding the global minimum. From the analysis of the results, we observe that the modified NEWDE outperforms the classic DE in most cases, with a reduced number of objective function evaluations. For example, for the ACKLEY function, NEWDE requires 15,089 evaluations compared to 19,688 for classic DE, while, for the BF1 function, the respective evaluations are 8537 versus 10,683. This improvement becomes even more pronounced with the addition of the MDM, where the evaluation counts drop to 7249 for ACKLEY and 5204 for BF1. The MDM appears to offer significant advantages, particularly for complex functions like GKLS350, where the numbers of evaluations drop from 8107 (classic DE) and 8314 (NEWDE) to just 1811, with a 96% success rate. A similarly impressive improvement is observed for the RASTRIGIN function, with the evaluations decreasing from 10,597 to 4639 (93% success rate). In some cases, such as the DIFFPOWER10 and ROSENBROCK16 functions, the improvements are less significant, suggesting that the algorithm’s effectiveness depends on each function’s characteristics. However, the total sum of the evaluations across all the functions shows a clear reduction from 483,370 (classic DE) to 459,422 (NEWDE) and finally to 332,921 (NEWDE + MDM), with success rates of 98%, 99%, and 97%, respectively. The results confirm that combining the modified DE with the MDM leads to significant performance improvement, with reduced objective function evaluations and high success rates in finding the global minimum.
This improvement is particularly notable for functions with multiple local minima and high dimensionality, where classical methods struggle to achieve good performance.
The statistical analysis (Pairwise Wilcoxon Test [42]) conducted using the R programming language to compare classical differential evolution (DE), modified DE (NEWDE), and modified DE with the majority dimension mechanism (NEWDE + MDM) yielded significant conclusions regarding the statistical differences between these methods, as shown in Figure 1. The p-values, which express the levels of statistical significance, revealed a highly significant difference (p < 0.01) between classical DE and modified NEWDE. This indicates that the improvements introduced in NEWDE led to a statistically significant performance enhancement. The comparison between classical DE and NEWDE + MDM showed an extremely significant difference (p < 0.0001), confirming that the addition of the majority dimension mechanism contributes very substantially to improving the algorithm’s effectiveness. Furthermore, the comparison between modified NEWDE and NEWDE + MDM also demonstrated an equally extremely significant difference (p < 0.0001). This proves that the majority dimension mechanism provides additional significant advantages even when compared to the already improved version of the algorithm. Overall, the statistical results confirm that both enhanced versions of the algorithm (NEWDE and NEWDE + MDM) are significantly better than classical DE, with NEWDE + MDM being particularly outstanding due to the incorporation of the majority dimension mechanism. These findings highlight the importance of the proposed modifications for improving the performance of the differential evolution algorithm.
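The paper's pairwise test was run in R (e.g., via pairwise.wilcox.test). For readers who want to reproduce the idea without R, a stdlib-only sketch of the signed-rank statistic is shown below; it computes only W = min(W+, W−), ignores tied absolute differences, and leaves the p-value to a statistics package.

```python
def wilcoxon_W(x, y):
    """Wilcoxon signed-rank statistic for paired samples x, y.
    Didactic sketch: no tie correction, no p-value -- not a replacement
    for R's pairwise.wilcox.test."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    # Rank the differences by absolute magnitude (rank 1 = smallest).
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    w_plus = w_minus = 0.0
    for rank, i in enumerate(order, start=1):
        if diffs[i] > 0:
            w_plus += rank
        else:
            w_minus += rank
    return min(w_plus, w_minus)
```

A small W relative to the sample size indicates that the paired differences are systematically one-sided, which is what the reported p-values quantify.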
Table 3 and Figure 2 present a comparative performance analysis of various optimization methods, including GA, WOA, PSO, Ant Colony Optimization (ACO) [43], self-adaptive differential evolution (SaDE), and NEWDE + MDM. The NEWDE + MDM method demonstrates the best overall performance, with the lowest total number of function evaluations (332,921) and a high success rate (97%). Specifically, for functions such as ACKLEY (7249 evaluations), BF1 (5204), and GKLS350 (1811 evaluations with a 96% success rate), NEWDE + MDM clearly outperforms the other methods. Furthermore, its performance on complex functions like RASTRIGIN (4639 evaluations with a 93% success rate) and HARTMAN3 (3179 evaluations) is particularly impressive. The WOA shows the highest number of evaluations (1,098,519), indicating significantly higher computational costs compared to the other methods. However, its success rate remains high (98%), demonstrating that, despite its inefficiency, the algorithm can provide reliable results for certain problems. Both the GA and PSO show similar performance, with total evaluations of 335,162 and 350,871, respectively, and success rates of 99% and 98%. However, for specific functions like DIFFPOWER10, PSO appears to be slightly superior, while the GA performs better on functions like SHEKEL5 and SHEKEL7. The ACO algorithm shows the lowest success rate (80%) and a high evaluation count (406,302), making it less efficient compared to the other methods. Nevertheless, for certain functions like ELP10 (4637 evaluations) and GKLS250 (4459 evaluations), ACO can be competitive. The SaDE method demonstrates interesting results, with a total number of objective function evaluations (457,425) higher than NEWDE + MDM but lower than the WOA while maintaining a very high success rate (0.99), tied with the GA for the highest among all the methods.
This indicates that SaDE, while less efficient than NEWDE + MDM in terms of required function evaluations, offers exceptional reliability in finding the global minimum, particularly in complex optimization problems where convergence to local minima is a frequent pitfall. Overall, the results confirm that NEWDE + MDM is the most effective method, particularly for problems with multiple local minima and high dimensionality. While the other methods may be effective in specific cases, they cannot match the overall performance of NEWDE + MDM. This analysis highlights the importance of selecting the appropriate optimization algorithm based on the problem characteristics.
The results of the statistical analysis (Friedman test [44]) conducted in R for comparing various optimization methods revealed significant differences in their performance (Figure 3). Specifically, a very extremely significant difference (p < 0.0001) was observed between the GA and WOA, indicating that these two methods differ substantially in terms of performance. However, comparisons between the GA and PSO as well as the GA and ACO showed non-significant differences (p > 0.05), meaning their performances are statistically indistinguishable. The comparison between the GA and SaDE showed a significant difference (p < 0.05), suggesting that SaDE performs differently compared to the genetic algorithm. In contrast, the comparison between the WOA and PSO showed no significant difference (p > 0.05), while the WOA versus ACO again showed a very extremely significant difference (p < 0.0001). Furthermore, the WOA differs significantly (p < 0.05) from SaDE, but this difference becomes highly significant (p < 0.01) when examined in another context. The comparison between the WOA and the modified differential evolution (NEWDE + MDM) showed a very extremely significant difference (p < 0.0001), confirming the superiority of the latter method. The comparisons between PSO and ACO, as well as between ACO and NEWDE + MDM, showed non-significant differences (p > 0.05). However, PSO differs significantly (p < 0.05) from SaDE, as does ACO when compared to SaDE. Finally, the comparison between SaDE and the WOA confirms an extremely significant difference (p < 0.001), highlighting that SaDE has statistically better performance compared to the WOA. Overall, the results demonstrate that NEWDE + MDM stands out for its superior performance compared to the other methods, while the WOA shows large statistical differences relative to most of the other techniques. These findings can help in selecting the optimal method depending on the optimization problem at hand.
Table 5 presents a detailed comparison of various termination criteria for the proposed optimization algorithm, including BSS, WSS, TSS, BOSS, SRS, IRS, doublebox, and the combined “All” criterion that incorporates all the previous rules. The analysis reveals that the IRS criterion achieves the lowest total number of function evaluations (263,582) with a 96% success rate, making it the most efficient among the individual termination rules. The “All” criterion, which combines all the rules, demonstrates similar performance, with 257,860 evaluations and a 96% success rate, confirming the superiority of this combined approach. In contrast, the BOSS criterion requires the highest number of evaluations (1,318,249) despite its excellent success rate (99%). This indicates that, while BOSS reliably finds the global optimum, it does so at a significantly increased computational cost. Similarly, the TSS and SRS criteria also require relatively high evaluation counts (465,279 and 420,140, respectively). For specific test functions like GKLS350, all the termination criteria achieve high success rates (96%), with IRS and “All” requiring the fewest evaluations (1811). For the RASTRIGIN function, IRS and “All” stand out with 3548 evaluations and a 93% success rate compared to BOSS, requiring 15,210 evaluations. Similarly, for HARTMAN3, the BSS, IRS, and All criteria need the fewest evaluations (3179, 3042, and 2651, respectively). In cases like POTENTIAL6, the doublebox criterion achieves the highest success rate (96%) but requires substantially more evaluations (35,090) compared to IRS (9616), which has a lower success rate (70%). This demonstrates the need to balance accuracy against computational cost. Overall, the results confirm that either combining multiple criteria (“All”) or using IRS provides the best balance between reduced function evaluations and high success rates. Conversely, criteria like BOSS, while reliable, significantly increase the computational burden. 
The optimal termination rule selection depends on the problem requirements, particularly regarding the trade-off between solution accuracy and computational efficiency.
The results of the statistical analysis conducted to compare various termination rules revealed interesting findings regarding the statistical significance of their differences (Figure 4). Specifically, the comparison between the BSS (Best Solution Stopping) and WSS (Worst Solution Stopping) rules showed no significant difference (p > 0.05), indicating that these two termination approaches do not differ significantly in terms of their effectiveness. Similarly, non-significant differences were observed in comparisons between BSS and TSS (Total Solution Stopping), SRS (Solution Range Stopping), IRS (Iteration Range Stopping), doublebox, and “All”. However, the comparison between BSS and BOSS (Best Overall Solution Stopping) showed an extremely significant difference (p < 0.001), suggesting that the BOSS rule differs significantly from BSS. A similar, although less statistically significant, difference was observed between WSS and BOSS (p < 0.05). Notable differences were observed in the other comparisons. The TSS vs. IRS comparison showed a highly significant difference (p < 0.01), while TSS vs. “All” showed an extremely significant difference (p < 0.001). The differences between BOSS and IRS, as well as between BOSS and “All”, were very extremely significant (p < 0.0001 in both cases). Significant differences were also observed between SRS and IRS (p < 0.05), and between SRS and “All” (p < 0.001). Finally, the IRS vs. “All” comparison showed a very extremely significant difference (p < 0.0001), while the doublebox vs. “All” comparison also showed a very extremely significant difference (p < 0.0001). These findings highlight that certain termination rules, such as BOSS and “All”, show statistically significant differences compared to other rules, which may have important implications for selecting the optimal termination rule for different optimization problems.

4. Discussion of Findings and Comparative Analysis

The NEWDE algorithm is an enhanced version of the classical differential evolution algorithm, designed to address the core challenges of the conventional optimization methods, such as premature convergence and the inability to balance the exploration and exploitation of the solution space. The key improvement lies in the use of a dual mutation strategy, which enables a more flexible approach. The addition of the majority dimension mechanism to NEWDE makes it even more efficient. The experimental results showed that NEWDE + MDM reduced the number of required objective function evaluations by an average of 30% compared to classical DE. In specific cases, such as the GKLS350 function, the reduction was even more impressive, from 8107 to just 1811 evaluations, while maintaining a high success rate (96%). This improvement is primarily due to the dynamic balance the algorithm achieves between the exploration and exploitation of the search space. The new termination rules, particularly IRS and the combined “All” criterion, proved to be highly effective. For example, in the Rastrigin function, the IRS rule delivered the best results, with only 3548 evaluations and a 93% success rate, whereas the BOSS rule required significantly more computations (15,210 evaluations) despite having a slightly higher success rate (99%). The statistical analysis confirmed that NEWDE + MDM performs significantly better than other popular methods, such as genetic algorithms and Particle Swarm Optimization. Specifically, the results of the Friedman test validated the superiority of NEWDE + MDM, particularly in high-dimensional problems like POTENTIAL10, where the evaluations decreased from 197,044 (with WOA) to 20,237.
Compared to recent self-adaptive variants of differential evolution, such as jDE and SaDE, the proposed method retains the structural simplicity of classical DE while introducing two targeted mutation strategies. Strategy 1 enhances exploration by directing individuals away from the current global best solution, thus promoting diversity, whereas Strategy 2 intensifies exploitation through adaptive differential variation involving multiple individuals. In contrast, jDE focuses on the self-adaptive control of the algorithmic parameters (F and CR) at the individual level, while SaDE further incorporates adaptive selection among different mutation strategies based on historical performance. The proposed method does not dynamically adapt parameters or strategies but instead applies a fixed yet flexible design that balances exploration and exploitation. Furthermore, unlike hybrid DE models, which integrate external metaheuristics, this approach remains “pure” in terms of its DE foundation. It introduces internal refinements without significantly increasing the algorithmic complexity. This design aims to preserve computational efficiency and maintain a lightweight framework, which has been recognized as a key advantage in the large-scale use of DE.
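The division of labor between the two strategies can be illustrated with a short sketch. The operators below are illustrative stand-ins only: the 50/50 strategy switch and the rand/1 form of Strategy 2 are assumptions for this sketch (the exact operators are given in Section 2), while the F range [0.5, 1.5] follows Table 1.

```python
import random

def mutate(pop, best, F_low=0.5, F_high=1.5, rng=random):
    """Illustrative dual-strategy mutation step.
    Strategy 1 pushes a point away from the current global best
    (exploration/diversity); Strategy 2 applies a differential step over
    multiple individuals (exploitation)."""
    x, a, b, c = rng.sample(pop, 4)       # four distinct individuals
    F = rng.uniform(F_low, F_high)        # random differential weight
    if rng.random() < 0.5:                # assumed 50/50 switch
        # Strategy 1: step away from the global best to promote diversity.
        return [x_i + F * (x_i - b_i) for x_i, b_i in zip(x, best)]
    # Strategy 2: differential variation involving multiple individuals.
    return [a_i + F * (b_i - c_i) for a_i, b_i, c_i in zip(a, b, c)]
```

The point of the split is that neither role is sacrificed: diversity pressure and convergence pressure coexist without the parameter or strategy adaptation machinery of jDE or SaDE.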
Despite these positive results, the algorithm has some limitations. Its performance on dynamic problems, where parameters change over time, remains an open challenge. Additionally, the need for specialized parameter tuning may pose a barrier for non-expert users. Future work could focus on automated parameter tuning using machine learning techniques and the development of parallel versions of the algorithm for greater computational efficiency.

5. Conclusions and Future Research Directions

This study presents a modified differential evolution algorithm designed to overcome the limitations of the traditional versions, offering improved performance in complex optimization problems. The results demonstrate that the proposed algorithm achieves significantly higher solution quality and greater efficiency, particularly in high-dimensional problems or those with multiple local minima. A key improvement mechanism is the introduction of a dual mutation strategy that operates in two distinct ways. The first approach focuses on exploring the search space, encouraging the algorithm to investigate unstudied regions. This is achieved by using distances from the best solution and applying random coefficients to create new points. The second approach prioritizes exploiting promising areas of the search space, where differential coefficients are dynamically adjusted based on data from multiple solutions.
The majority dimension mechanism adds a new dimension to the algorithm’s adaptability. Through dimension-wise analysis, it compares candidate solutions against two key reference points: the best and worst samples. This approach enables the algorithm to decide whether to focus on an intensive search around a promising region or explore new unknown areas of the space. The MDM prevents stagnation in local minima and enables the discovery of solutions that would otherwise remain unknown.
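A minimal sketch of the dimension-wise voting idea follows. The “closer to the best sample than to the worst sample” vote used here is an illustrative decision rule; the paper's exact comparison may differ.

```python
def majority_dimension_vote(candidate, best, worst):
    """Compare a candidate against the best and worst samples dimension
    by dimension; a dimension votes 'exploit' when the candidate's
    coordinate lies closer to the best sample than to the worst one.
    The majority across all dimensions decides the search mode."""
    exploit_votes = sum(
        1 for c, b, w in zip(candidate, best, worst)
        if abs(c - b) < abs(c - w)
    )
    return "exploit" if exploit_votes > len(candidate) / 2 else "explore"
```

When most coordinates already resemble the elite reference, the search intensifies locally; otherwise, the algorithm is steered toward unexplored regions, which is what prevents stagnation in local minima.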
The new termination rules represent another innovation that enhances the overall algorithm performance. These rules are based on multidimensional metrics assessing the solution homogeneity and stability at various levels. For instance, the stability of the best solution, evolution of worst solutions, and overall convergence of the value ranges within the population indicate when the search is sufficiently complete. These rules reduce unnecessary computations while ensuring termination only when the highest possible solution quality is achieved.
In the experimental results, the NEWDE + MDM algorithm showed significant improvement over classical DE. For the GKLS350 function, the required computations decreased from 8107 to just 1811, with a 96% success rate. Similarly, for the Rastrigin function, the IRS termination rule required only 3548 computations (93% success) versus the BOSS rule’s 15,210 computations (99% success). For DIFFPOWER5, NEWDE + MDM recorded 25,736 computations versus NEWDE’s 33,168 and classical DE’s 31,332 (95% success). The HARTMAN6 function saw the computations reduced from 7181 to 4790 while maintaining 100% success. Particularly impressive were the results on multimodal functions like SHEKEL10, where NEWDE + MDM achieved solutions with only 5120 computations versus classical DE’s 8306, improving the success rate from 92% to 98%. For the high-dimensional ELP30 function, the computations decreased from 13,003 to 10,957 (97% success). On POTENTIAL10, the computations dropped from 197,044 (with WOA) to 20,237, while ROSENBROCK16 saw a reduction from 16,527 to 15,377 computations. For the challenging TEST2N7 function with multiple local minima, NEWDE + MDM produced solutions with 4565 computations (70% success) versus classical DE’s 12,006 computations (56% success). The statistical analysis showed that NEWDE + MDM achieved a statistically significant improvement (p < 0.05) in both computation count and success rate for 32 of the 38 test functions. The average computation reduction across all the functions was 31.7%, with the maximum reduction reaching 77.6% for GKLS350.
However, the research revealed certain limitations. A major challenge is the increased need for specialized parameter tuning as multiple strategy and termination rule choices may complicate use by non-experts. Additionally, the performance may decline in dynamic problems where characteristics change over time. Future research could focus on developing versions for dynamic problems, automated parameter tuning using machine learning techniques, and creating parallel versions to reduce execution time. Applications in fields like genetic engineering, autonomous vehicles, and energy system optimization could extend the algorithm’s impact. Finally, extensive experimental analysis across various data types and complexity levels could improve the algorithm’s reliability and robustness.

Author Contributions

Conceptualization, V.C. and G.K.; methodology, I.G.T.; software, V.C. and A.M.G.; validation, V.C. and I.G.T.; formal analysis, V.C.; investigation, V.C.; resources, I.G.T.; data curation, G.K.; writing—original draft preparation, V.C.; writing—review and editing, I.G.T.; visualization, V.C.; supervision, I.G.T.; project administration, I.G.T.; funding acquisition, I.G.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been financed by the European Union: Next Generation EU through the Program Greece 2.0 National Recovery and Resilience Plan, under the call RESEARCH–CREATE–INNOVATE, project name “iCREW: Intelligent small craft simulator for advanced crew training using Virtual Reality techniques” (project code: TAEDK-06195).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tapkir, A. A comprehensive overview of gradient descent and its optimization algorithms. Int. Res. J. Sci. Eng. Technol. 2023, 10, 37–45. [Google Scholar] [CrossRef]
  2. Cawade, S.; Kudtarkar, A.; Sawant, S.; Wadekar, H. The Newton-Raphson Method: A Detailed Analysis. Int. J. Res. Appl. Sci. Eng. (IJRASET) 2024, 12, 729–734. [Google Scholar] [CrossRef]
  3. Luenberger, D.G.; Ye, Y.; Luenberger, D.G.; Ye, Y. Interior-point methods. In Linear and Nonlinear Programming; Springer: Berlin/Heidelberg, Germany, 2021; pp. 129–164. [Google Scholar]
  4. Nemirovski, A.S.; Todd, M.J. Interior-point methods for optimization. Acta Numer. 2008, 17, 191–234. [Google Scholar] [CrossRef]
  5. Lam, A. Bfgs in a Nutshell: An Introduction to Quasi-Newton Methods; Towards Data Science: San Francisco, CA, USA, 2020. [Google Scholar]
  6. Sohail, A. Genetic algorithms in the fields of artificial intelligence and data sciences. Ann. Data Sci. 2023, 10, 1007–1018. [Google Scholar] [CrossRef]
  7. Deng, W.; Shang, S.; Cai, X.; Zhao, H.; Song, Y.; Xu, J. An improved differential evolution algorithm and its application in optimization problem. Soft Comput. 2021, 25, 5277–5298. [Google Scholar] [CrossRef]
  8. Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential Evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479. [Google Scholar]
  9. Charilogis, V.; Tsoulos, I.G.; Tzallas, A.; Karvounis, E. Modifications for the differential evolution algorithm. Symmetry 2022, 14, 447. [Google Scholar] [CrossRef]
  10. Charilogis, V.; Tsoulos, I.G. A parallel implementation of the differential evolution method. Analytics 2023, 2, 17–30. [Google Scholar] [CrossRef]
  11. Shami, T.M.; El-Saleh, A.A.; Alswaitti, M.; Al-Tashi, Q.; Summakieh, M.A.; Mirjalili, S. Particle swarm optimization: A comprehensive survey. IEEE Access 2022, 10, 10031–10061. [Google Scholar] [CrossRef]
  12. Gad, A.G. Particle swarm optimization algorithm and its applications: A systematic review. Arch. Comput. Eng. 2022, 29, 2531–2561. [Google Scholar] [CrossRef]
  13. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  14. Abualigah, L.; Sbenaty, B.; Ikotun, A.M.; Zitar, R.A.; Alsoud, A.R.; Khodadadi, N.; Jia, H. Aquila optimizer: Review, results and applications. Metaheuristic Optim. Algorithms 2024, 89–103. [Google Scholar] [CrossRef]
  15. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  16. Kaveh, A.; Hamedani, K.B. Improved arithmetic optimization algorithm and its application to discrete structural optimization. In Structures; Elsevier: Amsterdam, The Netherlands, 2022; Volume 35, pp. 748–764. [Google Scholar]
  17. Zhang, J.; Zhang, G.; Huang, Y.; Kong, M. A novel enhanced arithmetic optimization algorithm for global optimization. IEEE Access 2022, 10, 75040–75062. [Google Scholar] [CrossRef]
  18. Eglese, R.W. Simulated annealing: A tool for operational research. Eur. J. Oper. Res. 1990, 46, 271–281. [Google Scholar] [CrossRef]
  19. Siarry, P.; Berthiau, G.; Durdin, F.; Haussy, J. Enhanced simulated annealing for globally minimizing functions of many-continuous variables. ACM Trans. Math. Softw. (TOMS) 1997, 23, 209–228. [Google Scholar] [CrossRef]
  20. Liu, Y.; Tian, P. A multi-start central force optimization for global optimization. Appl. Soft Comput. 2015, 27, 92–98. [Google Scholar] [CrossRef]
  21. Doumari, S.A.; Givi, H.; Dehghani, M.; Malik, O.P. Ring toss game-based optimization algorithm for solving various optimization problems. Int. J. Intell. Eng. Syst. 2021, 14, 545–554. [Google Scholar] [CrossRef]
  22. Salawudeen, A.T.; Mu’azu, M.B.; Yusuf, A.; Adedokun, A.E. A Novel Smell Agent Optimization (SAO): An extensive CEC study and engineering application. Knowl.-Based Syst. 2021, 232, 107486. [Google Scholar] [CrossRef]
  23. Salawudeen, A.T.; Mu’azu, M.B.; Sha’aban, Y.A.; Adedokun, E.A. On the development of a novel smell agent optimization (SAO) for optimization problems. In Proceedings of the 2nd International Conference on Information and Communication Technology and its Applications (ICTA 2018), Minna, Nigeria, 5–6 September 2018. [Google Scholar]
  24. Salawudeen, A.T.; Mu’azu, M.B.; Yusuf, A.; Adedokun, E.A. From smell phenomenon to smell agent optimization (SAO): A feasibility study. In Proceedings of the ICGET, Amsterdam, The Netherlands, 10–12 July 2018. [Google Scholar]
  25. Meadows, O.A.; Mu’Azu, M.B.; Salawudeen, A.T. A smell agent optimization approach to capacitated vehicle routing problem for solid waste collection. In Proceedings of the 2022 IEEE Nigeria 4th International Conference on Disruptive Technologies for Sustainable Development (NIGERCON), Lagos, Nigeria, 5–7 April 2022; pp. 1–5. [Google Scholar]
  26. Nadimi-Shahraki, M.H.; Zamani, H.; Asghari Varzaneh, Z.; Mirjalili, S. A systematic review of the whale optimization algorithm: Theoretical foundation, improvements, and hybridizations. Arch. Comput. Methods Eng. 2023, 30, 4113–4159. [Google Scholar] [CrossRef]
  27. Brodzicki, A.; Piekarski, M.; Jaworek-Korjakowska, J. The whale optimization algorithm approach for deep neural networks. Sensors 2021, 21, 8003. [Google Scholar] [CrossRef] [PubMed]
  28. Pourpanah, F.; Wang, R.; Lim, C.P.; Wang, X.Z.; Yazdani, D. A review of artificial fish swarm algorithms: Recent advances and applications. Artif. Intell. Rev. 2023, 56, 1867–1903. [Google Scholar] [CrossRef]
  29. Zhang, C.; Zhang, F.M.; Li, F.; Wu, H.S. Improved artificial fish swarm algorithm. In Proceedings of the 2014 9th IEEE Conference on Industrial Electronics and Applications, Hangzhou, China, 9–11 June 2014; pp. 748–753. [Google Scholar]
  30. Qin, A.K.; Suganthan, P.N. Self-adaptive differential evolution algorithm for numerical optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005. [Google Scholar] [CrossRef]
  31. Brest, J.; Zamuda, A.; Boskovic, B.; Sepesy, M.S.; Zumer, V. Dynamic optimization using self-adaptive differential evolution. In Proceedings of the IEEE Congress on Evolutionary Computation, Trondheim, Norway, 18–21 May 2009; pp. 415–422. [Google Scholar] [CrossRef]
  32. Hansen, N.; Ostermeier, A. Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation. In Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC ’96), Nagoya, Japan, 20–22 May 1996; pp. 312–317. [Google Scholar] [CrossRef]
  33. Tizhoosh, H.R. Opposition-Based Learning: A New Scheme for Machine Intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation (CIMCA 2005), International Conference on Intelligent Agents, Web Technologies and Internet Commerce (IAWTIC 2005), Vienna, Austria, 28–30 November 2005; pp. 695–701. [Google Scholar] [CrossRef]
  34. Lagaris, I.E.; Tsoulos, I.G. Stopping rules for box-constrained stochastic global optimization. Appl. Math. Comput. 2008, 197, 622–632. [Google Scholar] [CrossRef]
  35. Charilogis, V.; Tsoulos, I.G.; Gianni, A.M. Combining Parallel Stochastic Methods and Mixed Termination Rules in Optimization. Algorithms 2024, 17, 394. [Google Scholar] [CrossRef]
  36. Koyuncu, H.; Ceylan, R. A PSO based approach: Scout particle swarm algorithm for continuous global optimization problems. J. Comput. Des. Eng. 2019, 6, 129–142. [Google Scholar] [CrossRef]
  37. LaTorre, A.; Molina, D.; Osaba, E.; Poyatos, J.; Del Ser, J.; Herrera, F. A prescription of methodological guidelines for comparing bio-inspired optimization algorithms. Swarm Evol. Comput. 2021, 67, 100973. [Google Scholar] [CrossRef]
  38. Gaviano, M.; Kvasov, D.E.; Lera, D.; Sergeyev, Y.D. Algorithm 829: Software for generation of classes of test functions with known local and global minima for global optimization. ACM Trans. Math. Softw. (TOMS) 2003, 29, 469–480. [Google Scholar] [CrossRef]
  39. Jones, J.E. On the determination of molecular fields.—II. From the equation of state of a gas. Proc. R. Soc. Lond. Ser. A Contain. Pap. Math. Phys. Character 1924, 106, 463–477. [Google Scholar]
  40. Zabinsky, Z.B.; Graesser, D.L.; Tuttle, M.E.; Kim, G.I. Global optimization of composite laminates using improving hit and run. In Recent Advances in Global Optimization; Princeton University Press: Princeton, NJ, USA, 1992; pp. 343–368. [Google Scholar]
  41. Tsoulos, I.G.; Charilogis, V.; Kyrou, G.; Stavrou, V.N.; Tzallas, A. OPTIMUS: A Multidimensional Global Optimization Package. J. Open Source Softw. 2025, 10, 7584. [Google Scholar] [CrossRef]
  42. Wilcoxon, F. Individual Comparisons by Ranking Methods. Biom. Bull. 1945, 1, 80–83. [Google Scholar] [CrossRef]
  43. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man, Cybern. Part B (Cybern.) 1996, 26, 29–41. [Google Scholar] [CrossRef] [PubMed]
  44. Friedman, M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 1937, 32, 675–701. [Google Scholar] [CrossRef]
Figure 1. Statistical comparison of classic DE with new DE and new DE with majority dimension mechanism.
Figure 2. Detailed comparison of methods for each optimization problem.
Figure 3. Statistical comparison of new DE with majority dimension mechanism method versus others.
Figure 4. Statistical comparison of NEWDE + MDM with different stopping rules.
Table 4. The benchmark functions used in the conducted experiments.
| Name | Formula | Dimension |
|---|---|---|
| ACKLEY | $f(x) = -a\exp\left(-b\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(c x_i)\right) + a + \exp(1)$, $a = 20.0$ | 2 |
| BF1 | $f(x) = x_1^2 + 2x_2^2 - \frac{3}{10}\cos(3\pi x_1) - \frac{4}{10}\cos(4\pi x_2) + \frac{7}{10}$ | 2 |
| BF2 | $f(x) = x_1^2 + 2x_2^2 - \frac{3}{10}\cos(3\pi x_1)\cos(4\pi x_2) + \frac{3}{10}$ | 2 |
| BF3 | $f(x) = x_1^2 + 2x_2^2 - \frac{3}{10}\cos(3\pi x_1 + 4\pi x_2) + \frac{3}{10}$ | 2 |
| BRANIN | $f(x) = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos(x_1) + 10$, $-5 \le x_1 \le 10$, $0 \le x_2 \le 15$ | 2 |
| CAMEL | $f(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$, $x \in [-5,5]^2$ | 2 |
| DIFFPOWER | $f(x) = \sum_{i=1}^{n}\left|x_i - y_i\right|^p$ | $n = 2$, $p = 2, 5, 10$ |
| DISCUS | $f(x) = 10^6 x_1^2 + \sum_{i=2}^{n}x_i^2$ | 10 |
| EASOM | $f(x) = -\cos(x_1)\cos(x_2)\exp\left(-(x_2 - \pi)^2 - (x_1 - \pi)^2\right)$ | 2 |
| ELP | $f(x) = \sum_{i=1}^{n}\left(10^6\right)^{\frac{i-1}{n-1}}x_i^2$ | $n = 10, 20, 30$ |
| EXP | $f(x) = -\exp\left(-0.5\sum_{i=1}^{n}x_i^2\right)$, $-1 \le x_i \le 1$ | $n = 4, 8$ |
| GKLS [38] | $f(x) = \mathrm{Gkls}(x, n, w)$ | $n = 2, 3$, $w = 50, 100$ |
| GOLDSTEIN | $f(x) = \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\right)\right]\left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\right)\right]$ | 2 |
| GRIEWANK2 | $f(x) = 1 + \frac{1}{200}\sum_{i=1}^{2}x_i^2 - \prod_{i=1}^{2}\frac{\cos(x_i)}{\sqrt{i}}$ | 2 |
| GRIEWANK10 | $f(x) = 1 + \frac{1}{200}\sum_{i=1}^{10}x_i^2 - \prod_{i=1}^{10}\frac{\cos(x_i)}{\sqrt{i}}$ | 10 |
| HANSEN | $f(x) = \sum_{i=1}^{5}i\cos\left((i-1)x_1 + i\right)\sum_{j=1}^{5}j\cos\left((j+1)x_2 + j\right)$ | 2 |
| HARTMAN3 | $f(x) = -\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}\left(x_j - p_{ij}\right)^2\right)$ | 3 |
| HARTMAN6 | $f(x) = -\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}\left(x_j - p_{ij}\right)^2\right)$ | 6 |
| POTENTIAL [39] | $V_{LJ}(r) = 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]$ | $n = 9, 15, 21, 30$ |
| RASTRIGIN | $f(x) = x_1^2 + x_2^2 - \cos(18x_1) - \cos(18x_2)$ | 2 |
| ROSENBROCK | $f(x) = \sum_{i=1}^{n-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2\right]$, $-30 \le x_i \le 30$ | $n = 4, 8, 16$ |
| SHEKEL5 | $f(x) = -\sum_{i=1}^{5}\frac{1}{(x - a_i)(x - a_i)^T + c_i}$ | 4 |
| SHEKEL7 | $f(x) = -\sum_{i=1}^{7}\frac{1}{(x - a_i)(x - a_i)^T + c_i}$ | 4 |
| SHEKEL10 | $f(x) = -\sum_{i=1}^{10}\frac{1}{(x - a_i)(x - a_i)^T + c_i}$ | 4 |
| SINUSOIDAL [40] | $f(x) = -\left(2.5\prod_{i=1}^{n}\sin(x_i - z) + \prod_{i=1}^{n}\sin\left(5(x_i - z)\right)\right)$, $0 \le x_i \le \pi$ | $n = 4, 8, 16$ |
| TEST2N | $f(x) = \frac{1}{2}\sum_{i=1}^{n}\left(x_i^4 - 16x_i^2 + 5x_i\right)$ | $n = 4, 5, 7$ |
| TEST30N | $f(x) = \frac{1}{10}\sin^2(3\pi x_1)\sum_{i=2}^{n-1}\left[\left(x_i - 1\right)^2\left(1 + \sin^2(3\pi x_{i+1})\right)\right] + \left(x_n - 1\right)^2\left(1 + \sin^2(2\pi x_n)\right)$ | $n = 3, 4$ |
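As an illustration, a few of the benchmark formulas above translate directly into code. The sketch below (function names are ours, not from the benchmark suite or the OPTIMUS package) implements the ACKLEY, ROSENBROCK, and two-dimensional RASTRIGIN variants exactly as tabulated; the parameters b and c of ACKLEY are assumed to take their conventional values, since the table only fixes a = 20.0.

```python
import math

def ackley(x, a=20.0, b=0.2, c=2 * math.pi):
    # b and c are conventional defaults; the table only states a = 20.0.
    n = len(x)
    mean_sq = sum(xi ** 2 for xi in x) / n
    mean_cos = sum(math.cos(c * xi) for xi in x) / n
    return (-a * math.exp(-b * math.sqrt(mean_sq))
            - math.exp(mean_cos) + a + math.e)

def rosenbrock(x):
    # Sum of 100*(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 over i = 1..n-1.
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def rastrigin2(x):
    # Two-dimensional variant used in this benchmark suite.
    return x[0] ** 2 + x[1] ** 2 - math.cos(18 * x[0]) - math.cos(18 * x[1])
```

Both ackley and rosenbrock attain their global minimum value of 0 at the origin and at the all-ones point, respectively, which gives a quick sanity check for any implementation.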
Table 5. The proposed method with new different stopping rules.
| Function | BSS | WSS | TSS | BOSS | SRS | IRS | Doublebox | All |
|---|---|---|---|---|---|---|---|---|
| ACKLEY | 7249 | 8063 | 21,423 | 45,754 | 8664 | 6187 | 8601 | 6187 |
| BF1 | 5204 | 5770 | 6559 | 41,602 | 6002 | 4539 | 5564 | 4539 |
| BF2 | 4775 | 5297 | 5701 | 34,369 | 5522 | 4077 | 5647 | 4077 |
| BF3 | 4285 | 4482 | 5261 | 28,391 | 4694 | 3728 | 4999 | 3728 |
| BRANIN | 2762 | 3639 | 3324 | 15,776 | 3696 | 2603 | 2772 | 2603 |
| CAMEL | 5358 | 4386 | 5811 | 22,948 | 5843 | 3497 | 10,667 | 3497 |
| DIFFPOWER2 | 10,734 | 12,276 | 9767 | 28,349 | 12,398 | 9767 | 11,172 | 9767 |
| DIFFPOWER5 | 25,736 | 32,486 | 24,771 | 92,201 | 34,880 | 24,771 | 26,377 | 24,771 |
| DIFFPOWER10 | 40,297 | 52,420 | 31,056 | 91,219 | 58,127 | 31,056 | 55,305 | 31,056 |
| EASOM | 3664 | 4894 | 4714 | 2789 | 4944 | 3242 | 3800 | 2789 |
| ELP10 | 6053 | 5133 | 5930 | 21,980 | 6459 | 4307 | 12,313 | 4307 |
| ELP20 | 8809 | 8781 | 9387 | 29,239 | 10,094 | 6587 | 18,033 | 6587 |
| ELP30 | 10,957 | 10,871 | 12,290 | 23,745 | 12,533 | 8672 | 22,989 | 8672 |
| EXP4 | 3824 | 4408 | 4948 | 3092 | 4593 | 3575 | 4270 | 3092 |
| EXP8 | 4329 | 4878 | 4944 | 3052 | 5457 | 3522 | 7516 | 3052 |
| GKLS250 | 3181 | 3245 | 9199 | 7690 | 3245 | 2919 | 3977 | 2919 |
| GKLS350 | 1811 (0.96) | 2142 (0.96) | 3268 (0.96) | 10,756 (0.96) | 2142 (0.96) | 1811 (0.96) | 1811 (0.96) | 1811 (0.96) |
| GOLDSTEIN | 5026 | 5576 | 7682 | 38,776 | 5867 | 4364 | 5169 | 4364 |
| GRIEWANK2 | 4397 (0.63) | 4761 (0.63) | 8722 (0.80) | 29,118 (0.96) | 5212 (0.66) | 3559 (0.53) | 4898 (0.63) | 3599 (0.53) |
| GRIEWANK10 | 9553 | 10,698 | 11,061 | 62,990 | 12,085 | 7866 | 16,496 | 7866 |
| HANSEN | 6432 | 8143 | 14,493 | 88,770 | 9146 | 4238 | 13,116 | 4238 |
| HARTMAN3 | 3179 | 3898 | 3073 | 2651 | 4121 | 3042 | 3643 | 2651 |
| HARTMAN6 | 4790 | 4903 | 5811 | 3400 | 5521 | 3981 | 7438 | 3400 |
| POTENTIAL3 | 3599 | 4741 | 6659 | 20,224 | 7691 | 4683 | 11,358 | 4683 |
| POTENTIAL5 | 10,653 | 12,221 | 11,679 | 58,988 | 13,108 | 8224 | 30,085 | 8224 |
| POTENTIAL6 | 12,775 (0.86) | 13,968 (0.86) | 13,720 (0.90) | 74,701 | 15,690 (0.93) | 9616 (0.7) | 35,090 (0.96) | 9616 (0.7) |
| POTENTIAL10 | 20,237 | 17,453 | 46,851 (0.93) | 101,111 | 23,157 | 12,650 | 56,930 | 12,650 |
| RASTRIGIN | 4639 (0.93) | 5063 (0.96) | 6900 | 15,210 | 5545 (0.96) | 3548 (0.93) | 6473 (0.93) | 3548 (0.93) |
| ROSENBROCK4 | 8729 | 8284 | 9075 | 34,803 | 9794 | 6583 | 13,961 | 6583 |
| ROSENBROCK8 | 11,259 | 10,337 | 11,512 | 51,878 | 12,605 | 8104 | 22,692 | 8104 |
| ROSENBROCK16 | 15,377 | 17,661 | 16,644 | 64,581 | 20,007 | 11,826 | 25,043 | 11,826 |
| SHEKEL5 | 5306 | 6204 | 6925 | 3672 | 6452 | 4262 | 15,952 | 3672 |
| SHEKEL7 | 5052 | 6224 | 6907 | 3528 | 6463 | 4081 | 16,879 | 3528 |
| SHEKEL10 | 5120 | 5813 | 8051 | 3516 | 6232 | 4055 | 14,980 | 3516 |
| SINU4 | 6723 | 6508 | 11,594 | 6730 | 8163 | 4310 | 20,602 | 4310 |
| SINU8 | 6795 | 8776 | 13,952 | 4309 | 9631 | 4971 | 46,245 | 4293 |
| SINU16 | 8577 | 13,879 | 30,354 | 5761 | 14,369 | 6785 | 11,051 | 5761 |
| TEST2N4 | 4292 | 4992 | 12,290 | 26,112 | 5427 | 3290 | 5930 | 3290 |
| TEST2N5 | 3907 (0.96) | 5199 (0.96) | 10,493 | 21,617 | 5406 (0.96) | 3226 (0.93) | 4791 | 3226 (0.93) |
| TEST2N7 | 4565 (0.7) | 5020 (0.7) | 12,859 (0.86) | 19,407 (0.86) | 5709 (0.76) | 3510 (0.56) | 6396 (0.76) | 3510 (0.56) |
| TEST30N3 | 4558 | 5402 | 3806 | 25,347 | 6063 | 3384 | 8178 | 3384 |
| TEST30N4 | 5958 | 6551 | 5880 | 48,097 | 7383 | 4564 | 14,847 | 4564 |
| Total | 332,921 (0.97) | 378,121 (0.97) | 465,279 (0.98) | 1,318,249 (0.99) | 420,140 (0.98) | 263,582 (0.96) | 624,056 (0.98) | 257,860 (0.96) |
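The method evaluated in the table above relies on the majority dimension mechanism: each candidate is compared dimension by dimension against the elite references (the best and worst samples), and a majority vote across dimensions decides between local exploitation and global exploration. The sketch below is a minimal interpretation of that voting step as described here, not the paper's exact rule; the function and variable names are ours.

```python
def majority_dimension_vote(candidate, best, worst):
    """Vote per dimension on whether the candidate lies closer to the
    best sample than to the worst sample.  A majority of 'closer to
    best' votes suggests intensifying local exploitation; otherwise
    the search switches to global exploration.  This is a minimal
    interpretation of the mechanism, not a verbatim implementation."""
    votes = sum(
        1 for c, b, w in zip(candidate, best, worst)
        if abs(c - b) < abs(c - w)
    )
    return "exploit" if votes > len(candidate) / 2 else "explore"
```

For example, a candidate that agrees with the best sample in most coordinates triggers exploitation, while one drifting toward the worst sample in most coordinates triggers exploration.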
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
