Article

Parameter Adaptive Differential Evolution Based on Individual Diversity

College of Information Science and Technology, Jinan University, Guangzhou 510632, China
*
Author to whom correspondence should be addressed.
Symmetry 2025, 17(7), 1016; https://doi.org/10.3390/sym17071016
Submission received: 17 May 2025 / Revised: 24 June 2025 / Accepted: 25 June 2025 / Published: 27 June 2025
(This article belongs to the Section Computer)

Abstract

Differential evolution (DE) has emerged as a powerful numerical optimization technique due to its conceptual simplicity and demonstrated effectiveness across diverse problem domains. However, the algorithm’s performance remains critically dependent on appropriate control parameter settings. This paper introduces a novel diversity-based parameter adaptation (div) mechanism, which first adaptively generates two sets of symmetrical parameters, F and CR, and then dynamically selects the final parameters based on individual diversity rankings. It thus employs a straightforward approach to identify the more effective option from two sets of symmetrical parameters. Comprehensive experimental evaluation demonstrated that the div mechanism significantly enhanced the performance of the DE algorithm. Furthermore, by incorporating div, our enhanced algorithm exhibited superior optimization capability compared to five state-of-the-art DE variants: among the 145 cases studied, DTDE-div outperformed the competitors in 92 cases and underperformed in only 32 cases, achieving the best (lowest) average performance ranking of 2.59. The results highlight the effectiveness of div in enhancing solution precision while preventing premature convergence.

1. Introduction

The differential evolution (DE) algorithm, proposed by Storn and Price in 1997 [1], is a heuristic parallel search method based on population differences. Its original intention was to solve the Chebyshev polynomial problem. As a population-based random search technique, DE includes four operations, namely, initialization, mutation, crossover, and selection. Unlike other optimization algorithms, the disturbance of evolutionary individuals in DE is reflected through the differential information of multiple individuals. DE has excellent optimization capabilities and has been widely applied in theoretical research and engineering practice. For example, it has achieved remarkable results in various fields, such as UAV swarm resource configuration [2], image segmentation [3], function optimization [4], neural networks [5], chemical engineering [6], knapsack problems [7], and power systems [8]. Research on the performance analysis and improvement of DE mainly focuses on two aspects of its shortcomings: (1) the phenomenon that the population individuals are unable to continue searching for the optimal solution and stop evolving toward the global optimum, known as the stagnation problem, and (2) the phenomenon that population individuals lose diversity and become trapped in local optimal solutions, called the premature convergence problem. The setting of control parameters and the choice of mutation strategies are key factors determining the performance of DE.
On the one hand, besides the mutation strategy of the original DE, “DE/rand/1” [1], several other mutation strategies exist, such as “DE/best/1” [1], “DE/current-to-best/1” [9], “DE/rand-to-best/1” [10], “DE/current-to-pbest/1” [11], and “current-to-ci_mbest/1” [12]. DSM-DE [13] proposes two mutation strategies based on a dynamic species approach, “DE/seeds-to-seeds” and “DE/seeds-to-rand”, which enhance the algorithm’s exploitation and exploration. Reference [14] proposes a new mutation strategy based on a circular topological structure, “DE/current-to-pnbest/1”. Reference [15] employs an adjustable-range method to find the optimal individual among nearest neighbors (bestn) and designs a “DE/bestn/2” mutation strategy to enhance performance.
On the other hand, some control parameter setting suggestions were proposed in the past [1,16]. Research on the control parameter settings mainly focuses on the following three aspects: fixed control parameter, control parameter with linear variation, and adaptive control parameter. Firstly, Storn [1] suggested that the value of the scaling factor (F) be set to 0.5, and the population size (NP) should be between 5 × D and 10 × D, where D is the dimension of the individual. Secondly, Reference [17] extended the previously proposed SHADE (success-history-based adaptive DE) algorithm [18] by introducing a linear population size reduction method, thus proposing the L-SHADE (SHADE with linear population size reduction) algorithm; in this algorithm, the required population size for each generation is determined based on a linear decreasing function. Thirdly, in the JADE (adaptive DE with optional external archive) algorithm [11], each individual’s CR is generated independently according to a normal distribution, while each individual’s F is adjusted independently according to a Cauchy distribution. SHADE [18] utilizes a historical memory to store successful parameter settings and uses them to guide the generation of the next generation of F and CR. The iL-SHADE (improved L-SHADE) algorithm proposed in [19] employs a method for generating F and CR that is similar to that in [11]. Based on this, the jSO (DE for single-objective real-parameter optimization) algorithm presented in [20] utilizes a weighted Lehmer mean for the dynamic adjustment of F; for CR, all values in its historical memory MCR are initialized to 0.8. Li et al. [21] proposed an enhanced adaptive differential evolution algorithm (EJADE), which incorporates a dynamic population size reduction strategy based on JADE [11]. Wang et al. [22] proposed an improved variant based on integration (self-adaptive ensemble-based differential evolution, SAEDE), which sets the control parameters of each generation’s population through adaptive and ensemble mechanisms. Cpałka et al. [23] used specific functions to adjust NP for each iteration. The performance of DE is greatly influenced by its control parameters. For different optimization problems, the control parameters in DE often require extensive experimentation to find a more suitable optimization solution.
Hence, in this paper, we propose a novel control parameter mechanism for DE, namely, the div mechanism. The main contributions of the div mechanism are summarized as follows:
(1) A novel diversity-based parameter adaptation mechanism (div) is proposed, which can be flexibly integrated into various DE algorithms to enhance their performance. The proposed div mechanism first generates two sets of symmetrical parameters, F and CR, using the parameter generation method of the original algorithm, and then adaptively selects the final parameters based on individual diversity rankings. The mechanism’s effectiveness has been empirically validated through its integration with four DE variants.
(2) DTDE-div is verified on the Congress on Evolutionary Computation (CEC) 2017 competition test suite on real-parameter single-objective numerical optimization. The results demonstrate that, out of the 145 cases examined, DTDE-div achieves better performance in 92 cases and underperforms in only 32 cases, obtaining the best (lowest) average performance ranking of 2.59. Thus, DTDE-div exhibits superior performance compared to other advanced DE variants. Furthermore, we provide comprehensive analyses of both the working mechanism and computational complexity.
The remainder of this paper is structured as follows: Section 2 reviews DE and discusses related work on control parameter settings in DE. Section 3 elaborates on the specifics of the proposed mechanism. Section 4 presents the experimental results and corresponding analyses, while Section 5 concludes the paper.

2. Related Work

This section introduces the detailed procedures of DE and summarizes the related work about control parameter settings in DE.

2.1. DE

The whole framework of DE consists of initialization, mutation, crossover, and selection. This section gives a brief look at related DE concepts in the order of these four steps.
Initialization: In the optimization process, DE seeks to identify a global optimum within a D-dimensional space, with initialization treated as a sampling operation [24]. Each sampling point is referred to as an individual or a target vector, and the collection of all individuals forms the population. The number of sampling points, denoted as NP, represents the population size. In classical DE, random initialization is the most frequently employed method. According to the given bound [Xmin, Xmax] of an individual at each dimension, where Xmin = [X1,min, …, XD,min] and Xmax = [X1,max, …, XD,max], the i-th individual Xi can be obtained by random initialization as follows:
$$x_{j,i} = x_{j,\min} + \mathrm{rand}[0,1] \cdot \left( x_{j,\max} - x_{j,\min} \right), \quad j = 1, \dots, D, \qquad (1)$$
where rand [0, 1] returns a random number in the interval [0, 1].
Mutation: The mutation operator in DE distinguishes itself from those used in other evolutionary algorithms (EAs). During this phase, for every target vector Xi, a base vector is selected initially. Subsequently, the scaled difference between other vectors, referred to as the differential vector, is utilized to introduce perturbations to the base vector, thereby achieving the mutation process. The resulting vector after mutation is termed as the mutation vector (or donor vector), denoted as Vi. A specific mutation operator in DE can be represented as DE/x/y, where x signifies the method of selecting the base vector, and y represents the number of differential vectors added to the base vector. For instance, the classic operator DE/rand/1 implies that the base vector is randomly selected from the population, and only one differential vector is incorporated. The detailed formulation of the mutation operator is outlined below:
$$V_i = X_{r1} + F \cdot \left( X_{r2} - X_{r3} \right), \qquad (2)$$
where Xr1 is the randomly chosen base vector, r1, r2, and r3 are randomly chosen from interval [1, NP] and satisfy the expression r1 ≠ r2 ≠ r3 ≠ i, and F is one of the control parameters, namely, the scaling factor.
Crossover: The crossover operator of DE is utilized to combine components between Xi and Vi, resulting in a vector Ui, known as the trial vector:
$$u_{j,i} = \begin{cases} v_{j,i}, & \text{if } \mathrm{rand}[0,1] \le CR \ \text{or} \ j = j_{\mathrm{rand}} \\ x_{j,i}, & \text{otherwise}, \end{cases} \qquad (3)$$
where rand [0, 1] returns a random number in the interval [0, 1], and jrand is an index randomly chosen from [1, D] to avoid the situation where all the components of Ui come from Xi.
Selection: The selection operator determines whether Xi or Ui survives, as described in Equation (4):
$$X_{i,G+1} = \begin{cases} U_i, & \text{if } f(U_i) \le f(X_i) \\ X_i, & \text{otherwise}. \end{cases} \qquad (4)$$
Here, f(X) denotes the objective function, which is used to assess the individuals. In this study, the optimization problem is formulated as a mathematical model aimed at minimizing the objective function. Consequently, a smaller objective function value indicates a higher-quality individual. In Equation (4), the individual with the smaller objective function value between Xi and Ui will proceed to the next generation.
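The four steps above can be condensed into a short routine. The following is an illustrative NumPy sketch of the classic DE/rand/1/bin scheme (the paper's own experiments were implemented in MATLAB); the function name, default settings, and omission of boundary handling are our simplifications, not the paper's.

```python
import numpy as np

def de_rand_1_bin(f, bounds, NP=50, F=0.5, CR=0.9, max_gen=200, seed=0):
    """Minimal DE/rand/1/bin sketch of initialization, mutation,
    crossover, and selection. `bounds` is a (D, 2) array of per-dimension
    [min, max] limits; boundary repair after mutation is omitted."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    D = len(lo)
    # Initialization: uniform sampling within [Xmin, Xmax]
    X = lo + rng.random((NP, D)) * (hi - lo)
    fit = np.array([f(x) for x in X])
    for _ in range(max_gen):
        for i in range(NP):
            # Mutation (Equation (2)): distinct indices r1, r2, r3 != i
            r1, r2, r3 = rng.choice([k for k in range(NP) if k != i],
                                    size=3, replace=False)
            V = X[r1] + F * (X[r2] - X[r3])
            # Crossover (Equation (3)): binomial, with one forced j_rand
            mask = rng.random(D) <= CR
            mask[rng.integers(D)] = True
            U = np.where(mask, V, X[i])
            # Selection (Equation (4)): keep the better of X_i and U_i
            fU = f(U)
            if fU <= fit[i]:
                X[i], fit[i] = U, fU
    best = int(np.argmin(fit))
    return X[best], fit[best]
```

A call such as `de_rand_1_bin(lambda x: float(np.sum(x * x)), bounds)` minimizes the sphere function over the given bounds.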

2.2. Control Parameter Settings in DE

Research on the control parameter settings mainly focuses on the following three approaches: fixed control parameter, control parameter with linear variation, and adaptive control parameter.

2.2.1. Fixed Control Parameter

In the classic DE algorithm, a fixed parameter setting approach is adopted, meaning that parameters are pre-set before the search and remain unchanged throughout the entire iteration process. Storn and Price [1] set the parameters as follows: the population size NP is from 5 × D to 10 × D (D is the dimension of the individual), the scaling factor F is 0.5, and the initial crossover rate CR is generally set to 0.1, but when rapid convergence is required, it is set to 0.9. Gämperle et al. [25] summarized that the performance of the DE algorithm is heavily dependent on the settings of control parameters. The settings for the control parameters are listed as follows: the ideal range for the population size NP is between 3 × D and 8 × D , the effective initial value for the scaling factor F is 0.6, and the ideal initial range for the crossover rate CR is between 0.3 and 0.9. Ronkkonen et al. [26] believed that the ideal range for the number of populations NP is 2 × D to 40 × D , the scaling factor F should be between 0.4 and 0.95 (where F = 0.9 allows for a compromise between exploration and exploitation), and for separable problems, the crossover rate CR is ideally between 0.0 and 0.2, while for non-separable problems or multimodal problems, it should be set between 0.9 and 1. CoDE [27] selects parameters for each experimental vector by randomly choosing from three pre-set parameter pools. ODE [28] adopts an orthogonal crossover operator to enhance the search capability of the algorithm, with parameters set to F = 0.9, CR = 0.9, and NP = D. DE-APC [29] adopts an automatic parameter configuration method, where the evolution control parameters F and CR for each individual are randomly selected from two pre-set parameter sets.

2.2.2. Control Parameter with Linear Variation

To avoid manually adjusting control parameters, linear variation is a common method for setting random parameters. Das et al. [30] proposed two methods for setting the parameter F: random setting and time-varying setting. In the random method, the parameter F is set as a random number between 0.5 and 1, while in the time-varying method, the parameter F decreases linearly over a given time interval. Brest et al. [31] presented a population size reduction algorithm (population size reduction for DE, dynNP-DE). The population size undergoes three reductions throughout the entire evolutionary cycle, and when the algorithm reaches the specified number of evolutionary generations, the population size is reduced to half of the previous generation. Reference [32] adopted the same population size reduction method as Ref. [17]. However, the former only began to implement population reduction after consuming half of the function evaluation count.

2.2.3. Adaptive Control Parameter

Another method for parameter setting is the adaptive adjustment approach, which adjusts control parameters based on feedback from the search process or through evolutionary operations.
Observation-based adaptive parameter control methods (APCMs) adjust the values of F and CR based on indicator values from individuals and/or their objective function values. In the population-based control method of DE using precalculated differentials (PCM-DEPD) [33], only the F parameter is adaptively tuned according to the objective values of individuals in the population Pt. For each iteration t, Ft is defined as follows:
$$F_t = \begin{cases} \max\!\left(F_{\min},\ 1 - \left|\dfrac{f_t^{\max}}{f_t^{\min}}\right|\right) & \text{if } \left|\dfrac{f_t^{\max}}{f_t^{\min}}\right| < 1 \\ \max\!\left(F_{\min},\ 1 - \left|\dfrac{f_t^{\min}}{f_t^{\max}}\right|\right) & \text{otherwise}. \end{cases} \qquad (5)$$
Here, $f_t^{\max}$ and $f_t^{\min}$ denote the maximum and minimum objective values in the population Pt, respectively. The recommended value for $F_{\min}$ is 0.4. Unlike F, the parameter CR remains a fixed value (CR = 0.5). FiADE [34] proposes two straightforward adaptation schemes for the control parameters F and CR, which play a critical role in the performance of DE algorithms. These adaptation schemes are based on the objective function values of the target vectors and donor vectors. In the rank-based DE (PCM-RDE) [35], during each iteration t, individuals are sorted according to their objective values. Subsequently, Fi,t and CRi,t are assigned based on the rank of the base vector. A smaller Fi,t value and a larger CRi,t value are assigned when the objective value of the base vector is small. The individual-dependent mechanism of DE (PCM-IDE) [36] employs a similar approach to PCM-RDE by using rank values of individuals in the population. However, while F and CR are deterministically assigned to individuals based on their ranks in PCM-RDE, they are randomly generated in PCM-IDE.
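As an illustration, the PCM-DEPD scaling-factor rule above reduces to a few lines; the function name is ours, and the objective values are assumed nonzero so the ratios are well defined.

```python
def pcm_depd_F(f_values, F_min=0.4):
    """Sketch of the PCM-DEPD scaling-factor rule: F is adapted from the
    spread of objective values in the current population, while CR is not
    adapted in PCM-DEPD (it stays fixed at 0.5)."""
    f_max, f_min_obj = max(f_values), min(f_values)
    if abs(f_max / f_min_obj) < 1:
        return max(F_min, 1 - abs(f_max / f_min_obj))
    return max(F_min, 1 - abs(f_min_obj / f_max))
```

With a widely spread population (e.g., objective values 2 and 10), the rule yields a large F to keep exploring; with nearly equal values, it falls back to the floor F_min = 0.4.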
As for APCMs with the parameter inheritance mechanism, some variants of jDE [37] have been proposed. In jDE, parameters that can generate superior offspring are retained for the next generation in the process of evolution, and some probability formulas are designed to achieve the adaptive control parameters F and CR. While the probability is constant in jDE, it depends on the diversity of the objective values of individuals in FDSADE [38]. ISADE [39] utilizes the objective value of each individual to determine the probability and generate new values of F and CR.
In recent years, several researchers have proposed DE variants focused on control parameter settings. In MD-DE [40], a multi-stage parameter adaptation scheme was proposed, where the scaling factor F was generated using improved wavelet basis functions, Laplace distribution, and Cauchy distribution, depending on the evolutionary stage. Ghosh et al. [41] introduced a nearest spatial neighborhood-based approach to enhance parameter adaptation in the SHADE algorithm. Poláková et al. [42] implemented adaptive adjustment of the population size, enabling dynamic modifications during the search process through a linear reduction in population diversity. Tong et al. [43] addressed the limitations of MPEDE [44] by developing a multi-population ensemble mechanism that introduced a new mutation operator combining information from both superior and inferior individuals. Furthermore, a parameter adaptation strategy based on the weighted Lehmer mean was adopted. Similarly, Gui et al. [45] enhanced the DE algorithm by implementing a multi-group population mechanism. In this approach, each group contributed to the evolution process based on fitness, allowing suitable evolution operators and parameter settings to be selected from the corresponding pools. In addition, NP was also adaptively adjusted. Yi et al. [46] introduced an enhanced version of DE by incorporating ensemble populations, utilizing a set of evolution operators, and integrating a parameter adaptation strategy to further improve its performance. Zhang et al. [47] applied a Q-learning strategy in adapting the parameter NP within the L-SHADE framework, treating the search process as a Markov decision process (MDP). Li et al. [48] developed both an operator selector and a parameter selector using ensemble learning, decision trees, and neural networks, and subsequently integrated these selectors into DE on the basis of fitness landscape analysis. Zeng et al. [49] simultaneously adapted the three key parameters of DE: the scaling factor (F) and crossover rate (CR) were controlled using a distance-based strategy, while the population size (NP) was adjusted via a sawtooth-linear strategy. Wang et al. [50] proposed a dual-population framework to boost DE’s performance. The primary population was used for standard evolution, while the secondary population, composed of suboptimal individuals, was employed to adapt the mutation operator and parameters within the main population. Meng and Yang [51] divided the evolutionary process into two stages, applying different mutation operators at each stage. Additionally, they introduced a fitness-independent mechanism to adapt F and CR. Using convergence tracking, Abbas et al. [52] developed an adaptive DE based on a history learning strategy. Huynh et al. [53] incorporated a reinforcement learning (RL) strategy, specifically Q-learning, into the DE framework as an adaptive controller, enabling automatic parameter adjustment at each stage of evolution. A data-driven DE with parameter adaptation strategies based on clustering, niching, and surrogate models was proposed by Ma et al. [54].

3. Proposed Method

3.1. Motivation

The control parameters of the DE algorithm mainly include the population size NP, the scaling factor F, and the crossover rate CR. If the parameters are chosen inappropriately, the search may stagnate due to an excessive emphasis on exploration, or converge prematurely due to an excessive emphasis on exploitation. The scaling factor F mainly affects the search step size: increasing F expands the algorithm’s search range and enhances population diversity but simultaneously weakens exploitation, while decreasing F strengthens exploitation and improves the convergence speed but risks premature convergence. The crossover rate CR affects the weighting of evolutionary information: increasing CR improves population diversity, while decreasing CR is beneficial for exploiting the separability of individual dimensions.
We consider using individual diversity information as feedback for adapting the parameters F and CR. Individuals ranked closer to the population center are configured with a smaller scaling factor F and crossover rate CR to exploit the promising areas nearby, while individuals ranked farther from the center are configured with a larger scaling factor F and crossover rate CR to explore more distant regions in search of better solutions.

3.2. Complete Procedure of div Mechanism

Algorithm 1 presents the pseudo-code of the div mechanism. At generation G, the center of the current population PG, denoted P̄G, is calculated (line 1), as is the diversity divi,G of each individual Xi,G; the diversities are then sorted in ascending order and assigned rankings rankvi,G from 1 to NP (lines 2–5). As described by the formula divi,G = EuclideanDistance(Xi,G, P̄G), we use the Euclidean distance between an individual and the population center to represent that individual’s diversity. After two sets of symmetrical parameters F and CR are generated using the parameter generation method of the original algorithm, the smaller values are assigned as the final F and CR by default (line 10). However, for individuals whose diversity ranking exceeds NP × 0.3, the larger values are assigned instead (lines 11 and 12). This straightforward approach selects the more advantageous option from the two sets of symmetrical parameters.
Algorithm 1 Pseudo-code of div mechanism
Input: Current population PG = (X1,G, X2,G, …, XNP,G) with generation index G and population size NP
1: Calculate the center of PG, denoted as P̄G;
2: For i = 1:NP do
3:  divi,G = EuclideanDistance(Xi,G, P̄G);
4: End For
5: Sort divi,G in ascending order and assign the rankings rankvi,G from 1 to NP;
6: Generate F1 and CR1 according to the parameter generation method in the original algorithm;
7: Generate F2 and CR2 according to the parameter generation method in the original algorithm;
8: Fmin = min(F1, F2); Fmax = max(F1, F2);
9: CRmin = min(CR1, CR2); CRmax = max(CR1, CR2);
10: F = Fmin; CR = CRmin;
11: F(rankv > NP × 0.3) = Fmax(rankv > NP × 0.3);
12: CR(rankv > NP × 0.3) = CRmax(rankv > NP × 0.3);
Output: Parameters F and CR
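Assuming a NumPy representation of the population, Algorithm 1 can be sketched as follows. This is an illustrative translation (the paper's implementation is in MATLAB), with the explicit loop of lines 2–4 replaced by vectorized operations; the function name and array layout are ours.

```python
import numpy as np

def div_select(P, F1, CR1, F2, CR2):
    """Sketch of the div mechanism (Algorithm 1). P is the (NP, D)
    population; (F1, CR1) and (F2, CR2) are two symmetrical per-individual
    parameter vectors produced by the host algorithm's own generation
    method. The 0.3 threshold follows lines 11-12 of Algorithm 1."""
    NP = len(P)
    center = P.mean(axis=0)                    # line 1: population center
    div = np.linalg.norm(P - center, axis=1)   # lines 2-4: Euclidean distances
    # line 5: rank 1 = closest to the center, rank NP = farthest
    rankv = np.empty(NP, dtype=int)
    rankv[np.argsort(div)] = np.arange(1, NP + 1)
    F = np.minimum(F1, F2).copy()              # lines 8-10: smaller by default
    CR = np.minimum(CR1, CR2).copy()
    far = rankv > NP * 0.3                     # lines 11-12: farther individuals
    F[far] = np.maximum(F1, F2)[far]           # ... receive the larger values
    CR[far] = np.maximum(CR1, CR2)[far]
    return F, CR
```

In this sketch, only the roughly 30% of individuals closest to the center keep the smaller parameter pair, matching the exploitation/exploration split described in Section 3.1.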

3.3. The ADE-div Framework

ADE-div is a variant algorithm that applies the div mechanism to the adaptive DE algorithm. The pseudo-code of the ADE-div algorithm is shown in Algorithm 2.
Algorithm 2 Pseudo-code of ADE-div
Initialization
1: Generation index G = 0;
2: Initialize a population of NP individuals PG = (X1,G, X2,G, …, XNP,G) randomly;
3: Xi,G = (x1i,G, x2i,G, …, xDi,G), i = 1:NP, uniformly distributed in the range [Xmin, Xmax] with dimension D;
Main loop
4: While the stopping criterion is not satisfied Do
5:  Calculate the center of PG, denoted as P̄G;
6:  For i = 1:NP do
7:   divi,G = EuclideanDistance(Xi,G, P̄G);
8:  End For
9:  Sort divi,G in ascending order and assign the rankings rankvi,G from 1 to NP;
10:  Generate F1 and CR1 adaptively;
11:  Generate F2 and CR2 adaptively;
12:  Fmin = min(F1, F2); Fmax = max(F1, F2);
13:  CRmin = min(CR1, CR2); CRmax = max(CR1, CR2);
14:  F = Fmin; CR = CRmin;
15:  F(rankv > NP × 0.3) = Fmax(rankv > NP × 0.3);
16:  CR(rankv > NP × 0.3) = CRmax(rankv > NP × 0.3);
17:  For i = 1:NP do
18:   Generate a mutant vector Vi,G = (v1i,G, v2i,G, …, vDi,G) for each target vector Xi,G via Equation (2);
19:   Generate a trial vector Ui,G = (u1i,G, u2i,G, …, uDi,G) for each target vector Xi,G according to Equation (3);
20:   Evaluate the trial vector Ui,G according to Equation (4);
21:  End For
22:  Increment the generation index G = G + 1;
23: End While
In each generation G, the control parameters F and CR are generated according to the div mechanism (lines 5–16). As shown in line 7, individual diversity is represented by the Euclidean distance between each individual and the population center. At the same time, two pairs of symmetrical parameters are obtained according to the parameter generation method of the original adaptive DE algorithm (lines 10 and 11). The key steps of the div mechanism, lines 12 to 16, allocate the more suitable F and CR parameters to each individual. Then, for each target vector Xi,G, an offspring vector is generated through the mutation (line 18), crossover (line 19), and selection (line 20) steps. This process is repeated until the stopping criterion is satisfied. The following uses JADE and SHADE as examples to explain how the div mechanism is applied to them.
Zhang et al. [11] first proposed adaptive control of DE parameters based on historical information (JADE), laying the foundation for the development of adaptive DE. The core idea of JADE is that the process of generating offspring in each generation is also a “trial and error” process for the parameters: the population retains historically successful parameters and tends to reuse them, allowing it to better adapt to the current environment. In JADE, the scaling factor and crossover rate used by each individual to generate offspring are randomly generated, as shown in Formulas (6) and (7). Here, Fi and CRi are the scaling factor and crossover rate used by the i-th individual in the population, while randn(μ, σ) and randc(μ, σ) represent a normal distribution with mean μ and variance σ² and a Cauchy distribution, respectively. When the parameters CRi or Fi exceed 1, they are set to 1, and when they are non-positive, they are regenerated. μCR and μF are memory parameters maintained by the population, which record the historically successful parameters:
$$F_i = \mathrm{randc}(\mu_F, 0.1), \qquad (6)$$
$$CR_i = \mathrm{randn}(\mu_{CR}, 0.1), \qquad (7)$$
$$\mu_F = (1 - c)\,\mu_F + c \cdot \mathrm{mean}_L(S_F), \qquad (8)$$
$$\mu_{CR} = (1 - c)\,\mu_{CR} + c \cdot \mathrm{mean}(S_{CR}), \qquad (9)$$
$$\mathrm{mean}_L(S_F) = \frac{\sum_{F \in S_F} F^2}{\sum_{F \in S_F} F}. \qquad (10)$$
In each generation, the CRi and Fi values that produce better offspring are stored in the temporary archives SCR and SF, respectively. Then, the population’s memory parameters are updated according to Formulas (8) and (9). Here, meanL is the Lehmer mean, as shown in Formula (10), and c is an algorithm parameter. To apply the div mechanism, two pairs of parameters are produced in lines 10 and 11.
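JADE's parameter sampling and memory update can be sketched as follows. This is an illustrative Python rendering of Formulas (6)–(10) (the paper's code is in MATLAB); the function names are ours, and the Cauchy draw is implemented via the inverse-CDF tangent transform.

```python
import numpy as np

def jade_sample_params(mu_F, mu_CR, rng):
    """Per-individual parameters in the spirit of Formulas (6) and (7):
    a Cauchy draw for F and a normal draw for CR, with the truncation and
    regeneration rules described in the text."""
    F = 0.0
    while F <= 0:  # regenerate non-positive F
        F = mu_F + 0.1 * np.tan(np.pi * (rng.random() - 0.5))  # Cauchy draw
    F = min(F, 1.0)
    CR = rng.normal(mu_CR, 0.1)
    while CR <= 0:  # regenerate non-positive CR
        CR = rng.normal(mu_CR, 0.1)
    return F, min(CR, 1.0)

def jade_update_memory(mu_F, mu_CR, S_F, S_CR, c=0.1):
    """Memory update of Formulas (8)-(10): Lehmer mean for F, arithmetic
    mean for CR, over this generation's successful parameter values."""
    if S_F:
        mean_L = sum(F * F for F in S_F) / sum(S_F)  # Formula (10)
        mu_F = (1 - c) * mu_F + c * mean_L           # Formula (8)
    if S_CR:
        mu_CR = (1 - c) * mu_CR + c * (sum(S_CR) / len(S_CR))  # Formula (9)
    return mu_F, mu_CR
```

The Lehmer mean in Formula (10) weights larger successful F values more heavily than an arithmetic mean would, which counteracts the downward drift of μF.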
In SHADE [18], the population no longer stores just one pair of memory parameters but separately retains two memory parameter vectors: MF and MCR, both of length H. Each individual randomly selects a pair from the H memory parameter pairs to generate the scaling factor and crossover rate parameters. Additionally, considering that the memory parameter update process in JADE does not take into account the varying degrees of success of historical parameters, SHADE updates each pair of memory parameters in the memory parameter vectors sequentially, as shown in the following formulas. Here, MF,k,g is the k-th element of the scaling factor memory parameter vector in the g-th iteration:
$$M_{F,k,g+1} = \begin{cases} \mathrm{mean}_{WL}(S_F), & \text{if } S_F \neq \emptyset \\ M_{F,k,g}, & \text{otherwise}, \end{cases} \qquad (11)$$
$$M_{CR,k,g+1} = \begin{cases} \mathrm{mean}_{WA}(S_{CR}), & \text{if } S_{CR} \neq \emptyset \\ M_{CR,k,g}, & \text{otherwise}, \end{cases} \qquad (12)$$
$$\mathrm{mean}_{WL}(S_F) = \frac{\sum_{k=1}^{|S_F|} w_k \cdot S_{F,k}^2}{\sum_{k=1}^{|S_F|} w_k \cdot S_{F,k}}, \qquad (13)$$
$$\mathrm{mean}_{WA}(S_{CR}) = \sum_{k=1}^{|S_{CR}|} w_k \cdot S_{CR,k}, \qquad (14)$$
$$w_k = \frac{\left| f(u_{k,g}) - f(x_{k,g}) \right|}{\sum_{l=1}^{|S_{CR}|} \left| f(u_{l,g}) - f(x_{l,g}) \right|}. \qquad (15)$$
Likewise, two sets of symmetrical parameters, F and CR, are generated in lines 10 and 11 to implement the div mechanism. All other procedures follow the original algorithm.
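SHADE's history-based memory update above can be sketched in a few lines. This is an illustrative NumPy rendering (the paper's code is in MATLAB); the function name and argument layout are ours, with the fitness improvements |f(u) − f(x)| passed in as the weight source.

```python
import numpy as np

def shade_update_memory(M_F, M_CR, k, S_F, S_CR, df):
    """Sketch of SHADE's memory update: M_F and M_CR are the length-H
    memory vectors, k is the slot being updated, S_F/S_CR hold this
    generation's successful parameters, and df holds the corresponding
    fitness improvements |f(u) - f(x)| that become the weights w_k."""
    if len(S_F) == 0:  # no successful offspring: memories stay unchanged
        return M_F, M_CR
    w = np.asarray(df, dtype=float)
    w = w / w.sum()                                  # weights w_k
    S_F, S_CR = np.asarray(S_F), np.asarray(S_CR)
    M_F[k] = np.sum(w * S_F**2) / np.sum(w * S_F)    # weighted Lehmer mean
    M_CR[k] = np.sum(w * S_CR)                       # weighted arithmetic mean
    return M_F, M_CR
```

Weighting by the fitness improvement means that parameter pairs producing larger gains pull the updated memory slot more strongly, which addresses the uniform-credit limitation of JADE noted in the text.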

4. Results

This section presents a comprehensive evaluation of the proposed div mechanism’s effectiveness using the CEC2017 benchmark suite [55]. The CEC2017 benchmark suite comprises 29 test functions that are categorized into 4 distinct groups based on their mathematical properties: unimodal functions (F1 and F3), simple multimodal functions (F4–F10), hybrid functions (F11–F20), and composition functions (F21–F30). Note that function F2 was excluded from the CEC2017 benchmark suite due to its demonstrated instability in high-dimensional scenarios. The comparative analysis employed the solution error value as the primary performance metric, defined as F(x) − F(x*), where F(x) represents the best fitness value obtained after 10,000 × D function evaluations and F(x*) denotes the known global optimum fitness value; error values below $10^{-8}$ were considered zero. The dimension D ∈ {50, 100}, the search space was [−100, 100]ᴰ for all dimensions, and 51 independent runs were performed, with the mean and standard deviation of the solution error values reported. We conducted Wilcoxon signed-rank tests [56] (α = 0.05) to assess significant differences, and the symbols “+”, “−”, and “≈” indicate that the algorithm incorporating the div mechanism performed significantly better than, worse than, or similarly to the compared algorithm, respectively. All the algorithms were implemented in MATLAB 2020b.
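The error-value convention above reduces to a two-line helper; the function name is ours, and it assumes minimization so the error is non-negative.

```python
def solution_error(f_best, f_opt, tol=1e-8):
    """Solution error metric used in this section: F(x) - F(x*),
    with values below the tolerance counted as zero."""
    err = f_best - f_opt
    return 0.0 if err < tol else err
```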

4.1. The Effectiveness of div Mechanism

This section evaluates the effectiveness of the div mechanism through its integration with the following four algorithms:
JADE [11]: An adaptive DE algorithm based on a new mutation strategy “DE/current-to-pbest”, with optional external archive and updating control parameters in an adaptive manner.
L-SHADE [17]: An improved version of the SHADE algorithm with linear population size reduction.
jSO [20]: An improved version of the iL-SHADE algorithm that introduces a new weighted mutation strategy.
DTDE [57]: An improved SCSS-L-SHADE algorithm with domain transform methodology.
All algorithm parameters were rigorously set following their respective canonical literature to preserve impartiality in the comparison, and the details are listed in Table 1. Table 2, Table 3, Table 4 and Table 5 present the algorithm comparisons of JADE, L-SHADE, jSO, DTDE, and their corresponding div variants over the 50-D and 100-D CEC2017 benchmark suite, respectively. The entries that are significantly better are marked in bold.
The experimental results demonstrate that the div mechanism significantly enhanced the performance of all four algorithms. For example, in 50-D and 100-D tests, JADE-div outperformed the original JADE algorithm on 15 and 20 functions, respectively, while showing slightly inferior performance on only 7 and 4 functions (out of 29 benchmark functions). Specifically, for 100-D tests, JADE-div exhibited comprehensive performance improvements. Among unimodal functions, it showed better performance on F3 while maintaining comparable results on F1; for simple multimodal functions, significant improvements were observed on F4, F5, F7, F8, and F10, with only minor degradation on F6 and F9; regarding hybrid functions, superior performance was achieved on F11, F14, and F16–F20, while maintaining equivalent performance on F12, F13, and F15; in composition functions, performance breakthroughs were realized on F21, F23–F27, and F29, with only F22 and F30 showing slight disadvantages and F28 performing comparably. These comprehensive results conclusively demonstrate that the div mechanism effectively enhanced the performance of the JADE algorithm.
L-SHADE-div also showed significant improvements over the original L-SHADE, outperforming it on 11 (50-D) and 15 (100-D) functions, while underperforming on only 7 and 9 functions, respectively, out of the 29 benchmark functions. Similarly, jSO-div exhibited substantial gains, surpassing jSO on 11 (50-D) and 16 (100-D) functions, with only 4 and 7 inferior performances, respectively. Although DTDE-div showed relatively more modest improvements in 50-D cases compared to the other algorithms, it still achieved better performance on 9 (50-D) and 15 (100-D) functions, while performing worse on 7 and 5 functions, respectively. These comprehensive comparisons clearly indicate that the div mechanism provided meaningful performance enhancements across various algorithm architectures, with particularly notable improvements observed in higher-dimensional (100-D) scenarios. The consistent pattern of improvement suggests the robustness and general applicability of the proposed div mechanism.

4.2. Comparisons with State-of-the-Art DEs

The analysis in Section 4.1 demonstrated that DTDE-div outperformed DTDE. To further assess the performance of DTDE-div, this subsection compares it with the following five state-of-the-art DE algorithms:
EJADE [21]: An adaptive DE algorithm with ensembling operators.
jSO [20]: An improved version of the iL-SHADE algorithm that introduces a new weighted mutation strategy.
L-SHADE-RSP [58]: An improved jSO algorithm with a rank-based selective pressure strategy for mutation.
DISH [59]: An improved jSO algorithm utilizing a novel distance-based parameter adaptation mechanism.
SCSS-L-SHADE [60]: A variant of the L-SHADE algorithm based on a selective-candidate framework with a similarity selection rule.
All algorithm parameters were set exactly as recommended in their original publications to ensure a fair comparison; the details are listed in Table 6. Table 7 and Table 8 present the comparisons of DTDE-div with the other DE algorithms on the 50-D and 100-D CEC2017 benchmark suite, respectively. From Table 7, it can be observed that DTDE-div performed better than, or at least comparably to, the other algorithms on most functions of the 50-D CEC2017 benchmark suite. Specifically, the “+/–” metric was “24/3”, “15/8”, “13/10”, “13/9”, and “20/6” when compared with EJADE, jSO, L-SHADE-RSP, DISH, and SCSS-L-SHADE, respectively; thus, the advantage of DTDE-div was most pronounced against EJADE, jSO, and SCSS-L-SHADE. The superiority of DTDE-div was even clearer on the 100-D CEC2017 benchmark suite, where the “+/–” metric observed from Table 8 was “24/2”, “17/6”, “14/9”, “14/12”, and “23/3” against the same five algorithms, respectively. The above analysis indicates that DTDE-div performed well in most cases, especially on high-dimensional functions. When considering the function types, the following were observed:
On the unimodal functions F1 and F3: Except on F3, where DTDE-div and EJADE performed comparably in both 50-D and 100-D cases, DTDE-div was inferior to the other algorithms.
On the simple multimodal functions F4–F10: In 50-D cases, DTDE-div performed comparably on F4 and F9 and worse on F6 when compared with the other algorithms. In 100-D cases, however, the performance of DTDE-div declined on F4 and improved on F6. Notably, DTDE-div outperformed all compared algorithms on F5, F7, F8, and F10 in both 50-D and 100-D cases.
On the hybrid functions F11–F20: DTDE-div won 10, 6, 3, 4, and 7 functions but lost 0, 4, 4, 3, and 2 functions when compared with EJADE, jSO, L-SHADE-RSP, DISH, and SCSS-L-SHADE in 50-D cases, respectively. In 100-D cases, DTDE-div won 9, 5, 4, 4, and 9 functions but lost 1, 2, 5, 6, and 1 functions compared with other algorithms. Compared with EJADE and SCSS-L-SHADE, DTDE-div exhibited better performance on most of the hybrid functions.
On the composition functions F21–F30: DTDE-div had the most outstanding performance. From Table 7, DTDE-div won 9 (F21–F29), 5 (F21, F23, F24, F26, and F29), 6 (F21–F24, F26, and F29), 5 (F21, F23, F24, F26, and F29), and 9 (F21–F27, F29, and F30) functions but lost 0, 1 (F28), 3 (F25, F27, and F28), 3 (F25, F27, and F28), and 1 (F28) functions when compared with EJADE, jSO, L-SHADE-RSP, DISH, and SCSS-L-SHADE in 50-D cases, respectively. In 100-D cases, DTDE-div won 9 (F21–F27, F29, and F30), 7 (F21–F26 and F29), 6 (F21–F24, F26, and F29), 6 (F21–F24, F26, and F29), and 8 (F21–F27 and F29) functions but lost 0, 1 (F27), 1 (F27), 2 (F27 and F28), and 0 functions compared with the other algorithms, as observed from Table 8.
In addition to the performance results presented in the tables, Figure 1 shows the convergence curves of the compared DE algorithms on twelve selected functions in the run with the median error value, which are (a) 50-D F5, (b) 50-D F8, (c) 50-D F10, (d) 50-D F16, (e) 50-D F20, (f) 50-D F23, (g) 100-D F5, (h) 100-D F6, (i) 100-D F16, (j) 100-D F21, (k) 100-D F22, and (l) 100-D F26. On 100-D F6, DISH achieved the best result, followed by DTDE-div and L-SHADE-RSP. For the remaining eleven functions, DTDE-div exhibited the best or at least comparable performance. Furthermore, the overall performance ranking of the compared DE algorithms based on Friedman’s test [61] is shown in Table 9. The results demonstrate that DTDE-div achieved the highest ranking with a score of 2.59, highlighting its superior performance compared to other algorithms. L-SHADE-RSP and DISH followed closely with rankings of 2.80 and 2.88, respectively, while SCSS-L-SHADE and jSO showed moderate performance with scores of 3.63 and 3.74. Notably, EJADE ranked the lowest at 5.36, indicating its relatively weaker effectiveness among the compared methods.
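The overall ranking in Table 9 is obtained by ranking the algorithms on each benchmark function and averaging those ranks over all functions (lower is better). A minimal sketch of this average-rank computation, with illustrative data only and tie handling by averaged ranks as is conventional for Friedman's test:

```python
import numpy as np

def tie_avg_ranks(row):
    """Ranks of one row (1 = best/smallest error); ties share the mean rank."""
    order = np.argsort(row, kind="stable")
    ranks = np.empty(len(row))
    ranks[order] = np.arange(1, len(row) + 1)
    for v in np.unique(row):
        mask = row == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def friedman_avg_ranks(errors):
    """errors: (n_functions, n_algorithms) matrix of mean errors.
    Returns each algorithm's rank averaged over all functions (lower is
    better), i.e., the kind of score reported in Table 9."""
    errors = np.asarray(errors, dtype=float)
    return np.mean([tie_avg_ranks(row) for row in errors], axis=0)
```

For example, for three algorithms whose per-function errors are `[[1, 2, 3], [3, 2, 1], [2, 1, 3]]`, the average ranks are 2.00, 1.67, and 2.33.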

4.3. Working Mechanism of div

In this section, two variants of JADE-div, described below, were developed to study the working mechanism of div.
JADE-div-opposite: The final F and CR values are generated in the opposite way. Specifically, F_max and CR_max are first assigned as the final F and CR, respectively; then, if the diversity ranking of an individual meets the switching condition, F_min and CR_min are assigned to F and CR for that individual instead.
JADE-div-random: The final F and CR values are generated randomly. Specifically, F_min or F_max is assigned to F at random; similarly, CR_min or CR_max is assigned to CR at random.
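The three selection rules compared in this ablation can be sketched as follows. This is a hedged sketch, not the paper's implementation: the diversity measure and the switching condition (`div_rank` and `threshold`) are placeholders for the quantities defined in the method section, and the direction of the proposed rule is inferred as the mirror image of the opposite variant.

```python
import numpy as np

def select_parameters(F_min, F_max, CR_min, CR_max, div_rank, threshold,
                      mode="div", rng=None):
    """Choose the final (F, CR) per individual from two symmetrical
    candidate sets.

    F_min, F_max, CR_min, CR_max : (NP,) candidate parameter arrays
    div_rank  : (NP,) diversity ranking of each individual (placeholder)
    threshold : rank condition that switches between the two candidates
    mode      : 'div' (proposed), 'opposite', or 'random' (ablation variants)
    """
    if rng is None:
        rng = np.random.default_rng()
    if mode == "div":
        # inferred proposed rule: default to the smaller candidates, switch
        # to the larger ones when the diversity ranking meets the condition
        use_max = div_rank < threshold
    elif mode == "opposite":
        # JADE-div-opposite: default to F_max/CR_max, switch to F_min/CR_min
        use_max = ~(div_rank < threshold)
    else:
        # JADE-div-random: ignore diversity, pick either candidate at random
        use_max = rng.random(len(F_min)) < 0.5
    F = np.where(use_max, F_max, F_min)
    CR = np.where(use_max, CR_max, CR_min)
    return F, CR
```

By construction, the "div" and "opposite" modes make complementary choices for every individual, which is exactly what the ablation in Table 10 probes.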
The performance results based on the Wilcoxon signed-rank test and the performance ranking derived from Friedman’s test for JADE-div and its variants on 50-D CEC2017 functions are presented in Table 10. Compared to JADE-div-opposite and JADE-div-random, JADE-div won on 24 and 13 functions, respectively, while losing on only 3 and 7 functions, confirming the effectiveness of the div mechanism. Furthermore, JADE-div achieved the lowest ranking value (1.31), demonstrating its superior performance over its variants. JADE-div-random ranked second (1.90), followed by JADE-div-opposite (2.79), indicating that the random-based mechanism performed better than the opposition-based one.

4.4. Time Complexity

In this section, we present a comparative analysis of the computational complexity of JADE-div and the original JADE algorithm. Following the CEC2017 competition guidelines, we evaluated time complexity using the standardized index (T̂2 − T1)/T0 [55], where T0 corresponds to the execution time for basic arithmetic operations, T1 denotes the baseline time required for 200,000 function evaluations of benchmark F18 from the CEC2017 test suite (30-D case), and T̂2 represents the average execution time of the algorithm across five independent runs under the same conditions as T1. As shown in Table 11, our experimental results indicate that JADE-div maintained computational efficiency comparable to JADE, though with a marginally higher time complexity. This slight increase can be attributed to the additional div mechanism incorporated in JADE-div.
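Using the values reported in Table 11, the complexity index can be reproduced directly (a minimal sketch; the function name is illustrative):

```python
def complexity_index(T0, T1, T2_hat):
    """CEC2017 algorithm-complexity index (T2_hat - T1) / T0."""
    return (T2_hat - T1) / T0

# timing values reported for JADE and JADE-div in Table 11
print(round(complexity_index(0.0484, 0.3012, 0.5797), 4))  # 5.7541 (JADE)
print(round(complexity_index(0.0484, 0.3012, 0.6118), 4))  # 6.4174 (JADE-div)
```

Both algorithms share T0 and T1, so the entire difference in the index comes from the extra time T̂2 spent by the div mechanism.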

5. Conclusions

This paper presented a novel diversity-based parameter adaptation (div) mechanism that can be flexibly integrated into various DE algorithms to enhance their performance. The proposed div mechanism first generates two sets of symmetrical parameters, F and CR, using the original algorithm's parameter generation method, and then adaptively selects the final parameters based on individual diversity rankings. It employs a straightforward approach to identify the more appropriate option from the two sets of symmetrical parameters. The mechanism's effectiveness was empirically validated through its integration with four classical DE variants, and comparative experiments demonstrated that DTDE-div, our enhanced implementation, outperformed five state-of-the-art DE algorithms. Furthermore, we provided comprehensive analyses of both the working mechanism and the computational complexity. The proposed approach offers a new perspective on parameter adaptation by effectively balancing local exploitation and global exploration. Future research directions include refining the parameter selection process by incorporating additional properties of the candidate parameters, integrating the div mechanism into a broader range of modern adaptive or hybrid DE algorithms, and applying the proposed approach to real-world engineering problems.

Author Contributions

Conceptualization, R.Y.; methodology, R.Y.; software, R.Y.; validation, R.Y. and X.J.; writing—original draft preparation, R.Y. and X.J.; writing—review and editing, R.Y., X.J., and L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

The work described in this paper was supported by the National Special Project Number for International Cooperation (under grant number 2015DFR11050) and the Applied Science and Technology Research and Development Special Fund Project of Guangdong Province (under grant number 2016B010126004).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  2. Li, Y.; Han, T.; Zhou, H.; Tang, S.; Zhao, H. A novel adaptive L-SHADE algorithm and its application in UAV swarm resource configuration problem. Inf. Sci. 2022, 606, 350–367. [Google Scholar] [CrossRef]
  3. Sui, X.; Chu, S.C.; Pan, J.S.; Luo, H. Parallel compact differential evolution for optimization applied to image segmentation. Appl. Sci. 2020, 10, 2195. [Google Scholar] [CrossRef]
  4. Wang, Y.; Liu, Z.; Wang, G.G. Improved differential evolution using two-stage mutation strategy for multimodal multi-objective optimization. Swarm Evol. Comput. 2023, 78, 101232. [Google Scholar] [CrossRef]
  5. Wu, B.; Wang, L.; Lv, S.X.; Zeng, Y.-R. Forecasting oil consumption with attention-based IndRNN optimized by adaptive differential evolution. Appl. Intell. 2023, 53, 5473–5496. [Google Scholar] [CrossRef]
  6. Zhang, X.; Jin, L.; Cui, C.; Sun, J. A self-adaptive multi-objective dynamic differential evolution algorithm and its application in chemical engineering. Appl. Soft Comput. 2021, 106, 107317. [Google Scholar] [CrossRef]
  7. Sallam, K.M.; Abohany, A.A.; Rizk-Allah, R.M. An enhanced multi-operator differential evolution algorithm for tackling knapsack optimization problem. Neural Comput. Appl. 2023, 35, 13359–13386. [Google Scholar] [CrossRef]
  8. Kar, M.K.; Kumar, S.; Singh, A.K.; Panigrahi, S. Reactive power management by using a modified differential evolution algorithm. Opt. Control Appl. Methods 2023, 44, 967–986. [Google Scholar] [CrossRef]
  9. Qin, A.K.; Suganthan, P.N. Self-adaptive differential evolution algorithm for numerical optimization. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; IEEE: New York, NY, USA, 2005; Volume 2, pp. 1785–1791. [Google Scholar]
  10. Mezura-Montes, E.; Velázquez-Reyes, J.; Coello Coello, C.A. A comparative study of differential evolution variants for global optimization. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, Seattle, WA, USA, 8–12 July 2006; pp. 485–492. [Google Scholar]
  11. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  12. Zheng, L.M.; Zhang, S.X.; Tang, K.S.; Zheng, S.Y. Differential evolution powered by collective information. Inf. Sci. 2017, 399, 13–29. [Google Scholar] [CrossRef]
  13. Deng, L.; Zhang, L.; Sun, H.; Qiao, L. DSM-DE: A differential evolution with dynamic speciation-based mutation for single-objective optimization. Memet. Comput. 2020, 12, 73–86. [Google Scholar] [CrossRef]
  14. Wang, C.; Xu, M.; Zhang, Q.; Jiang, R.; Feng, J.; Wei, Y.; Liu, Y. Cooperative co-evolutionary differential evolution algorithm applied for parameters identification of lithium-ion batteries. Expert Syst. Appl. 2022, 200, 117192. [Google Scholar] [CrossRef]
  15. Agrawal, S.; Tiwari, A. Solving multimodal optimization problems using adaptive differential evolution with archive. Inf. Sci. 2022, 612, 1024–1044. [Google Scholar] [CrossRef]
  16. Liu, J. On setting the control parameter of the differential evolution method. In Proceedings of the 8th International Conference on Soft Computing (MENDEL 2002), Brno, Czech Republic, 5–7 June 2002; pp. 11–18. [Google Scholar]
  17. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; IEEE: New York, NY, USA, 2014; pp. 1658–1665. [Google Scholar]
  18. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for differential evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancún, Mexico, 20–23 June 2013; IEEE: New York, NY, USA, 2013; pp. 71–78. [Google Scholar]
  19. Brest, J.; Maučec, M.S.; Bošković, B. iL-SHADE: Improved L-SHADE algorithm for single objective real-parameter optimization. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; IEEE: New York, NY, USA, 2016; pp. 1188–1195. [Google Scholar]
  20. Brest, J.; Maučec, M.S.; Bošković, B. Single objective real-parameter optimization: Algorithm jSO. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia-San Sebastián, Spain, 5–8 June 2017; IEEE: New York, NY, USA, 2017; pp. 1311–1318. [Google Scholar]
  21. Li, S.; Gu, Q.; Gong, W.; Ning, B. An enhanced adaptive differential evolution algorithm for parameter extraction of photovoltaic models. Energy Convers. Manag. 2020, 205, 112443. [Google Scholar] [CrossRef]
  22. Wang, S.L.; Morsidi, F.; Ng, T.F.; Budiman, H.; Neoh, S.C. Insights into the effects of control parameters and mutation strategy on self-adaptive ensemble-based differential evolution. Inf. Sci. 2020, 514, 203–233. [Google Scholar] [CrossRef]
  23. Cpałka, K.; Słowik, A.; Łapa, K. A population-based algorithm with the selection of evaluation precision and size of the population. Appl. Soft Comput. 2022, 115, 108154. [Google Scholar] [CrossRef]
  24. Zhang, Y.; Chen, G.; Cheng, L.; Wang, Q.; Li, Q. Methods to balance the exploration and exploitation in differential evolution from different scales: A survey. Neurocomputing 2023, 561, 126899. [Google Scholar] [CrossRef]
  25. Gämperle, R.; Müller, S.D.; Koumoutsakos, P. A parameter study for differential evolution. Adv. Intell. Syst. Fuzzy Syst. Evol. Comput. 2002, 10, 293–298. [Google Scholar]
  26. Ronkkonen, J.; Kukkonen, S.; Price, K.V. Real-parameter optimization with differential evolution. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; IEEE: New York, NY, USA, 2005; Volume 1, pp. 506–513. [Google Scholar]
  27. Wang, Y.; Cai, Z.; Zhang, Q. Differential evolution with composite trial vector generation strategies and control parameters. IEEE Trans. Evol. Comput. 2011, 15, 55–66. [Google Scholar] [CrossRef]
  28. Wang, Y.; Cai, Z.; Zhang, Q. Enhancing the search ability of differential evolution through orthogonal crossover. Inf. Sci. 2012, 185, 153–177. [Google Scholar] [CrossRef]
  29. Elsayed, S.M.; Sarker, R.A.; Ray, T. Differential evolution with automatic parameter configuration for solving the CEC2013 competition on real-parameter optimization. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancún, Mexico, 20–23 June 2013; IEEE: New York, NY, USA, 2013; pp. 1932–1937. [Google Scholar]
  30. Das, S.; Konar, A.; Chakraborty, U.K. Two improved differential evolution schemes for faster global search. In Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, Washington, DC, USA, 25–29 June 2005; pp. 991–998. [Google Scholar]
  31. Brest, J.; Sepesy Maučec, M. Population size reduction for the differential evolution algorithm. Appl. Intell. 2008, 29, 228–247. [Google Scholar] [CrossRef]
  32. Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Ensemble of parameters in a sinusoidal differential evolution with niching-based population reduction. Swarm Evol. Comput. 2018, 39, 141–156. [Google Scholar] [CrossRef]
  33. Ali, M.M.; Törn, A. Population set-based global optimization algorithms: Some modifications and numerical studies. Comput. Oper. Res. 2004, 31, 1703–1725. [Google Scholar] [CrossRef]
  34. Ghosh, A.; Das, S.; Chowdhury, A.; Giri, R. An improved differential evolution algorithm with fitness-based adaptation of the control parameters. Inf. Sci. 2011, 181, 3749–3765. [Google Scholar] [CrossRef]
  35. Takahama, T.; Sakai, S. Efficient constrained optimization by the constrained rank-based differential evolution. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, Australia, 10–13 June 2012; pp. 1–8. [Google Scholar]
  36. Tang, L.; Dong, Y.; Liu, J. Differential evolution with an individual-dependent mechanism. IEEE Trans. Evol. Comput. 2014, 19, 560–574. [Google Scholar] [CrossRef]
  37. Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Trans. Evol. Comput. 2006, 10, 646–657. [Google Scholar] [CrossRef]
  38. Tirronen, V.; Neri, F. Differential evolution with fitness diversity self-adaptation. In Nature-Inspired Algorithms for Optimisation; Springer: Berlin/Heidelberg, Germany, 2009; pp. 199–234. [Google Scholar]
  39. Jia, L.; Gong, W.; Wu, H. An improved self-adaptive control parameter of differential evolution for global optimization. In Proceedings of the Computational Intelligence and Intelligent Systems: 4th International Symposium, ISICA 2009, Huangshi, China, 23–25 October 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 215–224. [Google Scholar]
  40. Xu, Q.; Meng, Z. Differential Evolution with multi-stage parameter adaptation and diversity enhancement mechanism for numerical optimization. Swarm Evol. Comput. 2025, 92, 101829. [Google Scholar] [CrossRef]
  41. Ghosh, A.; Das, S.; Das, A.K.; Senkerik, R.; Viktorin, A.; Zelinka, I.; Masegosa, A.D. Using spatial neighborhoods for parameter adaptation: An improved success history based differential evolution. Swarm Evol. Comput. 2022, 71, 101057. [Google Scholar] [CrossRef]
  42. Poláková, R.; Tvrdík, J.; Bujok, P. Differential evolution with adaptive mechanism of population size according to current population diversity. Swarm Evol. Comput. 2019, 50, 100519. [Google Scholar] [CrossRef]
  43. Tong, L.; Dong, M.; Jing, C. An improved multi-population ensemble differential evolution. Neurocomputing 2018, 290, 130–147. [Google Scholar] [CrossRef]
  44. Wu, G.; Mallipeddi, R.; Suganthan, P.N.; Wang, R.; Chen, H. Differential evolution with multi-population based ensemble of mutation strategies. Inf. Sci. 2016, 329, 329–345. [Google Scholar] [CrossRef]
  45. Gui, L.; Xia, X.; Yu, F.; Wu, H.; Wu, R.; Wei, B.; Zhang, Y.; Li, X.; He, G. A multi-role based differential evolution. Swarm Evol. Comput. 2019, 50, 100508. [Google Scholar] [CrossRef]
  46. Yi, W.; Chen, Y.; Pei, Z.; Lu, J. Adaptive differential evolution with ensembling operators for continuous optimization problems. Swarm Evol. Comput. 2022, 69, 100994. [Google Scholar] [CrossRef]
  47. Zhang, H.; Sun, J.; Bäck, T.; Zhang, Q.; Xu, Z. Controlling sequential hybrid evolutionary algorithm by q-learning [Research Frontier]. IEEE Comput. Intell. Mag. 2023, 18, 84–103. [Google Scholar] [CrossRef]
  48. Li, S.; Li, W.; Tang, J.; Wang, F. A new evolving operator selector by using fitness landscape in differential evolution algorithm. Inf. Sci. 2023, 624, 709–731. [Google Scholar] [CrossRef]
  49. Zeng, Z.; Zhang, M.; Zhang, H.; Hong, Z. Improved differential evolution algorithm based on the sawtooth-linear population size adaptive method. Inf. Sci. 2022, 608, 1045–1071. [Google Scholar] [CrossRef]
  50. Wang, M.; Ma, Y.; Wang, P. Parameter and strategy adaptive differential evolution algorithm based on accompanying evolution. Inf. Sci. 2022, 607, 1136–1157. [Google Scholar] [CrossRef]
  51. Meng, Z.; Yang, C. Two-stage differential evolution with novel parameter control. Inf. Sci. 2022, 596, 321–342. [Google Scholar] [CrossRef]
  52. Abbas, Q.; Malik, K.M.; Saudagar, A.K.J.; Khan, M.B.; Hasanat, M.H.A.; AlTameem, A.; AlKhathami, M. Convergence Track Based Adaptive Differential Evolution Algorithm (CTbADE). Comput. Mater. Continua 2022, 72, 1229–1250. [Google Scholar] [CrossRef]
  53. Huynh, T.N.; Do, D.T.T.; Lee, J. Q-Learning-based parameter control in differential evolution for structural optimization. Appl. Soft Comput. 2021, 107, 107464. [Google Scholar] [CrossRef]
  54. Ma, X.; Zhang, K.; Zhang, L.; Yao, C.; Yao, J.; Wang, H.; Jian, W.; Yan, Y. Data-driven niching differential evolution with adaptive parameters control for history matching and uncertainty quantification. SPE J. 2021, 26, 993–1010. [Google Scholar] [CrossRef]
  55. Awad, N.H.; Ali, M.Z.; Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Bound Constrained Real-Parameter Numerical Optimization; Nanyang Technological University: Singapore, 2016. [Google Scholar]
  56. Wilcoxon, F. Individual comparisons by ranking methods. In Breakthroughs in Statistics: Methodology and Distribution; Springer: New York, NY, USA, 1992; pp. 196–202. [Google Scholar]
  57. Zhang, S.X.; Wen, Y.N.; Liu, Y.H.; Zheng, L.M.; Zheng, S.Y. Differential evolution with domain transform. IEEE Trans. Evol. Comput. 2022, 27, 1440–1455. [Google Scholar] [CrossRef]
  58. Stanovov, V.; Akhmedova, S.; Semenkin, E. LSHADE algorithm with rank-based selective pressure strategy for solving CEC 2017 benchmark problems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; IEEE: New York, NY, USA, 2018; pp. 1–8. [Google Scholar]
  59. Viktorin, A.; Senkerik, R.; Pluhacek, M.; Kadavy, T.; Zamuda, A. Distance based parameter adaptation for success-history based differential evolution. Swarm Evol. Comput. 2019, 50, 100462. [Google Scholar] [CrossRef]
  60. Zhang, S.X.; Chan, W.S.; Peng, Z.K.; Zheng, S.Y.; Tang, K.S. Selective-candidate framework with similarity selection rule for evolutionary optimization. Swarm Evol. Comput. 2020, 56, 100696. [Google Scholar] [CrossRef]
  61. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
Figure 1. Convergence curves of the compared DE algorithms on twelve selected functions.
Table 1. Recommended parameter settings for comparison algorithms.

Algorithm | Parameter Settings
JADE | NP = 100 for D = 50, NP = 400 for D = 100, μF = 0.5, μCR = 0.5, p = 0.05, c = 0.1
L-SHADE | NP: 18·D → 4 (linear reduction), μF = 0.5, μCR = 0.5, p = 0.11, H = 6, r_arc = 2.6
jSO | NP: 25·ln(D)·√D → 4 (linear reduction), μF = 0.3, μCR = 0.8, p: 0.25 → 0.125, H = 5, r_arc = 1
DTDE | NP: 18·D → 4 (linear reduction), μF = 0.5, μCR = 0.5, p = 0.11, H = 6, r_arc = 2.6, M = 2
Table 2. Performance comparison with JADE on 50-D and 100-D CEC2017 functions. Cells are Mean (Std); “Sig” marks whether JADE-div is significantly better (+), worse (−), or comparable (=).

Function | 50-D JADE | 50-D JADE-div | Sig | 100-D JADE | 100-D JADE-div | Sig
F1 | 0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00) | = | 0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00) | =
F3 | 9.95E+03 (2.82E+04) | 3.38E-08 (4.60E-08) | − | 4.25E+05 (1.11E+04) | 2.05E-01 (6.39E-02) | +
F4 | 3.48E+01 (3.14E+01) | 3.92E+01 (4.62E+01) | = | 1.65E+02 (4.45E+01) | 8.95E+01 (1.93E+01) | +
F5 | 5.58E+01 (6.43E+00) | 3.39E+01 (5.05E+00) | + | 2.84E+02 (8.79E+00) | 1.60E+02 (1.59E+00) | +
F6 | 0.00E+00 (0.00E+00) | 1.12E-02 (5.33E-03) | − | 1.62E-07 (5.30E-08) | 8.59E-04 (6.60E-05) | −
F7 | 1.03E+02 (5.59E+00) | 8.66E+01 (5.08E+00) | + | 4.37E+02 (1.10E+01) | 3.14E+02 (1.53E+00) | +
F8 | 5.38E+01 (8.01E+00) | 3.38E+01 (6.30E+00) | + | 2.77E+02 (1.23E+01) | 1.78E+02 (5.41E+00) | +
F9 | 4.86E-01 (6.94E-01) | 7.65E+00 (3.48E+00) | − | 4.59E-01 (2.23E-01) | 5.89E+00 (0.00E+00) | −
F10 | 3.86E+03 (3.33E+02) | 5.51E+03 (1.04E+03) | − | 1.77E+04 (1.35E+02) | 1.74E+04 (9.56E+01) | +
F11 | 1.31E+02 (3.28E+01) | 1.48E+02 (5.79E+01) | = | 9.73E+03 (1.34E+04) | 2.77E+02 (1.55E+01) | +
F12 | 5.11E+03 (2.72E+03) | 1.35E+04 (6.09E+03) | − | 2.59E+04 (1.06E+04) | 2.17E+04 (7.35E-12) | =
F13 | 2.83E+02 (1.64E+02) | 3.66E+02 (4.60E+02) | + | 8.38E+02 (2.73E+02) | 6.33E+02 (1.55E+02) | =
F14 | 4.30E+04 (6.88E+04) | 6.15E+01 (1.56E+01) | + | 4.91E+02 (5.83E+01) | 9.09E+01 (6.61E+00) | +
F15 | 5.62E+02 (9.19E+02) | 1.11E+02 (6.08E+01) | + | 2.79E+02 (6.96E+01) | 2.79E+02 (7.07E+00) | =
F16 | 8.24E+02 (1.58E+02) | 6.60E+02 (1.62E+02) | + | 3.69E+03 (1.00E+00) | 2.84E+03 (3.45E+02) | +
F17 | 5.90E+02 (9.78E+01) | 5.99E+02 (2.42E+02) | = | 2.93E+03 (8.76E+01) | 2.30E+03 (4.59E-13) | +
F18 | 1.69E+02 (8.85E+01) | 1.56E+02 (5.82E+01) | = | 3.63E+02 (2.49E+01) | 2.46E+02 (7.83E+01) | +
F19 | 1.47E+02 (4.95E+01) | 7.44E+01 (2.59E+01) | + | 2.41E+02 (4.30E+01) | 1.01E+02 (2.22E+01) | +
F20 | 4.68E+02 (1.25E+02) | 5.85E+02 (1.93E+02) | − | 3.21E+03 (2.49E+02) | 2.84E+03 (1.56E+02) | +
F21 | 2.47E+02 (9.45E+00) | 2.36E+02 (4.03E+00) | + | 4.97E+02 (2.44E+01) | 3.84E+02 (5.74E-14) | +
F22 | 4.15E+03 (1.04E+03) | 3.34E+03 (2.32E+03) | = | 1.90E+04 (1.36E+02) | 1.95E+04 (2.77E+02) | −
F23 | 4.74E+02 (1.31E+01) | 4.56E+02 (6.96E+00) | + | 7.89E+02 (6.56E+00) | 6.83E+02 (9.64E+00) | +
F24 | 5.42E+02 (8.32E+00) | 5.32E+02 (7.30E+00) | + | 1.13E+03 (2.20E+01) | 9.22E+02 (7.19E+00) | +
F25 | 5.20E+02 (3.84E+01) | 5.14E+02 (2.92E+01) | = | 7.71E+02 (5.48E+00) | 7.73E+02 (8.27E+00) | +
F26 | 1.57E+03 (9.06E+01) | 1.41E+03 (7.05E+01) | + | 5.48E+03 (1.11E+02) | 4.17E+03 (0.00E+00) | +
F27 | 5.59E+02 (1.99E+01) | 5.42E+02 (2.05E+01) | + | 6.40E+02 (4.02E+00) | 5.82E+02 (1.30E+01) | +
F28 | 4.99E+02 (1.57E+01) | 4.89E+02 (2.35E+01) | + | 5.09E+02 (1.40E+01) | 5.17E+02 (9.65E+00) | =
F29 | 6.18E+02 (1.70E+02) | 4.21E+02 (4.63E+01) | + | 2.42E+03 (6.62E+01) | 1.88E+03 (7.72E+01) | +
F30 | 6.40E+05 (6.93E+04) | 6.77E+05 (7.03E+04) | − | 2.45E+03 (6.42E+02) | 2.82E+03 (1.91E+02) | −
+/=/− | | | 15/7/7 | | | 20/5/4
Table 6. Recommended parameter settings for comparison algorithms.

Algorithm | Parameter Settings
EJADE | NP = 100, μ = NP, μF = 0.5, μCR = 0.5, p = 0.05, c = 0.1
jSO | NP: 25·ln(D)·√D → 4 (linear reduction), μF = 0.3, μCR = 0.8, p: 0.25 → 0.125, H = 5, r_arc = 1
L-SHADE-RSP | NP: 25·ln(D)·√D → 4 (linear reduction), μF = 0.3, μCR = 0.8, p: 0.25 → 0.125, H = 5, r_arc = 1, k = 3
DISH | NP: 25·ln(D)·√D → 4 (linear reduction), μF = 0.3, μCR = 0.8, p: 0.25 → 0.125, H = 5, r_arc = 1
SCSS-L-SHADE | NP: 18·D → 4 (linear reduction), μF = 0.5, μCR = 0.5, p = 0.11, H = 6, r_arc = 2.6, M = 2
DTDE-div | NP: 18·D → 4 (linear reduction), μF = 0.5, μCR = 0.5, p = 0.11, H = 6, r_arc = 2.6, M = 2
Table 7. Performance comparison of DTDE-div with state-of-the-art DEs on 50-D CEC2017 functions. Cells are Mean (Std) followed by the significance mark relative to DTDE-div: + (DTDE-div better), − (DTDE-div worse), = (comparable).

Function | EJADE | jSO | L-SHADE-RSP | DISH | SCSS-L-SHADE | DTDE-div
F1 | 0.00E+00 (0.00E+00) − | 0.00E+00 (0.00E+00) − | 0.00E+00 (0.00E+00) − | 0.00E+00 (0.00E+00) − | 0.00E+00 (0.00E+00) − | 1.59E-04 (5.06E-04)
F3 | 6.09E+01 (1.32E+02) = | 0.00E+00 (0.00E+00) − | 0.00E+00 (0.00E+00) − | 0.00E+00 (0.00E+00) − | 0.00E+00 (0.00E+00) − | 1.01E-04 (2.80E-04)
F4 | 4.99E+01 (4.70E+01) − | 5.93E+01 (4.70E+01) = | 5.55E+01 (4.80E+01) = | 4.80E+01 (4.54E+01) = | 6.12E+01 (5.01E+01) = | 5.84E+01 (4.61E+01)
F5 | 4.45E+01 (1.02E+01) + | 1.39E+01 (3.07E+00) + | 1.34E+01 (3.88E+00) + | 1.29E+01 (3.96E+00) + | 1.14E+01 (2.80E+00) + | 2.09E+00 (1.44E+00)
F6 | 8.38E-07 (3.95E-06) − | 2.35E-07 (4.72E-07) − | 9.05E-08 (1.99E-07) − | 2.57E-07 (5.82E-07) − | 3.94E-08 (9.75E-08) − | 1.69E-06 (2.35E-06)
F7 | 8.94E+01 (8.91E+00) + | 6.64E+01 (3.29E+00) + | 6.72E+01 (3.07E+00) + | 6.81E+01 (3.09E+00) + | 6.34E+01 (1.97E+00) + | 5.70E+01 (9.27E-01)
F8 | 4.09E+01 (6.76E+00) + | 1.47E+01 (4.01E+00) + | 1.34E+01 (3.28E+00) + | 1.37E+01 (3.94E+00) + | 1.16E+01 (2.44E+00) + | 2.19E+00 (1.73E+00)
F9 | 4.71E-01 (6.63E-01) + | 0.00E+00 (0.00E+00) = | 0.00E+00 (0.00E+00) = | 0.00E+00 (0.00E+00) = | 0.00E+00 (0.00E+00) = | 1.11E-13 (1.59E-14)
F10 | 3.38E+03 (6.45E+02) + | 3.60E+03 (3.96E+02) + | 3.48E+03 (4.36E+02) + | 3.75E+03 (3.43E+02) + | 3.17E+03 (2.85E+02) + | 1.50E+02 (9.90E+01)
F11 | 4.50E+01 (1.10E+01) + | 2.62E+01 (4.25E+00) + | 2.37E+01 (3.87E+00) = | 2.34E+01 (3.67E+00) = | 3.26E+01 (5.02E+00) + | 2.42E+01 (2.90E+00)
F12 | 6.59E+03 (5.51E+03) + | 1.93E+03 (4.17E+02) + | 1.68E+03 (3.76E+02) = | 1.44E+03 (3.51E+02) = | 2.02E+03 (5.27E+02) + | 1.55E+03 (4.25E+02)
F13 | 8.22E+01 (6.78E+01) + | 3.68E+01 (2.29E+01) − | 3.23E+01 (1.90E+01) − | 4.21E+01 (2.91E+01) = | 4.16E+01 (2.55E+01) = | 4.56E+01 (2.40E+01)
F14 | 5.09E+01 (2.64E+01) + | 2.42E+01 (2.00E+00) − | 2.31E+01 (1.61E+00) − | 2.42E+01 (1.84E+00) − | 2.53E+01 (1.81E+00) − | 3.01E+01 (3.25E+00)
F15 | 6.41E+01 (5.04E+01) + | 2.27E+01 (2.17E+00) − | 2.13E+01 (1.81E+00) − | 2.10E+01 (1.97E+00) − | 2.79E+01 (4.53E+00) + | 2.51E+01 (2.76E+00)
F16 | 6.81E+02 (2.56E+02) + | 3.94E+02 (1.61E+02) + | 3.60E+02 (1.52E+02) + | 3.87E+02 (1.42E+02) + | 3.41E+02 (1.24E+02) + | 1.46E+02 (6.81E+01)
F17 | 4.78E+02 (2.11E+02) + | 2.57E+02 (9.72E+01) + | 2.35E+02 (9.46E+01) + | 2.91E+02 (1.23E+02) + | 1.94E+02 (7.26E+01) + | 5.28E+01 (5.47E+01)
F18 | 1.05E+02 (9.34E+01) + | 2.42E+01 (1.91E+00) − | 2.29E+01 (1.47E+00) − | 2.25E+01 (1.33E+00) − | 2.65E+01 (2.69E+00) − | 4.69E+01 (1.04E+01)
F19 | 3.21E+01 (3.47E+01) + | 1.28E+01 (2.51E+00) + | 1.04E+01 (2.38E+00) = | 1.12E+01 (2.41E+00) + | 1.50E+01 (2.43E+00) + | 9.76E+00 (2.01E+00)
F20 | 3.62E+02 (1.85E+02) + | 1.39E+02 (5.35E+01) + | 1.34E+02 (6.99E+01) + | 1.69E+02 (8.55E+01) + | 1.75E+02 (7.21E+01) + | 2.29E+01 (1.93E+00)
F21 | 2.41E+02 (8.47E+00) + | 2.14E+02 (4.05E+00) + | 2.14E+02 (3.58E+00) + | 2.15E+02 (4.57E+00) + | 2.13E+02 (2.83E+00) + | 2.04E+02 (2.08E+00)
F22 | 3.07E+03 (1.56E+03) + | 1.88E+03 (2.01E+03) = | 2.28E+03 (1.81E+03) + | 1.73E+03 (1.98E+03) = | 2.85E+03 (1.52E+03) + | 3.09E+02 (1.18E+02)
F23 | 4.65E+02 (1.36E+01) + | 4.31E+02 (5.29E+00) + | 4.31E+02 (6.60E+00) + | 4.33E+02 (6.61E+00) + | 4.28E+02 (3.68E+00) + | 4.19E+02 (6.91E+00)
F24 | 5.28E+02 (8.86E+00) + | 5.08E+02 (4.68E+00) + | 5.08E+02 (3.63E+00) + | 5.08E+02 (3.97E+00) + | 5.05E+02 (2.72E+00) + | 4.98E+02 (2.32E+00)
F25 | 5.23E+02 (3.90E+01) + | 4.81E+02 (3.15E+00) = | 4.81E+02 (2.80E+00) − | 4.80E+02 (1.62E+00) − | 4.83E+02 (1.18E+01) + | 4.80E+02 (1.62E+00)
F26 | 1.45E+03 (1.33E+02) + | 1.13E+03 (4.76E+01) + | 1.13E+03 (4.61E+01) + | 1.11E+03 (4.50E+01) + | 1.12E+03 (4.23E+01) + | 9.08E+02 (8.73E+01)
F27 | 5.27E+02 (1.49E+01) + | 5.12E+02 (8.93E+00) = | 5.11E+02 (1.01E+01) − | 5.07E+02 (9.96E+00) − | 5.25E+02 (1.56E+01) + | 5.15E+02 (7.52E+00)
F28 | 4.85E+02 (2.37E+01) + | 4.59E+02 (1.78E-13) − | 4.59E+02 (2.13E-13) − | 4.59E+02 (1.78E-13) − | 4.63E+02 (1.33E+01) − | 4.60E+02 (5.68E+00)
F29 | 3.34E+02 (5.22E+01) + | 3.71E+02 (1.43E+01) + | 3.63E+02 (1.58E+01) + | 3.77E+02 (1.57E+01) + | 3.63E+02 (1.17E+01) + | 3.05E+02 (7.46E+00)
F30 | 6.14E+05 (4.39E+04) = | 6.23E+05 (5.20E+04) = | 6.11E+05 (2.99E+04) = | 6.01E+05 (2.10E+04) = | 6.48E+05 (5.92E+04) + | 6.15E+05 (3.79E+04)
+/=/− | 24/2/3 | 15/6/8 | 13/6/10 | 13/7/9 | 20/3/6 |
Table 10. Performance comparison of JADE-div with its variants on 50-D CEC2017 functions.
Algorithm | JADE-Div-Opposite | JADE-Div-Random | JADE-Div
+/=/− | 24/2/3 | 13/9/7 | /
Performance ranking | 2.79 | 1.90 | 1.31
Table 11. Comparison of time complexity between JADE and JADE-div.
Algorithm | T0 | T1 | T̂2 | (T̂2 − T1)/T0
JADE | 0.0484 | 0.3012 | 0.5797 | 5.7541
JADE-div | 0.0484 | 0.3012 | 0.6118 | 6.4174
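The last column of Table 11 is the standard CEC-style algorithm-complexity measure (T̂2 − T1)/T0, where T0 is a baseline timing loop, T1 the pure function-evaluation time, and T̂2 the median full-algorithm runtime. A minimal sketch that reproduces both rows of the table (`complexity_ratio` is a hypothetical helper name, not the authors' code):

```python
# Hedged sketch of the CEC-style complexity measure used in Table 11.
def complexity_ratio(t0: float, t1: float, t2_hat: float) -> float:
    """Return (T̂2 - T1) / T0: algorithm overhead normalized by a baseline loop."""
    return (t2_hat - t1) / t0

# Rows of Table 11 (T0 and T1 are algorithm-independent, hence shared):
print(round(complexity_ratio(0.0484, 0.3012, 0.5797), 4))  # JADE:     5.7541
print(round(complexity_ratio(0.0484, 0.3012, 0.6118), 4))  # JADE-div: 6.4174
```

The arithmetic confirms the table's merged T0/T1 cells: both reported ratios follow from the same T0 = 0.0484 and T1 = 0.3012.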
Table 3. Performance comparison with L-SHADE on 50-D and 100-D CEC2017 functions.
Function | 50-D L-SHADE Mean (Std) | 50-D L-SHADE-Div Mean (Std) | Sig | 100-D L-SHADE Mean (Std) | 100-D L-SHADE-Div Mean (Std) | Sig
F1 | 0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00) | = | 0.00E+00 (0.00E+00) | 7.95E-08 (1.35E-07) | -
F3 | 0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00) | = | 9.74E-07 (1.32E-06) | 6.36E-03 (7.01E-03) | -
F4 | 8.58E+01 (4.41E+01) | 5.73E+01 (5.03E+01) | = | 1.98E+02 (8.36E+00) | 1.99E+02 (1.26E+01) | =
F5 | 1.07E+01 (2.15E+00) | 1.34E+01 (2.66E+00) | - | 3.75E+01 (5.25E+00) | 3.73E+01 (5.35E+00) | =
F6 | 7.88E-05 (5.61E-04) | 6.05E-07 (1.02E-06) | = | 6.11E-03 (3.98E-03) | 1.70E-03 (1.66E-03) | +
F7 | 6.40E+01 (1.70E+00) | 6.72E+01 (3.65E+00) | - | 1.41E+02 (4.72E+00) | 1.45E+02 (6.24E+00) | -
F8 | 1.38E+01 (2.48E+00) | 1.37E+01 (2.95E+00) | = | 3.68E+01 (4.47E+00) | 3.70E+01 (4.75E+00) | =
F9 | 0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00) | = | 4.33E-01 (4.98E-01) | 7.09E-02 (1.48E-01) | +
F10 | 3.01E+03 (2.73E+02) | 3.34E+03 (3.07E+02) | - | 1.03E+04 (5.22E+02) | 1.13E+04 (5.08E+02) | -
F11 | 5.69E+01 (1.11E+01) | 2.92E+01 (4.27E+00) | + | 4.43E+02 (9.67E+01) | 1.09E+02 (3.55E+01) | +
F12 | 2.46E+03 (4.20E+02) | 1.74E+03 (4.13E+02) | + | 2.17E+04 (9.58E+03) | 9.26E+03 (2.84E+03) | +
F13 | 5.19E+01 (3.48E+01) | 4.45E+01 (2.57E+01) | = | 5.84E+02 (6.22E+02) | 1.29E+02 (3.12E+01) | +
F14 | 2.80E+01 (2.42E+00) | 2.54E+01 (2.27E+00) | + | 2.64E+02 (3.79E+01) | 4.65E+01 (5.79E+00) | +
F15 | 3.84E+01 (6.95E+00) | 2.40E+01 (2.43E+00) | + | 2.42E+02 (5.30E+01) | 1.41E+02 (3.66E+01) | +
F16 | 3.72E+02 (9.10E+01) | 4.06E+02 (1.30E+02) | = | 1.70E+03 (2.43E+02) | 1.76E+03 (2.96E+02) | =
F17 | 2.72E+02 (6.62E+01) | 2.98E+02 (1.02E+02) | = | 1.09E+03 (2.12E+02) | 1.36E+03 (2.02E+02) | -
F18 | 3.26E+01 (9.82E+00) | 2.39E+01 (1.69E+00) | + | 2.16E+02 (4.54E+01) | 1.65E+02 (3.62E+01) | +
F19 | 2.48E+01 (4.79E+00) | 1.39E+01 (2.34E+00) | + | 1.73E+02 (2.51E+01) | 7.62E+01 (1.94E+01) | +
F20 | 1.69E+02 (5.54E+01) | 2.44E+02 (9.26E+01) | - | 1.52E+03 (2.33E+02) | 1.89E+03 (2.67E+02) | -
F21 | 2.12E+02 (1.96E+00) | 2.16E+02 (2.96E+00) | - | 2.59E+02 (5.81E+00) | 2.58E+02 (5.85E+00) | =
F22 | 1.21E+03 (1.62E+03) | 2.84E+03 (1.68E+03) | - | 1.13E+04 (5.12E+02) | 1.20E+04 (6.96E+02) | -
F23 | 4.31E+02 (3.17E+00) | 4.31E+02 (4.55E+00) | = | 5.71E+02 (8.64E+00) | 5.63E+02 (9.43E+00) | +
F24 | 5.06E+02 (2.27E+00) | 5.06E+02 (2.98E+00) | = | 9.11E+02 (8.64E+00) | 9.01E+02 (8.59E+00) | +
F25 | 4.84E+02 (1.59E+01) | 4.82E+02 (4.03E+00) | + | 7.44E+02 (3.44E+01) | 7.07E+02 (4.96E+01) | +
F26 | 1.18E+03 (4.88E+01) | 1.13E+03 (5.78E+01) | + | 3.30E+03 (8.59E+01) | 3.18E+03 (9.70E+01) | +
F27 | 5.27E+02 (1.04E+01) | 5.16E+02 (1.46E+01) | + | 6.29E+02 (1.73E+01) | 6.09E+02 (2.01E+01) | +
F28 | 4.68E+02 (1.96E+01) | 4.59E+02 (1.15E-13) | + | 5.23E+02 (2.21E+01) | 5.37E+02 (2.73E+01) | -
F29 | 3.45E+02 (9.34E+00) | 3.69E+02 (1.91E+01) | - | 1.25E+03 (1.80E+02) | 1.37E+03 (2.05E+02) | -
F30 | 7.28E+05 (1.01E+05) | 6.32E+05 (7.36E+04) | + | 2.42E+03 (1.55E+02) | 2.35E+03 (1.52E+02) | +
+/=/− | | | 11/11/7 | | | 15/5/9
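The "+/=/−" row at the foot of each comparison table counts the functions on which the div variant is judged significantly better than, similar to, or worse than the baseline (the Sig column). A trivial sketch of that tally (`tally` is a hypothetical helper name, not from the paper):

```python
# Hedged sketch: reduce a Sig column of '+', '=', '-' marks to the
# "wins/ties/losses" summary printed at the bottom of each table.
def tally(sigs: list[str]) -> str:
    """Count '+', '=', and '-' marks and format them as 'w/t/l'."""
    return f"{sigs.count('+')}/{sigs.count('=')}/{sigs.count('-')}"

print(tally(["+", "=", "-", "+", "+"]))  # 3/1/1
```

Each summary in the tables covers the 29 CEC2017 functions (F1, F3–F30), so the three counts in every column sum to 29.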
Table 4. Performance comparison with jSO on 50-D and 100-D CEC2017 functions.
Function | 50-D jSO Mean (Std) | 50-D jSO-Div Mean (Std) | Sig | 100-D jSO Mean (Std) | 100-D jSO-Div Mean (Std) | Sig
F1 | 0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00) | = | 0.00E+00 (0.00E+00) | 2.42E-04 (3.37E-04) | -
F3 | 0.00E+00 (0.00E+00) | 6.28E-10 (3.26E-09) | = | 7.08E-07 (6.21E-07) | 2.68E-02 (2.31E-02) | -
F4 | 6.09E+01 (5.20E+01) | 5.99E+01 (5.19E+01) | = | 1.98E+02 (1.02E+01) | 1.98E+02 (9.22E+00) | =
F5 | 1.55E+01 (3.09E+00) | 1.24E+01 (3.45E+00) | + | 3.63E+01 (6.71E+00) | 2.33E+01 (5.32E+00) | +
F6 | 5.48E-07 (9.12E-07) | 8.18E-06 (7.63E-06) | - | 2.23E-04 (5.99E-04) | 1.45E-04 (1.29E-04) | -
F7 | 6.55E+01 (3.51E+00) | 6.43E+01 (3.06E+00) | = | 1.41E+02 (6.93E+00) | 1.28E+02 (4.87E+00) | +
F8 | 1.58E+01 (3.27E+00) | 1.31E+01 (2.75E+00) | + | 3.74E+01 (7.19E+00) | 2.17E+01 (5.10E+00) | +
F9 | 0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00) | = | 2.12E-02 (8.21E-02) | 1.23E-02 (3.11E-02) | =
F10 | 3.04E+03 (4.25E+02) | 3.34E+03 (4.08E+02) | - | 9.61E+03 (6.84E+02) | 1.02E+04 (9.84E+02) | -
F11 | 2.76E+01 (3.39E+00) | 2.41E+01 (3.82E+00) | + | 1.13E+02 (3.34E+01) | 5.83E+01 (3.23E+01) | +
F12 | 1.75E+03 (4.72E+02) | 1.08E+03 (3.52E+02) | + | 1.67E+04 (7.92E+03) | 9.80E+03 (3.39E+03) | +
F13 | 2.82E+01 (1.80E+01) | 2.29E+01 (1.78E+01) | = | 1.55E+02 (3.79E+01) | 1.28E+02 (3.24E+01) | +
F14 | 2.40E+01 (1.85E+00) | 2.32E+01 (1.79E+00) | = | 6.33E+01 (9.82E+00) | 3.25E+01 (3.93E+00) | +
F15 | 2.37E+01 (2.43E+00) | 2.00E+01 (1.55E+00) | + | 1.67E+02 (3.55E+01) | 1.13E+02 (3.97E+01) | +
F16 | 4.48E+02 (1.76E+02) | 4.47E+02 (1.68E+02) | = | 1.72E+03 (3.63E+02) | 1.78E+03 (2.75E+02) | =
F17 | 2.94E+02 (9.02E+01) | 2.74E+02 (1.34E+02) | = | 1.24E+03 (2.49E+02) | 1.25E+03 (2.89E+02) | =
F18 | 2.45E+01 (2.22E+00) | 2.21E+01 (9.61E-01) | + | 1.87E+02 (3.41E+01) | 8.33E+01 (1.99E+01) | +
F19 | 1.39E+01 (3.13E+00) | 1.03E+01 (2.20E+00) | + | 1.09E+02 (1.71E+01) | 4.88E+01 (6.30E+00) | +
F20 | 1.24E+02 (6.66E+01) | 1.48E+02 (9.59E+01) | = | 1.27E+03 (2.39E+02) | 1.47E+03 (3.05E+02) | -
F21 | 2.17E+02 (2.42E+00) | 2.16E+02 (3.24E+00) | + | 2.60E+02 (5.62E+00) | 2.46E+02 (5.82E+00) | +
F22 | 1.55E+03 (1.76E+03) | 1.76E+03 (1.88E+03) | - | 1.04E+04 (7.36E+02) | 1.08E+04 (9.80E+02) | -
F23 | 4.34E+02 (5.73E+00) | 4.32E+02 (7.08E+00) | = | 5.61E+02 (1.17E+01) | 5.70E+02 (9.99E+00) | -
F24 | 5.13E+02 (3.76E+00) | 5.12E+02 (3.39E+00) | = | 9.17E+02 (8.52E+00) | 9.10E+02 (9.04E+00) | +
F25 | 4.81E+02 (2.36E+00) | 4.81E+02 (3.15E+00) | + | 7.21E+02 (4.26E+01) | 6.87E+02 (4.68E+01) | +
F26 | 1.18E+03 (4.85E+01) | 1.16E+03 (5.77E+01) | + | 3.37E+03 (9.77E+01) | 3.25E+03 (8.91E+01) | +
F27 | 5.19E+02 (1.01E+01) | 5.08E+02 (8.74E+00) | + | 5.98E+02 (1.65E+01) | 5.73E+02 (1.47E+01) | +
F28 | 4.61E+02 (9.58E+00) | 4.59E+02 (2.22E-13) | - | 5.25E+02 (2.31E+01) | 5.24E+02 (2.95E+01) | =
F29 | 3.58E+02 (1.52E+01) | 3.60E+02 (1.78E+01) | = | 1.37E+03 (1.96E+02) | 1.33E+03 (2.52E+02) | =
F30 | 6.16E+05 (3.96E+04) | 6.03E+05 (2.70E+04) | = | 2.35E+03 (1.37E+02) | 2.24E+03 (9.74E+01) | +
+/=/− | | | 11/14/4 | | | 16/6/7
Table 5. Performance comparison with DTDE on 50-D and 100-D CEC2017 functions.
Function | 50-D DTDE Mean (Std) | 50-D DTDE-Div Mean (Std) | Sig | 100-D DTDE Mean (Std) | 100-D DTDE-Div Mean (Std) | Sig
F1 | 0.00E+00 (0.00E+00) | 4.21E-05 (1.15E-04) | - | 0.00E+00 (0.00E+00) | 1.36E+00 (1.82E+00) | -
F3 | 0.00E+00 (0.00E+00) | 9.78E-05 (1.83E-04) | - | 1.15E-04 (1.39E-04) | 1.03E+02 (1.15E+02) | -
F4 | 7.49E+01 (5.29E+01) | 6.05E+01 (4.89E+01) | = | 1.97E+02 (7.24E+00) | 2.01E+02 (8.16E+00) | -
F5 | 1.70E+00 (1.37E+00) | 2.05E+00 (1.33E+00) | = | 4.10E+00 (2.32E+00) | 3.59E+00 (2.08E+00) | =
F6 | 4.19E-08 (1.15E-07) | 1.91E-06 (2.21E-06) | - | 2.26E-03 (1.98E-03) | 1.55E-04 (3.20E-04) | +
F7 | 5.66E+01 (9.29E-01) | 5.72E+01 (1.09E+00) | - | 1.12E+02 (1.65E+00) | 1.13E+02 (1.72E+00) | -
F8 | 2.24E+00 (1.54E+00) | 2.34E+00 (1.56E+00) | = | 4.00E+00 (1.66E+00) | 3.71E+00 (2.16E+00) | =
F9 | 0.00E+00 (0.00E+00) | 0.00E+00 (0.00E+00) | = | 9.75E-02 (1.78E-01) | 1.42E-02 (6.64E-02) | +
F10 | 1.56E+02 (1.05E+02) | 1.35E+02 (7.48E+01) | = | 5.16E+02 (3.05E+02) | 3.96E+02 (2.81E+02) | +
F11 | 3.31E+01 (4.46E+00) | 2.32E+01 (2.81E+00) | + | 1.58E+02 (3.60E+01) | 1.35E+02 (2.89E+01) | +
F12 | 2.12E+03 (5.05E+02) | 1.71E+03 (4.20E+02) | + | 1.87E+04 (8.72E+03) | 4.01E+04 (1.50E+04) | -
F13 | 5.53E+01 (3.12E+01) | 4.74E+01 (2.72E+01) | = | 1.70E+02 (4.50E+01) | 1.60E+02 (5.13E+01) | =
F14 | 2.50E+01 (4.87E+00) | 2.98E+01 (3.39E+00) | - | 8.06E+01 (1.90E+01) | 4.86E+01 (1.01E+01) | +
F15 | 2.62E+01 (3.74E+00) | 2.51E+01 (2.75E+00) | = | 2.48E+02 (5.00E+01) | 1.45E+02 (4.43E+01) | +
F16 | 1.42E+02 (5.29E+01) | 1.41E+02 (5.03E+01) | = | 2.39E+02 (1.52E+02) | 1.98E+02 (1.32E+02) | =
F17 | 5.53E+01 (6.24E+01) | 4.81E+01 (5.27E+01) | = | 7.78E+01 (8.36E+01) | 7.24E+01 (1.04E+02) | =
F18 | 3.37E+01 (6.24E+00) | 4.89E+01 (1.14E+01) | - | 2.15E+02 (4.85E+01) | 1.85E+02 (3.82E+01) | +
F19 | 1.01E+01 (2.15E+00) | 1.02E+01 (2.63E+00) | = | 1.68E+02 (2.17E+01) | 4.82E+01 (1.03E+01) | +
F20 | 2.31E+01 (2.21E+00) | 2.32E+01 (2.26E+00) | = | 1.64E+02 (3.53E+01) | 1.74E+02 (6.89E+01) | =
F21 | 2.03E+02 (2.01E+00) | 2.03E+02 (2.21E+00) | = | 2.27E+02 (3.71E+00) | 2.25E+02 (3.39E+00) | +
F22 | 3.17E+02 (1.45E+02) | 3.25E+02 (1.50E+02) | = | 9.91E+02 (3.03E+02) | 1.07E+03 (2.80E+02) | =
F23 | 4.23E+02 (6.76E+00) | 4.19E+02 (5.56E+00) | + | 5.49E+02 (8.15E+00) | 5.30E+02 (6.40E+00) | +
F24 | 5.00E+02 (2.58E+00) | 4.98E+02 (1.92E+00) | + | 8.94E+02 (6.64E+00) | 8.75E+02 (3.68E+00) | +
F25 | 4.82E+02 (4.26E+00) | 4.82E+02 (3.77E+00) | + | 7.42E+02 (3.34E+01) | 7.20E+02 (3.88E+01) | +
F26 | 1.05E+03 (5.89E+01) | 9.00E+02 (8.58E+01) | + | 3.13E+03 (7.44E+01) | 2.72E+03 (6.28E+01) | +
F27 | 5.29E+02 (1.78E+01) | 5.14E+02 (9.50E+00) | + | 6.23E+02 (1.78E+01) | 6.00E+02 (1.62E+01) | +
F28 | 4.62E+02 (1.10E+01) | 4.59E+02 (1.74E-13) | - | 5.26E+02 (1.76E+01) | 5.28E+02 (2.55E+01) | =
F29 | 3.03E+02 (8.30E+00) | 3.00E+02 (5.68E+00) | + | 8.01E+02 (1.24E+02) | 7.73E+02 (1.32E+02) | =
F30 | 6.67E+05 (7.97E+04) | 6.15E+05 (5.03E+04) | + | 2.35E+03 (1.20E+02) | 2.29E+03 (1.18E+02) | +
+/=/− | | | 9/13/7 | | | 15/9/5
Table 8. Performance comparison of DTDE-div with state-of-the-art DEs on 100-D CEC2017 functions.
Function | EJADE Mean (Std) Sig | jSO Mean (Std) Sig | L-SHADE-RSP Mean (Std) Sig | DISH Mean (Std) Sig | SCSS-L-SHADE Mean (Std) Sig | DTDE-Div Mean (Std)
F1 | 2.13E-10 (1.52E-09) - | 0.00E+00 (0.00E+00) - | 0.00E+00 (0.00E+00) - | 2.17E-08 (3.29E-08) - | 0.00E+00 (0.00E+00) - | 1.59E-04 (5.06E-04)
F3 | 4.13E+03 (4.65E+03) = | 3.16E-06 (3.29E-06) - | 2.16E-07 (2.50E-07) - | 3.30E-05 (2.55E-05) - | 1.39E-04 (1.60E-04) - | 1.01E-04 (2.80E-04)
F4 | 5.19E+01 (6.85E+01) = | 1.98E+02 (1.10E+01) - | 1.99E+02 (9.27E+00) - | 1.98E+02 (9.30E+00) - | 1.98E+02 (8.13E+00) = | 5.84E+01 (4.61E+01)
F5 | 1.23E+02 (2.14E+01) + | 3.80E+01 (6.66E+00) + | 3.10E+01 (6.16E+00) + | 2.71E+01 (1.02E+01) + | 2.81E+01 (3.98E+00) + | 2.09E+00 (1.44E+00)
F6 | 5.36E-02 (4.27E-02) + | 1.93E-04 (4.77E-04) + | 1.89E-05 (1.62E-05) = | 4.98E-06 (4.24E-06) - | 1.76E-03 (1.35E-03) + | 1.69E-06 (2.35E-06)
F7 | 2.21E+02 (1.92E+01) + | 1.42E+02 (9.05E+00) + | 1.38E+02 (7.53E+00) + | 1.39E+02 (7.75E+00) + | 1.32E+02 (4.37E+00) + | 5.70E+01 (9.27E-01)
F8 | 1.15E+02 (1.90E+01) + | 3.75E+01 (9.16E+00) + | 2.87E+01 (7.41E+00) + | 2.67E+01 (1.00E+01) + | 2.85E+01 (3.88E+00) + | 2.19E+00 (1.73E+00)
F9 | 2.71E+01 (1.77E+01) + | 8.78E-03 (2.69E-02) = | 3.51E-03 (1.76E-02) = | 0.00E+00 (0.00E+00) = | 8.84E-02 (1.74E-01) + | 1.11E-13 (1.59E-14)
F10 | 1.02E+04 (1.13E+03) + | 1.09E+04 (6.89E+02) + | 1.07E+04 (7.49E+02) + | 1.11E+04 (6.73E+02) + | 9.72E+03 (5.38E+02) + | 1.50E+02 (9.90E+01)
F11 | 4.14E+02 (3.13E+02) + | 8.48E+01 (2.83E+01) - | 7.71E+01 (2.79E+01) - | 5.52E+01 (3.15E+01) - | 1.74E+02 (5.61E+01) + | 2.42E+01 (2.90E+00)
F12 | 3.30E+04 (1.99E+04) - | 1.83E+04 (7.90E+03) - | 1.28E+04 (4.90E+03) - | 1.31E+04 (6.46E+03) - | 1.61E+04 (6.42E+03) - | 1.55E+03 (4.25E+02)
F13 | 5.48E+02 (5.67E+02) + | 1.60E+02 (4.74E+01) = | 1.33E+02 (4.26E+01) - | 1.17E+02 (3.57E+01) - | 1.73E+02 (4.85E+01) + | 4.56E+01 (2.40E+01)
F14 | 3.24E+02 (2.44E+02) + | 5.35E+01 (8.44E+00) + | 4.54E+01 (7.10E+00) = | 3.87E+01 (4.50E+00) - | 8.63E+01 (1.35E+01) + | 3.01E+01 (3.25E+00)
F15 | 5.00E+02 (4.87E+02) + | 1.71E+02 (4.20E+01) = | 1.28E+02 (3.67E+01) - | 1.20E+02 (3.82E+01) - | 2.43E+02 (4.83E+01) + | 2.51E+01 (2.76E+00)
F16 | 1.99E+03 (5.78E+02) + | 1.74E+03 (3.22E+02) + | 1.64E+03 (3.67E+02) + | 1.85E+03 (2.80E+02) + | 1.56E+03 (2.23E+02) + | 1.46E+02 (6.81E+01)
F17 | 1.60E+03 (3.55E+02) + | 1.23E+03 (2.56E+02) + | 1.13E+03 (2.44E+02) + | 1.25E+03 (2.60E+02) + | 9.92E+02 (2.18E+02) + | 5.28E+01 (5.47E+01)
F18 | 3.22E+03 (3.38E+03) + | 1.71E+02 (3.44E+01) = | 1.57E+02 (3.95E+01) - | 1.14E+02 (2.40E+01) - | 2.02E+02 (4.15E+01) + | 4.69E+01 (1.04E+01)
F19 | 1.71E+02 (6.74E+01) + | 9.48E+01 (1.78E+01) + | 6.36E+01 (9.72E+00) + | 5.70E+01 (6.88E+00) + | 1.65E+02 (2.64E+01) + | 9.76E+00 (2.01E+00)
F20 | 1.68E+03 (4.33E+02) + | 1.55E+03 (2.17E+02) + | 1.36E+03 (2.44E+02) + | 1.56E+03 (3.01E+02) + | 1.51E+03 (1.70E+02) + | 2.29E+01 (1.93E+00)
F21 | 3.38E+02 (2.03E+01) + | 2.55E+02 (8.25E+00) + | 2.51E+02 (7.26E+00) + | 2.47E+02 (6.90E+00) + | 2.52E+02 (5.02E+00) + | 2.04E+02 (2.08E+00)
F22 | 1.10E+04 (1.44E+03) + | 1.15E+04 (1.79E+03) + | 1.10E+04 (7.98E+02) + | 1.16E+04 (1.78E+03) + | 1.08E+04 (1.28E+03) + | 3.09E+02 (1.18E+02)
F23 | 6.29E+02 (1.74E+01) + | 5.69E+02 (9.31E+00) + | 5.64E+02 (9.87E+00) + | 5.64E+02 (9.40E+00) + | 5.58E+02 (9.86E+00) + | 4.19E+02 (6.91E+00)
F24 | 9.83E+02 (2.51E+01) + | 8.99E+02 (7.72E+00) + | 8.97E+02 (6.10E+00) + | 8.94E+02 (6.85E+00) + | 9.02E+02 (6.90E+00) + | 4.98E+02 (2.32E+00)
F25 | 7.53E+02 (5.17E+01) + | 7.33E+02 (3.84E+01) + | 7.26E+02 (3.82E+01) = | 6.99E+02 (5.12E+01) = | 7.35E+02 (4.40E+01) + | 4.80E+02 (1.62E+00)
F26 | 4.10E+03 (2.00E+02) + | 3.20E+03 (1.02E+02) + | 3.15E+03 (8.76E+01) + | 3.09E+03 (8.73E+01) + | 3.21E+03 (6.93E+01) + | 9.08E+02 (8.73E+01)
F27 | 6.54E+02 (2.33E+01) + | 5.84E+02 (2.17E+01) - | 5.84E+02 (1.75E+01) - | 5.72E+02 (1.78E+01) - | 6.25E+02 (1.88E+01) + | 5.15E+02 (7.52E+00)
F28 | 5.17E+02 (4.56E+01) = | 5.24E+02 (1.67E+01) = | 5.29E+02 (2.46E+01) = | 5.22E+02 (1.99E+01) - | 5.29E+02 (2.36E+01) = | 4.60E+02 (5.68E+00)
F29 | 1.64E+03 (4.21E+02) + | 1.27E+03 (1.82E+02) + | 1.23E+03 (1.76E+02) + | 1.31E+03 (2.29E+02) + | 1.26E+03 (1.83E+02) + | 3.05E+02 (7.46E+00)
F30 | 2.53E+03 (2.02E+02) + | 2.35E+03 (1.46E+02) = | 2.36E+03 (1.70E+02) = | 2.30E+03 (1.24E+02) = | 2.33E+03 (1.15E+02) = | 6.15E+05 (3.79E+04)
+/=/− | 24/3/2 | 17/6/6 | 14/6/9 | 14/3/12 | 23/3/3
Table 9. Overall performance ranking of the compared DE algorithms.
Algorithm | EJADE | jSO | L-SHADE-RSP | DISH | SCSS-L-SHADE | DTDE-Div
Performance ranking | 5.36 | 3.74 | 2.80 | 2.88 | 3.63 | 2.59
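The performance-ranking scores in Tables 9 and 10 are average ranks over the benchmark functions: lower is better, and DTDE-Div's 2.59 is the best of the six algorithms. A minimal sketch of the presumed procedure (rank algorithms by mean error per function, then average each algorithm's rank; `mean_ranks` is a hypothetical helper, and tied means are not given averaged ranks here as a Friedman test would do):

```python
# Hedged sketch of an average-rank score across benchmark functions.
def mean_ranks(results: dict[str, list[float]]) -> dict[str, float]:
    """results: {algorithm: [mean error on each function]}, equal-length lists.
    Returns each algorithm's rank (1 = best) averaged over all functions."""
    algs = list(results)
    n_funcs = len(next(iter(results.values())))
    totals = {a: 0.0 for a in algs}
    for i in range(n_funcs):
        ordered = sorted(algs, key=lambda a: results[a][i])  # smaller error ranks first
        for rank, a in enumerate(ordered, start=1):
            totals[a] += rank
    return {a: totals[a] / n_funcs for a in algs}

# Toy example with two functions:
scores = mean_ranks({"A": [1.0, 5.0], "B": [2.0, 1.0], "C": [3.0, 2.0]})
# A ranks 1 then 3 -> 2.0; B ranks 2 then 1 -> 1.5; C ranks 3 then 2 -> 2.5
```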

Yan, R.; Zheng, L.; Jin, X. Parameter Adaptive Differential Evolution Based on Individual Diversity. Symmetry 2025, 17, 1016. https://doi.org/10.3390/sym17071016

