Article

Success History-Based Adaptive Differential Evolution Using Turning-Based Mutation

School of Software, Yunnan University, Kunming 650000, China
* Authors to whom correspondence should be addressed.
Mathematics 2020, 8(9), 1565; https://doi.org/10.3390/math8091565
Submission received: 24 August 2020 / Revised: 5 September 2020 / Accepted: 7 September 2020 / Published: 11 September 2020
(This article belongs to the Special Issue Evolutionary Computation 2020)

Abstract

Single objective optimization algorithms are the foundation on which more complex methods, such as constrained optimization, niching, and multi-objective algorithms, are built. Improvements to single objective optimization algorithms are therefore important because they can carry over to these other domains as well. This paper proposes a method using turning-based mutation that is aimed at the premature convergence of algorithms based on SHADE (Success-History based Adaptive Differential Evolution) in high-dimensional search spaces. The proposed method is tested on the Single Objective Bound Constrained Numerical Optimization (CEC2020) benchmark sets in 5, 10, 15, and 20 dimensions for the SHADE, L-SHADE, and jSO algorithms. The effectiveness of the method is verified by a population diversity measure and population clustering analysis. In addition, the new versions (Tb-SHADE, TbL-SHADE, and Tb-jSO) using the proposed turning-based mutation obtain clearly better optimization results than the original algorithms (SHADE, L-SHADE, and jSO) as well as the advanced DISH and jDE100 algorithms on the 10-, 15-, and 20-dimensional functions, but outperform the advanced j2020 algorithm only on the 5-dimensional functions.

1. Introduction

The single objective global optimization problem involves finding a solution vector x = (x1, …, xD) that minimizes the objective function f(x), where D is the dimension of the problem. The task of black box optimization is to solve the global optimization problem without a known form or structure of the objective function, that is, f is a "black box". This situation arises in many engineering optimization problems, where complex simulations are used to evaluate the objective function.
The differential evolution (DE) algorithm, proposed by Storn and Price in 1995, laid the foundation for a series of successful algorithms for continuous optimization. DE is a random black box search method that was originally designed for numerical optimization problems [1]. It is also an evolutionary algorithm that ensures that each generation's solutions are no worse than those of the previous generation, a phenomenon known as elitism. The extensive research on DE has recently been surveyed in [2].
Studies on DE have yielded a number of improvements [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17] to the classical DE algorithm, and the status of research on it can be easily obtained by noting the results of the Continuous Optimization Competition and the Evolutionary Computing Conference (CEC).
A popular variant of DE [18] is the algorithm proposed by Tanabe and Fukunaga called Success History-based Adaptive Differential Evolution (SHADE) [19]. During optimization, the control parameters (the scale factor F and the crossover rate CR) are adjusted to adapt to the given problem, and SHADE combines the "current-to-pbest/1" mutation strategy with the external archive of poor-quality solutions from JADE [20]. The SHADE algorithm ranked third in CEC2013. The following year, the authors proposed an improved scheme that adds a linear reduction of the population size, called L-SHADE, to improve the convergence rate of SHADE [21]; L-SHADE won the CEC2014 competition. The winners in subsequent years were SPS-L-SHADE-EIG [22] (CEC2015), LSHADE-EpSin [23] (joint winner of CEC2016), and jSO [24] (CEC2017). These algorithms are all based on L-SHADE, which makes it one of the most effective variants of SHADE [25]. With the exception of jSO, these winners benefited from general enhancements in the area [26]. Consequently, this study applies an improved method to the SHADE, L-SHADE, and jSO algorithms. LSHADE-ESP [27] came in second in CEC2018, and jDE100 [28] won CEC2019. The j2020 [29] algorithm, recently proposed at CEC2020, is also within the reference range. Enhanced versions of these DE algorithms add new mechanisms or parameters for optimization, similar to those in other optimization algorithms [30], as described in substantive surveys of these areas [25,31,32,33,34,35,36,37]. Moreover, theoretical analyses supporting DE have also been provided, such as in [38,39,40,41].
The DE consists of three main steps: mutation, crossover, and selection. Many proposals [6,10,11,14,24] have been made to improve the mutation process in order to improve optimization performance. For instance, four strategies combining mutation and crossover were used in SHADE4 [6], SHADE44 [10], and L-SHADE44 [11] to create a new trial individual and realize an adaptive mechanism. A novel multi-chaotic framework was used in MC-SHADE [14] to generate random numbers for parent selection in the mutation process. A new weighted mutation strategy with parameterization enhancement was used in jSO [24] to enhance adaptability. This paper likewise focuses on improving this process in the DE algorithm, especially in SHADE-based algorithms.
The CEC2020 [42] single-objective boundary-constrained numerical optimization benchmark sets are designed to determine the performance improvement obtained by increasing the number of fitness function evaluations of an optimization algorithm. There are thus two motivations for this study. First, we need to solve the problem of premature convergence of SHADE-based algorithms in high-dimensional search spaces on the CEC2020 benchmark sets, so that they can maintain high population diversity and a longer exploration phase. Second, the improvement to the algorithm should be simple, should not excessively increase complexity, and should not render the proposed algorithm incomprehensible and less applicable, as discussed in [43]. We propose a method using turning-based mutation and apply it to the SHADE, L-SHADE, and jSO algorithms to yield good performance with a relatively simple algorithm structure. In experiments on 10, 15, and 20 dimensions, the improved algorithms achieved better performance than the original algorithms as well as the advanced DISH [44] and jDE100 algorithms on the CEC2020 benchmark sets, but were slightly worse than the j2020 algorithm. We also use a population diversity measure and population clustering analysis to verify the effectiveness of the proposed method.
Section 2 describes the process of evolution from the DE algorithm to the SHADE, L-SHADE, and jSO algorithms, and turning-based mutation is introduced in Section 3. The experimental settings and results are described in Section 4 and Section 5, respectively. Section 6 discusses the results, and the conclusion of this paper is given in Section 7.

2. DE, SHADE, L-SHADE, and jSO

2.1. Differential Evolution

The DE consists of three main steps: mutation, crossover, and selection. In mutation, the attribute vector of the selected individual x is combined in a simple vector operation to generate the mutated vector v. The scale factor F of the control parameter is used in this operation. In the crossover step, according to the probability given by the crossover rate CR of the control parameter, the trial vector u is created by selecting the attribute from the original vector x or mutated vector v. Finally, in the selection step, the trial vector u is evaluated by the objective function and the fitness f(u) is compared with the fitness of the selected vector f(x). The vector with the better fitness value survives to the next generation.
This paper focuses on improving the mutation process, so the paragraphs below describe the mutation process of the DE algorithm; the complete steps of DE can be found in [1]. The mutation strategy of DE/rand/1/bin can be expressed as follows:
v_{i,G} = x_{r1,G} + F_i \times (x_{r2,G} - x_{r3,G})    (1)
where vi,G is the mutated vector, and xr1,G, xr2,G, and xr3,G are three different individuals randomly selected from the population. Fi is the scaling factor, and G is the index of the current generation.
If any dimension of the mutated vector vj,i,G is outside the boundary of the search range [xmin, xmax], we perform the following correction for boundary-handling to handle infeasible solutions [45]:
v_{j,i,G} = (x_{min} + x_{j,i,G})/2  if v_{j,i,G} < x_{min};  (x_{max} + x_{j,i,G})/2  if v_{j,i,G} > x_{max}    (2)
where j is the dimensional index and i is the individual index.
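For concreteness, Equations (1) and (2) can be sketched in a few lines of NumPy; the function names are illustrative, not taken from the original implementation:

```python
import numpy as np

def de_rand_1(pop, i, F, rng):
    """DE/rand/1 mutation (Eq. (1)): v = x_r1 + F * (x_r2 - x_r3)."""
    NP = len(pop)
    # pick three mutually distinct indices, all different from i
    candidates = [j for j in range(NP) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def bound_handle(v, x, xmin, xmax):
    """Boundary handling (Eq. (2)): move to the midpoint of bound and parent x."""
    v = np.where(v < xmin, (xmin + x) / 2.0, v)
    v = np.where(v > xmax, (xmax + x) / 2.0, v)
    return v
```

Because the parent x lies inside the bounds, the midpoint correction always returns the mutated vector to the feasible region.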
The pseudo-code of the DE/rand/1/bin algorithm is shown in Algorithm 1.
Algorithm 1 DE/rand/1/bin
1: initialize P, NP, F, CR and MaxFES;
2: while FES < MaxFES do
3:   for each individual x do
4:     use mutation Formula (1) to create mutated vector v;
5:     execute boundary-handling (2) to handle infeasible solutions;
6:     use binomial crossover to create trial vector u;
7:     use selection of classical DE to create individual of next generation;
8:   end for
9: end while
10: return the best found solution.
It can be seen from the description of the DE algorithm that users need to set three control parameters: the crossover rate CR, the scaling factor F, and the population size NP. The settings of these parameters are very important to the performance of DE.
Fine-tuning the control parameters is a time-consuming task, which is why most advanced variants of DE use parameter adaptation, and why Tanabe and Fukunaga proposed the SHADE [19] algorithm in 2013. Because the algorithms used in this paper are based on SHADE, it is described in more detail below.

2.2. SHADE

Among the control parameters of SHADE, the crossover rate CR and the scaling factor F are discussed. The algorithm is based on JADE [20], proposed by Zhang and Sanderson, and so the two share many mechanisms [18]. The major difference between them lies in the historical memories MF and MCR and their update mechanisms. The next subsections describe the historical memory update of SHADE and the differences between the DE and SHADE algorithms in initialization, mutation, crossover, and selection, respectively. The complete steps of SHADE can be found in [19].

2.2.1. Initialization

In SHADE, the population is initialized in the same manner as in DE, but there are two additional components—historical memory and external archive—that also need to be initialized.
Initialize the control parameters stored in the historical memory, crossover rate CR and scale factor F to 0.5:
M_{CR,i} = M_{F,i} = 0.5,  i = 1, …, H    (3)
where H is the size of the user-defined historical memory, and the index k to update the historical memory is initialized to one.
In addition, the external archive of poor-quality solutions is initialized as empty, i.e., A = ∅.

2.2.2. Mutation

In contrast to DE/rand/1/bin, the “current to pbest/1” mutation strategy is used in SHADE:
v_{i,G} = x_{i,G} + F_i \times (x_{pbest,G} - x_{i,G}) + F_i \times (x_{r1,G} - x_{r2,G})    (4)

p_i = rand[p_{min}, 0.2]    (5)

p_{min} = 2/NP    (6)
where xi,G is the given individual, and xpbest,G is an individual randomly selected from the best NP × pi (pi ∈ [0, 1]) individuals in the current population. Vector xr1,G is an individual randomly selected from the current population, and xr2,G is an individual randomly selected from the union of the external archive A and the current population. The indices satisfy r1 ≠ r2 ≠ i. Fi is a scaling factor, rand[] is a uniformly distributed random number, and NP is the population size. vi,G is the mutated vector and G is the index of the current generation. The greed of the "current-to-pbest/1" mutation strategy depends on the control parameter pi, which is calculated as shown in Equations (5) and (6); it balances exploration and exploitation capabilities (a small value of p is more greedy). The scaling factor Fi is generated using the following formula:
F_i = randc_i(M_{F,r_i}, 0.1)    (7)
where randci() is a random number drawn from a Cauchy distribution, and MF,ri is randomly selected from the historical memory MF (the index ri is a uniformly distributed random value from [1, H]). If Fi > 1, Fi is set to 1. If Fi ≤ 0, Equation (7) is repeated to attempt to generate a valid value.
The boundary handling of SHADE is identical to that of DE, as shown in Equation (2).
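As an illustration, the "current-to-pbest/1" strategy of Equations (4)-(6) can be sketched as follows. This is a minimal NumPy sketch under our own naming; the distinctness check for r2 across the population/archive union is simplified relative to [19]:

```python
import numpy as np

def current_to_pbest_1(pop, archive, fitness, i, F, rng):
    """Sketch of the 'current-to-pbest/1' mutation (Eqs. (4)-(6))."""
    NP = len(pop)
    p_i = rng.uniform(2.0 / NP, 0.2)               # Eqs. (5)-(6)
    n_best = max(2, int(round(NP * p_i)))
    pbest = pop[rng.choice(np.argsort(fitness)[:n_best])]  # one of the best NP*p_i
    r1 = rng.choice([j for j in range(NP) if j != i])
    union = np.vstack([pop, archive])              # population plus archive
    while True:                                    # simplified: r2 distinct from i, r1
        r2 = rng.integers(len(union))
        if r2 != i and r2 != r1:
            break
    return pop[i] + F * (pbest - pop[i]) + F * (pop[r1] - union[r2])   # Eq. (4)
```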

2.2.3. Crossover

DE has two classic crossover strategies, binomial and exponential. The crossover strategy of SHADE is the same as that of DE/rand/1/bin, i.e., binomial crossover. However, while the crossover rate of DE/rand/1/bin is set in advance, the CRi of SHADE is generated by the following formula:
CR_i = randn_i(M_{CR,r_i}, 0.1)    (8)
where randni() is a random number drawn from a Gaussian distribution, and MCR,ri is randomly selected from the historical memory MCR (the index ri is a uniformly distributed random value from [1, H]). If CRi > 1, CRi is set to 1; if CRi < 0, CRi is set to 0.
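The parameter sampling of Equations (7) and (8) can be sketched as below. This is an illustrative sketch; we keep the memory index fixed while an invalid F is regenerated, a detail the text leaves to [19]:

```python
import numpy as np

def sample_F(M_F, rng):
    """Draw F from a Cauchy distribution around a random memory slot (Eq. (7))."""
    r = rng.integers(len(M_F))
    while True:
        F = M_F[r] + 0.1 * rng.standard_cauchy()
        if F > 0:                 # F <= 0: regenerate; F > 1: clip to 1
            return min(F, 1.0)

def sample_CR(M_CR, rng):
    """Draw CR from a Gaussian around a random memory slot, clipped (Eq. (8))."""
    r = rng.integers(len(M_CR))
    return float(np.clip(rng.normal(M_CR[r], 0.1), 0.0, 1.0))
```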

2.2.4. Selection

The selection process of SHADE is the same as that of DE. However, the external archive needs to be updated during selection: if a better trial individual is generated, the original individual xi,G is stored in the external archive. If the external archive exceeds its capacity, a randomly chosen member is deleted.

2.2.5. Historical Memory Update

Historical memory update is also an important operation in SHADE. The historical memories MCR and MF are initialized by Formula (3), but their contents change as the algorithm iterates. These memories store the "successful" crossover rates CR and scaling factors F, where "successful" means that the trial vector u was selected instead of the original vector x to survive to the next generation. In each generation, the values of these "successful" CR and F are first stored in the arrays SCR and SF, respectively. After each generation, one unit of each of the historical memories MF and MCR is updated. The updated unit is specified by the index k, which is initialized to one and increases by one after each generation; if k exceeds the memory capacity H, it is reset to one. The following formulas are used to update the k-th unit of historical memory:
M_{CR,k,G+1} = mean_{WA}(S_{CR})  if S_{CR} ≠ ∅;  M_{CR,k,G}  otherwise    (9)

M_{F,k,G+1} = mean_{WL}(S_F)  if S_F ≠ ∅;  M_{F,k,G}  otherwise    (10)
If all individuals in the G-th generation fail to generate a better trial vector, i.e., SF = SCR = ∅, the historical memory is not updated. The weighted mean WA and the weighted Lehmer mean WL are calculated using the following formulas, respectively:
mean_{WA}(S_{CR}) = \sum_{k=1}^{|S_{CR}|} w_k \times S_{CR,k}    (11)

mean_{WL}(S_F) = \frac{\sum_{k=1}^{|S_F|} w_k \times S_{F,k}^2}{\sum_{k=1}^{|S_F|} w_k \times S_{F,k}}    (12)
To improve the adaptability of the parameters, the weight vector w is calculated from the absolute difference between the objective function value of the given vector and that of the trial vector in the current generation G, as follows:
w_k = \frac{\Delta f_k}{\sum_{k=1}^{|S_{CR}|} \Delta f_k}    (13)

where \Delta f_k = |f(u_{k,G}) - f(x_{k,G})| in (13).
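The memory update of Equations (9)-(13) can be sketched as follows (a minimal NumPy sketch; the in-place update of the memory arrays is our implementation choice):

```python
import numpy as np

def update_memory(M_CR, M_F, k, S_CR, S_F, delta_f):
    """Update the k-th memory slot with weighted means (Eqs. (9)-(13))."""
    if len(S_CR) == 0:
        return k                            # no successes: memory unchanged
    w = np.asarray(delta_f, dtype=float)
    w = w / w.sum()                         # Eq. (13)
    S_CR = np.asarray(S_CR, dtype=float)
    S_F = np.asarray(S_F, dtype=float)
    M_CR[k] = np.sum(w * S_CR)                            # weighted mean, Eq. (11)
    M_F[k] = np.sum(w * S_F ** 2) / np.sum(w * S_F)       # weighted Lehmer mean, Eq. (12)
    return (k + 1) % len(M_CR)              # advance the circular index k
```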
The pseudo-code of the SHADE algorithm is shown in Algorithm 2.
Algorithm 2 SHADE
1: initialize P, NP, F, CR, A, H and MaxFES;
2: initialize MF, MCR by (3);
3: while FES < MaxFES do
4:   for each individual x do
5:     use Formulas (7) and (8) to select F and CR;
6:     use Formula (4) to create mutated vector v;
7:     execute boundary-handling (2) to handle infeasible solutions;
8:     use binomial crossover to create trial vector u;
9:     use selection of classical DE to create individual of next generation;
10:    update external archive A;
11:  end for
12:  use Formulas (9) and (10) to update historical memory MF and MCR;
13: end while
14: return the best found solution.

2.3. Linear Decrease in Population Size: L-SHADE

In [21], a linear reduction of the population size was introduced to SHADE to improve its performance. The basic idea is to gradually reduce the population size during evolution to improve exploitation capabilities. In L-SHADE, the population size is recalculated after each generation using Formula (14). If the new population size NPnew is smaller than the previous population size NP, all individuals are sorted by objective function value and the worst NP − NPnew individuals are removed. The size of the external archive |A| decreases synchronously with the population size:
NP_{new} = round(NP_{init} - \frac{FES}{MaxFES} \times (NP_{init} - NP_f))    (14)
where NPf and NPinit are the final and initial population sizes, respectively, MaxFES and FES are the maximum and current number of fitness function evaluations, respectively, and round() is a rounding function.
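Equation (14) amounts to a single line of code; the function name here is illustrative:

```python
def np_new(np_init, np_f, fes, max_fes):
    """Linear population size reduction (Eq. (14))."""
    return round(np_init - fes / max_fes * (np_init - np_f))
```

The population thus shrinks linearly from NPinit at FES = 0 down to NPf at FES = MaxFES.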

2.4. Weighted Mutation Strategy with Parameterization Enhancement: jSO

The jSO [24] algorithm won the CEC2017 single-objective real-parameter optimization competition [46]. It is a variant of the iL-SHADE algorithm that uses a weighted mutation strategy [47]. iL-SHADE extends L-SHADE by initializing all parameters in the historical memories MF and MCR to 0.8, statically setting the last unit of each memory to 0.9, updating MF and MCR with the weighted Lehmer mean, and limiting the crossover rate CR and scaling factor F in the early stage; p for the "current-to-pbest/1" mutation strategy is calculated as:
p = p_{min} + \frac{FES}{MaxFES} \times (p_{max} - p_{min})    (15)
where pmin and pmax are the minimum and maximum value of p, respectively. FES and MaxFES are the current and maximum number of the calculation of the fitness function, respectively.
The jSO algorithm sets pmax = 0.25 and pmin = pmax/2, the initial population size to NP_{init} = 25 \sqrt{D} \log D, and the size of the historical memory to H = 5. All parameters in MF and MCR are initialized to 0.3 and 0.8, respectively, and the weighted mutation strategy "current-to-pbest-w/1" is used:
v_{i,G} = x_{i,G} + F_w \times (x_{pbest,G} - x_{i,G}) + F_i \times (x_{r1,G} - x_{r2,G})    (16)
where Fw is calculated as:
F_w = 0.7 F_i  if FES < 0.2 MaxFES;  0.8 F_i  if FES < 0.4 MaxFES;  1.2 F_i  otherwise    (17)
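The schedules of Equations (15) and (17) can be sketched as follows (function names are ours; p_min = p_max/2 as stated above for jSO):

```python
def p_value(fes, max_fes, p_max=0.25):
    """Linear p schedule for current-to-pbest-w/1 (Eq. (15))."""
    p_min = p_max / 2
    return p_min + fes / max_fes * (p_max - p_min)

def weight_F(F_i, fes, max_fes):
    """Stage-dependent weighting F_w of the scaling factor (Eq. (17))."""
    if fes < 0.2 * max_fes:
        return 0.7 * F_i        # early stage: damp the pbest attraction
    elif fes < 0.4 * max_fes:
        return 0.8 * F_i
    return 1.2 * F_i            # late stage: amplify it
```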

3. Turning-Based Mutation

The opposition-based DE (ODE) algorithm was proposed by Shahryar et al. [48]. Opposition-based learning (OBL) was used for population initialization and generation jumping, and opposite numbers were used to improve the convergence rate of DE. Shahryar et al. let all vectors of the initial population take their opposite numbers during initialization and allowed the trial vectors to take their opposite numbers in the selection operation; they then compared the fitness values and selected the vector with the better fitness to accelerate the convergence of the DE algorithm. We borrow the idea of "opposition" from this algorithm, but the purpose of this paper is to change the direction of mutation under certain conditions so as to maintain population diversity and enable a longer exploration phase.
Suppose that the search space is two-dimensional (2D). There is a ring-shaped region whose center is the global suboptimal individual xpbest,G, with outer radius OR and inner radius IR. If the Euclidean distance Distance between a given individual and the global suboptimal individual is smaller than the outer radius OR and larger than the inner radius IR, the differential vector dei from mutation Formulas (1) and (4) takes its opposite number, and some randomly selected dimensions are assigned random values within the search range. Experiments verified that effective values of the outer radius OR and inner radius IR can be calculated as:
OR_{init} = \sqrt{\sum_{j=1}^{D} \left(\frac{x_{max} - x_{min}}{2}\right)^2}    (18)

IR = \sqrt{\sum_{j=1}^{D} \left(\frac{x_{max} - x_{min}}{40}\right)^2}    (19)

OR = OR_{init} + \frac{IR - OR_{init}}{MaxFES} \times FES    (20)
where ORinit is the initial value of the outer radius and IR is the inner radius, which is also the minimum value of the outer radius. The outer radius OR decreases with an increase in the number of fitness evaluations. MaxFES and FES are the maximum and current number of the calculation of the fitness function, respectively, and xmax and xmin are the upper and lower bounds of the search range, respectively.
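Because the summand in Equations (18) and (19) is the same for every dimension j, the sums reduce to a factor of D; the radii can therefore be computed as follows (a small sketch with our own function name):

```python
import math

def turning_radii(D, xmin, xmax, fes, max_fes):
    """Outer and inner radius of the turning ring (Eqs. (18)-(20))."""
    or_init = math.sqrt(D * ((xmax - xmin) / 2) ** 2)   # Eq. (18)
    ir = math.sqrt(D * ((xmax - xmin) / 40) ** 2)       # Eq. (19)
    outer = or_init + (ir - or_init) / max_fes * fes    # Eq. (20): shrinks over time
    return outer, ir
```

For D = 4 on [−100, 100], the outer radius starts at 200 and linearly contracts to the inner radius 10 by the end of the run.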
The Euclidean distance Distance between the given individual and the global suboptimal individual is calculated as:
Distance = \sqrt{\sum_{j=1}^{D} (x_{j,pbest,G} - x_{j,i,G})^2}    (21)
The differential vector dei from the mutation Equations (1) and (4) is calculated as:
de_i = (F_i \text{ or } F_w) \times (x_{pbest,G} - x_{i,G}) + F_i \times (x_{r1,G} - x_{r2,G})    (22)
The pseudo-code of the operation on the differential vector dei in turning-based mutation is shown as Operation 1:
Operation 1 Operation on dei
1: if Distance > IR and Distance < OR then
2:   dei = −dei;
3:   M = randi(D), R = randperm(D);
4:   for d = 1 to M do
5:     dei(R(d)) = rand × (xmax − xmin) + xmin;
6:   end for
7: end if
where R is the randomly disordered dimension index array, M is the number of randomly selected dimensions, and xmax and xmin are the upper and lower bounds of the search range, respectively.
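Operation 1 can be sketched in NumPy as follows. This is an illustrative sketch under our own naming; the ring test follows Equation (21) and the randomized dimensions follow line 5 of Operation 1:

```python
import numpy as np

def turn_de(de, pbest, x, inner, outer, xmin, xmax, rng):
    """Operation 1: inside the ring, reverse de_i and re-randomize M dimensions."""
    dist = np.linalg.norm(pbest - x)            # Euclidean distance, Eq. (21)
    if inner < dist < outer:
        D = len(de)
        de = -de                                # turn the mutation direction
        M = rng.integers(1, D + 1)              # M = randi(D)
        dims = rng.permutation(D)[:M]           # first M entries of R = randperm(D)
        de[dims] = rng.uniform(size=M) * (xmax - xmin) + xmin
    return de
```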
Finally, the mutation operation is performed as shown in Equation (23):
v_{i,G} = x_{i,G} + de_i    (23)
If the Euclidean distance Distance between the given individual and the global suboptimal individual lies between the inner radius IR and the outer radius OR, the improved method changes the direction of mutation of the given individual to maintain population diversity and a longer exploration phase, thus enhancing the global search ability and the ability to escape local optima. Then, as the number of fitness evaluations increases, the performance of the algorithm can be improved. If the distance is smaller than or equal to the inner radius IR, the individual is allowed to mutate in the original direction. This enables it to converge quickly to the global optimal or suboptimal position and avoids the non-convergence that turning-based mutation could otherwise cause.
Since Equation (21) and Operation 1 need to be executed in the mutation process of each individual, the overall time complexity [42] of the improved algorithms is slightly higher than that of the original algorithms, as shown in Table 1, Table 2 and Table 3.
The pseudo-code of the Tb-SHADE algorithm (SHADE algorithm using turning-based mutation) is shown as Algorithm 3, that of the TbL-SHADE algorithm (L-SHADE algorithm using turning-based mutation) is shown as Algorithm 4, and that of the Tb-jSO algorithm (jSO algorithm using turning-based mutation) is shown as Algorithm 5. The improved parts of these algorithms are underlined.
Algorithm 3 Tb-SHADE
1: initialize P, NP, F, CR, A, H and MaxFES;
2: initialize MF, MCR by (3);
3: initialize ORinit by (18) and IR by (19);
4: while FES < MaxFES do
5:   calculate OR by (20);
6:   for each individual x do
7:     use Formulas (7) and (8) to select F and CR;
8:     set Distance by (21);
9:     execute Operation 1;
10:    use Formula (23) to create mutated vector v;
11:    execute boundary-handling (2) to handle infeasible solutions;
12:    use binomial crossover to create trial vector u;
13:    use selection of classical DE to create individual of next generation;
14:    update external archive A;
15:  end for
16:  use Formulas (9) and (10) to update historical memory MF and MCR;
17: end while
18: return the best found solution.
Algorithm 4 TbL-SHADE
1: initialize P, NPinit, NPf, F, CR, A, H and MaxFES;
2: initialize MF, MCR by (3);
3: initialize ORinit by (18) and IR by (19);
4: while FES < MaxFES do
5:   calculate OR by (20);
6:   for each individual x do
7:     use Formulas (7) and (8) to select F and CR;
8:     set Distance by (21);
9:     execute Operation 1;
10:    use Formula (23) to create mutated vector v;
11:    execute boundary-handling (2) to handle infeasible solutions;
12:    use binomial crossover to create trial vector u;
13:    use selection of classical DE to create individual of next generation;
14:    update external archive A;
15:  end for
16:  use Formulas (9) and (10) to update historical memory MF and MCR;
17:  use (14) to calculate NPnew;
18:  NP = NPnew, |A| = NPnew;
19: end while
20: return the best found solution.
Algorithm 5 Tb-jSO
1: initialize P, NPinit, NPf, F, CR, A, H and MaxFES;
2: initialize all values in MF to 0.3 and MCR to 0.8, but MF,H = 0.9 and MCR,H = 0.9;
3: initialize ORinit by (18) and IR by (19);
4: while FES < MaxFES do
5:   calculate OR by (20);
6:   for each individual x do
7:     use Formulas (7) and (8) to select F and CR;
8:     use (17) to calculate Fw;
9:     if FES < 0.6MaxFES and Fi,G > 0.7 then
10:      Fi,G = 0.7;
11:    end if
12:    if FES < 0.25MaxFES then
13:      CRi,G = max(CRi,G, 0.7);
14:    else if FES < 0.5MaxFES then
15:      CRi,G = max(CRi,G, 0.6);
16:    end if
17:    set Distance by (21);
18:    execute Operation 1;
19:    use Formula (23) to create mutated vector v;
20:    execute boundary-handling (2) to handle infeasible solutions;
21:    use binomial crossover to create trial vector u;
22:    use selection of classical DE to create individual of next generation;
23:    update external archive A;
24:  end for
25:  use Formulas (9) and (10) to update historical memory MF and MCR;
26:  use (14) to calculate NPnew;
27:  NP = NPnew, |A| = NPnew;
28: end while
29: return the best found solution.

4. Experimental Settings

To verify the improved method experimentally, the original algorithms, the improved algorithms, and the advanced DISH and jDE100 algorithms were tested on the Single Objective Bound Constrained Numerical Optimization (CEC2020) benchmark sets in 5, 10, 15, and 20 dimensions. The termination criteria, i.e., the maximum number of fitness function evaluations (MaxFES) and the minimum error value (Min error value), were set as in Table 4. The search range was [xmin, xmax] = [−100, 100], and 30 independent repeated experiments were conducted. The parameter settings of most algorithms [19,21,24] are shown in Table 5 and Table 6. In addition, the parameter settings of the j2020 algorithm can be found in [29].
The hypothesis that the turning-based mutation can maintain a longer exploration phase can be verified by analyzing the clustering and density of the population during the optimization process. These two analyses are described in more detail below.

4.1. Cluster Analysis

The clustering algorithm selected in this experiment was Density-Based Spatial Clustering of Applications with Noise (DBSCAN) [49], which forms clusters based on density rather than distance from a center, and can therefore find clusters of arbitrary shape. The DBSCAN algorithm needs two control parameters and a distance measure, set as follows:
(1) the distance between core points: Eps = 1% of the decision space; for the CEC2020 benchmark sets, Eps = 2;
(2) the minimum number of points forming a cluster: MinPts = 4 (the minimum number of mutation individuals); and
(3) the distance measure: the Chebyshev distance [50]; if the distance between the corresponding attributes of two individuals is greater than 1% of the decision space, they are not considered directly density-reachable.
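To make this setup concrete, a minimal DBSCAN-style sketch with the Chebyshev metric is shown below. This is a simplified illustration (border points join the first cluster that reaches them), not the implementation used in the experiments:

```python
import numpy as np

def dbscan_chebyshev(pop, eps=2.0, min_pts=4):
    """Minimal DBSCAN sketch; returns a label per individual (-1 = noise)."""
    n = len(pop)
    # pairwise Chebyshev distances between individuals
    dist = np.max(np.abs(pop[:, None, :] - pop[None, :, :]), axis=2)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    core = [len(nb) >= min_pts for nb in neighbors]
    labels = np.full(n, -1)
    cid = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        labels[i] = cid
        stack = list(neighbors[i])
        while stack:                    # expand cluster via density-reachability
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cid
                if core[j]:
                    stack.extend(neighbors[j])
        cid += 1
    return labels
```

With Eps = 2 and MinPts = 4, a run is counted as "clustered" once at least one cluster of four or more individuals appears.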

4.2. Population Diversity

The population diversity (PD) measure is taken from [51]. It is based on the square root of the sum of squared deviations, Equation (25), of individual components from their corresponding mean values, Equation (24):
\bar{x}_j = \frac{1}{NP} \sum_{i=1}^{NP} x_{j,i}    (24)

PD = \sqrt{\frac{1}{NP} \sum_{i=1}^{NP} \sum_{j=1}^{D} (x_{j,i} - \bar{x}_j)^2}    (25)
where i indexes the members of the population and j indexes the components (dimensions).
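Equations (24) and (25) translate directly into NumPy (the function name is ours):

```python
import numpy as np

def population_diversity(pop):
    """Population diversity (Eqs. (24)-(25)) for a (NP, D) population matrix."""
    mean = pop.mean(axis=0)                     # component-wise mean, Eq. (24)
    # mean over individuals of the squared deviation sums, then square root
    return float(np.sqrt(np.mean(np.sum((pop - mean) ** 2, axis=1))))
```

A fully converged population (all rows identical) has PD = 0; larger values indicate a more spread-out population.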

5. Results

Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16, Table 17 and Table 18 compare the error values (when the error value was smaller than 10−8, the corresponding value was considered optimal) obtained by the original algorithms (SHADE, L-SHADE, and jSO) and their improved versions using turning-based mutation (Tb-SHADE, TbL-SHADE, and Tb-jSO, respectively). The result of each comparison is shown in the last column of each table: the "−" sign is used if the original version performed significantly better, the "+" sign if the improved version performed significantly better, and "=" if their performances were similar. The better performance values are displayed in bold, and the last row of each table shows the result of the overall comparison. Table 19, Table 20, Table 21 and Table 22 provide the error values obtained by the advanced algorithms DISH, jDE100 and j2020 on CEC2020. All tables provide the best, mean, and std (standard deviation) values of 30 independent repetitions of the experiments.
Convergence diagrams are shown in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12. Figure 1, Figure 2, Figure 3 and Figure 4 show the convergence curves of SHADE and Tb-SHADE for some test functions in 5D, 10D, 15D, and 20D; Figure 5, Figure 6, Figure 7 and Figure 8 show those of L-SHADE and TbL-SHADE; and Figure 9, Figure 10, Figure 11 and Figure 12 show those of jSO and Tb-jSO. It is apparent that the red line of the turning-based mutation version was often slower to converge but attained better objective function values.
Table 23, Table 24, Table 25, Table 26, Table 27, Table 28, Table 29, Table 30, Table 31, Table 32, Table 33 and Table 34 show the number of runs (#runs) with population aggregation, the average generation (Mean CO) of the first cluster during these runs, and the average population diversity (Mean PD) of these generations.
The rankings of the Friedman test [52] were obtained by using the average value (Mean) of each algorithm on all 10 test functions in Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16, Table 17, Table 18, Table 19, Table 20, Table 21 and Table 22, and are shown in Table 35, Table 36, Table 37 and Table 38. The related statistical values of the Friedman test are shown in Table 39. If the chi-square statistic was greater than the critical value, the null hypothesis was rejected; p denotes the probability that the null hypothesis holds. The null hypothesis here was that there is no significant difference in performance among the nine algorithms considered on CEC2020.

6. Discussion

The results on the CEC2020 benchmark sets are discussed first. As shown in Table 7, Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16, Table 17 and Table 18, SHADE scored two improvements against two instances of worsening (5D), four improvements and two instances of worsening (10D), five improvements and two instances of worsening (15D), and four improvements with no worsening (20D); L-SHADE scored three improvements against no worsening (5D), four improvements and one worsening (10D), six improvements and no worsening (15D), and five improvements and no worsening (20D); and jSO scored one improvement against no worsening (5D), four improvements and one worsening (10D), four improvements and one worsening (15D), and two improvements against two instances of worsening (20D). In some test functions, the improved algorithm even escaped the local optimum and found the optimal value (if the error was smaller than 10−8, the relevant value was considered optimal); examples are f3 in Table 10, Table 13 and Table 14, f8 in Table 9, Table 13, Table 16 and Table 17, f9 in Table 12, and f10 in Table 11. In most cases, the improved version was clearly better than the original algorithm, except for Tb-SHADE (5D) and Tb-jSO (20D).
According to the convergence curves in Figures 1–12, the improved algorithms usually converged similarly to the originals in the early stage of the optimization process, but clearly maintained a longer exploration phase and achieved better objective-function values in the middle and late stages. In a few cases (such as f4 in Figure 5), the improved algorithm converged more slowly without ultimately reaching a better objective-function value than the original.
As the numerical analyses in Tables 23–34 show, the improved algorithms in most cases exhibited fewer clustered runs (#runs), later clustering (mean CO), and higher population diversity (mean PD) than the original algorithms. However, Tb-SHADE (5D) had lower population diversity on f6–f9, as did TbL-SHADE (all dimensions) on f2–f7, which might be related to the linear decrease in population size; Tb-jSO showed similar numbers of clustered runs in all dimensions and lower population diversity on some 5D test functions. Overall, in most cases the improved versions maintained the diversity of the population and a longer exploration phase during the optimization process.
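The population diversity (PD) values discussed above can be computed along the following lines. This is one common definition (mean distance of the individuals from the population centroid, cf. Poláková et al. [51]); the paper's exact PD formula is given in its methods section, so treat this as a sketch:

```python
import numpy as np

def population_diversity(pop):
    """Mean Euclidean distance of individuals from the population centroid,
    a common diversity measure for DE populations. 'pop' has shape (NP, D)."""
    pop = np.asarray(pop, dtype=float)
    centroid = pop.mean(axis=0)                     # component-wise mean
    return float(np.linalg.norm(pop - centroid, axis=1).mean())
```

A fully converged population has diversity 0; a population still exploring keeps a clearly positive value, which is what the improved versions preserve for longer.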
The significant improvements in Tables 7–18 can be linked to the clustering analysis in Tables 23–34. The entries marked "+" in the former were always associated with later clustering, no clustering at all, or fewer of the 30 runs ending in a cluster (for the last case see, for example, the #runs column for f3 in Tables 24–26). Consequently, the performance improvement of the updated versions was related to the maintenance of population diversity and a longer exploration phase.
According to the Friedman rankings in Tables 35–38, Tb-SHADE, TbL-SHADE, and Tb-jSO were clearly better than the original algorithms and the advanced DISH and jDE100 in 10D, 15D, and 20D. However, Tb-SHADE performed worse than SHADE in 5D and worse than DISH in 5D, 10D, and 20D. The j2020 algorithm delivered the best performance, ranking first in 10D, 15D, and 20D, while one of the improved versions, TbL-SHADE, ranked first in 5D. jDE100 (the winner of CEC2019), which ranked last in Tables 35–38, did not seem suited to CEC2020. Table 39 shows that the null hypothesis was rejected in all dimensions, so the differences underlying the Friedman rankings are statistically significant. All in all, the three improved algorithms obtained good optimization results compared with the original algorithms as well as the advanced DISH and jDE100 algorithms, but were slightly worse than the advanced j2020 algorithm.

7. Conclusions

In this paper, a relatively simple and direct method using turning-based mutation was proposed and tested on the Single Objective Bound Constrained Numerical Optimization (CEC2020) benchmark sets in 5, 10, 15, and 20 dimensions with the SHADE, L-SHADE, and jSO algorithms. The basic idea of the proposed method is to change the direction of mutation under certain conditions so as to maintain population diversity and a longer exploration phase; the algorithm can thus avoid premature convergence, escape local optima, and obtain better optimization results. The experiments showed that the method is effective on the CEC2020 benchmark sets in 10, 15, and 20 dimensions. A strength of the proposed method is that it can easily be applied to variants of SHADE; a disadvantage is that it increases the time complexity, and its effectiveness lacks theoretical proof. Our future research in this area will focus on further experiments and on applying the proposed method to more algorithms; for example, the method may be useful for practical problems featuring constraints.

Author Contributions

Conceptualization, H.K.; methodology, H.K.; project administration, L.J. and Y.S.; software, X.S.; validation, X.S. and Q.C.; visualization, L.J. and Q.C.; formal analysis, H.K.; investigation, Q.C.; resources, Y.S.; data curation, L.J.; writing—original draft preparation, L.J.; writing—review and editing, H.K. and X.S.; supervision, X.S.; funding acquisition, Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant numbers 61663046 and 61876166) and by the Open Foundation of the Key Laboratory of Software Engineering of Yunnan Province (grant number 2015SE204).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
2. Eltaeib, T.; Mahmood, A. Differential evolution: A survey and analysis. Appl. Sci. 2018, 8, 1945.
3. Arafa, M.; Sallam, E.A.; Fahmy, M. An enhanced differential evolution optimization algorithm. In Proceedings of the 2014 Fourth International Conference on Digital Information and Communication Technology and its Applications (DICTAP), Bangkok, Thailand, 6–8 May 2014; pp. 216–225.
4. Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving CEC2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastian, Spain, 5–8 June 2017; pp. 372–379.
5. Bujok, P.; Tvrdík, J. Adaptive differential evolution: SHADE with competing crossover strategies. In Proceedings of the International Conference on Artificial Intelligence and Soft Computing, Zakopane, Poland, 14–18 June 2015; pp. 329–339.
6. Bujok, P.; Tvrdík, J.; Poláková, R. Evaluating the performance of SHADE with competing strategies on CEC 2014 single-parameter test suite. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 5002–5009.
7. Liu, X.-F.; Zhan, Z.-H.; Zhang, J. Dichotomy guided based parameter adaptation for differential evolution. In Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, Madrid, Spain, 11–15 July 2015; pp. 289–296.
8. Liu, Z.-G.; Ji, X.-H.; Yang, Y. Hierarchical differential evolution algorithm combined with multi-cross operation. Expert Syst. Appl. 2019, 130, 276–292.
9. Mohamed, A.W.; Hadi, A.A.; Fattouh, A.M.; Jambi, K.M. LSHADE with semi-parameter adaptation hybrid with CMA-ES for solving CEC 2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastian, Spain, 5–8 June 2017; pp. 145–152.
10. Poláková, R.; Tvrdík, J.; Bujok, P. L-SHADE with competing strategies applied to CEC2015 learning-based test suite. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 4790–4796.
11. Poláková, R.; Tvrdík, J.; Bujok, P. Evaluating the performance of L-SHADE with competing strategies on CEC2014 single parameter-operator test suite. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 1181–1187.
12. Sallam, K.M.; Sarker, R.A.; Essam, D.L.; Elsayed, S.M. Neurodynamic differential evolution algorithm and solving CEC2015 competition problems. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 1033–1040.
13. Viktorin, A.; Pluhacek, M.; Senkerik, R. Network based linear population size reduction in SHADE. In Proceedings of the 2016 International Conference on Intelligent Networking and Collaborative Systems (INCoS), Ostrava, Czech Republic, 7–9 September 2016; pp. 86–93.
14. Viktorin, A.; Pluhacek, M.; Senkerik, R. Success-history based adaptive differential evolution algorithm with multi-chaotic framework for parent selection performance on CEC2014 benchmark set. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 4797–4803.
15. Viktorin, A.; Senkerik, R.; Pluhacek, M.; Kadavy, T. Distance vs. Improvement Based Parameter Adaptation in SHADE. In Artificial Intelligence and Algorithms in Intelligent Systems; Springer: Berlin/Heidelberg, Germany, 2019; pp. 455–464.
16. Viktorin, A.; Senkerik, R.; Pluhacek, M.; Kadavy, T.; Zamuda, A. Distance based parameter adaptation for differential evolution. In Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, USA, 27 November–1 December 2017; pp. 1–7.
17. Zhao, F.; He, X.; Yang, G.; Ma, W.; Zhang, C.; Song, H. A hybrid iterated local search algorithm with adaptive perturbation mechanism by success-history based parameter adaptation for differential evolution (SHADE). J. Eng. Optim. 2020, 52, 367–383.
18. Al-Dabbagh, R.D.; Neri, F.; Idris, N.; Baba, M.S. Algorithmic design issues in adaptive differential evolution schemes: Review and taxonomy. Swarm Evol. Comput. 2018, 43, 284–311.
19. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for differential evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 71–78.
20. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958.
21. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1658–1665.
22. Guo, S.-M.; Tsai, J.S.-H.; Yang, C.-C.; Hsu, P.-H. A self-optimization approach for L-SHADE incorporated with eigenvector-based crossover and successful-parent-selecting framework on CEC 2015 benchmark set. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 1003–1010.
23. Awad, N.H.; Ali, M.Z.; Suganthan, P.N.; Reynolds, R.G. An ensemble sinusoidal parameter adaptation incorporated with L-SHADE for solving CEC2014 benchmark problems. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 2958–2965.
24. Brest, J.; Maučec, M.S.; Bošković, B. Single objective real-parameter optimization: Algorithm jSO. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastian, Spain, 5–8 June 2017; pp. 1311–1318.
25. Piotrowski, A.P.; Napiorkowski, J.J. Step-by-step improvement of JADE and SHADE-based algorithms: Success or failure? Swarm Evol. Comput. 2018, 43, 88–108.
26. Piotrowski, A.P. L-SHADE optimization algorithms with population-wide inertia. Inf. Sci. 2018, 468, 117–141.
27. Stanovov, V.; Akhmedova, S.; Semenkin, E. LSHADE algorithm with rank-based selective pressure strategy for solving CEC 2017 benchmark problems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8.
28. Brest, J.; Maučec, M.S.; Bošković, B. The 100-Digit Challenge: Algorithm jDE100. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 19–26.
29. Brest, J.; Maučec, M.S.; Bošković, B. Differential Evolution Algorithm for Single Objective Bound-Constrained Optimization: Algorithm j2020. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
30. Suganthan, P.N. Particle swarm optimiser with neighbourhood operator. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; pp. 1958–1962.
31. Das, S.; Maity, S.; Qu, B.-Y.; Suganthan, P.N. Real-parameter evolutionary multimodal optimization—A survey of the state-of-the-art. Swarm Evol. Comput. 2011, 1, 71–88.
32. Mezura-Montes, E.; Coello, C.A.C. Constraint-handling in nature-inspired numerical optimization: Past, present and future. Swarm Evol. Comput. 2011, 1, 173–194.
33. Neri, F.; Cotta, C. Memetic algorithms and memetic computing optimization: A literature review. Swarm Evol. Comput. 2012, 2, 1–14.
34. Neri, F.; Tirronen, V. Recent advances in differential evolution: A survey and experimental analysis. Artif. Intell. Rev. 2010, 33, 61–106.
35. Zamuda, A.; Brest, J. Self-adaptive control parameters’ randomization frequency and propagations in differential evolution. Swarm Evol. Comput. 2015, 25, 72–99.
36. Zhou, A.; Qu, B.-Y.; Li, H.; Zhao, S.-Z.; Suganthan, P.N.; Zhang, Q. Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm Evol. Comput. 2011, 1, 32–49.
37. Piotrowski, A.P. Review of differential evolution population size. Swarm Evol. Comput. 2017, 32, 1–24.
38. Opara, K.R.; Arabas, J. Differential Evolution: A survey of theoretical analyses. Swarm Evol. Comput. 2019, 44, 546–558.
39. Poikolainen, I.; Neri, F.; Caraffini, F. Cluster-based population initialization for differential evolution frameworks. Inf. Sci. 2015, 297, 216–235.
40. Weber, M.; Neri, F.; Tirronen, V. A study on scale factor/crossover interaction in distributed differential evolution. Artif. Intell. Rev. 2013, 39, 195–224.
41. Zaharie, D. Influence of crossover on the behavior of differential evolution algorithms. Appl. Soft Comput. 2009, 9, 1126–1138.
42. Yue, C.; Price, K.; Suganthan, P.; Liang, J.; Ali, M.; Qu, B.; Awad, N.; Biswas, P. Problem definitions and evaluation criteria for the CEC 2020 special session and competition on single objective bound constrained numerical optimization. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020.
43. Piotrowski, A.P.; Napiorkowski, J.J. Some metaheuristics should be simplified. Inf. Sci. 2018, 427, 32–62.
44. Viktorin, A.; Senkerik, R.; Pluhacek, M.; Kadavy, T.; Zamuda, A. Distance based parameter adaptation for success-history based differential evolution. Swarm Evol. Comput. 2019, 50, 100462.
45. Caraffini, F.; Kononova, A.V.; Corne, D. Infeasibility and structural bias in differential evolution. Inf. Sci. 2019, 496, 161–179.
46. Awad, N.; Ali, M.; Liang, J.; Qu, B.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastian, Spain, 5–8 June 2017.
47. Brest, J.; Maučec, M.S.; Bošković, B. iL-SHADE: Improved L-SHADE algorithm for single objective real-parameter optimization. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 1188–1195.
48. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M. Opposition-based differential evolution. IEEE Trans. Evol. Comput. 2008, 12, 64–79.
49. Ester, M.; Kriegel, H.-P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD-96 Proceedings; AAAI: Menlo Park, CA, USA, 1996; pp. 226–231.
50. Deza, M.M.; Deza, E. Encyclopedia of distances. In Encyclopedia of Distances; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1–583.
51. Poláková, R.; Tvrdík, J.; Bujok, P.; Matoušek, R. Population-size adaptation through diversity-control mechanism for differential evolution. In Proceedings of the MENDEL, 22nd International Conference on Soft Computing, Brno, Czech Republic, 8–10 June 2016; pp. 49–56.
52. Demšar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30.
Figure 1. The selected average convergence of SHADE and Tb-SHADE on CEC2020 in 5D is compared. From left to right f3, f9 and f10.
Figure 2. The selected average convergence of SHADE and Tb-SHADE is compared on CEC2020 in 10D. From left to right f8, f9 and f10.
Figure 3. The selected average convergence of SHADE and Tb-SHADE is compared on CEC2020 in 15D. From left to right f8, f9 and f10.
Figure 4. The selected average convergence of SHADE and Tb-SHADE is compared on CEC2020 in 20D. From left to right f3, f5 and f9.
Figure 5. The selected average convergence of L-SHADE and TbL-SHADE is compared on CEC2020 in 5D. From left to right f3, f4 and f10.
Figure 6. The selected average convergence of L-SHADE and TbL-SHADE is compared on CEC2020 in 10D. From left to right f8, f9 and f10.
Figure 7. The selected average convergence of L-SHADE and TbL-SHADE is compared on CEC2020 in 15D. From left to right f8, f9 and f10.
Figure 8. The selected average convergence of L-SHADE and TbL-SHADE is compared on CEC2020 in 20D. From left to right f3, f5 and f9.
Figure 9. The selected average convergence of jSO and Tb-jSO is compared on CEC2020 in 5D. From left to right f3, f4 and f10.
Figure 10. The selected average convergence of jSO and Tb-jSO is compared on CEC2020 in 10D. From left to right f3, f8 and f9.
Figure 11. The selected average convergence of jSO and Tb-jSO is compared on CEC2020 in 15D. From left to right f3, f8 and f9.
Figure 12. The selected average convergence of jSO and Tb-jSO is compared on CEC2020 in 20D. From left to right f3, f4 and f10.
Table 1. Time complexity specified by the CEC2020 technical document: SHADE vs. Tb-SHADE.

| D | T0 | T1 | SHADE T2 | SHADE (T2 − T1)/T0 | Tb-SHADE T2 | Tb-SHADE (T2 − T1)/T0 |
|---|---|---|---|---|---|---|
| 5 | 6.03E+01 | 2.52E+02 | 4.81E+03 | 7.56E+01 | 5.58E+03 | 8.84E+01 |
| 10 | 6.03E+01 | 3.05E+02 | 5.55E+03 | 8.70E+01 | 6.35E+03 | 1.00E+02 |
| 15 | 6.03E+01 | 3.39E+02 | 5.68E+03 | 8.86E+01 | 6.76E+03 | 1.06E+02 |
| 20 | 6.03E+01 | 4.09E+02 | 6.03E+03 | 9.32E+01 | 7.14E+03 | 1.12E+02 |
Table 2. Time complexity specified by the CEC2020 technical document: L-SHADE vs. TbL-SHADE.

| D | T0 | T1 | L-SHADE T2 | L-SHADE (T2 − T1)/T0 | TbL-SHADE T2 | TbL-SHADE (T2 − T1)/T0 |
|---|---|---|---|---|---|---|
| 5 | 6.03E+01 | 2.52E+02 | 4.44E+03 | 6.95E+01 | 4.97E+03 | 7.82E+01 |
| 10 | 6.03E+01 | 3.05E+02 | 4.77E+03 | 7.40E+01 | 6.71E+03 | 1.06E+02 |
| 15 | 6.03E+01 | 3.39E+02 | 4.98E+03 | 7.70E+01 | 6.97E+03 | 1.10E+02 |
| 20 | 6.03E+01 | 4.09E+02 | 5.24E+03 | 8.01E+01 | 7.30E+03 | 1.14E+02 |
Table 3. Time complexity specified by the CEC2020 technical document: jSO vs. Tb-jSO.

| D | T0 | T1 | jSO T2 | jSO (T2 − T1)/T0 | Tb-jSO T2 | Tb-jSO (T2 − T1)/T0 |
|---|---|---|---|---|---|---|
| 5 | 6.03E+01 | 2.52E+02 | 4.30E+03 | 6.71E+01 | 5.24E+03 | 8.27E+01 |
| 10 | 6.03E+01 | 3.05E+02 | 5.28E+03 | 8.25E+01 | 6.51E+03 | 1.03E+02 |
| 15 | 6.03E+01 | 3.39E+02 | 6.50E+03 | 1.02E+02 | 7.12E+03 | 1.12E+02 |
| 20 | 6.03E+01 | 4.09E+02 | 7.27E+03 | 1.14E+02 | 7.81E+03 | 1.23E+02 |
Table 4. Termination criteria.

| D | MaxFES | Min Error Value |
|---|---|---|
| 5 | 50,000 | 10^−8 |
| 10 | 1,000,000 | 10^−8 |
| 15 | 3,000,000 | 10^−8 |
| 20 | 10,000,000 | 10^−8 |
Table 5. Parameter setting of some algorithms.

| Algorithm | NP | H | \|A\| | NPinit | NPf | MaxG | MCRinit | MFinit |
|---|---|---|---|---|---|---|---|---|
| SHADE | 100 | NP | NP | – | – | MaxFES/NP | 0.5 | 0.5 |
| Tb-SHADE | 100 | NP | NP | – | – | MaxFES/NP | 0.5 | 0.5 |
| L-SHADE | Calculated by (18) | 100 | NP | 100 | 4 | Not fixed | 0.5 | 0.5 |
| TbL-SHADE | Calculated by (18) | 100 | NP | 100 | 4 | Not fixed | 0.5 | 0.5 |
| jSO | Calculated by (18) | 5 | NP | 25logD | 4 | Not fixed | 0.8 (MCR, H = 0.9) | 0.3 (MF, H = 0.9) |
| Tb-jSO | Calculated by (18) | 5 | NP | 25logD | 4 | Not fixed | 0.8 (MCR, H = 0.9) | 0.3 (MF, H = 0.9) |
| DISH | Calculated by (18) | 5 | NP | 25logD | 4 | Not fixed | 0.8 (MCR, H = 0.9) | 0.3 (MF, H = 0.9) |
Table 6. Parameter setting of jDE100.

| Parameter | Value | Description |
|---|---|---|
| Fl | 5.0/bNP | lower limit of scale factor for the big population |
| Fl | 1.0/bNP | lower limit of scale factor for the small population |
| Fu | 1.1 | upper limit of scale factor |
| CRl | 0.0 | lower limit of crossover parameter |
| CRu | 1.1 | upper limit of crossover parameter |
| Finit | 0.5 | initial value of scale factor |
| CRinit | 0.5 | initial value of crossover parameter |
| τ1 | 0.1 | probability to self-adapt scale factor |
| τ2 | 0.1 | probability to self-adapt crossover parameter |
| bNP | 1000 | size of big population |
| sNP | 25 | size of small population |
| ageLmt | 1 × 10^9 | number of FEs after which a population restart needs to occur |
| eps | 1 × 10^−16 | small value used to check whether two values are similar |
| myEqs | 25 | reinitialization if myEqs% of individuals in the corresponding population have similar function values |
| MaxG | Not fixed | the maximum number of generations |
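The τ1/τ2 entries in Table 6 refer to the classic jDE self-adaptation rule (Brest et al.) that jDE100 inherits: with probability τ1 an individual's F is redrawn, otherwise inherited, and analogously for CR with τ2. A sketch using the limits from Table 6 (the original jDE paper writes the redraw as Fl + rand·Fu; the interval form below follows Table 6's "lower/upper limit" wording):

```python
import numpy as np

def jde_update(F, CR, Fl, Fu=1.1, CRl=0.0, CRu=1.1,
               tau1=0.1, tau2=0.1, rng=None):
    """jDE-style self-adaptation of the control parameters of one
    individual: redraw F with probability tau1 (else inherit), and
    redraw CR with probability tau2 (else inherit)."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < tau1:
        F = Fl + rng.random() * (Fu - Fl)      # new F in [Fl, Fu)
    if rng.random() < tau2:
        CR = CRl + rng.random() * (CRu - CRl)  # new CR in [CRl, CRu)
    return F, CR
```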
Table 7. SHADE vs. Tb-SHADE on CEC2020 in 5D.

| f | SHADE Best | SHADE Mean | SHADE Std | Tb-SHADE Best | Tb-SHADE Mean | Tb-SHADE Std | Result |
|---|---|---|---|---|---|---|---|
| 1 | 1.70E−09 | 6.44E−09 | 2.40E−09 | 8.41E−01 | 5.37E+00 | 3.83E+00 | − |
| 2 | 3.78E−01 | 1.73E+00 | 1.80E+00 | 3.43E+00 | 1.30E+01 | 6.19E+00 | − |
| 3 | 1.32E+00 | 5.07E+00 | 1.07E+00 | 1.42E+00 | 6.73E+00 | 2.16E+00 | = |
| 4 | 1.23E−02 | 9.32E−02 | 4.43E−02 | 4.30E−02 | 1.71E−01 | 8.96E−02 | = |
| 5 | 1.65E−09 | 6.89E−09 | 2.39E−09 | 3.71E−02 | 1.05E+00 | 6.62E−01 | = |
| 6 | 5.41E−10 | 6.48E−09 | 2.54E−09 | 2.49E−02 | 5.89E−02 | 1.81E−02 | = |
| 7 | 2.83E−10 | 6.62E−09 | 3.01E−09 | 1.32E−05 | 2.12E−04 | 6.34E−04 | = |
| 8 | 4.29E−09 | 3.34E+00 | 1.83E+01 | 4.09E−01 | 2.23E+00 | 1.61E+00 | = |
| 9 | 1.00E+02 | 1.07E+02 | 3.65E+01 | 2.07E+01 | 7.80E+01 | 2.78E+01 | + |
| 10 | 3.00E+02 | 3.46E+02 | 8.65E+00 | 5.62E+01 | 2.76E+02 | 7.49E+01 | + |

2 +, 2 −
Table 8. SHADE vs. Tb-SHADE on CEC2020 in 10D.

| f | SHADE Best | SHADE Mean | SHADE Std | Tb-SHADE Best | Tb-SHADE Mean | Tb-SHADE Std | Result |
|---|---|---|---|---|---|---|---|
| 1 | 2.12E−09 | 7.52E−09 | 2.16E−09 | 1.50E−09 | 7.38E−09 | 2.13E−09 | = |
| 2 | 1.91E−01 | 5.14E+00 | 4.17E+00 | 3.67E+00 | 2.26E+01 | 1.22E+01 | − |
| 3 | 1.05E+01 | 1.18E+01 | 7.29E−01 | 3.71E+00 | 1.03E+01 | 3.49E+00 | + |
| 4 | 9.90E−02 | 1.63E−01 | 2.75E−02 | 3.67E−05 | 2.54E−02 | 1.47E−02 | = |
| 5 | 8.82E−09 | 3.54E−01 | 1.91E−01 | 9.23E−07 | 5.48E+00 | 5.13E+00 | − |
| 6 | 7.17E−02 | 1.95E−01 | 1.01E−01 | 1.68E−01 | 5.28E−01 | 2.05E−01 | = |
| 7 | 4.09E−06 | 1.39E−01 | 1.76E−01 | 3.97E−02 | 1.83E−01 | 1.19E−01 | = |
| 8 | 1.00E+02 | 1.00E+02 | 6.24E−07 | 2.15E−04 | 1.86E+01 | 1.76E+01 | + |
| 9 | 1.00E+02 | 2.92E+02 | 8.72E+01 | 1.00E+02 | 1.00E+02 | 4.45E−02 | + |
| 10 | 3.98E+02 | 4.16E+02 | 2.29E+01 | 1.00E+02 | 1.21E+02 | 6.61E+01 | + |

4 +, 2 −
Table 9. SHADE vs. Tb-SHADE on CEC2020 in 15D.

| f | SHADE Best | SHADE Mean | SHADE Std | Tb-SHADE Best | Tb-SHADE Mean | Tb-SHADE Std | Result |
|---|---|---|---|---|---|---|---|
| 1 | 6.48E−09 | 8.35E−09 | 9.82E−10 | 2.06E−09 | 7.43E−09 | 2.01E−09 | = |
| 2 | 1.68E−01 | 7.53E+00 | 2.16E+01 | 2.46E+00 | 3.08E+01 | 2.79E+01 | − |
| 3 | 1.56E+01 | 1.57E+01 | 2.05E−01 | 3.64E+00 | 8.46E+00 | 2.91E+00 | + |
| 4 | 1.78E−01 | 2.74E−01 | 3.69E−02 | 4.06E−02 | 7.41E−02 | 3.22E−02 | = |
| 5 | 1.31E+00 | 5.07E+01 | 5.74E+01 | 2.41E+01 | 4.89E+01 | 1.76E+01 | − |
| 6 | 7.43E−02 | 3.78E−01 | 2.22E−01 | 3.26E−01 | 2.61E+00 | 2.98E+00 | = |
| 7 | 4.18E−01 | 2.06E+01 | 4.50E+01 | 3.02E−01 | 3.07E+00 | 2.01E+00 | + |
| 8 | 1.00E+02 | 1.00E+02 | 0.00E+00 | 8.74E−09 | 5.20E+01 | 4.49E+01 | + |
| 9 | 3.38E+02 | 3.87E+02 | 9.71E+00 | 1.00E+02 | 1.46E+02 | 1.01E+02 | + |
| 10 | 4.00E+02 | 4.00E+02 | 0.00E+00 | 1.00E+02 | 2.07E+02 | 7.85E+01 | + |

5 +, 2 −
Table 10. SHADE vs. Tb-SHADE on CEC2020 in 20D.

| f | SHADE Best | SHADE Mean | SHADE Std | Tb-SHADE Best | Tb-SHADE Mean | Tb-SHADE Std | Result |
|---|---|---|---|---|---|---|---|
| 1 | 5.87E−09 | 8.38E−09 | 9.25E−10 | 2.21E−09 | 7.94E−09 | 1.76E−09 | = |
| 2 | 8.52E−09 | 2.49E−01 | 5.14E−01 | 9.38E−02 | 1.97E+00 | 1.38E+00 | = |
| 3 | 2.04E+01 | 2.06E+01 | 3.27E−01 | 6.66E−09 | 1.46E+01 | 8.57E+00 | + |
| 4 | 2.27E−01 | 3.78E−01 | 4.93E−02 | 3.50E−01 | 4.09E−01 | 3.48E−02 | = |
| 5 | 2.06E+01 | 2.04E+02 | 8.51E+01 | 7.38E+00 | 1.56E+02 | 1.11E+02 | + |
| 6 | 9.26E−02 | 2.04E−01 | 6.62E−02 | 2.62E−01 | 3.84E−01 | 6.17E−02 | = |
| 7 | 3.55E−01 | 4.53E+01 | 5.54E+01 | 3.06E+00 | 2.49E+01 | 2.23E+01 | + |
| 8 | 1.00E+02 | 1.00E+02 | 2.23E−13 | 1.00E+02 | 1.00E+02 | 1.89E−13 | = |
| 9 | 4.01E+02 | 4.05E+02 | 2.00E+00 | 1.00E+02 | 3.99E+02 | 6.89E+01 | + |
| 10 | 4.14E+02 | 4.14E+02 | 9.83E−03 | 4.10E+02 | 4.13E+02 | 9.93E−01 | = |

4 +, 0 −
Table 11. L-SHADE vs. TbL-SHADE on CEC2020 in 5D.

| f | L-SHADE Best | L-SHADE Mean | L-SHADE Std | TbL-SHADE Best | TbL-SHADE Mean | TbL-SHADE Std | Result |
|---|---|---|---|---|---|---|---|
| 1 | 3.32E−09 | 7.16E−09 | 1.95E−09 | 1.86E−09 | 5.06E−05 | 2.77E−04 | = |
| 2 | 6.54E−10 | 9.34E−02 | 1.14E−01 | 1.30E−05 | 4.24E−01 | 1.22E+00 | = |
| 3 | 5.15E+00 | 5.17E+00 | 6.51E−02 | 6.14E−01 | 2.96E+00 | 1.73E+00 | + |
| 4 | 9.92E−03 | 6.52E−02 | 3.08E−02 | 7.35E−07 | 1.58E−02 | 1.81E−02 | = |
| 5 | 2.24E−09 | 6.88E−09 | 2.02E−09 | 2.19E−09 | 3.60E−05 | 1.97E−04 | = |
| 6 | 8.44E−10 | 5.15E−09 | 2.84E−09 | 6.25E−11 | 6.00E−09 | 5.02E−09 | = |
| 7 | 1.51E−09 | 6.13E−09 | 2.69E−09 | 7.16E−10 | 4.76E−09 | 2.96E−09 | = |
| 8 | 5.19E−09 | 7.95E−09 | 1.44E−09 | 6.25E−09 | 2.77E−04 | 1.04E−03 | = |
| 9 | 7.81E−09 | 9.67E+01 | 1.83E+01 | 4.50E−09 | 6.61E+01 | 4.36E+01 | + |
| 10 | 3.00E+02 | 3.44E+02 | 1.20E+01 | 8.19E−09 | 2.71E+02 | 8.34E+01 | + |

3 +, 0 −
Table 12. L-SHADE vs. TbL-SHADE on CEC2020 in 10D.

| f | L-SHADE Best | L-SHADE Mean | L-SHADE Std | TbL-SHADE Best | TbL-SHADE Mean | TbL-SHADE Std | Result |
|---|---|---|---|---|---|---|---|
| 1 | 5.88E−09 | 8.29E−09 | 1.22E−09 | 1.70E−09 | 6.65E−09 | 2.33E−09 | = |
| 2 | 3.44E−04 | 1.13E+00 | 1.53E+00 | 6.39E−02 | 7.09E+00 | 8.75E+00 | − |
| 3 | 1.04E+01 | 1.07E+01 | 2.92E−01 | 4.00E+00 | 9.40E+00 | 3.00E+00 | + |
| 4 | 9.87E−02 | 1.52E−01 | 2.23E−02 | 9.53E−09 | 1.90E−02 | 1.10E−02 | = |
| 5 | 3.83E−09 | 4.32E+00 | 2.20E+01 | 5.39E−02 | 1.60E+00 | 1.25E+00 | = |
| 6 | 2.33E−02 | 9.59E−02 | 5.58E−02 | 7.39E−02 | 3.75E−01 | 1.38E−01 | = |
| 7 | 1.06E−07 | 1.39E−01 | 2.02E−01 | 1.26E−06 | 2.14E−03 | 2.71E−03 | = |
| 8 | 7.59E−09 | 9.67E+01 | 1.83E+01 | 9.62E−09 | 3.20E+01 | 2.15E+01 | + |
| 9 | 1.00E+02 | 2.91E+02 | 8.70E+01 | 5.77E−09 | 1.02E+02 | 5.05E+01 | + |
| 10 | 3.98E+02 | 4.16E+02 | 2.29E+01 | 1.00E+02 | 1.40E+02 | 1.03E+02 | + |

4 +, 1 −
Table 13. L-SHADE vs. TbL-SHADE on CEC2020 in 15D.

| f | L-SHADE Best | L-SHADE Mean | L-SHADE Std | TbL-SHADE Best | TbL-SHADE Mean | TbL-SHADE Std | Result |
|---|---|---|---|---|---|---|---|
| 1 | 3.53E−09 | 7.72E−09 | 1.70E−09 | 1.35E−09 | 7.01E−09 | 2.54E−09 | = |
| 2 | 3.64E−12 | 3.73E−01 | 7.99E−01 | 8.33E−02 | 1.08E+00 | 1.74E+00 | = |
| 3 | 1.56E+01 | 1.56E+01 | 1.41E−01 | 4.45E−09 | 2.55E+00 | 1.55E+00 | + |
| 4 | 2.07E−01 | 2.64E−01 | 3.91E−02 | 9.87E−03 | 4.13E−02 | 1.23E−02 | = |
| 5 | 2.46E+00 | 5.21E+01 | 6.26E+01 | 7.51E+00 | 3.40E+01 | 1.58E+01 | + |
| 6 | 2.43E−03 | 4.23E−01 | 1.52E+00 | 1.13E−01 | 1.82E+00 | 3.20E+00 | = |
| 7 | 6.63E−02 | 1.65E+01 | 4.13E+01 | 4.35E−01 | 9.54E−01 | 3.53E−01 | + |
| 8 | 1.00E+02 | 1.00E+02 | 0.00E+00 | 7.71E−09 | 6.02E+01 | 4.20E+01 | + |
| 9 | 3.00E+02 | 3.80E+02 | 2.36E+01 | 1.00E+02 | 1.80E+02 | 1.27E+02 | + |
| 10 | 4.00E+02 | 4.00E+02 | 0.00E+00 | 1.00E+02 | 2.20E+02 | 1.27E+02 | + |

6 +, 0 −
Table 14. L-SHADE vs. TbL-SHADE on CEC2020 in 20D.

| f | L-SHADE Best | L-SHADE Mean | L-SHADE Std | TbL-SHADE Best | TbL-SHADE Mean | TbL-SHADE Std | Result |
|---|---|---|---|---|---|---|---|
| 1 | 4.35E−09 | 8.54E−09 | 1.33E−09 | 1.50E−09 | 7.72E−09 | 2.26E−09 | = |
| 2 | 9.46E−11 | 3.44E−02 | 3.88E−02 | 3.12E−02 | 5.21E−01 | 7.45E−01 | = |
| 3 | 2.04E+01 | 2.06E+01 | 3.93E−01 | 1.74E−09 | 2.28E+00 | 1.58E+00 | + |
| 4 | 2.57E−01 | 3.66E−01 | 3.95E−02 | 2.97E−02 | 7.35E−02 | 2.85E−02 | = |
| 5 | 3.02E+01 | 2.46E+02 | 1.02E+02 | 1.13E+01 | 2.42E+02 | 1.39E+02 | + |
| 6 | 8.46E−02 | 1.85E−01 | 7.59E−02 | 1.97E−01 | 3.13E−01 | 5.12E−02 | = |
| 7 | 2.23E−01 | 4.34E+01 | 5.64E+01 | 1.58E+00 | 4.15E+01 | 3.84E+01 | = |
| 8 | 1.00E+02 | 1.00E+02 | 1.89E−13 | 5.14E+01 | 9.40E+01 | 1.45E+01 | + |
| 9 | 4.00E+02 | 4.04E+02 | 2.37E+00 | 1.00E+02 | 4.12E+02 | 9.59E+01 | + |
| 10 | 4.14E+02 | 4.14E+02 | 1.08E−02 | 3.99E+02 | 4.03E+02 | 4.09E+00 | + |

5 +, 0 −
Table 15. jSO vs. Tb-jSO on CEC2020 in 5D.

| f | jSO Best | jSO Mean | jSO Std | Tb-jSO Best | Tb-jSO Mean | Tb-jSO Std | Result |
|---|---|---|---|---|---|---|---|
| 1 | 1.34E−09 | 6.70E−09 | 2.16E−09 | 2.24E−09 | 7.69E−09 | 2.26E−09 | = |
| 2 | 9.71E−09 | 4.11E−01 | 1.24E+00 | 8.54E−09 | 2.01E+00 | 3.43E+00 | = |
| 3 | 6.13E−01 | 4.92E+00 | 1.22E+00 | 2.91E−08 | 2.34E+00 | 1.75E+00 | = |
| 4 | 8.28E−09 | 6.28E−02 | 3.35E−02 | 1.92E−09 | 3.03E−02 | 3.47E−02 | = |
| 5 | 3.14E−09 | 2.08E−02 | 1.14E−01 | 3.85E−09 | 7.48E−02 | 2.35E−01 | = |
| 6 | 1.00E−09 | 5.84E−09 | 2.36E−09 | 4.08E−10 | 6.74E−09 | 2.84E−09 | = |
| 7 | 1.57E−10 | 5.51E−09 | 2.94E−09 | 3.82E−11 | 4.59E−09 | 3.30E−09 | = |
| 8 | 4.22E−09 | 7.95E−09 | 1.63E−09 | 5.87E−09 | 8.58E−09 | 1.08E−09 | = |
| 9 | 5.55E−09 | 9.67E+01 | 1.83E+01 | 6.29E−09 | 9.33E+01 | 2.54E+01 | = |
| 10 | 3.00E+02 | 3.46E+02 | 8.65E+00 | 3.00E+02 | 3.08E+02 | 1.80E+01 | + |

1 +, 0 −
Table 16. jSO vs. Tb-jSO on CEC2020 in 10D.

| f | jSO Best | jSO Mean | jSO Std | Tb-jSO Best | Tb-jSO Mean | Tb-jSO Std | Result |
|---|---|---|---|---|---|---|---|
| 1 | 4.05E−09 | 7.91E−09 | 1.39E−09 | 4.56E−09 | 8.54E−09 | 1.28E−09 | = |
| 2 | 3.12E−01 | 6.99E+00 | 4.53E+00 | 3.54E+00 | 2.03E+01 | 2.14E+01 | − |
| 3 | 1.04E+01 | 1.19E+01 | 6.21E−01 | 2.62E+00 | 8.28E+00 | 2.85E+00 | + |
| 4 | 9.86E−02 | 1.58E−01 | 3.20E−02 | 1.97E−02 | 5.51E−02 | 3.94E−02 | = |
| 5 | 6.31E−09 | 2.61E−01 | 2.94E−01 | 9.79E−09 | 1.50E+00 | 9.92E−01 | = |
| 6 | 1.95E−02 | 1.05E−01 | 8.44E−02 | 2.93E−02 | 3.63E−01 | 1.75E−01 | = |
| 7 | 6.94E−07 | 7.13E−02 | 1.60E−01 | 1.23E−06 | 4.87E−02 | 1.01E−01 | = |
| 8 | 1.00E+02 | 1.00E+02 | 0.00E+00 | 6.98E−09 | 1.45E+00 | 4.49E+00 | + |
| 9 | 1.00E+02 | 3.02E+02 | 6.89E+01 | 1.00E+02 | 1.19E+02 | 6.22E+01 | + |
| 10 | 3.98E+02 | 4.04E+02 | 1.57E+01 | 1.00E+02 | 3.88E+02 | 5.44E+01 | + |

4 +, 1 −
Table 17. jSO vs. Tb-jSO on CEC2020 in 15D.

| f | jSO Best | jSO Mean | jSO Std | Tb-jSO Best | Tb-jSO Mean | Tb-jSO Std | Result |
|---|---|---|---|---|---|---|---|
| 1 | 4.88E−09 | 8.28E−09 | 1.34E−09 | 4.93E−09 | 8.62E−09 | 1.36E−09 | = |
| 2 | 4.16E−02 | 2.61E+01 | 4.47E+01 | 1.67E−01 | 2.04E+01 | 2.49E+01 | + |
| 3 | 1.56E+01 | 1.66E+01 | 5.14E−01 | 1.99E+00 | 4.75E+00 | 1.59E+00 | + |
| 4 | 1.78E−01 | 2.62E−01 | 3.76E−02 | 2.96E−02 | 9.31E−02 | 4.54E−02 | = |
| 5 | 1.15E+00 | 3.07E+00 | 2.07E+00 | 1.56E−01 | 1.13E+01 | 1.04E+01 | − |
| 6 | 3.33E−02 | 3.20E−01 | 3.15E−01 | 8.05E−02 | 2.70E−01 | 1.26E−01 | = |
| 7 | 1.24E−01 | 7.15E−01 | 2.13E−01 | 1.06E−01 | 4.85E−01 | 2.48E−01 | = |
| 8 | 1.00E+02 | 1.00E+02 | 0.00E+00 | 7.83E−09 | 5.54E+01 | 4.08E+01 | + |
| 9 | 3.86E+02 | 3.89E+02 | 8.36E−01 | 1.00E+02 | 2.55E+02 | 1.48E+02 | + |
| 10 | 4.00E+02 | 4.00E+02 | 0.00E+00 | 4.00E+02 | 4.00E+02 | 0.00E+00 | = |

4 +, 1 −
Table 18. jSO vs. Tb-jSO on CEC2020 in 20D.

| f | jSO Best | jSO Mean | jSO Std | Tb-jSO Best | Tb-jSO Mean | Tb-jSO Std | Result |
|---|---|---|---|---|---|---|---|
| 1 | 5.80E−09 | 8.53E−09 | 1.32E−09 | 6.70E−09 | 9.21E−09 | 8.21E−10 | = |
| 2 | 6.25E−02 | 1.99E+00 | 1.60E+00 | 1.74E+00 | 6.29E+00 | 3.45E+00 | = |
| 3 | 2.04E+01 | 2.13E+01 | 5.24E−01 | 2.56E+00 | 5.17E+00 | 1.63E+00 | + |
| 4 | 1.97E−01 | 3.53E−01 | 4.48E−02 | 5.92E−02 | 1.21E−01 | 5.05E−02 | = |
| 5 | 1.20E+00 | 6.93E+00 | 5.07E+00 | 4.16E−01 | 3.27E+01 | 4.81E+01 | − |
| 6 | 5.63E−02 | 9.75E−01 | 4.25E−01 | 5.93E−02 | 3.11E−01 | 1.49E−01 | = |
| 7 | 5.31E−03 | 1.12E−01 | 1.10E−01 | 1.79E−01 | 3.07E+00 | 4.29E+00 | = |
| 8 | 1.00E+02 | 1.00E+02 | 8.44E−14 | 1.00E+02 | 1.00E+02 | 2.53E−13 | = |
| 9 | 3.98E+02 | 4.01E+02 | 1.36E+00 | 4.15E+02 | 4.24E+02 | 5.11E+00 | − |
| 10 | 4.14E+02 | 4.14E+02 | 4.73E−04 | 3.99E+02 | 3.99E+02 | 6.47E−01 | + |

2 +, 2 −
Table 19. DISH, jDE100, and j2020 on CEC2020 in 5D.

| f | DISH Best | DISH Mean | DISH Std | jDE100 Best | jDE100 Mean | jDE100 Std | j2020 Best | j2020 Mean | j2020 Std |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 2.60E−09 | 7.84E−09 | 1.87E−09 | 1.19E+05 | 9.94E+05 | 1.05E+06 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| 2 | 4.32E−09 | 3.99E−01 | 1.22E+00 | 1.46E+02 | 3.26E+02 | 8.04E+01 | 1.91E−04 | 3.23E+00 | 3.74E+00 |
| 3 | 6.13E−01 | 5.11E+00 | 8.63E−01 | 1.07E+01 | 2.00E+01 | 4.14E+00 | 0.00E+00 | 3.42E+00 | 2.33E+00 |
| 4 | 2.90E−09 | 6.82E−02 | 4.43E−02 | 2.69E−01 | 1.32E+00 | 4.48E−01 | 0.00E+00 | 7.68E−02 | 6.40E−02 |
| 5 | 1.83E−09 | 6.24E−02 | 1.90E−01 | 1.97E+01 | 5.80E+01 | 2.42E+01 | 0.00E+00 | 1.37E−01 | 2.86E−01 |
| 6 | 2.01E−09 | 6.94E−09 | 2.32E−09 | 4.19E−01 | 1.47E+00 | 5.28E−01 | | | |
| 7 | 1.49E−09 | 5.96E−09 | 2.57E−09 | 1.95E+00 | 2.69E+01 | 2.96E+01 | | | |
| 8 | 5.76E−09 | 3.35E+00 | 1.83E+01 | 4.06E+00 | 2.18E+01 | 9.07E+00 | 0.00E+00 | 6.28E−01 | 2.39E+00 |
| 9 | 1.00E+02 | 1.07E+02 | 2.54E+01 | 1.04E+02 | 1.23E+02 | 1.07E+01 | 0.00E+00 | 2.05E+01 | 3.75E+01 |
| 10 | 3.00E+02 | 3.44E+02 | 1.20E+01 | 1.89E+02 | 3.38E+02 | 3.24E+01 | 0.00E+00 | 1.26E+02 | 9.03E+01 |
Table 20. DISH, jDE100, and j2020 on CEC2020 in 10D.

| f | DISH Best | DISH Mean | DISH Std | jDE100 Best | jDE100 Mean | jDE100 Std | j2020 Best | j2020 Mean | j2020 Std |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 4.26E−09 | 8.83E−09 | 1.17E−09 | 5.88E+07 | 2.80E+08 | 1.48E+08 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| 2 | 6.25E−02 | 5.26E+00 | 3.92E+00 | 7.66E+02 | 1.06E+03 | 1.13E+02 | 0.00E+00 | 6.79E−01 | 1.16E+00 |
| 3 | 1.07E+01 | 1.19E+01 | 5.71E−01 | 6.45E+01 | 8.68E+01 | 1.23E+01 | 0.00E+00 | 8.06E+00 | 3.88E+00 |
| 4 | 1.28E−01 | 1.64E−01 | 2.53E−02 | 6.08E+00 | 1.01E+01 | 2.35E+00 | 0.00E+00 | 1.09E−01 | 9.04E−02 |
| 5 | 4.96E−09 | 2.43E−01 | 1.65E−01 | 4.28E+03 | 1.38E+04 | 6.69E+03 | 0.00E+00 | 3.02E−01 | 3.13E−01 |
| 6 | 1.97E−02 | 1.61E−01 | 1.28E−01 | 3.72E+01 | 1.15E+02 | 3.93E+01 | 2.91E−02 | 4.78E−01 | 2.49E−01 |
| 7 | 1.14E−07 | 3.31E−02 | 1.09E−01 | 5.59E+02 | 2.35E+03 | 1.37E+03 | 3.10E−07 | 6.73E−02 | 1.25E−01 |
| 8 | 1.00E+02 | 1.00E+02 | 0.00E+00 | 7.55E+01 | 1.35E+02 | 2.19E+01 | 0.00E+00 | 1.54E+00 | 4.00E+00 |
| 9 | 1.00E+02 | 2.67E+02 | 1.02E+02 | 1.63E+02 | 2.24E+02 | 2.24E+01 | 0.00E+00 | 8.00E+01 | 4.07E+01 |
| 10 | 3.98E+02 | 4.07E+02 | 1.84E+01 | 4.49E+02 | 4.73E+02 | 1.26E+01 | 1.00E+02 | 1.40E+02 | 8.12E+01 |
Table 21. DISH, jDE100, and j2020 on CEC2020 in 15D.

| f | DISH Best | DISH Mean | DISH Std | jDE100 Best | jDE100 Mean | jDE100 Std | j2020 Best | j2020 Mean | j2020 Std |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 4.42E−09 | 8.41E−09 | 1.42E−09 | 6.48E+08 | 1.38E+09 | 5.36E+08 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| 2 | 1.67E−01 | 2.19E+01 | 4.02E+01 | 1.49E+03 | 2.02E+03 | 2.22E+02 | 0.00E+00 | 5.72E−02 | 4.32E−02 |
| 3 | 1.56E+01 | 1.67E+01 | 5.06E−01 | 1.41E+02 | 1.84E+02 | 2.41E+01 | 0.00E+00 | 6.78E+00 | 7.82E+00 |
| 4 | 1.78E−01 | 2.60E−01 | 3.93E−02 | 1.75E+01 | 7.69E+01 | 6.76E+01 | 0.00E+00 | 1.99E−01 | 7.47E−02 |
| 5 | 1.56E−01 | 2.54E+00 | 1.17E+00 | 3.17E+04 | 2.06E+05 | 9.67E+04 | 0.00E+00 | 7.58E+00 | 7.69E+00 |
| 6 | 2.07E−02 | 2.47E−01 | 2.06E−01 | 1.70E+02 | 3.36E+02 | 7.58E+01 | 1.65E−03 | 8.45E−01 | 2.09E+00 |
| 7 | 4.38E−01 | 7.59E−01 | 1.89E−01 | 1.51E+04 | 6.11E+04 | 3.24E+04 | 6.81E−02 | 9.83E−01 | 2.03E+00 |
| 8 | 1.00E+02 | 1.00E+02 | 0.00E+00 | 1.81E+02 | 2.82E+02 | 6.30E+01 | 0.00E+00 | 9.49E+00 | 2.74E+01 |
| 9 | 3.00E+02 | 3.84E+02 | 1.88E+01 | 2.98E+02 | 4.27E+02 | 4.89E+01 | 1.00E+02 | 1.23E+02 | 5.68E+01 |
| 10 | 4.00E+02 | 4.00E+02 | 0.00E+00 | 7.07E+02 | 8.67E+02 | 8.83E+01 | 1.00E+02 | 3.90E+02 | 5.48E+01 |
Table 22. DISH, jDE100, and j2020 on CEC2020 in 20D.

| f | DISH Best | DISH Mean | DISH Std | jDE100 Best | jDE100 Mean | jDE100 Std | j2020 Best | j2020 Mean | j2020 Std |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 6.06E−09 | 8.44E−09 | 9.07E−10 | 1.76E+09 | 4.18E+09 | 1.38E+09 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| 2 | 6.25E−02 | 1.50E+00 | 1.69E+00 | 2.59E+03 | 3.11E+03 | 2.27E+02 | 0.00E+00 | 2.60E−02 | 2.47E−02 |
| 3 | 2.06E+01 | 2.16E+01 | 4.69E−01 | 2.29E+02 | 3.05E+02 | 3.33E+01 | 0.00E+00 | 1.44E+01 | 9.29E+00 |
| 4 | 2.56E−01 | 3.60E−01 | 4.24E−02 | 6.79E+01 | 4.38E+02 | 3.73E+02 | 2.98E−02 | 1.80E−01 | 7.84E−02 |
| 5 | 2.08E−01 | 6.77E+00 | 3.21E+00 | 3.58E+05 | 6.91E+05 | 1.88E+05 | 3.12E−01 | 7.78E+01 | 5.75E+01 |
| 6 | 3.67E−02 | 6.97E−01 | 4.64E−01 | 3.96E+02 | 6.86E+02 | 1.33E+02 | 6.84E−02 | 1.91E−01 | 1.01E−01 |
| 7 | 1.39E−02 | 1.17E−01 | 1.04E−01 | 2.25E+04 | 1.48E+05 | 5.43E+04 | 1.95E−02 | 1.98E+00 | 4.02E+00 |
| 8 | 1.00E+02 | 1.00E+02 | 2.23E−13 | 4.53E+02 | 7.18E+02 | 1.67E+02 | 0.00E+00 | 9.27E+01 | 2.21E+01 |
| 9 | 3.96E+02 | 4.01E+02 | 1.86E+00 | 5.04E+02 | 5.89E+02 | 3.25E+01 | 1.00E+02 | 3.39E+02 | 1.28E+02 |
| 10 | 4.14E+02 | 4.14E+02 | 5.12E−04 | 5.56E+02 | 8.34E+02 | 1.46E+02 | 1.00E+02 | 3.39E+02 | 1.28E+02 |
Table 23. Clustering and population diversity of SHADE and Tb-SHADE on the CEC2020 in 5D.

| f | SHADE #runs | SHADE Mean CO | SHADE Mean PD | Tb-SHADE #runs | Tb-SHADE Mean CO | Tb-SHADE Mean PD |
|---|---|---|---|---|---|---|
| 1 | 30 | 3.04E+01 | 3.74E+01 | 30 | 2.29E+02 | 4.84E+01 |
| 2 | 12 | 3.97E+02 | 7.29E+01 | 0 | – | – |
| 3 | 12 | 4.01E+02 | 1.20E+01 | 0 | – | – |
| 4 | 13 | 2.83E+02 | 1.16E+01 | 0 | – | – |
| 5 | 30 | 9.24E+01 | 3.69E+01 | 30 | 4.40E+02 | 3.66E+01 |
| 6 | 30 | 1.13E+02 | 3.29E+01 | 30 | 4.61E+02 | 2.28E+01 |
| 7 | 30 | 7.66E+01 | 4.57E+01 | 30 | 3.63E+02 | 4.41E+01 |
| 8 | 30 | 6.46E+01 | 2.84E+01 | 29 | 4.42E+02 | 2.52E+01 |
| 9 | 30 | 7.91E+01 | 6.31E+01 | 30 | 4.13E+02 | 3.61E+01 |
| 10 | 30 | 3.57E+01 | 1.30E+01 | 30 | 3.87E+02 | 3.03E+01 |
Table 24. Clustering and population diversity of SHADE and Tb-SHADE on the CEC2020 in 10D.

| f | SHADE #runs | SHADE Mean CO | SHADE Mean PD | Tb-SHADE #runs | Tb-SHADE Mean CO | Tb-SHADE Mean PD |
|---|---|---|---|---|---|---|
| 1 | 30 | 5.84E+01 | 1.88E+01 | 30 | 6.57E+02 | 8.30E+01 |
| 2 | 0 | – | – | 0 | – | – |
| 3 | 3 | 7.94E+03 | 1.35E+01 | 0 | – | – |
| 4 | 0 | – | – | 0 | – | – |
| 5 | 30 | 7.14E+02 | 3.92E+01 | 16 | 8.18E+03 | 8.16E+01 |
| 6 | 0 | – | – | 3 | 8.76E+03 | 8.21E+01 |
| 7 | 30 | 8.78E+02 | 2.36E+01 | 14 | 9.54E+03 | 3.66E+01 |
| 8 | 30 | 5.06E+01 | 1.48E+01 | 30 | 8.97E+02 | 1.39E+02 |
| 9 | 2 | 5.12E+03 | 3.09E+01 | 23 | 2.41E+03 | 1.12E+02 |
| 10 | 30 | 1.03E+02 | 2.37E+01 | 29 | 2.72E+03 | 7.69E+01 |
Table 25. Clustering and population diversity of SHADE and Tb-SHADE on the CEC2020 in 15D.

| f | SHADE #runs | SHADE Mean CO | SHADE Mean PD | Tb-SHADE #runs | Tb-SHADE Mean CO | Tb-SHADE Mean PD |
|---|---|---|---|---|---|---|
| 1 | 30 | 7.08E+01 | 1.37E+01 | 30 | 8.93E+02 | 1.05E+02 |
| 2 | 0 | – | – | 0 | – | – |
| 3 | 30 | 1.65E+04 | 9.53E+00 | 0 | – | – |
| 4 | 0 | – | – | 0 | – | – |
| 5 | 29 | 8.34E+02 | 3.33E+01 | 0 | – | – |
| 6 | 3 | 1.44E+04 | 6.27E+01 | 0 | – | – |
| 7 | 30 | 6.96E+02 | 1.56E+01 | 6 | 2.72E+04 | 6.18E+01 |
| 8 | 30 | 6.59E+01 | 1.12E+01 | 30 | 1.10E+03 | 2.16E+02 |
| 9 | 25 | 1.52E+04 | 3.12E+01 | 2 | 8.86E+03 | 9.17E+01 |
| 10 | 30 | 1.12E+02 | 6.72E+00 | 30 | 3.22E+03 | 1.13E+02 |
Table 26. Clustering and population diversity of SHADE and Tb-SHADE on the CEC2020 in 20D.

| f | SHADE #runs | SHADE Mean CO | SHADE Mean PD | Tb-SHADE #runs | Tb-SHADE Mean CO | Tb-SHADE Mean PD |
|---|---|---|---|---|---|---|
| 1 | 30 | 7.64E+01 | 1.19E+01 | 30 | 1.26E+03 | 8.92E+01 |
| 2 | 4 | 9.06E+04 | 7.74E+01 | 0 | – | – |
| 3 | 30 | 3.34E+04 | 9.12E+00 | 0 | – | – |
| 4 | 0 | – | – | 0 | – | – |
| 5 | 30 | 5.46E+02 | 1.86E+01 | 12 | 8.62E+04 | 1.27E+02 |
| 6 | 0 | – | – | 0 | – | – |
| 7 | 30 | 9.20E+02 | 2.59E+01 | 4 | 8.96E+04 | 4.63E+01 |
| 8 | 30 | 8.56E+01 | 1.06E+01 | 30 | 1.72E+03 | 2.82E+02 |
| 9 | 0 | – | – | 0 | – | – |
| 10 | 30 | 1.02E+02 | 6.21E+00 | 30 | 2.63E+03 | 1.26E+02 |
Table 27. Clustering and population diversity of L-SHADE and TbL-SHADE on the CEC2020 in 5D.

| f | L-SHADE #runs | L-SHADE Mean CO | L-SHADE Mean PD | TbL-SHADE #runs | TbL-SHADE Mean CO | TbL-SHADE Mean PD |
|---|---|---|---|---|---|---|
| 1 | 30 | 3.00E+01 | 3.75E+01 | 30 | 2.53E+02 | 4.85E+01 |
| 2 | 18 | 8.71E+02 | 3.20E+01 | 6 | 1.38E+03 | 1.66E+01 |
| 3 | 30 | 6.29E+02 | 7.01E+00 | 1 | 1.21E+03 | 6.85E+00 |
| 4 | 10 | 4.65E+02 | 1.06E+01 | 0 | – | – |
| 5 | 30 | 8.75E+01 | 3.42E+01 | 30 | 1.10E+03 | 2.17E+01 |
| 6 | 30 | 1.14E+02 | 3.01E+01 | 30 | 7.34E+02 | 2.30E+01 |
| 7 | 30 | 7.66E+01 | 4.29E+01 | 30 | 4.85E+02 | 4.13E+01 |
| 8 | 30 | 6.47E+01 | 2.08E+01 | 30 | 7.59E+02 | 1.97E+01 |
| 9 | 30 | 7.11E+01 | 6.15E+01 | 30 | 3.90E+02 | 2.77E+01 |
| 10 | 30 | 3.52E+01 | 1.07E+01 | 30 | 3.71E+02 | 2.93E+01 |
Table 28. Clustering and population diversity of L-SHADE and TbL-SHADE on the CEC2020 in 10D.

| f | L-SHADE #runs | L-SHADE Mean CO | L-SHADE Mean PD | TbL-SHADE #runs | TbL-SHADE Mean CO | TbL-SHADE Mean PD |
|---|---|---|---|---|---|---|
| 1 | 30 | 5.96E+01 | 2.01E+01 | 30 | 6.61E+02 | 8.21E+01 |
| 2 | 10 | 2.88E+04 | 2.98E+01 | 9 | 3.12E+04 | 2.12E+01 |
| 3 | 20 | 2.65E+04 | 4.36E+00 | 2 | 3.19E+04 | 3.52E+00 |
| 4 | 1 | 2.86E+04 | 8.35E+00 | 2 | 3.21E+04 | 1.97E+00 |
| 5 | 30 | 7.31E+02 | 4.15E+01 | 15 | 2.89E+04 | 1.58E+01 |
| 6 | 6 | 2.31E+04 | 1.10E+01 | 0 | – | – |
| 7 | 30 | 9.46E+02 | 2.34E+01 | 30 | 2.30E+04 | 1.77E+01 |
| 8 | 30 | 5.75E+01 | 1.51E+01 | 30 | 8.86E+02 | 1.26E+02 |
| 9 | 10 | 1.63E+04 | 5.77E+01 | 26 | 6.57E+03 | 1.18E+02 |
| 10 | 30 | 1.08E+02 | 2.70E+01 | 28 | 4.24E+03 | 7.02E+01 |
Table 29. Clustering and population diversity of L-SHADE and TbL-SHADE on the CEC2020 in 15D.

| f | L-SHADE #runs | L-SHADE Mean CO | L-SHADE Mean PD | TbL-SHADE #runs | TbL-SHADE Mean CO | TbL-SHADE Mean PD |
|---|---|---|---|---|---|---|
| 1 | 30 | 7.06E+01 | 1.45E+01 | 30 | 8.80E+02 | 1.05E+02 |
| 2 | 30 | 6.19E+04 | 4.25E+01 | 30 | 7.69E+04 | 3.52E+01 |
| 3 | 30 | 1.84E+04 | 7.97E+00 | 24 | 8.51E+04 | 2.14E+01 |
| 4 | 4 | 9.42E+04 | 6.03E+00 | 3 | 9.80E+04 | 1.64E+00 |
| 5 | 28 | 8.35E+02 | 3.80E+01 | 2 | 9.80E+04 | 1.21E+01 |
| 6 | 18 | 7.04E+04 | 1.09E+01 | 17 | 9.20E+04 | 2.24E+01 |
| 7 | 30 | 6.98E+02 | 1.54E+01 | 0 | – | – |
| 8 | 30 | 6.60E+01 | 1.15E+01 | 30 | 1.10E+03 | 2.16E+02 |
| 9 | 30 | 2.00E+04 | 2.82E+01 | 14 | 7.38E+04 | 1.05E+02 |
| 10 | 30 | 1.14E+02 | 6.87E+00 | 30 | 3.13E+03 | 1.08E+02 |
Table 30. Clustering and population diversity of L-SHADE and TbL-SHADE on the CEC2020 in 20D.

| f | L-SHADE #runs | L-SHADE Mean CO | L-SHADE Mean PD | TbL-SHADE #runs | TbL-SHADE Mean CO | TbL-SHADE Mean PD |
|---|---|---|---|---|---|---|
| 1 | 30 | 7.64E+01 | 1.22E+01 | 30 | 1.26E+03 | 8.95E+01 |
| 2 | 26 | 9.06E+04 | 6.00E+01 | 30 | 1.79E+05 | 6.37E+01 |
| 3 | 30 | 3.61E+04 | 8.72E+00 | 30 | 2.21E+05 | 4.83E+01 |
| 4 | 17 | 3.00E+05 | 1.14E+01 | 9 | 3.03E+05 | 6.49E+00 |
| 5 | 30 | 5.46E+02 | 1.63E+01 | 19 | 2.66E+05 | 7.85E+01 |
| 6 | 17 | 2.59E+05 | 1.75E+01 | 11 | 2.94E+05 | 1.89E+01 |
| 7 | 30 | 8.89E+02 | 1.92E+01 | 7 | 3.24E+05 | 1.18E+01 |
| 8 | 30 | 8.64E+01 | 8.66E+00 | 30 | 1.72E+03 | 2.80E+02 |
| 9 | 19 | 2.37E+05 | 3.59E+01 | 26 | 2.19E+05 | 1.11E+02 |
| 10 | 30 | 1.04E+02 | 6.78E+00 | 30 | 2.52E+03 | 1.26E+02 |
Table 31. Clustering and population diversity of jSO and Tb-jSO on the CEC2020 in 5D.

| f | jSO #runs | jSO Mean CO | jSO Mean PD | Tb-jSO #runs | Tb-jSO Mean CO | Tb-jSO Mean PD |
|---|---|---|---|---|---|---|
| 1 | 30 | 3.17E+01 | 4.57E+01 | 30 | 3.17E+02 | 4.18E+01 |
| 2 | 29 | 9.61E+02 | 4.16E+01 | 26 | 1.22E+03 | 4.92E+01 |
| 3 | 30 | 9.12E+02 | 9.36E+00 | 19 | 1.43E+03 | 2.33E+01 |
| 4 | 29 | 1.08E+03 | 8.68E+00 | 22 | 1.44E+03 | 1.39E+01 |
| 5 | 30 | 1.17E+02 | 3.36E+01 | 30 | 9.94E+02 | 2.81E+01 |
| 6 | 30 | 1.57E+02 | 3.68E+01 | 30 | 9.66E+02 | 1.54E+01 |
| 7 | 30 | 1.06E+02 | 4.38E+01 | 29 | 6.79E+02 | 3.89E+01 |
| 8 | 30 | 7.21E+01 | 1.74E+01 | 30 | 7.20E+02 | 2.13E+01 |
| 9 | 30 | 5.49E+01 | 6.52E+01 | 30 | 3.94E+02 | 2.70E+01 |
| 10 | 30 | 3.25E+01 | 9.66E+00 | 30 | 3.90E+02 | 2.97E+01 |
Table 32. Clustering and population diversity of jSO and Tb-jSO on the CEC2020 in 10D.

| f | jSO #runs | jSO Mean CO | jSO Mean PD | Tb-jSO #runs | Tb-jSO Mean CO | Tb-jSO Mean PD |
|---|---|---|---|---|---|---|
| 1 | 30 | 5.88E+01 | 3.93E+01 | 30 | 2.29E+03 | 7.97E+01 |
| 2 | 30 | 6.11E+03 | 1.12E+02 | 30 | 6.86E+03 | 1.65E+02 |
| 3 | 30 | 6.17E+03 | 1.48E+01 | 30 | 1.04E+04 | 4.81E+01 |
| 4 | 30 | 8.55E+03 | 1.28E+01 | 30 | 1.34E+04 | 2.44E+01 |
| 5 | 30 | 4.41E+03 | 3.63E+01 | 30 | 8.97E+03 | 5.36E+01 |
| 6 | 30 | 5.54E+03 | 2.38E+01 | 30 | 1.07E+04 | 5.42E+01 |
| 7 | 30 | 1.44E+03 | 3.57E+01 | 30 | 9.69E+03 | 3.94E+01 |
| 8 | 30 | 5.52E+01 | 9.59E+00 | 30 | 3.40E+03 | 5.53E+01 |
| 9 | 30 | 4.98E+03 | 4.19E+01 | 30 | 3.73E+03 | 4.66E+01 |
| 10 | 30 | 8.33E+01 | 2.45E+01 | 30 | 3.07E+03 | 6.99E+01 |
Table 33. Clustering and population diversity of jSO and Tb-jSO on the CEC2020 in 15D.

| f | jSO #runs | jSO Mean CO | jSO Mean PD | Tb-jSO #runs | Tb-jSO Mean CO | Tb-jSO Mean PD |
|---|---|---|---|---|---|---|
| 1 | 30 | 8.09E+01 | 4.39E+01 | 30 | 5.34E+03 | 1.02E+02 |
| 2 | 30 | 1.39E+04 | 1.21E+02 | 30 | 1.54E+04 | 1.66E+02 |
| 3 | 30 | 1.20E+04 | 1.37E+01 | 30 | 2.02E+04 | 5.26E+01 |
| 4 | 30 | 1.69E+04 | 1.57E+01 | 30 | 2.99E+04 | 2.84E+01 |
| 5 | 30 | 1.17E+04 | 4.95E+01 | 30 | 1.31E+04 | 1.24E+02 |
| 6 | 30 | 1.79E+04 | 2.23E+01 | 30 | 2.41E+04 | 5.03E+01 |
| 7 | 30 | 2.53E+03 | 3.56E+01 | 30 | 2.26E+04 | 3.76E+01 |
| 8 | 30 | 7.50E+01 | 7.39E+00 | 30 | 3.53E+03 | 4.65E+01 |
| 9 | 30 | 1.07E+04 | 3.03E+01 | 30 | 1.83E+04 | 4.07E+01 |
| 10 | 30 | 9.39E+01 | 6.56E+00 | 30 | 8.49E+03 | 8.97E+01 |
Table 34. Clustering and population diversity of jSO and Tb-jSO on the CEC2020 in 20D.

| f | jSO #runs | jSO Mean CO | jSO Mean PD | Tb-jSO #runs | Tb-jSO Mean CO | Tb-jSO Mean PD |
|---|---|---|---|---|---|---|
| 1 | 30 | 9.32E+01 | 3.60E+01 | 30 | 9.38E+03 | 9.36E+01 |
| 2 | 30 | 3.15E+04 | 1.17E+02 | 30 | 3.48E+04 | 1.69E+02 |
| 3 | 30 | 2.91E+04 | 1.37E+01 | 30 | 4.65E+04 | 6.03E+01 |
| 4 | 30 | 3.86E+04 | 1.80E+01 | 30 | 7.28E+04 | 3.40E+01 |
| 5 | 30 | 2.94E+04 | 6.35E+01 | 30 | 3.25E+04 | 1.28E+02 |
| 6 | 30 | 3.81E+04 | 2.77E+01 | 30 | 4.80E+04 | 9.32E+01 |
| 7 | 30 | 3.06E+04 | 1.92E+01 | 30 | 4.83E+04 | 6.16E+01 |
| 8 | 30 | 9.09E+01 | 7.40E+00 | 30 | 7.11E+03 | 4.78E+01 |
| 9 | 30 | 2.84E+04 | 4.81E+01 | 30 | 3.68E+04 | 1.08E+02 |
| 10 | 30 | 9.94E+01 | 6.62E+00 | 30 | 1.08E+04 | 1.27E+02 |
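In Tables 23–34, #runs counts the runs (out of 30) in which the population collapsed into clusters, Mean CO is the mean number of fitness evaluations at which clustering occurred, and Mean PD is the mean population diversity; "–" marks cells with no clustering runs to average over. The paper's exact PD definition is given earlier in the article; as a minimal sketch, assuming the diversity measure commonly used in the SHADE literature — the square root of the mean, over individuals, of the summed squared deviation from the population centroid — PD can be computed as:

```python
import math

def population_diversity(pop):
    """Root of the mean (over individuals) summed squared deviation of
    each individual from the population centroid. `pop` is a list of
    equal-length coordinate lists (one per individual)."""
    n, d = len(pop), len(pop[0])
    centroid = [sum(ind[j] for ind in pop) / n for j in range(d)]
    total = sum((ind[j] - centroid[j]) ** 2 for ind in pop for j in range(d))
    return math.sqrt(total / n)
```

Under this definition, PD is zero exactly when all individuals coincide, which is why the converged original algorithms report much smaller Mean PD values than the turning-based variants.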
Table 35. The Friedman ranks of comparative algorithms on CEC2020 in 5D.

| Rank | Name | F-Rank |
|---|---|---|
| 0 | TbL-SHADE | 3.2 |
| 1 | Tb-jSO | 3.55 |
| 2 | L-SHADE | 3.8 |
| 3 | jSO | 4.05 |
| 4 | j2020 | 4.95 |
| 5 | DISH | 5.05 |
| 6 | SHADE | 5.2 |
| 7 | Tb-SHADE | 6.6 |
| 8 | jDE100 | 8.6 |
Table 36. The Friedman ranks of comparative algorithms on CEC2020 in 10D.

| Rank | Name | F-Rank |
|---|---|---|
| 0 | j2020 | 3.2 |
| 1 | Tb-jSO | 3.6 |
| 2 | TbL-SHADE | 3.9 |
| 3 | DISH | 4.9 |
| 4 | jSO | 4.95 |
| 5 | Tb-SHADE | 5.05 |
| 6 | L-SHADE | 5.05 |
| 7 | SHADE | 5.75 |
| 8 | jDE100 | 8.6 |
Table 37. The Friedman ranks of comparative algorithms on CEC2020 in 15D.

| Rank | Name | F-Rank |
|---|---|---|
| 0 | j2020 | 3.15 |
| 1 | TbL-SHADE | 3.45 |
| 2 | Tb-jSO | 3.45 |
| 3 | Tb-SHADE | 4.35 |
| 4 | DISH | 4.7 |
| 5 | jSO | 5.2 |
| 6 | L-SHADE | 5.6 |
| 7 | SHADE | 6.1 |
| 8 | jDE100 | 9 |
Table 38. The Friedman ranks of comparative algorithms on CEC2020 in 20D.

| Rank | Name | F-Rank |
|---|---|---|
| 0 | j2020 | 2.35 |
| 1 | TbL-SHADE | 4.05 |
| 2 | Tb-jSO | 4.3 |
| 3 | DISH | 4.8 |
| 4 | jSO | 4.9 |
| 5 | Tb-SHADE | 5 |
| 6 | L-SHADE | 5.1 |
| 7 | SHADE | 5.5 |
| 8 | jDE100 | 9 |
Table 39. Statistical values obtained from the Friedman test for α = 0.05.

| D | Chi-sq | Prob > Chi-sq (p) | Critical Value |
|---|---|---|---|
| 5 | 34.25414365 | 3.65E−05 | 15.51 |
| 10 | 28.83468835 | 3.39E−04 | 15.51 |
| 15 | 38.8213628 | 5.31E−06 | 15.51 |
| 20 | 37.00654818 | 1.15E−05 | 15.51 |
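Table 39 reports, for each dimension D, the Friedman chi-square statistic over the k = 9 algorithms and N = 10 functions, its p-value, and the χ² critical value at α = 0.05 with k − 1 = 8 degrees of freedom (15.51); every statistic exceeds the critical value, so the rank differences are significant. As a hedged sketch (not the authors' exact computation), the statistic can be recomputed from average ranks via χ²_F = 12N/(k(k+1)) · [Σ R_j² − k(k+1)²/4]; applied to the rounded 5D average ranks of Table 35 this yields 31.0, of the same order as the published 34.25, with the gap attributable to rank rounding and tie handling:

```python
def friedman_chi2(avg_ranks, n_problems):
    """Friedman chi-square statistic from average ranks.

    avg_ranks  -- mean rank of each of the k algorithms over the problems
    n_problems -- number of problems N the ranking was taken over
    """
    k = len(avg_ranks)
    sum_sq = sum(r * r for r in avg_ranks)
    return 12.0 * n_problems / (k * (k + 1)) * (sum_sq - k * (k + 1) ** 2 / 4.0)

# Average ranks reported for CEC2020 in 5D (Table 35), N = 10 functions
ranks_5d = [3.2, 3.55, 3.8, 4.05, 4.95, 5.05, 5.2, 6.6, 8.6]
chi2 = friedman_chi2(ranks_5d, 10)   # 31.0 from the rounded ranks
```

When all algorithms have the same average rank the statistic is zero, as expected under the null hypothesis of no performance difference.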

Sun, X.; Jiang, L.; Shen, Y.; Kang, H.; Chen, Q. Success History-Based Adaptive Differential Evolution Using Turning-Based Mutation. Mathematics 2020, 8, 1565. https://doi.org/10.3390/math8091565
