Article

A Two-Stage Differential Evolution Algorithm with Mutation Strategy Combination

School of Software, Yunnan University, Kunming 650000, China
*
Author to whom correspondence should be addressed.
Symmetry 2021, 13(11), 2163; https://doi.org/10.3390/sym13112163
Submission received: 12 October 2021 / Revised: 29 October 2021 / Accepted: 29 October 2021 / Published: 11 November 2021

Abstract

For most differential evolution (DE) algorithm variants, premature convergence remains a challenge. The main reason is that exploration and exploitation are highly coupled in existing works. To address this problem, we present a novel DE variant that symmetrically decouples exploration and exploitation during the optimization process. In the algorithm, the whole population is divided into two symmetrical subpopulations in ascending order of fitness during each iteration; moreover, we divide the algorithm into two symmetrical stages according to the number of fitness evaluations (FEs). On the one hand, we introduce a mutation strategy, DE/current/1, which rarely appears in the literature. It can maintain sufficient population diversity and fully explore the solution space, but its convergence speed gradually slows as iteration continues. To exploit its advantages while avoiding its disadvantages, we propose a heterogeneous two-stage double-subpopulation (HTSDS) mechanism. Four mutation strategies (including DE/current/1 and its modified version) with distinct search behaviors are assigned to the superior and inferior subpopulations in the two stages, which helps manage exploration and exploitation simultaneously and independently in different components. On the other hand, an adaptive two-stage partition (ATSP) strategy is proposed, which adjusts the stage partition parameter according to the complexity of the problem. Hence, a two-stage differential evolution algorithm with mutation strategy combination (TS-MSCDE) is proposed. Numerical experiments were conducted on CEC2017, CEC2020, and four real-world optimization problems from CEC2011. The results show that when computing resources are sufficient, the algorithm is competitive, especially for complex multimodal problems.

1. Introduction

Proposed by Storn and Price in 1995 [1], the differential evolution (DE) algorithm is a stochastic optimization algorithm that simulates biological evolution in nature. It uses real-valued vector encoding to search for and optimize solutions over a continuous space. Compared with other swarm-intelligence-based optimization algorithms, DE retains a global search strategy and guides population evolution through mutation operations based on difference vectors and crossover operations based on probabilistic selection. In addition to outstanding performance on benchmark functions, DE is widely used in problems such as pattern recognition [2], image processing [3], artificial neural networks [4], electronic communication engineering [5], and bioinformatics [6].
In DE, the original vector X selected to perform the mutation operation is called the target vector, and the mutant vector V obtained through difference operations is the donor vector, which is essentially a kind of transitional vector; the new vector U generated by the crossover operation between the target and donor vectors is called the trial vector. The algorithm chooses the vector with the better fitness value from X and U to enter the next generation. Only a few DE control parameters must be set: the population size NP, scale factor F, and crossover rate CR. The procedure can be summarized as population initialization, mutation, crossover, and selection.
Mutation is crucial to the performance of the algorithm, and much work has been done on mutation strategies, leading to different exploration and exploitation features. We follow the "DE/x/y/z" notation proposed by Storn et al. [1] to denote mutation strategies, where x represents the selection method of the base vector, y is the number of difference vectors, and z is the crossover method (generally binomial, so z is often omitted).
Mutation can be treated as a form of local search, where the base vector acts as the center and the difference vector determines the range. Mutation strategies can be roughly categorized as DE/best/*, DE/rand/*, or DE/current-to-*. DE/best/* uses the optimal individual of the population as the base vector, which provides guidance toward the optimal region for local search and thus yields considerable convergence speed. However, because all individuals share the same search center, it can diminish population diversity and cause premature convergence. DE/rand/* randomly chooses the base vector from the whole population, and the resulting variety of search centers helps maintain diversity and improve convergence reliability. However, randomly selected base vectors provide no effective guidance information, so search efficiency is relatively low. Moreover, when the population is mostly concentrated in a local optimal region, the influence of selection probability can cause premature convergence. DE/current-to-* can be regarded as using a linear combination of the target vector and another vector as the search center. Its performance is relatively balanced, and exploration and exploitation are performed simultaneously; however, promoting one inevitably suppresses the other. In general, no single mutation strategy outperforms the others on all problems. An appropriate methodology is to combine different mutation strategies, based on individual fitness values, in different periods of the iteration. We therefore propose a two-stage differential evolution algorithm with mutation strategy combination (TS-MSCDE).
First, we introduce a mutation strategy, DE/current/1, which has rarely appeared in the literature, and compare it with DE/best/1, DE/rand/1, and DE/current-to-best/1. DE/current/1 adopts the target vector as the base vector, so it has a strong capability to explore the solution space and can avoid local optima to the maximum extent. However, its convergence speed and search efficiency are relatively low in the middle and later stages.
Second, in the evolution process of DE, individuals often have different tendencies toward exploration and exploitation. Superior individuals are usually located near a local or global optimum and tend to perform a local search to find better solutions, while inferior individuals are likely far from the optimal region and tend to explore globally to maintain population diversity. To exploit this, we propose a heterogeneous two-stage double-subpopulation (HTSDS) mechanism. DE/current/1 and its modified version are assigned to the superior and inferior subpopulations, respectively, in the first stage for exploration, while DE/best/1 and a modified DE/rand/1 are used in the second stage.
Finally, as mentioned above, our stage division is designed to explore in the early stage and exploit in the later stage. However, owing to the varying complexity of optimization problems, the two-stage division should be adaptive. For example, the algorithm can enter the second stage as soon as possible on unimodal problems, while it should explore more on more complex problems. Therefore, an adaptive two-stage partition (ATSP) strategy is designed to estimate the complexity of the optimization problem after population initialization and adjust the stage partition parameter accordingly.
To verify and analyze the effectiveness of our proposed algorithm, we performed experiments and comparisons on the CEC2017 benchmark set [7] to verify the impact of HTSDS and ATSP, followed by experiments with advanced algorithms on the CEC2020 benchmark set [8]. Results indicate that TS-MSCDE is highly competitive when computing resources are sufficient, especially for some difficult and complex multimodal problems. Four real-world optimization problems from CEC2011 [9] are selected to further prove the above conclusion.
The rest of this paper is organized as follows: Section 2 introduces the canonical DE algorithm, including its typical mutation, crossover, and selection operators. Section 3 reviews related work. The proposed HTSDS, ATSP, and complete TS-MSCDE procedures are introduced in Section 4. The effectiveness of the proposed mechanism and algorithm is discussed in Section 5 based on the computational results, and comparisons are drawn with other state-of-the-art algorithms. Conclusions are made, and future work discussed, in Section 6.

2. Differential Evolution

In this section, we introduce the basic differential evolution algorithm, including the well-known mutation strategy DE/rand/1 [1] and other widely used mutation strategies.

2.1. Initialization

An initial random population P consists of NP individuals, each represented by X_i^t = (x_{i,1}^t, x_{i,2}^t, …, x_{i,D}^t), i = 1, 2, …, NP, where t = 0, 1, 2, …, T_max is the iteration number and D is the number of dimensions of the solution space. In DE, uniformly distributed random numbers are used to generate the initial solutions, x_{i,j}^0 = x_{j,min} + rand(0, 1) · (x_{j,max} − x_{j,min}), where x_{j,max} and x_{j,min} are the maximum and minimum boundary values, respectively, on the corresponding j-th dimension.

2.2. Mutation

At iteration t, for each target vector X_i^t, a mutant vector V_i^t is generated according to
DE/rand/1: V_i^t = X_{r1}^t + F · (X_{r2}^t − X_{r3}^t).
Other widely used mutation strategies include
DE/best/1: V_i^t = X_{best}^t + F · (X_{r1}^t − X_{r2}^t),
DE/best/2: V_i^t = X_{best}^t + F · (X_{r1}^t − X_{r2}^t) + F · (X_{r3}^t − X_{r4}^t),
DE/rand/2: V_i^t = X_{r1}^t + F · (X_{r2}^t − X_{r3}^t) + F · (X_{r4}^t − X_{r5}^t),
DE/current-to-best/1: V_i^t = X_i^t + F · (X_{best}^t − X_i^t) + F · (X_{r1}^t − X_{r2}^t),
DE/current-to-rand/1: V_i^t = X_i^t + F · (X_{r1}^t − X_i^t) + F · (X_{r2}^t − X_{r3}^t),
where r1, r2, r3, r4, and r5 are mutually distinct integers randomly generated from {1, 2, …, NP}, all different from i (r1 ≠ r2 ≠ r3 ≠ i). X_{best}^t is the individual vector with the best fitness value in the population at iteration t.

2.3. Crossover

We illustrate binomial crossover, in which the target vector X_i^t and donor vector V_i^t exchange elements according to the following rule:
u_{i,j}^t = v_{i,j}^t if (rand(0, 1) < CR or j = j_rand); otherwise u_{i,j}^t = x_{i,j}^t,
where the crossover rate CR ∈ [0, 1] controls the fraction of elements inherited from the donor vector, and j_rand is a uniformly distributed random integer in {1, 2, …, D} that ensures at least one element of the trial vector is inherited from the donor vector.

2.4. Selection

The greedy selection strategy is generally used in DE. The trial vector U_i^t is assigned to X_i^{t+1} when its fitness value is equal to or better than that of X_i^t, and otherwise X_i^t is retained:
X_i^{t+1} = U_i^t if f(U_i^t) ≤ f(X_i^t); otherwise X_i^{t+1} = X_i^t,
where f(x) is the fitness function.
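As a concrete illustration, the four procedures of this section can be sketched in a short DE/rand/1/bin optimizer. The sphere objective, box bounds, and parameter values (NP = 50, F = 0.5, CR = 0.9) are illustrative choices for this sketch, not settings prescribed in this paper.

```python
import numpy as np

def canonical_de(f, bounds, NP=50, F=0.5, CR=0.9, max_iters=200, seed=0):
    """Minimize f over box bounds with canonical DE/rand/1/bin."""
    rng = np.random.default_rng(seed)
    D = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    # 2.1 Initialization: uniform random population inside the box
    X = lo + rng.random((NP, D)) * (hi - lo)
    fit = np.array([f(x) for x in X])
    for _ in range(max_iters):
        for i in range(NP):
            # 2.2 Mutation (DE/rand/1): three mutually distinct indices, all != i
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            V = X[r1] + F * (X[r2] - X[r3])
            # 2.3 Binomial crossover: at least one component comes from the donor
            j_rand = rng.integers(D)
            mask = rng.random(D) < CR
            mask[j_rand] = True
            U = np.clip(np.where(mask, V, X[i]), lo, hi)
            # 2.4 Greedy selection: keep the trial vector if it is no worse
            fU = f(U)
            if fU <= fit[i]:
                X[i], fit[i] = U, fU
    best = np.argmin(fit)
    return X[best], fit[best]

# Usage: minimize the 5-dimensional sphere function
sphere = lambda x: float(np.sum(x * x))
x_best, f_best = canonical_de(sphere, [(-5.0, 5.0)] * 5)
```

Boundary handling by clipping is one common convention among several; the paper does not specify one here.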

3. Related Work

The performance of canonical DE generally depends on the mutation strategies, crossover strategies, and associated control parameters.
Parameter setting methods can be classified as constant, random, and adaptive (including self-adaptive). Storn and Price [1] provided general guidance for parameter control: a promising range for NP is between 5D and 10D, 0.5 is an appropriate value for F, and 0.1 is a good first choice for CR in most situations. However, Ronkkonen et al. [10] concluded that F = 0.9 is a good tradeoff between convergence rate and robustness, and that the setting of CR depends on the nature of the problem, with a value between 0.9 and 1 being suitable for non-separable and multimodal problems, and between 0 and 0.2 for separable problems. A DE variant with orthogonal crossover was proposed [11] with fixed parameters F = 0.9, CR = 0.9, and NP = D. Different conclusions have thus been drawn about constant parameter settings. One set of parameters cannot be applied to all problems, and improving algorithm performance requires more flexible setting methods.
To avoid manual adjustment of parameters, many researchers have proposed random parameter mechanisms. In CoDE [12], each individual randomly selects a pair of parameter settings from a pool of three pairs. DE-APC [13] randomly selects the F and CR values of an individual from two preset constant sets, F_set and CR_set. Das et al. [14] proposed two ways to set F in DE: random and time-varying. In the random method, F is assigned a random real number from (0.5, 1); in the time-varying method, F linearly decreases over a given interval. In another method [15], the control parameters for each individual are adaptively selected from a set of 12 settings, with the selection probability depending on the corresponding success rate. Instead of an F value for each single individual, PVADE [16] uses a scaling factor vector F_m calculated from the diversity measure of the population at each dimension level. In SaDE [17], the value of F is randomly sampled for each individual of the current population from N(0.5, 0.3).
Another class of approaches focuses on adaptive parameter mechanisms, in which the control parameters are adjusted dynamically according to feedback from the search process or the evolution operation. In fuzzy adaptive DE (FADE), Liu and Lampinen [18] took the individual and relative fitness function values of successive generations as inputs and used fuzzy logic controllers to adjust the parameters adaptively. Brest et al. [19] proposed jDE, which encodes parameters into each individual and adjusts them through evolution. In terms of solution quality, jDE is superior to basic DE and competitive with other advanced algorithms. In JADE [20], according to the historical success information of the parameters, F is generated by a Cauchy distribution and CR is sampled from a normal distribution for each individual in each generation. SHADE [21] is an improved version of JADE that uses a success-history-based mechanism to update F and CR. Instead of sampling F and CR values from gradually adapted probability distributions, SHADE [22] uses historical memory archives M_F and M_CR, which store sets of F and CR values that have performed well in the recent past. Zhang et al. [23] made the first attempt to model the tuning of structural hyper-parameters as a reinforcement learning problem and applied it to tune the structural hyper-parameters of JADE, jSO [24], and other excellent algorithms.
Researchers have developed many mutation strategies [25,26]. Some have good exploration ability and are suitable for global search, while others have good exploitation ability and are good at local search. Fan and Lampinen [27] proposed a triangular mutation strategy, considered a local search operator, and designed the TDE algorithm together with DE/rand/1. Zhang and Sanderson introduced a new DE algorithm, JADE [20], improving optimization performance with a new mutation strategy, DE/current-to-pbest/1. Experimental results showed that JADE has convergence performance competitive with classical DE. Tanabe and Fukunaga proposed success-history-based adaptive differential evolution (SHADE) [28] based on DE/current-to-pbest/1, which ranked third among 21 algorithms at the IEEE CEC2013 competition [13]. They later extended SHADE with linear population size reduction (L-SHADE) [28], which placed first in the IEEE CEC2014 competition [29]. An improved p-best mutation, MDE_pBX, was proposed to overcome premature convergence and convergence stagnation in classical DE [30]. This mutation strategy is similar to DE/current-to-pbest/1 and selects the guiding vector from a dynamic group of q% of the population. Xin Zhang and Shiu Yin Yuen [31] proposed a direction-based mutation operator for DE. The idea is to construct a pool of difference vectors, recorded when fitness improves, and use it to guide the search of the next generation. The directional mutation operator can be applied to any DE mutation strategy.
In [32], a method is proposed that allocates an appropriate position to each individual in the mutation strategy by comprehensively considering the individual's fitness and diversity contribution, thus achieving a good balance between early exploration and later exploitation. On the basis of the classic DE/rand/2 and DE/best/2, Yuzhen Li et al. [33] proposed two new mutation strategy variants, named DE/erand/2 and DE/ebest/2, respectively. An elite guidance mechanism randomly selects individuals from the elite group as the base vector and difference vector, so as to provide clearer guidance for individual mutation without losing randomness. A double-mutation-strategy cooperation mechanism is adopted to balance global and local search.
Many problems are black-box optimization problems, where the specific expression of the optimization target and its gradient information are unknown. Therefore, we cannot use the characteristics of the optimization target itself to obtain the global optimal solution [34]. However, we can still mine useful information to improve the performance of the algorithm. Wright [35] applied the theory of the fitness landscape to the dynamics of biological evolution, which can reveal the relationship between a search space and fitness values through the characteristics of landscape information. Many fitness landscape measures have been proposed to understand the characteristics of optimization problems [36,37]. Auto-correlation and correlation length [38] are often used to measure the ruggedness of a fitness landscape. The fitness distance correlation coefficient (FDC) measures the difficulty of a problem [39] by measuring the correlation between the fitness value and the distance to the optimum in the search domain. Lunacek et al. [40] introduced the dispersion metric (DM) to predict the presence of funnels in the fitness landscape by measuring the average distance between high-quality solutions. Malan and Engelbrecht used entropy [41] to describe fitness landscape characteristics for continuous optimization problems.
The study of the fitness landscape is important in evolutionary computing, helping researchers to select an appropriate algorithm or operator according to the characteristics of a problem [42]. The combination of DE and the fitness landscape has been much discussed. One method detects the fitness landscape to determine whether a problem is unimodal or multimodal by generating several detection points on the line between the optimal and central solutions of a population, then using different base vector selection rules and F values according to landscape modality [43]. Li et al. [44] proposed a hybrid DE based on the fitness landscape, FLDE, in which the optimal mutation strategy is selected by extracting the local fitness landscape features of each generation of the population and combining them with a unimodal or multimodal probability distribution. Another approach uses fitness landscape information to dynamically measure the search ability of different mutation strategies, considers their historical performance, and dynamically selects the most appropriate mutation strategy in the evolution process [45]. Fitness landscape information is thus usually helpful for judging the complexity of optimization problems, and its inclusion can improve the performance of a DE algorithm.

4. TS-MSCDE

In this section, we discuss the characteristics of DE/current/1, introduce HTSDS and ATSP, and describe the implementation of TS-MSCDE.

4.1. DE/current/1

Numerous mutation strategies have appeared since DE was proposed, most derived from DE/best/1, DE/rand/1, and DE/current-to-best/1 [25]. As can be seen from Equations (1), (2) and (5), these three strategies all use a difference vector to provide disturbance and differ in how the base vector is chosen. Among the three, DE/best/1 is usually thought to have the fastest convergence speed and the best search performance on simple unimodal problems. However, its population diversity and exploration ability may deteriorate and even be completely lost within very few iterations, leading to stagnation or premature convergence. Unlike DE/best/1, DE/current-to-best/1 is more inclined to balance global exploration and local exploitation, but it has poor robustness. DE/rand/1 has a strong global search capability and is effective for complex multimodal problems. However, as shown in Figure 1a, although the population is relatively evenly distributed in the solution space (vectors of different colors represent different optimal regions), the local optimal region is relatively large compared with the global optimal region. Hence, DE/rand/1 has a high probability of choosing a blue or yellow vector as the base vector X_{r1}^t, and this trend is exacerbated as the iteration progresses. Therefore, the whole population has a high probability of falling into a local optimum.
We introduce a mutation strategy to avoid falling into a local optimum due to the selection of the base vector:
DE/current/1: V_i^t = X_i^t + F · (X_{r1}^t − X_{r2}^t),
where r1 and r2 are mutually distinct integers randomly generated in the range {1, 2, …, NP}, and r1 ≠ r2 ≠ i. As shown in Figure 1b, the base vector of DE/current/1 is always the target vector itself, which is equivalent to a local search by each target vector centered on itself. Therefore, it can preserve the diversity of the population to a great extent, fully explore the solution space, and avoid the population falling into local optima on complex multimodal problems. However, since the convergence of this mutation strategy depends entirely on the difference vector, it can converge slowly or even stagnate when the difference vector cannot provide effective direction information. DE/current/1-related algorithms are rarely applied to numerical optimization problems, perhaps because its disadvantages are no less obvious than its advantages, as mentioned above. Nevertheless, its characteristics provide an opportunity to decouple exploration and exploitation from another perspective, and reasonable decoupling can improve algorithm performance to some extent [33,46].
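As a minimal sketch, the DE/current/1 operator above can be written as follows; the toy population shape and the index-sampling details are illustrative.

```python
import numpy as np

def de_current_1(X, i, F, rng):
    """DE/current/1: the target vector X[i] itself serves as the base vector."""
    NP = len(X)
    # two mutually distinct random indices, both different from i
    r1, r2 = rng.choice([j for j in range(NP) if j != i], 2, replace=False)
    return X[i] + F * (X[r1] - X[r2])

rng = np.random.default_rng(1)
X = rng.random((10, 3))      # toy population: 10 individuals, 3 dimensions
V = de_current_1(X, 0, 0.5, rng)
```

Because the base vector is always the target itself, setting F = 0 returns the target unchanged, which makes the strategy's complete dependence on the difference vector explicit.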

4.2. HTSDS Mechanism

Because the decoupling of exploration and exploitation must be considered at different stages of the algorithm, we divide it into two stages according to the number of fitness evaluations (FEs). The extreme properties of DE/current/1 described above are undoubtedly well suited to the first stage, to locate possible global optimal regions. Otherwise, individuals would grow closer together with iteration, diversity would gradually weaken, and the population would no longer be able to jump out of a local optimal region. However, it would be inefficient to apply DE/current/1 as an undifferentiated, directionless search for all individuals in the exploration stage. In the exploitation stage, rapid convergence is needed on the basis of the possible global optimal regions to refine the solution. Therefore, we add a heterogeneous two-subpopulation mechanism.
Figure 2a shows that in the early stage of evolution, individuals are uniformly scattered in the solution space, and inferior individuals are often far away from optimal regions. The distance between inferior individuals is often large, so the difference vector is relatively large, which is suitable for exploring the solution space with a large step size. In contrast, superior individuals may be located in different optimal regions, and the step size of the difference vector formed by them is relatively small, so it is suitable for local exploitation of each optimal region.
According to the above analysis, to further improve the efficiency of DE/current/1 without affecting its exploration performance, we sort the population in ascending order of fitness during each iteration. Let p ∈ (0, 1); the superior set S contains the first p·NP individuals, and the inferior set I contains the remaining individuals. DE/current/1 is then modified to
V_i^t = X_i^t + F · (X_{S1}^t − X_{S2}^t), i ∈ S,
V_i^t = X_i^t + F · (X_{I1}^t − X_{P2}^t), i ∈ I,
where S1, S2 ∈ S (S1 ≠ S2 ≠ i), I1 ∈ I, and P2 ∈ P (I1 ≠ P2 ≠ i). The difference vector of the superior subpopulation is selected only from within itself, while the inferior subpopulation selects the starting vector of its difference from the whole population and the end vector from the inferior subpopulation.
We can conclude from the above formula that the individuals of the superior and inferior subpopulations complete the mutation together, and that information sharing between the two subpopulations depends only on the population division at each iteration; it is not carried out through difference vector selection over the whole population, as in most of the literature [47]. This has two advantages. First, the superior subpopulation can speed up the search and information sharing in the promising region without being disturbed by inferior individuals, while the inferior subpopulation can focus on exploring the solution space and finding potential solutions. Second, when the superior subpopulation stagnates or falls into a local optimum, it can restore diversity and escape through the population division at each iteration. This subtle difference has a significant impact on the performance of the algorithm.
Referring to Figure 2b, when the algorithm enters the exploitation stage, the previous discussion still holds, but as the iteration goes on, the population gradually converges to the optimal region and the gaps among individuals gradually narrow. Although DE/current/1 can keep searching at this point, its convergence efficiency is too low without the guidance of the global optimal individual, so we adopt other mutation strategies to accelerate convergence:
V_i^t = X_{best}^t + F · (X_{S1}^t − X_{S2}^t), i ∈ S,
V_i^t = X_{I1}^t + F · (X_{I2}^t − X_{P3}^t), i ∈ I,
where S1, S2 ∈ S (S1 ≠ S2 ≠ i), I1, I2 ∈ I, and P3 ∈ P (I1 ≠ I2 ≠ P3). The superior subpopulation adopts DE/best/1 because most superior individuals are concentrated in the possible optimal region; that is, the region where the superior individuals are located shows the features of a unimodal function in later stages, so DE/best/1 can achieve faster convergence and refine the solution. In addition, a modified DE/rand/1 is used for inferior individuals, which balances convergence and diversity maintenance to improve the stability of the algorithm. The heterogeneous two-stage double-subpopulation (HTSDS) mechanism is further illustrated in Figure 3.
Through the HTSDS mechanism, the algorithm implements a symmetric framework over both the population and the evolution process: appropriate mutation strategies are assigned to the superior and inferior subpopulations in the two stages of the iteration. Exploration and exploitation can consequently be symmetrically decoupled across different individuals in different stages. At the same time, because of the characteristics of the DE/current/1 strategy, the algorithm has a strong exploration ability in the early stage, which is particularly important for complex multimodal problems, although this reduces the convergence rate to some extent.
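A minimal sketch of one generation of HTSDS mutation, covering the stage-1 and stage-2 formulas above; the array shapes, the minimum subpopulation size, and the index-sampling details are assumptions of this sketch rather than the authors' reference implementation.

```python
import numpy as np

def htsds_mutate(X, fit, F, p, stage, rng):
    """Apply the HTSDS mutation of stage 1 (exploration) or stage 2 (exploitation).

    The population is sorted by ascending fitness; the best p*NP individuals
    form the superior set S and the rest form the inferior set I.
    """
    NP = len(X)
    order = np.argsort(fit)                  # best individuals first
    n_sup = max(3, int(p * NP))              # keep S large enough to sample from
    S, I = order[:n_sup], order[n_sup:]
    S_set = set(S.tolist())
    V = np.empty_like(X)
    for i in range(NP):
        if i in S_set:
            s1, s2 = rng.choice([j for j in S if j != i], 2, replace=False)
            base = X[i] if stage == 1 else X[order[0]]   # stage 2: DE/best/1
            V[i] = base + F * (X[s1] - X[s2])
        elif stage == 1:
            # modified DE/current/1: end vector from I, start vector from P
            i1 = rng.choice([j for j in I if j != i])
            p2 = rng.choice([j for j in range(NP) if j not in (i, i1)])
            V[i] = X[i] + F * (X[i1] - X[p2])
        else:
            # modified DE/rand/1 for the inferior subpopulation
            i1, i2 = rng.choice([j for j in I if j != i], 2, replace=False)
            p3 = rng.choice([j for j in range(NP) if j not in (i1, i2)])
            V[i] = X[i1] + F * (X[i2] - X[p3])
    return V

rng = np.random.default_rng(0)
X, fit = rng.random((20, 4)), rng.random(20)
V1 = htsds_mutate(X, fit, 0.5, 0.5, stage=1, rng=rng)   # exploration stage
V2 = htsds_mutate(X, fit, 0.5, 0.5, stage=2, rng=rng)   # exploitation stage
```

Note that the two subpopulations are re-derived from the current fitness ordering on every call, which is the iteration-wise division the text describes.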

4.3. ATSP Strategy

In HTSDS, we divide the algorithm into two stages according to FEs. The question now is how to set a threshold p_s ∈ (0, 1) so that p_s · Max_FEs can be used for the partition. Problems have varying complexity. For simple unimodal problems, p_s should be small so that the algorithm can converge as soon as possible, while p_s should be increased for complex multimodal problems to raise the exploration proportion and avoid prematurely falling into a local optimum. Therefore, to further improve the performance of the algorithm on different problems, we should try to obtain an abstract mathematical representation of a problem's complexity.
The study of the fitness landscape [35] is important in evolutionary computing, whose purpose is to explain the behavior of evolutionary algorithms in solving optimization problems. Among the many fitness landscape measurement methods, the fitness distance correlation coefficient (FDC) [39] has been used by many evolutionary algorithms for its simplicity and efficiency, and has significantly improved algorithm performance. The FDC-based DE algorithm has been incorporated in mutation strategy selection [48,49] to improve the performance of the algorithm in different optimization problems. We use FDC and design a simple but efficient adaptive two-stage partition strategy (ATSP) to solve the stage partition problem of HTSDS.
The FDC method determines the correlation between the fitness value and the distance to the optimal solution in the search space. Ideally, it requires a representative set of fitness landscape samples and information related to the global optimum. The original definition of FDC is based on the joint distribution of fitness values and distances [39]:
r_FD = c_FD / (s_F · s_D), with c_FD = (1/n) Σ_{i=1}^{n} (f_i − f̄)(d_i − d̄),
where r_FD ∈ [−1, 1], f̄ and d̄ are the means of the fitness values and distances of the n samples, and s_F and s_D are the standard deviations of the fitness value and distance samples, respectively. The distance function is d_i = d(x_i, x_g). The global optimum x_g is generally unknown, so it is replaced by the best candidate solution, x_g = argmin_{x ∈ χ} f(x), where χ is the set of fitness landscape samples.
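A minimal computation of r_FD under these definitions, with the best sample standing in for the unknown global optimum; the uniform toy sample and the sphere landscape are illustrative.

```python
import numpy as np

def fdc(X, fvals):
    """Fitness-distance correlation r_FD = c_FD / (s_F * s_D)."""
    g = np.argmin(fvals)                    # best candidate replaces x_g
    d = np.linalg.norm(X - X[g], axis=1)    # d_i = d(x_i, x_g)
    c_fd = np.mean((fvals - fvals.mean()) * (d - d.mean()))
    return c_fd / (fvals.std() * d.std())

# Toy check: a unimodal (sphere) landscape should give r_FD close to 1
rng = np.random.default_rng(0)
X = rng.uniform(-5.0, 5.0, (200, 2))
r_sphere = fdc(X, np.sum(X**2, axis=1))
```

With population means and standard deviations used throughout, this is exactly the Pearson correlation between fitness and distance, so the result is bounded in [−1, 1].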
Many algorithms adopt the random walk method [49] to obtain the fitness landscape sample set. With no additional computational cost, we instead use the Latin hypercube design (LHD) [50] during initialization, so that the initial population uniformly covers the solution space and can serve as the fitness landscape sample set. The initialization process can be expressed as
x_{i,j}^0 = x_{j,min} + lhd(1, NP) · (x_{j,max} − x_{j,min}),
where i = 1, 2, …, NP, j = 1, 2, …, D, and lhd denotes an LHD function that generates stratified random numbers uniformly covering [0, 1].
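A hand-rolled LHD initializer matching the stratified-coverage idea above: each dimension's range is split into NP equal strata and every stratum receives exactly one individual. Library generators (e.g., scipy.stats.qmc.LatinHypercube) could be substituted; this standalone version is only a sketch.

```python
import numpy as np

def lhd_init(NP, D, x_min, x_max, rng):
    """Latin hypercube initialization: one sample per stratum in every dimension."""
    u = np.empty((NP, D))
    for j in range(D):
        strata = rng.permutation(NP)              # assign each individual a stratum
        u[:, j] = (strata + rng.random(NP)) / NP  # one random point inside it
    return x_min + u * (x_max - x_min)

rng = np.random.default_rng(0)
P0 = lhd_init(30, 5, -100.0, 100.0, rng)
```

Unlike plain uniform sampling, every one of the NP axis strata of each dimension contains exactly one individual, so the initial population doubles as an evenly spread fitness landscape sample set.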
To show problem characterization with FDC more clearly, we choose the 10 optimization problems of CEC2020 [8] and calculate the corresponding FDC in 5, 10, 15, and 20 dimensions from Equations (12) and (13), as shown in Figure 4. The FDC value is generally high for a simple unimodal function such as function 1, but low for complex multimodal functions. As the dimension increases, the complexity of the optimization problems increases and the corresponding FDC gradually decreases.
From Equation (12), we know that when the fitness value decreases as the distance from the global optimal value (for the minimization problem), the FDC value should ideally be close to 1. Once we have chosen FDC as a measure of the complexity of the optimization problem, the question becomes how to properly incorporate it into the algorithm. We introduce the concept of correlation distance,
$$ d_{FD} = \left(1 - r_{FD}\right)/2. \tag{14} $$
Then d_FD ∈ [0, 1]. When d_FD = 0, the optimization problem presents an obvious convex feature to some extent, and it becomes increasingly complex as d_FD grows. So,
$$ p_s = p_s^{lower} + \min\!\left(\left(p_s^{upper} - p_s^{lower}\right) d_{FD},\; p_s^{limit}\right). \tag{15} $$
p_s^limit is set to 0.5 so that p_s will not exceed 1. Through ATSP, we establish a simple linear relationship between the complexity of the optimization problem and the stage division of the algorithm. The algorithm can enter the exploitation stage earlier for simple unimodal problems; otherwise, the exploration stage is extended appropriately to find the global optimum as far as possible.
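The mapping of Equations (14) and (15) is small enough to state directly. The sketch below is illustrative (function and argument names are ours); the default bounds match the values used later in Algorithm 1.

```python
def atsp_stage_threshold(r_fd, ps_lower=0.2, ps_upper=0.7, ps_limit=0.5):
    """Adaptive two-stage partition (Equations (14) and (15)).

    Maps the FDC of the initial population to the stage-switch point p_s:
    simple (high-FDC) problems switch to exploitation early, while complex
    (low-FDC) problems keep exploring longer.
    """
    d_fd = (1.0 - r_fd) / 2.0                            # Equation (14), in [0, 1]
    return ps_lower + min((ps_upper - ps_lower) * d_fd,  # Equation (15)
                          ps_limit)
```

For example, an ideal convex landscape (r_FD = 1) yields p_s = 0.2, while a fully deceptive one (r_FD = −1) yields p_s = 0.7.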

4.4. Parameter Self-Adaptation

The performance of the DE algorithm largely depends on the scaling factor F and crossover rate CR [1], which play a crucial role in its effectiveness, efficiency, and robustness. The mainstream approach is to let the parameters adapt to different problems during iteration. Parameter-adaptive methods based on historical memory and fitness-improvement weights have shown good results [20,21,28], but they may increase the risk of the population falling into a local optimum [22], and the correlation between F and CR has not received enough attention. Therefore, we adopt another parameter-adaptive method [51].
The adaptive process starts with a parameter setting pool, PARM_POOL, and a probability pool, PR_POOL. The initial parameter pool of (F, CR) pairs is [(0.1, 0.2), (0.5, 0.9), (1.0, 0.1), (1.0, 0.9)], and PR_POOL contains the selection probability p_r for each pair of parameters, so each individual selects a parameter pair according to p_r. The adaptation begins after 0.1 · Max_FEs evaluations and adjusts the selection probability based on the performance of each pair of parameters:
$$ \omega_{p_r} = \sum_{i=1}^{n}\left(f\!\left(X_i^{old}\right) - f\!\left(X_i^{new}\right)\right), \tag{16} $$
where ω_{p_r} is the sum of fitness improvements attributed to each pair of parameters, n is the number of individuals that selected that pair, and X_i^new and X_i^old are the post- and pre-evolution individuals, respectively. From this, we can obtain the improvement rate corresponding to each pair of parameters,
$$ p_r = \max\!\left(0.05,\; \frac{\omega_{p_r}}{\sum \omega_{p_r}}\right), \tag{17} $$
where 0.05 is the minimum selection probability guaranteed to each pair of parameters. The corresponding PR_POOL can then be obtained from p_r as
$$ PR\_POOL^{\,t+1} = (1 - c)\cdot PR\_POOL^{\,t} + c\cdot p_r, \tag{18} $$
where c = 0.05 is the learning rate, which transfers knowledge from the performance of each group of parameters in the previous generation to better guide parameter adaptation.
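The update chain of Equations (16)–(18) can be sketched as follows. This is an illustrative vectorized version (the function name is ours; the degenerate case of zero total improvement is not handled):

```python
import numpy as np

def update_pr_pool(pr_pool, improvements, c=0.05, p_min=0.05):
    """Selection-probability update for the parameter pool (Eqs. (16)-(18)).

    improvements : per-pair sums of fitness improvement (the omega values
    of Equation (16)); pairs that improved the population more receive a
    higher selection probability, with floor p_min and learning rate c.
    """
    omega = np.asarray(improvements, dtype=float)
    pr = np.maximum(p_min, omega / omega.sum())        # Equation (17)
    return (1.0 - c) * np.asarray(pr_pool) + c * pr    # Equation (18)
```

Starting from a uniform pool, a pair with a larger improvement sum ends up with a larger selection probability, while every pair keeps at least the 0.05 floor inside the rate term.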

4.5. Complete Procedure of the Proposed TS-MSCDE

Based on the HTSDS and ATSP described above, we now present the complete steps of TS-MSCDE.
(1)
Initialization: The initialization of the proposed algorithm differs from classical DE: it uses LHD [50] instead of random population initialization, and then calculates the corresponding stage-division threshold p_s according to ATSP.
(2)
Mutation: According to the preset threshold p and the calculated threshold p_s, the population is mutated by HTSDS.
(3)
Crossover: We use the same binomial crossover as the classic DE.
(4)
Selection: The same greedy selection strategy as classic DE is adopted; that is, individuals with lower fitness values enter the next iteration.
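Steps (3) and (4) are the classic DE operators referred to as Equations (7) and (8) in Algorithm 1. The sketch below is illustrative (function names are ours), assuming minimization:

```python
import numpy as np

def binomial_crossover(x, v, CR, rng):
    """Classic DE binomial crossover: each gene is taken from the donor v
    with probability CR; one randomly chosen gene always comes from v."""
    D = x.size
    mask = rng.uniform(size=D) < CR
    mask[rng.integers(D)] = True          # guarantee at least one donor gene
    return np.where(mask, v, x)

def greedy_select(x, u, f):
    """Classic DE greedy selection: the trial vector u replaces the target x
    only if it is at least as good (minimization)."""
    return u if f(u) <= f(x) else x
```

The forced donor gene ensures the trial vector differs from the target even when CR is very small.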
The pseudo-code of TS-MSCDE is expressed in Algorithm 1.
Algorithm 1 Pseudo-Code for the TS-MSCDE
  • Step 1: Initialization
  • Set iteration t = 0, fitness evaluations FEs = 0, population partition threshold p = 0.5, and stage partition thresholds p_s^lower = 0.2, p_s^upper = 0.7;
  • Set parameter candidate pool PARM_POOL = {(F = 0.1, CR = 0.2), (F = 0.5, CR = 0.9), (F = 1.0, CR = 0.1), (F = 1.0, CR = 0.9)};
  • Initialize a population of NP individuals P^t = {X_1^t, …, X_NP^t}, with X_i^t = (x_{i,1}^t, x_{i,2}^t, …, x_{i,D}^t) and each individual generated by Equation (13);
  • Calculate the stage partition threshold p_s by Equations (14) and (15);
  • Step 2: Evolution Iteration
  • WHILE FEs < Max_FEs
  • DO
  •   IF FEs < 0.1 · Max_FEs
  •     Set PR_POOL as [0.85, 0.05, 0.05, 0.05];
  •   ELSE
  •     Set PR_POOL by Equation (18);
  •   END IF
  •   Step 2.1: Mutation Operation
  •   Reindex the individuals of the current population in ascending order of their fitness values; the superior subpopulation (S) consists of the top p·NP individuals, and the inferior subpopulation (I) consists of the remaining individuals;
  •   FOR i = 1 to NP
  •     Generate F and CR from PARM_POOL by probability pool PR_POOL;
  •     IF FEs < p_s · Max_FEs
  •       Generate donor vectors V_i^t by Equation (10);
  •     ELSE
  •       Generate donor vectors V_i^t by Equation (11);
  •     END IF
  •   END FOR
  •   Step 2.2: Crossover Operation
  •   FOR i = 1 to N P
  •     Generate trial vectors U_i^t by Equation (7);
  •   END FOR
  •   Step 2.3: Selection Operation
  •   FOR i = 1 to N P
  •     Generate new vectors X_i^{t+1} by Equation (8);
  •   END FOR
  •   Step 2.4: Parameter Adaptation
  •     Recalculate the fitness value improvement rate for PR_POOL by Equations (16) and (17);
  •   Step 2.5: Increase the Count
  •    FEs = FEs + NP;
  •    t = t + 1;
  • END WHILE
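The overall loop of Algorithm 1 can be sketched compactly. The code below is a structural illustration only: the actual stage-wise strategy combinations of HTSDS (Equations (10) and (11)) and the PR_POOL-based parameter adaptation are not reproduced; as stand-ins, the exploration stage uses DE/current/1 for all individuals, while the exploitation stage uses DE/current-to-best/1 for the superior subpopulation and DE/rand/1 for the inferior one, with fixed F and CR. All function and variable names are ours.

```python
import numpy as np

def ts_mscde_sketch(f, D, x_min, x_max, NP=50, max_fes=10_000,
                    p=0.5, ps=0.5, seed=0):
    """Structural sketch of the TS-MSCDE loop (Algorithm 1); placeholder
    mutation strategies stand in for Equations (10) and (11)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(x_min, x_max, (NP, D))   # LHD in the real algorithm
    fit = np.array([f(x) for x in X])
    fes = NP
    F, CR = 0.5, 0.9                         # fixed here; adaptive in the paper
    while fes < max_fes:
        order = np.argsort(fit)              # superior = best p*NP individuals
        X, fit = X[order], fit[order]
        best = X[0]
        for i in range(NP):
            r1, r2 = rng.choice(NP, 2, replace=False)
            if fes < ps * max_fes:                       # stage 1: exploration
                v = X[i] + F * (X[r1] - X[r2])           # DE/current/1
            elif i < p * NP:                             # stage 2, superior
                v = X[i] + F * (best - X[i]) + F * (X[r1] - X[r2])
            else:                                        # stage 2, inferior
                v = X[rng.integers(NP)] + F * (X[r1] - X[r2])  # DE/rand/1
            v = np.clip(v, x_min, x_max)
            mask = rng.uniform(size=D) < CR              # binomial crossover
            mask[rng.integers(D)] = True
            u = np.where(mask, v, X[i])
            fu = f(u)
            fes += 1
            if fu <= fit[i]:                             # greedy selection
                X[i], fit[i] = u, fu
        # (PR_POOL parameter adaptation of Step 2.4 omitted in this sketch)
    return X[np.argmin(fit)], fit.min()
```

On a simple sphere function this skeleton converges as expected: the exploration stage keeps diversity, and the exploitation stage pulls the superior subpopulation toward the incumbent best.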

4.6. Complexity Analysis

The complexity of classical DE is O(T_max · NP · D), where T_max is the maximum number of iterations, NP is the population size, and D is the problem dimension. Compared with classical DE, the extra computation of TS-MSCDE mainly lies in HTSDS, ATSP, and the PR_POOL calculation.
In HTSDS (see lines 17 to 27 of Algorithm 1), the main operations are sorting the population to separate the superior and inferior subpopulations and then selecting the corresponding mutation strategy. Sorting NP individuals costs O(NP log NP), so the complexity added by HTSDS per iteration is O(NP log NP).
The ATSP (see lines 1 to 8 of Algorithm 1) mainly consists of LHD initialization and FDC calculation. The complexity corresponding to LHD is O N P D , and the complexity corresponding to FDC is O N P D , so the complexity increased by ATSP is O N P D .
The parameter adaptation procedure (see lines 36 to 37 of Algorithm 1) mainly consists of recalculating the fitness improvement rate, which adds O(NP) complexity.
Adding these to the complexity of classical DE, the complexity of the whole TS-MSCDE is O(NP·D + T_max·(NP·D + NP·log NP + NP)) = O(T_max·NP·(D + log NP)), so the proposed algorithm retains the computational efficiency of classical DE.

5. Experiments and Comparisons

In this section, the computational results of the proposed algorithm are discussed alongside other state-of-the-art algorithms. First, the characteristics of DE/best/1, DE/rand/1, DE/current-to-best/1, DE/current/1, and HTSDS were compared on the CEC2017 benchmark set [7]. Second, we validated the effects of ATSP. Finally, an overall performance comparison between TS-MSCDE and advanced DE variants was conducted on the CEC2020 benchmark set [8]. All simulations were performed on an Intel Core i7 CPU at 3.4 GHz with 16 GB RAM.

5.1. Validation and Discussion of Classical Mutation Strategies and HTSDS

Although the characteristics of D E / b e s t / 1 , D E / r a n d / 1 , and D E / c u r r e n t t o b e s t / 1 have been described [25], comparison with D E / c u r r e n t / 1 is rarely mentioned. Therefore, we compare the convergence rates of the four mutation strategies and the proposed HTSDS on typical optimization problems.
CEC2017 [7], as used in this paper, contains 30 optimization functions in four categories: unimodal (F1–F3), simple multimodal (F4–F10), hybrid (F11–F20), and composition (F21–F30). This benchmark set has been used to evaluate the effectiveness of many algorithms in single-objective numerical optimization.
We chose four functions with distinct features: F1, F7, F20, and F25 (see CEC2017 for the features of the optimization functions). Under the classical DE framework, the initial control parameters were set as D = 10, Max_FEs = 10,000·D, NP = 18·D, F = rand(0, 1), CR = rand(0, 1). Parameters for HTSDS were set as p = 0.5, p_s = 0.5. We abbreviate DE/best/1, DE/rand/1, DE/current-to-best/1, and DE/current/1 as best, rand, current-to-best, and current, respectively.
Convergence curves are shown in Figure 5. As expected, DE/best/1 achieved the fastest convergence on the four optimization functions, which is particularly valuable for the unimodal function F1 (Figure 5a), but it also fell into a local optimum earliest on the multimodal function F7 (Figure 5b). DE/current/1 converged relatively slowly, especially once the difference vectors became small as iteration continued. It should be pointed out that HTSDS focuses on full exploration in the early stage, with convergence slower than or equal to DE/current/1, while in the later stage it focuses on accelerating convergence, and its convergence speed improved significantly. On F25 (Figure 5d), whose global optimum is widely recognized as difficult to find, its curve dropped lowest among all mutation strategies, meaning a better solution was found within a limited number of iterations.
We further discuss the characteristics of HTSDS and the other four mutation strategies using the same settings. Because some complex problems are difficult to optimize, we add a new comparison algorithm HTSDS*, where * denotes Max_FEs = 200,000·D. To reduce errors due to randomness, each algorithm was executed 51 times independently; the results for D = 10 are shown in Table A1, which reports the mean (Mean) and standard deviation (Std.Dev) of the error value f(X) − f(X*), where f(X*) is the global optimal value. The table also gives the rank of the mean value of each algorithm on each test function. The "+", "−", and "=" counts at the end of the table indicate the number of functions on which HTSDS* is respectively better than, worse than, or equal to each comparison algorithm.
It is not difficult to see from Table A1 that, when D = 10, most mutation strategies could find several of the global optima of F1–F4. Owing to its slow convergence in the late stage, DE/current/1 could not find the global optimum of any function. From F5 onward, DE/best/1 ranked relatively low among the six algorithms because it was easily trapped in local optima. Although DE/current-to-best/1 achieved good results on F1–F3, it performed poorly on F30 due to poor robustness, whereas DE/rand/1, which converged similarly, performed more evenly across all functions. Most importantly, without additional computing resources, HTSDS showed average performance overall but ranked relatively high on F21–F30. With the additional computing resources, HTSDS* achieved outstanding performance: it ranked first on 29 functions and was superior to each of the other five algorithms on more than 20 functions. It can be concluded that HTSDS gives full play to its strengths when computing resources are sufficient.

5.2. Verifying Validity of ATSP

Section 5.1 showed that HTSDS accelerates late-stage convergence compared with DE/current/1. However, the stage partition parameter p_s = 0.5 may be too large for unimodal functions and too small for some multimodal functions. Therefore, we use the previous algorithm settings to further examine the effectiveness of ATSP.
We set p_s^lower = 0.2 and p_s^upper = 0.7 for ATSP. For comprehensive verification, Max_FEs is set to 200,000·D, 50,000·D, and 10,000·D for the pairs HTSDS-3/HTSDS+ATSP-3, HTSDS-2/HTSDS+ATSP-2, and HTSDS-1/HTSDS+ATSP-1, respectively. We add a new comparison item, avgFEs, in Table A2 to represent the average number of FEs required for the algorithm to reach the optimum; an avgFEs of 0 indicates the global optimum was not found.
On the simple functions F1–F4, all algorithms accurately found the global optimum in every run, and HTSDS+ATSP-3 required fewer avgFEs than HTSDS-3 throughout; that is, it converged faster. On the complex functions F5–F30, HTSDS+ATSP-3 did not deteriorate because of ATSP; instead, on functions such as F19 and F22, more FEs were invested to further improve solution accuracy. From the comparison results in Table A2, HTSDS+ATSP-3 achieved better results than HTSDS-3 on 13 functions with the same computing resources. Moreover, as computing resources shrink, the benefit of ATSP diminishes across the function set as a whole. We can conclude that ATSP adapts better than fixed parameters when computing resources are relatively sufficient.

5.3. Comparison against State-of-the-Art Algorithms

It can be inferred from the previous experiments that the mutation operator adopted by TS-MSCDE requires relatively sufficient computing resources. To verify whether TS-MSCDE performs comparably to other advanced algorithms under such conditions, we introduce the CEC2020 benchmark set [8]. Unlike sets such as CEC2017, CEC2020 trims the test set down to the 10 most representative optimization functions. Moreover, in previous CEC competitions, the maximum allowed number of function evaluations (Max_FEs) did not scale exponentially with dimension, unlike problem complexity. To address this disparity, CEC2020 significantly increases Max_FEs for the 10 scalable benchmark problems beyond prior contest limits, with the goal of determining how far this extra budget translates into improved solution accuracy.
We set D = 5, Max_FEs = 50,000; D = 10, Max_FEs = 1,000,000; D = 15, Max_FEs = 3,000,000; and D = 20, Max_FEs = 10,000,000. Several currently popular DE variants, including algorithms ranked high on CEC2020, were chosen for comparison with TS-MSCDE, and each algorithm was executed 30 times independently. These algorithms are:
(1)
SHADE: An efficient DE variant using a D E / c u r r e n t t o p b e s t / 1 mutation strategy. SHADE has been widely studied and used for comparison [21].
(2)
LSHADE: Based on SHADE, the linear population decline mechanism is adopted, which significantly improves performance. This is a widely used DE variant [28].
(3)
j2020: This algorithm uses a crowding mechanism and a strategy to select the mutant vector from two subpopulations. The algorithm won third place in the CEC2020 numerical optimization competition [52].
(4)
AGSK: This new algorithm derived from human knowledge sharing won second place in the CEC2020 numerical optimization competition [51].
(5)
IMODE: A hybrid algorithm with three differential mutation strategies and a sequential quadratic programming local search. It was awarded first place in the CEC2020 numerical optimization competition [53].
Table A3 lists the performance of the six algorithms on F1–F10 when D = 5. TS-MSCDE found the global optimum on F1, F5, F6, F7, and F8; IMODE found the global optimum on F9; and the other four algorithms found fewer global optima. The global optimum of F10 is generally acknowledged to be difficult to find, yet TS-MSCDE achieved outstanding performance on it. Overall, TS-MSCDE ranked first on six of the optimization functions, IMODE ranked first on eight, and the other four algorithms lagged behind.
Table A4 shows the results when D = 10. Different from D = 5, as the dimension increases, the ability of all algorithms to find the global optimum stably decreases. Nevertheless, TS-MSCDE achieved first place on F1, F3, F8, and F10, while IMODE achieved first place on F1, F4, F6, F7, and F9. It is worth mentioning that j2020 achieved first place on F1, F2, and F5. The performance of the other three algorithms was comparatively unremarkable, and they ranked lower on each function.
Table A5 shows the results when D = 15. All six algorithms could still find the global optimum of F1 stably across 30 runs. TS-MSCDE ranked first on F3, F9, and F10 and still led the average rank by a clear margin, while IMODE ranked first among all algorithms on F4, F7, and F8. Generally speaking, TS-MSCDE and IMODE performed closely, and the other four algorithms lagged behind.
Table A6 shows the results when D = 20. The complexity of F9 and F10 increases substantially at this dimension; experiments showed that many algorithms easily fall into local optima on these two functions. TS-MSCDE still achieved outstanding results on F10, matched IMODE on F9, and found the global optimum on F3 and F8 with a certain probability. From the final results, TS-MSCDE and IMODE each ranked first on five optimization functions, but IMODE ranked slightly higher than TS-MSCDE overall.
From Table A3, Table A4, Table A5 and Table A6, it can be seen that TS-MSCDE and IMODE performed close to each other on CEC2020, with results exceeding those of the other four algorithms. In addition to the numerical comparisons, we compare solution quality statistically. In Table 1, the Wilcoxon signed-rank test (significance level α = 0.05; R+ is the sum of ranks for the problems on which the first algorithm outperformed the second, and R− is the sum of ranks for the opposite [54]) is used to compare the proposed algorithm with its competitors. From D = 5 to D = 20, the p-value obtained by TS-MSCDE in five rounds of comparison was less than 0.05, indicating a significant difference between TS-MSCDE and the other algorithms. We also find that TS-MSCDE won 16 rounds of comparison in terms of R+ and R−, which again reflects that TS-MSCDE achieved more promising results. The same conclusion can be drawn from Tables A3–A6.
In Table 2, the Friedman test, a widely used nonparametric test in the EA community, is used to validate the performance of the algorithms based on the mean of 30 independent runs. The p-values from 5D to 20D calculated by the Friedman test are all less than 0.05, so there are significant differences in the performance of the compared algorithms in the corresponding dimensions. From 5D to 15D, TS-MSCDE ranked first, significantly better than AGSK, j2020, LSHADE, and SHADE, and slightly better than IMODE. On 20D, TS-MSCDE fell to third, behind IMODE and AGSK. Overall, TS-MSCDE ranked first by a slight margin, with IMODE second; the gap between the two was small, and both clearly exceeded the other four algorithms.

5.4. Sensitivity Analysis of Population Partition Parameter p

In the above experiments, we verified the proposed HTSDS, ATSP, and TS-MSCDE on the benchmark functions. In HTSDS, we set the population partition parameter p = 0.5 to divide the population into two subpopulations. In this section, we further examine whether TS-MSCDE is sensitive to changes in p.
First, we set p as a sequence from 0.1 to 0.9 with step size 0.1. At the same time, for intuitive statistics, we introduce the cumulative rank normalization formula:
$$ SR_p^D = \frac{\sum_{f=1}^{10} MR_{p,f}^{D}}{\sum_{p=0.1}^{0.9}\sum_{f=1}^{10} MR_{p,f}^{D}}, \tag{19} $$
where D ∈ {5, 10, 15, 20}, f indexes the functions in CEC2020, and MR_{p,f}^D is the rank of the mean value achieved by TS-MSCDE in the given dimension for the corresponding p. From Equation (19), the rank performance of the algorithm under different p values in different dimensions can be obtained, as shown in Figure 6. The figure shows that on 5D the rank performance is relatively stable for p from 0.3 to 0.7 and poor outside that range. Between 10D and 20D the performance of TS-MSCDE fluctuates slightly, but the general trend is that performance gradually deteriorates once p exceeds 0.5, while below 0.5 the behavior varies. The reasons can be analyzed as follows. When p is small, the inferior subpopulation is large: following the description in Section 4.2, most individuals then maintain population diversity while only a few focus on local search, which can cause rapid convergence and frequent stagnation of the superior subpopulation, leading to unstable optimization results and slow convergence. When p is large, the inferior subpopulation is small, making it difficult to maintain population diversity and to locate the region containing the global optimum, so performance gradually deteriorates as p increases. In summary, considering both optimization accuracy and convergence speed, p is set to 0.5.
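Equation (19) reduces to a one-line reduction over the rank matrix. The sketch below is illustrative (the function name is ours); rows correspond to the nine p values and columns to the ten CEC2020 functions in one dimension.

```python
import numpy as np

def cumulative_rank_normalization(MR):
    """Equation (19): normalized cumulative rank for each p value.

    MR : (9, 10) array of mean-value ranks, rows = p in {0.1, ..., 0.9},
    columns = the ten CEC2020 functions at one dimension D.
    A lower SR(p) means that p performed better across the whole suite.
    """
    MR = np.asarray(MR, dtype=float)
    return MR.sum(axis=1) / MR.sum()
```

By construction, the nine SR values sum to 1, so they can be compared directly across dimensions, as in Figure 6.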

5.5. TS-MSCDE on Real-World Optimization Problems

In the previous sections, we applied TS-MSCDE to CEC2020 to show that the algorithm achieves excellent results given sufficient computing resources. To further extend the practical implications of this research, four real-world optimization problems from the CEC2011 test suite [9] are selected to judge the characteristics of TS-MSCDE. The first real-world problem (F1) is parameter estimation for frequency-modulated sound waves (Problem no. 1 in CEC2011). The second (F2) is spread-spectrum radar polyphase code design (Problem no. 7 in CEC2011). The third (F3) is the 12th problem in CEC2011, the so-called "Messenger" spacecraft trajectory optimization problem. The fourth (F4) is the 13th problem in CEC2011, the "Cassini 2" spacecraft trajectory optimization problem. To ensure the effectiveness of TS-MSCDE, we set Max_FEs = 6 × 10,000, 20 × 1,000,000, 26 × 1,000,000, and 22 × 1,000,000 for F1–F4, based on their problem dimensions; other settings are consistent with Section 5.3.
As shown in Table 3, TS-MSCDE achieves excellent results on the four real-world optimization problems. It is worth mentioning that all four are typical complex multimodal problems on which most algorithms struggle to obtain satisfactory solutions.

6. Conclusions and Future Work

Numerous complex optimization problems exist in the real world for which satisfactory solutions are difficult to obtain. When computing resources are sufficient, our first choice is to make full use of them to improve solution quality. In this paper, we propose a two-stage differential evolution algorithm with mutation strategy combination (TS-MSCDE) focused on this setting.
TS-MSCDE contains two search stages adaptively partitioned by ATSP and assigns different mutation strategies to superior and inferior individuals with HTSDS, so as to realize symmetric decoupling of exploration and exploitation. Experiments were conducted on the CEC2017, CEC2020, and CEC2011 benchmark sets. On CEC2017, HTSDS+ATSP achieved outstanding performance with sufficient computing resources. Given the curse of dimensionality, computing resources should grow exponentially with dimension; we therefore selected the classic DE variants SHADE and LSHADE together with IMODE, j2020, and AGSK (the top three algorithms on CEC2020) for comparative experiments, analyzing performance with the Friedman and Wilcoxon tests. Further, four real-world optimization problems from CEC2011 were selected to verify the efficiency of TS-MSCDE. The results show that TS-MSCDE can tackle extremely difficult problems when computing resources are sufficient.
While TS-MSCDE exhibits a number of desirable characteristics, more study is required. Although HTSDS and ATSP greatly accelerate the convergence of TS-MSCDE, the problem of slow convergence has not been completely solved; it still appears on some complex optimization problems as the dimension grows. Future work will therefore focus on further improving the performance of TS-MSCDE under limited computing resources. Moreover, more application scenarios are worth exploring to demonstrate TS-MSCDE's practical significance.

Author Contributions

Conceptualization: X.S., D.W. and H.K.; Data curation, X.S. and D.W.; Formal analysis, Y.S.; Investigation, Q.C. All authors have read and agreed to the published version of the manuscript.

Funding

No funding for this research.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

We certify that we have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.

Appendix A

Table A1. Comparison results of mutation strategy on CEC2017 test suite (D = 10).
	HTSDS*	HTSDS	current	best	rand	current−to−best
F1	Mean	0.00E+00	0.00E+00	4.54E+04	0.00E+00	3.30E−03	0.00E+00
F1	Std.Dev	0.00E+00	0.00E+00	1.07E+04	0.00E+00	2.06E−03	0.00E+00
F1	rank	1	1	6	1	5	1
F2	Mean	0.00E+00	0.00E+00	4.53E+00	0.00E+00	0.00E+00	0.00E+00
F2	Std.Dev	0.00E+00	0.00E+00	4.57E+00	0.00E+00	0.00E+00	0.00E+00
F2	rank	1	1	6	1	1	1
F3	Mean	0.00E+00	0.00E+00	3.62E+02	0.00E+00	1.15E−02	0.00E+00
F3	Std.Dev	0.00E+00	0.00E+00	9.98E+01	0.00E+00	1.27E−02	0.00E+00
F3	rank	1	1	6	1	5	1
F4	Mean	0.00E+00	6.60E−05	8.33E−01	0.00E+00	1.44E+00	7.24E−07
F4	Std.Dev	0.00E+00	2.77E−04	3.28E−01	0.00E+00	3.78E−01	1.08E−06
F4	rank	1	4	5	1	6	3
F5	Mean	3.15E+00	1.05E+01	1.77E+01	1.51E+01	6.62E+00	6.46E+00
F5	Std.Dev	1.17E+00	3.27E+00	3.06E+00	6.45E+00	2.40E+00	1.74E+00
F5	rank	1	4	6	5	3	2
F6	Mean	0.00E+00	1.86E−05	2.85E+00	7.91E−02	0.00E+00	0.00E+00
F6	Std.Dev	0.00E+00	4.15E−05	7.60E−01	2.14E−01	0.00E+00	0.00E+00
F6	rank	1	4	6	5	1	1
F7	Mean	1.03E+01	2.31E+01	3.05E+01	2.76E+01	2.20E+01	1.85E+01
F7	Std.Dev	2.43E+00	5.10E+00	5.08E+00	7.37E+00	2.31E+00	1.47E+00
F7	rank	1	4	6	5	3	2
F8	Mean	3.55E+00	1.10E+01	1.79E+01	1.76E+01	7.58E+00	6.81E+00
F8	Std.Dev	1.42E+00	3.72E+00	3.17E+00	6.12E+00	2.22E+00	1.74E+00
F8	rank	1	4	6	5	3	2
F9	Mean	0.00E+00	0.00E+00	2.29E+01	3.45E−01	0.00E+00	0.00E+00
F9	Std.Dev	0.00E+00	0.00E+00	1.05E+01	5.91E−01	0.00E+00	0.00E+00
F9	rank	1	1	6	5	1	1
F10	Mean	8.90E+01	3.17E+02	4.98E+02	5.22E+02	4.29E+02	4.03E+02
F10	Std.Dev	7.39E+01	1.79E+02	1.14E+02	2.72E+02	1.06E+02	1.20E+02
F10	rank	1	2	5	6	4	3
F11	Mean	3.32E−02	2.37E+00	6.84E+00	1.72E+01	6.67E−01	1.47E+00
F11	Std.Dev	1.82E−01	1.68E+00	1.51E+00	1.37E+01	6.54E−01	9.27E−01
F11	rank	1	4	5	6	2	3
F12	Mean	1.60E−01	3.18E+02	8.44E+03	4.41E+02	3.55E+02	2.00E+02
F12	Std.Dev	1.52E−01	1.59E+02	3.34E+03	2.04E+02	1.19E+02	1.40E+02
F12	rank	1	3	6	5	4	2
F13	Mean	1.13E+00	7.16E+00	2.55E+01	2.69E+01	6.98E+00	4.98E+00
F13	Std.Dev	6.26E−01	4.51E+00	6.31E+00	4.35E+01	2.07E+00	3.39E+00
F13	rank	1	4	5	6	3	2
F14	Mean	0.00E+00	3.17E+00	1.53E+01	2.61E+01	7.40E−01	6.14E+00
F14	Std.Dev	0.00E+00	1.66E+00	3.06E+00	1.93E+01	5.49E−01	8.16E+00
F14	rank	1	3	5	6	2	4
F15	Mean	1.92E−03	1.65E+00	5.03E+00	9.57E+00	3.92E−01	4.95E−01
F15	Std.Dev	2.79E−03	7.50E−01	1.18E+00	1.34E+01	3.22E−01	2.24E−01
F15	rank	1	4	5	6	2	3
F16	Mean	1.80E−01	2.12E+00	7.08E+00	8.02E+01	7.01E−01	1.83E+00
F16	Std.Dev	1.57E−01	3.28E+00	2.44E+00	9.50E+01	2.96E−01	6.60E−01
F16	rank	1	4	5	6	2	3
F17	Mean	4.72E−02	8.90E+00	2.60E+01	4.27E+01	7.32E+00	1.16E+01
F17	Std.Dev	1.35E−01	9.26E+00	5.34E+00	3.85E+01	1.71E+00	6.47E+00
F17	rank	1	3	5	6	2	4
F18	Mean	1.43E−03	6.52E+00	2.91E+01	2.69E+01	7.57E−01	6.19E+00
F18	Std.Dev	1.65E−03	7.30E+00	2.45E+00	2.42E+01	4.67E−01	8.83E+00
F18	rank	1	4	6	5	2	3
F19	Mean	4.53E−03	6.94E−01	3.14E+00	3.76E+00	1.09E−01	7.78E−01
F19	Std.Dev	8.36E−03	5.55E−01	5.00E−01	2.26E+00	8.89E−02	6.96E−01
F19	rank	1	3	5	6	2	4
F20	Mean	0.00E+00	6.26E−01	2.36E+01	2.13E+01	8.70E−09	5.47E+00
F20	Std.Dev	0.00E+00	6.66E−01	4.72E+00	2.21E+01	2.65E−08	7.08E+00
F20	rank	1	3	6	5	2	4
F21	Mean	9.67E+01	1.01E+02	1.07E+02	1.70E+02	1.15E+02	1.19E+02
F21	Std.Dev	1.83E+01	1.16E+00	1.44E+00	6.19E+01	3.86E+01	4.92E+01
F21	rank	1	2	3	6	4	5
F22	Mean	1.37E+00	3.26E+01	3.93E+01	9.35E+01	1.00E+02	9.72E+01
F22	Std.Dev	4.30E+00	1.66E+01	8.61E+00	2.29E+01	1.48E−01	1.84E+01
F22	rank	1	2	3	4	6	5
F23	Mean	2.73E+02	2.90E+02	2.61E+02	3.19E+02	3.06E+02	3.07E+02
F23	Std.Dev	9.08E+01	7.89E+01	1.12E+02	8.08E+00	2.34E+00	1.91E+00
F23	rank	2	3	1	6	4	5
F24	Mean	8.81E+01	1.00E+02	1.16E+02	3.07E+02	2.14E+02	2.68E+02
F24	Std.Dev	3.12E+01	1.15E−13	1.65E+01	9.45E+01	1.24E+02	1.04E+02
F24	rank	1	2	3	6	4	5
F25	Mean	9.34E+01	1.60E+02	1.84E+02	4.32E+02	4.06E+02	4.16E+02
F25	Std.Dev	2.54E+01	1.21E+02	2.98E+01	2.20E+01	1.76E+01	2.28E+01
F25	rank	1	2	3	6	4	5
F26	Mean	4.00E+01	1.47E+02	1.54E+02	3.41E+02	3.00E+02	3.00E+02
F26	Std.Dev	8.14E+01	1.50E+02	9.40E+01	7.10E+01	3.75E−10	8.44E−14
F26	rank	1	2	3	6	5	4
F27	Mean	3.87E+02	3.89E+02	3.90E+02	3.98E+02	3.89E+02	3.90E+02
F27	Std.Dev	6.95E−01	7.10E−01	4.73E−01	9.94E+00	9.73E−01	2.22E+00
F27	rank	1	2	5	6	3	4
F28	Mean	1.77E+02	2.10E+02	2.74E+02	4.09E+02	2.79E+02	3.66E+02
F28	Std.Dev	1.30E+02	1.40E+02	9.54E+01	1.38E+02	1.08E+02	1.27E+02
F28	rank	1	2	3	6	4	5
F29	Mean	2.11E+02	2.47E+02	2.71E+02	2.96E+02	2.56E+02	2.49E+02
F29	Std.Dev	2.96E+01	2.13E+01	1.67E+01	4.30E+01	4.84E+00	6.13E+00
F29	rank	1	2	5	6	4	3
F30	Mean	3.97E+02	8.77E+02	6.05E+03	2.47E+05	5.73E+02	1.24E+05
F30	Std.Dev	4.91E+00	4.08E+02	5.38E+03	4.27E+05	6.77E+01	3.27E+05
F30	rank	1	3	4	6	2	5
(#)Best	29	4	1	4	3	5
(#)+	26	29	26	27	25
(#)=	4	0	4	3	5
(#)−	0	1	0	0	0
Table A2. Comparison effectiveness of ATSP on CEC2017 test suite (D = 10).
	HTSDS-3	HTSDS+ATSP-3	HTSDS-2	HTSDS+ATSP-2	HTSDS-1	HTSDS+ATSP-1
F1	Mean	0.00E+00	0.00E+00	0.00E+00	0.00E+00	0.00E+00	0.00E+00
F1	Std.Dev	0.00E+00	0.00E+00	0.00E+00	0.00E+00	0.00E+00	0.00E+00
F1	avgFEs	543,690	542,460	268,502	214,874	86,615	77,377
F2	Mean	0.00E+00	0.00E+00	0.00E+00	0.00E+00	0.00E+00	6.45E−02
F2	Std.Dev	0.00E+00	0.00E+00	0.00E+00	0.00E+00	0.00E+00	3.59E−01
F2	avgFEs	110,298	109,758	110,543	109,736	60,317	62,501
F3	Mean	0.00E+00	0.00E+00	0.00E+00	0.00E+00	0.00E+00	0.00E+00
F3	Std.Dev	0.00E+00	0.00E+00	0.00E+00	0.00E+00	0.00E+00	0.00E+00
F3	avgFEs	428,832	424,428	262,841	333,238	76,024	89,785
F4	Mean	0.00E+00	0.00E+00	0.00E+00	0.00E+00	2.62E−07	5.04E−07
F4	Std.Dev	0.00E+00	0.00E+00	0.00E+00	0.00E+00	6.69E−07	1.97E−06
F4	avgFEs	793,938	791,406	278,692	246,792	45,842	61,850
F5	Mean	3.15E+00	3.22E+00	3.27E+00	3.40E+00	1.02E+01	1.09E+01
F5	Std.Dev	1.17E+00	1.40E+00	1.78E+00	1.87E+00	3.74E+00	4.33E+00
F5	avgFEs	0	32,082	14,127	24,567	0	0
F6	Mean	0.00E+00	0.00E+00	0.00E+00	0.00E+00	2.05E−05	3.09E−05
F6	Std.Dev	0.00E+00	0.00E+00	0.00E+00	0.00E+00	3.82E−05	9.84E−05
F6	avgFEs	1,032,570	945,540	306,964	290,781	6265	17,925
F7	Mean	1.03E+01	1.25E+01	1.32E+01	1.31E+01	2.27E+01	2.12E+01
F7	Std.Dev	2.43E+00	2.60E+00	2.44E+00	1.47E+00	6.33E+00	6.03E+00
F7	avgFEs	0	0	0	0	0	0
F8	Mean	3.55E+00	3.38E+00	3.95E+00	3.59E+00	1.14E+01	1.12E+01
F8	Std.Dev	1.42E+00	1.51E+00	1.86E+00	2.31E+00	3.87E+00	4.14E+00
F8	avgFEs	0	0	0	27,453	0	0
F9	Mean	0.00E+00	0.00E+00	0.00E+00	0.00E+00	0.00E+00	0.00E+00
F9	Std.Dev	0.00E+00	0.00E+00	0.00E+00	0.00E+00	0.00E+00	0.00E+00
F9	avgFEs	946,686	920,790	269,878	270,755	80,721	83,009
F10	Mean	8.90E+01	8.00E+01	1.52E+02	1.63E+02	3.05E+02	3.15E+02
F10	Std.Dev	7.39E+01	7.37E+01	6.62E+01	8.95E+01	1.43E+02	1.44E+02
F10	avgFEs	0	0	0	0	0	0
F11	Mean	3.32E−02	3.32E−02	8.02E−01	8.67E−01	2.63E+00	3.56E+00
F11	Std.Dev	1.82E−01	1.82E−01	7.45E−01	8.02E−01	1.42E+00	2.40E+00
F11	avgFEs	1,000,914	1,167,558	131,806	127,510	2770	0
F12	Mean	1.60E−01	1.80E−01	6.99E+00	6.26E+00	2.54E+02	2.42E+02
F12	Std.Dev	1.52E−01	1.79E−01	2.32E+01	2.13E+01	2.24E+02	1.50E+02
F12	avgFEs	463,290	461,988	70,955	19,510	0	0
F13	Mean	1.13E+00	1.26E+00	2.31E+00	2.18E+00	8.81E+00	8.82E+00
F13	Std.Dev	6.26E−01	5.80E−01	1.44E+00	1.01E+00	4.01E+00	3.65E+00
F13	avgFEs	139,392	39,420	22,610	29,892	0	0
F14	Mean	0.00E+00	9.59E−07	5.78E−01	4.19E−01	3.19E+00	3.69E+00
F14	Std.Dev	0.00E+00	5.25E−06	5.61E−01	4.98E−01	1.98E+00	2.47E+00
F14	avgFEs	1,018,974	1,109,688	158,284	205,467	0	2694
F15	Mean	1.92E−03	1.64E−03	5.69E−02	4.45E−02	1.39E+00	1.63E+00
F15	Std.Dev	2.79E−03	3.16E−03	1.96E−01	1.78E−01	8.64E−01	9.54E−01
F15	avgFEs	0	0	0	0	0	0
F16	Mean	1.80E−01	8.39E−02	4.13E−01	3.95E−01	2.28E+00	1.71E+00
F16	Std.Dev	1.57E−01	9.27E−02	2.48E−01	2.69E−01	3.74E+00	2.70E+00
F16	avgFEs	0	0	0	0	0	0
F17	Mean	4.72E−02	2.25E−02	3.80E−01	7.73E−01	6.74E+00	6.31E+00
F17	Std.Dev	1.35E−01	7.89E−02	6.11E−01	1.12E+00	7.67E+00	7.32E+00
F17	avgFEs	889,356	1,133,622	73,794	84,246	0	0
F18	Mean	1.43E−03	1.19E−03	2.10E−02	4.66E−02	6.35E+00	6.37E+00
F18	Std.Dev	1.65E−03	1.31E−03	3.85E−02	8.85E−02	7.65E+00	8.28E+00
F18	avgFEs	0	0	12,896	0	0	0
F19	Mean	4.53E−03	1.94E−03	5.01E−03	2.55E−03	5.07E−01	5.35E−01
F19	Std.Dev	8.36E−03	5.93E−03	8.64E−03	6.61E−03	5.86E−01	5.24E−01
F19	avgFEs	872,298	1,033,962	303,091	349,066	0	0
F20	Mean	0.00E+00	0.00E+00	0.00E+00	0.00E+00	4.89E−01	1.50E+00
F20	Std.Dev	0.00E+00	0.00E+00	0.00E+00	0.00E+00	5.20E−01	1.60E+00
F20	avgFEs	1,013,676	1,241,676	294,892	338,063	8297	0
F21	Mean	9.67E+01	9.67E+01	9.35E+01	9.68E+01	9.10E+01	9.74E+01
F21	Std.Dev	1.83E+01	1.83E+01	2.50E+01	1.80E+01	3.03E+01	1.81E+01
F21	avgFEs	33,786	35,448	18,029	9883	8030	3037
F22	Mean	1.37E+00	9.04E−01	1.34E+01	1.30E+01	3.50E+01	3.26E+01
F22	Std.Dev	4.30E+00	3.48E+00	1.36E+01	1.28E+01	2.03E+01	2.28E+01
F22	avgFEs	924,984	1,010,778	129,426	120,629	0	0
F23	Mean	2.73E+02	2.57E+02	2.66E+02	2.70E+02	2.82E+02	2.70E+02
F23	Std.Dev	9.08E+01	9.97E+01	1.04E+02	9.73E+01	9.38E+01	1.06E+02
F23	avgFEs	68,358	56,244	35,530	22,024	7485	9732
F24	Mean	8.81E+01	8.26E+01	8.06E+01	8.79E+01	9.12E+01	9.12E+01
F24	Std.Dev	3.12E+01	3.57E+01	4.02E+01	3.19E+01	2.75E+01	2.77E+01
F24	avgFEs	68,214	58,242	53,803	13,726	2537	4883
F25	Mean	9.34E+01	9.67E+01	9.36E+01	1.10E+02	1.48E+02	1.61E+02
F25	Std.Dev	2.54E+01	1.83E+01	2.50E+01	3.96E+01	1.12E+02	1.20E+02
F25	avgFEs	68,034	31,860	17,942	0	0	0
F26	Mean	4.00E+01	8.67E+01	6.45E+01	9.03E+01	1.00E+02	1.39E+02
F26	Std.Dev	8.14E+01	1.01E+02	9.50E+01	1.01E+02	1.32E+02	1.48E+02
F26	avgFEs	809,232	720,288	185,928	185,812	47,630	46,574
F27	Mean	3.87E+02	3.87E+02	3.76E+02	3.88E+02	3.89E+02	3.89E+02
F27	Std.Dev	6.95E−01	3.74E−01	6.95E+01	8.50E−01	2.84E−01	4.09E−01
F27	avgFEs	0	0	0	0	0	0
F28	Mean	1.77E+02	1.93E+02	1.77E+02	2.13E+02	2.42E+02	2.26E+02
F28	Std.Dev	1.30E+02	1.36E+02	1.41E+02	1.31E+02	1.20E+02	1.29E+02
F28	avgFEs	235,842	299,442	88,815	68,499	15,410	18,726
F29	Mean	2.11E+02	2.07E+02	2.26E+02	2.18E+02	2.45E+02	2.44E+02
F29	Std.Dev	2.96E+01	3.48E+01	2.90E+01	3.78E+01	9.74E+00	1.51E+01
F29	avgFEs	0	0	0	0	0	0
F30	Mean	3.97E+02	3.95E+02	4.78E+02	4.86E+02	1.04E+03	7.96E+02
F30	Std.Dev	4.91E+00	8.23E−01	6.36E+01	6.82E+01	6.94E+02	2.69E+02
F30	avgFEs	0	0	0	0	0	0
(#)Best	17	22	20	17	19	14
(#)+	8	13	16
(#)=	9	7	3
(#)−	13	10	11
Table A3. Comparison results of solution accuracy on CEC2020 test suite (D = 5).

|  |  | TS-MSCDE | SHADE | LSHADE | j2020 | AGSK | IMODE |
| F1 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
|  | Std.Dev | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
|  | rank | 1 | 1 | 1 | 1 | 1 | 1 |
| F2 | Mean | 5.58E+00 | 1.59E+00 | 6.59E−01 | 3.23E+00 | 1.64E+01 | 8.33E−02 |
|  | Std.Dev | 5.29E+00 | 2.64E+00 | 1.69E+00 | 3.74E+00 | 2.58E+03 | 8.89E−02 |
|  | rank | 5 | 3 | 2 | 4 | 6 | 1 |
| F3 | Mean | 2.45E+00 | 4.97E+00 | 5.22E+00 | 3.42E+00 | 2.87E+00 | 5.15E+00 |
|  | Std.Dev | 2.20E+00 | 1.42E+00 | 7.86E−01 | 2.33E+00 | 2.05E+00 | 0.00E+00 |
|  | rank | 1 | 4 | 6 | 3 | 2 | 5 |
| F4 | Mean | 1.98E−02 | 7.48E−02 | 5.62E−02 | 7.68E−02 | 1.11E−01 | 0.00E+00 |
|  | Std.Dev | 4.30E−02 | 3.23E−02 | 3.22E−02 | 6.40E−02 | 6.05E−02 | 0.00E+00 |
|  | rank | 2 | 4 | 3 | 5 | 6 | 1 |
| F5 | Mean | 0.00E+00 | 2.08E−02 | 4.16E−02 | 1.37E−01 | 0.00E+00 | 0.00E+00 |
|  | Std.Dev | 0.00E+00 | 1.14E−01 | 1.58E−01 | 2.86E−01 | 0.00E+00 | 0.00E+00 |
|  | rank | 1 | 4 | 5 | 6 | 1 | 1 |
| F6 | Mean | 0.00E+00 | 2.82E−08 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
|  | Std.Dev | 0.00E+00 | 8.64E−08 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
|  | rank | 1 | 6 | 1 | 1 | 1 | 1 |
| F7 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 2.08E−02 | 0.00E+00 |
|  | Std.Dev | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.14E−01 | 0.00E+00 |
|  | rank | 1 | 1 | 1 | 1 | 6 | 1 |
| F8 | Mean | 0.00E+00 | 0.00E+00 | 1.34E+01 | 6.28E−01 | 0.00E+00 | 0.00E+00 |
|  | Std.Dev | 0.00E+00 | 0.00E+00 | 3.47E+01 | 2.39E+00 | 0.00E+00 | 0.00E+00 |
|  | rank | 1 | 1 | 6 | 5 | 1 | 1 |
| F9 | Mean | 6.67E+00 | 1.00E+02 | 1.07E+02 | 2.05E+01 | 3.33E+01 | 0.00E+00 |
|  | Std.Dev | 2.54E+01 | 0.00E+00 | 4.50E+01 | 3.75E+01 | 4.79E+01 | 0.00E+00 |
|  | rank | 2 | 5 | 6 | 3 | 4 | 1 |
| F10 | Mean | 6.67E+01 | 3.39E+02 | 3.39E+02 | 1.26E+02 | 2.25E+02 | 2.44E+02 |
|  | Std.Dev | 4.79E+01 | 1.80E+01 | 1.80E+01 | 9.03E+01 | 1.32E+04 | 1.36E+05 |
|  | rank | 1 | 5 | 5 | 2 | 3 | 4 |
| (#)Best |  | 7 | 3 | 3 | 3 | 4 | 8 |
| (#)+ |  |  | 6 | 6 | 6 | 6 | 2 |
| (#)= |  |  | 3 | 3 | 3 | 4 | 5 |
| (#)− |  |  | 1 | 1 | 1 | 0 | 3 |
Table A4. Comparison results of solution accuracy on CEC2020 test suite (D = 10).

|  |  | TS-MSCDE | SHADE | LSHADE | j2020 | AGSK | IMODE |
| F1 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
|  | Std.Dev | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
|  | rank | 1 | 1 | 1 | 1 | 1 | 1 |
| F2 | Mean | 7.56E+01 | 9.64E+00 | 7.75E+00 | 2.58E+00 | 2.84E+01 | 4.20E+00 |
|  | Std.Dev | 6.18E+01 | 2.16E+01 | 4.73E+00 | 3.54E+00 | 3.21E+01 | 3.70E+00 |
|  | rank | 6 | 4 | 3 | 1 | 5 | 2 |
| F3 | Mean | 1.01E+01 | 1.66E+01 | 1.20E+01 | 1.05E+01 | 9.93E+02 | 1.21E+01 |
|  | Std.Dev | 3.81E+00 | 5.03E−01 | 5.97E−01 | 1.66E+00 | 4.26E+00 | 7.83E−01 |
|  | rank | 1 | 5 | 3 | 2 | 6 | 4 |
| F4 | Mean | 1.72E−02 | 2.55E−01 | 1.45E−01 | 1.39E−01 | 5.83E−02 | 0.00E+00 |
|  | Std.Dev | 2.84E−02 | 2.51E−02 | 1.84E−02 | 7.72E−02 | 3.11E−02 | 0.00E+00 |
|  | rank | 2 | 6 | 5 | 4 | 3 | 1 |
| F5 | Mean | 2.50E−01 | 1.99E+00 | 2.50E−01 | 1.48E−01 | 3.18E−01 | 3.88E−01 |
|  | Std.Dev | 1.76E−01 | 3.92E+00 | 1.27E−01 | 1.37E−01 | 3.06E−01 | 3.83E−01 |
|  | rank | 3 | 6 | 2 | 1 | 4 | 5 |
| F6 | Mean | 1.21E−01 | 2.17E−01 | 2.60E−01 | 4.78E−01 | 1.55E−01 | 9.15E−02 |
|  | Std.Dev | 9.69E−02 | 9.33E−02 | 1.72E−01 | 2.49E−01 | 1.17E−01 | 5.08E−02 |
|  | rank | 2 | 4 | 5 | 6 | 3 | 1 |
| F7 | Mean | 2.24E−02 | 6.94E−01 | 1.78E−01 | 6.73E−02 | 1.54E−01 | 8.54E−04 |
|  | Std.Dev | 6.10E−02 | 1.75E−01 | 1.95E−01 | 1.25E−01 | 1.71E−03 | 1.10E−03 |
|  | rank | 2 | 6 | 5 | 3 | 4 | 1 |
| F8 | Mean | 7.71E−01 | 1.00E+02 | 9.78E+01 | 1.54E+00 | 1.80E+01 | 2.72E+00 |
|  | Std.Dev | 2.93E+00 | 0.00E+00 | 1.21E+01 | 4.00E+00 | 2.38E+01 | 7.46E+00 |
|  | rank | 1 | 6 | 5 | 2 | 4 | 3 |
| F9 | Mean | 6.42E+01 | 3.83E+02 | 2.85E+02 | 8.00E+01 | 7.63E+01 | 4.11E+01 |
|  | Std.Dev | 4.79E+01 | 3.71E+01 | 9.39E+01 | 4.07E+01 | 4.29E+01 | 4.46E+01 |
|  | rank | 2 | 6 | 5 | 4 | 3 | 1 |
| F10 | Mean | 9.33E+01 | 4.00E+02 | 4.09E+02 | 1.40E+02 | 2.98E+02 | 3.98E+02 |
|  | Std.Dev | 2.54E+01 | 0.00E+00 | 1.95E+01 | 8.12E+01 | 1.43E+02 | 2.89E−13 |
|  | rank | 1 | 5 | 6 | 2 | 3 | 4 |
| (#)Best |  | 4 | 1 | 1 | 3 | 1 | 5 |
| (#)+ |  |  | 8 | 7 | 7 | 8 | 4 |
| (#)= |  |  | 1 | 1 | 1 | 1 | 1 |
| (#)− |  |  | 1 | 2 | 2 | 1 | 5 |
Table A5. Comparison results of solution accuracy on CEC2020 test suite (D = 15).

|  |  | TS-MSCDE | SHADE | LSHADE | j2020 | AGSK | IMODE |
| F1 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
|  | Std.Dev | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
|  | rank | 1 | 1 | 1 | 1 | 1 | 1 |
| F2 | Mean | 1.33E+01 | 7.56E+00 | 5.71E+00 | 5.72E−02 | 1.85E+01 | 3.14E+00 |
|  | Std.Dev | 9.56E+00 | 6.91E+00 | 4.47E+00 | 4.32E−02 | 1.46E+01 | 3.22E+00 |
|  | rank | 5 | 4 | 3 | 1 | 6 | 2 |
| F3 | Mean | 1.73E+00 | 1.65E+01 | 1.67E+01 | 6.78E+00 | 1.42E+01 | 1.61E+01 |
|  | Std.Dev | 1.69E+00 | 5.08E−01 | 5.82E−01 | 7.82E+00 | 4.27E+00 | 3.12E−01 |
|  | rank | 1 | 5 | 6 | 2 | 3 | 4 |
| F4 | Mean | 7.22E−02 | 2.61E−01 | 2.30E−01 | 1.99E−01 | 1.42E−01 | 0.00E+00 |
|  | Std.Dev | 6.86E−02 | 2.95E−02 | 3.06E−02 | 7.47E−02 | 5.71E−02 | 0.00E+00 |
|  | rank | 2 | 6 | 5 | 4 | 3 | 1 |
| F5 | Mean | 1.32E+01 | 1.15E+00 | 5.68E+00 | 7.58E+00 | 6.25E+00 | 7.79E+00 |
|  | Std.Dev | 1.11E+01 | 1.66E+00 | 2.19E+01 | 7.69E+00 | 4.32E+00 | 3.66E+00 |
|  | rank | 6 | 1 | 2 | 4 | 3 | 5 |
| F6 | Mean | 6.84E−01 | 2.14E−01 | 1.78E−01 | 8.45E−01 | 4.02E+01 | 6.92E−01 |
|  | Std.Dev | 3.98E−01 | 1.32E−01 | 1.24E−01 | 2.09E+00 | 2.23E−01 | 2.52E+02 |
|  | rank | 3 | 2 | 1 | 5 | 6 | 4 |
| F7 | Mean | 5.35E−01 | 7.28E−01 | 7.04E−01 | 9.83E−01 | 2.47E+01 | 5.30E−01 |
|  | Std.Dev | 1.86E−01 | 1.58E−01 | 2.29E−01 | 2.03E+00 | 2.00E−01 | 2.23E−01 |
|  | rank | 2 | 4 | 3 | 5 | 6 | 1 |
| F8 | Mean | 3.16E+01 | 1.00E+02 | 1.00E+02 | 9.49E+00 | 6.85E+01 | 4.18E+00 |
|  | Std.Dev | 3.71E+01 | 0.00E+00 | 0.00E+00 | 2.74E+01 | 3.85E+01 | 9.61E+00 |
|  | rank | 3 | 5 | 5 | 2 | 4 | 1 |
| F9 | Mean | 9.00E+01 | 3.85E+02 | 3.90E+02 | 1.23E+02 | 9.67E+03 | 9.33E+04 |
|  | Std.Dev | 3.05E+01 | 2.66E+01 | 3.47E−01 | 5.68E+01 | 1.83E+01 | 2.54E+04 |
|  | rank | 1 | 3 | 4 | 2 | 5 | 6 |
| F10 | Mean | 1.07E+02 | 4.00E+02 | 4.00E+02 | 3.90E+02 | 4.00E+02 | 4.00E+02 |
|  | Std.Dev | 2.54E+01 | 0.00E+00 | 0.00E+00 | 5.48E+01 | 2.60E−13 | 0.00E+00 |
|  | rank | 1 | 3 | 3 | 2 | 3 | 3 |
| (#)Best |  | 4 | 2 | 2 | 2 | 1 | 4 |
| (#)+ |  |  | 6 | 6 | 6 | 8 | 4 |
| (#)= |  |  | 1 | 1 | 1 | 1 | 1 |
| (#)− |  |  | 3 | 3 | 3 | 1 | 5 |
Table A6. Comparison results of solution accuracy on CEC2020 test suite (D = 20).

|  |  | TS-MSCDE | SHADE | LSHADE | j2020 | AGSK | IMODE |
| F1 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
|  | Std.Dev | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
|  | rank | 1 | 1 | 1 | 1 | 1 | 1 |
| F2 | Mean | 3.43E+00 | 2.81E+00 | 2.10E+00 | 2.60E−02 | 9.68E−01 | 5.13E−01 |
|  | Std.Dev | 2.53E+00 | 1.61E+00 | 1.65E+00 | 2.47E−02 | 1.23E+00 | 7.13E−01 |
|  | rank | 6 | 5 | 4 | 1 | 3 | 2 |
| F3 | Mean | 1.67E+00 | 2.08E+01 | 2.08E+01 | 1.44E+01 | 2.04E+01 | 2.05E+01 |
|  | Std.Dev | 3.67E+00 | 5.93E−01 | 5.13E−01 | 9.29E+00 | 0.00E+00 | 1.26E−01 |
|  | rank | 1 | 6 | 5 | 2 | 3 | 4 |
| F4 | Mean | 2.22E−01 | 3.55E−01 | 3.20E−01 | 1.80E−01 | 1.45E−01 | 0.00E+00 |
|  | Std.Dev | 1.54E−01 | 3.60E−02 | 2.80E−02 | 7.84E−02 | 5.47E−02 | 0.00E+00 |
|  | rank | 4 | 6 | 5 | 3 | 2 | 1 |
| F5 | Mean | 6.00E+01 | 4.57E+01 | 4.73E+01 | 7.78E+01 | 4.50E+01 | 1.09E+01 |
|  | Std.Dev | 5.43E+01 | 5.80E+01 | 5.87E+01 | 5.75E+01 | 3.67E+01 | 4.33E+03 |
|  | rank | 5 | 3 | 4 | 6 | 2 | 1 |
| F6 | Mean | 1.72E−01 | 3.84E−01 | 3.50E−01 | 1.92E−01 | 1.68E−01 | 3.02E−01 |
|  | Std.Dev | 9.61E−02 | 8.63E−02 | 7.73E−02 | 1.01E−01 | 4.45E−02 | 8.17E−02 |
|  | rank | 2 | 6 | 5 | 3 | 1 | 4 |
| F7 | Mean | 7.01E+00 | 8.14E−01 | 1.11E+00 | 1.98E+00 | 6.81E−01 | 5.24E−01 |
|  | Std.Dev | 8.64E+00 | 1.66E−01 | 1.54E+00 | 4.02E+00 | 9.09E−01 | 1.64E−01 |
|  | rank | 6 | 3 | 4 | 5 | 2 | 1 |
| F8 | Mean | 7.53E+01 | 1.00E+02 | 1.00E+02 | 9.27E+01 | 9.92E+01 | 8.40E+01 |
|  | Std.Dev | 3.25E+01 | 2.05E−13 | 1.39E−13 | 2.21E+01 | 4.63E+00 | 1.89E+01 |
|  | rank | 1 | 5 | 5 | 3 | 4 | 2 |
| F9 | Mean | 9.67E+01 | 4.03E+02 | 4.03E+02 | 3.39E+02 | 1.00E+02 | 9.67E+01 |
|  | Std.Dev | 1.83E+01 | 9.30E−01 | 9.93E−01 | 2.76E+01 | 8.30E−14 | 1.83E+01 |
|  | rank | 1 | 6 | 5 | 4 | 3 | 1 |
| F10 | Mean | 2.73E+02 | 4.14E+02 | 4.14E+02 | 3.99E+02 | 3.99E+02 | 4.00E+02 |
|  | Std.Dev | 1.46E+02 | 7.33E−03 | 1.53E−02 | 4.02E−02 | 1.59E−02 | 6.18E−01 |
|  | rank | 1 | 5 | 6 | 3 | 2 | 4 |
| (#)Best |  | 5 | 1 | 1 | 2 | 2 | 5 |
| (#)+ |  |  | 6 | 6 | 6 | 4 | 4 |
| (#)= |  |  | 1 | 1 | 1 | 1 | 2 |
| (#)− |  |  | 3 | 3 | 3 | 5 | 4 |
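Across Tables A3–A6, each "rank" row orders the six algorithms by mean error (rank 1 is best), ties share the lowest rank, and "(#)Best" counts how often an algorithm attains rank 1. This bookkeeping can be sketched as follows (the function name is ours, not from the paper):

```python
import numpy as np
from scipy.stats import rankdata

def rank_table(mean_errors):
    """mean_errors: (n_functions, n_algorithms) array of mean errors.
    Returns the per-function rank rows (ties share the minimum rank,
    as in the appendix tables) and the (#)Best count per algorithm."""
    ranks = np.vstack([rankdata(row, method="min") for row in mean_errors])
    n_best = (ranks == 1).sum(axis=0)  # how often each algorithm ranks first
    return ranks.astype(int), n_best

# Check against one row of Table A6: F4 mean errors for D = 20.
row = np.array([[2.22e-1, 3.55e-1, 3.20e-1, 1.80e-1, 1.45e-1, 0.0]])
ranks, best = rank_table(row)
# ranks[0] reproduces the published rank row 4, 6, 5, 3, 2, 1.
```

The `method="min"` tie rule is what makes, e.g., three algorithms with zero error all receive rank 1 while the next-best receives rank 4.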

Figure 1. Illustration of base vector selection rule in two-dimensional space.
Figure 2. Illustration of difference vector selection rule in a two-dimensional space. Red and black represent superior and inferior individuals, respectively. Difference vector at: (a) early stage of evolution; (b) later stage of evolution.
Figure 3. Proposed HTSDS mechanism.
Figure 4. FDC of CEC 2020 functions on different dimensions.
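Figure 4 plots the fitness–distance correlation (FDC) of the CEC2020 functions. FDC is the Pearson correlation between sampled fitness values and the Euclidean distances of those samples to the known global optimum: values near 1 suggest a smooth, globally convex landscape, while small or negative values indicate deception or strong multimodality. A minimal sketch (the uniform sampling below is illustrative, not necessarily the paper's exact protocol):

```python
import numpy as np

def fdc(points, fitness, x_star):
    """Fitness-distance correlation: Pearson correlation between the
    fitness of sampled points and their distance to the optimum x_star."""
    d = np.linalg.norm(points - x_star, axis=1)
    f = np.asarray(fitness, dtype=float)
    return np.corrcoef(f, d)[0, 1]

# Toy landscape: 10-D sphere function, whose FDC should be strongly positive.
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(500, 10))
f = np.sum(X**2, axis=1)
r = fdc(X, f, np.zeros(10))
```

Note that computing FDC this way requires the global optimum, which is known for benchmark suites such as CEC2020 but not for black-box problems in general.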
Figure 5. Convergence curves of DEs with different mutation strategies on four 10-D CEC2017 benchmark functions: (a) F1; (b) F7; (c) F20; (d) F25.
Figure 6. Relationship between average ranks and p-value for four different dimensions on 10 problems from CEC 2020.
Table 1. Wilcoxon test between TS-MSCDE and other algorithms using CEC2020 for D = 5, 10, 15, and 20.

| D | TS-MSCDE vs. | R+ | R− | p-Value |
| 5 | SHADE | 47.5 | 7.5 | 0.0427 |
|  | LSHADE | 47 | 8 | 0.0348 |
|  | j2020 | 47 | 8 | 0.0517 |
|  | AGSK | 50 | 5 | 0.1054 |
|  | IMODE | 25 | 30 | 0.7654 |
| 10 | SHADE | 52 | 3 | 0.0066 |
|  | LSHADE | 46.5 | 8.5 | 0.0246 |
|  | j2020 | 37.5 | 17.5 | 0.4524 |
|  | AGSK | 50.5 | 4.5 | 0.0167 |
|  | IMODE | 30.5 | 24.5 | 0.9363 |
| 15 | SHADE | 39.5 | 15.5 | 0.2468 |
|  | LSHADE | 42.5 | 12.5 | 0.0806 |
|  | j2020 | 32 | 23 | 0.8149 |
|  | AGSK | 44.5 | 10.5 | 0.096 |
|  | IMODE | 21 | 34 | 0.8113 |
| 20 | SHADE | 44 | 11 | 0.0792 |
|  | LSHADE | 43 | 12 | 0.1598 |
|  | j2020 | 36.5 | 18.5 | 0.5862 |
|  | AGSK | 21 | 34 | 0.9378 |
|  | IMODE | 20.5 | 34.5 | 0.5972 |
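In Table 1, R+ sums the ranks of the functions on which TS-MSCDE outperforms the compared algorithm and R− those on which it is outperformed; with the 10 CEC2020 functions per dimension, R+ + R− = 10 × 11 / 2 = 55 in every row, which is consistent with zero differences being split between the two sums. A sketch of how such entries can be computed with SciPy (the two error vectors below are illustrative, not the paper's data):

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

def signed_rank_sums(err_a, err_b):
    """R+ / R- in the style of Table 1: rank the absolute paired
    differences (average ranks for ties), then sum the ranks where the
    compared algorithm is worse (R+) or better (R-) than err_a's
    algorithm. Zero differences contribute half a rank to each sum."""
    d = np.asarray(err_b, float) - np.asarray(err_a, float)
    r = rankdata(np.abs(d))
    r_plus = r[d > 0].sum() + 0.5 * r[d == 0].sum()
    r_minus = r[d < 0].sum() + 0.5 * r[d == 0].sum()
    return r_plus, r_minus

# Illustrative paired mean errors over 10 benchmark functions.
a = np.array([0.0, 5.6, 2.4, 0.02, 0.0, 0.0, 0.0, 0.0, 6.7, 66.7])
b = np.array([0.0, 1.6, 5.0, 0.07, 0.02, 0.0, 0.0, 0.0, 100.0, 339.0])
rp, rm = signed_rank_sums(a, b)
p = wilcoxon(a, b, zero_method="zsplit").pvalue  # matching zero handling
```

Because the zero-split rule keeps every rank in play, `rp + rm` always equals n(n+1)/2, matching the constant rank total visible in Table 1.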
Table 2. Average ranks for all algorithms across all problems and dimensions using CEC2020.

| Algorithms | 5D | 10D | 15D | 20D | Mean Ranking | Rank |
| TS-MSCDE | 2.5 | 2.35 | 2.75 | 3.1 | 2.675 | 1 |
| IMODE | 2.6 | 2.55 | 3.2 | 2.4 | 2.6875 | 2 |
| j2020 | 3.75 | 2.85 | 3.05 | 3.35 | 3.25 | 3 |
| AGSK | 3.8 | 3.85 | 4.4 | 2.55 | 3.65 | 4 |
| LSHADE | 4.3 | 4.25 | 3.75 | 4.7 | 4.25 | 5 |
| SHADE | 4.05 | 5.15 | 3.85 | 4.9 | 4.4875 | 6 |
| Friedman p-value | 0.0481 | 0.0017 | 0.0298 | 0.0027 |  |  |
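Table 2 aggregates per-problem Friedman ranks: for each problem the six algorithms are ranked, those ranks are averaged per dimension, and the Friedman test supplies the p-values in the last row. A compact sketch with SciPy (the small error matrix is illustrative):

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

def friedman_ranks(errors):
    """errors: (n_problems, n_algorithms). Returns each algorithm's
    average rank across problems (as in Table 2) and the Friedman
    p-value; ties within a problem receive average ranks."""
    per_problem = np.vstack([rankdata(row) for row in errors])
    avg_rank = per_problem.mean(axis=0)
    _, pvalue = friedmanchisquare(*errors.T)  # one sample per algorithm
    return avg_rank, pvalue

# Illustrative 4-problem x 3-algorithm error matrix.
E = np.array([[0.1, 0.2, 0.3],
              [0.2, 0.1, 0.4],
              [0.1, 0.3, 0.2],
              [0.0, 0.2, 0.1]])
avg, p = friedman_ranks(E)
```

A small Friedman p-value only says that at least one algorithm differs, which is why Table 1's pairwise Wilcoxon comparisons complement the average ranks reported here.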
Table 3. Comparison of mean values on four real-world optimization problems from CEC2011.

|  |  | TS-MSCDE | IMODE | j2020 | AGSK | LSHADE | SHADE |
| F1 | Mean | 2.58E+00 | 3.01E+00 | 3.31E+00 | 3.20E+00 | 3.10E+00 | 3.15E+00 |
|  | Std.Dev | 3.22E+00 | 2.54E+00 | 6.10E+00 | 5.10E+00 | 5.82E+00 | 5.03E+00 |
| F2 | Mean | 9.00E−01 | 1.03E+00 | 1.21E+00 | 1.10E+00 | 1.18E+00 | 1.17E+00 |
|  | Std.Dev | 5.00E−02 | 1.00E−01 | 6.00E−02 | 5.00E−02 | 8.00E−02 | 7.70E−02 |
| F3 | Mean | 1.23E+01 | 1.59E+01 | 1.61E+01 | 1.41E+01 | 1.58E+01 | 1.64E+01 |
|  | Std.Dev | 7.00E−01 | 2.07E+00 | 8.50E−01 | 1.50E+00 | 1.04E+00 | 9.76E−01 |
| F4 | Mean | 1.35E+01 | 1.53E+01 | 1.49E+01 | 1.59E+01 | 1.51E+01 | 1.61E+01 |
|  | Std.Dev | 2.50E+00 | 3.49E+00 | 2.37E+00 | 3.10E+00 | 3.72E+00 | 2.63E+00 |

Sun, X.; Wang, D.; Kang, H.; Shen, Y.; Chen, Q. A Two-Stage Differential Evolution Algorithm with Mutation Strategy Combination. Symmetry 2021, 13, 2163. https://doi.org/10.3390/sym13112163

