Article

Dual Elite Groups-Guided Differential Evolution for Global Numerical Optimization

School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(17), 3681; https://doi.org/10.3390/math11173681
Submission received: 18 July 2023 / Revised: 14 August 2023 / Accepted: 23 August 2023 / Published: 26 August 2023
(This article belongs to the Section Mathematics and Computer Science)

Abstract
Differential evolution (DE) has shown remarkable performance in solving continuous optimization problems. However, its optimization performance still encounters limitations when confronted with complex optimization problems containing many local regions. To address this issue, this paper proposes a dual elite groups-guided mutation strategy called “DE/current-to-duelite/1” for DE, yielding a novel DE variant called DEGGDE. Instead of using only the elites in the current population to direct the evolution of all individuals, DEGGDE additionally maintains an archive to store the obsolete parent individuals and then assembles the elites in both the current population and the archive to guide the mutation of all individuals. In this way, the diversity of the guiding exemplars in the mutation is expectedly promoted. With the guidance of these diverse elites, DEGGDE is expected to maintain a good balance between exploration of the complex search space and exploitation of the found promising regions, and thus to achieve good optimization performance on complex optimization problems. Extensive experiments are conducted on the CEC’2017 benchmark set with three different dimension sizes to demonstrate the effectiveness of DEGGDE. Experimental results confirm that DEGGDE performs competitively with, or even significantly better than, eleven state-of-the-art and representative DE variants.

1. Introduction

Differential evolution (DE) is a population-based random search algorithm proposed by Storn and Price [1]. It is primarily used to solve numerical optimization problems [1,2,3,4]. On account of its ease of implementation and strong global search ability, DE has attracted a great deal of attention from researchers. As a result, many effective DE variants have been developed in recent years [2,3,5,6,7]. In particular, DE variants have won competitions on single objective optimization held in the IEEE Congress on Evolutionary Computation (CEC) in recent years [8,9,10]. Thanks to its good optimization performance when solving optimization problems, DE has also been used for a wide range of academic and engineering applications [5,6,7].
To be specific, a population of individuals is maintained in DE to search the solution space iteratively [1,2,3]. In the population, each individual represents a feasible solution to the optimization problem and undergoes three main processes [4,6,7], namely, the mutation process, the crossover process, and the selection process, to update its position during the iteration. Among the three processes, the mutation process has been demonstrated to be the most critical part of DE [11,12,13,14,15] because it has a significant influence on both the search diversity and the search convergence of DE. As a result, researchers have devoted a great deal of attention to designing effective mutation strategies to improve the optimization performance of DE. Consequently, numerous DE variants with novel mutation schemes have been developed in recent years [16,17,18,19,20,21].
In general, the mutation operation is usually realized by shifting the base individual by the difference vectors between individuals in the population [19,22,23,24,25]. In essence, the key to designing effective mutation strategies lies in the selection of individuals participating in the mutation, such that the mutation diversity of the population remains high and, at the same time, the mutated individuals are likely to approach optimal areas fast [11,12,15,16,20]. To this end, researchers have developed a number of individual selection approaches, especially for choosing guiding exemplars to direct the mutation of individuals [14,15,16,17,18]. Among these selection strategies, fitness values of individuals are commonly used to choose guiding exemplars to direct the evolution of the population [16,17,18,19,20]. To enhance the mutation diversity, researchers have also designed many other measures to select promising individuals to mutate the population, for example, cosine similarity [26], novelty of individuals [23], fitness and diversity contribution of individuals [27], and consecutive unsuccessful updates of individuals [25].
The above DE variants commonly utilize only one single mutation strategy to mutate all individuals. Nevertheless, different mutation strategies usually have different advantages in solving different optimization problems. Therefore, some researchers have even proposed to employ multiple mutation strategies to evolve the population, such that the strengths of different mutation schemes can be integrated to mutate the population effectively [28,29,30,31,32]. In a broad sense, the hybridization of multiple mutation strategies is classified into two major methods, namely, population-level hybridization [28,29,31,32,33,34] and individual-level hybridization [30,35,36,37,38]. In the former category of hybrid DE variants [31,33,34], in each generation, a mutation strategy is selected from the mutation strategy pool and then all individuals share this strategy to mutate. In contrast, in the latter category of hybrid DE variants [12,36,38], at each iteration, for each individual, a mutation scheme is selected from the mutation strategy pool to update the individual.
In addition to the mutation operation, the optimization performance of DE is also significantly influenced by the control parameters associated with the mutation operation and the crossover operation, that is, the scaling factor, F, in the mutation and the crossover probability, CR, in the crossover [35,39,40,41,42,43,44,45,46]. It has been experimentally verified that on the one hand, the settings of these two key parameters are usually different for the same DE in different optimization problems; on the other hand, even in the same optimization problem, the settings of these two parameters are also usually different for DE with different mutation or crossover schemes [35,41,43,44]. This indicates that DE is very sensitive to these two parameters when solving optimization problems.
To alleviate this issue, researchers have been dedicated to devising adaptive parameter adjustment strategies for DE. As a result, many adaptive parameter adaptation methods have been developed to change the settings of F and CR dynamically during the evolution process [35,39,40,41,42,43,44,45,46]. Broadly speaking, existing parameter adaptation strategies can be divided into two main categories, namely, (1) dynamic parameter adjustment strategies [39,40,41,42] and (2) self-adaptive parameter adjustment strategies [35,43,44,45,46]. In general, the former type of parameter adaptation schemes mainly adjust the parameter values dynamically without considering the evolutionary states of the population or individuals [39,40,41,42], while the latter type of parameter adaptation methods usually take the evolutionary information or states of individuals into consideration to adaptively adjust the values of the two parameters [35,43,44,45,46].
Though many novel mutation strategies and effective parameter adaptation methods have been designed to help DE improve its optimization performance, DE still cannot find satisfactory solutions to complex optimization problems with many local regions because of the stagnation of the population [2,3,5,6,7]. However, with the rapid development of big data and the Internet of Things, optimization problems are becoming more and more complex due to the growing number of variables, especially increasingly correlated ones [47,48,49,50,51]. As a result, there is an increasing demand for effective strategies to further improve the optimization ability of DE in solving complicated optimization problems.
To this end, this paper devises a dual elite groups-guided mutation strategy called “DE/current-to-duelite/1” to effectively mutate all individuals in the population. As a result, a novel DE variant named dual elite groups-guided DE (DEGGDE) is developed for solving complex optimization problems. The contributions of this paper and the key components of DEGGDE are summarized as follows:
(1) A dual elite groups-guided mutation strategy named “DE/current-to-duelite/1” is proposed to mutate individuals with high diversity. Instead of only using the elites in the current population as the guiding exemplars to direct the mutation of individuals in existing studies [46], DEGGDE maintains an archive to store obsolete parent individuals and then assembles the elites in both the current population and the archive to guide the mutation of all individuals in the population. In this way, the diversity of the guiding exemplars used to mutate individuals is expectedly enlarged, and thus the mutation diversity of individuals is enhanced. This is beneficial for individuals to explore the solution space in diverse directions and aids in avoiding falling into local regions.
(2) A directional random difference vector is further proposed to accelerate the approaching speed to optimal areas. Instead of directly using two random individuals from the population to generate a completely random difference vector for mutating each individual, as in most existing studies [45,46,52,53,54], DEGGDE first compares the two randomly selected individuals from the population and then uses the better one as the first exemplar and the worse one as the second exemplar to generate a directional random difference vector for each individual to mutate. In this manner, the mutated individual is expected to move toward optimal areas fast.
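As an illustrative sketch only (the function name is our own and the exact index constraints follow Section 3 of the paper), the directional random difference vector described above can be expressed in Python as follows, assuming minimization so that a smaller fitness value is better:

```python
import numpy as np

def directional_random_difference(pop, fit, i, rng):
    """Sketch of the directional random difference vector: pick two random
    individuals (both different from i), then use the better one (smaller
    fitness, since we minimize) as the first exemplar and the worse one as
    the second, so the difference points from the worse toward the better."""
    r1, r2 = rng.choice([k for k in range(len(pop)) if k != i],
                        size=2, replace=False)
    if fit[r2] < fit[r1]:
        r1, r2 = r2, r1  # ensure r1 indexes the better individual
    return pop[r1] - pop[r2]
```

Because the difference always points from the worse exemplar toward the better one, adding it to the base vector biases the mutant toward more promising regions rather than in a completely random direction.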
With the above techniques, DEGGDE is anticipated to keep a good balance between search diversity and search convergence to explore and exploit the solution space to find the global optimum. To validate its effectiveness, experiments are conducted on the commonly used CEC’2017 benchmark set [55] with different dimension sizes by comparing DEGGDE with eleven state-of-the-art DE methods. In addition, the effectiveness of the above two components in DEGGDE is also studied by conducting experiments on the CEC’2017 benchmark set.
The rest of the paper is organized as follows. In Section 2, the basic working principle of DE is first reviewed and then DE variants closely related to this paper are reviewed. After that, the devised DEGGDE is elaborated in detail in Section 3. Subsequently, a large number of experiments are carried out to verify the effectiveness of DEGGDE by comparing with eleven state-of-the-art DE variants in Section 4. Finally, conclusions are given in Section 5.

2. Related Work

This paper aims at minimization problems, which are formulated with the following general form:
$\min f(\mathbf{x}), \quad \mathbf{x} \in \mathbb{R}^{D}$ (1)
where D is the number of variables to be optimized and x is a feasible solution to the optimization problem, which contains D elements. It is worth noting that we directly use the function value to evaluate the fitness of each individual in DE. Therefore, the smaller the fitness value of an individual is, the better it is.

2.1. Differential Evolution

The main idea of DE is to maintain a population of individuals to iteratively explore and exploit the solution space to find the global optimum of the optimization problem. During the iteration, individuals are updated sequentially by three operators, namely, mutation, crossover, and selection [2,3,4,5,6]. As a whole, the overall pseudo-code of DE is shown in Algorithm 1, where the following four steps are mainly involved:
(1) Initialization: In the initial stage, individuals in the population are randomly initialized. When solving continuous optimization problems, individuals are usually randomly initialized in the following way [4,5,6,7]:
$x_{i,j,0} = x_{j\_min} + \mathrm{rand}(0,1) \times (x_{j\_max} - x_{j\_min})$ (2)
where $x_{i,j,0}$ is the value of the jth dimension for the ith individual in the 0th generation; $i = 1, 2, \ldots, PS$ and $j = 1, 2, \ldots, D$, with PS denoting the population size and D denoting the dimension size of the optimization problem. rand(0,1) is a random number generator that samples a real value uniformly within [0,1]. $x_{j\_max}$ and $x_{j\_min}$ represent the upper and the lower bounds of the jth dimension, respectively.
In the above manner, individuals in DE in the initial stage are expectedly dispersed in diverse areas of the solution space, which they can then search in distinct directions [19,22,23,24,25].
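For illustration, the random initialization of (2) can be sketched in Python with NumPy; the function name and the vectorized per-dimension bound arrays are our own choices:

```python
import numpy as np

def initialize_population(ps, d, x_min, x_max, rng=None):
    """Random uniform initialization following
    x_{i,j,0} = x_j_min + rand(0,1) * (x_j_max - x_j_min)
    for every individual i and dimension j.
    x_min and x_max are per-dimension lower/upper bound arrays."""
    rng = np.random.default_rng() if rng is None else rng
    return x_min + rng.random((ps, d)) * (x_max - x_min)
```

Each row of the returned (PS × D) array is one individual, uniformly dispersed inside the search bounds.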
(2) Mutation: Mutation generates a mutation vector for each individual based on the difference vectors between individuals in the population [16,17,18,19,20]. It mainly generates new values for the population to create the offspring. Therefore, it has a significant effect on the optimization performance of DE. As a consequence, researchers have developed numerous mutation strategies for DE [11,12,13,14,15]. Some typical examples are listed below, with other recent ones reviewed in the next subsection:
DE/rand/1 [1]:
$v_{i,G} = x_{r1,G} + F \times (x_{r2,G} - x_{r3,G})$ (3)
DE/best/1 [56]:
$v_{i,G} = x_{best,G} + F \times (x_{r1,G} - x_{r2,G})$ (4)
DE/current-to-best/1 [57]:
$v_{i,G} = x_{i,G} + F \times (x_{best,G} - x_{i,G}) + F \times (x_{r1,G} - x_{r2,G})$ (5)
DE/current-to-rand/1 [52]:
$v_{i,G} = x_{i,G} + F \times (x_{r1,G} - x_{i,G}) + F \times (x_{r2,G} - x_{r3,G})$ (6)
DE/current-to-pbest/1 [46]:
$v_{i,G} = x_{i,G} + F \times (x_{best,G}^{p} - x_{i,G}) + F \times (x_{r1,G} - x_{r2,G}^{*})$ (7)
where F is the scaling factor used to control the shifting of the base individual based on the difference vectors; $v_{i,G}$ is the mutation vector corresponding to the ith individual of the Gth generation, $x_{i,G}$; r1, r2, and r3 are the indices of three different individuals randomly chosen from the population, with $i \neq r1 \neq r2 \neq r3$; $x_{best,G}$ is the best individual in the current population; $x_{best,G}^{p}$ is a randomly selected individual from the best ⌈p% × PS⌉ individuals in the Gth population; and $x_{r2,G}^{*}$ is a randomly selected individual from $P \cup A$, where P denotes the current population and A represents an archive that stores the out-of-date parents replaced by their offspring.
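A minimal sketch of two of the above strategies, DE/rand/1 and DE/current-to-best/1, assuming minimization and NumPy arrays (function names are our own):

```python
import numpy as np

def de_rand_1(pop, i, F, rng):
    """DE/rand/1: v = x_r1 + F * (x_r2 - x_r3),
    with r1, r2, r3 distinct and all different from i."""
    r1, r2, r3 = rng.choice([k for k in range(len(pop)) if k != i],
                            size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def de_current_to_best_1(pop, fit, i, F, rng):
    """DE/current-to-best/1:
    v = x_i + F * (x_best - x_i) + F * (x_r1 - x_r2)."""
    best = pop[np.argmin(fit)]  # minimization: smallest fitness is best
    r1, r2 = rng.choice([k for k in range(len(pop)) if k != i],
                        size=2, replace=False)
    return pop[i] + F * (best - pop[i]) + F * (pop[r1] - pop[r2])
```

The first strategy is purely random and exploration-oriented, while the second pulls each mutant toward the current best individual and so is more exploitative.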
Algorithm 1: The pseudocode of DE (DE/rand/1/bin)
Input: Population size PS, scaling factor F, crossover rate CR, maximum number of fitness evaluations FESmax;
1: Randomly initialize PS individuals, calculate their fitness, and set fes = PS; set the current generation number G = 0;  //initialization
2: While fes ≤ FESmax do
3:   For i = 1:PS do
4:     Randomly choose three different individuals $x_{r1,G}$, $x_{r2,G}$, and $x_{r3,G}$ from the population;
5:     $v_{i,G} = x_{r1,G} + F \times (x_{r2,G} - x_{r3,G})$;  //mutation
6:     $j_{rand}$ = randint(1, D);  //randomly select a dimension
7:     For j = 1:D do  //crossover
8:       If rand(0,1) ≤ CR or j == $j_{rand}$ then
9:         $u_{i,G}(j) = v_{i,G}(j)$;
10:      Else
11:        $u_{i,G}(j) = x_{i,G}(j)$;
12:      End If
13:    End For
14:    Calculate the fitness of $u_{i,G}$ and set fes = fes + 1;
15:    If $f(u_{i,G}) < f(x_{i,G})$ then  //selection
16:      $x_{i,G+1} = u_{i,G}$;
17:    Else
18:      $x_{i,G+1} = x_{i,G}$;
19:    End If
20:  End For
21:  G++;
22: End While
23: Obtain the best individual in the population x and its fitness f(x);
Output: f(x) and x
(3) Crossover: Crossover creates an offspring for each individual by recombining the dimensions of the individual and those of its mutation vector generated by the mutation operation [45,46,52,53,54]. The most widely utilized crossover scheme is the binomial crossover [44]. It principally works as follows:
$u_{i,j,G} = \begin{cases} v_{i,j,G}, & \text{if } \mathrm{rand}(0,1) \le CR \text{ or } j = j_{rand} \\ x_{i,j,G}, & \text{otherwise} \end{cases}$ (8)
where CR is the crossover probability within [0,1]. It mainly determines the difference between the generated offspring and the associated parent individual; jrand represents a randomly chosen dimension from all dimensions. It is used to ensure that the generated offspring, $u_{i,G}$, is different from the parent individual, $x_{i,G}$, in at least one dimension. As a whole, from (8), it is seen that if the real value generated by rand(0,1) is no greater than CR or j = jrand, the value of the jth dimension in the mutation vector, $v_{i,G}$, is assigned to the associated dimension of the offspring, $u_{i,G}$; otherwise, the value of that dimension in the parent individual, $x_{i,G}$, is assigned to the dimension of $u_{i,G}$. Therefore, the binomial crossover treats each dimension independently [34,37,58,59,60].
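The binomial crossover above can be sketched in vectorized Python (function name is our own):

```python
import numpy as np

def binomial_crossover(parent, mutant, cr, rng):
    """Binomial crossover: take the mutant's value on dimension j whenever
    rand(0,1) <= CR or j == j_rand; otherwise keep the parent's value."""
    d = len(parent)
    j_rand = rng.integers(d)      # ensures at least one dimension from the mutant
    mask = rng.random(d) <= cr
    mask[j_rand] = True
    return np.where(mask, mutant, parent)
```

Setting CR = 1 copies the whole mutation vector, while CR = 0 copies only the single forced dimension j_rand, so CR directly controls how far the offspring departs from its parent.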
(4) Selection: After the generation of the offspring, the selection operation is performed to compare the offspring with the parent individuals to determine which one should enter the next iteration. In the literature, the most widely used selection method is the greedy strategy based on fitness [16,29,61,62,63], which works as follows:
$x_{i,G+1} = \begin{cases} u_{i,G}, & \text{if } f(u_{i,G}) \le f(x_{i,G}) \\ x_{i,G}, & \text{otherwise} \end{cases}$ (9)
where f ( · ) is the objective function of the optimization problem. In this selection strategy, each offspring is compared with its parent and the better one with a smaller function value goes into the next generation. Therefore, the selection strategy in (9) is suitable for DE to solve minimization problems.
As shown in Algorithm 1, individuals in the population are continuously updated with the mutation, the crossover, and the selection operations until the termination condition is satisfied. In the literature, the most widely used termination condition is when the afforded number of fitness evaluations runs out [27,64,65,66,67]. After the termination, DE outputs the found best solution.
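Putting the four steps together, the following is a compact sketch of DE/rand/1/bin in the spirit of Algorithm 1; the fixed search bounds and default parameter values are our own illustrative choices:

```python
import numpy as np

def de_rand_1_bin(f, d, ps=50, F=0.5, cr=0.9, fes_max=20000, seed=0,
                  x_min=-100.0, x_max=100.0):
    """Compact DE/rand/1/bin: initialization, DE/rand/1 mutation, binomial
    crossover, greedy selection; stops when the evaluation budget runs out."""
    rng = np.random.default_rng(seed)
    pop = x_min + rng.random((ps, d)) * (x_max - x_min)   # initialization
    fit = np.array([f(x) for x in pop])
    fes = ps
    while fes < fes_max:
        for i in range(ps):
            r1, r2, r3 = rng.choice([k for k in range(ps) if k != i],
                                    size=3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])          # mutation
            mask = rng.random(d) <= cr                     # binomial crossover
            mask[rng.integers(d)] = True
            u = np.where(mask, v, pop[i])
            fu = f(u)
            fes += 1
            if fu <= fit[i]:                               # greedy selection
                pop[i], fit[i] = u, fu
            if fes >= fes_max:
                break
    b = int(np.argmin(fit))
    return pop[b], fit[b]
```

On a simple unimodal function such as the sphere, this baseline reliably converges toward the global optimum within a modest evaluation budget.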

2.2. Research on DE

Though the classical DE algorithm shown in Algorithm 1 has gained promising performance in solving simple optimization problems, such as low-dimensional unimodal problems [11,12,14,15,17], its effectiveness degrades when dealing with complicated optimization problems, such as high-dimensional multimodal problems with many local areas. To address the above predicament, researchers have been dedicated to advancing DE in the following two major directions.

2.2.1. Research on Mutation

The mutation operation in DE mainly imports new values into the population. Therefore, it significantly affects the performance of DE. As a result, numerous excellent mutation schemes have sprung up in recent decades [13,14,15,31,34]. Comprehensively speaking, existing mutation schemes are mainly categorized into two principal types, namely, the single mutation framework [11,12,14,15,17] and the multiple mutation strategies ensemble framework [28,30,31,32,34].
Researchers’ attention has been primarily focused on developing a uniform mutation scheme for all individuals to evolve. The key to this usually lies in the selection of parent individuals participating in the mutation operation [16,19,23,24,25]. At first, researchers focused on employing topologies to organize individuals and then selecting parents based on the adopted topology structures to direct the mutation of each individual [11,12,15]. To name a few representative ones, in [12], a random triangular topology-based mutation was proposed by first randomly choosing three individuals and then using the convex combination of the three difference vectors among the three individuals to mutate each individual. Combined with some basic mutation strategies and a restart mechanism, this mutation scheme helps DE solve optimization problems very effectively. In [11], cellular topology was used to organize individuals. Then, for each individual, the directional information among its neighbors determined by the topology was fully utilized to direct the mutation of this individual. In this manner, this mutation scheme tends to exploit the areas of better individuals with the help of the directional neighborhood information, which may accelerate the convergence of DE. Tian et al. [15] first used ring topology to build a promising individual group and then adaptively selected an individual from the group to direct the mutation of each individual based on its neighborhood performance. In this manner, the neighborhood information of each individual is considered to direct its mutation, which is beneficial for DE to properly adjust the search behavior of each individual during evolution. Since different topologies usually have different strengths in helping DE solve optimization problems, some researchers have attempted to assemble multiple topologies to design mutation strategies. For instance, Sun et al. [14] hybridized multiple topologies with different connectivity degrees and further designed an adaptive topology selection scheme to select a proper topology for each individual to form its neighbors by considering its evolutionary state.
Besides topology-based individual selection, researchers have also developed many other individual selection mechanisms. For instance, Gong et al. [17] proposed a fitness ranking-based individual selection strategy. They first ranked individuals from the best to the worst in terms of their fitness values and then assigned each individual a selection probability calculated by its fitness ranking. After that, they utilized the roulette wheel method to select the guiding exemplar for each individual to direct its mutation. In [19], a multi-objective sorting-based individual selection mechanism was proposed by first using the non-dominated sorting scheme to sort individuals based on their fitness values and their diversity contribution. After that, individuals participating in the mutation of each individual are selected by the roulette wheel method with the selection probabilities computed on the basis of the rankings in terms of fitness and diversity. In this way, during the individual selection, both fitness and diversity are considered and thus a promising balance between exploration and exploitation is potentially obtained. In [23], a novelty hybrid fitness individual selection mechanism was devised by introducing a new measure called novelty. Specifically, for each individual, its fitness value and novelty value are weighted together by two adaptive scaling parameters and then its selection probability is calculated by the weighted measurement. In this way, a good tradeoff between exploration and exploitation is expectedly achieved. In [25], an individual selection mechanism based on the number of consecutive unsuccessful updates was devised. To be specific, the selection probability of each individual is calculated based on the number of its consecutive unsuccessful updates. Then, the base individual and the terminal individual in the mutation operation are randomly selected by the roulette selection strategy based on the calculated probabilities.
To enhance the mutation diversity and efficiency, researchers have also attempted to incorporate historical evolutionary information into the mutation process. For example, Ghosh et al. [16] archived difference vectors which helped to generate successful offspring to replace their associated parents in past generations and then reused them in the subsequent generations to mutate individuals. In this way, promising directions for generating promising offspring could be made full use of to mutate individuals efficiently. Zhao et al. [30] proposed a “Top–Bottom” strategy that uses historical heuristic information derived from successful individuals and failed individuals to guide the mutation of the population. To be concrete, an archive is maintained to store failed individuals and cooperates with the population to assist individuals to evolve.
The above single mutation frameworks have assisted DE in gaining promising performance in solving different kinds of optimization problems. However, it has been found that different mutation strategies usually have different advantages and strengths in solving different types of optimization problems. Therefore, a natural thought is to assemble multiple mutation strategies to incorporate their strengths together to effectively cope with optimization problems. Along this road, researchers have devised a lot of ensemble methods to hybridize multiple mutation schemes [28,29,30,32]. In the case of hybridizing multiple mutation strategies, existing ensemble methods can be roughly classified into the following two main categories:
(1) Population-level ensemble methods. In this kind of ensemble method, all individuals share the same mutation scheme to evolve. For instance, Tan et al. [31] proposed an adaptive mutation ensemble method based on the fitness landscape of optimization problems. To be specific, they first trained a random forest algorithm to find the relationship between three mutation strategies and the fitness landscape of different optimization problems. Then, for each optimization problem, they used the trained model to select an appropriate mutation scheme from the mutation pool to help DE solve this problem. Gao et al. [29] proposed the use of multiple mutation schemes to generate multiple temporary mutation vectors for each individual, and then they further combined the generated multiple mutation vectors by weights to form the final mutation operator. Das et al. [34] proposed the use of two kinds of neighborhood models to generate a local mutation vector and a global mutation vector for each individual. Then, they combined the two mutation vectors by using a weight factor to form the final mutation vector. In [33], an improved version was further proposed by using five different pairs of global and local mutation models to generate five pairs of mutation vectors. Then, these mutation vectors were weighted together by an adaptive weighting strategy. Zhou et al. [36] proposed an underestimation-based ensemble method. To be concrete, for each individual, multiple mutation strategies are conducted to generate a set of offspring and then a cheap abstract convex underestimation model is used to select the most promising offspring to compare with the associated parent. To enhance mutation diversity, some researchers have proposed to divide the whole population into several sub-populations and then, for each sub-population, a mutation strategy is selected from the mutation pool to mutate all individuals in the sub-population. For instance, Cai et al. [28] proposed a cluster-based mutation strategy that involves two main levels of evolutionary information exchange. To be specific, the whole population is divided into clusters by the k-means algorithm. Then, an intra-cluster learning strategy and an inter-cluster learning strategy are designed to exchange information within the same cluster and between different clusters, respectively. Xia et al. [32] proposed a fitness-based adaptive mutation selection method by first dividing the whole population into multiple small subpopulations and then adaptively selecting a suitable mutation scheme from three popular mutation strategies for each sub-population to mutate its individuals based on the characteristics of all individuals in the subpopulation.
(2) Individual-level ensemble methods. In this kind of ensemble method, a suitable mutation scheme is adaptively selected for each individual. Therefore, different individuals may be mutated by different mutation schemes. For instance, Liu et al. [35] maintained a mutation pool containing three mutation strategies and then selected a mutation scheme for each individual based on its historical evolutionary experience and its heuristic information. Mohammad et al. [30] maintained three mutation strategies, namely, representative-based mutation, local random-based mutation, and global best history-based mutation. Then, they assigned each individual a mutation scheme adaptively through a winner-based allocation strategy.
Although the above DE variants using different mutation strategies have shown good performance in tackling certain kinds of optimization problems, their effectiveness still encounters great challenges when dealing with complex optimization problems. Therefore, improving the optimization effectiveness of DE in coping with complicated optimization problems is still an urgent demand. To this end, this paper aims to propose a simple yet effective dual elite groups-guided mutation strategy to help DE search the problem space with high effectiveness and high efficiency.

2.2.2. Research on Parameter Adaptation

Besides the mutation operation, two key control parameters, namely, the scaling factor, F, and the crossover rate, CR, also significantly influence the optimization performance of DE [40,41,42,43,45]. This means that DE is very sensitive to these two parameters. To alleviate this predicament, researchers have designed many parameter adaptation methods for the two key parameters. Broadly speaking, existing parameter adaptation approaches can be roughly divided into the following two categories:
(1) Dynamic parameter adaptation strategies. In this type of parameter adaptation strategy, the settings of the two key parameters are adjusted dynamically as the evolution continues, without considering the evolutionary states of individuals or the population [40,41,42]. To name a few representatives, in [40], two dynamic methods for setting F were proposed, namely, the random generation method and the time-varying method. In particular, the former randomly samples a value for F from the range (0.5, 1) with a uniform distribution, while the latter linearly decreases F over a specified interval. In [41], the value of F for each individual is randomly sampled by a normal distribution, N(0.5, 0.3). In [42], a sinusoidal function with respect to the number of generations was used to generate the values of F and CR.
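The dynamic schemes above can be sketched as simple samplers; the exact ranges and the sinusoidal form are illustrative assumptions, not the originals' precise formulas:

```python
import math
import random

def f_random():
    """Random generation: sample F uniformly from (0.5, 1)."""
    return random.uniform(0.5, 1.0)

def f_time_varying(g, g_max, f_hi=0.9, f_lo=0.4):
    """Time-varying: linearly decrease F from f_hi to f_lo over the run
    (the endpoint values here are assumed for illustration)."""
    return f_hi - (f_hi - f_lo) * g / g_max

def f_normal():
    """Per-individual F sampled from a normal distribution N(0.5, 0.3)."""
    return random.gauss(0.5, 0.3)

def f_sinusoidal(g, freq=0.25, base=0.5):
    """Sinusoidal schedule over the generation index g (assumed form)."""
    return base * (math.sin(2.0 * math.pi * freq * g) + 1.0)
```

All four depend only on random draws or the generation counter, never on the population's state, which is exactly what distinguishes this category from the self-adaptive one below.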
(2) Self-adaptive parameter adaptation strategies. In this type of parameter adaptation strategy, parameter values are updated adaptively by using the evolutionary information of the population or individuals. For instance, in [35], the authors utilized the historical experience of the entire population and the heuristic information of each individual to adaptively adjust the settings of F and CR for this individual. Tang et al. [43] proposed an individual-dependent parameter adaptation strategy by using the fitness rankings or the fitness values of individuals. In [46], the settings of F and CR are generated randomly based on the Gaussian distribution and the Cauchy distribution, respectively. However, the mean value of the Gaussian distribution and the position parameter of the Cauchy distribution are adaptively adjusted based on those F and CR values associated with successful individuals. Inspired by this idea, researchers have developed many improved versions, among which the most representative one is the parameter adaptation strategy proposed in SHADE [45].
Specifically, SHADE maintains an archive called MF to store mean values computed on the basis of the successful F values and another archive called MCR to store mean values calculated on the basis of the successful CR values. The sizes of the two archives are both H. In the initial stage, each element $M_{CR,i}$ and $M_{F,i}$ (i = 1, …, H) in the two archives is initialized to 0.5. Then, in each generation, a random index, $r_i$, is uniformly chosen from [1, H] and the settings of Fi and CRi for each individual are randomly generated as follows:
$$F_i = \mathrm{randc}_i(M_{F,r_i}, 0.1)$$
$$CR_i = \mathrm{randn}_i(M_{CR,r_i}, 0.1)$$
where M_{F,r_i} is the element selected from M_F to generate the setting of F_i for each individual, while M_{CR,r_i} is the element selected from M_CR to generate the setting of CR_i for each individual. randc_i(M_{F,r_i}, 0.1) denotes the Cauchy distribution with the position parameter set as M_{F,r_i} and the scaling factor set as 0.1, while randn_i(M_{CR,r_i}, 0.1) denotes the Gaussian distribution with the mean value set as M_{CR,r_i} and the standard deviation set as 0.1. When generating CR_i, if the value is not within [0, 1], it is regenerated until it falls into this range. When generating F_i, if F_i ≤ 0, it is regenerated until F_i > 0; if F_i > 1.0, it is truncated to 1.0.
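As a concrete illustration, these sampling rules can be sketched in Python (a minimal sketch assuming NumPy; the function and variable names are ours, not from any SHADE implementation):

```python
import numpy as np

def sample_f_cr(m_f, m_cr, rng):
    """Sample one (F, CR) pair from memory entries, per the rules above."""
    # F: Cauchy(position=m_f, scale=0.1); regenerate while F <= 0, truncate at 1.0.
    f = m_f + 0.1 * rng.standard_cauchy()
    while f <= 0.0:
        f = m_f + 0.1 * rng.standard_cauchy()
    f = min(f, 1.0)
    # CR: Gaussian(mean=m_cr, std=0.1); regenerate until it lies in [0, 1].
    cr = rng.normal(m_cr, 0.1)
    while not (0.0 <= cr <= 1.0):
        cr = rng.normal(m_cr, 0.1)
    return f, cr

rng = np.random.default_rng(0)
f, cr = sample_f_cr(0.5, 0.5, rng)
```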
During the update of individuals, if the generated offspring is better than its parent, the associated Fi and CRi are recorded into SF and SCR, respectively. Then, after the update of the whole population in each generation, the oldest elements in MF and MCR are updated in the following manner:
$$M_{F,k,G+1} = \begin{cases} \mathrm{mean}_{WL}(S_F), & \text{if } S_F \neq \emptyset \\ M_{F,k,G}, & \text{otherwise} \end{cases}$$
$$M_{CR,k,G+1} = \begin{cases} \mathrm{mean}_{WA}(S_{CR}), & \text{if } S_{CR} \neq \emptyset \\ M_{CR,k,G}, & \text{otherwise} \end{cases}$$
$$\mathrm{mean}_{WL}(S_F) = \frac{\sum_{k=1}^{|S_F|} \omega_k \cdot S_{F,k}^2}{\sum_{k=1}^{|S_F|} \omega_k \cdot S_{F,k}}$$
$$\mathrm{mean}_{WA}(S_{CR}) = \sum_{k=1}^{|S_{CR}|} \omega_k \cdot S_{CR,k}$$
$$\omega_k = \frac{\Delta f_k}{\sum_{j=1}^{|S_{CR}|} \Delta f_j}$$
where S_F and S_CR are the two sets used to record the F_i and CR_i values associated with the offspring that successfully replace their parents, and Δf_k = |f(u_{k,G}) − f(x_{k,G})| represents the fitness improvement of the offspring compared with its parent. With this adaptive strategy, SHADE obtains very promising performance in solving optimization problems [45].
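The memory update above can likewise be sketched in a few lines (an illustrative NumPy sketch; the circular index k pointing to the oldest entry is our bookkeeping choice, following the description of SHADE in [45]):

```python
import numpy as np

def update_memory(m_f, m_cr, k, s_f, s_cr, delta_f):
    """Overwrite the k-th (oldest) entries of M_F and M_CR using the
    successful F/CR values: weighted Lehmer mean for F, weighted
    arithmetic mean for CR."""
    if len(s_f) > 0:
        s_f, s_cr = np.asarray(s_f), np.asarray(s_cr)
        w = np.asarray(delta_f) / np.sum(delta_f)       # weights from fitness improvements
        m_f[k] = np.sum(w * s_f**2) / np.sum(w * s_f)   # mean_WL(S_F)
        m_cr[k] = np.sum(w * s_cr)                      # mean_WA(S_CR)
        k = (k + 1) % len(m_f)                          # advance to the next oldest slot
    return k

m_f, m_cr = np.full(5, 0.5), np.full(5, 0.5)
k = update_memory(m_f, m_cr, 0, s_f=[0.6, 0.8], s_cr=[0.3, 0.5], delta_f=[1.0, 1.0])
```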
Except for the above typical parameter adaptation strategies, researchers have also proposed many other individual-level parameter adaptation schemes. To list some representatives, in [68], Li et al. adjusted the values of F and CR associated with each individual based on fitness improvement. To be concrete, for each individual, if the associated F and CR values cooperate with the mutation strategy to create a better trial vector, such settings of F and CR are still used in the next generation. On the contrary, if such values of F and CR fail to help the mutation strategy to create a better offspring in many consecutive iterations, they are then randomly regenerated. In [69], the authors took advantage of the Gaussian distribution, the Cauchy distribution, and a triangular wave function to randomly sample three groups of values for F and CR, respectively. Then, they selected the proper settings of F and CR for each individual based on the evolutionary information of this individual. In [70], a parameter adaptation scheme was devised by making full use of the population feature information. Specifically, the standard deviation of the fitness values of all individuals, the sum of the standard deviation with respect to each dimension of all individuals, and the successful F and CR values are considered to adaptively regulate the settings of F and CR.
The above parameter adaptation methods have been widely incorporated into different DE variants to help them achieve good optimization performance in dealing with various optimization problems [2,3,4]. Since the focus of this paper is to design a novel mutation scheme, we directly take advantage of the parameter adaptation approach in SHADE [45] to cooperate with the devised DEGGDE to solve optimization problems.

3. Proposed DEGGDE

Although many advanced DE methods have been proposed in recent years, the performance of DE is still not as satisfactory as anticipated when it encounters complex optimization problems with many local basins. Moreover, in the era of big data and the Internet of Things, variables are likely to correlate with each other and, consequently, optimization problems are becoming more and more complicated [55,71,72]. Confronted with such problems, designing effective mutation schemes is the key to improving the optimization effectiveness of DE. To this end, this paper develops a simple, yet effective mutation scheme called “DE/current-to-duelite/1”, which is elucidated in detail in the following.

3.1. DE/Current-to-Duelite/1

In the research on evolutionary algorithms, it has been widely recognized that elite individuals usually preserve more valuable evolutionary information to guide the population to approach optimal areas [51,73,74]. Therefore, elites are widely utilized to direct the update of individuals. Nevertheless, most existing studies only make use of elites in the current population to assist the evolution of the population. Such utilization may limit the diversity of the guiding exemplars, with the result that individuals in the population are guided in limited directions.
To enhance the diversity of the guiding exemplars, we turn to the historical evolutionary information of the population to seek useful elite individuals. Along this line, we design a novel mutation scheme called “DE/current-to-duelite/1” for DE by assembling historical elite individuals to guide the mutation of the population.
Specifically, given that the population size is PS, we maintain an archive (denoted as A) of the same size (namely, PS) to store the obsolete parent individuals. Then, unlike existing mutation strategies [11,12,13,14,15], which only utilize elite individuals in the current population as the candidate guiding exemplars to direct the mutation of the population, we utilize the elite individuals in both the current population and the archive. To be concrete, we select the best ⌈p1 × PS⌉ individuals in the current population (p1 is the proportion of chosen elite individuals relative to the whole population) to form the elite set, denoted as PE. Then, we further select the best ⌈p2 × PS⌉ individuals in the archive (p2 is the proportion of chosen elite individuals relative to the entire archive) to form the elite set, denoted as AE. As a result, PE ∪ AE constitutes the set of candidate guiding exemplars for mutating individuals in the population.
In particular, each individual in the population is mutated by the devised “DE/current-to-duelite/1” as follows:
$$v_{i,G} = x_{i,G} + F_i \times (x_{pbest\_r_i,G} - x_{i,G}) + F_i \times (x_{r1,G} - x_{r2,G})$$
where x_{pbest_ri,G} is an elite individual randomly selected from PE ∪ AE to direct the mutation of x_{i,G}; x_{i,G} is the ith individual in the Gth generation; v_{i,G} is the generated mutant vector; F_i is the scaling factor associated with x_{i,G}; and x_{r1,G} and x_{r2,G} are two individuals randomly selected from P ∪ A with the uniform distribution, where P and A denote the current population and the archive, respectively. It is worth noting that r1 ≠ r2 ≠ i.
Another thing that should be noted is that, instead of using a completely random difference vector like most existing mutation mechanisms, “DE/current-to-duelite/1” utilizes a directional random difference vector. To be concrete, “DE/current-to-duelite/1” first randomly chooses two different individuals from P ∪ A and then lets the better of the two selected individuals act as x_{r1,G} while the worse one acts as x_{r2,G}. In this way, the difference vector between the two random individuals is directional, pointing from the worse individual to the better individual.
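Combining the dual elite selection, the sampling from P ∪ A, and the directional ordering, the mutation of one individual might look as follows (an illustrative NumPy sketch assuming minimization; the array layout and names are ours, not from the paper's code):

```python
import numpy as np

def current_to_duelite_1(pop, fit, arc, arc_fit, i, f_i, p1, rng):
    """One 'DE/current-to-duelite/1' mutation for individual i."""
    ps = len(pop)
    # Dual elite groups: best ceil(p1*PS) of P and best ceil((p1/2)*PS) of A.
    pe = pop[np.argsort(fit)[: int(np.ceil(p1 * ps))]]
    ae = arc[np.argsort(arc_fit)[: int(np.ceil(p1 / 2 * ps))]]
    elites = np.vstack([pe, ae])                       # PE ∪ AE
    pbest = elites[rng.integers(len(elites))]
    # Two distinct random individuals from P ∪ A, neither being individual i.
    union = np.vstack([pop, arc])
    ufit = np.concatenate([fit, arc_fit])
    r1, r2 = rng.choice([j for j in range(len(union)) if j != i], 2, replace=False)
    if ufit[r1] > ufit[r2]:                            # directional: the better one leads
        r1, r2 = r2, r1
    return pop[i] + f_i * (pbest - pop[i]) + f_i * (union[r1] - union[r2])

rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, (10, 3)); fit = (pop**2).sum(axis=1)
arc = rng.uniform(-5, 5, (10, 3)); arc_fit = (arc**2).sum(axis=1)
v = current_to_duelite_1(pop, fit, arc, arc_fit, 0, 0.5, 0.15, rng)
```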
Observing Equation (17), we find that the devised “DE/current-to-duelite/1” could help DEGGDE strike a good balance between exploration of the problem space and exploitation of the found promising areas for the following reasons:
(1) The elite individual set PE ∪ AE provides diverse candidate guiding exemplars for individuals to mutate. On the one hand, introducing the elite individuals from the archive, which stores historical evolutionary information, affords additional leading exemplars for individuals to mutate. This largely improves the mutation diversity of individuals and is thus of great benefit for individuals traversing the problem space in diverse directions. On the other hand, the elite individuals in both PE and AE are expectedly better than most individuals in the current population, P. Therefore, it is hoped that most individuals are guided to move towards optimal regions. From this perspective, it is highly possible that individuals can approach optimal areas fast.
(2) The selection of two random individuals, x_{r1,G} and x_{r2,G}, from P ∪ A further enhances the mutation diversity of individuals. Specifically, instead of selecting the two random individuals only from P, “DE/current-to-duelite/1” randomly selects both of them from P ∪ A. This largely promotes the diversity of the random difference vectors of all individuals. Consequently, the search diversity of DEGGDE is improved greatly, which is highly beneficial for escaping from local areas.
(3) The directional random difference vector affords great assistance for individuals to move towards optimal regions fast. Instead of using completely random difference vectors for all individuals, “DE/current-to-duelite/1” utilizes a directional random difference vector for each individual by letting the better one between the two selected random individuals act as the first exemplar and the worse one act as the second exemplar. In this way, individuals are expectedly mutated in promising directions. This is of considerable benefit for individuals to find promising areas fast.
With the above mechanisms, DEGGDE is anticipated to strike a very promising balance between exploration and exploitation to traverse the problem space sufficiently. Aside from the above techniques for mutation, another two key techniques should also be paid careful attention.
In the initial stage, we set archive A as empty. During evolution, the obsolete parents are sequentially added into A. When the number of individuals exceeds its fixed size, PS, we randomly select an individual from A and then compare it with the obsolete parent to be added. If the obsolete parent is better than the randomly selected individual, it replaces the individual; otherwise, it is discarded. In this way, on the one hand, high diversity can be maintained in A; on the other hand, the quality of individuals in A can also be ensured. This updating mechanism of A also potentially assists DEGGDE to maintain a good compromise between search diversity and search convergence.
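This archive update rule is simple to state in code (a minimal sketch assuming minimization; the list-based storage and names are illustrative):

```python
import random

def update_archive(archive, arc_fit, parent, parent_fit, ps):
    """Add an obsolete parent to archive A of capacity PS; when A is full,
    the parent replaces a randomly chosen member only if it is better."""
    if len(archive) < ps:
        archive.append(parent)
        arc_fit.append(parent_fit)
    else:
        j = random.randrange(ps)
        if parent_fit < arc_fit[j]:       # the obsolete parent is superior
            archive[j], arc_fit[j] = parent, parent_fit

a, af = [], []
update_archive(a, af, [1.0, 2.0], 3.0, ps=2)
update_archive(a, af, [0.0, 0.0], 1.0, ps=2)
update_archive(a, af, [5.0, 5.0], 99.0, ps=2)   # worse than any member: discarded
```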
For the settings of p1 and p2, p2 determines the number of the elite individuals in A, while p1 determines the number of the elite individuals in P. Since A stores the obsolete parents, theoretically, it is highly possible that the overall quality of individuals in A is worse than that of individuals in P. Thus, the number of elite individuals in A should be smaller than that in P. For convenience, in this paper, we set p2 = p1/2, leading to the number of the elite individuals in A being only half of that in P.
Subsequently, upon deeper analysis of p1, we find that a large p1 results in a large number of elite individuals participating in the mutation of the population. This improves the search diversity of DEGGDE. Nevertheless, a too-large p1 may lead to excessively high search diversity, which may harm the convergence of DEGGDE. In contrast, a small p1 leads to only a few elite individuals taking part in the mutation of the population. This is beneficial for fast convergence of the population to optimal areas, but it may result in low mutation diversity and, in turn, low search diversity, which may cause stagnation of the population. From the above analysis, we find that a fixed p1 is not suitable for DEGGDE to achieve satisfactory performance.
For ease and convenience, we propose a dynamic strategy for setting p1 to alleviate the above predicament. To be concrete, at each generation, we randomly sample a value for p1 from [0.1, 0.2] by the uniform distribution. In this way, on the one hand, the sensitivity of DEGGDE to p1 could be alleviated; on the other hand, a dynamic balance between exploration and exploitation can be maintained by DEGGDE. This endows DEGGDE with a good capability of searching the problem space diversely while at the same time refining the quality of the found optimal solutions.
It should be mentioned that the range [0.1, 0.2] is used here because the devised “DE/current-to-duelite/1” can actually be seen as an extension of the mutation strategy “DE/current-to-pbest/1” in JADE [46]. Therefore, we directly borrow the recommended setting range of p in JADE to randomly sample a value for p1 in this paper.

3.2. Difference between “DE/Current-to-Duelite/1” and Existing Similar Mutation Strategies

In the literature, many existing mutation strategies also utilize elite individuals to direct the mutation of the population [46,57,73,74], such as “DE/current-to-best/1” [57] and “DE/current-to-pbest/1” [46]. Compared with these existing mutation schemes, “DE/current-to-duelite/1” differs from them mainly in the following aspects:
(1) The most significant difference between “DE/current-to-duelite/1” and existing elite-guided mutation strategies lies in the dual elite groups-guided mutation mechanism. Most existing studies only use the elite individuals in the current population to direct the mutation of all individuals. Nevertheless, in “DE/current-to-duelite/1”, the elite individuals in the current population and those in the archive are used together to direct the evolution of all individuals. Therefore, compared with most existing mutation schemes, “DE/current-to-duelite/1” possesses more candidate guiding exemplars and thus preserves higher mutation diversity. Such utilization of historical evolutionary information, focusing on the best historical individuals, strengthens the ability of DEGGDE to locate optimal regions fast.
(2) The second significant difference between “DE/current-to-duelite/1” and existing mutation strategies lies in the selection of two random individuals and the construction of the random difference vector. Most existing mutation strategies choose the two random individuals from the population, whereas “DE/current-to-duelite/1” selects the two random individuals from P ∪ A. This selection mechanism affords a high diversity of random difference vectors and thus is very beneficial for improving the search diversity of DEGGDE, which may greatly benefit the population in avoiding being trapped in local zones. In addition, unlike most existing mutation schemes that use completely random difference vectors, “DE/current-to-duelite/1” utilizes a directional random difference vector by taking the better one between the two random individuals as the first exemplar while taking the worse one as the second exemplar. By this means, it is expected that the directional random difference vectors could provide positive and promising directions for individuals to mutate. As a result, slightly faster convergence of the population to optimal regions could be achieved.

3.3. The Complete DEGGDE

In addition to “DE/current-to-duelite/1”, this paper employs the binomial crossover scheme to combine the target vector and the corresponding mutation vector to generate the final offspring. With the aim of reducing the sensitivity to the control parameters F and CR, DEGGDE adopts the widely used individual-level parameter adaptation strategy in SHADE [45] to generate the settings of the two parameters for each individual. However, a small modification is made to the CR settings. To be concrete, this paper first generates a set of CR values for all individuals and then sorts them from the smallest to the largest. Subsequently, in the crossover operation, we assign smaller CR values to better individuals. In this way, the generated offspring of the better individuals preserve smaller differences with their parents; in contrast, the generated offspring of the worse individuals preserve larger differences with their parents. As a result, better individuals can focus on exploiting the optimal areas they locate while worse individuals concentrate on exploring the problem space to find more promising areas. For all these techniques, the pseudocode of the complete DEGGDE is outlined in Algorithm 2.
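The rank-based CR assignment just described can be sketched as follows (an illustrative NumPy snippet assuming minimization; the helper name is ours):

```python
import numpy as np

def assign_cr_by_rank(cr_values, fitness):
    """Give the smallest sampled CR values to the best (lowest-fitness)
    individuals, so better individuals change less during crossover."""
    order = np.argsort(fitness)        # indices of individuals, best first
    cr = np.empty_like(cr_values, dtype=float)
    cr[order] = np.sort(cr_values)     # i-th best individual gets i-th smallest CR
    return cr

fitness = np.array([3.0, 1.0, 2.0])    # individual 1 is the best
cr = assign_cr_by_rank(np.array([0.9, 0.2, 0.5]), fitness)
```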
Algorithm 2: The pseudocode of DEGGDE
Input: Population size PS, maximum number of fitness evaluations FESmax;
1:  Generate PS individuals randomly and calculate their fitness; fes = PS;  //initialization
2:  Set all elements in M_F and M_CR to 0.5; A = ∅; set the generation number G = 0;
3:  While (fes ≤ FESmax) do  //main loop
4:    S_F = ∅; S_CR = ∅;
5:    p1 = uniform(0.1, 0.2); p2 = p1 / 2;
6:    Sort individuals in P and those in A from the best to the worst, respectively;
7:    Select the top best ⌈PS × p1⌉ individuals from P to form PE and the top best ⌈PS × p2⌉ individuals from A to form AE;
8:    Randomly select a pair of values (M_{F,rand} and M_{CR,rand}) from M_F and M_CR to generate F and CR for all individuals;
9:    Sort CR from the smallest to the largest;
10:   For i = 1:PS do
11:     Obtain the associated CR_i according to the ranking of x_{i,G};
12:     Randomly select an elite individual x_{pbest_ri} from PE ∪ AE;
13:     Randomly select two different individuals from P ∪ A with x_{r2,G} ≠ x_{r1,G} ≠ x_{i,G}, and if f(x_{r1,G}) > f(x_{r2,G}), swap x_{r1,G} and x_{r2,G};
14:     Generate the mutation vector by v_{i,G} = x_{i,G} + F_i × (x_{pbest_ri,G} − x_{i,G}) + F_i × (x_{r1,G} − x_{r2,G});  //mutation
15:     Obtain the trial vector u_{i,G} by Equation (7) and calculate its fitness; fes++;
16:     If f(u_{i,G}) < f(x_{i,G}) then  //selection
17:       If |A| < PS then
18:         Directly add x_{i,G} into A;
19:       Else
20:         Randomly select an individual from A to compare with x_{i,G}, and if x_{i,G} is superior, replace the selected individual with x_{i,G};
21:       End If
22:       x_{i,G+1} = u_{i,G}; record F_i into S_F and CR_i into S_CR;
23:     Else
24:       x_{i,G+1} = x_{i,G};
25:     End If
26:   End For
27:   Update M_F and M_CR based on Equations (11)–(15); G++;
28: End While
29: Obtain the global best solution gbest and its fitness f(gbest);
Output: f(gbest) and gbest
In the initial phase, as shown in lines 1–2, PS individuals are randomly initialized and then evaluated. Along with the initialization of the population, the related archives for the parameter adaptation and mutation are also initialized. Subsequently, the algorithm enters the main loop. At first, as shown in line 5, the value of p1 is randomly generated according to the uniform distribution, and then the value of p2 is computed. After that, as shown in line 6, individuals in the current population, P, and those in the archive, A, are sorted from the best to the worst, respectively. Then, as shown in line 7, the elite sets PE and AE are formed by selecting the best ⌈p1 × PS⌉ individuals in P and the best ⌈p2 × PS⌉ individuals in A, respectively. Subsequently, a set of F values and a set of CR values are randomly generated based on the Cauchy distribution and the Gaussian distribution, respectively (line 8). After that, the set of CR values is sorted from the smallest to the largest (line 9). After the above preparations, the evolution of all individuals begins, as shown in lines 10–27. To update each individual, the associated CR value is first obtained according to its fitness ranking, as shown in line 11. Then, one elite individual is randomly selected from PE ∪ AE (line 12). Next, two random individuals are selected from P ∪ A (line 13). After that, the devised mutation scheme is performed to mutate the individual (line 14), following which the binomial crossover operation generates the final offspring, as shown in line 15. Afterwards, the offspring is evaluated and the records in the relevant archives are updated along with the selection operation (lines 16–26). The above operations are performed iteratively until the termination condition is satisfied, which is commonly the exhaustion of the maximum number of fitness evaluations. Finally, before the termination of the algorithm, the found optimal solution and its fitness value are output.
Without considering the time for fitness evaluations, at each generation, it takes O(PS × logPS) to sort individuals in P and another O(PS × logPS) to sort individuals in A. To generate two sets of values for F and CR, respectively, it requires O(PS) twice. To sort the set of CR values, it requires O(PS × logPS). Then, O(PS × D) is required to generate the offspring for all individuals. In the selection process, O(PS × D) is required twice for updating the population, P, and the archive, A. Finally, O(PS) is needed for parameter adaptation. As a whole, the overall time complexity of DEGGDE is O(PS × logPS + PS × D), which does not impose a serious burden as compared with the time complexity of the classical DE.

4. Experiments

This section comprehensively verifies the effectiveness of DEGGDE in tackling optimization problems through a large number of experiments. To start with, the experimental setup is introduced, including the adopted benchmark problem set, the selected state-of-the-art algorithms for comparison, and their parameter settings. Next, the performance of DEGGDE is extensively compared with that of the selected methods on the benchmark set. Finally, an in-depth investigation of DEGGDE is carried out by validating the usefulness of the designed mutation scheme and the devised parameter adaptation strategy.

4.1. Experimental Setup

In the experiments, we use the CEC’2017 benchmark set, which has been widely used to test the optimization performance of evolutionary algorithms [75,76,77,78], to demonstrate the effectiveness and efficiency of DEGGDE. This set consists of 29 numerical optimization problems, which are classified into four different types. Specifically, F1 and F3 are unimodal optimization problems, F4–F10 are multimodal optimization problems, F11–F20 are hybrid optimization problems, and F21–F30 are composition optimization problems. Brief information about these 29 problems is listed in Table 1, and more detailed information can be found in [55]. To comprehensively verify the effectiveness of DEGGDE, we set three different dimensionalities, namely, 30D, 50D, and 100D, for all the CEC’2017 benchmark problems.
To make comparisons with DEGGDE, eleven representative and advanced DE variants are selected, namely, SHADE [45], GPDE [79], DiDE [80], SEDE [81], FADE [32], FDDE [27], TPDE [69], NSHADE [53], CUSDE [25], PFIDE [70], and EJADE [54].
Similar to the devised DEGGDE, SHADE, NSHADE, DiDE, FDDE, and CUSDE adopt a single mutation framework to mutate individuals. Specifically, SHADE utilizes “DE/current-to-pbest/1” to mutate individuals with an adaptive strategy for F and CR [45]. Such an adaptive strategy is also used in the devised DEGGDE to mutate individuals. Therefore, SHADE is selected as a baseline method to compare with DEGGDE. NSHADE is the latest improvement on SHADE with a new parameter adaptation scheme based on the nearest spatial neighborhood information [53]. DiDE employs an extension of “DE/current-to-pbest/1” to mutate individuals by introducing an external archive based on depth information. For the parameter adaptation of F and CR, it divides the population into several groups and separately adjusts the settings of F and CR for individuals in different groups adaptively [80]. FDDE utilizes a new mutation operator to mutate the population by selecting parental individuals based on their probabilities, which are computed on the basis of both fitness rankings and diversity rankings of individuals [27]. CUSDE uses the number of consecutive unsuccessful updates to calculate the selection probability of each individual and then chooses the base individual and the terminal individuals by the roulette wheel selection method. Inferior individuals with a large number of consecutive unsuccessful updates are removed during the evolution [25].
Unlike the above methods, GPDE, SEDE, FADE, TPDE, and EJADE assemble multiple mutation strategies to mutate different individuals distinctly. In particular, GPDE assembles a newly designed Gaussian distribution-based mutation operator and “DE/rand-worst/1” to mutate each individual adaptively. Then, a cosine function is employed to sample F and a Gaussian distribution is used to sample CR dynamically in GPDE [79]. SEDE hybridizes three different mutation strategies to mutate individuals adaptively by controlling the proportion of individuals where each mutation strategy is performed to get the associated mutation vector [81]. FADE first divides the population into several sub-populations and then adaptively selects a mutation strategy from three mutation candidates for each individual to mutate based on its fitness [32]. TPDE first separates the population into three sub-populations according to the newly designed zonal-constraint stepped division mechanism and then employs three elite-guided mutation operations to mutate individuals in the three sub-populations, respectively. Then, it leverages the Gaussian distribution, the Cauchy distribution, and a triangular wave function to generate F and CR for individuals in the three sub-populations, respectively [69]. EJADE assembles two sets of integrated crossover and mutation operations to create promising offspring [54].
Different from the above 10 compared DE methods, PFIDE develops a parameter adaptation framework for DE. Specifically, it takes advantage of population feature information, such as the summation of the standard deviation values with respect to each dimension of all individuals in the population and the standard deviation in terms of the fitness values of all individuals, to allocate historical successful F and CR values with high similarity measured by the population feature to the current population [70].
To ensure fair comparisons, for those compared algorithms which were also tested on the CEC’2017 benchmark set with three settings of dimensionality, we directly adopt the recommended settings of the population size in the associated papers for them. For those compared methods which were not tested on the CEC’2017 benchmark set, we fine-tuned the population size, PS, for them for CEC’2017 problems with the three dimensionality settings. The specific settings of the population size for all algorithms to solve the CEC’2017 problems with the three settings of dimensionality are listed in Table 2.
Unless otherwise stated, the maximum number of fitness evaluations for all algorithms is set as 10,000 × D, with D denoting the dimension size. For fairness, we run each algorithm 30 times independently on each benchmark problem and then use the mean and the standard deviation values to evaluate the optimization performance of each algorithm on each benchmark problem. To assess statistical significance, the Friedman test at the significance level α = 0.05 is performed to obtain the average rank of each algorithm with respect to the overall performance on one whole benchmark set. In addition, the Wilcoxon rank sum test at the significance level α = 0.05 is performed to compare the optimization result of DEGGDE with that of each compared method on each benchmark problem. The symbols “+”, “=”, and “−” at the upper right corner of each p-value in the tables indicate that DEGGDE obtains significantly better, equivalent, and significantly worse performance than the compared method on the corresponding problem, respectively. “w/t/l” in the tables counts the numbers of “+”, “=”, and “−”, respectively.
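For readers wishing to reproduce this testing protocol, the two tests can be applied to per-run results as follows (a sketch on synthetic data, assuming SciPy is available; note that in this paper the Friedman test is computed across benchmark problems rather than across runs):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical final errors of three algorithms over 30 independent runs.
alg_a = rng.normal(1.0, 0.2, 30)
alg_b = rng.normal(1.3, 0.2, 30)
alg_c = rng.normal(1.5, 0.3, 30)

# Pairwise Wilcoxon rank sum test at alpha = 0.05 ('+', '=', or '-').
stat, p = stats.ranksums(alg_a, alg_b)
symbol = "=" if p >= 0.05 else ("+" if alg_a.mean() < alg_b.mean() else "-")

# Friedman test across all algorithms on matched samples.
chi2, p_friedman = stats.friedmanchisquare(alg_a, alg_b, alg_c)
```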
Finally, it deserves mentioning that DEGGDE is implemented in Python and all experiments are executed on the same PC with a four-core Intel(R) Core(TM) i5-3470 3.20 GHz CPU, 4 GB of RAM, and the Ubuntu 12.04 LTS 64-bit system.

4.2. Comparisons between DEGGDE and State-of-the-Art DE Methods on the CEC’2017 Set

This section compares DEGGDE with the eleven selected state-of-the-art DE variants on the CEC’2017 benchmark set with three dimensionality settings, namely, 30D, 50D, and 100D. The population sizes of all algorithms are listed in Table 2, and Table 3, Table 4 and Table 5 show the detailed optimization performance of all algorithms and their comparisons on the 30D, 50D, and 100D CEC’2017 benchmark sets, respectively. For better comparison, we summarize the statistical comparison results in terms of “w/t/l” and the average rank “Rank” in Table 6. Figure 1, Figure 2 and Figure 3 show the convergence behaviors of all algorithms on the 30D, 50D, and 100D CEC’2017 benchmark sets, respectively. In addition, following the guideline in [82], we also present the box-plot diagrams of the global best fitness values obtained by DEGGDE and the eleven compared DE variants in the thirty independent runs on the 30D, 50D, and 100D CEC’2017 benchmark sets in Figure 4, Figure 5 and Figure 6, respectively. In these figures, the rectangular box shows the quartile distribution of the global best fitness values obtained in the thirty runs. The horizontal line inside the box represents the median of the thirty values, the upper and lower boundaries of the box represent the upper and lower quartiles, and the lines extending from the box mark the maximum and minimum values among the thirty results. The symbol “+” marks outliers.
From close observation of the above figures and tables, we obtain the following findings:
(1) From the Friedman test results, on the one hand, DEGGDE consistently ranks in first place for the 30D, 50D, and 100D CEC’2017 benchmark sets. This demonstrates that DEGGDE obtains the best overall optimization performance among all algorithms. Furthermore, the rank value of DEGGDE is much smaller than those of the eleven compared methods on the three CEC’2017 benchmark sets. This reveals that DEGGDE shows significant superiority to the eleven compared methods in solving optimization problems.
(2) From the “w/t/l” derived from the Wilcoxon rank sum test, we find that with increasing dimensionality, DEGGDE achieves better and better optimization performance than the eleven compared methods. Specifically, on the 30D CEC’2017 benchmark set, DEGGDE obtains significantly better performance than the eleven compared algorithms on at least ten problems. In particular, it significantly outperforms six compared methods (namely, SHADE, GPDE, FDDE, TPDE, NSHADE, and EJADE) on more than twenty problems. On the 50D CEC’2017 benchmark set, DEGGDE achieves significantly better performance than the eleven compared methods on at least fifteen problems. Particularly, it shows significant superiority to nine compared methods (namely, SHADE, GPDE, SEDE, FADE, FDDE, TPDE, NSHADE, PFIDE, and EJADE) on more than twenty problems. On the 100D CEC’2017 set, DEGGDE is significantly superior to the eleven compared methods on more than eighteen problems. In particular, it performs significantly better than seven compared methods (namely, GPDE, FADE, FDDE, NSHADE, CUSDE, PFIDE, and EJADE) on more than twenty problems.
(3) With respect to the optimization performance on different types of optimization problems, we find that DEGGDE is particularly good at solving complex optimization problems, such as multimodal problems, hybrid problems, and composition problems. Specifically, (a) on the two unimodal problems (F1 and F3), DEGGDE shows significantly better performance on both problems than seven, eight, and six compared methods on the 30D, 50D, and 100D CEC’2017 benchmark sets, respectively. (b) On the seven multimodal problems (F4–F10), DEGGDE achieves increasingly better performance as the dimensionality increases. On the 30D multimodal problems, DEGGDE presents significant dominance over the eleven compared methods on at least three problems and only loses to them on, at most, three problems. In particular, it is significantly better than five compared methods on more than five problems. On the 50D multimodal problems, DEGGDE significantly outperforms the eleven compared methods on four problems and only loses to them on, at most, two problems. Particularly, it is significantly superior to five compared methods on more than five problems. On the 100D multimodal problems, DEGGDE performs significantly better than the eleven compared methods on more than four problems and only shows inferiority to them on, at most, three problems. Particularly, it is significantly better than nine compared methods on more than five problems. (c) On the ten hybrid problems (F11–F20), DEGGDE also shows increasingly better performance with increasing dimensionality. Specifically, DEGGDE performs significantly better on more than seven problems than seven, nine, and nine compared methods on the 30D, 50D, and 100D CEC’2017 benchmark sets, respectively. (d) On the ten composition problems (F21–F30), DEGGDE also consistently achieves significantly better performance than most of the eleven compared algorithms.
Specifically, it shows significant dominance on more than six problems compared to ten, nine, and nine other methods on the 30D, 50D, and 100D CEC’2017 benchmark sets, respectively.
(4) In terms of the convergence behavior comparisons between DEGGDE and the eleven compared methods shown in Figure 1, Figure 2 and Figure 3, we find that DEGGDE achieves a much faster convergence speed along with much better solutions than most of the eleven compared methods. Specifically, on the 30D CEC’2017 benchmark set, DEGGDE obtains the best performance in terms of both solution quality and convergence speed among all methods on eight problems (F1, F5, F7–F8, F12, F16, F20–F21). On the 50D CEC’2017 benchmark set, DEGGDE achieves the fastest convergence speed and the best solutions among all algorithms on eleven problems (F5, F7–F8, F16–F17, F20–F21, F23–F24, F26, F29). On the 100D CEC’2017 benchmark set, DEGGDE performs the best with respect to both convergence speed and solution quality among all approaches on thirteen problems (F1, F5, F7–F8, F13, F16–F17, F20–F21, F23–F24, F26, F29).
(5) With respect to the box-plot diagrams shown in Figure 4, Figure 5 and Figure 6, we find that DEGGDE achieves much more stable performance than the eleven compared methods. We also find that the overall quality of the global optima found by DEGGDE in the thirty independent runs is much better than that of most of the eleven compared methods. Specifically, on the 30D CEC’2017 benchmark set, the distribution of the global best fitness values obtained by DEGGDE is more centered around a better optimum than those of the eleven compared methods on eighteen problems (F1, F3, F5–F9, F11–F16, F18–F19, F21–F22, F28), which indicates its superior and stable performance. On the 50D CEC’2017 benchmark set, this holds on twenty problems (F1, F3, F5–F9, F12, F14–F21, F23–F24, F26, F29). On the 100D CEC’2017 benchmark set, it holds on eighteen problems (F1, F3, F5–F8, F12–F14, F16–F19, F21, F23–F24, F26, F29).
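Box-plot comparisons of this kind can be reproduced with Matplotlib. The sketch below is illustrative only: the fitness samples are fabricated, and the algorithm labels are placeholders, not the paper's actual data.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the figure can be saved without a display
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
# Fabricated global best fitness values from thirty independent runs
# for three algorithms on one benchmark problem (illustrative only).
runs = {name: rng.normal(mu, sd, 30)
        for name, mu, sd in [("DEGGDE", 1.0, 0.05),
                             ("Variant A", 1.3, 0.20),
                             ("Variant B", 1.6, 0.30)]}

fig, ax = plt.subplots(figsize=(5, 3))
ax.boxplot(list(runs.values()))          # one box per algorithm
ax.set_xticklabels(list(runs.keys()))
ax.set_ylabel("Global best fitness")
ax.set_title("Distribution over 30 independent runs (illustrative)")
fig.tight_layout()
fig.savefig("boxplot_demo.png")
```

A narrower, lower box corresponds to the "more centered around a better optimum" behavior described above: small spread means stable performance, and a low median means better solution quality.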
In summary, the extensive comparisons above between DEGGDE and the eleven compared methods demonstrate that DEGGDE attains high effectiveness and efficiency in solving optimization problems and performs very stably. In particular, it is good at solving complex optimization problems, such as multimodal, hybrid, and composition problems.

4.3. Deep Investigation on the Effectiveness of “DE/Current-to-Duelite/1”

In this section, we conduct experiments to verify the effectiveness of the devised “DE/current-to-duelite/1”. To this end, we develop several variants of this mutation strategy.
First, the designed mutation scheme utilizes the elite individuals in both the population and the archive to direct the mutation of individuals. To demonstrate the effectiveness of this design, we build one variant that only utilizes the elite individuals in the population to guide the mutation, denoted as “DE/current-to-Pelite/1”, and another variant that only utilizes the elite individuals in the archive, denoted as “DE/current-to-Aelite/1”.
Second, as for the directional random difference vector in the devised mutation scheme, unlike existing studies, we randomly choose two individuals from P∪A for each individual to be mutated and then take the better one as x_r1,G and the worse one as x_r2,G. In this way, the generated mutation vector is expected to move towards promising areas quickly. To demonstrate the effectiveness of this way of generating the random difference vector, we develop three further variants. Firstly, we remove the directional scheme by randomly assigning the two selected individuals to x_r1,G and x_r2,G without comparing their fitness, yielding a variant denoted as “DE/current-to-duelite/1-WD”. Subsequently, instead of choosing both random individuals from P∪A, we randomly choose one individual from P and the other from P∪A. Without the directional scheme between the two randomly selected individuals, the resulting variant is denoted as “DE/current-to-duelite/1-PWD”; with the directional scheme, it is denoted as “DE/current-to-duelite/1-PD”.
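To make the variant definitions concrete, the following sketch shows one mutation step of the base scheme. It assumes the common current-to-elite update form v = x + F·(exemplar − x) + F·(x_r1 − x_r2) and an elite fraction p; the function names and the parameter handling are illustrative, not the paper's exact implementation.

```python
import numpy as np

def directional_pair(pool, fit, rng):
    """Randomly pick two distinct individuals from `pool` and return them as
    (better, worse) under minimization, i.e., the directional scheme for the
    random difference vector x_r1 - x_r2."""
    i, j = rng.choice(len(pool), size=2, replace=False)
    return (pool[i], pool[j]) if fit[i] <= fit[j] else (pool[j], pool[i])

def current_to_duelite(x, F, pop, pop_fit, archive, arch_fit, p, rng):
    """Sketch of 'DE/current-to-duelite/1': the guiding exemplar is drawn from
    the elites of BOTH the current population P and the archive A, and the
    random difference vector is directional over P∪A."""
    # dual elite group: the top-p fraction of P plus the top-p fraction of A
    n_elite = max(1, int(p * len(pop)))
    elites = [pop[k] for k in np.argsort(pop_fit)[:n_elite]]
    if len(archive) > 0:
        n_arch = max(1, int(p * len(archive)))
        elites += [archive[k] for k in np.argsort(arch_fit)[:n_arch]]
    exemplar = elites[rng.integers(len(elites))]

    # directional random difference vector, both members taken from P ∪ A
    union = list(pop) + list(archive)
    union_fit = list(pop_fit) + list(arch_fit)
    x_r1, x_r2 = directional_pair(union, union_fit, rng)

    return x + F * (exemplar - x) + F * (x_r1 - x_r2)
```

Restricting `elites` to the population alone gives the “DE/current-to-Pelite/1” variant, restricting it to the archive gives “DE/current-to-Aelite/1”, and dropping the fitness comparison in `directional_pair` gives the “-WD” variant.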
After the above preparation, we conduct experiments on the 50D CEC’2017 benchmark set to compare the original “DE/current-to-duelite/1” with the five developed variants. Table 7 presents the comparison results. In this table, the Friedman test is performed to obtain the average rank of each variant of “DE/current-to-duelite/1”. The Wilcoxon rank sum test is also performed to compare the original “DE/current-to-duelite/1” with each developed variant on each benchmark problem. Based on the results of this test, “w/t/l” in this table counts the number of problems where “DE/current-to-duelite/1” is significantly better than, equivalent to, and significantly worse than the corresponding variant, respectively. From this table, we obtain the following findings:
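The two statistical procedures used throughout the comparisons, the Wilcoxon rank sum test for per-problem pairwise significance and the Friedman test for overall ranking, can be reproduced with SciPy. In the sketch below the 30-run sample size matches the experimental setup, but the error values are fabricated purely to illustrate the calls, and the 0.05 significance level is the conventional choice rather than a confirmed detail of the paper.

```python
import numpy as np
from scipy.stats import ranksums, friedmanchisquare

rng = np.random.default_rng(42)

# Fabricated final errors over 30 independent runs on one problem.
errors_a = rng.normal(1.0, 0.1, 30)  # stand-in for "DE/current-to-duelite/1"
errors_b = rng.normal(1.5, 0.1, 30)  # stand-in for one developed variant

# Wilcoxon rank sum test: one pairwise comparison per benchmark problem.
_, p = ranksums(errors_a, errors_b)
if p < 0.05:
    outcome = "w" if np.median(errors_a) < np.median(errors_b) else "l"
else:
    outcome = "t"  # the middle entry of the "w/t/l" counts

# Friedman test: overall average ranks across all problems.
scores = rng.random((29, 6))  # rows = problems, columns = algorithm variants (fabricated)
chi2, p_friedman = friedmanchisquare(*scores.T)
# rank 1 = best per problem; average each column's rank over all problems
avg_ranks = np.argsort(np.argsort(scores, axis=1), axis=1).mean(axis=0) + 1
```

Accumulating `outcome` over all 29 benchmark problems yields the "w/t/l" triple reported in the tables, and `avg_ranks` corresponds to the Friedman rank values discussed in finding (1) below.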
(1) Regarding the Friedman test, DEGGDE with “DE/current-to-duelite/1” ranks first among all versions of DEGGDE. This means that “DE/current-to-duelite/1” helps DEGGDE achieve the best overall performance on the 50D CEC’2017 benchmark set. Additionally, its rank value is much smaller than those of the other versions, which reveals that “DE/current-to-duelite/1” is significantly superior to them.
(2) From the perspective of “w/t/l”, we find that DEGGDE with “DE/current-to-duelite/1” performs significantly better than DEGGDE with the other versions on more than ten problems and shows inferior performance to them on no more than three problems. In particular, compared with “DE/current-to-Pelite/1” and “DE/current-to-Aelite/1”, “DE/current-to-duelite/1” achieves significantly better performance on fourteen and twenty-one problems, respectively. This proves the great effectiveness of using both the elite individuals in the population and those in the archive to direct the mutation of the population. Compared with “DE/current-to-duelite/1-WD”, “DE/current-to-duelite/1” significantly outperforms it on eleven problems. This illustrates the effectiveness of the directional random difference vector in the devised mutation scheme, i.e., taking the better one of the two randomly selected individuals as x_r1,G and the worse one as x_r2,G. Furthermore, “DE/current-to-duelite/1” significantly outperforms “DE/current-to-duelite/1-PD” and “DE/current-to-duelite/1-PWD” on ten and nineteen problems, respectively. This demonstrates the great effectiveness of selecting both random individuals from P∪A.
The above experimental results have demonstrated the great effectiveness of the devised mutation scheme, namely “DE/current-to-duelite/1”. In particular, the dual elite groups, namely the elite individuals in the current population and those in the archive, provide diverse yet promising guiding exemplars to lead the mutation of individuals. This not only affords great help in diversity enhancement, but also offers great assistance in convergence acceleration. Together with the directional random difference vector, the devised mutation scheme provides directional guidance for individuals to approach optimal areas fast.

5. Conclusions

This paper has devised a dual elite groups-guided differential evolution (DEGGDE) by designing a novel mutation scheme, called “DE/current-to-duelite/1”. To make full use of the historical evolutionary information, this mutation scheme maintains an archive to store the obsolete parent individuals and then utilizes the elite individuals in both the current population and the archive as the candidate guiding exemplars to direct the mutation of the population. In this way, the diversity of the leading exemplars could be largely enhanced, which is beneficial for individuals to search the problem space diversely. In addition, to accelerate the convergence of DEGGDE, instead of generating the random difference vector in the mutation scheme completely randomly, we propose to generate a directional random difference vector for the devised mutation scheme by first randomly selecting two individuals from the combination of the current population and the archive and then taking the better one as the first exemplar and the worse one as the second exemplar. In this way, the directional random difference vector provides a promising direction for the mutated individual to move towards optimal areas. With the collaboration between the two main techniques, DEGGDE is expected to strike a good balance between exploration and exploitation and thus is anticipated to achieve promising performance in solving optimization problems.
Extensive experiments have been executed on the CEC’2017 benchmark set with three different settings of dimensionality (namely 30D, 50D, and 100D). The experimental results have shown that, compared with eleven state-of-the-art DE variants, DEGGDE achieves significantly better performance in solving the 30D, 50D, and 100D CEC’2017 problems. In particular, DEGGDE shows good capability in coping with complex optimization problems, such as multimodal, hybrid, and composition optimization problems. Additionally, the effectiveness of the devised mutation scheme and of the dynamic adjustment of the number of elite individuals has also been verified.
In the current version of DEGGDE, we dynamically adjust the number of elite individuals without considering the evolutionary state of the population. In the future, to further improve the optimization performance of DEGGDE, we aim to devise an adaptive adjustment strategy for this parameter by taking into account the evolutionary information of individuals and the evolutionary state of the population. Another future direction is to employ DEGGDE to solve practical optimization problems in both academic and engineering applications, such as path planning of unmanned aerial vehicles [83], automatic machine learning [84], control parameter optimization in wireless power transfer systems [85], expensive optimization [86], and optimization problems relevant to social networks [87].

Author Contributions

T.-T.W.: Implementation, formal analysis, and writing—original draft preparation. Q.Y.: Conceptualization, supervision, methodology, formal analysis, and writing—original draft preparation. X.-D.G.: Methodology, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded in part by the National Natural Science Foundation of China under Grant 62006124 and in part by the Natural Science Foundation of Jiangsu Province under Project BK20200811.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  2. Zhang, J.; Chen, D.; Yang, Q.; Wang, Y.; Liu, D.; Jeon, S.-W.; Zhang, J. Proximity ranking-based multimodal differential evolution. Swarm Evol. Comput. 2023, 78, 101277. [Google Scholar] [CrossRef]
  3. Liu, D.; He, H.; Yang, Q.; Wang, Y.; Jeon, S.-W.; Zhang, J. Function value ranking aware differential evolution for global numerical optimization. Swarm Evol. Comput. 2023, 78, 101282. [Google Scholar] [CrossRef]
  4. Yang, Q.; Yan, J.-Q.; Gao, X.-D.; Xu, D.-D.; Lu, Z.-Y.; Zhang, J. Random neighbor elite guided differential evolution for global numerical optimization. Inf. Sci. 2022, 607, 1408–1438. [Google Scholar] [CrossRef]
  5. Zhou, S.; Xing, L.; Zheng, X.; Du, N.; Wang, L.; Zhang, Q. A self-adaptive differential evolution algorithm for scheduling a single batch-processing machine with arbitrary job sizes and release times. IEEE Trans. Cybern. 2021, 51, 1430–1442. [Google Scholar] [CrossRef]
  6. Xu, Y.; Pi, D.; Wu, Z.; Chen, J.; Zio, E. Hybrid discrete differential evolution and deep Q-network for multimission selective maintenance. IEEE Trans. Reliab. 2022, 71, 1501–1512. [Google Scholar] [CrossRef]
  7. Liu, H.; Chen, Q.; Pan, N.; Sun, Y.; An, Y.; Pan, D. UAV stocktaking task-planning for industrial warehouses based on the improved hybrid differential evolution algorithm. IEEE Trans. Ind. Inform. 2022, 18, 582–591. [Google Scholar] [CrossRef]
  8. Brest, J.; Maučec, M.S.; Bošković, B. Single objective real-parameter optimization: Algorithm jSO. In Proceedings of the IEEE Congress on Evolutionary Computation, Donostia, Spain, 5–8 June 2017; pp. 1311–1318. [Google Scholar]
  9. Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving CEC2017 benchmark problems. In Proceedings of the IEEE Congress on Evolutionary Computation, Donostia, Spain, 5–8 June 2017; pp. 372–379. [Google Scholar]
  10. Tanabe, R.; Fukunaga, A.S. Improving the search performance of shade using linear population size reduction. In Proceedings of the IEEE Congress on Evolutionary Computation, Beijing, China, 6–11 July 2014; pp. 1658–1665. [Google Scholar]
  11. Liao, J.; Cai, Y.; Wang, T.; Tian, H.; Chen, Y. Cellular direction information based differential evolution for numerical optimization: An empirical study. Soft Comput. 2016, 20, 2801–2827. [Google Scholar] [CrossRef]
  12. Mohamed, A.W. An improved differential evolution algorithm with triangular mutation for global numerical optimization. Comput. Ind. Eng. 2015, 85, 359–375. [Google Scholar] [CrossRef]
  13. Opara, K.; Arabas, J. Comparison of mutation strategies in differential evolution—A probabilistic perspective. Swarm Evol. Comput. 2018, 39, 53–69. [Google Scholar] [CrossRef]
  14. Sun, G.; Cai, Y.; Wang, T.; Tian, H.; Wang, C.; Chen, Y. Differential evolution with individual-dependent topology adaptation. Inf. Sci. 2018, 450, 1–38. [Google Scholar] [CrossRef]
  15. Tian, M.; Gao, X.; Yan, X. Performance-driven adaptive differential evolution with neighborhood topology for numerical optimization. Knowl.-Based Syst. 2020, 188, 105008. [Google Scholar] [CrossRef]
  16. Ghosh, A.; Das, S.; Das, A.K.; Gao, L. Reusing the past difference vectors in differential evolution—A simple but significant improvement. IEEE Trans. Cybern. 2020, 50, 4821–4834. [Google Scholar] [CrossRef]
  17. Gong, W.; Cai, Z. Differential evolution with ranking-based mutation operators. IEEE Trans. Cybern. 2013, 43, 2066–2081. [Google Scholar] [CrossRef]
  18. Wang, C.; Gao, J. High-dimensional waveform inversion with cooperative coevolutionary differential evolution algorithm. IEEE Geosci. Remote Sens. Lett. 2012, 9, 297–301. [Google Scholar] [CrossRef]
  19. Wang, J.; Liao, J.; Zhou, Y.; Cai, Y. Differential evolution enhanced with multiobjective sorting-based mutation operators. IEEE Trans. Cybern. 2014, 44, 2792–2805. [Google Scholar] [CrossRef]
  20. Wang, K.; Gong, W.; Liao, Z.; Wang, L. Hybrid niching-based differential evolution with two archives for nonlinear equation system. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 7469–7481. [Google Scholar] [CrossRef]
  21. Ji, W.X.; Yang, Q.; Gao, X.D. Gaussian sampling guided differential evolution based on elites for global optimization. IEEE Access 2023, 11, 80915–80944. [Google Scholar] [CrossRef]
  22. Hamza, N.M.; Essam, D.L.; Sarker, R.A. Constraint consensus mutation-based differential evolution for constrained optimization. IEEE Trans. Evol. Comput. 2016, 20, 447–459. [Google Scholar] [CrossRef]
  23. Xia, X.; Tong, L.; Zhang, Y.; Xu, X.; Yang, H.; Gui, L.; Li, Y.; Li, K. NFDDE: A novelty-hybrid-fitness driving differential evolution algorithm. Inf. Sci. 2021, 579, 33–54. [Google Scholar] [CrossRef]
  24. Zhao, X.; Xu, G.; Rui, L.; Liu, D.; Liu, H.; Yuan, J. A failure remember-driven self-adaptive differential evolution with top-bottom strategy. Swarm Evol. Comput. 2019, 45, 1–14. [Google Scholar] [CrossRef]
  25. Zou, L.; Pan, Z.; Gao, Z.; Gao, J. Improving the search accuracy of differential evolution by using the number of consecutive unsuccessful updates. Knowl.-Based Syst. 2022, 250, 109005. [Google Scholar] [CrossRef]
  26. Cai, Y.; Wu, D.; Zhou, Y.; Fu, S.; Tian, H.; Du, Y. Self-organizing neighborhood-based differential evolution for global optimization. Swarm Evol. Comput. 2020, 56, 100699. [Google Scholar] [CrossRef]
  27. Cheng, J.; Pan, Z.; Liang, H.; Gao, Z.; Gao, J. Differential evolution algorithm with fitness and diversity ranking-based mutation operator. Swarm Evol. Comput. 2021, 61, 100816. [Google Scholar] [CrossRef]
  28. Cai, Y.; Wang, J.; Yin, J. Learning-enhanced differential evolution for numerical optimization. Soft Comput. 2012, 16, 303–330. [Google Scholar] [CrossRef]
  29. Gao, Z.; Pan, Z.; Gao, J. Multimutation differential evolution algorithm and its application to seismic inversion. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3626–3636. [Google Scholar] [CrossRef]
  30. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Faris, H. MTDE: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems. Appl. Soft Comput. 2020, 97, 106761. [Google Scholar] [CrossRef]
  31. Tan, Z.; Li, K.; Wang, Y. Differential evolution with adaptive mutation strategy based on fitness landscape analysis. Inf. Sci. 2021, 549, 142–163. [Google Scholar] [CrossRef]
  32. Xia, X.; Gui, L.; Zhang, Y.; Xu, X.; Yu, F.; Wu, H.; Wei, B.; He, G.; Li, Y.; Li, K. A fitness-based adaptive differential evolution algorithm. Inf. Sci. 2021, 549, 116–141. [Google Scholar] [CrossRef]
  33. Piotrowski, A.P. Adaptive memetic differential evolution with global and local neighborhood-based mutation operators. Inf. Sci. 2013, 241, 164–194. [Google Scholar] [CrossRef]
  34. Das, S.; Abraham, A.; Chakraborty, U.K.; Konar, A. Differential evolution using a neighborhood-based mutation operator. IEEE Trans. Evol. Comput. 2009, 13, 526–553. [Google Scholar] [CrossRef]
  35. Liu, X.f.; Zhan, Z.H.; Lin, Y.; Chen, W.N.; Gong, Y.J.; Gu, T.L.; Yuan, H.Q.; Zhang, J. Historical and heuristic-based adaptive differential evolution. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 2623–2635. [Google Scholar] [CrossRef]
  36. Zhou, X.G.; Zhang, G.J. Differential evolution with underestimation-based multimutation strategy. IEEE Trans. Cybern. 2019, 49, 1353–1364. [Google Scholar] [CrossRef] [PubMed]
  37. Das, S.; Mandal, A.; Mukherjee, R. An adaptive differential evolution algorithm for global optimization in dynamic environments. IEEE Trans. Cybern. 2014, 44, 966–978. [Google Scholar] [CrossRef]
  38. Wang, Y.; Liu, Z.-Z.; Li, J.; Li, H.-X.; Yen, G.G. Utilizing cumulative population distribution information in differential evolution. Appl. Soft Comput. 2016, 48, 329–346. [Google Scholar] [CrossRef]
  39. Abbass, H.A. The self-adaptive pareto differential evolution algorithm. In Proceedings of the Congress on Evolutionary Computation, Honolulu, HI, USA, 12–17 May 2002; Volume 1, pp. 831–836. [Google Scholar]
  40. Das, S.; Konar, A.; Chakraborty, U. Two improved differential evolution schemes for faster global search. In Proceedings of the Annual Conference on Genetic and Evolutionary Computation, Washington, DC, USA, 25–29 June 2005; pp. 991–998. [Google Scholar]
  41. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans. Evol. Comput. 2009, 13, 398–417. [Google Scholar] [CrossRef]
  42. Draa, A.; Bouzoubia, S.; Boukhalfa, I. A sinusoidal differential evolution algorithm for numerical optimisation. Appl. Soft Comput. 2015, 27, 99–126. [Google Scholar] [CrossRef]
  43. Tang, L.; Dong, Y.; Liu, J. Differential evolution with an individual-dependent mechanism. IEEE Trans. Evol. Comput. 2015, 19, 560–574. [Google Scholar] [CrossRef]
  44. Zhou, Y.Z.; Yi, W.C.; Gao, L.; Li, X.Y. Adaptive differential evolution with sorting crossover rate for continuous optimization problems. IEEE Trans. Cybern. 2017, 47, 2742–2753. [Google Scholar] [CrossRef]
  45. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for differential evolution. In Proceedings of the IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 71–78. [Google Scholar]
  46. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  47. Cheng, S.; Liu, B.; Shi, Y.; Jin, Y.; Li, B. Evolutionary computation and big data: Key challenges and future directions. In Proceedings of the Data Mining and Big Data, Bali, Indonesia, 25–30 June 2016; pp. 3–14. [Google Scholar]
  48. Yang, Q.; Song, G.W.; Chen, W.N.; Jia, Y.H.; Gao, X.D.; Lu, Z.Y.; Jeon, S.W.; Zhang, J. Random contrastive interaction for particle swarm optimization in high-dimensional environment. IEEE Trans. Evol. Comput. 2023, 1. [Google Scholar] [CrossRef]
  49. Yang, Q.; Chen, W.N.; Gu, T.; Jin, H.; Mao, W.; Zhang, J. An adaptive stochastic dominant learning swarm optimizer for high-dimensional optimization. IEEE Trans. Cybern. 2022, 52, 1960–1976. [Google Scholar] [CrossRef]
  50. Bhattacharya, M.; Islam, R.; Abawajy, J. Evolutionary optimization: A big data perspective. J. Netw. Comput. Appl. 2016, 59, 416–426. [Google Scholar] [CrossRef]
  51. Yang, Q.; Song, G.-W.; Gao, X.-D.; Lu, Z.-Y.; Jeon, S.-W.; Zhang, J. A random elite ensemble learning swarm optimizer for high-dimensional optimization. Complex Intell. Syst. 2023, 1–34. [Google Scholar] [CrossRef]
  52. Price, K.V. An introduction to differential evolution. In New Ideas in Optimization; McGraw-Hill Inc.: New York, NY, USA, 1999. [Google Scholar]
  53. Ghosh, A.; Das, S.; Das, A.K.; Senkerik, R.; Viktorin, A.; Zelinka, I.; Masegosa, A.D. Using spatial neighborhoods for parameter adaptation: An improved success history based differential evolution. Swarm Evol. Comput. 2022, 71, 101057. [Google Scholar] [CrossRef]
  54. Yi, W.; Chen, Y.; Pei, Z.; Lu, J. Adaptive differential evolution with ensembling operators for continuous optimization problems. Swarm Evol. Comput. 2022, 69, 100994. [Google Scholar] [CrossRef]
  55. Awad, N.; Ali, M.; Liang, J.; Qu, B.; Suganthan, P. Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective bound constrained real-parameter numerical optimization. In Technical Report; Nanyang Technological University Singapore: Singapore, 2016; pp. 1–34. [Google Scholar]
  56. Storn, R.; Price, K. Minimizing the real functions of the ICEC’96 contest by differential evolution. In Proceedings of the IEEE International Conference on Evolutionary Computation, Nagoya, Japan, 20–22 May 1996; pp. 842–844. [Google Scholar]
  57. Price, K.V.; Storn, R.; Lampinen, J. Differential Evolution: A Practical Approach to Global Optimization; Springer Science & Business Media: Berlin, Germany, 2005. [Google Scholar]
  58. Baatar, N.; Zhang, D.; Koh, C.S. An improved differential evolution algorithm adopting λ -best mutation strategy for global optimization of electromagnetic devices. IEEE Trans. Magn. 2013, 49, 2097–2100. [Google Scholar] [CrossRef]
  59. Chen, S.; He, Q.; Zheng, C.; Sun, L.; Wang, X.; Ma, L.; Cai, Y. Differential evolution based simulated annealing method for vaccination optimization problem. IEEE Trans. Netw. Sci. Eng. 2022, 9, 4403–4415. [Google Scholar] [CrossRef]
  60. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2011, 15, 4–31. [Google Scholar] [CrossRef]
  61. Deng, W.; Xu, J.; Gao, X.Z.; Zhao, H. An enhanced msiqde algorithm with novel multiple strategies for global optimization problems. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 1578–1587. [Google Scholar] [CrossRef]
  62. Eiben, A.E.; Hinterding, R.; Michalewicz, Z. Parameter control in evolutionary algorithms. IEEE Trans. Evol. Comput. 1999, 3, 124–141. [Google Scholar] [CrossRef]
  63. Fan, Q.; Yan, X. Self-adaptive differential evolution algorithm with zoning evolution of control parameters and adaptive mutation strategies. IEEE Trans. Cybern. 2016, 46, 219–232. [Google Scholar] [CrossRef] [PubMed]
  64. Li, H.; Gong, M.; Wang, C.; Miao, Q. Pareto self-paced learning based on differential evolution. IEEE Trans. Cybern. 2021, 51, 4187–4200. [Google Scholar] [CrossRef]
  65. Yang, Q.; Chen, W.N.; Deng, J.D.; Li, Y.; Gu, T.; Zhang, J. A level-based learning swarm optimizer for large-scale optimization. IEEE Trans. Evol. Comput. 2018, 22, 578–594. [Google Scholar] [CrossRef]
  66. Yang, Q.; Zhu, Y.; Gao, X.; Xu, D.; Lu, Z. Elite directed particle swarm optimization with historical information for high-dimensional problems. Mathematics 2022, 10, 1384. [Google Scholar] [CrossRef]
  67. Yang, Q.; Zhang, K.-X.; Gao, X.-D.; Xu, D.-D.; Lu, Z.-Y.; Jeon, S.-W.; Zhang, J. A dimension group-based comprehensive elite learning swarm optimizer for large-scale optimization. Mathematics 2022, 10, 1072. [Google Scholar] [CrossRef]
  68. Li, Y.; Wang, S.; Yang, B. An improved differential evolution algorithm with dual mutation strategies collaboration. Expert Syst. Appl. 2020, 153, 113451. [Google Scholar] [CrossRef]
  69. Deng, L.; Li, C.; Han, R.; Zhang, L.; Qiao, L. Tpde: A tri-population differential evolution based on zonal-constraint stepped division mechanism and multiple adaptive guided mutation strategies. Inf. Sci. 2021, 575, 22–40. [Google Scholar] [CrossRef]
  70. Cao, Z.; Wang, Z.; Fu, Y.; Jia, H.; Tian, F. An adaptive differential evolution framework based on population feature information. Inf. Sci. 2022, 608, 1416–1440. [Google Scholar] [CrossRef]
  71. Yang, Q.; Chen, W.N.; Li, Y.; Chen, C.L.P.; Xu, X.M.; Zhang, J. Multimodal estimation of distribution algorithms. IEEE Trans. Cybern. 2017, 47, 636–650. [Google Scholar] [CrossRef]
  72. Yang, Q.; Li, Y.; Gao, X.; Ma, Y.-Y.; Lu, Z.-Y.; Jeon, S.-W.; Zhang, J. An adaptive covariance scaling estimation of distribution algorithm. Mathematics 2021, 9, 3207. [Google Scholar] [CrossRef]
  73. Sun, J.; Gao, S.; Dai, H.; Cheng, J.; Zhou, M.; Wang, J. Bi-objective elite differential evolution algorithm for multivalued logic networks. IEEE Trans. Cybern. 2020, 50, 233–246. [Google Scholar] [CrossRef]
  74. Zhang, G.; Ma, X.; Wang, L.; Xing, K. Elite archive-assisted adaptive memetic algorithm for a realistic hybrid differentiation flowshop scheduling problem. IEEE Trans. Evol. Comput. 2022, 26, 100–114. [Google Scholar] [CrossRef]
  75. Yang, Q.; Hua, L.K.; Gao, X.; Xu, D.; Lu, Z.; Jeon, S.-W.; Zhang, J. Stochastic cognitive dominance leading particle swarm optimization for multimodal problems. Mathematics 2022, 10, 761. [Google Scholar] [CrossRef]
  76. Yang, Q.; Bian, Y.-W.; Gao, X.; Xu, D.; Lu, Z.; Jeon, S.-W.; Zhang, J. Stochastic triad topology based particle swarm optimization for global numerical optimization. Mathematics 2022, 10, 1032. [Google Scholar] [CrossRef]
  77. Yang, Q.; Guo, X.; Gao, X.; Xu, D.; Lu, Z. Differential elite learning particle swarm optimization for global numerical optimization. Mathematics 2022, 10, 1261. [Google Scholar] [CrossRef]
  78. Yang, Q.; Jing, Y.; Gao, X.; Xu, D.; Lu, Z.; Jeon, S.-W.; Zhang, J. Predominant cognitive learning particle swarm optimization for global numerical optimization. Mathematics 2022, 10, 1620. [Google Scholar] [CrossRef]
  79. Sun, G.; Lan, Y.; Zhao, R. Differential evolution with gaussian mutation and dynamic parameter adjustment. Soft Comput. 2019, 23, 1615–1642. [Google Scholar] [CrossRef]
  80. Meng, Z.; Yang, C.; Li, X.; Chen, Y. DI-DE: Depth information-based differential evolution with adaptive parameter control for numerical optimization. IEEE Access 2020, 8, 40809–40827. [Google Scholar] [CrossRef]
  81. Liang, J.; Qiao, K.; Yu, K.; Ge, S.; Qu, B.; Xu, R.; Li, K. Parameters estimation of solar photovoltaic models via a self-adaptive ensemble-based differential evolution. Sol. Energy 2020, 207, 336–346. [Google Scholar] [CrossRef]
  82. Wang, Y.; Gao, S.; Yu, Y.; Cai, Z.; Wang, Z. A gravitational search algorithm with hierarchy and distributed framework. Knowl.-Based Syst. 2021, 218, 106877. [Google Scholar] [CrossRef]
  83. Xiao, T.-L.; Yang, Q.; Gao, X.-D.; Ma, Y.-Y.; Lu, Z.-Y.; Jeon, S.-W.; Zhang, J. Variation encoded large-scale swarm optimizers for path planning of unmanned aerial vehicle. In Proceedings of the Genetic and Evolutionary Computation Conference, Lisbon, Portugal, 15–19 July 2023; pp. 102–110. [Google Scholar]
  84. Lu, Z.; Liang, S.; Yang, Q.; Du, B. Evolving block-based convolutional neural network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5525921. [Google Scholar] [CrossRef]
  85. Gao, X.; Cao, W.; Yang, Q.; Wang, H.; Wang, X.; Jin, G.; Zhang, J. Parameter optimization of control system design for uncertain wireless power transfer systems using modified genetic algorithm. CAAI Trans. Intell. Technol. 2022, 7, 582–593. [Google Scholar] [CrossRef]
  86. Wei, F.F.; Chen, W.N.; Yang, Q.; Deng, J.; Luo, X.N.; Jin, H.; Zhang, J. A classifier-assisted level-based learning swarm optimizer for expensive optimization. IEEE Trans. Evol. Comput. 2021, 25, 219–233. [Google Scholar] [CrossRef]
  87. Chen, W.N.; Tan, D.Z.; Yang, Q.; Gu, T.; Zhang, J. Ant colony optimization for the control of pollutant spreading on social networks. IEEE Trans. Cybern. 2020, 50, 4053–4065. [Google Scholar] [CrossRef]
Figure 1. Convergence behaviors of DEGGDE and the eleven compared DE variants on the 30D CEC’2017 set.
Figure 2. Convergence behaviors of DEGGDE and the eleven compared DE variants on the 50D CEC’2017 set.
Figure 3. Convergence behaviors of DEGGDE and the eleven compared DE variants on the 100D CEC’2017 set.
Figure 4. Box-plot diagrams of the global best fitness values obtained by DEGGDE and the eleven compared DE variants in the thirty independent runs on the 30D CEC’2017 set.
Figure 5. Box-plot diagrams of the global best fitness values obtained by DEGGDE and the eleven compared DE variants in the thirty independent runs on the 50D CEC’2017 set.
Figure 6. Box-plot diagrams of the global best fitness values obtained by DEGGDE and the eleven compared DE variants in the thirty independent runs on the 100D CEC’2017 set.
Table 1. Summarized information of the CEC’2017 problems.
Problem Type | Index | Problem Name | Optimum
Unimodal Problems | 1 | Shifted and Rotated Bent Cigar Problem | 100
Unimodal Problems | 3 | Shifted and Rotated Zakharov Problem | 300
Simple Multimodal Problems | 4 | Shifted and Rotated Rosenbrock’s Problem | 400
Simple Multimodal Problems | 5 | Shifted and Rotated Rastrigin’s Problem | 500
Simple Multimodal Problems | 6 | Shifted and Rotated Expanded Scaffer’s F6 Problem | 600
Simple Multimodal Problems | 7 | Shifted and Rotated Lunacek Bi_Rastrigin Problem | 700
Simple Multimodal Problems | 8 | Shifted and Rotated Non-Continuous Rastrigin’s Problem | 800
Simple Multimodal Problems | 9 | Shifted and Rotated Levy Problem | 900
Simple Multimodal Problems | 10 | Shifted and Rotated Schwefel’s Problem | 1000
Hybrid Problems | 11 | Hybrid Problem 1 (N = 3) | 1100
Hybrid Problems | 12 | Hybrid Problem 2 (N = 3) | 1200
Hybrid Problems | 13 | Hybrid Problem 3 (N = 3) | 1300
Hybrid Problems | 14 | Hybrid Problem 4 (N = 4) | 1400
Hybrid Problems | 15 | Hybrid Problem 5 (N = 4) | 1500
Hybrid Problems | 16 | Hybrid Problem 6 (N = 4) | 1600
Hybrid Problems | 17 | Hybrid Problem 7 (N = 5) | 1700
Hybrid Problems | 18 | Hybrid Problem 8 (N = 5) | 1800
Hybrid Problems | 19 | Hybrid Problem 9 (N = 5) | 1900
Hybrid Problems | 20 | Hybrid Problem 10 (N = 6) | 2000
Composition Problems | 21 | Composition Problem 1 (N = 3) | 2100
Composition Problems | 22 | Composition Problem 2 (N = 3) | 2200
Composition Problems | 23 | Composition Problem 3 (N = 4) | 2300
Composition Problems | 24 | Composition Problem 4 (N = 4) | 2400
Composition Problems | 25 | Composition Problem 5 (N = 5) | 2500
Composition Problems | 26 | Composition Problem 6 (N = 5) | 2600
Composition Problems | 27 | Composition Problem 7 (N = 6) | 2700
Composition Problems | 28 | Composition Problem 8 (N = 6) | 2800
Composition Problems | 29 | Composition Problem 9 (N = 3) | 2900
Composition Problems | 30 | Composition Problem 10 (N = 3) | 3000
Search Range: [−100, 100]^D
Table 2. The optimal PS settings of all algorithms for the 30D, 50D, and 100D CEC’2017 sets.
D | Parameter | DEGGDE | SHADE (2013) | GPDE (2019) | DiDE (2020) | SEDE (2020) | FADE (2021) | FDDE (2021) | TPDE (2021) | NSHADE (2022) | CUSDE (2022) | PFIDE (2022) | EJADE (2022)
30 | PS | 230 | 110 | 30 | 300 | 50 | Initialized as 3 × 25, adaptively adjusted | 150 | 100 | 180 | 100 | 140 | 100
50 | PS | 300 | 170 | 40 | 500 | 50 | Initialized as 3 × 25, adaptively adjusted | 150 | 100 | 180 | 100 | 140 | 100
100 | PS | 410 | 170 | 40 | 1000 | 80 | Initialized as 3 × 25, adaptively adjusted | 150 | 100 | 190 | 100 | 140 | 100
Table 3. Comparison results between DEGGDE and the eleven state-of-the-art DE variants on the 30D CEC’2017 benchmark functions.
F | Category | Quality | DEGGDE | SHADE | GPDE | DiDE | SEDE | FADE | FDDE | TPDE | NSHADE | CUSDE | PFIDE | EJADE
F1 (Unimodal Functions)
Mean0.00 × 1005.60 × 10−154.13 × 1030.00 × 1000.00 × 1005.22 × 10−98.05 × 10−152.84 × 10−152.58 × 10−144.26 × 10−151.42 × 10−151.47 × 10−14
Std0.00 × 1006.98 × 10−156.02 × 1030.00 × 1000.00 × 1002.86 × 10−87.16 × 10−155.78 × 10−151.55 × 10−146.62 × 10−154.34 × 10−155.88 × 10−15
p-value1.28 × 10−4 +1.21 × 10−12 +NaN =NaN =8.27 × 10−7 +1.43 × 10−6 +1.09 × 10−2 +6.59 × 10−13 +1.31 × 10−3 +8.14 × 10−2 =1.92 × 10−12 +
F3Mean1.71 × 10−145.70 × 10−142.94 × 1040.00 × 1003.80 × 10−151.14 × 10−146.15 × 1028.72 × 10−21.27 × 1044.26 × 10−135.12 × 10−141.24 × 10−6
Std2.66 × 10−141.50 × 10−148.09 × 1030.00 × 1001.45 × 10−142.32 × 10−143.37 × 1032.06 × 10−12.90 × 1042.08 × 10−131.73 × 10−146.80 × 10−6
p-value1.10 × 10−7 +1.01 × 10−11 +1.31 × 10−3 −2.12 × 10−2 −3.80 × 10−1 =4.07 × 10−4 +1.01 × 10−11 +1.01 × 10−11 +9.70 × 10−12 +1.71 × 10−2 +9.21 × 10−5 +
F1–3 w/t/l: 2/0/0 | 2/0/0 | 0/1/1 | 0/1/1 | 1/1/0 | 2/0/0 | 2/0/0 | 2/0/0 | 2/0/0 | 1/1/0 | 2/0/0
F4 (Simple Multimodal Functions)
Mean5.66 × 1014.30 × 1018.51 × 1015.50 × 1016.10 × 1012.73 × 1015.00 × 1015.86 × 1016.55 × 1005.87 × 1015.12 × 1019.31 × 100
Std1.07 × 1012.76 × 1016.60 × 1001.50 × 1016.35 × 1002.88 × 1012.22 × 1013.61 × 10−141.68 × 1011.01 × 1002.28 × 1012.09 × 101
p-value6.09 × 10−1 =1.70 × 10−12 +2.75 × 10−7 −1.64 × 10−5 +3.06 × 10−3 −1.11 × 10−5 −2.71 × 10−14 +1.39 × 10−8 −4.29 × 10−14 +1.39 × 10−6 −1.24 × 10−7 −
F5Mean1.41 × 1012.07 × 1014.13 × 1011.85 × 1013.57 × 1013.16 × 1012.76 × 1014.11 × 1013.45 × 1012.78 × 1012.31 × 1013.42 × 101
Std4.17 × 1003.61 × 1001.95 × 1015.75 × 1001.21 × 1017.89 × 1002.87 × 1006.58 × 1005.09 × 1009.08 × 1003.43 × 1001.01 × 101
p-value2.78 × 10−7 +4.95 × 10−11 +1.27 × 10−3 +1.85 × 10−10 +1.14 × 10−10 +6.68 × 10−11 +3.01 × 10−11 +3.68 × 10−11 +2.42 × 10−9 +2.03 × 10−9 +7.34 × 10−11 +
F6Mean2.51 × 10−81.14 × 10−135.31 × 10−31.14 × 10−131.14 × 10−132.92 × 10−21.14 × 10−131.37 × 10−84.42 × 10−51.14 × 10−135.70 × 10−92.86 × 10−1
Std1.02 × 10−75.14 × 10−292.42 × 10−35.14 × 10−295.14 × 10−295.28 × 10−20.00 × 1004.18 × 10−86.74 × 10−50.00 × 1002.55 × 10−83.18 × 10−1
p-value1.70 × 10−1 =5.22 × 10−12 +1.70 × 10−1 =1.70 × 10−1 =5.22 × 10−12 +4.46 × 10−12 −4.46 × 10−8 −5.22 × 10−12 +4.46 × 10−12 −2.77 × 10−9 −5.22 × 10−12 +
F7Mean4.41 × 1014.94 × 1018.66 × 1014.95 × 1016.72 × 1016.46 × 1015.85 × 1016.85 × 1015.97 × 1016.13 × 1015.33 × 1016.18 × 101
Std6.14 × 1002.86 × 1003.80 × 1016.23 × 1001.00 × 1011.01 × 1014.73 × 1008.05 × 1005.19 × 1001.71 × 1013.14 × 1007.45 × 100
p-value1.00 × 100 =4.11 × 10−7 +2.87 × 10−10 +3.77 × 10−4 +3.82 × 10−10 +8.10 × 10−10 +2.03 × 10−9 +1.78 × 10−10 +1.29 × 10−9 +7.69 × 10−8 +1.01 × 10−8 +1.86 × 10−9 +
F8Mean1.49 × 1011.95 × 1014.21 × 1011.91 × 1013.19 × 1012.92 × 1012.67 × 1014.12 × 1013.30 × 1012.97 × 1012.14 × 1013.55 × 101
Std5.17 × 1002.79 × 1001.67 × 1015.00 × 1008.59 × 1008.01 × 1003.43 × 1009.31 × 1004.25 × 1001.27 × 1013.54 × 1009.40 × 100
p-value9.18 × 10−5 +5.43 × 10−11 +1.80 × 10−3 +1.16 × 10−9 +3.96 × 10−9 +1.06 × 10−9 +1.94 × 10−10 +7.33 × 10−11 +5.05 × 10−8 +1.10 × 10−6 +1.19 × 10−10 +
F9Mean0.00 × 1000.00 × 1003.56 × 1000.00 × 1000.00 × 1005.45 × 10−15.97 × 10−30.00 × 1001.31 × 10−10.00 × 1000.00 × 1001.24 × 101
Std0.00 × 1000.00 × 1001.82 × 1010.00 × 1000.00 × 1008.39 × 10−12.27 × 10−20.00 × 1003.12 × 10−10.00 × 1000.00 × 1001.68 × 101
p-valueNaN =1.20 × 10−12 +NaN =NaN =1.29 × 10−7 +1.61 × 10−1 =NaN =4.31 × 10−11 +NaN =NaN =1.21 × 10−12 +
F10Mean3.19 × 1032.29 × 1033.37 × 1032.58 × 1035.41 × 1033.15 × 1032.63 × 1032.33 × 1032.08 × 1032.34 × 1032.44 × 1031.90 × 103
Std4.03 × 1022.35 × 1025.67 × 1023.86 × 1022.89 × 1028.19 × 1022.66 × 1023.22 × 1022.13 × 1027.55 × 1022.10 × 1025.11 × 102
p-value7.38 × 10−10 −9.33 × 10−2 =8.84 × 10−7 −3.02 × 10−11 +3.26 × 10−1 =3.26 × 10−7 −3.20 × 10−9 −8.99 × 10−11 −6.05 × 10−7 −5.00 × 10−9 −3.82 × 10−10 −
F4–10 w/t/l: 3/3/1 | 6/1/0 | 3/2/2 | 5/2/0 | 5/1/1 | 3/1/3 | 4/1/2 | 5/0/2 | 4/1/2 | 3/1/3 | 5/0/2
F11 (Hybrid Functions)
Mean1.67 × 1013.71 × 1014.61 × 1012.00 × 1011.47 × 1012.21 × 1013.55 × 1012.74 × 1015.46 × 1011.93 × 1013.63 × 1016.55 × 101
Std2.40 × 1012.91 × 1013.25 × 1012.33 × 1011.46 × 1012.36 × 1012.94 × 1012.06 × 1012.94 × 1012.25 × 1012.91 × 1013.18 × 101
p-value3.81 × 10−6 +4.09 × 10−7 +1.23 × 10−3 +2.55 × 10−3 −4.96 × 10−4 +3.58 × 10−5 +5.58 × 10−5 +1.72 × 10−6 +5.31 × 10−3 +1.24 × 10−5 +8.29 × 10−8 +
F12Mean1.08 × 1033.14 × 1038.01 × 1041.17 × 1031.07 × 1041.37 × 1034.37 × 1032.35 × 1036.75 × 1031.76 × 1041.60 × 1035.17 × 103
Std3.08 × 1022.88 × 1032.26 × 1053.73 × 1029.86 × 1031.55 × 1036.63 × 1037.38 × 1036.10 × 1031.11 × 1048.18 × 1023.98 × 103
p-value6.53 × 10−8 +3.02 × 10−11 +3.87 × 10−1 =3.08 × 10−8 +2.34 × 10−1 =5.27 × 10−5 +1.09 × 10−1 =4.20 × 10−10 +3.02 × 10−11 +1.24 × 10−3 +7.12 × 10−9 +
F13Mean4.96 × 1016.59 × 1013.08 × 1051.88 × 1012.13 × 1012.45 × 1016.73 × 1016.69 × 1029.96 × 1012.82 × 1015.32 × 1013.65 × 101
Std4.22 × 1013.83 × 1011.12 × 1067.17 × 1006.15 × 1009.74 × 1004.37 × 1011.52 × 1034.32 × 1016.03 × 1004.53 × 1012.16 × 101
p-value3.15 × 10−2 +3.34 × 10−11 +9.83 × 10−8 −3.52 × 10−7 −4.08 × 10−5 −6.97 × 10−3 +1.22 × 10−2 +1.39 × 10−6 +2.53 × 10−4 −5.89 × 10−1 =1.62 × 10−1 =
F14Mean2.32 × 1013.00 × 1015.91 × 1022.38 × 1011.81 × 1012.68 × 1013.27 × 1013.47 × 1014.02 × 1013.17 × 1012.81 × 1013.09 × 101
Std4.20 × 1004.53 × 1001.96 × 1034.22 × 1001.17 × 1017.80 × 1006.02 × 1002.60 × 1008.59 × 1001.02 × 1016.75 × 1008.18 × 100
p-value1.70 × 10−8 +3.02 × 10−11 +9.33 × 10−2 =6.31 × 10−1 =1.00 × 10−3 +2.92 × 10−9 +5.49 × 10−11 +6.07 × 10−11 +3.26 × 10−7 +7.60 × 10−7 +4.80 × 10−7 +
F15Mean5.95 × 1001.67 × 1012.43 × 1037.11 × 1006.69 × 1001.29 × 1011.67 × 1011.58 × 1014.18 × 1018.16 × 1001.27 × 1011.90 × 101
Std2.29 × 1001.43 × 1019.07 × 1034.10 × 1003.65 × 1005.49 × 1001.30 × 1012.76 × 1003.17 × 1015.27 × 1009.68 × 1001.34 × 101
p-value1.87 × 10−5 +3.02 × 10−11 +5.01 × 10−1 =8.42 × 10−1 =3.20 × 10−9 +1.39 × 10−6 +3.34 × 10−11 +1.29 × 10−9 +4.21 × 10−2 +1.60 × 10−7 +5.46 × 10−9 +
F16Mean6.26 × 1013.00 × 1026.46 × 1023.67 × 1025.23 × 1024.03 × 1023.31 × 1022.53 × 1024.57 × 1024.47 × 1022.36 × 1023.86 × 102
Std7.53 × 1011.44 × 1022.44 × 1021.32 × 1021.94 × 1022.39 × 1021.23 × 1021.50 × 1021.22 × 1023.33 × 1021.17 × 1021.90 × 102
p-value2.23 × 10−9 +3.02 × 10−11 +2.87 × 10−10 +8.15 × 10−11 +1.60 × 10−7 +8.10 × 10−10 +2.38 × 10−7 +4.08 × 10−11 +7.09 × 10−8 +3.65 × 10−8 +5.07 × 10−10 +
F17
Mean5.77 × 1015.15 × 1012.23 × 1026.46 × 1011.51 × 1021.46 × 1028.20 × 1016.27 × 1016.67 × 1011.41 × 1025.96 × 1019.73 × 101
Std9.41 × 1006.83 × 1001.45 × 1021.92 × 1019.82 × 1011.40 × 1022.00 × 1012.48 × 1011.44 × 1011.21 × 1021.13 × 1016.69 × 101
p-value1.44 × 10−2 −3.81 × 10−7 +2.90 × 10−1 =1.64 × 10−5 +3.95 × 10−1 =2.49 × 10−6 +7.84 × 10−1 =1.44 × 10−2 +4.03 × 10−3 +5.89 × 10−1 =4.29 × 10−1 =
F18Mean2.55 × 1017.57 × 1012.19 × 1052.49 × 1012.64 × 1013.43 × 1014.54 × 1013.03 × 1015.13 × 1012.88 × 1013.13 × 1014.56 × 101
Std3.04 × 1005.70 × 1012.66 × 1053.67 × 1005.28 × 1001.07 × 1012.74 × 1012.52 × 1002.19 × 1019.32 × 1009.61 × 1003.16 × 101
p-value3.65 × 10−8 +3.02 × 10−11 +4.12 × 10−1 =9.05 × 10−2 =7.69 × 10−8 +7.22 × 10−6 +8.20 × 10−7 +1.78 × 10−10 +9.33 × 10−2 =6.97 × 10−3 +1.25 × 10−5 +
F19Mean1.09 × 1011.59 × 1019.63 × 1037.41 × 1005.67 × 1009.24 × 1001.86 × 1011.39 × 1012.47 × 1018.24 × 1001.27 × 1011.18 × 101
Std3.04 × 1007.18 × 1003.35 × 1042.73 × 1002.12 × 1003.86 × 1001.34 × 1011.85 × 1005.40 × 1002.29 × 1003.86 × 1005.70 × 100
p-value2.50 × 10−3 +1.61 × 10−10 +4.35 × 10−5 −5.09 × 10−8 −3.18 × 10−3 −1.17 × 10−5 +7.20 × 10−5 +1.21 × 10−10 +2.39 × 10−4 −8.24 × 10−2 =9.23 × 10−1 =
F20Mean3.88 × 1015.67 × 1013.63 × 1027.86 × 1011.51 × 1022.00 × 1029.64 × 1015.48 × 1011.16 × 1029.38 × 1017.21 × 1011.07 × 102
Std9.42 × 1003.32 × 1011.97 × 1024.12 × 1011.08 × 1021.21 × 1024.27 × 1013.25 × 1014.75 × 1011.98 × 1023.65 × 1015.82 × 101
p-value8.66 × 10−5 +3.69 × 10−11 +6.53 × 10−8 +1.17 × 10−5 +1.61 × 10−6 +9.92 × 10−11 +1.91 × 10−1 =6.70 × 10−11 +9.93 × 10−2 =5.00 × 10−9 +2.62 × 10−3 +
F11–20 w/t/l: 9/0/1 | 10/0/0 | 3/5/2 | 4/3/3 | 6/2/2 | 10/0/0 | 7/3/0 | 10/0/0 | 6/2/2 | 7/3/0 | 7/3/0
F21 (Composition Functions)
Mean2.14 × 1022.21 × 1022.48 × 1022.20 × 1022.37 × 1022.29 × 1022.28 × 1022.43 × 1022.33 × 1022.27 × 1022.23 × 1022.31 × 102
Std3.50 × 1003.42 × 1002.69 × 1016.32 × 1001.02 × 1017.36 × 1004.17 × 1008.38 × 1004.55 × 1007.43 × 1003.31 × 1009.75 × 100
p-value5.53 × 10−8 +4.08 × 10−11 +8.15 × 10−5 +5.49 × 10−11 +9.76 × 10−10 +4.50 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +1.07 × 10−9 +1.41 × 10−9 +1.29 × 10−9 +
F22Mean1.00 × 1021.00 × 1024.29 × 1031.00 × 1024.28 × 1031.01 × 1021.00 × 1027.38 × 1021.00 × 1022.02 × 1031.00 × 1021.00 × 102
Std0.00 × 1000.00 × 1001.19 × 1030.00 × 1002.58 × 1031.78 × 1000.00 × 1001.08 × 1032.23 × 10−131.32 × 1030.00 × 1007.54 × 10−1
p-valueNaN =1.21 × 10−12 +NaN =1.70 × 10−8 +1.37 × 10−3 +1.69 × 10−14 −6.54 × 10−4 +1.79 × 10−7 +1.52 × 10−4 +1.69 × 10−14 −1.94 × 10−9 +
F23Mean3.61 × 1023.70 × 1023.99 × 1023.58 × 1023.89 × 1023.80 × 1023.75 × 1023.94 × 1023.79 × 1023.77 × 1023.71 × 1023.87 × 102
Std7.76 × 1006.12 × 1001.42 × 1017.48 × 1009.36 × 1001.04 × 1015.81 × 1007.26 × 1005.09 × 1001.06 × 1015.20 × 1001.39 × 101
p-value7.20 × 10−5 +4.08 × 10−11 +7.48 × 10−2 =4.50 × 10−11 +2.60 × 10−8 +2.83 × 10−8 +3.34 × 10−11 +2.37 × 10−10 +6.01 × 10−8 +1.17 × 10−5 +1.46 × 10−10 +
F24Mean4.33 × 1024.38 × 1025.50 × 1024.28 × 1024.53 × 1024.53 × 1024.42 × 1024.62 × 1024.45 × 1024.53 × 1024.39 × 1024.57 × 102
Std4.06 × 1003.62 × 1003.53 × 1014.21 × 1009.65 × 1009.26 × 1004.56 × 1006.85 × 1005.69 × 1001.55 × 1013.05 × 1001.22 × 101
p-value8.29 × 10−6 +3.02 × 10−11 +4.35 × 10−5 −1.61 × 10−10 +1.21 × 10−10 +2.39 × 10−8 +3.02 × 10−11 +1.86 × 10−9 +1.17 × 10−9 +1.03 × 10−6 +6.70 × 10−11 +
F25Mean3.87 × 1023.87 × 1023.87 × 1023.87 × 1023.87 × 1023.88 × 1023.87 × 1023.87 × 1023.87 × 1023.87 × 1023.87 × 1023.88 × 102
Std1.09 × 10−17.64 × 10−15.24 × 10−13.59 × 10−25.88 × 10−29.47 × 10−12.70 × 10−17.33 × 10−22.02 × 1001.94 × 10−21.43 × 10−13.22 × 100
p-value9.79 × 10−5 +2.01 × 10−8 +3.80 × 10−7 −8.83 × 10−7 −2.32 × 10−6 +7.66 × 10−5 +1.03 × 10−2 −5.87 × 10−4 +5.03 × 10−10 −2.05 × 10−3 +2.32 × 10−2 +
F26Mean1.05 × 1031.14 × 1031.50 × 1039.89 × 1021.49 × 1031.20 × 1031.21 × 1031.49 × 1031.03 × 1031.26 × 1031.16 × 1031.23 × 103
Std7.94 × 1018.30 × 1011.11 × 1026.85 × 1011.18 × 1022.68 × 1027.11 × 1011.01 × 1024.93 × 1021.11 × 1025.22 × 1015.24 × 102
p-value5.61 × 10−5 +3.02 × 10−11 +3.50 × 10−3 −3.02 × 10−11 +5.60 × 10−7 +1.43 × 10−8 +3.34 × 10−11 +1.03 × 10−2 −2.23 × 10−9 +7.60 × 10−7 +7.64 × 10−5 +
F27Mean5.01 × 1025.02 × 1025.10 × 1025.12 × 1025.00 × 1024.99 × 1025.05 × 1025.00 × 1025.12 × 1024.99 × 1025.02 × 1025.21 × 102
Std4.37 × 1006.42 × 1007.28 × 1009.01 × 1007.49 × 10−59.79 × 1008.40 × 1005.76 × 1005.42 × 1008.86 × 1006.03 × 1001.35 × 101
p-value9.12 × 10−1 =2.00 × 10−6 +1.73 × 10−7 +7.73 × 10−2 =3.11 × 10−1 =7.24 × 10−2 =3.79 × 10−1 =1.41 × 10−9 +4.20 × 10−1 =8.42 × 10−1 =1.01 × 10−8 +
F28Mean3.26 × 1023.38 × 1024.96 × 1023.58 × 1024.99 × 1023.42 × 1023.40 × 1023.26 × 1023.21 × 1023.34 × 1023.26 × 1023.21 × 102
Std5.36 × 1015.55 × 1012.07 × 1025.53 × 1011.87 × 1005.59 × 1015.97 × 1015.11 × 1014.41 × 1015.94 × 1014.71 × 1014.28 × 101
p-value3.22 × 10−1 =4.62 × 10−5 +2.08 × 10−2 +6.44 × 10−12 +8.09 × 10−1 =5.89 × 10−3 +4.01 × 10−5 +1.11 × 10−5 −2.45 × 10−4 +4.00 × 10−5 −5.28 × 10−6 −
F29Mean4.69 × 1024.75 × 1025.31 × 1024.56 × 1024.67 × 1024.46 × 1025.02 × 1025.85 × 1024.90 × 1024.91 × 1024.84 × 1025.25 × 102
Std2.66 × 1012.81 × 1019.33 × 1012.31 × 1011.19 × 1025.80 × 1012.96 × 1017.03 × 1012.47 × 1017.62 × 1011.63 × 1017.48 × 101
p-value3.26 × 10−1 =4.64 × 10−3 +4.36 × 10−2 −1.86 × 10−1 =1.38 × 10−2 −5.27 × 10−5 +4.62 × 10−10 +1.30 × 10−3 +4.83 × 10−1 =9.07 × 10−3 +4.43 × 10−3 +
F30Mean2.01 × 1032.12 × 1033.87 × 1032.14 × 1037.08 × 1022.03 × 1032.11 × 1031.99 × 1032.23 × 1032.06 × 1032.09 × 1032.09 × 103
Std6.81 × 1011.77 × 1022.10 × 1031.12 × 1026.99 × 1028.80 × 1011.08 × 1025.02 × 1011.28 × 1027.74 × 1011.06 × 1021.71 × 102
p-value1.06 × 10−3 +2.15 × 10−10 +1.61 × 10−6 +2.83 × 10−8 −4.38 × 10−1 =2.53 × 10−4 +1.58 × 10−1 =7.77 × 10−9 +6.67 × 10−3 +2.62 × 10−3 +4.06 × 10−2 +
F21–30 w/t/l: 6/4/0 | 10/0/0 | 4/2/4 | 6/2/2 | 6/3/1 | 8/1/1 | 7/2/1 | 8/0/2 | 7/2/1 | 7/1/2 | 9/0/1
Overall w/t/l: 20/7/2 | 28/1/0 | 10/10/9 | 15/8/6 | 18/7/4 | 23/2/4 | 20/6/3 | 25/0/4 | 19/5/5 | 18/6/5 | 23/3/3
Rank: 3.28 | 5.28 | 11.55 | 3.72 | 6.38 | 6.72 | 6.83 | 7.31 | 8.00 | 5.98 | 4.84 | 8.10
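The w/t/l and Rank rows above aggregate the per-function pairwise comparisons: a "+"/"="/"−" mark for each compared variant on each function, tallied into wins/ties/losses, plus an average rank over all functions. A minimal sketch of how such a summary can be computed, assuming the independent runs of each algorithm are stored as NumPy arrays (the function name, data layout, Wilcoxon rank-sum test, and 0.05 significance level are illustrative assumptions, not taken from the paper's code):

```python
import numpy as np
from scipy.stats import ranksums, rankdata

def wtl_and_rank(results, baseline="DEGGDE", alpha=0.05):
    """results: dict mapping algorithm name -> (n_problems, n_runs) array of
    final fitness errors (minimization). Returns (1) win/tie/loss counts of
    the baseline against every other algorithm and (2) the average rank of
    every algorithm over all problems."""
    base = results[baseline]
    n_problems = base.shape[0]
    wtl = {}
    for name, runs in results.items():
        if name == baseline:
            continue
        w = t = l = 0
        for f in range(n_problems):
            # two-sided rank-sum test on the two sets of runs for problem f
            p = ranksums(base[f], runs[f]).pvalue
            if p >= alpha:                         # no significant difference
                t += 1
            elif base[f].mean() < runs[f].mean():  # baseline has smaller error
                w += 1
            else:
                l += 1
        wtl[name] = (w, t, l)
    # average rank: rank the mean errors per problem (1 = best, ties averaged),
    # then average the ranks over all problems
    names = list(results)
    means = np.array([[results[n][f].mean() for n in names]
                      for f in range(n_problems)])
    ranks = np.apply_along_axis(rankdata, 1, means)
    avg_rank = dict(zip(names, ranks.mean(axis=0)))
    return wtl, avg_rank
```

With such tallies, a row like "20/7/2" reads as: the baseline is significantly better on 20 functions, statistically equivalent on 7, and significantly worse on 2.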
Table 4. Comparison results between DEGGDE and the eleven state-of-the-art DE variants on the 50D CEC’2017 benchmark functions.
F | Category | Quality | DEGGDE | SHADE | GPDE | DiDE | SEDE | FADE | FDDE | TPDE | NSHADE | CUSDE | PFIDE | EJADE
F1 (Unimodal Functions)
Mean1.40 × 10−141.63 × 10−147.14 × 1031.07 × 10−141.82 × 10−55.38 × 1032.79 × 10−142.62 × 1003.18 × 10−31.91 × 1002.08 × 10−141.26 × 10−12
Std6.42 × 10−305.31 × 10−158.07 × 1036.02 × 10−155.18 × 10−55.97 × 1031.26 × 10−141.43 × 1011.17 × 10−24.05 × 1007.21 × 10−153.73 × 10−12
p-value2.14 × 10−2 +1.21 × 10−12 +5.47 × 10−3 −1.21 × 10−12 +1.21 × 10−12 +4.98 × 10−13 +1.21 × 10−12 +1.21 × 10−12 +1.21 × 10−12 +4.63 × 10−13 +1.16 × 10−12 +
F3Mean8.74 × 10−101.22 × 10−131.76 × 1055.04 × 10−132.39 × 10−81.14 × 10−64.09 × 1036.61 × 1012.49 × 1046.07 × 1021.33 × 10−133.20 × 10−1
Std1.68 × 10−93.87 × 10−142.26 × 1042.02 × 10−123.70 × 10−81.78 × 10−61.63 × 1045.67 × 1016.48 × 1044.38 × 1023.76 × 10−141.73 × 100
p-value9.34 × 10−12 −3.02 × 10−11 +1.77 × 10−11 −1.56 × 10−8 +3.02 × 10−11 +1.01 × 10−7 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +1.60 × 10−11 −2.61 × 10−2 +
F1–3 w/t/l: 1/0/1 | 2/0/0 | 0/0/2 | 2/0/0 | 2/0/0 | 2/0/0 | 2/0/0 | 2/0/0 | 2/0/0 | 1/0/1 | 2/0/0
F4 (Simple Multimodal Functions)
Mean4.77 × 1014.98 × 1011.05 × 1028.05 × 1014.84 × 1017.32 × 1016.31 × 1016.13 × 1015.69 × 1015.48 × 1014.48 × 1013.36 × 101
Std3.33 × 1013.82 × 1015.88 × 1014.88 × 1013.45 × 1014.07 × 1014.41 × 1014.81 × 1014.83 × 1014.28 × 1013.56 × 1014.59 × 101
p-value6.32 × 10−1 =8.50 × 10−5 +3.84 × 10−3 +9.55 × 10−2 =4.00 × 10−3 +8.62 × 10−1 =1.82 × 10−1 =1.74 × 10−1 =2.55 × 10−1 =1.53 × 10−2 −1.01 × 10−3 −
F5Mean2.34 × 1015.39 × 1011.43 × 1024.35 × 1016.82 × 1016.72 × 1016.00 × 1018.63 × 1017.56 × 1013.84 × 1014.96 × 1019.84 × 101
Std5.66 × 1005.10 × 1009.61 × 1012.77 × 1011.39 × 1011.53 × 1016.66 × 1001.02 × 1018.67 × 1006.54 × 1007.77 × 1001.95 × 101
p-value3.01 × 10−11 +3.68 × 10−11 +5.18 × 10−7 +3.33 × 10−11 +3.01 × 10−11 +3.01 × 10−11 +3.01 × 10−11 +3.01 × 10−11 +9.68 × 10−10 +3.01 × 10−11 +3.00 × 10−11 +
F6Mean3.12 × 10−61.32 × 10−79.72 × 10−38.20 × 10−43.20 × 10−98.85 × 10−16.07 × 10−81.66 × 10−71.07 × 10−54.03 × 10−63.58 × 10−62.34 × 100
Std3.58 × 10−63.47 × 10−71.41 × 10−23.11 × 10−31.22 × 10−89.02 × 10−11.63 × 10−73.30 × 10−71.20 × 10−56.98 × 10−64.97 × 10−61.36 × 100
p-value2.33 × 10−9 −3.01 × 10−11 +2.03 × 10−4 +2.62 × 10−12 −3.01 × 10−11 +9.78 × 10−11 −9.25 × 10−9 −2.05 × 10−3 +5.39 × 10−1 =4.55 × 10−1 =3.01 × 10−11 +
F7Mean7.95 × 1011.09 × 1022.57 × 1028.70 × 1011.19 × 1021.38 × 1021.11 × 1021.33 × 1021.29 × 1029.43 × 1011.01 × 1021.48 × 102
Std1.22 × 1014.85 × 1008.76 × 1011.74 × 1011.54 × 1011.53 × 1017.12 × 1001.11 × 1011.08 × 1013.55 × 1014.99 × 1002.48 × 101
p-value8.15 × 10−11 +3.02 × 10−11 +5.55 × 10−2 =1.78 × 10−10 +3.02 × 10−11 +7.39 × 10−11 +4.50 × 10−11 +3.02 × 10−11 +4.03 × 10−3 +1.31 × 10−8 +4.50 × 10−11 +
F8Mean2.31 × 1015.52 × 1011.17 × 1023.92 × 1017.10 × 1016.65 × 1016.02 × 1018.70 × 1018.30 × 1014.06 × 1015.29 × 1019.52 × 101
Std7.51 × 1005.06 × 1008.89 × 1012.50 × 1011.58 × 1011.53 × 1017.72 × 1001.46 × 1016.17 × 1008.14 × 1007.31 × 1001.71 × 101
p-value3.01 × 10−11 +4.06 × 10−11 +4.73 × 10−6 +3.32 × 10−11 +3.01 × 10−11 +3.32 × 10−11 +3.01 × 10−11 +3.01 × 10−11 +2.41 × 10−9 +4.95 × 10−11 +3.00 × 10−11 +
F9Mean9.88 × 10−144.20 × 10−22.96 × 1003.80 × 10−151.19 × 1001.98 × 1014.19 × 10−12.71 × 10−21.70 × 1015.97 × 10−32.63 × 10−11.12 × 102
Std3.94 × 10−148.82 × 10−29.17 × 1002.08 × 10−141.27 × 1002.45 × 1014.98 × 10−11.01 × 10−13.31 × 1012.27 × 10−22.62 × 10−18.79 × 101
p-value6.63 × 10−4 +3.93 × 10−12 +1.32 × 10−10 −1.84 × 10−7 +4.08 × 10−12 +8.61 × 10−4 +7.70 × 10−6 +4.08 × 10−12 +5.35 × 10−6 +1.87 × 10−4 +4.08 × 10−12 +
F10Mean6.51 × 1035.28 × 1031.02 × 1045.21 × 1031.11 × 1044.95 × 1035.22 × 1034.67 × 1033.82 × 1035.52 × 1035.00 × 1034.22 × 103
Std7.20 × 1023.27 × 1021.96 × 1037.44 × 1023.17 × 1021.67 × 1033.89 × 1024.11 × 1022.91 × 1023.49 × 1034.03 × 1021.01 × 103
p-value6.01 × 10−8 −1.29 × 10−9 +1.47 × 10−7 −3.02 × 10−11 +9.21 × 10−5 −6.01 × 10−8 −2.61 × 10−10 −3.02 × 10−11 −1.32 × 10−4 −5.00 × 10−9 −8.10 × 10−10 −
F4–10 w/t/l: 4/1/2 | 7/0/0 | 4/1/2 | 5/1/1 | 6/0/1 | 4/1/2 | 4/1/2 | 5/1/1 | 4/2/1 | 4/1/2 | 5/0/2
F11 (Hybrid Functions)
Mean4.46 × 1017.65 × 1019.30 × 1018.14 × 1015.43 × 1015.18 × 1011.18 × 1024.94 × 1011.30 × 1023.77 × 1019.93 × 1011.46 × 102
Std6.47 × 1001.51 × 1012.93 × 1011.52 × 1011.32 × 1011.28 × 1012.63 × 1014.93 × 1003.33 × 1014.86 × 1002.26 × 1014.40 × 101
p-value3.81 × 10−10 +1.20 × 10−8 +6.68 × 10−11 +2.75 × 10−3 +3.78 × 10−2 +3.01 × 10−11 +7.29 × 10−3 +3.01 × 10−11 +2.75 × 10−5 −3.01 × 10−11 +3.01 × 10−11 +
F12Mean2.71 × 1034.77 × 1038.21 × 1052.15 × 1034.81 × 1044.50 × 1045.93 × 1032.40 × 1044.82 × 1044.33 × 1045.38 × 1031.19 × 104
Std1.43 × 1032.42 × 1035.48 × 1055.98 × 1023.08 × 1042.25 × 1046.25 × 1031.54 × 1043.28 × 1042.23 × 1043.14 × 1031.10 × 104
p-value5.09 × 10−6 +3.02 × 10−11 +9.05 × 10−2 =3.02 × 10−11 +3.69 × 10−11 +3.83 × 10−5 +8.15 × 10−11 +5.49 × 10−11 +3.02 × 10−11 +2.43 × 10−5 +4.20 × 10−10 +
F13Mean1.81 × 1021.34 × 1021.83 × 1041.08 × 1027.47 × 1023.08 × 1031.81 × 1029.64 × 1039.90 × 1026.50 × 1032.11 × 1021.54 × 102
Std5.23 × 1011.38 × 1021.50 × 1045.60 × 1018.43 × 1026.56 × 1031.39 × 1021.02 × 1041.00 × 1038.10 × 1031.98 × 1027.44 × 101
p-value5.61 × 10−5 −6.70 × 10−11 +1.25 × 10−5 −3.18 × 10−4 +1.95 × 10−3 +9.63 × 10−2 =3.02 × 10−11 +9.92 × 10−11 +8.20 × 10−7 +1.37 × 10−1 =5.55 × 10−2 =
F14Mean4.16 × 1011.39 × 1021.36 × 1044.78 × 1014.26 × 1014.69 × 1012.00 × 1025.91 × 1011.51 × 1023.49 × 1011.58 × 1021.35 × 102
Std7.91 × 1004.62 × 1012.18 × 1041.34 × 1016.56 × 1001.00 × 1014.91 × 1011.19 × 1014.11 × 1019.81 × 1004.65 × 1018.11 × 101
p-value3.34 × 10−11 +3.02 × 10−11 +1.09 × 10−1 =5.20 × 10−1 =4.21 × 10−2 +3.02 × 10−11 +3.65 × 10−8 +4.08 × 10−11 +4.22 × 10−4 −3.02 × 10−11 +2.61 × 10−10 +
F15Mean4.48 × 1011.25 × 1026.91 × 1036.63 × 1015.29 × 1015.30 × 1012.40 × 1023.96 × 1032.14 × 1022.97 × 1011.65 × 1021.67 × 102
Std1.22 × 1016.85 × 1018.75 × 1032.47 × 1014.54 × 1012.77 × 1011.19 × 1027.98 × 1038.67 × 1011.45 × 1018.32 × 1011.20 × 102
p-value3.20 × 10−9 +3.02 × 10−11 +8.66 × 10−5 +7.51 × 10−1 =8.07 × 10−1 =1.33 × 10−10 +8.15 × 10−5 +3.02 × 10−11 +2.83 × 10−8 −8.99 × 10−11 +1.69 × 10−9 +
F16Mean4.26 × 1027.62 × 1021.59 × 1039.06 × 1021.21 × 1031.02 × 1037.78 × 1029.26 × 1027.25 × 1027.21 × 1027.78 × 1027.39 × 102
Std1.98 × 1021.55 × 1024.83 × 1021.83 × 1023.09 × 1023.14 × 1021.53 × 1022.52 × 1021.53 × 1025.23 × 1021.42 × 1022.51 × 102
p-value4.69 × 10−8 +5.49 × 10−11 +1.29 × 10−9 +1.78 × 10−10 +2.67 × 10−9 +1.56 × 10−8 +1.31 × 10−8 +1.60 × 10−7 +1.84 × 10−2 +1.70 × 10−8 +1.53 × 10−5 +
F17
Mean2.39 × 1025.00 × 1027.09 × 1026.10 × 1027.01 × 1026.77 × 1026.30 × 1026.36 × 1025.90 × 1026.69 × 1024.62 × 1026.74 × 102
Std1.58 × 1021.00 × 1022.34 × 1021.83 × 1022.66 × 1022.48 × 1021.22 × 1021.67 × 1021.43 × 1023.95 × 1021.10 × 1021.83 × 102
p-value5.09 × 10−8 +2.67 × 10−9 +6.52 × 10−9 +1.20 × 10−8 +3.82 × 10−9 +2.03 × 10−9 +5.00 × 10−9 +6.52 × 10−9 +4.35 × 10−5 +3.01 × 10−7 +8.10 × 10−10 +
F18Mean1.12 × 1021.02 × 1022.36 × 1067.85 × 1017.05 × 1021.47 × 1021.91 × 1021.29 × 1023.63 × 1022.31 × 1031.15 × 1023.45 × 102
Std5.32 × 1016.61 × 1011.42 × 1063.95 × 1017.80 × 1021.10 × 1021.22 × 1024.42 × 1014.12 × 1022.65 × 1037.84 × 1014.16 × 102
p-value2.97 × 10−1 =3.02 × 10−11 +1.63 × 10−2 −5.97 × 10−9 +2.46 × 10−1 =1.77 × 10−3 +1.54 × 10−1 =4.08 × 10−5 +1.78 × 10−10 +7.06 × 10−1 =7.66 × 10−5 +
F19Mean3.11 × 1019.70 × 1015.73 × 1036.78 × 1011.79 × 1012.18 × 1011.21 × 1022.53 × 1026.51 × 1011.38 × 1011.11 × 1021.00 × 102
Std8.92 × 1003.58 × 1011.16 × 1042.16 × 1011.03 × 1014.86 × 1003.74 × 1015.99 × 1021.81 × 1013.87 × 1004.39 × 1015.85 × 101
p-value9.92 × 10−11 +3.02 × 10−11 +1.85 × 10−8 +4.11 × 10−7 −3.37 × 10−5 −3.34 × 10−11 +3.48 × 10−1 =1.17 × 10−9 +2.37 × 10−10 −3.02 × 10−11 +1.25 × 10−7 +
F20Mean1.47 × 1023.51 × 1027.14 × 1025.52 × 1025.57 × 1025.01 × 1024.63 × 1025.87 × 1024.07 × 1027.01 × 1023.41 × 1023.44 × 102
Std1.03 × 1029.30 × 1012.16 × 1021.84 × 1022.58 × 1022.05 × 1021.43 × 1021.24 × 1021.13 × 1023.89 × 1029.93 × 1011.89 × 102
p-value2.60 × 10−8 +5.49 × 10−11 +8.10 × 10−10 +1.56 × 10−8 +6.52 × 10−9 +1.17 × 10−9 +4.50 × 10−11 +3.82 × 10−9 +1.31 × 10−8 +9.83 × 10−8 +4.08 × 10−5 +
F11–20 w/t/l: 8/1/1 | 10/0/0 | 6/2/2 | 7/2/1 | 7/2/1 | 9/1/0 | 8/2/0 | 10/0/0 | 6/0/4 | 8/2/0 | 9/1/0
F21 (Composition Functions)
Mean2.24 × 1022.55 × 1023.92 × 1022.48 × 1022.70 × 1022.69 × 1022.58 × 1022.93 × 1022.75 × 1022.41 × 1022.50 × 1022.89 × 102
Std6.22 × 1006.13 × 1009.85 × 1013.46 × 1011.87 × 1011.68 × 1018.43 × 1001.16 × 1018.62 × 1001.02 × 1015.60 × 1001.58 × 101
p-value3.69 × 10−11 +3.02 × 10−11 +8.88 × 10−6 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +1.43 × 10−8 +4.50 × 10−11 +3.02 × 10−11 +
F22Mean6.01 × 1034.57 × 1031.26 × 1043.20 × 1021.17 × 1045.35 × 1034.53 × 1035.13 × 1037.71 × 1026.34 × 1034.65 × 1033.36 × 103
Std2.10 × 1032.14 × 1034.21 × 1021.21 × 1033.67 × 1022.13 × 1031.97 × 1033.56 × 1021.55 × 1033.89 × 1031.96 × 1031.97 × 103
p-value1.43 × 10−5 −3.02 × 10−11 +1.81 × 10−8 −3.02 × 10−11 +1.33 × 10−2 −1.73 × 10−6 −4.44 × 10−7 −1.07 × 10−7 −1.09 × 10−1 =2.88 × 10−6 −7.63 × 10−8 −
F23Mean4.48 × 1024.78 × 1025.55 × 1024.79 × 1024.95 × 1024.99 × 1024.84 × 1025.20 × 1025.02 × 1024.60 × 1024.75 × 1025.31 × 102
Std8.48 × 1009.83 × 1007.25 × 1013.75 × 1011.81 × 1012.18 × 1018.54 × 1001.53 × 1011.38 × 1011.45 × 1011.02 × 1012.67 × 101
p-value4.08 × 10−11 +3.02 × 10−11 +6.77 × 10−5 +3.69 × 10−11 +3.69 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +4.46 × 10−4 +3.69 × 10−11 +3.02 × 10−11 +
F24Mean5.18 × 1025.47 × 1028.44 × 1025.35 × 1025.59 × 1025.75 × 1025.48 × 1025.83 × 1025.65 × 1025.47 × 1025.44 × 1025.85 × 102
Std6.23 × 1006.57 × 1001.57 × 1013.27 × 1011.36 × 1011.70 × 1018.36 × 1008.74 × 1001.20 × 1011.46 × 1017.53 × 1001.84 × 101
p-value3.02 × 10−11 +3.02 × 10−11 +5.69 × 10−1 =3.02 × 10−11 +3.02 × 10−11 +3.34 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +1.61 × 10−10 +6.07 × 10−11 +3.02 × 10−11 +
F25Mean5.07 × 1025.33 × 1024.99 × 1024.90 × 1025.22 × 1025.28 × 1025.24 × 1024.91 × 1025.41 × 1024.90 × 1025.44 × 1025.36 × 102
Std3.69 × 1013.28 × 1013.53 × 1012.24 × 1013.66 × 1013.42 × 1013.51 × 1012.86 × 1014.46 × 1012.50 × 1013.24 × 1014.52 × 101
p-value4.33 × 10−4 +4.06 × 10−2 −9.06 × 10−1 =2.84 × 10−1 =6.97 × 10−3 +2.07 × 10−2 +7.01 × 10−2 =7.62 × 10−3 +5.69 × 10−1 =4.35 × 10−5 +6.10 × 10−3 +
F26Mean1.24 × 1031.59 × 1032.01 × 1031.40 × 1031.96 × 1031.93 × 1031.64 × 1032.11 × 1032.32 × 1031.47 × 1031.54 × 1032.81 × 103
Std8.99 × 1018.59 × 1013.21 × 1023.02 × 1022.06 × 1022.02 × 1021.05 × 1021.45 × 1022.40 × 1027.58 × 1018.94 × 1017.55 × 102
p-value3.34 × 10−11 +3.02 × 10−11 +1.22 × 10−1 =3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.82 × 10−10 +6.70 × 10−11 +5.57 × 10−10 +
F27Mean5.26 × 1025.40 × 1026.36 × 1025.56 × 1025.00 × 1025.21 × 1025.68 × 1025.48 × 1026.53 × 1025.25 × 1025.60 × 1026.32 × 102
Std1.12 × 1011.59 × 1017.26 × 1012.90 × 1019.63 × 10−52.15 × 1014.22 × 1013.53 × 1013.18 × 1012.10 × 1013.54 × 1018.37 × 101
p-value3.59 × 10−5 +6.07 × 10−11 +1.36 × 10−7 +3.02 × 10−11 −3.26 × 10−1 =3.96 × 10−8 +3.03 × 10−3 +3.02 × 10−11 +3.95 × 10−1 =2.02 × 10−8 +2.23 × 10−9 +
F28Mean4.98 × 1024.92 × 1029.77 × 1024.98 × 1025.00 × 1024.86 × 1025.01 × 1024.63 × 1024.97 × 1025.06 × 1024.97 × 1024.91 × 102
Std1.98 × 1012.14 × 1011.02 × 1031.99 × 1019.72 × 10−52.84 × 1011.50 × 1011.41 × 1011.82 × 1012.51 × 1021.78 × 1012.62 × 101
p-value7.32 × 10−1 =8.59 × 10−5 +2.45 × 10−1 =3.30 × 10−4 +7.52 × 10−1 =3.56 × 10−4 +1.72 × 10−9 −4.40 × 10−1 =4.51 × 10−4 +3.56 × 10−2 −9.72 × 10−2 =
F29Mean3.48 × 1024.87 × 1026.40 × 1025.41 × 1027.18 × 1024.42 × 1024.96 × 1028.06 × 1025.64 × 1023.97 × 1024.66 × 1029.76 × 102
Std2.22 × 1015.20 × 1012.37 × 1028.17 × 1012.07 × 1029.57 × 1017.86 × 1012.24 × 1027.97 × 1011.32 × 1026.00 × 1012.78 × 102
p-value3.02 × 10−11 +5.57 × 10−10 +8.15 × 10−11 +1.17 × 10−9 +9.51 × 10−6 +3.34 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +2.17 × 10−1 =3.69 × 10−11 +3.02 × 10−11 +
F30Mean6.19 × 1056.24 × 1059.44 × 1056.48 × 1052.33 × 1036.19 × 1056.51 × 1055.95 × 1057.35 × 1055.99 × 1056.41 × 1056.90 × 105
Std4.59 × 1044.11 × 1046.07 × 1057.13 × 1042.51 × 1033.65 × 1047.44 × 1042.29 × 1048.98 × 1042.45 × 1046.08 × 1047.35 × 104
p-value6.20 × 10−1 =2.38 × 10−3 +2.81 × 10−2 +3.02 × 10−11 −8.30 × 10−1 =1.05 × 10−1 =3.04 × 10−1 =3.01 × 10−7 +7.91 × 10−3 −1.76 × 10−1 =9.21 × 10−5 +
F21–30 w/t/l: 7/2/1 | 9/0/1 | 5/4/1 | 7/1/2 | 6/3/1 | 8/1/1 | 6/2/2 | 8/1/1 | 5/4/1 | 7/1/2 | 8/1/1
Overall w/t/l: 20/4/5 | 28/0/1 | 15/7/7 | 21/4/4 | 21/5/3 | 23/3/3 | 20/5/4 | 25/2/2 | 17/6/6 | 20/4/5 | 24/2/3
Rank: 2.93 | 4.79 | 11.14 | 4.45 | 6.86 | 6.97 | 7.00 | 7.48 | 8.00 | 5.07 | 5.28 | 8.03
Table 5. Comparison results between DEGGDE and the eleven state-of-the-art DE variants on the 100D CEC’2017 benchmark functions.
F | Category | Quality | DEGGDE | SHADE | GPDE | DiDE | SEDE | FADE | FDDE | TPDE | NSHADE | CUSDE | PFIDE | EJADE
F1 (Unimodal Functions)
Mean4.87 × 10−142.80 × 10−111.72 × 1072.04 × 10−122.51 × 10−11.15 × 1044.62 × 10−105.42 × 1017.40 × 1016.12 × 1011.57 × 10−101.65 × 100
Std5.27 × 10−146.94 × 10−119.42 × 1071.69 × 10−125.67 × 10−11.44 × 1041.00 × 10−91.42 × 1021.08 × 1022.06 × 1022.97 × 10−102.74 × 100
p-value1.87 × 10−7 +1.83 × 10−11 +2.49 × 10−11 +1.83 × 10−11 +1.83 × 10−11 +2.22 × 10−10 +1.83 × 10−11 +1.83 × 10−11 +1.83 × 10−11 +5.05 × 10−11 +1.83 × 10−11 +
F3Mean6.25 × 1017.80 × 10−56.07 × 1054.52 × 1008.46 × 1012.89 × 1018.82 × 1046.20 × 1034.46 × 1043.43 × 1052.00 × 10−33.64 × 103
Std4.90 × 1012.12 × 10−44.95 × 1047.44 × 1008.52 × 1013.80 × 1011.28 × 1054.93 × 1031.34 × 1055.12 × 1041.03 × 10−24.02 × 103
p-value3.02 × 10−11 −3.02 × 10−11 +5.57 × 10−10 −6.73 × 10−1 =1.11 × 10−4 −2.71 × 10−2 +3.02 × 10−11 +4.50 × 10−11 +3.02 × 10−11 +3.02 × 10−11 −7.38 × 10−10 +
F1–3 w/t/l: 1/0/1 | 2/0/0 | 1/0/1 | 1/1/0 | 1/0/1 | 2/0/0 | 2/0/0 | 2/0/0 | 2/0/0 | 1/0/1 | 2/0/0
F4 (Simple Multimodal Functions)
Mean1.27 × 1021.10 × 1022.24 × 1021.96 × 1021.97 × 1022.20 × 1021.70 × 1022.13 × 1021.89 × 1022.08 × 1027.92 × 1011.30 × 102
Std7.43 × 1016.39 × 1011.98 × 1011.88 × 1012.79 × 1014.68 × 1013.86 × 1016.87 × 1005.65 × 1013.14 × 1016.51 × 1016.49 × 101
p-value1.49 × 10−1 =8.10 × 10−10 +1.53 × 10−5 +1.34 × 10−5 +2.15 × 10−6 +5.37 × 10−2 =1.69 × 10−8 +3.50 × 10−3 +1.46 × 10−6 +8.68 × 10−3 −8.88 × 10−1 =
F5Mean5.11 × 1011.66 × 1025.20 × 1023.54 × 1021.43 × 1021.89 × 1021.77 × 1022.18 × 1022.56 × 1027.31 × 1021.69 × 1023.44 × 102
Std1.14 × 1011.61 × 1012.34 × 1021.20 × 1022.36 × 1012.95 × 1011.65 × 1012.52 × 1012.97 × 1011.91 × 1021.55 × 1014.54 × 101
p-value3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +4.07 × 10−11 +3.02 × 10−11 +3.01 × 10−11 +
F6Mean1.41 × 10−33.95 × 10−36.92 × 10−25.89 × 10−37.10 × 10−85.85 × 1003.73 × 10−34.43 × 10−64.38 × 10−41.21 × 10−33.86 × 10−21.53 × 101
Std1.08 × 10−34.55 × 10−35.47 × 10−26.17 × 10−31.60 × 10−71.90 × 1005.82 × 10−33.94 × 10−61.21 × 10−32.36 × 10−33.82 × 10−24.19 × 100
p-value1.44 × 10−2 +3.02 × 10−11 +1.03 × 10−2 +3.01 × 10−11 −3.02 × 10−11 +7.84 × 10−1 =3.02 × 10−11 −8.48 × 10−9 −4.71 × 10−4 −5.49 × 10−11 +3.02 × 10−11 +
F7Mean1.68 × 1022.67 × 1027.50 × 1024.90 × 1022.72 × 1024.65 × 1022.89 × 1023.07 × 1024.25 × 1029.08 × 1022.82 × 1025.58 × 102
Std3.00 × 1011.60 × 1012.33 × 1021.07 × 1022.60 × 1018.74 × 1011.96 × 1013.17 × 1013.01 × 1011.96 × 1011.98 × 1019.51 × 101
p-value3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +
F8Mean4.89 × 1011.55 × 1024.13 × 1023.77 × 1021.50 × 1021.88 × 1021.68 × 1022.17 × 1022.46 × 1027.89 × 1021.58 × 1023.62 × 102
Std1.16 × 1011.35 × 1012.65 × 1029.70 × 1011.85 × 1012.45 × 1012.13 × 1014.03 × 1012.34 × 1011.96 × 1011.64 × 1013.86 × 101
p-value3.02 × 10−11 +3.02 × 10−11 +4.07 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.01 × 10−11 +
F9Mean2.75 × 1009.67 × 1006.01 × 1018.27 × 10−11.25 × 1015.63 × 1022.59 × 1011.33 × 1001.76 × 1034.42 × 1003.18 × 1013.66 × 103
Std1.44 × 1007.08 × 1001.04 × 1026.19 × 10−11.23 × 1012.73 × 1029.15 × 1001.11 × 1001.16 × 1033.96 × 1002.56 × 1011.47 × 103
p-value3.64 × 10−8 +1.31 × 10−8 +9.14 × 10−9 −1.63 × 10−5 +3.02 × 10−11 +3.02 × 10−11 +4.91 × 10−5 −3.02 × 10−11 +2.46 × 10−1 =3.69 × 10−11 +3.02 × 10−11 +
F10Mean1.52 × 1041.44 × 1043.01 × 1041.97 × 1042.84 × 1041.19 × 1041.42 × 1041.18 × 1041.03 × 1042.94 × 1041.38 × 1041.15 × 104
Std9.57 × 1024.40 × 1024.55 × 1023.08 × 1034.63 × 1029.87 × 1024.20 × 1028.89 × 1024.37 × 1025.82 × 1025.61 × 1021.60 × 103
p-value1.60 × 10−3 −3.02 × 10−11 +2.20 × 10−7 +3.02 × 10−11 +9.92 × 10−11 −3.83 × 10−5 −3.02 × 10−11 −3.02 × 10−11 −3.02 × 10−11 +7.69 × 10−8 −2.87 × 10−10 −
F4–10 w/t/l: 5/1/1 | 7/0/0 | 6/0/1 | 6/0/1 | 6/0/1 | 4/2/1 | 4/0/3 | 5/0/2 | 5/1/1 | 5/0/2 | 5/1/1
F11 (Hybrid Functions)
Mean7.50 × 1029.95 × 1028.32 × 1026.39 × 1021.62 × 1023.17 × 1021.33 × 1033.48 × 1026.08 × 1025.65 × 1021.01 × 1039.64 × 102
Std1.49 × 1021.94 × 1021.04 × 1028.87 × 1014.92 × 1017.84 × 1016.06 × 1021.16 × 1021.29 × 1021.00 × 1022.14 × 1022.41 × 102
p-value3.09 × 10−6 +1.76 × 10−2 +1.11 × 10−3 −3.02 × 10−11 −3.69 × 10−11 −3.83 × 10−6 +9.92 × 10−11 −4.98 × 10−4 −1.49 × 10−6 −8.88 × 10−6 +4.22 × 10−4 +
F12Mean2.39 × 1041.63 × 1041.33 × 1062.63 × 1042.61 × 1053.96 × 1053.47 × 1043.23 × 1052.87 × 1051.75 × 1051.96 × 1041.15 × 105
Std9.49 × 1037.67 × 1038.38 × 1059.77 × 1039.23 × 1041.59 × 1053.55 × 1041.31 × 1051.06 × 1056.68 × 1041.24 × 1049.64 × 104
p-value8.12 × 10−4 −3.02 × 10−11 +3.11 × 10−1 =3.02 × 10−11 +3.02 × 10−11 +8.65 × 10−1 =3.02 × 10−11 +3.02 × 10−11 +3.34 × 10−11 +1.76 × 10−2 −1.69 × 10−9 +
F13Mean4.91 × 1023.39 × 1035.43 × 1031.56 × 1034.50 × 1035.52 × 1034.63 × 1034.00 × 1032.29 × 1036.07 × 1032.66 × 1033.37 × 103
Std8.17 × 1023.07 × 1035.23 × 1038.75 × 1024.14 × 1035.15 × 1034.12 × 1034.59 × 1031.75 × 1036.75 × 1032.57 × 1032.51 × 103
p-value8.35 × 10−8 +4.31 × 10−8 +2.00 × 10−6 +3.65 × 10−8 +5.97 × 10−9 +4.31 × 10−8 +2.20 × 10−7 +2.78 × 10−7 +1.25 × 10−7 +3.01 × 10−7 +2.60 × 10−8 +
F14Mean3.00 × 1023.72 × 1021.80 × 1062.80 × 1022.33 × 1022.78 × 1024.61 × 1023.26 × 1024.61 × 1036.91 × 1038.15 × 1025.51 × 102
Std4.83 × 1011.61 × 1021.24 × 1064.33 × 1011.36 × 1026.86 × 1011.50 × 1026.38 × 1016.92 × 1034.49 × 1031.80 × 1032.44 × 102
p-value1.62 × 10−1 =3.02 × 10−11 +7.48 × 10−2 =8.66 × 10−5 −8.50 × 10−2 =3.57 × 10−6 +5.37 × 10−2 =6.01 × 10−8 +3.02 × 10−11 +5.09 × 10−6 +2.78 × 10−7 +
F15Mean5.78 × 1023.33 × 1029.00 × 1032.58 × 1028.38 × 1021.90 × 1035.51 × 1025.60 × 1039.98 × 1027.73 × 1034.06 × 1026.65 × 102
Std4.86 × 1021.20 × 1028.92 × 1034.64 × 1011.04 × 1032.17 × 1036.43 × 1027.38 × 1038.54 × 1028.20 × 1031.88 × 1026.62 × 102
p-value4.36 × 10−2 −7.69 × 10−8 +3.37 × 10−4 −6.95 × 10−1 =7.66 × 10−5 +9.94 × 10−1 =1.60 × 10−3 +7.62 × 10−3 +9.79 × 10−5 +3.79 × 10−1 =7.17 × 10−1 =
F16Mean1.68 × 1032.44 × 1037.47 × 1034.00 × 1032.80 × 1032.78 × 1032.56 × 1032.85 × 1032.41 × 1036.60 × 1032.54 × 1032.47 × 103
Std3.70 × 1023.29 × 1024.23 × 1022.64 × 1026.35 × 1023.87 × 1023.75 × 1024.09 × 1022.73 × 1021.63 × 1033.59 × 1025.69 × 102
p-value7.77 × 10−9 +3.02 × 10−11 +3.02 × 10−11 +1.20 × 10−8 +3.16 × 10−10 +2.92 × 10−9 +8.99 × 10−11 +5.46 × 10−9 +6.12 × 10−10 +2.44 × 10−9 +4.44 × 10−7 +
F17Hybrid
Functions
Mean1.23 × 1031.78 × 1034.51 × 1032.65 × 1031.96 × 1031.69 × 1031.91 × 1032.17 × 1031.84 × 1033.37 × 1031.77 × 1032.20 × 103
Std3.26 × 1022.22 × 1025.25 × 1022.51 × 1024.47 × 1022.79 × 1022.43 × 1022.59 × 1022.03 × 1021.33 × 1032.10 × 1024.03 × 102
p-value7.09 × 10−8 +3.02 × 10−11 +3.34 × 10−11 +5.53 × 10−8 +2.49 × 10−6 +2.23 × 10−9 +1.21 × 10−10 +6.52 × 10−9 +5.09 × 10−8 +3.35 × 10−8 +1.46 × 10−10 +
F18Mean2.55 × 1022.38 × 1031.25 × 1072.22 × 1021.89 × 1047.75 × 1032.46 × 1031.30 × 1043.47 × 1041.01 × 1051.46 × 1031.08 × 104
Std9.24 × 1012.08 × 1035.55 × 1065.31 × 1011.62 × 1045.12 × 1032.15 × 1037.38 × 1031.86 × 1044.38 × 1049.16 × 1027.07 × 103
p-value1.21 × 10−10 +3.02 × 10−11 +1.12 × 10−1 =3.02 × 10−11 +3.02 × 10−11 +4.50 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +4.08 × 10−11 +3.02 × 10−11 +
F19Mean1.81 × 1022.61 × 1021.34 × 1041.85 × 1025.60 × 1023.46 × 1032.39 × 1034.62 × 1031.24 × 1031.13 × 1041.15 × 1032.49 × 102
Std3.46 × 1012.24 × 1021.36 × 1043.17 × 1011.31 × 1033.61 × 1034.34 × 1034.24 × 1031.16 × 1031.24 × 1041.48 × 1036.73 × 101
p-value1.11 × 10−4 +1.69 × 10−9 +5.89 × 10−1 =7.28 × 10−1 =4.08 × 10−11 +8.15 × 10−11 +6.52 × 10−9 +1.55 × 10−9 +8.48 × 10−9 +3.09 × 10−6 +1.64 × 10−5 +
F20Mean1.46 × 1031.68 × 1034.40 × 1033.21 × 1032.54 × 1031.68 × 1031.93 × 1032.12 × 1031.90 × 1033.07 × 1031.72 × 1031.68 × 103
Std3.19 × 1022.01 × 1024.37 × 1022.66 × 1028.77 × 1023.43 × 1022.12 × 1022.88 × 1022.33 × 1021.27 × 1031.84 × 1024.26 × 102
p-value1.27 × 10−2 +3.02 × 10−11 +3.02 × 10−11 +3.25 × 10−7 +2.24 × 10−2 +1.36 × 10−7 +1.69 × 10−9 +1.11 × 10−6 +4.31 × 10−8 +2.62 × 10−3 +1.56 × 10−2 +
F11–20w/t/l7/1/210/0/04/4/26/2/28/1/18/2/08/1/19/0/19/0/18/1/19/1/0
F21Composition
Functions
Mean2.66 × 1023.83 × 1026.68 × 1026.12 × 1023.70 × 1024.33 × 1023.94 × 1024.56 × 1024.41 × 1029.89 × 1023.76 × 1025.39 × 102
Std9.89 × 1001.09 × 1012.73 × 1026.73 × 1012.39 × 1012.93 × 1011.50 × 1012.87 × 1011.80 × 1011.34 × 1021.51 × 1015.23 × 101
p-value3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.34 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +
F22Mean1.64 × 1041.56 × 1043.04 × 1042.18 × 1042.90 × 1041.28 × 1041.51 × 1041.32 × 1049.68 × 1032.99 × 1041.50 × 1041.26 × 104
Std9.55 × 1025.87 × 1025.70 × 1023.07 × 1035.37 × 1021.09 × 1037.79 × 1027.00 × 1024.82 × 1035.43 × 1025.73 × 1021.49 × 103
p-value1.00 × 10−3 −3.02 × 10−11 +1.87 × 10−7 +3.02 × 10−11 +3.02 × 10−11 −5.86 × 10−6 −3.69 × 10−11 −3.02 × 10−11 −3.02 × 10−11 +4.11 × 10−7 −8.10 × 10−10 −
F23Mean5.82 × 1026.88 × 1026.77 × 1028.95 × 1026.60 × 1027.82 × 1026.98 × 1027.79 × 1026.95 × 1025.98 × 1026.81 × 1029.24 × 102
Std1.26 × 1011.59 × 1012.49 × 1016.37 × 1012.59 × 1014.11 × 1011.66 × 1012.41 × 1011.90 × 1011.78 × 1011.63 × 1016.51 × 101
p-value3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.01 × 10−4 +3.02 × 10−11 +3.02 × 10−11 +
F24Mean9.14 × 1021.03 × 1031.07 × 1039.79 × 1021.03 × 1031.15 × 1031.04 × 1031.11 × 1031.13 × 1039.41 × 1021.03 × 1031.31 × 103
Std1.03 × 1011.85 × 1013.48 × 1017.68 × 1012.15 × 1014.51 × 1012.10 × 1012.83 × 1013.05 × 1011.32 × 1011.98 × 1018.06 × 101
p-value3.02 × 10−11 +3.02 × 10−11 +4.98 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +4.57 × 10−9 +3.02 × 10−11 +3.02 × 10−11 +
F25Mean7.44 × 1027.35 × 1027.95 × 1027.39 × 1027.43 × 1027.80 × 1027.65 × 1027.55 × 1027.97 × 1027.22 × 1027.49 × 1027.44 × 102
Std3.39 × 1015.31 × 1016.07 × 1013.16 × 1014.22 × 1017.11 × 1017.66 × 1014.75 × 1016.56 × 1014.36 × 1015.83 × 1015.51 × 101
p-value8.88 × 10−1 =1.11 × 10−4 +7.17 × 10−1 =5.79 × 10−1 =1.03 × 10−2 +5.75 × 10−2 =1.76 × 10−1 =9.21 × 10−5 +1.54 × 10−1 =8.30 × 10−1 =8.88 × 10−1 =
F26Mean3.34 × 1034.53 × 1035.46 × 1033.55 × 1034.88 × 1035.95 × 1034.77 × 1035.80 × 1037.24 × 1033.81 × 1034.66 × 1031.01 × 104
Std9.87 × 1012.34 × 1023.26 × 1028.94 × 1013.53 × 1024.16 × 1022.52 × 1024.03 × 1029.90 × 1021.96 × 1021.95 × 1023.00 × 103
p-value3.02 × 10−11 +3.02 × 10−11 +7.77 × 10−9 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +9.92 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +
F27Mean6.43 × 1026.81 × 1026.90 × 1026.64 × 1025.00 × 1026.41 × 1027.22 × 1026.38 × 1028.69 × 1026.11 × 1027.08 × 1028.66 × 102
Std1.63 × 1012.25 × 1013.11 × 1012.08 × 1019.87 × 10−53.93 × 1013.56 × 1011.81 × 1014.72 × 1011.90 × 1013.32 × 1011.12 × 102
p-value6.53 × 10−8 +7.38 × 10−10 +4.35 × 10−5 +3.02 × 10−11 −2.64 × 10−1 =7.39 × 10−11 +4.20 × 10−1 =3.02 × 10−11 +5.53 × 10−8 −1.46 × 10−10 +3.69 × 10−11 +
F28Mean5.52 × 1025.31 × 1021.28 × 1035.34 × 1025.00 × 1025.70 × 1025.48 × 1025.30 × 1025.55 × 1025.34 × 1025.31 × 1025.50 × 102
Std3.41 × 1012.98 × 1012.30 × 1033.77 × 1018.10 × 10−53.31 × 1012.51 × 1011.94 × 1013.50 × 1012.78 × 1012.99 × 1013.42 × 101
p-value3.27 × 10−2 −9.52 × 10−4 +4.23 × 10−3 −8.48 × 10−9 −1.27 × 10−2 +7.06 × 10−1 =1.89 × 10−4 −1.33 × 10−1 =1.50 × 10−2 −3.51 × 10−2 −7.51 × 10−1 =
F29Mean1.22 × 1032.11 × 1032.72 × 1032.65 × 1032.09 × 1032.15 × 1032.20 × 1032.75 × 1032.43 × 1031.47 × 1032.26 × 1033.31 × 103
Std2.40 × 1022.23 × 1026.39 × 1022.86 × 1026.19 × 1023.55 × 1022.68 × 1022.74 × 1022.01 × 1021.11 × 1032.65 × 1023.71 × 102
p-value4.98 × 10−11 +4.98 × 10−11 +3.02 × 10−11 +6.52 × 10−9 +1.78 × 10−10 +4.98 × 10−11 +3.02 × 10−11 +3.02 × 10−11 +3.40 × 10−1 =4.08 × 10−11 +3.02 × 10−11 +
F30Mean4.27 × 1035.14 × 1036.69 × 1032.89 × 1032.55 × 1034.86 × 1034.42 × 1034.34 × 1036.51 × 1034.03 × 1034.19 × 1032.85 × 103
Std1.48 × 1032.70 × 1034.74 × 1032.04 × 1021.35 × 1033.00 × 1032.80 × 1033.51 × 1032.54 × 1032.54 × 1031.86 × 1032.82 × 102
p-value4.12 × 10−1 =5.94 × 10−2 =1.95 × 10−3 −6.36 × 10−5 −6.00 × 10−1 =5.11 × 10−1 =7.24 × 10−2 =2.25 × 10−4 +7.48 × 10−2 =5.40 × 10−1 =6.55 × 10−4 −
F21–30w/t/l6/2/29/1/07/1/26/1/37/2/16/3/15/3/28/1/15/3/26/2/26/2/2
w/t/l19/4/628/1/018/5/619/4/622/3/420/7/219/4/624/1/421/4/420/3/622/4/3
Rank3.214.45 10.69 5.90 4.93 7.34 6.72 6.93 7.48 7.83 5.03 7.48
Table 6. Summarized comparison results between DEGGDE and the eleven compared algorithms with respect to “w/t/l” derived from the Wilcoxon rank sum test and the average rank obtained from the Friedman test on the 30D, 50D, and 100D CEC’2017 benchmark sets.
| Problem Set | Problem Property | Index | DEGGDE | SHADE | GPDE | DiDE | SEDE | FADE | FDDE | TPDE | NSHADE | CUSDE | PFIDE | EJADE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CEC’2017-30D | Unimodal Problems | w/t/l | | 2/0/0 | 2/0/0 | 0/1/1 | 0/1/1 | 1/1/0 | 2/0/0 | 2/0/0 | 2/0/0 | 2/0/0 | 1/1/0 | 2/0/0 |
| | Simple Multimodal Problems | w/t/l | | 3/3/1 | 6/1/0 | 3/2/2 | 5/2/0 | 5/1/1 | 3/1/3 | 4/1/2 | 5/0/2 | 4/1/2 | 3/1/3 | 5/0/2 |
| | Hybrid Problems | w/t/l | | 9/0/1 | 10/0/0 | 3/5/2 | 4/3/3 | 6/2/2 | 10/0/0 | 7/3/0 | 10/0/0 | 6/2/2 | 7/3/0 | 7/3/0 |
| | Composition Problems | w/t/l | | 6/4/0 | 10/0/0 | 4/2/4 | 6/2/2 | 6/3/1 | 8/1/1 | 7/2/1 | 8/0/2 | 7/2/1 | 7/1/2 | 9/0/1 |
| | Overall | w/t/l | | 20/7/2 | 28/1/0 | 10/10/9 | 15/8/6 | 18/7/4 | 23/2/4 | 20/6/3 | 25/0/4 | 19/5/5 | 18/6/5 | 23/3/3 |
| | Overall | Rank | 3.28 | 5.28 | 11.55 | 3.72 | 6.38 | 6.72 | 6.83 | 7.31 | 8.00 | 5.98 | 4.84 | 8.10 |
| CEC’2017-50D | Unimodal Problems | w/t/l | | 1/0/1 | 2/0/0 | 0/0/2 | 2/0/0 | 2/0/0 | 2/0/0 | 2/0/0 | 2/0/0 | 2/0/0 | 1/0/1 | 2/0/0 |
| | Simple Multimodal Problems | w/t/l | | 4/1/2 | 7/0/0 | 4/1/2 | 5/1/1 | 6/0/1 | 4/1/2 | 4/1/2 | 5/1/1 | 4/2/1 | 4/1/2 | 5/0/2 |
| | Hybrid Problems | w/t/l | | 8/1/1 | 10/0/0 | 6/2/2 | 7/2/1 | 7/2/1 | 9/1/0 | 8/2/0 | 10/0/0 | 6/0/4 | 8/2/0 | 9/1/0 |
| | Composition Problems | w/t/l | | 7/2/1 | 9/0/1 | 5/4/1 | 7/1/2 | 6/3/1 | 8/1/1 | 6/2/2 | 8/1/1 | 5/4/1 | 7/1/2 | 8/1/1 |
| | Overall | w/t/l | | 20/4/5 | 28/0/1 | 15/7/7 | 21/4/4 | 21/5/3 | 23/3/3 | 20/5/4 | 25/2/2 | 17/6/6 | 20/4/5 | 24/2/3 |
| | Overall | Rank | 2.93 | 4.79 | 11.14 | 4.45 | 6.86 | 6.97 | 7.00 | 7.48 | 8.00 | 5.07 | 5.28 | 8.03 |
| CEC’2017-100D | Unimodal Problems | w/t/l | | 1/0/1 | 2/0/0 | 1/0/1 | 1/1/0 | 1/0/1 | 2/0/0 | 2/0/0 | 2/0/0 | 2/0/0 | 1/0/1 | 2/0/0 |
| | Simple Multimodal Problems | w/t/l | | 5/1/1 | 7/0/0 | 6/0/1 | 6/0/1 | 6/0/1 | 4/2/1 | 4/0/3 | 5/0/2 | 5/1/1 | 5/0/2 | 5/1/1 |
| | Hybrid Problems | w/t/l | | 7/1/2 | 10/0/0 | 4/4/2 | 6/2/2 | 8/1/1 | 8/2/0 | 8/1/1 | 9/0/1 | 9/0/1 | 8/1/1 | 9/1/0 |
| | Composition Problems | w/t/l | | 6/2/2 | 9/1/0 | 7/1/2 | 6/1/3 | 7/2/1 | 6/3/1 | 5/3/2 | 8/1/1 | 5/3/2 | 6/2/2 | 6/2/2 |
| | Overall | w/t/l | | 19/4/6 | 28/1/0 | 18/5/6 | 19/4/6 | 22/3/4 | 20/7/2 | 19/4/6 | 24/1/4 | 21/4/4 | 20/3/6 | 22/4/3 |
| | Overall | Rank | 3.21 | 4.45 | 10.69 | 5.90 | 4.93 | 7.34 | 6.72 | 6.93 | 7.48 | 7.83 | 5.03 | 7.48 |
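The "w/t/l" counts and average ranks reported in the tables above can be reproduced with a short script. The sketch below is illustrative only, not the authors' code: it applies a two-sided Wilcoxon rank-sum test per function (a 0.05 significance level is assumed here) and computes Friedman-style average ranks from per-function mean errors; all function names are hypothetical.

```python
import math
import numpy as np

def _ranks(x):
    """Tie-averaged ranks (1 = smallest value), like scipy's rankdata."""
    x = np.asarray(x, float)
    order = x.argsort()
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):       # tied values share the mean of their positions
        m = x == v
        ranks[m] = ranks[m].mean()
    return ranks

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the usual normal approximation."""
    n1, n2 = len(a), len(b)
    r1 = _ranks(np.concatenate([a, b]))[:n1].sum()   # rank sum of sample a
    z = (r1 - n1 * (n1 + n2 + 1) / 2.0) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def wtl(alg_runs, rival_runs, alpha=0.05):
    """Count wins/ties/losses of one algorithm against a rival across functions.

    Each argument is a list with one array of final errors per benchmark
    function (one entry per independent run). A 'win' ('+') means the first
    algorithm is significantly better (smaller mean error); '=' means no
    significant difference at level alpha.
    """
    w = t = l = 0
    for a, b in zip(alg_runs, rival_runs):
        if rank_sum_p(a, b) >= alpha:
            t += 1
        elif np.mean(a) < np.mean(b):
            w += 1
        else:
            l += 1
    return w, t, l

def friedman_avg_ranks(mean_errors):
    """Average rank per algorithm (rows: functions, cols: algorithms);
    smaller error gives a smaller (better) rank."""
    return np.array([_ranks(row) for row in mean_errors]).mean(axis=0)
```

For example, ranking three algorithms over two functions where the first algorithm is always best yields an average rank of 1.0 for it and 2.5 for each of the others.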
Table 7. Comparison results among DEGGDE variants with different mutation schemes on the 50D CEC’2017 benchmark set.
(F1 and F3 are unimodal functions, F4–F10 simple multimodal functions, F11–F20 hybrid functions, and F21–F30 composition functions. p-values from the Wilcoxon rank-sum test compare each variant with DE/current-to-duelite/1; “+”, “=”, and “−” denote that DE/current-to-duelite/1 performs significantly better, equivalently, and significantly worse, respectively, so the first data column is blank in p-value rows.)

| F | Quality | DE/current-to-duelite/1 | DE/current-to-Pelite/1 | DE/current-to-Aelite/1 | DE/current-to-duelite/1-WD | DE/current-to-duelite/1-PD | DE/current-to-duelite/1-PWD |
|---|---|---|---|---|---|---|---|
| F1 | Mean | 1.40×10^−14 | 1.40×10^−14 | 6.83×10^−1 | 1.37×10^−11 | 1.40×10^−14 | 1.49×10^−14 |
| | p-value | | NaN = | 1.21×10^−12 + | 1.21×10^−12 + | 1.00×10^0 = | 1.61×10^−1 = |
| F3 | Mean | 8.74×10^−10 | 1.78×10^−5 | 5.37×10^3 | 9.94×10^−3 | 4.34×10^−10 | 2.00×10^−7 |
| | p-value | | 3.02×10^−11 + | 3.02×10^−11 + | 3.02×10^−11 + | 1.26×10^−1 = | 3.34×10^−11 + |
| F1–F3 | w/t/l | | 1/1/0 | 2/0/0 | 2/0/0 | 0/2/0 | 1/1/0 |
| F4 | Mean | 4.77×10^1 | 4.99×10^1 | 6.12×10^1 | 6.30×10^1 | 4.69×10^1 | 6.44×10^1 |
| | p-value | | 7.63×10^−1 = | 1.30×10^−4 + | 1.23×10^−1 = | 4.37×10^−1 = | 3.59×10^−2 + |
| F5 | Mean | 2.34×10^1 | 3.18×10^1 | 3.59×10^1 | 2.69×10^1 | 4.17×10^1 | 4.15×10^1 |
| | p-value | | 8.13×10^−3 + | 5.68×10^−9 + | 1.66×10^−2 + | 8.08×10^−10 + | 5.59×10^−7 + |
| F6 | Mean | 3.12×10^−6 | 9.36×10^−6 | 1.65×10^−4 | 2.71×10^−5 | 2.24×10^−6 | 8.74×10^−6 |
| | p-value | | 4.34×10^−5 + | 3.01×10^−11 + | 2.71×10^−8 + | 9.41×10^−1 = | 1.27×10^−3 + |
| F7 | Mean | 7.95×10^1 | 1.01×10^2 | 7.42×10^1 | 7.46×10^1 | 1.08×10^2 | 1.12×10^2 |
| | p-value | | 2.53×10^−4 + | 1.49×10^−1 = | 8.24×10^−2 = | 4.18×10^−9 + | 6.72×10^−10 + |
| F8 | Mean | 2.31×10^1 | 3.83×10^1 | 3.46×10^1 | 3.01×10^1 | 4.20×10^1 | 5.15×10^1 |
| | p-value | | 2.43×10^−5 + | 4.09×10^−7 + | 1.26×10^−3 + | 4.25×10^−7 + | 2.60×10^−10 + |
| F9 | Mean | 9.88×10^−14 | 6.63×10^−2 | 2.10×10^−1 | 8.74×10^−2 | 2.98×10^−3 | 6.01×10^−2 |
| | p-value | | 5.36×10^−4 + | 2.86×10^−10 + | 3.62×10^−4 + | 7.65×10^−1 = | 1.23×10^−2 + |
| F10 | Mean | 6.51×10^3 | 6.97×10^3 | 5.26×10^3 | 6.55×10^3 | 6.84×10^3 | 6.75×10^3 |
| | p-value | | 4.86×10^−3 + | 1.25×10^−5 − | 1.00×10^0 = | 1.30×10^−1 = | 2.06×10^−1 = |
| F4–F10 | w/t/l | | 6/1/0 | 5/1/1 | 4/3/0 | 3/4/0 | 6/1/0 |
| F11 | Mean | 4.46×10^1 | 5.38×10^1 | 7.32×10^1 | 5.74×10^1 | 3.95×10^1 | 4.48×10^1 |
| | p-value | | 1.90×10^−3 + | 1.95×10^−10 + | 4.41×10^−6 + | 3.58×10^−3 − | 8.24×10^−1 = |
| F12 | Mean | 2.71×10^3 | 2.69×10^3 | 2.19×10^4 | 3.86×10^3 | 2.34×10^3 | 2.26×10^3 |
| | p-value | | 2.28×10^−1 = | 6.07×10^−11 + | 2.51×10^−2 + | 1.86×10^−1 = | 1.33×10^−1 = |
| F13 | Mean | 1.81×10^2 | 2.00×10^2 | 6.80×10^2 | 1.92×10^2 | 1.89×10^2 | 2.01×10^2 |
| | p-value | | 3.39×10^−2 + | 3.01×10^−7 + | 6.79×10^−2 = | 4.38×10^−1 = | 3.64×10^−2 + |
| F14 | Mean | 4.16×10^1 | 6.14×10^1 | 6.11×10^1 | 5.66×10^1 | 5.26×10^1 | 5.45×10^1 |
| | p-value | | 4.80×10^−7 + | 7.09×10^−8 + | 2.49×10^−6 + | 1.02×10^−5 + | 1.61×10^−6 + |
| F15 | Mean | 4.48×10^1 | 4.56×10^1 | 1.01×10^2 | 4.76×10^1 | 4.02×10^1 | 4.20×10^1 |
| | p-value | | 7.17×10^−1 = | 5.46×10^−9 + | 4.29×10^−1 = | 1.71×10^−1 = | 4.20×10^−1 = |
| F16 | Mean | 4.26×10^2 | 4.83×10^2 | 6.08×10^2 | 5.18×10^2 | 5.68×10^2 | 5.38×10^2 |
| | p-value | | 2.52×10^−1 = | 2.27×10^−3 + | 7.73×10^−2 = | 6.38×10^−3 + | 2.81×10^−2 + |
| F17 | Mean | 2.39×10^2 | 2.93×10^2 | 3.03×10^2 | 3.14×10^2 | 3.96×10^2 | 3.81×10^2 |
| | p-value | | 3.64×10^−2 + | 5.01×10^−2 = | 4.86×10^−3 + | 1.75×10^−5 + | 3.01×10^−4 + |
| F18 | Mean | 1.12×10^2 | 6.26×10^1 | 9.21×10^1 | 6.97×10^1 | 5.67×10^1 | 5.53×10^1 |
| | p-value | | 1.11×10^−4 − | 1.71×10^−1 = | 1.06×10^−3 − | 1.09×10^−5 − | 1.34×10^−5 − |
| F19 | Mean | 3.11×10^1 | 3.59×10^1 | 4.19×10^1 | 3.52×10^1 | 3.16×10^1 | 3.84×10^1 |
| | p-value | | 7.98×10^−2 = | 2.25×10^−4 + | 1.15×10^−1 = | 7.62×10^−1 = | 6.97×10^−3 + |
| F20 | Mean | 1.47×10^2 | 1.58×10^2 | 1.90×10^2 | 1.49×10^2 | 3.16×10^2 | 2.80×10^2 |
| | p-value | | 3.11×10^−1 = | 3.63×10^−1 = | 6.31×10^−1 = | 5.19×10^−7 + | 2.96×10^−5 + |
| F11–F20 | w/t/l | | 4/5/1 | 7/3/0 | 4/5/1 | 4/4/2 | 6/3/1 |
| F21 | Mean | 2.24×10^2 | 2.27×10^2 | 2.32×10^2 | 2.25×10^2 | 2.38×10^2 | 2.43×10^2 |
| | p-value | | 7.73×10^−1 = | 6.36×10^−5 + | 6.84×10^−1 = | 6.53×10^−7 + | 3.01×10^−7 + |
| F22 | Mean | 6.01×10^3 | 6.33×10^3 | 5.44×10^3 | 6.09×10^3 | 6.59×10^3 | 6.61×10^3 |
| | p-value | | 3.45×10^−2 + | 8.31×10^−3 − | 2.01×10^−1 = | 5.01×10^−2 = | 1.27×10^−2 + |
| F23 | Mean | 4.48×10^2 | 4.48×10^2 | 4.57×10^2 | 4.49×10^2 | 4.56×10^2 | 4.59×10^2 |
| | p-value | | 8.19×10^−1 = | 3.85×10^−3 + | 4.12×10^−1 = | 9.47×10^−3 + | 2.92×10^−2 + |
| F24 | Mean | 5.18×10^2 | 5.18×10^2 | 5.27×10^2 | 5.21×10^2 | 5.19×10^2 | 5.20×10^2 |
| | p-value | | 2.12×10^−1 = | 1.86×10^−3 + | 1.41×10^−1 = | 9.59×10^−1 = | 8.19×10^−1 = |
| F25 | Mean | 5.07×10^2 | 5.35×10^2 | 5.35×10^2 | 5.36×10^2 | 5.17×10^2 | 5.37×10^2 |
| | p-value | | 7.96×10^−3 + | 4.22×10^−3 + | 3.76×10^−3 + | 8.10×10^−2 = | 1.03×10^−2 + |
| F26 | Mean | 1.24×10^3 | 1.27×10^3 | 1.37×10^3 | 1.28×10^3 | 1.18×10^3 | 1.29×10^3 |
| | p-value | | 1.15×10^−1 = | 4.74×10^−6 + | 7.73×10^−2 = | 5.08×10^−3 − | 9.47×10^−3 + |
| F27 | Mean | 5.26×10^2 | 5.26×10^2 | 5.39×10^2 | 5.31×10^2 | 5.29×10^2 | 5.26×10^2 |
| | p-value | | 8.42×10^−1 = | 6.10×10^−3 + | 9.33×10^−2 = | 8.42×10^−1 = | 7.51×10^−1 = |
| F28 | Mean | 4.98×10^2 | 5.05×10^2 | 4.97×10^2 | 4.99×10^2 | 5.02×10^2 | 5.05×10^2 |
| | p-value | | 1.09×10^−2 + | 3.15×10^−2 − | 2.64×10^−1 = | 8.08×10^−1 = | 5.71×10^−1 = |
| F29 | Mean | 3.48×10^2 | 3.78×10^2 | 3.90×10^2 | 3.56×10^2 | 3.82×10^2 | 3.99×10^2 |
| | p-value | | 9.63×10^−2 = | 3.16×10^−5 + | 2.58×10^−1 = | 1.03×10^−2 + | 1.91×10^−2 + |
| F30 | Mean | 6.19×10^5 | 6.16×10^5 | 6.01×10^5 | 6.08×10^5 | 6.04×10^5 | 6.02×10^5 |
| | p-value | | 5.11×10^−1 = | 8.65×10^−1 = | 8.53×10^−1 = | 1.81×10^−1 = | 4.73×10^−1 = |
| F21–F30 | w/t/l | | 3/7/0 | 7/1/2 | 1/9/0 | 3/6/1 | 6/4/0 |
| Overall | w/t/l | | 14/14/1 | 21/5/3 | 11/17/1 | 10/16/3 | 19/9/1 |
| | Rank | 2.00 | 3.52 | 4.41 | 3.72 | 3.17 | 4.17 |
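The "DE/current-to-duelite/1" scheme compared above guides mutation with elites drawn from both the current population and an archive of obsolete parents, as described in the abstract. The sketch below only illustrates that idea and is not the paper's exact formulation: the elite fraction `p`, the way archive elites are selected, and the added random difference term are all assumptions made for illustration; consult the paper for the precise update rule and parameter settings.

```python
import numpy as np

def current_to_duelite_1(pop, fitness, archive, F, p=0.1, rng=None):
    """Hypothetical sketch of dual-elite-guided mutation (minimization).

    For each individual, a guiding exemplar is drawn uniformly from the union
    of the top-p elites of the current population and elites taken from the
    archive of obsolete parents (here simply its first entries, an assumption),
    then a 'current-to-elite'-style update with a random difference vector is
    applied. Details differ from DEGGDE's actual equation.
    """
    if rng is None:
        rng = np.random.default_rng()
    NP, D = pop.shape
    k = max(1, int(p * NP))
    pop_elites = pop[np.argsort(fitness)[:k]]           # best of current population
    arc_elites = archive[:k] if len(archive) else pop_elites
    elites = np.vstack([pop_elites, arc_elites])        # dual elite groups
    mutants = np.empty_like(pop)
    for i in range(NP):
        e = elites[rng.integers(len(elites))]           # random elite exemplar
        r1, r2 = rng.choice(NP, size=2, replace=False)  # random difference pair
        mutants[i] = pop[i] + F * (e - pop[i]) + F * (pop[r1] - pop[r2])
    return mutants
```

Pooling both elite groups enlarges the set of guiding exemplars, which is the diversity-promoting mechanism the abstract attributes to DEGGDE.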
Wang, T.-T.; Yang, Q.; Gao, X.-D. Dual Elite Groups-Guided Differential Evolution for Global Numerical Optimization. Mathematics 2023, 11, 3681. https://doi.org/10.3390/math11173681