Mathematics
  • Article
  • Open Access

26 August 2023

Dual Elite Groups-Guided Differential Evolution for Global Numerical Optimization

School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
Author to whom correspondence should be addressed.
This article belongs to the Section E1: Mathematics and Computer Science

Abstract

Differential evolution (DE) has shown remarkable performance in solving continuous optimization problems. However, its optimization performance still encounters limitations when confronted with complex optimization problems with numerous local regions. To address this issue, this paper proposes a dual elite groups-guided mutation strategy called “DE/current-to-duelite/1” for DE. As a result, a novel DE variant called DEGGDE is developed. Instead of only using the elites in the current population to direct the evolution of all individuals, DEGGDE additionally maintains an archive to store the obsolete parent individuals and then assembles the elites in both the current population and the archive to guide the mutation of all individuals. In this way, the diversity of the guiding exemplars in the mutation is expectedly promoted. With the guidance of these diverse elites, a good balance between exploration of the complex search space and exploitation of the found promising regions is hopefully maintained in DEGGDE. As a result, DEGGDE expectedly achieves good optimization performance in solving complex optimization problems. A large number of experiments are conducted on the CEC’2017 benchmark set with three different dimension sizes to demonstrate the effectiveness of DEGGDE. Experimental results have confirmed that DEGGDE performs competitively with or even significantly better than eleven state-of-the-art and representative DE variants.

1. Introduction

Differential evolution (DE) is a population-based random search algorithm proposed by Storn and Price [1]. It is primarily used to solve numerical optimization problems [1,2,3,4]. On account of its ease of implementation and strong global search ability, DE has attracted a great deal of attention from researchers. As a result, many effective DE variants have been developed in recent years [2,3,5,6,7]. In particular, DE variants have won competitions on single objective optimization held in the IEEE Congress on Evolutionary Computation (CEC) in recent years [8,9,10]. Thanks to its good optimization performance when solving optimization problems, DE has also been used for a wide range of academic and engineering applications [5,6,7].
To be specific, a population of individuals is maintained in DE to search the solution space iteratively [1,2,3]. In the population, each individual represents a feasible solution to the optimization problem and undergoes three main processes [4,6,7], namely, the mutation process, the crossover process, and the selection process, to update its position during the iteration. Among the three processes, the mutation process has been demonstrated as the most critical part in DE [11,12,13,14,15] because it has significant influence on the search diversity and the search convergence of DE. As a result, researchers have devoted a great deal of attention to designing effective mutation strategies to improve the optimization performance of DE. Consequently, numerous DE variants with novel mutation schemes have been developed in recent years [16,17,18,19,20,21].
In general, the mutation operation is usually realized by shifting the base individual by the difference vectors between individuals in the population [19,22,23,24,25]. In essence, the key to designing effective mutation strategies lies in the selection of individuals participating in the mutation, such that the mutation diversity of the population remains high and, at the same time, the mutated individuals are likely to approach optimal areas fast [11,12,15,16,20]. To this end, researchers have developed a number of individual selection approaches, especially for choosing guiding exemplars to direct the mutation of individuals [14,15,16,17,18]. Among these selection strategies, fitness values of individuals are commonly used to choose guiding exemplars to direct the evolution of the population [16,17,18,19,20]. To enhance the mutation diversity, researchers have also designed many other measures to select promising individuals to mutate the population, for example, cosine similarity [26], novelty of individuals [23], fitness and diversity contribution of individuals [27], and consecutive unsuccessful updates of individuals [25].
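As background, the mutation–crossover–selection cycle discussed above can be sketched in a few lines using the classic “DE/rand/1/bin” scheme. This is a minimal illustrative implementation for a minimization problem, not the code of any specific variant cited above; all function and variable names are ours:

```python
import numpy as np

def de_rand_1_generation(pop, fitness, func, F=0.5, CR=0.9, rng=None):
    """One generation of classic DE/rand/1/bin (minimization sketch).

    pop:     (PS, D) array of candidate solutions
    fitness: (PS,) array of objective values
    func:    objective function mapping a D-vector to a scalar
    """
    if rng is None:
        rng = np.random.default_rng()
    ps, d = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(ps):
        # Mutation: shift a random base vector by a scaled difference vector.
        r1, r2, r3 = rng.choice([j for j in range(ps) if j != i],
                                size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # Binomial crossover: take each gene from v with probability CR;
        # jrand guarantees at least one gene comes from the mutant vector.
        jrand = rng.integers(d)
        mask = rng.random(d) < CR
        mask[jrand] = True
        u = np.where(mask, v, pop[i])
        # Selection: the trial vector survives only if it is no worse.
        fu = func(u)
        if fu <= fitness[i]:
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit
```

Iterating this step on a simple sphere function steadily drives the best fitness downward, since the greedy selection never accepts a worse trial vector.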
The above DE variants commonly utilize only one single mutation strategy to mutate all individuals. Nevertheless, different mutation strategies usually have different advantages in solving different optimization problems. Therefore, some researchers have proposed employing multiple mutation strategies to evolve the population, such that the strengths of different mutation schemes can be integrated to mutate the population effectively [28,29,30,31,32]. In a broad sense, the hybridization of multiple mutation strategies is classified into two major methods, namely, population-level hybridization [28,29,31,32,33,34] and individual-level hybridization [30,35,36,37,38]. In the former category of hybrid DE variants [31,33,34], in each generation, a mutation strategy is selected from the mutation strategy pool and then all individuals share this strategy to mutate. In contrast, in the latter category of hybrid DE variants [12,36,38], at each iteration, for each individual, a mutation scheme is selected from the mutation strategy pool to update the individual.
In addition to the mutation operation, the optimization performance of DE is also significantly influenced by the control parameters associated with the mutation operation and the crossover operation, that is, the scaling factor, F, in the mutation and the crossover probability, CR, in the crossover [35,39,40,41,42,43,44,45,46]. It has been experimentally verified that on the one hand, the settings of these two key parameters are usually different for the same DE in different optimization problems; on the other hand, even in the same optimization problem, the settings of these two parameters are also usually different for DE with different mutation or crossover schemes [35,41,43,44]. This indicates that DE is very sensitive to these two parameters when solving optimization problems.
To alleviate this issue, researchers have been dedicated to devising adaptive parameter adjustment strategies for DE. As a result, many adaptive parameter adaptation methods have been developed to change the settings of F and CR dynamically during the evolution process [35,39,40,41,42,43,44,45,46]. Broadly speaking, existing parameter adaptation strategies can be divided into two main categories, namely, (1) dynamic parameter adjustment strategies [39,40,41,42] and (2) self-adaptive parameter adjustment strategies [35,43,44,45,46]. In general, the former type of parameter adaptation schemes mainly adjust the parameter values dynamically without considering the evolutionary states of the population or individuals [39,40,41,42], while the latter type of parameter adaptation methods usually take the evolutionary information or states of individuals into consideration to adaptively adjust the values of the two parameters [35,43,44,45,46].
Though many novel mutation strategies and effective parameter adaptation methods have been designed to help DE improve its optimization performance, DE still cannot find satisfactory solutions to complex optimization problems with numerous local regions because of the stagnation of the population [2,3,5,6,7]. However, with the rapid development of big data and the Internet of Things, optimization problems are becoming more and more complex due to the increasing number of variables, especially increasingly correlated ones [47,48,49,50,51]. As a result, there is an increasing demand for effective strategies to further improve the optimization ability of DE in solving complicated optimization problems.
To this end, this paper devises a dual elite groups-guided mutation strategy called “DE/current-to-duelite/1” to effectively mutate all individuals in the population. As a result, a novel DE variant named dual elite groups-guided DE (DEGGDE) is developed for solving complex optimization problems. The contributions of this paper and the key components of DEGGDE are summarized as follows:
(1) A dual elite groups-guided mutation strategy named “DE/current-to-duelite/1” is proposed to mutate individuals with high diversity. Instead of only using the elites in the current population as the guiding exemplars to direct the mutation of individuals, as in existing studies [46], DEGGDE maintains an archive to store obsolete parent individuals and then assembles the elites in both the current population and the archive to guide the mutation of all individuals in the population. In this way, the diversity of the guiding exemplars used to mutate individuals is expectedly enlarged, and thus the mutation diversity of individuals is enhanced. This is beneficial for individuals to explore the solution space in diverse directions and aids in avoiding falling into local regions.
(2) A directional random difference vector is further proposed to accelerate the approaching speed to optimal areas. Instead of directly using two random individuals from the population to generate a completely random difference vector for mutating each individual, as in most existing studies [45,46,52,53,54], DEGGDE first compares the two randomly selected individuals from the population and then uses the better one as the first exemplar and the worse one as the second exemplar to generate a directional random difference vector for each individual to mutate. In this manner, the mutated individual is expected to move toward optimal areas fast.
With the above techniques, DEGGDE is anticipated to keep a good balance between search diversity and search convergence to explore and exploit the solution space to find the global optimum. To validate its effectiveness, experiments are conducted on the commonly used CEC’2017 benchmark set [55] with different dimension sizes by comparing DEGGDE with eleven state-of-the-art DE methods. In addition, the effectiveness of the above two components in DEGGDE is also studied by conducting experiments on the CEC’2017 benchmark set.
The rest of the paper is organized as follows. In Section 2, the basic working principle of DE is first reviewed and then DE variants closely related to this paper are reviewed. After that, the devised DEGGDE is elaborated in detail in Section 3. Subsequently, a large number of experiments are carried out to verify the effectiveness of DEGGDE by comparing with eleven state-of-the-art DE variants in Section 4. Finally, conclusions are given in Section 5.

3. Proposed DEGGDE

Although many advanced DE methods have been proposed in recent years, the performance of DE is still not as satisfactory as anticipated when it encounters complex optimization problems with many local basins. Meanwhile, in the era of big data and the Internet of Things, variables are likely to correlate with each other and, consequently, optimization problems become more and more complicated [55,71,72]. Confronted with such problems, designing effective mutation schemes is the key to improving the optimization effectiveness of DE. To this end, this paper develops a simple, yet effective mutation scheme called “DE/current-to-duelite/1”, which is elucidated in detail in the following.

3.1. DE/Current-to-Duelite/1

In the research on evolutionary algorithms, it has been widely recognized that elite individuals usually preserve more valuable evolutionary information to guide the population to approach optimal areas [51,73,74]. Therefore, elites are widely utilized to direct the update of individuals. Nevertheless, most existing studies only make use of elites in the current population to assist the evolution of the population. Such utilization may limit the diversity of the guiding exemplars, with the result that individuals in the population are guided in limited directions.
To enhance the diversity of the guiding exemplars, we turn to the historical evolutionary information of the population to seek additional useful elite individuals. Along this line, we design a novel mutation scheme called “DE/current-to-duelite/1” for DE by assembling historical elite individuals to guide the mutation of the population.
Specifically, given that the population size is PS, we maintain an archive (denoted as A) of the same size (namely, PS) to store the obsolete parent individuals. Then, unlike existing mutation strategies [11,12,13,14,15], which only utilize elite individuals in the current population as the candidate guiding exemplars to direct the mutation of the population, we utilize the elite individuals in both the current population and the archive to direct the mutation of the population. To be concrete, we select the best ⌈p1 × PS⌉ individuals in the current population (p1 is the percentage of the whole population chosen as elite individuals) to form the elite set, denoted as PE. Then, we further select the best ⌈p2 × PS⌉ individuals in the archive (p2 is the percentage of the entire archive chosen as elite individuals) to form the elite set, denoted as AE. As a result, PE ∪ AE constitutes the set of candidate guiding exemplars for mutating individuals in the population.
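The assembly of the two elite groups can be sketched as follows. This is a hedged Python illustration under the assumption of minimization; the function and variable names are ours, not from the paper:

```python
import numpy as np

def assemble_dual_elites(pop, pop_fit, archive, arch_fit, p1):
    """Form PE (best ceil(p1*PS) of the population) and AE (best
    ceil(p2*PS) of the archive, with p2 = p1/2), then return PE ∪ AE
    as the pool of candidate guiding exemplars. Minimization assumed."""
    ps = len(pop)
    n_pe = int(np.ceil(p1 * ps))            # elites taken from P
    n_ae = int(np.ceil((p1 / 2.0) * ps))    # elites taken from A
    pe = pop[np.argsort(pop_fit)[:n_pe]]
    if len(archive) == 0:
        return pe                           # archive still empty early on
    ae = archive[np.argsort(arch_fit)[:n_ae]]
    return np.vstack([pe, ae])              # PE ∪ AE
```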
In particular, each individual in the population is mutated by the devised “DE/current-to-duelite/1” as follows:
vi,G = xi,G + Fi × (xpbest_ri − xi,G) + Fi × (xr1,G − xr2,G)  (17)
where xpbest_ri is an elite individual randomly selected from PE ∪ AE to direct the mutation of xi,G; xi,G is the ith individual in the Gth generation; vi,G is the generated mutant vector; Fi is the scaling factor associated with xi,G; and xr1,G and xr2,G are two individuals randomly selected from P ∪ A by the uniform distribution, where P and A denote the current population and the archive, respectively. It is worth noting that r1 ≠ r2 ≠ i.
Another thing that should be noted is that, instead of using a completely random difference vector as most existing mutation mechanisms do, “DE/current-to-duelite/1” utilizes a directional random difference vector. To be concrete, “DE/current-to-duelite/1” first randomly chooses two different individuals from P ∪ A and then lets the better of the two selected individuals act as xr1,G while the worse one acts as xr2,G. In this way, the difference vector between the two random individuals is directed from the worse individual to the better individual.
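The directional ordering of the two random individuals can be sketched as follows (an illustrative Python fragment assuming minimization; names are ours):

```python
import numpy as np

def directional_pair(union_fit, i, rng=None):
    """Select two distinct random indices r1, r2 (both != i) over P ∪ A
    and order them so that f(x_r1) <= f(x_r2); the difference vector
    x_r1 - x_r2 then points from the worse toward the better individual."""
    if rng is None:
        rng = np.random.default_rng()
    candidates = [j for j in range(len(union_fit)) if j != i]
    r1, r2 = rng.choice(candidates, size=2, replace=False)
    if union_fit[r1] > union_fit[r2]:
        r1, r2 = r2, r1              # make x_r1 the better of the two
    return int(r1), int(r2)
```

The mutant vector is then built as in Equation (17), with x_r1 − x_r2 as the directional random difference vector.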
Observing Equation (17), we find that the devised “DE/current-to-duelite/1” helps DEGGDE strike a good balance between exploration of the problem space and exploitation of the found promising areas for the following reasons:
(1) The elite individual set PE ∪ AE provides diverse candidate guiding exemplars for individuals to mutate. On the one hand, the introduction of the elite individuals in the archive, which stores historical evolutionary information, affords additional leading exemplars for individuals to mutate. This largely improves the mutation diversity of individuals and thus greatly benefits individuals in traversing the problem space in diverse directions. On the other hand, the elite individuals in both PE and AE are expectedly better than most individuals in the current population, P. Therefore, it is hoped that most individuals are guided to move towards optimal regions. From this perspective, it is highly possible that individuals can approach optimal areas fast.
(2) The selection of two random individuals, xr1,G and xr2,G, from P ∪ A further enhances the mutation diversity of individuals. Specifically, instead of selecting the two random individuals only from P, “DE/current-to-duelite/1” selects both of them randomly from P ∪ A. This largely promotes the diversity of the random difference vectors of all individuals. Consequently, the search diversity of DEGGDE is improved greatly, which is highly beneficial for escaping from local areas.
(3) The directional random difference vector affords great assistance for individuals to move towards optimal regions fast. Instead of using completely random difference vectors for all individuals, “DE/current-to-duelite/1” utilizes a directional random difference vector for each individual by letting the better one between the two selected random individuals act as the first exemplar and the worse one act as the second exemplar. In this way, individuals are expectedly mutated in promising directions. This is of considerable benefit for individuals to find promising areas fast.
With the above mechanisms, DEGGDE is anticipated to strike a very promising balance between exploration and exploitation to traverse the problem space sufficiently. Aside from the above techniques for mutation, two further key techniques deserve careful attention: the updating of the archive A and the settings of p1 and p2.
In the initial stage, we set archive A as empty. During evolution, the obsolete parents are sequentially added into A. When A reaches its fixed size, PS, we randomly select an individual from A and compare it with the obsolete parent to be added. If the obsolete parent is better than the randomly selected individual, it replaces that individual; otherwise, it is discarded. In this way, on the one hand, high diversity can be maintained in A; on the other hand, the quality of individuals in A can also be ensured. This updating mechanism of A also potentially assists DEGGDE in maintaining a good compromise between search diversity and search convergence.
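This archive updating rule can be sketched as follows (illustrative Python, assuming minimization; names are ours). The random choice of the member to challenge preserves diversity, while the conditional replacement preserves quality:

```python
import random

def update_archive(archive, arch_fit, parent, parent_fit, ps, rng=None):
    """Add an obsolete parent to archive A. While |A| < PS the parent is
    appended directly; once A is full, it replaces one randomly chosen
    archive member only if it is better (minimization)."""
    if rng is None:
        rng = random.Random()
    if len(archive) < ps:
        archive.append(parent)
        arch_fit.append(parent_fit)
        return
    k = rng.randrange(ps)                # random victim keeps A diverse
    if parent_fit < arch_fit[k]:         # conditional swap keeps A good
        archive[k] = parent
        arch_fit[k] = parent_fit
```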
For the settings of p1 and p2, p2 determines the number of the elite individuals in A, while p1 determines the number of the elite individuals in P. Since A stores the obsolete parents, theoretically, it is highly possible that the overall quality of individuals in A is worse than that of individuals in P. Thus, the number of elite individuals in A should be smaller than that in P. For convenience, in this paper, we set p2 = p1/2, leading to the number of the elite individuals in A being only half of that in P.
Subsequently, upon deeper analysis of p1, we find that a large p1 results in a large number of elite individuals participating in the mutation of the population. This improves the search diversity of DEGGDE. Nevertheless, a too-large p1 may lead to excessively high search diversity, which may harm the convergence of DEGGDE. In contrast, a small p1 leads to only a few elite individuals taking part in the mutation of the population. This is beneficial for fast convergence of the population to optimal areas, but it may result in low mutation diversity and, in turn, low search diversity, which may cause stagnation of the population. From the above analysis, we find that a fixed p1 is not suitable for DEGGDE to achieve satisfactory performance.
For ease and convenience, we propose a dynamic strategy for setting p1 to alleviate the above predicament. To be concrete, at each generation, we randomly sample a value for p1 from [0.1, 0.2] by the uniform distribution. In this way, on the one hand, the sensitivity of DEGGDE to p1 could be alleviated; on the other hand, a dynamic balance between exploration and exploitation can be maintained by DEGGDE. This endows DEGGDE with a good capability of searching the problem space diversely while at the same time refining the quality of the found optimal solutions.
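The per-generation resampling of p1 and the derived p2 can be sketched in two lines (illustrative Python; the function name is ours):

```python
import random

def sample_elite_fractions(rng=None):
    """Per-generation sampling of the elite fractions:
    p1 ~ U[0.1, 0.2] and p2 = p1 / 2, as described above."""
    if rng is None:
        rng = random.Random()
    p1 = rng.uniform(0.1, 0.2)
    return p1, p1 / 2.0
```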
It should be mentioned that the range [0.1, 0.2] is used here because the devised “DE/current-to-duelite/1” can actually be seen as an extension of the mutation strategy “DE/current-to-pbest/1” in JADE [46]. Therefore, we directly borrow the recommended setting range of p in JADE to randomly sample a value for p1 in this paper.

3.2. Difference between “DE/Current-to-Duelite/1” and Existing Similar Mutation Strategies

In the literature, many existing mutation strategies also utilize elite individuals to direct the mutation of the population [46,57,73,74], such as “DE/current-to-best/1” [57] and “DE/current-to-pbest/1” [46]. Compared with these existing mutation schemes, “DE/current-to-duelite/1” differs from them mainly in the following aspects:
(1) The most significant difference between “DE/current-to-duelite/1” and existing elite-guided mutation strategies lies in the dual elite groups-guided mutation mechanism. Most existing studies only use the elite individuals in the current population to direct the mutation of all individuals. In contrast, in “DE/current-to-duelite/1”, the elite individuals in the current population and those in the archive are used together to direct the evolution of all individuals. Therefore, compared with most existing mutation schemes, “DE/current-to-duelite/1” possesses more candidate guiding exemplars and thus preserves higher mutation diversity. Such utilization of historical evolutionary information, by focusing on the best historical individuals, strengthens DEGGDE in locating optimal regions fast.
(2) The second significant difference between “DE/current-to-duelite/1” and existing mutation strategies lies in the selection of the two random individuals and the construction of the random difference vector. Most existing mutation strategies choose the two random individuals from the population, whereas “DE/current-to-duelite/1” selects them from P ∪ A. This selection mechanism affords a high diversity of random difference vectors and thus is very beneficial for improving the search diversity of DEGGDE, which may greatly help the population avoid being trapped in local zones. In addition, unlike most existing mutation schemes that use completely random difference vectors, “DE/current-to-duelite/1” utilizes a directional random difference vector by taking the better of the two random individuals as the first exemplar and the worse one as the second exemplar. By this means, it is expected that the directional random difference vectors provide positive and promising directions for individuals to mutate. As a result, slightly faster convergence of the population to optimal regions could be achieved.

3.3. The Complete DEGGDE

In addition to “DE/current-to-duelite/1”, this paper employs the binomial crossover scheme to combine the target vector and the corresponding mutant vector to generate the final offspring. With the aim of reducing the sensitivity to the control parameters F and CR, DEGGDE adopts the widely used individual-level parameter adaptation strategy in SHADE [45] to generate the settings of the two parameters for each individual. However, a small modification is made to the CR settings. To be concrete, this paper first generates a set of CR values for all individuals and then sorts them from the smallest to the largest. Subsequently, in the crossover operation, we assign smaller CR values to better individuals. In this way, the generated offspring of the better individuals preserve smaller differences from their parents; in contrast, the generated offspring of the worse individuals preserve larger differences from their parents. As a result, better individuals can focus on exploiting the optimal areas they have located while worse individuals concentrate on exploring the problem space to find more promising areas. Integrating all these techniques, the pseudocode of the complete DEGGDE is outlined in Algorithm 2.
Algorithm 2: The pseudocode of DEGGDE
Input: Population size PS, Maximum fitness evaluations FESmax;
1: Generate PS individuals randomly and calculate their fitness; fes = PS;  //initialization
2: Set all elements in MF and MCR to 0.5; A = ∅; Set the generation number G = 0;
3: While (fes ≤ FESmax) do  //main loop
4:   SF = ∅; SCR = ∅;
5:   p1 = uniform(0.1, 0.2); p2 = p1 / 2;
6:   Sort individuals in P and those in A from the best to the worst, respectively;
7:   Select the top best ⌈PS × p1⌉ individuals from P to form PE and the top best ⌈PS × p2⌉ individuals from A to form AE;
8:   Randomly select a pair of values (MF,rand and MCR,rand) from MF and MCR to generate F and CR for all individuals;
9:   Sort CR from the smallest to the largest;
10:  For i = 1:PS do
11:    Obtain the associated CRi according to the ranking of xi,G;
12:    Randomly select an elite individual xpbest_ri from PE ∪ AE;
13:    Randomly select two different individuals xr1,G and xr2,G (r1 ≠ r2 ≠ i) from P ∪ A, and if f(xr1,G) > f(xr2,G), swap xr1,G and xr2,G;
14:    Generate the mutant vector by vi,G = xi,G + Fi × (xpbest_ri − xi,G) + Fi × (xr1,G − xr2,G);  //mutation
15:    Obtain the trial vector ui,G by Equation (7) and calculate its fitness; fes++;
16:    If f(ui,G) < f(xi,G) then  //selection
17:      If |A| < PS then
18:        Directly add xi,G into A;
19:      Else
20:        Randomly select an individual from A to compare with xi,G; if xi,G is superior, replace the selected individual with xi,G;
21:      End If
22:      xi,G+1 = ui,G; store Fi into SF and CRi into SCR;
23:    Else
24:      xi,G+1 = xi,G;
25:    End If
26:  End For
27:  Update MF and MCR based on Equations (11)–(15); G++;
28: End While
29: Obtain the global best solution gbest and its fitness f(gbest);
Output: f(gbest) and gbest
In the initial phase, as shown in lines 1–2, PS individuals are randomly initialized and then evaluated. Along with the initialization of the population, the related archives for the parameter adaptation and mutation are also initialized. Subsequently, the algorithm enters the main loop. At first, as shown in line 5, the value of p1 is randomly generated according to the uniform distribution, and then the value of p2 is computed. After that, as shown in line 6, individuals in the current population, P, and those in the archive, A, are sorted from the best to the worst, respectively. Then, as shown in line 7, the elite sets PE and AE are formed by selecting the best ⌈p1 × PS⌉ individuals in P and the best ⌈p2 × PS⌉ individuals in A, respectively. Subsequently, a set of F values and a set of CR values are randomly generated based on the Cauchy distribution and the Gaussian distribution, respectively (line 8). After that, the set of CR values is sorted from the smallest to the largest (line 9). After the above preparations, the evolution of all individuals begins, as shown in lines 10–27. To update each individual, the associated CR value is first obtained according to its fitness ranking, as shown in line 11. Then, one elite individual is randomly selected from PE ∪ AE (line 12). Next, two random individuals are selected from P ∪ A (line 13). After that, the devised mutation scheme is performed to mutate the individual (line 14), following which the binomial crossover operation generates the final offspring, as shown in line 15. Afterwards, the offspring is evaluated and the records in the relevant archives are updated along with the selection operation (lines 16–26). The above operations are performed iteratively until the termination condition is satisfied, which is commonly the exhaustion of the maximum number of fitness evaluations. Finally, before the termination of the algorithm, the found optimal solution and its fitness value are output.
Without considering the time for fitness evaluations, at each generation, it takes O(PS × logPS) to sort individuals in P and another O(PS × logPS) to sort individuals in A. To generate two sets of values for F and CR, respectively, it requires O(PS) twice. To sort the set of CR values, it requires O(PS × logPS). Then, O(PS × D) is required to generate the offspring for all individuals. In the selection process, O(PS × D) is required twice for updating the population, P, and the archive, A. Finally, O(PS) is needed for parameter adaptation. As a whole, the overall time complexity of DEGGDE is O(PS × logPS + PS × D), which does not impose a serious burden as compared with the time complexity of the classical DE.
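The rank-based CR assignment described in this section can be sketched as follows (illustrative Python; minimization assumed, names are ours). Better individuals receive smaller CR values, so they inherit more from their parents and exploit, while worse individuals change more and explore:

```python
import numpy as np

def assign_cr_by_rank(fitness, cr_samples):
    """Sort the sampled CR values ascending and give the smallest CR to
    the best (lowest-fitness) individual, the next smallest to the
    second best, and so on."""
    order = np.argsort(fitness)                          # best first
    cr = np.empty_like(np.asarray(cr_samples, dtype=float))
    cr[order] = np.sort(cr_samples)                      # best gets smallest CR
    return cr
```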

4. Experiments

This section comprehensively verifies the effectiveness of DEGGDE in tackling optimization problems through a large number of experiments. To start with, the experimental setup is introduced, including the benchmark problem set used, the state-of-the-art algorithms selected for comparison, and their parameter settings. Next, the performance of DEGGDE is extensively compared with that of the selected methods on the benchmark set. Finally, an in-depth investigation of DEGGDE is carried out by validating the usefulness of the designed mutation scheme and the devised parameter adaptation strategy.

4.1. Experimental Setup

In the experiments, we use the CEC’2017 benchmark set, which has been widely used to test the optimization performance of evolutionary algorithms [75,76,77,78], to demonstrate the effectiveness and efficiency of DEGGDE. This set consists of 29 numerical optimization problems, which are classified into four different types. Specifically, F1 and F3 are unimodal optimization problems, F4–F10 are multimodal optimization problems, F11–F20 are hybrid optimization problems, and F21–F30 are composition optimization problems. Brief information about these 29 problems is listed in Table 1, and more detailed information can be found in [55]. To comprehensively verify the effectiveness of DEGGDE, we set three different dimensionalities, namely, 30D, 50D, and 100D, for all the CEC’2017 benchmark problems.
Table 1. Summarized information of the CEC’2017 problems.
To make comparisons with DEGGDE, eleven representative and advanced DE variants are selected, namely, SHADE [45], GPDE [79], DiDE [80], SEDE [81], FADE [32], FDDE [27], TPDE [69], NSHADE [53], CUSDE [25], PFIDE [70], and EJADE [54].
Similar to the devised DEGGDE, SHADE, NSHADE, DiDE, FDDE, and CUSDE adopt a single mutation framework to mutate individuals. Specifically, SHADE utilizes “DE/current-to-pbest/1” to mutate individuals with an adaptive strategy for F and CR [45]. Such an adaptive strategy is also used in the devised DEGGDE to mutate individuals. Therefore, SHADE is selected as a baseline method to compare with DEGGDE. NSHADE is the latest improvement on SHADE with a new parameter adaptation scheme based on the nearest spatial neighborhood information [53]. DiDE employs an extension of “DE/current-to-pbest/1” to mutate individuals by introducing an external archive based on depth information. For the parameter adaptation of F and CR, it divides the population into several groups and separately adjusts the settings of F and CR for individuals in different groups adaptively [80]. FDDE utilizes a new mutation operator to mutate the population by selecting parental individuals based on their probabilities, which are computed on the basis of both fitness rankings and diversity rankings of individuals [27]. CUSDE uses the number of consecutive unsuccessful updates to calculate the selection probability of each individual and then chooses the base individual and the terminal individuals by the roulette wheel selection method. Inferior individuals with a large number of consecutive unsuccessful updates are removed during the evolution [25].
Unlike the above methods, GPDE, SEDE, FADE, TPDE, and EJADE assemble multiple mutation strategies to mutate different individuals distinctly. In particular, GPDE assembles a newly designed Gaussian distribution-based mutation operator and “DE/rand-worst/1” to mutate each individual adaptively. Then, a cosine function is employed to sample F and a Gaussian distribution is used to sample CR dynamically in GPDE [79]. SEDE hybridizes three different mutation strategies to mutate individuals adaptively by controlling the proportion of individuals where each mutation strategy is performed to get the associated mutation vector [81]. FADE first divides the population into several sub-populations and then adaptively selects a mutation strategy from three mutation candidates for each individual to mutate based on its fitness [32]. TPDE first separates the population into three sub-populations according to the newly designed zonal-constraint stepped division mechanism and then employs three elite-guided mutation operations to mutate individuals in the three sub-populations, respectively. Then, it leverages the Gaussian distribution, the Cauchy distribution, and a triangular wave function to generate F and CR for individuals in the three sub-populations, respectively [69]. EJADE assembles two sets of integrated crossover and mutation operations to create promising offspring [54].
Different from the above ten compared DE methods, PFIDE develops a parameter adaptation framework for DE. Specifically, it exploits population feature information, such as the sum of the standard deviations of all individuals along each dimension and the standard deviation of the fitness values of all individuals, to assign to the current population those historical successful F and CR values whose associated population features are most similar [70].
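The two population features mentioned above are straightforward to compute; the following small sketch (the function name is ours, not PFIDE’s) illustrates them:

```python
import numpy as np

def population_features(pop, fitness):
    """Two simple population features: the summed per-dimension spread
    of the individuals and the spread of their fitness values."""
    pop = np.asarray(pop, dtype=float)          # shape (PS, D)
    fitness = np.asarray(fitness, dtype=float)  # shape (PS,)
    spatial_spread = float(np.sum(np.std(pop, axis=0)))  # sum of per-dimension std devs
    fitness_spread = float(np.std(fitness))              # std dev of fitness values
    return spatial_spread, fitness_spread
```

A shrinking spatial spread with a near-zero fitness spread typically signals a converging population, which is the kind of state information such adaptation schemes react to.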
To ensure fair comparisons, for the compared algorithms that were also tested on the CEC’2017 benchmark set with the three dimensionality settings, we directly adopt the population size settings recommended in the associated papers. For the compared methods that were not tested on the CEC’2017 benchmark set, we fine-tuned their population size, PS, on the CEC’2017 problems with the three dimensionality settings. The specific population size settings of all algorithms for solving the CEC’2017 problems with the three dimensionality settings are listed in Table 2.
Table 2. The optimal PS settings of all algorithms for the 30D, 50D, and 100D CEC’2017 sets.
Unless otherwise stated, the maximum number of fitness evaluations for all algorithms is set to 10,000 × D, with D denoting the dimension size. For fairness, we run each algorithm 30 times independently on each benchmark problem and then use the mean and the standard deviation of the results to evaluate the optimization performance of each algorithm on each problem. To assess statistical significance, the Friedman test at the significance level α = 0.05 is performed to obtain the average rank of each algorithm with respect to the overall performance on one whole benchmark set. In addition, the Wilcoxon rank sum test at the significance level α = 0.05 is performed to compare the optimization result of DEGGDE with that of each compared method on each benchmark problem. The symbols “+”, “=”, and “−” at the upper right corner of each p-value in the tables indicate that DEGGDE obtains significantly better, equivalent, and significantly worse performance than the compared method on the corresponding problem, respectively. “w/t/l” in the tables counts the numbers of “+”, “=”, and “−”, respectively.
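Both statistical procedures are available in standard libraries; a minimal illustration with scipy.stats on synthetic run results (the data below are artificial and only for demonstration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical best fitness values from 30 independent runs of three algorithms
runs_a = rng.normal(10.0, 1.0, 30)
runs_b = rng.normal(12.0, 1.0, 30)
runs_c = rng.normal(12.5, 1.5, 30)

# Wilcoxon rank sum test between two algorithms on one problem (alpha = 0.05)
stat, p = stats.ranksums(runs_a, runs_b)
a_significantly_better = (p < 0.05) and (np.mean(runs_a) < np.mean(runs_b))

# Friedman test over repeated measurements of all algorithms; in the paper,
# each "measurement" is one benchmark problem rather than one run
chi2, p_friedman = stats.friedmanchisquare(runs_a, runs_b, runs_c)
```

The rank-sum test decides per-problem wins, ties, and losses (“w/t/l”), while the Friedman test produces the overall average ranks reported in the summary tables.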
Finally, it deserves mentioning that DEGGDE is implemented in Python and all experiments are executed on the same PC with an Intel(R) Core(TM) i5-3470 3.20 GHz CPU (four cores), 4 GB of RAM, and the 64-bit Ubuntu 12.04 LTS operating system.

4.2. Comparisons between DEGGDE and State-of-the-Art DE Methods on the CEC’2017 Set

This section compares DEGGDE with the eleven selected state-of-the-art DE variants on the CEC’2017 benchmark set with three dimensionality settings, namely, 30D, 50D, and 100D. With the population size settings of all algorithms listed in Table 2, the detailed optimization performance of all algorithms and their comparisons on the 30D, 50D, and 100D CEC’2017 benchmark sets are shown in Table 3, Table 4 and Table 5, respectively. For better comparisons, we summarize the statistical comparison results in terms of “w/t/l” and the average rank “Rank” in Table 6. Figure 1, Figure 2 and Figure 3 show the convergence behaviors of all algorithms on the 30D, 50D, and 100D CEC’2017 benchmark sets, respectively. In addition, following the guideline in [82], we also present the box-plot diagrams of the global best fitness values obtained by DEGGDE and the eleven compared DE variants in the thirty independent runs on the 30D, 50D, and 100D CEC’2017 benchmark sets in Figure 4, Figure 5 and Figure 6, respectively. In these figures, the rectangular box shows the distribution of the global best fitness values obtained in the thirty runs by quartiles. The horizontal line inside the box represents the median of the thirty values, while the upper and lower boundaries of the box represent the upper and lower quartiles, respectively. The lines extending above and below the box reach the maximum and minimum non-outlier values, respectively, and the symbol “+” marks the outliers among the thirty values.
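For reference, the statistics drawn in such a box plot can be computed directly from the thirty run results; the sketch below follows the common 1.5 × IQR outlier convention (Matplotlib’s default), which matches the “+” markers in the figures:

```python
import numpy as np

def boxplot_stats(values, whis=1.5):
    """Compute the quantities behind a box plot: quartiles, whisker
    positions, and outliers (points beyond whis * IQR, drawn as '+')."""
    v = np.sort(np.asarray(values, dtype=float))
    q1, med, q3 = np.percentile(v, [25, 50, 75])
    iqr = q3 - q1
    lo_limit, hi_limit = q1 - whis * iqr, q3 + whis * iqr
    inliers = v[(v >= lo_limit) & (v <= hi_limit)]
    outliers = v[(v < lo_limit) | (v > hi_limit)]
    # whiskers extend to the most extreme non-outlier values
    return {"q1": q1, "median": med, "q3": q3,
            "whisker_lo": float(inliers.min()),
            "whisker_hi": float(inliers.max()),
            "outliers": outliers.tolist()}
```

A narrower box centered at a lower value therefore indicates both better and more stable run-to-run performance, which is how the comparisons below read the diagrams.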
Table 3. Comparison results between DEGGDE and the eleven state-of-the-art DE variants on the 30D CEC’2017 benchmark functions.
Table 4. Comparison results between DEGGDE and the eleven state-of-the-art DE variants on the 50D CEC’2017 benchmark functions.
Table 5. Comparison results between DEGGDE and the eleven state-of-the-art DE variants on the 100D CEC’2017 benchmark functions.
Table 6. Summarized comparison results between DEGGDE and the eleven compared algorithms with respect to “w/t/l” derived from the Wilcoxon rank sum test and the average rank obtained from the Friedman test on the 30D, 50D, and 100D CEC’2017 benchmark sets.
Figure 1. Convergence behaviors of DEGGDE and the eleven compared DE variants on the 30D CEC’2017 set.
Figure 2. Convergence behaviors of DEGGDE and the eleven compared DE variants on the 50D CEC’2017 set.
Figure 3. Convergence behaviors of DEGGDE and the eleven compared DE variants on the 100D CEC’2017 set.
Figure 4. Box-plot diagrams of the global best fitness values obtained by DEGGDE and the eleven compared DE variants in the thirty independent runs on the 30D CEC’2017 set.
Figure 5. Box-plot diagrams of the global best fitness values obtained by DEGGDE and the eleven compared DE variants in the thirty independent runs on the 50D CEC’2017 set.
Figure 6. Box-plot diagrams of the global best fitness values obtained by DEGGDE and the eleven compared DE variants in the thirty independent runs on the 100D CEC’2017 set.
Making deep and close observations on the above figures and tables, we gain the following findings:
(1) From the Friedman test results, on the one hand, DEGGDE consistently ranks in first place for the 30D, 50D, and 100D CEC’2017 benchmark sets. This demonstrates that DEGGDE obtains the best overall optimization performance among all algorithms. Furthermore, the rank value of DEGGDE is much smaller than those of the eleven compared methods on the three CEC’2017 benchmark sets. This reveals that DEGGDE shows significant superiority to the eleven compared methods in solving optimization problems.
(2) From the “w/t/l” results derived from the Wilcoxon rank sum test, we find that with increasing dimensionality, DEGGDE achieves increasingly better optimization performance than the eleven compared methods. Specifically, on the 30D CEC’2017 benchmark set, DEGGDE obtains significantly better performance than the eleven compared algorithms on at least ten problems. In particular, it significantly outperforms six compared methods (namely, SHADE, GPDE, FDDE, TPDE, NSHADE, and EJADE) on more than twenty problems. On the 50D CEC’2017 benchmark set, DEGGDE achieves significantly better performance than the eleven compared methods on at least fifteen problems. In particular, it shows significant superiority over nine compared methods (namely, SHADE, GPDE, SEDE, FADE, FDDE, TPDE, NSHADE, PFIDE, and EJADE) on more than twenty problems. On the 100D CEC’2017 set, DEGGDE is significantly superior to the eleven compared methods on more than eighteen problems. In particular, it performs significantly better than seven compared methods (namely, GPDE, FADE, FDDE, NSHADE, CUSDE, PFIDE, and EJADE) on more than twenty problems.
(3) With respect to the optimization performance on different types of problems, we find that DEGGDE is particularly good at solving complex optimization problems, such as multimodal, hybrid, and composition problems. Specifically, (a) on the two unimodal problems (F1 and F3), DEGGDE shows significantly better performance on both problems than seven, eight, and six compared methods on the 30D, 50D, and 100D CEC’2017 benchmark sets, respectively. (b) On the seven multimodal problems (F4–F10), DEGGDE achieves increasingly better performance as the dimensionality increases. On the 30D multimodal problems, DEGGDE presents significant dominance over the eleven compared methods on at least three problems and is inferior to them on at most three problems. In particular, it is significantly better than five compared methods on more than five problems. On the 50D multimodal problems, DEGGDE significantly outperforms the eleven compared methods on four problems and loses to them on at most two problems. In particular, it is significantly superior to five compared methods on more than five problems. On the 100D multimodal problems, DEGGDE performs significantly better than the eleven compared methods on more than four problems and shows inferiority to them on at most three problems. In particular, it is significantly better than nine compared methods on more than five problems. (c) On the ten hybrid problems (F11–F20), DEGGDE also shows increasingly better performance with increasing dimensionality. Specifically, DEGGDE performs significantly better on more than seven problems than seven, nine, and nine compared methods on the 30D, 50D, and 100D CEC’2017 benchmark sets, respectively. (d) On the ten composition problems (F21–F30), DEGGDE also consistently achieves significantly better performance than most of the eleven compared algorithms. Specifically, it shows significant dominance on more than six problems over ten, nine, and nine compared methods on the 30D, 50D, and 100D CEC’2017 benchmark sets, respectively.
(4) In terms of the convergence behavior comparisons between DEGGDE and the eleven compared methods shown in Figure 1, Figure 2 and Figure 3, we find that DEGGDE achieves much faster convergence along with much better solutions than most of the eleven compared methods. Specifically, on the 30D CEC’2017 benchmark set, DEGGDE obtains the best performance in terms of both solution quality and convergence speed among all methods on eight problems (F1, F5, F7–F8, F12, F16, F20–F21). On the 50D CEC’2017 benchmark set, DEGGDE achieves the fastest convergence and the best solutions among all algorithms on eleven problems (F5, F7–F8, F16–F17, F20–F21, F23–F24, F26, F29). On the 100D CEC’2017 benchmark set, DEGGDE performs the best with respect to both convergence speed and solution quality among all approaches on thirteen problems (F1, F5, F7–F8, F13, F16–F17, F20–F21, F23–F24, F26, F29).
(5) With respect to the box-plot diagrams shown in Figure 4, Figure 5 and Figure 6, we find that DEGGDE achieves much more stable performance than the eleven compared methods, and that the overall quality of the global optima found by DEGGDE in the thirty independent runs is much better than that of most of the eleven compared methods. Specifically, on the 30D CEC’2017 benchmark set, the distribution of the global best fitness values obtained by DEGGDE is more tightly centered around a better optimum than those of the eleven compared methods on eighteen problems (F1, F3, F5–F9, F11–F16, F18–F19, F21–F22, F28), which indicates its superior and stable performance. On the 50D CEC’2017 benchmark set, the same holds on twenty problems (F1, F3, F5–F9, F12, F14–F21, F23–F24, F26, F29), and on the 100D CEC’2017 benchmark set, on eighteen problems (F1, F3, F5–F8, F12–F14, F16–F19, F21, F23–F24, F26, F29).
Overall, the extensive comparisons between DEGGDE and the eleven compared methods demonstrate that DEGGDE attains high effectiveness and efficiency in solving optimization problems and performs very stably. In particular, it is good at solving complex optimization problems, such as multimodal, hybrid, and composition problems.

4.3. Deep Investigation on the Effectiveness of “DE/Current-to-Duelite/1”

In this section, we conduct experiments to verify the effectiveness of the devised “DE/current-to-duelite/1”. To this end, we develop several variants of “DE/current-to-duelite/1”.
First, in the designed mutation scheme, we utilize the elite individuals in both the population and the archive to direct the mutation of individuals. To demonstrate the effectiveness of this scheme, we only utilize the elite individuals in the population to guide the mutation of the population, developing a variant of “DE/current-to-duelite/1”, which is denoted as “DE/current-to-Pelite/1”. Similarly, we also only utilize the elite individuals in the archive to direct the mutation of the population, developing another variant of “DE/current-to-duelite/1”, which is denoted as “DE/current-to-Aelite/1”.
Second, for the directional random difference vector in the devised mutation scheme, unlike existing studies, we randomly choose two individuals from P ∪ A for each individual to be mutated and then take the better one as x_{r1,G} and the worse one as x_{r2,G}. In this way, the generated mutation vector is expected to move towards the optimal areas quickly. To demonstrate the effectiveness of this way of generating the random difference vector, we also develop several variants of “DE/current-to-duelite/1”. First, we remove the directional scheme, i.e., the two randomly selected individuals are assigned to x_{r1,G} and x_{r2,G} without comparing their fitness, and thus develop a variant of “DE/current-to-duelite/1”, which is denoted as “DE/current-to-duelite/1-WD”. Subsequently, instead of choosing both random individuals from P ∪ A, we randomly choose one individual from P and another from P ∪ A. Without the directional scheme between the two randomly selected individuals, we develop a variant denoted as “DE/current-to-duelite/1-PWD”; with the directional scheme, we develop another variant denoted as “DE/current-to-duelite/1-PD”.
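Under these definitions, the selection of the directional difference pair can be sketched as follows (minimization is assumed, and the elite-guidance part of “DE/current-to-duelite/1” is simplified here into a generic elite exemplar):

```python
import numpy as np

rng = np.random.default_rng(2)

def directional_pair(candidates, fitness):
    """Pick two distinct random individuals from P ∪ A and order them so
    that the better one serves as x_{r1,G} and the worse as x_{r2,G}."""
    i, j = rng.choice(len(candidates), size=2, replace=False)
    if fitness[i] <= fitness[j]:          # minimization: lower is better
        return candidates[i], candidates[j]
    return candidates[j], candidates[i]

def mutate(x, elite, candidates, fitness, F=0.5):
    """A simplified elite-guided mutant with a directional random
    difference vector: v = x + F*(elite - x) + F*(x_r1 - x_r2)."""
    x_r1, x_r2 = directional_pair(candidates, fitness)
    return x + F * (elite - x) + F * (x_r1 - x_r2)
```

Because x_{r1,G} is never worse than x_{r2,G}, the difference x_{r1,G} − x_{r2,G} points from a worse region towards a better one, which is the directional bias the ablation variants above remove or weaken.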
After the above preparation, we conduct experiments on the 50D CEC’2017 benchmark set to compare the original “DE/current-to-duelite/1” with the five developed variants. Table 7 presents the comparison results. In this table, the Friedman test is performed to obtain the average rank of each variant of “DE/current-to-duelite/1”, and the Wilcoxon rank sum test is performed to compare the original “DE/current-to-duelite/1” with each of the developed versions on each benchmark problem. According to the results of this test, “w/t/l” in this table counts the numbers of problems where “DE/current-to-duelite/1” is significantly better than, equivalent with, and significantly worse than the corresponding developed version, respectively. From this table, we gain the following findings:
Table 7. Comparison results among DEGGDE variants with different mutation schemes on the 50D CEC’2017 benchmark set.
(1) Regarding the Friedman test, DEGGDE with “DE/current-to-duelite/1” ranks first among all versions of DEGGDE. This means that “DE/current-to-duelite/1” helps DEGGDE achieve the best overall performance on the 50D CEC’2017 benchmark set. Additionally, its rank value is much smaller than those of the other versions, which reveals that “DE/current-to-duelite/1” shows significant superiority over them.
(2) From the perspective of “w/t/l”, we find that DEGGDE with “DE/current-to-duelite/1” performs significantly better than DEGGDE with the other versions on more than ten problems and is inferior to them on no more than three problems. In particular, compared with “DE/current-to-Pelite/1” and “DE/current-to-Aelite/1”, “DE/current-to-duelite/1” achieves significantly better performance on fourteen and twenty-one problems, respectively. This proves the great effectiveness of using both the elite individuals in the population and those in the archive to direct the mutation of the population. Compared with “DE/current-to-duelite/1-WD”, “DE/current-to-duelite/1” significantly outperforms it on eleven problems. This illustrates the effectiveness of the directional random difference vector in the devised mutation scheme, i.e., taking the better one of the two randomly selected individuals as x_{r1,G} and the worse one as x_{r2,G}. Furthermore, “DE/current-to-duelite/1” significantly outperforms “DE/current-to-duelite/1-PD” and “DE/current-to-duelite/1-PWD” on ten and nineteen problems, respectively. This demonstrates the great effectiveness of selecting both random individuals from P ∪ A.
The above experimental results demonstrate the great effectiveness of the devised mutation scheme, namely “DE/current-to-duelite/1”. In particular, the dual elite groups, namely the elite individuals in the current population and those in the archive, provide diverse yet promising guiding exemplars to lead the mutation of individuals, which not only enhances diversity but also accelerates convergence. Together with the directional random difference vector, the devised mutation scheme provides directional guidance for individuals to approach optimal areas quickly.

5. Conclusions

This paper has devised a dual elite groups-guided differential evolution (DEGGDE) by designing a novel mutation scheme, called “DE/current-to-duelite/1”. To make full use of the historical evolutionary information, this mutation scheme maintains an archive to store the obsolete parent individuals and then utilizes the elite individuals in both the current population and the archive as the candidate guiding exemplars to direct the mutation of the population. In this way, the diversity of the leading exemplars could be largely enhanced, which is beneficial for individuals to search the problem space diversely. In addition, to accelerate the convergence of DEGGDE, instead of generating the random difference vector in the mutation scheme completely randomly, we propose to generate a directional random difference vector for the devised mutation scheme by first randomly selecting two individuals from the combination of the current population and the archive and then taking the better one as the first exemplar and the worse one as the second exemplar. In this way, the directional random difference vector provides a promising direction for the mutated individual to move towards optimal areas. With the collaboration between the two main techniques, DEGGDE is expected to strike a good balance between exploration and exploitation and thus is anticipated to achieve promising performance in solving optimization problems.
Extensive experiments have been executed on the CEC’2017 benchmark set with three different dimensionality settings (namely 30D, 50D, and 100D). The experimental results have shown that DEGGDE obtains significantly better performance than eleven state-of-the-art DE variants in solving the 30D, 50D, and 100D CEC’2017 problems. In particular, DEGGDE shows good capability in coping with complex optimization problems, such as multimodal, hybrid, and composition optimization problems. Additionally, the effectiveness of the devised mutation scheme and of the dynamic adjustment of the number of elite individuals has also been verified.
In the current version of DEGGDE, we dynamically adjust the number of elite individuals without considering the evolutionary state of the population. In the future, to further improve the optimization performance of DEGGDE, we aim to devise an adaptive adjustment strategy for this parameter by taking into account the evolutionary information of individuals and the evolutionary state of the population. Another future direction is to employ DEGGDE to solve practical optimization problems in both academic and engineering applications, such as path planning of unmanned aerial vehicles [83], automatic machine learning [84], control parameter optimization in wireless power transfer systems [85], expensive optimization [86], and optimization problems relevant to social networks [87].

Author Contributions

T.-T.W.: Implementation, formal analysis, and writing—original draft preparation. Q.Y.: Conceptualization, supervision, methodology, formal analysis, and writing—original draft preparation. X.-D.G.: Methodology, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded in part by the National Natural Science Foundation of China under Grant 62006124 and in part by the Natural Science Foundation of Jiangsu Province under Project BK20200811.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  2. Zhang, J.; Chen, D.; Yang, Q.; Wang, Y.; Liu, D.; Jeon, S.-W.; Zhang, J. Proximity ranking-based multimodal differential evolution. Swarm Evol. Comput. 2023, 78, 101277. [Google Scholar] [CrossRef]
  3. Liu, D.; He, H.; Yang, Q.; Wang, Y.; Jeon, S.-W.; Zhang, J. Function value ranking aware differential evolution for global numerical optimization. Swarm Evol. Comput. 2023, 78, 101282. [Google Scholar] [CrossRef]
  4. Yang, Q.; Yan, J.-Q.; Gao, X.-D.; Xu, D.-D.; Lu, Z.-Y.; Zhang, J. Random neighbor elite guided differential evolution for global numerical optimization. Inf. Sci. 2022, 607, 1408–1438. [Google Scholar] [CrossRef]
  5. Zhou, S.; Xing, L.; Zheng, X.; Du, N.; Wang, L.; Zhang, Q. A self-adaptive differential evolution algorithm for scheduling a single batch-processing machine with arbitrary job sizes and release times. IEEE Trans. Cybern. 2021, 51, 1430–1442. [Google Scholar] [CrossRef]
  6. Xu, Y.; Pi, D.; Wu, Z.; Chen, J.; Zio, E. Hybrid discrete differential evolution and deep q-network for multimission selective maintenance. IEEE Trans. Reliab. 2022, 71, 1501–1512. [Google Scholar] [CrossRef]
  7. Liu, H.; Chen, Q.; Pan, N.; Sun, Y.; An, Y.; Pan, D. UAV stocktaking task-planning for industrial warehouses based on the improved hybrid differential evolution algorithm. IEEE Trans. Ind. Inform. 2022, 18, 582–591. [Google Scholar] [CrossRef]
  8. Brest, J.; Maučec, M.S.; Bošković, B. Single objective real-parameter optimization: Algorithm jSO. In Proceedings of the IEEE Congress on Evolutionary Computation, Donostia, Spain, 5–8 June 2017; pp. 1311–1318. [Google Scholar]
  9. Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood for solving CEC2017 benchmark problems. In Proceedings of the IEEE Congress on Evolutionary Computation, Donostia, Spain, 5–8 June 2017; pp. 372–379. [Google Scholar]
  10. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the IEEE Congress on Evolutionary Computation, Beijing, China, 6–11 July 2014; pp. 1658–1665. [Google Scholar]
  11. Liao, J.; Cai, Y.; Wang, T.; Tian, H.; Chen, Y. Cellular direction information based differential evolution for numerical optimization: An empirical study. Soft Comput. 2016, 20, 2801–2827. [Google Scholar] [CrossRef]
  12. Mohamed, A.W. An improved differential evolution algorithm with triangular mutation for global numerical optimization. Comput. Ind. Eng. 2015, 85, 359–375. [Google Scholar] [CrossRef]
  13. Opara, K.; Arabas, J. Comparison of mutation strategies in differential evolution—A probabilistic perspective. Swarm Evol. Comput. 2018, 39, 53–69. [Google Scholar] [CrossRef]
  14. Sun, G.; Cai, Y.; Wang, T.; Tian, H.; Wang, C.; Chen, Y. Differential evolution with individual-dependent topology adaptation. Inf. Sci. 2018, 450, 1–38. [Google Scholar] [CrossRef]
  15. Tian, M.; Gao, X.; Yan, X. Performance-driven adaptive differential evolution with neighborhood topology for numerical optimization. Knowl.-Based Syst. 2020, 188, 105008. [Google Scholar] [CrossRef]
  16. Ghosh, A.; Das, S.; Das, A.K.; Gao, L. Reusing the past difference vectors in differential evolution—A simple but significant improvement. IEEE Trans. Cybern. 2020, 50, 4821–4834. [Google Scholar] [CrossRef]
  17. Gong, W.; Cai, Z. Differential evolution with ranking-based mutation operators. IEEE Trans. Cybern. 2013, 43, 2066–2081. [Google Scholar] [CrossRef]
  18. Wang, C.; Gao, J. High-dimensional waveform inversion with cooperative coevolutionary differential evolution algorithm. IEEE Geosci. Remote Sens. Lett. 2012, 9, 297–301. [Google Scholar] [CrossRef]
  19. Wang, J.; Liao, J.; Zhou, Y.; Cai, Y. Differential evolution enhanced with multiobjective sorting-based mutation operators. IEEE Trans. Cybern. 2014, 44, 2792–2805. [Google Scholar] [CrossRef]
  20. Wang, K.; Gong, W.; Liao, Z.; Wang, L. Hybrid niching-based differential evolution with two archives for nonlinear equation system. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 7469–7481. [Google Scholar] [CrossRef]
  21. Ji, W.X.; Yang, Q.; Gao, X.D. Gaussian sampling guided differential evolution based on elites for global optimization. IEEE Access 2023, 11, 80915–80944. [Google Scholar] [CrossRef]
  22. Hamza, N.M.; Essam, D.L.; Sarker, R.A. Constraint consensus mutation-based differential evolution for constrained optimization. IEEE Trans. Evol. Comput. 2016, 20, 447–459. [Google Scholar] [CrossRef]
  23. Xia, X.; Tong, L.; Zhang, Y.; Xu, X.; Yang, H.; Gui, L.; Li, Y.; Li, K. NFDDE: A novelty-hybrid-fitness driving differential evolution algorithm. Inf. Sci. 2021, 579, 33–54. [Google Scholar] [CrossRef]
  24. Zhao, X.; Xu, G.; Rui, L.; Liu, D.; Liu, H.; Yuan, J. A failure remember-driven self-adaptive differential evolution with top-bottom strategy. Swarm Evol. Comput. 2019, 45, 1–14. [Google Scholar] [CrossRef]
  25. Zou, L.; Pan, Z.; Gao, Z.; Gao, J. Improving the search accuracy of differential evolution by using the number of consecutive unsuccessful updates. Knowl.-Based Syst. 2022, 250, 109005. [Google Scholar] [CrossRef]
  26. Cai, Y.; Wu, D.; Zhou, Y.; Fu, S.; Tian, H.; Du, Y. Self-organizing neighborhood-based differential evolution for global optimization. Swarm Evol. Comput. 2020, 56, 100699. [Google Scholar] [CrossRef]
  27. Cheng, J.; Pan, Z.; Liang, H.; Gao, Z.; Gao, J. Differential evolution algorithm with fitness and diversity ranking-based mutation operator. Swarm Evol. Comput. 2021, 61, 100816. [Google Scholar] [CrossRef]
  28. Cai, Y.; Wang, J.; Yin, J. Learning-enhanced differential evolution for numerical optimization. Soft Comput. 2012, 16, 303–330. [Google Scholar] [CrossRef]
  29. Gao, Z.; Pan, Z.; Gao, J. Multimutation differential evolution algorithm and its application to seismic inversion. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3626–3636. [Google Scholar] [CrossRef]
  30. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Faris, H. MTDE: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems. Appl. Soft Comput. 2020, 97, 106761. [Google Scholar] [CrossRef]
  31. Tan, Z.; Li, K.; Wang, Y. Differential evolution with adaptive mutation strategy based on fitness landscape analysis. Inf. Sci. 2021, 549, 142–163. [Google Scholar] [CrossRef]
  32. Xia, X.; Gui, L.; Zhang, Y.; Xu, X.; Yu, F.; Wu, H.; Wei, B.; He, G.; Li, Y.; Li, K. A fitness-based adaptive differential evolution algorithm. Inf. Sci. 2021, 549, 116–141. [Google Scholar] [CrossRef]
  33. Piotrowski, A.P. Adaptive memetic differential evolution with global and local neighborhood-based mutation operators. Inf. Sci. 2013, 241, 164–194. [Google Scholar] [CrossRef]
  34. Das, S.; Abraham, A.; Chakraborty, U.K.; Konar, A. Differential evolution using a neighborhood-based mutation operator. IEEE Trans. Evol. Comput. 2009, 13, 526–553. [Google Scholar] [CrossRef]
  35. Liu, X.f.; Zhan, Z.H.; Lin, Y.; Chen, W.N.; Gong, Y.J.; Gu, T.L.; Yuan, H.Q.; Zhang, J. Historical and heuristic-based adaptive differential evolution. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 2623–2635. [Google Scholar] [CrossRef]
  36. Zhou, X.G.; Zhang, G.J. Differential evolution with underestimation-based multimutation strategy. IEEE Trans. Cybern. 2019, 49, 1353–1364. [Google Scholar] [CrossRef] [PubMed]
  37. Das, S.; Mandal, A.; Mukherjee, R. An adaptive differential evolution algorithm for global optimization in dynamic environments. IEEE Trans. Cybern. 2014, 44, 966–978. [Google Scholar] [CrossRef]
  38. Wang, Y.; Liu, Z.-Z.; Li, J.; Li, H.-X.; Yen, G.G. Utilizing cumulative population distribution information in differential evolution. Appl. Soft Comput. 2016, 48, 329–346. [Google Scholar] [CrossRef]
  39. Abbass, H.A. The self-adaptive pareto differential evolution algorithm. In Proceedings of the Congress on Evolutionary Computation, Honolulu, HI, USA, 12–17 May 2002; Volume 1, pp. 831–836. [Google Scholar]
  40. Das, S.; Konar, A.; Chakraborty, U. Two improved differential evolution schemes for faster global search. In Proceedings of the Annual Conference on Genetic and Evolutionary Computation, Washington, DC, USA, 25–29 June 2005; pp. 991–998. [Google Scholar]
  41. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans. Evol. Comput. 2009, 13, 398–417. [Google Scholar] [CrossRef]
  42. Draa, A.; Bouzoubia, S.; Boukhalfa, I. A sinusoidal differential evolution algorithm for numerical optimisation. Appl. Soft Comput. 2015, 27, 99–126. [Google Scholar] [CrossRef]
  43. Tang, L.; Dong, Y.; Liu, J. Differential evolution with an individual-dependent mechanism. IEEE Trans. Evol. Comput. 2015, 19, 560–574. [Google Scholar] [CrossRef]
  44. Zhou, Y.Z.; Yi, W.C.; Gao, L.; Li, X.Y. Adaptive differential evolution with sorting crossover rate for continuous optimization problems. IEEE Trans. Cybern. 2017, 47, 2742–2753. [Google Scholar] [CrossRef]
  45. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for differential evolution. In Proceedings of the IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 71–78. [Google Scholar]
  46. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  47. Cheng, S.; Liu, B.; Shi, Y.; Jin, Y.; Li, B. Evolutionary computation and big data: Key challenges and future directions. In Proceedings of the Data Mining and Big Data, Bali, Indonesia, 25–30 June 2016; pp. 3–14. [Google Scholar]
  48. Yang, Q.; Song, G.W.; Chen, W.N.; Jia, Y.H.; Gao, X.D.; Lu, Z.Y.; Jeon, S.W.; Zhang, J. Random contrastive interaction for particle swarm optimization in high-dimensional environment. IEEE Trans. Evol. Comput. 2023, 1. [Google Scholar] [CrossRef]
  49. Yang, Q.; Chen, W.N.; Gu, T.; Jin, H.; Mao, W.; Zhang, J. An adaptive stochastic dominant learning swarm optimizer for high-dimensional optimization. IEEE Trans. Cybern. 2022, 52, 1960–1976. [Google Scholar] [CrossRef]
  50. Bhattacharya, M.; Islam, R.; Abawajy, J. Evolutionary optimization: A big data perspective. J. Netw. Comput. Appl. 2016, 59, 416–426. [Google Scholar] [CrossRef]
  51. Yang, Q.; Song, G.-W.; Gao, X.-D.; Lu, Z.-Y.; Jeon, S.-W.; Zhang, J. A random elite ensemble learning swarm optimizer for high-dimensional optimization. Complex Intell. Syst. 2023, 1–34. [Google Scholar] [CrossRef]
  52. Price, K.V. An introduction to differential evolution. In New Ideas in Optimization; McGraw-Hill Inc.: New York, NY, USA, 1999. [Google Scholar]
  53. Ghosh, A.; Das, S.; Das, A.K.; Senkerik, R.; Viktorin, A.; Zelinka, I.; Masegosa, A.D. Using spatial neighborhoods for parameter adaptation: An improved success history based differential evolution. Swarm Evol. Comput. 2022, 71, 101057. [Google Scholar] [CrossRef]
  54. Yi, W.; Chen, Y.; Pei, Z.; Lu, J. Adaptive differential evolution with ensembling operators for continuous optimization problems. Swarm Evol. Comput. 2022, 69, 100994. [Google Scholar] [CrossRef]
  55. Awad, N.; Ali, M.; Liang, J.; Qu, B.; Suganthan, P. Problem definitions and evaluation criteria for the cec 2017 special session and competition on single objective bound constrained real-parameter numerical optimization. In Technical Report; Nanyang Technological University Singapore: Singapore, 2016; pp. 1–34. [Google Scholar]
  56. Storn, R.; Price, K. Minimizing the real functions of the icec’96 contest by differential evolution. In Proceedings of the IEEE International Conference on Evolutionary Computation, Nayoya, Japan, 20–22 May 1996; pp. 842–844. [Google Scholar]
  57. Price, K.V.; Storn, R.; Lampinen, J. Differential Evolution: A Practical Approach to Global Optimization; Springer Science & Business Media: Berlin, Germany, 2005. [Google Scholar]
  58. Baatar, N.; Zhang, D.; Koh, C.S. An improved differential evolution algorithm adopting λ -best mutation strategy for global optimization of electromagnetic devices. IEEE Trans. Magn. 2013, 49, 2097–2100. [Google Scholar] [CrossRef]
  59. Chen, S.; He, Q.; Zheng, C.; Sun, L.; Wang, X.; Ma, L.; Cai, Y. Differential evolution based simulated annealing method for vaccination optimization problem. IEEE Trans. Netw. Sci. Eng. 2022, 9, 4403–4415. [Google Scholar] [CrossRef]
  60. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2011, 15, 4–31. [Google Scholar] [CrossRef]
  61. Deng, W.; Xu, J.; Gao, X.Z.; Zhao, H. An enhanced msiqde algorithm with novel multiple strategies for global optimization problems. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 1578–1587. [Google Scholar] [CrossRef]
  62. Eiben, A.E.; Hinterding, R.; Michalewicz, Z. Parameter control in evolutionary algorithms. IEEE Trans. Evol. Comput. 1999, 3, 124–141. [Google Scholar] [CrossRef]
  63. Fan, Q.; Yan, X. Self-adaptive differential evolution algorithm with zoning evolution of control parameters and adaptive mutation strategies. IEEE Trans. Cybern. 2016, 46, 219–232. [Google Scholar] [CrossRef] [PubMed]
  64. Li, H.; Gong, M.; Wang, C.; Miao, Q. Pareto self-paced learning based on differential evolution. IEEE Trans. Cybern. 2021, 51, 4187–4200. [Google Scholar] [CrossRef]
  65. Yang, Q.; Chen, W.N.; Deng, J.D.; Li, Y.; Gu, T.; Zhang, J. A level-based learning swarm optimizer for large-scale optimization. IEEE Trans. Evol. Comput. 2018, 22, 578–594. [Google Scholar] [CrossRef]
  66. Yang, Q.; Zhu, Y.; Gao, X.; Xu, D.; Lu, Z. Elite directed particle swarm optimization with historical information for high-dimensional problems. Mathematics 2022, 10, 1384. [Google Scholar] [CrossRef]
  67. Yang, Q.; Zhang, K.-X.; Gao, X.-D.; Xu, D.-D.; Lu, Z.-Y.; Jeon, S.-W.; Zhang, J. A dimension group-based comprehensive elite learning swarm optimizer for large-scale optimization. Mathematics 2022, 10, 1072. [Google Scholar] [CrossRef]
  68. Li, Y.; Wang, S.; Yang, B. An improved differential evolution algorithm with dual mutation strategies collaboration. Expert Syst. Appl. 2020, 153, 113451. [Google Scholar] [CrossRef]
  69. Deng, L.; Li, C.; Han, R.; Zhang, L.; Qiao, L. Tpde: A tri-population differential evolution based on zonal-constraint stepped division mechanism and multiple adaptive guided mutation strategies. Inf. Sci. 2021, 575, 22–40. [Google Scholar] [CrossRef]
  70. Cao, Z.; Wang, Z.; Fu, Y.; Jia, H.; Tian, F. An adaptive differential evolution framework based on population feature information. Inf. Sci. 2022, 608, 1416–1440. [Google Scholar] [CrossRef]
  71. Yang, Q.; Chen, W.N.; Li, Y.; Chen, C.L.P.; Xu, X.M.; Zhang, J. Multimodal estimation of distribution algorithms. IEEE Trans. Cybern. 2017, 47, 636–650. [Google Scholar] [CrossRef]
  72. Yang, Q.; Li, Y.; Gao, X.; Ma, Y.-Y.; Lu, Z.-Y.; Jeon, S.-W.; Zhang, J. An adaptive covariance scaling estimation of distribution algorithm. Mathematics 2021, 9, 3207. [Google Scholar] [CrossRef]
  73. Sun, J.; Gao, S.; Dai, H.; Cheng, J.; Zhou, M.; Wang, J. Bi-objective elite differential evolution algorithm for multivalued logic networks. IEEE Trans. Cybern. 2020, 50, 233–246. [Google Scholar] [CrossRef]
  74. Zhang, G.; Ma, X.; Wang, L.; Xing, K. Elite archive-assisted adaptive memetic algorithm for a realistic hybrid differentiation flowshop scheduling problem. IEEE Trans. Evol. Comput. 2022, 26, 100–114. [Google Scholar] [CrossRef]
  75. Yang, Q.; Hua, L.K.; Gao, X.; Xu, D.; Lu, Z.; Jeon, S.-W.; Zhang, J. Stochastic cognitive dominance leading particle swarm optimization for multimodal problems. Mathematics 2022, 10, 761. [Google Scholar] [CrossRef]
  76. Yang, Q.; Bian, Y.-W.; Gao, X.; Xu, D.; Lu, Z.; Jeon, S.-W.; Zhang, J. Stochastic triad topology based particle swarm optimization for global numerical optimization. Mathematics 2022, 10, 1032. [Google Scholar] [CrossRef]
  77. Yang, Q.; Guo, X.; Gao, X.; Xu, D.; Lu, Z. Differential elite learning particle swarm optimization for global numerical optimization. Mathematics 2022, 10, 1261. [Google Scholar] [CrossRef]
  78. Yang, Q.; Jing, Y.; Gao, X.; Xu, D.; Lu, Z.; Jeon, S.-W.; Zhang, J. Predominant cognitive learning particle swarm optimization for global numerical optimization. Mathematics 2022, 10, 1620. [Google Scholar] [CrossRef]
  79. Sun, G.; Lan, Y.; Zhao, R. Differential evolution with gaussian mutation and dynamic parameter adjustment. Soft Comput. 2019, 23, 1615–1642. [Google Scholar] [CrossRef]
  80. Meng, Z.; Yang, C.; Li, X.; Chen, Y. Di-de: Depth information-based differential evolution with adaptive parameter control for numerical optimization. IEEE Access 2020, 8, 40809–40827. [Google Scholar] [CrossRef]
  81. Liang, J.; Qiao, K.; Yu, K.; Ge, S.; Qu, B.; Xu, R.; Li, K. Parameters estimation of solar photovoltaic models via a self-adaptive ensemble-based differential evolution. Sol. Energy 2020, 207, 336–346. [Google Scholar] [CrossRef]
  82. Wang, Y.; Gao, S.; Yu, Y.; Cai, Z.; Wang, Z. A gravitational search algorithm with hierarchy and distributed framework. Knowl.-Based Syst. 2021, 218, 106877. [Google Scholar] [CrossRef]
  83. Xiao, T.-L.; Yang, Q.; Gao, X.-D.; Ma, Y.-Y.; Lu, Z.-Y.; Jeon, S.-W.; Zhang, J. Variation encoded large-scale swarm optimizers for path planning of unmanned aerial vehicle. In Proceedings of the Genetic and Evolutionary Computation Conference, Lisbon, Portugal, 15–19 July 2023; pp. 102–110. [Google Scholar]
  84. Lu, Z.; Liang, S.; Yang, Q.; Du, B. Evolving block-based convolutional neural network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5525921. [Google Scholar] [CrossRef]
  85. Gao, X.; Cao, W.; Yang, Q.; Wang, H.; Wang, X.; Jin, G.; Zhang, J. Parameter optimization of control system design for uncertain wireless power transfer systems using modified genetic algorithm. CAAI Trans. Intell. Technol. 2022, 7, 582–593. [Google Scholar] [CrossRef]
  86. Wei, F.F.; Chen, W.N.; Yang, Q.; Deng, J.; Luo, X.N.; Jin, H.; Zhang, J. A classifier-assisted level-based learning swarm optimizer for expensive optimization. IEEE Trans. Evol. Comput. 2021, 25, 219–233. [Google Scholar] [CrossRef]
  87. Chen, W.N.; Tan, D.Z.; Yang, Q.; Gu, T.; Zhang, J. Ant colony optimization for the control of pollutant spreading on social networks. IEEE Trans. Cybern. 2020, 50, 4053–4065. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
