Algorithms
  • Article
  • Open Access

27 October 2025

Differential Evolution with Secondary Mutation Strategies for Long-Term Search

1 School of Computer Science, China University of Geosciences, Wuhan 430078, China
2 College of Marine Science and Technology, China University of Geosciences, Wuhan 430074, China
* Author to whom correspondence should be addressed.

Abstract

For many years, researchers have extensively explored real-parameter single-objective optimization by evolutionary computation. Among the various types of evolutionary algorithms, Differential Evolution (DE) performs outstandingly. Recently, the academic community has begun to concern itself with long-term search. IMODE is a strong DE algorithm for long-term search. The algorithm is based on two primary mutation strategies and one secondary strategy. Within the population, the control ratio of each mutation strategy is determined by its performance. Sequential Quadratic Programming (SQP), an iterative method for continuous optimization, is employed as a local search method on the best individual in the final stage of IMODE with a dynamic probability. Based on the DE algorithm, we propose Differential Evolution with Secondary Mutation Strategies (SMSDE). In the proposed algorithm, more secondary mutation strategies are added in addition to the original one used in IMODE. In each generation, just one of the secondary mutation strategies is activated, based on historical performance, to cooperate with the two primary mutation strategies. In addition, with a dynamic probability, SQP is now called not only on the best individual in the final stage, but also on the oldest individual among the worst ones in each generation. The experimental results demonstrate that SMSDE performs better than a number of state-of-the-art algorithms, including IMODE.

1. Introduction

Real-parameter single-objective optimization aims to locate the optimal decision vector within a multi-dimensional solution space by minimizing or maximizing a specified objective function, treated as a black box, and it can be written as follows:
$$\min_{x \in \mathbb{R}^n} f(x),$$
where $\mathbb{R}^n$ denotes the n-dimensional real space, so that x has n real components. As a fundamental problem of continuous optimization, real-parameter single-objective optimization lies at the heart of addressing more sophisticated optimization tasks, such as multi-objective, niche, dynamic, and constrained optimization problems. Evolutionary algorithms such as Differential Evolution (DE) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) are widely studied for real-parameter single-objective optimization, partly because their operators are specifically designed for searching high-dimensional continuous solution spaces. So far, it has been difficult to find a study of real-parameter single-objective optimization based on other types of evolutionary algorithms, such as genetic algorithms, particle swarm optimization, ant colony optimization, simulated annealing, etc.
For a fair comparison, evolutionary algorithms for real-parameter single-objective optimization are given the same value of the maximum number of function evaluations ($MaxFES$), the termination criterion. In previous years, $MaxFES$ grew only linearly with increasing dimensionality D. Lately, researchers have shown an increasing interest in long-term search, where $MaxFES$ escalates sharply in response to an increase in D because problem-solving difficulty typically scales exponentially with higher dimensions. The difference between the original search setting and the long-term one can be seen in the changing settings of the series of competitions on real-parameter single-objective optimization held at the IEEE Congress on Evolutionary Computation (CEC). From the CEC 2013 competition to the CEC 2018 one, $MaxFES = D \cdot 10^4$ ($D \in \{10, 30, 50, 100\}$). In 2020, 2021, and 2022, $MaxFES = D \cdot 10^6$ when $D = 10$, while $MaxFES = D \cdot 10^7$ when $D = 20$. The latter setting for long-term search, in which $MaxFES$ grows super-linearly with an increase in D, has been followed by the academic community.
For the long-term search of real-parameter single-objective optimization, the DE variants IMODE [1], NL-SHADE-RSP [2], and jDE-21 [3] have excelled in CEC competitions. IMODE secured victory in the CEC 2020 competition, while NL-SHADE-RSP demonstrated its superiority by winning both the shifted cases and the rotated shifted cases in the CEC 2021 competition. Furthermore, jDE-21 achieved the top rank in the non-rotated shifted cases during the same competition. Based on IMODE, AMCDE was proposed in [4]. In addition to DE, ensembles of evolutionary algorithms demonstrate excellent performance. For long-term search, APGSK-IMODE [5], the ensemble of APGSK, a variant of the Gaining–Sharing Knowledge-based algorithm (GSK), and the DE variant IMODE, demonstrated strong performance by winning the non-shifted cases in the CEC 2021 competition. Moreover, EA4eig [6], the ensemble of three DE variants and CMA-ES, showed its superiority by winning the CEC 2022 competition. In short, for the long-term search of real-parameter single-objective optimization, both DE variants and ensemble methods perform well.
In fact, ensembles of evolutionary algorithms are also applied to problems beyond real-parameter single-objective optimization. Here are some examples. DEACO [7], the ensemble of DE and ant colony optimization, was developed to solve the traveling salesman problem. GWO-DE [8], the ensemble of the gray wolf optimizer and DE, was developed to solve complex non-linear equations. DE-GA [9], the ensemble of DE and a genetic algorithm, was developed for the optimal coordination of relays. RCGA-PSO [10], the ensemble of a genetic algorithm and particle swarm optimization, was developed to optimize the characteristics of the environment and the strategies for individual decision-making by agents involved in barter and monetary interactions. CBHPSO [11], the Clustering-Based particle swarm optimization, was developed for biobjective optimization. MBHGA [12], the hybrid genetic algorithm with both a real-coded crossover and a matrix binary-coded one, was developed to assist decision-makers in selecting the optimal strategies for their firms in situations where some companies impose restrictions on interactions.
Details of DE are given below because we chose to further enhance DE for the long-term search of real-parameter single-objective optimization. Similarly to other types of evolutionary algorithms, DE employs the operators of mutation, crossover, and selection. At the beginning of the execution, the target vectors (individuals of DE) $x_{i,0} = (x_{1,i,0}, x_{2,i,0}, \ldots, x_{D,i,0})$, $i \in \{1, 2, \ldots, NP\}$, where D denotes dimensionality and NP represents the population size, are initialized. Mutation generates mutant vectors $v_{i,g}$ from target vectors in the gth generation. DE/rand/1,
$$v_{i,g} = x_{r_1,g} + F \cdot (x_{r_2,g} - x_{r_3,g}),$$
where $r_1$, $r_2$, and $r_3$ are elements of the set $\{1, 2, \ldots, NP\}$ such that $r_1 \neq r_2 \neq r_3 \neq i$, is one of the basic mutation strategies. Here, F denotes the scaling factor. Then, on the basis of $x_{i,g}$ and $v_{i,g}$, trial vectors $u_{i,g} = (u_{1,i,g}, u_{2,i,g}, \ldots, u_{D,i,g})$ are generated by crossover. Binomial crossover,
$$u_{j,i,g} = \begin{cases} v_{j,i,g}, & \text{if } rand(0,1) \leq Cr \text{ or } j = j_{rand}, \\ x_{j,i,g}, & \text{otherwise}, \end{cases}$$
is applied in most cases. In the equation, $j \in \{1, 2, \ldots, D\}$. Moreover, the crossover rate $Cr \in [0, 1]$, while $j_{rand}$ is an integer randomly generated from the range $[1, D]$ to ensure that $u_{i,g}$ has at least one component from $v_{i,g}$. In the literature, crossover and mutation are together called the trial vector generation strategy. The most popular selection strategy is
$$x_{i,g+1} = \begin{cases} u_{i,g}, & \text{if } f(u_{i,g}) \leq f(x_{i,g}), \\ x_{i,g}, & \text{otherwise}, \end{cases}$$
where $f(\cdot)$ represents the fitness obtained by function evaluation.
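The interplay of the three operators can be summarized in a minimal Python sketch of classic DE/rand/1 with binomial crossover and greedy selection. The function name and all parameter values here are illustrative, not those of any algorithm discussed later:

```python
import numpy as np

def de_rand_1_bin(f, bounds, NP=30, F=0.5, Cr=0.9, max_fes=3000, seed=0):
    """Minimal classic DE sketch: DE/rand/1 mutation, binomial crossover,
    greedy selection, and simple bound clamping."""
    rng = np.random.default_rng(seed)
    D = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (NP, D))            # target vectors x_{i,0}
    fit = np.array([f(x) for x in pop])
    fes = NP
    while fes < max_fes:
        for i in range(NP):
            # mutation: v = x_r1 + F * (x_r2 - x_r3), with r1 != r2 != r3 != i
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            # binomial crossover with a guaranteed component j_rand
            jrand = rng.integers(D)
            mask = (rng.random(D) <= Cr) | (np.arange(D) == jrand)
            u = np.where(mask, v, pop[i])
            # greedy selection
            fu = f(u)
            fes += 1
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
            if fes >= max_fes:
                break
    return pop[fit.argmin()], float(fit.min())

best, fbest = de_rand_1_bin(lambda x: float(np.sum(x**2)), [(-5, 5)] * 5)
```

On the 5-dimensional sphere function used above, the sketch converges close to the optimum well within the small evaluation budget.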
DE has been improved in different ways to adapt to traditional search or long-term search. Rapid convergence is emphasized for the former purpose, while stagnation resistance is vital for the latter. In different DE variants, stagnation resistance is achieved through distinct approaches. As an example, we present an analysis of IMODE, the winning algorithm of the CEC 2020 competition for long-term search.
IMODE [1] operates with three mutation strategies, DE/current-to-ϕbest/1 with archive,
$$v_{i,g} = x_{i,g} + F_i \cdot (x_{\phi,g} - x_{i,g} + x_{r_1,g} - x_{r_3,g}),$$
DE/current-to-ϕbest/1,
$$v_{i,g} = x_{i,g} + F_i \cdot (x_{\phi,g} - x_{i,g} + x_{r_1,g} - x_{r_2,g}),$$
and DE/weighted-rand-to-ϕbest/1,
$$v_{i,g} = F_i \cdot x_{r_1,g} + F_i \cdot (x_{\phi,g} - x_{r_2,g}),$$
where $r_3 \in \{1, 2, \ldots, NP + |A|\}$, A is the archive for storing individuals eliminated from the population, and $x_{\phi,g}$ is an individual among the best $\phi$ ones in the gth generation. Under each of the three mutation strategies, each individual has its own value of the scaling factor $F_i$. The external archive employed by DE/current-to-ϕbest/1 with archive, shown in Equation (5), is maintained according to [13]. Meanwhile, the binomial or exponential crossover,
$$u_{j,i,g} = \begin{cases} \begin{cases} v_{j,i,g}, & \text{if } rand(0,1) \leq Cr_i \text{ or } j = j_{rand}, \\ x_{j,i,g}, & \text{otherwise}, \end{cases} & \text{if } rand(0,1) \leq p, \\ \begin{cases} v_{j,i,g}, & \text{for } j = \langle l \rangle_D, \langle l+1 \rangle_D, \ldots, \langle l+L-1 \rangle_D, \\ x_{j,i,g}, & \text{for all other } j \in [1, D], \end{cases} & \text{otherwise}, \end{cases}$$
where p is the probability of the binomial mode in crossover, $l \in \{1, 2, \ldots, D\}$ is a randomly chosen starting index, $L \in \{1, 2, \ldots, D\}$ is a length parameter employed by IMODE, and $\langle \cdot \rangle_D$ denotes taking the index modulo D. $Cr_i$, the crossover rate of the ith individual, is also adaptively set according to [14].
In Equations (5)–(7), $F_i$ is independently produced based on a Cauchy distribution with location parameter $\mu_F$ and scale parameter 0.1,
$$F_i = randc_i(\mu_F, 0.1).$$
Here, if $F_i > 1$, the parameter is truncated to 1. If $F_i < 0$, the parameter is regenerated. $\mu_F$ in Equation (9) is obtained as
$$\mu_F = (1 - c) \cdot \mu_F + c \cdot mean_L(S_F),$$
where $c \in (0, 1)$, $mean_L$ denotes the Lehmer mean, and $S_F$ is the set of all F values that led to improvement in the previous generation. Meanwhile, $Cr_i$, the Cr value of the ith individual, is generated based on a normal distribution whose mean is $\mu_{Cr}$ and whose standard deviation is 0.1. That is,
$$Cr_i = randn_i(\mu_{Cr}, 0.1).$$
Also, $Cr_i$ is truncated to $[0, 1]$. Meanwhile, $\mu_{Cr}$ in Equation (11) is obtained as
$$\mu_{Cr} = (1 - c) \cdot \mu_{Cr} + c \cdot mean_A(S_{Cr}),$$
where $mean_A$ denotes the arithmetic mean, and $S_{Cr}$ is the set of all Cr values that led to improvement in the previous generation.
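The parameter adaptation described above can be sketched in Python. The function names `sample_F`, `sample_Cr`, and `update_means` are our own illustrative names, and the learning rate `c = 0.1` is only an example value:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_F(mu_F):
    """Cauchy sample with scale 0.1, truncated at 1 and regenerated
    while non-positive (Eq. (9))."""
    F = mu_F + 0.1 * rng.standard_cauchy()
    while F <= 0:
        F = mu_F + 0.1 * rng.standard_cauchy()
    return min(F, 1.0)

def sample_Cr(mu_Cr):
    """Normal sample with std 0.1, truncated to [0, 1] (Eq. (11))."""
    return float(np.clip(rng.normal(mu_Cr, 0.1), 0.0, 1.0))

def lehmer_mean(S):
    """Lehmer mean mean_L used for the F values."""
    S = np.asarray(S, dtype=float)
    return float((S**2).sum() / S.sum())

def update_means(mu_F, mu_Cr, S_F, S_Cr, c=0.1):
    """Blend the old means with the successful values of the last
    generation (Eqs. (10) and (12)); skipped if no value succeeded."""
    if len(S_F):
        mu_F = (1 - c) * mu_F + c * lehmer_mean(S_F)
    if len(S_Cr):
        mu_Cr = (1 - c) * mu_Cr + c * float(np.mean(S_Cr))
    return mu_F, mu_Cr

mu_F, mu_Cr = update_means(0.5, 0.5, [0.9, 0.9], [0.8, 0.8])
```

With successful sets {0.9, 0.9} and {0.8, 0.8} and c = 0.1, the means move to 0.54 and 0.53, respectively.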
In each generation of IMODE, the difference in fitness between each pair of target vector and trial vector is computed. Then, $\overline{\Delta f}_k$, the average improvement after a generation made by the kth mutation strategy, can be obtained. After that, the controlling ratio of the kth mutation strategy, $c_k$, is computed as shown below,
$$c_k = \frac{\overline{\Delta f}_k}{\sum_{i=1}^{3} \overline{\Delta f}_i},$$
for the next generation. According to Equation (13), better performance leads to a larger controlling ratio. Moreover, a measure is adopted to keep $0.1 \leq c_k \leq 0.9$. In detail, provided that $c_m < 0.1$ ($m \in \{1, 2, 3\}$), $c_m$ is raised to 0.1 by reducing $c_{max}$.
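A minimal sketch of the controlling-ratio computation, under our reading of the clamping measure (a ratio below 0.1 is raised at the expense of the current largest ratio):

```python
import numpy as np

def controlling_ratios(delta_f_bar, lo=0.1):
    """Eq. (13) plus the clamping measure: c_k = mean improvement of the
    kth strategy divided by the total, then any ratio below `lo` is raised
    to `lo` by reducing the currently largest ratio."""
    d = np.asarray(delta_f_bar, dtype=float)
    c = d / d.sum()
    for k in range(len(c)):
        if c[k] < lo:
            c[np.argmax(c)] -= lo - c[k]   # take the deficit from the max
            c[k] = lo
    return c

c = controlling_ratios([8.0, 1.5, 0.5])
```

For average improvements (8.0, 1.5, 0.5), the raw ratios (0.8, 0.15, 0.05) become (0.75, 0.15, 0.1), still summing to one.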
Provided that $FES / MaxFES > R$, where FES denotes the number of consumed function evaluations and $R \in (0, 1)$ is a parameter, the local search method Sequential Quadratic Programming (SQP) is executed on the best individual with probability $P_{ls}$ in each generation. SQP is an iterative method that begins with an initial guess and refines the solution step by step; each iteration involves solving a quadratic subproblem that approximates the original problem. IMODE employs the SQP implementation from MATLAB's Optimization Toolbox, wherein Lagrange multipliers are utilized to handle bounds. In the application, after each iteration, the individual obtained is evaluated. Provided that the individual is better than the original in fitness, the latter is replaced by the former, and the SQP execution is successfully completed. If the original individual cannot be improved within $CFE_{ls}$ iterations, the SQP execution fails. A successful SQP execution results in the parameter $P_{ls}$ being set to a significantly larger value for the next generation; otherwise, $P_{ls}$ is set to a small value for the following generation. For the bounds' treatment, the clamping method is consistently applied, mirroring its usage elsewhere in IMODE.
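The dynamic calling scheme can be sketched in Python. The function name `maybe_run_sqp`, the injected `local_search` callable, and the reset values 0.1 and 0.01 (standing in for "significantly larger" and "small") are our illustrative assumptions, not IMODE's actual constants:

```python
import random

def maybe_run_sqp(fes, max_fes, R, p_ls, local_search, x_best):
    """Sketch of the trigger: local search runs only in the final stage
    (FES/MaxFES > R), and then only with probability p_ls. The callable
    `local_search` returns (improved, refined_point)."""
    if fes / max_fes > R and random.random() < p_ls:
        improved, x_new = local_search(x_best)
        # success raises the probability for the next generation,
        # failure lowers it (0.1 / 0.01 are placeholder values)
        return (x_new if improved else x_best), (0.1 if improved else 0.01)
    return x_best, p_ls

x, p = maybe_run_sqp(900, 1000, 0.85, 1.0,
                     lambda x: (True, "improved"), "old")
```

With `p_ls = 1.0` the call is guaranteed once the budget threshold is crossed, which makes the branch easy to exercise.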
In each generation, Linear Population Size Reduction (LPSR) is employed to adjust the population size as follows,
$$NP = \frac{FES}{MaxFES} \cdot (NP_{min} - NP_{max}) + NP_{max}.$$
According to Equation (14), LPSR uniformly reduces the population size. In the DE algorithm jDEdynNP-F [15], while the population size generally decreases over generations, it dynamically increases when specific criteria are met.
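The LPSR schedule of Equation (14) is a one-line formula; a sketch follows, where rounding to an integer is our assumption (implementations typically round or floor):

```python
def lpsr(fes, max_fes, np_min, np_max):
    """Linear Population Size Reduction (Eq. (14)): NP shrinks linearly
    from NP_max to NP_min as the evaluation budget is consumed."""
    return round(fes / max_fes * (np_min - np_max) + np_max)
```

For example, with `np_max = 100` and `np_min = 4`, the population holds 100 individuals at the start, 52 at half the budget, and 4 at the end.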
It can be seen that, in IMODE, positions in the population are scrambled by the three mutation strategies; no position is always controlled by a fixed mutation strategy. Hence, convergence occurs slowly. The performance-based competition among the mutation strategies is one of the key schemes employed to delay convergence. Therefore, IMODE is suitable for long-term search. As analyzed in [4], the three mutation strategies serve distinct purposes. In detail, the former two, which perform well [13] and cooperate well with each other in an ensemble [16], are primary mutation strategies, while the latter, DE/weighted-rand-to-ϕbest/1, whose performance is much worse than that of the former two, can be regarded as a secondary mutation strategy for diversification. To date, it is rare to encounter a DE algorithm that employs only the DE/weighted-rand-to-ϕbest/1 mutation strategy, due to its structurally biased search behavior. However, the mutation strategy is employed in IMODE as the secondary one because it is very different from the two main ones. The experimental results in [17] demonstrate that, in the framework of IMODE, this mutation strategy is a better choice of secondary mutation strategy than six other, more widely used mutation strategies.
According to [18], IMODE is based on stateless adaptive operator selection (AOS). Each scheme of stateless AOS is composed of two components, credit assignment (CA) and operator selection rule (OSR). In detail, IMODE employs improvement-based CA, while its OSR is Roulette Wheel selection. In addition to Roulette Wheel, operator selection can be realized based on multi-armed bandit (MAB) [19,20,21,22,23,24,25,26,27,28].
The motivation of this paper is as follows. IMODE may be the pioneer among DE algorithms with a secondary mutation strategy. In the algorithm, DE/weighted-rand-to-ϕbest/1 is used as the secondary mutation strategy. As mentioned before, compared with many other mutation strategies, this one is a better choice of secondary mutation strategy to work with the two main mutation strategies. However, to diversify the population more effectively, further research should be conducted on the secondary mutation strategy. Now that IMODE has more than one primary mutation strategy, more secondary ones can also be considered. However, secondary mutation strategies should not occupy a larger ratio of individuals than primary ones. Therefore, it is suitable that, in a generation, only one of the secondary mutation strategies is called for diversification. In this case, a scheme is required to choose one of the secondary mutation strategies for each generation. It can also be seen that SQP is executed on the best individual with a certain probability in the final stage of IMODE. However, executing SQP on the best individual may not be the most appropriate choice when the success rate is considered. Therefore, executing SQP on other individuals is worth trying.
In this paper, we propose Differential Evolution with Secondary Mutation Strategies (SMSDE) based on IMODE. Firstly, in addition to the secondary mutation strategy used in IMODE, we integrate three more. In detail, the three secondary mutation strategies introduced by us are DE/rand/1, DE/rand/2, and DE/current-to-best/1, which were evaluated as substitutes for DE/weighted-rand-to-ϕbest/1 in the IMODE framework in [17]. In each generation, the four secondary mutation strategies compete based on historical performance to become the activated one. Moreover, in a manner similar to LPSR, we adjust the parameter ϕ used in the two main mutation strategies, DE/current-to-ϕbest/1 with archive and DE/current-to-ϕbest/1, based on the defined bounds $\phi_{max}$ and $\phi_{min}$. In addition, the execution of SQP becomes more elaborate than before. The original calling scheme used by IMODE continues to be used. Meanwhile, in every generation, the oldest individual among the worst ones is processed by SQP with probability $P'_{ls}$, which varies similarly to $P_{ls}$.
In our experiment, on the basis of the CEC 2017, 2020, and 2022 benchmark test suites, SMSDE is compared to IMODE, AGSK [29], NL-SHADE-RSP [2], EA4eig [6], NL-SHADE-LBC [30], APGSK-IMODE-FL [31], and AMCDE [4]. Details of the peers can be found in the following sections. The results demonstrate that SMSDE performs better than the peers, including IMODE.
The rest of this paper is structured as follows. In Section 2, the related work is reviewed. Section 3 offers a detailed description of our proposed algorithm. In Section 4, the experimental setup, results, and analysis are presented. Finally, we conclude our study in Section 5.

3. Methodology

In IMODE, DE/weighted-rand-to-ϕbest/1 is the only secondary mutation strategy. Rather than deep search, its main task is to change the distribution of individuals and thereby diversify the population. The reason for using this mutation strategy for diversification in IMODE is that, among existing mutation strategies, it behaves the most differently from the two main ones in the algorithm [4]. The success of IMODE shows that maintaining diversification through a secondary mutation strategy is effective in improving the solution. Now that there are two primary mutation strategies in IMODE, it is possible that more than one secondary mutation strategy can be used in an ensemble for better diversification. Although it is difficult to find another mutation strategy that is as different from the two main ones as DE/weighted-rand-to-ϕbest/1, using a larger number of secondary mutation strategies can lead to further improvement in diversification.
In [38], six mutation strategies—DE/rand/1,
$$v_{i,g} = x_{r_1,g} + F \cdot (x_{r_2,g} - x_{r_3,g}),$$
DE/rand/2,
$$v_{i,g} = x_{r_1,g} + F \cdot (x_{r_2,g} - x_{r_3,g}) + F \cdot (x_{r_4,g} - x_{r_5,g}),$$
DE/best/1,
$$v_{i,g} = x_{best,g} + F \cdot (x_{r_1,g} - x_{r_2,g}),$$
DE/best/2,
$$v_{i,g} = x_{best,g} + F \cdot (x_{r_1,g} - x_{r_2,g}) + F \cdot (x_{r_3,g} - x_{r_4,g}),$$
DE/current-to-best/1,
$$v_{i,g} = x_{i,g} + F \cdot (x_{best,g} - x_{i,g}) + F \cdot (x_{r_1,g} - x_{r_2,g}),$$
and DE/current-to-rand/1,
$$u_{i,g} = x_{i,g} + K \cdot (x_{r_1,g} - x_{i,g}) + \hat{F} \cdot (x_{r_2,g} - x_{r_3,g}),$$
are introduced as the frequently used ones. In Equations (15)–(20), $x_{best,g}$ specifies the best individual in the current population. The values of the five random indices $r_1$ to $r_5$ are taken within the range $\{1, 2, \ldots, NP\}$. The combination coefficient K is selected with a uniform random distribution over $[0, 1]$, and $\hat{F} = K \cdot F$. In our previous study [17], the six mutation strategies were considered as replacements for DE/weighted-rand-to-ϕbest/1 in IMODE.
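For illustration, the six strategies can be written side by side in Python. The helper `mutants` is our own construction, and its index sampling is simplified (the indices are drawn without replacement but are not forced to differ from i, as stricter implementations would require):

```python
import numpy as np

rng = np.random.default_rng(2)

def mutants(pop, i, best, F=0.5, K=None):
    """One mutant per classic strategy (Eqs. (15)-(20)). `best` is the
    index of the best individual; K is uniform in [0, 1] if not given,
    and F_hat = K * F as in DE/current-to-rand/1 (which yields the trial
    vector directly)."""
    NP = len(pop)
    r1, r2, r3, r4, r5 = rng.choice(NP, 5, replace=False)
    K = rng.random() if K is None else K
    return {
        "DE/rand/1": pop[r1] + F * (pop[r2] - pop[r3]),
        "DE/rand/2": pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5]),
        "DE/best/1": pop[best] + F * (pop[r1] - pop[r2]),
        "DE/best/2": pop[best] + F * (pop[r1] - pop[r2]) + F * (pop[r3] - pop[r4]),
        "DE/current-to-best/1": pop[i] + F * (pop[best] - pop[i]) + F * (pop[r1] - pop[r2]),
        "DE/current-to-rand/1": pop[i] + K * (pop[r1] - pop[i]) + K * F * (pop[r2] - pop[r3]),
    }

pop = np.arange(12.0).reshape(6, 2)
m = mutants(pop, i=0, best=3)
```

Each entry is a D-dimensional vector; only the base vector and the number of difference terms distinguish the strategies.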
In this paper, our proposed algorithm uses N secondary mutation strategies. DE/weighted-rand-to-ϕbest/1 is called S 1 , while the other secondary mutation strategies range from S 2 to S N . It is not suitable to use that many secondary mutation strategies simultaneously. After all, secondary mutation strategies are not the main force in optimization. It is better to choose one of them per generation to cooperate with the two main mutation strategies. Hence, secondary mutation strategies need to compete with each other to be activated based on their performance history.
To schedule these secondary strategies, we set a priority list. All the selected secondary strategies are placed into the list, and each is given a counter. The counter of $S_n$ ($n = 1, 2, \ldots, N$) is called $count_n$. The selected secondary strategies have an initial sequence represented by the list $L = [1, 2, \ldots, N]$. In the beginning, $count_n = 0$. In each generation, only one of the secondary strategies, $S_{L[cur]}$ ($cur \in \{1, 2, \ldots, N\}$), is used. At first, $cur = 1$. Provided that the best fitness is not refreshed in a generation, $cur = cur \% N + 1$ in the next generation. Otherwise, if the best fitness is improved in the generation, the counter of the current secondary strategy is incremented: $count_{L[cur]} = count_{L[cur]} + 1$. After the accumulation of $count_{L[cur]}$, the list is re-sorted in descending order of $count_n$. In the next generation, $cur = 1$.
The above revision can be explained as shown below. Provided that the best fitness is improved in a generation, the currently activated secondary mutation strategy is given a higher evaluation of importance and may be listed further ahead in the list. After that, the first secondary mutation strategy in the list is activated. Otherwise, if the best fitness is not improved in a generation, the secondary mutation strategy in the list next to the currently activated one is used in the next generation. In brief, the higher in priority a secondary mutation strategy is, the more frequently the one is activated. Meanwhile, the priority is adjusted based on performance.
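The priority-list scheduling described above can be sketched as a small class. The class name and 0-based indexing are ours; a stable sort realizes the re-sorting in descending order of the counters, leaving ties in their current order:

```python
class SecondaryScheduler:
    """Priority-list rotation of secondary mutation strategies: on
    improvement, credit the active strategy, re-sort by counters, and
    restart from the head of the list; otherwise advance cyclically."""

    def __init__(self, n):
        self.order = list(range(n))   # the list L (0-based here)
        self.count = [0] * n          # count_n
        self.cur = 0                  # index into `order`

    @property
    def active(self):
        return self.order[self.cur]

    def feedback(self, improved):
        if improved:
            self.count[self.active] += 1
            # stable sort: descending counters, ties keep their order
            self.order.sort(key=lambda s: -self.count[s])
            self.cur = 0
        else:
            self.cur = (self.cur + 1) % len(self.order)

s = SecondaryScheduler(4)
s.feedback(False)   # no improvement: move to the next strategy
s.feedback(True)    # improvement: credit it and restart from the head
```

After the two steps above, strategy 1 (0-based) has one credit, sits at the head of the list, and is active again.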
In IMODE and other DE algorithms, LPSR is used to gradually reduce the population size. The reason behind this is that in order to maintain the dynamic balance of exploration and exploitation during search, the focus must gradually shift from exploration to exploitation. A large population is required for deep exploration, while a small one is sufficient for exploitation. For the same purpose, we propose that the value of ϕ in the two main mutation strategies shown in Equations (5) and (6), respectively, decreases during execution. By these means, the available x ϕ , g gradually decreases during execution.
As stated before, in IMODE, the local search method SQP is executed on the best individual with a variable probability in the final generations. We propose to increase the frequency of SQP calls alongside the original executions. In detail, during the whole course of execution, SQP is applied to the oldest individual among the worst $r_w \cdot NP$ individuals with probability $P'_{ls}$, under the same iteration limit. Similarly to $P_{ls}$, provided that SQP leads to improvement, $P'_{ls}$ becomes large; otherwise, $P'_{ls}$ becomes small. To find the oldest individual among the worst $r_w \cdot NP$ ones, the age of each individual, $age_i$, needs to be recorded in each generation.
Compared with improving the best individual by SQP, it is much more likely that SQP improves a worse individual. Moreover, the phenomenon in which an individual has stayed in the population for multiple generations can be interpreted as indicating that the operators of DE cannot improve that individual. In this case, it is reasonable to execute SQP on the individual in order to eliminate it. Overall, our scheme is that SQP is not only executed on the best individual in the final generations with a dynamic probability, but also on the oldest individual among the worst $r_w \cdot NP$ ones in each generation, likewise with a dynamic probability. The SQP executions on the oldest individual among the worst ones may lead to better diversity, mainly because of the higher success rate.
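Selecting the SQP target can be sketched as follows. The function name and the default `r_w = 0.25` are purely illustrative, as the text leaves $r_w$ as a parameter; minimization is assumed, so the worst individuals are those with the largest fitness:

```python
import numpy as np

def oldest_among_worst(fitness, age, r_w=0.25):
    """Pick the SQP target: among the worst ceil(r_w * NP) individuals
    (largest fitness under minimization), return the index of the one
    with the largest age."""
    fitness = np.asarray(fitness, dtype=float)
    age = np.asarray(age)
    k = max(1, int(np.ceil(r_w * len(fitness))))
    worst = np.argsort(fitness)[-k:]           # indices of the k worst
    return int(worst[np.argmax(age[worst])])

idx = oldest_among_worst([1.0, 5.0, 9.0, 7.0], [3, 0, 1, 6], r_w=0.5)
```

In the example, the two worst individuals have fitness 9.0 and 7.0; of those, the one with age 6 is chosen.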
Using the above two revisions based on IMODE, we obtain our SMSDE. We give the pseudo-code of SMSDE in Algorithm 1. In Algorithm 1, Algorithm 2 is called. To select the mutation strategies from the six ones for SMSDE, we execute a pre-experiment. In the pre-experiment, firstly, we attempt to add more mutation strategies from those listed in Equations (15)–(20) in sequence into the framework of SMSDE as additional secondary ones. Only if performance is improved will the mutation strategy currently tested be accepted. According to Table 1, the inclusion of DE/rand/1, DE/rand/2, and DE/current-to-best/1 leads to performance improvement. It can be observed from their equations that these three mutation strategies exhibit a higher degree of randomness when compared to the other three, thereby making them more suitable as secondary mutation strategies.
Algorithm 1 The SMSDE pseudo-code
Input: $NP_{max}$, $NP_{min}$, $MaxFES$, R, $\phi_{max}$, and $\phi_{min}$
Parameter: NP, FES, $\phi$, $F_i$, $CR_i$, $age_i$ ($i \in \{1, 2, \ldots, NP\}$), $c_k$ ($k \in \{1, 2, 3\}$), $count_n$ ($n \in \{1, 2, \ldots, N\}$), L, cur, o&w, and g
1: Initialize the first generation of the population $P_0$
2: Evaluate the $NP_{max}$ individuals in $P_0$
3: $NP = NP_{max}$, $FES = NP$, $\phi = \phi_{max}$, $L = [1, 2, \ldots, N]$, $count_n = 0$, $cur = 1$, $g = 0$, and $age_i = 0$
4: while $FES \leq MaxFES$ do
5:  Randomly allocate one third of the individuals to each of the two main mutation strategies and the secondary one $S_{L[cur]}$
6:  for $i = 1$ to NP do
7:   $F_i$ is set based on Equations (9) and (10)
8:   $CR_i$ is set based on Equations (11) and (12)
9:   Obtain $v_{i,g}$ based on $x_{i,g}$'s mutation strategy
10:   Execute crossover based on Equation (8) to obtain $u_{i,g}$
11:   Evaluate the trial vector
12:   $FES = FES + 1$
13:   Execute selection based on Equation (4) to obtain $x_{i,g+1}$
14:   if $x_{i,g+1} == x_{i,g}$ then
15:    $age_i = age_i + 1$
16:   else
17:    $age_i = 0$
18:   end if
19:  end for
20:  Calculate $c_k$ based on Equation (13)
21:  Make $0.1 \leq c_k \leq 0.9$
22:  The kth mutation strategy is allocated $c_k \cdot NP$ random individuals
23:  if the best fitness in the population is not improved in the current generation then
24:   $cur = cur \% N + 1$
25:  else
26:   $count_{L[cur]} = count_{L[cur]} + 1$
27:   Re-sort L in descending order of $count_n$ ($n \in \{1, 2, \ldots, N\}$)
28:   $cur = 1$
29:  end if
30:  Among the worst $r_w \cdot NP$ individuals, find the one largest in $age_i$: $x_{o\&w,g}$
31:  Execute Algorithm 2
32:  $g = g + 1$, $\phi = FES / MaxFES \cdot (\phi_{min} - \phi_{max}) + \phi_{max}$
33:  Execute LPSR based on Equation (14) to decrease NP
34: end while
35: Report solution
Table 1. Outcome of the Friedman Test to select secondary mutation strategies. “1” represents the original secondary mutation strategy in IMODE, while the other numbers denote the additional secondary mutation strategies in Equations (15)–(20), respectively.
Therefore, the three mutation strategies are selected to cooperate with the original one as secondary mutation strategies. After that, we optimize the initial sequence of the four selected mutation strategies in the priority list L by comparison. According to Table 2, in L, DE/weighted-rand-to-ϕbest/1, DE/rand/2, DE/rand/1, and then DE/current-to-best/1 is the best initial sequence.
Algorithm 2 The SQP execution pseudo-code
Input: FES, $MaxFES$, NP, g, $P_l$, $P_s$, o&w, and R
Parameter: $P_{ls}$, $P'_{ls}$
Output:
1: $P_{ls} = P'_{ls} = P_l$
2: if $FES / MaxFES > R$ and $rand(0,1) < P_{ls}$ then
3:  SQP is executed on $x_{best,g+1}$
4:  Evaluate the offspring $o_{best}$
5:  if $o_{best}$ is better than $x_{best,g+1}$ then
6:   $x_{best,g+1} = o_{best}$
7:   $P_{ls} = P_l$
8:  else
9:   $P_{ls} = P_s$
10:  end if
11: end if
12: Perform accumulation on FES
13: if $rand(0,1) < P'_{ls}$ then
14:  SQP is executed on the oldest individual among the $r_w \cdot NP$ worst ones, $x_{o\&w,g+1}$
15:  Evaluate the offspring $o_{o\&w}$
16:  if $o_{o\&w}$ is better than $x_{o\&w,g+1}$ then
17:   $x_{o\&w,g+1} = o_{o\&w}$
18:   $P'_{ls} = P_l$
19:  else
20:   $P'_{ls} = P_s$
21:  end if
22: end if
23: Perform accumulation on FES
Table 2. Outcome of the Friedman Test to determine the sequence of the selected secondary mutation strategies.
Our SMSDE can be regarded as a DE algorithm with a hierarchical AOS method. The competition of the two main mutation strategies and the selected secondary mutation strategy is the high layer of the method, while that of the seven secondary mutation strategies is the low layer of the method. The high layer uses improvement-based CA and Roulette Wheel OSR. The low layer employs improvement-based CA, while OSR is not considered.
SQP has a worst-case time complexity of $O(D^3)$. However, most of the SQP executions in our application terminate early because the procedure stops as soon as a better fitness is obtained. SQP is executed sparsely in IMODE, and in our SMSDE, the usage of SQP is kept within the same order of magnitude. According to the survey of DE [39], in addition to SQP, chaotic local search [40], multi-dimensional Gaussian distribution [41], random walk with a certain probability [42], the Hooke–Jeeves method [43], Lamarckian and Baldwinian learning [44], the Nelder–Mead algorithm [45], and temporal difference Q learning [46] have been applied in DE as local search methods. It is possible for our SMSDE to adopt a local search method other than SQP. However, in this paper, we partly focus on proposing a novel scheme for calling local search to improve IMODE. Since IMODE adopts SQP as its local search method, SQP is still called in our SMSDE, so that our scheme for calling local search can be directly compared with the original scheme.
Like most DE algorithms for real-parameter single-objective optimization, our SMSDE remains $O(D \cdot \log_2 NP_{max})$ in time complexity. The space complexity of SMSDE can be represented as $O(D \cdot NP_{max} + D \cdot |A| + memory\_size)$, where $memory\_size$ denotes the memory space for storing parameters and executing SQP. The third term is much smaller than the former two.

4. Experimental Study

For comparison, seven algorithms for the long-term search of real-parameter single-objective optimization—IMODE, AGSK [29], NL-SHADE-RSP, EA4eig, NL-SHADE-LBC, APGSK-IMODE-FL, and AMCDE—are included in our experiment as peers. IMODE, the Improved Multi-Operator DE, is a DE variant with three mutation strategies. AGSK, the Gaining–Sharing Knowledge-based algorithm with adaptive parameters, ranked second in the CEC 2020 competition. NL-SHADE-RSP combines the non-linear population size reduction, the linear crossover rate increase, and a strategy adaptation technique, which chooses between mutation with and without archive. EA4eig is the ensemble of four evolutionary algorithms with the Eigen approach. NL-SHADE-LBC is the non-linear population size reduction success-history adaptive DE with linear bias change. APGSK-IMODE-FL is the ensemble of APGSK and IMODE exchanging individuals between the two constituent algorithms based on fitness and lifetime. AMCDE, DE alternating between steady monopoly and transient competition of mutation strategies, is an upgraded version of IMODE with the same three mutation strategies.
Table 3 lists the settings of the algorithms included in our experiments. The settings of the peers come from the literature.
Table 3. Settings of the included algorithms.
Our algorithm is revised from IMODE. In fact, the differences between our SMSDE and IMODE are confined to the following two aspects. Firstly, SMSDE incorporates an additional mechanism for rotating secondary mutation strategies. Secondly, in SMSDE, SQP is additionally executed on the oldest individual among the worst ones in each generation. Therefore, for SMSDE, we decided to keep the original values of most parameters inherited from IMODE, whose settings were tested in the CEC 2020 competition. Among the parameters, only p, the probability of the binomial mode in crossover, is adjusted for optimization. As mentioned in Section 3, the parameter ϕ used in IMODE is no longer employed in our algorithm; instead, $\phi_{max}$ and $\phi_{min}$ must be set. Therefore, for SMSDE, Table 3 exclusively specifies the settings of p, $\phi_{max}$, and $\phi_{min}$. In fact, the substitution of ϕ with $\phi_{max}$ and $\phi_{min}$ leads to a decrease in parameter sensitivity.
Our comparison is based on the following three benchmark test suites: CEC 2017, 2020, and 2022. The 29 functions in the CEC 2017 benchmark test suite can take four values of D: 10, 30, 50, and 100. The 10 functions in the CEC 2020 suite can take four values of D: 5, 10, 15, and 20, while the 12 functions in the CEC 2022 suite support only 10 and 20. Among the three suites, only the latter two are designed for long-term search, while the CEC 2017 suite targets traditional search. For long-term search, D > 20 is not recommended: as mentioned above, M a x F E S grows super-linearly with D for long-term search, so if D were assigned a value larger than 20, e.g., 30, M a x F E S would have to be very large, rendering the experiment infeasible. In our experiments, D is set to 10 and 20 for the 2020 and 2022 suites, while D is set only to 10 for the 2017 suite.
Table 4 gives the classification of the functions in the suites.
Table 4. Classification of the functions in the three CEC suites.
In addition to the benchmark test suites, from the CEC 2011 real-world problems, we select the ones whose D is less than 20 for comparison. These problems are listed in Table 5. Furthermore, Table 6 gives the M a x F E S for the selected CEC 2011 real-world problems.
Table 5. The problems selected from the CEC 2011 real-world problems.
Table 6. Value of M a x F E S for the problems selected from the CEC 2011 real-world problems.
The results of the CEC 2020 suite when D = 10 are listed in Table 7, while those when D = 20 are listed in Table 8. Similarly, Table 9 and Table 10 give the results for the CEC 2022 suite when D = 10 and D = 20 , respectively. Moreover, Table 11 shows the results of the CEC 2017 suite when D = 10 . As shown in Table 12, we analyze the results of the three benchmark test suites based on both the Wilcoxon rank sum test and the Friedman test.
Table 7. Results of the eight algorithms for the CEC 2020 benchmark test suite when D = 10 with the Wilcoxon rank sum test. A “+” or “−” denotes that the current result is significantly better or significantly worse than the result of SMSDE in terms of the Wilcoxon rank sum test at a 0.05 significance level, respectively. Meanwhile, “≈” represents that there is no significant difference.
Table 8. Results of the eight algorithms for the CEC 2020 benchmark test suite when D = 20 with the Wilcoxon rank sum test. A “+” or “−” denotes that the current result is significantly better or significantly worse than the result of SMSDE in terms of the Wilcoxon rank sum test at a 0.05 significance level, respectively. Meanwhile, “≈” represents that there is no significant difference.
Table 9. Results of the eight algorithms for the CEC 2022 benchmark test suite when D = 10 with the Wilcoxon rank sum test. A “+” or “−” denotes that the current result is significantly better or significantly worse than the result of SMSDE in terms of the Wilcoxon rank sum test at a 0.05 significance level, respectively. Meanwhile, “≈” represents that there is no significant difference.
Table 10. Results of the eight algorithms for the CEC 2022 benchmark test suite when D = 20 with the Wilcoxon rank sum test. A “+” or “−” denotes that the current result is significantly better or significantly worse than the result of SMSDE in terms of the Wilcoxon rank sum test at a 0.05 significance level, respectively. Meanwhile, “≈” represents that there is no significant difference.
Table 11. Results of the eight algorithms for the CEC 2017 benchmark test suite when D = 10 with the Wilcoxon rank sum test. A “+” or “−” denotes that the current result is significantly better or significantly worse than the result of SMSDE in terms of the Wilcoxon rank sum test at a 0.05 significance level, respectively. Meanwhile, “≈” represents that there is no significant difference.
Table 12. Outcome of the Wilcoxon rank sum test and that of the Friedman test based on the results for the CEC 2020 and 2022 benchmark test suites. “+” and “−” indicate that the current algorithm performs significantly better or significantly worse than SMSDE, respectively. Meanwhile, “≈” means that there is no significant difference.
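The two statistical procedures behind the tables can be sketched as follows. This is a hedged illustration, not the authors' analysis code: the per-run error values are synthetic stand-ins for the real benchmark results, and SciPy's implementations of the two tests are assumed.

```python
# Hedged sketch of how the "+"/"-"/"~" marks and the Friedman ranking in the
# tables can be computed with SciPy, using synthetic per-run errors.
import numpy as np
from scipy.stats import ranksums, friedmanchisquare

rng = np.random.default_rng(1)
# Final errors of 30 independent runs on one function (smaller is better).
errs_smsde = rng.normal(1.0, 0.1, 30)
errs_peer = rng.normal(1.5, 0.1, 30)

# Wilcoxon rank sum test at the 0.05 significance level.
_, p = ranksums(errs_peer, errs_smsde)
if p >= 0.05:
    mark = "~"                                 # no significant difference
elif errs_peer.mean() < errs_smsde.mean():
    mark = "+"                                 # peer significantly better
else:
    mark = "-"                                 # peer significantly worse

# The Friedman test ranks three or more algorithms over the same set of
# functions; here, hypothetical per-function mean errors of three algorithms.
errs_a = rng.normal(1.0, 0.1, 12)
errs_b = rng.normal(1.2, 0.1, 12)
errs_c = rng.normal(1.4, 0.1, 12)
_, p_friedman = friedmanchisquare(errs_a, errs_b, errs_c)
```

The rank sum test yields one pairwise mark per function, while the Friedman test produces the overall rankings reported in Table 12.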
It can be seen from Table 7 that, when D = 10 , for the only unimodal function F1, all algorithms always obtain the optimum. For the three basic functions F2–F4, our algorithm defeats AGSK and APGSK-IMODE-FL, while it is on par with IMODE and EA4eig. However, our SMSDE is defeated by NL-SHADE-RSP, NL-SHADE-LBC, and AMCDE. For the three hybrid functions, F5–F7, our algorithm defeats IMODE, AGSK, NL-SHADE-LBC, and APGSK-IMODE-FL, while it is on par with NL-SHADE-RSP. However, our SMSDE is defeated by EA4eig and AMCDE. For the three composition functions, F8–F10, our algorithm defeats all peers.
It can be seen from Table 8 that, when D = 20 , for the unimodal function, all algorithms still always obtain the optimum. For the three basic functions, our algorithm defeats AGSK and APGSK-IMODE-FL, while it is on par with IMODE and EA4eig. However, our SMSDE is defeated by NL-SHADE-RSP, NL-SHADE-LBC, and AMCDE. For the three hybrid functions, our algorithm defeats IMODE, AGSK, NL-SHADE-LBC, and APGSK-IMODE-FL, while it is on par with NL-SHADE-RSP. However, our SMSDE is defeated by EA4eig and AMCDE. For the three composition functions, our algorithm is on par with AGSK and NL-SHADE-RSP, but defeats the other peers.
It can be seen from Table 9 that, when D = 10 , for the unimodal function F1, all algorithms still always obtain the optimum. For the four basic functions F2–F5, our algorithm is on par with NL-SHADE-RSP and EA4eig, but it is defeated by the other peers. For the three hybrid functions, F6–F8, our algorithm defeats IMODE, AGSK, NL-SHADE-RSP, APGSK-IMODE-FL, and AMCDE, while it is on par with NL-SHADE-LBC. However, our SMSDE is defeated by EA4eig. For the four composition functions, F9–F12, our algorithm defeats all peers.
It can be seen from Table 10 that, when D = 20 , for the unimodal function, none of the peers show a significant difference in solution quality compared to our algorithm. For the four basic functions, our algorithm defeats IMODE, NL-SHADE-RSP, and APGSK-IMODE-FL, but it is on par with the other peers. For the three hybrid functions, our algorithm defeats IMODE, AGSK, NL-SHADE-RSP, APGSK-IMODE-FL, and AMCDE, while it is on par with the other peers. For the four composition functions, our algorithm defeats all peers.
It can be seen from Table 11 that, when D = 10 , for the two unimodal functions F1 and F3, none of the peers show a significant difference in solution quality compared to our algorithm. For the seven simple multimodal functions F4–F10, our algorithm defeats AGSK, NL-SHADE-RSP, and NL-SHADE-LBC, while it is on par with IMODE. However, our SMSDE is defeated by EA4eig, APGSK-IMODE-FL, and AMCDE. For the ten hybrid functions F11–F20, our algorithm defeats all peers. For the ten composition functions F21–F30, our algorithm defeats six peers, but it is defeated by AMCDE.
According to the five tables, none of the algorithms demonstrate a significant difference in solution quality for the unimodal functions. Meanwhile, our algorithm is defeated by the peers in most of the cases for the simple multimodal functions or the basic ones, but it defeats the peers in most of the cases for the hybrid functions. More importantly, our algorithm always outperforms all peers for the composition functions. In brief, as the complexity of the functions increases, our SMSDE exhibits enhanced performance.
According to the Wilcoxon rank sum test, the results can be summarized according to Table 12 as follows. For the CEC 2020 suite, when D = 10 , our SMSDE is defeated by NL-SHADE-RSP and AMCDE, but it performs better than the other benchmark algorithms. When D = 20 , SMSDE is only defeated by NL-SHADE-RSP, but it performs better than the other benchmark algorithms. For the CEC 2022 suite, when D = 10 , our algorithm is defeated by AMCDE and EA4eig, but it performs better than the other benchmark algorithms. When D = 20 , our algorithm is on par with EA4eig, but it performs better than the other benchmark algorithms. For the CEC 2017 suite, our algorithm is defeated by AMCDE, but it performs better than the other benchmark algorithms.
According to the Friedman test, the results can be summarized according to Table 12 as follows. For the CEC 2020 suite, when D = 10 , our algorithm ranks third and loses to AMCDE and NL-SHADE-RSP. For the same suite when D = 20 , our SMSDE ranks second and just loses to NL-SHADE-RSP. For the CEC 2022 suite, when D = 10 , SMSDE ranks second and just loses to EA4eig. When D = 20 , our algorithm ranks first. For the CEC 2017 suite, SMSDE ranks second and just loses to AMCDE.
In brief, from both points of view, although our algorithm occasionally loses in the comparison based on the three benchmark test suites, no peer consistently outperforms it. To better demonstrate this, we further summarize the outcomes of both types of tests in Table 13.
Table 13. Summary of the two types of tests. “+” and “−” indicate that the current algorithm performs significantly better or significantly worse than SMSDE, respectively. Meanwhile, “≈” means that there is no significant difference.
It can be easily seen from the table that, according to both tests, SMSDE performs better than all peers in the comparison based on the benchmark test suites.
Based on Table 13, we can see that, among the peers, NL-SHADE-RSP, EA4eig, and AMCDE perform better than the others. Therefore, in the comparison based on the selected CEC 2011 real-world problems, our SMSDE compares only with IMODE and these three strong performers. The results are listed in Table 14 with the outcome of the Wilcoxon rank sum test. It can be seen from Table 14 that, according to the Wilcoxon rank sum test, our SMSDE defeats IMODE, NL-SHADE-RSP, and AMCDE. Meanwhile, SMSDE is on par with EA4eig. In brief, our algorithm is better than or at least comparable to the peers in the comparison based on the selected real-world problems.
Table 14. The results of the selected CEC 2011 real-world problems with the Wilcoxon rank sum test. A “+” or “−” denotes that the current result is significantly better or significantly worse than the result of SMSDE in terms of the Wilcoxon rank sum test at a 0.05 significance level, respectively. Meanwhile, “≈” represents that there is no significant difference.
We find that, when D = 20 , none of the algorithms obtains the optimum of F4, F6–F9, and F12 in the CEC 2022 suite. Hence, for these functions, when D = 20 , we give the convergence graphs of all algorithms. In Figure 1, for each function, the average fitness of all individuals in the population, averaged over the 30 executions, is plotted at 11 checkpoints.
Figure 1. Convergence graph of SMSDE and the benchmark algorithms for the six functions when D = 20 .
It can be seen from the figure that, for some functions, e.g., F4, F6, and F7, most of the algorithms, including IMODE and our SMSDE, converge rapidly, even in the final stage. If further fitness evaluations were granted to these algorithms, better solutions could be obtained. It can be inferred that these algorithms possess strong long-term search capabilities. More importantly, it can be observed that our SMSDE, the algorithm revised from IMODE, is similar to the latter in convergence behavior. This means that our revisions do not significantly alter the convergence behavior.
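The construction of each plotted curve can be sketched as follows. This is an illustrative sketch with synthetic data, not the authors' plotting code: the array shapes mirror the described protocol (30 runs, 11 checkpoints), but the fitness values themselves are stand-ins.

```python
# Sketch of how one convergence curve in Figure 1 can be produced: record the
# population's average fitness at 11 evenly spaced checkpoints in every run,
# then average over the 30 runs. Synthetic, monotonically improving data here.
import numpy as np

n_runs, n_checkpoints = 30, 11
rng = np.random.default_rng(2)
# Shape (runs, checkpoints); sorting descending per run mimics a minimizer
# whose average population fitness only improves over time.
per_run = np.sort(rng.uniform(0.0, 100.0, (n_runs, n_checkpoints)), axis=1)[:, ::-1]
curve = per_run.mean(axis=0)   # the single line drawn for one algorithm
```

Averaging over runs before plotting smooths out run-to-run variance, so the curves compare the algorithms' typical, rather than best-case, convergence.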
To examine our SMSDE more fully, we conduct an ablation experiment based on the CEC 2020 suite with D = 15 . The ablation experiment compares IMODE, IMODE with our revision of the mutation strategies but without the revision of the local search (denoted IMODE+), and SMSDE. The results are listed in Table 15.
Table 15. Ablation experiment.
Then, the outcome of the Friedman test is given in Table 16.
Table 16. Outcome of the Friedman test based on the results of IMODE, IMODE+, and SMSDE for the CEC 2020 benchmark test suite when D = 15 .
According to the table, IMODE+ performs better than IMODE. Furthermore, SMSDE shows better performance than IMODE+. In brief, both our revisions improve performance.
In summary, our experiments demonstrate that, although SMSDE converges in a manner similar to IMODE, it delivers better performance. Moreover, SMSDE performs better than all peers. To obtain SMSDE, we propose two revisions, both of which lead to improvement.

5. Conclusions

In recent decades, research on real-parameter single-objective optimization has been undertaken. Since 2020, researchers have been concerned with long-term search. Among the algorithms proposed for long-term search, IMODE is a DE ensemble based on two primary mutation strategies and a secondary one. In this paper, we add three other secondary mutation strategies, in addition to the original one. In every generation, just one of the secondary mutation strategies, chosen based on historical performance, is activated. Meanwhile, SQP may be applied not only to the best individual, but also to the oldest individual among the worst ones. Thus, SMSDE is obtained.
Experimental results demonstrate that our SMSDE is competitive for long-term search and overall shows better performance than IMODE and the other benchmark algorithms. Furthermore, SMSDE performs well on real-world problems. In our SMSDE, the successful integration of the six mutation strategies diversifies the operators. This diversification effectively promotes the diversity of the population; therefore, better solutions are obtained. However, it is unlikely that the current combination of mutation strategies is the best possible choice, and further study should be carried out in the future to find a better one.

Author Contributions

Conceptualization, G.C.; methodology, J.P.; validation, J.P.; writing—original draft preparation, J.P.; writing—review and editing, J.P.; supervision, G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Our raw data are deposited on GitHub and are publicly available at https://github.com/jianyipeng/SMSDE.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DE: Differential Evolution
SMSDE: Differential Evolution with Secondary Mutation Strategies
SQP: Sequential Quadratic Programming
CEC: Congress on Evolutionary Computation

References

  1. Sallam, K.M.; Elsayed, S.M.; Chakrabortty, R.K.; Ryan, M.J. Improved multi-operator differential evolution algorithm for solving unconstrained problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–8. [Google Scholar]
  2. Stanovov, V.; Akhmedova, S.; Semenkin, E. NL-SHADE-RSP Algorithm with Adaptive Archive and Selective Pressure for CEC 2021 Numerical Optimization. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Kraków, Poland, 28 June–1 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 809–816. [Google Scholar]
  3. Brest, J.; Maučec, M.S.; Bošković, B. Self-adaptive Differential Evolution Algorithm with Population Size Reduction for Single Objective Bound-Constrained Optimization: Algorithm j21. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Kraków, Poland, 28 June–1 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 817–824. [Google Scholar]
  4. Ye, C.; Li, C.; Li, Y.; Sun, Y.; Yang, W.; Bai, M.; Zhu, X.; Hu, J.; Chi, T.; Zhu, H.; et al. Differential evolution with alternation between steady monopoly and transient competition of mutation strategies. Swarm Evol. Comput. 2023, 83, 101403. [Google Scholar] [CrossRef]
  5. Mohamed, A.W.; Hadi, A.A.; Agrawal, P.; Sallam, K.M.; Mohamed, A.K. Gaining-sharing knowledge based algorithm with adaptive parameters hybrid with IMODE algorithm for solving CEC 2021 benchmark problems. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Kraków, Poland, 28 June–1 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 841–848. [Google Scholar]
  6. Bujok, P.; Kolenovsky, P. Eigen Crossover in Cooperative Model of Evolutionary Algorithms Applied to CEC 2022 Single Objective Numerical Optimisation. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–8. [Google Scholar]
  7. Ebadinezhad, S. DEACO: Adopting dynamic evaporation strategy to enhance ACO algorithm for the traveling salesman problem. Eng. Appl. Artif. Intell. 2020, 92, 103649. [Google Scholar] [CrossRef]
  8. Tawhid, M.A.; Ibrahim, A.M. A hybridization of grey wolf optimizer and differential evolution for solving nonlinear systems. Evol. Syst. 2020, 11, 65–87. [Google Scholar] [CrossRef]
  9. Bakhshipour, M.; Namdari, F.; Samadinasab, S. Optimal coordination of overcurrent relays with constraining communication links using DE–GA algorithm. Electr. Eng. 2021, 103, 2243–2257. [Google Scholar] [CrossRef]
  10. Akopov, A.S.; Beklaryan, A.L.; Zhukova, A.A. Optimization of characteristics for a sto-chastic agent-based model of goods exchange with the use of parallel hybrid genetic algorithm. Cybern. Inf. Technol. 2023, 23, 87–104. [Google Scholar]
  11. Akopov, A.S. A Clustering-Based Hybrid Particle Swarm Optimization Algorithm for Solving a Multisectoral Agent-Based Model. Stud. Inform. Control 2024, 33, 83–95. [Google Scholar] [CrossRef]
  12. Akopov, A.S. MBHGA: A matrix-based hybrid genetic algorithm for solving an agent-based model of controlled trade interactions. IEEE Access 2025, 13, 26843–26863. [Google Scholar] [CrossRef]
  13. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  14. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1658–1665. [Google Scholar]
  15. Brest, J.; Zamuda, A.; Boskovic, B.; Maucec, M.S.; Zumer, V. High-dimensional real-parameter optimization using self-adaptive differential evolution algorithm with population size reduction. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 2032–2039. [Google Scholar]
  16. Sallam, K.M.; Elsayed, S.M.; Sarker, R.A.; Essam, D.L. Landscape-based adaptive operator selection mechanism for differential evolution. Inf. Sci. 2017, 418, 383–404. [Google Scholar] [CrossRef]
  17. Hu, J.; Zhu, X.; Ye, C. Improvement of IMODE-a Differential Evolution Algorithm-by Replacing the Third Mutation Strategy. In Proceedings of the 2023 8th International Conference on Intelligent Computing and Signal Processing (ICSP), Xi’an, China, 21–23 April 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1511–1515. [Google Scholar]
  18. Pei, J.; Mei, Y.; Liu, J.; Zhang, M.; Yao, X. Adaptive operator selection for meta-heuristics: A survey. IEEE Trans. Artif. Intell. 2025, 6, 1991–2012. [Google Scholar] [CrossRef]
  19. DaCosta, L.; Fialho, A.; Schoenauer, M.; Sebag, M. Adaptive operator selection with dynamic multi-armed bandits. In Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation, Atlanta, GA, USA, 12–16 July 2008; pp. 913–920. [Google Scholar]
  20. Krempser, E.; Fialho, A.; Barbosa, H.J. Adaptive operator selection at the hyper-level. In Proceedings of the International Conference on Parallel Problem Solving from Nature; Springer: Berlin/Heidelberg, Germany, 2012; pp. 378–387. [Google Scholar]
  21. Maturana, J.; Fialho, A.; Saubion, F.; Schoenauer, M.; Lardeux, F.; Sebag, M. Adaptive operator selection and management in evolutionary algorithms. In Autonomous Search; Springer: Berlin/Heidelberg, Germany, 2012; pp. 161–189. [Google Scholar]
  22. Gonçalves, R.A.; Almeida, C.P.; Pozo, A. Upper confidence bound (UCB) algorithms for adaptive operator selection in MOEA/D. In Proceedings of the Evolutionary Multi-Criterion Optimization: 8th International Conference, EMO 2015, Guimarães, Portugal, 29 March–1 April 2015; Proceedings, Part I 8. Springer: Berlin/Heidelberg, Germany, 2015; pp. 411–425. [Google Scholar]
  23. Rodríguez-Esparza, E.; Masegosa, A.D.; Oliva, D.; Onieva, E. A new hyper-heuristic based on adaptive simulated annealing and reinforcement learning for the capacitated electric vehicle routing problem. Expert Syst. Appl. 2024, 252, 124197. [Google Scholar] [CrossRef]
  24. Lagos, F.; Pereira, J. Multi-armed bandit-based hyper-heuristics for combinatorial optimization problems. Eur. J. Oper. Res. 2024, 312, 70–91. [Google Scholar] [CrossRef]
  25. Senzaki, B.N.; Venske, S.M.; Almeida, C.P. Hyper-heuristic based NSGA-III for the many-objective quadratic assignment problem. In Proceedings of the Brazilian Conference on Intelligent Systems; Springer: Berlin/Heidelberg, Germany, 2021; pp. 170–185. [Google Scholar]
  26. Gupta, N.; Granmo, O.C.; Agrawala, A. Thompson sampling for dynamic multi-armed bandits. In Proceedings of the 2011 10th International Conference on Machine Learning and Applications and Workshops, Honolulu, HI, USA, 18–21 December 2011; IEEE: Piscataway, NJ, USA, 2011; Volume 1, pp. 484–489. [Google Scholar]
  27. Sun, G.; Yang, B.; Yang, Z.; Xu, G. An adaptive differential evolution with combined strategy for global numerical optimization. Soft Comput. 2020, 24, 6277–6296. [Google Scholar] [CrossRef]
  28. Prestes, L.; Delgado, M.R.; Lüders, R.; Gonçalves, R.; Almeida, C.P. Boosting the performance of MOEA/D-DRA with a multi-objective hyper-heuristic based on irace and UCB method for heuristic selection. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–8. [Google Scholar]
  29. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K.; Awad, N.H. Evaluating the performance of adaptive gainingsharing knowledge based algorithm on cec 2020 benchmark problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–8. [Google Scholar]
  30. Stanovov, V.; Akhmedova, S.; Semenkin, E. NL-SHADE-LBC algorithm with linear parameter adaptation bias change for CEC 2022 Numerical Optimization. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–8. [Google Scholar]
  31. Zhu, X.; Ye, C.; He, L.; Zhu, H.; Chi, T.; Hu, J. Ensemble of differential evolution and gaining–sharing knowledge with exchange of individuals chosen based on fitness and lifetime. Soft Comput. 2023, 27, 14953–14968. [Google Scholar] [CrossRef]
  32. Stanovov, V.; Akhmedova, S.; Semenkin, E. LSHADE Algorithm with Rank-Based Selective Pressure Strategy for Solving CEC 2017 Benchmark Problems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–8. [Google Scholar]
  33. Brest, J.; Maučec, M.S.; Bošković, B. Differential evolution algorithm for single objective bound-constrained optimization: Algorithm j2020. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–8. [Google Scholar]
  34. Biedrzycki, R.; Arabas, J.; Warchulski, E. A Version of NL-SHADE-RSP Algorithm with Midpoint for CEC 2022 Single Objective Bound Constrained Problems. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–8. [Google Scholar]
  35. Wang, Y.; Li, H.X.; Huang, T.; Li, L. Differential evolution based on covariance matrix learning and bimodal distribution parameter setting. Appl. Soft Comput. 2014, 18, 232–247. [Google Scholar] [CrossRef]
  36. Bujok, P.; Tvrdík, J. Enhanced individual-dependent differential evolution with population size adaptation. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia, Spain, 5–8 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1358–1365. [Google Scholar]
  37. Brest, J.; Maučec, M.S.; Bošković, B. Single objective real-parameter optimization: Algorithm jSO. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia, Spain, 5–8 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1311–1318. [Google Scholar]
  38. Du, W.; Leung, S.Y.S.; Tang, Y.; Vasilakos, A.V. Differential evolution with event-triggered impulsive control. IEEE Trans. Cybern. 2017, 47, 244–257. [Google Scholar] [CrossRef]
  39. Das, S.; Mullick, S.S.; Suganthan, P.N. Recent advances in differential evolution–an updated survey. Swarm Evol. Comput. 2016, 27, 1–30. [Google Scholar] [CrossRef]
  40. Jia, D.; Zheng, G.; Khan, M.K. An effective memetic differential evolution algorithm based on chaotic local search. Inf. Sci. 2011, 181, 3175–3187. [Google Scholar] [CrossRef]
  41. Neri, F.; Iacca, G.; Mininno, E. Disturbed exploitation compact differential evolution for limited memory optimization problems. Inf. Sci. 2011, 181, 2469–2487. [Google Scholar] [CrossRef]
  42. Zhan, Z.h.; Zhang, J. Enhance differential evolution with random walk. In Proceedings of the 14th Annual Conference Companion on Genetic and Evolutionary Computation, Philadelphia, PA, USA, 7–11 July 2012; pp. 1513–1514. [Google Scholar]
  43. Poikolainen, I.; Neri, F. Differential evolution with concurrent fitness based local search. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 384–391. [Google Scholar]
  44. Zhang, C.; Chen, J.; Xin, B. Distributed memetic differential evolution with the synergy of Lamarckian and Baldwinian learning. Appl. Soft Comput. 2013, 13, 2947–2959. [Google Scholar] [CrossRef]
  45. Piotrowski, A.P. Adaptive memetic differential evolution with global and local neighborhood-based mutation operators. Inf. Sci. 2013, 241, 164–194. [Google Scholar] [CrossRef]
  46. Rakshit, P.; Konar, A.; Bhowmik, P.; Goswami, I.; Das, S.; Jain, L.C.; Nagar, A.K. Realization of an adaptive memetic algorithm using differential evolution and Q-learning: A case study in multirobot path planning. IEEE Trans. Syst. Man, Cybern. Syst. 2013, 43, 814–831. [Google Scholar] [CrossRef]
