Article

Memory-Based Differential Evolution Algorithms with Self-Adaptive Parameters for Optimization Problems

1 Department of Computer Science & Engineering, Yuan Ze University, Taoyuan 32003, Taiwan
2 Department of Industrial Engineering & Management, Yuan Ze University, Taoyuan 32003, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(10), 1647; https://doi.org/10.3390/math13101647
Submission received: 6 April 2025 / Revised: 9 May 2025 / Accepted: 13 May 2025 / Published: 17 May 2025
(This article belongs to the Special Issue Computational Intelligence and Evolutionary Algorithms)

Abstract

In this study, twelve modified differential evolution algorithms with memory properties and self-adaptive parameters are proposed to address optimization problems. In the experimental process, these modified differential evolution algorithms were applied to 23 continuous test functions. The results indicate that MBDE2 and IHDE-BPSO3 outperform the original differential evolution algorithm and its extended variants, consistently achieving optimal solutions in most cases. The findings suggest that the proposed improved differential evolution algorithms are highly adaptable across various problems, yielding superior results. Additionally, integrating memory properties significantly enhances the algorithms' performance and effectiveness.

1. Introduction

The differential evolution (DE) algorithm [1] is a widely used heuristic algorithm that has been studied and improved in many works [2,3,4,5,6,7,8,9,10,11]. Because its process is fast and simple, it is well suited to solving optimization problems with little time consumption. However, the differential evolution algorithm does not guarantee that the global optimal solution can be found; to address this problem, the differential evolution algorithm has been used as a basis for developing various mutation policies. A mutation policy refers to the part that generates a random solution, such as DE/rand/1 or DE/rand/2, or changes the formula of the algorithm to form a new algorithm. When compared with novel algorithms, the differential evolution algorithm and its variants still hold a good position on most problems, indicating that the diversity introduced by the choice of mutation policy is important. The weight-changing differential evolution algorithm used in this study maintained good results in previous studies. Parameter settings are generally based on the user's past experience, and the concept of memory has rarely been used in the original differential evolution algorithm. Although the original differential evolution algorithm can obtain good results in terms of average performance, it cannot obtain good results with the same parameter settings across various problems. Therefore, in this study, a novel differential evolution algorithm incorporating the concepts of adaptivity and memory is proposed, named the Improved Hybrid Differential Evolution algorithm (IHDE). The concept of the particle swarm optimization algorithm is used to influence the structure of the solutions generated by the formula, allowing good results to be achieved on most problems.

2. Related Works

Storn and Price [1] published the differential evolution algorithm in 1997, which is classified as an intelligent optimization algorithm. The differential evolution algorithm first sets the basic parameters, including group size, scaling factor, and crossover probability, and generates an initial set of solutions x within the given upper and lower bounds according to rand(0, 1). The initial solution x is then mutated to generate a mutation solution, and the structure of the solution is changed by means of the crossover probability to produce a crossover solution x′. In the selection step, if the crossover solution x′ is better than the current solution x, the original solution is replaced by the crossover solution; otherwise, nothing is replaced. The algorithm repeats the mutation, crossover, and selection processes until the end of the iterations. The differential evolution algorithm is also a stochastic model that simulates biological evolution through multiple iterations and preserves individuals that are better adapted to the environment. The advantage of differential evolution algorithms is their smaller number of steps compared to other algorithms: there are only four steps, namely initialization, mutation, crossover, and selection. Therefore, the convergence time of the algorithm is relatively short, while it still retains strong global convergence ability and robustness.
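To make these four steps concrete, the following is a minimal sketch of the canonical DE/rand/1 scheme with binomial crossover; the function and parameter names (de_rand_1, pop_size, CR) are illustrative, not the authors' implementation:

```python
import numpy as np

def de_rand_1(objective, bounds, pop_size=30, F=0.5, CR=0.1, iterations=500):
    # bounds is assumed to be a (dim, 2) NumPy array of lower/upper limits
    dim = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    # Initialization: rand(0, 1) scaled to the given bounds
    X = lo + np.random.rand(pop_size, dim) * (hi - lo)
    fit = np.array([objective(x) for x in X])
    for _ in range(iterations):
        for i in range(pop_size):
            # Mutation: three distinct individuals, none equal to i
            candidates = [k for k in range(pop_size) if k != i]
            r1, r2, r3 = np.random.choice(candidates, 3, replace=False)
            V = X[r1] + F * (X[r2] - X[r3])
            # Crossover: binomial, with one dimension forced from V
            mask = np.random.rand(dim) <= CR
            mask[np.random.randint(dim)] = True
            U = np.clip(np.where(mask, V, X[i]), lo, hi)
            # Selection: greedy replacement (minimization assumed)
            fU = objective(U)
            if fU <= fit[i]:
                X[i], fit[i] = U, fU
    best = int(np.argmin(fit))
    return X[best], fit[best]
```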
The particle swarm optimization (PSO) algorithm [12,13] was proposed by Kennedy and Eberhart [12]. It first sets parameters including population size, inertia weight, acceleration constants, and maximum velocity. A random swarm of particles is then generated, and their positions are adjusted based on each particle's individual flight experience and the best experience of the swarm so far. The particles produced at the beginning can be treated as a set of solutions x, and their velocity v, the individual best position (pbest), and the global best position (gbest) affect the positions of individuals in the next iteration. The algorithm then iteratively searches for the best position (i.e., the best solution) and continues until the end of the iterations or until the given max_steps is reached.
The main framework of the particle swarm optimization algorithm is to generate the initial solution and initial velocity, and then update the velocity and position according to the velocity and position formula until the end of the iteration. The formula is as follows:
$$v_i^{t+1} = \omega \times v_i^{t} + c_1 \times r_1 \times \left(p_{i,best}^{t} - X_i^{t}\right) + c_2 \times r_2 \times \left(g_{best} - X_i^{t}\right) \tag{1}$$
where v is the velocity, t is the iteration, X is the contemporary solution, $p_{best}$ is the individual best position, $g_{best}$ is the global best position, $c_1$ and $c_2$ are the acceleration constants, ω is the inertia weight, and $r_1$ and $r_2$ are random values between 0 and 1.
$$X_i^{t+1} = X_i^{t} + v_i^{t+1} \tag{2}$$
From the position update formula, a new particle position can be obtained, which becomes the contemporary solution for the next iteration. The velocity update formula and the position update formula are applied repeatedly until the end of the iterations or until the user-defined stopping condition is reached. There are many variations of the differential evolution algorithm: Qin and Suganthan [14] proposed a differential evolution algorithm that applies two mutation policies at the same time (Self-adaptive Differential Evolution, SaDE), and Huang et al. [15] proposed a co-evolutionary differential evolution algorithm (CDE), in which the population is divided into two interacting groups that influence each other as the algorithm proceeds.
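Returning to Formulas (1) and (2), one PSO update step over the whole swarm can be sketched as follows; the default values for w, c1, and c2 are common settings from the PSO literature, not taken from this paper:

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    # X, V, pbest are (n, dim) arrays; gbest is a (dim,) array
    r1 = np.random.rand(*X.shape)  # fresh random factors each iteration
    r2 = np.random.rand(*X.shape)
    # Formula (1): inertia term + cognitive pull + social pull
    V_new = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    # Formula (2): move each particle along its new velocity
    X_new = X + V_new
    return X_new, V_new
```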
Parouha and Das [16] proposed a modified differential evolution algorithm called MBDE (Memory-based Hybrid Differential Evolution). In that work, a memory mechanism is added to the formula of the differential evolution algorithm: the individual best position and the global best position are introduced, changing the situation in which past solutions of the differential evolution algorithm have no influence on future generated solutions. The mutation and crossover steps are changed to swarm mutation and swarm crossover and, finally, the greedy selection method is replaced by elitism, which greatly improves the overall algorithm performance. The improved formula is as follows:
$$V_i^{t+1} = X_i^{t} + \frac{f\left(p_{i,best}^{t}\right)}{f\left(X_{i,worst}^{t}\right)}\left(p_{i,best}^{t} - X_i^{t}\right) + \frac{f\left(g_{best}\right)}{f\left(X_{i,worst}^{t}\right)}\left(g_{best} - X_i^{t}\right) \tag{3}$$
In this part, the original DE/rand/2 mutation policy is extended with the concept of memory, where V is the mutation solution, t is the iteration number, X is the contemporary solution, $p_{i,best}^{t}$ is the individual best position, $g_{best}$ is the global best position, $X_{i,worst}^{t}$ is the contemporary worst solution, and f(·) is the objective function.
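A sketch of Formula (3) for a single individual is given below; f_pbest, f_gbest, and f_worst denote the objective values of the corresponding memorized solutions, NumPy arrays are assumed for the vectors, and all names are illustrative rather than the authors' code:

```python
def mbde_mutation(X_i, pbest_i, gbest, f_pbest, f_gbest, f_worst):
    # Objective-value ratios weight the pull toward the remembered
    # personal best and global best positions (the "memory" of the search)
    w_p = f_pbest / f_worst
    w_g = f_gbest / f_worst
    return X_i + w_p * (pbest_i - X_i) + w_g * (gbest - X_i)
```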
$$U_{i,j}^{t+1} = \begin{cases} V_{i,j}^{t+1} + rand(0,1)\left(g_{best} - p_{i,best}^{t}\right) & \text{if } rand_{ij} \le p_{cr} \\ X_{i,j}^{t} + rand(0,1)\left(g_{best} - p_{i,best}^{t}\right) & \text{otherwise} \end{cases} \tag{4}$$
The crossover algorithm also takes into account the concept of memory, where V is the mutation solution, t is the iteration number, X is the contemporary solution, pbest is the individual best position, gbest is the global best position, rand (0, 1) and randij are random real numbers between 0 and 1, and pcr is the user-defined crossover probability.
Elitism:
This part replaces the original greedy selection method with an elite selection method. Diversity is deliberately not handled at this stage, because it is already dealt with in the preceding mutation and crossover steps, so there is no need to address it again in this part. The elitism process follows three rules:
(1) In the final elite selection step, the initial population (or contemporary target vectors) and the population resulting from the final crossover are combined.
(2) The merged population is sorted in ascending order according to the value of the objective function.
(3) The better NP (population number) individuals are retained to move on to the next iteration, and the remaining individuals are removed.
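Assuming a minimization problem, the three rules can be sketched as follows (array and function names are illustrative):

```python
import numpy as np

def elitism(parents, offspring, objective, NP):
    merged = np.vstack([parents, offspring])           # rule (1): merge the populations
    scores = np.array([objective(x) for x in merged])  # rule (2): evaluate and ...
    order = np.argsort(scores)                         # ... sort in ascending order
    return merged[order[:NP]]                          # rule (3): keep the better NP individuals
```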
Chen et al. [17] proposed an HPSO (Hybrid Particle Swarm Optimizer), in which the parameters required by the original PSO formula are made adaptive, and the formula is improved to generate the target value and velocity of the next step. The acceleration constants follow the TVAC (Time-Varying Acceleration Coefficient) scheme commonly used in the previous literature, and the inertia weight is modified. The modified formula is as follows:
$$v_i^{t+1} = \chi\left(v_i^{t} + c_1 \times r_1 \times \left(p_{i,best}^{t} - X_i^{t}\right) + c_2 \times r_2 \times \left(g_{best} - X_i^{t}\right)\right),$$
$$\chi = \frac{2}{\left|2 - \phi - \sqrt{\phi^2 - 4\phi}\right|}, \quad \phi = c_1 + c_2, \quad c_1 = \left(c_{1f} - c_{1i}\right)\frac{M_j}{M_{max}} + c_{1i}, \quad c_2 = \left(c_{2f} - c_{2i}\right)\frac{M_j}{M_{max}} + c_{2i} \tag{5}$$
where $c_{1f}$ = 0.5, $c_{1i}$ = 2.5, $c_{2f}$ = 2.5, $c_{2i}$ = 0.5. χ replaces the inertia weight and further affects the proportions of the individual best position and the global best position in the velocity; v is the velocity, t is the iteration number, X is the contemporary solution, $M_j$ is the current iteration, $M_{max}$ is the maximum iteration, pbest is the individual best position, and gbest is the global best position.
$$X_i^{t+1} = X_i^{t} \times w_{ij} + v_i^{t+1} \times \bar{w}_{ij} + \rho \times g_{best} \times \psi, \quad w_{ij} = \psi = \left(\frac{\exp\left(f(j)/u\right)}{1 + \exp\left(f(j)/u\right)}\right)^{iter}, \quad \bar{w}_{ij} = 1 - w_{ij} \tag{6}$$
where ρ is a random number between 0 and 1, u is the average objective value of the initial population, and f(j) is the objective value of the jth particle. This improvement greatly reduces the burden of user parameter setting. With parameter adaptation, the parameter values change from iteration to iteration, such that the algorithm can behave differently in different periods of the search. The proportions of the individual best position and the global best position can also be changed according to the needs of the algorithm and the progress of the iteration. The acceleration constants are set by the adaptive scheme commonly used in the previous literature, and the overall velocity update formula is affected by an additional inertia weight.

Hizarci et al. [13] proposed a Binary Particle Swarm Optimization (BPSO) variant, which differs from the previous method in that it sets the acceleration constants with a new adaptive scheme. The acceleration constant scheme commonly used in the past is abandoned, and the inertia weight is set adaptively to reduce the impact of the velocity from the previous iteration. For the other formulas, the original particle swarm optimization algorithm is used unchanged. The acceleration constant adaptation and inertia weight adaptation formulas are as follows:
$$c_1 = c_{1i} + 2 \times e^{-\left(2.2 \times iter/max_{iter}\right)^2}, \qquad c_2 = c_{1f} - 2 \times e^{-\left(2.2 \times iter/max_{iter}\right)^2} \tag{7}$$
where $c_{1i}$ = 0.5, $c_{1f}$ = 2.5, iter is the current iteration, and $max_{iter}$ is the maximum number of iterations.
$$\omega_{iter} = \omega_{max} - \frac{\omega_{max} - \omega_{min}}{max_{iter}} \times iter \tag{8}$$
where $\omega_{max}$ = 0.9 and $\omega_{min}$ = 0.4.
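A sketch of the schedules of Formulas (7) and (8), using the constants stated above (c1i = 0.5, c1f = 2.5, ωmax = 0.9, ωmin = 0.4); the negative sign in the exponent is an assumption made so that c1 decays from 2.5 toward 0.5 while c2 grows from 0.5 toward 2.5:

```python
import math

def bpso_coefficients(it, max_iter, c1i=0.5, c1f=2.5, w_max=0.9, w_min=0.4):
    decay = 2.0 * math.exp(-(2.2 * it / max_iter) ** 2)
    c1 = c1i + decay   # Formula (7): cognitive weight, large early, small late
    c2 = c1f - decay   # Formula (7): social weight, small early, large late
    w = w_max - (w_max - w_min) / max_iter * it  # Formula (8): linearly decreasing inertia
    return c1, c2, w
```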
The literature on the differential evolution algorithm has long underscored the importance of mutation policies in balancing exploration and exploitation. While many enhancements in recent years have focused on hybrid schemes or adaptive parameter control, few have re-examined the individual-level guidance mechanism in the mutation step. By basing the selection of guiding exemplars on an exponential ranking scheme, Liu et al. [18] addressed a common shortcoming: the tendency of conventional schemes to concentrate too heavily on a few top individuals, thereby risking premature convergence when the best solutions fall into local optima. The authors presented a novel variant of differential evolution with an innovative mutation policy, termed "DE/current-to-rwrand/1", which serves as the core of what they call function value ranking aware differential evolution (FVRADE). In contrast to conventional elite-based methods such as "DE/current-to-best/1", which may restrict diversity, or random-based methods such as "DE/current-to-rand/1", which may be too exploratory, the proposed operator offers a dynamic compromise. The ability to adjust the weight-control parameter means that practitioners can steer the search toward more diversification or more exploitation as needed, making FVRADE particularly robust for high-dimensional and complicated landscapes.
Yu et al. [19] introduced a reinforcement learning-based multi-objective differential evolution algorithm (RLMODE) designed to solve constrained multi-objective optimization problems. In RLMODE, the authors enhance the classical DE framework by incorporating a feedback mechanism implemented via a Q-learning strategy to dynamically tune key control parameters based on the relationship between parent and offspring. The algorithm assigns rewards that guide adaptive adjustments by evaluating whether an offspring dominates its parent; this, in turn, helps to steer the search process toward feasible regions with improved convergence and diversity.
Chen et al.'s work [20] represents a significant step forward in DE algorithm design by integrating complementary mutation strategies, dual-stage parameter self-adaptation mechanisms, and an enhanced population sizing scheme. It adds value to the literature by providing both a conceptual framework and empirical evidence, and it enhances search efficiency and robustness for high-dimensional and real-world optimization problems, making LSHADE-Code a promising candidate for complex optimization challenges in various fields.
Chen et al. [21] addressed the challenging photovoltaic (PV) parameter extraction task by hybridizing an adaptive DE algorithm with multiple linear regression (MLR). In contrast to a “pure” evolutionary search, the authors decomposed the nonlinear PV model into two parts. The DE engine is employed iteratively to estimate the parameters embedded in the nonlinear functions, while the MLR component analytically computes the related linear coefficients. This separation effectively reduces the overall dimensionality of the search space and leverages both exploration via DE and exploitation via regression for improved accuracy.
Yang et al. [22] introduced a dual-population framework embedded in the DE algorithm (DP-DE) to clearly address the challenges posed by coupled and dynamic constraints in cement calcination processes. The emphasis on a “decision-first, optimization-later” strategy is an interesting shift from traditional single-step approaches, which often struggle with stability. The comparison with NSGA-II, MOEA/D, and particle swarm optimization suggests that DP-DE not only improves convergence speed but also enhances robustness under real-world disturbances. The multi-step transition strategy plays a crucial role in mitigating oscillations and overshoots. It is essential in ensuring smooth control adjustments without exceeding operational constraints.
Based on the review of the above papers, differential evolution algorithms with memory properties show good performance and can effectively deal with the problem of population diversity. Likewise, the acceleration constants are a factor with a significant influence on the velocity formula of the particle swarm optimization algorithm. In this study, we therefore consider applying the formulas of particle swarm optimization within the differential evolution algorithm, while continuing with the other steps of the differential evolution algorithm to cross over and disrupt the structure of the solution. The resulting algorithm is intended to adapt to most problems and achieve good results.

3. The Proposed Method

In this study, twelve modified differential evolution algorithms are proposed. They mainly use the concepts of the individual best solution (pbest) and global best solution (gbest) to improve the mutation and crossover steps of the differential evolution algorithm, and they integrate the concepts and related parameters of the particle swarm optimization algorithm into the mutation formula, so that they may achieve better results than the original differential evolution algorithm and related modified algorithms. In the process of improving the algorithm, it was found that random weights or random values are often used to control how strongly the solutions generated by the mutation policies are pulled toward pbest and gbest. This approach often causes the convergence results to be unstable, alternating between good and bad outcomes. Even if the average result is better than those provided by most well-known algorithms and the original differential evolution algorithm, such occasional poor results lead to the disadvantage of an extremely poor convergence effect. Therefore, this study uses an adaptive method that changes with the iterations of the algorithm to control the pbest and gbest proportions, eliminating this instability while retaining the advantage of the simple steps of the differential evolution algorithm; in this way, better performance than the original algorithm can be attained. This study used the MBDE architecture [16] to extend the formulas in the mutation, crossover, and selection steps of the original differential evolution algorithm and added variants of the particle swarm optimization algorithm, HPSO and BPSO, to develop an enhanced DE algorithm. Various strategies are proposed for comparison such that the target algorithm can achieve more stability, retain the advantages of the simple steps of the original algorithm, and obtain better results at the end of the overall algorithm. The proposed algorithms can be divided into four parts; namely,
(1) A simple increase in the selection group.
(2) Additional crossover solutions generated based on contemporary solutions.
(3) A crossover step that uses the mutation solution as a basis to generate a new crossover solution.
(4) Improvements to the differential evolution algorithm based on improved particle swarm optimization algorithms.
The basic concepts and corresponding algorithms of these four classifications are described in Table 1.
MBDE2 builds upon the original MBDE algorithm by enlarging the set of individuals considered in the selection step. While the original MBDE algorithm performs well, this enhancement leverages the broader search range of the mutation solution to accelerate the discovery of optimal solutions in the early stages of the algorithm. To preserve the effectiveness of MBDE, its overall structure and formulas remain unchanged, with modifications made specifically to the selection process to account for mutated individuals.
MBDE2:
Mutation: Make use of Formula (3).
Crossover:
$$U_{i,j}^{t+1} = \begin{cases} V_{i,j}^{t+1} + rand(0,1)\left(g_{best} - p_{i,best}^{t}\right) & \text{if } rand_{ij} \le p_{cr} \\ X_{i,j}^{t} + rand(0,1)\left(g_{best} - p_{i,best}^{t}\right) & \text{otherwise} \end{cases} \tag{9}$$
Selection: Combine the contemporary solutions $X_i^t$, mutation solutions $V_i^{t+1}$, and crossover solutions $U_i^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
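A sketch of this enlarged selection pool (a minimization problem and NumPy arrays are assumed; names are illustrative):

```python
import numpy as np

def mbde2_selection(X, V, U, objective, NP):
    # Unlike MBDE, the mutation solutions V compete directly for survival
    pool = np.vstack([X, V, U])
    scores = np.array([objective(s) for s in pool])
    return pool[np.argsort(scores)[:NP]]  # the better NP individuals survive
```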
IHDE, IHDE2, and IHDE-2cross generate additional crossover solutions based on contemporary solutions; their main contribution is a new crossover formula, which is built on vectors related to the contemporary solution. In this study, the mutation solution can be used as a group of independent individuals that compete with other individuals, so the new crossover solution has no effect on the mutation solution. Finally, the three algorithms differ in which sets of solutions enter the selection step.
IHDE:
Mutation: Make use of Formula (3).
Crossover: Make use of Formula (10) to obtain crossover solution $U2_i^{t+1}$.
Selection: Select the mutation solution $V_i^{t+1}$ and crossover solution $U2_i^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
IHDE2:
Mutation: Make use of Formula (3).
Crossover:
$$U2_{i,j}^{t+1} = \begin{cases} X_{i,j}^{t} + rand(0,1)\left(g_{best} - p_{i,best}^{t}\right) & \text{if } rand_{ij} \le p_{cr} \\ X_{i,j}^{t} + rand(0,1)\left(g_{best} - X_{i}^{t}\right) & \text{otherwise} \end{cases} \tag{10}$$
Selection: Select the contemporary solution $X_i^t$, mutation solution $V_i^{t+1}$, and crossover solution $U2_i^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
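A sketch of the Formula (10) crossover used by IHDE2: both branches perturb the contemporary solution, pulled either by the gap between gbest and pbest or by the gap between gbest and the solution itself (names are illustrative):

```python
import numpy as np

def ihde2_crossover(X_i, pbest_i, gbest, pcr=0.1):
    U2 = np.empty_like(X_i)
    for j in range(X_i.shape[0]):
        if np.random.rand() <= pcr:
            # memory-guided branch: pull relative to the personal best
            U2[j] = X_i[j] + np.random.rand() * (gbest[j] - pbest_i[j])
        else:
            # local branch: pull relative to the contemporary solution itself
            U2[j] = X_i[j] + np.random.rand() * (gbest[j] - X_i[j])
    return U2
```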
IHDE-2cross:
Mutation: Make use of Formula (3).
Crossover 1: Make use of Formula (9) to obtain crossover solution $U_i^{t+1}$.
Crossover 2: Make use of Formula (10) to obtain crossover solution $U2_i^{t+1}$.
Selection: Select the contemporary solution $X_i^t$, mutation solution $V_i^{t+1}$, and crossover solutions $U_i^{t+1}$ and $U2_i^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
IHDE-mbi and IHDE-mbm generate a new crossover solution in the crossover step based on the mutation solution. The main concept of these two algorithms is that, although the mutation solution is sufficient to compete with other individuals to become part of the next generation, its wide search range can be improved if it is influenced by past memories in the crossover step. Therefore, these two algorithms are proposed; they differ slightly in the crossover part.
IHDE-mbi:
Mutation: Make use of Formula (3).
Crossover:
$$U_{i,j}^{t+1} = \begin{cases} V_{i,j}^{t+1} + rand(0,1)\left(g_{best} - p_{i,best}^{t}\right) & \text{if } rand_{ij} \le p_{cr} \\ V_{i,j}^{t+1} + rand(0,1)\left(g_{best} - X_{i}^{t}\right) & \text{otherwise} \end{cases} \tag{11}$$
Selection: Select the contemporary solution $X_i^t$, mutation solution $V_i^{t+1}$, and crossover solution $U_i^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
IHDE-mbm:
Mutation: Make use of Formula (3).
Crossover:
$$U_{i,j}^{t+1} = \begin{cases} V_{i,j}^{t+1} + rand(0,1)\left(g_{best} - p_{i,best}^{t}\right) & \text{if } rand_{ij} \le p_{cr} \\ V_{i,j}^{t+1} + rand(0,1)\left(g_{best} - V_{i}^{t+1}\right) & \text{otherwise} \end{cases} \tag{12}$$
Selection: Select the contemporary solution $X_i^t$, mutation solution $V_i^{t+1}$, and crossover solution $U_i^{t+1}$, and retain the better NP (population number) individuals for the next iteration.

IHDE-BPSO3, IHDE-BPSO4, and IHDE-BPSO5 are improvements based on different uses of the Binary Particle Swarm Optimization (BPSO) algorithm. BPSO is used because it proposes a new adaptive acceleration constant scheme. In the study by Hizarci et al. [13], this adaptive scheme obtained a good ranking compared with the adaptive schemes commonly used in the previous particle swarm optimization literature, and it showed potentially beneficial effects. Therefore, the BPSO velocity and position update formulas and their adaptive parameter schedules can be used to generate different mutation solutions and crossover solutions. In two of these algorithms, new crossover solutions are generated, which disrupt the structure of the solution in a probabilistic way.
IHDE-BPSO3:
Mutation: Make use of Formula (1); the parameters of the mutation part are set using Formula (7).
Crossover: Make use of Formula (9).
Selection: Select the contemporary solution $X_i^t$, mutation solution $V_i^{t+1}$, and crossover solution $U_i^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
IHDE-BPSO4:
Mutation 1: Make use of Formula (1) to obtain mutation solution $V_i^{t+1}$.
Mutation 2: Make use of Formula (2) to obtain mutation solution $V2_i^{t+1}$. The parameters of the mutation part are set using Formulas (7) and (8).
Crossover 1: Make use of Formula (9) to obtain crossover solution $U_i^{t+1}$.
Crossover 2:
$$U2_{i,j}^{t+1} = \begin{cases} V2_{i,j}^{t+1} & \text{if } rand_{ij} \le p_{cr1} \\ V_{i,j}^{t+1} + rand(0,1)\left(g_{best} - p_{i,best}^{t}\right) & \text{if } p_{cr1} < rand_{ij} \le p_{cr2} \\ X_{i,j}^{t} + rand(0,1)\left(g_{best} - p_{i,best}^{t}\right) & \text{otherwise} \end{cases} \tag{13}$$
The threshold pcr1 is set to 0.5 because mutation solution 2, generated using BPSO, demonstrates superior performance; the higher probability increases its influence on the solution's structure. Meanwhile, pcr2 is set to 0.75 such that mutation solution 1 and the contemporary solution affect the structure of the solution with smaller probabilities.
Selection: Select the contemporary solution $X_i^t$, mutation solution $V2_i^{t+1}$, and crossover solutions $U_i^{t+1}$ and $U2_i^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
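A sketch of the three-branch crossover of Formula (13) with the thresholds stated above (pcr1 = 0.5, pcr2 = 0.75); array names are illustrative:

```python
import numpy as np

def ihde_bpso4_crossover2(X_i, V_i, V2_i, pbest_i, gbest, pcr1=0.5, pcr2=0.75):
    U2 = np.empty_like(X_i)
    for j in range(X_i.shape[0]):
        r = np.random.rand()
        if r <= pcr1:      # most likely branch: copy from the BPSO mutation solution
            U2[j] = V2_i[j]
        elif r <= pcr2:    # memory-guided perturbation of mutation solution 1
            U2[j] = V_i[j] + np.random.rand() * (gbest[j] - pbest_i[j])
        else:              # memory-guided perturbation of the contemporary solution
            U2[j] = X_i[j] + np.random.rand() * (gbest[j] - pbest_i[j])
    return U2
```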
IHDE-BPSO5:
Mutation 1: Make use of Formula (1) to obtain mutation solution $V_i^{t+1}$.
Mutation 2: Make use of Formula (2) to obtain mutation solution $V2_i^{t+1}$. The parameters of the mutation part are set using Formulas (7) and (8).
Crossover 1: Make use of Formula (9) to obtain crossover solution $U_i^{t+1}$.
Crossover 2: Make use of Formula (13) to obtain crossover solution $U2_i^{t+1}$.
Selection: Select the contemporary solution $X_i^t$, mutation solutions $V_i^{t+1}$ and $V2_i^{t+1}$, and crossover solutions $U_i^{t+1}$ and $U2_i^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
IHDE-HPSO3, IHDE-HPSO4, and IHDE-HPSO5 are improvements based on different uses of the improved particle swarm optimization algorithm HPSO. HPSO is used because it adds novel adaptive methods to the velocity and position update formulas, and because the weight that originally affected only the previous generation's velocity is changed to affect the entire velocity formula. The proposed methods incorporate these enhancements to adaptive parameters, integrating their concepts into the algorithm's architecture to generate diverse solutions.
IHDE-HPSO3:
Mutation: Make use of Formula (5).
Crossover: Make use of Formula (9).
Selection: Select the contemporary solution $X_i^t$, mutation solution $V_i^{t+1}$, and crossover solution $U_i^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
IHDE-HPSO4:
Mutation 1: Make use of Formula (5) to obtain mutation solution $V_i^{t+1}$.
Mutation 2: Make use of Formula (6) to obtain mutation solution $V2_i^{t+1}$.
Crossover 1: Make use of Formula (9) to obtain crossover solution $U_i^{t+1}$.
Crossover 2: Make use of Formula (13) to obtain crossover solution $U2_i^{t+1}$.
Selection: Select the contemporary solution $X_i^t$, mutation solution $V2_i^{t+1}$, and crossover solutions $U_i^{t+1}$ and $U2_i^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
IHDE-HPSO5:
Mutation 1: Make use of Formula (5) to obtain mutation solution $V_i^{t+1}$.
Mutation 2: Make use of Formula (6) to obtain mutation solution $V2_i^{t+1}$.
Crossover 1: Make use of Formula (9) to obtain crossover solution $U_i^{t+1}$.
Crossover 2: Make use of Formula (13) to obtain crossover solution $U2_i^{t+1}$.
Selection: Select the contemporary solution $X_i^t$, mutation solutions $V_i^{t+1}$ and $V2_i^{t+1}$, and crossover solutions $U_i^{t+1}$ and $U2_i^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
These 12 improvements were compared with DE/rand/1 proposed by Storn and Price [1], as well as the MBDE proposed by Parouha and Das [16], and the results are presented in the next section.

4. Experimental Results

Section 4.1 introduces the optimization problems tested in this study, which were run in Python 3.7 on a machine with an Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz, an Intel UHD Graphics 630 GPU, and 16.0 GB of RAM. In Section 4.2 and Section 4.3, the original differential evolution algorithm is compared with most well-known algorithms; the results show that it performs competitively among the various algorithms. Finally, in Section 4.4, a comparison between the improved and original differential evolution algorithms and their variants is presented.

4.1. Benchmark Functions

In order to test the performance of the proposed improved differential evolution algorithms, the 23 test functions used in the study by Abualigah et al. [23] were adopted, as shown in Table 2, Table 3 and Table 4. These test functions include unimodal test functions, multimodal test functions, and fixed-dimension multimodal test functions.
The unimodal test functions F1~F7 were tested at dimension 30. The initial population size was set to 30, the number of iterations was set to 500, and results were averaged over 30 runs. The range of each function is shown in Table 2.
The multimodal test functions F8~F13 were tested in 30 dimensions. The initial population size was set to 30, the number of iterations was set to 500, and an average over 30 runs was used as the basic setting. Their ranges are given in Table 3.
Table 3. Multimodal test functions.

| Function | Description | Dimensions | Range | $f_{min}$ |
| --- | --- | --- | --- | --- |
| F8 | $f(x) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{\lvert x_i \rvert}\right)$ | 30, 100, 500, 1000 | [−500, 500] | −418.9829 × n |
| F9 | $f(x) = \sum_{i=1}^{n} \left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$ | 30, 100, 500, 1000 | [−5.12, 5.12] | 0 |
| F10 | $f(x) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e$ | 30, 100, 500, 1000 | [−32, 32] | 0 |
| F11 | $f(x) = 1 + \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)$ | 30, 100, 500, 1000 | [−600, 600] | 0 |
| F12 | $f(x) = \frac{\pi}{n}\left\{10\sin(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_n - 1)^2\right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m & x_i > a \\ 0 & -a \le x_i \le a \\ k(-x_i - a)^m & x_i < -a \end{cases}$ | 30, 100, 500, 1000 | [−50, 50] | 0 |
| F13 | $f(x) = 0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n}(x_i - 1)^2\left[1 + \sin^2(3\pi x_i + 1)\right] + (x_n - 1)^2\left[1 + \sin^2(2\pi x_n)\right]\right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 30, 100, 500, 1000 | [−50, 50] | 0 |
The fixed-dimension multimodal test functions F14~F23 were run with an initial population size of 30, 500 iterations, and 30 runs as the basic setting. The dimensions and ranges are shown in Table 4.
The wide range of unimodal, multimodal, and fixed-dimension multimodal test functions helps to assess the convergence speed of an algorithm, its ability to jump out of local optima, and its overall convergence ability more comprehensively. Most of these test functions are difficult to search, so most well-known algorithms, and even emerging variants of these algorithms, may not be able to find the best value. In recent years, some scholars have proposed new algorithms, various algorithm variants, or differing parameter settings to cope with complex test functions such as those considered here. In this study, the formulas of the improved differential evolution algorithms were tested with adaptively set parameters, such that the parameters were no longer set in a random or fixed way. The experimental results show that good results can be obtained on these test functions, and the performance of the algorithm can be effectively improved.
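As an illustration of how these benchmarks plug into the algorithms, a sketch of F9 (the Rastrigin function from Table 3) is given below:

```python
import numpy as np

def rastrigin(x):
    # F9: global minimum 0 at x = 0; search range [-5.12, 5.12] per dimension
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```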
Table 4. Fixed-dimension multimodal test functions.

| Function | Description | Dimensions | Range | $f_{min}$ |
| --- | --- | --- | --- | --- |
| F14 | $f(x) = \left(\frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2}(x_i - a_{ij})^6}\right)^{-1}$ | 2 | [−65, 65] | 1 |
| F15 | $f(x) = \sum_{i=1}^{11}\left[a_i - \frac{x_1\left(b_i^2 + b_i x_2\right)}{b_i^2 + b_i x_3 + x_4}\right]^2$ | 4 | [−5, 5] | 0.00030 |
| F16 | $f(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$ | 2 | [−5, 5] | −1.0316 |
| F17 | $f(x) = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos x_1 + 10$ | 2 | [−5, 5] | 0.398 |
| F18 | $f(x) = \left[1 + (x_1 + x_2 + 1)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\right)\right] \times \left[30 + (2x_1 - 3x_2)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\right)\right]$ | 2 | [−2, 2] | 3 |
| F19 | $f(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{3} a_{ij}\left(x_j - p_{ij}\right)^2\right)$ | 3 | [−1, 2] | −3.86 |
| F20 | $f(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{6} a_{ij}\left(x_j - p_{ij}\right)^2\right)$ | 6 | [0, 1] | −3.32 |
| F21 | $f(x) = -\sum_{i=1}^{5}\left[(X - a_i)(X - a_i)^T + c_i\right]^{-1}$ | 4 | [0, 10] | −10.1532 |
| F22 | $f(x) = -\sum_{i=1}^{7}\left[(X - a_i)(X - a_i)^T + c_i\right]^{-1}$ | 4 | [0, 10] | −10.4028 |
| F23 | $f(x) = -\sum_{i=1}^{10}\left[(X - a_i)(X - a_i)^T + c_i\right]^{-1}$ | 4 | [0, 10] | −10.5363 |

4.2. Algorithm and Parameter Settings Compared with the Original Differential Evolution Algorithm

This subsection introduces the basic parameter settings of the original differential evolution algorithm and the 11 well-known algorithms used for comparison in this study. With the exception of the original differential evolution algorithm, all parameter settings follow the literature presented by Abualigah et al. [23]. These algorithms were applied to the 23 test functions described in Section 4.1 and include the following.
  • Particle Swarm Optimization, PSO;
  • Cuckoo Search Algorithm, CS;
  • Biogeography-based Optimization, BBO;
  • Differential Evolution, DE;
  • Gravitational Search Algorithm, GSA;
  • Firefly Algorithm, FA;
  • Genetic Algorithm, GA;
  • Moth-Flame Optimization, MFO;
  • Grey Wolf Optimizer, GWO;
  • Bat Algorithm, BAT;
  • Flower Pollination Algorithm, FPA;
  • Arithmetic Optimization Algorithm, AOA.
The improved differential evolution algorithms change the crossover probability to 0.1, compared with the crossover probability of 0.5 used by Abualigah et al. [23]. The value 0.1 was chosen because it is often used in the literature when comparing related differential evolution algorithms. The experiment was set to 30 runs, 500 iterations, and 30 dimensions for each of F1~F13. For both differential evolution algorithms, the scaling factor F was set to 0.5. The results of the comparison are presented in Table 5.
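A sketch of this experimental protocol, reusing the de_rand_1 and rastrigin sketches from earlier sections: 30 independent runs of 500 iterations on the 30-dimensional F9, reporting the average best objective value:

```python
import numpy as np

bounds = np.array([[-5.12, 5.12]] * 30)   # F9 search range, 30 dimensions
results = [de_rand_1(rastrigin, bounds, pop_size=30, F=0.5, CR=0.1,
                     iterations=500)[1] for _ in range(30)]
print("AVE over 30 runs:", np.mean(results))
```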

4.3. Results of the Original Differential Evolution Algorithm on the Test Functions

This section applies the original differential evolution algorithm to the three types of test functions described in Section 4.1 and compares it with the algorithms described in Section 4.2. The results obtained by applying the original differential evolution algorithm used in this study are compared with the results of the other algorithms outlined by Abualigah et al. [23]. The tables show how each algorithm performed on each type of test function. The Gravitational Search Algorithm (GSA) [24] cannot be applied to the fixed-dimensional multimodal test functions (F14~F23), so its results are displayed with "-" and its ranking is set to last place. The comparison results are shown in Table 6, Table 7 and Table 8, where a rank of 1 indicates the best value.
Table 6 shows the results of the differential evolution algorithm compared with the 11 other algorithms. It was the best of the 12 methods on F5 and F6, while it ranked 4th on F1, 3rd on F2 and F3, and 7th and 11th on F4 and F7, respectively. Overall, these results are not particularly strong.
Table 6. The differential evolution algorithm and eleven well-known algorithms applied to the unimodal test functions.

| F | M | DE | GA | PSO | BBO | FPA | GWO | BAT | FA | CS | MFO | GSA | AOA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | AVE | 1.38 × 10^−3 | 1.03 × 10^3 | 1.83 × 10^4 | 7.59 × 10^1 | 2.01 × 10^13 | 1.18 × 10^−27 | 6.59 × 10^4 | 7.11 × 10^−3 | 9.06 × 10^−4 | 1.01 × 10^3 | 6.08 × 10^2 | 6.67 × 10^−7 |
| F1 | Rank | 4 | 9 | 10 | 6 | 12 | 1 | 11 | 5 | 3 | 8 | 7 | 2 |
| F2 | AVE | 1.88 × 10^−4 | 2.47 × 10^1 | 3.58 × 10^2 | 1.36 × 10^−3 | 3.22 × 10^1 | 9.71 × 10^−17 | 2.71 × 10^8 | 4.34 × 10^−1 | 1.49 × 10^−1 | 3.19 × 10^1 | 2.27 × 10^1 | 0.00 × 10^0 |
| F2 | Rank | 3 | 8 | 11 | 4 | 10 | 2 | 12 | 6 | 5 | 9 | 7 | 1 |
| F3 | AVE | 1.59 × 10^−1 | 2.65 × 10^4 | 4.05 × 10^4 | 1.21 × 10^4 | 1.41 × 10^3 | 5.12 × 10^−5 | 1.38 × 10^5 | 1.66 × 10^3 | 2.10 × 10^−1 | 2.34 × 10^4 | 1.35 × 10^5 | 6.87 × 10^−6 |
| F3 | Rank | 3 | 9 | 10 | 7 | 5 | 2 | 12 | 6 | 4 | 8 | 11 | 1 |
| F4 | AVE | 4.19 × 10^1 | 5.17 × 10^1 | 4.39 × 10^1 | 3.02 × 10^1 | 2.38 × 10^1 | 1.24 × 10^−6 | 8.51 × 10^1 | 1.11 × 10^−1 | 9.65 × 10^−2 | 7.00 × 10^1 | 7.87 × 10^1 | 1.40 × 10^−3 |
| F4 | Rank | 7 | 9 | 8 | 6 | 5 | 1 | 12 | 4 | 3 | 10 | 11 | 2 |
| F5 | AVE | 1.03 × 10^1 | 1.95 × 10^4 | 1.96 × 10^7 | 1.82 × 10^3 | 3.17 × 10^5 | 2.70 × 10^1 | 2.10 × 10^8 | 7.97 × 10^1 | 2.76 × 10^1 | 7.35 × 10^3 | 7.41 × 10^2 | 2.49 × 10^1 |
| F5 | Rank | 1 | 9 | 11 | 7 | 10 | 3 | 12 | 5 | 4 | 8 | 6 | 2 |
| F6 | AVE | 1.08 × 10^−4 | 9.01 × 10^2 | 1.87 × 10^4 | 6.71 × 10^1 | 1.70 × 10^3 | 8.44 × 10^−1 | 6.69 × 10^4 | 6.94 × 10^−3 | 3.13 × 10^−3 | 2.68 × 10^3 | 3.08 × 10^3 | 3.47 × 10^−4 |
| F6 | Rank | 1 | 7 | 11 | 6 | 8 | 5 | 12 | 4 | 3 | 9 | 10 | 2 |
| F7 | AVE | 1.35 × 10^1 | 1.91 × 10^−1 | 1.07 × 10^1 | 2.91 × 10^−3 | 3.44 × 10^−1 | 1.70 × 10^−3 | 4.57 × 10^1 | 6.62 × 10^−2 | 7.29 × 10^−2 | 4.50 × 10^0 | 1.12 × 10^−1 | 3.92 × 10^−6 |
| F7 | Rank | 11 | 7 | 10 | 3 | 8 | 2 | 12 | 4 | 5 | 9 | 6 | 1 |
| No. 1 Rank | | 2 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 3 |
| Ave Rank | | 4.29 | 8.29 | 10.14 | 5.57 | 8.29 | 2.29 | 11.86 | 4.86 | 3.86 | 8.71 | 8.29 | 1.57 |

Note: No. 1 Rank indicates the number of times the algorithm ranked in first place; Ave Rank indicates the average of the rankings.
The results in Table 7 show that the original differential evolution algorithm ranked first on the multimodal test functions F8, F12, and F13, and third on F10; F9 and F11 were poorer, ranking seventh and sixth, respectively. The average performance on F8~F13 was good.
Table 7. The differential evolution algorithm and eleven well-known algorithms applied to the multimodal test functions.

| F | M | DE | GA | PSO | BBO | FPA | GWO | BAT | FA | CS | MFO | GSA | AOA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F8 | AVE | −1.26 × 10^4 | −1.26 × 10^4 | −3.86 × 10^3 | −1.24 × 10^4 | −6.45 × 10^3 | −5.91 × 10^3 | −2.33 × 10^3 | −5.85 × 10^3 | −5.19 × 10^1 | −8.48 × 10^3 | −2.35 × 10^3 | −1.22 × 10^4 |
| F8 | Rank | 1 | 1 | 9 | 3 | 6 | 7 | 11 | 8 | 12 | 5 | 10 | 4 |
| F9 | AVE | 3.34 × 10^1 | 9.04 × 10^0 | 2.87 × 10^2 | 0.00 × 10^0 | 1.82 × 10^2 | 2.19 × 10^0 | 1.92 × 10^2 | 3.82 × 10^1 | 1.51 × 10^1 | 1.59 × 10^2 | 3.10 × 10^1 | 3.42 × 10^−7 |
| F9 | Rank | 7 | 4 | 12 | 1 | 10 | 3 | 11 | 8 | 5 | 9 | 6 | 2 |
| F10 | AVE | 1.04 × 10^−2 | 1.36 × 10^1 | 1.75 × 10^1 | 2.13 × 10^0 | 7.14 × 10^0 | 1.03 × 10^−3 | 1.92 × 10^1 | 4.58 × 10^−2 | 3.29 × 10^−2 | 1.74 × 10^1 | 3.74 × 10^0 | 8.88 × 10^−16 |
| F10 | Rank | 3 | 9 | 11 | 6 | 8 | 2 | 12 | 5 | 4 | 10 | 7 | 1 |
| F11 | AVE | 1.00 × 10^0 | 1.01 × 10^1 | 1.70 × 10^2 | 1.46 × 10^0 | 1.73 × 10^1 | 4.76 × 10^−3 | 6.01 × 10^2 | 4.23 × 10^−3 | 4.29 × 10^−5 | 3.10 × 10^1 | 4.86 × 10^−1 | 0.00 × 10^0 |
| F11 | Rank | 6 | 8 | 11 | 7 | 9 | 4 | 12 | 3 | 2 | 10 | 5 | 1 |
| F12 | AVE | 1.04 × 10^−6 | 4.77 × 10^0 | 1.51 × 10^7 | 6.68 × 10^−1 | 3.05 × 10^2 | 4.83 × 10^−2 | 4.71 × 10^8 | 3.13 × 10^−4 | 5.57 × 10^−5 | 2.46 × 10^2 | 4.63 × 10^−1 | 4.28 × 10^−6 |
| F12 | Rank | 1 | 8 | 11 | 7 | 10 | 5 | 12 | 4 | 3 | 9 | 6 | 2 |
| F13 | AVE | 1.09 × 10^−13 | 1.52 × 10^1 | 5.73 × 10^7 | 1.82 × 10^0 | 9.59 × 10^4 | 5.96 × 10^−1 | 9.40 × 10^8 | 2.08 × 10^−3 | 8.19 × 10^−3 | 2.73 × 10^7 | 7.61 × 10^0 | 3.10 × 10^−1 |
| F13 | Rank | 1 | 8 | 11 | 6 | 9 | 5 | 12 | 2 | 3 | 10 | 7 | 4 |
| No. 1 Rank | | 3 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
| Ave Rank | | 3.17 | 6.33 | 10.83 | 5.00 | 8.67 | 4.33 | 11.67 | 5.00 | 4.83 | 8.83 | 6.83 | 2.33 |

Note: No. 1 Rank indicates the number of times the algorithm ranked in first place; Ave Rank indicates the average of the rankings.
Table 8 shows the results of the differential evolution algorithm applied to the fixed-dimensional multimodal test functions. The results show that the differential evolution algorithm achieved 1st place on F14, F16, F17, and F18, while it placed 5th on both F15 and F20, and 11th, 9th, 11th, and 8th on F19, F21, F22, and F23, respectively. Therefore, there is still much room for improvement in the differential evolution algorithm on the fixed-dimensional multimodal test problems.
Table 8. The differential evolution algorithm and eleven well-known algorithms applied to the fixed-dimensional multimodal test functions.

| F | M | DE | GA | PSO | BBO | FPA | GWO | BAT | FA | CS | MFO | GSA | AOA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F14 | AVE | 9.98 × 10^−1 | 9.98 × 10^−1 | 1.39 × 10^0 | 9.98 × 10^−1 | 9.98 × 10^−1 | 4.17 × 10^0 | 1.27 × 10^1 | 3.51 × 10^0 | 1.27 × 10^1 | 2.74 × 10^0 | - | 9.98 × 10^−1 |
| F14 | Rank | 1 | 1 | 6 | 1 | 1 | 9 | 10 | 8 | 10 | 7 | 12 | 1 |
| F15 | AVE | 1.55 × 10^−3 | 3.33 × 10^−2 | 1.61 × 10^−3 | 1.66 × 10^−2 | 6.88 × 10^−4 | 6.24 × 10^−3 | 3.00 × 10^−2 | 1.01 × 10^−3 | 3.13 × 10^−4 | 2.35 × 10^−3 | - | 3.12 × 10^−4 |
| F15 | Rank | 5 | 11 | 6 | 9 | 3 | 8 | 10 | 4 | 2 | 7 | 12 | 1 |
| F16 | AVE | −1.03 × 10^0 | −3.78 × 10^−1 | −1.03 × 10^0 | −8.30 × 10^−1 | −1.03 × 10^0 | −1.03 × 10^0 | −6.87 × 10^−1 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | - | −1.03 × 10^0 |
| F16 | Rank | 1 | 11 | 1 | 9 | 1 | 1 | 10 | 1 | 1 | 1 | 12 | 1 |
| F17 | AVE | 3.98 × 10^−1 | 5.24 × 10^−1 | 4.00 × 10^−1 | 5.49 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | - | 3.98 × 10^−1 |
| F17 | Rank | 1 | 10 | 9 | 11 | 1 | 1 | 1 | 1 | 1 | 1 | 12 | 1 |
| F18 | AVE | 3.00 × 10^0 | 3.00 × 10^0 | 3.10 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 1.47 × 10^1 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | - | 3.00 × 10^0 |
| F18 | Rank | 1 | 1 | 10 | 1 | 1 | 1 | 11 | 1 | 1 | 1 | 12 | 1 |
| F19 | AVE | −2.01 × 10^0 | −3.42 × 10^0 | −3.86 × 10^0 | −3.78 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.84 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | - | −3.86 × 10^0 |
| F19 | Rank | 11 | 10 | 1 | 9 | 1 | 1 | 8 | 1 | 1 | 1 | 12 | 1 |
| F20 | AVE | −3.27 × 10^0 | −1.61 × 10^0 | −3.11 × 10^0 | −2.71 × 10^0 | −3.30 × 10^0 | −3.26 × 10^0 | −3.25 × 10^0 | −3.28 × 10^0 | −3.32 × 10^0 | −3.24 × 10^0 | - | −3.32 × 10^0 |
| F20 | Rank | 5 | 11 | 9 | 10 | 3 | 6 | 7 | 4 | 1 | 8 | 12 | 1 |
| F21 | AVE | −5.05 × 10^0 | −6.66 × 10^0 | −4.15 × 10^0 | −8.32 × 10^0 | −5.22 × 10^0 | −8.64 × 10^0 | −4.27 × 10^0 | −7.67 × 10^0 | −5.06 × 10^0 | −6.89 × 10^0 | - | −8.85 × 10^0 |
| F21 | Rank | 9 | 6 | 11 | 3 | 7 | 2 | 10 | 4 | 8 | 5 | 12 | 1 |
| F22 | AVE | −5.08 × 10^0 | −5.58 × 10^0 | −6.01 × 10^0 | −9.38 × 10^0 | −5.34 × 10^0 | −1.04 × 10^1 | −5.61 × 10^0 | −9.64 × 10^0 | −5.09 × 10^0 | −8.26 × 10^0 | - | −1.04 × 10^1 |
| F22 | Rank | 11 | 8 | 6 | 4 | 9 | 1 | 7 | 3 | 10 | 5 | 12 | 1 |
| F23 | AVE | −5.12 × 10^0 | −4.70 × 10^0 | −4.72 × 10^0 | −6.24 × 10^0 | −5.29 × 10^0 | −1.01 × 10^1 | −3.97 × 10^0 | −9.75 × 10^0 | −5.13 × 10^0 | −7.66 × 10^0 | - | −1.05 × 10^1 |
| F23 | Rank | 8 | 10 | 9 | 5 | 6 | 2 | 11 | 3 | 7 | 4 | 12 | 1 |
| No. 1 Rank | | 4 | 2 | 2 | 2 | 5 | 5 | 1 | 4 | 5 | 4 | 0 | 10 |
| Ave Rank | | 5.30 | 7.90 | 6.80 | 6.20 | 3.30 | 3.20 | 8.50 | 3.00 | 4.20 | 4.00 | 12.00 | 1.00 |

Note: No. 1 Rank indicates the number of times the algorithm ranked in first place; Ave Rank indicates the average of the rankings.
As can be seen in Table 6, Table 7 and Table 8, the Arithmetic Optimization Algorithm (AOA) [23] obtained the best average results across the three types of test functions. The Differential Evolution (DE) algorithm ranked first most often on the multimodal test functions. Compared with the other algorithms, its average performance ranked in the upper middle for all types of test functions. Table 9 shows the number of No. 1 rankings and the average ranking of each algorithm.
The above table shows that the differential evolution algorithm can perform well compared with other well-known algorithms. The results demonstrate why most scholars have used the differential evolution algorithm as a basis, taking advantage of its small number of steps and simple formulas to make changes to the algorithm.

4.4. Comparison of the Results of Twelve Improved Differential Evolution Algorithms Applied to the Test Functions

In this subsection, the improved differential evolution algorithms are applied to the three types of test functions described in Section 4.1, revealing how DE, MBDE, and the twelve improved differential evolution algorithms proposed in this study perform on the various types of test functions. A comparison of the results is shown in Table 10, Table 11 and Table 12, in which a rank of 1 indicates the best values.
Table 10 shows that IHDE-mbi, the algorithm whose crossover step is based on the mutation solution, performed best on all seven unimodal test functions. It is better than both the original differential evolution algorithm (DE) and the memory-based hybrid differential evolution algorithm (MBDE).
Table 10. The twelve improved differential evolution algorithms, along with DE and MBDE, applied to the unimodal test functions.

| F | M | DE | MBDE | MBDE2 | IHDE2 | IHDE-mbi | IHDE-mbm | IHDE-2cross | IHDE | IHDE-BPSO3 | IHDE-BPSO4 | IHDE-HPSO3 | IHDE-HPSO4 | IHDE-BPSO5 | IHDE-HPSO5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | AVE | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| F1 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F2 | AVE | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| F2 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F3 | AVE | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| F3 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F4 | AVE | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| F4 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F5 | AVE | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| F5 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F6 | AVE | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| F6 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F7 | AVE | 7.72 × 10^−2 | 1.20 × 10^−4 | 5.84 × 10^−5 | 1.48 × 10^−4 | 1.92 × 10^−5 | 7.46 × 10^−4 | 3.53 × 10^−4 | 2.42 × 10^−4 | 1.46 × 10^−4 | 7.42 × 10^−4 | 3.46 × 10^−4 | 2.32 × 10^−4 | 7.66 × 10^−4 | 3.23 × 10^−4 |
| F7 | Rank | 14 | 3 | 2 | 5 | 1 | 12 | 10 | 7 | 4 | 11 | 9 | 6 | 13 | 8 |
| No. 1 Rank | | 6 | 6 | 6 | 6 | 7 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 6 |
| Ave Rank | | 2.85 | 1.28 | 1.14 | 1.57 | 1.00 | 2.57 | 2.28 | 1.85 | 1.42 | 2.42 | 2.14 | 1.71 | 2.71 | 2.00 |

Note: No. 1 Rank indicates the number of times the algorithm ranked in first place; Ave Rank indicates the average of the rankings.
Table 11 shows that, although the original differential evolution algorithm (DE) performed poorly on F9, it ranked first the most often across the six multimodal test functions. However, IHDE-BPSO3, which was improved based on a different particle swarm optimization algorithm, had the best average performance. Most of the twelve new methods proposed here performed better than the original differential evolution algorithm and the memory-based hybrid differential evolution algorithm (MBDE).
Table 11. The twelve improved differential evolution algorithms, along with DE and MBDE, applied to the multimodal test functions.

| F | M | DE | MBDE | MBDE2 | IHDE2 | IHDE-mbi | IHDE-mbm | IHDE-2cross | IHDE | IHDE-BPSO3 | IHDE-BPSO4 | IHDE-HPSO3 | IHDE-HPSO4 | IHDE-BPSO5 | IHDE-HPSO5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F8 | AVE | −1.26 × 10^4 | −1.26 × 10^4 | −1.26 × 10^4 | −1.26 × 10^4 | −1.26 × 10^4 | −1.26 × 10^4 | −1.26 × 10^4 | −1.26 × 10^4 | −1.26 × 10^4 | −1.26 × 10^4 | −1.26 × 10^4 | −1.26 × 10^4 | −1.26 × 10^4 | −1.26 × 10^4 |
| F8 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F9 | AVE | 2.03 × 10^1 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| F9 | Rank | 14 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F10 | AVE | 8.91 × 10^−2 | 8.91 × 10^−2 | 8.91 × 10^−2 | 8.91 × 10^−2 | 8.91 × 10^−2 | 8.91 × 10^−2 | 8.91 × 10^−2 | 8.91 × 10^−2 | 8.91 × 10^−2 | 8.91 × 10^−2 | 8.91 × 10^−2 | 8.91 × 10^−2 | 8.91 × 10^−2 | 8.91 × 10^−2 |
| F10 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F11 | AVE | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| F11 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F12 | AVE | 1.60 × 10^−36 | 4.14 × 10^−29 | 1.41 × 10^−31 | 3.93 × 10^−31 | 2.64 × 10^−14 | 3.70 × 10^−16 | 2.26 × 10^−30 | 5.65 × 10^−31 | 6.28 × 10^−32 | 1.27 × 10^−30 | 1.41 × 10^−31 | 8.55 × 10^−5 | 1.41 × 10^−31 | 5.65 × 10^−31 |
| F12 | Rank | 1 | 11 | 3 | 6 | 13 | 12 | 10 | 7 | 2 | 9 | 3 | 14 | 3 | 7 |
| F13 | AVE | 8.39 × 10^−32 | 6.50 × 10^−29 | 6.21 × 10^−30 | 1.89 × 10^−29 | 1.39 × 10^−29 | 1.25 × 10^−28 | 1.63 × 10^−29 | 9.66 × 10^−30 | 1.59 × 10^−30 | 1.63 × 10^−29 | 5.29 × 10^−7 | 9.53 × 10^−8 | 2.03 × 10^−29 | 2.22 × 10^−6 |
| F13 | Rank | 1 | 10 | 3 | 8 | 5 | 11 | 6 | 4 | 2 | 6 | 13 | 12 | 9 | 14 |
| No. 1 Rank | | 5 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
| Ave Rank | | 3.16 | 4.16 | 1.66 | 3.00 | 3.66 | 4.50 | 3.33 | 2.50 | 1.33 | 3.16 | 3.33 | 5.00 | 2.66 | 4.16 |

Note: No. 1 Rank indicates the number of times the algorithm ranked in first place; Ave Rank indicates the average of the rankings.
On the fixed-dimensional multimodal test problems, the memory-based hybrid differential evolution algorithm (MBDE) obtained first place on all of the problems, as did seven of the new methods proposed in this study: MBDE2, IHDE2, IHDE-2cross, IHDE-BPSO3, IHDE-BPSO4, IHDE-BPSO5, and IHDE-HPSO5. Therefore, most of the proposed algorithms, together with MBDE, are well suited to fixed-dimensional multimodal optimization problems. The results are presented in Table 12.
Table 12. The twelve improved differential evolution algorithms, along with DE and MBDE, applied to the fixed-dimensional multimodal test functions.

| F | M | DE | MBDE | MBDE2 | IHDE2 | IHDE-mbi | IHDE-mbm | IHDE-2cross | IHDE | IHDE-BPSO3 | IHDE-BPSO4 | IHDE-HPSO3 | IHDE-HPSO4 | IHDE-BPSO5 | IHDE-HPSO5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F14 | AVE | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 |
| F14 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F15 | AVE | 1.17 × 10^−3 | 3.08 × 10^−4 | 3.08 × 10^−4 | 3.08 × 10^−4 | 3.08 × 10^−4 | 4.32 × 10^−4 | 3.08 × 10^−4 | 3.08 × 10^−4 | 3.08 × 10^−4 | 3.08 × 10^−4 | 5.42 × 10^−4 | 3.08 × 10^−4 | 3.08 × 10^−4 | 3.08 × 10^−4 |
| F15 | Rank | 14 | 1 | 1 | 1 | 1 | 12 | 1 | 1 | 1 | 1 | 13 | 1 | 1 | 1 |
| F16 | AVE | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 |
| F16 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F17 | AVE | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 |
| F17 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F18 | AVE | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 |
| F18 | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F21 | AVE | −1.02 × 10^1 | −1.02 × 10^1 | −1.02 × 10^1 | −1.02 × 10^1 | −4.65 × 10^0 | −1.02 × 10^1 | −1.02 × 10^1 | −2.68 × 10^0 | −1.02 × 10^1 | −1.02 × 10^1 | −5.06 × 10^0 | −5.06 × 10^0 | −1.02 × 10^1 | −1.02 × 10^1 |
| F21 | Rank | 1 | 1 | 1 | 1 | 13 | 1 | 1 | 14 | 1 | 1 | 12 | 11 | 1 | 1 |
| F22 | AVE | −1.04 × 10^1 | −1.04 × 10^1 | −1.04 × 10^1 | −1.04 × 10^1 | −1.03 × 10^1 | −1.04 × 10^1 | −1.04 × 10^1 | −2.77 × 10^0 | −1.04 × 10^1 | −1.04 × 10^1 | −4.97 × 10^0 | −1.04 × 10^1 | −1.04 × 10^1 | −1.04 × 10^1 |
| F22 | Rank | 1 | 1 | 1 | 1 | 12 | 1 | 1 | 14 | 1 | 1 | 13 | 1 | 1 | 1 |
| F23 | AVE | −1.05 × 10^1 | −1.05 × 10^1 | −1.05 × 10^1 | −1.05 × 10^1 | −2.92 × 10^0 | −1.04 × 10^1 | −1.05 × 10^1 | −1.05 × 10^1 | −1.05 × 10^1 | −1.05 × 10^1 | −1.05 × 10^1 | −5.13 × 10^0 | −1.05 × 10^1 | −1.05 × 10^1 |
| F23 | Rank | 1 | 1 | 1 | 1 | 14 | 12 | 1 | 1 | 1 | 1 | 1 | 13 | 1 | 1 |
| No. 1 Rank | | 7 | 8 | 8 | 8 | 5 | 6 | 8 | 6 | 8 | 8 | 5 | 6 | 8 | 8 |
| Ave Rank | | 2.62 | 1.00 | 1.00 | 1.00 | 5.50 | 3.75 | 1.00 | 4.25 | 1.00 | 1.00 | 5.37 | 3.75 | 1.00 | 1.00 |

Note: No. 1 Rank indicates the number of times the algorithm ranked in first place; Ave Rank indicates the average of the rankings.
In summary, IHDE-mbi achieved the best performance with respect to DE and MBDE, as well as the other improved differential evolution algorithms in this study, when applied to the unimodal test functions. This suggests that, for unimodal optimization problems, taking the mutation solution of the original MBDE as the basis of a further crossover step is particularly suitable.
However, IHDE-BPSO3 obtained a better average performance than the other algorithms on the multimodal test functions. This indicates that its memory-based mutation step with adaptive acceleration constants and inertia weight gives it good adaptability to the multimodal test functions.
Regarding the fixed-dimensional multimodal test functions, most of the twelve algorithms proposed in this study achieved the best solution for this type of optimization problem. The MBDE [16] achieved the same performance on the fixed-dimensional multimodal test functions, which means that its memory properties have a very positive impact on the algorithm's performance.
The multimodal optimization problem tests the ability of algorithms to jump out of local optima. Overall, the twelve improved DE algorithms proposed in this study showed a good ability to deal with complex problems. Table 13 summarizes the overall results.
From Table 13, the test performances of the MBDE2 and IHDE-BPSO3 algorithms proposed in this study were very good over the 21 optimization problems as a whole: out of the 21 tests, they ranked first in almost every one. This shows that the twelve improved differential evolution algorithms possess improved efficiency overall, owing to the added concepts of memory and adaptive parameter setting. Their performance on the multimodal optimization problems was stable, which also indicates that the addition of these concepts is helpful in addressing complex and difficult-to-search problems. The twelve improved differential evolution algorithms proposed in this study also highlight the differences between the BPSO and HPSO concepts, with the algorithms using adaptive acceleration constants showing obvious differences in performance.

5. Conclusions

Optimization problems have arisen with the rapid development of information technology and engineering, and solving problems quickly, accurately, and effectively is a highly sought-after goal. However, different algorithms have inherent drawbacks depending on the nature of the problem, such as long execution times or suboptimal results. Additionally, when their parameters are set arbitrarily or manually, their stability may be compromised, ultimately affecting the effectiveness of the solution. This study introduced improved differential evolution algorithms designed to address optimization challenges. In these algorithms, key steps and formulas are modified, and memory properties and a parameter adaptation scheme are incorporated. These enhancements improve the stability of the solution throughout the optimization process. Furthermore, the improved differential evolution algorithms maintain the simplicity of the original differential evolution algorithm while delivering superior performance compared to both the original algorithm and other well-known alternatives.

Author Contributions

Conceptualization, S.-K.C. and G.-H.W.; methodology, S.-K.C. and G.-H.W.; software, Y.-H.W.; validation, S.-K.C., G.-H.W. and Y.-H.W.; formal analysis, G.-H.W. and Y.-H.W.; investigation, S.-K.C.; resources, G.-H.W. and Y.-H.W.; data curation, G.-H.W. and Y.-H.W.; writing—original draft preparation, S.-K.C.; writing—review and editing, S.-K.C.; visualization, G.-H.W.; supervision, G.-H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
2. Abbass, H.A.; Sarker, R. The pareto differential evolution algorithm. Int. J. Artif. Intell. Tools 2002, 11, 531–552.
3. Das, S.; Abraham, A.; Konar, A. Automatic clustering using an improved differential evolution algorithm. IEEE Trans. Syst. Man Cybern. A Syst. Hum. 2007, 38, 218–237.
4. Das, S.; Konar, A.; Chakraborty, U.K. Two improved differential evolution schemes for faster global search. In Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, Washington, DC, USA, 25–29 June 2005; pp. 991–998.
5. Das, S.; Mullick, S.S.; Suganthan, P.N. Recent advances in differential evolution—An updated survey. Swarm Evol. Comput. 2016, 27, 1–30.
6. Draa, A.; Bouzoubia, S.; Boukhalfa, I. A sinusoidal differential evolution algorithm for numerical optimisation. Appl. Soft Comput. 2015, 27, 99–126.
7. Eltaeib, T.; Mahmood, A. Differential evolution: A survey and analysis. Appl. Sci. 2018, 8, 1945.
8. Tao, S.; Yang, Y.; Zhao, R.; Todo, H.; Tang, Z. Competitive elimination improved differential evolution for wind farm layout optimization problems. Mathematics 2024, 12, 3762.
9. Nguyen, V.-T.; Tran, V.-M.; Bui, N.-T. Self-adaptive differential evolution with gauss distribution for optimal mechanism design. Appl. Sci. 2023, 13, 6284.
10. Chao, M.; Zhang, M.; Zhang, Q.; Jiang, Z.; Zhou, L. A two-stage adaptive differential evolution algorithm with accompanying populations. Mathematics 2025, 13, 440.
11. Sui, X.; Chu, S.-C.; Pan, J.-S.; Luo, H. Parallel compact differential evolution for optimization applied to image segmentation. Appl. Sci. 2020, 10, 2195.
12. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
13. Hizarci, H.; Demirel, O.; Turkay, B.E. Distribution network reconfiguration using time-varying acceleration coefficient assisted binary particle swarm optimization. Eng. Sci. Technol. Int. J. 2022, 35, 101230.
14. Qin, A.K.; Suganthan, P.N. Self-adaptive differential evolution algorithm for numerical optimization. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; pp. 1785–1791.
15. Huang, F.Z.; Wang, L.; He, Q. An effective co-evolutionary differential evolution for constrained optimization. Appl. Math. Comput. 2007, 186, 340–356.
16. Parouha, R.P.; Das, K.N. A robust memory based hybrid differential evolution for continuous optimization problem. Knowl. Based Syst. 2016, 103, 118–131.
17. Chen, K.; Zhou, F.; Yin, L.; Wang, S.; Wang, Y.; Wan, F. A hybrid particle swarm optimizer with sine cosine acceleration coefficients. Inf. Sci. 2018, 422, 218–241.
18. Liu, D.; He, H.; Yang, Q.; Wang, Y.; Jeon, S.-W.; Zhang, J. Function value ranking aware differential evolution for global numerical optimization. Swarm Evol. Comput. 2023, 78, 101282.
19. Yu, X.; Xu, P.; Wang, F.; Wang, X. Reinforcement learning-based differential evolution algorithm for constrained multi-objective optimization problems. Eng. Appl. Artif. Intell. 2024, 131, 107817.
20. Chen, B.; Ouyang, H.; Li, S.; Ding, W. Dual-stage self-adaptive differential evolution with complementary and ensemble mutation strategies. Swarm Evol. Comput. 2025, 93, 101855.
21. Chen, B.; Ouyang, H.; Li, S.; Gao, L.; Ding, W. Photovoltaic parameter extraction through an adaptive differential evolution algorithm with multiple linear regression. Appl. Soft Comput. 2025, 176, 113117.
22. Yang, X.; An, L.; Gao, Y.; Hao, X. Multi-objective optimization method for cement calcination system based on dual population differential evolution algorithm. J. Process Control 2025, 151, 103448.
23. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
24. Rashedi, E.; Rashedi, E.; Nezamabadi-Pour, H. A comprehensive survey on gravitational search algorithm. Swarm Evol. Comput. 2018, 41, 140–158.
Table 1. Summary of target algorithm classification.
Algorithm Parts | Conception
a. Simple increase in the selection group (MBDE2) | The mutant individuals generated in the mutation step are added to the selection group, where they are ranked and selected in the selection step.
b. Additional cross-solutions based on contemporary solutions (IHDE, IHDE2, and IHDE-2cross) | The crossover formula is changed slightly so that the new crossover solution searches closer to the contemporary solution. In this part, the selection step chooses among the contemporary solution, the mutation solution, the crossover solution, and the new crossover solution.
c. The crossover step uses the mutation solution as a basis to generate a new crossover solution (IHDE-mbi, IHDE-mbm) | The original crossover step is changed so that it searches near the mutation solution to generate a new crossover solution.
d. Differential evolution improved on the basis of improved particle swarm algorithms (IHDE-BPSO3, IHDE-BPSO4, IHDE-BPSO5, IHDE-HPSO3, IHDE-HPSO4, and IHDE-HPSO5) | The velocity and position update formulas of different particle swarm algorithms generate new mutation solutions and new crossover solutions within the differential evolution framework (see the sketch after this table).
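As a rough illustration of part (d), the snippet below generates an extra candidate from a PSO-style velocity and position update; in the hybrid variants such a candidate would enter the selection step alongside the usual mutation and crossover solutions. The coefficients (w, c1, c2) and the coupling shown are our assumptions, not the published IHDE-BPSO/HPSO update formulas.

```python
import numpy as np

# Illustrative sketch only: a PSO-style velocity/position update used to
# produce an additional candidate solution. Coefficient values and the
# exact coupling to DE are assumptions, not the paper's formulas.

rng = np.random.default_rng(1)
dim = 30
w, c1, c2 = 0.7, 1.5, 1.5           # inertia and acceleration coefficients

x = rng.uniform(-100, 100, dim)      # contemporary (current) solution
v = np.zeros(dim)                    # its velocity
pbest = x.copy()                     # personal best position
gbest = rng.uniform(-100, 100, dim)  # population-best position

# Velocity and position update (Kennedy & Eberhart [12]); the resulting
# candidate would compete in the selection step of the hybrid variants.
v = w * v + c1 * rng.random(dim) * (pbest - x) + c2 * rng.random(dim) * (gbest - x)
pso_candidate = x + v
```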
Table 2. Unimodal test functions.
Function | Description | Dimensions | Range | $f_{\min}$
F1 | $f(x)=\sum_{i=1}^{n} x_i^{2}$ | 30, 100, 500, 1000 | [−100, 100] | 0
F2 | $f(x)=\sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 30, 100, 500, 1000 | [−10, 10] | 0
F3 | $f(x)=\sum_{i=1}^{d} \bigl( \sum_{j=1}^{i} x_j \bigr)^{2}$ | 30, 100, 500, 1000 | [−100, 100] | 0
F4 | $f(x)=\max_{i} \{ |x_i|,\ 1 \le i \le n \}$ | 30, 100, 500, 1000 | [−100, 100] | 0
F5 | $f(x)=\sum_{i=1}^{n-1} \bigl[ 100 (x_{i+1}-x_i^{2})^{2} + (x_i-1)^{2} \bigr]$ | 30, 100, 500, 1000 | [−30, 30] | 0
F6 | $f(x)=\sum_{i=1}^{n} \left[ x_i+0.5 \right]^{2}$ | 30, 100, 500, 1000 | [−100, 100] | 0
F7 | $f(x)=\sum_{i=1}^{n} i x_i^{4} + \mathrm{random}[0,1)$ | 30, 100, 500, 1000 | [−1.28, 1.28] | 0
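For readers who want to verify the reconstructed formulas, here are direct NumPy implementations of two of the benchmarks (F1 and F5); the assertions confirm that each evaluates to zero at its known global minimum.

```python
import numpy as np

# Direct implementations of two unimodal benchmarks from Table 2,
# usable as a sanity check of the reconstructed formulas.

def f1_sphere(x):
    """F1: sum of squares; minimum 0 at x = 0."""
    return np.sum(x ** 2)

def f5_rosenbrock(x):
    """F5: sum_{i=1}^{n-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]; minimum 0 at x = 1."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

assert f1_sphere(np.zeros(30)) == 0.0
assert f5_rosenbrock(np.ones(30)) == 0.0
```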
Table 5. A comparison of 30 runs of the differential evolution algorithm with crossover probabilities of 0.1 and 0.5 (bold in the original indicates the better result).
F | DE (pcr = 0.1) | DE used by Abualigah et al. [23] (pcr = 0.5)
F1 | 1.38 × 10^−3 | 1.33 × 10^−3
F2 | 1.88 × 10^−4 | 6.83 × 10^−3
F3 | 1.59 × 10^−1 | 3.97 × 10^4
F4 | 4.19 × 10^1 | 1.15 × 10^1
F5 | 1.03 × 10^−1 | 1.06 × 10^2
F6 | 1.08 × 10^−4 | 1.44 × 10^−3
F7 | 1.35 × 10^1 | 5.24 × 10^−2
F8 | −1.26 × 10^4 | −6.82 × 10^3
F9 | 3.34 × 10^1 | 1.58 × 10^2
F10 | 1.04 × 10^−2 | 1.21 × 10^−2
F11 | 1.00 × 10^0 | 3.52 × 10^−2
F12 | 1.04 × 10^−6 | 2.25 × 10^−5
F13 | 1.09 × 10^−13 | 9.12 × 10^−3
F14 | 9.98 × 10^−1 | 1.23 × 10^0
F15 | 1.55 × 10^−3 | 5.63 × 10^−4
F16 | −1.03 × 10^0 | −1.03 × 10^0
F17 | 3.98 × 10^−1 | 3.98 × 10^−1
F18 | 3.00 × 10^0 | 3.00 × 10^0
F19 | −2.01 × 10^0 | −3.86 × 10^0
F20 | −3.27 × 10^0 | −3.27 × 10^0
F21 | −5.05 × 10^0 | −8.65 × 10^0
F22 | −5.08 × 10^0 | −9.75 × 10^0
F23 | −5.12 × 10^0 | −1.05 × 10^0
No. 1 Rank | 15 | 12
Note: No. 1 Rank indicates the number of times the algorithm achieved first place.
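The contrast between the two columns comes down to how often genes are inherited from the mutant vector. The sketch below shows standard binomial crossover, where each gene of the trial vector is taken from the mutant with probability pcr; it is a generic illustration, and the function and variable names are ours rather than the paper's.

```python
import numpy as np

# Generic binomial crossover: each gene comes from the mutant with
# probability pcr, and one gene is forced from the mutant so the trial
# always differs from the target. Names are illustrative.

def binomial_crossover(target, mutant, pcr, rng):
    mask = rng.random(target.size) < pcr
    mask[rng.integers(target.size)] = True  # guarantee one mutant gene
    return np.where(mask, mutant, target)

rng = np.random.default_rng(0)
target = np.zeros(10)
mutant = np.ones(10)
# With pcr = 0.1 the trial stays close to the target; with pcr = 0.5 roughly
# half of its genes come from the mutant, which is why the two settings in
# Table 5 favor different test functions.
for pcr in (0.1, 0.5):
    trial = binomial_crossover(target, mutant, pcr, rng)
    print(pcr, int(trial.sum()), "mutant genes")
```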
Table 9. The number of first rankings and the average ranking of each algorithm.
Algorithm | Total Average Rank | Total No. 1 Rank
AOA | 1.52 | 15
GWO | 3.17 | 7
FA | 4.17 | 4
CS | 4.22 | 5
DE | 4.65 | 9
BBO | 5.87 | 3
FPA | 6.35 | 5
MFO | 6.52 | 4
GA | 7.52 | 2
PSO | 8.74 | 2
GSA | 9.70 | 0
BAT | 10.09 | 1
Note: No. 1 Rank indicates the number of times the algorithm ranked in first place; Average Rank indicates the average of the rankings.
Table 13. The number of first places and the average ranking of the twelve improved differential evolution algorithms, along with DE and MBDE.
Algorithm | Total Average Rank | Total No. 1 Rank
IHDE-BPSO3 | 1.2381 | 18
MBDE2 | 1.2381 | 18
IHDE2 | 1.7619 | 18
MBDE | 2.0000 | 18
IHDE-BPSO5 | 2.0476 | 18
IHDE-2cross | 2.0952 | 18
IHDE-BPSO4 | 2.0952 | 18
IHDE-HPSO5 | 2.2381 | 18
DE | 2.8571 | 18
IHDE | 2.9524 | 16
IHDE-HPSO4 | 3.4286 | 16
IHDE-mbi | 3.4762 | 16
IHDE-mbm | 3.5714 | 16
IHDE-HPSO3 | 3.7143 | 15
Note: No. 1 Rank indicates the number of times the algorithm ranked in first place; Average Rank indicates the average of the rankings.
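For reference, summary statistics of this kind can be computed by ranking the algorithms on each test function and then averaging. The sketch below uses made-up toy data and SciPy's rankdata with ties assigned the minimum rank, so tied best results all count toward the No. 1 tally, consistent with the many ties in Table 13.

```python
import numpy as np
from scipy.stats import rankdata

# Sketch of the "Total Average Rank" and "Total No. 1 Rank" statistics,
# computed from a matrix of mean objective values
# (rows = test functions, columns = algorithms). Toy data for illustration.
results = np.array([
    [1.0e-3, 2.0e-3, 5.0e-1],
    [4.2e+1, 1.2e+1, 1.2e+1],
    [3.0e+0, 3.0e+0, 3.0e+0],
])
ranks = np.vstack([rankdata(row, method="min") for row in results])
print("average rank:", ranks.mean(axis=0))        # lower is better
print("No. 1 count:", (ranks == 1).sum(axis=0))   # ties all count as rank 1
```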
