Mathematics
  • Article
  • Open Access

17 May 2025

Memory-Based Differential Evolution Algorithms with Self-Adaptive Parameters for Optimization Problems

1 Department of Computer Science & Engineering, Yuan Ze University, Taoyuan 32003, Taiwan
2 Department of Industrial Engineering & Management, Yuan Ze University, Taoyuan 32003, Taiwan
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Computational Intelligence and Evolutionary Algorithms

Abstract

In this study, twelve modified differential evolution algorithms with memory properties and self-adaptive parameters are proposed to address optimization problems. In the experiments, these modified differential evolution algorithms were applied to 23 continuous test functions. The results indicate that MBDE2 and IHDE-BPSO3 outperform the original differential evolution algorithm and its extended variants, consistently achieving optimal solutions in most cases. The findings suggest that the proposed improved differential evolution algorithms are highly adaptable across various problems, yielding superior results. Additionally, integrating memory properties significantly enhances the algorithms' performance and effectiveness.

1. Introduction

The differential evolution (DE) algorithm [1] is a mainstream heuristic algorithm that has been studied and improved in many works [2,3,4,5,6,7,8,9,10,11]. Because the process is fast and simple, it is well suited to solving optimization problems with little time consumption. However, the differential evolution algorithm does not guarantee that the global optimal solution will be found; to address this problem, the differential evolution algorithm has been used as a basis for developing various mutation policies. A mutation policy is the part of the algorithm that generates a random solution, such as DE/rand/1 or DE/rand/2; changing its formula yields a new algorithm. When compared with novel algorithms, the differential evolution algorithm and its variants still perform well on most problems, indicating that diversity in the choice of mutation policy is important. The weight-changing differential evolution algorithm used in this study maintained good results in previous studies. Parameter settings are generally based on the user's past experience, and the concept of memory has rarely been used in the original differential evolution algorithm. Although the original differential evolution algorithm achieves good average performance, it cannot obtain good results with the same parameter settings across diverse problems. Therefore, in this study, a novel differential evolution algorithm incorporating adaptivity and memory, named the Improved Hybrid Differential Evolution (IHDE) algorithm, is proposed. Concepts from the particle swarm optimization algorithm are used to influence the structure of the solutions generated by the formulas, allowing good results to be achieved for most problems.

3. The Proposed Method

In this study, twelve modified differential evolution algorithms are proposed. They mainly use the concepts of the individual best solution (pbest) and global best solution (gbest) to improve the mutation and crossover steps of the differential evolution algorithm, and integrate the concepts and related parameters of the particle swarm optimization algorithm into the mutation formula, so that better results may be achieved than with the original differential evolution algorithm and related modified algorithms. However, in the process of improving the algorithm, it was found that random weights or random values are often used to set the reference ratios of pbest and gbest in the solutions generated by the mutation policies. This approach often makes the convergence results unstable, alternating between good and bad runs. Even if the average result is better than those of most well-known algorithms and the original differential evolution algorithm, the poor runs lead to an extremely poor convergence effect.

Therefore, this study uses an adaptive method that changes with the iterations of the algorithm to set the reference ratios of pbest and gbest, eliminating this instability while retaining the advantage of the simple steps of the differential evolution algorithm; in this way, it can attain better performance than the original algorithm. This study used the MBDE architecture [16] to extend the formulas in the mutation, crossover, and selection steps of the original differential evolution algorithm and added variants of the particle swarm optimization algorithm, HPSO and BPSO, to develop an enhanced DE algorithm. Various strategies are proposed for comparison, so that the target algorithms are more stable, retain the advantages of the simple steps of the original algorithm, and obtain better results at the end of the overall run.
The proposed algorithms can be divided into four parts; namely,
(1) A simple increase in the selection group;
(2) Additional crossover solutions generated on the basis of contemporary solutions;
(3) A crossover step that uses the mutation solution as the basis for generating a new crossover solution;
(4) Improvements to the differential evolution algorithm based on improved particle swarm optimization algorithms.
The basic concepts and corresponding algorithms of these four classifications are described in Table 1.
Table 1. Summary of target algorithm classification.
MBDE2 builds upon the original MBDE algorithm by incorporating selected individuals. While the original MBDE algorithm performs well, this enhancement leverages the broader search range of the mutation solution to accelerate the discovery of optimal solutions in the early stages of the algorithm. To preserve the effectiveness of MBDE, its overall structure and formulas remain unchanged, with modifications made specifically to the selection process to account for mutated individuals.
MBDE2:
Mutation: Make use of Formula (3).
Crossover:
$$U_{i,j}^{t+1}=\begin{cases} V_{i,j}^{t+1}+\mathrm{rand}(0,1)\,(gbest-p_{i,best}^{t}), & \text{if } rand_{i,j}\le p_{cr}\\ X_{i,j}^{t}+\mathrm{rand}(0,1)\,(gbest-p_{i,best}^{t}), & \text{otherwise} \end{cases}$$
Selection: Select among the contemporary solutions $X_i^t$, mutation solutions $V_i^{t+1}$, and crossover solutions $U_i^{t+1}$, retaining the best NP (population) individuals for the next iteration.
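A minimal NumPy sketch of the memory-guided crossover and pooled selection described for MBDE2 may clarify the mechanics. The mutation step (Formula (3)) is not reproduced here, so a stand-in mutant population is used, and the choices of gbest/pbest placeholders and the sphere objective are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def mbde2_crossover(X_i, V_i, gbest, pbest_i, pcr=0.1):
    """Per dimension: take the mutant component when rand <= pcr, otherwise
    the current component; either way add a random step along (gbest - pbest),
    i.e., a memory-guided perturbation (sketch of the MBDE2 crossover rule)."""
    mask = rng.random(X_i.size) <= pcr
    base = np.where(mask, V_i, X_i)
    return base + rng.random(X_i.size) * (gbest - pbest_i)

def pooled_selection(pools, fitness, NP):
    """Merge contemporary, mutant, and crossover populations and keep the
    best NP individuals for the next iteration."""
    merged = np.vstack(pools)
    order = np.argsort([fitness(ind) for ind in merged])
    return merged[order[:NP]]

sphere = lambda x: float(np.sum(x ** 2))
X = rng.uniform(-5, 5, (4, 3))   # contemporary solutions (illustrative)
V = rng.uniform(-5, 5, (4, 3))   # stand-in mutation solutions
U = np.array([mbde2_crossover(x, v, X[0], X[1]) for x, v in zip(X, V)])
next_pop = pooled_selection([X, V, U], sphere, NP=4)
print(next_pop.shape)  # (4, 3)
```

Because selection pools all three candidate sets, the next population is never worse than the current one under the fitness used.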
IHDE, IHDE2, and IHDE-2cross generate additional crossover solutions on the basis of contemporary solutions, mainly through a new crossover formula. This new crossover formula involves vectors based on the contemporary solution. In this study, the mutation solution can be treated as a group of independent individuals that compete with other individuals; thus, the new crossover solution has no effect on the mutation solution. Finally, the three algorithms differ in which solution sets are considered at the selection step.
IHDE:
Mutation: Make use of Formula (3).
Crossover: Make use of Formula (10) to obtain crossover solution $U2_{i}^{t+1}$.
Selection: Select the mutation solution $V_i^{t+1}$ and crossover solution $U2_{i}^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
IHDE2:
Mutation: Make use of Formula (3).
Crossover:
$$U2_{i,j}^{t+1}=\begin{cases} X_{i,j}^{t}+\mathrm{rand}(0,1)\,(gbest-p_{i,best}^{t}), & \text{if } rand_{i,j}\le p_{cr}\\ X_{i,j}^{t}+\mathrm{rand}(0,1)\,(gbest-X_{i}^{t}), & \text{otherwise} \end{cases}$$
Selection: Select the contemporary solution $X_i^t$, mutation solution $V_i^{t+1}$, and crossover solution $U2_{i}^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
IHDE-2cross:
Mutation: Make use of Formula (3).
Crossover 1: Make use of Formula (9) to obtain crossover solution $U_{i}^{t+1}$.
Crossover 2: Make use of Formula (10) to obtain crossover solution $U2_{i}^{t+1}$.
Selection: Select the contemporary solution $X_i^t$, mutation solution $V_i^{t+1}$, and crossover solutions $U_{i}^{t+1}$ and $U2_{i}^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
IHDE-mbi and IHDE-mbm generate a new crossover solution from the mutation solution in the crossover step. The main concept of these two algorithms is that, although the mutation solution is good enough to compete with other individuals to enter the next generation, its wide search range can be further improved if it is influenced by past memories at the crossover step. The two proposed algorithms differ slightly in the crossover part.
IHDE-mbi:
Mutation: Make use of Formula (3).
Crossover:
$$U_{i,j}^{t+1}=\begin{cases} V_{i,j}^{t+1}+\mathrm{rand}(0,1)\,(gbest-p_{i,best}^{t}), & \text{if } rand_{i,j}\le p_{cr}\\ V_{i,j}^{t+1}+\mathrm{rand}(0,1)\,(gbest-X_{i}^{t}), & \text{otherwise} \end{cases}$$
Selection: Select the contemporary solution $X_i^t$, mutation solution $V_i^{t+1}$, and crossover solution $U_{i}^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
IHDE-mbm:
Mutation: Make use of Formula (3).
Crossover:
$$U_{i,j}^{t+1}=\begin{cases} V_{i,j}^{t+1}+\mathrm{rand}(0,1)\,(gbest-p_{i,best}^{t}), & \text{if } rand_{i,j}\le p_{cr}\\ V_{i,j}^{t+1}+\mathrm{rand}(0,1)\,(gbest-V_{i}^{t+1}), & \text{otherwise} \end{cases}$$
Selection: Select the contemporary solution $X_i^t$, mutation solution $V_i^{t+1}$, and crossover solution $U_{i}^{t+1}$, and retain the better NP (population number) individuals for the next iteration.

IHDE-BPSO3, IHDE-BPSO4, and IHDE-BPSO5 build on different Binary Particle Swarm Optimization (BPSO) variants. BPSO was chosen because it introduces a new adaptive acceleration-constant scheme. In the study by Hizarci et al. [13], this adaptive scheme ranked well against the adaptive schemes commonly used in the earlier particle swarm optimization literature and showed potentially beneficial effects. The BPSO velocity and position update formulas and its adaptive scheme can therefore be used to generate different mutation and crossover solutions. In two of these algorithms, new crossover solutions are generated, which perturb the structure of the solution in a probabilistic way.
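One common way to realize such adaptive acceleration constants is to vary them linearly with the iteration count; the sketch below is illustrative only (the linear schedule and the start/end values are assumptions, not the exact scheme of Hizarci et al. [13]):

```python
def adaptive_coefficients(t, t_max, c1_start=2.5, c1_end=0.5,
                          c2_start=0.5, c2_end=2.5):
    """Illustrative time-varying acceleration coefficients: the cognitive
    coefficient c1 decreases over the iterations (less pbest influence),
    while the social coefficient c2 increases (more gbest influence)."""
    frac = t / t_max
    c1 = c1_start + (c1_end - c1_start) * frac
    c2 = c2_start + (c2_end - c2_start) * frac
    return c1, c2

print(adaptive_coefficients(0, 500))    # (2.5, 0.5)
print(adaptive_coefficients(500, 500))  # (0.5, 2.5)
```

Early iterations thus emphasize individual memory (exploration), while later iterations are pulled toward the global best (exploitation).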
IHDE-BPSO3:
Mutation: Make use of Formulas (1) and (7). The parameters of the mutation part are also set using Equation (7).
Crossover: Make use of Formula (9).
Selection: Select the contemporary solution $X_i^t$, mutation solution $V_i^{t+1}$, and crossover solution $U_{i}^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
IHDE-BPSO4:
Mutation 1: Make use of Formula (1) to obtain mutation solution $V_i^{t+1}$.
Mutation 2: Make use of Formula (2) to obtain mutation solution $V2_{i}^{t+1}$. The parameters of the mutation part are also set using Equations (7) and (8).
Crossover 1: Make use of Formula (9) to obtain crossover solution $U_{i}^{t+1}$.
Crossover 2:
$$U2_{i,j}^{t+1}=\begin{cases} V2_{i,j}^{t+1}, & \text{if } rand_{i,j}\le p_{cr1}\\ V_{i,j}^{t+1}+\mathrm{rand}(0,1)\,(gbest-p_{i,best}^{t}), & \text{if } p_{cr1}<rand_{i,j}\le p_{cr2}\\ X_{i,j}^{t}+\mathrm{rand}(0,1)\,(gbest-p_{i,best}^{t}), & \text{otherwise} \end{cases}$$
pcr1 is set to 0.5 because mutation solution 2, generated using BPSO, demonstrates superior performance; the higher probability increases its influence on the solution's structure. Meanwhile, pcr2 is set to 0.75 such that mutation solution 1 and the contemporary solution affect the structure of the solution with smaller probabilities.
Selection: Select the contemporary solution $X_i^t$, mutation solution $V2_{i}^{t+1}$, and crossover solutions $U_{i}^{t+1}$ and $U2_{i}^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
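The probability-partitioned crossover of IHDE-BPSO4 can be sketched per dimension as follows, using the stated thresholds pcr1 = 0.5 and pcr2 = 0.75; the solution vectors here are random illustrative stand-ins, not outputs of Formulas (1), (2), or (9):

```python
import numpy as np

rng = np.random.default_rng(1)

def bpso4_crossover(X_i, V_i, V2_i, gbest, pbest_i, pcr1=0.5, pcr2=0.75):
    """Three-way per-dimension choice: with probability pcr1 copy the
    BPSO-based mutant V2; with probability pcr2 - pcr1 take V plus a random
    step along (gbest - pbest); otherwise take X plus the same kind of
    memory-guided step."""
    r = rng.random(X_i.size)
    step = rng.random(X_i.size) * (gbest - pbest_i)
    return np.where(r <= pcr1, V2_i,
           np.where(r <= pcr2, V_i + step, X_i + step))

d = 6
X, V, V2 = rng.normal(size=(3, d))       # illustrative stand-in vectors
gbest, pbest = rng.normal(size=(2, d))
U2 = bpso4_crossover(X, V, V2, gbest, pbest)
print(U2.shape)  # (6,)
```

With pcr1 = 0.5, roughly half of the components come from the stronger BPSO-based mutant, matching the rationale given above.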
IHDE-BPSO5:
Mutation 1: Make use of Formula (1) to obtain mutation solution $V_i^{t+1}$.
Mutation 2: Make use of Formula (2) to obtain mutation solution $V2_{i}^{t+1}$. The parameters of the mutation part are also set using Equations (7) and (8).
Crossover 1: Make use of Formula (9) to obtain crossover solution $U_{i}^{t+1}$.
Crossover 2: Make use of Formula (13) to obtain crossover solution $U2_{i}^{t+1}$.
Selection: Select the contemporary solution $X_i^t$, mutation solutions $V_i^{t+1}$ and $V2_{i}^{t+1}$, and crossover solutions $U_{i}^{t+1}$ and $U2_{i}^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
IHDE-HPSO3, IHDE-HPSO4, and IHDE-HPSO5 build on different improved particle swarm optimization (HPSO) variants. HPSO was chosen because it adds novel adaptive methods to the velocity and position update formulas, and because its weight changes, which originally affected only the previous generation's velocity term, are extended to influence the entire velocity formula. The proposed method incorporates these adaptive-parameter enhancements into the algorithm's architecture to generate diverse solutions.
IHDE-HPSO3:
Mutation: Make use of Formula (5).
Crossover: Make use of Formula (9).
Selection: Select the contemporary solution $X_i^t$, mutation solution $V_i^{t+1}$, and crossover solution $U_{i}^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
IHDE-HPSO4:
Mutation 1: Make use of Formula (5) to obtain mutation solution $V_i^{t+1}$.
Mutation 2: Make use of Formula (6) to obtain mutation solution $V2_{i}^{t+1}$.
Crossover 1: Make use of Formula (9) to obtain crossover solution $U_{i}^{t+1}$.
Crossover 2: Make use of Formula (13) to obtain crossover solution $U2_{i}^{t+1}$.
Selection: Select the contemporary solution $X_i^t$, mutation solution $V2_{i}^{t+1}$, and crossover solutions $U_{i}^{t+1}$ and $U2_{i}^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
IHDE-HPSO5:
Mutation 1: Make use of Formula (5) to obtain mutation solution $V_i^{t+1}$.
Mutation 2: Make use of Formula (6) to obtain mutation solution $V2_{i}^{t+1}$.
Crossover 1: Make use of Formula (9) to obtain crossover solution $U_{i}^{t+1}$.
Crossover 2: Make use of Formula (13) to obtain crossover solution $U2_{i}^{t+1}$.
Selection: Select the contemporary solution $X_i^t$, mutation solutions $V_i^{t+1}$ and $V2_{i}^{t+1}$, and crossover solutions $U_{i}^{t+1}$ and $U2_{i}^{t+1}$, and retain the better NP (population number) individuals for the next iteration.
These 12 improvements were compared with DE/rand/1 proposed by Storn and Price [1], as well as the MBDE proposed by Parouha and Das [16], and the results are presented in the next section.

4. Experimental Results

Section 4.1 introduces the optimization problems tested in this study, which were run in Python 3.7 on an Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz with an Intel UHD Graphics 630 GPU and 16.0 GB of RAM. Then, in Section 4.2 and Section 4.3, the original differential evolution algorithm is compared with a set of well-known algorithms, showing how it performs relative to them. Finally, in Section 4.4, a comparison between the improved and original differential evolution algorithms and their variants is presented.

4.1. Benchmark Functions

In order to test the performance of the proposed improved differential evolution algorithms, the 23 test functions used in the study by Abualigah et al. [23] were adopted, as shown in Table 2, Table 3 and Table 4. These include unimodal, multimodal, and fixed-dimension multimodal test functions.
Table 2. Unimodal test functions.
The unimodal test functions F1~F7 were tested at dimension 30, with an initial population size of 30 and 500 iterations; results were averaged over 30 runs. The range of each function is shown in Table 2.
The multimodal test functions F8~F13 were tested in 30 dimensions with the same basic settings (population size 30, 500 iterations, averaged over 30 runs). Their ranges are given in Table 3.
Table 3. Multimodal test functions.

| Function | Description | Dimensions | Range | $f_{min}$ |
|---|---|---|---|---|
| F8 | $f(x)=\sum_{i=1}^{n}-x_i\sin\left(\sqrt{\lvert x_i\rvert}\right)$ | 30, 100, 500, 1000 | [−500, 500] | −418.9829 × n |
| F9 | $f(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 30, 100, 500, 1000 | [−5.12, 5.12] | 0 |
| F10 | $f(x)=-20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | 30, 100, 500, 1000 | [−32, 32] | 0 |
| F11 | $f(x)=1+\tfrac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\tfrac{x_i}{\sqrt{i}}\right)$ | 30, 100, 500, 1000 | [−600, 600] | 0 |
| F12 | $f(x)=\tfrac{\pi}{n}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\tfrac{x_i+1}{4}$ | 30, 100, 500, 1000 | [−50, 50] | 0 |
| F13 | $f(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n-1}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30, 100, 500, 1000 | [−50, 50] | 0 |

Here, the penalty function is $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m, & x_i>a\\ 0, & -a\le x_i\le a\\ k(-x_i-a)^m, & x_i<-a\end{cases}$.
The fixed-dimension multimodal functions F14~F23 used an initial population size of 30, 500 iterations, and 30 runs as the basic setting. The dimensions and ranges are shown in Table 4.
This wide range of unimodal, multimodal, and fixed-dimension multimodal test functions helps to assess, more comprehensively, the convergence speed of an algorithm, its ability to escape local optima, and its overall convergence behavior. Most of these test functions are difficult to search, so even well-known algorithms and their emerging variants may fail to find the best value.
In recent years, scholars have proposed new algorithms, algorithm variants, and differing parameter settings to cope with complex test functions such as those considered here. In this study, the improved differential evolution algorithms set their parameters adaptively, so that the parameters are no longer set in a random or fixed way. The experimental results show that good results can be obtained on these test functions and that the performance of the algorithm is effectively improved.
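As an illustration, two of the classic functions in this suite (F9, Rastrigin, and F10, Ackley, under their usual textbook definitions) can be evaluated as follows; both attain their minimum of 0 at the origin:

```python
import numpy as np

def rastrigin(x):
    # F9: sum(x_i^2 - 10*cos(2*pi*x_i) + 10); global minimum 0 at x = 0
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

def ackley(x):
    # F10: -20*exp(-0.2*sqrt(mean(x^2))) - exp(mean(cos(2*pi*x))) + 20 + e
    x = np.asarray(x, dtype=float)
    n = x.size
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

print(rastrigin(np.zeros(30)))          # 0.0
print(abs(ackley(np.zeros(30))) < 1e-9) # True
```

Away from the origin both functions are highly multimodal, which is what makes them useful for testing an algorithm's ability to escape local optima.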
Table 4. Fixed-dimension multimodal test functions.

| Function | Description | Dimensions | Range | $f_{min}$ |
|---|---|---|---|---|
| F14 | $f(x)=\left(\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | [−65, 65] | 1 |
| F15 | $f(x)=\sum_{i=1}^{11}\left[a_i-\tfrac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\right]^2$ | 4 | [−5, 5] | 0.00030 |
| F16 | $f(x)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316 |
| F17 | $f(x)=\left(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\right)^2+10\left(1-\tfrac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 5] | 0.398 |
| F18 | $f(x)=\left[1+(x_1+x_2+1)^2\left(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2\right)\right]\times\left[30+(2x_1-3x_2)^2\left(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2\right)\right]$ | 2 | [−2, 2] | 3 |
| F19 | $f(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\right)$ | 3 | [−1, 2] | −3.86 |
| F20 | $f(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\right)$ | 6 | [0, 1] | −3.32 |
| F21 | $f(x)=-\sum_{i=1}^{5}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 1] | −10.1532 |
| F22 | $f(x)=-\sum_{i=1}^{7}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 1] | −10.4028 |
| F23 | $f(x)=-\sum_{i=1}^{10}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 1] | −10.5363 |

4.2. Algorithm and Parameter Settings Compared with the Original Differential Evolution Algorithm

This subsection introduces the basic parameter settings of the original differential evolution algorithm and 11 well-known improved algorithms used in this study. With the exception of the original differential evolution algorithm, all parameter settings were set according to the literature presented by Abualigah et al. [23]. These algorithms were used on the 23 test functions described in Section 4.1, and include the following.
  • Particle Swarm Optimization, PSO;
  • Cuckoo Search Algorithm, CS;
  • Biogeography-based Optimization, BBO;
  • Differential Evolution, DE;
  • Gravitational Search Algorithm, GSA;
  • Firefly Algorithm, FA;
  • Genetic Algorithm, GA;
  • Moth-Flame Optimization, MFO;
  • Grey Wolf Optimizer, GWO;
  • Bat Algorithm, BAT;
  • Flower Pollination Algorithm, FPA;
  • Arithmetic Optimization Algorithm, AOA.
The improved differential evolution algorithm changes the crossover probability to 0.1, compared with the crossover probability of 0.5 used by Abualigah et al. [23]. The value 0.1 was chosen because it is often used in the literature when comparing related differential evolution algorithms. The experiment was set to 30 runs, 500 iterations, and 30 dimensions for each of F1~F13. For both differential evolution settings, the scaling factor F was 0.5. The results of the comparison are presented in Table 5.
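A minimal DE/rand/1/bin sketch using the settings stated above (NP = 30, F = 0.5, CR = 0.1, 30 dimensions) may make the baseline concrete; the sphere function stands in for a benchmark, and the bound handling by clipping is an illustrative assumption:

```python
import numpy as np

def de_rand_1(fobj, bounds, dim=30, NP=30, F=0.5, CR=0.1, iters=500, seed=0):
    """Classic DE/rand/1/bin with the parameter settings used in this comparison."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (NP, dim))
    fit = np.array([fobj(x) for x in X])
    for _ in range(iters):
        for i in range(NP):
            # DE/rand/1 mutation: three mutually distinct individuals, none equal to i
            a, b, c = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            V = np.clip(X[a] + F * (X[b] - X[c]), lo, hi)
            # binomial crossover with one guaranteed mutant component
            mask = rng.random(dim) <= CR
            mask[rng.integers(dim)] = True
            U = np.where(mask, V, X[i])
            fU = fobj(U)
            if fU <= fit[i]:  # greedy one-to-one selection
                X[i], fit[i] = U, fU
    return X[np.argmin(fit)], float(fit.min())

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = de_rand_1(sphere, (-100, 100), iters=100)
print(best_f)
```

Because selection is greedy, the best objective value is non-increasing over the iterations regardless of the CR setting being compared.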
Table 5. A comparison of 30 runs of the differential evolution algorithm with crossover probabilities of 0.1 and 0.5. (Bold indicates the better result).

4.3. Results of the Original Differential Evolution Algorithm on the Test Functions

This section applies the original differential evolution algorithm to the three types of test functions described in Section 4.1 and compares it with the algorithms listed in Section 4.2. The results obtained by the original differential evolution algorithm used in this study are compared with the results of the other algorithms as reported by Abualigah et al. [23]. The tables show how each algorithm performed on each type of test function. The Gravitational Search Algorithm (GSA) [24] could not be applied to the fixed-dimension multimodal test functions (F14~F23), so its results are displayed as “-” and its ranking is set to last place. The comparison results are shown in Table 6, Table 7 and Table 8, where the gray highlights indicate the best values.
Table 6 shows the results of the differential evolution algorithm compared with the eleven other algorithms. DE was the best of the twelve methods on F5 and F6; it ranked 4th on F1, 3rd on both F2 and F3, and 7th and 11th on F4 and F7, respectively. Overall, these results are not good.
Table 6. The twelve improved differential evolution algorithms when applied to the unimodal functions.
| F |  | DE | GA | PSO | BBO | FPA | GWO | BAT | FA | CS | MFO | GSA | AOA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | AVE | 1.38 × 10^−3 | 1.03 × 10^3 | 1.83 × 10^4 | 7.59 × 10^1 | 2.01 × 10^13 | 1.18 × 10^−27 | 6.59 × 10^4 | 7.11 × 10^−3 | 9.06 × 10^−4 | 1.01 × 10^3 | 6.08 × 10^2 | 6.67 × 10^−7 |
|  | Rank | 4 | 9 | 10 | 6 | 12 | 1 | 11 | 5 | 3 | 8 | 7 | 2 |
| F2 | AVE | 1.88 × 10^−4 | 2.47 × 10^1 | 3.58 × 10^2 | 1.36 × 10^−3 | 3.22 × 10^1 | 9.71 × 10^−17 | 2.71 × 10^8 | 4.34 × 10^−1 | 1.49 × 10^−1 | 3.19 × 10^1 | 2.27 × 10^1 | 0.00 × 10^0 |
|  | Rank | 3 | 8 | 11 | 4 | 10 | 2 | 12 | 6 | 5 | 9 | 7 | 1 |
| F3 | AVE | 1.59 × 10^−1 | 2.65 × 10^4 | 4.05 × 10^4 | 1.21 × 10^4 | 1.41 × 10^3 | 5.12 × 10^−5 | 1.38 × 10^5 | 1.66 × 10^3 | 2.10 × 10^−1 | 2.34 × 10^4 | 1.35 × 10^5 | 6.87 × 10^−6 |
|  | Rank | 3 | 9 | 10 | 7 | 5 | 2 | 12 | 6 | 4 | 8 | 11 | 1 |
| F4 | AVE | 4.19 × 10^1 | 5.17 × 10^1 | 4.39 × 10^1 | 3.02 × 10^1 | 2.38 × 10^1 | 1.24 × 10^−6 | 8.51 × 10^1 | 1.11 × 10^−1 | 9.65 × 10^−2 | 7.00 × 10^1 | 7.87 × 10^1 | 1.40 × 10^−3 |
|  | Rank | 7 | 9 | 8 | 6 | 5 | 1 | 12 | 4 | 3 | 10 | 11 | 2 |
| F5 | AVE | 1.03 × 10^1 | 1.95 × 10^4 | 1.96 × 10^7 | 1.82 × 10^3 | 3.17 × 10^5 | 2.70 × 10^1 | 2.10 × 10^8 | 7.97 × 10^1 | 2.76 × 10^1 | 7.35 × 10^3 | 7.41 × 10^2 | 2.49 × 10^1 |
|  | Rank | 1 | 9 | 11 | 7 | 10 | 3 | 12 | 5 | 4 | 8 | 6 | 2 |
| F6 | AVE | 1.08 × 10^−4 | 9.01 × 10^2 | 1.87 × 10^4 | 6.71 × 10^1 | 1.70 × 10^3 | 8.44 × 10^−1 | 6.69 × 10^4 | 6.94 × 10^−3 | 3.13 × 10^−3 | 2.68 × 10^3 | 3.08 × 10^3 | 3.47 × 10^−4 |
|  | Rank | 1 | 7 | 11 | 6 | 8 | 5 | 12 | 4 | 3 | 9 | 10 | 2 |
| F7 | AVE | 1.35 × 10^1 | 1.91 × 10^−1 | 1.07 × 10^1 | 2.91 × 10^−3 | 3.44 × 10^−1 | 1.70 × 10^−3 | 4.57 × 10^1 | 6.62 × 10^−2 | 7.29 × 10^−2 | 4.50 × 10^0 | 1.12 × 10^−1 | 3.92 × 10^−6 |
|  | Rank | 11 | 7 | 10 | 3 | 8 | 2 | 12 | 4 | 5 | 9 | 6 | 1 |
| No. 1 Rank |  | 2 | 0 | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 3 |
| Ave Rank |  | 4.29 | 8.29 | 10.14 | 5.57 | 8.29 | 2.29 | 11.86 | 4.86 | 3.86 | 8.71 | 8.29 | 1.57 |
Note: No. 1 rank indicates the number of times the algorithm ranked in first place; Average Rank indicates the average of the rankings.
The results in Table 7 show that the original differential evolution algorithm ranked first on the multimodal test functions F8, F12, and F13, and third on F10; it performed poorly on F9 and F11, ranking seventh and sixth, respectively. Its average performance on F8~F13 was good.
Table 7. The twelve improved differential evolution algorithms when applied to the multimodal test functions.
| F |  | DE | GA | PSO | BBO | FPA | GWO | BAT | FA | CS | MFO | GSA | AOA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F8 | AVE | −1.26 × 10^4 | −1.26 × 10^4 | −3.86 × 10^3 | −1.24 × 10^4 | −6.45 × 10^3 | −5.91 × 10^3 | −2.33 × 10^3 | −5.85 × 10^3 | −5.19 × 10^1 | −8.48 × 10^3 | −2.35 × 10^3 | −1.22 × 10^4 |
|  | Rank | 1 | 1 | 9 | 3 | 6 | 7 | 11 | 8 | 12 | 5 | 10 | 4 |
| F9 | AVE | 3.34 × 10^1 | 9.04 × 10^0 | 2.87 × 10^2 | 0.00 × 10^0 | 1.82 × 10^2 | 2.19 × 10^0 | 1.92 × 10^2 | 3.82 × 10^1 | 1.51 × 10^1 | 1.59 × 10^2 | 3.10 × 10^1 | 3.42 × 10^−7 |
|  | Rank | 7 | 4 | 12 | 1 | 10 | 3 | 11 | 8 | 5 | 9 | 6 | 2 |
| F10 | AVE | 1.04 × 10^−2 | 1.36 × 10^1 | 1.75 × 10^1 | 2.13 × 10^0 | 7.14 × 10^0 | 1.03 × 10^−3 | 1.92 × 10^1 | 4.58 × 10^−2 | 3.29 × 10^−2 | 1.74 × 10^1 | 3.74 × 10^0 | 8.88 × 10^−16 |
|  | Rank | 3 | 9 | 11 | 6 | 8 | 2 | 12 | 5 | 4 | 10 | 7 | 1 |
| F11 | AVE | 1.00 × 10^0 | 1.01 × 10^1 | 1.70 × 10^2 | 1.46 × 10^0 | 1.73 × 10^1 | 4.76 × 10^−3 | 6.01 × 10^2 | 4.23 × 10^−3 | 4.29 × 10^−5 | 3.10 × 10^1 | 4.86 × 10^−1 | 0.00 × 10^0 |
|  | Rank | 6 | 8 | 11 | 7 | 9 | 4 | 12 | 3 | 2 | 10 | 5 | 1 |
| F12 | AVE | 1.04 × 10^−6 | 4.77 × 10^0 | 1.51 × 10^7 | 6.68 × 10^−1 | 3.05 × 10^2 | 4.83 × 10^−2 | 4.71 × 10^8 | 3.13 × 10^−4 | 5.57 × 10^−5 | 2.46 × 10^2 | 4.63 × 10^−1 | 4.28 × 10^−6 |
|  | Rank | 1 | 8 | 11 | 7 | 10 | 5 | 12 | 4 | 3 | 9 | 6 | 2 |
| F13 | AVE | 1.09 × 10^−13 | 1.52 × 10^1 | 5.73 × 10^7 | 1.82 × 10^0 | 9.59 × 10^4 | 5.96 × 10^−1 | 9.40 × 10^8 | 2.08 × 10^−3 | 8.19 × 10^−3 | 2.73 × 10^7 | 7.61 × 10^0 | 3.10 × 10^−1 |
|  | Rank | 1 | 8 | 11 | 6 | 9 | 5 | 12 | 2 | 3 | 10 | 7 | 4 |
| No. 1 Rank |  | 3 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
| Ave Rank |  | 3.17 | 6.33 | 10.83 | 5.00 | 8.67 | 4.33 | 11.67 | 5.00 | 4.83 | 8.83 | 6.83 | 2.33 |
Note: No. 1 Rank indicates the number of times the algorithm ranked in first place; Average Rank indicates the average of the rankings.
Table 8 shows the results of the differential evolution algorithm applied to the fixed-dimension multimodal test functions. The differential evolution algorithm achieved 1st place on F14, F16, F17, and F18; it ranked 5th on both F15 and F20, and 11th, 9th, 11th, and 8th on F19, F21, F22, and F23, respectively. Therefore, there is still considerable room for improvement in the differential evolution algorithm on fixed-dimension multimodal test problems.
Table 8. The twelve improved differential evolution algorithms when applied to the fixed-dimensional multimodal test functions.
| F |  | DE | GA | PSO | BBO | FPA | GWO | BAT | FA | CS | MFO | GSA | AOA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F14 | AVE | 9.98 × 10^−1 | 9.98 × 10^−1 | 1.39 × 10^0 | 9.98 × 10^−1 | 9.98 × 10^−1 | 4.17 × 10^0 | 1.27 × 10^1 | 3.51 × 10^0 | 1.27 × 10^1 | 2.74 × 10^0 | - | 9.98 × 10^−1 |
|  | Rank | 1 | 1 | 6 | 1 | 1 | 9 | 10 | 8 | 10 | 7 | 12 | 1 |
| F15 | AVE | 1.55 × 10^−3 | 3.33 × 10^−2 | 1.61 × 10^−3 | 1.66 × 10^−2 | 6.88 × 10^−4 | 6.24 × 10^−3 | 3.00 × 10^−2 | 1.01 × 10^−3 | 3.13 × 10^−4 | 2.35 × 10^−3 | - | 3.12 × 10^−4 |
|  | Rank | 5 | 11 | 6 | 9 | 3 | 8 | 10 | 4 | 2 | 7 | 12 | 1 |
| F16 | AVE | −1.03 × 10^0 | −3.78 × 10^−1 | −1.03 × 10^0 | −8.30 × 10^−1 | −1.03 × 10^0 | −1.03 × 10^0 | −6.87 × 10^−1 | −1.03 × 10^0 | −1.03 × 10^0 | −1.03 × 10^0 | - | −1.03 × 10^0 |
|  | Rank | 1 | 11 | 1 | 9 | 1 | 1 | 10 | 1 | 1 | 1 | 12 | 1 |
| F17 | AVE | 3.98 × 10^−1 | 5.24 × 10^−1 | 4.00 × 10^−1 | 5.49 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | - | 3.98 × 10^−1 |
|  | Rank | 1 | 10 | 9 | 11 | 1 | 1 | 1 | 1 | 1 | 1 | 12 | 1 |
| F18 | AVE | 3.00 × 10^0 | 3.00 × 10^0 | 3.10 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | 1.47 × 10^1 | 3.00 × 10^0 | 3.00 × 10^0 | 3.00 × 10^0 | - | 3.00 × 10^0 |
|  | Rank | 1 | 1 | 10 | 1 | 1 | 1 | 11 | 1 | 1 | 1 | 12 | 1 |
| F19 | AVE | −2.01 × 10^0 | −3.42 × 10^0 | −3.86 × 10^0 | −3.78 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.84 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | −3.86 × 10^0 | - | −3.86 × 10^0 |
|  | Rank | 11 | 10 | 1 | 9 | 1 | 1 | 8 | 1 | 1 | 1 | 12 | 1 |
| F20 | AVE | −3.27 × 10^0 | −1.61 × 10^0 | −3.11 × 10^0 | −2.71 × 10^0 | −3.30 × 10^0 | −3.26 × 10^0 | −3.25 × 10^0 | −3.28 × 10^0 | −3.32 × 10^0 | −3.24 × 10^0 | - | −3.32 × 10^0 |
|  | Rank | 5 | 11 | 9 | 10 | 3 | 6 | 7 | 4 | 1 | 8 | 12 | 1 |
| F21 | AVE | −5.05 × 10^0 | −6.66 × 10^0 | −4.15 × 10^0 | −8.32 × 10^0 | −5.22 × 10^0 | −8.64 × 10^0 | −4.27 × 10^0 | −7.67 × 10^0 | −5.06 × 10^0 | −6.89 × 10^0 | - | −8.85 × 10^0 |
|  | Rank | 9 | 6 | 11 | 3 | 7 | 2 | 10 | 4 | 8 | 5 | 12 | 1 |
| F22 | AVE | −5.08 × 10^0 | −5.58 × 10^0 | −6.01 × 10^0 | −9.38 × 10^0 | −5.34 × 10^0 | −1.04 × 10^1 | −5.61 × 10^0 | −9.64 × 10^0 | −5.09 × 10^0 | −8.26 × 10^0 | - | −1.04 × 10^1 |
|  | Rank | 11 | 8 | 6 | 4 | 9 | 1 | 7 | 3 | 10 | 5 | 12 | 1 |
| F23 | AVE | −5.12 × 10^0 | −4.70 × 10^0 | −4.72 × 10^0 | −6.24 × 10^0 | −5.29 × 10^0 | −1.01 × 10^1 | −3.97 × 10^0 | −9.75 × 10^0 | −5.13 × 10^0 | −7.66 × 10^0 | - | −1.05 × 10^1 |
|  | Rank | 8 | 10 | 9 | 5 | 6 | 2 | 11 | 3 | 7 | 4 | 12 | 1 |
| No. 1 Rank |  | 4 | 2 | 2 | 2 | 5 | 5 | 1 | 4 | 5 | 4 | 0 | 10 |
| Ave Rank |  | 5.30 | 7.90 | 6.80 | 6.20 | 3.30 | 3.20 | 8.50 | 3.00 | 4.20 | 4.00 | 12.00 | 1.00 |
Note: No. 1 Rank indicates the number of times the algorithm ranked in first place; Average Rank indicates the average of the rankings.
As can be seen in Table 6, Table 7 and Table 8, the Arithmetic Optimization Algorithm (AOA) [23] obtained the best average results across the three types of test functions. The Differential Evolution (DE) algorithm ranked first most often on the multimodal test functions. Compared with the other algorithms, its average performance ranked in the upper middle across all types of test functions. Table 9 shows the number of No. 1 rankings and the average ranking of each algorithm.
Table 9. The number of first rankings and the average ranking of each algorithm.
The above table shows that the differential evolution algorithm performs well compared with other well-known algorithms. These results help explain why many scholars have used the differential evolution algorithm as a basis, taking advantage of its short procedure and simple formulas when modifying the algorithm.

4.4. Comparison of the Results of Twelve Improved Differential Evolution Algorithms Applied to the Test Functions

In this subsection, the improved differential evolution algorithm is applied to the three types of test functions described in Section 4.1, revealing how each algorithm and the twelve improved differential evolution algorithms proposed in this study performed on various types of test functions. A comparison of the results is shown in Table 10, Table 11 and Table 12, in which the gray highlights indicate the best values.
Table 10 shows that the algorithm of crossover-step and mutation solution-based variation, IHDE-mbi, performed best in all seven unimodal test functions. It is better than both the original differential evolution algorithm (DE) and the memory hybrid differential evolution algorithm (MBDE).
Table 10. The twelve improved differential evolution algorithms, along with DE and MBDE, applied to the unimodal test functions.
| F |  | DE | MBDE | MBDE2 | IHDE2 | IHDE-mbi | IHDE-mbm | IHDE-2cross | IHDE | IHDE-BPSO3 | IHDE-BPSO4 | IHDE-HPSO3 | IHDE-HPSO4 | IHDE-BPSO5 | IHDE-HPSO5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | AVE | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
|  | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F2 | AVE | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
|  | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F3 | AVE | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
|  | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F4 | AVE | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
|  | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F5 | AVE | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
|  | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F6 | AVE | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
|  | Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F7 | AVE | 7.72 × 10^−2 | 1.20 × 10^−4 | 5.84 × 10^−5 | 1.48 × 10^−4 | 1.92 × 10^−5 | 7.46 × 10^−4 | 3.53 × 10^−4 | 2.42 × 10^−4 | 1.46 × 10^−4 | 7.42 × 10^−4 | 3.46 × 10^−4 | 2.32 × 10^−4 | 7.66 × 10^−4 | 3.23 × 10^−4 |
|  | Rank | 14 | 3 | 2 | 5 | 1 | 12 | 10 | 7 | 4 | 11 | 9 | 6 | 13 | 8 |
| No. 1 Rank |  | 6 | 6 | 6 | 6 | 7 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 6 |
| Ave Rank |  | 2.85 | 1.28 | 1.14 | 1.57 | 1.00 | 2.57 | 2.28 | 1.85 | 1.42 | 2.42 | 2.14 | 1.71 | 2.71 | 2.00 |
Note: No. 1 Rank indicates the number of times the algorithm ranked in first place; Average Rank indicates the average of the rankings.
Table 11 shows that, although the original differential evolution algorithm (DE) performed poorly on F9, it ranked first most often across the six multimodal test functions. However, IHDE-BPSO3, which was improved based on a particle swarm optimization variant, had the best average performance. Most of the twelve proposed methods performed better than the original differential evolution algorithm and the memory-based hybrid differential evolution algorithm.
Table 11. The twelve improved differential evolution algorithms, along with DE and MBDE, applied to the multimodal test functions. On F8 (AVE −1.26 × 10⁴), F10 (AVE 8.91 × 10⁻²), and F11 (AVE 0.00 × 10⁰), all fourteen algorithms tied for Rank 1; on F9, all algorithms except DE reached 0.00 × 10⁰ and tied for Rank 1.

| Algorithm | F9 AVE (Rank) | F12 AVE (Rank) | F13 AVE (Rank) | No. 1 Rank | Ave Rank |
|---|---|---|---|---|---|
| DE | 2.03 × 10¹ (14) | 1.60 × 10⁻³⁶ (1) | 8.39 × 10⁻³² (1) | 5 | 3.16 |
| MBDE | 0.00 × 10⁰ (1) | 4.14 × 10⁻²⁹ (11) | 6.50 × 10⁻²⁹ (10) | 4 | 4.16 |
| MBDE2 | 0.00 × 10⁰ (1) | 1.41 × 10⁻³¹ (3) | 6.21 × 10⁻³⁰ (3) | 4 | 1.66 |
| IHDE2 | 0.00 × 10⁰ (1) | 3.93 × 10⁻³¹ (6) | 1.89 × 10⁻²⁹ (8) | 4 | 3.00 |
| IHDE-mbi | 0.00 × 10⁰ (1) | 2.64 × 10⁻¹⁴ (13) | 1.39 × 10⁻²⁹ (5) | 4 | 3.66 |
| IHDE-mbm | 0.00 × 10⁰ (1) | 3.70 × 10⁻¹⁶ (12) | 1.25 × 10⁻²⁸ (11) | 4 | 4.50 |
| IHDE-2cross | 0.00 × 10⁰ (1) | 2.26 × 10⁻³⁰ (10) | 1.63 × 10⁻²⁹ (6) | 4 | 3.33 |
| IHDE | 0.00 × 10⁰ (1) | 5.65 × 10⁻³¹ (7) | 9.66 × 10⁻³⁰ (4) | 4 | 2.50 |
| IHDE-BPSO3 | 0.00 × 10⁰ (1) | 6.28 × 10⁻³² (2) | 1.59 × 10⁻³⁰ (2) | 4 | 1.33 |
| IHDE-BPSO4 | 0.00 × 10⁰ (1) | 1.27 × 10⁻³⁰ (9) | 1.63 × 10⁻²⁹ (6) | 4 | 3.16 |
| IHDE-HPSO3 | 0.00 × 10⁰ (1) | 1.41 × 10⁻³¹ (3) | 5.29 × 10⁻⁷ (13) | 4 | 3.33 |
| IHDE-HPSO4 | 0.00 × 10⁰ (1) | 8.55 × 10⁻⁵ (14) | 9.53 × 10⁻⁸ (12) | 4 | 5.00 |
| IHDE-BPSO5 | 0.00 × 10⁰ (1) | 1.41 × 10⁻³¹ (3) | 2.03 × 10⁻²⁹ (9) | 4 | 2.66 |
| IHDE-HPSO5 | 0.00 × 10⁰ (1) | 5.65 × 10⁻³¹ (7) | 2.22 × 10⁻⁶ (14) | 4 | 4.16 |

Note: No. 1 Rank indicates the number of times the algorithm ranked in first place over F8–F13; Ave Rank indicates the average of the rankings.
In the fixed-dimensional multimodal test problems, the memory-based hybrid differential evolution algorithm (MBDE) obtained first place in all of the problems, as did seven of the new methods proposed in this study: MBDE2, IHDE2, IHDE-2cross, IHDE-BPSO3, IHDE-BPSO4, IHDE-BPSO5, and IHDE-HPSO5. Therefore, most of the proposed algorithms, along with MBDE, are well suited to fixed-dimensional multimodal optimization problems. The results are presented in Table 12.
Table 12. The twelve improved differential evolution algorithms, along with DE and MBDE, applied to the fixed-dimensional multimodal test functions. On F14 (AVE 9.98 × 10⁻¹), F16 (AVE −1.03 × 10⁰), F17 (AVE 3.98 × 10⁻¹), and F18 (AVE 3.00 × 10⁰), all fourteen algorithms tied for Rank 1.

| Algorithm | F15 AVE (Rank) | F21 AVE (Rank) | F22 AVE (Rank) | F23 AVE (Rank) | No. 1 Rank | Ave Rank |
|---|---|---|---|---|---|---|
| DE | 1.17 × 10⁻³ (14) | −1.02 × 10¹ (1) | −1.04 × 10¹ (1) | −1.05 × 10¹ (1) | 7 | 2.62 |
| MBDE | 3.08 × 10⁻⁴ (1) | −1.02 × 10¹ (1) | −1.04 × 10¹ (1) | −1.05 × 10¹ (1) | 8 | 1.00 |
| MBDE2 | 3.08 × 10⁻⁴ (1) | −1.02 × 10¹ (1) | −1.04 × 10¹ (1) | −1.05 × 10¹ (1) | 8 | 1.00 |
| IHDE2 | 3.08 × 10⁻⁴ (1) | −1.02 × 10¹ (1) | −1.04 × 10¹ (1) | −1.05 × 10¹ (1) | 8 | 1.00 |
| IHDE-mbi | 3.08 × 10⁻⁴ (1) | −4.65 × 10⁰ (13) | −1.03 × 10¹ (12) | −2.92 × 10⁰ (14) | 5 | 5.50 |
| IHDE-mbm | 4.32 × 10⁻⁴ (12) | −1.02 × 10¹ (1) | −1.04 × 10¹ (1) | −1.04 × 10¹ (12) | 6 | 3.75 |
| IHDE-2cross | 3.08 × 10⁻⁴ (1) | −1.02 × 10¹ (1) | −1.04 × 10¹ (1) | −1.05 × 10¹ (1) | 8 | 1.00 |
| IHDE | 3.08 × 10⁻⁴ (1) | −2.68 × 10⁰ (14) | −2.77 × 10⁰ (14) | −1.05 × 10¹ (1) | 6 | 4.25 |
| IHDE-BPSO3 | 3.08 × 10⁻⁴ (1) | −1.02 × 10¹ (1) | −1.04 × 10¹ (1) | −1.05 × 10¹ (1) | 8 | 1.00 |
| IHDE-BPSO4 | 3.08 × 10⁻⁴ (1) | −1.02 × 10¹ (1) | −1.04 × 10¹ (1) | −1.05 × 10¹ (1) | 8 | 1.00 |
| IHDE-HPSO3 | 5.42 × 10⁻⁴ (13) | −5.06 × 10⁰ (12) | −4.97 × 10⁰ (13) | −1.05 × 10¹ (1) | 5 | 5.37 |
| IHDE-HPSO4 | 3.08 × 10⁻⁴ (1) | −5.06 × 10⁰ (11) | −1.04 × 10¹ (1) | −5.13 × 10⁰ (13) | 6 | 3.75 |
| IHDE-BPSO5 | 3.08 × 10⁻⁴ (1) | −1.02 × 10¹ (1) | −1.04 × 10¹ (1) | −1.05 × 10¹ (1) | 8 | 1.00 |
| IHDE-HPSO5 | 3.08 × 10⁻⁴ (1) | −1.02 × 10¹ (1) | −1.04 × 10¹ (1) | −1.05 × 10¹ (1) | 8 | 1.00 |

Note: No. 1 Rank indicates the number of times the algorithm ranked in first place over F14–F18 and F21–F23; Ave Rank indicates the average of the rankings.
In summary, IHDE-mbi achieved the best performance among DE, MBDE, and the twelve improved differential evolution algorithms of this study when applied to the unimodal test functions, with an average rank of 1.00 in Table 10. This suggests that, for unimodal optimization problems, feeding the solution obtained from the mutation step of the original MBDE into the crossover step of the double mutation is particularly effective.
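For readers unfamiliar with how DE trial vectors are formed, the following sketch shows the classical DE/rand/1/bin mutation-and-crossover building block that the discussed variants extend. This is background only: the exact mutation policies, memory terms, and parameter adaptation of the twelve algorithms are given by their formulas earlier in the paper, and the function name, parameter defaults, and bound handling below are illustrative assumptions.

```python
import random

def de_rand_1_bin(pop, i, F=0.5, CR=0.9, bounds=(-100.0, 100.0)):
    """One DE/rand/1/bin trial-vector construction for individual i.

    pop : list of candidate solutions (lists of floats), len(pop) >= 4
    F   : differential weight (mutation scale factor)
    CR  : crossover rate
    """
    dim = len(pop[i])
    # Mutation: pick three distinct individuals, all different from i,
    # and build the donor vector v = x_r1 + F * (x_r2 - x_r3).
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    donor = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
    # Binomial crossover: each gene comes from the donor with probability CR;
    # the forced index j_rand guarantees at least one donor gene is inherited.
    j_rand = random.randrange(dim)
    trial = [donor[d] if (random.random() < CR or d == j_rand) else pop[i][d]
             for d in range(dim)]
    # Clip the trial vector back into the search bounds.
    lo, hi = bounds
    return [min(max(x, lo), hi) for x in trial]
```

In the selection step (not shown), the trial vector replaces `pop[i]` only if it yields an equal or better objective value, which is what keeps DE greedy per-individual while the population as a whole explores.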
However, IHDE-BPSO3 obtained the best average performance among all algorithms on the multimodal test functions. This indicates that its memory-based mutation step, combined with adaptive acceleration constants and inertia weight, gives it good adaptability to multimodal landscapes.
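The adaptive acceleration and inertia-weight idea borrowed from PSO can be illustrated with a simple linear schedule. This is a minimal sketch under assumed schedules, not the BPSO/HPSO update rules actually used by IHDE-BPSO3 (those are defined earlier in the paper); all function names and coefficient ranges here are hypothetical.

```python
def adaptive_pso_coefficients(t, t_max, w_max=0.9, w_min=0.4,
                              c_init=2.5, c_final=0.5):
    """Linearly time-varying inertia weight and acceleration constants.

    Assumed schedules for illustration: w decreases over the run
    (exploration -> exploitation), c1 decreases (less self-attraction),
    and c2 increases (stronger attraction to the global best).
    """
    frac = t / t_max
    w = w_max - (w_max - w_min) * frac
    c1 = c_init - (c_init - c_final) * frac
    c2 = c_final + (c_init - c_final) * frac
    return w, c1, c2

def velocity_update(v, x, pbest, gbest, w, c1, c2, r1, r2):
    """Standard PSO velocity update using the supplied coefficients."""
    return [w * vd + c1 * r1 * (pb - xd) + c2 * r2 * (gb - xd)
            for vd, xd, pb, gb in zip(v, x, pbest, gbest)]
```

The point of such schedules is that early iterations take large, inertia-driven steps across the landscape, while late iterations contract toward the remembered best positions, which is the behavior the multimodal results above reward.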
Regarding the fixed-dimensional multimodal test functions, most of the twelve algorithms proposed in this study achieved the best solution for this type of optimization problem. MBDE [16] achieved the same performance on these functions, which indicates that its memory parameters have a strongly positive impact on the algorithm's performance.
Multimodal optimization problems test an algorithm's ability to escape local optima. Overall, the twelve improved DE algorithms proposed in this study showed a good ability to deal with such complex problems. Table 13 summarizes the overall results.
Table 13. The number of first places and the average ranking of the twelve improved differential evolution algorithms, along with DE and MBDE.
As Table 13 shows, the MBDE2 and IHDE-BPSO3 algorithms proposed in this study performed very well across the overall set of 21 optimization problems, ranking first in almost every test. This shows that the twelve improved differential evolution algorithms gain efficiency as a whole from the addition of memory and adaptive parameter settings. Their performance on the multimodal optimization problems was stable, which also indicates that these additions help in addressing complex, difficult-to-search problems. The results further highlight the differences between the BPSO and HPSO concepts, with the algorithms using adaptive acceleration constants showing clear differences in performance.
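The per-function rankings and average ranks reported in Tables 10–12 are consistent with standard competition ranking, in which tied values share the best available rank. A minimal sketch of that scheme follows; note that a few tied entries in the tables (e.g., the two −5.06 × 10⁰ values on F21) receive distinct ranks, suggesting the authors broke ties on unrounded values, so this is an approximation of their procedure rather than a reproduction of it.

```python
def competition_ranks(values, minimize=True):
    """Standard competition ('1224') ranking.

    Tied values share the rank of their first sorted position; the next
    distinct value is ranked by its (1-based) position in the sorted order.
    """
    order = sorted(values, reverse=not minimize)
    # list.index returns the first occurrence, so ties share the best rank.
    return [order.index(v) + 1 for v in values]

def average_rank(rank_rows):
    """Column-wise mean rank over several functions (one row per function)."""
    n = len(rank_rows)
    return [round(sum(col) / n, 2) for col in zip(*rank_rows)]
```

For example, ranking the four F7 averages 7.72 × 10⁻², 1.20 × 10⁻⁴, 5.84 × 10⁻⁵, and 1.92 × 10⁻⁵ with `competition_ranks` yields 4, 3, 2, 1, matching the ordering logic behind Table 10.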

5. Conclusions

Optimization problems have multiplied with the rapid development of information technology and engineering, and solving them quickly, accurately, and effectively is a highly sought-after goal. However, different algorithms have inherent drawbacks depending on the nature of the problem, such as long execution times or suboptimal results. Additionally, when their parameters are set arbitrarily or manually, their stability may be compromised, ultimately affecting the effectiveness of the solution. This study introduced an improved differential evolution algorithm designed to address such optimization challenges. In this algorithm, key steps and formulas are modified, and memory properties and a parameter adaptation scheme are incorporated. These enhancements improve the stability of the solution throughout the optimization process. Furthermore, the improved differential evolution algorithm maintains the simplicity of the original differential evolution algorithm while delivering superior performance compared to both the original algorithm and other well-known alternatives.

Author Contributions

Conceptualization, S.-K.C. and G.-H.W.; methodology, S.-K.C. and G.-H.W.; software, Y.-H.W.; validation, S.-K.C., G.-H.W. and Y.-H.W.; formal analysis, G.-H.W. and Y.-H.W.; investigation, S.-K.C.; resources, G.-H.W. and Y.-H.W.; data curation, G.-H.W. and Y.-H.W.; writing—original draft preparation, S.-K.C.; writing—review and editing, S.-K.C.; visualization, G.-H.W.; supervision, G.-H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Storn, R.; Price, K. Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
2. Abbass, H.A.; Sarker, R. The Pareto differential evolution algorithm. Int. J. Artif. Intell. Tools 2002, 11, 531–552.
3. Das, S.; Abraham, A.; Konar, A. Automatic clustering using an improved differential evolution algorithm. IEEE Trans. Syst. Man Cybern. A Syst. Hum. 2007, 38, 218–237.
4. Das, S.; Konar, A.; Chakraborty, U.K. Two improved differential evolution schemes for faster global search. In Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, Washington, DC, USA, 25–29 June 2005; pp. 991–998.
5. Das, S.; Mullick, S.S.; Suganthan, P.N. Recent advances in differential evolution—An updated survey. Swarm Evol. Comput. 2016, 27, 1–30.
6. Draa, A.; Bouzoubia, S.; Boukhalfa, I. A sinusoidal differential evolution algorithm for numerical optimisation. Appl. Soft Comput. 2015, 27, 99–126.
7. Eltaeib, T.; Mahmood, A. Differential evolution: A survey and analysis. Appl. Sci. 2018, 8, 1945.
8. Tao, S.; Yang, Y.; Zhao, R.; Todo, H.; Tang, Z. Competitive elimination improved differential evolution for wind farm layout optimization problems. Mathematics 2024, 12, 3762.
9. Nguyen, V.-T.; Tran, V.-M.; Bui, N.-T. Self-adaptive differential evolution with gauss distribution for optimal mechanism design. Appl. Sci. 2023, 13, 6284.
10. Chao, M.; Zhang, M.; Zhang, Q.; Jiang, Z.; Zhou, L. A two-stage adaptive differential evolution algorithm with accompanying populations. Mathematics 2025, 13, 440.
11. Sui, X.; Chu, S.-C.; Pan, J.-S.; Luo, H. Parallel compact differential evolution for optimization applied to image segmentation. Appl. Sci. 2020, 10, 2195.
12. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
13. Hizarci, H.; Demirel, O.; Turkay, B.E. Distribution network reconfiguration using time-varying acceleration coefficient assisted binary particle swarm optimization. Eng. Sci. Technol. Int. J. 2022, 35, 101230.
14. Qin, A.K.; Suganthan, P.N. Self-adaptive differential evolution algorithm for numerical optimization. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; pp. 1785–1791.
15. Huang, F.Z.; Wang, L.; He, Q. An effective co-evolutionary differential evolution for constrained optimization. Appl. Math. Comput. 2007, 186, 340–356.
16. Parouha, R.P.; Das, K.N. A robust memory based hybrid differential evolution for continuous optimization problem. Knowl. Based Syst. 2016, 103, 118–131.
17. Chen, K.; Zhou, F.; Yin, L.; Wang, S.; Wang, Y.; Wan, F. A hybrid particle swarm optimizer with sine cosine acceleration coefficients. Inf. Sci. 2018, 422, 218–241.
18. Liu, D.; He, H.; Yang, Q.; Wang, Y.; Jeon, S.-W.; Zhang, J. Function value ranking aware differential evolution for global numerical optimization. Swarm Evol. Comput. 2023, 78, 101282.
19. Yu, X.; Xu, P.; Wang, F.; Wang, X. Reinforcement learning-based differential evolution algorithm for constrained multi-objective optimization problems. Eng. Appl. Artif. Intell. 2024, 131, 107817.
20. Chen, B.; Ouyang, H.; Li, S.; Ding, W. Dual-stage self-adaptive differential evolution with complementary and ensemble mutation strategies. Swarm Evol. Comput. 2025, 93, 101855.
21. Chen, B.; Ouyang, H.; Li, S.; Gao, L.; Ding, W. Photovoltaic parameter extraction through an adaptive differential evolution algorithm with multiple linear regression. Appl. Soft Comput. 2025, 176, 113117.
22. Yang, X.; An, L.; Gao, Y.; Hao, X. Multi-objective optimization method for cement calcination system based on dual population differential evolution algorithm. J. Process Control 2025, 151, 103448.
23. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
24. Rashedi, E.; Rashedi, E.; Nezamabadi-Pour, H. A comprehensive survey on gravitational search algorithm. Swarm Evol. Comput. 2018, 41, 140–158.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
