2. Related Works
Storn and Price [1] published the differential evolution algorithm in 1997, which was classified as an intelligent optimization algorithm. In the beginning, the differential evolution algorithm sets the basic parameters, including group size, scaling factor, and crossover probability, and generates an initial set of solutions x within the given upper and lower bounds using rand(0, 1). Then, the initial solution x is mutated to generate a mutation solution, and the structure of the solution is changed by means of the crossover probability to produce a crossover solution x′. In the selection step, if a crossover solution x′ is better than the current solution x, the original solution is replaced by the crossover solution; otherwise, nothing is replaced. The algorithm repeats the mutation, crossover, and selection processes until the end of the iteration. The differential evolution algorithm is also a stochastic model that simulates biological evolution through multiple iterations and preserves individuals that are better adapted to the environment. The advantage of differential evolution algorithms is their smaller number of steps compared to other algorithms: there are only four, namely initialization, mutation, crossover, and selection. As a result, the convergence time of the algorithm is relatively short, while strong global convergence ability and robustness are retained.
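The four steps above can be sketched as a minimal DE/rand/1/bin loop in Python; this is a schematic under commonly used default parameter values (NP = 20, F = 0.5, CR = 0.9), not a reproduction of any specific implementation from the cited works.

```python
import random

def differential_evolution(f, bounds, NP=20, F=0.5, CR=0.9, max_iter=100):
    """Minimal DE/rand/1/bin: initialization, mutation, crossover, selection."""
    lo, hi = bounds
    D = len(lo)
    # Initialization: NP random solutions within the given bounds.
    pop = [[lo[j] + random.random() * (hi[j] - lo[j]) for j in range(D)]
           for _ in range(NP)]
    fit = [f(x) for x in pop]
    for _ in range(max_iter):
        for i in range(NP):
            # Mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3, i distinct.
            r1, r2, r3 = random.sample([k for k in range(NP) if k != i], 3)
            v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(D)]
            # Crossover: each component comes from v with probability CR
            # (one component, jrand, is always taken from v).
            jrand = random.randrange(D)
            u = [v[j] if (random.random() < CR or j == jrand) else pop[i][j]
                 for j in range(D)]
            # Selection: greedy replacement if the trial solution is better.
            fu = f(u)
            if fu < fit[i]:
                pop[i], fit[i] = u, fu
    best = min(range(NP), key=lambda k: fit[k])
    return pop[best], fit[best]
```

For example, minimizing the two-dimensional sphere function `sum(t*t for t in v)` over [−5, 5]² converges close to the origin within a few hundred iterations.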
The particle swarm optimization (PSO) algorithm [12,13] concept was proposed by two scholars, Kennedy and Eberhart [12]. In the beginning, it sets parameters including population size, inertia weight, acceleration constants, and maximum velocity. A random swarm of particles is then generated, and their positions are adjusted based on each particle's individual flight experience and the best experience of the swarm so far. The particles produced at the beginning can be treated as a set of solutions x, and their velocity v, the individual best position (pbest), and the global best position (gbest) affect the positions of individuals in the next iteration. The algorithm iteratively searches for the best position (i.e., the best solution) and continues until the end of the iteration or until the given max_steps is reached.
The main framework of the particle swarm optimization algorithm is to generate the initial solution and initial velocity, and then update the velocity and position according to the velocity and position formula until the end of the iteration. The formula is as follows:
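The formula itself is missing from the extracted text; reconstructed in the standard PSO form consistent with the variable definitions that follow (the notation, not necessarily the authors' exact one, is an assumption), the velocity and position updates are:

```latex
v_{i}^{t+1} = w\,v_{i}^{t} + c_{1} r_{1}\,(pbest_{i}^{t} - X_{i}^{t}) + c_{2} r_{2}\,(gbest^{t} - X_{i}^{t})
```

```latex
X_{i}^{t+1} = X_{i}^{t} + v_{i}^{t+1}
```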
where v is the velocity, t is the iteration number, X is the contemporary solution, pbest is the individual best position, gbest is the global best position, c1 and c2 are the acceleration constants, w is the inertia weight, and r1 and r2 are random values between 0 and 1.
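The velocity and position updates described by these variables can be sketched as a single per-particle step; vectors are represented as plain Python lists, and the default coefficient values are illustrative assumptions.

```python
import random

def pso_step(X, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO update for a single particle: new velocity from the inertia,
    cognitive (pbest), and social (gbest) terms, then the new position."""
    new_v, new_x = [], []
    for j in range(len(X)):
        r1, r2 = random.random(), random.random()
        vj = w * v[j] + c1 * r1 * (pbest[j] - X[j]) + c2 * r2 * (gbest[j] - X[j])
        new_v.append(vj)
        new_x.append(X[j] + vj)
    return new_x, new_v
```

Note that when a particle sits exactly at both its individual and the global best position with zero velocity, the update leaves it in place, which is consistent with the formulas above.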
In the position update formula, a new particle position is obtained, which becomes the contemporary solution for the next iteration. The velocity update formula and the position update formula are repeated until the end of the iteration or until the user-set stop condition is reached. There are many variations of the differential evolution algorithm: Qin and Suganthan [14] proposed a differential evolution algorithm that applies two mutation policies at the same time (Self-adaptive Differential Evolution, SaDE), and Huang et al. [15] proposed a co-evolutionary differential evolution algorithm (Co-evolutionary Differential Evolution, CDE), in which the population is divided into two interacting groups that influence each other as the algorithm proceeds.
Parouha and Das [16] proposed a modified differential evolution algorithm called MBDE (Memory-Based Hybrid Differential Evolution). In their study, a memory mechanism is added to the formulas of the differential evolution algorithm: the individual best position and the global best position are introduced, changing the assumption that past solutions of the differential evolution algorithm do not affect future generations. The mutation and crossover steps are replaced with Swarm Mutation and Swarm Crossover and, finally, the greedy selection method is replaced with Elitism, which greatly improves overall algorithm performance. The improved formula is as follows:
In this part, the original mutation policy DE/rand/2 takes into account the concept of memory, where V is the mutation solution, t is the iteration number, X is the contemporary solution, pbest is the individual best position, gbest is the global best position, xworst is the contemporary worst solution, and f(·) is the objective function.
The crossover algorithm also takes into account the concept of memory, where V is the mutation solution, t is the iteration number, X is the contemporary solution, pbest is the individual best position, gbest is the global best position, rand (0, 1) and randij are random real numbers between 0 and 1, and pcr is the user-defined crossover probability.
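The memory-based mutation and crossover can be illustrated schematically as follows. The coefficients and the fallback rule below are assumptions chosen for illustration only; the exact MBDE formulas are those given by Parouha and Das [16].

```python
import random

def swarm_mutation_sketch(x, pbest, gbest, x_worst):
    """Schematic memory-based mutation: the mutation solution is pulled toward
    the individual and global best positions and pushed away from the worst
    solution. The random coefficients are illustrative, not the MBDE weights."""
    r1, r2, r3 = (random.random() for _ in range(3))
    return [xj + r1 * (pb - xj) + r2 * (gb - xj) - r3 * (xw - xj)
            for xj, pb, gb, xw in zip(x, pbest, gbest, x_worst)]

def swarm_crossover_sketch(x, v, pbest, gbest, pcr=0.9):
    """Schematic memory-based crossover: each component is taken from the
    mutation solution with probability pcr, otherwise from a memory position."""
    u = []
    for xj, vj, pb, gb in zip(x, v, pbest, gbest):
        if random.random() < pcr:
            u.append(vj)                                   # mutated component
        else:
            u.append(pb if random.random() < 0.5 else gb)  # memory fallback
    return u
```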
Elitism:
This part changes the original greedy selection method to an elite selection method. The method deliberately does not handle population diversity here, because diversity is already handled in the preceding mutation and crossover steps, so there is no need to address it in this part. The elitism process follows three rules:
- (1)
In the final elite selection step, the initial population (or contemporary target vectors) is merged with the population produced by the final crossover.
- (2)
The merged population is sorted in ascending order according to the objective function value.
- (3)
The better NP (population number) individuals are retained for the next iteration, and the remaining individuals are discarded.
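The three rules amount to a merge-sort-truncate step, which can be sketched as follows (assuming a minimization problem):

```python
def elitism(parents, offspring, f, NP):
    """Elite selection: (1) merge the current population with the crossover
    results, (2) sort ascending by objective value, (3) keep the best NP."""
    merged = parents + offspring
    merged.sort(key=f)   # ascending: smaller objective value is better
    return merged[:NP]
```

Unlike the greedy one-to-one replacement of classical DE, this global truncation lets a strong mutated or crossed individual displace any weaker member of the population, not just its own parent.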
Chen et al. [17] proposed an HPSO (Hybrid Particle Swarm Optimizer), in which the parameters required for the original PSO formula are made adaptive, and the formula was improved to generate the target value and velocity of the next step. The acceleration constants follow the TVAC (Time-Varying Acceleration Coefficients) scheme commonly used in the previous literature, and the inertia weight is modified. The modified formula is as follows:
where c1f = 0.5, c1i = 2.5, c2f = 2.5, and c2i = 0.5; χ replaces the inertia weight and further controls the proportion with which the individual best position and the global best position affect the velocity; v is the velocity, t is the iteration number, X is the contemporary solution, Mj is the current iteration, Mmax is the maximum iteration, pbest is the individual best position, and gbest is the global best position.
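The TVAC schedule itself is missing from the extracted text; its standard form, consistent with the boundary values c1f = 0.5, c1i = 2.5, c2f = 2.5, c2i = 0.5 and the iteration counters Mj and Mmax defined above, is (as a reconstruction, not the authors' exact notation):

```latex
c_{1} = (c_{1f} - c_{1i})\,\frac{M_{j}}{M_{max}} + c_{1i}, \qquad
c_{2} = (c_{2f} - c_{2i})\,\frac{M_{j}}{M_{max}} + c_{2i}
```

With these boundary values, c1 decreases linearly from 2.5 to 0.5 (weakening the cognitive term) while c2 increases from 0.5 to 2.5 (strengthening the social term) as the iterations progress.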
where  is a random number between 0 and 1, u is the average objective function value of the initial population, and f(j) is the objective function value of the jth particle. This improvement greatly reduces the burden of user parameter setting. With parameter adaptation, the parameter values change from iteration to iteration, such that the algorithm can behave differently in different iteration periods. The proportion of the individual best position and the global best position can also be changed according to the needs of the algorithm and the progress of the iteration. The acceleration constants are set by the adaptive scheme commonly used in the previous literature, and the overall velocity update formula is further affected by the replaced inertia weight.
Hizarci et al. [13] proposed Binary Particle Swarm Optimization (BPSO), which differs from the previous method in that it sets its own adaptive scheme for the acceleration constant. The acceleration constant schedule commonly used in the past is abandoned, and the inertia weight is set adaptively to reduce the impact of the velocity from the previous iteration. The other formulas follow the original particle swarm optimization algorithm entirely. The acceleration constant adaptation and inertia weight adaptation formulas are as follows:
where c1i = 0.5, c1f = 2.5, iter is the current iteration, and  is the maximum number of iterations.
where wmax = 0.9 and wmin = 0.4.
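Assuming the standard linear interpolation between the boundary values given above (the exact BPSO schedules are those of Hizarci et al. [13]), the two adaptations can be sketched as:

```python
def bpso_c1(it, max_iter, c1i=0.5, c1f=2.5):
    """Linearly ramp the acceleration constant from c1i up to c1f."""
    return c1i + (c1f - c1i) * it / max_iter

def bpso_w(it, max_iter, w_max=0.9, w_min=0.4):
    """Linearly decrease the inertia weight from w_max down to w_min,
    reducing the influence of the previous iteration's velocity over time."""
    return w_max - (w_max - w_min) * it / max_iter
```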
The literature on the differential evolution algorithm has long underscored the importance of mutation policies in balancing exploration and exploitation. While many enhancements in recent years have focused on hybrid schemes or adaptive parameter control, few have re-examined the individual-level guidance mechanism at the mutation step. Basing the selection of guiding exemplars on an exponential ranking scheme, Liu et al. [18] addressed a common shortcoming—namely, the tendency of conventional schemes to concentrate overly on a few top individuals, thereby risking premature convergence when the best solutions fall into local optima. The authors presented a novel variant of differential evolution built around an innovative mutation policy, termed “DE/current-to-rwrand/1”, which serves as the core of what they call function value ranking aware differential evolution (FVRADE). In contrast to conventional elite-based methods such as “DE/current-to-best/1”—which may restrict diversity—or random-based methods such as “DE/current-to-rand/1”—which may be too exploratory—the proposed operator offers a dynamic compromise. The ability to adjust the weight-control parameter means that practitioners can steer the search toward more diversification or more exploitation as needed, making FVRADE particularly robust for high-dimensional and complicated landscapes.
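The exponential ranking idea can be illustrated with a schematic exemplar-selection routine. The decay parameter k below plays the role of the weight-control parameter and is an assumption for illustration; this is not the authors' exact operator.

```python
import math, random

def rank_weighted_exemplar(fitness, k=1.0):
    """Pick a guiding exemplar with probability decaying exponentially in its
    function-value rank (rank 0 = best, assuming minimization). Larger k
    concentrates choice on top individuals (exploitation); smaller k spreads
    it across the population (exploration)."""
    order = sorted(range(len(fitness)), key=lambda i: fitness[i])  # ascending
    weights = [math.exp(-k * r) for r in range(len(order))]
    # Roulette-wheel selection over the rank-based weights.
    pick = random.random() * sum(weights)
    acc = 0.0
    for idx, w in zip(order, weights):
        acc += w
        if pick <= acc:
            return idx
    return order[-1]
```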
Yu et al. [19] introduced a reinforcement learning-based multi-objective differential evolution algorithm (RLMODE) designed to solve constrained multi-objective optimization problems. In RLMODE, the authors enhance the classical DE framework by incorporating a feedback mechanism implemented via a Q-learning strategy to dynamically tune key control parameters based on the relationship between parent and offspring. The algorithm assigns rewards that guide adaptive adjustments by evaluating whether an offspring dominates its parent; this, in turn, helps to steer the search process toward feasible regions with improved convergence and diversity.
Chen et al.’s work [20] represents a significant step forward in DE algorithm design by integrating complementary mutation strategies, dual-stage parameter self-adaptation mechanisms, and an enhanced population sizing scheme. It adds value to the literature by providing both a conceptual framework and empirical evidence, and it enhances search efficiency and robustness for high-dimensional and real-world optimization problems, making LSHADE-Code a promising candidate for complex optimization challenges in various fields.
Chen et al. [21] addressed the challenging photovoltaic (PV) parameter extraction task by hybridizing an adaptive DE algorithm with multiple linear regression (MLR). In contrast to a “pure” evolutionary search, the authors decomposed the nonlinear PV model into two parts: the DE engine is employed iteratively to estimate the parameters embedded in the nonlinear functions, while the MLR component analytically computes the related linear coefficients. This separation effectively reduces the overall dimensionality of the search space and leverages both exploration via DE and exploitation via regression for improved accuracy.
Yang et al. [22] introduced a dual-population framework embedded in the DE algorithm (DP-DE) to address the challenges posed by coupled and dynamic constraints in cement calcination processes. The emphasis on a “decision-first, optimization-later” strategy is an interesting shift from traditional single-step approaches, which often struggle with stability. The comparison with NSGA-II, MOEA/D, and particle swarm optimization suggests that DP-DE not only improves convergence speed but also enhances robustness under real-world disturbances. The multi-step transition strategy plays a crucial role in mitigating oscillations and overshoots, and it is essential in ensuring smooth control adjustments without exceeding operational constraints.
Based on the review of the above papers, the differential evolution algorithm with memory properties has good performance and can effectively deal with the problem of population diversity. The acceleration constants, in turn, have a significant influence on the velocity formula of the particle swarm optimization algorithm. In this study, we consider applying the formulas of particle swarm optimization within the differential evolution algorithm, while continuing with the other steps of the differential evolution algorithm to cross over and disrupt the structure of the solution. The resulting algorithm is intended to adapt to most problems and achieve good results.
3. The Proposed Method
In this study, twelve modified differential evolution algorithms are proposed. They mainly use the concepts of the individual best solution (pbest) and global best solution (gbest) to improve the mutation and crossover steps of the differential evolution algorithm, and integrate the concepts and related parameters of the particle swarm optimization algorithm into the mutation formula, such that better results may be achieved than with the original differential evolution algorithm and related modified algorithms. However, in the process of improving the algorithm, it was found that random weights or random values are often used to control the extent to which the solutions generated by the mutation policies reference pbest and gbest. This often causes the convergence results to be unstable, alternating between good and bad. Even if the average result is better than those of most well-known algorithms and the original differential evolution algorithm, the occasional poor runs lead to an extremely poor convergence effect. Therefore, this study uses an adaptive method that changes with the iterations of the algorithm to control the reference to pbest and gbest, eliminating this instability while retaining the advantage of the simple steps of the differential evolution algorithm; in this way, it can attain better performance than the original algorithm. This study used the MBDE architecture [16] to extend the formulas in the mutation, crossover, and selection steps of the original differential evolution algorithm and added variants of the particle swarm optimization algorithm, HPSO and BPSO, to develop an enhanced DE algorithm. Various strategies are proposed for comparison, such that the target algorithm can achieve greater stability, retain the advantage of the simple steps of the original algorithm, and obtain better results at the end of the overall run. The proposed algorithms can be divided into four parts; namely,
- (1)
A simple increase in the selection group.
- (2)
Additional crossover solutions added based on contemporary solutions.
- (3)
The crossover step uses the mutation solution as a basis to generate a new crossover solution.
- (4)
Improvements to the differential evolution algorithm based on improved particle swarm optimization algorithms.
The basic concepts and corresponding algorithms of these four classifications are described in Table 1.
MBDE2 builds upon the original MBDE algorithm by incorporating mutation solutions into the selection step. While the original MBDE algorithm performs well, this enhancement leverages the broader search range of the mutation solution to accelerate the discovery of good solutions in the early stages of the algorithm. To preserve the effectiveness of MBDE, its overall structure and formulas remain unchanged, with modifications made specifically to the selection process to account for mutated individuals.
MBDE2:
Mutation: Make use of Formula (3).
Selection: Select from among the contemporary solutions , mutation solutions , and crossover solutions , and retain the better NP (population number) individuals for the next iteration.
The concept behind IHDE, IHDE-2, and IHDE-2cross of generating additional crossover solutions based on contemporary solutions is mainly realized through a newly proposed crossover formula. This new crossover formula operates on contemporary solution-based vectors. In this study, the mutation solutions can be used as a group of independent individuals that compete with other individuals; thus, the new crossover solutions have no effect on the mutation solutions. Finally, the three algorithms differ in which solution sets they consider during selection.
IHDE:
Mutation: Make use of Formula (3).
Crossover: Make use of Formula (10) to obtain crossover solution .
Selection: Select the mutation solution and crossover solution , and retain the better NP (population number) individuals for the next iteration.
IHDE-2:
Mutation: Make use of Formula (3).
Selection: Select the contemporary solution , mutation solution , and crossover solution , and retain the better NP (population number) individuals for the next iteration.
IHDE-2cross:
Mutation: Make use of Formula (3).
Crossover 1: Make use of Formula (9) to obtain crossover solution .
Crossover 2: Make use of Formula (10) to obtain crossover solution .
Selection: Select the contemporary solution , mutation solution , and crossover solution and , and retain the better NP (population number) individuals for the next iteration.
IHDE-mbi and IHDE-mbm generate a new crossover solution from the mutation solution in the crossover step. The main concept of these two algorithms is that, although the mutation solution is sufficient to compete with other individuals to enter the next generation, its wide search range can be improved if it is influenced by past memories at the crossover step. Therefore, these two algorithms are proposed; they differ slightly in the crossover part.
IHDE-mbi:
Mutation: Make use of Formula (3).
Selection: Select the contemporary solution , mutation solution , and crossover solution , and retain the better NP (population number) individuals for the next iteration.
IHDE-mbm:
Mutation: Make use of Formula (3).
Selection: Select the contemporary solution , mutation solution , and crossover solution , and retain the better NP (population number) individuals for the next iteration.
IHDE-BPSO3, IHDE-BPSO4, and IHDE-BPSO5 are improvements based on different aspects of the Binary Particle Swarm Optimization (BPSO) algorithm. BPSO is used because it proposes a new adaptive acceleration constant scheme. In the study by Hizarci et al. [13], this adaptive scheme ranked well compared with the adaptive schemes commonly used in the previous particle swarm optimization literature, showing potentially beneficial effects. Therefore, the BPSO velocity and position update formulas and their adaptive schemes can be used to generate different mutation solutions and crossover solutions. In two of these algorithms, new crossover solutions are generated, which disrupt the structure of the solution in a probabilistic way.
IHDE-BPSO3:
Mutation: Make use of Formulas (1) and (7). The parameters of the mutation part are also set using Equation (7).
Crossover: Make use of Formula (9).
Selection: Select the contemporary solution , mutation solution , and crossover solution , and retain the better NP (population number) individuals for the next iteration.
IHDE-BPSO4:
Mutation 1: Make use of Formula (1) to obtain mutation solution .
Mutation 2: Make use of Formula (2) to obtain mutation solution . The parameters of the mutation part are also set using Equations (7) and (8).
Crossover 1: Make use of Formula (9) to obtain crossover solution .
pcr1 is set to 0.5 because mutation solution 2, generated using BPSO, demonstrates superior performance; the higher probability increases its influence on the solution's structure. Meanwhile, pcr2 is set to 0.75 such that mutation solution 1 and the contemporary solution affect the structure of the solution with a smaller probability.
Selection: Select the contemporary solution , mutation solution , and crossover solution and , and retain the better NP (population number) individuals for the next iteration.
IHDE-BPSO5:
Mutation 1: Make use of Formula (1) to obtain mutation solution .
Mutation 2: Make use of Formula (2) to obtain mutation solution . The parameters of the mutation part are also set using Equations (7) and (8).
Crossover 1: Make use of Formula (9) to obtain crossover solution .
Crossover 2: Make use of Formula (13) to obtain crossover solution .
Selection: Select the contemporary solution , mutation solution and , and crossover solution and , and retain the better NP (population number) individuals for the next iteration.
IHDE-HPSO3, IHDE-HPSO4, and IHDE-HPSO5 are improvements based on different aspects of the improved particle swarm optimization algorithm HPSO. HPSO is chosen because it adds novel adaptive methods to the velocity and position update formulas, and because the weight, which originally affected only the previous generation's velocity term, is changed to affect the entire velocity formula. The proposed methods incorporate these enhancements to the adaptive parameters, integrating their concepts into the algorithm's architecture to generate diverse solutions.
IHDE-HPSO3:
Mutation: Make use of Formula (5).
Crossover: Make use of Formula (9).
Selection: Select the contemporary solution , mutation solution , and crossover solution , and retain the better NP (population number) individuals for the next iteration.
IHDE-HPSO4:
Mutation 1: Make use of Formula (5) to obtain mutation solution .
Mutation 2: Make use of Formula (6) to obtain mutation solution .
Crossover 1: Make use of Formula (9) to obtain crossover solution .
Crossover 2: Make use of Formula (13) to obtain crossover solution .
Selection: Select the contemporary solution , mutation solution , and crossover solution and , and retain the better NP (population number) individuals for the next iteration.
IHDE-HPSO5:
Mutation 1: Make use of Formula (5) to obtain mutation solution .
Mutation 2: Make use of Formula (6) to obtain mutation solution .
Crossover 1: Make use of Formula (9) to obtain crossover solution .
Crossover 2: Make use of Formula (13) to obtain crossover solution .
Selection: Select the contemporary solution , mutation solution and , and crossover solution and , and retain the better NP (population number) individuals for the next iteration.
These twelve improvements were compared with the DE/rand/1 proposed by Storn and Price [1], as well as with the MBDE proposed by Parouha and Das [16]; the results are presented in the next section.