1. Introduction
A meta-heuristic algorithm [1] is an algorithm based on stochastic operators that does not depend on gradient information. It finds good solutions with limited computing power and is suitable for complex problems with continuous, discrete, or even mixed search spaces. Because of their simple concepts and easy implementation, such algorithms have been widely used to solve practical engineering problems. They generally fall into four categories: swarm-behavior-based, physical-rule-based, nature-based, and human-related algorithms. Swarm-based algorithms are inspired by the collective behavior of swarms; the most famous are Particle Swarm Optimization (PSO) [2] and Ant Colony Optimization (ACO) [3]. The Whale Optimization Algorithm (WOA) [4], the Marine Predators Algorithm (MPA) [5], the Artificial Gorilla Troops Optimizer (GTO) [6], the Snake Optimizer (SO) [7], and the Nutcracker Optimization Algorithm (NOA) [8] have also been proposed in recent years. Physics-based algorithms are mostly inspired by physical laws and mathematical rules and usually come with rigorous proofs; typical examples are Simulated Annealing (SA) [9], the Multi-Verse Optimizer (MVO) [10], the Sine–Cosine Algorithm (SCA) [11], and the Kepler Optimization Algorithm (KOA) [12]. Nature-based algorithms are primarily derived from biological evolution in nature, such as the Genetic Algorithm (GA) [13], the Differential Evolution algorithm (DE) [14], and the Evolution Strategy (ES) [15]. Human-related algorithms are developed from long-term human experience, such as the Harmony Search algorithm [16], Teaching–Learning-Based Optimization (TLBO) [17], and the League Championship Algorithm [18]. Exploration and exploitation are the two most essential parts of the meta-heuristic process. The exploration phase searches the solution space as broadly, randomly, and globally as possible, while the exploitation phase searches the promising regions found during exploration more accurately, with reduced randomness and improved precision [19]. However, over-exploration eventually leads to convergence difficulties, while focusing only on exploitation causes the model to fall easily into local optima. Striking a balance between exploration and exploitation is therefore a central difficulty for meta-heuristic algorithms.
The optimization algorithm selected in this paper is the slime mould algorithm (SMA) proposed by Li et al. in 2020 [20]. The SMA is inspired by the behavior and morphological changes of slime moulds during foraging. Because of its simple code and few parameters, the SMA has been applied to various engineering optimization problems; however, it still struggles with complex, high-dimensional problems, and researchers have continuously improved it in recent years. There are generally two kinds of improvement strategies: one improves the core equations of the algorithm through a variety of mechanisms, and the other hybridizes several algorithms to improve efficiency. Yu et al. [21] combined opposition-based learning and chaotic mapping to optimize the SMA, which performed well in urban water resources treatment. Naik et al. [22] proposed adding adaptive opposition-based learning in the later iterations to avoid premature convergence. Zhang et al. [23] introduced opposition-based learning and Quantum Rotation Gate strategies into the SMA. Jiang et al. [24] proposed an improved SMA based on elite opposition-based learning, in which an adaptive probability threshold adjusts the selection probability of slime moulds and improves the quality and diversity of the initial population. Alfadhli et al. [25] integrated adaptive parameters into the population iteration; the improved method adaptively changes the population size to balance exploitation and exploration effectively across the different stages of the SMA. Liu et al. [26] introduced Chebyshev chaotic mapping in the initialization stage and added the simplex method in the later exploration stage to increase local search ability and avoid premature convergence, achieving excellent results in extracting PV model parameters. Qiu et al. [27] proposed a staged location-update mechanism that divides the iterations evenly into three segments, with each stage mixing different optimization strategies to balance exploitation and exploration. Researchers have also integrated other swarm intelligence algorithms with the SMA [28,29,30,31,32], in either the exploration or the exploitation stage, achieving excellent results in image segmentation, support vector regression (SVR) prediction, and other directions.
These algorithm improvements perform well in their respective domains. However, according to the No Free Lunch (NFL) theorem [33], performance gains on one class of problems are offset by performance declines on another, and no single algorithm is ideally suited to every problem. Therefore, algorithms must be improved according to the specific requirements of the problem at hand.
This paper presents an improved slime mould algorithm to address the accuracy and stability of practical power load prediction: (1) Bernoulli chaotic mapping is added in the initialization stage, because the proportion of new individuals generated randomly at that stage is tiny, leaving insufficient randomness; the randomness and ergodicity of the chaotic mapping optimize the initial population, making the distribution of slime moulds more reasonable and avoiding premature convergence. (2) The decision parameter p is evenly divided into two stages, and the staged exploration and exploitation mechanisms of the dandelion optimizer (DO) are hybridized into them, so that the algorithm adopts different location-update formulas at different stages; this increases the diversity of the slime mould distribution and further enhances both global search ability and local exploitation. (3) A specular reflection learning strategy is introduced in the late iterations to help the population escape from local optima and improve solution accuracy.
The rest of this article is structured as follows: Section 2 describes the principal concepts of the SMA and the DO. Section 3 introduces the details of the improved algorithm BDSSMA and its mathematical model. In Section 4, the proposed algorithm is compared with six swarm intelligence algorithms on 23 benchmark functions, and its statistical validity is evaluated via the Wilcoxon rank-sum test. In Section 5, an ELM power load forecasting model is used to test these swarm intelligence algorithms on a practical engineering problem and demonstrate their feasibility for power load forecasting. Section 6 summarizes the work and offers some directions for future research.
3. Methods
In Section 2, we found that the SMA is an algorithm with few parameters, stable operation, and a certain optimization ability. However, some problems remain. First, the initial population of a swarm intelligence algorithm should be diverse, but the random-restart parameter z of the SMA is only 0.03, a small constant. The proportion of new individuals randomly generated by Equation (6) in the total population is therefore tiny, and population diversity further decreases as individual positions are updated, so the algorithm tends to stagnate in local optima and struggles to jump out and re-explore. Second, from the perspective of the position-update mechanism, Equation (1), the position update of a slime mould is determined by the position of the current optimal individual and the positions of two random individuals, which amounts to random exploration near the current optimum. This enhances the global search ability of the SMA in the early stage to some extent, but the two randomly selected individuals also slow down its convergence. As the iteration progresses, the population tends to move closer to the current optimal position, which makes it easy for the SMA to fall into local optima on functions with many local optima. Finally, in the exploitation stage, the disturbance factor decreases linearly from 1 to 0. This simple linear schedule makes individuals move sluggishly in the later exploitation phase, resulting in slow convergence or insufficient solution accuracy.
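To make these issues concrete, here is a simplified sketch of the standard SMA position update. The function name and the simplification W = 1 are ours (the real algorithm uses the ranked-fitness weight of Equation (4)), but the tiny z-restart branch, the two random individuals, and the linearly shrinking disturbance factor mirror the mechanism criticized above:

```python
import numpy as np

def sma_step(X, fitness, t, T, lb, ub, z=0.03, rng=None):
    """One simplified SMA position update (illustrative sketch).

    X: (N, dim) population; fitness: (N,) values (minimization).
    """
    rng = rng or np.random.default_rng()
    N, dim = X.shape
    best = np.argmin(fitness)
    x_best = X[best]
    a = np.arctanh(1 - (t + 1) / T)      # bound of vb, shrinks over time
    vc_bound = 1 - (t + 1) / T           # the linear disturbance factor
    X_new = np.empty_like(X)
    for i in range(N):
        if rng.random() < z:             # restart: only a 3% chance, so very
            X_new[i] = lb + rng.random(dim) * (ub - lb)   # few fresh individuals
        else:
            p = np.tanh(abs(fitness[i] - fitness[best]))
            vb = rng.uniform(-a, a, dim)
            vc = rng.uniform(-vc_bound, vc_bound, dim)
            A, B = X[rng.integers(N)], X[rng.integers(N)]  # two random picks
            W = np.ones(dim)             # weight simplified to 1 here
            if rng.random() < p:         # explore around the current best
                X_new[i] = x_best + vb * (W * A - B)
            else:                        # exploit by contracting the individual
                X_new[i] = vc * X[i]
    return np.clip(X_new, lb, ub)
```

Note how, late in the run, both `a` and `vc_bound` approach zero, so the exploitation branch barely moves individuals, which is exactly the sluggishness described above.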
This paper proposes the following changes to address these problems. First, Bernoulli chaotic mapping is added in the initialization stage; the randomness and ergodicity of the chaotic mapping optimize the initial population, making the distribution of slime mould individuals more reasonable and avoiding premature convergence. Second, the control parameter p is divided into two stages, and the staged exploration and exploitation mechanisms of the DO are mixed in, so that the algorithm adopts different position-update formulas in the two stages; this increases the diversity of the individual distribution and further enhances the global search ability and local exploitation performance. Third, a specular reflection learning strategy, based on planar mirror imaging, is introduced in the late iterations to help the population escape from local optima and improve solution accuracy. The improvements are described in detail below.
3.1. Chaotic Mapping
Whether the population initialization is uniform is an essential factor in the optimization performance of an algorithm. Therefore, chaotic mapping is introduced to initialize the population, which improves the diversity of the initial population and the quality of the population in subsequent iterations. In ref. [26], Liu et al. concluded that, among 10 common chaotic maps, the Chebyshev map has the best effect on the initialization stage of the SMA. However, other notable chaotic maps were not considered there. We compare further chaotic maps [34] (Table 1) with the best map currently used in the SMA (the Chebyshev map) and discuss whether better alternatives exist.
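As an illustration, an initializer driven by the Bernoulli (shift) map might look as follows; the shift parameter λ = 0.4 and the seed value x0 are common choices for this map, not values specified here:

```python
import numpy as np

def bernoulli_sequence(n, x0=0.326, lam=0.4):
    """Generate n chaotic values in (0, 1] with the Bernoulli shift map."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        # Piecewise-linear Bernoulli map with shift parameter lam.
        x = x / (1 - lam) if x <= 1 - lam else (x - (1 - lam)) / lam
        seq[i] = x
    return seq

def init_population(N, dim, lb, ub, x0=0.326, lam=0.4):
    """Map a Bernoulli chaotic sequence onto the search box [lb, ub]."""
    chaos = bernoulli_sequence(N * dim, x0, lam).reshape(N, dim)
    return lb + chaos * (ub - lb)
```

The ergodicity of the map spreads the sampled values across (0, 1), so the scaled population covers the search box more evenly than the tiny random-restart fraction alone.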
3.2. Optimization of Location Update Mechanism
As mentioned above, to optimize the SMA position-update mechanism, researchers have mainly split the main impact factors, such as the iteration counter t and the weight coefficient ω, into multiple stages, with different stages integrating different strategies to balance exploration and exploitation. However, no one has yet optimized the position-update decision parameter p. This paper proposes, for the first time, dividing the parameter p evenly into two segments and mixing in the mechanisms that dandelion seeds rely on during the different landing stages of the DO, namely the Lévy flight strategy and Brownian motion. The following describes how these two mechanisms improve the location update.
First, according to the two-dimensional trajectories of the Lévy flight strategy and Brownian motion in Figure 1 and Figure 2, the Lévy flight trajectory has an irregular step size: many small, uncertain steps interspersed with occasional large jumps, which cover a larger search area. In contrast, Brownian motion has a more uniform and controlled step size, allowing better coverage of a local region for finer exploitation.
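This contrast can be reproduced numerically. The sketch below is our own illustration (the function names are ours, and the Lévy exponent β = 1.5 is the customary choice): it draws both kinds of steps and compares their extremes:

```python
import numpy as np
from math import gamma, pi, sin

def brownian_steps(n, rng):
    """Brownian increments: i.i.d. draws from the standard normal."""
    return rng.standard_normal(n)

def levy_steps(n, rng, beta=1.5):
    """Lévy-stable steps via Mantegna's algorithm (beta = 1.5 is customary)."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.standard_normal(n) * sigma_u
    v = rng.standard_normal(n)
    return u / np.abs(v) ** (1 / beta)

rng = np.random.default_rng(42)
b_steps = brownian_steps(10_000, rng)
l_steps = levy_steps(10_000, rng)
# The Brownian maximum stays near a few sigma, while the heavy-tailed
# Lévy sequence produces occasional far larger jumps.
b_max, l_max = np.abs(b_steps).max(), np.abs(l_steps).max()
```

The heavy tail is what lets Lévy-driven individuals escape a neighborhood in one move, while the bounded Brownian steps suit fine-grained local coverage.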
Therefore, Brownian motion is added in the first stage (r < p/2), where the food concentration is lower. In the original SMA, two random individuals are used to search at this stage. Although randomly selected individuals can widen the search to some extent, they slow down the convergence of the SMA. This paper instead replaces one of the random individuals with the current optimal individual and then adds Brownian motion. Brownian movement of the population centered on the elite individual's position not only enhances the search ability of slime mould individuals in the early stage but also avoids overly rapid convergence. The update rule is given in Equation (20), where $X_b$ is the optimal individual, $\beta$ is the Brownian motion term (a random number drawn from the standard normal distribution), and $X_B$ is a random individual. Then, the Lévy flight strategy is added in the second stage (p/2 ≤ r < p), that is, the stage with higher food concentration. Taking advantage of Lévy's irregular step length, the small steps continue an effective in-depth search of the current area, while the occasional large steps help the current individuals explore the neighborhood, avoiding premature convergence into local optima. The update rule is given in Equation (21), where $X_C$ is another random individual and $L$ is the Lévy flight step. The perturbation factors α and k from the dandelion optimizer are also added in the later iterations to further diversify the iteration process. To sum up, the improved position-update formula is given in Equation (22).
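The split of the decision parameter can be sketched as control flow. Only the two-stage branching below is taken from the text; the exact combination of terms inside each branch is a hypothetical stand-in for Equations (20) and (21), not the paper's formulas:

```python
import numpy as np

def staged_update(x_i, x_best, x_B, x_C, vb, W, r, p, brownian, levy):
    """Two-stage branching on the decision parameter p (illustrative).

    The combination of terms in each branch is an assumption; the
    branching thresholds follow the even split of p described above.
    """
    if r < p / 2:
        # Stage 1 (lower food concentration): Brownian search centred on
        # the elite individual instead of two purely random individuals.
        return x_best + vb * (W * x_best - brownian * x_B)
    elif r < p:
        # Stage 2 (higher food concentration): Lévy-perturbed refinement.
        return x_best + vb * (W * x_C - levy * x_best)
    # Otherwise keep the original SMA contraction step (unchanged here).
    return x_i
```

The point of the sketch is the dispatch: the same random draw r that the original SMA compares once against p now selects between a Brownian-driven and a Lévy-driven update.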
3.3. Specular Reflection Learning (SRL)
Zhang proposed specular reflection learning (SRL) in 2021 [35], based on the reflection imaging law of light in plane mirrors; the specular reflection learning model is shown in Figure 3.
In Figure 3, O is the midpoint of [LB, UB], $x^{*}$ is the optimal individual in the current population, and $x^{*\prime}$ is the reverse individual of $x^{*}$. From the reflection geometry of Figure 3, we can obtain:

$$\tan\theta = \frac{\frac{LB+UB}{2} - x^{*}}{h}, \qquad \tan\theta' = \frac{x^{*\prime} - \frac{LB+UB}{2}}{h'} \quad (23)$$

Equation (24) is obtained according to $\tan\theta = \tan\theta'$:

$$\frac{\frac{LB+UB}{2} - x^{*}}{h} = \frac{x^{*\prime} - \frac{LB+UB}{2}}{h'} \quad (24)$$

Equation (25) presents the reverse point $x^{*\prime}$:

$$x^{*\prime} = \frac{LB+UB}{2} + \frac{h'}{h}\left(\frac{LB+UB}{2} - x^{*}\right) \quad (25)$$

Let $k = h/h'$; Equation (25) can be simplified to:

$$x^{*\prime} = \left(\frac{1}{2} + \frac{1}{2k}\right)(LB+UB) - \frac{x^{*}}{k} \quad (26)$$

When $k = 1$, it can be further simplified as:

$$x^{*\prime} = LB + UB - x^{*} \quad (27)$$

Equation (27) is the general opposition-based learning applied to $x^{*}$, which shows that opposition-based learning is actually a special case of specular reflection learning. Generalized to the $D$-dimensional search space:

$$x^{*\prime}_{j} = LB_{j} + UB_{j} - x^{*}_{j}, \quad j = 1, 2, \ldots, D \quad (28)$$

Now, specular reflection learning is added to the later iterations to generate random reverse solutions, expand the diversity of the population, and avoid falling into local optima. The calculation formula evolves as follows:

$$x^{*\prime}_{j} = \left(\frac{1}{2} + \frac{1}{2k}\right)(LB_{j} + UB_{j}) - \frac{x^{*}_{j}}{k} \quad (29)$$
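The reflection mapping can be transcribed directly: with k = h/h', the reverse point of x* is ((LB + UB)/2)(1 + 1/k) − x*/k, which reduces to the opposition point LB + UB − x* when k = 1. The sketch below is ours; in particular, the uniform range used to randomize k is a hypothetical choice for illustration, not the paper's definition:

```python
import numpy as np

def specular_reflection(x, lb, ub, k=1.0):
    """Reverse point of x in [lb, ub] under specular reflection learning.

    k = h/h' controls where the reflected point lands; k = 1 recovers
    plain opposition-based learning: lb + ub - x.
    """
    x = np.asarray(x, dtype=float)
    return (lb + ub) * (0.5 + 0.5 / k) - x / k

def random_reverse(x, lb, ub, rng=None):
    """Random reverse solution via a randomly drawn k (the uniform
    range here is an assumption standing in for the paper's choice)."""
    rng = rng or np.random.default_rng()
    k = rng.uniform(0.5, 2.0)
    # Clip back into the box in case the reflection overshoots a bound.
    return np.clip(specular_reflection(x, lb, ub, k), lb, ub)
```

Randomizing k scatters the reverse solutions around the plain opposition point, which is what injects the late-iteration diversity described above.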
This paper proposes a hybrid of the dandelion optimizer and specular reflection learning to improve the slime mould algorithm (BDSSMA); its pseudocode is given in Algorithm 1, and the overall flow is shown in Figure 4.
Algorithm 1. Pseudocode of BDSSMA
1:  Start
2:  Initialize the BDSSMA parameters: population size N, maximum number of iterations T, variable dimension Dim, and upper and lower search bounds UB, LB
3:  Generate the Bernoulli map to initialize the population
4:  while t < T
5:      Calculate the fitness of each individual and select the best and worst individuals
6:      Update the inertia weight W according to Equation (4)
7:      for i = 1 to N
8:          if rand < z
9:              Update the population position by Equation (32)
10:         else
11:             if r < p/2
12:                 Update the population position by Equation (20)
13:             else if p/2 ≤ r < p
14:                 Update the population position by Equation (21)
15:             end if
16:         end if
17:         Generate random reverse solutions by Equation (29)
18:     end for
19:     t = t + 1
20: end while
21: Return the best fitness value and the best individual