
Hybrid Slime Mold and Arithmetic Optimization Algorithm with Random Center Learning and Restart Mutation

1 Department of Information Engineering, Sanming University, Sanming 365004, China
2 Prince Hussein Bin Abdullah College for Information Technology, Al Al-Bayt University, Mafraq 25113, Jordan
3 Department of Electrical and Computer Engineering, Lebanese American University, Byblos 13-5053, Lebanon
4 Hourani Center for Applied Scientific Research, Al-Ahliyya Amman University, Amman 19328, Jordan
5 MEU Research Unit, Middle East University, Amman 11831, Jordan
6 Applied Science Research Center, Applied Science Private University, Amman 11931, Jordan
7 School of Computer Sciences, Universiti Sains Malaysia, Gelugor 11800, Malaysia
*
Author to whom correspondence should be addressed.
Biomimetics 2023, 8(5), 396; https://doi.org/10.3390/biomimetics8050396
Submission received: 31 July 2023 / Revised: 21 August 2023 / Accepted: 23 August 2023 / Published: 28 August 2023

Abstract

The slime mold algorithm (SMA) and the arithmetic optimization algorithm (AOA) are two novel meta-heuristic optimization algorithms. The slime mold algorithm has a strong global search ability, but its oscillation effect weakens in later iterations, making it difficult to find the optimal position in complex functions. The arithmetic optimization algorithm uses multiplication and division operators for position updates, which provide strong randomness and good convergence ability. This paper therefore integrates the two algorithms and adds a random central solution strategy, a mutation strategy, and a restart strategy, yielding the hybrid slime mold and arithmetic optimization algorithm with random center learning and restart mutation (RCLSMAOA). The improved algorithm retains the position update formula of the slime mold algorithm in the global exploration phase and replaces the convergence phase of the slime mold algorithm with the multiplication and division operators of AOA in the local exploitation phase. At the same time, the random center learning strategy improves the global search efficiency and the diversity of the population. In addition, the restart strategy and mutation strategy improve the convergence accuracy of the algorithm and enhance its late-stage optimization ability. In comparison experiments, different kinds of test functions are used to examine the performance of the improved algorithm. We determine the final performance of the algorithm by analyzing experimental data and convergence curves, using the Wilcoxon rank sum test and the Friedman test. The experimental results show that the improved algorithm, which combines the slime mold algorithm and the arithmetic optimization algorithm, is effective. Finally, the performance of the improved algorithm on practical engineering problems is evaluated.

1. Introduction

In the past decade, the development and application of optimization models have received growing attention from mathematicians and engineers. With the continuous development of computer technology, more and more optimization problems have attracted attention, and the unconstrained optimization problem is currently a research hotspot. The complexity of these problems is gradually increasing, with characteristics such as being large-scale, multimodal, and nonlinear [1]. The meta-heuristic algorithm has become an excellent and widely recognized tool because it is simple, easy to implement, does not require gradient information, and can avoid local optima [2]. This is because the meta-heuristic algorithm treats the problem as a black box: it only needs the problem's input to obtain its output. Researchers develop meta-heuristic algorithms by simulating various natural phenomena and biological habits. Meta-heuristic algorithms can effectively handle real-life optimization problems because they possess valuable randomness that allows them to bypass local optima, and they have stronger search capabilities than traditional optimization algorithms. These characteristics make meta-heuristic algorithms well suited to application problems such as neural networks [3], clustering [4], engineering [5], and scheduling [6].
A natural question is whether a single meta-heuristic algorithm can solve most problems. The No Free Lunch (NFL) theorem [7] states that an algorithm that provides a good solution to a particular problem is not guaranteed good results on other problems. The NFL theorem motivates researchers to tackle new problems by improving currently known algorithms. For example, Chen et al., inspired by the lifestyle of beluga whales, developed an improved beluga whale optimization (IBWO) [8] with stronger global optimization ability; Wen et al. enhanced an algorithm's global optimization capability with a new host-switching mechanism [9]; and Wu et al. improved the sand cat's wandering strategy and applied it to engineering problems [10].
AOA [11] is a meta-heuristic optimization algorithm proposed by Abualigah et al. in 2021 and designed around the four basic arithmetic operations. The algorithm uses the multiplication and division operators to improve the global dispersion of position updates, and the addition and subtraction operators to improve the accuracy of position updates in local areas. However, AOA still suffers from slow convergence in complex environments and needs further refinement to handle more complex optimization problems. Recently, many researchers have proposed improvements to AOA, including the adaptive parallel arithmetic optimization algorithm (AAOA) [12], the dynamic arithmetic optimization algorithm (DAOA) [13], and the chaotic arithmetic optimization algorithm (CAOA) [14].
SMA [15] is a swarm intelligence optimization algorithm proposed by Li et al. in 2020 that simulates the foraging behavior and morphological changes of the slime mold Physarum polycephalum. Weight changes model the positive and negative feedback generated during foraging, producing three foraging patterns. The SMA has strong global exploration ability, reasonable convergence accuracy, and good stability, but its oscillation effect weakens in later iterations, so it easily falls into local optima; its contraction mechanism is also weak, resulting in slower convergence. Researchers have improved and widely applied this algorithm. Kouadri et al. [16] applied it to minimize generator fuel cost and losses in optimal power flow problems. Zhao et al. [17] proposed hybridizing SMA with the Harris hawks optimization (HHO) algorithm, using multiple composite selection mechanisms to improve the algorithm's selectivity, the randomness of individual position updates, and its solving efficiency.
Based on the advantages and disadvantages of SMA and AOA, this article aims to create a more effective optimization algorithm by combining the two. To further enhance performance, a random central solution strategy is introduced. This strategy uses random center learning to expand the exploration range of individuals, enrich population diversity, and effectively control the balance between exploration and exploitation. A mutation strategy and a restart strategy are also used. The mutation strategy is a local adaptive mutation method that improves the algorithm's global search ability and performs well in high-dimensional spaces. The restart strategy helps poorer individuals jump to other positions and is usually used to escape local optima. The proposed hybrid slime mold and arithmetic optimization algorithm with random center learning and restart mutation (RCLSMAOA) incorporates the best features of both SMA and AOA, making it more effective at exploring the search space and able to solve corresponding engineering problems. The algorithm focuses on four key aspects:
(1) In the exploration and exploitation stages, SMA and AOA are organically combined to comprehensively improve the exploration and exploitation capabilities;
(2) A random center strategy is newly proposed, which improves the early convergence speed of the algorithm, effectively maintains a balance between exploration and exploitation, and enhances population diversity;
(3) The mutation strategy and restart strategy enhance the ability to solve complex problems and to escape local optima. Experiments on 23 benchmark functions of different dimensions and the CEC2020 test functions demonstrate the algorithm's effectiveness;
(4) Five engineering problems are used to verify the feasibility of RCLSMAOA on practical engineering problems.
The second part of this article introduces related work. SMA and AOA are introduced in the third and fourth parts, respectively. In the fifth part, we describe the added strategies (SCLS, MS, and RS), explain the implementation of the algorithm, and provide its pseudocode and flowchart. The sixth part analyzes time complexity. The seventh part experimentally analyzes the performance of RCLSMAOA. The eighth part applies RCLSMAOA to specific engineering problems. The ninth part summarizes the contributions of this article and outlines future research directions. The code for this article is available at: https://github.com/Mars02w/RCLSMAOA, accessed on 20 August 2023.

2. Related Works

In recent years, meta-heuristic algorithms can be divided into four categories based on their sources of inspiration: (1) Physics-based methods, inspired by physical rules in the universe. Specific algorithms include the black hole algorithm (BH) [19], the gravitational search algorithm (GSA) [18], and the most famous, the simulated annealing algorithm (SA) [20]. (2) Evolution-based algorithms, inspired by the laws of biological evolution. Researchers have linked natural and artificial evolution to create many excellent algorithms, such as the genetic algorithm (GA) [21], genetic programming (GP) [22], and differential evolution (DE) [23]. (3) Swarm-based algorithms, which model observations of animals and other living organisms. The most famous is particle swarm optimization (PSO) [24], which simulates the behavior of birds and fish; the ant colony optimization algorithm (ACO) [25] simulates ants searching for food sources; and the foraging behavior of slime molds inspired the slime mold algorithm (SMA) [15]. (4) Human-based algorithms, inspired by human behavior, habits, and ideas in social life. A well-known one is the teaching–learning-based optimizer (TLBO) [26], inspired by the interaction between teachers and students; others include the driving training-based optimizer (DTBO) [27] and the interior search algorithm (ISA) [28]. In recent years, researchers have continued to propose excellent algorithms, such as monarch butterfly optimization (MBO) [29], the moth search algorithm (MSA) [30], hunger games search (HGS) [31], the Runge Kutta method (RUN) [32], the colony predation algorithm (CPA) [33], the weighted mean of vectors (INFO) [34], Harris hawks optimization (HHO) [35], and the RIME optimization algorithm (RIME) [36].
In addition, hybrid optimization algorithms are a current research trend, and researchers have increasingly studied algorithm hybrids. For example, Alam Zeb et al. [37] combined a genetic algorithm with simulated annealing; the powerful exploration ability of the genetic algorithm compensated for the weak exploration of simulated annealing, enhancing practical optimization performance. Chen et al. [38] fused particle swarm optimization with simulated annealing and applied it successfully to magnetic anomaly detection. Hybrid optimization algorithms can also solve optimal power flow problems [39]. Tumari et al. [40] studied a variant of the marine predators algorithm for tuning the fractional-order PID controller of an automatic voltage regulator system. Wang et al. [41] added an angle modulation mechanism to the dragonfly algorithm, enabling it to perform well on 0-1 knapsack problems. Devaraj et al. [42] combined the firefly algorithm with improved multi-objective particle swarm optimization (IMPSO) to improve load balancing in cloud computing environments, with successful results. Jui et al. [43] hybridized the average multi-verse optimizer and the sine cosine algorithm, showing good potential for identifying continuous-time Hammerstein systems.

3. Slime Mold Algorithm

The SMA is a meta-heuristic optimization algorithm that simulates the foraging behavior of slime molds, reflecting their oscillation and contraction characteristics during foraging. When searching for food, the slime mold first secretes enzymes to digest organic matter; during migration, its front end extends into a fan shape and it forages through a venous network. Due to this unique morphology, it can use multiple pathways simultaneously, forming a connected venous network to obtain food. In addition, slime molds also search for food in other unknown areas.
Slime mold veins approach food by following odors in the air. The higher the food concentration, the stronger the propagation wave generated by the biological oscillator in the slime mold's body, which increases the flow of cytoplasm in the vein; the faster the cytoplasm flows, the thicker the vein becomes. This drives the position update of the slime mold. The position update formula is:
$$X(t+1)=\begin{cases}X_b(t)+v_b\times\left(W\times X_{rand1}(t)-X_{rand2}(t)\right), & r_1<p\\ v_c\times X(t), & r_1\ge p\end{cases}\tag{1}$$
where Xb(t) represents the current position of the individual with the best fitness, vb is a parameter in [−a, a], W represents the weight coefficient of the slime mold, Xrand1(t) and Xrand2(t) are the positions of two randomly selected individuals, r1 is a random number in [0, 1], vc is a parameter that linearly decreases from 1 to 0, and t is the current iteration number.
The updating formula of control parameters a, p, and weight coefficient W is as follows:
$$a=\operatorname{arctanh}\left(-\frac{t}{T}+1\right)\tag{2}$$
$$p=\tanh\left|S(i)-DF\right|\tag{3}$$
$$W(SIndex(i))=\begin{cases}1+r_2\times\log\left(\dfrac{bF-S(i)}{bF-wF}+1\right), & condition\\[2mm] 1-r_2\times\log\left(\dfrac{bF-S(i)}{bF-wF}+1\right), & others\end{cases}\tag{4}$$
$$SIndex=\operatorname{sort}(S)\tag{5}$$
The parameters are calculated from the current individual's fitness and the optimal values, where i = 1, 2, …, N. N is the population size, S(i) is the fitness value of the ith slime mold individual, and DF is the best fitness obtained in the current iteration process. T is the maximum number of iterations, and r2 is a random number in [0, 1]. condition denotes individuals whose fitness ranks in the top half of the population, and others denotes the remaining individuals. bF and wF are the best and worst fitness values obtained in the current iteration, respectively. SIndex(i) denotes the sorted sequence of fitness values.
Even if slime molds find a food source, they will separate some individuals to explore other unknown areas to find a higher-quality food source. At this point, the formula for the slime mold update position is as follows:
$$X(t+1)=\begin{cases}rand\times(UB-LB)+LB, & rand<z\\ X_b(t)+v_b\times\left(W\times X_{rand1}(t)-X_{rand2}(t)\right), & r_1<p\\ v_c\times X(t), & r_1\ge p\end{cases}\tag{6}$$
Among them, rand is a random number within the [0, 1] interval, UB and LB represent upper and lower bounds, and z represents a custom parameter (with a value of 0.03).
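To make the update concrete, the following is a minimal NumPy sketch of one SMA iteration built from Equations (1)–(6). It assumes minimization with lb and ub given as dim-length arrays, uses base-10 logarithms in Equation (4), and is an illustration rather than the authors' released MATLAB implementation (see the GitHub link in Section 1).

import numpy as np

def sma_update(X, fit, Xb, t, T, lb, ub, z=0.03):
    """One SMA iteration, Eqs. (1)-(6); minimization assumed; lb, ub are dim-length arrays."""
    N, dim = X.shape
    a = np.arctanh(1 - t / (T + 1))                  # Eq. (2); T + 1 keeps the argument in (0, 1)
    b = 1 - t / T                                    # bound of vc, decaying linearly from 1 to 0
    order = np.argsort(fit)                          # SIndex, Eq. (5): sort by fitness
    bF, wF = fit[order[0]], fit[order[-1]]
    W = np.ones((N, dim))                            # weight coefficients, Eq. (4)
    for rank, i in enumerate(order):
        r2 = np.random.rand(dim)
        term = np.log10(abs(bF - fit[i]) / (abs(bF - wF) + 1e-300) + 1)
        W[i] = 1 + r2 * term if rank < N // 2 else 1 - r2 * term
    Xnew = np.empty_like(X)
    for i in range(N):
        if np.random.rand() < z:                     # re-initialization branch of Eq. (6)
            Xnew[i] = lb + np.random.rand(dim) * (ub - lb)
        elif np.random.rand() < np.tanh(abs(fit[i] - bF)):   # p of Eq. (3), with DF = bF
            vb = np.random.uniform(-a, a, dim)
            A, B = X[np.random.randint(N)], X[np.random.randint(N)]
            Xnew[i] = Xb + vb * (W[i] * A - B)
        else:
            vc = np.random.uniform(-b, b, dim)
            Xnew[i] = vc * X[i]
    return np.clip(Xnew, lb, ub)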

4. Arithmetic Optimization Algorithm

AOA is a meta-heuristic optimization algorithm that explores and exploits the search space through arithmetic operators. The exploration stage uses multiplication (M) and division (D) strategies to cover the search space broadly, improving the dispersion of solutions to avoid local optima. The exploitation stage improves the accuracy of the solutions obtained during exploration through the addition (A) and subtraction (S) strategies, that is, it enhances the local optimization ability.

4.1. Mathematical Optimization Acceleration Function

Before exploration and exploitation, AOA computes a math optimizer accelerated (MOA) coefficient to select the search phase. When r1 > MOA(t), the exploration phase is carried out by executing (D) or (M); when r1 ≤ MOA(t), the exploitation phase is carried out by executing (A) or (S). r1 is a random number from 0 to 1.
$$MOA(t)=Min+t\times\frac{Max-Min}{T}\tag{7}$$
where Min and Max represent the minimum and maximum values of the optimization acceleration function (MOA).

4.2. Exploration Phase

AOA explores different parts of the search space (global optimization) using two main search methods, the multiplication (M) and division (D) search strategies, and updates positions with the following formula:
$$X_{i,j}(t+1)=\begin{cases}X_{b,j}(t)\div(MOP+\varepsilon)\times\left((UB_j-LB_j)\times\mu+LB_j\right), & r_2<0.5\\ X_{b,j}(t)\times MOP\times\left((UB_j-LB_j)\times\mu+LB_j\right), & others\end{cases}\tag{8}$$
where r2 ∈ [0, 1], Xi,j(t + 1) is the position of the ith individual in the jth dimension at the next iteration, and Xb,j(t) is the jth dimension of the best solution found so far. ε is a very small number that prevents division by zero, UBj and LBj are the upper and lower limits of the jth dimension, respectively, and µ is a control parameter for adjusting the search process, set to 0.499.
$$MOP(t)=1-\frac{t^{1/\alpha}}{T^{1/\alpha}}\tag{9}$$
The math optimizer probability (MOP) is a coefficient, and α is a sensitive parameter defining the exploitation accuracy over the iterations, fixed at 5.

4.3. Exploitation Phase

AOA uses the subtraction (S) and addition (A) operators to search deeply in several dense regions, performing local optimization based on these two main exploitation strategies. The position update formula is as follows:
$$X_{i,j}(t+1)=\begin{cases}X_{b,j}(t)-MOP\times\left((UB_j-LB_j)\times\mu+LB_j\right), & r_3<0.5\\ X_{b,j}(t)+MOP\times\left((UB_j-LB_j)\times\mu+LB_j\right), & others\end{cases}\tag{10}$$
Among them, r3 is a random number between 0 and 1.
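The two AOA phases can be sketched as follows; a minimal NumPy illustration of Equations (7)–(10), where the MOA bounds moa_min and moa_max are assumed values (the text does not state them).

import numpy as np

def aoa_update(X, Xb, t, T, lb, ub, mu=0.499, alpha=5, moa_min=0.2, moa_max=1.0):
    """One AOA iteration, Eqs. (7)-(10); moa_min and moa_max are assumed MOA bounds."""
    N, dim = X.shape
    eps = np.finfo(float).eps                        # the small number of Eq. (8)
    moa = moa_min + t * (moa_max - moa_min) / T      # Eq. (7)
    mop = 1 - t ** (1 / alpha) / T ** (1 / alpha)    # Eq. (9)
    scale = (ub - lb) * mu + lb                      # shared (UB - LB) * mu + LB term
    Xnew = np.empty_like(X)
    for i in range(N):
        for j in range(dim):
            r1, r2, r3 = np.random.rand(3)
            if r1 > moa:                             # exploration phase, Eq. (8)
                if r2 < 0.5:
                    Xnew[i, j] = Xb[j] / (mop + eps) * scale[j]   # division (D)
                else:
                    Xnew[i, j] = Xb[j] * mop * scale[j]           # multiplication (M)
            else:                                    # exploitation phase, Eq. (10)
                if r3 < 0.5:
                    Xnew[i, j] = Xb[j] - mop * scale[j]           # subtraction (S)
                else:
                    Xnew[i, j] = Xb[j] + mop * scale[j]           # addition (A)
    return np.clip(Xnew, lb, ub)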

5. Hybrid Improvement Strategy

5.1. Stochastic Center Learning Strategy (SCLS)

The random center learning strategy is an optimization mechanism newly proposed in this paper. In nature, group animals such as wolf packs and whale pods often cooperate to surround their prey in the middle before attacking. Inspired by this, this article proposes a center learning strategy whose core idea is to generate a central solution from the upper and lower bounds during the search, compare its objective fitness with that of the existing optimal solution, and keep the better one for the next iteration. Definition of the central solution: for a point X in an n-dimensional coordinate system, the central solution is calculated as follows:
$$X_c=\frac{LB+UB}{2}\tag{11}$$
where Xc is the central solution. Since the central solution lacks randomness, to further improve the population's ability to find the globally optimal solution (as shown in Figure 1), this paper proposes an improved random center learning strategy, calculated as follows:
$$X_{crand}=\begin{cases}X_c+(X_r-X_c)\times r_1, & rand()<0.5\\ X_c+(X_c-X_l)\times r_2, & rand()\ge 0.5\end{cases}\tag{12}$$
where Xcrand is the random central solution, Xr and Xl are the objects to be learned from, and r1, r2, and rand are random numbers between 0 and 1. To reflect the randomness and symmetry of random center learning, the threshold for rand() is 0.5. A schematic diagram of the central solution and random center learning is shown in Figure 1.
Figure 1 shows that the random central solution Xcrand may lie anywhere in the interval [Xl, Xr], with Xc being the central solution.
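As an illustration, the strategy of Equations (11) and (12) can be sketched as follows; Xl and Xr are assumed here to be randomly chosen population members, since the text only calls them the objects to be learned from.

import numpy as np

def random_center_solution(lb, ub, Xl, Xr):
    """Random central solution, Eqs. (11)-(12); Xl, Xr are assumed population members."""
    Xc = (lb + ub) / 2.0                             # central solution, Eq. (11)
    if np.random.rand() < 0.5:
        return Xc + (Xr - Xc) * np.random.rand()     # learn from Xr: a point between Xc and Xr
    return Xc + (Xc - Xl) * np.random.rand()         # learn from Xl: mirror it about the center

As described above, the resulting Xcrand is then compared with the existing solution and the better of the two is kept for the next iteration.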

5.2. Mutation Strategy (MS)

The mutation strategy is a composite mutation strategy built from multiple mutation operators [44]. It generates three candidate positions Vi1, Vi2, and Vi3 for the ith position by executing Equations (13)–(15) in parallel. The formulas are as follows:
$$V_{i1,j}=\begin{cases}X_{r1,j}+F_1\times\left(X_{r2,j}-X_{r3,j}\right), & \text{if } rand()<Cr_1 \text{ or } j=j_{rand}\\ X_{i,j}, & others\end{cases}\tag{13}$$
where Xrk,j denotes the jth dimension of the rkth solution (the same below), r1, r2, and r3 are distinct integers in the range [1, N], jrand is an integer in the interval [1, D], F1 is a scaling factor equal to 1.0, and Cr1 is a crossover rate set to 0.1.
$$V_{i2,j}=\begin{cases}X_{r4,j}+F_2\times\left(X_{r5,j}-X_{r6,j}\right)+F_2\times\left(X_{r7,j}-X_{r8,j}\right), & \text{if } rand()<Cr_2 \text{ or } j=j_{rand}\\ X_{i,j}, & others\end{cases}\tag{14}$$
r4, r5, r6, r7, and r8 are distinct integers in the range [1, N]. F2 represents a proportional coefficient equal to 0.8, and Cr2 represents a crossover rate equal to 0.2.
$$V_{i3,j}=\begin{cases}X_{i,j}+rand()\times\left(X_{r9,j}-X_{i,j}\right)+F_3\times\left(X_{r10,j}-X_{r11,j}\right), & \text{if } rand()<Cr_3 \text{ or } j=j_{rand}\\ X_{i,j}, & others\end{cases}\tag{15}$$
r9, r10, and r11 are distinct integers in the range [1, N]. F3 represents a proportional coefficient equal to 1.0, and Cr3 represents a crossover rate of 0.9.
After the three candidate positions Vi1, Vi2, and Vi3 are generated, they are first corrected according to the upper and lower boundaries. Then, the candidate Vi with the best fitness among Vi1, Vi2, and Vi3 is used to update the position of the ith solution via Formula (16), a greedy selection strategy, shown below.
$$X_i=\begin{cases}V_i, & \text{if } f(V_i)<f(X_i)\\ X_i, & otherwise\end{cases}\tag{16}$$
Vi represents the modified best candidate solution, and Xi represents the ith position.
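A minimal sketch of the composite mutation with greedy selection (Equations (13)–(16)) follows; f is the objective function, and the boundary correction uses simple clipping, which is one possible interpretation of the correction step described above.

import numpy as np

def composite_mutation(X, i, f, lb, ub):
    """Composite mutation with greedy selection, Eqs. (13)-(16); updates X[i] in place."""
    N, D = X.shape
    jrand = np.random.randint(D)
    V = np.tile(X[i], (3, 1))                        # three candidates, all starting from X[i]
    # Eq. (13): F1 = 1.0, Cr1 = 0.1
    r = np.random.choice(N, 3, replace=False)
    m = np.random.rand(D) < 0.1; m[jrand] = True
    V[0, m] = (X[r[0]] + 1.0 * (X[r[1]] - X[r[2]]))[m]
    # Eq. (14): F2 = 0.8, Cr2 = 0.2
    r = np.random.choice(N, 5, replace=False)
    m = np.random.rand(D) < 0.2; m[jrand] = True
    V[1, m] = (X[r[0]] + 0.8 * (X[r[1]] - X[r[2]]) + 0.8 * (X[r[3]] - X[r[4]]))[m]
    # Eq. (15): F3 = 1.0, Cr3 = 0.9
    r = np.random.choice(N, 3, replace=False)
    m = np.random.rand(D) < 0.9; m[jrand] = True
    V[2, m] = (X[i] + np.random.rand(D) * (X[r[0]] - X[i]) + 1.0 * (X[r[1]] - X[r[2]]))[m]
    V = np.clip(V, lb, ub)                           # boundary correction
    best = min(range(3), key=lambda k: f(V[k]))      # best of the three candidates
    if f(V[best]) < f(X[i]):                         # greedy selection, Eq. (16)
        X[i] = V[best]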

5.3. Restart Strategy (RS)

When the slime mold cannot find food at a location for a long time, the nutrients in that area are no longer sufficient to support its continued survival, so the slime mold there needs to relocate. The restart scheme [9] helps poorer individuals move from a local optimum to other positions, so we use a restart strategy to relocate poorer individuals. In this strategy, the trial vector trial(i) records the number of times a position has failed to improve. If the fitness value of the searched position does not improve, trial(i) is increased by 1; otherwise, trial(i) is reset to 0. If trial(i) reaches a predetermined limit, the better of the two test vectors T1 and T2 replaces the ith position.
$$T_{1,j}=LB_j+rand()\times\left(UB_j-LB_j\right)\tag{17}$$
$$T_{2,j}=rand()\times\left(UB_j+LB_j\right)-X_{i,j}\tag{18}$$
$$T_{2,j}=LB_j+rand()\times\left(UB_j-LB_j\right),\quad \text{if } T_{2,j}>UB_j \text{ or } T_{2,j}<LB_j\tag{19}$$
where T1,j is the jth dimension of T1, T2,j is the jth dimension of T2, UBj and LBj are the upper and lower boundaries of the jth dimension, respectively, and rand() is a random floating-point number in [0, 1]. If T2,j exceeds the upper boundary UBj or falls below the lower boundary LBj, it is regenerated by Equation (19), and the trial vector trial(i) is reset to zero. This article sets the limit to log t. A smaller limit in the early stages helps enhance the global performance of the algorithm, while a larger limit in the later stage prevents the algorithm from moving away from the optimal solution.
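The restart strategy (Equations (17)–(19)) can be sketched as follows; the natural logarithm is assumed for the limit log t, since the base is not specified, and the stagnation counting itself is assumed to happen in the main loop.

import numpy as np

def restart(X, i, trial, t, lb, ub, f):
    """Restart strategy, Eqs. (17)-(19); trial counts iterations without improvement."""
    if trial[i] < np.log(t + 1):                     # limit = log t (natural log assumed)
        return
    D = X.shape[1]
    T1 = lb + np.random.rand(D) * (ub - lb)          # Eq. (17)
    T2 = np.random.rand(D) * (ub + lb) - X[i]        # Eq. (18)
    out = (T2 > ub) | (T2 < lb)                      # Eq. (19): repair out-of-bound entries
    T2[out] = (lb + np.random.rand(D) * (ub - lb))[out]
    X[i] = T1 if f(T1) < f(T2) else T2               # keep the better test vector
    trial[i] = 0                                     # reset the stagnation counter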

5.4. A Hybrid Optimization Algorithm of Slime Mold and Arithmetic Based on Random Center Learning and Restart Mutation

As mentioned above, when exploring unknown food sources, slime molds in SMA update their positions through the synergy of vb and vc. The oscillation effect of vb increases the possibility of global exploration, and individuals are re-initialized when a random number is less than z. At the end of the iterations, the vb oscillation effect weakens, which makes the algorithm easily fall into local optima. vc is a linearly decreasing parameter from 1 to 0, and its single, weak search mechanism makes it difficult for the algorithm to escape local optima. In AOA, positions are updated according to the highly dispersive division and multiplication operators, and contraction is realized by the addition and subtraction operators. The math optimizer probability MOP changes with the iteration to improve the breadth of exploration and the ability to escape local optima, but the algorithm still inevitably falls into local optima in later iterations. Random center learning updates a random position according to the general characteristics of the optimal solution, which improves the algorithm's convergence rate in the early exploration stage. The composite mutation strategy introduces multiple candidate solutions and compares them with existing solutions. In RS, the number of times a position has failed to improve is recorded by the trial vector trial(i); when it exceeds a given threshold, the algorithm is judged to be trapped in a local optimum, and a new position update formula helps it escape.
Therefore, in this paper, we abandon the weak mechanism in the convergence stage of SMA and add the multiplication and division operators of AOA to expand the scope of exploration. The mutation and restart strategies are introduced to improve the ability to escape local optima in late iterations. Given the relatively slow convergence rate in the exploration stage, a random center learning strategy with characteristic solutions is added.
In summary, the hybrid slime mold and arithmetic optimization algorithm with random center learning and restart mutation (RCLSMAOA) proposed in this paper balances exploration and exploitation as well as local and global optimization. The pseudocode is shown in Algorithm 1, and the flowchart is shown in Figure 2.
Algorithm 1 The pseudocode of the RCLSMAOA
Initialize parameters t, Tmax, ub, lb, N, dim, W.
Initialize population X according to Equation (1).
While t ≤ Tmax
   Calculate fitness values and select the best individual and optimal location.
   Update w using Formula (4)
      For i = 1:N
        Update parameters a, W, and SIndex using Formulas (2), (4), and (5)
      If rand < z
       Update the population position using Formula (6)
      Else
       Update vb, vc, and p.
       If r1 < p
         Update the population position using Formula (6)
       Else
         Update the value of parameter mop using Formula (9)
          If r2 < 0.5
             Update the population position using the division case of Formula (8)
          Else
             Update the population position using the multiplication case of Formula (8)
         End If
       End If
     End If
     For i = 1:N
         Update population position using SCLS
     End For
     For i = 1:N
         Update population position using MS
     End For
     Update population position using RS
   Find the current best solution
   t = t + 1
End While
Output the best solution.
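For illustration, a minimal Python driver assembling the sketches from the previous sections might look as follows. It approximates Algorithm 1's per-individual branching by drawing the SMA and AOA candidate moves for the whole population and selecting between them with the r1 < p rule, so details differ from the authors' implementation.

import numpy as np

def rclsmaoa(f, lb, ub, dim, N=30, T=500):
    """End-to-end sketch built from sma_update, aoa_update, random_center_solution,
    composite_mutation, and restart defined above; minimization assumed."""
    X = lb + np.random.rand(N, dim) * (ub - lb)
    fit = np.apply_along_axis(f, 1, X)
    trial = np.zeros(N, dtype=int)                   # stagnation counters for RS
    for t in range(1, T + 1):
        Xb = X[np.argmin(fit)].copy()
        Xs = sma_update(X, fit, Xb, t, T, lb, ub)    # SMA exploration move, Eq. (6)
        Xa = aoa_update(X, Xb, t, T, lb, ub)         # AOA operator move, Eq. (8)
        p = np.tanh(np.abs(fit - fit.min()))         # r1 < p selects the SMA move
        X = np.where((np.random.rand(N) < p)[:, None], Xs, Xa)
        for i in range(N):                           # SCLS with greedy acceptance, then MS
            j, k = np.random.randint(N, size=2)
            cand = np.clip(random_center_solution(lb, ub, X[j], X[k]), lb, ub)
            if f(cand) < f(X[i]):
                X[i] = cand
            composite_mutation(X, i, f, lb, ub)
        new_fit = np.apply_along_axis(f, 1, X)
        trial = np.where(new_fit < fit, 0, trial + 1)
        fit = new_fit
        for i in range(N):                           # RS, Eqs. (17)-(19)
            restart(X, i, trial, t, lb, ub, f)
            fit[i] = f(X[i])
    best = np.argmin(fit)
    return X[best], fit[best]

# e.g., minimize the 30-dimensional sphere function
best_x, best_f = rclsmaoa(lambda x: np.sum(x ** 2), np.full(30, -100.0), np.full(30, 100.0), 30)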

6. Time Complexity Analysis

In the RCLSMAOA, let the population size be N, the dimension Dim, and the maximum number of iterations T. The time complexity of the population initialization phase is O(N × Dim). During iteration, positions are updated by the vb mechanism of SMA and the multiplication and division operators of AOA, and the random central solution strategy and mutation strategy add further updates, for a time complexity of O(3 × N × Dim × T). The time complexity of updating the convergence curve is O(1). It is worth noting that RS is rarely triggered in general, so its cost can be ignored. In conclusion, the time complexity of the RCLSMAOA is O(N × Dim × (3T + 1)).

7. Experimental Part

All experiments in this paper were completed on a computer with an 11th Gen Intel(R) Core(TM) i7-11700 processor with a base frequency of 2.50 GHz, 16 GB of memory, and 64-bit Windows 11, using MATLAB R2021a. To check the performance of the RCLSMAOA, 23 standard benchmark functions and the CEC2020 benchmark functions are used. For a more comprehensive picture of the actual performance of the RCLSMAOA, we chose different algorithms for comparison, including AOA and SMA as well as the well-known remora optimization algorithm (ROA) [45], sine cosine algorithm (SCA) [46], and whale optimization algorithm (WOA) [47]. In addition, we added two improved algorithms: the hybrid whale and moth-flame optimization algorithm [39] and the hybrid average multi-verse optimizer and sine cosine algorithm [43]. The parameter settings of these algorithms are shown in Table 1.

7.1. Experiments on the 23 Standard Benchmark Functions

In this section, we selected 23 benchmark functions to test RCLSMAOA's performance [10]. The 23 functions consist of seven unimodal functions, six multimodal functions, and ten fixed-dimension multimodal functions. Fn(x) is the mathematical expression of each benchmark function, dim is its experimental dimension, range is its search space, and Fmin is its theoretical optimal value. See Figure 3 for images of the functions. In this experiment, we set the population size N = 30, the spatial dimension dim = 30 or 500, and the maximum number of iterations T = 500. RCLSMAOA and the comparison algorithms were each run 30 times independently to obtain each algorithm's best fitness, average fitness, and standard deviation.
The experimental results are shown in Table 2, Table 3 and Table 4. On F1–F7, RCLSMAOA obtained the best values on all three statistics, including the best fitness value. We observed that AOA performed well on F2, whereas SMA performed well on F1 and F3; RCLSMAOA inherits their advantages on unimodal functions. At the 500-dimensional scale, RCLSMAOA still performs well, because the mutation strategy performs local mutations and increases the global exploration capability.
Similarly, we observed the multimodal functions F8–F13. On F8, the best fitness value and the average value of RCLSMAOA reached the optimum, and the standard deviation was only slightly worse than that of SMA. Functions F9–F11 are relatively simple, and most optimization algorithms obtain good results on them. The performance of RCLSMAOA on F12–F13 is also satisfactory. We observed that the performance of RCLSMAOA is not affected by changes in dimension and remains stable. F14–F23 are fixed-dimension multimodal functions, which are relatively simple. In our experiment, the performance of RCLSMAOA is still the best among the comparison algorithms. Although fixed-dimension multimodal functions are relatively simple, they remain reliable for verifying algorithm performance. The above analysis indicates that RCLSMAOA, which integrates SMA and AOA, performs better than either SMA or AOA.
Since data alone cannot convey the algorithm's behavior intuitively, we also show the convergence curves of each algorithm on the 23 benchmark functions, in Figure 3, Figure 4 and Figure 5. From the curves, we can see that on F1–F4, RCLSMAOA has a fast convergence rate and high convergence precision, which SMA and AOA do not possess. This is due to the random center learning strategy, which expands the algorithm's search range and speeds up convergence. For F5–F6, RCLSMAOA finds a good position at the beginning of the iterations and then slowly converges to the optimal position; except for the WMFO algorithm, the other algorithms stagnate early. For F7, the optimization ability of RCLSMAOA is also stronger than that of the other algorithms, because the restart strategy enables it to repeatedly escape local optima. On F8, RCLSMAOA performs worse than WMFO but better than the other algorithms. F9–F11 are relatively simple, and the optimal fitness value is easy to find; RCLSMAOA is also the algorithm with the fastest convergence rate. For F12–F13, RCLSMAOA performs well and continues to converge where other algorithms fall into local optima. F14–F23 are relatively simple functions, but they still help verify algorithm performance; on these functions, RCLSMAOA also always finds the optimal value fastest. In summary, RCLSMAOA performs well on the 23 benchmark functions.

7.2. Experiments on the CEC2020 Benchmark Function

Validation on the 23 standard functions alone is not sufficient, so we added the CEC2020 test functions [9]. In this experiment, we set N = 30, T = 500, and dim = 10. The comparison algorithms remain unchanged. The results of 30 runs of RCLSMAOA and the other algorithms are shown in Table 5.
CEC2020 has four classes of functions: the unimodal function CEC01, the basic multimodal functions CEC02–CEC04, the hybrid functions CEC05–CEC07, and the composition functions CEC08–CEC10. On the unimodal function, the best algorithm is RCLSMAOA, followed by SMA. This is because RCLSMAOA integrates SMA and adds the mutation strategy to enhance its exploitation capability, enabling it to find better locations. On the basic multimodal functions, RCLSMAOA always performs stably and finds better values, because the random central solution strategy effectively maintains the balance between exploration and exploitation; combined with the search strategy of SMA, RCLSMAOA performs well on these functions. The hybrid and composition functions are relatively difficult and complex and can easily trap algorithms in local optima. For this reason, we introduced the restart strategy to enable RCLSMAOA to escape local optima. From the results, RCLSMAOA performs well and is not troubled by local optima.
To examine the actual performance of the algorithm more clearly, we selected the convergence curves of RCLSMAOA and the comparison algorithms, shown in Figure 6. The convergence curves of RCLSMAOA fall mainly into two types. The first appears on the unimodal function: RCLSMAOA converges toward the optimal value and finally finds it, because it integrates the position update formula of SMA and improves it with the mutation strategy. The second appears on the complex composition and hybrid functions: RCLSMAOA shows a very fast convergence rate early in the iterations, because it uses the multiplication and division operators of AOA, which allow it to approach the optimal value very quickly.

7.3. Analysis of Wilcoxon Rank Sum Test Results and Friedman Test

The Wilcoxon rank sum test is a non-parametric test that makes no assumptions about the data, so it applies to various types of data, including discrete, continuous, normal, and non-normal distributions. In this experiment, it was used to test whether two samples differ. The test outputs a p-value; when p is less than 5%, we consider the difference between the two samples significant. Because RCLSMAOA cannot be compared with itself, its own p-values are not listed. This article takes the eight algorithms as samples; each algorithm is run independently 30 times with population size N = 30. The dimension used for the 23 standard test functions is 30, and for CEC2020 it is 10.
Table 6 shows the results of the 30 experiments conducted on the 23 standard test functions. For F1–F4, because RCLSMAOA is a hybrid of SMA, the two are not distinguishable on some functions. For F7–F11, these functions are simple, and most algorithms achieve good results on them.
Table 7 shows the results of the 30 experiments conducted on the CEC2020 test functions. We observed significant differences between RCLSMAOA and the other algorithms, except on CEC04. The main reason is that CEC04 is relatively simple compared with the other functions, and most algorithms can find the optimal value.
To rank the algorithms, we used the Friedman test. Each algorithm was run independently 30 times and the results averaged, as shown in Table 8 and Table 9. The dimension chosen for the 23 functions here is 30. RCLSMAOA again ranks first.
In this section, we conducted a more comprehensive analysis of the algorithm's performance using the Wilcoxon rank sum test and the Friedman test. We conclude that RCLSMAOA differs significantly from most comparison algorithms and performs well.
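Both tests are available in standard statistics packages; for reproducibility, a minimal SciPy sketch with placeholder data (not the paper's results) is shown below.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# 30 independent best-fitness values per algorithm on one function (placeholder data)
rclsmaoa_runs = rng.normal(0.000, 0.001, 30)
competitor_runs = rng.normal(0.005, 0.001, 30)

# Wilcoxon rank sum test: p < 0.05 indicates a significant difference
_, p = stats.ranksums(rclsmaoa_runs, competitor_runs)
print(f"rank sum p-value: {p:.3e}")

# Friedman test and mean ranks over a 23-function x 8-algorithm matrix (placeholder data)
results = rng.random((23, 8))
print(stats.friedmanchisquare(*results.T))
ranks = np.apply_along_axis(stats.rankdata, 1, results)   # rank the algorithms per function
print("mean ranks:", ranks.mean(axis=0))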

8. Engineering Issues

In this section, we test RCLSMAOA on practical engineering problems to assess its solution quality and computational performance and to explore whether it achieves satisfactory results. Five classic engineering problems are used, and RCLSMAOA is compared with other well-known optimization algorithms.

8.1. Pressure Vessel Design Problem

Pressure vessel design is a common practical engineering problem. The aim is to minimize the total cost of materials, forming, and welding for a cylindrical container. The structural schematic is shown in Figure 7. The problem has four variables: shell thickness Ts, head thickness Th, inner radius R, and length L of the cylindrical section, excluding the head.
The mathematical model of the pressure vessel design problem is as follows:
Consider:
$$x=[x_1\ x_2\ x_3\ x_4]=[T_s\ T_h\ R\ L]$$
Objective function:
$$f(x)=0.6224x_1x_3x_4+1.7781x_2x_3^2+3.1661x_1^2x_4+19.84x_1^2x_3$$
Subject to:
$$g_1(x)=-x_1+0.0193x_3\le 0$$
$$g_2(x)=-x_2+0.00954x_3\le 0$$
$$g_3(x)=-\pi x_3^2x_4-\frac{4}{3}\pi x_3^3+1296000\le 0$$
$$g_4(x)=x_4-240\le 0$$
Boundaries:
$$0\le x_1\le 99,\quad 0\le x_2\le 99,\quad 10\le x_3\le 200,\quad 10\le x_4\le 200$$
Table 10 shows the specific results of RCLSMAOA on the pressure vessel problem. RCLSMAOA obtained Ts = 0.742433, Th = 0.370196, R = 40.31961, L = 200, and COST = 5734.9131. Compared with the other algorithms, RCLSMAOA achieved the best result, with L reaching its best value of 200. This indicates that RCLSMAOA can solve this engineering problem.
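To illustrate how such a problem is handed to an optimizer, a minimal sketch of the objective and constraints above follows; the static penalty weight is an assumption for illustration, not the paper's constraint-handling scheme.

import numpy as np

def pressure_vessel(x):
    """Cost and constraint values (g(x) <= 0 is feasible) for the pressure vessel problem."""
    Ts, Th, R, L = x
    cost = (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)
    g = np.array([
        -Ts + 0.0193 * R,                                          # g1
        -Th + 0.00954 * R,                                         # g2
        -np.pi * R ** 2 * L - (4 / 3) * np.pi * R ** 3 + 1296000,  # g3
        L - 240,                                                   # g4
    ])
    return cost, g

def fitness(x):
    """Penalized objective for the optimizer; 1e6 is an assumed penalty weight."""
    cost, g = pressure_vessel(x)
    return cost + 1e6 * np.sum(np.maximum(g, 0) ** 2)

# a classic rounded design from the literature; the tiny g3 residual is due to rounding
cost, g = pressure_vessel([0.8125, 0.4375, 42.0984, 176.6366])
print(f"cost = {cost:.2f}, max violation = {max(g.max(), 0.0):.3g}")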

8.2. Speed Reducer Design Problem

The reducer is one of the key parts of a gearbox. In this problem, the aim is to minimize the reducer's weight while satisfying eleven design constraints over seven variables. The model structure is shown in Figure 8.
The mathematical model of reducer design is as follows:
Objective function:
$$\begin{aligned}f(x)=\;&0.7854\times x_1\times x_2^2\times\left(3.3333\times x_3^2+14.9334\times x_3-43.0934\right)-1.508\times x_1\times\left(x_6^2+x_7^2\right)\\&+7.4777\times\left(x_6^3+x_7^3\right)+0.7854\times\left(x_4\times x_6^2+x_5\times x_7^2\right)\end{aligned}$$
Subject to:
$$g_1(x)=\frac{27}{x_1x_2^2x_3}-1\le 0$$
$$g_2(x)=\frac{397.5}{x_1x_2^2x_3^2}-1\le 0$$
$$g_3(x)=\frac{1.93x_4^3}{x_2x_3x_6^4}-1\le 0$$
$$g_4(x)=\frac{1.93x_5^3}{x_2x_3x_7^4}-1\le 0$$
$$g_5(x)=\frac{1}{110x_6^3}\sqrt{\left(\frac{745x_4}{x_2x_3}\right)^2+16.9\times 10^6}-1\le 0$$
$$g_6(x)=\frac{1}{85x_7^3}\sqrt{\left(\frac{745x_5}{x_2x_3}\right)^2+16.9\times 10^6}-1\le 0$$
$$g_7(x)=\frac{x_2x_3}{40}-1\le 0$$
$$g_8(x)=\frac{5x_2}{x_1}-1\le 0$$
$$g_9(x)=\frac{x_1}{12x_2}-1\le 0$$
$$g_{10}(x)=\frac{1.5x_6+1.9}{x_4}-1\le 0$$
$$g_{11}(x)=\frac{1.1x_7+1.9}{x_5}-1\le 0$$
Boundaries:
$$2.6\le x_1\le 3.6,\quad 0.7\le x_2\le 0.8,\quad 17\le x_3\le 28,\quad 7.3\le x_4\le 8.3,\quad 7.3\le x_5\le 8.3,\quad 2.9\le x_6\le 3.9,\quad 5\le x_7\le 5.5$$
Table 11 shows that with x = [3.4975, 0.7, 17, 7.3, 7.8, 3.3500, 5.285], the minimum weight obtained by RCLSMAOA is 2995.437365, ranking first among the comparison algorithms. The experimental data show that RCLSMAOA still performs well on relatively complex engineering problems.

8.3. Three-Bar Truss Design Problem

In the three-bar truss design problem, the goal is to minimize the weight subject to stress, deflection, and buckling constraints by adjusting the two bar cross-sections so that the volume is minimized while satisfying the three constraints. It has two decision variables, the cross-sectional areas A1 and A2 of the bars; the physical model is shown in Figure 9.
The mathematical formulation of this problem is shown below:
Consider:
$$x=[x_1\ x_2]=[A_1\ A_2]$$
Minimize:
$$f(x)=\left(2\sqrt{2}x_1+x_2\right)\times l$$
Subject to:
$$g_1(x)=\frac{\sqrt{2}x_1+x_2}{\sqrt{2}x_1^2+2x_1x_2}P-\sigma\le 0,$$
$$g_2(x)=\frac{x_2}{\sqrt{2}x_1^2+2x_1x_2}P-\sigma\le 0,$$
$$g_3(x)=\frac{1}{\sqrt{2}x_2+x_1}P-\sigma\le 0,$$
$$l=100\ \mathrm{cm},\quad P=2\ \mathrm{kN/cm^2},\quad \sigma=2\ \mathrm{kN/cm^2}$$
Variable Range:
$$0\le x_1,x_2\le 1$$
The comparison between RCLSMAOA and the other algorithms on the three-bar truss design problem is shown in Table 12. The values in the table are very close, indicating that further optimization of this problem is difficult, but RCLSMAOA still achieved the best result.

8.4. Welded Beam Design Problem

The welded beam design problem is a classic structural optimization problem and an important example in structural mechanics. The aim is to minimize the total weight of the welded beam subject to the constraints below, over four design variables of the connecting beam: thickness b, length l, height t, and weld thickness h. A detailed diagram of the welded beam is shown in Figure 10.
The mathematical model of welded beam design is as follows:
Consider:
$$x=[x_1\ x_2\ x_3\ x_4]=[h\ l\ t\ b]$$
Objective function:
$$f(x)=1.10471x_1^2x_2+0.04811x_3x_4\left(14.0+x_2\right)$$
Subject to:
$$g_1(x)=\tau(x)-\tau_{\max}\le 0$$
$$g_2(x)=\sigma(x)-\sigma_{\max}\le 0$$
$$g_3(x)=\delta(x)-\delta_{\max}\le 0$$
$$g_4(x)=x_1-x_4\le 0$$
$$g_5(x)=P-P_c(x)\le 0$$
$$g_6(x)=0.125-x_1\le 0$$
$$g_7(x)=1.10471x_1^2+0.04811x_3x_4\left(14.0+x_2\right)-5.0\le 0$$
where:
$$\tau(x)=\sqrt{(\tau')^2+2\tau'\tau''\frac{x_2}{2R}+(\tau'')^2},\quad \tau'=\frac{P}{\sqrt{2}x_1x_2},\quad \tau''=\frac{MR}{J},$$
$$M=P\left(L+\frac{x_2}{2}\right),\quad R=\sqrt{\frac{x_2^2}{4}+\left(\frac{x_1+x_3}{2}\right)^2},\quad \sigma(x)=\frac{6PL}{x_4x_3^2},$$
$$J=2\left\{\sqrt{2}x_1x_2\left[\frac{x_2^2}{4}+\left(\frac{x_1+x_3}{2}\right)^2\right]\right\},\quad \delta(x)=\frac{6PL^3}{Ex_4x_3^2},$$
$$P_c(x)=\frac{4.013E\sqrt{x_3^2x_4^6/36}}{L^2}\left(1-\frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right),$$
$$P=6000\ \mathrm{lb},\quad L=14\ \mathrm{in},\quad \delta_{\max}=0.25\ \mathrm{in},\quad E=30\times 10^6\ \mathrm{psi},\quad G=12\times 10^6\ \mathrm{psi},$$
$$\tau_{\max}=13600\ \mathrm{psi},\quad \sigma_{\max}=30000\ \mathrm{psi}$$
Boundaries:
$$0.1\le x_i\le 2,\ i=1,4;\quad 0.1\le x_i\le 10,\ i=2,3$$
The specific data are shown in Table 13. RCLSMAOA again performs well on this engineering problem, and the weight of the welded beam reaches the minimum value. The results indicate that RCLSMAOA is reliable for solving the welded beam problem.

8.5. Car Crashworthiness Design Problem

This engineering problem concerns the safety performance of vehicles in a collision. It involves many aspects, including the body structure, interior devices, and airbag systems. The car crashworthiness design problem is very important in automotive design, as it is directly related to passenger safety. The specific diagram is shown in Figure 11.
The mathematical formulation of this problem is shown below:
Minimize:
f ( x ) = Weight ,
Subject to:
$$g_1(x)=F_a\ \text{(load in abdomen)}\le 1\ \mathrm{kN},$$
$$g_2(x)=V\times C_u\ \text{(dummy upper chest)}\le 0.32\ \mathrm{m/s},$$
$$g_3(x)=V\times C_m\ \text{(dummy middle chest)}\le 0.32\ \mathrm{m/s},$$
$$g_4(x)=V\times C_l\ \text{(dummy lower chest)}\le 0.32\ \mathrm{m/s},$$
$$g_5(x)=\Delta_{ur}\ \text{(upper rib deflection)}\le 32\ \mathrm{mm},$$
$$g_6(x)=\Delta_{mr}\ \text{(middle rib deflection)}\le 32\ \mathrm{mm},$$
$$g_7(x)=\Delta_{lr}\ \text{(lower rib deflection)}\le 32\ \mathrm{mm},$$
$$g_8(x)=F_p\ \text{(pubic force)}\le 4\ \mathrm{kN},$$
$$g_9(x)=V_{MBP}\ \text{(velocity of V-pillar at middle point)}\le 9.9\ \mathrm{mm/ms},$$
$$g_{10}(x)=V_{FD}\ \text{(velocity of front door at V-pillar)}\le 15.7\ \mathrm{mm/ms},$$
Variable Range:
$$0.5\le x_i\le 1.5,\ i=1,\dots,7;\quad x_8,x_9\in\{0.192,0.345\};\quad -30\le x_{10},x_{11}\le 30$$
The results of this engineering problem are shown in Table 14. For RCLSMAOA, x1, x3, x5, and x7 all reached the minimum value of 0.5, and the final weight is also the best value. This shows that RCLSMAOA still performs well on engineering problems with many variables and constraints.

9. Conclusions

This article fully considers the advantages and disadvantages of the SMA and AOA optimization algorithms and proposes a hybrid slime mold and arithmetic optimization algorithm based on random center learning and restart mutation (RCLSMAOA). RCLSMAOA integrates the global search strategies of the two algorithms. On this basis, a random central solution strategy is added to enhance the randomness of the algorithm, the effectiveness of its global search, and the diversity of its population. The mutation strategy enhances the convergence ability of the algorithm and further prevents stagnation. The restart strategy effectively avoids local optima. Used together, these strategies help RCLSMAOA strengthen its optimization ability and maintain a good balance between exploration and exploitation. In addition, we used the Wilcoxon rank sum test to verify significant differences between algorithms, with good results. Finally, five engineering experiments were conducted, and RCLSMAOA provided excellent solutions.
From the experimental performance, convergence curve, and engineering problems, we can conclude that:
The RCLSMAOA proposed in this article combines the advantages of the SMA and AOA and effectively avoids the shortcomings of the two algorithms.
The newly proposed random center solution strategy can effectively address the shortcomings of RCLSMAOA and significantly enhance the algorithm’s global search ability.
The restart mutation strategy can improve the algorithm’s ability to overcome local optima and enhance the balance between exploration and exploitation.
By testing on different benchmark functions, the actual performance of the RCLSMAOA was effectively verified.
Finally, by verifying five engineering problems, it can be concluded that the RCLSMAOA has good engineering application prospects.
This paper only studies the fusion of two optimization algorithms with three added strategies. RCLSMAOA is still prone to local optima in high-dimensional spaces, its convergence accuracy is insufficient there, and it did not show obvious advantages on some engineering problems. In the future, we will further improve the performance of RCLSMAOA on practical engineering problems and its applicability in high-dimensional spaces. We will also study a binary version of RCLSMAOA and use it to solve feature selection problems.

Author Contributions

Conceptualization, Z.W. and H.C.; Methodology, H.J.; Software, H.C. and H.J.; Validation, H.J. and X.Z.; Formal analysis, H.C., H.J. and X.Z.; Investigation, H.C. and H.J.; Code, Z.W.; Resources, H.C. and Z.W.; Data Management, H.J. and L.A.; Writing—drafting the first draft, H.C. and Z.W.; Writing—Review and Editing, H.J. and H.C.; Visualization, Z.W., H.C. and X.Z.; Supervision, H.C. and H.J.; funding acquisition, H.C. and H.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Education National Education Science Planning Key Project—"Collaborative Education Project of the Ministry of Education" [grant number 202102391036], a Natural Science Foundation of Fujian Province of China [grant number 2023J011030], a Middle-aged and Young Teachers' Education and Research Project of Fujian Province [grant number JAT210423], a Sanming College Scientific Research and Development Fund Grant [grant number B202104], and a Fuzhou City Science and Technology Plan Project [grant number 2021-P-064]; the APC was funded by the Tianjin Municipal Health and Health Committee.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work is supported by Fujian Key Lab of Agriculture IOT Application, and IOT Application Engineering Research Center of Fujian Province Colleges and Universities.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pierezan, J.; Coelho, L.D.S. Coyote Optimization Algorithm: A New Metaheuristic for Global Optimization Problems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018. [Google Scholar] [CrossRef]
  2. Dhiman, G.; Kumar, V. Emperor penguin optimizer: A bio-inspired algorithm for engineering problems. Knowl. Based Syst. 2018, 159, 20–50. [Google Scholar] [CrossRef]
  3. Malviya, R.; Pratihar, D.K. Tuning of neural networks using particle swarm optimization to model MIG welding process. Swarm Evol. Comput. 2011, 1, 223–235. [Google Scholar] [CrossRef]
  4. Nanda, S.J.; Panda, G. A survey on nature inspired methaheuristic algorithms for partitional clustering. Swarm Evol. Comput. 2014, 16, 1–18. [Google Scholar] [CrossRef]
  5. Changdar, C.; Mahapatra, G.; Pal, R.K. An efficient genetic algorithm for multi-objective solid travelling salesman problem under fuzziness. Swarm Evol. Comput. 2014, 15, 27–37. [Google Scholar] [CrossRef]
  6. Suresh, K.; Kumarappan, N. Hybrid improved binary particle swarm optimization approach for generation maintenance scheduling problem. Swarm Evol. Comput. 2012, 9, 69–89. [Google Scholar] [CrossRef]
  7. Beyer, H.G.F.; Schwefel, H.P. Evolution strategies-a comprehensive introduction. Nat. Comput. 2002, 1, 3–52. [Google Scholar] [CrossRef]
  8. Chen, H.; Wang, Z.; Wu, D.; Jia, H.; Wen, C.; Rao, H.; Abualigah, L. An improved multi-strategy beluga whale optimization for global optimization problems. Math. Biosci. Eng. 2023, 20, 13267–13317. [Google Scholar] [CrossRef]
  9. Wen, C.; Jia, H.; Wu, D.; Rao, H.; Li, S.; Liu, Q.; Abualigah, L. Modified Remora Optimization Algorithm with Multistrategies for Global Optimization Problem. Mathematics 2022, 10, 3604. [Google Scholar] [CrossRef]
  10. Wu, D.; Rao, H.; Wen, C.; Jia, H.; Liu, Q.; Abualigah, L. Modified Sand Cat Swarm Optimization Algorithm for Solving Constrained Engineering Optimization Problems. Mathematics 2022, 10, 4350. [Google Scholar] [CrossRef]
  11. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, M.A.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  12. Wang, R.-B.; Wang, W.-F.; Xu, L.; Pan, J.-S.; Chu, S.-C. An Adaptive Parallel Arithmetic Optimization Algorithm for Robot Path Planning. J. Adv. Transp. 2021, 2021, 3606895. [Google Scholar] [CrossRef]
  13. Khodadadi, N.; Snasel, V.; Mirjalili, S. Dynamic Arithmetic Optimization Algorithm for Truss Optimization Under Natural Frequency Constraints. IEEE Access 2022, 10, 16188–16208. [Google Scholar] [CrossRef]
  14. Li, X.-D.; Wang, J.-S.; Hao, W.-K.; Zhang, M.; Wang, M. Chaotic arithmetic optimization algorithm. Appl. Intell. 2022, 52, 16718–16757. [Google Scholar] [CrossRef]
  15. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  16. Kouadri, R.; Musirin, I.; Slimani, L.; Bouktir, T.; Othman, M. Optimal power flow control variables using slime mould algorithm for generator fuel cost and loss minimization with voltage profile enhancement solution. Int. J. Emerging Trends Eng. Res. 2020, 8, 36–44. [Google Scholar]
  17. Zhao, J.; Gao, Z.M. The hybridized Harris hawk optimization and slime mould algorithm. J. Phys. Conf. Ser. 2020, 1682, 012029. [Google Scholar] [CrossRef]
  18. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  19. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184. [Google Scholar] [CrossRef]
  20. Kirkpatrick, S.; Gelatto, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  21. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  22. Banzhaf, W.; Nordin, P.; Keller, R.E.; Francone, F.D. Genetic Programming: An Introduction; Morgan Kaufmann Publishers: San Francisco, CA, USA, 1998; Volume 1. [Google Scholar]
  23. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  24. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  25. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B Cybern. 1996, 26, 29–41. [Google Scholar] [CrossRef]
  26. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  27. Dehghani, M.; Trojovská, E.; Trojovský, P. A new human-based metaheuristic algorithm for solving optimization problems on the base of simulation of driving training process. Sci. Rep. 2022, 12, 9924. [Google Scholar] [CrossRef]
  28. Gandomi, A.H. Interior search algorithm (ISA): A novel approach for global optimization. ISA Trans. 2014, 53, 1168–1183. [Google Scholar] [CrossRef] [PubMed]
  29. Wang, G.G.; Deb, S.; Cui, Z. Monarch butterfly optimization. Neural Comput. Appl. 2019, 31, 1995–2014. [Google Scholar] [CrossRef]
  30. Wang, G.G. Moth search algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Memetic Comput. 2018, 10, 151–164. [Google Scholar] [CrossRef]
  31. Yang, Y.; Chen, H.; Heidari, A.A.; Gandomi, A.H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 2021, 177, 114864. [Google Scholar] [CrossRef]
  32. Ahmadianfar, I.; Heidari, A.A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  33. Tu, J.; Chen, H.; Wang, M.; Gandomi, A.H. The colony predation algorithm. J. Bionic Eng. 2021, 18, 674–710. [Google Scholar] [CrossRef]
  34. Ahmadianfar, I.; Heidari, A.A.; Noshadian, S.; Chen, H.; Gandomi, A.H. INFO: An efficient optimization algorithm based on weighted mean of vectors. Expert Syst. Appl. 2022, 195, 116516. [Google Scholar] [CrossRef]
  35. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  36. Su, H.; Zhao, D.; Heidari, A.A.; Liu, L.; Zhang, X.; Mafarja, M.; Chen, H. RIME: A physics-based optimization. Neurocomputing 2023, 532, 183–214. [Google Scholar] [CrossRef]
  37. Zeb, A.; Khan, M.; Khan, N.; Tariq, A.; Ali, L.; Azam, F.; Jaffery, S.H.I. Hybridization of simulated annealing with genetic algorithm for cell formation problem. Int. J. Adv. Manuf. Technol. 2016, 86, 2243–2254. [Google Scholar] [CrossRef]
  38. Chen, Z.; Chen, R.; Deng, T.; Wang, Y.; Di, W.; Luo, H.; Han, T. Magnetic Anomaly Detection Using Three-Axis Magnetoelectric Sensors Based on the Hybridization of Particle Swarm Optimization and Simulated Annealing Algorithm. IEEE Sensors J. 2021, 22, 3686–3694. [Google Scholar] [CrossRef]
  39. Nadimi-Shahraki, M.H.; Fatahi, A.; Zamani, H.; Mirjalili, S.; Oliva, D. Hybridizing of Whale and Moth-Flame Optimization Algorithms to Solve Diverse Scales of Optimal Power Flow Problem. Electronics 2022, 11, 831. [Google Scholar] [CrossRef]
  40. Mohd Tumari, M.Z.; Ahmad, M.A.; Suid, M.H.; Hao, M.R. An Improved Marine Predators Algorithm-Tuned Fractional-Order PID Controller for Automatic Voltage Regulator System. Fractal Fract. 2023, 7, 561. [Google Scholar] [CrossRef]
  41. Wang, L.; Shi, R.; Dong, J. A Hybridization of Dragonfly Algorithm Optimization and Angle Modulation Mechanism for 0-1 Knapsack Problems. Entropy 2021, 23, 598. [Google Scholar] [CrossRef]
42. Devaraj, A.F.S.; Elhoseny, M.; Dhanasekaran, S.; Lydia, E.L.; Shankar, K. Hybridization of firefly and improved multi-objective particle swarm optimization algorithm for energy efficient load balancing in cloud computing environments. J. Parallel Distrib. Comput. 2020, 142, 36–45. [Google Scholar] [CrossRef]
  43. Jui, J.J.; Ahmad, M.A. A hybrid metaheuristic algorithm for identification of continuous-time Hammerstein systems. Appl. Math. Model. 2021, 95, 339–360. [Google Scholar] [CrossRef]
  44. Zhang, H.; Wang, Z.; Chen, W.; Heidari, A.A.; Wang, M.; Zhao, X.; Liang, G.; Chen, H.; Zhang, X. Ensemble mutation-driven salp swarm algorithm with restart mechanism: Framework and fundamental analysis. Expert Syst. Appl. 2020, 165, 113897. [Google Scholar] [CrossRef]
  45. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665. [Google Scholar] [CrossRef]
  46. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  47. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  48. Mirjalili, S.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
49. Dorigo, M.; Birattari, M.; Stützle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  50. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  51. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  52. Baykasoğlu, A.; Ozsoydan, F.B. Adaptive firefly algorithm with chaos for mechanical design optimization problems. Appl. Soft Comput. 2015, 36, 152–164. [Google Scholar] [CrossRef]
  53. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  54. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
55. Czerniak, J.M.; Zarzycki, H.; Ewald, D. AAO as a new strategy in modeling and simulation of constructional problems optimization. Simul. Modell. Pract. Theory 2017, 76, 22–33. [Google Scholar] [CrossRef]
  56. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
57. Baykasoglu, A.; Akpinar, S. Weighted superposition attraction (WSA): A swarm intelligence algorithm for optimization problems, Part 2: Constrained optimization. Appl. Soft Comput. 2015, 37, 396–415. [Google Scholar] [CrossRef]
  58. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35. [Google Scholar] [CrossRef]
  59. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper Optimisation Algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef]
  60. Rao, H.; Jia, H.; Wu, D.; Wen, C.; Li, S.; Liu, Q.; Abualigah, L. A Modified Group Teaching Optimization Algorithm for Solving Constrained Engineering Optimization Problems. Mathematics 2022, 10, 3765. [Google Scholar] [CrossRef]
  61. Kaveh, A.; Khayatazad, M. A new meta-heuristic method: Ray Optimization. Comput. Struct. 2012, 112–113, 283–294. [Google Scholar] [CrossRef]
  62. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl. Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  63. Wang, S.; Sun, K.; Zhang, W.; Jia, H. Multilevel thresholding using a modified ant lion optimizer with opposition-based learning for color image segmentation. Math. Biosci. Eng. 2021, 18, 3092–3143. [Google Scholar] [CrossRef]
  64. Zhang, Y.; Jin, Z. Group teaching optimization algorithm: A novel metaheuristic method for solving global optimization problems. Expert Syst. Appl. 2020, 148, 113246. [Google Scholar] [CrossRef]
65. Houssein, E.H.; Neggaz, N.; Hosney, M.E.; Mohamed, W.M.; Hassaballah, M. Enhanced Harris hawks optimization with genetic operators for selection chemical descriptors and compounds activities. Neural Comput. Appl. 2021, 33, 13601–13618. [Google Scholar] [CrossRef]
  66. Long, W.; Jiao, J.; Liang, X.; Cai, S.; Xu, M. A Random Opposition-Based Learning Grey Wolf Optimizer. IEEE Access 2019, 7, 113810–113825. [Google Scholar] [CrossRef]
  67. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
Figure 1. Central solution and random central learning solution.
Figure 2. Flow chart.
Figure 3. F23 function images with dim = 30 (F1–F13).
Figure 4. F23 function images with dim = 500 (F1–F13).
Figure 5. F23 function images (F14–F23).
Figure 6. Function images of the CEC2020 test functions.
Figure 7. The pressure vessel design.
Figure 8. Model of speed reducer design.
Figure 9. Model of three-bar truss design.
Figure 10. Model of welded beams design.
Figure 11. Car crashworthiness design.
Table 1. Parameter settings for the comparative algorithms.
Algorithm | Parameter Settings
RCLSMAOA | z = 0.03; µ = 0.499; α = 5
AOA [11] | α = 5; MOP_Max = 1; MOP_Min = 0.2; µ = 0.499
SMA [15] | z = 0.03
ROA [45] | C = 0.1
SCA [46] | a = 2
WOA [47] | A = 1; c ∈ [−1, 1]; b = 0.75; l ∈ [−1, 1]
WMFO [42] | a ∈ [1, 2]; b = 1
AMVO-SCA [43] | Wmax = 1; Wmin = 0.2
Table 2. Results of benchmark functions (F1–F13) under 30 dimensions.
Fn | Metric | RCLSMAOA | AOA | SMA | ROA | SCA | WOA | WMFO | AMVO-SCA
F1 | Best | 0 | 1.77 × 10^−163 | 0 | 0 | 2.34 × 10^−2 | 2.96 × 10^−82 | 3.31 × 10^−73 | 5.59 × 10^−1
F1 | Mean | 0 | 3.59 × 10^−22 | 5.24 × 10^−306 | 4.33 × 10^−306 | 7.04 | 6.68 × 10^−72 | 1.49 × 10^−54 | 2.15
F1 | Std | 0 | 1.97 × 10^−21 | 0 | 0 | 1.15 × 10^1 | 3.66 × 10^−71 | 5.61 × 10^−54 | 8.68 × 10^−1
F2 | Best | 0 | 0 | 2.59 × 10^−273 | 2.54 × 10^−183 | 4.56 × 10^−4 | 4.88 × 10^−58 | 3.86 × 10^−36 | 2.56 × 10^−1
F2 | Mean | 0 | 0 | 4.09 × 10^−157 | 1.66 × 10^−162 | 2.61 × 10^−2 | 2.50 × 10^−51 | 1.31 × 10^−26 | 6.06 × 10^−1
F2 | Std | 0 | 0 | 2.24 × 10^−156 | 6.67 × 10^−162 | 2.68 × 10^−2 | 6.92 × 10^−51 | 5.00 × 10^−26 | 1.96 × 10^−1
F3 | Best | 0 | 4.73 × 10^−117 | 0 | 0 | 1.50 × 10^3 | 1.03 × 10^4 | 7.53 × 10^−46 | 4.74 × 10^1
F3 | Mean | 0 | 5.09 × 10^−3 | 3.79 × 10^−275 | 7.70 × 10^−280 | 7.07 × 10^3 | 4.41 × 10^4 | 3.58 × 10^1 | 1.18 × 10^2
F3 | Std | 0 | 9.36 × 10^−3 | 0 | 0 | 4.09 × 10^3 | 1.49 × 10^4 | 1.90 × 10^2 | 4.05 × 10^1
F4 | Best | 0 | 1.07 × 10^−54 | 3.97 × 10^−283 | 1.82 × 10^−176 | 2.36 × 10^1 | 1.91 | 2.11 × 10^−30 | 5.16
F4 | Mean | 0 | 3.23 × 10^−2 | 5.55 × 10^−138 | 2.33 × 10^−159 | 3.75 × 10^1 | 4.96 × 10^1 | 1.17 × 10^−10 | 8.09
F4 | Std | 0 | 1.86 × 10^−2 | 3.04 × 10^−137 | 1.28 × 10^−158 | 7.62 | 2.73 × 10^1 | 6.20 × 10^−10 | 1.97
F5 | Best | 6.30 × 10^−5 | 2.74 × 10^1 | 4.46 × 10^−4 | 2.61 × 10^1 | 1.12 × 10^2 | 2.70 × 10^1 | 0 | 6.04 × 10^1
F5 | Mean | 1.85 × 10^−2 | 2.83 × 10^1 | 5.16 | 2.70 × 10^1 | 2.84 × 10^4 | 2.80 × 10^1 | 1.21 × 10^1 | 1.37 × 10^2
F5 | Std | 2.27 × 10^−2 | 3.45 × 10^−1 | 9.59 | 5.69 × 10^−1 | 5.48 × 10^4 | 4.53 × 10^−1 | 1.40 × 10^1 | 1.12 × 10^2
F6 | Best | 2.61 × 10^−7 | 2.73 | 1.35 × 10^−5 | 1.37 × 10^−2 | 4.98 | 9.36 × 10^−2 | 0 | 4.30
F6 | Mean | 3.59 × 10^−6 | 3.17 | 5.77 × 10^−3 | 1.17 × 10^−1 | 2.35 × 10^1 | 4.39 × 10^−1 | 0 | 7.05
F6 | Std | 2.99 × 10^−6 | 2.28 × 10^−1 | 3.57 × 10^−3 | 1.42 × 10^−1 | 2.99 × 10^1 | 2.17 × 10^−1 | 0 | 2.60
F7 | Best | 5.61 × 10^−7 | 3.49 × 10^−6 | 1.57 × 10^−5 | 6.78 × 10^−6 | 2.08 × 10^−2 | 1.57 × 10^−4 | 2.42 × 10^−6 | 4.01 × 10^−2
F7 | Mean | 4.30 × 10^−5 | 6.04 × 10^−5 | 1.84 × 10^−4 | 1.60 × 10^−4 | 1.55 × 10^−1 | 4.62 × 10^−3 | 2.96 × 10^−4 | 6.01 × 10^−2
F7 | Std | 4.68 × 10^−5 | 5.87 × 10^−5 | 1.95 × 10^−4 | 1.91 × 10^−4 | 2.07 × 10^−1 | 9.69 × 10^−3 | 2.31 × 10^−4 | 1.78 × 10^−2
F8 | Best | −1.26 × 10^4 | −6.32 × 10^3 | −1.26 × 10^4 | −1.26 × 10^4 | −4.24 × 10^3 | −1.26 × 10^4 | −2.37 × 10^22 | −7.24 × 10^3
F8 | Mean | −1.26 × 10^4 | −5.21 × 10^3 | −1.26 × 10^4 | −1.24 × 10^4 | −3.69 × 10^3 | −1.05 × 10^4 | −1.42 × 10^23 | −6.49 × 10^3
F8 | Std | 1.22 | 4.71 × 10^2 | 4.26 × 10^−1 | 4.31 × 10^2 | 2.97 × 10^2 | 1.76 × 10^3 | 7.55 × 10^23 | 7.77 × 10^2
F9 | Best | 0 | 0 | 0 | 0 | 2.84 × 10^−1 | 0 | 0 | 6.28 × 10^1
F9 | Mean | 0 | 0 | 0 | 0 | 4.16 × 10^1 | 1.89 × 10^−15 | 2.65 × 10^1 | 9.28 × 10^1
F9 | Std | 0 | 0 | 0 | 0 | 3.30 × 10^1 | 1.04 × 10^−14 | 3.10 × 10^1 | 2.26 × 10^1
F10 | Best | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 1.04 × 10^−1 | 8.88 × 10^−16 | 8.88 × 10^−16 | 4.46
F10 | Mean | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 1.53 × 10^1 | 3.97 × 10^−15 | 1.13 × 10^−15 | 6.10
F10 | Std | 0 | 0 | 0 | 0 | 7.95 | 2.42 × 10^−15 | 1.30 × 10^−15 | 8.36 × 10^−1
F11 | Best | 0 | 1.39 × 10^−2 | 0 | 0 | 3.92 × 10^−2 | 0 | 0 | 8.05 × 10^−1
F11 | Mean | 0 | 1.78 × 10^−1 | 0 | 0 | 9.90 × 10^−1 | 1.71 × 10^−2 | 0 | 9.68 × 10^−1
F11 | Std | 0 | 1.31 × 10^−1 | 0 | 0 | 3.80 × 10^−1 | 6.81 × 10^−2 | 0 | 6.45 × 10^−2
F12 | Best | 5.17 × 10^−9 | 4.44 × 10^−1 | 2.83 × 10^−5 | 2.39 × 10^−3 | 2.34 | 3.37 × 10^−3 | 1.57 × 10^−32 | 7.35
F12 | Mean | 8.41 × 10^−8 | 5.18 × 10^−1 | 5.35 × 10^−3 | 9.19 × 10^−3 | 6.29 × 10^4 | 1.97 × 10^−2 | 1.04 × 10^−1 | 1.11 × 10^1
F12 | Std | 8.95 × 10^−8 | 4.96 × 10^−2 | 6.30 × 10^−3 | 4.46 × 10^−3 | 2.73 × 10^5 | 1.41 × 10^−2 | 5.68 × 10^−1 | 3.00
F13 | Best | 6.96 × 10^−8 | 2.62 | 9.35 × 10^−6 | 6.01 × 10^−3 | 2.77 | 2.03 × 10^−1 | 1.35 × 10^−32 | 1.61 × 10^1
F13 | Mean | 7.57 × 10^−7 | 2.85 | 4.01 × 10^−3 | 2.04 × 10^−1 | 1.36 × 10^5 | 5.37 × 10^−1 | 1.80 × 10^−27 | 2.91 × 10^1
F13 | Std | 9.70 × 10^−7 | 8.55 × 10^−2 | 3.20 × 10^−3 | 1.33 × 10^−1 | 3.76 × 10^5 | 2.60 × 10^−1 | 9.89 × 10^−27 | 9.91
Table 3. Results of benchmark functions (F1–F13) under 500 dimensions.
Fn | Metric | RCLSMAOA | AOA | SMA | ROA | SCA | WOA | WMFO | AMVO-SCA
F1 | Best | 0 | 5.96 × 10^−1 | 0 | 0 | 2.06 × 10^5 | 1.70 × 10^−76 | 2.80 × 10^−68 | 7.37 × 10^−1
F1 | Mean | 0 | 6.43 × 10^−1 | 3.54 × 10^−259 | 0 | 2.98 × 10^5 | 1.75 × 10^−69 | 2.15 × 10^−52 | 2.32
F1 | Std | 0 | 5.98 × 10^−2 | 0 | 0 | 8.38 × 10^4 | 3.89 × 10^−69 | 1.15 × 10^−51 | 1
F2 | Best | 0 | 2.47 × 10^−4 | 9.02 × 10^−16 | 1.50 × 10^−174 | 9.36 × 10^1 | 3.38 × 10^−51 | 7.99 × 10^−37 | 3.51 × 10^−1
F2 | Mean | 0 | 2.74 × 10^−3 | 6.76 × 10^−1 | 3.35 × 10^−159 | 1.84 × 10^2 | 6.39 × 10^−48 | 7.25 × 10^−22 | 6.00 × 10^−1
F2 | Std | 0 | 2.28 × 10^−3 | 9.50 × 10^−1 | 7.50 × 10^−159 | 6.74 × 10^1 | 1.08 × 10^−47 | 3.97 × 10^−21 | 1.34 × 10^−1
F3 | Best | 0 | 2.91 × 10^1 | 0 | 2.49 × 10^−291 | 6.53 × 10^6 | 2.78 × 10^7 | 3.05 × 10^−50 | 3.78 × 10^1
F3 | Mean | 0 | 5.28 × 10^1 | 4.30 × 10^−208 | 2.89 × 10^−279 | 8.07 × 10^6 | 3.90 × 10^7 | 9.40 | 1.19 × 10^2
F3 | Std | 0 | 3.30 × 10^1 | 0 | 0 | 1.73 × 10^6 | 8.30 × 10^6 | 4.41 × 10^1 | 6.19 × 10^1
F4 | Best | 0 | 1.76 × 10^−1 | 1.20 × 10^−159 | 4.65 × 10^−170 | 9.88 × 10^1 | 5.20 × 10^1 | 8.09 × 10^−32 | 5.36
F4 | Mean | 0 | 2.00 × 10^−1 | 2.89 × 10^−120 | 2.18 × 10^−156 | 9.92 × 10^1 | 7.28 × 10^1 | 1.03 × 10^−10 | 8.33
F4 | Std | 0 | 4.69 × 10^−2 | 6.25 × 10^−120 | 4.85 × 10^−156 | 2.93 × 10^−1 | 2.00 × 10^1 | 5.64 × 10^−10 | 2.02
F5 | Best | 1.37 × 10^−5 | 4.99 × 10^2 | 3.27 × 10^1 | 4.94 × 10^2 | 2.03 × 10^9 | 4.96 × 10^2 | 0 | 7.12 × 10^1
F5 | Mean | 7.52 × 10^−2 | 4.99 × 10^2 | 3.70 × 10^2 | 4.95 × 10^2 | 2.32 × 10^9 | 4.97 × 10^2 | 8.35 | 1.35 × 10^2
F5 | Std | 8.06 × 10^−2 | 9.93 × 10^−2 | 2.03 × 10^2 | 2.87 × 10^−1 | 3.50 × 10^8 | 4.41 × 10^−1 | 1.30 × 10^1 | 7.25 × 10^1
F6 | Best | 1.51 × 10^−6 | 1.13 × 10^2 | 8.25 × 10^−1 | 1.38 × 10^1 | 1.30 × 10^5 | 2.53 × 10^1 | 0 | 4.06
F6 | Mean | 7.01 × 10^−3 | 1.16 × 10^2 | 5.24 × 10^1 | 1.98 × 10^1 | 2.25 × 10^5 | 3.79 × 10^1 | 0 | 6.94
F6 | Std | 9.92 × 10^−3 | 1.80 | 4.74 × 10^1 | 5.45 | 8.80 × 10^4 | 1.21 × 10^1 | 0 | 2.07
F7 | Best | 2.45 × 10^−7 | 8.86 × 10^−5 | 8.56 × 10^−5 | 1.25 × 10^−4 | 1.60 × 10^4 | 1.66 × 10^−3 | 3.12 × 10^−5 | 3.40 × 10^−2
F7 | Mean | 2.63 × 10^−5 | 1.37 × 10^−4 | 7.06 × 10^−4 | 3.96 × 10^−4 | 1.79 × 10^4 | 1.21 × 10^−2 | 2.90 × 10^−4 | 6.14 × 10^−2
F7 | Std | 2.30 × 10^−5 | 4.50 × 10^−5 | 8.20 × 10^−4 | 2.59 × 10^−4 | 2.22 × 10^3 | 1.66 × 10^−2 | 2.09 × 10^−4 | 1.69 × 10^−2
F8 | Best | −2.09 × 10^5 | −2.37 × 10^4 | −2.09 × 10^5 | −2.09 × 10^5 | −1.58 × 10^4 | −2.06 × 10^5 | −8.54 × 10^24 | −7.91 × 10^3
F8 | Mean | −2.09 × 10^5 | −2.18 × 10^4 | −2.09 × 10^5 | −1.99 × 10^5 | −1.47 × 10^4 | −1.76 × 10^5 | −2.85 × 10^23 | −6.41 × 10^3
F8 | Std | 1.77 × 10^−1 | 2.03 × 10^3 | 2.34 × 10^2 | 1.55 × 10^4 | 6.84 × 10^2 | 4.11 × 10^4 | 1.56 × 10^24 | 6.24 × 10^2
F9 | Best | 0 | 0 | 0 | 0 | 5.17 × 10^2 | 0 | 0 | 5.17 × 10^1
F9 | Mean | 0 | 6.93 × 10^−6 | 0 | 0 | 1.42 × 10^3 | 6.06 × 10^−14 | 1.19 × 10^1 | 9.26 × 10^1
F9 | Std | 0 | 6.72 × 10^−6 | 0 | 0 | 5.55 × 10^2 | 3.32 × 10^−13 | 2.43 × 10^1 | 2.28 × 10^1
F10 | Best | 8.88 × 10^−16 | 7.44 × 10^−3 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.07 | 8.88 × 10^−16 | 8.88 × 10^−16 | 5.15
F10 | Mean | 8.88 × 10^−16 | 8.12 × 10^−3 | 8.88 × 10^−16 | 8.88 × 10^−16 | 1.92 × 10^1 | 4.32 × 10^−15 | 8.88 × 10^−16 | 6.14
F10 | Std | 0 | 3.45 × 10^−4 | 0 | 0 | 3.62 | 2.38 × 10^−15 | 0 | 6.10 × 10^−1
F11 | Best | 0 | 6.43 × 10^3 | 0 | 0 | 9.67 × 10^2 | 0 | 0 | 7.55 × 10^−1
F11 | Mean | 0 | 1.06 × 10^4 | 0 | 0 | 2.02 × 10^3 | 3.70 × 10^−18 | 0 | 9.87 × 10^−1
F11 | Std | 0 | 2.97 × 10^3 | 0 | 0 | 7.53 × 10^2 | 2.03 × 10^−17 | 0 | 6.07 × 10^−2
F12 | Best | 4.18 × 10^−13 | 1.06 | 2.34 × 10^−5 | 1.43 × 10^−2 | 3.40 × 10^9 | 3.93 × 10^−2 | 1.57 × 10^−32 | 7.08
F12 | Mean | 2.20 × 10^−7 | 1.08 | 2.60 × 10^−2 | 3.97 × 10^−2 | 5.72 × 10^9 | 1.06 × 10^−1 | 1.57 × 10^−32 | 1.02 × 10^1
F12 | Std | 2.92 × 10^−7 | 1.36 × 10^−2 | 9.59 × 10^−2 | 2.25 × 10^−2 | 1.47 × 10^9 | 5.14 × 10^−2 | 5.57 × 10^−48 | 2.56
F13 | Best | 6.02 × 10^−11 | 5.01 × 10^1 | 3.61 × 10^−3 | 3.37 | 5.32 × 10^9 | 8.64 | 1.35 × 10^−32 | 1.82 × 10^1
F13 | Mean | 1.79 × 10^−3 | 5.02 × 10^1 | 2.87 | 9.03 | 1.03 × 10^10 | 2.00 × 10^1 | 1.35 × 10^−32 | 3.00 × 10^1
F13 | Std | 3.77 × 10^−3 | 4.33 × 10^−2 | 8.97 | 2.72 | 2.39 × 10^9 | 5.75 | 5.57 × 10^−48 | 8.14
Table 4. Results of benchmark functions (F14–F23).
Fn | Metric | RCLSMAOA | AOA | SMA | ROA | SCA | WOA | WMFO | AMVO-SCA
F14 | Best | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1 | 9.98 × 10^−1
F14 | Mean | 9.98 × 10^−1 | 9.49 | 9.98 × 10^−1 | 3.35 | 1.60 | 3.71 | 4.23 | 5.74
F14 | Std | 0 | 3.63 | 8.61 × 10^−13 | 4.01 | 9.23 × 10^−1 | 4.02 | 3.92 | 4.34
F15 | Best | 3.07 × 10^−4 | 3.77 × 10^−4 | 3.08 × 10^−4 | 3.08 × 10^−4 | 4.92 × 10^−4 | 3.58 × 10^−4 | 3.07 × 10^−4 | 3.68 × 10^−4
F15 | Mean | 3.45 × 10^−4 | 1.69 × 10^−2 | 6.23 × 10^−4 | 5.04 × 10^−4 | 1.10 × 10^−3 | 6.97 × 10^−4 | 4.37 × 10^−4 | 1.38 × 10^−3
F15 | Std | 9.47 × 10^−5 | 3.25 × 10^−2 | 3.04 × 10^−4 | 3.18 × 10^−4 | 3.56 × 10^−4 | 4.54 × 10^−4 | 2.96 × 10^−4 | 1.10 × 10^−3
F16 | Best | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03
F16 | Mean | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03
F16 | Std | 6.78 × 10^−16 | 1.34 × 10^−7 | 1.59 × 10^−9 | 5.27 × 10^−8 | 5.73 × 10^−5 | 2.87 × 10^−9 | 5.80 × 10^−10 | 5.27 × 10^−3
F17 | Best | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1
F17 | Mean | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 4.00 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1
F17 | Std | 0 | 1.36 × 10^−7 | 2.84 × 10^−8 | 1.32 × 10^−5 | 1.55 × 10^−3 | 5.72 × 10^−6 | 1.02 × 10^−8 | 7.57 × 10^−4
F18 | Best | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3
F18 | Mean | 3 | 1.34 × 10^1 | 3 | 3 | 3 | 3 | 3 | 3
F18 | Std | 2.08 × 10^−15 | 2.01 × 10^1 | 7.57 × 10^−11 | 6.18 × 10^−5 | 8.12 × 10^−5 | 1.05 × 10^−2 | 5.99 × 10^−6 | 6.46 × 10^−13
F19 | Best | −3.86 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86
F19 | Mean | −3.86 | −3.85 | −3.86 | −3.86 | −3.85 | −3.86 | −3.86 | −3.86
F19 | Std | 2.71 × 10^−15 | 5.82 × 10^−3 | 1.90 × 10^−6 | 2.77 × 10^−3 | 6.12 × 10^−3 | 1.07 × 10^−2 | 3.39 × 10^−3 | 1.36 × 10^−2
F20 | Best | −3.32 | −3.16 | −3.32 | −3.32 | −3.13 | −3.32 | −3.32 | −3.32
F20 | Mean | −3.29 | −3.02 | −3.24 | −3.21 | −2.87 | −3.20 | −3.13 | −3.01
F20 | Std | 5.35 × 10^−2 | 9.55 × 10^−2 | 5.58 × 10^−2 | 1.42 × 10^−1 | 3.47 × 10^−1 | 1.73 × 10^−1 | 3.13 × 10^−1 | 3.59 × 10^−1
F21 | Best | −1.02 × 10^1 | −5.16 | −1.02 × 10^1 | −1.02 × 10^1 | −5.90 | −1.01 × 10^1 | −1.02 × 10^1 | −1.01 × 10^1
F21 | Mean | −1.02 × 10^1 | −3.62 | −1.02 × 10^1 | −1.01 × 10^1 | −2.40 | −7.60 | −5.23 | −4.72
F21 | Std | 6.96 × 10^−15 | 1.06 | 4.55 × 10^−4 | 1.58 × 10^−2 | 1.86 | 2.81 | 9.31 × 10^−1 | 2.63
F22 | Best | −1.04 × 10^1 | −7.58 | −1.04 × 10^1 | −1.04 × 10^1 | −6.85 | −1.04 × 10^1 | −1.04 × 10^1 | −1.04 × 10^1
F22 | Mean | −1.04 × 10^1 | −4.29 | −1.04 × 10^1 | −1.04 × 10^1 | −3.69 | −7.69 | −6.26 | −5.89
F22 | Std | 1.19 × 10^−15 | 1.23 | 2.55 × 10^−4 | 1.59 × 10^−2 | 1.86 | 3.21 | 2.71 | 3.10
F23 | Best | −1.05 × 10^1 | −8.42 | −1.05 × 10^1 | −1.05 × 10^1 | −8.38 | −1.05 × 10^1 | −1.05 × 10^1 | −1.05 × 10^1
F23 | Mean | −1.05 × 10^1 | −4.06 | −1.05 × 10^1 | −1.05 × 10^1 | −3.86 | −7.34 | −7.29 | −5.23
F23 | Std | 1.78 × 10^−15 | 1.72 | 3.91 × 10^−4 | 2.00 × 10^−2 | 1.87 | 3.09 | 2.69 | 3.07
Table 5. Results of the CEC2020 test functions.
CEC | Metric | RCLSMAOA | AOA | SMA | ROA | SCA | WOA | WMFO | AMVO-SCA
CEC_01 | mid | 1.00 × 10^2 | 2.99 × 10^9 | 1.05 × 10^2 | 1.05 × 10^9 | 4.08 × 10^8 | 5.00 × 10^6 | 1.19 × 10^3 | 3.15 × 10^3
CEC_01 | mean | 1.80 × 10^3 | 1.02 × 10^10 | 7.28 × 10^3 | 5.69 × 10^9 | 1.10 × 10^9 | 7.74 × 10^7 | 1.57 × 10^5 | 8.64 × 10^8
CEC_01 | std | 1.88 × 10^3 | 4.13 × 10^9 | 5.00 × 10^3 | 3.26 × 10^9 | 5.80 × 10^8 | 1.13 × 10^8 | 4.30 × 10^5 | 1.43 × 10^9
CEC_02 | mid | 1.10 × 10^3 | 1.83 × 10^3 | 1.34 × 10^3 | 1.77 × 10^3 | 1.75 × 10^3 | 1.63 × 10^3 | 1.46 × 10^3 | 1.57 × 10^3
CEC_02 | mean | 1.42 × 10^3 | 2.22 × 10^3 | 1.77 × 10^3 | 2.49 × 10^3 | 2.54 × 10^3 | 2.24 × 10^3 | 1.98 × 10^3 | 2.00 × 10^3
CEC_02 | std | 1.33 × 10^2 | 2.30 × 10^2 | 2.52 × 10^2 | 3.17 × 10^2 | 2.73 × 10^2 | 3.44 × 10^2 | 3.62 × 10^2 | 3.44 × 10^2
CEC_03 | mid | 7.11 × 10^2 | 7.70 × 10^2 | 7.18 × 10^2 | 7.71 × 10^2 | 7.56 × 10^2 | 7.52 × 10^2 | 7.22 × 10^2 | 7.30 × 10^2
CEC_03 | mean | 7.18 × 10^2 | 7.96 × 10^2 | 7.32 × 10^2 | 8.17 × 10^2 | 7.86 × 10^2 | 7.97 × 10^2 | 7.45 × 10^2 | 7.65 × 10^2
CEC_03 | std | 2.75 | 1.56 × 10^1 | 9.63 | 2.46 × 10^1 | 1.41 × 10^1 | 2.76 × 10^1 | 1.59 × 10^1 | 3.23 × 10^1
CEC_04 | mid | 1.90 × 10^3 | 1.90 × 10^3 | 1.90 × 10^3 | 1.90 × 10^3 | 1.90 × 10^3 | 1.90 × 10^3 | 1.90 × 10^3 | 1.90 × 10^3
CEC_04 | mean | 1.90 × 10^3 | 1.90 × 10^3 | 1.90 × 10^3 | 1.90 × 10^3 | 1.90 × 10^3 | 1.90 × 10^3 | 1.90 × 10^3 | 1.90 × 10^3
CEC_04 | std | 0 | 0 | 0 | 0 | 1.09 | 2.56 × 10^−1 | 5.83 × 10^−1 | 2.58
CEC_05 | mid | 1.70 × 10^3 | 9.15 × 10^3 | 2.46 × 10^3 | 4.58 × 10^3 | 1.23 × 10^4 | 7.71 × 10^3 | 6.75 × 10^3 | 3.67 × 10^3
CEC_05 | mean | 2.91 × 10^3 | 4.49 × 10^5 | 2.69 × 10^4 | 4.77 × 10^5 | 6.57 × 10^4 | 2.59 × 10^5 | 3.36 × 10^5 | 3.36 × 10^5
CEC_05 | std | 1.62 × 10^3 | 3.28 × 10^5 | 6.82 × 10^4 | 3.36 × 10^5 | 6.78 × 10^4 | 5.10 × 10^5 | 5.16 × 10^5 | 3.66 × 10^5
CEC_06 | mid | 1.60 × 10^3 | 1.76 × 10^3 | 1.61 × 10^3 | 1.65 × 10^3 | 1.69 × 10^3 | 1.65 × 10^3 | 1.61 × 10^3 | 1.60 × 10^3
CEC_06 | mean | 1.65 × 10^3 | 2.15 × 10^3 | 1.77 × 10^3 | 1.96 × 10^3 | 1.86 × 10^3 | 1.89 × 10^3 | 1.82 × 10^3 | 1.86 × 10^3
CEC_06 | std | 5.85 × 10^1 | 1.99 × 10^2 | 1.05 × 10^2 | 1.52 × 10^2 | 9.03 × 10^1 | 1.25 × 10^2 | 1.39 × 10^2 | 1.74 × 10^2
CEC_07 | mid | 2.10 × 10^3 | 4.05 × 10^3 | 2.33 × 10^3 | 2.98 × 10^3 | 5.60 × 10^3 | 8.70 × 10^3 | 3.43 × 10^3 | 2.76 × 10^3
CEC_07 | mean | 2.62 × 10^3 | 1.04 × 10^6 | 9.48 × 10^3 | 3.66 × 10^5 | 1.72 × 10^4 | 7.75 × 10^5 | 1.76 × 10^5 | 5.75 × 10^5
CEC_07 | std | 7.73 × 10^2 | 2.14 × 10^6 | 9.22 × 10^3 | 1.02 × 10^6 | 1.06 × 10^4 | 2.07 × 10^6 | 3.79 × 10^5 | 2.97 × 10^6
CEC_08 | mid | 2.20 × 10^3 | 2.59 × 10^3 | 2.30 × 10^3 | 2.38 × 10^3 | 2.33 × 10^3 | 2.31 × 10^3 | 2.23 × 10^3 | 2.30 × 10^3
CEC_08 | mean | 2.30 × 10^3 | 3.07 × 10^3 | 2.46 × 10^3 | 2.71 × 10^3 | 2.41 × 10^3 | 2.38 × 10^3 | 2.40 × 10^3 | 2.50 × 10^3
CEC_08 | std | 1.99 × 10^1 | 3.35 × 10^2 | 3.69 × 10^2 | 3.50 × 10^2 | 4.66 × 10^1 | 2.92 × 10^2 | 3.80 × 10^2 | 3.58 × 10^2
CEC_09 | mid | 2.40 × 10^3 | 2.66 × 10^3 | 2.50 × 10^3 | 2.60 × 10^3 | 2.57 × 10^3 | 2.57 × 10^3 | 2.74 × 10^3 | 2.50 × 10^3
CEC_09 | mean | 2.72 × 10^3 | 2.88 × 10^3 | 2.75 × 10^3 | 2.81 × 10^3 | 2.79 × 10^3 | 2.78 × 10^3 | 2.76 × 10^3 | 2.76 × 10^3
CEC_09 | std | 6.67 × 10^1 | 8.73 × 10^1 | 3.82 × 10^1 | 8.55 × 10^1 | 4.37 × 10^1 | 5.17 × 10^1 | 2.41 × 10^1 | 7.40 × 10^1
CEC_10 | mid | 2.90 × 10^3 | 2.99 × 10^3 | 2.90 × 10^3 | 2.97 × 10^3 | 2.94 × 10^3 | 2.91 × 10^3 | 2.90 × 10^3 | 2.91 × 10^3
CEC_10 | mean | 2.93 × 10^3 | 3.38 × 10^3 | 2.95 × 10^3 | 3.25 × 10^3 | 2.99 × 10^3 | 2.98 × 10^3 | 2.94 × 10^3 | 2.97 × 10^3
CEC_10 | std | 2.17 × 10^1 | 2.89 × 10^2 | 3.18 × 10^1 | 2.57 × 10^2 | 3.12 × 10^1 | 9.68 × 10^1 | 2.93 × 10^1 | 6.47 × 10^1
Table 6. Experimental results of Wilcoxon rank sum test for F23 functions (p-values for RCLSMAOA vs. each comparison algorithm).
F23 | dim | vs. AOA | vs. SMA | vs. ROA | vs. SCA | vs. WOA | vs. WMFO | vs. AMVO-SCA
F1 | 30 | 1.73 × 10^−6 | 5.00 × 10^−1 | 1 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
F1 | 500 | 1.73 × 10^−6 | 1.22 × 10^−4 | 5.00 × 10^−1 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
F2 | 30 | 1 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
F2 | 500 | 1 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
F3 | 30 | 1.73 × 10^−6 | 1 | 1.95 × 10^−3 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
F3 | 500 | 1.73 × 10^−6 | 1 | 6.10 × 10^−5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
F4 | 30 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
F4 | 500 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
F5 | 30 | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 4.21 × 10^−1 | 6.10 × 10^−5
F5 | 500 | 1.73 × 10^−6 | 2.88 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 8.04 × 10^−1 | 6.10 × 10^−5
F6 | 30 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
F6 | 500 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
F7 | 30 | 8.61 × 10^−1 | 2.96 × 10^−3 | 1.11 × 10^−2 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.22 × 10^−4 | 6.10 × 10^−5
F7 | 500 | 2.99 × 10^−1 | 4.53 × 10^−4 | 3.61 × 10^−3 | 1.73 × 10^−6 | 2.60 × 10^−6 | 6.10 × 10^−4 | 6.10 × 10^−5
F8 | 30 | 1.73 × 10^−6 | 3.16 × 10^−2 | 8.13 × 10^−1 | 1.73 × 10^−6 | 1.92 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
F8 | 500 | 1.73 × 10^−6 | 1.04 × 10^−2 | 4.45 × 10^−5 | 1.73 × 10^−6 | 2.35 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
F9 | 30 | 1 | 1 | 1 | 1.73 × 10^−6 | 1 | 3.13 × 10^−2 | 6.10 × 10^−5
F9 | 500 | 1 | 1 | 1 | 1.73 × 10^−6 | 2.50 × 10^−1 | 7.81 × 10^−3 | 6.10 × 10^−5
F10 | 30 | 1 | 1 | 1 | 1.73 × 10^−6 | 9.90 × 10^−6 | 1 | 6.10 × 10^−5
F10 | 500 | 1.73 × 10^−6 | 1 | 1 | 1.73 × 10^−6 | 5.00 × 10^−1 | 1 | 6.10 × 10^−5
F11 | 30 | 1.73 × 10^−6 | 1 | 1 | 1.73 × 10^−6 | 1 | 1 | 6.10 × 10^−5
F11 | 500 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 6.10 × 10^−5
F12 | 30 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
F12 | 500 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
F13 | 30 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
F13 | 500 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
F14 | 2 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 5.00 × 10^−1 | 6.10 × 10^−5
F15 | 4 | 1.92 × 10^−6 | 4.45 × 10^−5 | 9.32 × 10^−6 | 1.73 × 10^−6 | 2.35 × 10^−6 | 6.10 × 10^−5 | 1.07 × 10^−1
F16 | 2 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 6.10 × 10^−5
F17 | 2 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 6.10 × 10^−5
F18 | 5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 9.77 × 10^−4 | 6.10 × 10^−5
F19 | 3 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1 | 6.10 × 10^−5
F20 | 6 | 1.73 × 10^−6 | 6.32 × 10^−5 | 3.52 × 10^−6 | 1.73 × 10^−6 | 4.53 × 10^−4 | 4.03 × 10^−3 | 4.21 × 10^−1
F21 | 4 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 3.13 × 10^−2 | 6.10 × 10^−5
F22 | 4 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 7.81 × 10^−3 | 6.10 × 10^−5
F23 | 4 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 3.13 × 10^−2 | 6.10 × 10^−5
Table 7. Experimental results of Wilcoxon rank sum test for CEC2020 functions (p-values for RCLSMAOA vs. each comparison algorithm).
Fn | dim | vs. AOA | vs. SMA | vs. ROA | vs. SCA | vs. WOA | vs. WMFO | vs. AMVO-SCA
CEC01 | 10 | 1.73 × 10^−6 | 6.34 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 4.27 × 10^−3 | 6.10 × 10^−5
CEC02 | 10 | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 3.36 × 10^−3 | 3.05 × 10^−4
CEC03 | 10 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.22 × 10^−4 | 6.10 × 10^−5
CEC04 | 10 | 1.73 × 10^−6 | 1 | 1 | 1.73 × 10^−6 | 1 | 3.13 × 10^−2 | 6.10 × 10^−5
CEC05 | 10 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.22 × 10^−4 | 6.10 × 10^−5
CEC06 | 10 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 4.27 × 10^−4 | 6.10 × 10^−5
CEC07 | 10 | 1.73 × 10^−6 | 1.92 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.16 × 10^−3 | 2.62 × 10^−3
CEC08 | 10 | 1.73 × 10^−6 | 2.60 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 6.10 × 10^−5 | 6.10 × 10^−5
CEC09 | 10 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 8.33 × 10^−2 | 8.33 × 10^−2
CEC10 | 10 | 1.73 × 10^−6 | 1.13 × 10^−5 | 1.73 × 10^−6 | 1.73 × 10^−6 | 1.73 × 10^−6 | 2.56 × 10^−2 | 3.53 × 10^−2
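The p-values in Tables 6 and 7 come from the Wilcoxon rank-sum test applied to the final results of the independent runs, with p < 0.05 read as a significant difference; p = 1 arises when two algorithms return indistinguishable samples (for example, when both always reach 0). As a hedged illustration of how such p-values can be reproduced, the following minimal Python sketch assumes 30 runs per algorithm and SciPy's ranksums function; the run data here are synthetic placeholders, not the paper's actual results.

# Minimal sketch: Wilcoxon rank-sum test between two algorithms over
# independent runs (synthetic data; the paper's own runs are not shown here).
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
rclsmaoa_runs = rng.normal(loc=1e-3, scale=1e-4, size=30)  # hypothetical final fitness values
rival_runs = rng.normal(loc=5e-2, scale=1e-2, size=30)     # hypothetical competitor results

stat, p_value = ranksums(rclsmaoa_runs, rival_runs)
# p < 0.05 indicates a statistically significant difference between the two samples.
print(f"statistic = {stat:.3f}, p = {p_value:.2e}, significant = {p_value < 0.05}")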
Table 8. Friedman ranked F23 functions.
F | RCLSMAOA | AOA | SMA | ROA | SCA | WOA | WMFO | AMVO-SCA
F1 | 1.933333333 | 4.266666667 | 1.983333333 | 2.083333333 | 7.666666667 | 5.866666667 | 4.866666667 | 7.333333333
F2 | 1.5 | 1.5 | 3.2 | 3.8 | 7 | 5 | 6 | 8
F3 | 1.5 | 4.3 | 1.5 | 3 | 7 | 8 | 4.7 | 6
F4 | 1 | 4.9 | 2.166666667 | 2.833333333 | 7 | 8 | 4.1 | 6
F5 | 1.7 | 5.1 | 2.866666667 | 3.666666667 | 7.966666667 | 5.9 | 1.766666667 | 7.033333333
F6 | 2 | 5.566666667 | 3 | 4 | 7.633333333 | 5.533333333 | 1 | 7.266666667
F7 | 2.333333333 | 1.8 | 3.866666667 | 2.966666667 | 7.533333333 | 6.3 | 4.1 | 7.1
F8 | 3 | 6.8 | 3 | 3 | 8 | 5.4 | 1 | 5.8
F9 | 3.133333333 | 3.133333333 | 3.133333333 | 3.133333333 | 6.866666667 | 3.683333333 | 5.05 | 7.866666667
F10 | 3.116666667 | 3.116666667 | 3.116666667 | 3.116666667 | 7.666666667 | 5.416666667 | 3.116666667 | 7.333333333
F11 | 2.85 | 5.866666667 | 2.85 | 2.85 | 7.3 | 3.9 | 2.85 | 7.533333333
F12 | 2 | 5.733333333 | 3.1 | 3.9 | 7.466666667 | 5.266666667 | 1 | 7.533333333
F13 | 2 | 6 | 3 | 4 | 7.8 | 5 | 1 | 7.2
F14 | 1.416666667 | 6.933333333 | 3 | 4.833333333 | 4.866666667 | 6.733333333 | 1.75 | 6.466666667
F15 | 1.566666667 | 6.166666667 | 4.1 | 2.7 | 5.566666667 | 5.666666667 | 6.266666667 | 3.966666667
F16 | 1.166666667 | 6.6 | 3.8 | 5.733333333 | 7.966666667 | 4.866666667 | 1.833333333 | 4.033333333
F17 | 1.5 | 4.766666667 | 3.6 | 5.866666667 | 7.5 | 7.5 | 1.5 | 3.766666667
F18 | 1.033333333 | 5.1 | 3.066666667 | 6.2 | 6.666666667 | 6.866666667 | 1.966666667 | 5.1
F19 | 1.3 | 6.5 | 3.033333333 | 5 | 6.3 | 7.333333333 | 1.7 | 4.833333333
F20 | 1.3 | 6.4 | 3.833333333 | 4.166666667 | 7.233333333 | 7.266666667 | 2.633333333 | 3.166666667
F21 | 1.033333333 | 6.266666667 | 2.733333333 | 3.833333333 | 7.466666667 | 5.933333333 | 4 | 4.733333333
F22 | 1.116666667 | 6.9 | 3.4 | 4.266666667 | 6.9 | 6.7 | 2.783333333 | 3.933333333
F23 | 1.083333333 | 6.8 | 3.366666667 | 4.266666667 | 6.8 | 6.533333333 | 2.583333333 | 4.566666667
Avg Rank | 1.7644 | 5.5298 | 3.0746 | 3.8789 | 7.1376 | 6.0289 | 2.9376 | 5.9376
Final Rank | 1 | 5 | 3 | 4 | 8 | 7 | 2 | 6
Table 9. Friedman ranked CEC2020 functions.
CEC2020 | RCLSMAOA | AOA | SMA | ROA | SCA | WOA | WMFO | AMVO-SCA
CEC2020_01 | 1.466666667 | 7.6 | 2.133333333 | 5.166666667 | 5.266666667 | 7.366666667 | 2.666666667 | 4.333333333
CEC2020_02 | 1.233333333 | 5.166666667 | 3.066666667 | 4.766666667 | 6.933333333 | 7.566666667 | 3.633333333 | 3.633333333
CEC2020_03 | 1.066666667 | 7 | 2.4 | 4.766666667 | 5.766666667 | 7.633333333 | 2.966666667 | 4.4
CEC2020_04 | 3.383333333 | 3.383333333 | 3.383333333 | 3.383333333 | 5.683333333 | 3.766666667 | 5.083333333 | 7.933333333
CEC2020_05 | 1.133333333 | 7.066666667 | 3.3 | 4.533333333 | 4.466666667 | 5.4 | 5.6 | 4.5
CEC2020_06 | 1.4 | 6.8 | 2.7 | 4.6 | 3.833333333 | 7.1 | 4.466666667 | 5.1
CEC2020_07 | 1.8 | 6.7 | 3.6 | 4.133333333 | 4.333333333 | 7.866666667 | 4.3 | 3.266666667
CEC2020_08 | 1.233333333 | 7.366666667 | 2.8 | 5 | 5.166666667 | 7.133333333 | 3.033333333 | 4.266666667
CEC2020_09 | 1.333333333 | 6.966666667 | 3.033333333 | 4.433333333 | 5.366666667 | 7.033333333 | 3.6 | 4.233333333
CEC2020_10 | 1.8 | 7.566666667 | 2.766666667 | 4.633333333 | 5.066666667 | 7.3 | 3.066666667 | 3.8
Avg Rank | 1.585 | 6.5616 | 2.9183 | 4.5416 | 5.1883 | 6.8166 | 3.8416 | 4.5466
Final Rank | 1 | 7 | 2 | 4 | 6 | 8 | 3 | 5
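The average ranks in Tables 8 and 9 follow the usual Friedman procedure: on each test function the eight algorithms are ranked by their mean results (rank 1 is best, ties are averaged, so each row of ranks sums to 36 for eight algorithms), and the per-function ranks are then averaged per algorithm. The sketch below illustrates this computation under the assumption of a mean-results matrix; the toy numbers are placeholders rather than the paper's data.

# Minimal sketch of the Friedman average-rank computation (toy data).
import numpy as np
from scipy.stats import rankdata

# Rows = test functions, columns = algorithms; entries = mean results (lower is better).
mean_results = np.array([
    [0.0,    3.6e-22, 5.2e-306, 7.04],
    [1.9e-2, 2.8e1,   5.16,     2.7e1],
])

# Rank algorithms within each function; ties receive the average rank.
ranks = np.vstack([rankdata(row, method="average") for row in mean_results])
avg_rank = ranks.mean(axis=0)                   # corresponds to the "Avg Rank" row
final_rank = rankdata(avg_rank, method="min")   # corresponds to the "Final Rank" row
print(avg_rank, final_rank)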
Table 10. Comparison of optimal solutions for the pressure vessel design problem.
Algorithm | Ts | Th | R | L | Cost
RCLSMAOA | 0.742433 | 0.370196 | 40.31961 | 200 | 5734.9131
AOA [11] | 0.8303737 | 0.4162057 | 42.75127 | 169.3454 | 6048.7844
SMA [15] | 0.7931 | 0.3932 | 40.6711 | 196.2178 | 5994.1857
WOA [47] | 0.8125 | 0.4375 | 42.0982699 | 176.638998 | 6059.741
GA [21] | 0.8125 | 0.4375 | 42.0974 | 176.6541 | 6059.94634
GWO [48] | 0.8125 | 0.4345 | 42.089181 | 176.758731 | 6051.5639
ACO [49] | 0.8125 | 0.4375 | 42.103624 | 176.572656 | 6059.0888
AO [50] | 1.054 | 0.182806 | 59.6219 | 39.805 | 5949.2258
MVO [51] | 0.8125 | 0.4375 | 42.09074 | 176.7387 | 6060.8066
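For context on Table 10, the pressure vessel problem is conventionally stated with four design variables (shell thickness Ts, head thickness Th, inner radius R, and length L) and a fabrication-cost objective subject to four inequality constraints. The sketch below gives the standard formulation from the literature; it is offered as a reference statement of the problem itself, not as the paper's own code.

import math

def pressure_vessel_cost(ts, th, r, l):
    # Standard cost objective used throughout the pressure vessel literature.
    return (0.6224 * ts * r * l + 1.7781 * th * r**2
            + 3.1661 * ts**2 * l + 19.84 * ts**2 * r)

def pressure_vessel_constraints(ts, th, r, l):
    # A design is feasible when every g_i <= 0 (commonly cited constraint set).
    g1 = -ts + 0.0193 * r
    g2 = -th + 0.00954 * r
    g3 = -math.pi * r**2 * l - (4.0 / 3.0) * math.pi * r**3 + 1296000.0
    g4 = l - 240.0
    return g1, g2, g3, g4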
Table 11. Comparison of optimal solutions for the speed reducer design problem.
Algorithm | x1 | x2 | x3 | x4 | x5 | x6 | x7 | Optimal Weight
RCLSMAOA | 3.4975 | 0.7 | 17 | 7.3 | 7.8 | 3.3500 | 5.285 | 2995.437365
AOA [11] | 3.50384 | 0.7 | 17 | 7.3 | 7.72933 | 3.35649 | 5.2867 | 2997.9157
FA [52] | 3.507495 | 0.7001 | 17 | 7.719674 | 8.080854 | 3.351512 | 5.287051 | 3010.137492
RSA [53] | 3.50279 | 0.7 | 17 | 7.30812 | 7.74715 | 3.35067 | 5.28675 | 2996.5157
MFO [54] | 3.497455 | 0.7 | 17 | 7.82775 | 7.712457 | 3.351787 | 5.286352 | 2998.94083
AAO [55] | 3.499 | 0.6999 | 17 | 7.3 | 7.8 | 3.3502 | 5.2872 | 2996.783
HS [56] | 3.520124 | 0.7 | 17 | 8.37 | 7.8 | 3.36697 | 5.288719 | 3029.002
WSA [57] | 3.5 | 0.7 | 17 | 7.3 | 7.8 | 3.350215 | 5.286683 | 2996.348225
CS [58] | 3.5015 | 0.7 | 17 | 7.605 | 7.8181 | 3.352 | 5.2875 | 3000.981
Table 12. Experimental results of three-bar truss design.
Algorithm | x1 | x2 | Best Weight
RCLSMAOA | 0.78841544 | 0.408113094 | 263.8523464
MVO [51] | 0.788603 | 0.408453 | 263.8958
RSA [53] | 0.78873 | 0.40805 | 263.8928
GOA [59] | 0.788898 | 0.40762 | 263.8959
CS [58] | 0.78867 | 0.40902 | 263.9716
Table 13. Comparison of optimal solutions for the welded beam design problem.
Algorithm | h | l | t | b | Best Weight
RCLSMAOA | 0.20573 | 3.2530 | 9.0366 | 0.20572 | 1.6952
ROA [45] | 0.200077 | 3.365754 | 9.011182 | 0.206893 | 1.706447
MGTOA [60] | 0.205351 | 3.268419 | 9.069875 | 0.205621 | 1.701633939
MVO [51] | 0.205463 | 3.473193 | 9.044502 | 0.205695 | 1.72645
WOA [47] | 0.205396 | 3.484293 | 9.037426 | 0.206276 | 1.730499
MROA [9] | 0.2062185 | 3.254893 | 9.020003 | 0.206489 | 1.699058
RO [61] | 0.203687 | 3.528467 | 9.004233 | 0.207241 | 1.735344
BWO [62] | 0.2059 | 3.2665 | 9.0229 | 0.2064 | 1.6997
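The welded beam costs in Table 13 can be sanity-checked with the standard objective function, which prices the weld material and the bar stock; the constraint functions (shear stress, bending stress, buckling load, and end deflection) are lengthy and omitted from this sketch. Evaluating the objective at the RCLSMAOA solution reproduces the reported weight.

def welded_beam_cost(h, l, t, b):
    # Standard welded beam fabrication-cost objective from the literature.
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

# welded_beam_cost(0.20573, 3.2530, 9.0366, 0.20572) evaluates to about 1.6952,
# matching the RCLSMAOA entry in Table 13.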
Table 14. Experimental results of car crashworthiness design.
Variable | RCLSMAOA | ROA [45] | WOA [47] | MALO [63] | GTOA [64] | HHOCM [65] | ROLGWO [66] | MPA [67]
x1 | 0.5 | 0.5 | 0.8521 | 0.5 | 0.662833 | 0.500164 | 0.501255 | 0.5
x2 | 1.230638152 | 1.22942 | 1.2136 | 1.2281 | 1.217247 | 1.248612 | 1.245551 | 1.22823
x3 | 0.5 | 0.5 | 0.6604 | 0.5 | 0.734238 | 0.659558 | 0.500046 | 0.5
x4 | 1.198406418 | 1.21197 | 1.1156 | 1.2126 | 1.11266 | 1.098515 | 1.180254 | 1.2049
x5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.613197 | 0.757989 | 0.500035 | 0.5
x6 | 1.08390407 | 1.37798 | 1.195 | 1.308 | 0.670197 | 0.767268 | 1.16588 | 1.2393
x7 | 0.5 | 0.50005 | 0.5898 | 0.5 | 0.615694 | 0.500055 | 0.500088 | 0.5
x8 | 0.345067013 | 0.34489 | 0.2711 | 0.3449 | 0.271734 | 0.343105 | 0.344895 | 0.34498
x9 | 0.347988173 | 0.19263 | 0.2769 | 0.2804 | 0.23194 | 0.192032 | 0.299583 | 0.192
x10 | 0.877748111 | 0.62239 | 4.3437 | 0.4242 | 0.174933 | 2.898805 | 3.59508 | 0.44035
x11 | 0.729351464 | - | 2.2352 | 4.6565 | 0.462294 | - | 2.29018 | 1.78504
Best Weight | 23.18907104 | 23.23544 | 25.83657 | 23.2294 | 25.70607 | 24.48358 | 23.22243 | 23.19982
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
