Hybrid Slime Mold and Arithmetic Optimization Algorithm with Random Center Learning and Restart Mutation

The slime mold algorithm (SMA) and the arithmetic optimization algorithm (AOA) are two novel meta-heuristic optimization algorithms. Among them, the slime mold algorithm has a strong global search ability. Still, the oscillation effect in the later iteration stage is weak, making it difficult to find the optimal position in complex functions. The arithmetic optimization algorithm utilizes multiplication and division operators for position updates, which have strong randomness and good convergence ability. For the above, this paper integrates the two algorithms and adds a random central solution strategy, a mutation strategy, and a restart strategy. A hybrid slime mold and arithmetic optimization algorithm with random center learning and restart mutation (RCLSMAOA) is proposed. The improved algorithm retains the position update formula of the slime mold algorithm in the global exploration section. It replaces the convergence stage of the slime mold algorithm with the multiplication and division algorithm in the local exploitation stage. At the same time, the stochastic center learning strategy is adopted to improve the global search efficiency and the diversity of the algorithm population. In addition, the restart strategy and mutation strategy are also used to improve the convergence accuracy of the algorithm and enhance the later optimization ability. In comparison experiments, different kinds of test functions are used to test the specific performance of the improvement algorithm. We determine the final performance of the algorithm by analyzing experimental data and convergence images, using the Wilcoxon rank sum test and Friedman test. The experimental results show that the improvement algorithm, which combines the slime mold algorithm and arithmetic optimization algorithm, is effective. Finally, the specific performance of the improvement algorithm on practical engineering problems was evaluated.


Introduction
In the past decade, the development and application of optimization models have received growing attention from mathematicians and engineers. In recent years, with the continuous development of computer technology, more and more optimization problems have attracted attention, and the unconstrained optimization problem is currently a research hotspot. The complexity of these problems is gradually increasing, and they exhibit characteristics such as being large-scale, multimodal, and nonlinear [1]. The metaheuristic algorithm has become an excellent and widely recognized tool because it is simple, easy to implement, does not require gradient information, and can avoid local optima [2]. This is because the meta-heuristic algorithm treats the problem as a black box: it only needs the problem's input to obtain its output. Researchers develop meta-heuristic algorithms by simulating various natural phenomena and biological habits. Meta-heuristic algorithms can effectively handle real-life optimization problems because they possess valuable randomness that allows them to bypass local optima, and they have stronger search capabilities than traditional optimization algorithms. These valuable characteristics make meta-heuristic algorithms perform smoothly on application problems, such as neural networks [3], clustering [4], engineering [5], and scheduling problems [6].
The main question is whether one meta-heuristic algorithm can be used to solve most problems. The No Free Lunch (NFL) theorem [7] explains that when an algorithm provides a good solution to a particular problem, it is not guaranteed to achieve good results on other problems. The NFL theorem motivates researchers to enhance the ability to solve new problems by improving currently known algorithms. For example, Chen et al., inspired by the lifestyle of beluga whales, developed IBWO [8], which improved the algorithm's global optimization ability; Wen et al. enhanced the global optimization capability of an algorithm by using a new host-switching mechanism [9]; and Wu et al. improved the sand cat's wandering strategy and applied it to engineering problems [10].
AOA [11] is a meta-heuristic optimization algorithm designed by Abualigah et al. in 2021 based on the four basic arithmetic operations. The algorithm uses the multiplication and division operations to improve the global dispersion of position updates, and the addition and subtraction operations to improve the accuracy of position updates in local areas. However, AOA still faces problems such as slow convergence in complex environments and needs further improvement and refinement to adapt to more complex optimization problems. Recently, many researchers have improved the AOA, producing the adaptive parallel arithmetic optimization algorithm (AAOA) [12], the dynamic arithmetic optimization algorithm (DAOA) [13], and the chaotic arithmetic optimization algorithm (CAOA) [14].
SMA [15] is a swarm intelligence optimization algorithm proposed by Li et al. in 2020 that simulates the behavior and morphological changes of slime molds during foraging. Its inspiration comes from simulating the foraging behavior and morphological changes of Physarum polycephalum, using weight changes to simulate the positive and negative feedback processes generated by slime molds while foraging, thus producing three stages of foraging patterns. The SMA has strong global exploration ability, a certain convergence accuracy, and good stability, but in the later iteration stage, its oscillation effect is weak and it easily falls into local optima; its contraction mechanism is not strong, resulting in a slower convergence speed. Researchers have since improved and widely applied this algorithm. Kouadri et al. [16] applied it to minimizing the fuel costs and losses of power generators. Zhao et al. [17] proposed hybridizing the SMA and HHO algorithms, utilizing multiple composite selection mechanisms to improve the algorithm's selectivity, the randomness of individual position updates, and the efficiency of solving.
Based on the advantages and disadvantages of SMA and AOA, this article aims to create a more effective optimization algorithm by combining the two. To further enhance its performance, a random central solution strategy is introduced. This strategy uses random center learning to expand the exploration range of the population, enrich population diversity, and effectively control the balance between exploration and exploitation. A mutation strategy and a restart strategy are also used. The mutation strategy is a local adaptive mutation method that improves the algorithm's global search ability and performs well in high-dimensional spaces. The restart strategy helps poorer individuals jump to other positions and is usually used to escape local optima. The proposed hybrid optimization algorithm, the hybrid slime mold and arithmetic optimization algorithm with random center learning and restart mutation (RCLSMAOA), incorporates the best features of both SMA and AOA, making it more effective at exploring the search space and enabling it to solve corresponding engineering problems effectively. The algorithm focuses on four key aspects: (1) In the exploration and exploitation stages, SMA and AOA are organically combined to comprehensively improve the exploration and exploitation capabilities; (2) a random center strategy is innovatively proposed, which improves the early convergence speed of the algorithm and effectively maintains a balance between exploration and exploitation while enhancing population diversity; (3) the mutation strategy and restart strategy enhance the ability to solve complex problems while strengthening the algorithm's ability to jump out of local optima; comparisons on 23 benchmark test functions at different dimensions and on the CEC2020 test functions prove that the algorithm is significantly effective; (4) five engineering problems are used to verify the feasibility of RCLSMAOA on practical engineering problems.
The second part of this article introduces related work. SMA and AOA are introduced in the third and fourth parts, respectively. In the fifth part, we describe the added strategies (SCLS, MS, and RS), explain the implementation process of the algorithm, and provide the pseudocode and flowchart. The sixth part analyzes the time complexity. The seventh part is the experimental analysis of the specific performance of RCLSMAOA. The eighth part applies RCLSMAOA to specific engineering problems. The ninth part summarizes the contributions of this article and introduces the next research directions. The code for this article is available at https://github.com/Mars02w/RCLSMAOA, accessed on 20 August 2023.

Related Works
In recent years, meta-heuristic algorithms have been divided into the following four categories based on their sources of inspiration: (1) Physics-based methods, whose inspiration comes from physical rules in the universe; specific algorithms include the black hole algorithm (BH) [18], the gravity search algorithm (GSA) [19], and the most famous, the simulated annealing algorithm (SA) [20]. (2) Evolution-based algorithms, inspired by the laws of biological evolution; researchers have linked natural and artificial evolution to create many excellent algorithms, such as the genetic algorithm (GA) [21], genetic programming (GP) [22], and differential evolution (DE) [23]. (3) Swarm-based algorithms, which focus on modeling observations of animals and other living organisms. The most famous is particle swarm optimization (PSO) [24], which simulates the behavior of birds and fish; the ant colony optimization algorithm (ACO) [25] simulates the behavior of ants searching for food sources; and the foraging behavior of slime molds inspired the slime mold algorithm (SMA) [15]. (4) Human-based algorithms, inspired by human behavior, habits, and ideas in social life. A well-known one is the teaching-learning-based optimizer (TLBO) [26], inspired by the interaction between teachers and students. There are also the driving training-based optimizer (DTBO) [27] and the interior search algorithm (ISA) [28]. In recent years, researchers have continued to propose many excellent algorithms, such as monarch butterfly optimization (MBO) [29], the moth search algorithm (MSA) [30], hunger games search (HGS) [31], the Runge Kutta method (RUN) [32], the colony predation algorithm (CPA) [33], the weighted mean of vectors (INFO) [34], Harris hawks optimization (HHO) [35], and the rime optimization algorithm (RIME) [36].
In addition, studying hybrid optimization algorithms is also a current trend, and researchers have increasingly conducted hybridization research on algorithms. For example, Alam Zeb et al. [37] hybridized a genetic algorithm with a simulated annealing algorithm; the powerful exploration ability of the genetic algorithm compensated for the weak exploration of simulated annealing, enhancing the actual optimization performance. Chen et al. [38] fused a particle swarm optimization algorithm with a simulated annealing algorithm and successfully applied it to magnetic anomaly detection. Hybrid optimization algorithms can also solve optimal power flow problems [39]. Tumari et al. studied a variant of the marine predators algorithm for tuning the fractional-order proportional integral derivative controller of an automatic voltage regulator system [40]. Wang et al. [41] added an angle modulation mechanism to a dragonfly algorithm, enabling it to work in two-dimensional space with good performance. Devaraj A et al. [42] combined the firefly algorithm with improved multi-objective particle swarm optimization (IMPSO) to improve load balancing in cloud computing environments, with successful results. Jui et al. [43] hybridized the average multi-verse optimizer and the sine cosine algorithm, demonstrating good potential for solving continuous-time Hammerstein system problems.

Slime Mold Algorithm
The SMA is a meta-heuristic optimization algorithm that simulates the foraging behavior of slime molds, reflecting their oscillation and contraction characteristics during foraging. When searching for food, the slime mold first secretes enzymes, and then, during migration, its front end extends into a fan shape and uses a venous network for foraging. Due to their unique morphology and characteristics, slime molds can simultaneously utilize multiple pathways to form a connected venous network for obtaining food. In addition, slime molds will also search for food in other unknown areas.
When the slime mold vein approaches food, guided by the smell in the air, the higher the food concentration, the stronger the propagation wave generated by the biological oscillator in its body, which increases the flow of cytoplasm in the vein. The faster the cytoplasm flows, the thicker the slime mold vein becomes, which drives the position update of the slime mold. The position update formula is:

$$\vec{X}(t+1) = \begin{cases} \vec{X}_b(t) + \vec{vb} \cdot \left( \vec{W} \cdot \vec{X}_{rand1}(t) - \vec{X}_{rand2}(t) \right), & r_1 < p \\ \vec{vc} \cdot \vec{X}(t), & r_1 \ge p \end{cases}$$

where X_b(t) represents the current position of the individual with the optimal fitness, vb is a parameter in the interval [-a, a], W represents the weight coefficient of the slime mold, X_rand1(t) and X_rand2(t) represent the positions of two randomly selected individuals, r_1 represents a random number in the interval [0, 1], vc is a parameter that decreases linearly from 1 to 0, and t is the current number of iterations.
The updating formulas of the control parameters a and p and the weight coefficient W are as follows:

$$a = \operatorname{arctanh}\left(-\frac{t}{T} + 1\right), \quad p = \tanh\left|S(i) - DF\right|$$

$$W(SIndex(i)) = \begin{cases} 1 + r_2 \cdot \log\left(\dfrac{bF - S(i)}{bF - wF} + 1\right), & \text{condition} \\ 1 - r_2 \cdot \log\left(\dfrac{bF - S(i)}{bF - wF} + 1\right), & \text{others} \end{cases}$$

The parameters are calculated from the current individual's fitness and the optimal values, where i = 1, 2, ..., N; N represents the population size; S(i) represents the fitness value of the ith slime mold individual; and DF represents the optimal fitness obtained in the current iteration process. T represents the maximum number of iterations, and r_2 is a random number within the [0, 1] interval. "condition" denotes the individuals whose fitness ranks in the top half of the population, and "others" denotes the remaining individuals. bF represents the best fitness value obtained during the current iteration, wF represents the worst fitness value obtained during the current iteration, and SIndex(i) denotes the sorted fitness value sequence.
Even if slime molds find a food source, they still separate some individuals to explore other unknown areas in search of a higher-quality food source. At this point, the position update formula of the slime mold is as follows:

$$\vec{X}(t+1) = rand \cdot (UB - LB) + LB, \quad rand < z$$

where rand is a random number within the [0, 1] interval, UB and LB represent the upper and lower bounds, and z is a custom parameter (with a value of 0.03).
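The SMA update described above can be sketched in Python as follows. This is an illustrative minimal implementation for a minimization problem, not the authors' code; the helper name `sma_update` and the 1-based iteration counter are our own conventions.

```python
import numpy as np

def sma_update(X, fitness, t, T, z=0.03, lb=-100.0, ub=100.0):
    """One SMA iteration (minimization). X: (N, dim) positions; fitness: (N,)
    values; t: 1-based iteration counter with 1 <= t <= T."""
    N, dim = X.shape
    order = np.argsort(fitness)            # ascending: best individual first
    bF, wF = fitness[order[0]], fitness[order[-1]]
    Xb = X[order[0]]
    DF = bF                                # best fitness in the current iteration

    # weight W: "+" branch for the better half of the population, "-" for the rest
    W = np.ones((N, dim))
    denom = bF - wF
    for rank, i in enumerate(order):
        r2 = np.random.rand(dim)
        term = np.log10((bF - fitness[i]) / denom + 1) if denom != 0 else 0.0
        W[i] = 1 + r2 * term if rank < N // 2 else 1 - r2 * term

    a = np.arctanh(1 - t / T)              # oscillation bound, shrinks over time
    vc_hi = 1 - t / T                      # vc range decreases linearly from 1 to 0
    X_new = np.empty_like(X)
    for i in range(N):
        if np.random.rand() < z:           # re-initialize: explore unknown areas
            X_new[i] = lb + np.random.rand(dim) * (ub - lb)
            continue
        p = np.tanh(abs(fitness[i] - DF))
        vb = np.random.uniform(-a, a, dim)
        vc = np.random.uniform(-vc_hi, vc_hi, dim)
        if np.random.rand() < p:           # approach food, guided by the best
            A, B = np.random.randint(0, N, 2)
            X_new[i] = Xb + vb * (W[i] * X[A] - X[B])
        else:                              # contraction around the current position
            X_new[i] = vc * X[i]
    return np.clip(X_new, lb, ub)
```

The boundary clip at the end is a common repair step; the original SMA keeps individuals inside [LB, UB] in the same way.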

Arithmetic Optimization Algorithm
AOA is a meta-heuristic optimization algorithm that builds its exploration and exploitation mechanisms from arithmetic operators. The exploration stage uses the multiplication (M) and division (D) strategies to cover the search space broadly and improve the dispersion of solutions so as to avoid local optima. The exploitation stage improves the accuracy of the solutions obtained in the exploration stage through the addition (A) and subtraction (S) strategies, that is, it enhances the local optimization ability.

Math Optimizer Accelerated Function
Before exploration and exploitation, AOA computes the math optimizer accelerated (MOA) function to select the search phase. When r_1 > MOA(t), the exploration phase is carried out by executing (D) or (M); when r_1 ≤ MOA(t), the exploitation phase is carried out by executing (A) or (S); r_1 is a random number from 0 to 1.
$$MOA(t) = Min + t \cdot \left( \frac{Max - Min}{T} \right)$$

where Min and Max represent the minimum and maximum values of the math optimizer accelerated function (MOA), t is the current iteration, and T is the maximum number of iterations.
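As a small illustration, the MOA schedule can be written as a one-line function. The default Min = 0.2 and Max = 1.0 follow values commonly used in reference implementations of AOA and are an assumption here, as is the function name.

```python
def moa(t, T, mo_min=0.2, mo_max=1.0):
    """Math optimizer accelerated (MOA) value at iteration t: rises linearly
    from mo_min to mo_max, shifting AOA from exploration (r1 > MOA) toward
    exploitation (r1 <= MOA) as the run progresses."""
    return mo_min + t * (mo_max - mo_min) / T
```

Because MOA grows with t, the coin flip r_1 > MOA(t) succeeds less often late in the run, which is exactly the intended drift from exploration to exploitation.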

Exploration Phase
AOA explores different parts of the search space (global optimization) using two main search methods: the multiplication (M) search strategy and the division (D) search strategy. It updates positions using the following formula:

$$X_{i,j}(t+1) = \begin{cases} X_{b,j}(t) \div (MOP + \epsilon) \times \left( (UB_j - LB_j) \times \mu + LB_j \right), & r_2 < 0.5 \\ X_{b,j}(t) \times MOP \times \left( (UB_j - LB_j) \times \mu + LB_j \right), & r_2 \ge 0.5 \end{cases}$$

where r_2 ∈ [0, 1], X_i,j(t + 1) is the position of the ith individual in the jth dimension at the next iteration, X_b,j(t) is the jth dimension of the best solution found so far, ε is a very small number, UB_j and LB_j represent the upper and lower limits of the jth dimension, respectively, and µ is a control parameter for adjusting the search process, set to 0.499.
The math optimizer probability (MOP) is a coefficient calculated as

$$MOP(t) = 1 - \frac{t^{1/\alpha}}{T^{1/\alpha}}$$

where α is a sensitive parameter that defines the exploitation accuracy during the iteration process and is fixed at 5.
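The exploration step and the MOP schedule can be sketched as follows; `aoa_explore` is an illustrative per-individual update under the parameter values stated in the text (µ = 0.499, α = 5), not the authors' implementation.

```python
import numpy as np

def mop(t, T, alpha=5):
    """Math optimizer probability: decays from 1 toward 0 over the run."""
    return 1 - (t / T) ** (1 / alpha)

def aoa_explore(Xb, lb, ub, t, T, mu=0.499, eps=1e-12):
    """AOA exploration step for one individual: per dimension, the division
    operator (D) when r2 < 0.5, otherwise the multiplication operator (M),
    both applied around the best-so-far solution Xb."""
    m = mop(t, T)
    scale = (ub - lb) * mu + lb            # per-dimension scaling term
    r2 = np.random.rand(Xb.shape[0])
    step = np.where(r2 < 0.5,
                    Xb / (m + eps) * scale,    # division operator (D)
                    Xb * m * scale)            # multiplication operator (M)
    return np.clip(step, lb, ub)
```

The division branch can produce large jumps when MOP is small, which is what gives the exploration phase its high dispersion; the clip keeps the result inside the bounds.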

Exploitation Phase
AOA utilizes the subtraction (S) and addition (A) operators to deeply exploit the search space in several dense regions, conducting local optimization based on two main exploitation strategies. The position update formula is as follows:

$$X_{i,j}(t+1) = \begin{cases} X_{b,j}(t) - MOP \times \left( (UB_j - LB_j) \times \mu + LB_j \right), & r_3 < 0.5 \\ X_{b,j}(t) + MOP \times \left( (UB_j - LB_j) \times \mu + LB_j \right), & r_3 \ge 0.5 \end{cases}$$

where r_3 is a random number between 0 and 1.
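A corresponding sketch of the exploitation step, again illustrative only; the MOP decay is inlined so the function is self-contained.

```python
import numpy as np

def aoa_exploit(Xb, lb, ub, t, T, mu=0.499, alpha=5):
    """AOA exploitation step for one individual: per dimension, subtraction (S)
    when r3 < 0.5, otherwise addition (A), both around the best solution Xb."""
    m = 1 - (t / T) ** (1 / alpha)         # math optimizer probability (MOP)
    scale = (ub - lb) * mu + lb
    r3 = np.random.rand(Xb.shape[0])
    step = np.where(r3 < 0.5, Xb - m * scale, Xb + m * scale)
    return np.clip(step, lb, ub)
```

Since MOP shrinks as t grows, the ± steps around Xb become smaller late in the run, which is what concentrates the search in dense regions.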

Stochastic Center Learning Strategy (SCLS)
The random center learning strategy is a new optimization mechanism proposed in this paper. In nature, group animals such as wolf packs and whale pods often surround their prey in the middle through group cooperation and then begin predation. Inspired by this, this article proposes a central learning strategy whose core idea is to generate a central solution based on the upper and lower bounds during the search for the population optimum, compare it with the fitness value of the existing optimal solution, and select the better one for the next iteration. Definition of the central solution: if there is a point X in the n-dimensional coordinate system, then the central solution is calculated as follows:

$$X_c = \frac{X_l + X_r}{2}$$

where X_c is the central solution. Because the central solution lacks randomness, and in order to further improve the population's ability to find the globally optimal solution (as shown in Figure 1), this paper proposes an improved random center learning strategy, calculated as follows:

$$X_{crand} = \begin{cases} X_c + r_1 \cdot (X_r - X_c), & rand \ge 0.5 \\ X_c - r_2 \cdot (X_c - X_l), & rand < 0.5 \end{cases}$$

where X_crand represents the random central solution, X_r and X_l represent the objects to be learned, and r_1, r_2, and rand are random numbers between 0 and 1. To reflect the randomness and symmetry of random center learning, the threshold of rand() is 0.5. The schematic diagram of the central solution and random center learning is shown in Figure 1.
Figure 1 shows that any position in the interval [X_l, X_r] may contain a random central solution X_crand, with X_c being the central solution.
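As an illustration, the SCLS can be sketched as below. The sampling rule used here (a random point drawn on either side of the midpoint X_c with a 0.5 threshold) is our reading of the strategy's description and an assumption, as are the helper names; the paper's equations define the exact form.

```python
import numpy as np

def central_solution(Xl, Xr):
    """Deterministic central solution Xc: the midpoint of the two learned
    individuals Xl and Xr."""
    return (Xl + Xr) / 2

def random_central_solution(Xl, Xr):
    """Randomized variant (assumed form): a random point symmetrically on
    either side of Xc, so any position in [Xl, Xr] can be generated."""
    Xc = (Xl + Xr) / 2
    if np.random.rand() >= 0.5:
        return Xc + np.random.rand() * (Xr - Xc)   # toward Xr
    return Xc - np.random.rand() * (Xc - Xl)       # toward Xl

def greedy_select(X_best, X_cand, objective):
    """Keep the candidate only if it has better (lower) fitness."""
    return X_cand if objective(X_cand) < objective(X_best) else X_best
```

The greedy comparison against the current best is what lets the central solution accelerate early convergence without ever degrading the incumbent.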

Mutation Strategy (MS)
The mutation strategy here is a composite mutation strategy based on multiple mutation operators [44]. It generates three candidate positions V_i1, V_i2, and V_i3 for the ith position by executing Equations (13)-(15) in parallel, where X_rk,j represents the jth dimension of the r_k-th solution (the same below), r_1, r_2, and r_3 are distinct integers in the range [1, N], j_rand is an integer in the interval [1, D], F_1 represents a proportional coefficient equal to 1.0, and C_r1 represents a crossover rate set to 0.1.
r_4, r_5, r_6, r_7, and r_8 are distinct integers in the range [1, N]. F_2 represents a proportional coefficient equal to 0.8, and C_r2 represents a crossover rate equal to 0.2.
r_9, r_10, and r_11 are distinct integers in the range [1, N]. F_3 represents a proportional coefficient equal to 1.0, and C_r3 represents a crossover rate of 0.9.
After generating the three candidate positions V_i1, V_i2, and V_i3, they are first corrected based on the upper and lower boundaries. Then, the candidate solution V_i with the best fitness among V_i1, V_i2, and V_i3 is selected to update the position of the ith solution using Formula (16), which implements a greedy selection strategy:

$$X_i(t+1) = \begin{cases} V_i, & f(V_i) < f(X_i(t)) \\ X_i(t), & \text{otherwise} \end{cases}$$

where V_i represents the corrected best candidate solution and X_i represents the ith position.
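The composite mutation with greedy selection can be sketched as follows. The three operator forms are an assumption: they follow the classic composite-DE trio (two rand/1/bin variants and one rand/2/bin), which matches the index counts and the stated (F, Cr) pairs (1.0, 0.1), (0.8, 0.2), and (1.0, 0.9); the exact operators are given by the paper's Equations (13)-(15).

```python
import numpy as np

def composite_mutation(X, i, lb, ub, objective):
    """Composite mutation sketch for individual i of population X (N x D)."""
    N, D = X.shape
    jrand = np.random.randint(D)           # guaranteed-mutation dimension
    others = [k for k in range(N) if k != i]

    def de_rand_1(F, Cr):
        r1, r2, r3 = np.random.choice(others, 3, replace=False)
        V = X[r1] + F * (X[r2] - X[r3])
        mask = np.random.rand(D) < Cr
        mask[jrand] = True                 # at least one gene comes from V
        return np.where(mask, V, X[i])

    def de_rand_2(F, Cr):
        r = np.random.choice(others, 5, replace=False)
        V = X[r[0]] + F * (X[r[1]] - X[r[2]]) + F * (X[r[3]] - X[r[4]])
        mask = np.random.rand(D) < Cr
        mask[jrand] = True
        return np.where(mask, V, X[i])

    V1 = de_rand_1(F=1.0, Cr=0.1)
    V2 = de_rand_2(F=0.8, Cr=0.2)
    V3 = de_rand_1(F=1.0, Cr=0.9)
    cands = np.clip(np.stack([V1, V2, V3]), lb, ub)    # boundary correction
    best = cands[np.argmin([objective(v) for v in cands])]
    # greedy selection against the current position (Formula (16))
    return best if objective(best) < objective(X[i]) else X[i]
```

The greedy step guarantees the ith solution never gets worse, which is why the strategy improves convergence accuracy without destabilizing the population.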

Restart Strategy (RS)
When the slime mold cannot find food at its current location for a long time, the nutrients in the area are no longer sufficient to support its continued survival, so the slime molds in the area need to relocate. The restart scheme [9] can help poorer individuals escape a locally optimal state and move to other positions, so we use a restart strategy here to change the positions of poorer individuals. In this strategy, we use the trial vector trial(i) to record the number of times a position has failed to improve. If the fitness value corresponding to the position does not improve during the search, trial(i) increases by 1; otherwise, trial(i) is reset to 0. If the trial counter is not less than a predetermined limit, the better vector is selected from the trial vectors T_1 and T_2 to replace the position of the ith individual.
where T_1,j represents the jth dimension of position T_1, T_2,j represents the jth dimension of position T_2, UB_j and LB_j are the upper and lower boundaries of the jth dimension, respectively, and rand() is a random floating-point number in the interval [0, 1]. If T_2,j exceeds the upper boundary UB_j or the lower boundary LB_j in the jth dimension, it is replaced using Equations (18) and (19), and the trial counter trial(i) is reset to zero. This article sets the limit to log t. A smaller limit in the early stages helps enhance the global performance of the algorithm, while a larger limit in the later stages prevents the algorithm from moving away from the optimal solution.
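The restart bookkeeping can be sketched as follows. The concrete T_1/T_2 generation rules below (a uniform re-draw and a reflection-style move) are illustrative stand-ins for the paper's Equations (17)-(19), not the exact formulas; the trial-counter logic and the log(t) limit follow the text.

```python
import numpy as np

def restart_if_stuck(X_i, trial_i, improved, t, lb, ub, objective):
    """Restart bookkeeping for one individual. The trial counter grows while
    fitness stagnates; once it reaches the limit log(t), the position is
    replaced by the better of two trial vectors and the counter is reset."""
    dim = X_i.size
    trial_i = 0 if improved else trial_i + 1
    limit = np.log(max(t, 2))              # small in early iterations, larger later
    if trial_i < limit:
        return X_i, trial_i
    T1 = lb + np.random.rand(dim) * (ub - lb)             # uniform re-draw (illustrative)
    T2 = np.random.rand(dim) * (ub + lb) - X_i            # reflection-style move (illustrative)
    bad = (T2 < lb) | (T2 > ub)
    T2[bad] = lb + np.random.rand(int(bad.sum())) * (ub - lb)  # boundary repair
    new = T1 if objective(T1) < objective(T2) else T2
    return new, 0                          # counter resets after a restart
```

Because log(t) grows with the iteration count, restarts fire easily early on (aiding global search) and rarely late in the run (protecting the converged solution), matching the rationale given above.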

A Hybrid Optimization Algorithm of Slime Mold and Arithmetic Based on Random Center Learning and Restart Mutation
As mentioned above, when exploring unknown food sources, slime molds in SMA update their positions based on the synergistic effect of vb and vc. The oscillation effect of vb increases the possibility of global exploration. When a random number is less than z, the slime mold's position is reinitialized. At the end of the iterations, the vb oscillation effect weakens, which makes the algorithm fall easily into local optima. vc is a linearly decreasing parameter from 1 to 0, and its search mechanism is single and weak, making it difficult for the algorithm to jump out of local optima. In AOA, position updating is carried out according to the highly dispersed distribution of the division operator, and contraction is realized with the addition and subtraction operators. The math optimizer probability MOP changes according to the optimal position to improve the breadth of exploration and increase the algorithm's ability to jump out of local optima, but the algorithm still inevitably falls into local optima in later iterations. Random center learning updates a random position according to the general characteristics of the optimal solution, which improves the algorithm's convergence rate in the early exploration stage. The composite mutation strategy (MS) introduces multiple candidate solutions and compares them with existing solutions. In RS, the number of times a position has failed to improve is recorded using the trial vector trial(i). When the given threshold is exceeded, the algorithm is preliminarily judged to be trapped in a local optimum, and a new position update formula is applied to help the algorithm jump out of it.
Therefore, in this paper, we abandon the weak contraction mechanism of SMA and add the multiplication and division operators from AOA to expand the scope of the search. The mutation and restart strategies are introduced to improve the ability to jump out of local optima in late iterations. Given the algorithm's relatively slow convergence in the exploration stage, a random center learning strategy based on characteristic solutions is added.
In summary, the hybrid slime mold and arithmetic optimization algorithm with random center learning and restart mutation (RCLSMAOA) proposed in this paper achieves a good balance between exploration and exploitation and between local and global optimization. The pseudocode is shown in Algorithm 1, and the flowchart is shown in Figure 2.
Algorithm 1 Pseudocode of the RCLSMAOA
Initialize the population size N, the maximum number of iterations T, and the population positions
While t ≤ T
    Calculate the fitness of each individual and update the best solution
    Update the parameters vb, vc, and p
    For each search agent
        If rand < z
            Update the population position using Formula (6)
        Elseif r_1 < p
            Update the population position using Formula (6)
        Else
            Update the value of parameter MOP using Formula (9)
            If r_2 < 0.5
                Update the population position using Formula (8)
            Else
                Update the population position using Formula (8)
            End If
        End If
    End For
    Update the population position using SCLS
    Update the population position using MS
    Update the population position using RS
    Update the current best solution
    t = t + 1
End While
Return the best solution

Time Complexity Analysis
In the RCLSMAOA, let the population size be N, the dimension be Dim, and the maximum number of iterations be T. The time complexity of the population initialization phase is O(N × Dim). During each iteration, the position updates of the vb mechanism in SMA and the multiplication and division operators in AOA, together with the random central solution strategy and the mutation strategy, have a time complexity of O(3 × N × Dim × T). The time complexity of updating the convergence curve is O(1). It is worth mentioning that RS is triggered rarely in general, so its cost can be neglected. In conclusion, the time complexity of the RCLSMAOA is O(N × Dim × (3T + 1)).

Experimental Part
All experiments in this paper were completed on a computer with an 11th Gen Intel(R) Core(TM) i7-11700 processor with a base frequency of 2.50 GHz, 16 GB of memory, and a 64-bit Windows 11 operating system, using MATLAB R2021a. To check the performance of the RCLSMAOA, 23 standard benchmark functions and the CEC2020 benchmark functions are used. To gain a more comprehensive understanding of the actual performance of the RCLSMAOA, we chose different algorithms for comparison. These include AOA and SMA, as well as the well-known remora optimization algorithm (ROA) [37], sine cosine algorithm (SCA) [38], and whale optimization algorithm (WOA) [39]. In addition, we added two improved algorithms: the hybrid whale and moth flame optimization algorithm (WMFO) and the hybrid average multi-verse optimizer and sine cosine algorithm. The parameter settings of these algorithms are shown in Table 1.
Table 1. Parameter settings for the comparative algorithms.

Experiments on the 23 Standard Benchmark Functions
In this section, we selected 23 benchmark functions to test the performance of RCLSMAOA [10]. The 23 functions consist of seven unimodal functions, six multimodal functions, and ten fixed-dimension multimodal functions. Fn(x) represents the specific mathematical expression of each benchmark function, dim is the experimental dimension, the range is the search space, and Fmin is the theoretical optimal value of the corresponding benchmark function. See Figure 3 for images of the specific functions. In this experiment, we set the population size N = 30, the spatial dimension to 30 or 500, and the maximum number of iterations T = 500. RCLSMAOA and the other comparison algorithms were each run 30 times to obtain the best fitness, average fitness, and standard deviation over the 30 independent runs.
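The 30-run protocol (best, mean, and standard deviation over independent runs) can be expressed with a small harness; `run_statistics` and the toy one-run callable below are our own illustrative names, standing in for a full optimizer run.

```python
import numpy as np

def run_statistics(one_run, runs=30, seed=0):
    """Collect best fitness, mean fitness, and standard deviation over
    independent runs. `one_run` is any callable taking an RNG and returning
    the best fitness of a single run (here a stand-in for the optimizer)."""
    rng = np.random.default_rng(seed)
    results = np.array([one_run(rng) for _ in range(runs)])
    return results.min(), results.mean(), results.std()
```

Reporting all three statistics is what allows the tables to distinguish algorithms that occasionally find the optimum (good best value) from algorithms that find it reliably (good mean and small standard deviation).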
The specific experimental results are shown in Tables 2-4. We can see that on F1-F7, RCLSMAOA obtained the optimal values in all three data items, including the optimal fitness value. We observed that AOA performed well on F2, whereas SMA performed well on F1 and F3; the RCLSMAOA inherits their advantages on unimodal functions. At the 500-dimensional scale, the RCLSMAOA still performs well. This is because the mutation strategy can perform local mutations and increase global exploration capabilities.
Similarly, we observed the multimodal functions F8-F13. On F8, the optimal fitness value and the average value of RCLSMAOA reached the optimum, and the standard deviation was slightly lower than that of SMA. Functions F9-F11 are relatively simple, so most optimization algorithms achieve good results on them. The performance of RCLSMAOA on F12-F13 is also satisfactory. We observed that the performance of the RCLSMAOA is not affected by changes in dimension and remains stable. Functions F14-F23 are fixed-dimension multimodal functions, which are relatively simple; although simple, they remain reliable for verifying algorithm performance. In our experiment, it is not difficult to see that the performance of RCLSMAOA is still the best among the comparison algorithms. The above analysis indicates that RCLSMAOA, which integrates SMA and AOA, performs better than SMA and AOA. Since the raw data alone do not give an intuitive picture of the algorithm's behavior, we also show the convergence curves of each algorithm on the 23 benchmark functions; the curves are shown in Figures 3-5. From the images, we can see that on F1-F4, the RCLSMAOA has a fast convergence rate and high convergence precision, which SMA and AOA do not possess. This is due to the random center learning strategy, which expands the algorithm's search range and enhances the convergence rate. For F5-F6, RCLSMAOA finds a good position at the beginning of the iterations and then slowly converges to the optimal position, whereas, except for the WMFO algorithm, the other algorithms stagnate in the early stages. For F7, the optimization ability of this algorithm is also stronger than that of the other algorithms, because the restart strategy enables it to continuously jump out of local optima. On F8, the performance of RCLSMAOA is not as good as that of WMFO but is stronger than that of the other algorithms. F9-F11 are relatively simple, and the optimal fitness value is easy to find; here, the RCLSMAOA is also the algorithm with the fastest convergence rate. For F12-F13, the RCLSMAOA performs well and can continue to converge where other algorithms fall into local optima. F14-F23 are relatively simple functions, but they still play a role in verifying algorithm performance; on these functions, RCLSMAOA also always finds the optimal value fastest. In summary, the RCLSMAOA performs well on all 23 benchmark functions.

Experiments on the CEC2020 Benchmark Function
Using the 23 standard functions alone for validation is not sufficient, so we added the CEC2020 test functions [9] for further verification. In this experiment, we set N = 30, T = 500, and dim = 10. The comparison algorithms remain unchanged. The results of 30 runs of RCLSMAOA and the other algorithms are shown in Table 5.
CEC2020 contains four classes of functions: the unimodal function CEC01, the basic multimodal functions CEC02-CEC04, the hybrid functions CEC05-CEC07, and the composition functions CEC08-CEC10. On the unimodal function, the best algorithm is RCLSMAOA, followed by SMA. This is because RCLSMAOA integrates SMA and adds a mutation strategy to enhance its exploitation capability, enabling it to find better locations. On the basic multimodal functions, RCLSMAOA always performs stably and finds better values. This is because the random center solution strategy effectively maintains a balance between exploration and exploitation in RCLSMAOA; combined with the search strategy of SMA, RCLSMAOA performs well on these functions. The hybrid and composition functions are relatively difficult and complex and can easily trap algorithms in local optima. For this reason, we introduced a restart strategy to enable RCLSMAOA to jump out of local optima. From the experimental results, RCLSMAOA performs well and is not troubled by local optima. To examine the actual performance of the algorithms more clearly, we selected the specific convergence curves of RCLSMAOA and the comparison algorithms, as shown in Figure 6. The convergence curves of RCLSMAOA fall mainly into two types. The first is seen mainly on the unimodal function: RCLSMAOA converges toward the optimal value and finally finds it. This is because RCLSMAOA integrates the position update formula of SMA and improves it with a mutation strategy. The second is seen mainly on the complex hybrid and composition functions: RCLSMAOA shows a very fast convergence rate in the early stage of the iteration. This is because RCLSMAOA uses the multiplication and division operators of AOA, which allow it to find the optimal value very quickly.
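The multiplication and division operators mentioned above come from the exploration phase of the original AOA. A minimal sketch of that position update is given below; the parameter values (alpha = 5, mu = 0.499) follow the original AOA paper, and this is an illustration of the operator rather than the exact RCLSMAOA implementation.

```python
import numpy as np

def aoa_div_mul_update(best, lb, ub, t, T, alpha=5, mu=0.499, eps=1e-12):
    """One division/multiplication position update from the arithmetic
    optimization algorithm (AOA), as used in its exploration phase.
    `best` is the current best solution; lb/ub are the search bounds.
    Sketch only; parameter values follow the original AOA paper."""
    # Math optimizer probability: decreases over the iterations t = 1..T
    mop = 1.0 - (t ** (1.0 / alpha)) / (T ** (1.0 / alpha))
    r2 = np.random.rand(best.size)
    scale = (ub - lb) * mu + lb
    new = np.where(
        r2 > 0.5,
        best / (mop + eps) * scale,  # division operator: large, erratic steps
        best * mop * scale,          # multiplication operator: smaller steps
    )
    return np.clip(new, lb, ub)      # keep the candidate inside the bounds
```

The division branch produces large jumps early in the run (when `mop` is close to 1 the step is moderate, and the `eps` guard prevents division by zero late in the run), which is what gives the fast early convergence described above.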


Analysis of Wilcoxon Rank Sum Test Results and Friedman Test
The Wilcoxon rank sum test is a non-parametric test that makes no assumptions about the underlying data. It therefore applies to many types of data, including discrete, continuous, normally distributed, and non-normally distributed samples. In this experiment, it was used to test whether two samples differ. The test yields a p-value; when p is less than 0.05, we consider the difference between the experimental results significant. Because RCLSMAOA cannot be compared with itself, we do not list its own p-values. This article takes eight algorithms as samples; each algorithm is run independently 30 times with population size N = 30. The dimension used for the 23 standard test functions is 30, and for CEC2020 it is 10. Table 6 shows the results of the 30 experiments on the 23 standard test functions. For F1-F4, because RCLSMAOA is a hybrid that incorporates SMA, the two are not distinguishable on some functions. For F7-F11, the functions are simple, and most algorithms achieve good results on them.
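The test described above can be sketched with SciPy; the fitness samples below are illustrative stand-ins, not the paper's data.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(42)
# Hypothetical best-fitness samples from 30 independent runs of two
# algorithms on one benchmark function (illustrative data only).
alg_a = rng.normal(loc=0.001, scale=0.0005, size=30)  # stronger algorithm
alg_b = rng.normal(loc=0.10, scale=0.02, size=30)     # weaker algorithm

stat, p = ranksums(alg_a, alg_b)
significant = p < 0.05  # the 5% significance level used in the paper
```

With clearly separated samples like these, the p-value falls far below 0.05, so the difference between the two algorithms is judged significant.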
Table 7 shows the results of the thirty experiments on the CEC2020 test functions. We observed significant differences between RCLSMAOA and the other algorithms, except on CEC04. The main reason is that CEC04 is relatively simple compared with the other functions, so most algorithms can find the optimal value.
To verify the ranking of the algorithms, we used the Friedman test. We ran each algorithm independently 30 times and took the average value; the results are shown in Tables 8 and 9. The dimension chosen for the 23 standard functions here is 30. It can be seen that RCLSMAOA still ranks first.
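The Friedman ranking used above can be sketched as follows; the mean-fitness values are illustrative, not the paper's results.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

rng = np.random.default_rng(7)
n_funcs = 10
# Hypothetical mean fitness (lower is better) of three algorithms over
# ten benchmark functions; illustrative data only.
alg1 = rng.uniform(0.0, 0.1, n_funcs)            # consistently best
alg2 = alg1 + rng.uniform(0.1, 0.5, n_funcs)     # always worse than alg1
alg3 = alg1 + rng.uniform(0.5, 1.0, n_funcs)     # always worse than alg2

# Friedman test over the three related samples
stat, p = friedmanchisquare(alg1, alg2, alg3)

# Average Friedman rank per algorithm (rank 1 = best on a function)
scores = np.column_stack([alg1, alg2, alg3])
avg_ranks = rankdata(scores, axis=1).mean(axis=0)
```

The average rank per algorithm is the quantity reported in Tables 8 and 9; the algorithm with the smallest average rank is placed first.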
In this section, we conducted a more comprehensive analysis of the algorithms' performance using the Wilcoxon rank sum test and the Friedman test. We conclude that RCLSMAOA differs significantly from most of the comparison algorithms on these functions and performs well.


Engineering Issues
In this section, we test the application of RCLSMAOA to practical engineering problems in order to assess its solution quality and computational performance and to explore whether it can achieve satisfactory results. We use five classic engineering problems to test the actual performance of the algorithm and compare it with other well-known optimization algorithms.


Pressure Vessel Design Problem
In practical engineering, a common problem is the design of pressure vessels. The aim is to minimize the total cost of materials, forming, and welding for a cylindrical vessel. The structural schematic is shown in Figure 7. The problem has four variables: shell thickness Ts, head thickness Th, inner radius R, and the length L of the cylindrical section, not counting the head. The mathematical model consists of an objective function (the total cost) and four constraints. Table 10 shows the specific results of RCLSMAOA on the pressure vessel problem. RCLSMAOA obtained Ts = 0.742433, Th = 0.370196, R = 40.31961, and L = 200, with COST = 5734.9131. Compared with the other algorithms, RCLSMAOA achieved the best result, reaching the boundary value of 200 for L. This means that RCLSMAOA can solve this engineering problem.
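The pressure vessel model referenced above was not reproduced legibly in this text, so the sketch below uses the standard formulation from the engineering-optimization literature (it may differ from the paper's exact bounds, e.g., the usual upper limit on L is 240):

```python
import numpy as np

def pressure_vessel_cost(x):
    """Standard pressure vessel objective: x = [Ts, Th, R, L]
    (shell thickness, head thickness, inner radius, cylinder length).
    Assumed formulation from the literature."""
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L
            + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L
            + 19.84 * Ts ** 2 * R)

def pressure_vessel_constraints(x):
    """Constraint values g_i(x); g_i(x) <= 0 means feasible."""
    Ts, Th, R, L = x
    return np.array([
        -Ts + 0.0193 * R,                                     # shell thickness
        -Th + 0.00954 * R,                                    # head thickness
        -np.pi * R**2 * L - (4/3) * np.pi * R**3 + 1296000,   # minimum volume
        L - 240,                                              # length limit
    ])
```

A metaheuristic such as RCLSMAOA would minimize `pressure_vessel_cost` while handling `pressure_vessel_constraints`, typically via a penalty term added to the cost for any positive g_i.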

Speed Reducer Design Problem
The speed reducer is one of the key parts of a gearbox. In this problem, the aim is to minimize the weight of the reducer while satisfying four categories of design constraints, with seven decision variables. The model structure is shown in Figure 8. The mathematical model consists of an objective function (the weight) and its constraints.
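The weight objective referenced above was not reproduced legibly in this text; the sketch below uses the standard speed reducer formulation from the literature, which is assumed rather than taken from the paper (the constraint functions are omitted for brevity):

```python
def speed_reducer_weight(x):
    """Standard speed reducer objective (weight), with
    x = [b, m, z, l1, l2, d1, d2]: face width, tooth module, number of
    pinion teeth, lengths of the two shafts between bearings, and the
    two shaft diameters. Assumed formulation from the literature."""
    b, m, z, l1, l2, d1, d2 = x
    return (0.7854 * b * m**2 * (3.3333 * z**2 + 14.9334 * z - 43.0934)
            - 1.508 * b * (d1**2 + d2**2)
            + 7.4777 * (d1**3 + d2**3)
            + 0.7854 * (l1 * d1**2 + l2 * d2**2))
```

Evaluating this objective at a near-optimal design vector yields a weight close to 2994, consistent in magnitude with the results reported in Table 11.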

Table 11 shows that when x = [3.4975, 0.7, 17, 7.3, 7.8, 3.3500, 5.285], the minimum weight obtained by RCLSMAOA is 2995.437365, ranking first among the comparison algorithms. From the experimental data, it can be seen that RCLSMAOA still performs well on this relatively complex engineering problem.

Three-Bar Truss Design Problem
In the three-bar truss design problem, the goal is to minimize the weight of the truss subject to stress, deflection, and buckling constraints. The two decision variables are the cross-sectional areas A1 and A2 of the bars, which must be chosen to minimize the volume while satisfying the three constraints. The specific physical model is shown in Figure 9. The mathematical formulation consists of an objective function (the volume) and the three stress constraints.

Welded Beam Design Problem
The welded beam design problem seeks to minimize the fabrication cost of a beam welded to a rigid support, subject to constraints including shear stress, bending stress, buckling load, and deflection. The model is shown in Figure 10, and its mathematical formulation consists of an objective function (the cost) and these constraints. The specific data are shown in Table 13. RCLSMAOA again performs well on this engineering problem, and the cost of the welded beam reaches the minimum value. The results indicate that RCLSMAOA is reliable for solving the welded beam problem.
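The welded beam cost function was not reproduced legibly in this text; the sketch below uses the standard objective from the literature, which is assumed rather than taken from the paper. The full problem also imposes shear-stress, bending-stress, buckling, and deflection constraints, omitted here for brevity.

```python
def welded_beam_cost(x):
    """Standard welded beam fabrication-cost objective, with
    x = [h, l, t, b]: weld thickness, weld length, bar height, and bar
    thickness. Assumed formulation from the literature."""
    h, l, t, b = x
    # First term: cost of the weld material; second: cost of the bar stock
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)
```

Evaluating this objective at the widely cited near-optimal design x = [0.2057, 3.4705, 9.0366, 0.2057] gives a cost close to 1.7249, the order of magnitude reported in Table 13.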


Car Crashworthiness Design Problem
This engineering problem concerns the safety performance of vehicles in a collision. It involves many aspects, including the body structure, interior devices, and airbag system. The car crashworthiness design problem is very important in automotive design and is directly related to passenger safety. The model is shown in Figure 11. The mathematical formulation consists of an objective function (to be minimized) and its constraints. The results are shown in Table 14. In RCLSMAOA's solution, X1, X3, X5, and X7 all take the minimum value of 0.5, and the final weight is optimal. This problem shows that RCLSMAOA still performs well on engineering problems with many variables and constraints.

Conclusions
This article fully considers the advantages and disadvantages of the SMA and AOA optimization algorithms and proposes a hybrid slime mold and arithmetic optimization algorithm based on random center learning and restart mutation (RCLSMAOA). RCLSMAOA integrates the global search strategies of the two algorithms. On this basis, a random center solution strategy is added to enhance the randomness of the algorithm, the effectiveness of its global search, and the diversity of its population. The mutation strategy enhances the convergence ability of the algorithm and further avoids stagnation, and the restart strategy effectively avoids local optima. Used together, these strategies help RCLSMAOA enhance its optimization ability and maintain a good balance between exploration and exploitation. In addition, we used the Wilcoxon rank sum test to examine the significant differences between the algorithms and obtained good results. Finally, five engineering experiments were conducted, for which RCLSMAOA provided excellent solutions.
From the experimental performance, convergence curves, and engineering problems, we can conclude the following:
(1) The RCLSMAOA proposed in this article combines the advantages of SMA and AOA and effectively avoids the shortcomings of the two algorithms.
(2) The newly proposed random center solution strategy effectively addresses the shortcomings of the component algorithms and significantly enhances the global search ability of RCLSMAOA.
(3) The restart and mutation strategies improve the algorithm's ability to escape local optima and enhance the balance between exploration and exploitation.
(4) The results on the different test functions effectively verify the actual performance of RCLSMAOA, and the five engineering problems show that RCLSMAOA has good engineering application prospects.
This paper only studies the fusion of two optimization algorithms together with three added strategies. RCLSMAOA is still prone to local optima in high-dimensional spaces, its convergence accuracy is not always sufficient, and it did not show obvious advantages on some engineering problems. In the future, we will further improve the performance of RCLSMAOA on practical engineering problems and its applicability in high-dimensional spaces. We will also study a binary version of RCLSMAOA and use it to solve feature selection problems.

Figure 1. Central solution and random central learning solution.

Figure 6. Specific function images of the CEC2020 test functions.

Figure 8. Model of the speed reducer design.

Figure 10. Model of the welded beam design.
Algorithm 1. Pseudocode of the RCLSMAOA: initialize the parameters T, Tmax, ub, lb, N, dim, and w, and initialize the population X according to Equation (1).
Figure 2. Flow chart of the RCLSMAOA, including the stochastic center learning, mutation, and restart strategies.


Table 6. Experimental results of the Wilcoxon rank sum test for the 23 standard functions.

Table 7. Experimental results of the Wilcoxon rank sum test for the CEC2020 functions.

Table 10. Comparison of optimal solutions for the pressure vessel design problem.

Table 11. Comparison of optimal solutions for the speed reducer design problem.

Table 13. Comparison of optimal solutions for the welded beam design problem.

Table 14. Experimental results of the car crashworthiness design.