Article

Multi-Strategy-Improvement-Based Slime Mould Algorithm

School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(10), 5456; https://doi.org/10.3390/app15105456
Submission received: 26 March 2025 / Revised: 7 May 2025 / Accepted: 9 May 2025 / Published: 13 May 2025
(This article belongs to the Special Issue Heuristic and Evolutionary Algorithms for Engineering Optimization)

Abstract

To address the slow convergence, poor stability, and susceptibility to local optima that arise in function optimization problems, a multi-strategy-improvement-based slime mold algorithm (MSSMA) is proposed. The algorithm integrates chaotic mapping and reverse learning to generate a higher-quality initial population. A novel balancing factor, B, is introduced to maintain a better balance between the algorithm’s exploration and exploitation. An enhanced Lévy flight strategy and an elite tangent search strategy are incorporated to further strengthen the algorithm’s global search capability and optimization accuracy. Simulation experiments on 50 benchmark test functions and multi-UAV cooperative path planning scenarios demonstrate that, compared with five other algorithms, the improved algorithm converges faster, is more stable, and is better at escaping local optima.

1. Introduction

A considerable number of scientific and engineering problems can be cast as optimization problems, which can be effectively addressed by a range of metaheuristic algorithms. The stochastic nature of these algorithms helps circumvent local optima and yield satisfactory outcomes, leading to their extensive utilization in global optimization. In recent years, numerous novel metaheuristic algorithms have emerged, including the sailfish optimization algorithm [1], the tuna swarm optimization algorithm [2], and the artificial jellyfish search algorithm [3]. These algorithms have found widespread application in trajectory planning [4] and engineering optimization.
The slime mold algorithm (SMA) is a recently developed metaheuristic proposed by Li et al. in 2020 [5]. (A note on spelling: “mould” is British English, while “mold” is American English. The original paper [5] uses British English, so the term “mould” is retained in the title and keywords to aid retrieval by researchers interested in SMA, while “mold” is used in the remainder of this paper.) SMA is inspired by the oscillatory behavior and adaptive network formation that slime molds exhibit while foraging. By simulating how slime molds dynamically adjust optimal paths when exploring food sources, the algorithm achieves strong global search capability and robustness, and it has been successfully applied to image segmentation, engineering optimization, and neural network training. However, as optimization problems become more complex (e.g., high-dimensional, multimodal, or dynamic environments), SMA exposes limitations in convergence speed, local exploitation, and constraint handling, and improvements are urgently needed. Its local exploitation ability is insufficient: in late iterations, the contraction behavior of individuals can reduce population diversity and trap the search in a local optimum. Its performance also degrades on high-dimensional problems, where search efficiency drops markedly as the number of variables grows, manifesting as slow convergence. Many researchers have therefore studied and extended SMA. To address these limitations, Jia Heming et al. combined the arithmetic optimization algorithm with SMA, replacing the SMA contraction mechanism with the multiplication and division operators in the local exploitation phase, with the aim of enhancing the algorithm’s stochasticity and its ability to escape local extremes [6]. Guo Yuxin et al. applied elite opposition-based learning and quadratic interpolation to the slime mold algorithm to improve its optimization accuracy, speed, and robustness [7]. Huang He et al. designed a nonlinear adaptive inertia weight factor that replaces linear convergence with nonlinear convergence and used the weight value to update individual positions, improving the convergence speed of the slime mold algorithm [8].
Despite the extensive research and improvements on SMA by numerous experts, there remains significant room for development. To further enhance SMA’s performance, this article proposes a multi-strategy-improvement-based slime mold algorithm (MSSMA). Firstly, chaotic mapping and reverse learning are used to generate a superior initial population, thereby enhancing the algorithm’s convergence accuracy. Secondly, a balancing factor B, based on the ordering of fitness values, is proposed to help the algorithm balance exploration and exploitation. Thirdly, an enhanced Lévy flight strategy with a randomized step size is presented; it performs a randomized search around the individuals of the improved initial population, thereby enhancing the exploration capability. Finally, an elite tangent search strategy is employed to strengthen the algorithm’s capacity to escape local optima, thereby accelerating convergence. The simulation results demonstrate that the MSSMA exhibits significant advantages over other prevalent algorithms in accuracy, convergence speed, and stability.
The rest of the paper is organized as follows. Section 2 explores the underlying theory of SMA and clarifies its core principles. Section 3 presents the mathematical model of MSSMA, focusing on the key improvements to the algorithm. Section 4 first presents the empirical results of the MSSMA, illustrating the effectiveness of the enhancements, and then describes the application of the MSSMA to multi-UAV cooperative path planning. Finally, Section 5 presents the results and conclusions of the study.

2. Slime Mold Algorithm

The SMA algorithm is distinctly different from other algorithms inspired by slime molds with regard to its design and application domains. The SMA algorithm primarily simulates the behavioral and morphological changes of slime molds during the foraging process, excluding the simulation of their complete life cycles. SMA simulates the foraging behavior of slime molds through two phases: the exploration phase, reflecting the slime mold’s behavior of searching for food, and the exploitation phase, reflecting the slime mold’s behavior of approaching food. Concurrently, weights are employed to emulate the positive and negative feedbacks generated by slime molds during the foraging process.

2.1. Initialization

The SMA algorithm uses a random initialization strategy, given in Equation (1), to determine the initial positions of the population $X_{initial}$.
$$X_{initial} = (UB - LB) \cdot \mathrm{rand}(Popsize, Dim) + LB \tag{1}$$
Here, Popsize denotes the number of individuals in a population of slime molds, Dim denotes the dimension of the search space, and LB and UB denote the lower and upper bounds of the search range. Note that LB and UB may also be vectors because the upper and lower bounds of the search space may differ across dimensions. rand(Popsize, Dim) denotes a Popsize × Dim matrix in which each element is a random value in the interval [0, 1].
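As a concrete illustration, Equation (1) can be sketched in a few lines of NumPy (the function and variable names here are our own, not from the paper):

```python
import numpy as np

def init_population(popsize, dim, lb, ub, seed=None):
    """Random initialization of Equation (1): X = (UB - LB) * rand + LB.
    lb and ub may be scalars or per-dimension vectors of length dim."""
    rng = np.random.default_rng(seed)
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (dim,))
    return (ub - lb) * rng.random((popsize, dim)) + lb

# Per-dimension bounds are supported, matching the note that LB and UB
# may be vectors.
X = init_population(50, 3, lb=[-10, 0, -5], ub=[10, 1, 5], seed=0)
```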

2.2. Exploration Phase

Slime molds use a random search strategy to explore new food sources. In addition, slime molds will leave a food source if they find a low density of food origin to explore other alternative food sources in the area. Equation (2) models the behavior of slime molds in exploring food sources.
$$X_i(t+1) = (UB - LB) \cdot \mathrm{rand}(1, Dim) + LB \tag{2}$$
Here, $X_i(t+1)$ denotes the position of the i-th slime mold individual at the (t + 1)-th iteration.

2.3. Exploitation Phase

The slime mold tends to approach food according to the odor concentration in its surrounding environment. To express this behavior mathematically, the following formula models the contraction mode:
$$X_i(t+1) = \begin{cases} X_{oats}(t) + vb(t)\cdot\left(W_{sort(i)}(t)\cdot X_A(t) - X_B(t)\right), & r < p_i(t) \\ vc(t)\cdot X_i(t), & r \ge p_i(t) \end{cases} \tag{3}$$
Here, r denotes a random value in [0, 1], t denotes the current number of iterations, $X_{oats}(t)$ denotes the location of the slime mold with the highest concentration of odor found so far, $X_i(t)$ denotes the position of the i-th slime mold at iteration t, $X_A(t)$ and $X_B(t)$ denote two individuals randomly selected from the slime molds, $W_{sort(i)}(t)$ denotes the weight of the sort(i)-th slime mold at iteration t, and sort denotes the sequence ordered according to the fitness values (ascending in a minimization problem). Figure 1 illustrates the effect of Equation (3).
The formula for pi(t) is shown in Equation (4).
$$p_i(t) = \tanh\left|F_i(t) - F_{gb}(t)\right| \tag{4}$$
Here, i ∈ {1, 2, …, n}, $F_i(t)$ denotes the fitness value of the slime mold at position $X_i(t)$, and $F_{gb}(t)$ denotes the fitness value of the global best position found in iterations 1 to t.
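Equation (4) is easy to verify numerically (the helper name below is ours):

```python
import numpy as np

def p_i(f_i, f_gb):
    """Equation (4): the switching probability saturates toward 1 as the
    gap between an individual's fitness and the global best grows."""
    return np.tanh(abs(f_i - f_gb))

# An individual at the global best gets p = 0 (it takes the vc branch of
# Equation (3)); a far-away individual gets p close to 1 (it moves toward
# X_oats).
assert p_i(5.0, 5.0) == 0.0
assert p_i(100.0, 0.0) > 0.999
```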
vc(t) is calculated as shown in Equation (5).
$$vc(t) = \mathrm{unifrnd}(-c, c, Dim) \tag{5}$$
Here, the function unifrnd(−c, c, Dim) generates a random vector with Dim elements drawn from a continuous uniform distribution with lower endpoint −c and upper endpoint c. For example, one run of unifrnd(−1, 1, 5) might return [0.6294, 0.8116, −0.7460, 0.8268, 0.2647]. c is calculated as shown in Equation (6).
$$c = 1 - \frac{t}{max\_t} \tag{6}$$
Here, max_t denotes the maximum number of iterations.
The formula for vb(t) is shown in Equation (7).
$$vb(t) = \mathrm{unifrnd}(-b, b, Dim) \tag{7}$$
$$b = \operatorname{arctanh}(c) \tag{8}$$
The values of the elements in vb(t) fluctuate randomly in the range [−b, b] and gradually converge to zero as the number of iterations increases. vc(t) fluctuates randomly in the range [−c, c] and eventually converges to zero. vb(t) and vc(t) synergistically mimic the selective behavior of the slime molds: vb(t) oscillates to simulate the process by which the slime mold decides whether to approach the current food source or to seek other food sources.
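The interplay of Equations (5)–(8) can be sketched as follows (a minimal illustration with our own function name; the guard against arctanh(1) at t = 0 is our addition):

```python
import numpy as np

def oscillation_vectors(t, max_t, dim, seed=None):
    """Sketch of Equations (5)-(8): c decays linearly to 0, b = arctanh(c),
    and vb, vc are uniform random vectors in [-b, b] and [-c, c]."""
    rng = np.random.default_rng(seed)
    c = 1.0 - t / max_t                      # Equation (6)
    c = min(c, 1.0 - 1e-12)                  # guard: arctanh(1) is infinite
    b = np.arctanh(c)                        # Equation (8)
    vb = rng.uniform(-b, b, dim)             # Equation (7)
    vc = rng.uniform(-c, c, dim)             # Equation (5)
    return vb, vc

vb, vc = oscillation_vectors(t=500, max_t=1000, dim=4, seed=1)
```

Both vectors shrink toward zero as t approaches max_t, which is what drives the transition from wide oscillation early on to fine contraction late in the run.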
Slime molds have been observed to form networks of interconnected veins during their migratory patterns. This ability enables them to utilize multiple food sources concurrently, establishing networks of veins that connect these sources. As the vein approaches the food source, the biological oscillator generates a propagating wave that increases the cytoplasmic flow through the vein. An increase in cytoplasmic flow leads to an increase in the diameter of the vein, and when the flow decreases, the vein contracts as its diameter decreases. This combination of positive and negative feedback enables slime molds to establish stronger routes where food concentrations are higher, thereby ensuring the receipt of maximum nutrient concentrations. Equation (9) mathematically models the positive feedback between the pulse width of the slime mold and the concentration of food being sampled, and the logarithmic part of the equation models the uncertainty in the venous contraction pattern. The logarithm is used to reduce the rate of change in the value so that the value of the contraction frequency does not vary too much. When the food concentration is high, the weight near that region increases; when the food concentration is low, the weight in that region decreases, causing a shift to explore other regions.
$$W_{sort(i)}(t) = \begin{cases} 1 + r\cdot\log\!\left(\dfrac{F_b(t) - F_i(t)}{F_b(t) - F_w(t)} + 1\right), & \text{condition} \\ 1 - r\cdot\log\!\left(\dfrac{F_b(t) - F_i(t)}{F_b(t) - F_w(t)} + 1\right), & \text{others} \end{cases} \tag{9}$$
Here, ‘condition’ denotes that the fitness value $F_i(t)$ of the slime mold individual ranks in the top half of the population, r is a random value in the interval [0, 1], $F_b(t)$ denotes the optimal fitness value obtained at iteration t, and $F_w(t)$ denotes the worst fitness value obtained at iteration t.
$W_{sort(i)}(t)$ effectively simulates the oscillation frequency of slime molds at varying food concentrations. This enables slime molds to approach food more rapidly when the food concentration at a location is higher and more slowly when it is lower.
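Equation (9) can be sketched as below (our own helper; we use the natural logarithm and a small guard against division by zero when all fitness values coincide, both of which are implementation choices rather than details from the paper):

```python
import numpy as np

def sma_weights(fitness, seed=None):
    """Sketch of Equation (9) for minimization: the better half of the
    sorted population gets weights 1 + r*log(.), the worse half 1 - r*log(.)."""
    rng = np.random.default_rng(seed)
    fitness = np.asarray(fitness, dtype=float)
    n = len(fitness)
    order = np.argsort(fitness)                  # ascending: best first
    f_b, f_w = fitness[order[0]], fitness[order[-1]]
    denom = (f_b - f_w) - np.finfo(float).eps    # guard: strictly nonzero
    log_term = np.log((f_b - fitness[order]) / denom + 1.0)
    r = rng.random(n)
    w = np.empty(n)
    half = n // 2
    w[:half] = 1.0 + r[:half] * log_term[:half]  # 'condition' branch
    w[half:] = 1.0 - r[half:] * log_term[half:]  # 'others' branch
    return w, order

w, order = sma_weights([3.0, 1.0, 2.0, 4.0], seed=0)
```

Individuals near the best fitness receive weights at or above 1 (amplifying their pull in Equation (3)), while the worse half receives weights at or below 1.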
The position of the searching individual $X_i(t)$ can be updated according to the currently obtained best position $X_{oats}(t)$, and the parameters vb(t), vc(t), and $W_{sort(i)}(t)$ can also change the position of the individual. The random values in the formulas allow the individuals to form search vectors at arbitrary angles, i.e., to search the solution space in any direction, thus making it possible for the algorithm to find the optimal solution. Figure 2 shows the process of evaluating the fitness values of slime molds.
Equation (10) reflects the behavior of slime molds in searching for and approaching food. The use of the parameter z simulates the behavior of slime molds that dynamically adjust their search pattern according to the quality of the food source. When the quality of the food source is high, the slime molds use the area-restricted search method, thus focusing their search on already found food sources. If the density of the initially discovered food source is low, the slime molds leave that food source to explore other alternative food sources in the region. When the quality of the various food sources varies, the slime mold can choose the food source with the highest concentration. Even when slime molds find higher-quality food, they do not abandon the current food source but split a portion of their biomass to utilize both food sources.
$$X_i(t+1) = \begin{cases} (UB - LB)\cdot \mathrm{rand}(1, Dim) + LB, & rand < z \\ X_{oats}(t) + vb(t)\cdot\left(W_{sort(i)}(t)\cdot X_A(t) - X_B(t)\right), & r < p_i(t) \\ vc(t)\cdot X_i(t), & r \ge p_i(t) \end{cases} \tag{10}$$
The parameter z lies in the range [0, 0.1] and can take different values depending on the specific problem. Li et al. cleverly used the relationship between the parameter z and the random value rand to mimic the behavior of slime molds dynamically adjusting their search patterns according to the quality of the food source. A randomized strategy is used in the exploration phase to access as much of the problem space as possible. In addition, two exploitation strategies are used, corresponding to the behavior of slime molds that can utilize two food sources at the same time, and the relationship between the parameter $p_i(t)$ and the random value r decides which strategy to use. Li et al. [5] experimentally demonstrated that the algorithm performs better when z takes the value 0.03, as this probability maintains the balance between exploration and exploitation.

2.4. Pseudo-Code of the SMA Algorithm

According to the above description, the algorithm first uses a randomized strategy to obtain an initialized population and then, over max_t iterations, updates the location and fitness information of the slime molds according to the corresponding strategies, mimicking the process by which slime molds find good-quality food. In each iteration, the value of W is first calculated. After the new location and fitness information of the slime molds is obtained according to the above strategies, the globally optimal individual is updated if a better individual is found in the current population. The next iteration then begins, and when max_t iterations are reached, the information of the globally optimal slime mold is output. Algorithm 1 provides the pseudo-code of the SMA algorithm.
Algorithm 1: Pseudo-Code of SMA
Initialize the parameters Popsize, max_t, Dim.
Initialize the positions of the slime molds X_initial.
While (t ≤ max_t)
 Calculate the fitness of all slime molds.
 Update F_b(t), X_oats(t).
 Calculate W_sort(i)(t) by Equation (9), (i = 1, 2, …, n).
 For each search portion
  Update p_i(t), vb(t), vc(t).
  Update the position by Equation (10).
 End For
 t = t + 1
End While
Return: F_b(max_t), X_oats(max_t).

3. MSSMA

This section describes the three improvement strategies of the MSSMA and then presents the overall MSSMA procedure.

3.1. Chaotic Mapping and Reverse Learning

The initial population generated by a random initialization strategy has high randomness and uncertainty. Improving the initial population strategy enhances the abilities of local exploitation and global exploration, thereby improving the optimization and convergence performance of the algorithm [9]. In their seminal work, Maaranen and colleagues improved the initial population of the algorithm using quasi-random sequences with favorable uniform distribution characteristics, which benefited the subsequent progress of the algorithm [10]. Poles et al. demonstrated that a uniformly distributed initial population accelerates the convergence of the algorithm [11]. Ibada Ali et al. showed that, in almost all cases, a better initial population leads to faster convergence and better convergence accuracy [12]. Chaotic mapping is a nonlinear phenomenon that exists widely in nature. Because chaotic variables are random, ergodic, and regular, chaotic maps are widely used to generate the initial populations of optimization algorithms: they make the initial population distribution more uniform, expand the algorithm’s search range, and improve its ability to escape local extremes. The Sine chaotic map has a simple structure and high efficiency [13]; its formula is shown in Equation (11). The Sine map has good chaotic properties, and its chaoticity is strongly related to the parameter μ, which takes values in (0, 1]. As shown in Figure 3, the closer μ is to 1, the better the chaotic properties [14].
$$X_{n+1} = \mu \sin(\pi X_n), \quad \mu \in (0, 1] \tag{11}$$
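A quick numeric check of Equation (11) (the function name is ours):

```python
import numpy as np

def sine_map(x0, n, mu=0.99):
    """Iterate the Sine chaotic map of Equation (11): x_{k+1} = mu*sin(pi*x_k)."""
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = mu * np.sin(np.pi * x)
        xs[k] = x
    return xs

# Starting from a seed in (0, 1), the orbit stays inside [0, mu] and, for
# mu close to 1, wanders over most of that interval.
seq = sine_map(0.7, 1000)
```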
The main idea of the reverse learning strategy is to generate the inverse solution of the feasible solution and select a better candidate solution by evaluating the feasible solution and the inverse solution. Ewees et al. proposed a new Quasi-Reflective Learning Mechanism (QRBL) [15], where the quasi-reflective solution is obtained based on the current solution as shown in Equation (12).
$$X_{QRBL}(t) = \mathrm{rand}\!\left(\frac{UB + LB}{2},\ X(t)\right) \tag{12}$$
Here, $X_{QRBL}(t)$ denotes the quasi-reflective learning solution based on the current solution X(t), i.e., a random point between (UB + LB)/2 and X(t).
In this paper, Sine chaotic mapping is combined with quasi-reflective learning to replace the random initialization strategy of the SMA algorithm. Sine chaotic mapping generates a population P1 of N individuals, quasi-reflective learning generates a population P2 of N individuals on the basis of P1, and the resulting 2N individuals are then ranked by fitness; the N individuals with the best fitness values are selected as the initial population.
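The combined initialization might look as follows (a sketch under our own naming; mapping the chaotic sequence into [LB, UB] by linear scaling is one common choice, not prescribed by the paper):

```python
import numpy as np

def chaotic_qrbl_init(n, dim, lb, ub, f, mu=0.99, seed=None):
    """Sketch of Section 3.1: build P1 with the Sine map, P2 with the
    quasi-reflective rule of Equation (12), and keep the best n of the
    2n candidates (minimization)."""
    rng = np.random.default_rng(seed)
    x = rng.random(dim)                      # chaotic seed per dimension
    p1 = np.empty((n, dim))
    for i in range(n):
        x = mu * np.sin(np.pi * x)           # Equation (11)
        p1[i] = lb + x * (ub - lb)           # map chaos in [0, 1] to [lb, ub]
    mid = (lb + ub) / 2.0
    # Equation (12): a random point between (UB + LB)/2 and the current solution
    p2 = mid + rng.random((n, dim)) * (p1 - mid)
    pool = np.vstack([p1, p2])
    fit = np.apply_along_axis(f, 1, pool)
    return pool[np.argsort(fit)[:n]]         # best n of the 2n candidates

P = chaotic_qrbl_init(10, 5, -10.0, 10.0, f=lambda v: np.sum(v**2), seed=0)
```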

3.2. Balancing Factor B

The parameter z is used in the slime mold optimization algorithm to keep the algorithm balanced between exploration and exploitation, and the algorithm’s proponents concluded through experiments that better results are obtained when z takes the value 0.03. However, different z values need to be set for different problems, and for intelligent optimization algorithms it is generally desirable for the algorithm to perform more global search in early iterations and more local search in late iterations so as to accelerate convergence [16,17]. For this reason, a new balancing factor B is proposed in this paper, which enables the algorithm to better maintain the balance between exploration and exploitation.
In the slime mold optimization algorithm, calculating the individual weights requires sorting the population by fitness value (ascending in a minimization problem) after each position update. To strengthen the global optimization ability of the algorithm, a balancing factor B is introduced, and the population is divided according to the value of B to perform different position update strategies. When the sorted order sort(i) of an individual satisfies sort(i) > B, a global search strategy perturbed with random individuals is executed; otherwise, the individual position is still updated with $X_{oats}(t)$. The expression for B is as follows:
$$B = \frac{t}{T} \times N \times rand \tag{13}$$
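The effect of Equation (13) is easy to see numerically (the helper name is ours): early in the run B is small, so most individuals satisfy sort(i) > B and explore globally; late in the run B can approach N, pushing more individuals into the exploitation branch.

```python
import numpy as np

def balance_factor(t, T, N, seed=None):
    """Equation (13): B = (t / T) * N * rand."""
    rng = np.random.default_rng(seed)
    return (t / T) * N * rng.random()

# With N = 50 and T = 1000, B cannot exceed 0.05 at the first iteration,
# so virtually the whole population starts in the global search branch.
b_early = balance_factor(1, 1000, 50, seed=0)
b_late = balance_factor(1000, 1000, 50, seed=0)
```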
Moreover, in a randomized search, the distance moved should vary randomly between long and short steps. For this reason, Lévy flight is introduced: a random walk model whose step lengths obey a Lévy distribution. Unlike traditional random walks (e.g., Brownian motion), the step lengths of Lévy flights are heavy-tailed, i.e., most steps are short, but very long steps occasionally occur. This property gives Lévy flight a wide range of applications in nature and in optimization algorithms. Minh et al. proposed a fast, high-accuracy K-mean optimization algorithm (KO) [18], which generates random step lengths Levy_S using a random parameter β, as shown in Equation (14). Figure 4 illustrates the distribution of the step size Levy_S in two dimensions for different values of β: a smaller β produces longer moves, while a larger β produces shorter moves. Taking β as in Equation (14) expands the search space through random steps of varying lengths and incorporates additional random movement.
$$Levy\_S = \frac{U}{|V|^{1/\beta}}, \quad \beta = 1 + rand \tag{14}$$
Here, U and V are two normal distribution variables with standard deviations of σu and σv as given in Equations (15) and (16).
$$U = \mathrm{normal}(0, \sigma_u^2), \quad \sigma_u = \left(\frac{f(1+\beta)\,\sin(\pi\beta/2)}{f\!\left(\frac{1+\beta}{2}\right)\,\beta\, 2^{(\beta-1)/2}}\right)^{1/\beta}, \quad 1 \le \beta \le 2 \tag{15}$$

$$V = \mathrm{normal}(0, \sigma_v^2), \quad \sigma_v = 1 \tag{16}$$
In Equation (15), the Gamma function f is expressed as in Equation (17).
$$f(z) = \int_0^{\infty} t^{z-1} e^{-t}\, dt \tag{17}$$
Based on the above strategy, a new global search strategy is obtained as shown in Equation (18).
$$X(t+1) = X(t) + Levy\_S \tag{18}$$
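Equations (14)–(18) can be sketched with a Mantegna-style step generator (function names are ours):

```python
import math
import numpy as np

def levy_step(dim, seed=None):
    """Sketch of Equations (14)-(17): a Levy step with a random exponent
    beta = 1 + rand, so beta is drawn from (1, 2)."""
    rng = np.random.default_rng(seed)
    beta = 1.0 + rng.random()
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)                       # Equation (15)
    u = rng.normal(0.0, sigma_u, dim)                # U ~ N(0, sigma_u^2)
    v = rng.normal(0.0, 1.0, dim)                    # V ~ N(0, 1), Eq. (16)
    return u / np.abs(v) ** (1 / beta)               # Equation (14)

# Global search of Equation (18): X(t+1) = X(t) + Levy_S
x = np.zeros(5)
x = x + levy_step(5, seed=0)
```

Because |V| can come arbitrarily close to zero, the step distribution is heavy-tailed: most moves are short, but occasional moves are very long, which is exactly the Lévy flight property described above.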

3.3. Elite Tangent Search Strategy

The slime mold optimization algorithm has the advantages of strong global search capability and easy parallelization, but it also suffers from insufficient local exploitation capability and a tendency to fall into local optima. Layeb proposed the Tangent Search Algorithm (TSA), inspired by the tangent function [19], which has strong global search capability and a fast convergence speed. With reference to the tangent search strategy and further improvements, a new elite tangent search strategy is proposed, as illustrated in Equation (19). By applying the tangent search strategy around the optimal individual, the local exploitation ability of the algorithm is significantly enhanced, and the information of the optimal individual guides the search direction so as to approach the global optimal solution faster [20,21]. Meanwhile, the tangent search strategy avoids the inefficiency of purely random search, reduces invalid searches, and improves the convergence speed. This new strategy retains the global exploration advantage of the slime mold algorithm while compensating for its weak exploitation capability through local refinement search, achieving a better performance balance on complex optimization problems.
$$X(t+1) = \begin{cases} X_b(t) + step \times \tan(\omega) \times \left(X_b(t)\times \mathrm{rand}(0,1) - X(t)\right), & X(t) = X_b(t) \\ X_b(t) + step \times \tan(\omega) \times \left(X_b(t) - X(t)\right), & X(t) \ne X_b(t) \end{cases} \tag{19}$$
Here, the parameters ω and step are shown in Equations (20) and (21), respectively.
$$\omega = \mathrm{rand}(0, 1) \times \frac{\pi}{2.1} \tag{20}$$
$$step = 10 \times \mathrm{sign}(\mathrm{rand}(0, 1)) \times \mathrm{norm}(X_b(t)) \times \log\!\left(1 + 10 \times \frac{Dim}{t}\right) \tag{21}$$
Here, sign() is the sign function, norm() is the Euclidean norm, and Dim denotes the dimension of the search space.
Specifically, the optimal value best(t) is recorded at each iteration. When the optimal value remains unchanged over L consecutive evaluations, the population is divided: the top half of the individuals by fitness value are classified as the elite subpopulation, and their positions are updated using the new elite tangent search strategy, which enhances the algorithm’s local exploitation ability. The value of L has an important impact on the performance and efficiency of the algorithm. A smaller L causes the algorithm to execute the elite tangent search strategy more frequently, which accelerates local exploitation and helps the algorithm converge quickly in the early stages; however, overly frequent elite search may cause the algorithm to fall into a local optimum prematurely, reducing its global exploration capability and increasing the computational cost. Conversely, a larger L delays the triggering of the elite search, giving the algorithm more time for global exploration and making it more likely to find the global optimal solution, but possibly reducing the convergence speed. Therefore, the choice of L must strike a balance between global exploration and local exploitation, with the specific value adjusted according to the complexity of the problem and the characteristics of the search space. Experiments show that the algorithm achieves better results when L is 3.
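One elite tangent move (Equations (19)–(21)) might be sketched as below. Two caveats: we read sign(rand(0,1)) in Equation (21) as sign(rand − 0.5) so that the step direction is actually random, and we read the argument of the logarithm as 1 + 10·Dim/t; both are our interpretations of the printed formula, not details confirmed by the paper.

```python
import numpy as np

def tangent_move(x, x_b, t, dim, seed=None):
    """One elite tangent search move around the best individual X_b."""
    rng = np.random.default_rng(seed)
    omega = rng.random() * np.pi / 2.1                    # Equation (20)
    step = (10.0 * np.sign(rng.random() - 0.5)            # random direction (our reading)
            * np.linalg.norm(x_b)
            * np.log(1.0 + 10.0 * dim / t))               # Equation (21), our reading
    if np.array_equal(x, x_b):                            # first case of Eq. (19)
        return x_b + step * np.tan(omega) * (x_b * rng.random() - x)
    return x_b + step * np.tan(omega) * (x_b - x)         # second case of Eq. (19)

x_new = tangent_move(np.ones(3), 2.0 * np.ones(3), t=5, dim=3, seed=0)
```

Since ω is bounded below π/2, tan(ω) stays finite but can become large, producing occasional aggressive refinement steps around the best individual.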

3.4. Pseudo-Code of the MSSMA Algorithm

Based on the three strategies above, the individual position update formula of the MSSMA is obtained as shown in Equation (22).
$$X(t+1) = \begin{cases} rand\cdot(UB - LB) + LB, & r_1 \le 0.5,\ sort(i) > B \\ X(t) + Levy\_S, & r_1 > 0.5,\ sort(i) > B \\ X_b(t) + vb\cdot\left(W\cdot X_A(t) - X_B(t)\right), & r_2 < p,\ sort(i) \le B \\ vc\cdot X(t), & r_2 \ge p,\ sort(i) \le B \end{cases} \tag{22}$$
Here, r1 and r2 denote random values in the interval [0, 1], and i denotes the ordinal number of the individual X(t) in the sorted population.
The pseudo-code of the MSSMA is shown in Algorithm 2.
Algorithm 2: MSSMA Pseudo-Code
 Input: N, f, Dim, T, LB, UB
 Output: The global best solution and its fitness value
 Generate the initial population using the chaotic mapping and quasi-reflective learning strategies, calculate the slime mold fitness values, and record the global best solution.
 Sort the population by fitness value to obtain sort(i), and update the slime mold weights by Equation (9).
 While (t < T)
  Calculate the value of B by Equation (13).
  For each individual i (i = 1, 2, …, N)
   Update the position by Equation (22).
  End For
  Check whether the slime mold positions are within the search space and calculate the fitness values.
  If a better solution is found in the current population, update the global best solution and its fitness.
  If (best(t) == best(t − L) && t > L)
   Update the positions of the slime molds ranked in the top half by fitness value using Equation (19).
  End If
  Check whether the slime mold positions are within the search space and calculate the fitness values.
  Update the optimal slime mold position and the optimal value.
  Sort the population by fitness value to obtain sort(i), and update the slime mold weights by Equation (9).
  t = t + 1
 End While
 Return: The global best solution and its fitness value.

4. Experimentation and Application

To test the performance of the MSSMA, 50 different benchmark functions (f1, f2, …, f50) were chosen, as shown in Table A1 in Appendix A; their detailed information can be found in [22]. This test set covers functions with four characteristics: unimodal, multimodal, non-separable, and separable. A unimodal function has only one local extremum, while a multimodal function has multiple local extrema; multimodality tends to make it easy for an algorithm to settle on a locally optimal solution. Separability means that a function can be decomposed in terms of functions of its individual variables, whereas a non-separable function cannot because of the interconnections between its variables; this non-separability often makes the global optimal solution difficult to find. Appendix A lists these functions, of which 17 are unimodal, 33 are multimodal, 36 are non-separable, and 14 are separable.
To ensure the fairness of the tests, the following experiments were run in Matlab R2020a; the population size of each algorithm was 50, the maximum number of iterations was 1000, each algorithm was run independently 30 times, and the mean and variance of each method on the different benchmark functions were calculated. The test environment was a Windows 10 64-bit system with 16 GB of RAM and an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz.
Before our experiments, we reviewed many published studies, and most comparison experiments used the classical paradigm of the same population size and number of iterations. This is because CPU time measurements are significantly affected by non-algorithmic factors such as the hardware configuration and differences in programming language implementations. The number of fitness evaluations, on the other hand, is strongly correlated with the convergence characteristics of the algorithms, and enforcing a uniform number of evaluations would prevent algorithms with adaptive iteration mechanisms from exploiting their design advantages, creating a new unfairness. However, simply using the same population size and number of iterations also does not ensure a completely fair comparison, especially when the amount of computation per iteration differs greatly between algorithms; this is a limitation of our experiments. Researchers should adopt different experimental designs depending on the application scenario. We considered parallel computing for intelligent optimization algorithms, in which the computational resources required by the algorithms do not differ much; we therefore retained the existing experimental design.

4.1. Sensitivity Analysis of Parameter L

To quantify the effect of the parameter L on the results, five representative values were tested: L = 3, 5, 10, 15, and 20. Table A2 in Appendix A reports the experimental data for each value of L, with the best results highlighted in bold.
The results of the Friedman test for the algorithm with different values of L are displayed in Table 1. (W|T|L) denotes the numbers of test functions on which a configuration achieved the best result (wins), tied for the best result (ties), and was beaten (losses), respectively. Ave is the average Friedman ranking and Rank orders all competitors; lower Ave values indicate better overall performance. With L = 3, the MSSMA records forty wins, four ties, and four losses, with an average Friedman ranking of 1.56, placing first. L = 3 is therefore used throughout this paper.
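The Friedman mean ranking used in Table 1 can be computed as in the following sketch: for each function, competitors are ranked by their mean result (1 = best, ties share the average of their rank positions), and the ranks are then averaged over all functions. The toy scores below are hypothetical.

```python
def friedman_mean_ranks(results):
    """results[f] is a list of scores (lower is better), one per competitor,
    for function f. Returns each competitor's average rank over all functions,
    with tied scores sharing the average of their rank positions."""
    n_comp = len(next(iter(results.values())))
    totals = [0.0] * n_comp
    for scores in results.values():
        order = sorted(range(n_comp), key=lambda i: scores[i])
        ranks = [0.0] * n_comp
        i = 0
        while i < n_comp:
            j = i
            while j + 1 < n_comp and scores[order[j + 1]] == scores[order[i]]:
                j += 1  # extend the block of tied scores
            avg = (i + j) / 2 + 1  # average of the tied 1-based rank positions
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        for c in range(n_comp):
            totals[c] += ranks[c]
    return [t / len(results) for t in totals]

# Hypothetical data: three competitors (e.g., L = 3, 5, 10) on four functions.
results = {"f1": [0.0, 0.0, 1.0], "f2": [1e-9, 2e-3, 2e-3],
           "f3": [0.5, 0.7, 0.9], "f4": [2.0, 1.0, 3.0]}
avg = friedman_mean_ranks(results)
```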

4.2. Ablation Experiment

An ablation experiment evaluates the contribution of a specific component of a model by systematically removing or modifying it: the role of the target component is revealed by comparing the performance of the full baseline model with that of each ablated variant. In practice, a complete baseline model is built first; then the module, parameter, or strategy under study is removed or adjusted while all other conditions are held constant, and the change in performance is observed. A significant drop after ablation indicates that the component is critical to the system; a weak effect suggests that it is redundant or needs further optimization. Ablation experiments help researchers test hypotheses, rule out ineffective designs, and improve the interpretability of models. A swarm intelligence optimization algorithm can be divided into three parts, initialization, exploitation, and exploration, and we propose an improvement for each of these parts (chaotic initialization, a balancing factor, and an elite tangent strategy). To verify the effectiveness of the improvement measures, the following three comparison algorithms were constructed by deleting one improvement measure at a time from the MSSMA:
MSSMA-1: delete elite tangent strategy;
MSSMA-2: delete balancing factor;
MSSMA-3: remove chaotic initialization.
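The variant bookkeeping behind this ablation can be sketched as follows; the component names are assumed identifiers chosen for illustration, not symbols from the paper.

```python
# The three MSSMA components studied in the ablation (names assumed here).
COMPONENTS = ("chaotic_init", "balance_factor", "elite_tangent")

def ablation_variants():
    """Yield (name, enabled-component-set) pairs: the full model plus one
    variant per removed component, matching MSSMA-1/2/3 above."""
    full = set(COMPONENTS)
    yield "MSSMA", full
    # MSSMA-1 drops the elite tangent strategy, MSSMA-2 the balancing
    # factor, MSSMA-3 the chaotic initialization.
    for i, comp in enumerate(reversed(COMPONENTS), start=1):
        yield f"MSSMA-{i}", full - {comp}

variants = dict(ablation_variants())
```

Each variant would then be run under the identical 30-trial protocol, so any performance gap is attributable to the single removed component.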
Table A3 reports the experimental data for the five algorithms on the 50-function test set, with the best results highlighted in bold. As the summary in Table 2 shows, removing any one improvement measure degrades performance; the best results are obtained only when all three measures are used together.

4.3. Comparison of Algorithm Performance

To test the performance of the MSSMA, simulation experiments were conducted comparing it with the Whale Optimization Algorithm (WOA) [23], the K-mean Optimization Algorithm (KO), the Beluga Whale Optimization Algorithm (BWO) [24], the Ivy Optimization Algorithm (IVY) [25], and the original SMA. To ensure fairness, all algorithms used a population size of 50 and a maximum of 1000 iterations; each algorithm was run independently 30 times, and the mean and variance of each method on the different benchmark functions were calculated.
Table A4 shows the experimental data of the six algorithms on a test set consisting of 50 benchmark test functions, with the best results highlighted in bold.
To further analyze the statistical significance of the selected improvements, the Friedman test and the Wilcoxon signed-rank test were applied at a significance level of 0.05. Each of the other five algorithms was compared with the MSSMA, yielding the p-values in the last row of Table 3 (the MSSMA was not compared with itself, so the corresponding cell is “-”). A p-value below 0.05 indicates a significant performance difference between the two algorithms; otherwise, no significant difference is assumed. All p-values were below 0.05, which shows that the performance of the MSSMA differs significantly from that of the other algorithms and statistically confirms its superiority and reliability.
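The paired comparison above can be sketched with a minimal, stdlib-only two-sided Wilcoxon signed-rank test using the normal approximation (no tie or continuity corrections, so it is illustrative rather than a replacement for a statistics library). The error values below are hypothetical.

```python
import math

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test for paired samples, using the
    normal approximation to the null distribution of the rank sum."""
    d = [x - y for x, y in zip(a, b) if x != y]  # drop zero differences
    n = len(d)
    ranked = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[ranked[j + 1]]) == abs(d[ranked[i]]):
            j += 1  # block of tied magnitudes
        avg = (i + j) / 2 + 1  # shared average rank (1-based)
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    w_minus = sum(r for r, di in zip(ranks, d) if di < 0)
    w = min(w_plus, w_minus)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mu) / sigma
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical example: one algorithm's errors uniformly below another's
# on 12 benchmark functions.
errs_a = [0.1 * i for i in range(12)]
errs_b = [0.1 * i + 0.5 for i in range(12)]
p = wilcoxon_signed_rank(errs_a, errs_b)
```

In practice one would use a library implementation (e.g., `scipy.stats.wilcoxon`), which also handles exact small-sample distributions.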
Table 3 shows the results of the six algorithms on the test function. From the (W|T|L) and Friedman average ranking results in the table, it can be seen that the MSSMA significantly outperformed the other five algorithms compared on most of the test functions. The specific analysis is as follows.
As Table 3 shows, the MSSMA achieved the best results on 14 of the seventeen unimodal functions and on 29 of the thirty-three multimodal functions, with only slightly worse results on the remaining seven functions: it was second best on f7, f9, f26, f40, and f43 and third best on f6 and f50. In addition, the improved algorithm outperformed the SMA on all 50 test functions, reflecting the effectiveness of the improvement strategies. The convergence curves of the algorithms are shown in Figure A1.
The MSSMA performed much better than the other algorithms on the UN (unimodal non-separable) and US (unimodal separable) functions, which demonstrates its strong exploitation capability. It also outperformed the other algorithms on the MS (multimodal separable) and MN (multimodal non-separable) functions, indicating that it avoids local optima well and has strong exploration capability. In summary, the solution accuracy and stability of the MSSMA are generally better than those of the other five algorithms.

4.4. Collaborative Multi-UAV Path Planning

Collaborative multi-UAV path planning has important applications in several fields. In search and rescue, multiple drones can jointly cover large areas, quickly locate survivors, and avoid obstacles, while sharing information in real time to prevent redundant searches [26]. In agricultural monitoring, drones can efficiently collect farmland data, optimizing their paths to cover the entire area without collisions and improving monitoring efficiency [27]. In logistics, multiple UAVs collaborate on package delivery, dynamically adjusting their paths to cope with obstacles and traffic changes in urban environments and ensure accurate, timely delivery.
The main approaches to collaborative multi-UAV path planning are centralized planning, distributed planning, and swarm intelligence algorithms [28,29]. Centralized planning assigns tasks and paths to all UAVs through a central controller and suits small teams, but its computational complexity is high; distributed planning relies on the autonomous decision making of each UAV and suits large teams, but requires an efficient communication mechanism; and swarm intelligence algorithms such as Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) solve the multi-objective optimization problem by simulating natural behaviors but may converge slowly.
The difficulties in cooperative multi-UAV path planning mainly concern environmental complexity, dynamic obstacles, and communication constraints [30]. Complex 3D environments require planning algorithms that can efficiently handle obstacles and terrain changes; dynamic obstacles, such as moving vehicles or other UAVs, demand real-time path adjustments to avoid collisions; and communication constraints can reduce the efficiency of UAV collaboration, especially in large teams or long-distance missions. In addition, multi-objective optimization (e.g., of path length, energy consumption, and mission completion time) adds to the complexity of the problem.
The flight paths of UAVs are affected by many factors, and in this paper, we construct path planning evaluation functions in terms of path length, deflection angle size, and flight threat conditions.
When the UAV maintains a fixed speed during flight, the flight-time cost fL is proportional to the flight path length and can be expressed as shown in Equation (23) [31].
\[
f_L = \sum_{i=1}^{n} l_i
\]
Here, li is the distance from the ith path point Li to the next path point Li + 1.
The flight altitude of the UAV is limited by the function fH as shown in Equation (24):
\[
f_H = \sum_{i=1}^{n} h_i
\]
Here, the altitude penalty function hi is shown in Equation (25).
\[
h_i =
\begin{cases}
z_i - H_{\max}, & z_i > H_{\max} \\
0, & H_{\min} \le z_i \le H_{\max} \\
H_{\min} - z_i, & z_i < H_{\min}
\end{cases}
\]
Here, zi is the flight altitude of the ith path point, and Hmax and Hmin are the maximum and minimum flight altitudes of the UAV, respectively.
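The path-length and altitude terms of Equations (23)–(25) can be sketched as follows; the 3D waypoints and altitude limits below are hypothetical values chosen for illustration.

```python
def path_length(points):
    """f_L (Eq. 23): total Euclidean length of the waypoint sequence."""
    return sum(
        sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        for p, q in zip(points, points[1:])
    )

def altitude_cost(points, h_min, h_max):
    """f_H (Eqs. 24-25): sum of the per-waypoint altitude penalties h_i."""
    total = 0.0
    for (_, _, z) in points:
        if z > h_max:
            total += z - h_max      # penalty for flying too high
        elif z < h_min:
            total += h_min - z      # penalty for flying too low
        # no penalty inside [h_min, h_max]
    return total

# Hypothetical 3D waypoints (x, y, z) and altitude band [3, 10].
path = [(0, 0, 5), (3, 4, 12), (6, 8, 2)]
fL = path_length(path)
fH = altitude_cost(path, h_min=3, h_max=10)
```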
To further test the performance of the algorithm, a battlefield environment with radar, missiles, and other threats is simulated as follows [32]. Here, the threat probability from radar to UAV can be approximated as follows:
\[
P(d_R) =
\begin{cases}
0, & d_R > d_{R\max} \\
1/d_R^{4}, & d_{R\min} \le d_R \le d_{R\max} \\
1, & d_R < d_{R\min}
\end{cases}
\]
Here, dR is the distance between the UAV and the radar, and dRmax and dRmin are the maximum detection radius and the effective detection radius of the radar, respectively.
\[
P(d_M) =
\begin{cases}
0, & d_M > d_{M\max} \\
1/d_M, & d_{M\min} \le d_M \le d_{M\max} \\
1, & d_M < d_{M\min}
\end{cases}
\]
Here, dM is the distance between the UAV and the missile, and dMmax and dMmin are the maximum kill radius and the effective kill radius of the missile, respectively.
\[
P(d_A) =
\begin{cases}
0, & d_A > d_{A\max} \\
1/d_A, & d_{A\min} \le d_A \le d_{A\max} \\
1, & d_A < d_{A\min}
\end{cases}
\]
Here, dA is the distance between the UAV and the anti-aircraft gun, and dAmax and dAmin are the maximum kill radius and the effective kill radius of the anti-aircraft gun, respectively.
\[
P(d_C) =
\begin{cases}
0, & d_C > d_{C\max} \\
1/d_C, & d_{C\min} \le d_C \le d_{C\max} \\
1, & d_C < d_{C\min}
\end{cases}
\]
Here, dC is the distance between the UAV and the atmospheric threat, and dCmax and dCmin are the maximum and effective threat radii of the atmospheric hazard, respectively.
Then, the threat function fB is determined as shown in Equation (30).
\[
f_B = r \times P(d_R) + m \times P(d_M) + a \times P(d_A) + c \times P(d_C)
\]
In the formula, r, m, a, and c are the influence factors of the radar, missile, artillery, and atmosphere on the threat probability of the UAV, respectively.
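The piecewise threat model of Equations (26)–(30) can be sketched as follows; the distances, radii, and influence factors below are hypothetical, and the radar uses the fourth-power decay of Equation (26) while the other threats decay as 1/d.

```python
def threat_prob(d, d_min, d_max, exponent=1):
    """Piecewise threat probability (Eqs. 26-29): 0 beyond the maximum
    radius, 1 inside the effective radius, 1/d**exponent in between
    (exponent=4 for the radar model)."""
    if d > d_max:
        return 0.0
    if d < d_min:
        return 1.0
    return 1.0 / d ** exponent

def total_threat(dists, coeffs):
    """f_B (Eq. 30): weighted sum of the individual threat probabilities.
    `dists` maps each threat name to (d, d_min, d_max, exponent)."""
    return sum(coeffs[k] * threat_prob(*dists[k]) for k in dists)

# Hypothetical distances/radii and influence factors r, m, a, c.
dists = {"radar": (2.0, 1.0, 10.0, 4), "missile": (12.0, 1.0, 8.0, 1),
         "artillery": (0.5, 1.0, 5.0, 1), "atmosphere": (4.0, 1.0, 6.0, 1)}
coeffs = {"radar": 0.4, "missile": 0.3, "artillery": 0.2, "atmosphere": 0.1}
fB = total_threat(dists, coeffs)
```

Here the missile lies outside its maximum radius (probability 0), the UAV is inside the artillery's effective radius (probability 1), and the radar and atmospheric terms fall on the decaying middle branch.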
Zhou et al. proposed a way of defining the time coordination constraints that requires fewer steps and less computation [33]: a commanded arrival time is set for each UAV, the feasible time window for reaching the target endpoint is derived, and the time cost is determined by comparing the two, as shown in Equation (31). Given the commanded arrival time tc, the shortest time t m i n i and longest time t m a x i for the UAV to reach the target endpoint are derived from its speed range and trajectory length.
\[
f(t_i) =
\begin{cases}
0, & t_{\min}^{i} \le t_c \le t_{\max}^{i} \\
\lvert t_i - t_c \rvert, & t_c < t_{\min}^{i} \ \text{or} \ t_c > t_{\max}^{i}
\end{cases}
\]
Here, ti is the actual time for the ith UAV to reach the target endpoint.
Summing the time cost function yields the total time cost function fT as shown in Equation (32).
\[
f_T = \sum_{i=1}^{n} f(t_i)
\]
Based on the above, the following evaluation function for path planning is derived:
\[
f_4 = \omega_1 \times f_L + \omega_2 \times f_H + \omega_3 \times f_B + \omega_4 \times f_T
\]
Here, ω1, ω2, ω3, and ω4 are the weights of the functions fL, fH, fB, and fT, respectively, and satisfy the constraint ω1 + ω2 + ω3 + ω4 = 1.
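The time constraints of Equations (31)–(32) and the overall weighted evaluation function can be sketched as follows; the arrival times, windows, and weights are hypothetical, and the |t_i − t_c| penalty form follows the reconstruction of Equation (31) above.

```python
def time_cost(t_actual, t_min, t_max, t_cmd):
    """Per-UAV time cost f(t_i) (Eq. 31): zero when the commanded arrival
    time t_cmd lies inside the feasible window [t_min, t_max], otherwise
    the deviation |t_i - t_c| (penalty form assumed here)."""
    if t_min <= t_cmd <= t_max:
        return 0.0
    return abs(t_actual - t_cmd)

def evaluation(fL, fH, fB, fT, weights):
    """Overall path-planning cost: weighted sum of the four terms, with
    weights constrained to sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    w1, w2, w3, w4 = weights
    return w1 * fL + w2 * fH + w3 * fB + w4 * fT

# Hypothetical fleet: (t_actual, t_min, t_max, t_cmd) per UAV. The first
# UAV's window contains the commanded time; the second UAV's does not.
fT = sum(time_cost(*u) for u in [(95, 80, 120, 100), (130, 90, 95, 100)])
cost = evaluation(100.0, 3.0, 0.25, fT, (0.4, 0.2, 0.2, 0.2))
```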
Figure 5 illustrates the established threat areas and the path planned by the MSSMA.
To test the performance of the MSSMA on collaborative multi-UAV path planning, the Whale Optimization Algorithm (WOA), K-mean Optimization Algorithm (KO), Beluga Whale Optimization Algorithm (BWO), Ivy Optimization Algorithm (IVY), SMA, and MSSMA were each applied to the planning problem. To ensure fairness, all algorithms used a population size of 50 and a maximum of 100 iterations; each algorithm was run independently 30 times, and the minimum, mean, and variance of each method were calculated, as shown in Table 4. The minimum cost, mean cost, and variance of all the other algorithms were significantly higher than those of the MSSMA, which indicates that the MSSMA achieves better optimization stability in path planning.

5. Conclusions

The MSSMA improves the SMA with four strategies: (1) chaotic mapping and inverse learning generate a better initial population and improve convergence accuracy; (2) a balancing factor B based on the fitness-value order divides the population into a globally searching part and a locally searching part, so that individuals with poor fitness explore globally early in the iterations and exploit around the current optima later, which better balances the algorithm's exploration and exploitation; (3) the improved Lévy flight strategy performs random searches around the better individuals of the initial population and has a higher chance of discovering the optimal solution than random generation; and (4) the elite tangent search strategy performs larger-scale local searches around an individual trapped in a local optimum, accelerating convergence and improving the ability to escape local optima.
Compared with the other five algorithms, including the SMA, the MSSMA achieved better experimental results on the 50-function benchmark test set, demonstrating the effectiveness of the improvement strategies. In addition, the MSSMA planned better paths with greater optimization stability in collaborative multi-UAV path planning and is therefore promising for wide application to such problems.

Author Contributions

Conceptualization, D.H. and T.T.; methodology, D.H.; software, D.H.; validation, D.H., T.T. and Y.Y.; formal analysis, D.H.; investigation, D.H.; resources, D.H.; data curation, D.H.; writing—original draft preparation, D.H.; writing—review and editing, D.H.; visualization, D.H.; supervision, T.T.; project administration, T.T.; funding acquisition, T.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 62266004 and 72462006.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data underlying this article will be shared upon reasonable request made to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Fifty benchmarks’ function information. D: Dimensions, C: Characteristics, U: Unimodal, M: Multimodal, S: Separable, and N: Non-separable.
C | Function | D | Range | fopt
US f 1 x = 25 + i = 1 n x i 5[−5.12, 5.12]−5
US f 2 x = i = 1 n x i + 0.5 2 30[−100, 100]0
US f 3 x = i = 1 n x i 2 30[−100, 100]0
US f 4 x = i = 1 n i x i 2 30[−10, 10]0
US f 5 x = i = 1 n i x i 4 + r a n d o m 0 , 1 30[−1.28, 1.28]0
UN f 6 x = 1.5 x 1 + x 1 x 2 2 + 2.25 x 1 + x 1 x 2 2 2 + 2.625 x 1 + x 1 x 2 3 2 5[−4.5, 4.5]0
UN f 7 x = cos x 1 cos x 2 e x 1 π 2 x 2 π 2 2[−100, 100]−1
UN f 8 x = 0.26 x 1 2 + x 2 2 0.48 x 1 x 2 2[−10, 10]0
UN f 9 x = 100 x 1 x 2 2 + x 1 1 2 + x 4 1 2 + 90 x 3 2 x 4 2 + 10.1 x 2 1 2 + x 4 1 2 + 19.8 x 2 1 x 4 1 4[−10, 10]0
UN f 10 x = i = 1 n x i 1 2 + i = 2 n x i x i 1 6[−D2, D2]−50
UN f 11 x = i = 1 n x i 1 2 + i = 2 n x i x i 1 10[−D2, D2]−210
UN f 12 x = i = 1 n x i 2 + i = 1 n 0.5 i x i 2 + i = 1 n 0.5 i x i 4 10[−5, 10]0
UN f 13 x = i = 1 n k x 4 i 3 + 10 x 4 i 2 2 + 5 x 4 i 1 + 10 x 4 i 2 + 5 x 4 i 2 + x 4 i 1 4 + 10 x 4 i 1 + x 4 i 4 24[−4, 5]0
UN f 14 x = i = 1 n x i + i = 1 n x i 30[−10, 10]0
UN f 15 x = i = 1 n j = 1 i x j 2 30[−100, 100]0
UN f 16 x = i = 1 n 100 x i + 1 x i 2 + x i 1 2 30[−30, 30]0
UN f 17 x = x 1 1 2 + i = 1 n i 2 x i 2 x i 1 2 30[−10, 10]0
MS f 18 x = 1 500 + j = 1 25 1 j + j = 1 2 x i a i j 6 1 2[−65.536, 65.536]0.998
MS f 19 x = x 2 5.1 4 π 2 x 1 2 + 5 π x 1 6 2 + 10 1 1 8 π cos x 1 + 10 2[−5, 10] × [0, 15]0.398
MS f 20 x = x 1 2 + 2 x 2 2 0.3 cos 3 π x 1 0.4 cos 4 π x 2 + 0.7 2[−100, 100]0
MS f 21 x = x 1 + 2 x 2 7 2 + 2 x 1 + x 2 5 2 2[−10, 10]0
MS f 22 x = i = 1 n x i 2 10 cos 2 π x i + 10 30[−5.12, 5.12]0
MS f 23 x = i = 1 n x i sin x i 30[−500, 500]−12,569.5
MS f 24 x = i = 1 n sin x i sin i x i 2 π 20 2[0, π]−1.8013
MS f 25 x = i = 1 n sin x i sin i x i 2 π 20 5[0, π]−4.6877
MS f 26 x = i = 1 n sin x i sin i x i 2 π 20 10[0, π]−9.6602
MN f 27 x = 0.5 + s i n 2 x 1 2 + x 2 2 0.5 1 + 0.001 x 1 2 + x 2 2 2 2[−100, 100]0
MN f 28 x = 4 x 1 2 2.1 x 1 4 + 1 3 x 1 6 + x 1 x 2 4 x 2 2 + x 2 4 2[−5, 5]−1.03163
MN f 29 x = x 1 2 + 2 x 2 2 0.3 cos 3 π x 1 4 π x 3 + 0.3 2[−100, 100]0
MN f 30 x = x 1 2 + 2 x 2 2 0.3 cos 3 π x 1 + 4 π x 3 + 0.3 2[−100, 100]0
MN f 31 x = i = 1 5 i cos i + 1 x 1 + i i = 1 5 i cos i + 1 x 2 + i 2[−10, 10]−186.7309
MN f 32 x = 1 + x 1 + x 2 + 1 2 19 14 x 1 + 3 x 1 2 14 x 2 + 6 x 1 x 2 + 3 x 2 2 × 30 + 2 x 1 3 x 2 + 1 2 18 32 x 1 + 12 x 1 2 + 48 x 2 36 x 1 x 2 + 27 x 2 2 2[−2, 2]3
MN f 33 x = i = 1 11 a i x i b i 2 + b i x 2 b i 2 + b i x 3 + x 4 2 4[−5, 5]0.00031
MN f 34 x = i = 1 5 x i a i x i a i T + c i 1 4[0, 10]−10.1532
MN f 35 x = i = 1 7 x i a i x i a i T + c i 1 4[0, 10]−10.4028
MN f 36 x = i = 1 10 x i a i x i a i T + c i 1 4[0, 10]−10.5363
MN f 37 x = k = 1 n i = 1 n i k + β x i i k 1 2 4[−D, D]0
MN f 38 x = k = 1 n i = 1 n x i k b k 2 4[0, D]0
MN f 39 x = i = 1 4 e x p j = 1 3 a i j x j p i j 2 3[0, 1]−3.86
MN f 40 x = i = 1 4 e x p j = 1 6 a i j x j p i j 2 6[0, 1]−3.32
MN f 41 x = 1 4000 i = 1 n x i 100 2 i = 1 n cos x i 100 i + 1 30[−600, 600]0
MN f 42 x = 20 e x p 0.2 1 n i = 1 n x i 2 + 20 e e x p 1 n i = 1 n cos 2 π x i 30[−32, 32]0
MN f 43 x = π n 10 s i n 2 π y 1 + i = 1 n y i 1 2 1 + 10 s i n 2 π y i + 1 + y n 1 2 + i = 1 30 u x i , 10,100,4 30[−50, 50]0
MN f 44 x = 0.1 s i n 2 3 π x 1 + 0.1 i = 1 29 x i 1 2 p 1 + s i n 2 3 π x i + 1 + 0.1 x n 1 2 1 + s i n 2 2 π x 30 + i = 1 30 u x i , 5,100,4 30[−50, 50]0
MN f 45 x = c i e x p 1 π j = 1 n x j a i j 2 × cos π j = 1 n x j a i j 2 2[0, 10]−1.08
MN f 46 x = c i e x p 1 π j = 1 n x j a i j 2 × cos π j = 1 n x j a i j 2 5[0, 10]−1.5
MN f 47 x = c i e x p 1 π j = 1 n x j a i j 2 × cos π j = 1 n x j a i j 2 10[0, 10]–1.080938
MN f 48 x = i = 1 n A i B i 2 A i = j = 1 n a i j sin α j + b i j cos α j B i = j = 1 n a i j sin x j + b i j cos x j 2[−π, π]0
MN f 49 x = i = 1 n A i B i 2 A i = j = 1 n a i j sin α j + b i j cos α j B i = j = 1 n a i j sin x j + b i j cos x j 5[−π, π]0
MN f 50 x = i = 1 n A i B i 2 A i = j = 1 n a i j sin α j + b i j cos α j B i = j = 1 n a i j sin x j + b i j cos x j 10[−π, π]0
Table A2. Experimental results of MSSMA using different values of L (each winner is shown in bold).
ID | Index | L = 3 | L = 5 | L = 10 | L = 15 | L = 20
F1Mean−5.00−5.00−5.00−5.00−5.00
Std0.000.000.000.000.00
F2Mean0.000.000.000.000.00
Std0.000.000.000.000.00
F3Mean0.000.004.50 × 10−2554.40 × 10−1744.00 × 10−161
Std0.000.000.000.000.00
F4Mean0.000.001.90 × 10−2261.70 × 10−1913.40 × 10−151
Std0.000.000.000.003.40 × 10−300
F5Mean5.47 × 10−492.23 × 10−32.34 × 10−33.28 × 10−32.83 × 10−3
Std3.30 × 10−183.66 × 10−63.77 × 10−61.25 × 10−57.13 × 10−6
F6Mean6.75 × 10−101.21 × 10−102.06 × 10−101.85 × 10−103.96 × 10−10
Std1.15 × 10−181.16 × 10−191.94 × 10−191.42 × 10−193.41 × 10−19
F7Mean−1.00−9.99 × 10−1−9.97 × 10−1−9.98 × 10−1−9.97 × 10−1
Std1.51 × 10−206.18 × 10−71.50 × 10−41.81 × 10−51.65 × 10−4
F8Mean0.000.000.005.20 × 10−2622.40 × 10−214
Std0.000.000.000.000.00
F9Mean4.81 × 10−78.63 × 10−55.16 × 10−57.49 × 10−54.84 × 10−5
Std1.50 × 10−122.63 × 10−81.85 × 10−83.58 × 10−81.37 × 10−8
F10Mean−5.00 × 101−4.99 × 101−4.99 × 101−4.99 × 101−4.99 × 101
Std1.11 × 10−181.99 × 10−88.07 × 10−89.7 × 10−85.53 × 10−7
F11Mean−2.10 × 102−2.10 × 102−2.10 × 102−2.10 × 102−2.10 × 102
Std1.92 × 10−91.40 × 10−21.26 × 10−23.71 × 10−21.01 × 10−1
F12Mean0.000.004.90 × 10−1609.70 × 10−1157.91 × 10−86
Std0.000.000.002.80 × 10−2271.90 × 10−169
F13Mean0.000.001.70 × 10−2234.80 × 10−1152.10 × 10−116
Std0.000.000.007.00 × 10−2281.30 × 10−230
F14Mean0.009.98 × 10−942.99 × 10−672.09 × 10−361.59 × 10−35
Std0.002.95 × 10−1851.40 × 10−1321.31 × 10−706.44 × 10−69
F15Mean0.002.79 × 1016.41 × 1016.47 × 1014.94 × 101
Std0.002.28 × 1045.65 × 1047.44 × 1041.925 × 104
F16Mean2.28 × 10−81.512.221.271.90
Std3.26 × 10−98.531.30 × 1042.399.66
F17Mean3.46 × 10−45.64 × 10−15.80 × 10−16.00 × 10−15.80 × 10−1
Std4.17 × 10−138.34 × 10−29.38 × 10−29.07 × 10−21.06 × 10−1
F18Mean9.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−1
Std1.86 × 10−264.14 × 10−252.66 × 10−251.21 × 10−248.70 × 10−24
F19Mean3.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−1
Std1.10 × 10−183.16 × 10−201.05 × 10−203.21 × 10−174.70 × 10−17
F20Mean0.000.000.000.000.00
Std0.000.000.000.000.00
F21Mean9.58 × 10−119.50 × 10−122.26 × 10−111.31 × 10−111.89 × 10−11
Std1.61 × 10−202.68 × 10−224.19 × 10−214.58 × 10−227.66 × 10−22
F22Mean0.000.000.000.000.00
Std0.000.000.000.000.00
F23Mean−1.59 × 105−1.88 × 105−2.01 × 105−2.29 × 105−2.03 × 105
Std1.28 × 10−22.011.610.424.61
F24Mean−1.80−1.80−1.80−1.80−1.80
Std2.86 × 10−233.52 × 10−233.76 × 10−212.15 × 10−215.09 × 10−22
F25Mean−4.58−4.46−4.42−4.58−4.54
Std1.27 × 10−39.10 × 10−21.06 × 10−12.66 × 10−23.72 × 10−2
F26Mean−8.57−7.28−7.45−7.61−7.68
Std3.42 × 10−26.19 × 10−16.40 × 10−16.39 × 10−14.69 × 10−1
F27Mean0.000.000.000.000.00
Std0.000.000.000.000.00
F28Mean−1.03−1.03−1.03−1.03−1.03
Std1.27 × 10−251.17 × 10−274.02 × 10−278.42 × 10−261.17 × 10−24
F29Mean0.000.000.000.000.00
Std0.000.000.000.000.00
F30Mean0.000.000.000.000.00
Std0.000.000.000.000.00
F31Mean−1.87 × 102−1.87 × 102−1.87 × 102−1.87 × 102−1.87 × 102
Std4.01 × 10−146.43 × 10−168.26 × 10−142.38 × 10−141.19 × 10−14
F32Mean3.003.003.003.003.00
Std1.23 × 10−261.75 × 10−241.07 × 10−231.8 × 10−223.07 × 10−22
F33Mean3.08 × 10−43.51 × 10−43.37 × 10−44.60 × 10−44.49 × 10−4
Std5.99 × 10−142.83 × 10−85.00 × 10−95.99 × 10−86.12 × 10−8
F34Mean−1.02 × 101−1.02 × 101−1.02 × 101−1.02 × 101−1.02 × 101
Std1.26 × 10−136.64 × 10−139.13 × 10−134.15 × 10−127.79 × 10−13
F35Mean−1.04 × 101−1.04 × 101−1.04 × 101−1.04 × 101−1.04 × 101
Std4.05 × 10−123.84 × 10−132.28 × 10−128.58 × 10−121.36 × 10−11
F36Mean−1.05 × 101−1.04 × 101−1.04 × 101−1.04 × 101−1.04 × 101
Std4.70 × 10−124.30 × 10−137.13 × 10−131.8 × 10−121.38 × 10−11
F37Mean9.26 × 10−26.75 × 10−22.18 × 10−25.75 × 10−25.59 × 10−2
Std1.91 × 10−22.06 × 10−21.97 × 10−32.08 × 10−22.05 × 10−2
F38Mean9.57 × 10−43.30 × 10−45.58 × 10−47.98 × 10−47.19 × 10−4
Std1.53 × 10−62.15 × 10−71.46 × 10−61.54 × 10−61.48 × 10−6
F39Mean−3.86−3.86−3.86−3.86−3.86
Std9.69 × 10−223.30 × 10−222.06 × 10−211.02 × 10−183.23 × 10−18
F40Mean−3.28−3.20−3.21−3.20−3.21
Std2.34 × 10−51.74 × 10−149.10 × 10−48.08 × 10−119.10 × 10−4
F41Mean0.001.10 × 10−40.000.002.33 × 10−10
Std0.003.65 × 10−70.000.001.64 × 10−18
F42Mean3.89 × 10−164.44 × 10−164.47 × 10−164.59 × 10−164.87 × 10−16
Std0.000.000.000.000.00
F43Mean2.36 × 10−83.45 × 10−52.20 × 10−54.07 × 10−52.73 × 10−5
Std4.30 × 10−117.54 × 10−91.88 × 10−99.11 × 10−91.57 × 10−9
F44Mean1.07 × 10−36.93 × 10−38.26 × 10−38.50 × 10−37.13 × 10−3
Std5.03 × 10−61.92 × 10−43.82 × 10−43.23 × 10−42.49 × 10−4
F45Mean−1.08−1.08−1.08−1.08−1.08
Std2.33 × 10−201.44 × 10−201.21 × 10−209.76 × 10−202.90 × 10−20
F46Mean−1.40−1.09−1.18−1.26−1.17
Std5.40 × 10−31.78 × 10−18.41 × 10−28.20 × 10−21.06 × 10−1
F47Mean−9.40 × 10−1−3.17 × 10−1−2.88 × 10−1−2.78 × 10−1−2.07 × 10−1
Std5.57 × 10−53.58 × 10−22.94 × 10−22.20 × 10−22.82 × 10−2
F48Mean2.85 × 10−91.78 × 10−93.39 × 10−95.97 × 10−98.15 × 10−9
Std3.33 × 10−174.18 × 10−181.29 × 10−176.24 × 10−171.99 × 10−16
F49Mean1.04 × 1022.49 × 1021.34 × 1021.72 × 1023.17 × 102
Std2.45 × 1053.56 × 1058.26 × 1058.88 × 1054.06 × 105
F50Mean3.83 × 1034.42 × 1054.12 × 1052.05 × 1053.00 × 105
Std4.86 × 1064.38 × 1071.20 × 1077.48 × 1062.23 × 107
Table A3. Experimental data of ablation experiments (each winner is shown in bold).
ID | Index | MSSMA | SMA | MSSMA-1 | MSSMA-2 | MSSMA-3
F1Mean−5.00−5.00−5.00−5.00−5.00
Std0.000.000.000.000.00
F2Mean0.000.000.000.000.00
Std0.000.000.000.000.00
F3Mean0.000.001.49 × 10−40.000.00
Std0.000.002.04 × 10−70.000.00
F4Mean0.000.001.13 × 10−20.000.00
Std0.000.008.73 × 10−40.000.00
F5Mean5.47 × 10−83.26 × 10−41.92 × 10−41.81 × 10−64.27 × 10−7
Std3.30 × 10−185.45 × 10−82.11 × 10−61.84 × 10−88.01 × 10−8
F6Mean6.75 × 10−102.85 × 10−81.15 × 10−83.99 × 10−97.76 × 10−10
Std1.15 × 10−182.80 × 10−149.29 × 10−149.26 × 10−233.25 × 10−18
F7Mean−1.00−9.95 × 10−1−9.97 × 10−1−1.00−1.00
Std1.51 × 10−205.76 × 10−42.06 × 10−106.15 × 10−244.92 × 10−19
F8Mean0.000.009.89 × 10−1890.000.00
Std0.000.000.000.000.00
F9Mean4.81 × 10−72.37 × 10−44.01 × 10−51.72 × 10−47.37 × 10−7
Std1.50 × 10−122.34 × 10−71.52 × 10−97.91 × 10−82.53 × 10−12
F10Mean−5.00 × 101−5.00 × 101−5.00 × 101−5.00 × 101−5.00 × 101
Std1.11 × 10−188.08 × 10−93.63 × 10−98.15 × 10−104.53 × 10−15
F11Mean−2.10 × 102−2.10 × 102−2.10 × 102−2.10 × 102−2.10 × 102
Std1.92 × 10−92.39 × 10−46.81 × 10−41.15 × 10−83.25 × 10−9
F12Mean0.000.000.000.000.00
Std0.000.000.000.000.00
F13Mean0.000.002.17 × 10−2430.000.00
Std0.000.000.000.000.00
F14Mean0.003.42 × 10−21.32 × 10−21.07 × 10−10.00
Std0.001.06 × 10−21.78 × 10−12.82 × 10−10.00
F15Mean0.000.003.35 × 10−10.000.00
Std0.000.007.56 × 10−10.000.00
F16Mean2.28 × 10−66.85 × 10121.041.032.33 × 10−4
Std3.26 × 10−91.20 × 1045.52 × 10−24.18 × 10−11.32 × 10−1
F17Mean3.46 × 10−48.12 × 10−13.51 × 10−15.95 × 10−13.07 × 10−3
Std4.17 × 10−138.05 × 10−23.27 × 10−22.48 × 10−21.78 × 10−2
F18Mean9.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−1
Std1.86 × 10−263.00 × 10−233.04 × 10−243.27 × 10−261.96 × 10−26
F19Mean3.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−1
Std1.10 × 10−181.12 × 10−163.17 × 10−152.89 × 10−182.35 × 10−18
F20Mean0.000.000.000.000.00
Std0.000.000.000.000.00
F21Mean9.58 × 10−113.30 × 10−103.46 × 10−104.62 × 10−101.04 × 10−10
Std1.61 × 10−205.55 × 10−199.36 × 10−155.71 × 10−258.71 × 10−20
F22Mean0.000.005.04 × 10−40.000.00
Std0.000.001.87 × 10−160.000.00
F23Mean−1.59 × 105−2.09 × 105−2.09 × 105−2.05 × 105−1.84 × 105
Std1.28 × 10−21.18 × 1021.22 × 10−23.69 × 10−21.98 × 10−2
F24Mean−1.80−1.80−1.80−1.80−1.80
Std2.86 × 10−231.67 × 10−175.81 × 10−195.15 × 10−207.81 × 10−23
F25Mean−4.58−4.39−4.45−4.45−4.57
Std1.27 × 10−38.90 × 10−23.24 × 10−21.15 × 10−21.29 × 10−2
F26Mean−8.57−8.46−8.46−8.49−8.54
Std3.42 × 10−25.28 × 10−15.03 × 10−15.39 × 10−13.32 × 10−1
F27Mean0.000.000.000.000.00
Std0.000.000.000.000.00
F28Mean−1.03−1.03−1.03−1.03−1.03
Std1.27 × 10−259.65 × 10−221.23 × 10−223.48 × 10−245.12 × 10−25
F29Mean0.000.000.000.000.00
Std0.000.000.000.000.00
F30Mean0.000.000.000.000.00
Std0.000.000.000.000.00
F31Mean−1.87 × 102−1.87 × 102−1.87 × 102−1.87 × 102−1.87 × 102
Std4.01 × 10−141.30 × 10−134.03 × 10−131.63 × 10−131.22 × 10−14
F32Mean3.003.003.003.003.00
Std1.23 × 10−262.97 × 10−221.48 × 10−221.06 × 10−244.34 × 10−25
F33Mean3.08 × 10−44.44 × 10−43.86 × 10−43.10 × 10−43.09 × 10−4
Std5.99 × 10−142.96 × 10−87.17 × 10−94.57 × 10−143.94 × 10−12
F34Mean−1.02 × 101−1.02 × 101−1.02 × 101−1.02 × 101−1.02 × 101
Std1.26 × 10−132.04 × 10−91.35 × 10−98.18 × 10−102.28 × 10−12
F35Mean−1.04 × 101−1.04 × 101−1.04 × 101−1.04 × 101−1.04 × 101
Std4.05 × 10−126.36 × 10−92.70 × 10−94.00 × 10−116.44 × 10−12
F36Mean−1.05 × 101−1.05 × 101−1.05 × 101−1.05 × 101−1.05 × 101
Std4.70 × 10−121.82 × 10−93.91 × 10−92.40 × 10−102.67 × 10−12
F37Mean9.26 × 10−21.38 × 10−11.02 × 10−19.58 × 10−29.43 × 10−2
Std1.91 × 10−23.77 × 10−21.99 × 10−22.57 × 10−22.50 × 10−2
F38Mean9.57 × 10−47.85 × 10−36.75 × 10−32.56 × 10−31.61 × 10−3
Std1.53 × 10−67.91 × 10−59.31 × 10−59.16 × 10−87.87 × 10−6
F39Mean−3.86−3.86−3.86−3.86−3.86
Std9.69 × 10−221.27 × 10−164.66 × 10−189.87 × 10−205.50 × 10−21
F40Mean−3.28−3.24−3.21−3.24−3.26
Std2.34 × 10−53.07 × 10−39.10 × 10−41.39 × 10−33.59 × 10−3
F41Mean0.000.007.73 × 10−70.000.00
Std0.000.001.20 × 10−110.000.00
F42Mean3.89 × 10−164.44 × 10−164.44 × 10−164.44 × 10−164.44 × 10−16
Std0.002.07 × 10−326.50 × 10−90.000.00
F43Mean2.36 × 10−84.08 × 10−44.02 × 10−62.22 × 10−61.31 × 10−7
Std4.30 × 10−111.05 × 10−61.20 × 10−129.67 × 10−86.30 × 10−11
F44Mean1.07 × 10−33.60 × 10−27.51 × 10−35.76 × 10−34.14 × 10−3
Std5.03 × 10−62.28 × 10−33.05 × 10−77.46 × 10−71.61 × 10−6
F45Mean−1.08−1.08−1.08−1.08−1.08
Std2.33 × 10−204.35 × 10−171.15 × 10−183.86 × 10−194.51 × 10−20
F46Mean−1.40−1.55 × 10−1−0.98−1.03−1.32
Std5.40 × 10−39.16 × 10−21.41 × 10−21.04 × 10−27.12 × 10−2
F47Mean−9.40 × 10−1−1.37 × 10−3−1.35−1.34−1.24
Std5.57 × 10−53.32 × 10−41.94 × 10−46.4 × 10−34.52 × 10−4
F48Mean2.85 × 10−91.46 × 10−87.44 × 10−97.23 × 10−93.44 × 10−9
Std3.33 × 10−173.24 × 10−169.31 × 10−119.72 × 10−213.08 × 10−17
F49Mean1.04 × 1022.82 × 1021.31 × 1021.22 × 1021.19 × 102
Std2.45 × 1055.67 × 1051.87 × 1055.18 × 1056.53 × 105
F50Mean3.83 × 1035.17 × 1034.59 × 1034.09 × 1033.86 × 103
Std4.86 × 1066.29 × 1062.58 × 1065.41 × 1067.82 × 106
Table A4. Means and standard deviations of benchmark functions (each winner is shown in bold).
ID | Index | MSSMA | SMA | WOA | KO | BWO | IVY
F1Mean−5.00−5.002.46 × 10−187−4.07−5.00−8.00 × 10−1
Std0.000.000.003.40 × 10−10.001.82
F2Mean0.000.004.35 × 10−1040.000.000.00
Std0.000.004.79 × 10−2060.000.000.00
F3Mean0.000.002.76 × 1070.000.000.00
Std0.000.004.76 × 10+130.000.000.00
F4Mean0.000.007.310.000.000.00
Std0.000.006.690.000.000.00
F5Mean5.47 × 10−83.26 × 10−44.13 × 10−61.26 × 10−43.19 × 10−51.86 × 10−5
Std3.30 × 10−185.45 × 10−83.32 × 10−81.82 × 10−87.03 × 10−103.26 × 10−10
F6Mean6.75 × 10−102.85 × 10−80.008.46 × 10−74.58 × 10−46.78 × 10−18
Std1.15 × 10−182.80 × 10−140.001.02 × 10−124.25 × 10−71.00 × 10−33
F7Mean−1.00−9.95 × 10−11.95 × 10−4−1.00−1.00−2.16 × 10−1
Std1.51 × 10−205.76 × 10−45.54 × 10−81.96 × 10−130.001.21 × 10−1
F8Mean0.000.00−7.890.000.000.00
Std0.000.003.19 × 10−170.000.000.00
F9Mean4.81 × 10−72.37 × 10−40.001.39 × 10−11.35 × 10−31.14
Std1.50 × 10−122.34 × 10−70.004.16 × 10−22.35 × 10−63.01
F10Mean−5.00 × 101−5.00 × 1012.10 × 10−15−5.00 × 101−4.98 × 101−2.31
Std1.11 × 10−188.08 × 10−94.12 × 10−301.48 × 10−77.13 × 10−31.65 × 101
F11Mean−2.10 × 102−2.10 × 1022.67 × 10−2−2.10 × 102−1.97 × 102−4.43 × 10−1
Std1.92 × 10−92.39 × 10−42.86 × 10−31.66 × 10−42.50 × 1014.99
F12Mean0.000.002.39 × 10−50.000.000.00
Std0.000.003.48 × 10−100.000.000.00
F13Mean0.000.001.21 × 10−22.40 × 10−630.000.00
Std0.000.001.79 × 10−41.72 × 10−1240.000.00
F14Mean0.003.42 × 10−21.13 × 10−1060.002.47 × 10−2570.00
Std0.001.06 × 10−21.56 × 10−2110.000.000.00
F15Mean0.000.002.44 × 1071.20 × 10−3020.000.00
Std0.000.003.70 × 10+130.000.000.00
F16Mean2.28 × 10−66.85 × 1014.95 × 1024.95 × 1022.68 × 10−34.94 × 102
Std3.26 × 10−91.20 × 1045.42 × 10−26.50 × 10−22.00 × 10−47.28 × 10−1
F17Mean3.46 × 10−48.12 × 10−16.67 × 10−16.67 × 10−12.50 × 10−16.67 × 10−1
Std4.17 × 10−138.05 × 10−21.90 × 10−92.51 × 10−111.31 × 10−87.74 × 10−10
F18Mean9.98 × 10−19.98 × 10−11.529.98 × 10−19.98 × 10−19.52
Std1.86 × 10−263.00 × 10−233.413.98 × 10−245.22 × 10−251.52 × 101
F19Mean3.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.99 × 10−13.98 × 10−1
Std1.10 × 10−181.12 × 10−169.64 × 10−151.65 × 10−71.85 × 10−63.98 × 10−15
F20Mean0.000.000.000.000.000.00
Std0.000.000.000.000.000.00
F21Mean9.58 × 10−113.30 × 10−103.18 × 10−51.00 × 10−67.24 × 10−31.34 × 10−10
Std1.61 × 10−205.55 × 10−199.44 × 10−107.12 × 10−132.96 × 10−51.42 × 10−19
F22Mean0.000.000.000.000.000.00
Std0.000.000.000.000.000.00
F23Mean−1.59 × 105−2.09 × 105−1.95 × 105−6.36 × 104−2.09 × 105−1.75 × 104
Std1.28 × 10−21.18 × 1024.61 × 1081.16 × 1088.76 × 10−224.68 × 106
F24Mean−1.80−1.80−1.80−1.80−1.79−1.69
Std2.86 × 10−231.67 × 10−172.28 × 10−161.65 × 10−116.16 × 10−55.61 × 10−2
F25Mean−4.58−4.39−3.95−4.50−4.55−2.78
Std1.27 × 10−38.90 × 10−22.95 × 10−13.09 × 10−24.03 × 10−31.89 × 10−1
F26Mean−8.57−8.46−6.50−6.78−8.66−3.70
Std3.42 × 10−25.28 × 10−17.59 × 10−17.92 × 10−13.28 × 10−23.10 × 10−1
F27Mean0.000.000.000.000.000.00
Std0.000.000.000.000.000.00
F28Mean−1.03−1.03−1.03−1.03−1.03−9.71 × 10−1
Std1.27 × 10−259.65 × 10−228.25 × 10−231.41 × 10−125.45 × 10−93.55 × 10−2
F29Mean0.000.000.000.000.000.00
Std0.000.000.000.000.000.00
F30Mean0.000.002.59 × 10−170.000.000.00
Std0.000.001.25 × 10−320.000.000.00
F31Mean−1.87 × 102−1.87 × 102−1.87 × 102−1.87 × 102−1.87 × 102−1.61 × 102
Std4.01 × 10−141.30 × 10−131.75 × 10−101.59 × 10−24.72 × 10−111.02 × 103
F32Mean3.003.003.003.003.423.90
Std1.23 × 10−262.97 × 10−221.12 × 10−119.53 × 10−101.88 × 10−12.43 × 101
F33Mean3.08 × 10−44.44 × 10−45.30 × 10−43.36 × 10−43.16 × 10−41.04 × 10−3
Std5.99 × 10−142.96 × 10−88.88 × 10−84.19 × 10−91.91 × 10−101.34 × 10−5
F34Mean−1.02 × 101−1.02 × 101−9.73−1.02 × 101−1.02 × 101−2.87
Std1.26 × 10−132.04 × 10−92.664.70 × 10−73.64 × 10−124.09
F35Mean−1.04 × 101−1.04 × 101−1.00 × 101−1.04 × 101−1.04 × 101−3.66
Std4.05 × 10−126.36 × 10−91.822.25 × 10−77.32 × 10−122.73
F36Mean−1.05 × 101−1.05 × 101−8.89−1.05 × 101−1.05 × 101−4.01
Std4.70 × 10−121.82 × 10−98.042.29 × 10−71.92 × 10−113.00
F37Mean9.26 × 10−21.38 × 10−13.011.041.18 × 10−13.21 × 10−1
Std1.91 × 10−23.77 × 10−21.13 × 1019.11 × 10−18.40 × 10−34.14 × 10−1
F38Mean9.57 × 10−47.85 × 10−39.28 × 10−16.61 × 10−23.63 × 10−21.61 × 10−1
Std1.53 × 10−67.91 × 10−51.322.59 × 10−37.03 × 10−41.09 × 10−1
F39Mean−3.86−3.86−3.86−3.86−3.86−3.83
Std9.69 × 10−221.27 × 10−163.31 × 10−63.20 × 10−96.68 × 10−61.93 × 10−3
F40Mean−3.28−3.24−3.26−3.22−3.31−2.74
Std2.34 × 10−53.07 × 10−38.16 × 10−33.86 × 10−34.56 × 10−41.50 × 10−1
F41Mean0.000.000.000.000.000.00
Std0.000.000.000.000.000.00
F42Mean3.89 × 10−164.44 × 10−164.23 × 10−154.41 × 10−164.16 × 10−166.57 × 10−16
Std0.002.07 × 10−327.78 × 10−301.07 × 10−289.74 × 10−363.86 × 10−19
F43Mean2.36 × 10−84.08 × 10−41.49 × 10−21.15 × 10−11.38 × 10−97.85 × 10−2
Std4.30 × 10−111.05 × 10−61.64 × 10−52.99 × 10−44.75 × 10−83.28 × 10−4
F44Mean1.07 × 10−33.60 × 10−24.709.641.45 × 10−24.96 × 101
Std5.03 × 10−62.28 × 10−32.207.876.12 × 10−59.75 × 10−4
F45Mean−1.08−1.08−1.08−1.08−1.06−9.95 × 10−1
Std2.33 × 10−204.35 × 10−178.42 × 10−61.03 × 10−133.65 × 10−43.00 × 10−2
F46Mean−1.40−1.55 × 10−1−7.96 × 10−1−1.28−1.36−3.15 × 10−1
Std5.40 × 10−39.16 × 10−28.55 × 10−24.83 × 10−33.92 × 10−22.32 × 10−2
F47Mean−9.40 × 10−1−1.37 × 10−3−2.52 × 10−1−1.07 × 10−1−6.27 × 10−1−2.42 × 10−2
Std5.57 × 10−53.32 × 10−42.75 × 10−21.91 × 10−13.01 × 10−25.90 × 10−3
F48Mean2.85 × 10−91.46 × 10−83.53 × 10−91.13 × 10−11.28 × 1014.70 × 101
Std3.33 × 10−173.24 × 10−163.24 × 10−171.17 × 10−21.87 × 1023.20 × 104
F49Mean1.04 × 1022.82 × 1024.08 × 1026.79 × 1028.91 × 1029.92 × 102
Std2.45 × 1055.67 × 1056.34 × 1056.96 × 1015.06 × 1032.96 × 106
F50Mean3.83 × 1035.17 × 1039.67 × 1032.15 × 1033.61 × 1034.09 × 103
Std4.86 × 1066.29 × 1061.61 × 1081.05 × 1076.90 × 1053.05 × 107
Figure A1. Convergence curve of the test function.

References

  1. Shadravan, S.; Naji, H.R.; Bardsiri, V.K. The Sailfish Optimizer: A novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems. Eng. Appl. Artif. Intell. 2019, 80, 20–34. [Google Scholar] [CrossRef]
  2. Wu, G. Across neighborhood search for numerical optimization. Inf. Sci. 2016, 329, 597–618. [Google Scholar] [CrossRef]
  3. Chou, J.-S.; Truong, D.-N. A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Expert Syst. Appl. 2021, 165, 113702. [Google Scholar] [CrossRef]
  4. Wu, J.; Zhang, Z.; Yang, Y.; Zhang, P.; Fan, D. Time-optimal trajectory planning for robotic arms based on an improved tuna swarm algorithm. Comput. Integr. Manuf. Syst. 2024, 30, 4292–4301. [Google Scholar]
  5. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Chen, Y.; Pan, Z. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  6. Jia, H.; Liu, Y.; Liu, Q.; Wang, S.; Zheng, R. A hybrid optimization algorithm combining slime mould and arithmetic with random opposition-based learning. J. Comput. Sci. Explor. 2022, 16, 1182–1192. [Google Scholar]
  7. Guo, Y.; Liu, S.; Zhang, L.; Huang, Q. An improved slime mould algorithm with elite opposition-based learning and quadratic interpolation. Comput. Appl. Res. 2021, 38, 3651–3656. [Google Scholar] [CrossRef]
  8. Huang, H.; Gao, Y.; Ru, F.; Yang, L.; Wang, H. Three-dimensional path planning for UAVs based on adaptive slime mould algorithm optimization. J. Shanghai Jiao Tong Univ. 2023, 57, 1282–1291. [Google Scholar] [CrossRef]
  9. Chen, L.F.; Cao, K.X.; Zhang, S.P.; Bai, H.R.; Han, Y.; Dai, Q. Recent advances in swarm intelligent optimization algorithms. Comput. Eng. Appl. 2024, 60, 46–67. [Google Scholar] [CrossRef]
  10. Maaranen, H.; Miettinen, K.; Mäkelä, M.M. Quasi-random initial population for genetic algorithms. Comput. Math. Appl. 2004, 47, 1885–1895. [Google Scholar] [CrossRef]
  11. Poles, S.; Fu, Y.; Rigoni, E. The effect of initial population sampling on the convergence of multi-objective genetic algorithms. In Multiobjective Programming and Goal Programming: Theoretical Results and Practical Applications; Springer: Berlin/Heidelberg, Germany, 2009; pp. 123–133. [Google Scholar]
  12. Ibada Ali, J.; Tüű-Szabó, B.; Kóczy, T.L. Effect of the initial population construction on the DBMEA algorithm searching for the optimal solution of the traveling salesman problem. Infocommunications J. 2022, 14, 72–78. [Google Scholar] [CrossRef]
  13. Li, J.W.; Cheng, Y.M.; Chen, K.Z. Chaotic particle swarm optimization algorithm based on adaptive inertia weight. In Proceedings of the 26th Chinese Control and Decision Conference (2014 CCDC), Changsha, China, 31 May–2 June 2014; pp. 1310–1315. [Google Scholar]
  14. Hui, L.C.; Chen, X.L.; Meng, Z.B. An improved sparrow search algorithm with multi-strategy hybridization. Comput. Eng. Appl. 2022, 58, 71–83. [Google Scholar]
  15. Ewees, A.A.; Abd Elaziz, M.; Houssein, E.H. Improved grasshopper optimization algorithm using opposition-based learning. Expert Syst. Appl. 2018, 112, 156–172. [Google Scholar] [CrossRef]
  16. Zhou, X.Y.; Yin, Z.Y.; Gao, W.F.; Tan, G.S.; Yi, Y.G. An adaptive multi-neighborhood artificial bee colony algorithm based on reinforcement learning. J. Comput. 2024, 47, 1521–1546. [Google Scholar]
  17. Wu, F.; Chen, K.; Wang, W.L. A review of computational intelligence based on parallel computing. J. Zhejiang Univ. (Eng. Ed.) 2025, 59, 27–38. [Google Scholar]
  18. Minh, H.L.; Sang-To, T.; Wahab, M.A.; Cuong-Le, T. A new metaheuristic optimization based on K-means clustering algorithm and its application to structural damage identification. Knowl.-Based Syst. 2022, 251, 109189. [Google Scholar] [CrossRef]
  19. Layeb, A. Tangent search algorithm for solving optimization problems. Neural Comput. Appl. 2022, 34, 8853–8884. [Google Scholar] [CrossRef]
  20. Tao, X.M.; Guo, W.J.; Li, X.K.; Chen, W.; Wu, Y.K. Dimensionally reset multiple swarm particle swarm algorithm based on density peaks. J. Softw. 2023, 34, 1850–1869. [Google Scholar] [CrossRef]
  21. Chen, Z.; Damian, Z.; Ziyun, X. A symbiotic nonuniform Gaussian variational bottle sea squirt swarm algorithm for multisubgroups. J. Autom. 2011, 48, 1307–1317. [Google Scholar] [CrossRef]
  22. Zhao, W.; Wang, L.; Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 388, 114194. [Google Scholar] [CrossRef]
  23. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  24. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  25. Ghasemi, M.; Zare, M.; Trojovský, P.; Rao, R.V.; Trojovská, E.; Kandasamy, V. Optimization based on the smart behavior of plants with its engineering applications: Ivy algorithm. Knowl.-Based Syst. 2024, 295, 111850. [Google Scholar] [CrossRef]
  26. Michelon, G.K.; Assunção, W.K.; Grünbacher, P.; Egyed, A. Analysis and propagation of feature revisions in preprocessor-based software product lines. In Proceedings of the 2023 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), Taipa, Macao, 21–24 March 2023; pp. 284–295. [Google Scholar]
  27. Schlosser, A.D.; Szabó, G.; Bertalan, L.; Varga, Z.; Enyedi, P.; Szabó, S. Building Extraction Using Orthophotos and Dense Point Cloud Derived from Visual Band Aerial Imagery Based on Machine Learning and Segmentation. Remote Sens. 2020, 12, 2397. [Google Scholar] [CrossRef]
  28. Shan, J.R. Research on Multi-AUV Task Assignment and Path Planning Based on Swarm Intelligence Algorithm. Master’s Thesis, Jilin University, Changchun, China, 2024. Available online: https://kns.cnki.net/kcms2/article/abstract?v=b41E60TuiN92yX80tKT_USqnbcSNTnxPQKhL4fFkW-XNZDWl1InNX-8Dcn9rpiwkepQv1v6478hR0Z6BX8foFcZYjotIkicM5EwiERYMp3b901hmdbSBul97a6VlBiwCki07dsMcghujyaacLzn4Uh8y0WGPTiPNv7OMzT7fHElTFXPJ2HPuvoPwch0OaM5cNNaUZ3GmvIuJbpDNVQlMNA==&uniplatform=NZKPT (accessed on 10 March 2025).
  29. Peng, H.; Zhang, J.; Li, H.; Hu, J. Cooperative UAV mission planning based on improved wolfpack algorithm. Comput. Eng. 2024, 50, 69–79. [Google Scholar] [CrossRef]
  30. Wang, F.; Zhang, H.; Han, M.; Xing, L.N. Co-evolutionary mixed-variable multi-objective particle swarm optimization algorithm based on co-evolution for solving UAV cooperative multi-tasking problem. J. Comput. 2021, 44, 1967–1983. [Google Scholar]
  31. Liu, C.-A.; Wang, X.; Liu, C.; Wu, H. Three-dimensional trajectory planning for unmanned aerial vehicles based on improved gray wolf optimization algorithm. J. Huazhong Univ. Sci. Technol. (Nat. Sci. Ed.) 2017, 45, 38–42. [Google Scholar] [CrossRef]
  32. Hu, Z.; Zhao, M.; Yao, M.; Li, K.H.; Wu, R. An improved ant algorithm for UAV multi-target 3D trajectory planning. J. Shenyang Univ. Technol. 2011, 33, 570–575. [Google Scholar]
  33. Zhou, R.; Huang, C.Q.; Wei, Z.L.; Zhao, K.X. Application of MP-GWO algorithm in multi-UCAV cooperative trajectory planning. J. Air Force Eng. Univ. (Nat. Sci. Ed.) 2017, 18, 24–29. [Google Scholar]
Figure 1. Possible locations in (a) 2 dimensions and (b) 3 dimensions [5].
Figure 2. Assessment of fitness [5].
Figure 3. Sine chaotic mapping bifurcation diagram.
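The sine chaotic map in Figure 3 is used, together with opposition-based ("inverse") learning, to diversify the initial population. A minimal sketch of both steps, assuming the common sine-map form x_{k+1} = (a/4)·sin(πx_k) with a = 4 and a generic minimization fitness; the paper's exact map parameters and update rule may differ:

```python
import numpy as np

def sine_map_init(pop_size, dim, lb, ub, a=4.0, seed=None):
    """Chaotic initialization: iterate the sine map, then scale into [lb, ub]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.05, 0.95, size=(pop_size, dim))  # seed values in (0, 1)
    for _ in range(50):  # burn-in so the sequence settles into the chaotic regime
        x = (a / 4.0) * np.sin(np.pi * x)
    return lb + x * (ub - lb)

def opposition_refine(pop, lb, ub, fitness):
    """Opposition-based learning: keep the fitter of x and its opposite lb + ub - x."""
    opp = lb + ub - pop
    fit = np.apply_along_axis(fitness, 1, pop)
    opp_fit = np.apply_along_axis(fitness, 1, opp)
    better = opp_fit < fit  # minimization: opposite point wins where it is fitter
    pop[better] = opp[better]
    return pop
```

Applying `opposition_refine` right after `sine_map_init` can only improve (never worsen) each individual's starting fitness, which is the intended effect on early convergence.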
Figure 4. Distribution of step size Levy_S in two dimensions for different β-parameters.
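The heavy-tailed step lengths shown in Figure 4 can be drawn with Mantegna's algorithm, the standard construction for Lévy-stable steps with stability parameter β ∈ (0, 2]. This sketch is illustrative only and does not reproduce the paper's exact improved-Lévy scaling:

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(beta, size, rng=None):
    """Mantegna's algorithm: step = u / |v|^(1/beta), u ~ N(0, sigma_u), v ~ N(0, 1)."""
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)
```

Smaller β produces fatter tails, i.e., rarer but much longer jumps, which matches the widening spread across the β panels of Figure 4 and is what lets the search escape local optima.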
Figure 5. Threat areas and pathways planned by MSSMA.
Table 1. Results of Friedman test for MSSMA at different values of L.
ID | L = 3 | L = 5 | L = 10 | L = 15 | L = 20
(W|T|L) | (40|6|4) | (20|26|4) | (11|35|4) | (8|32|10) | (7|24|19)
Ave | 1.56 | 2.88 | 3.54 | 3.40 | 3.62
Rank | 1 | 2 | 3 | 4 | 5
Table 2. Results of Friedman test for ablation experiments.
ID | MSSMA | SMA | MSSMA-1 | MSSMA-2 | MSSMA-3
(W|T|L) | (49|1|0) | (14|9|27) | (7|31|12) | (15|35|0) | (17|33|0)
Ave | 1.68 | 3.20 | 3.54 | 3.40 | 3.14
Rank | 1 | 3 | 5 | 4 | 2
Table 3. The experimental results of six algorithms on Friedman and Wilcoxon signed-rank test.
ID | MSSMA | SMA | WOA | KO | BWO | IVY
(W|T|L) | (43|7|0) | (14|31|5) | (7|32|11) | (13|35|2) | (7|40|3) | (14|16|20)
Ave | 1.86 | 2.88 | 3.22 | 3.94 | 4.58 | 4.52
Rank | 1 | 2 | 3 | 4 | 6 | 5
p-value | – | 0.0159 | 9.7 × 10^−6 | 0.0013 | 0.0242 | 0.003
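The "Ave" and "Rank" rows in Tables 1–3 come from a Friedman-style ranking: each algorithm is ranked per test function and the ranks are averaged. A self-contained numpy sketch of that computation on made-up scores (not the paper's data), with tied scores receiving average ranks:

```python
import numpy as np

def friedman(scores):
    """Friedman ranking for an (n_problems, k_algorithms) matrix (lower is better).

    Returns the per-algorithm mean rank and the Friedman chi-square statistic;
    tied scores within a row receive the average of the ranks they span.
    """
    n, k = scores.shape
    ranks = np.empty_like(scores, dtype=float)
    for i, row in enumerate(scores):
        order = np.argsort(row, kind="stable")
        sorted_vals = row[order]
        j = 0
        while j < k:
            m = j
            while m + 1 < k and sorted_vals[m + 1] == sorted_vals[j]:
                m += 1
            ranks[i, order[j:m + 1]] = (j + m) / 2 + 1  # average rank for tie group
            j = m + 1
    rank_sums = ranks.sum(axis=0)
    stat = 12.0 / (n * k * (k + 1)) * np.sum(rank_sums ** 2) - 3 * n * (k + 1)
    return ranks.mean(axis=0), stat

# Hypothetical scores: rows = problems, columns = algorithms A, B, C.
scores = np.array([[1.0, 2.0, 3.0],
                   [1.0, 3.0, 2.0],
                   [1.0, 2.0, 3.0],
                   [1.0, 2.0, 3.0]])
mean_ranks, stat = friedman(scores)
print(mean_ranks, stat)  # algorithm A always wins, so its mean rank is 1.0
```

The pairwise p-values in Table 3 would then come from a Wilcoxon signed-rank test between MSSMA and each competitor on the same paired results.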
Table 4. Statistical results of a collaborative multi-UAV 3D path plan.
ID | Minimum Cost | Mean Cost | Var
MSSMA | 53.641 | 103.123 | 74.996
SMA | 127.708 | 143.431 | 106.813
WOA | 183.296 | 247.842 | 245.518
KO | 136.166 | 196.649 | 305.482
BWO | 2350.998 | 2421.506 | 1558.914
IVY | 213.893 | 265.483 | 81.260

Share and Cite

MDPI and ACS Style

Huang, D.; Tang, T.; Yan, Y. Multi-Strategy-Improvement-Based Slime Mould Algorithm. Appl. Sci. 2025, 15, 5456. https://doi.org/10.3390/app15105456