Improved Salp Swarm Algorithm with Simulated Annealing for Solving Engineering Optimization Problems

Swarm-based algorithms can successfully avoid local optima, thus achieving a smooth balance between exploration and exploitation. The salp swarm algorithm (SSA), a swarm-based algorithm inspired by the predation behavior of salps, can solve complex real-life optimization problems. However, SSA suffers from local stagnation and a slow convergence rate. This paper introduces an improved salp swarm algorithm that enhances SSA with a chaotic sequence initialization strategy and a symmetric adaptive population division. Moreover, a simulated annealing mechanism based on symmetric perturbation is introduced to enhance the algorithm's ability to escape local optima. The improved algorithm is referred to as SASSA. The CEC standard benchmark functions are used to evaluate the efficiency of SASSA, and the results demonstrate that SASSA has better global search capability. SASSA is also applied to solve engineering optimization problems. The experimental results demonstrate that the exploratory and exploitative proclivities of the proposed algorithm and its convergence patterns are vividly improved.


Introduction
The purpose of optimization is to explore all possible results in a search space and select the optimal solution according to given conditions and parameters. Optimization has been widely applied in engineering and scientific disciplines, such as chemistry [1], engineering design [2] and information systems [3]. The problems in these fields are complex in nature and difficult to optimize, which motivates the development of different meta-heuristic algorithms to find the optimal solution.
There are two kinds of meta-heuristic algorithms: algorithms based on a single solution and algorithms based on a swarm of solutions. An algorithm based on a single solution selects one candidate from the set of all possible solutions, and the selected candidate is evaluated repeatedly until the desired optimization result is achieved. The advantage of this approach is that it executes quickly because of its low complexity. However, it may get stuck in a local region, which results in failure to obtain the global optimal solution. Popular methods in this category include the hill climbing algorithm [4], tabu search [5], etc. In contrast, swarm-based algorithms consider a population of candidate solutions rather than a single one. They are divided into two categories: evolutionary algorithms and swarm intelligence algorithms. Evolutionary algorithms follow a mechanism inspired by biological evolution, including four operators (random selection, reproduction, recombination and mutation), and include the genetic algorithm [6], differential evolution (DE) [7], etc. Swarm intelligence algorithms are population-based algorithms that emulate the social behavior of various creatures in nature, such as birds, ants, gray wolves, and bees. This approach has been welcomed by researchers for its wide range of applications, ease of understanding and implementation, and ability to solve many complex real-life optimization problems. Widely used swarm intelligence algorithms include particle swarm optimization (PSO) [8], ant colony optimization (ACO) [9], the whale optimization algorithm (WOA) [10], grey wolf optimization (GWO) [11], and the artificial bee colony algorithm (ABC) [12]. Some scholars have improved such algorithms and applied them to practical optimization problems.
Sun Xingping [13] proposed an improved NSGA-III that combines multi-group coevolution and natural selection. Liu Yang [14] improved the particle swarm optimization algorithm and applied it to the mine water reuse system. Shen Yong [15] improved the JSO algorithm and applied it to solve the constraint problem.
The engineering optimization problem is a constrained optimization problem and one of the most important challenges in practice. Its main purpose is to solve real-life problems with constraints and optimize their economic indicators or various parameters. Many real-life engineering optimization problems have complex constraints: formally they merely add constraints to a functional problem, but in actual operation they are very difficult to handle. Some of these constraints are simple intervals, but more are composed of linear equations, which make the solution space very complicated. Traditional classical algorithms, such as Newton's method, the elimination method, and the constraint variable rotation method, treat dynamic problems statically and can handle these constraint problems to a certain extent. However, due to the complexity of the objective functions of many practical constrained optimization problems, these traditional algorithms often do not work well. In recent years, experimental research has found that swarm intelligence algorithms have unique advantages, so many scholars apply them to engineering optimization problems.
The salp swarm algorithm (SSA) [16] is a meta-heuristic intelligent algorithm proposed by S. Mirjalili in 2017. During iteration, leaders guide followers and move towards the food in a chain. In the process of movement, the leaders are guided by the food source (i.e., the current global optimal solution) to perform global exploration, while the followers perform thorough local exploitation, which greatly reduces the chance of getting stuck in a local region. Because of its simple structure, fast convergence and few control parameters, many scholars have studied, improved and applied it in different fields. Sayed [17] proposed an SSA based on chaos theory to address SSA's tendency to fall into local optima and converge slowly. Ibrahim [18] used the global convergence of PSO to propose a hybrid optimization algorithm based on SSA and PSO. Faris [19] replaced the average operator with crossover operators and proposed a binary SSA with crossover. Liu Jingsen [20] proposed a leader-follower adaptive SSA and applied it to engineering optimization problems. Nibedan Panda [21] proposed an SSA based on space transformation search and applied it to the training of neural networks.
In order to improve the optimization ability of SSA and extend its scope of application, this paper proposes an improved salp swarm algorithm (SASSA) based on simulated annealing (SA) [22]. First, logistic mapping is used to initialize the population to enhance the diversity of the initial population. Secondly, a symmetric adaptive division of the population is carried out to balance the exploitation and exploration abilities of the algorithm. Finally, a simulated annealing mechanism based on symmetric perturbation is introduced into the salp swarm algorithm to improve its performance. The performance of the algorithm was evaluated on benchmark functions, and the new algorithm was compared with the original salp swarm algorithm and other popular meta-heuristic algorithms. The main work is as follows:

1. We proposed an improved salp swarm algorithm based on the idea of the simulated annealing algorithm.
2. We tested the improved algorithm on benchmark functions.
3. We verified the advantages of the improved algorithm by comparing its results on the benchmark functions with those of the original salp swarm algorithm and other meta-heuristic algorithms such as GWO and WOA.
4. We applied the improved algorithm to engineering optimization problems to prove its ability and effectiveness in solving practical problems.
The remaining sections are organized as follows: Section 2 introduces the background and principle of the salp swarm algorithm; Section 3 describes the improvement process and steps of the algorithm in detail; Section 4 describes the experimental equipment, environment, benchmark functions and required parameters and gives the experimental results and statistical comparisons with other algorithms; Section 5 introduces the application of the algorithm to engineering optimization problems. The last section summarizes the conclusions of this paper and gives future research directions.

Principle of Bionics
Salps are sea creatures with transparent, barrel-shaped bodies. Their body structure is highly similar to that of jellyfish. During movement, salps generate reverse thrust by drawing water from their surroundings through their barrel-shaped bodies. The body tissues of salps are so fragile that it is difficult for them to survive in an experimental environment. Therefore, it is only in recent years that breakthroughs have been made in the study of this species, among which the most interesting is the group behavior of salps.
Salps do not gather in a loose "group" but are often connected end to end to form a "chain" that moves sequentially, as shown in Figure 1. The salp chain has a leader, which has the best judgment of the environment and stays at the head of the chain. Unlike other swarms, the leader does not directly affect the movement of the whole group but only that of the second salp immediately behind it; the second salp directly affects the third, and so on. This resembles a strict, fine-grained hierarchy: each individual is affected only by its direct leader and is never micromanaged. Therefore, the influence of the leader on the lower salps decreases sharply layer by layer, and the lower salps can easily retain their diversity rather than blindly moving towards the leader. Since the salps follow one another in succession, all salps other than the leader are collectively referred to as followers in this paper.

The Flow of SSA
The optimization procedure of the salp swarm algorithm is as follows [23,24]. First, population initialization: N is the population size of the salps and D is the spatial dimension. Food exists in the space at F = [F1, F2, ..., FD]^T. The upper and lower bounds of the search space are ub = [ub1, ub2, ..., ubD] and lb = [lb1, lb2, ..., lbD]. The position x_ij of each salp, i = 1, 2, ..., N, j = 1, 2, ..., D, is then initialized in a random manner.
The second step is to update the position of the leader. The leader is responsible for finding food and directing the actions of the entire chain. Therefore, the leader's position update follows:

x^1_j = F_j + c1((ub_j − lb_j)c2 + lb_j), c3 ≥ 0.5
x^1_j = F_j − c1((ub_j − lb_j)c2 + lb_j), c3 < 0.5 (2)

where x^1_j represents the leader position, F_j is the food position, and ub_j and lb_j are the bounds. The control parameters are c1, c2 and c3, among which c2 and c3 are random numbers within [0, 1]; c2 controls the step size and c3 controls the direction. c1 is the primary control parameter, which balances the exploration and exploitation capabilities of the algorithm during iteration. In order to make the algorithm perform a global search in the first half of the iterations and accurate exploitation in the second half, c1 follows:

c1 = 2e^(−(4l/Max_Iteration)^2) (3)

where l is the current iteration and Max_Iteration is the maximum number of iterations.
The last step is to update the positions of the followers. A follower's position depends only on its initial position, speed and acceleration during movement, which conforms to Newton's law of motion. Therefore, the moving distance R of a follower can be expressed as:

R = (1/2)at^2 + v0·t (4)

Time t is the difference in iteration counts, so t = 1; v0 is the follower's initial speed, which is 0; a is the acceleration of the follower in that iteration, calculated as a = (v_final − v0)/t. Since a follower only follows the movement of the preceding salp close to itself, v_final = (x^(i−1)_j − x^i_j)/t. With t = 1 and v0 = 0, therefore:

R = (x^(i−1)_j − x^i_j)/2 (5)

Therefore, the follower position update follows:

x^i_j = (x^i_j + x^(i−1)_j)/2 (6)

where x^i_j on the right-hand side is the position of the i-th follower in the j-th dimension before the update, and x^i_j on the left-hand side is its position after the update. The steps of the salp swarm algorithm are shown in Algorithm 1:

Algorithm 1 SSA.
begin
  Randomly initialize the population according to Equation (1).
  Calculate the fitness value of each salp and select the optimal individual as the food source location.
  while l ≤ Max_Iteration do
    for each salp i do
      if i is a leader then
        Update the position of the leader according to Equation (2).
      else
        Update the position of the follower according to Equation (6).
      end if
    end for
    Calculate the fitness value of each individual and update the food source location.
    l = l + 1.
  end while
end
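The update rules above can be sketched in code as follows. This is a minimal illustration, not the authors' implementation: the function name, parameter defaults and the equal leader/follower split are our assumptions, while the update formulas follow Equations (1)-(3) and (6) as described.

```python
import numpy as np

def ssa(obj, lb, ub, N=30, D=2, max_iter=500):
    """Minimal SSA sketch: obj is the function to minimize over [lb, ub]^D."""
    lb, ub = np.full(D, lb, float), np.full(D, ub, float)
    X = lb + np.random.rand(N, D) * (ub - lb)      # Equation (1): random init
    fit = np.array([obj(x) for x in X])
    F = X[fit.argmin()].copy()                     # food source = best salp
    f_best = fit.min()
    for l in range(1, max_iter + 1):
        c1 = 2 * np.exp(-(4 * l / max_iter) ** 2)  # Equation (3)
        for i in range(N):
            if i < N // 2:                         # leaders: Equation (2)
                c2, c3 = np.random.rand(D), np.random.rand(D)
                step = c1 * ((ub - lb) * c2 + lb)
                X[i] = np.where(c3 >= 0.5, F + step, F - step)
            else:                                  # followers: Equation (6)
                X[i] = (X[i] + X[i - 1]) / 2
            X[i] = np.clip(X[i], lb, ub)
        fit = np.array([obj(x) for x in X])
        if fit.min() < f_best:                     # update the food source
            f_best = fit.min()
            F = X[fit.argmin()].copy()
    return F, f_best
```

For example, `ssa(lambda x: float(np.sum(x ** 2)), -10, 10)` searches for the minimum of the sphere function.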

Population Initialization Based on Logistic Mapping
The core of a swarm intelligence algorithm is the continuous iteration of the population, so the initialization of the population has a direct impact on the final solution and also affects the optimization ability. The more abundant and diverse the initial population is, the easier it is to find the global optimal solution [25]. Without the help of prior knowledge, most swarm intelligence algorithms use random population initialization, which greatly affects their performance.
Chaotic sequences have the characteristics of ergodicity and randomness, so population initialization with a chaotic sequence yields better diversity. Commonly used chaotic sequences include iterative mapping, tent mapping and logistic mapping. After a comparative study, logistic mapping is used for population initialization in this paper.
The logistic mapping formula [26] is:

z_(i+1),j = p·z_(i,j)·(1 − z_(i,j)) (7)

where p is an adjustable parameter, usually set to 4; i = 1, 2, ..., N indexes the population and j = 1, 2, ..., D indexes the chaotic variables. After logistic mapping, the population initialization formula becomes:

x_(i,j) = lb_j + z_(i,j)·(ub_j − lb_j) (8)
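The chaotic initialization can be sketched as follows. The seeding range and the way the chaotic values are mapped into the search bounds are our assumptions; the logistic iteration with p = 4 follows the description above.

```python
import numpy as np

def logistic_init(N, D, lb, ub, p=4.0):
    """Chaotic population initialization via the logistic map z' = p*z*(1 - z)."""
    lb, ub = np.full(D, lb, float), np.full(D, ub, float)
    Z = np.empty((N, D))
    z = 0.01 + 0.97 * np.random.rand(D)   # seed in (0.01, 0.98), away from fixed points
    for i in range(N):
        z = p * z * (1 - z)               # logistic iteration
        Z[i] = z                          # chaotic value in (0, 1]
    return lb + Z * (ub - lb)             # map chaotic values into the bounds
```

Compared with uniform random initialization, the ergodicity of the map tends to spread the initial individuals more evenly over the search space.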

Symmetric Adaptive Population Division
In the basic SSA, the numbers of followers and leaders are each fixed at half of the salp population, which makes the algorithm's search asymmetric: in early iterations the proportion of leaders is too small, leading to insufficient global search and easy entrapment in local extrema, while in late iterations the proportion of followers is too small, leading to insufficient local search and low optimization accuracy. To address this problem, the literature [20] proposed a leader-follower adaptive adjustment strategy. Based on this strategy, this paper proposes a symmetric adaptive population division, which makes the number of leaders decrease adaptively as the number of iterations increases, while the number of followers increases adaptively. This makes the algorithm focus more on broad global exploration in the early stage and on deeper mining near the optimal value in the later stage, thus improving the optimization accuracy. The improved symmetric adaptive population division introduces a control factor ω, given by Equation (9), where l is the current iteration number, Max_Iteration is the maximum iteration number, b is the proportion coefficient used to avoid an imbalanced ratio, and k is the disturbance deviation factor; the decreasing ω value is disturbed in combination with the rand function. The adjusted number of leaders per iteration is ω·N, and the number of followers is (1 − ω)·N.
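Since Equation (9) itself is not reproduced in this text, the following sketch assumes one plausible decreasing form of ω built from the b and k parameters described above; the function name, the exact formula and the defaults are our assumptions, not the paper's.

```python
import numpy as np

def split_population(N, l, max_iter, b=0.5, k=0.1):
    """Symmetric adaptive division (sketch): omega decays with the iteration
    count and is perturbed by rand, so leaders shrink and followers grow.
    The exact Equation (9) is paper-specific; this form is an assumption."""
    omega = b * (1 - l / max_iter) + k * np.random.rand()
    omega = float(np.clip(omega, 0.0, 1.0))
    n_leaders = max(1, round(omega * N))   # leaders: omega * N (at least one)
    n_followers = N - n_leaders            # followers: (1 - omega) * N
    return n_leaders, n_followers
```

Early in the run (small l) roughly half the salps act as leaders; near the end, only a few leaders remain and most salps exploit as followers.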

Simulated Annealing Mechanism Based on Symmetric Perturbation
The simulated annealing algorithm was first proposed by Metropolis and Kirkpatrick [27] and originates from the principle of solid annealing [28].
Its core is to generate a new solution from the current solution in some way and accept it with a certain probability, so as to enhance the algorithm's ability to jump out of local optima and to keep the algorithm diverse in later iterations.
The generation of new solutions is particularly important in simulated annealing. Based on the simulated annealing algorithm, this paper introduces symmetric perturbation to generate new solutions. Symmetric perturbation maps the position of the new solution into an interval symmetric about the current optimal position. The symmetric interval is determined by the product of the current temperature and a random number mapped into the dimensional space.
The flow of the simulated annealing mechanism based on symmetric perturbation is as follows:
(1) Initialization: set the initial temperature T, the initial solution S and the maximum number of iterations Max_Iteration.
(2) At the current temperature, repeat steps (3)-(5).
(3) Perturb the current solution S to obtain a new solution S'.
(4) Calculate the fitness increment df = f(S') − f(S), where f is the evaluation function.
(5) Accept or reject S' according to the Metropolis criterion: if df < 0, accept the new solution; otherwise, accept the new solution with probability e^(−df/T).
(6) If the termination condition is satisfied, output the current solution as the optimal solution and stop the algorithm; otherwise, reduce the temperature and go back to step (2). The termination condition is usually that a number of consecutive new solutions have not been accepted or that the termination temperature has been reached.
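The steps above can be sketched as follows. The perturbation width T·r (the interval symmetric about the current solution, scaled by the temperature) and all parameter defaults are our assumptions; the acceptance rule is the Metropolis criterion from step (5).

```python
import math
import random

def simulated_annealing(f, s, T=100.0, q=0.95, T_end=1e-3, steps=50):
    """SA sketch with symmetric perturbation: candidates are drawn from an
    interval centred on the current solution, half-width proportional to T."""
    best = s
    while T > T_end:
        for _ in range(steps):
            # symmetric perturbation: s' in [s - T*r, s + T*r], r ~ U(0, 1)
            s_new = s + T * random.uniform(-1.0, 1.0)
            df = f(s_new) - f(s)              # step (4): fitness increment
            # step (5): Metropolis criterion
            if df < 0 or random.random() < math.exp(-df / T):
                s = s_new
            if f(s) < f(best):
                best = s
        T *= q                                # step (6): cool down and repeat
    return best
```

Because the perturbation interval shrinks with the temperature, the search is wide-ranging early on and fine-grained near the end of the cooling schedule.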

Improved Salp Swarm Algorithm
As mentioned above, the salp swarm algorithm suffers from slow convergence and low optimization accuracy. SASSA introduces a logistic chaotic map to initialize the population, which enriches population diversity. The symmetric adaptive population division strategy is introduced to balance the exploitation and exploration abilities of the algorithm. Finally, the simulated annealing mechanism based on symmetric perturbation is introduced to accept inferior solutions with a certain probability, combined with the hybridization (crossover) operation from the genetic algorithm. Hybridization means that the new solution produced by simulated annealing and the old solution are mixed in proportion to obtain the final new solution, S = c·S' + (1 − c)·S_old, where c is a random number between 0 and 1. The new solution not only retains the advantages of the old solution but also reduces the influence of perturbation error. The flow chart of the SASSA algorithm is shown in Figure 2.

Algorithm 2 SASSA.
begin
  Set algorithm parameters: the population size N, the problem dimension D, the maximum number of iterations Max_Iteration, the initial temperature T, and the cooling rate Q.
  Use the logistic chaotic map to initialize the population according to Equation (8).
  Calculate the fitness value of each salp and select the optimal individual as the food source location.
  while l ≤ Max_Iteration do
    for each salp i do
      if i is a leader then
        Update the position of the leader according to Equation (2).
      else
        Update the position of the follower according to Equation (6).
      end if
    end for
    Perturb the current optimal salp's position S.
    Calculate the fitness value of each individual and update the food source location.
    l = l + 1.
  end while
end
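A minimal sketch of the hybridization step, assuming the proportional mixing form described above (the paper's exact formula is not reproduced; the function name is ours):

```python
import numpy as np

def hybridize(s_old, s_new, rng=np.random):
    """Proportional hybridization (sketch): mix the SA-perturbed solution with
    the old one as c*S_new + (1 - c)*S_old, c ~ U(0, 1)."""
    c = rng.rand()
    return c * np.asarray(s_new, float) + (1 - c) * np.asarray(s_old, float)
```

Because the result is a convex combination, the hybrid solution always lies on the segment between the old and new solutions, which damps the perturbation error while keeping part of the new information.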

Complexity Analysis
According to Algorithm 2, in each iteration the population initialization, leader position update, follower position update and food source update of SASSA are all serial. With population size N, dimension D and iteration number M, the time complexity of SASSA is O(M·N·D). The time complexity of the basic SSA is the same. Therefore, the algorithm proposed in this paper is equivalent to the original algorithm in time complexity, and execution efficiency does not decrease.

Benchmark Function Experiments
In this section, we test the algorithm's performance on 21 benchmark functions and compare the results with those of other algorithms.

Benchmark Function
We used benchmark functions selected from the literature [29,30] to test the performance of the algorithm. The function equations are shown in Tables 1-3, where Dim represents the dimension of the function, Range gives the upper and lower bounds and fmin represents the optimal value. All of these test functions are to be minimized. They can be divided into unimodal benchmark functions, multimodal benchmark functions and fixed-dimension multimodal benchmark functions. Unimodal functions evaluate the exploitation ability of an algorithm. Multimodal benchmark functions test the exploration ability of an algorithm and its ability to jump out of local optima. Fixed-dimension multimodal benchmark functions evaluate the comprehensive ability of an algorithm. Therefore, selecting these functions covers different types of problems and comprehensively evaluates the performance of the optimized algorithm.

Experimental Settings
All tests were carried out under the same conditions. The population size was 30, and the maximum number of iterations was set to 500. Each benchmark function was run independently 30 times to mitigate the effect of randomness on the test results. The experiments were conducted on a computer with an Intel i7 processor and 16 GB of memory, running macOS Catalina; the test software was MATLAB R2020a. The mean and standard deviation (Std) of the fitness values were used, without loss of generality, to evaluate performance.

Results Analysis
In this section, the test results are displayed in tables and figures in an intuitive manner. The improved algorithm is compared with SSA and several recently successful meta-heuristic algorithms, namely moth flame optimization (MFO) [31], GWO and WOA. As can be seen from Table 4, on the unimodal benchmark functions f1-f7, SASSA achieved good results except on f5. On f1, f2, f3 and f4, SASSA has obvious advantages in mean value, Std and lowest value. On f7, the results of GWO are very close to those of the improved algorithm but inferior in terms of lowest value. On f6, the performance of the improved algorithm is only moderate. As for f5, the result of the improved algorithm is worse than those of GWO and WOA, but it obtains the best result in terms of lowest value.

In terms of the multimodal benchmark functions, it can be seen from Table 5 that SASSA obtains better results on f8, f9, f10 and f11. On f9 and f11, both the mean value and the Std reach 0, and the Std on f10 also reaches 0, which indicates that the algorithm is relatively stable. However, on f12 and f13, the performance of SASSA is poor.
As for the fixed-dimension multimodal benchmark functions, it can be seen from Table 6 that SASSA has obvious advantages on f18, f19, f20 and f21 and achieves good results in terms of mean value and lowest value. On f14 and f17, the performance of SASSA is poor, but on f15 and f16, where all algorithms attain the lowest value on average, the Std of SASSA is better than that of the other algorithms, which supports its better stability. Friedman test [32,33] results were obtained using the mean value of each algorithm on all 21 test functions in Tables 4-6 and are shown in Table 7. Table 8 shows the related statistical values of the Friedman test. If the chi-square statistic is greater than the critical value, the null hypothesis is rejected; p represents the probability of the null hypothesis holding. The null hypothesis here was that there is no significant difference in performance among the five algorithms considered. According to the Friedman ranking in Table 7, SASSA ranks better than the original algorithm and the other compared algorithms. Table 8 shows that the null hypothesis was rejected, so the Friedman ranking is valid. On the whole, SASSA obtained better results than SSA and the other compared algorithms.
To further verify the algorithm's capability, we extracted some convergence curves from the 21 test functions, as shown in Figure 3. According to the convergence curves, SASSA has a better convergence speed on functions F3, F4, F7, F9 and F11, while the other algorithms fall into local optima too early. On F5, although the improved algorithm does not obtain the best result, its initial convergence speed is the fastest among all algorithms. As for F1 and F2, although SASSA cannot match GWO in convergence speed at the beginning, its convergence speed improves greatly in the later stage and it explores better results. In general, the improved algorithm SASSA achieves good results on all three kinds of test functions. Even on test functions where it does not yield better solutions, it still shows convergence and stability. Therefore, the improvement of the classical salp swarm algorithm is of great significance.

Problem Description
The engineering optimization problem is a kind of constrained optimization problem, which is a very common planning problem in the science and engineering fields. A constrained optimization problem is described as follows [34]:

min f(x)
s.t. g_i(x) ≤ 0, i = 1, 2, ..., m
     h_i(x) = 0, i = m + 1, m + 2, ..., n

where the objective function f(x) and the constraint functions g_1, g_2, ..., g_m and h_(m+1), h_(m+2), ..., h_n are real-valued functions on the domain; g_i(x) ≤ 0 (i = 1, 2, ..., m) are the inequality constraints and h_i(x) = 0 (i = m + 1, m + 2, ..., n) are the equality constraints. The decision variable is x = (x_1, x_2, ..., x_n) ∈ R^n.
The core of the constrained optimization problem is to find a feasible solution in the feasible region. If f(x*) ≤ f(x) holds for every feasible solution x, then x* is the optimal solution of the constrained optimization problem under the given constraints. If the functions of the optimization problem are linear, it is a linear constrained optimization problem; otherwise it is a nonlinear constrained optimization problem. The engineering optimization problems used in this paper are all nonlinear single-objective constrained optimization problems.

Constraint Handling
The handling of constraint conditions is the key to solving constrained optimization problems. In function problems, these constraints determine the value range of the decision variables; in actual engineering optimization problems, they are the objective factors that must be met to solve the target problem. Commonly used methods for handling constraints include the rejection method, the repair method, the penalty function method, etc. Continuous-time solvers are also an effective way to deal with optimization problems with nonlinear constraints: a virtual dynamical system evolves along with the main system and estimates the optimal solution of the problem [35,36]. When dealing with engineering optimization problems with constraints, the most common and simplest method is the penalty function method [37]. Its idea is to add a "penalty term" to the objective function so that the constraints are satisfied without affecting the solution, thus transforming the constrained problem into an unconstrained optimization problem. In the penalty function formula, the objective function is f(x), the inequality penalty coefficient is k1, the equality penalty coefficient is k2, g_i(x) are the inequality constraints, h_i(x) are the equality constraints, and b1 and b2 are indicator terms. A penalty term is added to the objective function of the constrained optimization problem, and the constrained optimization problem is transformed into a general optimization problem to be solved.
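The penalty transformation can be sketched as follows. This assumes the common quadratic penalty form, with max(0, g_i)^2 for inequality constraints and h_i^2 for equality constraints; the paper's exact b1/b2 terms are not reproduced, and the function names and coefficient defaults are ours.

```python
def penalized(f, gs, hs, k1=1e6, k2=1e6):
    """Build an unconstrained objective from f and constraint lists (sketch).
    gs: inequality constraints g_i(x) <= 0; hs: equality constraints h_i(x) = 0."""
    def F(x):
        p = sum(max(0.0, g(x)) ** 2 for g in gs)   # penalize only violated g_i
        q = sum(h(x) ** 2 for h in hs)             # penalize any deviation of h_i
        return f(x) + k1 * p + k2 * q
    return F
```

Any unconstrained optimizer (such as SASSA) can then minimize the returned F directly; feasible points incur no penalty, while violations are heavily punished by the large coefficients.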

Experimental Settings
In order to verify the feasibility of SASSA, four classic engineering optimization problems were selected to test the performance of the algorithm in solving constrained optimization problems. In this paper, weight minimization of a speed reducer, the gear train design problem, optimal operation of an alkylation unit and welded beam design were selected as research objects [38]. Among them, weight minimization of a speed reducer, the gear train design problem and welded beam design are problems in the field of physical engineering design: weight minimization of a speed reducer aims to minimize weight, the gear train design problem aims to make the design better meet the requirements, and welded beam design aims to minimize production cost. Optimal operation of an alkylation unit is a problem in the field of chemical engineering, aiming to make production more efficient. The four selected problems cover the fields of physics and chemistry and involve weight, cost, efficiency, etc., which provides a good basis for evaluating the performance of the improved algorithm. In order to evaluate SASSA objectively, we selected SSA, GWO [39], DE [40], BBO [41], ACO and PSO for comparison experiments. SSA is the basic salp swarm algorithm; comparing against it allows us to comprehensively evaluate the optimization ability, the degree of improvement and the fields of application of the improved algorithm, providing a theoretical basis for the improvement strategy. GWO, which originated from the predation behavior of gray wolves, is a mature algorithm. DE is a convenient and simple algorithm whose effectiveness has long been proven. BBO is an evolutionary algorithm, and the literature shows that it performs excellently on engineering problems, so comparing against it improves the credibility of the improved algorithm. ACO is a probabilistic algorithm with a wide range of applications, and comparing against it helps to evaluate the improvement achieved by the proposed algorithm.
The environment used in this experiment is as follows: the operating system is macOS Catalina, the processor is an Intel Core i7, the memory is 16 GB, and the software is MATLAB R2020a. The population size is 30 and the maximum number of iterations is 1000. Each experiment was repeated 50 times to reduce the influence of randomness on the test results. Without loss of generality, performance was evaluated according to the mean and standard deviation of the fitness values.
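The repeated-run protocol above (50 independent runs, then mean and standard deviation of the best fitness) can be sketched as below. The paper's experiments use MATLAB and SASSA itself; here `optimize` is a deliberately simple random-search stand-in, used only to illustrate the harness, not the actual algorithm.

```python
# Evaluation harness: run an optimizer several times and report the mean
# and standard deviation of the best fitness values found.
import random
import statistics

def optimize(objective, dim, iters=1000, pop=30):
    """Placeholder random-search 'optimizer' standing in for SASSA."""
    best = float("inf")
    for _ in range(iters):
        x = [random.uniform(-5.0, 5.0) for _ in range(dim)]
        best = min(best, objective(x))
    return best

def evaluate(objective, dim, runs=50):
    """Repeat the optimization `runs` times; return (mean, std) of bests."""
    results = [optimize(objective, dim) for _ in range(runs)]
    return statistics.mean(results), statistics.stdev(results)

mean, std = evaluate(lambda x: sum(v * v for v in x), dim=2)
```

Averaging over many runs is what makes the mean/std comparison in the tables meaningful for stochastic algorithms.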

Weight Minimization of a Speed Reducer
Weight minimization of a speed reducer is a typical engineering optimization problem whose purpose is to minimize the total weight of the reducer [42]. The design of the reducer is subject to several constraints, including the surface stress of the gear teeth, bending stress, shaft deflection and shaft stress. The design variables are the gear face width (b), the gear module (m), the number of teeth on the pinion (z), the length of the first shaft between the bearings (l1), the length of the second shaft between the bearings (l2), the diameter of the first shaft (d1) and the diameter of the second shaft (d2), as shown in Figure 4. Using x1-x7 to represent these seven variables, the mathematical description of the weight minimization of a speed reducer problem is as follows: Minimize: Subject to: The experimental results are shown in Table 9. SASSA achieves the best value and the best mean, indicating that after its optimization the weight of the reducer is the smallest, but its standard deviation is slightly inferior to GWO's, indicating that its stability still needs improvement. Figure 5 shows that the iteration curves of SASSA, SSA and GWO are relatively close; combined with the data, it can be seen that the convergence accuracy of SASSA is the best. In terms of convergence speed, SASSA is clearly in the leading position. Therefore, the convergence speed and performance of SASSA are better than those of the other comparison algorithms. The convergence curve of weight minimization of a speed reducer is shown in Figure 5:
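Since the equations are not reproduced in this text, the following sketch uses the standard speed-reducer objective from the benchmark literature; treat the exact coefficients and the sample design point as assumptions rather than a quotation of the paper.

```python
# Standard speed-reducer weight objective (literature formulation),
# x = (x1..x7) = (b, m, z, l1, l2, d1, d2).

def speed_reducer_weight(x):
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2 ** 2 * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)      # shaft bores removed
            + 7.4777 * (x6 ** 3 + x7 ** 3)           # shaft weights
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))  # bearing sections

# A commonly reported near-optimal design from the literature (assumed here):
x_best = (3.5, 0.7, 17, 7.3, 7.8, 3.35, 5.29)
```

Evaluating the objective at such a design gives a weight close to the values typically reported for this benchmark (around 3000).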

Gear Train Design Problem
The gear train design problem is also a popular engineering optimization problem. Figure 6 shows the gear train design model [43]. When designing a compound gear train, the gear ratio between the drive shaft and the driven shaft should be considered. The gear ratio is defined as the ratio of the angular velocity of the output shaft to that of the input shaft, and our goal is to make the gear ratio as close as possible to 1/6.931. The number of teeth of each gear must be an integer between 12 and 60. The variables Ta, Tb, Td and Tf are the numbers of teeth of gears A, B, D and F. Using x1-x4 to represent these four variables, the mathematical description of the problem is as follows: Minimize: Subject to: 12 ≤ x_i ≤ 60, i = 1, 2, 3, 4. Table 10 shows the experimental results of the gear train design problem. As can be seen from the table, both SASSA and PSO can reach the target ratio of 1/6.931, but the mean value and variance of SASSA are the smallest, indicating that its overall performance and stability are better. Figure 7 shows the convergence curve; compared with the other algorithms, SASSA has significant advantages in both convergence accuracy and speed. The convergence curve of the gear train design problem is shown in Figure 7:
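The objective is not reproduced in the text above; the following sketch uses the standard gear-train formulation from the benchmark literature (an assumption, not a quotation of the paper): minimize the squared error between the achieved gear ratio and the target 1/6.931.

```python
# Standard gear-train objective: squared deviation of the gear ratio
# Tb*Td/(Ta*Tf) from the target ratio 1/6.931.

def gear_train_error(x):
    """x = (Ta, Tb, Td, Tf), integer tooth counts in [12, 60]."""
    ta, tb, td, tf = x
    return (1.0 / 6.931 - (tb * td) / (ta * tf)) ** 2

# A well-known best solution reported in the literature:
x_best = (49, 16, 19, 43)
```

At this tooth-count combination the error is on the order of 1e-12, which is why several algorithms in Table 10 can reach essentially the target ratio and the comparison comes down to mean and variance over repeated runs.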

Optimal Operation of Alkylation Unit
The optimal operation of an alkylation unit is a very common problem in the petroleum industry. Figure 8 shows a simplified alkylation process flow [44]. As shown in the figure, the olefin feedstock (100% butene), the pure isobutane recycle and the 100% isobutane makeup are introduced into the reactor together with the acid catalyst, and the reactor product is then passed through a fractionator, where the isobutane and the alkylate product are separated. The spent acid is also removed from the reactor. The main purpose of this problem is to increase the octane number of the olefin feedstock under acidic conditions, and the objective function is defined as the value of the alkylate product. The literature [45] transformed this problem into a constrained optimization problem with 7 variables and 14 constraints. The mathematical description is as follows: Maximize: Subject to: We first converted the maximization problem into a minimization problem to solve it. Table 11 shows that SASSA performs best on all criteria, indicating that it can maximize the alkylate product value and works well for the optimization of the alkylation process. Figure 9 shows the convergence curve of the optimal operation of the alkylation unit. It can be seen from the figure that the convergence performance of the improved algorithm is not optimal at the beginning, but in the later stage of the iteration, after a long period of local stagnation, the improved algorithm is still able to escape the current local best points and continue to explore, so that the overall optimization performance is further enhanced. This shows that the improvement strategy described earlier, aimed at the basic algorithm's tendency to fall into local stagnation in the later iterations, has played its role. The convergence curve of the optimal operation of the alkylation unit is shown in Figure 9:
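The maximization-to-minimization conversion mentioned above is simply a sign flip of the objective. In this sketch, `alkylate_value` is a hypothetical stand-in, not the paper's actual 7-variable objective.

```python
# Convert a maximization objective into a minimization one by negation,
# so that a minimizer (like SASSA) can be applied unchanged.

def negate(objective):
    """max f(x) over x  <=>  min -f(x) over x."""
    def neg(x):
        return -objective(x)
    return neg

def alkylate_value(x):
    """Placeholder objective standing in for the alkylate product value."""
    return sum(x) - 0.01 * sum(v * v for v in x)

f_min = negate(alkylate_value)
```

The optimizer then minimizes `f_min`; the maximizer of the original objective is recovered as the minimizer of the negated one, and the reported product value is the negation of the best found minimum.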

Welded Beam Design
The welded beam design problem can be described as follows: under constraints such as the shear stress, the bending stress of the beam, the buckling load on the bar, the deflection of the beam end and the boundary conditions, the optimal design variables h, l, t and b are sought to minimize the cost of manufacturing the welded beam [46], as shown in Figure 10. Using x1-x4 to represent these four variables, the mathematical description is as follows: Minimize: f(x) = 0.04811 x3 x4 (x2 + 14) + 1.10471 x1^2 x2, Subject to: g1(x) = x1 − x4 ≤ 0, g2(x) = δ(x) − δmax ≤ 0, g3(x) = P ≤ Pc(x), g4(x) = τmax ≥ τ(x), g5(x) = σ(x) − σmax ≤ 0, where: τ(x) = sqrt(τ'^2 + τ''^2 + 2τ'τ'' x2/(2R)), τ'' = MR/J, τ' = P/(√2 x1 x2). The experimental results are shown in Table 12. It can be seen that the improved algorithm obtains the best cost value among all the algorithms. Figure 11 shows that the improved algorithm has the best iteration speed, indicating that it reaches the best value fastest. The convergence curve of the welded beam design problem is shown in Figure 11. Using the data of the improved algorithm in Tables 9-12 on the four engineering optimization problems, the Friedman test ranking is obtained. As shown in Table 13, compared with other successful meta-heuristic algorithms, SASSA also achieves the best ranking on engineering optimization problems. The results in Table 14 show that the null hypothesis is rejected, so the Friedman ranking is valid.
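The cost function and the shear-stress expressions can be sketched as below. The expressions for M, R and J, and the constants P = 6000 lb and L = 14 in, follow the standard welded-beam formulation from the literature and should be treated as assumptions where the paper does not state them.

```python
import math

# Welded beam design: fabrication cost and total shear stress,
# x = (x1, x2, x3, x4) = (h, l, t, b).

P, L = 6000.0, 14.0  # applied load and beam length (standard literature values)

def cost(x):
    x1, x2, x3, x4 = x
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def shear_stress(x):
    x1, x2, x3, x4 = x
    tau_p = P / (math.sqrt(2.0) * x1 * x2)                  # primary shear tau'
    M = P * (L + x2 / 2.0)                                  # bending moment
    R = math.sqrt(x2 ** 2 / 4.0 + ((x1 + x3) / 2.0) ** 2)   # weld group radius
    J = 2.0 * (math.sqrt(2.0) * x1 * x2 *
               (x2 ** 2 / 12.0 + ((x1 + x3) / 2.0) ** 2))   # polar moment of inertia
    tau_pp = M * R / J                                      # secondary shear tau''
    return math.sqrt(tau_p ** 2 + tau_pp ** 2 +
                     2.0 * tau_p * tau_pp * x2 / (2.0 * R))
```

The constraint g4 then compares `shear_stress(x)` against the allowed maximum τmax, while `cost(x)` is the objective the algorithms in Table 12 minimize.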

Conclusions
The salp swarm algorithm is a meta-heuristic algorithm based on the predatory behavior of salps, which simulates a group of salps joined end-to-end in the form of a chain and moving in succession. The salp swarm algorithm has disadvantages such as slow convergence speed and limited optimization ability. In this paper, SASSA was constructed by combining chaotic population initialization, symmetric adaptive population division and a simulated annealing mechanism based on symmetric perturbation with the salp swarm algorithm. In order to test the ability of the algorithm, 21 benchmark functions were used to evaluate it in terms of mean value, standard deviation and best value. The results show that the improved algorithm proposed in this paper yields better results on three different types of test functions. At the same time, in order to verify the ability of the improved algorithm to solve practical problems, this paper applied it to engineering optimization problems. Weight minimization of a speed reducer, the gear train design problem, optimal operation of an alkylation unit and welded beam design were selected for the experiments, and the results were compared with SSA, GWO, DE, BBO, ACO and PSO. The experimental results prove that the algorithm proposed in this paper has better optimization ability and stability when dealing with engineering optimization problems, and that its exploratory and exploitative properties and convergence behavior are also significantly improved. The problems cover the fields of mechanical engineering and chemical engineering. This work provides directions and ideas for the improvement of the basic salp swarm algorithm, and also provides a reference for solving complex engineering optimization problems in reality.

Data Availability Statement: Data sharing is not applicable to this article.

Conflicts of Interest:
The authors declare no conflict of interest.