Article

An Improved Sparrow Search Algorithm for Global Optimization with Customization-Based Mechanism

1 School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou 341000, China
2 School of Software Engineering, Jiangxi University of Science and Technology, Nanchang 330013, China
3 Nanchang Key Laboratory of Virtual Digital Factory and Cultural Communications, Nanchang 330013, China
4 College of Mathematics and Computer Science, Zhejiang Normal University, Jinhua 321004, China
5 School of Economics and Management, Jiangxi University of Science and Technology, Ganzhou 341000, China
* Authors to whom correspondence should be addressed.
Axioms 2023, 12(8), 767; https://doi.org/10.3390/axioms12080767
Submission received: 15 June 2023 / Revised: 4 August 2023 / Accepted: 4 August 2023 / Published: 7 August 2023

Abstract: To solve the problems of the original sparrow search algorithm’s poor ability to jump out of local extrema and its insufficient global optimization ability, this paper simulates the different learning forms of students in each ranking segment of a class and proposes a customized learning sparrow search algorithm (CLSSA) based on multi-role thinking. Firstly, cube chaos mapping is introduced in the initialization stage to increase the inherent randomness and rationality of the distribution. Then, an improved spiral predation mechanism is proposed to acquire better exploitation. Moreover, a customized learning strategy is designed after the follower phase to balance exploration and exploitation. A boundary processing mechanism based on the full utilization of important location information is used to improve the rationality of boundary processing. The CLSSA is tested on 21 benchmark optimization problems, and its robustness is verified on 12 high-dimensional functions. In addition, its comprehensive search capability is further proven on the CEC2017 test functions, and an intuitive ranking is given by Friedman’s statistical results. Finally, three benchmark engineering optimization problems are utilized to verify the effectiveness of the CLSSA in solving practical problems. The comparative analysis shows that the CLSSA can significantly improve the quality of the solution and can be considered an excellent SSA variant.

1. Introduction

Due to the continuous development of industry and technology, various fields have advanced rapidly, accompanied by the emergence of many complex optimization problems. The key to solving these problems is to find, within the given search space, input variables that satisfy the conditions and yield the optimal feasible solution. These optimization problems generally have three main elements: decision variables, objective functions, and constraints. The decision variables are the unknowns to be determined, the objective function is the quantity to be maximized or minimized, and the constraints are typically divided into equality constraints and inequality constraints. The general form of this type of problem is:
\min f(x), \quad x = (x_1, x_2, \ldots, x_n) \in R \quad (1)
\text{s.t.} \quad c_i(x) \le 0, \ i = 1, 2, \ldots, m; \qquad c_i(x) = 0, \ i = m+1, \ldots, m+l \quad (2)
where Equation (1) represents the minimization objective function. R represents the entire search space. Equation (2) represents the constraints, including m inequalities and l equations.
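To make the three elements concrete, the following minimal Python sketch encodes a problem of the form of Equations (1) and (2). The sphere objective and the two constraints are illustrative placeholders, not taken from this paper.

import numpy as np

# Minimization problem in the form of Equations (1)-(2): one objective,
# one inequality constraint c1(x) <= 0, and one equality constraint c2(x) = 0.
def f(x):
    return float(np.sum(x ** 2))      # objective to be minimized

def c1(x):
    return x[0] + x[1] - 1.0          # inequality constraint: c1(x) <= 0

def c2(x):
    return x[0] - x[1]                # equality constraint: c2(x) = 0

def is_feasible(x, tol=1e-9):
    return c1(x) <= tol and abs(c2(x)) <= tol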
The traditional approach to solving such problems is based on mathematical analysis. Researchers tend to choose solution methods flexibly according to the characteristics of the problems. For unconstrained optimization problems with a differentiable objective function, gradient and Newton methods are good choices [1]. For constrained optimization problems, methods based on the Lagrange multiplier are appropriate. Moreover, the differentiability and directional differentiability of the objective function [2], whether the constraints are linear, and whether the feasible domain is continuous are all important references for choosing the corresponding algorithm. Traditional methods thus have considerable limitations: they do not generalize well to broader classes of problems and can no longer meet, in either time or accuracy, the efficiency demanded by technological development. Driven by the urgent need to solve real problems, optimization methods different from traditional exact algorithms have drawn much attention from the academic community. Considering that the solution space of most real-world problems is infinite, or that evaluating all solutions exhaustively is too costly, it makes sense to obtain a good solution within an acceptable time [3]. Based on computational intelligence mechanisms, researchers have explored multiple perspectives and pioneered a series of feasible methods to tackle the technical challenges of optimization, called metaheuristic algorithms or intelligent optimization algorithms. Since this type of method imposes no special requirements on the objective function and is not limited to specific problems, it has gradually become a hotspot in optimization research.
Exploring the sources of inspiration for such algorithms, the existing metaheuristics can be grouped into five categories: evolution-based [4,5], physics-based [6,7,8], social-based [9,10], swarm-based [11,12,13,14], and those based on other laws of nature [15,16,17]. The fourth category is the best known in research and has given birth to the largest number of algorithms [18]. Swarm-based metaheuristics are inspired by foraging [19,20], hunting [21,22], and various other behaviors. As research deepened, this category of algorithms came to be called swarm intelligence optimization algorithms. It is worth clarifying that the Ant Colony System (ACS) [11], proposed in 1991, and the Particle Swarm Optimization algorithm (PSO) [12,13], introduced in 1995, are the two creations that led the research boom in this direction. In the last decade, new swarm intelligence algorithms have emerged, with researchers mimicking the behavior of various animals for modeling and naming the proposed algorithms after them. The latest ones include the Lion Swarm Optimization algorithm (LSO) [23] in 2018, the Butterfly Optimization Algorithm (BOA) [24] and Harris Hawks Optimization (HHO) [25] in 2019, and the Chimp Optimization Algorithm (CHOA) [26] in 2020. Compared with traditional optimization methods, these swarm-based algorithms have the advantages of fewer parameters, shorter running times, and simpler experimental operation. For today’s nonlinear and multi-modal optimization problems, swarm intelligence algorithms present excellent optimization ability and feasibility.
Although the expectations of optimization problems are essentially the same, different algorithms are only capable of showing a strong performance in a fraction of optimization problems. Therefore, new optimization algorithms need to be continuously proposed and applied to different sets of problems. Through extensive research and experimental analysis, valuable ideas are proposed to eventually find commonalities in optimization. This is what the No Free Lunch Theorem has taught us [27].
In 2020, Xue et al. [28] proposed the Sparrow Search Algorithm (SSA), a novel swarm-based creation. This algorithm is easy to implement, has good self-organization ability, and demonstrates strong global exploration and local exploitation over a complete predation process. However, its excessive search speed makes the algorithm prone to converging too quickly and stagnating while seeking the optimal solution, which not only costs the population its diversity but also substantially harms convergence.
Researchers have made many attempts to solve these problems. Lv et al. [29] combined tent chaotic sequences with a Gaussian mutation strategy to propose an improved CSSA, in which chaotic perturbation of local individuals helps the algorithm escape local optima. Wang et al. [30] proposed more than one learning strategy to update the best and worst position information, respectively, and fused an improved longitudinal crossover mechanism into the follower update phase to further balance the search capability of the algorithm. Yan et al. [31] subdivided the follower phase into a global search stage and a local search stage, adding a variable spiral factor and an improved iterative search strategy to each, and verified the effect of the improvements using the variation in the number of individuals outside the boundary. Gad et al. [32] incorporated a 3RA strategy and a local search algorithm into the original SSA to address its lack of proper stimulation of the search within the feasible region, which causes significant stagnation during exploitation; a comparison with competing algorithms on the classification feature selection problem demonstrated the superiority of the improvement. Yang et al. [33] utilized good point set initialization and then combined a game predatory mechanism and a suicide mechanism into the SSA to improve its optimization performance, and improved the existing diversity index evaluation to effectively avoid the defects caused by invalid search.
In addition to improvements to the underlying algorithm, validation of the algorithm in specific applications has received increasing attention. Zhou et al. [34] incorporated crossover and mutation operations into the scout update phase of the SSA and combined a series of improvements to the control-segment phase values so that the algorithm performs better in noisy natural environments; in their experiments, the newly proposed strategy proved superior to the wavefront shaping focusing algorithm in different noisy environments by representing each sparrow and its fitness value as a phase mask and the corresponding enhancement, respectively. Wang et al. [35] constructed a chaotic SSA based on Bernoulli mapping, Cauchy mutation, and dynamic adaptive weighting strategies that reduced the operating cost of microgrid cluster systems. Zhu et al. [36] utilized an adaptive SSA to optimize the model parameters of a proton exchange membrane fuel cell (PEMFC), minimizing the sum of squared errors (SSE) between empirical and computed stack voltages; the improved ASSA was experimentally shown to be highly efficient at optimizing the performance. Tian et al. [37] combined the improved im-SSA with sequential quadratic programming to reduce system fuel cost and, based on it, built a hybrid generation system contributing to the electricity supply of remote areas. Wu et al. [38] adopted sinusoidal mapping, hyperparameter adaptive tuning, and mutation strategies to improve the SSA; the number of hidden-layer nodes of a stochastic configuration network was generated according to these strategies, and the parameters of the fast stochastic configuration network were optimized to improve classification performance. Fan et al. [39] organically combined the SSA and PSO, designing a new method that exploits the advantages of both models and applying it successfully to the hyperparameter optimization of neural networks; the method avoids the data overflow caused by the SSA and effectively reduces the minimum error. In addition, Zhang et al. [40] improved the SSA for optimal trajectory solving for robots to ensure smooth paths and lower time overhead, Chen et al. [41] improved the SSA to implement image segmentation, and Yin et al. [42] applied the ISSA to DV-hop localization and obtained satisfactory localization accuracy.
Although the SSA was proposed only recently, the number of publications taking the SSA as their research object has grown explosively and continues to grow steadily [43]. From the existing studies, it is easy to see that the improved algorithms perform well only on some of the tested functions, and an overly fast convergence rate often leads to premature loss of population diversity. In the basic SSA, the population is initialized with a random distribution, which leads to problems such as an irrational distribution, reduced diversity, and a certain blindness in the individuals’ search. Secondly, the original algorithm has limited optimization ability, and its convergence accuracy needs to be improved. Further, in the original follower update strategy, having a poorly positioned sparrow jump directly away from its current position fails to exploit the information around that position. Finally, the original boundary processing mechanism is too simple, taking no account of the direction and step length of the position update, and thus lacks rationality.
In view of the above improvement strategies and reflections on existing optimization algorithms, this paper proposes the CLSSA, based on the idea of customization, for optimization in the global context; it helps remedy defects such as insufficient optimization precision and easily falling into local optima. The points of innovation can be summarized as follows.
  • By utilizing cube mapping to initialize the population, the inherent randomness and irregular orderliness of chaos are exploited to make the dispersion of the initial population more reasonable.
  • By introducing the adaptive spiral predation mechanism to change the predation mechanism of followers, better exploitation is accomplished.
  • For position updating of sparrows with different search abilities in the whole population, a novel customized learning strategy is proposed. The combination of this strategy makes full use of essential positional information and achieves a balance between exploration and exploitation.
  • In view of the different abilities and division of labor of the three roles, a novel boundary processing mechanism is proposed to improve the rationality of boundary processing, which effectively avoids the accumulation of the population on the boundary, thus increasing the population diversity.
  • The feasibility of the CLSSA in engineering optimization is verified with three classical discrete engineering optimization problems.
The rest of this paper is organized as follows. This first section has introduced swarm intelligence algorithms, the current research status of the SSA, and the research background of this paper. Section 2 introduces the original SSA. Section 3 presents the innovations of this paper and describes the CLSSA in detail. Section 4 sets up multiple groups of experiments to verify the optimization performance and robustness of the CLSSA. Section 5 applies the CLSSA to three classical discrete engineering optimization problems, and the obtained data further demonstrate the feasibility and effectiveness of the proposed algorithm. Finally, a concise conclusion is given in Section 6.

2. Sparrow Search Algorithm

The SSA, as a novel swarm-based algorithm, simulates the mechanisms of teamwork, information exchange, and anti-predation in sparrow populations during foraging. The bionic principle divides the whole population into three roles: discoverer, follower, and scouter. The discoverer, as the best-positioned individual in the current population, is responsible for leading the entire group in foraging, while the remaining sparrows, called followers, follow the discoverer. They always compete with the discoverer for ownership of the food resources. In other words, the roles of discoverer and follower can be interchanged while the overall ratio remains the same [30]. In particular, the discoverers search more widely than the followers, whose search is relatively limited and localized [44]. In addition to these two roles, a third role, the scouter, is assigned to a fixed percentage of the population with the responsibility of giving the alarm when natural enemies appear or danger arises. To evaluate an individual’s position and ability, the fitness value is introduced as the basis of evaluation: the better the fitness value, the stronger the individual’s foraging ability. The above process is divided into three main position-updating steps.
When there is no danger, the discoverer leads the population to search widely. Conversely, when danger appears and the alarm value exceeds the safety threshold (ST), the strategy is to quickly approach the safe position. The updated formula is as follows.
x_{i,j}^{t+1} = \begin{cases} x_{i,j}^{t} \cdot \exp\left( \dfrac{-i}{\alpha \cdot T} \right), & R_2 < ST \\ x_{i,j}^{t} + Q \cdot L, & R_2 \ge ST \end{cases} \quad (3)
where x_{i,j}^t denotes the position; the superscript t denotes the iteration number, the subscript i the individual index, and j the dimension. When the superscript becomes t+1, the update is complete. T denotes the maximum number of iterations. \alpha is a uniform random number in (0, 1]. Q is a random number obeying the standard normal distribution. L is a 1 \times d matrix of ones. The alarm value R_2 \in [0, 1] and the safety threshold ST \in [0.5, 1].
For the followers, the lower-ranked ones leave the current location and search farther afield for food, while the sparrows in the top positions forage around the best location found by the discoverer. The mechanism can be described as:
x_{i,j}^{t+1} = \begin{cases} Q \cdot \exp\left( \dfrac{x_{w,j}^{t} - x_{i,j}^{t}}{i^2} \right), & i > N/2 \\ x_{P}^{t+1} + \left| x_{i,j}^{t} - x_{P}^{t+1} \right| \cdot A^{+} \cdot L, & \text{otherwise} \end{cases} \quad (4)
where x_w and x_P denote the current global worst position and the current global optimum in the j-th dimension, respectively. A is a 1 \times d matrix whose elements are randomly assigned -1 or +1, and A^{+} satisfies A^{+} = A^{T}(AA^{T})^{-1}. The lower-ranked followers correspond to the case i > N/2, while the others are updated in the second way.
The scout mechanism is a distinctive component of the SSA, with scouts usually accounting for 10–20% of the population. Scouts positioned at the periphery of the population move toward the optimal position when they encounter danger, whereas scouts in the middle of the population, upon sensing danger, move closer to the others to reduce their own risk of being attacked. The scout position is updated as follows.
x_{i,j}^{t+1} = \begin{cases} x_{i,j}^{t} + \beta \cdot \left| x_{i,j}^{t} - x_{b,j}^{t} \right|, & f_i > f_g \\ x_{i,j}^{t} + K \cdot \dfrac{\left| x_{i,j}^{t} - x_{w,j}^{t} \right|}{(f_i - f_w) + \varepsilon}, & f_i = f_g \end{cases} \quad (5)
where \beta is a random number with the same distribution as Q, and K is a random number within [-1, 1]. f_i, f_g, and f_w denote three important fitness values: the current sparrow’s fitness, the global best fitness, and the global worst fitness, respectively. \varepsilon is a small constant that avoids division by zero.
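For reference, the following Python sketch strings the three update rules of Equations (3)–(5) into a single iteration. It is a minimal reading of the formulas above, not the authors’ implementation; the discoverer and scout proportions PD and SD, the threshold ST, and the simplification of A^{+} \cdot L to A/d (valid because AA^{T} = d for a \pm 1 row vector) are assumptions.

import numpy as np

def ssa_step(X, fit, t, T, ST=0.8, PD=0.2, SD=0.1, lb=-10.0, ub=10.0):
    N, d = X.shape
    order = np.argsort(fit)                    # minimization: best first
    X, fit = X[order].copy(), fit[order].copy()
    n_prod = max(1, int(PD * N))               # number of discoverers

    # Discoverer update, Equation (3).
    R2 = np.random.rand()
    for i in range(n_prod):
        if R2 < ST:
            alpha = np.random.rand() + 1e-12
            X[i] = X[i] * np.exp(-(i + 1) / (alpha * T))
        else:
            X[i] = X[i] + np.random.randn() * np.ones(d)

    # Follower update, Equation (4).
    xp, xw = X[0].copy(), X[-1].copy()
    for i in range(n_prod, N):
        if i + 1 > N / 2:                      # worse half: jump elsewhere
            X[i] = np.random.randn() * np.exp((xw - X[i]) / (i + 1) ** 2)
        else:                                  # better half: forage near best
            A = np.random.choice([-1.0, 1.0], size=d)
            X[i] = xp + np.abs(X[i] - xp) * A / d

    # Scout update, Equation (5), applied to a random SD fraction.
    fg, fw = fit[0], fit[-1]
    for i in np.random.choice(N, max(1, int(SD * N)), replace=False):
        if fit[i] > fg:
            X[i] = X[i] + np.random.randn() * np.abs(X[i] - X[0])
        else:
            K = np.random.uniform(-1.0, 1.0)
            X[i] = X[i] + K * np.abs(X[i] - X[-1]) / (fit[i] - fw + 1e-50)

    # Simple clipping here; Section 3.4 proposes a better boundary repair.
    return np.clip(X, lb, ub)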

3. CLSSA

3.1. Cube Chaos Mapping Initialization Population

In the original SSA, as in the majority of swarm intelligence algorithms, the rand() function is used to generate random numbers between 0 and 1, which are then mapped to the desired interval to obtain the initial population locations. The initial positions obtained by this method are random, but they suffer from local aggregation and uneven distribution, which brings slow convergence and premature loss of diversity to the subsequent search. In light of the good global ergodicity and randomness of chaos, chaotic sequences have been employed in many metaheuristic algorithms [45], either to replace random variables or to initialize populations [46]. Since the sparrow search algorithm was proposed, tent mapping [29], logistic mapping [41], and ICMIC mapping [30] have been employed by researchers to mitigate the poor initialization of sparrow populations, and the resulting gains are reflected in the experimental data.
Cube chaos mapping (cube for short) is a one-dimensional self-mapping that is more chaotic than the commonly used finite-fold mappings and more uniformly distributed than logistic mapping [47]. In this paper, cube is chosen to replace the pseudo-random number generator. The initial positions of the population are generated iteratively until the predetermined population size is reached [45]. The necessary steps are as follows, with a code sketch given after the list.
  • A d-dimensional vector y = (y_1, y_2, \ldots, y_d) is randomly generated, where each dimension satisfies y_i \in [-1, 1], i = 1, 2, \ldots, d, and y is used as the first individual position.
  • A new d-dimensional vector is generated using Equation (6).
    y_{i+1} = 4y_i^3 - 3y_i \quad (6)
  • The values of Equation (6) are brought into Equation (7) to obtain the values of each dimension of the individual.
    x_i = lb + (ub - lb) \cdot \dfrac{y_{i+1} + 1}{2} \quad (7)
    where ub and lb denote the upper and lower bounds of the search interval, respectively.
  • The individual position is obtained as:
    x = (x_1, x_2, \ldots, x_d) \quad (8)
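A minimal Python sketch of this initialization is given below; the assumption that each new individual is produced by continuing the same chaotic sequence, and the search domain [-100, 100], are illustrative choices.

import numpy as np

def cube_init(pop_size, dim, lb, ub):
    # Step 1: random seed vector y in [-1, 1]^dim (the first individual).
    y = np.random.uniform(-1.0, 1.0, dim)
    X = np.empty((pop_size, dim))
    for k in range(pop_size):
        # Equation (7): map the chaotic values from [-1, 1] to [lb, ub].
        X[k] = lb + (ub - lb) * (y + 1.0) / 2.0
        # Equation (6): cube chaotic map, which keeps y inside [-1, 1].
        y = 4.0 * y ** 3 - 3.0 * y
    return X

# Example: 100 sparrows in two dimensions, as in the Figure 1 comparison.
pop = cube_init(100, 2, -100.0, 100.0)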
Figure 1 shows the distribution of sparrow populations in two-dimensional space after cube initialization and random initialization, respectively, where each dot represents an individual sparrow. To ensure fairness, both methods use a population size of 100 and the same test function. It can be observed in Figure 1a that the overall dispersion performs well and most individuals do not aggregate. In contrast, as seen in Figure 1b, the distribution after random initialization contains a larger number of individuals with overlapping positions (marked by red ovals). In addition, after dividing the two-dimensional space into four equal areas, it is easy to see that the number of individuals in the lower-right area of Figure 1b (marked by a green box) is significantly smaller than in the other areas; this does not occur in Figure 1a. In summary, cube initialization helps increase the diversity and rationality of the population distribution during the initialization phase. In Section 4.1, the several types of chaos mentioned in the first paragraph of this section are compared to further demonstrate the superiority of cube initialization through the relevant data.

3.2. Adaptive Spiral Predation

In the original SSA, poorly positioned followers increase their foraging odds by jumping directly away from the current area, which clearly wastes the information near the current location. Meanwhile, given that the multi-example learning in customized learning can already achieve escape from poorer locations, the search method of the poorer followers is altered in this paper to increase the contribution of this part of the population. After locating prey, a whale pod forms a bubble net and spirals inward, tightening the encirclement to complete the hunt. In the WOA [48], Mirjalili used this idea of spiral predation to describe the process and obtained superior search results. Combining this idea with the original strategy yields the following equations.
a = 1 + \dfrac{t}{T}, \qquad l = (a - 1) \times rand + 1 \quad (9)
x_{i,j}^{t+1} = \cos(l \pi) \times \exp\left( \dfrac{x_{w}^{t} - x_{i,j}^{t}}{i^2} \right), \quad i > \dfrac{N}{2} \quad (10)
where a is the adaptive coefficient, l is the parameter of spiral control, and the other parameters are the same as in Section 2.
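A Python sketch of this update, applied only to the worse-ranked half of the followers, might look as follows; the 0-based ranking and the use of the current worst individual as x_w are implementation assumptions.

import numpy as np

def spiral_follower_update(X, fit, t, T):
    N, d = X.shape
    xw = X[np.argmax(fit)]                      # current worst position
    a = 1.0 + t / T                             # adaptive coefficient, Eq. (9)
    for i in range(N // 2, N):                  # worse-ranked followers
        l = (a - 1.0) * np.random.rand() + 1.0  # spiral control parameter
        # Equation (10): spiral move referenced to the worst position.
        X[i] = np.cos(l * np.pi) * np.exp((xw - X[i]) / (i + 1) ** 2)
    return X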

3.3. Customized Learning

A learning mechanism is a method of information transfer proposed to simulate the knowledge transfer method in people’s daily life. Predetermined position or velocity information is assigned to other individuals or vectors, and thus, a transformation is performed on that individual or vector. The most well-known learning mechanism is opposition learning [49], followed by general opposition-based learning [50], competitive learning [51], lens learning [52], and so on. As the study progressed, the researchers found that a single learning mechanism could ignore the differences among individuals in the population to some extent. For individuals ranked at different stages in the population, using different strategies for updating can achieve higher convergence.
For the above problems, new combinatorial mechanisms have been proposed, such as Ouyang et al., who proposed simultaneous updating of optimal and worst positions [53]. Zhang et al. proposed a three-learning strategy [54], which reduced the running time and obtained better results. Xia et al. [55] chose different learning situations according to the size of the three solutions to obtain excellent solutions. Inspired by these articles, a customized learning strategy based on the student tutoring mechanism in school classes is proposed in this paper, which gives differentiated management and guidance for different types of students.
Usually, students in a class have different levels of learning ability, most intuitively reflected in test scores. The students who rank in the top 10% on each exam are called Elite and are considered to have the best mastery of the material and the best learning skills; their excellent grades make them the natural objects of learning. On the contrary, the bottom 10% of the class, called Learners, are considered to have the weakest learning ability and need supervision and support. Students ranked between the top 10% and the median are often considered to have a good attitude toward learning and to manage their study tasks well; teachers trust their self-learning ability. Finally, students in the 50–90% range are considered to have high potential for improvement that their grades fail to show, owing to poor learning methods and other reasons; these students receive the most attention from their teachers and are called Potential.
By combining the descriptions above, the entire class can be divided into four groups according to the ranking of their grades after each test, as shown in Figure 2. In addition, a novel customization-based learning strategy is proposed in this paper. For all students except Elite, corresponding study plans are developed separately to help them progress and achieve the effect of improving their grades.

3.3.1. Elite–Learner Paired Learning

Learners, ranked in the bottom 10% of the class, are assigned one-on-one help from the Elites, ranked in the top 10%, as a measure to supervise their learning. Research has shown that pairing by rank maximizes the effect of the help [56]. The principle is that two groups with equal numbers of students are each ranked by test score. The Elite group is referred to as ES, with its students noted as x_i^E; the Learner group is referred to as LS, with its students noted as x_i^L. As shown in Figure 3, both groups have M/2 individuals and satisfy:
x_i^E \in ES \ \text{and} \ x_i^L \in LS, \quad f(x_i^E) < f(x_i^L) \quad (11)
f(x_i^E) < f(x_{i+1}^E) \ \text{and} \ f(x_i^L) < f(x_{i+1}^L), \quad i = 1, 2, \ldots, M/2 \quad (12)
x_1^L \leftrightarrow x_1^E, \quad \ldots, \quad x_{M/2}^L \leftrightarrow x_{M/2}^E \quad (13)
In the specific Elite–Learner paired learning strategy, each poor student consults the elite individual of the same rank in the elite group, as follows.
x_i^{L'} = x_i^L + \lambda \left( x_i^E - x_i^L \right) \quad (14)
where x_i^{L'} denotes the position obtained after learning and \lambda denotes a random number with the same distribution as Q above.
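A Python sketch of this pairing, assuming the Elite and Learner groups are the top and bottom 10% of the population sorted by fitness, is:

import numpy as np

def elite_learner_pairing(X, fit):
    order = np.argsort(fit)             # minimization: best first
    m = max(1, len(X) // 10)            # 10% group size (assumption)
    elites = order[:m]                  # ES: best m individuals, ranked
    learners = order[-m:]               # LS: worst m individuals, ranked
    for e, l in zip(elites, learners):  # rank-to-rank pairing, Eq. (13)
        lam = np.random.randn()         # lambda: same distribution as Q
        X[l] = X[l] + lam * (X[e] - X[l])   # Equation (14)
    return X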

3.3.2. Selected Learning

Selected students, ranked in the 10–50% range, are considered to have excellent self-learning ability, and no compulsory help is given. They may choose whether to consult the best student according to their own judgment, and after consulting, they decide whether to keep the learning result according to the greedy principle. This type of learning is called Selected learning. To simulate this choice, a selection threshold S and a random number R obeying the standard normal distribution are introduced. If the random number exceeds the threshold, the learning process is executed; otherwise, it is not. Analogously, in the sparrow population, x denotes the individual position, and the position is updated as follows.
x_i^{t'} = \begin{cases} \omega_i x_i^t + \omega_b x_b^t, & S < R \\ x_i^t, & S \ge R \end{cases} \quad (15)
where x_i^t denotes the current individual position, x_b^t denotes the current global best position, and \omega_i and \omega_b denote the weight coefficients of the current individual position and the optimal position, respectively.
\omega_i = \dfrac{f(i)}{f(i) + f(b)}, \qquad \omega_b = \dfrac{f(b)}{f(i) + f(b)} \quad (16)
where f ( i ) and f ( b ) denote the fitness value of the current individual and the current global optimum, respectively.
Dynamically adjusting the weight coefficients according to the fitness values is closer to the actual situation. At the same time, it prevents the current global optimal position from dominating the update, which would otherwise draw the whole population toward the current optimum and risk premature convergence. In addition, the greedy principle is applied after the position update; this simulates how these students judge the result after consulting excellent students and emphasizes that students at this rank have better self-learning ability and self-awareness.
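Under the assumption that the threshold S is a fixed constant (S = 0.5 here) and that f is the objective function, a minimal sketch of Selected learning with the greedy check is:

import numpy as np

def selected_learning(X, fit, f, S=0.5):
    order = np.argsort(fit)
    xb, fb = X[order[0]], fit[order[0]]       # current global best
    n = len(X)
    for i in order[n // 10: n // 2]:          # the 10-50% ranked group
        if np.random.randn() > S:             # R > S: consult the best
            wi = fit[i] / (fit[i] + fb + 1e-50)   # Equation (16)
            wb = fb / (fit[i] + fb + 1e-50)
            trial = wi * X[i] + wb * xb           # Equation (15)
            ft = f(trial)
            if ft < fit[i]:                   # greedy principle: keep if better
                X[i], fit[i] = trial, ft
    return X, fit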

3.3.3. Multi-Example Learning

For students ranked in the 50–90% range, who have greater potential for improvement, it is a good choice to progress by combining elite help with self-directed consultation of students better than themselves. The specific strategy is to find one random role model in each of the Elite and Selected groups to emulate, and to improve by combining one’s own study methods with those of the highest-performing student. This kind of learning, which combines learning from several better-performing students with one’s own efforts, is called multi-example learning. Applied to the sparrow population, a center-of-mass variation strategy is introduced: a random elite individual x_m^E(t), a random selected individual x_n^S(t), the sparrow’s own position x_i(t), and the current optimal position x_b(t) are combined to form a new position x_i'(t).
x_i'(t) = rand \cdot \left( \dfrac{x_m^E(t) + x_n^S(t) + x_i(t)}{3} - x_b(t) \right) + x_b(t) \quad (17)
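The following Python sketch applies Equation (17) to the 50–90% ranked group; the group boundaries follow the description above, while the 0-based index arithmetic and a population large enough for all groups to be non-empty are implementation assumptions.

import numpy as np

def multi_example_learning(X, fit):
    n = len(X)                                 # assumes n >= 10
    order = np.argsort(fit)
    xb = X[order[0]]                           # current best position
    elite = order[: max(1, n // 10)]           # Elite: top 10%
    selected = order[n // 10: n // 2]          # Selected: 10-50%
    for i in order[n // 2: int(0.9 * n)]:      # the Potential group
        xe = X[np.random.choice(elite)]        # random Elite example
        xs = X[np.random.choice(selected)]     # random Selected example
        centre = (xe + xs + X[i]) / 3.0        # center of mass
        X[i] = np.random.rand() * (centre - xb) + xb   # Equation (17)
    return X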
Combining the above customized learning strategies into the location update of the sparrow population, the students in the class are analogized to the sparrows, academic performance to the fitness value of a location, a change in academic performance to a location update, and each test to an iteration. In contrast with traditional learning strategies, the customized learning strategy generates personalized learning methods according to the situation of each part of the population. Just as a teacher cannot dogmatically use the same teaching method for students with different learning abilities, the multiple strategies in the sparrow population contribute to a progressive search for the entire population while retaining good positions.

3.4. Boundary Processing Mechanism

The boundary processing strategy is one of the important methods for preserving population diversity: individuals beyond the boundaries are redefined so that they return to the predefined search space. The original SSA adopts a uniform boundary treatment, clamping individuals above the maximum boundary to the upper bound and, similarly, individuals below the minimum boundary to the lower bound.
There is no doubt that such a uniform and simple treatment mechanism will cause the phenomenon of local aggregation on the search boundary. Although preserving the population size to some extent, it also has the drawback of losing diversity. At the same time, the difficulty of the next optimization will be increased, which will affect the expected results. Experimental evidence of this point will be given in Section 4. In view of the variability of the three stages in the SSA, this paper proposes a novel multi-strategy boundary processing method, which makes full use of the targeted processing corresponding to the purpose of different stages, respectively.
In the discoverer stage, the whole population should be committed to finding new food, and the current optimum is potentially the most suitable foraging site. However, the initial stage of foraging is accompanied by uncertainty about the optimal location, so the boundary processing of individuals should retain a certain degree of randomness. Similarly, after the scouts are updated, the next iteration follows immediately; the more dispersed the population at that point, the higher the probability that the optimal solution can be reached after the corresponding stage’s strategy is carried out. A boundary processing method with strong randomness gives the population a chance to jump out of the local optimum. The boundary processing method for updated discoverer and scout positions is given below.
x_i' = ub - (ub - lb) \cdot rand, \quad \text{if } x_i > ub \ \text{or} \ x_i < lb \quad (18)
In the follower phase, competition is the most intense because this part contains the largest number of sparrows. As a result, the number of individuals potentially beyond the boundary is the highest of all three phases [31]. Throughout the SSA, the core purpose of the discoverer update phase is to find a superior global position, while the follower phase randomly wanders around the previously found position and attempts to exploit it in depth. In the original follower mechanism, the well-adapted followers have already moved to the vicinity of the superior global position, ensuring the search of the potentially optimal region. Therefore, the treatment of individuals beyond the boundaries in this phase should also be developed around the current optimal position.
Considering the original SSA, the update operation in the follower phase produces two cases, whose one-dimensional schematics are shown in Figure 4: in Figure 4a, the individual exceeds the upper boundary after the update, while in Figure 4b, the updated follower falls below the lower boundary. To bring the processed individuals back within the established boundaries, the important position information (the current global optimum x_b; the positions of the follower before and after updating according to Equations (4), (9), and (10), denoted x_i and x_i'; and the lower and upper bounds of the function, lb and ub) can be fully utilized. At the same time, considering that the position update is directional, the following equations are proposed for the two boundary cases so as to keep the same direction as the original follower update (as shown by the arrows in Figure 4).
The boundary processing update method for exceeding the upper boundary is:
x_i'' = x_b + (ub - x_b) \cdot \dfrac{x_i' - ub}{x_i' - x_i} \quad (19)
The boundary processing update method for exceeding the lower boundary is:
x i = x b l b x b · x i l b x i x i
where (lb - x_b) and (ub - x_b) control the unit distance of the step, while the subsequent fraction controls the number of units. Together, the two parts control the step of the position update and guarantee that the processed position lies within the preset range.
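A Python sketch of the full boundary mechanism, with Equation (18) for the discoverer/scout phases and Equations (19)–(20) for the follower phase, might read as follows; the final clip is kept purely as a numerical safeguard and is not part of the mechanism.

import numpy as np

def random_boundary_repair(x, lb, ub):
    # Equation (18): randomized repair after discoverer/scout updates.
    x = x.copy()
    out = (x > ub) | (x < lb)
    x[out] = ub - (ub - lb) * np.random.rand(np.count_nonzero(out))
    return x

def follower_boundary_repair(x_new, x_old, xb, lb, ub):
    # Equations (19)-(20): repair that reuses the best position xb and the
    # pre-/post-update follower positions x_old (x_i) and x_new (x_i').
    x = x_new.copy()
    step = x_new - x_old
    step[step == 0] = 1e-50                    # avoid division by zero
    over, under = x_new > ub, x_new < lb
    x[over] = xb[over] + (ub - xb[over]) * (x_new[over] - ub) / step[over]
    x[under] = xb[under] - (lb - xb[under]) * (x_new[under] - lb) / step[under]
    return np.clip(x, lb, ub)                  # numerical safeguard only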

3.5. CLSSA Flow

Based on the description in the previous subsections of this chapter, the flow chart of the CLSSA is shown in Figure 5.

3.6. Time Complexity Analysis

According to the introduction of the SSA in Section 2, the time complexity of the SSA is O(P × M × D), where P, M, and D denote the population size, the upper limit of iterations, and the dimension, respectively. The CLSSA can be analyzed as follows. First, both algorithms share the same basic structure, and the three important evaluation indexes are unaltered. Second, using cube to initialize the population does not increase the time complexity. Third, the follower phase changes the update method for the poorer followers but does not alter the number or dimension of individuals to be updated, so it does not increase the time complexity. Moreover, customized learning is employed after the follower update, which adds a sorting step and the corresponding learning mechanisms; since this involves updating the whole population, it adds a term of order O(P × M × D). Finally, the boundary processing mechanism proposed in this paper only changes the way individuals are processed and does not change the number of times it is invoked. In conclusion, the time complexity of the CLSSA remains O(P × M × D).

4. Performance Analysis

In an effort to verify the rationality, optimization capability, robustness, and other characteristics of the improved CLSSA, the following set of experiments is designed. All experiments are performed on a 64-bit Windows 10 system with 16 GB of memory and an Intel(R) Core(TM) i5-9300H CPU @ 2.40 GHz, using the MATLAB R2019a simulation platform. The optimal data are shown in bold.

4.1. Initialization Strategy Selection Test

It was mentioned in Section 3.1 that cube mapping is applied as a better chaotic mapping method for the optimization of algorithm parameters and the initialization of populations. In this subsection, cube mapping, the random initialization used in the SSA, the tent mapping proposed in the literature [29], the logistic mapping used in the literature [41], and the ICMIC mapping used in IHSSA [30] are each combined with the improvements in this paper. To verify the advancement of cube, three important indicators are selected as references: the mean (Avg), the standard deviation (Std), and the optimal value (Best), obtained by testing on Generalized Penalized Function No. 01 and De Jong Function No. 5. The theoretical optimum of Generalized Penalized Function No. 01 is 0, and that of De Jong Function No. 5 is 0.998. The population size was set to 100, and the other experimental parameters were set identically. Considering the chance involved in a single experiment, 150 independent runs were conducted for each, and the average values of each metric are tabulated in Table 1.
The algorithm incorporating cube initialization reaps better search results on the six metrics over the two test functions. For Generalized Penalized Function No. 01, cube leads in all three metrics, while for De Jong, on top of the fact that all five variants can find the optimal value, cube improves the Avg and Std significantly and halves the Std compared with the other variants. In summary, cube achieves the highest optimization accuracy, an average value closer to the theoretical optimum, and a smaller standard deviation, that is, the most stable optimization ability. Therefore, cube is chosen as part of the CLSSA in this paper.

4.2. Comparison of Contributions by Strategy

To explore the contribution of each part of the CLSSA’s improvement strategies, the SSA with cube initialization is noted as ISSA-1, the SSA combined with the adaptive spiral predation strategy as ISSA-2, the SSA combined with customized learning as ISSA-3, and the SSA combined with the boundary processing mechanism as ISSA-4. Additionally, the combination of ISSA-1 and ISSA-2 is denoted as ISSA-5, the combination of ISSA-1 and ISSA-3 as ISSA-6, and the combination of ISSA-2 and ISSA-3 as ISSA-7. The algorithm combining ISSA-1, ISSA-2, and ISSA-3 is denoted as ISSA-8. The 21 functions in the dataset first proposed by Yao [57] in 1999 were selected as the benchmark test functions. Among these functions, F1–F7 are high-dimensional unimodal functions, F9–F13 are high-dimensional multi-modal functions, and F14–F23 are low-dimensional complex functions. The Avg, Std, and optimal values of each algorithm after testing are recorded in Table 2.
From Table 2, it can be seen that the 10 algorithms do not differ significantly on the six functions F1, F9, F10, F11, F16, and F19. For the other functions, the SSA performs better on F15 and F21, while ISSA-3 and ISSA-8 have advantages on F18 and F21, respectively, and ISSA-2, ISSA-7, and ISSA-8 perform best on F22. The CLSSA, however, maintains an absolute advantage on the six functions F2, F3, F4, F7, F14, and F23, and it also finds better optimal values on the five functions F12, F13, F21, F22, and F23. On F5, F6, F12, F14, F15, F18, and F23, the algorithms combining multiple strategies outperform the ISSAs combined with a single strategy, whereas on F4, the single strategies are better overall. Comparing the data of ISSA-2 and the SSA, ISSA-2 demonstrates more stable optimization while finding better optimal values; especially for the single-peak functions F3, F4, and F5 and the multi-peak function F12, the advantage amounts to several orders of magnitude. It can be seen that the improved spiral predation mechanism gives the SSA model better exploitation. Comparing the two pairs, ISSA-3 versus the SSA and ISSA-8 versus ISSA-5, ISSA-3 is more effective than the SSA on F2, F4, F6, F14, and F18 and worse on F3 and F13, while ISSA-8 performs better than ISSA-5 on F2, F4, F7, F14, and F18 and worse on F3 and F13. The common point of these two comparisons is that ISSA-3 and ISSA-8 incorporate the additional customized learning strategy on top of the SSA and ISSA-5, respectively. We can therefore say that the model shows stronger optimization ability on both single-peak and multi-peak functions when combined with the customized learning strategy, which not only retains the strong local search ability of the SSA model but also increases its global search ability to a certain extent. In other words, the customized learning strategy plays an effective role in balancing the exploration and exploitation of the whole model.
In terms of the overall merit-seeking ability, each strategy improvement has a certain improvement value. However, the optimization effect can be maximized only when the proposed strategies are combined according to the logical structure shown in the CLSSA.

4.3. Benchmark Function Test

To verify the optimization performance of the CLSSA, two sets of experiments are set up in this section to compare it with the basic SSA; the classical algorithms PSO [12,13] and GWO [21] and their variants TACPSO [58], AGPSO3 [59], and IGWO [60]; and the SSA variants IHSSA [30], ESSA [61], and CSSA [29]. For details of the parameter settings, please refer to the original publications cited above.

4.3.1. Comparison with the SSA

To compare how the CLSSA performs compared to the basic SSA, 21 basis functions in Section 4.2 are selected in this subsection. In addition, to verify the robustness of the algorithm, 12 high-dimensional test functions are selected and experimented on dimensions 30 and 100, respectively. The experimental data of the high-dimensional test functions in dimensions 30 and 100 are summarized in Table 3, and the comparison data of the other low-dimensional functions are summarized in Table 4.
As can be observed from Table 3, for the high-dimensional functions, the CLSSA dominates across the board, finding the theoretical optimum for F1 and F3 every time; for F5, F6, and F7, it shows only a slight disadvantage in three of the entries. For F9–F11, both algorithms succeed in the optimization. When D = 30, the CLSSA improves on the SSA by several orders of magnitude for the single-peaked functions F2–F6, and by even more on F12 and F13. When the dimension is increased to 100, it exhibits similar advantages. As shown above, the improved CLSSA has good robustness.
As can be noticed in Table 4, the CLSSA is superior across the board and has a clear advantage on F14 and F18. For F14, the CLSSA shows better Avg and Std based on the optimal values found by both algorithms. For F18, the mean value of the SSA seeking is 3.9, while the CLSSA finds the theoretical optimal value of 3 in all 30 experiments, which demonstrates its superior seeking capacity.
In summary, the CLSSA is able to exhibit better finding ability both on high-dimensional functions and low-dimensional functions, i.e., the CLSSA has good robustness. For unimodal functions F1–F7, the CLSSA exhibits higher optimization accuracy, which shows that the adaptive spiral predation strategy has a strong exploitation ability. For multi-modal functions F9–F13, the CLSSA achieves multiple orders of magnitude improvement in indicators. It can be seen that the addition of customized learning strategies generally boosts the quality of the solution. The balance between exploration and exploitation is improved in the completion of iterations, achieving an advance in global search capabilities [62]. In addition, in low-dimensional complex functions, the SSA already exhibits better capabilities, and the CLSSA can still perform better in them. It can be seen in this study that combining multiple strategies can improve global optimization ability while ensuring the diversity of groups.

4.3.2. Comparison with Classical Algorithms and Variants

In this subsection, eight basis functions are selected to test the optimization ability of the CLSSA. Among them, F1 and F2 are high-dimensional single-peak functions, F3 and F4 are high-dimensional multi-peak functions, and F5–F8 are low-dimensional multi-peak functions. The theoretical optimum of these eight functions is -9.66015 for F5 and 0 for the rest. Table 5 shows the experimental results against the other nine algorithms. To see the convergence speed more intuitively, the convergence curves of the 10 algorithms on F1–F8 are plotted in Figure 6.
From Table 5, the CLSSA has the highest accuracy in finding the optimum on all eight functions. For the four functions F1, F5, F7, and F8, the CLSSA finds the theoretical optimum, and it is the only algorithm that finds the optimal solution of F5, although its average value there is worse. In addition, the CLSSA needs more iterations than the CSSA to find the optimum on F1 and F7, which is related to the CLSSA being designed to guard against potential premature convergence rather than greedily retaining the current optimum. On the remaining four functions, F2, F3, F4, and F6, although no algorithm finds the theoretical optimum, the CLSSA achieves solutions several orders of magnitude better than those of the other algorithms. There is no doubt that introducing the spiral strategy in the follower stage to improve exploitation, combined with the customized learning strategy to balance exploration and exploitation, has a significant effect on the optimization capability.
In addition, to verify whether the CLSSA is significantly different from the other nine algorithms, Table 6 summarizes the p-values of the Wilcoxon rank sum test at a significance level of 5%. It is obviously known that the difference is not significant when p > 0.05 , and the opposite is considered a significant difference between the two algorithms. NaN indicates that there is only a negligible gap between the two subjects.
It can be seen in Table 6 that the CLSSA is significantly different from both the classical algorithm and its variants. Compared with the SSA and its variant algorithms, NaN exists only on F1, F7, and F8, and the gap is large compared to the rest of the functions. As shown above, the CLSSA has a uniqueness that cannot be ignored.

4.4. CEC2017

To further verify the generality of the improved algorithm, the CEC2017 test set [63] was selected to test six algorithms: the SSA, LSSA [53], GPSSA [33], CLSSA_T [64], FA-CL [65] (https://whuph.github.io/pubs.html), and the CLSSA. F2, which is known to be defective, is excluded. The dimensionality is set to 30, the population size to 100, and the number of evaluations to 10,000 × the number of dimensions. The parameter settings of each algorithm follow the original articles. Each algorithm was run 30 times independently and compared on five statistics: Best, Worst, Median, Mean, and Std. The specific results are shown in Table 7. To show more intuitively the optimization ability of each algorithm over the whole CEC2017 test set, the comprehensive ranking of the five statistics according to the Friedman test over the 30 independent experiments is recorded in Figure 7 (accurate to two decimal places).
From Figure 7, it can be seen that the CLSSA performs best overall, ranking first with an average rank of 2.79, followed by CLSSA_T (2.89) and the SSA (3.02), while GPSSA (5.34) takes the last position. From Table 7, the CLSSA clearly performs best on F1, F3, F11, F16, F25, F27, and F29, and the optimal values it finds on the 16 functions F1, F3, F4, F10, F11, F13, F15, F16, F19, F22, F25, F26, F27, F28, F29, and F30 are the closest to the theoretical optima. Moreover, the SSA performs best on F3, F12, F28, and F30, the LSSA on F15, F18, and F19, and the GPSSA only on F27. The other two algorithms also show decent results on some functions, but their overall performance is inferior to the CLSSA due to low rankings on others. In general, the CLSSA in this paper has the best stability and the best optimization ability, without a doubt.

5. Application to Engineering Optimization Problems

Engineering optimization problems are a class of classical optimization problems, and usually, the solving process can be described as a process of finding an optimal solution under special conditions. Bayzidi et al. [66] tested 15 classical cases and selected newly developed models for comparison to verify the potential of the proposed SNS algorithm to solve optimization problems. Inspired by this, in this subsection, three classical problems are selected to test the ability of the CLSSA to be applied in engineering. In order to show fairness, the parameters, such as the population size and the number of iterations, are set the same for both the SSA and CLSSA. Each case is run independently 30 times, and the average result of the 30 experiments is presented as an indicator to compare with the results of other algorithms in the literature [66].

5.1. Gear Train Design Problem

The gear train design problem is a classical unconstrained discrete design problem whose objective is to minimize the gear ratio error. Sandgren et al. [67] defined the gear ratio as the ratio of the angular velocity of the output shaft to that of the input shaft, (x_2 x_3)/(x_1 x_4). With the tooth counts of the four gears as variables, the following mathematical model is designed.
Minimize:
f(X) = \left( \dfrac{1}{6.931} - \dfrac{x_2 x_3}{x_1 x_4} \right)^2
Variable range:
x_1, x_2, x_3, x_4 \in \{12, 13, 14, \ldots, 60\}
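Because the search space is finite (49 values per gear, 49^4 combinations in total), the model can even be checked by exhaustive enumeration. The following brute-force Python sketch is included only to make the model concrete; it is not the method compared in Table 8.

from itertools import product

def gear_ratio_error(x1, x2, x3, x4):
    # Objective above: squared deviation of the gear ratio from 1/6.931.
    return (1.0 / 6.931 - (x2 * x3) / (x1 * x4)) ** 2

# Exhaustive search over all tooth counts in {12, ..., 60}.
best = min(product(range(12, 61), repeat=4), key=lambda x: gear_ratio_error(*x))
print(best, gear_ratio_error(*best))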
Eleven algorithms, including the CLSSA, were tested on this problem. Table 8 shows that the CLSSA designs the same solution as GA and GWO and finds a better solution, while the optimal solutions found by PSO and the SSA are slightly inferior to those of the other nine algorithms.

5.2. Pressure Vessel Design Problem

This is a classical mechanical optimization problem whose fundamental purpose is to minimize the total design cost of a pressure vessel. The whole vessel can be seen as a cylindrical body capped with hemispherical heads. Four important parameters are of interest: the shell thickness Ts, the head thickness Th, the inner radius R, and the length L of the cylindrical section, denoted in order as x1, x2, x3, and x4. According to the design requirements, conditions such as x1 and x2 being integer multiples of 0.0625 should be satisfied [68]. In summary, we obtain the mathematical model:
Minimize:
f(X) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3
Subject to:
g_1(X) = -x_1 + 0.0193 x_3 \le 0, \quad g_2(X) = -x_2 + 0.00954 x_3 \le 0, \quad g_3(X) = -\pi x_3^2 x_4 - \dfrac{4}{3} \pi x_3^3 + 1296000 \le 0, \quad g_4(X) = x_4 - 240 \le 0
Variable range:
x_1, x_2 \in \{1 \times 0.0625, 2 \times 0.0625, \ldots, 1600 \times 0.0625\}, \quad 10 \le x_3, x_4 \le 200
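To make the model concrete, a small Python sketch of the objective, the feasibility check, and the 0.0625-multiple discretization is given below; it encodes the formulas above under the usual g_i(X) <= 0 convention and is not the authors’ solver.

import numpy as np

def pv_cost(x):
    # Objective: total cost of material, forming, and welding.
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def pv_feasible(x):
    # The four constraints g1-g4, each required to be <= 0.
    x1, x2, x3, x4 = x
    g = [
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -np.pi * x3 ** 2 * x4 - (4.0 / 3.0) * np.pi * x3 ** 3 + 1296000.0,
        x4 - 240.0,
    ]
    return all(gi <= 0.0 for gi in g)

def pv_round(x):
    # Snap the two thicknesses to the nearest multiple of 0.0625.
    x = np.asarray(x, dtype=float).copy()
    x[:2] = np.round(x[:2] / 0.0625) * 0.0625
    return x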
This problem has been used to test the performance of various metaheuristic algorithms, and Table 9 summarizes the statistical results of the CLSSA and the other algorithms. It can be seen that the function value of 6.05 × 10^3 is lower than those of the other algorithms, which indicates the effectiveness of the CLSSA on this problem.

5.3. Corrugated Bulkhead Design

The classic problem of designing a corrugated bulkhead is a key issue for chemical tankers, bulk carriers, and product oil carriers [69]. This problem aims to minimize the weight of the corrugated bulkheads in cruise ships [68]. The four important parameters are the width x1, the depth x2, the length x3, and the thickness of the bulkhead x4. The mathematical expressions are as follows. It should be emphasized that, given the practical meaning of engineering optimization problems, the case x_1 = x_2 = x_3 = 0 cannot occur, i.e., the denominator of f(X) never vanishes [70].
Minimize:
f(X) = \dfrac{5.885 x_4 (x_1 + x_3)}{x_1 + \sqrt{x_3^2 - x_2^2}}
Subject to:
g_1(X) = -x_2 x_4 \left( 0.4 x_1 + \dfrac{x_3}{6} \right) + 8.94 \left( x_1 + \sqrt{x_3^2 - x_2^2} \right) \le 0, \quad g_2(X) = -x_2^2 x_4 \left( 0.2 x_1 + \dfrac{x_3}{12} \right) + 2.2 \left( 8.94 \left( x_1 + \sqrt{x_3^2 - x_2^2} \right) \right)^{4/3} \le 0, \quad g_3(X) = -x_4 + 0.0156 x_1 + 0.15 \le 0, \quad g_4(X) = -x_4 + 0.0156 x_3 + 0.15 \le 0, \quad g_5(X) = -x_4 + 1.05 \le 0, \quad g_6(X) = -x_3 + x_2 \le 0
Variable range:
0 \le x_1, x_2, x_3 \le 100, \quad 0 \le x_4 \le 5
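As with the previous problems, a brief Python sketch of the weight function and constraint list is given for concreteness. The small epsilon guarding the denominator reflects the remark above, and the abs() under the square root, which protects infeasible trial points where x_2 > x_3, is an implementation assumption.

import numpy as np

def bulkhead_weight(x, eps=1e-12):
    # Objective: weight of the corrugated bulkhead.
    x1, x2, x3, x4 = x
    return 5.885 * x4 * (x1 + x3) / (x1 + np.sqrt(abs(x3 ** 2 - x2 ** 2)) + eps)

def bulkhead_constraints(x):
    # The six constraints g1-g6; the design is feasible when all are <= 0.
    x1, x2, x3, x4 = x
    root = np.sqrt(abs(x3 ** 2 - x2 ** 2))
    return [
        -x2 * x4 * (0.4 * x1 + x3 / 6.0) + 8.94 * (x1 + root),
        -x2 ** 2 * x4 * (0.2 * x1 + x3 / 12.0)
            + 2.2 * (8.94 * (x1 + root)) ** (4.0 / 3.0),
        -x4 + 0.0156 * x1 + 0.15,
        -x4 + 0.0156 * x3 + 0.15,
        -x4 + 1.05,
        -x3 + x2,
    ]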
The results of the CLSSA and other optimization algorithms are tabulated in Table 10. It can be seen that the best results are obtained by CS, and the other algorithms do not differ much. The minimum weight that can be found using the CLSSA has a small reduction compared to that under the SSA.

6. Conclusions

In this paper, an improved sparrow search algorithm with a customization-based mechanism (CLSSA) is proposed to remedy the weak global exploration ability of the SSA. It improves the distribution of the initial population by initializing it with cube chaos mapping and increases the utilization of important location information by updating the individuals of each part differentially through the three learning mechanisms of customized learning. In addition, an adaptive spiral predation mechanism inspired by the WOA is formulated to enhance the exploitation of the followers and avoid premature convergence to the current optimum. In view of the irrationality of the boundary processing mechanism in the original SSA, an improved mechanism is proposed that makes the updated position of each stage more reasonable and retains the original update direction. In the experimental part, the initialization strategy selection test and the ablation experiment demonstrate the contribution of each improved strategy. Comparisons with 10 and 6 classic algorithms and variants on the benchmark functions and the CEC2017 test set, respectively, show that the CLSSA is superior and balances exploration and exploitation. Finally, the CLSSA is applied to engineering optimization problems, which demonstrates its effectiveness compared with other algorithms. However, certain limitations remain to be addressed [71], such as the lack of a running time comparison.
Further work will focus on the quality of the global optimization results. Different techniques will be explored to improve stability and computational efficiency [72]. Meanwhile, applying the algorithm to more practical problems, such as image segmentation, optimal path planning, and feature selection, is the focus of the team's next work.

Author Contributions

Conceptualization, Z.W. and X.H.; methodology, Z.W.; software, Z.W. and D.Z.; validation, Z.W., X.H. and D.Z.; investigation, Z.W. and D.Z.; data curation, Z.W.; writing—original draft preparation, Z.W. and K.H.; writing—review and editing, Z.W., X.H., D.Z., C.Z. and K.H.; visualization, Z.W. and D.Z.; supervision, X.H. and C.Z.; project administration, X.H.; funding acquisition, X.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (Grant No. 2020YFB1713700).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Rao, S.S. Optimization Theory and Application, 2nd ed.; Halsted Press: New Delhi, India, 1984; ISBN 978-0470274835.
2. Dem'yanov, V.F.; Vasil'ev, V. Nondifferentiable Optimization; Springer: New York, NY, USA, 2012; ISBN 978-1461382706.
3. Akyol, S.; Alatas, B. Plant intelligence based metaheuristic optimization algorithms. Artif. Intell. Rev. 2017, 47, 417–462.
4. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73.
5. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
6. Glover, F. Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 1986, 13, 533–549.
7. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680.
8. Erol, O.K.; Eksin, I. A new optimization method: Big bang–big crunch. Adv. Eng. Softw. 2006, 37, 106–111.
9. Shi, Y. Brain storm optimization algorithm. In Proceedings of the Advances in Swarm Intelligence: Second International Conference, ICSI 2011, Chongqing, China, 12–15 June 2011; pp. 303–309.
10. Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4661–4667.
11. Dorigo, M.; Gambardella, L.M. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1997, 1, 53–66.
12. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43.
13. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
14. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
15. Mora-Gutiérrez, R.A.; Ramírez-Rodríguez, J.; Rincón-García, E.A. An optimization algorithm inspired by musical composition. Artif. Intell. Rev. 2014, 41, 301–315.
16. Alatas, B. ACROA: Artificial chemical reaction optimization algorithm for global optimization. Expert Syst. Appl. 2011, 38, 13170–13180.
17. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, M.A.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
18. Cai, Z.; Gao, S.; Yang, X.; Yang, G.; Cheng, S.; Shi, Y. Alternate search pattern-based brain storm optimization. Knowl.-Based Syst. 2022, 238, 107896.
19. Passino, K.M. Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Syst. Mag. 2002, 22, 52–67.
20. Yang, X.S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74.
21. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
22. Wang, Z.; Xie, H. Wireless Sensor Network Deployment of 3D Surface Based on Enhanced Grey Wolf Optimizer. IEEE Access 2020, 8, 57229–57251.
23. Liu, S.J.; Yang, Y.; Zhou, Y.Q. A Swarm Intelligence Algorithm—Lion Swarm Optimization. Pattern Recognit. Artif. Intell. 2018, 31, 431–441.
24. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734.
25. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
26. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338.
27. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
28. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34.
29. Lv, X.; Mu, X.D.; Zhang, J.; Zhen, W. Chaos Sparrow Search Optimization Algorithm. J. Beijing Univ. Aeronaut. Astronaut. 2021, 47, 1712–1720.
30. Wang, Z.; Huang, X.; Zhu, D. A Multistrategy-Integrated Learning Sparrow Search Algorithm and Optimization of Engineering Problems. Comput. Intell. Neurosci. 2022, 2022, 2475460.
31. Yan, S.; Yang, P.; Zhu, D.; Zheng, W.; Wu, F. Improved Sparrow Search Algorithm Based on Iterative Local Search. Comput. Intell. Neurosci. 2021, 2021, 6860503.
32. Gad, A.G.; Sallam, K.M.; Chakrabortty, R.K.; Ryan, M.J. An improved binary sparrow search algorithm for feature selection in data classification. Neural Comput. Appl. 2022, 34, 15705–15752.
33. Yang, P.; Yan, S.; Zhu, D.; Wang, J.; Wu, F.; Yan, Z.; Yan, S. Improved sparrow algorithm based on game predatory mechanism and suicide mechanism. Comput. Intell. Neurosci. 2022, 2022, 4925416.
34. Zhou, S.; Xie, H.; Zhang, C.; Hua, Y.; Zhang, W.; Chen, Q.; Gu, G.; Sui, X. Wavefront-shaping focusing based on a modified sparrow search algorithm. Optik 2021, 244, 167516.
35. Wang, P.; Zhang, Y.; Yang, H. Research on economic optimization of microgrid cluster based on chaos sparrow search algorithm. Comput. Intell. Neurosci. 2021, 2021, 5556780.
36. Zhu, Y.; Yousefi, N. Optimal parameter identification of PEMFC stacks using Adaptive Sparrow Search Algorithm. Int. J. Hydrog. Energy 2021, 46, 9541–9552.
37. Tian, H.; Wang, K.; Yu, B.; Jermsittiparsert, K.; Song, C. Hybrid improved Sparrow Search Algorithm and sequential quadratic programming for solving the cost minimization of a hybrid photovoltaic, diesel generator, and battery energy storage system. Energy Sources Part A Recovery Util. Environ. Eff. 2021, in press.
38. Wu, H.; Zhang, A.; Han, Y.; Nan, J.; Li, K. Fast stochastic configuration network based on an improved sparrow search algorithm for fire flame recognition. Knowl.-Based Syst. 2022, 245, 108626.
39. Fan, Y.; Zhang, Y.; Guo, B.; Luo, X.; Peng, Q.; Jin, Z. A hybrid sparrow search algorithm of the hyperparameter optimization in deep learning. Mathematics 2022, 10, 3019.
40. Zhang, X.; Xiao, F.; Tong, X.; Yun, J.; Liu, Y.; Sun, Y.; Tao, B.; Kong, J.; Xu, M.; Chen, B. Time optimal trajectory planning based on improved sparrow search algorithm. Front. Bioeng. Biotechnol. 2022, 10, 852408.
41. Chen, G.; Lin, D.; Chen, F.; Chen, X. Image segmentation based on logistic regression sparrow algorithm. J. Beijing Univ. Aeronaut. Astronaut. 2021, 1, 14.
42. Lei, Y.; De, G.; Fei, L. Improved sparrow search algorithm based DV-Hop localization in WSN. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 2240–2244.
43. Yue, Y.; Cao, L.; Lu, D.; Hu, Z.; Xu, M.; Wang, S.; Li, B. Review and empirical analysis of sparrow search algorithm. Artif. Intell. Rev. 2023, in press.
44. Ouyang, C.; Qiu, Y.; Zhu, D. Adaptive spiral flying sparrow search algorithm. Sci. Program. 2021, 2021, 6505253.
45. Alatas, B. Chaotic bee colony algorithms for global numerical optimization. Expert Syst. Appl. 2010, 37, 5682–5687.
46. Gao, S.; Yu, Y.; Wang, Y.; Wang, J.; Cheng, J.; Zhou, M. Chaotic local search-based differential evolution algorithms for optimization. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 3954–3967.
47. Gui, C.Z. Application of Chaotic Sequences in Optimization Theory. Ph.D. Thesis, Nanjing University of Science and Technology, Nanjing, China, 2006.
48. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
49. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06), Vienna, Austria, 28–30 November 2005; pp. 695–701.
50. Wang, H.; Wu, Z.; Rahnamayan, S.; Liu, Y.; Ventresca, M. Enhancing particle swarm optimization using generalized opposition-based learning. Inf. Sci. 2011, 181, 4699–4714.
51. Cheng, R.; Jin, Y. A competitive swarm optimizer for large scale optimization. IEEE Trans. Cybern. 2014, 45, 191–204.
52. Ouyang, C.; Zhu, D.; Qiu, Y. Lens learning sparrow search algorithm. Math. Probl. Eng. 2021, 2021, 9935090.
53. Ouyang, C.; Zhu, D.; Wang, F. A learning sparrow search algorithm. Comput. Intell. Neurosci. 2021, 2021, 3946958.
54. Zhang, X.; Lin, Q. Three-learning strategy particle swarm algorithm for global optimization problems. Inf. Sci. 2022, 593, 289–313.
55. Xia, X.; Gui, L.; Yu, F.; Wu, H.; Wei, B.; Zhang, Y.-L.; Zhan, Z.-H. Triple archives particle swarm optimization. IEEE Trans. Cybern. 2019, 50, 4862–4875.
56. Deng, H.; Peng, L.; Zhang, H.; Yang, B.; Chen, Z. Ranking-based biased learning swarm optimizer for large-scale optimization. Inf. Sci. 2019, 493, 120–137.
57. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102.
58. Ziyu, T.; Dingxue, Z. A modified particle swarm optimization with an adaptive acceleration coefficient. In Proceedings of the 2009 Asia-Pacific Conference on Information Processing, Wuhan, China, 28–29 November 2009; pp. 330–332.
59. Mirjalili, S.; Lewis, A.; Sadiq, A.S. Autonomous particles groups for particle swarm optimization. Arab. J. Sci. Eng. 2014, 39, 4683–4697.
60. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917.
61. Wang, Z.D.; Wang, J.B.; Li, D.H. Study on WSN optimization coverage of an enhanced sparrow search algorithm. Chin. J. Sens. Actuators 2021, 34, 818–828.
62. Bingol, H.; Alatas, B. Chaos based optics inspired optimization algorithms as global solution search approach. Chaos Solitons Fractals 2020, 141, 110434.
63. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; Technical Report; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Republic of Korea; Nanyang Technological University: Singapore, 2017.
64. Tang, A.; Zhou, H.; Han, T.; Xie, L. A chaos sparrow search algorithm with logarithmic spiral and adaptive step for engineering problems. Comput. Model. Eng. Sci. 2022, 130, 331–364.
65. Peng, H.; Zhu, W.; Deng, C.; Wu, Z. Enhancing firefly algorithm with courtship learning. Inf. Sci. 2021, 543, 18–42.
66. Bayzidi, H.; Talatahari, S.; Saraee, M.; Lamarche, C.P. Social network search for solving engineering optimization problems. Comput. Intell. Neurosci. 2021, 2021, 8548639.
67. Sandgren, E. Nonlinear integer and discrete programming in mechanical design optimization. J. Mech. Des. 1990, 111, 223–229.
68. Yadav, A.; Kumar, N. Artificial electric field algorithm for engineering optimization problems. Expert Syst. Appl. 2020, 149, 113308.
69. Ravindran, A.; Reklaitis, G.V.; Ragsdell, K.M. Engineering Optimization: Methods and Applications, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2006; ISBN 978-0471558149.
70. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35.
71. Ewees, A.A.; Al-qaness, M.A.A.; Abualigah, L.; Oliva, D.; Algamal, Z.Y.; Anter, A.M.; Ali Ibrahim, R.; Ghoniem, R.M.; Abd Elaziz, M. Boosting arithmetic optimization algorithm with genetic algorithm operators for feature selection: Case study on cox proportional hazards model. Mathematics 2021, 9, 2321.
72. Anter, A.M.; Hassenian, A.E.; Oliva, D. An improved fast fuzzy c-means using crow search optimization algorithm for crop identification in agricultural. Expert Syst. Appl. 2019, 118, 340–354.
Figure 1. Cube mapping initialization.
Figure 2. Student classification diagram.
Figure 3. Elite–Learner paired learning.
Figure 4. Schematic diagram of the follower stage boundary processing.
Figure 5. CLSSA flow chart.
Figure 6. Convergence diagram of each algorithm.
Figure 7. Friedman statistical results.
Table 1. Comparisons of cube and other four initializations.

| Function | Index | SSA | Tent | Logistic | ICMIC | Cube |
|---|---|---|---|---|---|---|
| Generalized Penalized Function No. 01 | Avg | 2.76 × 10−15 | 3.06 × 10−15 | 3.08 × 10−15 | 2.51 × 10−15 | 2.45 × 10−15 |
| | Std | 8.25 × 10−15 | 9.25 × 10−15 | 8.36 × 10−15 | 7.66 × 10−15 | 5.23 × 10−15 |
| | Best | 6.20 × 10−21 | 1.93 × 10−20 | 1.27 × 10−20 | 9.89 × 10−21 | 1.44 × 10−21 |
| De Jong Function N.5 | Avg | 1.743 | 2.314 | 1.926 | 1.887 | 1.34 |
| | Std | 2.339 | 3.235 | 2.555 | 2.455 | 1.275 |
| | Best | 0.998 | 0.998 | 0.998 | 0.998 | 0.998 |

Note: The bolded data represents the best result for each row.
Table 2. Contribution test for each strategy.

| Function | Algorithm | Avg | Std | Best | Function | Algorithm | Avg | Std | Best |
|---|---|---|---|---|---|---|---|---|---|
| F1 | SSA | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | F2 | SSA | 4.36 × 10−192 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-1 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | | ISSA-1 | 1.76 × 10−201 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-2 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | | ISSA-2 | 3.26 × 10−193 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-3 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | | ISSA-3 | 1.59 × 10−199 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-4 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | | ISSA-4 | 2.73 × 10−194 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-5 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | | ISSA-5 | 1.73 × 10−214 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-6 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | | ISSA-6 | 2.85 × 10−204 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-7 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | | ISSA-7 | 4.68 × 10−202 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-8 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | | ISSA-8 | 1.06 × 10−216 | 0.00 × 10+00 | 0.00 × 10+00 |
| | CLSSA | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | | CLSSA | 8.83 × 10−229 | 0.00 × 10+00 | 0.00 × 10+00 |
| F3 | SSA | 6.96 × 10−279 | 0.00 × 10+00 | 0.00 × 10+00 | F4 | SSA | 2.55 × 10−199 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-1 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | | ISSA-1 | 3.86 × 10−188 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-2 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | | ISSA-2 | 4.14 × 10−222 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-3 | 3.46 × 10−271 | 0.00 × 10+00 | 0.00 × 10+00 | | ISSA-3 | 1.09 × 10−255 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-4 | 2.39 × 10−271 | 0.00 × 10+00 | 0.00 × 10+00 | | ISSA-4 | 1.34 × 10−261 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-5 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | | ISSA-5 | 1.19 × 10−194 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-6 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | | ISSA-6 | 1.07 × 10−244 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-7 | 6.69 × 10−302 | 0.00 × 10+00 | 0.00 × 10+00 | | ISSA-7 | 5.73 × 10−213 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-8 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | | ISSA-8 | 3.96 × 10−195 | 0.00 × 10+00 | 0.00 × 10+00 |
| | CLSSA | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | | CLSSA | 2.18 × 10−270 | 0.00 × 10+00 | 0.00 × 10+00 |
| F5 | SSA | 1.08 × 10−05 | 2.54 × 10−05 | 9.67 × 10−11 | F6 | SSA | 3.42 × 10−10 | 1.65 × 10−09 | 1.81 × 10−13 |
| | ISSA-1 | 2.11 × 10−05 | 6.85 × 10−05 | 2.01 × 10−09 | | ISSA-1 | 1.80 × 10−10 | 3.22 × 10−10 | 1.89 × 10−13 |
| | ISSA-2 | 1.02 × 10−08 | 1.98 × 10−08 | 8.86 × 10−11 | | ISSA-2 | 2.70 × 10−10 | 5.71 × 10−10 | 1.25 × 10−13 |
| | ISSA-3 | 4.81 × 10−06 | 8.17 × 10−06 | 5.42 × 10−09 | | ISSA-3 | 3.46 × 10−10 | 8.45 × 10−10 | 2.78 × 10−14 |
| | ISSA-4 | 9.23 × 10−06 | 1.72 × 10−05 | 1.67 × 10−10 | | ISSA-4 | 1.25 × 10−10 | 2.66 × 10−10 | 5.06 × 10−13 |
| | ISSA-5 | 7.98 × 10−09 | 2.03 × 10−08 | 4.32 × 10−11 | | ISSA-5 | 1.57 × 10−10 | 3.42 × 10−10 | 1.27 × 10−13 |
| | ISSA-6 | 1.32 × 10−05 | 2.83 × 10−05 | 1.92 × 10−08 | | ISSA-6 | 7.67 × 10−11 | 1.49 × 10−10 | 5.35 × 10−14 |
| | ISSA-7 | 1.16 × 10−08 | 2.42 × 10−08 | 3.19 × 10−11 | | ISSA-7 | 1.03 × 10−10 | 2.15 × 10−10 | 2.98 × 10−13 |
| | ISSA-8 | 1.61 × 10−08 | 3.30 × 10−08 | 3.09 × 10−11 | | ISSA-8 | 1.89 × 10−10 | 5.93 × 10−10 | 3.73 × 10−13 |
| | CLSSA | 2.19 × 10−08 | 4.01 × 10−08 | 3.06 × 10−14 | | CLSSA | 3.26 × 10−10 | 4.01 × 10−10 | 4.51 × 10−14 |
| F7 | SSA | 1.11 × 10−04 | 9.95 × 10−05 | 4.54 × 10−06 | F9 | SSA | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-1 | 1.84 × 10−04 | 1.45 × 10−04 | 1.39 × 10−05 | | ISSA-1 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-2 | 1.91 × 10−04 | 1.42 × 10−04 | 7.01 × 10−06 | | ISSA-2 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-3 | 1.49 × 10−04 | 1.05 × 10−04 | 2.49 × 10−06 | | ISSA-3 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-4 | 1.48 × 10−04 | 1.50 × 10−04 | 2.79 × 10−06 | | ISSA-4 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-5 | 1.94 × 10−04 | 1.76 × 10−04 | 4.38 × 10−06 | | ISSA-5 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-6 | 1.89 × 10−04 | 1.30 × 10−04 | 9.56 × 10−07 | | ISSA-6 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-7 | 1.25 × 10−04 | 1.01 × 10−04 | 1.17 × 10−05 | | ISSA-7 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-8 | 1.55 × 10−04 | 1.51 × 10−04 | 8.79 × 10−07 | | ISSA-8 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | CLSSA | 1.13 × 10−04 | 7.15 × 10−05 | 2.06 × 10−06 | | CLSSA | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| F10 | SSA | 8.88 × 10−16 | 9.86 × 10−32 | 8.88 × 10−16 | F11 | SSA | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-1 | 8.88 × 10−16 | 9.86 × 10−32 | 8.88 × 10−16 | | ISSA-1 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-2 | 8.88 × 10−16 | 9.86 × 10−32 | 8.88 × 10−16 | | ISSA-2 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-3 | 8.88 × 10−16 | 9.86 × 10−32 | 8.88 × 10−16 | | ISSA-3 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-4 | 8.88 × 10−16 | 9.86 × 10−32 | 8.88 × 10−16 | | ISSA-4 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-5 | 8.88 × 10−16 | 9.86 × 10−32 | 8.88 × 10−16 | | ISSA-5 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-6 | 8.88 × 10−16 | 9.86 × 10−32 | 8.88 × 10−16 | | ISSA-6 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-7 | 8.88 × 10−16 | 9.86 × 10−32 | 8.88 × 10−16 | | ISSA-7 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | ISSA-8 | 8.88 × 10−16 | 9.86 × 10−32 | 8.88 × 10−16 | | ISSA-8 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | CLSSA | 8.88 × 10−16 | 9.86 × 10−32 | 8.88 × 10−16 | | CLSSA | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| F12 | SSA | 9.46 × 10−12 | 1.54 × 10−11 | 6.06 × 10−17 | F13 | SSA | 7.39 × 10−11 | 1.40 × 10−10 | 4.12 × 10−15 |
| | ISSA-1 | 4.63 × 10−12 | 8.62 × 10−12 | 5.91 × 10−17 | | ISSA-1 | 1.40 × 10−10 | 2.75 × 10−10 | 4.32 × 10−14 |
| | ISSA-2 | 3.26 × 10−16 | 8.90 × 10−16 | 8.66 × 10−20 | | ISSA-2 | 1.18 × 10−13 | 2.84 × 10−13 | 1.93 × 10−17 |
| | ISSA-3 | 7.74 × 10−12 | 2.07 × 10−11 | 1.01 × 10−15 | | ISSA-3 | 1.67 × 10−10 | 4.15 × 10−10 | 3.01 × 10−14 |
| | ISSA-4 | 3.38 × 10−12 | 8.46 × 10−12 | 3.18 × 10−16 | | ISSA-4 | 1.18 × 10−10 | 3.43 × 10−10 | 1.12 × 10−13 |
| | ISSA-5 | 4.72 × 10−16 | 1.55 × 10−15 | 4.63 × 10−21 | | ISSA-5 | 1.04 × 10−13 | 2.74 × 10−13 | 2.97 × 10−19 |
| | ISSA-6 | 2.58 × 10−12 | 6.57 × 10−12 | 4.28 × 10−17 | | ISSA-6 | 1.67 × 10−10 | 6.42 × 10−10 | 1.33 × 10−13 |
| | ISSA-7 | 3.85 × 10−16 | 1.25 × 10−15 | 4.79 × 10−20 | | ISSA-7 | 7.81 × 10−14 | 1.72 × 10−13 | 4.74 × 10−18 |
| | ISSA-8 | 4.85 × 10−16 | 9.86 × 10−16 | 1.43 × 10−19 | | ISSA-8 | 1.34 × 10−13 | 2.64 × 10−13 | 5.54 × 10−18 |
| | CLSSA | 8.78 × 10−16 | 1.23 × 10−15 | 2.59 × 10−21 | | CLSSA | 7.29 × 10−13 | 1.56 × 10−12 | 7.04 × 10−20 |
| F14 | SSA | 4 | 4.47 × 10+00 | 1 | F15 | SSA | 3.08 × 10−04 | 1.02 × 10−09 | 3.08 × 10−04 |
| | ISSA-1 | 3 | 4.04 × 10+00 | 1 | | ISSA-1 | 3.07 × 10−04 | 2.94 × 10−09 | 3.07 × 10−04 |
| | ISSA-2 | 4 | 4.57 × 10+00 | 1 | | ISSA-2 | 3.07 × 10−04 | 1.44 × 10−09 | 3.07 × 10−04 |
| | ISSA-3 | 3 | 3.92 × 10+00 | 1 | | ISSA-3 | 3.43 × 10−04 | 1.89 × 10−04 | 3.07 × 10−04 |
| | ISSA-4 | 2 | 1.83 × 10+00 | 1 | | ISSA-4 | 3.07 × 10−04 | 4.80 × 10−08 | 3.07 × 10−04 |
| | ISSA-5 | 3 | 4.53 × 10+00 | 1 | | ISSA-5 | 3.08 × 10−04 | 7.95 × 10−07 | 3.07 × 10−04 |
| | ISSA-6 | 2 | 2.92 × 10+00 | 1 | | ISSA-6 | 3.07 × 10−04 | 7.17 × 10−10 | 3.07 × 10−04 |
| | ISSA-7 | 2 | 2.92 × 10+00 | 1 | | ISSA-7 | 3.07 × 10−04 | 1.54 × 10−08 | 3.07 × 10−04 |
| | ISSA-8 | 2 | 3.32 × 10+00 | 1 | | ISSA-8 | 3.38 × 10−04 | 1.64 × 10−04 | 3.07 × 10−04 |
| | CLSSA | 1 | 1.78 × 10+00 | 1 | | CLSSA | 3.08 × 10−04 | 2.45 × 10−07 | 3.08 × 10−04 |
| F16 | SSA | −1.032 | 0.00 × 10+00 | −1.032 | F18 | SSA | 3.9 | 4.85 × 10+00 | 3 |
| | ISSA-1 | −1.032 | 0.00 × 10+00 | −1.032 | | ISSA-1 | 3 | 3.09 × 10−15 | 3 |
| | ISSA-2 | −1.032 | 0.00 × 10+00 | −1.032 | | ISSA-2 | 3 | 3.43 × 10−15 | 3 |
| | ISSA-3 | −1.032 | 0.00 × 10+00 | −1.032 | | ISSA-3 | 3 | 1.92 × 10−15 | 3 |
| | ISSA-4 | −1.032 | 0.00 × 10+00 | −1.032 | | ISSA-4 | 3 | 3.09 × 10−15 | 3 |
| | ISSA-5 | −1.032 | 0.00 × 10+00 | −1.032 | | ISSA-5 | 3.9 | 4.85 × 10+00 | 3 |
| | ISSA-6 | −1.032 | 0.00 × 10+00 | −1.032 | | ISSA-6 | 3 | 2.67 × 10−15 | 3 |
| | ISSA-7 | −1.032 | 0.00 × 10+00 | −1.032 | | ISSA-7 | 3 | 2.98 × 10−15 | 3 |
| | ISSA-8 | −1.032 | 0.00 × 10+00 | −1.032 | | ISSA-8 | 3 | 2.67 × 10−15 | 3 |
| | CLSSA | −1.032 | 0.00 × 10+00 | −1.032 | | CLSSA | 3 | 3.57 × 10−15 | 3 |
| F19 | SSA | −3.86 | 3.11 × 10−15 | −3.86 | F20 | SSA | −3.27 | 5.89 × 10−02 | −3.32 |
| | ISSA-1 | −3.86 | 3.11 × 10−15 | −3.86 | | ISSA-1 | −3.24 | 5.45 × 10−02 | −3.32 |
| | ISSA-2 | −3.86 | 3.11 × 10−15 | −3.86 | | ISSA-2 | −3.26 | 5.94 × 10−02 | −3.32 |
| | ISSA-3 | −3.86 | 3.11 × 10−15 | −3.86 | | ISSA-3 | −3.27 | 5.82 × 10−02 | −3.32 |
| | ISSA-4 | −3.86 | 3.11 × 10−15 | −3.86 | | ISSA-4 | −3.28 | 5.73 × 10−02 | −3.32 |
| | ISSA-5 | −3.86 | 3.11 × 10−15 | −3.86 | | ISSA-5 | −3.25 | 5.73 × 10−02 | −3.32 |
| | ISSA-6 | −3.86 | 3.11 × 10−15 | −3.86 | | ISSA-6 | −3.25 | 5.73 × 10−02 | −3.32 |
| | ISSA-7 | −3.86 | 3.11 × 10−15 | −3.86 | | ISSA-7 | −3.26 | 5.94 × 10−02 | −3.32 |
| | ISSA-8 | −3.86 | 3.11 × 10−15 | −3.86 | | ISSA-8 | −3.25 | 5.82 × 10−02 | −3.32 |
| | CLSSA | −3.86 | 3.11 × 10−15 | −3.86 | | CLSSA | −3.24 | 5.60 × 10−02 | −3.32 |
| F21 | SSA | −10 | 3.24 × 10−16 | −10 | F22 | SSA | −10 | 8.39 × 10−07 | −10 |
| | ISSA-1 | −10 | 9.15 × 10−01 | −10 | | ISSA-1 | −10 | 9.54 × 10−01 | −10 |
| | ISSA-2 | −10 | 4.82 × 10−08 | −10 | | ISSA-2 | −10 | 0.00 × 10+00 | −10 |
| | ISSA-3 | −10 | 9.15 × 10−01 | −10 | | ISSA-3 | −10 | 2.46 × 10−06 | −10 |
| | ISSA-4 | −10 | 3.60 × 10−14 | −10 | | ISSA-4 | −10 | 9.54 × 10−01 | −10 |
| | ISSA-5 | −10 | 1.27 × 10+00 | −10 | | ISSA-5 | −10 | 9.54 × 10−01 | −10 |
| | ISSA-6 | −10 | 9.15 × 10−01 | −10 | | ISSA-6 | −10 | 4.84 × 10−05 | −10 |
| | ISSA-7 | −10 | 5.39 × 10−14 | −10 | | ISSA-7 | −10 | 0.00 × 10+00 | −10 |
| | ISSA-8 | −10 | 3.24 × 10−16 | −10 | | ISSA-8 | −10 | 0.00 × 10+00 | −10 |
| | CLSSA | −10 | 5.96 × 10−07 | −10 | | CLSSA | −10 | 4.42 × 10−11 | −10 |
| F23 | SSA | −11 | 8.88 × 10−15 | −11 | | | | | |
| | ISSA-1 | −11 | 8.88 × 10−15 | −11 | | | | | |
| | ISSA-2 | −11 | 5.42 × 10−12 | −11 | | | | | |
| | ISSA-3 | −11 | 8.88 × 10−15 | −11 | | | | | |
| | ISSA-4 | −11 | 1.98 × 10−14 | −11 | | | | | |
| | ISSA-5 | −10 | 1.62 × 10+00 | −11 | | | | | |
| | ISSA-6 | −11 | 5.87 × 10−10 | −11 | | | | | |
| | ISSA-7 | −11 | 8.88 × 10−15 | −11 | | | | | |
| | ISSA-8 | −11 | 6.37 × 10−05 | −11 | | | | | |
| | CLSSA | −10 | 9.71 × 10−01 | −11 | | | | | |
Table 3. Comparisons of the SSA and CLSSA on high-dimensional functions.

| Function | Results | SSA (Dim = 30) | CLSSA (Dim = 30) | SSA (Dim = 100) | CLSSA (Dim = 100) |
|---|---|---|---|---|---|
| F1 | Avg | 0 | 0 | 0 | 0 |
| | Std | 0 | 0 | 0 | 0 |
| | Best | 0 | 0 | 0 | 0 |
| F2 | Avg | 4.4 × 10−192 | 8.8 × 10−229 | 5.2 × 10−219 | 7.9 × 10−247 |
| | Std | 0 | 0 | 0 | 0 |
| | Best | 0 | 0 | 0 | 0 |
| F3 | Avg | 7 × 10−279 | 0 | 0 | 0 |
| | Std | 0 | 0 | 0 | 0 |
| | Best | 0 | 0 | 0 | 0 |
| F4 | Avg | 2.5 × 10−199 | 2.2 × 10−270 | 3.2 × 10−231 | 1.7 × 10−241 |
| | Std | 0 | 0 | 0 | 0 |
| | Best | 0 | 0 | 0 | 0 |
| F5 | Avg | 1.04 × 10−05 | 2.19 × 10−08 | 3.27 × 10−05 | 1.62 × 10−07 |
| | Std | 2.23 × 10−05 | 4.01 × 10−08 | 5.26 × 10−05 | 2.84 × 10−07 |
| | Best | 1.52 × 10−14 | 3.06 × 10−14 | 6.33 × 10−09 | 2.3 × 10−09 |
| F6 | Avg | 3.42 × 10−10 | 3.26 × 10−10 | 1.62 × 10−07 | 1.56 × 10−07 |
| | Std | 1.65 × 10−09 | 4.01 × 10−10 | 1.94 × 10−07 | 2.4 × 10−07 |
| | Best | 1.81 × 10−13 | 4.51 × 10−14 | 4.22 × 10−10 | 2.07 × 10−10 |
| F7 | Avg | 1.11 × 10−04 | 1.11 × 10−04 | 1.11 × 10−04 | 1.11 × 10−04 |
| | Std | 9.95 × 10−05 | 7.15 × 10−05 | 1.11 × 10−04 | 1.11 × 10−04 |
| | Best | 4.54 × 10−06 | 2.06 × 10−06 | 1.17 × 10−05 | 1.29 × 10−05 |
| F9 | Avg | 0 | 0 | 0 | 0 |
| | Std | 0 | 0 | 0 | 0 |
| | Best | 0 | 0 | 0 | 0 |
| F10 | Avg | 8.88 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16 |
| | Std | 9.86 × 10−32 | 9.86 × 10−32 | 9.86 × 10−32 | 9.86 × 10−32 |
| | Best | 8.88 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16 | 8.88 × 10−16 |
| F11 | Avg | 0 | 0 | 0 | 0 |
| | Std | 0 | 0 | 0 | 0 |
| | Best | 0 | 0 | 0 | 0 |
| F12 | Avg | 2.23 × 10−11 | 8.78 × 10−16 | 2.55 × 10−09 | 3.01 × 10−14 |
| | Std | 6.8 × 10−11 | 1.23 × 10−15 | 8.51 × 10−09 | 3.94 × 10−14 |
| | Best | 1.52 × 10−15 | 2.59 × 10−21 | 5.17 × 10−13 | 8.89 × 10−18 |
| F13 | Avg | 7.39 × 10−11 | 7.29 × 10−13 | 7.69 × 10−08 | 7.85 × 10−11 |
| | Std | 1.4 × 10−10 | 1.56 × 10−12 | 1.21 × 10−07 | 2.06 × 10−10 |
| | Best | 4.12 × 10−15 | 7.04 × 10−20 | 2.74 × 10−10 | 2.55 × 10−16 |
Table 4. Comparisons of the SSA and CLSSA on low-dimensional functions.

| Function | Results | SSA | CLSSA |
|---|---|---|---|
| F14 | Avg | 3.53 × 10+00 | 1.39 × 10+00 |
| | Std | 4.47 × 10+00 | 1.78 × 10+00 |
| | Best | 0.998 | 0.998 |
| F15 | Avg | 3.08 × 10−04 | 3.08 × 10−04 |
| | Std | 1.02 × 10−09 | 2.45 × 10−07 |
| | Best | 3.08 × 10−04 | 3.08 × 10−04 |
| F16 | Avg | −1.032 | −1.032 |
| | Std | 0 | 0 |
| | Best | −1.032 | −1.032 |
| F18 | Avg | 3.9 | 3 |
| | Std | 4.847 | 3.57 × 10−15 |
| | Best | 3 | 3 |
| F19 | Avg | −3.86 | −3.86 |
| | Std | 3.11 × 10−15 | 3.11 × 10−15 |
| | Best | −3.86 | −3.86 |
| F20 | Avg | −3.27 | −3.24 |
| | Std | 5.89 × 10−02 | 5.60 × 10−02 |
| | Best | −3.32 | −3.32 |
| F21 | Avg | −10 | −10 |
| | Std | 3.24 × 10−16 | 5.96 × 10−07 |
| | Best | −10 | −10 |
| F22 | Avg | −10 | −10 |
| | Std | 8.39 × 10−07 | 4.42 × 10−11 |
| | Best | −10 | −10 |
| F23 | Avg | −10.5364 | −10.3561 |
| | Std | 8.88 × 10−15 | 0.970753 |
| | Best | −10.5364 | −10.5364 |
Table 5. Comparisons of classic algorithms and variants on eight functions.

| Function | Algorithm | Avg | Std | Best |
|---|---|---|---|---|
| F1 | PSO | 5.87 × 10+00 | 3.40 × 10+00 | 2.17 × 10+00 |
| | TACPSO | 2.27 × 10+02 | 8.98 × 10+02 | 1.02 × 10+01 |
| | AGPSO3 | 4.10 × 10+01 | 5.24 × 10+01 | 4.12 × 10+00 |
| | GWO | 1.03 × 10−11 | 2.11 × 10−11 | 1.92 × 10−14 |
| | IGWO | 2.00 × 10−08 | 2.82 × 10−08 | 1.28 × 10−11 |
| | SSA | 6.96 × 10−279 | 0 | 0 |
| | IHSSA | 8.96 × 10−264 | 0 | 0 |
| | ESSA | 1.57 × 10−121 | 8.45 × 10−121 | 0 |
| | CSSA | 0 | 0 | 0 |
| | CLSSA | 0 | 0 | 0 |
| F2 | PSO | 3.65 × 10+01 | 3.46 × 10+01 | 1.09 × 10+00 |
| | TACPSO | 4.86 × 10+01 | 3.49 × 10+01 | 4.82 × 10+00 |
| | AGPSO3 | 9.66 × 10+01 | 1.41 × 10+02 | 1.28 × 10+01 |
| | GWO | 2.63 × 10+01 | 7.04 × 10−01 | 2.52 × 10+01 |
| | IGWO | 2.32 × 10+01 | 2.18 × 10−01 | 2.28 × 10+01 |
| | SSA | 1.08 × 10−05 | 2.54 × 10−05 | 9.67 × 10−11 |
| | IHSSA | 7.72 × 10−07 | 3.54 × 10−06 | 8.74 × 10−12 |
| | ESSA | 2.21 × 10−06 | 7.70 × 10−06 | 2.45 × 10−12 |
| | CSSA | 2.03 × 10−06 | 2.48 × 10−06 | 3.59 × 10−10 |
| | CLSSA | 2.19 × 10−08 | 4.01 × 10−08 | 3.06 × 10−14 |
| F3 | PSO | 4.49 × 10−02 | 8.75 × 10−02 | 5.87 × 10−14 |
| | TACPSO | 2.95 × 10−01 | 5.29 × 10−01 | 2.91 × 10−07 |
| | AGPSO3 | 4.37 × 10−01 | 6.53 × 10−01 | 1.92 × 10−07 |
| | GWO | 1.67 × 10−02 | 9.03 × 10−03 | 2.46 × 10−06 |
| | IGWO | 2.16 × 10−06 | 6.02 × 10−07 | 1.15 × 10−06 |
| | SSA | 9.46 × 10−12 | 1.54 × 10−11 | 6.06 × 10−17 |
| | IHSSA | 4.23 × 10−12 | 7.02 × 10−12 | 2.85 × 10−14 |
| | ESSA | 1.17 × 10−13 | 2.75 × 10−13 | 2.92 × 10−17 |
| | CSSA | 7.09 × 10−12 | 1.29 × 10−11 | 1.08 × 10−15 |
| | CLSSA | 8.78 × 10−16 | 1.23 × 10−15 | 2.59 × 10−21 |
| F4 | PSO | 1.46 × 10−03 | 3.73 × 10−03 | 2.74 × 10−13 |
| | TACPSO | 1.98 × 10−02 | 5.33 × 10−02 | 6.56 × 10−08 |
| | AGPSO3 | 1.73 × 10−02 | 3.54 × 10−02 | 9.74 × 10−07 |
| | GWO | 1.65 × 10−01 | 1.04 × 10−01 | 2.26 × 10−05 |
| | IGWO | 8.93 × 10−03 | 2.72 × 10−02 | 1.83 × 10−05 |
| | SSA | 7.39 × 10−11 | 1.40 × 10−10 | 4.12 × 10−15 |
| | IHSSA | 3.53 × 10−11 | 8.74 × 10−11 | 4.07 × 10−16 |
| | ESSA | 3.75 × 10−12 | 9.42 × 10−12 | 5.01 × 10−17 |
| | CSSA | 2.01 × 10−11 | 3.21 × 10−11 | 4.39 × 10−14 |
| | CLSSA | 7.29 × 10−13 | 1.56 × 10−12 | 7.04 × 10−20 |
| F5 | PSO | −9.26251 | 2.24 × 10−01 | −9.58030 |
| | TACPSO | −8.79606 | 4.93 × 10−01 | −9.59818 |
| | AGPSO3 | −9.14907 | 4.49 × 10−01 | −9.61348 |
| | GWO | −8.13552 | 7.87 × 10−01 | −9.23937 |
| | IGWO | −8.68436 | 8.47 × 10−01 | −9.56575 |
| | SSA | −8.59376 | 6.47 × 10−01 | −9.55150 |
| | IHSSA | −8.59020 | 7.45 × 10−01 | −9.54755 |
| | ESSA | −8.85708 | 4.38 × 10−01 | −9.49070 |
| | CSSA | −8.98927 | 5.12 × 10−01 | −9.62254 |
| | CLSSA | −8.52505 | 5.77 × 10−01 | −9.66015 |
| F6 | PSO | 1.20 × 10−03 | 1.18 × 10−03 | 1.55 × 10−06 |
| | TACPSO | 6.56 × 10−08 | 1.11 × 10−07 | 4.33 × 10−12 |
| | AGPSO3 | 3.98 × 10−06 | 6.96 × 10−06 | 7.44 × 10−09 |
| | GWO | 6.99 × 10−01 | 7.57 × 10−01 | 1.62 × 10−05 |
| | IGWO | 4.20 × 10−05 | 3.00 × 10−05 | 9.68 × 10−06 |
| | SSA | 2.49 × 10−08 | 4.80 × 10−08 | 7.05 × 10−13 |
| | IHSSA | 2.16 × 10−08 | 5.86 × 10−08 | 9.40 × 10−13 |
| | ESSA | 7.10 × 10−08 | 1.47 × 10−07 | 2.54 × 10−15 |
| | CSSA | 1.89 × 10−08 | 4.05 × 10−08 | 7.15 × 10−17 |
| | CLSSA | 3.58 × 10−11 | 8.06 × 10−11 | 5.41 × 10−18 |
| F7 | PSO | 1.50 × 10−03 | 7.21 × 10−04 | 5.80 × 10−04 |
| | TACPSO | 1.46 × 10+00 | 7.84 × 10+00 | 2.47 × 10−04 |
| | AGPSO3 | 2.51 × 10+00 | 1.35 × 10+01 | 8.35 × 10−04 |
| | GWO | 7.57 × 10−06 | 7.20 × 10−06 | 5.40 × 10−07 |
| | IGWO | 2.11 × 10−05 | 1.28 × 10−05 | 5.27 × 10−06 |
| | SSA | 0 | 0 | 0 |
| | IHSSA | 1.76 × 10−09 | 5.05 × 10−09 | 1.35 × 10−14 |
| | ESSA | 6.74 × 10−165 | 0 | 0 |
| | CSSA | 0 | 0 | 0 |
| | CLSSA | 0 | 0 | 0 |
| F8 | PSO | 2.39 × 10+43 | 1.13 × 10+43 | 5.64 × 10+42 |
| | TACPSO | 1.83 × 10+17 | 1.06 × 10+17 | 5.30 × 10+15 |
| | AGPSO3 | 3.17 × 10+38 | 1.62 × 10+39 | 4.08 × 10+15 |
| | GWO | 3.92 × 10+14 | 7.61 × 10+15 | 1.85 × 10+14 |
| | IGWO | 4.48 × 10+14 | 3.05 × 10+14 | 7.44 × 10+13 |
| | SSA | 0 | 0 | 0 |
| | IHSSA | 7.43 × 10+13 | 4.00 × 10+14 | 0.00 × 10+00 |
| | ESSA | 7.23 × 10−173 | 0.00 × 10+00 | 0.00 × 10+00 |
| | CSSA | 0 | 0 | 0 |
| | CLSSA | 0 | 0 | 0 |

Note: The bolded data represents the best result for each function.
Table 6. Wilcoxon rank sum test.

| Function | PSO | TACPSO | AGPSO3 | GWO | IGWO | SSA | IHSSA | ESSA | CSSA |
|---|---|---|---|---|---|---|---|---|---|
| F1 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.61 × 10−01 | 2.16 × 10−02 | 3.45 × 10−07 | NaN |
| F2 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 5.00 × 10−09 | 1.12 × 10−01 | 1.81 × 10−01 | 9.06 × 10−08 |
| F3 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 4.98 × 10−11 | 3.02 × 10−11 | 1.89 × 10−04 | 6.70 × 10−11 |
| F4 | 1.55 × 10−09 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 6.01 × 10−08 | 9.06 × 10−08 | 5.94 × 10−02 | 6.05 × 10−07 |
| F5 | 5.09 × 10−08 | 9.63 × 10−02 | 8.15 × 10−05 | 4.68 × 10−02 | 4.21 × 10−02 | 5.79 × 10−01 | 4.64 × 10−01 | 8.68 × 10−03 | 1.11 × 10−03 |
| F6 | 3.02 × 10−11 | 4.62 × 10−10 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 1.60 × 10−07 | 9.83 × 10−08 | 1.73 × 10−07 | 4.74 × 10−06 |
| F7 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | NaN | 1.21 × 10−12 | 6.61 × 10−05 | NaN |
| F8 | 1.21 × 10−12 | 7.58 × 10−13 | 1.05 × 10−12 | 1.21 × 10−12 | 1.21 × 10−12 | NaN | 3.34 × 10−01 | 1.61 × 10−01 | NaN |
Table 7. CEC2017 test results.

| Function | Index | SSA | LSSA | GPSSA | CLSSA_T | FA-CL | CLSSA |
|---|---|---|---|---|---|---|---|
| F1 | Best | 1.06 × 10+02 | 1.54 × 10+02 | 7.81 × 10+10 | 1.22 × 10+02 | 6.29 × 10+05 | 1.00 × 10+02 |
| | Worst | 1.92 × 10+04 | 7.05 × 10+10 | 7.81 × 10+10 | 1.88 × 10+04 | 1.42 × 10+06 | 8.94 × 10+03 |
| | Median | 1.46 × 10+03 | 3.33 × 10+03 | 7.81 × 10+10 | 2.07 × 10+03 | 9.22 × 10+05 | 5.63 × 10+02 |
| | Mean | 3.82 × 10+03 | 2.62 × 10+09 | 7.81 × 10+10 | 4.53 × 10+03 | 9.61 × 10+05 | 1.48 × 10+03 |
| | Std | 5.36 × 10+03 | 1.28 × 10+10 | 2.95 × 10−01 | 5.50 × 10+03 | 1.85 × 10+05 | 1.92 × 10+03 |
| F3 | Best | 9.62 × 10+02 | 1.09 × 10+03 | 8.38 × 10+04 | 2.52 × 10+03 | 6.83 × 10+03 | 7.82 × 10+02 |
| | Worst | 4.56 × 10+03 | 1.96 × 10+04 | 9.00 × 10+04 | 1.59 × 10+04 | 1.78 × 10+04 | 6.74 × 10+03 |
| | Median | 2.54 × 10+03 | 9.38 × 10+03 | 8.48 × 10+04 | 7.99 × 10+03 | 1.11 × 10+04 | 2.72 × 10+03 |
| | Mean | 2.75 × 10+03 | 8.41 × 10+03 | 8.50 × 10+04 | 7.96 × 10+03 | 1.14 × 10+04 | 3.12 × 10+03 |
| | Std | 1.03 × 10+03 | 4.30 × 10+03 | 1.38 × 10+03 | 3.14 × 10+03 | 2.63 × 10+03 | 1.64 × 10+03 |
| F4 | Best | 4.04 × 10+02 | 4.04 × 10+02 | 2.04 × 10+04 | 4.04 × 10+02 | 4.28 × 10+02 | 4.04 × 10+02 |
| | Worst | 5.17 × 10+02 | 8.30 × 10+02 | 1.82 × 10+04 | 5.36 × 10+02 | 5.29 × 10+02 | 4.89 × 10+02 |
| | Median | 4.86 × 10+02 | 5.11 × 10+02 | 2.04 × 10+04 | 5.10 × 10+02 | 5.15 × 10+02 | 4.77 × 10+02 |
| | Mean | 4.79 × 10+02 | 5.31 × 10+02 | 2.03 × 10+04 | 4.97 × 10+02 | 5.06 × 10+02 | 4.68 × 10+02 |
| | Std | 3.36 × 10+01 | 8.72 × 10+01 | 4.06 × 10+02 | 2.69 × 10+01 | 2.24 × 10+01 | 2.30 × 10+01 |
| F5 | Best | 6.55 × 10+02 | 6.55 × 10+02 | 8.14 × 10+02 | 6.25 × 10+02 | 6.27 × 10+02 | 6.47 × 10+02 |
| | Worst | 8.20 × 10+02 | 8.25 × 10+02 | 9.70 × 10+02 | 7.97 × 10+02 | 7.33 × 10+02 | 8.24 × 10+02 |
| | Median | 7.48 × 10+02 | 7.48 × 10+02 | 9.70 × 10+02 | 7.06 × 10+02 | 6.61 × 10+02 | 7.46 × 10+02 |
| | Mean | 7.48 × 10+02 | 7.50 × 10+02 | 9.53 × 10+02 | 7.05 × 10+02 | 6.64 × 10+02 | 7.56 × 10+02 |
| | Std | 4.47 × 10+01 | 3.92 × 10+01 | 4.56 × 10+01 | 3.47 × 10+01 | 2.65 × 10+01 | 4.46 × 10+01 |
| F6 | Best | 6.23 × 10+02 | 6.47 × 10+02 | 6.67 × 10+02 | 6.14 × 10+02 | 6.27 × 10+02 | 6.21 × 10+02 |
| | Worst | 6.64 × 10+02 | 6.89 × 10+02 | 7.15 × 10+02 | 6.55 × 10+02 | 6.59 × 10+02 | 6.59 × 10+02 |
| | Median | 6.39 × 10+02 | 6.63 × 10+02 | 7.06 × 10+02 | 6.34 × 10+02 | 6.47 × 10+02 | 6.37 × 10+02 |
| | Mean | 6.40 × 10+02 | 6.63 × 10+02 | 7.04 × 10+02 | 6.34 × 10+02 | 6.44 × 10+02 | 6.38 × 10+02 |
| | Std | 1.14 × 10+01 | 7.65 × 10+00 | 9.95 × 10+00 | 7.97 × 10+00 | 8.41 × 10+00 | 1.08 × 10+01 |
| F7 | Best | 1.08 × 10+03 | 1.02 × 10+03 | 1.50 × 10+03 | 9.58 × 10+02 | 9.49 × 10+02 | 1.02 × 10+03 |
| | Worst | 1.35 × 10+03 | 1.35 × 10+03 | 1.58 × 10+03 | 1.23 × 10+03 | 1.27 × 10+03 | 1.35 × 10+03 |
| | Median | 1.23 × 10+03 | 1.27 × 10+03 | 1.53 × 10+03 | 1.12 × 10+03 | 1.09 × 10+03 | 1.24 × 10+03 |
| | Mean | 1.24 × 10+03 | 1.24 × 10+03 | 1.52 × 10+03 | 1.13 × 10+03 | 1.10 × 10+03 | 1.23 × 10+03 |
| | Std | 8.33 × 10+01 | 1.06 × 10+02 | 1.95 × 10+01 | 7.08 × 10+01 | 8.39 × 10+01 | 9.50 × 10+01 |
| F8 | Best | 9.31 × 10+02 | 9.07 × 10+02 | 1.16 × 10+03 | 9.29 × 10+02 | 8.87 × 10+02 | 9.24 × 10+02 |
| | Worst | 1.02 × 10+03 | 1.04 × 10+03 | 1.24 × 10+03 | 1.05 × 10+03 | 9.50 × 10+02 | 1.02 × 10+03 |
| | Median | 9.81 × 10+02 | 9.81 × 10+02 | 1.22 × 10+03 | 9.79 × 10+02 | 9.15 × 10+02 | 9.80 × 10+02 |
| | Mean | 9.80 × 10+02 | 9.76 × 10+02 | 1.21 × 10+03 | 9.77 × 10+02 | 9.15 × 10+02 | 9.74 × 10+02 |
| | Std | 2.15 × 10+01 | 3.33 × 10+01 | 1.93 × 10+01 | 3.47 × 10+01 | 1.67 × 10+01 | 2.63 × 10+01 |
| F9 | Best | 5.02 × 10+03 | 4.01 × 10+03 | 6.58 × 10+03 | 4.65 × 10+03 | 2.73 × 10+03 | 3.73 × 10+03 |
| | Worst | 5.48 × 10+03 | 6.73 × 10+03 | 1.42 × 10+04 | 5.71 × 10+03 | 5.34 × 10+03 | 5.65 × 10+03 |
| | Median | 5.39 × 10+03 | 5.36 × 10+03 | 1.34 × 10+04 | 5.23 × 10+03 | 4.01 × 10+03 | 5.23 × 10+03 |
| | Mean | 5.32 × 10+03 | 5.34 × 10+03 | 1.32 × 10+04 | 5.25 × 10+03 | 4.06 × 10+03 | 5.06 × 10+03 |
| | Std | 1.25 × 10+02 | 5.25 × 10+02 | 1.26 × 10+03 | 2.25 × 10+02 | 6.25 × 10+02 | 5.25 × 10+02 |
| F10 | Best | 4.14 × 10+03 | 4.26 × 10+03 | 8.18 × 10+03 | 4.27 × 10+03 | 4.07 × 10+03 | 3.32 × 10+03 |
| | Worst | 6.62 × 10+03 | 6.98 × 10+03 | 8.89 × 10+03 | 6.65 × 10+03 | 6.60 × 10+03 | 6.51 × 10+03 |
| | Median | 5.28 × 10+03 | 5.66 × 10+03 | 8.88 × 10+03 | 5.21 × 10+03 | 5.15 × 10+03 | 5.29 × 10+03 |
| | Mean | 5.35 × 10+03 | 5.57 × 10+03 | 8.69 × 10+03 | 5.33 × 10+03 | 5.13 × 10+03 | 5.24 × 10+03 |
| | Std | 6.54 × 10+02 | 5.96 × 10+02 | 2.68 × 10+02 | 6.68 × 10+02 | 6.04 × 10+02 | 7.11 × 10+02 |
| F11 | Best | 1.15 × 10+03 | 1.15 × 10+03 | 2.58 × 10+03 | 1.16 × 10+03 | 1.17 × 10+03 | 1.16 × 10+03 |
| | Worst | 1.43 × 10+03 | 1.41 × 10+04 | 9.90 × 10+03 | 1.42 × 10+03 | 1.30 × 10+03 | 1.32 × 10+03 |
| | Median | 1.30 × 10+03 | 1.22 × 10+03 | 6.15 × 10+03 | 1.25 × 10+03 | 1.23 × 10+03 | 1.22 × 10+03 |
| | Mean | 1.28 × 10+03 | 1.66 × 10+03 | 6.92 × 10+03 | 1.25 × 10+03 | 1.23 × 10+03 | 1.22 × 10+03 |
| | Std | 7.99 × 10+01 | 2.34 × 10+03 | 2.26 × 10+03 | 5.39 × 10+01 | 3.60 × 10+01 | 4.33 × 10+01 |
| F12 | Best | 2.59 × 10+04 | 4.81 × 10+04 | 7.39 × 10+09 | 1.09 × 10+05 | 1.13 × 10+06 | 6.43 × 10+04 |
| | Worst | 1.02 × 10+06 | 1.84 × 10+08 | 2.47 × 10+10 | 3.15 × 10+06 | 6.08 × 10+06 | 6.12 × 10+06 |
| | Median | 1.98 × 10+05 | 4.44 × 10+05 | 2.47 × 10+10 | 9.96 × 10+05 | 3.38 × 10+06 | 4.13 × 10+05 |
| | Mean | 2.68 × 10+05 | 7.61 × 10+06 | 2.35 × 10+10 | 1.10 × 10+06 | 3.38 × 10+06 | 8.15 × 10+05 |
| | Std | 2.31 × 10+05 | 3.36 × 10+07 | 4.38 × 10+09 | 8.57 × 10+05 | 1.23 × 10+06 | 1.20 × 10+06 |
| F13 | Best | 1.46 × 10+03 | 2.97 × 10+03 | 7.15 × 10+07 | 1.68 × 10+03 | 5.35 × 10+04 | 1.36 × 10+03 |
| | Worst | 6.10 × 10+04 | 1.31 × 10+07 | 9.35 × 10+09 | 6.08 × 10+04 | 1.49 × 10+05 | 1.47 × 10+05 |
| | Median | 8.48 × 10+03 | 1.27 × 10+04 | 9.35 × 10+09 | 8.66 × 10+03 | 9.44 × 10+04 | 3.07 × 10+04 |
| | Mean | 1.72 × 10+04 | 4.53 × 10+05 | 8.56 × 10+09 | 1.60 × 10+04 | 9.61 × 10+04 | 3.99 × 10+04 |
| | Std | 1.91 × 10+04 | 2.39 × 10+06 | 2.48 × 10+09 | 1.80 × 10+04 | 2.22 × 10+04 | 3.87 × 10+04 |
| F14 | Best | 2.55 × 10+03 | 2.44 × 10+03 | 2.13 × 10+05 | 2.40 × 10+03 | 2.22 × 10+03 | 4.48 × 10+03 |
| | Worst | 4.92 × 10+04 | 7.81 × 10+04 | 2.19 × 10+05 | 1.07 × 10+05 | 3.96 × 10+04 | 8.39 × 10+04 |
| | Median | 1.24 × 10+04 | 1.63 × 10+04 | 2.15 × 10+05 | 2.56 × 10+04 | 9.95 × 10+03 | 2.68 × 10+04 |
| | Mean | 1.83 × 10+04 | 1.99 × 10+04 | 2.16 × 10+05 | 3.65 × 10+04 | 1.13 × 10+04 | 3.47 × 10+04 |
| | Std | 1.46 × 10+04 | 1.58 × 10+04 | 1.21 × 10+03 | 3.36 × 10+04 | 8.47 × 10+03 | 2.35 × 10+04 |
| F15 | Best | 1.75 × 10+03 | 1.77 × 10+03 | 3.50 × 10+03 | 1.66 × 10+03 | 1.32 × 10+04 | 1.56 × 10+03 |
| | Worst | 4.33 × 10+04 | 2.77 × 10+04 | 9.26 × 10+07 | 4.24 × 10+04 | 6.22 × 10+04 | 6.08 × 10+04 |
| | Median | 1.06 × 10+04 | 4.50 × 10+03 | 4.63 × 10+07 | 3.35 × 10+03 | 3.09 × 10+04 | 5.14 × 10+03 |
| | Mean | 1.47 × 10+04 | 6.81 × 10+03 | 4.63 × 10+07 | 7.09 × 10+03 | 3.20 × 10+04 | 1.24 × 10+04 |
| | Std | 1.36 × 10+04 | 6.62 × 10+03 | 4.71 × 10+07 | 9.26 × 10+03 | 1.13 × 10+04 | 1.90 × 10+04 |
| F16 | Best | 2.49 × 10+03 | 2.05 × 10+03 | 5.28 × 10+03 | 1.89 × 10+03 | 2.65 × 10+03 | 2.06 × 10+03 |
| | Worst | 3.62 × 10+03 | 3.56 × 10+03 | 6.42 × 10+03 | 3.57 × 10+03 | 3.56 × 10+03 | 3.26 × 10+03 |
| | Median | 2.94 × 10+03 | 2.93 × 10+03 | 5.60 × 10+03 | 3.07 × 10+03 | 3.03 × 10+03 | 2.80 × 10+03 |
| | Mean | 2.93 × 10+03 | 2.89 × 10+03 | 5.63 × 10+03 | 3.06 × 10+03 | 3.07 × 10+03 | 2.79 × 10+03 |
| | Std | 3.02 × 10+02 | 3.88 × 10+02 | 2.46 × 10+02 | 4.47 × 10+02 | 2.67 × 10+02 | 3.30 × 10+02 |
| F17 | Best | 1.83 × 10+03 | 1.95 × 10+03 | 2.80 × 10+04 | 1.77 × 10+03 | 1.76 × 10+03 | 2.00 × 10+03 |
| | Worst | 3.07 × 10+03 | 3.00 × 10+03 | 2.87 × 10+04 | 3.01 × 10+03 | 2.52 × 10+03 | 2.92 × 10+03 |
| | Median | 2.57 × 10+03 | 2.47 × 10+03 | 2.84 × 10+04 | 2.49 × 10+03 | 2.13 × 10+03 | 2.54 × 10+03 |
| | Mean | 2.57 × 10+03 | 2.46 × 10+03 | 2.83 × 10+04 | 2.45 × 10+03 | 2.12 × 10+03 | 2.49 × 10+03 |
| | Std | 3.38 × 10+02 | 2.79 × 10+02 | 1.63 × 10+02 | 3.24 × 10+02 | 2.23 × 10+02 | 2.51 × 10+02 |
| F18 | Best | 7.86 × 10+04 | 3.20 × 10+04 | 5.94 × 10+07 | 3.38 × 10+04 | 8.36 × 10+04 | 4.73 × 10+04 |
| | Worst | 5.09 × 10+05 | 5.36 × 10+05 | 6.04 × 10+07 | 7.42 × 10+05 | 6.42 × 10+05 | 6.15 × 10+05 |
| | Median | 2.05 × 10+05 | 1.41 × 10+05 | 5.97 × 10+07 | 2.50 × 10+05 | 1.63 × 10+05 | 2.31 × 10+05 |
| | Mean | 2.28 × 10+05 | 1.66 × 10+05 | 5.97 × 10+07 | 2.92 × 10+05 | 1.98 × 10+05 | 2.55 × 10+05 |
| | Std | 1.23 × 10+05 | 1.19 × 10+05 | 2.12 × 10+05 | 1.95 × 10+05 | 1.26 × 10+05 | 1.57 × 10+05 |
| F19 | Best | 2.05 × 10+03 | 2.04 × 10+03 | 1.54 × 10+09 | 1.95 × 10+03 | 8.51 × 10+04 | 1.95 × 10+03 |
| | Worst | 5.38 × 10+04 | 2.41 × 10+04 | 1.54 × 10+09 | 4.66 × 10+04 | 1.30 × 10+06 | 2.95 × 10+04 |
| | Median | 6.67 × 10+03 | 4.23 × 10+03 | 1.54 × 10+09 | 1.11 × 10+04 | 8.35 × 10+05 | 6.98 × 10+03 |
| | Mean | 1.10 × 10+04 | 5.78 × 10+03 | 1.54 × 10+09 | 1.37 × 10+04 | 7.85 × 10+05 | 7.26 × 10+03 |
| | Std | 1.03 × 10+04 | 4.64 × 10+03 | 2.79 × 10−02 | 9.90 × 10+03 | 2.57 × 10+05 | 5.35 × 10+03 |
| F20 | Best | 2.26 × 10+03 | 2.18 × 10+03 | 2.81 × 10+03 | 2.11 × 10+03 | 2.26 × 10+03 | 2.13 × 10+03 |
| | Worst | 3.10 × 10+03 | 3.18 × 10+03 | 3.19 × 10+03 | 3.10 × 10+03 | 2.72 × 10+03 | 3.02 × 10+03 |
| | Median | 2.74 × 10+03 | 2.75 × 10+03 | 3.08 × 10+03 | 2.56 × 10+03 | 2.52 × 10+03 | 2.69 × 10+03 |
| | Mean | 2.68 × 10+03 | 2.74 × 10+03 | 3.03 × 10+03 | 2.62 × 10+03 | 2.46 × 10+03 | 2.65 × 10+03 |
| | Std | 1.94 × 10+02 | 2.20 × 10+02 | 1.23 × 10+02 | 2.49 × 10+02 | 1.40 × 10+02 | 2.43 × 10+02 |
| F21 | Best | 2.42 × 10+03 | 2.42 × 10+03 | 2.74 × 10+03 | 2.41 × 10+03 | 2.41 × 10+03 | 2.45 × 10+03 |
| | Worst | 2.62 × 10+03 | 2.69 × 10+03 | 2.92 × 10+03 | 2.64 × 10+03 | 2.53 × 10+03 | 2.60 × 10+03 |
| | Median | 2.50 × 10+03 | 2.54 × 10+03 | 2.90 × 10+03 | 2.52 × 10+03 | 2.45 × 10+03 | 2.52 × 10+03 |
| | Mean | 2.51 × 10+03 | 2.55 × 10+03 | 2.87 × 10+03 | 2.51 × 10+03 | 2.46 × 10+03 | 2.51 × 10+03 |
| | Std | 4.59 × 10+01 | 7.11 × 10+01 | 6.56 × 10+01 | 6.32 × 10+01 | 3.07 × 10+01 | 3.88 × 10+01 |
| F22 | Best | 2.30 × 10+03 | 2.30 × 10+03 | 7.78 × 10+03 | 2.30 × 10+03 | 2.31 × 10+03 | 2.30 × 10+03 |
| | Worst | 8.20 × 10+03 | 8.05 × 10+03 | 1.11 × 10+04 | 8.44 × 10+03 | 7.50 × 10+03 | 7.26 × 10+03 |
| | Median | 6.49 × 10+03 | 6.55 × 10+03 | 9.81 × 10+03 | 6.86 × 10+03 | 2.31 × 10+03 | 6.41 × 10+03 |
| | Mean | 6.14 × 10+03 | 5.31 × 10+03 | 9.70 × 10+03 | 5.88 × 10+03 | 2.48 × 10+03 | 6.30 × 10+03 |
| | Std | 1.87 × 10+03 | 2.48 × 10+03 | 7.50 × 10+02 | 2.09 × 10+03 | 9.47 × 10+02 | 9.53 × 10+02 |
| F23 | Best | 2.74 × 10+03 | 2.82 × 10+03 | 3.87 × 10+03 | 2.77 × 10+03 | 2.83 × 10+03 | 2.86 × 10+03 |
| | Worst | 3.11 × 10+03 | 3.21 × 10+03 | 4.55 × 10+03 | 2.98 × 10+03 | 3.06 × 10+03 | 3.14 × 10+03 |
| | Median | 2.88 × 10+03 | 2.96 × 10+03 | 4.51 × 10+03 | 2.83 × 10+03 | 2.94 × 10+03 | 3.02 × 10+03 |
| | Mean | 2.89 × 10+03 | 2.97 × 10+03 | 4.41 × 10+03 | 2.86 × 10+03 | 2.94 × 10+03 | 3.01 × 10+03 |
| | Std | 7.74 × 10+01 | 9.42 × 10+01 | 2.29 × 10+02 | 6.50 × 10+01 | 4.70 × 10+01 | 7.03 × 10+01 |
| F24 | Best | 2.95 × 10+03 | 3.02 × 10+03 | 4.39 × 10+03 | 2.93 × 10+03 | 2.98 × 10+03 | 3.11 × 10+03 |
| | Worst | 3.27 × 10+03 | 3.25 × 10+03 | 4.43 × 10+03 | 3.31 × 10+03 | 3.18 × 10+03 | 3.49 × 10+03 |
| | Median | 3.11 × 10+03 | 3.16 × 10+03 | 4.41 × 10+03 | 3.15 × 10+03 | 3.08 × 10+03 | 3.27 × 10+03 |
| | Mean | 3.10 × 10+03 | 3.15 × 10+03 | 4.41 × 10+03 | 3.13 × 10+03 | 3.08 × 10+03 | 3.28 × 10+03 |
| | Std | 6.27 × 10+01 | 6.17 × 10+01 | 1.07 × 10+01 | 9.69 × 10+01 | 4.66 × 10+01 | 9.97 × 10+01 |
| F25 | Best | 2.88 × 10+03 | 2.88 × 10+03 | 3.64 × 10+03 | 2.88 × 10+03 | 2.90 × 10+03 | 2.88 × 10+03 |
| | Worst | 2.94 × 10+03 | 3.59 × 10+03 | 5.49 × 10+03 | 2.94 × 10+03 | 2.95 × 10+03 | 2.90 × 10+03 |
| | Median | 2.89 × 10+03 | 2.89 × 10+03 | 5.49 × 10+03 | 2.90 × 10+03 | 2.94 × 10+03 | 2.88 × 10+03 |
| | Mean | 2.90 × 10+03 | 2.93 × 10+03 | 5.43 × 10+03 | 2.90 × 10+03 | 2.94 × 10+03 | 2.88 × 10+03 |
| | Std | 1.85 × 10+01 | 1.29 × 10+02 | 3.37 × 10+02 | 1.33 × 10+01 | 1.55 × 10+01 | 5.12 × 10+00 |
| F26 | Best | 2.80 × 10+03 | 4.19 × 10+03 | 1.40 × 10+04 | 2.80 × 10+03 | 3.02 × 10+03 | 2.80 × 10+03 |
| | Worst | 8.44 × 10+03 | 9.60 × 10+03 | 1.54 × 10+04 | 7.67 × 10+03 | 8.03 × 10+03 | 8.77 × 10+03 |
| | Median | 6.70 × 10+03 | 7.53 × 10+03 | 1.50 × 10+04 | 6.34 × 10+03 | 6.66 × 10+03 | 6.35 × 10+03 |
| | Mean | 6.45 × 10+03 | 7.16 × 10+03 | 1.49 × 10+04 | 6.12 × 10+03 | 6.37 × 10+03 | 6.00 × 10+03 |
| | Std | 1.28 × 10+03 | 1.46 × 10+03 | 3.04 × 10+02 | 1.25 × 10+03 | 1.23 × 10+03 | 1.46 × 10+03 |
| F27 | Best | 3.22 × 10+03 | 3.23 × 10+03 | 3.20 × 10+03 | 3.21 × 10+03 | 3.34 × 10+03 | 3.20 × 10+03 |
| | Worst | 3.46 × 10+03 | 3.42 × 10+03 | 3.20 × 10+03 | 3.34 × 10+03 | 3.62 × 10+03 | 3.20 × 10+03 |
| | Median | 3.26 × 10+03 | 3.30 × 10+03 | 3.20 × 10+03 | 3.26 × 10+03 | 3.50 × 10+03 | 3.20 × 10+03 |
| | Mean | 3.28 × 10+03 | 3.31 × 10+03 | 3.20 × 10+03 | 3.26 × 10+03 | 3.50 × 10+03 | 3.20 × 10+03 |
| | Std | 6.05 × 10+01 | 4.70 × 10+01 | 5.65 × 10−05 | 2.66 × 10+01 | 6.81 × 10+01 | 2.56 × 10−04 |
| F28 | Best | 3.10 × 10+03 | 3.19 × 10+03 | 3.30 × 10+03 | 3.10 × 10+03 | 3.20 × 10+03 | 3.10 × 10+03 |
| | Worst | 3.26 × 10+03 | 3.63 × 10+03 | 8.27 × 10+03 | 3.26 × 10+03 | 3.36 × 10+03 | 3.30 × 10+03 |
| | Median | 3.19 × 10+03 | 3.23 × 10+03 | 8.27 × 10+03 | 3.20 × 10+03 | 3.27 × 10+03 | 3.20 × 10+03 |
| | Mean | 3.17 × 10+03 | 3.26 × 10+03 | 7.94 × 10+03 | 3.20 × 10+03 | 3.26 × 10+03 | 3.22 × 10+03 |
| | Std | 6.00 × 10+01 | 9.42 × 10+01 | 1.26 × 10+03 | 3.11 × 10+01 | 3.05 × 10+01 | 6.11 × 10+01 |
| F29 | Best | 3.85 × 10+03 | 3.73 × 10+03 | 7.67 × 10+03 | 3.53 × 10+03 | 3.82 × 10+03 | 3.39 × 10+03 |
| | Worst | 4.57 × 10+03 | 5.22 × 10+03 | 8.69 × 10+03 | 4.35 × 10+03 | 4.85 × 10+03 | 4.35 × 10+03 |
| | Median | 4.28 × 10+03 | 4.25 × 10+03 | 7.99 × 10+03 | 4.08 × 10+03 | 4.38 × 10+03 | 3.83 × 10+03 |
| | Mean | 4.27 × 10+03 | 4.26 × 10+03 | 8.07 × 10+03 | 4.06 × 10+03 | 4.36 × 10+03 | 3.86 × 10+03 |
| | Std | 1.82 × 10+02 | 3.40 × 10+02 | 1.88 × 10+02 | 2.09 × 10+02 | 2.78 × 10+02 | 2.41 × 10+02 |
| F30 | Best | 5.22 × 10+03 | 6.57 × 10+03 | 4.74 × 10+08 | 7.09 × 10+03 | 4.38 × 10+05 | 3.28 × 10+03 |
| | Worst | 1.91 × 10+04 | 1.25 × 10+09 | 5.12 × 10+08 | 2.43 × 10+04 | 3.05 × 10+06 | 5.86 × 10+04 |
| | Median | 9.46 × 10+03 | 1.16 × 10+04 | 4.99 × 10+08 | 1.22 × 10+04 | 1.53 × 10+06 | 8.29 × 10+03 |
| | Mean | 9.94 × 10+03 | 4.20 × 10+07 | 4.98 × 10+08 | 1.28 × 10+04 | 1.56 × 10+06 | 1.30 × 10+04 |
| | Std | 3.72 × 10+03 | 2.28 × 10+08 | 5.45 × 10+06 | 4.47 × 10+03 | 5.86 × 10+05 | 1.33 × 10+04 |

Note: The bolded data represents the best result for each row.
Table 8. Best results of the gear train design.

| Algorithm | x1 | x2 | x3 | x4 | f(X) |
|---|---|---|---|---|---|
| GA | 49 | 19 | 16 | 43 | 2.70 × 10−12 |
| PSO | 34 | 13 | 20 | 53 | 2.31 × 10−11 |
| CS | 43 | 16 | 19 | 49 | 2.70 × 10−12 |
| ABC | 49 | 16 | 19 | 43 | 2.70 × 10−12 |
| GWO | 49 | 19 | 16 | 43 | 2.70 × 10−12 |
| MFO | 43 | 19 | 16 | 49 | 2.70 × 10−12 |
| ISA | 43 | 19 | 16 | 49 | 2.70 × 10−12 |
| WSA | 43 | 16 | 19 | 49 | 2.70 × 10−12 |
| APSO | 43 | 16 | 19 | 49 | 2.70 × 10−12 |
| SSA | 51 | 30 | 13 | 53 | 2.31 × 10−11 |
| CLSSA | 49 | 19 | 16 | 43 | 2.70 × 10−12 |
Table 9. Best results of the pressure vessel design.

| Item | ABC | GWO | WOA | HPSO | CPSO | CDE | MCEO | SSA | CLSSA |
|---|---|---|---|---|---|---|---|---|---|
| x1 | 8.13 × 10−01 | 8.13 × 10−01 | 8.13 × 10−01 | 8.13 × 10−01 | 8.13 × 10−01 | 8.13 × 10−01 | 8.13 × 10−01 | 1.28 × 10+01 | 1.29 × 10+01 |
| x2 | 4.38 × 10−01 | 4.35 × 10−01 | 4.38 × 10−01 | 4.38 × 10−01 | 4.38 × 10−01 | 4.38 × 10−01 | 4.38 × 10−01 | 6.77 × 10+00 | 7.00 × 10+00 |
| x3 | 4.21 × 10+01 | 4.21 × 10+01 | 4.21 × 10+01 | 4.21 × 10+01 | 4.21 × 10+01 | 4.21 × 10+01 | 4.21 × 10+01 | 4.21 × 10+01 | 4.23 × 10+01 |
| x4 | 1.77 × 10+02 | 1.77 × 10+02 | 1.77 × 10+02 | 1.77 × 10+02 | 1.77 × 10+02 | 1.77 × 10+02 | 1.77 × 10+02 | 1.77 × 10+02 | 1.75 × 10+02 |
| g1(X) | 0.00 × 10+00 | −1.79 × 10−04 | −3.39 × 10−06 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | −1.13 × 10−10 | −5.09 × 10−04 | 3.11 × 10−03 |
| g2(X) | −3.59 × 10−02 | −3.30 × 10−02 | −3.59 × 10−02 | −3.58 × 10−02 | −3.60 × 10−04 | −3.58 × 10−02 | −3.76 × 10−02 | −3.61 × 10−02 | −3.43 × 10−02 |
| g3(X) | −2.30 × 10−04 | −4.06 × 10+01 | −1.25 × 10+00 | 3.12 × 10+00 | −1.19 × 10+02 | −3.71 × 10+00 | −4.73 × 10−04 | −2.46 × 10+00 | −2.63 × 10+03 |
| g4(X) | −6.34 × 10+01 | −6.32 × 10+01 | −6.34 × 10+01 | −6.34 × 10+01 | −6.33 × 10+01 | −6.34 × 10+01 | −6.34 × 10+01 | −6.30 × 10+01 | −6.49 × 10+01 |
| f(X) | 6.06 × 10+03 | 6.05 × 10+03 | 6.06 × 10+03 | 6.06 × 10+03 | 6.06 × 10+03 | 6.06 × 10+03 | 6.06 × 10+03 | 6.06 × 10+03 | 6.05 × 10+03 |
Table 10. Best results of the corrugated bulkhead design.

| Item | CS | VIG | MM3 | AEFA-CS | SSA | CLSSA |
|---|---|---|---|---|---|---|
| x1 | 37.11795 | 57.69231 | 57.69277 | 57.69231 | 57.64028 | 57.68799 |
| x2 | 33.03502 | 34.14762 | 34.13296 | 34.14762 | 34.14786 | 34.14703 |
| x3 | 37.19395 | 57.69231 | 57.55294 | 57.69231 | 57.69584 | 57.68531 |
| x4 | 0.73063 | 1.05000 | 1.05007 | 1.05000 | 1.05004 | 1.05004 |
| g1(X) | −23.35377 | −0.25839 | −240.89634 | −240.69462 | −240.45294 | −240.72862 |
| g2(X) | −1.60 × 10+01 | −2.22 × 10−16 | −1.16 × 10+01 | −1.47 × 10−05 | −9.59 × 10−01 | −1.53 × 10+00 |
| g3(X) | −1.59 × 10−03 | −9.77 × 10−15 | −6.86 × 10−05 | −5.80 × 10−09 | −8.54 × 10−04 | −1.08 × 10−04 |
| g4(X) | −4.00 × 10−04 | −5.55 × 10−16 | −2.25 × 10−03 | −6.21 × 10−09 | 1.24 × 10−05 | −1.50 × 10−04 |
| g5(X) | 3.19 × 10−01 | 0.00 × 10+00 | −7.58 × 10−05 | −6.51 × 10−13 | −4.28 × 10−05 | −4.09 × 10−05 |
| g6(X) | −4.158927 | 0.68949 | −23.41997 | −23.544687 | −23.54798517 | −23.53828095 |
| f(X) | 5.89433 | 6.84296 | −6.84584 | 6.84296 | 6.84350 | 6.84338 |