Adaptive Guided Equilibrium Optimizer with Spiral Search Mechanism to Solve Global Optimization Problems

The equilibrium optimizer (EO) is a recently developed physics-based optimization technique for complex optimization problems. Although the algorithm shows excellent exploitation capability, it still has some drawbacks, such as the tendency to fall into local optima and poor population diversity. To address these shortcomings, an enhanced EO algorithm is proposed in this paper. First, a spiral search mechanism is introduced to guide the particles to more promising search regions. Then, a new inertia weight factor is employed to mitigate the oscillation phenomena of particles. To evaluate the effectiveness of the proposed algorithm, it has been tested on the CEC2017 test suite and the mobile robot path planning (MRPP) problem and compared with several advanced metaheuristic techniques. The experimental results demonstrate that the improved EO algorithm outperforms the comparison methods in solving both numerical optimization problems and practical problems. Overall, the developed EO variant has good robustness and stability and can be considered a promising optimization tool.


Introduction
Optimization problems have gained significant attention in engineering and scientific domains. In general, the objective of an optimization problem is to achieve the best possible outcome by minimizing the corresponding objective function while keeping undesirable factors to a minimum [1]. These problems may be constrained, meaning that various constraints must be satisfied during the optimization process. Based on their characteristics, optimization problems can be classified into two categories: local optimization and global optimization. Local optimization aims to determine the optimal value within a local region [2], whereas global optimization seeks the best value over the entire search region. Global optimization is therefore more challenging than local optimization.
The most popular categories among these are swarm intelligence algorithms and physics-inspired algorithms, as they offer reliable metaphors and simple yet efficient search mechanisms. In this work, we leverage the search behavior of swarm intelligence algorithms to enhance the performance of a physics-inspired algorithm called the equilibrium optimizer (EO). EO simulates the concept of dynamic mass equilibrium in physics: within a control volume, dynamic equilibrium of mass is pursued by expelling or absorbing particles, and these actions serve as a set of operators employed during the search of the solution space. Based on these search models, EO has demonstrated its performance across a range of real-world problems, such as solar photovoltaic parameter estimation [38], feature selection [39], multi-level threshold image segmentation [40], and so on. Despite its simple search mechanism and effective search capability, the EO algorithm still suffers from limitations, such as falling into local optima and an imbalance between exploration and exploitation. To address these limitations, this paper proposes a novel variant of EO, called SSEO, by introducing an adaptive inertia weight factor and a swarm-based spiral search mechanism. The adaptive inertia weight factor is employed to enhance population diversity and strengthen the algorithm's global exploration ability, while the spiral search mechanism is introduced to expand the search space of particles. These two mechanisms work synergistically to balance the exploration and exploitation phases of the algorithm. To evaluate the performance of the proposed algorithm, 29 benchmark test functions from the IEEE CEC 2017 suite are used. The results obtained by SSEO are compared against several state-of-the-art metaheuristic algorithms, including the basic EO, spiral search-based metaheuristics, and recently proposed EO variants. The test results demonstrate that SSEO provides competitive results on almost all functions compared to the benchmark algorithms. Additionally, the SSEO algorithm is tested on the real-world MRPP problem and compared against several classical metaheuristic algorithms. Simulation results on maps with different characteristics indicate that the developed SSEO-based path planning approach finds obstacle-free paths at smaller computational cost, suggesting its promise as a path planner. The main contributions of this work can be summarized as follows:

1. Utilizing the structure of EO, an enhanced variant called SSEO is proposed, which employs two simple yet effective mechanisms to improve population diversity, convergence performance, and the balance between exploration and exploitation.

2. SSEO incorporates an adaptive inertia weight mechanism to enhance population diversity in EO and a swarm-inspired spiral search mechanism to expand the search space. The simultaneous operation of these two mechanisms ensures a stable balance between exploration and exploitation.

3. To evaluate the effectiveness and problem-solving capability of SSEO, the CEC 2017 benchmark function set is utilized. Experimental results demonstrate that the proposed algorithm outperforms the basic EO, several recently reported EO variants, and other state-of-the-art metaheuristic algorithms.

4. To investigate the ability of the proposed EO variant to solve real-world problems, it is applied to the MRPP problem. Simulation results indicate that, compared to the benchmark algorithms, SSEO provides reasonable collision-free paths for the mobile robot in different environmental settings.
The remainder of this paper is organized as follows: Section 2 provides a literature review. Section 3 introduces the search framework and mathematical model of the basic EO. Section 4 presents the developed strategies and the framework of the SSEO algorithm. Section 5 validates the effectiveness of SSEO using the CEC 2017 functions. Section 6 introduces the developed SSEO-based MRPP approach and validates its performance. Finally, Section 7 summarizes the research and outlines future research directions.

Related Work
Well-established metaheuristic algorithms are equipped with reasonable mechanisms to transition between exploration and exploitation. Global exploration allows an algorithm to comprehensively search the solution space and explore unknown regions, while local exploitation aids in fine-tuning solutions within specific areas to improve solution accuracy. The EO algorithm, a recently proposed physics-inspired metaheuristic, is based on metaphors from the field of physics. The efficiency and applicability of EO have been demonstrated on benchmark function optimization problems as well as real-world problems. However, despite EO's effective search models built on reliable metaphors, the transition from exploration to exploitation during the search process is still imperfect, resulting in limitations such as getting trapped in local optima and premature convergence.
To mitigate the inherent limitations of EO and provide a viable, efficient alternative optimization tool for the optimization community, many researchers have proposed different EO variants. Gupta et al. [41] introduced mutation strategies and additional search operators into the basic EO, yielding a variant referred to as mEO. The mutation operation is used to counter the loss of population diversity during the search process, and the additional search operators assist the population in escaping local optima. The performance of mEO was tested on 33 commonly used benchmark functions and four engineering design problems. Experimental results demonstrated that mEO effectively enhances the search capability of the EO algorithm.
Houssein et al. [42] strengthened the balance between exploration and exploitation in the basic EO algorithm by employing a dimension-hunting technique. The performance of the proposed EO variant was tested using the CEC 2020 benchmark test suite and compared with advanced metaheuristic methods. Comparative results showed the superiority of the proposed approach. Additionally, the proposed EO variant was applied to multi-level thresholding image segmentation of CT images, where comparison with a set of popular image segmentation tools showed good performance in terms of segmentation accuracy.
Liu et al. [43] introduced three new strategies into EO to improve algorithm performance. In this version of EO, Levy flight was used to enhance particle search in unknown regions, the WOA search mechanism was employed to strengthen local exploitation tendencies, and an adaptive perturbation technique was utilized to enhance the algorithm's ability to avoid local optima. The performance of the algorithm was tested on the CEC 2014 benchmark test suite and compared with several well-known algorithms. Comparative results showed that the proposed EO variant outperformed the compared algorithms in the majority of cases. Furthermore, the algorithm's capability to solve real-world problems was investigated using engineering design cases, demonstrating its practicality.
Tan et al. [44] proposed a hybrid algorithm called EWOA, which combines EO and WOA, aiming to compensate for the inherent limitations of the EO algorithm. Comparative results with the basic EO, WOA, and several classical metaheuristic algorithms showed that EWOA mitigates, to a certain extent, the tendency of the basic EO algorithm to get trapped in local optima.
Zhang et al. [45] introduced an improved EO algorithm, named ISEO, by incorporating an information exchange reinforcement mechanism to overcome the weak inter-particle information exchange in the basic EO. In ISEO, a global best-guided mechanism was employed to enhance guidance towards the equilibrium state, a reverse learning technique was utilized to assist the population in escaping local optima, and a differential mutation mechanism was expected to improve inter-particle information exchange. These three mechanisms were simultaneously embedded in EO, resulting in an overall improvement in algorithm performance. The effectiveness of ISEO was demonstrated on a large number of benchmark test functions and engineering design cases.
Minocha et al. [46] proposed an EO variant called MEO, which enhances the convergence performance of the basic EO. In MEO, the construction of the equilibrium pool was adjusted to strengthen the algorithm's search intensity, and the Levy flight technique was introduced to improve global search capability. To investigate the convergence performance of MEO, 62 benchmark functions with different characteristics and five engineering design cases were utilized. Experimental results demonstrated that MEO provides excellent robustness and convergence compared to other algorithms.
Balakrishnan et al. [47] introduced an improved version of EO, called LEO, for feature selection problems. LEO inherits the framework of EO and incorporates a Levy flight mechanism, with the expectation of providing better search capability than the basic EO. To validate the performance of LEO, the algorithm was tested on microarray cancer datasets and compared with several high-performing feature selection methods. Comparative results showed significant advantages of LEO in terms of accuracy and speed.

The Original EO
EO is a recently proposed physics-based metaheuristic algorithm designed for global optimization problems. In the initial stage, EO randomly generates a set of particles to initiate the optimization process. In EO, the concept of concentration is used to represent the state of particles, similar to particle positions in PSO. The algorithm expects the particles to achieve a state of equilibrium within a control volume; the process of striving towards this equilibrium state constitutes the optimization process, with the final equilibrium state being the optimal solution discovered by the algorithm. The EO algorithm generates the initial population in the following manner:

C_i^initial = C_min + rand_i · (C_max − C_min), i = 1, 2, . . . , N, (1)

where rand_i is a random vector in [0, 1], C_max and C_min are the boundaries of the search region, and N is the population size.
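The initialization step above can be sketched as follows (a minimal NumPy-based sketch; the function and parameter names are illustrative, not taken from the authors' code):

```python
import numpy as np

def initialize_population(n_particles, dim, c_min, c_max, seed=None):
    """Generate initial concentrations C_i = C_min + rand_i * (C_max - C_min)."""
    rng = np.random.default_rng(seed)
    # One random vector per particle, scaled into the search region
    return c_min + rng.random((n_particles, dim)) * (c_max - c_min)

# Example: 30 particles in a 10-dimensional search space bounded by [-100, 100]
pop = initialize_population(30, 10, -100.0, 100.0, seed=0)
```

Each row of `pop` is one particle's concentration vector, guaranteed to lie inside the box constraints.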
After the search process starts, the particle concentrations are updated according to the following equation:

C = C_eq + (C − C_eq) · F + (G / (λV)) · (1 − F), (2)

where C_eq is the concentration of a randomly selected particle in the equilibrium pool; F is the exponential term, responsible for adjusting the global and local search behavior; G is the generation rate, responsible for local search; λ is a random vector; and V is a constant with a value of 1. The second term in the equation is responsible for global exploration, while the third term is responsible for local exploitation. The equilibrium pool is constructed as follows:

C_eq,pool = {C_eq(1), C_eq(2), C_eq(3), C_eq(4), C_eq(ave)}, (3)

where C_eq(1), C_eq(2), C_eq(3), and C_eq(4) are the four particles with the best concentrations in the population, called the equilibrium candidates, and C_eq(ave) is the mean of these four particles. During optimization there is no prior information about the equilibrium state, so the equilibrium candidates act as equilibrium states to drive the optimization process. The exponential term F is used in EO to adjust the global and local search behavior and is calculated according to the following equation:

F = e^{−λ(t − t_0)}, (4)

where t is a nonlinear function of the number of iterations and t_0 is a parameter that adjusts the local and global search capabilities of the algorithm. t and t_0 are calculated according to the following equations, respectively:

t = (1 − Iter/Max_iter)^{a_2 · (Iter/Max_iter)}, (5)

t_0 = (1/λ) · ln(−a_1 · sign(r − 0.5) · [1 − e^{−λt}]) + t. (6)
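The construction of the equilibrium pool described above can be sketched as follows, assuming a minimization problem (names are illustrative):

```python
import numpy as np

def build_equilibrium_pool(population, fitness):
    """Select the four best particles plus their mean as the equilibrium pool."""
    order = np.argsort(fitness)            # ascending fitness: best first
    best4 = population[order[:4]]          # the four equilibrium candidates
    c_ave = best4.mean(axis=0)             # C_eq(ave): mean of the candidates
    return np.vstack([best4, c_ave])       # pool of shape (5, dim)
```

During the update step, one of the five rows is drawn at random to serve as C_eq for each particle.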
In Equation (5), Iter and Max_iter denote the current iteration and the maximum number of iterations, respectively, and the constant a_2 is responsible for adjusting the local exploitation capability of the algorithm. In Equation (6), the constant a_1 manages the global exploration capability of the algorithm. In the basic EO, a_1 and a_2 are set to 2 and 1, respectively. In addition, r and λ are random vectors in [0, 1], and sign(r − 0.5) controls the direction of the particle concentration change.
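Substituting Equation (6) into Equation (4) collapses the exponential term to the closed form F = a_1 · sign(r − 0.5) · (e^{−λt} − 1). A sketch of this computation, following the standard EO formulation (names are illustrative):

```python
import numpy as np

def exponential_term(it, max_iter, lam, r, a1=2.0, a2=1.0):
    """Exponential term F of EO: t from Eq. (5), then the closed form of Eq. (4)."""
    t = (1.0 - it / max_iter) ** (a2 * it / max_iter)   # decreases over iterations
    return a1 * np.sign(r - 0.5) * (np.exp(-lam * t) - 1.0)
```

Because |e^{−λt} − 1| ≤ 1 for λt ≥ 0, the magnitude of F is bounded by a_1, which is what lets a_1 govern the extent of global exploration.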
The generation rate G is a key factor used to fine-tune the search in a given region and improve solution accuracy. G is calculated according to the following equation:

G = G_0 · e^{−λ(t − t_0)} = G_0 · F, (7)

where F is the exponential term calculated according to Equation (4), and G_0 is the initial generation rate, calculated according to the following equation:

G_0 = GCP · (C_eq − λ · C). (8)
The generation rate control parameter GCP determines whether the generation rate participates in the concentration update process and is computed as

GCP = 0.5 · r_1 if r_2 ≥ GP, and GCP = 0 otherwise, (9)

where r_1 and r_2 are random values in [0, 1] and GP is the generation probability. From Equation (2), it can be seen that the concentration update mechanism consists of three terms: the first term is the equilibrium candidate that guides the particle update, while the second and third terms are concentration variations responsible for global exploration and local exploitation, respectively. With the help of these three behaviors, the EO algorithm is able to perform global exploration in the early stage and local exploitation in the later stage.
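Combining the concentration update of Equation (2) with the generation-rate terms above, a single EO update step can be sketched as follows (a standard-EO reconstruction; the function name and the default GP value of 0.5 are assumptions):

```python
import numpy as np

def eo_update(c, c_eq, lam, F, r1, r2, gp=0.5, v=1.0):
    """One EO concentration update: Eq. (2) with G = G0 * F."""
    gcp = 0.5 * r1 if r2 >= gp else 0.0   # generation rate control parameter
    g = gcp * (c_eq - lam * c) * F        # G = G0 * F, with G0 = GCP * (C_eq - lam*C)
    # First term: guidance by the candidate; second: exploration; third: exploitation
    return c_eq + (c - c_eq) * F + (g / (lam * v)) * (1.0 - F)
```

Note that when F shrinks toward 0 in late iterations, the update collapses onto the equilibrium candidate, which is the exploitation behavior the text describes.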

Proposed Improved EO
The effectiveness of the EO algorithm has been demonstrated in numerical optimization problems, engineering design cases, and real-world scenarios, attracting numerous researchers to apply it to problems in their respective fields. However, when faced with challenging optimization tasks, EO still exhibits insufficient exploration capability and can get trapped in local optima. To mitigate these limitations, this paper proposes two customized strategies that are embedded into the basic EO algorithm, aiming to develop a competitive optimization approach. In the proposed SSEO, an adaptive inertia weight factor is employed to enhance global exploration tendencies, while a spiral search mechanism is introduced to expand the search space. In the following sections, detailed descriptions of the two strategies are presented.
The introduction of these customized strategies in SSEO addresses the limitations of EO, resulting in a more robust optimization method. By integrating the adaptive inertia weight factor and the spiral search mechanism, SSEO significantly enhances its global exploration capability and expands the search space. These improvements give SSEO a competitive advantage, enabling it to effectively tackle complex optimization tasks. Extensive evaluations and comparisons with state-of-the-art algorithms across diverse domains, such as numerical optimization, engineering design, and real-world applications, confirm the performance of SSEO and validate its reliability and efficiency as an optimization tool.

Adaptive Inertia Weight Strategy
The basic EO algorithm employs a simple, easy-to-implement, and effective concentration-updating mechanism, which enables it to rapidly converge to the optimal or a suboptimal solution on simple optimization problems. However, on complex multimodal optimization problems, the algorithm often gets trapped in local optima during the concentration-updating process. The main reason is that the information carried by the equilibrium candidates is not fully utilized. Specifically, one distinctive feature of the basic EO is the creation of an equilibrium pool. The candidates within the equilibrium pool offer knowledge about equilibrium states, establish search patterns for the particles, and form a fundamental component of the EO algorithm. By fully harnessing the information stored in the equilibrium candidates, it is possible to guide particles towards more promising regions. In the basic EO, however, this aspect does not yield the expected results, degrading algorithm performance. Therefore, in this study we introduce an inertia weight factor on the equilibrium candidates, helping them exert a more proactive influence on the particles and consequently enhancing the particles' ability to escape local optima. The adaptive concentration update equation (Equation (10)), formed by incorporating the inertia weight, replaces the original concentration update equation, where ω is the inertia weight factor. ω is calculated by Equation (11), where µ is a constant, Iter is the current iteration, and ω_max and ω_min represent the maximum and minimum values of the inertia weight factor, respectively. To visually observe the changing trend of the proposed inertia weight factor over the iterations, Figure 1 illustrates the nonlinear decay of ω. According to Figure 1, the value of ω decreases as the iterations progress. This provides larger concentration variations to the particles in the early iterations and smaller variations in the later iterations. As a result, the algorithm can extensively explore the solution space during the initial iterations and finely adjust the promising region towards the end of the iterations.
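Since the paper's exact decay formula for ω (Equation (11)) is not reproduced here, the sketch below uses an assumed exponential decay from ω_max to ω_min purely to illustrate the nonlinear behavior shown in Figure 1; the true curve may differ:

```python
import numpy as np

def inertia_weight(it, max_iter, w_max=0.55, w_min=0.2, mu=3.0):
    """Assumed nonlinear decay of the inertia weight from w_max to w_min.

    This exponential form is an illustrative stand-in for the paper's Eq. (11);
    mu is a constant shaping the decay, as in the text.
    """
    return w_min + (w_max - w_min) * np.exp(-mu * (it / max_iter) ** 2)
```

The defaults 0.55 and 0.2 match the ω_max and ω_min values reported in the experimental setup.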

Spiral Search Strategy
The EO algorithm incorporates the concept of an equilibrium pool, which consists of five equilibrium candidates that stand in for the equilibrium state and guide the particles in their concentration updates. While this approach increases population diversity and helps the algorithm escape from local optima, it simultaneously reduces the particles' ability to finely explore a given region. In other words, the basic EO algorithm suffers from limited local exploitation capability. To enhance this capability, a spiral search mechanism is integrated into the concentration update process of the EO algorithm. This mechanism introduces a spiral movement pattern that allows particles to explore the vicinity of the current position in a more comprehensive and systematic manner. The proposed concentration update equation based on the spiral search is given in Equation (12), where D_i denotes the distance between the current particle and the equilibrium candidate, c is a constant, and l is a random value in [0, 1]. The distance D_i is calculated according to Equation (13). By incorporating the spiral search mechanism, the particles are guided to explore a given region with specific search behaviors. This integration addresses the limited local exploitation capability caused by the lack of particle interaction in the algorithm. Consequently, the algorithm's local exploitation capacity is enhanced, leading to improved solution precision: the spiral search mechanism enables the particles to efficiently delve into the local search space, allowing finer adjustments and refinements of the solutions and higher accuracy in capturing local optima.
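The spiral move can be illustrated with a WOA-style logarithmic spiral around the best equilibrium candidate. This is an assumed form of Equation (12), consistent with the distance D_i and parameters c and l described above, but not necessarily the paper's exact model:

```python
import numpy as np

def spiral_update(c_i, c_eq1, c=1.0, l=None, seed=None):
    """Logarithmic-spiral move around an equilibrium candidate (assumed form).

    c_i   : current particle concentration vector
    c_eq1 : equilibrium candidate guiding the move
    c     : constant shaping the spiral
    l     : random value in [0, 1]
    """
    if l is None:
        l = float(np.random.default_rng(seed).random())
    d = np.abs(c_eq1 - c_i)                              # distance D_i
    return d * np.exp(c * l) * np.cos(2.0 * np.pi * l) + c_eq1
```

As l varies over [0, 1], the candidate position sweeps a spiral around the guide point, which is the systematic neighborhood exploration the text describes.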

The Flowchart of SSEO
To facilitate a better understanding of the implementation details of SSEO, Figure 2 illustrates the flowchart of the algorithm. From the diagram, it can be observed that, in comparison to the basic EO algorithm, SSEO incorporates a new concentration update equation and introduces a spiral search phase. In each iteration, the algorithm updates the values of F and G by Equations (4) and (7), randomly selects a particle from the equilibrium pool, updates the concentration of the i-th particle using Equation (10), calculates the distance between the i-th particle and C_eq(1) together with the parameter l, updates the concentration using Equation (12), and selects the better of the two candidate concentrations.

Simulation Results and Discussion
In this section, the performance of the proposed SSEO algorithm is evaluated using the CEC 2017 benchmark function set. A comprehensive comparison is conducted with the basic EO algorithm and several popular EO variants to assess the effectiveness of SSEO. The following subsections provide detailed insights into the experimental setup, methodology, and analysis of the results. The experiments investigate the algorithm's performance in terms of convergence speed, solution quality, and robustness across a diverse set of optimization problems.

Benchmark Functions
In this study, we conducted a performance evaluation of the algorithm using the CEC 2017 benchmark function set. The CEC 2017 test suite comprises 29 functions with diverse characteristics, which can be categorized into four types: unimodal, multimodal, hybrid, and composition. The details of the benchmark functions in the CEC 2017 test suite are reported in Table 1. Executing the SSEO algorithm on these functions provides comprehensive insight into its performance, and the use of the CEC 2017 test suite ensures a thorough assessment of the algorithm's capability to handle various types of optimization problems.

Experimental Setup
The performance of the SSEO algorithm was compared with the basic EO and several advanced EO variants using the CEC 2017 test suite. For the compared algorithms, the parameter values reported in their original literature were used. In the SSEO algorithm, ω_max and ω_min were set to 0.55 and 0.2, respectively. The maximum number of iterations was set to 500 and the population size to 30. Each algorithm was executed 30 times on each function, and the mean and variance were recorded for evaluation. The algorithms were implemented in MATLAB 2016b. This experimental setup ensures a fair and reliable comparison of the SSEO algorithm against the other algorithms on the diverse functions of the CEC 2017 test suite. The performance of SSEO was compared against the basic EO, recently reported EO variants, metaheuristic algorithms based on the spiral search mechanism, and other well-performing metaheuristic algorithms. The EO variants included in the comparison were mEO [41], LWMEO [43], ISEO [45], and IEO [42]; the spiral search-based metaheuristics included MFO [48], DMMFO [49], and WEMFO [50]; the other well-performing metaheuristics included PSO [8] and OOSSA [51]. Table 2 lists the parameter settings of these algorithms. Table 3 lists the results obtained by the algorithms on the 30-dimensional functions, and the statistics of the corresponding Wilcoxon signed-rank tests are reported in Table 4.
Table 5 lists the results obtained by these algorithms on the 100-dimensional functions, and the statistics of the corresponding Wilcoxon signed-rank tests are shown in Table 6. The symbols "+/=/−" in Tables 4 and 6 denote better than, similar to, and inferior to, respectively. These tables provide a comprehensive evaluation of the algorithms' performance and facilitate a comparative analysis of their optimization capabilities across the test functions.
According to Table 3, the SSEO algorithm achieved the best performance on more than half of the functions. Specifically, SSEO outperformed MFO, WEMFO, DMMFO, and OOSSA on all benchmark functions. Among the 29 functions evaluated, SSEO outperformed the basic EO on 28 functions, the exception being F23. In comparison with IEO, SSEO surpassed it on 22 functions but fell behind on 7. When compared to LWMEO, SSEO outperformed it on 25 functions but was surpassed on F4 and F13. With the exception of F17, F20, and F27, SSEO demonstrated better performance than mEO on all functions. SSEO performed worse than ISEO on F27 but outperformed it on the other functions. SSEO was superior to PSO on all functions except F4. Additionally, Figure 3 presents the Friedman average ranking of the algorithms on these functions. According to Figure 3, SSEO obtained the highest ranking, followed by IEO, mEO, EO, OOSSA, PSO, DMMFO, LWMEO, WEMFO, ISEO, and MFO. Moreover, according to Table 4, the p-values are below 0.05 on the vast majority of functions, which indicates a significant difference between SSEO and the other algorithms. Based on these analyses, we can conclude that the experimental results favor the performance of SSEO over the other algorithms.

Architecture of Mobile Robot Path Planning Using SSEO
The MRPP problem for autonomous mobile robots is a pivotal issue in robotics. This problem can be transformed into an optimization problem and solved using metaheuristic algorithms. In [52], the SSA algorithm was employed for the MRPP problem; the developed algorithm was tested in different environments, and the results showed that it was able to plan reasonable obstacle-free paths for autonomous mobile robots. In [53], a PSO-based MRPP approach was developed, and comprehensive experiments validated the effectiveness of the introduced method. In [54], a navigation strategy for a mobile robot encountering stationary obstacles was proposed using the FA algorithm; simulation results showed that the method achieves the three basic objectives of path length, path smoothness, and path safety. In [55], the ABC algorithm was used for the MRPP problem to help autonomous mobile robots generate suitable paths, and its effectiveness was verified by simulations on two terrains. In [56], GWO was employed for MRPP, and simulation outcomes highlighted its favorable performance in terms of path length and obstacle avoidance. In this section, we employ SSEO to address the MRPP problem and compare it with several classical metaheuristic algorithms. This evaluation aims to assess SSEO's performance in real-world scenarios and validate its effectiveness in tackling practical optimization challenges.

Robot Path Planning Problem Description
In simple terms, the objective of the MRPP problem is to find an obstacle-free path from a starting point to a destination point. This process takes two main factors into consideration: minimizing the path length and avoiding collisions with obstacles. Based on these two factors, the objective function is devised in terms of L, the path length; σ, the penalty factor; and η, a flag variable indicating whether an interpolated point lies inside a threat area, so that the product σ · η determines whether the route collides with an obstacle. The objective function of the MRPP problem thus balances the trade-off between finding the shortest path and ensuring obstacle avoidance, combining path length minimization with collision avoidance constraints. Formulated in this way, the objective function guides the optimization algorithm, such as SSEO, to explore feasible solutions that optimize both path length and obstacle avoidance simultaneously, enabling the search for efficient, collision-free paths in real-world scenarios.
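A simplified version of this objective, with η counting waypoints that fall inside circular threat areas, can be sketched as follows (the paper's exact formulation and obstacle model may differ):

```python
import numpy as np

def path_cost(waypoints, obstacles, sigma=1e4):
    """Path length L plus penalty sigma * eta for collisions.

    waypoints : sequence of (x, y) points describing the candidate path
    obstacles : list of circular threat areas (cx, cy, radius)
    sigma     : penalty factor for points inside a threat area
    """
    pts = np.asarray(waypoints, dtype=float)
    # L: sum of segment lengths along the path
    length = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    # eta: number of waypoints lying inside any threat area
    eta = sum(
        int(np.linalg.norm(p - np.array([cx, cy])) < r)
        for p in pts
        for cx, cy, r in obstacles
    )
    return length + sigma * eta
```

A collision-free path pays only its length, while any waypoint inside a threat area incurs the large penalty σ, steering the optimizer toward feasible routes.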

Simulation Results
In this section, the SSEO algorithm is compared with several well-established metaheuristic algorithms whose performance on the MRPP problem has been validated. The MRPP problem is simulated on five maps with different characteristics to evaluate the performance of these algorithms. The selected algorithms for comparison include ABC [7], PSO [8], GWO [9], FA [10], and SSA [13], which have been widely used and studied in the field. Table 7 lists the parameter settings of the employed algorithms.
Five environment settings, derived from [51], are chosen for simulating the MRPP problem. It should be noted that the green star indicates the end point. The details of these environment settings are presented in Table 8. The path lengths obtained by the algorithms are listed in Table 9, and the corresponding trajectories are presented in Figures 5-9.
When compared with the benchmark methods, the proposed SSEO algorithm consistently discovers shorter paths in each environment setting, indicating strong global optimization capability and the ability to avoid local optima. In summary, the SSEO algorithm shows promising potential as a path planner for the MRPP problem, outperforming the benchmark methods in terms of path length optimization and collision avoidance in diverse environments.

Conclusions
This study proposes an improved variant of EO, called SSEO, by introducing an adaptive inertia weight factor and a nature-inspired spiral search mechanism. In SSEO, the overall search framework of the basic EO is retained, while the adaptive inertia weight factor and spiral search mechanism are incorporated to overcome the imbalanced exploration-exploitation trade-off and suboptimal convergence of the basic EO. The performance of the proposed SSEO algorithm is evaluated using 29 functions from the CEC 2017 benchmark test suite, and comparisons are made against the basic EO, improved EO variants, and spiral search-based metaheuristic techniques. The test results demonstrate the effectiveness of SSEO. Furthermore, the capability of the SSEO algorithm

Figure 1. Nonlinear decay of the proposed inertia weight.

Comparison of SSEO with Other Well-Performing EO-Based Methods

Figure 3. Friedman mean ranks obtained by the employed algorithms on CEC 2017 benchmark functions with 30 dimensions.

Table 1. Summary of the 29 CEC 2017 benchmark problems.

Table 2. Parameter settings of the ten algorithms.

Table 3. Comparisons of eleven algorithms on CEC 2017 benchmark functions with 30 dimensions.

Table 4. Statistical conclusions based on the Wilcoxon signed-rank test on 30-dimensional benchmark problems.

Table 5. Comparisons of eleven algorithms on CEC 2017 benchmark functions with 100 dimensions.

Table 6. Statistical conclusions based on the Wilcoxon signed-rank test on 100-dimensional benchmark problems.

Table 7. Parameter settings of the five algorithms.

Table 9. Minimum route length comparison of the SSEO-based MRPP method and comparison approaches under five environmental setups.