Article

Adaptive Guided Equilibrium Optimizer with Spiral Search Mechanism to Solve Global Optimization Problems

1 School of Information Science and Engineering, Yunnan University, Kunming 650106, China
2 Glasgow College, University of Electronic Science and Technology of China, Chengdu 611731, China
3 Research and Development Department, Youbei Technology Co., Ltd., Kunming 650011, China
4 Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon
* Author to whom correspondence should be addressed.
Biomimetics 2023, 8(5), 383; https://doi.org/10.3390/biomimetics8050383
Submission received: 17 July 2023 / Revised: 13 August 2023 / Accepted: 16 August 2023 / Published: 23 August 2023
(This article belongs to the Special Issue Nature-Inspired Computer Algorithms: 2nd Edition)

Abstract:
The equilibrium optimizer (EO) is a recently developed physics-based optimization technique for complex optimization problems. Although the algorithm shows excellent exploitation capability, it still has some drawbacks, such as the tendency to fall into local optima and poor population diversity. To address these shortcomings, an enhanced EO algorithm is proposed in this paper. First, a spiral search mechanism is introduced to guide the particles to more promising search regions. Then, a new inertia weight factor is employed to mitigate the oscillation phenomena of particles. To evaluate the effectiveness of the proposed algorithm, it has been tested on the CEC2017 test suite and the mobile robot path planning (MRPP) problem and compared with some advanced metaheuristic techniques. The experimental results demonstrate that our improved EO algorithm outperforms the comparison methods in solving both numerical optimization problems and practical problems. Overall, the developed EO variant has good robustness and stability and can be considered as a promising optimization tool.

1. Introduction

Optimization problems have gained significant attention in engineering and scientific domains. In general, the objective of an optimization problem is to achieve the best possible outcome by minimizing (or maximizing) a corresponding objective function [1]. These problems may be constrained, meaning that various constraints must be satisfied during the optimization process. Based on their characteristics, optimization problems can be classified into two categories: local optimization and global optimization. Local optimization aims to determine the optimal value within a local region [2], whereas global optimization aims to find the optimal value over the entire feasible region. Therefore, global optimization is more challenging than local optimization.
To address various types of global optimization problems, numerous optimization techniques have been developed [3]. Among them, metaheuristic algorithms have gained widespread attention because they are gradient-free, require no prior information about the problem, and offer high flexibility. Metaheuristic algorithms provide acceptable solutions at relatively low computational cost [4]. Based on their sources of inspiration, metaheuristic algorithms can be classified into four categories: swarm intelligence algorithms, evolutionary optimization algorithms, physics-inspired algorithms, and human-inspired algorithms [5,6]. Swarm intelligence optimization algorithms simulate the cooperative behavior observed in animal populations in nature. Examples include Artificial Bee Colony (ABC) [7], Particle Swarm Optimization (PSO) [8], Grey Wolf Optimization (GWO) [9], the Firefly Algorithm (FA) [10], Ant Colony Optimization (ACO) [11], the Harris Hawks Optimization algorithm (HHO) [12], and the Salp Swarm Algorithm (SSA) [13]. The second category draws inspiration from the concept of natural evolution. These algorithms include, but are not limited to, Evolution Strategy (ES) [14], Differential Evolution (DE) [15], the Backtracking Search Algorithm (BSA) [16], Stochastic Fractal Search (SFS) [17], and Wildebeests Herd Optimization (WHO) [18]. The third class of metaheuristic algorithms is inspired by concepts from physics; examples include Simulated Annealing (SA) [19], Big Bang-Big Crunch (BB-BC) [20], Central Force Optimization (CFO) [21], Intelligent Water Drops (IWD) [22], the Slime Mold Algorithm (SMA) [23], the Gravitational Search Algorithm (GSA) [24], the Black Hole Algorithm (BHA) [25], the Water Cycle Algorithm (WCA) [26], and the Lightning Search Algorithm (LSA) [27].
As these physics-inspired algorithms proved to be effective in engineering and science, more similar algorithms were developed, such as Multi-Verse Optimizer (MVO) [28], Thermal Exchange Optimization (TEO) [29], Henry Gas Solubility Optimization (HGSO) [30], Equilibrium Optimizer (EO) [31], Archimedes Optimization Algorithm (AOA) [32], and Special Relativity Search (SRS) [33]. The last class of metaheuristic techniques simulates human behavior, such as Seeker Optimization Algorithm (SOA) [34], Imperialist Competitive Algorithm (ICA) [35], Brain Storm Optimization (BSO) [36], and Teaching-Learning-Based Optimization (TLBO) [37].
The most popular categories among these are swarm intelligence algorithms and physics-inspired algorithms, as they offer reliable metaphors and simple yet efficient search mechanisms. In this work, we consider leveraging the search behavior of swarm intelligence algorithms to enhance the performance of a physics-inspired algorithm called EO. EO simulates the dynamic equilibrium concept of mass in physics. In a container, the attempt to achieve dynamic equilibrium of mass within a controlled volume is performed by expelling or absorbing particles, which are referred to as a set of operators employed during the search in the solution space. Based on these search models, EO has demonstrated its performance across a range of real-world problems, such as solar photovoltaic parameter estimation [38], feature selection [39], multi-level threshold image segmentation [40], and so on. Despite the simple search mechanism and effective search capability of the EO algorithm, it still suffers from limitations, such as falling into local optima traps and imbalanced exploration and exploitation. To address these limitations, this paper proposes a novel variant of EO called SSEO by introducing an adaptive inertia weight factor and a swarm-based spiral search mechanism. The adaptive inertia weight factor is employed to enhance population diversity and strengthen the algorithm’s global exploration ability, while the spiral search mechanism is introduced to expand the search space of particles. These two mechanisms work synergistically to achieve a balance between exploration and exploitation phases of the algorithm. To evaluate the performance of the proposed algorithm, 29 benchmark test functions from the IEEE CEC 2017 are used. The results obtained by SSEO are compared against several state-of-the-art metaheuristic algorithms, including the basic EO, spiral search mechanism-based metaheuristics, and recently proposed variants of EO. 
The test results demonstrate that SSEO provides competitive results on almost all functions compared to the benchmark algorithms. Additionally, the SSEO algorithm is tested on a real-world problem of MRPP and compared against several classical metaheuristic algorithms. Simulation results on three maps with different characteristics indicate that the developed SSEO-based path planning approach can find obstacle-free paths with smaller computational costs, suggesting its promising potential as a path planner. The main contributions of this work can be summarized as follows:
  • Utilizing the structure of EO, an enhanced variant called SSEO is proposed, which employs two simple yet effective mechanisms to improve population diversity, convergence performance, and the balance between exploration and exploitation.
  • SSEO incorporates an adaptive inertia weight mechanism to enhance population diversity in EO and a swarm-inspired spiral search mechanism to expand the search space. The simultaneous operation of these two mechanisms ensures a stable balance between exploration and exploitation.
  • To evaluate the effectiveness and problem-solving capability of SSEO, the CEC 2017 benchmark function set is utilized. Experimental results demonstrate that the proposed algorithm outperforms the basic EO, several recently reported EO variants, and other state-of-the-art metaheuristic algorithms.
  • To investigate the ability of the proposed EO variant in solving real-world problems, it is applied to address the MRPP problem. Simulation results indicate that, compared to the benchmark algorithms, SSEO can provide reasonable collision-free paths for the mobile robot in different environmental settings.
The remaining parts of this paper are organized as follows: Section 2 provides a literature review. Section 3 introduces the search framework and mathematical model of the basic EO. Section 4 reports the developed strategies and the framework of the SSEO algorithm. The validation of the SSEO algorithm’s effectiveness using CEC 2017 functions is presented in Section 5. Section 6 introduces the developed SSEO-based MRPP approach and validates its performance. Finally, Section 7 summarizes the research and extends future research directions.

2. Related Work

Well-established metaheuristic algorithms are equipped with reasonable mechanisms to transition between exploration and exploitation. Global exploration allows the algorithm to comprehensively search the solution space and explore unknown regions, while local exploitation aids in fine-tuning solutions within specific areas to improve solution accuracy. The EO algorithm, a recently proposed metaheuristic, is based on metaphors from the field of physics. Its efficiency and applicability have been demonstrated on benchmark function optimization problems as well as real-world problems. However, despite EO's attempt to design effective search models based on reliable metaphors, the transition from exploration to exploitation during the search process is still imperfect, resulting in limitations such as getting trapped in local optima and premature convergence.
To mitigate the inherent limitations of EO and provide a viable alternative efficient optimization tool for the optimization community, many researchers have made improvements and proposed different versions of EO variants. Gupta et al. [41] introduced mutation strategies and additional search operators, referred to as mEO, into the basic EO. The mutation operation is used to overcome the problem of population diversity loss during the search process, and the additional search operators assist the population in escaping local optima. The performance of mEO was tested on 33 commonly used benchmark functions and four engineering design problems. Experimental results demonstrated that mEO effectively enhances the search capability of the EO algorithm.
Houssein et al. [42] strengthened the balance between exploration and exploitation in the basic EO algorithm by employing the dimension hunting technique. The performance of the proposed EO variant was tested using the CEC 2020 benchmark test suite and compared with advanced metaheuristic methods. Comparative results showed the superiority of the proposed approach. Additionally, the proposed EO variant was applied to multi-level thresholding image segmentation of CT images. Comparative results with a set of popular image segmentation tools showed good performance in terms of segmentation accuracy.
Liu et al. [43] introduced three new strategies into EO to improve algorithm performance. In this version of EO, Levy flight was used to enhance particle search in unknown regions, the WOA search mechanism was employed to strengthen local exploitation tendencies, and the adaptive perturbation technique was utilized to enhance the algorithm’s ability to avoid local optima. The performance of the algorithm was tested on the CEC 2014 benchmark test suite and compared with several well-known algorithms. Comparative results showed that the proposed EO variant outperformed the compared algorithms in the majority of cases. Furthermore, the algorithm’s capability to solve real-world problems was investigated using engineering design cases, demonstrating its practicality in addressing real-world problems.
Tan et al. [44] proposed a hybrid algorithm called EWOA, which combines EO and WOA, aiming to compensate for the inherent limitations of the EO algorithm. Comparative results with the basic EO, WOA, and several classical metaheuristic algorithms showed that EWOA mitigates the tendency of the basic EO algorithm to get trapped in local optima to a certain extent.
Zhang et al. [45] introduced an improved EO algorithm, named ISEO, by incorporating an information exchange reinforcement mechanism to overcome the weak inter-particle information exchange capability in the basic EO. In ISEO, a global best-guided mechanism was employed to enhance the guidance towards a balanced state, a reverse learning technique was utilized to assist the population in escaping local optima, and a differential mutation mechanism was expected to improve inter-particle information exchange. These three mechanisms were simultaneously embedded in EO, resulting in an overall improved algorithm performance. The effectiveness of ISEO was demonstrated on a large number of benchmark test functions and engineering design cases.
Minocha et al. [46] proposed an EO variant called MEO, which enhances the convergence performance of the basic EO. In MEO, adjustments were made to the construction of the balance pool to strengthen the algorithm’s search intensity, and the Levy flight technique was introduced to improve global search capability. To investigate the convergence performance of MEO, 62 benchmark functions with different characteristics and five engineering design cases were utilized. Experimental results demonstrated that MEO provides excellent robustness and convergence compared to other algorithms.
Balakrishnan et al. [47] introduced an improved version of EO, called LEO, for feature selection problems. LEO inherits the framework and Levy flight mechanism of EO with the expectation of providing a better search capability in comparison to the basic EO. To validate the performance of LEO, the algorithm was tested on a microarray cancer dataset and compared with several high-performing feature selection methods. Comparative results showed significant advantages of LEO in terms of accuracy and speed compared to the compared algorithms.

3. The Original EO

EO is a recently proposed novel physics-based metaheuristic algorithm designed for addressing global optimization problems. In the initial stage, EO randomly generates a set of particles to initiate the optimization process. In EO, the concept of concentration is used to represent the state of particles, similar to the particle positions in PSO. The algorithm expects the particles to achieve a state of balance within a mass volume, and the process of striving towards this balance state constitutes the optimization process of the algorithm, with the final balanced state being the optimal solution discovered by the algorithm. The EO algorithm generates the initial population in the following manner:
$$C_i^{initial} = C_{min} + rand_i \cdot (C_{max} - C_{min}), \quad i = 1, 2, \ldots, N$$
where $rand_i$ is a random vector in [0, 1], $C_{max}$ and $C_{min}$ are the upper and lower boundaries of the search region, and $N$ is the population size.
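The initialization step above can be sketched in a few lines of NumPy. This is an illustration only; the function name and argument layout are our own, not the authors' code:

```python
import numpy as np

def initialize_population(n_particles, dim, c_min, c_max, rng=None):
    """Random uniform initialization of particle concentrations (Eq. (1)):
    C_i = C_min + rand_i * (C_max - C_min), one row per particle."""
    rng = np.random.default_rng(rng)
    return c_min + rng.random((n_particles, dim)) * (c_max - c_min)
```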
After the search process is started, the initial particles are updated in concentration according to the following equation:
$$C = C_{eq} + (C - C_{eq}) \cdot F + \frac{G}{\lambda V}(1 - F)$$
where $C_{eq}$ is the concentration of a randomly selected particle in the equilibrium pool; $F$ represents the exponential term, responsible for adjusting the global and local search behavior; $G$ represents the generation rate, responsible for local search; $\lambda$ is a random vector; and $V$ is a constant with a value of 1. The second term in the equation is responsible for global exploration, while the third term is responsible for local exploitation. The equilibrium pool is constructed as follows:
$$C_{eq,pool} = \{ C_{eq(1)}, C_{eq(2)}, C_{eq(3)}, C_{eq(4)}, C_{eq(ave)} \}$$
where $C_{eq(1)}$, $C_{eq(2)}$, $C_{eq(3)}$, and $C_{eq(4)}$ are the four particles with the best concentrations in the population, called equilibrium candidates, and $C_{eq(ave)}$ is the mean of these four particles. During the optimization process, there is a lack of information about the true equilibrium state, so the equilibrium candidates act as equilibrium states to drive the optimization process.
The exponential term $F$ is used in EO to adjust the global and local search behavior and is calculated according to the following equation:
$$F = e^{-\lambda (t - t_0)}$$
where $t$ is a nonlinear function of the iteration count, and $t_0$ is a parameter that adjusts the local and global search capabilities of the algorithm. $t$ and $t_0$ are calculated according to the following equations, respectively.
$$t = \left( 1 - \frac{Iter}{Max\_iter} \right)^{\left( a_2 \cdot \frac{Iter}{Max\_iter} \right)}$$
$$t_0 = \frac{1}{\lambda} \ln \left( -a_1 \, sign(r - 0.5) \left[ 1 - e^{-\lambda t} \right] \right) + t$$
In Equation (5), $Iter$ and $Max\_iter$ denote the current iteration and the set maximum number of iterations, respectively, and the constant $a_2$ is responsible for adjusting the local exploitation capability of the algorithm. In Equation (6), the constant $a_1$ is responsible for managing the global exploration capability of the algorithm. In the basic EO, $a_1$ and $a_2$ are set to 2 and 1, respectively. In addition, $r$ and $\lambda$ are random vectors in [0, 1]. Correspondingly, $sign(r - 0.5)$ controls the direction of the particle concentration change.
The generation rate $G$ is the key factor used to fine-tune solutions in a given region and improve solution accuracy. $G$ is calculated according to the following equation:
$$G = G_0 \, e^{-\lambda (t - t_0)} = G_0 F$$
where $F$ is the exponential term, calculated according to Equation (4), and $G_0$ is the initial value, calculated according to the following equation.
$$G_0 = GCP \left( C_{eq} - \lambda C \right)$$
$$GCP = \begin{cases} 0.5\, r_1, & r_2 \geq GP \\ 0, & r_2 < GP \end{cases}$$
where $r_1$ and $r_2$ are random values in [0, 1], and the generation rate control factor $GCP$ controls whether the generation rate participates in the concentration update process; $GP$ is the generation probability.
From Equation (2), it can be seen that the concentration update mechanism consists of three terms. The first term is the equilibrium candidate that guides the particle update; the second and third terms are the concentration-variation terms, responsible for global exploration and local exploitation, respectively. With the help of these three behaviors, the EO algorithm is able to perform global exploration in the early stage and local exploitation in the later stage.
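The update described by Equations (2)-(9) can be condensed into a short Python sketch. This is an illustrative reconstruction, not the authors' code: the values $a_1 = 2$, $a_2 = 1$, and $GP = 0.5$ follow the original EO paper, and the exponential term $F$ is computed in its closed form (obtained by substituting Equation (6) into Equation (4)) to avoid taking the logarithm of a negative quantity:

```python
import numpy as np

def eo_update(c, eq_pool, it, max_it, a1=2.0, a2=1.0, gp=0.5, v=1.0, rng=None):
    """One EO concentration update for a single particle `c` (1-D array)."""
    rng = np.random.default_rng(rng)
    dim = c.size
    c_eq = eq_pool[rng.integers(len(eq_pool))]     # random equilibrium candidate, Eq. (3)
    lam = rng.random(dim)                          # turnover rate λ ∈ [0, 1]
    r = rng.random(dim)
    t = (1.0 - it / max_it) ** (a2 * it / max_it)  # Eq. (5)
    # Closed form of the exponential term: substituting t0 (Eq. (6)) into
    # F = e^{-λ(t - t0)} (Eq. (4)) yields F = a1·sign(r - 0.5)·(e^{-λt} - 1).
    f = a1 * np.sign(r - 0.5) * (np.exp(-lam * t) - 1.0)
    gcp = 0.5 * rng.random() if rng.random() >= gp else 0.0  # Eq. (9)
    g0 = gcp * (c_eq - lam * c)                    # Eq. (8)
    g = g0 * f                                     # Eq. (7), since G = G0·F
    return c_eq + (c - c_eq) * f + (g / (lam * v)) * (1.0 - f)  # Eq. (2)
```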

4. Proposed Improved EO

The effectiveness of the EO algorithm has been demonstrated in numerical optimization problems, engineering design cases, and real-world scenarios, attracting numerous researchers to apply this algorithm to solve problems in their respective fields. However, when faced with challenging optimization tasks, EO still exhibits insufficient exploration capabilities and can get trapped in local optima. To mitigate these limitations, this paper proposes two customized strategies that are embedded into the basic EO algorithm, aiming to develop a competitive optimization approach. In the proposed SSEO, adaptive inertia weight factors are employed to enhance global exploration tendencies, while a spiral search mechanism is introduced to expand the search space. In the following sections, detailed descriptions of the two strategies utilized in this study are presented.
The introduction of customized strategies in SSEO addresses the limitations of EO, resulting in a more robust optimization solution. By integrating adaptive inertia weight factors and the spiral search mechanism, SSEO significantly enhances its global exploration capabilities and expands the search space. These improvements provide SSEO with a competitive advantage, enabling it to effectively tackle complex optimization tasks. Extensive evaluations and comparisons with state-of-the-art algorithms across diverse domains, such as numerical optimization, engineering design, and real-world applications, confirm the exceptional performance of SSEO. These experimental findings validate the reliability and efficiency of SSEO as a powerful optimization tool.

4.1. Adaptive Inertia Weight Strategy

The basic EO algorithm employs a simple, easy-to-implement, and effective concentration-updating mechanism, which enables it to rapidly converge to the optimal or a suboptimal solution on simple optimization problems. However, on complex multimodal problems, the algorithm often gets trapped in local optima during the concentration update. The main reason is that the information carried by the equilibrium candidates is not fully utilized. A distinctive feature of the basic EO is the equilibrium pool: the candidates within it offer knowledge about equilibrium states and establish search patterns for particles, forming a fundamental component of the algorithm. By fully harnessing the information stored in the equilibrium candidates, particles can be guided towards more promising regions. In the basic EO, however, this aspect does not yield the expected results, degrading algorithm performance. Therefore, in this study, we introduce an inertia weight factor applied to the equilibrium candidates, helping them exert a more proactive influence on the particles and consequently enhancing the particles' ability to escape local optima. The adaptive concentration update equation formed by incorporating the inertia weight, Equation (10), replaces the original concentration update equation. The novel mathematical model for concentration updating is as follows:
$$C = \omega C_{eq} + (C - C_{eq}) \cdot F + \frac{G}{\lambda V}(1 - F)$$
where $\omega$ is the inertia weight factor, calculated according to the following equation:
$$\omega = (\omega_{max} - \omega_{min}) \cdot \frac{e^{-10 \mu \cdot Iter} - 2}{e^{-10 \mu \cdot Iter} + 2} + \omega_{max}$$
where $\mu$ is a constant, $Iter$ is the current iteration, and $\omega_{max}$ and $\omega_{min}$ represent the maximum and minimum values of the inertia weight factor, respectively.
To visually observe the changing trend of the proposed inertia weight factor over the iterations, Figure 1 illustrates the nonlinear decay of $\omega$. According to Figure 1, the value of $\omega$ decreases as the iterations progress. This provides larger concentration variations to the particles in the early stages of the search and smaller variations in the later stages. As a result, the algorithm can extensively explore the solution space during the initial iterations and finely exploit promising regions towards the end.
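Assuming the decay formula above is reconstructed correctly (the value of μ below is hypothetical, chosen only so the decay is visible over a few hundred iterations), the inertia weight can be computed as:

```python
import numpy as np

def inertia_weight(it, w_max=0.55, w_min=0.2, mu=0.001):
    """Adaptive inertia weight (the ω equation above): decays nonlinearly
    from near w_max towards w_min as the iteration count grows."""
    e = np.exp(-10.0 * mu * it)
    return (w_max - w_min) * (e - 2.0) / (e + 2.0) + w_max
```

With these parameters, ω starts around 0.43 and approaches ω_min = 0.2 for large iteration counts, matching the decreasing trend shown in Figure 1.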

4.2. Spiral Search Strategy

The EO algorithm incorporates the concept of an equilibrium pool, which consists of five equilibrium candidates that stand in for the equilibrium state and guide the particles' concentration updates. While this approach increases population diversity and helps the algorithm escape from local optima, it simultaneously reduces the particles' ability to finely search a given region. In other words, the basic EO algorithm has limited local exploitation capability. To enhance this capability, a spiral search mechanism is integrated into the concentration update process of the EO algorithm. This mechanism introduces a spiral movement pattern that allows particles to explore the vicinity of the current position in a more comprehensive and systematic manner. The proposed spiral-search-based concentration update equation is formulated as follows:
$$C_i = D_i \cdot e^{cl} \cdot \cos(2\pi \cdot rand) + C_{eq}$$
where $D_i$ denotes the distance between the current particle and the equilibrium candidate, $c$ is a constant, and $l$ is a random value in [0, 1]. The distance $D_i$ is calculated according to the following equation:
$$D_i = \left| C_{eq} - C_i \right|$$
By incorporating the spiral search mechanism, the particles are guided to explore a given region with a specific search behavior. This integration addresses the limited local exploitation capability caused by the lack of particle interaction in the algorithm. Consequently, the algorithm's local exploitation capacity is enhanced, leading to improved solution precision: the spiral search mechanism enables the particles to efficiently delve into the local search space, allowing finer adjustments and refinements of the solutions and higher accuracy in capturing local optima.
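A sketch of the spiral update, assuming the multiplicative logarithmic-spiral form shown above (as in WOA-style spiral search) and an illustrative value for the constant c:

```python
import numpy as np

def spiral_update(c, c_eq, spiral_c=1.0, rng=None):
    """Spiral concentration update: move around the equilibrium candidate
    on a logarithmic spiral scaled by the distance D_i = |C_eq - C_i|."""
    rng = np.random.default_rng(rng)
    d = np.abs(c_eq - c)          # distance to the equilibrium candidate
    l = rng.random()              # l ∈ [0, 1]
    return d * np.exp(spiral_c * l) * np.cos(2.0 * np.pi * rng.random()) + c_eq
```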

4.3. The Flowchart of SSEO

To facilitate a better understanding of the implementation details of SSEO, Figure 2 illustrates the flowchart of the SSEO. From the diagram, it can be observed that, in comparison to the basic EO algorithm, SSEO incorporates a new concentration update equation and introduces a spiral search phase.
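Putting the two strategies together, the overall SSEO iteration can be sketched as follows. This is a reconstruction, not the authors' MATLAB code: how the algorithm alternates between the weighted EO phase and the spiral phase is specified in Figure 2, so the 50/50 random phase switch and the greedy keep-if-better selection below are our assumptions, illustrated on a simple sphere function with $a_1 = 2$, $a_2 = 1$, $GP = 0.5$, and $V = 1$:

```python
import numpy as np

def sseo(obj, dim, c_min, c_max, n=30, max_it=500,
         a1=2.0, a2=1.0, gp=0.5, w_max=0.55, w_min=0.2, mu=0.001,
         spiral_c=1.0, seed=0):
    """Minimal SSEO sketch: EO update with inertia weight plus a spiral phase."""
    rng = np.random.default_rng(seed)
    pop = c_min + rng.random((n, dim)) * (c_max - c_min)
    fit = np.apply_along_axis(obj, 1, pop)
    for it in range(1, max_it + 1):
        # Equilibrium pool: the four best particles plus their mean
        idx = np.argsort(fit)[:4]
        pool = np.vstack([pop[idx], pop[idx].mean(axis=0)])
        e = np.exp(-10.0 * mu * it)
        w = (w_max - w_min) * (e - 2.0) / (e + 2.0) + w_max   # inertia weight
        t = (1.0 - it / max_it) ** (a2 * it / max_it)
        for i in range(n):
            c_eq = pool[rng.integers(5)]
            lam, r = rng.random(dim), rng.random(dim)
            f = a1 * np.sign(r - 0.5) * (np.exp(-lam * t) - 1.0)  # closed-form F
            gcp = 0.5 * rng.random() if rng.random() >= gp else 0.0
            g = gcp * (c_eq - lam * pop[i]) * f
            if rng.random() < 0.5:      # weighted EO phase (assumed 50/50 split)
                cand = w * c_eq + (pop[i] - c_eq) * f + (g / lam) * (1.0 - f)
            else:                       # spiral search phase
                d = np.abs(c_eq - pop[i])
                cand = d * np.exp(spiral_c * rng.random()) * np.cos(
                    2.0 * np.pi * rng.random()) + c_eq
            cand = np.clip(cand, c_min, c_max)
            cf = obj(cand)
            if cf < fit[i]:             # greedy selection keeps the better one
                pop[i], fit[i] = cand, cf
    best = np.argmin(fit)
    return pop[best], fit[best]
```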

5. Simulation Results and Discussion

In this section, the performance of the proposed SSEO algorithm was evaluated using the CEC2017 benchmark function set. A comprehensive comparison was conducted with the basic EO algorithm and several popular EO variants to assess the effectiveness of SSEO. The following section will provide detailed insights into the experimental setup, methodology, and analysis of the results. The experiments aimed to investigate the algorithm’s performance in terms of convergence speed, solution quality, and robustness across a diverse set of optimization problems.

5.1. Benchmark Functions

In this study, we conducted a performance evaluation of the algorithm using the CEC 2017 benchmark function set. The CEC 2017 test suite comprises 29 functions with diverse characteristics, which can be categorized into four types: unimodal, multimodal, hybrid, and composition functions. The details of the benchmark functions in the CEC 2017 test suite are reported in Table 1. Executing the SSEO algorithm on these functions provides comprehensive insights into the algorithm's performance, and the use of the CEC 2017 test suite ensures a thorough assessment of its capability to handle various types of optimization problems.

5.2. Experimental Setup

The performance of the SSEO algorithm was compared with the basic EO and several advanced EO variants on the CEC 2017 test suite. For the compared algorithms, the parameter values from their original publications were used. In the SSEO algorithm, $\omega_{max}$ and $\omega_{min}$ were set to 0.55 and 0.2, respectively. The maximum number of iterations was set to 500, and the population size was set to 30. Each algorithm was executed 30 times on each function, and the mean and variance were recorded for evaluation purposes. All algorithms were implemented in MATLAB R2016b. This experimental setup ensured a fair and reliable comparison of the SSEO algorithm against the other algorithms on the diverse functions in the CEC 2017 test suite.

5.3. Comparison of SSEO with Other Well-Performing EO-Based Methods

In this section, we conducted experiments using the SSEO algorithm on the 29 functions from the CEC 2017 test suite. The performance of SSEO was compared against the basic EO, recently reported EO variants, metaheuristic algorithms based on the spiral search mechanism, and other well-performing metaheuristic algorithms. The EO variants included in the comparison were mEO [41], LWMEO [43], ISEO [45], and IEO [42]. The metaheuristic algorithms based on the spiral search mechanism included MFO [48], DMMFO [49], and WEMFO [50]. Well-performing metaheuristic algorithms included PSO [8] and OOSSA [51]. Table 2 lists the parameter settings of seven algorithms. Table 3 lists the results obtained by these algorithms on 30-dimensional functions, and the statistics of the corresponding Wilcoxon signed rank tests are reported in Table 4. Table 5 lists the results obtained by these algorithms on 100-dimensional functions, and the statistics of the corresponding Wilcoxon signed rank tests are shown in Table 6. The symbols “+/=/−” in Table 4 and Table 6 represent better than, similar to, and inferior to, respectively. These tables provide a comprehensive evaluation of the algorithms’ performance and facilitate a comparative analysis of their optimization capabilities across various test functions.
According to Table 3, the SSEO algorithm achieved the best performance on more than half of the functions. Specifically, SSEO outperformed MFO, WEMFO, DMMFO, and OOSSA on all benchmark functions. Among the 29 functions evaluated, SSEO outperformed the basic EO on 28 functions, the exception being F23. In comparison to IEO, SSEO surpassed it on 22 functions but fell behind on 7. When compared to LWMEO, SSEO outperformed it on 25 functions but was surpassed on F4 and F13. With the exception of F17, F20, and F27, SSEO demonstrated better performance than mEO on all functions. SSEO performed worse than ISEO on F27 but outperformed it on the other functions. SSEO was superior to PSO on all functions except F4. Additionally, Figure 3 presents the Friedman average ranking results of the algorithms on these functions. According to Figure 3, SSEO obtained the highest ranking, followed by IEO, mEO, EO, OOSSA, PSO, DMMFO, LWMEO, WEMFO, ISEO, and MFO. Moreover, according to Table 4, the p-values of the Wilcoxon signed-rank test are below 0.05 on the vast majority of functions, which indicates a significant difference between SSEO and the other algorithms. Based on these analyses, we can conclude that the experimental results favor the performance of SSEO over the other algorithms.
From the results reported in Table 5, it can be observed that SSEO achieved the best efficiency on most of the benchmark functions. In pairwise comparisons, SSEO outperformed the basic EO across all test cases and beat LWMEO, ISEO, MFO, WEMFO, and DMMFO across all benchmark functions. SSEO exhibited better performance than IEO on a significant number of functions and inferior performance in a few cases. Except for F11, SSEO performed better than mEO on all functions. SSEO was inferior to OOSSA on one function but surpassed it on the other 28. Regarding PSO, SSEO was inferior on F25 but outperformed it on the remaining functions. The Friedman average ranking results of these algorithms on the 29 100-dimensional functions are plotted in Figure 4: the proposed SSEO obtained the highest ranking, followed by IEO, OOSSA, mEO, PSO, EO, LWMEO, WEMFO, DMMFO, ISEO, and MFO. Furthermore, based on Table 6, the p-values of the Wilcoxon signed-rank test are below 0.05 in almost all cases, showing a significant difference between the SSEO algorithm and the comparative algorithms.
Based on these analyses, we can conclude that the experimental results consistently support the superior performance of SSEO over the other algorithms.
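As a concrete illustration, the Friedman mean ranks reported in Figures 3 and 4 are obtained by ranking the algorithms on each function (rank 1 = best mean error, ties sharing the average rank) and averaging the ranks per algorithm. The sketch below is a minimal pure-Python version of that procedure; the toy score matrix in the usage example is invented for illustration, not the paper's data.

```python
def friedman_mean_ranks(scores):
    """scores[i][j] = mean error of algorithm j on function i (lower is better).
    Returns the Friedman mean rank of each algorithm (1 = best); tied values
    share the average of the ranks they span."""
    n_funcs = len(scores)
    n_algs = len(scores[0])
    totals = [0.0] * n_algs
    for row in scores:
        order = sorted(range(n_algs), key=lambda j: row[j])
        ranks = [0.0] * n_algs
        i = 0
        while i < n_algs:
            # find the tied group starting at position i
            k = i
            while k + 1 < n_algs and row[order[k + 1]] == row[order[i]]:
                k += 1
            avg = (i + k) / 2 + 1          # 1-based average rank of the group
            for idx in order[i:k + 1]:
                ranks[idx] = avg
            i = k + 1
        for j in range(n_algs):
            totals[j] += ranks[j]
    return [t / n_funcs for t in totals]

# toy example: 3 algorithms on 4 functions (algorithm 0 is usually best)
scores = [
    [1.0, 2.0, 3.0],
    [1.0, 3.0, 2.0],
    [2.0, 1.0, 3.0],
    [1.0, 2.0, 3.0],
]
print(friedman_mean_ranks(scores))  # → [1.25, 2.0, 2.75]
```

The Wilcoxon signed-rank p-values in Tables 4 and 6 would then be computed pairwise, SSEO against each competitor, over the same per-function score vectors.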

6. Architecture of Mobile Robot Path Planning Using SSEO

The MRPP problem for autonomous mobile robots is a pivotal issue in robotics. It can be cast as an optimization problem and solved with metaheuristic algorithms. In [52], the SSA algorithm was employed for the MRPP problem; the developed algorithm was tested in different environments, and the results showed that it could plan reasonable obstacle-free paths for autonomous mobile robots. In [53], a PSO-based MRPP approach was developed, and comprehensive experiments validated its effectiveness. In [54], a navigation strategy based on the FA algorithm was proposed for a mobile robot encountering stationary obstacles; simulation results showed that the method achieves the three basic objectives of path length, path smoothness, and path safety. In [55], the ABC algorithm was applied to the MRPP problem to help autonomous mobile robots generate suitable paths, and its effectiveness was verified by simulations on two terrains. In [56], GWO was employed for MRPP, and simulation outcomes highlighted its favorable performance in terms of path length and obstacle avoidance. In this section, we employ SSEO to address the MRPP problem and compare it with several classical metaheuristic algorithms. This evaluation aims to assess SSEO's performance in real-world scenarios, validate its effectiveness on practical optimization challenges, and offer insights for its practical applications.

6.1. Robot Path Planning Problem Description

In simple terms, the objective of the MRPP problem is to find an obstacle-free path from a starting point to a destination point. This process takes into consideration two main factors: minimizing the path length and avoiding collisions with obstacles. Based on these two factors, the following objective function has been devised:
OV = L × (1 + σ · η)
where L is the path length, σ is the penalty factor, and η is a binary flag indicating whether any interpolated point of the path lies inside a threat region; the term σ · η therefore activates the penalty whenever the route collides with an obstacle.
The objective function of the MRPP problem balances the trade-off between finding the shortest path and ensuring obstacle avoidance: it combines path length minimization with a collision penalty. Formulated this way, it guides the optimization algorithm, such as SSEO, toward feasible solutions that optimize both criteria simultaneously, enabling the search for efficient, collision-free paths in realistic scenarios.
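To make the formulation concrete, the sketch below implements this objective for a 2-D path given as waypoints, with circular obstacles. The penalty factor σ = 100, the circular obstacle model, and the per-segment sampling density are illustrative assumptions, not the paper's exact settings.

```python
import math

def path_length(waypoints):
    """Euclidean length of a piecewise-linear path given as (x, y) tuples."""
    return sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))

def collision_flag(waypoints, obstacles, samples_per_segment=20):
    """eta: 1 if any interpolated point along the path falls inside a
    circular obstacle (cx, cy, r), else 0."""
    for (x1, y1), (x2, y2) in zip(waypoints, waypoints[1:]):
        for s in range(samples_per_segment + 1):
            t = s / samples_per_segment
            px, py = x1 + t * (x2 - x1), y1 + t * (y2 - y1)
            if any(math.hypot(px - cx, py - cy) < r for cx, cy, r in obstacles):
                return 1
    return 0

def objective_value(waypoints, obstacles, sigma=100.0):
    """OV = L * (1 + sigma * eta): plain length for a collision-free path,
    heavily penalized otherwise."""
    L = path_length(waypoints)
    eta = collision_flag(waypoints, obstacles)
    return L * (1 + sigma * eta)

obstacles = [(5.0, 0.0, 1.0)]                  # one circle at (5, 0), radius 1
blocked = [(0.0, 0.0), (10.0, 0.0)]            # straight line through the obstacle
detour = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]
print(objective_value(blocked, obstacles))     # → 1010.0 (length 10, penalized)
print(objective_value(detour, obstacles))      # equals the detour's plain length
```

Because a colliding path scores at least (1 + σ) times its length, any collision-free candidate dominates it, which is what steers the optimizer toward obstacle-free routes.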

6.2. Simulation Results

In this section, the SSEO algorithm is compared with several well-established metaheuristic algorithms whose performance on the MRPP problem has been validated. The MRPP problem is simulated on five different maps to evaluate the performance of these algorithms. The selected comparison algorithms, ABC [7], PSO [8], GWO [9], FA [10], and SSA [13], have been widely used and studied in the field. Table 7 lists the parameter settings of the six algorithms.
Five environment settings, derived from [51], are chosen for simulating the MRPP problem. It should be noted that the green star indicates the end point. The details of these environment settings are presented in Table 8. The path lengths obtained by the algorithm are shown in Table 9, and the corresponding trajectories are presented in Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9.
According to Table 9, the SSEO algorithm consistently produces the shortest trajectory lengths among the compared algorithms across all environment settings, indicating strong global optimization capability and the ability to escape local optima. In summary, the SSEO algorithm shows promising potential as a path planner for the MRPP problem, outperforming the benchmark methods in terms of path length optimization and collision avoidance in diverse environments.

7. Conclusions

This study proposes an improved variant of EO, called SSEO, by introducing an adaptive inertia weight factor and a nature-inspired spiral search mechanism. In SSEO, the overall search framework of the basic EO is retained while incorporating the adaptive inertia weight factor and spiral search mechanism to overcome the imbalanced exploitation–exploration trade-off and suboptimal convergence performance encountered by the basic EO. The performance of the proposed SSEO algorithm is evaluated using 29 functions from the CEC 2017 benchmark test suite, and comparisons are made against the basic EO, improved EO variants, and spiral search-based metaheuristic techniques. The test results demonstrate the effectiveness of SSEO. Furthermore, the capability of the SSEO algorithm to solve real-world problems is tested using an MRPP problem. Comparative results with several classical metaheuristic algorithms reveal that SSEO is a promising path planner.
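For readers who want a feel for the spiral search mechanism summarized here, the sketch below shows a generic logarithmic-spiral move toward the current best solution, in the style popularized by spiral-based metaheuristics; the shape constant b and the uniform draw of l are illustrative assumptions, and this is not SSEO's exact update equation.

```python
import math
import random

def spiral_step(x, best, b=1.0):
    """One logarithmic-spiral move of candidate x toward the best solution:
    x'_d = |best_d - x_d| * exp(b * l) * cos(2 * pi * l) + best_d,
    with a single l ~ U(-1, 1) shared across all dimensions d."""
    l = random.uniform(-1.0, 1.0)
    return [abs(bd - xd) * math.exp(b * l) * math.cos(2 * math.pi * l) + bd
            for xd, bd in zip(x, best)]

# l near -1 tightens the spiral around the best solution (exploitation),
# while l near +1 widens it (exploration); a particle already at the best
# solution stays there because the distance term is zero.
```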

Author Contributions

Conceptualization, H.D. and Z.W.; methodology, Y.L. and Z.W.; software, Z.W. and G.J.; validation, H.D., Y.L. and P.H.; formal analysis, Z.W., G.J. and G.D.; investigation, Y.L. and Z.W.; resources, H.D. and G.D.; data curation, Z.W. and P.H.; writing—original draft preparation, Y.L. and Z.W.; writing—review and editing, Z.W.; visualization, H.D. and G.J.; supervision, P.H.; project administration, H.D. and P.H.; funding acquisition, P.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (Grant No. 61461053, 61461054, and 61072079); the Yunnan Provincial Education Department Scientific Research Fund Project (Grant No. 2022Y008); the Yunnan University’s Research Innovation Fund for Graduate Students, China (Grant No. KC-22222706); and the Ding Hongwei Expert Grassroots Research Station of Youbei Technology Co., Yunnan Province.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors would like to thank the editor and the anonymous reviewers for their helpful and constructive comments, which have helped us to improve the manuscript significantly.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gharehchopogh, F.S. Quantum-inspired metaheuristic algorithms: Comprehensive survey and classification. Artif. Intell. Rev. 2022, 56, 5479–5543.
  2. Wang, Z.; Ding, H.; Yang, J.; Wang, J.; Li, B.; Yang, Z.; Hou, P. Advanced orthogonal opposition-based learning-driven dynamic salp swarm algorithm: Framework and case studies. IET Control Theory Appl. 2022, 16, 945–971.
  3. Kaveh, A.; Zaerreza, A. A new framework for reliability-based design optimization using metaheuristic algorithms. Structures 2022, 38, 1210–1225.
  4. Wang, Z.; Ding, H.; Yang, J.; Hou, P.; Dhiman, G.; Wang, J.; Yang, Z.; Li, A. Orthogonal pinhole-imaging-based learning salp swarm algorithm with self-adaptive structure for global optimization. Front. Bioeng. Biotechnol. 2022, 10, 1018895.
  5. Abdel-Basset, M.; Mohamed, R.; Azeem, S.A.A.; Jameel, M.; Abouhawwash, M. Kepler optimization algorithm: A new metaheuristic algorithm inspired by Kepler’s laws of planetary motion. Knowl.-Based Syst. 2023, 268, 110454.
  6. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338.
  7. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471.
  8. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408.
  9. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  10. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. Mixed variable structural optimization using firefly algorithm. Comput. Struct. 2011, 89, 2325–2336.
  11. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
  12. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
  13. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
  14. Choi, K.; Jang, D.-H.; Kang, S.-I.; Lee, J.-H.; Chung, T.-K.; Kim, H.-S. Hybrid Algorithm Combing Genetic Algorithm with Evolution Strategy for Antenna Design. IEEE Trans. Magn. 2015, 52, 1–4.
  15. Price, K.V. Differential evolution. In Handbook of Optimization: From Classical to Modern Approach; Springer: Berlin/Heidelberg, Germany, 2013; pp. 187–214.
  16. Civicioglu, P. Backtracking Search Optimization Algorithm for numerical optimization problems. Appl. Math. Comput. 2013, 219, 8121–8144.
  17. Salimi, H. Stochastic Fractal Search: A powerful metaheuristic algorithm. Knowl.-Based Syst. 2015, 75, 1–18.
  18. Amali, D.G.B.; Dinakaran, M. Wildebeest herd optimization: A new global optimization algorithm inspired by wildebeest herding behaviour. J. Intell. Fuzzy Syst. 2019, 37, 8063–8076.
  19. Bertsimas, D.; Tsitsiklis, J. Simulated annealing. Stat. Sci. 1993, 8, 10–15.
  20. Erol, O.K.; Eksin, I. A new optimization method: Big bang–big crunch. Adv. Eng. Softw. 2006, 37, 106–111.
  21. Formato, R.A. Central force optimization. Prog. Electromagn. Res. 2007, 77, 425–491.
  22. Hosseini, H.S. The intelligent water drops algorithm: A nature-inspired swarm-based optimization algorithm. Int. J. Bio-Inspired Comput. 2009, 1, 71–79.
  23. Nguyen, T.-T.; Wang, H.-J.; Dao, T.-K.; Pan, J.-S.; Liu, J.-H.; Weng, S. An Improved Slime Mold Algorithm and its Application for Optimal Operation of Cascade Hydropower Stations. IEEE Access 2020, 8, 226754–226772.
  24. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248.
  25. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Khasawneh, A.M.; Alshinwan, M.; Mirjalili, S.; Shehab, M.; Abuaddous, H.Y.; Gandomi, A.H. Black hole algorithm: A comprehensive survey. Appl. Intell. 2022, 52, 11892–11915.
  26. Sadollah, A.; Eskandar, H.; Lee, H.M.; Yoo, D.G.; Kim, J.H. Water cycle algorithm: A detailed standard code. SoftwareX 2016, 5, 37–43.
  27. Shareef, H.; Ibrahim, A.A.; Mutlag, A.H. Lightning search algorithm. Appl. Soft Comput. 2015, 36, 315–333.
  28. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2015, 27, 495–513.
  29. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84.
  30. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667.
  31. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190.
  32. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551.
  33. Goodarzimehr, V.; Shojaee, S.; Hamzehei-Javaran, S.; Talatahari, S. Special relativity search: A novel metaheuristic method based on special relativity physics. Knowl.-Based Syst. 2022, 257, 109484.
  34. Dai, C.; Chen, W.; Zhu, Y.; Zhang, X. Seeker optimization algorithm for optimal reactive power dispatch. IEEE Trans. Power Syst. 2009, 24, 1218–1231.
  35. Kaveh, A.; Talatahari, S. Optimum design of skeletal structures using imperialist competitive algorithm. Comput. Struct. 2010, 88, 1220–1229.
  36. Cheng, S.; Qin, Q.; Chen, J.; Shi, Y. Brain storm optimization algorithm: A review. Artif. Intell. Rev. 2016, 46, 445–458.
  37. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315.
  38. Abdel-Basset, M.; Mohamed, R.; Mirjalili, S.; Chakrabortty, R.K.; Ryan, M.J. Solar photovoltaic parameter estimation using an improved equilibrium optimizer. Sol. Energy 2020, 209, 694–708.
  39. Ahmed, S.; Ghosh, K.K.; Mirjalili, S.; Sarkar, R. AIEOU: Automata-based improved equilibrium optimizer with U-shaped transfer function for feature selection. Knowl.-Based Syst. 2021, 228, 107283.
  40. Wunnava, A.; Naik, M.K.; Panda, R.; Jena, B.; Abraham, A. A novel interdependence based multilevel thresholding technique using adaptive equilibrium optimizer. Eng. Appl. Artif. Intell. 2020, 94, 103836.
  41. Gupta, S.; Deep, K.; Mirjalili, S. An efficient equilibrium optimizer with mutation strategy for numerical optimization. Appl. Soft Comput. 2020, 96, 106542.
  42. Houssein, E.H.; Helmy, B.E.-D.; Oliva, D.; Jangir, P.; Premkumar, M.; Elngar, A.A.; Shaban, H. An efficient multi-thresholding based COVID-19 CT images segmentation approach using an improved equilibrium optimizer. Biomed. Signal Process. Control 2022, 73, 103401.
  43. Liu, J.; Li, W.; Li, Y. LWMEO: An efficient equilibrium optimizer for complex functions and engineering design problems. Expert Syst. Appl. 2022, 198, 116828.
  44. Tan, W.-H.; Mohamad-Saleh, J. A hybrid whale optimization algorithm based on equilibrium concept. Alex. Eng. J. 2023, 68, 763–786.
  45. Zhang, X.; Lin, Q. Information-utilization strengthened equilibrium optimizer. Artif. Intell. Rev. 2022, 55, 4241–4274.
  46. Minocha, S.; Singh, B. A novel equilibrium optimizer based on levy flight and iterative cosine operator for engineering optimization problems. Expert Syst. 2022, 39, e12843.
  47. Balakrishnan, K.; Dhanalakshmi, R.; Akila, M.; Sinha, B.B. Improved equilibrium optimization based on Levy flight approach for feature selection. Evol. Syst. 2023, 14, 735–746.
  48. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249.
  49. Ma, L.; Wang, C.; Xie, N.-G.; Shi, M.; Ye, Y.; Wang, L. Moth-flame optimization algorithm based on diversity and mutation strategy. Appl. Intell. 2021, 51, 5836–5872.
  50. Shan, W.; Qiao, Z.; Heidari, A.A.; Chen, H.; Turabieh, H.; Teng, Y. Double adaptive weights for stabilization of moth flame optimizer: Balance analysis, engineering cases, and medical diagnosis. Knowl.-Based Syst. 2021, 214, 106728.
  51. Wang, Z.; Ding, H.; Yang, Z.; Li, B.; Guan, Z.; Bao, L. Rank-driven salp swarm algorithm with orthogonal opposition-based learning for global optimization. Appl. Intell. 2022, 52, 7922–7964.
  52. Ding, H.; Cao, X.; Wang, Z.; Dhiman, G.; Hou, P.; Wang, J.; Li, A.; Hu, X. Velocity clamping-assisted adaptive salp swarm algorithm: Balance analysis and case studies. Math. Biosci. Eng. 2022, 19, 7756–7804.
  53. Song, B.; Wang, Z.; Zou, L. An improved PSO algorithm for smooth path planning of mobile robots using continuous high-degree Bezier curve. Appl. Soft Comput. 2021, 100, 106960.
  54. Hidalgo-Paniagua, A.; Vega-Rodríguez, M.A.; Ferruz, J.; Pavón, N. Solving the multi-objective path planning problem in mobile robotics with a firefly-based approach. Soft Comput. 2017, 21, 949–964.
  55. Xu, F.; Li, H.; Pun, C.-M.; Hu, H.; Li, Y.; Song, Y.; Gao, H. A new global best guided artificial bee colony algorithm with application in robot path planning. Appl. Soft Comput. 2020, 88, 106037.
  56. Ou, Y.; Yin, P.; Mo, L. An Improved Grey Wolf Optimizer and Its Application in Robot Path Planning. Biomimetics 2023, 8, 84.
Figure 1. Nonlinear decay of the proposed inertia weight.
Figure 2. The flowchart of SSEO.
Figure 3. Friedman mean ranks obtained by the employed algorithms on CEC 2017 benchmark functions with 30 dimensions.
Figure 4. Friedman mean ranks obtained by the employed algorithms on CEC 2017 benchmark functions with 100 dimensions.
Figure 5. Map 1: (a) ABC, (b) FA, (c) GWO, (d) PSO, (e) SSA, and (f) SSEO.
Figure 6. Map 2: (a) ABC, (b) FA, (c) GWO, (d) PSO, (e) SSA, and (f) SSEO.
Figure 7. Map 3: (a) ABC, (b) FA, (c) GWO, (d) PSO, (e) SSA, and (f) SSEO.
Figure 8. Map 4: (a) ABC, (b) FA, (c) GWO, (d) PSO, (e) SSA, and (f) SSEO.
Figure 9. Map 5: (a) ABC, (b) FA, (c) GWO, (d) PSO, (e) SSA, and (f) SSEO.
Table 1. Summary of the 29 CEC 2017 benchmark problems.
Class | No. | Description | Search Range | Optimal
Unimodal | 1 | Shifted and Rotated Bent Cigar Function | [−100, 100] | 100
 | 2 | Shifted and Rotated Sum of Different Power Function | [−100, 100] | 200
 | 3 | Shifted and Rotated Zakharov Function | [−100, 100] | 300
Multimodal | 4 | Shifted and Rotated Rosenbrock’s Function | [−100, 100] | 400
 | 5 | Shifted and Rotated Rastrigin’s Function | [−100, 100] | 500
 | 6 | Shifted and Rotated Expanded Scaffer’s Function | [−100, 100] | 600
 | 7 | Shifted and Rotated Lunacek Bi-Rastrigin Function | [−100, 100] | 700
 | 8 | Shifted and Rotated Non-Continuous Rastrigin’s Function | [−100, 100] | 800
 | 9 | Shifted and Rotated Levy Function | [−100, 100] | 900
 | 10 | Shifted and Rotated Schwefel’s Function | [−100, 100] | 1000
Hybrid | 11 | Hybrid Function 1 (N = 3) | [−100, 100] | 1100
 | 12 | Hybrid Function 2 (N = 3) | [−100, 100] | 1200
 | 13 | Hybrid Function 3 (N = 3) | [−100, 100] | 1300
 | 14 | Hybrid Function 4 (N = 4) | [−100, 100] | 1400
 | 15 | Hybrid Function 5 (N = 4) | [−100, 100] | 1500
 | 16 | Hybrid Function 6 (N = 4) | [−100, 100] | 1600
 | 17 | Hybrid Function 6 (N = 5) | [−100, 100] | 1700
 | 18 | Hybrid Function 6 (N = 5) | [−100, 100] | 1800
 | 19 | Hybrid Function 6 (N = 5) | [−100, 100] | 1900
 | 20 | Hybrid Function 6 (N = 6) | [−100, 100] | 2000
Composition | 21 | Composition Function 1 (N = 3) | [−100, 100] | 2100
 | 22 | Composition Function 2 (N = 3) | [−100, 100] | 2200
 | 23 | Composition Function 3 (N = 4) | [−100, 100] | 2300
 | 24 | Composition Function 4 (N = 4) | [−100, 100] | 2400
 | 25 | Composition Function 5 (N = 5) | [−100, 100] | 2500
 | 26 | Composition Function 6 (N = 5) | [−100, 100] | 2600
 | 27 | Composition Function 7 (N = 6) | [−100, 100] | 2700
 | 28 | Composition Function 8 (N = 6) | [−100, 100] | 2800
 | 29 | Composition Function 9 (N = 3) | [−100, 100] | 2900
 | 30 | Composition Function 10 (N = 3) | [−100, 100] | 3000
Table 2. Parameter setting of the ten algorithms.
Algorithm | Parameter Settings
EO [31] | a1 = 2, a2 = 1, GP = 0.5 (default)
mEO [39] | a1 = 2, a2 = 1, GP = 0.5 (default)
LWMEO [41] | a1 = 2, a2 = 1, GP = 0.5, c = 1 (default)
ISEO [43] | a1 = 2, a2 = 1, GP = 0.5 (default)
IEO [40] | a1 = 2, a2 = 1 (default)
MFO [46] | b = 1; a decreases linearly from −1 to −2 (default)
DMMFO [47] | b = 1; a decreases linearly from −1 to −2 (default)
WEMFO [48] | b = 1, s = 0; a decreases linearly from −1 to −2 (default)
PSO [8] | c1 = 2, c2 = 2; ω decreases linearly from 0.9 to 0.1 (default)
OOSSA [51] | b = 0.55, k = 10,000; c1 decreases nonlinearly from 2 to 0 (default)
Table 3. Comparisons of eleven algorithms on CEC 2017 benchmark functions with 30 dimensions.
Function | Result | EO | mEO | LWMEO | ISEO | IEO | MFO | WEMFO | DMMFO | OOSSA | PSO | SSEO
F1 | Mean | 9.85E+04 | 6.73E+06 | 9.14E+03 | 9.62E+09 | 4.91E+03 | 1.21E+10 | 1.95E+08 | 2.03E+08 | 5.81E+03 | 9.22E+07 | 4.19E+03
F1 | Std | 1.04E+05 | 8.77E+06 | 8.49E+03 | 2.08E+09 | 4.84E+03 | 7.54E+09 | 1.62E+08 | 2.66E+08 | 4.58E+03 | 3.55E+08 | 5.47E+03
F1 | f-rank | 5 | 6 | 4 | 10 | 2 | 11 | 8 | 9 | 3 | 7 | 1
F3 | Mean | 5.21E+04 | 1.28E+04 | 5.17E+04 | 1.34E+05 | 3.67E+04 | 1.83E+05 | 7.23E+04 | 1.77E+05 | 3.32E+04 | 4.29E+04 | 4.11E+03
F3 | Std | 1.33E+04 | 3.43E+03 | 3.46E+04 | 3.02E+04 | 1.01E+04 | 5.35E+04 | 7.48E+03 | 4.09E+04 | 9.01E+03 | 1.21E+04 | 4.96E+03
F3 | f-rank | 7 | 2 | 6 | 9 | 4 | 11 | 8 | 10 | 3 | 5 | 1
F4 | Mean | 5.12E+02 | 5.09E+02 | 4.85E+02 | 1.33E+03 | 5.04E+02 | 1.05E+03 | 5.75E+02 | 5.68E+02 | 5.10E+02 | 5.02E+02 | 5.02E+02
F4 | Std | 1.81E+01 | 2.06E+01 | 2.93E+01 | 3.18E+02 | 1.87E+01 | 6.30E+02 | 4.88E+01 | 5.91E+01 | 1.64E+01 | 2.36E+01 | 1.92E+02
F4 | f-rank | 7 | 5 | 1 | 11 | 4 | 10 | 9 | 8 | 6 | 2 | 3
F5 | Mean | 5.94E+02 | 5.88E+02 | 7.35E+02 | 7.36E+02 | 5.63E+02 | 7.15E+02 | 6.75E+02 | 6.59E+02 | 6.09E+02 | 6.91E+02 | 5.79E+02
F5 | Std | 2.28E+01 | 2.24E+01 | 5.16E+01 | 1.98E+01 | 2.03E+01 | 5.26E+01 | 5.15E+01 | 2.62E+01 | 2.76E+01 | 2.75E+01 | 2.56E+01
F5 | f-rank | 4 | 3 | 10 | 11 | 1 | 9 | 7 | 6 | 5 | 8 | 2
F6 | Mean | 6.02E+02 | 6.03E+02 | 6.57E+03 | 6.18E+02 | 6.01E+02 | 6.42E+02 | 6.32E+02 | 6.30E+02 | 6.27E+02 | 6.46E+02 | 6.01E+02
F6 | Std | 1.94E+00 | 1.73E+00 | 7.49E+00 | 4.66E+00 | 1.28E−01 | 1.19E+01 | 1.78E+01 | 1.23E+01 | 1.17E+01 | 7.39E+01 | 9.08E−01
F6 | f-rank | 3 | 4 | 11 | 5 | 1 | 9 | 8 | 7 | 6 | 10 | 2
F7 | Mean | 8.41E+02 | 8.24E+02 | 1.17E+03 | 1.17E+03 | 7.99E+02 | 1.21E+03 | 9.46E+02 | 9.50E+02 | 8.66E+02 | 9.01E+02 | 8.09E+02
F7 | Std | 2.76E+01 | 2.54E+01 | 1.19E+02 | 9.14E+01 | 1.89E+01 | 1.58E+02 | 3.79E+01 | 7.42E+01 | 2.89E+01 | 4.17E+01 | 2.61E+01
F7 | f-rank | 4 | 3 | 10 | 9 | 1 | 11 | 7 | 8 | 5 | 6 | 2
F8 | Mean | 8.94E+02 | 8.79E+02 | 9.80E+02 | 1.04E+03 | 8.98E+02 | 9.99E+02 | 9.99E+02 | 9.58E+02 | 9.11E+02 | 9.41E+02 | 8.77E+02
F8 | Std | 2.79E+01 | 1.65E+01 | 5.27E+01 | 2.33E+01 | 1.56E+01 | 3.68E+01 | 5.16E+01 | 3.36E+01 | 2.71E+01 | 2.36E+01 | 1.71E+01
F8 | f-rank | 3 | 2 | 8 | 11 | 4 | 9 | 10 | 7 | 5 | 6 | 1
F9 | Mean | 1.35E+03 | 1.13E+03 | 6.67E+03 | 2.09E+03 | 9.14E+02 | 7.91E+03 | 5.78E+03 | 5.11E+03 | 3.53E+03 | 4.13E+03 | 1.02E+03
F9 | Std | 4.98E+02 | 2.95E+02 | 2.41E+03 | 3.44E+02 | 4.23E+01 | 2.21E+03 | 3.09E+03 | 1.63E+03 | 1.42E+03 | 7.28E+02 | 1.48E+02
F9 | f-rank | 4 | 3 | 10 | 5 | 1 | 11 | 9 | 8 | 6 | 7 | 2
F10 | Mean | 5.72E+03 | 5.02E+03 | 5.43E+03 | 8.66E+03 | 5.21E+03 | 5.42E+03 | 5.31E+03 | 5.25E+03 | 5.02E+03 | 4.71E+03 | 4.58E+03
F10 | Std | 8.37E+02 | 5.78E+02 | 7.62E+02 | 2.98E+02 | 7.59E+02 | 7.29E+02 | 7.40E+02 | 6.76E+02 | 6.03E+02 | 5.23E+02 | 7.05E+02
F10 | f-rank | 10 | 3 | 9 | 11 | 5 | 8 | 7 | 6 | 4 | 2 | 1
F11 | Mean | 1.25E+03 | 1.26E+03 | 1.29E+03 | 2.20E+03 | 1.20E+03 | 4.87E+03 | 1.81E+03 | 4.41E+03 | 1.34E+03 | 1.23E+03 | 1.16E+03
F11 | Std | 4.76E+01 | 4.28E+01 | 7.13E+01 | 3.61E+02 | 4.17E+01 | 4.51E+03 | 5.47E+02 | 3.64E+03 | 7.54E+01 | 3.72E+01 | 3.42E+01
F11 | f-rank | 4 | 5 | 6 | 9 | 2 | 11 | 8 | 10 | 7 | 3 | 1
F12 | Mean | 1.60E+06 | 3.21E+06 | 1.05E+06 | 2.24E+08 | 4.83E+05 | 3.11E+08 | 2.33E+07 | 8.94E+06 | 1.61E+07 | 1.04E+06 | 7.84E+05
F12 | Std | 1.27E+06 | 1.86E+06 | 9.06E+05 | 9.39E+07 | 5.12E+05 | 5.62E+08 | 4.34E+07 | 8.53E+06 | 1.99E+07 | 5.55E+05 | 6.07E+05
F12 | f-rank | 5 | 6 | 4 | 10 | 1 | 11 | 9 | 7 | 8 | 3 | 2
F13 | Mean | 2.48E+04 | 9.74E+04 | 2.08E+04 | 1.94E+07 | 2.44E+04 | 1.29E+08 | 2.66E+06 | 4.89E+05 | 9.21E+04 | 1.63E+05 | 2.37E+04
F13 | Std | 2.67E+04 | 5.52E+04 | 1.86E+04 | 1.55E+07 | 2.30E+04 | 4.44E+08 | 7.59E+06 | 2.41E+06 | 6.92E+04 | 8.09E+05 | 2.03E+04
F13 | f-rank | 4 | 6 | 1 | 10 | 3 | 11 | 9 | 8 | 5 | 7 | 2
F14 | Mean | 8.36E+04 | 6.55E+04 | 8.06E+04 | 2.33E+05 | 5.04E+04 | 3.97E+05 | 7.67E+05 | 1.21E+06 | 5.07E+04 | 5.47E+04 | 2.78E+04
F14 | Std | 5.95E+04 | 6.29E+04 | 7.08E+04 | 1.97E+05 | 3.71E+04 | 4.75E+05 | 8.51E+05 | 1.63E+06 | 4.24E+04 | 2.55E+04 | 2.68E+04
F14 | f-rank | 7 | 5 | 6 | 8 | 2 | 9 | 10 | 11 | 3 | 4 | 1
F15 | Mean | 5.68E+03 | 9.74E+03 | 1.06E+04 | 3.47E+06 | 8.39E+03 | 6.86E+04 | 4.42E+04 | 1.49E+04 | 2.62E+04 | 5.62E+03 | 4.89E+03
F15 | Std | 4.70E+03 | 6.67E+03 | 9.37E+03 | 7.62E+06 | 8.14E+03 | 7.24E+04 | 4.65E+04 | 1.13E+04 | 1.45E+04 | 1.33E+04 | 4.00E+03
F15 | f-rank | 3 | 5 | 6 | 11 | 4 | 10 | 9 | 7 | 8 | 2 | 1
F16 | Mean | 2.54E+03 | 2.46E+03 | 2.97E+03 | 3.38E+03 | 2.36E+03 | 3.18E+03 | 2.99E+03 | 2.95E+03 | 2.74E+03 | 2.72E+03 | 2.35E+03
F16 | Std | 3.13E+02 | 2.89E+02 | 4.19E+02 | 2.52E+02 | 3.39E+02 | 4.50E+02 | 3.45E+02 | 3.06E+02 | 4.02E+02 | 2.61E+02 | 3.15E+02
F16 | f-rank | 4 | 3 | 8 | 11 | 2 | 10 | 9 | 7 | 6 | 5 | 1
F17 | Mean | 2.04E+03 | 1.94E+03 | 2.55E+03 | 2.44E+03 | 1.97E+03 | 2.62E+03 | 2.43E+03 | 2.30E+03 | 2.13E+03 | 2.44E+03 | 2.03E+03
F17 | Std | 1.71E+02 | 1.43E+02 | 2.84E+02 | 2.16E+02 | 1.62E+02 | 2.69E+02 | 2.24E+02 | 2.62E+02 | 1.71E+02 | 2.57E+02 | 1.88E+02
F17 | f-rank | 4 | 1 | 10 | 8 | 2 | 11 | 7 | 6 | 5 | 9 | 3
F18 | Mean | 1.39E+06 | 4.84E+05 | 3.88E+05 | 8.69E+06 | 5.96E+05 | 8.96E+06 | 3.99E+06 | 3.20E+06 | 7.80E+05 | 8.14E+05 | 3.26E+05
F18 | Std | 1.62E+06 | 3.94E+05 | 3.02E+05 | 5.34E+06 | 4.77E+05 | 1.09E+07 | 3.24E+06 | 5.04E+06 | 6.95E+05 | 3.44E+05 | 3.09E+05
F18 | f-rank | 7 | 3 | 2 | 10 | 4 | 11 | 9 | 8 | 5 | 6 | 1
F19 | Mean | 1.30E+04 | 8.25E+03 | 1.16E+04 | 7.97E+05 | 1.09E+04 | 6.85E+06 | 2.32E+05 | 3.37E+04 | 1.93E+06 | 7.73E+03 | 6.50E+03
F19 | Std | 1.61E+04 | 7.86E+03 | 1.10E+04 | 9.25E+05 | 1.32E+04 | 1.94E+07 | 4.72E+05 | 5.33E+04 | 1.65E+06 | 1.12E+04 | 4.45E+03
F19 | f-rank | 6 | 3 | 5 | 9 | 4 | 11 | 8 | 7 | 10 | 2 | 1
F20 | Mean | 2.35E+03 | 2.25E+03 | 2.79E+03 | 2.74E+03 | 2.33E+03 | 2.74E+03 | 2.64E+03 | 2.52E+03 | 2.48E+03 | 2.67E+03 | 2.31E+03
F20 | Std | 1.41E+02 | 1.12E+02 | 2.81E+02 | 1.73E+02 | 1.41E+02 | 2.39E+02 | 1.96E+02 | 2.22E+02 | 1.82E+02 | 1.83E+02 | 1.43E+02
F20 | f-rank | 4 | 1 | 11 | 9 | 3 | 10 | 7 | 6 | 5 | 8 | 2
F21 | Mean | 2.39E+03 | 2.38E+03 | 2.52E+03 | 2.52E+03 | 2.36E+03 | 2.49E+03 | 2.46E+03 | 2.45E+03 | 2.41E+03 | 2.50E+03 | 2.35E+03
F21 | Std | 3.19E+01 | 2.68E+01 | 6.87E+01 | 1.38E+01 | 1.88E+01 | 4.56E+01 | 5.82E+01 | 4.94E+01 | 2.83E+01 | 3.51E+01 | 1.82E+01
F21 | f-rank | 4 | 3 | 11 | 10 | 2 | 8 | 7 | 6 | 5 | 9 | 1
F22 | Mean | 4.33E+03 | 2.32E+03 | 6.14E+03 | 6.35E+03 | 3.44E+03 | 6.83E+03 | 5.95E+03 | 5.05E+03 | 2.31E+03 | 4.83E+03 | 2.30E+03
F22 | Std | 2.24E+03 | 6.81E+00 | 2.11E+03 | 3.33E+03 | 1.98E+03 | 1.35E+03 | 2.12E+03 | 2.23E+03 | 1.17E+00 | 1.97E+03 | 1.52E+00
F22 | f-rank | 5 | 3 | 9 | 10 | 4 | 11 | 8 | 7 | 2 | 6 | 1
F23 | Mean | 2.73E+03 | 2.74E+03 | 2.98E+03 | 2.86E+03 | 2.71E+03 | 2.85E+03 | 2.81E+03 | 2.78E+03 | 2.78E+03 | 3.23E+03 | 2.73E+03
F23 | Std | 2.24E+01 | 3.17E+01 | 9.47E+01 | 1.45E+01 | 2.03E+01 | 4.64E+01 | 4.57E+01 | 3.47E+01 | 4.08E+01 | 1.17E+02 | 2.56E+01
F23 | f-rank | 2 | 4 | 10 | 9 | 1 | 8 | 7 | 5 | 6 | 11 | 3
F24 | Mean | 2.90E+03 | 2.90E+03 | 3.15E+03 | 3.03E+03 | 2.88E+03 | 2.98E+03 | 2.97E+03 | 2.96E+03 | 2.92E+03 | 3.25E+03 | 2.88E+03
F24 | Std | 2.61E+01 | 3.22E+01 | 8.49E+01 | 1.51E+01 | 2.72E+01 | 3.31E+01 | 3.19E+01 | 4.39E+01 | 3.19E+01 | 8.19E+01 | 2.08E+01
F24 | f-rank | 3 | 4 | 10 | 9 | 2 | 8 | 7 | 6 | 5 | 11 | 1
F25 | Mean | 2.91E+03 | 2.91E+03 | 2.92E+03 | 3.29E+03 | 2.90E+03 | 3.51E+03 | 2.96E+03 | 2.97E+03 | 2.92E+03 | 2.90E+03 | 2.89E+03
F25 | Std | 1.99E+01 | 1.91E+01 | 2.52E+01 | 1.39E+02 | 6.12E+00 | 7.33E+02 | 2.70E+01 | 7.55E+01 | 2.01E+01 | 1.05E+01 | 1.08E+01
F25 | f-rank | 5 | 4 | 7 | 10 | 2 | 11 | 8 | 9 | 6 | 3 | 1
F26 | Mean | 4.29E+03 | 4.30E+03 | 7.25E+03 | 5.92E+03 | 4.04E+03 | 5.82E+03 | 5.51E+03 | 5.45E+03 | 4.58E+03 | 5.03E+03 | 3.88E+03
F26 | Std | 5.61E+02 | 3.56E+02 | 1.35E+03 | 1.95E+02 | 3.71E+02 | 5.03E+02 | 4.84E+02 | 5.26E+02 | 7.29E+02 | 1.75E+03 | 6.59E+02
F26 | f-rank | 3 | 4 | 11 | 10 | 2 | 9 | 8 | 7 | 5 | 6 | 1
F27 | Mean | 3.23E+03 | 3.22E+03 | 3.28E+03 | 3.22E+03 | 3.22E+03 | 3.25E+03 | 3.26E+03 | 3.24E+03 | 3.24E+03 | 3.57E+03 | 3.22E+03
F27 | Std | 9.55E+01 | 1.01E+01 | 3.17E+01 | 6.93E+00 | 7.97E+02 | 2.62E+01 | 5.25E+01 | 1.53E+01 | 2.51E+01 | 1.40E+02 | 1.19E+01
F27 | f-rank | 5 | 2 | 10 | 1 | 4 | 8 | 9 | 6 | 7 | 11 | 3
F28 | Mean | 3.25E+03 | 3.26E+03 | 3.28E+03 | 3.54E+03 | 3.23E+03 | 4.20E+03 | 3.41E+03 | 3.43E+03 | 3.28E+03 | 3.24E+03 | 3.21E+03
F28 | Std | 2.33E+01 | 2.71E+01 | 1.24E+02 | 1.04E+02 | 2.11E+01 | 8.34E+02 | 8.01E+01 | 1.43E+02 | 3.87E+01 | 1.84E+01 | 1.88E+01
F28 | f-rank | 4 | 5 | 7 | 10 | 2 | 11 | 8 | 9 | 6 | 3 | 1
F29 | Mean | 3.78E+03 | 3.69E+03 | 4.24E+03 | 4.39E+03 | 3.66E+03 | 4.20E+03 | 4.23E+03 | 4.05E+03 | 4.06E+03 | 4.24E+03 | 3.65E+03
F29 | Std | 2.09E+02 | 1.91E+02 | 2.98E+02 | 2.43E+02 | 1.53E+02 | 3.18E+02 | 3.01E+02 | 2.48E+02 | 2.63E+02 | 2.31E+02 | 1.88E+02
F29 | f-rank | 4 | 3 | 10 | 11 | 2 | 7 | 8 | 5 | 6 | 9 | 1
F30 | Mean | 1.89E+04 | 8.42E+04 | 1.94E+04 | 3.60E+06 | 1.37E+04 | 1.09E+06 | 1.15E+06 | 1.38E+05 | 7.54E+06 | 1.98E+04 | 1.09E+04
F30 | Std | 1.76E+04 | 7.87E+04 | 1.06E+04 | 3.67E+06 | 9.90E+03 | 1.91E+06 | 2.86E+06 | 3.69E+05 | 6.31E+06 | 6.08E+03 | 3.91E+03
F30 | f-rank | 3 | 6 | 4 | 10 | 2 | 8 | 9 | 7 | 11 | 5 | 1
Average f-rank | | 4.5862 | 3.6897 | 7.4828 | 9.2069 | 2.5172 | 9.7586 | 8.1724 | 7.3448 | 5.6552 | 6.0690 | 1.5172
Overall f-rank | | 4 | 3 | 8 | 10 | 2 | 11 | 9 | 7 | 5 | 6 | 1
Table 4. Statistical conclusions based on Wilcoxon signed-rank test on 30-dimensional benchmark problems (entries are p-values of SSEO versus each algorithm).
Function | EO | mEO | LWMEO | ISEO | IEO | MFO | WEMFO | DMMFO | OOSSA | PSO
F1 | 9.92E−11 | 3.02E−11 | 4.51E−02 | 3.02E−11 | 2.84E−01 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 6.07E−11 | 2.59E−01
F3 | 3.69E−11 | 1.33E−10 | 1.11E−06 | 3.02E−11 | 3.26E−07 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1.07E−09
F4 | 5.55E−02 | 1.58E−01 | 2.07E−02 | 3.02E−11 | 5.69E−01 | 2.15E−10 | 8.99E−11 | 6.01E−08 | 2.53E−04 | 9.94E−01
F5 | 4.86E−03 | 7.01E−02 | 3.69E−11 | 3.02E−11 | 6.67E−03 | 1.09E−10 | 3.47E−10 | 1.78E−10 | 6.28E−06 | 4.50E−11
F6 | 2.60E−05 | 3.50E−09 | 3.02E−11 | 3.02E−11 | 2.32E−06 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11
F7 | 1.41E−04 | 5.19E−02 | 3.02E−11 | 3.02E−11 | 9.63E−02 | 3.02E−11 | 3.02E−11 | 4.97E−11 | 3.82E−09 | 5.00E−09
F8 | 1.63E−02 | 7.39E−01 | 4.98E−11 | 3.02E−11 | 3.37E−05 | 3.02E−11 | 1.33E−10 | 7.39E−11 | 7.60E−07 | 1.09E−10
F9 | 3.56E−04 | 7.29E−03 | 3.02E−01 | 3.69E−11 | 1.41E−09 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.69E−11 | 3.02E−11
F10 | 3.32E−06 | 1.03E−02 | 1.11E−04 | 3.02E−11 | 1.86E−03 | 6.77E−05 | 2.84E−04 | 1.06E−03 | 5.57E−03 | 5.49E−01
F11 | 7.12E−09 | 8.89E−10 | 1.78E−10 | 3.02E−11 | 4.46E−04 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1.60E−07
F12 | 6.10E−03 | 9.83E−08 | 3.48E−01 | 3.02E−11 | 1.27E−02 | 3.02E−11 | 3.02E−11 | 5.09E−08 | 3.02E−11 | 5.40E−01
F13 | 7.28E−01 | 2.67E−09 | 4.55E−01 | 3.02E−11 | 1.37E−01 | 4.18E−09 | 2.78E−07 | 5.75E−02 | 1.86E−06 | 1.54E−01
F14 | 4.35E−05 | 3.85E−03 | 9.79E−05 | 1.69E−09 | 4.43E−03 | 3.92E−09 | 1.09E−10 | 1.01E−08 | 1.39E−06 | 1.58E−01
F15 | 3.71E−01 | 2.25E−04 | 2.62E−03 | 3.02E−11 | 1.15E−01 | 2.87E−10 | 3.65E−08 | 1.49E−04 | 3.50E−09 | 4.20E−01
F16 | 3.27E−02 | 2.12E−01 | 2.57E−07 | 5.49E−11 | 7.85E−01 | 3.82E−09 | 6.53E−08 | 6.01E−08 | 7.38E−10 | 1.64E−05
F17 | 9.23E−01 | 3.39E−02 | 1.56E−08 | 4.31E−08 | 2.32E−02 | 1.17E−09 | 6.53E−08 | 1.68E−04 | 9.51E−06 | 2.57E−07
F18 | 5.09E−06 | 3.27E−02 | 1.02E−01 | 3.02E−11 | 5.57E−03 | 1.17E−09 | 3.20E−09 | 1.55E−09 | 4.11E−07 | 1.17E−03
F19 | 7.62E−01 | 8.07E−01 | 6.57E−02 | 3.02E−11 | 7.51E−01 | 6.01E−08 | 9.83E−08 | 4.86E−03 | 3.69E−11 | 3.11E−01
F20 | 1.62E−01 | 1.30E−01 | 4.57E−09 | 8.89E−10 | 5.59E−01 | 9.26E−09 | 3.08E−08 | 1.89E−04 | 7.09E−08 | 5.46E−09
F21 | 8.15E−05 | 8.56E−04 | 3.02E−11 | 3.02E−11 | 8.77E−01 | 3.02E−11 | 6.70E−11 | 6.70E−11 | 1.43E−08 | 3.02E−11
F22 | 4.18E−09 | 3.34E−11 | 3.50E−09 | 3.02E−11 | 1.58E−04 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 8.89E−10 | 3.08E−08
F23 | 8.65E−01 | 6.41E−01 | 3.02E−11 | 3.02E−11 | 2.50E−03 | 5.49E−11 | 8.89E−10 | 3.96E−08 | 1.07E−07 | 3.02E−11
F24 | 1.54E−01 | 5.90E−01 | 3.02E−11 | 3.02E−11 | 8.12E−04 | 1.09E−10 | 4.20E−10 | 1.41E−09 | 4.12E−06 | 3.02E−11
F25 | 7.74E−06 | 2.68E−06 | 1.29E−06 | 3.02E−11 | 3.40E−01 | 8.15E−11 | 4.98E−11 | 5.07E−10 | 5.49E−11 | 5.08E−03
F26 | 4.43E−03 | 6.67E−03 | 2.92E−09 | 3.02E−11 | 7.39E−01 | 3.02E−11 | 3.02E−11 | 4.50E−11 | 2.38E−07 | 1.26E−01
F27 | 2.71E−01 | 8.30E−01 | 1.61E−10 | 4.73E−01 | 1.99E−02 | 4.44E−07 | 1.56E−08 | 4.64E−05 | 1.55E−09 | 3.02E−11
F28 | 2.88E−06 | 8.35E−08 | 3.32E−06 | 3.02E−11 | 3.67E−03 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1.33E−10 | 6.55E−04
F29 | 2.61E−02 | 4.64E−01 | 2.03E−09 | 4.98E−11 | 9.35E−01 | 8.48E−09 | 2.92E−09 | 8.35E−08 | 2.87E−10 | 1.55E−09
F30 | 1.27E−02 | 6.70E−11 | 2.39E−04 | 3.02E−11 | 3.79E−01 | 3.02E−11 | 9.92E−11 | 6.70E−11 | 3.02E−11 | 1.34E−05
+/=/− | 24/4/1 | 24/4/1 | 29/0/0 | 29/0/0 | 23/6/0 | 29/0/0 | 29/0/0 | 29/0/0 | 29/0/0 | 25/3/1
Table 5. Comparisons of eleven algorithms on CEC 2017 benchmark functions with 100 dimensions.
Table 5. Comparisons of eleven algorithms on CEC 2017 benchmark functions with 100 dimensions.
| Function | Results | EO | mEO | LWMEO | ISEO | IEO | MFO | WEMFO | DMMFO | OOSSA | PSO | SSEO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Mean | 1.49E+10 | 9.12E+09 | 7.13E+09 | 1.89E+11 | 2.41E+09 | 1.57E+11 | 5.47E+10 | 5.08E+10 | 2.67E+09 | 2.81E+09 | 1.86E+05 |
| | Std | 5.52E+09 | 2.89E+09 | 5.39E+09 | 2.53E+10 | 2.46E+09 | 5.37E+10 | 1.00E+10 | 1.05E+10 | 7.71E+08 | 2.37E+09 | 1.29E+07 |
| | f-rank | 7 | 6 | 5 | 11 | 2 | 10 | 9 | 8 | 3 | 4 | 1 |
| F3 | Mean | 5.80E+05 | 3.22E+05 | 6.83E+05 | 1.10E+06 | 5.23E+05 | 1.02E+06 | 4.11E+05 | 9.27E+05 | 2.99E+05 | 4.79E+05 | 2.86E+05 |
| | Std | 1.26E+05 | 3.02E+04 | 1.45E+05 | 2.93E+05 | 7.07E+04 | 1.53E+09 | 9.33E+04 | 1.40E+05 | 1.25E+04 | 9.35E+04 | 1.85E+04 |
| | f-rank | 7 | 3 | 8 | 11 | 6 | 10 | 4 | 9 | 2 | 5 | 1 |
| F4 | Mean | 1.84E+03 | 1.82E+03 | 2.08E+03 | 3.83E+04 | 1.14E+03 | 2.84E+04 | 6.21E+03 | 6.50E+03 | 1.31E+03 | 1.01E+03 | 9.01E+02 |
| | Std | 4.12E+02 | 2.78E+02 | 5.81E+02 | 8.17E+03 | 1.31E+02 | 1.25E+04 | 1.53E+03 | 1.95E+03 | 1.14E+02 | 2.79E+02 | 6.41E+01 |
| | f-rank | 6 | 5 | 7 | 11 | 3 | 10 | 8 | 9 | 4 | 2 | 1 |
| F5 | Mean | 1.32E+03 | 1.25E+03 | 1.49E+03 | 1.71E+03 | 1.10E+03 | 1.94E+03 | 1.50E+03 | 1.72E+03 | 1.14E+03 | 1.28E+03 | 1.08E+03 |
| | Std | 9.93E+01 | 7.06E+01 | 1.54E+02 | 7.26E+01 | 8.52E+01 | 1.96E+02 | 1.08E+02 | 1.29E+02 | 1.12E+02 | 6.47E+01 | 7.43E+01 |
| | f-rank | 6 | 4 | 7 | 9 | 2 | 11 | 8 | 10 | 3 | 5 | 1 |
| F6 | Mean | 6.36E+02 | 6.36E+02 | 6.67E+02 | 6.61E+02 | 6.18E+02 | 6.84E+02 | 6.85E+02 | 6.79E+02 | 6.48E+02 | 6.63E+02 | 6.15E+02 |
| | Std | 6.75E+00 | 6.94E+00 | 6.51E+00 | 5.77E+00 | 3.32E+00 | 7.96E+00 | 1.37E+01 | 9.20E+00 | 2.98E+00 | 4.09E+00 | 5.60E+00 |
| | f-rank | 3 | 4 | 8 | 6 | 2 | 10 | 11 | 9 | 5 | 7 | 1 |
| F7 | Mean | 2.11E+03 | 2.06E+03 | 3.58E+03 | 3.25E+03 | 1.72E+03 | 5.78E+03 | 2.86E+03 | 4.33E+03 | 1.69E+03 | 2.05E+03 | 1.71E+03 |
| | Std | 2.09E+02 | 1.58E+02 | 4.59E+02 | 8.75E+02 | 1.38E+02 | 6.91E+02 | 1.54E+02 | 5.24E+02 | 1.28E+02 | 2.74E+02 | 1.93E+02 |
| | f-rank | 6 | 5 | 9 | 8 | 3 | 11 | 7 | 10 | 1 | 4 | 2 |
| F8 | Mean | 1.59E+03 | 1.51E+03 | 1.90E+03 | 2.00E+03 | 1.41E+03 | 2.31E+03 | 1.81E+03 | 2.04E+03 | 1.41E+03 | 1.68E+03 | 1.33E+03 |
| | Std | 8.92E+01 | 8.48E+01 | 2.62E+02 | 5.07E+01 | 8.38E+01 | 1.95E+02 | 1.41E+02 | 1.42E+02 | 1.12E+02 | 7.82E+01 | 1.02E+02 |
| | f-rank | 5 | 4 | 8 | 9 | 2 | 11 | 7 | 10 | 3 | 6 | 1 |
| F9 | Mean | 3.52E+04 | 2.53E+04 | 3.13E+04 | 3.03E+04 | 1.85E+04 | 5.88E+04 | 5.83E+04 | 5.62E+04 | 3.34E+04 | 5.08E+04 | 2.31E+04 |
| | Std | 6.78E+03 | 6.39E+03 | 9.06E+03 | 5.16E+03 | 4.73E+03 | 7.54E+03 | 1.73E+04 | 1.16E+04 | 2.64E+03 | 1.15E+04 | 3.24E+03 |
| | f-rank | 7 | 3 | 5 | 4 | 1 | 11 | 10 | 9 | 6 | 8 | 2 |
| F10 | Mean | 2.37E+04 | 2.12E+04 | 1.71E+04 | 3.28E+04 | 2.27E+04 | 1.98E+04 | 2.10E+04 | 1.97E+04 | 1.73E+04 | 1.59E+04 | 1.57E+04 |
| | Std | 1.91E+03 | 2.13E+03 | 2.22E+03 | 7.39E+02 | 1.78E+03 | 1.87E+03 | 1.38E+03 | 1.18E+03 | 1.30E+03 | 1.28E+03 | 1.38E+03 |
| | f-rank | 10 | 8 | 3 | 11 | 9 | 6 | 7 | 5 | 4 | 2 | 1 |
| F11 | Mean | 6.81E+04 | 2.05E+04 | 3.48E+04 | 2.05E+05 | 4.50E+04 | 2.15E+05 | 1.09E+05 | 2.05E+05 | 4.66E+04 | 3.75E+04 | 2.66E+04 |
| | Std | 1.56E+04 | 4.72E+03 | 2.84E+04 | 4.29E+04 | 9.96E+03 | 5.41E+04 | 1.94E+04 | 4.52E+04 | 1.06E+04 | 1.14E+04 | 6.41E+03 |
| | f-rank | 7 | 1 | 3 | 9 | 5 | 11 | 8 | 10 | 6 | 4 | 2 |
| F12 | Mean | 4.80E+08 | 7.42E+08 | 2.68E+08 | 4.90E+10 | 9.37E+07 | 4.36E+10 | 5.24E+09 | 5.97E+09 | 6.34E+08 | 1.14E+09 | 5.17E+07 |
| | Std | 4.79E+08 | 2.41E+08 | 1.85E+08 | 1.02E+10 | 3.81E+07 | 1.86E+10 | 1.63E+09 | 2.51E+09 | 2.29E+08 | 1.25E+09 | 2.16E+07 |
| | f-rank | 4 | 6 | 3 | 11 | 2 | 10 | 8 | 9 | 5 | 7 | 1 |
| F13 | Mean | 1.23E+05 | 3.35E+06 | 1.74E+05 | 8.09E+09 | 1.62E+04 | 6.32E+09 | 8.05E+07 | 6.26E+07 | 5.27E+04 | 2.02E+07 | 4.76E+04 |
| | Std | 8.11E+04 | 3.01E+06 | 7.79E+05 | 1.68E+09 | 4.97E+03 | 3.68E+09 | 8.32E+07 | 8.34E+07 | 2.32E+04 | 8.85E+07 | 7.38E+04 |
| | f-rank | 4 | 6 | 5 | 11 | 1 | 10 | 9 | 8 | 3 | 7 | 2 |
| F14 | Mean | 5.17E+06 | 3.81E+06 | 1.86E+06 | 6.44E+07 | 4.91E+06 | 1.87E+07 | 1.95E+07 | 1.95E+07 | 3.69E+06 | 1.90E+06 | 1.16E+06 |
| | Std | 2.36E+06 | 1.69E+06 | 1.01E+06 | 2.46E+07 | 2.76E+06 | 1.61E+07 | 9.59E+06 | 1.13E+07 | 1.78E+06 | 6.77E+05 | 5.22E+05 |
| | f-rank | 7 | 5 | 2 | 11 | 6 | 8 | 9 | 10 | 4 | 3 | 1 |
| F15 | Mean | 2.33E+04 | 1.78E+05 | 1.67E+04 | 1.54E+09 | 5.60E+03 | 1.29E+09 | 1.53E+07 | 5.34E+06 | 5.68E+04 | 1.74E+04 | 9.29E+03 |
| | Std | 1.36E+04 | 1.32E+05 | 1.47E+04 | 6.20E+08 | 3.11E+03 | 1.54E+09 | 4.13E+07 | 1.18E+07 | 2.69E+04 | 9.55E+03 | 3.35E+03 |
| | f-rank | 5 | 7 | 3 | 11 | 1 | 10 | 9 | 8 | 6 | 4 | 2 |
| F16 | Mean | 6.91E+03 | 7.01E+03 | 6.72E+03 | 1.07E+04 | 5.96E+03 | 8.44E+03 | 8.19E+03 | 7.37E+03 | 6.59E+03 | 6.07E+03 | 5.62E+03 |
| | Std | 9.98E+02 | 8.66E+02 | 8.50E+02 | 4.48E+02 | 8.09E+02 | 1.05E+03 | 9.34E+02 | 6.52E+02 | 6.09E+02 | 6.35E+02 | 6.12E+02 |
| | f-rank | 6 | 7 | 5 | 11 | 2 | 10 | 9 | 8 | 4 | 3 | 1 |
| F17 | Mean | 5.24E+03 | 5.42E+03 | 6.43E+03 | 9.47E+03 | 4.97E+03 | 1.14E+04 | 6.68E+03 | 6.98E+03 | 5.49E+03 | 5.35E+03 | 5.17E+03 |
| | Std | 7.07E+02 | 5.65E+02 | 7.32E+02 | 9.83E+02 | 5.21E+02 | 9.67E+03 | 5.94E+02 | 9.38E+02 | 5.22E+02 | 5.66E+02 | 5.69E+02 |
| | f-rank | 3 | 5 | 7 | 10 | 1 | 11 | 8 | 9 | 6 | 4 | 2 |
| F18 | Mean | 5.15E+06 | 4.25E+06 | 2.83E+06 | 1.14E+08 | 5.46E+06 | 2.46E+07 | 1.91E+07 | 2.71E+07 | 5.01E+06 | 3.49E+06 | 2.52E+06 |
| | Std | 2.78E+06 | 1.96E+06 | 1.39E+06 | 5.29E+07 | 2.38E+06 | 1.92E+07 | 7.92E+06 | 1.19E+07 | 3.84E+06 | 2.16E+06 | 7.85E+05 |
| | f-rank | 6 | 4 | 2 | 11 | 7 | 9 | 8 | 10 | 5 | 3 | 1 |
| F19 | Mean | 7.77E+04 | 2.00E+06 | 2.62E+04 | 1.38E+09 | 4.81E+03 | 1.47E+09 | 1.50E+07 | 1.01E+07 | 6.98E+06 | 2.78E+06 | 6.60E+03 |
| | Std | 2.00E+05 | 1.47E+06 | 3.14E+04 | 4.65E+08 | 2.71E+03 | 1.93E+09 | 1.41E+07 | 2.49E+02 | 9.22E+06 | 1.49E+07 | 5.41E+03 |
| | f-rank | 4 | 5 | 3 | 10 | 1 | 11 | 9 | 8 | 7 | 6 | 2 |
| F20 | Mean | 5.74E+03 | 5.45E+03 | 5.84E+03 | 7.78E+03 | 5.33E+03 | 5.97E+03 | 6.04E+03 | 5.76E+03 | 5.23E+03 | 5.24E+03 | 4.82E+03 |
| | Std | 5.75E+02 | 4.80E+02 | 4.42E+02 | 3.52E+02 | 5.38E+02 | 5.75E+02 | 5.15E+02 | 6.17E+02 | 4.97E+02 | 6.12E+02 | 6.05E+02 |
| | f-rank | 6 | 5 | 8 | 11 | 4 | 9 | 10 | 7 | 2 | 3 | 1 |
| F21 | Mean | 3.01E+03 | 2.94E+03 | 3.90E+03 | 3.45E+03 | 2.83E+03 | 3.77E+03 | 3.39E+03 | 3.55E+03 | 3.08E+03 | 3.72E+03 | 2.73E+03 |
| | Std | 1.13E+02 | 8.88E+01 | 2.11E+02 | 5.35E+01 | 8.33E+01 | 1.35E+02 | 1.23E+02 | 1.62E+02 | 9.78E+01 | 1.17E+02 | 6.98E+01 |
| | f-rank | 4 | 3 | 11 | 7 | 2 | 10 | 6 | 8 | 5 | 9 | 1 |
| F22 | Mean | 2.63E+04 | 2.36E+04 | 2.08E+04 | 3.52E+04 | 2.61E+04 | 2.18E+04 | 2.36E+04 | 2.22E+04 | 1.69E+04 | 1.89E+04 | 1.50E+04 |
| | Std | 1.66E+03 | 1.64E+03 | 2.12E+03 | 5.94E+02 | 2.19E+03 | 1.61E+03 | 1.53E+03 | 1.51E+03 | 7.95E+03 | 1.40E+03 | 7.27E+03 |
| | f-rank | 10 | 8 | 4 | 11 | 9 | 5 | 7 | 6 | 2 | 3 | 1 |
| F23 | Mean | 3.40E+03 | 3.36E+03 | 4.63E+03 | 3.83E+03 | 3.24E+03 | 3.89E+03 | 3.83E+03 | 3.79E+03 | 3.59E+03 | 5.30E+03 | 3.21E+03 |
| | Std | 8.37E+01 | 8.86E+01 | 2.72E+02 | 5.35E+01 | 6.27E+01 | 1.06E+02 | 1.10E+02 | 1.54E+02 | 1.15E+02 | 3.94E+02 | 9.11E+01 |
| | f-rank | 4 | 3 | 10 | 7 | 2 | 9 | 8 | 6 | 5 | 11 | 1 |
| F24 | Mean | 3.91E+03 | 3.87E+03 | 5.59E+03 | 4.33E+03 | 3.74E+03 | 4.55E+03 | 4.55E+03 | 4.39E+03 | 3.98E+03 | 5.47E+03 | 3.69E+03 |
| | Std | 1.13E+02 | 1.16E+02 | 3.95E+02 | 5.21E+01 | 7.83E+01 | 1.59E+02 | 2.18E+02 | 1.42E+02 | 7.98E+01 | 3.21E+02 | 1.06E+02 |
| | f-rank | 4 | 3 | 11 | 6 | 2 | 8 | 9 | 7 | 5 | 10 | 1 |
| F25 | Mean | 4.49E+03 | 4.44E+03 | 4.40E+03 | 2.38E+04 | 3.83E+03 | 2.11E+04 | 7.97E+03 | 1.11E+04 | 4.23E+03 | 3.49E+03 | 3.62E+03 |
| | Std | 3.09E+02 | 1.84E+02 | 3.11E+02 | 5.35E+03 | 9.48E+01 | 8.01E+03 | 9.31E+02 | 2.51E+03 | 1.97E+02 | 6.99E+01 | 7.34E+01 |
| | f-rank | 7 | 6 | 5 | 11 | 3 | 10 | 8 | 9 | 4 | 1 | 2 |
| F26 | Mean | 1.48E+04 | 1.31E+04 | 2.71E+04 | 1.81E+04 | 1.17E+04 | 2.01E+04 | 1.91E+04 | 1.86E+04 | 1.41E+04 | 2.01E+04 | 1.11E+04 |
| | Std | 2.36E+03 | 1.89E+03 | 3.15E+03 | 6.58E+02 | 2.09E+03 | 1.81E+03 | 2.01E+03 | 1.47E+03 | 1.35E+03 | 7.55E+03 | 4.72E+03 |
| | f-rank | 5 | 3 | 11 | 6 | 2 | 9 | 8 | 7 | 4 | 10 | 1 |
| F27 | Mean | 3.68E+03 | 3.66E+03 | 4.19E+03 | 4.09E+03 | 3.57E+03 | 4.11E+03 | 4.09E+03 | 3.94E+03 | 3.83E+03 | 4.33E+03 | 3.55E+03 |
| | Std | 7.95E+01 | 9.81E+01 | 1.97E+02 | 1.92E+02 | 7.86E+01 | 2.78E+02 | 1.81E+02 | 1.39E+02 | 1.07E+02 | 3.05E+02 | 6.27E+01 |
| | f-rank | 4 | 3 | 10 | 8 | 2 | 9 | 7 | 6 | 5 | 11 | 1 |
| F28 | Mean | 5.38E+03 | 4.87E+03 | 5.83E+03 | 2.05E+04 | 4.16E+03 | 1.99E+04 | 1.75E+04 | 1.69E+04 | 5.23E+03 | 3.76E+03 | 3.71E+03 |
| | Std | 5.75E+02 | 3.67E+02 | 1.13E+03 | 2.57E+03 | 2.24E+02 | 1.76E+03 | 3.37E+03 | 2.45E+03 | 4.82E+02 | 3.99E+02 | 6.81E+01 |
| | f-rank | 6 | 4 | 7 | 11 | 3 | 10 | 9 | 8 | 5 | 2 | 1 |
| F29 | Mean | 7.75E+03 | 7.63E+03 | 8.76E+03 | 1.36E+04 | 6.81E+03 | 1.16E+04 | 9.52E+03 | 9.19E+03 | 9.79E+03 | 8.34E+03 | 6.86E+03 |
| | Std | 6.09E+02 | 5.53E+02 | 6.40E+02 | 1.42E+03 | 6.22E+02 | 3.28E+03 | 7.00E+02 | 8.79E+02 | 1.09E+03 | 5.81E+02 | 7.36E+02 |
| | f-rank | 4 | 3 | 6 | 11 | 1 | 10 | 8 | 7 | 9 | 5 | 2 |
| F30 | Mean | 2.05E+06 | 1.38E+07 | 3.05E+06 | 3.11E+09 | 2.25E+05 | 2.94E+09 | 7.68E+07 | 6.89E+07 | 1.16E+08 | 3.95E+07 | 1.88E+05 |
| | Std | 1.22E+06 | 9.22E+06 | 2.54E+06 | 8.63E+08 | 1.21E+05 | 1.95E+09 | 4.32E+07 | 8.47E+07 | 7.68E+07 | 1.22E+08 | 1.23E+05 |
| | f-rank | 3 | 5 | 4 | 11 | 2 | 10 | 8 | 7 | 9 | 6 | 1 |
| Average f-rank | | 5.5172 | 4.6207 | 6.2069 | 9.4828 | 3.0345 | 9.6207 | 8.1379 | 8.1724 | 4.5517 | 5.3448 | 1.3103 |
| Overall f-rank | | 6 | 4 | 7 | 10 | 2 | 11 | 8 | 9 | 3 | 5 | 1 |
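The "Average f-rank" row is simply the mean of an algorithm's per-function ranks over the 29 benchmark functions, and the "Overall f-rank" orders the algorithms by that mean (1 = best). A minimal sketch of this bookkeeping, using SSEO's 29 per-function ranks read off the table above (the shortened `averages` dictionary is illustrative, listing only four of the eleven algorithms):

```python
# SSEO's per-function ranks for F1, F3-F30 (29 functions), from the table.
sseo_ranks = [1, 1, 1, 1, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1,
              2, 1, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 1]
avg = sum(sseo_ranks) / len(sseo_ranks)
print(round(avg, 4))            # 1.3103, matching the "Average f-rank" row

# "Overall f-rank": sort algorithms by their average rank (subset shown).
averages = {"EO": 5.5172, "mEO": 4.6207, "IEO": 3.0345, "SSEO": 1.3103}
overall = {alg: i + 1 for i, (alg, _) in
           enumerate(sorted(averages.items(), key=lambda kv: kv[1]))}
print(overall["SSEO"], overall["IEO"])   # 1 2
```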
Table 6. Statistical conclusions based on Wilcoxon signed-rank test on 100-dimensional benchmark problems.
| Function | EO p-Value | mEO p-Value | LWMEO p-Value | ISEO p-Value | IEO p-Value | MFO p-Value | WEMFO p-Value | DMMFO p-Value | OOSSA p-Value | PSO p-Value |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 2.15E−10 |
| F3 | 3.02E−11 | 3.83E−06 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.34E−11 | 3.02E−11 | 3.82E−09 | 3.02E−11 |
| F4 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1.96E−10 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 9.47E−01 |
| F5 | 1.61E−10 | 1.86E−09 | 3.02E−11 | 3.02E−11 | 5.59E−01 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 9.26E−09 | 8.15E−11 |
| F6 | 4.44E−07 | 1.03E−06 | 3.02E−11 | 3.02E−11 | 2.67E−09 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 |
| F7 | 3.08E−08 | 5.09E−08 | 3.02E−11 | 4.50E−11 | 6.74E−01 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 5.61E−05 | 5.60E−07 |
| F8 | 6.72E−10 | 4.31E−08 | 3.69E−11 | 3.02E−11 | 1.44E−03 | 3.02E−11 | 3.69E−11 | 3.02E−11 | 8.48E−09 | 8.99E−11 |
| F9 | 1.41E−09 | 2.71E−01 | 3.01E−07 | 1.73E−07 | 1.68E−04 | 3.02E−11 | 6.70E−11 | 3.34E−11 | 1.10E−08 | 3.34E−11 |
| F10 | 3.02E−11 | 1.78E−10 | 1.03E−02 | 3.02E−11 | 3.34E−11 | 1.29E−09 | 5.49E−11 | 2.37E−10 | 9.92E−11 | 5.30E−01 |
| F11 | 3.69E−11 | 5.27E−05 | 6.10E−01 | 3.02E−11 | 4.57E−09 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 2.59E−05 |
| F12 | 3.02E−11 | 3.02E−11 | 3.20E−09 | 3.02E−11 | 4.42E−06 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1.96E−10 |
| F13 | 5.97E−09 | 3.02E−11 | 5.90E−01 | 3.02E−11 | 2.19E−08 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 6.74E−06 | 2.42E−02 |
| F14 | 1.61E−10 | 2.87E−10 | 4.43E−03 | 3.02E−11 | 3.82E−10 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 4.98E−11 | 3.37E−05 |
| F15 | 4.11E−07 | 3.02E−11 | 2.89E−03 | 3.02E−11 | 3.83E−05 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 4.08E−11 | 2.60E−05 |
| F16 | 1.39E−06 | 2.38E−07 | 7.73E−06 | 3.02E−11 | 1.12E−05 | 3.69E−11 | 5.49E−11 | 7.39E−11 | 3.08E−08 | 1.08E−02 |
| F17 | 5.11E−01 | 1.09E−01 | 3.96E−08 | 3.02E−11 | 1.49E−01 | 3.34E−11 | 3.47E−10 | 2.92E−09 | 6.20E−04 | 3.48E−01 |
| F18 | 2.32E−06 | 3.37E−04 | 6.95E−01 | 3.02E−11 | 2.03E−07 | 3.34E−11 | 3.02E−11 | 3.02E−11 | 8.35E−08 | 2.81E−02 |
| F19 | 1.73E−06 | 3.02E−11 | 2.43E−05 | 3.02E−11 | 2.28E−01 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 4.35E−05 |
| F20 | 3.26E−07 | 1.17E−04 | 1.85E−08 | 3.02E−11 | 4.03E−03 | 2.83E−08 | 3.20E−09 | 1.49E−06 | 4.35E−05 | 2.15E−02 |
| F21 | 1.78E−10 | 9.76E−10 | 3.02E−11 | 3.02E−11 | 1.34E−05 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 |
| F22 | 3.69E−11 | 1.61E−10 | 1.64E−05 | 3.02E−11 | 3.69E−11 | 3.08E−08 | 1.96E−10 | 2.92E−09 | 4.11E−07 | 1.02E−01 |
| F23 | 1.01E−08 | 1.25E−07 | 3.02E−11 | 3.02E−11 | 4.68E−02 | 3.02E−11 | 3.02E−11 | 3.69E−11 | 3.34E−11 | 3.02E−11 |
| F24 | 6.52E−09 | 4.11E−07 | 3.02E−11 | 3.02E−11 | 2.71E−02 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.69E−11 | 3.02E−11 |
| F25 | 3.02E−11 | 3.02E−11 | 3.34E−11 | 3.02E−11 | 9.75E−10 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1.25E−07 |
| F26 | 5.61E−05 | 4.03E−03 | 4.08E−11 | 1.07E−07 | 6.31E−01 | 5.97E−09 | 2.60E−08 | 3.65E−08 | 1.29E−06 | 2.96E−05 |
| F27 | 4.69E−08 | 1.25E−05 | 3.02E−11 | 3.02E−11 | 1.37E−01 | 3.34E−11 | 3.02E−11 | 4.51E−11 | 3.02E−11 | 3.02E−11 |
| F28 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.69E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1.03E−02 |
| F29 | 1.43E−05 | 4.94E−05 | 5.07E−10 | 3.02E−11 | 9.35E−01 | 3.69E−11 | 4.98E−11 | 3.16E−10 | 3.02E−11 | 9.26E−09 |
| F30 | 4.08E−11 | 3.02E−11 | 4.08E−11 | 3.02E−11 | 9.05E−02 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 |
| +/=/− | 28/1/0 | 29/0/0 | 26/3/0 | 29/0/0 | 24/4/1 | 29/0/0 | 29/0/0 | 29/0/0 | 29/0/0 | 27/2/0 |
Table 7. Parameter setting of the five algorithms.
| Algorithms | Parameter Settings |
|---|---|
| ABC [7] | Limit = 50 (Default) |
| PSO [8] | c1 = 2, c2 = 2, and ω linearly decreased from 0.9 to 0.1 (Default) |
| GWO [9] | a linearly decreased from 2 to 0 (Default) |
| FA [10] | g = 1, a = 0.2, r = 0.5 (Default) |
| SSA [13] | c1 decreases nonlinearly from 2 to 0 (Default) |
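The decreasing control coefficients in Table 7 can be written as simple schedules of the iteration counter. The sketch below is illustrative, not from the paper: the PSO ω and GWO a schedules follow directly from "linearly decreased", while the SSA expression c1 = 2·exp(−(4t/T)²) is the form used in the original SSA paper and is assumed here, since Table 7 only says "decreases nonlinearly from 2 to 0".

```python
import math

# Coefficient schedules from Table 7 (t = current iteration, T = max iterations).

def pso_omega(t, T):
    """PSO inertia weight, linear: 0.9 -> 0.1."""
    return 0.9 - 0.8 * t / T

def gwo_a(t, T):
    """GWO coefficient a, linear: 2 -> 0."""
    return 2.0 * (1 - t / T)

def ssa_c1(t, T):
    """SSA c1, nonlinear: ~2 -> ~0 (assumed form from the original SSA paper)."""
    return 2.0 * math.exp(-((4.0 * t / T) ** 2))

T = 1000
print(pso_omega(0, T))      # 0.9
print(gwo_a(0, T), gwo_a(T, T))   # 2.0 0.0
print(ssa_c1(0, T))         # 2.0; decays toward 0 as t -> T
```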
Table 8. Type of environment.
| Terrain | No. of Obstacles | Initial Coordinates | Final Coordinates | X Axis | Y Axis | Obstacle Radius |
|---|---|---|---|---|---|---|
| Map 1 | 3 | 0, 0 | 4, 6 | [1 1.8 4.5] | [1 5.0 0.9] | [0.8 1.5 1] |
| Map 2 | 6 | 0, 0 | 10, 10 | [1.5 8.5 3.2 6.0 1.2 7.0] | [4.5 6.5 2.5 3.5 1.5 8.0] | [1.5 0.9 0.4 0.6 0.8 0.6] |
| Map 3 | 13 | 3, 3 | 14, 14 | [1.5 4.0 1.2 5.2 9.5 6.5 10.8 5.9 3.4 8.6 11.6 3.3 11.8] | [4.5 3.0 1.5 3.7 10.3 7.3 6.3 9.9 5.6 8.2 8.6 11.5 11.5] | [0.5 0.4 0.4 0.8 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.7] |
| Map 4 | 30 | 3, 3 | 14, 14 | [10.1 10.6 11.1 11.6 12.1 11.2 11.7 12.2 12.7 13.2 11.4 11.9 12.4 12.9 13.4 8 8.5 9 9.5 10 9.3 9.8 10.3 10.8 11.3 5.9 6.4 6.9 7.4 7.9] | [8.8 8.8 8.8 8.8 8.8 11.7 11.7 11.7 11.7 11.7 9.3 9.3 9.3 9.3 9.3 5.3 5.3 5.3 5.3 5.3 6.7 6.7 6.7 6.7 6.7 8.4 8.4 8.4 8.4 8.4] | [0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4] |
| Map 5 | 45 | 0, 0 | 15, 15 | [2 2 2 2 2 2 4 4 4 4 4 4 4 4 4 6 6 6 8 8 8 8 8 8 8 8 8 10 10 10 10 10 10 10 10 10 12 12 12 12 12 14 14 14 14] | [8 8.5 9 9.5 10 10.5 3 3.5 4 4.5 5 5.5 6 6.5 7 11 11.5 12 1 1.5 2 2.5 3 3.4 4 4.5 5 6 6.5 7 7.5 8 8.5 9 9.5 10 10 10.5 11 11.5 12 10 10.5 11 11.5] | [0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4 0.4] |
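Each map in Table 8 defines a set of circular obstacles (center X, center Y, radius), and a candidate waypoint is feasible only if it lies outside every circle. A minimal sketch of that feasibility check, using the Map 1 data from the table (the `collides` helper name is illustrative, not from the paper):

```python
import math

# Circular obstacles of Map 1 in Table 8: centers (x, y) and radii.
xs = [1, 1.8, 4.5]
ys = [1, 5.0, 0.9]
rs = [0.8, 1.5, 1]

def collides(px, py):
    """True if the point (px, py) lies inside any obstacle circle."""
    return any(math.hypot(px - x, py - y) < r
               for x, y, r in zip(xs, ys, rs))

print(collides(1.0, 1.0))   # True: this is the center of the first obstacle
print(collides(4.0, 6.0))   # False: the goal point (4, 6) is obstacle-free
```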
Table 9. Minimum route length comparison of the SSEO-based MRPP method and the comparison approaches under five environment setups.
| Terrain | PSO | FA | ABC | GWO | SSA | SSEO |
|---|---|---|---|---|---|---|
| Map 1 | 7.8497 | 7.6093 | 7.7471 | 7.7713 | 8.0469 | 7.4575 |
| Map 2 | 14.3354 | 14.5336 | 14.3881 | 14.4311 | 16.5022 | 14.3132 |
| Map 3 | 15.8629 | 15.866 | 16.9046 | 15.9311 | 16.2811 | 15.8597 |
| Map 4 | 16.2247 | 15.8489 | 15.7883 | 16.2379 | 16.2793 | 15.7398 |
| Map 5 | 21.9021 | 21.6739 | 21.9537 | 23.3205 | 21.6779 | 21.5298 |
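The route lengths in Table 9 are the total Euclidean length of a waypoint sequence from the start point to the goal. A minimal sketch, assuming the standard polyline-length definition (the waypoints below are illustrative, not an actual planned path from the paper):

```python
import math

def path_length(waypoints):
    """Total Euclidean length of a polyline given as (x, y) waypoints."""
    return sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))

# Illustrative path on Map 1 (start (0, 0), goal (4, 6)). A straight line
# would measure sqrt(52) ~ 7.2111, so the ~7.46 lengths reported in Table 9
# correspond to short detours around the obstacles.
print(round(path_length([(0, 0), (2, 2.5), (4, 6)]), 4))
```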