Article

An Improved Grey Wolf Optimizer and Its Application in Robot Path Planning

1 School of Communication and Electronic Engineering, Jishou University, Jishou 416000, China
2 Academic Affairs Office, Jishou University, Jishou 416000, China
3 College of Computer Science and Engineering, Jishou University, Jishou 416000, China
* Author to whom correspondence should be addressed.
Biomimetics 2023, 8(1), 84; https://doi.org/10.3390/biomimetics8010084
Submission received: 6 December 2022 / Revised: 19 January 2023 / Accepted: 12 February 2023 / Published: 16 February 2023

Abstract
This paper proposes a hybrid grey wolf optimizer incorporating a clonal selection algorithm (pGWO-CSA) to overcome the disadvantages of the standard grey wolf optimizer (GWO): slow convergence speed and low accuracy on single-peak functions, and easily falling into local optima on multi-peak functions and complex problems. The modifications in the proposed pGWO-CSA fall into the following three aspects. Firstly, a nonlinear function, instead of a linear one, is used to adjust the iterative attenuation of the convergence factor, balancing exploitation and exploration automatically. Secondly, the position-updating strategy is redesigned so that the best (α) wolf is not affected by the β and δ wolves with poor fitness, and the second-best (β) wolf is not affected by the low fitness value of the δ wolf. Finally, the cloning and super-mutation operators of the clonal selection algorithm (CSA) are introduced into GWO to enhance its ability to jump out of local optima. In the experimental part, 15 benchmark functions are selected for function optimization tasks to further reveal the performance of pGWO-CSA. Statistical analysis of the obtained experimental data shows that pGWO-CSA is superior to classical swarm intelligence algorithms, GWO, and related variants. Furthermore, to verify the applicability of the algorithm, it was applied to the robot path-planning problem and obtained excellent results.

1. Introduction

The metaheuristic algorithm improves on the heuristic algorithm by combining random and local search to implement optimization tasks. Metaheuristic optimization has seen several notable developments in recent years. Jiang et al. proposed optimal pathfinding with a beetle antennae search algorithm using ant colony optimization initialization and different searching strategies [1]. Khan et al. proposed BAS-ADAM, an ADAM-based approach to improving the performance of the beetle antennae search optimizer [2]. Ye et al. proposed a modified multi-objective cuckoo search mechanism and applied it to the obstacle-avoidance problem of multiple uncrewed aerial vehicles (UAVs): a safe route is sought by optimizing the coordinated formation control of the UAVs so that the horizontal airspeed, yaw angle, altitude, and altitude rate converge to the expected level within a given time [3]. Khan et al. proposed using the social behavior of beetles to establish a computational model for operational management [4]. As one of the latest metaheuristic algorithms, the grey wolf optimizer (GWO) is widely employed to solve real industrial issues because it maintains a balance between exploitation and exploration through dynamic parameters and has a strong ability to explore the rugged search space of a problem [5,6]; applications include selection problems [7,8,9], privacy protection [10], adaptive weighting [11], smart home scheduling [12], prediction [13,14,15], classification [16], and optimization [17,18,19,20].
Although theoretical analysis and industrial applications of GWO have yielded fruitful achievements, some shortcomings still hinder its further development, such as slow convergence speed and low accuracy on single-peak functions and easily falling into local optima on multi-peak functions and complex problems. Recently, various GWO variants have been investigated to overcome these shortcomings. Mittal et al. proposed a modified GWO (mGWO) to address the balance between the exploitation and exploration of GWO. The main contribution is an exponential function, instead of a linear one, to adjust the iterative attenuation of parameter a. Experimental results proved that mGWO improves the effectiveness, efficiency, and stability of GWO [21]. Saxena et al. proposed β-GWO to improve the exploitation and exploration of GWO by embedding a β-chaotic sequence into parameter a through a normalized mapping method. Experimental results demonstrated that β-GWO has good exploitation and exploration performance [22]. Long et al. proposed the exploration-enhanced GWO (EEGWO) to overcome GWO's weakness of good exploitation but poor exploration based on two modifications: a random individual is used to guide the search for new candidate individuals, and a nonlinear control-parameter strategy is introduced to obtain good exploitation while avoiding poor exploration. Experimental results illustrated that EEGWO significantly improves the performance of GWO [23]. Gupta and Deep proposed RW-GWO to improve the search capability of GWO. The main contribution is an improved method based on a random walk. Experimental results showed that RW-GWO provides a better lead in searching for grey wolf prey [24]. Teng et al. proposed PSO_GWO to solve the problem of slow convergence speed and low accuracy of the grey wolf optimization algorithm.
The main contribution can be divided into three aspects: firstly, a Tent chaotic sequence is used to initialize individual positions; secondly, a nonlinear control parameter is used; finally, particle swarm optimization (PSO) is combined with GWO. Experimental results showed that PSO_GWO searches for the global optimum better and has better robustness [25]. Gupta et al. proposed SC-GWO to address the balance between exploitation and exploration. The main contribution is the combination of the sine cosine algorithm (SCA) and GWO. Experimental results showed that SC-GWO is robust to problem scalability [26]. To improve the performance of GWO in solving complex and multimodal functions, Yu et al. proposed an opposition-based learning grey wolf optimizer (OGWO). Without increasing the computational complexity, the algorithm integrates the opposition-based learning method into GWO in the form of a jump rate, which helps the algorithm jump out of local optima [27]. To improve the iterative convergence speed of GWO, Zhang et al. improved the algorithm flow of GWO and proposed two dynamic GWOs (DGWO1 and DGWO2) [28].
Although these GWO variants improve the convergence speed and accuracy on single-peak functions and the ability to jump out of local optima on multi-peak functions and complex problems, they still suffer from slow convergence, low accuracy, and easily falling into local optima when solving some complex problems. To overcome these shortcomings, an improved GWO, called pGWO-CSA, is proposed in this paper by combining GWO with a clonal selection algorithm (CSA) to improve the convergence speed and accuracy of standard GWO and its ability to jump out of local optima. The core improvements can be classified into the following points:
Firstly, a nonlinear function is used instead of a linear function for adjusting the iterative attenuation of the convergence factor to balance exploitation and exploration automatically.
Secondly, pGWO-CSA adopts a new position-updating strategy: the position updating of the α wolf is no longer affected by the β and δ wolves with poor fitness, and the position updating of the β wolf is no longer affected by the low fitness value of the δ wolf.
Finally, the pGWO-CSA combines GWO with CSA and introduces the cloning and super-mutation of the CSA into GWO to improve GWO’s ability to jump out of local optimum.
In the experimental part, 15 benchmark functions are selected for function optimization tasks to further reveal the performance of pGWO-CSA. Firstly, pGWO-CSA is compared with other swarm intelligence algorithms: particle swarm optimization (PSO) [29], differential evolution (DE) [30], and the firefly algorithm (FA) [31]. Then pGWO-CSA is compared with GWO [5] and its variants OGWO [27], DGWO1, and DGWO2 [28]. According to the statistical analysis of the obtained experimental data, pGWO-CSA is superior to these classical swarm intelligence algorithms, GWO, and the related variants.
The remaining sections are organized as follows. Section 2 introduces GWO and CSA, Section 3 details the improvement ideas and rationale of pGWO-CSA, Section 4 presents the experimental tests, Section 5 addresses the robot path-planning problem, and Section 6 concludes the paper.

2. GWO and CSA

This section mainly introduces the relevant concepts and algorithm ideas of GWO and CSA to provide theoretical support for subsequent improvement research.

2.1. GWO

GWO is a swarm intelligence algorithm proposed by Mirjalili et al. in 2014, which is inspired by the hunting behavior of grey wolves [5]. In nature, grey wolves like to live in packs and have a very strict social hierarchy. There are four types of wolves in the pack, ranked from highest to lowest in the social hierarchy: the α wolf, β wolf, δ wolf, and ω wolf. The GWO is also based on the social hierarchy of grey wolves and their hunting behavior, and its specific mathematical model is as follows.
(1) Surrounding the prey
In the process of hunting, in order to surround the prey, it is necessary to calculate the distance between the current grey wolf and the prey and then update the position according to the distance. The behavior of grey wolves rounding up prey is defined as follows:
X(t + 1) = X_P(t) − A × D, (1)
and
D = |C × X_P(t) − X(t)|, (2)
where Formula (1) is the position-updating formula of the grey wolf, and Formula (2) calculates the distance between a grey wolf individual and the prey. Variable t is the current iteration number; X_P(t) and X(t) are the position vectors of the prey and the grey wolf at iteration t, respectively. A and C are coefficient vectors calculated by Formulas (3) and (4), respectively.
A = 2 × a × r_1 − a, (3)
C = 2 × r_2, (4)
and
a = 2 − 2 × t/t_max, (5)
where a is the convergence factor, which decreases linearly from 2 to 0 as the number of iterations increases; r_1 and r_2 are random vectors in [0, 1]. Formula (5) is the calculation formula of a, and t_max indicates the maximum number of iterations.
(2) Hunting
In an abstract search space, the position of the optimal solution is uncertain. To simulate the hunting behavior of grey wolves, the α, β, and δ wolves are assumed to have a better understanding of the potential location of the prey. The α wolf is regarded as the optimal solution, the β wolf as the suboptimal solution, and the δ wolf as the third-best solution. The other grey wolves update their positions based on the α, β, and δ wolves, using the following formulas:
D_α = |C_1 × X_α − X(t)|, D_β = |C_2 × X_β − X(t)|, D_δ = |C_3 × X_δ − X(t)|, (6)
X_1 = X_α − A_1 × D_α, X_2 = X_β − A_2 × D_β, X_3 = X_δ − A_3 × D_δ, (7)
and
X(t + 1) = (X_1 + X_2 + X_3)/3, (8)
where D_α, D_β, and D_δ represent the distances between the current grey wolf and the α, β, and δ wolves, respectively; X_α, X_β, and X_δ represent the position vectors of the α, β, and δ wolves, respectively; and X(t) is the current position of the grey wolf. C_1, C_2, and C_3 are random vectors calculated by Formula (4), and A_1, A_2, and A_3 are determined by Formula (3). Formula (7) gives the step length and direction of a grey wolf individual toward the α, β, and δ wolves, and Formula (8) is the position-updating formula of grey wolf individuals.
According to the description above, the algorithm flow chart of GWO is shown in Figure 1.
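For concreteness, one iteration of the standard GWO update (Formulas (3)–(8)) can be sketched in Python/NumPy as follows. This is a minimal illustration for a minimization task; the function and variable names are ours, not the paper's code.

```python
import numpy as np

def gwo_step(wolves, fitness, t, t_max, rng):
    """One iteration of standard GWO position updating (Formulas (3)-(8))."""
    # Rank the pack: the three fittest wolves act as alpha, beta, delta.
    order = np.argsort(fitness)
    leaders = wolves[order[0]], wolves[order[1]], wolves[order[2]]

    a = 2.0 - 2.0 * t / t_max                      # Formula (5): linear decay 2 -> 0
    new_wolves = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        x_new = np.zeros_like(x)
        for x_leader in leaders:
            A = 2.0 * a * rng.random(x.shape) - a  # Formula (3)
            C = 2.0 * rng.random(x.shape)          # Formula (4)
            D = np.abs(C * x_leader - x)           # Formula (6)
            x_new += x_leader - A * D              # Formula (7)
        new_wolves[i] = x_new / 3.0                # Formula (8): average of the three guides
    return new_wolves
```

In a full optimizer this step would be repeated for t = 1, …, t_max, re-evaluating fitness and re-ranking the pack after each update.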

2.2. CSA

The CSA was proposed by De Castro and Von Zuben in 2002 according to the clonal selection theory [32]. The CSA simulates the mechanism of immunological multiplication, mutation, and selection and is widely used in many problems.
For the convenience of model design, the principle of the biological immune system is simplified. All substances that do not belong to themselves are regarded as antigens. When the immune system is stimulated by antigens, antibodies will be produced to bind to antigens specifically. The stronger the association between antigen and antibody, the higher the affinity. Then, the antibodies with high antigen affinity are selected to multiply and differentiate between binding to the antigens, increase their antigen affinity through super-mutation, and finally eliminate the antigens. In addition, some of the antibodies are converted into memory cells in order to respond quickly to the same or similar antigens in the future. In the CSA, the problem that needs to be solved is regarded as the antigen, and the solution to the problem is regarded as the antibody. At the same time, the receptor-editing mechanism is adopted to avoid falling into the local optimum. The flow chart of CSA is shown in Figure 2.
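The select–clone–hypermutate loop described above can be sketched as follows. This is a minimal illustration of the mechanism under a minimization convention (lower objective value = higher affinity); the function name and parameters are our own, not De Castro and Von Zuben's reference implementation.

```python
import numpy as np

def csa_step(pop, affinity, n_select, n_clones, beta, rng):
    """One clonal selection generation: select the best antibodies, clone them,
    and hypermutate the clones (worse-ranked antibodies mutate more strongly)."""
    order = np.argsort(affinity)              # lower objective value = higher affinity
    selected = pop[order[:n_select]]
    clones = []
    for rank, antibody in enumerate(selected):
        sigma = beta * (rank + 1) / n_select  # mutation strength grows with rank
        for _ in range(n_clones):
            clones.append(antibody + rng.normal(0.0, sigma, antibody.shape))
    return np.asarray(clones)
```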
The above introduces GWO and CSA, which together inspire the algorithm proposed in this paper. The pGWO-CSA is described in detail in Section 3 below.

3. The Proposed pGWO-CSA

To improve the convergence speed and accuracy on single-peak functions and the ability to jump out of local optima on multi-peak functions and complex problems, three modifications are made. Firstly, a nonlinear function, instead of a linear one, is used to adjust the iterative attenuation of the convergence factor, balancing exploitation and exploration automatically. Secondly, pGWO-CSA adopts a new position-updating strategy: different strategies are used for the α wolf, the β wolf, and the other wolves, so that the position updating of the α and β wolves is not affected by wolves with lower fitness. Finally, pGWO-CSA combines GWO with CSA and introduces the cloning and super-mutation operators of the CSA into GWO.
The detailed improvement strategy is as follows.

3.1. Replace Linear Function with Nonlinear Function

In GWO, a decreases from 2 to 0 as the number of iterations increases, and the range of A shrinks as a decreases. According to Formulas (6) and (7), when |A| < 1, the next position of a grey wolf can be anywhere between its current position and the prey, and the grey wolf approaches the prey guided by the α, β, and δ wolves. When |A| ≥ 1, the grey wolf moves away from the current α, β, and δ wolves and searches for the global optimum. Therefore, when |A| < 1, grey wolves approach their prey for exploitation; when |A| ≥ 1, they move away from their prey for exploration. In the original GWO, parameter a decreases linearly from 2 to 0, with half of the iterations devoted to exploitation and half to exploration. To balance exploitation and exploration, pGWO-CSA adopts a nonlinear function instead of a linear one to adjust the iterative attenuation of parameter a, enhancing the exploration ability of the grey wolves at the early stage of iteration. In pGWO-CSA, parameter a is calculated by Formula (9).
a = cos(π × (t/t_max)^u) + 1.0, (9)
where variable t is the current iteration number, t_max is the maximum iteration number, and u is a coefficient, set to 2 in this paper.
Iterative curves of parameter a in the original GWO and pGWO-CSA are shown in Figure 3.
As can be seen from Figure 3, the convergence factor slowly decays in the early stage, improving the global search ability, and rapidly decays in the later stage, accelerating the search speed and optimizing the global exploration and local development ability of the algorithm.
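Formula (9) is straightforward to implement; a minimal sketch (the function name is ours) is given below. Note that the endpoints match the linear schedule, a(0) = 2 and a(t_max) = 0, while the curve stays above the linear one for most of the run.

```python
import math

def convergence_factor(t, t_max, u=2):
    """Nonlinear decay of the convergence factor a (Formula (9)):
    slow decay early (exploration), fast decay late (exploitation)."""
    return math.cos(math.pi * (t / t_max) ** u) + 1.0

# For comparison, the original GWO schedule of Formula (5) is: 2.0 - 2.0 * t / t_max
```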

3.2. Improve the Grey Wolf Position Updating Strategy

In the original grey wolf algorithm, the positions of all grey wolves are updated in each iteration by Formulas (6)–(8). Under this strategy, the position updating of the α wolf is affected by the β and δ wolves with poorer fitness, and the position updating of the β wolf is affected by the δ wolf with poorer fitness.
Therefore, a new location updating strategy is proposed in this paper. In each iteration, the fitness of grey wolves is calculated, and the top three wolves α, β, and δ with the best fitness are saved and recorded. The specific location update formula is as follows.
X′(t) = { X_1, if α wolf; (X_1 + X_2)/2, if β wolf; (X_1 + X_2 + X_3)/3, otherwise }, (10)
where X_1, X_2, and X_3 are determined by Formula (7), and X′(t) represents the candidate updated position. On this basis, if the current α and β wolves are close to the optimal solution, they have a greater probability of updating to positions with better fitness, thus better guiding the pack to hunt the prey and find the optimal solution. If the α and β wolves are at a local optimum, the other wolves still update their positions according to Formulas (6)–(8), so the algorithm will not fall into the local optimum. Therefore, the proposed method improves the exploitation capability without affecting the exploration capability.
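The hierarchical rule of Formula (10) can be sketched as a small helper (a sketch with our own naming; x1, x2, x3 are the guide points of Formula (7)):

```python
def update_position(role, x1, x2, x3):
    """Hierarchical position update of Formula (10); role is the wolf's rank."""
    if role == "alpha":             # the best wolf follows only its own guide point
        return x1
    if role == "beta":              # the second-best wolf ignores the delta wolf
        return (x1 + x2) / 2.0
    return (x1 + x2 + x3) / 3.0     # all other wolves keep the average of Formula (8)
```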

3.3. Combine GWO with CSA

GWO is combined with CSA by introducing the cloning and super-mutation operators of the CSA into GWO, improving the exploitation and exploration ability of GWO. For each grey wolf, a super-mutation coefficient (Sc) and a random number (r_3) are introduced. A wolf with good fitness has a small super-mutation coefficient and thus a small probability of mutation, while a wolf with poor fitness has a large probability of mutation. If the super-mutation coefficient Sc of the current grey wolf is greater than the random number r_3, the current grey wolf is cloned, and the cloned grey wolf is then mutated through Formulas (6)–(8). If the mutated grey wolf has better fitness, it replaces the current grey wolf. The specific calculation formulas are as follows.
Sc = (Fitness_i − Fitness_min)/(Fitness_max − Fitness_min) + 0.1 (11)
and
X(t + 1) = { X″(t), if Sc ≥ r_3; X′(t), otherwise }, (12)
where Fitness_i represents the fitness of the current grey wolf, Fitness_min represents the fitness of the best wolf, and Fitness_max represents the fitness of the worst wolf; r_3 is a random number in [0, 1]. X′(t) is determined by Formula (10), and X″(t) represents the better of X′(t) and the result of mutating X′(t) through Formulas (6)–(8).
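Formulas (11) and (12) can be sketched as follows for a minimization task. This is an illustrative sketch with our own names; the mutation of X′(t) through Formulas (6)–(8) is assumed to have been computed already and is passed in as a candidate.

```python
def supermutation_coefficient(fit_i, fit_min, fit_max):
    """Formula (11): worse wolves get a larger Sc and thus mutate more often."""
    return (fit_i - fit_min) / (fit_max - fit_min) + 0.1

def clone_and_select(x_prime, fit_prime, x_mut, fit_mut, sc, r3):
    """Formula (12) for minimization: when Sc >= r3 the wolf is cloned and mutated,
    and the mutated clone survives only if it is fitter; otherwise X'(t) is kept."""
    if sc >= r3 and fit_mut < fit_prime:
        return x_mut
    return x_prime
```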

3.4. Algorithm Flow Chart of pGWO-CSA

According to the improvement idea mentioned above, the algorithm flow chart of pGWO-CSA is shown in Figure 4.

3.5. Time Complexity Analysis of the Algorithm

Assuming that the population size is N, the dimension of objective function F is Dim, and the number of iterations is T, the time complexity of the pGWO-CSA algorithm can be calculated as follows.
First, the time complexity of initializing the grey wolf population is O(N × Dim), the time complexity of calculating the fitness of all grey wolves is O(N × F(Dim)), and the time complexity of preserving the locations of the best three wolves is O(3 × Dim).
Then, in each iteration, the time complexity of updating all grey wolves' positions is O(N × Dim), the time complexity of updating a, A, and C is O(1), and the time complexity of calculating the fitness of all grey wolves is O(N × F(Dim)). The time complexity of cloning and super-mutation is O(N1 × Dim), where N1 is the number of wolves meeting the mutation condition, and the time complexity of updating the fitness and locations of α, β, and δ is O(3 × Dim). With T iterations in total, the total iterative cost is O(T × N × Dim) + O(T) + O(T × N × F(Dim)) + O(T × N × Dim) + O(T × 3 × Dim).
So, in the worst case, the time complexity of the whole algorithm is O(N × Dim) + O(N × F(Dim)) + O(3 × Dim) + O(T × N × Dim) + O(T) + O(T × N × F(Dim)) + O(T × N × Dim) + O(T × 3 × Dim) ≈ O(T × N × (Dim + F(Dim))).

4. Experimental Test

In this section, 15 benchmark functions, F1–F15, are selected to test the performance of pGWO-CSA. Table 1 describes these benchmark functions in detail. Section 4.1 compares pGWO-CSA with other swarm intelligence algorithms, and Section 4.2 compares pGWO-CSA with GWO and its variants.
Among these benchmark functions, the first seven, F1–F7, are simpler, while the last eight, F8–F15, are more complex. Each benchmark function is tested in both 30 and 50 dimensions. The population size of all algorithms is set to 30, the maximum number of iterations is set to 500, and all experimental data are measured on the same computer to ensure a fair comparison between the algorithms. To mitigate the randomness of the algorithms, each algorithm is run 30 times on each test function, and the mean, standard deviation, minimum, and maximum of the results are recorded.

4.1. Compare with Other Swarm Intelligence Algorithms

In the comparison between pGWO-CSA and other swarm intelligence algorithms, PSO [29], DE [30], and FA [31] are selected to compare with pGWO-CSA. For each algorithm, the function optimization task is performed on the test functions F1–F15 in 30 dimensions and 50 dimensions, and the mean, standard deviation, and minimum and maximum values of the running results are recorded. The main parameters of the PSO, DE, FA, and pGWO-CSA are shown in Table 2.
In 30 dimensions, the test results of these four algorithms on test function F1–F15 are shown in Table 3. The convergence curves of these four algorithms on test function F1–F15 are recorded in Figure 5a, Figure 6a, Figure 7a, Figure 8a, Figure 9a, Figure 10a, Figure 11a, Figure 12a, Figure 13a, Figure 14a, Figure 15a, Figure 16a, Figure 17a, Figure 18a and Figure 19a.
In terms of the mean, sorted by the number of optimal values: pGWO-CSA ranked first with 13 optimal values, PSO and DE tied for second place with one each, and FA ranked fourth with zero. Compared with PSO, pGWO-CSA outperformed PSO on 14 of the 15 test functions; PSO outperformed pGWO-CSA only on test function F14. Compared with DE, pGWO-CSA outperformed DE on 14 of the 15 test functions; DE outperformed pGWO-CSA only on test function F6. Compared with FA, pGWO-CSA outperformed FA on all 15 test functions.
In terms of the standard deviation, sorted by the number of optimal values: pGWO-CSA ranked first with 11 optimal values, PSO second with three, FA third with one, and DE fourth with zero. Compared with PSO, pGWO-CSA outperformed PSO on 11 of the 15 test functions; PSO outperformed pGWO-CSA on test functions F5, F6, F8, and F14. Compared with DE, pGWO-CSA outperformed DE on 13 of the 15 test functions; DE outperformed pGWO-CSA on test functions F6 and F8. Compared with FA, pGWO-CSA outperformed FA on 14 of the 15 test functions; FA outperformed pGWO-CSA only on test function F6.
In terms of the minimum, sorted by the number of optimal values: pGWO-CSA ranked first with 13 optimal values, PSO second with four, and DE and FA tied for third with zero. Compared with PSO, pGWO-CSA outperformed PSO on 11 of the 15 test functions; PSO outperformed pGWO-CSA on test functions F7 and F14. In addition, pGWO-CSA and PSO obtained the same minimum values on test functions F9 and F12, where both found the functions' minima. Compared with DE and FA, pGWO-CSA outperformed both on all 15 test functions.
In terms of the maximum, sorted by the number of optimal values: pGWO-CSA ranked first with 12 optimal values, DE second with two, PSO third with one, and FA fourth with zero. Compared with PSO, pGWO-CSA outperformed PSO on 14 of the 15 test functions; PSO outperformed pGWO-CSA only on test function F14. Compared with DE, pGWO-CSA outperformed DE on 13 of the 15 test functions; DE outperformed pGWO-CSA on test functions F6 and F8. Compared with FA, pGWO-CSA outperformed FA on 14 of the 15 test functions; FA outperformed pGWO-CSA only on test function F6.
In addition, pGWO-CSA finds the theoretical optimal values on test functions F9, F11, and F12. On test functions F1, F2, F3, F4, F10, F11, F13, and F15, pGWO-CSA is superior to PSO, DE, and FA in terms of the mean, standard deviation, minimum, and maximum. Although PSO outperformed pGWO-CSA in all four measures on test function F14, pGWO-CSA still outperformed DE and FA on that function.
In 50 dimensions, the test results of these four algorithms on test function F1–F15 are shown in Table 4. The convergence curves of these four algorithms on test function F1–F15 are recorded in Figure 5b, Figure 6b, Figure 7b, Figure 8b, Figure 9b, Figure 10b, Figure 11b, Figure 12b, Figure 13b, Figure 14b, Figure 15b, Figure 16b, Figure 17b, Figure 18b and Figure 19b.
In terms of the mean, sorted by the number of optimal values: pGWO-CSA ranked first with 13 optimal values, PSO and FA tied for second place with one each, and DE ranked fourth with zero. Compared with PSO, pGWO-CSA outperformed PSO on 14 of the 15 test functions; PSO outperformed pGWO-CSA only on test function F14. Compared with DE, pGWO-CSA outperformed DE on all 15 test functions. Compared with FA, pGWO-CSA outperformed FA on 14 of the 15 test functions; FA outperformed pGWO-CSA only on test function F6.
In terms of the standard deviation, sorted by the number of optimal values: pGWO-CSA ranked first with 11 optimal values, PSO second with three, FA third with one, and DE fourth with zero. Compared with PSO, pGWO-CSA outperformed PSO on 11 of the 15 test functions; PSO outperformed pGWO-CSA on test functions F5, F6, F8, and F14. Compared with DE, pGWO-CSA outperformed DE on 14 of the 15 test functions; DE outperformed pGWO-CSA only on test function F8. Compared with FA, pGWO-CSA outperformed FA on 14 of the 15 test functions; FA outperformed pGWO-CSA only on test function F6.
In terms of the minimum, sorted by the number of optimal values: pGWO-CSA ranked first with 11 optimal values, PSO second with four, FA third with two, and DE fourth with zero. Compared with PSO, pGWO-CSA outperformed PSO on 11 of the 15 test functions; PSO outperformed pGWO-CSA on test functions F7 and F14. In addition, pGWO-CSA and PSO obtained the same minimum values on test functions F9 and F12, where both found the functions' minima. Compared with DE, pGWO-CSA outperformed DE on all 15 test functions. Compared with FA, pGWO-CSA outperformed FA on 13 of the 15 test functions; FA outperformed pGWO-CSA on test functions F6 and F8.
In terms of the maximum, sorted by the number of optimal values: pGWO-CSA ranked first with 12 optimal values, while PSO, DE, and FA tied for second place with one each. Compared with PSO, pGWO-CSA outperformed PSO on 14 of the 15 test functions; PSO outperformed pGWO-CSA only on test function F14. Compared with DE, pGWO-CSA outperformed DE on 14 of the 15 test functions; DE outperformed pGWO-CSA only on test function F8. Compared with FA, pGWO-CSA outperformed FA on 14 of the 15 test functions; FA outperformed pGWO-CSA only on test function F6.
In addition, pGWO-CSA finds the theoretical optimal values on test functions F9, F11, and F12. On test functions F1, F2, F3, F4, F9, F10, F13, and F15, pGWO-CSA is superior to PSO, DE, and FA in terms of the mean, standard deviation, minimum, and maximum. Although pGWO-CSA is not as good as FA on test function F6 or as good as PSO on test function F14, it still outperformed the other two swarm intelligence algorithms on these two functions.
Based on the above data and analysis, pGWO-CSA has a faster convergence speed, higher accuracy, and a better ability to jump out of local optima than the other swarm intelligence algorithms in both 30 and 50 dimensions. To further verify the performance of pGWO-CSA, we next compare it with GWO and its variants.

4.2. Compare with GWO and Its Variants

In order to further verify the performance of pGWO-CSA, pGWO-CSA is compared with GWO [5] and its variants OGWO [27], DGWO1, and DGWO2 [28] on the test functions F1–F15 in 30 dimensions and 50 dimensions. The main parameters of pGWO-CSA, GWO, OGWO, DGWO1, and DGWO2 are shown in Table 5.
In 30 dimensions, the test results of these five algorithms on test function F1–F15 are shown in Table 6, with the optimal values highlighted in bold. The convergence curves of these five algorithms on test function F1–F15 are recorded in Figure 20a, Figure 21a, Figure 22a, Figure 23a, Figure 24a, Figure 25a, Figure 26a, Figure 27a, Figure 28a, Figure 29a, Figure 30a, Figure 31a, Figure 32a, Figure 33a and Figure 34a.
In 50 dimensions, the test results of these five algorithms on test function F1–F15 are shown in Table 7. The convergence curves of these five algorithms on test function F1–F15 are recorded in Figure 20b, Figure 21b, Figure 22b, Figure 23b, Figure 24b, Figure 25b, Figure 26b, Figure 27b, Figure 28b, Figure 29b, Figure 30b, Figure 31b, Figure 32b, Figure 33b and Figure 34b.
In terms of the mean, sorted by the number of optimal values: pGWO-CSA ranked first with nine optimal values, DGWO2 second with six, OGWO third with two, and DGWO1 and GWO tied for fourth with one each. Compared with GWO, pGWO-CSA outperformed GWO on 14 of the 15 test functions. Compared with OGWO, pGWO-CSA outperformed OGWO on 13 of the 15 test functions; OGWO outperformed pGWO-CSA only on test function F7. Compared with DGWO1, pGWO-CSA outperformed DGWO1 on 14 of the 15 test functions. Compared with DGWO2, pGWO-CSA outperformed DGWO2 on 9 of the 15 test functions; DGWO2 outperformed pGWO-CSA on test functions F1, F2, F3, F4, and F14.
In terms of the standard deviation, sorted by the number of optimal values: pGWO-CSA and DGWO2 tied for first place with seven optimal values each, DGWO1 and OGWO tied for third with two each, and GWO ranked fifth with one. Compared with GWO, pGWO-CSA outperformed GWO on 11 of the 15 test functions; GWO outperformed pGWO-CSA only on test functions F7 and F15. Compared with OGWO, pGWO-CSA outperformed OGWO on 13 of the 15 test functions; OGWO outperformed pGWO-CSA only on test function F7. Compared with DGWO1, pGWO-CSA outperformed DGWO1 on 13 of the 15 test functions; DGWO1 outperformed pGWO-CSA only on test function F15. Compared with DGWO2, pGWO-CSA outperformed DGWO2 on 7 of the 15 test functions; DGWO2 outperformed pGWO-CSA on test functions F1, F2, F3, F4, F6, F7, and F14.
In terms of the performance of the minimum. Sort by the number of optimal values. The pGWO-CSA and DGWO2 tied for first place with eight optimal values. OGWO ranked third with six optimal values. DGWO1 and GWO tied for fourth place with three optimal values. Compared with GWO, pGWO-CSA outperformed GWO in 12 out of 15 test functions. Compared with OGWO, pGWO-CSA outperformed OGWO in eight out of fifteen test functions, and OGWO outperformed pGWO-CSA only in test functions F4, F7, and F8. Compared with DGWO1, pGWO-CSA outperformed DGWO1 in 12 out of 15 test functions. Compared with DGWO2, pGWO-CSA outperformed DGWO2 in seven out of fifteen test functions, and DGWO2 outperformed pGWO-CSA only in test functions F1, F2, F3, F4, and F13.
In terms of the performance of the maximum. Sort by the number of optimal values. The pGWO-CSA and DGWO2 tied for first place with seven optimal values. OGWO ranked third with three optimal values. DGWO1 ranked fourth with two optimal values. GWO ranked fifth with one optimal value. Compared with GWO, pGWO-CSA outperformed GWO in 12 out of 15 test functions, and GWO outperformed pGWO-CSA only in test functions F7 and F15. Compared with OGWO, pGWO-CSA outperformed OGWO in 11 out of 15 test functions, and OGWO outperformed pGWO-CSA only in test functions F6 and F7. Compared with DGWO1, pGWO-CSA outperformed DGWO1 in 12 out of 15 test functions, and DGWO1 outperformed pGWO-CSA only in test functions F6 and F15. Compared with DGWO2, pGWO-CSA outperformed DGWO2 in eight out of fifteen test functions, and DGWO2 outperformed pGWO-CSA only in test functions F1, F2, F3, F4, F6, and F14.
In addition, pGWO-CSA can find theoretical optimal values on the test functions F9, F11, and F12. In the test functions F5, F9, F10, F11, and F12, pGWO-CSA is the optimal value among the five algorithms in terms of the mean, standard deviation, and minimum and maximum. Although DGWO2 outperformed pGWO-CSA in the performance of the first four test functions, F1, F2, F3, and F4, pGWO-CSA still outperformed the other three algorithms.
For the 50-dimension results in Table 7, ranked by the number of optimal mean values, pGWO-CSA placed first with eight, DGWO2 second with six, OGWO third with three, and DGWO1 and GWO tied for fourth with one each. pGWO-CSA outperformed GWO on 14 of the 15 test functions; it outperformed OGWO on 11 of 15, with OGWO better only on F3, F7, and F10; it outperformed DGWO1 on 14 of 15; and it outperformed DGWO2 on 9 of 15, with DGWO2 better only on F1, F2, F3, F4, and F14.
Ranked by the number of optimal standard-deviation values, pGWO-CSA placed first with eight, DGWO2 second with seven, OGWO third with two, GWO fourth with one, and DGWO1 fifth with none. pGWO-CSA outperformed GWO on 13 of the 15 test functions, with GWO better only on F7; it outperformed OGWO on 11 of 15, with OGWO better only on F3, F7, and F14; it outperformed DGWO1 on all 15; and it outperformed DGWO2 on 7 of 15, with DGWO2 better on F1, F2, F3, F4, F7, F10, and F14.
Ranked by the number of optimal minimum values, DGWO2 placed first with eight, pGWO-CSA and OGWO tied for second with six each, GWO placed fourth with two, and DGWO1 fifth with one. pGWO-CSA outperformed GWO on 13 of the 15 test functions; it outperformed OGWO on 6 of 15, with OGWO better on F1, F3, F4, F7, F10, and F13; it outperformed DGWO1 on 14 of 15, with DGWO1 better only on F6; and it outperformed DGWO2 on 6 of 15, with DGWO2 better on F1, F2, F3, F4, F13, and F14.
Ranked by the number of optimal maximum values, DGWO2 placed first with eight, pGWO-CSA second with seven, OGWO third with three, GWO fourth with one, and DGWO1 fifth with none. pGWO-CSA outperformed GWO on 12 of the 15 test functions, with GWO better only on F7 and F14; it outperformed OGWO on 10 of 15, with OGWO better only on F4, F5, F7, and F14; it outperformed DGWO1 on 13 of 15, with DGWO1 better only on F15; and it outperformed DGWO2 on 6 of 15, with DGWO2 better on F1, F2, F3, F4, F5, F7, F14, and F15.
In addition, pGWO-CSA found the theoretical optimal values on test functions F9, F11, and F12. On F8, F9, F11, and F12, pGWO-CSA achieved the best mean, standard deviation, minimum, and maximum among the five algorithms. Tables 6 and 7 show that DGWO2 performs better than pGWO-CSA on the first four test functions, F1–F4, in both 30 and 50 dimensions. As Table 1 shows, these four functions are simple single-peak functions, indicating that DGWO2 performs better than pGWO-CSA on simple single-peak functions. However, pGWO-CSA still performs better than the other three algorithms on F1–F4.
The comparison also shows that pGWO-CSA performs better on the last eight test functions, F8–F15, than on the first seven, F1–F7, in both 30 and 50 dimensions. pGWO-CSA is thus stronger on the more complex functions, which is largely due to its super-mutation operation, which helps pGWO-CSA jump out of local optima.
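The cloning and super-mutation step borrowed from CSA can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the clone count `n_clones` and mutation scale `scale` are illustrative assumptions.

```python
import numpy as np

def clone_and_hypermutate(wolf, fitness, bounds, n_clones=5, scale=0.1, rng=None):
    """Sketch of a CSA-style escape step: clone a candidate solution,
    apply Gaussian hyper-mutation to each clone, and keep the best.
    n_clones and scale are illustrative assumptions, not tuned values."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = bounds
    clones = np.tile(wolf, (n_clones, 1))
    # Hyper-mutation: a large random perturbation relative to the search range,
    # which is what lets a stagnated wolf leave a local optimum
    clones += rng.normal(0.0, scale * (hi - lo), size=clones.shape)
    clones = np.clip(clones, lo, hi)
    # Keep the original so the step can never make the solution worse
    candidates = np.vstack([wolf, clones])
    values = np.array([fitness(c) for c in candidates])
    return candidates[np.argmin(values)]  # minimisation

# Toy usage on the sphere function (F1)
best = clone_and_hypermutate(np.array([3.0, -2.0]),
                             lambda x: float(np.sum(x**2)),
                             (-100.0, 100.0))
```

Because the parent is kept among the candidates, the step is elitist: the returned solution is never worse than the input wolf.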
Based on the above data and analysis, pGWO-CSA has a faster convergence speed, higher accuracy, and a better ability to jump out of local optima than GWO and its variants in both 30 and 50 dimensions. To reveal the performance of pGWO-CSA further, the Wilcoxon test is performed in Section 4.3 on the experimental data from Section 4.1 and Section 4.2.

4.3. Wilcoxon Test

To further reveal the performance of pGWO-CSA, the Wilcoxon test is conducted on the means of the 30 runs of each algorithm, using the experimental data from Section 4.1 and Section 4.2. The statistical results are shown in Table 8. In the Wilcoxon test, '+' means that the proposed algorithm is inferior to the selected algorithm, '−' means that the proposed algorithm is superior to the selected algorithm, and '=' means that the two algorithms obtain the same result.
Table 8 shows that each compared algorithm has few '+' entries, meaning the seven algorithms compared with pGWO-CSA outperform it on only a few test functions, while each has more than 15 '−' entries, meaning pGWO-CSA outperforms the other algorithms on most test functions. The results show that pGWO-CSA is superior to the other swarm intelligence algorithms, GWO, and its variants.
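A pairwise comparison of this kind can be reproduced with a Wilcoxon rank-sum test. The sketch below implements the test via its normal approximation (ignoring tie corrections, a simplifying assumption); the two sample arrays are synthetic stand-ins for 30 per-run results of two algorithms, not the paper's data.

```python
import math
import numpy as np

def wilcoxon_ranksum(x, y):
    """Two-sided Wilcoxon rank-sum test using the normal approximation.
    Tie corrections are omitted for brevity (a simplifying assumption)."""
    data = np.concatenate([x, y])
    order = np.argsort(data)
    ranks = np.empty(len(data))
    ranks[order] = np.arange(1, len(data) + 1)   # rank of each observation
    n1, n2 = len(x), len(y)
    w = ranks[:n1].sum()                         # rank sum of the first sample
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mean) / sd
    # Two-sided p-value from the standard normal distribution
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

rng = np.random.default_rng(0)
a = rng.normal(1e-3, 1e-4, 30)   # synthetic stand-in for 30 runs of pGWO-CSA
b = rng.normal(5e-3, 1e-4, 30)   # synthetic stand-in for a compared variant
z, p = wilcoxon_ranksum(a, b)
# '−' if the first algorithm's errors are significantly smaller, '+' if larger,
# '=' if the difference is not significant at the 0.05 level
mark = '=' if p >= 0.05 else ('-' if np.median(a) < np.median(b) else '+')
```

The same comparison is available as `scipy.stats.ranksums` when SciPy is installed.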

5. Robot Path-Planning Problem

With the development of artificial intelligence, robots have been widely used in various fields [33,34,35]. Among them, robot path planning is an important research problem. To further verify the applicability and superiority of the proposed algorithm, it is applied to the robot path-planning problem.

5.1. Robot Path-Planning Problem Description

The robot path-planning problem mainly involves two aspects: environment modeling and the evaluation function. Environment modeling transforms the robot's environmental information into a form that a computer can recognize and process. The evaluation function measures path quality and serves as the objective function to be optimized by the algorithm.

5.1.1. Environment Modeling

The environment model of the robot path-planning problem is shown in Figure 35. The starting point is located at (0,0) and marked with a black star, the endpoint is located at (10,10) and marked with a blue triangle, and the obstacles are marked with a green circle. The mathematical expression of the obstacles is shown in Formula (13).
$(x - a)^2 + (y - b)^2 = r^2,$ (13)
where a and b represent the center coordinates of the obstacle, and r represents the radius of the circle.
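In code, Formula (13) reduces to a distance check: a point collides with an obstacle when its distance to the centre (a, b) is below the radius r. A minimal sketch, with illustrative obstacle values:

```python
import math

def inside_obstacle(x, y, a, b, r):
    """True if the point (x, y) lies strictly inside the circular obstacle
    centred at (a, b) with radius r, per Formula (13)."""
    return math.hypot(x - a, y - b) < r

# Illustrative obstacle at (5, 5) with radius 1
assert inside_obstacle(5.2, 5.1, 5, 5, 1)   # a point inside the circle
assert not inside_obstacle(0, 0, 5, 5, 1)   # the start point (0, 0) is clear
```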

5.1.2. Evaluation Function

Suppose the robot finds path points from start to end: $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$, where the coordinate of path point $i$ is $(x_i, y_i)$. The complete path formed by connecting these points is a feasible solution to the robot path-planning problem. To reduce the dimensionality of the optimization problem and smooth the path curve, spline interpolation is used to construct the path. To evaluate path quality, this paper considers both the length of the path and the risk of the path. The evaluation function is shown in Formula (14).
$fit = w_1 \times fit_{len} + w_2 \times fit_{risk},$ (14)
where $w_1$ and $w_2$ are weight parameters with $w_1 + w_2 = 1.0$; $fit_{len}$ is the fitness value for the length of the path, calculated by Formula (15), and $fit_{risk}$ is the fitness value for the risk of the path, calculated by Formula (16).
$fit_{len} = \sum_{i=1}^{n} \sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2},$ (15)
where n is the total number of path points, and $(x_i, y_i)$ is the coordinate of path point $i$.
$fit_{risk} = c \times \sum_{i=1}^{k} \sum_{j=1}^{n} \max\left(0,\ 1 - \frac{\sqrt{(a_i - x_j)^2 + (b_i - y_j)^2}}{r_i}\right),$ (16)
where c is the penalty coefficient, k is the total number of obstacles, $(a_i, b_i)$ is the center of obstacle i, and $r_i$ is its radius.
According to Formulas (14)–(16), a small $fit_{len}$ means a short path, and a small $fit_{risk}$ means a low-risk path. Therefore, the smaller the $fit$, the higher the quality of the path.
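Formulas (14)–(16) combine into a single fitness routine. The sketch below evaluates a polyline directly (spline interpolation is omitted); the weights, penalty coefficient, and obstacle list are illustrative assumptions, not the paper's tuned values.

```python
import math

def path_fitness(points, obstacles, w1=0.5, w2=0.5, c=100.0):
    """Evaluate a candidate path per Formulas (14)-(16).
    points: [(x0, y0), ..., (xn, yn)]; obstacles: [(a, b, r), ...].
    w1, w2, and c are illustrative values (w1 + w2 must equal 1.0)."""
    # Formula (15): total Euclidean length of the polyline
    fit_len = sum(math.hypot(x1 - x0, y1 - y0)
                  for (x0, y0), (x1, y1) in zip(points, points[1:]))
    # Formula (16): penalise path points closer than r_i to an obstacle centre
    fit_risk = c * sum(max(0.0, 1.0 - math.hypot(a - x, b - y) / r)
                       for (a, b, r) in obstacles
                       for (x, y) in points)
    return w1 * fit_len + w2 * fit_risk  # Formula (14)

# A straight start-to-goal path that avoids one illustrative obstacle
path = [(0, 0), (5, 5), (10, 10)]
fit = path_fitness(path, obstacles=[(2, 8, 1.0)])
```

A path passing through an obstacle centre incurs the full penalty c for that point, so colliding paths score far worse than clear ones.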

5.2. The Experimental Results

To verify the applicability and superiority of pGWO-CSA on the robot path-planning problem, PSO, DE, FA, GWO, and its variants are compared with pGWO-CSA. The parameters of all algorithms are the same as in Section 4. To account for the randomness of the algorithms, each algorithm is run 10 times, and the minimum, maximum, and mean of the results are recorded. The path planned by pGWO-CSA is shown in Figure 36, and the experimental results of all algorithms are shown in Table 9.
According to the experimental data, pGWO-CSA achieves the best minimum, maximum, and mean among all algorithms, further verifying its applicability and superiority.

6. Conclusions

To address the defects of GWO, such as low convergence accuracy and premature convergence on complex problems, this paper proposes pGWO-CSA. Firstly, pGWO-CSA uses a nonlinear function instead of a linear one to adjust the iterative attenuation of the convergence factor, balancing exploitation and exploration. Secondly, it improves GWO's position-updating strategy, and finally, it is hybridized with the CSA. The improved pGWO-CSA gains in convergence speed, precision, and the ability to jump out of local optima. The experimental results show that pGWO-CSA has a clear accuracy advantage. Compared with GWO and the variants participating in the experiment, pGWO-CSA shows good stability in both 30 and 50 dimensions and is suitable for optimizing complex and variable problems. Finally, the proposed algorithm is applied to the robot path-planning problem, which further verifies its applicability and superiority.

Author Contributions

Conceptualization, Y.O. and P.Y.; Methodology, Y.O.; Software, Y.O.; Validation, Y.O.; Formal analysis, P.Y. and L.M.; Investigation, P.Y.; Resources, L.M.; Data curation, P.Y.; Writing—original draft, Y.O.; Writing—review & editing, Y.O., P.Y. and L.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Nos. 62266019 and 62066016), the Research Foundation of Education Bureau of Hunan Province, China (No. 21C0383).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Jiang, X.; Lin, Z.; He, T.; Ma, X.; Ma, S.; Li, S. Optimal path finding with beetle antennae search algorithm by using ant colony optimization initialization and different searching strategies. IEEE Access 2020, 8, 15459–15471.
2. Khan, A.H.; Cao, X.; Li, S.; Katsikis, V.N.; Liao, L. BAS-ADAM: An ADAM based approach to improve the performance of beetle antennae search optimizer. IEEE/CAA J. Autom. Sin. 2020, 7, 461–471.
3. Ye, S.-Q.; Zhou, K.-Q.; Zhang, C.-X.; Zain, A.M.; Ou, Y. An Improved Multi-Objective Cuckoo Search Approach by Exploring the Balance between Development and Exploration. Electronics 2022, 11, 704.
4. Khan, A.H.; Cao, X.; Li, S.; Luo, C. Using social behavior of beetles to establish a computational model for operational management. IEEE Trans. Comput. Soc. Syst. 2020, 7, 492–502.
5. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
6. Makhadmeh, S.N.; Khader, A.T.; Al-Betar, M.A.; Naim, S. Multi-objective power scheduling problem in smart homes using grey wolf optimiser. J. Ambient. Intell. Humaniz. Comput. 2019, 10, 3643–3667.
7. Zawbaa, H.M.; Emary, E.; Grosan, C.; Snasel, V. Large-dimensionality small-instance set feature selection: A hybrid bio-inspired heuristic approach. Swarm Evol. Comput. 2018, 42, 29–42.
8. Daniel, E. Optimum wavelet-based homomorphic medical image fusion using hybrid genetic–grey wolf optimization algorithm. IEEE Sens. J. 2018, 18, 6804–6811.
9. Alomoush, A.A.; Alsewari, A.A.; Alamri, H.S.; Aloufi, K.; Zamli, K.Z. Hybrid harmony search algorithm with grey wolf optimizer and modified opposition-based learning. IEEE Access 2019, 7, 68764–68785.
10. Zhang, S.; Mao, X.; Choo, K.-K.R.; Peng, T.; Wang, G. A trajectory privacy-preserving scheme based on a dual-K mechanism for continuous location-based services. Inf. Sci. 2020, 527, 406–419.
11. Kaur, A.; Sharma, S.; Mishra, A. An efficient opposition based grey wolf optimizer for weight adaptation in cooperative spectrum sensing. Wirel. Pers. Commun. 2021, 118, 2345–2364.
12. Makhadmeh, S.N.; Khader, A.T.; Al-Betar, M.A.; Naim, S.; Abasi, A.K.; Alyasseri, Z.A.A. A novel hybrid grey wolf optimizer with min-conflict algorithm for power scheduling problem in a smart home. Swarm Evol. Comput. 2021, 60, 100793.
13. Jeyafzam, F.; Vaziri, B.; Suraki, M.Y.; Hosseinabadi, A.A.R.; Slowik, A. Improvement of grey wolf optimizer with adaptive middle filter to adjust support vector machine parameters to predict diabetes complications. Neural Comput. Appl. 2021, 33, 15205–15228.
14. Zhou, J.; Huang, S.; Wang, M.; Qiu, Y. Performance evaluation of hybrid GA–SVM and GWO–SVM models to predict earthquake-induced liquefaction potential of soil: A multi-dataset investigation. Eng. Comput. 2021, 38, 4197–4215.
15. Khalilpourazari, S.; Doulabi, H.H.; Çiftçioğlu, A.Ö.; Weber, G.-W. Gradient-based grey wolf optimizer with Gaussian walk: Application in modelling and prediction of the COVID-19 pandemic. Expert Syst. Appl. 2021, 177, 114920.
16. Subudhi, U.; Dash, S. Detection and classification of power quality disturbances using GWO ELM. J. Ind. Inf. Integr. 2021, 22, 100204.
17. Long, W.; Liang, X.; Cai, S.; Jiao, J.; Zhang, W. A modified augmented Lagrangian with improved grey wolf optimization to constrained optimization problems. Neural Comput. Appl. 2017, 28, 421–438.
18. Tikhamarine, Y.; Souag-Gamane, D.; Ahmed, A.N.; Kisi, O.; El-Shafie, A. Improving artificial intelligence models accuracy for monthly streamflow forecasting using grey wolf optimization (GWO) algorithm. J. Hydrol. 2020, 582, 124435.
19. Shabeerkhan, S.; Padma, A. A novel GWO optimized pruning technique for inexact circuit design. Microprocess. Microsyst. 2020, 73, 102975.
20. Chen, X.; Yi, Z.; Zhou, Y.; Guo, P.; Farkoush, S.G.; Niroumandi, H. Artificial neural network modeling and optimization of the solid oxide fuel cell parameters using grey wolf optimizer. Energy Rep. 2021, 7, 3449–3459.
21. Mittal, N.; Singh, U.; Sohi, B.S. Modified grey wolf optimizer for global engineering optimization. Appl. Comput. Intell. Soft Comput. 2016, 2016, 7950348.
22. Saxena, A.; Kumar, R.; Das, S. β-chaotic map enabled grey wolf optimizer. Appl. Soft Comput. 2019, 75, 84–105.
23. Long, W.; Jiao, J.; Liang, X.; Tang, M. An exploration-enhanced grey wolf optimizer to solve high-dimensional numerical optimization. Eng. Appl. Artif. Intell. 2018, 68, 63–80.
24. Gupta, S.; Deep, K. A novel random walk grey wolf optimizer. Swarm Evol. Comput. 2019, 44, 101–112.
25. Teng, Z.; Lv, J.; Guo, L. An improved hybrid grey wolf optimization algorithm. Soft Comput. 2019, 23, 6617–6631.
26. Gupta, S.; Deep, K.; Moayedi, H.; Foong, L.K.; Assad, A. Sine cosine grey wolf optimizer to solve engineering design problems. Eng. Comput. 2021, 37, 3123–3149.
27. Yu, X.B.; Xu, W.Y.; Li, C.L. Opposition-based learning grey wolf optimizer for global optimization. Knowl.-Based Syst. 2021, 226, 107139.
28. Zhang, X.; Zhang, Y.; Ming, Z. Improved dynamic grey wolf optimizer. Front. Inf. Technol. Electron. Eng. 2021, 22, 877–890.
29. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of ICNN'95–International Conference on Neural Networks; IEEE, 1995; Volume 4, pp. 1942–1948.
30. Storn, R.; Price, K. Differential evolution–A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
31. Yang, X.S. Nature-Inspired Metaheuristic Algorithms; Luniver Press: Beckington, UK, 2010.
32. De Castro, L.N.; Von Zuben, F.J. Learning and optimization using the clonal selection principle. IEEE Trans. Evol. Comput. 2002, 6, 239–251.
33. Liu, H.-G.; Liu, D.-F.; Zhang, K.; Meng, F.-G.; Yang, A.-C.; Zhang, J.-G. Clinical Application of a Neurosurgical Robot in Intracranial Ommaya Reservoir Implantation. Front. Neurorobotics 2021, 15, 28.
34. Jin, Z.; Liu, L.; Gong, D.; Li, L. Target Recognition of Industrial Robots Using Machine Vision in 5G Environment. Front. Neurorobotics 2021, 15, 624466.
35. Cheng, X.; Li, J.; Zheng, C.; Zhang, J.; Zhao, M. An Improved PSO-GWO Algorithm With Chaos and Adaptive Inertial Weight for Robot Path Planning. Front. Neurorobotics 2021, 15, 770361.
Figure 1. Flow chart of GWO algorithm.
Figure 2. Flow chart of CSA.
Figure 3. Iterative curve of parameter a.
Figure 4. Flow chart of pGWO-CSA.
Figure 5. Convergence curve of test function F1.
Figure 6. Convergence curve of test function F2.
Figure 7. Convergence curve of test function F3.
Figure 8. Convergence curve of test function F4.
Figure 9. Convergence curve of test function F5.
Figure 10. Convergence curve of test function F6.
Figure 11. Convergence curve of test function F7.
Figure 12. Convergence curve of test function F8.
Figure 13. Convergence curve of test function F9.
Figure 14. Convergence curve of test function F10.
Figure 15. Convergence curve of test function F11.
Figure 16. Convergence curve of test function F12.
Figure 17. Convergence curve of test function F13.
Figure 18. Convergence curve of test function F14.
Figure 19. Convergence curve of test function F15.
Figure 20. Convergence curve of test function F1.
Figure 21. Convergence curve of test function F2.
Figure 22. Convergence curve of test function F3.
Figure 23. Convergence curve of test function F4.
Figure 24. Convergence curve of test function F5.
Figure 25. Convergence curve of test function F6.
Figure 26. Convergence curve of test function F7.
Figure 27. Convergence curve of test function F8.
Figure 28. Convergence curve of test function F9.
Figure 29. Convergence curve of test function F10.
Figure 30. Convergence curve of test function F11.
Figure 31. Convergence curve of test function F12.
Figure 32. Convergence curve of test function F13.
Figure 33. Convergence curve of test function F14.
Figure 34. Convergence curve of test function F15.
Figure 35. Environment modeling.
Figure 36. The path of pGWO-CSA.
Table 1. Detailed description of the test functions F1–F15.
| No. | Function | Dimension | Interval | fmin |
| F1 | $f(x) = \sum_{i=1}^{d} x_i^2$ | 30, 50 | [−100, 100] | 0 |
| F2 | $f(x) = \sum_{i=1}^{d} |x_i| + \prod_{i=1}^{d} |x_i|$ | 30, 50 | [−10, 10] | 0 |
| F3 | $f(x) = \sum_{i=1}^{d} \left( \sum_{j=1}^{i} x_j \right)^2$ | 30, 50 | [−100, 100] | 0 |
| F4 | $f(x) = \max_i \{ |x_i|, 1 \le i \le n \}$ | 30, 50 | [−100, 100] | 0 |
| F5 | $f(x) = \sum_{i=1}^{d-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$ | 30, 50 | [−30, 30] | 0 |
| F6 | $f(x) = \sum_{i=1}^{d} ([x_i + 0.5])^2$ | 30, 50 | [−100, 100] | 0 |
| F7 | $f(x) = \sum_{i=1}^{d} i x_i^4 + random(0, 1)$ | 30, 50 | [−1.28, 1.28] | 0 |
| F8 | $f(x) = -\sum_{i=1}^{d} x_i \sin(\sqrt{|x_i|})$ | 30, 50 | [−500, 500] | −418.9829 × 5 |
| F9 | $f(x) = \sum_{i=1}^{d} [x_i^2 - 10\cos(2\pi x_i) + 10]$ | 30, 50 | [−5.12, 5.12] | 0 |
| F10 | $f(x) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{d} \cos(2\pi x_i)\right) + 20 + e$ | 30, 50 | [−32, 32] | 0 |
| F11 | $f(x) = \frac{1}{4000}\sum_{i=1}^{d} x_i^2 - \prod_{i=1}^{d} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | 30, 50 | [−600, 600] | 0 |
| F12 | $f(x) = \sum_{i=1}^{d-1} [x_i^2 + 2x_{i+1}^2 - 0.3\cos(3\pi x_i) - 0.4\cos(4\pi x_{i+1}) + 0.7]$ | 30, 50 | [−15, 15] | 0 |
| F13 | $f(x) = \sum_{i=1}^{d} |x_i \sin(x_i) + 0.1 x_i|$ | 30, 50 | [−10, 10] | 0 |
| F14 | $f(x) = \sum_{i=1}^{d/4} [(x_{4i-3} + 10x_{4i-2})^2 + 5(x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2x_{4i-1})^4 + 10(x_{4i-3} - x_{4i})^4]$ | 30, 50 | [−4, 5] | 0 |
| F15 | $f(x) = \left[ \sum_{i=1}^{d} \sin^2(x_i) - \exp\left(-\sum_{i=1}^{d} x_i^2\right) \right] \cdot \exp\left(-\sum_{i=1}^{d} \sin^2\sqrt{|x_i|}\right)$ | 30, 50 | [−10, 10] | −1 |
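The simpler entries of Table 1 are direct to implement; as a sanity check, the sketch below codes F1 (the sphere function) and F9 (the Rastrigin function) and confirms that both reach their minimum of 0 at the origin.

```python
import math

def f1(x):
    """F1: sphere function; global minimum 0 at x = 0."""
    return sum(v * v for v in x)

def f9(x):
    """F9: Rastrigin function; global minimum 0 at x = 0."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

assert f1([0.0] * 30) == 0.0
assert abs(f9([0.0] * 30)) < 1e-12
```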
Table 2. Main parameters of the four algorithms.
| Algorithm | Main Parameters |
| PSO | ω = 0.8, c1 = 1.5, c2 = 1.5 |
| DE | CR = 0.8, F = 0.6 |
| FA | β0 = 1.0, γ = 0.000001, α = 0.6 |
| pGWO-CSA | a nonlinearly decreases from 2 to 0, u = 2 |
Table 3. The experimental results under 30 dimensions.
| Function | Index | PSO | DE | FA | pGWO-CSA |
| F1 | mean | 6.18 × 10^−10 | 2.35 × 10^−1 | 4.92 × 10^−1 | 5.97 × 10^−44 |
| | std | 1.97 × 10^−9 | 1.41 × 10^−1 | 6.87 × 10^−2 | 1.12 × 10^−43 |
| | min | 5.49 × 10^−18 | 4.00 × 10^−2 | 3.08 × 10^−1 | 6.65 × 10^−47 |
| | max | 1.09 × 10^−8 | 6.23 × 10^−1 | 6.09 × 10^−1 | 4.87 × 10^−43 |
| F2 | mean | 4.30 × 10^−7 | 3.48 × 10^−1 | 3.19 × 10^0 | 3.66 × 10^−27 |
| | std | 1.70 × 10^−6 | 1.13 × 10^−1 | 1.65 × 10^−1 | 3.51 × 10^−27 |
| | min | 8.23 × 10^−13 | 1.87 × 10^−1 | 2.93 × 10^0 | 3.98 × 10^−28 |
| | max | 9.52 × 10^−6 | 7.07 × 10^−1 | 3.53 × 10^0 | 1.54 × 10^−26 |
| F3 | mean | 1.12 × 10^−6 | 3.45 × 10^0 | 8.68 × 10^0 | 2.24 × 10^−42 |
| | std | 5.03 × 10^−6 | 1.97 × 10^0 | 1.32 × 10^0 | 8.92 × 10^−42 |
| | min | 2.17 × 10^−16 | 6.70 × 10^−1 | 6.05 × 10^0 | 3.59 × 10^−45 |
| | max | 2.78 × 10^−5 | 8.94 × 10^0 | 1.18 × 10^1 | 5.00 × 10^−41 |
| F4 | mean | 3.24 × 10^−3 | 2.09 × 10^1 | 3.12 × 10^−1 | 1.08 × 10^−11 |
| | std | 5.28 × 10^−3 | 6.64 × 10^0 | 2.14 × 10^−2 | 1.21 × 10^−11 |
| | min | 8.06 × 10^−7 | 1.13 × 10^1 | 2.72 × 10^−1 | 1.00 × 10^−12 |
| | max | 2.34 × 10^−2 | 4.23 × 10^1 | 3.49 × 10^−1 | 4.94 × 10^−11 |
| F5 | mean | 2.82 × 10^1 | 1.73 × 10^2 | 1.63 × 10^2 | 2.67 × 10^1 |
| | std | 4.07 × 10^−1 | 9.13 × 10^1 | 1.64 × 10^2 | 5.18 × 10^−1 |
| | min | 2.75 × 10^1 | 4.80 × 10^1 | 6.81 × 10^1 | 2.59 × 10^1 |
| | max | 2.89 × 10^1 | 4.90 × 10^2 | 6.18 × 10^2 | 2.80 × 10^1 |
| F6 | mean | 4.81 × 10^0 | 2.11 × 10^−1 | 5.06 × 10^−1 | 4.20 × 10^−1 |
| | std | 1.61 × 10^−1 | 1.21 × 10^−1 | 3.58 × 10^−2 | 3.28 × 10^−1 |
| | min | 4.19 × 10^0 | 3.27 × 10^−2 | 3.93 × 10^−1 | 3.14 × 10^−6 |
| | max | 5.05 × 10^0 | 4.90 × 10^−1 | 5.62 × 10^−1 | 1.50 × 10^0 |
| F7 | mean | 2.36 × 10^−3 | 1.08 × 10^−1 | 5.54 × 10^−1 | 1.38 × 10^−3 |
| | std | 2.06 × 10^−3 | 2.78 × 10^−2 | 1.08 × 10^−1 | 8.15 × 10^−4 |
| | min | 1.19 × 10^−4 | 4.20 × 10^−2 | 2.92 × 10^−1 | 3.62 × 10^−4 |
| | max | 1.01 × 10^−2 | 1.71 × 10^−1 | 8.16 × 10^−1 | 3.67 × 10^−3 |
| F8 | mean | −2.98 × 10^3 | −5.88 × 10^3 | −4.16 × 10^3 | −6.13 × 10^3 |
| | std | 3.37 × 10^2 | 4.45 × 10^2 | 1.45 × 10^3 | 7.70 × 10^2 |
| | min | −3.79 × 10^3 | −7.27 × 10^3 | −7.10 × 10^3 | −7.55 × 10^3 |
| | max | −2.52 × 10^3 | −5.15 × 10^3 | −2.14 × 10^3 | −4.68 × 10^3 |
| F9 | mean | 2.18 × 10^−7 | 2.07 × 10^2 | 2.28 × 10^2 | 0.00 × 10^0 |
| | std | 1.05 × 10^−6 | 1.79 × 10^1 | 2.52 × 10^1 | 0.00 × 10^0 |
| | min | 0.00 × 10^0 | 1.59 × 10^2 | 1.67 × 10^2 | 0.00 × 10^0 |
| | max | 5.88 × 10^−6 | 2.57 × 10^2 | 2.89 × 10^2 | 0.00 × 10^0 |
| F10 | mean | 8.55 × 10^−6 | 7.65 × 10^0 | 1.96 × 10^1 | 8.17 × 10^−15 |
| | std | 2.19 × 10^−5 | 9.03 × 10^0 | 2.19 × 10^−1 | 2.62 × 10^−15 |
| | min | 2.69 × 10^−11 | 1.16 × 10^−1 | 1.89 × 10^1 | 3.55 × 10^−15 |
| | max | 1.19 × 10^−4 | 2.00 × 10^1 | 2.00 × 10^1 | 1.42 × 10^−14 |
| F11 | mean | 1.36 × 10^−7 | 5.09 × 10^−1 | 4.10 × 10^−2 | 0.00 × 10^0 |
| | std | 4.09 × 10^−7 | 2.14 × 10^−1 | 1.68 × 10^−2 | 0.00 × 10^0 |
| | min | 3.33 × 10^−15 | 1.09 × 10^−1 | 2.01 × 10^−2 | 0.00 × 10^0 |
| | max | 1.73 × 10^−6 | 9.29 × 10^−1 | 9.22 × 10^−2 | 0.00 × 10^0 |
| F12 | mean | 5.12 × 10^−8 | 1.88 × 10^0 | 1.57 × 10^1 | 0.00 × 10^0 |
| | std | 2.09 × 10^−7 | 1.54 × 10^0 | 1.26 × 10^0 | 0.00 × 10^0 |
| | min | 0.00 × 10^0 | 2.85 × 10^−1 | 1.21 × 10^1 | 0.00 × 10^0 |
| | max | 1.16 × 10^−6 | 7.37 × 10^0 | 1.74 × 10^1 | 0.00 × 10^0 |
| F13 | mean | 2.04 × 10^−6 | 4.98 × 10^0 | 1.25 × 10^1 | 1.28 × 10^−23 |
| | std | 9.07 × 10^−6 | 4.04 × 10^0 | 2.34 × 10^0 | 3.86 × 10^−23 |
| | min | 1.42 × 10^−11 | 6.93 × 10^−2 | 7.89 × 10^0 | 1.70 × 10^−27 |
| | max | 5.07 × 10^−5 | 1.36 × 10^1 | 1.89 × 10^1 | 1.68 × 10^−22 |
| F14 | mean | 2.15 × 10^−7 | 1.28 × 10^0 | 8.11 × 10^0 | 5.36 × 10^−6 |
| | std | 7.67 × 10^−7 | 9.24 × 10^−1 | 2.43 × 10^0 | 7.72 × 10^−6 |
| | min | 1.82 × 10^−17 | 3.42 × 10^−1 | 4.29 × 10^0 | 2.86 × 10^−8 |
| | max | 4.12 × 10^−6 | 4.46 × 10^0 | 1.38 × 10^1 | 2.89 × 10^−5 |
| F15 | mean | 3.18 × 10^−10 | 1.78 × 10^−11 | 7.03 × 10^−12 | 1.75 × 10^−16 |
| | std | 1.38 × 10^−10 | 1.04 × 10^−11 | 1.01 × 10^−11 | 5.99 × 10^−16 |
| | min | 5.84 × 10^−11 | 2.91 × 10^−12 | 1.29 × 10^−13 | 2.30 × 10^−17 |
| | max | 6.33 × 10^−10 | 4.72 × 10^−11 | 4.54 × 10^−11 | 3.39 × 10^−15 |
Table 4. The experimental results under 50 dimensions.
| Function | Index | PSO | DE | FA | pGWO-CSA |
| F1 | mean | 1.31 × 10^−7 | 6.52 × 10^1 | 1.51 × 10^0 | 3.29 × 10^−31 |
| | std | 2.80 × 10^−7 | 2.88 × 10^1 | 1.84 × 10^−1 | 3.21 × 10^−31 |
| | min | 7.01 × 10^−15 | 2.19 × 10^1 | 1.17 × 10^0 | 2.68 × 10^−33 |
| | max | 1.30 × 10^−6 | 1.27 × 10^2 | 1.83 × 10^0 | 1.33 × 10^−30 |
| F2 | mean | 5.42 × 10^−6 | 8.15 × 10^0 | 2.03 × 10^1 | 7.23 × 10^−20 |
| | std | 2.41 × 10^−5 | 4.81 × 10^0 | 4.57 × 10^1 | 4.78 × 10^−20 |
| | min | 1.62 × 10^−10 | 3.68 × 10^0 | 6.38 × 10^0 | 9.01 × 10^−21 |
| | max | 1.35 × 10^−4 | 3.09 × 10^1 | 1.97 × 10^2 | 1.95 × 10^−19 |
| F3 | mean | 2.28 × 10^−4 | 1.28 × 10^3 | 6.61 × 10^1 | 6.91 × 10^−30 |
| | std | 1.05 × 10^−3 | 6.20 × 10^2 | 1.27 × 10^1 | 7.60 × 10^−30 |
| | min | 2.86 × 10^−14 | 5.03 × 10^2 | 3.62 × 10^1 | 1.26 × 10^−31 |
| | max | 5.83 × 10^−3 | 3.34 × 10^3 | 8.97 × 10^1 | 2.58 × 10^−29 |
| F4 | mean | 1.10 × 10^−1 | 8.85 × 10^1 | 3.77 × 10^0 | 5.16 × 10^−7 |
| | std | 1.84 × 10^−1 | 9.30 × 10^0 | 2.99 × 10^0 | 4.01 × 10^−7 |
| | min | 3.01 × 10^−7 | 5.04 × 10^1 | 5.27 × 10^−1 | 8.20 × 10^−8 |
| | max | 8.20 × 10^−1 | 9.48 × 10^1 | 1.05 × 10^1 | 1.50 × 10^−6 |
| F5 | mean | 4.86 × 10^1 | 1.75 × 10^4 | 3.93 × 10^2 | 4.70 × 10^1 |
| | std | 3.42 × 10^−1 | 1.18 × 10^4 | 4.15 × 10^2 | 6.79 × 10^−1 |
| | min | 4.78 × 10^1 | 2.66 × 10^3 | 1.92 × 10^2 | 4.58 × 10^1 |
| | max | 4.89 × 10^1 | 4.68 × 10^4 | 2.39 × 10^3 | 4.86 × 10^1 |
| F6 | mean | 9.62 × 10^0 | 7.36 × 10^1 | 1.53 × 10^0 | 2.05 × 10^0 |
| | std | 2.75 × 10^−1 | 3.16 × 10^1 | 1.83 × 10^−1 | 4.53 × 10^−1 |
| | min | 8.96 × 10^0 | 2.28 × 10^1 | 1.17 × 10^0 | 1.18 × 10^0 |
| | max | 1.01 × 10^1 | 1.44 × 10^2 | 1.86 × 10^0 | 2.76 × 10^0 |
| F7 | mean | 2.81 × 10^−3 | 2.47 × 10^−1 | 3.11 × 10^0 | 2.61 × 10^−3 |
| | std | 2.41 × 10^−3 | 5.41 × 10^−2 | 8.20 × 10^−1 | 1.89 × 10^−3 |
| | min | 1.59 × 10^−4 | 1.21 × 10^−1 | 1.88 × 10^0 | 3.74 × 10^−4 |
| | max | 1.11 × 10^−2 | 3.40 × 10^−1 | 4.98 × 10^0 | 9.15 × 10^−3 |
| F8 | mean | −3.77 × 10^3 | −7.67 × 10^3 | −6.65 × 10^3 | −9.02 × 10^3 |
| | std | 4.48 × 10^2 | 6.35 × 10^2 | 3.09 × 10^3 | 1.25 × 10^3 |
| | min | −5.46 × 10^3 | −9.57 × 10^3 | −1.25 × 10^4 | −1.09 × 10^4 |
| | max | −3.25 × 10^3 | −6.58 × 10^3 | −2.90 × 10^3 | −6.20 × 10^3 |
| F9 | mean | 5.52 × 10^−7 | 4.32 × 10^2 | 4.82 × 10^2 | 0.00 × 10^0 |
| | std | 2.56 × 10^−6 | 1.33 × 10^1 | 4.13 × 10^1 | 0.00 × 10^0 |
| | min | 6.39 × 10^−14 | 3.93 × 10^2 | 3.92 × 10^2 | 0.00 × 10^0 |
| | max | 1.43 × 10^−5 | 4.57 × 10^2 | 5.48 × 10^2 | 0.00 × 10^0 |
| F10 | mean | 6.03 × 10^−5 | 1.17 × 10^1 | 1.99 × 10^1 | 2.69 × 10^−14 |
| | std | 2.68 × 10^−4 | 8.32 × 10^0 | 1.48 × 10^−1 | 5.08 × 10^−15 |
| | min | 5.93 × 10^−9 | 2.48 × 10^0 | 1.95 × 10^1 | 1.42 × 10^−14 |
| | max | 1.50 × 10^−3 | 2.00 × 10^1 | 2.02 × 10^1 | 3.91 × 10^−14 |
| F11 | mean | 1.26 × 10^−6 | 1.63 × 10^0 | 7.43 × 10^−2 | 0.00 × 10^0 |
| | std | 3.48 × 10^−6 | 2.85 × 10^−1 | 1.16 × 10^−2 | 0.00 × 10^0 |
| | min | 0.00 × 10^0 | 1.26 × 10^0 | 5.44 × 10^−2 | 0.00 × 10^0 |
| | max | 1.40 × 10^−5 | 2.64 × 10^0 | 1.02 × 10^−1 | 0.00 × 10^0 |
| F12 | mean | 4.96 × 10^−9 | 3.43 × 10^1 | 3.57 × 10^1 | 0.00 × 10^0 |
| | std | 1.19 × 10^−8 | 4.35 × 10^0 | 2.15 × 10^0 | 0.00 × 10^0 |
| | min | 0.00 × 10^0 | 2.53 × 10^1 | 3.16 × 10^1 | 0.00 × 10^0 |
| | max | 5.42 × 10^−8 | 4.26 × 10^1 | 3.99 × 10^1 | 0.00 × 10^0 |
| F13 | mean | 3.43 × 10^−6 | 2.33 × 10^1 | 3.10 × 10^1 | 2.67 × 10^−15 |
| | std | 1.23 × 10^−5 | 6.15 × 10^0 | 3.39 × 10^0 | 1.38 × 10^−14 |
| | min | 1.42 × 10^−11 | 1.34 × 10^1 | 2.47 × 10^1 | 6.62 × 10^−20 |
| | max | 6.55 × 10^−5 | 3.63 × 10^1 | 4.05 × 10^1 | 7.72 × 10^−14 |
| F14 | mean | 4.50 × 10^−7 | 1.41 × 10^2 | 4.49 × 10^1 | 9.82 × 10^−6 |
| | std | 1.70 × 10^−6 | 7.07 × 10^1 | 1.10 × 10^1 | 2.50 × 10^−5 |
| | min | 4.98 × 10^−16 | 3.43 × 10^1 | 3.05 × 10^1 | 1.06 × 10^−7 |
| | max | 9.18 × 10^−6 | 3.35 × 10^2 | 7.79 × 10^1 | 1.42 × 10^−4 |
| F15 | mean | 3.99 × 10^−16 | 3.16 × 10^−18 | 2.48 × 10^−20 | 2.24 × 10^−23 |
| | std | 2.58 × 10^−16 | 3.12 × 10^−18 | 4.57 × 10^−20 | 6.01 × 10^−23 |
| | min | 3.36 × 10^−17 | 2.09 × 10^−19 | 1.85 × 10^−21 | 2.24 × 10^−24 |
| | max | 1.24 × 10^−15 | 1.49 × 10^−17 | 2.37 × 10^−19 | 3.26 × 10^−22 |
Table 5. Main parameters of the five algorithms.

| Algorithm | Main Parameters |
|---|---|
| GWO | a linearly decreases from 2 to 0 |
| OGWO | a nonlinearly decreases from 2 to 0, u = 2 |
| DGWO1 | a linearly decreases from 2 to 0 |
| DGWO2 | a linearly decreases from 2 to 0 |
| pGWO-CSA | a nonlinearly decreases from 2 to 0, u = 2 |
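Table 5's distinguishing parameter is the decay schedule of the convergence factor a. The sketch below contrasts the standard linear GWO schedule with a nonlinear one; the exact nonlinear formula used by OGWO and pGWO-CSA is defined in the paper's method section, so the form a = 2(1 − (t/T)^u) used here is only a representative assumption.

```python
# Sketch of the two convergence-factor schedules listed in Table 5.
# The nonlinear form a = 2 * (1 - (t / T) ** u) with u = 2 is an assumed,
# representative choice, not necessarily the paper's exact formula.

def a_linear(t: int, T: int) -> float:
    """Standard GWO: a decreases linearly from 2 to 0 over T iterations."""
    return 2.0 * (1.0 - t / T)

def a_nonlinear(t: int, T: int, u: float = 2.0) -> float:
    """Nonlinear decay from 2 to 0; u > 1 keeps a larger for longer,
    extending the exploration phase before exploitation takes over."""
    return 2.0 * (1.0 - (t / T) ** u)

T = 500
print(a_linear(0, T), a_linear(T, T))         # 2.0 0.0
print(a_nonlinear(250, T), a_linear(250, T))  # 1.5 1.0
```

With u = 2 the nonlinear schedule stays above the linear one at every intermediate iteration (2[(t/T) − (t/T)^2] ≥ 0), which is how the modification biases the early iterations toward exploration.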
Table 6. The experimental results under 30 dimensions.

| Function | Index | GWO | OGWO | DGWO1 | DGWO2 | pGWO-CSA |
|---|---|---|---|---|---|---|
| F1 | mean | 1.03 × 10^−27 | 2.99 × 10^−40 | 8.07 × 10^−21 | 2.49 × 10^−62 | 5.97 × 10^−44 |
| | std | 1.25 × 10^−27 | 7.79 × 10^−40 | 1.14 × 10^−20 | 5.68 × 10^−62 | 1.12 × 10^−43 |
| | min | 2.63 × 10^−29 | 1.25 × 10^−45 | 3.28 × 10^−22 | 1.61 × 10^−65 | 6.65 × 10^−47 |
| | max | 5.66 × 10^−27 | 3.96 × 10^−39 | 5.94 × 10^−20 | 2.84 × 10^−61 | 4.87 × 10^−43 |
| F2 | mean | 8.66 × 10^−17 | 3.38 × 10^−23 | 1.16 × 10^−12 | 2.48 × 10^−35 | 3.66 × 10^−27 |
| | std | 5.90 × 10^−17 | 6.77 × 10^−23 | 7.36 × 10^−13 | 3.84 × 10^−35 | 3.51 × 10^−27 |
| | min | 2.39 × 10^−17 | 6.53 × 10^−27 | 1.72 × 10^−13 | 1.59 × 10^−36 | 3.98 × 10^−28 |
| | max | 2.45 × 10^−16 | 2.95 × 10^−22 | 3.06 × 10^−12 | 2.03 × 10^−34 | 1.54 × 10^−26 |
| F3 | mean | 1.75 × 10^−26 | 2.08 × 10^−38 | 6.58 × 10^−20 | 4.60 × 10^−60 | 2.24 × 10^−42 |
| | std | 2.61 × 10^−26 | 7.19 × 10^−38 | 4.32 × 10^−20 | 2.36 × 10^−59 | 8.92 × 10^−42 |
| | min | 4.37 × 10^−28 | 6.61 × 10^−45 | 8.41 × 10^−21 | 6.87 × 10^−65 | 3.59 × 10^−45 |
| | max | 1.33 × 10^−25 | 3.87 × 10^−37 | 1.73 × 10^−19 | 1.32 × 10^−58 | 5.00 × 10^−41 |
| F4 | mean | 1.21 × 10^−6 | 1.44 × 10^−11 | 3.02 × 10^−5 | 2.76 × 10^−14 | 1.08 × 10^−11 |
| | std | 1.47 × 10^−6 | 3.57 × 10^−11 | 1.83 × 10^−5 | 6.15 × 10^−14 | 1.21 × 10^−11 |
| | min | 9.64 × 10^−8 | 6.57 × 10^−16 | 5.36 × 10^−6 | 1.27 × 10^−16 | 1.00 × 10^−12 |
| | max | 6.12 × 10^−6 | 1.64 × 10^−10 | 7.48 × 10^−5 | 3.19 × 10^−13 | 4.94 × 10^−11 |
| F5 | mean | 2.72 × 10^1 | 2.69 × 10^1 | 2.68 × 10^1 | 2.70 × 10^1 | 2.67 × 10^1 |
| | std | 6.10 × 10^−1 | 5.60 × 10^−1 | 7.98 × 10^−1 | 7.49 × 10^−1 | 5.18 × 10^−1 |
| | min | 2.60 × 10^1 | 2.62 × 10^1 | 2.59 × 10^1 | 2.61 × 10^1 | 2.59 × 10^1 |
| | max | 2.87 × 10^1 | 2.80 × 10^1 | 2.86 × 10^1 | 2.85 × 10^1 | 2.80 × 10^1 |
| F6 | mean | 7.83 × 10^−1 | 6.53 × 10^−1 | 5.27 × 10^−1 | 5.98 × 10^−1 | 4.20 × 10^−1 |
| | std | 4.10 × 10^−1 | 3.43 × 10^−1 | 3.30 × 10^−1 | 2.59 × 10^−1 | 3.28 × 10^−1 |
| | min | 8.81 × 10^−3 | 4.32 × 10^−4 | 6.32 × 10^−5 | 2.46 × 10^−1 | 3.14 × 10^−6 |
| | max | 1.66 × 10^0 | 1.49 × 10^0 | 1.26 × 10^0 | 1.00 × 10^0 | 1.50 × 10^0 |
| F7 | mean | 1.64 × 10^−3 | 1.67 × 10^−4 | 2.06 × 10^−3 | 1.60 × 10^−3 | 1.38 × 10^−3 |
| | std | 6.39 × 10^−4 | 1.38 × 10^−4 | 8.83 × 10^−4 | 7.95 × 10^−4 | 8.15 × 10^−4 |
| | min | 6.23 × 10^−4 | 1.25 × 10^−5 | 5.40 × 10^−4 | 4.23 × 10^−4 | 3.62 × 10^−4 |
| | max | 3.20 × 10^−3 | 4.88 × 10^−4 | 4.67 × 10^−3 | 3.83 × 10^−3 | 3.67 × 10^−3 |
| F8 | mean | −5.71 × 10^3 | −4.10 × 10^3 | −5.85 × 10^3 | −5.74 × 10^3 | −6.13 × 10^3 |
| | std | 9.10 × 10^2 | 1.38 × 10^3 | 8.53 × 10^2 | 8.88 × 10^2 | 7.70 × 10^2 |
| | min | −6.73 × 10^3 | −7.63 × 10^3 | −7.37 × 10^3 | −6.84 × 10^3 | −7.55 × 10^3 |
| | max | −3.42 × 10^3 | −2.82 × 10^3 | −3.60 × 10^3 | −3.15 × 10^3 | −4.68 × 10^3 |
| F9 | mean | 3.51 × 10^0 | 1.44 × 10^−1 | 7.76 × 10^0 | 4.31 × 10^−1 | 0.00 × 10^0 |
| | std | 8.96 × 10^0 | 7.75 × 10^−1 | 2.26 × 10^1 | 1.50 × 10^0 | 0.00 × 10^0 |
| | min | 0.00 × 10^0 | 0.00 × 10^0 | 3.73 × 10^−14 | 0.00 × 10^0 | 0.00 × 10^0 |
| | max | 4.67 × 10^1 | 4.32 × 10^0 | 1.28 × 10^2 | 6.51 × 10^0 | 0.00 × 10^0 |
| F10 | mean | 9.90 × 10^−14 | 8.64 × 10^−15 | 1.55 × 10^−11 | 2.45 × 10^−14 | 8.17 × 10^−15 |
| | std | 1.26 × 10^−14 | 3.96 × 10^−15 | 8.32 × 10^−12 | 4.04 × 10^−15 | 2.62 × 10^−15 |
| | min | 6.75 × 10^−14 | 3.55 × 10^−15 | 4.68 × 10^−12 | 1.42 × 10^−14 | 3.55 × 10^−15 |
| | max | 1.28 × 10^−13 | 1.78 × 10^−14 | 4.06 × 10^−11 | 2.84 × 10^−14 | 1.42 × 10^−14 |
| F11 | mean | 3.72 × 10^−3 | 1.12 × 10^−3 | 2.73 × 10^−3 | 7.71 × 10^−3 | 0.00 × 10^0 |
| | std | 7.49 × 10^−3 | 4.34 × 10^−3 | 6.32 × 10^−3 | 1.30 × 10^−2 | 0.00 × 10^0 |
| | min | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| | max | 2.79 × 10^−2 | 2.14 × 10^−2 | 2.19 × 10^−2 | 4.92 × 10^−2 | 0.00 × 10^0 |
| F12 | mean | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| | std | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| | min | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| | max | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 |
| F13 | mean | 5.30 × 10^−4 | 5.69 × 10^−5 | 1.07 × 10^−3 | 3.09 × 10^−5 | 1.28 × 10^−23 |
| | std | 7.11 × 10^−4 | 1.83 × 10^−4 | 8.13 × 10^−4 | 1.01 × 10^−4 | 3.86 × 10^−23 |
| | min | 2.66 × 10^−17 | 1.79 × 10^−27 | 6.06 × 10^−12 | 3.18 × 10^−37 | 1.70 × 10^−27 |
| | max | 2.46 × 10^−3 | 8.65 × 10^−4 | 2.67 × 10^−3 | 5.46 × 10^−4 | 1.68 × 10^−22 |
| F14 | mean | 2.00 × 10^−5 | 9.68 × 10^−6 | 2.69 × 10^−5 | 3.23 × 10^−6 | 5.36 × 10^−6 |
| | std | 1.93 × 10^−5 | 1.79 × 10^−5 | 2.25 × 10^−5 | 5.40 × 10^−6 | 7.72 × 10^−6 |
| | min | 7.40 × 10^−7 | 1.79 × 10^−7 | 4.62 × 10^−6 | 4.23 × 10^−8 | 2.86 × 10^−8 |
| | max | 7.44 × 10^−5 | 9.70 × 10^−5 | 1.01 × 10^−4 | 2.79 × 10^−5 | 2.89 × 10^−5 |
| F15 | mean | 9.80 × 10^−16 | 9.12 × 10^−12 | 9.65 × 10^−16 | 2.82 × 10^−15 | 1.75 × 10^−16 |
| | std | 4.96 × 10^−16 | 1.64 × 10^−11 | 4.40 × 10^−16 | 1.07 × 10^−14 | 5.99 × 10^−16 |
| | min | 4.87 × 10^−16 | 3.76 × 10^−15 | 4.41 × 10^−16 | 3.12 × 10^−16 | 2.30 × 10^−17 |
| | max | 3.11 × 10^−15 | 6.83 × 10^−11 | 2.97 × 10^−15 | 6.05 × 10^−14 | 3.39 × 10^−15 |
Table 7. The experimental results under 50 dimensions.

| Function | Index | GWO | OGWO | DGWO1 | DGWO2 | pGWO-CSA |
|---|---|---|---|---|---|---|
| F1 | mean | 7.84 × 10^−20 | 1.19 × 10^−29 | 1.54 × 10^−14 | 8.58 × 10^−47 | 3.29 × 10^−31 |
| | std | 8.07 × 10^−20 | 4.26 × 10^−29 | 1.32 × 10^−14 | 1.98 × 10^−46 | 3.21 × 10^−31 |
| | min | 8.05 × 10^−21 | 1.15 × 10^−34 | 3.03 × 10^−15 | 5.11 × 10^−49 | 2.68 × 10^−33 |
| | max | 3.11 × 10^−19 | 2.13 × 10^−28 | 7.68 × 10^−14 | 8.72 × 10^−46 | 1.33 × 10^−30 |
| F2 | mean | 2.20 × 10^−12 | 1.15 × 10^−17 | 5.02 × 10^−9 | 8.20 × 10^−27 | 7.23 × 10^−20 |
| | std | 1.30 × 10^−12 | 2.79 × 10^−17 | 2.85 × 10^−9 | 9.32 × 10^−27 | 4.78 × 10^−20 |
| | min | 4.72 × 10^−13 | 2.66 × 10^−19 | 1.57 × 10^−9 | 7.22 × 10^−28 | 9.01 × 10^−21 |
| | max | 6.73 × 10^−12 | 1.50 × 10^−16 | 1.58 × 10^−8 | 4.41 × 10^−26 | 1.95 × 10^−19 |
| F3 | mean | 1.22 × 10^−18 | 3.52 × 10^−28 | 5.31 × 10^−13 | 1.93 × 10^−45 | 6.91 × 10^−30 |
| | std | 1.37 × 10^−18 | 1.16 × 10^−27 | 8.55 × 10^−13 | 3.81 × 10^−45 | 7.60 × 10^−30 |
| | min | 1.07 × 10^−19 | 1.30 × 10^−36 | 3.51 × 10^−14 | 3.01 × 10^−48 | 1.26 × 10^−31 |
| | max | 6.30 × 10^−18 | 6.16 × 10^−27 | 4.24 × 10^−12 | 1.98 × 10^−44 | 2.58 × 10^−29 |
| F4 | mean | 4.49 × 10^−4 | 3.67 × 10^−8 | 5.11 × 10^−3 | 1.48 × 10^−9 | 5.16 × 10^−7 |
| | std | 3.16 × 10^−4 | 9.79 × 10^−8 | 5.40 × 10^−3 | 2.37 × 10^−9 | 4.01 × 10^−7 |
| | min | 7.15 × 10^−5 | 2.84 × 10^−12 | 1.55 × 10^−3 | 1.79 × 10^−11 | 8.20 × 10^−8 |
| | max | 1.33 × 10^−3 | 4.01 × 10^−7 | 2.93 × 10^−2 | 1.07 × 10^−8 | 1.50 × 10^−6 |
| F5 | mean | 4.75 × 10^1 | 4.71 × 10^1 | 4.71 × 10^1 | 4.71 × 10^1 | 4.70 × 10^1 |
| | std | 8.65 × 10^−1 | 6.93 × 10^−1 | 7.38 × 10^−1 | 6.92 × 10^−1 | 6.79 × 10^−1 |
| | min | 4.60 × 10^1 | 4.61 × 10^1 | 4.61 × 10^1 | 4.61 × 10^1 | 4.58 × 10^1 |
| | max | 4.87 × 10^1 | 4.85 × 10^1 | 4.86 × 10^1 | 4.85 × 10^1 | 4.86 × 10^1 |
| F6 | mean | 2.66 × 10^0 | 2.28 × 10^0 | 2.05 × 10^0 | 2.71 × 10^0 | 2.05 × 10^0 |
| | std | 5.41 × 10^−1 | 5.24 × 10^−1 | 6.12 × 10^−1 | 4.58 × 10^−1 | 4.53 × 10^−1 |
| | min | 1.25 × 10^0 | 1.38 × 10^0 | 6.89 × 10^−1 | 1.75 × 10^0 | 1.18 × 10^0 |
| | max | 4.00 × 10^0 | 3.50 × 10^0 | 3.25 × 10^0 | 3.95 × 10^0 | 2.76 × 10^0 |
| F7 | mean | 3.52 × 10^−3 | 2.27 × 10^−4 | 4.75 × 10^−3 | 2.75 × 10^−3 | 2.61 × 10^−3 |
| | std | 1.80 × 10^−3 | 2.70 × 10^−4 | 2.20 × 10^−3 | 1.03 × 10^−3 | 1.89 × 10^−3 |
| | min | 8.32 × 10^−4 | 1.65 × 10^−5 | 1.64 × 10^−3 | 1.02 × 10^−3 | 3.74 × 10^−4 |
| | max | 8.12 × 10^−3 | 1.42 × 10^−3 | 1.21 × 10^−2 | 4.90 × 10^−3 | 9.15 × 10^−3 |
| F8 | mean | −8.79 × 10^3 | −5.64 × 10^3 | −8.46 × 10^3 | −8.91 × 10^3 | −9.21 × 10^3 |
| | std | 1.49 × 10^3 | 2.17 × 10^3 | 1.96 × 10^3 | 9.69 × 10^2 | 9.61 × 10^2 |
| | min | −1.09 × 10^4 | −1.05 × 10^4 | −1.08 × 10^4 | −1.10 × 10^4 | −1.13 × 10^4 |
| | max | −4.19 × 10^3 | −3.95 × 10^3 | −3.82 × 10^3 | −7.33 × 10^3 | −7.62 × 10^3 |
| F9 | mean | 4.54 × 10^0 | 4.71 × 10^−5 | 9.85 × 10^0 | 9.88 × 10^−1 | 0.00 × 10^0 |
| | std | 5.08 × 10^0 | 2.54 × 10^−4 | 6.01 × 10^0 | 2.86 × 10^0 | 0.00 × 10^0 |
| | min | 2.49 × 10^−14 | 0.00 × 10^0 | 8.42 × 10^−11 | 0.00 × 10^0 | 0.00 × 10^0 |
| | max | 1.94 × 10^1 | 1.41 × 10^−3 | 2.57 × 10^1 | 1.38 × 10^1 | 0.00 × 10^0 |
| F10 | mean | 3.77 × 10^−11 | 2.14 × 10^−14 | 1.55 × 10^−8 | 3.94 × 10^−14 | 2.69 × 10^−14 |
| | std | 2.50 × 10^−11 | 1.72 × 10^−14 | 5.43 × 10^−9 | 2.80 × 10^−15 | 5.08 × 10^−15 |
| | min | 1.18 × 10^−11 | 3.55 × 10^−15 | 5.64 × 10^−9 | 3.20 × 10^−14 | 1.42 × 10^−14 |
| | max | 1.31 × 10^−10 | 6.75 × 10^−14 | 2.87 × 10^−8 | 4.26 × 10^−14 | 3.91 × 10^−14 |
| F11 | mean | 3.56 × 10^−3 | 2.37 × 10^−3 | 3.47 × 10^−3 | 2.83 × 10^−3 | 0.00 × 10^0 |
| | std | 7.35 × 10^−3 | 7.32 × 10^−3 | 8.44 × 10^−3 | 6.50 × 10^−3 | 0.00 × 10^0 |
| | min | 0.00 × 10^0 | 0.00 × 10^0 | 6.33 × 10^−15 | 0.00 × 10^0 | 0.00 × 10^0 |
| | max | 2.31 × 10^−2 | 3.12 × 10^−2 | 3.58 × 10^−2 | 2.15 × 10^−2 | 0.00 × 10^0 |
| F12 | mean | 0.00 × 10^0 | 0.00 × 10^0 | 1.75 × 10^−14 | 0.00 × 10^0 | 0.00 × 10^0 |
| | std | 0.00 × 10^0 | 0.00 × 10^0 | 1.31 × 10^−14 | 0.00 × 10^0 | 0.00 × 10^0 |
| | min | 0.00 × 10^0 | 0.00 × 10^0 | 4.77 × 10^−15 | 0.00 × 10^0 | 0.00 × 10^0 |
| | max | 0.00 × 10^0 | 0.00 × 10^0 | 5.51 × 10^−14 | 0.00 × 10^0 | 0.00 × 10^0 |
| F13 | mean | 7.99 × 10^−4 | 2.10 × 10^−4 | 2.00 × 10^−3 | 1.00 × 10^−4 | 2.67 × 10^−15 |
| | std | 9.17 × 10^−4 | 4.57 × 10^−4 | 1.23 × 10^−3 | 2.39 × 10^−4 | 1.38 × 10^−14 |
| | min | 1.83 × 10^−12 | 7.86 × 10^−22 | 2.17 × 10^−9 | 6.97 × 10^−28 | 6.62 × 10^−20 |
| | max | 3.35 × 10^−3 | 1.94 × 10^−3 | 4.81 × 10^−3 | 1.02 × 10^−3 | 7.72 × 10^−14 |
| F14 | mean | 3.55 × 10^−5 | 1.37 × 10^−5 | 6.99 × 10^−5 | 5.78 × 10^−6 | 9.82 × 10^−6 |
| | std | 3.21 × 10^−5 | 1.09 × 10^−5 | 4.69 × 10^−5 | 6.70 × 10^−6 | 2.50 × 10^−5 |
| | min | 1.46 × 10^−6 | 4.20 × 10^−7 | 9.40 × 10^−6 | 4.63 × 10^−8 | 1.06 × 10^−7 |
| | max | 1.24 × 10^−4 | 3.78 × 10^−5 | 2.15 × 10^−4 | 2.39 × 10^−5 | 1.42 × 10^−4 |
| F15 | mean | 1.14 × 10^−22 | 9.50 × 10^−18 | 5.62 × 10^−23 | 5.52 × 10^−23 | 2.24 × 10^−23 |
| | std | 2.82 × 10^−22 | 2.07 × 10^−17 | 6.32 × 10^−23 | 5.49 × 10^−23 | 6.01 × 10^−23 |
| | min | 2.08 × 10^−23 | 1.82 × 10^−21 | 1.80 × 10^−23 | 1.83 × 10^−23 | 2.24 × 10^−24 |
| | max | 1.60 × 10^−21 | 8.02 × 10^−17 | 3.20 × 10^−22 | 2.82 × 10^−22 | 3.26 × 10^−22 |
Table 8. The results of the Wilcoxon test. Each of the 30 tests (15 functions × 2 dimensions) compares pGWO-CSA against the seven algorithms PSO, DE, FA, GWO, OGWO, DGWO1, and DGWO2, with outcomes recorded as "+", "−", or "=". Per-function outcome counts across the seven algorithms:

| Function | Dimension | + | − | = |
|---|---|---|---|---|
| F1 | 30 | 1 | 6 | 0 |
| F1 | 50 | 1 | 6 | 0 |
| F2 | 30 | 1 | 6 | 0 |
| F2 | 50 | 1 | 6 | 0 |
| F3 | 30 | 1 | 6 | 0 |
| F3 | 50 | 1 | 6 | 0 |
| F4 | 30 | 1 | 6 | 0 |
| F4 | 50 | 2 | 5 | 0 |
| F5 | 30 | 0 | 7 | 0 |
| F5 | 50 | 0 | 7 | 0 |
| F6 | 30 | 1 | 6 | 0 |
| F6 | 50 | 1 | 5 | 1 |
| F7 | 30 | 1 | 6 | 0 |
| F7 | 50 | 1 | 6 | 0 |
| F8 | 30 | 0 | 7 | 0 |
| F8 | 50 | 0 | 7 | 0 |
| F9 | 30 | 0 | 7 | 0 |
| F9 | 50 | 0 | 7 | 0 |
| F10 | 30 | 0 | 7 | 0 |
| F10 | 50 | 1 | 6 | 0 |
| F11 | 30 | 0 | 7 | 0 |
| F11 | 50 | 0 | 7 | 0 |
| F12 | 30 | 0 | 3 | 4 |
| F12 | 50 | 0 | 4 | 3 |
| F13 | 30 | 0 | 7 | 0 |
| F13 | 50 | 0 | 7 | 0 |
| F14 | 30 | 2 | 5 | 0 |
| F14 | 50 | 2 | 5 | 0 |
| F15 | 30 | 0 | 7 | 0 |
| F15 | 50 | 0 | 7 | 0 |

Totals per algorithm over all 30 tests:

| Outcome | PSO | DE | FA | GWO | OGWO | DGWO1 | DGWO2 |
|---|---|---|---|---|---|---|---|
| + | 2 | 1 | 1 | 0 | 4 | 0 | 10 |
| − | 28 | 29 | 29 | 28 | 24 | 28 | 18 |
| = | 0 | 0 | 0 | 2 | 2 | 2 | 2 |
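The comparisons in Table 8 come from the Wilcoxon rank-sum test applied to the per-run results of each pair of algorithms. As a sketch of how such a test can be computed, here is a pure-Python two-sided version using the normal approximation with average ranks for ties (no tie correction of the variance); the paper's exact settings, such as the significance level, are given in its experimental section.

```python
import math

def wilcoxon_rank_sum(x, y):
    """Two-sided Wilcoxon rank-sum test (normal approximation,
    average ranks for ties). Returns (z, p)."""
    n1, n2 = len(x), len(y)
    # pool the samples, remembering which values came from x
    pooled = sorted((v, i < n1) for i, v in enumerate(list(x) + list(y)))
    # assign average ranks to runs of tied values
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg = (i + 1 + j) / 2.0  # mean of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    w = sum(r for r, (_, is_x) in zip(ranks, pooled) if is_x)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Toy data standing in for two algorithms' final fitness values per run:
z, p = wilcoxon_rank_sum([1, 2, 3, 4, 5], [10, 11, 12, 13, 14])
print(p < 0.05)  # True: clearly separated samples differ significantly
```

In practice each cell of Table 8 would be filled by running such a test on the 30-dimensional and 50-dimensional result samples and recording "+", "−", or "=" according to the sign and significance of the difference.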
Table 9. Experimental results.

| Index | PSO | DE | FA | GWO | OGWO | DGWO1 | DGWO2 | pGWO-CSA |
|---|---|---|---|---|---|---|---|---|
| min | 8.81 × 10^0 | 7.56 × 10^0 | 7.17 × 10^0 | 7.57 × 10^0 | 7.57 × 10^0 | 7.56 × 10^0 | 8.36 × 10^0 | 7.16 × 10^0 |
| max | 1.03 × 10^1 | 9.44 × 10^0 | 1.34 × 10^1 | 9.27 × 10^0 | 8.74 × 10^0 | 9.22 × 10^0 | 1.01 × 10^1 | 8.51 × 10^0 |
| mean | 9.97 × 10^0 | 8.46 × 10^0 | 1.08 × 10^1 | 8.30 × 10^0 | 8.10 × 10^0 | 8.13 × 10^0 | 9.51 × 10^0 | 7.94 × 10^0 |
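The mean, std, min, and max entries reported throughout the results tables are aggregate statistics over the independent runs of each algorithm. A minimal sketch of that aggregation (assuming the sample standard deviation; whether the paper uses the sample or population form is not stated here):

```python
import math

def summarize(runs):
    """Aggregate per-run results into the mean/std/min/max indices
    reported in the results tables. Assumes the sample standard
    deviation (ddof = 1); requires at least two runs."""
    n = len(runs)
    mean = sum(runs) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in runs) / (n - 1))
    return {"mean": mean, "std": std, "min": min(runs), "max": max(runs)}

# Toy example: path lengths from five hypothetical planning runs
stats = summarize([7.16, 7.94, 8.51, 7.30, 8.02])
print(stats["min"], stats["max"])  # 7.16 8.51
```

For Table 9, for instance, pGWO-CSA's row (min 7.16, max 8.51, mean 7.94) would be produced by applying exactly this aggregation to its per-run path lengths.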
Ou, Y.; Yin, P.; Mo, L. An Improved Grey Wolf Optimizer and Its Application in Robot Path Planning. Biomimetics 2023, 8, 84. https://doi.org/10.3390/biomimetics8010084
