Improved Grey Wolf Optimization Algorithm and Application

This paper proposes an improved Grey Wolf Optimizer (GWO) to resolve the instability and low convergence accuracy that arise when GWO, a meta-heuristic algorithm with strong optimal search capability, is applied to path planning for mobile robots. We use an improved chaotic tent mapping to initialize the wolves and enhance the global search ability, and a nonlinear convergence factor based on a Gaussian distribution change curve to balance the global and local search abilities. In addition, an improved dynamic proportional weighting strategy is proposed to update the positions of the grey wolves and accelerate convergence. The proposed improved GWO is compared with eight other algorithms in several benchmark function tests and path planning experiments. The experimental results show that the improved GWO has higher accuracy and faster convergence speed.


Introduction
Path planning is widely used in mobile robot navigation; its aim is to find an optimal trajectory that connects the starting point to the target point while avoiding collisions with obstacles [1,2]. Many algorithms are commonly used for this problem, such as the A* algorithm [3], particle swarm optimization (PSO) [4,5], the genetic algorithm (GA) [6], and the grey wolf optimizer (GWO) [7-9].
GWO is a recent swarm intelligence optimization algorithm that is widely used in many significant fields. It imitates the hierarchical pattern and hunting behavior of a grey wolf pack and achieves optimization through the pack's tracking, encircling, and pouncing behaviors. Compared with traditional optimization algorithms such as PSO and GA, GWO has the advantages of fewer parameters, simple principles, and easy implementation. However, GWO also suffers from slow convergence, low solution accuracy, and a tendency to fall into local optima. For this reason, many scholars have proposed improvements. Yang Zhang [10] proposed MGWO, which introduced an exponential convergence factor strategy, an adaptive update strategy, and a dynamic weighting strategy to improve the GWO search capability. Min Wang [11] proposed NGWO, which uses reverse learning for the initial population and introduces a nonlinear convergence factor to improve the search capability. Luis Rodriguez [12] proposed a grey wolf algorithm based on a fuzzy hierarchical operator (GWO-fuzzy) and compared two proportional weighting strategies. Saremi [13] proposed the grey wolf algorithm with evolutionary population dynamics (GWO-EPD), which focuses on relocating poorly adapted grey wolf individuals to improve search accuracy. Qiuping Wang [14] proposed an improved grey wolf algorithm (CGWO), which varies the convergence factor by a cosine law to improve search ability and introduces a proportional weight based on the step Euclidean distance to update the grey wolf positions and speed up convergence. Shipeng Wang [15] proposed a new hybrid algorithm (FWGWO), which combines the advantages of both constituent algorithms and effectively reaches the global optimum.

The main contributions of this paper are as follows:

1. An improved GWO algorithm based on a multi-strategy hybrid is proposed.

2. The improved GWO algorithm is applied to the path planning of mobile robots.

3. The improved GWO is evaluated against eight other algorithms on benchmark functions and in path planning experiments.
The remainder of this paper is structured as follows. Section 2 summarizes the related work. Section 3 describes the deployment scheme of this paper to improve the gray wolf algorithm. The experimental results are discussed in Section 4. Section 5 concludes the paper.

Research Situation
Path planning is a typical complex multi-objective optimization problem: it finds a feasible or optimal path from the starting point to the goal point under careful consideration of various environmental conditions. Intelligent algorithms are widely used in problems such as path planning because of their robustness.
Research on solving path planning problems using swarm intelligence algorithms is gradually increasing. For example, Yin Ling [19] fused the improved grey wolf algorithm with the artificial potential field method to solve the problem of unreachable target points because of the influence of dynamic obstacles in path planning. Dazhang You [20] combined GWO with particle swarm algorithm to reduce the cost consumption of path planning by introducing cooperative quantitative optimization of the grey wolf population. Kumar R [21] introduced a new technique named modified grey wolf optimization (MGWO) algorithm to solve the path planning problem for multi-robots. Ge Fawei [22] proposed the grey wolf fruit fly optimization algorithm (GWFOA), which combines the fruit fly optimization algorithm (FOA) with GWO for the Unmanned Aerial Vehicle (UAV) path planning problem in oil field inspection, resulting in a satisfactory solution for UAV in complex environments. One more powerful algorithm named variable weight grey wolf optimization (VW-GWO) was recently proposed by Kumar [23] to obtain an optimal solution for the path planning problem of mobile robots.

GWO Algorithm
In 2014, inspired by the predatory behavior of grey wolf packs, Seyedali Mirjalili et al. proposed the grey wolf optimizer (GWO) [7]. The algorithm simulates the unique hunting and prey-seeking characteristics of the grey wolf. Grey wolves are group-living canines; each wolf plays a different role in the group, and tasks are accomplished through cooperation between wolves. GWO divides the grey wolf population into four levels of social hierarchy (Figure 1). The first rank is wolf α, responsible for deciding on activities such as hunting. The second rank is wolf β, subordinate to wolf α; it helps wolf α make decisions and is the best candidate to replace wolf α. The third rank is wolf δ, subordinate to wolves α and β and responsible for tasks such as scouting and hunting. The fourth rank is wolf ω, the lowest rank, responsible for maintaining the wolf pack. Grey wolf hunting is divided into tracking, chasing, and attacking prey.

During the GWO operation, the positions of wolf α, wolf β, and wolf δ are continuously updated at each iteration. The mathematical model is described as:

D = |C · X_p(t) − X(t)| (1)

X(t + 1) = X_p(t) − A · D (2)

Equation (1) is the distance between the grey wolf and the prey, where t is the number of current iterations, and X_p(t) and X(t) are the prey's location and the grey wolf's location at iteration t, respectively. Equation (2) is the formula for updating the location of the grey wolf.
A and C are the coefficient vectors, which are calculated by the following equations:

A = 2a · r_1 − a (3)

C = 2 · r_2 (4)

where r_1, r_2 are random vectors in [0, 1], whose primary role is to increase the randomness of the grey wolf movement. a represents the convergence factor, which decays linearly from 2 to 0 as the algorithm progresses; the linear relationship that defines GWO is:

a = 2 − 2t/T_max (5)

where t is the current number of iterations and T_max is the maximum number of iterations of the algorithm. Predating takes place in an abstract space, so the exact location of the prey cannot be identified. GWO therefore simulates the hunting behavior: based on fitness value, wolf α, wolf β, and wolf δ are selected to locate the prey through the relationship between their three positions and to guide the other wolves toward the prey, as in Figure 2.
By iterating several times until the location of the prey is reached, the mathematical model is as follows:

D_α = |C_1 · X_α − X|, D_β = |C_2 · X_β − X|, D_δ = |C_3 · X_δ − X| (6)

X_1 = X_α − A_1 · D_α, X_2 = X_β − A_2 · D_β, X_3 = X_δ − A_3 · D_δ (7)

X(t + 1) = (X_1 + X_2 + X_3)/3 (8)

where D_α is the distance between wolf pack ω and wolf α, D_β is the distance between wolf pack ω and wolf β, and D_δ is the distance between wolf pack ω and wolf δ. Equation (7) gives the candidate positions guided by the three leading wolves, and Equation (8) presents the location of the new generation of wolves after the update.
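As a concrete illustration, Equations (1)-(8) can be written as a single update step. The following is a minimal Python sketch; the function and variable names are ours, not the paper's:

```python
import numpy as np

def gwo_step(X, f, a):
    """One GWO iteration. X: (N, d) wolf positions, f: objective, a: convergence factor."""
    fitness = np.apply_along_axis(f, 1, X)
    # the three best wolves (alpha, beta, delta) guide the pack
    alpha, beta, delta = X[np.argsort(fitness)[:3]]
    new_X = np.empty_like(X)
    for i, x in enumerate(X):
        cand = []
        for leader in (alpha, beta, delta):
            r1, r2 = np.random.rand(x.size), np.random.rand(x.size)
            A = 2 * a * r1 - a            # Eq. (3)
            C = 2 * r2                    # Eq. (4)
            D = np.abs(C * leader - x)    # Eq. (1)/(6)
            cand.append(leader - A * D)   # Eq. (2)/(7)
        new_X[i] = np.mean(cand, axis=0)  # Eq. (8): average of X1, X2, X3
    return new_X
```

Iterating this step while decaying a per Equation (5) reproduces the basic GWO search.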

Wolf Pack Initialization
Since the initialized grey wolf population determines whether the optimal path can be found and the convergence speed, a diversity of initialized populations can help improve the algorithm's performance in finding the optimal path. Traditional GWO randomly initializes wolf pack positions, which primarily affects the search efficiency of the algorithm, so the initialized populations need to be distributed as evenly as possible in the initial space.
In optimization, chaotic mappings positively impact the convergence speed of GWO algorithms, and chaotic sequences have characteristics such as nonlinearity, ergodicity, and the ability to prevent algorithms from falling into local optima. In the last decade, chaotic mapping has been widely used to help intelligent algorithms explore more dynamic and global search spaces. There are over ten such mappings: logistic mapping, piecewise-linear chaotic system mapping (PWLCM), singer mapping, tent mapping, and others. These mappings can take any initial value in [0, 1] (or within the range of the chaotic mapping). Among them, logistic mapping and tent mapping are the most commonly used, but logistic mapping is less ergodic than tent mapping, and its sensitivity to initial parameters leads to a high density of mapped points at the edges and a lower density in the middle region, which is not conducive to optimal path planning. Compared with logistic mapping, tent mapping is more suitable for GWO, but it has a small period. Therefore, a random variable rand()/N is added to the tent mapping:

x_{i,j+1} = v · x_{i,j} + rand()/N, for 0 ≤ x_{i,j} ≤ 0.5
x_{i,j+1} = v · (1 − x_{i,j}) + rand()/N, for 0.5 < x_{i,j} ≤ 1 (9)
where i is the grey wolf index, j is the chaotic sequence number, rand() belongs to [0, 1], v belongs to [0, 2], and N is the population size. Introducing rand()/N maintains the ergodicity and regularity of the tent mapping and effectively keeps the sequence from falling into small, unstable periodic points during iteration. Figure 3 shows the change curves of the two tent chaotic mappings; the improved tent mapping has significantly better ergodicity and a more uniform distribution than the original tent mapping. The improved tent mapping steps are:

1. Randomly generate the initial value x_0 in (0, 1).

2. Iterate Equation (9) to generate the chaotic sequence.

3. Stop iterating when the iteration count reaches the maximum value, and save the sequence.

4. Finally, map the sequence into the grey wolf pack search space:

X_i = lb + x_i · (ub − lb) (10)
where lb and ub are the lower and upper limits of the grey wolf position, respectively. Introducing a random variable into the tent mapping effectively avoids the shortage of minor cycle points and limits the random values to a set range. The improved tent mapping enables the GWO-initialized wolf pack positions to be uniformly distributed in the search space.
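The initialization above can be sketched in Python as follows. This is a minimal sketch: the function name is ours, and the tent parameter v = 2 is an assumption, since the paper only states v ∈ [0, 2]:

```python
import numpy as np

def tent_init(n_wolves, dim, lb, ub, v=2.0):
    """Improved tent-map initialization (Eqs. (9)-(10)); v = 2 is assumed."""
    N = n_wolves
    x = np.random.rand()              # chaotic seed x_0 in (0, 1)
    seq = np.empty((N, dim))
    for i in range(N):
        for j in range(dim):
            # improved tent map: rand()/N perturbation escapes short cycles
            if x < 0.5:
                x = v * x + np.random.rand() / N
            else:
                x = v * (1 - x) + np.random.rand() / N
            x = x % 1.0               # keep the sequence inside [0, 1)
            seq[i, j] = x
    # Eq. (10): map the chaotic sequence into the wolf search space [lb, ub]
    return lb + seq * (ub - lb)
```

The returned array can directly replace the uniformly random initial population of traditional GWO.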

Nonlinear Convergence Factor
In GWO, the quality of the convergence factor affects the algorithm's global search ability and local exploitation ability. The global search ability is the pack's exploration of unvisited areas, which prevents the wolves from falling into local optima; by Equation (3), when |A| > 1, the grey wolf pack needs to search for the prey across the entire space. The local exploitation ability represents the accuracy in a small area; when |A| < 1, the grey wolf pack closes in to surround and attack the prey, and this local ability also determines the convergence speed, so the convergence factor plays a significant role. The convergence factor used in traditional GWO decreases linearly from 2 to 0. However, the actual search process is not linear, and a nonlinear factor is more applicable to GWO. In addition, the first stage of GWO is mainly a global search for optimal solutions, while the middle and later stages focus on local development, so different stages need different convergence factors.
Therefore, this paper uses a convergence factor based on the Gaussian distribution change curve.
where φ is the decreasing function, which changes with the number of iterations, and ∂ is the cut-off. Figure 4 compares the convergence factors of GWO, the improved grey wolf optimizer (MGWO) of [10], and the improved GWO proposed in this paper.
The convergence factor of GWO decreases linearly, which does not match the algorithm's behavior in practice. The convergence factor of MGWO follows an exponential law, which does not guarantee the accuracy of the local search at the late stage. The improved convergence factor is a curve that decays according to a nonlinear normal distribution: it is larger and decays more slowly at the beginning of the iterations, so the population can better search the unknown global region for the optimal solution, improving the global search ability in the early stage and preventing premature convergence to a local optimum. The convergence factor is smaller and decays faster in the later iterations, improving the local search accuracy and convergence speed. Therefore, the improved convergence factor better balances the global and local search abilities of GWO.
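The contrast between the two decay schedules can be sketched as below. The Gaussian shape parameter sigma = 0.45 is our assumption for illustration, not the paper's cut-off value:

```python
import numpy as np

def a_linear(t, t_max):
    """Traditional GWO convergence factor, Eq. (5): linear decay from 2 to 0."""
    return 2.0 * (1.0 - t / t_max)

def a_gaussian(t, t_max, sigma=0.45):
    """Gaussian-shaped nonlinear factor (sketch): starts at 2, decays slowly
    early (favoring global search) and quickly late (favoring local search)."""
    return 2.0 * np.exp(-(t / t_max) ** 2 / (2.0 * sigma ** 2))
```

At mid-run the Gaussian factor stays above the linear one, preserving exploration; near the end it drops well below 2 and contracts the pack around the leaders.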

Dynamic Proportional Weighting Strategy
The traditional GWO uses Equation (8) as the formula for the wolf position update, but the effect is not good. Reference [24] proposed two methods to improve the position update formula by introducing weights.
Equations (12) and (13) set α, β, and ω with different coefficients to highlight their importance. Equation (12) assigns coefficient 5 to α, 3 to β, and 2 to ω according to their importance, while W in Equation (13) denotes the weight of each of the three wolves and f denotes their current fitness, increasing each wolf's weight according to its fitness. Inspired by the above, a proportional weighting strategy based on fitness and location is proposed to make the grey wolf pack find the optimal solution more precisely. The complexity of the traditional GWO algorithm is O(N × d × T_max), where N is the number of subgroups in the operation process. The improved GWO algorithm of this paper uses chaotic tent mapping together with the nonlinear convergence factor based on the normal distribution, and its complexity is O(N² × d × T_max). Although the complexity of the improved GWO is higher, the benchmark function comparisons show that its solution accuracy and convergence speed are better than those of the other algorithms.
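A fitness-proportional combination of the three leader candidates, in the spirit of Equation (13), can be sketched as follows. The helper name and the inverse-fitness weighting (smaller objective value, i.e. better wolf, gets a larger weight under minimization) are our illustrative choices:

```python
import numpy as np

def weighted_update(X1, X2, X3, f1, f2, f3, eps=1e-12):
    """Weighted leader average: X1..X3 are the candidate positions from
    Eq. (7), f1..f3 the fitness of alpha, beta, delta (minimization)."""
    inv = np.array([1.0 / (f1 + eps), 1.0 / (f2 + eps), 1.0 / (f3 + eps)])
    w = inv / inv.sum()               # weights sum to 1
    return w[0] * X1 + w[1] * X2 + w[2] * X3
```

With equal fitness values this reduces to the plain average of Equation (8); as one leader becomes clearly better, the update is pulled toward its candidate position.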
The improved GWO algorithm pseudo-code is shown in Algorithm 1.
Algorithm 1: Pseudo-code of the improved GWO
1  Initialize X_i (i = 1, 2, . . . , n), t, T_max, a, A, C
2  Initialize tent map x_0
3  Calculate the fitness of each wolf
4  X_α = best wolf, X_β = second-best wolf, X_w = third-best wolf
5  while t < T_max
6      Sort the fitness of each wolf
7      Update the chaotic number and a
8      for each search agent
9          Update the position of the current wolf
10     end for
11     Calculate the fitness of each wolf
12     Update X_α, X_β, X_w
13     t = t + 1
14 end while
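Algorithm 1 can be assembled into a runnable end-to-end sketch. The structure follows the pseudo-code; the tent parameter v = 2, the Gaussian shape sigma = 0.45, and the inverse-fitness leader weights are our assumptions, since the paper's exact parameter values are not reproduced here:

```python
import numpy as np

def improved_gwo(f, dim, lb, ub, n=20, t_max=200, seed=0):
    """Sketch of Algorithm 1: tent init + Gaussian factor + weighted leaders."""
    rng = np.random.default_rng(seed)
    # lines 1-2: tent-map style initialization scaled into [lb, ub]
    x, seq = rng.random(), np.empty((n, dim))
    for i in range(n):
        for j in range(dim):
            x = (2 * x if x < 0.5 else 2 * (1 - x)) + rng.random() / n
            x %= 1.0
            seq[i, j] = x
    X = lb + seq * (ub - lb)
    for t in range(t_max):
        fit = np.apply_along_axis(f, 1, X)
        order = np.argsort(fit)
        leaders, lead_fit = X[order[:3]], fit[order[:3]]
        # line 7: Gaussian-shaped convergence factor (assumed sigma = 0.45)
        a = 2.0 * np.exp(-(t / t_max) ** 2 / (2 * 0.45 ** 2))
        w = 1.0 / (lead_fit + 1e-12)
        w /= w.sum()                          # fitness-proportional weights
        for i in range(n):                    # lines 8-10: update each agent
            cand = []
            for L in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                cand.append(L - A * np.abs(C * L - X[i]))
            X[i] = np.clip(w @ np.array(cand), lb, ub)
    fit = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fit)], fit.min()
```

On a simple sphere objective (f1-style benchmark) this sketch converges to a near-zero optimum within the iteration budget.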

Result
In order to verify the performance of the improved algorithm, 15 international standard benchmark test functions were selected for simulation experiments. For fairness, the relevant parameters of all compared algorithms are configured as in Table 1, and Table 2 shows the benchmark test functions. GWO, MGWO [10], NGWO [11], GWO-fuzzy [12], GWO-EPD [13], and the improved GWO of this paper were selected for comparison. Simulation experiments were conducted in Matlab on a Lenovo R7000P containing a 2020H, 2.90 GHz processor. Table 3 compares the mean and standard deviation of the results of 30 independent runs of the algorithms, with the best results of the compared algorithms in bold in Tables 3 and 4. Furthermore, Figure 5 shows the convergence curves of the six algorithms on some of the tested functions.


Convergence Accuracy Analysis
From the traditional GWO principle, the exploration ability of the algorithm depends mainly on the convergence factor, and practical experiments show that the convergence factor should decay from 2 to 0 not linearly but nonlinearly with the number of iterations [10]. MGWO uses a nonlinear exponential convergence factor, which works well compared with the linear convergence factor and illustrates the effectiveness of a nonlinear convergence factor.
The results in Table 3 show that the improved GWO algorithm outperforms the other tested improved algorithms on the 15 sets of test functions within the initially set number of iterations. The single-peak test functions are mainly used to test the exploitation capability of an algorithm. For f1, f2, f3, and f4, the improved GWO finds the theoretical optimal value of 0 with good search stability and accuracy. In solving f7, although the improvement is not very obvious, the mean and standard deviation are still better than those of the other algorithms; for functions f5 and f6, although the improved GWO does not show clear superiority, the difference from the other algorithms is small. Overall, the improved GWO outperforms the other algorithms in optimum-seeking ability and stability on the single-peak test functions. The multi-peak test functions are mainly used to test the exploration performance of an algorithm. The test results show that the improved GWO algorithm reaches the theoretical optimal value on f8 and f10; on f9, although it cannot reach the optimal value, it is still better than the other improved algorithms.
In summary, the improved GWO algorithm improves performance on the 15 benchmark functions and is stable and robust; especially on f1-f4, f8, and f10, the improvement reaches several orders of magnitude, which is very obvious. The convergence speed of the improved GWO algorithm is also better than that of the other improved algorithms, and during the experiments the improved algorithm showed excellent real-time performance and effectively avoided the trap of local optima, which proves its feasibility and superiority compared with the other improved algorithms.

Convergence Speed Analysis
In order to visualize the convergence speed and search accuracy of the improved algorithm, the convergence curves of the 15 benchmark functions (d = 30) are shown in Figure 5. Figure 5a-e show the single-peak convergence curves, and Figure 5f-l show the multi-peak convergence curves. Compared with the other algorithms, the convergence speed and search accuracy of the improved GWO algorithm are improved, and the convergence curves verify that the improved GWO algorithm solves both single-peak and multi-peak functions. The improved algorithm basically converges to the optimal value under the benchmark function tests, and even where it does not acquire the best quality, its final result is closer to the optimal value.
Moreover, the simulation process shows that the algorithm has good stability and a high success rate. The improved algorithm proposed in this paper needs fewer iterations and has higher search accuracy than MGWO and NGWO, although all of them can reach the optimal solution. Chaotic tent mapping, the nonlinear convergence factor, and the dynamic weighting strategy are combined in the improved GWO, so the problem of falling into local optima is effectively solved and the convergence speed is greatly improved. In summary, the improved algorithm achieves a better mean and standard deviation, which shows that it has higher solution accuracy and stability on most of the tested functions.

Comparison with Other Intelligent Optimization Algorithms
To further demonstrate the effectiveness of the improved algorithm, the improved algorithm is compared with the classical optimization algorithms Particle Swarm Optimization (PSO) algorithm, Sparrow Search Algorithm (SSA), and Mayfly Algorithm (MA) on 15 benchmark functions. The comparison results are shown in Table 4.
As can be seen from the results in Table 4, with 500 iterations and compared with the three classical algorithms, the improved GWO reaches the theoretical optimal value of 0 on benchmark functions f1-f4, f8, and f10. In addition, the standard deviations and mean values obtained on the other benchmark functions are better, showing that the improved algorithm is practical and workable. The convergence curves are omitted due to length limitations; comparing them with those of the other intelligent algorithms shows that the improved algorithm has higher convergence accuracy and faster convergence speed.
Comparing algorithms based only on mean and standard deviation values is not enough, so Wilcoxon's nonparametric statistical test is conducted at the 5% significance level to determine whether the improved GWO provides a significant improvement over the other algorithms. The Wilcoxon rank-sum test was applied to the different algorithms on the benchmark functions, and p and R values were obtained as significance indicators. If the p value is less than 0.05, the null hypothesis is rejected, and the two tested algorithms are considered significantly different; otherwise, they are considered not significantly different. R values of "+", "−", and "=" indicate that the improved GWO performs better than, worse than, or equivalently to the comparison algorithm, respectively. If the p value is NaN, the data is not informative; that is, the experimental results of the improved algorithm are similar to those of the compared algorithm, and their performance is similar.
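The decision rule above can be sketched with SciPy's rank-sum test. This is an illustrative helper, not the paper's code; the sample arrays stand in for 30 independent runs of two algorithms on one benchmark, under minimization:

```python
import numpy as np
from scipy.stats import ranksums

def compare_runs(runs_a, runs_b, alpha=0.05):
    """Return (p, R) where R is '+', '-' or '=' for algorithm A vs B."""
    stat, p = ranksums(runs_a, runs_b)
    if np.isnan(p) or p >= alpha:
        return p, "="              # similar or not significantly different
    # significant: '+' if algorithm A has the better (smaller) median error
    return p, "+" if np.median(runs_a) < np.median(runs_b) else "-"
```

Applying this per benchmark function yields the "+/−/=" summary rows of a Table 5-style comparison.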
This paper applies the Wilcoxon rank-sum test to 30 repeated experiments on the 15 benchmark functions for the improved GWO and the other algorithms. The test results are shown in Table 5. In most cases, the R values are "+"; the exceptions are that the p values for SSA, MA, and improved GWO on f5 are greater than 0.05 with R values of "−", and the p values for MGWO and improved GWO on f8 and f10 are NaN with R values of "=", meaning the optimization efficiency of the improved GWO and MGWO is similar on f8 and f10. The results show that the performance of the improved GWO algorithm is significantly better than that of the other algorithms in most cases.

In path planning with obstacle avoidance for a mobile robot, a mathematical model of the robot's environment must first be established as a virtual representation of the real environment. After setting the start and end points of the mobile robot in the environment model, an intelligent algorithm is used to find a continuous curve that satisfies a specific performance index while avoiding the obstacles in the environment.
Individuals randomly generated by the intelligent optimization algorithm may not conform to the search space. It is therefore necessary to establish a suitable fitness function that considers the various constraints and then eliminate the individuals in the population that do not meet the constraints, keeping the better individuals. The mobile robot has to consider various factors in actual operation; therefore, it has the following main constraints.

1. Maximum cornering angle constraint. When using the algorithm for mobile robot path planning, it is necessary to consider the maximum steering angle constraint, which affects robot safety. A node is discarded if the required rotation angle is outside the maximum range the robot can withstand; if the rotation angle satisfies the robot's maneuverability, the other constraints are checked. The maximum turning angle is set to 60° in the simulation experiments.

2. Threat area constraint. Mobile robot path planning makes the robot reach its destination over the shortest distance while bypassing obstacles, and a mathematical expression for the obstacle area can be obtained. Assuming that the distance between the robot and the center of the obstacle is d_T, the damage to the robot caused by the obstacle area, defined as the probability P_T(d_T), can be calculated as:

P_T(d_T) = 1, for d_T ≤ d_Tmin
P_T(d_T) = (d_Tmax − d_T)/(d_Tmax − d_Tmin), for d_Tmin < d_T ≤ d_Tmax (15)
P_T(d_T) = 0, for d_T > d_Tmax

where d_Tmax indicates the maximum radius affected by the area, and d_Tmin bounds the region where the probability of robot collision is 1.
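The two constraints can be sketched as follows. The function names are ours, and the linear decay of the threat probability between d_Tmin and d_Tmax is reconstructed from the boundary conditions stated in the text:

```python
import numpy as np

def turn_ok(p_prev, p_cur, p_next, max_deg=60.0):
    """Check the maximum cornering angle between consecutive path segments."""
    v1, v2 = np.subtract(p_cur, p_prev), np.subtract(p_next, p_cur)
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    return angle <= max_deg

def threat_probability(d_t, d_tmin, d_tmax):
    """Threat probability of Eq. (15): 1 inside d_Tmin, 0 beyond d_Tmax."""
    if d_t >= d_tmax:
        return 0.0                   # outside the affected radius
    if d_t <= d_tmin:
        return 1.0                   # certain-collision region
    return (d_tmax - d_t) / (d_tmax - d_tmin)
```

A path node failing turn_ok is discarded, and the threat probabilities along the path can be summed into the fitness as a penalty term.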

Path Planning
The main steps of applying the improved grey wolf algorithm to path planning are as follows:

1. Establish the search space according to the actual environment, and set the starting point and target point.

2. Initialize the parameters of the grey wolf algorithm, including the number of wolves, the maximum number of iterations, the tent mapping parameters, and the upper and lower bounds for parameter values.

3. Initialize the grey wolf positions and the objective function according to the tent mapping.

4. Calculate each grey wolf's fitness and select the top three grey wolves as wolf α, wolf β, and wolf w by fitness ranking.

5. Compare with the objective function to update the positions and the objective function.

6. Update the convergence factor at each iteration.

7. Calculate the next position of the other wolves according to the positions of wolf α, wolf β, and wolf w.

8. Reach the maximum number of iterations and output the optimal path.
To verify the performance of the improved GWO algorithm, it is applied to mobile robot path planning for verification and analysis. The robot's starting point is set to (0, 0) and the target point to (100, 100); the obstacles are generated randomly. With a1 = 2, a2 = 0, an initial pack of 30 grey wolves, and a maximum of 500 iterations, GWO, MGWO [10], NGWO [11], GWO-fuzzy [12], GWO-EPD [13], and the improved GWO algorithm of this paper are applied to path planning for comparison. Figure 6a-e show the obstacle avoidance paths planned by each algorithm, and Figure 6f shows the corresponding convergence curves.
As shown in Figure 6a-e, except for MGWO, other improved algorithms find poorer and more costly paths. Although the path length of MGWO is short, the planned path is too close to the danger area, which is not conducive to the application of mobile robots. In addition, it can be seen from Figure 6f that the algorithm in this paper has better convergence compared with other improved algorithms. In summary, the improved GWO proposed in this paper can stably plan a safe path with optimal cost and satisfying constraints.

Conclusions
This paper proposes an improved GWO and applies it to mobile robot path planning. First, an improved chaotic tent mapping is introduced in the initialization stage of the algorithm to increase the diversity of the initial population and improve the global search capability. Second, a nonlinear convergence factor based on the Gaussian distribution curve is used to balance the algorithm's global and local search capabilities. Finally, the traditional GWO is optimized with an improved dynamic weighting strategy. To test the competence of the improved GWO, 15 well-known benchmark functions with a wide range of dimensions and varied complexities are used in this paper, and the results of the improved GWO are compared with those of eight other algorithms. The results show that the improved GWO has faster convergence and better solution accuracy. In addition, the improved GWO is applied to mobile robot path planning, where the test results show that it significantly reduces cost and improves convergence speed compared with the other algorithms.
When the improved GWO algorithm is applied to the obstacle avoidance path planning of mobile robots, it avoids falling into local extremes and improves convergence speed and stability. In future research, we will continue to improve the algorithm and apply it to more practical mobile robots.