Article

Improved Grey Wolf Optimization Algorithm and Application

1 School of Information and Electrical Engineering, Shandong Jianzhu University, Jinan 250101, China
2 Shandong Key Laboratory of Intelligent Building Technology, Jinan 250101, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(10), 3810; https://doi.org/10.3390/s22103810
Submission received: 27 April 2022 / Revised: 16 May 2022 / Accepted: 16 May 2022 / Published: 17 May 2022
(This article belongs to the Topic Advances in Mobile Robotics Navigation)

Abstract

This paper proposes an improved Grey Wolf Optimizer (GWO) to address the instability and limited convergence accuracy of GWO, a meta-heuristic algorithm with strong optimal-search capability, when it is applied to path planning for mobile robots. We use an improved chaotic tent mapping to initialize the wolf pack and enhance the global search ability, and a nonlinear convergence factor based on the Gaussian distribution change curve to balance the global and local search abilities. In addition, an improved dynamic proportional weighting strategy is proposed to update the positions of the grey wolves and accelerate convergence. The proposed improved GWO is compared with eight other algorithms in a series of benchmark function tests and path planning experiments. The experimental results show that the improved GWO achieves higher accuracy and faster convergence.

1. Introduction

Path planning is widely used in mobile robot navigation; its aim is to find an optimal trajectory that connects the starting point with the target point while avoiding collisions with obstacles [1,2]. Commonly used algorithms include the A* algorithm [3], particle swarm optimization (PSO) [4,5], the genetic algorithm (GA) [6], and the grey wolf optimizer (GWO) [7,8,9].
GWO is a relatively new pack-intelligence optimization algorithm that is widely used in many fields. It imitates the hierarchical structure and hunting behavior of a grey wolf pack and achieves optimization through the pack's tracking, encircling, and pouncing behaviors. Compared with traditional optimization algorithms such as PSO and GA, GWO has fewer parameters, a simple principle, and easy implementation. However, it suffers from slow convergence, low solution accuracy, and a tendency to fall into local optima. For this reason, many scholars have proposed improvements. Yang Zhang [10] proposed MGWO, which introduced an exponential convergence factor strategy, an adaptive update strategy, and a dynamic weighting strategy to improve the search capability of GWO. Min Wang [11] proposed NGWO, which applies reverse learning to the initial population and introduces a nonlinear convergence factor to improve the search capability. Luis Rodriguez [12] proposed GWO-fuzzy, a grey wolf algorithm based on a fuzzy hierarchical operator, and compared two proportional weighting strategies. Saremi [13] proposed GWO-EPD, a grey wolf algorithm with evolutionary population dynamics, which relocates poorly adapted grey wolf individuals to improve search accuracy. Qiuping Wang [14] proposed CGWO, an improved grey wolf algorithm that varies the convergence factor with the cosine law to improve the search ability and introduces a proportional weight based on the step Euclidean distance to update the grey wolf positions and speed up convergence. Shipeng Wang [15] proposed FWGWO, a hybrid algorithm that combines the advantages of both constituent algorithms and effectively reaches the global optimum.
In order to effectively improve the coverage of a wireless sensor network in the monitoring area, a coverage optimization algorithm for wireless sensor networks with a Virtual Force-Lévy-embedded Grey Wolf Optimization (VFLGWO) algorithm is proposed [16].
Although the GWO algorithm has been widely applied to engineering problems such as numerical simulation and stability analysis [17,18], data-set classification, and feature selection, it has seen little application in mobile robot path planning. In this paper, the research object is the path planning of mobile robots: the shortest path is the objective function, the environment provides the constraints, and the grey wolf optimizer is applied to plan obstacle-avoiding paths. The goal is to address the defects of GWO in solving this problem, namely falling into local extremes, poor stability, and weak local search capability. Summarizing the above research results, three factors determine the performance of the grey wolf algorithm in finding the best path: the initialized wolf pack, the convergence factor, and the proportional weighting strategy. This paper improves GWO in these three respects. First, the wolf pack positions are initialized with an improved chaotic tent mapping. Second, a nonlinear convergence factor based on the Gaussian distribution curve is applied to improve the search capability. Finally, a dynamic weighting strategy is introduced to speed up convergence. Several benchmark functions are simulated and compared against various improved GWO variants and classical intelligent optimization algorithms to show the effectiveness of the improvements, and the improved GWO is tested on mobile robot path planning to verify its practicality.
The contributions of this paper are:
  • An improved GWO algorithm based on a multi-strategy hybrid is proposed.
  • The improved GWO algorithm is applied to the path planning of mobile robots.
  • The performance of the proposed approach is compared with standard GWO, Sparrow Search Algorithm (SSA), Mayfly Algorithm (MA), Modified Grey Wolf Optimization Algorithm (MGWO) [10], Novel Grey Wolf Optimization Algorithm (NGWO) [11], A Fuzzy Hierarchical Operator in the Grey Wolf Optimizer Algorithm (GWO-fuzzy) [12], and Evolutionary population dynamics and grey wolf optimizer (GWO-EPD) [13].
The remainder of this paper is structured as follows. Section 2 summarizes the related work. Section 3 describes the deployment scheme of this paper to improve the gray wolf algorithm. The experimental results are discussed in Section 4. Section 5 concludes the paper.

2. Related Work

2.1. Research Situation

Path planning is a typical complex multi-objective optimization problem: finding a feasible or optimal path from the starting point to the goal point under careful consideration of various environmental conditions. Intelligent algorithms are widely used in such problems because of their robustness.
Research on solving path planning problems using swarm intelligence algorithms is gradually increasing. For example, Yin Ling [19] fused the improved grey wolf algorithm with the artificial potential field method to solve the problem of unreachable target points because of the influence of dynamic obstacles in path planning. Dazhang You [20] combined GWO with particle swarm algorithm to reduce the cost consumption of path planning by introducing cooperative quantitative optimization of the grey wolf population. Kumar R [21] introduced a new technique named modified grey wolf optimization (MGWO) algorithm to solve the path planning problem for multi-robots. Ge Fawei [22] proposed the grey wolf fruit fly optimization algorithm (GWFOA), which combines the fruit fly optimization algorithm (FOA) with GWO for the Unmanned Aerial Vehicle (UAV) path planning problem in oil field inspection, resulting in a satisfactory solution for UAV in complex environments. One more powerful algorithm named variable weight grey wolf optimization (VW-GWO) was recently proposed by Kumar [23] to obtain an optimal solution for the path planning problem of mobile robots.

2.2. GWO Algorithm

In 2014, inspired by the predatory behavior of grey wolf packs, Seyedali Mirjalili et al. proposed the grey wolf optimizer (GWO) [7]. The algorithm simulates the distinctive hunting and prey-seeking behavior of grey wolves. Grey wolves are canines that live in packs; each wolf plays a different role in the group, and tasks are accomplished through cooperation between wolves. GWO divides the grey wolf population into four levels of social hierarchy (Figure 1). The first rank is wolf α, responsible for decisions such as hunting. The second rank is wolf β, subordinate to wolf α; it helps wolf α make decisions and is the best candidate to replace wolf α. The third rank is wolf δ, subordinate to wolves α and β and responsible for tasks such as scouting and hunting. The fourth rank is wolf ω, the lowest rank, responsible for maintaining the wolf pack. Grey wolf hunting is divided into tracking, chasing, and attacking the prey.
During the GWO operation, the positions of wolf α, wolf β, and wolf δ are continuously updated at each iteration, whose mathematical model is described as:
D = |C · Xp(t) − X(t)| (1)
X(t+1) = Xp(t) − A · D (2)
Equation (1) is the distance between the grey wolf and the prey, where t is the number of current iterations, and Xp(t) and X(t) are the prey’s locations and the grey wolf’s location at t iterations, respectively. Equation (2) is the formula for updating the location of the grey wolf. A and C are the coefficient vectors, which are calculated by the following equations:
A = 2a · r1 − a (3)
C = 2 · r2 (4)
where r1 and r2 are random vectors in [0, 1], whose primary role is to increase the randomness of the grey wolf movement, and a is the convergence factor, which in GWO decays linearly from 2 to 0 as the algorithm progresses:
a = 2 − 2t/Tmax (5)
where t is the current number of iterations and Tmax is the maximum number of iterations of the algorithm.
Since the prey moves in an abstract search space, its exact location cannot be identified in advance. GWO therefore simulates the hunting behavior: based on fitness, wolves α, β, and δ are selected, the relationship between their three positions is used to locate the prey, and the other wolves are guided to move toward it, as in Figure 2.
By iterating several times until the location of the prey is reached, the mathematical model is as follows:
Dα = |C1 · Xα − X|, Dβ = |C2 · Xβ − X|, Dδ = |C3 · Xδ − X| (6)
X1 = |Xα − A1 · Dα|, X2 = |Xβ − A2 · Dβ|, X3 = |Xδ − A3 · Dδ| (7)
X(t+1) = (X1 + X2 + X3)/3 (8)
where Dα, Dβ, and Dδ are the distances between an ω wolf and wolves α, β, and δ, respectively. Equation (8) gives the location of the new generation of wolves after the update.
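For concreteness, the update described by Equations (3), (4), and (6)–(8) can be sketched in NumPy as follows. The function name, the array layout, and the minimization convention (lower fitness is better) are our own assumptions, not taken from the paper:

```python
import numpy as np

def gwo_step(wolves, fitness, a):
    """One standard GWO position update over the whole pack.

    wolves  -- (N, d) array of current positions
    fitness -- (N,) array of objective values (lower is better, assumed)
    a       -- current convergence factor from Equation (5)
    """
    order = np.argsort(fitness)
    leaders = wolves[order[:3]]                # X_alpha, X_beta, X_delta

    new_wolves = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        candidates = []
        for leader in leaders:
            r1 = np.random.rand(x.size)
            r2 = np.random.rand(x.size)
            A = 2 * a * r1 - a                 # Equation (3)
            C = 2 * r2                         # Equation (4)
            D = np.abs(C * leader - x)         # Equation (6)
            candidates.append(leader - A * D)  # Equation (7)
        new_wolves[i] = np.mean(candidates, axis=0)  # Equation (8)
    return new_wolves
```

Calling this once per iteration, with `a` decayed between calls, reproduces the basic search loop that the improvements in Section 3 modify.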

3. Improved GWO Algorithm

3.1. Wolf Pack Initialization

Since the initialized grey wolf population determines both whether the optimal path can be found and how fast the algorithm converges, a diverse initial population helps the algorithm find the optimal path. Traditional GWO initializes the wolf pack positions randomly, which limits the search efficiency, so the initial population should be distributed as evenly as possible over the search space.
In optimization, chaotic mappings have a positive impact on the convergence speed of GWO, and chaotic sequences have useful characteristics such as nonlinearity and ergodicity that help prevent algorithms from falling into local optima. Over the last decade, chaotic mapping has been widely used to give intelligent algorithms a more dynamic and global search of the space. There are over ten such mappings, including logistic mapping, the piecewise-linear chaotic map (PWLCM), singer mapping, and tent mapping, which can take any initial value in [0, 1] (or within the range of the particular mapping). Among them, logistic mapping and tent mapping are the most commonly used, but logistic mapping is less ergodic than tent mapping, and its sensitivity to the initial parameters produces a high density of mapped points at the edges and a low density in the middle region, which is not conducive to optimal path planning. Tent mapping is therefore more suitable for GWO, but it has a small period. For this reason, a random variable rand()/N is added to the tent mapping:
y(i,j+1) = υ · y(i,j) + rand()/N, 0 ≤ y(i,j) ≤ 0.5
y(i,j+1) = υ · (1 − y(i,j)) + rand()/N, 0.5 < y(i,j) ≤ 1 (9)
where i is the grey wolf index, j is the chaotic sequence number, rand() belongs to [0, 1], υ belongs to [0, 2], and N is the population size. Introducing rand()/N maintains the ergodicity and regularity of the tent mapping and effectively prevents the iteration from falling into small or unstable periodic points. Figure 3 shows the change curves of the two tent chaotic mappings; the improved tent mapping has significantly better ergodicity and a more uniform distribution than the plain tent mapping. The improved tent mapping proceeds as follows:
  • Produce a random initial value y0 in (0, 1), with i = 0.
  • Iterate Equation (9) to produce the chaotic sequence.
  • Stop when the iteration count reaches its maximum and save the sequence.
Finally, map the sequence into the grey wolf search space:
x(i,j) = lb + y(i,j) · (ub − lb) (10)
where lb and ub are the lower and upper limits of the grey wolf position, respectively. Introducing the random variable into the tent mapping effectively avoids short periodic cycles and keeps the values within the set range, so that the initial GWO wolf pack positions are uniformly distributed in the search space.
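As an illustration, the initialization of Equations (9) and (10) can be sketched as below. The modulo wrap that keeps the sequence inside (0, 1) and the parameter defaults are our assumptions:

```python
import numpy as np

def tent_init(n_wolves, dim, lb, ub, v=2.0):
    """Initialize a wolf pack with the improved tent chaotic map.

    n_wolves -- population size N
    dim      -- search-space dimension
    lb, ub   -- lower/upper position bounds
    v        -- tent-map control parameter in [0, 2] (assumed default)
    """
    y = np.random.rand(n_wolves)                 # random y0 in (0, 1)
    pop = np.empty((n_wolves, dim))
    for j in range(dim):
        rand_term = np.random.rand(n_wolves) / n_wolves
        y = np.where(y <= 0.5,
                     v * y + rand_term,          # first branch of Equation (9)
                     v * (1.0 - y) + rand_term)  # second branch of Equation (9)
        y = np.mod(y, 1.0)                       # wrap back into (0, 1) -- our addition
        pop[:, j] = lb + y * (ub - lb)           # Equation (10)
    return pop
```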

3.2. Nonlinear Convergence Factor

In GWO, the quality of the convergence factor affects the algorithm's global search ability and local exploitation ability. Global search means the grey wolf pack explores unvisited regions, preventing the pack from falling into local optima; by Equation (3), when |A| > 1 the pack searches for prey across the entire space. Local exploitation represents accuracy within a small region: when |A| < 1, the pack closes in to encircle and attack the prey, and this ability also determines the convergence speed, so the convergence factor plays a significant role. Traditional GWO uses a linearly decreasing convergence factor, decaying from 2 to 0. In practice, however, the search process does not change linearly, and a nonlinear factor is more appropriate for GWO. In addition, the first stage of GWO mainly performs a global search for optimal solutions, while the middle and later stages focus on local exploitation, placing different demands on the convergence factor.
Therefore, this paper uses a convergence factor based on the Gaussian distribution change curve.
a = ϕ · (1/(√(2π)(Tmax/3))) · e^(−t²/(2(Tmax/3)²)), t ≤ ∂Tmax
a = φ · (1/(√(2π)(Tmax/3))) · e^(−t²/(2(Tmax/3)²)), ∂Tmax ≤ t < Tmax (11)
where ϕ and φ are decreasing functions of the iteration number and ∂ is the cut-off point. Figure 4 compares the convergence factors of GWO, the improved grey wolf optimizer (MGWO) of [10], and the improved GWO proposed in this paper.
The convergence factor of GWO decreases linearly, which does not match the algorithm's behavior in practice. The convergence factor of MGWO follows an exponential law, which does not guarantee the accuracy of the local search in the late stage. The improved convergence factor decays along a nonlinear normal-distribution curve: at the beginning of the iterations it is larger and decays slowly, so the population can better search unknown global regions, improving the early global search ability and preventing premature convergence to local optima; in the later iterations it is smaller and decays faster, improving the local search accuracy and convergence speed. The improved convergence factor therefore better balances the global and local search abilities of GWO.
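A minimal sketch of such a factor is given below, folding the 1/(√(2π)(Tmax/3)) normalizer of Equation (11) into the two scale constants so that a starts near 2. The concrete values of phi1, phi2, and the cut-off are illustrative assumptions, since the paper does not specify them:

```python
import math

def gaussian_a(t, t_max, phi1=2.0, phi2=1.5, cutoff=0.5):
    """Nonlinear convergence factor on a Gaussian decay curve.

    t         -- current iteration
    t_max     -- maximum number of iterations
    phi1/phi2 -- early/late scale factors (assumed values)
    cutoff    -- switch point as a fraction of t_max (assumed value)
    """
    sigma = t_max / 3.0
    decay = math.exp(-t * t / (2.0 * sigma * sigma))  # Gaussian shape
    scale = phi1 if t <= cutoff * t_max else phi2     # two-stage scaling
    return scale * decay
```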

3.3. Dynamic Proportional Weighting Strategy

Traditional GWO uses Equation (8) to update the wolf positions, but its performance is limited. Reference [24] proposed two weighting methods to improve the position update formula:
X(t+1) = (5X1 + 3X2 + 2X3)/10 (12)
Wα = (fα + fβ + fδ)/fα, Wβ = (fα + fβ + fδ)/fβ, Wδ = (fα + fβ + fδ)/fδ
X(t+1) = (X1Wα + X2Wβ + X3Wδ)/(Wα + Wβ + Wδ) (13)
Equations (12) and (13) assign different coefficients to wolves α, β, and δ to reflect their importance: Equation (12) weights X1, X2, and X3 by 5, 3, and 2, respectively, while in Equation (13) W denotes the weight of each of the three wolves and f denotes its current fitness, so each wolf is weighted according to its fitness.
Inspired by the above, a proportional weighting strategy based on both fitness and position is proposed so that the grey wolf pack can locate the optimal solution more precisely:
Wα = (fα + fβ + fδ)/fα, Wβ = (fα + fβ + fδ)/fβ, Wδ = (fα + fβ + fδ)/fδ
V1 = (|X1| + |X2| + |X3|)/|X1|, V2 = (|X1| + |X2| + |X3|)/|X2|, V3 = (|X1| + |X2| + |X3|)/|X3|
X(t+1) = (V1Wα + V2Wβ + V3Wδ)/3 (14)
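Read element-wise, Equation (14) can be sketched as follows. Treating |Xi| as element-wise absolute values and guarding the divisions with a small eps are our interpretation of the flattened source formula, and minimization (smaller fitness means a better wolf) is assumed:

```python
import numpy as np

def weighted_position(x1, x2, x3, f_alpha, f_beta, f_delta, eps=1e-12):
    """Dynamic proportional weighting position update of Equation (14).

    x1, x2, x3 -- candidate positions driven by wolves alpha, beta, delta
    f_*        -- current fitness of the three leading wolves
    """
    f_sum = f_alpha + f_beta + f_delta
    w_alpha = f_sum / (f_alpha + eps)   # fitness-based weights: a smaller
    w_beta = f_sum / (f_beta + eps)     # fitness (better wolf) receives a
    w_delta = f_sum / (f_delta + eps)   # larger weight

    s = np.abs(x1) + np.abs(x2) + np.abs(x3)
    v1 = s / (np.abs(x1) + eps)         # position-magnitude weights
    v2 = s / (np.abs(x2) + eps)
    v3 = s / (np.abs(x3) + eps)

    return (v1 * w_alpha + v2 * w_beta + v3 * w_delta) / 3.0
```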
The complexity of the traditional GWO algorithm is O(N × d × Tmax). The complexity of GWO-EPD is O(2N × d × Tmax), reflecting the combination of GWO and EPD; that of NGWO is O(3N × d × Tmax); and that of MGWO is O(N × d × Tmax). The improved GWO of this paper uses chaotic tent mapping together with the nonlinear convergence factor based on the normal distribution, and its complexity is O(N² × d × Tmax). Although the complexity of the improved GWO is therefore higher, the benchmark comparisons below show that its solution accuracy and convergence speed are better than those of the other algorithms.
The improved GWO algorithm pseudo-code is shown in Algorithm 1.
Algorithm 1: Pseudo-code of the improved GWO
1  Initialize Xi (i = 1, 2, …, n), t, Tmax, a, A, C
2  Initialize the tent map x0
3  Calculate the fitness of each wolf
4  Xα = best wolf, Xβ = second-best wolf, Xδ = third-best wolf
5  while t < Tmax
6      Sort the fitness of each wolf
7      Update the chaotic number and a
8      for each search agent
9          Update the position of the current wolf
10     end
11     Calculate the fitness of each wolf
12     Update Xα, Xβ, Xδ
13     t = t + 1
14 end

4. Result

In order to verify the performance of the improved algorithm, 15 international standard benchmark functions are selected for simulation experiments. For fairness, the relevant parameters of all compared algorithms are configured as in Table 1, and Table 2 lists the benchmark functions. GWO, MGWO [10], NGWO [11], GWO-fuzzy [12], GWO-EPD [13], and the improved GWO of this paper are compared in the simulation experiments, which were conducted in Matlab on a Lenovo R7000P with a 2020H, 2.90 GHz processor. Table 3 compares the mean and standard deviation over 30 independent runs of each algorithm; the best results are shown in bold in Table 3 and Table 4. Furthermore, Figure 5 shows the convergence curves of the six algorithms on some of the tested functions.

4.1. Comparison with GWO and Other Improvement GWO

4.1.1. Convergence Accuracy Analysis

From the principle of traditional GWO, the exploration ability of the algorithm depends mainly on the convergence factor, and in practical experiments it can be observed that the factor should not decay linearly from 2 to 0 but nonlinearly with the number of iterations [10]. MGWO uses a nonlinear exponential convergence factor, which performs well compared with the linear one and illustrates the effectiveness of a nonlinear convergence factor.
The results in Table 3 show that, within the preset number of iterations, the improved GWO outperforms the other improved algorithms on the 15 test functions. The single-peak test functions mainly test the exploitation capability of the algorithm. For f1, f2, f3, and f4, the improved GWO finds the theoretical optimal value of 0 with a stable, accurate search. In solving f7, although the improvement is not very pronounced, the mean and standard deviation are still better than those of the other algorithms; for f5 and f6, although the improved GWO does not dominate, the gap to the other algorithms is small. Overall, the improved GWO outperforms the other algorithms in optimization ability and stability on the single-peak test functions. The multi-peak test functions mainly test the exploration performance. The results show that the improved GWO reaches the theoretical optimum on f8 and f10; on f9, although it does not reach the optimum, it is still better than the other improved algorithms.
In summary, the improved GWO improves performance on the 15 benchmark functions and is stable and robust; on f1–f4, f8, and f10 in particular it improves the results by several orders of magnitude. Its convergence speed is also better than that of the other improved algorithms, and during the experiments the improved algorithm showed excellent real-time performance and effectively avoided local optima, which demonstrates its feasibility and superiority over the other improved algorithms.

4.1.2. Convergence Speed Analysis

To visualize the convergence speed and search accuracy of the improved algorithm, the convergence curves on the 15 benchmark functions (d = 30) are shown in Figure 5. Figure 5a–e show the single-peak convergence curves, and Figure 5f–l show the multi-peak convergence curves. Compared with the other algorithms, the improved GWO converges faster and searches more accurately on both single-peak and multi-peak functions. The improved algorithm essentially converges to the optimal value on the benchmark tests, and even when the global optimum is not reached exactly, the final result is closer to it.
Moreover, the simulations show that the algorithm is stable and has a high success rate. The improved algorithm proposed in this paper needs fewer iterations and achieves higher search accuracy than MGWO and NGWO, although all of them can reach the optimal solution. Because chaotic tent mapping, the nonlinear convergence factor, and the dynamic weighting strategy are combined in the improved GWO, the problem of falling into local optima is effectively mitigated and the convergence speed is greatly improved. In summary, the improved algorithm achieves a better mean and standard deviation, which shows higher solution accuracy and stability on most of the tested functions.

4.2. Comparison with Other Intelligent Optimization Algorithms

To further demonstrate the effectiveness of the improved algorithm, the improved algorithm is compared with the classical optimization algorithms Particle Swarm Optimization (PSO) algorithm, Sparrow Search Algorithm (SSA), and Mayfly Algorithm (MA) on 15 benchmark functions. The comparison results are shown in Table 4.
As can be seen from Table 4, with 500 iterations the improved GWO reaches the theoretical optimal value of 0 on benchmark functions f1–f4, f8, and f10, unlike the three classical algorithms. In addition, the standard deviations and means obtained on the other benchmark functions are also better, showing that the improved algorithm is practical and workable. The convergence curves are omitted due to length limitations; comparing them with those of the other intelligent algorithms confirms that the improved algorithm has higher convergence accuracy and faster convergence speed.
Comparing algorithms by mean and standard deviation alone is not sufficient. Wilcoxon's nonparametric rank-sum test is therefore conducted at the 5% significance level to determine whether the improved GWO provides a significant improvement over the other algorithms. The test is applied to the different algorithms on the benchmark functions, and p and R values are obtained as significance indicators. If the p value is less than 0.05, the null hypothesis is rejected and the two tested algorithms are considered significantly different; otherwise, they are not. R values of "+", "−", and "=" indicate that the improved GWO performs better than, worse than, or equivalently to the compared algorithm, respectively. If the p value is NaN, the data are not informative: the experimental results of the improved algorithm are so similar to those of the compared algorithm that their performance is considered equivalent.
The Wilcoxon rank-sum test is run on 30 repeated experiments over the 15 benchmark functions for the improved GWO against the other algorithms; the results are shown in Table 5. In most cases the R value is "+". The exceptions are f5, where the p values for SSA and MA versus the improved GWO are greater than 0.05 and the R values are "−", and f8 and f10, where the p values for MGWO versus the improved GWO are NaN and the R values are "=", meaning that the optimization efficiency of the improved GWO and MGWO is similar on f8 and f10. The results show that the performance of the improved GWO is significantly better than that of the other algorithms in most cases.
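The rank-sum test used above can be reproduced without a statistics package via the normal approximation. This self-contained sketch ignores tie correction (acceptable for continuous benchmark results), and all names are ours:

```python
import math
import numpy as np

def ranksum_p(x, y):
    """Two-sided Wilcoxon rank-sum p value via the normal approximation.

    x, y -- 1-D arrays of results (e.g. 30 runs of two algorithms).
    Tie correction is omitted, which is acceptable for continuous data.
    """
    data = np.concatenate([x, y])
    ranks = np.empty(data.size)
    ranks[np.argsort(data)] = np.arange(1, data.size + 1)
    n1, n2 = len(x), len(y)
    w = ranks[:n1].sum()                       # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0              # mean under the null hypothesis
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p value
```

A p value below 0.05 from this function corresponds to an R value of "+" or "−" depending on which algorithm's mean is better.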

4.3. Path Planning Application

4.3.1. Problem Description

In path planning with obstacle avoidance for a mobile robot, a mathematical model of the robot's environment must first be established in place of the physical environment. After setting the start and end points of the mobile robot in the environment model, an intelligent algorithm is used to find a continuous curve that satisfies a specific performance index while avoiding the obstacles in the environment.
Individuals generated randomly by an intelligent optimization algorithm do not necessarily conform to the search space. It is therefore necessary to establish a suitable fitness function that accounts for the various constraints, and then eliminate the individuals that violate the constraints in order to keep the better ones. A mobile robot must consider several factors in actual operation, leading to the following main constraints.
  • Maximum turning angle constraint
When using the algorithm for mobile robot path planning, the maximum steering angle constraint, which affects robot safety, must be considered. A node is discarded if the required rotation angle exceeds the maximum the robot can withstand; if the rotation angle satisfies the robot's maneuverability, the other constraints are checked. The maximum turning angle is set to 60° in the simulation experiments.
  • Threat area constraint
Mobile robot path planning should bring the robot to its destination over the shortest distance while bypassing obstacles, for which a mathematical expression of the obstacle area can be obtained. Assuming the distance between the robot and the center of an obstacle is dT, the damage probability PT(dT) inflicted on the robot by the obstacle area is:
PT(dT) = 0, dT > dTmax
PT(dT) = 1/dT, dTmin ≤ dT ≤ dTmax
PT(dT) = 1, dT < dTmin (15)
where dTmax is the maximum radius affected by the obstacle area and dTmin bounds the region in which the probability of robot collision is 1.
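Equation (15) translates directly into code. Note that, as written, the middle branch 1/dT only stays within [0, 1] when dTmin ≥ 1, which we take to be the intended setting:

```python
def threat_probability(d_t, d_tmin, d_tmax):
    """Damage probability of an obstacle area, Equation (15).

    d_t    -- distance between the robot and the obstacle center
    d_tmin -- radius of the certain-collision region
    d_tmax -- maximum radius affected by the obstacle
    """
    if d_t > d_tmax:
        return 0.0       # outside the obstacle's influence
    if d_t < d_tmin:
        return 1.0       # collision is certain
    return 1.0 / d_t     # decays with distance inside the ring
```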

4.3.2. Path Planning

The main steps of applying the improved grey wolf algorithm to path planning are as follows:
  • Establish the search space according to the actual environment, and set the starting point and target point.
  • Initialize the parameters of grey wolf algorithm, including the number of wolves, the maximum number of iterations, tent mapping parameters, and upper and lower bounds for parameter values.
  • Initialize the grey wolf positions and the objective function using the tent mapping.
  • Calculate each grey wolf's fitness and select the three fittest wolves as wolf α, wolf β, and wolf δ.
  • Compare with the objective function to update the positions and the objective function.
  • Update the convergence factor at each iteration.
  • Calculate the next positions of the other wolves from the positions of wolves α, β, and δ.
  • Reach the maximum number of iterations and output the optimal path.
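The steps above require a fitness function that combines path length with the constraints of Section 4.3.1. One hedged sketch, in which the penalty weights and the obstacle representation are our own choices, is:

```python
import numpy as np

def path_cost(waypoints, obstacles, max_turn_deg=60.0, penalty=1e6):
    """Fitness of a candidate path: length plus constraint penalties.

    waypoints -- sequence of (x, y) points from start to target
    obstacles -- list of (center, d_tmin, d_tmax) tuples
    The penalty scheme and parameter names are illustrative assumptions.
    """
    pts = np.asarray(waypoints, dtype=float)
    segs = np.diff(pts, axis=0)
    cost = np.linalg.norm(segs, axis=1).sum()      # objective: path length

    # Maximum turning angle constraint (60 degrees in the experiments)
    for s1, s2 in zip(segs[:-1], segs[1:]):
        denom = np.linalg.norm(s1) * np.linalg.norm(s2) + 1e-12
        ang = np.degrees(np.arccos(np.clip(np.dot(s1, s2) / denom, -1.0, 1.0)))
        if ang > max_turn_deg:
            cost += penalty

    # Threat-area constraint, following Equation (15)
    for center, d_tmin, d_tmax in obstacles:
        d = np.linalg.norm(pts - np.asarray(center), axis=1).min()
        if d < d_tmin:
            cost += penalty
        elif d <= d_tmax:
            cost += penalty / d
    return cost
```

Minimizing this cost with the improved GWO then yields a short path that respects the turning and threat constraints.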
To verify the performance of the improved GWO, it is applied to mobile robot path planning for analysis. The robot's starting point is (0, 0), the target point is (100, 100), and the obstacles are generated randomly; a1 = 2, a2 = 0, the initial number of grey wolves is 30, and the maximum number of iterations is 500. GWO, MGWO [10], NGWO [11], GWO-fuzzy [12], GWO-EPD [13], and the improved GWO of this paper are applied to the path planning task for comparison. Figure 6a–e show the obstacle-avoidance paths planned by each algorithm, and Figure 6f shows the corresponding convergence curves.
As shown in Figure 6a–e, the improved algorithms other than MGWO find poorer, more costly paths. Although the path found by MGWO is short, it passes too close to the danger area, which is unfavorable for mobile robot applications. In addition, Figure 6f shows that the algorithm of this paper converges better than the other improved algorithms. In summary, the improved GWO proposed in this paper can stably plan a safe path with optimal cost that satisfies the constraints.

5. Conclusions

This paper proposes an improved GWO and applies it to the path planning of mobile robots. First, an improved chaotic tent mapping is applied at the initialization stage to increase the diversity of the initial population and improve the global search capability. Second, a nonlinear convergence factor based on the Gaussian distribution curve is used to balance the algorithm's global and local search capabilities. Finally, traditional GWO is optimized with an improved dynamic weighting strategy. To test the capability of the improved GWO, 15 well-known benchmark functions spanning a wide range of dimensions and complexities are used, and the results are compared with those of eight other algorithms. The results show that the improved GWO has better convergence speed and solution accuracy. In addition, the improved GWO is applied to mobile robot path planning, where the tests show that it significantly improves path cost and convergence speed compared with the other algorithms.
When applied to obstacle avoidance path planning for mobile robots, the proposed algorithm avoids falling into local extrema and improves convergence speed and stability. In future work, we will continue to refine the algorithm and apply it to more practical mobile robot scenarios.
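The three improvements summarized above can be sketched in a minimal, self-contained implementation. The tent map, the Gaussian-shaped convergence factor, and the fitness-proportional leader weights below are plausible illustrative forms, not the paper’s exact equations:

```python
import math
import random

def improved_gwo(fitness, dim, lb, ub, n=30, t_max=200, seed=0):
    random.seed(seed)
    # (1) Chaotic tent-map initialization (sketch; the paper's improved map
    # adds a perturbation term that is omitted here).
    z, pop = random.random(), []
    for _ in range(n):
        row = []
        for _ in range(dim):
            z = 2 * z if z < 0.5 else 2 * (1 - z)  # tent-map iteration
            z = min(max(z, 1e-6), 1 - 1e-6)        # stay clear of fixed points
            row.append(lb + z * (ub - lb))
        pop.append(row)
    for t in range(t_max):
        pop.sort(key=fitness)
        leaders = pop[:3]                          # alpha, beta, delta wolves
        # (2) Nonlinear convergence factor: Gaussian-shaped decay from 2
        # toward 0 (an assumed form, not the paper's exact expression).
        a = 2.0 * math.exp(-2.0 * (t / t_max) ** 2)
        # (3) Dynamic proportional weights: fitter leaders pull harder
        # (again an assumed form of the weighting strategy).
        inv = [1.0 / (fitness(l) + 1e-12) for l in leaders]
        w = [v / sum(inv) for v in inv]
        new_pop = []
        for x in pop:
            nx = []
            for d in range(dim):
                step = 0.0
                for wi, leader in zip(w, leaders):
                    A = 2 * a * random.random() - a    # shrinks as a decays
                    C = 2 * random.random()
                    step += wi * (leader[d] - A * abs(C * leader[d] - x[d]))
                nx.append(min(max(step, lb), ub))      # clamp to the bounds
            new_pop.append(nx)
        pop = new_pop
    return min(pop, key=fitness)

sphere = lambda x: sum(v * v for v in x)           # benchmark f1
best = improved_gwo(sphere, dim=5, lb=-100, ub=100)
print(sphere(best))
```

On the sphere function this sketch drives the fitness close to zero within a few hundred iterations, mirroring the qualitative behavior reported in Tables 3 and 4.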

Author Contributions

Conceptualization, Y.H.; methodology, Y.H.; software, Y.H.; validation, Y.H. and H.G.; formal analysis, Y.H.; investigation, Y.H.; resources, Y.H.; data curation, Y.H.; writing—original draft preparation, Y.H., Z.W. and C.D.; writing—review and editing, H.G.; visualization, Y.H.; supervision, H.G.; project administration, Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the National Natural Science Foundation of China under Grant (No. 61903227). We also wish to acknowledge the support of the Important R&D Program of Shandong, China (Grant No. 2019GGX104105).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The processed data required to reproduce these findings cannot be shared as the data also forms part of an ongoing study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zafar, M.N.; Mohanta, J.C. Methodology for path planning and optimization of mobile robots: A review. Procedia Comput. Sci. 2018, 133, 141–152. [Google Scholar] [CrossRef]
  2. Zhao, X. Mobile robot path planning based on an improved A* algorithm. Robot 2018, 40, 903–910. [Google Scholar]
  3. Chongqing, T.Z. Path planning of mobile robot with A* algorithm based on the artificial potential field. Comput. Sci. 2021, 48, 327–333. [Google Scholar]
  4. Eberhart, R.C. Guest editorial special issue on particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 201–203. [Google Scholar] [CrossRef]
  5. Zhangfang, H. Improved particle swarm optimization algorithm for mobile robot path planning. Comput. Appl. Res. 2021, 38, 3089–3092. [Google Scholar] [CrossRef]
  6. Wang, H. Robot Path Planning Based on Improved Adaptive Genetic Algorithm. Electro Optics & Control: 1–7. Available online: http://kns.cnki.net/kcms/detail/41.1227.TN.20220105.1448.015.html (accessed on 21 March 2022).
  7. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  8. Saxena, A.; Kumar, R.; Das, S. β-chaotic map-enabled grey wolf optimizer. Appl. Soft Comput. 2019, 75, 84–105. [Google Scholar] [CrossRef]
  9. Cai, J. Non-linear grey wolf optimization algorithm based on Tent mapping and elite Gauss perturbation. Comput. Eng. Des. 2022, 43, 186–195. [Google Scholar] [CrossRef]
  10. Zhang, Y. Modified grey wolf optimization algorithm for global optimization problems. J. Univ. Shanghai Sci. Techol. 2021, 43, 73–82. [Google Scholar] [CrossRef]
  11. Wang, M. Novel grey wolf optimization algorithm based on nonlinear convergence factor. Appl. Res. Comput. 2016, 33, 3648–3653. [Google Scholar]
  12. Rodríguez, L.; Castillo, O.; Soria, J.; Melin, P.; Valdez, F.; Gonzalez, C.I.; Martinez, G.E.; Soto, J. A fuzzy hierarchical operator in the grey wolf optimizer algorithm. Appl. Soft Comput. 2017, 57, 315–328. [Google Scholar] [CrossRef]
  13. Saremi, S.; Zahra, M.S.; Mohammad, M.S. Evolutionary population dynamics and grey wolf optimizer. Neural Comput. Appl. 2015, 26, 1257–1263. [Google Scholar] [CrossRef]
  14. Wang, Q. Improved grey wolf optimizer with convergence factor and proportion weight. Comput. Eng. Appl. 2019, 55, 60–65+98. [Google Scholar]
  15. Yue, Z.; Zhang, S.; Xiao, W. A novel hybrid algorithm based on grey wolf optimizer and fireworks algorithm. Sensors 2020, 20, 2147. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Wang, S.; Yang, X.; Wang, X.; Qian, Z. A virtual force algorithm-lévy-embedded grey wolf optimization algorithm for wireless sensor network coverage optimization. Sensors 2019, 19, 2735. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Mahdy, A.M.S.; Lotfy, K.; Hassan, W.; El-Bary, A.A. Analytical solution of magneto-photothermal theory during variable thermal conductivity of a semiconductor material due to pulse heat flux and volumetric heat source. Waves Random Complex Media 2021, 31, 2040–2057. [Google Scholar] [CrossRef]
  18. Khamis, A.K.; Lotfy, K.; El-Bary, A.A.; Mahdy, A.M.; Ahmed, M.H. Thermal-piezoelectric problem of a semiconductor medium during photo-thermal excitation. Waves Random Complex Media 2021, 31, 2499–2513. [Google Scholar] [CrossRef]
  19. Yin, L. Path Planning Combined with Improved Grey Wolf Optimization Algorithm and Artificial Potential Filed Method. Elector Measurement Technology: 1–11. Available online: https://kns.cnki.net/kcms/detail/detail.aspx?doi=10.19651/j.cnki.emt.2108659 (accessed on 21 March 2022).
  20. You, D. A path planning method for mobile robot based on improved grey wolf optimizer. Mach. Tool Hydraul. 2021, 49, 6. [Google Scholar]
  21. Kumar, R.; Singh, L.; Tiwari, R. Path planning for the autonomous robots using modified grey wolf optimization approach. J. Intell. Fuzzy Syst. 2021, 40, 9453–9470. [Google Scholar] [CrossRef]
  22. Ge, F.; Li, K.; Xu, W. Path planning of UAV for oilfield inspection based on improved grey wolf optimization algorithm. In Proceedings of the 2019 Chinese Control and Decision Conference (CCDC), Nanchang, China, 3–5 June 2019. [Google Scholar]
  23. Kumar, R.; Singh, L.; Tiwari, R. Comparison of two meta–heuristic algorithms for path planning in robotics. In Proceedings of the 2020 International Conference on Contemporary Computing and Applications (IC3A), Lucknow, India, 5–7 February 2020. [Google Scholar]
  24. Shrivastava, V.K.; Makhija, P.; Raj, R. Joint optimization of energy efficiency and scheduling strategies for side-link relay system. In Proceedings of the 2017 IEEE Wireless Communications and Networking Conference (WCNC), San Francisco, CA, USA, 19–22 March 2017. [Google Scholar]
Figure 1. Grey wolf class system.
Figure 2. Prey tracing map.
Figure 3. Chaotic mapping curve. (a) Tent; (b) improved tent.
Figure 4. Convergence factor.
Figure 5. Convergence curves of algorithms on test function. (a) f1 function; (b) f2 function; (c) f3 function; (d) f4 function; (e) f5 function; (f) f7 function; (g) f8 function; (h) f9 function; (i) f10 function; (j) f11 function; (k) f12 function; (l) f14 function.
Figure 6. Path planning results. (a) Improved GWO; (b) MGWO; (c) NGWO; (d) GWO-fuzzy; (e) GWO-EPD; (f) convergence curves.
Table 1. Parameter Configuration.
Parameter Symbol | Meaning | Value
N | Population size | 30
Tmax | Maximum number of iterations | 500
a1 | Initial value of the convergence factor | 2
a2 | Final value of the convergence factor | 0
Table 2. Benchmark functions.
Function | Dim | Scope | Solution
f1 = \sum_{i=1}^{n} x_i^2 | 30 | [−100, 100] | 0
f2 = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i| | 30 | [−10, 10] | 0
f3 = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2 | 30 | [−100, 100] | 0
f4 = \max_i \{ |x_i|, 1 \le i \le n \} | 30 | [−100, 100] | 0
f5 = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right] | 30 | [−30, 30] | 0
f6 = \sum_{i=1}^{n} (x_i + 0.5)^2 | 30 | [−100, 100] | 0
f7 = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1) | 30 | [−1.28, 1.28] | 0
f8 = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right] | 30 | [−5.12, 5.12] | 0
f9 = -20 \exp\left( -0.2 \sqrt{ \tfrac{1}{n} \sum_{i=1}^{n} x_i^2 } \right) - \exp\left( \tfrac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i) \right) + 20 + e | 30 | [−32, 32] | 0
f10 = \tfrac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( x_i / \sqrt{i} \right) + 1 | 30 | [−600, 600] | 0
f11 = \tfrac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4) | 30 | [−50, 50] | 0.398
f12 = 0.1 \left\{ 10 \sin^2(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 \left[ 1 + 10 \sin^2(3\pi x_{i+1}) \right] + (x_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4) | 30 | [−50, 50] | 3
f13 = \sum_{i=1}^{n} \left| x_i \sin(x_i) + 0.1 x_i \right| | 30 | [−10, 10] | 0
f14 = 0.5 + \dfrac{ \sin^2\left( \sqrt{ \sum_{i=1}^{n} x_i^2 } \right) - 0.5 }{ \left[ 1 + 0.001 \sum_{i=1}^{n} x_i^2 \right]^2 } | 30 | [−100, 100] | 0
f15 = \sum_{i=1}^{n} \left[ x_i^2 + 2 x_{i+1}^2 - 0.3 \cos(3\pi x_i) - 0.4 \cos(4\pi x_{i+1}) + 0.7 \right] | 30 | [−15, 15] | 0
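The benchmark definitions in Table 2 can be checked directly in code. A sketch of three of them (f1 Sphere, f8 Rastrigin, f9 Ackley) follows; each has its known minimum of 0 at the origin:

```python
import math

def f1(x):
    """Sphere function: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def f8(x):
    """Rastrigin function: highly multimodal, minimum 0 at the origin."""
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def f9(x):
    """Ackley function: minimum 0 at the origin."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

zeros = [0.0] * 30
print(f1(zeros), f8(zeros), f9(zeros))  # all evaluate to 0.0
```

Evaluating an optimizer’s returned solution with these functions gives exactly the “Average Value” quantities reported in Tables 3 and 4.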
Table 3. Test functions results.
Function | Algorithm | Average Value | Standard Deviation
f1 | GWO | 4.389 × 10^−27 | 1.056 × 10^−27
f1 | Improved GWO | 0 | 0
f1 | MGWO | 5.996 × 10^−199 | 0
f1 | NGWO | 9.939 × 10^−49 | 4.754 × 10^−48
f1 | GWO-fuzzy | 9.887 × 10^−40 | 4.977 × 10^−40
f1 | GWO-EPD | 1.501 × 10^−31 | 2.289 × 10^−30
f2 | GWO | 2.167 × 10^−5 | 3.958 × 10^−6
f2 | Improved GWO | 0 | 0
f2 | MGWO | 1.617 × 10^−102 | 2.154 × 10^−102
f2 | NGWO | 2.133 × 10^−26 | 1.143 × 10^−26
f2 | GWO-fuzzy | 1.572 × 10^−24 | 1.374 × 10^−23
f2 | GWO-EPD | 1.893 × 10^−19 | 2.358 × 10^−20
f3 | GWO | 1.115 × 10^−7 | 3.463 × 10^−5
f3 | Improved GWO | 0 | 0
f3 | MGWO | 6.982 × 10^−166 | 0
f3 | NGWO | 1.015 × 10^−33 | 3.789 × 10^−31
f3 | GWO-fuzzy | 5.981 × 10^−8 | 3.753 × 10^−7
f3 | GWO-EPD | 4.505 × 10^−8 | 2.456 × 10^−6
f4 | GWO | 8.423 × 10^−7 | 4.583 × 10^−7
f4 | Improved GWO | 0 | 0
f4 | MGWO | 5.368 × 10^−90 | 9.664 × 10^−89
f4 | NGWO | 4.414 × 10^−20 | 1.104 × 10^−19
f4 | GWO-fuzzy | 4.995 × 10^−9 | 8.259 × 10^−7
f4 | GWO-EPD | 3.395 × 10^−7 | 7.652 × 10^−6
f5 | GWO | 2.706 × 10^1 | 6.824 × 10^−1
f5 | Improved GWO | 2.867 × 10^1 | 2.611 × 10^−2
f5 | MGWO | 2.761 × 10^1 | 3.917 × 10^−1
f5 | NGWO | 2.719 × 10^1 | 5.836 × 10^−1
f5 | GWO-fuzzy | 2.855 × 10^1 | 8.518 × 10^−1
f5 | GWO-EPD | 2.818 × 10^1 | 8.075 × 10^−1
f6 | GWO | 1.013 | 2.816 × 10^−1
f6 | Improved GWO | 6.533 × 10^−1 | 2.860 × 10^−1
f6 | MGWO | 5.261 | 6.381 × 10^−1
f6 | NGWO | 1.829 | 3.763 × 10^−1
f6 | GWO-fuzzy | 2.324 | 5.052 × 10^−1
f6 | GWO-EPD | 1.238 | 4.725 × 10^−1
f7 | GWO | 1.154 × 10^−3 | 1.226 × 10^−3
f7 | Improved GWO | 2.961 × 10^−7 | 2.373 × 10^−7
f7 | MGWO | 1.914 × 10^−4 | 1.369 × 10^−4
f7 | NGWO | 1.347 × 10^−3 | 2.747 × 10^−4
f7 | GWO-fuzzy | 1.744 × 10^−3 | 1.047 × 10^−3
f7 | GWO-EPD | 1.646 × 10^−3 | 1.031 × 10^−3
f8 | GWO | 6.934 × 10^−12 | 4.701
f8 | Improved GWO | 0 | 0
f8 | MGWO | 0 | 0
f8 | NGWO | 5.684 × 10^−14 | 2.017 × 10^−1
f8 | GWO-fuzzy | 6.130 × 10^−1 | 1.657 × 10^−1
f8 | GWO-EPD | 1.715 × 10^−13 | 3.852
f9 | GWO | 1.103 × 10^−13 | 1.633 × 10^−14
f9 | Improved GWO | 8.811 × 10^−16 | 1.164 × 10^−16
f9 | MGWO | 4.440 × 10^−15 | 6.486 × 10^−15
f9 | NGWO | 2.930 × 10^−14 | 2.420 × 10^−15
f9 | GWO-fuzzy | 2.930 × 10^−14 | 3.923 × 10^−15
f9 | GWO-EPD | 4.352 × 10^−14 | 6.4963 × 10^−15
f10 | GWO | 7.558 × 10^−3 | 1.412 × 10^−2
f10 | Improved GWO | 0 | 0
f10 | MGWO | 0 | 0
f10 | NGWO | 0 | 0
f10 | GWO-fuzzy | 7.2159 × 10^−4 | 3.0047 × 10^−3
f10 | GWO-EPD | 5.6751 × 10^−3 | 5.7892 × 10^−3
f11 | GWO | 3.8124 × 10^−1 | 6.7824 × 10^−2
f11 | Improved GWO | 2.1331 × 10^−3 | 6.8945 × 10^−3
f11 | MGWO | 5.3122 × 10^−1 | 3.1121 × 10^−2
f11 | NGWO | 1.1021 × 10^1 | 3.0031
f11 | GWO-fuzzy | 1.3811 | 8.3221
f11 | GWO-EPD | 1.2254 × 10^−2 | 4.2214 × 10^−1
f12 | GWO | 7.3712 | 4.1077 × 10^−1
f12 | Improved GWO | 1.2922 × 10^−2 | 7.6012 × 10^−2
f12 | MGWO | 8.3211 | 3.2454 × 10^−1
f12 | NGWO | 1.6722 × 10^1 | 3.1207
f12 | GWO-fuzzy | 6.1545 × 10^−1 | 4.5512
f12 | GWO-EPD | 8.21475 × 10^2 | 8.1542 × 10^2
f13 | GWO | 4.5214 × 10^−3 | 2.5784 × 10^−3
f13 | Improved GWO | 2.4457 × 10^−6 | 6.3641 × 10^−6
f13 | MGWO | 7.7541 × 10^−5 | 8.2231 × 10^−4
f13 | NGWO | 2.1441 × 10^1 | 8.1601
f13 | GWO-fuzzy | 1.2215 × 10^1 | 2.2232 × 10^1
f13 | GWO-EPD | 1.2014 × 10^−2 | 1.2424 × 10^1
f14 | GWO | 1.4125 × 10^−2 | 2.3622 × 10^−3
f14 | Improved GWO | 3.1337 × 10^−3 | 1.1184 × 10^−3
f14 | MGWO | 4.3221 × 10^−3 | 1.4752 × 10^−3
f14 | NGWO | 4.8842 × 10^−1 | 2.4821 × 10^−3
f14 | GWO-fuzzy | 1.3315 × 10^−2 | 2.4774 × 10^−1
f14 | GWO-EPD | 3.9454 × 10^−1 | 1.7424 × 10^−1
f15 | GWO | 1.2547 × 10^−10 | 7.2242 × 10^−11
f15 | Improved GWO | 2.4467 × 10^−13 | 1.0871 × 10^−14
f15 | MGWO | 7.2101 × 10^−4 | 7.9945 × 10^−5
f15 | NGWO | 1.5547 × 10^1 | 9.0141
f15 | GWO-fuzzy | 2.4875 × 10^−13 | 1.0401 × 10^1
f15 | GWO-EPD | 7.2154 × 10^2 | 9.4012 × 10^1
Table 4. Test functions results.
Function | Algorithm | Average Value | Standard Deviation
f1 | Improved GWO | 0 | 0
f1 | PSO | 3.125 × 10^−2 | 2.716 × 10^−2
f1 | SSA | 1.891 × 10^−257 | 0
f1 | MA | 1.711 × 10^−43 | 4.254 × 10^−43
f2 | Improved GWO | 0 | 0
f2 | PSO | 1.416 × 10^−1 | 3.581 × 10^−1
f2 | SSA | 1.435 × 10^−93 | 8.487 × 10^−93
f2 | MA | 2.255 × 10^2 | 8.183 × 10^2
f3 | Improved GWO | 0 | 0
f3 | PSO | 7.225 × 10^−2 | 5.331 × 10^−1
f3 | SSA | 2.821 × 10^−180 | 0
f3 | MA | 7.318 × 10^−5 | 5.149 × 10^−4
f4 | Improved GWO | 0 | 0
f4 | PSO | 9.225 × 10^−2 | 1.153 × 10^−1
f4 | SSA | 1.354 × 10^−93 | 6.81 × 10^−93
f4 | MA | 8.154 × 10^−7 | 6.518 × 10^−5
f5 | Improved GWO | 2.867 × 10^1 | 2.611 × 10^−2
f5 | PSO | 1.314 × 10^2 | 1.795 × 10^2
f5 | SSA | 2.327 × 10^−3 | 2.189 × 10^−3
f5 | MA | 4.501 × 10^−1 | 5.587 × 10^−1
f6 | Improved GWO | 6.533 | 2.801 × 10^−1
f6 | PSO | 8.792 × 10^5 | 9.782 × 10^5
f6 | SSA | 1.047 × 10^1 | 4.772
f6 | MA | 3.128 × 10^1 | 8.791 × 10^2
f7 | Improved GWO | 2.961 × 10^−7 | 2.373 × 10^−7
f7 | PSO | 2.561 × 10^−1 | 7.844 × 10^−1
f7 | SSA | 1.144 × 10^−4 | 3.581 × 10^−3
f7 | MA | 3.254 × 10^−2 | 4.358 × 10^−1
f8 | Improved GWO | 0 | 0
f8 | PSO | 3.015 | 2.641
f8 | SSA | 8.161 × 10^−185 | 1.254 × 10^−186
f8 | MA | 2.271 × 10^−45 | 5.174 × 10^−44
f9 | Improved GWO | 8.881 × 10^−16 | 1.604 × 10^−16
f9 | PSO | 3.712 × 10^−2 | 2.816 × 10^−1
f9 | SSA | 8.881 × 10^−16 | 0
f9 | MA | 4.213 × 10^−10 | 1.576 × 10^−9
f10 | Improved GWO | 0 | 0
f10 | PSO | 5.001 × 10^−3 | 2.655 × 10^−1
f10 | SSA | 4.114 × 10^−210 | 3.241 × 10^−211
f10 | MA | 5.260 × 10^−140 | 0
f11 | Improved GWO | 2.1331 × 10^−3 | 6.8945 × 10^−3
f11 | PSO | 1.8741 | 4.4411
f11 | SSA | 1.496 × 10^−2 | 2.106 × 10^−2
f11 | MA | 2.714 × 10^−1 | 1.954 × 10^−17
f12 | Improved GWO | 1.292 × 10^−2 | 7.6012 × 10^−2
f12 | PSO | 8.4152 | 8.3372
f12 | SSA | 7.346 × 10^−1 | 1.355 × 10^−2
f12 | MA | 8.214 | 1.245 × 10^−2
f13 | Improved GWO | 2.4457 × 10^−6 | 6.3641 × 10^−6
f13 | PSO | 1.052 × 10^2 | 1.2362
f13 | SSA | 1.232 × 10^−3 | 1.571 × 10^−4
f13 | MA | 3.247 × 10^−3 | 5.014 × 10^−3
f14 | Improved GWO | 3.1337 × 10^−3 | 1.1184 × 10^−3
f14 | PSO | 3.958 × 10^−1 | 1.541 × 10^−2
f14 | SSA | 9.001 × 10^−2 | 0
f14 | MA | 3.971 × 10^−1 | 6.051 × 10^−1
f15 | Improved GWO | 2.4467 × 10^−13 | 1.0871 × 10^−14
f15 | PSO | 7.1522 | 9.142 × 10^1
f15 | SSA | 4.701 × 10^−7 | 3.147 × 10^−8
f15 | MA | 5.445 × 10^−2 | 4.401 × 10^−2
Table 5. Wilcoxon’s rank test of Improved GWO and other algorithms on 15 benchmark functions.
Function | | GWO | MGWO | NGWO | GWO-Fuzzy | GWO-EPD | SSA | MA | PSO
f1 | P | 6.52 × 10^−12 | 8.78 × 10^−8 | 5.05 × 10^−12 | 6.52 × 10^−12 | 6.52 × 10^−12 | 6.01 × 10^−5 | 6.52 × 10^−12 | 6.52 × 10^−12
f1 | R | + | + | + | + | + | + | + | +
f2 | P | 2.07 × 10^−11 | 1.40 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11
f2 | R | + | + | + | + | + | + | + | +
f3 | P | 3.77 × 10^−10 | 6.52 × 10^−12 | 6.52 × 10^−12 | 6.52 × 10^−12 | 6.52 × 10^−12 | 3.77 × 10^−10 | 6.52 × 10^−12 | 6.52 × 10^−12
f3 | R | + | + | + | + | + | + | + | +
f4 | P | 6.52 × 10^−12 | 5.05 × 10^−11 | 6.52 × 10^−12 | 6.52 × 10^−12 | 6.52 × 10^−12 | 3.77 × 10^−11 | 6.52 × 10^−12 | 6.52 × 10^−12
f4 | R | + | + | + | + | + | + | + | +
f5 | P | 4.60 × 10^−3 | 1.20 × 10^−5 | 6.01 × 10^−3 | 1.09 × 10^−2 | 1.68 × 10^−4 | 2.05 × 10^−2 | 4.23 × 10^−1 | 1.20 × 10^−6
f5 | R | + | + | + | + | + | − | − | +
f6 | P | 2.07 × 10^−11 | 1.41 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11
f6 | R | + | + | + | + | + | + | + | +
f7 | P | 3.01 × 10^−11 | 5.24 × 10^−9 | 3.01 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11
f7 | R | + | + | + | + | + | + | + | +
f8 | P | 6.52 × 10^−12 | NaN | 6.52 × 10^−12 | 6.52 × 10^−12 | 6.52 × 10^−12 | 2.07 × 10^−11 | 6.52 × 10^−12 | 6.52 × 10^−12
f8 | R | + | = | + | + | + | + | + | +
f9 | P | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 3.77 × 10^−10 | 2.07 × 10^−11 | 2.07 × 10^−11
f9 | R | + | + | + | + | + | + | + | +
f10 | P | 6.52 × 10^−12 | NaN | NaN | 6.52 × 10^−12 | 6.52 × 10^−12 | 2.07 × 10^−11 | 2.07 × 10^−11 | 6.52 × 10^−12
f10 | R | + | = | = | + | + | + | + | +
f11 | P | 6.52 × 10^−12 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 6.52 × 10^−12 | 2.07 × 10^−11 | 2.07 × 10^−11 | 6.52 × 10^−12
f11 | R | + | + | + | + | + | + | + | +
f12 | P | 6.52 × 10^−12 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 6.52 × 10^−12 | 2.07 × 10^−11 | 2.07 × 10^−11 | 6.52 × 10^−12
f12 | R | + | + | + | + | + | + | + | +
f13 | P | 6.52 × 10^−12 | 1.20 × 10^−6 | 6.52 × 10^−12 | 6.52 × 10^−12 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 6.52 × 10^−12
f13 | R | + | + | + | + | + | + | + | +
f14 | P | 6.52 × 10^−12 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 2.07 × 10^−11 | 6.52 × 10^−12
f14 | R | + | + | + | + | + | + | + | +
f15 | P | 2.07 × 10^−11 | 6.52 × 10^−12 | 6.52 × 10^−12 | 2.07 × 10^−11 | 6.52 × 10^−12 | 6.52 × 10^−12 | 6.52 × 10^−12 | 6.52 × 10^−12
f15 | R | + | + | + | + | + | + | + | +
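The p-values in Table 5 come from a Wilcoxon rank-sum test between two samples of run results. A self-contained sketch using the normal approximation follows (the paper presumably used a statistics package; this hand-rolled version is for illustration only):

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum test p-value via the normal approximation.
    Ties receive averaged ranks; no tie correction is applied to the variance."""
    n1, n2 = len(x), len(y)
    pooled = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg = (i + j + 1) / 2.0            # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    # Rank sum of the first sample, compared against its null distribution.
    w = sum(r for r, (_, g) in zip(ranks, pooled) if g == 0)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Two clearly separated samples give a small p (significant at the 5% level):
a = list(range(1, 11))
b = list(range(11, 21))
print(rank_sum_p(a, b) < 0.05)  # True
```

A “+” in the R rows of Table 5 corresponds to p < 0.05 with the improved GWO performing better, “−” to it performing worse, and “=” (NaN) to statistically indistinguishable results.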
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Hou, Y.; Gao, H.; Wang, Z.; Du, C. Improved Grey Wolf Optimization Algorithm and Application. Sensors 2022, 22, 3810. https://doi.org/10.3390/s22103810
