Improved Gravitational Search Algorithm Based on Adaptive Strategies

The gravitational search algorithm (GSA) is a global optimization algorithm with the advantages of a swarm intelligence algorithm. Compared with traditional algorithms, its global search and convergence performance is relatively good, but its solutions are not always accurate, and the algorithm has difficulty jumping out of locally optimal solutions. In view of these shortcomings, an improved gravitational search algorithm based on adaptive strategies is proposed. The algorithm uses adaptive strategies to improve the updating methods for the distance between particles, the gravitational constant, and the positions in the gravitational search model. This strengthens the information interaction between particles in the group and improves the exploration and exploitation capacity of the algorithm. In this paper, 13 classical unimodal and multimodal test functions were selected for simulation performance tests, and the CEC2017 benchmark functions were used for a comparison test. The test results show that the improved gravitational search algorithm addresses the tendency of the original algorithm to fall into local extrema and significantly improves both the solution accuracy and the ability to find the globally optimal solution.


Introduction
With the progress of science and technology as well as the development of production and management, optimization problems cover almost all aspects of human life and production, and optimization has become an important theoretical basis and indispensable method of modern science. The main solutions to optimization problems include traditional optimization methods and modern optimization methods. Traditional optimization methods are based on single-point optimization, and the main approaches are enumeration methods, numerical methods, and analytical methods. Modern optimization methods use swarm intelligence algorithms inspired by biological evolution; these algorithms simulate the structural characteristics, evolutionary laws, thinking structures, and behavior patterns of human, natural, and other biological populations. Typical swarm intelligence algorithms include the evolutionary algorithm, artificial immune algorithm, memetic algorithm, particle swarm optimization, shuffled frog leaping algorithm, cat swarm optimization, bacterial foraging optimization, artificial fish school algorithm, ant colony algorithm, and artificial bee colony algorithm. Traditional optimization methods have strict requirements for the optimization problems in practical projects, and their calculation speed is slow and convergence poor when solving large-scale complex problems; often, a solution cannot be found in an acceptable time. Modern optimization methods have loose requirements for the problems they solve and have good adaptability, robustness, and global search ability.
The gravitational search algorithm (GSA) was proposed by Rashedi et al. in 2009. It is a swarm intelligence global optimization algorithm that is simple to implement and achieves relatively good performance in global search and convergence. It is widely used in path planning [1,2], image classification [3][4][5], neural networks [6,7], data prediction [8][9][10], scheduling and parameter estimation [11][12][13][14][15][16][17], and other fields. However, its solution accuracy is not high, and the algorithm finds it difficult to jump out of locally optimal solutions in the later stages. To address these shortcomings, scholars have improved the GSA with respect to the following three aspects: improvements to the adaptive strategy, integration with other swarm intelligence algorithms, and the introduction of other improvement strategies.
To address the first aspect, the authors of [18] proposed an adaptive GSA called SGSA in which an exponential decay model was introduced for the gravitational constant so that the algorithm can adjust the relevant parameters as the iterations proceed, which also improves the exploration ability of the algorithm. An adaptive strategy based on population density was proposed for the distance between particles to prevent the algorithm from degenerating into random motion and to accelerate the convergence speed. The authors of [19] designed a new dynamic inertial weight and a velocity-position trend factor to improve the GSA so that the inertial mass of the particles follows a certain trend as the iterations progress. This gives the change in position of each particle both randomness and stability and gives the algorithm a certain degree of adaptability.
To address the second aspect, the authors of [20] combined the GSA with an immune algorithm, which introduces antibody diversity and immune memory characteristics into GSA and improves its global search ability. To overcome the problems of slow iterations and tendency to fall into local minima during the optimization of the standard GSA, one study [21] introduced the speed update mechanism of particle swarm optimization into the position update of GSA, combining the exploitation ability of particle swarm optimization and the exploration ability of GSA, and effectively solving the abovementioned problems. Another study [22] combined the free search differential evolution algorithm with the GSA to make full use of the exploration ability of GSA and the exploitation ability of the free search differential evolution algorithm, and to avoid the premature convergence of GSA. The authors of [23] combined the GSA with the sperm swarm optimization algorithm, which combines the advantages of both algorithms. Through testing, the hybrid method was found to have a better ability to avoid local extrema, and its convergence speed is relatively fast.
Finally, several studies have addressed the third aspect. The authors of [24] introduced a mutation operator to GSA and performed the mutation operation on particles with poor fitness values in the population. The particles were reinitialized, effectively preventing the algorithm from falling into locally optimal values, and the improved algorithm was successfully applied to an economic load scheduling problem. Another study [25] proposed the rotation GSA, which optimizes the selection of the k best in GSA by introducing a rotation operator so that unincluded particles have the opportunity to affect the motion of other particles and to enhance the exploration ability of the algorithm. The study [26] proposed a GSA based on Levy flight and chaos theory. The Levy distribution was used to improve the diversity of the population search space, and chaos search was used to strengthen candidate solutions to achieve global optimization. The authors of [27] proposed an improved GSA based on mutation strategy and reverse evaluation mechanism. The reverse learning bidirectional evaluation mechanism proposed by Tizhoosh was used to initialize and update the population so that particles were better distributed. In addition, the best individual and particles with poor fitness were cross-mutated using a mutation strategy to avoid premature convergence.
In summary, the key to improving the performance of the GSA is to balance the diversity and convergence of the particles and prevent the algorithm from falling into local extrema too early. Among the above three aspects of improvement, the adaptive strategy is the most effective and obtains the best performance. However, scholars have used at most two adaptive strategies to improve the algorithm's performance; hence, there is room for further improvement. In this paper, a new adaptive GSA is proposed in which three adaptive strategies, for the population density, gravitational constant, and location update, are used in combination to improve the optimization accuracy and convergence of the GSA. The organizational structure of this paper is as follows: Section 2 outlines the basic GSA, Section 3 introduces the three adaptive improvement strategies, and Section 4 describes the idea and steps of the improved GSA and analyzes its space-time complexity and convergence. Finally, Section 5 evaluates the performance of the improved GSA through experiments, and Section 6 summarizes the conclusions.

Basic GSA
The GSA treats all particles as objects with mass. During the optimization process, all particles move unimpeded. Each particle is affected by the gravity of the other particles in the solution space and accelerates toward the particles with greater mass. Because the mass of a particle is related to its fitness, particles with better fitness have greater mass. Therefore, in the process of approaching particles with large masses, particles with small masses gradually approach the optimal solution of the optimization problem. The GSA differs from other swarm intelligence algorithms in that particles do not need to perceive the environment through environmental factors; instead, they share information through the gravitational interaction between individuals. Therefore, even without the influence of environmental factors, particles can perceive the global situation and conduct a global search, thus realizing global optimization of the problem.
In a GSA, we assume that a D-dimensional search space contains N objects, and the position of the i-th object is

X_i = (x_i^1, ..., x_i^k, ..., x_i^D), i = 1, 2, ..., N. (1)

In Equation (1), x_i^k represents the position of the i-th object in the k-th dimension.

Inertial Mass Calculation
In the GSA, the inertial mass of each particle is directly related to the fitness value obtained from the particle location. At time t, the mass of particle X i is expressed by M i (t). Because the inertial mass M is calculated according to its corresponding fitness value, the particles with larger M values are closer to the optimal solution in the solution space, and they exert a greater attraction on other objects.
Particle mass M_i(t) is calculated according to

m_i(t) = (fit_i(t) − worst(t)) / (best(t) − worst(t)),   M_i(t) = m_i(t) / Σ_{j=1}^N m_j(t). (2)

Here, fit_i(t) represents the fitness of particle X_i, best(t) represents the best solution at time t, and worst(t) represents the worst solution at time t. For a minimization problem, they are calculated as follows:

best(t) = min_{j∈{1,...,N}} fit_j(t),   worst(t) = max_{j∈{1,...,N}} fit_j(t). (3)

It can be seen from Equation (2) that m_i(t) normalizes the fitness of the particles to the range [0, 1]; the proportion of m_i(t) in the total then gives the mass M_i(t) of the particle.
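As a concrete illustration, the mass computation of Equation (2) can be sketched in Python for a minimization problem (the function and array names below are ours, not the paper's):

```python
import numpy as np

def masses(fitness):
    """Normalized inertial masses M_i(t) for a minimization problem (Eq. (2))."""
    best, worst = fitness.min(), fitness.max()   # best(t), worst(t) (Eq. (3))
    if best == worst:                            # degenerate population: equal masses
        return np.full(len(fitness), 1.0 / len(fitness))
    m = (fitness - worst) / (best - worst)       # normalize fitness to [0, 1]
    return m / m.sum()                           # proportion of the total mass

# The particle with the best (smallest) fitness receives the largest mass.
```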

Gravitational Calculation
At time t, the gravitational force exerted by object j on object i in the k-th dimension is calculated as follows:

F_ij^k(t) = G(t) · (M_ai(t) × M_aj(t)) / (R_ij(t) + ε) · (x_j^k(t) − x_i^k(t)), (4)

where ε represents a very small constant, M_aj(t) represents the inertial mass of the acting object j, and M_ai(t) represents the inertial mass of the acted-on object i. Furthermore, G(t) represents the gravitational constant, which decays over time; its size is related to the number of iterations, and it is calculated as

G(t) = G_0 · e^(−α t / T). (5)

In Equation (5), G_0 represents the value of G at time t_0, where G_0 = 100, α = 20, and T is the maximum number of iterations. Finally, R_ij(t) represents the Euclidean distance between objects X_i and X_j:

R_ij(t) = ||X_i(t), X_j(t)||_2. (6)

At time t, the force acting on X_i in the k-th dimension is equal to the sum of the randomly weighted forces exerted on it by all other particles:

F_i^k(t) = Σ_{j=1, j≠i}^N rand_j · F_ij^k(t), (7)

where rand_j is a uniform random number in [0, 1].
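The decaying gravitational constant and the pairwise force computation above can be sketched as follows; this is an illustrative reading of Equations (4)-(7), with the random per-pair weight following the standard GSA formulation:

```python
import numpy as np

def grav_constant(t, T, G0=100.0, alpha=20.0):
    """G(t) = G0 * exp(-alpha * t / T) (Eq. (5))."""
    return G0 * np.exp(-alpha * t / T)

def total_forces(X, M, G, rng, eps=1e-12):
    """Resultant force on each particle (Eqs. (4), (6), (7)).

    X: (N, D) positions; M: (N,) inertial masses; rng: a numpy Generator.
    """
    N, D = X.shape
    F = np.zeros((N, D))
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            diff = X[j] - X[i]                   # points from i toward j
            R = np.linalg.norm(diff)             # Euclidean distance (Eq. (6))
            F[i] += rng.random() * G * M[i] * M[j] / (R + eps) * diff
    return F
```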

Location Update
When a particle is subjected to the gravitational action of other particles, it generates acceleration. According to the force calculated in Equation (7), the acceleration of object i in the k-th dimension is the ratio of its force to its inertial mass:

a_i^k(t) = F_i^k(t) / M_i(t). (8)

In each iteration, the algorithm updates the speed and position of object i according to the calculated acceleration. The update method is

v_i^k(t + 1) = rand_i · v_i^k(t) + a_i^k(t), (9)
x_i^k(t + 1) = x_i^k(t) + v_i^k(t + 1), (10)

where rand_i is a uniform random number in [0, 1]. The basic GSA implementation steps are as follows:
1. Initialize the position and acceleration of all particles, and set the number of iterations and the parameters.
2. Calculate the fitness value of each particle, and update the gravitational constant according to Equation (5).
3. Calculate the mass of each particle from the calculated fitness values, and calculate the acceleration of each particle using Equations (2)-(8).
4. Calculate the speed of each particle according to Equation (9), and then update the particle positions according to Equation (10).
5. If the termination condition is not met, return to step 2; otherwise, output the optimal solution of the algorithm.
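Putting the pieces together, the basic GSA steps above can be sketched as a minimal Python implementation. This is an illustrative sketch of the standard algorithm (without the Kbest elitism used in some variants); the sphere function, bounds, and parameter values in the usage line are arbitrary choices for the demo.

```python
import numpy as np

def gsa(f, bounds, N=30, D=2, T=200, G0=100.0, alpha=20.0, seed=0):
    """Minimal basic GSA (steps 1-5) minimizing f over the box [lo, hi]^D."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (N, D))              # step 1: initialize positions
    V = np.zeros((N, D))
    for t in range(T):
        fit = np.apply_along_axis(f, 1, X)       # step 2: fitness of each particle
        G = G0 * np.exp(-alpha * t / T)          # gravitational constant (Eq. (5))
        best, worst = fit.min(), fit.max()
        m = (fit - worst) / (best - worst + 1e-12)
        M = m / (m.sum() + 1e-12)                # normalized inertial masses (Eq. (2))
        A = np.zeros((N, D))
        for i in range(N):                       # step 3: pairwise pulls -> acceleration
            for j in range(N):
                if i == j:
                    continue
                diff = X[j] - X[i]
                R = np.linalg.norm(diff)         # Euclidean distance (Eq. (6))
                # a_i = F_i / M_i, so the mass of i cancels (Eqs. (4), (7), (8))
                A[i] += rng.random() * G * M[j] / (R + 1e-12) * diff
        V = rng.random((N, D)) * V + A           # step 4: velocity update (Eq. (9))
        X = np.clip(X + V, lo, hi)               # position update (Eq. (10)), kept in-box
    fit = np.apply_along_axis(f, 1, X)
    return X[fit.argmin()], float(fit.min())

# Usage: minimize the sphere function sum(x^2) over [-5, 5]^2
x_best, f_best = gsa(lambda x: float(np.sum(x**2)), (-5.0, 5.0))
```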

Adaptive Population Density Strategy
The distance between particles in the basic GSA is the Euclidean distance. A large number of experiments in [18] found that a constant fixed distance performs better than the Euclidean distance, but a fixed distance value has obvious shortcomings. First, when the population is dispersed, the distance between particles is large and the interaction force is very small; hence, the GSA degenerates into random motion. Second, when the population is dense, the distance between particles is very small and the interaction force is very large; the particles then oscillate at high frequency near the optimal solution, which reduces the convergence speed.

The population density is an indicator for evaluating the distance between particles: it is the median of the average distances of all particles in the population. A smaller population density means the population is more concentrated; by contrast, a larger population density means the population is more dispersed. To solve the above two issues, balance the exploration and exploitation abilities of the algorithm, and adjust its search ability, we propose an adaptive strategy based on population density that dynamically adjusts the distance between particles according to the population density of the GSA. That is, when the population density is relatively large, the population is relatively dispersed, so we reduce the distance between particles, promoting information exchange between particles and preventing random movement. When the population density is small, the population is dense, so we increase the distance between particles appropriately to speed up the convergence of the algorithm.
The population density δ is calculated as

δ = median(dis_1, dis_2, ..., dis_N), (11)

where N is the number of particles, D is the dimensionality of the particles, and dis_i is the average distance between the i-th particle and all other particles, calculated as follows:

dis_i = (1 / (N − 1)) Σ_{j=1, j≠i}^N R_ij(t). (12)

The gravitational force calculated in the basic GSA is then modified as follows:

F_ij^k(t) = G(t) · (M_ai(t) × M_aj(t)) / (Rp(δ) + ε) · (x_j^k(t) − x_i^k(t)). (13)

The adaptive distance Rp(δ) (Equation (14)) is computed from Rp_max and Rp_min, the maximum and minimum values of the given fixed distance, and the population density δ.
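A sketch of the population-density computation described above (the median of each particle's average distance to the others). The linear rule in rp_of_delta and its reference density delta_ref are our hypothetical stand-in for the paper's Rp(δ) formula, which is not reproduced here; only its direction (dispersed population gets a smaller distance) follows the text.

```python
import numpy as np

def population_density(X):
    """delta: median over particles of the average distance to all others (Eqs. (11)-(12))."""
    N = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    dis = dist.sum(axis=1) / (N - 1)     # dis_i: average distance to the other N-1
    return float(np.median(dis))

def rp_of_delta(delta, delta_ref, rp_min=0.5, rp_max=1.5):
    # Hypothetical linear rule (NOT the paper's Eq. (14)): a dispersed population
    # (large delta) gets the small distance rp_min to strengthen interactions;
    # a dense population gets rp_max to damp oscillation near the optimum.
    ratio = min(delta / delta_ref, 1.0)
    return rp_max - (rp_max - rp_min) * ratio
```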

Adaptive Gravitational Constant Strategy
The gravitational constant G is an important variable that changes over time. Its value directly affects the magnitude of the resultant force and acceleration, and it determines the current step size and convergence speed of the particles in the algorithm. The reasonable selection of the parameters G_0 and α plays an important role in the size of the iterative steps, determines whether the algorithm can jump out of local optima, and directly affects the achievable solution accuracy. If the original gravitational constant decreases quickly at the beginning of the algorithm, the algorithm converges quickly, but it also tends to fall into local optima from which it is difficult to escape. To improve the exploration ability of the algorithm, prevent it from falling into locally optimal solutions, and improve the accuracy of the solution, an adaptive strategy for the gravitational constant is proposed. The adaptive gravitational constant G (Equation (15)) is expressed in terms of G_0, the initial value of the gravitational constant; α, the decay-rate parameter; T, the total number of iterations; and t_c, a constant value in the interval [0, T).

Adaptive Location Update Strategy
In the basic GSA, a position is updated from the current speed and the position in the last iteration. In each iteration, if the current update speed of a particle is small, the change in its position is also small, the convergence ability of the algorithm is reduced, and the algorithm tends to fall into local extrema. By contrast, if the current update speed is too large, the change in position also increases, and the algorithm can move far from the global optimum. To address these defects, the improved strategy in [3] is adopted in this study so that the particle positions change with the iterative evolution. In the early stages of the algorithm, each particle moves with a large step size so that the algorithm quickly converges to the vicinity of the optimal solution; in the later stages, the update step is smaller, and the particles search deeply near the optimal value. The adaptive position update is given by Equation (16), in which the factors α and β are computed by Equations (17) and (18). Here, dim is the dimension; ω is an integer in the range [1, 50]; T is the current number of iterations of the algorithm; T_max is the maximum number of iterations set for the algorithm; and betarnd is a random number generated from a beta distribution on [0, 1]. The range of α is (0, 1), and the range of β is (0, 1).
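Since the exact expressions for α and β (Equations (17) and (18)) are not reproduced above, the following is only a hypothetical illustration of the stated behavior, a beta-distributed step factor that shrinks as the iterations progress; the shape parameters a and b are our assumptions, not values from the paper:

```python
import numpy as np

def adaptive_step(t, T_max, rng, a=2.0, b=5.0):
    # Hypothetical sketch, NOT the paper's Eqs. (17)-(18): large steps early,
    # small steps late, with beta-distributed randomness ("betarnd").
    betarnd = rng.beta(a, b)            # random draw from a beta distribution on [0, 1]
    return betarnd * (1.0 - t / T_max)  # decays to 0 as t approaches T_max
```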

Basic Concept
To address the low solution accuracy of the conventional GSA and its difficulty in jumping out of locally optimal solutions, the distance between particles, the gravitational constant, and the position update in the GSA are improved using adaptive strategies to strengthen the information interaction between particles and improve the exploration and exploitation capabilities of the algorithm.
The steps of the proposed algorithm are as follows.
Step 1: Initialize the proposed adaptive GSA to generate the initial particle swarm. Set the particle swarm size N, the maximum number of iterations NC_max, the search space dimension X_Dim, the maximum distance Rp_max, the minimum distance Rp_min, the gravitational constant, the attenuation rate, the constant value, and the other parameter values.
Step 2: Check the particle boundaries in the population, and calculate the fitness value of each particle in the population.
Step 3: Determine the best fitness value best(t) and the worst fitness value worst(t) in the population.
Step 4: Obtain the inertial mass M_i(t) of the particles according to best(t), worst(t), and Equation (2).
Step 5: Update the gravitational constant G according to Equation (15).
Step 6: Calculate the average distance between particles and the population density, and then obtain the adaptive particle distance Rp(δ).
Step 7: Calculate the gravitational and resultant forces around the particles according to Equation (13).
Step 8: Calculate the acceleration and velocity of the particles according to Equations (8) and (9).
Step 9: Update the particle positions according to the adaptive position update strategy (Equation (16)).
Step 10: Return to the iteration cycle in step 2 until the number of cycles or the accuracy requirement is met.
Step 11: Exit the loop and output the algorithm results.

Time Complexity Analysis
The time complexity of the algorithm is the time spent executing the algorithm, which is equal to the cumulative number of times the algorithm performs basic operations such as addition, subtraction, multiplication, division, and comparison. Assuming that the particle swarm size of the proposed adaptive GSA is N, the time complexity of the algorithm is analyzed according to the steps of the algorithm execution using the method in [28].
In step 1, the initialization of the particle swarm of the proposed adaptive GSA requires N operations, and the initialization of the other parameters takes constant time; thus, the time complexity of step 1 is O(N).
In step 2, the particle boundary check requires N operations, the fitness calculation requires N operations, and hence, the time complexity of step 2 is O(N) + O(N).
In step 3, it takes one operation to calculate the best fitness value best(t) and one operation to calculate the worst fitness value worst(t), and hence, the time complexity of step 3 is O(1) + O(1).
Calculating the inertial mass M i (t) of particles in step 4 requires N operations; thus, the time complexity of step 4 is O(N).
Updating the gravitational constant G in step 5 requires one operation; thus, the time complexity of step 5 is O(1).
In step 6, calculating the average distances of all particles requires N × (N − 1) operations, calculating the population density requires one operation, and calculating the particle distance requires one operation; hence, the time complexity of step 6 is O(N × (N − 1)) + O(1) + O(1) = O(N²).
In step 7, calculating the gravity of the particles in the population requires N operations, and calculating the resultant force of the particles in the population requires N operations; thus, the time complexity of step 7 is O(N) + O(N).
In step 8, calculating particle acceleration requires N operations, and calculating particle velocity requires N operations, and hence, the time complexity of step 8 is O(N) + O(N).
Updating particle positions in step 9 requires N operations; thus, the time complexity of step 9 is O(N).
In step 10, evaluating the termination condition requires one comparison operation, and terminating the algorithm requires one assignment operation; thus, the time complexity of step 10 is O(1) + O(1).
After the above steps, the proposed adaptive GSA performs NC_max iterations. The time complexity of the proposed adaptive GSA after the maximum number of iterations is therefore O(NC_max × N²).
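The O(N × (N − 1)) pairwise term from step 6 dominates each iteration; a toy count makes the quadratic growth explicit:

```python
def pairwise_ops(N):
    """Count the ordered particle pairs visited in one pass of the distance loops."""
    ops = 0
    for i in range(N):
        for j in range(N):
            if i != j:          # a particle is not paired with itself
                ops += 1
    return ops                  # equals N * (N - 1)
```

Doubling N roughly quadruples the count, consistent with the overall O(NC_max × N²) bound.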

Spatial Complexity Analysis
Space complexity is a measure of the storage space occupied by the algorithm during execution. Assume that the population size of the proposed adaptive GSA is N, the number of iterations of the algorithm is NC_max, and the dimensionality of the optimization function is D. We perform the spatial complexity analysis according to the steps of algorithm execution. In the proposed adaptive GSA, X[N][D] is used to store the values of the initialized independent variables, Y[1][N] stores the fitness values of the initialized function, Xm[1][N] stores the inertial mass values of the population particles, Xd[1][N] stores the average distances between population particles, Xf[N][D] stores the resultant force on each particle in the population, Xa[N][D] stores the acceleration of each particle in the population, and Xv[N][D] stores the velocity of each particle in the population. Therefore, the space complexity of the whole GSA based on the adaptive strategy improvement is 4 × O(N × D) + 3 × O(N) = O(N × D).

Analysis of Algorithm Convergence
The convergence of the proposed adaptive GSA is proven using the contraction mapping theorem. For the relevant concepts and theorems of contraction mapping theory, we refer readers to the definitions in [28].
Theorem 1. As time tends to infinity, the proposed adaptive GSA converges.

Proof:
The state of the proposed adaptive GSA in the optimization process is represented by the set X. The mutual transformation of states in X embodies the whole optimization process of the proposed adaptive GSA; the optimization process is therefore a self-mapping process. Let f be the optimization mapping from X to X, so that X_{k+1} = f(X_k). Let ρ : X × X → R be the distance between two points in the metric space (X, ρ), and let {x_n} be any optimization sequence in (X, ρ).
The proposed adaptive GSA is a continuous iterative process. Under the action of gravity, the individuals in the algorithm attract each other, forcing small-mass individuals to move constantly toward larger-mass individuals to determine the optimal solution X*. Therefore, in the metric space (X, ρ), for any ε > 0, there exists an N such that for all n > N, ρ(x_n, X*) < ε holds. By definition, the optimization sequence {x_n} converges to X*. Moreover, {x_n} is a Cauchy sequence, and (X, ρ) is a complete metric space. Let ε be a random number in the range [0, 1). Since the proposed adaptive GSA is a continuous optimization process in which the individuals constantly approach the optimal value, it is a convergent process. Then ρ(f(x), f(y)) ≤ ε · ρ(x, y) must hold in the metric space (X, ρ), and f is a contraction mapping. According to Theorem 4.2 in [28], x* ∈ X is the unique fixed point of the contraction mapping f. Hence, the proposed adaptive GSA is convergent, and Theorem 1 is proven.

Performance of the Algorithm Improvement Strategies
This section evaluates and analyzes the combinations of the three adaptive improvement strategies in the algorithm: the adaptive population density strategy, the adaptive gravitational constant strategy, and the adaptive location update strategy. For convenience, we refer to the GSA based on the adaptive population density strategy as the RGSA, the GSA based on the adaptive gravitational constant strategy as the GGSA, and the GSA based on the adaptive position strategy as the LGSA. The GSA based on the adaptive population density and gravitational constant strategies is the RGGSA, the GSA based on the adaptive population density and location update strategies is the RLGSA, the GSA based on the adaptive gravitational constant and position update strategies is the GLGSA, and the GSA based on all three strategies (adaptive population density, gravitational constant, and location update) is the RGLGSA. In the experiment, we tested the convergence of the basic GSA and the GSAs with the seven different adaptive strategy combinations on benchmark functions f1 to f15. Table 1 lists the test functions. Among the 15 standard test functions, one function attains the minimum value 0.397887 at the points (−π, 12.275), (π, 2.275), and (9.42478, 2.475), and the other functions attain the minimum value 0 at the point (0, 0, ..., 0). The functions f1, f2, f3, f7, f8, f10, and f13 are unimodal test functions. These functions have only one globally optimal solution and are mainly used to test the solution accuracy and exploitation ability of the algorithm. Functions f4, f5, f6, f9, f11, f12, f14, and f15 are multimodal test functions. These functions have many local extrema, and the GSA is prone to premature convergence or falling into local extrema on them. To determine the optimal values of these test functions, the algorithm must be able to jump out of local extrema, avoid premature convergence, and have a strong global exploration ability.

Data Analysis
We set the initial parameters of the algorithms as follows: for the basic GSA, G_0 was 100 and α was 20; for the RGSA, G_0 was 100, α was 20, t_c was T/4, Rp_max was 1.5, and Rp_min was 0.5; for the GGSA, G_0 was 50 and α was 30; for the LGSA, G_0 was 100, α was 20, and ω was 10. The parameter settings of the other GSAs were consistent with those of the previous three GSAs. The population size for all algorithms was 50, the number of iterations was 1000, and the test dimensions were 30 and 50; functions f6 and f11 were tested in two dimensions, and each algorithm was run independently 30 times.
The following observations can be inferred from the simulation test results in Tables 2-16.
First, the results of the RGSA, GGSA, and LGSA on the test functions are better than those of the GSA. This shows that the three adaptive strategies of population density, gravitational constant, and location update can each effectively improve the performance of the GSA. The detailed analysis is as follows. The result of the GGSA is inferior to those of the RGSA and LGSA but superior to that of the GSA. This shows that although the adaptive gravitational constant strategy is inferior to the population density and location update strategies in improving the performance of the GSA, it still improves performance to a certain extent because it helps to improve the iteration step size and convergence speed. The result of the RGSA is much better than that of the GSA, which indicates that the adaptive population density strategy dynamically adjusts the distance between particles according to the population density during the evolution process and better balances the exploration and exploitation capabilities of the algorithm, thus improving its search capability. The LGSA is not only better than the GSA but also better than the GGSA and RGSA, which shows that the location update strategy plays the largest role in improving the performance of the GSA and reflects that the balance of location and speed between individuals is the key to ensuring the good solution quality of a swarm intelligence algorithm.
Second, the results of the RGGSA, RLGSA, and GLGSA on the test functions are better than those of the RGSA, GGSA, and LGSA. This shows that combining two of the three adaptive strategies proposed in this paper can effectively improve the performance of the single-strategy GSAs. The result of the GLGSA is better than that of the RGGSA but inferior to that of the RLGSA, which indicates that the better single improvement strategy retains a performance advantage within the combined strategies. The test result of the RGLGSA is better than those of the RGGSA, RLGSA, and GLGSA. This shows that the RGLGSA, which combines the three strategies, leverages the advantages of the RGSA, GGSA, and LGSA, making these advantages more effective in the search process and achieving the best solution performance.

Comparison and Analysis of Algorithm Test Results
To fully evaluate the overall performance of the adaptive GSA proposed in this paper (RGLGSA), we selected the following classic and efficient GSA variants for comparison: the weight-based GSA (GSAGJ) [29], the SGSA [18], and the multipoint adaptive constraint-based improved gravitational algorithm (MACGSA) [19]. A comparative analysis of simulation tests was performed using 16 benchmark functions of the CEC2017 benchmark. In the experiment, we tested the convergence of the basic GSA and the four comparison algorithms in 10, 30, and 50 dimensions.

Data Analysis
To prevent errors caused by accidental factors and to ensure objectivity and fairness of the evaluation, in the experiment, the five algorithms were independently run 30 times and were iterated 1000 times in the same environment. The other parameter settings of the algorithms were consistent with those listed in Section 5.1.
The CEC2017 benchmark function simulation results in Tables 18-33 reveal the following.
First, for the unimodal functions f16, f18, f24, and f25, both the RGLGSA and MACGSA have high accuracy. For the same number of dimensions, the optimization accuracy of the RGLGSA proposed in this paper is significantly higher than those of the GSA, GSAGJ, SGSA, and MACGSA. As the number of dimensions increases, the solution accuracy of all five algorithms gradually decreases, but the solution accuracy of the RGLGSA remains higher than those of the GSA, GSAGJ, SGSA, and MACGSA. For function f17, both the RGLGSA and MACGSA have very high accuracy. For the same number of dimensions, the optimization accuracy of the RGLGSA is superior to those of the other four algorithms. As the dimensions increase, the solution accuracies of the GSA, GSAGJ, and SGSA decrease gradually, with an obvious trend, whereas the solution accuracies of the RGLGSA and MACGSA increase gradually; the solution accuracy of the RGLGSA remains higher than those of the other four algorithms.
For the multimodal functions f19, f29, and f30, all five algorithms become trapped in local extrema, with little difference in solution accuracy. For functions f20, f21, f26, f27, and f28, the RGLGSA and MACGSA find the globally optimal solution 0, but the other algorithms cannot. For function f22, when the dimension is 10, the RGLGSA and MACGSA can find the globally optimal solution 0; in higher dimensions, only the RGLGSA can find the globally optimal solution 0. For function f23, when the dimension is 10, the SGSA has the highest precision, and when the dimensions are 30 and 50, the RGLGSA has the highest precision. For function f31, for the same number of dimensions, the optimization accuracy of the RGLGSA is significantly higher than those of the other algorithms. As the dimensions increase, the solution accuracies of the GSA, GSAGJ, and SGSA decrease gradually. When the dimension reaches 50, the solution accuracies of the MACGSA and RGLGSA decrease significantly.
The above comparison and test results reveal that, compared with other classical and efficient improved GSAs, the RGLGSA has a relatively stable overall search ability on both unimodal and multimodal functions, together with high convergence accuracy. Figures 1-16 show the convergence curves of the RGLGSA, the basic GSA, and the comparison GSAs. The convergence curves of the unimodal functions reveal little difference in convergence speed among the five algorithms in the first 500 generations. After 500 generations, the GSA, GSAGJ, and SGSA converge to local extrema and stop searching. The MACGSA and RGLGSA do not fall into local extrema but continue to evolve; however, the RGLGSA converges faster than the MACGSA and obtains a better value. For the multimodal functions f19, f29, and f30, in the first 400 iterations, the convergence speed of the RGLGSA is roughly the same as those of the other four algorithms. Because of the characteristics of the functions and algorithms, after 400 iterations, all five algorithms converge to a local extremum, and the differences between their solutions are not significant. For multimodal functions f20, f21, f27, and f28, the GSA, GSAGJ, and SGSA converge to local extrema after a certain number of iterations and stop searching. The MACGSA and RGLGSA do not fall into local extrema but continue to evolve and find the globally optimal solution 0; however, the RGLGSA converges faster than the MACGSA. For multimodal function f22, in the first 500 iterations, the convergence speed of the RGLGSA is roughly the same as those of the other four algorithms. After 500 generations, the other four algorithms fall into local extrema, whereas the RGLGSA finds the globally optimal solution 0 after about 650 generations.
For multimodal function f31, in the first 300 iterations, the convergence rates of the five algorithms are similar. After 300 generations, the GSA, GSAGJ, and SGSA stop iterating, having fallen into locally optimal solutions. The MACGSA and RGLGSA continue to evolve beyond 600 generations, and finally, the RGLGSA finds a better solution at a faster speed. For multimodal function f23, in 10 dimensions, the GSA and GSAGJ stop iterating after about 400 generations, after which the RGLGSA converges faster than the SGSA and MACGSA. At 30 and 50 dimensions, the convergence accuracies of the five algorithms are not high; the GSAGJ has the fastest convergence speed, followed by the RGLGSA, but the convergence accuracy of the RGLGSA is higher than that of the GSAGJ. For multimodal function f26, the MACGSA converges faster before generation 500, and the RGLGSA converges faster after generation 500, obtaining the optimal value around generation 880. As a whole, the RGLGSA converges faster than the other comparison GSAs.

Curve Analysis
In general, compared with other GSA variants, the proposed GSA based on adaptive strategies for population density, the gravitational constant, and the location update achieves greatly improved optimization accuracy and stability, and performs well in terms of convergence.

Conclusions
In this paper, we proposed an improved GSA based on adaptive strategies. To address the shortcomings of the basic GSA, the algorithm simultaneously introduces adaptive strategies for crowd density, the gravitational constant, and the location update into the basic GSA. Moreover, it dynamically adjusts the distance between particles and the step size of the particle iteration, strengthens the information exchange between particles, and greatly increases the diversity of particles in the population. Therefore, it can effectively overcome the tendency of the basic GSA to fall into local extrema. The simulation results show that the improved algorithm is superior to other classical improved GSAs in terms of search accuracy, convergence speed, stability, and other factors; it is therefore an effective extension of the algorithm.
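To make the mechanism concrete, the following is a minimal sketch of one iteration of a *basic* GSA update (mass assignment, gravitational attraction, velocity and position update). It is not the paper's RGLGSA: the exponential decay `G(t) = g0 * exp(-alpha * t / t_max)` is one common fixed schedule, and the function name `gsa_step` and all parameter names are illustrative. The paper's adaptive strategies would replace the fixed formulas for the gravitational constant, the inter-particle distance term, and the position-update step size.

```python
import numpy as np

def gsa_step(X, fitness, g0, alpha, t, t_max, V, eps=1e-12):
    """One iteration of a basic gravitational search update (minimization).

    X: (n, d) particle positions; fitness: (n,) objective values;
    V: (n, d) current velocities. Returns updated (X, V).
    """
    n, d = X.shape

    # Map fitness to normalized masses: the best particle gets the largest mass.
    best, worst = fitness.min(), fitness.max()
    m = (worst - fitness) / (worst - best + eps)
    M = m / (m.sum() + eps)

    # Gravitational constant with a fixed exponential decay schedule.
    G = g0 * np.exp(-alpha * t / t_max)

    # Accumulate the randomly weighted gravitational pull on each particle.
    A = np.zeros_like(X)
    for i in range(n):
        diff = X - X[i]                      # displacement vectors to all others
        R = np.linalg.norm(diff, axis=1)     # Euclidean distances
        w = G * M / (R + eps)                # attraction strength per unit mass
        w[i] = 0.0                           # no self-attraction
        A[i] = (np.random.rand(n, 1) * w[:, None] * diff).sum(axis=0)

    V = np.random.rand(n, d) * V + A         # stochastic velocity update
    return X + V, V                          # position update
```

Running this loop on a simple objective (e.g. the sphere function) shows the population contracting toward low-fitness regions; the adaptive variants discussed above aim to slow that contraction enough to escape local extrema.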
The No Free Lunch theorem is an important result in the field of optimization research: no optimization algorithm can outperform all others in average performance across all optimization problems. The improved algorithm in this paper is also subject to this limitation, and the experimental analysis cannot fully cover the many possible optimization problems, so the results are not fully conclusive. Future research will focus mainly on improving the overall performance of the algorithm so that it outperforms other algorithms on as many optimization problems as possible.