Particle Swarm Optimization Combined with Inertia-Free Velocity and Direction Search

The particle swarm optimization (PSO) algorithm is a widely used nature-inspired, swarm-based optimization algorithm. However, it suffers from search stagnation when it becomes trapped in a sub-optimal solution. This paper proposes a novel hybrid algorithm (SDPSO) to improve its local search performance. The algorithm merges two strategies into the original PSO: static exploitation (SE, a velocity-updating strategy based on inertia-free velocity) and the direction search (DS) of the Rosenbrock method. With this hybrid, on the one hand, extensive exploration is still maintained by PSO; on the other hand, the SE is responsible for locating a small region, and the DS then further intensifies the search. The SDPSO algorithm was implemented and tested on unconstrained benchmark problems (CEC2014) and some constrained engineering design problems. The performance of SDPSO is compared with that of other optimization algorithms, and the results show that SDPSO is competitive.


Introduction
Optimization problems are difficult to solve and commonly arise in fields such as engineering [1], management [2], aerospace technology [3], and scientific research [4]. Such problems are often multidimensional, complex, and time-consuming, and solving them has always attracted extensive research interest.
Meta-heuristic algorithms, due to their simplicity and flexibility, have been widely used and have developed substantially over the past few decades [5]. The PSO algorithm is a meta-heuristic that simulates the natural tendency of birds to find food, and it is one of the most popular meta-heuristic techniques for large solution spaces with many peaks. Aler et al. [6] made a comprehensive, detailed review of the PSO algorithm.
Although the PSO algorithm converges quickly [7] and has good exploration ability, it is susceptible to premature convergence on complex fitness landscapes [8]. An important cause is the strong randomness of PSO, which can degenerate the optimization process into a half-blind state, leading to poor local search ability and a slower convergence rate.
The randomness is influenced by the velocity of the particles in PSO; however, the velocity is difficult to adjust or control through algorithm parameters. Although there are many methods that adaptively adjust the parameters governing the velocity [9], the adjustment depends only on the iterative process, not on the positions. Thus, some areas near the particles' trajectories may be neglected during the search because the velocity is too fast or too slow. By allowing particles to forget their own flying experience, we let their history velocity no longer take effect, weakening the randomness so that promising local areas can be located. That is to say, we set the inertia weight in the velocity formula to zero and call the search with this inertia-free velocity the static exploitation (SE).

Review of Improving the PSO Algorithm
Many studies have attempted to improve the classical PSO algorithm with a variety of strategies. Frans et al. [12] proposed a cooperative evolutionary framework using multiple swarms to optimize different components of the solution vector cooperatively. Liang et al. [13] introduced a comprehensive learning PSO that can scan larger search spaces and increase the probability of finding the global optimum. Liang et al. [14] divided the entire population into small swarms and regrouped them with information exchange. Sun, L. et al. [15] presented a cooperative particle swarm optimizer with statistical variable interdependence learning. Li et al. [16] generated candidate positions by estimating the distribution information of historical promising personal best positions with an estimation of distribution algorithm. Gülcü et al. [17] adopted the master-slave paradigm for multiple groups to set up a comprehensive learning particle swarm optimizer. Considering multipopulation cooperation, Li et al. [18] constructed learning exemplars for information sharing with a multidimensional comprehensive learning strategy and a dynamic segment-based mean learning strategy. Xu et al. [19] proposed a dimensional learning strategy, in which the personal best position of a particle learns from the global best position dimension by dimension to construct the learning exemplar. Considering neighborhood and historical memory, Li [20] proposed that every particle's position be guided by the experience of its neighbors and its competitors to decrease the risk of premature convergence.
However, velocity update and hybridization with other algorithms are the main strategies used for PSO in this paper; hence, these two aspects are reviewed below.

Modifying the Velocity Update Strategy of Particles
He et al. [21] introduced a biological force into the velocity update equation to preserve swarm integrity. Considering that a bird's velocity is decided by both its current velocity and its acceleration, Zeng [22] introduced acceleration into the PSO algorithm. To avoid premature convergence in the early stages, Chen [23] used sine and cosine functions as the acceleration coefficients of the velocity update equation. Liu et al. [24] argued that frequent velocity updates are not good for the local exploitation of particles, so the velocities of the particles should not always be updated in a continuous way. To restrict the particles to the feasible range, Liu et al. [25] introduced a momentum factor into the position update equation. Modifying the limits on the speeds or positions of the particles may also improve the performance of the PSO algorithm: Stacey et al. [26] proposed a speed limit with re-randomized speeds and a position limit with re-randomized positions. In view of perturbing the velocity, Miao et al. [27] introduced a perturbation into the velocity update of the original PSO.
Although the above studies considered the importance of velocity and partially modified the velocity formula, they did not consider the case in which the history velocities of the particles are ignored. In this paper, we consider this case to let particles exploit promising areas.

The PSO Hybrid Algorithm
Many efforts have been made to overcome the weaknesses of optimization algorithms. Mixing two or more different algorithms to obtain better performance is a common strategy, e.g., hybridizing the differential evolution algorithm with the krill herd algorithm [28]. The purpose of hybridization is to aggregate the advantages of different algorithms to improve the search ability. A good hybrid algorithm should have a reasonable scheme for allocating exploration and exploitation [8].
Evolutionary algorithms may be the most commonly used tools to mix with the PSO algorithm. Genetic algorithms are a class of evolutionary algorithms in which various genetic operators are used. For example, Tawhid et al. [29] combined a genetic arithmetical crossover operator with PSO, and Chen et al. [30] crossed over the personal historical best positions of the particles to produce promising exemplars. Differential evolution is another class of evolutionary algorithm. Parsopoulos et al. [31] used it to control the heuristic parameters of PSO. Zhang et al. [32] applied a differential evolution operator with bell-shaped mutations to PSO to maintain population diversity. Chen et al. [8] evolved the personal best positions in PSO with differential evolution, adopting two differential mutation operators with different characteristics: one with good global exploration ability and another with good local exploitation ability.
A variety of other meta-heuristic algorithms have been applied in hybrid algorithms. Artificial bee colony is a popular meta-heuristic algorithm, and Vitorino et al. [7] utilized it to produce population diversity when particles fall into a local optimum. Cuckoo search is also a swarm intelligence algorithm, and Ibrahim et al. [33] incorporated it into PSO. Huang et al. [34] incorporated continuous ant colony optimization into PSO and presented four types of hybridization. Additionally, Javidrad et al. [35] presented a PSO algorithm hybridized with simulated annealing, which is utilized as a local search to improve the convergence behavior of PSO.
On the other hand, non-meta-heuristic algorithms have also been mixed with PSO to generate new hybrid algorithms. Luo et al. [36] embedded the gradient descent direction in the velocity update formula of PSO to achieve faster convergence. Wang et al. [37] used two phases: attaining feasible local minima by the gradient descent method, and then escaping from the local minimum with the help of PSO. Salajegheh et al. [38] combined first- and second-order gradient directions with the PSO algorithm to promote performance and reliability. Derivative-free deterministic approaches are also competitive and should receive more attention [39]. Direct search is an important class of methods for solving optimization problems that require no information about the gradient of the objective function; it mainly includes the pattern search algorithm [40], the Rosenbrock method [10], and the mesh adaptive search algorithm [41]. Liu et al. [42] proposed a line search to optimize the step size along the velocity direction of a PSO particle. Fan et al. [43] applied the Nelder-Mead simplex search operator to the top few elite particles and applied PSO to update the particles with the worst objective function values. El-Wakeel and Smith [44] also introduced the simplex search into PSO: they first applied PSO to locate the interval likely to contain the global minimum, and then used the PSO solution as a starting solution for the simplex. The PSO algorithm is responsible for avoiding local minima, whereas the simplex algorithm is responsible for avoiding slow, near-optimum convergence. Tawhid [29] also applied the simplex method as a local search to accelerate convergence when no improvement is found in the final stage.
Among the above non-meta-heuristic algorithms, the Rosenbrock method is a derivative-free deterministic direct search approach. Kang et al. [45] combined the rotational direction of the Rosenbrock method with the artificial bee colony algorithm, in which the bee colony is utilized in the exploration phase while the direction rotation appears in the exploitation phase. In this paper, we make use of the direction search of the Rosenbrock method and ignore the direction rotation.

Static Exploitation: Searching with Inertia-Free Velocity from the Original PSO
The original PSO is a simulator of the social behavior embodied in the movement of a bird flock. PSO converges quickly due to its inherent parallelism. Each particle moves towards its own best previous position and towards the best particle in the entire swarm. Suppose that the search space is D-dimensional and a swarm of N particles searches in it (i.e., the i-th particle is a D-dimensional vector). In the current work, we consider the global version of PSO, in which the best position ever attained by any individual of the swarm is communicated to all particles at each iteration. The i-th particle updates its position according to the following equations:

v_i^(n+1) = ω · v_i^n + c_1 · rand_1^n · (pbest_i^n − x_i^n) + c_2 · rand_2^n · (gbest^n − x_i^n), (1)

x_i^(n+1) = x_i^n + v_i^(n+1), (2)

where v_i^n represents the velocity of the i-th particle; pbest_i^n represents the best previous position of the i-th particle; i = 1, 2, . . . , N, where N is the size of the swarm; n = 1, 2, . . . is the generation number; gbest^n represents the best previous position of the population; c_1 and c_2 are the acceleration constants; rand_1^n and rand_2^n are two random numbers in the range [0, 1]; and ω is the inertia weight. The velocity update formula (1) consists of three parts: part 1 is the inertia velocity of the particle, which may balance the global and local search ability; part 2 is the cognitive part, which indicates that the particles think for themselves and gives them a sufficiently strong global search ability; part 3 is the social part, which represents information sharing and cooperation between the particles. Together, the three parts determine the space search ability of the particle.
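As a minimal illustration of Equations (1) and (2), the update for a single particle can be sketched in Python. The function name, parameter values, and use of one random number per cognitive/social term are illustrative conventions, not taken from the paper:

```python
import random

def pso_update(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One velocity/position update for a single particle, Equations (1)-(2)."""
    r1, r2 = random.random(), random.random()   # rand_1, rand_2 in [0, 1]
    # Eq. (1): inertia part + cognitive part + social part
    new_v = [w * vd + c1 * r1 * (pb - xd) + c2 * r2 * (gb - xd)
             for xd, vd, pb, gb in zip(x, v, pbest, gbest)]
    # Eq. (2): move the particle by its new velocity
    new_x = [xd + vd for xd, vd in zip(x, new_v)]
    return new_x, new_v
```

Note that with ω = 0 the inertia part vanishes, which is exactly the inertia-free case exploited by the SE stage described below.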
The above formulas are the general method for updating a particle's position in the original PSO. In part 1 of Equation (1), the inertia weight ω may control the flying velocity and balance the global and local search [16]. A large weight encourages global search, while a small one enhances local search [46]. However, there does not seem to be a precise value of ω that achieves this balance exactly.
If the inertia term is cancelled, i.e., ω = 0, then the particle's velocity of the previous generation (the history velocity) no longer takes effect. That is to say, the kinetic energy of the particle is greatly reduced after it reaches a point x_i^S. At this time, the particle is controlled only by the previous best positions of itself and of the population (i.e., pbest_i^n and gbest^n), and its velocity becomes

v_i^(n,t) = c_1 · rand_1^n · (pbest_i^n − x_i^n) + c_2 · rand_2^n · (gbest^n − x_i^n), t = 1, 2, . . . , T. (3)
Equation (3) suggests that the particles fly without inertia; thus, we call this process "static exploitation" (SE). Here v_i^(n,t) denotes the velocity gained at the t-th SE trial of the i-th particle at generation n. Because of the decreased kinetic energy, the particles exploit a few positions in a small scope around the point x_i^S. Let T be the maximum number of exploitations.
The corresponding trial position is

x_i^(n,t) = x_i^S + v_i^(n,t), t = 1, 2, . . . , T, (4)

where x_i^(n,t) denotes the position gained at the t-th SE trial of the i-th particle at generation n, and x_i^S = x_i^(n+1) is the exploiting center. The best point x_i^(D0) is then selected from the T exploitations. According to our experiments, excessive exploiting times do not yield a better effect, so we generally let T = 10.
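A minimal sketch of the SE stage might look as follows. For simplicity the trial velocities here are computed relative to the exploiting center, and the feasibility check described later in the paper is omitted; the function name and parameter values are illustrative:

```python
import random

def static_exploitation(f, x_center, pbest, gbest, T=10, c1=2.0, c2=2.0):
    """T inertia-free trials (Eqs. (3)-(4)) around x_center; returns the best
    trial point, intended as the start point of the following DS stage."""
    best_x, best_f = None, float("inf")
    for _ in range(T):
        # inertia-free velocity: cognitive part + social part only
        v = [c1 * random.random() * (pb - xc) + c2 * random.random() * (gb - xc)
             for xc, pb, gb in zip(x_center, pbest, gbest)]
        trial = [xc + vd for xc, vd in zip(x_center, v)]
        ft = f(trial)
        if ft < best_f:
            best_x, best_f = trial, ft
    return best_x, best_f
```

Because pbest and gbest anchor both terms, the trials cluster in a small region around the center, matching the "small scope" behavior described above.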
When the particles move without inertia, the attraction of the individual best historical position and the global best position is strengthened and excessive velocity is flattened, which means that the search of a local area is intensified.

The Rosenbrock Method
The Rosenbrock method [10,47] is a gradient-free direct search minimization method based on orthogonal search directions.
Like other direct search methods, the Rosenbrock method constructs a promising descent direction and has surprisingly sound heuristics; it often avoids the pitfalls that plague more sophisticated approaches [48]. The Rosenbrock method can take advantage of a nonzero step and, in particular, of a promising descent direction along which the next search stage is conducted, which makes it apt for searching heuristically at the bottom of a valley.
Rosenbrock's search works through two main procedures. One is the direction search (DS), an exploration by discrete steps along an orthogonal set of n direction vectors; the other is the generation of a new search direction set by rotating the direction set, using Gram-Schmidt orthonormalization.
The flow chart of the Rosenbrock method is shown in Figure 1, where x_i^S is the initial point of the Rosenbrock search. The DS procedure involves the round-search and the loop-search. A "round" comprises D searches along the D coordinate axes; y^(0) and y^(D) denote the start and end points of the round, and y^(j) denotes the point attained on the j-th axis, j ∈ {1, 2, . . . , D}. The search always proceeds along the coordinate axes, which serve as the search directions, and cycles until it fails to find a better point; thus, a "loop-search" consists of one or more consecutive round-searches (Figure 2). After a loop-search, there are two choices: proceed with another loop-search or rotate the coordinate axes. The rotation happens (i.e., a new set of search directions is formed) when at least one success was obtained during the loop-search, i.e., at least one point attained by a round-search is better than the starting point of the loop-search. The orthonormal basis is usually updated by the Gram-Schmidt procedure. Let z^(k) (k ∈ {1, 2, . . .}) be the end point of the k-th loop-search.

The Procedure of the Round-Search
The directions for the round-search are the coordinate directions of the D-dimensional coordinate system, i.e., the orthonormal basis d^(j) (j = 1, 2, . . . , D), a group of orthogonal axis directions. Each direction d^(j) is a vector of zeros, except for a unit entry 1.0 in the j-th component.
A round-search starts from y^(0) in direction d^(1), and then attains a new point y^(j) in direction d^(j) with step size δ_j, until reaching the end point y^(D) of the round-search in the last direction d^(D). The round-search along the coordinate axes is defined in Algorithm 1.

Algorithm 1: The round-search procedure.
1: j = 1;
2: while (j ≤ D && δ_j > ε) do
3: {
4:   y = y^(j−1) + δ_j · d^(j);
5:   if f(y) < f(y^(j−1)), let y^(j) = y and δ_j = α · δ_j; //trial step is successful
6:   else let y^(j) = y^(j−1) and δ_j = β · δ_j; //trial step is not successful
7:   j = j + 1;
8: }

Here, α is the expansion factor (α > 1), which increases the step size in a direction when a successful point is found, and β is the contraction factor (β ∈ (−1, 0)), which decreases the step size and reverses the search direction when no better point is found in that direction. With the expansion and contraction of the step size, valuable points are attained.
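A direct Python transcription of the round-search can be sketched as follows. The step sizes δ_j are mutated in place so that successive rounds inherit them; the α, β, and ε values are illustrative:

```python
def round_search(f, y0, deltas, alpha=2.0, beta=-0.5, eps=1e-8):
    """One round of the direction search: try one step along each coordinate
    axis; enlarge the step on success, shrink and reverse it on failure."""
    y = list(y0)
    for j in range(len(y)):
        if abs(deltas[j]) <= eps:      # termination tolerance on the step size
            break
        trial = list(y)
        trial[j] += deltas[j]          # y = y^(j-1) + delta_j * d^(j)
        if f(trial) < f(y):            # trial step is successful
            y = trial
            deltas[j] *= alpha         # expand the step
        else:                          # trial step is not successful
            deltas[j] *= beta          # shrink and reverse the step
    return y
```

Because β is negative, a failed direction is retried in the opposite sense on the next round, which is what lets the search feel its way along a valley floor.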

Direction Search (DS, Rosenbrock Procedure without Coordinate Rotation)
The Rosenbrock method is extremely sensitive to the initial point and, in our experiments, easily gets stuck in local minima on many problems; moreover, its iterative orthonormalization (rotation) procedure is time-consuming and increases the time complexity [49]. However, through the orthogonal search directions (the zig-zags in Figure 3), the search can move along a ridge towards the optimum, adapting itself to the local terrain. Simple and heuristic, it performs well on the sharp ridges of functions [11]. We therefore retain only the direction search. Algorithm 2 (the DS procedure) starts from an initial point and repeats round-searches; it ends when a point better than the start point of the loop-search is found, or when every δ_j is less than a given small value (the termination tolerance) ε; otherwise, the loop-searches composed of round-searches continue.
The above procedure does not involve the coordinate rotation for new directions, avoiding the low efficiency of the Rosenbrock method. Moreover, it may lay the foundation of a stable solution because it is a deterministic, direct process. Hence, we take the DS procedure from the above Rosenbrock method, disregarding the coordinate rotation, as a simple component of our hybrid system to intensify the local search ability of PSO. This procedure is listed in Algorithm 2, where x_i^(R) denotes the end point of the procedure.
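A minimal sketch of the DS procedure, under one possible reading of the termination rule (stop as soon as the start point has been improved, or when all step sizes fall below ε); the parameter values are illustrative:

```python
def direction_search(f, x0, delta0=0.5, alpha=2.0, beta=-0.5, eps=1e-6):
    """DS: repeated round-searches along the coordinate axes, without the
    coordinate-rotation step of the full Rosenbrock method."""
    x = list(x0)
    deltas = [delta0] * len(x)
    f0 = f(x0)
    # continue until the start point is improved or all steps are tiny
    while f(x) >= f0 and max(abs(d) for d in deltas) > eps:
        # one round-search along the D coordinate axes
        for j in range(len(x)):
            trial = list(x)
            trial[j] += deltas[j]
            if f(trial) < f(x):
                x = trial
                deltas[j] *= alpha     # success: expand the step
            else:
                deltas[j] *= beta      # failure: shrink and reverse the step
    return x                           # the end point x_i^(R)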

Proposed Hybrid Particle Swarm Optimization Algorithm
The original PSO tends to present its search advantages over a broad area through its velocity. Two additional search stages, the SE and the DS, then follow the original PSO search to exploit a local area: the SE locates a small search area, and the DS further searches within it.
Hence, the SE and the DS are incorporated into the original PSO. The SE is, in fact, PSO with inertia-free velocity; thus, the algorithm is named the hybrid PSO algorithm with inertia-free velocity and the DS (SDPSO for short).

The Procedure of the SDPSO
The procedure of the proposed SDPSO is outlined in Figure 4. The i-th particle is taken as an example to express the process:
Stage 1 (original PSO search): The search of the original PSO is first executed with Equations (1) and (2) to update the velocity and position of each particle in each cycle.
Stage 2 (static exploitation, SE): If the i-th particle has flown into infeasible solution space or into convergence stagnation, it immediately stops flying, returns to its last feasible position x_i^S, and takes it as the center to exploit several trial points via Equations (3) and (4). If one or more feasible solutions are found, the best of them is selected and set as x_i^(D0), the start point of the following DS.
Stage 3 (direction search, DS): Starting from x_i^(D0), a new point is searched for through the DS; then pbest_i^n and gbest^n are updated.
The SE and the DS are two strengthening search stages: the former exploits a micro-area, while the latter performs a further local search in order to obtain a better solution.
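Putting the three stages together, a heavily simplified, illustrative Python sketch of the SDPSO loop follows. For brevity the SE and DS run for every particle at every generation (the paper triggers them only on infeasibility or stagnation), and all names and parameter values are illustrative:

```python
import random

def sphere(x):
    """Illustrative objective: f(x) = sum of squares, minimum 0 at the origin."""
    return sum(xd * xd for xd in x)

def sdpso(f, dim=2, n_particles=10, n_gens=10, w=0.3, c1=2.0, c2=2.0,
          T=10, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in X]
    gbest = min(pbest, key=f)
    for _ in range(n_gens):
        for i in range(n_particles):
            # Stage 1: original PSO update (Equations (1) and (2))
            r1, r2 = rng.random(), rng.random()
            for d in range(dim):
                V[i][d] = (w * V[i][d] + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            # Stage 2: static exploitation, T inertia-free trials (Eqs. (3)-(4))
            best = list(X[i])
            for _ in range(T):
                trial = [X[i][d] + c1 * rng.random() * (pbest[i][d] - X[i][d])
                         + c2 * rng.random() * (gbest[d] - X[i][d])
                         for d in range(dim)]
                if f(trial) < f(best):
                    best = trial
            # Stage 3: direction search (round-searches, no coordinate rotation)
            deltas = [0.1] * dim
            while max(abs(s) for s in deltas) > 1e-4:
                for d in range(dim):
                    trial = list(best)
                    trial[d] += deltas[d]
                    if f(trial) < f(best):
                        best, deltas[d] = trial, deltas[d] * 2.0
                    else:
                        deltas[d] *= -0.5
            # update the memories with the exploited point
            if f(best) < f(pbest[i]):
                pbest[i] = best
            if f(best) < f(gbest):
                gbest = best
    return gbest
```

On the convex sphere function this sketch drives gbest very close to the optimum; in the actual SDPSO, triggering the SE and DS only on infeasibility or stagnation keeps the cost of the extra stages low.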
The following simple example illustrates the search procedure of the SDPSO algorithm on a two-dimensional (D = 2) problem whose optimal solution is (2, 1). In Figure 5, x_1^1 and x_1^2 denote the solutions found at the 1st and 2nd generations of particle 1 with the original PSO; x_2^1 and x_2^2 those of particle 2; x_1^S and x_2^S the center points of particles 1 and 2 with the SE; and the remaining marked points the points exploited by particle 1 with the DS. There are two particles in Figure 5. Initialization produces x_1^1 (0.716, 0.059) and x_2^1 (2.179, 1.092). Stage 1 (see Figure 5a): the two particles reach x_1^2 (2.642, −0.277) and x_2^2 (1.441, 0.125) by the original PSO. Stage 2 (see Figure 5a): for particle 1, because x_1^2 is not a feasible solution, the SE performs several random exploitations centered on x_1^S (i.e., the last feasible point x_1^1). Five points are exploited and the best of them is selected, after which pbest_1 or gbest is updated.
The above process describes the first generation; subsequent generations proceed in the same way.

The Joint Roles of the Three Stages
The three stages play different roles during the search process. Stage 1 maintains the diversity of the original PSO through a wide-ranging search; Stage 2 (the SE) locates a small local region with a few trials; and Stage 3 (the DS) probes further, near and along ridges, into a micro-region to improve the solution.
The original PSO contributes more to the global search, while the other two stages focus on local searches. Furthermore, the DS enforces solution stability through its non-stochastic behavior. Thus, with the three components, the SDPSO algorithm achieves a comprehensive, joint effect.

Experimental Study on Unconstrained and Constrained Optimization Problems
Several experiments on problems from the optimization literature are used to evaluate the approach proposed in Section 5. These experiments include unconstrained benchmark problems (CEC2014) and some constrained engineering design problems, whose solutions obtained by other techniques are available for comparison and evaluation.
To handle constraints in the experiments on constrained problems, the penalty function method is applied to infeasible solutions: a very high penalty value is added to the objective function, and the value 1.0 × 10^20 was empirically chosen for each experiment on constrained problems. The problems are listed in Appendix A.
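A minimal sketch of this death-penalty scheme, assuming the constraints are expressed in the standard g(x) ≤ 0 form (the wrapper name and interface are illustrative, not from the paper):

```python
PENALTY = 1.0e20  # the paper's empirically chosen penalty value

def penalized(f, constraints):
    """Wrap an objective so that any point violating a constraint g(x) <= 0
    receives the objective value plus a very large penalty."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + (PENALTY if violation > 0.0 else 0.0)
    return wrapped
```

Because the penalty dwarfs any realistic objective value, infeasible points can never displace a feasible pbest or gbest, which is the intended effect.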

Experimental Study on Unconstrained Benchmark Problems (CEC2014)
In this section, to check the performance of the SDPSO algorithm on unconstrained benchmark problems, 30 benchmark functions from CEC 2014 are chosen. These functions are shifted or rotated, and most of them are both shifted and rotated. They are complex and suitable for evaluating the characteristics of the tested algorithms on various problems. The descriptions of these benchmark functions are listed in Table 1.
The parameters of SDPSO are set as: c_1 = 2.0, c_2 = 2.0, ω = 0.3, α = 2.0, β = −0.6. Following the special session at CEC 2014, we set a maximum of 300,000 function evaluations (FES) for the 30-D problems. The dimension of the functions is set to 30 and the number of particles is 100. The mean error value f(x) − f(x*) is used to evaluate the success of the algorithms; the best mean error value is shown in bold. In addition, the Wilcoxon rank-sum test [50] is used in this study to compare the mean errors obtained by SDPSO and the other algorithms at the 0.05 level of significance. The statistical summary is given in the last three rows of the tables: "−" indicates that a compared algorithm performs worse than SDPSO, "+" that it performs better, and "≈" that the two are not significantly different.
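As a rough illustration of the rank-sum comparison, a minimal implementation using the large-sample normal approximation can be sketched as follows (tie correction is omitted; this is not the exact procedure used in the paper, which would typically rely on a statistics package):

```python
import math

def rank_sum_test(a, b):
    """Wilcoxon rank-sum test via the normal approximation: returns the z
    statistic and a two-sided p-value for samples a and b (no tie handling)."""
    n1, n2 = len(a), len(b)
    pooled = sorted((v, 0 if i < n1 else 1) for i, v in enumerate(list(a) + list(b)))
    # sum of the ranks (1-based) belonging to sample a
    r1 = sum(rank + 1 for rank, (_, src) in enumerate(pooled) if src == 0)
    mu = n1 * (n1 + n2 + 1) / 2.0                       # mean rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)   # its standard deviation
    z = (r1 - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))                # two-sided p-value
    return z, p
```

A p-value below 0.05 then corresponds to the "+" or "−" entries in the comparison tables, with the sign of z indicating which sample has the smaller errors.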

Comparison 1: SDPSO and Five Standard Algorithms
In this section, 30 benchmark functions with 30 dimensions from CEC 2014 shown in Table 1 are employed for the comparison of SDPSO and five standard algorithms: PSO [51], GA [52], ABC [53], BBO [54] and SA [55]. For each test function, the mean error and standard deviation are calculated over 30 independent runs of each algorithm with FES = 300,000. Comparison results are listed in Table 2.
As shown in Table 2, for unimodal functions (f1-f3), SDPSO shows competitive performance compared with five standard algorithms. For multimodal functions (f4-f16), SDPSO shows better comprehensive performance than PSO, ABC and SA, but overall, BBO performs best. For f4, f11 and f16, the performance of SDPSO closely follows the best performance of GA and BBO. For hybrid functions (f17-f22), SDPSO displays better performance than the other algorithms. For f23, the performance of SDPSO is only worse than that of GA. For f24, the performance of SDPSO is only worse than that of PSO. Finally, based on the statistical results of Wilcoxon rank-sum test, it can be concluded that SDPSO is a highly competitive metaheuristic algorithm variant compared with the five standard algorithms.
Besides, Figure 6 shows typical convergence curves of SDPSO and PSO for a subset of the 30-dimensional CEC 2014 benchmark functions. It can be seen that SDPSO effectively accelerates convergence even though it does not show an obvious advantage in early iterations. This implies that the hybrid system helps the search jump out of local minima and that SDPSO has a competitive ability to find the global optimum.

Comparison 2: SDPSO and PSO Variants
In this section, SDPSO is compared with three PSO variants (CLPSO [13], APSO [56] and OLPSO [57]). Their mean errors and standard deviations over 30 independent runs are taken from [58] and presented in Table 3. To make a fair comparison, SDPSO is also run 30 times on each test function, with FES = 300,000 for all algorithms. As shown in Table 3, for unimodal functions (f1-f3) and composition functions (f23-f30), SDPSO is fully superior to the other three algorithms except for very few results on f2, f24 and f27, even though the composition functions have different properties for different variable subcomponents.
The Shifted and Rotated Rosenbrock's Function (f4) has a very narrow valley from the local optimum to the global optimum [59], yet SDPSO solves it better than the others, which suggests that the DS helps the search jump out of local optimal regions. However, on the simple multimodal functions f6-f12, SDPSO is the worst of the four PSOs; a possible reason is that these functions have a huge number of local optima and the second-best local optimum is far from the global optimum [59].
For the 30-dimensional functions, SDPSO outperforms the others on a majority of the functions. Further, to check the performance of SDPSO on higher-dimensional functions, the dimension of the functions is set to 100, and the results are compared with other state-of-the-art PSO variants: Switch-PSO [60], S-PSO [61], AIW-PSO [62] and DLI-PSO [63].
The parameters of SDPSO are set as: c_1 = 2.0, c_2 = 2.0, ω = 0.3, α = 2.0, β = −0.6, and the number of particles is 100. For the sake of fairness, the same maximum FES (200,000) as for Switch-PSO, S-PSO, AIW-PSO and DLI-PSO is used for each CEC 2014 function. The average performances of the five algorithms are tabulated in Table 4.
As shown in Table 4, over the 30 functions tested, SDPSO is fully superior to the DLI-PSO algorithm on all functions and to the other three algorithms on 26 functions. Thus, the performance of SDPSO in higher dimensions is outstanding in comparison with the other state-of-the-art PSO variants, and it has the potential to solve higher-dimensional problems. Overall, SDPSO is observed to be a good algorithm for solving the unconstrained benchmark problems.

Experimental Study on Constrained Engineering Design Problems
To verify the performance of SDPSO on constrained optimization problems, the experimental results on a few design optimization problems are compared with those of some state-of-the-art algorithms.
The parameters of SDPSO are set as: c_1 = 2.0, c_2 = 2.0, ω = 0.5, α = 3.0, β = −0.5, and the number of particles is 50. Each experiment is independently run 100 times for all the compared algorithms. The number of function evaluations (FES) differs between problems, since the comparative results come from different publications with different FES.
Pressure Vessel Design Optimization Problem (Problem 1)
Because the results of the compared algorithms come from different publications, the corresponding FES differ. For the SDPSO, the results at 20,000 FES and at 42,100 FES are therefore recorded, and the comparisons are summarized at matching FES in Tables 5 and 6, respectively. From Table 5, all the statistical results (the best, mean, and worst objective function values) of the SDPSO algorithm are much better than those of the other algorithms. The solution corresponding to the best value 5885.902 is (0.778464074, 0.384810342, 40.33477797, 199.7890799).
When FES = 42,100 (Table 6), SDPSO also achieves the best performance in terms of the best, mean, and worst values. SDPSO not only finds a new global solution for the problem, but its best, mean, and worst values are also much smaller than those of the other algorithms. Hence, it can be concluded that SDPSO is more efficient than the other algorithms for the pressure vessel design problem; the optimal solution corresponding to the best value 5885.378 is (0.778177268, 0.384652711, 40.31982465, 199.9971357).

Speed Reducer Design Optimization Problem (Problem 2)
This problem has been solved by the society and civilization method (SCM) [75], the accelerating adaptive trade-off model (AATM) [76], differential evolution with level comparison (DELC) [77], the multi-view differential evolution algorithm (MVDE) [78], and passing vehicle search (PVS) [70]. It is solved by SDPSO with 30,000 FES, and the results are shown in Table 7. As Table 7 shows, the best value (2994.471067) found by SDPSO is very close to the best value (2994.471066) found by DELC, MVDE, and PVS. For the mean, the value 2994.471081 found by SDPSO ranks second, only slightly worse than the best mean (2994.471066). For the worst value, SDPSO also ranks among the best three algorithms.

Spring Design Optimization Problem (Problem 3)
For this problem, SDPSO is compared with 11 different algorithms. Because the comparative results come from different studies, their FES differ; the results are therefore listed in two tables according to FES, with the results of SDPSO given at 20,000 and 42,100 FES (see Tables 8 and 9). For the best value, SDPSO finds it within 20,000 FES, as do the other algorithms (Table 8). For the mean value, SDPSO falls into the top category at 42,100 FES (Table 9), though it is not the best at 20,000 FES. For the worst value at 42,100 FES (Table 9), its result of 0.012668 is only slightly larger than the first-place value (0.012665).

Welded Beam Design Problem (Problem 4)
For the welded beam design problem, the results of SDPSO are compared with those of the following four algorithms: the society and civilization model (SCM) [75], a hybrid real-parameter genetic algorithm (ARSAGA) [79], differential evolution with dynamic stochastic selection (DSS-MDE) [80], and a multiagent evolutionary optimization algorithm (RAER) [81]. The comparison results are shown in Table 10. Although the mean and worst values of SDPSO are not as good as those of the other algorithms in the table, the best value it finds (2.381017466) ranks second and is very close to the best result (2.38095658) attained by DSS-MDE.

Three-Bar Truss Design Problem (Problem 5)
For this problem, the results of SDPSO are compared with those of 7 algorithms. From Table 11, the best value of SDPSO nearly reaches the overall best, and its mean and worst values rank second, close to the best results of the other algorithms, even though its performance is not outstanding among the comparative algorithms. Based on the above comparisons, it can be concluded that SDPSO remains competitive in solving constrained engineering design problems.

Parameters Study of SDPSO
Shi and Eberhart [82] found that the original PSO algorithm performs better with the inertia weight ω in the range [0.9, 1.2] and the acceleration factors c1 and c2 fixed at 2.0 in early experiments [51,83]. Later studies analyzed the relationship between the parameters and, for convergence, suggested c1 = c2 = 1.49445 and ω = 0.729 [84]. However, SDPSO adopts a different strategy from the original PSO, so ω may have a different influence. The parameters should therefore be selected by experiment for good performance. In this paper, seven traditional unconstrained benchmark test functions [23] (Table 12) were adopted to find a suitable range of ω, c1, and c2 for stable performance. The moderate values α = 3.0 and β = −0.5 are taken from [10] and used in SDPSO.
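For context, the classical PSO velocity update in which ω, c1, and c2 appear can be sketched as follows. This is a minimal one-dimensional illustration using the convergence settings from [84]; the function and variable names are ours, not from the paper:

```python
import random

def pso_velocity_update(v, x, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445):
    """Classical PSO velocity update for one particle, one dimension.

    v      -- current velocity
    x      -- current position
    pbest  -- particle's personal best position
    gbest  -- swarm's global best position
    w      -- inertia weight (balances global vs. local search)
    c1, c2 -- cognitive and social acceleration factors
    """
    r1, r2 = random.random(), random.random()
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

# One update step: the new position follows the new velocity.
v_new = pso_velocity_update(v=0.5, x=1.0, pbest=0.8, gbest=0.0)
x_new = 1.0 + v_new
```

The ω·v inertia term is what the SE strategy of SDPSO removes, which motivates studying whether the usual guidance on ω still applies.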


Impacts of Inertia Weight on SDPSO
To test the impact of the inertia weight ω, it is varied from 0.1 to 2.0 in steps of 0.05, with c1 = 2.0, c2 = 2.0, α = 3.0, β = −0.5, and a swarm size of 100. Each run terminates when the known solution is reached within an error of 0.00001.
The average function evaluation count for different ω is drawn in Figure 7. In the figure, the best ω generally lies between 0.1 and 0.6; for higher values, the performance of the algorithm degrades. This differs from the original PSO, where the inertia weight ω is usually larger [84]. Shi and Eberhart [82] stated that the inertia weight balances the local and global search abilities: a small inertia weight facilitates local search, while a large one facilitates global search. Therefore, the lower values of ω in SDPSO intensify local search, which may be due to the introduction of the SE and the DS.

Parameters c1 and c2
To find a reasonable range of the parameters c1 and c2 for SDPSO, c1 is fixed and the range of c2 is tested in this experiment. When the percentage of success (the ratio of the number of successful runs to the total number of runs) is 100%, the value of c2 is recorded. The results are given in Tables 13 and 14 (swarm size 100, ω = 0.3, α = 3.0, β = −0.5). Note: (1) the value before the "/" is c2, and the value after it is the number of cycles needed to reach the optimal result; (2) the operation time becomes relatively long when c1 is greater than 2.0, and it is difficult to obtain an optimal solution.
From Tables 13 and 14, in most cases the success rate reaches 100% when the acceleration factors satisfy c1 ∈ (0.1, 2.0) and c2 ∈ (0.1, 4.0). This wide parameter range indicates that SDPSO is not sensitive to these parameters.
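The percentage of success used in this experiment can be computed as in the following small sketch (illustrative only; the function name and the error list are ours, and the tolerance matches the 0.00001 threshold used above):

```python
def success_rate(errors, tol=1e-5):
    """Percentage of runs whose final error is within the tolerance.

    errors -- list of |f_found - f_known| values, one per independent run
    tol    -- success threshold (an error of 0.00001, as in the experiments)
    """
    successes = sum(1 for e in errors if e <= tol)
    return 100.0 * successes / len(errors)

# Example: 3 of 4 runs reach the known optimum within tolerance.
rate = success_rate([2e-6, 9e-6, 3e-4, 0.0])
# rate == 75.0
```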
For these simple functions, one cycle is enough for most of them to reach the optimal value, which implies that the SE and the DS (rather than the PSO process) play the main roles.

Conclusions
Although PSO has a good exploration ability and provides fast convergence, it suffers from search stagnation when trapped in a sub-optimal solution. To improve the local search ability of PSO, a novel hybrid algorithm (SDPSO) is proposed in this paper.
The SDPSO combines the SE (a search with inertia-free velocity) and the DS (a component of the Rosenbrock method) with the original PSO. It thus maintains the global exploration ability of the PSO algorithm, while the local search ability is reinforced by the union of the SE and the DS. In the SE, the inertia weight is removed from the velocity formula to balance the flight speed of a particle and reduce randomness. In the DS, moving near and along the ridge of a function enhances the local search ability.
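The two local-search components can be sketched as follows, under the assumption that the SE simply drops the ω·v inertia term from the velocity formula and that the DS behaves like a simplified Rosenbrock coordinate search with expansion factor α = 3.0 and contraction factor β = −0.5. This is an illustrative sketch, not the paper's exact implementation, and it omits the surrounding PSO loop:

```python
import random

def se_velocity(x, pbest, gbest, c1=2.0, c2=2.0):
    """Inertia-free (SE) velocity: the w*v term is removed, so the
    particle is pulled only toward pbest and gbest, which damps the
    flight speed and reduces randomness near a promising region."""
    r1, r2 = random.random(), random.random()
    return c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

def direction_search(f, x, step=0.1, beta=-0.5, alpha=3.0, iters=20):
    """Simplified Rosenbrock-style direction search: probe each axis;
    on success keep the move and enlarge the step by alpha, on failure
    undo the move and reverse/shrink the step by beta."""
    x = list(x)
    steps = [step] * len(x)
    fx = f(x)
    for _ in range(iters):
        for i in range(len(x)):
            x[i] += steps[i]
            fy = f(x)
            if fy < fx:            # success: keep move, expand step
                fx = fy
                steps[i] *= alpha
            else:                  # failure: undo move, reverse and shrink
                x[i] -= steps[i]
                steps[i] *= beta
    return x, fx

# Usage sketch: refine a point on the sphere function.
sphere = lambda p: p[0] ** 2 + p[1] ** 2
x_opt, f_opt = direction_search(sphere, [1.0, 1.0])
```

In SDPSO the DS is applied only after the SE has located a small promising region, so in practice the search above starts from a near-optimal point rather than an arbitrary one.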
In this paper, SDPSO is evaluated on 30 unconstrained benchmark functions from CEC 2014 and several constrained engineering design optimization problems. Its performance is compared with that of other evolutionary algorithms and PSO variants, and the results show that SDPSO performs well overall on both the unconstrained benchmark functions and the constrained problems.