## 1. Introduction

The development of evolutionary computation is motivated by natural evolution. In a population of individuals that encode potential solutions to the problem, evolution is emulated according to the rule of "survival of the fittest", so that a better solution is expected from generation to generation, as in the genetic algorithm [1]. In contrast to evolutionary computation, Eberhart and Kennedy [2] developed a new population-based optimization technique, termed particle swarm optimization (PSO). PSO is a stochastic optimization algorithm that operates on a population of initial solutions to explore a continuous search space. Akin to ant colony optimization [3], the idea of PSO is inspired by the social behavior, communication and interaction observed in bird flocking and fish schooling. It has been widely recognized in the literature that PSO demonstrates consistently good performance in solving various real-valued optimization problems. Even so, the ordinary PSO still suffers from a serious problem: it sometimes fails to efficiently explore the local neighborhood of the found solution and accurately anchor the optimum. Many previous studies showed that PSO has an innate global search ability, but its local search ability varies from case to case. Shi and Eberhart [4] introduced a time-decreasing inertia factor to balance the global and local search abilities of the swarm. The hybridization of PSO has been a popular topic in recent years. For instance, SAPSO [5] combines PSO with the simulated annealing (SA) algorithm, where PSO is used for the global search and SA for the local search. Cui, Zeng and Cai [6] presented SPSO, in which PSO was combined with the Tabu technique to enhance the local search capability of PSO. Zahara, Fan and Tsai [7] presented PSO hybridized with the Nelder–Mead simplex search method (NM–PSO) to solve Gaussian curve fitting and Otsu's method in the field of image thresholding. The algorithmic development of NM–PSO was fully addressed in Fan, Liang and Zahara [8] and Fan and Zahara [9].

The previously reviewed algorithms have been shown to be quite effective in improving PSO's local search ability through hybridization. Nonetheless, PSO remains vulnerable to falling into a local optimum as the dimensionality of the search space increases dramatically. On this account, various modifications of PSO were developed, aiming to diversify the search so as to locate the global optimum of high-dimensional problems. Among them, one of the most noted cooperative PSO algorithms is the CPSO addressed by van den Bergh and Engelbrecht [10]. The search is conducted by using multiple swarms to optimize different components of the solution vector cooperatively. In addition, evolutionary operators such as selection, crossover and mutation have been used in PSO to keep the best particle mobilized, and also to improve the ability to escape from local minima [11,12]. In order to maintain diversity and to jump out of local optima, relocating the particles can be a promising strategy when most of the particles are too close to each other [13]. In their modified PSO, the search space is partitioned into several lower-dimensional subspaces and multiple swarms are applied to perform a cooperative search. On the other hand, Rada-Vilela et al. [14] proposed hybrid PSO algorithms that incorporate noise mitigation mechanisms; the performance of the algorithms was analyzed by means of a set of population statistics that measure different characteristics of the swarms throughout the search process. Taghiyeh and Xu [15] proposed an algorithm that works with a set of statistically global best positions, including one or more positions whose objective function values are statistically equivalent; that PSO algorithm is also integrated with adaptive resampling procedures to enhance its capability of coping with noisy objective functions. In this paper, we develop an enhanced PSO, termed enhanced partial search particle swarm optimization (EPS-PSO), by using a multi-swarm strategy in an attempt to improve the ordinary PSO search, which may easily get trapped in a local optimum.

## 3. Enhanced Partial Search Particle Swarm Optimization (EPS-PSO) Algorithm

In the standard PSO, the swarm may reach a stagnant state if the global best position cannot be improved over several consecutive iterations. In van den Bergh [18], this premature convergence of the standard PSO algorithm was addressed. To avoid this pitfall, a supplementary search direction may be supplied to help the swarm jump out of the local optimum. Figure 1 illustrates a scenario of how the standard PSO could be improved to escape from a local optimum. The figure displays a contour map of a two-dimensional problem having multiple local optima. The points G1 and G2 are local optima and the point G3 is the global optimum. If the current global best solution/position is very close to G1, then the swarm may contract entirely toward G1 and have difficulty escaping from that local region. If the point G2, which exhibits a better solution than the point G1, appears during the optimization steps, then information exchange between these two local regions can rejuvenate the stagnant swarm. As such, a new search direction (or path) will be developed from G1 toward G2. This type of local partial search, if regularly refreshed over time, greatly increases the chance of anchoring the global optimum, such as G3 in Figure 1.

To achieve the foregoing cooperative strategy for unconstrained optimization, an additional population is required to take up the local search assignment. This additional population can be regarded as an enhanced local explorer supplementing the ordinary PSO search; hence, the new algorithm is termed the Enhanced Partial Search Particle Swarm Optimization (EPS-PSO) algorithm. To start the EPS-PSO algorithm, the entire population is first divided into two equal-sized sub-swarms, named the traditional swarm and the co-search swarm. Every $t$ generations (or iterations), where $t$ is called the re-initialization period, the EPS-PSO algorithm re-initializes the co-search swarm. There is one exception: if the current global best position of the co-search swarm outperforms that of the traditional swarm, then the re-initialization of the co-search swarm is called off. In other words, the primary difference between the traditional swarm and the co-search swarm lies in the search topology assigned. The traditional swarm is mainly aimed at global exploration in the entire search space. By contrast, the co-search swarm is re-initialized periodically for local exploration. In the context of unconstrained optimization, the search space is restrained within the box constraints. Note also that the enhanced partial search is done within the given solution domain, and cooperation/communication occurs only when the co-search swarm exploits better outcomes than the traditional swarm. In a minimization case, let ${P}_{co-g,d}$ and ${P}_{T-g,d}$ be the global best positions of the co-search swarm and the traditional swarm, respectively. If the fitness is improved by the co-search swarm, i.e., $f({P}_{co-g,d})<f({P}_{T-g,d})$, then ${P}_{co-g,d}$ replaces ${P}_{T-g,d}$ as the global best position of the traditional swarm (see the illustration in Figure 2b). Otherwise, both swarms proceed on their own without any communication (see the illustration in Figure 2a). The co-search swarm provides a new search path to assist the particles of the traditional swarm in jumping out of the local optimum, implying that favorable search information is periodically supplied when appropriate.

For clarity of presentation, the procedure of the EPS-PSO algorithm (Algorithm 1) is formalized as below:

**Algorithm 1: The Enhanced Partial Search Particle Swarm Optimization (EPS-PSO) Algorithm**

**Define**

$m$: each swarm's population size

$n$: swarm ID number

$t$: re-initialization period

$k$: function evaluation index

**For** each particle in each swarm

Circumscribe the search space for the traditional and co-search swarms within the box constraints

Initialize position ${x}_{i}$, particle's personal best ${p}_{i}$ and velocity ${v}_{i}$ for both swarms

Perform the function evaluation for each particle and update $k$

**Endfor**

**Repeat:**

**For** each swarm $j\in [1\dots n]$:

**For** each particle $i\in [1\dots m]$:

**If** $f({x}_{i})<f({p}_{i})$ **then** ${p}_{i}\leftarrow {x}_{i}$

**If** $f({p}_{i})<f({p}_{g,d})$ **then** ${p}_{g,d}\leftarrow {p}_{i}$

**Endfor**

Perform particle velocity and position updates via Equations (1) and (2)

**Endfor**

Perform the function evaluation for both swarms and update $k$

**If** the criterion of the re-initialization period $t$ for the co-search swarm is met

**For** each particle in the co-search swarm

Circumscribe the search space within the box constraints

Re-initialize position ${x}_{i}$, particle's personal best ${p}_{i}$ and velocity ${v}_{i}$ for the co-search swarm

Perform the function evaluation for each particle and update $k$

**Endfor**

**If** $f({P}_{co-g,d})<f({P}_{T-g,d})$ **then** ${P}_{T-g,d}\leftarrow {P}_{co-g,d}$

**Until** the maximum number $k$ of function evaluations is reached
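To make the procedure concrete, the following Python sketch implements the structure of Algorithm 1 under stated assumptions: a standard inertia-weight PSO update stands in for Equations (1) and (2), the sphere function stands in for the benchmark objective, and all names (`Swarm`, `eps_pso`, etc.) are illustrative rather than from the paper. For brevity, the co-search swarm is re-initialized over the full box constraints rather than the limited (half-range) space described later.

```python
import numpy as np

def sphere(x):
    # Stand-in objective; the paper uses the benchmarks of Table 1.
    return float(np.sum(x ** 2))

class Swarm:
    """One PSO sub-swarm with personal and global bests."""
    def __init__(self, m, dim, lo, hi, rng):
        self.lo, self.hi, self.rng = lo, hi, rng
        self.x = rng.uniform(lo, hi, (m, dim))      # positions
        self.v = np.zeros((m, dim))                 # velocities
        self.p = self.x.copy()                      # personal bests
        self.pf = np.array([sphere(xi) for xi in self.x])
        g = int(np.argmin(self.pf))
        self.gbest, self.gf = self.p[g].copy(), self.pf[g]

    def step(self, w, c1=1.49, c2=1.49):
        m, dim = self.x.shape
        r1 = self.rng.uniform(size=(m, dim))
        r2 = self.rng.uniform(size=(m, dim))
        # Inertia-weight velocity update (stand-in for Equations (1)-(2)).
        self.v = (w * self.v + c1 * r1 * (self.p - self.x)
                  + c2 * r2 * (self.gbest - self.x))
        vmax = self.hi - self.lo                    # v_max clamped to the range
        self.v = np.clip(self.v, -vmax, vmax)
        self.x = np.clip(self.x + self.v, self.lo, self.hi)
        f = np.array([sphere(xi) for xi in self.x])
        improved = f < self.pf
        self.p[improved], self.pf[improved] = self.x[improved], f[improved]
        g = int(np.argmin(self.pf))
        if self.pf[g] < self.gf:
            self.gbest, self.gf = self.p[g].copy(), self.pf[g]

def eps_pso(dim=10, m=10, iters=2000, t=500, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    trad = Swarm(m, dim, lo, hi, rng)               # traditional swarm
    co = Swarm(m, dim, lo, hi, rng)                 # co-search swarm
    for it in range(iters):
        w = 1.0 - it / iters                        # linearly decreasing inertia
        trad.step(w)
        co.step(w)
        # Periodic re-initialization, called off if the co-search swarm leads.
        if (it + 1) % t == 0 and co.gf >= trad.gf:
            co = Swarm(m, dim, lo, hi, rng)
        # Communication rule: co-search gbest replaces the traditional gbest.
        if co.gf < trad.gf:
            trad.gbest, trad.gf = co.gbest.copy(), co.gf
    return trad.gf
```

On the easy sphere objective this sketch converges close to the global minimum; the re-initialization and communication lines are the only parts that distinguish it from running two independent PSO swarms.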

## 4. Experiment Setup

To evaluate the performance of the EPS-PSO algorithm, a suite of benchmark functions of varying difficulty is chosen. All the functions presented here have an objective value of 0 at their global minima. The definition of each test problem is tabulated in Table 1. Among the five benchmark functions, Rosenbrock and Quadric are unimodal functions while the others (i.e., the Ackley, Rastrigin and Griewank functions) are multimodal. However, Shang and Qiu [19] verified that the $n$-dimensional ($n=4\sim 30$) Rosenbrock function is not unimodal and has two minima, one with the optimal objective of zero and the other with an optimal objective of around 3.7–3.9. The performance of the EPS-PSO algorithm is assessed on these five 30-dimensional benchmark functions ${f}_{1}\sim {f}_{5}$, and comparisons are then made against the other algorithms: the traditional PSO [2] and four different versions of CPSO [10]. The CPSO algorithms are briefly described as follows:

CPSO-$S$: A maximally “split” swarm where the search space vector is split into 30 parts.

CPSO-${S}_{6}$: The search space vector for CPSO-${S}_{6}$ is split into only six parts (of five components each).

CPSO-$H$: A hybrid swarm, consisting of a maximally split swarm, coupled with a plain swarm.

CPSO-${H}_{6}$: A hybrid swarm, consisting of a CPSO-${S}_{6}$ swarm, coupled with a plain swarm.
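For reference, the five benchmarks can be written compactly as below. These are the standard textbook forms, so the exact scalings and domains should be checked against Table 1; in particular, "Quadric" is taken here to mean the Schwefel 1.2 "sum of prefix sums" function, a common convention rather than something stated in the text.

```python
import numpy as np

def rosenbrock(x):
    # Global minimum 0 at x = (1, ..., 1).
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (1.0 - x[:-1]) ** 2))

def quadric(x):
    # Schwefel 1.2 convention: sum of squared prefix sums; minimum 0 at origin.
    return float(np.sum(np.cumsum(x) ** 2))

def ackley(x):
    n = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
                 + 20.0 + np.e)

def rastrigin(x):
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def griewank(x):
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000.0
                 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)
```

All five evaluate to 0 at their global minima, consistent with the statement above that every test function has optimal objective value 0.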

All the experiments were conducted using three different swarm sizes of 10, 15 and 20, except the EPS-PSO algorithm, which used a population size of 20 (i.e., a sub-swarm size of 10 each for the traditional swarm and the co-search swarm). Every algorithm was halted after $2\times {10}^{5}$ function evaluations. Regarding the replication and comparison of computational experiments in applied evolutionary computing, interested readers can refer to the work by Črepinšek et al. [20]. Note again that all the benchmark functions have the same problem dimension of 30. For all the PSO-based algorithms compared in the experimental study, the two random numbers ${r}_{1}$ and ${r}_{2}$ in (1) were randomly generated from the range $(0,1)$. The value of ${v}_{\mathrm{max}}$ is clamped to the allowable range of ${x}_{i}$, and ${c}_{1}={c}_{2}=1.49$. In all the implementations except the traditional PSO, the inertia weight parameter $\omega $ decreases linearly with the number of iterations from 1 down to 0, akin to the scheme described in Shi and Eberhart [21]; the traditional PSO uses a fixed $\omega =0.72$ [4]. Moreover, to pose a more challenging optimization task, all the algorithms were further tested on the benchmark functions transformed by Salomon's coordinate rotation [22]. Prior to each individual optimization, a new rotation was independently generated, so no bias was introduced by any specific rotation. In the co-search swarm, the size of the limited search space is set to half the allowable range of each dimension.
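Salomon's method composes elementary two-dimensional rotations; as a hedged stand-in, a random orthogonal matrix obtained from the QR decomposition of a Gaussian matrix yields an unbiased rotation in the same spirit. The helper names below are illustrative, not from the paper.

```python
import numpy as np

def random_rotation(n, rng):
    # Orthonormal Q from QR of a Gaussian matrix; the sign fix makes the
    # distribution uniform (Haar) over the orthogonal group.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

def rotate(f, R):
    # Rotated benchmark: evaluate f on the rotated coordinates.
    return lambda x: f(R @ x)
```

Because R is orthogonal, a rotation-invariant function such as the sphere is unchanged, while separable functions like Rastrigin lose the axis alignment that coordinate-wise strategies (e.g., the split CPSO swarms) implicitly exploit; this is exactly why the rotated versions are harder.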

Note that all the tested cooperative PSO variants and the proposed EPS-PSO use the same initialization procedure, i.e., random generation within the box constraints. In the proposed EPS-PSO algorithm, the two sub-swarms, the traditional and co-search swarms, are initialized by random generation. Thereafter, the co-search swarm is re-initialized at every re-initialization period, but only if it has failed to locate a better global best solution than the traditional swarm; if the co-search swarm exploits better outcomes than the traditional swarm, the re-initialization is not executed. The proposed EPS-PSO algorithm is compared to the other five PSO variants on the unrotated and rotated versions of the five benchmark functions.

## 5. Computational Result and Discussion

In this section, the experimental results of the five benchmark functions obtained by running six different versions of PSO (i.e., the traditional PSO, four different versions of CPSO, and EPS-PSO) are presented. To conduct fair comparisons among the algorithms, all experiments were run 50 times, and the computational results are exhibited in Table 2, Table 3, Table 4, Table 5 and Table 6. In the tables, the second column lists the entire population size for the PSOs. The third column lists the re-initialization period of the EPS-PSO algorithm, i.e., after how many iterations the co-search swarm is re-initialized in a limited search space. Five levels of the re-initialization period are examined, t = 100, 500, 1000, 5000, 10,000. Using t = 10,000 amounts to running two traditional swarms of 10 particles each without communication throughout the search process, whereas $t=1000$ corresponds to 10 re-initializations and communications. The fourth and fifth columns list the mean function error with its 95% confidence interval after $2\times {10}^{5}$ function evaluations for the unrotated and rotated versions of the benchmark functions. Note that a "new" rotation is performed prior to every individual optimization, so each rotated function has a different functional form but the same global minimum; thus, the minimum objective value is still zero.
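The mean-error-with-95%-CI entries can be computed from 50 run results as sketched below. The t-based interval is an assumption on my part, since the text does not spell out its interval construction; the critical value 2.0096 is $t_{0.975,49}$ for 50 runs.

```python
import numpy as np

def mean_ci(errors, tcrit=2.0096):
    """Mean and 95% CI half-width; tcrit is t_{0.975, n-1} for n = 50 runs."""
    e = np.asarray(errors, dtype=float)
    half = tcrit * e.std(ddof=1) / np.sqrt(e.size)  # tcrit * s / sqrt(n)
    return e.mean(), half
```

A table cell of the form "mean ± half-width" then follows directly from the returned pair.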

In evolutionary computation, the Rosenbrock function is frequently chosen to evaluate the performance of optimization algorithms due to its highly nonlinear and non-convex properties. The global minimum hides within a long, narrow valley, which is why it is also known as the "banana" function. Reaching the valley is trivial, but accurately locating the global minimum is the real challenge. In Shang and Qiu [19], the function was shown to possess multiple local minima when the problem size exceeds 4. Table 2 shows that the Rosenbrock function in its unrotated form can be easily solved by the EPS-PSO algorithm when the re-initialization period is $t\ge 500$. The other five algorithms cannot compete with the EPS-PSO algorithm. However, when the search space is rotated, the quality of the solutions generated by the EPS-PSO algorithm deteriorates quickly, while the other five algorithms seem invariant to the coordinate rotation. Even so, the EPS-PSO algorithm still takes the lead.

Figure 3 shows the best convergence result of the algorithms among 50 independent runs in terms of the logarithm of the function error over iterations. For the unrotated case, the EPS-PSO algorithm achieves the global optimum after $1.2\times {10}^{5}$ function evaluations. For the rotated case, the EPS-PSO algorithm dominates the other algorithms after $9\times {10}^{4}$ function evaluations. Note that the CPSO result shown in Figure 3 corresponds to the best convergence result among the four versions of the CPSO algorithms over 50 independent runs. For further details about the family of CPSOs, interested readers can refer to van den Bergh and Engelbrecht [10].

The computational results of the Quadric function solved by the six algorithms are tabulated in Table 3. All the algorithms perform well for the unrotated case; the EPS-PSO algorithm converges perfectly to the optimal point without any function error. Although the Quadric function is convex and unimodal, the coordinate rotation makes it considerably more difficult to solve. The EPS-PSO algorithm is only slightly influenced by the coordinate rotation and still yields competitive performance, whereas the other five algorithms cannot provide satisfactory performance. The best convergence results among the 50 independent runs are displayed in Figure 4. For the unrotated case, the EPS-PSO algorithm converges far faster than the other algorithms. In spite of the coordinate rotation, as exhibited in Figure 4b, the EPS-PSO algorithm is still able to continuously improve the function error as iterations elapse. Surprisingly, the traditional PSO and CPSO algorithms fail to solve the rotated Quadric function, and the improvement in the function error stagnates after $6\times {10}^{4}$ function evaluations.

The Ackley function is a multimodal function with many local minima positioned on a regular grid. The computational results are shown in Table 4. For the unrotated case, the traditional PSO algorithm cannot solve the problem successfully; the four CPSO algorithms perform quite well except for the $\mathrm{CPSO}-{S}_{6}$ algorithm. The performance of the EPS-PSO algorithm deteriorates quickly as the re-initialization period increases; the global optimum is attained with $t=100$. The reason the EPS-PSO algorithm works well with $t=100,\text{}500$ is that re-initializing the co-search swarm more often helps the search jump out of local optima. For the rotated case, all six algorithms are seriously affected by the coordinate rotation except for the $\mathrm{CPSO}-{H}_{6}$ algorithm with $s=20$ and the EPS-PSO algorithm with $t=500$. As before, the performance of the EPS-PSO algorithm becomes worse as $t$ increases. The best convergence results are plotted in Figure 5. For the unrotated case, the EPS-PSO algorithm converges to the global optimum after $4\times {10}^{4}$ function evaluations. For the rotated case, the traditional PSO algorithm is trapped in a local solution, as can be seen from Figure 5b; the CPSO and EPS-PSO algorithms perform equally well.

The Rastrigin function is a typical nonlinear multimodal function containing a single global optimum and a large number of local minima, which makes the problem more difficult to solve. The computational results are listed in Table 5. For the unrotated case, only the $\mathrm{CPSO}-S$ algorithm, the $\mathrm{CPSO}-H$ algorithm and the EPS-PSO algorithm with $t=500,\text{}1000$ can locate the global optimum. Once the search space is rotated, only the EPS-PSO algorithm with $t=500$ can produce satisfactory performance. The best convergence plots are shown in Figure 6.

Table 6 shows that the EPS-PSO algorithm performs better than the other PSO algorithms in all experiments on the unrotated Griewank function. For the rotated case, the EPS-PSO algorithm still performs best among the studied algorithms. The best convergence performance is plotted in Figure 7. It can clearly be seen from the figure that the traditional PSO and CPSO algorithms show almost no improvement in the function error after $4\times {10}^{4}$ function evaluations due to the multimodal structure. The EPS-PSO algorithm converges rapidly from the outset, but its convergence slows down after $4\times {10}^{4}$ function evaluations.

In Table 7, the performance of the proposed EPS-PSO is compared against the other PSO algorithms to verify its effectiveness. In the table, a t-test was used to test whether the proposed EPS-PSO algorithm significantly outperforms the other algorithms in the minimization case. In terms of the two-sample t-test, the null and alternative hypotheses on the difference of the achieved objective functions are constructed as ${H}_{0}:\text{}{\mu}_{EPS-PSO}-{\mu}_{other\text{}PSO}\ge 0$ and ${H}_{1}:\text{}{\mu}_{EPS-PSO}-{\mu}_{other\text{}PSO}<0$, respectively. A p-value of the test statistic less than 0.05 indicates that the proposed EPS-PSO algorithm generates a significantly smaller objective function value than the compared PSO variant, which is marked "○". Conversely, a p-value greater than 0.05 indicates that the EPS-PSO algorithm does not return a significantly smaller objective function value than the compared PSO variant, which is marked "×". Overall, the EPS-PSO algorithm exhibits significantly better performance than the other five algorithms except on the Ackley and Rastrigin functions in the unrotated case. For the rotated case, the EPS-PSO algorithm overwhelmingly outperforms the other five PSO variants.
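The one-sided comparison can be reproduced with a Welch two-sample t statistic. The sketch below hedges on the exact test variant (the text does not state whether equal variances are assumed) and uses an approximate large-sample critical value rather than an exact p-value; all function names are illustrative.

```python
import numpy as np

def one_sided_t(a, b):
    # Welch t statistic for H0: mu_a - mu_b >= 0 vs H1: mu_a - mu_b < 0.
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

def eps_pso_wins(eps_errors, other_errors, crit=-1.66):
    # Reject H0 at alpha = 0.05 when t falls below the critical value;
    # for two 50-run samples the Welch dof is large, so -1.66 is a
    # reasonable approximate cutoff (this would earn a "circle" mark).
    return one_sided_t(eps_errors, other_errors) < crit
```

Feeding the 50 final errors of EPS-PSO and of a competing variant into `eps_pso_wins` reproduces the circle/cross decision rule of Table 7 under these assumptions.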

As evidenced by the foregoing computational results, the enhanced partial search (EPS) mechanism has proved to be a valuable addition that helps prevent the particles from being trapped in local optima, more so than using the traditional PSO search alone. The re-initialization period $t$ controls how frequently the co-search swarm is re-initialized. Five levels of $t$ were tested on the five unrotated and rotated benchmark functions, and the EPS-PSO algorithm using a period of 500 iterations produced the most reliable performance. To summarize, the better results obtained by the proposed EPS-PSO algorithm are attributed largely to (i) the topology of two sub-swarms between which communication is carried out periodically, and (ii) the re-initialization of the co-search swarm, which is added to diversify local exploitation.

## 6. Conclusions and Directions of Future Research

This paper presents a cooperative multi-swarm structure that enhances the searching ability of the traditional PSO algorithm. The new search mechanism is called the enhanced partial search (EPS), and the resulting algorithm is dubbed EPS-PSO. The EPS is a re-randomization strategy that works on an auxiliary swarm, termed the co-search swarm. The intention is to mobilize the particles so that the swarm has more chances to explore different search areas. Moreover, the EPS mechanism allows the co-search swarm to share information with the traditional swarm during the evaluation process. Five benchmark functions in unrotated and rotated versions are used as the test bed to compare the EPS-PSO algorithm to the traditional PSO and four different cooperative PSO algorithms. Five levels of the re-initialization period are examined, $t=100,\text{}500,\text{}1000,\text{}5000,\text{}\mathrm{10,000}$. The EPS-PSO algorithm performs best when the re-initialization period is set to 500 iterations. The comparison results show that the proposed algorithm improves upon the traditional PSO algorithm and outperforms the other four cooperative PSO algorithms. In other words, the EPS is shown to be a simple but effective way to help prevent the swarm from becoming stuck in local optima.

Built upon the current research, several PSO-related research topics still deserve further scrutiny. The first potential investigation is to extend the proposed EPS-PSO algorithm to constrained optimization; in this case, a mechanism to handle constraint violations should be devised and comprehensively tested. A further study could compare the proposed EPS-PSO algorithm to PSO variants equipped with a swarm collapse detection mechanism. Furthermore, the co-search swarm strategy may also be useful for PSO dealing with noisy function evaluations.