Learning Competitive Swarm Optimization

Particle swarm optimization (PSO) is a popular method widely used to solve different optimization problems. Unfortunately, in the case of complex multidimensional problems, PSO encounters difficulties associated with excessive loss of population diversity and exploration ability. This deteriorates the effectiveness of the method and causes premature convergence. In order to prevent these inconveniences, in this paper, a learning competitive swarm optimization algorithm (LCSO) based on the particle swarm optimization method and a competition mechanism is proposed. In the first phase of LCSO, the swarm is divided into sub-swarms, each of which can work in parallel. In each sub-swarm, particles participate in a tournament. The participants of the tournament update their knowledge by learning from their competitors. In the second phase, information is exchanged between the sub-swarms. The new algorithm was examined on a set of test functions. To evaluate the effectiveness of the proposed LCSO, the test results were compared with those achieved by the competitive swarm optimizer (CSO), comprehensive learning particle swarm optimizer (CLPSO), PSO, the fully informed particle swarm (FIPS), the covariance matrix adaptation evolution strategy (CMA-ES) and heterogeneous comprehensive learning particle swarm optimization (HCLPSO). The experimental results indicate that the proposed approach enhances the entropy of the particle swarm and improves the search process. Moreover, the LCSO algorithm is statistically significantly more efficient than the other tested methods.


Introduction
Swarm intelligence (SI) is a branch of artificial intelligence (AI) based on the social behavior of simple organisms occurring in natural environments [1]. The source of its inspiration was observations of the collective behavior of animals such as birds, fish, bees, bacteria, ants, squirrels and others [2][3][4][5][6][7][8]. There are a number of methods based on swarm intelligence. One of them is particle swarm optimization (PSO).
PSO was developed by Kennedy and Eberhart [9] as a stochastic method of optimization. The standard PSO is based on a swarm of particles, each of which wanders through the search space to find better solutions. The key to the success of the method is the ability to share information found by population individuals. Due to many advantages (simplicity, easy implementation, lack of coding and special operators) [4], the PSO method has been widely applied in solving various optimization problems, including control systems [10], prediction problems [11], image classification [12], energy management [13], bilevel programming problems [14,15], antenna design [16], scheduling problems [17,18], electromagnetism [19,20] and many others.
Unfortunately, in the case of complex, high-dimensional optimization problems with many local optima, PSO can encounter difficulties associated with preserving sufficient particle diversity and maintaining a balance between exploration and exploitation. This leads to premature convergence and renders the results unsatisfactory. In order to avoid these inconveniences, various improved versions of PSO have been developed. The improvements rely on [21]:
• Adjustment of parameters. Shi and Eberhart [22] indicated that the performance of the PSO method depends primarily on the inertia weight. In their opinion, the best results are obtained with an inertia weight that decreases linearly from 0.9 to 0.4. Zhang et al. [23] and Niu et al. [24] proposed using a random inertia weight. In contrast, Clerc [25] suggested that all coefficients should rather be constant and proved that an inertia weight w = 0.729 and factors c1 = c2 = 1.494 can increase the rate of convergence. Venter and Sobieszczański [26] found that the PSO method is more effective when the acceleration coefficients are constant but different. According to them, the social coefficient should be greater than the cognitive one, and they showed that c2 = 2.5 and c1 = 1.5 produce superior performance. Nonlinear, dynamically changing coefficients were proposed by Borowska [27]. In this approach, the values of the coefficients were affected by the performance of PSO and the number of iterations. In order to control the search process more efficiently, Ratnaweera et al. [28] recommended time-varying acceleration coefficients (TVACs). An approach based on fuzzy systems was proposed by Chen [29].
• Topology. Topological structure has a great influence on the performance of the PSO algorithm. According to Kennedy and Mendes [30], a proper topology significantly improves the exploration ability of PSO. Lin et al. [31] and Borowska [32] indicated that the ring topology can help maintain swarm diversity and improve the algorithm's adaptability. An approach based on a multi-swarm structure was proposed by Chen et al. [33] and Niu et al. [24]. In turn, a two-layer cascading structure was recommended by Gong et al. [34]. To alleviate premature convergence, Mendes et al. [35] introduced a fully informed swarm in which particles are updated based on the best locations of their neighbors. A PSO variant with adaptive time-varying topology connectivity (PSO-ATVTC) was developed by Lim et al. [36]. Carvalho et al. [37] proposed a particle topology based on the clan structure. The dynamic Clan PSO topology was described by Bastos-Filho et al. [38]. Shen et al. [39] proposed a multi-stage search strategy supplemented by mutual repulsion and attraction among particles. The proposed algorithm increases the entropy of the particle population and leads to a more balanced search process.

• Combining PSO with other methods. In order to obtain higher-quality solutions and enhance the performance of PSO, many researchers merge two or more different methods or their advantageous elements. Ali et al. [40] and Sharma et al. [41] combined PSO with genetic operators. A modified particle swarm optimization algorithm with a simulated annealing (SA) strategy was proposed by Shieh et al. [42]. A PSO method with ant colony optimization (ACO) was developed by Holden et al. [43]. Cooperation of many swarms and four other methods for improving the efficiency of PSO was applied by Liu and Zhou [44]. The authors combined multipopulation-based particle swarm optimization not only with the simulated annealing method but also with co-evolution theory, quantum behavior theory and a mutation strategy. A different approach was presented by Cheng et al. [45]. To improve the exploration ability of PSO, they used a multi-swarm framework combining a feedback mechanism with a convergence strategy and a mutation strategy. The proposed approach helps reach a balance between exploration and exploitation and reduces the risk of premature convergence.
• Adaptation of the learning strategy. This approach allows particles to acquire knowledge from high-quality exemplars. In order to increase the adaptability of PSO, Ye et al. [46] developed a dynamic learning strategy. A comprehensive learning strategy based on historical knowledge about particle positions was recommended by Liang et al. [47] and further developed by Lin et al. [48]. Instead of individual learning of particles based on their historical knowledge, Cheng and Jin [49] introduced a social learning mechanism that sorts the swarm and lets particles learn from demonstrators (any better particles) of the current swarm. The method turned out to be effective and computationally efficient.
A learning strategy with operators of the genetic algorithm (GA) and a modified updating equation based on exemplars was proposed by Gong et al. [34].
To enhance diversity and improve the efficiency of PSO, Lin et al. [31] merged PSO with genetic operators and also combined them with a global learning strategy and a ring topology. A learning strategy with genetic operators and an interlaced ring topology was also proposed by Borowska [32]. To improve the search process, Niu et al. [50] recommended applying learning multi-swarm PSO based on symbiosis.
In order to improve the performance of the particle swarm optimization method, Cheng et al. [21] introduced a competitive swarm optimizer (CSO) based on PSO. In CSO, neither the personal best positions of particles nor the global best position is required. Instead, a simple pairwise competition mechanism within a single swarm was introduced. Particles do not need knowledge about their historical positions, as they learn only from the winner.
Unfortunately, although the CSO method has better search capability than traditional PSO, it does not always perform well or obtain the expected results for complex optimization problems. The difficulties are associated with losing population diversity too quickly and maintaining a balance between exploration and exploitation. This deteriorates the effectiveness of the method and leads to premature convergence.
To reduce these inconveniences, ensure the diversity of particles and limit the risk of getting stuck in a local optimum, a new learning competitive swarm optimization algorithm called LCSO is presented in this paper. The proposed approach is based on the particle swarm optimization (PSO) method and a competition concept. In LCSO, particles do not use information about their personal best positions or the global best particle in the swarm; instead, a competition mechanism is applied, but in a different way than in CSO. In LCSO, the swarm is divided into sub-swarms, each of which can work independently. In each sub-swarm, three particles participate in a tournament. The participants who lost the tournament learn from their competitors. The winners take part in a tournament between the sub-swarms. The new algorithm was examined on a set of test functions. To evaluate the effectiveness of the proposed LCSO, the test results were compared with those achieved by the competitive swarm optimizer (CSO) [21], comprehensive learning particle swarm optimizer (CLPSO) [47], PSO [51], the fully informed particle swarm (FIPS) [35], the covariance matrix adaptation evolution strategy (CMA-ES) [52] and heterogeneous comprehensive learning particle swarm optimization (HCLPSO) [53].

The PSO Method
As mentioned in the introduction, particle swarm optimization is a method based on swarm intelligence and the collective behavior of animal societies. It was first proposed by Kennedy and Eberhart [9] as a new, simple optimization tool. Because of its effectiveness, it has been regarded as a powerful method of optimization and has become a competitive alternative to the genetic algorithm and other artificial intelligence tools [1].
In PSO, the optimization process is performed by a population of individuals called a swarm of particles. The swarm consists of N particles that move in the D-dimensional search space. Each particle within the swarm can be considered a point of the search space determined by two vectors: X_i = (x_i1, x_i2, . . . , x_iD), named a position vector, and V_i = (v_i1, v_i2, . . . , v_iD), named a velocity vector. Both vectors are randomly generated at the first step of the PSO algorithm. Each particle roams through the search space according to its velocity vector V_i and remembers both its personal best position pbest_i = (pbest_i1, pbest_i2, . . . , pbest_iD), found along its way, and the best position gbest = (gbest_1, gbest_2, . . . , gbest_D), found in the entire swarm. The particles are also evaluated on their quality, which is measured based on the objective function of the optimized task. The particle's movement in the search space is performed according to the velocity equation determined as follows:

v_id(t + 1) = w v_id(t) + c_1 r_1 (pbest_id(t) − x_id(t)) + c_2 r_2 (gbest_d(t) − x_id(t))    (1)

The change of the particle position is realized by adding the velocity vector to its position vector according to the given equation:

x_id(t + 1) = x_id(t) + v_id(t + 1)    (2)

where t is the iteration number, w is the inertia weight (which determines the impact of the particle's previous velocity on its current velocity), the factors c_1 and c_2 are acceleration coefficients, and r_1 and r_2 represent two random numbers uniformly distributed between 0 and 1. The pseudo code of the standard PSO method is presented in Algorithm 1.
Algorithm 1 Pseudo code of the PSO algorithm.
Determine the size of the swarm
for j = 1 to size of the swarm do
    Generate an initial position and velocity of the particle
    Evaluate the particle
    Determine the pbest (personal best position) of the particle
end
Select the gbest (the best position) found in the swarm
while termination criterion is not met do
    for j = 1 to size of the swarm do
        Update the velocity and position of the particle according to Equations (1) and (2)
        Evaluate the new position of the particle
        Update the pbest (personal best position) of the particle
    end
    Update the gbest (the best position) found in the swarm
end
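The update rules of Equations (1) and (2), together with the pbest/gbest bookkeeping of Algorithm 1, can be sketched as follows. This is an illustrative Python sketch, not the authors' C implementation; the function names, the sphere test function and the parameter choices w = 0.729, c1 = c2 = 1.494 (Clerc's constants mentioned in the introduction) are assumptions made for the example.

```python
import random

def sphere(x):
    """Benchmark objective: f(x) = sum of squares, with minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def pso(f, dim=5, n=20, iters=200, w=0.729, c1=1.494, c2=1.494, seed=1):
    rng = random.Random(seed)
    # Randomly generate initial positions and velocities (first step of Algorithm 1)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    pbest = [x[:] for x in X]                  # personal best positions
    pval = [f(x) for x in X]                   # personal best fitness values
    g = min(range(n), key=lambda i: pval[i])   # index of the global best
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Equation (1): inertia + cognitive + social components
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # Equation (2): move the particle
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pval[i]:                   # update pbest, then gbest if improved
                pbest[i], pval[i] = X[i][:], fx
                if fx < gval:
                    gbest, gval = X[i][:], fx
    return gbest, gval
```

With these settings, the swarm typically drives the sphere function close to zero within a few hundred iterations.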

The Proposed LCSO Algorithm
The proposed LCSO (learning competitive swarm optimization) is a two-stage learning-based method that combines particle swarm optimization with a competition concept. In the first stage, particles take part in tournaments within sub-swarms. In the second stage, tournaments are held between sub-swarms. Although LCSO is based on PSO, the knowledge acquisition mechanism is different. In LCSO, particles do not store any historical information about either their personal best position or the global best particle in the swarm. Instead, particles derive knowledge from their better competitors.
In the first stage of LCSO, the swarm of N particles is divided into p sub-swarms. In every iteration, each particle within a sub-swarm participates in a tournament. In the sub-swarms, the tournaments are organized independently. Three randomly selected particles participate in one tournament, and a particle takes part in a competition only once per iteration. The best particle of the tournament (the winner) goes to the next iteration without updating. The particle that finished the tournament in second place (the runner-up) learns from the winner according to the equations:

V_s(t + 1) = r_1 V_s(t) + r_2 (X_w(t) − X_s(t))    (3)
X_s(t + 1) = X_s(t) + V_s(t + 1)    (4)

The particle that took the last place (the loser) learns from both of its competitors according to the following formulas:

V_l(t + 1) = r_1 V_l(t) + r_2 (X_w(t) − X_l(t)) + r_3 (X_s(t) − X_l(t))    (5)
X_l(t + 1) = X_l(t) + V_l(t + 1)    (6)

where X_w is the position of the best particle (the winner), V_s and X_s are the velocity and position of the particle that took second place (the runner-up), V_l and X_l are the velocity and position of the particle that took last place (the loser), t is the iteration number, and r_1, r_2 and r_3 are randomly generated numbers in the range [0, 1]. This means that, in a sub-swarm of M particles, only two out of every three particles are updated, whereas one in three particles of each sub-swarm goes to the next iteration without updating. Then the tournaments are organized between sub-swarms. One particle is selected from the set of winners of each sub-swarm, and the selected particles then compete in tournaments of three. The runner-up of such a tournament learns from the winner according to Equations (3) and (4), and the loser learns from its competitors according to Equations (5) and (6). Next, the tournaments are organized again in the sub-swarms. Figure 1 illustrates the learning concept of LCSO. Among the particles that participate in the competition between sub-swarms, again only two out of every three are updated, and one-third go to the next iteration without updating.
The proposed sub-swarm topology limits the excessive loss of swarm diversity and helps maintain a balance between exploration and exploitation. The competition strategy within a sub-swarm promotes the interaction among its particles, and the introduced learning strategy ensures the exchange of information between them. The tournament strategy among the winners ensures the sharing of information between sub-swarms. In this way, a sub-swarm that falls into a local optimum has a chance to jump out of it by learning from the winners of other sub-swarms.
The pseudo code of the LCSO method is summarized in Algorithm 2.
Algorithm 2 Pseudo code of the LCSO algorithm. FEs is the number of fitness evaluations. The termination condition is the maximum number of fitness evaluations (maxFEs).
Determine the size of the swarm
Determine the number of sub-swarms
Determine the number of particles in the sub-swarms
Randomly initialize the position and velocity of the particles in the sub-swarms
t = 0
while the termination criterion (FEs ≤ maxFEs) is not met do
    Evaluate the fitness of the particles
    In parallel, in each sub-swarm, organize tournaments independently:
    while sub-swarm ≠ Ø do
        Randomly select three particles, compare their fitness and determine the winner X_w, the runner-up X_s and the loser X_l
        Update the runner-up's velocity and position according to (3) and (4)
        Update the loser's velocity and position according to (5) and (6)
    end
    Organize a tournament between sub-swarms:
    Randomly select one particle from the winners of each sub-swarm
    while the set of selected winners ≠ Ø do
        Randomly select three particles and compare their fitness to determine the winner X_w, the runner-up X_s and the loser X_l
        Update the runner-up's velocity and position according to (3) and (4)
        Update the loser's velocity and position according to (5) and (6)
    end
    t = t + 1
end
Output the best solution
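The two-stage tournament loop of Algorithm 2 can be sketched as follows. This is an illustrative Python sketch rather than the authors' C implementation: the helper names (`tournament`, `lcso`), the sphere objective and all parameter values are assumptions, and, for simplicity, all stage-1 winners (rather than one selected winner per sub-swarm) compete in trios in stage 2. In each trio, the winner is left unchanged, the runner-up is pulled toward the winner, and the loser is pulled toward both better competitors, following Equations (3)-(6).

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def tournament(trio, f, rng):
    """Rank three particles by fitness. The winner is not updated; the runner-up
    learns from the winner (Eqs. (3)-(4)) and the loser from both (Eqs. (5)-(6))."""
    trio.sort(key=lambda p: f(p["x"]))
    win, run, lose = trio
    run_old = run["x"][:]  # the loser learns from the runner-up's pre-update position
    for d in range(len(win["x"])):
        r1, r2 = rng.random(), rng.random()
        run["v"][d] = r1 * run["v"][d] + r2 * (win["x"][d] - run["x"][d])
        run["x"][d] += run["v"][d]
        r1, r2, r3 = rng.random(), rng.random(), rng.random()
        lose["v"][d] = (r1 * lose["v"][d]
                        + r2 * (win["x"][d] - lose["x"][d])
                        + r3 * (run_old[d] - lose["x"][d]))
        lose["x"][d] += lose["v"][d]
    return win

def lcso(f, dim=10, subswarms=3, size=9, iters=300, seed=1):
    rng = random.Random(seed)
    swarm = [[{"x": [rng.uniform(-5, 5) for _ in range(dim)], "v": [0.0] * dim}
              for _ in range(size)] for _ in range(subswarms)]
    init = min(f(p["x"]) for sub in swarm for p in sub)
    for _ in range(iters):
        winners = []
        for sub in swarm:                     # stage 1: tournaments inside each sub-swarm
            rng.shuffle(sub)
            for k in range(0, size, 3):
                winners.append(tournament(sub[k:k + 3], f, rng))
        rng.shuffle(winners)                  # stage 2: tournaments among the winners
        for k in range(0, len(winners) - 2, 3):
            tournament(winners[k:k + 3], f, rng)
    final = min(f(p["x"]) for sub in swarm for p in sub)
    return init, final
```

Note that the best particle in the swarm always wins its tournaments and is never updated, so the best fitness found never worsens from one iteration to the next.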

Results
The tests of the proposed algorithm were performed on a set of benchmark functions, sixteen of which are presented in this article and described in Table 1.
The experiments were conducted on a PC with an Intel Core i7-3632QM 2.2 GHz CPU and Microsoft Windows 10 64-bit operating system. The LCSO was implemented in C with Microsoft Visual Studio 2010 Enterprise.
The parameter settings of the comparison algorithms are listed in Table 2. The details of algorithms used for comparison can be found in [21,35,47,51,53].
For all tested functions, the experiments were conducted with dimension size D = 30. The population size of all algorithms was N = 72, the maximum number of evaluations was 10,000, and the number of sub-swarms was p = 3. The exemplary results of the tests are summarized in Table 3; the presented values were averaged over 32 runs, and the best results are shown in bold. To facilitate understanding of the results, a comparison of the algorithms' effectiveness on a logarithmic scale is shown in Figures 2 and 3. In order to compare the convergence rates of the tested algorithms, convergence curves on six representative functions were plotted and are presented in Figures 4 and 5. To evaluate the effectiveness of the proposed method, a statistical t-test was conducted. For all comparisons, a confidence level of 0.05 was used. The t-values between LCSO and the other considered algorithms for 30 dimensions are presented in Table 4.
The results of the tests indicate that the LCSO method with the proposed approach is effective, as it achieved superior performance over the other tested algorithms. For the unimodal functions, LCSO obtained the best results (although its standard deviation values were not always the lowest) among all the algorithms; only for f4 were the results of LCSO and CSO comparable, but LCSO turned out to be more stable. The weakest results were obtained by CLPSO for the f1 function, by FIPS for f2 and f3, and by PSO for f5. For the multimodal functions, in the case of f6, f9, f10, f11, f12, f14, f15 and f16, the LCSO method also achieved the best outcomes and was more stable than the other tested algorithms. In the case of the f7 and f13 functions, LCSO achieved higher performance than the other methods but was not as stable. For f7, the HCLPSO method turned out to be the most stable; for f13, the CSO method was more stable than LCSO. In the case of the f8 function, LCSO performed worse than CSO and HCLPSO but much better than CLPSO, PSO and FIPS. In the case of the f16 function, the results gained by CSO were also much better than those achieved by CLPSO, PSO, FIPS and even HCLPSO, but not as good as those of LCSO. In all the tests, the least effective method was PSO, as it obtained the poorest results; its particles moved irregularly and had a tendency to fall and stop in local minima. The convergence curves presented in Figures 4 and 5 indicate that, for unimodal functions, LCSO converged more slowly in the early stage than most of the compared methods. At this stage, CSO was found to converge at the highest rate. However, after about 3000 iterations, LCSO surpasses CSO and the other tested methods. Regarding multimodal functions, at an early stage, LCSO also converges more slowly than CSO but faster than CLPSO, PSO, FIPS and HCLPSO.
In the middle stage, LCSO surpasses CSO and converges faster than the other algorithms (except on f8), although it needs as many as 6000 iterations to achieve this. For the f8 function, however, LCSO slows down in the middle stage and ultimately converges more slowly than CSO and HCLPSO. The obtained results confirm that the proposed method is effective and efficient. The advantages of the proposed LCSO method include:

• The use of sub-swarms helps maintain the diversity of the population and keeps the balance between global exploration and local exploitation;
• Particles learning from the winners can effectively search the space for better positions;
• Good information found by a sub-swarm is not lost;
• Particles can learn from useful information found by other sub-swarms;
• In each iteration, the position and velocity of only two out of every three particles are updated, which significantly reduces the cost of computations;
• Particles do not need to remember their personal best positions; instead, the competition mechanism is applied;
• LCSO can obtain better results and faster convergence than the other tested algorithms.
Based on the t-test results summarized in Table 4, it can be concluded that the performance of the proposed learning competitive swarm optimization (LCSO) algorithm is statistically significantly better than that of the other tested methods at the 95% confidence level.
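The paper does not state which t-test variant was used; as one hedged possibility, the t-value between two algorithms can be computed from their per-run results with Welch's unequal-variance formula. The sketch below is illustrative (the function name and sample data are assumptions):

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances:
    t = (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # unbiased sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)
```

Here `a` and `b` would hold, e.g., the 32 per-run objective values of LCSO and of a competitor; for minimization, a large negative t-value favors the first sample.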
In order to assess the effectiveness of the LCSO method in higher-dimensional search spaces, functions with dimensions of 100, 500 and 1000 were investigated. The tests were performed with a population of N = 99 particles, and the maximum number of evaluations was 80,000. The results of the tests were averaged over 10 runs and are presented in Table 5. It should be noted that, despite the increase in the search space dimensions, the proposed LCSO method is still effective for functions f1, f6, f7, f8, f9, f12, f13 and f16. An increase in the number of dimensions requires an increase in the size of the population; it also entails an increase in computation costs.

Conclusions
In this paper, a new learning competitive swarm optimization algorithm (LCSO) based on the particle swarm optimization method and competition mechanism was proposed. In LCSO, particles do not have to remember their personal best positions; instead of that, they participate in the tournament. The tournaments take place in sub-swarms as well as between them. The participants of the tournament update their positions by learning from their competitors.
The efficiency of the LCSO method was tested on a set of benchmark functions. The test results were then compared with those of six other methods: CSO, CLPSO, PSO, FIPS, CMA-ES and HCLPSO. The obtained results indicate that the proposed LCSO is faster and more effective than the other examined algorithms. The competition mechanism used in LCSO helps maintain particle diversity and improves the exploration ability of the swarm.
Future work will focus on the implementation of the LCSO algorithm in real-world optimization problems. The application of the LCSO method in machine learning and image compression is also a promising area of research.